
[{"content":"","date":"April 16, 2026","externalUrl":null,"permalink":"/categories/aws/","section":"Categories","summary":"","title":"AWS","type":"categories"},{"content":"","date":"April 16, 2026","externalUrl":null,"permalink":"/blog/","section":"Blog","summary":"","title":"Blog","type":"blog"},{"content":"","date":"April 16, 2026","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":" The setup # I\u0026rsquo;m an Apple person. Notes for thinking, Reminders for doing. It\u0026rsquo;s simple, it syncs everywhere, and it stays out of my way. I\u0026rsquo;ve used Reminders for years to track what needs to happen and when.\nBut as I started spending more time in AI-powered development environments like Kiro, I noticed a friction point: every time I thought of a task while coding, I had to switch apps to add it to Reminders. Context switch, lose focus, come back, try to remember where I was. The classic productivity killer.\nWhat if I could just tell Kiro \u0026ldquo;remind me to follow up with the customer on Thursday\u0026rdquo; and have it show up in Apple Reminders?\nAttempt 1: An MCP server # My first approach was building an MCP (Model Context Protocol) server. MCP is a standard that lets AI tools connect to external systems — databases, APIs, local tools — through a structured interface. It\u0026rsquo;s powerful and well-supported in Kiro.\nThe MCP server worked great. I could create reminders, list them, mark them complete, all from within Kiro. The integration was solid.\nBut there was a catch.\nThe context window problem # MCP servers are always loaded. When Kiro starts a session, it loads the tool definitions for every configured MCP server into the context window. 
That means even in a session where I\u0026rsquo;m purely writing code and don\u0026rsquo;t need Reminders at all, the MCP server\u0026rsquo;s tool schemas are sitting there taking up space.\nFor a single MCP server, that\u0026rsquo;s not a big deal. But if you\u0026rsquo;re like me and have several MCP servers configured, the context window overhead adds up. Every token spent on tool definitions is a token not available for your actual work.\nI wanted Reminders integration to be available when I need it, not consuming resources when I don\u0026rsquo;t.\nAttempt 2: A Kiro skill # Kiro has a concept called skills — markdown files that contain instructions, context, and tool definitions that are only loaded when activated. Think of it as on-demand capability instead of always-on.\nThe key insight: Apple Reminders is scriptable via AppleScript, and Kiro can execute shell commands. So instead of a full MCP server with a runtime process, I could write a skill that teaches Kiro how to use osascript to interact with Reminders directly.\nHere is an outline of the SKILL.md:
---
name: apple-reminders
description: Manage Apple Reminders using osascript. Use when the user asks about tasks, reminders, to-dos, or personal task management.
---
# Apple Reminders
## Available Lists
Get all list names:
\\```bash
osascript -e \u0026#39;tell application \u0026#34;Reminders\u0026#34; to get name of every list\u0026#39;
\\```
## Add a Reminder
Create a reminder. Only `name` is required. `body` and `due date` are optional. Without due date:
\\```bash
osascript -e \u0026#39;
tell application \u0026#34;Reminders\u0026#34;
  tell list \u0026#34;LIST_NAME\u0026#34;
    make new reminder with properties {name:\u0026#34;TITLE\u0026#34;, body:\u0026#34;NOTES\u0026#34;}
  end tell
end tell\u0026#39;
\\```
## Error Handling
- If a list is not found, AppleScript returns error -1728. Tell the user the list doesn\u0026#39;t exist and show available lists. 
- If a reminder is not found, tell the user and list current reminders in that list.
- Name matching is exact. If the user gives a partial name, list reminders first and confirm which one.
The skill file contains:
- Instructions on how to interact with Apple Reminders
- AppleScript commands for creating, listing, completing, and deleting reminders
- Context about how Reminders organizes data (lists, due dates, priorities)
When I need it, I activate the skill. When I don\u0026rsquo;t, it\u0026rsquo;s just a markdown file on disk — zero context window impact.\nThe comparison #
| | MCP Server | Kiro Skill |
| --- | --- | --- |
| Always loaded | Yes — tool schemas in every session | No — activated on demand |
| Context window cost | Constant overhead | Zero when not in use |
| Response speed | Fast | Fast (AppleScript is near-instant) |
| Setup complexity | Node.js/Python runtime, config in mcp.json | Single markdown file |
| Maintenance | Dependencies, versioning, process management | Edit a text file |
| Capability | Full programmatic access | Same — AppleScript covers all the operations |
For a tool I use in maybe 1 out of 5 sessions, the skill approach is clearly better. The MCP server would make more sense for something I need in every session, like a database connection or a deployment tool.\nWhen to use which # Use an MCP server when:
- You need the tool in most or all sessions
- The integration requires complex state management or long-running connections
- You\u0026rsquo;re connecting to external APIs that need authentication flows
- Multiple people on your team need the same integration
Use a Kiro skill when:
- You need the tool occasionally, not every session
- The underlying system is scriptable via CLI or shell commands
- You want zero overhead when the tool isn\u0026rsquo;t active
- The integration is personal (your machine, your apps, your workflow)
The result # The skill works exactly as well as the MCP server did, but with none of the overhead. 
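To make the shell-command approach concrete, here is a minimal Python sketch of how one of the skill\u0026rsquo;s operations (marking a reminder complete) can be assembled into an osascript invocation. The helper name and structure are my own illustration, not part of the actual skill; the AppleScript mirrors the patterns shown in the SKILL.md outline.

```python
# Illustrative sketch (hypothetical helper, not the skill's own code):
# assemble the argv that would be passed to osascript to mark a
# reminder complete. Nothing is executed here.

def complete_reminder_cmd(list_name: str, title: str) -> list:
    """Build the osascript command for completing a reminder by exact name."""
    script = (
        f'tell application "Reminders" to tell list "{list_name}" '
        f'to set completed of (first reminder whose name is "{title}") to true'
    )
    return ["osascript", "-e", script]

cmd = complete_reminder_cmd("Work", "Follow up with the customer")
# On macOS this could then be run with subprocess.run(cmd, check=True).
```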
I can say \u0026ldquo;add a reminder to follow up with the customer next Thursday\u0026rdquo; and it shows up in Apple Reminders within a second. When I\u0026rsquo;m in a pure coding session, the skill sits dormant and my full context window is available for code.\nSometimes the simpler approach is the better one. Not every integration needs a server.\n","date":"April 16, 2026","externalUrl":null,"permalink":"/2026/04/from-mcp-server-to-kiro-skill-managing-apple-reminders-with-ai/","section":"Blog","summary":"The setup # I’m an Apple person. Notes for thinking, Reminders for doing. It’s simple, it syncs everywhere, and it stays out of my way. I’ve used Reminders for years to track what needs to happen and when.\nBut as I started spending more time in AI-powered development environments like Kiro, I noticed a friction point: every time I thought of a task while coding, I had to switch apps to add it to Reminders. Context switch, lose focus, come back, try to remember where I was. The classic productivity killer.\n","title":"From MCP Server to Kiro Skill: Managing Apple Reminders with AI","type":"blog"},{"content":"","date":"April 16, 2026","externalUrl":null,"permalink":"/","section":"retgits.com","summary":"","title":"retgits.com","type":"page"},{"content":"Customers are building large data lakes on Amazon Web Services (AWS) to democratize their access to data. As a result of that, data governance becomes increasingly important. Customers need to know data is accessed at the right time, by the right people, and in the right context. To implement fine-grained data access permissions, customers use AWS Lake Formation. AWS Lake Formation provides data access controls for AWS services like Amazon Redshift, Amazon Athena, and Amazon EMR. It also offers data access controls for AWS Partners like Dremio.\nDremio offers an Open Data Lakehouse platform that accelerates data analytics across diverse data sources. 
It provides a high-performance SQL query engine that efficiently queries data from cloud storage, databases, and various file formats. With its distributed execution and advanced caching techniques, Dremio delivers ultra-low latency query performance on large datasets. Dremio has recently added support for the AWS Lake Formation data governance framework for secure and controlled data access. This integration ensures Dremio is compliant with permissions on Data Catalog resources, which include tag-based access control, data filtering, and cell-level security permissions established in AWS Lake Formation.\nThis post details how a financial services customer leveraged Dremio and AWS Lake Formation to establish consistent governance, eliminate data silos, and achieve fast analytics.\nWhy is this integration important? # One of Dremio’s customers, a Fortune 100 financial services organization, needs to effectively balance the imperatives of data access and control to meet stringent regulatory requirements while optimizing data value. This organization efficiently manages data, ensures compliance, and unlocks the full potential of its data resources by addressing data risks and implementing governance best practices.\nChallenges # A Fortune 100 financial services organization faced three main challenges. First, their data was severely underutilized due to data silos, despite its potential value in individual business processes. The organization needed the ability to effectively share and combine data assets to unlock additional potential across the organization.\nSecond, operating within the highly regulated financial services industry, the organization had to manage data risks and implement robust governance and access controls to prevent potential compliance transgressions.\nFinally, the organization struggled with fragmented data analytics. 
They required a unified data analytics platform that would enable data consumers to better comprehend their data and gain deeper insights. This platform needed to support better data-driven decisions through low-latency reports and dashboards, regardless of whether the data resided in an AWS data lake or other relational/non-relational sources, whether in the cloud or on-premises.\nSolution # This financial institution used AWS Lake Formation and the AWS Glue Data Catalog for centralized data administration and fine-grained access control to overcome these challenges; Dremio enabled low-latency analytics within a Data Mesh architecture.\nImplementation # The organization adopted a data sharing architecture inspired by the concept of a data mesh. They defined data products curated by experts who understood the nuances, management requirements, permissible uses, and limitations of the data. This approach enabled better data governance and management. The organization utilized AWS Lake Formation to centralize and manage granular, fine-grained access control for data sources in the AWS Glue Data Catalog, ensuring regulatory compliance and improving data governance. To enable efficient and data-driven decision-making, the organization implemented Dremio’s high-performance SQL engine for unified self-service analytics across AWS and other data sources. Dremio provided sub-second performance for consistent analytics across all data sources. Dremio was also instrumental in maintaining a consistent governance model for the customer. It inherited and incorporated the granular permissions defined by AWS Lake Formation for data management into the governance and access policies of non-Glue-managed data sources.\nHow does it work? 
# Dremio adheres to the workflow shown in Figure 1 each time an end user attempts to access, edit, or query datasets with AWS Lake Formation managed privileges.\nFigure 1 – Workflow of Dremio integration with AWS Lake Formation\nAs a prerequisite, connect an external identity provider (IdP) through the Security Assertion Markup Language (SAML) 2.0 protocol to IAM Identity Center.
1. The user authenticates through AWS IAM Identity Center and runs a query in Dremio.
2. Dremio checks each table in the query to determine whether it is configured to use Lake Formation for security.
3. If one or more datasets leverage Lake Formation, Dremio determines the IAM identifiers, specifically User or Group Amazon Resource Names (ARNs), associated with the IAM Identity Center user.
4. Dremio makes ListPermissions and ListDataCellsFilter API calls to Lake Formation for the table.
5. AWS Lake Formation returns the list of permissions for the table being queried. Permissions are cached in a permission cache to improve performance.
6. Dremio validates that the user ARN has SELECT Lake Formation permissions. If the user does not have permission, the query is rejected with a permission error.
7. If authorized, Dremio reads the underlying data from Amazon Simple Storage Service (Amazon S3).
8. Amazon S3 returns the data to Dremio.
9. Dremio returns the query results to the end user.
Benefits # The integration of Dremio with AWS Lake Formation benefited the organization across multiple fronts. By eliminating data silos and facilitating the exchange and integration of data across various business processes, the organization unlocked the latent potential of the data, leading to improved strategic insights and decision-making.\nThe organization implemented stringent data governance and access controls using AWS Lake Formation. This helped mitigate the organization’s exposure to regulatory risks and avoided potential penalties. 
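The authorization step in the workflow above can be reduced to a simple check: does any ARN associated with the user carry a SELECT grant on the table? The following Python sketch illustrates that decision under stated assumptions: the dictionaries loosely mirror entries from Lake Formation\u0026rsquo;s ListPermissions response, the helper name is mine, and a real engine would also apply ListDataCellsFilter results and the permission cache.

```python
# Hedged sketch of the SELECT-permission check described in the workflow.
# The data shapes loosely mirror Lake Formation's ListPermissions output;
# the function name and simplifications are illustrative, not Dremio's code.

def has_select_permission(user_arns: set, permissions: list) -> bool:
    """Return True if any of the user's ARNs was granted SELECT on the table."""
    for entry in permissions:
        principal = entry.get("Principal", {}).get("DataLakePrincipalIdentifier")
        if principal in user_arns and "SELECT" in entry.get("Permissions", []):
            return True
    return False

# Example: a single grant to an Identity Center group ARN (hypothetical ARN).
grants = [
    {
        "Principal": {"DataLakePrincipalIdentifier": "arn:aws:identitystore:::group/analysts"},
        "Permissions": ["SELECT", "DESCRIBE"],
    }
]
allowed = has_select_permission({"arn:aws:identitystore:::group/analysts"}, grants)
# If not allowed, the query is rejected with a permission error;
# otherwise the engine proceeds to read the underlying data from Amazon S3.
```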
Additionally, the organization used Dremio’s unified analytics software to consistently perform low-latency analytics across all their data sources, whether cloud-managed or on-premises.\nThe adoption of a data mesh architecture provided the necessary flexibility and scalability. This allowed for the integration of a variety of data sources while simultaneously adhering to governance and control standards.\nDemocratization with Control # The convergence of data governance and analytics shown in this solution reflects broader industry shifts. While data democratization drives innovation, it must be balanced with proper data governance controls. Leading institutions are moving toward “controlled democratization” – where access is broad but governed and audited.\nThis solution achieves this balance through several key mechanisms:
- Centralized Governance: AWS Lake Formation simplifies data lake governance by centralizing data security and governance.
- Granular Access Control: Fine-grained permissions ensure users have access to the right data down to the row and column level.
- Performance without Compromise: Dremio’s unified analytics platform delivers low-latency analytics while maintaining compliance with Lake Formation’s governance policies, proving that strong controls need not impede performance.
- Audit and Visibility: Lake Formation tracks data interactions by role and user, and it provides comprehensive data access auditing to verify the right data was accessed by the right users at the right time.
This blueprint shows how organizations can achieve the dual objectives of democratizing data access while maintaining robust governance controls.\nConclusion # We see how financial institutions can evolve from traditional data management approaches to modern data architectures without compromising security or compliance. 
The financial services organization not only solved its immediate challenges, but also established a foundation for future data initiatives that can adapt to evolving regulatory requirements and business needs.\nFor organizations facing similar challenges in regulated industries, this implementation provides a blueprint for balancing data democratization with governance, while still offering low-latency analytics. Dremio’s analytics and AWS Lake Formation’s governance create a reliable solution for organizations wanting to fully utilize their data while keeping it secure.\nDremio software release 25.1 and later offer this capability. Dremio is the industry’s leading engine for the Data Lakehouse with Apache Iceberg table format. For additional information regarding the Dremio Unified Data Lakehouse engine, please click here.\n","date":"May 1, 2025","externalUrl":null,"permalink":"/2025/05/streamline-unified-data-governance-with-aws-lake-formation-and-dremio/","section":"Blog","summary":"Customers are building large data lakes on Amazon Web Services (AWS) to democratize their access to data. As a result of that, data governance becomes increasingly important. Customers need to know data is accessed at the right time, by the right people, and in the right context. To implement fine-grained data access permissions, customers use AWS Lake Formation. AWS Lake Formation provides data access controls for AWS services like Amazon Redshift, Amazon Athena, and Amazon EMR. It also offers data access controls for AWS Partners like Dremio.\n","title":"Streamline Unified Data Governance with AWS Lake Formation and Dremio","type":"blog"},{"content":"AWS Lake Formation and the AWS Glue Data Catalog form an integral part of a data governance solution for data lakes built on Amazon Simple Storage Service (Amazon S3) with multiple AWS analytics services integrating with them. In 2022, we talked about the enhancements we had done to these services. 
We continue to listen to customer stories and work backwards to incorporate their thoughts in our products. In this post, we are happy to summarize the results of our hard work in 2023 to improve and simplify data governance for customers.\nWe announced our new features and capabilities during AWS re:Invent 2023, as is our custom every year. The following are re:Invent 2023 talks showcasing Lake Formation and Data Catalog capabilities:
- What’s new in AWS Lake Formation – This session recaps new capabilities and how you can get the most out of Lake Formation. The session also highlights Duke Energy’s journey with Lake Formation and the AWS Glue Data Catalog.
- Easily and securely prepare, share, and query data – This session shows how you can use Lake Formation and the AWS Glue Data Catalog to share data without copying, transform and prepare data without coding, and query data.
- Curate your data at scale – This session shows how solutions like AWS Glue, AWS Glue Data Quality, and Lake Formation can help you manage your best sources and find sensitive information.
We group the new capabilities into four categories:
- Discover and secure
- Connect with data sharing
- Scale and optimize
- Audit and monitor
Let\u0026rsquo;s dive deeper and discuss the new capabilities introduced in 2023.\nDiscover and secure # Using Lake Formation and the Data Catalog as the foundational building blocks, we launched Amazon DataZone in October 2023. DataZone is a data management service that makes it faster and more straightforward for you to catalog, discover, share, and govern data stored across AWS, on premises, and third-party sources. The publishing and subscription workflows of DataZone enhance collaboration between various roles in your organization and speed up the time to derive business insights from your data. Using AI-powered assistants, you can enhance the technical metadata of the Data Catalog into business metadata in DataZone, making it more easily discoverable. 
DataZone automatically manages the permissions of your shared data in the DataZone projects. To learn more about DataZone, refer to the User Guide. Bienvenue dans DataZone!\nAWS Glue crawlers classify data to determine the format, schema, and associated properties of the raw data, group data into tables or partitions, and write metadata to the Data Catalog. In 2023, we released several updates to AWS Glue crawlers. We added the ability to bring your custom versions of JDBC drivers in crawlers to extract data schemas from your data sources and populate the Data Catalog. To optimize partition retrieval and improve query performance, we added the feature for crawlers to automatically add partition indexes for newly discovered tables. We also integrated crawlers with Lake Formation, supporting centralized permissions for in-account and cross-account crawling of S3 data lakes. These are some much sought-after improvements that simplify your metadata discovery using crawlers. Crawlers, salut!\nWe have also seen a tremendous rise in the usage of open table formats (OTFs) like Linux Foundation Delta Lake, Apache Iceberg, and Apache Hudi. To support these popular OTFs, we added support to natively crawl these three table formats into the Data Catalog. Furthermore, we worked with other AWS analytics services, such as Amazon EMR, to enable Lake Formation fine-grained permissions on all three open table formats. We encourage you to explore which features of Lake Formation are supported for OTF tables. Bien intégré!\nAs the data sources and types increase over time, you are bound to have nested data types in your data lake sooner or later. To bring data governance to these datasets without flattening them, Lake Formation added support for fine-grained access controls on nested data types and columns. We also added support for Lake Formation fine-grained access controls while running Apache Hive jobs on Amazon EMR on EC2 and on Amazon EMR Studio. 
With Amazon EMR Serverless, fine-grained access control with Lake Formation is now available in preview. Connecté les points!\nAt AWS, we work very closely with our customers to understand their experience. We came to understand that onboarding to Lake Formation from AWS Identity and Access Management (IAM) based permissions for Amazon S3 and the AWS Glue Data Catalog could be streamlined. We realized that your use cases need more flexibility in data governance. With the hybrid access mode in Lake Formation, we introduced selective addition of Lake Formation permissions for some users and databases, without interrupting other users and workloads. You can define a catalog table in hybrid mode and grant access to new users like data analysts and data scientists using Lake Formation while your production extract, transform, and load (ETL) pipelines continue to use their existing IAM-based permissions. Double victoire!\nLet’s talk about identity management. You can use IAM principals, Amazon QuickSight users and groups, and external accounts and IAM principals in external accounts to grant access to Data Catalog resources in Lake Formation. What about your corporate identities? Do you need to create and maintain multiple IAM roles and map them to various corporate identities? You could see the IAM role that accessed the table, but how could you find out which user accessed it? To answer these questions, Lake Formation integrated with AWS IAM Identity Center and added the feature for trusted identity propagation. With this, you can grant fine-grained access permissions to the identities from your organization’s existing identity provider. Other AWS analytics services also support propagating the user identity. Your auditors can now see that the user john@anycompany.com, for example, had accessed the table managed by Lake Formation permissions using Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum. 
Intégration facile!\nNow you don’t have to worry about moving the data or copying the Data Catalog to another AWS Region to use the AWS services for data governance. We have expanded and made Lake Formation available in all Regions in 2023. Et voila!\nConnect with data sharing # Lake Formation provides a straightforward way to share Data Catalog objects like databases and tables with internal and external users. This mechanism empowers organizations with quick and secure access to data and speeds up their business decision-making. Let’s review the new features and enhancements made in 2023 under this theme.\nThe AWS Glue Data Catalog is the central and foundational component of data governance for both Lake Formation and DataZone. In 2023, we extended the Data Catalog through federation to integrate with external Apache Hive metastores and Redshift datashares. We also made available the connector code, which you can customize to connect the Data Catalog with additional Apache Hive-compatible metastores. These integrations pave the way to get more metadata into the Data Catalog, and allow fine-grained access controls and sharing of these resources across AWS accounts effortlessly with Lake Formation permissions. We also added support to access the Data Catalog table of one Region from other Regions using cross-Region resource links. This enhancement simplifies many use cases to avoid metadata duplication.\nWith the AWS CloudTrail Lake federation feature, you can discover, analyze, join, and share CloudTrail Lake data with other data sources in Data Catalog. For CloudTrail Lake, fine-grained access controls and querying and visualizing capabilities are available through Athena.\nWe further extended the Data Catalog capabilities to support uniform views across your data lake. You can create views using different SQL dialects and query from Athena, Redshift Spectrum, and Amazon EMR. 
This allows you to maintain permissions at the view level and not share the individual tables. The Data Catalog views feature is available in preview, announced at re:Invent 2023.\nScale and optimize # As SQL queries get more complex, whether because the data changes over time or because they involve multiple joins, a cost-based optimizer (CBO) can drive optimizations in the query plan and lead to faster performance, based on statistics of the data in the tables. In 2023, we added support for column-level statistics for tables in the Data Catalog. Customers are already seeing query performance improvements in Athena and Redshift Spectrum, with table column statistics turned on. Suivez les chiffres!\nTag-based access control removes the need to update your policies every time a new resource is added to the data lake. Instead, data lake administrators create Lake Formation Tags (LF-Tags) to tag Data Catalog objects and grant access based on these LF-Tags to users and groups. In 2023, we added support for LF-Tag delegation, where data lake administrators can give permissions to data stewards and other users to manage LF-Tags without the need for administrator privileges. LF-Tag democratization!\nThe Apache Iceberg format uses metadata to keep track of the data files that make up the table. Changes to tables, like inserts or updates, result in new data files being created. As the number of data files for a table grows, the queries using that table can become less efficient. To improve query performance on an Iceberg table, you need to reduce the number of data files by compacting the smaller change capture files into bigger files. Users typically create and run scripts to perform optimization of these Iceberg table files on their own servers or through AWS Glue ETL. To alleviate this complex maintenance of Iceberg tables, customers approached us for a better solution. We introduced the feature for automatic compaction of Apache Iceberg tables in the Data Catalog. 
After you turn on automatic compaction, the Data Catalog automatically manages the metadata of the table and gives you an always-optimized Amazon S3 layout for your Iceberg tables. To learn more, check out Optimizing Iceberg tables. Automatique!\nAudit and monitor # Knowing who has access to what data is a critical component of data governance. Auditors need to validate that the right metadata and data permissions are set in Lake Formation and the Data Catalog. Data lake administrators have full access to permissions and metadata, and can grant access to the data itself. To provide auditors with an option to search and review metadata permissions without granting them access to make changes to permissions, we introduced the read-only administrator role in Lake Formation. This role allows auditors to review the catalog metadata, Lake Formation permissions, and LF-Tags while preventing any changes to them.\nConclusion # We had an amazing 2023, developing product enhancements to help you simplify and enhance your data governance using Lake Formation and Data Catalog. We invite you to try these new features. 
The following is a list of our launch posts for reference:
Data Catalog and crawler features:
- AWS Glue crawlers support cross-account crawling to support data mesh architecture
- Efficiently crawl your data lake and improve data access with an AWS Glue crawler using partition indexes
- Introducing native Delta Lake table support with AWS Glue crawlers
- Introducing AWS Glue crawler and create table support for Apache Iceberg format
- Introducing Apache Hudi support with AWS Glue crawlers
- Enhance query performance using AWS Glue Data Catalog column-level statistics
- AWS Glue Data Catalog now supports automatic compaction of Apache Iceberg tables
Lake Formation features:
- Amazon DataZone Now Generally Available – Collaborate on Data Projects across Organizational Boundaries
- Query your Apache Hive metastore with AWS Lake Formation permissions
- Centrally manage access and permissions for Amazon Redshift data sharing with AWS Lake Formation
- Implement tag-based access control for your data lake and Amazon Redshift data sharing with AWS Lake Formation
- Configure cross-Region table access with the AWS Glue Catalog and AWS Lake Formation
- Introducing hybrid access mode for AWS Glue Data Catalog to secure access using AWS Lake Formation and IAM and Amazon S3 policies
- Decentralize LF-tag management with AWS Lake Formation
- Use IAM runtime roles with Amazon EMR Studio Workspaces and AWS Lake Formation for cross-account fine-grained access control
We will continue to innovate on behalf of our customers in 2024. Please share your thoughts, use cases, and feedback for our product improvements in the comments section or through your AWS account teams. We wish you a happy and prosperous 2024. 
Bonne année!\n","date":"January 18, 2024","externalUrl":null,"permalink":"/2024/01/aws-lake-formation-2023-year-in-review/","section":"Blog","summary":"AWS Lake Formation and the AWS Glue Data Catalog form an integral part of a data governance solution for data lakes built on Amazon Simple Storage Service (Amazon S3) with multiple AWS analytics services integrating with them. In 2022, we talked about the enhancements we had done to these services. We continue to listen to customer stories and work backwards to incorporate their thoughts in our products. In this post, we are happy to summarize the results of our hard work in 2023 to improve and simplify data governance for customers.\n","title":"AWS Lake Formation 2023 Year in Review","type":"blog"},{"content":"","date":"November 29, 2023","externalUrl":null,"permalink":"/categories/talks/","section":"Categories","summary":"","title":"Talks","type":"categories"},{"content":"Chief data officers, data platform administrators, architects, owners, and consumers are looking to simplify data access permissions and governance. AWS Lake Formation makes it easier to centrally govern, secure, and globally share data for analytics and machine learning use cases. Join this session to learn about new capabilities, customer stories, and how you can get the most out of Lake Formation.\n","date":"November 29, 2023","externalUrl":null,"permalink":"/2023/11/whats-new-in-aws-lake-formation-reinvent-2023/","section":"Blog","summary":"Chief data officers, data platform administrators, architects, owners, and consumers are looking to simplify data access permissions and governance. AWS Lake Formation makes it easier to centrally govern, secure, and globally share data for analytics and machine learning use cases. 
Join this session to learn about new capabilities, customer stories, and how you can get the most out of Lake Formation.\n","title":"What's new in AWS Lake Formation (reInvent 2023)","type":"blog"},{"content":"Many organizations have standardized or plan to standardize their unified data security governance on AWS Lake Formation, which provides powerful data access control to Amazon Redshift, Amazon Athena, and Amazon EMR.\nSome of these organizations are also leveraging Databricks, however, and would like to create and manage data access policies for Databricks using AWS Lake Formation as well. They want to have consistent policy enforcement and monitoring across their AWS services, Databricks, and Amazon Simple Storage Service (Amazon S3).\nIn this post, we will discuss the AWS Lake Formation and Privacera integrated solution that extends AWS Lake Formation source support to Databricks. It provides data access policy authorship and maintenance from one safe and convenient location, AWS Lake Formation.\nPrivacera is an AWS Data and Analytics Competency Partner and AWS Marketplace Seller that is a leading provider of unified data access governance solutions. It enables customers to deliver responsible data-powered performance from their ever-expanding data landscape.\nAWS Lake Formation Overview # AWS Lake Formation is a fully managed service that makes it easy to build, secure, and manage data lakes. Together with AWS Glue Data Catalog, a persistent technical metadata store to store, annotate, and share metadata, Lake Formation is a critical component of unified data security governance for AWS customers. It provides AWS customers a single place to manage data access permissions for the data in their data lake.\nUsing Lake Formation capabilities like tag-based access control, data filters, and cross-account data sharing, customers are able to break down data silos. 
The available APIs make it straightforward to extend and augment the capabilities and reach of AWS Lake Formation.\nDatabricks Overview # Databricks is an AWS Data and Analytics Competency Partner and AWS Marketplace Seller that allows customers to manage all of their data, analytics, and artificial intelligence (AI) on one platform.\nThe Databricks Lakehouse Platform combines the best of data warehouses and data lakes to offer end-to-end services like data ingestion, data engineering, machine learning (ML), and business analytics. With this unified approach, Databricks enables enterprises to simplify the modern data stack by eliminating data silos, helping them operate more efficiently and innovate faster.\nThe Databricks Lakehouse Platform delivers the reliability, strong governance, and performance of data warehouses with the openness, flexibility, and machine learning support of data lakes. Databricks is built on open source and open standards with a common approach to data management, security, and governance.\nPrivacera Overview # Privacera is an AWS Data and Analytics Competency Partner and AWS Marketplace Seller that delivers a unified data security governance platform based on open standards. Privacera enables organizations to discover sensitive data, protect and control access to data, and monitor data security and access across over 40 data sources, including Amazon S3, Amazon EMR, Amazon Redshift, Databricks, and Snowflake. 
This allows organizations to enhance data security while making data more accessible and reducing time to insights.\nPrivacera has also delivered two purpose-built solutions that integrate with AWS Lake Formation and allow AWS customers to augment their usage while having the choice to author, manage, and monitor data security and access policies in a single central location using either Lake Formation or Privacera.\nThese solutions are purpose-built for AWS customers that want or need to use Lake Formation as part of their overall data security governance solution, but need additional functional capabilities or source support that Privacera can provide.\nIntegration Overview # This post covers the solution that uses Privacera to extend AWS Lake Formation to centrally author, manage, and monitor data security and access policies in Databricks.\nFigure 1 – AWS Lake Formation-centric architecture to govern Databricks access.\nAs described in the diagram above, the data steward creates data access policies in AWS Lake Formation.\nData access policies are then synced to PrivaceraCloud, which is a fully managed software-as-a-service (SaaS) solution delivering unified data access governance. It’s built on the core attribute-based access control (ABAC) policy model of Apache Ranger.\nPrivaceraCloud translates AWS Lake Formation policies to Apache Ranger policies. It then uses a plugin to enforce policies in Databricks, and uses PolicySync to sync policies to Databricks SQL Analytics.\nPrivaceraCloud also enforces policies created in Lake Formation on AWS services that are not natively supported by Lake Formation, such as Amazon Redshift and Amazon EMR. Lake Formation supports Apache Spark on Amazon EMR and Apache Hive on Amazon EMR. PrivaceraCloud can extend that and support policies on Presto and Trino.\nAWS Lake Formation can natively enforce policies for AWS services like Amazon Redshift Spectrum, Amazon Athena, AWS Glue, and Amazon QuickSight. 
Using PrivaceraCloud, AWS Lake Formation can serve as a single pane of glass for data access policies for any customer using Databricks along with AWS services.\nWalkthrough # Terminology # Principal: An AWS Identity and Access Management (IAM) user, group, role, or SAML ARN. Resource: A catalog, database, table, or column. Prerequisites # Follow these steps to create a test database in AWS Glue. Next, follow these steps to create a test table in AWS Glue. Create a PrivaceraCloud account. Ideally, IAM groups and IAM users are synchronized from AD/LDAP or Okta/SCIM into Privacera. You can also synchronize them manually by following these steps. If the users/groups are not present in Privacera, their permissions won’t be synchronized. However, for IAM roles, Privacera will automatically sync the IAM roles as Apache Ranger roles into Privacera. Step 1: Create Cross-Account Trust IAM Role # Create an IAM role called privacera_cloud_lf_connector_to_lf_and_glue with the following custom trust policy. Figure 2 – IAM role trust policy.\nAfter the role is created, grant the IAM role read-only privileges to the AWS Glue Data Catalog and AWS Lake Formation. Create an inline policy and add the permissions as shown below. Figure 3 – IAM policies attached to the IAM role.\nGo to AWS Lake Formation, click on Administrative roles and tasks, choose Administrators, and add the IAM role created in the steps above. This enables Privacera to get the policies from AWS Lake Formation. Step 2: Configure AWS Lake Formation Connector in Privacera # Log in to Privacera and navigate to Lake Formation, which is listed under Application. Click on Access Management and update the AWS account ID, the ARN of the IAM role created in the previous step, and the AWS region. The Lake Formation permissions sink type should be listed as reverse_sink. 
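For reference, the cross-account role described in Step 1 can be sketched as two JSON policy documents. This is a minimal sketch, not Privacera's definitive policy: the account ID, external ID, and exact action list are illustrative assumptions, and the real values come from the PrivaceraCloud connector setup screen and Privacera's documentation.

```python
import json

# Illustrative placeholders -- replace with the values from your connector setup.
PRIVACERA_ACCOUNT_ID = "111122223333"
EXTERNAL_ID = "example-external-id"

# Cross-account trust policy for privacera_cloud_lf_connector_to_lf_and_glue:
# it lets PrivaceraCloud assume the role, scoped by an external ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{PRIVACERA_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
        }
    ],
}

# Read-only inline policy so the role can pull metadata and permissions from
# the Glue Data Catalog and Lake Formation (assumed action set).
inline_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "glue:Get*",
                "lakeformation:List*",
                "lakeformation:Describe*",
                "lakeformation:GetEffectivePermissionsForPath",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
print(json.dumps(inline_policy, indent=2))
```

The external-ID condition is the standard guard against the confused-deputy problem in cross-account access; without it, any principal in the trusted account could assume the role.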
Figure 4 – AWS Lake Formation connector configuration in Privacera.\nIf everything is done correctly, you should see the privacera_lakeformation tile under Access Management \u0026gt; Audit \u0026gt; Plugin. You should also see connector audit logs in the PolicySync tab. The Privacera Lake Formation connector will automatically pull the IAM roles and add them to Apache Ranger. You can check this under Access Management \u0026gt; Users/Groups/Roles \u0026gt; Roles. The existing policies in Lake Formation will be synchronized into Privacera. You can check this by going to Access Management \u0026gt; Resource Policies \u0026gt; privacera_lakeformation. Step 3: Create Data Access Policies in AWS Lake Formation # Navigate to AWS Lake Formation and select Grant Data Lake Permissions. Select the IAM user under IAM Users and roles. Select Named data catalog resources and select the database sales_db and the table sales_data. This database and table are already present in the AWS Glue Data Catalog. Figure 5 – Defining a table access policy in AWS Lake Formation.\nStep 4: Visualize Data Access Policy in Privacera # The access policy created in AWS Lake Formation should be synced to Privacera. You should be able to see it in Access Management \u0026gt; Resource Policies \u0026gt; privacera_lakeformation. You won’t be able to edit this policy in Privacera. Step 5: Validate Data Access in Databricks # Go to the Databricks environment and try to access the table. 
If everything is right, you should be able to access the sales_data table present in the sales_db database.\nFigure 6 – Accessing the table in Databricks.\nYou can also define a column-level access policy and row-level filters in AWS Lake Formation, and Privacera can enforce them in the Databricks environment.\nConclusion # This post demonstrates how AWS customers can extend AWS Lake Formation to centrally author, manage, and monitor data security and access policies in Databricks using Privacera’s Lake Formation-centric solution and the Privacera-AWS Glue integration.\nData access policies can be created in Lake Formation, and Privacera will automatically pull and translate them into Databricks access controls for native data access enforcement. This solution extends your Lake Formation capabilities into Databricks and delivers greater data security and data accessibility to data consumers.\n","date":"August 4, 2023","externalUrl":null,"permalink":"/2023/08/governing-databricks-data-access-with-aws-lake-formation-and-privacera-aws-partner-network-blog/","section":"Blog","summary":"Many organizations have standardized or plan to standardize their unified data security governance on AWS Lake Formation, which provides powerful data access control to Amazon Redshift, Amazon Athena, and Amazon EMR.\nSome of these organizations are also leveraging Databricks, however, and would like to create and manage data access policies for Databricks using AWS Lake Formation as well. They want to have consistent policy enforcement and monitoring across their AWS services, Databricks, and Amazon Simple Storage Service (Amazon S3).\nIn this post, we will discuss the AWS Lake Formation and Privacera integrated solution that extends AWS Lake Formation source support to Databricks. 
It provides data access policy authorship and maintenance from one safe and convenient location, AWS Lake Formation.\n","title":"Governing Databricks Data Access with AWS Lake Formation and Privacera (AWS Partner Network Blog)","type":"blog"},{"content":"What’s the point of data if you can’t get your hands on (or mind around) it?\nIn today’s data-driven world, ensuring the security and proper management of sensitive information is paramount. Collibra Protect and AWS Lake Formation offer a powerful combination to address the growing challenges of enterprise data access governance.\nCollibra Protect, part of the Collibra Data Intelligence Cloud, protects sensitive data and makes it available, or partially available, to specified groups of users. AWS Lake Formation is a fully managed serverless service that allows you to build clean and secure data lakes in days.\nIn this post, we’ll show you how to start building data access policies at scale. Collibra is an AWS Partner and AWS Marketplace Seller that provides data governance and catalog solutions giving teams tools that make it easy to consume data across the enterprise.\nChallenge in Enterprises # A common enterprise challenge is that different groups of people need varying access levels to the same data. Data producers require a different level of access to data than data consumers, and financial analysts use company data differently than HR data analysts.\nWith Collibra Protect, you get intelligent controls for better results with less risk. You grant access to individuals and protect sensitive information based on access rules and data protection standards.\nAll of your rules and standards with different data access levels are managed through the Collibra platform and pushed to the data source. 
The aim is to promote a safe data-open culture in organizations.\nSimplified Access Governance # The goal of Collibra Protect is to centralize and simplify access governance and remove the need for repetitive action and approval. Data access and privacy management promotes an ethical company standard, giving permission to view information only to those that need it. Collibra Protect allows you to perform these actions accordingly.\nAs an example of how Collibra Protect is used, consider a data steward giving everyone access to a dataset. Based on data categories in Collibra Protect, the steward can allow or deny access to parts of that dataset to groups within the organization—this is known as differential access. It’s suggested that rules/standards are grouped together (by business processes, for example) so you don’t have to make a rule or standard for every dataset.\nWhy AWS Lake Formation? # AWS Lake Formation provides a single place to manage access controls for data in your data lake. You can define security policies that restrict access to data at the database, table, column, row, and cell levels. These policies apply to AWS Identity and Access Management (IAM) users and roles, and to users and groups when using SAML-based identity providers (IdPs).\nYou can use fine-grained controls to access data secured by AWS Lake Formation within Amazon Redshift Spectrum, Amazon Athena, AWS Glue, and Amazon EMR for Apache Spark.\nData filters in AWS Lake Formation can be used to govern access at row, column, and cell levels. Tag-based access control can be achieved by defining LF-tags and attaching them to databases, tables, or columns. 
This allows you to scale data governance, manage hundreds or even thousands of data permissions, and share controlled access across analytic, machine learning (ML), and extract, transform, and load (ETL) services for consumption.\nCollibra Protect + AWS Lake Formation Benefits # With the combination of both products, organizations can:\nAllow data stewards to control access to their datasets or data categories without the need of technical expertise or support from IT departments. Leverage Collibra’s capabilities to identify, classify, and tag sensitive data within the organization’s data landscape and control the access from that structure. Audit and evaluate the rules and standards associated with data. Leverage the integrations and capabilities of Lake Formation to control access at a granular level for AWS products that support it. Have a single pane to look and control access in the AWS environment. Architecture # The architecture diagram below shows how Collibra Protect residing in Collibra’s cloud platform integrates with AWS Lake Formation and enforces data protection policies in various underlying services.\nFigure 1 – Collibra and AWS Lake Formation integration.\nHow it Works # Collibra Protect relies on the creation of protection standards and access rules. Protection standards apply data protection to the source data based on how the data is classified or categorized within the Collibra platform. 
Access rules grant access to a less restrictive view of the data that overrides the restrictions from protection standards.\nGiven a table with a column for personal emails, for example, we can create a protection standard that will hide that column from all users, and then create an access rule that shows that column to the users in the marketing group to launch an email campaign.\nThe key benefit of using Collibra Protect is that with a few clicks you can make sure your business-critical data is accessible by the right users and your sensitive data is protected.\nCollibra Protect makes use of AWS Lake Formation’s Data Filter feature to protect data. Whenever a protection standard or access rule is set up, it’s pushed to AWS Lake Formation and a data filter is created automatically.\nEach data filter belongs to a specific table and includes the following information:\nFilter name (this will be prefixed with collibra/assetid). Table name. Name of the database that contains the table. Column specification – list of columns to include or exclude in query results. Row filter expression – expression that specifies the rows to include in query results. With some restrictions, the expression has the syntax of a WHERE clause in the PartiQL language. To specify all rows, enter true in the console or use AllRowsWildcard in API calls. Examples # In this section, we are going to create a data protection standard to hide all of the columns that contain personal emails across the databases. Then, we will allow the marketing team to access a dataset that contains personal information like first name, last name, and personal emails by creating a data access rule. They’ll be able to see the name and email to inform customers about a promotion, but we’ll hide the last name for compliance.\nData Protection Standard # After clicking on Create a Data Protection Standard, the setup menu will show up. 
We’ll start by assigning a name and a description.\nNext, select the group Everyone in the drop-down menu. Then select data classification and choose personal email. Data classes are tags assigned to columns in the Collibra Data Catalog to provide context to the data itself.\nFigure 2 – Data protection standard setup.\nAfter saving the standard, it will result in the following in AWS Lake Formation:\nCreate an LF-tag Assign the tag to all columns identified as personal email For each of the tables: Create data filter to exclude columns tagged as personal email Assign the data filters to all groups Now, all of the columns identified as “personal email” have been hidden in AWS. Let’s proceed by creating an access rule.\nRule # After clicking on Create a Data Access Rule, the setup menu will show up. As in the standard setup, we’ll give it a name and a description.\nNow, we want the marketing team to be able to access the email information and the names of the customers to send out a promotion. We’ll select the group marketing and the asset customer by country, which is a table that contains the information the team needs.\nAs an optional feature, we’ll hide the “last name” information since the team doesn’t need it, and this way we secure that sensitive information. We’ll do so by selecting data classification and choosing last name in the drop-down menu.\nFigure 3 – Data access rule setup.\nCollibra Protect offers advanced filtering controls. For example, we could show only the customers from a specific country or region. 
For simplicity, though, we’ll leave it empty and click the Save button.\nThe resulting data filter and access grant, created automatically in AWS Lake Formation, are:\nFor each of the tables with the column identified as “last name”: Create a data filter with exclude columns Assign the data filters to Marketing in AWS Lake Formation; note that the table “customer” is targeted by the Collibra dataset “Customer by country” Figure 4 – Resulting data filter in AWS Lake Formation.\nFor users with the Marketing role: Grant access to table “Customer” Apply the previously created data filter Figure 5 – Assigned data filter to Marketing in AWS Lake Formation.\nConclusion # Collibra Protect along with AWS Lake Formation is a powerful combination that offers a robust, comprehensive solution to address the growing challenges of enterprise data access governance.\nAs businesses continue to rely on vast amounts of data to make informed decisions, it becomes increasingly important to manage and protect sensitive information while providing the necessary access to relevant stakeholders.\nBy leveraging Collibra Protect’s centralized access governance and data protection capabilities alongside AWS Lake Formation’s serverless service for building clean and secure data lakes, organizations can realize these benefits:\nEffectively strike a balance between data openness and security. Simplify access governance. Promote ethical data access and privacy management. Scale data governance across your data sources and services. 
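Under the hood, the filters generated in the walkthrough above correspond to Lake Formation data cells filters. The following is a minimal sketch of such a filter payload; the account ID, database name, and asset ID are illustrative assumptions, not values from Collibra's actual integration.

```python
import json

# All identifiers below are illustrative placeholders.
data_cells_filter = {
    "TableCatalogId": "111122223333",       # AWS account that owns the Data Catalog
    "DatabaseName": "crm_db",
    "TableName": "customer",
    "Name": "collibra/example-asset-id",    # Collibra prefixes filter names with collibra/<assetid>
    "RowFilter": {"AllRowsWildcard": {}},   # include every row ("true" in the console)
    "ColumnWildcard": {"ExcludedColumnNames": ["last_name"]},  # hide the "last name" column
}

print(json.dumps(data_cells_filter, indent=2))
```

With boto3, a payload shaped like this could be passed as `lakeformation.create_data_cells_filter(TableData=data_cells_filter)`; to restrict rows instead of including them all, `RowFilter` would carry a `FilterExpression` written in PartiQL WHERE-clause syntax, such as `"country = 'NL'"`.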
By harnessing the power of Collibra Protect and AWS Lake Formation, organizations can confidently navigate the complex data landscape and facilitate a secure data-sharing environment that can drive business growth.\n","date":"June 16, 2023","externalUrl":null,"permalink":"/2023/06/protecting-sensitive-data-with-collibra-protect-and-aws-lake-formation-aws-partner-network-blog/","section":"Blog","summary":"What’s the point of data if you can’t get your hands on (or mind around) it?\nIn today’s data-driven world, ensuring the security and proper management of sensitive information is paramount. Collibra Protect and AWS Lake Formation offer a powerful combination to address the growing challenges of enterprise data access governance.\nCollibra Protect, part of the Collibra Data Intelligence Cloud, protects sensitive data and makes it available, or partially available, to specified groups of users. AWS Lake Formation is a fully managed serverless service that allows you to build clean and secure data lakes in days.\n","title":"Protecting Sensitive Data with Collibra Protect and AWS Lake Formation (AWS Partner Network Blog)","type":"blog"},{"content":"Many organizations are using AWS Lake Formation to manage data security and access management for Amazon Athena, Amazon Redshift Spectrum, or Presto with Amazon EMR, but they want to be able to manage other sources with a single central data security platform. This allows organizations to apply consistent and un-siloed data security and access policies across all their data sources, reduce the effort required to manage data security and access policies, make data more accessible, and enhance their security posture.\nPrivacera is an AWS Data \u0026amp; Analytics Software Competency Partner and has delivered two new solutions that integrate Privacera and AWS Lake Formation to extend AWS Lake Formation across AWS and non-AWS data and analytical environments. 
These two new integrated solutions allow AWS customers to augment their usage of AWS Lake Formation, while having the choice to author, manage, and monitor data security and access policies in a single central location, using either AWS Lake Formation or Privacera. These solutions are purpose-built for AWS customers that want or need to use Lake Formation as part of their overall data security governance solution but require additional functional capabilities or source support that Privacera can provide.\nPrivacera, a leading unified data security governance platform provider, offers a complete unified data security governance solution for most AWS data and analytics services, like Amazon S3, Amazon EMR, Amazon Redshift, and Amazon RDS, and third-party services that run on AWS, like Databricks and Snowflake. Privacera also supports multi and hybrid-cloud architectures. Now, Privacera has extended its integration with AWS services to include AWS Lake Formation, allowing customers to either create and manage data security and access policies centrally in AWS Lake Formation with Privacera extending AWS Lake Formation source support, or centrally create and manage data security and access policies in Privacera and leverage AWS Lake Formation integration with AWS services.\nBoth Privacera solutions for AWS Lake Formation provide data security and access policy authorship and maintenance from one safe and convenient location to help organizations reduce overall data policy creation, management, and monitoring complexities.\nAWS Lake Formation as the Central Data Security Policy Store # This solution is ideal for customers who have an AWS services-first approach in building their data architecture and are using or planning to use AWS Lake Formation for centralized data governance. 
If you are using or planning on using AWS Lake Formation for Amazon Athena, Amazon Redshift Spectrum, Redshift data sharing, or Amazon EMR Presto, but also need to extend AWS Lake Formation to Databricks, Trino, or Amazon EMR Spark and Hive, this solution should meet your needs. Privacera has integrated with AWS Lake Formation through the AWS Lake Formation API to provide native connectivity to Databricks, Trino, and Amazon EMR Spark and Hive.\nAWS Lake Formation supports a subset of Amazon EMR capabilities (Amazon EMR Spark and Amazon EMR Hive).\nThis AWS Lake Formation solution allows AWS users to create and manage all data access policies in AWS Lake Formation, including for Databricks, Trino, and Amazon EMR Spark and Hive, taking advantage of AWS Lake Formation’s integration with Amazon Athena, Amazon Redshift Spectrum, and AWS Glue.\nThis solution enables:\nData access policy creators can use the AWS Lake Formation UI and capabilities that they are familiar with to leverage AWS Glue and ensure consistency in policies. Privacera automatically pulls Databricks, Amazon EMR, and Trino policies that were created in Lake Formation. Privacera automatically translates the Databricks, Amazon EMR, and Trino policies into source-native policies for enforcement. Lake Formation directly enforces policies for AWS Lake Formation-supported sources. Amazon S3 access is managed consistently with Amazon Redshift, Amazon Athena, Amazon EMR, Trino, and Databricks permissions that are centrally managed in AWS Lake Formation. AWS Lake Formation and Privacera both use AWS CloudTrail to provide an integrated and holistic view of an organization’s data access and security policies, as well as what data is being accessed, when, and who is accessing it. 
This solution provides unified, cross-account, fine-grained data security governance across Amazon Redshift, Amazon Athena, Amazon EMR, Databricks, and Trino.\nPrivacera as the Central Data Security Policy Store # This solution is ideal for customers that have complex data and analytics ecosystems and want to have unified data security governance natively on sources such as Amazon S3, Snowflake, Databricks with or without Unity Catalog, Amazon EMR, Amazon Athena, Amazon Redshift, Amazon RDS, and many more, but also want to leverage AWS Lake Formation for fine-grained access control on Amazon Redshift Spectrum, or wish to use AWS Lake Formation to enforce access controls on Amazon Athena. This solution can also be used in a multi- or hybrid-cloud architecture.\nThis solution also integrates with AWS Lake Formation through the AWS Lake Formation application API, but it allows AWS users to create and manage all data access policies in Privacera using the Privacera UI and capabilities that they know.\nThis solution enables:\nData access policy creators can use the Privacera UI and capabilities that they are familiar with to ensure consistency in policies. Privacera automatically translates and pushes the data security and access policies into native AWS Lake Formation policies for Amazon Athena or Amazon Redshift Spectrum. AWS Lake Formation automatically enforces policies for AWS-supported sources. Privacera translates data security and access policies to supported sources to natively enforce the data access controls. Privacera’s integration with AWS Glue allows organizations the option of leveraging AWS Glue if they desire. This solution provides unified, cross-account, fine-grained data security governance across over 50 data sources and data governance and security services and products. 
It also allows users to benefit from unique Privacera capabilities, such as:\n● Custom conditions, which allow data access and security to be applied based on a condition, such as completion of PII training\n● Wildcarding for access controls, which allows organizations with well-defined naming conventions to broadly allocate data access to resources based on those conventions, automatically covering future resources that follow them\n● Compliance workflows, which allow compliance rules to be created once and applied across your data ecosystem\n● Governed Data Stewardship, which allows organizations to create virtual business data domains/sets and delegate data security and access ownership to data stewards while providing data security guardrails\nDelivering Solutions to Meet Your Requirements # Privacera is committed to delivering data security governance solutions to meet our customers’ needs. Privacera can be used with or without AWS Lake Formation depending on your needs, but if you are currently using or planning on using AWS Lake Formation, Privacera has two new integrated solutions for you that can either extend AWS Lake Formation into additional sources or leverage AWS Lake Formation to power data security for Amazon Redshift Spectrum or Amazon Athena. These solutions allow AWS Lake Formation and Privacera users to benefit from source access and unique capabilities from both AWS Lake Formation and Privacera, creating a better-together solution that, in certain scenarios, is more powerful than either product alone.\n","date":"June 2, 2023","externalUrl":null,"permalink":"/2023/06/privacera-launches-2-new-solutions-for-aws-lake-formation/","section":"Blog","summary":"Many organizations are using AWS Lake Formation to manage data security and access management for Amazon Athena, Amazon Redshift Spectrum, or Presto with Amazon EMR, but they want to be able to manage other sources with a single central data security platform. 
This allows organizations to apply consistent and un-siloed data security and access policies across all their data sources, reduce the effort required to manage data security and access policies, make data more accessible, and enhance their security posture.\n","title":"Privacera Launches 2 New Solutions for AWS Lake Formation","type":"blog"},{"content":"A modern data strategy is a comprehensive plan for how you manage, access, analyze, and act on data. Most companies are already building roadmaps toward that goal, but the gap between \u0026ldquo;we have a plan\u0026rdquo; and \u0026ldquo;we\u0026rsquo;re getting value from our data\u0026rdquo; can be significant.\nThis session covers how deploying a modern data architecture on AWS helps close that gap — navigating common data challenges, streamlining analytics processes, and getting to business insights faster. We take a closer look at AWS Glue and AWS Lake Formation specifically, and how they accelerate the journey.\n","date":"August 18, 2022","externalUrl":null,"permalink":"/2022/08/achieving-your-modern-data-architecture/","section":"Blog","summary":"A modern data strategy is a comprehensive plan for how you manage, access, analyze, and act on data. Most companies are already building roadmaps toward that goal, but the gap between “we have a plan” and “we’re getting value from our data” can be significant.\nThis session covers how deploying a modern data architecture on AWS helps close that gap — navigating common data challenges, streamlining analytics processes, and getting to business insights faster. We take a closer look at AWS Glue and AWS Lake Formation specifically, and how they accelerate the journey.\n","title":"Achieving your modern data architecture","type":"blog"},{"content":"One of the hardest things in product is articulating your organization\u0026rsquo;s unique ability to deliver value to its market. It\u0026rsquo;s also one of the most important. 
So how do you build a path that combines innovation, proven methodology, and practical approaches to identify the attributes and differentiators that set you apart from your competitors?\n","date":"July 28, 2022","externalUrl":null,"permalink":"/2022/07/how-to-use-innovation-and-proven-methodologies-to-uncover-your-distinctive-competencies/","section":"Blog","summary":"One of the hardest things in product is articulating your organization’s unique ability to deliver value to its market. It’s also one of the most important. So how do you build a path that combines innovation, proven methodology, and practical approaches to identify the attributes and differentiators that set you apart from your competitors?\n","title":"How To Use Innovation And Proven Methodologies To Uncover Your Distinctive Competencies","type":"blog"},{"content":"","date":"October 28, 2021","externalUrl":null,"permalink":"/categories/serverless/","section":"Categories","summary":"","title":"Serverless","type":"categories"},{"content":"One of the things I love about serverless is that I never have to be bothered with managing servers, it’s just using a service like Lambda, Cloud Run, etc and my code is running. If I want to use a database I can rely on services like DynamoDB or CosmosDB. While I think that is absolutely great, it feels like serverless is only for stateless processes. I think serverless needs a bold and stateful vision so that we can build any type of application (stateful and stateless) without ever managing servers. In this talk, I’ll touch on why statefulness matters and how stateful serverless makes patterns like Event Sourcing and CQRS available to anyone.\n","date":"October 28, 2021","externalUrl":null,"permalink":"/2021/10/simply-stateful-serverless/","section":"Blog","summary":"One of the things I love about serverless is that I never have to be bothered with managing servers, it’s just using a service like Lambda, Cloud Run, etc and my code is running. 
If I want to use a database I can rely on services like DynamoDB or CosmosDB. While I think that is absolutely great, it feels like serverless is only for stateless processes. I think serverless needs a bold and stateful vision so that we can build any type of application (stateful and stateless) without ever managing servers. In this talk, I’ll touch on why statefulness matters and how stateful serverless makes patterns like Event Sourcing and CQRS available to anyone.\n","date":"October 28, 2021","externalUrl":null,"permalink":"/2021/10/simply-stateful-serverless/","section":"Blog","summary":"One of the things I love about serverless is that I never have to be bothered with managing servers, it’s just using a service like Lambda, Cloud Run, etc and my code is running. 
Technology wise I’ll be “all over the map” talking about things like Knative and the VMware Event Broker Appliance, AWS Lambda, Akka Serverless\n","title":"Why (stateful) serverless matters for server admins","type":"blog"},{"content":"Leon Stigter, senior product manager for serverless at Lightbend, explained the core problem to SiliconANGLE: developers generally think of serverless as a \u0026ldquo;stateless solution,\u0026rdquo; meaning every time an application needs to do something, it has to connect to a database first. For a single service that\u0026rsquo;s manageable, but at scale, things like connection pooling get painful fast.\n","date":"June 10, 2021","externalUrl":null,"permalink":"/2021/06/lightbends-akka-serverless-enables-stateful-app-development-without-a-database-siliconangle/","section":"Blog","summary":"Leon Stigter, senior product manager for serverless at Lightbend, explained the core problem to SiliconANGLE: developers generally think of serverless as a “stateless solution,” meaning every time an application needs to do something, it has to connect to a database first. For a single service that’s manageable, but at scale, things like connection pooling get painful fast.\n","title":"Lightbend's Akka Serverless enables stateful app development without a database - SiliconANGLE","type":"blog"},{"content":"As Auth0 says on their website \u0026ldquo;Identity is the front door of every user interaction\u0026rdquo;. When you\u0026rsquo;re building serverless applications, that becomes even more important since you often have multiple apps that all need to be secured. 
In this post I\u0026rsquo;ll walk you through how to wire up Auth0 with Akka Serverless.\nTL;DR all code is available on GitHub too: https://github.com/retgits/akkaserverless-auth0-javascript\nPrerequisites # To follow along, you\u0026rsquo;ll need:\nAn Akka Serverless account Node.js v14 or higher installed The Docker CLI installed An account with Auth0 Your service # The plan is straightforward: build a \u0026ldquo;Hello, World\u0026rdquo; Action that validates a JWT token and returns either an error (HTTP 500) or a normal response.\nProto file # Akka Serverless is API-first, so we start with the API description in Protobuf format:\nsyntax = \u0026#34;proto3\u0026#34;; package com.retgits.akkaserverless.actions; import \u0026#34;akkaserverless/annotations.proto\u0026#34;; import \u0026#34;google/api/annotations.proto\u0026#34;; message GreetingRequest { string name = 1; string greeting = 2; } message GreetingResponse { string message = 1; } service GreetingService { /** * The Greeting method accepts a GreetingRequest message and returns a * GreetingResponse message if the function completes successfully. 
The * method is exposed to the outside world over HTTP on the URL `/greet` */ rpc Greeting(GreetingRequest) returns (GreetingResponse) { option (google.api.http) = { post: \u0026#34;/greet\u0026#34; body: \u0026#34;*\u0026#34; }; } } Next, we need a class to handle JWT validation:\nimport jwksClient from \u0026#39;jwks-rsa\u0026#39;; import jwt from \u0026#39;jsonwebtoken\u0026#39;; class JWTValidator { header client /** * Creates an instance of JWTValidator * @param {string} header the name of the header parameter that contains the JWT token * @param {string} uri the URI to find the JWKS file */ constructor(header, uri) { this.header = header; this.client = new jwksClient.JwksClient({ jwksUri: uri }); } /** * Validate and decode the JWT token * @param {*} metadata the metadata of the Akka Serverless request * @returns a decoded JWT token * @throws an error when decoding fails */ async validateAndDecode(metadata) { const jwtHeader = metadata.entries.find(entry =\u0026gt; entry.key === this.header); const token = jwtHeader.stringValue; let result = jwt.decode(token, { complete: true }); if(result == null) { throw new Error(\u0026#39;Unable to obtain valid JWT token\u0026#39;) } const kid = result.header.kid; if(this.client == null) { throw new Error(\u0026#39;To validate with JWKS the withJWKS method must be called first\u0026#39;) } const key = await this.client.getSigningKey(kid); const signingKey = key.getPublicKey(); try { var decoded = jwt.verify(token, signingKey, { complete: true }); return decoded } catch (err) { throw new Error(\u0026#39;Unable to verify JWT token\u0026#39;); } } } export default JWTValidator; And finally, the action implementation itself:\nimport * as as from \u0026#39;@lightbend/akkaserverless-javascript-sdk\u0026#39;; import JWTValidator from \u0026#39;./jwtvalidator.js\u0026#39;; const greetingservice = new as.Action( [\u0026#39;./app.proto\u0026#39;], \u0026#39;com.retgits.akkaserverless.actions.GreetingService\u0026#39;, { 
serializeFallbackToJson: true } ); /** * The command handlers for this Action. * The names of the properties (before the colon) must match the names of the rpc * methods specified in the protobuf file. */ greetingservice.commandHandlers = { Greeting: generateGreeting } /** * generateGreeting implements the business logic for the Greeting RPC method. It * validates the JWT token passed in the `X-Custom-JWT-Auth` HTTP Header parameter * first and if that succeeds, sends back a message. If the validation fails, an * HTTP 500 error is sent back. * @param {*} request * @param {*} context */ async function generateGreeting(request, context) { const validator = new JWTValidator(\u0026#39;X-Custom-JWT-Auth\u0026#39;, \u0026#39;https://\u0026lt;your tenant\u0026gt;.auth0.com/.well-known/jwks.json\u0026#39;); try { await validator.validateAndDecode(context.metadata); return { message: `${request.greeting}, ${request.name}!` } } catch (err) { return context.fail(err); } } export default greetingservice; The first line of generateGreeting is the important bit. It initializes the JWT validation class with two parameters:\nThe name of the HTTP Header parameter that carries the JWT token The URL to the JWKS file in Auth0 const validator = new JWTValidator(\u0026#39;X-Custom-JWT-Auth\u0026#39;, \u0026#39;https://\u0026lt;your tenant\u0026gt;.auth0.com/.well-known/jwks.json\u0026#39;) Security # Building the service was pretty standard. Securing it with Auth0 is equally straightforward.\nIn the Auth0 console, go to Applications -\u0026gt; APIs:\nClick + Create API to create a new API. The name and identifier are up to you, but the Signing Algorithm should be set to RS256 so tokens get signed with Auth0\u0026rsquo;s private key. Once the API is created, Auth0 also creates a Test Application. On the Test tab, you can request tokens for any of your authorized applications (like the default Test Application). 
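Under the hood, the Test tab issues an OAuth client-credentials request against your tenant's token endpoint. A hedged sketch of that call from the command line — the tenant name, client ID, client secret, and audience below are all placeholders you would replace with your own values:

```shell
## Build the token request body. CLIENT_ID, CLIENT_SECRET, and the audience
## are placeholders - copy the real values from your Auth0 Test Application.
cat > payload.json <<'EOF'
{
  "grant_type": "client_credentials",
  "client_id": "CLIENT_ID",
  "client_secret": "CLIENT_SECRET",
  "audience": "https://my-api-identifier"
}
EOF

## Request a token (YOUR_TENANT is a placeholder):
## curl --request POST --url https://YOUR_TENANT.auth0.com/oauth/token \
##   --header 'Content-Type: application/json' --data @payload.json
cat payload.json
```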
The response section will contain the access_token.\nTesting it # To test locally, you need to run the Akka Serverless proxy alongside your service container:\n## Set your dockerhub username export DOCKER_REGISTRY=docker.io export DOCKER_USER=\u0026lt;your dockerhub username\u0026gt; ## Run the npm install command npm install ## Build container docker build . -t $DOCKER_REGISTRY/$DOCKER_USER/akkaserverless-auth0-javascript:1.0.0 ## Create a docker bridged network docker network create -d bridge akkasls ## Run your userfunction docker run -d --name userfunction --hostname userfunction --network akkasls $DOCKER_REGISTRY/$DOCKER_USER/akkaserverless-auth0-javascript:1.0.0 ## Run the proxy docker run -d --name proxy --network akkasls -p 9000:9000 --env USER_FUNCTION_HOST=userfunction gcr.io/akkaserverless-public/akkaserverless-proxy:0.7.0-beta.9 -Dconfig.resource=dev-mode.conf -Dcloudstate.proxy.protocol-compatibility-check=false With the access_token from Auth0, you can call the service:\n## Set your Access Token export ACCESS_TOKEN=\u0026lt;your access token\u0026gt; ## Send a request curl --request POST \\ --url http://localhost:9000/greet \\ --header \u0026#39;Content-Type: application/json\u0026#39; \\ --header \u0026#39;X-Custom-JWT-Auth: \u0026#39;$ACCESS_TOKEN\u0026#39;\u0026#39; \\ --data \u0026#39;{ \u0026#34;name\u0026#34;: \u0026#34;World\u0026#34;, \u0026#34;greeting\u0026#34;: \u0026#34;Hello\u0026#34; }\u0026#39; ## The result will be {\u0026#34;message\u0026#34;:\u0026#34;Hello, World!\u0026#34;} If you change the token to anything else, you\u0026rsquo;ll get an error:\nError: Unable to obtain valid JWT token What\u0026rsquo;s next? # That\u0026rsquo;s really all there is to it. Securing Akka Serverless apps with Auth0 comes down to a JWT validation class and a few lines of configuration. 
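If you want to see what a token carries before wiring up full verification, the decode half is just base64url-decoding the token's middle segment. A minimal, signature-free sketch for illustration only — the service above still relies on jsonwebtoken and jwks-rsa for real verification:

```javascript
// Decode a JWT payload WITHOUT verifying the signature. Illustration only;
// never trust an unverified token in production code.
function decodePayload(token) {
  const segments = token.split('.');
  if (segments.length !== 3) {
    throw new Error('Unable to obtain valid JWT token');
  }
  // JWT segments are base64url-encoded JSON: header.payload.signature
  return JSON.parse(Buffer.from(segments[1], 'base64url').toString('utf8'));
}

// A toy token: '{}' header, a small payload, and a dummy signature.
const payload = Buffer.from(JSON.stringify({ sub: 'user-1' })).toString('base64url');
const token = `e30.${payload}.sig`;
console.log(decodePayload(token).sub); // prints "user-1"
```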
Let me know what you\u0026rsquo;d like to see next!\nCover photo by vishnu vijayan from Pixabay\n","date":"June 8, 2021","externalUrl":null,"permalink":"/2021/06/how-to-secure-akka-serverless-apps-with-auth0/","section":"Blog","summary":"As Auth0 says on their website “Identity is the front door of every user interaction”. When you’re building serverless applications, that becomes even more important since you often have multiple apps that all need to be secured. In this post I’ll walk you through how to wire up Auth0 with Akka Serverless.\n","title":"How To Secure Akka Serverless Apps With Auth0","type":"blog"},{"content":"CI/CD is one of those things that pays for itself almost immediately. In serverless especially, where the whole point is to focus on code and let the platform handle the rest, automating your deployment pipeline is a no-brainer. It lets developers focus on code and lets the business ship quality software faster. So how does that work with Akka Serverless?\nAkka Serverless brings together Functions-as-a-Service and serverless databases into a single package, so developers don\u0026rsquo;t have to worry about the database layer either. In this post, I\u0026rsquo;ll walk through automating the CI/CD workflow for Akka Serverless using GitHub Actions.\nThe GitHub Actions # The pipeline uses four GitHub Actions:\nDocker Login to authenticate with Docker Hub (or any supported Docker registry) Docker Setup Buildx which uses BuildKit and enables multi-platform image builds (out of scope here, but nice to have) Build and push Docker images to push the image to Docker Hub Akka Serverless CLI for GitHub Actions to deploy to Akka Serverless For security, store credentials as encrypted secrets in GitHub. 
Under Settings -\u0026gt; Secrets, create four variables:\nDOCKERHUB_TOKEN: a personal authentication token for Docker Hub (strongly recommended over a password) DOCKERHUB_USERNAME: your Docker Hub username PROJECT: the Akka Serverless project ID (akkasls projects get \u0026lt;project name\u0026gt; shows the ID) TOKEN: an Akka Serverless authentication token with execution scope (create one with akkasls auth tokens create --type=refresh --scopes=execution --description=\u0026quot;My CI/CD token\u0026quot;) The YAML # With the secrets in place, the only thing left is the GitHub Actions workflow file. I\u0026rsquo;ll show the complete file first, then break it down. Store it in .github/workflows in your repository.\nThe full file # I named mine deploy.yaml:\nname: akkasls-deployer on: push: branches: [ main ] workflow_dispatch: jobs: build: runs-on: ubuntu-latest steps: - name: Checkout uses: actions/checkout@v2 - name: Set up Docker Buildx uses: docker/setup-buildx-action@v1 - name: Login to Docker Hub uses: docker/login-action@v1 with: username: ${{ secrets.DOCKERHUB_USERNAME }} password: ${{ secrets.DOCKERHUB_TOKEN }} - name: Build and push to Docker Hub uses: docker/build-push-action@v2 with: push: true tags: retgits/myapp:1.0.0 - name: Deploy to Akka Serverless uses: retgits/akkasls-action@v1 with: cmd: \u0026#34;services deploy myapp retgits/myapp:1.0.0\u0026#34; env: token: ${{ secrets.TOKEN }} project: ${{ secrets.PROJECT }} Breaking up the file in parts # Here\u0026rsquo;s what each section does:\nname: akkasls-deployer This sets the workflow name as it appears in the Actions tab of your repository.\non: push: branches: [ main ] workflow_dispatch: The on section defines triggers. This workflow runs on every push to main, or when manually triggered via workflow_dispatch (handy when you want to redeploy without a code change).\njobs: build: runs-on: ubuntu-latest I run builds on Ubuntu. 
You could use a different OS or a self-hosted runner.\nsteps: - name: Checkout uses: actions/checkout@v2 Checks out the code so subsequent steps can access it.\n- name: Set up Docker Buildx uses: docker/setup-buildx-action@v1 Sets up Buildx for multi-platform image builds (especially useful for Apple Silicon containers).\n- name: Login to Docker Hub uses: docker/login-action@v1 with: username: ${{ secrets.DOCKERHUB_USERNAME }} password: ${{ secrets.DOCKERHUB_TOKEN }} Logs in to Docker Hub using the secrets we created earlier.\n- name: Build and push to Docker Hub uses: docker/build-push-action@v2 with: push: true tags: retgits/myapp:1.0.0 Builds the container and pushes it to Docker Hub as retgits/myapp:1.0.0. In practice, you\u0026rsquo;d want to parameterize this with the git tag, commit SHA, or something else to uniquely identify each image.\n- name: Deploy to Akka Serverless uses: retgits/akkasls-action@v1 with: cmd: \u0026#34;services deploy myapp retgits/myapp:1.0.0\u0026#34; env: token: ${{ secrets.TOKEN }} project: ${{ secrets.PROJECT }} The final step deploys the container image to Akka Serverless using the other two secrets and the same tag from the previous step.\nWhat\u0026rsquo;s next? # That\u0026rsquo;s the whole setup. Once it\u0026rsquo;s in place, every push to main builds a container and deploys it to Akka Serverless automatically. Set it up once and forget about it. Let me know what you\u0026rsquo;d like to see next!\nCover photo by Gerd Altmann from Pixabay\n","date":"June 1, 2021","externalUrl":null,"permalink":"/2021/06/how-to-set-up-continuous-integration-and-delivery-with-github-actions-and-akka-serverless/","section":"Blog","summary":"CI/CD is one of those things that pays for itself almost immediately. In serverless especially, where the whole point is to focus on code and let the platform handle the rest, automating your deployment pipeline is a no-brainer. 
It lets developers focus on code and lets the business ship quality software faster. So how does that work with Akka Serverless?\n","title":"How To Set Up Continuous Integration and Delivery With Github Actions and Akka Serverless","type":"blog"},{"content":"As developers, we all want to be more productive. Serverless helps you do just that, by letting you focus on the business logic while shifting operations somewhere else. As more companies discover this emerging technology, we also discover drawbacks like state management. In this session, I focused on what serverless is, how it helps developers, what potential drawbacks exist, and how we can add state management into serverless.\nSlides # Video # ","date":"December 9, 2020","externalUrl":null,"permalink":"/2020/12/thinking-stateful-serverless-@-micro.sphere.it/","section":"Blog","summary":"As developers, we all want to be more productive. Serverless helps you do just that, by letting you focus on the business logic while shifting operations somewhere else. As more companies discover this emerging technology, we also discover drawbacks like state management. In this session, I focused on what serverless is, how it helps developers, what potential drawbacks exist, and how we can add state management into serverless.\n","title":"Thinking Stateful Serverless @ Micro.Sphere.IT","type":"blog"},{"content":"As developers, we all want to be more productive. Knative, a Kubernetes-based platform to deploy and manage modern serverless workloads, helps to do just that. The idea behind Knative is to abstract away the complexity of building apps on top of Kubernetes as much as possible, and Tekton is a powerful and flexible open-source CI/CD tool. How can you bring those two together on your local machine to try a few things out or even develop your apps? 
During this talk, we looked at setting up a KinD cluster, bootstrapping Knative and Tekton, and deploying an app!\nTalk materials # The tools I used:\nKinD Knative Tekton Octant The KinD configuration I used to create a cluster\nkind: Cluster apiVersion: kind.x-k8s.io/v1alpha4 nodes: - role: control-plane extraPortMappings: - containerPort: 31080 hostPort: 80","date":"October 15, 2020","externalUrl":null,"permalink":"/2020/10/test-driving-event-driven-apps-on-kubernetes/","section":"Blog","summary":"As developers, we all want to be more productive. Knative, a Kubernetes-based platform to deploy and manage modern serverless workloads, helps to do just that. The idea behind Knative is to abstract away the complexity of building apps on top of Kubernetes as much as possible, and Tekton is a powerful and flexible open-source CI/CD tool. How can you bring those two together on your local machine to try a few things out or even develop your apps? During this talk, we looked at setting up a KinD cluster, bootstrapping Knative and Tekton, and deploying an app!\n","title":"Test-driving Event-Driven Apps on Kubernetes","type":"blog"},{"content":"As developers, we all want to be more productive. Knative, a Kubernetes-based platform to deploy and manage modern serverless workloads, helps to do just that. The idea behind Knative is to abstract away the complexity of building apps on top of Kubernetes as much as possible. How can you get Knative running on your local machine to try a few things out or even develop your apps? In this session, we\u0026rsquo;ll look at setting up a Kubernetes cluster, installing Knative, and building an app.\n","date":"August 13, 2020","externalUrl":null,"permalink":"/2020/08/deploying-your-first-app-on-the-kubernetes-based-knative-platform/","section":"Blog","summary":"As developers, we all want to be more productive. Knative, a Kubernetes-based platform to deploy and manage modern serverless workloads, helps to do just that. 
The idea behind Knative is to abstract away the complexity of building apps on top of Kubernetes as much as possible. How can you get Knative running on your local machine to try a few things out or even develop your apps? In this session, we’ll look at setting up a Kubernetes cluster, installing Knative, and building an app.\n","title":"Deploying your first app on the Kubernetes based Knative platform","type":"blog"},{"content":"With everything going on in DevOps, I think we can safely say that building pipelines is the way to deploy your applications to production. But knowing what you deploy to production and whether it is actually okay needs more data, like security checks, performance checks, and budget checks. We\u0026rsquo;ve come up with a process for that, which we call Continuous Verification: \u0026ldquo;A process of querying external systems and using information from the response to make decisions to improve the development and deployment process.\u0026rdquo; In this session, we\u0026rsquo;ll look at extending an existing CI/CD pipeline with checks for security, performance, and cost to make a decision on whether we want to deploy our app or not.\nThe talk # At VMware we define Continuous Verification as:\n\u0026ldquo;A process of querying external systems and using information from the response to make decisions to improve the development and deployment process.\u0026rdquo;\nContinuous Verification is an extension to the development and deployment processes companies already have. It focuses on optimizing both the development and deployment experience by looking at security, performance, and cost. 
At most companies, some of these steps are done manually or scripted, but they\u0026rsquo;re rarely part of the actual deployment pipeline.\nAnd that is exactly how we can make sure that we build software better, faster, and more secure!\nSlides # Talk materials # Continuous Verification: The Missing Link to Fully Automate Your Pipeline Prowler: AWS Security Best Practices Assessment, Auditing, Hardening and Forensics Readiness Tool VMware Secure State ACME Serverless Fitness Shop - Payment Service Tanzu Observability powered by Wavefront The ACME Fitness Shop Gotling Snyk.io DevOps Pipeline # ## Set the default image for the CI workflow image: docker:19.03.8 ## Global variables available to the workflow variables: ## The host for the docker registry, set to docker:2375 to work with DinD DOCKER_HOST: tcp://docker:2375 ## Skip verification of TLS certificates for DinD DOCKER_TLS_CERTDIR: \u0026#34;\u0026#34; ## Specify which GitLab templates should be included include: template: Container-Scanning.gitlab-ci.yml ## Specify the stages that exist in the template and the order in which they need to run stages: - scan_code - build - container_scanning - governance - deploy_staging - performance - deploy_production ## Stage scan_code performs a vulnerability analysis of the code using Snyk.io scan_code: stage: scan_code image: golang:1.14 script: ## Download the latest version of the Snyk CLI for Linux - curl -o /bin/snyk -L https://github.com/snyk/snyk/releases/latest/download/snyk-linux - chmod +x /bin/snyk ## Authenticate using a Snyk API token - snyk auth $SNYK_TOKEN ## Run snyk to test for vulnerabilities in the dependencies - snyk test ## Build the container tagged with the commit revision for which project is built build: stage: build image: docker:19.03.8 services: - docker:19.03.8-dind variables: DOCKER_HOST: tcp://docker:2375/ DOCKER_DRIVER: overlay2 before_script: - docker info - docker login -u \u0026#34;$DOCKER_USER\u0026#34; -p 
\u0026#34;$DOCKER_PASSWORD\u0026#34; script: - docker info - docker build --file /builds/retgits/test/cmd/cloudrun-payment-http/Dockerfile . -t $DOCKER_USER/payment:$CI_BUILD_REF - docker push $DOCKER_USER/payment:$CI_BUILD_REF ## Scan containers built in this job container_scanning: stage: container_scanning ## Validate whether the project is still within budget budget: stage: governance image: vtimd/alpine-python-kubectl script: - chmod +x ./governance/budget.py - ./governance/budget.py $GITLAB_TOKEN - if [ $OVERAGE = \u0026#34;OVER\u0026#34; ]; then exit 1 ; else echo \u0026#34;Within Budget. Continuing!\u0026#34;; fi ## Validate whether the project follows the best practices set by the security team security: stage: governance image: vtimd/alpine-python-kubectl script: - chmod +x ./governance/security.py - ./governance/security.py - if [ $VSS_VIOLATION_FOUND = \u0026#34;True\u0026#34; ]; then exit 1 ; else echo \u0026#34;Violation Check Passed. Continuing!\u0026#34;; fi ## Deploy the service to staging deploy_staging: stage: deploy_staging image: google/cloud-sdk:alpine script: # Authenticate using the service account - echo $GCLOUD_SERVICE_KEY \u0026gt; ${HOME}/gcloud-service-key.json - gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json - gcloud config set project $GCP_PROJECT_ID # Deploy - gcloud run deploy payment --namespace=default --image=retgits/payment:6cc4ac945f98f7e2c4770779ff13431e399b9ea6 --platform=gke --cluster=$CLUSTER --cluster-location=$CLUSTER_LOCATION --connectivity=external --set-env-vars=SENTRY_DSN=$SENTRY_DSN,VERSION=$VERSION,STAGE=dev,WAVEFRONT_TOKEN=$WAVEFRONT_TOKEN,WAVEFRONT_URL=$WAVEFRONT_URL,MONGO_USERNAME=$MONGO_USERNAME,MONGO_PASSWORD=$MONGO_PASSWORD,MONGO_HOSTNAME=$MONGO_HOSTNAME ## Start traffic generation traffic: stage: performance image: alpine:latest script: ## Download the latest version of Gotling - apk add curl - curl -o /bin/gotling -L 
https://github.com/retgits/gotling/releases/download/v0.3-alpha/gotling - chmod +x /bin/gotling ## Run performance test - gotling governance/trafficgen.yaml ## Check performance against Wavefront perf_stats: stage: performance image: name: retgits/wavefront-pod-inspector:serverless entrypoint: [\u0026#34;\u0026#34;] script: - /bin/entrypoint.sh - if [ $abc = \u0026#34;failed\u0026#34; ]; then echo \u0026#34;Alert\u0026#34; \u0026amp;\u0026amp; exit 1 ; else echo \u0026#34;Within range. Continuing!\u0026#34;; fi ## Deploy the service to production deploy_production: stage: deploy_production image: google/cloud-sdk:alpine script: # Authenticate using the service account - echo $GCLOUD_SERVICE_KEY \u0026gt; ${HOME}/gcloud-service-key.json - gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json - gcloud config set project $GCP_PROJECT_ID # Deploy - gcloud run deploy payment --namespace=default --image=retgits/payment:6cc4ac945f98f7e2c4770779ff13431e399b9ea6 --platform=gke --cluster=$CLUSTER --cluster-location=$CLUSTER_LOCATION --connectivity=external --set-env-vars=SENTRY_DSN=$SENTRY_DSN,VERSION=$VERSION,STAGE=prod,WAVEFRONT_TOKEN=$WAVEFRONT_TOKEN,WAVEFRONT_URL=$WAVEFRONT_URL,MONGO_USERNAME=$MONGO_USERNAME,MONGO_PASSWORD=$MONGO_PASSWORD,MONGO_HOSTNAME=$MONGO_HOSTNAME","date":"July 2, 2020","externalUrl":null,"permalink":"/2020/07/data-driven-decisions-in-devops-@-mydevsecops/","section":"Blog","summary":"With everything going on in DevOps, I think we can safely say that building pipelines is the way to deploy your applications to production. But knowing what you deploy to production and whether it is actually okay needs more data, like security checks, performance checks, and budget checks. 
We’ve come up with a process for that, which we call Continuous Verification “A process of querying external systems and using information from the response to make decisions to improve the development and deployment process.” In this session, we’ll look at extending an existing CI/CD pipeline with checks for security, performance, and cost to make a decision on whether we want to deploy our app or not.\n","title":"Data Driven Decisions in DevOps @ MyDevSecOps","type":"blog"},{"content":"In a nutshell, Continuous Verification is about putting as many automated checks as possible into your CI/CD pipelines. These checks call out to external systems to validate performance, security, and cost — without asking your engineers to do that manually. The same systems that decide whether a deployment goes to production can also help engineers understand where the bottlenecks are. More checks in the pipeline means fewer manual tasks, less overhead, and better decisions about what actually ships. And yeah, maybe a bit more time at the beach.\nThe talk # At VMware we defined Continuous Verification as:\n\u0026ldquo;A process of querying external systems and using information from the response to make decisions to improve the development and deployment process.\u0026rdquo;\nContinuous Verification is an extension to the development and deployment processes companies already have. It focuses on optimizing both the development and deployment experience by looking at security, performance, and cost. 
At most companies, some of these steps are done manually or scripted, but they\u0026rsquo;re rarely part of the actual deployment pipeline.\nSlides # Talk materials # Continuous Verification: The Missing Link to Fully Automate Your Pipeline Prowler: AWS Security Best Practices Assessment, Auditing, Hardening and Forensics Readiness Tool ACME Serverless Fitness Shop - Payment Service Tanzu Observability powered by Wavefront The ACME Fitness Shop Gotling DevOps Pipeline # ## Set the default image for the CI workflow image: docker:19.03.8 ## Global variables available to the workflow variables: ## The host for the docker registry, set to docker:2375 to work with DinD DOCKER_HOST: tcp://docker:2375 ## Skip verification of TLS certificates for DinD DOCKER_TLS_CERTDIR: \u0026#34;\u0026#34; ## Specify which GitLab templates should be included include: template: Container-Scanning.gitlab-ci.yml ## Specify the stages that exist in the template and the order in which they need to run stages: - scan_code - build - container_scanning - governance - deploy_staging - performance - deploy_production ## Stage scan_code performs a vulnerability analysis of the code using Snyk.io scan_code: stage: scan_code image: golang:1.14 script: ## Download the latest version of the Snyk CLI for Linux - curl -o /bin/snyk -L https://github.com/snyk/snyk/releases/latest/download/snyk-linux - chmod +x /bin/snyk ## Authenticate using a Snyk API token - snyk auth $SNYK_TOKEN ## Run snyk to test for vulnerabilities in the dependencies - snyk test ## Build the container tagged with the commit revision for which project is built build: stage: build image: docker:19.03.8 services: - docker:19.03.8-dind variables: DOCKER_HOST: tcp://docker:2375/ DOCKER_DRIVER: overlay2 before_script: - docker info - docker login -u \u0026#34;$DOCKER_USER\u0026#34; -p \u0026#34;$DOCKER_PASSWORD\u0026#34; script: - docker info - docker build --file /builds/retgits/test/cmd/cloudrun-payment-http/Dockerfile . 
-t $DOCKER_USER/payment:$CI_BUILD_REF - docker push $DOCKER_USER/payment:$CI_BUILD_REF ## Scan containers built in this job container_scanning: stage: container_scanning ## Validate whether the project is still within budget budget: stage: governance image: vtimd/alpine-python-kubectl script: - chmod +x ./governance/budget.py - ./governance/budget.py $GITLAB_TOKEN - if [ $OVERAGE = \u0026#34;OVER\u0026#34; ]; then exit 1 ; else echo \u0026#34;Within Budget. Continuing!\u0026#34;; fi ## Validate whether the project follows the best practices set by the security team security: stage: governance image: vtimd/alpine-python-kubectl script: - chmod +x ./governance/security.py - ./governance/security.py - if [ $VSS_VIOLATION_FOUND = \u0026#34;True\u0026#34; ]; then exit 1 ; else echo \u0026#34;Violation Check Passed. Continuing!\u0026#34;; fi ## Deploy the service to staging deploy_staging: stage: deploy_staging image: google/cloud-sdk:alpine script: # Authenticate using the service account - echo $GCLOUD_SERVICE_KEY \u0026gt; ${HOME}/gcloud-service-key.json - gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json - gcloud config set project $GCP_PROJECT_ID # Deploy - gcloud run deploy payment --namespace=default --image=retgits/payment:6cc4ac945f98f7e2c4770779ff13431e399b9ea6 --platform=gke --cluster=$CLUSTER --cluster-location=$CLUSTER_LOCATION --connectivity=external --set-env-vars=SENTRY_DSN=$SENTRY_DSN,VERSION=$VERSION,STAGE=dev,WAVEFRONT_TOKEN=$WAVEFRONT_TOKEN,WAVEFRONT_URL=$WAVEFRONT_URL,MONGO_USERNAME=$MONGO_USERNAME,MONGO_PASSWORD=$MONGO_PASSWORD,MONGO_HOSTNAME=$MONGO_HOSTNAME ## Start traffic generation traffic: stage: performance image: alpine:latest script: ## Download the latest version of Gotling - apk add curl - curl -o /bin/gotling -L https://github.com/retgits/gotling/releases/download/v0.3-alpha/gotling - chmod +x /bin/gotling ## Run performance test - gotling governance/trafficgen.yaml ## Check performance against Wavefront 
perf_stats: stage: performance image: name: retgits/wavefront-pod-inspector:serverless entrypoint: [\u0026#34;\u0026#34;] script: - /bin/entrypoint.sh - if [ $abc = \u0026#34;failed\u0026#34; ]; then echo \u0026#34;Alert\u0026#34; \u0026amp;\u0026amp; exit 1 ; else echo \u0026#34;Within range. Continuing!\u0026#34;; fi ## Deploy the service to production deploy_production: stage: deploy_production image: google/cloud-sdk:alpine script: # Authenticate using the service account - echo $GCLOUD_SERVICE_KEY \u0026gt; ${HOME}/gcloud-service-key.json - gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json - gcloud config set project $GCP_PROJECT_ID # Deploy - gcloud run deploy payment --namespace=default --image=retgits/payment:6cc4ac945f98f7e2c4770779ff13431e399b9ea6 --platform=gke --cluster=$CLUSTER --cluster-location=$CLUSTER_LOCATION --connectivity=external --set-env-vars=SENTRY_DSN=$SENTRY_DSN,VERSION=$VERSION,STAGE=prod,WAVEFRONT_TOKEN=$WAVEFRONT_TOKEN,WAVEFRONT_URL=$WAVEFRONT_URL,MONGO_USERNAME=$MONGO_USERNAME,MONGO_PASSWORD=$MONGO_PASSWORD,MONGO_HOSTNAME=$MONGO_HOSTNAME","date":"June 25, 2020","externalUrl":null,"permalink":"/2020/06/automated-devops-for-the-serverless-fitness-shop-knowing-what-and-why-you-go-to-production-@-ns1-ins1ghts-2020/","section":"Blog","summary":"In a nutshell, Continuous Verification is about putting as many automated checks as possible into your CI/CD pipelines. These checks call out to external systems to validate performance, security, and cost — without asking your engineers to do that manually. The same systems that decide whether a deployment goes to production can also help engineers understand where the bottlenecks are. More checks in the pipeline means fewer manual tasks, less overhead, and better decisions about what actually ships. 
And yeah, maybe a bit more time at the beach.\n","title":"Automated DevOps for the Serverless Fitness Shop","type":"blog"},{"content":"Whether you\u0026rsquo;re a Product Manager or Developer Advocate, once you start presenting you think every talk has to be unique\u0026hellip; spoiler alert, it doesn\u0026rsquo;t have to be.\nThe talk # Getting started with presenting is tough, whether it\u0026rsquo;s virtual or in person. Whether you\u0026rsquo;re a Product Manager or Developer Advocate, once you start you think every presentation has to be unique. My take is that they don\u0026rsquo;t. You can absolutely reuse talks at multiple conferences (even virtual ones). Your presentations are always evolving. The key is making them relevant to the audience you\u0026rsquo;re speaking to.\nSlides # ","date":"June 10, 2020","externalUrl":null,"permalink":"/2020/06/every-talk-has-to-be-unique-right/","section":"Blog","summary":"Whether you’re a Product Manager or Developer Advocate, once you start presenting you think every talk has to be unique… spoiler alert, it doesn’t have to be.\n","title":"Every Talk Has To Be Unique, Right?","type":"blog"},{"content":"Knative builds on Kubernetes to abstract away complexity for developers, and enables them to focus on delivering value to their business. The complex (and sometimes boring) parts of building apps to run on Kubernetes are managed by Knative. In this post, we will focus on setting up a lightweight environment to help you to develop modern apps faster using Knative.\nStep 1: Setting up your Kubernetes deployment using KinD # There are many options for creating a Kubernetes cluster on your local machine. However, since we are running containers in the Kubernetes cluster anyway, let’s also use containers for the cluster itself. 
Kubernetes IN Docker, or KinD for short, enables developers to spin up a Kubernetes cluster where each cluster node is a container.\nYou can install KinD on your machine by running the following commands:\ncurl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.8.1/kind-$(uname)-amd64 chmod +x ./kind mv ./kind /some-dir-in-your-PATH/kind\nNext, create a Kubernetes cluster using KinD, and expose the ports the ingress gateway listens on to the host. To do this, you can pass in a file with the following cluster configuration parameters:\ncat \u0026gt; clusterconfig.yaml \u0026lt;\u0026lt;EOF kind: Cluster apiVersion: kind.sigs.k8s.io/v1alpha4 nodes: - role: control-plane extraPortMappings: ## expose port 31080 of the node to port 80 on the host - containerPort: 31080 hostPort: 80 ## expose port 31443 of the node to port 443 on the host - containerPort: 31443 hostPort: 443 EOF\nThe values for the container ports are randomly chosen, and are used later on to configure a NodePort service with these values. The values for the host ports are where you\u0026rsquo;ll send cURL requests to as you deploy applications to the cluster.\nAfter the cluster configuration file has been created, you can create a cluster. Your kubeconfig will automatically be updated, and the default cluster will be set to your new cluster.\n$ kind create cluster --name knative --config clusterconfig.yaml\nCreating cluster \u0026quot;knative\u0026quot; ... ✓ Ensuring node image (kindest/node:v1.18.2) 🖼 ✓ Preparing nodes 📦 ✓ Writing configuration 📜 ✓ Starting control-plane 🕹️ ✓ Installing CNI 🔌 ✓ Installing StorageClass 💾 Set kubectl context to \u0026quot;kind-knative\u0026quot; You can now use your cluster with: kubectl cluster-info --context kind-knative Have a nice day! 👋\nStep 2: Install Knative Serving # Now that the cluster is running, you can add Knative components using the Knative CRDs. 
At the time of writing, the latest available release is knative-v1.0.0.\n$ kubectl apply --filename https://github.com/knative/serving/releases/download/knative-v1.0.0/serving-crds.yaml\ncustomresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev created customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev created customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev created customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev created customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev created\nWith the CRDs in place, the core components are next to be installed on your cluster. 
For brevity, part of the output below has been omitted.\n$ kubectl apply --filename https://github.com/knative/serving/releases/download/knative-v1.0.0/serving-core.yaml\nnamespace/knative-serving created serviceaccount/controller created clusterrole.rbac.authorization.k8s.io/knative-serving-admin created clusterrolebinding.rbac.authorization.k8s.io/knative-serving-controller-admin created image.caching.internal.knative.dev/queue-proxy created configmap/config-autoscaler created configmap/config-defaults created configmap/config-deployment created configmap/config-domain created configmap/config-gc created configmap/config-leader-election created configmap/config-logging created configmap/config-network created configmap/config-observability created configmap/config-tracing created horizontalpodautoscaler.autoscaling/activator created deployment.apps/activator created service/activator-service created deployment.apps/autoscaler created service/autoscaler created deployment.apps/controller created service/controller created deployment.apps/webhook created service/webhook created clusterrole.rbac.authorization.k8s.io/knative-serving-addressable-resolver created clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-admin created clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-edit created clusterrole.rbac.authorization.k8s.io/knative-serving-namespaced-view created clusterrole.rbac.authorization.k8s.io/knative-serving-core created clusterrole.rbac.authorization.k8s.io/knative-serving-podspecable-binding created validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.serving.knative.dev created mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.serving.knative.dev created validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.serving.knative.dev created\nStep 3: Set up networking using Kourier # Next, choose a networking layer. This example uses Kourier. 
Kourier is the option with the lowest resource requirements, and connects to Envoy and the Knative Ingress CRDs directly.\nTo install Kourier and make it available as a service leveraging the node ports, you’ll need to download the YAML file first and make a few changes.\ncurl -Lo kourier.yaml https://github.com/knative/net-kourier/releases/download/knative-v1.0.0/kourier.yaml\nBy default, the Kourier service is set to be of type LoadBalancer. On local machines, this type doesn’t work, so you’ll have to change the type to NodePort and add nodePort elements to the two listed ports.\nThe complete Service portion (which runs from line 75 to line 94 in the document) should be replaced with:\napiVersion: v1 kind: Service metadata: name: kourier namespace: kourier-system labels: networking.knative.dev/ingress-provider: kourier spec: ports: - name: http2 port: 80 protocol: TCP targetPort: 8080 nodePort: 31080 - name: https port: 443 protocol: TCP targetPort: 8443 nodePort: 31443 selector: app: 3scale-kourier-gateway type: NodePort\nTo install the Kourier controller, enter the command:\n$ kubectl apply --filename kourier.yaml\nnamespace/kourier-system created configmap/config-logging created configmap/config-observability created configmap/config-leader-election created service/kourier created deployment.apps/3scale-kourier-gateway created deployment.apps/3scale-kourier-control created clusterrole.rbac.authorization.k8s.io/3scale-kourier created serviceaccount/3scale-kourier created clusterrolebinding.rbac.authorization.k8s.io/3scale-kourier created service/kourier-internal created service/kourier-control created configmap/kourier-bootstrap created\nNow you will need to set Kourier as the default networking layer for Knative Serving. 
You can do this by entering the command:\n$ kubectl patch configmap/config-network \\ --namespace knative-serving \\ --type merge \\ --patch '{\u0026quot;data\u0026quot;:{\u0026quot;ingress-class\u0026quot;:\u0026quot;kourier.ingress.networking.knative.dev\u0026quot;}}'\nIf you want to validate that the patch command was successful, run the command:\n$ kubectl describe configmap/config-network --namespace knative-serving\n... (abbreviated for readability) ingress-class: ---- kourier.ingress.networking.knative.dev ...\nTo get the same experience that you would have when using a cluster that has DNS names set up, you can add a “magic” DNS provider.\nsslip.io provides a wildcard DNS setup that will automatically resolve to the IP address you put in front of sslip.io.\nTo patch the domain configuration for Knative Serving using sslip.io, enter the command:\n$ kubectl patch configmap/config-domain \\ --namespace knative-serving \\ --type merge \\ --patch '{\u0026quot;data\u0026quot;:{\u0026quot;127.0.0.1.sslip.io\u0026quot;:\u0026quot;\u0026quot;}}'\nIf you want to validate that the patch command was successful, run the command:\n$ kubectl describe configmap/config-domain --namespace knative-serving\n... (abbreviated for readability) Data ==== 127.0.0.1.sslip.io: ---- ...\nBy now, all pods in the knative-serving and kourier-system namespaces should be running. 
You can check this by entering the commands:\n$ kubectl get pods --namespace knative-serving\nNAME READY STATUS RESTARTS AGE activator-6d9f95b7f8-w6m68 1/1 Running 0 12m autoscaler-597fd8d69d-gmh9s 1/1 Running 0 12m controller-7479cc984d-492fm 1/1 Running 0 12m webhook-bf465f954-4c7wq 1/1 Running 0 12m\n$ kubectl get pods --namespace kourier-system\nNAME READY STATUS RESTARTS AGE 3scale-kourier-control-699cbc695-ztswk 1/1 Running 0 10m 3scale-kourier-gateway-7df98bb5db-5bw79 1/1 Running 0 10m\nTo validate your cluster gateway is in the right state and using the right ports, enter the command:\n$ kubectl --namespace kourier-system get service kourier\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kourier NodePort 10.98.179.178 \u0026lt;none\u0026gt; 80:31080/TCP,443:31443/TCP 87m\n$ docker ps -a\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d53c275d7461 kindest/node:v1.18.2 \u0026quot;/usr/local/bin/entr…\u0026quot; 4 hours ago Up 4 hours 127.0.0.1:49350-\u0026gt;6443/tcp, 0.0.0.0:80-\u0026gt;31080/tcp, 0.0.0.0:443-\u0026gt;31443/tcp knative-control-plane\nThe ports, and how they’re tied to the host, should be the same as you’ve defined in the clusterconfig file. For example, port 31080 in the cluster is exposed as port 80.\nStep 4: Deploying your first app # Now that the cluster, Knative, and the networking components are ready, you can deploy an app. The straightforward Go sample app that already exists is an excellent example to deploy. 
The first step is to create a yaml file with the hello world service definition:\ncat \u0026gt; service.yaml \u0026lt;\u0026lt;EOF apiVersion: serving.knative.dev/v1 # Current version of Knative kind: Service metadata: name: helloworld-go # The name of the app namespace: default # The namespace the app will use spec: template: spec: containers: - image: ghcr.io/knative/helloworld-go:latest # The URL to the image of the app env: - name: TARGET # The environment variable printed out by the sample app value: \u0026quot;Hello Knative Serving is up and running with Kourier!!\u0026quot; EOF\nTo deploy your app to Knative, enter the command:\n$ kubectl apply --filename service.yaml\nTo validate your deployment, you can use kubectl get ksvc. NOTE: While your cluster is configuring the components that make up the service, the output of the kubectl get ksvc command will show that the revision is missing. The READY status eventually changes to True.\n$ kubectl get ksvc\nNAME URL LATESTCREATED LATESTREADY READY REASON helloworld-go http://helloworld-go.default.127.0.0.1.sslip.io helloworld-go-fqqs6 Unknown RevisionMissing\nNAME URL LATESTCREATED LATESTREADY READY REASON helloworld-go http://helloworld-go.default.127.0.0.1.sslip.io helloworld-go-fqqs6 helloworld-go-fqqs6 True\nThe final step is to test your application by checking that the code returns what you expect. 
You can do this by sending a cURL request to the URL listed above.\nBecause this example mapped port 80 of the host to be forwarded to the cluster and set up the DNS, you can use the exact URL.\n$ curl -v http://helloworld-go.default.127.0.0.1.sslip.io\nHello Knative Serving is up and running with Kourier!!\nStep 5: Cleaning up # You can stop your cluster and remove all the resources you’ve created by entering the command:\nkind delete cluster --name knative\n","date":"June 3, 2020","externalUrl":null,"permalink":"/2020/06/how-to-set-up-a-local-knative-environment-with-kind-and-without-dns-headaches/","section":"Blog","summary":"Knative builds on Kubernetes to abstract away complexity for developers, and enables them to focus on delivering value to their business. The complex (and sometimes boring) parts of building apps to run on Kubernetes are managed by Knative. In this post, we will focus on setting up a lightweight environment to help you to develop modern apps faster using Knative.\n","title":"How to set up a local Knative environment with KinD and without DNS headaches","type":"blog"},{"content":"At VMware we define Continuous Verification as:\n\u0026ldquo;A process of querying external systems and using information from the response to make decisions to improve the development and deployment process.\u0026rdquo;\nAt #OSSDay, I got a chance to talk about what that means for serverless apps and how you can build it into your existing pipelines using tools like GitLab, CloudHealth, Wavefront and Gotling.\nThe talk # Continuous Verification is an extension to the development and deployment processes companies already have. It focuses on optimizing both the development and deployment experience by looking at security, performance, and cost. At most companies, some of these steps are done manually or scripted, but they\u0026rsquo;re rarely part of the actual deployment pipeline. 
In this session, we look at extending an existing CI/CD pipeline with checks for security, performance, and cost to make a decision on whether to deploy or not.\nSlides # Talk materials # Continuous Verification: The Missing Link to Fully Automate Your Pipeline Prowler: AWS Security Best Practices Assessment, Auditing, Hardening and Forensics Readiness Tool ACME Serverless Fitness Shop - Payment Service Tanzu Observability powered by Wavefront The ACME Fitness Shop Gotling ","date":"April 30, 2020","externalUrl":null,"permalink":"/2020/04/continuous-verification-in-a-serverless-world-@-open-source-community-day/","section":"Blog","summary":"At VMware we define Continuous Verification as:\n“A process of querying external systems and using information from the response to make decisions to improve the development and deployment process.”\nAt #OSSDay, I got a chance to talk about what that means for serverless apps and how you can build it into your existing pipelines using tools like GitLab, CloudHealth, Wavefront and Gotling.\n","title":"Continuous Verification In A Serverless World @ Open Source Community Day","type":"blog"},{"content":"","date":"April 15, 2020","externalUrl":null,"permalink":"/series/building-a-serverless-fitness-shop/","section":"Series","summary":"","title":"Building a Serverless Fitness Shop","type":"series"},{"content":"If you\u0026rsquo;ve read the blog posts on CloudJourney.io before, you\u0026rsquo;ve likely come across the term \u0026ldquo;Continuous Verification\u0026rdquo;. If not, no worries. There\u0026rsquo;s a solid article from Dan Illson and Bill Shetti on The New Stack that explains it in detail. The short version: Continuous Verification means putting as many automated checks as possible into your CI/CD pipelines. 
More checks, fewer manual tasks, more data to smooth out and improve your development and deployment process.\nSo far we covered the tools and technologies, Continuous Integration, and Infrastructure as Code aspects of the ACME Serverless Fitness Shop. This post is about observability.\nWhat is the ACME Serverless Fitness Shop # Quick recap: the ACME Serverless Fitness Shop combines two of my favorite things — serverless and fitness. It has seven distinct domains, each with one or more serverless functions. Some are event-driven, others have an HTTP API, and all of them are written in Go.\nWhat is Observability # Cloud-native apps have fundamentally changed how we design, build, and run systems. They need to adapt to change rapidly, be resilient, and work at scale. Whether you\u0026rsquo;re running microservices on Kubernetes or as serverless functions, some companies have hundreds of services in production with thousands of deployments per day. That growing complexity makes figuring out how and where things go wrong one of the biggest challenges.\nWikipedia describes observability as \u0026ldquo;a measure of how well internal states of a system can be inferred from knowledge of its external outputs\u0026rdquo;.\nIn distributed systems, observability typically has three pillars: logs, metrics, and traces.\nLogs are the (usually immutable) records an app sends somewhere to be stored. The ACME Serverless Fitness Shop uses AWS CloudWatch Logs for this. CloudWatch Logs gives you a single place to find logs from all components — API Gateway messages, Lambda function output, everything. The data is automatically indexed and queryable, which makes finding that needle in the haystack a lot easier.\nMetrics are the numerical values you measure. There are different types: current values (like CPU load), counters (like concurrent executions), and so on. 
Within VMware Tanzu Observability by Wavefront, you can track all of these.\nTraces represent the events flowing through your system from service to service. An end-to-end trace starts at the first entry point (usually the UI), tracks every service it touches, and records how long each call took. The article \u0026ldquo;How to Use Tracing to Analyze Your Applications\u0026rdquo; gives a good overview of using tracing to find outliers, errors, and traffic patterns.\nAdding VMware Tanzu Observability by Wavefront # Adding Wavefront observability to the ACME Serverless Fitness Shop is straightforward:\npackage main import ( \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; \u0026#34;github.com/aws/aws-lambda-go/lambda\u0026#34; wflambda \u0026#34;github.com/retgits/wavefront-lambda-go\u0026#34; // Import this library ) var wfAgent = wflambda.NewWavefrontAgent(\u0026amp;wflambda.WavefrontConfig{}) func handler() (string, error){ return \u0026#34;Hello World\u0026#34;, nil } func main() { // Wrap the handler with wfAgent.WrapHandler() lambda.Start(wfAgent.WrapHandler(handler)) } I used a slightly modified version of this Go code (also available on GitHub) that also reports memory usage. Beyond the code change, you need two environment variables in your deployment:\nWAVEFRONT_URL: The URL of your Wavefront instance (like https://myinstance.wavefront.com). WAVEFRONT_API_TOKEN: Your Wavefront API token (see the docs for how to create an API token). The Pulumi deployment adds these environment variables to the function arguments. In the Payment service, they\u0026rsquo;re created on lines 156 and 157 and added to the function on line 171. Once deployed, data flows into Wavefront on every function execution.\nGenerating some load # With the functions sending data to Wavefront, the next step is generating some traffic. There are plenty of load testing tools out there — pick whatever works for you. I went with Gotling, a Go-based variant of Gatling. 
The config below hits two functions (get all products and get product), picking a random product ID from the first call for the second. Using random data helps limit caching effects.\n--- iterations: 20 users: 2 rampup: 2 actions: - http: title: Get all products method: GET url: https://\u0026lt;api id\u0026gt;.execute-api.us-west-2.amazonaws.com/Prod/products accept: json response: jsonpath: $.data[*].id variable: product index: random - sleep: duration: 3 - http: title: Get a single random product method: GET url: https://\u0026lt;api id\u0026gt;.execute-api.us-west-2.amazonaws.com/Prod/products/${product} accept: json Graphs # With metrics flowing into Wavefront, you can start observing how your system behaves. The Lambda function duration graph shows that the first executions take significantly longer — around 450 milliseconds. Those are cold starts. The rest of the invocations are well below the 100-millisecond billing threshold, so there\u0026rsquo;s no immediate need to optimize.\nMemory usage ranges between 32 and 36 MB. The functions have 256 MB available, so there\u0026rsquo;s plenty of headroom. Combined with the sub-100ms execution times, there\u0026rsquo;s no real reason to powertune these functions.\nLambda isn\u0026rsquo;t the only component worth monitoring. Wavefront can track all the infrastructure and app metrics that AWS and your apps emit — DynamoDB query result counts, capacity units consumed, SQS queue depth, oldest message age, and more.\nThis brings me to one of my favorite Wavefront features: alerts. Instead of staring at dashboards all day, you can set up alerts for specific conditions — too many messages in a queue, too many read capacity units consumed, etc. Teams can use that data alongside traces, logs, and error tracking to decide what needs attention.\nWhat\u0026rsquo;s next? # That wraps up the observability side of the ACME Serverless Fitness Shop. 
Let me know what you\u0026rsquo;d like to see next.\nCover image by ThisIsEngineering from Pexels\n","date":"April 15, 2020","externalUrl":null,"permalink":"/2020/04/building-a-serverless-fitness-shop-observability/","section":"Blog","summary":"If you’ve read the blog posts on CloudJourney.io before, you’ve likely come across the term “Continuous Verification”. If not, no worries. There’s a solid article from Dan Illson and Bill Shetti on The New Stack that explains it in detail. The short version: Continuous Verification means putting as many automated checks as possible into your CI/CD pipelines. More checks, fewer manual tasks, more data to smooth out and improve your development and deployment process.\nSo far we covered the tools and technologies, Continuous Integration, and Infrastructure as Code aspects of the ACME Serverless Fitness Shop. This post is about observability.\n","title":"Building a Serverless Fitness Shop - Observability","type":"blog"},{"content":"","date":"April 15, 2020","externalUrl":null,"permalink":"/series/","section":"Series","summary":"","title":"Series","type":"series"},{"content":"If you\u0026rsquo;ve read the blog posts on CloudJourney.io before, you\u0026rsquo;ve likely come across the term \u0026ldquo;Continuous Verification\u0026rdquo;. If not, no worries. There\u0026rsquo;s a solid article from Dan Illson and Bill Shetti on The New Stack that explains it in detail. The short version: Continuous Verification means putting as many automated checks as possible into your CI/CD pipelines. More checks, fewer manual tasks, more data to smooth out and improve your development and deployment process.\nIn part one we covered the tools and technologies and in part two we covered the Continuous Integration aspect of the ACME Serverless Fitness Shop. This post is about Infrastructure as Code.\nWhat is the ACME Serverless Fitness Shop # Quick recap: the ACME Serverless Fitness Shop combines two of my favorite things — serverless and fitness. 
It has seven distinct domains, each with one or more serverless functions. Some are event-driven, others have an HTTP API, and all of them are written in Go.\nInfrastructure as Code # The Wikipedia page for Infrastructure as Code describes it as \u0026ldquo;the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.\u0026rdquo;\nPut simply, Infrastructure as Code makes your infrastructure programmable. And yes, \u0026ldquo;serverless\u0026rdquo; means you shouldn\u0026rsquo;t worry about servers and VMs, but you still need to think about storage, API gateways, and databases. There are good reasons IaC is becoming the norm: faster provisioning (especially in the cloud), fewer human configuration errors (assuming the code is correct), easier multi-region deployments, and reduced risk when team members leave — the code stays behind.\nTools for Infrastructure as Code # As I mentioned in the first part of the series, IaC means moving infrastructure creation into the CI/CD pipeline as much as possible. There are plenty of options:\nTerraform: Solid tool, but you\u0026rsquo;re writing infrastructure in a different language (HCL) than the rest of your code. Serverless Framework: One of the first to simplify building and deploying serverless functions, but developers still have to orchestrate different parts of their apps. AWS CloudFormation and SAM: AWS-native, with useful SAM templates, but it\u0026rsquo;s another syntax to learn. Pulumi: An open-source IaC tool that works across clouds and lets you use real programming languages. What makes Pulumi different # I\u0026rsquo;m a developer (or developer advocate), which means I\u0026rsquo;m definitely not a YAML expert. The languages I enjoy are TypeScript and Go. When I think about those languages, I expect loops, variables, modules, and frameworks. 
Pulumi is the tool that lets me mix infrastructure with actual code. To create three similar IAM roles, I write a for loop instead of copying and pasting a statement three times. That matters for developer experience.\nSpeaking of developer experience — we expect syntax highlighting, IDE support, and strongly typed objects. Defining infrastructure with the same concepts we use for application code is what sets Pulumi apart.\nConfiguration # Each domain has a pulumi folder with the configuration and code needed to deploy its services to AWS. Pulumi uses a configuration file for setting variables:\nconfig: aws:region: us-west-2 ## The region you want to deploy to awsconfig:generic: sentrydsn: ## The DSN to connect to Sentry accountid: ## Your AWS Account ID awsconfig:tags: author: retgits ## The author, you... feature: acmeserverless ## The resources are part of a specific app (the ACME Serverless Fitness Shop) team: vcs ## The team you\u0026#39;re on version: 0.2.0 ## The version of the app To use this configuration in a Pulumi program, there are two Go structs that map the key/value pairs to strongly typed variables:\n// Tags are key-value pairs to apply to the resources created by this stack type Tags struct { // Author is the person who created the code, or performed the deployment Author pulumi.String // Feature is the project that this resource belongs to Feature pulumi.String // Team is the team that is responsible to manage this resource Team pulumi.String // Version is the version of the code for this resource Version pulumi.String } // GenericConfig contains the key-value pairs for the configuration of AWS in this stack type GenericConfig struct { // The AWS region used Region string // The DSN used to connect to Sentry SentryDSN string `json:\u0026#34;sentrydsn\u0026#34;` // The AWS AccountID to use AccountID string `json:\u0026#34;accountid\u0026#34;` } To populate these structs, Pulumi provides a RequireObject method that reads the configuration and 
fails the deployment if the expected YAML element isn\u0026rsquo;t found:\n// Get the region region, found := ctx.GetConfig(\u0026#34;aws:region\u0026#34;) if !found { return fmt.Errorf(\u0026#34;region not found\u0026#34;) } // Read the configuration data from Pulumi.\u0026lt;stack\u0026gt;.yaml conf := config.New(ctx, \u0026#34;awsconfig\u0026#34;) // Create a new Tags object with the data from the configuration var tags Tags conf.RequireObject(\u0026#34;tags\u0026#34;, \u0026amp;tags) // Create a new GenericConfig object with the data from the configuration var genericConfig GenericConfig conf.RequireObject(\u0026#34;generic\u0026#34;, \u0026amp;genericConfig) genericConfig.Region = region Building code # You could use Make to build the Go executable and zip file that Lambda needs. But since we\u0026rsquo;re already using Go, why not use Go for that too? I built a Go module that handles it. Four lines of code to create the executable and zip file. And because Pulumi mixes infrastructure with real code, you can add loops to build multiple functions or conditions to build selectively.\nfnFolder := path.Join(wd, \u0026#34;..\u0026#34;, \u0026#34;cmd\u0026#34;, \u0026#34;lambda-payment-sqs\u0026#34;) buildFactory := builder.NewFactory().WithFolder(fnFolder) buildFactory.MustBuild() buildFactory.MustZip() Finding resources # Not all resources your app needs are in the same stack. 
Things like SQS queues or DynamoDB tables might live in a completely different stack, but you still need access to them.\n// Lookup the SQS queues responseQueue, err := sqs.LookupQueue(ctx, \u0026amp;sqs.LookupQueueArgs{ Name: fmt.Sprintf(\u0026#34;%s-acmeserverless-sqs-payment-response\u0026#34;, ctx.Stack()), }) if err != nil { return err } requestQueue, err := sqs.LookupQueue(ctx, \u0026amp;sqs.LookupQueueArgs{ Name: fmt.Sprintf(\u0026#34;%s-acmeserverless-sqs-payment-request\u0026#34;, ctx.Stack()), }) if err != nil { return err } Here we\u0026rsquo;re looking up the two SQS queues used for payment requests and credit card validation responses. The queue names and ARNs are needed to configure IAM policies and event source mappings.\nCreating IAM policies # While I enjoy Pulumi\u0026rsquo;s Go SDK, there are areas where AWS SAM speeds things up. IAM policies are one of them. SAM lets you pick from a list of policy templates to scope Lambda permissions to the resources your app uses. To get something similar in Pulumi, I built a Go module that wraps those policy templates for use in any Go app.\n// Create a factory to get policies from iamFactory := sampolicies.NewFactory().WithAccountID(genericConfig.AccountID).WithPartition(\u0026#34;aws\u0026#34;).WithRegion(genericConfig.Region) // Add a policy document to allow the function to use SQS as event source iamFactory.AddSQSSendMessagePolicy(responseQueue.Name) iamFactory.AddSQSPollerPolicy(requestQueue.Name) policies, err := iamFactory.GetPolicyStatement() if err != nil { return err } _, err = iam.NewRolePolicy(ctx, \u0026#34;ACMEServerlessPaymentSQSPolicy\u0026#34;, \u0026amp;iam.RolePolicyArgs{ Name: pulumi.String(\u0026#34;ACMEServerlessPaymentSQSPolicy\u0026#34;), Role: role.Name, Policy: pulumi.String(policies), }) if err != nil { return err } These few lines of Go create an IAM policy that lets the Lambda function send messages to and receive messages from the two queues. 
The Go module saves me from writing a bunch of IAM policy statements by hand.\nDeploying functions # // Create the AWS Lambda function functionArgs := \u0026amp;lambda.FunctionArgs{ Description: pulumi.String(\u0026#34;A Lambda function to validate creditcard payments\u0026#34;), Runtime: pulumi.String(\u0026#34;go1.x\u0026#34;), Name: pulumi.String(fmt.Sprintf(\u0026#34;%s-lambda-payment\u0026#34;, ctx.Stack())), MemorySize: pulumi.Int(256), Timeout: pulumi.Int(10), Handler: pulumi.String(\u0026#34;lambda-payment-sqs\u0026#34;), Environment: environment, Code: pulumi.NewFileArchive(\u0026#34;../cmd/lambda-payment-sqs/lambda-payment-sqs.zip\u0026#34;), Role: role.Arn, Tags: pulumi.Map(tagMap), } function, err := lambda.NewFunction(ctx, fmt.Sprintf(\u0026#34;%s-lambda-payment\u0026#34;, ctx.Stack()), functionArgs) if err != nil { return err } _, err = lambda.NewEventSourceMapping(ctx, fmt.Sprintf(\u0026#34;%s-lambda-payment\u0026#34;, ctx.Stack()), \u0026amp;lambda.EventSourceMappingArgs{ BatchSize: pulumi.Int(1), Enabled: pulumi.Bool(true), FunctionName: function.Arn, EventSourceArn: pulumi.String(requestQueue.Arn), }) if err != nil { return err } The function arguments look the same as they would in any other Lambda deployment tool — runtime, memory size, IAM role, etc. The Payment service is triggered by SQS messages, so it needs a NewEventSourceMapping() to connect the function to the queue. The mapping uses the IAM role from the function arguments to verify the function has permission to receive messages, and it\u0026rsquo;ll throw an error if it doesn\u0026rsquo;t.\nWhy use Pulumi and how does Continuous Verification play a role? # Fair question: the Pulumi code is about twice the size of the CloudFormation template that does the same thing. So why bother?\nFor me, it comes down to the same arguments from earlier. 
Pulumi lets me write deployments in the same language as my application code, gives me strongly typed variables, and provides access to all the tooling I already use for development — IDE support, testing, loops, the works. That makes building and maintaining the serverless infrastructure a lot more natural.\nThose same reasons are why Pulumi fits well with Continuous Verification. The built-in previews, the ability to verify that resources were created as expected, and the ability to iterate on your infrastructure code all help you make an informed decision about whether your code should go to production.\nWhat\u0026rsquo;s next? # Next up, we\u0026rsquo;ll look at the observability side of serverless with VMware Tanzu Observability by Wavefront.\nPhoto by panumas nikhomkhai from Pexels\n","date":"April 8, 2020","externalUrl":null,"permalink":"/2020/04/building-a-serverless-fitness-shop-infrastructure-as-code/","section":"Blog","summary":"If you’ve read the blog posts on CloudJourney.io before, you’ve likely come across the term “Continuous Verification”. If not, no worries. There’s a solid article from Dan Illson and Bill Shetti on The New Stack that explains it in detail. The short version: Continuous Verification means putting as many automated checks as possible into your CI/CD pipelines. More checks, fewer manual tasks, more data to smooth out and improve your development and deployment process.\nIn part one we covered the tools and technologies and in part two we covered the Continuous Integration aspect of the ACME Serverless Fitness Shop. This post is about Infrastructure as Code.\n","title":"Building a Serverless Fitness Shop - Infrastructure as Code","type":"blog"},{"content":"If you\u0026rsquo;ve read the blog posts on CloudJourney.io before, you\u0026rsquo;ve likely come across the term \u0026ldquo;Continuous Verification\u0026rdquo;. If not, no worries. 
There\u0026rsquo;s a solid article from Dan Illson and Bill Shetti on The New Stack that explains it in detail. The short version: Continuous Verification means putting as many automated checks as possible into your CI/CD pipelines. More checks, fewer manual tasks, more data to smooth out and improve your development and deployment process.\nIn part one of this series, we covered the tools and technologies behind the ACME Serverless Fitness Shop. Now it\u0026rsquo;s time to look at the CI/CD side of things.\nWhat is the ACME Serverless Fitness Shop # Quick recap: the ACME Serverless Fitness Shop combines two of my favorite things — serverless and fitness. It has seven distinct domains, each with one or more serverless functions. Some are event-driven, others have an HTTP API, and all of them are written in Go.\nContinuous Anything # \u0026ldquo;Continuous Anything\u0026rdquo; isn\u0026rsquo;t just a catchy title. It captures the idea that all the practices starting with \u0026ldquo;Continuous\u0026rdquo; should work together in a single run from code to production: integration (building and testing), deployment (getting builds to staging and production), and verification (making sure the deployment is the right thing to do).\nThere are plenty of CI/CD tools to choose from — Jenkins, Travis CI, CircleCI, and others. For the ACME Serverless Fitness Shop, I had a few requirements. Serverless means not running your own servers, so the CI/CD tool needs to be a managed service. There are multiple service domains with their own repositories, but some variables should be shared across all of them from a single place.\nJenkins is well-known, but almost everything requires a plugin installed on the server. You need to make sure build tools are available on the machine running the builds. And as far as I know, there\u0026rsquo;s no managed Jenkins service.\nTravis CI runs builds in a container or VM with minimal tooling beyond the language you\u0026rsquo;ve chosen. 
You also can\u0026rsquo;t share environment variables across multiple projects.\nThat brings me to CircleCI. It\u0026rsquo;s a managed service with a generous free tier. CircleCI offers orbs — reusable pieces of YAML that can be configuration, commands, or entire CI jobs. It\u0026rsquo;s the plugin concept without requiring anyone to install them on a server. The ACME Serverless Fitness Shop has seven service domains, each with its own GitHub repository. CircleCI\u0026rsquo;s \u0026ldquo;Build Contexts\u0026rdquo; let me share environment variables across builds. I only need to update my Sentry DSN in one place and it\u0026rsquo;s available to all builds.\nContinuous Integration # Grady Booch coined the term Continuous Integration back in 1991; the common definition today is \u0026ldquo;the practice of merging all developers\u0026rsquo; working copies to a shared mainline several times a day\u0026rdquo;. The idea still holds: the longer a developer works on a separate copy of a codebase, the higher the chance of conflicts. These can range from simple library version mismatches to multiple developers editing the same method in the same file.\nA typical Continuous Integration workflow runs builds and tests, then gives feedback when something fails. In CircleCI, that takes about 10 lines of YAML. The image used is the next-gen convenience image from CircleCI, which comes with useful Go tools pre-installed.\n# The version of the CircleCI configuration language to use (2.1 is needed for most orbs) version: 2.1 jobs: build: docker: # The ACME source code is in Go, so we\u0026#39;ll rely on the image provided by the CircleCI team (1.14 is the latest version at the time of writing) - image: cimg/go:1.14 steps: # Get the sources from the repository - checkout # Get all dependencies for both code execution and running tests (-t) and downloaded packages shouldn\u0026#39;t be installed (-d) - run: go get -t -d ./... 
# Run gotestsum for all packages, which is a great tool to run tests and see human friendly output - run: gotestsum --format standard-quiet # Compile and build executables and store them in the .bin folder - run: GOBIN=$(pwd)/.bin go install ./... Continuous Delivery # Finding a clean definition of Continuous Delivery is harder than you\u0026rsquo;d think. After a lot of back-and-forth with Matty Stratton, Laura Santamaria, and Aaron Aldrich, the consensus is: Continuous Delivery means being able to test the entire codebase and prepare it for production, with the final push to production requiring manual approval.\nFor the ACME Serverless Fitness Shop, I use Pulumi for this. I want a tool without a custom DSL — I\u0026rsquo;m not a YAML expert and I\u0026rsquo;d rather write Go. The Pulumi team built an orb that makes the CircleCI integration straightforward, handling installation and boilerplate. In a future post, we\u0026rsquo;ll look at how the Pulumi scripts are structured.\nversion: 2.1 # Register the Pulumi orb orbs: pulumi: pulumi/pulumi@1.2.0 jobs: build: docker: - image: circleci/golang:1.14 steps: - checkout - run: go get -t -d ./... - run: gotestsum --format standard-quiet - run: GOBIN=$(pwd)/.bin go install ./... # Log in to Pulumi using a specific version of the CLI - pulumi/login: version: 1.12.1 - pulumi/preview: stack: retgits/dev working_directory: ~/project/pulumi workflows: version: 2 deploy: jobs: - build: # The context gives the ability to set environment variables that are shared across pipelines. # In this case the ACMEServerless context has the Pulumi Token that is needed by the Pulumi orb context: ACMEServerless At 23 lines, this pipeline already builds and tests both the app code and the infrastructure. 
It also sets the context (shared environment variables) so all builds have the same configuration.\nContinuous Verification # As a reminder, Continuous Verification is \u0026ldquo;A process of querying external system(s) and using information from the response to make decision(s) to improve the development and deployment process.\u0026rdquo;\nThere are many things you can verify, but here are four that matter most for this project:\nSecurity: Are the Go modules safe? Are the IAM settings correct? Performance: Is the function execution time acceptable? Utilization: How much memory are the functions actually using? Cost: What will running these components cost? Starting with security — it\u0026rsquo;s a shared responsibility to build safe software. I\u0026rsquo;ve written about Snyk before, so I won\u0026rsquo;t repeat the details here. Scanning Go modules takes two additional lines of YAML: add the orb (snyk: snyk/snyk@0.0.10) and add the scan step (snyk/scan).\nFor performance checks after invoking the Lambda function a few times, you can use the AWS CLI to pull CloudWatch metrics. That requires four steps:\nAdd the AWS CLI orb: aws-cli: circleci/aws-cli@0.1.22 Add the setup step: aws-cli/setup (use a context to avoid putting credentials in your YAML) Add a run step to get statistics: export FUNCTION=AllCarts \u0026amp;\u0026amp; export ENDDATE=`date -u \u0026#39;+%Y-%m-%dT%TZ\u0026#39;` \u0026amp;\u0026amp; export STARTDATE=`date -u -d \u0026#34;1 day ago\u0026#34; \u0026#39;+%Y-%m-%dT%TZ\u0026#39;` \u0026amp;\u0026amp; export DURATION=`aws cloudwatch get-metric-statistics --metric-name Duration --start-time $STARTDATE --end-time $ENDDATE --period 3600 --namespace AWS/Lambda --statistics Average --dimensions Name=FunctionName,Value=$FUNCTION | jq \u0026#39;.Datapoints | map(.Average) | add\u0026#39;` \u0026amp;\u0026amp; if (($DURATION \u0026gt; 3000)); then echo \u0026#34;Alert\u0026#34; \u0026amp;\u0026amp; exit 1; else echo \u0026#34;Within range. 
Continuing\u0026#34;; fi Set an appropriate threshold for $DURATION. In this example, it\u0026rsquo;s 3000 milliseconds. In a future post on tracing, we\u0026rsquo;ll look at using VMware Tanzu Observability by Wavefront to check function costs and memory utilization.\nHere\u0026rsquo;s the full pipeline with all the steps and orbs:\nversion: 2.1 orbs: pulumi: pulumi/pulumi@1.2.0 snyk: snyk/snyk@0.0.10 aws-cli: circleci/aws-cli@0.1.22 jobs: build: docker: - image: circleci/golang:1.14 steps: - checkout - run: go get -t -d ./... - run: gotestsum --format standard-quiet - run: GOBIN=$(pwd)/.bin go install ./... - snyk/scan - pulumi/login: version: 1.12.1 # We\u0026#39;re updating the stack instead of only showing the preview - pulumi/update: stack: retgits/dev working_directory: ~/project/pulumi skip-preview: true - aws-cli/setup - run: export FUNCTION=AllCarts \u0026amp;\u0026amp; export ENDDATE=`date -u \u0026#39;+%Y-%m-%dT%TZ\u0026#39;` \u0026amp;\u0026amp; export STARTDATE=`date -u -d \u0026#34;1 day ago\u0026#34; \u0026#39;+%Y-%m-%dT%TZ\u0026#39;` \u0026amp;\u0026amp; export DURATION=`aws cloudwatch get-metric-statistics --metric-name Duration --start-time $STARTDATE --end-time $ENDDATE --period 3600 --namespace AWS/Lambda --statistics Average --dimensions Name=FunctionName,Value=$FUNCTION | jq \u0026#39;.Datapoints | map(.Average) | add\u0026#39;` \u0026amp;\u0026amp; if (($DURATION \u0026gt; 3000)); then echo \u0026#34;Alert\u0026#34; \u0026amp;\u0026amp; exit 1; else echo \u0026#34;Within range. 
Continuing\u0026#34;; fi workflows: version: 2 deploy: jobs: - build: context: ACMEServerless 29 lines of YAML that:\nBuild and test the Go code for the Lambda functions Scan Go modules for security vulnerabilities Validate the infrastructure deployment Update the development environment Run a performance check The best part is that all builds use the exact same steps, and I don\u0026rsquo;t have to share any credentials thanks to the CircleCI context.\nWhat\u0026rsquo;s next? # Next up, we\u0026rsquo;ll look at Infrastructure as Code with Pulumi in more detail.\nCover photo by Magda Ehlers from Pexels\n","date":"April 1, 2020","externalUrl":null,"permalink":"/2020/04/building-a-serverless-fitness-shop-continuous-anything/","section":"Blog","summary":"If you’ve read the blog posts on CloudJourney.io before, you’ve likely come across the term “Continuous Verification”. If not, no worries. There’s a solid article from Dan Illson and Bill Shetti on The New Stack that explains it in detail. The short version: Continuous Verification means putting as many automated checks as possible into your CI/CD pipelines. More checks, fewer manual tasks, more data to smooth out and improve your development and deployment process.\n","title":"Building a Serverless Fitness Shop - Continuous Anything","type":"blog"},{"content":"If you\u0026rsquo;ve read the blog posts on CloudJourney.io before, you\u0026rsquo;ve likely come across the term \u0026ldquo;Continuous Verification\u0026rdquo;. If you haven\u0026rsquo;t, no worries. There\u0026rsquo;s a solid article from Dan Illson and Bill Shetti on The New Stack that explains it in detail. The short version: Continuous Verification is \u0026ldquo;A process of querying external system(s) and using information from the response to make decision(s) to improve the development and deployment process.\u0026rdquo;\nIn practice, that means putting as many automated checks as possible into your CI/CD pipelines. 
More checks means fewer manual tasks, which means more data to smooth out and improve your development and deployment process. The CloudJourney.io team built the ACME Fitness Shop to showcase continuous verification in a containerized world. There are deployments for Kubernetes, Docker, and AWS Fargate. In this blog series, we\u0026rsquo;ll look at how Continuous Verification works in a serverless context, and how we built the components that make up the ACME Serverless Fitness Shop.\nWhat is the ACME Serverless Fitness Shop # The ACME Serverless Fitness Shop combines two of my favorite things: serverless and fitness. The shop has seven different domains, each containing one or more serverless functions:\nShipment: A shipping service, because what is a shop without a way to ship your purchases? 🚚 Payment: A payment service, because nothing in life is really free\u0026hellip; 💰 Order: An order service, because what is a shop without actual orders to be shipped? 📦 Cart: A cart service, because what is a shop without a cart to put stuff in? 🛒 Catalog: A catalog service, because what is a shop without a catalog to show off our awesome red pants? 📖 User: A user service, because what is a shop without users to buy our awesome red pants? 👨‍💻 Point-of-Sales: A point-of-sales app to sell our products in brick-and-mortar stores! 🛍️ Some of these services are event-driven, while others have an HTTP API. The API-based services use the same API specifications as their containerized counterparts, so the serverless version stays compatible with the original ACME Fitness Shop frontend.\nDeciding on Data stores # With Functions-as-a-Service, you can\u0026rsquo;t maintain state inside the function. Once it\u0026rsquo;s done processing, it shuts down and any in-memory state is gone. Most functions need to persist data somewhere. 
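The pattern that falls out of this is a handler that reads everything it needs at the start of an invocation and writes its results back out before returning, keeping nothing in memory between calls. A minimal sketch, using a hypothetical Store interface and an in-memory fake standing in for a real database:

```go
package main

import "fmt"

// Store abstracts whatever persistence layer the function uses.
// The interface and the in-memory fake are illustrative only.
type Store interface {
	Get(key string) (string, bool)
	Put(key, value string)
}

type memStore map[string]string

func (m memStore) Get(k string) (string, bool) { v, ok := m[k]; return v, ok }
func (m memStore) Put(k, v string)             { m[k] = v }

// handler keeps no state of its own: every invocation loads what it
// needs from the store and writes its result back before returning.
func handler(s Store, orderID string) string {
	status, ok := s.Get(orderID)
	if !ok {
		status = "pending"
	}
	s.Put(orderID, status)
	return status
}

func main() {
	s := memStore{}
	fmt.Println(handler(s, "order-123")) // pending
}
```

Because the handler holds nothing between calls, any invocation can run on any container, which is what makes scaling down to zero (and back up) safe.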
When you go serverless for everything, there are a few options for storage:\nAWS DynamoDB for a NoSQL database with single-digit millisecond latency at any scale Amazon Aurora Serverless for a MySQL-compatible relational database Amazon RDS Proxy for using AWS Lambda with traditional RDS relational databases For the ACME Serverless Fitness Shop, most queries are simple gets and puts. We always know the data type a function needs and which keys are associated with it. There are no joins or schemas needed for referential integrity. AWS advocates for purpose-built databases, and for these access patterns, DynamoDB is the right fit. The single-digit millisecond latency is a nice bonus, but the real win is that DynamoDB is fully managed — no upgrade windows, no patching, no ops overhead.\nDeciding on Application integration # Serverless apps are event-driven, so the next decision is which service handles the events. A few options:\nAmazon SNS for publish/subscribe style messaging Amazon SQS as a managed queueing service Amazon EventBridge as a serverless event bus With SQS, receivers poll for messages and each message goes to a single receiver. With SNS, messages are pushed to all subscribers, which is typically faster. The real difference is in the use case. Queues are great for decoupling apps and async communication. Pub/sub is better when multiple systems need to act on the same message. The ACME Serverless Fitness Shop has functions handling distinct messages asynchronously, so SQS is the natural fit.\nDeciding on Compute # Last decision: where do the apps run? Within AWS, the main options are:\nAWS Lambda — run code without provisioning servers, practically synonymous with serverless Lambda@Edge — run Lambda functions at edge locations AWS Fargate — run containers in a serverless fashion Fargate is solid, and at re:Invent 2019 AWS added the ability to run Kubernetes pods on it. 
That would be the easiest path to get the ACME Fitness Shop into the cloud, but containers still incur cost even when idle. Since there\u0026rsquo;s already a Fargate and Kubernetes version, and the goal is to pay as little as possible when functions aren\u0026rsquo;t running, we went with AWS Lambda and the Go 1.x runtime.\nFrom Microservices to Serverless # Moving from traditional microservices to event-driven architecture requires refactoring and rearchitecting. To show what that looks like, here\u0026rsquo;s how we changed the Payment service from an HTTP-based microservice to an SQS-based Lambda function. Two requirements for this change:\nThe service must still validate credit card payments and respond with the validation status (no change in functionality) The input and output must not add or remove any fields that would alter the service\u0026rsquo;s behavior (no change to inputs or outputs) Creating events # Event-driven architectures need events, and events should describe what happened. The Payment service has two: one that triggers it and one that it produces. The order service sends a \u0026ldquo;PaymentRequested\u0026rdquo; event when an order needs payment. The Payment service responds with a \u0026ldquo;CreditCardValidated\u0026rdquo; event — because that\u0026rsquo;s exactly what happened.\nKeeping track of events in an event-driven system gets complicated fast. Adding metadata to each event helps. 
Here\u0026rsquo;s what the PaymentRequested event looks like:\n{ \u0026#34;metadata\u0026#34;: { \u0026#34;domain\u0026#34;: \u0026#34;Order\u0026#34;, // Domain represents the domain the event came from, like Payment or Order \u0026#34;source\u0026#34;: \u0026#34;CLI\u0026#34;, // Source represents the function the event came from \u0026#34;type\u0026#34;: \u0026#34;PaymentRequested\u0026#34;, // Type represents the type of event this is \u0026#34;status\u0026#34;: \u0026#34;success\u0026#34; // Status represents the current status of the event }, \u0026#34;data\u0026#34;: { \u0026#34;orderID\u0026#34;: \u0026#34;12345\u0026#34;, \u0026#34;card\u0026#34;: { \u0026#34;Type\u0026#34;: \u0026#34;Visa\u0026#34;, \u0026#34;Number\u0026#34;: \u0026#34;4222222222222\u0026#34;, \u0026#34;ExpiryYear\u0026#34;: 2022, \u0026#34;ExpiryMonth\u0026#34;: 12, \u0026#34;CVV\u0026#34;: \u0026#34;123\u0026#34; }, \u0026#34;total\u0026#34;: \u0026#34;123\u0026#34; } } And the CreditCardValidated event:\n{ \u0026#34;metadata\u0026#34;: { \u0026#34;domain\u0026#34;: \u0026#34;Payment\u0026#34;, \u0026#34;source\u0026#34;: \u0026#34;CLI\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;CreditCardValidated\u0026#34;, \u0026#34;status\u0026#34;: \u0026#34;success\u0026#34; }, \u0026#34;data\u0026#34;: { \u0026#34;success\u0026#34;: \u0026#34;true\u0026#34;, \u0026#34;status\u0026#34;: 200, \u0026#34;message\u0026#34;: \u0026#34;transaction successful\u0026#34;, \u0026#34;amount\u0026#34;: 123, \u0026#34;transactionID\u0026#34;: \u0026#34;3f846704-af12-4ea9-a98c-8d7b37e10b54\u0026#34; } } Functional behavior # The Payment service does three things:\nReceive a message from Amazon SQS Validate the credit card Send the validation result to Amazon SQS Here\u0026rsquo;s the Go code (Sentry tracing removed for clarity):\npackage main // removed imports for clarity // handler handles the SQS events and returns an error if anything goes wrong. 
// The resulting event, if no error is thrown, is sent to an SQS queue. func handler(request events.SQSEvent) error { // Unmarshal the PaymentRequested event to a struct req, err := payment.UnmarshalPaymentRequested([]byte(request.Records[0].Body)) if err != nil { return handleError(\u0026#34;unmarshaling payment\u0026#34;, err) } // Generate the event to emit evt := payment.CreditCardValidated{ Metadata: payment.Metadata{ Domain: payment.Domain, Source: \u0026#34;ValidateCreditCard\u0026#34;, Type: payment.CreditCardValidatedEvent, Status: \u0026#34;success\u0026#34;, }, Data: payment.PaymentData{ Success: true, Status: http.StatusOK, Message: payment.DefaultSuccessMessage, Amount: req.Data.Total, OrderID: req.Data.OrderID, TransactionID: uuid.Must(uuid.NewV4()).String(), }, } // Check the creditcard is valid. // If the creditcard is not valid, update the event to emit // with new information check := validator.New() err = check.Creditcard(req.Data.Card) if err != nil { evt.Metadata.Status = \u0026#34;error\u0026#34; evt.Data.Success = false evt.Data.Status = http.StatusBadRequest evt.Data.Message = payment.DefaultErrorMessage evt.Data.TransactionID = \u0026#34;-1\u0026#34; handleError(\u0026#34;validating creditcard\u0026#34;, err) } // Create a new SQS EventEmitter and send the event em := sqs.New() err = em.Send(evt) if err != nil { return handleError(\u0026#34;sending event\u0026#34;, err) } return nil } // handleError takes the activity where the error occured and the error object and sends a message to sentry. // The original error is returned so it can be thrown. 
func handleError(activity string, err error) error { log.Printf(\u0026#34;error %s: %s\u0026#34;, activity, err.Error()) return err } // The main method is executed by AWS Lambda and points to the handler func main() { lambda.Start(handler) } Infrastructure as Code # Continuous Integration, Continuous Delivery, and Continuous Verification all depend on automating as much as possible so developers and engineers can focus on building business value. That includes creating infrastructure in the pipeline, which means Infrastructure as Code. Options include:\nTerraform — write HCL to define infrastructure Serverless Framework — one of the first tools to simplify building and deploying functions AWS CloudFormation (and SAM) — the AWS-native configuration language Pulumi — an open-source IaC tool that works across clouds I wanted a tool without a custom DSL. I\u0026rsquo;m not a YAML expert, and I enjoy writing Go. If I can keep my entire toolset Go-based, that\u0026rsquo;s ideal. This is where Pulumi fits. It lets me use the Go toolchain while deploying to Amazon Web Services and leveraging the full AWS ecosystem. All the services, the DynamoDB table, and the SQS queues are deployed using Pulumi. 
Here\u0026rsquo;s how you create a DynamoDB table with the Pulumi Go SDK (tags removed for clarity — full code on GitHub):\npackage main import ( \u0026#34;fmt\u0026#34; \u0026#34;github.com/pulumi/pulumi-aws/sdk/go/aws/dynamodb\u0026#34; \u0026#34;github.com/pulumi/pulumi/sdk/go/pulumi\u0026#34; \u0026#34;github.com/pulumi/pulumi/sdk/go/pulumi/config\u0026#34; ) // DynamoConfig contains the key-value pairs for the configuration of Amazon DynamoDB in this stack type DynamoConfig struct { // Controls how you are charged for read and write throughput and how you manage capacity BillingMode pulumi.String `json:\u0026#34;billingmode\u0026#34;` // The number of write units for this table WriteCapacity pulumi.Int `json:\u0026#34;writecapacity\u0026#34;` // The number of read units for this table ReadCapacity pulumi.Int `json:\u0026#34;readcapacity\u0026#34;` } func main() { pulumi.Run(func(ctx *pulumi.Context) error { // Read the configuration data from Pulumi.\u0026lt;stack\u0026gt;.yaml conf := config.New(ctx, \u0026#34;awsconfig\u0026#34;) // Create a new DynamoConfig object with the data from the configuration var dynamoConfig DynamoConfig conf.RequireObject(\u0026#34;dynamodb\u0026#34;, \u0026amp;dynamoConfig) // The table attributes represent a list of attributes that describe the key schema for the table and indexes tableAttributeInput := []dynamodb.TableAttributeInput{ dynamodb.TableAttributeArgs{ Name: pulumi.String(\u0026#34;PK\u0026#34;), Type: pulumi.String(\u0026#34;S\u0026#34;), }, dynamodb.TableAttributeArgs{ Name: pulumi.String(\u0026#34;SK\u0026#34;), Type: pulumi.String(\u0026#34;S\u0026#34;), }, } // The set of arguments for constructing an Amazon DynamoDB Table resource tableArgs := \u0026amp;dynamodb.TableArgs{ Attributes: dynamodb.TableAttributeArray(tableAttributeInput), BillingMode: pulumi.StringPtrInput(dynamoConfig.BillingMode), HashKey: pulumi.String(\u0026#34;PK\u0026#34;), RangeKey: pulumi.String(\u0026#34;SK\u0026#34;), Name: 
pulumi.String(fmt.Sprintf(\u0026#34;%s-%s\u0026#34;, ctx.Stack(), ctx.Project())), ReadCapacity: dynamoConfig.ReadCapacity, WriteCapacity: dynamoConfig.WriteCapacity, } // NewTable registers a new resource with the given unique name, arguments, and options table, err := dynamodb.NewTable(ctx, fmt.Sprintf(\u0026#34;%s-%s\u0026#34;, ctx.Stack(), ctx.Project()), tableArgs) if err != nil { return err } // Export the ARN and Name of the table ctx.Export(\u0026#34;Table::Arn\u0026#34;, table.Arn) ctx.Export(\u0026#34;Table::Name\u0026#34;, table.Name) return nil }) } Continuous Anything # While building out the services, I came across Stackery\u0026rsquo;s Road to Serverless Ubiquity Guide. One paragraph on developer experience stuck with me:\n\u0026ldquo;But developers are human beings, too—and their experience of these tools and technologies is extremely important if we want to encourage sustainable and repeatable development practices.\u0026rdquo;\nSustainable and repeatable development practices matter regardless of whether you\u0026rsquo;re doing serverless or not. You want repeatable processes and repeatable builds. A friend introduced me to CircleCI, which has a concept of Orbs — reusable snippets of code that automate repeated processes, speed up project setup, and integrate with third-party tools. That saves a lot of work on deployment scripts. All services, including DynamoDB and SQS, have their CircleCI pipeline and each pipeline is only 35 lines of configuration. 
Most of those lines are copied from the starter template.\nWrapping up # In this first part of the series, we covered the key choices:\nA data store, DynamoDB, because it\u0026rsquo;s the right purpose-built database for the access patterns the ACME Serverless Fitness Shop needs The application integration service, SQS, because it allows the functions to operate asynchronously The compute resources, Lambda, for its event-driven model and cost profile The Infrastructure as Code tool, Pulumi, so I can write Go to deploy my Go functions The CI/CD tool, CircleCI, because Orbs keep the configuration minimal We also walked through moving a microservice to serverless. Next up: what Continuous Verification means for serverless workloads.\nPhoto by Humphrey Muleba on Unsplash\n","date":"March 23, 2020","externalUrl":null,"permalink":"/2020/03/building-a-serverless-fitness-shop-tools-and-tech/","section":"Blog","summary":"If you’ve read the blog posts on CloudJourney.io before, you’ve likely come across the term “Continuous Verification”. If you haven’t, no worries. There’s a solid article from Dan Illson and Bill Shetti on The New Stack that explains it in detail. The short version: Continuous Verification is “A process of querying external system(s) and using information from the response to make decision(s) to improve the development and deployment process.”\n","title":"Building a Serverless Fitness Shop - Tools and Tech","type":"blog"},{"content":"Microservices give us as developers an incredible amount of freedom. We can choose our language and we can decide where and when to deploy our service. One of the biggest challenges with microservices, though, is figuring out how things go wrong. With microservices, we can build large, distributed applications, but that also means finding what goes wrong is challenging. 
It’s even harder to trace errors when you use a platform like AWS Lambda.\nAs good developers, we write our unit tests and integration tests and we make sure those tests all pass. Together with the Quality Assurance team, we write complex test scenarios to make sure our code behaves the way we intended. The one thing, though, we can never predict is how our end-users will use the software. There are always new issues we didn\u0026rsquo;t think would happen. That is why tools like Sentry.io are incredibly useful. Sentry.io, which is an application monitoring platform, gives real-time insight into all events logged by developers. Those events can be errors, but they can also be other types of events.\nAs applications grow, and become a lot more complex, the time it takes to figure out where things go wrong increases too. As you rearchitect apps to be event-driven and make use of serverless compute, that complexity will increase even further. One of the apps we built to help showcase the things we work on as a team is the ACME Fitness Shop. The ACME Fitness Shop consists of six microservices, all running in their own containers and using their own data stores.\nWe\u0026rsquo;ve written before on the topic of observability and how to get from just metrics to observability.\nOver the past months, the CloudJourney.io team has worked on a serverless version of the ACME Fitness Shop. Currently, the serverless version has 24 Lambda functions that work together. Keeping track of what they all do and where things fall apart is tough. Rather than having a single container that does all the “cart stuff”, there are now eight different Lambda functions.\nThe Lambda functions in the above diagram do the same as their container-based counterparts in the first image. In fact, you could swap the container-based services with the serverless services and never really know the difference. 
While we can have different teams working on different parts of the same group of functionality, we did add a bit of complexity when it comes to troubleshooting and finding errors. Especially with serverless, you can’t open up a terminal session and “ssh into a container”.\nInstrumenting Error handling # One of the most commonly used tools to report on errors is Sentry.io. Sentry allows developers to instrument their apps to detect errors in real-time. To me, one of the features that stands out is that it also captures where in the source code the error occurred (with a code snippet).\nFirst things first, though. To connect to Sentry, the only required value is the client key, which Sentry calls the DSN. To keep that value secure, and to follow best practices, you can store it in the AWS Systems Manager Parameter Store (SSM) and use it while deploying your app using CloudFormation (or SAM). SSM lets you create hierarchical groups for your parameters. Using that hierarchical feature, we’ve called the parameter /Sentry/Dsn in SSM. In CloudFormation templates you can use the parameter, like:\nParameters: SentryDSN: Type: AWS::SSM::Parameter::Value\u0026lt;String\u0026gt; Default: /Sentry/Dsn Capturing events # Most serverless platforms will close all network connections as soon as the function is completed and won\u0026rsquo;t wait for confirmation. To make sure that all events are received by Sentry, you can configure a synchronous HTTP transport. 
Together with a few additional settings, the connection to Sentry is configured as:\nsentrySyncTransport := sentry.NewHTTPSyncTransport() sentrySyncTransport.Timeout = time.Second * 3 sentry.Init(sentry.ClientOptions{ Dsn: os.Getenv(\u0026#34;SENTRY_DSN\u0026#34;), // The DSN, coming from the AWS Systems Manager Parameter Store Transport: sentrySyncTransport, ServerName: os.Getenv(\u0026#34;FUNCTION_NAME\u0026#34;), // The name of the function so it can be easily found in Sentry\u0026#39;s UI Release: os.Getenv(\u0026#34;VERSION\u0026#34;), // The version of the deployment so it can be found in GitHub Environment: os.Getenv(\u0026#34;STAGE\u0026#34;), // The stage, so you can see if it is test or production }) Sentry offers a bunch of useful events that you can send to help understand what goes on in your app:\nBreadcrumbs: a series of events that occurred before an error or message; Exception: an error that has occurred; Message: a log message with additional information about an event or an error. Within the Payment service, the credit card data is validated to make sure it\u0026rsquo;s a valid credit card. When an order doesn\u0026rsquo;t have a valid credit card, the rest of the flow will halt as well so that\u0026rsquo;s an error you want to capture.\nsentry.CaptureException(fmt.Errorf(\u0026#34;validation failed for order [%s] : %s\u0026#34;, req.Request.OrderID, err.Error())) This is an error you will find during your unit and integration tests, but since it highly impacts your user experience you likely want to keep track of it during production too.\nThe opposite is possible as well. If you want to capture successful invocations of your app, you can do that too. In this particular case, I\u0026rsquo;d say the invocation was a success when the credit card was successfully validated, and the message was successfully sent to where it needs to go (in this case an SQS queue). 
With a single statement, you can send an event to Sentry that captures the success of the Lambda function.\nsentry.CaptureMessage(fmt.Sprintf(\u0026#34;validation successful for order [%s]\u0026#34;, req.Request.OrderID)) Keeping track of data # In both cases, keeping track of additional data is important. That additional data could mean the difference between spending two minutes looking at a single service or spending the entire day figuring out which services were impacted. The breadcrumbs play an essential role here. The amount, the order number, and the generated transaction ID are useful in this context. This data is also really useful if there is an error in the Shipping service. For example, when an order should have been sent to the Shipping service but is never picked up by the Lambda function, you can easily trace where the issue is. Adding this data is done through breadcrumbs, like:\ncrumb := sentry.Breadcrumb{ Category: \u0026#34;CreditCardValidated\u0026#34;, Timestamp: time.Now().Unix(), Level: sentry.LevelInfo, Data: map[string]interface{}{ \u0026#34;Amount\u0026#34;: req.Request.Total, \u0026#34;OrderID\u0026#34;: req.Request.OrderID, \u0026#34;Success\u0026#34;: true, \u0026#34;Message\u0026#34;: \u0026#34;transaction successful\u0026#34;, }, } // ...(snip) sentry.AddBreadcrumb(\u0026amp;crumb) Up to now, we’ve looked at the serverless components of the ACME Fitness Shop. As mentioned, there is a container version too and you could mix and match parts. This is where tracing and knowing where errors occur is crucial. Rather than just looking at logs in one place, you would need to look at both your serverless logs and Kubernetes logs. Within the Python-based cart service, we’ve added the redis integration too. 
Simply adding a single line, and an import statement, makes all the redis commands that are executed show up too.\nimport sentry_sdk from sentry_sdk.integrations.redis import RedisIntegration sentry_sdk.init( dsn=\u0026#39;https://\u0026lt;key\u0026gt;@sentry.io/\u0026lt;project\u0026gt;\u0026#39;, integrations=[RedisIntegration()] ) Rather than spending time figuring out where the issue is and how certain data got translated into a redis command, you can literally just read what happened.\nAs your application moves from dev, to test, and to production, the environment tag will help you keep track of what happens in which environment. Issues that happen in the test environment are less urgent than ones that happen in production. One of the tags we’ve set up is called “release”, which is the SHA of the git commit. Based on that, we can track the exact commit that’s running in any environment at any given time and see the events that have been captured.\nThese tags, when tied to specific scopes, can even allow you to correlate messages together. That gives you the power to see everything that went on in your system at the time an error occurred. Some great examples, including tracing from the Nginx load balancer in front of your app, can be found in the Sentry docs.\nWhat’s next? # The containers and Kubernetes manifests for the ACME Fitness Shop are on GitHub and so is the code for the ACME Serverless Fitness Shop. So if you want to try it out and see whether Sentry.io adds value to your use cases as well, you can do exactly that using the code and apps we\u0026rsquo;ve already built. In the meantime, let us know your thoughts and send Bill, Leon, or the team a note on Twitter.\nPhoto by John Schnobrich on Unsplash\n","date":"March 9, 2020","externalUrl":null,"permalink":"/2020/03/tracking-distributed-errors-in-serverless-apps/","section":"Blog","summary":"Microservices give us as developers an incredible amount of freedom. 
We can choose our language and we can decide where and when to deploy our service. One of the biggest challenges with microservices, though, is figuring out how things go wrong. With microservices, we can build large, distributed applications, but that also means finding what goes wrong is challenging. It’s even harder to trace errors when you use a platform like AWS Lambda.\n","title":"Tracking Distributed Errors In Serverless Apps","type":"blog"},{"content":"At VMware we define Continuous Verification as:\n\u0026ldquo;A process of querying external systems and using information from the response to make decisions to improve the development and deployment process.\u0026rdquo;\nAt Serverless Nashville, I got a chance to not only talk about what that means for serverless apps but also how we use serverless in some of the business units at VMware.\nThe talk # Continuous Verification is an extension to the development and deployment processes companies already have. It focuses on optimizing both the development and deployment experience by looking at security, performance, and cost. At most companies, some of these steps are done manually or scripted, but they\u0026rsquo;re rarely part of the actual deployment pipeline. 
In this session, we look at extending an existing CI/CD pipeline with checks for security, performance, and cost to make a decision on whether to deploy or not.\nSlides # Video # Talk materials # Continuous Verification: The Missing Link to Fully Automate Your Pipeline Prowler: AWS Security Best Practices Assessment, Auditing, Hardening and Forensics Readiness Tool ACME Serverless Fitness Shop - Payment Service ","date":"February 27, 2020","externalUrl":null,"permalink":"/2020/02/continuous-verification-in-a-serverless-world-@-serverless-nashville/","section":"Blog","summary":"At VMware we define Continuous Verification as:\n“A process of querying external systems and using information from the response to make decisions to improve the development and deployment process.”\nAt Serverless Nashville, I got a chance to not only talk about what that means for serverless apps but also how we use serverless in some of the business units at VMware.\n","title":"Continuous Verification In A Serverless World @ Serverless Nashville","type":"blog"},{"content":"DevOps, as a practice to build and deliver software, has been around for over a decade. What about adding security to that, though? After all, security is one of the cornerstones of today\u0026rsquo;s information technology. As it turns out, one of the first mentions of adding security was a Gartner blog post in 2012. Neil MacDonald wrote,\n\u0026ldquo;DevOps must evolve to a new vision of DevOpsSec that balances the need for speed and agility of enterprise IT capabilities (\u0026hellip;)\u0026rdquo;.\nIn Risk Based Security\u0026rsquo;s year-end report for 2019, it mentions that more than 22 thousand new vulnerabilities have been disclosed. More than a third of those have exploits available or code that proves how it can be misused. In the same report, JFrog\u0026rsquo;s Paul Garden mentions that if you\u0026rsquo;re not currently using hybrid infrastructure you soon will be, as more and more workloads are moved to the cloud. 
But if your security software is so out of date that the last virus it found was the one that killed off the dinosaurs, or so old that it originally came with a coupon for floppy discs, that is a big challenge (jokes courtesy of TrendMicro).\nWith that in mind, figuring out the need for a more cloud-native security solution is easy. Figuring out which one to pick isn\u0026rsquo;t. The CloudJourney team built the ACME Fitness Shop to show how complex apps run on Kubernetes. Over the past months, we\u0026rsquo;ve also worked to build a serverless version. There is no difference in the APIs between the Kubernetes and serverless versions, so they should be able to work together. Combining those two, vastly different, technologies into a single technology stack and keeping it safe is tough.\nKeeping Kubernetes safe # Aqua\u0026rsquo;s Risk Explorer view shows all the namespaces and deployments for clusters that have the Aqua enforcer running. That way, you can get a complete overview of your cluster from the ground up. The view shows all your infrastructure, all the namespaces that have apps running, and the containers that are part of those apps. In a Kubernetes environment it\u0026rsquo;s hard to get a complete overview of everything that goes on, so this view definitely helps get a perspective on things from a security point-of-view.\nWhen you click on one of the deployments, you get an overview of the risks and infrastructure associated with it. For example, the Payment service of the ACME Fitness Shop has 4 known high vulnerabilities and currently has one running instance.\nTo keep containers safe, even when there is no immediate resolution to the vulnerability, Aqua built vShield. That vShield technology is like a virtual patch that makes sure no one can exploit the vulnerability without requiring the dev teams to change their code. 
With a single press of a button, the cURL vulnerability can no longer be misused, even if attackers were able to compromise your system.\nKeeping your Kubernetes environment safe starts with knowing where your containers and images come from. For example, you want to make sure that all the containers you deploy come from a registry you trust. Relying on Docker Hub all the time to pull images to your Kubernetes cluster is a recipe for disaster. Adopting a private registry, like Harbor, is a much safer choice. Aqua comes out of the box with connectivity to Harbor, so it can help keep your container registry safe and tell you when things are not as they should be.\nMaking sure that images are scanned by Aqua is important. Equally important is to make sure that you trust the base images of the containers you deploy into your cluster. Personally, I value the amount of effort the Bitnami team takes in making sure the images they provide are safe and secure. My base images of choice are usually bitnami/minideb or bitnami/node. Within Aqua, you can create an Image Assurance Policy where you can list all the base images that you trust. Deployments that do not comply with that policy are flagged and can be acted upon.\nOne of my personal favorite features is the ability to send all notifications through a webhook to Slack. With a ton of activity going on in Slack anyway, it definitely helps when notifications are sent there so everyone in the team can see that the container you just wanted to deploy is not allowed.\nOne of my other favorite features is the RESTful API that Aqua offers. With the API, developers have access to all the functionality that the Web UI has (which does require authentication 😉). That means those devs can build workflows to automate and run actions from within Slack (or any other messaging platform you might like). 
Going one step further, using the webhook and API together, I could build a serverless app that reacts to new vulnerabilities being detected by running a new image scan, or checking which risks haven\u0026#39;t been acknowledged yet and letting the team on Slack know.\nKeeping functions safe # Some of the workload for the ACME Fitness Shop runs on AWS Lambda. While there are definitely aspects of security that are the same across Kubernetes and serverless, there are differences too.\nOne of the hot topics when it comes to building and running serverless applications is the principle of least privilege. That means that every component of your serverless app should only have access to the information and resources it needs for its function.\nIn this case, my function is absolutely safe but it does have access to permissions that might not be needed. Knowing what functions have excessive and unused permissions, and being able to act upon them is crucial to make sure that your serverless workload is safe too.\nWant to try it too? # Both Harbor and Aqua can run as standalone Docker containers, or as part of your Kubernetes cluster. You can get the Helm charts from, for example, the VMware Cloud Marketplace. Those Helm charts give you a less complex way to get started with both products. If you\u0026rsquo;re also looking for a way to run everything on your own machine, I can absolutely recommend either Minikube or KinD (Kubernetes in Docker) as great single-cluster Kubernetes installations.\nWhat\u0026rsquo;s next? # Keeping our apps, wherever they might be deployed, safe is a shared responsibility. As we\u0026rsquo;re all working on better security, let me know your thoughts.\nCover image by Thomas Jensen on Unsplash.\n","date":"February 24, 2020","externalUrl":null,"permalink":"/2020/02/hybrid-security-from-on-prem-to-serverless/","section":"Blog","summary":"DevOps, as a practice to build and deliver software, has been around for over a decade. 
What about adding security to that, though? After all, security is one of the cornerstones of today’s information technology. As it turns out, one of the first mentions of adding security was a Gartner blog post in 2012. Neil MacDonald wrote,\n“DevOps must evolve to a new vision of DevOpsSec that balances the need for speed and agility of enterprise IT capabilities (…)”.\n","title":"Hybrid Security - From On-Prem to Serverless","type":"blog"},{"content":"","date":"February 6, 2020","externalUrl":null,"permalink":"/categories/go/","section":"Categories","summary":"","title":"Go","type":"categories"},{"content":"When I started this series on creating infrastructure as code on AWS with Pulumi, I knew the team was actively improving Go support. What I didn\u0026rsquo;t expect was how quickly those improvements would land and how much cleaner the code would get. This post revisits some of the earlier code and updates it to the new SDK.\nThe complete project is available on GitHub.\nThe core programming model in Pulumi has a lot of features, but not all of them were available in the Go SDK yet. The work the Pulumi team was doing included more strongly typed structs (no more interface{}, yay! 🥳), better Input and Output type support, and feature parity with other language SDKs. With that many changes, breaking changes were inevitable. Pulumi recommends that Go developers pin their SDK versions.\nTL;DR I had to make a bunch of changes to make everything work again (like adding pulumi.String() wrappers around strings) but I could also replace all my custom structs with ones from the Pulumi SDK. 
There are a few rough edges, but it\u0026rsquo;s a work in progress and the direction is great.\nConfiguration # One of the new features — or maybe I just missed it before — is the ability to read and parse configuration directly from the YAML file into typed structs.\nPreviously, I wrote my own helper to pull config values from the context:\n// getEnv searches for the requested key in the pulumi context and provides either the value of the key or the fallback. func getEnv(ctx *pulumi.Context, key string, fallback string) string { if value, ok := ctx.GetConfig(key); ok { return value } return fallback } That helper and its related functions can now be replaced. The snippet below defines a VPCConfig struct and populates it using RequireObject(), which reads directly from the YAML file. If the config key doesn\u0026rsquo;t exist, Pulumi throws an error. You also get to model your structs with proper types — arrays instead of comma-separated strings.\n// VPCConfig is a strongly typed struct that can be populated with // contents from the YAML file. These are the configuration items // to create a VPC. type VPCConfig struct { CIDRBlock string `json:\u0026#34;cidr-block\u0026#34;` Name string SubnetIPs []string `json:\u0026#34;subnet-ips\u0026#34;` SubnetZones []string `json:\u0026#34;subnet-zones\u0026#34;` } func main() { pulumi.Run(func(ctx *pulumi.Context) error { // Create a new config object with the data from the YAML file // The object has all the data that the namespace awsconfig has conf := config.New(ctx, \u0026#34;awsconfig\u0026#34;) var vpcConfig VPCConfig conf.RequireObject(\u0026#34;vpc\u0026#34;, \u0026amp;vpcConfig) vpcArgs := \u0026amp;ec2.VpcArgs{ CidrBlock: pulumi.String(vpcConfig.CIDRBlock), Tags: pulumi.Map(tagMap), } ... // snip }) } The configuration file itself also gets cleaner. 
Here\u0026rsquo;s what it looked like before:\nvpc:name: myPulumiVPC vpc:cidr-block: \u0026#34;172.32.0.0/16\u0026#34; vpc:subnet-zones: \u0026#34;us-east-1a,us-east-1c\u0026#34; vpc:subnet-ips: \u0026#34;172.32.32.0/20,172.32.80.0/20\u0026#34; And here\u0026rsquo;s the new version with proper YAML structure:\nawsconfig:vpc: cidr-block: 172.32.0.0/16 name: myPulumiVPC subnet-ips: - 172.32.32.0/20 - 172.32.80.0/20 subnet-zones: - us-east-1a - us-east-1c A few more lines, but being able to see which config items are arrays is worth it. One thing that struck me as a little odd: you write config in YAML but the struct tags use JSON.\nType Safety # In the previous version, I created my own structs to add type safety when creating DynamoDB tables. Here\u0026rsquo;s what the old approach looked like:\n// DynamoAttribute represents an attribute for describing the key schema for the table and indexes. type DynamoAttribute struct { Name string Type string } // DynamoAttributes is an array of DynamoAttribute type DynamoAttributes []DynamoAttribute // ToList takes a DynamoAttributes object and turns that into a slice of map[string]interface{} so it can be correctly passed to the Pulumi runtime func (d DynamoAttributes) ToList() []map[string]interface{} { array := make([]map[string]interface{}, len(d)) for idx, attr := range d { m := make(map[string]interface{}) m[\u0026#34;name\u0026#34;] = attr.Name m[\u0026#34;type\u0026#34;] = attr.Type array[idx] = m } return array } // Create the attributes for ID and User dynamoAttributes := DynamoAttributes{ DynamoAttribute{ Name: \u0026#34;ID\u0026#34;, Type: \u0026#34;S\u0026#34;, }, DynamoAttribute{ Name: \u0026#34;User\u0026#34;, Type: \u0026#34;S\u0026#34;, }, } The updated SDK introduced strongly typed structs for Table Attributes and Global Secondary Indices, which eliminates the need for those helper types and methods. 
What was roughly 33 lines of code is now 10:\n// Create the attributes for ID and User dynamoAttributes := []dynamodb.TableAttributeInput{ dynamodb.TableAttributeArgs{ Name: pulumi.String(\u0026#34;ID\u0026#34;), Type: pulumi.String(\u0026#34;S\u0026#34;), }, dynamodb.TableAttributeArgs{ Name: pulumi.String(\u0026#34;User\u0026#34;), Type: pulumi.String(\u0026#34;S\u0026#34;), }, } The same benefits apply to Lambda deployments. More strongly typed structs with clear field definitions mean the code is easier to write and easier to read. No helper methods needed.\nGoing forward # It\u0026rsquo;s not all perfect — the Pulumi engineers still have work to do, and they\u0026rsquo;re doing it out in the open. There are some rough edges, like documentation gaps for inputs and outputs of certain methods. But the progress over just a few weeks was impressive, and I\u0026rsquo;m looking forward to seeing where the Go SDK goes from here.\nCover image by Martin Vorel from Pixabay\n","date":"February 6, 2020","externalUrl":null,"permalink":"/2020/02/how-to-build-infrastructure-as-code-with-pulumi-and-golang-part-2/","section":"Blog","summary":"When I started this series on creating infrastructure as code on AWS with Pulumi, I knew the team was actively improving Go support. What I didn’t expect was how quickly those improvements would land and how much cleaner the code would get. This post revisits some of the earlier code and updates it to the new SDK.\n","title":"How To Build Infrastructure as Code With Pulumi And Golang - Part 2","type":"blog"},{"content":"","date":"February 6, 2020","externalUrl":null,"permalink":"/series/infrastructure-as-code-with-pulumi-and-go/","section":"Series","summary":"","title":"Infrastructure as Code With Pulumi and Go","type":"series"},{"content":"One of my strong beliefs is that coding should be available to everyone. Whether that is a seasoned developer or someone who just wants to connect two systems together. 
With Project Flogo, we\u0026rsquo;ve made it possible for everyone to use the same constructs. If you want to use the web-based flow designer, that\u0026rsquo;s awesome! If you want to write your apps using the Go API, that\u0026rsquo;s awesome too. In this podcast I joined Jan Oberhauser (N8N), Nick O\u0026rsquo;Leary (Node Red), and the SAP Customer Experience Labs team to discuss No Code / Low Code.\nListen to the full episode on SAP\u0026rsquo;s website\n","date":"February 6, 2020","externalUrl":null,"permalink":"/2020/02/sap-customer-experience-labs-talk-episode-7-no-code-/-low-code/","section":"Blog","summary":"One of my strong beliefs is that coding should be available to everyone. Whether that is a seasoned developer or someone who just wants to connect two systems together. With Project Flogo, we’ve made it possible for everyone to use the same constructs. If you want to use the web-based flow designer, that’s awesome! If you want to write your apps using the Go API, that’s awesome too. In this podcast I joined Jan Oberhauser (N8N), Nick O’Leary (Node Red), and the SAP Customer Experience Labs team to discuss No Code / Low Code.\n","title":"SAP Customer Experience Labs Talk – Episode 7 No Code / Low Code","type":"blog"},{"content":"I\u0026rsquo;ve used Pulumi to do a bunch of things so far: creating subnets in a VPC, building EKS clusters, and DynamoDB tables. 
The one thing I hadn\u0026rsquo;t tried yet was deploying Lambda functions, so that\u0026rsquo;s what this post covers.\nThe complete project is available on GitHub.\nMy Lambda # The Lambda function here is straightforward — it reads an environment variable and says hello:\npackage main import ( \u0026#34;fmt\u0026#34; \u0026#34;os\u0026#34; \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; \u0026#34;github.com/aws/aws-lambda-go/lambda\u0026#34; ) func handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) { val := os.Getenv(\u0026#34;NAME\u0026#34;) return events.APIGatewayProxyResponse{ Body: fmt.Sprintf(\u0026#34;Hello, %s\u0026#34;, val), StatusCode: 200, }, nil } func main() { lambda.Start(handler) } That code lives in a file called main.go inside a hello-world folder. The Pulumi project sits in its own pulumi folder so it doesn\u0026rsquo;t collide with your Lambda code. The folder structure looks like this:\n├── README.md ├── go.mod ├── go.sum ├── hello-world │ └── main.go └── pulumi ├── Pulumi.lambdastack.yaml ├── Pulumi.yaml └── main.go Building and uploading your Lambda code # To deploy a Lambda function, the code needs to be packaged and uploaded. Pulumi has an Archive concept for creating zip files, but the Go implementation has a known issue that makes it unusable. Instead, you can extend the Pulumi program to handle the build, zip, and upload steps before the main run:\nconst ( shell = \u0026#34;sh\u0026#34; shellFlag = \u0026#34;-c\u0026#34; rootFolder = \u0026#34;/rootfolder/of/your/lambdaapp\u0026#34; ) func runCmd(args string) error { cmd := exec.Command(shell, shellFlag, args) cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr cmd.Dir = rootFolder return cmd.Run() } The runCmd method runs a shell command and returns an error or nil. These three calls build the binary, zip it, and upload it to S3. 
Place them before pulumi.Run():\nif err := runCmd(\u0026#34;GOOS=linux GOARCH=amd64 go build -o hello-world/hello-world ./hello-world\u0026#34;); err != nil { fmt.Printf(\u0026#34;Error building code: %s\u0026#34;, err.Error()) os.Exit(1) } if err := runCmd(\u0026#34;zip -r -j ./hello-world/hello-world.zip ./hello-world/hello-world\u0026#34;); err != nil { fmt.Printf(\u0026#34;Error creating zipfile: %s\u0026#34;, err.Error()) os.Exit(1) } if err := runCmd(\u0026#34;aws s3 cp ./hello-world/hello-world.zip s3://\u0026lt;your-bucket\u0026gt;/hello-world.zip\u0026#34;); err != nil { fmt.Printf(\u0026#34;Error uploading zipfile: %s\u0026#34;, err.Error()) os.Exit(1) } If any of these fail, you\u0026rsquo;ll see the output and error message in the diagnostics section of your terminal.\nCreating an IAM role # Every Lambda function needs an IAM role to operate. This one just needs permission to run. The ARN (Amazon Resource Name) is exported so it\u0026rsquo;s visible in the Pulumi console:\n// The policy description of the IAM role, in this case only the sts:AssumeRole is needed roleArgs := \u0026amp;iam.RoleArgs{ AssumeRolePolicy: `{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Action\u0026#34;: \u0026#34;sts:AssumeRole\u0026#34;, \u0026#34;Principal\u0026#34;: { \u0026#34;Service\u0026#34;: \u0026#34;lambda.amazonaws.com\u0026#34; }, \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Sid\u0026#34;: \u0026#34;\u0026#34; } ] }`, } // Create a new role called HelloWorldIAMRole role, err := iam.NewRole(ctx, \u0026#34;HelloWorldIAMRole\u0026#34;, roleArgs) if err != nil { fmt.Printf(\u0026#34;role error: %s\\n\u0026#34;, err.Error()) return err } // Export the role ARN as an output of the Pulumi stack ctx.Export(\u0026#34;Role ARN\u0026#34;, role.Arn()) Setting environment variables # The Pulumi SDK lets you define environment variables, just like CloudFormation. 
Since the Lambda function reads a NAME variable, you set it up as a nested map:\nenvironment := make(map[string]interface{}) variables := make(map[string]interface{}) variables[\u0026#34;NAME\u0026#34;] = \u0026#34;WORLD\u0026#34; environment[\u0026#34;variables\u0026#34;] = variables Creating the function # The last step is creating the Lambda function itself. The S3Bucket and S3Key point to the zip file you uploaded earlier, and role.Arn() references the IAM role:\n// The set of arguments for constructing a Function resource. functionArgs := \u0026amp;lambda.FunctionArgs{ Description: \u0026#34;My Lambda function\u0026#34;, Runtime: \u0026#34;go1.x\u0026#34;, Name: \u0026#34;HelloWorldFunction\u0026#34;, MemorySize: 256, Timeout: 10, Handler: \u0026#34;hello-world\u0026#34;, Environment: environment, S3Bucket: \u0026#34;\u0026lt;your-bucket\u0026gt;\u0026#34;, S3Key: \u0026#34;hello-world.zip\u0026#34;, Role: role.Arn(), } // NewFunction registers a new resource with the given unique name, arguments, and options. function, err := lambda.NewFunction(ctx, \u0026#34;HelloWorldFunction\u0026#34;, functionArgs) if err != nil { fmt.Println(err.Error()) return err } // Export the function ARN as an output of the Pulumi stack ctx.Export(\u0026#34;Function\u0026#34;, function.Arn()) Running Pulumi up # With everything in place, run pulumi up to deploy. If you need details on setting up a Go project for Pulumi, check out this post.\n$ pulumi up Previewing update (lambda): Type Name Plan Info + pulumi:pulumi:Stack lambda-lambda create 2 messages + ├─ aws:iam:Role HelloWorldIAMRole create + └─ aws:lambda:Function HelloWorldFunction create Diagnostics: pulumi:pulumi:Stack (lambda-lambda): updating: hello-world/hello-world (deflated 49%) upload: hello-world/hello-world.zip to s3://\u0026lt;your-bucket\u0026gt;/hello-world.zip Resources: + 3 to create Do you want to perform this update? 
yes Updating (lambda): Type Name Status Info + pulumi:pulumi:Stack lambda-lambda created 2 messages + ├─ aws:iam:Role HelloWorldIAMRole created + └─ aws:lambda:Function HelloWorldFunction created Diagnostics: pulumi:pulumi:Stack (lambda-lambda): updating: hello-world/hello-world (deflated 49%) upload: hello-world/hello-world.zip to s3://\u0026lt;your-bucket\u0026gt;/hello-world.zip Outputs: Function: \u0026#34;arn:aws:lambda:us-west-2:ACCOUNTID:function:HelloWorldFunction\u0026#34; Role ARN: \u0026#34;arn:aws:iam::ACCOUNTID:role/HelloWorldIAMRole-7532034\u0026#34; Resources: + 3 created Duration: 44s Permalink: https://app.pulumi.com/retgits/lambda/lambda/updates/1 Testing with the AWS Console # In the Pulumi console, you can see the resources that were created:\nIn the AWS Lambda console, you can test the function and confirm it responds with \u0026ldquo;Hello, WORLD\u0026rdquo;:\nCover image by Kevin Horvat on Unsplash\n","date":"January 28, 2020","externalUrl":null,"permalink":"/2020/01/how-to-create-aws-lambda-functions-using-pulumi-and-golang/","section":"Blog","summary":"I’ve used Pulumi to do a bunch of things so far: creating subnets in a VPC, building EKS clusters, and DynamoDB tables. The one thing I hadn’t tried yet was deploying Lambda functions, so that’s what this post covers.\n","title":"How To Create AWS Lambda Functions Using Pulumi And Golang","type":"blog"},{"content":"As a developer, I\u0026rsquo;ve built apps and wrote code. As a cheesecake connoisseur, I\u0026rsquo;ve tried many different kinds of cheesecake. After I got to talk to some of the bakers, I realized that building apps and baking cheesecake have a lot in common. It all starts with knowing and trusting your ingredients.\nAccording to Tidelift, over 90 percent of applications contain some open source packages. Developers choose open source because they believe it\u0026rsquo;s better, more flexible, and more extendible. 
A lot of developers also worry about how well packages are maintained and how security vulnerabilities are identified and solved.\nWhether you deploy your apps as functions, containers, or on virtual machines, trusting your ingredients will always be an essential part of building secure code. In the first nine months of last year, close to 17,000 new vulnerabilities were discovered. Almost two-thirds of the disclosed vulnerabilities can be solved by upgrading or patching.\nIBM Security, with its Cost of a Data Breach report, looks at the costs for companies when something does go wrong. On average, it takes companies 279 days to identify that they have a security breach and contain it. The average cost of a security breach is $3.92M and the US stands out with an average of $8.19M.\nThe sooner you realize that the packages you\u0026rsquo;re using have a security flaw, the easier it is to fix it. That remains true whether you deploy your apps as containers, on virtual machines, or as functions. From a security point-of-view, you\u0026rsquo;ll want to:\nGet your dependencies from a trusted source; Scan your dependencies for known vulnerabilities; Keep track of which dependencies you have deployed. Let\u0026rsquo;s look at these three pieces in a little more detail when you\u0026rsquo;re dealing with functions.\nTrust, but verify # No matter which programming language, you want to make sure that your dependencies come from a trusted location. These locations can be internal, like JFrog Artifactory or Sonatype Nexus, or external. For example, Python developers will get their modules from PyPi and Node.js developers will get their modules from NPM. Quite recently, the Go community got its module mirror and checksum database too. 
All of these sources give developers the ability to verify the integrity of the package they\u0026rsquo;ve downloaded.\nIf you\u0026rsquo;re a Go developer, like me, using Go modules with the checksum database will help when you want to verify the integrity of modules. For example, if my go.sum file (which gets checksums from the database) contains a different checksum than go get calculates, it\u0026rsquo;ll stop you from downloading the module.\n$ go get ./... verifying github.com/aws/aws-sdk-go@v1.27.0/go.mod: checksum mismatch downloaded: h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= go.sum: h1:HmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= SECURITY ERROR This download does NOT match an earlier download recorded in go.sum. The bits may have been replaced on the origin server, or an attacker may have intercepted the download attempt. As a developer, regardless of where I deploy, this helps me make sure my ingredients come from a trusted location and are not tampered with.\nFinding known vulnerabilities # Getting your dependencies from a trusted location solves one part of the problem. The second part of the problem is finding vulnerabilities in those dependencies. There are lots of solutions available depending on your language of choice. Using the UI, GoCenter shows if the module you\u0026rsquo;re looking at has any vulnerabilities (in any of the versions indexed by GoCenter). The red warning sign in the image below shows how that might look.\nThe team at Snyk.io allows developers to sign up for a free plan and use the Snyk CLI for unlimited tests on open source projects and 200 tests on private projects. The Snyk CLI gives developers, for a lot of different languages, the ability to find and fix vulnerabilities in the dependencies of a project. Simply running snyk test will list all the vulnerabilities, including suggested fixes if there are any.\nTesting /github/workspace... 
Organization: retgits Package manager: gomodules Target file: go.mod Project name: github.com/retgits/testrepo Open source: no Project path: /github/workspace Licenses: enabled ✓ Tested 200 dependencies for known issues, no vulnerable paths found. Next steps: - Run `snyk monitor` to be notified about new related vulnerabilities. - Run `snyk test` as part of your CI/test. Keeping track of dependencies # Manually validating that your dependencies are safe is a good practice for developers. Automating that, however, is even better. If you decide to give Snyk access to your source code repositories, you\u0026rsquo;ll get the ability to have Snyk automatically test all pull requests, automatically create pull requests to fix any security vulnerabilities it finds, and even raise pull requests to update out-of-date dependencies. This is all on top of periodically scanning your dependencies for new vulnerabilities.\nMaking sure that every build or deployment of your function is also checked helps keep your data safe. In my CI/CD pipelines, I run a vulnerability scan on every push. The YAML snippet below is all that\u0026rsquo;s needed to let Snyk scan your dependencies every time your GitHub Actions pipeline runs.\n- name: Vulnerability scan id: snyk-test uses: snyk/actions/golang@master env: SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }} If your functions are built using Node.js, Ruby, or Java, you can even connect Snyk directly to your AWS Lambda functions to monitor the deployed dependencies for vulnerabilities. This makes sure that if there are any new security vulnerabilities in the actual deployed and running code, you\u0026rsquo;ll know just as quickly as Snyk does.\nWhat\u0026rsquo;s next # With more and more apps containing open source packages, we\u0026rsquo;re collectively responsible for making sure we get dependencies from a trusted source, scan our dependencies for known vulnerabilities, and keep track of which dependencies we have deployed. 
Keeping our apps and our data safe is a shared responsibility. As we\u0026rsquo;re all working on better security, let me know your thoughts and send me or the team a note on Twitter.\n","date":"January 24, 2020","externalUrl":null,"permalink":"/2020/01/trusting-your-ingredients-whats-in-your-function-anyway/","section":"Blog","summary":"As a developer, I’ve built apps and written code. As a cheesecake connoisseur, I’ve tried many different kinds of cheesecake. After I got to talk to some of the bakers, I realized that building apps and baking cheesecake have a lot in common. It all starts with knowing and trusting your ingredients.\nAccording to Tidelift, over 90 percent of applications contain some open source packages. Developers choose open source because they believe it’s better, more flexible, and more extensible. A lot of developers also worry about how well packages are maintained and how security vulnerabilities are identified and solved.\n","title":"Trusting your ingredients - What's in your function anyway?","type":"blog"},{"content":"In previous posts, I used Pulumi for VPCs, subnets, and EKS clusters. Most apps also need a datastore, so this post covers creating a DynamoDB table.\nThe complete project is available on GitHub.\nStatic types # One of Go\u0026rsquo;s strengths is static typing — you know what goes in and what comes out of every function. At the time of writing, the Pulumi Go SDK didn\u0026rsquo;t offer static types for AWS resources, so the code below defines DynamoAttribute and GlobalSecondaryIndex types to fill that gap:\n// DynamoAttribute represents an attribute for describing the key schema for the table and indexes. 
type DynamoAttribute struct { Name string Type string } // DynamoAttributes is an array of DynamoAttribute type DynamoAttributes []DynamoAttribute // GlobalSecondaryIndex represents the properties of a global secondary index type GlobalSecondaryIndex struct { Name string HashKey string ProjectionType string WriteCapacity int ReadCapacity int } // GlobalSecondaryIndexes is an array of GlobalSecondaryIndex type GlobalSecondaryIndexes []GlobalSecondaryIndex The tableArgs expects interface{} for these fields, and the underlying infrastructure needs them as a list. These ToList() methods handle that conversion:\n// ToList takes a DynamoAttributes object and turns that into a slice of map[string]interface{} so it can be correctly passed to the Pulumi runtime func (d DynamoAttributes) ToList() []map[string]interface{} { array := make([]map[string]interface{}, len(d)) for idx, attr := range d { m := make(map[string]interface{}) m[\u0026#34;name\u0026#34;] = attr.Name m[\u0026#34;type\u0026#34;] = attr.Type array[idx] = m } return array } // ToList takes a GlobalSecondaryIndexes object and turns that into a slice of map[string]interface{} so it can be correctly passed to the Pulumi runtime func (g GlobalSecondaryIndexes) ToList() []map[string]interface{} { array := make([]map[string]interface{}, len(g)) for idx, attr := range g { m := make(map[string]interface{}) m[\u0026#34;name\u0026#34;] = attr.Name m[\u0026#34;hash_key\u0026#34;] = attr.HashKey m[\u0026#34;projection_type\u0026#34;] = attr.ProjectionType m[\u0026#34;write_capacity\u0026#34;] = attr.WriteCapacity m[\u0026#34;read_capacity\u0026#34;] = attr.ReadCapacity array[idx] = m } return array } This gives you typed objects in your code while still satisfying the Pulumi runtime.\nBuilding a table # Here\u0026rsquo;s where it all comes together. This example creates a table with usernames and unique IDs (used in one of my apps for order data). 
The actual order data isn\u0026rsquo;t modeled here, so it won\u0026rsquo;t be queryable.\n// Create the attributes for ID and User dynamoAttributes := DynamoAttributes{ DynamoAttribute{ Name: \u0026#34;ID\u0026#34;, Type: \u0026#34;S\u0026#34;, }, DynamoAttribute{ Name: \u0026#34;User\u0026#34;, Type: \u0026#34;S\u0026#34;, }, } // Create a Global Secondary Index for the user field gsi := GlobalSecondaryIndexes{ GlobalSecondaryIndex{ Name: \u0026#34;User\u0026#34;, HashKey: \u0026#34;User\u0026#34;, ProjectionType: \u0026#34;ALL\u0026#34;, WriteCapacity: 10, ReadCapacity: 10, }, } // Create a TableArgs struct that contains all the data tableArgs := \u0026amp;dynamodb.TableArgs{ Attributes: dynamoAttributes.ToList(), HashKey: \u0026#34;ID\u0026#34;, WriteCapacity: 10, ReadCapacity: 10, GlobalSecondaryIndexes: gsi.ToList(), } // Let the Pulumi runtime create the table userTable, err := dynamodb.NewTable(ctx, \u0026#34;User\u0026#34;, tableArgs) if err != nil { return err } // Export the name of the newly created table as an output in the stack ctx.Export(\u0026#34;TableName\u0026#34;, userTable.ID()) Complete code # Here\u0026rsquo;s everything combined into a single runnable Go program:\npackage main import ( \u0026#34;github.com/pulumi/pulumi-aws/sdk/go/aws/dynamodb\u0026#34; \u0026#34;github.com/pulumi/pulumi/sdk/go/pulumi\u0026#34; ) // DynamoAttribute represents an attribute for describing the key schema for the table and indexes. 
type DynamoAttribute struct { Name string Type string } // DynamoAttributes is an array of DynamoAttribute type DynamoAttributes []DynamoAttribute // ToList takes a DynamoAttributes object and turns that into a slice of map[string]interface{} so it can be correctly passed to the Pulumi runtime func (d DynamoAttributes) ToList() []map[string]interface{} { array := make([]map[string]interface{}, len(d)) for idx, attr := range d { m := make(map[string]interface{}) m[\u0026#34;name\u0026#34;] = attr.Name m[\u0026#34;type\u0026#34;] = attr.Type array[idx] = m } return array } // GlobalSecondaryIndex represents the properties of a global secondary index type GlobalSecondaryIndex struct { Name string HashKey string ProjectionType string WriteCapacity int ReadCapacity int } // GlobalSecondaryIndexes is an array of GlobalSecondaryIndex type GlobalSecondaryIndexes []GlobalSecondaryIndex // ToList takes a GlobalSecondaryIndexes object and turns that into a slice of map[string]interface{} so it can be correctly passed to the Pulumi runtime func (g GlobalSecondaryIndexes) ToList() []map[string]interface{} { array := make([]map[string]interface{}, len(g)) for idx, attr := range g { m := make(map[string]interface{}) m[\u0026#34;name\u0026#34;] = attr.Name m[\u0026#34;hash_key\u0026#34;] = attr.HashKey m[\u0026#34;projection_type\u0026#34;] = attr.ProjectionType m[\u0026#34;write_capacity\u0026#34;] = attr.WriteCapacity m[\u0026#34;read_capacity\u0026#34;] = attr.ReadCapacity array[idx] = m } return array } func main() { pulumi.Run(func(ctx *pulumi.Context) error { // Create the attributes for ID and User dynamoAttributes := DynamoAttributes{ DynamoAttribute{ Name: \u0026#34;ID\u0026#34;, Type: \u0026#34;S\u0026#34;, }, DynamoAttribute{ Name: \u0026#34;User\u0026#34;, Type: \u0026#34;S\u0026#34;, }, } // Create a Global Secondary Index for the user field gsi := GlobalSecondaryIndexes{ GlobalSecondaryIndex{ Name: \u0026#34;User\u0026#34;, HashKey: \u0026#34;User\u0026#34;, 
ProjectionType: \u0026#34;ALL\u0026#34;, WriteCapacity: 10, ReadCapacity: 10, }, } // Create a TableArgs struct that contains all the data tableArgs := \u0026amp;dynamodb.TableArgs{ Attributes: dynamoAttributes.ToList(), HashKey: \u0026#34;ID\u0026#34;, WriteCapacity: 10, ReadCapacity: 10, GlobalSecondaryIndexes: gsi.ToList(), } // Let the Pulumi runtime create the table userTable, err := dynamodb.NewTable(ctx, \u0026#34;User\u0026#34;, tableArgs) if err != nil { return err } // Export the name of the newly created table as an output in the stack ctx.Export(\u0026#34;TableName\u0026#34;, userTable.ID()) }) } Pulumi up # Create a new Pulumi project and run pulumi up. To create a Go-based project:\npulumi new go \\ --name builder \\ --description \u0026#34;An awesome Pulumi infrastructure-as-code Stack\u0026#34; \\ --stack retgits/builderstack Replace the generated Go file with the code above and run pulumi up. For more detail on project setup, check out one of my previous posts.\nCover image by Tobias Fischer on Unsplash\n","date":"January 21, 2020","externalUrl":null,"permalink":"/2020/01/how-to-create-a-dynamodb-table-in-aws-using-pulumi-and-golang/","section":"Blog","summary":"In previous posts, I used Pulumi for VPCs, subnets, and EKS clusters. Most apps also need a datastore, so this post covers creating a DynamoDB table.\n","title":"How To Create a DynamoDB Table In AWS Using Pulumi And Golang","type":"blog"},{"content":"At re:Invent, AWS introduced the ability to run EKS pods on AWS Fargate, and Fargate is cheaper than hosting Kubernetes yourself. In the last post I created an EKS cluster, so this one adds a Fargate profile to it.\nThe complete project is available on GitHub.\nConfiguration # The Fargate profile needs a name, a Kubernetes namespace to target, and an IAM role for pod execution. 
Add this to the YAML file from the previous post:\nfargate:profile-name: EKSFargateProfile fargate:namespace: example fargate:execution-role-arn: \u0026#34;arn:aws:iam::ACCOUNTID:role/EKSFargatePodExecutionRole\u0026#34; For details on creating the IAM role, check the AWS docs. You can use the command line (e.g., pulumi config set fargate:profile-name \u0026quot;EKSFargateProfile\u0026quot;) or edit the YAML file directly. The file is named Pulumi.\u0026lt;name of your project\u0026gt;.yaml.\nAdding the Fargate profile # This code extends the previous post. It reads the profile name and namespace from the YAML file, references the cluster and subnets from earlier posts, and calls eks.NewFargateProfile() to attach the profile to your cluster.\n// Create an EKS Fargate Profile fargateProfileName := getEnv(ctx, \u0026#34;fargate:profile-name\u0026#34;, \u0026#34;unknown\u0026#34;) selectors := make([]map[string]interface{}, 1) namespaces := make(map[string]interface{}) namespaces[\u0026#34;namespace\u0026#34;] = getEnv(ctx, \u0026#34;fargate:namespace\u0026#34;, \u0026#34;unknown\u0026#34;) selectors[0] = namespaces fargateProfileArgs := \u0026amp;eks.FargateProfileArgs{ ClusterName: clusterName, FargateProfileName: fargateProfileName, Tags: tags, SubnetIds: subnets[\u0026#34;subnet_ids\u0026#34;], Selectors: selectors, PodExecutionRoleArn: getEnv(ctx, \u0026#34;fargate:execution-role-arn\u0026#34;, \u0026#34;unknown\u0026#34;), } fargateProfile, err := eks.NewFargateProfile(ctx, fargateProfileName, fargateProfileArgs) if err != nil { fmt.Println(err.Error()) return err } ctx.Export(\u0026#34;FARGATE-PROFILE-ID\u0026#34;, fargateProfile.ID()) Running the code # Run pulumi up to add the Fargate profile. 
If you\u0026rsquo;re using the same project and stack, Pulumi knows the cluster already exists and will only create the new profile.\n$ pulumi up Previewing update (builderstack): Type Name Plan pulumi:pulumi:Stack builder-builderstack + └─ aws:eks:FargateProfile EKSFargateProfile create Outputs: + FARGATE-PROFILE-ID: output\u0026lt;string\u0026gt; Resources: + 1 to create 5 unchanged Do you want to perform this update? yes Updating (builderstack): Type Name Status pulumi:pulumi:Stack builder-builderstack + └─ aws:eks:FargateProfile EKSFargateProfile created Outputs: CLUSTER-ID : \u0026#34;myEKSCluster\u0026#34; + FARGATE-PROFILE-ID: \u0026#34;myEKSCluster:EKSFargateProfile\u0026#34; SUBNET-IDS : [ [0]: \u0026#34;subnet-0a1909bec2e936bd7\u0026#34; [1]: \u0026#34;subnet-09d229c2eb8061979\u0026#34; ] VPC-ID : \u0026#34;vpc-0437c750acf1050c3\u0026#34; Resources: + 1 created 5 unchanged Duration: 2m27s Permalink: https://app.pulumi.com/retgits/builder/builderstack/updates/4 The permalink at the bottom takes you to the Pulumi console where you can see all the details of the execution and the resources that were created.\nCover image by Gerd Altmann from Pixabay\n","date":"January 16, 2020","externalUrl":null,"permalink":"/2020/01/how-to-make-your-aws-eks-cluster-use-fargate-using-pulumi-and-golang/","section":"Blog","summary":"At re:Invent, AWS introduced the ability to run EKS pods on AWS Fargate, and Fargate is cheaper than hosting Kubernetes yourself. In the last post I created an EKS cluster, so this one adds a Fargate profile to it.\n","title":"How To Make Your AWS EKS Cluster Use Fargate Using Pulumi And Golang","type":"blog"},{"content":"Building a Kubernetes cluster from scratch is hard, which is why managed services exist. In the previous post I added subnets to a VPC. This post uses that VPC to create an AWS EKS cluster.\nThe complete project is available on GitHub.\nConfiguration # At minimum, you need a cluster name, a Kubernetes version, and an IAM role. 
Specifying which log types to send to CloudWatch is optional but helpful for debugging. Add this to the YAML file from the previous post:\neks:cluster-name: myEKSCluster eks:k8s-version: \u0026#34;1.14\u0026#34; eks:cluster-role-arn: \u0026#34;arn:aws:iam::ACCOUNTID:role/ServiceRoleForAmazonEKS\u0026#34; eks:cluster-log-types: \u0026#34;api,audit,authenticator,scheduler,controllerManager\u0026#34; You can use the command line (e.g., pulumi config set eks:cluster-name \u0026quot;myEKSCluster\u0026quot;) or edit the YAML file directly. The file is named Pulumi.\u0026lt;name of your project\u0026gt;.yaml.\nCreating the cluster # This code extends the previous post. It reads the cluster name and log types from the YAML file, uses the subnets created earlier, and calls eks.NewCluster() to create the EKS cluster in your existing VPC.\n// Create an EKS cluster clusterName := getEnv(ctx, \u0026#34;eks:cluster-name\u0026#34;, \u0026#34;unknown\u0026#34;) enabledClusterLogTypes := strings.Split(getEnv(ctx, \u0026#34;eks:cluster-log-types\u0026#34;, \u0026#34;unknown\u0026#34;), \u0026#34;,\u0026#34;) clusterArgs := \u0026amp;eks.ClusterArgs{ Name: clusterName, Version: getEnv(ctx, \u0026#34;eks:k8s-version\u0026#34;, \u0026#34;unknown\u0026#34;), RoleArn: getEnv(ctx, \u0026#34;eks:cluster-role-arn\u0026#34;, \u0026#34;unknown\u0026#34;), Tags: tags, VpcConfig: subnets, EnabledClusterLogTypes: enabledClusterLogTypes, } cluster, err := eks.NewCluster(ctx, clusterName, clusterArgs) if err != nil { fmt.Println(err.Error()) return err } ctx.Export(\u0026#34;CLUSTER-ID\u0026#34;, cluster.ID()) Running the code # Run pulumi up to create the cluster. If you\u0026rsquo;re using the same project and stack, Pulumi knows the VPC already exists and will only create the EKS cluster. Fair warning: this can take a while. 
In my case it was almost 10 minutes.\n$ pulumi up Previewing update (builderstack): Type Name Plan pulumi:pulumi:Stack builder-builderstack + └─ aws:eks:Cluster myEKSCluster create Outputs: + CLUSTER-ID: output\u0026lt;string\u0026gt; Resources: + 1 to create 4 unchanged Do you want to perform this update? yes Updating (builderstack): Type Name Status pulumi:pulumi:Stack builder-builderstack + └─ aws:eks:Cluster myEKSCluster created Outputs: + CLUSTER-ID: \u0026#34;myEKSCluster\u0026#34; SUBNET-IDS: [ [0]: \u0026#34;subnet-\u0026lt;id\u0026gt;\u0026#34; [1]: \u0026#34;subnet-\u0026lt;id\u0026gt;\u0026#34; ] VPC-ID : \u0026#34;vpc-\u0026lt;id\u0026gt;\u0026#34; Resources: + 1 created 4 unchanged Duration: 9m55s Permalink: https://app.pulumi.com/retgits/builder/builderstack/updates/3 The permalink at the bottom takes you to the Pulumi console where you can see all the details of the execution and the resources that were created.\nCover image by Gerd Altmann from Pixabay\n","date":"January 14, 2020","externalUrl":null,"permalink":"/2020/01/how-to-create-an-aws-eks-cluster-using-pulumi-and-golang/","section":"Blog","summary":"Building a Kubernetes cluster from scratch is hard, which is why managed services exist. In the previous post I added subnets to a VPC. This post uses that VPC to create an AWS EKS cluster.\n","title":"How To Create An AWS EKS Cluster Using Pulumi And Golang","type":"blog"},{"content":"In the previous post, I used Pulumi to create a VPC. This post picks up where that left off and adds subnets to it.\nThe complete project is available on GitHub.\nConfiguration # A subnet is a logical partition of your network. A VPC spans all availability zones in a region, but each subnet lives in a single availability zone. For high availability, you need at least two zones, each with its own CIDR block. 
Add this to the YAML file from the previous post:\nvpc:subnet-zones: \u0026#34;us-east-1a,us-east-1c\u0026#34; vpc:subnet-ips: \u0026#34;172.32.32.0/20,172.32.80.0/20\u0026#34; You can use the command line (e.g., pulumi config set vpc:subnet-zones \u0026quot;us-east-1a,us-east-1c\u0026quot;) or edit the YAML file directly. The file is named Pulumi.\u0026lt;name of your project\u0026gt;.yaml.\nCreating subnets # This code extends the previous post. It reads the zone and CIDR block configuration, splits on the comma delimiter, and loops through each zone to create a subnet inside the VPC. Each subnet ID gets added to an array for export to the Pulumi console.\n// Create the required number of subnets subnets := make(map[string]interface{}) subnets[\u0026#34;subnet_ids\u0026#34;] = make([]interface{}, 0) subnetZones := strings.Split(getEnv(ctx, \u0026#34;vpc:subnet-zones\u0026#34;, \u0026#34;unknown\u0026#34;), \u0026#34;,\u0026#34;) subnetIPs := strings.Split(getEnv(ctx, \u0026#34;vpc:subnet-ips\u0026#34;, \u0026#34;unknown\u0026#34;), \u0026#34;,\u0026#34;) for idx, availabilityZone := range subnetZones { subnetArgs := \u0026amp;ec2.SubnetArgs{ Tags: tags, VpcId: vpc.ID(), CidrBlock: subnetIPs[idx], AvailabilityZone: availabilityZone, } subnet, err := ec2.NewSubnet(ctx, fmt.Sprintf(\u0026#34;%s-subnet-%d\u0026#34;, vpcName, idx), subnetArgs) if err != nil { fmt.Println(err.Error()) return err } subnets[\u0026#34;subnet_ids\u0026#34;] = append(subnets[\u0026#34;subnet_ids\u0026#34;].([]interface{}), subnet.ID()) } ctx.Export(\u0026#34;SUBNET-IDS\u0026#34;, subnets[\u0026#34;subnet_ids\u0026#34;]) Running the code # Run pulumi up to add the subnets. 
If you\u0026rsquo;re using the same project and stack, Pulumi knows the VPC already exists and will only create the new subnets.\n$ pulumi up Previewing update (builderstack): Type Name Plan pulumi:pulumi:Stack builder-builderstack + ├─ aws:ec2:Subnet myPulumiVPC-subnet-1 create + └─ aws:ec2:Subnet myPulumiVPC-subnet-0 create Outputs: + SUBNET-IDS: [ + [0]: output\u0026lt;string\u0026gt; + [1]: output\u0026lt;string\u0026gt; ] Resources: + 2 to create 2 unchanged Do you want to perform this update? yes Updating (builderstack): Type Name Status pulumi:pulumi:Stack builder-builderstack + ├─ aws:ec2:Subnet myPulumiVPC-subnet-1 created + └─ aws:ec2:Subnet myPulumiVPC-subnet-0 created Outputs: + SUBNET-IDS: [ + [0]: \u0026#34;subnet-\u0026lt;id\u0026gt;\u0026#34; + [1]: \u0026#34;subnet-\u0026lt;id\u0026gt;\u0026#34; ] VPC-ID : \u0026#34;vpc-\u0026lt;id\u0026gt;\u0026#34; Resources: + 2 created 2 unchanged Duration: 8s Permalink: https://app.pulumi.com/retgits/builder/builderstack/updates/2 The permalink at the bottom takes you to the Pulumi console where you can see all the details of the execution and the resources that were created.\nCover image by StockSnap from Pixabay\n","date":"January 9, 2020","externalUrl":null,"permalink":"/2020/01/how-to-add-subnets-to-a-vpc-in-aws-using-pulumi-and-golang/","section":"Blog","summary":"In the previous post, I used Pulumi to create a VPC. This post picks up where that left off and adds subnets to it.\n","title":"How To Add Subnets To a VPC In AWS Using Pulumi And Golang","type":"blog"},{"content":"Your source code is only one piece of what goes into production. You also need API gateways, S3 buckets, VPCs, and other infrastructure. Configuring those by hand is tedious and error-prone. Pulumi lets you define all of that in the same language you build your app in.\nMost resources on the major cloud providers need to live inside a VPC, so that\u0026rsquo;s a natural starting point. 
This post walks through creating one with the Pulumi Go SDK. The complete project is available on GitHub.\nA new project # If you haven\u0026rsquo;t already, head over to the Pulumi website to create an account (free for personal use). Once you\u0026rsquo;ve installed the CLI, you can create a new project. The go template gives you a starting point, and you can set the project name, description, and stack name via flags:\npulumi new go \\ --name builder \\ --description \u0026#34;An awesome Pulumi infrastructure-as-code Stack\u0026#34; \\ --stack retgits/builderstack In this example, the project name is builder and the stack is retgits/builderstack.\nUsing Go modules # The default Go template still uses dep for dependency management, but Go modules are the way to go now. Switching over takes three commands:\ngo mod init github.com/retgits/builder go mod tidy rm Gopkg.* The first creates a new Go module, porting the dependencies from the existing dep files into go.mod. The second adds any missing dependencies and removes unused ones. The third cleans up the old Gopkg files.\nDefault configuration variables # Pulumi needs to know how to connect to AWS. You configure the AWS provider through the command line with pulumi config set aws:\u0026lt;option\u0026gt;:\npulumi config set aws:profile default pulumi config set aws:region us-east-1 The aws:region parameter is required and tells Pulumi where to deploy. The aws:profile parameter is optional and maps to the profiles you created with aws configure.\nAdding more configuration variables # AWS recommends tagging your resources so they\u0026rsquo;re easier to manage, search, and filter. The tags used here are version, author, team, and feature. Feel free to adjust them in the source code. 
You\u0026rsquo;ll also need a VPC name and CIDR block:\ntags:version: \u0026#34;0.1.0\u0026#34; tags:author: \u0026lt;your name\u0026gt; tags:team: \u0026lt;your team\u0026gt; tags:feature: myFirstVPCWithPulumi vpc:name: myPulumiVPC vpc:cidr-block: \u0026#34;172.32.0.0/16\u0026#34; You can add these with the command line (e.g., pulumi config set tags:version \u0026quot;0.1.0\u0026quot;) or edit the YAML file directly. The file is named Pulumi.\u0026lt;name of your project\u0026gt;.yaml.\nUsing configuration variables in your code # Configuration variables from the YAML file are accessible through the pulumi.Context object. The GetConfig() method returns a string and a boolean indicating whether the key was found. A small helper function makes this cleaner:\n// getEnv searches for the requested key in the pulumi context and provides either the value of the key or the fallback. func getEnv(ctx *pulumi.Context, key string, fallback string) string { if value, ok := ctx.GetConfig(key); ok { return value } return fallback } Making it all work # With the helper in place, here\u0026rsquo;s the code that creates the VPC. 
It builds a tag map, sets up the VPC arguments, and exports the VPC ID so it shows up in the Pulumi console:\nfunc main() { pulumi.Run(func(ctx *pulumi.Context) error { // Prepare the tags that are used for each individual resource so they can be found // using the Resource Groups service in the AWS Console tags := make(map[string]interface{}) tags[\u0026#34;version\u0026#34;] = getEnv(ctx, \u0026#34;tags:version\u0026#34;, \u0026#34;unknown\u0026#34;) tags[\u0026#34;author\u0026#34;] = getEnv(ctx, \u0026#34;tags:author\u0026#34;, \u0026#34;unknown\u0026#34;) tags[\u0026#34;team\u0026#34;] = getEnv(ctx, \u0026#34;tags:team\u0026#34;, \u0026#34;unknown\u0026#34;) tags[\u0026#34;feature\u0026#34;] = getEnv(ctx, \u0026#34;tags:feature\u0026#34;, \u0026#34;unknown\u0026#34;) tags[\u0026#34;region\u0026#34;] = getEnv(ctx, \u0026#34;aws:region\u0026#34;, \u0026#34;unknown\u0026#34;) // Create a VPC for the EKS cluster cidrBlock := getEnv(ctx, \u0026#34;vpc:cidr-block\u0026#34;, \u0026#34;unknown\u0026#34;) vpcArgs := \u0026amp;ec2.VpcArgs{ CidrBlock: cidrBlock, Tags: tags, } vpcName := getEnv(ctx, \u0026#34;vpc:name\u0026#34;, \u0026#34;unknown\u0026#34;) vpc, err := ec2.NewVpc(ctx, vpcName, vpcArgs) if err != nil { fmt.Println(err.Error()) return err } // Export IDs of the created resources to the Pulumi stack ctx.Export(\u0026#34;VPC-ID\u0026#34;, vpc.ID()) return nil }) } Running the code # Go doesn\u0026rsquo;t use a package manager to install Pulumi plugins like other languages do, so you need to install the AWS provider manually:\npulumi plugin install resource aws v1.17.0 Then run pulumi up to create the VPC:\n$ pulumi up Previewing update (builderstack): Type Name Plan + pulumi:pulumi:Stack builder-builderstack create + └─ aws:ec2:Vpc myPulumiVPC create Resources: + 2 to create Do you want to perform this update? 
yes Updating (builderstack): Type Name Status + pulumi:pulumi:Stack builder-builderstack created + └─ aws:ec2:Vpc myPulumiVPC created Outputs: VPC-ID: \u0026#34;vpc-\u0026lt;id\u0026gt;\u0026#34; Resources: + 2 created Duration: 11s Permalink: https://app.pulumi.com/retgits/builder/builderstack/updates/1 The permalink at the bottom takes you to the Pulumi console where you can see all the details of the execution and the resources that were created.\nCover image by Free-Photos from Pixabay\n","date":"January 7, 2020","externalUrl":null,"permalink":"/2020/01/how-to-create-a-vpc-in-aws-using-pulumi-and-golang/","section":"Blog","summary":"Your source code is only one piece of what goes into production. You also need API gateways, S3 buckets, VPCs, and other infrastructure. Configuring those by hand is tedious and error-prone. Pulumi lets you define all of that in the same language you build your app in.\n","title":"How To Create a VPC In AWS Using Pulumi And Golang","type":"blog"},{"content":"As a trend, cloud vendors tend to use the word serverless quite loosely. Serverless comes in a lot of shapes and sizes, and as long as a service’s characteristics fit within the four categories from my last blog, it is a serverless service. To make sure that we’re all on the same page, I’ll use the following definition for serverless:\n“Serverless is a development model where developers focus on a single unit of work and can deploy to a platform that automatically scales, without developer intervention.”\nIn this blog post, we’ll look at how that model works on AWS Fargate, which allows you to run containers without having to manage servers or clusters.\nTL;DR Serverless is cheaper!\nContainers, like really? # In the serverless space, products are continuously changing to make it easier for developers to adopt the technology and build awesome apps. There is a tradeoff, though, between control and abstraction. 
As a developer, you have to choose how much control you’re willing to give up to gain abstraction from things you don’t want to worry about.\nFrom: Forrest Brazeal\nIt turns out that containers are an amazing way to package up software with very specific dependencies. Those dependencies can be anything from highly specific operating system dependencies needed to train your machine learning model to binaries like ImageMagick. As a developer, you’re abstracted away from the underlying infrastructure (you don’t have to worry about the VMs, clusters, or pods that your app will run on), but you are responsible for the runtime in the container itself. So, with containers and services like Google Cloud Run, OpenFaaS, or AWS Fargate, you’re not giving up all runtime control, but you do get more abstraction.\nWhere do your containers come from? # The containers that we’ve used and built for the ACME Fitness Shop come from container images created by Bitnami. Bitnami creates container images from over 150 open source projects, like Kafka and Redis, which we use in our demos, and makes sure they’re always up to date. The Bitnami team has security as one of their core values, so they also make sure that the containers they build don’t have known security vulnerabilities in them. That gives developers trusted images so they can focus on delivering added value. Safety is everyone’s concern, so you still want to make sure that the dependencies and software you add to those containers are secure too. In our pipelines, we use tools like Snyk.io and GitLab’s container scanning feature to validate that the dependencies we add don’t introduce any critical vulnerabilities. For more details on how we built our pipelines, I recommend reading Continuous Verification in Action.\nWhat does that cost me? # No matter what you’re looking at, cost is always a major factor when it comes to adopting any technology. Whether it’s the cost of licenses or the cost of running an app, it matters. 
To draw an honest, cost-wise comparison between the different options, let’s look at the ACME Fitness Shop we’ve built. Our GitHub repository contains scripts to deploy the app to a Kubernetes cluster, run it using docker-compose, or deploy it to AWS Fargate.\nYou can deploy the CloudFormation template by running the command below. Be aware, though, that it will create 44 different resources (ranging from a log group to different security groups and the actual ECS cluster) and your AWS account will be charged for it.\naws cloudformation create-stack \\ --capabilities CAPABILITY_NAMED_IAM \\ --stack-name mystack \\ --parameters ParameterKey=User,ParameterValue=\u0026lt;your name\u0026gt; \\ ParameterKey=Team,ParameterValue=\u0026lt;your team\u0026gt; \\ ParameterKey=SourceSecurityGroup,ParameterValue=\u0026lt;your security group\u0026gt; \\ ParameterKey=Subnets,ParameterValue=\u0026lt;your subnet\u0026gt; \\ ParameterKey=VPC,ParameterValue=\u0026lt;your vpc\u0026gt; \\ --template-body file:///\u0026lt;path\u0026gt;/\u0026lt;to\u0026gt;/acme-fitness-shop.yaml I’ve captured an image of the end result from the “Designer View” in the CloudFormation console. The orange clouds represent the Task Definitions, Services, and the ECS cluster. The green squares are for service discovery, using AWS Cloud Map to create a private DNS namespace for the cluster. The red circles are security groups (because it does have to be secure), which limit access to services to only the services that really need it.\nEach individual service, whether it’s a database or a microservice, has a “Task Definition” to describe what it should run and what the resource limits are. All task definitions are given 512MB of memory and 1/4th CPU to run.\nLet’s break down the cost of running this app on AWS Fargate. 
Our fitness shop is still in the startup phase, so I’ll estimate that all services combined will produce about 5GB of log data and have 3GB of data traffic flowing out to the Internet.\nAWS Cloud Map for service discovery\nTo make sure that all services can find each other without relying on IP addresses, we’re using AWS Cloud Map for service discovery. All resources registered via Amazon ECS Service Discovery are free, and you pay for lookup queries and associated DNS charges only. The app needs 13 DNS lookups to make sure all services can find each other, and with the Time-To-Live for those DNS records set to 10 seconds, you’ll need about 4 million DNS requests per month. That translates to a grand total of $4.\nAmazon EC2 security groups\nWe want to limit access to services to only those other services that really need access, and to do that we rely on EC2 security groups. The security groups don’t cost anything.\nAmazon CloudWatch for logging\nWith our fitness shop still in the startup phase, we estimated that the 13 services combined will produce about 5GB of log data. CloudWatch Logs has a free tier up to 5GB so there shouldn’t be any additional charges for CloudWatch Logs.\nAWS Fargate for compute\nThe 13 services that make up the app all have 512MB of memory and 1/4th CPU available to them. Keeping all services running will cost us about $118 ($0.04048 per vCPU per hour and $0.004445 per GB RAM per hour).\nData Traffic\nWith our fitness shop still in the startup phase, we assumed we have 3GB of data flowing out to the Internet every month. That data traffic would cost us $0.18 ($0.09 per GB if you stay below 10TB per month with one GB of traffic being free). Data traffic into EC2 from the Internet is free so that won’t cost anything.\nThe total cost of running our Fitness Shop on AWS Fargate for a month comes to about $122. To put that number in perspective, let’s compare it to what a self-hosted Kubernetes cluster on EC2 would cost. 
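The arithmetic behind the Fargate and DNS estimates can be sketched in a few lines of Go. This is a rough back-of-the-envelope check using the rates quoted above; the 730-hour month is my assumption, and small differences from the "$118" figure come down to rounding.

```go
package main

import "fmt"

func main() {
	const (
		services   = 13       // microservices and databases in the app
		vCPU       = 0.25     // vCPU per task definition
		memGB      = 0.5      // memory per task definition, in GB
		vCPURate   = 0.04048  // USD per vCPU per hour
		memRate    = 0.004445 // USD per GB RAM per hour
		hoursMonth = 730.0    // assumed hours in a month
	)

	// Fargate compute: (CPU + memory) price per service, times 13 services, times a month
	perServiceHour := vCPU*vCPURate + memGB*memRate
	fmt.Printf("Fargate compute: $%.2f/month\n", perServiceHour*services*hoursMonth)

	// Cloud Map DNS: 13 lookups refreshed every 10 seconds (the records' TTL)
	queries := services * (30 * 24 * 3600 / 10)
	fmt.Printf("DNS queries: %.1f million/month\n", float64(queries)/1e6)
}
```

Run with the post's figures, this lands at roughly $117 for compute and about 3.4 million DNS queries, in line with the "about $118" and "about 4 million" estimates above.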
To make it a fair comparison, we’ll use the same assumptions for logs and data traffic.\nKubernetes CoreDNS service discovery\nOne of the great things about Kubernetes is the fact that it comes with CoreDNS for service discovery. That means there are no costs related to it.\nAmazon EC2 security groups\nWhile traffic into the Kubernetes cluster will go through services and no other connectivity is allowed by Kubernetes by default, we still need a few EC2 security groups. Luckily, those are still free.\nAmazon CloudWatch for logging\nWith our fitness shop still in the startup phase, we estimated that the 13 services combined will produce about 5GB of log data. CloudWatch Logs has a free tier up to 5GB so there shouldn’t be any additional charges for CloudWatch Logs.\nData Traffic\nWith our fitness shop still in the startup phase, we assumed we have 3GB of data flowing out to the Internet every month. That data traffic would cost us $0.18 ($0.09 per GB if you stay below 10TB per month with one GB of traffic being free). Data traffic into EC2 from the Internet is free so that won’t cost anything.\nEC2 instances to run our cluster\nThe 13 containers that make up the app require a total of 3.25 vCPUs and 6.5GB of RAM to make sure they have the same resources as with AWS Fargate. The t3a family of instances seems to be the most economical, and the price for two large instances or one extra-large instance is the same. Following Kubernetes best practice, we’ll need two t3a.xlarge instances (or four t3a.large, which cost exactly the same) to keep the Kubernetes master and worker nodes separate. 
The cost of those EC2 instances is $217.58.\nComparing the results # Let’s put all of the above in a chart to make the comparison easier.\nOr, in table format:\n| | Serverless | Kubernetes |\n|---|---|---|\n| Service Discovery | $4 (AWS Cloud Map) | $0 (Built-in) |\n| Logging (Amazon CloudWatch Logs) | $0 | $0 |\n| Compute | $118 (AWS Fargate) | $217.58 (Amazon EC2) |\n| Data traffic | $0.18 | $0.18 |\n| Total | $122.18 | $217.76 |\nRunning the ACME Fitness Shop on AWS Fargate comes in at roughly 56% of the cost of the self-hosted Kubernetes cluster, a saving of about $96 per month. That\u0026rsquo;s money you could donate to a charity, buy some cool accessories for your car, or buy yourself a really nice dinner!\nWhat’s next # Running a self-hosted Kubernetes cluster, which costs $217.76, is quite a bit more costly than running AWS Fargate, which costs $122.18 for the same service. So for our use case, it makes sense to look at AWS Fargate. I’m definitely not saying that Kubernetes doesn’t have a purpose. As I mentioned before, there is a tradeoff between control and abstraction and as a developer you have to decide if and how you want to make that tradeoff. If you want to run the comparison yourself, try out the deployment scripts we have in our GitHub repository.\nIn the meanwhile, let me know your thoughts.\nImage by Markus Distelrath from Pixabay.\n","date":"December 9, 2019","externalUrl":null,"permalink":"/2019/12/cost-matters-the-serverless-edition/","section":"Blog","summary":"As a trend, cloud vendors tend to use the word serverless quite loosely. Serverless comes in a lot of shapes and sizes, but as long as the characteristics fit within the four categories from my last blog, it is a serverless service. 
To make sure that we’re all on the same page, I’ll use the following definition for serverless:\n“Serverless is a development model where developers focus on a single unit of work and can deploy to a platform that automatically scales, without developer intervention.”\nIn this blog post, we’ll look at how that model works on AWS Fargate, which allows you to run containers without having to manage servers or clusters.\n","title":"Cost Matters! The Serverless Edition","type":"blog"},{"content":"Using serverless requires us to change our mindset on how we build apps and requires us to unlearn things we learned building apps in the past. At AWS re:Invent I got a chance to do a VMware Code session and talk about how we took part of our ACME Fitness Shop and transformed it into serverless functions with AWS Lambda.\n","date":"December 9, 2019","externalUrl":null,"permalink":"/2019/12/serverless-from-microservice-to-functions/","section":"Blog","summary":"Using serverless requires us to change our mindset on how we build apps and requires us to unlearn things we learned building apps in the past. At AWS re:Invent I got a chance to do a VMware Code session and talk about how we took part of our ACME Fitness Shop and transformed it into serverless functions with AWS Lambda.\n","title":"Serverless - From Microservice to Functions","type":"blog"},{"content":"Containers were this awesome technology that ushered in the Cloud era and with a lot of new FaaS tools coming along, companies are wondering if they should jump the container ship altogether. As it turns out, containers definitely have value in Serverless. 
In this session we will take an existing containerized app and move it into AWS Fargate, look at the cost of running it, and how we can monitor it.\n","date":"December 9, 2019","externalUrl":null,"permalink":"/2019/12/serverless-the-wrath-of-containers/","section":"Blog","summary":"Containers were this awesome technology that ushered in the Cloud era and with a lot of new FaaS tools coming along, companies are wondering if they should jump the container ship altogether. As it turns out, containers definitely have value in Serverless. In this session we will take an existing containerized app and move it into AWS Fargate, look at the cost of running it, and how we can monitor it.\n","title":"Serverless - The Wrath of Containers","type":"blog"},{"content":"There are many predictions from market analyst firms on the size of the global serverless architecture market and how fast it will grow. The numbers range from [$18B](https://industrynewsreports.us/8860/serverless-architecture-market-set-for-rapid-growth-to-reach-around-18-04-billion-globally-by-2024-2/) to $21.99B in the next few years, with the compound annual growth rate (CAGR) in the double digits. But is serverless only a fancy name for products like AWS Lambda and Azure Functions?\nWhat is serverless? # When you build an app, everything you do generally breaks down into two large buckets. The first bucket holds everything that every other app needs to do as well. These are things like running a set of servers to deploy your app to or running your CI/CD tools. The first bucket contains all activities that don’t give your app an edge over anyone else in the market. AWS calls the activities in this bucket “undifferentiated heavy lifting”. The second bucket is where the magic is. That bucket holds everything that gives your app an edge over others. These are things like your amazing user experience or snappy responses from the support team. This second bucket is what AWS calls “the secret sauce”. 
As a company, you want to make sure your developers spend as much time as possible on activities in the second bucket so they can focus on business value. The activities in the first bucket should go to a cloud provider as much as possible. So serverless, or a serverless operating model, is all about developers delivering business value. You want to spend as little time as possible on anything but your competitive advantage.\nKey drivers of serverless # In general, serverless solutions have four key drivers. First of all, serverless means that there are no servers to manage or provision. This means you don’t need to install the runtime or patch servers. It doesn’t mean that there are no servers, though. The second driver is automatic scaling. The microservices in your app should scale to infinity when they get really busy. It also means those services should scale back again when it’s not as busy. The third key driver is that you want to pay for value. This means you’re paying for what you’re using like memory consumption, CPU usage or network throughput rather than server units. The fourth driver is that high availability should be a capability you use, rather than a capability you build yourself.\nLet’s compare a few services and see how those drivers work out. I’ll use some of the AWS services, but the comparison works for the other cloud providers too. AWS Lambda allows me to upload code so I don’t have to manage any servers. It also allows me to scale from zero to infinity and back again, and I pay for the number of executions. When a server running my Lambda functions goes down, another takes over. So Lambda works for all four drivers. With AWS EC2, I do have to manage servers. I also have to pay for the server unit. EC2 doesn’t scale the app that runs on top of it, and I have to design high availability for my app. So EC2 works for none of the drivers. To show that serverless isn’t only a fancy name for Lambda, let’s look at AWS Fargate. 
With Fargate, I deploy a container and I don’t manage a server. I pay for the time my container runs, based on CPU and memory consumption. Fargate allows me to scale my apps based on resource usage. When a machine running my Fargate instance goes down, it will fail over to a new one. So Fargate is serverless too! In fact, AWS and most of the other cloud providers have a wide range of serverless options for all sorts of use cases.\nServerless and Event-Driven Architectures # In event-driven architectures, your microservices react to the events coming in. When you have many events coming in at the same time, you want the microservices in your app to scale up (and back again when it’s a quieter period). The microservices should fail over automatically when something happens, and you don’t want to pay for entire servers. The same key drivers that are important to serverless are important to event-driven architectures.\nDoes serverless actually matter? # As I mentioned in the event-driven architecture blog, your services can run anywhere and your users won’t care about that. In the same way, it doesn’t matter which programming language you use. What matters to your users is that your app works all the time and that it looks good. Your users care about the activities from the second bucket and that is what serverless allows you to focus on.\nOver the coming period, we\u0026rsquo;ll share more on how event-driven and serverless work for our ACME Fitness App. In the meanwhile, let me know your thoughts and send me or the team a note on Twitter.\nCover photo by Bethany Drouin from Pixabay\n","date":"November 4, 2019","externalUrl":null,"permalink":"/2019/11/why-serverless-architectures-matter/","section":"Blog","summary":"There are many predictions from market analyst firms on the size of the global serverless architecture market and how fast it will grow. 
The numbers range from [$18B](https://industrynewsreports.us/8860/serverless-architecture-market-set-for-rapid-growth-to-reach-around-18-04-billion-globally-by-2024-2/) to $21.99B in the next few years, with the compound annual growth rate (CAGR) in the double digits. But is serverless only a fancy name for products like AWS Lambda and Azure Functions?\n","title":"Why Serverless Architectures Matter","type":"blog"},{"content":"The CTO of a company I have worked for used to say that services should be loosely coupled but tightly integrated. I didn\u0026rsquo;t realize until a lot later how true that statement is as you\u0026rsquo;re building out microservices. How those microservices communicate with each other has also changed quite a bit. More often than not, they send messages using asynchronous protocols. As an industry, we decided that this new way of building apps should be called \u0026ldquo;Event-Driven Architecture (EDA).\u0026rdquo;\nWhat is Event-Driven anyway? # Thinking about event-driven architecture starts with thinking about events. In this context, events are the facts that tell what has happened in your app. You fire one off and forget about it, letting someone else decide what to do with it. A large part of our daily lives is event-driven. That \u0026ldquo;like\u0026rdquo; you got on your tweet, the text message on your phone, or the email saying your GitHub keys were stolen are all events.\nEvent-driven means that all the other parts of your app react to those events. It\u0026rsquo;s like a jazz band, where the players react to the melody played by the others. As the drums begin to play, the trumpets react. To have your microservices act like jazz musicians too, you\u0026rsquo;ll need two things. You\u0026rsquo;ll need APIs and events to communicate, and your microservices should be stateless.\nWhy do I need APIs? 
# Greg Young said: \u0026ldquo;When you start modeling events, it forces you to think about the behavior of the system.\u0026rdquo; Those events represent what your microservice sends out into the world for others to react to and what your microservice will respond to. In the REST world, developers create API specifications as a description of the interface between microservices. In an event-driven world, you can still use HTTP but there are better options. In an event-driven world, you use brokers.\nWhat are event brokers? # Let\u0026rsquo;s use a definition from Gartner for what an event broker is: “middleware products that are used to facilitate, mediate and enrich the interactions of sources and handlers in event-driven computing.” Today, there are many different brokers out there that you can use. Some common examples are Apache Kafka, RabbitMQ, and Solace\u0026rsquo;s Event Broker. With these tools, developers can still create API specifications using a new project called AsyncAPI. Their tools help developers create API specifications, and generate code from them, for event-driven apps.\nfrom: https://martin.kleppmann.com/2015/05/27/logs-for-data-infrastructure.html\nAs we break down our large monolithic app into microservices and want them to send out events, we can\u0026rsquo;t rely on just HTTP as the way for services to communicate. As you build more services that are interested in the same event, you don\u0026rsquo;t want to update the producer of the events. Event brokers make the event available to another subscriber, without making changes to the code. The same holds true for \u0026ldquo;scaling out\u0026rdquo;. As some microservices get very busy, you want to scale the number of instances of that service without changing the event publisher to send events to that new instance. Event brokers take care of that too. 
A third reason why event brokers are great for event-driven architectures is that those brokers keep track of the events that are sent. If your microservice isn\u0026rsquo;t running for some reason and starts again, the events it missed are waiting to be processed.\nWhere does my state go? # The jazz band doesn\u0026rsquo;t keep all their music in their head, so your microservices shouldn\u0026rsquo;t either. Your services can be deployed to a Kubernetes cluster, Virtual Machines, or serverless platforms. For event-driven architectures, it doesn\u0026rsquo;t really matter where you run your code. What matters is that your microservices don\u0026rsquo;t keep track of state themselves; they rely on external systems like databases or message brokers to keep track of it.\nIf the music your band plays has more for the trumpets to do than a single player can handle, you need to add another player. Just like scaling up the number of trumpet players, you want to scale up the instances of your microservices when your app gets busy. If all instances of a microservice keep track of their own state, you have to synchronize between them and make sure all those instances operate on the same state. Brokers like Apache Kafka offer a solution by keeping track of state for your services, regardless of which programming language or platform you deploy them to. Adding or removing microservices is now a piece of cake.\nWhen you let the broker keep track of the events, and of the state, you can look at a pattern called \u0026ldquo;event sourcing\u0026rdquo;. That means that instead of only keeping track of the current state of a microservice, you also keep track of every new event to generate a log of everything that has happened. Microservices following that pattern have a bunch of awesome benefits. 
Those microservices can catch up on events they weren\u0026rsquo;t ready to handle at the time, replay events after fixes have been deployed, roll back changes if something goes wrong down the line, or even rebuild the entire state of an app. Apache Kafka became famous for supporting this particular pattern.\nAn event-driven architecture lets you speed up development and gives you faster and more frequent deployments regardless of where you deploy your services.\nContinuously verifying event-driven architectures # As a team, we\u0026rsquo;ve talked about Continuous Verification quite a bit. I strongly believe that event-driven architectures are a great fit for Continuous Verification. You want to be thoughtful about what you\u0026rsquo;re spending because event-driven architectures give you the ability to scale easily to infinity and back. As you\u0026rsquo;re keeping your audit log, your state, in a separate place, you need security policies to make sure that the events can be trusted. On top of that, you need to think about observability and how some of these features will make your life easier.\nWhat\u0026rsquo;s next? # The next time you\u0026rsquo;re listening to a jazz band play, think about how they act and react to each other and how that is similar to your application architecture. Over the coming period, we\u0026rsquo;ll share more on how we\u0026rsquo;ve built our ACME Fitness App to be event-driven and which tools we\u0026rsquo;ve used. In the meanwhile, let me know your thoughts.\n","date":"October 11, 2019","externalUrl":null,"permalink":"/2019/10/event-driven-architectures-putting-jazz-into-apps/","section":"Blog","summary":"The CTO of a company I have worked for used to say that services should be loosely coupled but tightly integrated. I didn’t realize until a lot later how true that statement is as you’re building out microservices. How those microservices communicate with each other has also changed quite a bit. 
More often than not, they send messages using asynchronous protocols. As an industry, we decided that this new way of building apps should be called “Event-Driven Architecture (EDA).”\n","title":"Event-Driven Architectures - Putting Jazz Into Apps","type":"blog"},{"content":"As a developer, I always thought that security, like documentation, would be done by someone else. While that might have been true in the past, in today\u0026rsquo;s world that model no longer works. As a developer you\u0026rsquo;re responsible for the security of your app. Security in this case should be seen in the broadest sense of the word, ranging from licenses to software packages. A chef creating cheesecake has similar challenges. The ingredients of a cheesecake are similar to the software packages a developer uses. The preparation is similar to the DevOps pipeline, and the recipe is similar to the licenses for developers. Messing up any of those means you have a messy kitchen, or a data breach!\nSlides # Video # ","date":"September 23, 2019","externalUrl":null,"permalink":"/2019/09/trusting-your-ingredients-@devopsdays-columbus/","section":"Blog","summary":"As a developer, I always thought that security, like documentation, would be done by someone else. While that might have been true in the past, in today’s world that model no longer works. As a developer you’re responsible for the security of your app. Security in this case should be seen in the broadest sense of the word, ranging from licenses to software packages. A chef creating cheesecake has similar challenges. The ingredients of a cheesecake are similar to the software packages a developer uses. The preparation is similar to the DevOps pipeline, and the recipe is similar to the licenses for developers. 
Messing up any of those means you have a messy kitchen, or a data breach!\n","title":"Trusting Your Ingredients @DevOpsDays Columbus","type":"blog"},{"content":"Imagine this, it\u0026rsquo;s 5pm on a Friday afternoon and while you really want to go enjoy the weekend, you also need to deploy a new version of your app to production. Using AWS CloudFormation (CF), you add a new instance to your fleet of EC2 instances to run your app.\n\u0026#34;InstanceType\u0026#34; : { \u0026#34;Description\u0026#34; : \u0026#34;WebServer EC2 instance type\u0026#34;, \u0026#34;Type\u0026#34; : \u0026#34;String\u0026#34;, \u0026#34;Default\u0026#34; : \u0026#34;t1.micro\u0026#34;, \u0026#34;ConstraintDescription\u0026#34; : \u0026#34;must be a valid EC2 instance type.\u0026#34; } Now it\u0026rsquo;s just a matter of running aws cloudformation deploy and your changes are in production, right on time for you to log off and enjoy the weekend. An hour later, as your app is running a high load on production and you\u0026rsquo;re having that well-deserved refreshment, your colleagues are scrambling to find out what caused the dip in performance on production. Desperate for answers, they call you and as you\u0026rsquo;re working through the traffic traces, you realize it might have been the deployment you did earlier. You go back to the CloudFormation console and see you chose a micro instance on EC2 instead of the Extra Large which has the compute power needed for your app. It\u0026rsquo;s an easy fix this time, but you could have specified the wrong environment variables, or credentials from the staging environment for production deployments.\nInfrastructure-as-Code helps you manage both the codebase of your apps and your infrastructure. When you\u0026rsquo;re treating your infrastructure that way, you and the rest of the DevOps teams in your company can combine the deployments you do with other performance and health metrics on a single dashboard. 
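One way to guard against the wrong-instance-type slip described above is to constrain the parameter itself: CloudFormation parameters support an AllowedValues list, so a deploy with an unlisted type fails validation instead of landing in production. A sketch of that parameter (the specific instance types listed here are just example values):

```json
"InstanceType" : {
  "Description" : "WebServer EC2 instance type",
  "Type" : "String",
  "Default" : "m4.xlarge",
  "AllowedValues" : [ "m4.xlarge", "m4.2xlarge" ],
  "ConstraintDescription" : "must be an instance type approved for production."
}
```

With that in place, accidentally picking a t1.micro is rejected at stack creation time rather than discovered under load.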
Obviously, you\u0026rsquo;re not deploying changes on a Friday afternoon (are you?). But let\u0026rsquo;s say you do deploy on Fridays and want those deployments to appear in your dashboards. In this blog, I\u0026rsquo;ll walk through setting up an SNS (Simple Notification Service) topic to send CloudFormation events to an AWS Lambda app and from there to Wavefront to be visualized.\nGetting ready # If you want to follow along with the steps in this post, you\u0026rsquo;ll need a few things:\nAn AWS account to deploy a Lambda app to and to configure some other bits and pieces\nA Wavefront API token to send data to Wavefront. If you don\u0026rsquo;t have a Wavefront account, you can get a trial here\nConfiguring SNS # The cool thing about CloudFormation is that it can send events to an SNS topic. That way other apps can be notified when something happens in your environment, but to do that, you\u0026rsquo;ll first need an SNS topic. From within the SNS homepage, click on the orange Create topic button and give your topic a descriptive name (like \u0026ldquo;CloudFormationEvents\u0026rdquo;).\nYou can use all the default values, or change them as you see fit; just make sure you copy the ARN (Amazon Resource Name) as you\u0026rsquo;ll need that later on to configure some CloudFormation events.\nDeploying a Lambda app # The next step is to deploy a Lambda app that will listen to events from the SNS topic and send them to Wavefront. To make sure the app knows it will get data from SNS, your \u0026ldquo;handler\u0026rdquo; function needs to have an SNSEvent as the input.\nfunc handler(request events.SNSEvent) error { ... } A CloudFormation event sent to SNS has a bunch of useful information, but the important bits are in the \u0026ldquo;Sns.Message\u0026rdquo; field. 
That field contains information like which stack was deployed, which account it was deployed to (which is very useful if you separate accounts), and what the current state is.\n{ \u0026#34;Records\u0026#34;: [ { \u0026#34;EventSource\u0026#34;: \u0026#34;aws:sns\u0026#34;, \u0026#34;EventVersion\u0026#34;: \u0026#34;1.0\u0026#34;, \u0026#34;EventSubscriptionArn\u0026#34;: \u0026#34;arn:aws:sns:us-west-2:123456789012:CloudFormationEvents:ff5557cc-f52e-4274-90c6-7a961d334743\u0026#34;, \u0026#34;Sns\u0026#34;: { \u0026#34;Type\u0026#34;: \u0026#34;Notification\u0026#34;, \u0026#34;MessageId\u0026#34;: \u0026#34;12345\u0026#34;, \u0026#34;TopicArn\u0026#34;: \u0026#34;arn:aws:sns:us-west-2:123456789012:CloudFormationEvents\u0026#34;, \u0026#34;Subject\u0026#34;: \u0026#34;AWS CloudFormation Notification\u0026#34;, \u0026#34;Message\u0026#34;: \u0026#34;StackId=\u0026#39;arn:aws:cloudformation:us-west-2:123456789012:stack/MyStack/b9e8d9b0-be10-11e9-aa8d-0a1528792fcb\u0026#39;\\nTimestamp=\u0026#39;2019-08-13T21:24:52.887Z\u0026#39;\\nEventId=\u0026#39;c9fb6200-be10-11e9-9c1a-0621218a9930\u0026#39;\\nLogicalResourceId=\u0026#39;MyStack\u0026#39;\\nNamespace=\u0026#39;123456789012\u0026#39;\\nPhysicalResourceId=\u0026#39;arn:aws:cloudformation:us-west-2:123456789012:stack/MyStack/b9e8d9b0-be10-11e9-aa8d-0a1528792fcb\u0026#39;\\nResourceProperties=\u0026#39;null\u0026#39;\\nResourceStatus=\u0026#39;CREATE_COMPLETE\u0026#39;\\nResourceStatusReason=\u0026#39;\u0026#39;\\nResourceType=\u0026#39;AWS::CloudFormation::Stack\u0026#39;\\nStackName=\u0026#39;MyStack\u0026#39;\\nClientRequestToken=\u0026#39;Console-CreateStack-6b6e28ac-09ab-a7ee-9cf6-20865fb3953b\u0026#39;\\n\u0026#34;, \u0026#34;Timestamp\u0026#34;: \u0026#34;2019-08-13T21:24:52.919Z\u0026#34;, \u0026#34;SignatureVersion\u0026#34;: \u0026#34;1\u0026#34;, \u0026#34;Signature\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;SigningCertUrl\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;UnsubscribeUrl\u0026#34;: 
\u0026#34;\u0026#34;, \u0026#34;MessageAttributes\u0026#34;: {} } } ] } To get the data from the message field, and turn it into a proper CloudFormation event you can use later on, you can \u0026ldquo;range\u0026rdquo; over the elements to create a struct from it.\n... for _, element := range elements { if len(element) \u0026gt; 0 \u0026amp;\u0026amp; strings.Contains(element, \u0026#34;=\u0026#34;) { items := strings.Split(element, \u0026#34;=\u0026#34;) ptrString := strings.ReplaceAll(items[1], \u0026#34;\u0026#39;\u0026#34;, \u0026#34;\u0026#34;) elementMap[items[0]] = \u0026amp;ptrString } } ... return cf.StackEvent{ StackId: elementMap[\u0026#34;StackId\u0026#34;], EventId: elementMap[\u0026#34;EventId\u0026#34;], LogicalResourceId: elementMap[\u0026#34;LogicalResourceId\u0026#34;], PhysicalResourceId: elementMap[\u0026#34;PhysicalResourceId\u0026#34;], ResourceProperties: elementMap[\u0026#34;ResourceProperties\u0026#34;], ResourceStatus: elementMap[\u0026#34;ResourceStatus\u0026#34;], ResourceStatusReason: elementMap[\u0026#34;ResourceStatusReason\u0026#34;], ResourceType: elementMap[\u0026#34;ResourceType\u0026#34;], StackName: elementMap[\u0026#34;StackName\u0026#34;], ClientRequestToken: elementMap[\u0026#34;ClientRequestToken\u0026#34;], Timestamp: time, }, nil To send the data from your Lambda app to Wavefront, you can create a WavefrontEvent and send that to your Wavefront instance using your API token.\nevt := WavefrontEvent{ Table: \u0026#34;\u0026#34;, Name: fmt.Sprintf(\u0026#34;CloudFormation Event for %s\u0026#34;, *event.StackName), StartTime: event.Timestamp.Unix(), EndTime: event.Timestamp.Unix() + 1, Annotations: Annotations{ Severity: \u0026#34;info\u0026#34;, Type: \u0026#34;CloudFormation\u0026#34;, Details: fmt.Sprintf(\u0026#34;Event ID %s (%s)\u0026#34;, *event.EventId, *event.ResourceStatus), }, } ... req, err := http.NewRequest(\u0026#34;POST\u0026#34;, wavefrontEventURL, bytes.NewReader(payload)) ... 
req.Header.Add(\u0026#34;authorization\u0026#34;, fmt.Sprintf(\u0026#34;Bearer %s\u0026#34;, wavefrontAPIToken)) ... The full source code of the AWS Lambda app is available on GitHub so if you want to deploy an already existing app, you can clone the source code and update the YAML template.\ngit clone https://github.com/retgits/wavefront-cf-notifier cd wavefront-cf-notifier make deps make build Before you can deploy the app, though, you\u0026rsquo;ll need to update the \u0026ldquo;template.yaml\u0026rdquo; file in three places:\nLine 26 should be updated with the ARN of your SNS topic\nLine 33 should be updated to your correct API endpoint for Wavefront\nLine 34 should be updated with your Wavefront API token\nWith the changes to the template made, you\u0026rsquo;re ready to deploy using two \u0026ldquo;make\u0026rdquo; commands.\nmake package make deploy Once the deployment is finished, you\u0026rsquo;ll have a Lambda app running that\u0026rsquo;s listening to CloudFormation events and is ready to send them to Wavefront.\nAdding events to Wavefront # Using the Wavefront Query Language, you can add events to your charts by adding a new query. If you want to add all events, you could add the query:\nevents(name=\u0026#34;CloudFormation*\u0026#34;) If you want to only add events for a specific stack you\u0026rsquo;re deploying, which is a little more likely, you can add the query:\nevents(name=\u0026#34;CloudFormation Event for \u0026lt;name of your stack\u0026gt;*\u0026#34;) Using the query language, you can overlay the events on the chart and get a similar graph to this one. 
The event doesn\u0026rsquo;t only show that an event occurred, but also which stack it was.\nThe blue circles at the bottom show when an event was triggered by CloudFormation and the blue overlays show the duration of the event.\nTriggering CloudFormation events # Now that everything has been deployed, you\u0026rsquo;re all set to make use of the events and make sure your team knows exactly what\u0026rsquo;s going on. The only thing left is to make sure the CloudFormation scripts you have are updated with the notification endpoint. So for example, if you\u0026rsquo;re deploying a stack using the EC2ChooseAMI template, you can add the NotificationARNs right inside the \u0026ldquo;myStackWithParams\u0026rdquo; element.\n{ \u0026#34;AWSTemplateFormatVersion\u0026#34;: \u0026#34;2010-09-09\u0026#34;, \u0026#34;Resources\u0026#34;: { \u0026#34;myStackWithParams\u0026#34;: { \u0026#34;Type\u0026#34;: \u0026#34;AWS::CloudFormation::Stack\u0026#34;, \u0026#34;Properties\u0026#34;: { \u0026#34;TemplateURL\u0026#34;: \u0026#34;https://s3.amazonaws.com/cloudformation-templates-us-east-2/EC2ChooseAMI.template\u0026#34;, \u0026#34;Parameters\u0026#34;: { \u0026#34;InstanceType\u0026#34;: \u0026#34;t1.micro\u0026#34;, \u0026#34;KeyName\u0026#34;: \u0026#34;mykey\u0026#34; }, \u0026#34;NotificationARNs\u0026#34;: [ \u0026#34;\u0026lt;your sns arn\u0026gt;\u0026#34; ] } } } } If you\u0026rsquo;re deploying CloudFormation stack using the AWS Console, make sure you set the \u0026ldquo;Notification options\u0026rdquo; to your new SNS topic in step 3 (\u0026quot;Configure stack options\u0026quot;)\nNext steps # The events coming from CloudFormation, your Infrastructure-as-Code, will now show up in dashboards from Wavefront, so your entire DevOps organization now knows what happened and which CloudFormation stack to look at. This could save time while debugging and figuring out where the issues might come from. 
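The key='value' parsing the Lambda app performs on the Sns.Message body can be boiled down to a small self-contained sketch. This is a simplified version of the snippet shown earlier; the real app maps the result into a cf.StackEvent struct with typed fields.

```go
package main

import (
	"fmt"
	"strings"
)

// parseStackEvent turns CloudFormation's newline-delimited
// "Key='value'" notification body into a simple map.
func parseStackEvent(message string) map[string]string {
	event := make(map[string]string)
	for _, element := range strings.Split(message, "\n") {
		if len(element) > 0 && strings.Contains(element, "=") {
			// SplitN keeps any '=' inside the value intact
			items := strings.SplitN(element, "=", 2)
			event[items[0]] = strings.ReplaceAll(items[1], "'", "")
		}
	}
	return event
}

func main() {
	msg := "StackName='MyStack'\nResourceStatus='CREATE_COMPLETE'\nResourceType='AWS::CloudFormation::Stack'\n"
	evt := parseStackEvent(msg)
	fmt.Println(evt["StackName"], evt["ResourceStatus"])
	// prints: MyStack CREATE_COMPLETE
}
```

Using SplitN with a limit of 2 (rather than a plain Split) means a stray '=' inside a value doesn't break the parse.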
If you\u0026rsquo;re part of a DevOps team that deploys using AWS CloudFormation and you want insight into the deployment events, feel free to grab the code for the Lambda app from GitHub. Let me know your thoughts.\n","date":"September 20, 2019","externalUrl":null,"permalink":"/2019/09/how-to-send-cloudformation-events-to-wavefront-using-aws-lambda/","section":"Blog","summary":"Imagine this, it’s 5pm on a Friday afternoon and while you really want to go enjoy the weekend, you also need to deploy a new version of your app to production. Using AWS CloudFormation (CF), you add a new instance to your fleet of EC2 instances to run your app.\n","title":"How to send CloudFormation events to Wavefront using AWS Lambda","type":"blog"},{"content":"Trusting Your Ingredients - What Building Go Apps And Cheesecake Have In Common.\nIn this lightning session at GopherCon 2019, I got the chance to talk about two things I love. Cheesecake and Golang! As a developer, I\u0026rsquo;ve written code and built apps, and I realized that building apps and creating a cheesecake have a lot in common. In both cases you need to have the right ingredients, you need to trust your suppliers and have transparency in your production process. In this talk, we\u0026rsquo;ll look at how you can, and why you should, know what is in the app you deploy.\nSlides # Video # ","date":"July 27, 2019","externalUrl":null,"permalink":"/2019/07/gophercon-2019-trusting-your-ingredients/","section":"Blog","summary":"Trusting Your Ingredients - What Building Go Apps And Cheesecake Have In Common.\nIn this lightning session at GopherCon 2019, I got the chance to talk about two things I love. Cheesecake and Golang! As a developer, I’ve written code and built apps, and I realized that building apps and creating a cheesecake have a lot in common. In both cases you need to have the right ingredients, you need to trust your suppliers and have transparency in your production process. 
In this talk, we’ll look at how you can, and why you should, know what is in the app you deploy.\n","title":"GopherCon 2019 - Trusting Your Ingredients","type":"blog"},{"content":"As a developer, I\u0026rsquo;ve written code and built apps, and I realized that building apps and creating a cheesecake have a lot in common. In both cases you need to have the right ingredients, you need to trust your suppliers and have transparency in your production process. I got to go to Atlanta and meet with the Docker Meetup Group there, where we got to talk about how you can, and why you should, know what is in the app you deploy.\n","date":"July 9, 2019","externalUrl":null,"permalink":"/2019/07/docker-meetup-group-atlanta-trusting-your-ingredients/","section":"Blog","summary":"As a developer, I’ve written code and built apps, and I realized that building apps and creating a cheesecake have a lot in common. In both cases you need to have the right ingredients, you need to trust your suppliers and have transparency in your production process. I got to go to Atlanta and meet with the Docker Meetup Group there, where we got to talk about how you can, and why you should, know what is in the app you deploy.\n","title":"Trusting Your Ingredients at Docker Meetup Atlanta","type":"blog"},{"content":"Sometimes you need to get data from cloud-based systems into an environment that doesn\u0026rsquo;t expose APIs or ports to the outside world. Webhooks help, but you still need something that accepts them and gets them across your firewall. That\u0026rsquo;s exactly where Solace PubSub+ Cloud comes in. I built a small webhook forwarder app that receives data from Solace and sends it onward without any of my systems being exposed to the internet.\nI initially looked at PubNub for this, and while it mostly worked, there was one major gap.
In some of my use cases, I needed the HTTP header information (like X-GitHub-Delivery or X-Hub-Signature when using GitHub webhooks), and PubNub couldn\u0026rsquo;t pass those through.\nA friend, Jeremy Meiss, pointed me to Solace and said it would solve all my messaging needs. After some head-scratching and doc-reading, I got it working nicely. And honestly, one of their best features is probably that it\u0026rsquo;s free 😇!\nBy default, the Solace PubSub+ Cloud instance is set to \u0026ldquo;messaging\u0026rdquo; mode, which strips non-standard HTTP headers from messages. To fix that, you switch it to \u0026ldquo;microgateway\u0026rdquo; mode, which preserves any received HTTP header fields as metadata on the Solace message. In the web console, you can change this under Message VPN -\u0026gt; Connectivity — set the mode to \u0026ldquo;Gateway\u0026rdquo;.\nWith gateway mode enabled, the Solace microgateway adds all non-standard headers to your message as JMS_Solace_HTTP_field_\u0026lt;header\u0026gt;.\nThe other thing I ran into: most REST requests don\u0026rsquo;t include a correlation ID. The microgateway uses that field to correlate requests with responses. If it\u0026rsquo;s absent, the gateway generates an appMessageID and uses that instead.\nThe problem is that the default session.sendReply() method grabs the correlationID from the original message if you provide one to reply to (which I figured out after reading the API docs for the session object). When I manually construct the reply and set the destination and correlation ID headers myself, it works perfectly. The response should look like this:\nvar reply = solace.SolclientFactory.createMessage(); reply.setAsReplyMessage(true); reply.setDestination(message.getReplyTo()); reply.setCorrelationId(message.getApplicationMessageId()); subscriber.session.sendReply(null,reply) This sends back an empty message, but the gateway responds with an HTTP/200 status. 
In my case, that means I can have GitHub send webhooks to the Intel NUC on my desk without the IT team opening firewall ports (I\u0026rsquo;m sure they\u0026rsquo;re very happy with that 😅). Solace made the IT and security team happy — and they\u0026rsquo;ve likely done the same for a lot of other companies too 😎\nIf you want to see the code, check out my GitHub repository and let me know your thoughts either on Twitter or here.\nCover image by Peter H from Pixabay\n","date":"June 5, 2019","externalUrl":null,"permalink":"/2019/06/how-to-get-webhooks-into-your-system-using-solace-pubsub-cloud/","section":"Blog","summary":"Sometimes you need to get data from cloud-based systems into an environment that doesn’t expose APIs or ports to the outside world. Webhooks help, but you still need something that accepts them and gets them across your firewall. That’s exactly where Solace PubSub+ Cloud comes in. I built a small webhook forwarder app that receives data from Solace and sends it onward without any of my systems being exposed to the internet.\n","title":"How to Get Webhooks Into Your System Using Solace PubSub+ Cloud","type":"blog"},{"content":"At the Twistlock Cloud-Native Security Day, a co-located event at KubeCon 2019, I got to talk about what cheesecake and building apps have in common. As a developer you\u0026rsquo;re responsible for the security of your app. Security in this case should be seen in the broadest sense of the word, ranging from licenses to software packages. A chef creating cheesecake has similar challenges. The ingredients of a cheesecake are similar to the software packages a developer uses. The preparation is similar to the DevOps pipeline, and recipe is similar to the licenses for developers. Messing up any of those means you have a messy kitchen, or a data breach! In this talk we\u0026rsquo;ll look at:\nWhy do we care about licenses? How does Sec get into the early stages of DevSecOps? What can chefs and devs learn from each other? 
Slides # ","date":"May 20, 2019","externalUrl":null,"permalink":"/2019/05/trusting-your-ingredients-what-building-software-and-cheesecake-have-in-common/","section":"Blog","summary":"At the Twistlock Cloud-Native Security Day, a co-located event at KubeCon 2019, I got to talk about what cheesecake and building apps have in common. As a developer you’re responsible for the security of your app. Security in this case should be seen in the broadest sense of the word, ranging from licenses to software packages. A chef creating cheesecake has similar challenges. The ingredients of a cheesecake are similar to the software packages a developer uses. The preparation is similar to the DevOps pipeline, and recipe is similar to the licenses for developers. Messing up any of those means you have a messy kitchen, or a data breach! In this talk we’ll look at:\nWhy do we care about licenses? How does Sec get into the early stages of DevSecOps? What can chefs and devs learn from each other? ","title":"Trusting Your Ingredients - What Building Software And Cheesecake Have In Common","type":"blog"},{"content":"Developers love Docker containers for managing software, but apps also need data and configuration. Those live on Docker volumes, and the question becomes: how do you reuse them?\nAt DockerCon 2019, I got on stage to answer exactly that. I walked through how to manage and reuse Docker volumes with data and configuration. The demo showed how to deploy a pre-configured Jenkins server and a simple web server using a binary repository as a pipeline for Docker volumes management.\nSlides # Video # ","date":"April 30, 2019","externalUrl":null,"permalink":"/2019/04/dockercon-2019-persistence-is-futile-or-is-it/","section":"Blog","summary":"Developers love Docker containers for managing software, but apps also need data and configuration. 
Those live on Docker volumes, and the question becomes: how do you reuse them?\n","title":"DockerCon 2019 - Persistence Is Futile (Or Is It?)","type":"blog"},{"content":" I\u0026rsquo;m Leon Stigter, a Senior Solutions Architect at AWS. My day-to-day is helping organizations figure out how to use data lakes, analytics, and serverless architectures to solve real problems — not just build cool tech for the sake of it.\nI\u0026rsquo;ve been in tech for over 20 years, across product management, developer advocacy, and solutions architecture. Before AWS, I worked at Lightbend, VMware, and TIBCO — mostly launching developer-focused products and learning (sometimes the hard way) how teams ship better software faster.\nMy core belief is simple: devs wanna dev. The best thing I can do is make sure they have the right tools, patterns, and guidance to focus on building rather than fighting infrastructure.\nOutside of work, I write code, speak at conferences, and blog about whatever I find interesting — usually serverless, data, or cloud-native architecture. I\u0026rsquo;m also on a never-ending quest to find the world\u0026rsquo;s best cheesecake. Recommendations are always welcome 🍰\nRecent talks What\u0026#39;s new in AWS Lake Formation (reInvent 2023) (November 2023) Achieving your modern data architecture (August 2022) How To Use Innovation And Proven Methodologies To Uncover Your Distinctive Competencies (July 2022) Simply Stateful Serverless (October 2021) Why (stateful) serverless matters for server admins (September 2021) The views and opinions expressed on this blog are my own and may not reflect those of the people or organizations I work with.\n","date":"January 13, 2019","externalUrl":null,"permalink":"/about/","section":"retgits.com","summary":" I’m Leon Stigter, a Senior Solutions Architect at AWS. 
My day-to-day is helping organizations figure out how to use data lakes, analytics, and serverless architectures to solve real problems — not just build cool tech for the sake of it.\nI’ve been in tech for over 20 years, across product management, developer advocacy, and solutions architecture. Before AWS, I worked at Lightbend, VMware, and TIBCO — mostly launching developer-focused products and learning (sometimes the hard way) how teams ship better software faster.\n","title":"About me","type":"page"},{"content":"I\u0026rsquo;ve been playing with OpenFaas ever since I learned about Minikube a few years ago, so when one of my colleagues mentioned Google\u0026rsquo;s Distroless project I obviously needed to see if my Go projects could work using those images too.\nDistroless # \u0026ldquo;Distroless\u0026rdquo; images contain only your application and its runtime dependencies. They do not contain package managers, shells or any other programs you would expect to find in a standard Linux distribution. Restricting what\u0026rsquo;s in your runtime container to precisely what\u0026rsquo;s necessary for your app is a best practice employed by Google and other tech giants that have used containers in production for many years. It improves the signal to noise of scanners (e.g. CVE) and reduces the burden of establishing provenance to just what you need.\nSource: Google Container Tools\nOpenFaaS # OpenFaaS allows you to package anything as a serverless function - Binaries, Node.js or, as in my case, Go!\nSo what do I do # When you\u0026rsquo;re starting with OpenFaaS the first command you run is\nfaas-cli template pull This downloads all the templates that are curated by the OpenFaaS team and puts them in a ./template folder. 
For the go template, you can replace the second container (OpenFaaS uses a multistage Dockerfile) in ./template/go/Dockerfile with the below snippet\n# Let\u0026#39;s see if we can do distroless FROM gcr.io/distroless/base COPY --from=builder /usr/bin/fwatchdog / COPY --from=builder /go/src/handler/function/ / COPY --from=builder /go/src/handler/handler / ENV fprocess=\u0026#34;./handler\u0026#34; EXPOSE 8080 HEALTHCHECK --interval=2s CMD [ -e /fwatchdog ] || exit 1 CMD [\u0026#34;/fwatchdog\u0026#34;] This will do exactly the same, just with a Distroless base image to run your apps!\nCover image by Pixabay\n","date":"January 7, 2019","externalUrl":null,"permalink":"/2019/01/how-to-use-distroless-containers-openfaas-to-minimize-attack-vectors/","section":"Blog","summary":"I’ve been playing with OpenFaas ever since I learned about Minikube a few years ago, so when one of my colleagues mentioned Google’s Distroless project I obviously needed to see if my Go projects could work using those images too.\n","title":"How To Use Distroless Containers \u0026 OpenFaaS To Minimize Attack Vectors","type":"blog"},{"content":"Serverless platforms have been getting a lot of attention. AWS announced a ton of things at their annual user conference, Google announced support for Go in private beta and serverless containers in private alpha, and even Gitlab announced some form of serverless support. With all the big players, it\u0026rsquo;s easy to overlook the smaller ones — but they\u0026rsquo;re often the most interesting.\nZeit # One of those \u0026ldquo;smaller\u0026rdquo; platforms I came across was Zeit. Their mission is to \u0026ldquo;Make Cloud Computing as Easy and Accessible as Mobile Computing.\u0026rdquo; Underneath that, it reads: \u0026ldquo;We build products for developers and designers. And those who aspire to become one.\u0026rdquo; That sets a high bar for how their products should work, and I was curious to see if it held up. 
So I set out to build a simple function to serve as the backend for the contact form on retgits.com.\nHere\u0026rsquo;s a walkthrough of the code and a few lessons I learned building a Go app on Zeit. The app takes an HTTP request, validates the reCAPTCHA to make sure a bot didn\u0026rsquo;t fill out the form, and sends an email to a pre-determined address. The full code is available on GitHub.\nThe important files:\n. ├── .env_template \u0026lt;-- A template file with the environment variables needed for the function ├── index.go \u0026lt;-- The actual function code └── now.json \u0026lt;-- Deployment descriptor for Zeit Zeit handles secrets similarly to regular environment variables, which keeps the code simple. The same os.Getenv() works for both secrets and regular env vars. One downside: you can\u0026rsquo;t update secrets in place. You have to delete and recreate them. In my case, the .env_template has the variables I need and a Makefile target handles the delete-and-recreate cycle.\nDeployments (via the macOS app or CLI) rely on a now.json to tell the Zeit builders what to do with your code. It\u0026rsquo;s straightforward to set up and their docs are helpful. You can have multiple builds sections for a monorepo with frontend and backend code (usually not a great idea, but fine for experimenting). Environment variables are listed with their value directly or prefixed with @ to indicate they\u0026rsquo;re secrets.\nThe main file is index.go, which contains all the function logic. 
Here\u0026rsquo;s the breakdown:\n// Constants const ( // The URL to validate reCAPTCHA recaptchaURL = \u0026#34;https://www.google.com/recaptcha/api/siteverify\u0026#34; ) // Variables var ( // The reCAPTCHA Secret Token recaptchaSecret = os.Getenv(\u0026#34;RECAPTCHA_SECRET\u0026#34;) // The email address to send data to emailAddress = os.Getenv(\u0026#34;EMAIL_ADDRESS\u0026#34;) // The email password to use emailPassword = os.Getenv(\u0026#34;EMAIL_PASSWORD\u0026#34;) // The SMTP server smtpServer = os.Getenv(\u0026#34;SMTP_SERVER\u0026#34;) // The SMTP server port smtpPort = os.Getenv(\u0026#34;SMTP_PORT\u0026#34;) ) This section reads in the environment variables. The code doesn\u0026rsquo;t care whether they\u0026rsquo;re secrets or regular env vars.\n// Handler is the main entry point into the function code as mandated by ZEIT func Handler(w http.ResponseWriter, r *http.Request) { // Browsers will do a preflight CORS request using the OPTIONS method. // To complete that a special response should be sent if r.Method == http.MethodOptions { response(w, true, \u0026#34;\u0026#34;, r.Method) return } // Parse the request body to a map buf := new(bytes.Buffer) buf.ReadFrom(r.Body) u, err := url.ParseQuery(buf.String()) if err != nil { response(w, false, fmt.Sprintf(\u0026#34;There was an error sending your form data: %s\u0026#34;, err.Error()), r.Method) return } // Prepare the POST parameters urlData := url.Values{} urlData.Set(\u0026#34;secret\u0026#34;, recaptchaSecret) urlData.Set(\u0026#34;response\u0026#34;, u[\u0026#34;g-recaptcha-response\u0026#34;][0]) // Validate the reCAPTCHA resp, err := httpcall(recaptchaURL, \u0026#34;POST\u0026#34;, \u0026#34;application/x-www-form-urlencoded\u0026#34;, urlData.Encode(), nil) if err != nil { response(w, false, fmt.Sprintf(\u0026#34;There was an error sending your form data: %s\u0026#34;, err.Error()), r.Method) return } // Validate if the reCAPTCHA was successful if !resp.Body[\u0026#34;success\u0026#34;].(bool) {
response(w, false, fmt.Sprintf(\u0026#34;There was an error sending your form data: %s\u0026#34;, fmt.Sprintf(\u0026#34;%v\u0026#34;, resp.Body[\u0026#34;error-codes\u0026#34;])), r.Method) return } // Set up email authentication information. auth := smtp.PlainAuth( \u0026#34;\u0026#34;, emailAddress, emailPassword, smtpServer, ) // Prepare the email mime := \u0026#34;MIME-version: 1.0;\\nContent-Type: text/plain; charset=\\\u0026#34;UTF-8\\\u0026#34;;\\n\\n\u0026#34; subject := fmt.Sprintf(\u0026#34;Subject: [BLOG] Message from %s %s!\\n\u0026#34;, u[\u0026#34;name\u0026#34;][0], u[\u0026#34;surname\u0026#34;][0]) msg := []byte(fmt.Sprintf(\u0026#34;%s%s\\n%s\\n\\n%s\u0026#34;, subject, mime, u[\u0026#34;message\u0026#34;][0], u[\u0026#34;email\u0026#34;][0])) // Connect to the server, authenticate, set the sender and recipient, // and send the email all in one step. err = smtp.SendMail( fmt.Sprintf(\u0026#34;%s:%s\u0026#34;, smtpServer, smtpPort), auth, emailAddress, []string{emailAddress}, msg, ) if err != nil { fmt.Printf(\u0026#34;[BLOG] Message from %s %s\\n%s\\n%s\\nThe message was not sent: %s\u0026#34;, u[\u0026#34;name\u0026#34;][0], u[\u0026#34;surname\u0026#34;][0], u[\u0026#34;message\u0026#34;][0], u[\u0026#34;email\u0026#34;][0], err.Error()) response(w, false, \u0026#34;There was an error sending your email, but we\u0026#39;ve logged the data...\u0026#34;, r.Method) return } // Return okay response response(w, true, \u0026#34;Thank you for your email! I\u0026#39;ll contact you soon.\u0026#34;, r.Method) return } The entry point function is called Handler — that\u0026rsquo;s a Zeit requirement you can\u0026rsquo;t change.\nThere are two helper methods in the code:\nresponse: handles sending replies to the incoming request. Since most replies follow the same pattern, a single method made sense. httpcall: calls the reCAPTCHA service. Why is everything in one file? 
The Zeit Go builder treats every file as a separate build artifact, so splitting things into http.go and main.go wasn\u0026rsquo;t an option. I also found that the builder looks for the first \u0026ldquo;exported\u0026rdquo; method and ignores the rest. I could work around that by moving Handler to the top, but a better fix would be for the builder to check whether a function is exported and has the right signature.\nConclusion # Once I figured out the guardrails, the serverless contact form worked perfectly. And honestly, those guardrails aren\u0026rsquo;t bad. With a pretty generous free plan, a solid set of runtimes (PHP, Next.js, even Markdown), ease of use, and some of the things the team tweeted about, Zeit has a pretty interesting time ahead (yes, I did totally want to make a time-related pun). I hope they\u0026rsquo;ll continue their service for a long time, not least because they\u0026rsquo;re awesome contributors to Open Source.\nCover image by Pixabay\n","date":"January 2, 2019","externalUrl":null,"permalink":"/2019/01/how-to-build-a-serverless-contactform-with-zeit/","section":"Blog","summary":"Serverless platforms have been getting a lot of attention. AWS announced a ton of things at their annual user conference, Google announced support for Go in private beta and serverless containers in private alpha, and even Gitlab announced some form of serverless support. With all the big players, it’s easy to overlook the smaller ones — but they’re often the most interesting.\n","title":"How To Build A Serverless Contactform With Zeit","type":"blog"},{"content":"There are many challenges facing software development, specifically when building and deploying new microservices as we try to do every day. Using Cloud-Native technologies we can navigate some of those risks, but not all of our development practices, especially security and compliance, have kept up with the speed in which the rest of our tech stack has evolved.
In this presentation I cover how JFrog Xray helps you safely deploy your artifacts to production with full confidence.\nSlides # Video # ","date":"December 6, 2018","externalUrl":null,"permalink":"/2018/12/dockercon-eu-2018-the-art-of-deploying-artifacts-to-production-with-confidence/","section":"Blog","summary":"There are many challenges facing software development, specifically when building and deploying new microservices as we try to do every day. Using Cloud-Native technologies we can navigate some of those risks, but not all of our development practices, especially security and compliance, have kept up with the speed in which the rest of our tech stack has evolved. In this presentation I cover how JFrog Xray helps you safely deploy your artifacts to production with full confidence.\n","title":"DockerCon EU 2018 - The Art Of Deploying Artifacts To Production With Confidence","type":"blog"},{"content":"A smart security camera takes in a high volume of video images and processes those images using a set of machine learning models. Those models can identify interesting snippets of movement throughout the day and decide which ones to keep. Some of the video snippets might contain movement of birds — but others might contain footage of intruders.\nYou can check out the interview on Software Engineering Daily.\n","date":"October 25, 2018","externalUrl":null,"permalink":"/2018/10/flogo-event-driven-ecosystem-on-se-daily/","section":"Blog","summary":"A smart security camera takes in a high volume of video images and processes those images using a set of machine learning models. Those models can identify interesting snippets of movement throughout the day and decide which ones to keep. 
Some of the video snippets might contain movement of birds — but others might contain footage of intruders.\n","title":"Flogo - Event Driven Ecosystem On SE Daily","type":"blog"},{"content":"","date":"October 25, 2018","externalUrl":null,"permalink":"/categories/tibco/","section":"Categories","summary":"","title":"TIBCO","type":"categories"},{"content":"Two weeks ago, I had the opportunity to be at AirFrance/KLM in KLM\u0026rsquo;s Digital Studio to talk about Project Flogo and brainstorm on where they could use it to improve and expand their digital footprint. The team was kind enough to share the recorded video on YouTube.\n","date":"October 22, 2018","externalUrl":null,"permalink":"/2018/10/the-secrets-of-project-flogo-a-deep-dive/","section":"Blog","summary":"Two weeks ago, I had the opportunity to be at AirFrance/KLM in KLM’s Digital Studio to talk about Project Flogo and brainstorm on where they could use it to improve and expand their digital footprint. The team was kind enough to share the recorded video on YouTube.\n","title":"Project Flogo at KLM's Digital Studio","type":"blog"},{"content":"No matter the metric, serverless is definitely gaining interest. It\u0026rsquo;s the dream of every developer, supplying the ability to deploy services in the cloud in no time, automatically scale them, enjoy automagic management by a cloud provider—and, most important, keep it all cost effective! How does this dream become a reality?\nThis presentation covered what serverless is all about and the benefits of running your apps in the serverless environment. It covers the monoliths-microservices-functions progression and when, where, and why to use serverless architecture and how Project Flogo fits in to the overall picture\n","date":"October 13, 2018","externalUrl":null,"permalink":"/2018/10/tibco-now-2018-project-flogo-serverless-integration-powered-by-flogo-and-lambda/","section":"Blog","summary":"No matter the metric, serverless is definitely gaining interest. 
It’s the dream of every developer, supplying the ability to deploy services in the cloud in no time, automatically scale them, enjoy automagic management by a cloud provider—and, most important, keep it all cost effective! How does this dream become a reality?\nThis presentation covered what serverless is all about and the benefits of running your apps in the serverless environment. It covers the monoliths-microservices-functions progression and when, where, and why to use serverless architecture and how Project Flogo fits in to the overall picture\n","title":"TIBCO NOW 2018 - Project Flogo Serverless Integration Powered by Flogo and Lambda","type":"blog"},{"content":"Innovation at the edge is driven by a whole host of people and personalities, but who makes sure those innovations get into production? Developers.\nIn this TIBCO Tech Talk, I walk through tools and technologies that help developers build better software, faster.\nThe talk covers:\nThe latest updates for Project Flogo, an open-source and ultra-lightweight edge computing platform A brief demo of Flogo and API Scout How to get started on your developer journey with these tools ","date":"October 5, 2018","externalUrl":null,"permalink":"/2018/10/developers-developers-developers-innovating-at-the-edge/","section":"Blog","summary":"Innovation at the edge is driven by a whole host of people and personalities, but who makes sure those innovations get into production? 
Developers.\nIn this TIBCO Tech Talk, I walk through tools and technologies that help developers build better software, faster.\nThe talk covers:\nThe latest updates for Project Flogo, an open-source and ultra-lightweight edge computing platform A brief demo of Flogo and API Scout How to get started on your developer journey with these tools ","title":"Developers, Developers, Developers - Innovating at the Edge","type":"blog"},{"content":"In today\u0026rsquo;s world everyone is building apps, most times those apps are event-driven and react to what happens around them. How do you take those apps to, let\u0026rsquo;s say, a Kubernetes cluster, or let them communicate between cloud and on-premises, and how can developers and non-developers work together using the same tools?\n","date":"October 1, 2018","externalUrl":null,"permalink":"/2018/10/api-world-2018-project-flogo-an-event-driven-stack-for-the-enterprise/","section":"Blog","summary":"In today’s world everyone is building apps, most times those apps are event-driven and react to what happens around them. How do you take those apps to, let’s say, a Kubernetes cluster, or let them communicate between cloud and on-premises, and how can developers and non-developers work together using the same tools?\n","title":"API World 2018 - Project Flogo an Event Driven Stack for the Enterprise","type":"blog"},{"content":"In today\u0026rsquo;s world everyone is building apps, most times those apps are event-driven and react to what happens around them. How do you take those apps to, let\u0026rsquo;s say, a Kubernetes cluster, or let them communicate between cloud and on-premises, and how can developers and non-developers work together using the same tools? Let\u0026rsquo;s break down the title a bit\u0026hellip;\nDoes it have to be Open Source # Let\u0026rsquo;s be fair, Open Source Software (OSS) powers nearly all of our modern society and economy. 
It\u0026rsquo;s not just making the source code of an application or framework available on GitHub, it\u0026rsquo;s a movement that is driving the innovation in our daily lives!\nIn fact, looking at a few statistics drives this point home even further:\n91% of developers use the same OSS tools for work and personal projects 98% of developers use OSS tools at work And these numbers come from a relatively old report from Open Source For U. Just think about all the technologies you\u0026rsquo;re using on a daily basis that are built as Open Source, like Visual Studio Code, Golang, PostgreSQL, React, Ruby, Kafka, or Flogo\nSo, does it have to be Open Source? Yes, I think it does!\nWhat is Event-Driven # The dictionary describes an event as \u0026ldquo;a thing that happens, especially one of importance.\u0026rdquo; So, events are rather easy to describe because an event is just that… The thing that will differ from event to event is how you process it. Do you process them in a stream, or one at a time, or something different? The technology you choose should help you with all of those.\nStack # When you\u0026rsquo;re building a microservice, do you want to build a few apps that work together? A streaming app that aggregates the events, filters them and sends them off to a separate Rules app to validate them and send them off to a separate Integration app to handle logging, writing to databases and whatnot? Especially in a microservice world, you want to use streams, rules, and integration in a single app to limit overhead and stay within the bounded context of your service.\nDevs and non-devs # I\u0026rsquo;ll start with a confession\u0026hellip; I love writing code! I prefer an editor like Visual Studio Code over a graphical design-time (though I use both in my job), but I realize that everyone has their own preference. 
There will be developers that enjoy a graphical design-time, there will be developers that would rather write apps using a DSL, and there are developers that want to write code. Ideally, the tech you use helps all those developers.\nProject Flogo # At TIBCO\u0026rsquo;s user conference last week, we showed a ton of new things surrounding Project Flogo. Let\u0026rsquo;s unpack the title and see how Flogo stacks up (pun intended).\nOpen Source: Yes, most certainly. Project Flogo has a BSD-3 license, which is one of the most permissive licenses possible (you\u0026rsquo;d never have to tell us you\u0026rsquo;re using it, though I\u0026rsquo;d prefer it if you did 😄) Stack: Most definitely, yes. At TIBCO NOW, we demoed a package that used streaming events and contextual rules to determine where it was and if it hadn\u0026rsquo;t moved far enough along. These two new capabilities for Project Flogo have been released as Open Source as well, complementing the existing integration flows. Devs and non-devs: I think so! Project Flogo has a cool Web UI, a great JSON DSL, and a newly announced Golang API. Wrapping up # I think Flogo is an amazing stack to help you build event-driven apps, and I\u0026rsquo;d love to know your thoughts on it! Let me know by dropping me a note. If you have any thoughts on how to make this better, just create a GitHub issue.\nCover image by Pixabay\n","date":"October 1, 2018","externalUrl":null,"permalink":"/2018/10/the-art-of-open-source-event-driven-stacks-for-the-enterprise/","section":"Blog","summary":"In today’s world everyone is building apps, most times those apps are event-driven and react to what happens around them. How do you take those apps to, let’s say, a Kubernetes cluster, or let them communicate between cloud and on-premises, and how can developers and non-developers work together using the same tools? 
Let’s break down the title a bit…\n","title":"Project Flogo: An Open Source Event-Driven Stack","type":"blog"},{"content":"As a developer advocate, I\u0026rsquo;m in the amazing position to talk to lots and lots of developers. Throughout those conversations I hear a lot of the same concerns popping up. Two of those being, \u0026ldquo;where did I deploy that microservice?\u0026rdquo; 😩 and \u0026ldquo;what is the API definition of that microservice again?\u0026quot;😟\nWhen your deployment footprint grows, keeping track of all those deployed microservices on Kubernetes can become quite a challenge. Keeping the API documentation updated for developers could become even more challenging. To try to solve that challenge, we\u0026rsquo;ve released API Scout. API Scout helps you get up-to-date API docs to your developers by simply annotating your services in Kubernetes. The tool looks for two simple annotations in your service to be able to index it:\napiscout/index: 'true': This annotation ensures that apiscout indexes the service\napiscout/swaggerUrl: '/swaggerspec': This is the URL from where apiscout will read the OpenAPI document\nSo rather than making big changes to your app, you can add those two annotations to your service deployment file and just update that. For one of the sample apps I added, the updated yaml file looks like:\nJust a part of the yaml file. The full file can be found here\nTo build your own instance, clone the repo and run make build-all. To deploy, after updating the yaml file in the Kubernetes directory to match your desired settings, run make run-kube.\nSo instead of \u0026ldquo;Where is that thing again?\u0026rdquo; 😟 you can now say, \u0026ldquo;I know what I deployed last summer!\u0026rdquo; 😅 While I think this is a very useful tool, and yes I have it running on my own local deployment of Kubernetes, I\u0026rsquo;d love to know your thoughts on it! 
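For illustration, here is roughly how those two annotations sit in a Kubernetes Service manifest (only the two apiscout annotations come from the post; the service name, port, and selector are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-service              # hypothetical service name
  annotations:
    apiscout/index: 'true'              # tell apiscout to index this service
    apiscout/swaggerUrl: '/swaggerspec' # where apiscout reads the OpenAPI doc
spec:
  selector:
    app: sample-service             # hypothetical selector
  ports:
    - port: 80
```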
Let me know by clapping 👏, dropping me a note here or by dropping me a note on Twitter. If you have any thoughts on how to make this better, just create a GitHub issue.\nCover image by Phad Pichebtovornkul on Unsplash\n","date":"September 17, 2018","externalUrl":null,"permalink":"/2018/09/now-where-did-i-deploy-that-microservice/","section":"Blog","summary":"As a developer advocate, I’m in the amazing position to talk to lots and lots of developers. Throughout those conversations I hear a lot of the same concerns popping up. Two of those being, “where did I deploy that microservice?” 😩 and “what is the API definition of that microservice again?\"😟\n","title":"Tracking Microservices on Kubernetes with API Scout","type":"blog"},{"content":"Not too long ago Flogo introduced a new Go API that allows you to build event-driven apps by simply embedding the Flogo engine in your existing Go code. Now you can use the event-driven engine of Flogo to build Go apps while using the activities and triggers that already exist and combining that with “regular” Go code. 
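In rough pseudocode, the embedding pattern the post describes looks like this (the names below are illustrative only, not the actual flogo-lib API):

```
// illustrative pseudocode, not the real flogo-lib API
app     = NewFlogoApp()
trigger = app.AddTrigger(pubnubsubscriber, config)  // a Flogo trigger
app.AddHandler(trigger, myGoFunction)               // plain Go code as the handler
engine  = NewEngine(app)
engine.Start()  // the engine now reacts to incoming events
```

The point is simply that triggers and activities from the Flogo ecosystem and ordinary Go functions live in the same program.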
In one of my other posts, I built an app that could receive messages from PubNub and for this post, I’ll walk through building the exact same app using the Go API.\nNote: I realize that certain pieces of the code aren’t as optimized as they could be, but I wanted to keep the flow similar to what I did with the Web UI.\nTo run this example you’ll need to have Go installed and execute these commands from where you created the file with the source code (which will have to be in your $GOPATH).\nGet all the dependencies\n# If you already have Flogo or the Flogo CLI, you can skip these\ngo get -u github.com/TIBCOSoftware/flogo-contrib/activity/log\ngo get -u github.com/TIBCOSoftware/flogo-lib/core/data\ngo get -u github.com/TIBCOSoftware/flogo-lib/engine\ngo get -u github.com/TIBCOSoftware/flogo-lib/flogo\ngo get -u github.com/TIBCOSoftware/flogo-lib/logger\n# You will need to go get these :)\ngo get -u github.com/retgits/flogo-components/activity/writetofile\ngo get -u github.com/retgits/flogo-components/trigger/pubnubsubscriber\nGenerate the metadata\nThe Flogo engine needs a bit of metadata, and to generate that the go:generate line at the top of the file needs to be executed. To do that, simply run the command below:\ngo generate\nBuild and run\nNow that the “hard” part is done, you can build and run the app like you would do for any Go app:\ngo build\n./pubnub-app\nTesting it out # If you’re testing it out in the same way as I did in my last post, you’ll see the same status messages come by as in the example with the Web UI and when you test the app in the exact same way, you’ll see that both apps will receive the same message!\nNo matter if you’re a Go developer or someone who builds microservices visually (through a very cool Web UI), you can do it using Flogo! If you’re trying out Flogo and have any questions, feel free to join our Gitter channel, create an issue on GitHub or even drop me a note on Twitter. 
I’d also love to get your feedback if you thought this was helpful (or not).\n","date":"August 28, 2018","externalUrl":null,"permalink":"/2018/08/the-art-of-using-go-in-flogo/","section":"Blog","summary":"Not too long ago Flogo introduced a new Go API that allows you to build event-driven apps by simply embedding the Flogo engine in your existing Go code. Now you can use the event-driven engine of Flogo to build Go apps while using the activities and triggers that already exist and combining that with “regular” Go code. In one of my other posts, I built an app that could receive messages from PubNub and for this post, I’ll walk through building the exact same using the Go API.\n","title":"The Art Of Using Go in Flogo","type":"blog"},{"content":"I can hear you think \u0026ldquo;Part 2?! So there actually is a part 1?\u0026rdquo; 😱 The answer to that is, yes, there most definitely is a part 1 (but you can safely ignore that 😅). In that part I went over deploying Flogo apps built with the Flogo Web UI using the Serverless Framework. Now, with the Go API that we added to Flogo, you can mix triggers and activities from Flogo (and the community) with your regular Go code and deploy using the Serverless Framework.\nWhat you\u0026rsquo;ll need # Two things need to be installed before we start. 
If you don\u0026rsquo;t have these yet, now would be a great time to get them (I\u0026rsquo;ll wait, I promise…)\nThe Serverless Framework An AWS account Let\u0026rsquo;s get started!\nCreate a sample project based on the Flogo template:\nserverless create -u https://github.com/tibcosoftware/flogo/tree/master/serverless -p myservice That generates the following structure:\nmyservice \u0026lt;-- A directory with the name of your service ├── hello \u0026lt;-- A folder with the sources of your function │ ├── function.go \u0026lt;-- A Hello World function │ └── main.go \u0026lt;-- The Lambda trigger code, created by Flogo ├── .gitignore \u0026lt;-- Ignores things you don\u0026#39;t want in git ├── Makefile \u0026lt;-- A Makefile to build and deploy even faster ├── README.md \u0026lt;-- A quickstart guide └── serverless.yaml \u0026lt;-- The Serverless Framework template The content of main.go comes directly from the Lambda trigger. The function.go file has three methods that make up the entire app:\ninit # Sets up the defaults — log levels, creates the app by calling shimApp(), and starts the engine.\nshimApp # Builds a new Flogo app and registers the Lambda trigger with the engine. The shim triggers the engine every time an event comes into Lambda and calls RunActivities each time.\nRunActivities # This is where the actual work happens. You get the input from whatever event triggered your Lambda function in a map called evt (part of the inputs). The sample logs \u0026ldquo;Go Serverless v1.x! Your function executed successfully!\u0026rdquo; and returns the same as a response. The trigger in main.go handles marshalling it into a proper API Gateway response.\nBuild and Deploy # To build the executable for Lambda, run make or make build. That runs two commands under the hood:\ngo generate ./...: Generates the metadata that the Flogo engine needs for all the activities and triggers you\u0026rsquo;re using, so it can be compiled into the executable. 
env GOOS=linux go build -ldflags=\u0026quot;-s -w\u0026quot; -o bin/hello hello/*.go: Creates a Linux executable from the sources in the hello folder and puts it in the bin folder. To deploy, run make deploy or sls deploy. This pushes your function to AWS Lambda.\nThe output will look something like:\n\u0026lt;snip\u0026gt; Service Information service: myservice stage: dev region: us-east-1 stack: myservice-dev api keys: None endpoints: GET - https://xxx.execute-api.us-east-1.amazonaws.com/dev/hello functions: hello: myservice-dev-hello \u0026lt;snip\u0026gt; And test # Test it with cURL:\ncurl --request GET --url https://xxx.execute-api.us-east-1.amazonaws.com/dev/hello --header \u0026#39;content-type: application/json\u0026#39; You should get back:\n{\u0026#34;message\u0026#34;: \u0026#34;Go Serverless v1.x! Your function executed successfully!\u0026#34;} A little more personal # That works, but it\u0026rsquo;s not very useful yet. Let\u0026rsquo;s update the code to handle both GET and POST and return a more personalized response.\nFirst, update serverless.yml with a new event handler. Around line 58 you\u0026rsquo;ll find the events section. It already has a GET entry — copy it and change the method to POST.\nNext, update the switch statement in RunActivities so each method gets its own message, while the response creation after the switch stays the same.\nFor GET, keep the original message: message = \u0026quot;Go Serverless v1.x! Your function executed successfully!\u0026quot;\nFor POST, reply with the caller\u0026rsquo;s name.\nBuild and Deploy, again # Run make deploy to build and deploy the update. The output now shows both endpoints:\n\u0026lt;snip\u0026gt; Service Information service: myservice stage: dev region: us-east-1 stack: myservice-dev api keys: None endpoints: GET - https://xxx.execute-api.us-east-1.amazonaws.com/dev/hello POST - https://xxx.execute-api.us-east-1.amazonaws.com/dev/hello functions: hello: myservice-dev-hello \u0026lt;snip\u0026gt; The GET still works as before. 
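The GET/POST branching described above can be sketched in plain Go (a sketch only: the flogo-lib plumbing is omitted, and the evt field names assume the API Gateway proxy event shape):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// handleEvent sketches the GET/POST branching; the real handler lives in
// RunActivities and works with the flogo-lib attribute types, omitted here.
func handleEvent(evt map[string]interface{}) string {
	method, _ := evt["httpMethod"].(string)
	if method == "POST" {
		// The POST body carries a JSON payload like {"name": "Flynn"}.
		var payload struct {
			Name string `json:"name"`
		}
		if body, ok := evt["body"].(string); ok {
			json.Unmarshal([]byte(body), &payload)
		}
		if payload.Name != "" {
			return fmt.Sprintf("%s is going all in on Serverless v1.x!", payload.Name)
		}
	}
	return "Go Serverless v1.x! Your function executed successfully!"
}

func main() {
	fmt.Println(handleEvent(map[string]interface{}{
		"httpMethod": "POST",
		"body":       `{"name": "Flynn"}`,
	})) // prints: Flynn is going all in on Serverless v1.x!
}
```

Wiring this into RunActivities means reading evt from the inputs map and placing the message in the returned attributes; the trigger in main.go then marshals it into a proper API Gateway response.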
The command curl --request GET --url https://xxx.execute-api.us-east-1.amazonaws.com/dev/hello --header 'content-type: application/json' should still respond with {\u0026quot;message\u0026quot;: \u0026quot;Go Serverless v1.x! Your function executed successfully!\u0026quot;}\nBut when you POST:\ncurl --request POST --url https://xxx.execute-api.us-east-1.amazonaws.com/dev/hello --header \u0026#39;content-type: application/json\u0026#39; --data \u0026#39;{\u0026#34;name\u0026#34;: \u0026#34;Flynn\u0026#34;}\u0026#39; you get a personalized response:\n{\u0026#34;message\u0026#34;: \u0026#34;Flynn is going all in on Serverless v1.x!\u0026#34;} What\u0026rsquo;s next # If you\u0026rsquo;re a Go developer who wants to build event-driven apps and deploy with the Serverless Framework, Flogo is worth a look. If you have questions, join the Gitter channel, create an issue on GitHub, or drop me a note on Twitter.\n","date":"August 16, 2018","externalUrl":null,"permalink":"/2018/08/serverless-and-flogo-a-perfect-match-part-2/","section":"Blog","summary":"I can hear you think “Part 2?! So there actually is a part 1?” 😱 The answer to that is, yes, there most definitely is a part 1 (but you can safely ignore that 😅). In that part I went over deploying Flogo apps built with the Flogo Web UI using the Serverless Framework. 
Now, with the Go API that we added to Flogo, you can mix triggers and activities from Flogo (and the community) with your regular Go code and deploy using the Serverless Framework.\n","title":"Deploying Flogo Apps to Lambda with the Serverless Framework (Part 2)","type":"blog"},{"content":"I got a ton of great feedback on my post Securely Chatting Microservices, so I decided to create a video out of it and start a new video series called Flynn in Flight!\n","date":"August 14, 2018","externalUrl":null,"permalink":"/2018/08/flynn-in-flight-how-to-build-securely-chatting-microservices/","section":"Blog","summary":"I got a ton of great feedback on my post Securely Chatting Microservices, so I decided to create a video out of it and start a new video series called Flynn in Flight!\n","title":"Flynn in Flight: Secure Microservice Communication with Flogo and PubNub","type":"blog"},{"content":"Building microservices is awesome, having them talk to each other is even more awesome! But in today\u0026rsquo;s world, you can\u0026rsquo;t be too careful when it comes to sending sensitive data across the wire. Last week I was at PubNub for a Meetup where, together with Jordan Schuetz and Nicholas Grenié, we spoke about cool things you can do with PubNub. One of them is using PubNub as a messaging layer to have your microservices, built with Flogo (duh), talk to each other in a secure way. In this post, I\u0026rsquo;ll go over the steps to build those microservices and hook them up using PubNub.\nWhat are we building? # During the Meetup, Jordan and Nicholas showed a demo that used Typeform to build a simple mobile app for opening your front door. Messages flowed from a mobile device, through PubNub, to Slack where a user could click a button to open the door or not. 
Here we\u0026rsquo;ll build a simpler version: a microservice that receives messages from PubNub and writes them to a file, keeping a ledger of everyone coming in.\nGetting a PubNub account # You\u0026rsquo;ll need a PubNub account first. Registration is easy — go to https://dashboard.pubnub.com/login and use \u0026ldquo;SIGN UP\u0026rdquo; to create a new account. After signing up, use the big red button to create a new app (the name doesn\u0026rsquo;t matter, you can change it later). Click on the newly created app and you\u0026rsquo;ll see a new KeySet. The Publish and Subscriber keys are what you need to connect to PubNub.\nStep 1: Creating a key set\nBuilding your app using a UI # To get started with Flogo, the only thing you need installed is Docker. Check out this link for a refresher on getting the Flogo Web UI running. Once you have it pulled from Docker Hub and running, open a browser and go to http://localhost:3303/apps.\nStep 2: Create a new app\nClick \u0026ldquo;New\u0026rdquo; to create a new microservice and give it a name. Click \u0026ldquo;Create a Flow\u0026rdquo;, name it whatever you want, then click on the flow to open the design canvas.\nFlogo doesn\u0026rsquo;t ship with a PubNub trigger out of the box, so I built one using the SDK from the PubNub team. To install it, click the \u0026ldquo;+\u0026rdquo; icon on the left side of the screen:\nStep 3: Adding a trigger\nClick \u0026ldquo;Install new\u0026rdquo; and paste \u0026ldquo;https://github.com/retgits/flogo-components/trigger/pubnubsubscriber\u0026rdquo; into the input dialog. After installation, click \u0026ldquo;Receive PubNub messages\u0026rdquo; to add the trigger to your app.\nWe want to store the incoming PubNub message in a file. To do that, create an Input parameter by clicking the grey \u0026ldquo;Input Output\u0026rdquo; bar. 
Call the parameter \u0026ldquo;pubnubmessage\u0026rdquo;, keep the type as \u0026ldquo;string\u0026rdquo;, and click save.\nNow configure the trigger to listen for PubNub messages. Click on the trigger and fill in:\npublishKey: The key from PubNub (usually starts with pub-c) subscribeKey: The key from PubNub (usually starts with sub-c) channel: The channel to listen on (totally up to you) Then click \u0026ldquo;Map to flow inputs\u0026rdquo; to map the PubNub message to the \u0026ldquo;pubnubmessage\u0026rdquo; parameter. The parameter will already be selected since it\u0026rsquo;s the only one — just click \u0026ldquo;* message\u0026rdquo; in the Trigger Output section and \u0026ldquo;save\u0026rdquo;. Click the \u0026ldquo;X\u0026rdquo; on the top-right (no, not your browser…) to close the dialog and go back to the flow.\nWe\u0026rsquo;ll add two activities: one to log the message and one to write it to a file. Click the large \u0026ldquo;+\u0026rdquo; sign to add an activity:\nStep 4: Adding activities\nPick \u0026ldquo;Log Message\u0026rdquo; from the activity list on the right. Hover over the new activity to see the cog icon, then hover over that to get the configuration menu. Select \u0026ldquo;message\u0026rdquo; in \u0026ldquo;Activity Inputs\u0026rdquo; and expand \u0026ldquo;flow (flow)\u0026rdquo; to select \u0026ldquo;pubnubmessage\u0026rdquo;.\nStep 5: Mapping data\nHit \u0026ldquo;save\u0026rdquo; and that part is done.\nAdding activities # Now we need a file-writing activity. On the main flow screen, click \u0026ldquo;Install new activity\u0026rdquo; to get the same install dialog.\nStep 6: Adding new activities\nPaste \u0026ldquo;https://github.com/retgits/flogo-components/activity/writetofile\u0026rdquo; in the dialog. 
Once installed, add it to your flow and configure it:\nAppend: Set to \u0026ldquo;true\u0026rdquo; — we want to append, not overwrite Content: Expand \u0026ldquo;flow (flow)\u0026rdquo; and select \u0026ldquo;pubnubmessage\u0026rdquo; Create: Set to \u0026ldquo;true\u0026rdquo; — create the file if it doesn\u0026rsquo;t exist Filename: Something like \u0026ldquo;visitors.txt\u0026rdquo; (include the double quotes) Click \u0026ldquo;save\u0026rdquo; and you\u0026rsquo;re back on the main flow screen. The completed flow looks like this:\nA completed flow!\nBuilding an executable # That\u0026rsquo;s the entire flow design. To build it, click the \u0026ldquo;\u0026lt;\u0026rdquo; button on the top-left to go back to your microservice, then select \u0026ldquo;Build\u0026rdquo; and choose your OS. The Flogo Web UI will compile your microservice into a tiny executable (about 12MB).\nLet\u0026rsquo;s test it! # Run the executable by double-clicking it (Windows) or from a terminal (macOS/Linux). If it starts successfully you\u0026rsquo;ll see something like: 2018–08–06 21:20:02.867 INFO [engine] — Received status [pubnub.PNConnectedCategory], this is expected for a subscribe, this means there is no error or issue whatsoever\nTo test it, use the PubNub debug console:\nTesting from the PubNub console\nIn \u0026ldquo;Default Channel\u0026rdquo;, type the same channel name you configured in your Flogo app (MyChannel, in this example). Click \u0026ldquo;ADD CLIENT\u0026rdquo; to create a client that can send and receive data. The nice thing about PubNub is that you don\u0026rsquo;t need to open any firewall ports for the debug console and your microservice to communicate. At the bottom of the page you\u0026rsquo;ll see \u0026ldquo;{\u0026ldquo;text\u0026rdquo;:\u0026ldquo;Enter Message Here\u0026rdquo;}\u0026rdquo; — either hit \u0026ldquo;SEND\u0026rdquo; or replace it with something like \u0026ldquo;{\u0026ldquo;Hello\u0026rdquo;:\u0026ldquo;World\u0026rdquo;}\u0026rdquo;. 
After clicking \u0026ldquo;SEND\u0026rdquo;, the message shows up in your microservice\u0026rsquo;s terminal:\nOh!! A message in a terminal window\nAnd in the log file created alongside your app:\nShows up in Notepad too :D\nPubNub gives your microservices a secure communication layer, and Flogo makes building those microservices straightforward — whether you\u0026rsquo;re writing Go code or designing flows visually through the Web UI. If you\u0026rsquo;re trying out Flogo and have questions, feel free to join our Gitter channel, create an issue on GitHub or drop me a note on Twitter.\n","date":"August 13, 2018","externalUrl":null,"permalink":"/2018/08/how-to-build-securely-chatting-microservices-with-flogo-and-pubnub/","section":"Blog","summary":"Building microservices is awesome, having them talk to each other is even more awesome! But in today’s world, you can’t be too careful when it comes to sending sensitive data across the wire. Last week I was at PubNub for a Meetup where, together with Jordan Schuetz and Nicholas Grenié, we spoke about cool things you can do with PubNub. One of them is using PubNub as a messaging layer to have your microservices, built with Flogo (duh), talk to each other in a secure way. In this post, I’ll go over the steps to build those microservices and hook them up using PubNub.\n","title":"How To Build Securely Chatting Microservices With Flogo And PubNub","type":"blog"},{"content":"Building multi-platform event-driven microservices and functions can get complicated fast. 
In this short webinar hosted by DZone, I cover how to use Project Flogo to build event-driven microservices and functions that target both Kubernetes and AWS Lambda — without losing your mind in the process.\n","date":"August 3, 2018","externalUrl":null,"permalink":"/2018/08/efficiently-build-and-deploy-event-driven-functions-to-kubernetes-aws-lambda/","section":"Blog","summary":"Building multi-platform event-driven microservices and functions can get complicated fast. In this short webinar hosted by DZone, I cover how to use Project Flogo to build event-driven microservices and functions that target both Kubernetes and AWS Lambda — without losing your mind in the process.\n","title":"Efficiently Build And Deploy Event-driven Functions to Kubernetes \u0026 AWS Lambda","type":"blog"},{"content":"\u0026ldquo;Serverless\u0026rdquo; allows developers to focus on writing their code, and a cloud provider, like AWS, takes care of all the other bits. Building serverless apps means the developer doesn\u0026rsquo;t have to worry about server management, scaling, or high availability, a convenience that usually comes with the added benefit of lower operational cost. We\u0026rsquo;ll be showing how to use the Project Flogo lightweight integration engine and open source framework to deploy functions to AWS Lambda using SAM.\n","date":"August 2, 2018","externalUrl":null,"permalink":"/2018/08/tibco-meetup-2018-building-serverless-apps-with-go-sam/","section":"Blog","summary":"“Serverless” allows developers to focus on writing their code, and a cloud provider, like AWS, takes care of all the other bits. Building serverless apps means the developer doesn’t have to worry about server management, scaling, or high availability, a convenience that usually comes with the added benefit of lower operational cost. 
We’ll be showing how to use the Project Flogo lightweight integration engine and open source framework to deploy functions to AWS Lambda using SAM.\n","title":"TIBCO Meetup 2018 - Building serverless apps with Go \u0026 SAM","type":"blog"},{"content":"This post walks through building a Slack bot that responds to a /cat slash command with cat facts. The bot is built with Project Flogo, runs on AWS Lambda, and is exposed through API Gateway. The whole thing takes about 15 minutes to set up.\nPrerequisites # Before getting started, make sure you have:\nGit The Flogo CLI (go get -u github.com/TIBCOSoftware/flogo-cli/...) An AWS account with access to Lambda and API Gateway If you want to tweak the app before deploying, you can use the Flogo Web UI:\ndocker run -it -p 3303:3303 flogo/flogo-docker:latest eula-accept Just make sure to also install the QueryParser activity.\nClone the repo # git clone https://github.com/retgits/flogo-slackbot cd flogo-slackbot What the bot does # The bot responds to a /cat slash command in Slack. Depending on what you type after /cat, it takes one of three branches:\n/cat usage — shows available commands /cat fact — fetches a random cat fact and posts it in the channel /cat whoami — responds with your Slack username If you load slack_cat.json into the Flogo Web UI, the flow looks like this:\nThe text responses are defined on these lines in the JSON:\nusage: line 121 fact: line 156 whoami: line 173 Feel free to customize those before building. 
Once you\u0026rsquo;re done, export the app from the Flogo Web UI.\nBuild the executable # Turn the JSON flow into a Lambda-ready binary:\n# Create the app structure flogo create -f slack_cat.json -flv github.com/TIBCOSoftware/flogo-contrib/action/flow@master slackcat cd slackcat # Build for Lambda flogo build -e -shim lambda_trigger Test locally with SAM (optional) # You can test the bot locally using the AWS SAM CLI before deploying to AWS.\nThe sam folder contains a SAM template (YAML) and six JSON test events — two per command (one simple, one simulating API Gateway):\nusage.json / apiusage.json fact.json / apifact.json whoami.json / apiwhoami.json To run a local test:\ncd sam cp ../src/slackcat/handler ./handler sam local invoke \u0026#34;SlackCat\u0026#34; -e apifact.json The output will look something like:\n{\u0026#34;statusCode\u0026#34;:200,\u0026#34;headers\u0026#34;:null,\u0026#34;body\u0026#34;:\u0026#34;:wave: \u0026lt;@sam\u0026gt;, did you know that Polydactyl cats are also referred to as \\\u0026#34;Hemingway cats\\\u0026#34; because the author was so fond of them.\u0026#34;} I genuinely didn\u0026rsquo;t know that about Hemingway and polydactyl cats. 
😸\nDeploy to Lambda # There are plenty of ways to deploy to Lambda, but doing it manually helps you see what the automation frameworks do under the hood.\nIn the Lambda console, click \u0026ldquo;Create function\u0026rdquo; Configure it: Name: SlackBot Runtime: Go 1.x Role: Choose an existing role → lambda_basic_execution Click \u0026ldquo;Create function\u0026rdquo; In the Function code section, upload src/slackcat/handler.zip Set the Handler field to handler (replace the default hello) Click \u0026ldquo;Save\u0026rdquo; To test from the console, create a test event using any of the JSON files from the sam folder.\nSet up API Gateway # The bot needs to be reachable from the internet, so we add an API Gateway trigger.\nIn the Lambda function config, find \u0026ldquo;Add triggers\u0026rdquo; and select \u0026ldquo;API Gateway\u0026rdquo; Use these settings: API: Create a new API API name: SlackBot Deployment stage: dev Security: Open Click \u0026ldquo;Add\u0026rdquo;, then \u0026ldquo;Save\u0026rdquo; A note on security: the \u0026ldquo;Open\u0026rdquo; setting is fine for testing, but you\u0026rsquo;ll want to lock this down before using it for real.\nAfter saving, expand the API Gateway section to find your Invoke URL — you\u0026rsquo;ll need it for the Slack configuration.\nCreate the Slack app # Head to https://api.slack.com/apps and click \u0026ldquo;Create New App.\u0026rdquo;\nGive it a name, pick a workspace, and click \u0026ldquo;Create App.\u0026rdquo;\nNow set up the slash command:\nClick \u0026ldquo;Slash Commands\u0026rdquo; → \u0026ldquo;Create New Command\u0026rdquo; Configure: Command: /cat Request URL: your API Gateway Invoke URL Short Description: :cat: Usage Hint: usage Click \u0026ldquo;Save\u0026rdquo; Go to \u0026ldquo;Install App\u0026rdquo; → \u0026ldquo;Install App to Workspace\u0026rdquo; → \u0026ldquo;Authorize\u0026rdquo; Try it out # At this point you should have:\nA Flogo app deployed to Lambda An API Gateway exposing it A Slack slash 
command pointing to the gateway Type /cat fact in any channel and see what you get.\nAgain, I had no idea. 🙀\nIf you build something fun with Flogo or have questions, feel free to reach out.\n","date":"June 15, 2018","externalUrl":null,"permalink":"/2018/06/how-to-build-a-slack-bot-powered-by-project-flogo/","section":"Blog","summary":"This post walks through building a Slack bot that responds to a /cat slash command with cat facts. The bot is built with Project Flogo, runs on AWS Lambda, and is exposed through API Gateway. The whole thing takes about 15 minutes to set up.\n","title":"How To Build a Slack Bot Powered By Project Flogo","type":"blog"},{"content":"Serverless has real potential to change how businesses build and architect cloud applications. No provisioning infrastructure, no dealing with maintenance, updates, scaling, or capacity planning — you just upload your apps to AWS and go. This webinar walks through the case for going serverless and what that looks like in practice.\n","date":"June 5, 2018","externalUrl":null,"permalink":"/2018/06/why-you-should-go-serverless-with-aws-and-tibco/","section":"Blog","summary":"Serverless has real potential to change how businesses build and architect cloud applications. No provisioning infrastructure, no dealing with maintenance, updates, scaling, or capacity planning — you just upload your apps to AWS and go. This webinar walks through the case for going serverless and what that looks like in practice.\n","title":"Why You Should Go Serverless with AWS and TIBCO","type":"blog"},{"content":"Every developer has that one technology they gravitate toward — whether it\u0026rsquo;s ESB, open source tooling, or Node.js. The idea behind this webinar was simple: what if you could bring all of that into one place? And you\u0026rsquo;re not locked into iPaaS for deployment either. 
You can deploy on-premises, to a private cloud, to devices, or to serverless environments.\n","date":"April 18, 2018","externalUrl":null,"permalink":"/2018/04/integration--cloud-a-match-made-in-heaven/","section":"Blog","summary":"Every developer has that one technology they gravitate toward — whether it’s ESB, open source tooling, or Node.js. The idea behind this webinar was simple: what if you could bring all of that into one place? And you’re not locked into iPaaS for deployment either. You can deploy on-premises, to a private cloud, to devices, or to serverless environments.\n","title":"Integration + Cloud - A Match Made In Heaven","type":"blog"},{"content":"Together with the O\u0026rsquo;Reilly team, we did a webinar on low-code app development. APIs and microservices are great if you\u0026rsquo;re a technical developer, but what if you\u0026rsquo;re not — and you still need to understand how they connect? In this 60-minute webcast, Leon Stigter and Bruno Trimouille of TIBCO Software walk through how low-code platforms can help marketing and sales teams automate their workflows and deliver on business goals without needing to get under the hood.\n","date":"April 12, 2018","externalUrl":null,"permalink":"/2018/04/transform-your-business-app-with-low-code-apis-and-microservices/","section":"Blog","summary":"Together with the O’Reilly team, we did a webinar on low-code app development. APIs and microservices are great if you’re a technical developer, but what if you’re not — and you still need to understand how they connect? 
In this 60-minute webcast, Leon Stigter and Bruno Trimouille of TIBCO Software walk through how low-code platforms can help marketing and sales teams automate their workflows and deliver on business goals without needing to get under the hood.\n","date":"April 12, 2018","externalUrl":null,"permalink":"/2018/04/transform-your-business-app-with-low-code-apis-and-microservices/","section":"Blog","summary":"Together with the O’Reilly team, we did a webinar on low-code app development. APIs and microservices are great if you’re a technical developer, but what if you’re not — and you still need to understand how they connect?\nIn this 60-minute webcast, Leon Stigter and Bruno Trimouille of TIBCO Software walk through how low-code platforms can help marketing and sales teams automate their workflows and deliver on business goals without needing to get under the hood.\n","title":"Transform your business app with low-code, APIs, and microservices","type":"blog"},{"content":"As the AI-fueled, edge-exposed, blockchain-driven, and streaming analytics-enabled use cases of the future move closer into view, new technologies are needed to make the vision real. Unique and complex workloads accompany the use cases of the future, but luckily, the enabling technologies to compute those workloads have already arrived.\nJoin TIBCO and AWS for an exciting webinar to help you better understand what serverless architecture is all about, and the benefits of running your apps in a serverless environment. Before you give a listen, how about a quick introduction?\nSo, what is serverless computing anyway?\nServerless is the utilization of a compute platform to run code without provisioning or managing servers. Serverless can be utilized for nearly any type of application or backend service, but there are certain use cases that need auto-scaling capabilities both for high availability and for economics. Some of the most common use cases for serverless include ecommerce, mobile back-ends, streaming data analytics, chatbots and AI, and IT process automation.\nWhy should I utilize serverless for my application deployment?\nBuilding serverless applications means that you can focus on your customers and your product, and not worry about servers, volume spikes, run-times, or even up-times. In short, zero administration. The servers do still exist, but as a developer, you don’t have to think about those servers. You just focus on code, which gives you more deployment agility. In the world of cloud, you pay for what you use, as opposed to deploying parallel instances of your environment as a fail-safe. 
This can be up to a 90% savings over a VM, as you never pay for idle. Serverless unlocks new business models for the enterprise based on its ability to execute code only when triggered by an event. Enter: Function-as-a-Service (FaaS).\nWhat’s the relationship between FaaS and serverless?\nFaaS describes the capability to deploy an individual “function”, action, or piece of business logic using serverless architecture. Functions are expected to start within milliseconds and process individual requests; then the process ends and your function no longer uses any resources, potentially saving your business millions while maintaining the load balancing, instant scaling, and high availability you need. FaaS is the breakthrough enabler of the event-driven architectures that enterprises need in order to take action on events occurring in real time.\nHow can TIBCO help in your voyage into the world of serverless?\nServerless applications must be lightweight and highly performant to deliver on business value. TIBCO’s open source Project Flogo® was written entirely in Golang with the goal of running microservices in the smallest possible footprint. We’ve seen services built with Flogo® up to 50x smaller than comparable services built using different frameworks.\nThe Flogo® framework is designed around single units of executable work, called a flow. A flow can be thought of as a function, and executes a specific set of tasks. 
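The flow-as-function idea can be sketched in a few lines of plain Go. This is an illustrative stand-in, not Flogo's actual API: every name here (`Event`, `Flow`, `Register`, `Dispatch`) is made up for the example. The point it demonstrates is the FaaS model in miniature: a flow consumes no resources until its triggering event arrives.

```go
package main

import (
	"errors"
	"fmt"
)

// Event is a minimal stand-in for a trigger payload (hypothetical type,
// not part of Flogo or AWS Lambda).
type Event struct {
	Name    string
	Payload string
}

// Flow is a single unit of executable work: it runs only when invoked,
// keeps no state between calls, and returns a result.
type Flow func(payload string) (string, error)

// registry maps event names to the flow that handles them.
var registry = map[string]Flow{}

// Register wires an event name to a flow.
func Register(event string, f Flow) { registry[event] = f }

// Dispatch looks up the flow for an event and executes it on demand;
// nothing runs until an event fires.
func Dispatch(e Event) (string, error) {
	f, ok := registry[e.Name]
	if !ok {
		return "", errors.New("no flow registered for " + e.Name)
	}
	return f(e.Payload)
}

func main() {
	Register("greet", func(p string) (string, error) {
		return "Hello " + p, nil
	})
	out, _ := Dispatch(Event{Name: "greet", Payload: "World"})
	fmt.Println(out) // prints "Hello World"
}
```

In a real FaaS platform the dispatcher is the platform itself; the developer only supplies the `Flow` bodies.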
Flogo® was the first serverless integration framework to work with AWS Lambda functions, and Flogo® has recently released fully-native Lambda capabilities to take full advantage of the newly introduced Golang support from AWS.\nUltimately, what this unique combination of AWS Lambda and Project Flogo® means for organizations is the ability to build and deploy functions faster than ever before, with the economics of a pay-as-you-go AWS pricing model that enables an entirely new world of use cases.\nGet smarter about serverless\nYou’ve heard a bit about the benefits of serverless and FaaS and how TIBCO’s toolset can help you with that transition. For a deeper dive and a better understanding of the use cases, tools, and support from TIBCO and AWS, listen to our exciting webinar discussing:\nServerless technology and the art of the possible\nMonolith to microservices to functions\nThe evolution to event-driven architecture\nThe best features in AWS Lambda\nWhat TIBCO is doing to support your transition to serverless\nAre you ready to join the market leaders in the new world of serverless? Let TIBCO and AWS help take you there.\n","date":"April 3, 2018","externalUrl":null,"permalink":"/2018/04/adopting-serverless-computing-with-tibco-aws/","section":"Blog","summary":"As the AI-fueled, edge-exposed, blockchain-driven, and streaming analytics-enabled use cases of the future move closer into view, new technologies are needed to make the vision real. Unique and complex workloads accompany the use cases of the future, but luckily, the enabling technologies to compute those workloads have already arrived.\nJoin TIBCO and AWS for an exciting webinar to help you better understand what serverless architecture is all about, and the benefits of running your apps in a serverless environment. 
Before you give a listen, how about a quick introduction?\n","title":"Adopting Serverless Computing with TIBCO \u0026 AWS","type":"blog"},{"content":"I get to work with serverless microservices on a daily basis: both services I use myself and ones I help our customers build to take advantage of the benefits that serverless brings. With many services to deploy and continuous updates to ship, I found myself doing the same thing over and over, and it is exactly that kind of repetition that frustrates me most; it simply wasn’t as seamless as I thought it could be.\nIn this article, I’ll walk you through how I cut development time and made deployments as easy and repeatable as a walk in the park — thanks to the combination of the Serverless Framework and a tool called Project Flogo.\nWhat is Flogo? # I’m guessing that you know about the Serverless Framework, but you might not know what Flogo is. Well, Flogo is an Ultralight Edge Microservices Framework! Now that we got the tagline out of the way, let’s unwrap that statement a bit. Flogo is written in Go, and now that you can run Go on Lambda, you can easily package up your service and run it on Lambda. Flogo provides you with a visual environment to design your service which, overly simplified, means you put a bunch of activities (like sending a message to Amazon SNS) in a row and execute them when an event occurs.\nTogether with the Serverless Framework, you can configure which events should trigger your function, where to deploy it, and what kind of resources it is allowed to use, all without going into the AWS console. 
The thing I’m personally very excited about is how easy configuration management is and how easily you can move your service to a new stage.\nPrerequisites # In this tutorial, I’ll walk you through creating your app, as well as deploying it using Serverless.\nYou’ll need to have:\nThe Serverless Framework and Go installed\nAn AWS account\nIf you don’t have those set up yet, the links will guide you through the steps.\nInstalling the Project Flogo CLI # To build the Flogo app, we’ll make use of the Flogo CLI.\nInstall it like so:\ngo get -u github.com/TIBCOSoftware/flogo-cli/\u0026hellip;\nTo simplify dependency management, we’re using the Go dep tool. (Note that dep strongly recommends using the binary releases that are available on the releases page.)\nCreating your app # Because of the way dep works, you’ll need to execute the commands from within your ${GOPATH}/src directory.\nLet’s create a directory called serverlesslambdaapp:\ncd $GOPATH/src\nmkdir serverlesslambdaapp\nAnd a flogo.json file in that directory:\nWith that done, you’ll need just one command to turn it into a Flogo app that you can use later on to build the executable from:\nflogo create -f flogo.json lambda\nThe above command will create a directory called lambda, find all the dependencies the flow has, and download them. It might take a few seconds for this command to complete.\nNow, we can create an executable out of that project. To run in AWS Lambda, we’ll need to embed the flogo.json in the application to make sure there are no external file dependencies. (You can still make use of environment variables, but we’ll cover that in a different tutorial.)\nThe trigger for Lambda, which contains the event information, makes use of a Makefile to build and zip the app. So let’s run:\ncd lambda\nflogo build -e -shim my_lambda_trigger\nAfter the flogo build command there are two important files in the ./src/lambda directory. One file is called handler.zip; this is a zipped executable that you can upload to Lambda. 
The other is simply called handler, and is the unzipped version.\nWhile you could absolutely use the command line tools that AWS provides to deploy your app, or even upload it manually, it’s much easier to automate that part — especially as your app becomes more complex. This is why I love the Serverless Framework ❤\nDeploying Apps with Serverless # The team at Serverless did an amazing job making deployments and packaging really simple. From here you only need a few steps.\nThe first thing is to create a new Serverless service in the same folder as your flogo.json file (if you’ve followed along with the commands, you should still be there :-)):\n# Let’s create a serverless service with the same name as the app\nserverless create -u https://github.com/retgits/flogo-serverless -p serverlesslambdaapp\nThe next step is to copy the handler over to the newly-created Serverless folder:\ncp src/lambda/handler serverlesslambdaapp/handler\nThis would be an ideal time to update your serverless.yml file with any bucket names, IAM roles, environment variables, or anything else that you want to configure, because the only thing left is to deploy!\n# To package up your function before deploying run\ncd serverlesslambdaapp\nserverless package\n# To deploy your function (which also does the packaging) run\ncd serverlesslambdaapp\nserverless deploy\n*Note: this unfortunately only works under Linux or macOS systems, or when using the Windows Subsystem for Linux (WSL). This is because Windows developers may have trouble producing a zip file that marks the binary as executable on Linux. See here for more info.\nTesting 1… 2… 3… # Let’s test the app to make sure that it really deploys to Lambda and runs correctly.\nAfter you log into AWS and select ‘Lambda’, you’ll be presented with all the functions you’ve deployed so far. 
One of them should be called something like serverlesslambdaapp-dev-hello.\nClick on that, and you’ll see the overview of your function, including a large button that says ‘Test’. Click ‘Test’ to configure a new test event (any input will do), and click ‘Test’ again to run it.\nIf all went well (and why shouldn’t it?), your log will show a line like 2018–03–07 00:18:34.735 INFO [activity-tibco-log] — Hello World from Serverless and Flogo\nWant to try yourself? # Excited to try even more things with Flogo? Check out our docs or the website.\n","date":"March 24, 2018","externalUrl":null,"permalink":"/2018/03/serverless-and-flogo-a-perfect-match/","section":"Blog","summary":"I get to work with serverless microservices on a daily basis: both services I use myself and ones I help our customers build to take advantage of the benefits that serverless brings. With many services to deploy and continuous updates to ship, I found myself doing the same thing over and over, and it is exactly that kind of repetition that frustrates me most; it simply wasn’t as seamless as I thought it could be.\nIn this article, I’ll walk you through how I cut development time and made deployments as easy and repeatable as a walk in the park — thanks to the combination of the Serverless Framework and a tool called Project Flogo.\n","title":"Serverless and Flogo - A Perfect Match","type":"blog"},{"content":"Together with the O\u0026rsquo;Reilly team, I did a webinar on visually building microservices. Modern digital experiences run on microservices, but building them isn\u0026rsquo;t always straightforward — especially if you\u0026rsquo;re not deep in the weeds of API specs and Swagger definitions.\nThe core questions we tackled: how do you let developers and architects visually define an API without needing to be Swagger experts? 
And once you\u0026rsquo;ve built your microservices, how do you deploy the same project to a private cloud, a public cloud, and on-prem without reworking everything?\n","date":"March 13, 2018","externalUrl":null,"permalink":"/2018/03/a-visual-approach-to-building-and-deploying-microservices/","section":"Blog","summary":"Together with the O’Reilly team, I did a webinar on visually building microservices. Modern digital experiences run on microservices, but building them isn’t always straightforward — especially if you’re not deep in the weeds of API specs and Swagger definitions.\nThe core questions we tackled: how do you let developers and architects visually define an API without needing to be Swagger experts? And once you’ve built your microservices, how do you deploy the same project to a private cloud, a public cloud, and on-prem without reworking everything?\n","title":"A Visual Approach To Building And Deploying Microservices","type":"blog"},{"content":"Last year on February 14th we published a blog post on “Building the Ultimate Valentine’s API.” Personally, I had a lot of fun writing it and finding facts related to how we spend our Valentine’s Day (though I forgot to bring home chocolates and flowers to my wife, who was none too pleased)! To prevent history from repeating itself, we’re taking things a step further this year with a Valentine’s Day webinar on why Integration + Cloud = A Match Made in Heaven.\nSome fun stats on Valentine’s Day: For this holiday, florists produce about 198 million roses, about 180 million cards are exchanged—and there are about 1,200 locations producing chocolate and other cocoa products for us to give to our loved ones.\[1\]\nSpeaking of love, I believe that every developer has that one special technology they absolutely love—whether it’s a programming language (like Golang, Java, or Node.js), an architecture style (like microservices and APIs), or simply all sorts of open source software. 
TIBCO holds this belief as well, so we decided to put it all together in one easy-to-use place. Putting it in one place doesn’t mean you have to use our cloud to deploy this tech—you can choose to deploy it on-premises, to a private cloud, to devices, or to serverless environments. Talk about an open relationship! 🙂\nIn the webinar, we’ll touch on three essential elements that create a healthy and strong bond between integration and the cloud. We’ll talk about rapidly prototyping apps and APIs, choosing where you want to deploy those apps, and how you can keep using the same app model when you’re moving between environments.\nTo rapidly prototype APIs you’ll need to look at two incredibly important aspects. First of all, you want to be able to visually create those API specs. This is really important because not everyone will be able to write an Open API spec (formerly known as Swagger) from scratch, using nothing more than a text editor. The second important thing is the ability to test your API specs before you hand them over to a developer, or implement them yourself. Testing in this early stage is really important because it allows you to see whether the inputs, outputs, and operations match with your design. Plus, it gives other developers the chance to start writing their integrations while you focus on the business logic.\nThe Greek philosopher Heraclitus is credited with saying, “Change is the only constant in life,” and I would assume that if he observed the modern world of integration technology, he would have said the same for it. As we move from seeing ESB and iPaaS merely as ways to connect applications to seeing them as the infrastructure on which modern apps are built, you can see the lines between application development and integration rapidly blurring. 
We’re moving from big monolithic apps to containers and functions, and we’re adopting lots of new patterns like Function-as-a-Service, evented APIs, and microgateways.\nThe last item is all about choice, one of the most important things in dating, and in integration as well. When you’re working on your integration or application development project, you might not know where you’ll end up deploying it. The code might end up on Kubernetes, in the TIBCO Cloud, or on-premises. Because you already have so many choices to make when you’re building, you don’t want to have to migrate when you’re moving to a different environment. You want to be able to take your app and deploy it somewhere else!\nAs Charles M. Schulz once said, “All you need is love. But a little chocolate now and then doesn’t hurt.” So, I should probably stop writing this blog and go get my Valentine some chocolate. If you want to get started with TIBCO Cloud Integration, you can do that here, or sign up to attend tomorrow’s webinar. The best news? Even if you stand us up, you’ll receive a copy of the recording.\n","date":"February 13, 2018","externalUrl":null,"permalink":"/2018/02/integration--cloud-a-match-made-in-heaven/","section":"Blog","summary":"Last year on February 14th we published a blog post on “Building the Ultimate Valentine’s API.” Personally, I had a lot of fun writing it and finding facts related to how we spend our Valentine’s Day (though I forgot to bring home chocolates and flowers to my wife, who was none too pleased)! To prevent history from repeating itself, we’re taking things a step further this year with a Valentine’s Day webinar on why Integration + Cloud = A Match Made in Heaven.\n","title":"Integration + Cloud - A Match Made in Heaven","type":"blog"},{"content":"I spoke at a Gopherfest meetup about the Go Programming Language. 
Together with an awesome colleague, Miguel Torres, I talked about Project Flogo and the lessons we learned building it.\n","date":"December 11, 2017","externalUrl":null,"permalink":"/2017/12/gopherfest-sv-2017-architectures-design-patterns-and-lessons-learned/","section":"Blog","summary":"I spoke at a Gopherfest meetup about the Go Programming Language. Together with an awesome colleague, Miguel Torres, I talked about Project Flogo and the lessons we learned building it.\n","title":"Gopherfest SV 2017 - Architectures, Design Patterns, and Lessons Learned","type":"blog"},{"content":"This year I wasn’t able to attend re:Invent, but I did want to do something nice in between the live streams, specifically around serverless compute and AWS Lambda.\nLambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume — there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service — all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability.\nThe above snippet from AWS explains very well what Lambda aims to do!\nTo develop your serverless functions you can use many different application frameworks, but did you know that Flogo is one of them? With Project Flogo you can embrace serverless computing through Flogo’s first-class support for AWS Lambda: Lambda infinitely scales Flogo’s ultralight functions and scales them back to zero when not in use.\nIn my last article I wrote about how to deploy those ultralight Flogo apps to Kubernetes and manage your microservices on one of the most powerful container management platforms. 
In this article I want to go one step further and deploy them on a Function-as-a-Service platform, AWS Lambda.\nPrerequisites # Before we get started there are a few prerequisites that we need to take into account:\nYou’ll need to have Docker installed\nYou’ll need to have the latest version of the Project Flogo Web UI installed if you want to graphically create your microservices (and who doesn’t, right?): docker run -it -p 3303:3303 flogo/flogo-docker eula-accept\nYou’ll need to have the Flogo CLI installed\nYou’ll obviously need an account for AWS :)\nThe app # With the latest version of the Web UI for Project Flogo there are a bunch of new things (the Lambda trigger being one of them). What hasn’t changed though is that we’ll start by creating a new microservice and a new flow within it.\nAfter you click on the flow, you’ll be presented with quite a new look and feel for modelling your flow. The flows and the triggers have been separated so that the flows are now very well aligned with the concept of a function. As a developer you can focus on the business logic without worrying about the infrastructure :)\nYou can click on the ‘+’ sign on the left to add a new trigger ‘Start Flow as a function in Lambda’. This allows your flow to run as a function in Lambda. We won’t need any Flow params for now, but we do want to see something in the logs, so let’s add a ‘Log Message’ activity.\nBuilding for Lambda # To build an app for Lambda, you’ll need to export the flow from the Web UI first using the ‘Export App’ function. From a terminal window execute\nflogo create -f \u0026lt;appname\u0026gt;.json lambda\ncd lambda\nThe above command will create a new folder called ‘lambda’ and go get (pun intended) all the dependencies it needs to run your flows. For Lambda we want to build an application with the configuration embedded in the executable and with a shim to instruct the build process to overwrite the entry point for the application using the Lambda Trigger. 
The argument to -shim indicates the trigger ID to use as the entry point to the flow (function). The AWS Lambda trigger leverages a makefile to kick off the build process, and the build process happens within a docker container (hence docker as a prerequisite) because, at the time of writing, Go plugins (.so files) can only be built on Linux. So to build execute:\nflogo build -e -shim start_flow_as_a_function_in_lambda\nThis command will pull the docker image ‘eawsy/aws-lambda-go-shim:latest’ locally and build the zip file needed for deployment to AWS Lambda. Once this command finishes successfully the zip file (handler.zip) will be located in your app directory (for example `/path/to/app/lambda/src/lambda/handler.zip`).\nUploading to AWS Lambda # In the Lambda console you can create a new function, and it is important that the runtime is set to ‘Python 2.7’ (the generated shim contains a python executable function that in turn triggers the flow).\nAs you might have seen during the live streams AWS just announced support for Go, so stay tuned for native triggers as well!\nAfter hitting the create button, you’ll be presented with a brand new Lambda function. From the ‘Code entry type’ dropdown you’ll need to select ‘Upload a .ZIP file’, pick the zip that was just generated, and set the Handler to ‘handler.Handle’ (without this you’ll not be able to trigger your flow). You can leave the other defaults as is and hit the orange ‘save’ button. To see your flow in action send a test event and check the CloudWatch logs.\nConclusion # This was a very simple Flogo app running on AWS Lambda, but you can do incredibly cool things using Project Flogo as the microservices framework that runs on Lambda! 
If you want to get started yourself, just follow the steps above and let us know what you end up building.\n","date":"December 1, 2017","externalUrl":null,"permalink":"/2017/12/what-do-i-do-in-between-reinvent-live-streams-build-lambda-functions/","section":"Blog","summary":"This year I wasn’t able to attend re:Invent, but I did want to do something nice in between the live streams, specifically around serverless compute and AWS Lambda.\nLambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume — there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service — all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability.\n","title":"What Do I Do In Between re:Invent Live Streams? Build Lambda functions","type":"blog"},{"content":"With Project Flogo you can visually create Ultralight Edge Microservices and run them anywhere. But what if you want to run those incredibly light microservices using one of the most powerful container management platforms, Kubernetes?\nPrerequisites # As described on the Kubernetes website:\nKubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.\nIf you haven’t set up your own Kubernetes cluster yet, I can absolutely recommend looking at minikube. The team has made an amazing effort to make it super easy to run your own cluster locally with minimal installation effort.\nAs Kubernetes is meant for containerized apps, we’ll have to create a Docker image from our Flogo app and push it to a registry accessible to the Kubernetes cluster. 
In the examples below I’ll make use of Docker Cloud, but depending on your preference you can pick any container registry.\nThe Flogo app # As the post is more about running the app on Kubernetes than it is about how to create the apps, I’ve simply used the tutorial in the Flogo documentation. This app has a simple HTTP receiver listening on port 8080 and sends back a default string. If you want to use a different app that is of course possible as well!\nCreate a Docker image # Flogo describes itself as an Ultralight Edge Microservices Framework, so containerizing the apps built with it shouldn’t add too much overhead. Luckily today you have a whole bunch of small base images available, ranging from alpine to debian (with jessie-slim). My three favorites being:\n$ docker images\ndebian jessie-slim a870c469749c 10 days ago 79.1MB\nalpine latest 053cde6e8953 11 days ago 3.97MB\nbitnami/minideb latest c5693017e0d4 3 weeks ago 53.6MB\nThe app I have, compiled to run on Linux, is about 7.4MB, and because I want to keep the overhead as low as possible I’ll use alpine for this one. Combining alpine with my Flogo app should result in an image of about 12MB, which I think is pretty good. To build an image we need a Dockerfile:\n# Dockerfile for flogoapp\n# VERSION 0.0.1\n# The FROM instruction initializes a new build stage and sets the Base Image for subsequent instructions.\n# We’re using alpine because of the small size\nFROM alpine\n# The ADD instruction copies new files, directories or remote file URLs from \u0026lt;src\u0026gt; and adds them to the filesystem of the image at the path \u0026lt;dest\u0026gt;.\n# We’ll add the flogoapp, built using the Web UI, to the working directory\nADD flogoapp.dms .\n# The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime.\n# The app we’re using listens on port 8080 by default\nEXPOSE 8080\n# The main purpose of a CMD is to provide defaults for an executing container.\n# In our case we simply want to run the app\nCMD ./flogoapp.dms\nTo build an app out of this you can simply run the command:\ndocker build . -t \u0026lt;your username\u0026gt;/flogoalpine\nIn my case that ended up with quite a small image, at roughly the size I expected it to be!\nREPOSITORY TAG IMAGE ID CREATED SIZE\nretgits/flogoalpine latest e7bc672e009e About an hour ago 11.7MB\nAs mentioned, I’ll push my images to Docker Cloud so that the Kubernetes cluster can access them. One simple command makes the image available :-)\ndocker push \u0026lt;your username\u0026gt;/flogoalpine\nThat takes care of the Docker part, let’s get over to Kubernetes!\nCreate a “Deployment” # The Deployment in Kubernetes is a controller which provides declarative updates for Pods and ReplicaSets. Essentially speaking it gives you the ability to declaratively update your apps, meaning zero downtime!\nA sample deployment.yaml file could look like the one below. This will create a Deployment on Kubernetes, with a single replica (so one instance of our app running) where the container will have the name `flogoapp` and it will pull the \u0026lt;image name\u0026gt; as the container to run. Pay special attention to the `containerPort` as that will make sure that the port will be accessible from the outside (though still within the cluster).\napiVersion: extensions/v1beta1\nkind: Deployment\nmetadata:\n  name: flogoapp-deployment\nspec:\n  replicas: 1\n  template:\n    metadata:\n      labels:\n        app: flogoapp\n    spec:\n      containers:\n      - name: flogoapp\n        image: \u0026lt;image name\u0026gt;\n        imagePullPolicy: Always\n        ports:\n        - containerPort: 8080\nTo now create a deployment you can run:\nkubectl create -f deployment.yaml\nWithin the kubectl cli tool, or using the dashboard, you can see the status of your deployments:\n$ kubectl get deployments\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\nflogoapp-deployment 1 1 1 1 50m\nOur app is running! 
Now we need to make sure we can access it from the outside as well…\nCreate a “Service” # The Kubernetes documentation has an excellent explanation of why you need Services, so I’ll let them tell the story:\nKubernetes Pods are mortal. They are born and when they die, they are not resurrected. ReplicationControllers in particular create and destroy Pods dynamically (e.g. when scaling up or down or when doing rolling updates). While each Pod gets its own IP address, even those IP addresses cannot be relied upon to be stable over time. This leads to a problem: if some set of Pods (let’s call them backends) provides functionality to other Pods (let’s call them frontends) inside the Kubernetes cluster, how do those frontends find out and keep track of which backends are in that set?\nSo the services logically group pods together and make sure that even when a pod goes away you don’t have to change IP addresses. A service can have a lot of different capabilities and many more configuration options, so let’s create one that is fairly simple.\nThe below `service.yaml` file simply defines the service `flogoapp` that directly binds port 8080 of the app we have deployed to port 30061 that we can access from outside of the cluster.\napiVersion: v1\nkind: Service\nmetadata:\n  name: flogoapp\n  labels:\n    app: flogoapp\nspec:\n  selector:\n    app: flogoapp\n  ports:\n  - port: 8080\n    protocol: TCP\n    nodePort: 30061\n  type: LoadBalancer\nTo create the service in Kubernetes you can simply run:\nkubectl create -f service.yaml\nWithin the kubectl cli tool, or using the dashboard, you can see the status of your services just like your deployments:\n$ kubectl get services\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nflogoapp LoadBalancer 10.0.0.110 \u0026lt;pending\u0026gt; 8989:30061/TCP 1h\nkubernetes ClusterIP 10.0.0.1 \u0026lt;none\u0026gt; 443/TCP 1d\nAnd that takes care of exposing the app outside of the cluster as well. So we have one final task!\nAccess your app! # Accessing the app is quite simple now. 
First we need the external IP address from the Kubernetes cluster. If you’re running minikube you can get that by running minikube ip. With cURL you can now invoke the API from the app and see the internal Flogo ID of the app.\n$ curl http://192.168.99.100:30061/helloworld\n{\u0026#34;id\u0026#34;:\u0026#34;006257ffaf5fb1e9621914dcd0203af8\u0026#34;}\nConclusion # We’ve taken a simple Flogo app, packaged it into a Docker container, and deployed it to Kubernetes. By itself Flogo is incredibly powerful and lightweight. Combining that with the power and flexibility of Kubernetes lets you run ultralight microservices on a very cool and powerful platform. If you want to try out Project Flogo, visit our web page or GitHub project.\n","date":"November 15, 2017","externalUrl":null,"permalink":"/2017/11/how-to-deploy-flogo-apps-to-kubernetes/","section":"Blog","summary":"With Project Flogo you can visually create Ultralight Edge Microservices and run them anywhere. But what if you want to run those incredibly light microservices using one of the most powerful container management platforms, Kubernetes?\nPrerequisites # As described on the Kubernetes website\nKubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.\n","title":"How To Deploy Flogo Apps To Kubernetes","type":"blog"},{"content":"SaaS has become the default way enterprises acquire software, which means any given organization has (or will have) dozens of apps that need to talk to each other. The average Marketing department uses around 30 different SaaS apps, and HR isn’t far behind. 
This webinar looks at the integration challenge that creates and when iPaaS is the right level of abstraction to solve it.\n","date":"October 5, 2017","externalUrl":null,"permalink":"/2017/10/when-is-ipaas-the-right-level-of-abstraction/","section":"Blog","summary":"SaaS has become the default way enterprises acquire software, which means any given organization has (or will have) dozens of apps that need to talk to each other. The average Marketing department uses around 30 different SaaS apps, and HR isn’t far behind. This webinar looks at the integration challenge that creates and when iPaaS is the right level of abstraction to solve it.\n","title":"When Is iPaaS The Right Level Of Abstraction?","type":"blog"},{"content":"Last month TIBCO added the ability to add custom activities to TIBCO Cloud Integration — Web Integrator (I’ll use Web Integrator going forward). The Web Integrator experience is “Powered by Project Flogo”, so when you create your own extensions for Web Integrator, and use them in every flow that you want, those activities will work with Project Flogo as well. In this blog post I’ll walk through creating a new extension that connects to IFTTT using the WebHooks service.\nWhy IFTTT? # IFTTT is a free web-based service that people use to create chains of simple conditional statements, called applets. An applet is triggered by changes that occur within other web services.\[1\]\nIFTTT also has a great feature to extend that connectivity using the WebHooks service, previously known as the Maker Channel. With easy-to-understand input and output and still delivering great value, this is a great way to show how you can create your own extensions for Web Integrator.\nSetting up IFTTT # Within IFTTT you need to activate the WebHooks service. You can do that by going to this URL and clicking the big white Connect button.\nNow go to the settings screen and check the generated URL. 
The last part of the URL (after the last /) is what IFTTT calls the WebHook key and that is used to authenticate the calls coming in. We\u0026rsquo;ll need that key later! Be sure to never tell anyone your key (and yes, I did change my key already).\nBuild a notification flow # The connector and the activity that are the end result of this blog post need to be tested with something nice. As pretty much everyone carries a mobile phone nowadays, the IFTTT flow will start with a WebHook (the this) and send a notification (the that).\nEvent name\nOne of the parameters that our connector will need, and one of the parameters that IFTTT needs to route the event to the correct applet, is the event name. In this case I’ve set it to HelloWorld.\nThe structure of your folder # The end result of this extension will have an activity and a connector and a bunch of files that make up the two. The code is available on GitHub, so you could just skip all this and check that out. The folder structure, including the files we’ll create, is:\nwi-ifttt-extension ├───activity│ └───ifttt│ ├───activity.go│ ├───activity.json│ ├───activity.module.ts│ ├───activity.ts│ ├───activity_test.go│ └───ifttt.png│└───connector └───ifttt ├───connector.json ├───connector.module.ts ├───connector.ts └───ifttt.png The root folder of the extension is called wi-ifttt-extension, which will also be the name of the repository on GitHub (this is important for when we add Travis CI into the mix). Below that is a folder to store the activities called activity and a folder to store connectors called connector. From there on each activity and each connector will have its own subfolder. 
In the case above there is one activity in the folder ifttt and one connector in the folder ifttt (different folders, same name).\nThe Web Integrator connector # Let’s take a more detailed look at the connector\n└───connector └───ifttt ├───connector.json ├───connector.module.ts ├───connector.ts └───ifttt.png A good extension obviously needs a killer icon, and as luck would have it, IFTTT has a brilliant icon available. The file ifttt.png in this folder is shown on the connector screen. The only requirement here is that it is a png file (I\u0026rsquo;d recommend looking for 128x128 pixels).\nconnector.json # The connector.json file describes what the connector is, what inputs it needs and a bunch of other metadata. It is important that the name field doesn\u0026rsquo;t contain any special characters or spaces (in my case it is wi-ext-ifttt). The category in the display section is the name of the category that will be shown for this connector. In this case the connector will be shown in the ifttt category. The ref field is a reference to a GitHub repository and this field has to be a correct URL (the repository doesn\u0026rsquo;t have to exist though). The settings describe the fields that the user will need to provide:\nname: the name of the connection so you can find it easily in your activities description: a useful description is always good, especially when you have multiple connections webhookKey: this is the key that comes from IFTTT; note that the type is password so it will be masked (shown as dots) on the user interface eventName: the event to trigger connector.ts # The connector also has a TypeScript file that covers the validation of the fields and takes care of telling Web Integrator when to save the data.\nAs you browse through the code you’ll see a validate object which, as the name implies, validates whether the connection can be saved. In our case there are two mandatory fields that need to be there, webhookKey and eventName. 
If both the webhookKey and the eventName have a value, the Connect button is enabled; otherwise it stays disabled.\nThe action object is triggered when an action occurs (e.g. a button is clicked). This part of the code handles the click of the Connect button. As all the details are already in the form, the only thing that needs to happen is to save the data. The action result is what saves the configuration data.\nIf you upload all the code for the connector and create a new connection, the screen will look something like this\nThe Web Integrator activity # Now that the connector is covered, let’s take a look at the activity\n├───activity│ └───ifttt│ ├───activity.go│ ├───activity.json│ ├───activity.module.ts│ ├───activity.ts│ ├───activity_test.go│ └───ifttt.png Just like above, a good extension still needs that killer icon. The file ifttt.png in this folder is shown on the activity.\nactivity.json # The activity.json describes the metadata of your activity and looks quite similar to the connector.json file. The important differences here are that it describes the input and output of the activity and that it has a special input type object for a field called iftttConnection. With the help of the TypeScript files we\u0026rsquo;ll go over next, this object makes sure we can pick a connection and use that in the code.\nactivity.ts # The activity.ts file handles a bit of the UI code for the activity. As you\u0026rsquo;re browsing the code you\u0026rsquo;ll see the value object which allows you to specify what types of values you can pick for a certain field. For the field iftttConnection the only allowed types are connections that are created as an iftttConnector (the connector category as specified in the connector.json must match what we specify here).\nAs with the connector, the activity.ts has a validate object as well and it can be used to validate the input of certain fields. 
For the field iftttConnection we need to check that the connection has been set and otherwise display an error message (the value already makes sure that you can only pick connections, so there is no need to validate that again).\nactivity.go # The package ifttt provides connectivity to IFTTT for TIBCO Cloud Integration — Web Integrator using the WebHooks service from IFTTT (https://ifttt.com/maker_webhooks). In the code there are five constants that I’ve used to make my code easier (and more reusable). For example\nivConnection = \u0026#34;iftttConnection\u0026#34; means I can use a field called ivConnection and at runtime that object will point to the thingy I named iftttConnection in the metadata of my activity. A lot of words to say that ivConnection is the connection object I\u0026rsquo;ll use in the code. The same applies to the other input variables (all starting with iv) and output variables (all starting with ov). The payload struct is used to describe the payload to IFTTT with a maximum of three values (this limit is set by IFTTT). The Eval method executes the activity and sends a message to IFTTT. I\u0026rsquo;ve documented the steps it takes in the code as well:\nStep 1 is to validate that the connection has been set. The connection is mandatory whereas the three values are optional, so we don’t need to check those. Step 2 is to get the connection details object Step 3 is to build the IFTTT WebHook URL. To trigger the event it will make a POST request to the URL. The URL will have the event name we specified in IFTTT and the webhook key we got from there too (both are extracted from the connection details object) Step 4 is to create the JSON payload. The data is completely optional. This content will be passed on to the Action in your Recipe in IFTTT. 
Step 5 is to send the POST message and handle any errors that might come from there Step 6 is to set the return value so the rest of the Web Integrator flow can use that too Testing your activity # As good developers we want to make sure that the code we’re writing works perfectly. To unit test the code you can create an activity_test.go file. The layout of the file is pretty straightforward: the TestEval method is the unit test for the Eval function and sends a message to IFTTT (make sure that you have updated the values when unit testing the code).\nOnce you’ve updated the variables, simply run go test from the ifttt subfolder of activity and it will tell you whether your code works or not.\nAdding Travis CI to the mix # The article I wrote earlier on Continuously Testing Flogo activities is also valid for Web Integrator. In fact, you should see some striking similarities between the folder layout I had there and the one I have in this post. Check out the section on Travis CI to add CI/CD to your new activities for Web Integrator.\nUsing it in a Web Integrator app # To use your extension in a Web Integrator app, you need to upload it to TIBCO Cloud Integration. Simply create a zip file of the root folder (for example wi-ifttt-extension), click on Extensions in the header and upload the zip.\nNow you can create a connection to IFTTT from the Connections menu and use the activity in your new app.\nConclusion # It isn’t too hard to build your own activities and connectors for Web Integrator. As this is my first article here, I’d love to get your feedback!\n","date":"September 11, 2017","externalUrl":null,"permalink":"/2017/09/how-to-build-extensions-for-flogo-apps-in-tibco-cloud-integration/","section":"Blog","summary":"Last month TIBCO added the ability to add custom activities to TIBCO Cloud Integration — Web Integrator (I’ll use Web Integrator going forward). 
The Web Integrator experience is “Powered by Project Flogo”, so when you create your own extensions for Web Integrator, and use them in every flow that you want, those activities will work with Project Flogo as well. In this blog post I’ll walk through creating a new extension that connects to IFTTT using the WebHooks service.\n","title":"How To Build Extensions For Flogo Apps In TIBCO Cloud Integration","type":"blog"},{"content":"Back in 2012, the engineering team at Heroku created a set of best practices for developing and running web apps. That document, consisting of 12 important rules, became the 12 Factor App manifesto. It gained a lot of traction over the years, especially as microservices took off. Along with microservices came a wave of related practices and tools — git, DevOps, Docker, Configuration Management — that all reinforced these principles.\nThis post walks through each of the 12 factors and how they apply to Node.js apps on TIBCO Cloud Integration.\nCodebase # One codebase tracked in revision control, many deploys. Keeping your code in version control is table stakes, but it\u0026rsquo;s especially important for 12 Factor compliance. The idea: one app, one repository. Developers can work on it without worrying about breaking other code (yes, unit testing matters here). I personally prefer git-based systems like GitHub or Gogs. Shared code across services should live in its own repository and be treated as a dependency.\n\u0026ldquo;So what about the deploys?\u0026rdquo; A deploy is a single running instance of the microservice. With TIBCO Cloud Integration, each push automatically creates a new instance and you can run multiple versions in the same or separate sandboxes.\nDependencies # Explicitly declare and isolate dependencies. Most languages have a package manager that handles installing libraries at deploy time. Node.js has two main options: npm and yarn. Both work off package.json, so switching between them is possible. 
TIBCO Cloud Integration standardizes on npm.\nOne thing to be careful about: pin your dependency versions. While you can specify \u0026ldquo;at least version x.y.z\u0026rdquo;, it\u0026rsquo;s better to lock to a specific tested version. You don\u0026rsquo;t want to wake up to a new dependency version breaking your app.\nConfiguration # Store config in the environment. Config here means anything likely to change between deploys. The Visual Studio Code extension for TIBCO Cloud Integration generates a .env file for this purpose. Don\u0026rsquo;t commit that file to version control though — ask yourself: \u0026ldquo;Could I put this in a public repo without leaking credentials?\u0026rdquo; Usually not. Instead, create a .env.example with all the keys and dummy values.\nTIBCO Cloud Integration injects environment variables into the container at runtime. Using the VSCode plugin, you can add variables with the Add environment variable command. In your code, reference them with a fallback:\nvar dbuser = process.env.DB_USER || \u0026#39;defaultvalue\u0026#39;; Backing services # Treat backing services as attached resources. A backing service is anything your app depends on — an Amazon S3 bucket, an Azure SQL Server, etc. \u0026ldquo;Attached resource\u0026rdquo; means you access it through a URL. This makes local testing much easier since you don\u0026rsquo;t need an entire ecosystem running just to test one microservice. TIBCO Cloud Integration supports deploying Mock apps for API calls, and there are plenty of stub frameworks for other resources.\nThe alternative is giving every developer their own full environment with all backing services. And if you\u0026rsquo;ve hardcoded a dependency on a specific MySQL database and it needs to be replaced\u0026hellip; do you really want to work over the weekend to fix that?\nBuild, Release, Run # Strictly separate build and run stages. 
The manifesto defines three stages:\nThe build stage: turns your code into an executable The release stage: takes the executable and adds the config The run stage: takes the output from the release stage and runs it on the target environment This separation is critical for CI/CD pipelines — your code should move through environments without changes (only the config differs). This is why containerized environments stress treating containers as immutable objects. With TIBCO Cloud Integration, Node.js apps get this for free. When you push your app, you can specify a properties file that injects values into the container (see config above).\nProcesses # Execute the app as one or more stateless processes. There\u0026rsquo;s still debate about why statelessness matters, and honestly it probably traces back to how easy it was to stuff everything into a monolith. But the rule is clear: shared data (including persistent data) belongs in a backing service, not in the app itself. The reason is scalability — if your app holds state, it can\u0026rsquo;t scale horizontally without risking duplicate actions or failures. Most Node.js apps start a single process (npm start or node .), but developers still need to make sure the app itself is stateless.\nPort Bindings or Data Isolation # Depending on which version of the manifesto you\u0026rsquo;re reading, the seventh factor is either port bindings or data isolation (the latter from the NGINX team\u0026rsquo;s update). For port bindings, the original definition says it well:\nThe twelve-factor app is completely self-contained and does not rely on runtime injection of a webserver into the execution environment to create a web-facing service. The web app exports HTTP as a service by binding to a port, and listening to requests coming in on that port.\nData Isolation makes perfect sense too (and maybe should have been the 13th factor ;-)). 
Every microservice should own its data, and you should only access that data through the microservice\u0026rsquo;s API. Violating this creates tight coupling between services, which is never a good idea.\nConcurrency # Scale out via the process model. For microservices, this means you should be able to run more than one instance. Containerized deployments like TIBCO Cloud Integration give you this out of the box. That said, you can easily break this by using timers inside your processes — a timer means you can\u0026rsquo;t scale up without running duplicate work.\nDisposability # I\u0026rsquo;ve always liked the phrase \u0026ldquo;treat your containers like cattle, not like pets.\u0026rdquo; Disposability is exactly that. You should be able to kill a container and start a new one without impact, or scale up and down in response to demand, painlessly. This is another reason stateless services matter. TIBCO Cloud Integration gives you scaling with the push of a button or a simple command.\nDev/Prod parity # Keep your environments as similar as possible. Not just to minimize config changes during deployment, but to make sure your app behaves the same in staging and production. TIBCO Cloud Integration helps here with multiple sandboxes that keep the runtime environment consistent. It doesn\u0026rsquo;t handle your backing services, but having the runtime sorted is a good start :)\nLogs # A good microservice does one thing well (kind of like Linux commands — ps, grep). In a microservice environment, treat your logs as streams and send them elsewhere, unless logging is literally your microservice\u0026rsquo;s job. Most languages have solid logging frameworks. With Node.js on TCI, there\u0026rsquo;s a special logger class that matches the rest of the TCI log format. As a best practice, don\u0026rsquo;t use console.log().\nAdmin processes # Administrative tasks and management processes shouldn\u0026rsquo;t live in your app. 
Run them as one-off processes in a separate container or thread. Data migrations, for example, should be one-off commands, not part of your regular deployment.\nAs always, let me know what you think by posting a reply here or at the TIBCO Community.\n","date":"August 31, 2017","externalUrl":null,"permalink":"/2017/08/how-to-build-twelve-factor-apps-with-node.js-in-tibco-cloud-integration/","section":"Blog","summary":"Back in 2012, the engineering team at Heroku created a set of best practices for developing and running web apps. That document, consisting of 12 important rules, became the 12 Factor App manifesto. It gained a lot of traction over the years, especially as microservices took off. Along with microservices came a wave of related practices and tools — git, DevOps, Docker, Configuration Management — that all reinforced these principles.\n","title":"How To Build Twelve Factor Apps with Node.js in TIBCO Cloud Integration","type":"blog"},{"content":"Pretty much all the large cloud platforms provide not only a great visual interface to get things done, they also have a great command line interface. As much as I like a great UI when browsing the web, I tend to favor the command line when I\u0026rsquo;m focused on building things.\nThe TIBCO Cloud - Command Line Interface, or tibcli for short, has all the same features and functions that allow you to get work done through the TIBCO Cloud Integration web UI. You can update your configuration variables (tibcli app configure myApp1 prop1=\u0026quot;newval\u0026quot; prop2=\u0026quot;newval2\u0026quot;), push apps (tibcli app push) and stream log files (tibcli monitor applog --stream myApp1) to name a few actions.\nThe tibcli doesn\u0026rsquo;t yet support a lot of functionality for Node.js though, so I decided to write my own. 
tibcli-node has many of the same features as the VSCode plug-in but accessible through the command line.\nCheck out the repo and let me know your thoughts!\n","date":"August 30, 2017","externalUrl":null,"permalink":"/2017/08/what-every-node.js-developer-should-use-to-deploy-to-tibco-cloud-integration/","section":"Blog","summary":"Pretty much all the large cloud platforms provide not only a great visual interface to get things done, they also have a great command line interface. As much as I like a great UI when browsing the web, I tend to favor the command line when I’m focused on building things.\n","title":"Introducing tibcli-node for TIBCO Cloud Integration","type":"blog"},{"content":"In 2016 TIBCO announced Project Flogo as an ultra lightweight integration engine — up to 20 to 50 times lighter than Node.js and Java Dropwizard. It\u0026rsquo;s open source and easily extensible, which means you want to make sure the activities you build keep working after each check-in. The question is straightforward: how do you test your activities every time code is pushed to Git?\nDepending on where your source code lives and how public it is, you have a few options. This post covers Jenkins for a local git server and Travis-CI for GitHub repos.\nProject structure # Before we start, here\u0026rsquo;s my project layout since some of the scripts depend on it. I structure my Flogo extensions by category with separate folders for activities and triggers:\n├───\u0026lt;Repo root\u0026gt; │ └───activity | | └───\u0026lt;my-activity\u0026gt; | | |───\u0026lt;all my files\u0026gt; │ └───trigger | └───\u0026lt;my-trigger\u0026gt; | |───\u0026lt;all my files\u0026gt; A real example — my repository is called Concat:\n├───Concat │ └───activity | └───my-activity | |───activity.go | |───activity.json | |───activity_test.go Jenkins # Installing the Go Plugin # If you just installed Jenkins, Go probably wasn\u0026rsquo;t on your radar. The Go Plugin makes it easy. 
Go to Manage Jenkins -\u0026gt; Manage Plugins, search for Go Plugin on the Available tab, and select Download now and install after restart.\nAfter the restart, go to Manage Jenkins -\u0026gt; Global Tool Configuration and find the Go section. Click Go installations\u0026hellip;, give it a name (this helps you find it later), check Install automatically, and select your version. Click Apply then Save.\nConfiguring the build job # Create a New Item and select a Freestyle project. In my case, since I have a category with multiple activities, I use a parameterized project.\nI\u0026rsquo;ll assume you know how to configure source code management, so I\u0026rsquo;ll skip that part.\nIn the Build Environment section, check two boxes:\nDelete workspace before build starts: Always start with fresh code. Set up Go programming language tools: Pick the Go version you configured earlier. In the Build section add a shell command build step:\n## Go get the Project Flogo dependencies go get github.com/TIBCOSoftware/flogo-lib/... go get github.com/TIBCOSoftware/flogo-contrib/... ## Go get the test dependencies go get github.com/stretchr/testify/assert ## Find all the activities and run the test cases for them for path in ./activity/*; do [ -d \u0026#34;${path}\u0026#34; ] || continue # if not a directory, skip dirname=\u0026#34;$(basename \u0026#34;${path}\u0026#34;)\u0026#34; ## Run the test cases go test ./activity/$dirname done ## Create a release zipfile that strips out all non-required files zip -r v${BUILD_NUMBER}-${JOB_NAME}.zip ./activity/ ./connector/ If your test cases succeed, so does your build — otherwise you\u0026rsquo;ll need to tweak your code :-)\nTravis-CI # For GitHub-hosted code, Travis-CI provides continuous integration with automated testing, building, and deploying. They have a solid Getting Started guide, so I\u0026rsquo;ll skip the initial setup.\nOne requirement I had: every code update should create a new release. 
For that you need a Personal Access Token from GitHub. Don\u0026rsquo;t put that token in your repo files — add it as an Environment Variable in Travis-CI instead. Travis hides the value in logs by default. Here\u0026rsquo;s how mine looks, with a variable called TOKEN:\nThe only additional file you need in your repo is .travis.yml:\n## We don\u0026#39;t need elevated privileges sudo: false ## The language should be Go and we\u0026#39;ll use version 1.8.3 language: go go: - 1.8.3 ## The below statement skips all branches that start with a \u0026#39;v\u0026#39; (e.g. v1) so that we can have working branches that get committed. branches: except: - /^v.*/ ## Install the dependencies we need install: - go get github.com/TIBCOSoftware/flogo-lib/... - go get github.com/TIBCOSoftware/flogo-contrib/... - go get github.com/stretchr/testify/assert ## The script is the same as it was in Jenkins, though joined to be a single line script: - for path in ./activity/*; do [ -d \u0026#34;${path}\u0026#34; ] || continue; dirname=\u0026#34;$(basename \u0026#34;${path}\u0026#34;)\u0026#34;; go test ./activity/$dirname; done; zip -r release.zip ./activity/ ./connector/ ## After a successful build, we want to create a new release on GitHub in case the build was tagged. This way we can have more control over when a build is an actual release. The release will have the same name as the tag deploy: provider: releases api_key: $TOKEN file: \u0026#34;release.zip\u0026#34; skip_cleanup: true on: tags: true Wrapping up # Both Jenkins and Travis-CI make it straightforward to set up continuous testing and delivery for Flogo activities. 
Check out Project Flogo and let me know what you\u0026rsquo;ve built!\n","date":"August 23, 2017","externalUrl":null,"permalink":"/2017/08/how-to-continuously-test-flogo-activities-with-jenkins/","section":"Blog","summary":"In 2016 TIBCO announced Project Flogo as an ultra lightweight integration engine — up to 20 to 50 times lighter than Node.js and Java Dropwizard. It’s open source and easily extensible, which means you want to make sure the activities you build keep working after each check-in. The question is straightforward: how do you test your activities every time code is pushed to Git?\n","title":"How To Continuously Test Flogo Activities With Jenkins","type":"blog"},{"content":"I\u0026rsquo;ve gotten a lot of questions about using Basic Authentication with the Web Integrator in TIBCO Cloud Integration. Turns out it\u0026rsquo;s pretty straightforward.\nWhen you\u0026rsquo;re using an InvokeRESTService activity in Web Integrator the tab Input Settings has a section called Request Headers and in that table you can specify the HTTP headers you want to use. To add a header parameter, click the + button and press Enter to save your changes. For HTTP Basic Authentication you need to specify the header Authorization.\nHTTP Basic Authentication passes the word Basic followed by a space and a Base64-encoded string of the username and password. For example, Basic dXNlcjpwYXNz is the value you\u0026rsquo;d use if the username/password was user:pass. In Web Integrator you can build this expression using the mapper. The expression on the Input tab for the above scenario would be string.concat(\u0026quot;Basic \u0026quot;, string.stringToBase64(\u0026quot;user:pass\u0026quot;))\nIf you want to verify what headers your request is actually sending, RequestBin is a handy tool. RequestBin is a community project from Runscope that lets you inspect HTTP requests and debug webhook payloads. 
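If you want to sanity-check the encoded credentials themselves before wiring up the mapper, the same Base64 encoding is a one-liner in Node.js (a quick local sketch, nothing Web Integrator specific):

```javascript
// Base64-encode the username:password pair the way HTTP Basic Authentication expects.
const encoded = Buffer.from('user:pass').toString('base64');
console.log(encoded); // dXNlcjpwYXNz

// Decoding it shows the original pair again — a reminder that Basic Auth
// is encoding, not encryption, and should only travel over HTTPS.
const decoded = Buffer.from(encoded, 'base64').toString();
console.log(decoded); // user:pass
```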
You can also host RequestBin yourself following the steps on GitHub.\n","date":"August 21, 2017","externalUrl":null,"permalink":"/2017/08/how-to-add-basic-auth-to-flogo-apps-in-tibco-cloud-integration/","section":"Blog","summary":"I’ve gotten a lot of questions about using Basic Authentication with the Web Integrator in TIBCO Cloud Integration. Turns out it’s pretty straightforward.\n","title":"How To Add Basic Auth To Flogo Apps in TIBCO Cloud Integration","type":"blog"},{"content":"In the age of monolithic apps and app servers, monitoring was relatively straightforward. With microservices, you\u0026rsquo;re dealing with more servers and more services, and monitoring gets complex fast. You have options — Nagios, Zabbix, or Prometheus. My preference goes to the Greek deity that stole fire from Mount Olympus and brought it to us.\nDeity? Prometheus # From the Prometheus website:\n(\u0026hellip;) an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Since its inception in 2012, many companies and organizations have adopted Prometheus, and the project has a very active developer and user community.\nAs SoundCloud moved to microservices they needed to monitor thousands of services, and their existing monitoring had too many limitations. The Prometheus website adds:\nIt is now a standalone open source project and maintained independently of any company. To emphasize this and clarify the project\u0026rsquo;s governance structure, Prometheus joined the Cloud Native Computing Foundation in 2016 as the second hosted project after Kubernetes.\nPrometheus architecture # Prometheus was designed with 4 main requirements:\nA multi-dimensional data model, so that data can be sliced and diced at will, along dimensions like instance, service, endpoint, and method. 
Operational simplicity, so that you can spin up a monitoring server where and when you want, even on your local workstation, without setting up a distributed storage backend or reconfiguring the world. Scalable data collection and decentralized architecture, so that you can reliably monitor the many instances of your services, and independent teams can set up independent monitoring servers. Finally, a powerful query language that leverages the data model for meaningful alerting (including easy silencing) and graphing (for dashboards and for ad-hoc exploration). Image from Prometheus documentation\nThere are many good explanations of the individual components out there, so I\u0026rsquo;ll leave that to those resources.\nBuilding a Node.js app # Before we get into Prometheus configuration, we need a service to monitor. I\u0026rsquo;ll walk through what you need to add to your Node.js apps to start monitoring them with Prometheus. I\u0026rsquo;m assuming you\u0026rsquo;ve created an API spec in TIBCO Cloud Integration and exported the Node.js code. If not, the tutorial on the TIBCO Community will get you started (or just grab the code from my GitHub repo ;-)).\nAdding dependencies to package.json # We need a Prometheus client for Node.js that supports histograms, summaries, gauges, and counters. Prometheus recommends prom-client:\nnpm install --save prom-client We also want response time tracking, so we need a middleware that records response times for HTTP requests:\nnpm install --save response-time Creating a module # Following Node.js best practices, we\u0026rsquo;ll create a separate module for all the Prometheus instrumentation. 
It\u0026rsquo;s a fair amount of JavaScript, but I\u0026rsquo;ve documented it thoroughly.\n/** * Newly added requires */ var Register = require(\u0026#39;prom-client\u0026#39;).register; var Counter = require(\u0026#39;prom-client\u0026#39;).Counter; var Histogram = require(\u0026#39;prom-client\u0026#39;).Histogram; var Summary = require(\u0026#39;prom-client\u0026#39;).Summary; var ResponseTime = require(\u0026#39;response-time\u0026#39;); var Logger = require(\u0026#39;./logger\u0026#39;); /** * A Prometheus counter that counts the invocations of the different HTTP verbs * e.g. a GET and a POST call will be counted as 2 different calls */ module.exports.numOfRequests = numOfRequests = new Counter({ name: \u0026#39;numOfRequests\u0026#39;, help: \u0026#39;Number of requests made\u0026#39;, labelNames: [\u0026#39;method\u0026#39;] }); /** * A Prometheus counter that counts the invocations with different paths * e.g. /foo and /bar will be counted as 2 different paths */ module.exports.pathsTaken = pathsTaken = new Counter({ name: \u0026#39;pathsTaken\u0026#39;, help: \u0026#39;Paths taken in the app\u0026#39;, labelNames: [\u0026#39;path\u0026#39;] }); /** * A Prometheus summary to record the HTTP method, path, response code and response time */ module.exports.responses = responses = new Summary({ name: \u0026#39;responses\u0026#39;, help: \u0026#39;Response time in millis\u0026#39;, labelNames: [\u0026#39;method\u0026#39;, \u0026#39;path\u0026#39;, \u0026#39;status\u0026#39;] }); /** * This function will start the collection of metrics and should be called from within the main js file */ module.exports.startCollection = function () { Logger.log(Logger.LOG_INFO, `Starting the collection of metrics, the metrics are available on /metrics`); require(\u0026#39;prom-client\u0026#39;).collectDefaultMetrics(); }; /** * This function increments the counters that are executed on the request side of an invocation * Currently it increments the counters for numOfRequests and 
pathsTaken */ module.exports.requestCounters = function (req, res, next) { if (req.path != \u0026#39;/metrics\u0026#39;) { numOfRequests.inc({ method: req.method }); pathsTaken.inc({ path: req.path }); } next(); } /** * This function increments the counters that are executed on the response side of an invocation * Currently it updates the responses summary */ module.exports.responseCounters = ResponseTime(function (req, res, time) { if(req.url != \u0026#39;/metrics\u0026#39;) { responses.labels(req.method, req.url, res.statusCode).observe(time); } }) /** * In order to have Prometheus get the data from this app a specific URL is registered */ module.exports.injectMetricsRoute = function (App) { App.get(\u0026#39;/metrics\u0026#39;, (req, res) =\u0026gt; { res.set(\u0026#39;Content-Type\u0026#39;, Register.contentType); res.end(Register.metrics()); }); }; Adding code to server.js # In server.js you only need a few lines:\n\u0026#39;use strict\u0026#39;; ... /** * This requires the module we created in the previous step. * In my case it is stored in the util folder. */ var Prometheus = require(\u0026#39;./util/prometheus\u0026#39;); ... /** * The statements below register the counter middleware */ App.use(Prometheus.requestCounters); App.use(Prometheus.responseCounters); /** * Enable metrics endpoint */ Prometheus.injectMetricsRoute(App); /** * Enable collection of default metrics */ Prometheus.startCollection(); ... Five lines of code in server.js and your app is instrumented.\nDeploy # Push the app to TIBCO Cloud Integration using either the tibcli utility or the Visual Studio Code extension. More details in the TCI docs or in this post on the TIBCO Community. After deploying, grab the URL of your app — you\u0026rsquo;ll need it for Prometheus. For example: https://integration.cloud.tibcoapps.com/ijdc72jg2ugg2dikkkl236f2rhma6qaz.\nRunning Prometheus # There are many ways to run Prometheus, but Docker is probably the easiest. 
No installation headaches, no dependency management. You just need to tell Prometheus which endpoint to monitor via a prometheus.yml file that you bind-mount from the host.\nPrometheus needs a hostname and port for its targets, which makes monitoring apps on iPaaS/PaaS platforms a bit trickier. The metrics_path parameter per job handles this by telling Prometheus to hit a specific path on the server.\nA basic but functional prometheus.yml:\nglobal: scrape_interval: 1m scrape_timeout: 10s evaluation_interval: 1m rule_files: - /etc/prometheus/alert.rules scrape_configs: - job_name: PrometheusApp scrape_interval: 5s scrape_timeout: 5s metrics_path: /ijdc72jg2ugg2dikkkl236f2rhma6qaz/metrics scheme: https static_configs: - targets: - integration.cloud.tibcoapps.com labels: app: PrometheusApp sandbox: MyDefaultSandbox I\u0026rsquo;ve set the job_name and the label app to match my app name in TCI for easy correlation. The metrics_path contains the app URL path plus /metrics.\nStart the Docker container:\ndocker run -p 9090:9090 -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus Looking at Prometheus data # Open Prometheus at http://localhost:9090/graph and enter the expression up:\nThat checks whether your app is running. sum(numOfRequests) gives you the total request count:\nWithout sum() you get a breakdown by HTTP verb:\nReporting with Grafana # Grafana can take the data from Prometheus and present it in dashboards. You can build views showing app status, request counts, and request type breakdowns:\nA sample dashboard that shows the status, the number of requests and what kind of requests were served\nWrapping up # Prometheus lets you monitor the health and status of your Node.js apps on TIBCO Cloud Integration (or anywhere else), and Grafana gives you custom dashboards on top of that data. 
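Both tools ultimately consume Prometheus\u0026rsquo;s plain-text exposition format. As a sketch of what a scrape of /metrics actually returns for one of the counters defined earlier, here is a minimal, dependency-free helper (a hypothetical illustration of the format, not part of prom-client):\n```javascript\n// Render one labeled counter in the Prometheus text exposition format.\n// A scrape of /metrics returns one sample line per label combination.\nfunction renderCounter(name, help, samples) {\n  const lines = [`# HELP ${name} ${help}`, `# TYPE ${name} counter`];\n  for (const { labels, value } of samples) {\n    const labelStr = Object.entries(labels)\n      .map(([k, v]) =\u0026gt; `${k}=\u0026#34;${v}\u0026#34;`)\n      .join(\u0026#39;,\u0026#39;);\n    lines.push(`${name}{${labelStr}} ${value}`);\n  }\n  return lines.join(\u0026#39;\\n\u0026#39;);\n}\n\n// Example: the numOfRequests counter after seven GET calls\nconsole.log(renderCounter(\u0026#39;numOfRequests\u0026#39;, \u0026#39;Number of requests made\u0026#39;,\n  [{ labels: { method: \u0026#39;GET\u0026#39; }, value: 7 }]));\n```\nprom-client\u0026rsquo;s Register.metrics() emits this same text format for every registered metric, which is what the /metrics route serves to the scraper.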
The combination works well for any app that can expose or push metrics to Prometheus.\n","date":"August 18, 2017","externalUrl":null,"permalink":"/2017/08/how-to-monitor-your-node.js-apps-with-prometheus/","section":"Blog","summary":"In the age of monolithic apps and app servers, monitoring was relatively straightforward. With microservices, you’re dealing with more servers and more services, and monitoring gets complex fast. You have options — Nagios, Zabbix, or Prometheus. My preference goes to the Greek deity that stole fire from Mount Olympus and brought it to us.\n","title":"Monitoring Node.js Apps with Prometheus","type":"blog"},{"content":"You shouldn\u0026rsquo;t have to be a Swagger expert to design and build an API. Creating an API from scratch can be a difficult task, so what if you could do it without writing a line of code?\nThe Web Integrator in TIBCO Cloud Integration is an API-led platform that makes it straightforward to get started building REST services. Think about the message you want to receive and the response you want to send back — that\u0026rsquo;s all you need to define your API.\nWhat we\u0026rsquo;re building # A FlightBookings app that sends an email confirmation to the user requesting air travel.\nCreate the app # Like every app on TIBCO Cloud Integration, start with the orange Create button. The Web Integrator is powered by TIBCO\u0026rsquo;s OSS Project Flogo and the mascot (Flynn) is in the interface. Click on Create a Web Integrator App to get started.\nCreate the flow # Click Create a flow and select Rest Trigger. This is where you configure your API. The resource path is what your API listens on — best practice is to use the plural form of a noun (books, apps, or in our case, flightbookings). The HTTP methods supported by Web Integrator are:\nGET: Get the resource specified (e.g. get a single book or get all books) POST: Create a new resource of the object (e.g. 
create a new flightbooking) PUT: Update a resource or create a new one if it doesn\u0026rsquo;t exist yet (e.g. update an app or create a new one) DELETE: Remove the resource (e.g. delete the cookie) We\u0026rsquo;re creating new flightbookings, so the method is POST. Rather than writing JSON schema by hand, the Web Integrator lets you paste a sample message for input and output. Copy this into the input box and click Create:\n{ \u0026#34;Class\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;DepartureDate\u0026#34;: \u0026#34;2017-05-27\u0026#34;, \u0026#34;Destination\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;FirstName\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;EmailAddress\u0026#34;: \u0026#34;string\u0026#34; } Obviously you can have more fields, but this is good enough for now\nImplement logic # Now for the business logic. Click on the newly created flow and you\u0026rsquo;ll see a canvas with two tiles: ReceiveHTTPMessage (triggers the flow) and ReplyToHTTPMessage (sends a response to the client). We\u0026rsquo;ll do three things:\nAdd a log tile to log that a new message arrived Add an email tile to send a confirmation Update the ReplyToHTTPMessage to send back the data we need Adding a log tile # Click on ReplyToHTTPMessage and drag it two spaces over to make room for new tiles.\nClick the first empty tile and select Log Message. It needs an input — on the Input click message to craft the log message. There are plenty of functions available, but we\u0026rsquo;ll use a simple string concatenation. You can type in the textbox or click on parameters and functions on the right side. Or just copy this:\nstring.concat(\u0026#34;A new booking request has arrived for \u0026#34;, $TriggerData.body.FirstName) This function concatenates a string with the FirstName passed in as a parameter\nSending an email # For the email step, I covered the Send Mail activity setup in a previous post so I\u0026rsquo;ll skip the connection details. 
The email activity needs 4 inputs:\nsender: who the message comes from (e.g. some@email.com) recipients: the email address from the request. Click on recipients and search for EmailAddress inside $TriggerData (or use `$TriggerData.body.EmailAddress`) subject: the subject line (e.g. thank you for requesting a flight) message: the email body, using concat again to personalize it: string.concat(\u0026#34;Dear \u0026#34;, string.concat($TriggerData.body.FirstName, \u0026#34;. Thank you for requesting a flight. Please note we\u0026#39;ll take care of it soon!\u0026#34;)) Updating the ReplyToHTTPMessage # Instead of echoing back the input, update the Input Settings with a new response sample. We want to reply with the FirstName, a unique identifier, and the current date:\n{ \u0026#34;FirstName\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;ID\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;Date\u0026#34;: \u0026#34;string\u0026#34; } Paste that in and the Input tab fields update automatically. Map them:\nFirstName: $TriggerData.body.FirstName ID: number.random(999999) Date: datetime.currentDate() Push your app # Everything\u0026rsquo;s mapped and configured. Click the blue Push app button to deploy.\nTest it # After pushing, you\u0026rsquo;ll be back on the Apps page. Click View and Test 1 Endpoint then View API to open the test page. 
On the POST method, the only required item is the body:\n{ \u0026#34;Class\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;DepartureDate\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;Destination\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;EmailAddress\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;FirstName\u0026#34;: \u0026#34;string\u0026#34; } Be sure to replace the value of EmailAddress with an actual email address to make sure you see the result :)\nThe response body should look something like:\n{ \u0026#34;Date\u0026#34;: \u0026#34;2017-08-15+00:00\u0026#34;, \u0026#34;FirstName\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;ID\u0026#34;: 623436 } That\u0026rsquo;s it # A few steps and you\u0026rsquo;ve got a working API for flight bookings. As always let me know your thoughts on this tutorial either by commenting below or posting something on the TIBCO Community!\n","date":"August 16, 2017","externalUrl":null,"permalink":"/2017/08/the-art-of-building-rest-services-in-tibco-cloud-integation/","section":"Blog","summary":"You shouldn’t have to be a Swagger expert to design and build an API. Creating an API from scratch can be a difficult task, so what if you could do it without writing a line of code?\n","title":"Building REST Services in TIBCO Cloud Integration","type":"blog"},{"content":"In 2002 Jeff Bezos issued a mandate that would change the world forever. At the very least it brought a massive change to how data is reused on the Internet:\nAll teams will henceforth expose their data and functionality through service interfaces. Teams must communicate with each other through these interfaces. There will be no other form of inter-process communication allowed: no direct linking, no direct reads of another team\u0026rsquo;s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network. It doesn\u0026rsquo;t matter what technology they use. 
All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions. Anyone who doesn\u0026rsquo;t do this will be fired. Thank you; have a nice day! That mandate kicked off a lot of what we now call the API economy. Many enterprises have APIs that deliver data so you can focus on building value rather than figuring out how to get the data. That said, most APIs out there are documented but don\u0026rsquo;t have a swagger.json you can import directly. The Web Integrator in TIBCO Cloud Integration lets you paste sample messages from API docs and use those as the basis for invoking REST APIs.\nThis one is heavy on screenshots. I\u0026rsquo;ll use the MetaWeather API as the example.\nCreate your app # Step 1: Create an app! Step 2: Give the app a name, make it something meaningful like WeatherApp ;-) Step 3: Choose the Web Integrator (powered by Project Flogo) as the type Step 4: Create a new flow in your new Web Integrator app Step 5: Give your flow a name and select the Timer to start, after all who doesn\u0026rsquo;t like a good timer? Step 6: Click on your new flow to open the editor. For now it will have only a single tile, but when we\u0026rsquo;re done there will be a few more! Getting data from MetaWeather # Step 7: Check out the MetaWeather API Location search. The URL pattern is /api/location/search/?query Step 8: Here\u0026rsquo;s a sample response for London Step 9: Add an InvokeRESTService tile and paste the URL for the location search, without the query. So the URL will be https://www.metaweather.com/api/location/search Step 10: On the Input Settings tab, add an entry in the Query Params called query. This tells the tile to expect a new parameter Step 11: On the Input tab, give your query parameter a value. Expand queryParams, select the parameter, and type a value on the right. 
I\u0026rsquo;ll go with London. Step 12: On the Output Settings, paste the sample response from the MetaWeather API (from step 8) Step 13: Add a log activity to see data in the logs. On the Input tab we only want the first WOEid, so the value is $InvokeRESTService.responseBody.woeid[0]. Add a second API call # A single API call is useful, but let\u0026rsquo;s use the output of the first call to invoke another API. Step 14: Check out the MetaWeather API for Location. The URL pattern is /api/location/{woeid} (which explains why the WOEid was interesting just now :)) Step 15: Here\u0026rsquo;s a sample response (longer than the previous one, but it has much more data) Step 16: Add another InvokeRESTService tile and paste the URL with woeid in curly braces: https://www.metaweather.com/api/location/{woeid} Step 17: The curly braces are Path parameters. On the Input Settings tab, map the first woeid from the previous step: $InvokeRESTService.responseBody.woeid[0] Step 18: On the Output Settings, paste the sample response from step 15 Step 19: Add another log step and log the WOEid to confirm it\u0026rsquo;s still the same city. Step 20: Push the app and check the logs tab for the result! As a next step, try modifying the second log to show more information.\nThat\u0026rsquo;s it # You used to need a lot of programming to orchestrate APIs into something useful. With the Web Integrator you can use the sample messages most APIs provide to chain calls together without writing code. As always let me know your thoughts on this tutorial either by commenting below or posting something on the TIBCO Community!\n","date":"August 16, 2017","externalUrl":null,"permalink":"/2017/08/how-to-combine-apis-with-flogo-apps-in-tibco-cloud-integration/","section":"Blog","summary":"In 2002 Jeff Bezos issued a mandate that would change the world forever. 
At the very least it brought a massive change to how data is reused on the Internet:\nAll teams will henceforth expose their data and functionality through service interfaces. Teams must communicate with each other through these interfaces. There will be no other form of inter-process communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network. It doesn’t matter what technology they use. All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions. Anyone who doesn’t do this will be fired. Thank you; have a nice day! That mandate kicked off a lot of what we now call the API economy. Many enterprises have APIs that deliver data so you can focus on building value rather than figuring out how to get the data. That said, most APIs out there are documented but don’t have a swagger.json you can import directly. The Web Integrator in TIBCO Cloud Integration lets you paste sample messages from API docs and use those as the basis for invoking REST APIs.\n","title":"How To Combine APIs With Flogo Apps In TIBCO Cloud Integration","type":"blog"},{"content":"Ever wanted to capture data from a form and send it somewhere useful? Google Forms handles the collection side well, but what about routing that data to an API? That\u0026rsquo;s where TIBCO Cloud Integration comes in.\nSome assumptions # A few assumptions going in, which should cover most readers. 
If you have questions, post them at the TIBCO Community or here below.\nYou\u0026rsquo;ve seen TIBCO Cloud Integration before and at least have an active account (if not you can sign up here) You know how to create an API spec in TIBCO Cloud Integration (if not the documentation helps you to create your first API) You\u0026rsquo;re familiar with Express and Node.js You\u0026rsquo;ve got access to Google Forms What we\u0026rsquo;re building # The end result: a Node.js app that logs information posted to it from a Google Forms form. The form submits data, Google Apps Script sends it to the API, and the app logs it. Simple pipeline.\nAPI first # Most apps on TIBCO Cloud Integration start with the API spec. We\u0026rsquo;ll model the Google Form based on the API, so we define the API shape first. To create an API spec we need a name and version:\nName: GoogleForm Version: 1.0.0 We need a POST operation — I\u0026rsquo;ll call it \u0026lsquo;request\u0026rsquo;. The payload is straightforward: a name, an email address, and some feedback. You can generate the request schema from a sample message:\n{ \u0026#34;name\u0026#34;: \u0026#34;Leon\u0026#34;, \u0026#34;email\u0026#34;: \u0026#34;some@email.com\u0026#34;, \u0026#34;feedback\u0026#34;: \u0026#34;This is really a cool tutorial\u0026#34;, \u0026#34;apikey\u0026#34;: \u0026#34;12345\u0026#34; } The apikey field is a hardcoded value so the app can verify the request actually came from the Google Form rather than somewhere else. The API spec and all source code can be found here.\nBuilding the app # Generate the Node.js code from the API Modeler and implement the logic. To generate the Node.js app go to the API specs page, hover over an API specification and click Generate Node.js code.\nOnce you\u0026rsquo;ve unzipped the app, open the file \u0026lsquo;request.js\u0026rsquo; in the handlers folder and remove lines 19 to 27 (the default generated code). 
We\u0026rsquo;ll replace it with a simple check: if the apikey matches our predefined value, log the data. If not, ignore it.\n... var status = 200; // Checks if the apikey in the body matches the value specified if(\u0026#34;A985C1924862054FB7308AB6695C6C2D97DAB97B3528338DD76482EF127A6CDE\u0026#34; == req.body.apikey) { // Just log the entire message Logger.log(Logger.LOG_DEBUG, `Body parameters received ${JSON.stringify(req.body)}`); } res.status(status).send({message:\u0026#39;OK\u0026#39;}); ... Add this on line 2 of your file to make the logger accessible:\nvar Logger = require(\u0026#39;../util/logger\u0026#39;); Now push the app to TIBCO Cloud Integration using the tibcli utility (or if you use Microsoft Visual Studio Code you can use the plug-in).\nThe Google Form # Go to Google Drive and select \u0026lsquo;New -\u0026gt; Google Forms -\u0026gt; Blank form\u0026rsquo;. Add three questions with \u0026lsquo;short text answer\u0026rsquo; as the type:\nNow for the part that ties it all together — the script that sends form responses to the API. 
Go into the script editor for Google Forms, name your project, and add this code:\nfunction myOnSubmitHandler(e) { // Get the form responses and the values of the questions // note that the questions are in the array in the order // you put them on the form var formResponses = e.response.getItemResponses(); var name = formResponses[0].getResponse(); var email = formResponses[1].getResponse(); var feedback = formResponses[2].getResponse(); // Prepare JSON payload // this will also have the apikey specified in the Node.js app var data = { \u0026#39;name\u0026#39;: name, \u0026#39;email\u0026#39;: email, \u0026#39;feedback\u0026#39;: feedback, \u0026#39;apikey\u0026#39;: \u0026#39;A985C1924862054FB7308AB6695C6C2D97DAB97B3528338DD76482EF127A6CDE\u0026#39;, }; // Default HTTP options var options = { \u0026#39;method\u0026#39; : \u0026#39;post\u0026#39;, \u0026#39;contentType\u0026#39;: \u0026#39;application/json\u0026#39;, \u0026#39;payload\u0026#39; : JSON.stringify(data), \u0026#39;muteHttpExceptions\u0026#39; : true }; // Send the request to TIBCO Cloud Integration // the URL comes from the app details and at the end is the resource you send the request to (the one from the API spec) var httpResponse = UrlFetchApp.fetch(\u0026#39;https://integration.cloud.tibcoapps.com/nuii2guwbehquemdg2to7hflf7q5ebiy/request\u0026#39;, options); } Next, select \u0026lsquo;Edit\u0026rsquo; in the menu and choose \u0026lsquo;Current project\u0026rsquo;s triggers\u0026rsquo;. 
Create a new trigger with myOnSubmitHandler in the run column followed by From form and On form submit.\nYou might get an authorization prompt — that\u0026rsquo;s expected since the script needs permission to make HTTP calls.\nTesting it # Preview the form, fill in some sample data (use a real email address), and hit \u0026lsquo;Submit\u0026rsquo;.\nCheck the logs of your app in TCI — you should see a new line with the same data:\n","date":"August 11, 2017","externalUrl":null,"permalink":"/2017/08/how-to-connect-google-forms-to-apis/","section":"Blog","summary":"Ever wanted to capture data from a form and send it somewhere useful? Google Forms handles the collection side well, but what about routing that data to an API? That’s where TIBCO Cloud Integration comes in.\n","title":"How To Connect Google Forms to APIs","type":"blog"},{"content":"Sending emails is still a core part of many integration flows — error notifications, confirmations, alerts. This tutorial walks through setting up the Send Mail activity in TIBCO Cloud Integration\u0026rsquo;s Web Integrator, using Gmail as the provider.\nNote: you might need to create an \u0026lsquo;App Specific Password\u0026rsquo; if your account uses two-factor authentication\nThis one is mostly visual — screenshots do the heavy lifting.\nCreate a new app # Give the app a name # Choose the Web Integrator App type # Create a new flow # Give the flow a name and select the Timer to start # Click on the newly created flow # Add a \u0026lsquo;Send Mail\u0026rsquo; activity # Configure the properties # Note: This slide uses Gmail, though you can use any other mail provider as well\nApp specific passwords # If your email account uses two-factor authentication you need to create an \u0026lsquo;App Password\u0026rsquo; like here for Gmail\nAdd receivers # On the \u0026lsquo;Input\u0026rsquo; add the recipients (comma separated if you want more than one)\nThe subject # Add the subject line for your email\nYour message # Input the 
message body you want to send\nPush the App # That\u0026rsquo;s it # In a few seconds your app will start and send the email. Let me know your thoughts on this tutorial either by commenting below or posting something on the TIBCO Community!\n","date":"August 8, 2017","externalUrl":null,"permalink":"/2017/08/how-to-send-emails-using-flogo-apps-in-tibco-cloud-integration/","section":"Blog","summary":"Sending emails is still a core part of many integration flows — error notifications, confirmations, alerts. This tutorial walks through setting up the Send Mail activity in TIBCO Cloud Integration’s Web Integrator, using Gmail as the provider.\n","title":"How To Send Emails Using Flogo Apps in TIBCO Cloud Integration","type":"blog"},{"content":"Probably the most common version control system used by developers today is git. Whether that is a self hosted server (like Gogs), a bare repo (git init) or with GitHub, most developers intuitively choose git. I try to store all my projects in local git repos and some of them make it to GitHub, while many of them don\u0026rsquo;t. When it comes to deploying apps to TIBCO Cloud Integration, I do many updates per day so I wanted an easy way to not only store my latest source but deploy it right after.\nEnter git hooks \u0026hellip;\nGit hooks are scripts that run when a specific event occurs in your repository. You can configure what happens at commits (or right before), before patches, etc. Atlassian has a solid tutorial on the topic.\nFor my use case I created a Git post-commit script to deploy my Node.js apps directly to TIBCO Cloud Integration either using an environment variable or using a more interactive mode. The interactive mode gives me a \u0026lsquo;yes/no\u0026rsquo; option after committing my new code to the repo, in case I have a fear of commitment ;-)\nTo get started, Windows only for now, clone my repo and copy the post-commit and post-commit.ps1 files to your repo\u0026rsquo;s root .git/hooks directory. 
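The shipped pair is Windows PowerShell; as a hypothetical cross-platform variant, the interactive mode can also be sketched in Node.js (assuming tibcli is on your PATH and, as in the script above, `tibcli app push` is the deploy command):\n```javascript\n#!/usr/bin/env node\n// Hypothetical Node take on the post-commit hook: after each commit,\n// ask yes/no and push the app to TIBCO Cloud Integration with tibcli.\nconst readline = require(\u0026#39;readline\u0026#39;);\nconst { spawnSync } = require(\u0026#39;child_process\u0026#39;);\n\n// Interpret the interactive answer; kept as a small helper so it is easy to test.\nfunction shouldDeploy(answer) {\n  return [\u0026#39;y\u0026#39;, \u0026#39;yes\u0026#39;].includes(answer.trim().toLowerCase());\n}\n\nfunction main() {\n  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });\n  rl.question(\u0026#39;Deploy this commit to TIBCO Cloud Integration? (y/n) \u0026#39;, (answer) =\u0026gt; {\n    rl.close();\n    if (shouldDeploy(answer)) {\n      // Runs \u0026#34;tibcli app push\u0026#34; in the repo root and streams its output\n      const result = spawnSync(\u0026#39;tibcli\u0026#39;, [\u0026#39;app\u0026#39;, \u0026#39;push\u0026#39;], { stdio: \u0026#39;inherit\u0026#39; });\n      process.exit(result.status ?? 1);\n    }\n  });\n}\n\n// Only prompt when run from a terminal (git invokes hooks without a TTY in some setups)\nif (require.main === module \u0026amp;\u0026amp; process.stdin.isTTY) main();\n```\nSaved as .git/hooks/post-commit and made executable, git runs it after every commit, mirroring the yes/no behavior of the PowerShell version.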
It\u0026rsquo;s probably not a great idea to actually clone my repo into your own repo\u0026rsquo;s .git internals, though; clone it somewhere else and copy the two files over.\nChange the location of the tibcli executable in the post-commit.ps1 file on line 5 to where you keep the executable.\nSelect the mode you want by commenting or uncommenting the relevant mode on lines 58 and 59 of the post-commit file.\nLet me know your thoughts, concerns and suggestions!\n","date":"August 7, 2017","externalUrl":null,"permalink":"/2017/08/how-to-use-git-hooks-to-automatically-deploy-apps/","section":"Blog","summary":"Probably the most common version control system used by developers today is git. Whether that is a self hosted server (like Gogs), a bare repo (git init) or with GitHub, most developers intuitively choose git. I try to store all my projects in local git repos and some of them make it to GitHub, while many of them don’t. When it comes to deploying apps to TIBCO Cloud Integration, I do many updates per day so I wanted an easy way to not only store my latest source but deploy it right after.\n","title":"How To Use Git Hooks To Automatically Deploy Apps","type":"blog"},{"content":"Creating deployment artifacts every time you check something in to GitHub gets old fast. Jenkins can handle that for you. This post walks through using Jenkins with the tibcli utility to deploy Node.js apps to TIBCO Cloud Integration every time updates are pushed to GitHub.\nSome assumptions # A few assumptions going in, which should cover most readers. If you have questions, post them at the TIBCO Community.\nYou\u0026rsquo;re using GitHub to store your projects and you\u0026rsquo;ve got a repo for your Node.js app You\u0026rsquo;re familiar with Jenkins You\u0026rsquo;ve modeled an API Spec on TIBCO Cloud Integration You\u0026rsquo;ve downloaded the tibcli utility from TIBCO Cloud Integration and the user that will run the Jenkins server has logged in with tibcli at least once Python # Wait, Python?! Don\u0026rsquo;t worry, we\u0026rsquo;re still doing Node.js. 
The tibcli utility works in interactive mode, so we\u0026rsquo;ll use a Python script to automate the build tasks and push the app to TIBCO Cloud Integration. You\u0026rsquo;ll need the pexpect module:\nsudo pip install pexpect Copy and modify the script below. Save it somewhere memorable — I named mine server.py.\n### Imports import sys import os import shutil import zipfile import pexpect ### Constants DEPLOYMENT_PATH = \u0026#39;./deployment\u0026#39; APP_NAME = sys.argv[1] TIBCLI_PATH = sys.argv[2] APP_PUSH_CMD = \u0026#39;tibcli app push\u0026#39; PASSWORD = sys.argv[3] def replace_unicode(cmd_output): cmd_output = cmd_output.replace(\u0026#39;\\b\u0026#39;, \u0026#39;\u0026#39;) cmd_output = cmd_output.replace(\u0026#39;\\x1b\u0026#39;, \u0026#39;\u0026#39;) cmd_output = cmd_output.replace(\u0026#39;[32m\u0026#39;, \u0026#39;\u0026#39;) cmd_output = cmd_output.replace(\u0026#39;[31m\u0026#39;, \u0026#39;\u0026#39;) cmd_output = cmd_output.replace(\u0026#39;[0m\u0026#39;, \u0026#39;\u0026#39;) return cmd_output def with_interactive_login(child): cmd_output = str(child.before) child.sendline(PASSWORD) cmd_output += str(child.after) return cmd_output def zipdir(path, ziph): for root, dirs, files in os.walk(path): for file in files: ziph.write(os.path.join(root, file)) if not os.path.exists(DEPLOYMENT_PATH): os.makedirs(DEPLOYMENT_PATH) if os.path.exists(\u0026#39;./\u0026#39; + APP_NAME + \u0026#39;/node_modules\u0026#39;): shutil.rmtree(\u0026#39;./\u0026#39; + APP_NAME + \u0026#39;/node_modules\u0026#39;) shutil.copy2(\u0026#39;manifest.json\u0026#39;,DEPLOYMENT_PATH + \u0026#39;/manifest.json\u0026#39;) zipf = zipfile.ZipFile(DEPLOYMENT_PATH + \u0026#39;/app.zip\u0026#39;, \u0026#39;w\u0026#39;, zipfile.ZIP_DEFLATED) zipdir(\u0026#39;./\u0026#39; + APP_NAME, zipf) zipf.close() cmd_output = \u0026#39;\u0026#39; child = pexpect.spawn(TIBCLI_PATH + \u0026#39;/\u0026#39; + APP_PUSH_CMD,cwd=DEPLOYMENT_PATH) if child.expect([\u0026#34;Password\u0026#34;, 
pexpect.EOF, pexpect.TIMEOUT], timeout=300) == 0: cmd_output = with_interactive_login(child) else: print(\u0026#34;command time out occur\u0026#34;) cmd_output += str(child.before) cmd_output = replace_unicode(cmd_output) print(cmd_output) Getting your butler # Jenkins is a self-contained, open source automation server for building, testing, and deploying software. For this tutorial I went with the Long-term Support Release (LTS). You can download and install Jenkins for just about any OS, and there\u0026rsquo;s a Docker container available too.\nNote: Securing your Jenkins installation is definitely worth doing. There are plenty of good tutorials on that, so I\u0026rsquo;ll skip it here.\nPlugins # If you\u0026rsquo;ve installed the latest version of Jenkins there is only one additional plugin we need:\nNodeJS Plugin Install it via Manage Jenkins -\u0026gt; Manage Plugins and search on the Available tab.\nConnect to GitHub # We need Jenkins to know about your GitHub repos. Rather than polling, we\u0026rsquo;ll have GitHub tell Jenkins when updates happen via webhooks. 
You\u0026rsquo;ll need a Personal Access Token from GitHub (Settings menu) with access to:\nrepo notifications user Save that token — you\u0026rsquo;ll need it shortly.\nBack in Jenkins, go to Credentials -\u0026gt; System and add two Global credentials:\nThe first is for the GitHub Plug-in in Jenkins Kind: Secret text Scope: Global Secret: The Personal Access Token from GitHub ID: Something to remember this credential by Description: A good description is helpful to remember this credential by The second will be for the Jenkins project accessing your GitHub repo Kind: Username with password Scope: Global Username: Your GitHub username Password: The Personal Access Token from GitHub ID: Something to remember this credential by Description: A good description is helpful to remember this credential by Go to Manage Jenkins -\u0026gt; Configure System, scroll to the GitHub section, select Add GitHub server and pick the first credential from the dropdown. Hit Test Connection to verify, then Save.\nAdding Node.js to Jenkins # Go to Manage Jenkins -\u0026gt; Global Tool Configuration and scroll to the NodeJS section. The NodeJS plugin lets you install and manage different versions of Node.js for your builds. Add a new installation, keep install from nodejs.org selected, choose your version, and hit Save.\nSetting up CI and CD # Now for the actual pipeline. Add a New Item, give it a name, and select Freestyle project.\nSource Code Management # Select Git, paste your repository URL, and pick the second set of credentials you created earlier.\nBuild Triggers # Choose GitHub hook trigger for GITScm polling. This injects a webhook into your repo so every new commit triggers a build.\nBuild Environments # Check Provide Node \u0026amp; npm bin/ folder to PATH and select the Node.js version you configured.\nBuild # Add a build step Execute shell with:\n# Copy the deployment script to this folder cp /path/to/server.py . 
python server.py \u0026lt;YOUR APPNAME\u0026gt; \u0026lt;LOCATION OF TIBCLI\u0026gt; \u0026lt;YOUR PASSWORD\u0026gt; Post-build Actions # Add Archive the artifacts with files set to deployment/** to keep your build artifacts. Then add Delete workspace when build is done to clean up.\n","date":"August 4, 2017","externalUrl":null,"permalink":"/2017/08/how-to-set-up-continuous-integration-with-jenkins-and-node.js/","section":"Blog","summary":"Creating deployment artifacts every time you check something in to GitHub gets old fast. Jenkins can handle that for you. This post walks through using Jenkins with the tibcli utility to deploy Node.js apps to TIBCO Cloud Integration every time updates are pushed to GitHub.\n","title":"How To Set Up Continuous Integration with Jenkins and Node.js","type":"blog"},{"content":"I\u0026rsquo;ve just updated the Microsoft Visual Studio Code extension to help develop and deploy Node.js apps to TIBCO Cloud Integration. Apart from a whole bunch of restructuring, it now has the ability to create a new Node.js app (if you don\u0026rsquo;t want to start from an API spec) and it makes use of the .env files to work with process.env context.\nCheck out the repository on GitHub\n","date":"August 4, 2017","externalUrl":null,"permalink":"/2017/08/vscode-extension-for-tibco-cloud-integration/","section":"Blog","summary":"I’ve just updated the Microsoft Visual Studio Code extension to help develop and deploy Node.js apps to TIBCO Cloud Integration. Apart from a whole bunch of restructuring, it now has the ability to create a new Node.js app (if you don’t want to start from an API spec) and it makes use of the .env files to work with process.env context.\n","title":"VSCode Extension For TIBCO Cloud Integration","type":"blog"},{"content":"With Node.js in TIBCO Cloud Integration you have a solid toolset for building APIs. Here we\u0026rsquo;ll create a custom Express middleware that checks if the IP address of the sender matches a predefined list. 
In this tutorial we\u0026rsquo;ll use the list of TIBCO Mashery Traffic Managers as a \u0026lsquo;whitelist\u0026rsquo; (so traffic from all other IP addresses will be blocked).\nSome assumptions # A few assumptions going in, which should cover most readers. If you have questions, post them below or at the TIBCO Community.\nYou\u0026rsquo;re using the generated Node.js code from TIBCO Cloud Integration (You can check this link for more details) You\u0026rsquo;re familiar with Express and Node.js You know the Mashery IP addresses can be found at https://developer.mashery.com/docs/read/proxy_information/Archived_IP_Whitlisting_Information (if you didn\u0026rsquo;t know that, you do now :-)) Express middleware # Middleware functions in Node.js have access to the request and response objects in your Express app. From the Express docs, middleware can:\nExecute any code. Make changes to the request and the response objects. End the request-response cycle. Call the next middleware in the stack. We care about the third and fourth bullet here. If the request doesn\u0026rsquo;t come from Mashery we end the cycle. If it does, we call the next middleware in the stack.\nThe code # Our middleware needs to do one thing: check whether the request IP is a Mashery Traffic Manager IP. Three requirements:\nWe need to check both x-forwarded-for and remoteAddress so the same code works locally and in TIBCO Cloud Integration. Mashery publishes IPs in CIDR format, so we need to translate those into ranges and check for matches. Following Node.js best practices, we\u0026rsquo;ll put this in its own file. I\u0026rsquo;ve called it mashery.js and stored it in the \u0026lsquo;util\u0026rsquo; folder. \u0026#39;use strict\u0026#39;; var ip = require(\u0026#39;ip\u0026#39;); var Logger = require(\u0026#39;./logger\u0026#39;); /** * To test locally add \u0026#39;::1/32\u0026#39; or \u0026#39;127.0.0.1/32\u0026#39; to the list. 
*/ var trafficManagerIPs = [\u0026#39;64.94.14.0/27\u0026#39;, \u0026#39;64.94.228.128/28\u0026#39;, \u0026#39;216.52.39.0/24\u0026#39;, \u0026#39;216.52.244.96/27\u0026#39;, \u0026#39;216.133.249.0/24\u0026#39;, \u0026#39;23.23.79.128/25\u0026#39;, \u0026#39;107.22.159.192/28\u0026#39;, \u0026#39;54.82.131.0/25\u0026#39;, \u0026#39;75.101.137.168/32\u0026#39;, \u0026#39;75.101.142.168/32\u0026#39;, \u0026#39;75.101.146.168/32\u0026#39;, \u0026#39;75.101.141.43/32\u0026#39;, \u0026#39;75.101.129.141/32\u0026#39;, \u0026#39;174.129.251.74/32\u0026#39;, \u0026#39;174.129.251.80/32\u0026#39;, \u0026#39;50.18.151.192/28\u0026#39;, \u0026#39;50.112.119.192/28\u0026#39;, \u0026#39;54.193.255.0/25\u0026#39;, \u0026#39;204.236.130.149/32\u0026#39;, \u0026#39;204.236.130.201/32\u0026#39;, \u0026#39;204.236.130.207/32\u0026#39;, \u0026#39;176.34.239.192/28\u0026#39;, \u0026#39;54.247.111.192/26\u0026#39;, \u0026#39;54.93.255.128/27\u0026#39;, \u0026#39;54.252.79.192/27\u0026#39;]; module.exports = function (req, res, next) { var invalidMasheryIP = true; var reqIp = req.headers[\u0026#39;x-forwarded-for\u0026#39;] || req.connection.remoteAddress; for (var i = 0, len = trafficManagerIPs.length; i \u0026lt; len; i++) { if (ip.cidrSubnet(trafficManagerIPs[i]).contains(reqIp)) { invalidMasheryIP = false; next(); } } if (invalidMasheryIP) { Logger.log(Logger.LOG_WARN, `An unauthorized IP address ${reqIp} has tried to access the service`); res.status(403).end(); } }; Using it in your Node.js app # To make sure every request goes through the Mashery check first, require the new file and add an App.use line above all other middleware. Here\u0026rsquo;s what that looks like:\n\u0026#39;use strict\u0026#39;; var Http = require(\u0026#39;http\u0026#39;); var mashery = require(\u0026#39;./util/mashery\u0026#39;); ... App.use(mashery); ... Wrapping up # A few lines of code (and some copy/paste) and you can validate whether requests come from a specific set of IPs. 
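To make the check concrete, here is a small, dependency-free sketch of what `ip.cidrSubnet(cidr).contains(addr)` computes for IPv4, plus the `x-forwarded-for` handling. One subtlety the snippet above glosses over: `x-forwarded-for` can carry a comma-separated chain of addresses, in which case the original client is the first entry. The helper names (`ipToLong`, `cidrContains`, `clientIp`) are illustrative, not part of the `ip` module's API:

```javascript
// IPv4-only sketch of the CIDR containment test the `ip` module performs.
function ipToLong(addr) {
  // '64.94.14.5' -> 32-bit unsigned integer
  return addr.split('.').reduce(function (acc, octet) {
    return (acc << 8) + parseInt(octet, 10);
  }, 0) >>> 0;
}

function cidrContains(cidr, addr) {
  var parts = cidr.split('/');
  var prefix = parseInt(parts[1], 10);
  // Build the netmask; a /0 prefix matches every address.
  var mask = prefix === 0 ? 0 : (~0 << (32 - prefix)) >>> 0;
  return ((ipToLong(addr) & mask) >>> 0) === ((ipToLong(parts[0]) & mask) >>> 0);
}

function clientIp(req) {
  // x-forwarded-for can hold a chain such as 'client, proxy1, proxy2';
  // the original client is the first entry.
  var header = req.headers['x-forwarded-for'];
  return header ? header.split(',')[0].trim() : req.connection.remoteAddress;
}
```

As a side note, it is worth breaking out of the loop (or calling `return next();`) as soon as a range matches, so `next()` can never fire more than once per request.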
The only thing left is to deploy your Node.js app.\n","date":"August 3, 2017","externalUrl":null,"permalink":"/2017/08/how-to-use-express-middleware-to-filter-traffic-in-node.js/","section":"Blog","summary":"With Node.js in TIBCO Cloud Integration you have a solid toolset for building APIs. Here we’ll create a custom Express middleware that checks if the IP address of the sender matches a predefined list. In this tutorial we’ll use the list of TIBCO Mashery Traffic Managers as a ‘whitelist’ (so traffic from all other IP addresses will be blocked).\n","title":"How To Use Express Middleware To Filter Traffic In Node.js","type":"blog"},{"content":"If you are like me, the data I need to do my job exists not only in the cloud. It can be hard to get to all data sources, especially when those are on-premises and behind a firewall. I am not alone, as pretty much everyone is facing these challenges. In fact, Gartner predicted that over sixty-five percent of all integration flows will be created outside of the control of IT departments as a result of the growing number of integration related tasks that they need to take care of. Simply put, organizations today are integrating to everything. The ‘everything’ in the last sentence not only includes Software-as-a-Service applications like Salesforce.com or NetSuite, but also includes applications and services hosted in private networks and datacenters. IT departments are increasingly looking for ways to provide services to departments and lines of business by allowing them to do their own integration using tools and platforms they provide. Leveraging applications and services that are hosted in datacenters and combining that with SaaS provides the IT team with a whole new set of challenges and questions with the most important one being around security. 
‘How can I make sure that my network and my data are accessed securely?’\nThere are multiple ways to get access to your corporate network and data, but many of them require downloading and installing software. This is a task that many IT departments are not particularly fond of: when employees download and install software on their own, they can expose corporate data to the rest of the Internet. With TIBCO Cloud Integration, we want to make getting back to your data as easy as possible, without installing any additional software. With that in mind, we turned to a trusted solution that has been allowing employees to access corporate networks for years: Virtual Private Networks.\nWith the VPN capability of TIBCO Cloud Integration, you can connect to on-premises sources leveraging security infrastructure already in place in private networks and datacenters. You can either give each individual app its own VPN connection or you can design an app that bridges between cloud and on-premises. No matter which of the two you pick, there is no software to install! Just configure your VPN connection and get back to your data in a secure manner.\nSign up for a free 30-day trial of TIBCO Cloud Integration to get back to your data and leverage on-premises applications in a secure way!\n","date":"May 10, 2017","externalUrl":null,"permalink":"/2017/05/the-art-of-getting-back-to-your-data-securely/","section":"Blog","summary":"If you are like me, the data I need to do my job exists not only in the cloud. It can be hard to get to all data sources, especially when those are on-premises and behind a firewall. I am not alone, as pretty much everyone is facing these challenges. In fact, Gartner predicted that over sixty-five percent of all integration flows will be created outside of the control of IT departments as a result of the growing number of integration related tasks that they need to take care of. Simply put, organizations today are integrating to everything. 
The ‘everything’ in the last sentence not only includes Software-as-a-Service applications like Salesforce.com or NetSuite, but also includes applications and services hosted in private networks and datacenters. ","title":"The Art Of Getting Back To Your Data Securely!","type":"blog"},{"content":"The world of integration is hybrid. Not only hybrid in the sense that you have on-premise and cloud-based applications, but also hybrid in the types of people that connect systems together or build something completely new. What really doesn’t change is the fact that people want to use the tools that fit their purpose.\nThere is quite a good chance that you know Node.js. According to Techworm, it is the number 7 programming language. If you’ve ever built a Node.js app, chances are pretty good that your first app said “Hello World” every time. In fact, that might even have been your first API!\nBeginnings can be difficult, especially when you’re creating a completely new microservice without the appropriate framework, so we want to give you a head start. TIBCO Cloud Integration has always been focused on API-led integration. Now, you can generate a Node.js stub based on your API specification so the “only” thing you need to do is implement the microservice.\nAccording to the Node Foundation, Node has over 3 million users with an amazing growth rate, and npm is growing faster than any other package manager. Node.js has one of the biggest and most active communities (including lots of happy developers like myself!). That means you can reuse the Node modules from thousands of developers and put them together in amazing new ways to suit your needs.\nSpeaking of tools that are fit for purpose, with this new addition to TIBCO Cloud Integration you get full control over the design-time environment that you want to use. 
Whether that is Microsoft Visual Studio Code, Eclipse, or simply Notepad, you can unzip the generated stub (from TIBCO Cloud Integration) and start developing using the tools and the workflows that suit you best. Once you’re done, zip up the code and use the command-line interface to upload the app to TIBCO Cloud Integration.\nYou can start by modeling your API, generating a microservice with a few clicks, and implementing your microservice by focusing on the logic rather than on getting the boilerplate code right.\nSign up for a free 30-day trial of TIBCO Cloud Integration to design whatever API you want and implement it using Node.js!\n","date":"April 5, 2017","externalUrl":null,"permalink":"/2017/04/the-art-of-building-node.js-microservices-in-tibco-cloud-integration/","section":"Blog","summary":"The world of integration is hybrid. Not only hybrid in the sense that you have on-premise and cloud-based applications, but also hybrid in the types of people that connect systems together or build something completely new. What really doesn’t change is the fact that people want to use the tools that fit their purpose.\nThere is quite a good chance that you know Node.js. According to Techworm, it is the number 7 programming language. If you’ve ever built a Node.js app, chances are pretty good that your first app said “Hello World” every time. 
In fact, that might even have been your first API!\n","title":"The Art Of Building Node.js Microservices in TIBCO Cloud Integration","type":"blog"},{"content":"Integration is red (it is my heart, after all), clouds are blue, interconnect everything and I’ll 💙 you!\nWith the theme of TIBCO NOW this year being “Digital Smarter”, I wanted to see if I could build the ultimate Valentine’s Day API using our own technology while considering the requirements that might impose on one’s choice of tech.\nValentine’s Day is traditionally the holiday where people receive cards from their significant others and secret admirers and is also a great day to have a first date. What to do on a first date? Catch a movie. According to research, one in ten people would ask someone out based on their movie preferences, so having the ability to connect to different film APIs could make or break that first date.\nAnother important aspect of Valentine’s Day and first dates? Candy. In fact, we collectively spend more than $1.7 billion on candy, and in the US, over 36 million boxes of chocolate are bought or shipped from stores or online. With chocolate, we need to not only interact with APIs, but also need to orchestrate calls to make sure the chocolate is bought first and shipped afterward. I can import all of these APIs into TIBCO Business Studio—Cloud Edition, wire them together, and add some logic – without writing a single line of code.\nNow I have multiple APIs that I’m using and there is a bit of business logic in there to decide which API to call in each given situation. For example, if someone chooses Netflix over an in-person movie, they might want the chocolate to be delivered to their home. Now I need to model the specification of what I want to expose to my users, as that will be different than the APIs I’ve been using. 
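The chocolate requirement above (buy first, ship only after the purchase succeeds) is a small example of sequential orchestration. Here is a sketch of that ordering in Node.js; `buyChocolate`, `shipChocolate`, and their payloads are all hypothetical stand-ins for real candy APIs, and in TIBCO Business Studio this wiring is modeled visually rather than in code:

```javascript
// Hypothetical stand-ins for real candy APIs; each records its call so the
// ordering is observable.
var callLog = [];

function buyChocolate(box) {
  callLog.push('buy');
  return Promise.resolve({ orderId: 'order-1', box: box });
}

function shipChocolate(order, address) {
  callLog.push('ship');
  return Promise.resolve({ orderId: order.orderId, shippedTo: address });
}

// The orchestration: shipping starts only after the purchase has resolved.
function orderChocolate(box, address) {
  return buyChocolate(box).then(function (order) {
    return shipChocolate(order, address);
  });
}
```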
Using the API Modeler of TIBCO Cloud Integration, I don’t have to worry about the syntax as I can see what the API will look like using the visual modeling capability.\nEvery year, there are about one billion Valentine’s Day cards sent to sweethearts, lovers, children, and teachers (in fact, teachers receive the most Valentine’s Day cards!). Apart from Christmas, Valentine’s Day is the largest card-sending time of the year. A further requirement of our ultimate Valentine’s Day API is to be able to handle the one billion API calls coming in from Valentine’s cards. Luckily, with TIBCO Mashery, we’re absolutely able to handle that traffic!\nSo, my ultimate Valentine’s Day API is able to scale and handle high levels of traffic, orchestrates the use of other services, and is designed to my specifications without writing any code.\nWant to build your own ultimate Valentine’s Day API (or any awesome API, for that matter)? Sign up for a free 30-day trial of TIBCO Cloud Integration to design and quickly build whatever API your heart desires. Join us for our upcoming webinars on the Spectrum of Integration, which will be available instantly on-demand after the live broadcast and demonstrate powerful tools to tackle cloud integration and cloud connection. Our first webinar, Connecting Cloud Services, is already available to watch, and our next webinar, How to Get Started with Containers and Hybrid Cloud, is coming up on 2/23. 
Register today!\n","date":"February 14, 2017","externalUrl":null,"permalink":"/2017/02/the-secret-of-the-ultimate-valentines-api/","section":"Blog","summary":"Integration is red (it is my heart, after all), clouds are blue, interconnect everything and I’ll 💙 you!\nWith the theme of TIBCO NOW this year being “Digital Smarter”, I wanted to see if I could build the ultimate Valentine’s Day API using our own technology while considering the requirements that might impose on one’s choice of tech.\nValentine’s Day is traditionally the holiday where people receive cards from their significant others and secret admirers and is also a great day to have a first date. What to do on a first date? Catch a movie. According to research, one in ten people would ask someone out based on their movie preferences, so having the ability to connect to different film APIs could make or break that first date.\n","title":"The Secret Of The Ultimate Valentine’s API","type":"blog"},{"content":"To successfully compete or even survive in today’s ever-changing business climate, organizations need to become more agile. They need to respond to customer expectations and market changes more quickly. Companies are doing this by using and building APIs. APIs are the model for quickly building and growing successful businesses. The Internet has transformed from a network of informational web pages to an ecosystem of APIs and applications that work together to empower new applications, new businesses, new ways of working together, and new business opportunities.\nWhile I was preparing my presentation for Cloud Expo 2016, one of my friends at TIBCO mentioned that APIs have been around forever. Whether it is a CORBA interface, a Java class that exposes methods, or the JSON messages that traverse the HTTP methods, they are all Application Programming Interfaces (APIs). The big difference is that today, you don’t need a degree in computer science to be able to understand them. 
That made sense to me, but while it doesn’t take a computer science degree to understand APIs, most tools do require you to have one to create them.\nCreating an API from scratch can be a difficult task, as you need to understand the semantics and structure of the modeling language and, quite frequently, you need a text editor to build the model. Creating the model of an API first eliminates the sequential-step development that often leads to long, drawn-out development projects. With the API Modeler in TIBCO Cloud Integration we’ve taken a visual approach to creating API contracts on the web. The graphical modeling interface allows for the creation of an Open API (formerly known as Swagger) contract for the API, without writing a line of code. Eliminating the need to write code is important as the market shifts development responsibility from IT to departments and lines of business.\nWith the API modeling capability in TIBCO Cloud Integration you can generate the skeleton of your API in a few clicks, where the only required information is the name and the version of your API contract. Adding new resources is as easy as clicking the large orange button and giving the new resource a name.\nAPIs also have inputs and outputs, and when you’re designing an API you usually have a pretty good idea of how they’ll look. Validating those inputs and outputs with JSON schemas should also be easy to do, right? With the API modeling capability in TIBCO Cloud Integration you can take those inputs and outputs and have TCI create the JSON schema for you! When you generate a mock application from the specification and view the API contract, you can see all the messages defined in a way that doesn’t require a degree to understand or create them! With TIBCO Cloud Integration, you have a zero-code way to design and create APIs. 
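To illustrate the idea of deriving a schema from a sample payload, here is a sketch of schema inference. This is only the concept, not how TCI actually implements it; `inferSchema` is a hypothetical helper, and a real generator would also handle formats, required fields, and references:

```javascript
// Hypothetical sketch: walk a sample payload and emit a minimal
// JSON-schema-like structure describing it.
function inferSchema(sample) {
  if (Array.isArray(sample)) {
    // Assume homogeneous arrays and infer the item schema from the first element.
    return { type: 'array', items: sample.length ? inferSchema(sample[0]) : {} };
  }
  if (sample !== null && typeof sample === 'object') {
    var properties = {};
    Object.keys(sample).forEach(function (key) {
      properties[key] = inferSchema(sample[key]);
    });
    return { type: 'object', properties: properties };
  }
  // Primitives: typeof already matches the JSON schema names for
  // 'string', 'number', and 'boolean'.
  return { type: sample === null ? 'null' : typeof sample };
}
```

For example, `inferSchema({ name: 'Ada', age: 36 })` yields an object schema with a string `name` property and a number `age` property.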
Test out a free TCI trial today.\n","date":"December 14, 2016","externalUrl":null,"permalink":"/2016/12/how-to-create-an-api-without-writing-any-code/","section":"Blog","summary":"To successfully compete or even survive in today’s ever-changing business climate, organizations need to become more agile. They need to respond to customer expectations and market changes more quickly. Companies are doing this by using and building APIs. APIs are the model for quickly building and growing successful businesses. The Internet has transformed from a network of informational web pages to an ecosystem of APIs and applications that work together to empower new applications, new businesses, new ways of working together, and new business opportunities.\n","title":"How To Create An API Without Writing Any Code","type":"blog"},{"content":"Companies must find a way to join both paths and view the transition to digital as a unified journey, with the end goal clearly defined, then utilize APIs to help them get there faster. The question then becomes, how can companies and developers leverage ESBs, APIs, and a Fast Data platform to cultivate innovation?\nIn my session at 19th Cloud Expo (Nov 2016), I explored this topic further, highlighting specific use cases and the true value that can be gained from the cloud and APIs in this quest\n","date":"November 1, 2016","externalUrl":null,"permalink":"/2016/11/cloudexpo-2016-the-road-to-a-cloud-first-enterprise/","section":"Blog","summary":"Companies must find a way to join both paths and view the transition to digital as a unified journey, with the end goal clearly defined, then utilize APIs to help them get there faster. The question then becomes, how can companies and developers leverage ESBs, APIs, and a Fast Data platform to cultivate innovation?\n","title":"CloudExpo 2016 - The Road to a Cloud First Enterprise","type":"blog"}]