
Building a Serverless Fitness Shop - Tools and Tech

·10 mins·
Building a Serverless Fitness Shop - This article is part of a series.
Part 1: This Article

If you’ve read the blog posts on CloudJourney.io before, you’ve likely come across the term “Continuous Verification”. If you haven’t, no worries. There’s a solid article from Dan Illson and Bill Shetti on The New Stack that explains it in detail. The short version: Continuous Verification is “a process of querying external system(s) and using information from the response to make decision(s) to improve the development and deployment process.”

In practice, that means putting as many automated checks as possible into your CI/CD pipelines. More checks means fewer manual tasks, which means more data to smooth out and improve your development and deployment process. The CloudJourney.io team built the ACME Fitness Shop to showcase continuous verification in a containerized world. There are deployments for Kubernetes, Docker, and AWS Fargate. In this blog series, we’ll look at how Continuous Verification works in a serverless context, and how we built the components that make up the ACME Serverless Fitness Shop.

What is the ACME Serverless Fitness Shop?
#

The ACME Serverless Fitness Shop combines two of my favorite things: serverless and fitness. The shop has seven different domains, each containing one or more serverless functions:

  • Shipment: A shipping service, because what is a shop without a way to ship your purchases? 🚚
  • Payment: A payment service, because nothing in life is really free… 💰
  • Order: An order service, because what is a shop without actual orders to be shipped? 📦
  • Cart: A cart service, because what is a shop without a cart to put stuff in? 🛒
  • Catalog: A catalog service, because what is a shop without a catalog to show off our awesome red pants? 📖
  • User: A user service, because what is a shop without users to buy our awesome red pants? 👨‍💻
  • Point-of-Sales: A point-of-sales app to sell our products in brick-and-mortar stores! 🛍️

Some of these services are event-driven, while others have an HTTP API. The API-based services use the same API specifications as their containerized counterparts, so the serverless version stays compatible with the original ACME Fitness Shop frontend.

Deciding on Data stores
#

With Functions-as-a-Service, you can’t maintain state inside the function. Once it’s done processing, it shuts down and any in-memory state is gone. Most functions need to persist data somewhere. When you go serverless for everything, there are a few options for storage:

  • AWS DynamoDB for a NoSQL database with single-digit millisecond latency at any scale
  • Amazon Aurora Serverless for a MySQL-compatible relational database
  • Amazon RDS Proxy for using AWS Lambda with traditional RDS relational databases

For the ACME Serverless Fitness Shop, most queries are simple gets and puts. We always know the data type a function needs and which keys are associated with it. There are no joins or schemas needed for referential integrity. AWS advocates for purpose-built databases, and for these access patterns, DynamoDB is the right fit. The single-digit millisecond latency is a nice bonus, but the real win is that DynamoDB is fully managed: no upgrade windows, no patching, no ops overhead.
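Those access patterns boil down to composing a partition key and a sort key for every get and put. A minimal sketch of what that might look like in Go; the `CART#` prefix and key layout here are illustrative assumptions, not the shop's actual schema:

```go
package main

import "fmt"

// itemKey builds the composite primary key for a single-table design:
// the partition key groups all items belonging to one entity, and the
// sort key identifies the item type within that partition.
func itemKey(entity, id, itemType string) (pk, sk string) {
	pk = fmt.Sprintf("%s#%s", entity, id)
	sk = itemType
	return pk, sk
}

func main() {
	// A hypothetical cart lookup: all items for cart 12345.
	pk, sk := itemKey("CART", "12345", "ITEMS")
	fmt.Println(pk, sk)
}
```

Because every query starts from a known entity and type, keys like these are all DynamoDB needs; there is no query planner or join to reason about.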

Deciding on Application integration
#

Serverless apps are event-driven, so the next decision is which service handles the events. A few options:

  • Amazon SNS for publish/subscribe style messaging
  • Amazon SQS as a managed queueing service
  • Amazon EventBridge as a serverless event bus

With SQS, receivers poll for messages and each message goes to a single receiver. With SNS, messages are pushed to all subscribers, which is typically faster. The real difference is in the use case. Queues are great for decoupling apps and async communication. Pub/sub is better when multiple systems need to act on the same message. The ACME Serverless Fitness Shop has functions handling distinct messages asynchronously, so SQS is the natural fit.
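The functions talk to SQS through a small emitter abstraction (the handler shown later calls sqs.New() and em.Send(evt)). A sketch of what such an abstraction might look like, with an in-memory fake standing in for the real SQS-backed implementation; the interface name and shape are assumptions for illustration:

```go
package main

import "fmt"

// EventEmitter abstracts the messaging backend so handlers can be
// exercised without a live SQS queue.
type EventEmitter interface {
	Send(event interface{}) error
}

// memoryEmitter is an in-memory stand-in for the SQS-backed emitter,
// recording every event it is asked to send.
type memoryEmitter struct {
	sent []interface{}
}

func (m *memoryEmitter) Send(event interface{}) error {
	m.sent = append(m.sent, event)
	return nil
}

func main() {
	var em EventEmitter = &memoryEmitter{}
	if err := em.Send("CreditCardValidated"); err != nil {
		fmt.Println("send failed:", err)
	}
}
```

Keeping the queue behind an interface like this is what makes the one-message-one-receiver model of SQS easy to test locally.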

Deciding on Compute
#

Last decision: where do the apps run? Within AWS, the main options are:

  • AWS Lambda: run code without provisioning servers, practically synonymous with serverless
  • Lambda@Edge: run Lambda functions at edge locations
  • AWS Fargate: run containers in a serverless fashion

Fargate is solid, and at re:Invent 2019 AWS added the ability to run Kubernetes pods on it. That would be the easiest path to get the ACME Fitness Shop into the cloud, but containers still incur cost even when idle. Since there’s already a Fargate and Kubernetes version, and the goal is to pay as little as possible when functions aren’t running, we went with AWS Lambda and the Go 1.x runtime.

From Microservices to Serverless
#

Moving from traditional microservices to event-driven architecture requires refactoring and rearchitecting. To show what that looks like, here’s how we changed the Payment service from an HTTP-based microservice to an SQS-based Lambda function. Two requirements for this change:

  • The service must still validate credit card payments and respond with the validation status (no change in functionality)
  • The input and output must not add or remove any fields that would alter the service’s behavior (no change to inputs or outputs)

Creating events
#

Event-driven architectures need events, and events should describe what happened. The Payment service has two: one that triggers it and one that it produces. The order service sends a “PaymentRequested” event when an order needs payment. The Payment service responds with a “CreditCardValidated” event, because that’s exactly what happened.

Keeping track of events in an event-driven system gets complicated fast. Adding metadata to each event helps. Here’s what the PaymentRequested event looks like:

{
    "metadata": {
        "domain": "Order", // Domain represents the the event came from like Payment or Order
        "source": "CLI", // Source represents the function the event came from
        "type": "PaymentRequested", // Type respresents the type of event this is
        "status": "success" // Status represents the current status of the event
    },
    "data": {
        "orderID": "12345",
        "card": {
            "Type": "Visa",
            "Number": "4222222222222",
            "ExpiryYear": 2022,
            "ExpiryMonth": 12,
            "CVV": "123"
        },
        "total": "123"
    }
}

And the CreditCardValidated event:

{
    "metadata": {
        "domain": "Payment",
        "source": "CLI",
        "type": "CreditCardValidated",
        "status": "success"
    },
    "data": {
        "success": "true",
        "status": 200,
        "message": "transaction successful",
        "amount": 123,
        "transactionID": "3f846704-af12-4ea9-a98c-8d7b37e10b54"
    }
}
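Both events share the same metadata envelope, which makes it natural to model them as Go structs with a common Metadata type. A sketch of what those types and the unmarshal helper used by the handler might look like; the field set is trimmed to the essentials, and the real payment package may differ:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Metadata is the envelope shared by every event in the shop.
type Metadata struct {
	Domain string `json:"domain"`
	Source string `json:"source"`
	Type   string `json:"type"`
	Status string `json:"status"`
}

// PaymentRequested is the event the Order service emits when an
// order needs payment.
type PaymentRequested struct {
	Metadata Metadata `json:"metadata"`
	Data     struct {
		OrderID string `json:"orderID"`
	} `json:"data"`
}

// UnmarshalPaymentRequested parses a raw SQS message body into the event.
func UnmarshalPaymentRequested(data []byte) (PaymentRequested, error) {
	var r PaymentRequested
	err := json.Unmarshal(data, &r)
	return r, err
}

func main() {
	raw := []byte(`{"metadata":{"domain":"Order","source":"CLI","type":"PaymentRequested","status":"success"},"data":{"orderID":"12345"}}`)
	evt, err := UnmarshalPaymentRequested(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(evt.Metadata.Type, evt.Data.OrderID)
}
```

Because the envelope is identical across domains, tracing an event through the system only requires reading its metadata block.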

Functional behavior
#

The Payment service does three things:

  • Receive a message from Amazon SQS
  • Validate the credit card
  • Send the validation result to Amazon SQS

Here’s the Go code (Sentry tracing removed for clarity):

package main

// removed imports for clarity

// handler handles the SQS events and returns an error if anything goes wrong.
// If no error occurs, the resulting event is sent to an SQS queue.
func handler(request events.SQSEvent) error {
	// Unmarshal the PaymentRequested event to a struct
	req, err := payment.UnmarshalPaymentRequested([]byte(request.Records[0].Body))
	if err != nil {
		return handleError("unmarshaling payment", err)
	}

	// Generate the event to emit
	evt := payment.CreditCardValidated{
		Metadata: payment.Metadata{
			Domain: payment.Domain,
			Source: "ValidateCreditCard",
			Type:   payment.CreditCardValidatedEvent,
			Status: "success",
		},
		Data: payment.PaymentData{
			Success:       true,
			Status:        http.StatusOK,
			Message:       payment.DefaultSuccessMessage,
			Amount:        req.Data.Total,
			OrderID:       req.Data.OrderID,
			TransactionID: uuid.Must(uuid.NewV4()).String(),
		},
	}

	// Check whether the credit card is valid.
	// If the creditcard is not valid, update the event to emit
	// with new information
	check := validator.New()
	err = check.Creditcard(req.Data.Card)
	if err != nil {
		evt.Metadata.Status = "error"
		evt.Data.Success = false
		evt.Data.Status = http.StatusBadRequest
		evt.Data.Message = payment.DefaultErrorMessage
		evt.Data.TransactionID = "-1"
		handleError("validating creditcard", err)
	}

	// Create a new SQS EventEmitter and send the event
	em := sqs.New()
	err = em.Send(evt)
	if err != nil {
		return handleError("sending event", err)
	}

	return nil
}

// handleError takes the activity where the error occurred and the error object, and logs the error.
// The original error is returned so the handler can propagate it.
func handleError(activity string, err error) error {
	log.Printf("error %s: %s", activity, err.Error())
	return err
}

// The main method is executed by AWS Lambda and points to the handler
func main() {
	lambda.Start(handler)
}
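The validator package above is part of the shop's own code, so its internals aren't shown here. For illustration, a card number check like check.Creditcard typically starts with the Luhn checksum; a minimal, hypothetical version (the real package likely also validates expiry date and CVV):

```go
package main

import "fmt"

// luhnValid reports whether a card number passes the Luhn checksum,
// the standard first-line validity check for credit card numbers.
func luhnValid(number string) bool {
	if len(number) == 0 {
		return false
	}
	sum, double := 0, false
	// Walk the digits right to left, doubling every second one.
	for i := len(number) - 1; i >= 0; i-- {
		d := int(number[i] - '0')
		if d < 0 || d > 9 {
			return false
		}
		if double {
			d *= 2
			if d > 9 {
				d -= 9
			}
		}
		sum += d
		double = !double
	}
	return sum%10 == 0
}

func main() {
	// The Visa test number from the PaymentRequested event above.
	fmt.Println(luhnValid("4222222222222"))
}
```

Keeping validation in a pure function like this is what lets the handler degrade gracefully: on failure it still emits a CreditCardValidated event, just with an error status.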

Infrastructure as Code
#

Continuous Integration, Continuous Delivery, and Continuous Verification all depend on automating as much as possible so developers and engineers can focus on building business value. That includes creating infrastructure in the pipeline, which means Infrastructure as Code. Options include:

  • Terraform: write HCL to define infrastructure
  • Serverless Framework: one of the first tools to simplify building and deploying functions
  • AWS CloudFormation (and SAM): the AWS-native configuration language
  • Pulumi: an open-source IaC tool that works across clouds

I wanted a tool without a custom DSL. I’m not a YAML expert, and I enjoy writing Go. If I can keep my entire toolset Go-based, that’s ideal. This is where Pulumi fits. It lets me use the Go toolchain while deploying to Amazon Web Services and leveraging the full AWS ecosystem. All the services, the DynamoDB table, and the SQS queues are deployed using Pulumi. Here’s how you create a DynamoDB table with the Pulumi Go SDK (tags removed for clarity; full code on GitHub):

package main

import (
	"fmt"

	"github.com/pulumi/pulumi-aws/sdk/go/aws/dynamodb"
	"github.com/pulumi/pulumi/sdk/go/pulumi"
	"github.com/pulumi/pulumi/sdk/go/pulumi/config"
)

// DynamoConfig contains the key-value pairs for the configuration of Amazon DynamoDB in this stack
type DynamoConfig struct {
	// Controls how you are charged for read and write throughput and how you manage capacity
	BillingMode pulumi.String `json:"billingmode"`

	// The number of write units for this table
	WriteCapacity pulumi.Int `json:"writecapacity"`

	// The number of read units for this table
	ReadCapacity pulumi.Int `json:"readcapacity"`
}

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Read the configuration data from Pulumi.<stack>.yaml
		conf := config.New(ctx, "awsconfig")

		// Create a new DynamoConfig object with the data from the configuration
		var dynamoConfig DynamoConfig
		conf.RequireObject("dynamodb", &dynamoConfig)

		// The table attributes represent a list of attributes that describe the key schema for the table and indexes
		tableAttributeInput := []dynamodb.TableAttributeInput{
			dynamodb.TableAttributeArgs{
				Name: pulumi.String("PK"),
				Type: pulumi.String("S"),
			}, dynamodb.TableAttributeArgs{
				Name: pulumi.String("SK"),
				Type: pulumi.String("S"),
			},
		}

		// The set of arguments for constructing an Amazon DynamoDB Table resource
		tableArgs := &dynamodb.TableArgs{
			Attributes:    dynamodb.TableAttributeArray(tableAttributeInput),
			BillingMode:   pulumi.StringPtrInput(dynamoConfig.BillingMode),
			HashKey:       pulumi.String("PK"),
			RangeKey:      pulumi.String("SK"),
			Name:          pulumi.String(fmt.Sprintf("%s-%s", ctx.Stack(), ctx.Project())),
			ReadCapacity:  dynamoConfig.ReadCapacity,
			WriteCapacity: dynamoConfig.WriteCapacity,
		}

		// NewTable registers a new resource with the given unique name, arguments, and options
		table, err := dynamodb.NewTable(ctx, fmt.Sprintf("%s-%s", ctx.Stack(), ctx.Project()), tableArgs)
		if err != nil {
			return err
		}

		// Export the ARN and Name of the table
		ctx.Export("Table::Arn", table.Arn)
		ctx.Export("Table::Name", table.Name)

		return nil
	})
}

Continuous Anything
#

While building out the services, I came across Stackery’s Road to Serverless Ubiquity Guide. One paragraph on developer experience stuck with me:

“But developers are human beings, too—and their experience of these tools and technologies is extremely important if we want to encourage sustainable and repeatable development practices.”

Sustainable and repeatable development practices matter regardless of whether you’re doing serverless or not. You want repeatable processes and repeatable builds. A friend introduced me to CircleCI, which has a concept of Orbs: reusable snippets of code that automate repeated processes, speed up project setup, and integrate with third-party tools. That saves a lot of work on deployment scripts. Every service, as well as the DynamoDB and SQS infrastructure, has its own CircleCI pipeline, and each pipeline is only 35 lines of configuration. Most of those lines are copied from the starter template.

Wrapping up
#

In this first part of the series, we covered the key choices:

  • A data store, DynamoDB, because it’s the right purpose-built database for the access patterns the ACME Serverless Fitness Shop needs
  • The application integration service, SQS, because it allows the functions to operate asynchronously
  • The compute resources, Lambda, for its event-driven model and cost profile
  • The Infrastructure as Code tool, Pulumi, so I can write Go to deploy my Go functions
  • The CI/CD tool, CircleCI, because Orbs keep the configuration minimal

We also walked through moving a microservice to serverless. Next up: what Continuous Verification means for serverless workloads.

Photo by Humphrey Muleba on Unsplash

