How To Build Twelve Factor Apps with Node.js in TIBCO Cloud Integration

Explore the 12 Factors of the renowned 12 Factor App manifesto by Heroku and learn how to implement them effectively for Node.js apps in TIBCO Cloud Integration. Uncover best practices for codebase, dependencies, configuration, and more.

Back in 2012, the engineering team at Heroku created a set of best practices for developing and running web apps. That document, consisting of 12 incredibly important ‘rules’, was dubbed the 12 Factor App manifesto. Over the years the document gained a lot of traction, and with the rise of microservices, building a 12 Factor App compliant app became important, as did a number of related practices and tools (like git, DevOps, Docker and Configuration Management).

In this blog post I want to dive into the 12 factors that the Heroku engineers described and show how you can make them work with Node.js apps in TIBCO Cloud Integration.

Codebase

The tagline for this first of the twelve best practices is One codebase tracked in revision control, many deploys. Keeping your code in a version control system is definitely a best practice for any development work, and most certainly important when you’re building apps that comply with the Twelve Factor App manifesto. The idea is that a single app has its own repository, so developers can work on it without worrying about breaking other code (yes, unit testing is still quite important). Personally, I like git-based version control systems (like GitHub or Gogs). If you have code that should be shared across services, which is quite common, that code should get its own repository and become a dependency of the services (like a library). I can hear you ask, “So what about the deploys?” A deploy, according to the manifesto, is a single running instance of the microservice. With TIBCO Cloud Integration each push automatically creates a new instance of a service, and you can run multiple versions in the same or separate sandboxes.
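
If the shared code lives in its own repository, npm can pull it in like any other dependency. Below is a minimal sketch of a package.json fragment; the package name, repository URL and tag are hypothetical and only meant to illustrate the idea.

"dependencies": {
  "shared-utils": "git+https://github.com/your-org/shared-utils.git#v1.2.0"
}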

Dependencies

Next up is the nightmare of every DevOps specialist, dependencies, with the line that you should Explicitly declare and isolate dependencies. The idea here is that most programming languages come with a package manager that can install packages and libraries when you deploy your service. Node.js has two main options for package management: npm and yarn. The good part is that both package managers have decided to work off the same type of file (package.json), so you can move from one to the other. With TIBCO Cloud Integration we’re standardizing on npm, though. With many dependencies being updated at the same rapid pace as the microservices that use them, you should take good care of your package.json. While you can certainly specify that a dependency should be at least version x.y.z, it is best practice to stick with a single tested version of your dependency. After all, you don’t want to wake up to a new version of a dependency breaking your app, do you?
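
As a sketch of what that looks like in package.json: the first entry pins an exact, tested version, while the caret in the second entry would let npm pull in any newer minor or patch release automatically (the packages and versions are examples only).

"dependencies": {
  "express": "4.18.2",
  "lodash": "^4.17.21"
}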

Configuration

Always store config in the environment! For the sake of definition, I like to refer to the original manifesto, which describes config as everything that is likely to change between deploys. As a best practice, the Visual Studio Code extension for TIBCO Cloud Integration generates a .env file that you can use for this. Be aware, though, that you shouldn’t store those files in your version control system. A good question to ask yourself when deciding whether something belongs in version control is: “Could I put this in a public repository without giving away credentials?” Usually that isn’t the case with .env files, so a better idea is to create a .env.example file containing all the keys (with dummy values) your app needs. With TIBCO Cloud Integration you can make use of environment variables that are injected into the container at runtime. Using the VSCode plugin you can select the command Add environment variable to create a new variable that you can use in your code. Best practice on that one? If you’ve just added a variable called DB_USER, use it in your code as:

const dbUser = process.env.DB_USER || 'defaultvalue'; // fall back to a default when the variable is not set
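
A matching .env.example could then look something like the snippet below; the keys are examples and the values are deliberately dummies, so the file is safe to commit.

DB_USER=changeme
DB_PASSWORD=changeme
DB_HOST=localhost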

Backing services

You should treat backing services as attached resources. A backing service is any service your app depends on (like an Amazon S3 bucket or an Azure SQL Server), and treating it as an attached resource means you should be able to reach it through a URL stored in the config. Following this practice makes it a lot easier to test a single microservice locally, as the developer doesn’t have to set up an entire ecosystem of services just to test one microservice. With TIBCO Cloud Integration you have the ability to deploy Mock apps to mock API calls, and there are many good stub frameworks available for other resources. The alternative would obviously be to give every developer their own engineering environment with all the backing services installed (or installable through scripts). Why should you care about this one? Let’s assume you have hardcoded your dependency on a specific MySQL database and that database needs to be replaced… Do you really want to work over the weekend to make that change?
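
As a minimal sketch of what this looks like in Node.js, assuming a hypothetical DATABASE_URL environment variable and the widely used mysql2 driver: the code never hardcodes the database location, so swapping the backing service only requires changing the config.

// The backing service is an attached resource: everything needed to reach it
// comes from the environment, so it can be swapped without touching the code.
const mysql = require('mysql2/promise');

async function getConnection() {
  // e.g. DATABASE_URL=mysql://user:secret@db.example.com:3306/orders
  return mysql.createConnection(process.env.DATABASE_URL);
}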

Build, Release, Run

The absolute requirement here is to strictly separate build and run stages. According to the manifesto the different stages are:

  • The build stage: turns your code into an executable
  • The release stage: takes the executable and adds the config
  • The run stage: takes the output from the release stage and runs it in the environment you want

From a development point of view it is incredibly important to be able to split these stages, as you want your code to move through a Continuous Integration and Continuous Deployment pipeline without any changes (the only thing that changes is the config for each environment). This is why, in containerized environments like Docker, developers stress treating containers as immutable objects. Within TIBCO Cloud Integration, your Node.js apps get this “for free”. When you push your app to the runtime, you can specify a properties file that will inject values into the container (see Configuration above).
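
One way to make the separation visible in a Node.js project is to give each stage its own command so a CI/CD pipeline can call them explicitly; the commands below are only an illustration of the idea.

npm ci        # build stage: install the exact dependencies from package-lock.json
npm test      # still part of the build stage: verify the artifact before it is released
npm start     # run stage: start the app, with the config supplied by the environment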

Processes

You must execute the app as one or more stateless processes. There is still a lot of debate as to why processes need to be stateless, and quite honestly it might have to do with the fact that it used to be incredibly easy to just put everything in your monolithic app. You should, however, put all data that is shared between instances (including persistent data) in a backing service and never in the app itself. The reason is, of course, scalability. If you keep data in your app, it can never be scaled horizontally without the risk of duplicate actions or failures. Most Node.js apps are built in such a way that they only start a single process (using the npm start or node . command), but developers must still take care to keep their apps stateless.
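
As a small sketch of the difference: a counter kept in a module-level variable only exists inside one instance, while a stateless app delegates that data to a backing service. The use of Redis and the ioredis client below is an assumption, purely for illustration; any shared backing store works.

const Redis = require('ioredis');               // assumed client, used only as an example
const redis = new Redis(process.env.REDIS_URL); // connection details come from the environment

// Anti-pattern: this counter lives in a single instance and is lost on restart.
// let visits = 0;
// function countVisit() { return ++visits; }

// Stateless version: every instance reads and writes the same backing service.
async function countVisit() {
  return redis.incr('visits');
}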

Port Bindings or Data Isolation

Depending on which version of the Twelve Factor App manifesto you’re reading, the seventh factor is either port bindings or data isolation. The former comes from the original; the latter was defined in the update the NGINX team made. For port bindings, I think the original definition is incredibly powerful, so I’ll just quote it below:

The twelve-factor app is completely self-contained and does not rely on runtime injection of a webserver into the execution environment to create a web-facing service. The web app exports HTTP as a service by binding to a port, and listening to requests coming in on that port.
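
In Node.js this is the natural way of working anyway; a minimal sketch using only the built-in http module could look like this, where the PORT variable is typically provided by the platform and 8080 is just a local fallback.

const http = require('http');

// The app exports HTTP as a service by binding to a port itself;
// no external web server is injected into the execution environment.
const server = http.createServer((req, res) => {
  res.end('Hello from a self-contained service\n');
});

server.listen(process.env.PORT || 8080);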

Data Isolation in itself makes perfect sense (and perhaps should have been 13th on the list ;-)). It states that every microservice should be responsible for its own data and you should never access the data through anything other than the API (or port) that the microservice exposes. If you violate this, you’re creating very tight couplings between microservices and that is never a good idea.

Concurrency

Concurrency means that you should be able to scale out your app via the process model. For the microservices you build, it simply means that you should be able to run more than one instance of them. Containerized deployments, like the ones you do in TIBCO Cloud Integration, give you this benefit out of the box. Having said that, you can still very easily break this “directive” by using timers to kick off work inside your app. A timer inside your process means the app can never be scaled out safely, as every instance will run the same scheduled work and you’ll end up with duplicates.
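
A short sketch of the problem: the timer below starts in every instance, so scaling out to three instances means the job runs three times. Moving the schedule to an external trigger (or a single dedicated worker) avoids that. The job itself is hypothetical.

// Hypothetical job, shown only to illustrate the anti-pattern.
function runNightlyCleanup() {
  // remove stale records in a backing service, for example
}

// Anti-pattern: every scaled-out instance starts its own timer,
// so the job runs once per instance instead of once overall.
setInterval(runNightlyCleanup, 24 * 60 * 60 * 1000);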

Disposability

While I’m not exactly sure where the phrase came from, I’ve always liked “treat your containers like cattle, not like pets”. The notion of disposability is really all about that phrase: being able to dispose of one container and start a new one without any impact, or to simply grow or shrink the number of running containers to respond to demand, should be painless. This is also why it is so important to have stateless services. Scaling is something you get for free with TIBCO Cloud Integration, with the push of a button or a simple command.
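
Disposability also implies shutting down gracefully when the platform decides to dispose of an instance. A minimal sketch using only Node.js built-ins, assuming an HTTP server like the one shown earlier: the platform typically signals the container with SIGTERM before stopping it.

const http = require('http');

const server = http.createServer((req, res) => res.end('ok\n'));
server.listen(process.env.PORT || 8080);

// On SIGTERM: stop accepting new requests, let in-flight requests finish, then exit.
process.on('SIGTERM', () => {
  server.close(() => process.exit(0));
});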

Dev/Prod parity

Keeping your different environments as similar as possible is really important, not only because you want to minimize changes to the config when deploying, but also because you want to make sure your app behaves the same in your staging environment as it does in production. With TIBCO Cloud Integration you can easily do this by using multiple sandboxes, which ensure the rest of your environment stays the same. That doesn’t take care of your backing services, but having your runtime taken care of is a good start :)

Logs

One of the best definitions I’ve heard about microservices is that a microservice should be focused on one single task, and that task is the only thing it should do (similar to Linux command line tools like ps or grep). In a microservice environment you should treat your logs as event streams and send them elsewhere, unless the task of your microservice is to log things. For most programming languages there are excellent logging frameworks available, and with Node.js on TCI we give you a special logger class to use so your output matches the rest of the logs on TCI. As an additional best practice: don’t use console.log().
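
Inside TCI the provided logger class is the way to go, but as a sketch of the underlying idea (for example when running locally) you can write structured log lines to stdout, which the platform or a log router can then treat as a stream; the field names here are just examples.

// Write one JSON object per line to stdout; whatever runs the app picks this up as a stream.
function log(level, message, fields = {}) {
  process.stdout.write(JSON.stringify({
    time: new Date().toISOString(),
    level,
    message,
    ...fields,
  }) + '\n');
}

log('info', 'order received', { orderId: '12345' });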

Admin processes

Administrative processes or management tasks shouldn’t be part of your app! You should run them as one-off processes in a separate container or as a separate process. Actions like data migration should be done as one-off commands and not be part of what you deploy.
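
A common way to keep such a task out of the normal app startup is a dedicated script that is only run on demand; the script name and location below are purely illustrative.

"scripts": {
  "start": "node .",
  "migrate": "node scripts/migrate.js"
}

The migration would then be run as npm run migrate in its own short-lived container, completely separate from the regular npm start of the app.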

As always, let me know what you think by posting a reply here or on the TIBCO Community.