Implementing Continuous Integration for a TypeScript Node.js Application Using Bitbucket Pipelines
Continuous Integration (CI) is an essential practice in modern software development. It allows teams to detect problems early, improve software quality, and streamline the build and deployment process. In this blog post, we will delve into a Bitbucket Pipelines configuration designed for a Node.js application. We'll cover the setup step by step, explaining each section of the configuration and how it contributes to an efficient CI pipeline.
Introduction to Bitbucket Pipelines
Bitbucket Pipelines is an integrated CI/CD service built into Bitbucket. It allows you to automatically build, test, and deploy your code based on a configuration file that you commit to your repository. In our example, we utilize a Docker image, cache dependencies for speed, and define multiple steps for building, testing, linting, and deployment.
Base Docker Image
```yaml
image: node:lts
```

We start by specifying the Docker image to be used for the pipeline runs. Here, `node:lts` is the Docker image for the current Long Term Support (LTS) version of Node.js. This image contains all the necessary tools and libraries to build and run a Node.js application.
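To show where this line sits, here is the overall shape of a `bitbucket-pipelines.yml` file (a minimal sketch; the real `definitions` and `pipelines` sections are covered step by step below):

```yaml
# bitbucket-pipelines.yml (minimal sketch)
image: node:lts            # every step runs inside this container

definitions:
  steps:
    # reusable steps, defined once and referenced via YAML anchors
    - step: &build-lint-test
        name: Build, Lint And Test
        script:
          - npm install
          - npm run build

pipelines:
  pull-requests:
    '**':
      - step: *build-lint-test
```

If you need reproducible builds, consider pinning a specific major version such as `node:20` instead of the floating `lts` tag, which moves whenever a new LTS is released.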
Definitions Section
The `definitions` section allows us to create reusable snippets of configuration that can be referenced later in the pipeline.
Build Artifact Step
```yaml
- step: &build-artifact
    name: Build Artifact
    caches:
      - node
    script:
      - apt-get update
      - apt-get install -y zip
      - npm install
      - npm run build
      - zip -r application.zip . -x "node_modules/*" "src/*" ".git/*"
    artifacts:
      - application.zip
```

Note that `apt-get install` needs the `-y` flag to run non-interactively in CI, and the exclusion patterns are quoted so that `zip` receives them instead of the shell expanding them first.
The `&build-artifact` anchor allows us to reference this step elsewhere in the configuration. This step performs the following actions:

- Updates the package list using `apt-get update`.
- Installs the `zip` utility, required to package the application.
- Installs the project dependencies via `npm install`.
- Builds the application with `npm run build`.
- Creates a zipped archive of the built application, excluding unnecessary directories like `node_modules`, `src`, and `.git`.
- Declares `application.zip` as an artifact, meaning it will be stored by Bitbucket Pipelines and can be used in subsequent steps.
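You can try the packaging command locally before committing the pipeline. A minimal sketch (the `demo` directory and file names are illustrative, and `zip`/`unzip` must be installed):

```shell
# Build a throwaway project layout mirroring the pipeline workspace
mkdir -p demo/src demo/dist demo/node_modules
echo '{"name":"demo"}' > demo/package.json
echo 'console.log("hi");' > demo/dist/index.js
echo 'export {};' > demo/src/index.ts
echo 'module.exports = {};' > demo/node_modules/dep.js

# Same exclusion patterns as the pipeline step (quoted so the shell
# does not expand them before zip sees them)
(cd demo && zip -r application.zip . -x "node_modules/*" "src/*" ".git/*")

# Inspect the archive: dist/ and package.json are in; sources and deps are out
unzip -l demo/application.zip
```

Listing the archive is a quick sanity check that only the built output and metadata files will be deployed.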
Build, Lint, and Test Step
```yaml
- step: &build-lint-test
    name: Build, Lint And Test
    caches:
      - node
    script:
      - npm install
      - npm run build
      - npm run lint
      - npm run test:cov
```
The `&build-lint-test` anchor defines a step for building the application, linting the code, and running tests with coverage. The `caches` directive tells Pipelines to cache the `node_modules` directory between runs to speed up the build process.
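This step assumes the project defines matching npm scripts. As a hedged example, the `scripts` section of `package.json` for a TypeScript project might look like the following (the specific tools shown here, `tsc`, ESLint, and Jest, are assumptions, not something the pipeline config mandates):

```json
{
  "scripts": {
    "build": "tsc -p tsconfig.json",
    "lint": "eslint \"src/**/*.ts\"",
    "test": "jest",
    "test:cov": "jest --coverage"
  }
}
```

Whatever tools you use, the pipeline only requires that `npm run build`, `npm run lint`, and `npm run test:cov` exist and exit non-zero on failure.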
Pipelines Section
This section defines the actual pipeline that will run on certain triggers, such as a pull request or a push to a branch.
Pull Requests Pipeline
```yaml
pull-requests:
  '**':
    - step: *build-lint-test
```
For pull requests from any source branch (the `'**'` glob matches the branch the pull request originates from), we use the `build-lint-test` step. This ensures that every pull request is built, linted, and tested before it can be merged.
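The `'**'` glob can be narrowed if you only want pull-request CI for certain branch naming schemes. A sketch (the branch prefixes are illustrative):

```yaml
pull-requests:
  feature/**:
    - step: *build-lint-test
  bugfix/**:
    - step: *build-lint-test
```

Each pattern maps to its own list of steps, so different branch families can run different checks if needed.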
Branches Pipeline
```yaml
branches:
  develop:
    - step: *build-artifact
    - step:
        name: Deploy to DEVELOPMENT
        deployment: development
        script:
          - echo "Deploying to development environment"
          - pipe: atlassian/aws-elasticbeanstalk-deploy:1.3.0
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
              APPLICATION_NAME: $AWS_APPLICATION_NAME
              ENVIRONMENT_NAME: $AWS_ENVIRONMENT_NAME
              ZIP_FILE: 'application.zip'
              S3_BUCKET: $S3_BUCKET
```
For the `develop` branch, we have two steps:

1. **Build Artifact**: We use the previously defined `build-artifact` step to create a deployable zip file of our application.
2. **Deploy to DEVELOPMENT**: This step uses an Atlassian-maintained pipe, `aws-elasticbeanstalk-deploy`, to deploy the zipped application to AWS Elastic Beanstalk. It relies on environment variables for credentials and deployment settings.
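The same pattern extends to other branches. A hedged sketch of a `main` branch deployment, assuming separate `$PROD_*` repository variables and a `production` deployment environment have been configured in Bitbucket:

```yaml
main:
  - step: *build-artifact
  - step:
      name: Deploy to PRODUCTION
      deployment: production
      trigger: manual        # require a human to start production deploys
      script:
        - pipe: atlassian/aws-elasticbeanstalk-deploy:1.3.0
          variables:
            AWS_ACCESS_KEY_ID: $PROD_AWS_ACCESS_KEY_ID
            AWS_SECRET_ACCESS_KEY: $PROD_AWS_SECRET_ACCESS_KEY
            AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
            APPLICATION_NAME: $AWS_APPLICATION_NAME
            ENVIRONMENT_NAME: $PROD_AWS_ENVIRONMENT_NAME
            ZIP_FILE: 'application.zip'
            S3_BUCKET: $S3_BUCKET
```

Marking the deploy step `trigger: manual` makes the pipeline pause until someone approves the production release, which is a common safeguard.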
Environment Variables
In this configuration, you'll come across various environment variables, such as `$AWS_ACCESS_KEY_ID` and `$AWS_APPLICATION_NAME`. These environment variables are essential for keeping sensitive information like AWS credentials and application names out of your configuration file. By storing these values as repository variables rather than committing them, you ensure they remain confidential.
Here's a breakdown of the key environment variables you'll be working with:
- `AWS_ACCESS_KEY_ID`: Your AWS access key.
- `AWS_SECRET_ACCESS_KEY`: Your AWS secret access key.
- `AWS_DEFAULT_REGION`: The AWS region code (e.g., us-east-1, us-west-2) of the region containing the AWS resource(s).
- `AWS_APPLICATION_NAME`: The name of the Elastic Beanstalk application.
- `AWS_ENVIRONMENT_NAME`: The name of the Elastic Beanstalk environment.
- `ZIP_FILE`: The application source bundle to deploy (zip, jar, war).
- `S3_BUCKET`: Bucket name used by Elastic Beanstalk to store artifacts.
Make sure to configure these as secured variables in your Bitbucket repository settings so they are masked in logs and never exposed in your configuration file.
Conclusion
By leveraging Bitbucket Pipelines and its powerful features, you can establish a robust CI/CD pipeline for your Node.js application. This example configuration is a starting point that demonstrates building, linting, testing, packaging, and deploying an application automatically. With Pipelines, teams can ensure that their main branches are always in a deployable state, and that new code is always up to the quality standards set by the project.
Customize this configuration to match your project's requirements, and you'll have a CI/CD pipeline that saves time, increases productivity, and reduces the risk of human error.