In this article, we will build a CI/CD pipeline with the AWS Cloud Development Kit (CDK) and debug an error in it using Dashbird's observability tool.
In 2021, continuous integration and continuous delivery, or CI/CD for short, should be part of every modern software development process. It helps deliver new features and bug fixes much faster. If code doesn't sit for days or weeks before it gets deployed, there is a higher chance that a developer still remembers what they were thinking when they wrote it. And with good code quality tools, you can prevent problems before buggy code gets deployed anywhere.
API-based web or mobile apps are a well-known example of serverless systems, but CI/CD pipelines can include serverless technology too! AWS offers managed services to build serverless CI/CD pipelines in the cloud, so you only pay for what you use.
This tutorial requires:
The project with all the code needed already exists on GitHub. You just have to fork it into your own account. This is required to give AWS CodePipeline access to your GitHub hooks.
To fork the repository, open it in your browser and click the "Fork" button on the top right.
The next step is to initialize the project on your development machine. For this, you can simply clone your own fork.
$ git clone
And finally, you need to update credentials.json so the project works with your own fork and AWS account.
- GITHUB_USERNAME is the name of the GitHub account you forked the project to.
- dashbird-cicd-example if you didn't change the repository name.
- AWS_ACCOUNT_ID is the ID/number of your AWS account.
- AWS_REGION is the AWS region you want to deploy this project to later.
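Filled in with example values, credentials.json could look like the sketch below. The exact key names come from the repository's own file; the repository key shown here (GITHUB_REPOSITORY) and all values are placeholders, so check your fork for the real names:

```json
{
  "GITHUB_USERNAME": "your-github-username",
  "GITHUB_REPOSITORY": "dashbird-cicd-example",
  "AWS_ACCOUNT_ID": "123456789012",
  "AWS_REGION": "eu-west-1"
}
```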
You also need a personal access token so that the pipeline can access your GitHub repository.
The GitHub docs explain how to get that token. The token needs the admin:repo_hook scope. In turn, the AWS docs explain how to add the token as a secret to AWS Secrets Manager. The token should be a plain text secret with the name
If everything is set up correctly, the project can be deployed with just two commands: one to bootstrap the CDK in your AWS account, and one to deploy the project.
But let's go over the project before we deploy, so you get a sense of what will happen.
In short, this project is basically two systems in one. One is the CI/CD pipeline that listens to our repository changes and executes pipeline steps when pushes happen. The other one is the actual application we want to deploy.
- bin/pipeline.js contains the entry point for the CDK deployment. It loads the pipeline stack.
- lib/pipeline-stack.js contains the infrastructure code for the CI/CD pipeline. It will be deployed from the command line.
- lib/webservice/stage.js contains the glue code between the pipeline and the application stack. The pipeline uses it for one or multiple deployments of our application.
- lib/webservice/stack.js contains the infrastructure code for the actual application; in this case, an API Gateway with one route that executes a Lambda function. This stack will be deployed by our pipeline in the cloud (and to the cloud) whenever a push happens.
- lib/webservice/lambda/handler.js contains the actual application code for the Lambda function. In this case, it contains an error to illustrate Dashbird monitoring.
The nice thing about CDK pipelines is that changes to our application code, to our application infrastructure, and our pipeline infrastructure are all handled by that same pipeline.
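To give a sense of what such a pipeline definition looks like, here is a minimal sketch in the style of lib/pipeline-stack.js, assuming the aws-cdk-lib v2 pipelines API. The repository may use a different CDK version, and the class names, repository string, and branch below are placeholders:

```javascript
// Hypothetical sketch of a self-mutating CDK pipeline stack.
// All names (PipelineStack, WebServiceStage, repository, branch) are
// assumptions; check the repository for the real code.
const { Stack } = require('aws-cdk-lib');
const { CodePipeline, CodePipelineSource, ShellStep } = require('aws-cdk-lib/pipelines');
const { WebServiceStage } = require('./webservice/stage'); // hypothetical path

class PipelineStack extends Stack {
  constructor(scope, id, props) {
    super(scope, id, props);

    const pipeline = new CodePipeline(this, 'Pipeline', {
      // The synth step checks out the fork and synthesizes the CDK app.
      // By default, CodePipelineSource.gitHub reads the personal access
      // token from a Secrets Manager secret.
      synth: new ShellStep('Synth', {
        input: CodePipelineSource.gitHub('GITHUB_USERNAME/dashbird-cicd-example', 'main'),
        commands: ['npm ci', 'npx cdk synth'],
      }),
    });

    // Each stage deploys the application stack; pushing to the repository
    // re-runs the pipeline, which also updates the pipeline itself.
    pipeline.addStage(new WebServiceStage(this, 'Prod', props));
  }
}

module.exports = { PipelineStack };
```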
Let's deploy the project. If you haven't used the CDK before with the account and region combination you added to credentials.json, you need to bootstrap the CDK first with the following command:
$ cdk bootstrap \
--profile account1-profile \
--cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
aws://AWS_ACCOUNT_ID/AWS_REGION
Replace AWS_ACCOUNT_ID and AWS_REGION with the values from your credentials.json.
If the bootstrap worked correctly, you can now deploy the project with this CDK command:
$ cdk deploy
This can take a few minutes.
As mentioned before, this will deploy the CI/CD pipeline and not the application. After the pipeline is up and running, it will pull the repository and deploy our API Gateway and Lambda-based application.
If you open the AWS console and navigate to CodePipeline (in the category Developer Tools and your chosen AWS region), you can see the pipeline working on the deployment. The last step will fail!
Now that the CI/CD pipeline is up and running, we can finally check whether our application works correctly. If you open the pipeline in the AWS console and scroll down to the bottom, you should see the error below.
Figure 1: AWS CodePipeline error
If you followed the Dashbird getting started guide, all your Lambda functions will be monitored automatically, even the one that is freshly deployed by our pipeline. This means you can find that error right in the Dashbird dashboard; it should look like this:
Figure 2: Lambda function error in Dashbird
If you have more than one Lambda function, you can find it by its generated name, which consists of:
- The stage name
- The stack name
- The Lambda function's resource name
- A unique ID
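As a purely illustrative sketch of what such a name looks like (the real separators and suffix are generated by CloudFormation and may differ), the parts are assembled roughly like this:

```javascript
// Hypothetical illustration of how the generated Lambda name is composed.
// All values below are placeholders, not real deployment output.
const stageName = 'Prod';
const stackName = 'WebServiceStack';
const resourceName = 'ApiHandler';
const uniqueId = 'A1B2C3D4E5F6'; // placeholder for the generated suffix
const functionName = [stageName, stackName, resourceName, uniqueId].join('-');
console.log(functionName); // Prod-WebServiceStack-ApiHandler-A1B2C3D4E5F6
```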
If you click on this error, you should see the event that led to it. Below, you see the stack trace that Dashbird provides.
Figure 3: Dashbird stack trace
While the filename handler.js, the line number 2, and the error type are all correct, the location is different. The file was in the lib/webservice/lambda directory on our development machine, but when deployed, it ended up in the Lambda-owned /var/task directory inside the AWS cloud. Keep this in mind and give your Lambda function's code files names that help you find them later.
To fix this error, we simply remove the erroneous line from our handler.js file, commit the change, and push it to GitHub. This will trigger our CI/CD pipeline to run again, this time without the error.
Continuous delivery pipelines are crucial for modern software and should be part of your development process, and serverless technology is a good foundation for building your pipeline. With the CDK, such pipelines are easy to set up and maintain.
Because Dashbird follows observability principles, you don't have to think about monitoring explicitly when creating your systems. You will get automatic preconfigured insights into and alarms for every AWS service supported by Dashbird, even the Lambda functions that are part of your CI/CD pipeline. This way, you stay up to date on your systems' internal state and won't get any surprises in production.
This project is based on one of AWS's own example projects, created for the AWS blog. It's a bit cleaned up for easier reproduction, but mainly the same, so you can read the AWS article if you want some extra information.