Hello:)
In this blog, we will see how to deploy an application directly to a new EC2 instance using CircleCI as the CI/CD tool. You can deploy any kind of application, but for this blog we will deploy a Node HTTP server.
Before we move forward, you should have a good understanding of CircleCI concepts such as jobs, workflows, environment variables, and SSH keys, as well as the AWS EC2 service.
Prerequisites
Before we start, we will need the AWS IAM user's "ACCESS_KEY", "SECRET_KEY", and "AWS_REGION", stored as project environment variables in CircleCI. If you want, you can create a new IAM user dedicated to CircleCI and grant it full access to the EC2 service. Once the IAM user is ready, generate a key pair, which we will use later to SSH into the newly created instance; an existing key pair works fine too. Now add the downloaded private key to the SSH keys in the project's settings in CircleCI. Once the key is successfully added, CircleCI shows you the fingerprint of that key, which we are going to use later.
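If you prefer the CLI over the AWS console, the key pair can also be created and saved locally like this (a sketch; the key pair name "CircleCI" and the output file name are my choices, not requirements, and the AWS CLI must already be configured with the IAM user's credentials):

```shell
# Create a key pair named "CircleCI" and save the private key locally.
# --query KeyMaterial extracts just the private key from the response.
aws ec2 create-key-pair \
  --key-name CircleCI \
  --query "KeyMaterial" \
  --output text > circleci.pem

# Restrict permissions so ssh will accept the key file.
chmod 400 circleci.pem
```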
Planning the CircleCI Jobs
The deployment of the Node Server can be divided into two sub-tasks or jobs:
- Build and Test
- Deploy
If your application is different, you can divide it into more steps, or if there is nothing to build and test, directly perform the Deploy step.
1. Build and Test
As it is a Node server, there is nothing to build, so we will move directly to API testing. To test the Node server in the CircleCI execution environment, we first have to install all dependencies in the environment, such as Node and npm, and then we can run our tests. In our case, we will use "jest" to test our API by running npm test; Jest will test our application against the specified endpoints.
In our Build and Test job we do not have to set up anything, as CircleCI already provides a Node execution environment, so we will use it.
So our Build and Test job will look like this:
build_and_test:
  executor: node/default
  steps:
    - checkout
    - node/install-packages:
        pkg-manager: npm
    - run:
        command: npm test
        name: Testing API
2. Deploy
To better understand the deploy job, we can break the whole job into multiple steps:
- Set up the Environment for AWS CLI in the executor.
- Create a New EC2 instance and delete the old one.
- Import the SSH key to the execution environment.
- Perform the SSH and set up the environment.
- Clone/Copy the project and Start it.
Step 1:
Setting up the AWS CLI execution environment in CircleCI is very easy, as CircleCI already provides an executor pre-configured for the AWS CLI: aws-cli/default.
Step 2:
Now, this step is going to be a little tricky, but if you understand the bash script below then it's a piece of cake.
We will use a bash script to create a new instance and delete the old one. The script makes use of two environment variables: "PREVIOUS_INSTANCE_NAME", which is optional, and "NEW_INSTANCE_NAME". Make sure you create these environment variables in CircleCI. You can create them in the project settings, as we did for the AWS credentials, or you can use job-level environment variables. In my case, I have used job-level environment variables, as they do not pose any security threat if written directly in the config file.
environment:
  PREVIOUS_INSTANCE_NAME: CircleCITest
  NEW_INSTANCE_NAME: CircleCITest
Once the instance is created, the script will export the instance IP as a bash environment variable so that we can access it in later job steps.
if [ ! -z "$PREVIOUS_INSTANCE_NAME" ]
then
  INSTANCE_ID=`aws ec2 describe-instances --filters "Name=tag:Name,Values=$PREVIOUS_INSTANCE_NAME" --query "Reservations[].Instances[].[InstanceId]" --output text`
  if [ ! -z "$INSTANCE_ID" ]
  then
    echo "Old instance ID: $INSTANCE_ID"
    aws ec2 terminate-instances --instance-ids $INSTANCE_ID
    echo "Terminated the previous instance"
  else
    echo "Did not find any instance with the provided name"
  fi
else
  echo "Previous instance name not provided, so moving forward"
fi
if [ ! -z "$NEW_INSTANCE_NAME" ]
then
  NEW_INSTANCE_ID=`aws ec2 run-instances --image-id ami-07ffb2f4d65357b42 --count 1 --instance-type t2.micro --key-name CircleCI --security-group-ids sg-0791c0115b3a5100a --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=$NEW_INSTANCE_NAME}]" --query "Instances[].[InstanceId]" --output text`
  echo "New instance ID: $NEW_INSTANCE_ID"
  echo "Waiting for the new instance to start..."
  aws ec2 wait instance-running --instance-ids $NEW_INSTANCE_ID
  echo "Instance successfully started"
  NEW_INSTANCE_IP=`aws ec2 describe-instances --instance-ids $NEW_INSTANCE_ID --query "Reservations[].Instances[].NetworkInterfaces[].Association.PublicIp" --output text`
  echo "New instance IP: $NEW_INSTANCE_IP"
  echo "export NEW_INSTANCE_IP=$NEW_INSTANCE_IP" >> $BASH_ENV
else
  echo "New instance name is required!"
fi
There are a few points I want to highlight in the above script:
- In the aws ec2 run-instances command, I have passed the argument --key-name CircleCI. It is the same key pair that we generated in the Prerequisites section. In my case the key pair name is "CircleCI"; if yours has a different name, use that name instead.
- In the export NEW_INSTANCE_IP=$NEW_INSTANCE_IP line, we are creating a new shell variable and appending it to the $BASH_ENV file for later use.
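The $BASH_ENV hand-off can be sketched locally: CircleCI sources $BASH_ENV at the start of every step, so appending an export line in one step makes the variable visible to the next. A minimal simulation, assuming a throwaway temp file standing in for $BASH_ENV and a placeholder IP:

```shell
# Simulate CircleCI's $BASH_ENV hand-off between two job steps.
BASH_ENV=$(mktemp)

# Step 1: the deploy script appends an export line (placeholder IP).
echo "export NEW_INSTANCE_IP=203.0.113.10" >> "$BASH_ENV"

# Step 2: CircleCI sources $BASH_ENV before running the step's commands.
. "$BASH_ENV"
echo "Reachable at: $NEW_INSTANCE_IP"
```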
Step 3:
To perform the SSH we need an SSH key. In our case, we have already generated the key and added it to the project settings in the Prerequisites section. The remaining work is to tell CircleCI to import the key into our executor so that the ssh command running inside it can use that key.
To import the key use this step in the CircleCI's job:
- add_ssh_keys:
    fingerprints:
      - "43:5d:f6:62:jk:ac:1c:d8:kk:47:5e:sd:19:29:e6:mm"
Note: The fingerprint of the key is shown by CircleCI once you successfully add the key in the project settings.
It will import the key and place it in the "$HOME/.ssh/id_rsa_435df662jkac1cd8kk475esd1929e6mm" file, with "id_rsa_" as a common prefix and the key fingerprint without the ":" separators as the suffix of the file name.
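Since the file name is derived mechanically from the fingerprint, it can be computed in the job itself rather than hard-coded. A small sketch, using the placeholder fingerprint from above:

```shell
# Build the SSH key file path from the fingerprint:
# strip the colons and prepend the "id_rsa_" prefix.
FINGERPRINT="43:5d:f6:62:jk:ac:1c:d8:kk:47:5e:sd:19:29:e6:mm"
KEY_FILE="$HOME/.ssh/id_rsa_$(echo "$FINGERPRINT" | tr -d ':')"
echo "$KEY_FILE"
```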
Now we can perform the SSH from the executor, but before that we will wait for one minute, because after the instance starts, the SSH server on the new EC2 instance takes some time to start its service. To do so, use the sleep 1m command.
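A fixed sleep 1m works, but a bounded retry loop is more robust: poll until a probe succeeds, up to a maximum number of attempts. A sketch of the idea with a generic probe (in the real job, the probe would be something like nc -z $NEW_INSTANCE_IP 22; the flaky_probe function here is a stand-in I made up for demonstration):

```shell
# Poll until a probe command succeeds, with a bounded number of attempts.
retry_until() {
  max=$1; delay=$2; shift 2
  n=0
  until "$@"
  do
    n=$((n + 1))
    [ "$n" -ge "$max" ] && return 1
    sleep "$delay"
  done
  return 0
}

# Demo probe: fails twice, then succeeds (stands in for nc -z $HOST 22).
ATTEMPTS=0
flaky_probe() {
  ATTEMPTS=$((ATTEMPTS + 1))
  [ "$ATTEMPTS" -ge 3 ]
}

retry_until 10 0 flaky_probe && echo "SSH port is up"
```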
Step 4:
In this step we are going to use ssh to set up the application environment on the newly created EC2 instance. Based on the application, your commands may vary. We will move forward with our Node server and install Node.js.
To perform the SSH, use the command below:
ssh -o StrictHostKeyChecking=accept-new \
  -i $HOME/.ssh/id_rsa_4f5df96212ac1cd8d2475e6c1929e6fe \
  ubuntu@$NEW_INSTANCE_IP "mkdir ~/App && cd ~ && curl -sL https://deb.nodesource.com/setup_16.x -o /tmp/nodesource_setup.sh && sudo bash /tmp/nodesource_setup.sh && sudo apt install -y nodejs && node -v"
Note: As the fingerprint value is known to us beforehand, we can easily compute the filename of the SSH key.
Step 5:
Now coming to our final step, where we will either copy the built project from the CircleCI executor to the EC2 instance, or clone the repository directly on the EC2 instance.
If you have a built project, you can copy it using the scp command. In our case we don't have a build step, so we will go with cloning: again using ssh, we will run the clone command on the EC2 instance, install the project dependencies, and run the application.
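For reference, if your project does produce a build artifact, copying it from the executor would look roughly like this (a sketch; the build/ directory name is an assumption, and the key path reuses the fingerprint-derived file name shown earlier):

```shell
# Copy a local build directory to the new instance's ~/App directory.
scp -o StrictHostKeyChecking=accept-new \
  -i $HOME/.ssh/id_rsa_4f5df96212ac1cd8d2475e6c1929e6fe \
  -r build/ ubuntu@$NEW_INSTANCE_IP:~/App
```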
Bye:)
That was all for this blog. The whole config file is provided below; feel free to edit it to fit your needs. Once your config file is in place, whenever you push any changes CircleCI will rerun all the tests and deploy again. To make this CI/CD flow work smoothly, keep the previous and new instance names the same, because otherwise, before every push you would also have to update the instance name environment variables in CircleCI so that the old instance gets deleted.
Config File
version: 2.1
orbs:
  node: circleci/node@5.0.3
  aws-cli: circleci/aws-cli@3.1.4
jobs:
  build_and_test:
    executor: node/default
    steps:
      - checkout
      - node/install-packages:
          pkg-manager: npm
      - run:
          command: npm test
          name: Testing API
  deploy:
    executor: aws-cli/default
    environment:
      PREVIOUS_INSTANCE_NAME: CircleCITest
      NEW_INSTANCE_NAME: CircleCITest
    steps:
      - aws-cli/setup:
          aws-access-key-id: AWS_ACCESS_KEY_ID
          aws-secret-access-key: AWS_SECRET_ACCESS_KEY
          aws-region: AWS_REGION
      - run:
          command: |
            if [ ! -z "$PREVIOUS_INSTANCE_NAME" ]
            then
              INSTANCE_ID=`aws ec2 describe-instances --filters "Name=tag:Name,Values=$PREVIOUS_INSTANCE_NAME" --query "Reservations[].Instances[].[InstanceId]" --output text`
              if [ ! -z "$INSTANCE_ID" ]
              then
                echo "Old instance ID: $INSTANCE_ID"
                aws ec2 terminate-instances --instance-ids $INSTANCE_ID
                echo "Terminated the previous instance"
              else
                echo "Did not find any instance with the provided name"
              fi
            else
              echo "Previous instance name not provided, so moving forward"
            fi
            if [ ! -z "$NEW_INSTANCE_NAME" ]
            then
              NEW_INSTANCE_ID=`aws ec2 run-instances --image-id ami-07ffb2f4d65357b42 --count 1 --instance-type t2.micro --key-name CircleCI --security-group-ids sg-0791c0115b3a5100a --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=$NEW_INSTANCE_NAME}]" --query "Instances[].[InstanceId]" --output text`
              echo "New instance ID: $NEW_INSTANCE_ID"
              echo "Waiting for the new instance to start..."
              aws ec2 wait instance-running --instance-ids $NEW_INSTANCE_ID
              echo "Instance successfully started"
              NEW_INSTANCE_IP=`aws ec2 describe-instances --instance-ids $NEW_INSTANCE_ID --query "Reservations[].Instances[].NetworkInterfaces[].Association.PublicIp" --output text`
              echo "New instance IP: $NEW_INSTANCE_IP"
              echo "export NEW_INSTANCE_IP=$NEW_INSTANCE_IP" >> $BASH_ENV
            else
              echo "New instance name is required!"
            fi
          name: Destroying old instance and creating new one
      - add_ssh_keys:
          fingerprints:
            - "4f:5d:f9:62:12:ac:1c:d8:d2:47:5e:6c:19:29:e6:fe"
      - run:
          command: sleep 1m
          name: Waiting for the SSH server to start on the new EC2 instance
      - run:
          command: ssh -o StrictHostKeyChecking=accept-new -i $HOME/.ssh/id_rsa_4f5df96212ac1cd8d2475e6c1929e6fe ubuntu@$NEW_INSTANCE_IP "mkdir ~/App && cd ~ && curl -sL https://deb.nodesource.com/setup_16.x -o /tmp/nodesource_setup.sh && sudo bash /tmp/nodesource_setup.sh && sudo apt install -y nodejs && node -v"
          name: Performing SSH and setting up the Node environment
      - run:
          command: ssh -o StrictHostKeyChecking=accept-new -i $HOME/.ssh/id_rsa_4f5df96212ac1cd8d2475e6c1929e6fe ubuntu@$NEW_INSTANCE_IP "sudo npm install -g degit && cd ~/App && degit https://github.com/hiumesh/node-rest-api-jest-tests.git && npm install"
          name: Cloning the project
      - run:
          command: ssh -o StrictHostKeyChecking=accept-new -i $HOME/.ssh/id_rsa_4f5df96212ac1cd8d2475e6c1929e6fe ubuntu@$NEW_INSTANCE_IP "touch starter.sh && echo 'cd ~/App && npm start &' > starter.sh"
          name: Creating a starter script
      - run:
          command: ssh -o StrictHostKeyChecking=accept-new -i $HOME/.ssh/id_rsa_4f5df96212ac1cd8d2475e6c1929e6fe ubuntu@$NEW_INSTANCE_IP "sh starter.sh >/dev/null 2>&1 &"
          name: Starting the server