
Luka Popovic

How To Deploy NestJS Application to Production using AWS and GitHub Actions


Introduction

If you are a developer like me, there is a good chance that deploying a web application is not something you do all the time. In most cases, the apps we build are deployed to Kubernetes clusters or other servers by dedicated people who do DevOps for a living and take care of the whole process. I usually only have to do it when we start a new project, and from then on it runs by itself. In this blog post, I will share our way of doing it with you and put all the code we use into a GitHub repository so it can help someone else. There is a lot of documentation and there are many blog posts about the individual steps involved, but I had difficulty finding a comprehensive guide that suits our needs. Therefore, I created one.

These are the steps that we will do today:

  1. We will prepare our repository for production deployment
  2. We will configure the AWS EC2 server
  3. We will set up the Nginx reverse proxy and Certbot on our newly created server
  4. We will add GitHub action that will deploy our code to the AWS EC2 server
  5. We will check that everything is running correctly and call it a day

Concept

First, I want to explain how this deployment will work. I assume this setup is not a fit for everyone, so I want to save you some reading time if it doesn't suit your needs.

We will deploy our service or application onto a single AWS EC2 server, which runs Docker and manages our services using docker-compose. This is a pretty basic setup that gives us some flexibility if we need to scale, but we would have to do that manually. If you need a more advanced setup where you can automatically scale your services based on the load and on the fly, I would strongly suggest looking into Kubernetes and how to set it up using Helm, Flux, or something similar. In that case, it also makes sense to read a bit about GitOps.

For accessing our service, it is strongly recommended to use HTTPS. The unsecured HTTP protocol is fine for development, but I wouldn't do a production deployment with it.

In order to serve our service with the HTTPS in this particular case, we have two options:

  • Run the NestJS application in HTTPS mode
  • Create a reverse proxy that will be exposed to the internet and handle the HTTPS

In this tutorial, we will do the latter. There are several reasons why we chose this approach over the first one. One of them is that creating and maintaining your own certificates is not straightforward: you would need some kind of Certificate Authority to do the signing, that authority would have to be trusted by the major browsers, or you would need to create exceptions in every browser, and so on. Not straightforward at all. The other reason is that with a reverse proxy, we can run multiple services behind it on the same domain.

For our certificates, we will use Let's Encrypt. Let's Encrypt is a nonprofit certificate authority that is trusted by all major browsers and is pretty easy to set up and get going. There is only one catch! A Let's Encrypt certificate is valid for only three months. We need to renew the certificate before it expires, and we don't want to do that manually. For that, we have Certbot.

Certbot is a tool that takes over the manual work and checks whether the certificate is still valid. If it is about to expire, Certbot requests a new one for you. That way, we avoid the manual overhead of tracking and renewing our certificates.
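To give you a rough idea of how this automation usually looks in practice, here is a minimal docker-compose sketch of a widely used Certbot renewal pattern. It is an illustration only; the service name and volume paths are placeholders and may differ from the helper repository we use later.

  certbot:
    image: certbot/certbot
    volumes:
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
    # re-check the certificates twice a day and renew them when they are close to expiry
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"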

The last piece of the puzzle is our reverse proxy. For this, there are multiple options like Traefik, Caddy, HA Proxy, and Nginx. We will use Nginx in this example for its simplicity, wide usage, and big community where questions can be asked if something goes sideways.

Behind the reverse proxy, you can have any web server or service running. In our case that will be the NestJS application that provides a GraphQL API.

Prerequisites

There are some prerequisites that you need to have in order to follow the tutorial:

  1. You need to have an AWS account. If you don't have one, you can open it by following this link.
  2. You need to have an application or service that you want to deploy to production. It should be already in the GitHub repository. If not, you can see here how to do it.
  3. You need some kind of domain, either free or paid. If you don't have one, you can create a free one at DuckDNS.

Preparing our repository for production deployment

If we are not careful, there is a possibility that our project repository is not really ready for our application or service to be deployed to production. We might be missing a Dockerfile or a docker-compose.yml file, or have some of them misconfigured. That could allow someone to access our database or service even though they shouldn't be able to. To be on the safe side, it is always a good idea to check these things before creating the first deployment and before making our application or service available on the internet. At OctaCode, we always use our scaffolding project to spin up our services or applications. That way, we at least exclude some misconfigurations from the equation during our first deployments. You can check our scaffolding project repository here.

I will not go through all the details of what needs to be configured for the production deployment. You can check the repository for more, but here are some of the things that you should check:

  • If you have Dockerfile and docker-compose.yml defined
  • If some of the services in the docker-compose.yml are exposed to the internet (usually, you should bind them just to localhost if you use the reverse proxy; see the sketch after this list)
  • If your build scripts are prepared and your production build is working correctly
  • If you have environment variables that are used by the application and that should also be added to GitHub or to the deployment server
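To make the localhost point from the list above more concrete, here is a rough docker-compose.yml fragment. The service names, ports, and images are placeholders, not the exact content of our scaffolding project.

services:
  service:
    build: .
    # the application will later talk to the reverse proxy over a shared docker network,
    # so it does not need a port mapping that is open to the whole internet
    ports:
      - "127.0.0.1:3000:3000"
  db:
    image: postgres:15
    # bound to localhost only: the database is not reachable from outside the server
    ports:
      - "127.0.0.1:5432:5432"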

Configuring AWS EC2 server

I assume you have already created the AWS account, so we can start spinning up our server. If not, you can see how to do it in the Prerequisites section above.

Creating an EC2 instance in AWS is mostly a simple process. It goes something like this:

  1. You log in to the AWS Management Console
  2. From there, you can select services and search for EC2
  3. Clicking on 'Launch Instance' will start the process
  4. From here, you can choose different operating systems for your instance, different instance types, and much more. We like to keep it simple: choose Ubuntu for your image and t2.micro for your instance type. This will be enough for our purposes and, since t2.micro is covered by the free tier, you will most likely not get billed by AWS for it.
  5. Then, you need to choose a key pair for accessing the server. If you don't already have one, you can create a new key pair right there. Make sure to keep the .pem file of this key, otherwise you will not be able to access your server over SSH.
  6. Then we need to set up the network settings. We need to allow HTTP and HTTPS access from the internet. SSH access should also be allowed, since this is how the GitHub Action will connect to the server.
  7. Lastly, you configure your storage amount needed (30GB is free in the free tier) and click on 'Launch Instance'.

After the instance wizard is closed, it will take a minute or so for your instance to be ready. Good job creating it!
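Before we can install anything, we need to connect to the instance over SSH. Assuming you chose the Ubuntu image (whose default user is ubuntu) and downloaded a key called my-key.pem (the file name is just an example), the connection looks roughly like this:

chmod 400 my-key.pem
ssh -i my-key.pem ubuntu@<public IP of your instance>

The chmod is needed because ssh refuses to use a key file that is readable by other users.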

Setting up docker and docker-compose

After our instance is ready, we need to install docker and docker-compose on our server. This can be achieved by following the official documentation here.

Here are the commands if you are too lazy to follow the link:

Remove old versions if applicable



sudo apt-get remove docker docker-engine docker.io containerd runc



Update apt and allow it to use repositories over the HTTPS



sudo apt-get update
sudo apt-get install ca-certificates curl gnupg



Add Docker's official GPG key



sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg



Add Docker repository



echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null



Install docker



sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-compose



Add your user to docker group



sudo groupadd docker
sudo usermod -aG docker $USER



After this, you can run the following command to propagate the changes without having to close and reopen the shell:



newgrp docker



To test that Docker is working correctly, you can create a network; we will need it later anyway:



docker network create dockernet


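If you want an extra sanity check that the Docker daemon itself is healthy, the standard hello-world image does the job:

docker --version
docker run hello-world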

Good job! Docker and docker-compose should be available on the server now, and we can proceed to the next step.

Connecting our domain with our server

Since certificates and HTTPS cannot work with bare IP addresses, we need to point our domain at our server. Then we can use the domain for the certificate setup.

If you created your free domain with DuckDNS, all you need to do is update the current IP of the domain you created under the domains section.

On the other hand, if you have a registered paid domain, you can create an A Record in your DNS configuration for the subdomain and point it to the IP address of your server.
Note: The public IP address of your server is visible in the EC2 console for the running instance.

Setting up Nginx reverse proxy and Certbot on our newly created server

The next step is a bit tricky and can be hard, especially if you are a beginner. This article helped a lot during our research and heavily influenced our setup. To simplify the process, we created a repository that you can clone to get a head start.

After cloning the repository, you can go into the nginx-reverse-proxy folder:



cd nginx-reverse-proxy



You will need to do a couple of configuration steps before spinning up docker containers:

  • Exchange change-me.org with your domain name in a couple of places so that the proxy spins up correctly.
  • Replace 'address and port of your service' with the docker container name of your service and its internal docker port (not the one exposed to the host); see the sketch after this list.
  • Your service needs to run inside the dockernet network in order to be visible by the proxy!
  • If your service is running outside of docker, uncomment bridge mode in the docker-compose.yml and comment out the networks sections
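To make the first two points more concrete, here is a rough sketch of how the relevant part of the app.conf server block could look after the replacements, assuming the hypothetical domain api.example.org and a service container called service listening on internal port 3000. The certificate paths follow the standard Let's Encrypt layout; the real app.conf in the repository contains additional configuration beyond this.

server {
    listen 443 ssl;
    server_name api.example.org;

    ssl_certificate /etc/letsencrypt/live/api.example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.org/privkey.pem;

    location / {
        # container name and internal port of your service on the dockernet network
        proxy_pass http://service:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}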

After that, you will need to make init.sh executable with



sudo chmod +x init.sh



And execute it with



sudo ./init.sh



Follow the process, and after it is finished, run the following command to start the docker containers.



docker-compose up -d



Great job! Now you should have a working reverse proxy, and the only thing left to do is to set up our deployment.

Adding a GitHub Action that will deploy our code to the AWS EC2 server

What are GitHub actions?

GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline. You can create workflows that build and test every pull request to your repository, or deploy merged pull requests to production.

That is a proper definition, great! But what does it actually mean in plain English?

GitHub Actions are basically YAML configuration files that live inside your repository in the .github/workflows folder and execute some steps when something happens in GitHub. That trigger can be a merged pull request, a push to a branch, and much more. It is a really powerful tool that lets us automate things like deployment processes, releases, testing, etc.

For our purpose, we will define a GitHub Action that handles the deployment process for us. It will look something like this:

  1. Whenever we merge something into the main branch
  2. Try to run docker-compose to build the containers
  3. If everything is correct and there are no errors in the build
  4. SSH into our server and get the latest code from the main branch
  5. Build and start our containers

Generate an SSH key pair and set up access to the GitHub repository

In order to achieve this, we will need to set up an SSH key on our EC2 server.

We can do this using the following command:



ssh-keygen -t rsa



Follow the instructions (press Enter a couple of times) until the key is generated. We have just generated a public/private key pair that we can use to authenticate git commands against the GitHub repository. To make this work, we need to add our public (not private) key to the settings of the GitHub repository under the deploy keys.

You can print your public key by executing the following command:



cat ~/.ssh/id_rsa.pub



Click the Add deploy key button in GitHub and paste the output of the command into the key field. You can name the key after the server that connects to GitHub.

Save it and proceed.

To test it, you can try to clone your repository somewhere on the server. Please make sure that you are using the SSH GitHub URL when cloning the repository and not the HTTPS one, since only the SSH URL works with a deploy key.
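For example, with the user and repository names being placeholders:

git clone git@github.com:<your-user>/<your-repo>.git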

Add EC2 server information in GitHub secrets

We have set up the SSH key so that our server can access the GitHub repository, and now we need to do it the other way around. We need to add three things to the GitHub repository settings so that we can connect to our server when the GitHub Action is running.

These three things are:

  1. Server username
  2. Server public IP address
  3. PEM certificate that we got when creating our instance

Since these things are security concerns, we don't want to hardcode them into our actions or save them in the repository. For this kind of thing, we use GitHub secrets.

We want our secrets to be scoped to our repository; therefore, we will go into the repository settings, choose the Secrets and variables section, and select Actions.

In there, we will click on the 'New repository secret' button to add a new secret.

We will add the following secrets:



name: USERNAME, secret: <server's username>
name: HOSTNAME, secret: <server's IP address>
name: AWS_PRIVATE_KEY, secret: <content of the .pem file>



After that, we can add our GitHub Action to our repository.

  • Create a new folder .github in the root of the project.
  • Inside .github, create another folder named workflows
  • In the workflows directory, create a file named deployment.yml

In this file, we will add the following code:



name: Fancy Deployment

on:
  push:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build docker_compose
        run: docker-compose up -d --build
      - name: Build application
        run: docker-compose exec -T service yarn build
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Deploy application
        env:
          PRIVATE_KEY: ${{ secrets.AWS_PRIVATE_KEY }}
          HOSTNAME : ${{ secrets.HOSTNAME }}
          USERNAME : ${{ secrets.USERNAME }}
        run: |
          echo "$PRIVATE_KEY" > private_key && chmod 600 private_key
          ssh -o StrictHostKeyChecking=no -i private_key ${USERNAME}@${HOSTNAME} '

          cd /<path to your cloned github repository on the server> &&
          git checkout main &&
          git fetch --all &&
          git reset --hard origin/main &&
          git pull origin main &&
          docker-compose up -d --build
        '



You are probably wondering what is happening in this code snippet, so I will try to explain it a bit.

First, we declare the jobs. We have two of them: build and deploy. As its name says, the first one builds the docker containers using docker-compose and tries to execute the build command that would normally run inside the docker container.

If this succeeds, we move on to the next job, which uses our defined secrets to connect to our server. There it executes a small script that pulls the latest version of our main branch and starts our docker containers using docker-compose. That starts our application on the server.

When you save this file and push it to the repository, and it lands in the main branch either directly or via a pull request, GitHub will recognize the file and start the pipeline. You can see the outcome of this action in GitHub under the Actions section.

If we did everything right, our application should be deployed, and we should be able to access it using the domain name specified in the reverse proxy. It should also be using HTTPS, as every modern software project should!
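A quick way to confirm from your own machine that the proxy and the certificate are in place (substitute your own domain) is:

curl -I https://your-domain.org

If everything is wired up correctly, you should get HTTP response headers back instead of a connection or certificate error.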

Checking if everything works correctly

If everything works fine, then great job! You can go and have a coffee; you deserve it! If not, don't worry; in my experience, things rarely work on the first try. People make mistakes, and it is possible that I wrote something wrong or that you configured something wrong. In that case, here are a couple of things you can double-check to help troubleshoot your setup.

1. Can't access service or getting Bad Gateway from the Nginx

If you can't access your service, the first place to check what is happening is the logs of the Nginx service. You can do this by going to the EC2 server and executing the following command to see the docker container logs:



docker logs <nginx_container_name>



If you see errors here, there is a good chance that something is misconfigured in the app.conf file of Nginx. Take another look and make sure that your domain name is correct. Also, check that proxy_pass is pointing to the correct service. If your other service is running in docker as well, make sure to add the dockernet external network to the service specification in its docker-compose.yml so that the Nginx container can reach it. This is already configured correctly if you used our scaffolding project.
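For reference, attaching a service to the existing dockernet network looks roughly like this in the service's docker-compose.yml (assuming the service is called service):

services:
  service:
    # ... image/build and the rest of your service configuration ...
    networks:
      - dockernet

networks:
  dockernet:
    external: true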

Another issue can be that the domain is not resolving to the IP address of our server.

You can check it with the following command:



nslookup your-domain



If this is the case, you should double-check the step where you created the A record or configured the IP address in DuckDNS.

2. Can't generate certificates with Let's Encrypt

If this error pops up when executing the init.sh script, the first thing to check is whether the domain names are properly replaced in app.conf and in init.sh.
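A quick way to spot a leftover placeholder is to search the cloned folder for it:

grep -rn "change-me.org" .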

The second issue could be that you forgot to tick Allow HTTP and Allow HTTPS from the internet when creating the EC2 instance, so the challenges needed by Let's Encrypt cannot be reached. In that case, you will need to go to the EC2 console and click on Security Groups under Network & Security in the left side menu.

When the security groups open, you will have one called launch-wizard-some-number. Select it, then in the lower part choose Inbound Rules. Click the Edit Inbound Rules button, and when the page opens, click Add Rule. Select HTTP from the first dropdown, choose Anywhere-IPv4 as the source, and save the rules. Add the same rule for HTTPS if it is missing as well.

Then try to run the init.sh script again. If this was the issue, it should now work correctly.

Summary

This was a long one, but I hope it will help at least some of you, fellow developers, when you face deployment tasks. Surely there are other ways of achieving similar results, both more complex and potentially simpler ones. I didn't find the simpler ones, and this was the most straightforward way of covering our needs.

This is suitable for simpler projects that don't need a lot of flexibility. In some of the upcoming blog posts, we will talk about GitOps and Kubernetes and how we utilize them for the more complex projects that we work on. Until then, happy coding!

References:

Original blog posted here

Top comments (3)

Lars H

Great article!

You write that you really like working with NestJS.

Could you put some extra words on why?

I am currently focused on the MERN stack. But I am actively researching other web frameworks.

How do you think NestJS compares to other JavaScript web frameworks?

  • The two React-based web frameworks: Remix and Next

  • Blitz and Redwood - appear to be full-stack frameworks like Rails

  • Other Javascript-based web frameworks: Sails, Adonis, Ember, Meteor

Would be great to hear your opinion.

Luka Popovic

Hi Lars,

glad that you liked my first article! I am a newbie in writing, so positive and constructive feedback is always welcome!

In my case, I use NestJS as a pure backend JavaScript framework. For the frontend, I like using React; I am a bit old school and in my comfort zone there. Therefore, I can't compare NestJS directly to most of the frameworks you mentioned, like Next, Blitz, Remix, and Ember, because they are more focused on the frontend part, albeit in a server-side rendering setup.

I was actually not familiar with Redwood, so thank you for bringing that up for me.

What I like about NestJS is its syntax. My background is in OOP, and I felt at home with NestJS. Features like dependency injection and decorators felt natural to me, and I think they help create more readable and simpler code.

The other thing is its community. I think NestJS is mature enough to be used in large-scale production systems.

The third thing is just personal preference. There are so many frameworks out there nowadays that at some point it makes sense to choose one and stick with it. It takes a certain amount of time and effort to become familiar with a framework and learn its details.

And last but not least, there is the market itself. If we exclude Next, which is not a direct comparison, I haven't seen any of the mentioned frameworks used by the clients that I work with or when applying for certain jobs.

So all in all, for me and my use case NestJS is still a safe bet. :)

Cheers

Gabriel Henrique • Edited

Hi, I'm a bit of a noob at this. In the Nginx part, should I clone the helper repository on the EC2 instance and leave the containers running there, or on my computer? I didn't get it, sorry.