
Deploy a Deno Application on AWS Using Docker and Travis CI

Farhan Hasin Chowdhury · Updated · 13 min read

Although Deno is still in a very early phase of its life and not ready for use in production-grade applications, I decided to see if I could deploy a Deno application on AWS using a deployment process I usually use.

In this article, I'll guide you through the process of setting up a CI/CD pipeline on Travis CI and deploying a Deno application on AWS Elastic Beanstalk. For the database, we'll be using AWS Relational Database Service (RDS). The application we're going to deploy is an API that I made a few weeks ago with Deno:

GitHub: fhsinchy / deno-blog

An experimental blogging application developed with Docker, Deno, Oak and MySQL 🦕

This is a simple blogging API with stateless authentication and full CRUD functionality. If you want to learn about the process of developing and dockerizing such an application in Deno, you can read my previous article on the topic:

Going forward, I'm assuming that you have working experience with Docker. Having some familiarity with AWS and Travis CI will be helpful but is not mandatory.

Now that we're done with the introduction, let's get started with the deployment process.


Deployment Workflow

A representation of the deployment workflow is as follows:

Deployment Workflow Diagram

As you can see in the diagram, whenever we push some new code to our repository, it notifies Travis CI. Travis CI will then run the tests we've written inside our application.

If all tests pass, Travis CI will proceed with deploying the project on AWS. In case of a failure, Travis CI will stop and notify us about it.

The deployment process consists of the following steps:

  • Setting up the Repository on GitHub.
  • Setting up Travis CI and Configuring It for Testing.
  • Setting up AWS Elastic Beanstalk.
  • Configuring Travis CI for Deployment.
  • Creating the Database on RDS.
  • Testing the API Using Postman.

Without any further ado, let's jump right into the first step in the next section.


Setting up the Repository on GitHub

The first thing we need for our CI/CD pipeline to work is a public repository on GitHub. Travis CI works with private repositories as well, but if you want everything to match what you see here, you should create a public repository too.

I'll be using the master branch of the following repository:

GitHub: fhsinchy / deno-blog

An experimental blogging application developed with Docker, Deno, Oak and MySQL 🦕

You can either fork it or create a copy of it by downloading it as a zip. Make sure you delete the .travis.yml file; we'll be rewriting this file from scratch.


Setting up Travis CI

Now that you have your repository set up, go to https://travis-ci.com/ and sign up or sign in with GitHub. Once authorized, you'll be asked to install Travis CI as a GitHub app.

On the next step, you can either select a single repository or all the repositories on your profile; do whatever you prefer. Once you've chosen the repository, click the Approve & Install button:

Select Single Repository

On the next step, you'll be asked to activate the GitHub apps integration. Click on the Activate button:

Activate Apps Integration

Again, on the next step, you'll be asked to select a single repository or all the repositories. I'm selecting a single repository. Click on the Approve and install button:

Select Single Repository

Once this step is done, go to your Travis CI dashboard and you should see your repositories listed:

Travis CI Repository List

Going inside the repository, you should see something like the following:

No Builds

That's because there is no .travis.yml file in the repository.


Configuring Travis CI for Testing

In the previous section, we saw a No builds for this repository message. To make our project recognized by Travis CI, we have to create a .travis.yml file at the root of our project. Go ahead and create the file by executing the following command in your terminal:

touch .travis.yml

Open up the file in your favorite text editor and put in the following code:

sudo: required

services:
    - docker

Before I start explaining the code here, let me tell you how Travis CI works. Travis CI runs a virtual machine for each build. This virtual machine acts as an environment for our code and runs the instructions we give it.

On the first line, we're letting Travis CI know that we require superuser permission to do our thing. Then on the third line, we start defining services. Defining services inside a .travis.yml file is a lot like defining them inside a docker-compose.yml file. For testing our application, all we need is Docker. You can learn more about this on the official Using Docker in Builds documentation page.

So far what we've done is not very interesting. To make it interesting add the following bit of code to your .travis.yml file:

# previously written code

before_install:
    - docker build -t fhsinchy/deno-blog .

script:
    - docker run fhsinchy/deno-blog test --allow-env tests/version.test.ts

Here, we define before_install, which is a job life-cycle phase. In this phase, we build our Docker image and tag it with a name. Once the image is built, we define the script phase, which is one of the main phases in the Travis CI job life-cycle. If you want to learn more about the job life-cycle, there is an entire Job Lifecycle page in the official documentation.

I haven't written any big tests in this project, just a demo test that checks whether the Deno version is 1 or higher. You can see the test source code inside the tests/version.test.ts file:

import { assert } from "https://deno.land/std/testing/asserts.ts";

Deno.test({
  name: "checking deno version",
  fn(): void {
    const version: any = Deno.env.get("DENO_VERSION");
    assert(parseInt(version) >= 1);
  },
});

You can write more tests if you want to, but I'm sticking with this single demo test.
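For illustration, an additional test could look something like the sketch below. The slugify helper here is purely hypothetical and inlined just for the example; the actual application may handle slugs differently.

import { assertEquals } from "https://deno.land/std/testing/asserts.ts";

Deno.test({
  name: "slugs should be lowercase and hyphen separated",
  fn(): void {
    // Hypothetical helper, inlined only for this example.
    const slugify = (title: string): string =>
      title.toLowerCase().trim().replace(/[^a-z0-9]+/g, "-");

    assertEquals(slugify("Hello Deno World"), "hello-deno-world");
  },
});

With that, we're ready to push our code and let Travis CI test it out.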

To run the tests, commit all the changes and push them to the GitHub repository.
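From a terminal, that boils down to something like this (the commit message is just an example):

git add .travis.yml
git commit -m "Add Travis CI configuration"
git push origin master

Travis CI should pick up the push and start running the given instructions. If the tests pass, you should see a green notification: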

Passed

Now that our tests are passing, we can continue to the next section.


Setting up AWS Elastic Beanstalk

Go to AWS Management Console and use the Find Services search box to look for Elastic Beanstalk. Navigate to Elastic Beanstalk Management Console and click Create Application.

Fill up the application creation form by putting in the following values:

Application name: deno-blog
Platform: Docker (leave the branch and version as they are)
Application code: Sample application

Click on the Create application button at the bottom and wait till the process is finished. Once done, you should see the dashboard with a big Health Ok sign:

Application Created

You can find the application URL under the environment name:

Application URL

Clicking that will take you to the sample application:

Welcome
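As an aside, the same application and environment can also be created from the command line with the EB CLI. Here's a rough sketch, assuming the EB CLI (awsebcli) is installed and configured; the console route above is what the rest of this article follows:

# initialize an application named deno-blog with the Docker platform
eb init deno-blog --platform docker --region us-east-2

# create an environment running the sample application
eb create DenoBlog-env --sample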

That's pretty much it for the Elastic Beanstalk set-up for now. We'll go back to our .travis.yml file and configure the deployment there.


Configuring Travis CI for Deployment

Open up .travis.yml and add the following bit of code to it:

# previously written code

deploy:
    provider: 
    region:
    app:
    env:
    bucket_name:
    bucket_path:
    access_key_id:
    secret_access_key:
    on:
        branch:

We've added a new deploy block in the file that contains quite a lot of options to be configured. Let's begin with the easy ones.

The app is deno-blog, and something I know from previous experience is that the value of bucket_path is always the same as the application name.

The region and env name can be found on the dashboard itself:

region and env

Here in the URL you can see the region is us-east-2 and the env is DenoBlog-env.

The provider name is elasticbeanstalk without any spaces in between. Travis CI supports a lot of providers by default. You can read about them on the official Deployment documentation page.

deploy:
    provider: elasticbeanstalk
    region: "us-east-2"
    app: "deno-blog"
    env: "DenoBlog-env"
    bucket_name:
    bucket_path: "deno-blog"
    access_key_id:
    secret_access_key:
    on:
        branch: master

By setting branch: master in the on block, we're saying that deployment should happen only when code is pushed to the master branch.

When we deploy code on AWS, the application bundle lives inside an S3 bucket. To find out the bucket_name, navigate to the Services list in the top left corner of your screen. Click on S3 in the Storage section there. You can search for S3 as well if you want.

In the S3 Management Console, there should be a list of all the S3 buckets that you have. Find the one that contains the previously mentioned region in its name. In my case, the value of region is us-east-2, so the bucket to choose from the list is elasticbeanstalk-us-east-2-890476563482:

S3 Bucket List

Fortunately, that's the only bucket in my list.
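If you have the AWS CLI installed and configured, you can also list your buckets from the terminal; the date in the sample output below is illustrative:

aws s3 ls
# 2020-06-15 10:12:45 elasticbeanstalk-us-east-2-890476563482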

Now, lastly, the values of access_key_id and secret_access_key will come from another AWS service called AWS Identity and Access Management (IAM). Go to the Services list again and find IAM in the Security, Identity, & Compliance section. You can search for it as well if you want.

Once on the IAM Management Console, click on Users under the Access Management menu in the left sidebar. On the next page, click on Add user to create a new user. Fill up the form as follows:

New User

Click on the Next: Permissions button. On this page, select Attach existing policies directly and pick AWSElasticBeanstalkFullAccess from the list of policies. Make use of the fantastic search box there to avoid scrolling through hundreds of policies:

Permission Policies

Click on the Next: Tags button. Tags are optional, so go ahead and click the Next: Review button. On the review page, click the Create user button right away.

On the next page, however, there are two very important values. In the list of users there, you'll find an Access key ID and a Secret access key:

Secrets

You can view the Secret access key by pressing the Show button. Copy both values and save them somewhere safe on your computer; you'll be needing them soon. Finally, click on the Close button to finish this.

Given that the access_key_id and the secret_access_key are secrets, we cannot put them directly in the repository. We'll instead store them as environment variables in the Travis CI environment.

Go back to your Travis CI project page. Click on the More options menu on the top right side of the screen and select Settings:

Travis CI Environment Variables

Scroll down to the Environment Variables section. Create two variables named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and put in the secrets obtained from AWS IAM.
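If you prefer the command line over the web UI, the same variables can be set with the Travis CLI. This assumes the travis Ruby gem is installed and logged in, and the values shown are placeholders:

# store the AWS credentials as hidden environment variables for the repository
travis env set AWS_ACCESS_KEY_ID "your-access-key-id" --private
travis env set AWS_SECRET_ACCESS_KEY "your-secret-access-key" --private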

Now that we have all the values, let's populate the .travis.yml file:

deploy:
    provider: elasticbeanstalk
    region: "us-east-2"
    app: "deno-blog"
    env: "DenoBlog-env"
    bucket_name: "elasticbeanstalk-us-east-2-890476563482"
    bucket_path: "deno-blog"
    access_key_id: $AWS_ACCESS_KEY_ID
    secret_access_key: "$AWS_SECRET_ACCESS_KEY"
    on:
        branch: master

Make sure you match up the environment variable names properly.

With that done, we're now ready to deploy our application to AWS. Please recheck your .travis.yml file and make sure you haven't made any mistakes. My file looks like the following:

sudo: required

services:
    - docker

before_install:
    - docker build -t fhsinchy/deno-blog .

script:
    - docker run fhsinchy/deno-blog test --allow-env tests/version.test.ts

deploy:
    provider: elasticbeanstalk
    region: "us-east-2"
    app: "deno-blog"
    env: "DenoBlog-env"
    bucket_name: "elasticbeanstalk-us-east-2-890476563482"
    bucket_path: "deno-blog"
    access_key_id: $AWS_ACCESS_KEY_ID
    secret_access_key: "$AWS_SECRET_ACCESS_KEY"
    on:
        branch: master

If all is good, commit the changes and push to the repository. Wait for Travis CI to finish up its thing and scroll down to the Job log. If everything goes fine, you should see something like this:

Installing deploy dependencies
Preparing deploy
Deploying application
No stash entries found.
Done. Your build exited with 0.

Now go back to your Elastic Beanstalk Management Console and you should see something like this:

Deployed

As you can see, the Running version shows a build from Travis CI. Ok health means we're good to go. If you're still seeing Sample application as the Running version, click the Refresh button on the top right side.

Try accessing the URL. You should see something like this:

200

This means the API has been deployed successfully. Elastic Beanstalk can build and run any Dockerfile found in the application automatically. In the case of multi-container applications, things can get a bit more complicated and configuration-heavy, but that's a topic for another article.

All that is left to do now is adding the database.


Creating the Database on RDS

The application we're deploying uses MySQL as its database system. To host the database in the cloud, we'll make use of another AWS service called Relational Database Service (RDS).

Go to the Services list and find RDS in the Database section. You can search for it as well if you want. Once you're on the RDS Management Console, navigate to the Databases page from the left side menu. Click the Create database button and, on the next page, carefully fill up the form as follows:

Choose a database creation method: Standard Create
Engine options: MySQL
Version: 5.7.28
Templates: Free tier

In the Settings section, give your database instance a descriptive name like deno-blog-db-service. Put in a master username like root and a secure password:

DB Settings

Note down the username and password; you'll be needing these two soon. Leave everything as it is in the DB instance size section.

In the Connectivity section, open up the Additional connectivity configuration sub-section and set Publicly accessible to Yes, otherwise you won't be able to connect to it from your local machine:

Publicly Accessible

Leave the Database authentication section as it is and go to the Additional configuration section. In this section, put in an Initial database name like denoblog and note it down somewhere.

Finally, click on the Create database button at the bottom right corner. The database creation process takes quite a while, and during this time you'll see the database Status as Creating:

Database Creating Status

Once the database has been created, the Status will be updated to Available:

Database Available Status

Although the database is publicly accessible, you still won't be able to connect to it from your local machine until an inbound rule is updated in the security group. To do that, click on the name of your newly created database in the list of databases. On the Connectivity & security tab, look for VPC security groups. It usually sits in the Security column:

VPC Security Group

Click on the name of the security group and it'll take you to a new page titled EC2 Management Console. There click on the Actions button and select Edit inbound rules:

Edit Inbound Rules

On the next page, you should see a list of inbound rules:

Source

Change the Source from Custom to Anywhere and hit the Save rules button at the bottom right corner.
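If you'd rather do this from the terminal, the equivalent AWS CLI call looks something like this (the security group ID is a placeholder for your own):

# open MySQL's port 3306 to all IPv4 addresses on the given security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3306 --cidr 0.0.0.0/0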

Once the rules are saved, go back to your RDS Management Console. Select the database you just created and look for the Endpoint & port values:

Endpoint and Port

Copy the endpoint and keep it somewhere safe.

A MySQL client is required in this step to create the tables. I will be using MySQL Workbench. You may use anything else like Sequel Pro, DBeaver or even the MySQL CLI.

Open your MySQL client application. Create a new connection with the previously noted endpoint and port. Use the master password you created in the previous section:

Create Database Connection
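If you went with the MySQL CLI instead of a GUI client, the connection would look something like this (the host below is a placeholder for your own RDS endpoint):

# connect to the RDS instance; replace the host with your own endpoint
mysql --host your-instance.xxxxxxxxxxxx.us-east-2.rds.amazonaws.com --port 3306 --user root -p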

To create the tables, run the following SQL code:

USE denoblog;

CREATE TABLE IF NOT EXISTS users (
    id int(11) NOT NULL AUTO_INCREMENT,
    name varchar(255) NOT NULL,
    email varchar(255) NOT NULL UNIQUE,
    password varchar(255) NOT NULL,
    created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (id)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS blogs (
    id int(11) NOT NULL AUTO_INCREMENT,
    title varchar(255) NOT NULL,
    content text NOT NULL,
    slug varchar(255) NOT NULL UNIQUE,
    created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (id)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;

If everything goes fine, two tables, users and blogs, should be created.

Tables Created

Now that we have our database ready, we need to set up some environment variables for our application. If you open up the docker-compose.yml file in our application code, and scroll to the very bottom, you should see something like this:

environment: 
    - DB_HOST=db # this should be identical to the database service name
    - DB_USER=root
    - DB_DATABASE=denoblog
    - DB_PASSWORD=63eaQB9wtLqmNBpg
    - TOKEN_SECRET=QA3GCPvnNO3e6x29dFfzbvIlP8pRNwif

These are the environment variables necessary for running the application. To set them up, go to your Elastic Beanstalk Management Console one last time. From the left side menu, click on Configuration and then click the Edit button in the Software section:

Elastic Beanstalk Environment Variables

Scroll down to the Environment properties section and add all the above-mentioned variables. Make sure to use the RDS endpoint as the host, along with the database name, username, and master password from the database creation step. The TOKEN_SECRET can be any random secure string; it's used for generating JWT tokens in the application.
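One way to generate a random value for TOKEN_SECRET, and an alternative way to set the variables with the EB CLI instead of the console, is sketched below. The angle-bracketed values are placeholders, and this assumes OpenSSL and the EB CLI are installed:

# generate a random 32-byte, base64-encoded secret for TOKEN_SECRET
openssl rand -base64 32

# set the environment variables on the Elastic Beanstalk environment
eb setenv DB_HOST=<rds-endpoint> DB_USER=root DB_DATABASE=denoblog DB_PASSWORD=<master-password> TOKEN_SECRET=<random-string>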


Testing the API using Postman

Finally, it's time to test out the API. There is a postman-collection/deno-blog.postman_collection.json file in the source code. You can import that into Postman and update the host to match your application URL:

Postman Testing
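If you just want a quick smoke test from the terminal instead, hitting the root endpoint of your environment URL should return a 200 response, as we saw earlier (the URL below is a placeholder for your own):

curl -i http://denoblog-env.us-east-2.elasticbeanstalk.com/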


Conclusion

Well, that was a lot of writing. I hope you've enjoyed this article. Thank you from the bottom of my heart for taking an interest in my writing.

Best of luck for your journey to the Deno Land ✈️
