Yevhen Bondar for Daiquiri Team


Deploying Django Application on AWS with Terraform. GitLab CI/CD

In previous parts, we've deployed the Django web application to ECS and connected PostgreSQL to it. But now, we have to deploy application changes manually. In this part, we are going to automate this process with the following steps:

  • Create a GitLab group and projects for the backend and the infrastructure repositories.
  • Add the test CI/CD stage to run tests.
  • Add the build CI/CD stage to build a docker image and push it to the ECR.
  • Add the deploy CI/CD stage to update the application on AWS.

Creating Projects

Let's start with creating a GitLab group and projects. Go to gitlab.com and register an account.

Then create a GitLab group. A group is like a folder for projects: you can configure shared settings, such as access policy and CI variables, for all projects in it.

Create projects for Django and Terraform in this group. Be sure to uncheck the "Initialize repository with a README" option to create a clean repository.

Also, add your SSH key in the SSH Keys section so you can access your projects via Git.

Now, let's push both of our repositories to GitLab.

# Push backend
cd ../django-aws-backend
git remote add origin git@gitlab.com:django-aws/django-aws-backend 
git push --set-upstream origin main

# Push infrastructure
cd ../django-aws-infrastructure
git remote add origin git@gitlab.com:django-aws/django-aws-infrastructure 
git push --set-upstream origin main

Check your GitLab projects in a browser and verify that the push is successful.

Stage: Test

Now let's add a unit test check to the Django project. Django's unit tests use the Python standard library module unittest. See more info about testing a Django application here.

Go to the Django project, activate venv and start a PostgreSQL Docker container:

$ cd ../django-aws-backend
$ . ./venv/bin/activate
$ docker-compose up -d

Let's create django_aws/tests.py and add a simple test to check DB connection:

from django.test import TestCase
from django.db import connection


class DbConnectionTestCase(TestCase):
    def test_db_connection(self):
        self.assertTrue(connection.is_usable())

Run python manage.py test locally:

$ python manage.py test
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
.
----------------------------------------------------------------------
Ran 1 test in 0.010s

OK
Destroying test database for alias 'default'...

Now, let's add this check on GitLab's side. Take a look here if you don't know what GitLab CI is. Generally, GitLab runs the code specified in .gitlab-ci.yml on every push to the repository.

Create a .gitlab-ci.yml file with the following content:

image: python:3.10

stages:
  - test

variables:
  POSTGRES_PASSWORD: password
  POSTGRES_DB: django_aws
  DATABASE_URL: postgres://postgres:password@postgres:5432/django_aws

test:
  services:
    - postgres:14.2
  cache:
    key:
      files:
        - requirements.txt
      prefix: ${CI_JOB_NAME}
    paths:
      - venv
      - .cache/pip
  stage: test
  script:
    - python -m venv venv
    - . venv/bin/activate
    - pip install --upgrade pip
    - pip install -r requirements.txt
    - python manage.py test

Explanation:

  • image: python:3.10. By default, GitLab runs CI/CD pipeline on shared runners hosted by GitLab using the docker executor. Here we specify docker image for the executor.
  • stages:. A pipeline can have several stages. Now we have only the test stage. In the future, we will have three stages: test, build, and deploy. GitLab will execute stages in the specified order.
  • services: postgres:14.2. GitLab will run PostgreSQL in a separate container during the test stage. So, our Django application will be able to run DB-related tests.
  • variables: POSTGRES_PASSWORD, POSTGRES_DB, DATABASE_URL - environment variables for both the Django and PostgreSQL docker containers.
  • cache: key:. Here we cache the result of the pip install command to speed up pipeline executions. If requirements.txt hasn't changed since the last run, GitLab downloads the cached venv and .cache/pip directories.
  • script: ... lists the commands to execute. GitLab executes them one by one. If a command returns a non-zero exit code, GitLab interrupts the pipeline and marks it Failed. If all commands succeed, GitLab marks the current stage as Successful and goes to the next stage.
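The DATABASE_URL above follows the usual postgres://user:password@host:port/dbname scheme, where the postgres hostname resolves to the CI service container. A quick sketch of how it decomposes, using only the Python standard library (in a real project a helper such as dj-database-url typically does this parsing):

```python
from urllib.parse import urlsplit

# The connection string from .gitlab-ci.yml.
url = urlsplit("postgres://postgres:password@postgres:5432/django_aws")

print(url.username)           # postgres
print(url.password)           # password
print(url.hostname)           # postgres (the name of the CI service container)
print(url.port)               # 5432
print(url.path.lstrip("/"))   # django_aws
```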

Now, push the changes and take a look at the CI/CD tab of your GitLab Django project.

$ git add .
$ git commit -m "add gitlab-ci; add test"
$ git push

Tests are running
Tests have passed

Stage: Build

Tests pass, so we move on to the build stage. At this stage, we need to connect our GitLab account with our AWS account to grant GitLab access to the ECR repository. For this, we'll create a separate AWS user gitlab with limited permissions. Let's go to the AWS IAM console and create a new user.

You can find instructions on how to create an AWS user here.

Add the AmazonEC2ContainerRegistryPowerUser policy to this user to enable read and write access to any ECR repository in this account. Proceed to the final step of user creation and save ACCESS_KEY_ID and SECRET_ACCESS_KEY.

Now go to the GitLab group settings and add the AWS_ACCOUNT_ID, AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID, and AWS_DEFAULT_REGION variables. The GitLab runner will use these credentials for AWS CLI calls.

Make sure to mask sensitive variables AWS_ACCOUNT_ID, AWS_SECRET_ACCESS_KEY, and AWS_ACCESS_KEY_ID to hide their values in GitLab logs. Take a look at the AWS deployment documentation page for more information.

Then add the build block to .gitlab-ci.yml:

stages:
  - test
  - build

variables:
  ...
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""
  AWS_REGISTRY_URL: "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${CI_PROJECT_NAME}:latest"

test:
  ...

build:
  stage: build
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  services:
    - docker:20.10-dind
  before_script:
    - aws ecr get-login-password | docker login --username AWS --password-stdin $AWS_REGISTRY_URL
  script:
    - docker pull $AWS_REGISTRY_URL || true
    - docker build --cache-from $AWS_REGISTRY_URL -t $AWS_REGISTRY_URL .
    - docker push $AWS_REGISTRY_URL
  only:
    - main

Explanation:

  • image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest. This image allows using AWS CLI commands during CI/CD.
  • services: docker:20.10-dind. We use docker-in-docker to build the docker image inside the docker executor. Docker starts as a separate service, so we can access the docker daemon from the GitLab job.
  • variables: DOCKER_HOST, DOCKER_TLS_CERTDIR. Specify the path to the docker-in-docker daemon and disable TLS connection.
  • AWS_REGISTRY_URL. Construct the ECR URL from the project name. Ensure that the ECR repo and the GitLab project have the same name.
  • before_script: aws ecr get-login-password. Log in to ECR.
  • docker pull $AWS_REGISTRY_URL || true. Pull the current image to use it as a build cache. The || true keeps the job going on the first run, when no image exists yet.
  • docker push $AWS_REGISTRY_URL. Upload the image to ECR.
  • only: main. We build a docker image for the main branch only.
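To make the AWS_REGISTRY_URL interpolation concrete, here is how it expands with placeholder values (the account ID below is an example, not a real one):

```shell
# Example values; in CI these come from the GitLab group variables.
AWS_ACCOUNT_ID="123456789012"
AWS_DEFAULT_REGION="us-east-2"
CI_PROJECT_NAME="django-aws-backend"

AWS_REGISTRY_URL="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${CI_PROJECT_NAME}:latest"
echo "$AWS_REGISTRY_URL"
# 123456789012.dkr.ecr.us-east-2.amazonaws.com/django-aws-backend:latest
```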

Commit your changes and check the CI/CD pipeline:

$ git add .
$ git commit -m "add build stage"
$ git push

Build passed

Stage: Deploy

The build stage passed; now we'll deploy the container from ECR to ECS.

But let's start with running migrations. We want to run migrations as a separate ECS task to avoid any side effects on the web application. Go to the infrastructure project, make the following changes in ecs.tf, and apply them:

# Production cluster
...

locals {
  container_vars = {
    region = var.region

    image     = aws_ecr_repository.backend.repository_url
    log_group = aws_cloudwatch_log_group.prod_backend.name

    rds_db_name  = var.prod_rds_db_name
    rds_username = var.prod_rds_username
    rds_password = var.prod_rds_password
    rds_hostname = aws_db_instance.prod.address
  }
}

# Backend web task definition and service
resource "aws_ecs_task_definition" "prod_backend_web" {
  ...
  container_definitions = templatefile(
    "templates/backend_container.json.tpl",
    merge(
      local.container_vars,
      {
        name       = "prod-backend-web"
        command    = ["gunicorn", "-w", "3", "-b", ":8000", "django_aws.wsgi:application"]
        log_stream = aws_cloudwatch_log_stream.prod_backend_web.name
      },
    )
  )
  ...
}
...
# Cloudwatch Logs
...
resource "aws_cloudwatch_log_stream" "prod_backend_migrations" {
  name           = "prod-backend-migrations"
  log_group_name = aws_cloudwatch_log_group.prod_backend.name
}

# Migrations

resource "aws_ecs_task_definition" "prod_backend_migration" {
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = 256
  memory                   = 512

  family = "backend-migration"
  container_definitions = templatefile(
    "templates/backend_container.json.tpl",
    merge(
      local.container_vars,
      {
        name       = "prod-backend-migration"
        command    = ["python", "manage.py", "migrate"]
        log_stream = aws_cloudwatch_log_stream.prod_backend_migrations.name
      },
    )
  )
  depends_on         = [aws_db_instance.prod]
  execution_role_arn = aws_iam_role.ecs_task_execution.arn
  task_role_arn      = aws_iam_role.prod_backend_task.arn
}

Here we've moved the variables shared by the migration and web containers into container_vars. Then we've created a separate task definition and log stream for the migration container. Now we can start a task with the specified task definition to apply migrations for every release.

Now let's create a deployment script. Return to the Django project and create a scripts/deploy.sh file

cd ../django-aws-backend
mkdir scripts 
touch scripts/deploy.sh 
chmod +x scripts/deploy.sh

with the following content:

#!/bin/bash

set -e

# Collect ECS_GROUP_ID and PRIVATE_SUBNET_ID for running migrations
echo "Collecting data..."
ECS_GROUP_ID=$(aws ec2 describe-security-groups --filters Name=group-name,Values=prod-ecs-backend --query "SecurityGroups[*][GroupId]" --output text)
PRIVATE_SUBNET_ID=$(aws ec2 describe-subnets  --filters "Name=tag:Name,Values=prod-private-1" --query "Subnets[*][SubnetId]"  --output text)

echo "Running migration task..."
# Construct NETWORK_CONFIGURATION to run the migration task
NETWORK_CONFIGURATION="{\"awsvpcConfiguration\": {\"subnets\": [\"${PRIVATE_SUBNET_ID}\"], \"securityGroups\": [\"${ECS_GROUP_ID}\"],\"assignPublicIp\": \"DISABLED\"}}"
# Start the migration task
MIGRATION_TASK_ARN=$(aws ecs run-task --cluster prod --task-definition backend-migration --count 1 --launch-type FARGATE --network-configuration "${NETWORK_CONFIGURATION}" --query 'tasks[*][taskArn]' --output text)
echo "Task ${MIGRATION_TASK_ARN} running..."
# Wait for the migration task to complete
aws ecs wait tasks-stopped --cluster prod --tasks "${MIGRATION_TASK_ARN}"

echo "Updating web..."
# Updating web service
aws ecs update-service --cluster prod --service prod-backend-web --force-new-deployment  --query "service.serviceName"  --output json

echo "Done!"

You can check the docs for run-task, wait tasks-stopped, and update-service commands.
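The hand-rolled NETWORK_CONFIGURATION string in deploy.sh is easy to break with a stray quote or bracket. As an alternative sketch (the function name here is ours, not part of the AWS CLI), the same JSON can be generated safely:

```python
import json

def network_configuration(subnet_id: str, security_group_id: str) -> str:
    """Build the --network-configuration value for `aws ecs run-task`."""
    return json.dumps({
        "awsvpcConfiguration": {
            "subnets": [subnet_id],
            "securityGroups": [security_group_id],
            "assignPublicIp": "DISABLED",
        }
    })

# Example IDs; the real ones come from the describe-subnets and
# describe-security-groups calls in deploy.sh.
print(network_configuration("subnet-0123456789abcdef0", "sg-0123456789abcdef0"))
```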

Run this script locally to check it:

$ ./scripts/deploy.sh
Collecting data...
Running migration task...
Task arn:aws:ecs:us-east-2:947134793474:task/prod/dcc06d7ec1ac4bb69bba445565eddf8b running...
Updating web...
"prod-backend-web"
Done!

We've successfully run this script as the admin user. GitLab CI/CD will use the gitlab user, so we need to grant it all required permissions.

Let's create a new policy, gitlab-deploy-ecs, and attach it to the gitlab user. Go to the IAM Console, select the "Users" tab and click on the gitlab user.

Next, click on "Add inline policy" and add the JSON policy definition. Use your own AWS_ACCOUNT_ID instead of the 947134793474 number.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups",
                "ecs:UpdateService",
                "ecs:DescribeTasks"
            ],
            "Resource": ["*"]
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": [
                "arn:aws:iam::947134793474:role/ecs-task-execution",
                "arn:aws:iam::947134793474:role/prod-backend-task"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "ecs:RunTask",
            "Resource": "arn:aws:ecs:us-east-2:947134793474:task-definition/backend-migration*"
        }
    ]
}

Explaining the policies:

  • ec2:DescribeSubnets, ec2:DescribeSecurityGroups — for "Collecting data..." stage.
  • ecs:RunTask, iam:PassRole — for running a migration ECS task.
  • ecs:DescribeTasks — for waiting until the migration ECS task finishes.
  • ecs:UpdateService — for updating the ECS Django web application.
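As a quick sanity check before pasting the policy (a hypothetical helper, not an AWS tool), you can confirm that the policy document grants every action deploy.sh actually calls:

```python
import json

# The inline policy from above, with the example account ID.
POLICY = """
{
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["ec2:DescribeSubnets", "ec2:DescribeSecurityGroups",
                    "ecs:UpdateService", "ecs:DescribeTasks"],
         "Resource": ["*"]},
        {"Effect": "Allow",
         "Action": "iam:PassRole",
         "Resource": ["arn:aws:iam::947134793474:role/ecs-task-execution",
                      "arn:aws:iam::947134793474:role/prod-backend-task"]},
        {"Effect": "Allow",
         "Action": "ecs:RunTask",
         "Resource": "arn:aws:ecs:us-east-2:947134793474:task-definition/backend-migration*"}
    ]
}
"""

# Actions used by deploy.sh and the migration task.
needed = {"ec2:DescribeSubnets", "ec2:DescribeSecurityGroups", "ecs:RunTask",
          "iam:PassRole", "ecs:DescribeTasks", "ecs:UpdateService"}

# "Action" may be a single string or a list, so normalize before collecting.
granted = set()
for statement in json.loads(POLICY)["Statement"]:
    action = statement["Action"]
    granted.update([action] if isinstance(action, str) else action)

missing = needed - granted
print(missing or "policy covers all required actions")
```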

Click "Review Policy", name the policy gitlab-deploy-ecs, and click "Create Policy". Now the gitlab user will be able to execute the deploy.sh script.

Add the deploy stage to .gitlab-ci.yml:

...

stages:
  - test
  - build
  - deploy

...

deploy:
  stage: deploy
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  script:
    - ./scripts/deploy.sh
  only:
    - main

At the deploy stage, we simply run the deploy.sh script. We use the aws-base image to have access to the AWS CLI.

Finally, let's make a small change to Django to see that the application updates automatically. Let's change the Django Admin header text. Add this line to django_aws/urls.py:

admin.site.site_header = "Django AWS Admin Panel"

Now commit your changes and watch the pipeline run.

$ git add .
$ git commit -m "add deploy stage"
$ git push

Check your admin page at prod-57218461274.us-east-2.elb.amazonaws.com/admin and verify the new title. It can take some time for the ECS service to update.

Also, do not forget to push infrastructure code to GitLab.

Congratulations! We've successfully set up CI/CD with GitLab for our Django web application. Now we can commit our code to the main branch, and GitLab CI/CD will automatically test, build, and deploy it on AWS.

But the prod-57218461274.us-east-2.elb.amazonaws.com domain isn't user-friendly :) Also, we need to secure the connection between the user and the Django application with an SSL certificate. In the next part, we'll connect a Namecheap domain to AWS and set up an SSL certificate for it.

You can find the source code of backend and infrastructure projects here and here.

If you need technical consulting on your project, check out our website or connect with me directly on LinkedIn.

Top comments (9)

Antoine • Edited

Tutorial over! Looking forward to seeing part 4. Will it be possible with this part to easily do the same thing with another domain than Namecheap, like GoDaddy for example?

When do you think part 4 will be posted, if you have visibility on it? 🙌🏼

Yevhen Bondar

I'm going to publish part 4 today.

another domain than namecheap like godaddy for example ?

Yes, sure. You only need to specify the AWS nameservers for your domain. If your domain provider has a Terraform module for NS management, you will be able to do it with Terraform.

Otherwise you can specify the nameservers manually.

Antoine

Pretty cool, thank you! Is it good practice to add nginx with FARGATE?

Yevhen Bondar

I think you don't need an Nginx server between the load balancer and the Django application.

If you want to check some rules you can use Load Balancer rules:

docs.aws.amazon.com/elasticloadbal...
registry.terraform.io/providers/ha...

Antoine • Edited

Hello, first of all thank you for your tutorial, it's awesome!!

I encountered a blockage during CI/CD on the build side. It fails with this error:
An error occurred (IncompleteSignatureException) when calling the GetAuthorizationToken operation:.....
Error: Cannot perform an interactive login from a non TTY device

I searched everywhere to know where it came from but without success.

Yevhen Bondar

Hello, thank you!

I think this will solve your problem stackoverflow.com/a/61854312/8153147

So, instead of

aws ecr get-login-password | docker login --username AWS --password-stdin $AWS_REGISTRY_URL

use

docker login -u AWS -p $(aws ecr get-login-password) $AWS_REGISTRY_URL

Antoine

Thanks for your response.

I tried this but I get :
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://registry-1.docker.io/v2/": unauthorized: incorrect username or password

Yevhen Bondar

I think you need to return to this code: aws ecr get-login-password | docker login --username AWS --password-stdin $AWS_REGISTRY_URL. Your variant fails because the aws ecr get-login-password command prints nothing to stdout.

Maybe you have invalid credentials in the GitLab variables? Can you run the aws ecr get-login-password command locally with the AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID credentials you use in GitLab? You can set credentials locally in ~/.aws/credentials.

Antoine

I made it work by adding the creds directly in the project, not in the group.