AWS Copilot Saves Developers

If you’re a developer building containerized applications on AWS, managing all the infrastructure and deployment tasks can be complex and time-consuming. From setting up clusters to configuring load balancers and scaling, there is a lot to consider.

That’s why you need AWS Copilot.

If you would like to follow along hands-on with me, you can watch the video below. If you want to learn more, I would be happy if you subscribed to my channel.

To use AWS Copilot, you need to understand the concepts below.

  • Application  — an Application is a collection of services and environments.
  • Environment  — an Environment is a specific configuration of an Application, such as staging or production. Each Environment has its own set of resources, and Copilot manages them separately from other Environments.
  • Service  — a Service in Copilot represents a single component of your App, such as a web server. Once you deploy a Service to an Environment, Copilot builds the image, pushes it to Amazon ECR, and sets up the required infrastructure for running the containers.
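
These three concepts map directly onto the CLI. As a small illustration (assuming Copilot is already installed and an application exists), you can list each level with:

copilot app ls   # applications in your account and region
copilot env ls   # environments in the current application
copilot svc ls   # services in the current application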

Let’s use AWS Copilot to create a Django website running on an AWS Fargate cluster, plus a CI/CD pipeline using AWS CodePipeline.

Prerequisites

  1. AWS CLI v2 and AWS Copilot are both installed.
  2. AWS credentials are configured correctly.
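
A quick way to confirm both prerequisites from the terminal (a sketch; the versions on your machine will differ):

aws --version
copilot --version
aws sts get-caller-identity   # confirms your credentials are picked up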

Prepare the initial code

  1. Go to the django-start branch of my GitHub repository and clone it to your local machine.
git clone -b django-start https://github.com/harryzhou1987/startquick-aws-copilot-django.git
  2. Replace the Django secret key in the settings.py file, which is in django-project/mysite/mysite.
SECRET_KEY = '[Your Own Django Secret]'
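
If you don’t already have a secret to put there, one option is to generate one with Django’s own helper (a sketch; it assumes Python and Django are available locally, for example via pip install django):

python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"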
  3. Run Docker Compose to set up a local development environment.
cd startquick-aws-copilot-django
docker compose up -d --build
  4. Go to http://localhost:8080 in your web browser and check that the local environment is up.
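
If you prefer the terminal, a quick sketch to confirm the container is answering requests:

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080   # a 200 means the site is responding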

Set up and Deploy the containerized service on AWS Fargate using AWS Copilot

  1. Initialize an App.
copilot app init
  2. Create an Environment named test.
copilot env init
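
If you prefer to skip the interactive prompts, the same step can be done with flags (a sketch; adjust the profile name to your own AWS CLI profile):

copilot env init --name test --profile default --default-config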
  3. Check the manifest file for the environment and update it with VPC and subnet information. Add the certificate ARN for the domain you are going to use. You can also use your own existing VPC and subnets; the sketch after the deploy command shows how to look the IDs up. Then deploy the test environment using the command below.
copilot env deploy --name test
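
In case you need them for the manifest above, the VPC, subnet, and certificate values can be looked up with the AWS CLI (a sketch; adjust the filters and queries to your account):

# List VPCs and the subnets inside the one you want to import
aws ec2 describe-vpcs --query 'Vpcs[].{ID:VpcId,CIDR:CidrBlock}' --output table
aws ec2 describe-subnets --filters Name=vpc-id,Values=[VPC ID] \
    --query 'Subnets[].{ID:SubnetId,AZ:AvailabilityZone,Public:MapPublicIpOnLaunch}' --output table

# Find the ARN of the ACM certificate issued for your domain
aws acm list-certificates --query 'CertificateSummaryList[].{Domain:DomainName,Arn:CertificateArn}' --output table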
  4. Once that is done, a new VPC has been created. We need an RDS instance for this project. You can create the database using either the AWS console or the commands below. You can also use your own database if you have one, but make sure you set the correct environment variables.
# Create a subnet group for RDS
aws rds create-db-subnet-group \
    --db-subnet-group-name [Subnet Group Name] \
    --db-subnet-group-description "DB subnet group for private subnets" \
    --subnet-ids [Private Subnet ID1] [Private Subnet ID2] ...

# Create security group for RDS
aws ec2 create-security-group \
    --group-name [Security Group Name] \
    --description "Security group for database instance in the private subnets" \
    --vpc-id [VPC ID]

# Here you can record the security group ID or check the security group ID via AWS console.

# Create ingress rule for the security group
aws ec2 authorize-security-group-ingress \
    --group-id [DB Security Group - Output of above command] \
    --protocol tcp \
    --port 3306 \
    --source-group [Service Security Group - Copilot created already]

# Create RDS instance
aws rds create-db-instance \
    --db-instance-identifier [DB Instance Name] \
    --db-instance-class db.t2.micro \
    --engine mysql \
    --engine-version 8.0 \
    --allocated-storage 20 \
    --master-username dbuser \
    --master-user-password SecretPassword \
    --db-subnet-group-name [Subnet Group Name] \
    --vpc-security-group-ids [DB Security Group] \
    --db-name djangodb \
    --no-multi-az
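
Creating the instance takes a few minutes. A small sketch to wait for it and then fetch the endpoint you will need later as DB_HOST:

# Block until the instance is available, then print its endpoint
aws rds wait db-instance-available --db-instance-identifier [DB Instance Name]
aws rds describe-db-instances --db-instance-identifier [DB Instance Name] \
    --query 'DBInstances[0].Endpoint.Address' --output text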
  5. When the DB instance is ready, start the service in the test environment.
copilot init
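
For reference, the same step can be run without prompts (a sketch only; the service name and Dockerfile path here are assumptions, so adjust them to the repository layout):

copilot init --app [App Name] --name django-web \
    --type "Load Balanced Web Service" \
    --dockerfile ./Dockerfile   # path is an assumption; point it at the project's Dockerfile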

This first deployment should fail because the load balancer alias is missing. You need to add the section below to the environments part of the service’s manifest file.

environments:
  test:
    http:
      alias: # The "test" environment imported a certificate.
        - name: "[Domain Name for the Service]"
          hosted_zone: [Hosted Zone for your Domain]
  6. Add variables for the service in its manifest file. Refer to the Docker Compose file for the values. DB_HOST needs to be the endpoint of your RDS instance.
variables:
  DB_HOST: [Endpoint - RDS instance]
  DB_NAME: djangodb
  DB_USER: dbuser
  DB_PASSWORD: SecretPassword
  ALLOWEDSOURCE: 0.0.0.0
  DEBUG: true
  7. Add ALLOWED_HOSTS in your Django project’s settings.py file. I use the below for the test environment.
ALLOWED_HOSTS = [
    ip_address,
    'localhost',
    'www.cloudcracker.click' # Replace this with your own
]
  8. Run the command below to deploy the service again.
copilot svc deploy --env test
  9. Go to the site URL in your web browser and check that the site is up. You can also inspect the deployment from the terminal, as sketched below.
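
A quick way to inspect the service from the CLI (a sketch; use whatever service name you chose during copilot init):

copilot svc status --name [Service Name] --env test   # health, task count, recent events
copilot svc show --name [Service Name]                # configuration, routes, variables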

Create Deployment Pipeline

  1. Check out a new Git branch if needed.
git checkout -b "new-branch"
  2. Run the commands below to build the pipeline. The pipeline uses AWS CodeBuild and AWS CodePipeline.
copilot pipeline init
git add copilot/ && git commit -m "Adding pipeline artifacts" && git push
copilot pipeline deploy
  3. During the deployment, you need to authorize AWS to access your code repository.
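
Once the pipeline is created, you can keep an eye on it from the CLI as well (a sketch):

copilot pipeline ls       # pipelines in the current application
copilot pipeline status   # stage-by-stage status of the deployed pipeline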

Final Test

Assuming your Docker Compose environment is still up:

  1. Do your dev work locally and confirm it at http://localhost:8080.
  2. Push the code to your remote repository and wait until the automatic deployment is done.

Easy as! You don’t need to manually create the infrastructure for your containerized service; AWS Copilot does everything for you. You can check CloudFormation to see which resources it created on your behalf.

Feedback

I might have missed something. Please leave a comment with any questions. Thank you.

Harry@NZ
