In this article, we will learn about Elastic Beanstalk and its capabilities. We will understand the problem that Elastic Beanstalk solves and the steps required to set up and deploy a Django app with Elastic Beanstalk.
We will also be using Docker for containerizing the app and Nginx as a reverse proxy server. Additionally, we will learn how to connect an RDS instance with our Beanstalk application.
What is Elastic Beanstalk?
Elastic Beanstalk (EB) is a managed AWS service that allows you to easily upload, deploy, and manage your applications. It handles provisioning the resources the application needs, such as EC2 instances, CloudWatch for logs, Auto Scaling Groups, Load Balancers, Databases, and Proxy Servers (Nginx, Apache) - all of which are customizable.
Setting up the bare project
To focus on the main theme of this article, Elastic Beanstalk, we will be cloning a very simple Django app where we make a POST request to add some inventory data into our database and a GET request to retrieve it. Head over to this GitHub repository and clone the app.
Setting up Docker
We will be using Docker for containerization. This is to maintain consistency between the development and production environments and eliminate all forms of "but it works on my computer" problems.
For this, you should already have Docker and docker-compose installed.
Writing our Dockerfile
Let us create a Dockerfile in the root directory and copy the following into it.
FROM python:3.9-bullseye
ARG PROJECT_DIR=/home/app/code
ENV PYTHONUNBUFFERED 1
WORKDIR $PROJECT_DIR
RUN useradd non_root && chown -R non_root $PROJECT_DIR
RUN python -m pip install --upgrade pip
COPY requirements.txt $PROJECT_DIR
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . $PROJECT_DIR
RUN python manage.py collectstatic --noinput
EXPOSE 8000
USER non_root
What are we doing here?
We specify our base image as python:3.9-bullseye with the FROM instruction. This is, to put it simply, a Debian OS with Python 3.9 installed.
We create a new directory ($PROJECT_DIR) into which we will move our Django application code.
We create a new user and give it ownership of the application code directory (this follows Docker security best practices).
We install the app dependencies from requirements.txt and run collectstatic to collect all our static files into the directory we have already defined in our settings.py file as STATIC_ROOT (a sketch of those settings follows this list).
We EXPOSE port 8000 of the container. The proxy server we will create will forward requests to this port.
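For reference, the static files configuration that collectstatic relies on would look roughly like the sketch below. The exact paths are an assumption based on the container's /home/app/code working directory; check the repo's settings.py for the real values.
# ebs_django/settings.py (sketch, not necessarily the repo's exact code)
import os
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

# URL prefix Nginx will match on, and the directory collectstatic writes to
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')  # resolves to /home/app/code/static in the container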
Writing our docker-compose file
Now, let us add a compose file. Create a docker-compose.yml file in the root directory and add the following.
version: '3.6'
services:
  web:
    build:
      context: .
    command: sh -c "python manage.py migrate &&
      gunicorn ebs_django.wsgi:application --bind 0.0.0.0:8000"
    volumes:
      - static_volume:/home/app/code/static
    env_file:
      - .env
    image: web
    restart: "on-failure"
  nginx:
    build:
      context: ./nginx
    ports:
      - 80:80
    volumes:
      - static_volume:/home/app/code/static
    depends_on:
      - web
volumes:
  static_volume:
We define two services here:
WEB: This is our application. It will be built from the instructions defined in our Dockerfile. Once built, we run manage.py migrate to create our database schema from our migration files, then bind Gunicorn to port 8000 of the container. Gunicorn will serve our Django app through it.
NGINX: This is the web server we will be using as our reverse proxy. (A reverse proxy server is a server that sits in front of our application and routes incoming requests from external sources (users) to our provisioned servers.) We will also use Nginx to serve our static files, hence the binding to the static_volume volume, which will be populated by our web container when we run collectstatic.
Next, create a folder named nginx and add a Dockerfile inside with the following:
FROM nginx:1.23.3
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d/nginx.conf
This is the Dockerfile our Nginx container will be created from. It just defines a base image, deletes the default.conf provided by Nginx, and adds a new one named nginx.conf, which we will create now.
Create the file and copy the following into it:
server {
    listen 80 default_server; # default external port. Anything coming from port 80 will go through NGINX
    location / {
        proxy_pass http://web:8000;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
    location /static/ {
        alias /home/app/code/static/; # Our static files
    }
}
What is happening here?
The Nginx server listens on the machine's port 80 for incoming requests.
Requests received are forwarded to our Django (gunicorn) server.
We define another location directive for /static/, so that Nginx directly serves our static files, as we mentioned earlier.
NOTE: We manually set up Nginx as a container since EB does not create an Nginx proxy server for us when we use a docker-compose.yml file. If we were to use a Dockerfile only, EB would provision an Nginx server for us.
Installing necessary dependencies
We need to install some packages for our Docker container. Since we will be using Gunicorn to serve our Django app and PostgreSQL as our database, we need to install psycopg2-binary as well as gunicorn. You can do that with the following command in your activated virtual environment:
(ebsenv) ebs-django-docker-tutorial git:(master) pip install psycopg2-binary gunicorn
Run pip freeze to update your requirements.txt file.
(ebsenv) ebs-django-docker-tutorial git:(master) pip freeze > requirements.txt
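After this, your requirements.txt should list Django, Gunicorn, and psycopg2-binary among its entries; the versions below are only placeholders, since the exact output of pip freeze depends on your installation.
Django==4.1.5
gunicorn==20.1.0
psycopg2-binary==2.9.5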
Integrating Elastic Beanstalk
First off, let's understand two terms:
Environment: This is a collection of all the AWS resources provisioned by EB to run your application code.
Application: This is a collection of environments, application code versions, and environment configurations. You can have multiple environments in a single application. For example, you could decide to have a production environment running a specific application code version, and a QA environment running another application code version.
Alright, now, back to the code. EB provides us with several tools we can use to configure our environments and deploy our applications.
EB CLI
AWS SDK (Boto for python)
EB Console
We will be using a mix of the CLI and the EB console. To install the CLI, use the command below inside your virtual environment.
(eb_python_testenv) EBS-TEST git:(master) pip install awsebcli
Head over to the AWS console to get your AWS credentials (aws_access_key, aws_secret_access_key), rename the .env.example file to .env, and paste your credentials there.
We can run eb init to initialize some configurations (platform, region) for our application.
(eb_python_testenv) EBS-TEST git:(master) eb init eb-docker-rds-django
By default, EB provisions the resources in your application inside the us-west-2 (US West (Oregon)) region; you can choose a different region by picking its corresponding number in the CLI. For this tutorial, we will be using us-east-1. When asked if you are using Docker, choose yes, then pick Docker when asked to select a platform. When prompted to use CodeCommit, pick no; CodeCommit is a version control service and we are already using GitHub. When asked to set up SSH, choose no, as we will not be needing it here.
Once that's done, you will see a new .elasticbeanstalk folder created for you with a config.yml file inside. It defines some configurations for when we eventually create our environment.
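As an illustration, the generated config.yml will look roughly like the sketch below; the exact keys and values depend on the choices you made during eb init, so treat this as an approximation rather than a file to copy.
branch-defaults:
  master:
    environment: null
global:
  application_name: eb-docker-rds-django
  default_platform: Docker
  default_region: us-east-1
  profile: eb-cli
  workspace_type: Application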
Commit your changes with git add . and git commit -m "your commit message".
Next, we create our environment with the command:
(ebsenv) ebs-django-docker-tutorial git:(master) eb create eb-docker-rds-django
eb-docker-rds-django is our environment name. Wait a couple of minutes while EB provisions your resources. Once that is done, we can head over to the AWS EB console to see the status of our environment, or we could use eb status in our terminal directly.
As you can see, the health of our app is currently red. This is because, in our settings.py file, we are attempting to retrieve credentials for a database we do not have yet. You can confirm this by typing eb logs in your terminal and scrolling through the logs.
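For context, the database configuration in the cloned repo reads these credentials from environment variables, roughly like the sketch below (the RDS_* names match the keys we set later in this article; check the repo's settings.py for the exact code).
# ebs_django/settings.py (sketch)
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('RDS_DB_NAME'),
        'USER': os.environ.get('RDS_USERNAME'),
        'PASSWORD': os.environ.get('RDS_PASSWORD'),
        'HOST': os.environ.get('RDS_HOSTNAME'),
        'PORT': os.environ.get('RDS_PORT', '5432'),
    }
}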
Let us create an AWS Relational Database Service (RDS) for our EB environment.
Head over to the AWS RDS console, click on Databases, and click Create database. Next:
Choose the Standard create option.
Choose PostgreSQL as your database engine.
Under Credentials settings, type in a master username and a master password (write them down somewhere).
Scroll to Additional configuration, expand it, and under Initial database name, type the name you want to use for your application database (write it down).
Leave everything else the same and click Create database.
Once that is created successfully, we need to edit the security groups to let the EC2 instances of our EB environment access our RDS instance. To do this:
Click on the newly created database, scroll down to Security group rules, and click on the inbound security group. Click on the Inbound rules tab and then on Edit inbound rules.
For Type, choose PostgreSQL.
For Source, we will choose the security group attached to our EC2 instances by EB. To find it, head to the Auto Scaling Groups (ASG) service, click on your provisioned ASG, scroll to Launch configuration, and click on the security group. On the new page, you will see the security group ID of your ASG; copy it and paste it into the Source search bar.
Click Save rules.
We have created our database; now we need to add the environment variables inside our EB environment. EB provides several ways to do that, including the CLI with eb setenv key=value or directly through the console. Let us head over to the EB console to do this. Under Environments, click on your environment. On the left sidebar, click Configuration. Under Software, click Edit, scroll down to Environment properties, and add your RDS credentials. Also add your DJANGO_SECRET_KEY.
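If you prefer the CLI route, the same variables can be set in a single command; substitute your own values (the variable names below match the .env keys used throughout this article):
(ebsenv) ebs-django-docker-tutorial git:(master) eb setenv RDS_HOSTNAME=<your-rds-endpoint> RDS_DB_NAME=<your-db-name> RDS_USERNAME=<your-username> RDS_PASSWORD=<your-password> RDS_PORT=5432 DJANGO_SECRET_KEY=<your-secret-key>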
You can find your database hostname under the connectivity tab of the RDS instance.
Next, we need to add our domain into the allowed_hosts settings of Django. Head over to the EB console, click on your environment and grab the URL.
Go to your settings.py file and update the ALLOWED_HOSTS list.
# ebs_django/settings.py
# looks something like eb-docker-rds-django.xxx-xxxxxxxx.us-east-1.elasticbeanstalk.com
ALLOWED_HOSTS = ['YOUR_ENVIRONMENT_URL']
Commit your changes with git add . and git commit -m "your commit message", then deploy the new version with:
(ebsenv) ebs-django-docker-tutorial git:(master) eb deploy
That's it. Head over to Postman to test the app.
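If you would rather test from the terminal instead of Postman, a quick check against the deployed environment could look like the commands below; the /inventory-item path is the endpoint we use later in this article for local testing, so adjust it if the repo's URL configuration differs.
# add an inventory item (empty POST), then retrieve what is stored
curl -X POST http://YOUR_ENVIRONMENT_URL/inventory-item
curl http://YOUR_ENVIRONMENT_URL/inventory-item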
It works, yay! I guess we're done... well, not quite. The EB CLI provides us with the tools necessary to run our environment locally, which is usually great during development. We can do this with the command eb local run. Unfortunately, eb local run currently has no support for reading docker-compose.yml files, so what's a workaround?
Docker Compose for Development
We can simply create a new docker-compose file to simulate our EB environment. Luckily, in our current setup, we really just need to add a database and we are set. Let's do that.
Create a new docker-compose file called docker-compose.dev.yml. This will be used specifically for development purposes.
version: '3.6'
services:
  web:
    build:
      context: .
    command: sh -c "python manage.py makemigrations && python manage.py migrate &&
      gunicorn ebs_django.wsgi:application --bind 0.0.0.0:8000 --reload" # updated command
    volumes:
      - ./:/home/app/code/ # new volume
      - static_volume:/home/app/code/static
    env_file:
      - .env
    image: web
    depends_on:
      - db
    restart: "on-failure"
  nginx:
    build:
      context: ./nginx
    ports:
      - 80:80
    volumes:
      - static_volume:/home/app/code/static
    depends_on:
      - web
  db: # new db service
    image: postgres:15-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${RDS_PASSWORD}
      - POSTGRES_DB=${RDS_DB_NAME}
      - POSTGRES_USER=${RDS_USERNAME}
    ports:
      - 5432:5432
volumes:
  pgdata: # new line
  static_volume:
There are a few differences in this compose file compared with our initial one:
The command in our web service now includes a --reload flag. This ensures Gunicorn reloads the server after source code changes.
A new volume in web to make changes in our local files reflect in our container.
A new db service.
A new pgdata volume to make sure our db data persists.
Update your .env file to include the credentials for our development DB.
# Develop
DJANGO_SECRET_KEY='secret_key'
DEBUG="1"
ALLOWED_HOSTS=*
# --- ADDED LINES -----
RDS_HOSTNAME=db
RDS_DB_NAME='test'
RDS_PASSWORD='test123'
RDS_USERNAME='test'
RDS_PORT=5432
# ---------------------
aws_access_key=AKIA ************* BXE
aws_secret_access_key=bheyut ******************* ywtRuUfChDL5r
Start the development server with
ebs-django-docker-tutorial git:(master) docker-compose -f docker-compose.dev.yml up --build
Update the ALLOWED_HOSTS in your settings.py file to include 127.0.0.1. Now head over to Postman and test the development server with an empty POST request at http://127.0.0.1/inventory-item.
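The updated setting would then look something like this:
# ebs_django/settings.py
ALLOWED_HOSTS = ['YOUR_ENVIRONMENT_URL', '127.0.0.1']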
That's it.
Conclusion
You now have a fair understanding of Elastic Beanstalk. In this article you have learned:
What Elastic Beanstalk is
How to integrate Nginx as a reverse proxy for your web server
How to deploy your Django app into a beanstalk docker environment
How to integrate an RDS instance into your EB environment.
How to mimic your remote EB environment into your development environment when using docker-compose.
If you found this article useful or learned something new, consider leaving a thumbs up and following me to keep up-to-date with any recent postings!
Till next time, happy coding!
Levi