
Jorge PM


A journey through Django and Docker: hands-on production deployment principles and ideas

The main objective of this tutorial is to give you an idea of some of the things involved in pushing a Python web app to production using Docker.

The final code can be found in this repository in case you get stuck.
https://github.com/zom-pro/django-docker

Introduction

To follow this tutorial/guide you need a basic knowledge of Django (how to create apps, settings, etc.), Docker and Linux.
It isn't really a step-by-step tutorial, more of a guide. Something to read with a cup of tea rather than when trying to make something work for tomorrow's deadline.

This tutorial covers a variety of activities related to deploying a containerised Django web application to production. It's not meant to be the finished product, rather an introduction to put you on the right path. This is the article I would have liked to find and read back in the day when I was trying to get my head around "real" deployments.

Most of the time, hello-world tutorials are too focused on development, and not enough emphasis is given to the requirements of a more productionised (real-world?) environment. This is of course a huge topic and I'm only scratching the surface. Also, this article is mostly based on localhost (on your own machine) development. This is to reduce the complexity and the need for AWS, Heroku, etc. accounts. However, if you don't want to use a VM and would rather use an EC2 instance in AWS, it should be relatively simple as long as you have SSH access to it.

All the links I refer to are alternatives to the specific information; they are just the results of my (biased) Google search.

You can always drop me a comment if anything needs more detail/clarification, etc. (or if you find an error). Remember this is just one possible implementation, so if you disagree with something, drop me a message as well. Across my career, I've learned a lot from friendly and productive discussions.

Sections index

Copy a section title and use your browser's find to jump to it.

  • Install Ubuntu server and nginx in a Virtualbox virtual machine
  • Initial docker development and configuration
  • Push container to repository so it can be pulled in production
  • Some docker production configuration

Install Ubuntu server and nginx in a Virtualbox virtual machine

Download Ubuntu Server (https://ubuntu.com/download/server) and install it in a VirtualBox virtual machine with default settings. If you haven't done this before, don't worry: nowadays it's pretty much keeping the defaults all the way through. To allow connectivity between your machine and the VM, use a bridged adapter as the network setting.

Tip: because Ubuntu Server doesn't have a graphical environment, I couldn't get copy/paste to work, so I installed an SSH server to ease the usage (allow copy/paste, etc.): https://linuxize.com/post/how-to-enable-ssh-on-ubuntu-18-04/. I then SSH in from my local terminal, where I have everything I need.
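Roughly, the linked post boils down to something like this (a sketch; the ssh line assumes your own VM user and bridged IP):

 # Inside the VM: install the OpenSSH server
 sudo apt update
 sudo apt install openssh-server
 sudo systemctl status ssh   # should report active (running)

 # From the host machine, using the VM's bridged IP
 ssh your_user@192.168.0.x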

To add some security (something you will definitely need in production), we will block the ports we aren't going to use. A quick and easy way to secure your VM is Uncomplicated Firewall (ufw): https://www.digitalocean.com/community/tutorials/how-to-set-up-a-firewall-with-ufw-on-ubuntu-18-04.
The configuration I used for ufw allows HTTP, HTTPS and SSH. In a more productionised environment, you would benefit from having a VPN and allowing SSH through your VPN, or a configuration based on a bastion host. Both topics are out of the scope of this tutorial.
To configure ufw, use these commands (192.168.0.16 is my host machine's IP; replace it with yours).

 sudo ufw allow from 192.168.0.16 to any port 22
 sudo ufw allow from 192.168.0.16 to any port 443
 sudo ufw allow from 192.168.0.16 to any port 80
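One thing that is easy to miss: the allow rules only take effect once the firewall itself is enabled. Something like:

 # Enable ufw after the rules are in place (make sure the SSH rule is
 # there first, or you can lock yourself out of the VM)
 sudo ufw enable
 sudo ufw status verbose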

Tip: DigitalOcean has great tutorials, and they are a good alternative for a VM in the cloud (normally cheaper and simpler than AWS, though it will depend on which service you're using).

Now we will install nginx in the VM. This is a straightforward apt installation: https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-18-04.
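For reference, the installation itself boils down to a couple of commands:

 sudo apt update
 sudo apt install nginx
 systemctl status nginx   # should report active (running)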

There’s a lot of fine tuning that can be done in nginx but it is out of the scope of this tutorial.

Tip: why not run nginx as a container? You can! But in this case I chose not to because I didn't feel my requirements needed it. These are the kinds of decisions you need to make based on your functional and non-functional requirements. For example, a not perfectly optimised container-based nginx could have performance issues:
https://stackoverflow.com/questions/49023800/performance-issues-running-nginx-in-a-docker-container

I have made the mistake of rushing to containerise everything under the sun when some things, like web servers and databases, were perfectly fine running directly on the OS. Every case will be different, and as long as the decision was made based on the requirements available at that point, correctly documented and analysed (lots of research into different alternatives!), then it's the right decision (because you didn't have a better one). If you feel you couldn't really get to the bottom of it, just review it later on.

Once you have installed nginx, configure a self-signed certificate: https://www.humankode.com/ssl/create-a-selfsigned-certificate-for-nginx-in-5-minutes. This step will allow you to use HTTPS (port 443).
Following the steps in the wizard (previous link), my certificate configuration looks like this:

[req]
default_bits = 2048
default_keyfile = localhost.key
distinguished_name = req_distinguished_name
req_extensions = req_ext
x509_extensions = v3_ca
[req_distinguished_name]
countryName = Country Name (2 letter code)
countryName_default = UK
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = London
localityName = Locality Name (eg, city)
localityName_default = Rochester
organizationName = Organization Name (eg, company)
organizationName_default = localhost
organizationalUnitName = organizationalunit
organizationalUnitName_default = Development
commonName = Common Name (e.g. server FQDN or YOUR name)
commonName_default = localhost
commonName_max = 64
[req_ext]
subjectAltName = @alt_names
[v3_ca]
subjectAltName = @alt_names
[alt_names]
DNS.1 = vmlocalhost
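To actually generate the key and certificate from that configuration, the linked guide uses an openssl command along these lines (the localhost.conf filename is an assumption; the output paths here match the nginx config further down):

 # Generate a self-signed certificate/key pair using the config above
 sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
   -keyout /etc/ssl/private/localhost.key \
   -out /etc/ssl/certs/localhost.crt \
   -config localhost.conf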

As you can see, I called my hostname (in the VM) vmlocalhost, so change the hostname to that:
https://linuxize.com/post/how-to-change-hostname-on-ubuntu-18-04/
Also, you want to configure your host machine so it maps the vmlocalhost hostname to the local IP (192.168.0.x) that has been assigned to your VM. This is something you wouldn't have to do if you were using a real certificate, of course.
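In practice that means something like this (a sketch; replace 192.168.0.x with your VM's actual IP):

 # Inside the VM: set the hostname
 sudo hostnamectl set-hostname vmlocalhost

 # On the host machine: map the hostname to the VM's IP
 echo "192.168.0.x vmlocalhost" | sudo tee -a /etc/hosts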

Once you have installed the certificate, configure nginx to redirect to 443:
https://serverfault.com/questions/67316/in-nginx-how-can-i-rewrite-all-http-requests-to-https-while-maintaining-sub-dom
My configuration looks like this (at this point! The final version can be found in the repository):

server {
  listen 80;
  server_name vmlocalhost;
  return 301 https://$server_name$request_uri;
}
server {
  listen 443 ssl default_server;
  listen [::]:443 ssl default_server;
  ssl_certificate /etc/ssl/certs/localhost.crt;
  ssl_certificate_key /etc/ssl/private/localhost.key;

  ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
  root /var/www/html;
  index index.html index.htm index.nginx-debian.html;
  server_name vmlocalhost;
  location / {
    try_files $uri $uri/ =404;
  }
}

Initial docker development and configuration

Head back to your host to start the development lifecycle. The idea is that you write your code locally, and when you're happy with the code, you build a container locally. Once you're happy with the container, you push it to a repository and pull it on the other side (in this case the VM). Normally, you would handle this with some kind of CI/CD such as CircleCI (another very important thing to do in production but out of the scope of this tutorial).

Create a Django application in localhost (our dev environment) and test it. You might also want to add a very simple application like the one I have in my GitHub repository to ease debugging, etc. Once you can see the Django rocket or your application, let's get the container running. You need to get familiar with Dockerfiles, but they are fairly intuitive. Mine looks like this:

FROM python:3.8.2-slim-buster
RUN apt update && apt -y upgrade
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY . /code
RUN pip install -r requirements.txt
RUN chmod +x /code/docker-entrypoint.sh
ENTRYPOINT [ "/code/docker-entrypoint.sh" ]

Store it in a file called Dockerfile at the root of the project (if in doubt, have a look at the repository) and use it by running:

sudo docker build -t djangodocker .
sudo docker run --rm -it -p 8000:8000 djangodocker
Enter fullscreen mode Exit fullscreen mode

All you need in the requirements.txt file right now is Django. Your docker-entrypoint.sh will have the instruction to run Django (and later gunicorn), so:

python manage.py runserver 0.0.0.0:8000

Tip: I came back to this the day after and started having issues with the container not wanting to run. To debug it, I followed these instructions: https://serverfault.com/questions/596994/how-can-i-debug-a-docker-container-initialization
The docker events command told me that there was nothing wrong and I was being silly: the entrypoint was what kept the container from just ending, and since I had removed it, the container had nothing to do and was terminating correctly…
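The approach from that answer, roughly:

 # Stream daemon events in the background, then start the container;
 # the output shows create/start/die events with exit codes
 sudo docker events &
 sudo docker run --rm -p 8000:8000 djangodocker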

At this point the image is 232 MB, and I'm happy with that. Alpine is a great alternative to Ubuntu/Debian if you want something smaller, but make sure all your dependencies can be installed in Alpine before committing to it. Image size can be a defining factor in some environments but again, that should be decided based on the project's requirements.

Now, to show you a container-related development activity (making something work in the container that isn't directly related to your code), let's get Gunicorn up and running in the container (if you haven't heard of Gunicorn, now is a good moment to learn about it and about application servers in general). So imagine your requirements didn't include gunicorn yet (they do in the repository version, of course) and you want to get it running by hand before wiring it into the entrypoint.

Go into the Dockerfile, remove the entrypoint, build the image again and run it, but this time in interactive mode (this will get you a bash instance inside the container):

docker run -it --rm -p 8000:8000 djangodocker bash

Then install and run gunicorn:

pip install gunicorn
gunicorn -b 0.0.0.0:8000 django_docker.wsgi

Tip: Gunicorn needs a couple of tweaks to run well inside Docker (otherwise you can end up waiting on unresponsive workers); this article explains why and what to change: https://pythonspeed.com/articles/gunicorn-in-docker/
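The short version, as I understand that article: gunicorn keeps worker heartbeat files on disk, and a slow or blocking filesystem can stall the workers, so it suggests pointing the heartbeat directory at shared memory. Something like this (the flag is the article's suggestion, not necessarily what the repository version uses):

 gunicorn -b 0.0.0.0:8000 --worker-tmp-dir /dev/shm django_docker.wsgi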

Now, in our entrypoint, we replace the Django runserver command with the gunicorn one (best to replace what was there before). Here django_docker is your project (inside that folder you should find a wsgi.py file if you are at your project root). With this command, we are telling gunicorn to bind to the port we want. If you need anything more complicated than this, you can create a configuration file: https://docs.gunicorn.org/en/stable/configure.html
The final entrypoint looks like this:

#!/bin/bash

# Collect static files
echo "Collect static files"
python manage.py collectstatic --noinput

# Apply database migrations
echo "Apply database migrations"
python manage.py migrate

# Start server
echo "Starting server"
gunicorn -b 0.0.0.0:8000 --workers=2 --threads=4 --worker-class=gthread django_docker.wsgi

I'm collecting static files and migrating here, which is not recommended for production (see more at the end of the article about this), but it works for us just now because we destroy the DB every time and we only deploy one of these containers at a time.

Now you should have a working Docker container in your local (dev) environment.

Push container to repository so it can be pulled in production

This is the last step of our development cycle. You can use Docker Hub to push your image:
https://ropenscilabs.github.io/r-docker-tutorial/04-Dockerhub.html

You need to create a repository in Docker Hub. It is free for a single private repository.

In your localhost, run (replacing zompro with your own username):

sudo docker login --username=zompro
sudo docker tag djangodocker:latest zompro/djangodocker-repo:latest
sudo docker push zompro/djangodocker-repo:latest

This will log you in to the hub with your credentials (it will ask for your password), tag (name) your local image (normally you would use this to manage versions) and push it to the repository.

If you haven't done it yet, install the Docker daemon in the virtual machine: https://docs.docker.com/install/linux/docker-ce/ubuntu/
Once you have the Docker daemon running, run in your VM:

sudo docker login --username=zompro
sudo docker pull zompro/djangodocker-repo
sudo docker run --rm -it -p 8000:8000 zompro/djangodocker-repo

Make sure your VM's IP (or, in my case, the vmlocalhost hostname we configured earlier) is in ALLOWED_HOSTS in your Django settings.

If you haven't done it already, we will now configure nginx to act as a proxy pass for gunicorn and to serve our static files.

Add this to the nginx configuration (inside the 443 server block):

 location /static {
   alias /var/www/static/;
 }
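For the proxy pass itself, the location / block in the same server needs to point at gunicorn on port 8000. A minimal sketch (check the repository for the final version):

 location / {
   proxy_pass http://127.0.0.1:8000;
   proxy_set_header Host $host;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_set_header X-Forwarded-Proto $scheme;
 }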

And when running the container, link a volume (make sure the static directory exists first) so that when collectstatic runs in your container, the files are available in that directory for nginx to serve:

docker run --rm -d -p 8000:8000 -v /var/www/static:/code/static zompro/djangodocker-repo

Reload the page and keep an eye on the nginx access log; you should see nginx serving your static files:

tail -f /var/log/nginx/access.log

For example, in my case it looks like the line below (the first IP is my local machine's, the one that requested the page), and you can see the static file being served from our vmlocalhost:

192.168.0.16 - - [29/Apr/2020:07:40:04 +0000] "GET /static/admin/fonts/Roboto-Light-webfont.woff HTTP/1.1" 304 0 "https://vmlocalhost/static/admin/css/fonts.css" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.113 Safari/537.36"

Some docker production configuration

This is a big topic, so I just want to mention a couple of things.

So far so good, but we are still running in debug mode, and we want to control which environment (prod, dev) we are configuring our application for. We want all the configuration for Docker to come from external variables (rather than having one container for prod and a different one for dev, which would nullify one of the biggest advantages of using Docker).

Tip: this might be a good moment to branch out to the 12-factor app guidelines (whether you agree with them or not, you should understand them, and if you do disagree, do so from an informed position).

Let's change our Django settings a bit to use environment variables. Add at the top (remember you can always check the file in my GitHub repository for the final piece):

import os

ENVIRONMENT = os.getenv("ENVIRONMENT")

and then you can do something like this (this is a simple version; it can be a lot more elegant):

DEBUG = False
if ENVIRONMENT == "development":
    DEBUG = True

ALLOWED_HOSTS = ["vmlocalhost"]
if ENVIRONMENT == "development":
    ALLOWED_HOSTS.append("127.0.0.1")

Tip: one thing I like doing is always defaulting to the less dangerous case. Here, for example, you must explicitly force the system into development "mode" to enable debugging. So if you forget to set it (or bad documentation means you don't know you have to), it will not open a potential security issue by default. Lots of systems come with very weak default configurations (admin for login and password is a classic), so watch out.

After making these changes to the settings.py file, if we rebuild the container and run it again in our localhost (sudo docker run --rm -it -p 8000:8000 djangodocker), we should get a 400 error because our dev environment is not in the allowed hosts (this is simulating your development environment). To avoid this error, pass the environment variable as part of the command. We do something similar with the secret key (which should never be stored in your repository):

docker run --env ENVIRONMENT=development --env SECRET_KEY=super_secret_random_key --rm -it -p 8000:8000 djangodocker
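On the settings.py side, the secret key is then read from the environment in the same way as ENVIRONMENT (a minimal sketch; the final file is in the repository):

 # Read the secret key from the environment instead of hardcoding it
 SECRET_KEY = os.getenv("SECRET_KEY")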

Tip: normally, you would handle this with a .env file and docker-compose. I intentionally left docker-compose out of this tutorial because I think it's important to get a good grasp of the docker command before moving on, but it's probably the next place you want to go.

Some final tips

One thing to consider for a production environment is that you probably (it always depends on your use case) want to run migrations "manually", meaning via a docker exec command. The reason: imagine a scenario where you're running an external DB (not a SQLite inside the container), which is very likely to be your case, and you have more than one redundant container starting up, again a likely scenario. If both containers try to run migrations when they start up at the same time, you will have a mess on your hands. There are many other things to consider on the Django side, such as handling the secret key properly, but I had to limit the scope somewhere.
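Running them manually looks something like this (the container name is whatever docker ps reports for your running container):

 # Find the running container's name or ID
 sudo docker ps

 # Run the migrations once, by hand, inside it
 sudo docker exec -it <container_name> python manage.py migrate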

Many (I mean many) things were left out of this tutorial. Among them: docker-compose (as mentioned before), restart policies (should you be using just Docker to control your services or something else like systemd?), Docker networks (just as there are volumes and containers, you create networks between them too), etc. Once you have climbed those mountains you will realise (or at least I did) that they were a hill and you are now looking at the Andes. Among the highest peaks you will find microservices architectures and their complexity, Kubernetes and many other super cool technologies. My advice: don't start climbing any of these unless the need is clear and in front of you. If you rush to implement them, you will dig a hole of technical debt from which you might not come out.

I hope you enjoyed this. I certainly enjoyed making it. Remember, if it feels too complicated right now, you will be laughing at how simple and silly it is six months from now if you persevere. At some point, we were all scratching our heads without understanding what was going on.
