Hector Castro

Originally published at hector.dev

Twelve-Factor Methodology Applied to a Django App

In the past few weeks, I’ve participated in a handful of DevOps/Site Reliability Engineer (SRE) interviews. Several interviewers have asked for guidelines on configuring and operating cloud-native applications. My mind immediately goes to the Twelve-Factor App methodology, originally created by the folks who built Heroku—one of the first publicly accessible platforms as a service (PaaS).

Combined, the twelve factors serve to abstract applications from the infrastructure they run on, paving the way for configurability, scalability, and reliability. To illustrate how this works in practice, I set up a Django application and use it to explain how each factor applies. I hope you find it useful!


Note: The code snippets in the following sections do not chain together perfectly. The snippets are there primarily to help communicate what’s going on in ways that only code can.

Codebase

A codebase is the complete source material of a given software program or application. Its structure will vary based on technology, but for a Django application called mysite created with django-admin startproject, it looks like this once placed under version control with Git:

$ git init
Initialized empty Git repository in /home/hector/Projects/django-blog/.git/
$ git add .
$ git status
On branch master

No commits yet

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)
        new file: .gitignore
        new file: Pipfile
        new file: Pipfile.lock
        new file: mysite/manage.py
        new file: mysite/mysite/__init__.py
        new file: mysite/mysite/asgi.py
        new file: mysite/mysite/settings.py
        new file: mysite/mysite/urls.py
        new file: mysite/mysite/wsgi.py
        new file: setup.cfg

Excellent—we have ourselves a codebase! We’ll gradually cover converting codebases into deploys in the following sections.

Dependencies

Applications have dependencies. 12 Factor wants us to explicitly declare these dependencies so they can be managed in a repeatable way. The first step toward achieving this happens with a Pipfile. It was created by a Python dependency management tool called pipenv after the following commands were run:

pipenv install django~=3.1
pipenv install black --dev --pre # --pre is needed because of black's versioning scheme
pipenv install flake8~=3.8 --dev
pipenv install isort~=5.7 --dev

The inside of a Pipfile is written in Tom’s Obvious Minimal Language (TOML) and contains a manifest of the Python dependencies needed for a project:

[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
django = "~=3.1"

[dev-packages]
black = "*"
flake8 = "~=3.8"
isort = "~=5.7"

[requires]
python_version = "3.8"

[pipenv]
allow_prereleases = true

Nowadays, we try to take this a step further by capturing all the necessary application dependencies in a container image. In most cases, the pursuit of creating a container image leads to using Docker, which implies the addition of a Dockerfile:

FROM python:3.8

ENV PYTHONUNBUFFERED=1

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

COPY ./Pipfile* ./
RUN pip install pipenv
RUN pipenv install --system --deploy --ignore-pipfile
COPY ./mysite .

ENTRYPOINT ["python", "manage.py"]

To make sure things are in working order, we can build and test the container image using the following commands. Here, the runserver argument launches the Django development server:

$ docker build -t mysite .
$ docker run --rm mysite runserver
Watching for file changes with StatReloader
Performing system checks...

System check identified no issues (0 silenced).

You have 18 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
Run 'python manage.py migrate' to apply them.
March 01, 2021 - 20:45:33
Django version 3.1.7, using settings 'mysite.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

Looks good! We now have everything needed to spin up the application captured in a container image. In addition, we have all the associated instructions to build the image defined in a declarative way (e.g., Pipfile, Dockerfile).

Config

In the Twelve-Factor world, configuration is defined as anything that can vary between deploys of a codebase. This allows a single codebase to be deployed into different environments without customization. Some examples of configuration include:

  • Connection strings to the database, Memcached, and other backing services.
  • Credentials to external services (e.g., Amazon S3, Google Maps, etc.).
  • Information about the target environment (e.g., Staging vs. Production).

Once we’ve identified the configuration for our application, we need to work toward making it consumable via environment variables. In the example below, we focus on changing the way Django’s SECRET_KEY and DEBUG settings are set in settings.py (the home for all Django configuration settings).

diff --git a/mysite/mysite/settings.py b/mysite/mysite/settings.py
index d541c62..3a99d45 100644
--- a/mysite/mysite/settings.py
+++ b/mysite/mysite/settings.py
@@ -9,7 +9,7 @@ https://docs.djangoproject.com/en/3.1/topics/settings/
 For the full list of settings and their values, see
 https://docs.djangoproject.com/en/3.1/ref/settings/
 """
-
+import os
 from pathlib import Path

 # Build paths inside the project like this: BASE_DIR / 'subdir'.
@@ -20,10 +20,10 @@ BASE_DIR = Path(__file__).resolve().parent.parent
 # See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/

 # SECURITY WARNING: keep the secret key used in production secret!
-SECRET_KEY = "#v5hnkypk39qex@9zb2j2as3n9f7)jgvz05*9t&0@2y$kx$7lw"
+SECRET_KEY = os.getenv("DJANGO_SECRET_KEY", "secret")

 # SECURITY WARNING: don't run with debug turned on in production!
-DEBUG = True
+DEBUG = os.getenv("DJANGO_ENV") == "Development"

 ALLOWED_HOSTS = []

Here, we made use of the Python standard library os module to help us read configuration from the environment. Now, the two settings can be more easily reconfigured across deploys.
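
For settings that must be present in every deploy (like a production secret key), a small helper can make missing configuration fail fast at startup instead of silently falling back to a default. Here is a minimal sketch; the require_env helper is my own addition, not part of the project above:

import os


def require_env(name):
    """Return the value of an environment variable, failing loudly if unset."""
    value = os.getenv(name)
    if value is None:
        raise RuntimeError(f"Required environment variable {name} is not set")
    return value

With this in place, SECRET_KEY = require_env("DJANGO_SECRET_KEY") would refuse to boot a deploy that forgot to set its secret.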

To prove it works, we can change the environment with the -e flag of docker run:

$ docker build -t mysite .
$ docker run --rm \
    -e DJANGO_SECRET_KEY="dev-secret" \
    -e DJANGO_ENV="Development" \
    mysite runserver
Watching for file changes with StatReloader
Performing system checks...

System check identified no issues (0 silenced).

You have 18 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
Run 'python manage.py migrate' to apply them.
March 01, 2021 - 21:25:57
Django version 3.1.7, using settings 'mysite.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
^C

OK. Everything continues to work the way it did before. Now, let’s see what happens if we set DJANGO_ENV=Production, which will cause the DEBUG setting to evaluate to False:

$ docker run --rm \
    -e DJANGO_SECRET_KEY="prod-secret" \
    -e DJANGO_ENV="Production" \
    mysite runserver
CommandError: You must set settings.ALLOWED_HOSTS if DEBUG is False.

Aha! This CommandError looks ominous, but it is an indicator that our DJANGO_ENV change made its way into the application’s execution environment successfully!
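
Incidentally, if we wanted to get past this error while staying true to the config-from-environment approach, ALLOWED_HOSTS itself could come from the environment. A minimal sketch for settings.py, assuming a hypothetical DJANGO_ALLOWED_HOSTS variable that holds a comma-separated list of hostnames:

# Parse a comma-separated DJANGO_ALLOWED_HOSTS variable (hypothetical) into
# the list Django expects; an unset variable yields an empty list.
ALLOWED_HOSTS = [
    host.strip()
    for host in os.getenv("DJANGO_ALLOWED_HOSTS", "").split(",")
    if host.strip()
]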

Backing services

A backing service is any service the application consumes over the network as part of its normal operation. Emphasis is placed on minimizing the distinction between local and third-party backing services such that the application can’t tell the difference between them.

As an example, say you have a PostgreSQL database instance running on your workstation that’s connected to your application to persist data. Later, when it comes time to deploy to production, the same approach to configuring the local PostgreSQL instance should work when it gets swapped out for an Amazon Relational Database Service (RDS) instance.

To achieve this with Django, we need to change the way connectivity to the database is configured. That happens via the DATABASES dictionary in settings.py:

diff --git a/mysite/mysite/settings.py b/mysite/mysite/settings.py
index 3a99d45..fcff52a 100644
--- a/mysite/mysite/settings.py
+++ b/mysite/mysite/settings.py
@@ -75,8 +75,12 @@ WSGI_APPLICATION = "mysite.wsgi.application"

 DATABASES = {
     "default": {
- "ENGINE": "django.db.backends.sqlite3",
- "NAME": BASE_DIR / "db.sqlite3",
+ "ENGINE": "django.db.backends.postgresql",
+ "NAME": os.getenv("POSTGRES_DB"),
+ "USER": os.getenv("POSTGRES_USER"),
+ "PASSWORD": os.getenv("POSTGRES_PASSWORD"),
+ "HOST": os.getenv("POSTGRES_HOST"),
+ "PORT": os.getenv("POSTGRES_PORT"),
     }
 }

Here, we modified DATABASES so that all the necessary settings for the default database are pulled from the environment. Now, it doesn’t matter if the application is launched with HOST equal to localhost or mysite.123456789012.us-east-1.rds.amazonaws.com. In either case, the application should be able to connect to the database successfully using the settings found in the environment.
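
As an aside, a common variation on this theme collapses all five settings into a single DATABASE_URL connection string—a convention Heroku popularized. A sketch using the third-party dj-database-url package (not part of this post’s codebase):

# settings.py: parse a DATABASE_URL environment variable, e.g.,
# postgres://user:password@host:5432/name, into Django's DATABASES setting.
import dj_database_url

DATABASES = {
    "default": dj_database_url.config(default="sqlite:///db.sqlite3"),
}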

Build, release, run

In the Dependencies section, we produced a build in the form of a container image. But we also need a unique label to identify and differentiate between versions of the container image. Uniqueness can come in the form of a timestamp or an incrementing number, but I personally like to use Git revisions. Below is an example that uses the current Git revision to tag a container image:

$ # Get a reference to the latest commit of the current
$ # branch and make it short (only 7 characters long).
$ export GIT_COMMIT="$(git rev-parse --short HEAD)"
$ docker build -t "mysite:$GIT_COMMIT" .
$ docker images | grep mysite
mysite    e87b8c4    4f3dc2772c57    2 minutes ago    978MB

As you can see from the output, the reference mysite:e87b8c4 is unique to the container image we built. If we make additional changes to the codebase and commit them to the underlying Git repository, following these same steps will result in a new container image with a new unique reference.

Next, we need to combine the container image built above with a relevant set of configuration to produce a release. Here, we’ll use a lightweight Docker Compose configuration file to describe the connection between the two (build and config) in a declarative way. In a production system, you’d likely do something similar using a Kubernetes deployment or an Amazon ECS task definition:

version: "3"
services:
  web:
    image: mysite:e87b8c4
    environment:
      - POSTGRES_HOST=mysite.123456789012.us-east-1.rds.amazonaws.com
      - POSTGRES_PORT=5432
      - POSTGRES_USER=mysite
      - POSTGRES_PASSWORD=mysite
      - POSTGRES_DB=mysite
      - DJANGO_ENV=Staging
      - DJANGO_SECRET_KEY=staging-secret
    command:
      - runserver
      - "0.0.0.0:8000"
    ports:
      - "8000:8000"

This bit of Docker Compose configuration ties together the mysite:e87b8c4 build with a set of environment-specific configuration to produce a release. If the container image and Docker Compose configuration snippet are available on the same host, then the application is ready for immediate execution on that host.

Lastly, we have the run stage. For Docker Compose, that’s as simple as using docker-compose up to launch the web service. For a more sophisticated container orchestration system, several more steps would likely be involved:

  • The container image is published to a centrally accessible container registry.
  • The deployment manifest is submitted for evaluation to a container scheduler.
  • Compute is connected to the container scheduler with adequate resources to place instances of the application.

Processes

The Twelve-Factor methodology emphasizes applications as stand-alone processes because when they share nothing, they can more easily scale horizontally. Therefore, it is important to store all dynamic state in a backing service (e.g., a database) so that each process remains stateless.
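
Django makes this straightforward because its session and cache machinery is pluggable. As a sketch, both can be pointed at backing services instead of process memory; the MEMCACHED_LOCATION variable below is a hypothetical addition (and assumes a Memcached client library is installed), not something configured elsewhere in this post:

# settings.py: keep dynamic state out of the process. Sessions live in the
# database (Django's default), and the cache lives in Memcached, addressed
# via a hypothetical MEMCACHED_LOCATION environment variable.
SESSION_ENGINE = "django.contrib.sessions.backends.db"

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
        "LOCATION": os.getenv("MEMCACHED_LOCATION", "127.0.0.1:11211"),
    }
}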

However, sometimes whole components of an application need to be dynamically built, like its associated CSS and JavaScript. To be truly stateless, we want to generate those components during the build phase and capture them in the container image.

Django has several built-in mechanisms to handle static assets, but I prefer to use a third-party library named WhiteNoise, primarily because it helps package both the application and its supporting static assets together in a way that enables thinking about a deploy as an atomic operation.

After installing WhiteNoise using pipenv with a command similar to the one we used in Dependencies to install Django, we need to configure the Django application to use WhiteNoise for static asset management. Here, we inject WhiteNoise into the Django INSTALLED_APPS and MIDDLEWARE hierarchy to take over static asset management in development and non-development environments:

diff --git a/mysite/mysite/settings.py b/mysite/mysite/settings.py
index 216452b..f4e32c6 100644
--- a/mysite/mysite/settings.py
+++ b/mysite/mysite/settings.py
@@ -31,6 +31,7 @@ ALLOWED_HOSTS = []
 # Application definition

 INSTALLED_APPS = [
+ "whitenoise.runserver_nostatic",
     "django.contrib.admin",
     "django.contrib.auth",
     "django.contrib.contenttypes",
@@ -41,6 +42,7 @@ INSTALLED_APPS = [

 MIDDLEWARE = [
     "django.middleware.security.SecurityMiddleware",
+ "whitenoise.middleware.WhiteNoiseMiddleware",
     "django.contrib.sessions.middleware.SessionMiddleware",
     "django.middleware.common.CommonMiddleware",
     "django.middleware.csrf.CsrfViewMiddleware",
@@ -122,3 +124,7 @@ USE_TZ = True
 # https://docs.djangoproject.com/en/3.1/howto/static-files/

 STATIC_URL = "/static/"
+
+STATIC_ROOT = "/static"
+
+STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"

The two settings at the bottom (STATIC_ROOT and STATICFILES_STORAGE) tell Django where to store the collected files on the container image file system and what preprocessing operations to apply.

Next, we need to ensure that Django preprocesses all static assets as part of the container image build process. For Django, that means adding an invocation of the collectstatic command to the container image build instructions:

diff --git a/Dockerfile b/Dockerfile
index 4653278..6420680 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -10,4 +10,6 @@ RUN pip install pipenv
 RUN pipenv install --system --deploy --ignore-pipfile
 COPY ./mysite .

+RUN python manage.py collectstatic --no-input
+
 ENTRYPOINT ["python", "manage.py"]

Statelessness achieved!

Port binding

Now that we have the application source code, dependencies, and supporting static assets inside a container image, we need a way to expose the entirety of it in a self-contained way. Since this is a web application, our goal is to use the HTTP protocol instead of lower-level APIs like CGI, FastCGI, Servlets, etc.

We’ve seen our application bound to a port over HTTP several times already via the docker run invocations above, but they were all using a development-grade HTTP application server (e.g., runserver). How do we achieve something similar in a production-grade way?

Enter Gunicorn and Uvicorn. Gunicorn is a production-grade Python application server for UNIX-based systems, and Uvicorn provides a Gunicorn worker implementation with Asynchronous Server Gateway Interface (ASGI) compatibility.

After installing Gunicorn and Uvicorn using pipenv install, we need to tweak the Docker Compose configuration from Build, release, run to use Gunicorn as the entrypoint. We also add a few command-line options to ensure that the ASGI API is used (between Gunicorn and Django) along with the Uvicorn worker implementation:

diff --git a/docker-compose.yml b/docker-compose.yml
index f5f693d..bac885d 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -20,8 +20,12 @@ services:
     build:
       context: .
       dockerfile: Dockerfile
+    entrypoint: gunicorn
     command:
-      - runserver
-      - "0.0.0.0:8000"
+      - "mysite.asgi:application"
+      - "-b 0.0.0.0:8000"
+      - "-k uvicorn.workers.UvicornWorker"

After all of these changes, Docker Compose should be able to bring the service up bound to port 8000 using Gunicorn:

$ docker-compose up web
Starting django-blog_web_1 ... done
Attaching to django-blog_web_1
web_1 | [2021-03-06 19:57:43 +0000] [1] [INFO] Starting gunicorn 20.0.4
web_1 | [2021-03-06 19:57:43 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
web_1 | [2021-03-06 19:57:43 +0000] [1] [INFO] Using worker: uvicorn.workers.UvicornWorker
web_1 | [2021-03-06 19:57:43 +0000] [8] [INFO] Booting worker with pid: 8
web_1 | [2021-03-06 19:57:43 +0000] [8] [INFO] Started server process [8]
web_1 | [2021-03-06 19:57:43 +0000] [8] [INFO] Waiting for application startup.
web_1 | [2021-03-06 19:57:43 +0000] [8] [INFO] ASGI 'lifespan' protocol appears unsupported.
web_1 | [2021-03-06 19:57:43 +0000] [8] [INFO] Application startup complete.

We can confirm by creating a second terminal session, hitting the /admin/ endpoint, and inspecting the response:

$ http localhost:8000/admin/
HTTP/1.1 302 Found
cache-control: max-age=0, no-cache, no-store, must-revalidate, private
content-length: 0
content-type: text/html; charset=utf-8
date: Sat, 06 Mar 2021 19:59:36 GMT
expires: Sat, 06 Mar 2021 19:59:36 GMT
location: /admin/login/?next=/admin/
referrer-policy: same-origin
server: uvicorn
vary: Cookie
x-content-type-options: nosniff
x-frame-options: DENY

It’s alive!

Concurrency

As load against an application increases, the ability to address it by quickly and reliably adding more stateless processes is desirable. Gunicorn has built-in support for a process-level worker model, but using it to scale an application in cloud-based environments can cause contention with higher-level distributed process managers. This is because both want to manage the processes, but only the distributed process manager has a holistic view of resources across machines. Instead, we can set the number of Gunicorn worker processes low and defer process management to a higher-level supervisor.
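
Since Gunicorn reads its configuration from a plain Python file, this preference is easy to encode. Below is a minimal sketch of a gunicorn.conf.py; the GUNICORN_WORKERS variable is a hypothetical knob that defaults to a single worker so the orchestrator stays in charge of scaling:

# gunicorn.conf.py: keep per-container worker counts low and let the
# distributed process manager scale by adding more containers instead.
import os

bind = "0.0.0.0:8000"
worker_class = "uvicorn.workers.UvicornWorker"
workers = int(os.getenv("GUNICORN_WORKERS", "1"))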

Specifying different process types can’t really be done with Gunicorn either. Usually, that’s more tightly coupled with the container orchestration engine you use. Later on, in Dev/prod parity, we’ll see a Docker Compose configuration with both a database and a web process type. Within a more production-oriented container orchestration system like Kubernetes, you’d achieve something similar by creating separate sets of pods—one for each process type to enable independent scaling.

Disposability

In cloud environments, application disposability is important because it increases agility during releases, scaling events, and failures. An application exhibits disposability when it properly handles certain types of asynchronous notifications called signals. Signals help local supervisory services (e.g., systemd and Kubelet) manage an application’s lifecycle externally.

Gunicorn has built-in support for signal handling. If you use it as your application server, it will automatically handle signals like SIGTERM to facilitate a graceful shutdown of the application.
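
If you ever run a process that Gunicorn doesn’t manage (say, a custom queue worker), the same behavior can be approximated directly with the Python standard library. A minimal sketch of cooperative SIGTERM handling; the work loop is illustrative only:

# worker.py: a long-running process that exits gracefully when the platform
# sends SIGTERM (e.g., during a deploy or scale-down event).
import signal
import time

shutting_down = False


def handle_sigterm(signum, frame):
    # Flip a flag instead of exiting immediately so in-flight work finishes.
    global shutting_down
    shutting_down = True


signal.signal(signal.SIGTERM, handle_sigterm)

while not shutting_down:
    time.sleep(1)  # stand-in for pulling and processing one unit of work

print("Finished in-flight work; exiting cleanly.")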

Dev/prod parity

Configuration allows a single build of a codebase to run locally, in staging, and in production. Leveraging that to maintain parity across environments keeps incompatibilities from cropping up as software is being developed. This results in a higher degree of confidence that the application will function the same way in production as it does locally.

Still, maintaining development and production parity is an ongoing challenge. Much like speed and security, you have to be constantly thinking about it, or else you lose it.

Nowadays, operating system support for namespacing resources through containerization, along with higher-level tooling like Docker and Docker Compose, goes a long way toward making this pursuit easier. As an example, see the following Docker Compose configuration file:

version: "3"
services:
  database:
    image: postgres:12.6
    environment:
      - POSTGRES_USER=mysite
      - POSTGRES_PASSWORD=mysite
      - POSTGRES_DB=mysite

  web:
    image: mysite
    environment:
      - POSTGRES_HOST=database
      - POSTGRES_PORT=5432
      - POSTGRES_USER=mysite
      - POSTGRES_PASSWORD=mysite
      - POSTGRES_DB=mysite
      - DJANGO_ENV=Development
      - DJANGO_SECRET_KEY=secret
      - DJANGO_LOG_LEVEL=DEBUG
    build:
      context: .
      dockerfile: Dockerfile
    entrypoint: gunicorn
    command:
      - "mysite.asgi:application"
      - "-b 0.0.0.0:8000"
      - "-k uvicorn.workers.UvicornWorker"
    ports:
      - "8000:8000"

Within this relatively small file, we have defined all the services needed to run our application locally. Each service (database and web) runs as a separate process within its own container, but the two are networked together. From the perspective of our Django application, this setup differs minimally from a true production container orchestration setup.

Logs

Logs emitted by an application provide visibility into its behavior. However, in cloud environments you cannot reliably predict where your application is going to run. This makes it difficult to get visibility into the application’s behavior—unless you treat application logging as a stream. Treating application logs as a stream makes it easier for other services to aggregate and archive log output for centralized viewing.

Django uses Python’s built-in logging module to perform system logging, which allows it to be set up in some pretty sophisticated ways. However, all we want is for Django to log everything as a stream to standard out. We can make that happen by specifying a custom logging configuration dictionary in settings.py that looks like:

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
        },
    },
    "root": {
        "handlers": ["console"],
        "level": "WARNING",
    },
    "loggers": {
        "django": {
            "handlers": ["console"],
            "level": "WARNING",
            "propagate": False,
        },
    },
}

This configures the parent root logger to send messages with the WARNING level and higher to the console handler (e.g., standard out). It also allows the django logger’s level to be tuned via the DJANGO_LOG_LEVEL environment variable. A dynamic override like this can be extremely helpful when troubleshooting because it allows logging settings to be modified without requiring a new release.

Admin processes

Administrative tasks are essential to every application. It is important for the code associated with them to ship with the application, so that they can be invoked in the same execution environment as the application and avoid synchronization issues.

Most of Django’s supporting administrative tasks, like applying database migrations, sending test emails, and adding users, can already be executed as one-off processes. In addition, Django provides a robust framework for adding more that are specific to your application (e.g., toggling feature flags, orchestrating data imports, etc.).
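
To give a feel for that framework, below is a sketch of a hypothetical custom command; the app name, file path, and flag-toggling behavior are illustrative only:

# mysite/myapp/management/commands/toggleflag.py: a hypothetical one-off
# admin task, runnable with `python manage.py toggleflag <flag_name>`.
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Toggle a named feature flag (illustrative example)."

    def add_arguments(self, parser):
        parser.add_argument("flag_name")

    def handle(self, *args, **options):
        # In a real application, this would flip a record in a backing
        # service so every process sees the change.
        self.stdout.write(self.style.SUCCESS(f"Toggled {options['flag_name']}"))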

Turning back to the built-in tasks: we can apply outstanding database migrations (there should be some for a newly initialized Django project) with the migrate command:

$ docker-compose run --rm --entrypoint "python manage.py" web migrate
Creating django-blog_web_run ... done
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying admin.0003_logentry_add_action_flag_choices... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying auth.0009_alter_user_last_name_max_length... OK
  Applying auth.0010_alter_group_name_max_length... OK
  Applying auth.0011_update_proxy_permissions... OK
  Applying auth.0012_alter_user_first_name_max_length... OK
  Applying sessions.0001_initial... OK

Here, we dynamically override the previously referenced Docker Compose configuration with --entrypoint set to python manage.py instead of gunicorn. We also specify that we want the migrate subcommand to be run. This execution leads to a series of cross-container communications that ensure our database schema aligns with the current state of Django’s data model.


That’s it! Whether you were aware of the 12 Factor methodology before or not, I hope that seeing it applied to a Django application enables you to more easily integrate it with whatever web framework you use. May it lead to more configurable, scalable, and reliable applications. Amen.


Thanks to Dave Konopka for providing thoughtful feedback on my drafts of this post.
