How to Run a Python Flask App in Docker with Gunicorn, Nginx, Redis, Celery and Crontab

If you have built a Python Flask application that needs Redis and cron jobs, and you want to host it with Docker, this post shows how to set everything up so the app runs smoothly with Nginx as a reverse proxy and Gunicorn as the application server.

This post assumes you already know how to build applications with Python Flask. It also assumes the app uses a remote database server (MySQL).

You can check my previous post on how to Build a User Authentication API using Python Flask and MySQL.

Challenges with Docker
There can only be one CMD instruction in a Dockerfile, but our application uses Celery and Redis to handle a queue and also needs to run cron jobs. Keeping all of these background processes alive inside a single Docker container can be tricky.

You can use a start.sh entrypoint script:

FROM python:3.12-slim

# cron is not preinstalled in the slim image
RUN apt-get update && apt-get install -y --no-install-recommends cron \
    && rm -rf /var/lib/apt/lists/*

COPY app_process app_process
COPY bin/crontab /etc/cron.d/crontab
RUN chmod +x /etc/cron.d/crontab
RUN crontab /etc/cron.d/crontab
COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD /start.sh

The start.sh script could be:

#!/bin/bash

# turn on bash's job control
set -m

# Start the primary process and put it in the background
gunicorn --bind 0.0.0.0:5000 wsgi:app --log-level=debug --workers=2 &

# cron
cron -f &

# celery, also in the background (otherwise fg below is never reached)
celery -A myapp.celery worker --loglevel=INFO &

# now we bring the primary process back into the foreground
# and leave it there
fg %1


Alternatively, you can chain multiple commands to start all the services in a single CMD instruction:


CMD gunicorn --bind 0.0.0.0:5000 wsgi:app --log-level=debug --workers=2 & cron -f & celery -A myapp.celery worker --loglevel=INFO

You can also use supervisord to manage the processes.

# syntax=docker/dockerfile:1
FROM python:3.12-slim
RUN apt-get update && apt-get install -y --no-install-recommends supervisor cron \
    && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY app_process app_process
COPY bin/crontab /etc/cron.d/crontab
RUN chmod +x /etc/cron.d/crontab
RUN crontab /etc/cron.d/crontab
CMD ["/usr/bin/supervisord"]

Your supervisord config could look something like this:

[supervisord]
nodaemon=true
user=root

[program:celeryworker]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
command=celery -A myapp.celery worker --loglevel=INFO
autostart=true
autorestart=true

[program:myapp_gunicorn]
command=gunicorn --bind 0.0.0.0:5000 wsgi:app --log-level=debug --workers=2
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:cron]
command=cron -f -L 15
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0

The issue with any of the above approaches is that you become responsible for monitoring each service and recovering it when it fails. For example, cron could stop running while the main app keeps working, and you would need a way to restart cron without restarting the whole container.
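One way to at least surface such a failure is a Docker HEALTHCHECK (a minimal sketch, assuming the single-container image above and that the procps package is installed so pgrep is available):

# mark the container unhealthy if cron or gunicorn has died
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD pgrep cron > /dev/null && pgrep -f gunicorn > /dev/null || exit 1

This only reports the failure; something outside the container still has to act on the unhealthy state.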

It’s best practice to separate areas of concern by using one service per container.

Using Multiple Containers

You can use multiple containers to run the different services. In this solution I used:

  • one container for the Flask app,
  • one container for the Redis service,
  • one container for the cron jobs and Celery (the queue service), using Supervisord to manage Celery.

Note: you can move Celery (the queue service) into its own container as well if you want to.
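The celery -A myapp.celery worker command used throughout this post assumes the Flask package exposes a Celery instance wired to Redis. A minimal sketch (the broker URL assumes the redis service name from the docker-compose file later in this post):

# myapp/__init__.py
from celery import Celery
from flask import Flask

app = Flask(__name__)

# "redis" resolves to the Redis container on the docker-compose network
celery = Celery(
    app.import_name,
    broker="redis://redis:6379/0",
    backend="redis://redis:6379/0",
)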

The Dockerfile for the Flask App

FROM python:3.11.4-slim-bullseye

# set work directory
WORKDIR /app

# set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

ARG UID=1000
ARG GID=1000

RUN apt-get update \
  && apt-get install -y --no-install-recommends build-essential default-libmysqlclient-dev default-mysql-client curl libpq-dev pkg-config \
  && rm -rf /var/lib/apt/lists/* /usr/share/doc /usr/share/man \
  && apt-get clean

# RUN useradd -m python
# RUN chown -R python:python /app

# USER python


# Copy the requirements file first so the dependency layers are cached
COPY requirements/main.txt requirements/main.txt

# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements/main.txt

COPY . /app/

RUN pip install -e .


CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--worker-tmp-dir", "/dev/shm", "--workers", "2", "--worker-class", "gevent", "--worker-connections", "1000", "wsgi:app", "--log-level", "debug"]
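The gunicorn commands in this post all reference wsgi:app, which assumes a wsgi.py entry point at the project root; a minimal sketch:

# wsgi.py -- assumes the Flask instance is created in the myapp package
from myapp import app

if __name__ == "__main__":
    app.run()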

The Dockerfile for Crontab and Celery

FROM python:3.11.4-slim-bullseye

# set work directory
WORKDIR /cronapp/


# set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

ARG UID=1000
ARG GID=1000

RUN apt-get update \
  && apt-get install -y --no-install-recommends supervisor build-essential default-libmysqlclient-dev default-mysql-client curl cron libpq-dev pkg-config \
  && rm -rf /var/lib/apt/lists/* /usr/share/doc /usr/share/man \
  && apt-get clean

# RUN useradd -m python
# RUN chown -R python:python /app
# USER python


COPY requirements/main.txt requirements/main.txt

# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements/main.txt

COPY . /cronapp/

RUN pip install -e .

# Setup cronjob
RUN touch /var/log/cron.log 

# Copy the crontab file (path relative to the build context, as with supervisord.conf below)
COPY services/cron/bin/crontab /etc/cron.d/crontab
RUN chmod +x /etc/cron.d/crontab


# run the crontab file
RUN crontab /etc/cron.d/crontab

RUN mkdir -p /var/log/supervisor

COPY services/cron/bin/supervisord.conf /etc/supervisor/conf.d/supervisord.conf

# CMD ["/usr/bin/supervisord", "-n"]

CMD cron -f & /usr/bin/supervisord -n

The Supervisord config

[supervisord]
nodaemon=true
user=root

[program:celeryworker]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
command=celery -A myapp.celery worker --loglevel=INFO
autostart=true
autorestart=true


Sample crontab

SHELL=/bin/bash
PATH=/usr/local/bin:/usr/local/sbin:/usr/sbin:/usr/bin:/sbin:/bin

# notify users every day at 1:05 AM
5 1 * * * flask --app myapp notify-users >> /var/log/cron.log 2>&1

For this approach to work, your app has to be structured using the package pattern (the same structure as in the previous post).

This way, you can run a function of your app from the command line like below:

flask --app myapp notify-users

Remember to register custom commands with the @app.cli.command decorator so that a function can be run from the command line.

Example:

from myapp import app
from myapp.models.user import User
from myapp.queue.sendmail import send_email_to_user

@app.cli.command('notify-users')
def notify_users():
    offset = 0
    limit = 100
    users = (
        User.query.filter(User.is_verified == 1)
        .order_by(User.created_at.desc())
        .limit(limit)
        .offset(offset)
    )

    for user in users:
        send_email_to_user(user)

Nginx Dockerfile

FROM nginx:1.23-alpine

RUN rm /etc/nginx/nginx.conf
COPY nginx.conf /etc/nginx/

RUN rm /etc/nginx/conf.d/default.conf
COPY myapp.conf /etc/nginx/conf.d/


CMD ["nginx", "-g", "daemon off;"]
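The nginx.conf and myapp.conf files copied above are your server configs. A minimal sketch of what myapp.conf could look like, assuming the Flask container is reachable as backend on port 5000 (the service name used in the docker-compose file below):

# myapp.conf -- minimal reverse-proxy config
server {
    listen 80;
    server_name _;

    location / {
        # "backend" is the docker-compose service name of the Flask container
        proxy_pass http://backend:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}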

You can now use Docker Compose to manage all the containers.

Sample docker-compose.yml

version: "3.8"

services:
  backend:
    container_name: "app"
    build:
      context: .
      args:
        - "UID=-1000"
        - "GID=-1000"
        - "FLASK_DEBUG=false"
    volumes:
      - .:/app
    ports:
      - "5000:5000"
    env_file:
      - ".env"
    restart: "-unless-stopped"
    stop_grace_period: "2s"
    tty: true
    deploy:
      resources:
        limits:
          cpus: "-0"
          memory: "-0"
    depends_on:
      - "redis"
    profiles: ["myapp"]

  cron:
    container_name: "cron"
    build:
      context: .
      dockerfile: ./services/cron/Dockerfile
      args:
        - "UID=-1000"
        - "GID=-1000"
    env_file:
      - ".env"
    restart: "-unless-stopped"
    stop_grace_period: "2s"
    tty: true
    deploy:
      resources:
        limits:
          cpus: "-0"
          memory: "-0"
    depends_on:
      - "redis"
    volumes:
      - .:/cronapp/
    profiles: ["myapp"]

  redis:
    deploy:
      resources:
        limits:
          cpus: "-0"
          memory: "-0"
    image: "redis:7.0.5-bullseye"
    restart: "-unless-stopped"
    stop_grace_period: "3s"
    command: "redis-server --bind redis --maxmemory 256mb --maxmemory-policy allkeys-lru --appendonly yes"
    volumes:
      - "./redis:/data"
    profiles: ["redis"]

  nginx:
    container_name: "nginx"
    build:
      context: ./services/nginx
    restart: "-unless-stopped"
    stop_grace_period: "2s"
    tty: true
    deploy:
      resources:
        limits:
          cpus: "-0"
          memory: "-0"
    ports:
      - "80:80"
    depends_on:
      - "backend"
    volumes:
      - .:/nginx/
    profiles: ["nginx"]

Enter fullscreen mode Exit fullscreen mode
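The env_file entries above expect a .env file at the project root. A minimal sketch (the variable names here are only examples; use whatever your app actually reads):

# .env
FLASK_DEBUG=false
SECRET_KEY=change-me
DATABASE_URL=mysql://user:password@db-host:3306/myapp
CELERY_BROKER_URL=redis://redis:6379/0
CELERY_RESULT_BACKEND=redis://redis:6379/0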

You can now start your application and all services by running

docker compose up --detach --build backend redis cron nginx
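To confirm everything is running, check the container states and tail a service's logs:

docker compose ps
docker compose logs -f cron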
