I run Django projects in Docker containers and use Visual Studio Code as my IDE. In this article, I share how I debug and auto-reload both Django and Celery workers. The solutions are based on `debugpy`, `watchdog`, and `django.utils.autoreload`.
In this article:

- Example docker-compose
- Debugging a Django app running in Docker
- Debugging a Django celery worker running in Docker (no auto reload)
- Auto-reloading a Django celery worker running in Docker (no debug)
- Debugging a Django celery worker running in Docker with auto-reload

(toc generated with bitdowntoc)
## Example docker-compose

To better understand the rest of the post, let's assume your `docker-compose.yml` looks similar to this:
```yaml
services:
  web: # Django App
    build: .
    command: ./manage.py runserver 0.0.0.0:80
    volumes:
      - .:/app # mount the code inside the container
    links:
      - postgres

  worker: # Celery Worker
    build: .
    command: >-
      celery -A my.package.worker worker
      -l info --concurrency=6 --queues a,b
    volumes:
      - .:/app
    links:
      - postgres

  postgres: # Database
    image: postgres:15-alpine
    ports:
      - '5432'
    volumes:
      - .data/postgres:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: ...
      POSTGRES_USER: ...
      POSTGRES_PASSWORD: ...
```
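With this layout, a plain `docker compose up --build` brings up all three services; since the code is bind-mounted at `/app`, edits on the host are immediately visible inside the containers:

```sh
# build the images (if needed) and start web, worker, and postgres
docker compose up --build
```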
## Debugging a Django app running in Docker

Live reload is enabled when running `manage.py runserver`, but what about debugging? The easiest way to make breakpoints work is to install `debugpy` in the container and open a remote port for debugging that vscode can connect to.
Since the docker-compose file and the `Dockerfile` are usually version controlled, let's use a `docker-compose.override.yml` for the debug setup. From docker compose's documentation:

> If you don't provide the `-f` flag on the command line, Compose traverses the working directory and its parent directories looking for a `docker-compose.yml` and a `docker-compose.override.yml` file. [...] If both files are present on the same directory level, Compose combines the two files into a single configuration.

(see also Merge and override)
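By the way, if you ever want to check what Compose actually runs after merging the two files, `docker compose config` prints the combined configuration:

```sh
# render the merged result of docker-compose.yml + docker-compose.override.yml
docker compose config
```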
Create a `docker-compose.override.yml` alongside your `docker-compose.yml` with the following:

```yaml
# in docker-compose.override.yml
services:
  web:
    command: >-
      sh -c "pip install debugpy &&
             python -m debugpy --listen 0.0.0.0:3000
             manage.py runserver 0.0.0.0:80"
    ports:
      - 3003:3000 # set 3003 to anything you want
```
Now, when running `docker compose up`, you should see the following in the logs of the `web` container, before the normal startup logs:

```text
web-1  | Collecting debugpy
web-1  |   Downloading debugpy-1.6.7-py2.py3-none-any.whl (4.9 MB)
web-1  | Installing collected packages: debugpy
web-1  | Successfully installed debugpy-1.6.7
```
In vscode, create a new debug configuration, either using the UI (debug > add configuration) or by creating the file `.vscode/launch.json`:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "CHANGEME",
            "type": "python",
            "request": "attach",
            "pathMappings": [
                {
                    "localRoot": "${workspaceFolder}",
                    "remoteRoot": "/app"
                }
            ],
            "port": 3003,
            "host": "127.0.0.1",
            "django": true,
            "justMyCode": false
        }
    ]
}
```
The important things:

- `pathMappings.remoteRoot` should match the folder where your code is mounted in the container
- `port` should match the one you mapped to the container's port `3000`, i.e. the port debugpy listens to
- `justMyCode` determines if breakpoints outside of your code (e.g. in libraries you use) work or not.
With this configuration, you can start a debug session (or hit `F5`) whenever you want, and all your breakpoints should work. Once you are done debugging, simply "detach" the debugger using the detach icon.
ℹ️ if you need to debug something that only happens during startup, pass `--wait-for-client` to debugpy. When set, the Django app won't start until you start the debugger.
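For instance, a sketch of the override from above with the flag added:

```yaml
# in docker-compose.override.yml: the app blocks until vscode attaches
services:
  web:
    command: >-
      sh -c "pip install debugpy &&
             python -m debugpy --listen 0.0.0.0:3000 --wait-for-client
             manage.py runserver 0.0.0.0:80"
    ports:
      - 3003:3000
```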
## Debugging a Django celery worker running in Docker (no auto reload)

For the celery workers, the same principles apply. Simply change the `docker-compose.override.yml` to:
```yaml
services:
  worker:
    command: >-
      sh -c "pip install debugpy &&
             python -m debugpy --listen 0.0.0.0:3000
             /usr/local/bin/celery -A my.package.worker worker
             -l info -P solo --queues a,b"
    ports:
      - 3003:3000
```
Compared to the initial celery command, the big differences are:

- celery must be called using an absolute path (e.g. `/usr/local/bin/celery`) since debugpy looks for scripts in the working directory (in my case `/app`); using just `celery` will raise `No such file or directory: '/app/celery'`. To find the right path, see the snippet after this list.
- passing `-P solo` to celery instead of `--concurrency N` simplifies debugging, as only one celery thread is used.
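If you are unsure where celery is installed in your image, you can ask the container itself; for instance (assuming the service name `worker` from the compose file above):

```sh
# print the absolute path of the celery executable inside the worker image
docker compose run --rm worker which celery
# -> e.g. /usr/local/bin/celery
```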
⚠️ This won't AUTO RELOAD your worker ⚠️ (keep reading 👇)
## Auto-reloading a Django celery worker running in Docker (no debug)

To only auto-reload a celery worker, it is possible to use the awesome utility `watchmedo auto-restart` from the watchdog package. `watchmedo` watches a set of files and/or directories and automatically restarts a process upon file changes.

The `docker-compose.override.yml` becomes:
```yaml
services:
  worker:
    command: >-
      sh -c "pip install 'watchdog[watchmedo]' &&
             python -m watchdog.watchmedo auto-restart
             -d src/ -p '*.py' --recursive
             celery -A my.package.worker worker -l info -P solo --queues a,b"
    ports:
      - 3003:3000
```

(note the single quotes around `'watchdog[watchmedo]'`: nesting double quotes inside the `sh -c "..."` string would break the command)
Common options of `watchmedo` are (a combined example follows the list):

- the `-d` or `--directory` option is the directory to watch. It can be repeated.
- the `-p` or `--patterns` option restricts the watch to the matching files. Use `;` to list multiple patterns, for example `*.py;*.json`.
- the `-R` or `--recursive` option monitors the directories recursively.
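Put together, here is a sketch of a full `watchmedo` invocation combining these options (the directories, patterns, and celery arguments are placeholders to adapt):

```sh
# restart the celery worker whenever a .py or .json file changes
# under src/ or lib/ (both watched recursively)
watchmedo auto-restart \
    --directory src/ --directory lib/ \
    --patterns '*.py;*.json' \
    --recursive \
    -- celery -A my.package.worker worker -l info -P solo --queues a,b
```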
Note that we do not need to specify an absolute path for `celery` anymore, since it is now watchmedo, not debugpy, that spawns the process.
⚠️ The debugger won't work ⚠️ (keep reading 👇)
## Debugging a Django celery worker running in Docker with auto-reload

To make live reload and the debugger work together, combining `debugpy` and `watchmedo` is not enough. I believe it has to do with watchmedo restarting the whole debugpy process on every change, hence losing the connection and context. In other words, we need the restart to happen inside the debugged process.

How does Django do auto-reload? Looking at the source code, we can see that the `runserver` command uses `django.utils.autoreload` under the hood (see runserver.py::run). The cool thing is, this utility can also be used to run other processes!
Here is a simple python file that uses `autoreload` to run a celery worker:

```python
import django

# django.setup() needs to be called *before* importing autoreload
django.setup()

from django.utils import autoreload


def run_celery():
    # import the Celery app object from your code
    from my_package.celery.app import app as celery_app

    # the usual celery arguments
    args = "-A my.package.worker worker -l info -P solo --queues a,b"
    celery_app.worker_main(args.split(" "))


print("Starting celery worker with autoreload...")
autoreload.run_with_reloader(run_celery)
```
Don't forget to adapt the `args` and the `Celery` object import to suit your needs.
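Also note that `django.setup()` relies on the `DJANGO_SETTINGS_MODULE` environment variable (or an explicit `settings.configure()` call). If it isn't already set in your image, something like this should work; `my_project.settings` is a placeholder for your actual settings module:

```sh
# tell django.setup() which settings to load, then start the auto-reloading worker
export DJANGO_SETTINGS_MODULE=my_project.settings
python worker.py
```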
Assuming this file is saved at the root of your project as `worker.py`, the `docker-compose.override.yml` becomes:

```yaml
services:
  worker:
    command: >-
      sh -c "pip install debugpy &&
             python -m debugpy --listen 0.0.0.0:3000 worker.py"
    ports:
      - 3003:3000
```
For other ways of achieving the same, have a look at [StackOverflow] Celery auto reload on ANY changes.