
Python-T Point

Posted on • Originally published at pythontpoint.in

🚀 Docker Compose Django Postgres tutorial — setup made simple

"The first time I broke production, it wasn’t because of bad code. It was because my local database didn’t match production — and I had no idea how to fix it."

Yeah. I lived that.

I remember the time — I was a junior at a 6-person startup in Pune, working on our flagship SaaS product. We were live, growing fast, and I was so proud of this new feature I’d built. Added a simple model, ran migrations locally (SQLite, of course), deployed on Friday evening (why do we always do this?), and boom — Internal Server Error for every user.

Turns out, the JSONField I’d used so casually? SQLite doesn’t support it. PostgreSQL does.

And guess what we were running in production.

Spoiler: not SQLite.

Four hours. Two senior devs pulled off their weekend plans. A hot mess of rollbacks, emergency psql commands, and one very quiet apology on Slack.

Cold chai has never tasted so bitter.

So look — if you're learning Django and still treating SQLite like the endgame… stop. I know it’s easy. I know it works for tutorials. But the real world runs on PostgreSQL. And if you're not using Docker Compose to mirror that reality locally, you’re just delaying the pain.

Trust me, I've been there.

That’s why this docker compose django postgres tutorial is what I’d hand to my past self. No fluff. Just working, repeatable setup. One command. Full control. No more “but it worked on my machine” excuses.



🚀 Getting Started — Why Dockerize?

Here’s the thing.

You don’t dockerize because it’s trendy.

You do it because you’re tired of spending half your day debugging environment mismatches.

Python 3.9 on your laptop. 3.11 on staging. Missing libpq-dev. No psycopg2. "Why won’t it just run?!"

And then — the dreaded:

```
django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module
```

Again.

So.

Docker wraps all that noise in a box. Your app, its Python version, dependencies, PostgreSQL — everything runs the same, everywhere.

One container for Django. One for Postgres. Maybe one for Redis later. Isolated. Reproducible. No system-level installs. No permission hell.

And Docker Compose? That’s the glue. Single docker-compose.yml. One command:

```shell
docker-compose up
```

And just like that — your entire stack is live.

Did I mention it works the same on macOS, Linux, Windows?

Yeah. Game changer.

🔧 Prerequisites

Before we jump in:

  • Docker and Docker Compose installed (run docker --version and docker-compose --version to check)
  • You’ve built a Django "Hello World" — just enough to know what manage.py does
  • Python 3.8+, pip, and a text editor that doesn’t hate you

Docker Desktop includes Compose — grab it from docker.com if you haven’t already.

(And yes, WSL2 users — you’re covered. Just don’t forget to enable it. I’ve lost count of how many juniors I’ve mentored who forgot that step.)

📦 Project Structure

Keep it clean. Keep it simple.

This is what we’re building:

```
myproject/
├── docker-compose.yml
├── Dockerfile
├── requirements.txt
└── myproject/
    ├── manage.py
    └── myproject/
        ├── __init__.py
        ├── settings.py
        ├── urls.py
        └── wsgi.py
```

All in one folder. Django app + Docker config. No magic. No over-engineering.

Just like the real world.


⚙️ Configuration — The docker-compose.yml File

This is where the docker compose django postgres tutorial actually starts working for you.

Your docker-compose.yml defines everything: services, networks, volumes, environment vars.

Here’s a minimal, battle-tested version:

```yaml
version: '3.8'

services:
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: myproject
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - "5432:5432"

  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://myuser:mypassword@db:5432/myproject

volumes:
  postgres_data:
```

Let’s break it down — slowly.

The db service:

Uses the official postgres:15 image. Sets the DB name, user, and password via environment variables. (Hardcoded here for dev — don’t panic, we’ll rotate them in prod.)

Mounts a named volume: postgres_data.

This is critical.

Without it, your database wipes clean every time you run docker-compose down.

Imagine losing all your test data. Again. And again.

Not fun.

Port 5432 is exposed — so you can connect with DBeaver or psql from your host machine. Super handy for debugging.

The web service:

Builds from your Dockerfile. Mounts your code into /code so changes reflect live. Exposes port 8000.

The depends_on: db line? It means: start the database first.

But — and this is important — it only waits for the container to be running, not ready.

PostgreSQL takes a few seconds to boot up and accept connections.

So yes, your Django container might start too early.

And crash. Repeatedly.

Spoiler: we’ll fix that later with a retry loop or wait-for-it.sh. (But that’s another blog post.)
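If you want a quick preview of that fix: the Compose spec supports health checks. Here's a sketch, reusing the compose file above. Note this assumes a Compose version new enough to support conditions under depends_on (the modern docker compose CLI handles it; very old docker-compose v1 releases may not):

```yaml
# Sketch: make `web` wait until Postgres actually accepts connections,
# not just until the container process starts.
services:
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U myuser -d myproject"]
      interval: 5s
      timeout: 5s
      retries: 5

  web:
    depends_on:
      db:
        condition: service_healthy
```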

Still — even this basic config gives you portability. Hand it to any dev. Run docker-compose up. Same result.

Consistency. Not luck.

"Consistency isn’t a luxury — it’s the foundation of reliable development."

And yeah — I learned this the hard way, after three staging envs mysteriously diverged.


🐍 Django Setup — Making It Talk to PostgreSQL

Alright. Time to ditch SQLite.

First: add psycopg2-binary to your requirements.txt. Django needs it to talk to Postgres.

```
Django>=4.2
psycopg2-binary>=2.9
```

Now, your Dockerfile:

```dockerfile
FROM python:3.11

ENV PYTHONUNBUFFERED=1

WORKDIR /code

COPY requirements.txt /code/
RUN pip install -r requirements.txt

COPY . /code/
```

Standard. No surprises.
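One optional extra while we're here: a .dockerignore keeps junk out of the build context and out of your image. A minimal sketch (the entries are my suggestions, adjust to your project):

```
# .dockerignore — keep the build context small
__pycache__/
*.pyc
.git/
db.sqlite3
.env
```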

Now the big one: update your settings.py.

Replace the DATABASES block with this:

```python
import os
from urllib.parse import urlparse

# For simplicity, we'll parse DATABASE_URL
# In production, use django-environ or django-configurations
DATABASE_URL = os.environ.get('DATABASE_URL', 'sqlite:///db.sqlite3')

if DATABASE_URL.startswith('postgres://'):
    url = urlparse(DATABASE_URL)
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': url.path[1:],
            'USER': url.username,
            'PASSWORD': url.password,
            'HOST': url.hostname,
            'PORT': url.port,
        }
    }
else:
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': BASE_DIR / 'db.sqlite3',
        }
    }
```

This is slick.

If DATABASE_URL is set (it will be, in Docker), use PostgreSQL.

If not? Fall back to SQLite — perfect for quick tests or local dev without containers.

Smart. Flexible. No hardcoding.

Oh — and use django-environ in real projects. Seriously. But for this tutorial? This snippet works.
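If the parsing above feels opaque, here's exactly what urlparse pulls out of a URL like ours. This snippet is standalone, so you can run it outside Django to convince yourself:

```python
from urllib.parse import urlparse

# The same URL format the compose file passes to the web container.
url = urlparse('postgres://myuser:mypassword@db:5432/myproject')

print(url.username)   # myuser
print(url.password)   # mypassword
print(url.hostname)   # db
print(url.port)       # 5432
print(url.path[1:])   # myproject  (url.path is '/myproject'; strip the '/')
```

Note the hostname is db, not localhost: inside the Compose network, the service name is the DNS name.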

🔧 Run Migrations Inside the Container

Containers are isolated.

That means you can't just run python manage.py migrate on your host machine.

It has to happen inside the web container.

So after you start things up:

```shell
docker-compose up -d
```

Then run:

```shell
docker-compose exec web python manage.py makemigrations
docker-compose exec web python manage.py migrate
```

Watch the output.

You should see Django creating tables in PostgreSQL — not SQLite.

No db.sqlite3 file anywhere.

Clean.

🧠 Why This Matters

Look — I get it.

For a blog, a todo app, even a small CRM? SQLite feels fine.

But things go sideways fast.

What about concurrent writes? Full-text search? JSONB fields? Geospatial queries?

SQLite laughs and crashes.

And when you finally deploy to Heroku, AWS, or a VPS — boom. Reality hits.

Your "working" app fails because the database behaves differently.

Not cool.

From what I've seen on real projects — the teams that dockerize early survive longer. They deploy faster. They break less.

And they don’t have midnight debugging sessions over chai and guilt.

This docker compose django postgres tutorial isn’t about containers.

It’s about not lying to yourself.

Your local setup should mirror production. As closely as possible.

Docker makes that achievable. Today.


🤝 Debugging Tips — When Things Break

Spoiler: they will.

And when they do — don’t panic.

Just check the logs.

```shell
docker-compose logs db
docker-compose logs web
```

9 times out of 10, the error’s right there.

Can’t connect to the database?

Maybe depends_on gave you a false sense of security.

The db container is up, but PostgreSQL isn’t ready to accept connections yet.

So Django crashes trying to run migrations.

Solution? Add a retry loop. Or use wait-for-it in your startup script.
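Here's a minimal version of that retry loop in plain Python. To be clear, wait_for_port is a hypothetical helper of mine, not a Django or Docker feature — the idea is you'd call it at the top of your container's entrypoint, before migrate:

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=0.5):
    """Retry a TCP connection to host:port until it succeeds or timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            # Postgres accepting TCP connections is a reasonable readiness proxy.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{host}:{port} not reachable after {timeout}s")
            time.sleep(interval)

# In an entrypoint you'd call something like:
# wait_for_port('db', 5432)
```

It only proves the port is open, not that the database is fully initialized — a pg_isready-based healthcheck is stricter — but it kills the vast majority of "crashed before Postgres booted" failures.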

I’ve seen this break CI pipelines. Wasted two hours once — just because Postgres was slow to boot in CI.

Yeah.

Migration fails?

Double-check:

  • Is psycopg2-binary in requirements.txt?
  • Do the user and password in docker-compose.yml match what’s in DATABASE_URL?
  • Did you run migrate inside the container?

And for the love of clean commits — never hardcode credentials in settings.py.

Not even for “just local.”

I once pushed settings.py with real DB creds to a public repo.

Next morning: my PostgreSQL instance was mining Monero.

True story.

(That’s when I learned about .env files and git-secret. But that’s — again — another post.)

So use environment variables. Always.

Even if it feels like overkill.

It’s not.

📌 A Quick Note on Security (because, well, crypto miners)

That incident? Cost us ₹8k on AWS in one weekend.

And the shame.

Just use environment variables. Mount a .env file if you must. But keep secrets out of code.
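A sketch of that .env approach. Compose reads a .env file in the project root automatically for variable substitution, and the values here are placeholders, obviously:

```
# .env — never commit this; add it to .gitignore
POSTGRES_DB=myproject
POSTGRES_USER=myuser
POSTGRES_PASSWORD=change-me
DATABASE_URL=postgres://myuser:change-me@db:5432/myproject
```

Then in docker-compose.yml you reference ${POSTGRES_PASSWORD} instead of the literal value, and the secret never touches version control.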

Trust me.


🟩 Final Thoughts

Dockerizing your Django app with PostgreSQL isn’t just “good practice.”

It’s survival.

It’s choosing to care about consistency. About team velocity. About not being “that dev” who breaks production on a Friday.

When you start with Docker and Postgres locally — you close the gap between “works for me” and “works for everyone.”

No more excuses.

And the best part? This same pattern scales.

Add Redis for caching? One more service in docker-compose.yml.

Need Celery? Another container.

Elasticsearch? Same flow.

One file. Multiple services. Zero chaos.

Honestly — this setup has saved me more weekends than I can count.

You’re not just learning Docker.

You’re learning how real systems work.

And how to not break them.

❓ Frequently Asked Questions

How do I access the PostgreSQL database from outside the container?

You can connect via localhost:5432 using tools like pgAdmin, DBeaver, or psql. Just use the same credentials defined in docker-compose.yml — user, password, and database name. The port mapping 5432:5432 makes this possible.

Can I use this setup in production?

Not directly. This setup is for local development. In production, you’d separate concerns: use managed PostgreSQL (like AWS RDS), avoid volume mounts for code, and serve Django behind a proper web server like Gunicorn + Nginx. But the core idea — containerized services — scales beautifully.

Why use environment variables instead of hardcoding in settings.py?

Hardcoding credentials is a security risk and makes your app less flexible. Environment variables let you change behavior across environments (dev, staging, prod) without touching code. Plus, you can keep secrets out of version control — a must for team projects and CI/CD.
