Conversational Development With Claude Code — Part 10: Database Migrations with Docker, Alembic, and Automated Tests
TL;DR — Treat database migrations like a production change even when you’re “just local.” Run the API in Docker, centralize repeatable commands in a Makefile, execute tests inside the container, and use Claude Code as a disciplined operator: planning first, executing with permission, and validating that the service still answers after the schema evolves.
This chapter is about changing the foundation without cracking the building.
Why migrations are where good intentions go to die
Most incidents don’t start with “bad code.”
They start with a subtle drift:
- a migration applied in one environment but not another,
- a missing dependency that turns “tests passed” into “tests never ran,”
- a schema change that technically works… until a real request hits a code path you didn’t exercise.
A database is not a module you refactor.
It’s a living contract between runtime, data, and time.
So the goal here isn’t “generate a migration.”
The goal is: ship the migration without breaking the service — and be able to prove it.
That’s the difference between “I changed the DB” and “I can trust the system again.”
The operational baseline: run the backend like it’s real
Before Alembic, before constraints, before indexes — you need a stable baseline.
In the backend folder we assume a Makefile exists. The point of a Makefile is not nostalgia. It’s repeatability. It turns tribal knowledge into a verb.
From the backend directory:
cd backend
make build
make start
make logs
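A minimal sketch of what those targets might look like (the compose service name `api` and the use of Docker Compose are assumptions about your setup — adapt to your stack; recipes need real tabs):

```makefile
# Sketch only — adjust service names and flags to your compose file.
build:        ## Build the API image
	docker compose build api

start:        ## Start the stack in the background
	docker compose up -d

logs:         ## Follow the API logs (your 2 a.m. heartbeat monitor)
	docker compose logs -f api
```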
What each command implies:
- Docker image: not just packaging, but freezing the environment so your machine’s local quirks don’t become invisible dependencies.
- Logs: the only honest narrator when your service “started” but isn’t actually healthy.
- Makefile: the social contract of the team — the commands you can rely on at 2 a.m.
Once the container is up, validate the API responds (for example at http://localhost:8000), and keep make logs available as your heartbeat monitor.
When “there is no test command” becomes a migration blocker
A migration without tests is a guess.
A migration with tests you didn’t run is a lie you told yourself with a clean conscience.
If your Makefile lacks a test target, you don’t “just run pytest.” You standardize it.
Ask Claude Code to add a test command that executes pytest inside the API container. This is crucial: your tests should run in the same environment that runs the service.
The intended developer experience becomes:
make test
Under the hood, it can be implemented with docker compose exec (or an equivalent, depending on your stack).
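As a sketch, assuming a compose service named `api` with pytest installed in the image (both assumptions about your stack), the target can be as small as:

```makefile
# Run the suite inside the same container that serves the API.
test:
	docker compose exec api pytest -x -q
```

Now `make test` exercises the code in the exact environment that runs in production-like conditions, not on your laptop's Python.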
The key is not the exact command.
The key is: tests are now a first‑class workflow primitive.
Fix tests before you touch the schema
This chapter assumes your migration is part of a feature evolution (in our series, the ratings feature). But at this stage, the “feature” is irrelevant. What matters is: the system must be green before it changes.
When tests fail inside the container, don’t bargain. Repair them.
Common culprits in Python API stacks:
- `httpx` missing (async HTTP clients, test clients)
- `pytest-asyncio` missing (async test runtime)
The pattern that scales:
1) add missing dependencies into the right dependency group (often optional / dev / test)
2) rebuild the image (so the container environment is updated)
3) rerun make test until you get a clean pass
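Step 1 might look like this in a PEP 621-style `pyproject.toml` (the group name `test` is an assumption; your project may use a `dev` group or a requirements file instead):

```toml
# pyproject.toml — keep test-only dependencies in their own group
[project.optional-dependencies]
test = [
    "pytest",
    "pytest-asyncio",
    "httpx",
]
```

Then `make build` to bake the new dependencies into the image, and `make test` again.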
Expected outcome:
- tests run and pass at 100% inside the container
- your baseline is not hypothetical anymore
Claude Code as an operator: plan first, then execute with permission
Claude Code becomes most valuable when you stop using it as autocomplete and start using it as a controlled executor.
Two modes matter here:
- Plan mode: Claude describes exactly what it will do — files touched, commands executed, validations performed.
- Bash tool execution: Claude proposes terminal commands; you approve. This keeps the workflow fast without becoming reckless.
You can also tune the interaction style:
- default: confirmation before edits/commands (recommended)
- “auto accept”: useful for low-risk operations, but treat it like sudo — not a lifestyle
In migrations, I like a simple rule:
If it changes state (DB, filesystem, containers), it gets a plan and an explicit approval.
That’s how you stay fast and correct.
Phase 1: create the migration (DB first, always)
Your new capability needs persistence. In this series, that persistence is a table like “course rating” in PostgreSQL with:
- a strict 1–5 range constraint
- indexes that make aggregation and lookup cheap
- relationships that preserve integrity (e.g., course/user foreign keys if applicable)
- a migration that can move forward and roll back
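To make the requirements concrete, here is an illustrative Alembic revision — table and column names, foreign keys, and revision ids are placeholders, not the series' actual migration:

```python
"""add course_rating table — illustrative sketch, not the real revision."""
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic (placeholders)
revision = "xxxx"
down_revision = "yyyy"

def upgrade() -> None:
    op.create_table(
        "course_rating",
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("course_id", sa.Integer, sa.ForeignKey("course.id"), nullable=False),
        sa.Column("user_id", sa.Integer, sa.ForeignKey("user.id"), nullable=False),
        sa.Column("rating", sa.SmallInteger, nullable=False),
        # The strict 1–5 range lives in the database, not just the app layer.
        sa.CheckConstraint("rating >= 1 AND rating <= 5", name="ck_course_rating_range"),
    )
    # Cheap aggregation/lookup per course.
    op.create_index("ix_course_rating_course_id", "course_rating", ["course_id"])

def downgrade() -> None:
    # Dropping the table is absolute: any stored ratings are lost.
    op.drop_index("ix_course_rating_course_id", table_name="course_rating")
    op.drop_table("course_rating")
```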
The migration workflow looks like this:
1) Identify the running API container
2) Check current Alembic state
3) Generate a new revision following project conventions
4) Apply the migration
5) Validate at the database layer that the schema is where you think it is
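In commands, the five steps above look roughly like this — assuming a compose service named `api` and a standard Alembic setup inside the image (both assumptions):

```shell
docker compose ps                                    # 1) find the API container
docker compose exec api alembic current              # 2) current revision
docker compose exec api alembic revision -m "add course_rating"   # 3) new revision
docker compose exec api alembic upgrade head         # 4) apply it
docker compose exec api alembic current              # 5) confirm head is applied
```

For step 5, also validate at the database layer (e.g., `\d course_rating` in `psql`, if it's available in your container) rather than trusting Alembic's bookkeeping alone.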
In Claude Code, you can drive this as a conversation:
- “Inspect current Alembic versions.”
- “Create a migration for the ratings table with constraints + indexes.”
- “Apply it and prove it’s applied.”
- “Confirm the API still responds.”
And you do it with plan mode so you see the whole move before the first step lands.
Rollback is not optional: write downgrade like you mean it
The migration isn’t complete until downgrade exists.
A downgrade is not a courtesy. It’s your escape hatch when:
- you discover a production edge case,
- a deploy pipeline fails halfway,
- or the model layer evolves differently than you expected.
Your Alembic migration should document rollback behavior explicitly, and your workflow should include:
- how to downgrade to the previous revision
- what data loss might occur (dropping a table is absolute)
- what validations to run post-downgrade
Even in local dev, practice rollback. It trains the team’s nervous system to treat migrations as reversible operations.
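The rollback drill itself is short — again assuming Alembic inside an `api` compose service:

```shell
docker compose exec api alembic downgrade -1   # step back one revision
docker compose exec api alembic current        # confirm where you landed
make test                                      # prove the old world still works
```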
Validate like an engineer, not like a gambler
After applying the migration:
1) Rerun tests:
make test
2) Verify containers are healthy:
- confirm API container is running
- confirm logs do not show crash loops
3) Hit a lightweight endpoint (e.g., health):
- `curl` is enough
- or a small `python3` script if you want to assert response shape
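If you want the script route, a stdlib-only smoke check is enough — the URL and response shape below are assumptions about your API, not its actual contract:

```python
# Minimal post-migration smoke check using only the standard library.
import json
import urllib.request

def check_health(url: str, timeout: float = 5.0) -> dict:
    """Fetch a health endpoint and return its parsed JSON body.

    Raises on network errors or non-200 responses — exactly the loud
    failure you want from a smoke check.
    """
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        if resp.status != 200:
            raise RuntimeError(f"health check failed: HTTP {resp.status}")
        return json.loads(resp.read().decode())
```

After `make start`, something like `check_health("http://localhost:8000/health")` either returns the body or raises — no ambiguous middle ground.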
What you want is a single sentence you can trust:
Tests pass. Container is healthy. Endpoints respond. Migration is applied.
When Claude Code summarizes this for you at the end of the run, you get something rare in modern software: confidence without drama.
The real output of this chapter: a workflow you can hand to someone else
By the end of Part 10, you should have:
- a running backend via Docker
- a `Makefile` that can build/start/log/test
- pytest running inside the container
- missing test dependencies resolved (if any)
- an Alembic migration created and applied
- rollback story documented (downgrade)
- post-migration validations (tests + endpoint checks)
This isn’t just “how to migrate a DB.”
This is how to keep the system coherent while it evolves.
What’s next
Phase 2 of the ratings feature is where most teams get impatient:
models and ORM layer integration.
Next chapter, we’ll move from schema to SQLAlchemy models and start connecting persistence to the API surface — still with the same discipline:
- baseline first
- plan before execution
- tests after every state change
If you run into friction (Docker daemon, Alembic env, test flakiness), drop it in the comments — those are the real lessons.
— Written by Cristian Sifuentes
Full‑stack engineer · AI‑assisted systems thinker
