
brian austin
How I use Claude Code for database migrations — zero downtime, every time

Database migrations are one of the highest-risk operations in software engineering. One wrong column rename and you've got a production outage. Here's exactly how I use Claude Code to make migrations safer and faster.

The problem with migrations

Most developers treat migrations as an afterthought. You add a feature, realize you need a new column, write a quick ALTER TABLE, and pray. With Claude Code, I've built a systematic approach that catches problems before they hit production.

My migration workflow

Step 1: Start with the schema diff

I open Claude Code and paste in my current schema alongside the feature requirements:

Here's my current users table schema:
[paste schema]

I need to add soft delete support with deleted_at timestamp, 
plus a new subscription_tier enum column.

Generate a migration that:
1. Is backward compatible
2. Has a safe rollback
3. Works with zero downtime on a live PostgreSQL table

Claude Code generates the migration AND the rollback in one shot:

-- Migration: 20260408_add_soft_delete_and_tier.sql
BEGIN;

-- Add deleted_at (nullable = backward compatible)
ALTER TABLE users 
  ADD COLUMN IF NOT EXISTS deleted_at TIMESTAMP WITH TIME ZONE;

-- Create enum type first
DO $$ BEGIN
  CREATE TYPE subscription_tier AS ENUM ('free', 'basic', 'pro');
EXCEPTION WHEN duplicate_object THEN null;
END $$;

-- Add column with default (safe for existing rows; on PostgreSQL 11+
-- this is a metadata-only change, no table rewrite)
ALTER TABLE users 
  ADD COLUMN IF NOT EXISTS tier subscription_tier DEFAULT 'free';

COMMIT;

-- Create index concurrently (no table lock!). CONCURRENTLY cannot run
-- inside a transaction block, so this goes after the COMMIT.
CREATE INDEX CONCURRENTLY IF NOT EXISTS 
  idx_users_tier ON users(tier) 
  WHERE deleted_at IS NULL;
-- Rollback: 20260408_add_soft_delete_and_tier.rollback.sql
-- DROP INDEX CONCURRENTLY also cannot run inside a transaction block
DROP INDEX CONCURRENTLY IF EXISTS idx_users_tier;

BEGIN;
ALTER TABLE users DROP COLUMN IF EXISTS tier;
ALTER TABLE users DROP COLUMN IF EXISTS deleted_at;
DROP TYPE IF EXISTS subscription_tier;
COMMIT;

Notice the CREATE INDEX CONCURRENTLY: Claude Code knows this pattern builds the index without blocking writes to the table. One thing to verify by hand, though: PostgreSQL refuses to run CONCURRENTLY statements inside a transaction block, so they have to sit outside the BEGIN/COMMIT.

Step 2: Check for breaking changes

Before running anything, I ask:

Review this migration for breaking changes. 
Look for:
- Columns being renamed (will break existing queries)
- NOT NULL constraints on existing tables without defaults
- Enum modifications that invalidate existing values
- Index operations that lock the table
- Foreign key additions that require full table scans

Claude Code will flag things like:

⚠️ Line 12: Adding NOT NULL without a default will fail on non-empty tables.
Fix: Add DEFAULT '' first, then add NOT NULL constraint separately.

⚠️ Line 18: Renaming column 'user_name' to 'username' will break any 
query using the old column name until all code is deployed.
Fix: Use a two-phase migration — add new column, copy data, update 
code, then drop old column.
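The two-step NOT NULL fix from that first warning looks like this in practice. This is a sketch against a hypothetical email column, not part of the migration above:

```sql
-- Step 1: add the default and fill existing rows (new writes get the default)
ALTER TABLE users ALTER COLUMN email SET DEFAULT '';
UPDATE users SET email = '' WHERE email IS NULL;

-- Step 2: now the constraint can't fail on existing rows
ALTER TABLE users ALTER COLUMN email SET NOT NULL;
```

Note that SET NOT NULL still scans the table to validate; on very large tables you can avoid the long lock by adding a CHECK (email IS NOT NULL) constraint as NOT VALID, validating it separately, and then setting NOT NULL.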

Step 3: Generate the ORM layer changes

Once the SQL is solid, I ask Claude Code to update the data layer:

Now update my Prisma schema and the UserRepository class 
to use the new soft delete pattern and tier enum.
Also update any existing queries that do 'SELECT * FROM users' 
to filter out deleted records by default.

This generates:

  1. Updated Prisma schema with soft delete middleware
  2. Updated repository methods with deleted_at IS NULL filters
  3. A list of every file that queries the users table directly
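If part of your codebase bypasses the ORM, the same "filter deleted rows by default" behavior can be approximated in plain SQL with a view. A sketch; the view name is my own:

```sql
-- Point read queries at this view instead of at users directly
CREATE VIEW active_users AS
  SELECT * FROM users WHERE deleted_at IS NULL;
```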

Step 4: Generate the test suite

Write integration tests for:
1. Soft delete doesn't permanently remove data
2. Soft-deleted users can't log in
3. Admin queries can still see deleted users
4. The rollback script returns the database to its exact prior state
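The first test can be sanity-checked directly in psql before the full integration suite exists. A sketch, with a made-up id and wrapped in a transaction so it leaves no trace:

```sql
BEGIN;
UPDATE users SET deleted_at = now() WHERE id = 42;

-- Row still exists (soft delete preserves data)...
SELECT count(*) FROM users WHERE id = 42;

-- ...but is invisible to the default deleted_at IS NULL filter
SELECT count(*) FROM users WHERE id = 42 AND deleted_at IS NULL;

ROLLBACK;
```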

The big sessions problem

Here's where this workflow hits a wall: migrations for large codebases generate a LOT of context. You've got:

  • Current schema (sometimes 200+ tables)
  • All affected files
  • Test suite
  • Multiple iterations of feedback

The typical Claude Code session runs out of steam around iteration 3 or 4. The model starts forgetting which columns exist, suggests patterns it already flagged as unsafe, or loses track of which files it already updated.

I switched to using SimplyLouie (simplylouie.com) as my Claude API endpoint. It's $2/month for unlimited sessions, which means I can:

  1. Start a fresh session for each migration (no context bleed between projects)
  2. Run the migration check, ORM update, and test generation as separate focused sessions
  3. Not worry about burning through my monthly token quota on a single complex migration

For Rs165/month (India), ₦3,200/month (Nigeria), or R$10/month (Brazil) — it's less than a cup of coffee.

The multi-step migration pattern (advanced)

For truly complex changes — column renames, type changes on large tables — I use a three-phase approach:

Phase 1 migration: Add new structure

ALTER TABLE orders ADD COLUMN status_v2 VARCHAR(50);
CREATE INDEX CONCURRENTLY idx_orders_status_v2 ON orders(status_v2);

Code deploy: Update application to write to both columns

Phase 2 migration: Backfill old data

UPDATE orders SET status_v2 = CASE status 
  WHEN 0 THEN 'pending'
  WHEN 1 THEN 'active'
  WHEN 2 THEN 'cancelled'
END
WHERE status_v2 IS NULL;
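On a large orders table, a single UPDATE like that holds row locks and generates WAL and bloat for the whole run. A batched variant keeps each transaction short; this is a sketch assuming an integer primary key:

```sql
-- Run repeatedly (from a script loop) until 0 rows are affected
UPDATE orders SET status_v2 = CASE status
  WHEN 0 THEN 'pending'
  WHEN 1 THEN 'active'
  WHEN 2 THEN 'cancelled'
END
WHERE id IN (
  SELECT id FROM orders
  WHERE status_v2 IS NULL
  LIMIT 10000
);
```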

Code deploy: Switch reads to new column

Phase 3 migration: Drop old column

ALTER TABLE orders DROP COLUMN status;

I ask Claude Code to generate all three phases at once, with the timing notes about when each deploy should happen. It's the kind of thing that's tedious to reason through manually but takes Claude Code about 30 seconds.

What to always ask Claude Code to check

I end every migration session with this prompt:

Final review checklist:
1. Will this migration work if run twice (idempotent)?
2. Can it be rolled back safely after the next code deploy?
3. Are there any implicit table locks that could cause timeouts?
4. Does the rollback script require any data backfill?
5. Are there any foreign key constraints that could block this?

If any answer is "no" or "maybe", Claude Code explains the fix.
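For question 3, one defensive habit worth adding to every migration regardless of what the review says: set a lock timeout so a DDL statement that can't get its lock fails fast instead of queuing behind a long-running transaction and stalling all traffic behind it. A sketch:

```sql
-- Abort any statement that waits more than 5s for a lock,
-- rather than blocking every query queued behind it
SET lock_timeout = '5s';

ALTER TABLE users ADD COLUMN IF NOT EXISTS deleted_at TIMESTAMP WITH TIME ZONE;
```

If the ALTER times out, rerun it during a quieter window; that is far cheaper than an outage.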

The result

  • Zero migration-related production incidents in 6 months
  • Average migration review time: 8 minutes (down from 45)
  • 100% of migrations include both up and rollback scripts
  • Every migration tested against a production snapshot before deploy

The workflow is repeatable. Once you've done it twice, you can run the same prompt templates on any migration and get production-quality SQL in minutes.


Using Claude Code for heavy database work? The biggest limitation is session length on complex schemas. SimplyLouie gives you fresh sessions at $2/month — useful when you're running 10+ iterations on a migration.
