brian austin

How I use Claude Code to write database migrations safely — a complete workflow

Database migrations are one of the riskiest operations in software development. A bad migration can lock tables, corrupt data, or take down production for hours. Here's the exact workflow I use with Claude Code to write migrations safely — every time.

The problem with migrations

Most migration failures happen because:

  • The migration wasn't tested against production data volume
  • Nobody thought about the rollback path
  • The schema change conflicts with in-flight requests
  • The migration runs during peak traffic

Claude Code can't prevent all of these, but it can catch most of them — if you use it right.

Step 1: Give Claude Code your current schema

Before writing a single line of migration code, I paste in my current schema:

Here's my current users table:

CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  email VARCHAR(255) UNIQUE NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);

I need to add a stripe_customer_id column. What migration should I write, and what could go wrong?

Claude Code doesn't just write the migration. It identifies the risks first.

Step 2: Let Claude Code draft the migration with rollback

I always ask for both the forward migration AND the rollback:

-- Up migration
ALTER TABLE users 
  ADD COLUMN stripe_customer_id VARCHAR(255);

CREATE INDEX CONCURRENTLY idx_users_stripe_customer_id 
  ON users(stripe_customer_id);

-- Down migration (rollback)
DROP INDEX CONCURRENTLY IF EXISTS idx_users_stripe_customer_id;

ALTER TABLE users 
  DROP COLUMN IF EXISTS stripe_customer_id;

Note the CONCURRENTLY keyword on the index creation. Claude Code added that because a plain CREATE INDEX takes a lock that blocks writes for the entire build. One caveat: PostgreSQL won't run CREATE INDEX CONCURRENTLY inside a transaction block, so your migration tool needs to skip transaction wrapping for that step. That's the kind of thing you'd miss at 2am.

Step 3: Ask Claude Code to simulate the migration

This is where the workflow gets powerful. I ask:

Simulate what happens when this migration runs against a table with 5 million rows.
What's the estimated lock time? What happens to in-flight queries?

Claude Code walks through:

  1. The ALTER TABLE operation (near-instant for a nullable column with no default; since PostgreSQL 11, even adding a column with a default is a metadata-only change)
  2. The CONCURRENT index build (no lock, but uses resources)
  3. Estimated time based on row count and index size
  4. What happens to reads and writes during each phase
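The estimate in step 3 is simple arithmetic, and it's worth sanity-checking Claude Code's numbers yourself. Here's a rough sketch — the bytes-per-entry and throughput figures are assumptions I've plugged in for illustration, not measurements; benchmark on a replica before trusting them:

```javascript
// Back-of-envelope estimate for a CONCURRENT index build.
// bytesPerEntry and mbPerSecond are rough assumptions — measure them
// on a replica for your own hardware and data.
function estimateIndexBuildSeconds(rowCount, bytesPerEntry = 40, mbPerSecond = 50) {
  const totalMb = (rowCount * bytesPerEntry) / (1024 * 1024);
  return totalMb / mbPerSecond;
}

// 5M rows at ~40 bytes/entry is ~190 MB of index data; at 50 MB/s that's
// a few seconds of build I/O (the CONCURRENT build's extra table scans
// add more wall-clock time on top).
console.log(estimateIndexBuildSeconds(5_000_000).toFixed(1)); // → "3.8"
```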

Step 4: Write the application code first

Counter-intuitive but critical: write the code that uses the new column BEFORE running the migration.

I use Claude Code to:

  1. Identify all places in the codebase that touch the users table
  2. Write the updated code that handles both old schema (no column) and new schema (with column)
  3. Review for any N+1 queries the new column might create

// Before migration: no stripe_customer_id
const user = await db.query('SELECT id, email FROM users WHERE id = $1', [userId]);

// After migration: backwards-compatible
const user = await db.query(
  'SELECT id, email, stripe_customer_id FROM users WHERE id = $1', 
  [userId]
);
// Handle null gracefully — old rows won't have this yet
const customerId = user.stripe_customer_id ?? null;
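Reads aren't the only thing that has to tolerate both schemas — the write path does too. Here's a minimal sketch of the shape I aim for (the `hasStripeColumn` flag and the helper name are my own, not part of the workflow above): build the INSERT so it only references the new column once the migration has actually run.

```javascript
// Build a parameterized INSERT that is safe both before and after the
// migration: the new column is only referenced when it exists.
function buildInsertUser({ email, stripeCustomerId }, hasStripeColumn) {
  if (hasStripeColumn && stripeCustomerId != null) {
    return {
      text: 'INSERT INTO users (email, stripe_customer_id) VALUES ($1, $2)',
      values: [email, stripeCustomerId],
    };
  }
  // Pre-migration (or no Stripe ID yet): identical to the old write path,
  // so this code can ship before the schema change.
  return { text: 'INSERT INTO users (email) VALUES ($1)', values: [email] };
}
```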

Step 5: Zero-downtime deployment order

Claude Code generates the deployment checklist:

1. Deploy application code (handles both old and new schema)
2. Verify deployment is stable
3. Run migration on replica first
4. Check replica performance
5. Run migration on primary during low-traffic window
6. Monitor error rates for 15 minutes
7. If error rate spikes → run rollback migration
8. If stable → remove backward-compat code in next deploy
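Step 7's "error rate spikes" deserves a concrete threshold, or the 2am rollback decision gets made by gut feel. A tiny sketch of the gate I'd put in a monitoring script — the 2x multiplier and the zero-baseline floor are assumptions to tune against your own baseline, not values from the checklist:

```javascript
// Decide whether step 7 of the checklist fires: roll back if the
// post-migration error rate exceeds the pre-migration baseline by a
// fixed multiplier.
function shouldRollback(baselineErrorRate, currentErrorRate, multiplier = 2.0) {
  if (baselineErrorRate === 0) {
    // No baseline errors: any rate above an assumed 1% floor triggers rollback.
    return currentErrorRate > 0.01;
  }
  return currentErrorRate > baselineErrorRate * multiplier;
}
```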

Where rate limits hit

This workflow is context-heavy. A real migration session includes:

  • Full schema dump (often 500-1000 lines)
  • Application code that touches the table
  • Test data examples
  • The migration itself (forward + rollback)
  • Deployment checklist

For a complex schema change across a large application, I'm regularly hitting Claude Code's context limits mid-session — right when I need it most.

What I do about it

I use SimplyLouie as my Claude API endpoint. It's $2/month (not $20) and gives me a dedicated endpoint I can configure in Claude Code's settings:

export ANTHROPIC_BASE_URL=https://api.simplylouie.com

Same API, same models, fraction of the cost. For database migration work especially — where sessions run long and context is large — having a cost-effective endpoint means I don't cut corners to save tokens.

The full workflow in one prompt

Here's the prompt I actually use at the start of every migration session:

I'm writing a database migration. Here's my current schema: [schema]

Here's what I need to change: [description]

Please:
1. Write the forward migration with CONCURRENT index operations
2. Write the rollback migration
3. Identify any table lock risks
4. List all application code I need to update before running this
5. Write the deployment checklist in order
6. Flag anything that could go wrong with 5M+ rows

This single prompt generates everything I need for a safe migration. The key is giving Claude Code the schema upfront — not asking it to infer.

Results

  • 0 production incidents from migrations in the past 8 months
  • Average migration review time: 20 minutes (down from 2 hours)
  • Caught 3 missing rollback paths that would have been unrecoverable
  • Caught 2 table-locking index operations before they hit production

The workflow works. The rate limits are the only friction — which is why having a cost-effective API endpoint matters for this kind of intensive session.


Using Claude Code for database work? What's your migration workflow? Drop it in the comments.