How I use Claude Code for database migrations — a zero-downtime workflow
Database migrations are one of the highest-stakes operations in software development. One wrong schema change and you're dealing with downtime, data loss, or a rollback at 2am.
Here's the exact workflow I use with Claude Code to make database migrations safer and faster.
The problem with migrations
Most migration tools give you up() and down() functions and leave the hard thinking to you:
- Will this lock the table?
- Can I roll this back without data loss?
- What happens to the running application during migration?
- Are there any indexes I need to add?
Claude Code handles this thinking for me.
My zero-downtime migration workflow
Step 1: Describe the schema change, get the analysis
Before writing a single line of migration code:
claude "I need to add a non-nullable column 'subscription_tier' (enum: free, pro, enterprise)
to the users table which has 2.3M rows. We're on PostgreSQL 14, running in production
24/7. Analyze what locking behavior this will cause and give me a zero-downtime approach."
Claude immediately identifies:
- Before PostgreSQL 11, adding a column with a default rewrote the entire table under an ACCESS EXCLUSIVE lock
- In PG 11+, adding a column with a non-volatile default is a metadata-only change, effectively instant
- Even so, enforcing NOT NULL later needs care — the safe approach: add column as nullable → backfill in batches → add constraint
Step 2: Generate the migration with rollback
claude "Write the full migration for this schema change including:
1. The up() migration (zero-downtime, handles existing rows)
2. The down() rollback (must be data-safe)
3. Any required index changes
4. An estimated runtime based on 2.3M rows"
Output:
-- up.sql (zero-downtime approach)
-- Step 1: Add nullable column (instant, no lock)
ALTER TABLE users ADD COLUMN subscription_tier VARCHAR(20);
-- Step 2: Backfill existing rows in batches (run separately, app keeps running)
-- Estimated: ~4 minutes at 10k rows/batch
DO $$
DECLARE
  batch_size INT := 10000;
  rows_updated INT;
BEGIN
  LOOP
    UPDATE users
    SET subscription_tier = 'free'
    WHERE subscription_tier IS NULL
      AND id IN (
        SELECT id FROM users
        WHERE subscription_tier IS NULL
        LIMIT batch_size
      );
    GET DIAGNOSTICS rows_updated = ROW_COUNT;
    EXIT WHEN rows_updated = 0;
    COMMIT;                -- release row locks between batches (allowed in DO blocks since PG 11, outside an explicit transaction)
    PERFORM pg_sleep(0.1); -- backpressure
  END LOOP;
END $$;
-- Step 3: Add NOT NULL constraint (after backfill complete)
-- A plain SET NOT NULL would scan all 2.3M rows under an ACCESS EXCLUSIVE lock.
-- Validating a NOT VALID check constraint first keeps the exclusive lock brief:
ALTER TABLE users
ADD CONSTRAINT users_subscription_tier_not_null
CHECK (subscription_tier IS NOT NULL) NOT VALID;
ALTER TABLE users VALIDATE CONSTRAINT users_subscription_tier_not_null;
-- On PG 12+, SET NOT NULL sees the validated constraint and skips the scan
ALTER TABLE users
ALTER COLUMN subscription_tier SET NOT NULL,
ALTER COLUMN subscription_tier SET DEFAULT 'free';
ALTER TABLE users DROP CONSTRAINT users_subscription_tier_not_null;
-- Add index for queries that filter by tier
-- (CONCURRENTLY avoids blocking writes, but cannot run inside a transaction block)
CREATE INDEX CONCURRENTLY idx_users_subscription_tier
ON users(subscription_tier);
-- down.sql (safe rollback)
DROP INDEX CONCURRENTLY IF EXISTS idx_users_subscription_tier;
ALTER TABLE users DROP COLUMN subscription_tier;
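A quick sanity check on that "~4 minutes" estimate — a back-of-envelope sketch in Python, assuming roughly 0.9s of UPDATE time per 10k-row batch (my guess; profile a single batch on your own hardware before trusting any number):

```python
# Rough runtime estimate for the batched backfill above.
# UPDATE_SECONDS_PER_BATCH is an assumption, not a measurement.
TOTAL_ROWS = 2_300_000
BATCH_SIZE = 10_000
UPDATE_SECONDS_PER_BATCH = 0.9   # assumed; profile one batch first
SLEEP_SECONDS = 0.1              # the pg_sleep backpressure pause

batches = -(-TOTAL_ROWS // BATCH_SIZE)  # ceiling division -> 230 batches
total_seconds = batches * (UPDATE_SECONDS_PER_BATCH + SLEEP_SECONDS)
print(f"{batches} batches, ~{total_seconds / 60:.1f} minutes")  # → 230 batches, ~3.8 minutes
```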
Step 3: Validate against your ORM
If you're using Sequelize, Prisma, or Django ORM:
claude "Now write the Sequelize migration file for this change.
Also update the User model to include the new field with proper typing."
Claude generates both the migration file and the model update — consistent, no manual copy-paste errors.
Step 4: Pre-flight checklist
claude "Before I run this migration in production, give me a checklist:
- Verify backfill query is correct
- Confirm index won't cause table lock
- Check that rollback is data-safe
- Flag any edge cases I might have missed"
I've caught several issues this way — like a migration that would work fine with a small dataset but deadlock under production load.
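Part of that checklist can even be mechanized. Here's a minimal sketch — these are my own naive string checks, not anything Claude outputs, and a real tool would parse the SQL rather than grep it:

```python
# Naive pre-flight lint for migration SQL — a sketch, not a parser.
def preflight(up_sql: str, down_sql: str) -> list[str]:
    """Return warnings for a few common risky patterns."""
    warnings = []
    if "LIMIT" not in up_sql.upper():
        warnings.append("backfill has no LIMIT: one giant UPDATE will hold row locks for minutes")
    if "CREATE INDEX" in up_sql.upper() and "CONCURRENTLY" not in up_sql.upper():
        warnings.append("CREATE INDEX without CONCURRENTLY blocks writes to the table")
    if "DROP TABLE" in down_sql.upper():
        warnings.append("rollback drops a table: data loss on rollback")
    return warnings

# A deliberately incomplete migration trips the first check:
up = "ALTER TABLE users ADD COLUMN subscription_tier VARCHAR(20);"
down = "ALTER TABLE users DROP COLUMN subscription_tier;"
print(preflight(up, down))
```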
The rate limit problem
A complex migration session involves:
- Schema analysis
- Migration code generation
- ORM model updates
- Test data queries
- Pre-flight review
That's easily 50,000-100,000 tokens for a non-trivial migration. On the official Claude API, that's $0.15-$0.75 per migration session.
I use SimplyLouie for this — it's a $2/month API proxy that gives me unlimited Claude sessions. For database work where I'm running multiple migration sessions per week, the unlimited access matters more than per-token pricing.
# Works with Claude Code out of the box
export ANTHROPIC_BASE_URL=https://simplylouie.com/api
export ANTHROPIC_API_KEY=your-key
claude "analyze this migration..."
Real example: renaming a column
Renaming a column is deceptively dangerous. Claude Code handles the complexity:
claude "I need to rename 'user_id' to 'account_id' in the orders table (500k rows).
The column is referenced by:
- 3 foreign keys in other tables
- 2 indexes
- Multiple application queries
Give me a zero-downtime approach that handles all dependencies."
The safe approach Claude generates:
1. Add account_id as an alias (trigger-based sync)
2. Deploy code that writes to both columns
3. Backfill account_id from user_id
4. Switch reads to account_id
5. Remove the old column
This is a 5-step process most engineers would do in one dangerous ALTER TABLE.
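The dual-write phase in the middle is the part your application code has to get right. Here's a minimal in-memory sketch of the idea in Python — the column names mirror the example, but this is just an illustration; the real sync between columns would be a database trigger:

```python
# In-memory sketch of the dual-write transition for user_id -> account_id.
# A real implementation syncs the columns with a trigger; this only shows the phases.
class Order:
    def __init__(self):
        self.row = {"user_id": None, "account_id": None}
        self.read_from_new = False   # flipped in phase 4

    def set_account(self, value):
        # Phase 2: every write goes to BOTH columns, so either can serve reads
        self.row["user_id"] = value
        self.row["account_id"] = value

    def get_account(self):
        # Phase 4: flip reads to the new column once the backfill is verified
        col = "account_id" if self.read_from_new else "user_id"
        return self.row[col]

order = Order()
order.set_account(42)
assert order.get_account() == 42   # still reading the old column
order.read_from_new = True
assert order.get_account() == 42   # identical answer after the switch
```

Because reads return the same value before and after the flip, each phase is independently deployable and reversible.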
Tips for migration sessions
Always include your PostgreSQL/MySQL version — locking behavior varies significantly.
Share your current indexes — Claude spots missing indexes on foreign keys that will slow down the migration.
Mention row counts — Claude adjusts batch sizes based on table size.
Ask about monitoring — Claude will suggest queries to monitor migration progress.
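For the backfill example above, monitoring can be as simple as polling the remaining NULL count (SELECT count(*) FROM users WHERE subscription_tier IS NULL) and turning it into a percentage — a trivial sketch, with made-up numbers:

```python
# Progress readout for a batched backfill.
# remaining_null would come from:
#   SELECT count(*) FROM users WHERE subscription_tier IS NULL;
def backfill_progress(total_rows: int, remaining_null: int) -> float:
    """Percentage of rows already backfilled."""
    if total_rows == 0:
        return 100.0
    return 100.0 * (total_rows - remaining_null) / total_rows

print(f"{backfill_progress(2_300_000, 575_000):.0f}% done")  # → 75% done
```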
The bottom line
Database migrations used to be my most stressful engineering task. With Claude Code:
- I get zero-downtime migration strategies automatically
- Rollbacks are generated alongside every migration
- Edge cases get caught before production
The sessions run long (migrations are complex), which is exactly where a flat-rate API endpoint like SimplyLouie pays for itself.
SimplyLouie is a $2/month Claude API proxy. 7-day free trial, no commitment. Works with Claude Code via ANTHROPIC_BASE_URL.