DEV Community

brian austin

How I use Claude Code to refactor a monolith into microservices — a complete workflow


Every growing codebase eventually hits the same wall: the monolith that got you to product-market fit is now slowing you down. New features take weeks instead of days. Every deploy risks breaking something unrelated. Your team steps on each other's toes constantly.

I've done this migration twice now — and the second time, I had Claude Code running alongside me the entire way. Here's the exact workflow that saved me weeks.

The problem with monolith migrations

The classic advice is "strangler fig pattern" — slowly extract services at the edges while keeping the core running. But knowing which services to extract first, how to define their boundaries, and how to handle the data split? That's where teams get stuck for months.

Claude Code turns this into a methodical, step-by-step process.

Step 1: Map the dependency graph

Before touching any code, I ask Claude to analyze the entire codebase and produce a service boundary map:

Analyze this monolith and identify:
1. Natural service boundaries based on domain logic
2. Circular dependencies that need to be broken first
3. Shared database tables and how to split them
4. Which services have the most independent test coverage (safest to extract first)
5. Estimated effort for each extraction

Output as a prioritized migration plan with the lowest-risk services first.

This prompt alone produces a draft migration roadmap that would otherwise take a senior architect days to write. Review it carefully before acting on it; treat it as a starting point, not a final plan.

Step 2: Extract the first service

I always start with the most isolated service — typically something like notifications, email sending, or file uploads. Here's the extraction prompt:

We're extracting the notification service from this monolith.

Current location: src/notifications/
Current dependencies: [paste list]
Database tables used: notifications, notification_preferences

Create:
1. A new Express service with its own package.json
2. REST API endpoints mirroring current internal function calls
3. An async message queue interface for fire-and-forget notifications
4. Database migration to copy notification tables to a separate DB
5. An adapter layer in the monolith that calls the new service instead of the old code
6. Feature flag to switch between old and new implementation

The monolith should still pass all existing tests after this change.

The feature flag is critical — it lets you deploy and test in production with real traffic before cutting over completely.
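A minimal sketch of what that adapter-plus-flag layer might look like in the monolith. All names here (`sendNotification`, `USE_NOTIFICATION_SERVICE`, the two implementations) are illustrative assumptions, not the actual generated code:

```typescript
// Callers in the monolith keep calling sendNotification() as before;
// an environment-variable flag decides whether the legacy in-process
// code or the new HTTP service handles the call.

type Notification = { userId: string; message: string };

// Stand-in for the existing in-process implementation.
async function legacySend(n: Notification): Promise<string> {
  return `legacy:${n.userId}`;
}

// Stand-in for an HTTP call to the extracted notification service.
async function serviceSend(n: Notification): Promise<string> {
  // In real code: fetch(`${NOTIFICATION_SERVICE_URL}/notifications`, ...)
  return `service:${n.userId}`;
}

// Read the flag on every call so cutover (or rollback) needs no redeploy.
function useNewService(): boolean {
  return process.env.USE_NOTIFICATION_SERVICE === "true";
}

async function sendNotification(n: Notification): Promise<string> {
  return useNewService() ? serviceSend(n) : legacySend(n);
}
```

Because the flag is evaluated per call, flipping it back is an instant rollback, which is exactly what you want while validating against production traffic.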

Step 3: Set up inter-service communication

Once you have more than 2 services, you need a consistent communication pattern. I use this prompt to generate the boilerplate:

# Generate service client for any extracted service
cat > generate-client.prompt << 'EOF'
Given this service's OpenAPI spec (or Express routes), generate:
1. A TypeScript client with full type safety
2. Retry logic with exponential backoff
3. Circuit breaker pattern for resilience
4. Distributed tracing headers (OpenTelemetry)
5. Unit tests mocking the HTTP calls

The client should work in both the monolith and other microservices.
EOF

cat services/notification/routes.js | claude-code generate-client.prompt
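The retry and circuit-breaker behavior that prompt asks for can be sketched roughly like this. The class name, thresholds, and backoff schedule are illustrative assumptions, not the actual generated client:

```typescript
// Wraps any async call with retry (exponential backoff) and a simple
// circuit breaker that fails fast after a run of consecutive failures.

class CircuitOpenError extends Error {}

class ResilientCaller {
  private failures = 0;
  private openUntil = 0;

  constructor(
    private maxRetries = 3,
    private baseDelayMs = 100,
    private failureThreshold = 5,
    private cooldownMs = 30_000,
  ) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (Date.now() < this.openUntil) {
      throw new CircuitOpenError("circuit open; failing fast");
    }
    let lastErr: unknown;
    for (let attempt = 0; attempt <= this.maxRetries; attempt++) {
      try {
        const result = await fn();
        this.failures = 0; // any success closes the circuit
        return result;
      } catch (err) {
        lastErr = err;
        this.failures++;
        if (this.failures >= this.failureThreshold) {
          // Too many consecutive failures: open the circuit for a cooldown.
          this.openUntil = Date.now() + this.cooldownMs;
          break;
        }
        // Exponential backoff: baseDelayMs, 2x, 4x, ...
        await new Promise((r) => setTimeout(r, this.baseDelayMs * 2 ** attempt));
      }
    }
    throw lastErr;
  }
}
```

The circuit breaker matters more than the retries: without it, a struggling downstream service gets hammered by every caller's retry loop at once.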

Step 4: Handle the data split

Data is always the hardest part. Claude Code helps you write the migration scripts:

We need to split the users table. Currently it contains:
- Core user data (auth, profile)
- Billing data (subscription, payment methods)
- Preferences (notification settings, UI config)

Generate:
1. Schema for 3 separate tables/databases
2. Migration script that copies data with zero downtime
3. Dual-write code that keeps both databases in sync during transition
4. Verification query to confirm data consistency
5. Cutover script to switch to new schema

Constraints: Postgres, no downtime, rollback must be possible at any point.

The dual-write phase is key — you write to both old and new schema simultaneously for 2-4 weeks, giving you time to verify the new schema is correct before cutting the old one off.
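As a rough illustration of the dual-write step, here is a sketch using in-memory maps as stand-ins for the old and new Postgres databases. Every name here is hypothetical; in real code the writes would go through two database clients and the mirror failure would be reported to monitoring:

```typescript
// During the transition, every write hits the old schema first (still
// the source of truth), then is mirrored to the new schema. A mirror
// failure is logged, never thrown, so the legacy path keeps working.

type Prefs = { userId: string; emailOptIn: boolean };

const oldUsersTable = new Map<string, Prefs>(); // legacy schema stand-in
const newPrefsTable = new Map<string, Prefs>(); // extracted schema stand-in

async function writePreferences(p: Prefs): Promise<void> {
  oldUsersTable.set(p.userId, p); // source of truth during transition
  try {
    newPrefsTable.set(p.userId, p); // best-effort mirror
  } catch (err) {
    console.error("dual-write mirror failed", err);
  }
}

// Verification pass: count rows that disagree between the two stores.
// Run this repeatedly during the dual-write window; cut over only when
// it stays at zero.
function countMismatches(): number {
  let mismatches = 0;
  for (const [id, prefs] of oldUsersTable) {
    const mirrored = newPrefsTable.get(id);
    if (!mirrored || mirrored.emailOptIn !== prefs.emailOptIn) mismatches++;
  }
  return mismatches;
}
```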

Step 5: Generate the service mesh config

Once you have 5+ services, you need service discovery and load balancing. Claude Code can generate your entire infrastructure config:

Generate Docker Compose for local development with:
- All extracted services with their environment variables
- Nginx reverse proxy with routing rules
- Redis for session sharing between services
- RabbitMQ for async communication
- Jaeger for distributed tracing
- Each service health check endpoint

Also generate the production Kubernetes manifests for:
- Deployments with rolling update strategy
- Services and Ingress rules
- HorizontalPodAutoscaler based on CPU/memory
- ConfigMaps for each environment
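For a sense of what the local-development half of that output looks like, here is a small fragment of such a Compose file. Service names, ports, and images are illustrative assumptions, not the actual generated config:

```yaml
# Illustrative docker-compose.yml fragment: one extracted service plus
# the shared infrastructure the prompt asks for.
services:
  notification:
    build: ./services/notification
    environment:
      - DATABASE_URL=postgres://postgres:postgres@notification-db:5432/notifications
      - RABBITMQ_URL=amqp://rabbitmq:5672
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/healthz"]
      interval: 10s
      timeout: 3s
      retries: 3
  rabbitmq:
    image: rabbitmq:3-management
  jaeger:
    image: jaegertracing/all-in-one
```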

The rate limit reality

Here's what nobody tells you about AI-assisted migrations: they're long sessions.

A single service extraction — prompting, reviewing, iterating on the generated code, running tests, debugging edge cases — takes 3-6 hours of Claude interaction. A full monolith migration might be 20-40 such sessions.

If you're hitting rate limits in the middle of extracting a service (right when you've built up the full context of what you're working on), you lose hours of productivity waiting for limits to reset.

I ran into this constantly with Claude Pro. The solution I use now is SimplyLouie — a $2/month flat-rate Claude API wrapper. No per-session limits, no quota exhaustion mid-migration. You just... keep working.

For a migration project that's going to take months, paying $2/month instead of $20/month (and still hitting limits) is an obvious choice.

The full migration timeline

Here's what a realistic timeline looks like with this workflow:

| Phase | Duration | Claude sessions |
| --- | --- | --- |
| Dependency mapping | 1 day | 2-3 |
| First service extraction | 3-5 days | 5-8 |
| Data split planning | 2-3 days | 3-4 |
| Each subsequent service | 2-4 days | 4-6 |
| Service mesh setup | 1-2 days | 2-3 |
| Final cutover | 1 day | 1-2 |

With 10 services to extract, that's 50-80+ Claude sessions. Having unlimited flat-rate access matters a lot.

What I've learned

Start with data: The service boundary map is only useful if you also plan the data split upfront. Code is easy to move; database splits take weeks.

Feature flags everywhere: Every extraction should ship behind a feature flag. This lets you test with real production traffic without committing to the new architecture.

Extract leaf nodes first: Services with no dependencies on other services are always safest to extract first. Build confidence before tackling the complex ones.

Keep the monolith working: The adapter layer pattern means your existing tests still pass throughout the migration. This is your safety net.


If you're about to start a monolith migration and want to do it without rate limit interruptions, SimplyLouie is worth trying. 7-day free trial, then $2/month.

What's the biggest monolith you've had to break apart? Drop it in the comments — I'd love to hear how it went.
