From monolith to modular monolith to microservices: realistic migration patterns

I've been part of three "microservices migrations" over the past decade. Two failed spectacularly. The third succeeded — but only because we stopped trying to build microservices and started thinking about boundaries first.

The pattern I see repeatedly: teams jump straight from a messy monolith to a distributed system, skip the hard work of understanding their domain, and end up with a distributed monolith that's worse than what they started with. More latency, more complexity, same coupling — just across network boundaries now.

This article walks through the realistic path we took: monolith → modular monolith → selective microservices extraction. It's slower, less exciting, and it actually works.

Why Most Microservices Migrations Fail

Let's be honest about what's happening out there. Teams read about Netflix and Uber, get excited, and start splitting services. Six months later, they have 15 services that all deploy together, share a database, and require synchronized releases. Congratulations — you've built a distributed monolith with extra network hops.

The failure modes I keep seeing:

Over-decomposition too early. You don't understand your domain boundaries yet, so you guess. Those guesses become service boundaries. Now refactoring requires coordinated deployments across teams. The cost of being wrong is 10x higher than in a monolith.

Distributed monolith hell. Services call each other synchronously in long chains. One slow service brings down everything. You didn't remove coupling — you just made it harder to see and debug.

Team readiness gaps. Microservices require operational maturity: CI/CD pipelines, observability, on-call rotations, service ownership. If your team struggles to deploy a monolith reliably, distributing it won't help.

Shared database addiction. "We'll share the database for now and split it later." Later never comes. Now you have multiple services with intimate knowledge of each other's schema, and you've achieved nothing.

The brutal truth: if you can't build a well-structured monolith, you definitely can't build well-structured microservices.

Step 1: Modularize the Monolith First

Before extracting anything, prove you understand your domain by organizing the monolith into clear modules. This is where Domain-Driven Design actually pays off — not as architecture astronautics, but as a practical tool for finding boundaries.

Identifying Bounded Contexts

Sit down with domain experts (product managers, senior engineers who've been around) and map out the core subdomains:

  • Core domain: What makes your business unique? This changes frequently and needs the most investment.
  • Supporting subdomains: Necessary but not differentiating. Auth, notifications, and billing (if you're not a fintech).
  • Generic subdomains: Solved problems. Use off-the-shelf solutions when possible.

For an e-commerce platform, this might look like:

Core Domain:
  - Product Catalog (pricing rules, inventory, variants)
  - Order Fulfillment (picking, packing, shipping logic)

Supporting Subdomains:
  - User Management (accounts, preferences)
  - Payments (integration with providers)
  - Notifications (email, SMS, push)

Generic Subdomains:
  - Authentication (use Auth0, Cognito, etc.)
  - File Storage (S3, GCS)

Refactoring Into Modules

Now restructure your codebase to reflect these boundaries. In Go, this means proper package organization with explicit interfaces between modules:

// Before: everything imports everything
package main

import (
    "myapp/db"
    "myapp/handlers"
    "myapp/models"
    "myapp/utils"
)

// After: domain-oriented structure with clear boundaries
//
// /internal
//   /catalog
//     /domain      (entities, value objects, repository interfaces)
//     /app         (application services, use cases)
//     /infra       (repository implementations, external adapters)
//     /api         (HTTP handlers for this module)
//   /orders
//     /domain
//     /app
//     /infra
//     /api
//   /users
//     ...
//   /shared        (truly shared kernel - be very conservative here)

The key rule: modules communicate through explicit interfaces, not by reaching into each other's internals.

// internal/catalog/domain/repository.go
package domain

import (
    "context"
    "errors"
)

// ErrInsufficientStock is returned by Product.Reserve.
var ErrInsufficientStock = errors.New("insufficient stock")

type ProductRepository interface {
    FindByID(ctx context.Context, id ProductID) (*Product, error)
    FindByCategory(ctx context.Context, categoryID CategoryID) ([]*Product, error)
    Save(ctx context.Context, product *Product) error
}

// internal/catalog/domain/product.go
type Product struct {
    ID          ProductID
    Name        string
    Price       Money
    CategoryID  CategoryID
    Stock       int
    Active      bool
    // ... domain logic methods
}

func (p *Product) Reserve(quantity int) error {
    if p.Stock < quantity {
        return ErrInsufficientStock
    }
    p.Stock -= quantity
    return nil
}
// internal/orders/app/service.go
package app

import (
    "context"
    "fmt"
)

// Orders module depends on Catalog through an interface,
// not by importing catalog's internal types directly
type ProductChecker interface {
    CheckAvailability(ctx context.Context, productID string, quantity int) (bool, error)
    GetPrice(ctx context.Context, productID string) (int64, error)
}

type OrderService struct {
    orderRepo      OrderRepository
    productChecker ProductChecker  // injected, implemented by catalog module
    eventPublisher EventPublisher
}

func (s *OrderService) CreateOrder(ctx context.Context, req CreateOrderRequest) (*Order, error) {
    // Validate product availability through the interface
    for _, item := range req.Items {
        available, err := s.productChecker.CheckAvailability(ctx, item.ProductID, item.Quantity)
        if err != nil {
            return nil, fmt.Errorf("checking availability: %w", err)
        }
        if !available {
            return nil, ErrProductNotAvailable
        }
    }

    // Create the order within this module's boundary
    order := NewOrder(req.CustomerID, req.Items)

    if err := s.orderRepo.Save(ctx, order); err != nil {
        return nil, fmt.Errorf("saving order: %w", err)
    }

    // Publish event for other modules to react
    s.eventPublisher.Publish(ctx, OrderCreatedEvent{
        OrderID:    order.ID,
        CustomerID: order.CustomerID,
        Items:      order.Items,
        CreatedAt:  order.CreatedAt,
    })

    return order, nil
}
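
A note on EventPublisher: inside a modular monolith it doesn't have to mean a message broker. A simple in-process dispatcher is enough to start with; here's a sketch, with the Publish(ctx, event) shape inferred from the usage above:

// internal/shared/events/bus.go - a minimal in-process event bus.
// The Publish shape is inferred from the OrderService usage above; swap it
// for a real broker only when a module actually gets extracted.
package events

import (
    "context"
    "fmt"
    "sync"
)

type Handler func(ctx context.Context, event any) error

type InProcessBus struct {
    mu       sync.RWMutex
    handlers map[string][]Handler // keyed by concrete event type, e.g. "app.OrderCreatedEvent"
}

func NewInProcessBus() *InProcessBus {
    return &InProcessBus{handlers: make(map[string][]Handler)}
}

// Subscribe registers a handler for the concrete type of the given event.
func (b *InProcessBus) Subscribe(event any, h Handler) {
    key := fmt.Sprintf("%T", event)
    b.mu.Lock()
    defer b.mu.Unlock()
    b.handlers[key] = append(b.handlers[key], h)
}

// Publish dispatches the event synchronously to every subscribed handler.
func (b *InProcessBus) Publish(ctx context.Context, event any) error {
    key := fmt.Sprintf("%T", event)
    b.mu.RLock()
    hs := append([]Handler(nil), b.handlers[key]...)
    b.mu.RUnlock()

    for _, h := range hs {
        if err := h(ctx, event); err != nil {
            return err
        }
    }
    return nil
}

When a module is later extracted, the publisher implementation becomes a broker client, but the calling code stays the same.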

Enforcing Boundaries With Build Constraints

Don't rely on code review alone to enforce module boundaries. Use tooling:

// internal/catalog/.import-restrictions
// Using a tool like go-import-lint or custom linting rules:
//
// allowed_imports:
//   - "myapp/internal/shared"
//   - "myapp/pkg/*"
// forbidden_imports:
//   - "myapp/internal/orders/*"
//   - "myapp/internal/users/*"

Or use Go's internal package convention more aggressively:

/internal
  /catalog
    /internal       <- only catalog can import this
      /persistence
      /adapters
    /api            <- exposed to main, other modules use interfaces
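
If you'd rather not add another linter, a plain Go test can police the same rule. A minimal sketch, assuming the module path is myapp and the layout shown above:

// boundaries_test.go - fails the build if catalog starts depending on
// other modules. Module path and package names match the layout above.
package boundaries

import (
    "os/exec"
    "strings"
    "testing"
)

func TestCatalogImportBoundaries(t *testing.T) {
    // `go list -deps` prints the catalog packages plus their full
    // transitive dependency list, one import path per line.
    out, err := exec.Command("go", "list", "-deps", "myapp/internal/catalog/...").Output()
    if err != nil {
        t.Fatalf("go list failed: %v", err)
    }

    for _, dep := range strings.Split(strings.TrimSpace(string(out)), "\n") {
        if strings.HasPrefix(dep, "myapp/internal/orders") ||
            strings.HasPrefix(dep, "myapp/internal/users") {
            t.Errorf("catalog must not depend on %s", dep)
        }
    }
}

Run it in CI with the rest of the suite; a boundary violation then fails the build instead of slipping through review.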

At this stage, you still have one deployable, one database, one repo. But the code is organized around domain boundaries, and each module has a clear public interface.
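
Wiring still happens in a single composition root. Here's a rough sketch of what main can look like at this stage; every constructor and adapter name below is illustrative rather than taken from a real codebase:

// cmd/api/main.go - one binary, one database; modules see each other
// only through their interfaces. Constructor names are hypothetical.
package main

import (
    "database/sql"
    "log"
    "net/http"
    "os"

    _ "github.com/lib/pq"

    catalogapi "myapp/internal/catalog/api"
    catalogapp "myapp/internal/catalog/app"
    cataloginfra "myapp/internal/catalog/infra"
    ordersapi "myapp/internal/orders/api"
    ordersapp "myapp/internal/orders/app"
    ordersinfra "myapp/internal/orders/infra"
    "myapp/internal/shared/events"
)

func main() {
    db, err := sql.Open("postgres", os.Getenv("DATABASE_URL"))
    if err != nil {
        log.Fatal(err)
    }

    // Each module assembles its own internals; only interfaces cross
    // module boundaries.
    catalogSvc := catalogapp.NewService(cataloginfra.NewProductRepository(db))

    ordersSvc := ordersapp.NewService(
        ordersinfra.NewOrderRepository(db),
        // Orders sees catalog only as a ProductChecker; this adapter
        // satisfies that interface.
        catalogapp.NewProductCheckerAdapter(catalogSvc),
        events.NewInProcessBus(), // the in-process bus sketched earlier
    )

    mux := http.NewServeMux()
    catalogapi.Register(mux, catalogSvc)
    ordersapi.Register(mux, ordersSvc)

    log.Fatal(http.ListenAndServe(":8080", mux))
}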

Spend 3-6 months here. Seriously. Refactoring a monolith is cheap compared to refactoring distributed services. Every boundary you get wrong now becomes a distributed transaction problem later.

Step 2: The Strangler Fig Pattern Done Right

Once your monolith is modular, you can start extracting services — but incrementally, with minimal risk. The Strangler Fig pattern is your friend here.

The idea: place a proxy in front of your monolith, route new or refactored functionality to new services, and gradually "strangle" the old code until you can delete it.

Setting Up the Facade

Start with an API gateway or reverse proxy in front of everything. Kong, Envoy, Traefik — pick one based on your team's familiarity.

# traefik dynamic configuration example
http:
  routers:
    # New auth service handles authentication
    auth-router:
      rule: "PathPrefix(`/api/v2/auth`)"
      service: auth-service
      priority: 100

    # New catalog service handles product endpoints
    catalog-router:
      rule: "PathPrefix(`/api/v2/products`)"
      service: catalog-service
      priority: 100

    # Everything else still goes to monolith
    legacy-router:
      rule: "PathPrefix(`/api`)"
      service: legacy-monolith
      priority: 1

  services:
    auth-service:
      loadBalancer:
        servers:
          - url: "http://auth-svc:8080"

    catalog-service:
      loadBalancer:
        servers:
          - url: "http://catalog-svc:8080"

    legacy-monolith:
      loadBalancer:
        servers:
          - url: "http://monolith:8080"

Choosing What to Extract First

Don't extract randomly. Prioritize by:

  1. High change velocity: Modules that change frequently benefit most from independent deployment.
  2. Clear boundaries: Modules with minimal dependencies on others are easier to extract.
  3. Scaling needs: One part of the system needs to scale differently from the rest.
  4. Team ownership: A team that wants to own a service end-to-end.

For most systems, authentication/authorization is a great first candidate: clear boundaries, a well-understood domain, and it often needs a different security posture.

Anti-Corruption Layers

When extracting a service, you'll need to translate between the old monolith's data models and your new clean domain models. This is where Anti-Corruption Layers (ACL) save you from inheriting legacy baggage.

// internal/catalog/infra/legacy_adapter.go
package infra

import (
    "context"
    "fmt"
    "net/http"
    "strings"

    "myapp/internal/catalog/domain"
)

// LegacyProductAdapter translates between legacy monolith format
// and our clean domain model
type LegacyProductAdapter struct {
    legacyClient *http.Client
    baseURL      string
}

// The legacy system returns this mess
type LegacyProductResponse struct {
    ProdID       int     `json:"prod_id"`
    ProdName     string  `json:"prod_name"`
    ProdDesc     string  `json:"prod_desc"`
    PriceInCents int64   `json:"price_cents"`
    CatID        int     `json:"cat_id"`
    IsActive     int     `json:"is_active"`  // 0 or 1, not bool
    QtyOnHand    int     `json:"qty_on_hand"`
    // ... 20 more fields we don't care about
}

// Translate to our clean domain model
func (a *LegacyProductAdapter) GetProduct(ctx context.Context, id string) (*domain.Product, error) {
    resp, err := a.fetchFromLegacy(ctx, id)
    if err != nil {
        return nil, err
    }

    // ACL: translate legacy format to domain model
    return &domain.Product{
        ID:         domain.ProductID(fmt.Sprintf("%d", resp.ProdID)),
        Name:       strings.TrimSpace(resp.ProdName),
        Price:      domain.Money{Amount: resp.PriceInCents, Currency: "USD"},
        CategoryID: domain.CategoryID(fmt.Sprintf("%d", resp.CatID)),
        Stock:      resp.QtyOnHand,
        Active:     resp.IsActive == 1,
    }, nil
}

The ACL lives in your new service and handles all the ugliness of legacy integration. Your domain model stays clean.

Feature Flags as Your Safety Net

Never extract a service without feature flags. They're your undo button.

// Feature flag configuration
type FeatureFlags struct {
    client *launchdarkly.Client  // or unleash, flipt, etc.
}

func (f *FeatureFlags) UseNewCatalogService(ctx context.Context, userID string) bool {
    // Flag SDKs typically return (value, error); fall back to the
    // legacy path if evaluation fails for any reason.
    enabled, err := f.client.BoolVariation(
        "use-new-catalog-service",
        ldcontext.New(userID),
        false, // default to legacy
    )
    if err != nil {
        return false
    }
    return enabled
}
// In your API gateway or application code
func (h *ProductHandler) GetProduct(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context()
    userID := getUserID(ctx)
    productID := chi.URLParam(r, "id")

    var product *Product
    var err error

    if h.flags.UseNewCatalogService(ctx, userID) {
        // Route to new service
        product, err = h.newCatalogClient.GetProduct(ctx, productID)
    } else {
        // Legacy path
        product, err = h.legacyProductService.GetProduct(ctx, productID)
    }

    if err != nil {
        // handle error
        return
    }

    respondJSON(w, product)
}

Migration Strategy With Feature Flags

  1. Dark launch (0%): Deploy new service, no traffic. Validate it starts, passes health checks.

  2. Shadow traffic (0% live, 100% shadowed): Send copies of requests to new service, compare responses. Don't serve shadow responses to users.

func (h *ProductHandler) GetProductWithShadow(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context()
    productID := chi.URLParam(r, "id")

    // Always call legacy for the real response
    product, err := h.legacyProductService.GetProduct(ctx, productID)

    // Shadow call to new service (async, don't block)
    go func() {
        shadowCtx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        newProduct, newErr := h.newCatalogClient.GetProduct(shadowCtx, productID)

        // Compare and log differences
        h.comparator.Compare(product, err, newProduct, newErr)
    }()

    if err != nil {
        // handle error
        return
    }

    respondJSON(w, product)
}

  3. Canary (1-5%): Route a small percentage of real traffic to the new service. Monitor error rates and latency.

  4. Progressive rollout (5% → 25% → 50% → 100%): Gradually increase traffic. Have rollback ready.

  5. Cleanup: Once at 100% for 2+ weeks with no issues, remove the feature flag and the legacy code path.
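
For reference, the comparator called in the shadow-traffic handler above doesn't need to be fancy. Logging mismatches and counting them is enough to decide when the new service is trustworthy; a minimal sketch, with assumed type and field names:

// shadow_comparator.go - records disagreements between the legacy and
// new code paths during shadow traffic. Lives next to the handler above;
// the package name and fields are assumptions.
package handlers

import (
    "log/slog"
    "reflect"
    "sync/atomic"
)

type ResponseComparator struct {
    logger     *slog.Logger
    mismatches atomic.Int64
}

func (c *ResponseComparator) Compare(oldP *Product, oldErr error, newP *Product, newErr error) {
    // A disagreement on errors is itself a mismatch worth investigating.
    if (oldErr != nil) != (newErr != nil) {
        c.mismatches.Add(1)
        c.logger.Warn("shadow error mismatch", "legacy_err", oldErr, "new_err", newErr)
        return
    }
    if oldErr != nil {
        return // both paths failed; nothing more to compare
    }

    if !reflect.DeepEqual(oldP, newP) {
        c.mismatches.Add(1)
        c.logger.Warn("shadow response mismatch",
            "product_id", oldP.ID, "legacy", oldP, "new", newP)
    }
}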

When to Stop at Modular Monolith

Here's the contrarian take: for many teams, the modular monolith is the destination, not a waypoint.

Stay with modular monolith if:

  • Your team is under 30-50 engineers
  • You don't have extreme scaling needs (different parts needing 10x different resources)
  • You can deploy frequently enough (daily is fine for most)
  • You don't have regulatory requirements forcing isolation
  • You value productivity over architectural purity

Move to microservices when:

  • Different modules genuinely need different scaling profiles
  • Teams are large enough (50+) that coordination overhead justifies distribution
  • You have the operational maturity (CI/CD, observability, on-call) to handle it
  • Compliance requires isolation (PCI-DSS for payments, HIPAA for health data)

I've seen 200-person engineering orgs run successfully on a modular monolith. I've seen 30-person teams drown in microservices complexity. Team size and operational maturity matter more than technical elegance.

Real Pitfalls and How We Handled Them

Database Coupling: The Shared DB Trap

Our biggest mistake in migration #2: we extracted services but left them pointing at the same PostgreSQL instance. "We'll split the database later."

The problem: services still had implicit coupling through foreign keys, shared tables, transactions that spanned service boundaries. We had all the operational complexity of microservices with none of the benefits.

The fix: Extract the database with the service, or don't extract the service yet.

// During migration: dual-write pattern
func (s *OrderService) CreateOrder(ctx context.Context, req CreateOrderRequest) (*Order, error) {
    order := NewOrder(req.CustomerID, req.Items)

    // Write to the new order service's database first - it is the source of truth
    if err := s.orderRepo.Save(ctx, order); err != nil {
        return nil, err
    }

    // Also write to legacy database for services still depending on it
    if err := s.legacySync.SyncOrder(ctx, order); err != nil {
        // Log but don't fail - legacy is secondary
        s.logger.Warn("failed to sync to legacy", "order_id", order.ID, "err", err)
    }

    return order, nil
}

Dual-write during migration, then cut over consumers one by one, then remove legacy sync.
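
The legacySync dependency above is just an anti-corruption layer pointing the other way: it translates the new model back into the legacy schema. A minimal sketch, with invented table and column names:

// internal/orders/infra/legacy_sync.go - writes new-model orders back into
// the legacy schema so un-migrated consumers keep working.
// Table and column names are invented; Order's location is an assumption.
package infra

import (
    "context"
    "database/sql"

    "myapp/internal/orders/app"
)

type LegacyOrderSync struct {
    legacyDB *sql.DB
}

func (s *LegacyOrderSync) SyncOrder(ctx context.Context, order *app.Order) error {
    _, err := s.legacyDB.ExecContext(ctx,
        `INSERT INTO orders (order_id, customer_id, created_at)
         VALUES ($1, $2, $3)
         ON CONFLICT (order_id) DO NOTHING`, // idempotent on retries
        order.ID, order.CustomerID, order.CreatedAt,
    )
    return err
}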

Eventual Consistency Surprises

In a monolith, you rely on database transactions for consistency. In microservices, you get eventual consistency whether you planned for it or not.

Our catalog service would update a product price. The order service would read the old price from its cache. Customers saw inconsistent pricing for 30-60 seconds.

The fix: Design for eventual consistency from the start. Use events + idempotent handlers.

// Catalog service publishes price change event
type ProductPriceUpdatedEvent struct {
    ProductID string    `json:"product_id"`
    OldPrice  int64     `json:"old_price"`
    NewPrice  int64     `json:"new_price"`
    UpdatedAt time.Time `json:"updated_at"`
}

// Order service subscribes and invalidates cache
func (h *PriceUpdateHandler) Handle(ctx context.Context, event ProductPriceUpdatedEvent) error {
    // Either invalidate the cache entry...
    h.priceCache.Delete(event.ProductID)

    // ...or write the new price straight into it (pick one strategy; both shown here)
    h.priceCache.Set(event.ProductID, event.NewPrice, event.UpdatedAt)

    return nil
}

Also: show eventual consistency to users honestly. "Price confirmed at checkout" instead of pretending prices are always real-time.
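
One practical consequence is re-validating the price at the moment of commitment instead of trusting a possibly stale cache. A sketch, where ConfirmOrder and the UnitPrice field are illustrative additions to the order module:

// internal/orders/app/checkout.go - confirm prices when the customer
// commits. ErrPriceChanged and UnitPrice are illustrative names.
package app

import (
    "context"
    "errors"
    "fmt"
)

var ErrPriceChanged = errors.New("price changed since the item was added")

func (s *OrderService) ConfirmOrder(ctx context.Context, order *Order) error {
    for _, item := range order.Items {
        currentPrice, err := s.productChecker.GetPrice(ctx, item.ProductID)
        if err != nil {
            return fmt.Errorf("confirming price for %s: %w", item.ProductID, err)
        }
        // UnitPrice is the price the customer saw when adding the item.
        if currentPrice != item.UnitPrice {
            return fmt.Errorf("%w: product %s", ErrPriceChanged, item.ProductID)
        }
    }
    return s.orderRepo.Save(ctx, order)
}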

Testing Across Boundaries

Integration testing in a monolith is easy — spin up the app, hit endpoints. With services, you need contract testing to avoid the "works on my machine, breaks in production" problem.

// Using Pact for contract testing

// Consumer side (order service)
func TestOrderService_CatalogContract(t *testing.T) {
    pact := dsl.Pact{
        Consumer: "order-service",
        Provider: "catalog-service",
    }
    defer pact.Teardown()

    pact.AddInteraction().
        Given("product 123 exists").
        UponReceiving("a request for product 123").
        WithRequest(dsl.Request{
            Method: "GET",
            Path:   dsl.String("/api/products/123"),
        }).
        WillRespondWith(dsl.Response{
            Status: 200,
            Body: dsl.MapMatcher{
                "id":    dsl.String("123"),
                "name":  dsl.Like("Widget"),
                "price": dsl.Like(1999),
            },
        })

    // Test against the mock provider that Pact spins up
    err := pact.Verify(func() error {
        // pact-go v1 exposes the mock server's port, not a full URL
        mockURL := fmt.Sprintf("http://localhost:%d", pact.Server.Port)
        client := NewCatalogClient(mockURL)
        product, err := client.GetProduct(context.Background(), "123")
        if err != nil {
            return err
        }
        assert.Equal(t, "123", product.ID)
        return nil
    })

    assert.NoError(t, err)
}

The contract gets verified against the real catalog service in CI. If catalog changes its response format, the contract test fails before you deploy.
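
On the provider side, the catalog service replays those pacts against a running instance of itself. Here's a rough sketch using the same pact-go v1 DSL; field names vary a bit between SDK versions, and seedProduct is a hypothetical test helper:

// Provider side (catalog service) - replays consumer pacts against a
// locally running instance of the service.
func TestCatalogService_HonoursConsumerPacts(t *testing.T) {
    pact := dsl.Pact{Provider: "catalog-service"}

    // Assumes the catalog service (or a test build of it) is already
    // listening on localhost:8080.
    _, err := pact.VerifyProvider(t, types.VerifyRequest{
        ProviderBaseURL: "http://localhost:8080",
        PactURLs:        []string{"./pacts/order-service-catalog-service.json"},
        StateHandlers: types.StateHandlers{
            // Matches the Given(...) in the consumer test above.
            "product 123 exists": func() error {
                return seedProduct("123", "Widget", 1999) // hypothetical helper
            },
        },
    })
    assert.NoError(t, err)
}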

The Timeline That Worked

Here's roughly how our successful migration went:

Months 1-2: Assessment

  • Mapped domain boundaries with product and engineering leads
  • Identified candidate modules for extraction
  • Set up observability (we couldn't extract what we couldn't measure)

Months 3-6: Modularization

  • Restructured codebase into domain modules
  • Introduced interfaces between modules
  • Added integration tests at module boundaries
  • Still one deployable, one database

Months 7-8: First Extraction (Auth)

  • Deployed auth service alongside monolith
  • Strangler Fig routing through Traefik
  • Shadow traffic for 2 weeks, canary for 2 weeks
  • Full cutover, deleted legacy auth code

Months 9-12: Second Extraction (Catalog)

  • More complex due to data volume
  • Dual-write pattern during migration
  • Contract tests between order service and catalog
  • Event-driven cache invalidation

Month 13+: Ongoing

  • Evaluate each module: extract or stay in modular monolith?
  • Most modules stayed in monolith — good enough
  • Extracted only what truly needed independent scaling

Conclusion

The path from monolith to microservices isn't a straight line, and it definitely isn't a weekend project. The teams that succeed treat it as a multi-month journey with explicit phases:

  1. Modularize first: Prove you understand your domain by organizing the monolith well. This is where you learn boundaries cheaply.

  2. Strangle incrementally: Use the Strangler Fig pattern with proper routing, feature flags, and shadow traffic. Never big-bang.

  3. Know when to stop: The modular monolith is a valid end state. Not every module needs to be a service.

The goal isn't microservices — it's sustainable software that your team can evolve safely. Sometimes that's microservices. Often it's something simpler.

Key takeaways:

  • Spend months modularizing before extracting anything
  • Use DDD to find real boundaries, not guessed ones
  • Strangler Fig pattern with feature flags gives you safe rollback
  • Anti-corruption layers keep legacy mess out of new services
  • Contract testing prevents integration nightmares
  • Most teams should stop at modular monolith
  • Extract only what genuinely needs independent scaling or deployment
