Gabriel Anhaia

Migrating a Large Go Service to Hexagonal Without a Rewrite


You know the plan. Someone on the team reads about hexagonal architecture, draws a ports-and-adapters diagram on a whiteboard, and proposes a rewrite. Two sprints of design. A feature freeze. A new repo. Three months later the rewrite is half-done, the old service is still in prod, and both need bug fixes.

The rewrite never finishes. It gets deprioritized because the business needs features, not architecture astronautics. The team goes back to the original codebase, now slightly more demoralized.

There is a different approach. Four incremental steps. Each one is a single pull request. Each PR compiles, passes every existing test, and ships to production before you start the next one. No feature freeze. No new repo. No rewrite.

A team I know ran this sequence on a large order-processing service. They shipped all four steps across a few weeks while delivering normal sprint work. By the end, the service had clean domain isolation, and no one outside the team noticed anything changed.

What you start with

The typical Go service that grew organically. Handlers call the database directly. Business logic lives in the HTTP layer. The directory tree looks something like this:

myservice/
├── main.go
├── handlers/
│   ├── order.go       # 800 lines, SQL + validation + HTTP
│   ├── customer.go
│   └── health.go
├── models/
│   ├── order.go       # structs with json + sql tags
│   └── customer.go
├── db/
│   ├── connection.go
│   └── migrations/
└── config/
    └── config.go

The handlers/order.go file does everything. It parses the HTTP request, validates the input, runs business rules, queries PostgreSQL, formats the response. If you want to test the discount calculation, you need a running database.

// handlers/order.go — the before state
func (h *Handler) CreateOrder(
    w http.ResponseWriter,
    r *http.Request,
) {
    var req struct {
        CustomerID string  `json:"customer_id"`
        Items      []Item  `json:"items"`
        CouponCode string  `json:"coupon_code"`
    }
    if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
        http.Error(w, "bad request", 400)
        return
    }

The handler decodes the request, queries for a coupon discount, computes the total, and persists the order — all in one function.

    // business logic tangled with SQL
    var discount float64
    err := h.db.QueryRowContext(r.Context(),
        "SELECT discount FROM coupons WHERE code = $1",
        req.CouponCode,
    ).Scan(&discount)
    if err != nil && err != sql.ErrNoRows {
        http.Error(w, "internal error", 500)
        return
    }

    total := calculateTotal(req.Items, discount)

    _, err = h.db.ExecContext(r.Context(),
        `INSERT INTO orders 
         (customer_id, total_cents, status) 
         VALUES ($1, $2, 'pending')`,
        req.CustomerID, total,
    )
    if err != nil {
        http.Error(w, "internal error", 500)
        return
    }

    w.WriteHeader(201)
    json.NewEncoder(w).Encode(map[string]int64{
        "total_cents": total,
    })
}

Everything depends on everything. The handler imports database/sql. The model structs have both json and sql tags. Tests are slow because they need PostgreSQL. Adding a gRPC endpoint means duplicating the handler logic.

You could rewrite this from scratch. Or you could move four things, one at a time.

Step 1: Extract domain types into their own package

This is the smallest possible PR. You create an internal/domain package and move your core structs into it. No behavior yet. No interfaces. Just types.

myservice/
├── internal/
│   └── domain/
│       ├── order.go      # Order, Item, OrderStatus
│       └── customer.go   # Customer
├── handlers/
│   ├── order.go          # now imports internal/domain
│   ├── customer.go
│   └── health.go
├── models/              # still exists, still has sql tags
│   ├── order.go
│   └── customer.go
├── db/
├── config/
└── main.go

The domain structs are clean. No json tags, no sql tags, no ORM annotations. They represent what the business cares about.

// internal/domain/order.go
package domain

import "time"

type OrderStatus string

const (
    OrderPending   OrderStatus = "pending"
    OrderConfirmed OrderStatus = "confirmed"
    OrderCancelled OrderStatus = "cancelled"
)

type Order struct {
    ID         string
    CustomerID string
    Items      []Item
    TotalCents int64
    Status     OrderStatus
    CreatedAt  time.Time
}

type Item struct {
    ProductID  string
    Quantity   int
    PriceCents int64
}

The handlers still use the old models package for their SQL work. That is fine. The only change in the handlers is that any function returning data to the caller now maps from models.Order to domain.Order at the boundary. Some handlers won't need to change at all in this step — they can keep returning the old model types until step 3.

What this PR does not do: move any logic, change any behavior, delete the models package. It adds one new package with clean types. Every existing test passes without modification.

Review the imports. The domain package should import nothing but standard library basics — time, fmt, errors. If you see database/sql or encoding/json in there, something leaked in. This check is your architecture linter for the rest of the migration.

go list -f '{{.Imports}}' ./internal/domain/...
# should return: [errors fmt time] or similar
# should NOT contain: database/sql, net/http, encoding/json

Step 2: Define port interfaces at the domain boundary

A port is an interface defined in the domain package that describes what the domain needs from the outside world — without saying how.

// internal/domain/ports.go
package domain

import "context"

type OrderRepository interface {
    Save(ctx context.Context, order Order) error
    FindByID(
        ctx context.Context,
        id string,
    ) (Order, error)
    FindByCustomer(
        ctx context.Context,
        customerID string,
    ) ([]Order, error)
}

type CouponRepository interface {
    FindByCode(
        ctx context.Context,
        code string,
    ) (Coupon, error)
}

type OrderNotifier interface {
    NotifyCreated(ctx context.Context, order Order) error
}

And move the business logic into a domain service that depends on these interfaces — not on any concrete database or HTTP client.

// internal/domain/service.go
package domain

import (
    "context"
    "fmt"
)

type OrderService struct {
    orders  OrderRepository
    coupons CouponRepository
    notify  OrderNotifier
}

func NewOrderService(
    orders OrderRepository,
    coupons CouponRepository,
    notify OrderNotifier,
) *OrderService {
    return &OrderService{
        orders:  orders,
        coupons: coupons,
        notify:  notify,
    }
}

The constructor takes the three ports. CreateOrder orchestrates them:

func (s *OrderService) CreateOrder(
    ctx context.Context,
    customerID string,
    items []Item,
    couponCode string,
) (Order, error) {
    if customerID == "" {
        return Order{}, fmt.Errorf("customer ID required")
    }
    if len(items) == 0 {
        return Order{}, fmt.Errorf("at least one item required")
    }

    var discount float64
    if couponCode != "" {
        coupon, err := s.coupons.FindByCode(
            ctx, couponCode,
        )
        if err != nil {
            return Order{}, fmt.Errorf(
                "looking up coupon: %w", err,
            )
        }
        discount = coupon.Discount
    }

    order := Order{
        ID:         generateID(),
        CustomerID: customerID,
        Items:      items,
        TotalCents: calculateTotal(items, discount),
        Status:     OrderPending,
    }

    if err := s.orders.Save(ctx, order); err != nil {
        return Order{}, fmt.Errorf(
            "saving order: %w", err,
        )
    }

    _ = s.notify.NotifyCreated(ctx, order)
    return order, nil
}

The discount calculation, the validation, the order assembly — all of it lives in the domain now. The handler still works. It still calls the database directly. But the logic is also available through OrderService for any caller that wants it.

This is the PR where you can start writing fast tests.

First, the fakes. Each satisfies one port interface with zero infrastructure:

// internal/domain/service_test.go
package domain_test

import (
    "context"
    "testing"

    "myservice/internal/domain"
)

type fakeOrderRepo struct {
    saved []domain.Order
}

func (f *fakeOrderRepo) Save(
    _ context.Context,
    o domain.Order,
) error {
    f.saved = append(f.saved, o)
    return nil
}

func (f *fakeOrderRepo) FindByID(
    _ context.Context,
    id string,
) (domain.Order, error) {
    return domain.Order{}, nil
}

func (f *fakeOrderRepo) FindByCustomer(
    _ context.Context,
    _ string,
) ([]domain.Order, error) {
    return nil, nil
}

type fakeCouponRepo struct{}

func (f *fakeCouponRepo) FindByCode(
    _ context.Context,
    _ string,
) (domain.Coupon, error) {
    return domain.Coupon{Discount: 0.1}, nil
}

type fakeNotifier struct{}

func (f *fakeNotifier) NotifyCreated(
    _ context.Context,
    _ domain.Order,
) error {
    return nil
}

The test itself wires fakes into the service and checks the discount math:

func TestCreateOrder_AppliesCoupon(t *testing.T) {
    repo := &fakeOrderRepo{}
    svc := domain.NewOrderService(
        repo,
        &fakeCouponRepo{},
        &fakeNotifier{},
    )

    items := []domain.Item{
        {ProductID: "p1", Quantity: 1, PriceCents: 1000},
    }

    order, err := svc.CreateOrder(
        context.Background(),
        "cust-1",
        items,
        "SAVE10",
    )
    if err != nil {
        t.Fatalf("unexpected error: %v", err)
    }
    if order.TotalCents != 900 {
        t.Errorf(
            "got total %d, want 900",
            order.TotalCents,
        )
    }
    if len(repo.saved) != 1 {
        t.Fatalf("expected 1 saved order, got %d",
            len(repo.saved))
    }
}

No database. No Docker. Runs in microseconds. The handler still works the old way in production — you have not broken anything.

Step 3: Wrap existing code as adapters

This is the step that feels like the biggest change but breaks the least. You take the SQL code that currently lives in handlers/order.go and db/ and wrap it in a struct that satisfies the port interfaces.

myservice/
├── internal/
│   ├── domain/
│   │   ├── order.go
│   │   ├── customer.go
│   │   ├── ports.go
│   │   └── service.go
│   └── adapter/
│       ├── postgres/
│       │   ├── order_repo.go
│       │   └── coupon_repo.go
│       ├── http/
│       │   ├── order_handler.go
│       │   └── customer_handler.go
│       └── email/
│           └── notifier.go
├── handlers/          # still exists, being emptied
├── models/            # still exists, being emptied
├── db/
├── config/
└── main.go

The adapter is a thin wrapper. You are moving SQL, not rewriting it.

// internal/adapter/postgres/order_repo.go
package postgres

import (
    "context"
    "database/sql"
    "fmt"

    "myservice/internal/domain"
)

type OrderRepository struct {
    db *sql.DB
}

func NewOrderRepository(
    db *sql.DB,
) *OrderRepository {
    return &OrderRepository{db: db}
}

func (r *OrderRepository) Save(
    ctx context.Context,
    order domain.Order,
) error {
    _, err := r.db.ExecContext(ctx,
        `INSERT INTO orders 
         (id, customer_id, total_cents, status)
         VALUES ($1, $2, $3, $4)`,
        order.ID,
        order.CustomerID,
        order.TotalCents,
        string(order.Status),
    )
    if err != nil {
        return fmt.Errorf("inserting order: %w", err)
    }
    return nil
}

FindByID maps a single row back to a domain type and translates sql.ErrNoRows into a domain error:

func (r *OrderRepository) FindByID(
    ctx context.Context,
    id string,
) (domain.Order, error) {
    row := r.db.QueryRowContext(ctx,
        `SELECT id, customer_id, total_cents, 
                status, created_at
         FROM orders WHERE id = $1`, id,
    )

    var o domain.Order
    var status string
    err := row.Scan(
        &o.ID,
        &o.CustomerID,
        &o.TotalCents,
        &status,
        &o.CreatedAt,
    )
    if err == sql.ErrNoRows {
        return domain.Order{},
            domain.ErrOrderNotFound
    }
    if err != nil {
        return domain.Order{},
            fmt.Errorf("scanning order: %w", err)
    }
    o.Status = domain.OrderStatus(status)
    return o, nil
}

FindByCustomer iterates multiple rows with the same mapping:

func (r *OrderRepository) FindByCustomer(
    ctx context.Context,
    customerID string,
) ([]domain.Order, error) {
    rows, err := r.db.QueryContext(ctx,
        `SELECT id, customer_id, total_cents,
                status, created_at
         FROM orders
         WHERE customer_id = $1
         ORDER BY created_at DESC`, customerID,
    )
    if err != nil {
        return nil, fmt.Errorf(
            "querying orders: %w", err,
        )
    }
    defer rows.Close()

    var orders []domain.Order
    for rows.Next() {
        var o domain.Order
        var status string
        if err := rows.Scan(
            &o.ID,
            &o.CustomerID,
            &o.TotalCents,
            &status,
            &o.CreatedAt,
        ); err != nil {
            return nil, fmt.Errorf(
                "scanning row: %w", err,
            )
        }
        o.Status = domain.OrderStatus(status)
        orders = append(orders, o)
    }
    return orders, rows.Err()
}

The HTTP handler becomes an adapter too — a thin translation layer between HTTP and the domain service.

// internal/adapter/http/order_handler.go
package http

import (
    "encoding/json"
    "net/http"

    "myservice/internal/domain"
)

type OrderHandler struct {
    svc *domain.OrderService
}

func NewOrderHandler(
    svc *domain.OrderService,
) *OrderHandler {
    return &OrderHandler{svc: svc}
}

The Create method translates HTTP into a domain call and the domain result back into HTTP. No SQL, no business rules:

func (h *OrderHandler) Create() http.HandlerFunc {
    return func(
        w http.ResponseWriter,
        r *http.Request,
    ) {
        var req struct {
            CustomerID string `json:"customer_id"`
            Items      []struct {
                ProductID  string `json:"product_id"`
                Quantity   int    `json:"quantity"`
                PriceCents int64  `json:"price_cents"`
            } `json:"items"`
            CouponCode string `json:"coupon_code"`
        }
        if err := json.NewDecoder(
            r.Body,
        ).Decode(&req); err != nil {
            http.Error(w, "bad request", 400)
            return
        }

        items := make([]domain.Item, len(req.Items))
        for i, ri := range req.Items {
            items[i] = domain.Item{
                ProductID:  ri.ProductID,
                Quantity:   ri.Quantity,
                PriceCents: ri.PriceCents,
            }
        }

        order, err := h.svc.CreateOrder(
            r.Context(),
            req.CustomerID,
            items,
            req.CouponCode,
        )
        if err != nil {
            http.Error(w, "internal error", 500)
            return
        }

        w.WriteHeader(201)
        json.NewEncoder(w).Encode(map[string]any{
            "id":          order.ID,
            "total_cents": order.TotalCents,
        })
    }
}

Notice what happened. The handler no longer imports database/sql. It doesn't know about PostgreSQL. It calls h.svc.CreateOrder and translates the result to HTTP. If you want to add a gRPC endpoint next week, you write a new adapter that calls the same OrderService. Zero duplication.

The old handlers/ and models/ directories still exist. They still compile. You can migrate one handler at a time across multiple PRs. Each PR is reviewable in isolation.

Step 4: Move wiring to main()

The final step. main() becomes the composition root — the one place that knows about every concrete type.

// main.go
package main

import (
    "database/sql"
    "log"
    "net/http"
    "os"

    "myservice/internal/domain"
    "myservice/internal/adapter/email"
    pgadapter "myservice/internal/adapter/postgres"
    httpadapter "myservice/internal/adapter/http"

    _ "github.com/lib/pq"
)

func main() {
    db, err := sql.Open(
        "postgres",
        os.Getenv("DATABASE_URL"),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // adapters
    orderRepo := pgadapter.NewOrderRepository(db)
    couponRepo := pgadapter.NewCouponRepository(db)
    notifier := email.NewNotifier(
        os.Getenv("SMTP_URL"),
    )

    // domain
    orderSvc := domain.NewOrderService(
        orderRepo,
        couponRepo,
        notifier,
    )

Adapters feed into the domain service, and the domain service feeds into the HTTP handler. The last few lines wire the router and start listening:

    // HTTP layer
    orderHandler := httpadapter.NewOrderHandler(
        orderSvc,
    )

    mux := http.NewServeMux()
    mux.HandleFunc(
        "POST /orders",
        orderHandler.Create(),
    )

    log.Printf("listening on :8080")
    log.Fatal(http.ListenAndServe(":8080", mux))
}

Read that file top to bottom. In about fifty lines you can see the entire architecture: the database, the repos, the service, the handler. A new developer reads main.go and knows how the system is wired. No framework, no annotation scanning, no container resolution chain to trace.

The final directory tree:

myservice/
├── internal/
│   ├── domain/
│   │   ├── order.go       # types, no infrastructure imports
│   │   ├── customer.go
│   │   ├── coupon.go
│   │   ├── ports.go       # interfaces
│   │   ├── service.go     # business logic
│   │   └── service_test.go
│   └── adapter/
│       ├── postgres/
│       │   ├── order_repo.go
│       │   └── coupon_repo.go
│       ├── http/
│       │   ├── order_handler.go
│       │   └── customer_handler.go
│       └── email/
│           └── notifier.go
├── config/
│   └── config.go
└── main.go

The old handlers/, models/, and db/ directories are gone. Not deleted in one dramatic commit, but emptied gradually as each handler migrated to the new adapter structure. The last file in each directory gets removed when nothing imports it anymore; a grep for the old import paths (or an unused-package linter) confirms when that point is reached.

Why this works when rewrites don't

Each step delivers a working, deployable service. If the team gets pulled to an incident after step 2, the codebase is still better than where it started — you have clean domain types and interfaces, even if the handlers haven't migrated yet. A rewrite gives you nothing until it's finished.

The four-step sequence also avoids the coordination tax. Different team members can migrate different handlers in step 3. One person takes CreateOrder, another takes ListOrders, a third takes CancelOrder. Each migration is an independent PR that doesn't conflict with the others.

And the tests prove it works. Step 2 gives your domain logic fast unit tests. By step 3 the adapters have integration tests against a real database, and step 4 changes nothing about the end-to-end suite — same HTTP endpoints, same assertions. At no point does the test suite go red because of the migration.

The rules that keep it clean

Once the migration is done, enforce the boundary. Two checks that catch regressions before they merge:

# Add to CI: domain must not import infrastructure
# ({{.Deps}} is transitive, so indirect leaks are caught too;
#  the if-form keeps CI green under `set -e` when grep finds nothing)
if go list -f '{{.Deps}}' ./internal/domain/... \
  | grep -qE "database/sql|net/http|encoding/json"; then
  echo "FAIL: domain imports infrastructure"
  exit 1
fi
# Add to CI: adapters import domain, never the reverse
if go list -f '{{.Deps}}' ./internal/domain/... \
  | grep -q "adapter"; then
  echo "FAIL: domain depends on adapter"
  exit 1
fi

Put these in your CI pipeline. A violation that gets caught at PR review takes five minutes to fix. A violation that festers for six months takes a week.

The migration, step by step

Step | PR scope                      | What changes                                      | What doesn't
1    | Extract domain types          | New internal/domain package with clean structs    | All handlers, all tests
2    | Define ports + domain service | Add interfaces and business-logic service         | Handlers still call DB directly
3    | Wrap SQL/HTTP as adapters     | SQL moves into adapter structs, handlers thin out | External API contract
4    | Wire in main()                | main.go becomes composition root                  | Everything else (already migrated)

Four PRs. No feature freeze. Each one ships independently. The service is hexagonal when you're done, and nobody outside the team noticed you changed anything.


If this was useful

The four-step migration above is based on the incremental migration chapter in Hexagonal Architecture in Go. The book covers the full arc — from spaghetti to ports and adapters — with tested code at every step. It includes the parts this post had to skip: transactions across adapters, error translation at boundaries, the decorator pattern for observability, and the cases where hexagonal is overkill.

The Complete Guide to Go Programming is the companion. It covers the language itself — types, concurrency, testing, modules — so that the architecture book can focus on architecture.

Thinking in Go — the 2-book series on Go programming and hexagonal architecture
