Domain Model vs Persistence Model: The Mapper Layer in Go

Gabriel Anhaia


You open a Go service that has been running for two years and look for the Order struct. You find one. It has GORM tags, JSON tags, a validate: tag the API layer uses, and a method called MarkPaid. The same struct is the row, the request body, the response body, and the aggregate. Every change to one role drags the other three along. A column rename ships as a public API change. A JSON-tag fix breaks a Postgres index annotation that nobody noticed.

A GORM-shaped struct sitting in your domain package is a row in a costume wearing a domain-model name tag. This post is the pattern that fixes it. Two types, one mapper, one seam you can defend.

The Two Types Have Different Jobs

A domain Order answers business questions. Can this transition to paid? What is the running total? Is the customer allowed to add a discount line? It owns the rules. It hides its fields behind methods because the rules need to run on every mutation, not just the ones the API happens to expose.

A persistence OrderRow answers a different question: what does a single row in the orders table look like to the driver that reads it? It is flat. It is exported because database/sql and sqlc need exported fields to scan into. It carries database concerns: nullable columns, decimal precision, the created_at the trigger writes. It has no methods worth talking about.

These two types have nothing in common except a name and the fact that one is loaded from the other. Forcing them to share a struct is what produced the mess.

A Domain Aggregate

Here is what the domain side looks like for Order. Private fields, behaviour on the type, no imports outside the standard library.

package order

import (
    "errors"
    "time"
)

type ID string

type CustomerID string

type Status string

const (
    StatusPending   Status = "pending"
    StatusPaid      Status = "paid"
    StatusShipped   Status = "shipped"
    StatusCancelled Status = "cancelled"
)

type Item struct {
    ProductID string
    Quantity  int
    UnitPrice Money
}

The aggregate keeps every field private and exposes the rules instead.

type Order struct {
    id         ID
    customerID CustomerID
    items      []Item
    status     Status
    placedAt   time.Time
}

func (o *Order) Total() Money {
    var sum Money
    for _, it := range o.items {
        sum = sum.Add(it.UnitPrice.Times(it.Quantity))
    }
    return sum
}

func (o *Order) MarkPaid() error {
    if o.status != StatusPending {
        return errors.New(
            "order: only pending orders can be paid",
        )
    }
    o.status = StatusPaid
    return nil
}

The constructor for fresh orders runs every invariant. A second constructor — Rehydrate — is the one the adapter calls when it has already pulled a row out of storage.

type RehydrateInput struct {
    ID         ID
    CustomerID CustomerID
    Items      []Item
    Status     Status
    PlacedAt   time.Time
}

func Rehydrate(in RehydrateInput) (Order, error) {
    if in.ID == "" || in.CustomerID == "" {
        return Order{}, errors.New(
            "order: missing identity",
        )
    }
    return Order{
        id:         in.ID,
        customerID: in.CustomerID,
        items:      in.Items,
        status:     in.Status,
        placedAt:   in.PlacedAt,
    }, nil
}

Notice what is not on this type: no struct tags, no gorm.Model, no json: annotations, no db: annotations. The domain package compiles without database/sql in its dependency graph. That is the property the mapper protects.

A Persistence Row

The adapter package has its own Order shape. Call it OrderRow — that is what it is.

package postgresadapter

import (
    "database/sql"
    "time"
)

type OrderRow struct {
    ID         string
    CustomerID string
    Status     string
    Total      string
    PlacedAt   time.Time
    CreatedAt  time.Time
    UpdatedAt  sql.NullTime
}

type OrderItemRow struct {
    OrderID   string
    ProductID string
    Quantity  int32
    UnitPrice string
}

Fields are exported because the SQL driver needs them. Money is a string at this layer because Postgres numeric(10,2) round-trips cleanly through a string and never silently loses precision the way float64 does. UpdatedAt is sql.NullTime because the column is nullable until the first update.

If you are using sqlc, the row type is generated for you from the schema and a query file. The shape is the same.

-- query.sql
-- name: GetOrder :one
SELECT id, customer_id, status, total, placed_at,
       created_at, updated_at
FROM orders
WHERE id = $1;

-- name: GetOrderItems :many
SELECT order_id, product_id, quantity, unit_price
FROM order_items
WHERE order_id = $1;

sqlc emits a GetOrderRow and a GetOrderItemsRow. They sit next to the adapter, not next to the domain. The point is the same either way: this is a flat data carrier, not an entity.

The Mapper Layer

Between those two types lives a small, side-effect-free function that nobody outside the adapter package can see. It is the only place that knows both shapes.

package postgresadapter

import (
    "fmt"

    "myapp/order"
)

func toDomain(
    row OrderRow,
    items []OrderItemRow,
) (order.Order, error) {
    domainItems := make([]order.Item, 0, len(items))
    for _, it := range items {
        price, err := order.ParseMoney(it.UnitPrice)
        if err != nil {
            return order.Order{}, fmt.Errorf(
                "row %s: price: %w", row.ID, err,
            )
        }
        domainItems = append(domainItems, order.Item{
            ProductID: it.ProductID,
            Quantity:  int(it.Quantity),
            UnitPrice: price,
        })
    }

    return order.Rehydrate(order.RehydrateInput{
        ID:         order.ID(row.ID),
        CustomerID: order.CustomerID(row.CustomerID),
        Items:      domainItems,
        Status:     order.Status(row.Status),
        PlacedAt:   row.PlacedAt,
    })
}

The reverse direction has the same shape with the arrows flipped. The aggregate exposes read-only accessors (o.ID(), o.Items(), o.Status(), o.Total(), o.PlacedAt()) for the mapper to read; they were omitted from the aggregate block above for brevity.

func fromDomain(
    o order.Order,
) (OrderRow, []OrderItemRow) {
    items := make([]OrderItemRow, 0, len(o.Items()))
    for _, it := range o.Items() {
        items = append(items, OrderItemRow{
            OrderID:   string(o.ID()),
            ProductID: it.ProductID,
            Quantity:  int32(it.Quantity),
            UnitPrice: it.UnitPrice.String(),
        })
    }
    return OrderRow{
        ID:         string(o.ID()),
        CustomerID: string(o.CustomerID()),
        Status:     string(o.Status()),
        Total:      o.Total().String(),
        PlacedAt:   o.PlacedAt(),
    }, items
}

The repository becomes a thin wrapper around the database handle and these two functions.

func (r *Repository) ByID(
    ctx context.Context, id order.ID,
) (order.Order, error) {
    row, err := r.q.GetOrder(ctx, string(id))
    if errors.Is(err, sql.ErrNoRows) {
        return order.Order{}, order.ErrNotFound
    }
    if err != nil {
        return order.Order{}, err
    }
    items, err := r.q.GetOrderItems(ctx, row.ID)
    if err != nil {
        return order.Order{}, err
    }
    return toDomain(row, items)
}

sql.ErrNoRows gets caught at the boundary and translated to order.ErrNotFound. The service layer never sees a driver error type.

What This Buys You

The mapper layer pays for itself across four predictable places.

No behaviour leak. MarkPaid runs on the domain type. The persistence row has no methods, so it cannot accidentally encode a business rule as a BeforeUpdate hook. The rules are where you go to read them.

No struct-tag soup. The domain has zero tags. The persistence side has the tags it actually needs and no others. JSON tags live on a third type — the API DTO — for the same reason.

Easy testing. The domain test loads zero database packages. The mapper test loads zero database packages either; it is two pure functions that round-trip a fixture. The repository integration test runs against a real Postgres in testcontainers, exercises the mapper plus the SQL, and is the only place where all three pieces meet.

Schema changes stay local. Add a cancelled_reason column for analytics. Touch the row struct, the SQL, and the mapper. The domain has no opinion until somebody decides cancellation reasons are a business rule, at which point the change is deliberate and shows up as a method.

The Cost: Mapper Drift

The pattern has one real failure mode. You add a field to the domain, you add the column to the database, and you forget the mapper. It compiles. The unit tests pass. Production silently drops the field on every read and write.

Three things keep this honest, and you need at least two of them.

The first is a round-trip test that lives in the adapter package and does exactly this:

func TestMapper_RoundTrip(t *testing.T) {
    original := orderFixture(t)

    row, items := fromDomain(original)
    rebuilt, err := toDomain(row, items)
    if err != nil {
        t.Fatal(err)
    }
    if !rebuilt.Equal(original) {
        t.Fatalf(
            "round trip lost data: want %v got %v",
            original, rebuilt,
        )
    }
}

That test fails the moment you add a field on one side and not the other, provided Equal actually compares every field on the aggregate. Make Equal a method on the domain type, write it once, and never let it short-circuit.

The second is a code review checklist that says "domain field added → mapper updated → schema migration written." Three boxes. Same PR. If your team uses CODEOWNERS, point the persistence package at someone who reads the mapper as part of every diff that touches the domain.

The third option, and the one worth reaching for in larger codebases, is a generated mapper. A tool like goverter derives the mapping from two struct shapes via codegen; in this design that means mapping between OrderRow and RehydrateInput, since a generator can only see exported fields. A forgotten field stops being a silent bug and starts being a compile error the moment the shapes diverge. Generated code is harder to step through, and value-object conversions still need hand-written hooks, but for ten-field aggregates with mostly primitive columns the generator pays for itself the first time a schema migration lands.

In practice, the round-trip test is the cheapest of the three and catches most drift. Start there.

When the Two Models Should Be the Same

Not every type in your codebase needs this split. A read-only projection — the rows feeding a "recent invoices" page, the search-index document, the analytics export — has no behaviour, so the row shape and the read model are the same thing. Build a flat struct, scan into it, ship it. No domain type involved.

The split applies to aggregates: types whose fields are protected by rules. Order, Subscription, Cart, Reservation. If a field of the type can only change in particular ways, you want the rules in one place and the row in another, with a mapper between them.

The pattern is older than Go and older than DDD. Martin Fowler catalogued it as the Data Mapper pattern in Patterns of Enterprise Application Architecture (2002). The Go-specific part is small. Implicit interfaces make the seam free, and codegen tools like goverter make the mapping enforceable once the shape stabilises. The hard part is the discipline of not letting a gorm:"index" tag creep onto your aggregate the next time the schedule gets tight.

Next time the schema migration lands at 4 PM on a Friday, the mapper is the only file that has to change. Your business rules never find out.


If this hit something

Hexagonal Architecture in Go is the long-form version of the seam: the port shape, the adapter mapping, value-object conversion, the unit-of-work pattern across multiple repositories, and the chapter on when the split is overhead rather than insurance. Twenty-two chapters, every example tested, companion repo included.

Book 2 in the Thinking in Go series. Book 1 is the language and runtime fundamentals; this one is what to do with them once the service is real.

Thinking in Go — the 2-book series on Go programming and hexagonal architecture
