Gabriel Anhaia

Your GORM Model Is Not Your Domain (And It's Eating Your Business Logic)


Open the domain package of any mid-sized Go service that started with GORM. Look at the first struct.

package domain

import "gorm.io/gorm"

type Order struct {
    gorm.Model
    CustomerID string  `gorm:"index;not null"`
    Total      float64 `gorm:"type:decimal(10,2)"`
    Status     string  `gorm:"default:'pending'"`
}

Four lines in and the domain has already lost. Your Order is no longer a thing your business understands. It is a database row with a DeletedAt field bolted on by an ORM maintainer in a v2 migration. The status string is whatever the column accepts. The total is whatever fits in decimal(10,2). Your business rules live inside struct tags now.

This is the leak. I read it as a category error rather than a bug or a missing abstraction: once it is in, every service, handler, and test has to negotiate with an ORM that has no concept of your domain.

The Leak

gorm.Model is an embedded struct GORM ships to give you ID, CreatedAt, UpdatedAt, and DeletedAt for soft-deletes, and the convenience is the whole pitch.

The cost is hidden in two places. First, your "domain" type now imports gorm.io/gorm. Anywhere you pass an Order, you pass GORM. Anywhere you mock the repository, the test fixture has to know what a gorm.DeletedAt is. Anywhere you serialise an Order to JSON, you ship ID, CreatedAt, UpdatedAt, and DeletedAt to the client, because they are exported fields on the struct.
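The JSON leak needs nothing but encoding/json to demonstrate. A minimal sketch — the Model here is a stand-in mirroring gorm.Model's exported fields so the example runs without the dependency (the real v2 DeletedAt is a gorm.DeletedAt, not a *time.Time):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Model is a stand-in for gorm.Model: same exported fields, no ORM import.
type Model struct {
	ID        uint
	CreatedAt time.Time
	UpdatedAt time.Time
	DeletedAt *time.Time
}

type Order struct {
	Model
	CustomerID string
	Total      float64
	Status     string
}

// LeakedKeys marshals an Order and reports which field names reach the wire.
func LeakedKeys(o Order) ([]string, error) {
	raw, err := json.Marshal(o)
	if err != nil {
		return nil, err
	}
	var m map[string]json.RawMessage
	if err := json.Unmarshal(raw, &m); err != nil {
		return nil, err
	}
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	return keys, nil
}

func main() {
	keys, _ := LeakedKeys(Order{CustomerID: "c-1", Status: "pending"})
	fmt.Println(keys) // includes ID, CreatedAt, UpdatedAt, DeletedAt
}
```

Embedded struct fields are promoted by encoding/json, so every timestamp and the soft-delete marker ship to the client unless you add tags or a separate response type.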

Second, the struct tags are doing real work. gorm:"index;not null", gorm:"type:decimal(10,2)", gorm:"default:'pending'". Those are schema decisions. They live on the same struct that your PlaceOrder service operates on. The Three Dots Labs team has discussed the same coupling at length in their Common Anti-Patterns in Go Web Applications writeup; the practical effect, in my experience, is that swapping the persistence library out becomes a full rewrite of every struct it touched.

The same struct ends up doing four jobs:

  1. The database row (column types, indexes, defaults)
  2. The HTTP request body (JSON tags, validation rules)
  3. The HTTP response body (the same JSON tags, leaking internal columns)
  4. The thing your business logic operates on

When all four jobs share one type, every change to one breaks the other three.

What Dies When GORM Changes

The "I will switch later" plan does not survive a major version bump. GORM's v2 release notes alone cover:

  • RecordNotFound removed in favour of checking ErrRecordNotFound with errors.Is
  • DeletedAt made an explicit gorm.DeletedAt field type instead of any field with that name
  • TableName results cached, so dynamic table names now have to go through Scopes
  • struct tag names switched from snake_case to camelCase

Each of those is a one-line code change in isolation. Multiplied across every domain struct, every test fixture, and every handler that touches one, it is a sprint.

AutoMigrate in v2 also alters column types when size, precision, or nullability changes. That is fine in a greenfield project. It is a Friday-evening incident in a service whose domain types and table schemas are the same struct, because a refactor of the domain field becomes an implicit ALTER TABLE on production.

This is not a "GORM is bad" argument. The maintainers ship the changes their library needs. The argument is that your business does not get a vote on those changes. If a calculation about late-fee eligibility lives on a struct that GORM owns, then GORM's release calendar is your release calendar.

The Persistence Row vs The Domain Aggregate

The fix is older than Go: keep two types.

One type belongs to the domain. It expresses what an order is in your business: a customer, a list of items, a status that can be pending, paid, shipped, or cancelled, a total that is the sum of the items, rules about which transitions are legal.

package order

import (
    "errors"
    "time"
)

type Status string

const (
    StatusPending   Status = "pending"
    StatusPaid      Status = "paid"
    StatusShipped   Status = "shipped"
    StatusCancelled Status = "cancelled"
)

type Item struct {
    ProductID string
    Quantity  int
    UnitPrice Money
}

Status is a string type with a closed set of constants, so a mistyped constant is a compile error at the call site; only an explicit conversion from a raw string can smuggle in an unknown value. The aggregate itself stays small and keeps every field private.

type Order struct {
    id         ID
    customerID CustomerID
    items      []Item
    status     Status
    placedAt   time.Time
}

func (o *Order) Total() Money {
    var sum Money
    for _, it := range o.items {
        sum = sum.Add(it.UnitPrice.Times(it.Quantity))
    }
    return sum
}

func (o *Order) MarkPaid() error {
    if o.status != StatusPending {
        return errors.New("only pending orders can be paid")
    }
    o.status = StatusPaid
    return nil
}

func Rehydrate(in RehydrateInput) (Order, error) {
    if in.ID == "" || in.CustomerID == "" {
        return Order{}, errors.New("rehydrate: missing identity")
    }
    return Order{
        id:         in.ID,
        customerID: in.CustomerID,
        items:      in.Items,
        status:     in.Status,
        placedAt:   in.PlacedAt,
    }, nil
}

type RehydrateInput struct {
    ID         ID
    CustomerID CustomerID
    Items      []Item
    Status     Status
    PlacedAt   time.Time
}

No imports from gorm.io. No struct tags. No public fields you do not want mutated from outside. The status transition lives on the type that owns the status, Total is a method rather than a column, and Rehydrate is the only constructor the domain exposes to adapters that already have an ID and timestamp from storage.
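Money is doing quiet work throughout the aggregate: it is a domain value type, not a column. One plausible shape for it — an integer amount of cents, so arithmetic never touches float64 — is sketched below, assuming amounts round-trip as the "DD.CC" strings a decimal(10,2) column stores, and ignoring negative values and currency codes for brevity:

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
	"strings"
)

// Money holds an amount in cents so addition and multiplication are exact.
type Money struct {
	cents int64
}

func (m Money) Add(other Money) Money { return Money{m.cents + other.cents} }
func (m Money) Times(qty int) Money   { return Money{m.cents * int64(qty)} }

func (m Money) String() string {
	return fmt.Sprintf("%d.%02d", m.cents/100, m.cents%100)
}

// ParseMoney reads the "DD.CC" form the decimal column stores.
func ParseMoney(s string) (Money, error) {
	whole, frac, ok := strings.Cut(s, ".")
	if !ok || len(frac) != 2 {
		return Money{}, errors.New("money: want DD.CC form")
	}
	units, err := strconv.ParseInt(whole, 10, 64)
	if err != nil {
		return Money{}, err
	}
	cents, err := strconv.ParseInt(frac, 10, 64)
	if err != nil {
		return Money{}, err
	}
	return Money{units*100 + cents}, nil
}

func main() {
	price, _ := ParseMoney("19.99")
	fmt.Println(price.Times(3).Add(price).String()) // "79.96"
}
```

A production version would handle negative amounts and carry a currency code, but the point stands either way: the representation is a domain decision, and the decimal string is just its storage form.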

The other type belongs to the adapter, and it represents the row in storage rather than the order in your business.

package postgresadapter

import (
    "time"

    "gorm.io/gorm"
)

type orderRow struct {
    gorm.Model
    ID         string    `gorm:"primaryKey;type:uuid"`
    CustomerID string    `gorm:"index;not null"`
    Status     string    `gorm:"default:'pending'"`
    Total      string    `gorm:"type:decimal(10,2)"`
    PlacedAt   time.Time `gorm:"not null"`
}

type orderItemRow struct {
    ID        uint   `gorm:"primaryKey"`
    OrderID   string `gorm:"index;not null"`
    ProductID string
    Quantity  int
    UnitPrice string `gorm:"type:decimal(10,2)"`
}

orderRow is unexported. It lives next to the adapter that uses it. Nothing outside this package imports it. When GORM v3 renames gorm.Model or removes soft-deletes, the change is local. When the schema needs an index, the index is added next to the column it indexes, not next to the business logic that should not care about indexes.

You end up with two types and one small translation step between them, and that translation step is where the entire payoff of the split lives.

Mapping in the Adapter

The mapping is not glamorous. It is a side-effect-free function that you write once and cover with tests.

func toDomain(row orderRow, items []orderItemRow) (order.Order, error) {
    domainItems := make([]order.Item, 0, len(items))
    for _, it := range items {
        price, err := order.ParseMoney(it.UnitPrice)
        if err != nil {
            return order.Order{}, fmt.Errorf("price: %w", err)
        }
        domainItems = append(domainItems, order.Item{
            ProductID: it.ProductID,
            Quantity:  it.Quantity,
            UnitPrice: price,
        })
    }

    return order.Rehydrate(order.RehydrateInput{
        ID:         order.ID(row.ID),
        CustomerID: order.CustomerID(row.CustomerID),
        Items:      domainItems,
        Status:     order.Status(row.Status),
        PlacedAt:   row.PlacedAt,
    })
}

The inverse direction is the same shape with the arrows reversed: walk the aggregate, build the rows, hand them back to the repository.

func fromDomain(o order.Order) (orderRow, []orderItemRow) {
    items := make([]orderItemRow, 0, len(o.Items()))
    for _, it := range o.Items() {
        items = append(items, orderItemRow{
            OrderID:   string(o.ID()),
            ProductID: it.ProductID,
            Quantity:  it.Quantity,
            UnitPrice: it.UnitPrice.String(),
        })
    }
    return orderRow{
        ID:         string(o.ID()),
        CustomerID: string(o.CustomerID()),
        Status:     string(o.Status()),
        Total:      o.Total().String(),
        PlacedAt:   o.PlacedAt(),
    }, items
}

Rehydrate is the only concession the domain makes to persistence: it runs the same invariants as any other constructor, but admits that objects loaded from storage already have an identity. With the mapping in place, the repository becomes a thin wrapper around a gorm.DB and the two helpers above.

type OrderRepository struct {
    db *gorm.DB
}

func (r *OrderRepository) Save(ctx context.Context, o order.Order) error {
    row, items := fromDomain(o)
    return r.db.WithContext(ctx).Transaction(func(tx *gorm.DB) error {
        // gorm's Save only updates when the primary key is already set, so a
        // new order with a pre-assigned UUID would update zero rows. An
        // explicit upsert (clause is gorm.io/gorm/clause) covers both cases.
        if err := tx.Clauses(clause.OnConflict{UpdateAll: true}).
            Create(&row).Error; err != nil {
            return err
        }
        if err := tx.Where("order_id = ?", row.ID).
            Delete(&orderItemRow{}).Error; err != nil {
            return err
        }
        if len(items) == 0 {
            return nil // Create returns an error for an empty slice
        }
        return tx.Create(&items).Error
    })
}

Reading is the same translation in reverse, with one extra job: catching the GORM-specific not-found error at the boundary so it never escapes the adapter.

func (r *OrderRepository) ByID(ctx context.Context, id order.ID) (order.Order, error) {
    var row orderRow
    err := r.db.WithContext(ctx).First(&row, "id = ?", string(id)).Error
    if errors.Is(err, gorm.ErrRecordNotFound) {
        return order.Order{}, order.ErrNotFound
    }
    if err != nil {
        return order.Order{}, err
    }
    var items []orderItemRow
    if err := r.db.WithContext(ctx).
        Where("order_id = ?", row.ID).
        Find(&items).Error; err != nil {
        return order.Order{}, err
    }
    return toDomain(row, items)
}

gorm.ErrRecordNotFound is caught at the boundary and translated to order.ErrNotFound. The service layer never sees a GORM error type.

The sqlc Variant

If the boilerplate above is the problem you are trying to solve, swap GORM for sqlc. The shape of the adapter does not change. The row type is generated for you from the schema; the mapping function still lives in the adapter package; the domain still has zero database imports.

-- query.sql
-- name: GetOrder :one
SELECT id, customer_id, status, total, placed_at
FROM orders
WHERE id = $1;

-- name: SaveOrder :exec
INSERT INTO orders (id, customer_id, status, total, placed_at)
VALUES ($1, $2, $3, $4, $5)
ON CONFLICT (id) DO UPDATE
SET customer_id = EXCLUDED.customer_id,
    status      = EXCLUDED.status,
    total       = EXCLUDED.total;

sqlc generates a GetOrderRow struct and a SaveOrderParams struct from those queries at build time. Your adapter:

func (r *OrderRepository) ByID(ctx context.Context, id order.ID) (order.Order, error) {
    row, err := r.q.GetOrder(ctx, string(id))
    if errors.Is(err, sql.ErrNoRows) {
        return order.Order{}, order.ErrNotFound
    }
    if err != nil {
        return order.Order{}, err
    }
    items, err := r.q.GetOrderItems(ctx, row.ID)
    if err != nil {
        return order.Order{}, err
    }
    return toDomain(row, items)
}

sqlc generates Go from your SQL at build time, which means there is no runtime reflection over your queries, type errors surface at go build rather than at request time, and you write native SQL when you reach for a window function or a CTE. The JetBrains team published a useful tour of the trade-offs across the main Go database libraries in Comparing database/sql, GORM, sqlx, and sqlc if you want a wider survey. Whichever tool you pick, the rule is the same: the row type belongs to the adapter, never to the domain.

How This Changes Testing

Domain tests stop touching the database.

func TestOrder_MarkPaid_OnlyFromPending(t *testing.T) {
    o := orderInState(t, order.StatusPending)

    if err := o.MarkPaid(); err != nil {
        t.Fatalf("expected nil, got %v", err)
    }
    if o.Status() != order.StatusPaid {
        t.Errorf("status: want paid, got %s", o.Status())
    }
}

func TestOrder_MarkPaid_RejectsCancelled(t *testing.T) {
    o := orderInState(t, order.StatusCancelled)

    if err := o.MarkPaid(); err == nil {
        t.Fatal("expected error, got nil")
    }
}

No testcontainers. No sqlmock. No "spin up Postgres for the unit suite." The test runs in microseconds because the only thing it tests is the rule.

The repository gets its own integration test against a real database, because that is where the integration actually happens:

func TestOrderRepository_RoundTrip(t *testing.T) {
    db := newTestDB(t) // testcontainers, real Postgres
    repo := NewOrderRepository(db)

    ctx := context.Background()
    placed := orderFixture(t)
    if err := repo.Save(ctx, placed); err != nil {
        t.Fatal(err)
    }

    loaded, err := repo.ByID(ctx, placed.ID())
    if err != nil {
        t.Fatal(err)
    }
    if !loaded.Equal(placed) {
        t.Errorf("round trip lost data")
    }
}

Two suites. Different lifecycles. The fast one runs on every save; the slow one runs in CI. You stop paying for a database when you are testing a state machine, and you stop pretending your unit tests cover persistence when they do not.

This split also kills the "ORM mock" anti-pattern. You do not mock GORM. You do not stub *gorm.DB. The domain talks to an OrderRepository interface it owns, and the in-memory implementation in the test package fits in about twenty lines.

type InMemoryOrderRepo struct {
    mu     sync.Mutex
    orders map[order.ID]order.Order
}

func (r *InMemoryOrderRepo) Save(_ context.Context, o order.Order) error {
    r.mu.Lock()
    defer r.mu.Unlock()
    if r.orders == nil {
        r.orders = make(map[order.ID]order.Order) // guard the zero value
    }
    r.orders[o.ID()] = o
    return nil
}

func (r *InMemoryOrderRepo) ByID(_ context.Context, id order.ID) (order.Order, error) {
    r.mu.Lock()
    defer r.mu.Unlock()
    o, ok := r.orders[id]
    if !ok {
        return order.Order{}, order.ErrNotFound
    }
    return o, nil
}

Twenty-odd lines is the whole double, and because it satisfies the same interface the Postgres adapter satisfies, the service tests run in-process with no migration, no schema, no fixtures, and no flake.

The Pragmatic Small-App Exception

None of this is free. Two types means two definitions to keep in sync. A mapping function to write and test. An interface in the domain that a beginner reading the code has to follow through one more file before they find the SQL.

For an internal CRUD admin tool with three tables, two endpoints, and a team of one, that overhead does not pay for itself. Embed gorm.Model, ship the feature, move on. The point of architecture is to absorb change, and a tool that will be deleted in eighteen months will not see enough change for the structure to earn its keep.

The signals that you have outgrown the shortcut are concrete:

  • Two services share the same struct and disagree on what a field means.
  • A second adapter (search index, message broker, cache) needs the same data in a different shape.
  • The number of if x.Status == "pending" && ... branches scattered across handlers crosses three.
  • A column type change in the database is breaking JSON contracts on the public API.
  • You started writing tests that mock *gorm.DB.

When two of those land in the same week, the cost of the split has already been paid in incidents. Pay it deliberately instead.

The domain is the part of the codebase that does not know what storage is. The adapter is the part that does. gorm.Model belongs on the adapter side of that line. Keep it there, and your business rules stop shipping with someone else's release notes.


If this hit something you have been wrestling with in a Go service, Hexagonal Architecture in Go is the long-form version of the argument: the domain shape, the port design, the adapter mapping, the unit-of-work pattern across multiple repositories, and the chapter on when not to bother. Twenty-two chapters, every example tested, companion repo included.

Book 2 in the Thinking in Go series. Book 1 is the language and runtime fundamentals; this one is what to do with them once the service is real.

Thinking in Go — the 2-book series on Go programming and hexagonal architecture
