Gabriel Anhaia

CQRS in Go Without a Framework: Separate Read and Write Paths


You've seen the dashboard. The orders table is doing fine on writes. New rows go in at a steady rate, the constraints hold, the audit log looks honest. The query side is on fire. The "recent orders by customer" page joins five tables, the report writer hits a CTE that the planner gives up on under load, and someone added an index last week that helped one query and slowed three others.

The instinct is to add another index. The next instinct is to denormalise inside the write tables. Both make the write side worse to defend later.

CQRS is the third option. Same service, two paths through it. The write path keeps invariants. The read path answers questions fast. They look almost nothing like each other once you stop pretending they should.

The smallest CQRS layout that still earns the name in Go is one service with a write side, a read side, a projection wired through a domain event, and no framework imports.

What CQRS actually is, with the marketing removed

Command Query Responsibility Segregation, popularised by Greg Young around 2010 (his CQRS Documents PDF is the canonical write-up), is one rule:

The model that mutates state is not the same model that answers queries.

That is the whole pattern. No buses, no IMediator<TRequest, TResponse>, no event sourcing, no service mesh. Two object graphs in the same codebase. One handles commands and protects invariants. The other handles queries and shapes data for a screen.

CQRS and event sourcing get bundled together in tutorials because they pair nicely. They are different patterns. CQRS is read/write split. Event sourcing is "the log of state changes is the source of truth, and the row is a projection." You can do CQRS without event sourcing. A future post covers event sourcing on its own.

When it earns its keep, when it doesn't

Reach for CQRS when:

  • Read and write traffic differ by an order of magnitude or more, and you want to scale them separately.
  • The query shape and the storage shape are different enough that every query is a five-table join with three coalesces.
  • Several screens want the same data shaped differently, and the write model is a poor fit for any of them.
  • The write model has invariants worth defending, and the read model needs liberties (denormalised, snapshot-frozen, eventually consistent).

Skip CQRS when:

  • The service is CRUD over a flat table.
  • Every read is SELECT * FROM x WHERE id = ?.
  • The team has not yet felt the pain of a query timeout or a write contention spike.

A user-profile API does not need CQRS. An order workflow with reporting, account dashboards, and an analytics export probably does, eventually.

The layout: one service, two sides

The service has the same hex shape as before, with the read side as a peer of the write side:

orders/
├── internal/
│   ├── write/
│   │   ├── domain/    # aggregates, invariants
│   │   ├── port/      # write repository, event publisher
│   │   └── adapter/   # SQL writer, command HTTP handler
│   ├── read/
│   │   ├── model/     # DTO types shaped for screens
│   │   ├── port/      # query interfaces
│   │   └── adapter/   # SQL reader, query HTTP handler
│   └── projection/    # event handler updating the read store
└── main.go

Two domains, deliberately. The write domain has an Order aggregate that protects invariants. The read model has a flat OrderSummary with whatever shape the dashboard wants. They live next to each other in the same module, and main.go wires them.

The write side: aggregate, command handler, one transaction

The write side mutates state. Its job is to protect invariants. It does this with a small aggregate, a repository that saves it transactionally, and a command handler that orchestrates one command at a time.

package domain

import (
    "errors"
    "time"
)

type Order struct {
    id         string
    customerID string
    items      []Item
    totalCents int64
    status     string
    createdAt  time.Time
}

type Item struct {
    SKU        string
    Quantity   int
    PriceCents int64
}

var ErrEmptyOrder = errors.New("order has no items")

func PlaceOrder(
    id, customerID string,
    items []Item,
    now time.Time,
) (*Order, OrderPlaced, error) {
    if len(items) == 0 {
        return nil, OrderPlaced{}, ErrEmptyOrder
    }
    var total int64
    for _, it := range items {
        total += it.PriceCents * int64(it.Quantity)
    }
    o := &Order{
        id:         id,
        customerID: customerID,
        items:      items,
        totalCents: total,
        status:     "placed",
        createdAt:  now,
    }
    return o, OrderPlaced{
        OrderID:     id,
        CustomerID:  customerID,
        TotalCents:  total,
        ItemCount:   len(items),
        LastItemSKU: items[len(items)-1].SKU,
        PlacedAt:    now,
    }, nil
}

type OrderPlaced struct {
    OrderID     string
    CustomerID  string
    TotalCents  int64
    ItemCount   int
    LastItemSKU string
    PlacedAt    time.Time
}

The aggregate has unexported fields. The constructor returns the aggregate and a domain event in one go. There is no setter that can break the total invariant.

The port and the command handler:

package port

import (
    "context"
    "example.com/orders/internal/write/domain"
)

type OrderWriter interface {
    Save(
        ctx context.Context,
        o *domain.Order,
        evt domain.OrderPlaced,
    ) error
}
package app

import (
    "context"
    "time"

    "example.com/orders/internal/write/domain"
    "example.com/orders/internal/write/port"
)

type PlaceOrderCommand struct {
    OrderID    string
    CustomerID string
    Items      []domain.Item
}

type PlaceOrderHandler struct {
    writer port.OrderWriter
    now    func() time.Time
}

func (h *PlaceOrderHandler) Handle(
    ctx context.Context, cmd PlaceOrderCommand,
) error {
    o, evt, err := domain.PlaceOrder(
        cmd.OrderID,
        cmd.CustomerID,
        cmd.Items,
        h.now(),
    )
    if err != nil {
        return err
    }
    return h.writer.Save(ctx, o, evt)
}

The handler does three things: build the aggregate, get the event back, save both atomically. Saving the event is the seam where the read side gets fed.

The SQL adapter writes the order row and the event row in one transaction. This is the transactional outbox pattern in its smallest form:

package adapter

import (
    "context"
    "database/sql"
    "encoding/json"

    "example.com/orders/internal/write/domain"
)

type PostgresWriter struct{ db *sql.DB }

func (w *PostgresWriter) Save(
    ctx context.Context,
    o *domain.Order,
    evt domain.OrderPlaced,
) error {
    tx, err := w.db.BeginTx(ctx, nil)
    if err != nil {
        return err
    }
    defer tx.Rollback()

    if err := upsertOrder(ctx, tx, o); err != nil {
        return err
    }
    payload, err := json.Marshal(evt)
    if err != nil {
        return err
    }
    _, err = tx.ExecContext(ctx,
        `INSERT INTO outbox (kind, payload)
         VALUES ($1, $2)`,
        "order_placed", payload,
    )
    if err != nil {
        return err
    }
    return tx.Commit()
}

One transaction, one aggregate, one event. The order row and the event row commit together or not at all. A separate process drains the outbox and dispatches events to whoever is listening. The projection is one of those listeners.

The read side: denormalised store, query handlers, DTOs

The read side never touches the write tables. It has its own table, shaped for the screens it serves.

CREATE TABLE order_summary (
    order_id     text PRIMARY KEY,
    customer_id  text NOT NULL,
    total_cents  bigint NOT NULL,
    status       text NOT NULL,
    placed_at    timestamptz NOT NULL,
    item_count   int NOT NULL,
    last_item_sku text
);

CREATE INDEX ON order_summary (customer_id, placed_at DESC);

That table is a denormalised projection. It has the customer ID inline, the count of items already computed, and the most recent SKU baked in for a "what did they buy" preview. None of that is in the write schema. The read side carries it because its only job is to answer one set of queries fast.

The DTO and the query port:

package model

import "time"

type OrderSummary struct {
    OrderID     string
    CustomerID  string
    TotalCents  int64
    Status      string
    PlacedAt    time.Time
    ItemCount   int
    LastItemSKU string
}
package port

import (
    "context"
    "example.com/orders/internal/read/model"
)

type OrderQueries interface {
    RecentByCustomer(
        ctx context.Context,
        customerID string,
        limit int,
    ) ([]model.OrderSummary, error)
}

The query adapter reads from order_summary directly. No joins, no aggregations at query time, no LEFT JOIN line_items GROUP BY ...:

package adapter

import (
    "context"
    "database/sql"

    "example.com/orders/internal/read/model"
)

type PostgresQueries struct{ db *sql.DB }

func (q *PostgresQueries) RecentByCustomer(
    ctx context.Context,
    customerID string,
    limit int,
) ([]model.OrderSummary, error) {
    rows, err := q.db.QueryContext(ctx,
        `SELECT order_id, customer_id, total_cents,
                status, placed_at, item_count,
                COALESCE(last_item_sku, '')
         FROM order_summary
         WHERE customer_id = $1
         ORDER BY placed_at DESC
         LIMIT $2`,
        customerID, limit,
    )
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var out []model.OrderSummary
    for rows.Next() {
        var s model.OrderSummary
        if err := rows.Scan(
            &s.OrderID, &s.CustomerID,
            &s.TotalCents, &s.Status,
            &s.PlacedAt, &s.ItemCount,
            &s.LastItemSKU,
        ); err != nil {
            return nil, err
        }
        out = append(out, s)
    }
    return out, rows.Err()
}

The query is a single ordered index scan over one table, no joins at query time. No write-side code is on the hot path.

The projection: one event handler, two writes apart

The projection is what makes the read side stay in sync. It listens for OrderPlaced events from the outbox and writes a row into order_summary. It has no opinion on aggregates or invariants. It just translates an event into a denormalised row.

package projection

import (
    "context"
    "database/sql"

    "example.com/orders/internal/write/domain"
)

type OrderSummaryProjector struct{ db *sql.DB }

func (p *OrderSummaryProjector) On(
    ctx context.Context, evt domain.OrderPlaced,
) error {
    _, err := p.db.ExecContext(ctx,
        `INSERT INTO order_summary (
             order_id, customer_id, total_cents,
             status, placed_at, item_count,
             last_item_sku
         ) VALUES ($1, $2, $3, $4, $5, $6, $7)
         ON CONFLICT (order_id) DO UPDATE SET
             status      = EXCLUDED.status,
             total_cents = EXCLUDED.total_cents`,
        evt.OrderID, evt.CustomerID, evt.TotalCents,
        "placed", evt.PlacedAt, evt.ItemCount,
        evt.LastItemSKU,
    )
    return err
}

The ON CONFLICT clause makes the handler idempotent. If the outbox dispatcher delivers the same event twice (it will, eventually), the row converges instead of duplicating. Idempotent projections are the difference between a sane read side and a 3am page.

There is a small delay between the order committing and the projection running. The order is in orders immediately. It arrives in order_summary a moment later. That gap is the price of the pattern, and the dashboard catches up in milliseconds under normal load.

The two sides do not share a model

The point worth holding on to: write/domain.Order and read/model.OrderSummary are different types on purpose. They share a key (the order ID) and nothing else. The write side has unexported fields, invariants, a constructor that can fail. The read side has a flat struct with public fields, no behaviour, designed to be JSON-encoded and rendered.

If the next dashboard needs orders grouped by month, you do not change the write aggregate. You add a new projection table and a new query handler. The write side stays still. The read side grows wider.

When CQRS doesn't earn its keep

Two failure modes show up when teams adopt CQRS too early.

The first: a "read side" that is just SELECT * FROM the_write_table. No projection, no separate store, no shape change. That is not CQRS. That is two query interfaces over the same model, with overhead. Drop it.

The second: a CQRS layout on a service that does CRUD. The UserProfile service has one query (by ID) and one command (update). Splitting it into a write aggregate, an event, a projection, and a read DTO triples the code without buying anything. The contention does not exist. The query is already fast. The dashboard is one row.

CQRS is not a default. It is a response to traffic shape, query shape, and invariant shape. When those three line up to make a single model painful, the split pays for itself within weeks. Otherwise it is overhead with a fancy name.

The rule: keep the write side simple until a query forces you off it. Once that happens, add the read side and a projection, and the write side stays still while the read side grows wider over time.


If this was useful

The longer version of this argument covers outbox dispatchers in production, projection rebuild strategies, where the read side lives when it grows beyond one table, and how CQRS sits inside a hex layout without inventing a framework. That material is in Hexagonal Architecture in Go. The Complete Guide to Go Programming is the companion when the language fundamentals are still settling in.

Thinking in Go — the 2-book series on Go programming and hexagonal architecture
