
Gabriel Anhaia

A Repository Layer in Go Without an ORM: 4 Patterns That Survive in Production


A pattern I have seen more than once: a team had a users
table with a handful of columns and a wide spread of Go files
that knew about every one of them. They had picked GORM on day
one because the README example fit on a laptop screen. By the
time the table grew an archived_at column, a migration that
should have taken an hour took the better part of a sprint.
Half the call sites used Find, the other half First, and a few
did Raw with hand-written SQL because the team found GORM
could not express the query they needed. The "repository" was
a folder of leaky abstractions over a thin wrapper.

They stripped the ORM out. The codebase got smaller and the
SQL got readable. What replaced GORM was not nothing. It was a deliberate choice between four
patterns that all answer the same question: where does the
SQL live and who is allowed to look at it?

This post walks the four patterns, the failure mode each one
fixes, and a decision matrix at the end so you can pick
without re-litigating it next quarter.

The contract every pattern shares

Before the patterns, the type. Every example below stores and
fetches the same User.

// domain/user.go
package domain

import "time"

type User struct {
    ID        string
    Email     string
    Name      string
    CreatedAt time.Time
    Archived  bool
}

That is the type the rest of the application sees. The
patterns differ in who turns a row into a User and where
that translation lives.


Pattern 1: sqlx + struct scan

The smallest move from database/sql to something ergonomic.
sqlx adds named parameters, struct scanning, and a few
helpers. It does not generate code, it does not parse your
SQL, it does not pretend to be an ORM. It is database/sql
with the tedious parts shortened.

// adapter/postgres/user_repo.go
package postgres

import (
    "context"
    "time"

    "github.com/jmoiron/sqlx"
    "yourapp/domain"
)

type UserRepo struct {
    db *sqlx.DB
}

func NewUserRepo(db *sqlx.DB) *UserRepo {
    return &UserRepo{db: db}
}

type userRow struct {
    ID        string    `db:"id"`
    Email     string    `db:"email"`
    Name      string    `db:"name"`
    CreatedAt time.Time `db:"created_at"`
    Archived  bool      `db:"archived"`
}

func (r *UserRepo) ByID(
    ctx context.Context, id string,
) (domain.User, error) {
    const q = `
      SELECT id, email, name, created_at, archived
        FROM users
       WHERE id = $1`
    var row userRow
    if err := r.db.GetContext(ctx, &row, q, id); err != nil {
        return domain.User{}, err
    }
    return domain.User(row), nil
}

The SQL sits in a const next to the function that runs it.
The struct tag does the column-to-field mapping. There is one
type for the row shape (userRow) and one for the domain
(domain.User). They look identical right now. They will
diverge the first time the database column name does not match
the Go field name.

When sqlx fits: small services, a handful of tables, queries
that are mostly CRUD. The cost is that every query is a
runtime string. A typo in created_ta ships fine; you find it
in production.

Pattern 2: sqlc — generated typed queries

sqlc flips the pattern. You write SQL files, sqlc reads them
at build time, infers types from your schema, and generates a
Go method for each query. The SQL is the source of truth and
it is checked at code-gen time.

A query file:

-- query/user.sql

-- name: GetUser :one
SELECT id, email, name, created_at, archived
  FROM users
 WHERE id = $1;

-- name: ArchiveUser :exec
UPDATE users
   SET archived = TRUE
 WHERE id = $1;

Run sqlc generate and you get a Queries struct with typed
methods:

// generated by sqlc — do not edit
type GetUserRow struct {
    ID        string
    Email     string
    Name      string
    CreatedAt time.Time
    Archived  bool
}

func (q *Queries) GetUser(
    ctx context.Context, id string,
) (GetUserRow, error) { /* ... */ }

A typo in a column name fails the build. A change in the
schema that breaks a query fails the build. The generated code
is small, readable, and it is the kind of code you would have
written by hand if you had the patience.
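Getting there takes one small config file that points sqlc at the schema and the query files. A minimal sketch; the paths and package name are assumptions for this layout:

```yaml
# sqlc.yaml — minimal v2 config sketch; paths are illustrative
version: "2"
sql:
  - engine: "postgresql"
    queries: "query/"
    schema: "schema/"
    gen:
      go:
        package: "db"
        out: "internal/db"
```

With that in place, `sqlc generate` reads `schema/` to infer column types and emits the `Queries` struct into `internal/db/`.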

When sqlc fits: real-shaped queries with joins, aggregates,
and RETURNING clauses; teams that want compile-time pressure
on the SQL; codebases big enough that "find every place that
SELECTs from the users table" is a real grep target.

The cost is the build step and a degree of rigidity. Dynamic
WHERE clauses (optional filters, list endpoints with ad hoc
conditions) are awkward in sqlc and better written by hand.
Most projects mix sqlc for the static queries with a small
database/sql escape hatch for the dynamic ones.
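That escape hatch is usually a small hand-rolled builder. A runnable sketch of the shape, assuming a hypothetical UserFilter with optional fields; only the query assembly is shown, the execution would go through database/sql as usual:

```go
package main

import (
	"fmt"
	"strings"
)

// UserFilter holds optional filters; zero values mean "don't filter".
// The type and its fields are assumptions for this example.
type UserFilter struct {
	Email           string
	IncludeArchived bool
}

// buildListQuery assembles the WHERE clause and the $n args by
// hand — the dynamic-query style sqlc is not built for.
func buildListQuery(f UserFilter) (string, []any) {
	var (
		conds []string
		args  []any
	)
	if f.Email != "" {
		args = append(args, f.Email)
		conds = append(conds, fmt.Sprintf("email = $%d", len(args)))
	}
	if !f.IncludeArchived {
		conds = append(conds, "archived = FALSE")
	}
	q := "SELECT id, email, name, created_at, archived FROM users"
	if len(conds) > 0 {
		q += " WHERE " + strings.Join(conds, " AND ")
	}
	return q, args
}

func main() {
	q, args := buildListQuery(UserFilter{Email: "ada@example.com"})
	fmt.Println(q)
	fmt.Println(len(args))
}
```

Because the values always travel through the args slice and only the clause structure is concatenated, the builder stays parameterized rather than drifting into string-interpolated SQL.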

Pattern 3: interface + multiple implementations

Both patterns above live in adapter/postgres/. Pattern 3
takes a step back and asks the application: what does the
repository look like to the code that calls it?

You define an interface in the package that consumes it.

// service/user_service.go
package service

import (
    "context"
    "yourapp/domain"
)

type UserStore interface {
    ByID(ctx context.Context, id string) (domain.User, error)
    Save(ctx context.Context, u domain.User) error
}

type UserService struct {
    store UserStore
}

func NewUserService(s UserStore) *UserService {
    return &UserService{store: s}
}

The service does not import adapter/postgres. It does not
know about sqlx, sqlc, or even that the data lives in a
database. It knows about UserStore. Wiring the production
binary is one line:

svc := service.NewUserService(postgres.NewUserRepo(db))

Tests use a different binding:

// adapter/memory/user_repo.go
package memory

import (
    "context"
    "errors"

    "yourapp/domain"
)

// ErrNotFound is the adapter's miss case; callers match on it.
var ErrNotFound = errors.New("user not found")

type UserRepo struct {
    users map[string]domain.User
}

func New() *UserRepo {
    return &UserRepo{users: map[string]domain.User{}}
}

func (r *UserRepo) ByID(
    _ context.Context, id string,
) (domain.User, error) {
    u, ok := r.users[id]
    if !ok {
        return domain.User{}, ErrNotFound
    }
    return u, nil
}

func (r *UserRepo) Save(
    _ context.Context, u domain.User,
) error {
    r.users[u.ID] = u
    return nil
}

The service test runs against the in-memory repo. The
integration test runs against the Postgres repo. The same
interface keeps both honest.

The Go-idiomatic detail: the interface goes in the consumer
package, not the provider package. service declares what it
needs. Postgres and memory each happen to satisfy it. This is
the inverse of the Java convention and it is what lets the
service compile without importing any storage code.

When pattern 3 fits: anything bigger than a single binary with
a single repo. Once you want a fast unit test that does not
spin up Postgres, you want this.

The cost is mostly discipline. The interface tends to grow
until it has thirty methods and looks suspiciously like the
Postgres adapter's public surface. Keep it focused on what one
service needs; let the next service declare its own.

Pattern 4: hexagonal — port + adapter

Pattern 4 is pattern 3 promoted to an architectural rule. The
interface is no longer a convenience inside service/; it is
a port that lives in the domain package, and adapters are
implementations that plug in from the outside.

// domain/repo.go
package domain

import "context"

type UserRepository interface {
    ByID(ctx context.Context, id string) (User, error)
    Save(ctx context.Context, u User) error
}

The adapter:

// adapter/postgres/user_repo.go
package postgres

import (
    "context"

    "github.com/jmoiron/sqlx"
    "yourapp/domain"
)

type PostgresUserRepository struct{ db *sqlx.DB }

var _ domain.UserRepository = (*PostgresUserRepository)(nil)

func (r *PostgresUserRepository) ByID(
    ctx context.Context, id string,
) (domain.User, error) { /* ... */ }

That var _ domain.UserRepository = (*PostgresUserRepository)(nil)
line is doing real work. It asserts at compile time that the
adapter satisfies the port. If you add a method to the port,
every adapter that does not implement it stops compiling. If
you change a signature, the adapter breaks before the call
site does. The compile error points at the right file.

The folder layout that falls out:

domain/        # types, ports, business rules. No imports out.
service/       # use cases. Imports domain.
adapter/postgres/   # implements domain ports. Imports domain.
adapter/http/       # HTTP handlers. Imports service.
adapter/memory/     # test double. Implements domain ports.
cmd/api/main.go     # wiring. Imports everything, picks adapters.

The dependency graph is one-way: cmd → adapter → service →
domain. The domain has no imports out of itself. Storage,
HTTP, queue clients: none of them are visible to business
rules.


When pattern 4 fits: long-lived services where the storage
choice may change, where you want a strict test pyramid, or
where the same domain is fronted by both an HTTP API and a
queue consumer and they should not duplicate logic. It is the
heaviest of the four patterns and it earns the weight by
keeping the core stable while the edges churn.

The cost is the layout discipline and the temptation to put
SQL types in the port. The port speaks domain.User. The
adapter speaks userRow and translates. If the port ever
returns a sql.NullString, the leak has already happened.

Decision matrix

| You have... | Pattern that fits | Why |
| --- | --- | --- |
| One binary, a few tables, mostly CRUD | sqlx + struct scan | Smallest moving parts. SQL stays readable. |
| Real joins, aggregates, schema changes you want compile-checked | sqlc | Build-time pressure on SQL; the typed methods are what you would write by hand. |
| Need fast unit tests without spinning up Postgres | interface + multi-impl | The in-memory repo is one struct and a map. |
| Long-lived service, multiple entry points (HTTP + queue), strict test pyramid | hex port + adapter | The domain stays stable while the edges change. |

The patterns are not mutually exclusive. The most common
production shape is pattern 4 outside, pattern 2 inside:
the hexagonal port sits in domain/, and the Postgres adapter
that implements it uses sqlc-generated queries internally. The
service tests bind the in-memory adapter; the integration
tests bind the Postgres one. SQL is checked at build time and
the storage choice is one wiring change away.

What you give up by skipping the ORM

Three things, honestly.

- Fast prototyping of CRUD endpoints: the ORM's Find/Save is
  faster to type for the first table.
- Schema migrations: you write your own, usually with
  golang-migrate or goose.
- Convenience features: soft delete, optimistic locking,
  automatic updated_at.

Each of those becomes a small piece of code you understand
instead of a flag you set.
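For the migrations item, "write your own" is less work than it sounds. A sketch of what a goose migration file looks like; the file name and the column it adds are assumptions for this example:

```sql
-- migrations/0002_add_archived.sql

-- +goose Up
ALTER TABLE users ADD COLUMN archived BOOLEAN NOT NULL DEFAULT FALSE;

-- +goose Down
ALTER TABLE users DROP COLUMN archived;
```

The Up and Down halves are plain SQL you can read in review; goose only supplies the ordering and the bookkeeping of which migrations have run.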

What you get is the SQL on the page, the type on the page, and
no surprise queries at 3 AM. For a service that lives more
than six months that trade is not close.

Closing thought

The repository layer is one of the few places in a Go service
where the architectural choice you make on day one is still
visible on day three hundred. ORMs paper it over with sugar
that stops being sweet around the second migration. The four
patterns above are all honest about what they are: a place to
put SQL, a way to check it, an interface for tests to plug
into, a port for the architecture to lean on.

Pick the lightest one your team can live with for a year. If
that turns out to be the heaviest one, the patterns compose.
The hex port can wrap sqlc which can sit alongside one sqlx
escape hatch for the dynamic query you could not predict. The
matrix is a starting point, not a contract.


If this was useful

If the repository layer is the one you are about to redo and
you want the longer treatment — ports, adapters, transaction
boundaries, where to put domain events, how to keep the test
pyramid honest — Hexagonal Architecture in Go walks the full
service end-to-end, including the SQL adapter and the
in-memory one. The companion volume Complete Guide to Go
Programming covers the language pieces that make these
patterns sit comfortably: interfaces, errors, contexts, and
the standard library you do not need to wrap.


Top comments (0)