Why Another Query Builder?
Compare these two approaches to insert a user and get back the ID:
With database/sql (PostgreSQL):
var id int64
err := db.QueryRowContext(ctx,
`INSERT INTO users (name, email, status) VALUES ($1, $2, $3) RETURNING id`,
user.Name, user.Email, user.Status,
).Scan(&id)
user.ID = id
With Relica:
err := db.Model(&user).Insert()
// user.ID is already populated
That's the difference. No manual RETURNING clause. No Scan. No placeholder numbering. The ID just appears in your struct.
And here's the thing - if you switch to MySQL tomorrow, your Relica code works unchanged. The library handles RETURNING vs LastInsertId() internally. You don't think about it.
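A minimal sketch of that portability claim - the DSN below is a placeholder and assumes the go-sql-driver/mysql driver is blank-imported; only the driver name and DSN change:
// import _ "github.com/go-sql-driver/mysql"
db, err := relica.Open("mysql", "app:secret@tcp(localhost:3306)/app")
if err != nil {
    log.Fatal(err)
}
user := User{Name: "Alice", Email: "alice@example.com", Status: 1}
err = db.Model(&user).Insert()
// user.ID is populated here too - via LastInsertId() instead of RETURNING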
The Problem I Kept Running Into
GORM gave me convenience but took away control. When a query was slow, I couldn't easily see what SQL was being generated. The magic was helpful until it wasn't.
sqlx gave me control but no convenience. I was writing the same query construction patterns in every project:
// This gets old fast
query := "SELECT * FROM users WHERE 1=1"
args := []interface{}{}
if status != nil {
query += " AND status = $" + strconv.Itoa(len(args)+1)
args = append(args, *status)
}
if role != "" {
query += " AND role = $" + strconv.Itoa(len(args)+1)
args = append(args, role)
}
// ... more conditions, more string concatenation, more bugs
What I actually wanted:
q := db.Select("*").From("users")
if status != nil {
q = q.Where("status = ?", *status)
}
if role != "" {
q = q.Where("role = ?", role)
}
q.All(&users)
Type-safe. Readable. No string concatenation. No placeholder arithmetic. The ? converts to $1, $2 for PostgreSQL automatically.
Type Safety: Why It Matters for Query Builders
Most Go database libraries fall into one of two camps:
Type-unsafe (runtime errors):
// GORM - typo in field name? Runtime error, not compile error
db.Model(&User{}).Where("stauts = ?", 1).Find(&users) // typo: "stauts"
// map[string]interface{} - wrong type? Runtime error
db.Insert("users", map[string]interface{}{
"age": "not a number", // string instead of int - discovered at runtime
})
Type-safe (compile-time errors):
// Relica with structs - compiler catches mistakes
type User struct {
ID int64 `db:"id"`
Name string `db:"name"`
Status int `db:"status"`
}
user := User{Stauts: 1} // Won't compile - field doesn't exist
db.Model(&user).Insert()
user.Status = "active" // Won't compile - wrong type
The struct approach gives you:
- IDE autocomplete - your editor knows the fields
- Compile-time validation - typos caught before runtime
- Refactoring safety - rename a field, compiler shows all usages
- Documentation - struct definition is the schema
What Relica Actually Does
Relica is a query builder, not an ORM. It doesn't manage your schema, doesn't auto-migrate, doesn't lazy load. It builds SQL queries using a fluent API and executes them.
Basic Usage
type User struct {
ID int64 `db:"id"`
Name string `db:"name"`
Email string `db:"email"`
Status int `db:"status"`
}
db, err := relica.Open("postgres", "postgres://user:pass@localhost/db")
if err != nil {
log.Fatal(err)
}
defer db.Close()
ctx := context.Background()
// SELECT with context and fluent builder
var users []User
err = db.Select("id", "name", "email").
From("users").
Where("status = ?", 1).
OrderBy("name").
Limit(10).
WithContext(ctx).
All(&users)
// INSERT from struct - type-safe, no map[string]interface{}
user := User{Name: "Alice", Email: "alice@example.com", Status: 1}
result, err := db.InsertStruct("users", &user).Execute()
// UPDATE - Model API adds WHERE by primary key automatically
user.Status = 2
err = db.Model(&user).Update() // WHERE id = ? added automatically
// DELETE
result, err = db.Delete("users").
Where("id = ?", user.ID).
Execute()
The ? placeholder converts automatically based on your database driver - $1, $2 for PostgreSQL, ? for MySQL and SQLite.
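To make "converts automatically" concrete, here is a conceptual sketch of the rewrite. This is not Relica's internal code, just the idea, and it naively ignores ? characters inside string literals:
package main

import (
    "fmt"
    "strconv"
    "strings"
)

// toPostgresPlaceholders rewrites each ? to $1, $2, ... the way the
// PostgreSQL dialect expects.
func toPostgresPlaceholders(query string) string {
    var b strings.Builder
    n := 0
    for _, r := range query {
        if r == '?' {
            n++
            b.WriteString("$" + strconv.Itoa(n))
            continue
        }
        b.WriteRune(r)
    }
    return b.String()
}

func main() {
    fmt.Println(toPostgresPlaceholders("SELECT id FROM users WHERE status = ? AND role = ?"))
    // SELECT id FROM users WHERE status = $1 AND role = $2
}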
The Model API
For common CRUD operations, the Model API provides ORM-like ergonomics:
type User struct {
ID int64 `db:"id"`
Name string `db:"name"`
Email string `db:"email"`
}
// TableName() is optional - defaults to lowercase struct name + "s"
func (User) TableName() string { return "users" }
// INSERT - ID auto-populated after insert
user := User{Name: "Alice", Email: "alice@example.com"}
err := db.Model(&user).Insert()
fmt.Println(user.ID) // 1 (auto-populated)
// UPDATE - uses primary key in WHERE automatically
user.Name = "Alice Updated"
err = db.Model(&user).Update()
// Selective fields - only update specific columns
err = db.Model(&user).Update("name") // Only updates name column
// DELETE
err = db.Model(&user).Delete()
The auto-populate ID feature handles the PostgreSQL quirk where LastInsertId() isn't supported - Relica uses RETURNING clauses internally. You don't need to think about it.
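Conceptually, the portable version of "insert and get the ID back" looks like the sketch below. This is not Relica's internal implementation - the helper name and dialect check are made up for illustration, and it assumes the User struct from above - but it shows why the branch exists:
import (
    "context"
    "database/sql"
)

// insertAndPopulateID is a hypothetical helper, not part of Relica's API.
func insertAndPopulateID(ctx context.Context, db *sql.DB, dialect string, u *User) error {
    if dialect == "postgres" {
        // The PostgreSQL drivers don't support LastInsertId(), so ask the
        // database to return the generated ID directly.
        return db.QueryRowContext(ctx,
            `INSERT INTO users (name, email) VALUES ($1, $2) RETURNING id`,
            u.Name, u.Email,
        ).Scan(&u.ID)
    }
    // MySQL and SQLite report the generated ID through the driver.
    res, err := db.ExecContext(ctx,
        `INSERT INTO users (name, email) VALUES (?, ?)`,
        u.Name, u.Email,
    )
    if err != nil {
        return err
    }
    u.ID, err = res.LastInsertId()
    return err
}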
Expression API for Complex Conditions
Raw string conditions work, but for complex dynamic queries the Expression API keeps conditions composable, avoids hand-assembled SQL strings (a common source of injection bugs), and makes the code clearer:
// HashExp for simple conditions
db.Select("*").From("users").
Where(relica.HashExp{
"status": 1,
"role": []string{"admin", "moderator"}, // IN clause
"deleted_at": nil, // IS NULL
}).All(&users)
// Comparison operators
db.Select("*").From("users").
Where(relica.And(
relica.GreaterThan("age", 18),
relica.Or(
relica.Eq("role", "admin"),
relica.Like("email", "@company.com"),
),
)).All(&users)
// BETWEEN, IN, LIKE with automatic escaping
db.Select("*").From("orders").
Where(relica.Between("created_at", startDate, endDate)).
All(&orders)
Slice values automatically become IN clauses. nil values become IS NULL. The library handles the SQL generation details.
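For reference, the HashExp example above expands to roughly this SQL on PostgreSQL (illustrative - Relica's exact quoting and condition ordering may differ):
// The three map entries become =, IN, and IS NULL conditions.
const hashExpSQL = `SELECT * FROM "users"
WHERE "deleted_at" IS NULL AND "role" IN ($1, $2) AND "status" = $3`
// args: "admin", "moderator", 1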
JOINs, Aggregates, and Advanced Queries
The builder supports the SQL features you actually need:
// JOIN with aggregates
db.Select("u.name", "COUNT(o.id) as order_count").
From("users u").
LeftJoin("orders o", "o.user_id = u.id").
GroupBy("u.id", "u.name").
Having("COUNT(o.id) > ?", 10).
OrderBy("order_count DESC").
All(&results)
// Subqueries
sub := db.Builder().Select("user_id").From("orders").Where("total > ?", 100)
db.Select("*").From("users").
Where(relica.In("id", sub)).
All(&users)
// CTEs (Common Table Expressions)
cte := db.Builder().
Select("user_id", "SUM(total) as total").
From("orders").
GroupBy("user_id")
db.Builder().
Select("*").
With("order_totals", cte).
From("order_totals").
Where("total > ?", 1000).
All(&premiumUsers)
// UNION, INTERSECT, EXCEPT
q1 := db.Builder().Select("name").From("users")
q2 := db.Builder().Select("name").From("archived_users")
q1.Union(q2).All(&names)
All of this generates standard SQL. If you want to see what's being executed, the SQL is predictable and inspectable.
Batch Operations
Inserting or updating many rows one-by-one is slow. Batch operations reduce roundtrips:
// Batch INSERT from values - 3.3x faster than individual inserts for 100 rows
result, err := db.Builder().
BatchInsert("users", []string{"name", "email"}).
Values("Alice", "alice@example.com").
Values("Bob", "bob@example.com").
Values("Charlie", "charlie@example.com").
Execute()
// Batch INSERT from slice of structs - type-safe
users := []User{
{Name: "Alice", Email: "alice@example.com"},
{Name: "Bob", Email: "bob@example.com"},
}
result, err = db.BatchInsertStruct("users", users).Execute()
// Batch UPDATE - map required here since each row gets different values
result, err = db.Builder().
BatchUpdate("users", "id").
Set(1, map[string]interface{}{"status": "active"}).
Set(2, map[string]interface{}{"status": "inactive"}).
Execute()
The batch UPDATE uses CASE WHEN logic internally, updating multiple rows with different values in a single query.
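To make the CASE WHEN claim concrete, the batch update above compiles to a single statement shaped roughly like this (illustrative only - Relica's exact SQL and placeholder numbering may differ):
// One round trip instead of two separate UPDATEs.
const batchUpdateSQL = `UPDATE users
SET status = CASE id
    WHEN $1 THEN $2
    WHEN $3 THEN $4
END
WHERE id IN ($5, $6)`
// args: 1, "active", 2, "inactive", 1, 2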
Zero Dependencies
The production code uses only the Go standard library. Database drivers are your choice - Relica doesn't bundle them.
import (
"github.com/coregx/relica"
_ "github.com/lib/pq" // PostgreSQL
// _ "github.com/go-sql-driver/mysql" // MySQL
// _ "modernc.org/sqlite" // SQLite
)
Test dependencies exist (testcontainers for integration testing), but they're isolated in a separate module and don't affect your production builds.
Statement Caching
Prepared statements are cached with an LRU eviction policy. Default capacity is 1000 statements, configurable at connection time:
db, err := relica.Open("postgres", dsn,
relica.WithStmtCacheCapacity(2000),
relica.WithMaxOpenConns(25),
relica.WithMaxIdleConns(5),
)
Cache hit latency is under 60 nanoseconds. For frequently used queries, you can pin them to prevent eviction:
queries := []string{"SELECT * FROM users WHERE id = ?"}
db.WarmCache(queries)
db.PinQuery(queries[0])
Context Support
Every query supports context for timeouts and cancellation:
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
// Context on DB level - all queries use this context
dbWithCtx := db.WithContext(ctx)
var users []User
err := dbWithCtx.Select("*").From("users").All(&users)
// Or context on individual query
err = db.Select("*").
From("users").
Where("status = ?", 1).
WithContext(ctx).
All(&users)
Transactions
Standard transaction support with all isolation levels:
type Account struct {
ID int64 `db:"id"`
UserID int64 `db:"user_id"`
Balance float64 `db:"balance"`
}
ctx := context.Background()
tx, err := db.Begin(ctx)
if err != nil {
return err
}
defer tx.Rollback() // Safe to call even after Commit
// Insert user with Model API - ID auto-populated
user := User{Name: "Alice", Email: "alice@example.com"}
err = tx.Model(&user).Insert()
if err != nil {
return err
}
// Create account for user
account := Account{UserID: user.ID, Balance: 100.0}
err = tx.Model(&account).Insert()
if err != nil {
return err
}
return tx.Commit()
The Model API within transactions provides the same auto-populate ID and type safety as outside transactions.
Multi-Database Support
Same code works across PostgreSQL, MySQL, and SQLite. The query builder handles placeholder differences ($1 vs ?) and identifier quoting ("users" vs `users`):
| Database | Placeholders | Identifiers | UPSERT |
|---|---|---|---|
| PostgreSQL | $1, $2, $3 | "users" | ON CONFLICT |
| MySQL | ?, ?, ? | `users` | ON DUPLICATE KEY |
| SQLite | ?, ?, ? | "users" | ON CONFLICT |
Why Try Relica
1. Your go.sum stays clean
Zero production dependencies. Check our go.mod - only stdlib imports in the main package. Test dependencies are isolated in a separate module. Your production binary doesn't inherit vulnerabilities from libraries you don't control.
2. Structs, not maps
// GORM-style - runtime errors if you typo a field name
db.Create(map[string]interface{}{"nmae": "Alice"}) // typo goes unnoticed
// Relica - compiler catches typos
user := User{Nmae: "Alice"} // won't compile
db.Model(&user).Insert()
3. See what SQL runs
Every query is predictable. db.Select("*").From("users").Where("id = ?", 1) generates exactly SELECT * FROM "users" WHERE id = $1 on PostgreSQL. No surprises, no hidden JOINs, no extra queries.
4. One codebase, three databases
Write once, run on PostgreSQL, MySQL, or SQLite. Placeholder conversion ($1 vs ?), identifier quoting ("table" vs `table`), and dialect-specific features (RETURNING, ON CONFLICT) handled automatically.
5. Performance without complexity
Statement cache with <60ns hit latency. Batch operations 3.3x faster than individual inserts. You get the performance benefits without configuring anything - defaults are production-ready.
When NOT to Use Relica
If you need auto-migrations: Relica doesn't manage your schema. Use a dedicated migration tool like golang-migrate.
If you want lazy loading: There's no relationship mapping. JOINs are explicit - which is actually a feature, not a bug. You always know what queries run.
If you prefer raw SQL exclusively: sqlx might be simpler for your use case. Though you can use both - Relica wraps database/sql.
Best Practices: How to Use Relica Effectively
1. Prefer Model API for single-row CRUD
// Recommended - type-safe, auto-WHERE by PK
user := User{Name: "Alice", Email: "alice@example.com"}
db.Model(&user).Insert() // ID auto-populated
db.Model(&user).Update() // WHERE id = ? automatic
db.Model(&user).Delete() // WHERE id = ? automatic
2. Use InsertStruct/BatchInsertStruct for bulk inserts
// Single struct
db.InsertStruct("users", &user).Execute()
// Slice of structs - one query, not N queries
users := []User{{Name: "Alice"}, {Name: "Bob"}, {Name: "Charlie"}}
db.BatchInsertStruct("users", users).Execute()
3. Use Expression API for complex WHERE conditions
// Instead of string concatenation
db.Select("*").From("users").
Where(relica.And(
relica.GreaterThan("age", 18),
relica.In("role", "admin", "moderator"),
relica.Like("email", "@company.com"),
)).All(&users)
4. Always use context for production queries
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
db.Select("*").From("users").WithContext(ctx).All(&users)
5. Use transactions for related operations
tx, err := db.Begin(ctx)
if err != nil {
    return err
}
defer tx.Rollback()
user := User{Name: "Alice"}
if err := tx.Model(&user).Insert(); err != nil {
    return err
}
account := Account{UserID: user.ID, Balance: 100}
if err := tx.Model(&account).Insert(); err != nil {
    return err
}
return tx.Commit()
Project Status
Relica is at version 0.7.0. The API is stable for the features documented here. Breaking changes require deprecation cycles - the goal is to avoid a v2.0.0 for as long as possible.
Current metrics:
- 600+ tests
- 85.5% code coverage
- Zero production dependencies
- PostgreSQL, MySQL, SQLite support
Getting Started
go get github.com/coregx/relica
Minimal working example:
package main
import (
"context"
"fmt"
"log"
"time"
"github.com/coregx/relica"
_ "github.com/lib/pq"
)
type User struct {
ID int64 `db:"id"`
Name string `db:"name"`
Email string `db:"email"`
}
func main() {
db, err := relica.Open("postgres", "postgres://user:pass@localhost/db")
if err != nil {
log.Fatal(err)
}
defer db.Close()
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
// Insert with Model API - ID auto-populated
user := User{Name: "Alice", Email: "alice@example.com"}
err = db.Model(&user).Insert()
if err != nil {
log.Fatal(err)
}
fmt.Printf("Created user with ID: %d\n", user.ID)
// Query with context
var users []User
err = db.Select("*").
From("users").
Where("id = ?", user.ID).
WithContext(ctx).
All(&users)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Found: %+v\n", users)
}
Source Code
GitHub: https://github.com/coregx/relica
Documentation is in the repository. The README covers most use cases. For advanced features, check the docs/guides/ directory.
My Other Go Libraries
If you found Relica useful, you might be interested in my other open-source Go projects:
fursy - Next-generation HTTP router for Go achieving around 10 million requests per second throughput. Built for high-performance APIs.
GitHub: github.com/coregx/fursy
coregex - Multi-engine regex library up to 263x faster than stdlib with SIMD optimizations. Useful when regex performance is critical.
GitHub: github.com/coregx/coregex
phoenix - High-performance Terminal UI framework for Go with 91.8% test coverage. For building interactive CLI applications.
GitHub: github.com/coregx/phoenix
stream - RFC-compliant Server-Sent Events and WebSocket implementation for Go. Real-time communication without external dependencies.
GitHub: github.com/coregx/stream
signals - Type-safe reactive state management inspired by Angular Signals. Reactive patterns for Go applications.
GitHub: github.com/coregx/signals
hdf5 - Pure Go implementation of HDF5 file format passing 98.2% of official tests. Scientific data without CGO.
GitHub: github.com/scigolib/hdf5
I also maintain angular-editor - a WYSIWYG rich text editor for Angular with 700+ GitHub stars, supporting Angular 6 through 19+.
GitHub: github.com/kolkov/angular-editor
All projects follow the same philosophy: zero or minimal dependencies, predictable behavior, and high test coverage.
Full list: github.com/kolkov
Questions or feedback? Open an issue on GitHub or leave a comment below.