Why Another Query Builder?
I spent years working with Go database libraries. GORM is powerful but opaque - when something goes wrong, good luck debugging the auto-generated SQL. sqlx is great for raw SQL, but I found myself writing the same boilerplate query construction logic in every project.
What I wanted was something in between: a library that builds SQL predictably, requires zero external dependencies in production, and doesn't make me choose between type safety and simplicity.
Relica is my answer to that problem.
The Problem with Existing Solutions
GORM does too much magic. Auto-migrations, lazy loading, hooks everywhere. When your query is slow, figuring out what SQL actually got executed requires digging through layers of abstraction. In production systems, I need predictability over convenience.
sqlx does too little. It's excellent at what it does - scanning rows into structs and named parameter binding - but you're still writing raw SQL strings. Complex queries with dynamic WHERE conditions become string concatenation nightmares.
database/sql is the foundation, but using it directly means writing the same error handling, connection management, and scanning code repeatedly.
I wanted a query builder that:
- Generates predictable, inspectable SQL
- Has zero dependencies to avoid supply chain risks
- Provides ORM-like convenience for common operations
- Stays out of the way for complex queries
What Relica Actually Does
Relica is a query builder, not an ORM. It doesn't manage your schema, doesn't auto-migrate, doesn't lazy load. It builds SQL queries using a fluent API and executes them.
Basic Usage
db, err := relica.Open("postgres", "postgres://user:pass@localhost/db")
if err != nil {
    log.Fatal(err)
}
defer db.Close()

// SELECT with fluent builder
var users []User
err = db.Select("id", "name", "email").
    From("users").
    Where("status = ?", 1).
    OrderBy("name").
    Limit(10).
    All(&users)

// INSERT returns sql.Result
result, err := db.Insert("users", map[string]interface{}{
    "name":  "Alice",
    "email": "alice@example.com",
}).Execute()

// UPDATE with WHERE
result, err = db.Update("users").
    Set(map[string]interface{}{"status": 2}).
    Where("id = ?", 123).
    Execute()

// DELETE
result, err = db.Delete("users").
    Where("id = ?", 123).
    Execute()
The ? placeholder converts automatically based on your database driver - $1, $2 for PostgreSQL, ? for MySQL and SQLite.
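For context, the rewrite itself is simple. Here is a minimal sketch of the general technique (not Relica's internal code; it deliberately ignores ? characters inside quoted literals):

import (
    "strconv"
    "strings"
)

// toPostgresPlaceholders rewrites each "?" into "$1", "$2", ... in order.
// Naive on purpose: a real builder must skip "?" inside string literals.
func toPostgresPlaceholders(query string) string {
    var b strings.Builder
    n := 0
    for _, r := range query {
        if r == '?' {
            n++
            b.WriteString("$" + strconv.Itoa(n))
            continue
        }
        b.WriteRune(r)
    }
    return b.String()
}

// toPostgresPlaceholders("status = ? AND id = ?") returns "status = $1 AND id = $2"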
The Model API
For common CRUD operations, the Model API provides ORM-like ergonomics:
type User struct {
    ID    int64  `db:"id"`
    Name  string `db:"name"`
    Email string `db:"email"`
}
// TableName() is optional - defaults to lowercase struct name + "s"
func (User) TableName() string { return "users" }
// INSERT - ID auto-populated after insert
user := User{Name: "Alice", Email: "alice@example.com"}
err := db.Model(&user).Insert()
fmt.Println(user.ID) // 1 (auto-populated)
// UPDATE - uses primary key in WHERE automatically
user.Name = "Alice Updated"
err = db.Model(&user).Update()
// Selective fields - only update specific columns
err = db.Model(&user).Update("name") // Only updates name column
// DELETE
err = db.Model(&user).Delete()
The auto-populate ID feature handles the PostgreSQL quirk where LastInsertId() isn't supported - Relica uses RETURNING clauses internally. You don't need to think about it.
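For context, that mechanism is the standard RETURNING pattern. A rough equivalent in plain database/sql looks like this (the helper below is illustrative, not part of Relica's API):

import "database/sql"

// insertUserReturningID shows the pattern: lib/pq has no LastInsertId(),
// so the generated id is read back via a RETURNING clause.
func insertUserReturningID(sqlDB *sql.DB, name, email string) (int64, error) {
    var id int64
    err := sqlDB.QueryRow(
        `INSERT INTO users (name, email) VALUES ($1, $2) RETURNING id`,
        name, email,
    ).Scan(&id)
    return id, err
}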
Expression API for Complex Conditions
Raw string conditions work, but for complex dynamic queries the Expression API keeps every value parameterized instead of concatenated into SQL, and it makes the intent clearer:
// HashExp for simple conditions
db.Select("*").From("users").
    Where(relica.HashExp{
        "status":     1,
        "role":       []string{"admin", "moderator"}, // IN clause
        "deleted_at": nil,                            // IS NULL
    }).All(&users)

// Comparison operators
db.Select("*").From("users").
    Where(relica.And(
        relica.GreaterThan("age", 18),
        relica.Or(
            relica.Eq("role", "admin"),
            relica.Like("email", "@company.com"),
        ),
    )).All(&users)

// BETWEEN, IN, LIKE with automatic escaping
db.Select("*").From("orders").
    Where(relica.Between("created_at", startDate, endDate)).
    All(&orders)
Slice values automatically become IN clauses. nil values become IS NULL. The library handles the SQL generation details.
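If you're curious what that expansion looks like, here is a minimal sketch of the technique - not Relica's internals, just the general idea (a real implementation also quotes identifiers, sorts keys for deterministic SQL, and handles more value types):

import (
    "fmt"
    "strings"
)

// buildCondition turns a map of column conditions into a WHERE fragment:
// nil values become IS NULL, slices become IN (...), everything else "= ?".
func buildCondition(cond map[string]interface{}) (string, []interface{}) {
    var parts []string
    var args []interface{}
    for col, val := range cond {
        if val == nil {
            parts = append(parts, col+" IS NULL")
            continue
        }
        if list, ok := val.([]string); ok {
            placeholders := strings.TrimSuffix(strings.Repeat("?, ", len(list)), ", ")
            parts = append(parts, fmt.Sprintf("%s IN (%s)", col, placeholders))
            for _, item := range list {
                args = append(args, item)
            }
            continue
        }
        parts = append(parts, col+" = ?")
        args = append(args, val)
    }
    return strings.Join(parts, " AND "), args
}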
JOINs, Aggregates, and Advanced Queries
The builder supports the SQL features you actually need:
// JOIN with aggregates
db.Select("u.name", "COUNT(o.id) as order_count").
From("users u").
LeftJoin("orders o", "o.user_id = u.id").
GroupBy("u.id", "u.name").
Having("COUNT(o.id) > ?", 10).
OrderBy("order_count DESC").
All(&results)
// Subqueries
sub := db.Builder().Select("user_id").From("orders").Where("total > ?", 100)
db.Select("*").From("users").
Where(relica.In("id", sub)).
All(&users)
// CTEs (Common Table Expressions)
cte := db.Builder().
Select("user_id", "SUM(total) as total").
From("orders").
GroupBy("user_id")
db.Builder().
Select("*").
With("order_totals", cte).
From("order_totals").
Where("total > ?", 1000).
All(&premiumUsers)
// UNION, INTERSECT, EXCEPT
q1 := db.Builder().Select("name").From("users")
q2 := db.Builder().Select("name").From("archived_users")
q1.Union(q2).All(&names)
All of this generates standard SQL. If you want to see what's being executed, the SQL is predictable and inspectable.
Batch Operations
Inserting or updating many rows one-by-one is slow. Batch operations reduce roundtrips:
// Batch INSERT - 3.3x faster than individual inserts for 100 rows
result, err := db.Builder().
    BatchInsert("users", []string{"name", "email"}).
    Values("Alice", "alice@example.com").
    Values("Bob", "bob@example.com").
    Values("Charlie", "charlie@example.com").
    Execute()

// Batch UPDATE - different values for different rows
result, err = db.Builder().
    BatchUpdate("users", "id").
    Set(1, map[string]interface{}{"status": "active"}).
    Set(2, map[string]interface{}{"status": "inactive"}).
    Execute()
The batch UPDATE uses CASE WHEN logic internally, updating multiple rows with different values in a single query.
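To make that concrete, here is a rough sketch of how such a statement can be assembled - an illustration of the technique, not Relica's actual code (it skips identifier quoting and only handles one column):

import (
    "fmt"
    "strings"
)

// buildCaseUpdate produces something like:
//   UPDATE users SET status = CASE id WHEN ? THEN ? WHEN ? THEN ? END WHERE id IN (?, ?)
func buildCaseUpdate(table, keyCol, setCol string, rows map[int64]string) (string, []interface{}) {
    var whens []string
    var inPlaceholders []string
    var args []interface{}
    var keys []interface{}
    for id, val := range rows {
        whens = append(whens, "WHEN ? THEN ?")
        args = append(args, id, val)
        inPlaceholders = append(inPlaceholders, "?")
        keys = append(keys, id)
    }
    args = append(args, keys...) // the IN (...) arguments come last
    query := fmt.Sprintf("UPDATE %s SET %s = CASE %s %s END WHERE %s IN (%s)",
        table, setCol, keyCol,
        strings.Join(whens, " "),
        keyCol, strings.Join(inPlaceholders, ", "))
    return query, args
}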
Zero Dependencies
The production code uses only the Go standard library. Database drivers are your choice - Relica doesn't bundle them.
import (
    "github.com/coregx/relica"
    _ "github.com/lib/pq" // PostgreSQL
    // _ "github.com/go-sql-driver/mysql" // MySQL
    // _ "modernc.org/sqlite" // SQLite
)
Test dependencies exist (testcontainers for integration testing), but they're isolated in a separate module and don't affect your production builds.
Statement Caching
Prepared statements are cached with an LRU eviction policy. Default capacity is 1000 statements, configurable at connection time:
db, err := relica.Open("postgres", dsn,
    relica.WithStmtCacheCapacity(2000),
    relica.WithMaxOpenConns(25),
    relica.WithMaxIdleConns(5),
)
Cache hit latency is under 60 nanoseconds. For frequently used queries, you can pin them to prevent eviction:
queries := []string{"SELECT * FROM users WHERE id = ?"}
db.WarmCache(queries)
db.PinQuery(queries[0])
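The idea behind pinning is simple: pinned entries are exempt from LRU eviction. A toy sketch of the concept (not Relica's actual cache, which also needs to be concurrency-safe and update recency on every hit):

import "database/sql"

// stmtCache is a toy model: a capacity-bounded map of prepared statements
// plus a set of pinned keys that eviction skips over.
type stmtCache struct {
    capacity int
    stmts    map[string]*sql.Stmt
    pinned   map[string]bool
    order    []string // least recently used first
}

// evictOne drops the least recently used statement that is not pinned.
func (c *stmtCache) evictOne() {
    for i, key := range c.order {
        if c.pinned[key] {
            continue // pinned statements are never evicted
        }
        _ = c.stmts[key].Close() // ignore close error in this sketch
        delete(c.stmts, key)
        c.order = append(c.order[:i], c.order[i+1:]...)
        return
    }
}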
Transactions
Standard transaction support with all isolation levels:
tx, err := db.Begin(ctx)
if err != nil {
    return err
}
defer tx.Rollback() // Safe to call even after Commit

_, err = tx.Insert("users", data).Execute()
if err != nil {
    return err
}

_, err = tx.Update("accounts").
    Set(map[string]interface{}{"balance": newBalance}).
    Where("user_id = ?", userID).
    Execute()
if err != nil {
    return err
}

return tx.Commit()
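A note on "all isolation levels": Relica wraps database/sql, where an isolation level is requested through sql.TxOptions. I'm not reproducing Relica's own option surface here - this is just the underlying stdlib call that any wrapper ultimately makes:

import (
    "context"
    "database/sql"
)

// beginSerializable starts a transaction at SERIALIZABLE isolation using
// plain database/sql; a driver may return an error for levels it does not support.
func beginSerializable(ctx context.Context, sqlDB *sql.DB) (*sql.Tx, error) {
    return sqlDB.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSerializable})
}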
The Model API works within transactions too:
tx, _ := db.Begin(ctx)
defer tx.Rollback()

user := User{Name: "Alice"}
err := tx.Model(&user).Insert() // ID auto-populated
if err == nil {
    tx.Commit()
}
Multi-Database Support
Same code works across PostgreSQL, MySQL, and SQLite. The query builder handles placeholder differences ($1 vs ?) and identifier quoting ("users" vs `users`):
| Database | Placeholders | Identifiers | UPSERT |
|---|---|---|---|
| PostgreSQL | $1, $2, $3 | "users" | ON CONFLICT |
| MySQL | ?, ?, ? | `users` | ON DUPLICATE KEY |
| SQLite | ?, ?, ? | "users" | ON CONFLICT |
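Identifier quoting follows the same per-dialect rule as placeholders. A minimal sketch of the idea (not Relica's API; a real implementation also validates and escapes the identifier itself):

// quoteIdent applies the dialect's quoting style to a table or column name.
func quoteIdent(dialect, ident string) string {
    switch dialect {
    case "mysql":
        return "`" + ident + "`"
    default: // postgres, sqlite
        return `"` + ident + `"`
    }
}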
When NOT to Use Relica
To be honest about the limitations:
If you need auto-migrations: Relica doesn't manage your schema. Use a dedicated migration tool.
If you want lazy loading: There's no relationship mapping. JOINs are explicit.
If you prefer raw SQL exclusively: sqlx might be simpler for your use case.
If you need connection pooling beyond stdlib: Relica wraps database/sql. For advanced pooling (like pgbouncer-style features), you might want pgx directly.
Project Status
Relica is at version 0.7.0. The API is stable for the features documented here. Breaking changes require deprecation cycles - the goal is to avoid a v2.0.0 for as long as possible.
Current metrics:
- 600+ tests
- 85.5% code coverage
- Zero production dependencies
- PostgreSQL, MySQL, SQLite support
Getting Started
go get github.com/coregx/relica
Minimal working example:
package main

import (
    "fmt"
    "log"

    "github.com/coregx/relica"
    _ "github.com/lib/pq"
)

type User struct {
    ID    int64  `db:"id"`
    Name  string `db:"name"`
    Email string `db:"email"`
}

func main() {
    db, err := relica.Open("postgres", "postgres://user:pass@localhost/db")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Insert
    user := User{Name: "Alice", Email: "alice@example.com"}
    err = db.Model(&user).Insert()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("Created user with ID: %d\n", user.ID)

    // Query
    var users []User
    err = db.Select("*").
        From("users").
        Where("id = ?", user.ID).
        All(&users)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("Found: %+v\n", users)
}
Source Code
GitHub: https://github.com/coregx/relica
Documentation is in the repository. The README covers most use cases. For advanced features, check the docs/guides/ directory.
My Other Go Libraries
If you found Relica useful, you might be interested in my other open-source Go projects:
fursy - Next-generation HTTP router for Go achieving around 10 million requests per second throughput. Built for high-performance APIs.
GitHub: github.com/coregx/fursy
coregex - Multi-engine regex library up to 263x faster than stdlib with SIMD optimizations. Useful when regex performance is critical.
GitHub: github.com/coregx/coregex
phoenix - High-performance Terminal UI framework for Go with 91.8% test coverage. For building interactive CLI applications.
GitHub: github.com/coregx/phoenix
stream - RFC-compliant Server-Sent Events and WebSocket implementation for Go. Real-time communication without external dependencies.
GitHub: github.com/coregx/stream
signals - Type-safe reactive state management inspired by Angular Signals. Reactive patterns for Go applications.
GitHub: github.com/coregx/signals
hdf5 - Pure Go implementation of the HDF5 file format, passing 98.2% of the official tests. Scientific data without CGO.
GitHub: github.com/scigolib/hdf5
I also maintain angular-editor - a WYSIWYG rich text editor for Angular with 700+ GitHub stars, supporting Angular 6 through 19+.
GitHub: github.com/kolkov/angular-editor
All projects follow the same philosophy: zero or minimal dependencies, predictable behavior, and high test coverage.
Full list: github.com/kolkov
Questions or feedback? Open an issue on GitHub or leave a comment below.