I recently started a new project in Go where I needed a database. It was an opportunity to select a stack I was happy with. In the past, I had used ORMs like GORM in existing projects.
Now, why not use an ORM? I've seen too many performance issues with ORMs, and I prefer writing my own SQL to avoid surprises. After all, I know my database schema, and code written for a specific purpose very often performs better than generic code: ORMs have to support all kinds of database schemas, while I only have to support mine. Having successfully used Knex.js (a popular query builder) in NodeJS in the past, I know writing SQL queries myself is not hard and provides very good performance.
I was browsing awesome-go for ideas on how to set up my data access layer when I stumbled on sqlc. It seemed like a great option. Code generation is a strategy often used in the Go ecosystem, and making my queries safe at compile time was an idea I really liked. Knex was great, but it required me to test my queries thoroughly at runtime and to sanitize my query results to ensure type safety within my application.
Here is how you write queries in sqlc. It's plain SQL, with a comment that lets you name your query and hint at how many results it returns.
-- name: GetUserByID :one
SELECT * FROM users u
WHERE u.id = sqlc.arg('UserID') AND u.deleted_at IS NULL
LIMIT 1;
-- name: InsertUser :one
INSERT INTO users (
  id, name, organization_id,
  created_at, last_updated_at, deleted_at
)
VALUES (
  sqlc.arg('UserID'),
  sqlc.arg('Name'),
  sqlc.arg('OrganizationID'),
  sqlc.arg('CreatedAt'),
  sqlc.arg('LastUpdatedAt'),
  sqlc.arg('DeletedAt')
)
RETURNING *;
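To give an idea of what comes out the other end, here is roughly the shape of the Go code sqlc generates for these two queries. This is an illustrative sketch only: the exact field and parameter types depend on the engine and options configured later in this post (pgx/v5 with pointers for nullable columns).

package db

import (
	"context"
	"time"

	"github.com/jackc/pgx/v5/pgtype"
)

// Sketch of the generated model; types are indicative.
type User struct {
	ID             pgtype.UUID
	Name           string
	OrganizationID pgtype.UUID
	CreatedAt      time.Time
	LastUpdatedAt  *time.Time
	DeletedAt      *time.Time
}

// Sketch of the generated parameter struct for InsertUser.
type InsertUserParams struct {
	UserID         pgtype.UUID
	Name           string
	OrganizationID pgtype.UUID
	CreatedAt      time.Time
	LastUpdatedAt  *time.Time
	DeletedAt      *time.Time
}

// With emit_interface enabled, sqlc also emits a Querier interface.
type Querier interface {
	GetUserByID(ctx context.Context, userID pgtype.UUID) (User, error)
	InsertUser(ctx context.Context, arg InsertUserParams) (User, error)
}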
To provide compile-time safety and code generation, sqlc requires a database schema in the form of a single SQL file.
I could have maintained a separate database schema file for sqlc, apart from my migrations, but I felt it would undermine some of the type-safety promise sqlc makes: I would need to ensure this schema stayed in sync with the real database schema at all times, and I didn't like that idea much.
I also thought about concatenating all the migration files together. It would not be too bad at the beginning, but I knew that over time I would add new columns to tables, drop some of them, and accumulate a bunch of changes overall. This could potentially cause issues with sqlc.
I chose another avenue. Enter Prisma.
EDIT: Thank you, Andrew, for commenting. If you only want to use the migration files without Prisma, sqlc works with a migration folder.
Prisma to the rescue
Prisma is a popular NodeJS ORM. As I said earlier, I'm not a fan of ORMs, even though they can be useful, especially in CRUD scenarios. However, Prisma is not only an ORM but also a migration tool. To use Prisma, you write a schema file, and the format is pretty readable in my opinion.
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema

datasource db {
  provider = "postgresql"
  url      = env("MIGRATOR_PG_URL")
}

// We don't care about the client, but it needs to be defined, so we output
// the client to a tmp folder that we ignore using a .gitignore file.
generator client {
  provider = "prisma-client-js"
  output   = "../tmp/prisma-client"
}

model User {
  id              String    @id @db.Uuid
  name            String
  organization_id String    @db.Uuid
  created_at      DateTime  @db.Timestamptz()
  last_updated_at DateTime? @db.Timestamptz()
  deleted_at      DateTime? @db.Timestamptz()

  @@map("users")
}
This format has many advantages. It lets new developers get familiar with the database schema very quickly because it doubles as documentation. It also allows new migrations to be created by reconciling the state of the database with the defined schema.
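For example, assuming a development database is reachable through MIGRATOR_PG_URL, a named migration can be generated and applied with:

npx prisma migrate dev --name add_users_table

(The migration name add_users_table is just an illustration.)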
Prisma generates SQL migrations through its CLI. You can either provide a database connection so it emits only what changed, or ask it to generate the full schema from scratch. With the latter option, we can produce exactly the schema sqlc requires to do its job.
npx prisma migrate diff --from-empty --to-schema-datamodel=./prisma/schema.prisma --script > ./schema.gen.sql
The resulting SQL schema:
-- CreateTable
CREATE TABLE "users" (
"id" UUID NOT NULL PRIMARY KEY,
"name" TEXT NOT NULL,
"organization_id" UUID NOT NULL,
"created_at" TIMESTAMPTZ NOT NULL,
"last_updated_at" TIMESTAMPTZ,
"deleted_at" TIMESTAMPTZ
);
Then, in sqlc's configuration file, we point to the generated schema as input.
version: "2"
sql:
  - engine: "postgresql"
    queries: "sqlc/queries.sql"
    schema: "schema.gen.sql"
    gen:
      go:
        package: "db"
        out: "src/db"
        sql_package: "pgx/v5"
        emit_interface: true
        emit_pointers_for_null_types: true
        output_batch_file_name: "batch.gen.go"
        output_db_file_name: "db.gen.go"
        output_models_file_name: "models.gen.go"
        output_querier_file_name: "querier.gen.go"
        output_copyfrom_file_name: "copyfrom.gen.go"
        output_files_suffix: ".gen.go"
With this configuration in place, we run:

sqlc generate
In the configuration above, I added a .gen.go suffix to all generated files. This allows me to ignore them using a .gitignore file and minimizes the risk of ending up with a stale version of those files.
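For completeness, a minimal sketch of the matching .gitignore entries (this assumes schema.gen.sql is also regenerated locally, and tmp/ matches the Prisma client output configured earlier):

*.gen.go
schema.gen.sql
tmp/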
As expected, sqlc generates all the relevant Data Access Objects. In the future, as my schema evolves, it will check my queries for breaking changes, minimizing the risk of discovering in production that something broke.
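As an illustration, consuming the generated package looks roughly like this. The import path my_app/src/db, the PG_URL variable, and the UUID value are placeholders, and the exact parameter type depends on your sqlc type mappings.

package main

import (
	"context"
	"log"
	"os"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgtype"

	"my_app/src/db" // placeholder import path for the generated package
)

func main() {
	ctx := context.Background()

	// Connect with pgx; PG_URL is a placeholder environment variable.
	conn, err := pgx.Connect(ctx, os.Getenv("PG_URL"))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// db.New wraps a pgx connection (or pool, or transaction) in the
	// generated Queries type.
	queries := db.New(conn)

	// pgx/v5 maps UUID columns to pgtype.UUID by default.
	var userID pgtype.UUID
	if err := userID.Scan("00000000-0000-0000-0000-000000000001"); err != nil {
		log.Fatal(err)
	}

	user, err := queries.GetUserByID(ctx, userID)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("found user: %s", user.Name)
}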
The only remaining problem is that we rely on generated files to make this whole chain work. This can be annoying to maintain, but with a Makefile we can ensure our dependency chain is always up-to-date.
schema.gen.sql: prisma/schema.prisma ## Generates the SQL schema based on the Prisma schema
	@echo "[INFO] Generating SQL schema"
	@npx prisma migrate diff --from-empty --to-schema-datamodel=./prisma/schema.prisma --script > ./schema.gen.sql

sqlc: schema.gen.sql ## Generates the Go code based on the sqlc config and schema
	@echo "[INFO] Installing sqlc"
	@go install github.com/sqlc-dev/sqlc/cmd/sqlc@v1.23.0
	@echo "[INFO] Generating SQL code"
	@sqlc generate
.PHONY: sqlc
Now, to generate our Data Access Objects, we can simply call make sqlc. We can even go one step further and hook it into go generate by embedding the make command in a Go file.
package my_app
//go:generate make sqlc
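With that directive in place, everything can be regenerated with Go's standard tooling:

go generate ./...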
As a bonus, sqlc can generate code for languages other than Go. It now supports Kotlin, Python, and F#, and there are plans to support C# and TypeScript in the future.
EDIT: TypeScript support has just launched in alpha. Check it out!
What do you think of this setup? Once in place, it feels great to use. Do you have a favorite solution for your Data Access Layer? Do you have a proposal to make this approach better? Let me know.
Top comments (5)
Great article Gabriel! First time learning about sqlc and it looks pretty neat. I think if I was starting a new Go project from scratch, I like the idea of using Prisma for the schema authoring and generation of migration files. But instead of writing the raw queries in SQL files and having sqlc generate the Go code from those, I would probably just try to use something like the Prisma Go client, which I know is ORM-y but also provides the ability to write raw queries, which is nice for those complex queries that are sometimes needed.
It also reduces the number of generation steps, I feel, making it a bit less brittle, but the downside is more work for the dev.
It's all trade-offs at the end of the day. My next Go project, I will look into both methods and test them out to see what I like more!
Hey Jason, thanks for your comment. I find the best thing about sqlc is the compile-time safety of types: it will not let you create a query against tables and columns it does not know about.
This saves you from finding out about broken queries at runtime.
Hi, great post! I am one of the maintainers of sqlc and just wanted to mention that you can actually specify a directory of migrations as your schema to sqlc. You don't have to put everything in one file.
Also we just announced the first alpha release of our TypeScript plugin, which does what you'd expect (generates TypeScript from your SQL). It would be great to have anyone who's interested try it out!
Hi Andrew. I've added a few edits to the article to reflect the information in your comment. Thank you for correcting me :)
You are contributing to a great project. Keep up the good work.
wow, great solution