Writing business logic can be interesting.
Writing yet another REST wrapper around a table is not.
Never.
Ever.
This is not software development; it is digital sock knitting. Except the socks then need to be covered with tests, documented, wrapped into DTOs, passed through a service, a repository, a handler, a mapper, a payload, and, preferably, completed without crying in the process.
Imagine a very ordinary situation.
A DBA adds a new table to the database. From the SQL side, everything is beautiful: a clean schema, proper indexes, relationships, constraints, the whole package. The DBA is happy. The database is happy. Somewhere far away, a tiny bird is singing.
Meanwhile, the backend developer realizes two things:
- They now need to create a controller, service, repository, interfaces, CRUD implementation, entity, DTO, payload, mappers, documentation, and probably something else, because “that is how we do things here.”
- They urgently need to reconsider their career path. Maybe become a painter. Or a baker. Or a person who herds goats and has no idea what CreateUserRequest is.
And that is only one table.
When was the last time you saw a database with just one table?
Usually, things are much more entertaining. Especially when the database was originally built as a large standalone project, and the backend and frontend were postponed until “later.” This is a beautiful architectural approach: first build the city, then think about roads, electricity, and maybe a fountain.
You get the password to the DEV environment.
You connect.
And there you see several hundred tables, a couple dozen schemas, historical artifacts, a users2 table, a users_new table, a users_final table, a users_final_really table, and also a reference dictionary that nobody has touched since 2016 because “it works.”
And that is still considered a small database.
At this point, you once again want to change jobs, move to the forest, raise livestock, grow tomatoes, and explain to your children that you used to write REST APIs, but now you have a normal life.
Routine Code Is Not Heroism
I love writing code. I am a programmer.
But I do not love monkey work disguised as “just a normal task for a couple of days.”
Writing another handler is not hard. Writing another service is not hard. Writing another repository is not hard either.
That is exactly the problem.
You have already done it a thousand times. You know where GetByID will be. You know where Create will be. You know that somewhere nearby there will be Update, then Delete, then “let’s add filtering,” then “let’s add pagination,” then “why is this field not named the same way as on the frontend?”
And you sit there thinking:
am I really a software engineer, or just a very expensive template copier?
Business logic is interesting.
Architectural decisions are interesting.
Understanding the domain is interesting.
Writing the twenty-seventh CRUD layer for the dict_operation_status table is not a spiritual journey. It is punishment.
So the Generator Appeared
With these thoughts in mind, while starting another project, I decided to spend a couple of weeks not writing CRUD by hand for the first 100 tables, but building a code generator instead.
At first, it was for Java.
This is not about generating business logic. There is no magic here. The generator does not know how your business should work, why an order cannot be canceled after payment, or why production should not be touched after 6 PM on Friday, even though everyone still touches it anyway.
This is only about primitive CRUD.
That same layer that is almost always needed, but writing it by hand every time feels like manually moving bytes from one folder to another while commenting on the process in Jira.
And, as practice showed, the effort was not wasted.
Instead of heroically suffering for several weeks, you can generate the foundation, open the project in your IDE, and do normal work. Or at least a more meaningful form of suffering.
Then Go Came Along
Later, I started working on Go projects, and I decided to adapt the same logic there.
In some ways, Java is simpler. Project structure usually lives by the principle: “everything must be here, named exactly like this, built with Maven, and please do not ask unnecessary questions.”
Go is closer to me. It is simpler, freer, and gives more room for creativity.
Of course, along with creativity comes the other side of freedom: ten logging libraries, ten HTTP libraries, ten opinions about project structure, and twenty people in the comments explaining why your version is wrong.
After reading forums, GitHub, other people’s projects, and surviving a mild existential crisis, the structure gradually started to take shape.
Generation could begin.
“Just Generate an Entity” Sounds Easy
Collecting the list of tables from a database and turning them into Go structs is not that hard.
Well, almost. And those structs are not really useful on their own anyway; you need the full CRUD layer with methods and everything else.
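The simple part really is simple: you ask information_schema what exists. Here is a minimal sketch of that discovery step, assuming Postgres and pgx; the connection string and output format are illustrative, this is not gofromdb's actual code:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, "postgres://user:pass@localhost:5432/dev")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// Every column in every user schema, in declaration order.
	rows, err := conn.Query(ctx, `
		select table_schema, table_name, column_name, data_type, is_nullable
		from information_schema.columns
		where table_schema not in ('pg_catalog', 'information_schema')
		order by table_schema, table_name, ordinal_position`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var schema, table, column, dataType, nullable string
		if err := rows.Scan(&schema, &table, &column, &dataType, &nullable); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s.%s.%s %s nullable=%s\n", schema, table, column, dataType, nullable)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}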
And then the real database begins.
And in a real database, you have:
- multiple schemas
- identical table names in different schemas
- tables without primary keys, because “it is fine”
- composite primary keys
- data types that stare at you like ancient gods
- legacy decisions that nobody understands but everyone is afraid to delete
- column names that make you want to call both a linguist and a therapist
And that is where CRUD stops being so simple.
But most problems are solvable. Over time, roughly 90% of typical cases can be covered. The remaining 10% can be finished by hand, and that is fine.
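Much of that 90% is a long, boring mapping from SQL types to Go types, plus the rule that nullable columns become pointer fields (which is where Description *string in the example below comes from). A simplified sketch, nowhere near the generator's full table:

// goType maps a Postgres data_type to a Go type name.
// Deliberately incomplete; the exotic cases are the 10% you finish by hand.
func goType(sqlType string, nullable bool) string {
	types := map[string]string{
		"integer":                     "int",
		"bigint":                      "int64",
		"smallint":                    "int16",
		"boolean":                     "bool",
		"text":                        "string",
		"character varying":           "string",
		"uuid":                        "string",
		"timestamp with time zone":    "time.Time",
		"timestamp without time zone": "time.Time",
	}
	t, ok := types[sqlType]
	if !ok {
		t = "string" // safe fallback for types that stare at you like ancient gods
	}
	if nullable {
		return "*" + t // nullable column -> pointer field
	}
	return t
}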
The important part is that the main routine is already done.
That means you no longer need to manually create hundreds of nearly identical files while pretending this is “backend layer development.”
Why This Is Actually Usable
I ran the generator against several production databases that were large enough, complex enough, and honest enough.
And I came to the conclusion: yes, this can be used.
The generated code is easy to refactor in an IDE. Today this is not a big problem: rename packages, move files around, adjust the structure, make it fit the project style.
The important thing is that the starting routine is already closed.
That same layer that made you want to run away to the forest and grow tomatoes in the morning is already sitting in the project.
Not perfect. Not the final architecture of your dreams. But good enough to start working instead of slowly turning into a boilerplate-code generator running on biological fuel.
A Small Example
For each table, we generate a package under internal/api/<schema>/<table>/:
internal/api/
    repository.go      # top-level repository interfaces (all tables)
    service.go         # top-level service interfaces (all tables)
    handler.go         # router: mounts all routes
    public/
        products/
            entity.go      # DB struct mapped from table schema
            repository.go  # pgx queries: Save, Update, Delete, Find, FindAll, paginated
            service.go     # business logic layer, delegates to repository
            dto.go         # CreateDto, UpdateDto, Dto (internal transfer types)
            payload.go     # HTTP request/response types with JSON tags
            handler.go     # net/http handlers with Swagger annotations
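The top-level files just aggregate per-table contracts, so the rest of the app depends on interfaces rather than concrete implementations. An assumed shape, modeled on the example below rather than copied from the generator's output:

// One interface per table, collected in the top-level repository.go.
type ProductsRepository interface {
	Save(ctx context.Context, e *products.Products) (*products.Products, error)
	Update(ctx context.Context, e *products.Products) (*products.Products, error)
	Delete(ctx context.Context, recordID int) error
	Find(ctx context.Context, recordID int) (*products.Products, error)
	FindAll(ctx context.Context) ([]*products.Products, error)
}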
Let’s look at generation for a table like this:
create table products (
    record_id   serial primary key,
    category_id int not null references categories (record_id),
    name        varchar(250) not null,
    description text,
    -- audit columns assumed by the generated code below
    created_at  timestamptz not null default now(),
    updated_at  timestamptz not null default now(),
    guid        uuid not null default gen_random_uuid()
);

comment on table products is 'Stores products with a reference to their category.';
comment on column products.name is 'Name of the product.';
We get:
entity.go:
type Products struct {
    RecordID    int       `json:"record_id" db:"record_id"`
    CategoryID  int       `json:"category_id" db:"category_id"`
    Name        string    `json:"name" db:"name"`
    Description *string   `json:"description" db:"description"`
    CreatedAt   time.Time `json:"created_at" db:"created_at"`
    UpdatedAt   time.Time `json:"updated_at" db:"updated_at"`
    GUID        string    `json:"guid" db:"guid"`
}
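The dto.go and payload.go files carry the same fields in transfer-friendly form, with DB-managed columns stripped from the create path. Roughly, as an assumed shape based on the entity above:

// CreateDto omits columns the database fills in itself.
type ProductsCreateDto struct {
	CategoryID  int
	Name        string
	Description *string
}

// Dto mirrors the full row for reads.
type ProductsDto struct {
	RecordID    int
	CategoryID  int
	Name        string
	Description *string
	CreatedAt   time.Time
	UpdatedAt   time.Time
	GUID        string
}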
repository.go:
func (r *repo) Save(ctx context.Context, inputEntity *Products) (*Products, error) {
    query := `
        insert into public.products (category_id, name, description)
        values ($1, $2, $3)
        returning record_id, category_id, name, description, created_at, updated_at, guid
    `
    row := r.db.Pool.QueryRow(ctx, query,
        inputEntity.CategoryID,
        inputEntity.Name,
        inputEntity.Description,
    )
    return scanFullRow(row)
}
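The other methods follow the same pattern; the paginated variant backing the /pageable route below is, roughly, plain limit/offset. A sketch, not the verbatim generated code (the method name here is a placeholder):

func (r *repo) FindPage(ctx context.Context, limit, offset int) ([]*Products, error) {
	query := `
		select record_id, category_id, name, description, created_at, updated_at, guid
		from public.products
		order by record_id
		limit $1 offset $2
	`
	rows, err := r.db.Pool.Query(ctx, query, limit, offset)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var result []*Products
	for rows.Next() {
		var e Products
		if err := rows.Scan(&e.RecordID, &e.CategoryID, &e.Name, &e.Description,
			&e.CreatedAt, &e.UpdatedAt, &e.GUID); err != nil {
			return nil, err
		}
		result = append(result, &e)
	}
	return result, rows.Err()
}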
handler.go:
// @Summary Create new item
// @Tags products
// @Accept json
// @Produce json
// @Param request body productsCreateRequest true "Create input"
// @Success 201 {object} productsResponse
// @Router /api/v1/products [post]
func (h *Handler) Save(w http.ResponseWriter, r *http.Request) {
    req := &productsCreateRequest{}
    if err := httputils.ReadJSON(r, req); err != nil {
        httputils.WriteJSON(w, http.StatusBadRequest, httputils.ErrorResponse{Message: err.Error()})
        return
    }
    // validate -> map to DTO -> call service -> map to response
    resp, err := h.svc.Save(r.Context(), mapCreateRequestToCreateInputDto(req))
    ...
    httputils.WriteJSON(w, http.StatusCreated, dtoToPayload(resp))
}
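The httputils package here is a small helper layer; ReadJSON and WriteJSON are, in spirit, thin wrappers around encoding/json. An assumed shape, check the generated code for the real one:

// ReadJSON decodes the request body into dst.
func ReadJSON(r *http.Request, dst any) error {
	return json.NewDecoder(r.Body).Decode(dst)
}

// WriteJSON writes v as a JSON response with the given status code.
func WriteJSON(w http.ResponseWriter, status int, v any) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(status)
	_ = json.NewEncoder(w).Encode(v)
}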
And a basic set of routes:
POST /api/v1/products
PUT /api/v1/products/{record_id}
DELETE /api/v1/products/{record_id}
GET /api/v1/products/{record_id}
GET /api/v1/products
GET /api/v1/products/pageable
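Wiring those onto the stock library router could look like this, assuming Go 1.22+ method-and-path patterns on net/http's ServeMux; the handler method names are inferred from the example above, not guaranteed by the generator:

// Mount registers the generated routes on a standard ServeMux.
func (h *Handler) Mount(mux *http.ServeMux) {
	mux.HandleFunc("POST /api/v1/products", h.Save)
	mux.HandleFunc("PUT /api/v1/products/{record_id}", h.Update)
	mux.HandleFunc("DELETE /api/v1/products/{record_id}", h.Delete)
	mux.HandleFunc("GET /api/v1/products/{record_id}", h.Find)
	mux.HandleFunc("GET /api/v1/products", h.FindAll)
	mux.HandleFunc("GET /api/v1/products/pageable", h.FindPage)
}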
Everything is fairly simple, predictable, and clear: what goes where and why.
Why Put This in Open Source
Honestly, because it seems like this might be useful not only to me.
Many projects share the same pain: the database already exists, there are many tables, the API was needed yesterday, and for some reason the team wants to work on things that actually bring value instead of manually writing repository.go number 148.
gofromdb lets you quickly generate Go code from an existing database and get a foundation for further development.
Not instead of the developer.
Instead of the part of the developer that already runs on autopilot while sadly staring at the monitor.
You run the generator and get the templates.
Then you can write business logic, normalize the architecture, add rules, tests, validation, authorization, and everything that actually depends on the project.
Instead of sitting there and manually proving to the computer that the orders table really does need a GetOrderByID method.
What Comes Next
I see several possible directions for the project:
- add a smarter type-handling system
- add generation of tests and mocks
- rethink the overall project structure and naming rules once again
- improve the templates
- remove what is unnecessary
- add what is necessary
- start growing tomatoes
I am especially interested in feedback on the project structure. In Go, this is always a lively topic, because every developer knows the one true project structure, and each of them has a different one.
Final Words
The project is here:
https://github.com/hashmap-kz/gofromdb
It runs with a single command. It should work correctly. At least, that is how every optimistic README usually begins.
If it looks interesting, try it.
If it does not work, open an issue, and I will try to help.
If it does work, write as well. Sometimes an open-source author needs to know that their project was not only downloaded by a CI bot, but also launched by at least one living person with a pulse who did not run away to the forest.
Good coding!