Quick Links:
- Introduction
- The Problem: Too Many Entities, Too Much Repeated Logic
- The Core Idea: Describe the Resource, Don’t Hardcode It
- How the Generic GET Flow Works
- Before vs After
- Why This Architecture Is Powerful
- Conclusion
Introduction
In fast-growing systems, the simplest operation — fetching data — slowly becomes one of the hardest to maintain.
Every new entity requires its own GET endpoint, its own filters, its own search logic, and its own controller–service–repository chain. After a few months, the codebase becomes filled with duplicate logic and inconsistent behaviors across resources.
During our Infrastructure work, we faced exactly this challenge.
Our solution was to design a Generic GET Engine — a single dynamic pipeline capable of handling GET operations for any resource, without writing any new controllers or queries.
🛑 The Problem: Too Many Entities, Too Much Repeated Logic
Originally, each entity implemented GET differently:
- Events had one filtering model
- Processes had another
- Some supported search, others didn’t
- Some used pagination, others didn't
- Every new resource required writing controllers, queries, validations, and tests
This resulted in:
- ❌ Duplication
- ❌ Inconsistency
- ❌ Slow development
- ❌ Harder debugging
The system needed one unified solution.
✨ The Core Idea: Describe the Resource, Don’t Hardcode It
Instead of writing resource-specific logic, we introduced a central Resource Registry.
Each resource is defined once, using metadata that tells the system how it should behave.
Example — Registering a Resource
```typescript
registry.register({
  name: 'event',
  entity: Event,
  alias: 'e',
  searchable: ['name'],
});
```
This simple definition allows the GET engine to:
- Identify the resource dynamically
- Know which entity to query
- Know which fields support full-text search
- Build the correct SQL automatically
From this point on, the system can generate and execute queries for event without any manual code.
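For illustration, here is a minimal sketch of what such a registry could look like. The `ResourceConfig` shape and `ResourceRegistry` class below are assumptions made for this example, not our exact implementation:

```typescript
// Illustrative sketch: field names mirror the registration example above.
interface ResourceConfig {
  name: string;         // resource name used in the URL, e.g. 'event'
  entity: Function;     // the ORM entity class to query
  alias: string;        // alias used when building SQL
  searchable: string[]; // columns that support full-text search
}

class ResourceRegistry {
  private configs = new Map<string, ResourceConfig>();

  register(config: ResourceConfig): void {
    this.configs.set(config.name, config);
  }

  get(name: string): ResourceConfig {
    const config = this.configs.get(name);
    if (!config) throw new Error(`Unknown resource: ${name}`);
    return config;
  }
}

const registry = new ResourceRegistry();
```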
⚙️ How the Generic GET Flow Works
1. Request
Client sends:
```
GET /api/event?search=meeting&sort=createdAt:desc&limit=20&page=1
```

The engine parses parameters like:

- `search`
- `filter`
- `sort`
- pagination (`page`, `limit`)
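As a rough sketch, this parsing step can normalize the raw query string into a typed options object. The `QueryOptions` shape and `parseQuery` helper below are illustrative assumptions, not our exact code:

```typescript
// Hypothetical shape for the parsed options; names here are assumptions.
interface QueryOptions {
  search?: string;
  filters: Record<string, string>;
  sort?: { field: string; direction: 'ASC' | 'DESC' };
  page: number;
  limit: number;
}

function parseQuery(query: Record<string, string>): QueryOptions {
  const { search, sort, page, limit, ...filters } = query;
  const [sortField, sortDirection] = (sort ?? '').split(':');

  return {
    search,
    filters, // any remaining query params are treated as filters
    sort: sortField
      ? {
          field: sortField,
          direction: sortDirection?.toUpperCase() === 'DESC' ? 'DESC' : 'ASC',
        }
      : undefined,
    page: Math.max(Number(page) || 1, 1),
    limit: Math.min(Number(limit) || 20, 100), // defensive cap on page size
  };
}
```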
2. Resolve Resource
The system identifies the resource from the URL and loads its config:
```typescript
const config = registry.get(resourceName);
return baseRepository.findAll(config, queryOptions);
```
This dynamic lookup is the heart of the engine — nothing is hardcoded anymore.
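To make the lookup concrete, here is a sketch of a single generic route handler. It is written Express-style purely for illustration; the actual framework, and the exact `baseRepository.findAll` signature, may differ:

```typescript
import express from 'express';

const app = express();

// One generic route serves GET for every registered resource.
// ':resource' matches the name passed to registry.register({ name: ... }).
app.get('/api/:resource', async (req, res) => {
  const config = registry.get(req.params.resource); // dynamic lookup
  // Simplified cast for the sketch; real code should validate query params.
  const queryOptions = parseQuery(req.query as unknown as Record<string, string>);

  // baseRepository is the shared Base Repository described in the Execute step.
  const result = await baseRepository.findAll(config, queryOptions);
  res.json(result);
});
```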
3. Build Query
The Query Builder constructs SQL based on the registry + incoming options.
Example snippet:
```typescript
if (search) {
  qb.where(`similarity(${alias}.name, :search) > 0.2`, { search });
}
```
The engine can generate:
- Full-text search (e.g., Trigram similarity)
- Filters (ranges, equals, lists)
- Sorting
- Pagination
- Field selection
All automatically.
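A condensed sketch of how this could look with a TypeORM-style query builder, reusing the `ResourceConfig` and `QueryOptions` shapes from the sketches above (in real code, filter and sort field names would need to be whitelisted before being interpolated into SQL):

```typescript
import { DataSource, SelectQueryBuilder } from 'typeorm';

// Illustrative only: builds a query from the registry config plus parsed options.
function buildQuery(
  dataSource: DataSource,
  config: ResourceConfig,
  options: QueryOptions,
): SelectQueryBuilder<any> {
  const qb = dataSource
    .getRepository(config.entity)
    .createQueryBuilder(config.alias);

  // Trigram similarity search across the registered searchable columns.
  if (options.search) {
    const searchClause = config.searchable
      .map((field) => `similarity(${config.alias}.${field}, :search) > 0.2`)
      .join(' OR ');
    qb.andWhere(`(${searchClause})`, { search: options.search });
  }

  // Simple equality filters (range and list operators omitted for brevity).
  for (const [field, value] of Object.entries(options.filters)) {
    qb.andWhere(`${config.alias}.${field} = :${field}`, { [field]: value });
  }

  // Sorting and pagination.
  if (options.sort) {
    qb.orderBy(`${config.alias}.${options.sort.field}`, options.sort.direction);
  }
  qb.skip((options.page - 1) * options.limit).take(options.limit);

  return qb;
}
```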
4. Execute
The Base Repository executes the query and returns a uniform structured response.
```typescript
return qb.getMany();
```
This completes the flow — without writing any entity-specific logic.
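As a final sketch, the execute step can also return a total count so that every resource responds with the same envelope. The `meta` shape below is an assumption rather than our exact response format:

```typescript
import { SelectQueryBuilder } from 'typeorm';

// Runs the built query and wraps the rows in a uniform response envelope.
async function execute(qb: SelectQueryBuilder<any>, options: QueryOptions) {
  const [data, total] = await qb.getManyAndCount();

  return {
    data,
    meta: {
      page: options.page,
      limit: options.limit,
      total,
      pages: Math.ceil(total / options.limit),
    },
  };
}
```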
📊 Before vs After
| Feature | Before (Manual Logic) | After (Generic GET Engine) |
|---|---|---|
| GET Logic | Each entity had its own GET logic | One generic GET engine for all resources |
| Consistency | Queries and search were inconsistent | Unified search, filtering, sorting, pagination |
| New Resource | Required manual coding of controllers and queries | Requires only `registry.register({ ... });` |
| Duplication | High | Zero duplication |
Why This Architecture Is Powerful
- ✔️ Saves Time: Dozens of repetitive controller/service/query files are eliminated.
- ✔️ Scales Easily: Adding new entities is trivial.
- ✔️ Consistency Across the System: Every endpoint shares the same search, sorting, filtering, and pagination logic.
- ✔️ Extendable: Add a feature once → all resources get it.
- ✔️ Cleaner Codebase: Infrastructure is centralized, not scattered.
💡 Conclusion
The Generic GET Engine transformed our system from a patchwork of duplicated logic into a clean, extensible, and scalable architecture.
By shifting from "write GET logic per entity" to "describe the entity once", we created a foundation that supports rapid development and long-term maintainability.
This approach is ideal for any team working with large domain models or dynamic resource sets. Instead of writing endless boilerplate code, you build one smart engine.
