APIs are the backbone of modern software. They power every mobile app you use, every SaaS dashboard you log into, and every integration between systems that makes the modern web work. Yet despite their ubiquity, most APIs are designed poorly. They leak database schema into response payloads, return cryptic error codes, break backward compatibility without warning, and crumble under load.
I know this because I have spent the past nine years building, consuming, and debugging APIs across seven companies, multiple industries, and technology stacks spanning PHP (Laravel, Symfony) and JavaScript (Node.js, Express, NestJS). I have built APIs for health-tech platforms at 54gene, financial infrastructure at 2am Tech, real-time gaming at Lordwin Group, on-demand delivery at Viaduct, education technology at Univelcity, and enterprise SaaS at VacancySoft. Collectively, these systems have handled millions of requests and served tens of thousands of users.
Through that experience, I have distilled a set of principles that separate well-designed APIs from fragile, frustrating ones. These are not theoretical musings — they are battle-tested patterns drawn from production systems that process 50,000+ daily requests at VacancySoft and have served over 8,000 users at Univelcity with 20,000+ requests per day.
This article is a deep dive into those principles.
Principle 1: Design for the Consumer, Not the Database
The most common API design mistake I encounter is what I call "schema leaking" — when an API's resource structure mirrors the underlying database tables rather than reflecting how consumers actually use the data.
Early in my career at Univelcity, I fell into this trap. We had a users table, a courses table, and an enrollments join table. The first version of our API had three corresponding endpoints that returned raw table structures. Clients had to make three separate requests and stitch data together themselves.
The fix was to think in terms of resources, not tables. A consumer doesn't care about your join table. They care about "a student's enrolled courses" or "a course's enrolled students." The API should model those concepts directly:
```
// Bad: Database-centric design
GET /users/123
GET /enrollments?user_id=123
GET /courses/456

// Good: Consumer-centric design
GET /students/123/courses
GET /courses/456/students
```
At VacancySoft, I apply this principle rigorously. Our APIs expose business concepts — vacancies, organisations, sectors, market trends — not the complex graph of PostgreSQL tables underneath. When we introduced a "market snapshot" feature, rather than forcing clients to aggregate data from four different endpoints, I designed a single /market-snapshots resource that returned a pre-composed view of the data clients needed. This reduced client-side complexity dramatically and cut the average number of API calls per page load from seven to two.
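For illustration, a pre-composed snapshot response might look something like this. The field names here are hypothetical, not VacancySoft's actual schema; the point is that the server does the aggregation once, so every client does not have to:

```json
{
  "data": {
    "sector": "technology",
    "region": "london",
    "vacancy_count": 12450,
    "week_over_week_change": 0.031,
    "top_hiring_organisations": [
      { "name": "Example Corp", "open_vacancies": 214 }
    ],
    "generated_at": "2024-06-01T00:00:00Z"
  }
}
```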
The key insight: your database schema is an implementation detail. Your API contract is a product. Treat it like one.
Principle 2: Versioning Strategies That Don't Break Clients
API versioning is one of those topics where everyone has an opinion and nobody agrees. I have used three approaches across my career: URI versioning, header versioning, and query parameter versioning. Here is what I have learned.
URI versioning (/api/v1/vacancies) is what I use in most production systems and what we use at VacancySoft. It is explicit, easy to understand, easy to route, and easy to document. The tradeoff is that it can lead to code duplication if you are not careful about abstracting shared logic between versions.
Header versioning (Accept: application/vnd.api+json; version=2) is cleaner in theory but harder to test, harder to debug with basic tools like curl, and harder for less experienced developers to consume.
Query parameter versioning (/api/vacancies?version=2) is the worst of both worlds and I do not recommend it.
My approach at VacancySoft:
```javascript
// Express router with versioned routes
const v1Router = express.Router();
const v2Router = express.Router();
app.use('/api/v1', v1Router);
app.use('/api/v2', v2Router);

// Shared business logic lives in services, not controllers
// Controllers are thin version-specific adapters
v1Router.get('/vacancies', async (req, res) => {
  const data = await vacancyService.search(req.query);
  res.json(transformV1(data));
});

v2Router.get('/vacancies', async (req, res) => {
  const data = await vacancyService.search(req.query);
  res.json(transformV2(data));
});
```
The critical rule I follow: never remove a version without a deprecation period and proactive communication to consumers. At VacancySoft, when we deprecated v1 of our vacancy search endpoint, I implemented a deprecation header (Sunset: Mon, 01 Jul 2024 00:00:00 GMT) six months in advance and added logging to track which clients were still using v1. This gave us data-driven confidence to retire the old version without breaking anyone.
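A minimal sketch of that deprecation signalling, written as an Express-style middleware. The header names come from RFC 8594 (Sunset) and the IETF Deprecation header draft; the docs URL and function name are illustrative, not the actual VacancySoft implementation:

```javascript
// Sketch: middleware that advertises an upcoming sunset date for a
// deprecated API version, so clients get machine-readable warning on
// every response while server logs track who still calls the old routes.
function deprecationNotice(sunsetDate, docsUrl) {
  return (req, res, next) => {
    // RFC 8594 Sunset header: an HTTP-date after which the endpoint goes away
    res.set('Sunset', sunsetDate.toUTCString());
    res.set('Deprecation', 'true');
    // Point clients at the migration guide
    res.set('Link', `<${docsUrl}>; rel="sunset"`);
    next();
  };
}

// Usage (illustrative):
// app.use('/api/v1', deprecationNotice(
//   new Date(Date.UTC(2024, 6, 1)),
//   'https://example.com/docs/migrate-to-v2'
// ));
```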
Principle 3: Pagination, Filtering, and Sorting at Scale
Pagination seems simple until you are dealing with datasets of hundreds of thousands of records that change frequently. I have implemented both offset-based and cursor-based pagination in production, and the difference matters enormously at scale.
Offset pagination (?page=5&per_page=20) is intuitive but fundamentally broken for large datasets. The database still has to scan all rows up to the offset, meaning page 500 is orders of magnitude slower than page 1. It also produces inconsistent results when records are inserted or deleted between requests.
At VacancySoft, where our vacancy dataset exceeds hundreds of thousands of records and is updated continuously, I implemented cursor-based pagination:
```javascript
// Cursor-based pagination implementation
async function getVacancies(cursor, limit = 20) {
  const query = knex('vacancies')
    .orderBy('created_at', 'desc')
    .orderBy('id', 'desc')
    .limit(limit + 1);

  if (cursor) {
    const { created_at, id } = decodeCursor(cursor);
    query.where(function () {
      this.where('created_at', '<', created_at)
        .orWhere(function () {
          this.where('created_at', '=', created_at)
            .andWhere('id', '<', id);
        });
    });
  }

  const results = await query;
  const hasMore = results.length > limit;
  const items = hasMore ? results.slice(0, -1) : results;

  return {
    data: items,
    pagination: {
      has_more: hasMore,
      next_cursor: hasMore
        ? encodeCursor(items[items.length - 1])
        : null,
    },
  };
}

function encodeCursor(record) {
  return Buffer.from(
    JSON.stringify({
      created_at: record.created_at,
      id: record.id,
    })
  ).toString('base64');
}

// Counterpart to encodeCursor: unpack the opaque cursor back into its fields
function decodeCursor(cursor) {
  return JSON.parse(Buffer.from(cursor, 'base64').toString('utf8'));
}
```
This approach uses a composite cursor of (created_at, id) to ensure stable ordering even with duplicate timestamps. It performs consistently regardless of how deep into the dataset a client paginates, because the database uses an index seek rather than a scan.
For filtering, I standardise on a predictable pattern:
```
GET /api/v2/vacancies?sector=technology&region=london&salary_min=50000&sort=-created_at
```
The - prefix for descending sort is a convention I adopted from JSON:API and have used consistently across every API I have built since 2020. It is intuitive and self-documenting.
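As a sketch, that convention can be turned into safe ORDER BY inputs in a few lines. The allow-list guards against clients sorting on arbitrary columns; the function name and column names here are illustrative:

```javascript
// Sketch: parse a JSON:API-style sort parameter such as "-created_at,title"
// into { column, direction } pairs, rejecting columns not on an allow-list.
function parseSort(sortParam, allowedColumns) {
  return sortParam
    .split(',')
    .map((field) => field.trim())
    .filter(Boolean)
    .map((field) => {
      const descending = field.startsWith('-');
      const column = descending ? field.slice(1) : field;
      if (!allowedColumns.includes(column)) {
        throw new Error(`Unsupported sort field: ${column}`);
      }
      return { column, direction: descending ? 'desc' : 'asc' };
    });
}
```

The output can be fed straight into a query builder's orderBy, and the allow-list doubles as documentation of which sorts the API supports.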
Principle 4: Error Handling That Helps Developers
Nothing frustrates an API consumer more than unhelpful errors. I have received 500 Internal Server Error with an empty body from third-party APIs, and I have vowed never to do that to my consumers.
Every API I build follows a consistent error schema:
```json
{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "The request body contains invalid fields.",
    "details": [
      {
        "field": "email",
        "message": "Must be a valid email address.",
        "received": "not-an-email"
      },
      {
        "field": "salary_min",
        "message": "Must be a positive integer.",
        "received": -500
      }
    ],
    "request_id": "req_8f3a2b1c",
    "documentation_url": "https://api.example.com/docs/errors#VALIDATION_ERROR"
  }
}
```
Key principles:
- Machine-readable error codes (VALIDATION_ERROR, not just HTTP status codes) so clients can programmatically handle different error types.
- Human-readable messages so developers can understand what went wrong without consulting documentation.
- Field-level detail for validation errors so frontend developers know exactly which fields to highlight.
- Request IDs so that when a consumer reports an issue, I can trace it through our logging infrastructure.
- Documentation URLs that link directly to the relevant error documentation.
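From the consumer's side, machine-readable codes are what make programmatic handling possible. A hypothetical client-side sketch (the action names are illustrative):

```javascript
// Sketch: a client deciding how to react based on the machine-readable
// error code rather than string-matching human-readable messages.
function handleApiError(body) {
  const code = body && body.error && body.error.code;
  switch (code) {
    case 'VALIDATION_ERROR':
      // Surface field-level details to the form layer
      return { action: 'show_field_errors', fields: body.error.details };
    case 'RATE_LIMIT_EXCEEDED':
      return { action: 'retry_later' };
    case 'FORBIDDEN':
      return { action: 'show_permission_message' };
    default:
      // Unknown codes: keep the request_id for the support ticket
      return {
        action: 'show_generic_error',
        requestId: body && body.error && body.error.request_id,
      };
  }
}
```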
At VacancySoft, I implemented a centralised error handling middleware that ensures every error — whether it is a validation failure, an authorization error, or an unexpected exception — passes through the same formatting pipeline:
```javascript
// Centralised error handler middleware
function errorHandler(err, req, res, next) {
  const requestId = req.headers['x-request-id'] || generateRequestId();

  if (err instanceof ValidationError) {
    return res.status(422).json({
      error: {
        code: 'VALIDATION_ERROR',
        message: err.message,
        details: err.details,
        request_id: requestId,
      },
    });
  }

  if (err instanceof AuthorizationError) {
    return res.status(403).json({
      error: {
        code: 'FORBIDDEN',
        message: 'You do not have permission to access this resource.',
        request_id: requestId,
      },
    });
  }

  // Unexpected errors: log full stack, return safe message
  logger.error({
    request_id: requestId,
    error: err.message,
    stack: err.stack,
    path: req.path,
    method: req.method,
  });

  return res.status(500).json({
    error: {
      code: 'INTERNAL_ERROR',
      message: 'An unexpected error occurred. Please try again.',
      request_id: requestId,
    },
  });
}
```
This approach ensures that internal details (stack traces, database errors) never leak to consumers, while still providing enough information for debugging.
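The ValidationError and AuthorizationError classes the middleware branches on are not shown above; a minimal sketch of how such classes might look (the names match the middleware, the constructor shapes are my assumption):

```javascript
// Sketch: typed error classes that services throw and only the
// centralised error middleware knows how to format.
class ValidationError extends Error {
  constructor(message, details = []) {
    super(message);
    this.name = 'ValidationError';
    // Field-level info, e.g. { field, message, received }
    this.details = details;
  }
}

class AuthorizationError extends Error {
  constructor(message = 'Forbidden') {
    super(message);
    this.name = 'AuthorizationError';
  }
}

// e.g. throw new ValidationError('The request body contains invalid fields.', [
//   { field: 'email', message: 'Must be a valid email address.' },
// ]);
```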
Principle 5: Rate Limiting and Throttling for Protection
Any API exposed to the internet will eventually be abused — whether by a misconfigured client making thousands of requests in a loop, a scraper, or a deliberate attack. Rate limiting is not optional.
I implement rate limiting at multiple layers:
Application-level rate limiting using Redis-backed token bucket or sliding window algorithms:
```javascript
const rateLimit = require('express-rate-limit');
const RedisStore = require('rate-limit-redis');
const Redis = require('ioredis');

const redisClient = new Redis(process.env.REDIS_URL);

const apiLimiter = rateLimit({
  store: new RedisStore({
    client: redisClient,
    prefix: 'rl:',
  }),
  windowMs: 60 * 1000, // 1 minute
  max: 100, // 100 requests per minute
  standardHeaders: true,
  legacyHeaders: false,
  handler: (req, res) => {
    res.status(429).json({
      error: {
        code: 'RATE_LIMIT_EXCEEDED',
        message: 'Too many requests. Please retry after the window resets.',
        retry_after: res.getHeader('Retry-After'),
      },
    });
  },
});

// Stricter limits for authentication endpoints
const authLimiter = rateLimit({
  store: new RedisStore({ client: redisClient, prefix: 'rl:auth:' }),
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 10, // 10 attempts per 15 minutes
  standardHeaders: true,
});

app.use('/api/', apiLimiter);
app.use('/api/auth/', authLimiter);
```
Infrastructure-level rate limiting via Nginx or a CDN as the first line of defence. At VacancySoft, we use Nginx rate limiting in front of our Node.js services to catch abusive traffic before it reaches the application layer. This dual-layer approach proved critical during a traffic spike when a partner's integration went haywire and started hitting our API at 10x the normal rate. Nginx absorbed the burst, and the application-level limiter gracefully degraded the remaining overflow with proper 429 responses.
I always include RateLimit-* headers in responses so well-behaved clients can self-throttle:
```
RateLimit-Limit: 100
RateLimit-Remaining: 67
RateLimit-Reset: 1719849600
```
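A well-behaved client can turn those headers into a wait time before its next request. A sketch, assuming header names normalised to lowercase (as Node's http module does) and treating Reset as a Unix timestamp, matching the example above:

```javascript
// Sketch: compute how long a client should wait before its next request,
// based on the RateLimit-* response headers.
function throttleDelayMs(headers, nowSeconds = Math.floor(Date.now() / 1000)) {
  const remaining = Number(headers['ratelimit-remaining']);
  const reset = Number(headers['ratelimit-reset']);
  if (Number.isNaN(remaining) || remaining > 0) {
    // Budget left (or no rate-limit headers at all): proceed immediately
    return 0;
  }
  // Budget exhausted: wait until the window resets
  return Math.max(0, (reset - nowSeconds) * 1000);
}
```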
Principle 6: Authentication and Authorization Patterns
Authentication (who are you?) and authorization (what can you do?) are distinct concerns that I see conflated constantly. Over my career I have implemented multiple patterns, and the right choice depends heavily on context.
For service-to-service communication within a backend, I use short-lived JWT tokens with asymmetric key signing (RS256). The issuing service signs with a private key, and consuming services verify with the public key without needing to call back to the auth service:
```javascript
const jwt = require('jsonwebtoken');

// Token generation (auth service)
// privateKey is loaded at startup, e.g. from a file or secrets manager
function generateAccessToken(user) {
  return jwt.sign(
    {
      sub: user.id,
      email: user.email,
      roles: user.roles,
      permissions: user.permissions,
    },
    privateKey,
    {
      algorithm: 'RS256',
      expiresIn: '15m',
      issuer: 'auth.vacancysoft.com',
    }
  );
}

// Middleware for route-level authorization
function requirePermission(permission) {
  return (req, res, next) => {
    if (!req.user || !req.user.permissions.includes(permission)) {
      return res.status(403).json({
        error: {
          code: 'FORBIDDEN',
          message: `Required permission: ${permission}`,
        },
      });
    }
    next();
  };
}

// Usage
router.delete(
  '/vacancies/:id',
  authenticate,
  requirePermission('vacancies:delete'),
  vacancyController.destroy
);
```
At VacancySoft, I implemented a role-based access control (RBAC) system with granular permissions. Rather than checking if (user.role === 'admin') throughout the codebase, permissions are encoded in the JWT and checked via middleware. This made it trivial to add new roles — when the product team introduced a "regional analyst" role, I added the appropriate permissions without touching a single controller.
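One way to keep role changes controller-free is a single role-to-permission map, consulted once at token-issue time so that middleware only ever checks permissions. A sketch with an illustrative map (not the actual VacancySoft roles):

```javascript
// Sketch: expand roles into a flat, de-duplicated permission list.
// Adding a new role means adding one entry here, nothing else.
const ROLE_PERMISSIONS = {
  admin: ['vacancies:read', 'vacancies:write', 'vacancies:delete'],
  'regional-analyst': ['vacancies:read', 'reports:read'],
};

function permissionsForRoles(roles) {
  return [...new Set(roles.flatMap((role) => ROLE_PERMISSIONS[role] || []))];
}
```

The expanded list is what gets embedded in the JWT's permissions claim, so authorization checks stay stateless.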
For external-facing APIs, I use API keys for identification combined with OAuth 2.0 for delegated access when third-party integrations need to act on behalf of users.
Principle 7: Documentation as a First-Class Citizen
I have a rule: if an API endpoint ships without documentation, it does not ship. Documentation is not an afterthought — it is a deliverable.
I use the OpenAPI 3.0 specification (Swagger) for every API I build, and I generate the spec from code annotations rather than maintaining a separate document that inevitably drifts:
```javascript
/**
 * @openapi
 * /api/v2/vacancies:
 *   get:
 *     summary: Search vacancies
 *     description: >
 *       Returns a paginated, filtered list of vacancies.
 *       Supports cursor-based pagination for stable results.
 *     tags: [Vacancies]
 *     parameters:
 *       - in: query
 *         name: sector
 *         schema:
 *           type: string
 *         description: Filter by sector slug
 *       - in: query
 *         name: cursor
 *         schema:
 *           type: string
 *         description: Pagination cursor from previous response
 *     responses:
 *       200:
 *         description: Paginated list of vacancies
 *       429:
 *         description: Rate limit exceeded
 *     x-notes: see components/schemas/VacancyListResponse for the 200 body
 */
router.get('/vacancies', vacancyController.search);
```
At VacancySoft, our Swagger documentation is auto-generated from these annotations and published to an internal documentation portal. Every endpoint includes request/response examples, error codes, and authentication requirements. When new team members onboard, they can explore and test every endpoint interactively without reading a single line of source code.
I also generate Postman collections from the OpenAPI spec, which our QA team and integration partners use for testing. This single-source-of-truth approach eliminates the documentation drift that plagues most API teams.
The Results
These principles are not academic exercises. Applied consistently at VacancySoft, they have produced measurable outcomes:
| Metric | Result |
|---|---|
| Daily request volume | 50,000+ |
| API uptime | 99.8%+ over the last 12 months |
| Average response time (p50) | 45ms |
| Average response time (p95) | 180ms |
| Client integration time | Reduced from weeks to days |
| Breaking changes in production | Zero in the last 18 months |
| Support tickets related to API confusion | Down 70% after error schema standardisation |
My approach to API design has been shaped by one core belief: an API is a product, and its consumers are your users. Every decision — from resource naming to error formatting to pagination strategy — should be made with the consumer's experience in mind.
Key Takeaways
- Model resources around consumer use cases, not database tables.
- Use URI versioning with a clear deprecation policy and sunset headers.
- Implement cursor-based pagination for any dataset that changes frequently or exceeds a few thousand records.
- Standardise error responses with machine-readable codes, human-readable messages, and request IDs.
- Rate limit at multiple layers — infrastructure and application — and communicate limits via headers.
- Separate authentication from authorization and encode permissions in tokens for stateless checks.
- Treat documentation as a deliverable, not an afterthought. Generate it from code.
These principles have served me across every company and technology stack I have worked with. Whether you are building a Laravel API in PHP or an Express API in Node.js, the fundamentals remain the same: design for clarity, build for resilience, and always, always think about the person on the other side of the request.