- Which routes are slow?
- How many DB queries is this endpoint firing?
- What's the error rate on /orders vs /users?
- Can I trace a request across multiple services?
And every time, the answer was either "set up OpenTelemetry" (a multi-day project) or "write custom middleware again" (a boring wheel re-invention).
So I stopped doing either and built auto-api-observe: a zero-dependency, TypeScript-first observability package for Express and Fastify that takes one line to set up.
What You Get - Immediately
Install it:
npm install auto-api-observe
Drop it into Express:
const express = require('express');
const observability = require('auto-api-observe');
const app = express();
app.use(observability()); // ← that's it
app.listen(3000);
Or Fastify:
const Fastify = require('fastify');
const { fastifyObservability } = require('auto-api-observe');
const fastify = Fastify();
await fastify.register(fastifyObservability); // ← that's it
await fastify.listen({ port: 3000 });
From that moment, every single request produces a structured JSON log:
{
  "timestamp": "2026-03-05T12:00:00.000Z",
  "traceId": "a1b2c3d4-...",
  "method": "GET",
  "route": "/users",
  "path": "/users/42",
  "status": 200,
  "latency": 120,
  "latencyMs": "120ms",
  "dbCalls": 0,
  "slow": false,
  "ip": "127.0.0.1"
}
No config. No agents. No external services.
The Feature I'm Most Proud Of: DB Call Tracking via AsyncLocalStorage
Most logging middleware tells you how long a request took. But it doesn't tell you why it was slow.
Was it 400ms because of business logic? Or because your ORM silently fired 12 queries?
I built trackDbCall() to answer exactly that. Call it anywhere in your async chain:
const { trackDbCall } = require('auto-api-observe');

async function getUser(id) {
  trackDbCall(); // no need to pass request context
  return db.query('SELECT * FROM users WHERE id = ?', [id]);
}
It uses AsyncLocalStorage under the hood, so it automatically knows which request it belongs to, no matter how deeply nested in your call stack. The dbCalls count shows up right in the log entry:
{
  "route": "/users",
  "latencyMs": "340ms",
  "dbCalls": 7,
  "status": 200
}
340ms with 7 dbCalls: now you immediately know where to look. No guessing, no profiler needed.
You can also count multiple calls at once: trackDbCall(3).
Distributed Tracing - Across Services, Zero Config
Every request gets an auto-generated UUID traceId. But here's the part that makes it useful in microservices:
If an upstream service passes the x-trace-id header, the same ID is reused. And the same header is set on the response, so downstream services can propagate it too.
→ Service A (generates traceId: abc-123, sets x-trace-id: abc-123)
→ Service B (reads x-trace-id: abc-123, logs with same ID)
→ Service C (same — full trace across all services)
You can also read the trace ID directly in your handlers:
app.get('/items', (req, res) => {
  console.log(req.traceId); // 'abc-123'
  res.json({});
});
This means you can grep your logs by traceId and reconstruct the full journey of any request across your entire system, with no OpenTelemetry collector required.
Built-in Metrics Endpoint
No Prometheus. No Grafana. Just call getMetrics():
const { getMetrics } = require('auto-api-observe');
app.get('/metrics', (req, res) => {
  res.json(getMetrics());
});
You get per-route breakdowns with avg/min/max latency, error counts, slow request counts, and status code distributions:
{
  "totalRequests": 142,
  "successRequests": 135,
  "clientErrorRequests": 3,
  "errorRequests": 4,
  "slowRequests": 2,
  "uptime": 3600,
  "routes": {
    "GET /users": {
      "count": 80,
      "avgLatency": 95,
      "minLatency": 12,
      "maxLatency": 1400,
      "errors": 0,
      "slowCount": 1,
      "statusCodes": { "200": 80 }
    }
  }
}
For internal dashboards, health checks, or a quick production sanity check, this is everything you need without spinning up an external metrics service.
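For intuition, per-route stats like those above can be aggregated in memory with a small accumulator. This is one plausible way to do it, not the package's actual implementation; here "errors" counts 5xx responses, which is an assumption on my part.

```javascript
// Illustrative in-memory per-route aggregator.
const routes = new Map();

function record(route, status, latency) {
  const s = routes.get(route) || {
    count: 0, totalLatency: 0, minLatency: Infinity, maxLatency: 0,
    errors: 0, statusCodes: {},
  };
  s.count += 1;
  s.totalLatency += latency;
  s.minLatency = Math.min(s.minLatency, latency);
  s.maxLatency = Math.max(s.maxLatency, latency);
  if (status >= 500) s.errors += 1; // assumption: errors = 5xx
  s.statusCodes[status] = (s.statusCodes[status] || 0) + 1;
  routes.set(route, s);
}

function snapshot(route) {
  const s = routes.get(route);
  return { ...s, avgLatency: Math.round(s.totalLatency / s.count) };
}
```

Because everything lives in one process-local map, reading metrics is just a JSON serialization away, which is what makes a dependency-free `/metrics` endpoint possible.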
Custom Fields - Attach Anything to Any Log Entry
Need to log the current user ID, a tenant, a feature flag? addField() merges any data into the current request's log entry:
const { addField } = require('auto-api-observe');
app.get('/orders', async (req, res) => {
  const orders = await getOrders(req.user.id);
  addField('userId', req.user.id);
  addField('orderCount', orders.length);
  res.json(orders);
});
Like trackDbCall(), it uses AsyncLocalStorage, so you can call it from anywhere, even deep inside a service layer, and it'll attach to the right request.
Plug Into Your Existing Log Stack
If you're already using Winston, Pino, or a cloud logging service, you don't have to give that up. Just pass a custom logger:
app.use(observability({
  logger: (entry) => {
    winston.info('api', entry);          // Winston
    pino.info(entry);                    // Pino
    datadogLogs.logger.info('', entry);  // Datadog
  },
}));
Or silence console output entirely while keeping metrics collection:
app.use(observability({ logger: false, enableMetrics: true }));
Other options worth knowing:
app.use(observability({
  slowThreshold: 500, // flag anything over 500ms as slow
  skipRoutes: ['/health', /^\/internal/], // exclude health checks from logs
  onResponse: (entry) => {
    if (entry.slow) alertTeam(entry); // custom alerting hook
  },
}));
TypeScript - First Class
Full types ship out of the box:
import observability, {
  ObservabilityOptions,
  LogEntry,
  Metrics,
} from 'auto-api-observe';

const options: ObservabilityOptions = {
  slowThreshold: 500,
  onResponse: (entry: LogEntry) => {
    if (entry.slow) alertTeam(entry);
  },
};

app.use(observability(options));
Every built-in field of LogEntry is fully typed, and your custom fields are covered by the index signature [key: string]: unknown.
Zero Runtime Dependencies
The entire package runs on pure Node.js: no lodash, no uuid package, and no required express peer dependency. The AsyncLocalStorage-based context tracking is built on Node's native async_hooks module.
Why Not OpenTelemetry?
OTel is the right choice for large-scale distributed systems with dedicated DevOps. It's powerful.
But most Node.js developers need structured logs and per-route metrics today, not after configuring a collector pipeline, an exporter, and a dashboard.
auto-api-observe is for that situation. Start in five minutes, get real observability in production, and migrate to a full OTel stack later when you actually need it. The two can even coexist: use onResponse to forward entries to an OTel-compatible exporter.
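As a sketch of that bridge, a small adapter can map each finished request's log entry to a span-like record for whatever exporter you use. The output field names here are illustrative, not any particular exporter's schema.

```javascript
// Hypothetical adapter: turn a LogEntry into a span-like record you
// could hand to an OTel-style exporter from the onResponse hook.
function entryToSpan(entry) {
  return {
    traceId: entry.traceId,
    name: `${entry.method} ${entry.route}`,
    durationMs: entry.latency,
    attributes: { 'http.status_code': entry.status },
  };
}
```

You would then wire it up roughly as `onResponse: (entry) => exporter.send(entryToSpan(entry))`, where `exporter` is whatever client you already have.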
Install & Links
npm install auto-api-observe
- 📦 npm: npmjs.com/package/auto-api-observe
- ⭐ GitHub: github.com/rahhuul/auto-api-observe
What's Coming Next
- [ ] Configurable slow threshold per-route
- [ ] Winston / Pino transport presets
- [ ] OpenTelemetry export compatibility layer
- [ ] Request body/response body sampling (opt-in)
If there's something you'd want, open an issue; I read everything.
If this saves you even 20 minutes of setup time, a ⭐ on GitHub goes a long way for an open source project. And drop a comment if you run into anything; happy to help.