How to Implement API Observability with OpenTelemetry in Node.js (2026 Guide)
As of February 2026, building APIs without observability is like flying blind. You might have the fastest response times, but if you can't see what's happening inside your system, you're just guessing. OpenTelemetry has become the industry standard for API observability, and in this guide, I'll show you how to implement it in Node.js from scratch.
What is API Observability?
Observability goes beyond traditional monitoring. While monitoring tells you that something went wrong, observability tells you why. It consists of three pillars:
- Metrics — Quantitative measurements (response times, error rates, throughput)
- Logs — Discrete events with timestamps
- Traces — Request paths across distributed systems
OpenTelemetry (OTel) provides a unified framework for collecting all three, without locking you into a specific vendor.
Setting Up Your Node.js Project
First, create a new Node.js project and install the required packages:
mkdir otel-api-demo && cd otel-api-demo
npm init -y
npm install express @opentelemetry/api @opentelemetry/sdk-node @opentelemetry/sdk-metrics @opentelemetry/auto-instrumentations-node @opentelemetry/exporter-trace-otlp-grpc @opentelemetry/exporter-metrics-otlp-grpc @opentelemetry/resources @opentelemetry/semantic-conventions
Note that the OTLP exporter packages are published per transport: the -grpc variants used here send to an OTLP/gRPC endpoint (port 4317 by default), while the -http variants target OTLP/HTTP on port 4318.
Creating the Observability Setup
Create a file called instrumentation.js that sets up all the OpenTelemetry components:
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-grpc');
const { OTLPMetricExporter } = require('@opentelemetry/exporter-metrics-otlp-grpc');
const { PeriodicExportingMetricReader } = require('@opentelemetry/sdk-metrics');
const { Resource } = require('@opentelemetry/resources');
const { ATTR_SERVICE_NAME, ATTR_SERVICE_VERSION } = require('@opentelemetry/semantic-conventions');
const sdk = new NodeSDK({
  resource: new Resource({
    [ATTR_SERVICE_NAME]: 'my-api-service',
    [ATTR_SERVICE_VERSION]: '1.0.0',
  }),
  traceExporter: new OTLPTraceExporter(),
  // NodeSDK expects a metric reader wrapping the exporter, not a bare exporter
  metricReader: new PeriodicExportingMetricReader({ exporter: new OTLPMetricExporter() }),
  instrumentations: [getNodeAutoInstrumentations()],
});
sdk.start();
console.log('OpenTelemetry SDK started');
process.on('SIGTERM', () => {
  // Flush pending telemetry, then exit — registering the handler
  // otherwise keeps the process alive after SIGTERM
  sdk.shutdown()
    .catch(console.error)
    .finally(() => process.exit(0));
});
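One detail worth calling out: the SDK has to load before any module it needs to patch, which is why api.js below requires instrumentation.js on its very first line. An alternative that keeps application code untouched is Node's --require preload flag:

```shell
# Preload the SDK so auto-instrumentation patches http/express
# before the application code requires them
node --require ./instrumentation.js api.js
```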
Building Your API with Observability
Now let's create an Express API that automatically inherits all the observability features:
// api.js
require('./instrumentation');
const express = require('express');
const { trace, context, SpanStatusCode } = require('@opentelemetry/api');
const app = express();
const tracer = trace.getTracer('api-tracer');
app.use(express.json());
// Simulated in-memory database
const users = [
  { id: 1, name: 'Alice', email: 'alice@example.com' },
  { id: 2, name: 'Bob', email: 'bob@example.com' },
];
// GET /users - List all users with custom tracing
app.get('/users', async (req, res) => {
  const span = tracer.startSpan('get-users');
  try {
    // startSpan has no `parent` option; child spans are linked via context
    const ctx = trace.setSpan(context.active(), span);
    const querySpan = tracer.startSpan('db-query-users', undefined, ctx);
    await new Promise(resolve => setTimeout(resolve, 100)); // Simulate DB latency
    querySpan.end();
    span.setAttribute('user.count', users.length);
    res.json({ users, count: users.length });
  } catch (error) {
    span.setStatus({ code: SpanStatusCode.ERROR, message: error.message });
    res.status(500).json({ error: 'Internal server error' });
  } finally {
    span.end();
  }
});
// GET /users/:id - Get single user
app.get('/users/:id', async (req, res) => {
  const span = tracer.startSpan('get-user-by-id');
  span.setAttribute('user.id', req.params.id);
  try {
    const user = users.find(u => u.id === parseInt(req.params.id));
    if (!user) {
      span.setStatus({ code: SpanStatusCode.ERROR });
      span.setAttribute('error', true);
      return res.status(404).json({ error: 'User not found' });
    }
    res.json(user);
  } catch (error) {
    span.setStatus({ code: SpanStatusCode.ERROR, message: error.message });
    res.status(500).json({ error: 'Internal server error' });
  } finally {
    span.end();
  }
});
// POST /users - Create new user
app.post('/users', async (req, res) => {
  const span = tracer.startSpan('create-user');
  try {
    const { name, email } = req.body;
    if (!name || !email) {
      span.setStatus({ code: SpanStatusCode.ERROR, message: 'Missing required fields' });
      return res.status(400).json({ error: 'Name and email are required' });
    }
    const newUser = {
      id: users.length + 1,
      name,
      email,
    };
    users.push(newUser);
    span.setAttribute('user.created', newUser.id);
    res.status(201).json(newUser);
  } catch (error) {
    span.setStatus({ code: SpanStatusCode.ERROR, message: error.message });
    res.status(500).json({ error: 'Internal server error' });
  } finally {
    span.end();
  }
});
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`API server running on port ${PORT}`);
});
Custom Metrics for Business Intelligence
Beyond traces, let's add custom metrics to track business KPIs:
// metrics.js
const { metrics } = require('@opentelemetry/api');
const meter = metrics.getMeter('api-metrics');
// Request counter
const requestCounter = meter.createCounter('api.requests.total', {
  description: 'Total API requests',
});
// Response time histogram
const responseTimeHistogram = meter.createHistogram('api.response.time', {
  description: 'API response time in milliseconds',
  unit: 'ms',
});
// Active connections gauge
const activeConnections = meter.createUpDownCounter('api.connections.active', {
  description: 'Active API connections',
});
// Middleware to track all requests
function metricsMiddleware(req, res, next) {
  const startTime = Date.now();
  activeConnections.add(1);
  requestCounter.add(1, {
    method: req.method,
    path: req.route?.path || req.path,
  });
  res.on('finish', () => {
    const duration = Date.now() - startTime;
    responseTimeHistogram.record(duration, {
      method: req.method,
      path: req.route?.path || req.path,
      status: res.statusCode,
    });
    activeConnections.add(-1);
  });
  next(); // Pass control on, or every request will hang in this middleware
}
module.exports = { metricsMiddleware };
Integrate the metrics middleware into your Express app:
const { metricsMiddleware } = require('./metrics');
app.use(metricsMiddleware);
Running the Observability Stack
To actually see your telemetry data, you'll need a backend. For development, you can use Jaeger:
# docker-compose.yml
services:
  jaeger:
    image: jaegertracing/all-in-one:1.60
    ports:
      - "16686:16686" # Jaeger UI
      - "4317:4317"   # OTLP gRPC
      - "4318:4318"   # OTLP HTTP
    environment:
      - COLLECTOR_OTLP_ENABLED=true
Run with:
docker-compose up -d
Set environment variables so the exporters send to Jaeger's OTLP endpoint:
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
export OTEL_SERVICE_NAME=my-api
One caveat: Jaeger ingests traces only. The metrics exporter will still run, but to actually visualize the metrics, point it at an OpenTelemetry Collector or a Prometheus-compatible backend instead.
Viewing Your Observability Data
1. Start Jaeger with the docker-compose file above and open the UI at http://localhost:16686
2. Run your API: node api.js
3. Make some test requests:
curl http://localhost:3000/users
curl -X POST http://localhost:3000/users -H "Content-Type: application/json" -d '{"name":"Charlie","email":"charlie@example.com"}'
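A handful of requests won't produce a meaningful latency histogram; a quick loop (plain bash and curl, nothing else assumed) gives the exporters something to chew on:

```shell
# Fire 50 rounds of requests so the response-time histogram
# and the trace list fill up with real data
for i in $(seq 1 50); do
  curl -s http://localhost:3000/users > /dev/null
  curl -s "http://localhost:3000/users/$((i % 2 + 1))" > /dev/null
done
```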
In Jaeger, you'll see:
- Traces — Full request paths with timing
- Span details — Custom attributes you added
- Service map — How requests flow through your system
Best Practices for API Observability in 2026
1. Use semantic conventions — Stick to OpenTelemetry's standard attribute names (http.request.method, http.route, http.response.status_code) for compatibility.
2. Sample intelligently — Don't trace everything in production. Use tail-based sampling to capture errors and slow requests.
3. Add business context — Beyond technical spans, add attributes that matter to your business (user IDs, order IDs, product IDs).
4. Correlate logs with traces — Use the trace ID in your logs to get full context when debugging.
5. Set up alerts — Configure alerts for error rates above 1% or p99 latency above 2 seconds.
Conclusion
OpenTelemetry has matured significantly. What used to require vendor-specific SDKs now works with a single open standard: auto-instrumentation makes your APIs observable with almost no code changes, and custom spans and metrics layer business context on top.
The best part? You're not locked in. Switch from Jaeger to Grafana, DataDog, or any other backend — your instrumentation code stays the same.
Start small: add OpenTelemetry to one API endpoint this week. You'll immediately see value in being able to trace any request end-to-end.
This article is part of the 1xAPI technical series. For more API development guides, visit 1xapi.com/blog.