When building production applications, observability is crucial for understanding system behavior, diagnosing issues, and optimizing performance. However, metrics collection is often treated as an afterthought, bolted onto applications only when problems arise or when someone remembers to add instrumentation.
This creates a challenge: by the time you realize you need metrics, you may have already missed critical data points during development and testing phases. More importantly, it separates metrics from logs conceptually, even though they often represent the same events from different perspectives.
## The Relationship Between Logs and Metrics
Logs and metrics are two sides of the same coin. When you log that a request was processed, you might also want to track how many requests per second your system handles. When you log an error, you might want to increment an error counter. When you log a database query completion, you might want to record its duration as a timing metric.
In traditional approaches, developers often end up writing code like this:
```typescript
// Log the event
logger.info('Request processed', { userId: '123', endpoint: '/api/users' });

// Separately track the metric
statsd.increment('requests.total');
statsd.timing('request.duration', 150);
```
This duplication leads to several problems:
- **Maintenance burden:** You must remember to update both the log and metric calls when modifying code
- **Inconsistency:** Logs and metrics might drift out of sync over time
- **Code complexity:** Additional instrumentation code clutters business logic
- **Missed opportunities:** It's easy to forget to add metrics, especially when under time pressure
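A common mitigation is a hand-rolled wrapper that at least keeps the two calls together. The sketch below is illustrative only: `Logger`, `Metrics`, and `logWithMetrics` are hypothetical names standing in for your real logging and StatsD clients, not from any particular library.

```typescript
// Minimal interfaces standing in for a real logger and StatsD client.
interface Logger {
  info(message: string, meta?: Record<string, unknown>): void;
}

interface Metrics {
  increment(name: string): void;
  timing(name: string, ms: number): void;
}

// Hypothetical helper: emits the log line and its metrics in one call,
// so the two can no longer be updated independently.
function logWithMetrics(
  log: Logger,
  metrics: Metrics,
  message: string,
  meta: Record<string, unknown>,
  counter: string,
  durationMs?: number,
  timer?: string,
): void {
  log.info(message, meta);
  metrics.increment(counter);
  if (timer !== undefined && durationMs !== undefined) {
    metrics.timing(timer, durationMs);
  }
}
```

Helpers like this reduce duplication, but they push the problem into an ever-growing parameter list; that gap is what a fluent, unified API is designed to close.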
## The Auto-Instrumentation Approach
Many observability solutions attempt to solve this by providing auto-instrumentation. These tools automatically wrap HTTP frameworks, database drivers, and other common libraries to collect metrics without requiring explicit code changes.
Auto-instrumentation is powerful for baseline metrics, but it has limitations:
- **Generic metrics only:** You get request counts, durations, and error rates, but not business-specific metrics
- **Limited context:** Auto-instrumented metrics often lack the business context that your logs contain
- **Performance overhead:** Automatic wrapping can introduce overhead, even if minimal
- **Coverage gaps:** Custom logic, background jobs, and business-specific operations aren't covered
For example, auto-instrumentation might tell you that /api/users took 150ms, but it won't tell you that user ID 123 specifically requested their own profile data, or that this particular request involved a cache miss followed by a database query. That context lives in your logs, but connecting it to metrics requires additional effort.
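Bridging that gap by hand usually means attaching tags to each metric yourself. As a rough illustration of what that involves at the wire level, here is how a tagged counter is encoded in the DogStatsD flavor of the StatsD line protocol (`formatCounter` is a hypothetical helper written for this sketch, not part of any client library):

```typescript
// Sketch: encoding a counter datagram with DogStatsD-style tags
// (`name:value|c|#key:value,key:value`).
function formatCounter(name: string, tags: Record<string, string>): string {
  const tagList = Object.entries(tags)
    .map(([key, value]) => `${key}:${value}`)
    .join(',');
  return tagList.length > 0 ? `${name}:1|c|#${tagList}` : `${name}:1|c`;
}

// A request counter tagged with the endpoint and cache outcome:
formatCounter('requests.total', { endpoint: '/api/users', cache: 'miss' });
// → "requests.total:1|c|#endpoint:/api/users,cache:miss"
```

Every tag that should ride along with the metric has to be threaded through explicitly, which is exactly the effort the next section aims to remove.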
## LogLayer's Approach: Unified Logging and Metrics
LogLayer, a TypeScript logging library, addresses this by providing a unified API where logging and metrics collection happen together in a single, fluent call.
It offers a mixin that integrates with the `hot-shots` StatsD client to add a metrics API on top of LogLayer's logging capabilities.
LogLayer also supports a wide range of logging libraries and cloud transports, allowing you to use your preferred logging backend while adding metrics collection to your workflow. Instead of maintaining separate logging and metrics code, you add metrics as part of your logging statement.
Here's how it works in practice:
```typescript
import { LogLayer, useLogLayerMixin, ConsoleTransport } from 'loglayer';
import { StatsD } from 'hot-shots';
import { hotshotsMixin } from '@loglayer/mixin-hot-shots';

// Set up the StatsD client
const statsd = new StatsD({
  host: 'localhost',
  port: 8125
});

// Register the metrics mixin
useLogLayerMixin(hotshotsMixin(statsd));

// Create the logger instance
const log = new LogLayer({
  transport: new ConsoleTransport({
    logger: console
  })
});

// Log and track metrics in one call
log.statsIncrement('request.count')
  .statsTiming('request.duration', 150)
  .withMetadata({ userId: '123', endpoint: '/api/users' })
  .info('Request processed');
```
In this example, the single call accomplishes three things:
- Sends an increment metric for request counting
- Records a timing metric for request duration
- Logs the event with structured metadata
All of this happens atomically in your application code, ensuring that logs and metrics stay in sync.
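To see why the chained form keeps logs and metrics in sync, here is a minimal sketch of the fluent pattern itself: each chainable call only accumulates state, and the terminal `info()` is the single point where the log line and its metrics are emitted together. This illustrates the design idea, not LogLayer's actual implementation.

```typescript
// Everything emitted by one terminal call, captured as a single record.
interface Emitted {
  message: string;
  metadata: Record<string, unknown>;
  increments: string[];
  timings: Array<{ name: string; ms: number }>;
}

// Minimal fluent logger: chainable calls accumulate, `info()` flushes once.
class FluentLog {
  private metadata: Record<string, unknown> = {};
  private increments: string[] = [];
  private timings: Array<{ name: string; ms: number }> = [];

  constructor(private sink: (e: Emitted) => void) {}

  statsIncrement(name: string): this {
    this.increments.push(name);
    return this;
  }

  statsTiming(name: string, ms: number): this {
    this.timings.push({ name, ms });
    return this;
  }

  withMetadata(meta: Record<string, unknown>): this {
    Object.assign(this.metadata, meta);
    return this;
  }

  info(message: string): void {
    // One flush for both log and metrics, so they cannot drift apart.
    this.sink({
      message,
      metadata: this.metadata,
      increments: this.increments,
      timings: this.timings,
    });
  }
}
```

Because there is exactly one emission point, a refactor that drops the log line necessarily drops its metrics too, and vice versa.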
## Benefits of Unified Logging and Metrics
This approach provides several concrete benefits:
**Consistency:** Since metrics and logs are part of the same call, they can't drift apart. If you're logging an event, the corresponding metric is right there with it.

**Contextual metrics:** Your metrics carry the same rich context as your logs. When you see a spike in `request.count`, you can correlate it with the log messages from that time period, complete with metadata.

**Simplified code:** Less boilerplate means less code to maintain. You write one statement instead of multiple separate calls.

**Comprehensive coverage:** You're more likely to add metrics for custom business logic because it requires no additional effort beyond what you're already doing for logging.

**Better observability:** When investigating issues, having logs and metrics unified makes it easier to understand both what happened (logs) and how the system performed (metrics) for the same events.
## Real-World Use Cases
Consider a few scenarios where unified logging and metrics shine:
**API Request Handling:**

```typescript
app.get('/api/users/:id', async (req, res) => {
  const startTime = Date.now();
  const userId = req.params.id;

  try {
    const user = await getUserById(userId);
    const duration = Date.now() - startTime;

    log.statsIncrement('api.requests.success')
      .statsTiming('api.response.time', duration)
      .withMetadata({ userId, endpoint: '/api/users/:id' })
      .info('User data retrieved');

    res.json(user);
  } catch (error) {
    const duration = Date.now() - startTime;

    log.statsIncrement('api.requests.error')
      .statsTiming('api.response.time', duration)
      .withError(error)
      .withMetadata({ userId, endpoint: '/api/users/:id' })
      .error('Failed to retrieve user');

    res.status(500).json({ error: 'Internal server error' });
  }
});
```
**Background Job Processing:**

```typescript
async function processEmailQueue() {
  const job = await queue.getNext();

  if (!job) {
    log.statsGauge('queue.size', await queue.getSize())
      .info('Email queue empty');
    return;
  }

  const startTime = Date.now();

  try {
    await sendEmail(job);
    const duration = Date.now() - startTime;

    log.statsIncrement('email.sent')
      .statsDecrement('queue.size')
      .statsTiming('email.processing.time', duration)
      .withMetadata({ recipient: job.to, template: job.template })
      .info('Email sent successfully');
  } catch (error) {
    const duration = Date.now() - startTime;

    log.statsIncrement('email.failed')
      .statsTiming('email.processing.time', duration)
      .withError(error)
      .withMetadata({ recipient: job.to, template: job.template })
      .error('Failed to send email');
  }
}
```
**Database Query Monitoring:**

```typescript
async function findUsersByRole(role: string) {
  const startTime = Date.now();

  try {
    const users = await db.query('SELECT * FROM users WHERE role = ?', [role]);
    const duration = Date.now() - startTime;

    log.statsTiming('db.query.duration', duration)
      .statsHistogram('db.query.result.size', users.length)
      .withMetadata({ query: 'findUsersByRole', role, resultCount: users.length })
      .info('Database query executed');

    return users;
  } catch (error) {
    const duration = Date.now() - startTime;

    log.statsIncrement('db.query.errors')
      .statsTiming('db.query.duration', duration)
      .withError(error)
      .withMetadata({ query: 'findUsersByRole', role })
      .error('Database query failed');

    throw error;
  }
}
```
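All three examples share the same start-time/try/catch scaffolding, which you can factor into a small helper of your own. The sketch below is hypothetical: `MetricsLog` is a minimal structural type matching the chained calls shown above, and `timed` is not part of LogLayer.

```typescript
// Minimal structural type for a logger with the chained metrics calls
// used in the examples above.
interface MetricsLog {
  statsIncrement(name: string): MetricsLog;
  statsTiming(name: string, ms: number): MetricsLog;
  withError(error: unknown): MetricsLog;
  withMetadata(meta: Record<string, unknown>): MetricsLog;
  info(message: string): void;
  error(message: string): void;
}

// Hypothetical helper: times an async operation and emits the success or
// failure log together with its metrics, using `name` as the metric prefix.
async function timed<T>(
  log: MetricsLog,
  name: string,
  meta: Record<string, unknown>,
  fn: () => Promise<T>,
): Promise<T> {
  const startTime = Date.now();
  try {
    const result = await fn();
    log.statsIncrement(`${name}.success`)
      .statsTiming(`${name}.duration`, Date.now() - startTime)
      .withMetadata(meta)
      .info(`${name} succeeded`);
    return result;
  } catch (error) {
    log.statsIncrement(`${name}.error`)
      .statsTiming(`${name}.duration`, Date.now() - startTime)
      .withError(error)
      .withMetadata(meta)
      .error(`${name} failed`);
    throw error;
  }
}
```

With a helper like this, the API handler above shrinks to a single `timed(log, 'api.requests', { userId }, () => getUserById(userId))` call while still emitting the same unified log-plus-metrics record on both paths.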
## Conclusion
Observability shouldn't be an afterthought. By unifying logging and metrics collection in a single API, LogLayer makes it natural to instrument your code as you write it. You don't need to remember to add metrics separately, and you don't need to maintain parallel logging and metrics code paths.
The result is more comprehensive observability with less effort, better consistency between logs and metrics, and metrics that carry the rich context your logs provide. When investigating issues or understanding system behavior, having logs and metrics unified around the same events makes the whole picture clearer.
If you're building Node.js applications and want to improve your observability while reducing code complexity, consider giving LogLayer's unified logging and metrics approach a try. You might find that the best way to ensure you collect metrics is to make them inseparable from logging.