When building production applications, observability is crucial for understanding system behavior, diagnosing issues, and optimizing performance. However, metrics collection is often treated as an afterthought, bolted onto applications only when problems arise or when someone remembers to add instrumentation.
This creates a challenge: by the time you realize you need metrics, you may have already missed critical data points during development and testing phases. More importantly, it separates metrics from logs conceptually, even though they often represent the same events from different perspectives.
The Auto-Instrumentation Approach
Many observability solutions attempt to solve this by providing auto-instrumentation. These tools automatically wrap HTTP frameworks, database drivers, and other common libraries to collect metrics without requiring explicit code changes.
Auto-instrumentation is powerful for baseline metrics, but it has limitations:
Generic metrics only: You get request counts, durations, and error rates, but not business-specific metrics
Limited context: Auto-instrumented metrics often lack business context that your logs contain
Performance overhead: Automatic wrapping adds some runtime cost, even if it's usually small
Coverage gaps: Custom logic, background jobs, and business-specific operations aren't covered
For example, auto-instrumentation might tell you that /api/users took 150ms, but it won't tell you that user ID 123 specifically requested their own profile data, or that this particular request involved a cache miss followed by a database query. That context lives in your logs, but connecting it to metrics requires additional effort.
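Bridging that gap by hand usually means running a metrics client next to your logger and emitting both for the same event. Here's a minimal sketch of that pattern with hot-shots; the metric names, the Map standing in for a cache, and the getUserProfile function are illustrative, not part of any library:

import { StatsD } from 'hot-shots';

const statsd = new StatsD({ host: 'localhost', port: 8125 });
const cache = new Map<string, unknown>(); // stand-in for a real cache layer

function getUserProfile(userId: string) {
  const hit = cache.has(`user:${userId}`);

  // The metric and the log line describe the same event, but they go through
  // two different clients, and the business context is duplicated by hand.
  statsd.increment(hit ? 'profile.cache_hit' : 'profile.cache_miss');
  console.log('Fetched profile', { userId, endpoint: '/api/users', cacheHit: hit });

  return cache.get(`user:${userId}`);
}

Nothing ties the counter to the log entry except developer discipline, which is exactly the effort that tends to get skipped.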
LogLayer's Approach: A Clean, Fluent API
LogLayer, a TypeScript logging library, addresses this by providing a single, consistent API for both logging and metrics.
It offers a mixin that integrates with the StatsD client hot-shots to add a fluent metrics API directly to LogLayer instances. The mixin provides a builder pattern that makes metrics collection as clean and expressive as LogLayer's logging API.
LogLayer also supports a wide range of logging libraries and cloud transports, allowing you to use your preferred logging backend while adding metrics collection through the same interface. Instead of juggling two different APIs, you use one consistent, fluent interface for both logging and metrics.
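For instance, pointing LogLayer at Pino instead of the console is just a different transport; the calls you make against the log instance stay the same. This sketch assumes the @loglayer/transport-pino package:

import { LogLayer } from 'loglayer';
import { pino } from 'pino';
import { PinoTransport } from '@loglayer/transport-pino';

// Same LogLayer API, different logging backend underneath
const log = new LogLayer({
  transport: new PinoTransport({
    logger: pino()
  })
});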
Here's how it works in practice:
import { LogLayer, useLogLayerMixin, ConsoleTransport } from 'loglayer';
import { StatsD } from 'hot-shots';
import { hotshotsMixin } from '@loglayer/mixin-hot-shots';

// Set up StatsD client
const statsd = new StatsD({
  host: 'localhost',
  port: 8125
});

// Register the metrics mixin (must be called before creating LogLayer instances)
useLogLayerMixin(hotshotsMixin(statsd));

// Create logger instance
const log = new LogLayer({
  transport: new ConsoleTransport({
    logger: console
  })
});

// Use the same log instance for both metrics and logging
log.stats.increment('request.count').send();
log.stats.timing('request.duration', 150).send();
log.withMetadata({ userId: '123', endpoint: '/api/users' })
  .info('Request processed');
Notice how both metrics and logging use the same log instance with a consistent, fluent API. The stats property provides access to a builder pattern that lets you chain configuration methods before sending the metric:
// The fluent builder pattern makes metrics configuration clean and readable
log.stats.increment('api.requests')
  .withValue(1)
  .withTags(['env:production', 'service:api'])
  .withSampleRate(0.1)
  .send();
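A note on the sample rate: with withSampleRate(0.1), the client sends roughly one in ten of these increments and includes the rate in the packet, so a StatsD-compatible backend can scale the count back up. That's the standard StatsD technique for keeping high-volume counters cheap.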
Why Using the Same Object Matters
The key advantage of LogLayer's approach is that metrics and logging share the same object instance. This might seem like a small detail, but it has significant practical benefits:
No separate client management: You don't need to import, configure, or pass around a separate StatsD client; your log instance already has everything you need. Adding a metric is just accessing a property on the same object.
Consistent context: Because the same log instance travels through your code, you can add a metric wherever you're already logging, without injecting another dependency.
Consider this common scenario:
// Traditional approach - need both logger and statsd
function processRequest(logger: Logger, statsd: StatsD, userId: string) {
  logger.info('Processing request', { userId });
  statsd.increment('requests.total');
  // ... rest of function
}

// With LogLayer - just one object
function processRequest(log: LogLayer, userId: string) {
  log.info('Processing request', { userId });
  log.stats.increment('requests.total').send();
  // ... rest of function
}
The LogLayer version is simpler because you're working with one object instead of two. When you're already logging, adding metrics becomes a natural extension of what you're doing, not a separate concern.
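Putting it together, a handler can time the whole operation, count failures, and log the outcome through that one instance. The sketch below reuses only the calls shown above plus LogLayer's withError; the route, metric names, and fetchUser stub are made up for illustration:

import type { LogLayer } from 'loglayer';

// Stub for the real data access layer (illustrative only)
async function fetchUser(userId: string) {
  return { id: userId };
}

async function handleGetUser(log: LogLayer, userId: string) {
  const started = Date.now();
  try {
    const user = await fetchUser(userId);
    log.withMetadata({ userId, endpoint: '/api/users' })
      .info('Request processed');
    return user;
  } catch (err) {
    // Failure count and error log come from the same object, in the same place
    log.stats.increment('request.errors').send();
    log.withError(err as Error).error('Request failed');
    throw err;
  } finally {
    log.stats.timing('request.duration', Date.now() - started).send();
  }
}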
Conclusion
Observability shouldn't be an afterthought. By providing a single, fluent API for both logging and metrics, LogLayer makes it natural to instrument your code as you write it. You don't need to juggle multiple APIs or remember different patterns for logging vs. metrics.
The hot-shots mixin's fluent builder pattern makes metrics configuration clean and expressive, while the unified API means your instrumentation code stays consistent throughout your application. The result is more comprehensive observability with less effort, better code organization, and metrics that naturally live alongside your logs.
If you're building Node.js applications and want to improve your observability while reducing code complexity, consider giving LogLayer's fluent logging and metrics API a try. You might find that the best way to ensure you collect metrics is to make them as easy to record as your logs.