Chintan Shah
logfx v1.0.0: One Logger for Development and Production

Today I'm shipping logfx v1.0.0. It's the first release I consider production-ready: 13 official integrations, PII redaction, retry and circuit breaker in the webhook transport, and zero dependencies. Same API in Node and browser.

Every logger forces a choice: readable output for development, or structured JSON for production. You either configure both manually or pick one and live with it. logfx v1.0.0 removes that choice. Same logger, auto-detected format based on NODE_ENV.

I've been maintaining logfx since v0.3.0.

The Problem

In development, you want logs you can read. Colors, emojis, clear structure. When something breaks at 2am, you need to scan output quickly.

In production, you need structured JSON. Datadog, Elasticsearch, Splunk, and every log aggregator expect JSON. Timestamps, levels, correlation IDs. Machine-readable.

Most loggers make you configure two setups or accept one format everywhere. logfx switches automatically.

The second pain point: logging user data. Emails, tokens, passwords. One slip and you're leaking PII. logfx handles that with built-in redaction and custom patterns.

The third: sending logs to a remote endpoint. Networks fail. Retries, circuit breakers, dead letter queues. You either build it or hope your logs get through. logfx ships it.

import { createLogger } from 'logfx'

const log = createLogger()

log.info('Server started', { port: 3000 })

Development output:

💡 INFO  Server started { port: 3000 }

Production output (NODE_ENV=production):

{"timestamp":"2026-01-14T12:00:00.000Z","level":"info","message":"Server started","data":{"port":3000}}

No format flag. No environment checks in your code. It just works.

What logfx Actually Does

Core Logging

Five levels (debug, info, success, warn, error), namespaced loggers, context metadata. Errors serialize with stack traces. Lazy evaluation so you can defer expensive operations until they're actually logged. Same API whether you're in Node, Bun, Deno, or the browser.

const authLog = createLogger({ namespace: 'auth' })
authLog.info('Login attempt', { userId: 123 })
// 💡 INFO [auth] Login attempt { userId: 123 }
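The lazy evaluation mentioned above is worth a closer look. logfx's exact call signature for it isn't shown in this post, so here is a general sketch of the pattern: pass a function instead of a value, and the function only runs if the level is actually enabled.

```typescript
// Minimal sketch of lazy evaluation: a log argument may be a function
// that is invoked only when the level is enabled.
type Lazy<T> = T | (() => T);

function resolve<T>(value: Lazy<T>): T {
  return typeof value === 'function' ? (value as () => T)() : value;
}

function debugLog(enabled: boolean, message: string, data: Lazy<object>): string | null {
  if (!enabled) return null; // the expensive closure below never runs
  return `${message} ${JSON.stringify(resolve(data))}`;
}

// The object is only built when debug logging is on:
debugLog(false, 'cache state', () => ({ entries: 10_000 })); // returns null, closure never runs
```

The payoff: you can log expensive serializations in hot paths without paying for them when the level is filtered out.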

Attach metadata to every log from a logger:

const log = createLogger({
  context: { service: 'api', version: '1.0.0' }
})
log.info('Request received', { path: '/users' })
// Every entry includes service and version

W3C TraceParent format is supported for correlating logs across services.
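For reference, a traceparent header is four dash-separated hex fields: version, trace ID, parent span ID, and flags. A minimal parser (this is a sketch of the W3C format itself, not logfx's internal code) looks like:

```typescript
// Parse a W3C traceparent header: version-traceId-spanId-flags.
const TRACEPARENT = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/;

function parseTraceParent(header: string): { traceId: string; spanId: string } | null {
  const m = TRACEPARENT.exec(header);
  if (!m) return null; // malformed headers are ignored, not guessed at
  return { traceId: m[2], spanId: m[3] };
}

parseTraceParent('00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01');
// → { traceId: '0af7651916cd43dd8448eb211c80319c', spanId: 'b7ad6b7169203331' }
```

Attaching the extracted traceId to logger context lets you join log lines from every service a request touched.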

PII Redaction

Logging user data is risky. Emails, SSNs, credit cards, JWTs can leak. logfx has built-in patterns and key-based redaction:

const log = createLogger({
  redact: {
    keys: ['password', 'token', 'apiKey'],
    patterns: ['email', 'ssn', 'creditCard', 'phone', 'ip', 'jwt'],
    customPatterns: [
      { name: 'apiKey', regex: /sk_(live|test)_[a-zA-Z0-9]+/g }
    ]
  },
  transports: [transports.console({ format: 'json' })]
})

log.info('User signup', { email: 'user@example.com', password: 'secret' })
// {"email":"[REDACTED]","password":"[REDACTED]",...}

You can add custom patterns or masking functions. Useful when you need partial redaction for debugging (e.g. last 4 digits of a card).
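A partial-redaction masker might look like the following. This is a hypothetical helper, not logfx's built-in one; the `maskCard` name and output format are my own.

```typescript
// Hypothetical masking function: keep the last four digits of a card
// number for debugging, mask everything else.
function maskCard(value: string): string {
  const digits = value.replace(/\D/g, ''); // strip spaces and dashes
  if (digits.length < 12) return '[REDACTED]'; // not a plausible card number
  return `****-****-****-${digits.slice(-4)}`;
}

maskCard('4242 4242 4242 4242'); // → '****-****-****-4242'
```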

Production Reliability (Webhook Transport)

When you send logs to a remote endpoint, networks fail. Timeouts, 5xxs, rate limits. logfx's webhook transport handles that:

  • Retry - Exponential backoff with jitter. Configurable max retries and delay.
  • Circuit breaker - Stops sending after N failures, reopens after a timeout.
  • Dead letter queue - Failed logs go to an in-memory queue. Optionally persist to disk.
  • Multi-region failover - Multiple URLs with round-robin or priority. Optional health checks.

You get this without writing retry logic or circuit breaker code. It's built in.
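To make the retry behavior concrete, here is a sketch of exponential backoff with full jitter, the general strategy described above; the base delay and cap are illustrative values, not logfx defaults.

```typescript
// Exponential backoff with full jitter: double the window each attempt,
// cap it, then pick a uniform random delay inside the window.
function backoffDelay(attempt: number, baseMs = 200, maxMs = 10_000): number {
  const windowMs = Math.min(maxMs, baseMs * 2 ** attempt); // 200, 400, 800, ... capped
  return Math.random() * windowMs; // full jitter spreads out retry storms
}
```

Jitter matters: without it, every client that failed at the same moment retries at the same moment, hammering the endpoint in synchronized waves.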

Framework Middleware

Express, Fastify, and Next.js integrations give you req.log and request IDs:

import express from 'express'
import { expressLogger } from 'logfx/middleware'

const app = express()
app.use(expressLogger())

app.get('/users', (req, res) => {
  req.log.info('Fetching users', { userId: req.query.id })
  res.json({ users: [] })
})

Output:

💡 INFO [http] Incoming request { method: 'GET', path: '/users', requestId: 'abc123' }
💡 INFO [http] Fetching users { userId: '42' }
💡 INFO [http] Request completed { method: 'GET', path: '/users', status: 200, durationMs: 45 }

Each request gets a unique ID. Status codes set log level (5xx = error, 4xx = warn). Skip health checks or customize ID extraction. The middleware works with Express, Fastify, and Next.js API routes.
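The status-to-level rule is simple enough to sketch directly (the mapping below follows the rule stated above; the function name is mine):

```typescript
// Map an HTTP status code to a log level: 5xx = error, 4xx = warn,
// everything else = info.
type Level = 'info' | 'warn' | 'error';

function levelForStatus(status: number): Level {
  if (status >= 500) return 'error';
  if (status >= 400) return 'warn';
  return 'info';
}
```

This is why failed requests surface in your error stream without any per-route logging code.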

13 Integrations

Separate packages for each platform. Install only what you need:

npm install logfx logfx-datadog
# or logfx-elasticsearch, logfx-sentry, logfx-cloudwatch, etc.
import { createLogger } from 'logfx'
import { datadogTransport } from 'logfx-datadog'

const log = createLogger({
  transports: [
    datadogTransport({
      apiKey: process.env.DD_API_KEY,
      service: 'my-api',
      batchSize: 100,
      flushInterval: 5000
    })
  ]
})

Available integrations: Datadog, Elasticsearch, Sentry, OpenTelemetry, AWS CloudWatch, Google Cloud Logging, Azure Monitor, Slack, Grafana Loki, Papertrail, Splunk, Honeycomb, Logtail. Each uses the same Transport interface. Add console, file, or webhook in the same array.

No monolith: each integration ships as its own package, so the core stays small and dependency-free, and you install only what you need.
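Because everything funnels through one Transport interface, writing your own is straightforward. The exact shape of logfx's interface isn't shown in this post, so the types below are an assumption about what such an interface typically looks like:

```typescript
// Hypothetical shape of a shared Transport interface; the real logfx
// types may differ, but every sink consumes entries like this.
interface LogEntry {
  timestamp: string;
  level: string;
  message: string;
  data?: Record<string, unknown>;
}

interface Transport {
  log(entry: LogEntry): void | Promise<void>;
}

// A custom transport is just an object implementing that method,
// e.g. one that collects entries in memory for tests:
const memoryTransport = (store: LogEntry[]): Transport => ({
  log(entry) {
    store.push(entry);
  }
});
```

An in-memory transport like this is handy in test suites: assert on captured entries instead of scraping stdout.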

File Transport with Rotation

Write to disk with size-based rotation and compression:

transports.file({
  path: './logs/app.log',
  rotation: {
    maxSize: '10mb',
    maxFiles: 5,
    compress: true
  }
})

Creates app.log.1, app.log.2, and so on. Old files are gzipped. This keeps the disk from filling up.
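Size strings like '10mb' are a small detail worth understanding. A parser for that kind of value (a sketch of the general pattern, assuming binary units; not logfx's actual implementation) could be:

```typescript
// Parse a human-readable size like '10mb' into bytes, assuming
// binary units (1 kb = 1024 bytes).
function parseSize(input: string): number {
  const m = /^(\d+(?:\.\d+)?)\s*(b|kb|mb|gb)$/i.exec(input.trim());
  if (!m) throw new Error(`invalid size: ${input}`);
  const units: Record<string, number> = { b: 1, kb: 1024, mb: 1024 ** 2, gb: 1024 ** 3 };
  return Math.floor(Number(m[1]) * units[m[2].toLowerCase()]);
}

parseSize('10mb'); // → 10485760
```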

Browser Support

Same API in Node and browser. For SPAs, the Beacon transport sends logs reliably even when the user closes the tab:

transports.beacon({
  url: '/api/logs',
  events: { beforeunload: true, visibilitychange: true }
})

Uses the Beacon API. Falls back to fetch if unavailable. Non-blocking so it doesn't delay page unload. Useful for analytics, error reporting, or session replay when the user navigates away.

Log Sampling

For high traffic, reduce volume while keeping visibility:

const log = createLogger({
  sampling: {
    debug: 0.1,
    info: 0.5,
    warn: 1.0,
    error: 1.0
  }
})

Sample 10% of debug, 50% of info. Always log warnings and errors.
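The mechanism behind per-level sampling is a coin flip per entry. A sketch of the general technique (logfx's internals may differ):

```typescript
// Per-level probabilistic sampling: an entry passes with the configured
// probability for its level; unlisted levels always log.
function shouldLog(rates: Record<string, number>, level: string): boolean {
  const rate = rates[level] ?? 1.0;
  return Math.random() < rate; // rate 1.0 always passes, rate 0 never does
}
```

Because the decision is per entry, volume drops proportionally while rare events still show up at their true relative frequency.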

When It Helps

  • Node.js APIs - Express/Fastify/Next.js middleware, structured JSON to Datadog/Elasticsearch, request tracing
  • SPAs - Beacon transport for analytics or error reporting on page close
  • Microservices - Context and request IDs for correlating logs across services
  • Sensitive data - PII redaction before logs hit aggregation
  • High volume - Async buffering, sampling, circuit breaker when the log sink is down

What's in the Box

  • Zero dependencies on the core package
  • ~3KB gzipped
  • ESM and CJS
  • Full TypeScript support
  • CI: tests, typecheck, build on Node 18, 20, 22, 24
  • Stress tested: 10k logs in under 1s, 1k with redaction in under 500ms

What It Doesn't Do

It's a logger, not a full observability platform. No distributed tracing (though OpenTelemetry integration injects trace IDs). No metrics. No APM. It logs. That's the scope.

Try It

npm install logfx
import { log } from 'logfx'

log.info('Hello', { version: '1.0.0' })
log.error('Something failed', new Error('oops'))

If you're on logfx 0.5.x, v1.0.0 adds integrations, PII redaction improvements, and production hardening. The core API is compatible. Upgrade and add what you need.

Wrapping Up

One logger for dev and prod. No format switching. No config. 13 integrations when you need them. PII redaction and production reliability built in.

I also maintain handlejson (GitHub) for safe JSON parsing, envconfig-kit (GitHub) for env validation, and upstatus (GitHub) for uptime monitoring. All focused Node/TS packages.

What logging setup are you using? Would love to hear what works (or doesn't) for you.
