Polliog

Logtide 0.8.0: Browser Observability, MongoDB Support, and Golden Signals

Logtide 0.8.0 is out today. It's a release that started with a single promise from the 0.7.0 article: "full dashboard integration is the first thing on the 0.8.x list." We kept that promise, and then kept going.

This is the release that closes three major open threads at once: browser observability, MongoDB support for @logtide/reservoir, and Golden Signals with real percentile data. Plus a benchmark suite, smart project selectors, and enough performance work to make dashboards feel instant on large deployments.

If you're new here: Logtide is an open-source log management and SIEM platform built for European SMBs. Privacy-first, self-hostable, GDPR-compliant. No Elastic cluster to babysit: just Docker Compose and the storage engine of your choice.


What's New

🌐 Browser SDK: Observability for Your Frontend

Backend observability was already solid. Browser instrumentation was the gap. 0.8.0 closes it with @logtide/browser, a dedicated browser SDK built from the ground up, available as a drop-in addition to all existing framework packages.

Session tracking assigns a session_id to each browser tab via sessionStorage. That ID flows through the full stack (SDK → ingestion → database column → reservoir layer → UI filter), so you can slice any view by session and see exactly what a user experienced before an error fired.
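The tab-scoped session ID idea can be sketched in a few lines. This is a minimal illustration, not the SDK's actual code: the key name is hypothetical, the store is any Storage-like object (sessionStorage in the browser), and a real SDK would use crypto.randomUUID() rather than Math.random.

```typescript
// Sketch: one session_id per browser tab, persisted in a Storage-like store.
// sessionStorage is per-tab, so a new tab gets a fresh ID automatically.
type StorageLike = {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
};

const SESSION_KEY = "logtide.session_id"; // hypothetical key name

function getSessionId(storage: StorageLike): string {
  let id = storage.getItem(SESSION_KEY);
  if (id === null) {
    // New tab, new session (a real SDK would use crypto.randomUUID())
    id = Math.random().toString(36).slice(2);
    storage.setItem(SESSION_KEY, id);
  }
  return id; // stable for the lifetime of the tab
}
```

Because every log and span carries this ID, "filter by session" on the server side is just an equality match on one column.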

Core Web Vitals are collected automatically: LCP, INP, and CLS via the web-vitals library, with a configurable sampling rate so you're not flooding your instance for low-traffic pages.

Breadcrumbs work on two axes:

  • Click breadcrumbs use event delegation to track click and input interactions. data-testid attributes are captured when present. Input values are never captured.
  • Network breadcrumbs patch fetch and XMLHttpRequest to record method, URL, status code, and duration. Query params are stripped by default; you can add a deny list for sensitive endpoints.
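The network-breadcrumb mechanics can be sketched roughly like this. Everything here (the Breadcrumb shape, instrumentFetch) is illustrative, not the SDK's API; it just shows the wrap-record-strip pattern described above for fetch.

```typescript
// Sketch: wrap fetch to record method, URL, status, and duration,
// stripping query params from the URL before it is recorded.
type Breadcrumb = {
  method: string;
  url: string; // query string removed
  status: number;
  durationMs: number;
};

function instrumentFetch(
  record: (b: Breadcrumb) => void,
  baseFetch: (url: string, init?: RequestInit) => Promise<Response> = fetch,
) {
  return async (url: string, init?: RequestInit): Promise<Response> => {
    const start = Date.now();
    const res = await baseFetch(url, init);
    const clean = new URL(url);
    clean.search = ""; // query params are stripped by default
    record({
      method: init?.method ?? "GET",
      url: clean.toString(),
      status: res.status,
      durationMs: Date.now() - start,
    });
    return res;
  };
}
```

The real SDK also patches XMLHttpRequest the same way, and lets you deny-list sensitive endpoints entirely.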

Offline resilience wraps the transport layer with an OfflineTransport that buffers logs and spans when connectivity drops (bounded queue, no unbounded memory growth), flushes on reconnect, and uses sendBeacon on page unload so nothing is lost when the tab closes.

Source maps ship with a new @logtide/cli package and a logtide sourcemaps upload command. Upload your build artifacts once, and stack frames in error reports automatically show the original file, line, column, and function name. You can toggle between minified and original frames directly in the UI.

Each framework got targeted improvements:

  • Next.js: RSC error detection tagged with mechanism: 'react.server-component', route params from __NEXT_DATA__ in navigation breadcrumbs
  • Nuxt: logtidePiniaPlugin for automatic Pinia action breadcrumbs
  • SvelteKit: route context in handleError, createBoundaryHandler() for <svelte:boundary>
  • Angular: NgZone context detection tagging errors as angular.zone: 'inside'/'outside'

Projects using the browser SDK automatically get two new dashboard tabs: Performance (Web Vitals over time) and Sessions (session-based filtering and replay context). The Capabilities API (GET /api/v1/projects/:id/capabilities) auto-detects whether a project has Web Vitals or Sessions data and shows those tabs only when relevant.
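The tab-gating logic on the client side is straightforward. This sketch assumes a response shape with webVitals/sessions booleans, which is a guess at the Capabilities API schema, not documentation of it:

```typescript
// Sketch: show the Performance and Sessions tabs only when the
// capabilities check says the project actually has that data.
type Capabilities = { webVitals: boolean; sessions: boolean };

function visibleTabs(caps: Capabilities): string[] {
  const tabs = ["Logs", "Traces"]; // always-on tabs (illustrative)
  if (caps.webVitals) tabs.push("Performance");
  if (caps.sessions) tabs.push("Sessions");
  return tabs;
}
```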


📈 Metrics Dashboard: The Dashboard We Promised in 0.7.0

We shipped OTLP metrics ingestion in 0.7.0 with the store and API client ready but no visualization layer. 0.8.0 delivers it.

The redesigned metrics page has two tabs: Overview and Explorer.

Overview groups your metrics by service. Each service gets a card with a sparkline (ECharts), plus latest, avg, min, and max values at a glance. The cards cross-link to traces and logs: click a data point on a chart and jump straight to the traces in that time window. Service selection and time range live in a persistent header that stays in sync with URL parameters.

Under the hood, Overview is powered by pre-aggregated rollups rather than scanning raw data on every load:

  • TimescaleDB: metrics_hourly_stats and metrics_daily_stats continuous aggregates with automatic refresh policies
  • ClickHouse: metrics_hourly_rollup and metrics_daily_rollup materialized views
  • MongoDB: on-the-fly aggregation pipeline (no separate materialized views needed at this scale)

The query layer uses smart rollup routing: if a request asks for 1h or 1d intervals with a compatible aggregation function, it hits the pre-aggregated table. Otherwise it falls back to raw data. You get dashboard speed without sacrificing query flexibility.
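The routing decision itself is a small pure function. This sketch is an assumption about the shape of that logic (the source names and "compatible aggregation" set are illustrative), not the actual query layer:

```typescript
// Sketch: route a metrics query to a rollup table or raw data.
type QuerySource = "hourly_rollup" | "daily_rollup" | "raw";

// Aggregations that can be answered from pre-aggregated buckets.
const ROLLUP_COMPATIBLE = new Set(["avg", "min", "max", "sum", "count"]);

function resolveSource(interval: string, aggregation: string): QuerySource {
  if (!ROLLUP_COMPATIBLE.has(aggregation)) return "raw"; // e.g. exact percentiles
  if (interval === "1h") return "hourly_rollup";
  if (interval === "1d") return "daily_rollup";
  return "raw"; // non-rollup intervals fall back to raw data
}
```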


🍃 MongoDB Storage Adapter: @logtide/reservoir Is Now a Tri-Engine System

@logtide/reservoir launched with TimescaleDB and ClickHouse. 0.8.0 adds the third engine: MongoDB.

All 33 StorageEngine interface methods are implemented: logs, spans, traces, metrics, and exemplars. The adapter ships with MongoDBQueryTranslator for filter translation, a Docker Compose profile-gated MongoDB 7.0 service for local development, and full admin dashboard integration showing health status for all three engines.

It also auto-detects MongoDB 5.0+ features: $dateTrunc for time bucketing and native time-series collections when available, with fallback for older versions.
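The version-gated bucketing can be sketched like this. The field name ($time) and the exact fallback strategy are illustrative assumptions; the point is the shape of the feature detection:

```typescript
// Sketch: use $dateTrunc on MongoDB 5.0+, fall back to string-formatted
// truncation on older servers.
function bucketStage(serverVersion: string, unit: "hour" | "day"): object {
  const major = Number(serverVersion.split(".")[0]);
  if (major >= 5) {
    // Native time truncation, available since MongoDB 5.0
    return { $dateTrunc: { date: "$time", unit } };
  }
  // Older servers: truncate by formatting the timestamp down to the bucket
  const format = unit === "hour" ? "%Y-%m-%dT%H:00:00" : "%Y-%m-%d";
  return { $dateToString: { date: "$time", format } };
}
```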

The adapter comes with 100 tests: 34 unit tests and 66 integration tests covering the full query surface.

Practical compatibility: if you're running DocumentDB, FerretDB, or Cosmos DB in MongoDB compatibility mode, the adapter works with those too. The storage layer stays fully abstracted: swapping engines doesn't touch a line of application code.

// reservoir.config.ts
import { createStorageEngine } from '@logtide/reservoir'

export default createStorageEngine('mongodb', {
  uri: 'mongodb://localhost:27017/logtide',
  authSource: 'admin',
})

📊 Golden Signals with Percentiles

Rate, errors, duration: three of the four golden signals of observability. Duration without percentiles is noise. 0.8.0 adds P50, P95, and P99 aggregation across all three storage engines.

The new Golden Signals panel has dedicated charts for request rate, error rate, and latency percentiles side by side. The percentile aggregation implementation is engine-native: percentile_cont on TimescaleDB, quantile on ClickHouse, $percentile on MongoDB. No application-level approximation.

You can filter by service name and additional attributes, and all three charts load in parallel.
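Concretely, "engine-native" means each engine contributes its own percentile primitive. This sketch shows what those primitives look like; the column name and the idea of building them through one dispatch function are illustrative assumptions, not the actual query translator:

```typescript
// Sketch: each storage engine's native percentile primitive.
type Engine = "timescaledb" | "clickhouse" | "mongodb";

function percentileExpr(engine: Engine, p: number, column: string): string {
  switch (engine) {
    case "timescaledb": // PostgreSQL ordered-set aggregate
      return `percentile_cont(${p}) WITHIN GROUP (ORDER BY ${column})`;
    case "clickhouse": // parametric aggregate function
      return `quantile(${p})(${column})`;
    case "mongodb": // $percentile accumulator (MongoDB 7.0)
      return JSON.stringify({
        $percentile: { input: `$${column}`, p: [p], method: "approximate" },
      });
  }
}
```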


Everything Else Worth Knowing

Smart project selectors: project dropdowns throughout the app now only show projects that actually have data in the relevant category. If a project has no traces, it won't appear in the traces page selector. A new GET /api/v1/projects/data-availability endpoint powers this, with graceful fallback to all projects if the check fails.

Reservoir benchmark suite: k6-based benchmarking scripts for ingestion and query workloads across all three engines. Seed up to 100k events per run. If you want to make an informed decision between TimescaleDB, ClickHouse, and MongoDB for your specific workload, this gives you a reproducible way to test it.

Custom time range picker: the TimeRangePicker now supports arbitrary custom ranges, synced to URL parameters. Bookmark any time window.

DSN copy on API key creation: when you create a new API key, the dialog now shows the full DSN string (https://KEY@host) ready to copy. One step instead of three.


Performance Work

0.8.0 has more targeted performance work than any previous release. A few highlights:

TimescaleDB skip-scan via Recursive CTEs: distinct queries on high-cardinality fields like service were doing full table scans. Recursive CTEs implement the index skip-scan pattern PostgreSQL lacks natively, dropping execution time from minutes to milliseconds on large tables.
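The pattern is worth seeing. This is the standard recursive-CTE emulation of an index skip scan, written here as the kind of SQL the query layer might generate; the table and column names (logs.service) are assumptions. Each recursive step is a single index seek for the next value strictly greater than the last one found, so cost scales with the number of distinct values rather than the number of rows:

```typescript
// Sketch: the skip-scan SQL shape, as a generated query string.
const DISTINCT_SERVICES_SQL = `
WITH RECURSIVE distinct_services AS (
  -- seed: the smallest service value, via one index seek
  (SELECT service FROM logs ORDER BY service LIMIT 1)
  UNION ALL
  -- step: the next value strictly greater than the previous one
  SELECT (
    SELECT l.service FROM logs l
    WHERE l.service > d.service
    ORDER BY l.service LIMIT 1
  )
  FROM distinct_services d
  WHERE d.service IS NOT NULL
)
SELECT service FROM distinct_services WHERE service IS NOT NULL;
`;
```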

Intelligent dashboard counts: all three engines now support countEstimate for approximate counts, bypassing heavy COUNT(*) operations on high-volume projects. The dashboard loads instantly regardless of log volume.

MongoDB-specific: insertMany({ordered: false}) for maximum write throughput, compound indexes matching actual query patterns, sparse indexes on nullable fields, atomic trace upsert with a single bulkWrite (one network round trip), and cursor-based keyset pagination with (time, id) tuples for consistent pagination under concurrent writes.
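Keyset pagination deserves a sketch, since it is what keeps pages stable under concurrent writes: instead of OFFSET, each page starts strictly after the (time, id) tuple of the previous page's last row. Names here are illustrative, and the in-memory filter stands in for what is really an indexed range predicate:

```typescript
// Sketch: (time, id) keyset pagination over an ordered result set.
type Row = { time: number; id: string };

// Total order on (time, id) tuples: time first, id as tiebreaker.
function compareRows(a: Row, b: Row): number {
  return a.time - b.time || a.id.localeCompare(b.id);
}

function nextPage(rows: Row[], cursor: Row | null, limit: number): Row[] {
  const ordered = [...rows].sort(compareRows);
  // Keep only rows strictly after the cursor; new rows inserted before
  // the cursor can no longer shift the page boundary.
  const after = cursor ? ordered.filter((r) => compareRows(r, cursor) > 0) : ordered;
  return after.slice(0, limit);
}
```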

Capabilities detection: reduced the scanning range from 7 days to 24 hours for Web Vitals and Sessions detection, making the initial project dashboard load instant.


Upgrading

No breaking changes.

docker compose pull
docker compose up -d

Redis-free deployment:

docker compose -f docker-compose.simple.yml pull
docker compose -f docker-compose.simple.yml up -d

To use the MongoDB adapter, enable the profile in your Compose setup:

docker compose --profile mongodb up -d

What's Next

0.8.0 closes the observability foundation. What's left before v1.0 (our beta milestone):

  • Log parsing pipelines (#152): structured extraction for syslog, legacy formats, and custom patterns without writing VRL transforms by hand
  • Webhook receivers (#154): ingest external events from GitHub, PagerDuty, Stripe, and others without custom code
  • Proactive health monitoring (#151): status pages built from the data already in Logtide, with uptime history and alerting
  • Scheduled digest reports (#153): weekly email summaries of error trends, anomalies, and key metrics

The query abstraction layer is also a candidate for extraction as a standalone open-source library; if you have thoughts on that, open a discussion.


Full Changelog: v0.7.0...v0.8.0

Star the project, open an issue, or just try it: the Docker setup takes about 5 minutes.
