Logtide 0.7.0 is out, and it's the biggest release since we went open-source. Three major features, a full dashboard overhaul, and a pile of fixes.
If you're new here: Logtide is an open-source log management and SIEM platform built for European SMBs. Privacy-first, self-hostable, GDPR-compliant. No Elastic cluster to babysit, no opaque pricing, just PostgreSQL/TimescaleDB (or ClickHouse if you need it) and Docker Compose.
- Cloud: logtide.dev
- GitHub: logtide-dev/logtide (340+ ⭐)
- Docs: logtide.dev/docs
What's New
OTLP Metrics Ingestion: The Observability Stack Is Now Complete
Logs. Traces. Metrics. Those are the three pillars of observability, and until now Logtide covered two of them. With 0.7.0, the loop closes.
Logtide now accepts metrics over the standard OTLP protocol (POST /v1/otlp/metrics), with support for both protobuf and JSON payloads, gzip compression, and all five OpenTelemetry metric types: gauge, sum, histogram, exponential histogram, and summary.
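To make the wire format concrete, here is a minimal OTLP/JSON payload for a single gauge data point, as defined by the OpenTelemetry OTLP/JSON encoding. The service name, metric name, and host URL are made-up placeholders; only the endpoint path comes from the release notes.

```python
import json
import time

now_ns = int(time.time() * 1e9)

# One gauge data point, shaped per the OTLP/JSON encoding:
# resourceMetrics -> scopeMetrics -> metrics -> gauge -> dataPoints.
payload = {
    "resourceMetrics": [{
        "resource": {
            "attributes": [
                {"key": "service.name", "value": {"stringValue": "checkout"}}
            ]
        },
        "scopeMetrics": [{
            "metrics": [{
                "name": "queue.depth",
                "unit": "{items}",
                "gauge": {
                    "dataPoints": [{
                        "timeUnixNano": str(now_ns),  # OTLP encodes int64 as string in JSON
                        "asDouble": 42.0,
                        "attributes": [
                            {"key": "queue", "value": {"stringValue": "emails"}}
                        ],
                    }]
                },
            }]
        }],
    }]
}

body = json.dumps(payload)
# Then POST it, e.g. with requests:
#   requests.post("https://your-logtide-host/v1/otlp/metrics", data=body,
#                 headers={"Content-Type": "application/json"})
```

Protobuf and gzip-compressed bodies follow the same endpoint; JSON is simply the easiest to eyeball.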
The part we're most excited about: exemplar support. When you're staring at a latency spike on a histogram, you can click through to the exact trace that caused it. Metrics and traces are no longer separate islands.
On the storage side, metrics land in dedicated TimescaleDB hypertables with 7-day chunk compression and a 90-day retention policy. If you're running ClickHouse via the @logtide/reservoir abstraction, that's fully supported too.
The query API covers everything you'd expect:
GET /api/v1/metrics/names
GET /api/v1/metrics/labels/keys
GET /api/v1/metrics/labels/values
GET /api/v1/metrics/data
GET /api/v1/metrics/aggregate
Seven aggregation intervals (1m to 1w), six functions (avg, sum, min, max, count, last), and group-by label support for multi-series charts. The Svelte store and API client are wired up and ready; full dashboard integration is the first thing on the 0.8.x list.
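To illustrate what the aggregate endpoint computes, here is a hypothetical client-side sketch of the same operation: bucket raw samples into fixed intervals, apply one of the six functions, and optionally group by a label. This mimics the API's semantics; it is not Logtide's server code.

```python
from collections import defaultdict

def aggregate(points, interval_s=60, fn="avg", group_by=None):
    """Bucket (unix_ts, value, labels) samples into fixed intervals,
    roughly what GET /api/v1/metrics/aggregate does server-side."""
    buckets = defaultdict(list)
    for ts, value, labels in points:
        bucket = ts - ts % interval_s           # floor to interval start
        series = labels.get(group_by, "") if group_by else ""
        buckets[(series, bucket)].append(value)
    fns = {"avg": lambda v: sum(v) / len(v), "sum": sum, "min": min,
           "max": max, "count": len, "last": lambda v: v[-1]}
    return {key: fns[fn](vals) for key, vals in sorted(buckets.items())}

samples = [
    (0,  100.0, {"region": "eu"}),
    (30, 200.0, {"region": "eu"}),
    (70, 300.0, {"region": "eu"}),
    (10, 50.0,  {"region": "us"}),
]
result = aggregate(samples, interval_s=60, fn="avg", group_by="region")
# -> {("eu", 0): 150.0, ("eu", 60): 300.0, ("us", 0): 50.0}
```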
Service Dependency Graph: See How Your Services Actually Talk to Each Other
Distributed systems are hard to reason about. When something breaks, you want to know: what called what, and what's the blast radius?
The new Service Map gives you a force-directed graph of your microservices, built from two sources simultaneously: span parent-child relationships from your traces, and log co-occurrence analysis via trace_id self-joins. You get a complete picture even if your instrumentation is partial.
The backend runs three parallel queries on each request (span dependencies, per-service health stats from continuous aggregates, and log correlation) and merges them into a single response.
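The merge step can be sketched like this: combine edges from both sources, letting span-derived edges take precedence when the same pair appears twice, then attach a health color to each node using the thresholds described below. This is a hypothetical illustration, not Logtide's actual implementation; the service names and stats are invented.

```python
def merge_service_map(span_edges, log_edges, health):
    """Merge span-derived and log-correlation edges into one graph.
    Span edges win when the same (src, dst) pair appears in both."""
    edges = {}
    for src, dst, calls in log_edges:
        edges[(src, dst)] = {"calls": calls, "source": "log"}
    for src, dst, calls in span_edges:
        edges[(src, dst)] = {"calls": calls, "source": "span"}  # overrides log edge

    nodes = {}
    for src, dst in edges:
        for svc in (src, dst):
            stats = health.get(svc, {"error_rate": 0.0})
            rate = stats["error_rate"]
            # <1% green, 1-10% amber, >10% red
            color = "green" if rate < 0.01 else ("amber" if rate <= 0.10 else "red")
            nodes[svc] = {**stats, "color": color}
    return {"nodes": nodes, "edges": edges}

graph = merge_service_map(
    span_edges=[("api", "auth", 120)],
    log_edges=[("api", "auth", 90), ("api", "billing", 40)],
    health={"api": {"error_rate": 0.002}, "billing": {"error_rate": 0.15}},
)
```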
Health state is color-coded directly on the nodes:
- 🟢 Green: <1% error rate
- 🟡 Amber: 1–10% error rate
- 🔴 Red: >10% error rate
Click any node and a side panel opens with error rate, average and p95 latency, total call count, and the full list of upstream/downstream edges. Solid edges are span-based. Dashed edges are log-correlation-based. You can export the whole thing as PNG and filter by time range.
The Service Map lives inside the Traces page now (more on the navigation restructuring below).
Audit Log: Compliance-Ready Out of the Box
If you're running Logtide for a team, especially in a regulated industry, you've probably needed to answer questions like "who changed this alert rule?" or "when did that user access these logs?" Until now, you had to piece that together yourself.
0.7.0 ships a full audit trail covering four event categories: log access, configuration changes, user management, and data modifications. Every login, logout, API key creation and revocation, project change, role modification, and admin action gets recorded automatically.
Organization owners and admins can access the full log from Organization Settings. Rows are expandable to show full event metadata including resource IDs, user agent, and IP address. You can filter by category and action, and export up to 10,000 rows as CSV.
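As a rough sketch of what the CSV export does with those expandable rows: flatten each event to a fixed set of columns and cap the output at 10,000 rows. The column names and event fields here are illustrative assumptions, not Logtide's actual schema.

```python
import csv
import io

def export_audit_csv(events, limit=10_000):
    """Flatten audit events to CSV, capped at `limit` rows.
    Extra fields on an event (e.g. user_agent) are simply ignored."""
    fields = ["timestamp", "category", "action", "user", "resource_id", "ip"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    for event in events[:limit]:
        writer.writerow(event)
    return buf.getvalue()

events = [
    {"timestamp": "2026-02-26T13:00:00Z", "category": "user_management",
     "action": "api_key.created", "user": "alice", "resource_id": "key_123",
     "ip": "203.0.113.7", "user_agent": "curl/8.5"},
]
csv_text = export_audit_csv(events)
```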
UX Restructuring: Less Navigation Chaos
With metrics added, the flat 11-item sidebar was becoming a problem. We reorganized everything into three logical sections:
- Observe: Logs, Traces, Metrics, Errors
- Detect: Alerts, Security
- Manage: Projects, Settings
The Security page now has its own sub-navigation (Dashboard, Rules, Incidents), and the Alerts page is simplified to just Alert Rules and History. Settings got the same treatment, with sections for General, Security & Data, Notifications, Team, and Administration.
The command palette is updated too: all nine main pages now have keyboard shortcuts (g d for dashboard, g s for search, g t for traces, g m for metrics, etc.).
Notable Fixes
A few fixes worth calling out explicitly:
Batch ingestion flexibility: The POST /api/v1/ingest endpoint now accepts direct arrays and wrapped array formats in addition to the standard {"logs": [...]} envelope. This makes it work out of the box with Vector's codec: json configuration and Fluent Bit without custom transformations.
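The three accepted shapes can be sketched as a small normalization step, roughly what the ingest endpoint now does before processing. This is an illustrative guess at the logic, not the actual handler code.

```python
def normalize_ingest_payload(payload):
    """Accept a direct array, the standard {"logs": [...]} envelope,
    or an array wrapped under some other single key."""
    if isinstance(payload, list):
        return payload                      # direct array (e.g. Vector's codec: json)
    if isinstance(payload, dict):
        if isinstance(payload.get("logs"), list):
            return payload["logs"]          # standard envelope
        if len(payload) == 1:
            (value,) = payload.values()
            if isinstance(value, list):
                return value                # array wrapped under another key
    raise ValueError("unsupported payload shape")
```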
ClickHouse timeline gaps: The admin dashboard was showing periodic zero-drops in the activity chart. Root cause: ClickHouse returned ISO timestamps (2026-02-26T13:00:00.000Z) while PostgreSQL returned text format (2026-02-26 13:00:00+00). The merge was silently failing. All bucket keys are now normalized to ISO format, and all 24 hourly slots are pre-filled to eliminate gaps regardless of actual data.
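The fix boils down to two steps, sketched here as a hypothetical minimal version: normalize both timestamp formats to one canonical ISO bucket key, then pre-fill all 24 hourly slots so missing buckets render as zero instead of a gap.

```python
from datetime import datetime, timedelta, timezone

def to_iso_bucket(ts: str) -> str:
    """Normalize ClickHouse ("2026-02-26T13:00:00.000Z") and PostgreSQL
    ("2026-02-26 13:00:00+00") timestamps to one ISO hourly bucket key."""
    ts = ts.replace(" ", "T").replace("Z", "+00:00")
    if ts.endswith("+00"):
        ts += ":00"                          # "+00" -> "+00:00" for fromisoformat
    dt = datetime.fromisoformat(ts).astimezone(timezone.utc)
    return dt.replace(minute=0, second=0, microsecond=0).isoformat()

def prefill_hours(buckets: dict, end: datetime) -> dict:
    """Return the last 24 hourly slots ending at `end`, defaulting to 0."""
    end = end.replace(minute=0, second=0, microsecond=0)
    series = {}
    for i in range(23, -1, -1):
        key = (end - timedelta(hours=i)).isoformat()
        series[key] = buckets.get(key, 0)
    return series
```

With this, `to_iso_bucket` maps both backend formats to the same key, so the merge can no longer silently drop buckets.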
Upgrading
No breaking changes. No schema migrations to run manually.
docker compose pull
docker compose up -d
If you're on the Redis-free setup:
docker compose -f docker-compose.yml pull
docker compose -f docker-compose.yml up -d
What's Next
With the core observability stack complete, 0.8.x focuses on making it useful:
- Metrics dashboard: full visualization layer for the metrics data we're now ingesting, with charts, multi-series support, and exemplar drill-through
- Custom configurable dashboards: build your own layouts with the widgets you actually care about
- Log parsing pipelines: structured extraction for legacy systems and syslog sources
- Webhook receivers: ingest events from external services (GitHub, PagerDuty, etc.) without writing custom code
- Proactive health monitoring: status pages for your services, built from the data already in Logtide
We're also continuing work on @logtide/reservoir: MongoDB support is in progress for teams already invested in that ecosystem.
Full Changelog: v0.6.0...v0.7.0
If you're using Logtide, we'd love to hear how it's going. Open an issue, start a discussion, or just drop a ⭐ if it's been useful.