Polliog
Logtide 0.9.0: Custom Dashboards, Health Monitoring, and Log Parsing Pipelines

Logtide 0.9.0 is out today. At the end of the 0.8.0 article we listed three things we wanted to tackle next: a customizable dashboard system to replace the fixed layout that had shipped since day one, proactive health monitoring so Logtide could tell you when something was down rather than waiting for a log to show up, and structured parsing pipelines for teams whose logs don't arrive pre-formatted. All three ship in this release.

If you're new here: Logtide is an open-source log management and SIEM platform built for European SMBs. Privacy-first, self-hostable, GDPR-compliant. No Elastic cluster to babysit: just Docker Compose and the storage engine of your choice.


What's New

📊 Custom Dashboards: 9 Panel Types, Drag-to-Resize, and YAML Export

The fixed dashboard that shipped in 0.1.0 had a good run. It was a reasonable starting point (4 stat cards, log volume, top services, top error messages), but it served everyone the same view regardless of what they actually cared about. 0.9.0 replaces it with a fully composable dashboard system.

Dashboards are org-scoped with an optional is_personal flag for views you don't want to share with the whole team. The Default dashboard is auto-created per organization and protected from deletion. A header dropdown lets you switch, create, clone, import, and export dashboards without leaving the page.

9 panel types cover every data source in Logtide:

  • Time series and single stat for general log-based metrics
  • Top-N table for ranking services, endpoints, or users by any dimension
  • Live log stream for a real-time tail of filtered log output
  • Alert status for a current-state view of your active alert rules
  • Metric chart and metric stat for OTLP metrics with avg/sum/min/max/count/last/p50/p95/p99 aggregations
  • Trace latency for p50/p95/p99 directly from span data
  • Detection events for SIEM incidents grouped by severity
  • Monitor status for uptime percentage and response time from the new monitoring system (more on that below)

Layout is a responsive 12-column grid. Panels snap to grid units when resized via the bottom-right drag handle. The grid collapses to 6 columns on tablet and 1 column on mobile; stored widths are always in the 12-col reference and scale proportionally, so a panel that takes up half the desktop doesn't become a sliver on a small screen.
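The proportional scaling described above can be sketched as follows (a hypothetical helper, not Logtide's actual code):

```typescript
// Sketch of proportional panel-width scaling (assumed helper, not Logtide's code).
// Stored widths are always in the 12-column reference; at render time they are
// scaled to the active breakpoint's column count and clamped to at least 1 unit.
function scaleWidth(storedWidth: number, targetCols: number, refCols = 12): number {
  return Math.max(1, Math.round((storedWidth / refCols) * targetCols));
}

// A half-width desktop panel (6 of 12) stays half-width on tablet (3 of 6)
// and fills the single mobile column.
scaleWidth(6, 6); // 3
scaleWidth(6, 1); // 1
scaleWidth(2, 6); // 1
```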

Inline edit mode keeps all pending changes in a local snapshot. Toggle edit, rearrange, resize, and configure as many panels as you want. Hit Save for a single atomic write, or Cancel to discard everything. There's no separate edit page.

YAML import/export lets you version-control dashboards alongside your infrastructure code. Import regenerates panel IDs and uses JSON_SCHEMA validation to block prototype pollution from crafted inputs. The schema is versioned (schema_version: 1) and ships with a migration framework in @logtide/shared: each version defines a MigrationFn, and migrateDashboard walks the chain on every read. Future schema changes will be applied automatically.
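The migration chain can be illustrated with a small sketch. The type and function shapes here are assumptions for illustration, not @logtide/shared's actual definitions:

```typescript
// Sketch of the versioned-migration idea (shapes are assumptions, not the
// actual @logtide/shared code). Each version defines a MigrationFn that
// upgrades a document one version; migrateDashboard walks the chain on read.
type Dashboard = { schema_version: number; [key: string]: unknown };
type MigrationFn = (d: Dashboard) => Dashboard;

// migrations[n] upgrades a schema_version n document to n + 1.
const migrations: Record<number, MigrationFn> = {
  1: (d) => ({ ...d, schema_version: 2, panels: d.panels ?? [] }),
};

function migrateDashboard(d: Dashboard, target: number): Dashboard {
  let current = d;
  while (current.schema_version < target) {
    const step = migrations[current.schema_version];
    if (!step) throw new Error(`No migration from v${current.schema_version}`);
    current = step(current);
  }
  return current;
}
```

Because the chain runs on every read, a dashboard exported under an old schema version stays importable after future schema changes.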

Panel data fetching is batched: a single POST /:id/panels/data round-trip fetches all panel data via Promise.allSettled. An error in one panel doesn't fail the rest of the dashboard.
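The isolation pattern is worth spelling out: Promise.allSettled never rejects, so each panel's outcome can be reported independently. A minimal sketch (panel and fetcher shapes are hypothetical):

```typescript
// Sketch of per-panel error isolation (panel/fetcher shapes are hypothetical,
// not Logtide's actual handler). One failing panel cannot fail the batch.
type PanelResult =
  | { panelId: string; status: "ok"; data: unknown }
  | { panelId: string; status: "error"; message: string };

async function fetchAllPanels(
  panels: { id: string; fetch: () => Promise<unknown> }[]
): Promise<PanelResult[]> {
  // allSettled resolves with one fulfilled/rejected record per panel.
  const settled = await Promise.allSettled(panels.map((p) => p.fetch()));
  return settled.map((result, i) =>
    result.status === "fulfilled"
      ? { panelId: panels[i].id, status: "ok", data: result.value }
      : { panelId: panels[i].id, status: "error", message: String(result.reason) }
  );
}
```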

Cross-org isolation is enforced at the data layer: every panel fetch verifies that config.projectId belongs to the requesting org. A crafted YAML import pointing at another org's project ID will return empty data, not that org's data.

The panel registry architecture is worth a mention for contributors. Adding a new panel type touches exactly 6 files: shared types, backend Zod schema, backend fetcher, frontend panel component, frontend config form, and a single registry entry. The renderer, container, store, and routes never change.

Existing users will see no visual change on first login: the auto-created Default dashboard replicates the previous fixed layout exactly.


🖥️ Service Health Monitoring and Public Status Pages

Logtide has always been reactive: something breaks, logs appear, you find out. 0.9.0 adds the proactive layer.

Three monitor types cover the common cases. HTTP/HTTPS monitors are fully configurable: method, expected status code, custom headers, and a body assertion that accepts either a contains check or a regex. TCP monitors ping a host:port pair. Heartbeat monitors flip the model: instead of Logtide reaching out, your service sends a POST /api/v1/monitors/:id/heartbeat on a schedule, and Logtide fires an incident when the expected ping doesn't arrive within the grace window.
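For a heartbeat monitor, the sending side can be as simple as a cron entry. This is a hypothetical example; the host and `<monitor-id>` are placeholders for your own values:

```
# Hypothetical crontab entry: report liveness every minute.
# Replace the host and <monitor-id> with your own values.
* * * * * curl -fsS -X POST https://logtide.example.com/api/v1/monitors/<monitor-id>/heartbeat
```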

Worker execution follows the same BullMQ pattern used throughout the codebase. A worker picks up all due monitors every 30 seconds and runs them in batches of 20 concurrent checks via Promise.allSettled. Results flow into the monitor_results hypertable with 7-day compression and 30-day retention. A monitor_uptime_daily continuous aggregate refreshed hourly powers the uptime percentage displays without hitting raw data on every page load.
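The batching logic amounts to slicing the due monitors into fixed-size groups and settling each group concurrently. A sketch under assumed names (not the actual worker code):

```typescript
// Sketch of batched monitor execution (names are assumptions, not the actual
// worker): run all due checks in groups of `batchSize` concurrent calls.
// Batches run sequentially, so at most `batchSize` checks are in flight.
async function runInBatches<T>(
  items: T[],
  run: (item: T) => Promise<void>,
  batchSize = 20
): Promise<void> {
  for (let i = 0; i < items.length; i += batchSize) {
    // allSettled means one failed check never aborts the rest of the batch.
    await Promise.allSettled(items.slice(i, i + batchSize).map(run));
  }
}
```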

Incident creation is automatic and integrated with the existing SIEM layer. When consecutive failures cross the configurable threshold, an incident is created with source: 'monitor' and linked via monitor_id. Notifications go through the same email and webhook channels already configured for alert rules, so there's no separate notification setup. Auto-resolution fires when the next check succeeds. An atomic WHERE incident_id IS NULL guard prevents duplicate incidents under concurrent check runs.

Severity is configurable per monitor (critical, high, medium, low, informational) rather than hardcoded. A flaky dev endpoint and a production payment service don't need to page with the same urgency.

Public status pages (/status/:projectSlug) are Uptime Kuma-inspired: a 45-day heartbeat bar grid, per-monitor uptime badges, an overall status banner, and a light/dark mode toggle. Visibility is configured per project: disabled by default, with public, password-protected, and org-members-only options.

Scheduled maintenances let you define windows with start and end times. Active maintenances suppress monitor incident creation so a planned deployment doesn't trigger pages, and display a maintenance banner on the status page so your users know what's happening.

Manual status incidents are independent from SIEM incidents. You can publish communications with an Investigating/Identified/Monitoring/Resolved progression and a full update timeline, useful for communicating with users about an outage regardless of whether it was auto-detected.

The monitoring dashboard (/dashboard/monitoring) has a project selector, create/edit/delete forms with client-side validation, a detail page with an uptime chart and recent checks list, and a one-click heartbeat URL copy for the heartbeat monitor type.


🔩 Log Parsing and Enrichment Pipelines

Structured logging is a best practice, but not every log source you connect will cooperate. Nginx access logs, syslog output from legacy systems, plain text from third-party services: these arrive as unstructured strings. Previously you'd parse them in your collector config or accept that they'd be stored as blobs. 0.9.0 gives you a better option.

Pipelines run as BullMQ background jobs after ingestion acknowledgment. Ingestion latency is unchanged: logs are accepted and queued immediately, and parsing happens asynchronously.

Five built-in parsers cover the common formats: nginx (combined log format), apache (identical pattern), syslog (RFC 3164 and RFC 5424), logfmt, and JSON message body.

Custom grok patterns use %{PATTERN:field} and %{PATTERN:field:type} syntax, with 22 named built-ins (IPV4, WORD, NOTSPACE, NUMBER, POSINT, DATA, GREEDYDATA, QUOTEDSTRING, METHOD, URIPATH, HTTPDATE, and more) and optional type coercion (:int, :float). If your log format is unusual enough that none of the built-in parsers cover it, grok handles the rest.
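To make the syntax concrete, here is a toy grok-style extractor. This is a sketch of the idea, not Logtide's parser, and it implements only a handful of the built-ins:

```typescript
// Minimal grok-style extractor (a sketch, not Logtide's parser): supports
// %{PATTERN:field} and %{PATTERN:field:type} with a few example built-ins.
const BUILTINS: Record<string, string> = {
  IPV4: "(?:\\d{1,3}\\.){3}\\d{1,3}",
  WORD: "\\w+",
  NUMBER: "-?\\d+(?:\\.\\d+)?",
  GREEDYDATA: ".*",
};

function grokExtract(pattern: string, line: string): Record<string, unknown> | null {
  const fields: { name: string; type?: string }[] = [];
  // Translate each %{PATTERN:field(:type)?} into a regex capture group.
  const source = pattern.replace(
    /%\{(\w+):(\w+)(?::(\w+))?\}/g,
    (_m, builtin: string, name: string, type?: string) => {
      fields.push({ name, type });
      return `(${BUILTINS[builtin] ?? ".*?"})`;
    }
  );
  const match = new RegExp(`^${source}$`).exec(line);
  if (!match) return null;
  const out: Record<string, unknown> = {};
  fields.forEach((f, i) => {
    const raw = match[i + 1];
    // Optional type coercion: :int and :float, otherwise keep the string.
    out[f.name] = f.type === "int" ? parseInt(raw, 10)
      : f.type === "float" ? parseFloat(raw) : raw;
  });
  return out;
}
```

For example, `%{IPV4:ip} %{WORD:method} %{NUMBER:status:int}` applied to `10.0.0.1 GET 200` yields an `ip` string, a `method` string, and a numeric `status`.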

GeoIP enrichment uses the embedded MaxMind GeoLite2 database. Point it at any field containing an IP address and get country, city, coordinates, timezone, and ISP added to the log record automatically.

Scope is flexible: a pipeline can target a specific project or apply org-wide. Project-specific pipelines take priority over org-wide ones when both match. An in-memory cache in getForProject holds the resolved pipeline per project for 5 minutes, invalidated automatically on create/update/delete.

Pipeline preview lets you test any combination of steps against a sample log message before saving. The UI shows per-step extracted fields and the final merged result side by side, so you can iterate on the configuration without committing it.

YAML import/export follows the same pattern as dashboards: name, description, enabled, and steps fields; re-importing the same pipeline for the same scope performs an upsert rather than creating a duplicate.
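An exported pipeline might look roughly like this. Only the name, description, enabled, and steps fields are confirmed above; the step shapes here are illustrative:

```yaml
# Hypothetical pipeline export: top-level fields match the documented schema,
# but the step shapes are an assumption for illustration.
name: nginx-access
description: Parse nginx access logs and geolocate client IPs
enabled: true
steps:
  - type: parser
    parser: nginx
  - type: geoip
    field: remote_addr
```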

The step builder in the settings UI (/dashboard/settings/pipelines) lets you add, reorder, and configure steps interactively, with per-type configuration forms for parser selection, grok pattern input, and GeoIP field targeting.


Everything Else Worth Knowing

Monitoring in the sidebar: the monitoring section appears under "Detect" alongside Alerts and Security. No extra navigation to find it.

Dashboard switcher in the header: replaces the previous single fixed entry point with a dropdown that handles create, delete, import, and export without leaving the page.

failureThreshold default aligned: the frontend form default was 3; the backend default was 2. They now match.

Project slugs: auto-generated from project name on creation, unique per org, backfilled for existing projects via migration. The status page route (/status/:projectSlug) uses these.
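Slug generation for this kind of route typically looks like the sketch below. The exact rules Logtide applies are an assumption here; only "auto-generated from project name" and "unique per org" are confirmed:

```typescript
// Sketch of slug generation (assumed behavior: lowercase, alphanumerics and
// hyphens, per-org uniqueness via a numeric suffix; Logtide's rules may differ).
function slugify(name: string): string {
  return name
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of other characters into "-"
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
}

function uniqueSlug(name: string, takenInOrg: Set<string>): string {
  const base = slugify(name) || "project";
  let slug = base;
  for (let n = 2; takenInOrg.has(slug); n++) slug = `${base}-${n}`;
  return slug;
}
```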


Upgrading

```shell
docker compose pull
docker compose up -d
```

Migrations run automatically on startup. No manual steps required.


What's Next

The roadmap toward v1.0 has a few clear remaining pieces:

  • Digest reports (#154): scheduled email summaries of log volume, top errors, and active incidents, useful for teams that don't live in the dashboard
  • Webhook receivers (#154): accept inbound webhooks from external services (PagerDuty, GitHub, Stripe, etc.) and normalize them into Logtide log events without a collector in the middle

v1.0 is the Beta milestone. We're not jumping straight to a public Beta declaration; we want the announcement to mean something. These issue groups are the remaining distance.


Full Changelog: v0.8.0...v0.9.0

If you're using Logtide, open an issue, start a discussion, or drop a ⭐ if it's been useful.
