Polliog

Building Real-Time Log Streaming with PostgreSQL LISTEN/NOTIFY

I needed tail -f for a distributed system.

The logs were in PostgreSQL. Users wanted to see them appear in real-time - the instant they hit the database.

My options:

  • Polling: Simple but wasteful (thousands of queries/second for nothing)
  • WebSockets: Overkill (bidirectional when I only need server → client)
  • Redis Pub/Sub: Another service to manage
  • Kafka: Are you kidding me

Then I remembered: PostgreSQL has built-in pub/sub.

LISTEN and NOTIFY. Been there since Postgres 7.

Here's how we built live log streaming for Logtide v0.5.0, handling 1000+ concurrent connections with sub-50ms median latency using just PostgreSQL and Server-Sent Events.

The Problem: tail -f for Web Apps

What users want:

# This, but in a browser
tail -f /var/log/app.log

New logs appear instantly. No refresh button. No polling.

What we're building:

The Live Tail feature in Logtide, a log management platform.

Users open a browser tab. Logs stream in real-time as they're inserted into PostgreSQL. Multiple users can watch the same log stream. Works with filters (show me only errors from service X).

Constraints:

  • Already using PostgreSQL (no new dependencies)
  • 1000+ concurrent watchers (realistic for dashboards)
  • Sub-second latency (feels instant)
  • Works behind corporate proxies (no WebSocket headaches)

Why PostgreSQL LISTEN/NOTIFY

PostgreSQL has had pub/sub since 2000.

How it works:

-- Client 1: Start listening
LISTEN logs_channel;

-- Client 2: Insert a log + notify
INSERT INTO logs (message, level) VALUES ('Server started', 'info');
NOTIFY logs_channel, '{"id": 12345, "message": "Server started"}';

-- Client 1: Receives notification immediately

That's it. No message broker. No separate service.

Why it's perfect for this:

  1. Already there: You're using Postgres anyway
  2. Low latency: < 10ms typically
  3. Simple: No complex setup
  4. Reliable: If Postgres is up, notifications work

Limitations (be honest):

  • Payload limit: 8000 bytes per notification
  • No persistence: Messages are ephemeral (if no one's listening, it's gone)
  • Scales to ~1000 listeners: Not millions (but that's fine for us)

For live logs, these are all acceptable.

Architecture Overview

The flow:

1. Log inserted into PostgreSQL
2. Database trigger fires NOTIFY
3. Backend receives notification (keeps connection open)
4. Backend pushes to frontend via Server-Sent Events (SSE)
5. Frontend updates UI instantly

Why Server-Sent Events (not WebSockets)?

SSE is simpler:

  • Just HTTP (works everywhere)
  • One-way (perfect for logs)
  • Auto-reconnect (built into browser)
  • Easy to debug (curl works)
  • HTTP/2 makes it efficient
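The "easy to debug" point is concrete: the SSE wire format is just lines of text. Here's a minimal parser sketch (the function name is mine, and it skips parts of the spec such as `event:` and `id:` fields) showing how little the browser's EventSource actually has to do:

```typescript
// Minimal SSE frame parser: events are separated by a blank line,
// data lines carry a "data: " prefix, comment lines start with ":".
function parseSseChunk(chunk: string): string[] {
  return chunk
    .split('\n\n')                                   // events end with a double newline
    .map(event =>
      event
        .split('\n')
        .filter(line => line.startsWith('data: '))   // drop comments/heartbeats
        .map(line => line.slice('data: '.length))
        .join('\n')                                  // multi-line data joins with \n
    )
    .filter(data => data.length > 0);
}

// Example: one log event plus a heartbeat comment
const raw = 'data: {"id":1,"message":"hi"}\n\n: heartbeat\n\n';
console.log(parseSseChunk(raw)); // [ '{"id":1,"message":"hi"}' ]
```

A real consumer would also buffer partial chunks across reads, but the framing itself is this simple.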

Implementation: Step by Step

Step 1: Database Trigger

When a log is inserted, notify all listeners.

-- Function that fires on INSERT
CREATE OR REPLACE FUNCTION notify_new_log()
RETURNS TRIGGER AS $$
DECLARE
  payload JSON;
BEGIN
  -- Build JSON payload (keep it small, 8KB limit)
  payload := json_build_object(
    'id', NEW.id,
    'timestamp', NEW.timestamp,
    'level', NEW.level,
    'message', LEFT(NEW.message, 200),  -- Truncate to save space
    'service', NEW.service
  );

  -- Notify on channel: logs_{organization_id}
  -- This lets us filter per organization
  PERFORM pg_notify('logs_' || NEW.organization_id, payload::text);

  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Attach trigger to logs table
CREATE TRIGGER log_insert_trigger
AFTER INSERT ON logs
FOR EACH ROW
EXECUTE FUNCTION notify_new_log();

Key decisions:

  • Truncate message: 8KB limit, send preview only
  • Per-organization channels: logs_1, logs_2, etc. (isolation)
  • AFTER INSERT: the trigger fires inside the transaction, but Postgres delivers NOTIFY messages only after the transaction commits (no false notifications on rollback)
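One thing the channel scheme implies for the backend: the organization id ends up inside a channel name that is later interpolated into a `LISTEN` statement, so anything derived from request input should be validated first. A minimal guard, as a sketch (the helper name is mine):

```typescript
// Channel names get interpolated into `LISTEN ${channel}`, so reject
// anything that isn't a plain numeric organization id.
function logsChannel(organizationId: string): string {
  if (!/^\d+$/.test(organizationId)) {
    throw new Error(`invalid organization id: ${organizationId}`);
  }
  return `logs_${organizationId}`;
}

console.log(logsChannel('42')); // "logs_42"
```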

Step 2: Backend Listener (Node.js)

Keep a persistent connection to Postgres. Listen for notifications.

// lib/pg-listener.ts
import pg from 'pg';

export class PgListener {
  private client: pg.Client;
  private handlers = new Map<string, Set<(payload: any) => void>>();

  constructor(connectionString: string) {
    this.client = new pg.Client({ connectionString });
  }

  async connect() {
    await this.client.connect();

    // Handle notifications from Postgres
    this.client.on('notification', (msg) => {
      const channel = msg.channel;
      // Guard against missing/malformed payloads (don't crash the listener)
      if (!msg.payload) return;
      let payload: any;
      try {
        payload = JSON.parse(msg.payload);
      } catch {
        console.error('Malformed notification payload:', msg.payload);
        return;
      }

      // Call all registered handlers for this channel
      const channelHandlers = this.handlers.get(channel);
      if (channelHandlers) {
        channelHandlers.forEach(handler => handler(payload));
      }
    });
  }

  async listen(channel: string, handler: (payload: any) => void) {
    // Start listening. The channel name is interpolated into SQL,
    // so only pass trusted, validated channel names here.
    await this.client.query(`LISTEN ${channel}`);

    // Register handler
    if (!this.handlers.has(channel)) {
      this.handlers.set(channel, new Set());
    }
    this.handlers.get(channel)!.add(handler);
  }

  async unlisten(channel: string, handler: (payload: any) => void) {
    const channelHandlers = this.handlers.get(channel);
    if (channelHandlers) {
      channelHandlers.delete(handler);

      // If no more handlers, stop listening
      if (channelHandlers.size === 0) {
        await this.client.query(`UNLISTEN ${channel}`);
        this.handlers.delete(channel);
      }
    }
  }

  async disconnect() {
    await this.client.end();
  }
}

// Singleton instance (assumes DATABASE_URL is set)
export const pgListener = new PgListener(process.env.DATABASE_URL!);

Important: Use a dedicated connection for LISTEN. Don't mix with query pool.

Step 3: SSE Endpoint (Fastify)

Expose an SSE endpoint. Frontend connects, backend pushes events.

// routes/logs/stream.ts
import type { FastifyRequest, FastifyReply } from 'fastify';
import { pgListener } from '../../lib/pg-listener';

interface StreamQuery {
  organizationId: string;
}

export async function streamLogs(
  request: FastifyRequest<{ Querystring: StreamQuery }>,
  reply: FastifyReply
) {
  const { organizationId } = request.query;
  const channel = `logs_${organizationId}`;

  // Setup SSE headers
  reply.raw.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
    'X-Accel-Buffering': 'no', // Nginx compatibility
  });

  // Send initial comment (keeps connection alive)
  reply.raw.write(': connected\n\n');

  // Handler for new logs
  const handler = (payload: any) => {
    // SSE format: data: {json}\n\n
    reply.raw.write(`data: ${JSON.stringify(payload)}\n\n`);
  };

  // Start listening
  await pgListener.listen(channel, handler);

  // Heartbeat (every 30s, keeps connection alive)
  const heartbeat = setInterval(() => {
    reply.raw.write(': heartbeat\n\n');
  }, 30000);

  // Cleanup on disconnect
  request.raw.on('close', async () => {
    clearInterval(heartbeat);
    await pgListener.unlisten(channel, handler);
  });
}

SSE format explained:

data: {"id": 123, "message": "Hello"}\n\n
  • data: prefix required
  • JSON payload
  • Double newline terminates event

Heartbeat is critical: Keeps proxy/firewall from killing connection.
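The endpoint's framing can be captured in two tiny helpers, shown here as a sketch (the names are mine, not Logtide's):

```typescript
// One SSE event frame: "data: " prefix, JSON payload, blank-line terminator.
function sseFrame(payload: unknown): string {
  return `data: ${JSON.stringify(payload)}\n\n`;
}

// Comment frames start with ":" — EventSource ignores them,
// which is exactly what makes them useful as heartbeats.
function sseComment(text: string): string {
  return `: ${text}\n\n`;
}
```

The heartbeat in Step 3 is just `sseComment('heartbeat')` written to the response every 30 seconds.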

Step 4: Frontend (SvelteKit)

Connect to SSE endpoint. Render logs as they arrive.

// routes/logs/live/+page.svelte
<script lang="ts">
  import { onMount, onDestroy } from 'svelte';

  let logs: Log[] = $state([]);
  let eventSource: EventSource | null = $state(null);
  let status: 'connecting' | 'connected' | 'disconnected' = $state('connecting');

  onMount(() => {
    const orgId = '1'; // From user session

    // Connect to SSE endpoint
    eventSource = new EventSource(`/api/logs/stream?organizationId=${orgId}`);

    eventSource.onopen = () => {
      status = 'connected';
      console.log('Live tail connected');
    };

    eventSource.onmessage = (event) => {
      const log = JSON.parse(event.data);

      // Add to top of list
      logs = [log, ...logs];

      // Keep only last 500 logs (prevent memory leak)
      if (logs.length > 500) {
        logs = logs.slice(0, 500);
      }
    };

    eventSource.onerror = () => {
      status = 'disconnected';
      console.error('Live tail disconnected, will retry...');
      // Browser auto-reconnects by default
    };
  });

  onDestroy(() => {
    eventSource?.close();
  });
</script>

<div class="live-tail">
  <div class="status">
    {#if status === 'connected'}
      <span class="dot green"></span> Live
    {:else if status === 'connecting'}
      <span class="dot yellow"></span> Connecting...
    {:else}
      <span class="dot red"></span> Disconnected
    {/if}
  </div>

  <div class="logs">
    {#each logs as log (log.id)}
      <div class="log-entry" class:error={log.level === 'error'}>
        <span class="timestamp">{log.timestamp}</span>
        <span class="level">{log.level}</span>
        <span class="message">{log.message}</span>
      </div>
    {/each}
  </div>
</div>

Browser handles reconnection automatically. If the connection drops, EventSource retries on its own; the default retry interval is browser-defined (typically a few seconds), and the server can tune it by sending a `retry:` field.
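The newest-first, capped list in the `onmessage` handler is a pure function in disguise; extracting it makes the cap testable on its own (a sketch — `appendCapped` is my name for it):

```typescript
// Prepend a new log and cap the buffer, mirroring the component above.
// Returning a fresh array (instead of mutating) keeps reactivity simple.
function appendCapped<T>(buffer: T[], item: T, max = 500): T[] {
  const next = [item, ...buffer];
  return next.length > max ? next.slice(0, max) : next;
}

let logs: number[] = [];
for (let i = 0; i < 3; i++) logs = appendCapped(logs, i, 2);
console.log(logs); // [ 2, 1 ]
```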

Real Performance Numbers (Logtide v0.5.0)

We tested this in production. Here's what we measured:

Latency (log inserted → browser displays):

  • p50: 45ms
  • p95: 120ms
  • p99: 280ms

Concurrent connections:

  • Tested: 1000 simultaneous live tail sessions
  • Server: Single Fastify instance (4 vCPU, 8GB RAM)
  • CPU usage: ~30% (mostly JSON serialization)
  • Memory: ~2GB (connection overhead)

Database impact:

  • NOTIFY overhead: < 1ms per insert
  • Connection count: +1 for the PgListener (not per client)
  • No query load (triggers handle notifications)

Comparison to polling:

Polling (5-second interval, 1000 clients):

  • Queries: 200 queries/second just to check for new logs
  • Database CPU: 15-20% baseline
  • Network: Wasted bandwidth on "no new logs" responses

LISTEN/NOTIFY:

  • Queries: 0 (events are pushed)
  • Database CPU: < 1%
  • Network: Only when logs actually arrive

Winner: LISTEN/NOTIFY by a mile.

Scaling Considerations

What we learned running this in production:

Connection Limits

PostgreSQL default: max_connections = 100

Our setup:

  • Application pool: 20 connections (queries)
  • PgListener: 1 connection (notifications)
  • Headroom: 79 connections for everything else

If you need more listeners: Use a single listener connection, fan out in-memory to clients.

Proxy/Firewall Issues

Some proxies kill long-lived connections.

Solutions:

  • Heartbeat: Send comment every 30s (: heartbeat\n\n)
  • Nginx config: proxy_buffering off;
  • Client retry: EventSource auto-reconnects

We've seen this work behind corporate firewalls without issues.

Channel Naming Strategy

Bad: LISTEN logs (everyone gets everything)

Good: LISTEN logs_{organizationId} (isolated per customer)

Why: Postgres broadcasts to ALL listeners on a channel. Filtering happens in your app.

Payload Size Limit

8KB max per NOTIFY.

Our approach:

  • Send minimal data (ID, timestamp, level, truncated message)
  • Frontend fetches full log if user clicks (separate query)
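For a belt-and-suspenders check on the application side, the byte budget can be enforced before calling `pg_notify`. A sketch assuming Node's `Buffer` for byte-accurate lengths (the helper name and constant are mine; it only shrinks the message, so the fixed fields must fit on their own):

```typescript
// NOTIFY payloads are capped at 8000 bytes; truncate the message until
// the serialized JSON fits under the limit.
const NOTIFY_LIMIT = 8000;

function buildNotifyPayload(
  fields: { id: number; level: string; service: string },
  message: string,
  limit = NOTIFY_LIMIT
): string {
  let truncated = message;
  let payload = JSON.stringify({ ...fields, message: truncated });
  // Halve the message until the whole payload fits (byte length, not chars)
  while (Buffer.byteLength(payload, 'utf8') > limit && truncated.length > 0) {
    truncated = truncated.slice(0, Math.floor(truncated.length / 2));
    payload = JSON.stringify({ ...fields, message: truncated });
  }
  return payload;
}
```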

Alternative: Skip the Trigger

If you control the insert logic, skip the trigger. Notify directly:

// In your log ingestion code
const { rows } = await db.query(`
  INSERT INTO logs (message, level, service)
  VALUES ($1, $2, $3)
  RETURNING id, timestamp
`, [message, level, service]);
const { id, timestamp } = rows[0];

// Manually notify (pg_notify takes the channel as a parameter,
// so no string interpolation into SQL is needed)
await db.query(
  'SELECT pg_notify($1, $2)',
  [`logs_${organizationId}`, JSON.stringify({ id, timestamp, level, message })]
);

Trade-off: More control, but unless you run the INSERT and pg_notify in the same transaction, the insert can succeed while the notification is lost.

When NOT to Use This

Be honest about limitations:

Don't use LISTEN/NOTIFY if:

  • Need guaranteed delivery (notifications are ephemeral)
  • Messages > 8KB (payload limit)
  • Need >10K concurrent listeners (Postgres won't scale)
  • Cross-database notifications (Postgres-only)

Use Kafka/Redis if:

  • Multi-region deployment
  • Message persistence required
  • Complex routing logic
  • Massive scale (100K+ connections)

For most apps? LISTEN/NOTIFY is plenty.

Production Gotchas

1. Forgotten UNLISTEN

Memory leak:

// Bad: Never unlistens
await pgListener.listen(channel, handler);

Good: Clean up on disconnect.

2. Blocking the listener connection

// Bad: Runs a slow query on the listener's connection
const handler = async (payload) => {
  await listenerClient.query('SELECT * FROM huge_table'); // Blocks notifications!
};

Good: Use separate connection for queries.

3. No error handling

// Bad: Crash on JSON parse error
const payload = JSON.parse(msg.payload);

Good: Try/catch everything.
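For the notification path, "try/catch everything" can be as small as this (a sketch; the helper name is mine):

```typescript
// Never let one malformed payload crash the listener: parse defensively,
// log the bad input, and return null instead of throwing.
function safeJsonParse(raw: string | undefined): unknown | null {
  if (raw === undefined) return null;
  try {
    return JSON.parse(raw);
  } catch {
    console.error('Dropping malformed notification payload:', raw);
    return null;
  }
}

console.log(safeJsonParse('{"id":1}')); // { id: 1 }
console.log(safeJsonParse('not json')); // null
```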

Why This Matters for Logtide

Before v0.5.0: No live tail. Users refreshed manually.

After v0.5.0:

  • Live tail on every log view
  • 1000+ users can watch same stream
  • Sub-50ms latency (p50)
  • Zero extra infrastructure

User feedback:

"Finally feels like actual tail -f" - Beta tester

Cost impact:

  • No Redis/Kafka license
  • No extra servers
  • Same PostgreSQL we already had

Development time:

  • Trigger: 30 minutes
  • Backend listener: 2 hours
  • Frontend: 3 hours
  • Testing/refinement: 1 day

Total: ~2 days work for a killer feature.

Code You Can Copy

Complete working example: https://github.com/logtide-dev/logtide

Look for:

  • /backend/src/lib/pg-listener.ts - Listener class
  • /backend/src/routes/logs/stream.ts - SSE endpoint
  • /backend/migrations/001_notify_trigger.sql - Database trigger
  • /frontend/src/routes/logs/live/+page.svelte - Frontend

It's AGPLv3. Use it. Learn from it. Build on it.

Try It Yourself

Spin up Logtide locally:

git clone https://github.com/logtide-dev/logtide
cd logtide
docker compose up

Visit http://localhost:3000/logs/live

Send some logs:

curl -X POST http://localhost:8080/api/logs \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello live tail", "level": "info"}'

Watch them appear instantly.

The Takeaway

You don't need Kafka for real-time.

You don't need Redis for pub/sub.

PostgreSQL has had this since 2000.

For most applications - internal dashboards, admin panels, moderate-scale SaaS - LISTEN/NOTIFY is more than enough.

Simpler stack. Fewer moving parts. One less thing to break.


Building real-time features with Postgres? How are you handling pub/sub? Drop your setup below.

This article describes the live tail feature shipping in Logtide v0.5.0 (currently in testing). Real code, real performance numbers, real production experience.
