Sivaram
Building a Real-Time VWAP Dashboard with Bun, Binance WebSockets and SQLite

A weekend project that escalated into a full deep dive into real-time systems.

Live demo:

https://realtime-vwap-dashboard.sivaramp.com/

I started this as a tiny weekend thing.

Subscribe to a few Binance streams, compute VWAP, chart it, done.

Instead, I fell into a rabbit hole involving WebSocket fanout, flame graphs, SQLite tuning, React rendering bottlenecks, GC behavior, LRU caching, payload optimization, and a lot of low-level debugging I absolutely did not expect when I started.

This post walks through the architecture, the problems, the flame graphs, and the insights.


What I Built

A real-time dashboard that displays a 1-second VWAP for top crypto trading pairs.

The backend:

  • connects to Binance aggTrade WebSocket streams
  • ingests 150–350 events per second
  • buckets trades into 1-second windows
  • computes VWAP
  • stores a sliding historical window in SQLite
  • broadcasts compact WebSocket messages to all connected clients

All of this is implemented in one Bun TypeScript file, deployed as a Railway Bun Function.


Tech Stack

Bun and TypeScript on the backend, SQLite for persistence, React on the frontend, deployed on Railway.

No Redis.

No Kafka.

No message queues.

No background workers.

Just one process.


Backend Architecture

1. Subscribing to 60+ Binance aggTrade streams

One multiplexed WebSocket connection is enough:

// One browser-compatible WebSocket to Binance's raw stream endpoint
const ws = new WebSocket("wss://stream.binance.com:9443/ws");

const streams = symbols.map((s) => `${s.toLowerCase()}@aggTrade`);

ws.addEventListener("open", () => {
  ws.send(
    JSON.stringify({
      method: "SUBSCRIBE",
      params: streams,
      id: 1
    })
  );
});

This produces 150–350 messages per second depending on volatility.
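Routing those messages into per-symbol buckets looks roughly like this. This is a sketch: `onMessage` and `buckets` are illustrative names, and events are assumed to arrive unwrapped on the raw `/ws` endpoint (Binance aggTrade fields: `s` = symbol, `p` = price, `q` = quantity).

```typescript
// Each aggTrade event is appended to its symbol's current-second bucket.
type AggTradeEvent = { e: string; s: string; p: string; q: string };

const buckets = new Map<string, { p: string; q: string }[]>();

function onMessage(raw: string): void {
  const msg = JSON.parse(raw);
  if (msg.e !== "aggTrade") return; // skip subscribe acks and other frames
  const ev = msg as AggTradeEvent;
  let bucket = buckets.get(ev.s);
  if (!bucket) {
    bucket = [];
    buckets.set(ev.s, bucket);
  }
  bucket.push({ p: ev.p, q: ev.q });
}

// ws.addEventListener("message", (e) => onMessage(String(e.data)));
```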


2. Bucketing trades per second and computing VWAP

For each symbol, I maintain a 1-second rolling buffer of trades.

// Binance sends price (p) and quantity (q) as strings
type AggTrade = { p: string; q: string };

function computeVWAP(trades: AggTrade[]): number | null {
  let pv = 0;  // cumulative price * volume
  let vol = 0; // cumulative volume

  for (const t of trades) {
    const price = Number(t.p);
    const qty = Number(t.q);
    pv += price * qty;
    vol += qty;
  }

  return vol === 0 ? null : pv / vol;
}

Every second, each bucket is flushed, the VWAP computed, persisted, and broadcast.
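That flush step can be sketched as follows. `flushBuckets` is an illustrative name, and `computeVWAP` is repeated here so the snippet is self-contained:

```typescript
type AggTrade = { p: string; q: string };
type Row = { symbol: string; ts: number; vwap: number };

function computeVWAP(trades: AggTrade[]): number | null {
  let pv = 0, vol = 0;
  for (const t of trades) {
    pv += Number(t.p) * Number(t.q);
    vol += Number(t.q);
  }
  return vol === 0 ? null : pv / vol;
}

// Drain every symbol's bucket, emit one VWAP row per symbol, start a
// fresh one-second window.
function flushBuckets(buckets: Map<string, AggTrade[]>, nowSec: number): Row[] {
  const rows: Row[] = [];
  for (const [symbol, trades] of buckets) {
    const vwap = computeVWAP(trades);
    if (vwap !== null) rows.push({ symbol, ts: nowSec, vwap });
  }
  buckets.clear();
  return rows;
}

// setInterval(() => { const rows = flushBuckets(buckets, Math.floor(Date.now() / 1000)); /* persist + broadcast */ }, 1000);
```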


3. SQLite Persistence (WAL Mode)

SQLite WAL handled this load almost effortlessly.

import { Database } from "bun:sqlite";

const db = new Database("vwap.db"); // illustrative filename

db.exec("PRAGMA journal_mode=WAL;");

const stmt = db.prepare(
  "INSERT INTO vwap (symbol, ts, vwap) VALUES (?1, ?2, ?3)"
);

// Wrapping the per-second batch in one transaction means WAL commits
// once per flush instead of once per row.
const insertBatch = db.transaction((batch: { symbol: string; ts: number; vwap: number }[]) => {
  for (const tick of batch) {
    stmt.run(tick.symbol, tick.ts, tick.vwap);
  }
});

I periodically trim old rows to maintain a sliding historical window.
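That trimming can be sketched like this; the retention window and helper names are illustrative, since the post doesn't state the exact values:

```typescript
// Keep only the last WINDOW_SECONDS of rows; run on an interval.
const WINDOW_SECONDS = 3600; // hypothetical retention

// Pure helper so the cutoff boundary is easy to verify in isolation.
function trimCutoff(nowSec: number, windowSeconds: number = WINDOW_SECONDS): number {
  return nowSec - windowSeconds;
}

const TRIM_SQL = "DELETE FROM vwap WHERE ts < ?1";

// Usage, on the same db handle as the inserts:
// setInterval(() => db.prepare(TRIM_SQL).run(trimCutoff(Math.floor(Date.now() / 1000))), 60_000);
```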


4. WebSocket Fanout to Clients

The same Bun process also exposes a WebSocket server:

server.publish("ticks", JSON.stringify(latestVWAPBatch));

Frontend clients subscribe and receive compact batches every second.
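One common way to keep those batches compact is to send positional arrays instead of keyed objects. This is a sketch, not the project's actual wire format:

```typescript
type Tick = { symbol: string; ts: number; vwap: number };

// [[symbol, ts, vwap], ...] is roughly half the JSON of keyed objects,
// since the field names aren't repeated for every row.
function encodeBatch(batch: Tick[]): string {
  return JSON.stringify(batch.map((t) => [t.symbol, t.ts, t.vwap]));
}

function decodeBatch(payload: string): Tick[] {
  return (JSON.parse(payload) as [string, number, number][]).map(
    ([symbol, ts, vwap]) => ({ symbol, ts, vwap })
  );
}

// server.publish("ticks", encodeBatch(latestVWAPBatch));
```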


Frontend: Surprisingly the Hardest Part

Rendering dozens of real-time charts with 1-second global updates was far more demanding than I expected. Chrome DevTools made the culprits obvious:

  • layout thrashing
  • expensive React render cycles
  • GC noise from array cloning
  • SVG layerization issues
  • large diff surfaces causing re-renders
  • accidental state explosions

After optimization, everything became much smoother, but it took a lot of profiling.

Below are the flame graphs that guided most of that work.


Flame Graphs

Pre-Optimization

Observations:

  • heavy Recalculate Style
  • large layout and paint blocks per tick
  • unnecessary React renders
  • big GC spikes from slice and shift
  • too many nodes being diffed

Post-Optimization

Fixes:

  • memoized derived values
  • batched state updates
  • trimmed arrays to avoid GC churn
  • reduced SVG complexity
  • far fewer style recalculations
  • predictable render cycle per second

The flame graphs made the bottlenecks painfully clear and the improvements very measurable.
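The slice/shift GC churn in particular has a cheap structural fix: a fixed-capacity ring buffer. Here is a sketch (illustrative, not the project's actual code):

```typescript
// Fixed-capacity ring buffer: appends never reallocate, so per-tick
// updates stop producing the garbage that slice()/shift() on arrays does.
class RingBuffer {
  private buf: Float64Array;
  private head = 0; // next write index
  private count = 0;

  constructor(capacity: number) {
    this.buf = new Float64Array(capacity);
  }

  push(value: number): void {
    this.buf[this.head] = value;
    this.head = (this.head + 1) % this.buf.length;
    if (this.count < this.buf.length) this.count++;
  }

  // Oldest-to-newest snapshot, e.g. for feeding a chart series.
  toArray(): number[] {
    const out: number[] = [];
    const start = (this.head - this.count + this.buf.length) % this.buf.length;
    for (let i = 0; i < this.count; i++) {
      out.push(this.buf[(start + i) % this.buf.length]);
    }
    return out;
  }

  get length(): number {
    return this.count;
  }
}
```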


Backend Observability (Railway Metrics)

Railway’s built-in metrics were perfect for validating the system’s behavior under load.

CPU Usage

Notes:

  • consistently around 0.1–0.2 vCPU
  • only spikes briefly during reconnects

Memory Usage

Notes:

  • stable around 60–70 MB
  • no leaks in the long-running rolling window

Network Egress

Notes:

  • scales linearly with connected clients
  • compact payload kept spikes minimal

Disk Usage (SQLite WAL)

Notes:

  • WAL writes barely increase usage
  • trimming strategy keeps DB size stable

Backend Performance Summary (Bun + SQLite)

Running for hours:

  • CPU: ~0.2 vCPU
  • RAM: ~60 MB
  • Ingest: ~300 messages per second
  • Outbound: 60–100 messages per second per client
  • SQLite: WAL mode handled writes without strain
  • Multiple clients: 5–10 live users with no jitter

For a single-file TypeScript process handling ingestion, computation, persistence, and broadcasting, this was remarkably stable.


What I Learned

This small weekend project pushed me into:

  • real-time streaming architecture
  • WebSocket fanout patterns
  • 1-second VWAP windowing
  • React flame-graph optimization
  • GC-aware data structure choices
  • memory leak hunting in long-running processes
  • payload size tuning
  • SQLite WAL tuning

One of the most unexpectedly fun dev projects I have done in a long time.

