Building a Real-Time Terminal Dashboard with Node.js Streams and Blessed
Terminal dashboards are quietly becoming essential tools in the DevOps and backend engineering world. While browser-based monitoring platforms like Grafana and Datadog dominate the conversation, terminal dashboards offer something they cannot: near-instant access over a plain SSH session, negligible resource overhead, and the ability to run on any machine with a terminal emulator. When your production server is melting down at 3 AM, you don't want to wait for a browser tab to load; you want answers the instant you connect.
In this article, we'll build a real-time terminal dashboard from scratch using Node.js streams and the blessed-contrib library. By the end, you'll have a fully functional monitoring tool that displays live CPU usage, memory consumption, a scrolling log viewer, and network statistics — all rendered beautifully in your terminal. We'll also architect a plugin system so you can extend the dashboard with custom widgets.
Why Terminal Dashboards Still Matter
The modern monitoring stack is overflowing with SaaS products, yet terminal dashboards remain indispensable for several reasons:
- SSH-first workflows: When you're already inside a remote server debugging an incident, opening a browser dashboard means context-switching. A terminal dashboard keeps you in flow.
- Resource efficiency: A blessed-based TUI uses a fraction of the memory and CPU that a web application requires. On a server that's already under stress, this matters.
- Air-gapped environments: Plenty of production systems have no outbound internet access. A local terminal dashboard works without any external dependencies.
- Composability: Terminal tools can be piped, scripted, and automated. A TUI dashboard can be launched as part of a deployment script, a tmux session, or a cron-triggered alert workflow.
Tools like htop, btop, and k9s have proven that rich terminal interfaces are not just viable — they're preferred by many engineers.
Project Setup
Let's initialize the project and install our dependencies:
mkdir terminal-dashboard && cd terminal-dashboard
npm init -y
npm install blessed blessed-contrib
blessed is a curses-like library for Node.js that provides low-level terminal rendering primitives: boxes, lists, text areas, and input handling. blessed-contrib builds on top of it with high-level widgets like line charts, bar graphs, gauges, maps, and tables — exactly what we need for a dashboard.
Create the entry point:
touch dashboard.js
The Architecture: Streams All the Way Down
The key architectural decision is to model every data source as a Node.js Readable or Transform stream. This gives us several advantages:
- Backpressure handling: Streams buffer incoming data until the consumer is ready, and pipe() propagates backpressure between stages, so a slow renderer doesn't require hand-rolled queueing.
- Composability: We can pipe data through transform streams for filtering, aggregation, or formatting before it reaches a widget.
- Consistency: Every data source — CPU stats, memory, logs, network — exposes the same interface. Widgets don't care where data comes from.
Here's the high-level data flow:
[Data Source Stream] → [Transform Stream] → [Widget Renderer]
Building the Data Streams
CPU Usage Stream
Node.js provides os.cpus() which returns per-core CPU timing information. To calculate usage percentages, we need to sample at intervals and compute the delta:
const { Readable } = require('stream');
const os = require('os');

class CpuStream extends Readable {
  constructor(interval = 1000) {
    super({ objectMode: true });
    this.interval = interval;
    this.previousCpus = os.cpus();
    this.timer = null;
  }

  _read() {
    if (this.timer) return;
    this.timer = setInterval(() => {
      const currentCpus = os.cpus();
      const usages = currentCpus.map((cpu, i) => {
        const prev = this.previousCpus[i];
        const prevTotal = Object.values(prev.times).reduce((a, b) => a + b, 0);
        const currTotal = Object.values(cpu.times).reduce((a, b) => a + b, 0);
        const totalDiff = currTotal - prevTotal;
        const idleDiff = cpu.times.idle - prev.times.idle;
        return totalDiff === 0 ? 0 : ((totalDiff - idleDiff) / totalDiff) * 100;
      });
      const avgUsage = usages.reduce((a, b) => a + b, 0) / usages.length;
      this.push({
        timestamp: Date.now(),
        average: Math.round(avgUsage * 100) / 100,
        perCore: usages.map((u) => Math.round(u * 100) / 100),
      });
      this.previousCpus = currentCpus;
    }, this.interval);
  }

  _destroy(err, callback) {
    clearInterval(this.timer);
    callback(err);
  }
}
This stream emits an object every second containing the average CPU usage and per-core breakdowns. The objectMode: true flag lets us push JavaScript objects directly instead of buffers.
Memory Usage Stream
Memory monitoring is simpler since we don't need deltas:
class MemoryStream extends Readable {
  constructor(interval = 2000) {
    super({ objectMode: true });
    this.interval = interval;
    this.timer = null;
  }

  _read() {
    if (this.timer) return;
    this.timer = setInterval(() => {
      const total = os.totalmem();
      const free = os.freemem();
      const used = total - free;
      const proc = process.memoryUsage();
      this.push({
        timestamp: Date.now(),
        totalGB: (total / 1073741824).toFixed(2), // 1073741824 bytes = 1 GiB
        usedGB: (used / 1073741824).toFixed(2),
        freeGB: (free / 1073741824).toFixed(2),
        usedPercent: Math.round((used / total) * 100),
        heapUsedMB: (proc.heapUsed / 1048576).toFixed(1), // 1048576 bytes = 1 MiB
        rssMB: (proc.rss / 1048576).toFixed(1),
      });
    }, this.interval);
  }

  _destroy(err, callback) {
    clearInterval(this.timer);
    callback(err);
  }
}
Log Tail Stream
For watching log files in real time, we combine fs.watch with a read stream that picks up new content as it's appended:
const fs = require('fs');
const { Transform } = require('stream');

class LogTailStream extends Readable {
  constructor(filePath) {
    super({ objectMode: true });
    this.filePath = filePath;
    this.position = 0;
    this.watcher = null;
  }

  _read() {
    if (this.watcher) return;
    // Start from the current end of the file
    try {
      this.position = fs.statSync(this.filePath).size;
    } catch {
      this.position = 0;
    }
    try {
      this.watcher = fs.watch(this.filePath, (eventType) => {
        if (eventType !== 'change') return;
        let stat;
        try {
          stat = fs.statSync(this.filePath);
        } catch {
          return; // file disappeared between events (e.g. log rotation)
        }
        if (stat.size <= this.position) {
          this.position = 0; // file was truncated, reset
        }
        const stream = fs.createReadStream(this.filePath, {
          start: this.position,
          encoding: 'utf8',
        });
        let chunk = '';
        stream.on('data', (data) => (chunk += data));
        stream.on('end', () => {
          this.position = stat.size;
          const lines = chunk.split('\n').filter(Boolean);
          lines.forEach((line) => {
            this.push({
              timestamp: Date.now(),
              line: line.trim(),
              source: this.filePath,
            });
          });
        });
      });
    } catch (err) {
      this.destroy(err); // surface a missing file as a stream error
    }
  }

  _destroy(err, callback) {
    if (this.watcher) this.watcher.close();
    callback(err);
  }
}
Network Statistics Stream
On Linux, per-interface byte counters live in /proc/net/dev; for portability, this stream instead lists interfaces via os.networkInterfaces() and counts active connections:
const { execSync } = require('child_process');

class NetworkStream extends Readable {
  constructor(interval = 2000) {
    super({ objectMode: true });
    this.interval = interval;
    this.timer = null;
  }

  _read() {
    if (this.timer) return;
    this.timer = setInterval(() => {
      const nets = os.networkInterfaces();
      const connections = Object.entries(nets).reduce((acc, [name, addrs]) => {
        const ipv4 = addrs.find((a) => a.family === 'IPv4' && !a.internal);
        if (ipv4) acc.push({ interface: name, address: ipv4.address });
        return acc;
      }, []);

      // Get active connection count (cross-platform, but requires netstat)
      let activeConnections = 0;
      try {
        const output = execSync('netstat -an 2>/dev/null | wc -l', {
          encoding: 'utf8',
          timeout: 2000,
        });
        activeConnections = parseInt(output.trim(), 10) || 0;
      } catch {
        activeConnections = -1; // netstat unavailable
      }

      this.push({
        timestamp: Date.now(),
        interfaces: connections,
        activeConnections,
        interfaceCount: Object.keys(nets).length,
      });
    }, this.interval);
  }

  _destroy(err, callback) {
    clearInterval(this.timer);
    callback(err);
  }
}
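The Linux-only path mentioned above can be sketched as a parser for /proc/net/dev. The parser takes the file contents as a string so the logic can be tested anywhere; on a Linux box you would feed it the real file. (This is a sketch, not part of the dashboard code above.)

```javascript
// Parse /proc/net/dev contents into per-interface byte counters.
// Format: two header lines, then "  eth0: <16 whitespace-separated counters>",
// where field 0 is received bytes and field 8 is transmitted bytes.
function parseProcNetDev(contents) {
  return contents
    .split('\n')
    .slice(2) // skip the two header lines
    .filter((line) => line.includes(':'))
    .map((line) => {
      const [name, rest] = line.split(':');
      const fields = rest.trim().split(/\s+/).map(Number);
      return { interface: name.trim(), rxBytes: fields[0], txBytes: fields[8] };
    });
}

// On Linux:
// const stats = parseProcNetDev(require('fs').readFileSync('/proc/net/dev', 'utf8'));
```

Sampling this at an interval and diffing rxBytes/txBytes would give real throughput figures, the same way CpuStream diffs CPU times.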
Transform Streams for Data Processing
One of the most powerful aspects of this architecture is using Transform streams as middleware between data sources and widgets. Here's a transform that adds rolling averages to CPU data:
class RollingAverageTransform extends Transform {
  constructor(windowSize = 30) {
    super({ objectMode: true });
    this.window = [];
    this.windowSize = windowSize;
  }

  _transform(chunk, encoding, callback) {
    this.window.push(chunk.average);
    if (this.window.length > this.windowSize) {
      this.window.shift();
    }
    callback(null, {
      ...chunk,
      rollingAvg:
        Math.round(
          (this.window.reduce((a, b) => a + b, 0) / this.window.length) * 100
        ) / 100,
      history: [...this.window],
    });
  }
}
And a transform that colorizes log lines based on severity:
class LogSeverityTransform extends Transform {
  constructor() {
    super({ objectMode: true });
  }

  _transform(chunk, encoding, callback) {
    const line = chunk.line.toLowerCase();
    let severity = 'info';
    let color = 'green';
    if (line.includes('error') || line.includes('fatal')) {
      severity = 'error';
      color = 'red';
    } else if (line.includes('warn')) {
      severity = 'warn';
      color = 'yellow';
    } else if (line.includes('debug')) {
      severity = 'debug';
      color = 'cyan';
    }
    callback(null, { ...chunk, severity, color });
  }
}
Assembling the Dashboard UI
Now let's build the actual terminal interface. blessed-contrib provides a grid layout system that makes positioning widgets straightforward:
const blessed = require('blessed');
const contrib = require('blessed-contrib');

function createDashboard() {
  const screen = blessed.screen({
    smartCSR: true,
    title: 'Node.js System Monitor',
  });
  const grid = new contrib.grid({ rows: 12, cols: 12, screen });

  // CPU line chart — top left (4 rows x 6 cols)
  const cpuChart = grid.set(0, 0, 4, 6, contrib.line, {
    label: ' CPU Usage (%) ',
    showLegend: true,
    legend: { width: 12 },
    style: {
      line: 'cyan',
      text: 'white',
      baseline: 'white',
    },
    xLabelPadding: 3,
    xPadding: 5,
    wholeNumbersOnly: true,
    maxY: 100,
  });

  // Memory gauge — top, right of the chart (4 rows x 3 cols)
  const memGauge = grid.set(0, 6, 4, 3, contrib.gauge, {
    label: ' Memory Usage ',
    stroke: 'green',
    fill: 'white',
  });

  // Memory details — beside gauge (4 rows x 3 cols)
  const memInfo = grid.set(0, 9, 4, 3, contrib.table, {
    label: ' Memory Details ',
    keys: true,
    fg: 'white',
    columnSpacing: 2,
    columnWidth: [12, 10],
  });

  // Log viewer — middle section (4 rows x 12 cols)
  const logBox = grid.set(4, 0, 4, 12, contrib.log, {
    label: ' Log Stream ',
    fg: 'green',
    selectedFg: 'green',
    bufferLength: 50,
  });

  // Network stats — bottom left (4 rows x 6 cols)
  const netTable = grid.set(8, 0, 4, 6, contrib.table, {
    label: ' Network Interfaces ',
    keys: true,
    fg: 'cyan',
    columnSpacing: 2,
    columnWidth: [16, 18, 12],
  });

  // System info — bottom right (4 rows x 6 cols)
  const sysInfo = grid.set(8, 6, 4, 6, contrib.markdown, {
    label: ' System Info ',
    style: { fg: 'white' },
  });

  // Keybindings
  screen.key(['escape', 'q', 'C-c'], () => process.exit(0));

  return { screen, cpuChart, memGauge, memInfo, logBox, netTable, sysInfo };
}
The grid system divides the screen into a 12x12 matrix. Each grid.set(row, col, rowSpan, colSpan, widgetType, options) call places a widget at the specified position. This approach is responsive — the widgets scale proportionally when the terminal is resized.
Connecting Streams to Widgets
Here's where everything comes together. We pipe each data stream through its transforms and into widget updaters:
function startDashboard() {
  const ui = createDashboard();
  const cpuHistory = { x: [], y: [] };
  const MAX_HISTORY = 60;

  // === CPU Stream → Line Chart ===
  const cpuStream = new CpuStream(1000);
  const cpuTransform = new RollingAverageTransform(30);
  cpuStream.pipe(cpuTransform).on('data', (data) => {
    cpuHistory.x.push(new Date(data.timestamp).toLocaleTimeString().slice(0, 5));
    cpuHistory.y.push(data.average);
    if (cpuHistory.x.length > MAX_HISTORY) {
      cpuHistory.x.shift();
      cpuHistory.y.shift();
    }
    ui.cpuChart.setData([
      {
        title: `CPU (avg: ${data.rollingAvg}%)`,
        x: cpuHistory.x,
        y: cpuHistory.y,
        style: { line: 'cyan' },
      },
    ]);
    ui.screen.render();
  });

  // === Memory Stream → Gauge + Table ===
  const memStream = new MemoryStream(2000);
  memStream.on('data', (data) => {
    ui.memGauge.setPercent(data.usedPercent);
    if (data.usedPercent > 90) {
      ui.memGauge.setOptions({ stroke: 'red' });
    } else if (data.usedPercent > 70) {
      ui.memGauge.setOptions({ stroke: 'yellow' });
    } else {
      ui.memGauge.setOptions({ stroke: 'green' });
    }
    ui.memInfo.setData({
      headers: ['Metric', 'Value'],
      data: [
        ['Total', `${data.totalGB} GB`],
        ['Used', `${data.usedGB} GB`],
        ['Free', `${data.freeGB} GB`],
        ['Usage', `${data.usedPercent}%`],
        ['Heap', `${data.heapUsedMB} MB`],
        ['RSS', `${data.rssMB} MB`],
      ],
    });
    ui.screen.render();
  });
  // === Log Stream → Log Widget ===
  const logFile =
    process.argv.slice(2).find((a) => !a.startsWith('--')) || '/var/log/system.log';
  const logStream = new LogTailStream(logFile);
  const logTransform = new LogSeverityTransform();
  // fs.watch failures surface asynchronously, so listen for them on the
  // stream instead of wrapping the constructor in try/catch
  logStream.on('error', (err) => {
    ui.logBox.log(`{red-fg}Could not watch ${logFile}: ${err.message}{/}`);
  });
  logStream.pipe(logTransform).on('data', (data) => {
    const prefix = `[${data.severity.toUpperCase()}]`;
    ui.logBox.log(`{${data.color}-fg}${prefix}{/} ${data.line}`);
  });
  // === Network Stream → Table ===
  const netStream = new NetworkStream(3000);
  netStream.on('data', (data) => {
    const rows = data.interfaces.map((iface) => [
      iface.interface,
      iface.address,
      'active',
    ]);
    if (rows.length === 0) {
      rows.push(['(none)', '-', '-']);
    }
    rows.push(['Connections', String(data.activeConnections), '-']);
    ui.netTable.setData({
      headers: ['Interface', 'Address', 'Status'],
      data: rows,
    });
    ui.screen.render();
  });
  // === System Info ===
  const renderSysInfo = () => {
    ui.sysInfo.setMarkdown(
      `**Host**: ${os.hostname()}\n` +
        `**OS**: ${os.platform()} (${os.arch()})\n` +
        `**Node**: ${process.version}\n` +
        `**Cores**: ${os.cpus().length}\n` +
        `**Uptime**: ${Math.floor(os.uptime() / 3600)}h ${Math.floor((os.uptime() % 3600) / 60)}m`
    );
  };
  renderSysInfo();
  ui.screen.render();

  // Refresh the uptime line every 60 seconds
  setInterval(() => {
    renderSysInfo();
    ui.screen.render();
  }, 60000);

  return ui;
}
Notice how each data pipeline follows the same pattern: create stream, optionally pipe through transforms, consume data events, update widget, render screen. The consistency makes the system easy to reason about and extend.
Making It Extensible: The Plugin System
A dashboard that only shows hardcoded metrics is useful but limited. Let's add a plugin system that lets anyone add custom widgets and data sources:
class DashboardPlugin {
  constructor(name, options = {}) {
    this.name = name;
    this.options = options;
  }

  // Override in subclass: create and return a data stream
  createStream() {
    throw new Error(`Plugin ${this.name} must implement createStream()`);
  }

  // Override in subclass: create and return a widget
  createWidget(grid, row, col, rowSpan, colSpan) {
    throw new Error(`Plugin ${this.name} must implement createWidget()`);
  }

  // Override in subclass: handle data from stream and update widget
  update(widget, data, screen) {
    throw new Error(`Plugin ${this.name} must implement update()`);
  }
}
Here's the plugin manager that loads and orchestrates plugins:
class PluginManager {
  constructor(screen, grid) {
    this.screen = screen;
    this.grid = grid;
    this.plugins = [];
    this.streams = [];
  }

  register(plugin, position) {
    const { row, col, rowSpan, colSpan } = position;
    const widget = plugin.createWidget(this.grid, row, col, rowSpan, colSpan);
    const stream = plugin.createStream();
    stream.on('data', (data) => {
      plugin.update(widget, data, this.screen);
      this.screen.render();
    });
    this.plugins.push({ plugin, widget, stream });
    this.streams.push(stream);
    return this;
  }

  destroy() {
    this.streams.forEach((stream) => stream.destroy());
  }
}
Now creating a custom plugin is straightforward. Here's an example that monitors disk usage:
class DiskUsagePlugin extends DashboardPlugin {
  constructor() {
    super('disk-usage');
  }

  createStream() {
    let timer = null;
    const stream = new Readable({
      objectMode: true,
      read() {},
      destroy(err, callback) {
        clearInterval(timer); // stop polling when the manager destroys us
        callback(err);
      },
    });
    timer = setInterval(() => {
      try {
        const output = execSync("df -h / | tail -1 | awk '{print $5}'", {
          encoding: 'utf8',
          timeout: 3000,
        });
        const percent = parseInt(output.replace('%', ''), 10);
        stream.push({ percent, raw: output.trim() });
      } catch {
        stream.push({ percent: 0, raw: 'N/A' });
      }
    }, 5000);
    return stream;
  }

  createWidget(grid, row, col, rowSpan, colSpan) {
    return grid.set(row, col, rowSpan, colSpan, contrib.donut, {
      label: ' Disk Usage ',
      radius: 8,
      arcWidth: 3,
      remainColor: 'black',
      yPadding: 2,
    });
  }

  update(widget, data, screen) {
    const color = data.percent > 90 ? 'red' : data.percent > 70 ? 'yellow' : 'green';
    widget.setData([{ label: 'Used', percent: data.percent, color }]);
  }
}
To use the plugin system, integrate it with the main dashboard:
function startWithPlugins() {
  const screen = blessed.screen({ smartCSR: true, title: 'Dashboard' });
  const grid = new contrib.grid({ rows: 12, cols: 12, screen });
  const manager = new PluginManager(screen, grid);

  manager.register(new DiskUsagePlugin(), {
    row: 0, col: 0, rowSpan: 4, colSpan: 4,
  });
  // Register more plugins as needed...

  screen.key(['q', 'C-c'], () => {
    manager.destroy();
    process.exit(0);
  });
  screen.render();
}
Loading Plugins from a Directory
For a production-ready tool, you'd want to load plugins dynamically from a directory:
const path = require('path');

async function loadPlugins(pluginDir) {
  const plugins = [];
  let files;
  try {
    files = fs.readdirSync(pluginDir).filter((f) => f.endsWith('.js'));
  } catch {
    return plugins; // the plugin directory is optional
  }
  for (const file of files) {
    try {
      const PluginClass = require(path.join(pluginDir, file));
      if (PluginClass.prototype instanceof DashboardPlugin) {
        plugins.push(new PluginClass());
      }
    } catch (err) {
      console.error(`Failed to load plugin ${file}: ${err.message}`);
    }
  }
  return plugins;
}
Users can drop .js files into a ~/.dashboard/plugins/ directory, and the dashboard picks them up on startup. Each plugin file just needs to export a class that extends DashboardPlugin.
Putting It All Together
Here's the complete entry point that ties everything together:
#!/usr/bin/env node
const blessed = require('blessed');
const contrib = require('blessed-contrib');
const os = require('os');
const fs = require('fs');
const { Readable, Transform } = require('stream');
const { execSync } = require('child_process');

// ... (all stream and plugin classes from above)

function main() {
  const args = process.argv.slice(2);
  const pluginDir = args.includes('--plugins')
    ? args[args.indexOf('--plugins') + 1]
    : null;

  // startDashboard() reads the log file path from process.argv itself
  const ui = startDashboard();

  if (pluginDir) {
    loadPlugins(pluginDir).then((plugins) => {
      // console.log would corrupt the blessed screen, so report through
      // the log widget instead; these could also fill remaining grid space
      ui.logBox.log(`Loaded ${plugins.length} plugins`);
    });
  }

  process.on('uncaughtException', (err) => {
    ui.screen.destroy();
    console.error('Dashboard crashed:', err);
    process.exit(1);
  });
}

main();
Run it:
node dashboard.js /var/log/syslog
# Or with a custom log file and plugins:
node dashboard.js /path/to/app.log --plugins ./my-plugins/
Performance Considerations
A few tips for keeping your dashboard responsive:
- Throttle renders: Don't call screen.render() on every single data event. Batch updates and render on a timer (e.g., every 250ms) if data arrives faster than the terminal can display it.
- Limit history buffers: Cap arrays like cpuHistory with a MAX_HISTORY constant. Unbounded arrays will eventually consume all available memory.
- Use objectMode wisely: Object-mode streams skip the internal byte-buffering logic. This is perfect for our use case (small, infrequent objects) but would be problematic for high-throughput binary data.
- Clean up on exit: Always call stream.destroy() and clearInterval() in your cleanup handlers. Leaked timers and file watchers can keep the process alive after the user presses q.
process.on('SIGINT', () => {
  cpuStream.destroy();
  memStream.destroy();
  netStream.destroy();
  screen.destroy();
  process.exit(0);
});
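The first tip, batching render() calls, can be sketched as a small helper. (`throttleRender` is a name introduced here for illustration, not a blessed API.)

```javascript
// Leading-edge render throttle (sketch): render immediately on the first
// event, then coalesce any burst of further events into a single trailing
// render per window.
function throttleRender(screen, ms = 250) {
  let last = 0;
  let pending = null;
  return () => {
    const now = Date.now();
    if (now - last >= ms) {
      last = now;
      screen.render();
    } else if (!pending) {
      pending = setTimeout(() => {
        pending = null;
        last = Date.now();
        screen.render();
      }, ms - (now - last));
    }
  };
}

// Usage: const render = throttleRender(ui.screen);
// then call render() instead of ui.screen.render() in each data handler.
```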
Testing Streams in Isolation
One major advantage of the stream-based architecture is testability. Each data stream can be tested independently without spinning up the entire UI:
const { test } = require('node:test');
const assert = require('node:assert');

test('CpuStream emits valid data', (t, done) => {
  const stream = new CpuStream(100); // fast interval for testing
  let received = 0;
  stream.on('data', (data) => {
    assert.ok(data.average >= 0 && data.average <= 100);
    assert.ok(Array.isArray(data.perCore));
    assert.ok(data.timestamp > 0);
    received++;
    if (received >= 3) {
      stream.destroy();
      done();
    }
  });
});

test('RollingAverageTransform computes correctly', (t, done) => {
  const transform = new RollingAverageTransform(3);
  const inputs = [
    { average: 10, timestamp: 1 },
    { average: 20, timestamp: 2 },
    { average: 30, timestamp: 3 },
  ];
  let count = 0;
  transform.on('data', (data) => {
    count++;
    if (count === 3) {
      assert.strictEqual(data.rollingAvg, 20); // (10 + 20 + 30) / 3
      transform.destroy();
      done();
    }
  });
  inputs.forEach((input) => transform.write(input));
});
Next Steps
This dashboard is a solid foundation, but there are plenty of directions to take it:
- Docker container monitoring: Create a plugin that reads from the Docker API to show container CPU, memory, and network stats.
- Alert thresholds: Add configurable alerts that flash the screen or trigger a system notification when a metric exceeds a threshold.
- Remote monitoring: Use a WebSocket or TCP stream to pull metrics from remote machines, turning this into a multi-host dashboard.
- Configuration file: Load the dashboard layout, data sources, and thresholds from a YAML or JSON config file.
- Mouse support: blessed supports mouse events — add click-to-zoom on charts, or click a log line to see full details.
Conclusion
Terminal dashboards combine the immediacy of command-line tools with the visual richness of graphical interfaces. By modeling data sources as Node.js streams, we get a clean, composable architecture where each component can be developed, tested, and replaced independently. The plugin system ensures the dashboard grows with your needs rather than becoming a monolithic script.
The code in this article is a working starting point. Clone it, customize the widgets for your infrastructure, drop in a few plugins, and you have a monitoring tool that starts in milliseconds and runs anywhere a terminal exists.
The full source code is available on GitHub. Give it a star if you find it useful, and feel free to open issues or submit plugins.