DEV Community

linou518

When Your Home Lab Doubles in Size: Rethinking Dashboard UI for 22 Nodes

Today, 10 new mini PCs were added to the home cluster.

The node count jumped from 12 to 22 overnight, with 8 new specialist agents — aws-expert, gcp-expert, snowflake-expert, and others — coming online simultaneously. Great news for the infrastructure. But it surfaced a UI problem that was always lurking.

The node management dashboard hit its design limits faster than expected.


The Problem: Squeezing 22 Nodes into a 12-Node UI

The TechsFree dashboard runs as a SPA on Flask + Vanilla JS. The node management view shows each node as a card in a grid layout.

At 12 nodes, it worked fine — everything fit on screen, status was visible at a glance.

At 22 nodes:

  • Cards spill below the fold, requiring scrolling
  • "Which node handles which agent?" is no longer obvious
  • Finding a specific node without filtering is tedious

This isn't just a "display more cards" problem. It's an information architecture problem.


Three Approaches to Scaling the UI

1. Grouping by Role/Purpose

The new nodes break down cleanly by function:

Specialist Group A: aws-expert, gcp-expert, snowflake-expert
Specialist Group B: databricks-expert, k8s-expert, iac-expert
Specialist Group C: streaming-expert, llm-expert
Core Infrastructure: joe, jack, infra, web
Workers: work-a, work-b, apps, family, personal, pi4
Standby: agent04–agent10

Collapsible groups let you open only what you need today. This is exactly how VMware vSphere and Kubernetes dashboards handle scale — namespace/cluster grouping.
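The grouping step itself is small. A minimal sketch, assuming each node object carries a `role` field (the field name is an assumption — adapt it to whatever the actual `/api/ocm/nodes` payload uses):

```javascript
// Group a flat node list by role so each group can render as its own
// collapsible section. Nodes without a role land in 'ungrouped'.
function groupNodes(nodes) {
  const groups = new Map();
  for (const node of nodes) {
    const role = node.role || 'ungrouped';
    if (!groups.has(role)) groups.set(role, []);
    groups.get(role).push(node);
  }
  return groups;
}
```

On the rendering side, a native `<details>`/`<summary>` per group gives collapsibility for free in Vanilla JS — core infrastructure can default to `open`, standby collapsed.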

2. Display Density Toggle

A classic pattern for large infrastructure UIs: compact mode.

  • Detailed card: Current behavior. Shows IP, uptime, agent list.
  • Compact row: Just hostname, status dot, IP, and role in one line.
  • Tile map: Small color-coded tiles showing only status (Nagios-style).

Store the preference in localStorage so it persists across visits.
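A sketch of that persistence, with the storage injectable so the logic is testable outside a browser (the `nodeDensity` key and the three mode names are assumptions):

```javascript
const DENSITY_KEY = 'nodeDensity';          // localStorage key — assumed name
const DENSITIES = ['card', 'row', 'tile'];  // detailed card / compact row / tile map

// Persist the chosen density. `storage` defaults to localStorage but can
// be swapped for a plain object in tests.
function saveDensity(mode, storage = globalThis.localStorage) {
  if (DENSITIES.includes(mode)) storage.setItem(DENSITY_KEY, mode);
}

// Read the saved density, falling back to detailed cards on first visit
// or when the stored value is stale/invalid.
function loadDensity(storage = globalThis.localStorage) {
  const saved = storage.getItem(DENSITY_KEY);
  return DENSITIES.includes(saved) ? saved : 'card';
}
```

On page load, the mode can simply become a CSS class on the grid container and the stylesheet does the rest — no re-render needed.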

3. Incremental Search / Filter

Of 22 nodes, you typically care about a handful at any given time.

function filterNodes(query) {
  const q = query.trim().toLowerCase();
  document.querySelectorAll('.node-card').forEach(card => {
    // data-hostname / data-role may be missing on a card; default to ''
    const name = (card.dataset.hostname || '').toLowerCase();
    const role = (card.dataset.role || '').toLowerCase();
    // An empty query matches everything, so clearing the box restores all cards
    card.style.display = (name.includes(q) || role.includes(q)) ? '' : 'none';
  });
}

Incremental search is enough. No need for Elasticsearch or Fuse.js.
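Wiring the filter to a text input benefits from a small debounce so the DOM walk runs once per pause in typing rather than on every keystroke. A sketch (the `#node-search` id is an assumption):

```javascript
// Delay `fn` until `ms` of inactivity; repeated calls reset the timer.
function debounce(fn, ms = 150) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

// Hypothetical wiring to the existing filterNodes():
// document.querySelector('#node-search')
//   .addEventListener('input', debounce(e => filterNodes(e.target.value)));
```

At 22 nodes the debounce is barely necessary, but it costs a few lines and keeps the pattern ready if the cluster grows again.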


Backend Consideration: API Pagination

Even after the frontend design is sorted, there's a backend gotcha.

The dashboard fetches all node info in one call to /api/ocm/nodes. At 22 nodes, that's fine. But as slow-to-respond offline and standby agents accumulate, page load time will creep up.

Options under consideration:

  • Async loading for offline nodes — show online nodes first, load others in background
  • Role-based cache TTL — 30s for core nodes, 5 minutes for standby
  • WebSocket real-time updates — defer this; it's over-engineering for now
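The first option can be sketched client-side: partition the payload so responsive nodes render immediately and the rest backfill. The `status` field and the `status=online` query parameter are assumptions — the existing endpoint would need to grow that filter, and `renderNodes()` stands in for the current card-rendering code:

```javascript
// Split the node payload so online nodes can render first.
function partitionByStatus(nodes) {
  const online = nodes.filter(n => n.status === 'online');
  const rest = nodes.filter(n => n.status !== 'online');
  return { online, rest };
}

// Hypothetical two-phase load against an assumed ?status= filter.
async function loadNodes() {
  const online = await fetch('/api/ocm/nodes?status=online').then(r => r.json());
  renderNodes(online);                 // fast path: responsive nodes only
  const all = await fetch('/api/ocm/nodes').then(r => r.json());
  renderNodes(all);                    // backfill offline/standby nodes
}
```

The same partition also pairs naturally with the role-based cache TTL idea: the second fetch is the one that can afford a stale, long-TTL answer.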

Don't Throw Away the UI You Have

Full rewrites triggered by every infrastructure change mean the dashboard is never finished.

The decision here: add grouping support to the existing UI and ship it. Compact mode and search come later — after actually running 22 nodes for a while and seeing what's genuinely painful.

Don't try to scale your infrastructure and your UI in the same sprint. That's a small but important rule for keeping a home lab manageable long-term.
