Scaraude

Home Automation in 3MB: Building a Rust System for Raspberry Pi Zero

This story starts in a damp mountain house. The humidity there eats wallpaper for breakfast, and there is no fiber connection to fall back on. I wanted to know if the living room was molding while I was away, so I set myself a budget: one Raspberry Pi Zero 2W with its hat, an SD card, a cable or two, and a Sonoff Zigbee 3.0 dongle. Add four temperature/humidity sensors ($27) and you have an off-grid monitoring kit for less than a single smart thermostat.

Then reality hit. Every off-the-shelf home automation stack I tried wanted at least 1 GB of RAM and expected a Pi 3 or better. The Pi Zero 2W gives you 512 MB of RAM, a quad-core ARM CPU, and the energy footprint of a toothbrush charger. That pushed me to write my own stack in Rust. The payoff is a 3 MB binary that idles at 10-20 MB of RAM while juggling sensors, switches, charts, and automations.


What do we need here?

I like constraints because they force honest trade-offs. This project has to:

  • run offline on a Pi Zero 2W for months,
  • run ultra-low power so it can eventually operate off-grid,
  • allow simple automation flows,
  • show charts and automation controls to friends who are not engineers,
  • and be simple enough for me to maintain alone.

The goal is to build a reliable piece of infrastructure that keeps a real cabin healthy. That “why” matters when you decide whether your bundle needs another dependency or whether you can shave 2 MB off the binary and extend SD card life by six months.


Sensor dashboard

Commander dashboard


Architecture: only what is needed

┌───────────────┐         ┌─────────────┐         ┌─────────────────┐
│  zigbee2mqtt  │         │  mosquitto  │         │ home-assistant  │
│   (systemd)   │◄───────►│   (broker)  │◄───────►│      -rs        │
│               │  pub/sub│             │  pub/sub│   (systemd)     │
└───────────────┘         └─────────────┘         │                 │
        ▲                                         │  ┌───────────┐  │
        │ USB                                     │  │  SQLite   │  │
        │                                         │  │ Database  │  │
        ▼                                         │  └───────────┘  │
┌───────────────┐                                 │                 │
│ Zigbee Dongle │                                 │  ┌───────────┐  │
│  (Sonoff 3.0) │                                 │  │HTTP Server│  │
└───────────────┘                                 │  │           │  │
        ▲                                         │  └───────────┘  │
        │ Zigbee 3.0                              └────────▲────────┘
        ▼                                                  │ HTTP
┌───────────────┐                                          ▼
│Zigbee Devices │                                  ┌──────────────┐
│ • Sensors     │                                  │    Svelte    │
│ • Switches    │                                  │  Dashboard   │
│ • and more... │                                  │ (Browser UI) │
└───────────────┘                                  └──────────────┘

Three systemd units live on the Pi Zero: zigbee2mqtt, mosquitto, and home-assistant-rs. They communicate over MQTT, a lightweight pub/sub protocol built for IoT. Zigbee2MQTT bridges a huge range of Zigbee devices by translating their messages into MQTT topics and payloads. Unfortunately it requires Node.js and takes around 100 MB at runtime; it is the biggest RAM consumer here, but there is no lighter alternative and the utility is worth the cost.
The MQTT broker, on the other hand, could be embedded in the Rust binary fairly easily with rmqtt, but Mosquitto is a battle-tested, efficient broker written in C (~10 MB at runtime). No need to reinvent the wheel here!

The Rust binary itself is split into four (for now) long-lived services connected through a tokio::broadcast event bus:

  1. DbWriterService persists data to SQLite with INSERT-only time-series tables.
  2. StateManagerService keeps RAM-only device and switch state for instant HTTP responses.
  3. AutomationService evaluates rules on its own async task, fed by the same bus but isolated from HTTP so a bad rule cannot block the API.
  4. WebSocketBroadcaster fans each SystemEvent to browsers in less than 100 ms.

Once a sensor publishes over Zigbee, the MQTT handler converts it into a strongly typed event, the services react, and the frontend receives the exact same payload over WebSocket. No polling, no duplicated caches on the backend.
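To make that concrete, here is a hedged sketch of the topic-to-event step. The names (`extract_f64`, the `SystemEvent` variants) are mine, not the repo's, and the real handler would use serde rather than hand parsing; this std-only version just pulls one numeric field:

```rust
// Sketch (assumption, not the repo's actual code): turning an MQTT message
// from zigbee2mqtt into a strongly typed event.

#[derive(Debug, Clone, PartialEq)]
enum SystemEvent {
    SensorReading { device: String, temperature: f64 },
    Unknown,
}

// Extract a bare numeric field like "temperature":21.5 from a flat JSON object.
fn extract_f64(json: &str, key: &str) -> Option<f64> {
    let needle = format!("\"{}\":", key);
    let start = json.find(&needle)? + needle.len();
    let rest = &json[start..];
    let end = rest.find(|c: char| c == ',' || c == '}')?;
    rest[..end].trim().parse().ok()
}

// zigbee2mqtt publishes on "zigbee2mqtt/<friendly_name>".
fn to_event(topic: &str, payload: &str) -> SystemEvent {
    match topic.strip_prefix("zigbee2mqtt/") {
        Some(device) => match extract_f64(payload, "temperature") {
            Some(t) => SystemEvent::SensorReading {
                device: device.to_string(),
                temperature: t,
            },
            None => SystemEvent::Unknown,
        },
        None => SystemEvent::Unknown,
    }
}
```

Every downstream service then matches on the enum instead of re-parsing JSON, which is what keeps the backend free of duplicated caches.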


Choices: more for less

Rust

Rust gives me C-like control without C-like footguns. Release builds with opt-level = "z", lto = true, and panic = "abort" (see Cargo.toml) turned the binary from ~8 MB into ~3 MB, which matters when you scp it over spotty Wi-Fi and load it from an SD card.
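The release profile behind that shrink looks roughly like this. The first three flags are the ones named above; `strip` and `codegen-units` are common companions I am adding as assumptions, not settings confirmed by the repo:

```toml
[profile.release]
opt-level = "z"   # optimize for size rather than speed
lto = true        # cross-crate link-time optimization
panic = "abort"   # drop the unwinding machinery from the binary
# Assumptions, not confirmed by the post:
strip = true      # strip symbols from the final binary
codegen-units = 1 # slower builds, smaller and faster output
```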

Svelte

I initially wrote static HTML generators with an LLM. That lasted a weekend. The dashboard is now a Svelte Single Page App compiled to tiny bundles (sub‑40 KB gzipped for the main chunk) while still giving me stores, animations, and a pleasant DX. It also lets me lean on Chart.js, which keeps data visualization painless.

Services over Docker

Docker is great for portability. Here, portability is a given: the Pi runs Debian. Dropping Docker removed the daemon overhead and an entire class of SD-card writes. Each part of the stack is a plain systemd unit (see systemd/*.service). When the Pi reboots after a storm, systemd brings everything back without me ssh'ing in to restart containers.

Max load on the client

The previous iteration cached chunks of data on the backend. That cache is gone. Every heavy structure now lives in a Svelte store (frontend/src/lib/stores/dataCache.ts). On first load the browser downloads 24 hours of readings, then trims and sorts them locally:

// Skip duplicates: only insert (and re-sort) when the reading is new.
const merged = exists
  ? state.sensors.readings
  : sortReadings([...state.sensors.readings, reading]);

return {
  ...state,
  sensors: {
    ...state.sensors,
    // Drop anything that falls outside the selected time window.
    readings: trimReadings(merged, state.sensors.rangeHours),
    latestTimestamp: Math.max(state.sensors.latestTimestamp, reading.timestamp),
  },
};

When a new measurement lands, it is streamed over WebSocket and merged directly into this cache. The backend simply acknowledges the MQTT event and keeps the append-only database in sync. Result: no duplicate caches, and the Pi spends CPU cycles only when something actually changed.
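On the storage side, an append-only time-series store can be as simple as a table that only ever sees INSERTs. As a sketch (table and column names are assumptions, not the repo's actual schema):

```sql
-- Hypothetical append-only schema: rows are only ever INSERTed, never UPDATEd.
CREATE TABLE sensor_readings (
    device_id   TEXT    NOT NULL,
    temperature REAL,
    humidity    REAL,
    ts          INTEGER NOT NULL  -- unix timestamp, seconds
);

-- One index keeps "last 24 h per device" queries cheap on the Pi.
CREATE INDEX idx_readings_device_ts ON sensor_readings (device_id, ts);
```

Avoiding UPDATEs also plays nicely with SD cards: writes stay sequential and the page churn stays low.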

Event-driven everything

A single EventBus (src/events/bus.rs) glues the system together. The MQTT loop publishes SystemEvents. Services subscribe and act independently. Tokio's broadcast channel is fast enough to deliver events to five concurrent services with <2 ms of overhead on the Pi Zero. Because the automation engine, DB writer, and WebSocket broadcaster are decoupled, I can restart one without touching the others, and backpressure is limited by the channel size (1,000 events right now).
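The fan-out semantics are easy to picture with a std-only stand-in: every subscriber owns its own queue, and a send clones the event into each of them. The real bus is tokio::sync::broadcast (async, bounded, with lag detection); this sketch only illustrates the shape:

```rust
use std::sync::mpsc::{channel, Receiver, Sender};

// Illustration only: a synchronous approximation of a broadcast bus.
// tokio::sync::broadcast does this with a fixed-capacity ring buffer instead.
struct EventBus<T: Clone> {
    subscribers: Vec<Sender<T>>,
}

impl<T: Clone> EventBus<T> {
    fn new() -> Self {
        Self { subscribers: Vec::new() }
    }

    // Each subscriber gets an independent receiver, so services never
    // steal events from each other.
    fn subscribe(&mut self) -> Receiver<T> {
        let (tx, rx) = channel();
        self.subscribers.push(tx);
        rx
    }

    // Clone the event to every live subscriber; ignore dropped ones.
    fn send(&self, event: T) {
        for tx in &self.subscribers {
            let _ = tx.send(event.clone());
        }
    }
}
```

The `Clone` bound is the same trade-off tokio makes: events must be cheap to duplicate, which is why SystemEvent stays a small enum rather than carrying big payloads.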

System monitor

The most boring part might be my favorite for its simplicity: a Bash script (monitor.sh) writes CSV logs every 60 seconds. That file is ~10–100× cheaper than an actual database write, so I can log CPU %, RAM %, and temperature forever without wearing the SD card. Svelte turns those CSVs into zoomable charts (LogsPage.svelte, SystemMetricsView.svelte), and you can scroll through top CPU/RAM consumers from the same dashboard.


Real-time-first data flow

The initial prototype polled /api/readings every 15 seconds. That meant 12 MB/hour of redundant JSON, 240 SQLite reads per hour, and a UI that always felt one step behind. The new flow:

  1. Browser loads and hydrates the cache (fetchReadings) once.
  2. eventStream.connect() opens a resilient WebSocket (/ws) powered by Hyper + tokio-tungstenite.
  3. Each SystemEvent (sensor reading, device state change, automation execution) is serialized once and fanned out to browsers.
  4. The frontend merges the event into the cache and redraws graphs locally.

Bandwidth dropped by ~95 %. CPU usage on the Pi sits around 0.5 % idle / 5-10 % during automation spikes, and the UI reacts in under 100 ms because it never waits for round-trips.


Independent services: automation, state, storage, streaming

When everything lived inside one loop, a slow action like saving to disk or talking to a sleepy switch would block MQTT processing and make the UI stutter. The current version splits the runtime into four independent async services sharing only the event bus:

  1. DbWriterService keeps SQLite append-only and offloads writes from the MQTT thread.
  2. StateManagerService owns the authoritative in-memory snapshot used by HTTP routes.
  3. AutomationService evaluates rules and publishes AutomationTriggered/AutomationExecuted events.
  4. WebSocketBroadcaster streams each event to browsers without ever touching the database.

Each service has its own receiver on the bus:

tokio::spawn(DbWriterService::new(db.clone(), event_bus.subscribe()).run());
tokio::spawn(StateManagerService::new(device_state.clone(), switch_state.clone(), event_bus.subscribe()).run());
tokio::spawn(AutomationService::new(
    db.clone(),
    mqtt_client.clone(),
    device_state.clone(),
    switch_state.clone(),
    event_bus.clone(),
    event_bus.subscribe(),
).run());

Because they are decoupled, extending the runtime is trivial: build a new service, subscribe to the bus, spawn it. A panic in automation does not touch HTTP, a flood of WebSocket clients cannot slow database writes, and each subsystem can be restarted or tuned in isolation. This is "microservices" inside a single binary: lightweight, predictable, and tailored for a Pi Zero.
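A new service is just a struct that owns a bus receiver and a run loop. This skeleton is a hypothetical example (the name `CounterService` is mine), shown with a blocking std receiver where the real services use an async tokio broadcast receiver:

```rust
use std::sync::mpsc::Receiver;

// Hypothetical skeleton for a new bus-driven service. The real ones are
// tokio tasks awaiting a broadcast receiver; the shape is the same.
struct CounterService {
    rx: Receiver<String>,
    seen: usize,
}

impl CounterService {
    fn new(rx: Receiver<String>) -> Self {
        Self { rx, seen: 0 }
    }

    // Drain events until the sending side of the bus hangs up.
    fn run(&mut self) {
        while let Ok(event) = self.rx.recv() {
            self.seen += 1;
            let _ = event; // react to the event here
        }
    }
}
```

Spawning it next to the others is one more `tokio::spawn(...)` line; nothing else in the system has to know it exists.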


Deployment: Make before magic

No fancy CI/CD. Just SSH and Make:

build:          # cross-compile to ARM64
    cross build --target aarch64-unknown-linux-gnu --release

quick-deploy: build transfer-binary
    ssh $(PI_USER)@$(PI_HOST) "sudo systemctl restart home-assistant-rs"

quick-deploy-frontend:
    cd frontend && npm run build && \
    scp -r dist/* $(PI_USER)@$(PI_HOST):$(DEPLOY_DIR)/static/

No CI runner, no hidden scripts. Edit Rust → make quick-deploy. Tail system logs live → make logs. A full build + deploy takes ~30 s, which keeps the cabin observability loop tight.


Real-world numbers after 15 days

| Metric | Value |
| --- | --- |
| Binary size | 3.1 MB (ARM64 release) |
| RAM usage | 12-18 MB (avg 15 MB) |
| CPU usage | 0.5-2 % idle, 5-10 % while automations fire |
| DB size | 4.8 MB (8 sensors, 15 days of readings) |
| Network | ~1 MB/hour (WebSocket + occasional REST) |
| Uptime | 99.8 % (only restarts were planned updates) |
| Devices handled | 8 temp/humidity sensors, 1 switch, 3 rules |

System logs


Conclusion

Hyper over Axum, Makefile over CI, Bash scripts over daemons, CSV logs over databases. Choosing the simplest tool for each job kept the whole stack lean enough to run 24/7 on a Pi Zero 2W, even when the cabin loses internet. Rust and Svelte let me compile everything on a beefy laptop so the Pi only hosts a 3 MB binary and static assets, which means 30‑second deploys, 10‑20 MB of RAM use, and enough headroom to keep adding sensors, automations, and visualizations without sacrificing the off-grid resilience that started this project.


What’s next?

Batch sensor writes and cache the remaining database reads to extend SD card life. Integrate more sensors and commanders. Add log-file rotation. Enhance data visualization. Build a notification service...
Maybe even a native Zigbee driver, to drop zigbee2mqtt and mosquitto entirely?
A lot is possible, and any contribution is more than welcome!


If you want to try it, the repo is here: github.com/Scaraude/home-assistant-rs. Clone it, run "make deploy", and watch your own cabin stats in under an hour.

Thanks for reading, happy to answer questions or hear how you would push this even further.
