The Problem
I've been using Node-RED and n8n for years. They're great tools, but every time I hit a complex workflow — hundreds of nodes, real-time data, high throughput — the same issues kept showing up:
- Memory bloat under sustained load
- No real plugin isolation (a bad plugin crashes everything)
- JSON-over-WebSocket bottlenecks in the editor
- Heavy deployments with tons of npm dependencies
I kept thinking: what if a flow engine were built from scratch with performance and safety as first-class citizens?
So I built z8run — an open-source visual flow engine written in Rust.
What is z8run?
z8run is a self-hosted alternative to n8n and Node-RED. You get a drag-and-drop visual editor, a REST API, WebSocket real-time sync, and a plugin system — but the entire backend is compiled Rust.
The core idea: build, connect, and automate anything — visually, without sacrificing performance or security.
Key Features
- Single binary — no runtime dependencies. Download, run, done.
- Rust + Tokio async runtime — handles thousands of concurrent flows
- WebAssembly plugin sandbox — write plugins in Rust, Go, C, or anything that compiles to WASM. Plugins run in an isolated sandbox with controlled capabilities (network, filesystem, memory limits)
- Binary WebSocket protocol — 11-byte header instead of verbose JSON. The editor stays responsive even with large flows
- AES-256-GCM credential vault — your API keys and secrets are encrypted at rest, not stored in plaintext config files
- 23 built-in nodes across 6 categories, including 10 AI nodes
Architecture
z8run is a Rust workspace with focused crates:
```
z8run/
├── z8run-core       # Flow engine, DAG validation, scheduler
├── z8run-protocol   # Binary WebSocket protocol
├── z8run-storage    # SQLite / PostgreSQL persistence
├── z8run-runtime    # WASM plugin sandbox (wasmtime)
└── z8run-api        # REST + WebSocket server (Axum)
```
Flows are directed acyclic graphs (DAGs). The scheduler compiles them into parallel execution plans using topological ordering — nodes that don't depend on each other run concurrently.
How It Compares
| Feature | z8run | Node-RED | n8n |
|---|---|---|---|
| Language | Rust | Node.js | Node.js |
| WASM plugins | Yes | No | No |
| AI nodes built-in | 10 | Community | Limited |
| Binary protocol | Yes | JSON | JSON |
| Credential vault | AES-256-GCM | Separate | Built-in |
| Single binary deploy | Yes | No | No |
| License | Apache-2.0 / MIT | Apache-2.0 | Sustainable Use |
The biggest difference is the plugin model. In Node-RED, a misbehaving plugin can crash your entire process. In z8run, plugins run inside a wasmtime sandbox with explicit capability grants: you decide whether a plugin can access the network or the filesystem, and how much memory it may use.
Built-in Nodes
z8run ships with 23 nodes out of the box:
- Input: HTTP In, Timer, Webhook (with HMAC-SHA256 signature validation)
- Process: Function, JSON Transform, HTTP Request, Filter
- Output: Debug, HTTP Response
- Logic: Switch (multi-rule routing), Delay
- Data: Database (PostgreSQL, MySQL, SQLite), MQTT
- AI: LLM, Embeddings, Classifier, Prompt Template, Text Splitter, Vector Store, Structured Output, Summarizer, AI Agent, Image Gen
The AI suite is something I'm particularly excited about. You can build LLM-powered workflows visually — chain a prompt template into an LLM node, pipe the output through a classifier, store embeddings in a vector store — all without writing code.
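For a sense of what such a chain amounts to under the hood, a three-node flow might serialize to something along these lines. This schema (and the model name) is purely my own illustration, not z8run's actual flow format:

```json
{
  "name": "classify-support-tickets",
  "nodes": [
    { "id": "tpl", "type": "prompt_template", "params": { "template": "Classify: {{input}}" } },
    { "id": "llm", "type": "llm", "params": { "model": "gpt-4o-mini" } },
    { "id": "cls", "type": "classifier", "params": { "labels": ["bug", "billing", "question"] } }
  ],
  "edges": [
    { "from": "tpl", "to": "llm" },
    { "from": "llm", "to": "cls" }
  ]
}
```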
Quick Start
The fastest way to try it:
```bash
git clone https://github.com/z8run/z8run.git
cd z8run
cp .env.example .env
cargo build --release
cargo run --bin z8run -- serve
```
Or with Docker:
```bash
docker pull ghcr.io/z8run/z8run-api:latest
docker compose up -d
```
The server starts on http://localhost:7700. Hit /api/v1/health to verify it's running, then open the URL in your browser for the visual editor.
```bash
# Create your first flow
curl -X POST http://localhost:7700/api/v1/flows \
  -H "Content-Type: application/json" \
  -d '{ "name": "My First Flow" }'
```
Why Rust?
I get this question a lot. Here's the honest answer:
- Memory safety without GC — flow engines are long-running processes. No GC pauses, no memory leaks from forgotten event listeners.
- Predictable performance — when you're processing thousands of messages per second through a DAG, you need consistent latency, not "usually fast with occasional 200ms GC pauses".
- wasmtime integration — the Rust WASM ecosystem is mature. wasmtime gives us a production-grade sandbox with fine-grained capability control.
- Single binary — `cargo build --release` gives you one binary with everything embedded. No `node_modules`, no runtime to install.
The tradeoff is development speed. Rust is slower to write than TypeScript. But for infrastructure software that runs 24/7, I think it's the right choice.
What's Next
z8run is at v0.2.0 and actively developed. The roadmap includes:
- Plugin marketplace
- Helm chart for Kubernetes
- Flow duplication and undo/redo in the editor
- Node search in the palette
- Rate limiting
If this sounds interesting, check it out:
- GitHub: github.com/z8run/z8run
- Live Demo: app.z8run.org
- Website: z8run.org
- crates.io: crates.io/crates/z8run
Contributions, feedback, and stars are welcome. If you've been looking for a performant, self-hosted flow engine — give z8run a try and let me know what you think.
z8run is dual-licensed under Apache 2.0 and MIT.