Introduction
"Same vision of 'multi-model + multi-channel + memory + tools,' rewritten in Rust: single binary, a few MB of memory, millisecond startup, and one-click migration from OpenClaw."
This is Part 26 of the "Open Source Project of the Day" series. Today we explore ZeroClaw (GitHub).
OpenClaw (ClawdBot) is the familiar AI assistant gateway: multiple LLMs, Telegram/Discord/Feishu and other multi-channel support, persistent memory, skills and tools — but it's built on Node.js/TypeScript, and the runtime memory and cold-start time aren't great for Raspberry Pi, low-spec VPS, or edge devices. ZeroClaw occupies the same space as OpenClaw — both are "self-hostable, multi-model + multi-channel + memory + tools" autonomous AI assistant infrastructure — but implemented in 100% Rust, targeting zero overhead: single static binary, <5MB memory in common scenarios, <10ms startup, runnable on ~$10 hardware. It preserves identity compatibility with OpenClaw (IDENTITY/SOUL Markdown files) and data migration (zeroclaw migrate openclaw), and architecturally emphasizes a Trait-driven, Provider/Channel/Tool pluggable design for flexible replacement and extension. This article focuses on ZeroClaw's relationship to OpenClaw and a feature and performance comparison between the two.
What You'll Learn
- ZeroClaw's positioning: a Rust-implemented, lightweight alternative with the same vision as OpenClaw
- Relationship to OpenClaw: identity format compatibility, memory migration, feature mapping
- Feature comparison: Provider/Channel/Memory/Tool, security and runtime, identity and extensibility
- Performance comparison: memory, startup time, binary size, deployment cost (including README benchmark table)
- Quick start, architecture highlights, and suitable use cases
Prerequisites
- Basic understanding of OpenClaw or "AI assistant gateway" (multi-model, multi-channel, memory, tools)
- Comfortable using the Rust toolchain (rustup, cargo) or willing to use the one-click install script
- If you deploy on Raspberry Pi or other low-spec devices, sensitivity to memory and startup time will make ZeroClaw's advantages more apparent
Project Background
Project Introduction
ZeroClaw is a fast, compact, fully autonomous AI assistant infrastructure: single binary, extremely low memory footprint and millisecond cold start by default, supporting multiple AI providers, multiple messaging channels, pluggable memory, tools, and runtimes, with built-in security mechanisms including Gateway pairing, sandboxing, and whitelisting. Its relationship to OpenClaw can be summarized as:
- Same product category: Both are self-hosted multi-model + multi-channel + memory + tools AI assistants/gateways
- Implementation fork: OpenClaw is TypeScript/Node; ZeroClaw is Rust with no Node runtime dependency
- Compatibility and migration: Supports OpenClaw-style identity (Markdown: IDENTITY.md, SOUL.md, etc.) and migrating memory from OpenClaw (`zeroclaw migrate openclaw`)
- Differentiation: Trait-driven architecture, smaller resource footprint, faster startup, optional Docker sandbox, AIEOS identity, subscription-based authentication (OpenAI Codex / Claude Code), and more
The project was built with contributions from Harvard, MIT, Sundai.Club, and other community members.
Core problems the project solves:
- Running an AI assistant gateway on Raspberry Pi, $10 dev boards, or small-memory VPS — OpenClaw's Node runtime and memory usage become a bottleneck
- Need a lightning-fast cold-start CLI/daemon for script and cron use
- Want to remain compatible with existing OpenClaw identity/memory while switching to a lighter runtime
- Need pluggable Provider/Channel/Memory/Tool that can be swapped via configuration or Trait implementation without changing core code
Target user groups:
- Existing OpenClaw users who want lower resource usage or migration to edge/low-spec devices
- Teams needing "OpenClaw-like capabilities" but preferring a single binary with no Node dependency
- Developers doing embedded/edge AI with sensitivity to memory and startup time
- Self-hosting users who prioritize security and auditability (sandboxing, whitelists, pairing, tunnels)
Author/Team Introduction
- Organization/Repository: zeroclaw-labs (GitHub), main repo zeroclaw-labs/zeroclaw
- Acknowledgments: README notes contributions from Harvard, MIT, Sundai.Club, and other community members
- Project creation date: February 2026 (GitHub shows around 2026-02-13)
Project Stats
- ⭐ GitHub Stars: 11.6k+
- 🍴 Forks: 1.1k+
- 📦 Version: main branch as trunk, no separate version number
- 📄 License: MIT (noted in README and LICENSE)
- 🌐 Website: No independent website; primarily GitHub and the repo's docs/ directory
- 💬 Community: GitHub Issues
Main Features
Core Purpose
ZeroClaw's core purpose is to provide the same "multi-model + multi-channel + memory + tools" capabilities as OpenClaw, reducing overhead on resources and performance while adding pluggability and security:
- Multiple Providers: 28+ built-in providers (plus aliases), supports OpenAI-compatible and Anthropic custom endpoints
- Multiple Channels: CLI, Telegram, Discord, Slack, Mattermost, iMessage, Matrix, Signal, WhatsApp, Email, IRC, Lark, DingTalk, QQ, Webhook, and more
- Memory system: SQLite hybrid retrieval (vector + FTS5), Lucid bridge, Markdown files, or explicitly disabled (none); supports migration from OpenClaw (`migrate openclaw`)
- Tools and runtime: shell/file/memory, cron, git, pushover, browser, http_request, screenshot, composio, etc.; runtime supports native or Docker sandbox
- Security: Gateway pairing code, sandbox, whitelist, rate limiting, workspace filesystem scope, encrypted key storage
- Identity: Default OpenClaw format (Markdown); optional AIEOS v1.1 (JSON) for cross-system persona portability
- Subscription authentication: OpenAI Codex (ChatGPT subscription), Claude Code / Anthropic setup-token, multiple accounts with encrypted storage
Use Cases
- Smooth migration from OpenClaw
  - Keep existing SOUL/IDENTITY and memory, import with `zeroclaw migrate openclaw`, and run with fewer resources
- Edge and low-cost deployment
  - Raspberry Pi, $10 boards, small-memory VPS; single binary, no Node, significantly lower cold-start and memory vs. OpenClaw
- Scripts and automation
  - CLI cold start is extremely fast, suitable for frequent `zeroclaw agent`/`zeroclaw status` calls in cron, CI, or scripts
- Security and compliance first
  - Local binding by default, pairing, workspace restriction, Docker sandbox, whitelist — meets "no public internet exposure, minimal permissions" requirements
- Multiple identities and subscriptions
  - Multiple profiles (OpenAI Codex, Anthropic), encrypted storage, suitable for multi-account setups or a team-unified gateway
Quick Start
Environment: Rust toolchain (or use the one-click install script in the repo to install system dependencies + Rust + ZeroClaw).
One-click install (review the script first):
```bash
curl -LsSf https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/main/scripts/install.sh | bash
```
Build from source and install:
```bash
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
cargo build --release --locked
cargo install --path . --force --locked
export PATH="$HOME/.cargo/bin:$PATH"
```
Quick configuration and chat:
```bash
# Non-interactive quick setup (API key + provider)
zeroclaw onboard --api-key sk-... --provider openrouter
# Or interactive wizard
zeroclaw onboard --interactive
# Single message
zeroclaw agent -m "Hello, ZeroClaw!"
# Interactive chat
zeroclaw agent
# Start the Gateway (default 127.0.0.1:3000)
zeroclaw gateway
# Background daemon (includes channels and scheduled tasks)
zeroclaw daemon
```
Migrate memory from OpenClaw:
```bash
zeroclaw migrate openclaw --dry-run  # Preview
zeroclaw migrate openclaw            # Execute the migration
```
Core Features
- Single binary, extremely low memory
  - In release builds, `zeroclaw --help`/`zeroclaw status` peaks at ~3.9–4.1MB of memory; the README states <5MB in common scenarios, roughly 1% of OpenClaw (>1GB including Node)
- Millisecond cold start
  - The same documentation example shows 0.01–0.02s real time; the README benchmark table shows <10ms on a 0.8GHz edge device, compared to OpenClaw's >500s cold start
- Trait-driven, fully pluggable
  - Provider, Channel, Memory, Tool, Runtime, Tunnel, and Identity are all Traits — swap implementations or extend via configuration without touching core code
- Secure by default
  - Gateway defaults to 127.0.0.1, pairing code, workspace restriction, sensitive-path blocking, optional Docker sandbox; the README provides a security checklist and self-test recommendations (e.g., nmap)
- OpenClaw compatibility and migration
  - Identity format defaults to OpenClaw Markdown; `migrate openclaw` imports OpenClaw memory into ZeroClaw
- Subscriptions and multiple accounts
  - OpenAI Codex OAuth, Anthropic setup-token, multiple profiles, encrypted storage (auth-profiles.json + .secret_key)
ZeroClaw vs. OpenClaw: Relationship and Comparison
Relationship Overview
- Same product form: Both are "self-hosted AI assistant gateways" — multi-model, multi-channel, persistent memory, skills/tools, optional autonomous operation (daemon).
- Different implementation: OpenClaw is TypeScript/Node.js ecosystem; ZeroClaw is Rust, single static binary, no Node dependency.
- Compatible and migratable: ZeroClaw supports OpenClaw-style identity by default (IDENTITY.md, SOUL.md, etc.) and provides `zeroclaw migrate openclaw` for memory migration — easy to switch from OpenClaw to ZeroClaw without losing persona and history.
- Positioning difference: OpenClaw has a mature ecosystem with many Skills and third-party integrations; ZeroClaw focuses on resources and performance (small memory footprint, fast startup, low-cost hardware), plus its Trait architecture, secure defaults, and extensibility such as AIEOS identity and subscription authentication.
Feature Comparison
| Dimension | ZeroClaw | OpenClaw |
|---|---|---|
| Language/runtime | Rust, single binary, no runtime dependency | TypeScript/Node.js, requires Node environment |
| AI providers | 28+ built-in + custom OpenAI/Anthropic endpoints | Multiple models (Claude/GPT/Gemini/Ollama, etc.) |
| Messaging channels | CLI, Telegram, Discord, Slack, WhatsApp, iMessage, Matrix, Signal, Feishu/DingTalk/QQ, etc. | Telegram, Discord, WhatsApp, Feishu, Slack, etc. |
| Memory | SQLite hybrid retrieval, Lucid, Markdown, none; can import OpenClaw data | Own memory implementation, can work with Claude-Mem, etc. |
| Tools/skills | shell, file, memory, cron, git, browser, http, composio, etc.; TOML + SKILL.md | Skill system (Markdown), rich community Skills |
| Identity format | Default OpenClaw Markdown; optional AIEOS v1.1 JSON | Markdown (SOUL/IDENTITY, etc.) |
| Security | Pairing, sandbox, workspace restriction, Docker runtime, whitelist, encrypted keys | Config and plugin dependent, can work with tunnels and permissions |
| Subscriptions/multiple accounts | OpenAI Codex, Anthropic multi-Profile, encrypted storage | Mostly API key configuration |
| Extension approach | Trait + config, Provider/Channel/Memory/Tool swappable | Plugins and Skills, Node ecosystem |
Performance Comparison (Based on README Benchmark Table)
The README's locally reproducible benchmarks (normalized to a notional 0.8GHz edge device) show the following order-of-magnitude differences:
| Metric | OpenClaw | NanoBot | PicoClaw | ZeroClaw |
|---|---|---|---|---|
| Language | TypeScript | Python | Go | Rust |
| Memory (typical) | >1GB (including Node) | >100MB | <10MB | <5MB |
| Cold start (0.8GHz concept) | >500s | >30s | <1s | <10ms |
| Binary size | ~28MB (dist) | N/A | ~8MB | ~3.4MB |
| Low-cost hardware | Mac Mini-class $599 | Linux SBC ~$50 | ~$10 board | ~$10-class hardware |
- Memory: ZeroClaw typical CLI/status peaks at ~3.9–4.1MB; OpenClaw requires Node, overall commonly >1GB.
- Startup: ZeroClaw is millisecond-level; OpenClaw cold start (especially first-time dependency loading/service startup) can reach hundreds of seconds.
- Deployment cost: ZeroClaw targets ~$10-class boards and small-memory VPS; OpenClaw is more commonly deployed on Mac Mini or larger instances.
Reproduction method (from the README): after `cargo build --release`, run `ls -lh target/release/zeroclaw` to check the binary size, and measure memory and time locally with `/usr/bin/time -l target/release/zeroclaw --help` / `zeroclaw status`.
When to Choose ZeroClaw vs. OpenClaw
- Prefer ZeroClaw: Edge/Raspberry Pi/low-spec VPS, sensitive to memory and cold start, want single binary with no Node dependency, need to migrate OpenClaw memory while preserving identity, value Trait pluggability and secure defaults, need OpenAI Codex/Claude subscription multi-account support.
- Prefer OpenClaw: Already deeply dependent on OpenClaw Skills and Node ecosystem, need specific community Skills/plugins, team has unified Node DevOps and debugging, not sensitive to "minimum memory/fastest cold start."
Detailed Project Analysis
Architecture Highlights: Traits and Pluggability
The README summarizes each subsystem with its Trait, built-in implementations, and extension approach:
- Provider: 28+ built-in plus aliases; extend with `custom:https://...` or `anthropic-custom:https://...`
- Channel: Multiple messaging channels; can connect to any messaging API
- Memory: SQLite / Lucid / Markdown / none, can connect other persistence backends
- Tool: shell, file, memory, cron, git, browser, http, composio, etc., extensible
- Runtime: Native or Docker (sandbox); future plans for WASM/edge
- Security: Pairing, sandbox, whitelist, rate limiting, workspace and sensitive path restrictions
- Identity: OpenClaw (Markdown) or AIEOS v1.1 (JSON)
- Tunnel: None, Cloudflare, Tailscale, ngrok, custom
Configuration is primarily via `~/.zeroclaw/config.toml`, generated by `onboard`; API keys and other secrets can be stored encrypted under `~/.zeroclaw/`.
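To make the Trait-driven design concrete, here is a minimal Rust sketch of the pattern the README describes: each subsystem is a trait, and the concrete implementation is selected from a configuration string at startup. All names (`Provider`, `provider_from_config`, the `custom:` prefix handling) are illustrative assumptions, not ZeroClaw's actual internal API.

```rust
// Hypothetical sketch of Trait-driven pluggability: the Provider backend is
// chosen from a config string, mirroring specs like `custom:https://...`.

trait Provider {
    fn name(&self) -> &str;
    fn complete(&self, prompt: &str) -> String;
}

struct OpenRouter;
impl Provider for OpenRouter {
    fn name(&self) -> &str { "openrouter" }
    fn complete(&self, prompt: &str) -> String {
        // A real implementation would call the provider's HTTP API here.
        format!("[openrouter] {prompt}")
    }
}

struct CustomEndpoint { base_url: String }
impl Provider for CustomEndpoint {
    fn name(&self) -> &str { "custom" }
    fn complete(&self, prompt: &str) -> String {
        format!("[custom@{}] {prompt}", self.base_url)
    }
}

/// Swap implementations via configuration, without touching core code.
fn provider_from_config(spec: &str) -> Box<dyn Provider> {
    match spec.split_once(':') {
        Some(("custom", url)) => Box::new(CustomEndpoint { base_url: url.to_string() }),
        _ => Box::new(OpenRouter),
    }
}

fn main() {
    let p = provider_from_config("custom:https://llm.internal");
    println!("{}", p.complete("hello"));
}
```

Because the core only depends on `Box<dyn Provider>`, adding a new backend means implementing one trait and extending the config match — the same extension story applies to Channel, Memory, Tool, and the other subsystems.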
Memory and OpenClaw Migration
- Memory layer: Vector + FTS5 keyword in SQLite, hybrid weighting, EmbeddingProvider Trait, LRU cache, safe index rebuilding.
- `zeroclaw migrate openclaw`: Exports from OpenClaw and imports the memory into ZeroClaw; supports `--dry-run` preview.
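"Hybrid weighting" of vector and FTS5 keyword results can be pictured as a weighted blend of the two scores. The sketch below is an assumption-laden illustration — ZeroClaw's actual SQLite-backed scoring and weights may differ:

```rust
// Illustrative hybrid-retrieval scoring: blend vector similarity with a
// keyword (FTS) relevance score. Function names and the weighting scheme
// are assumptions for explanation, not ZeroClaw's actual implementation.

/// Cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

/// Weighted sum of the vector score and the keyword (FTS5) score.
fn hybrid_score(vec_score: f32, fts_score: f32, vec_weight: f32) -> f32 {
    vec_weight * vec_score + (1.0 - vec_weight) * fts_score
}

fn main() {
    let query = [1.0, 0.0];
    let doc = [0.6, 0.8];
    let v = cosine(&query, &doc); // 0.6
    // 70% semantic similarity, 30% keyword relevance.
    println!("hybrid = {}", hybrid_score(v, 0.5, 0.7));
}
```

The appeal of this style of hybrid ranking is that keyword matches rescue queries where embeddings miss exact identifiers, while vector similarity rescues paraphrased queries that share no keywords.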
Security and Gateway
- Gateway defaults to binding only 127.0.0.1; for public access, use tunnels (Tailscale, Cloudflare, ngrok, etc.) or explicitly enable `allow_public_bind`.
- Pairing: A 6-digit one-time code is printed on startup and exchanged for a Bearer token via `POST /pair`; Webhook requests must carry `Authorization: Bearer <your-token>` in their headers.
- Channel whitelist: Empty means deny all; `"*"` explicitly allows all; precise whitelisting by Telegram/Discord IDs is recommended.
- Workspace restriction, sensitive-path blocking, Docker sandbox — see the README's security checklist and configuration instructions.
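The whitelist and pairing semantics above can be sketched in a few lines of Rust. This is a behavioral illustration of the documented rules (deny-all on empty, `"*"` for allow-all, 6-digit code), with hypothetical function names rather than ZeroClaw's actual internals:

```rust
// Illustrative sketch of the documented security semantics; names are
// assumptions, not ZeroClaw's real API.

/// A 6-digit one-time pairing code, zero-padded (e.g. from a random u32).
fn format_pairing_code(n: u32) -> String {
    format!("{:06}", n % 1_000_000)
}

/// Whitelist rules: empty list denies everyone; "*" allows everyone;
/// otherwise only exactly listed sender IDs are allowed.
fn is_allowed(whitelist: &[&str], sender_id: &str) -> bool {
    !whitelist.is_empty()
        && (whitelist.contains(&"*") || whitelist.contains(&sender_id))
}

fn main() {
    println!("pairing code: {}", format_pairing_code(42));
    assert!(!is_allowed(&[], "123"));     // empty = deny all
    assert!(is_allowed(&["*"], "123"));   // "*" = allow all
    assert!(is_allowed(&["123"], "123")); // exact ID match
    assert!(!is_allowed(&["456"], "123"));
}
```

Note the fail-closed default: an unconfigured (empty) whitelist rejects every sender, so exposure requires an explicit opt-in — the same philosophy as the 127.0.0.1-only Gateway binding.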
Project Resources
Official Resources
- 🌟 GitHub: github.com/zeroclaw-labs/zeroclaw
- 📚 Documentation: Repo docs/README.md, docs/SUMMARY.md, commands/config/channels, etc.
- 🐛 Issues: GitHub Issues
Related Resources
- OpenClaw main repository
- OpenClawInstaller (OpenClaw one-click deployment)
- AIEOS (identity specification, ZeroClaw supports v1.1)
Who Should Use This
- OpenClaw users: Want lower resource usage, migration to edge or low-spec machines, or want to keep identity/memory while switching runtimes
- Edge/embedded and low-cost deployment: Raspberry Pi, $10 boards, small-memory VPS, need small binary and fast startup
- Security and compliance first: Need pairing, sandbox, whitelist, workspace restriction, tunnel-first self-hosted gateway
- Rust and architecture enthusiasts: Want to reference a Trait-driven, pluggable Provider/Channel/Memory AI assistant implementation
Feel free to visit my personal homepage for more useful knowledge and interesting products.