Spacebot: The Multi-Agent AI Platform Built in Rust for Teams
Written by Arshdeep Singh
Most AI assistant platforms are built around a single agent and a single conversation. You ask a question, the model answers, the session ends. Simple, stateless, and ultimately limited.
Spacebot takes a fundamentally different approach. It's a multi-agent AI orchestration platform — built in Rust as a single binary — designed for teams that want persistent, context-aware AI infrastructure running alongside their work, not just responding to ad-hoc queries.
Built in Rust. Built to Last.
The choice of Rust is a signal. Spacebot isn't another Python wrapper around an LLM API — it's a serious piece of infrastructure built for performance, reliability, and long-term operation.
Tech stack:
- Rust + Tokio — async runtime for high-throughput, low-latency agent orchestration
- SQLite — embedded database for persistent storage without external dependencies
- LanceDB — vector database for semantic memory and embedding search
- FastEmbed — on-device embedding generation (no external API calls for embeddings)
- Serenity — Discord gateway integration for team-facing interface
The result is a single binary that runs everywhere, boots fast, and doesn't require a Kubernetes cluster to operate.
The Three-Agent Architecture
Spacebot's most interesting design decision is how it structures agent cognition across three distinct roles:
🎭 Face Agent
The user-facing agent. This is the personality your team interacts with — friendly, contextual, and responsive. The Face Agent handles conversation, interprets intent, and coordinates responses. Think of it as the "front desk" of your AI infrastructure.
🧠 Conscience
An independent thinking fork that runs in parallel to the Face Agent. While the Face Agent is handling the immediate response, the Conscience is evaluating, questioning, and sometimes pushing back. This separation prevents the sycophantic drift common in single-agent systems — the agent that just agrees with everything because it's optimizing for immediate approval.
⚙️ Worker
Pure execution. No personality, no reasoning overhead — just doing the thing. The Worker handles tool calls, API requests, file operations, and any task where speed and reliability matter more than conversational quality.
This three-agent architecture means Spacebot isn't just answering questions — it's thinking about them from multiple angles while simultaneously acting on them.
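The split described above can be sketched with plain Rust channels. This is an illustration only: `face_handle`, `Reply`, and the message shapes are invented for this post, and std threads stand in for the Tokio tasks Spacebot actually uses. The point is the pattern, not the real API — the Face delegates execution and evaluation in parallel, then composes both results.

```rust
use std::sync::mpsc;
use std::thread;

/// What comes back to the user: the Face agent's reply, annotated with
/// the Conscience's parallel critique. Names are illustrative, not
/// Spacebot's real API.
pub struct Reply {
    pub answer: String,
    pub critique: String,
}

/// Sketch of the three-role split: the Face forks a Worker thread for
/// pure execution and a Conscience thread for independent evaluation,
/// then merges both results into a single reply.
pub fn face_handle(request: &str) -> Reply {
    let (tx, rx) = mpsc::channel();

    // Worker: pure execution, no reasoning overhead.
    let wtx = tx.clone();
    let req = request.to_string();
    thread::spawn(move || {
        let result = format!("executed: {req}");
        wtx.send(("worker", result)).unwrap();
    });

    // Conscience: an independent fork that questions the request
    // while the Worker is already acting on it.
    let req = request.to_string();
    thread::spawn(move || {
        let note = format!("is '{req}' actually what the user needs?");
        tx.send(("conscience", note)).unwrap();
    });

    // Face: collects both channels and composes the outward response.
    let mut answer = String::new();
    let mut critique = String::new();
    for _ in 0..2 {
        match rx.recv().unwrap() {
            ("worker", r) => answer = r,
            (_, n) => critique = n,
        }
    }
    Reply { answer, critique }
}
```

Because the Conscience runs on its own fork rather than inline, its pushback costs no latency on the user-facing path.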
The Cortex: Memory That Actually Works
The feature that sets Spacebot apart from almost every other platform is The Cortex — a persistent, cross-conversation memory system built on an 8-dimensional memory graph.
Most AI systems have no memory between sessions. Some have basic summarization. Spacebot has a structured knowledge graph that tracks:
- Facts and entities mentioned across all conversations
- Relationships between topics, people, and projects
- Temporal patterns (what gets asked when, what follows what)
- Team-level context that persists across users
Every 60 minutes, the Cortex synthesizes a fresh briefing — a structured summary of what's been discussed, what decisions were made, and what needs attention. This means your AI assistant actually knows what happened yesterday without you having to re-explain it.
For teams, this is transformative. Instead of every team member bootstrapping context from scratch, Spacebot carries institutional memory.
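The hourly briefing pass can be pictured as a fold over recent memory entries. Everything below is an assumption for illustration: Spacebot's real Cortex schema (the 8-dimensional graph, LanceDB embeddings) isn't shown in this post, so `MemoryEntry`, `synthesize_briefing`, and the topic grouping are stand-ins for the actual synthesis logic.

```rust
use std::collections::BTreeMap;

/// One node in a simplified memory graph: a fact tied to a topic and a
/// time. Field names are illustrative; the real Cortex tracks far more
/// dimensions (entities, relationships, temporal patterns).
pub struct MemoryEntry {
    pub topic: String,
    pub fact: String,
    pub hour: u64, // hours since epoch, standing in for a real timestamp
}

/// Sketch of a briefing pass: keep only entries newer than a cutoff,
/// group them by topic, and emit a structured summary.
pub fn synthesize_briefing(entries: &[MemoryEntry], since_hour: u64) -> String {
    let mut by_topic: BTreeMap<&str, Vec<&str>> = BTreeMap::new();
    for e in entries.iter().filter(|e| e.hour >= since_hour) {
        by_topic.entry(&e.topic).or_default().push(&e.fact);
    }
    let mut briefing = String::from("Briefing:\n");
    for (topic, facts) in by_topic {
        briefing.push_str(&format!("- {topic}: {}\n", facts.join("; ")));
    }
    briefing
}
```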
10 LLM Providers, All BYOK
Spacebot supports 10 LLM providers out of the box:
- OpenAI (GPT-4, GPT-4o)
- Anthropic (Claude Sonnet, Claude Opus)
- Google (Gemini)
- Groq
- Mistral
- DeepSeek
- Ollama (local models)
- And more
All plans are Bring Your Own Key (BYOK) — Spacebot uses your API keys only to route requests to your chosen provider. You control costs, you choose models, you own the data.
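A BYOK setup might look something like the configuration sketch below. The file name, keys, and structure here are hypothetical, invented for illustration; check Spacebot's own documentation for the real format.

```toml
# Hypothetical spacebot.toml — illustrative only, not the real schema.
[providers.openai]
api_key_env = "OPENAI_API_KEY"      # the key stays in your environment
default_model = "gpt-4o"

[providers.anthropic]
api_key_env = "ANTHROPIC_API_KEY"
```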
OpenClaw Skills Compatibility
Spacebot is compatible with OpenClaw skills — the skill/tool ecosystem built for AI agents running in OpenClaw environments. This means if you've already built custom integrations or automations as OpenClaw skills, they drop straight into Spacebot without modification.
Pricing
| Plan | Price | Best For |
|---|---|---|
| Pod | $29/mo | Small teams, personal use |
| Outpost | $59/mo | Growing teams, more resources |
| Nebula | $129/mo | Large teams, high-volume |
| Self-host | Free | Full control via Docker |
All plans are BYOK. The self-hosted option via Docker gives you the full Spacebot experience on your own infrastructure — no subscription required if you're comfortable running your own stack.
Self-Hosting
```bash
docker pull spacebot/spacebot:latest
docker run -d \
  -e OPENAI_API_KEY=your_key \
  -v spacebot_data:/data \
  -p 3000:3000 \
  spacebot/spacebot:latest
```
The single-binary architecture means there's no complex orchestration setup. One container, one persistent volume, and you're running a full multi-agent AI platform.
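If you prefer Compose, the same run flags translate directly into a minimal `docker-compose.yml`. This is a sketch derived from the `docker run` command above, not an official file shipped by the project.

```yaml
# Sketch derived from the docker run flags above — not an official file.
services:
  spacebot:
    image: spacebot/spacebot:latest
    environment:
      - OPENAI_API_KEY=your_key
    volumes:
      - spacebot_data:/data
    ports:
      - "3000:3000"

volumes:
  spacebot_data:
```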
Who Is Spacebot For?
- Engineering teams that want a persistent AI assistant with real team memory
- Startups looking for AI infrastructure that scales without rebuilding
- Self-hosters who want control over their AI stack
- Discord-based communities that want embedded AI with memory and multi-agent reasoning
- Builders who value Rust-quality reliability over Python-ecosystem convenience
Why This Matters
The current generation of AI assistants treats every session as a blank slate. That's fine for consumer apps, but it's a significant limitation for teams doing complex, ongoing work.
Spacebot's bet is that the next generation of team AI tools will look less like chatbots and more like colleagues — entities with memory, independent thinking, and the ability to coordinate specialized work across multiple execution contexts.
The three-agent architecture, the Cortex memory system, and the Rust foundation suggest a team that's building for that future rather than optimizing for today's demo.