I have been building Rune over the past few weeks — a Rust-native AI runtime that acts as a personal agent gateway. It handles multi-provider routing, durable sessions, semantic memory, tool approval workflows, and federated multi-instance orchestration.
It is moving fast — over 1,200 commits since mid-March and I use it daily. But it needs more hands. Here is what it is, where it is going, and how you can help.
What Rune does
Rune sits between your messaging channels (CLI, WebChat, API) and your model providers (OpenAI, Anthropic, Azure AI Foundry, Azure OpenAI). It adds the runtime layer that every serious agent deployment needs:
- Durable sessions — survive restarts, crashes, redeployments. Backed by PostgreSQL, SQLite, or Azure Cosmos DB.
- Tool call approval workflows — agents propose actions, operators approve or reject. Every decision is logged.
- Semantic memory and retrieval — context that persists across conversations.
- Provider routing — switch models per task, failover between providers, manage cost.
- Cron jobs and reminders — scheduled agent work without external orchestration.
- Federated instances — run Rune on your laptop and your server, share state, delegate workloads.
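To make the approval workflow concrete, here is a minimal Rust sketch of propose/approve/reject with an audit log. The types (ApprovalQueue, ToolCall, Decision) are hypothetical illustrations, not Rune's actual API:

```rust
use std::collections::VecDeque;

// Hypothetical types for illustration -- not Rune's real interface.
#[derive(Debug, Clone, PartialEq)]
enum Decision {
    Approved,
    Rejected,
}

#[derive(Debug, Clone)]
struct ToolCall {
    tool: String,
    args: String,
}

// Agents enqueue proposed calls; an operator drains the queue,
// and every decision is appended to an audit log.
struct ApprovalQueue {
    pending: VecDeque<ToolCall>,
    audit_log: Vec<(ToolCall, Decision)>,
}

impl ApprovalQueue {
    fn new() -> Self {
        Self { pending: VecDeque::new(), audit_log: Vec::new() }
    }

    fn propose(&mut self, call: ToolCall) {
        self.pending.push_back(call);
    }

    // Decide on the oldest pending call; only approved calls are
    // returned to the executor, but every decision is logged.
    fn decide(&mut self, decision: Decision) -> Option<ToolCall> {
        let call = self.pending.pop_front()?;
        self.audit_log.push((call.clone(), decision.clone()));
        (decision == Decision::Approved).then_some(call)
    }
}

fn main() {
    let mut queue = ApprovalQueue::new();
    queue.propose(ToolCall { tool: "shell".into(), args: "rm -rf /tmp/scratch".into() });
    queue.propose(ToolCall { tool: "http".into(), args: "GET https://example.com".into() });

    assert!(queue.decide(Decision::Rejected).is_none()); // shell call blocked
    assert!(queue.decide(Decision::Approved).is_some()); // http call proceeds
    assert_eq!(queue.audit_log.len(), 2); // both decisions are logged
}
```

The key property the real workflow shares with this sketch: rejection still writes to the log, so the audit trail covers every decision, not just approvals.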
Why Rust?
The agent gateway is a long-running process that holds sessions, queues approvals, brokers every model call, and runs on your infrastructure 24/7. That is a systems programming problem.
Rust gives Rune predictable performance (no GC pauses), memory safety without a runtime, single-binary deployment, and a type system that catches bugs before they reach production. The tradeoff is that Rust has a steeper learning curve — which is exactly why this project needs contributors who enjoy working in that space.
Get it running in 60 seconds
# One-line install
curl -fsSL https://raw.githubusercontent.com/ghostrider0470/rune/main/scripts/install.sh | sh
# Or from source
cargo build --release --bin rune-gateway --bin rune
cargo run --release --bin rune -- setup --api-key "$OPENAI_API_KEY"
You get a working gateway with WebChat at localhost:8787/webchat, a dashboard, SQLite state, and auto-detected local Ollama. Docker Compose is also supported for zero-config deployment.
Native spells (built-in tools)
Rune ships with compiled-in tools called "spells":
- rust-patterns — query production Rust patterns by topic, tags, or imports
- security-audit — baseline host security checks
- code-review — structured multi-dimensional review for files, diffs, and PRs
- evolver — self-evolution workflow scaffold
These are not plugins. They compile into the binary and run at native speed.
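A minimal sketch of what "compiled into the binary" could look like in Rust. The names here (Spell, SpellRegistry, cast) are assumptions for illustration, not Rune's real interface:

```rust
// Hypothetical trait for a compiled-in tool -- illustrative only.
trait Spell {
    fn name(&self) -> &'static str;
    fn cast(&self, input: &str) -> String;
}

struct SecurityAudit;

impl Spell for SecurityAudit {
    fn name(&self) -> &'static str { "security-audit" }
    fn cast(&self, input: &str) -> String {
        format!("baseline checks for host: {input}")
    }
}

// Because spells are trait objects baked in at compile time, dispatch is
// an in-process vtable call -- no IPC, no dynamic plugin loading.
struct SpellRegistry {
    spells: Vec<Box<dyn Spell>>,
}

impl SpellRegistry {
    fn get(&self, name: &str) -> Option<&dyn Spell> {
        self.spells.iter().find(|s| s.name() == name).map(|b| b.as_ref())
    }
}

fn main() {
    let registry = SpellRegistry { spells: vec![Box::new(SecurityAudit)] };
    let spell = registry.get("security-audit").expect("spell is compiled in");
    assert_eq!(spell.cast("localhost"), "baseline checks for host: localhost");
}
```

Adding a new spell in this model is just another `impl Spell` plus one line in the registry, which is roughly the shape a "new spell" contribution would take.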
Where contributors can make an impact
This is where I need help. Rune is an active, parity-seeking runtime — the core loop works, but there is a lot of surface area to cover. Here are the areas where contributions would move the needle:
Good first issues
- Documentation improvements — the docs/ folder is comprehensive but can always be clearer
- Error messages and diagnostics — make the rune doctor run catch more edge cases
- CLI UX — better help text, output formatting, progress indicators
Medium complexity
- New spells — build a new built-in tool (web scraping, file management, git automation)
- Provider integrations — add support for Groq, Mistral, or local llama.cpp
- Dashboard features — the TypeScript frontend (10% of the codebase) needs love
- Storage backends — help harden the Cosmos DB and Azure SQL paths
Advanced
- Federation protocol — improve multi-instance delegation and health checking
- Semantic memory — better retrieval strategies, embedding pipeline optimizations
- Streaming — improve real-time token streaming across the gateway
- Security hardening — audit the approval workflow, session isolation, auth paths
Non-code contributions
- Try it and file issues — install Rune, break it, tell me what confused you
- Write about it — blog posts, tutorials, comparison articles
- Design feedback — if you have opinions about agent runtime UX, I want to hear them
The architecture at a glance
Channels (CLI / WebChat / API)
|
Rune Gateway (Rust)
├── Session Manager (durable state)
├── Tool Executor (approval queue)
├── Provider Router (multi-model)
├── Memory & Retrieval
├── Scheduler (cron / reminders)
└── Federation (multi-instance)
|
Providers (OpenAI / Anthropic / Azure AI / Ollama)
|
Storage (SQLite / PostgreSQL / Cosmos DB)
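The Provider Router box above can be sketched in a few lines of Rust. The routing table and failover logic here are illustrative assumptions, not Rune's actual implementation:

```rust
// Hypothetical router sketch -- provider names mirror the post,
// but the routing policy is invented for illustration.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Provider {
    OpenAi,
    Anthropic,
    Ollama,
}

// Preferred order per task kind: cheap local model first for summaries,
// strongest model first for code review.
fn route(task: &str) -> Vec<Provider> {
    match task {
        "summarize" => vec![Provider::Ollama, Provider::OpenAi],
        "code-review" => vec![Provider::Anthropic, Provider::OpenAi],
        _ => vec![Provider::OpenAi, Provider::Anthropic, Provider::Ollama],
    }
}

// Try providers in preference order; fall through when one is down.
fn call_with_failover(task: &str, is_up: impl Fn(Provider) -> bool) -> Option<Provider> {
    route(task).into_iter().find(|p| is_up(*p))
}

fn main() {
    // If the local Ollama instance is down, summaries fail over to OpenAI.
    let picked = call_with_failover("summarize", |p| p != Provider::Ollama);
    assert_eq!(picked, Some(Provider::OpenAi));
}
```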
How to start contributing
- Star and clone: github.com/ghostrider0470/rune
- Read docs/contributor/ — development workflow, build instructions, PR conventions
- Check docs/parity/ — see what is shipped vs what is next
- Pick something from the issues or propose your own
- Join the conversation — open a discussion on GitHub, I respond to everything
The codebase is 88% Rust, 10% TypeScript, with comprehensive documentation organized by audience (operator, contributor, reference, strategy). Architecture Decision Records live in docs/adr/.
Why contribute to Rune?
- Real Rust systems programming — not toy examples, actual long-running service code
- AI infrastructure — the agent runtime space is early and moving fast
- Small project, big impact — your PR will not disappear into a 10,000-file monorepo
- Learn by building — async Rust, PostgreSQL, provider APIs, WebSocket streaming, federation protocols
If you have ever wanted to work on the infrastructure layer of AI agents — the part that actually runs them reliably — Rune is a good place to start.
git clone https://github.com/ghostrider0470/rune.git
cd rune
cargo build --release
Let's build this together.
GitHub: ghostrider0470/rune
License: Open Source (MIT)
Top comments (2)
This is a solid architecture. A few thoughts from someone building in a similar space:
On durable sessions with SQLite: If you're targeting edge deployments (robots, IoT), you might hit a wall with SQLite's concurrency model. SQLite uses file-level locking, which can block when you have multiple agent threads trying to write session state simultaneously. We ran into this on a robot project where perception, planning, and control loops all needed concurrent DB access.
On semantic memory: The word 'semantic' is doing a lot of heavy lifting here. Are you using HNSW/IVF for the vector indexes, or something custom? And how do you handle the cold-start problem when there's no prior context to retrieve?
On federated instances: This is the most interesting part to me. The idea of laptop + server sharing state is powerful. Are you using CRDTs, event sourcing, or something else for conflict resolution? This is genuinely one of the harder problems in distributed agent systems.
Would love to see more details on the storage layer. Good luck with Hacktoberfest!
On SQLite concurrency: You're absolutely right that SQLite's file-level locking is a real constraint for high-concurrency edge deployments. Rune's answer is a multi-database architecture: SQLite is the zero-config local fallback, but PostgreSQL is the production target (Cosmos DB for Azure-native). The connection pool and write patterns abstract this — same binary, config-driven backend. For your robot use case (perception + planning + control loops), you'd want PostgreSQL or the async connection pooling we have in PgPool. We'd love contributions on SQLite WAL mode or read replicas if someone wants to push the embedded story further.
On semantic memory: We're using LanceDB (via lancedb crate) for vector storage, not raw HNSW/IVF. It's columnar, handles hot updates better than flat indices, and plays nice with our local-first requirement. For cold-start, we have explicit memory.level modes: file (scan local session files), keyword (FTS5), and semantic (hybrid with graceful fallback). The real challenge we're wrestling with now is fact deduplication — mem0's update semantics sometimes overwrite instead of merge. That's an active lane if you're interested.
On federated instances: Not CRDTs or event sourcing (yet). Current delegation is imperative with lease-based coordination: the main instance picks a peer, reserves a branch/worktree, monitors health, and recovers on timeout. Shared state goes through the REST memory API (/api/v1/memory/*). You're right that conflict resolution is the hard problem; we're moving toward operation-based eventual consistency for vector stores, but right now it's "same user, different devices" rather than true multi-writer federation. We'd love eyes on this — it's genuinely unsolved in the agent space.
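A lease-based delegation loop like the one described can be sketched as follows. The types, TTL, and recovery policy are assumptions for illustration, not Rune's federation code:

```rust
use std::time::{Duration, Instant};

// Illustrative lease-based delegation: the main instance grants a peer a
// time-bounded lease on a task and reclaims it if the peer stops renewing.
struct Lease {
    peer: String,
    expires_at: Instant,
}

struct Delegator {
    lease_ttl: Duration,
    active: Option<Lease>,
}

impl Delegator {
    fn delegate(&mut self, peer: &str, now: Instant) {
        self.active = Some(Lease {
            peer: peer.to_string(),
            expires_at: now + self.lease_ttl,
        });
    }

    // Called on every successful health-check tick from the peer.
    fn renew(&mut self, now: Instant) {
        if let Some(lease) = &mut self.active {
            lease.expires_at = now + self.lease_ttl;
        }
    }

    // Recovery: if the lease expired, take the work back.
    fn reclaim_if_expired(&mut self, now: Instant) -> Option<String> {
        match &self.active {
            Some(lease) if now >= lease.expires_at => {
                let peer = lease.peer.clone();
                self.active = None;
                Some(peer) // caller re-queues the task locally
            }
            _ => None,
        }
    }
}

fn main() {
    let start = Instant::now();
    let mut d = Delegator { lease_ttl: Duration::from_secs(30), active: None };
    d.delegate("laptop-peer", start);
    // Peer goes silent; 31 seconds later the main instance reclaims the task.
    let reclaimed = d.reclaim_if_expired(start + Duration::from_secs(31));
    assert_eq!(reclaimed.as_deref(), Some("laptop-peer"));
}
```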
Storage layer details: happy to expand. We have rune-store with SQLite (rusqlite), PostgreSQL (sqlx), and Cosmos DB (azure_data_cosmos) implementations. The abstraction is a StoreBackend trait, so new backends are ~300 lines.
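For the curious, a minimal sketch of what such a trait boundary could look like. The method set is a guess for illustration, not rune-store's actual StoreBackend signature:

```rust
use std::collections::HashMap;

// Hypothetical shape of a pluggable store trait; the real StoreBackend
// will differ -- this only shows why new backends stay small.
trait StoreBackend {
    fn put(&mut self, key: &str, value: &str);
    fn get(&self, key: &str) -> Option<String>;
}

// An in-memory backend: gateway code never changes, only the impl does.
struct MemoryStore {
    map: HashMap<String, String>,
}

impl StoreBackend for MemoryStore {
    fn put(&mut self, key: &str, value: &str) {
        self.map.insert(key.to_string(), value.to_string());
    }
    fn get(&self, key: &str) -> Option<String> {
        self.map.get(key).cloned()
    }
}

fn main() {
    // Callers hold a trait object, so swapping backends is a config change.
    let mut store: Box<dyn StoreBackend> = Box::new(MemoryStore { map: HashMap::new() });
    store.put("session:42", "{\"state\":\"active\"}");
    assert_eq!(store.get("session:42").as_deref(), Some("{\"state\":\"active\"}"));
}
```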
Thanks for the thoughtful questions!