Source Code: skwuwu/Analemma-GVM — A governance runtime for AI agents, built on Linux kernel security primitives.
Analemma-GVM
A lightweight secure runtime for autonomous AI agents. Governs every outbound HTTP call, isolates the filesystem, and locks down syscalls — using a Rust proxy and Linux kernel primitives.
I wanted to run multiple autonomous AI agents (such as OpenClaw) for my personal affairs. But every time I let an agent do everything it wanted, there was always a little anxiety. What if it does something it shouldn't? What if it leaks personal information or deletes important data?
Existing answers (such as NemoClaw or OPA+Envoy) required Docker, an embedded Kubernetes cluster, NVIDIA GPUs, or Envoy sidecars. I wanted a lightweight alternative that needs no infrastructure setup and strictly enforces what agents do.
So I built GVM (Governance Virtual Machine) — a lightweight security runtime for AI agents. Two small Rust binaries (CLI + proxy, ~22MB total), no Kubernetes, no service mesh, no GPU. It sits between your agent and its actions…
Why I built this:
I really wanted to run multiple autonomous agents (such as OpenClaw) 24/7 to automate my workflows, but letting them do everything they wanted made me anxious. For example, they can read my .env files and expose them to the internet because of misleading context, call external APIs incorrectly and run up financial costs, or accidentally delete important data. Because they are not deterministic, the probability of such events is never zero. In fact, plenty of real-world security incidents of this kind have already been reported.
Of course, I considered existing security options such as NVIDIA's NemoClaw or Docker/VM isolation for agents, but I didn't want to build and maintain Kubernetes or virtual machine infrastructure. For a solo dev or a small team, those solutions are too heavy. Also, they don't enforce what the agent actually does.
So I decided to build this project (GVM): a lightweight, low-dependency governance runtime for AI agents, made of a Rust proxy combined with Linux kernel features (namespaces, OverlayFS, seccomp-bpf). The Rust proxy governs the agent's actual external I/O, while the kernel stack forces the agent through the proxy and restricts which syscalls it can use.
What it can actually do:
- The proxy intercepts every outbound HTTP/HTTPS call and checks it against your ruleset; unknown calls are delayed or denied.
- On Linux, kernel-level isolation guarantees that the agent cannot bypass the proxy.
- Agents write to an overlay filesystem, so they can't directly change real data.
- GVM is just a binary; it does not ship your data off your server the way SaaS services do.
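To make the rule check concrete, here is a minimal std-only Rust sketch of the kind of host matching such a proxy performs. The names (`Rule`, `Action`, `check`) and the first-match-wins semantics are my illustrative assumptions, not GVM's actual API:

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
pub enum Action {
    Allow,
    Deny,
    Delay, // unknown calls are held for review instead of silently passing
}

/// A single outbound rule: a host pattern and what to do with matches.
pub struct Rule {
    pub pattern: &'static str,
    pub action: Action,
}

/// Does `host` match `pattern`? Supports only a leading `*.` wildcard,
/// which matches the bare domain and any subdomain.
fn matches(pattern: &str, host: &str) -> bool {
    if let Some(suffix) = pattern.strip_prefix("*.") {
        host == suffix || host.ends_with(&format!(".{suffix}"))
    } else {
        host == pattern
    }
}

/// First matching rule wins; anything unmatched is delayed for review.
pub fn check(host: &str, rules: &[Rule]) -> Action {
    rules
        .iter()
        .find(|r| matches(r.pattern, host))
        .map(|r| r.action)
        .unwrap_or(Action::Delay)
}

fn main() {
    let rules = [
        Rule { pattern: "api.anthropic.com", action: Action::Allow },
        Rule { pattern: "*.internal.corp", action: Action::Deny },
    ];
    println!("{:?}", check("api.anthropic.com", &rules)); // Allow
    println!("{:?}", check("db.internal.corp", &rules));  // Deny
    println!("{:?}", check("example.org", &rules));       // Delay
}
```

The key design point this illustrates is the default: an unmatched host falls through to `Delay`, not `Allow`, so a new endpoint an agent invents never slips through unreviewed.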
How it works, briefly:
Agent → GVM Proxy (rule check) → External API
↓ denied/delayed
Merkle-chained audit log
On Linux, kernel-level network isolation forces all traffic through
the proxy — the agent has no userspace path around it. On macOS/Windows, cooperative mode works via HTTP_PROXY injection.
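Cooperative mode essentially amounts to environment injection. A minimal sketch of the idea, where the helper names and proxy address are illustrative assumptions rather than GVM's actual interface:

```rust
use std::process::Command;

/// The proxy env vars most HTTP clients honor. The address here is a
/// hypothetical local proxy endpoint, not GVM's actual default.
fn proxy_env(addr: &str) -> Vec<(String, String)> {
    ["HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy"]
        .iter()
        .map(|k| (k.to_string(), format!("http://{addr}")))
        .collect()
}

/// Spawn the agent with proxy vars injected. This is *cooperative*:
/// a client that ignores HTTP_PROXY bypasses the proxy entirely, which
/// is why Linux kernel-level network isolation is the enforced path.
#[allow(dead_code)]
fn spawn_agent(program: &str, addr: &str) -> std::io::Result<std::process::Child> {
    let mut cmd = Command::new(program);
    for (k, v) in proxy_env(addr) {
        cmd.env(k, v);
    }
    cmd.spawn()
}

fn main() {
    for (k, v) in proxy_env("127.0.0.1:18080") {
        println!("{k}={v}");
    }
}
```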
Typical workflow:
- `gvm run --sandbox --watch` — see what your agent calls
- `gvm suggest` — generate rules from the session
- `gvm run --sandbox` — enforce with kernel isolation
More technical details:
Explaining everything about the project would make this post too long, so the deeper technical details will be covered in separate deep-dive posts. If you are interested before those are written, please read the docs in the GitHub repo!
Feedback welcome!
This is an alpha release, so it's not hardened yet, and it has only been tested with the EC2 + OpenClaw combination so far. I'd be glad to get architecture and technical feedback.

Top comments (5)
this is pretty much the exact problem i ran into. letting agents run 24/7 with access to .env files and real APIs kept me up at night. ended up building something similar but at the application layer - pattern matching on outbound traffic instead of kernel isolation. overlay fs approach is clever though, solves the accidental deletion problem way more cleanly than what i did
The overlayFS + seccomp-bpf combination is a smart approach — you get write isolation without full VM overhead, and the BPF layer gives you fine-grained syscall control that container runtimes expose too coarsely.
The proxy intercepting outbound HTTP before it reaches the network is the piece I find most interesting. Most sandbox solutions focus on compute isolation but treat network I/O as a binary allow/deny. Having a rule evaluation layer in the middle is much more useful for autonomous agents where you want "allowed to call these endpoints, but log and rate-limit everything else."
Two questions: how does the overlay diff get committed or discarded after the agent's run? And does seccomp-bpf add measurable latency to syscall-heavy workloads, or does it stay below the noise floor for typical agent tasks?
Sorry for the late reply; that's a fair question.
Overlay diff:
Upon session termination, the parent process scans the upper layer and classifies files by pattern. Safe output like *.csv is merged automatically, executable files like *.py show a diff and require manual approval, and temporary files under /tmp/ are discarded. Only new files are merged automatically; modifying or deleting existing files always requires manual approval. On SIGTERM, a normal scan runs before exit; on SIGKILL, the files are lost because the upper layer is tmpfs. That said, the UX around OverlayFS is scheduled for improvement and may change in the future.
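The classification step above can be sketched roughly like this. The specific patterns and the `Disposition` name are illustrative, not the actual GVM implementation:

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum Disposition {
    AutoMerge,     // safe outputs, e.g. *.csv
    NeedsApproval, // code/executables, e.g. *.py — show a diff first
    Discard,       // temporary files under /tmp/
}

/// Classify one path found in the overlay upper layer after a session.
/// `existed_before` means the path was present in the lower layer:
/// any modification or deletion of pre-existing files needs approval.
fn classify(path: &str, existed_before: bool) -> Disposition {
    if path.starts_with("/tmp/") {
        return Disposition::Discard;
    }
    if existed_before {
        return Disposition::NeedsApproval;
    }
    if path.ends_with(".py") || path.ends_with(".sh") {
        return Disposition::NeedsApproval;
    }
    if path.ends_with(".csv") {
        return Disposition::AutoMerge;
    }
    // Conservative default for anything unrecognized.
    Disposition::NeedsApproval
}

fn main() {
    println!("{:?}", classify("/work/report.csv", false)); // AutoMerge
    println!("{:?}", classify("/work/tool.py", false));    // NeedsApproval
    println!("{:?}", classify("/tmp/scratch.bin", false)); // Discard
}
```

Note the conservative default: anything the pattern list doesn't recognize falls through to manual approval rather than auto-merge.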
seccomp-BPF Latency:
There is no standalone benchmark for seccomp, but the total overhead of the sandboxed MITM path is about +14 ms per request, which includes TLS termination, SRR checks, and WAL appends. Since seccomp executes BPF inside the kernel, its latency is in the tens of nanoseconds per syscall, and because the agent's workload is HTTP-call-centric rather than syscall-intensive, it stays below the noise floor.
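As a back-of-envelope check on that claim — the ~50 ns per-syscall figure and the 10k-syscalls-per-request workload are illustrative assumptions, only the +14 ms number comes from the measurement above:

```rust
/// Total seccomp filter cost in milliseconds for a given workload.
fn seccomp_overhead_ms(ns_per_syscall: f64, syscalls: f64) -> f64 {
    ns_per_syscall * syscalls / 1e6 // ns -> ms
}

fn main() {
    // Assumed: ~50 ns of BPF filter cost per syscall, and a generous
    // 10,000 syscalls per HTTP request for an agent task.
    let seccomp_ms = seccomp_overhead_ms(50.0, 10_000.0);
    let mitm_overhead_ms = 14.0; // measured MITM-path overhead per request

    // Even under these generous assumptions, seccomp adds ~0.5 ms,
    // well under the 14 ms proxy-path overhead.
    println!("seccomp: ~{seccomp_ms:.2} ms vs MITM path: {mitm_overhead_ms} ms");
}
```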
running agents without syscall constraints is trusting a promise. seccomp-bpf enforces it. the gap between what an agent is supposed to do and what it can actually do - that is the real attack surface.
Awesome contribution, thanks for sharing. I'll give it a try!