Building a Self-Hosted AI Developer Assistant with OpenClaw
I wanted an assistant that could do real engineering work, not just answer prompts. So I turned a VPS into a self-hosted AI operator that can open GitHub issues, implement fixes, push branches, open PRs, write docs, and even generate blog drafts like this one.
This post walks through how I set up OpenClaw, why I routed coding work through Edith, and what actually worked (and what broke) once the system started doing real tasks.
What OpenClaw actually is
OpenClaw is a hostable agent runtime. Think of it as an orchestration layer between:
- messaging channels (Signal, Discord, etc.)
- tools (shell, git, browser, APIs)
- memory/context
- specialized workers (subagents / coding agents)
It’s not “just a chatbot.” It’s a controllable runtime that can:
- parse instructions from chat
- call tools with guardrails
- run code tasks in isolated worker sessions
- produce artifacts (commits, PRs, docs, drafts)
In practice, OpenClaw gives you a policy-aware control plane for AI actions. You can keep high-trust actions internal, require confirmation for risky external writes, and still move quickly on repetitive engineering work.
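That "confirmation for risky external writes" idea is easy to sketch. Here's a minimal, hypothetical policy gate; the action names and the three-tier table are my illustration, not OpenClaw's actual config format:

```python
# Hypothetical policy gate for agent tool calls. Action names and the
# allow/confirm/deny tiers are illustrative, not OpenClaw's real config.
SAFE_ACTIONS = {"read_file", "git_status", "run_tests"}
CONFIRM_ACTIONS = {"git_push", "open_pr", "delete_branch"}

def gate(action: str, confirmed: bool = False) -> str:
    """Return 'allow', 'confirm', or 'deny' for a requested tool action."""
    if action in SAFE_ACTIONS:
        return "allow"
    if action in CONFIRM_ACTIONS:
        # External writes go through only after explicit confirmation.
        return "allow" if confirmed else "confirm"
    return "deny"  # unknown actions are denied by default
```

The key design choice is the default-deny branch: anything you didn't explicitly classify gets blocked instead of silently allowed.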
Why I wanted a self-hosted agent
I wanted three things cloud copilots don’t really give me:
- Control over execution: I choose where commands run, where secrets live, and which repos the agent can touch.
- Composable automation: one assistant that can coordinate issues, PRs, docs, and content, not five disconnected tools.
- Predictable ops + lower marginal cost: a VPS, containers, and my own workflow are easier to reason about than constantly changing SaaS limits.
Also: I don’t want to copy/paste the same “please create branch, implement, test, open PR” instructions every day.
My architecture
Here’s the shape that worked:
- VPS running Dockerized workloads
- OpenClaw as the agent runtime
- Nginx Proxy Manager for ingress/TLS
- GitHub auth + repo clones on host
- Skill modules for repeatable tasks (e.g., DEV draft creation)
- Subagents for longer coding jobs
- Edith as the routing layer for coding tasks
How Edith fits in
Friday is my main agent and orchestrator. When I give a non-trivial coding task, Friday delegates it to Edith as the coding worker, then Edith reports back to Friday with results.
That separation matters:
- Friday stays responsive in the main conversation
- coding jobs run in focused worker context via Edith
- long jobs complete asynchronously
- failures are isolated and easy to report
In short: Friday orchestrates, Edith executes coding tasks, and Friday delivers the final outcome back to me.
A simplified flow
- I send a request in chat (e.g., “fix oldest open issue in repo X”).
- OpenClaw reads policy/skills and validates tool path.
- Task is delegated through Edith to a coding worker.
- Worker explores code, edits files, runs checks.
- Worker pushes branch + opens PR.
- Assistant reports back with links and test output.
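The steps above can be sketched as one orchestration function. Everything here is stand-in logic for illustration; the real delegation goes through OpenClaw's runtime and Edith, not these function names:

```python
# Illustrative sketch of the delegation flow. Step names and the return
# shape are stand-ins, not real OpenClaw or Edith APIs.
def handle_request(repo: str, issue_id: int) -> dict:
    # 1. Policy/skill validation happens before any tool call.
    plan = [f"validate policy for {repo}", "delegate to coding worker"]
    # 2. The worker does the focused coding steps in its own session.
    branch = f"issue-{issue_id}-fix"
    plan += [f"create branch {branch}", "implement + run checks", "push + open PR"]
    # 3. The orchestrator reports the artifacts back to the chat thread.
    return {
        "branch": branch,
        "plan": plan,
        "report": f"PR opened, closes #{issue_id}",
    }
```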
Automating PRs + issues
This is where the setup started paying for itself.
Create GitHub issues from backlog notes
Instead of manually drafting issue bodies, I feed rough notes and constraints. The assistant turns them into:
- clear problem statements
- acceptance criteria
- implementation hints
- priority labels / metadata
That keeps issue quality high and reduces “what does done mean?” ambiguity before code starts.
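A sketch of that note-to-issue transformation, assuming a simple markdown body format (the section layout is my convention, not a GitHub requirement):

```python
# Turn rough backlog notes into a structured issue body.
# The "Problem" / "Acceptance criteria" layout is an assumed convention.
def draft_issue(notes: str, criteria: list[str]) -> str:
    body = "## Problem\n\n" + notes.strip() + "\n\n## Acceptance criteria\n"
    for c in criteria:
        body += f"- [ ] {c}\n"  # task-list checkboxes render on GitHub
    return body
```

Feeding in "Login fails on Safari" plus two criteria yields a body a reviewer can check off line by line, which is exactly what kills the "what does done mean?" ambiguity.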
Implement issue, branch, test, open PR
For implementation flows, I use a consistent contract:
- find target issue
- restate requirements
- create branch (`issue-<id>-<slug>`)
- implement minimal targeted fix/feature
- add/update tests
- run lint/test/typecheck/build
- commit with standardized message
- push + open PR with `Closes #<id>`
The big win is consistency. Same structure every time means less review overhead and fewer missing steps.
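The branch-name and PR-body conventions from that contract are simple enough to pin down in code. This is a sketch of my convention, with a hypothetical slug rule (lowercase alphanumeric words joined by hyphens):

```python
import re

def branch_name(issue_id: int, title: str) -> str:
    """Build an issue-<id>-<slug> branch name from the issue title."""
    # Collapse anything that isn't a lowercase letter or digit into hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"issue-{issue_id}-{slug}"

def pr_body(issue_id: int, summary: str, test_notes: str) -> str:
    """Standardized PR body with a closing keyword GitHub recognizes."""
    return (
        f"## Summary\n{summary}\n\n"
        f"## Tests\n{test_notes}\n\n"
        f"Closes #{issue_id}"
    )
```

`Closes #<id>` in the PR body is what makes GitHub auto-close the issue on merge, so the template guarantees no issue is left dangling.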
Generate docs while context is fresh
After implementation, I often have the agent update:
- README sections
- operator notes
- migration/setup steps
- changelog snippets
Because the assistant just touched the code, it usually documents changes more accurately than delayed manual docs.
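For the changelog snippets specifically, I have it emit a consistent section shape. A minimal sketch, assuming a Keep a Changelog-style layout (the categories and formatting are that convention, not anything OpenClaw enforces):

```python
from datetime import date

def changelog_entry(version: str, changes: dict[str, list[str]]) -> str:
    """Render one versioned changelog section from categorized changes."""
    out = f"## [{version}] - {date.today().isoformat()}\n"
    for section, items in changes.items():  # e.g. "Added", "Fixed"
        out += f"\n### {section}\n"
        out += "".join(f"- {item}\n" for item in items)
    return out
```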
Generate draft blog posts from shipped work
I also reuse this pipeline for writing. The assistant can create DEV.to drafts from completed features:
- summarize what changed
- explain architecture decisions
- include command snippets
- leave post unpublished for review
That removes the “I should write about this later” graveyard of ideas.
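The "leave it unpublished" part comes down to DEV's front matter: drafts are just posts with `published: false`. A sketch of the wrapper step, with the body text assumed to come from the agent's summary of the shipped work:

```python
def dev_draft(title: str, tags: list[str], body: str) -> str:
    """Wrap a generated post body in DEV.to front matter, left as a draft."""
    front = (
        "---\n"
        f"title: {title}\n"
        "published: false\n"          # stays a draft until I review it
        f"tags: {', '.join(tags)}\n"
        "---\n\n"
    )
    return front + body
```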
Lessons learned
1) Guardrails are not optional
Give the agent broad read access, but put friction on destructive/external writes unless explicitly requested. “Fast + safe” beats “fully autonomous.”
2) Small, deterministic tasks outperform vague prompts
“Fix issue #24 with acceptance criteria + tests + PR format” works better than “improve this codebase.”
3) Subagents are worth it for context isolation
Long coding tasks in a dedicated worker produce better outcomes than cramming everything into the primary chat thread.
4) Standardized PR templates reduce cleanup
When every PR includes summary, test notes, and closing keywords, reviews go faster and project history stays clean.
5) Self-hosted doesn’t mean zero maintenance
You’re now running an automation platform. Monitor logs, rotate keys, and keep dependencies patched.
6) Treat “AI ops” like real ops
Track checks, document runbooks, and design for failure modes. If a workflow is important, make it observable.
TL;DR
I used OpenClaw + containerized infrastructure on a VPS to build a practical AI developer assistant that can create issues, ship PRs, write docs, generate blog drafts, and manage repos. Routing coding jobs through Edith gave me cleaner isolation, better reliability, and a workflow I can actually trust.
If you want to build one, start with strict guardrails and one repeatable automation path, then expand from there.