DEV Community

Yash Dodwani

repoDoc

OpenClaw Challenge Submission 🦞
===

This is a submission for the OpenClaw Challenge.

What I Built

repoDoc — an autonomous bug-fixing agent for GitHub repos with continuous, branch-aware monitoring.

The pain it solves: bad code (hardcoded secrets, eval(), SQL injection, debug prints, TODOs) routinely lands on feature branches and survives lazy code review. SonarQube tells you. CodeRabbit comments on your PR. repoDoc actually fixes it — autonomously, on the same branch the developer is working on, before a human ever opens a PR.

It continuously watches every branch of every registered repo. On each new commit it:

  1. Fetches the diff
  2. Evaluates it against organizational guardrails (8 built-in rules + AI security review)
  3. Opens a GitHub Issue listing every violation
  4. Auto-generates a fix PR back on the originating branch using Gemini 3 Flash
  5. Pings Telegram (and replies conversationally on PR comments)

Real proof of the loop running on real repos:

How I Used OpenClaw

The full agentic loop is exposed as a 5-skill OpenClaw skill suite that any OpenClaw agent can install via ClawHub or by pasting the repo URL into chat:

| Skill | Phase | What it does |
| --- | --- | --- |
| repodoc-analyze | full loop | observe → decide → act → verify → create_pr in one call |
| repodoc-watch | trigger | continuous polling across all branches every 5 min |
| repodoc-guardrails | act | regex + Gemini-powered diff evaluator |
| repodoc-detect-bugs | act | pytest + flake8 → structured bug reports |
| repodoc-fix | verify | surgical Gemini patch, no refactors |

Each skill is a real SKILL.md (YAML frontmatter + Markdown instructions) plus a Python entrypoint. Drop the suite into ~/.openclaw/skills/ and any OpenClaw agent becomes an autonomous code-review teammate.
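For concreteness, a SKILL.md in this shape might look like the following (field names and wording are illustrative, not copied from the repo):

```markdown
---
name: repodoc-guardrails
description: Evaluate a commit diff against organizational guardrails.
---

# repodoc-guardrails

Given a unified diff, apply the built-in regex rules, then ask the
model to review anything the rules do not cover. Return a structured
list of violations for the downstream skills.
```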

The skills compose: repodoc-watch triggers repodoc-guardrails on every commit, which feeds repodoc-detect-bugs, which feeds repodoc-fix, which closes with a branch-aware GitHub PR. OpenClaw's skill chaining + trigger metadata made it natural to expose the same agentic loop two ways — on-demand from chat (/repodoc-analyze <url>) and as a 5-min background watcher — without duplicating logic.

Demo

📺 Watch the 90-second walkthrough: https://youtu.be/9YMFPPct7ew

The video shows the full autonomous loop in action:

  1. Add a watched repo with the Enterprise Grade preset
  2. First scan silently baselines the current state — no false positives on existing code
  3. Click the ⏪ Replay button — repoDoc treats the planted "bad commit" as a brand-new push
  4. Within 30 seconds:
    • 🔔 Telegram alert fires with the violation list
    • 🐛 GitHub Issue auto-opens listing every violation
    • ✅ Fix PR opens with Gemini's surgical fixes (e.g., eval() → ast.literal_eval())
  5. Open the PR — see real, mergeable diffs with explanations in plain English
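The "silent baseline" behavior in step 2 can be sketched as follows. This is a minimal in-memory version with hypothetical names; the real watcher would persist state per repo and branch:

```python
# Sketch of the baseline-first watcher: the first pass over a branch
# records the head commit silently; only commits that arrive after that
# baseline trigger alerts. All names are illustrative.

seen: dict[tuple[str, str], str] = {}  # (repo, branch) -> last seen SHA

def poll(repo: str, branch: str, head_sha: str, alerts: list[str]) -> None:
    key = (repo, branch)
    if key not in seen:
        seen[key] = head_sha          # first pass: baseline, no alert
        return
    if seen[key] != head_sha:         # genuinely new commit since baseline
        alerts.append(f"new commit {head_sha} on {repo}@{branch}")
        seen[key] = head_sha

alerts: list[str] = []
poll("acme/app", "feature/x", "aaa111", alerts)  # baseline pass: silent
poll("acme/app", "feature/x", "bbb222", alerts)  # new push: alert fires
```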

Source Code

Try it yourself in 60 seconds

```shell
# Install the skill suite into your OpenClaw agent
# (paste this URL into your OpenClaw chat)
https://github.com/yashdodwani/repodoc-openclaw-skills

# Then in chat:
/repodoc-watch https://github.com/your-repo-link
```

What I Learned

- **Skills as named steps.** Treating each agent step as a named, idempotent skill transforms the loop from a "magic black box" into five things I can debug independently. Inline Python loops hide intermediate state; OpenClaw skills surface it, a huge win for both observability and reusability.
- **Branch-aware fixes matter more than I expected.** Generic agents fix on main; real engineers want fixes back on the feature branch where they're working. This is where pre-PR autonomy beats post-PR review tools like CodeRabbit: we meet developers on their branch, not on main.
- **Regex + LLM dual-layer guardrails beat either alone.** Regex for determinism (secrets and eval() always fire); LLM for nuance (logical bugs, security context). Each catches what the other misses.
- **The biggest UX challenge wasn't the AI.** It was making the first watcher pass silent, so that adding a repo doesn't spam alerts about historical commits. The "baseline first, alert on new" pattern was non-obvious but critical.
- **OpenClaw's skill manifest is the right level of abstraction.** Lower than CrewAI/LangGraph (which force you into their orchestration model), higher than raw scripts (which lose composability). The SKILL.md + Python combo lets you ship something a non-developer can install while keeping full power for the agent author.
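A minimal sketch of the dual-layer idea, with the LLM call stubbed out as a no-op (the real version sends the diff to Gemini; the rule set here is illustrative, not repoDoc's actual 8 rules):

```python
import re

# Layer 1: deterministic regex rules. Illustrative rule set.
RULES = {
    "hardcoded secret": re.compile(r"(api_key|password|secret)\s*=\s*['\"]"),
    "use of eval()": re.compile(r"\beval\("),
    "debug print": re.compile(r"\bprint\("),
}

def regex_layer(diff: str) -> set[str]:
    return {name for name, rx in RULES.items() if rx.search(diff)}

def llm_layer(diff: str) -> set[str]:
    # Stand-in for the Gemini security review: a real call would send
    # the diff with a review prompt and parse structured findings.
    return set()

def evaluate(diff: str) -> list[str]:
    # Union of both layers: regex catches the deterministic patterns,
    # the LLM catches logical and contextual issues regex cannot.
    return sorted(regex_layer(diff) | llm_layer(diff))
```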

ClawCon Michigan

Couldn't attend in person — followed the recordings. Hoping to make ClawCon NYC if it happens.
