This is a submission for the GitHub Copilot CLI Challenge
## What I Built
Reporails is a validator for AI agent instruction files: CLAUDE.md, AGENTS.md, copilot-instructions.md. It scores your files, tells you what's missing, and helps you fix it.
The project already supported Claude Code and Codex. For this challenge, I added GitHub Copilot CLI as a first-class supported agent - using Copilot CLI itself to build the adapter.
The architecture was already multi-agent by design. A `.shared/` directory holds agent-agnostic workflows and knowledge. Each agent gets its own adapter that wires into the shared content. Claude does it through `.claude/skills/`, Copilot through `.github/copilot-instructions.md`.
Adding Copilot took 113 lines. Not because the work was trivial - but because the architecture was ready.
Repos:
- CLI: reporails/cli (v0.3.0)
- Rules: reporails/rules (v0.4.0)
- Recommended: reporails/recommended (v0.2.0)
## Demo
After adding Copilot support, each agent gets its own rule set with no cross-contamination:
| Agent | Rules | Breakdown |
|---|---|---|
| Copilot | 29 | 30 CORE - 1 excluded + 0 COPILOT-specific |
| Claude | 39 | 30 CORE - 1 excluded + 10 CLAUDE-specific |
| Codex | 37 | 30 CORE + 7 CODEX-specific |
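The per-agent totals are just the arithmetic in the breakdown column. A quick sanity check, using only the numbers from the table above:

```python
# Sanity-check the per-agent rule counts from the table above.
CORE = 30  # shared CORE rules

# agent: (CORE rules excluded, agent-specific rules added)
breakdown = {
    "copilot": (1, 0),
    "claude": (1, 10),
    "codex": (0, 7),
}

totals = {agent: CORE - excluded + specific
          for agent, (excluded, specific) in breakdown.items()}

print(totals)  # {'copilot': 29, 'claude': 39, 'codex': 37}
```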
Run it yourself:

```shell
npx @reporails/cli check --agent copilot
```
## My Experience with GitHub Copilot CLI
### It understood the architecture immediately
I explained the `.shared/` folder — that it was created specifically so both Claude and Copilot (and other agents) can reference the same workflows and knowledge without duplication. Copilot got it on the first exchange.
The key insight it surfaced: "The .shared/ content is already agent-agnostic. Both agents reference the same workflows. No duplication is needed - just different entry points."
That's exactly right. Claude reaches shared workflows through `/generate-rule` → `.claude/skills/` → `.shared/workflows/rule-creation.md`. Copilot reads its instructions → `.shared/workflows/rule-creation.md`. Same destination, different front doors.
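The "different front doors" idea can be sketched in a few lines. To be clear, the names below (`ENTRY_POINTS`, `workflow_for`) are mine for illustration, not reporails internals; only the paths come from the post:

```python
# Illustrative only: each agent has its own front door, but every
# door opens onto the same agent-agnostic workflow file.
SHARED = ".shared/workflows/rule-creation.md"

ENTRY_POINTS = {
    "claude": ".claude/skills/",                   # via /generate-rule
    "copilot": ".github/copilot-instructions.md",  # read on startup
}

def workflow_for(agent: str) -> tuple[str, str]:
    """Return (front door, shared destination) for an agent."""
    return ENTRY_POINTS[agent], SHARED

for agent in ENTRY_POINTS:
    door, dest = workflow_for(agent)
    print(f"{agent}: {door} -> {dest}")
```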
### What it built
Copilot created the full adapter in three phases:
1. **Foundation** - `.github/copilot-instructions.md`, `agents/copilot/config.yml`, updated `backbone.yml`, verified the test harness supports `--agent copilot`
2. **Workflow Wiring** - entry points in `copilot-instructions.md`, context-specific conditional instructions, wired to `.shared/workflows/` and `.shared/knowledge/`
3. **Documentation** - updated README and CONTRIBUTING with agent-agnostic workflow guidance
### The bug it found (well, helped find)
While testing the Copilot adapter, I discovered that the test harness had a cross-contamination bug. When running `--agent copilot`, it was testing CODEX rules too — because `_scan_root()` scanned ALL `agents/*/rules/` directories indiscriminately.
The fix was three lines of Python:
```python
# If agent is specified, only scan that agent's rules directory
if agent and agent_dir.name != agent:
    continue
```

### The model selector surprise
When I opened the Copilot CLI model selector, the default model was Claude Sonnet 4.5. The irony of building a Copilot adapter using Copilot CLI running Claude was not lost on me.
### What worked, honestly
Copilot CLI understood multi-agent architecture without hand-holding. It generated correct config files matching existing adapter patterns. The co-author signature was properly included in all commits. It didn't try to duplicate content that was already shared - it just wired the entry points.
The whole experience reinforced something I've been thinking about: the tool matters less than the architecture underneath. If your project is structured well, any competent agent can extend it. That's the whole point of reporails - making sure your instruction files are good enough that the agent can actually help you.
## What also happened during this challenge
While building the Copilot adapter, I also rebuilt the entire rules framework from scratch. It went from 47 rules (v0.3.1) to 35 rules (v0.4.0) - fewer rules, dramatically higher quality. Every rule is now distinct, detectable, and backed by evidence. But that's a story for another post.
Try it: `npx @reporails/cli check`

