This is a submission for the GitHub Copilot CLI Challenge.
## What I Built
I built WikiPilot, a local-first, AI-powered CLI that generates a structured wiki for real codebases with
source-grounded evidence.
Instead of manually writing docs that drift over time, WikiPilot analyzes repositories, extracts symbols, plans
pages, generates documentation, validates quality, and outputs a static viewer-ready wiki.
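To make the pipeline concrete, here is a minimal, illustrative run. This is a sketch, not the exact interface: the `wikipilot` binary name and the output paths are assumptions; only the `generate` command and the artifact types (manifest, codemap, quality report, wiki plan) come from the project description.

```bash
# Illustrative sketch only — the binary name and paths are assumed, not taken from the repo.
cd my-target-repo

# One command runs the whole pipeline:
#   analyze repo -> extract symbols -> plan pages -> generate docs
#   -> validate quality -> emit a static viewer-ready wiki
wikipilot generate

# Alongside the markdown pages, machine-readable artifacts are produced
# (manifest, codemap, quality report, wiki plan) — this path is hypothetical.
ls ./wiki
```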
### Key capabilities
- Evidence-first docs: generated sections include source references and confidence scoring.
- Incremental updates: processes changed files by default, with full rebuild support.
- Multi-language analysis: TypeScript/JavaScript and C# support.
- Machine-readable outputs: manifests, codemap, quality reports, and wiki plan artifacts.
- Viewer experience: static docs viewer with navigation, TOC, and Mermaid support.
### Why this matters
WikiPilot makes documentation more auditable, repeatable, and CI-friendly, so teams can keep
architecture knowledge close to the code without heavy manual curation.
## Demo
- Repository: https://github.com/HariharanS/wikipilot
- Screenshots:
### Suggested walkthrough (60–90 seconds)
- Show `.wikipilot.yml` and explain the target repo setup.
- Run generation (`generate`) and show the incremental + quality outputs.
- Open the generated markdown and point out the evidence/source grounding.
- Launch the viewer (`serve --build`) and show navigation + rendered docs (the command sequence is sketched after this list).
- Close with one practical “before/after” outcome (time saved, clearer onboarding, etc.).
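For reference, the walkthrough maps to roughly these commands. This is a sketch: the `wikipilot` binary name is an assumption; only `generate` and `serve --build` are named above.

```bash
# 1. Inspect the target repo configuration
cat .wikipilot.yml

# 2. Generate the wiki (incremental processing + quality outputs)
wikipilot generate

# 3. Build and launch the static docs viewer
wikipilot serve --build
```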
## My Experience with GitHub Copilot CLI
GitHub Copilot CLI acted like a development copilot across architecture iteration, implementation, and debugging
loops while building WikiPilot.
I used it to speed up:
- CLI command design and refactors
- prompt/schema iteration for generation quality
- debugging pipeline edge cases
- improving developer UX and docs
### Example Copilot CLI workflows I used
```bash
copilot "help me design a CLI flow for generate/serve/evaluate-models commands"
copilot "review this module and suggest a safer refactor with minimal changes"
copilot "debug why this output quality check is failing and propose a fix"
copilot "draft docs for this command based on code behavior"
```
### Impact
Copilot CLI reduced context switching, accelerated iteration on the tricky parts (generation + validation), and helped keep momentum from idea to a working end-to-end tool.

I did run out of Copilot credits before the end, which left a couple of items unfinished:
- cloud deployment support
- regenerating the docs with improved prompts to raise output quality further
### What I learned
- Evidence-grounded AI output is much more trustworthy than free-form generation.
- Incremental pipelines are critical for real-world repo scale.
- Good DX (clear commands, predictable outputs, quality reports) matters as much as model quality.
### What’s next
- Better cross-repo relationship visualization
- More language analyzers
- Richer interactive viewer exploration and traceability