This is a submission for the GitHub Copilot CLI Challenge
What I Built
MCP Orchestrator - A CLI tool that orchestrates multiple AI agents to solve real developer problems.
The Problem: New developers waste 1-2 weeks understanding a codebase before becoming productive. They spend days reading scattered docs, finding entry points, and understanding architecture.
My Solution: Four independent MCP (Model Context Protocol) agents that work together in a pipeline to analyze any repository and generate a comprehensive onboarding guide in 30 seconds.
Architecture
```
 ┌──────────────────────────────────────────────────────────────────────────┐
 │                             MCP ORCHESTRATOR                             │
 │                          (Pipeline Coordinator)                          │
 └─────────────────────────────────────┬────────────────────────────────────┘
                                       │
         ┌───────────────────┬─────────┴─────────┬───────────────────┐
         │                   │                   │                   │
         ▼                   ▼                   ▼                   ▼
 ┌───────────────┐   ┌───────────────┐   ┌───────────────┐   ┌───────────────┐
 │    Agent 1    │   │    Agent 2    │   │    Agent 3    │   │    Agent 4    │
 │ Architecture  │   │  Entry Point  │   │ Documentation │   │  Onboarding   │
 │   Analyzer    │   │   Detector    │   │    Finder     │   │   Generator   │
 └───────┬───────┘   └───────┬───────┘   └───────┬───────┘   └───────┬───────┘
         │                   │                   │                   │
         │ Analyzes:         │ Locates:          │ Finds:            │ Combines:
         │ • Modules         │ • Main files      │ • READMEs         │ • All data
         │ • Classes         │ • CLI cmds        │ • Docs            │ • Into guide
         │ • Tests           │ • APIs            │ • Docstrings      │ • Learning
         │                   │                   │                   │   path
         └───────────────────┴─────────┬─────────┴───────────────────┘
                                       │
                                       ▼
                          ┌──────────────────────────┐
                          │     Onboarding Guide     │
                          │ ✓ Project Overview       │
                          │ ✓ Quick Start            │
                          │ ✓ Architecture Map       │
                          │ ✓ Learning Path          │
                          └──────────────────────────┘
```
Each agent is an independent MCP server analyzing specific aspects of the codebase. The orchestrator coordinates them and passes data between steps.
Result: A comprehensive guide with project overview, quick start, architecture breakdown with module-level insights, and a "Your First Hour" learning path.
GitHub: github.com/levdalba/MCP-Terminal-Orchestrator
Demo
Watch it analyze Facebook's React repository (233k+ stars):
What it finds:
- Project Type: Node.js with JavaScript/TypeScript
- 109 Entry Points across 20+ packages
- 74 Documentation files
- Module-level insights (e.g., "Defines AppConfig class")
- Complete architecture breakdown
- Personalized learning path
Try it yourself:
```shell
# Install
pip install -e .

# Run on any repository
mcp pipeline examples/onboard_repo_pipeline.yml \
  --registry ./examples/onboarding_registry.json
```
My Experience with GitHub Copilot CLI
TL;DR: 80% faster development, 3-4 days instead of 2-3 weeks
I used GitHub Copilot CLI (the `copilot` command) extensively throughout this project. Here's what made the biggest impact:
1. Architecture Design
Command I used:
```shell
copilot -p "Propose a minimal Python CLI repo layout for an MCP orchestrator using Typer, including pyproject.toml deps and an entrypoint module name." --model claude-sonnet-4.5 -s
```
Impact: Copilot designed the entire project structure with proper separation of concerns, recommended the right dependencies (`typer[all]`), and set up the entrypoint. This saved me ~2 days of architectural decisions.
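For illustration, the resulting configuration might resemble the sketch below; the package name and module path are hypothetical, not necessarily what Copilot produced:

```toml
[project]
name = "mcp-orchestrator"       # hypothetical package name
version = "0.1.0"
requires-python = ">=3.10"
dependencies = ["typer[all]"]   # the dependency Copilot recommended

[project.scripts]
# Exposes the `mcp` command used in the demo; the module path is illustrative.
mcp = "mcp_orchestrator.cli:app"
```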
2. MCP Protocol Implementation
Command I used:
```shell
copilot -p "Propose a minimal JSON-RPC framing approach for stdio transport in a Python MCP CLI." --model claude-sonnet-4.5 -s
```
Impact: Copilot explained how to implement JSON-RPC 2.0 communication with MCP servers, handling line-delimited messages and error responses. I implemented the `JsonRpcClient` based on this guidance. Without it, I would've spent days reading the MCP spec and debugging protocol issues.
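Based on that guidance, the framing itself is small: one JSON object per line, newline-terminated. A simplified sketch of the approach (not the project's actual `JsonRpcClient`):

```python
import json

def make_request(req_id: int, method: str, params: dict) -> bytes:
    """Frame a JSON-RPC 2.0 request as a single newline-terminated line."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    return (json.dumps(msg) + "\n").encode("utf-8")

def parse_response(line: bytes) -> dict:
    """Decode one response line; raise on a JSON-RPC error object."""
    msg = json.loads(line.decode("utf-8"))
    if "error" in msg:
        err = msg["error"]
        raise RuntimeError(f"RPC error {err.get('code')}: {err.get('message')}")
    return msg["result"]

# Frame a request and parse a canned response (method name is illustrative)
frame = make_request(1, "tools/call", {"name": "analyze_architecture"})
result = parse_response(b'{"jsonrpc": "2.0", "id": 1, "result": {"ok": true}}')
```

In a real client these bytes would be written to the MCP server's stdin and read back from its stdout, matching responses to requests by `id`.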
3. Testing Strategy
Command I used:
```shell
copilot -p "Draft 3 pytest tests for registry loading and pipeline parsing in this repo." --model claude-sonnet-4.5 -s
```
Impact: Copilot generated comprehensive test cases covering edge cases I hadn't considered. This accelerated test writing from ~1 day to 3 hours and improved code quality.
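To give a flavor of what such tests look like, here is a simplified sketch with a stand-in registry loader (the real loader's name and signature may differ):

```python
import json

def load_registry(text: str) -> dict:
    """Stand-in for the orchestrator's registry loader: parse the JSON
    registry and require a top-level 'servers' mapping."""
    data = json.loads(text)
    if "servers" not in data:
        raise ValueError("registry must define a 'servers' mapping")
    return data["servers"]

def test_load_registry_valid():
    servers = load_registry('{"servers": {"arch": {"command": "python"}}}')
    assert servers["arch"]["command"] == "python"

def test_load_registry_rejects_missing_servers():
    try:
        load_registry("{}")
    except ValueError:
        return
    raise AssertionError("expected ValueError for missing 'servers'")

# Called directly here for illustration; normally pytest collects these.
test_load_registry_valid()
test_load_registry_rejects_missing_servers()
```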
What AI Excelled At:
✅ Boilerplate generation - Project structure, config files, setup
✅ API design - Clean CLI patterns and command structure
✅ Error handling - Comprehensive edge case coverage
✅ Documentation - README structure and examples
What Needed Human Oversight:
⚠️ Environment variables - Copilot's initial subprocess approach dropped PATH; I had to fix the env handling myself
⚠️ Real-world validation - Testing on actual repos (React, Django) surfaced edge cases the AI hadn't anticipated
⚠️ UX decisions - Demo presentation and visual formatting required human judgment
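The PATH issue above is a classic subprocess pitfall: passing a custom `env=` dict replaces the inherited environment entirely instead of extending it. A minimal sketch of the fix (the function name is mine, not the project's):

```python
import os
import subprocess
import sys

def spawn_agent(command: list[str], extra_env: dict[str, str]) -> subprocess.Popen:
    """Spawn an MCP server with the parent environment preserved.

    Passing only extra_env as `env=` would drop PATH (and everything else)
    and break the child process; merge on top of os.environ instead.
    """
    env = {**os.environ, **extra_env}
    return subprocess.Popen(
        command,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        env=env,
    )

# Demonstration: the child sees both the inherited env and the extra variable
proc = spawn_agent(
    [sys.executable, "-c", "import os; print(os.environ['AGENT_NAME'])"],
    {"AGENT_NAME": "architecture"},
)
out, _ = proc.communicate()
```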
Time Breakdown:
| Task | Without AI | With Copilot CLI | Savings |
|---|---|---|---|
| Project setup | 2 days | 2 hours | 90% |
| Core implementation | 1 week | 2 days | 70% |
| Testing | 1 day | 3 hours | 80% |
| Documentation | 1 day | 1 hour | 90% |
| Total | 2-3 weeks | 3-4 days | ~80% |
Key Takeaway:
GitHub Copilot CLI isn't just autocomplete - it's an AI pair programmer. I used it for:
- Design decisions ("How should 4 agents work together?")
- Debugging ("Find where we're checking membership on None")
- Learning ("Explain JSON-RPC framing for MCP")
It dramatically accelerated development while teaching me MCP protocol concepts I didn't know. The `--model claude-sonnet-4.5` flag was particularly powerful for architectural questions.
Would I use it again? Absolutely. It transformed a 2-3 week project into a 3-4 day sprint while maintaining code quality.
Built with ❤️ and AI assistance