GitHub Copilot is one of the most capable AI coding agents available today. But out of the box, it's only accessible through VS Code, GitHub.com, or the Copilot SDK embedded in your own application.
What if you could expose Copilot as an independent, discoverable agent: one that any A2A orchestrator or AI framework can call, without any Copilot-specific integration code?
That's exactly what a2a-copilot does.
GitHub: https://github.com/shashikanth-gs/a2a-copilot
npm: https://www.npmjs.com/package/a2a-copilot
What is A2A?
The A2A (Agent-to-Agent) protocol is an open standard for agent interoperability. It defines how agents:
- Advertise capabilities via Agent Cards (`/.well-known/agent-card.json`)
- Accept tasks via JSON-RPC and REST
- Stream responses via SSE
- Maintain task lifecycle (submitted → working → completed)
When an agent speaks A2A, any orchestrator that understands the protocol can discover and call it without knowing anything about the agent's internals.
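The lifecycle above can be sketched as a small state machine. This is a simplification for illustration only: the full A2A spec defines additional states (such as failed or canceled) beyond the three named in this article.

```typescript
// Simplified A2A task lifecycle: submitted → working → completed.
// Illustrative only; the real protocol defines more states.
type TaskState = "submitted" | "working" | "completed";

const transitions: Record<TaskState, TaskState[]> = {
  submitted: ["working"],
  working: ["completed"],
  completed: [], // terminal state
};

function canTransition(from: TaskState, to: TaskState): boolean {
  return transitions[from].includes(to);
}

console.log(canTransition("submitted", "working")); // true
console.log(canTransition("completed", "working")); // false
```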
Prerequisites
- Node.js 18+
- A GitHub account with Copilot access
- `gh` CLI authenticated (`gh auth login`), OR a `GITHUB_TOKEN` environment variable set
Step 1: Install
npm install -g a2a-copilot
Or run without installing:
npx a2a-copilot --config path/to/config.json
Step 2: Create Your Agent Config
Create my-agent/config.json:
{
  "agentCard": {
    "name": "My Copilot Agent",
    "description": "A GitHub Copilot-powered coding agent exposed via A2A",
    "version": "1.0.0",
    "protocolVersion": "0.3.0",
    "streaming": true,
    "skills": [
      {
        "id": "coding",
        "name": "Coding Assistant",
        "description": "Full-stack coding, architecture, and debugging",
        "tags": ["code", "typescript", "python", "architecture"]
      }
    ]
  },
  "server": {
    "port": 3000,
    "hostname": "0.0.0.0",
    "advertiseHost": "localhost"
  },
  "copilot": {
    "model": "gpt-4.1",
    "streaming": true,
    "systemPrompt": "You are a senior software engineer. Help with code, architecture, and debugging."
  }
}
Step 3: Start the Agent
a2a-copilot --config my-agent/config.json
Output:
[info] A2A server started
[info] Agent Card: http://localhost:3000/.well-known/agent-card.json
[info] JSON-RPC: http://localhost:3000/a2a/jsonrpc
[info] REST: http://localhost:3000/a2a/rest
[info] Health: http://localhost:3000/health
Step 4: Discover Your Agent
curl http://localhost:3000/.well-known/agent-card.json | jq .
Response:
{
  "name": "My Copilot Agent",
  "description": "A GitHub Copilot-powered coding agent exposed via A2A",
  "version": "1.0.0",
  "protocolVersion": "0.3.0",
  "url": "http://localhost:3000",
  "capabilities": { "streaming": true },
  "skills": [...]
}
This is the A2A Agent Card: the agent's identity and capability manifest. Any A2A-compatible orchestrator can read this and immediately understand what the agent can do.
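In code, an orchestrator would fetch this card and inspect it before routing work. A minimal sketch follows; the field names are taken from the response above, but the `supportsSkill` helper and the tag-based routing check are illustrative, not part of the A2A spec or a2a-copilot itself.

```typescript
// Minimal shape of the Agent Card fields shown in the response above.
interface AgentCard {
  name: string;
  protocolVersion: string;
  url: string;
  capabilities: { streaming?: boolean };
  skills: { id: string; name: string; tags?: string[] }[];
}

// Hypothetical routing check: can this agent handle a task with a given tag?
function supportsSkill(card: AgentCard, tag: string): boolean {
  return card.skills.some((s) => (s.tags ?? []).includes(tag));
}

// In a real orchestrator this object would come from
// fetch(`${base}/.well-known/agent-card.json`).
const card: AgentCard = {
  name: "My Copilot Agent",
  protocolVersion: "0.3.0",
  url: "http://localhost:3000",
  capabilities: { streaming: true },
  skills: [{ id: "coding", name: "Coding Assistant", tags: ["code", "typescript"] }],
};

console.log(supportsSkill(card, "typescript")); // true
```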
Step 5: Send a Task
Via REST:
curl -X POST http://localhost:3000/a2a/rest \
  -H "Content-Type: application/json" \
  -d '{
    "message": {
      "role": "user",
      "parts": [{"kind": "text", "text": "Write a TypeScript function that debounces a callback with a configurable delay."}]
    }
  }'
Via JSON-RPC:
curl -X POST http://localhost:3000/a2a/jsonrpc \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": "1",
    "method": "tasks/send",
    "params": {
      "message": {
        "role": "user",
        "parts": [{"kind": "text", "text": "Explain the difference between debounce and throttle."}]
      }
    }
  }'
Copilot responds in full A2A format with task lifecycle events and streaming artifacts.
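The same JSON-RPC envelope the curl example sends can be built programmatically. A sketch, assuming the `tasks/send` method and message shape shown above; the `buildTaskRequest` helper is illustrative, not part of any A2A SDK:

```typescript
// Shape of an A2A message part, as seen in the curl examples above.
interface A2AMessage {
  role: "user" | "agent";
  parts: { kind: "text"; text: string }[];
}

// Hypothetical helper: wrap user text in the JSON-RPC envelope
// that the agent's /a2a/jsonrpc endpoint expects.
function buildTaskRequest(id: string, text: string) {
  const message: A2AMessage = {
    role: "user",
    parts: [{ kind: "text", text }],
  };
  return {
    jsonrpc: "2.0",
    id,
    method: "tasks/send",
    params: { message },
  };
}

const req = buildTaskRequest("1", "Explain debounce vs throttle.");
console.log(req.method); // "tasks/send"

// To actually call the agent:
// await fetch("http://localhost:3000/a2a/jsonrpc", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(req),
// });
```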
Adding MCP Tools (Optional)
Want your agent to also read and write files? Add an MCP section to your config:
stdio (child process):
"mcp": {
"filesystem": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/workspace"]
}
}
HTTP MCP server:
"mcp": {
"my-tools": {
"type": "http",
"url": "http://localhost:8002/mcp"
}
}
Restart the agent. Copilot now has access to those tools as part of its reasoning loop.
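For context, here is a config with an `mcp` section merged in. The top-level placement alongside `agentCard`, `server`, and `copilot` is assumed from the fragments above; check the project README for the exact schema.

```json
{
  "agentCard": { "name": "My Copilot Agent" },
  "server": { "port": 3000 },
  "copilot": { "model": "gpt-4.1" },
  "mcp": {
    "filesystem": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/workspace"]
    }
  }
}
```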
Architecture
*Diagram: full request flow from A2A client through to GitHub Copilot and MCP tools.*
What You Just Built
You now have GitHub Copilot running as a standalone, fully A2A-compliant agent:
- Discoverable via Agent Card
- Callable via JSON-RPC and REST
- Streaming via SSE
- Multi-turn conversations via persistent sessions
- MCP tool access (if configured)
- Docker-deployable
Any A2A orchestrator can call it without any Copilot-specific code.
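On the streaming path, responses arrive as Server-Sent Events: frames separated by blank lines, with payloads on `data:` lines. A minimal parser sketch; the event payloads shown are illustrative, not the exact a2a-copilot event schema:

```typescript
// Split a raw SSE stream body into its data payloads.
// Per the SSE format: frames are separated by a blank line,
// and each payload line starts with "data: ".
function parseSSE(raw: string): string[] {
  return raw
    .split("\n\n")
    .map((frame) =>
      frame
        .split("\n")
        .filter((line) => line.startsWith("data: "))
        .map((line) => line.slice("data: ".length))
        .join("\n")
    )
    .filter((data) => data.length > 0);
}

// Illustrative frames only; real a2a-copilot events may look different.
const body = 'data: {"state":"working"}\n\ndata: {"state":"completed"}\n\n';
console.log(parseSSE(body)); // ['{"state":"working"}', '{"state":"completed"}']
```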
What's Next
- Switch models: change `"model"` to `"claude-sonnet-4-5"` in your config
- Add MCP tools: database connectors, HTTP APIs, custom tool servers
- Run multiple specialized agents on different ports with different system prompts
- Use an A2A orchestrator to route tasks dynamically across agents
- Check out a2a-opencode for a vendor-neutral alternative that supports any LLM provider
GitHub: https://github.com/shashikanth-gs/a2a-copilot
npm: npm install -g a2a-copilot
Also see: https://github.com/shashikanth-gs/a2a-opencode (for the OpenCode / any-provider variant)