<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Shashi Kanth</title>
    <description>The latest articles on DEV Community by Shashi Kanth (@shashikanthgs).</description>
    <link>https://dev.to/shashikanthgs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3687457%2F481a01c7-314c-43de-afcc-1a205d0f5828.jpeg</url>
      <title>DEV Community: Shashi Kanth</title>
      <link>https://dev.to/shashikanthgs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/shashikanthgs"/>
    <language>en</language>
    <item>
      <title>Build a Vendor-Neutral A2A Agent That Works With Any LLM Provider</title>
      <dc:creator>Shashi Kanth</dc:creator>
      <pubDate>Fri, 27 Feb 2026 22:24:59 +0000</pubDate>
      <link>https://dev.to/shashikanthgs/build-a-vendor-neutral-a2a-agent-that-works-with-any-llm-provider-43e5</link>
      <guid>https://dev.to/shashikanthgs/build-a-vendor-neutral-a2a-agent-that-works-with-any-llm-provider-43e5</guid>
      <description>&lt;p&gt;One of the most common mistakes in AI system architecture is building point-to-point integrations with specific LLM providers.&lt;/p&gt;

&lt;p&gt;You choose Anthropic. You integrate Claude directly. Six months later you want to benchmark against GPT-4.1, or a new model drops that changes the game. Now you're rewriting integration code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;a2a-opencode&lt;/strong&gt; solves this with a different approach:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Wrap &lt;a href="https://opencode.ai" rel="noopener noreferrer"&gt;OpenCode&lt;/a&gt; — which already supports Anthropic, OpenAI, GitHub Copilot, and more — behind the A2A protocol&lt;/li&gt;
&lt;li&gt;Your orchestration layer speaks A2A, not "Claude API" or "OpenAI API"&lt;/li&gt;
&lt;li&gt;Swap model providers in one config line. Your orchestrator never changes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/shashikanth-gs/a2a-wrapper" rel="noopener noreferrer"&gt;https://github.com/shashikanth-gs/a2a-wrapper&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;npm:&lt;/strong&gt; &lt;a href="https://www.npmjs.com/package/a2a-opencode" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/a2a-opencode&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;What is A2A?&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/google-deepmind/a2a" rel="noopener noreferrer"&gt;A2A (Agent-to-Agent) protocol&lt;/a&gt; is an open standard for agent interoperability. It defines how agents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Advertise capabilities&lt;/strong&gt; via Agent Cards (&lt;code&gt;/.well-known/agent-card.json&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accept tasks&lt;/strong&gt; via JSON-RPC and REST&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stream responses&lt;/strong&gt; via SSE&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintain task lifecycle&lt;/strong&gt; (submitted → working → completed)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When an agent speaks A2A, any orchestrator that understands the protocol can discover and call it — without knowing anything about the underlying model or provider.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6e0t5f6wn8c483bqn9q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6e0t5f6wn8c483bqn9q.png" alt="A2A Agent Card Discovery Flow"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Node.js 18+&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://opencode.ai" rel="noopener noreferrer"&gt;OpenCode&lt;/a&gt; installed (&lt;code&gt;npm install -g opencode-ai&lt;/code&gt; or equivalent)&lt;/li&gt;
&lt;li&gt;A supported LLM provider API key (Anthropic, OpenAI, or GitHub Copilot via &lt;code&gt;gh auth login&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;Step 1: Start OpenCode&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;opencode serve
&lt;span class="c"&gt;# OpenCode starts on http://localhost:4096 by default&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Step 2: Install a2a-opencode&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzlccr2te819fmpcqryd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzlccr2te819fmpcqryd.png" alt="npm Quick-Start Terminal Card"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; a2a-opencode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Or run without installing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx a2a-opencode &lt;span class="nt"&gt;--config&lt;/span&gt; path/to/config.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;Step 3: Configure Your Agent&lt;/h2&gt;

&lt;p&gt;Create &lt;code&gt;my-agent/config.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"agentCard"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"My OpenCode Agent"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A vendor-neutral AI agent with MCP tool support"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"protocolVersion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0.3.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"streaming"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"skills"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"code-review"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Code Review"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Analyze, review, and refactor code"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"tags"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"code"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"review"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"refactor"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"security"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"server"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"port"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3001&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"advertiseHost"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"localhost"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"opencode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"baseUrl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:4096"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"anthropic/claude-sonnet-4-20250514"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"systemPrompt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"You are a code review expert. Analyze code for bugs, performance, and security issues."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"autoApprove"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"autoAnswer"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;Step 4: Start the A2A Wrapper&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;a2a-opencode &lt;span class="nt"&gt;--config&lt;/span&gt; my-agent/config.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;[info] A2A server started
[info] Agent Card: http://localhost:3001/.well-known/agent-card.json
[info] JSON-RPC:   http://localhost:3001/a2a/jsonrpc
[info] REST:       http://localhost:3001/a2a/rest
[info] Health:     http://localhost:3001/health
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;Step 5: Discover Your Agent&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:3001/.well-known/agent-card.json | jq &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"My OpenCode Agent"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A vendor-neutral AI agent with MCP tool support"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"protocolVersion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0.3.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:3001"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"capabilities"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"streaming"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"skills"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the A2A Agent Card — the agent's identity and capability manifest. Any A2A-compatible orchestrator can read this and immediately understand what the agent can do.&lt;/p&gt;
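&lt;p&gt;A discovery client needs very little code. The sketch below (plain Node, no SDK; the field names follow the card shown above) pulls out what an orchestrator typically keys on:&lt;/p&gt;

```javascript
// Summarize an A2A Agent Card for routing decisions (sketch; card shape as shown above).
function summarizeAgentCard(card) {
  return {
    name: card.name,
    url: card.url,
    streaming: Boolean(card.capabilities?.streaming),
    skillIds: (card.skills || []).map((skill) => skill.id),
  };
}

// With a live agent (Node 18+ ships a global fetch):
// const res = await fetch("http://localhost:3001/.well-known/agent-card.json");
// console.log(summarizeAgentCard(await res.json()));
```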




&lt;h2&gt;Step 6: Send a Task&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Via REST:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:3001/a2a/rest &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "message": {
      "role": "user",
      "parts": [{"kind": "text", "text": "Review this TypeScript function for bugs and performance issues."}]
    }
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Via JSON-RPC:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:3001/a2a/jsonrpc &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "jsonrpc": "2.0",
    "id": "1",
    "method": "tasks/send",
    "params": {
      "message": {
        "role": "user",
        "parts": [{"kind": "text", "text": "Explain the difference between debounce and throttle."}]
      }
    }
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The response streams back in full A2A format with task lifecycle events and artifacts — identical to any other A2A agent, regardless of which LLM is powering it.&lt;/p&gt;
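&lt;p&gt;Orchestrator code shouldn't hand-assemble these JSON strings. A small helper (sketch; the envelope mirrors the JSON-RPC curl example above, including its &lt;code&gt;tasks/send&lt;/code&gt; method name) keeps the shape in one place:&lt;/p&gt;

```javascript
// Build the JSON-RPC envelope from the curl example above (sketch).
function buildTaskRequest(id, text) {
  return {
    jsonrpc: "2.0",
    id: String(id),
    method: "tasks/send",
    params: {
      message: {
        role: "user",
        parts: [{ kind: "text", text: text }],
      },
    },
  };
}

// POST it with Node 18+ fetch:
// await fetch("http://localhost:3001/a2a/jsonrpc", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildTaskRequest(1, "Review this function.")),
// });
```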




&lt;h2&gt;Switching Providers = One Line Change&lt;/h2&gt;

&lt;p&gt;Want to switch from Claude to GPT-4.1?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"opencode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"openai/gpt-4.1"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Switch to GitHub Copilot?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"opencode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"github/gpt-4.1"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The A2A interface is identical. Restart the agent. Your orchestrator doesn't change.&lt;/p&gt;




&lt;h2&gt;Multi-Agent Example&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxklwkz3cqdtjkwduyhu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxklwkz3cqdtjkwduyhu.png" alt="Multi-Agent Routing with a2a-opencode"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;One orchestrator routing tasks to three specialized agents across providers&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Run three specialized agents on different ports:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Terminal 1: Code review agent (Claude)&lt;/span&gt;
a2a-opencode &lt;span class="nt"&gt;--config&lt;/span&gt; agents/code-review/config.json &lt;span class="nt"&gt;--port&lt;/span&gt; 3001

&lt;span class="c"&gt;# Terminal 2: Documentation agent (GPT-4.1)&lt;/span&gt;
a2a-opencode &lt;span class="nt"&gt;--config&lt;/span&gt; agents/docs/config.json &lt;span class="nt"&gt;--port&lt;/span&gt; 3002

&lt;span class="c"&gt;# Terminal 3: Security analysis (Claude Opus)&lt;/span&gt;
a2a-opencode &lt;span class="nt"&gt;--config&lt;/span&gt; agents/security/config.json &lt;span class="nt"&gt;--port&lt;/span&gt; 3003
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your orchestrator routes to whichever agent fits the task — by capability, skill tags, or load. Each agent is independently swappable.&lt;/p&gt;
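&lt;p&gt;Skill-tag routing comes down to a few lines once each agent's card has been fetched. A sketch (the agent objects here are summarized cards with the &lt;code&gt;skills&lt;/code&gt; and &lt;code&gt;tags&lt;/code&gt; fields shown earlier; this helper is illustrative, not part of any SDK):&lt;/p&gt;

```javascript
// Pick the first agent whose advertised skills carry the requested tag (sketch).
function routeByTag(agents, tag) {
  const match = agents.find((agent) =>
    (agent.skills || []).some((skill) => (skill.tags || []).includes(tag))
  );
  return match ? match.url : null;
}
```

&lt;p&gt;A production orchestrator would layer health checks and load balancing on top, but tag matching alone is enough to drive the three-agent setup above.&lt;/p&gt;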




&lt;h2&gt;Architecture&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqovmh7caszmi4y8jfw2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqovmh7caszmi4y8jfw2.png" alt="a2a-opencode Internal Architecture"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Full request flow from A2A client through OpenCode to any LLM provider and MCP tools&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;A2A Client / Orchestrator
        │
        │  JSON-RPC / REST / SSE
        ▼
a2a-opencode (Express + A2A SDK)
  ├─ SessionManager     (contextId → OpenCode session)
  ├─ EventStreamManager (SSE polling + auto-reconnect)
  ├─ PermissionHandler  (auto-approves tool calls)
  └─ EventPublisher     (OpenCode events → A2A events)
        │
        │  HTTP + SSE
        ▼
OpenCode Server (opencode serve)
  ├─ LLM inference (Anthropic / OpenAI / GitHub Copilot / ...)
  └─ MCP tool execution
        │
        │  MCP Protocol
        ▼
MCP Servers (filesystem, database, custom tools...)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
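&lt;p&gt;The SessionManager box in the diagram is what gives multi-turn conversations continuity: each A2A &lt;code&gt;contextId&lt;/code&gt; maps to one long-lived OpenCode session. A stripped-down sketch of the idea (class and method names here are illustrative, not the package's actual API):&lt;/p&gt;

```javascript
// One OpenCode session per A2A contextId, created lazily and reused (sketch).
class SessionManager {
  constructor(createSession) {
    this.createSession = createSession; // factory, e.g. a call to OpenCode's session endpoint
    this.sessions = new Map();          // contextId -> sessionId
  }

  getOrCreate(contextId) {
    if (!this.sessions.has(contextId)) {
      this.sessions.set(contextId, this.createSession(contextId));
    }
    return this.sessions.get(contextId);
  }
}
```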






&lt;h2&gt;Adding MCP Tools (Optional)&lt;/h2&gt;

&lt;p&gt;Want your agent to read and write files, query databases, or call custom APIs? Add an MCP section to your config:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;stdio (child process):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"mcp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"filesystem"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"stdio"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@modelcontextprotocol/server-filesystem"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/path/to/workspace"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;HTTP MCP server:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"mcp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"my-api-tools"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:8002/mcp"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart the agent. OpenCode handles MCP execution, and tool results flow back through the A2A response stream.&lt;/p&gt;




&lt;h2&gt;a2a-copilot vs a2a-opencode — Which Should You Use?&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;a2a-copilot&lt;/th&gt;
&lt;th&gt;a2a-opencode&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;LLM backend&lt;/td&gt;
&lt;td&gt;GitHub Copilot models only&lt;/td&gt;
&lt;td&gt;Any provider via OpenCode&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Auth&lt;/td&gt;
&lt;td&gt;GitHub account / &lt;code&gt;gh&lt;/code&gt; CLI token&lt;/td&gt;
&lt;td&gt;Provider-specific (set in OpenCode)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;External dependency&lt;/td&gt;
&lt;td&gt;None (uses &lt;code&gt;gh&lt;/code&gt; CLI)&lt;/td&gt;
&lt;td&gt;OpenCode server must be running&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multi-provider support&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best for&lt;/td&gt;
&lt;td&gt;Teams already on GitHub Copilot&lt;/td&gt;
&lt;td&gt;Multi-provider or vendor-neutral setups&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Both expose the same A2A interface. Your orchestrator integrates once and can use either — or both.&lt;/p&gt;




&lt;h2&gt;What You Just Built&lt;/h2&gt;

&lt;p&gt;You now have a vendor-neutral AI agent running as a standalone, fully A2A-compliant service:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Discoverable via Agent Card&lt;/li&gt;
&lt;li&gt;Callable via JSON-RPC and REST&lt;/li&gt;
&lt;li&gt;Streaming via SSE&lt;/li&gt;
&lt;li&gt;Multi-turn conversations via persistent sessions&lt;/li&gt;
&lt;li&gt;Any LLM provider swappable in one config line&lt;/li&gt;
&lt;li&gt;MCP tool access (if configured)&lt;/li&gt;
&lt;li&gt;Docker-deployable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Any A2A orchestrator can call it without any provider-specific code.&lt;/p&gt;




&lt;h2&gt;What's Next&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Switch models:&lt;/strong&gt; Change &lt;code&gt;"model"&lt;/code&gt; to &lt;code&gt;"openai/gpt-4.1"&lt;/code&gt; or &lt;code&gt;"github/gpt-4.1"&lt;/code&gt; in your config&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add MCP tools:&lt;/strong&gt; Filesystem connectors, database clients, HTTP API tools&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run multiple specialized agents&lt;/strong&gt; on different ports, each with a different provider and system prompt&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use an A2A orchestrator&lt;/strong&gt; to route tasks dynamically across agents by capability or load&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check out a2a-copilot&lt;/strong&gt; for a zero-dependency alternative that wraps GitHub Copilot directly&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/shashikanth-gs/a2a-wrapper" rel="noopener noreferrer"&gt;https://github.com/shashikanth-gs/a2a-wrapper&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;npm:&lt;/strong&gt; &lt;code&gt;npm install -g a2a-opencode&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Also see:&lt;/strong&gt; &lt;a href="https://github.com/shashikanth-gs/a2a-wrapper" rel="noopener noreferrer"&gt;a2a-copilot&lt;/a&gt;, the GitHub Copilot variant in the same repo&lt;/p&gt;

</description>
      <category>a2a</category>
      <category>opencode</category>
      <category>agents</category>
      <category>aiagents</category>
    </item>
    <item>
      <title>Turn GitHub Copilot into an A2A-Compliant Agent in Under 5 Minutes</title>
      <dc:creator>Shashi Kanth</dc:creator>
      <pubDate>Fri, 27 Feb 2026 22:07:02 +0000</pubDate>
      <link>https://dev.to/shashikanthgs/turn-github-copilot-into-an-a2a-compliant-agent-in-under-5-minutes-4pfl</link>
      <guid>https://dev.to/shashikanthgs/turn-github-copilot-into-an-a2a-compliant-agent-in-under-5-minutes-4pfl</guid>
      <description>&lt;p&gt;GitHub Copilot is one of the most capable AI coding agents available today. But out of the box, it's only accessible through VS Code, GitHub.com, or the Copilot SDK embedded in your own application.&lt;/p&gt;

&lt;p&gt;What if you could expose Copilot as an independent, discoverable agent, one that any A2A orchestrator or AI framework can call without any Copilot-specific integration code?&lt;/p&gt;

&lt;p&gt;That's exactly what &lt;strong&gt;a2a-copilot&lt;/strong&gt; does.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/shashikanth-gs/a2a-wrapper" rel="noopener noreferrer"&gt;https://github.com/shashikanth-gs/a2a-wrapper&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;npm:&lt;/strong&gt; &lt;a href="https://www.npmjs.com/package/a2a-copilot" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/a2a-copilot&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;What is A2A?&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/google-deepmind/a2a" rel="noopener noreferrer"&gt;A2A (Agent-to-Agent) protocol&lt;/a&gt; is an open standard for agent interoperability. It defines how agents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Advertise capabilities&lt;/strong&gt; via Agent Cards (&lt;code&gt;/.well-known/agent-card.json&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accept tasks&lt;/strong&gt; via JSON-RPC and REST&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stream responses&lt;/strong&gt; via SSE&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintain task lifecycle&lt;/strong&gt; (submitted → working → completed)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When an agent speaks A2A, any orchestrator that understands the protocol can discover and call it without knowing anything about the agent's internals.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6e0t5f6wn8c483bqn9q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6e0t5f6wn8c483bqn9q.png" alt="A2A Agent Card Discovery Flow"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Node.js 18+&lt;/li&gt;
&lt;li&gt;A GitHub account with Copilot access&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;gh&lt;/code&gt; CLI authenticated (&lt;code&gt;gh auth login&lt;/code&gt;) OR a &lt;code&gt;GITHUB_TOKEN&lt;/code&gt; environment variable set&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;Step 1: Install&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnodm035tl5a8m0m4x5bq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnodm035tl5a8m0m4x5bq.png" alt="npm Quick-Start Terminal Card"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; a2a-copilot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or run without installing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx a2a-copilot &lt;span class="nt"&gt;--config&lt;/span&gt; path/to/config.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 2: Create Your Agent Config
&lt;/h2&gt;

&lt;p&gt;Create &lt;code&gt;my-agent/config.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"agentCard"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"My Copilot Agent"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A GitHub Copilot-powered coding agent exposed via A2A"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"protocolVersion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0.3.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"streaming"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"skills"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"coding"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Coding Assistant"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Full-stack coding, architecture, and debugging"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"tags"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"code"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"typescript"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"python"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"architecture"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"server"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"port"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"hostname"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"advertiseHost"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"localhost"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"copilot"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gpt-4.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"streaming"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"systemPrompt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"You are a senior software engineer. Help with code, architecture, and debugging."&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 3: Start the Agent
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;a2a-copilot &lt;span class="nt"&gt;--config&lt;/span&gt; my-agent/config.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;[info] A2A server started
[info] Agent Card: http://localhost:3000/.well-known/agent-card.json
[info] JSON-RPC:   http://localhost:3000/a2a/jsonrpc
[info] REST:       http://localhost:3000/a2a/rest
[info] Health:     http://localhost:3000/health
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 4: Discover Your Agent
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:3000/.well-known/agent-card.json | jq &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"My Copilot Agent"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A GitHub Copilot-powered coding agent exposed via A2A"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"protocolVersion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0.3.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:3000"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"capabilities"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"streaming"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"skills"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the A2A Agent Card: the agent's identity and capability manifest. Any A2A-compatible orchestrator can read this and immediately understand what the agent can do.&lt;/p&gt;
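&lt;p&gt;As a sketch of what an orchestrator does with this card, the snippet below fetches it and checks skill tags. The field names (&lt;code&gt;skills&lt;/code&gt;, &lt;code&gt;tags&lt;/code&gt;) match the response above; the helper names themselves are illustrative, not part of any A2A SDK:&lt;/p&gt;

```python
import json
from urllib.request import urlopen

# Orchestrator-side discovery sketch. Field names ("skills", "tags") come from
# the Agent Card response shown above; fetch_agent_card/has_skill are
# illustrative names, not a real SDK API.
def fetch_agent_card(base_url):
    with urlopen(base_url + "/.well-known/agent-card.json") as resp:
        return json.load(resp)

def has_skill(card, tag):
    """True if any advertised skill carries the given tag."""
    return any(tag in skill.get("tags", []) for skill in card.get("skills", []))
```

An orchestrator could route a "write me some TypeScript" task to any agent for which &lt;code&gt;has_skill(card, "typescript")&lt;/code&gt; holds, without knowing what model sits behind it.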




&lt;h2&gt;
  
  
  Step 5: Send a Task
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Via REST:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:3000/a2a/rest &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "message": {
      "role": "user",
      "parts": [{"kind": "text", "text": "Write a TypeScript function that debounces a callback with a configurable delay."}]
    }
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Via JSON-RPC:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:3000/a2a/jsonrpc &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "jsonrpc": "2.0",
    "id": "1",
    "method": "tasks/send",
    "params": {
      "message": {
        "role": "user",
        "parts": [{"kind": "text", "text": "Explain the difference between debounce and throttle."}]
      }
    }
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copilot responds in full A2A format with task lifecycle events and streaming artifacts.&lt;/p&gt;
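&lt;p&gt;For comparison, here is the same JSON-RPC call from Python using only the standard library. The envelope mirrors the curl example above; &lt;code&gt;make_task_payload&lt;/code&gt; and &lt;code&gt;send_task&lt;/code&gt; are illustrative names, not part of the project:&lt;/p&gt;

```python
import json
from urllib.request import Request, urlopen

# Builds the same JSON-RPC envelope as the curl example above.
def make_task_payload(text, req_id="1"):
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tasks/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
            }
        },
    }

def send_task(base_url, text):
    # base_url is e.g. "http://localhost:3000", per the startup output above.
    body = json.dumps(make_task_payload(text)).encode()
    req = Request(
        base_url + "/a2a/jsonrpc",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)
```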




&lt;h2&gt;
  
  
  Adding MCP Tools (Optional)
&lt;/h2&gt;

&lt;p&gt;Want your agent to also read and write files? Add an MCP section to your config:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;stdio (child process):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"mcp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"filesystem"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"stdio"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@modelcontextprotocol/server-filesystem"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/path/to/workspace"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;HTTP MCP server:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"mcp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"my-tools"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:8002/mcp"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart the agent. Copilot now has access to those tools as part of its reasoning loop.&lt;/p&gt;




&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fagfusdm6dpwopr06ozel.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fagfusdm6dpwopr06ozel.png" alt="a2a-copilot Internal Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Full request flow from A2A client through to GitHub Copilot and MCP tools&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What You Just Built
&lt;/h2&gt;

&lt;p&gt;You now have GitHub Copilot running as a standalone, fully A2A-compliant agent:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Discoverable via Agent Card&lt;/li&gt;
&lt;li&gt;Callable via JSON-RPC and REST&lt;/li&gt;
&lt;li&gt;Streaming via SSE&lt;/li&gt;
&lt;li&gt;Multi-turn conversations via persistent sessions&lt;/li&gt;
&lt;li&gt;MCP tool access (if configured)&lt;/li&gt;
&lt;li&gt;Docker-deployable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Any A2A orchestrator can call it without any Copilot-specific code.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Switch models:&lt;/strong&gt; Change &lt;code&gt;"model"&lt;/code&gt; to &lt;code&gt;"claude-sonnet-4-5"&lt;/code&gt; in your config&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add MCP tools:&lt;/strong&gt; Database connectors, HTTP APIs, custom tool servers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run multiple specialized agents&lt;/strong&gt; on different ports with different system prompts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use an A2A orchestrator&lt;/strong&gt; to route tasks dynamically across agents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check out a2a-opencode&lt;/strong&gt; for a vendor-neutral alternative that supports any LLM provider&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/shashikanth-gs/a2a-wrapper" rel="noopener noreferrer"&gt;https://github.com/shashikanth-gs/a2a-wrapper&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;npm:&lt;/strong&gt; &lt;code&gt;npm install -g a2a-copilot&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Also see:&lt;/strong&gt; &lt;a href="https://github.com/shashikanth-gs/a2a-wrapper" rel="noopener noreferrer"&gt;https://github.com/shashikanth-gs/a2a-wrapper&lt;/a&gt; (for the OpenCode / any-provider variant)&lt;/p&gt;

</description>
      <category>a2a</category>
      <category>githubcopilot</category>
      <category>copilotsdk</category>
      <category>agents</category>
    </item>
    <item>
      <title>The Missing Piece for AI-Assisted Infrastructure Management</title>
      <dc:creator>Shashi Kanth</dc:creator>
      <pubDate>Wed, 31 Dec 2025 11:52:38 +0000</pubDate>
      <link>https://dev.to/shashikanthgs/the-missing-piece-for-ai-assisted-infrastructure-management-4709</link>
      <guid>https://dev.to/shashikanthgs/the-missing-piece-for-ai-assisted-infrastructure-management-4709</guid>
      <description>&lt;p&gt;I have been managing my homelab for years now, and it handles a lot. Two Kubernetes clusters, a mix of physical machines and VMs, a few components running in the cloud, reverse proxies managing traffic across all of it, databases, caches the usual sprawl that happens when you actually use your infrastructure for real workloads.&lt;/p&gt;

&lt;p&gt;Deploying something new is never just one step. It's a choreographed sequence: update the config on the reverse proxy, deploy to the right Kubernetes cluster, make sure the database migration ran, verify the cache invalidated properly, check that the monitoring picked it up. Miss a step, and something breaks in a way that takes an hour to debug.&lt;/p&gt;

&lt;p&gt;When Claude and ChatGPT started getting genuinely good at understanding infrastructure, I had a thought: what if I could just describe what I want deployed, and have an AI coordinate across all these systems?&lt;/p&gt;

&lt;p&gt;The problem? I'm not handing my SSH keys or kubeconfig files to anyone. Or any AI.&lt;/p&gt;

&lt;p&gt;So I built something to solve this. It's called SSH MCP Bridge, and I'm open-sourcing it today.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Problem: Coordination Across Heterogeneous Infrastructure
&lt;/h2&gt;

&lt;p&gt;If you have a single server, infrastructure management is straightforward. SSH in, run commands, done.&lt;/p&gt;

&lt;p&gt;But real infrastructure, even a homelab, is rarely a single server. It's a collection of machines with different purposes, different access patterns, and different failure modes. Deploying a new service might touch five different systems. Troubleshooting a problem means correlating logs and metrics across multiple hosts.&lt;/p&gt;

&lt;p&gt;This is where AI assistance could genuinely help. Not by being smarter than me at any individual task, but by handling the coordination overhead. The AI can SSH into the reverse proxy, check the config, hop over to the app server, verify the deployment, query the database to confirm the migration ran, and report back, all while I describe what I'm trying to accomplish in plain English.&lt;/p&gt;

&lt;p&gt;But current AI integrations with infrastructure are either too locked down to be useful, or they require you to paste credentials into places that make security folks nervous. I wanted something different. I wanted to tell Claude "deploy the new version to production" and have it actually coordinate across my systems without ever seeing a single IP address, password, or private key.&lt;/p&gt;

&lt;p&gt;That's not paranoia. That's just good architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: SSH MCP Bridge
&lt;/h2&gt;

&lt;p&gt;MCP (Model Context Protocol) is how modern AI assistants like Claude, ChatGPT, and VS Code Copilot connect to external tools. Instead of copy-pasting command outputs back and forth, you expose tools that the AI can call directly. The ecosystem is still young, but it's maturing fast.&lt;/p&gt;

&lt;p&gt;SSH MCP Bridge is an MCP server that sits between your AI assistant and your infrastructure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AI Assistant (Claude/ChatGPT/VS Code)
           |
           v
    SSH MCP Bridge
           |
           v
Your Servers (web, db, cache, etc.)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AI talks to the bridge using MCP. The bridge holds your SSH credentials and maintains connections to your servers. When the AI wants to run a command, it asks the bridge. The bridge executes it and returns the results.&lt;/p&gt;

&lt;p&gt;What the AI sees:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A list of friendly server names ("web-server", "database", "redis-cache")&lt;/li&gt;
&lt;li&gt;Descriptions of what each server does&lt;/li&gt;
&lt;li&gt;Tools to execute commands and manage sessions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What the AI never sees:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IP addresses&lt;/li&gt;
&lt;li&gt;SSH private keys&lt;/li&gt;
&lt;li&gt;Passwords&lt;/li&gt;
&lt;li&gt;Network topology&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn't just about security (though that's the main point). It also makes the AI's job easier. Instead of reasoning about "192.168.1.47", it thinks about "the production database server." That's closer to how we think about infrastructure anyway.&lt;/p&gt;
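&lt;p&gt;That split between friendly names and real connection details can be sketched as a redaction layer. The host fields below mirror the YAML config shown later in this post; everything else is illustrative, not the bridge's actual code:&lt;/p&gt;

```python
# Credential-isolation sketch: the bridge keeps full host records, but the
# listing exposed to the AI carries only name and description. Field names
# mirror the YAML config shown later in the post; values are placeholders.
HOSTS = {
    "web-server": {
        "description": "Production web server",
        "host": "192.0.2.10",                 # never exposed to the AI
        "username": "deploy",                 # never exposed to the AI
        "private_key_path": "~/.ssh/id_rsa",  # never exposed to the AI
    },
}

def visible_view(hosts):
    """The only host information an MCP client ever receives."""
    return [
        {"name": name, "description": rec["description"]}
        for name, rec in hosts.items()
    ]
```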

&lt;h2&gt;
  
  
  Two Ways to Deploy
&lt;/h2&gt;

&lt;p&gt;I designed this for two different use cases, because my needs are different depending on context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STDIO Mode&lt;/strong&gt; is for local deployments. If you're running Claude Desktop on your laptop and your laptop can already SSH into your servers, this is the simplest path. The bridge runs as a subprocess that Claude talks to directly. No network exposure, no authentication complexity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HTTP Mode&lt;/strong&gt; is for remote deployments. Deploy the bridge on a server in your network (or in a container), and connect to it over HTTP/SSE. This is what you need for ChatGPT integration, or if you want a centralized MCP server that multiple clients can connect to. It supports API key auth for simple setups, and full OAuth 2.0/OIDC for enterprise environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You Can Actually Do With This
&lt;/h2&gt;

&lt;p&gt;Let me give you some real examples from my own usage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Troubleshooting&lt;/strong&gt;: "Check disk usage and memory on all servers, and tell me if anything looks concerning." The AI queries each host, aggregates the results, and gives you a summary. No more opening four terminal tabs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployments&lt;/strong&gt;: "Pull the latest code on the app server, run migrations on the database, restart the application, and verify it's responding." That's one sentence that coordinates multiple servers in the right order.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuration changes&lt;/strong&gt;: "Add a new upstream server to the nginx config and reload." The AI can read the current config, make the edit, validate it, and apply it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Investigation&lt;/strong&gt;: "Show me the last 50 lines of the application log, and check if there are any related errors in the nginx access log." Cross-referencing logs across servers becomes conversational.&lt;/p&gt;

&lt;p&gt;The key insight here is that the AI can maintain context across multiple commands and multiple servers. It remembers what it just checked, notices patterns, and can reason about the overall state of your system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Session Management
&lt;/h2&gt;

&lt;p&gt;SSH connections are relatively expensive to establish. You don't want to open a new connection for every command.&lt;/p&gt;

&lt;p&gt;The bridge maintains a session pool. Once a connection to a host is established, it stays open and gets reused. Sessions automatically close after a configurable idle timeout (default is 30 minutes). There's also a cap on the number of concurrent sessions per host, to prevent resource exhaustion.&lt;/p&gt;

&lt;p&gt;For shell mode sessions (where you want working directory and environment to persist between commands), the bridge keeps a persistent shell channel open. For exec mode sessions (stateless, isolated commands), each command runs independently.&lt;/p&gt;
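&lt;p&gt;A toy version of that pooling behaviour, with a pluggable clock so the idle timeout is easy to reason about. The real bridge manages actual SSH channels; here a "session" is just a placeholder object:&lt;/p&gt;

```python
import time

# Sketch of the session-pool behaviour described above: connections are reused
# per host and evicted after an idle timeout. Not the bridge's actual code.
class SessionPool:
    def __init__(self, idle_timeout=30 * 60, clock=time.monotonic):
        self.idle_timeout = idle_timeout
        self.clock = clock
        self.sessions = {}  # host -> (session, last_used)

    def get(self, host, connect):
        self.evict_idle()
        if host in self.sessions:
            session, _ = self.sessions[host]
        else:
            session = connect(host)  # the expensive SSH handshake happens here
        self.sessions[host] = (session, self.clock())
        return session

    def evict_idle(self):
        now = self.clock()
        for host in list(self.sessions):
            _, last_used = self.sessions[host]
            if now - last_used > self.idle_timeout:
                del self.sessions[host]
```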

&lt;h2&gt;
  
  
  Security Considerations
&lt;/h2&gt;

&lt;p&gt;I'm going to be direct about security, because infrastructure access is serious.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Credential isolation&lt;/strong&gt; is the core principle. The bridge holds credentials; clients don't. Period.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Command-level control&lt;/strong&gt; gives you multiple layers of restriction. First, the SSH username you configure determines what's possible at the OS level: if you use a non-root user, root commands will fail even if the AI generates them. The operating system enforces this, not the bridge. On top of that, you can configure allowed or disallowed command patterns in the bridge itself. Want to block any command containing &lt;code&gt;rm -rf&lt;/code&gt; or &lt;code&gt;sudo&lt;/code&gt;? Add it to the deny list. Want to restrict execution to only a specific set of commands? Use an allow list. The AI never gets to run something you haven't permitted.&lt;/p&gt;
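&lt;p&gt;The allow/deny idea can be sketched like this. I'm assuming plain substring matching here; the bridge's actual pattern semantics may differ:&lt;/p&gt;

```python
# Sketch of the allow/deny check described above. Whether the real bridge uses
# substrings, globs, or regexes is not specified here; this version treats
# patterns as plain substrings. Deny wins; an allow list, if present, is
# exhaustive.
def is_permitted(command, deny=(), allow=None):
    for pattern in deny:
        if pattern in command:
            return False
    if allow is not None:
        return any(pattern in command for pattern in allow)
    return True
```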

&lt;p&gt;&lt;strong&gt;Authentication&lt;/strong&gt; for HTTP mode uses either API keys (for simpler setups) or OAuth 2.0/OIDC (for enterprise). The OAuth integration works with Auth0, Azure AD, Okta, Keycloak, or anything else that speaks standard OIDC.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit logging&lt;/strong&gt; captures every command executed, with timestamp, user identity (from JWT tokens in OAuth mode), target host, and result. If you need to answer "who did what, when" for compliance or incident investigation, it's all there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Container security&lt;/strong&gt;: the Docker image runs as a non-root user. Mount your config and SSH keys as read-only volumes. Set resource limits. Standard practices, but important to mention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network isolation&lt;/strong&gt;: in HTTP mode, put the bridge behind a reverse proxy with TLS. Restrict access at the firewall level. Consider deploying it on an internal network accessible only via VPN.&lt;/p&gt;

&lt;p&gt;What you should NOT do: expose this to the public internet with only API key auth. That's asking for trouble.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;If you want to try it out:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/shashikanth-gs/mcp-ssh-bridge.git
&lt;span class="nb"&gt;cd &lt;/span&gt;ssh-mcp-bridge
python &lt;span class="nt"&gt;-m&lt;/span&gt; venv .venv
&lt;span class="nb"&gt;source&lt;/span&gt; .venv/bin/activate
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a config file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enable_stdio&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;log_level&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;INFO"&lt;/span&gt;

&lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-server&lt;/span&gt;
    &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Development&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;server"&lt;/span&gt;
    &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-server.com"&lt;/span&gt;
    &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-user"&lt;/span&gt;
    &lt;span class="na"&gt;private_key_path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;~/.ssh/id_rsa"&lt;/span&gt;
    &lt;span class="na"&gt;execution_mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;shell"&lt;/span&gt;

&lt;span class="na"&gt;session&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;idle_timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Claude Desktop, add to your config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ssh-bridge"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/path/to/venv/bin/python"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-m"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ssh_mcp_bridge"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/path/to/config.yaml"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart Claude Desktop, and ask it to list your SSH hosts. If everything's configured correctly, you should see your server listed.&lt;/p&gt;

&lt;p&gt;Docker deployment is also available if you prefer containers. Check the repo for docker-compose examples.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Open Source This?
&lt;/h2&gt;

&lt;p&gt;I've been using this for my own infrastructure for a while. It started as a weekend project to scratch an itch, then grew as I added OAuth support, then got more polished as I realized other people might find it useful.&lt;/p&gt;

&lt;p&gt;The MCP ecosystem needs more tools. Right now, most examples are simple file readers, web scrapers, basic API wrappers. Infrastructure management is a harder problem, but it's also where AI assistance can provide real leverage.&lt;/p&gt;

&lt;p&gt;I'm also hoping to get feedback and contributions. There are features I want but haven't built yet: SCP/SFTP file transfers, bastion host (jump host) support, MCP resources for exposing server state. If any of those interest you, PRs are welcome.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;AI-assisted infrastructure management is coming, whether we like it or not. The question is whether we do it in a way that's secure and auditable, or in a way that we'll regret later.&lt;/p&gt;

&lt;p&gt;SSH MCP Bridge is my attempt at the former. It's not the only approach, and it might not be right for everyone. But if you've been looking for a way to let AI help with server management without compromising your security posture, give it a try.&lt;/p&gt;

&lt;p&gt;The repo is at &lt;a href="https://github.com/shashikanth-gs/mcp-ssh-bridge" rel="noopener noreferrer"&gt;github.com/shashikanth-gs/mcp-ssh-bridge&lt;/a&gt;. Docker images are on Docker Hub. Documentation covers everything from quick start to OAuth setup to security hardening.&lt;/p&gt;

&lt;p&gt;Questions, feedback, or war stories about AI and infrastructure? I'm interested in hearing them.&lt;/p&gt;




</description>
      <category>mcp</category>
      <category>ai</category>
      <category>devsecops</category>
      <category>homelab</category>
    </item>
  </channel>
</rss>
