The AI agent landscape is evolving fast. While we've seen protocols like MCP (Model Context Protocol) gain traction for connecting AI models to tools and data sources, there's another emerging standard that's purpose-built for something different: agent-to-agent communication.
That's where the Agent2Agent (A2A) protocol comes in. It's a specification designed for AI agents to discover, communicate with, and collaborate with each other.
If you're ready to jump in, the @artinet/create-agent CLI gives you three battle-tested templates to start building right away.
Why Agent2Agent Matters
Before we dive into the templates, let's talk about why A2A is different from other protocols you might know:
MCP is about tools and context. It excels at giving AI models access to external tools, data sources, and contextual information. Think: "How do I let Claude read my Notion docs?"
A2A is about agent collaboration. It's designed for agents to talk to each other, delegate tasks, share results, and build complex multi-agent workflows. Think: "How do I let my research agent call my data analysis agent?"
The A2A protocol standardizes:
- Agent discovery via Agent Cards (like OpenAPI specs for agents)
- Message passing with rich content types (text, files, data)
- Task lifecycle management (submission, streaming updates, cancellation)
- Bidirectional communication and streaming
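To make the message-passing point concrete, here's a sketch of a multi-part message in the shape the spec describes. The field names follow the A2A spec's TextPart and DataPart definitions, but treat this as illustrative rather than normative:

```typescript
// An A2A message carrying two content types. "kind" discriminates the part
// type; text parts carry a string, data parts carry structured JSON.
const message = {
  role: "user" as const,
  parts: [
    { kind: "text", text: "Analyze this dataset" },
    { kind: "data", data: { rows: 128, format: "csv" } },
  ],
};
```

A file part would look similar, with `kind: "file"` and a file payload (URI or bytes) per the spec.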
What's Inside
The @artinet/create-agent CLI ships with three templates, each teaching a different pattern:
1. Basic - The Foundation
A minimal agent template, perfect for understanding the core structure of building simple agents. Think of it as your "Hello, World" for A2A agents.
const demoAgent = AgentBuilder()
.text(({ command }) => {
// Step 1: Extract and validate input
const userText = getPayload(command.message).text;
return {
parts: [`Processing request: ${userText}`],
args: [userText], // Pass to next step
};
})
.text(({ args }) => {
// Step 2: Transform and respond
return `You said: "${args?.[0]}". This is an echo server example.`;
})
.createAgentEngine();
Real-world use cases:
- Form validators that check → sanitize → store
- Document processors that extract → analyze → summarize
- Data pipelines that fetch → transform → deliver
2. Coder - AI-Powered Code Generation
A template pre-configured for developing agents that are directly integrated with an LLM.
Real-world use cases:
- Code generation agents with project context
- Content writers that maintain conversation flow
- Technical support bots with memory
- Agents that need reasoning capabilities
3. Orchestrator - The Coordinator
A template designed for building orchestrator agents that can manage and coordinate multiple other agents or tasks.
Real-world use cases:
- Research platforms (gather agent → analysis agent → summarization agent)
- Development tools (planner agent → coder agent → tester agent)
- Customer service (classifier agent → specialist agents → synthesis agent)
- Any complex workflow that benefits from specialization
How It Works
All three templates come loaded with everything you need for modern A2A agent development:
A2A Foundation:
- Built on the official Agent2Agent spec
- Express-based server for handling A2A connections
- Automatic Agent Card serving at /.well-known/agent.json
- Ready-to-extend structure using the AgentBuilder pattern
Modern TypeScript Development:
- Full TypeScript support with sensible defaults
- TSX for fast development with hot reloading
- ES modules for modern JavaScript
- A complete Zod implementation of the Agent2Agent spec
Ready To Go:
- In-memory task storage (easily swappable for persistence)
- CORS configuration included
- Proper error handling
- Optional @artinet/lchat integration for testing
Getting Started in 60 Seconds
Using these templates is incredibly simple. Just run:
npx @artinet/create-agent@latest
The CLI will walk you through an interactive setup:
- Select a template from basic, coder, or orchestrator
- Enter your project name (used for the directory and package.json)
That's it! The CLI will scaffold your project and you're ready to go.
Once your project is created:
cd your-agent-name
npm install
npm start
Your agent will be running at http://localhost:3000/a2a, with the Agent Card available at http://localhost:3000/.well-known/agent.json.
Pro tip: Run with the --with-chat flag to automatically launch an interactive chat client:
npm start -- --with-chat
The AgentBuilder Pattern
All three templates use the powerful AgentBuilder pattern from the @artinet/sdk, which makes creating multi-step agents incredibly intuitive:
import { AgentBuilder, getPayload } from "@artinet/sdk";
const demoAgent = AgentBuilder()
.text(({ command }) => {
const userText = getPayload(command.message).text;
return {
parts: [`Processing request: ${userText}`],
args: [userText], // Pass data to the next step
};
})
.text(({ args }) =>
`You said: "${args?.[0]}". This is an echo server example.`
)
.createAgentEngine();
This pattern allows you to:
- Chain multiple processing steps
- Pass data between steps using args
- Mix different content types (text, files, data)
- Keep your code clean and maintainable
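The data flow between steps can be modeled in plain TypeScript, independent of the SDK. This is a simplified mental model, not the SDK's actual types (the real step context also exposes command and State):

```typescript
// A simplified model of the AgentBuilder chain: each step receives the
// previous step's args and returns visible parts plus args for the next step.
type StepResult = { parts: string[]; args?: unknown[] };
type Step = (args: unknown[]) => StepResult;

function runChain(steps: Step[], initialArgs: unknown[]): string[] {
  const output: string[] = [];
  let args = initialArgs;
  for (const step of steps) {
    const result = step(args);
    output.push(...result.parts); // collect what each step emits
    args = result.args ?? []; // thread args into the next step
  }
  return output;
}

// Two steps mirroring the echo example above
const parts = runChain(
  [
    (args) => ({ parts: [`Processing request: ${args[0]}`], args }),
    (args) => ({ parts: [`You said: "${args[0]}"`] }),
  ],
  ["hello"]
);
```

The takeaway: each `.text()` step is a pure-looking function of its inputs, which is what keeps chains easy to test and reorder.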
Diving Deeper: The Coder Template
The coder template is particularly interesting because it demonstrates LLM integration:
import { LocalRouter } from "@artinet/router";
import { AgentBuilder, getParts } from "@artinet/sdk";
export const demoAgent = AgentBuilder()
.text(({ context }) => {
// Build conversation history
const stateHistory = context.State().task.history ?? [];
const history = [...stateHistory, context.command.message];
const messages = history.map((m) => ({
role: m.role === "agent" ? "agent" : "user",
content: getParts(m.parts).text,
}));
return {
parts: [`Generating code...`],
args: messages,
};
})
.text(async ({ args }) => {
const router = new LocalRouter();
return await router.connect({
message: {
identifier: "deepseek-ai/DeepSeek-R1",
preferredEndpoint: "hf-inference",
session: {
messages: [
{
role: "system",
content: "You are an expert coding assistant..."
},
...(args ?? []),
],
},
},
});
})
.createAgentEngine();
This shows how easy it is to integrate state management, conversation history, and external LLM calls in your A2A agent.
The Orchestrator: Multi-Agent Coordination
The orchestrator template takes things even further by showing how to coordinate multiple agents:
// Define a coding agent
const codingAgentEngine = AgentBuilder()
.text(/* ... coding logic ... */)
.createAgentEngine();
// Register it with the router
router.createAgent({
engine: codingAgentEngine,
agentCard: codingAgentCard,
});
// Main orchestrator delegates to specialized agents
export const demoAgent = AgentBuilder()
.text(() => "Thinking about your request...")
.text(async ({ command }) => {
return await router.connect({
message: {
identifier: "deepseek-ai/DeepSeek-R1",
preferredEndpoint: "hf-inference",
session: {
messages: [
{
role: "system",
content: "You are an orchestration agent..."
},
{
role: "user",
content: getPayload(command.message).text,
},
],
},
},
agents: ["coding-agent"], // Available specialized agents
});
})
.createAgentEngine();
This pattern enables building sophisticated multi-agent systems where a main orchestrator can intelligently route requests to specialized agents.
Why This Setup Works
After using these templates to build several A2A agents, here's what I appreciate most:
The dev loop is fast. TSX provides instant reloads without a build step during development.
The SDK does the heavy lifting. You don't need to worry about protocol details, JSON-RPC, or SSE streaming—it's all handled for you.
Progressive complexity. Start with the basic template, then graduate to the coder template, and finally build orchestrators when you need multi-agent coordination.
It's opinionated but not rigid. The templates give you sensible defaults, but you can swap out pieces like storage, LLM providers, or routing logic as needed.
Ready from the start. These aren't just examples: they ship with error handling and CORS configured, and they're highly customizable.
Testing Your Agent
Once your agent is running, you can test it several ways:
Using the built-in chat client:
npm start -- --with-chat
Using the A2AClient from the SDK:
import { A2AClient } from "@artinet/sdk";
const client = new A2AClient("http://localhost:3000/a2a");
const stream = client.sendStreamingMessage({
message: {
role: "user",
parts: [{ kind: "text", text: "Hello, agent!" }],
},
});
for await (const update of stream) {
console.log(update);
}
The Agent Card: Your Agent's Identity
Every A2A agent exposes an Agent Card at /.well-known/agent.json. Think of it as OpenAPI for agents:
{
"name": "My Coding Agent",
"url": "http://localhost:3000/a2a",
"version": "1.0.0",
"capabilities": {
"streaming": true
},
"skills": [
{
"id": "code-generation",
"name": "Code Generator",
"description": "Generates high-quality code"
}
]
}
Visit http://localhost:3000/.well-known/agent.json to see your agent's capabilities, skills, and metadata.
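If you want to check a card programmatically, a small runtime guard over the fields shown above is enough. Note that the real A2A card schema defines more fields than this sample; the guard below covers only the subset shown:

```typescript
// Minimal runtime validation for the Agent Card fields from the sample above.
interface AgentSkill { id: string; name: string; description: string }
interface AgentCard {
  name: string;
  url: string;
  version: string;
  capabilities?: { streaming?: boolean };
  skills?: AgentSkill[];
}

function isAgentCard(value: unknown): value is AgentCard {
  if (typeof value !== "object" || value === null) return false;
  const card = value as Record<string, unknown>;
  return (
    typeof card.name === "string" &&
    typeof card.url === "string" &&
    typeof card.version === "string"
  );
}
```

You could pair this with a plain `fetch` of a remote agent's /.well-known/agent.json to validate its card before sending it messages.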
Customizing Your Agent
All templates use a clean structure that's easy to customize:
your-agent/
├── src/
│ ├── agent.ts # 🧠 Your agent's brain (the logic)
│ ├── launch.ts # 🚀 Server configuration
│ └── lib/
│ └── card.ts # 📋 Agent Card (capabilities & metadata)
├── package.json
└── tsconfig.json
To customize:
- Modify agent.ts to change your agent's behavior
- Update lib/card.ts to describe your agent's capabilities
- Adjust launch.ts for server configuration (port, CORS, paths)
- Swap InMemoryTaskStore for FileStore or your own storage implementation
Adding Persistence
The templates use in-memory storage by default, but switching to persistent storage is simple:
import { createAgentServer, FileStore } from "@artinet/sdk";
import path from "path";
const { app } = createAgentServer({
agent: {
engine: demoAgent,
tasks: new FileStore(path.join(process.cwd(), "data")),
agentCard: agentCard,
},
// ... rest of config
});
Or write your own backend by implementing the Store interface for databases, cloud storage, etc.
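For reference, here's the general shape such a backend might take. The interface below is illustrative only; check the actual Store type exported by @artinet/sdk for the real method signatures before implementing:

```typescript
// Illustrative sketch of a task store backed by a Map; swap the Map for a
// database client to get persistence. Field and method names are hypothetical.
interface TaskRecord { id: string; state: string; history?: unknown[] }

interface TaskStore {
  get(id: string): Promise<TaskRecord | undefined>;
  set(task: TaskRecord): Promise<void>;
}

class MapStore implements TaskStore {
  private tasks = new Map<string, TaskRecord>();
  async get(id: string) { return this.tasks.get(id); }
  async set(task: TaskRecord) { this.tasks.set(task.id, task); }
}
```

The async signatures matter: keeping the interface Promise-based means a later switch from in-memory to a real database doesn't ripple through your agent code.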
What's Next
These templates are just the beginning. The Agent2Agent protocol is designed for interoperability, which means agents built with these templates can communicate with any other A2A-compliant agent.
Some ideas for what you could build:
- Research assistants that gather and synthesize information
- Code reviewers that analyze pull requests
- Data analyzers that process and visualize datasets
- Multi-agent systems where specialized agents collaborate on complex tasks
- API wrappers that expose existing services as A2A agents
Scaffold a template and start building your A2A agent. Whether you're exposing an API, wrapping a database, or creating a multi-agent system, these templates handle the boring parts so you can focus on more interesting problems.
Check out the @artinet/create-agent
and @artinet/sdk
on npm, and give them a star on GitHub if you find them useful!
Learn more about building Multi-Agent Systems at artinet.io.
And if you build something cool with it, we'd love to hear about it!