Note: This article is an English summary of the video below (approx. 30 min), shared in an X post.
Speaker: Ivan Nardini (Google Cloud Developer Relations Engineer, AI/ML) / Recorded at an Anthropic-hosted event.
Original YouTube: Building AI agents with Claude in Google Cloud's Vertex AI | Code w/ Claude
Introduction
You've built an AI agent, but you can't ship it to production. That's the wall Ivan Nardini (Google Cloud) dismantles in this 30-minute workshop.
Using ADK, MCP, Vertex AI Agent Engine, and the A2A Protocol, he walks through building and deploying a Claude-powered multi-agent system end to end.
Why AI Agents Are Hard to Productionize
Prototypes are easy. Production is hard. Three root causes:
| Challenge | Details |
|---|---|
| Fragmented landscape | Too many frameworks — unclear what to choose |
| Hard to integrate | Cross-framework agent communication is complex |
| Lack of ops & governance | Monitoring, logging, and scaling must all be hand-rolled |
Google Cloud's Agentic Stack is designed to solve all three.
The Google Cloud Agentic Stack
Four layers, each targeting one of the above challenges:
| Layer | Role |
|---|---|
| Agent Development Kit (ADK) | Open-source, code-first agent development framework |
| Model Context Protocol (MCP) | Open protocol standardizing how apps provide context to LLMs |
| Vertex AI Agent Engine | Managed platform for deploying and scaling agents in production |
| Agent2Agent (A2A) Protocol | Open standard enabling cross-framework agent collaboration |
Demo 1: Build Your First Agent with 3 Files
Using a birthday planner agent as the example:
```python
from google.adk.agents import LlmAgent
from google.adk.models.anthropic_llm import Claude
from google.adk.models.registry import LLMRegistry

# Register Claude so ADK can resolve the model string below
LLMRegistry.register(Claude)

root_agent = LlmAgent(
    name="birthday_planner",
    model="claude-3-7-sonnet@20250219",
    description="An agent that helps plan birthday parties",
    instruction="Handle guest lists, venue suggestions, and scheduling...",
)
```
Just three files: `agent.py`, `.env`, and `requirements.txt`. One command runs it:
```shell
adk run birthday_planner   # CLI interaction
adk web                    # Browser UI + debug view
```
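The `.env` file holds the project configuration. A minimal sketch: the variable names follow ADK's documented Vertex AI setup, and the values are placeholders:

```shell
# .env — ADK configuration for Vertex AI (placeholder values)
GOOGLE_GENAI_USE_VERTEXAI=1
GOOGLE_CLOUD_PROJECT=your-project-id
GOOGLE_CLOUD_LOCATION=us-east5
```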
ADK supports LlmAgent, SequentialAgent, and other patterns — compatible with Claude, Gemini, and more.
Demo 2: Go Multi-Agent with MCP
To extend the birthday planner to also schedule calendar events, you add two more agents and an orchestrator:
- BirthdayPlannerAgent — party suggestions
- CalendarServiceAgent — calendar operations via MCP server
- EventOrganizerAgent — routes requests to the right agent
Connecting an MCP server is two lines:
```python
from google.adk.agents import LlmAgent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, SseServerParams

# Fetch the MCP server's tools over SSE; exit_stack handles cleanup
mcp_tools, exit_stack = await MCPToolset.from_server(
    connection_params=SseServerParams(url=MCP_CALENDAR_SERVER_URL)
)
agent = LlmAgent(
    name="CalendarServiceAgent",
    model="claude-3-7-sonnet@20250219",
    tools=mcp_tools,
    ...
)
```
Any existing MCP server can be plugged in as a tool. The orchestrator auto-routes requests based on agent descriptions.
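To make the routing idea concrete, here is a framework-free sketch: the orchestrator compares a request against each agent's description and forwards it to the best match. This is a simplified stand-in (keyword overlap instead of the LLM-driven routing ADK actually performs), and all names and descriptions are illustrative:

```python
# Simplified stand-in for ADK's LLM-driven routing: the orchestrator
# forwards a request to the agent whose description best matches it.
AGENT_DESCRIPTIONS = {
    "BirthdayPlannerAgent": "party suggestions themes venues gifts",
    "CalendarServiceAgent": "calendar event scheduling dates invites",
}

def route(request: str) -> str:
    """Pick the agent whose description shares the most words with the request."""
    words = set(request.lower().split())
    return max(
        AGENT_DESCRIPTIONS,
        key=lambda name: len(words & set(AGENT_DESCRIPTIONS[name].split())),
    )

print(route("add a calendar event for the party"))  # CalendarServiceAgent
```

In the real system the LLM reads the same `description` fields, so well-written descriptions are what make the auto-routing reliable.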
Demo 3: Deploy to Vertex AI Agent Engine
```python
from vertexai import agent_engines

# Assumes vertexai.init(project=..., location=..., staging_bucket=...)
# has been called beforehand
agent_engines.create(
    agent=root_agent,
    requirements=["google-cloud-aiplatform[adk]"],
)
```
What you get automatically after deploy:
- Observability via Cloud Trace / Logging / Monitoring
- Session management (persistent conversation history)
- Integration with Vertex AI Evaluation Service for continuous improvement
Works with LangGraph, LangChain, LlamaIndex, and CrewAI too — not just ADK.
Bonus: A2A Protocol — Cross-Framework Agent Communication
When you need a LangChain agent and an ADK agent to collaborate, you need a shared language: Agent2Agent (A2A) Protocol.
Two core concepts:
- Agent Card: A digital business card for the agent — lets other agents discover what it can do
- Agent Skills: Describes the agent's specific capabilities and API
A2A is built on HTTP and JSON-RPC, with enterprise-grade security included.
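As a sketch, an Agent Card is just structured metadata served over HTTP. The field names below approximate the A2A specification's general shape but are not authoritative, and the endpoint URL is a placeholder:

```python
import json

# Hypothetical Agent Card for the calendar agent; field names approximate
# the A2A spec's shape and should be checked against the current version.
agent_card = {
    "name": "CalendarServiceAgent",
    "description": "Performs calendar operations via an MCP server",
    "url": "https://agents.example.com/calendar",  # placeholder endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": False},
    "skills": [
        {
            "id": "create_event",
            "name": "Create calendar event",
            "description": "Schedules an event on the user's calendar",
        }
    ],
}

# A2A clients fetch this document to discover what the agent can do,
# then talk to it via JSON-RPC over HTTP.
print(json.dumps(agent_card, indent=2))
```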
Summary
| Takeaway | Detail |
|---|---|
| ADK: 3 files, 1 command | Fastest path to a working agent |
| MCP: 2 lines | Plug in any existing MCP server as a tool |
| Agent Engine: zero-ops deploy | Observability, scaling, sessions — all managed |
| A2A: break the framework wall | Claude, Gemini, LangChain, CrewAI can coexist |
ADK + MCP + Agent Engine + A2A gives you a complete stack from local dev to production scale.