On April 30th I got an email from Google about something called GEAR, their new program for building AI agents using ADK, the Agent Development Kit. I signed up, watched the intro video, and had a strange feeling of recognition.
The pattern was familiar. Define tools. Write descriptions. Connect an AI model to those tools. Let the model decide which tool to call based on what the user asks.
I built exactly this in .NET back in February, except I used MCP instead of ADK. And I pointed it at a Kubernetes cluster instead of a database.
What ADK and MCP are both trying to solve
The problem both frameworks address is the same. You have an AI model and you want it to do real things in the world, not just generate text. To do that, the model needs tools. A tool is just a function the model can call: search the web, query a database, restart a server, create a file.
The hard part is telling the model what each tool does well enough that it picks the right one. Both ADK and MCP solve this with descriptions. You write a description for each tool, and the model reads those descriptions to decide what to call.
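That loop can be sketched in a few lines. Here is a minimal, framework-free version in Python; the tool names, descriptions, and the hard-coded model choice are hypothetical, standing in for what ADK or MCP would wire up to a real model:

```python
# Minimal sketch of the pattern both frameworks share: a registry of
# tools with descriptions, and a dispatcher that runs whichever tool
# the model selects. The "model" is stubbed out here; in ADK or MCP
# the framework ships the descriptions to a real model, which replies
# with a tool name and arguments.

def list_pods(namespace: str) -> str:
    return f"pods in {namespace}: web-0, web-1"

def scale_deployment(name: str, replicas: int) -> str:
    return f"{name} scaled to {replicas} replicas"

TOOLS = {
    "list_pods": {
        "description": "List all pods in a Kubernetes namespace.",
        "fn": list_pods,
    },
    "scale_deployment": {
        "description": "Change the replica count of a deployment. "
                       "Use this to increase or decrease running instances.",
        "fn": scale_deployment,
    },
}

def dispatch(tool_name: str, arguments: dict) -> str:
    # The model picks tool_name by reading the descriptions above;
    # the host just looks up the function and calls it.
    return TOOLS[tool_name]["fn"](**arguments)

print(dispatch("scale_deployment", {"name": "idp-platform", "replicas": 3}))
# -> idp-platform scaled to 3 replicas
```

Everything interesting lives in those description strings: they are the only thing the model sees when deciding what to call.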
ADK does this in Python, Java, TypeScript, or Go. You define an agent with a name, a model, an instruction, and a list of tools. The framework handles the rest.
MCP does this through a server protocol. You define tools with names, descriptions, and input schemas. Any MCP-compatible client, including Claude Desktop, can connect to your server and use those tools through natural language.
What I built
My MCP server lets Claude manage a Kubernetes cluster through natural language. You type something like "restart the idp-platform deployment" and Claude figures out which tool to call, what parameters to pass, and executes it.
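Under the hood, once Claude has picked a tool and filled in the arguments, the client sends the server a JSON-RPC tools/call request. Roughly what that looks like for the restart example, sketched in Python (the tool name and argument names are illustrative; the id is arbitrary):

```python
import json

# Approximate shape of the JSON-RPC request an MCP client sends after
# the model has chosen a tool and extracted arguments from the user's
# natural-language message.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "restart_deployment",  # hypothetical tool name
        "arguments": {
            "deploymentName": "idp-platform",
            "namespaceName": "default",
        },
    },
}

print(json.dumps(request))
```

The server never sees the user's sentence, only this structured call; translating one into the other is entirely the model's job.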
The server exposes 8 tools: list pods, get pod logs, scale a deployment, restart a deployment, describe a node, get cluster events, and a couple more. Each tool has a detailed description that tells Claude what it does and when to use it.
Here is what one tool definition looks like in .NET:
[McpServerTool, Description(
    "Scale a Kubernetes deployment to a specific number of replicas. " +
    "Use this when you need to increase or decrease the number of running instances. " +
    "Provide the deployment name and namespace. " +
    "Returns the updated replica count and deployment status.")]
public async Task<string> ScaleDeployment(
    [Description("The name of the deployment to scale")] string deploymentName,
    [Description("The Kubernetes namespace")] string namespaceName,
    [Description("The desired number of replicas")] int replicas)
The description is doing a lot of work here. It tells Claude what the tool does, when to use it, what it needs, and what it returns. That is exactly the same thing ADK tool descriptions do.
The key insight both frameworks share
Tool descriptions are the interface layer, not documentation.
When you write a description for a tool, you are not writing it for a developer to read. You are writing it for the AI model to read. That changes everything about how you should write them.
Be specific about when to use this tool versus a similar one. Be clear about what the inputs mean. Be explicit about what the output contains. Vague descriptions lead to the model picking the wrong tool or calling it with the wrong parameters.
I learned this the hard way. My first version of the scale deployment tool had a description that just said "Scale a deployment." Claude kept confusing it with the restart tool. Adding specificity about what scaling means versus restarting fixed it immediately.
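The difference is easy to see side by side. These two strings paraphrase the before and after (illustrative, not the exact text from my server):

```python
# Vague description: the model cannot tell scaling apart from restarting,
# because neither word explains what the operation actually does.
vague = "Scale a deployment."

# Specific description: says what scaling means, when to use it, and how
# it differs from a restart, which is what resolved the confusion.
specific = (
    "Scale a Kubernetes deployment to a specific number of replicas. "
    "Use this to change how many instances are running. "
    "Do not use this to restart pods; use the restart tool for that."
)

print(len(vague), len(specific))
```

The extra sentences cost nothing at runtime and directly reduce wrong-tool calls.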
Where they differ
ADK is a full framework. It handles multi-agent systems, bidirectional streaming, session management, deployment to Google Cloud, evaluation, and observability. It is designed for production enterprise applications.
MCP is a protocol. It is lighter, more focused, and model-agnostic. Any AI client that speaks MCP can connect to your server. Claude Desktop, Cursor, and a growing number of other tools support MCP. You write the server once and it works across clients.
For my use case, MCP was the right choice. I wanted Claude to control my local Kubernetes cluster during development. I did not need multi-agent orchestration or managed cloud deployment. I needed a reliable protocol that Claude Desktop could speak natively.
If I were building a production AI system for a business, with multiple agents, audit logging, and cloud scale, ADK would be the more appropriate choice.
The .NET angle
Neither ADK nor MCP treats .NET as a first-class language. ADK supports Python, Java, TypeScript, and Go. The official MCP SDKs cover Python and TypeScript.
There is a community .NET MCP SDK that works well, and that is what I used. It does mean, though, that as a .NET developer you are working slightly outside the official tooling for both frameworks.
That said, building an MCP server in .NET is straightforward once you have the SDK. The tooling, testing, and deployment story is the same as any other .NET application.
What ADK does that impressed me
The built-in Dev UI is genuinely useful. When you run an ADK agent locally, you get a browser interface that shows you exactly what the agent is thinking, which tools it called, what parameters it passed, and what came back. That visibility into the agent's reasoning is something I had to build myself for debugging my MCP server.
The multi-agent support is also impressive. ADK lets you define hierarchies of agents where a primary agent can delegate to specialist agents. I have not needed this yet but I can see why it matters for complex workflows.
Source code
My MCP Kubernetes Manager is open source: github.com/aftabkh4n/mcp-kubernetes-manager
It also includes an AI-generated release notes pipeline: every merged PR automatically generates a structured changelog entry using Claude and GitHub Actions.
If you are a .NET developer curious about building AI agents, MCP is worth exploring even while the official tooling catches up. The protocol is solid and the community SDK works well.
If you are starting fresh and language flexibility matters, ADK is worth a serious look. Google has clearly put real engineering behind it.
Either way, the mental model is the same. Tools, descriptions, and an AI that reads them. That part does not change no matter which framework you use.