I got tired of switching between my AI assistant and a terminal every time
I needed to check what was running in my Kubernetes cluster. So I built
something that removes the terminal from the equation entirely.
This is an MCP server written in .NET 9 that connects Claude directly to
a Kubernetes cluster. Instead of running kubectl commands, you just ask
Claude what you want in plain English and it figures out which tool to
call, talks to the cluster, and returns real data.
What MCP actually is
MCP stands for Model Context Protocol. It is an open standard created by
Anthropic that lets AI assistants talk to external tools and services in
a structured way. Think of it as a plugin system for AI - you build a
server that exposes tools, and any MCP-compatible AI can call those tools
during a conversation.
The difference from a regular API is that the AI decides when and how to
use the tools based on what you ask. You do not write code to call the
tools. You just have a conversation.
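To make that concrete, under the hood an MCP tool invocation is a JSON-RPC request sent over the transport. A `tools/call` request for one of this server's tools might look roughly like this (the method and field names follow the MCP specification; the argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ListPods",
    "arguments": { "namespaceName": "default" }
  }
}
```

The AI constructs this message itself; as a user you never see it.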
What I built
The server exposes eight tools that Claude can call:
ListPods - lists all pods across all namespaces or a specific one, with
status and age.
GetPodLogs - fetches recent log output from any pod, useful for debugging
without opening a terminal.
ListDeployments - shows all deployments with their desired versus ready
replica counts.
ScaleDeployment - scales a deployment to any number of replicas up to a
hard limit of 10, with the previous count included in the response so you
know what changed.
RestartDeployment - triggers a rolling restart by updating the
kubectl.kubernetes.io/restartedAt annotation on the pod template, which
is exactly what kubectl rollout restart does under the hood.
GetDeploymentStatus - returns detailed rollout status and conditions,
useful for checking whether a deployment completed successfully.
ListNamespaces - lists all namespaces with status and age.
GetNamespaceSummary - returns pods, deployments, and services for a
namespace in one view.
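For context on how RestartDeployment works: kubectl rollout restart patches the kubectl.kubernetes.io/restartedAt annotation on the deployment's pod template, which changes the template and triggers a rolling restart. The patch body looks like this (the timestamp is an example value):

```json
{
  "spec": {
    "template": {
      "metadata": {
        "annotations": {
          "kubectl.kubernetes.io/restartedAt": "2024-01-01T00:00:00Z"
        }
      }
    }
  }
}
```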
A real conversation with the cluster
Here is what an actual session looks like. I open Claude Desktop and ask:
"List all pods in my Kubernetes cluster"
Claude calls ListPods, my MCP server talks to the cluster, and I get back
something like:
idp-platform/idp-platform-xxx - Running - 1d
monitoring/prometheus-stack-grafana-xxx - Running - 1d
monitoring/alertmanager-xxx - Running - 1d
kube-system/coredns-xxx - Running - 1d
Then I ask:
"Scale the idp-platform deployment to 2 replicas"
Claude calls ScaleDeployment with the right parameters, the cluster
updates, and I get back a confirmation that it scaled from 1 to 2
replicas. No kubectl, no terminal, no context switching.
How it works technically
The server is built with the ModelContextProtocol .NET SDK. Each tool
class is decorated with McpServerToolType and each method with
McpServerTool plus a Description attribute that tells the AI what the
tool does and when to use it.
[McpServerToolType]
public class DeploymentTools
{
    [McpServerTool, Description("Scale a Kubernetes deployment to a specified number of replicas.")]
    public async Task<string> ScaleDeployment(
        [Description("Name of the deployment to scale.")]
        string deploymentName,
        [Description("Number of replicas. Maximum 10.")]
        int replicas,
        [Description("Namespace. Defaults to default.")]
        string namespaceName = "default")
    {
        // Safety check: reject anything outside the hard limit
        if (replicas < 0 || replicas > 10)
            return "Error: replicas must be between 0 and 10.";

        // ...read the current scale, apply the new count, and return
        // the old and new replica counts
    }
}
The descriptions are what the AI reads to decide which tool to call.
Writing good descriptions is the most important part of building an MCP
server - vague descriptions lead to wrong tool calls.
The server communicates over stdio, which means Claude Desktop spawns it
as a child process and the two communicate through standard input and
output. No HTTP server, no ports, no firewall rules.
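The stdio transport frames each JSON-RPC message as a single line of JSON terminated by a newline. Here is a language-agnostic sketch of that framing, in Python rather than C# for brevity (the function names are mine, not part of any SDK):

```python
import json

def frame_message(payload: dict) -> bytes:
    """Serialize a JSON-RPC message as one newline-terminated line,
    the framing the MCP stdio transport uses."""
    return (json.dumps(payload) + "\n").encode("utf-8")

def parse_messages(stream: bytes) -> list[dict]:
    """Split a raw stdio stream back into individual messages."""
    return [json.loads(line) for line in stream.splitlines() if line.strip()]

# Round-trip a request the way a parent process (Claude Desktop)
# would write it to the server's stdin.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
raw = frame_message(request)
assert parse_messages(raw) == [request]
```

This simplicity is the point: the parent owns the process lifetime, and the pipe is the whole transport.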
Connecting to Claude Desktop
Add this to your Claude Desktop config file at
%APPDATA%\Claude\claude_desktop_config.json:
{
  "mcpServers": {
    "kubernetes-manager": {
      "command": "C:\\Program Files\\dotnet\\dotnet.exe",
      "args": [
        "run",
        "--project",
        "C:\\path\\to\\mcp-kubernetes-manager\\src\\Mcp.KubernetesManager"
      ]
    }
  }
}
Restart Claude Desktop, go to Settings, and you will see the server
listed under Local MCP servers with all eight tools available.
What I learned
Good tool descriptions matter more than good code. The AI uses the
Description attributes to decide which tool to call. If your description
is ambiguous the AI will either call the wrong tool or ask you to clarify.
Spending time on descriptions pays off more than optimising the
implementation.
Stdio transport is simpler than it sounds. There is no networking to
configure, no authentication to set up, and no ports to expose. The
parent process starts the server and they communicate through pipes. It
just works.
Safety guards belong in the tools themselves. I added a hard limit of 10
replicas to ScaleDeployment and input validation throughout. The AI has
no awareness of what is safe in your environment - that responsibility
stays with the tool developer.
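The shape of those guards can be sketched independently of any Kubernetes client. A hedged Python sketch (the helper names are mine, not from the server) of the two checks mentioned above: a hard replica cap, and namespace-name validation against the DNS-1123 label rules that Kubernetes itself enforces:

```python
import re

MAX_REPLICAS = 10  # hard limit, mirroring the server's ScaleDeployment guard

# Kubernetes namespace names must be valid DNS-1123 labels:
# lowercase alphanumerics and '-', at most 63 characters.
DNS_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$")

def validate_scale_request(replicas: int, namespace: str):
    """Return an error string, or None if the request is safe to execute."""
    if not 0 <= replicas <= MAX_REPLICAS:
        return f"replicas must be between 0 and {MAX_REPLICAS}"
    if not DNS_LABEL.match(namespace):
        return f"'{namespace}' is not a valid namespace name"
    return None

assert validate_scale_request(2, "default") is None
assert validate_scale_request(50, "default") is not None
```

Rejecting bad input before it reaches the cluster keeps the failure mode a polite error message rather than a surprise in production.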
What is next
The natural next step is adding tools for Helm releases, ConfigMap
updates, and event streaming so Claude can watch what is happening in the
cluster in real time. I am also looking at adding a confirmation step for
destructive operations - the AI would describe what it is about to do and
wait for explicit approval before making changes.
Source code: https://github.com/aftabkh4n/mcp-kubernetes-manager
If you have questions or ideas for additional tools, drop a comment below.
