This post is Part 2 of a series on building an LLM-powered BattleTech bot.
👉 Part 1: Architecture, Agents, and Tools
What to Expect
Here’s how the series is structured:
Part 1. Theory & Architecture
- Introduction
- MakaMek Architecture Overview
- How LLM Models Use Tools
Part 2. Hands-On Implementation
- Building an MCP Server to Query Game State
- Creating Agents with the Microsoft Agent Framework
- Empowering Agents with Tools
- Conclusion
MCP Server for Querying Game State
In Part 1, we covered the motivation, architecture, and theory behind using tools and agents in my BattleTech bot.
Let's see how to implement all of this in practice. Most examples on this topic are written in Python, mainly for historical reasons. As we saw earlier, however, agents and tools are just pieces of “traditional” software, so they can be implemented in any programming language. Since MakaMek is a .NET application, it makes perfect sense to create agents and tools in the same tech stack.
All code is available on the MakaMek GitHub page — check the BotContainer (for MCP) and BotAgent (for AI agents) projects.
Let’s take a closer look at the MCP server. The idea is that it should have access to the ClientGame class and the corresponding calculators, and be able to query them to retrieve the tactical situation. This means we need to add an MCP server to the BotContainer project. It also needs to be a remote server so that AI agents can reach it over the network.
There is an official MCP SDK available for .NET and C#. It is currently in preview, which means the APIs are not fully stable yet, but I haven’t noticed any issues so far.
So how do we implement an MCP server in a .NET app? The process is straightforward and consists of just a few steps.
1. Add the NuGet packages
To create a remote MCP server, add the following packages to your project:
dotnet add package ModelContextProtocol --version 0.6.0-preview.1
dotnet add package ModelContextProtocol.AspNetCore --version 0.6.0-preview.1
The first one, ModelContextProtocol, provides the core MCP functionality and is enough if you only need a local server. ModelContextProtocol.AspNetCore is required for a remote MCP server: it enables the HTTP/SSE transport on top of ASP.NET Core.
2. Register the MCP server in DI
Once the packages are added, register the MCP server with the DI container:
services.AddMcpServer(options =>
{
options.ServerInfo = new Implementation
{
Name = "MakaMek MCP Server",
Version = "0.1.0"
};
})
.WithHttpTransport(options => // enables a remote server accessible via HTTP/SSE
{
options.Stateless = true; // important if you want to host it in a serverless function
})
.WithTools<DeploymentTools>() // classes with tools exposed by this server
.WithTools<MovementTools>()
.WithTools<WeaponsAttackTools>();
3. Expose the MCP endpoints
Because MCP uses standard endpoints, no custom controller code is needed:
app.MapMcp();
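Putting steps 1-3 together, a minimal Program.cs for a remote MCP server could look roughly like this (a sketch; GameStateProvider is a hypothetical implementation of the dependency the tools need, and the real BotContainer wiring lives in the repo):

var builder = WebApplication.CreateBuilder(args);

// Register game state access for the tool classes
// (GameStateProvider is a hypothetical concrete implementation)
builder.Services.AddSingleton<IGameStateProvider, GameStateProvider>();

builder.Services.AddMcpServer(options =>
{
    options.ServerInfo = new Implementation
    {
        Name = "MakaMek MCP Server",
        Version = "0.1.0"
    };
})
.WithHttpTransport(options => options.Stateless = true)
.WithTools<DeploymentTools>();

var app = builder.Build();
app.MapMcp(); // exposes the standard MCP endpoints, no controllers needed
app.Run();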
4. Implement your tools
Tools, as explained in Part 1, are just regular C# methods decorated with certain attributes for discoverability.
Here is an example of one of the simplest tools. More complex ones are available in the repository:
[McpServerToolType] // Indicates that this class contains MCP tools (optional with manual registration)
public class DeploymentTools
{
private readonly IGameStateProvider _gameStateProvider;
public DeploymentTools(IGameStateProvider gameStateProvider)
{
_gameStateProvider = gameStateProvider;
}
// Description is one of the most important parts: it is what the LLM "sees".
// It should clearly explain when the tool should be used and what it does.
[McpServerTool, Description("Get valid deployment zones (hexes). Should be used by the deployment agent.")]
public List<HexCoordinateData> GetDeploymentZones()
{
return _gameStateProvider.ClientGame.GetEdgeHexCoordinates()
.Select(h => h.ToData())
.ToList();
}
}
5. Connect your agent to the MCP server
That’s it. When you start the application, your MCP server becomes available to any agents on the same network. Just add the corresponding MCP configuration to your favorite coding agent (VS Code, Cursor, Claude, Kiro — you name it) to quickly test that it's actually working:
{
"servers": {
"makamek-mcp-server": {
"url": "http://localhost:5002",
"type": "http"
}
}
}
Creating Agents with Microsoft Agent Framework
Now that we have an MCP server, let’s create our own agents to use it. Again, we’ll do this in C#, and the obvious choice is the Microsoft Agent Framework (MAF), since it provides a .NET SDK alongside the Python one. MAF is a fairly new product, but it is based on Semantic Kernel after its merge with AutoGen.
I should mention that this is one of the areas where Microsoft’s approach can be confusing. They move fast and change APIs often, which makes production use risky. For example, in the week since I finished this feature, some APIs already changed in a new preview release. In this post, I’ll reference the version I used during development. Be aware that it’s not the latest one, and you may need to adjust names and APIs if you upgrade.
So let’s see how to create agents using MAF.
1. Add the NuGet packages
dotnet add package Microsoft.AspNetCore.OpenApi --version 10.0.2
dotnet add package Microsoft.Agents.AI --version 1.0.0-preview.260108.1 # Microsoft Agent Framework
dotnet add package Microsoft.Extensions.AI --version 10.2.0
dotnet add package Microsoft.Extensions.AI.Abstractions --version 10.2.0
dotnet add package Microsoft.Extensions.AI.OpenAI --version 10.2.0-preview.1.26063.2 # Needed for models supporting the OpenAI API
Microsoft.Agents.AI is the Microsoft Agent Framework itself. The remaining packages provide abstractions and extensions to wire it into the broader Microsoft AI ecosystem. I’ll come back to that shortly.
2. Choose and abstract your model provider
Next, think about the model you want to use to power your agents. If your hardware allows it (for example, a decent GPU with enough VRAM), running a local model is often a good place to start. Local models are weaker than frontier models, but they’re free to run (apart from your electricity bill).
LM Studio is my tool of choice to run local models, but the same approach works with Ollama or Foundry Local.
A good practice is not to hardcode the model provider and instead make it swappable. To illustrate this, let’s create two providers that conform to a common interface.
2.1 Common LLM provider interface
public interface ILlmProvider
{
IChatClient GetChatClient();
}
IChatClient comes from Microsoft.Extensions.AI and represents a client capable of communicating with any chat model.
2.2 OpenAI implementation
public IChatClient GetChatClient()
{
var openAiClient = new OpenAIClient(_config.ApiKey);
var chatClient = openAiClient.GetChatClient(_config.Model);
return chatClient.AsIChatClient();
}
_config is a section of a standard ASP.NET Core configuration for the model provider (see the full project in the repo for details).
AsIChatClient() comes from Microsoft.Extensions.AI.OpenAI and knows how to wrap a concrete OpenAI client into the generic IChatClient abstraction.
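For illustration, the configuration class behind _config (and the DI registration below) might look like this; the exact shape in the repo may differ:

public class LlmProviderConfiguration
{
    public string Type { get; set; } = "LocalOpenAI"; // which provider to activate
    public string Model { get; set; } = string.Empty; // model name to request
    public string? ApiKey { get; set; }               // needed for the real OpenAI provider
    public string? Endpoint { get; set; }             // needed for the local provider below
}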
2.3 Local / offline model implementation
Most local models support the OpenAI API, which is why I call this provider “OpenAI-like”:
public IChatClient GetChatClient()
{
var openAiClient = new OpenAIClient(
new ApiKeyCredential("NO_API_KEY_REQUIRED"),
new OpenAIClientOptions { Endpoint = _endpoint });
var chatClient = openAiClient.GetChatClient(_config.Model);
return chatClient.AsIChatClient();
}
The implementation is almost identical, except that the local model does not require a real API key. Instead, we provide the endpoint of the LM Studio (or Ollama) server running the model.
2.4 Register providers with DI
Next, register the providers as dependencies. It’s handy to expose the active provider via configuration so models can be swapped without code changes:
services.AddSingleton<LocalOpenAiLikeProvider>();
services.AddSingleton<OpenAiProvider>();
services.AddSingleton<ILlmProvider>(sp =>
{
var cfg = sp.GetRequiredService<IOptions<LlmProviderConfiguration>>().Value;
return cfg.Type switch
{
"LocalOpenAI" => sp.GetRequiredService<LocalOpenAiLikeProvider>(),
"OpenAI" => sp.GetRequiredService<OpenAiProvider>(),
_ => throw new InvalidOperationException($"Unsupported LLM provider type '{cfg.Type}'.")
};
});
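The matching appsettings.json section could then look like this (the section name LlmProvider is my guess; the endpoint shown is LM Studio's default OpenAI-compatible one):

"LlmProvider": {
  "Type": "LocalOpenAI",
  "Model": "qwen3-vl-8b",
  "Endpoint": "http://localhost:1234/v1"
}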
3. Create agents with MAF
Once the wiring is done, we can move on to the actual implementation. In my case, I need four agents, one per game phase. They differ mainly by their prompts and the tools they use; the underlying model can be the same.
With that in mind, I created an abstract BaseAgent class and four specialized agents derived from it. Model setup is shared in the base class.
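A stripped-down sketch of that hierarchy (SystemPrompt and GetLocalTools reappear later in the post; everything else here is illustrative):

public abstract class BaseAgent
{
    protected ILlmProvider LlmProvider { get; }

    // Phase-specific system prompt supplied by each derived agent
    protected abstract string SystemPrompt { get; }

    // Local tools exposed by a derived agent (covered in the Tools section)
    protected virtual List<AITool> GetLocalTools() => [];

    protected BaseAgent(ILlmProvider llmProvider) => LlmProvider = llmProvider;
}

public sealed class DeploymentAgent(ILlmProvider llmProvider) : BaseAgent(llmProvider)
{
    protected override string SystemPrompt => "You are the deployment agent for a BattleTech game..."; // shortened
}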
To access the model, we need an instance of AIAgent from MAF. There are several ways to create it, either directly via a constructor or via a builder. The builder approach is more flexible because it allows you to plug in middleware, so that’s what I use:
var agent = LlmProvider.GetChatClient()
.CreateAIAgent(
instructions: SystemPrompt, // defined in the derived agent
tools: allTools) // tools (we’ll look at them in detail later)
.AsBuilder()
.Use(ToolCallMiddleware)
.Build();
var thread = agent.GetNewThread();
ToolCallMiddleware is a callback executed by the AIAgent before it invokes a tool requested by the model. It’s very useful for observability and guardrails.
thread holds the entire conversation history between the agent and the model. It’s important to create one so the model has proper context, for example knowing that a previous step triggered a tool call.
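For illustration, here is roughly what a logging middleware looked like in the preview I used (the delegate signature may differ in newer previews, and _logger is an assumed standard ILogger):

async ValueTask<object?> ToolCallMiddleware(
    AIAgent agent,
    FunctionInvocationContext context,
    Func<FunctionInvocationContext, CancellationToken, ValueTask<object?>> next,
    CancellationToken cancellationToken)
{
    // Observability: log which tool the model asked to invoke
    _logger.LogInformation("Tool call requested: {Tool}", context.Function.Name);
    return await next(context, cancellationToken);
}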
4. Run the agent
Now that we have an AIAgent instance, we can start making calls to the model:
var response = await agent.RunAsync(
userPrompt,
thread);
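The response object contains the model's final answer as text; in the version I used, printing it is as simple as:

Console.WriteLine(response.Text);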
Empowering Agents with Tools
At this point, the agent can reason and return a text response. But what about our MCP server? How do we use the tools it exposes? And how do we convert a model’s response into a structured command that the game can actually execute?
Remember the tools array we passed to the agent factory method? Let’s take a closer look at it. This array contains definitions of all AITools that we want to make available to the model, including MCP tools, API tools, and local tools. In this post, we focus only on MCP and local tools, since those are the ones used by the MakaMek agents.
1. MCP tools
To access tools exposed by a remote MCP server, we first create an MCP client by providing the server endpoint and a transport mode:
await using var mcpClient = await McpClient.CreateAsync(
new HttpClientTransport(
new HttpClientTransportOptions
{
TransportMode = HttpTransportMode.StreamableHttp, // HTTP/SSE transport for remote servers
Endpoint = new Uri(mcpEndpoint),
}));
Note the await using statement: mcpClient should be disposed when no longer needed. Be careful with the scope, though: the client must remain alive for as long as the agent needs to call MCP tools.
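One way to manage that scope (an illustrative pattern, not necessarily how MakaMek does it) is a small owner type that the agent keeps for its lifetime and disposes together with itself:

// Ties the MCP client's lifetime to an owning object
public sealed class McpToolSource : IAsyncDisposable
{
    public McpClient Client { get; }

    private McpToolSource(McpClient client) => Client = client;

    public static async Task<McpToolSource> CreateAsync(string mcpEndpoint)
    {
        var client = await McpClient.CreateAsync(
            new HttpClientTransport(
                new HttpClientTransportOptions
                {
                    TransportMode = HttpTransportMode.StreamableHttp,
                    Endpoint = new Uri(mcpEndpoint),
                }));
        return new McpToolSource(client);
    }

    public ValueTask DisposeAsync() => Client.DisposeAsync();
}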
Once the MCP client is created, we can list all tools available on the server:
var mcpTools = await mcpClient.ListToolsAsync();
If MCP tools are the only ones you use, this list can already be passed directly to the agent factory.
2. Local tools
Local tools are functions defined in the same module (in .NET terms, the same assembly) as the agents. In MakaMek, each specialized agent exposes its own local tools, mostly to translate the model’s decisions into concrete MakaMek commands.
Here’s an example of a local tool used by the DeploymentAgent:
[Description("Execute a deployment decision for a unit")]
private string MakeDeploymentDecision(
[Description("Unit GUID")] Guid unitId,
[Description("Q coordinate")] int q,
[Description("R coordinate")] int r,
[Description("Facing direction 0-5")] int direction,
[Description("Tactical reasoning")] string reasoning)
{
var command = new DeployUnitCommand
{
UnitId = unitId,
Position = new HexCoordinateData(q, r),
Direction = direction,
GameOriginId = Guid.Empty, // Will be set by ClientGame
};
PendingDecision = (command, reasoning);
return JsonSerializer.Serialize(new
{
success = true,
message = "Deployment decision recorded"
});
}
Notice that both the method and all its parameters are decorated with the Description attribute. This metadata is important: the LLM uses it to understand when and how the tool should be called.
Once the tools are defined, we expose them by overriding GetLocalTools() in a specialized agent:
protected override List<AITool> GetLocalTools()
{
return
[
AIFunctionFactory.Create(MakeDeploymentDecision, "make_deployment_decision")
];
}
Here, AIFunctionFactory.Create turns a normal C# method into an AITool by providing the method reference and the name that will be visible to the model.
3. Combining MCP and local tools
Now we can retrieve the agent-specific local tools, combine them with the MCP tools, and pass them to the agent:
var localTools = GetLocalTools();
List<AITool> allTools = [..localTools, ..mcpTools]; // passed to the agent factory
This is the same allTools collection we provided earlier when creating the agent.
That concludes the entire end-to-end flow of exposing tools via MCP and consuming them with agents. Of course, I had to take some shortcuts in this post, but feel free to explore the full solution on GitHub.
Conclusion
So, with all the tools and “LLM wisdom” available to the bot, can it actually play the game? And if it can, how does it compare to a traditional rule-based bot?
The answer is both yes and no. It does play the game, and with all the guardrails in place it always takes valid actions, but those actions often feel very random and don’t make much sense, even when the full tactical situation is available to the model. This is especially noticeable with smaller local models (I use qwen3-vl-8b), which very often just pick one of the first available options. Frontier GPT-5.2 behaves a bit more “believably”, but there is still no comparison with a rule-based bot that consistently chooses the best available option and is therefore almost impossible to beat when the dice gods are not on your side. On top of that, the rule-based bot is much faster and cheaper to run: depending on the number of units controlled by the bot, a single game can easily exceed a million input tokens.
This real-world result demonstrates that AI isn't always the right solution, even when it's technically feasible.
Even though the LLM-powered bot turned out to be practically useless from a gameplay perspective, I don’t regret building it. It gave me a great agentic playground in a domain I really enjoy and can continue improving.
Some obvious areas for improvement would be:
- refining the prompts (they are definitely not optimised yet),
- converting tool outputs to natural language instead of the current structured format,
- introducing another type of tool, such as RAG, to provide BattleTech rules relevant to a specific situation,
- and, as a longer-term idea, retraining or fine-tuning a model once I have enough gameplay logs.
So there are plenty of ideas left to explore in my spare time (if I can find some 😄).
Do you see anything else worth adding to the list?