If you're building an AI feature in .NET in 2026, the first framework you hear about is Microsoft Semantic Kernel. It's well-funded, actively maintained, and integrates deeply with Azure. For most projects, that's a fine starting point.
But "fine for most" is not "right for all." Over the last few months we've talked to teams who started with Semantic Kernel and ended up looking for something else. The reasons cluster around three themes: local LLM support, observability, and dependency footprint.
This post is an honest comparison — not a hit piece. Semantic Kernel is a real piece of engineering. We just think it's worth understanding what trade-offs it makes, and what an alternative shaped around different priorities looks like.
Where Semantic Kernel shines
Let's start with what Semantic Kernel does well, because it's a lot:
- Azure-native. If your stack is already Azure OpenAI + Azure AI Search + App Service, Semantic Kernel snaps into place with minimal ceremony.
- First-party support. It's a Microsoft project. That alone reduces procurement friction in enterprise environments.
- Plugins ecosystem. The plugin model is well-documented and Microsoft has shipped a steady stream of integrations.
- Backed by serious R&D. The team behind Semantic Kernel has poured real engineering into kernel orchestration, planners, and prompt templating.
If your team is already invested in the Microsoft cloud and you're building features that look like "summarize this Word doc" or "search our SharePoint," Semantic Kernel is probably the right tool.
Where teams start looking elsewhere
1. Local LLMs are second-class citizens
Semantic Kernel can talk to Ollama. It can talk to LM Studio. But the developer experience is built around hosted APIs — Azure OpenAI, OpenAI, Anthropic — and local providers feel bolted on.
This matters for a growing number of teams:
- Regulated industries — banks, healthcare, defense — that can't ship customer data to OpenAI's servers
- Cost-sensitive products that need to handle high request volumes without paying $0.001 per call
- Edge deployments running on customer hardware with no internet connection
- Air-gapped enterprises where any outbound traffic is a security incident
If your roadmap includes local LLMs as a peer of hosted ones — not a fallback — you'll feel the friction.
2. The runtime is heavy
Add Semantic Kernel to a small console app and watch the dependency tree light up. The framework pulls in a lot: telemetry abstractions, connector packages, abstractions layered on abstractions. For a CRUD API that wants to summarize a paragraph, that's a lot of surface area.
It also makes auditing harder. If you need to ship to a customer who reads SBOMs, every transitive package is a question to answer.
3. Observability is opt-in, not built-in
Want to know how many tokens an agent run consumed? Want to trace exactly which tool was called and when? Want a structured event log of every retry, every fallback, every LLM call?
You can get there with Semantic Kernel — by hooking OpenTelemetry, configuring listeners, and writing some glue code. But it's not the default. Most teams don't bother until something goes wrong in production, and then they're scrambling.
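For reference, the glue code is short but entirely on you to write. The sketch below follows the standard OpenTelemetry .NET setup; the `"Microsoft.SemanticKernel*"` source and meter names match the convention recent Semantic Kernel versions document, but verify them against the version you actually ship:

```csharp
// Minimal OpenTelemetry wiring for Semantic Kernel traces and metrics.
// NOTE: the wildcard source/meter names below follow the convention recent
// SK versions document -- confirm against your version before relying on it.
using OpenTelemetry;
using OpenTelemetry.Metrics;
using OpenTelemetry.Trace;

using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("Microsoft.SemanticKernel*")   // SK activity sources
    .AddConsoleExporter()                     // swap for an OTLP exporter in prod
    .Build();

using var meterProvider = Sdk.CreateMeterProviderBuilder()
    .AddMeter("Microsoft.SemanticKernel*")    // token counts, call durations
    .AddConsoleExporter()
    .Build();

// ...build and invoke your Kernel as usual; spans and metrics now flow out.
```

None of this is hard, but it is extra code and extra packages that you have to know to add up front.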
For teams who've been burned by black-box AI behavior in production, observability-by-default is non-negotiable.
What an alternative looks like
LogicGrid is a .NET-native multi-agent framework that takes a different posture on each of those three points. It's not better at everything — it's optimized for a different set of constraints.
Local LLMs are first-class
```csharp
// Same agent. Any provider. Zero code change.
var llm = LlmClientBase.Ollama("llama3.2");
// var llm = LlmClientBase.OpenAI("gpt-4o");
// var llm = LlmClientBase.Anthropic("claude-sonnet-4-6");
// var llm = LlmClientBase.Gemini("gemini-2.0-flash");

IAgent agent = new Agent<string>(
    name: "Summariser",
    description: "Summarises any text concisely.",
    systemPrompt: "Summarise the following in 2-3 sentences: {{input}}",
    llm: llm);

var result = await agent.RunAsync(
    "Long document text...", new AgentContext("run-1"));
```
Switching from Ollama to Claude is a one-line change. Streaming, tool calling, and embeddings work the same way across every provider. There's no "OpenAI is the real path; Ollama is the demo path."
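To illustrate the streaming claim, a streamed run could look like the sketch below. This is hypothetical: `StreamAsync` and its chunk shape are our guesses at the API based on the agent shown above, not confirmed LogicGrid signatures; check the docs for the real method name.

```csharp
// Hypothetical sketch: assumes an IAgent streaming method that yields text
// chunks as they arrive. The real LogicGrid signature may differ.
var llm = LlmClientBase.Ollama("llama3.2");   // or .OpenAI / .Anthropic / .Gemini

IAgent agent = new Agent<string>(
    name: "Summariser",
    description: "Summarises any text concisely.",
    systemPrompt: "Summarise the following in 2-3 sentences: {{input}}",
    llm: llm);

await foreach (var chunk in agent.StreamAsync(
    "Long document text...", new AgentContext("run-2")))
{
    Console.Write(chunk);   // tokens print incrementally, same loop per provider
}
```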
Zero hidden runtime dependencies
LogicGrid targets netstandard2.0, net6.0, and net8.0. The full SBOM is published as sbom.json in the public repo. The only thing you're pulling in is what's strictly needed.
For air-gapped deployments, that matters: you can audit the entire dependency graph before the package touches your build server.
Observability by default
Every agent step, tool call, retry, and LLM call emits a structured event:
```csharp
var ctx = new AgentContext()
    .WithLogging()
    .WithTracing(out var trace);

await agent.RunAsync("Hello", ctx);

// trace contains every step, tool call, retry, and LLM call
foreach (var span in trace.Spans)
    Console.WriteLine($"{span.Name} — {span.Duration.TotalMilliseconds:F0}ms");
```
You don't have to opt into telemetry. You opt out if you don't want it.
Migration considerations
If you're considering moving from Semantic Kernel to LogicGrid, the conversion is generally straightforward — both frameworks model the same concepts (agents, tools, memory) but with different APIs. The biggest mental shift is around orchestration: Semantic Kernel encourages a "planner" mindset where the LLM decides the workflow; LogicGrid encourages explicit graphs where you decide the workflow and the LLM fills in the steps.
Neither approach is wrong — but if you've been frustrated by Semantic Kernel planners going off-script, LogicGrid's graph orchestration will feel like a relief.
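To make the contrast concrete, explicit graph orchestration could look like the sketch below. The `AgentGraph`, `AddNode`, and `AddEdge` names are hypothetical, chosen for illustration; the agents themselves reuse the API shown earlier in this post.

```csharp
// Hypothetical sketch of explicit graph orchestration: you declare the
// workflow; each node's agent fills in its step. Graph type names are
// illustrative, not confirmed LogicGrid types.
var llm = LlmClientBase.Ollama("llama3.2");

var research = new Agent<string>(
    name: "Researcher",
    description: "Extracts key facts.",
    systemPrompt: "List the key facts in: {{input}}",
    llm: llm);

var write = new Agent<string>(
    name: "Writer",
    description: "Writes a summary from facts.",
    systemPrompt: "Write a short summary from these facts: {{input}}",
    llm: llm);

// The workflow is explicit: Researcher always runs before Writer.
// No planner decides the order at runtime.
var graph = new AgentGraph()
    .AddNode("research", research)
    .AddNode("write", write)
    .AddEdge("research", "write");

var result = await graph.RunAsync(
    "Long document text...", new AgentContext("run-3"));
```

The point is not the specific names: it's that the control flow lives in your code, where you can test and diff it, rather than in a plan the LLM regenerates on every run.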
When not to switch
If any of these are true, stick with Semantic Kernel:
- Your stack is fully on Azure and you use Azure OpenAI exclusively
- You need first-party Microsoft support contracts
- Your team has already invested significant tooling and training in Semantic Kernel
- You're building primarily for Microsoft 365 / Copilot integration
LogicGrid is a better fit when:
- Local LLMs are part of your roadmap, not a side note
- You ship to enterprises who scrutinize dependencies
- You want observability without writing your own telemetry layer
- You're targeting older .NET versions (.NET Framework 4.7.2+ via netstandard2.0)
Try it in 5 minutes
```bash
dotnet add package LogicGrid.Core
ollama pull llama3.2
```

```csharp
using LogicGrid.Core.Agents;
using LogicGrid.Core.Llm;

var llm = LlmClientBase.Ollama("llama3.2");

IAgent agent = new Agent<string>(
    name: "Helper",
    description: "Answers questions concisely.",
    systemPrompt: "Answer in one short sentence.",
    llm: llm);

var result = await agent.RunAsync(
    "What is the capital of France?", new AgentContext("run-1"));

Console.WriteLine(result);
```
That's it. No appsettings.json ritual, no SDK initialization dance, no API keys (until you want to use a hosted provider).
If you've been frustrated with Semantic Kernel's posture toward local LLMs or its dependency weight — give LogicGrid 30 minutes. If it doesn't fit, you'll know quickly. If it does, the quickstart walks you through the next steps.
Want a deeper comparison? The follow-up post LangChain vs Semantic Kernel vs LogicGrid goes feature-by-feature across all three frameworks.