<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: logicgrid-dev</title>
    <description>The latest articles on DEV Community by logicgrid-dev (@logicgriddev).</description>
    <link>https://dev.to/logicgriddev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3909837%2F0d3f0f36-529e-46dc-9a46-a6e725561734.png</url>
      <title>DEV Community: logicgrid-dev</title>
      <link>https://dev.to/logicgriddev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/logicgriddev"/>
    <language>en</language>
    <item>
      <title>A Semantic Kernel Alternative for .NET — When and Why You'd Reach for One</title>
      <dc:creator>logicgrid-dev</dc:creator>
      <pubDate>Sun, 03 May 2026 05:09:22 +0000</pubDate>
      <link>https://dev.to/logicgriddev/a-semantic-kernel-alternative-for-net-when-and-why-youd-reach-for-one-40l</link>
      <guid>https://dev.to/logicgriddev/a-semantic-kernel-alternative-for-net-when-and-why-youd-reach-for-one-40l</guid>
      <description>&lt;p&gt;If you're building an AI feature in .NET in 2026, the first framework you hear about is &lt;strong&gt;Microsoft Semantic Kernel&lt;/strong&gt;. It's well-funded, actively maintained, and integrates deeply with Azure. For most projects, that's a fine starting point.&lt;/p&gt;

&lt;p&gt;But "fine for most" is not "right for all." Over the last few months we've talked to teams who started with Semantic Kernel and ended up looking for something else. The reasons cluster around three themes: &lt;strong&gt;local LLM support, observability, and dependency footprint&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This post is an honest comparison — not a hit piece. Semantic Kernel is a real piece of engineering. We just think it's worth understanding what trade-offs it makes, and what an alternative shaped around different priorities looks like.&lt;/p&gt;

&lt;h2&gt;Where Semantic Kernel shines&lt;/h2&gt;

&lt;p&gt;Let's start with what Semantic Kernel does well, because it's a lot:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Azure-native.&lt;/strong&gt; If your stack is already Azure OpenAI + Azure AI Search + App Service, Semantic Kernel snaps into place with minimal ceremony.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;First-party support.&lt;/strong&gt; It's a Microsoft project. That alone reduces procurement friction in enterprise environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plugins ecosystem.&lt;/strong&gt; The plugin model is well-documented and Microsoft has shipped a steady stream of integrations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backed by serious R&amp;amp;D.&lt;/strong&gt; The team behind Semantic Kernel has poured real engineering into kernel orchestration, planners, and prompt templating.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your team is already invested in the Microsoft cloud and you're building features that look like "summarize this Word doc" or "search our SharePoint," Semantic Kernel is probably the right tool.&lt;/p&gt;

&lt;h2&gt;Where teams start looking elsewhere&lt;/h2&gt;

&lt;h3&gt;1. Local LLMs are a second-class citizen&lt;/h3&gt;

&lt;p&gt;Semantic Kernel can talk to Ollama. It can talk to LM Studio. But the developer experience is built around hosted APIs — Azure OpenAI, OpenAI, Anthropic — and local providers feel bolted on.&lt;/p&gt;

&lt;p&gt;This matters for a growing number of teams:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Regulated industries&lt;/strong&gt; — banks, healthcare, defense — that can't ship customer data to OpenAI's servers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost-sensitive products&lt;/strong&gt; with request volumes high enough that even fractions of a cent per call add up&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Edge deployments&lt;/strong&gt; running on customer hardware with no internet connection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Air-gapped enterprises&lt;/strong&gt; where any outbound traffic is a security incident&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your roadmap includes local LLMs as a peer of hosted ones — not a fallback — you'll feel the friction.&lt;/p&gt;

&lt;h3&gt;2. The runtime is heavy&lt;/h3&gt;

&lt;p&gt;Add Semantic Kernel to a small console app and watch the dependency tree light up. The framework pulls in a lot: telemetry hooks, connector abstractions, and a stack of Microsoft.Extensions packages, abstractions layered on abstractions. For a CRUD API that wants to summarize a paragraph, that's a lot of surface area.&lt;/p&gt;

&lt;p&gt;It also makes auditing harder. If you need to ship to a customer who reads SBOMs, every transitive package is a question to answer.&lt;/p&gt;
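&lt;p&gt;You can measure the footprint on your own machine: &lt;code&gt;dotnet list package --include-transitive&lt;/code&gt; prints every package a project actually pulls in, direct and transitive, which is exactly the list an SBOM reviewer will ask about:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# In a fresh console project, add the package and list the full closure
dotnet add package Microsoft.SemanticKernel
dotnet list package --include-transitive
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;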

&lt;h3&gt;3. Observability is opt-in, not built-in&lt;/h3&gt;

&lt;p&gt;Want to know how many tokens an agent run consumed? Want to trace exactly which tool was called and when? Want a structured event log of every retry, every fallback, every LLM call?&lt;/p&gt;

&lt;p&gt;You can get there with Semantic Kernel — by hooking OpenTelemetry, configuring listeners, and writing some glue code. But it's not the default. Most teams don't bother until something goes wrong in production, and then they're scrambling.&lt;/p&gt;

&lt;p&gt;For teams who've been burned by black-box AI behavior in production, observability-by-default is non-negotiable.&lt;/p&gt;
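&lt;p&gt;For context, that Semantic Kernel glue typically looks something like this. This is a sketch using the OpenTelemetry .NET SDK; Semantic Kernel emits under activity sources named &lt;code&gt;Microsoft.SemanticKernel*&lt;/code&gt;, and the console exporter here is just the simplest choice for illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;// Sketch: the opt-in telemetry wiring Semantic Kernel expects.
// Assumes the OpenTelemetry and OpenTelemetry.Exporter.Console packages.
using OpenTelemetry;
using OpenTelemetry.Trace;

using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("Microsoft.SemanticKernel*") // subscribe to SK's activity sources
    .AddConsoleExporter()                   // illustrative; swap for OTLP, etc.
    .Build();

// ...then build the Kernel and run as usual; spans now reach the exporter.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;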

&lt;h2&gt;What an alternative looks like&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://logicgrid.dev" rel="noopener noreferrer"&gt;LogicGrid&lt;/a&gt; is a .NET-native multi-agent framework that takes a different posture on each of those three points. It's not better at everything — it's optimized for a different set of constraints.&lt;/p&gt;

&lt;h3&gt;Local LLMs are first-class&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Same agent. Any provider. Zero code change.&lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;LlmClientBase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Ollama&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"llama3.2"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// var llm = LlmClientBase.OpenAI("gpt-4o");&lt;/span&gt;
&lt;span class="c1"&gt;// var llm = LlmClientBase.Anthropic("claude-sonnet-4-6");&lt;/span&gt;
&lt;span class="c1"&gt;// var llm = LlmClientBase.Gemini("gemini-2.0-flash");&lt;/span&gt;

&lt;span class="n"&gt;IAgent&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"Summariser"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"Summarises any text concisely."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;systemPrompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"Summarise the following in 2-3 sentences: {{input}}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;RunAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"Long document text..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;AgentContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"run-1"&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Switching from Ollama to Claude is a one-line change. Streaming, tool calling, and embeddings work the same way across every provider. There's no "OpenAI is the real path; Ollama is the demo path."&lt;/p&gt;

&lt;h3&gt;Zero hidden runtime dependencies&lt;/h3&gt;

&lt;p&gt;LogicGrid targets &lt;code&gt;netstandard2.0&lt;/code&gt;, &lt;code&gt;net6.0&lt;/code&gt;, and &lt;code&gt;net8.0&lt;/code&gt;. The full SBOM is published as &lt;code&gt;sbom.json&lt;/code&gt; in &lt;a href="https://github.com/logicgrid-dev/logicgrid" rel="noopener noreferrer"&gt;the public repo&lt;/a&gt;. The only thing you're pulling in is what's strictly needed.&lt;/p&gt;

&lt;p&gt;For air-gapped deployments, that matters: you can audit the entire dependency graph before the package touches your build server.&lt;/p&gt;

&lt;h3&gt;Observability by default&lt;/h3&gt;

&lt;p&gt;Every agent step, tool call, retry, and LLM call emits a structured event:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;AgentContext&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WithLogging&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WithTracing&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;out&lt;/span&gt; &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;RunAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Hello"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// trace contains every step, tool call, retry, and LLM call&lt;/span&gt;
&lt;span class="k"&gt;foreach&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;span&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Spans&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;Console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WriteLine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;$"&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s"&gt; — &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Duration&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TotalMilliseconds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;F0&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s"&gt;ms"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You don't have to opt into telemetry. You opt out if you don't want it.&lt;/p&gt;

&lt;h2&gt;Migration considerations&lt;/h2&gt;

&lt;p&gt;If you're considering moving from Semantic Kernel to LogicGrid, the conversion is generally straightforward — both frameworks model the same concepts (agents, tools, memory) but with different APIs. The biggest mental shift is around orchestration: Semantic Kernel encourages a "planner" mindset where the LLM decides the workflow; LogicGrid encourages explicit graphs where you decide the workflow and the LLM fills in the steps.&lt;/p&gt;

&lt;p&gt;Neither approach is wrong — but if you've been frustrated by Semantic Kernel planners going off-script, LogicGrid's &lt;a href="https://logicgrid.dev/docs/orchestration/graph" rel="noopener noreferrer"&gt;graph orchestration&lt;/a&gt; will feel like a relief.&lt;/p&gt;
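&lt;p&gt;To give a flavour of the difference, an explicit graph reads roughly like this. The &lt;code&gt;GraphOrchestrator&lt;/code&gt;, &lt;code&gt;AddNode&lt;/code&gt;, and &lt;code&gt;AddEdge&lt;/code&gt; names below are illustrative, not the exact shipped API; the orchestration docs have the real surface:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;// Illustrative sketch only: these names are hypothetical, not the shipped API.
// You declare the workflow; each agent fills in its own step.
var graph = new GraphOrchestrator()
    .AddNode("extract", extractAgent)      // pull key facts from the input
    .AddNode("summarise", summariseAgent)  // condense the extracted facts
    .AddEdge("extract", "summarise");      // the edge is yours, not the LLM's

var output = await graph.RunAsync("raw report text", new AgentContext("run-2"));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;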

&lt;h2&gt;When &lt;strong&gt;not&lt;/strong&gt; to switch&lt;/h2&gt;

&lt;p&gt;If any of these are true, stick with Semantic Kernel:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your stack is fully on Azure and you use Azure OpenAI exclusively&lt;/li&gt;
&lt;li&gt;You need first-party Microsoft support contracts&lt;/li&gt;
&lt;li&gt;Your team has already invested significant tooling and training in Semantic Kernel&lt;/li&gt;
&lt;li&gt;You're building primarily for Microsoft 365 / Copilot integration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;LogicGrid is a better fit when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Local LLMs are part of your roadmap, not a side note&lt;/li&gt;
&lt;li&gt;You ship to enterprises that scrutinize dependencies&lt;/li&gt;
&lt;li&gt;You want observability without writing your own telemetry layer&lt;/li&gt;
&lt;li&gt;You're targeting older .NET versions (.NET Framework 4.7.2+ via &lt;code&gt;netstandard2.0&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
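&lt;p&gt;On that last point: because the package targets &lt;code&gt;netstandard2.0&lt;/code&gt;, a classic .NET Framework project can consume it with a plain &lt;code&gt;PackageReference&lt;/code&gt;. A minimal project sketch (the floating version is illustrative; pin a real version in practice):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&amp;lt;!-- SDK-style project targeting .NET Framework 4.7.2 --&amp;gt;
&amp;lt;Project Sdk="Microsoft.NET.Sdk"&amp;gt;
  &amp;lt;PropertyGroup&amp;gt;
    &amp;lt;TargetFramework&amp;gt;net472&amp;lt;/TargetFramework&amp;gt;
  &amp;lt;/PropertyGroup&amp;gt;
  &amp;lt;ItemGroup&amp;gt;
    &amp;lt;PackageReference Include="LogicGrid.Core" Version="*" /&amp;gt;
  &amp;lt;/ItemGroup&amp;gt;
&amp;lt;/Project&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;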

&lt;h2&gt;Try it in 5 minutes&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dotnet add package LogicGrid.Core
ollama pull llama3.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;LogicGrid.Core.Agents&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;LogicGrid.Core.Llm&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;LlmClientBase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Ollama&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"llama3.2"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="n"&gt;IAgent&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"Helper"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"Answers questions concisely."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;systemPrompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"Answer in one short sentence."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;RunAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"What is the capital of France?"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;AgentContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"run-1"&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="n"&gt;Console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WriteLine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. No &lt;code&gt;appsettings.json&lt;/code&gt; ritual, no SDK initialization dance, no API keys (until you want to use a hosted provider).&lt;/p&gt;

&lt;p&gt;If you've been frustrated with Semantic Kernel's posture toward local LLMs or its dependency weight — give LogicGrid 30 minutes. If it doesn't fit, you'll know quickly. If it does, the &lt;a href="https://logicgrid.dev/docs/getting-started/quickstart" rel="noopener noreferrer"&gt;quickstart&lt;/a&gt; walks you through the next steps.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Want a deeper comparison?&lt;/strong&gt; The follow-up post &lt;a href="https://logicgrid.dev/blog/langchain-vs-semantic-kernel-vs-logicgrid" rel="noopener noreferrer"&gt;LangChain vs Semantic Kernel vs LogicGrid&lt;/a&gt; goes feature-by-feature across all three frameworks.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>csharp</category>
      <category>ai</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
