Why This Comparison Matters Right Now
Here's the honest reality of building Python AI agents in 2026: you have two genuinely good framework choices, and picking the wrong one for the wrong problem will cost you architecture refactors, not just a few hours of code changes.
LangGraph and Semantic Kernel have both crossed major milestones since most popular comparisons were written:
- LangGraph hit v1.0 in October 2025, with a formal stability commitment and no breaking changes until 2.0. (official announcement)
- LangChain 1.0's `create_agent` now runs on the LangGraph runtime underneath, making LangGraph the execution engine of the LangChain ecosystem. (LangChain/LangGraph 1.0 blog)
- Semantic Kernel shipped first-class MCP support for Python in v1.28.1: SK can now act as both an MCP client and server natively in the SDK. (official SK dev blog)
If you're still reading comparisons that call LangGraph "unstable" or Semantic Kernel "too tied to .NET", you're reading old content.
This post is grounded in the official LangGraph docs, the official Semantic Kernel docs, and both framework changelogs. Let's go.
TL;DR: The One-Line Decision Rule
| If your problem is… | Use… |
|---|---|
| Stateful, durable, resumable agent workflows with explicit control | LangGraph |
| Protocol-first, plugin-composed, interoperable agent platforms | Semantic Kernel |
That distinction explains every trade-off in this article.
Architecture: Two Very Different Mental Models
LangGraph — The Graph Runtime
LangGraph models your agent system as a stateful graph where you explicitly define state, nodes, and edges. Nodes are Python callables or subgraphs. Edges are transitions. State is a typed object that flows through the graph and gets updated at each step.
That's not an internal implementation detail — it's the primary abstraction you work with every day.
The official LangGraph v1 docs describe the framework around three core ideas: durable execution, controllability, and human-in-the-loop. Resuming a workflow from the last checkpoint after a crash, inserting a human review step, or branching into parallel sub-agents are first-class operations — not workarounds.
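The durable-execution idea is worth making concrete. Here is a plain-Python sketch of checkpoint-and-resume semantics, independent of LangGraph's actual API (the node functions, pipeline, and checkpoint file layout are all illustrative, not LangGraph code): the runner persists state after every node, so a crashed run restarted with the same thread picks up from the last completed step.

```python
import json
import tempfile
from pathlib import Path

# Illustrative sketch of checkpoint-and-resume semantics (NOT LangGraph's API):
# each node transforms state, and the runner saves state after every step so a
# crashed run can resume from the last completed node.

def fetch(state):
    return {**state, "data": [1, 2, 3], "step": "fetch"}

def summarize(state):
    return {**state, "summary": sum(state["data"]), "step": "summarize"}

PIPELINE = [("fetch", fetch), ("summarize", summarize)]

def run(checkpoint_path: Path, initial_state: dict) -> dict:
    # Resume from the last checkpoint if one exists, else start fresh.
    if checkpoint_path.exists():
        saved = json.loads(checkpoint_path.read_text())
        state, done = saved["state"], saved["done"]
    else:
        state, done = initial_state, []
    for name, node in PIPELINE:
        if name in done:
            continue  # already completed before the crash
        state = node(state)
        done.append(name)
        checkpoint_path.write_text(json.dumps({"state": state, "done": done}))
    return state

ckpt = Path(tempfile.mkdtemp()) / "thread-1.json"
result = run(ckpt, {"step": None})
print(result["summary"])  # 6
```

Calling `run` again with the same checkpoint path is a no-op replay from the saved state, which is the essence of what LangGraph's checkpointer does for you at the runtime level.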
Since LangGraph v1, LangChain's `create_agent` lives on top of this runtime. The stack now has a clean separation:
- Start with `create_agent` for standard tool-calling loops.
- Drop down to raw LangGraph when you need explicit workflow topology.
Semantic Kernel — The Kernel-Plugin Middleware
Semantic Kernel starts from the Kernel abstraction, which holds AI services, plugins, and functions. Plugins are groups of functions exposed to the model and to agents, and can come from native Python code, prompt templates, or imported external schemas.
The official SK agent-functions docs state:
> "Any Plugin available to an Agent is managed within its respective Kernel instance — this enables each Agent to access distinct functionalities based on its specific role."
Orchestration emerges from agents choosing functions and planners sequencing capability calls — rather than from a graph topology you define up front.
This makes Semantic Kernel feel more like AI middleware. You shape what your agent can do, then let function calling and the agent framework decide how to do it.
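That "capabilities first, orchestration second" model can be sketched without SK at all. In this illustrative sketch (none of these names are SK's API), capabilities are registered with descriptions, and a chooser, which in SK is the LLM's function calling, decides which one to invoke:

```python
from typing import Callable

# Illustrative sketch of the plugin/function-calling model (NOT SK's API):
# capabilities are registered with descriptions, and a chooser (in SK, the
# model's function calling) decides which one to invoke for a request.

REGISTRY: dict[str, tuple[str, Callable[..., str]]] = {}

def capability(name: str, description: str):
    def wrap(fn):
        REGISTRY[name] = (description, fn)
        return fn
    return wrap

@capability("get_weather", "Get the weather for a city.")
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

@capability("get_time", "Get the local time for a city.")
def get_time(city: str) -> str:
    return f"12:00 in {city}"

def choose_and_call(request: str, city: str) -> str:
    # Stand-in for the model's function choice: naive keyword match.
    for name, (description, fn) in REGISTRY.items():
        if name.split("_")[1] in request.lower():
            return fn(city)
    return "No matching capability"

print(choose_and_call("What's the weather like?", "Mumbai"))  # Sunny in Mumbai
```

The point of the sketch: you never write the control flow that connects request to capability. You only shape the capability surface.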
Architectural Difference — Quick Reference
| Dimension | LangGraph | Semantic Kernel |
|---|---|---|
| Primary abstraction | Typed state graph (nodes + edges) | Kernel + plugins + agents |
| Workflow control | You define topology explicitly | Emerges from agent function calling |
| State management | First-class typed state + checkpointing | Externalized per service or plugin |
| Best mental model | Durable state machine for agents | AI middleware with composable capabilities |
Code: The Same Agent in Both Frameworks
To make the architectural differences concrete, let's build the same agent in both: a multi-turn weather assistant with memory and a system prompt.
LangGraph — Weather Agent with Checkpointing
Pattern from the official LangGraph agents quickstart
```python
# pip install -U langgraph "langchain[openai]"
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import InMemorySaver
from langchain.chat_models import init_chat_model

# --- Tool: plain Python function ---
def get_weather(city: str) -> str:
    """Get the current weather for a given city."""
    # Replace with a real API call in production
    return f"It's sunny and 28°C in {city}."

# --- LLM ---
model = init_chat_model("openai:gpt-4o-mini", temperature=0)

# --- Checkpointer enables durable multi-turn memory ---
# Swap InMemorySaver for SqliteSaver or PostgresSaver in production
checkpointer = InMemorySaver()

# --- Compile graph agent ---
agent = create_react_agent(
    model=model,
    tools=[get_weather],
    prompt="You are a helpful weather assistant.",
    checkpointer=checkpointer,
)

# --- thread_id binds this conversation to a persistent checkpoint ---
config = {"configurable": {"thread_id": "user-session-1"}}

# Turn 1
response = agent.invoke(
    {"messages": [{"role": "user", "content": "What is the weather in Mumbai?"}]},
    config=config,
)
print(response["messages"][-1].content)

# Turn 2 — agent remembers context automatically via checkpointer
followup = agent.invoke(
    {"messages": [{"role": "user", "content": "How about Delhi?"}]},
    config=config,
)
print(followup["messages"][-1].content)
```
What is happening architecturally:
- `create_react_agent` compiles a `StateGraph` with a model-tool loop under the hood.
- The `checkpointer` persists state at every step; the same `thread_id` resumes from the last saved state automatically.
- If the process crashes mid-run, restarting and invoking with the same `thread_id` picks up from the last checkpoint. Durability is a runtime concern, not your concern.
Semantic Kernel — Weather Agent with Plugin
Pattern from the official SK agent-functions docs
```python
# pip install semantic-kernel
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import (
    OpenAIChatCompletion,
    OpenAIChatPromptExecutionSettings,
)
from semantic_kernel.connectors.ai import FunctionChoiceBehavior
from semantic_kernel.functions import KernelArguments, kernel_function
from semantic_kernel.contents import ChatHistory

# --- Plugin: class with @kernel_function decorators ---
class WeatherPlugin:
    @kernel_function(name="get_weather", description="Get the weather for a city.")
    def get_weather(self, city: str) -> str:
        # Replace with a real API call in production
        return f"It's sunny and 28°C in {city}."

# --- Kernel: holds services and plugins ---
kernel = Kernel()
kernel.add_service(OpenAIChatCompletion(ai_model_id="gpt-4o-mini"))

# --- Execution settings: enable auto function calling ---
settings = OpenAIChatPromptExecutionSettings()
settings.function_choice_behavior = FunctionChoiceBehavior.Auto()

# --- Register plugin ---
kernel.add_plugin(WeatherPlugin(), plugin_name="WeatherPlugin")

# --- Agent: kernel + instructions; settings are wired in via KernelArguments ---
agent = ChatCompletionAgent(
    kernel=kernel,
    name="WeatherAssistant",
    instructions="You are a helpful weather assistant.",
    arguments=KernelArguments(settings=settings),
)

async def run_agent():
    # ChatHistory is your responsibility to maintain across turns
    history = ChatHistory()

    # Turn 1
    history.add_user_message("What is the weather in Mumbai?")
    async for message in agent.invoke(history):
        print(f"Agent: {message.content}")
        history.add_message(message)

    # Turn 2
    history.add_user_message("How about Delhi?")
    async for message in agent.invoke(history):
        print(f"Agent: {message.content}")
        history.add_message(message)

asyncio.run(run_agent())
```
What is happening architecturally:
- The `Kernel` holds the AI service and plugins as a dependency container.
- `@kernel_function` decorators make Python methods discoverable and invocable by the model automatically.
- `FunctionChoiceBehavior.Auto()` tells the model to call functions when needed.
- Memory is a `ChatHistory` object you manage and pass into each invocation. The runtime does not persist it for you.
The Most Revealing Difference in 6 Lines
```python
# LangGraph — runtime owns durability
checkpointer = InMemorySaver()
config = {"configurable": {"thread_id": "session-1"}}
agent.invoke(messages, config)  # resumes from last checkpoint automatically

# Semantic Kernel — you own state
history = ChatHistory()
history.add_user_message("...")
agent.invoke(history)  # you pass and maintain state explicitly
```
In LangGraph, durability is a runtime concern. In Semantic Kernel, state management is your concern. Neither is wrong — they match different application models.
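Owning state also means owning its persistence. A minimal sketch of what that looks like on the SK side, using plain JSON rather than SK's own serialization helpers (the file layout and message shape here are assumptions for illustration): load the history, append the turn, and write it back, so a later process run sees earlier turns.

```python
import json
import tempfile
from pathlib import Path

# Sketch of externalizing chat state yourself, as SK requires of you
# (plain JSON, NOT SK's serialization API): load history, append the
# new turns, and persist it back after each exchange.

def load_history(path: Path) -> list[dict]:
    return json.loads(path.read_text()) if path.exists() else []

def save_history(path: Path, history: list[dict]) -> None:
    path.write_text(json.dumps(history, indent=2))

store = Path(tempfile.mkdtemp()) / "user-session-1.json"

# Turn 1: load, append the exchange, persist.
history = load_history(store)
history.append({"role": "user", "content": "What is the weather in Mumbai?"})
history.append({"role": "assistant", "content": "Sunny and 28°C."})
save_history(store, history)

# A later process run reloads the earlier turns before continuing.
history = load_history(store)
history.append({"role": "user", "content": "How about Delhi?"})
print(len(history))  # 3
```

This is exactly the bookkeeping LangGraph's checkpointer absorbs for you, and exactly the flexibility SK gives you to put state wherever your architecture wants it (Redis, Cosmos DB, a session service).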
Protocol Support: MCP and A2A
This is where Semantic Kernel has made its most significant leap recently.
Semantic Kernel — Native MCP in the Python SDK
The official SK MCP announcement states:
> "Python support for MCP has arrived... SK Python can act as both an MCP Host and an MCP Server, support multiple transport methods (stdio, SSE, WebSocket), chain multiple MCP servers together, and expose SK functions or agents as MCP servers."
That is not an adapter or community plugin. It is first-class SDK support from v1.28.1+. For teams building tools and agents that need to cross service boundaries via a standard protocol, this is a meaningful architectural upgrade.
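For context on what "MCP semantics" means on the wire: MCP rides on JSON-RPC 2.0, and a client (for example, an SK kernel acting as MCP host) invokes a server-exposed tool with a `tools/call` request. The field values below are illustrative; the envelope shape follows the MCP specification.

```python
import json

# Shape of an MCP tool invocation (JSON-RPC 2.0 envelope per the MCP spec).
# Tool name and arguments are illustrative values, not a real server's schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",            # a tool the server advertised
        "arguments": {"city": "Mumbai"},  # validated against the tool's schema
    },
}

wire = json.dumps(request)
print(json.loads(wire)["method"])  # tools/call
```

Because both sides speak this envelope, an SK plugin exposed as an MCP server is consumable by any MCP client, SK-based or not. That protocol-level neutrality is the whole point.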
LangGraph — Strong MCP at the Deployment Edge
LangGraph's MCP story is more about deployment than in-process integration. When deployed on the LangGraph Platform, every agent is automatically exposed as an MCP-accessible endpoint at /mcp with no extra code required. For self-hosted deployments, integration is available via the langchain-mcp-adapters package.
Bottom line:
SK is stronger when you want MCP semantics inside your Python process. LangGraph is stronger when you think of agents as deployed services that other clients consume via MCP.
Stability and Breaking Changes: The 2026 Reality
Here is what the official docs actually say now.
LangGraph v1 (October 2025): The official v1 release notes state that the core graph APIs and execution model are unchanged. The main migration note is deprecation of `create_react_agent` in `langgraph.prebuilt` in favour of LangChain's `create_agent`. The LangGraph 1.0 announcement explicitly commits to no breaking changes until 2.0.
Semantic Kernel 1.x: Most architectural disruption landed at 1.0 (namespace reorg, API renames, context variable changes). The H1 2025 SK roadmap and subsequent releases show an incremental, additive pattern with targeted fixes rather than structural breaks.
The old narrative of "LangGraph breaks every release" is no longer accurate. Both frameworks are now in a stability-first phase.
Updated Technical Ratings (March 2026)
Based on official docs and both frameworks' current stable releases:
| Dimension | LangGraph | Semantic Kernel |
|---|---|---|
| Workflow control | ⭐ 4.8 / 5 | ⭐ 4.0 / 5 |
| Durable execution | ⭐ 4.9 / 5 | ⭐ 4.1 / 5 |
| Plugin/tool architecture | ⭐ 4.2 / 5 | ⭐ 4.8 / 5 |
| MCP interoperability | ⭐ 3.9 / 5 | ⭐ 4.9 / 5 |
| Flow debuggability | ⭐ 4.7 / 5 | ⭐ 3.9 / 5 |
| Start simple, scale complex | ⭐ 4.8 / 5 | ⭐ 4.4 / 5 |
| Python DX overall | ⭐ 4.6 / 5 | ⭐ 4.5 / 5 |
The scores are intentionally close. Both are production-grade frameworks solving real problems well. The winner for your team is whichever abstraction maps better to how you think about the problem you are solving.
When to Choose Which
✅ Choose LangGraph when:
- Your agent logic involves non-trivial branching, retries, human review, or approval steps that benefit from explicit graph topology.
- You need durable execution — workflows that survive crashes, resume from checkpoints, and have auditable step history.
- You are already invested in the LangChain ecosystem and want the clean `create_agent` → LangGraph stack with a clear upgrade path.
- You want fine-grained observability into how execution moved through a workflow at the node level.
✅ Choose Semantic Kernel when:
- You are building a platform or SDK where capabilities are composed as plugins and different agents consume different tool surfaces.
- MCP or A2A interoperability is a core requirement and you want it natively in the Python SDK, not via adapters.
- Your team already uses a DI/service-oriented architecture and the kernel-plugin model maps naturally to it.
- You want lightweight deployment without a dedicated orchestration runtime and can manage state externally.
The One-Line Rule — Revisited
If your agent needs to behave like a durable state machine, use LangGraph.
If your agent needs to behave like a protocol-aware platform component, use Semantic Kernel.
That is the comparison most blog posts are not making. Hopefully this one was useful.