On March 27, researchers at Cyera disclosed three security vulnerabilities affecting LangChain and LangGraph — two of the most widely deployed AI development frameworks in the world.
LangChain-Core recorded 23 million downloads in the week before disclosure. LangChain had 52 million. LangGraph had 9 million. That's 84 million combined weekly downloads carrying at least one of these vulnerabilities into production environments.
The CVEs are:
- CVE-2026-34070 (CVSS 7.5) — path traversal in prompt loading
- CVE-2025-68664 (CVSS 9.3) — deserialization injection that leaks API keys and environment secrets
- CVE-2025-67644 (CVSS 7.3) — SQL injection in the LangGraph checkpoint store
Path traversal. Deserialization of untrusted data. SQL injection. If you've been in web security for any length of time, you've seen these before. They were in the OWASP Top 10 in 2004.
CVE-2026-34070: Path Traversal in Prompt Loading
LangChain's prompt-loading API (langchain_core/prompts/loading.py) accepts file paths to load prompt templates. It does not validate those paths.
A specially crafted prompt template reference can escape the intended directory and read arbitrary files from the server's filesystem. Configuration files, deployment metadata, tokens, prompt templates belonging to other applications — anything the process has read access to.
This is CWE-22: Improper Limitation of a Pathname to a Restricted Directory. It was first catalogued in 2006. Most web frameworks have built-in protections against it. LangChain's prompt loader did not.
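The bug class is easy to illustrate. Below is a generic sketch of the vulnerable pattern and its standard defence, not LangChain's actual loader code; the safe_join helper and the directory layout are hypothetical:

```python
from pathlib import Path

def safe_join(base: Path, name: str) -> Path:
    """Resolve name against base and refuse anything that escapes it.

    Hypothetical helper for illustration, not LangChain's API.
    Requires Python 3.9+ for Path.is_relative_to.
    """
    # The vulnerable pattern is the join alone: (base / name)
    # happily follows "../" segments right out of base.
    resolved = (base.resolve() / name).resolve()
    if not resolved.is_relative_to(base.resolve()):
        raise ValueError(f"path escapes prompt directory: {name!r}")
    return resolved
```

A loader that resolves the path and checks containment before reading turns a crafted reference like ../../etc/passwd into an error instead of a file read.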
Fix: upgrade langchain-core to version 1.2.22 or higher.
CVE-2025-68664: Serialization Injection That Leaks Secrets
This one is the most severe at CVSS 9.3.
LangChain has an internal serialization format. When a dictionary contains an lc key, the framework treats it as a serialized LangChain object rather than regular data. The vulnerability: dumps() and dumpd() did not escape user-controlled dictionaries that happened to include the reserved lc key.
An attacker can craft input data that the framework interprets as a serialized object. When that object is processed, it can trigger the loading of arbitrary LangChain components — including ones that expose environment variables, API keys, and other secrets.
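A simplified sketch of why a reserved marker key needs escaping. This is illustrative code, not LangChain's serializer; the __escaped__ wrapper is an invented convention standing in for whatever escaping the patched dumps() and dumpd() perform:

```python
RESERVED = "lc"  # marker key the (sketched) serializer reserves for objects

def dump_safe(value):
    """Escape user data so a literal 'lc' key is never mistaken
    for the serializer's object marker (simplified sketch)."""
    if isinstance(value, dict):
        escaped = {k: dump_safe(v) for k, v in value.items()}
        if RESERVED in escaped:
            # Wrap so the loader sees plain data, not an object tag.
            return {"__escaped__": escaped}
        return escaped
    if isinstance(value, list):
        return [dump_safe(v) for v in value]
    return value

def load_safe(value):
    """Unwrap escaped dicts; reject a bare marker key in untrusted data."""
    if isinstance(value, dict):
        if "__escaped__" in value:
            return {k: load_safe(v) for k, v in value["__escaped__"].items()}
        if RESERVED in value:
            raise ValueError("unescaped serialized-object marker in untrusted data")
        return {k: load_safe(v) for k, v in value.items()}
    if isinstance(value, list):
        return [load_safe(v) for v in value]
    return value
```

The vulnerable version is the one without the escape step: user data containing {"lc": ...} becomes indistinguishable from a framework-serialized object, and the loader will try to construct whatever component the payload names.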
Cyata documented this vulnerability in December 2025 under the name "LangGrinch." As researcher Vladimir Tokarev noted: "Each vulnerability exposes a different class of enterprise data: filesystem files, environment secrets, and conversation history."
This is CWE-502: Deserialization of Untrusted Data. Java developers have been fighting this class of bug since at least 2015, when the Apache Commons Collections deserialization vulnerability became one of the most exploited flaws in enterprise software. The AI ecosystem is learning the same lesson.
Fix: upgrade langchain-core to version 0.3.81 (if on the 0.x branch) or 1.2.5+.
CVE-2025-67644: SQL Injection in LangGraph Checkpoints
LangGraph uses SQLite for checkpoint storage. The SqliteSaver component's list() and alist() methods accept metadata filter keys — and those keys are interpolated directly into SQL queries without sanitisation.
An attacker who can influence the metadata filter keys can inject arbitrary SQL. The result: full bypass of any query filters and access to all checkpoint records, which contain conversation state, tool call results, and any data the agent processed during its run.
This is CWE-89: SQL Injection. Injection flaws have appeared in every OWASP Top 10 since the list debuted in 2003, and held the number-one spot from 2010 through 2017. Parameterised queries have been the standard defence for over twenty years. The LangGraph checkpoint store did not use them.
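Parameterised queries handle values, but identifiers interpolated into SQL text are a separate trap. One way to close both holes is to keep filter keys out of the query string entirely. A generic sketch, assuming a normalised key/value metadata table; this is not LangGraph's actual schema or API:

```python
import sqlite3

def list_checkpoints(conn: sqlite3.Connection, metadata_filter: dict) -> list:
    """Return checkpoint ids whose metadata matches every filter pair.

    Both keys and values travel as SQL parameters; nothing from the
    caller's filter is interpolated into the query string.
    (Generic sketch, not LangGraph's actual schema.)
    """
    sql = "SELECT DISTINCT checkpoint_id FROM checkpoint_metadata"
    clauses, params = [], []
    for key, value in metadata_filter.items():
        clauses.append(
            "checkpoint_id IN (SELECT checkpoint_id FROM checkpoint_metadata "
            "WHERE key = ? AND value = ?)"
        )
        params.extend([key, value])
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return [row[0] for row in conn.execute(sql, params)]
```

Because filter keys live in a column rather than in the SQL text, a hostile key can only fail to match; it can never rewrite the query.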
Fix: upgrade langgraph-checkpoint-sqlite to version 3.0.1.
The Pattern
These are not exotic AI-specific attack vectors. There is no prompt injection here, no novel adversarial technique, no research paper required to understand the attack surface. These are bread-and-butter application security bugs — the kind that automated scanners have been catching in web applications since the mid-2000s.
The web security community spent two decades building defences against these vulnerability classes. Frameworks like Django, Rails, and Express have path traversal protection, parameterised queries, and safe serialization built into their core. Developers using those frameworks get these protections by default without thinking about them.
The AI framework ecosystem has not inherited those protections. It has inherited the speed and ambition, but not the scar tissue.
LangChain is not a small project maintained by a single developer. It has significant funding, a large team, and enterprise customers. These vulnerabilities are not the result of neglect or resource constraints. They're the result of building fast in a domain where the security patterns haven't been established yet — and where the developers building the frameworks may not have backgrounds in the web security discipline that solved these problems.
Why This Matters More Than Usual
When a web application has a path traversal bug, the blast radius is the data that application can access. When an AI orchestration framework has a path traversal bug, the blast radius includes:
- Every API key the agent has been configured with
- Every tool credential stored in environment variables
- Every prompt template, including ones that encode business logic
- Every conversation history checkpoint, which may contain customer data, internal documents, or credentials that users pasted into chat
AI frameworks accumulate access. They need API keys for LLM providers, database credentials for memory stores, authentication tokens for the tools they orchestrate. A single vulnerability in the framework layer exposes everything the framework touches — which, by design, is everything.
The SQL injection in LangGraph's checkpoint store is a good example. Checkpoints contain the full state of agent conversations: tool calls, responses, intermediate reasoning, user inputs. An attacker who can query the checkpoint store without filters has access to the complete operational history of every agent running on that instance.
What To Do
Patch immediately. The fixes are straightforward version bumps:
- langchain-core >= 1.2.22 (or >= 0.3.81 for the 0.x line)
- langgraph-checkpoint-sqlite >= 3.0.1
Audit your environment variables. If you're running LangChain in an environment that has API keys, cloud credentials, or database passwords in environment variables — and you almost certainly are — assume those were accessible through CVE-2025-68664 until you patched. Rotate them.
Check your checkpoint stores. If you use LangGraph with SQLite checkpoints and the checkpoint store was accessible to untrusted input, assume conversation history was accessible. Audit what data those conversations contained.
Run a dependency audit. pip-audit or safety check will flag these CVEs in your lockfile. If you're not running dependency audits in CI, this is a good week to start.
Consider what's between your agents and the world. These CVEs are in the framework layer — the code that sits between your application logic and the LLMs, databases, and tools your agents use. If you're running DLP, content scanning, or access controls, they need to cover the framework layer too, not just the agent's outbound calls.
The Bigger Picture
RSAC 2026 wrapped up this week. Cisco released DefenseClaw, an open-source framework for scanning AI agent skills and MCP servers. Microsoft announced Agent 365. CrowdStrike launched Charlotte AI AgentWorks. SentinelOne, Check Point, Saviynt, and Teleport all shipped AI agent security products.
The industry is building defences for the AI agent era. That's genuinely necessary. But the LangChain disclosure is a reminder that the most urgent vulnerabilities aren't the exotic ones. They're the ones we already know how to find and fix — in the frameworks that haven't looked for them yet.
84 million weekly downloads. Path traversal, SQL injection, and deserialization. The AI industry is speed-running the web's security history, and it hasn't reached the chapter where we learned to check our inputs.
Sources: The Hacker News: LangChain, LangGraph Flaws Disclosure (March 27, 2026) · TechRadar: LangChain framework security issues · Cyata: LangGrinch CVE-2025-68664 (December 2025) · Cisco DefenseClaw announcement (March 2026)
Originally published on mistaike.ai