In the last 30 days, three separate agent frameworks received nearly identical feature requests from their communities. Different repos, different maintainers, different architectures. Same ask.
The issues
LangChain #35393 — "Agent Identity Verification"
Opened mid-April 2026. The thread (15+ comments and growing) asks for a way to verify agent identity before tool execution. Not after. Not in logs. Before the tool call happens.
OpenAI Agents SDK #2775 — governance collaboration
imran-siddique opened this issue requesting a governance integration layer for the Agents SDK. The goal: define policies that run before an agent acts, not after it has already sent the email or deleted the row.
CrewAI #4596 — "Fail closed without pre-execution checks"
Three comments. The request is direct: if a policy engine can't evaluate a tool call, block it. Don't default to allowing it. Fail closed, not fail open.
What they have in common
Strip away the framework-specific language and all three issues describe the same architecture:
- Agent decides to call a tool
- Something checks whether that call should proceed
- If the check says no (or can't decide), the call doesn't happen
- Every decision gets logged
That's it. Pre-execution evaluation with a default-deny posture.
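The four steps above can be sketched in a few lines of plain Python. This is a minimal illustration, not any framework's actual API: the `evaluate` policy, the allowlist contents, and `governed_call` are all hypothetical names invented here to show the shape of pre-execution evaluation with a default-deny posture.

```python
import logging
from dataclasses import dataclass
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("governance")

@dataclass
class Decision:
    allowed: bool
    reason: str

def evaluate(tool_name: str, args: dict) -> Decision:
    # Hypothetical policy: only tools on an explicit allowlist may run.
    allowlist = {"search_docs", "get_weather"}
    if tool_name in allowlist:
        return Decision(True, "tool on allowlist")
    return Decision(False, "tool not on allowlist")

def governed_call(tool_name: str, tool_fn: Callable, **args):
    try:
        decision = evaluate(tool_name, args)
    except Exception as exc:
        # Fail closed: if the policy engine can't evaluate, that's a denial.
        decision = Decision(False, f"policy engine error: {exc}")
    # Every decision gets logged, allowed or not.
    log.info("tool=%s allowed=%s reason=%s",
             tool_name, decision.allowed, decision.reason)
    if not decision.allowed:
        # The check said no (or couldn't decide), so the call never happens.
        raise PermissionError(f"blocked: {decision.reason}")
    return tool_fn(**args)
```

An allowed tool runs normally; anything off the allowlist raises before the underlying function is ever invoked:

```python
governed_call("search_docs", lambda query: f"results for {query}", query="x")
governed_call("drop_table", lambda: None)  # raises PermissionError
```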
The reason this keeps coming up independently is that every team building production agents hits the same wall. The agent works. It works too well. It sends emails nobody reviewed. It runs database queries nobody approved. It calls external APIs with real money attached.
The observability layer (logging, tracing, monitoring) tells you what happened. It doesn't prevent anything.
Why this is technically hard
Each framework handles tool calls differently.
LangChain wraps tools as BaseTool instances with a _run method. Intercepting means wrapping that method or using the callback system (which fires after invocation, not before, in most configurations).
OpenAI's Agents SDK uses function definitions passed to the completions API. The tool call comes back in the API response. Intercepting means catching the response before the function executes — a different hook point than LangChain.
CrewAI has its own task/tool abstraction with @tool decorators. Intercepting means wrapping the decorated function or patching the task execution pipeline.
Three frameworks, three interception points, three sets of lifecycle hooks. A governance layer that works across all of them needs to abstract over these differences without breaking the framework-specific semantics.
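One way to abstract over those three hook points is a single policy-aware wrapper that each framework adapter funnels into, so the LangChain `_run` wrapper, the SDK response hook, and the CrewAI decorator patch all share one evaluation path. The sketch below is an assumption about how such a layer could look, not code from any of these projects; `PolicyInterceptor` and its `wrap` method are invented names.

```python
from typing import Any, Callable, Protocol

class ToolInterceptor(Protocol):
    """One adapter per framework; each wires the same check into
    that framework's native interception point."""
    def wrap(self, name: str, fn: Callable[..., Any]) -> Callable[..., Any]: ...

class PolicyInterceptor:
    def __init__(self, policy: Callable[[str, dict], bool]):
        self.policy = policy

    def wrap(self, name: str, fn: Callable[..., Any]) -> Callable[..., Any]:
        # Framework-specific adapters (a BaseTool._run wrapper, an API
        # response hook, a patched @tool decorator) all return this closure,
        # keeping the framework semantics intact around a shared gate.
        def guarded(**kwargs):
            try:
                allowed = self.policy(name, kwargs)
            except Exception:
                allowed = False  # fail closed on evaluation errors
            if not allowed:
                raise PermissionError(f"tool call blocked: {name}")
            return fn(**kwargs)
        return guarded
```

The point of the `Protocol` is that the governance layer never needs to know which framework it is running inside; only the thin adapters do.

```python
interceptor = PolicyInterceptor(lambda name, args: name == "read_file")
safe = interceptor.wrap("read_file", lambda path: f"contents of {path}")
safe(path="notes.txt")  # runs; any other tool name would raise
```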
What's actually different between the requests
Despite the shared pattern, each community emphasizes different aspects:
- LangChain's thread focuses on identity. Who is this agent? Can we verify its identity before trusting its tool calls? The concern is authentication, not just authorization.
- OpenAI's issue frames it as collaboration between governance systems and the SDK. The language is about integration points and extensibility.
- CrewAI's request is the most operational. Fail closed. If you can't check, don't proceed. This is classic security engineering applied to agent behavior.
Identity. Integration. Fail-closed defaults. Three angles on the same problem.
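The identity angle in particular reduces to a familiar primitive: give each agent a verifiable credential and check it before trusting a tool call. A minimal sketch using HMAC-signed agent tokens is below; the key handling, token format, and function names are all illustrative assumptions (a real deployment would use a secrets manager and a standard token format, not a hardcoded key).

```python
import hmac
import hashlib
from typing import Optional

SECRET = b"shared-signing-key"  # illustrative only; load from a secrets manager

def issue_token(agent_id: str) -> str:
    """Hypothetical: sign the agent's identity at deployment time."""
    sig = hmac.new(SECRET, agent_id.encode(), hashlib.sha256).hexdigest()
    return f"{agent_id}:{sig}"

def verify_token(token: str) -> Optional[str]:
    """Return the verified agent id, or None if the signature fails.

    A None result should be treated as a denial (fail closed), never
    as an anonymous pass-through.
    """
    agent_id, _, sig = token.partition(":")
    expected = hmac.new(SECRET, agent_id.encode(), hashlib.sha256).hexdigest()
    return agent_id if hmac.compare_digest(sig, expected) else None
```

Authentication (who is this agent?) then composes with authorization (may this agent call this tool?): verify the token first, and only then evaluate the policy for the verified identity.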
What this tells us
When three independent communities ask for the same thing within a month of each other, that's a market signal. Production agent deployments are hitting governance requirements that the frameworks weren't designed to handle.
The frameworks were built for capability: make the agent smarter, give it more tools, let it reason better. Governance was someone else's problem.
Now it's everyone's problem. And the teams filing these issues are the ones deploying agents into environments where "the agent decided it was fine" isn't an acceptable answer for why the production database got modified.
No framework has shipped a complete answer yet. The issues are open, the discussions are active, and the architecture is still being figured out. But the convergence is real. Pre-execution governance for agent tool calls isn't a niche concern anymore. It's the feature request.