As AI Agents gain more autonomy, a fundamental fear has taken hold in the enterprise: "What if the Agent does something it shouldn't?"
We’ve all seen the warnings in system prompts: "Please be careful when deleting data." But as every seasoned engineer knows, a prompt is not a security policy. If you want to prevent an AI from accidentally triggering a production deployment or wiping a database, you need a hard, runtime "Kill Switch."
In the apcore protocol, we call this the Approval Gate. In this sixteenth article, we explore how the requires_approval annotation brings "Human-in-the-Loop" (HITL) directly into the heart of the execution pipeline.
Why Autonomy Needs a Brake Pedal
Autonomous Agents are designed to loop: they plan, execute, observe, and repeat. The problem arises during the "Execute" phase. If an Agent decides that the best way to "optimize disk space" is to delete your /var/log directory, it will try to do so instantly.
Traditional systems try to solve this with prompt engineering or post-execution auditing. Both are inadequate: a prompt can be ignored, and an audit arrives after the damage is done.
At apcore, we implement HITL at Step 5 of our 11-step pipeline. Before the validation runs, and long before your code is executed, the Executor checks for the "Approval" flag.
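Conceptually, the gate sits between the Agent's decision and your code. The sketch below illustrates that ordering; the `Module`/`execute` names are assumptions for illustration, not the actual apcore internals:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical model of a registered module and its annotations
# (names assumed; not the real apcore API).
@dataclass
class Module:
    id: str
    annotations: dict = field(default_factory=dict)

def execute(module: Module, approve: Callable[[Module], bool]) -> str:
    # Step 5: consult the approval flag before validation or user code runs.
    if module.annotations.get("requires_approval"):
        if not approve(module):
            return "rejected"  # pipeline halts; the module body never runs
    return "executed"

deploy = Module(id="ops.deploy", annotations={"requires_approval": True})
assert execute(deploy, approve=lambda m: False) == "rejected"
assert execute(deploy, approve=lambda m: True) == "executed"
```

The key design point is that the check happens in the Executor itself, so no amount of prompt manipulation can route around it.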
The requires_approval Annotation
Marking a module as "High Stakes" is a single-line operation in apcore:
```python
@module(id="ops.deploy", description="Deploy to production.")
@annotations(requires_approval=True, destructive=True)
def deploy(env: str):
    # Logic...
    ...
```
When this module is invoked, the apcore Executor doesn't run the code. Instead, it halts and triggers an ApprovalHandler.
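Mechanically, a decorator like `@annotations` can implement this by attaching metadata that the Executor inspects before dispatch. The attribute name below is a guess for illustration, not the real apcore internals:

```python
# Hedged sketch of how an annotations decorator *might* attach metadata.
def annotations(**flags):
    def wrap(fn):
        # Hypothetical attribute name; apcore's actual storage may differ.
        fn.__apcore_annotations__ = dict(flags)
        return fn
    return wrap

@annotations(requires_approval=True, destructive=True)
def deploy(env: str):
    return f"deployed to {env}"

# The executor reads the flags without calling the function:
assert deploy.__apcore_annotations__["requires_approval"] is True
```

Because the flag lives on the module object rather than in a prompt, it is visible to tooling (CLIs, dashboards, registries) as well as to the runtime.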
Pluggable Approval Handlers
The beauty of apcore is that the "Human" doesn't have to be in any specific UI. Because apcore is a protocol, the approval request is projected onto whichever Surface the caller is using:
1. The CLI Surface
If you are running a module via apcore-cli, the terminal will pause and ask:
```
Module 'ops.deploy' requires approval. Proceed? [y/N]
```
2. The MCP Surface
If Claude is calling your tool via MCP, apcore-mcp uses the protocol's Elicitation feature. A confirmation dialog appears directly in the Claude or Cursor interface, allowing the user to click "Approve" before the AI continues.
3. The Agent-to-Agent (A2A) Surface
In an A2A workflow, the "Provider Agent" sends an input-required status back to the "Consumer Agent." The Consumer Agent then knows it must pause its task and ask its own human user for permission.
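The three surfaces above can share one handler interface, with each surface supplying its own implementation. This is a minimal sketch under assumed names, not the actual apcore handler API:

```python
from typing import Callable, Protocol

class ApprovalHandler(Protocol):
    """Hypothetical interface a surface implements to answer approval requests."""
    def request(self, module_id: str) -> bool: ...

class AutoApproveHandler:
    """E.g. a trusted CI context that never blocks."""
    def request(self, module_id: str) -> bool:
        return True

class CliHandler:
    """Terminal [y/N] prompt; the reader is injectable for testing."""
    def __init__(self, read: Callable[[str], str] = input):
        self._read = read

    def request(self, module_id: str) -> bool:
        answer = self._read(f"Module '{module_id}' requires approval. Proceed? [y/N] ")
        return answer.strip().lower() == "y"

cli = CliHandler(read=lambda prompt: "y")  # simulate a user typing "y"
assert cli.request("ops.deploy") is True
```

An MCP or A2A surface would implement the same `request` shape, but resolve it asynchronously via Elicitation or an `input-required` round-trip instead of a blocking prompt.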
Bypassing Approval: The Trusted Context
There are scenarios where you want to bypass the gate—for example, during automated CI/CD runs or when a highly trusted system administrator is using the CLI.
apcore allows this via the Trusted Context:
- CLI: The `-y` or `--yes` flag tells the handler to auto-approve.
- Identity: You can configure your registry to auto-approve calls from specific `identity.type` values (e.g., `"system"`) while still requiring approval for `"user"` or `"agent"`.
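Both bypass paths reduce to a small policy decision. The function below is a hypothetical sketch of that policy (the identity-type names come from the text; everything else is assumed):

```python
# Assumed trusted identity types; configurable per registry in practice.
TRUSTED_IDENTITY_TYPES = {"system"}

def needs_human_approval(annotations: dict, identity_type: str,
                         yes_flag: bool = False) -> bool:
    """Return True when the gate should pause and ask a human."""
    if not annotations.get("requires_approval"):
        return False          # module is not marked high-stakes
    if yes_flag:
        return False          # CLI -y / --yes auto-approves
    return identity_type not in TRUSTED_IDENTITY_TYPES

assert needs_human_approval({"requires_approval": True}, "agent") is True
assert needs_human_approval({"requires_approval": True}, "system") is False
assert needs_human_approval({"requires_approval": True}, "user", yes_flag=True) is False
```

Note that the bypass is itself an explicit, auditable configuration choice, not a prompt-level escape hatch.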
Conclusion: Bridging Fear and Autonomy
The path to production AI is not about making models "smarter"—it's about making our infrastructure safer. By enforcing "Human-in-the-Loop" at the protocol level, apcore gives enterprises the confidence to deploy autonomous Agents, knowing that the "Brake Pedal" is always under human control.
Next, we wrap up Volume II with "Observability 2.0: Tracing AI 'Thought Chains' with OpenTelemetry."
This is Article #16 of the *Building the AI-Perceivable World* series. Join us in building secure and governed AI architectures.
GitHub: aiperceivable/apcore