
Bridge ACE
Approval Gates: How to Make AI Agents Safe for Real-World Operations

AI agents with real-world tool access (email, phone, browser, payments) are powerful. They are also dangerous. Without guardrails, an agent can send emails to your customers, make purchases, or push code to production.

Bridge ACE solves this with Approval Gates — a classification system that determines which actions an agent can take autonomously and which require human sign-off.

How It Works

Every tool in Bridge ACE is classified into one of three categories:

AUTO — Execute Immediately

  • Reading files, analyzing code, internal messaging between agents
  • No risk of external impact
  • Agent acts autonomously

LOG — Execute and Record

  • Web searches, research queries
  • Low risk but worth tracking
  • Agent acts, action is logged for audit

REQUIRE_APPROVAL — Queue for Human

  • Sending emails
  • Making phone calls
  • Posting on social media
  • Making purchases
  • Pushing code to production
  • Any irreversible external action

When an agent triggers a REQUIRE_APPROVAL action, the request appears in the Fleet Management UI. The human reviews the action, the recipient, the content — and approves or denies.
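This flow can be sketched in a few lines. Everything below is illustrative: the function and variable names (`dispatch`, `POLICIES`, `approval_queue`) are hypothetical, not Bridge ACE's actual API; they just show the three-way branch and the default-to-safest behavior.

```python
# Hypothetical approval-gate dispatcher; names are illustrative,
# not Bridge ACE's real API.
from enum import Enum
from collections import deque

class ApprovalPolicy(Enum):
    AUTO = 'auto'
    LOG = 'log'
    REQUIRE_APPROVAL = 'require'

# Example tool classification (assumed, per the categories above).
POLICIES = {
    'read_file': ApprovalPolicy.AUTO,
    'web_search': ApprovalPolicy.LOG,
    'send_email': ApprovalPolicy.REQUIRE_APPROVAL,
}

audit_log = []              # LOG actions land here
approval_queue = deque()    # REQUIRE_APPROVAL actions wait here for a human

def dispatch(agent, tool, **kwargs):
    # Unknown tools fall back to the safest policy.
    policy = POLICIES.get(tool, ApprovalPolicy.REQUIRE_APPROVAL)
    if policy is ApprovalPolicy.REQUIRE_APPROVAL:
        approval_queue.append((agent, tool, kwargs))  # surfaces in the review UI
        return 'pending'
    if policy is ApprovalPolicy.LOG:
        audit_log.append((agent, tool, kwargs))       # executed, but recorded
    return 'executed'
```

Note the fallback: a tool with no classification is treated as REQUIRE_APPROVAL, so a newly added tool cannot silently run unsupervised.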

Why This Matters

Most AI agent frameworks have a binary choice: either the agent can do everything (dangerous) or it needs approval for everything (unusable). Bridge ACE's three-tier system finds the sweet spot:

  • Agents work autonomously on safe tasks (reading, analyzing, coordinating)
  • Agents pause and wait for approval on risky tasks (sending, purchasing, deploying)
  • Everything is logged for audit trails

Combined with Scope Locks

Approval Gates handle external actions. Scope Locks handle internal file access. Together they form a complete governance layer:

  • Agent A cannot edit Agent B's files (Scope Lock)
  • No agent can send an email without approval (Approval Gate)
  • Every action is logged with timestamps and agent identity
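The file-access half of that layer can be pictured as a per-agent path allowlist. This is a sketch under assumptions, not how Bridge ACE actually implements Scope Locks; `SCOPES` and `can_edit` are invented names.

```python
# Illustrative only: a scope lock modeled as a path-prefix allowlist
# per agent (assumed design, not Bridge ACE's implementation).
from pathlib import PurePosixPath

SCOPES = {
    'agent_a': ['/workspace/agent_a'],
    'agent_b': ['/workspace/agent_b'],
}

def can_edit(agent, path):
    # An agent may edit a file only inside one of its own scope roots.
    prefixes = SCOPES.get(agent, [])
    p = PurePosixPath(path)
    return any(p.is_relative_to(prefix) for prefix in prefixes)
```

Using `is_relative_to` rather than a string prefix check avoids false matches like `/workspace/agent_ab` passing for `agent_a`.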

This makes it safe to give agents powerful tools. The tools exist. The guardrails exist. The human stays in control.

Implementation

# approval_gate.py
from enum import Enum

class ApprovalPolicy(Enum):
    AUTO = 'auto'                 # Safe — execute immediately
    LOG = 'log'                   # Low risk — execute and log
    REQUIRE_APPROVAL = 'require'  # Risky — queue for human

The classification is configurable per agent via the guardrails system. You can make a trusted agent more autonomous and a new agent more restricted.
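One way to picture per-agent configuration is a defaults table plus per-agent overrides. The shape below is assumed for illustration; Bridge ACE's actual guardrails config format may differ.

```python
# Hypothetical per-agent policy overrides (illustrative, not the
# real Bridge ACE config schema).
DEFAULTS = {'send_email': 'require', 'web_search': 'log'}

OVERRIDES = {
    'trusted_agent': {'web_search': 'auto'},     # earned more autonomy
    'new_agent':     {'web_search': 'require'},  # starts more restricted
}

def policy_for(agent, tool):
    # Agent-specific override wins; otherwise the default; otherwise
    # fall back to the safest policy.
    return OVERRIDES.get(agent, {}).get(tool, DEFAULTS.get(tool, 'require'))
```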

Open Source

git clone https://github.com/Luanace-lab/bridge-ide.git
cd bridge-ide && ./install.sh

Apache 2.0. Self-hosted. Your agents, your rules.

GitHub: github.com/Luanace-lab/bridge-ide
