DEV Community

AgentBouncr

Getting Started with AgentBouncr: Add Governance to Your AI Agent in 5 Minutes

If you're building AI agents that call tools — reading databases, sending emails, approving payments, modifying files — you need governance. Not "eventually." Now.

AgentBouncr is a governance layer that sits between your AI agent and its tools. It evaluates every tool call against a policy before execution, records every decision in a tamper-proof audit trail, and gives you a deterministic kill switch when things go wrong.

This tutorial walks you through the full setup: install, policy, evaluation, persistence, audit verification, and framework integration. Everything runs locally. No accounts, no cloud dependencies.

Prerequisites

  • Node.js 18+
  • npm or pnpm
  • Any AI agent that calls tools (or follow along to see how it works)

Step 1: Install

npm install @agentbouncr/core @agentbouncr/sqlite

Two packages:

  • @agentbouncr/core — The governance engine: policy evaluation, audit trail, kill switch, event system, injection detection.
  • @agentbouncr/sqlite — SQLite persistence adapter so your audit trail survives restarts.

Step 2: Initialize with Persistence

import { GovernanceMiddleware } from '@agentbouncr/core';
import { SqliteDatabaseAdapter } from '@agentbouncr/sqlite';
import pino from 'pino';

const db = new SqliteDatabaseAdapter(
  pino({ level: 'info' }),
  './governance.db'
);
await db.runMigrations();

const governance = new GovernanceMiddleware({ db });

This creates a SQLite database at ./governance.db with all necessary tables. Migrations run automatically.

Step 3: Register an Agent

Every agent needs a registered identity with an explicit tool allowlist:

await governance.registerAgent({
  agentId: 'claims-agent',
  name: 'Claims Processor',
  allowedTools: ['search_claims', 'approve_payment', 'send_email'],
});

The agent can only use these three tools. Any attempt to call a tool not on the allowlist will be denied — even before the policy is evaluated.
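Conceptually, this first gate is just a set-membership test that runs before any policy rule is consulted. Here is a minimal sketch of the idea; the `checkAllowlist` function and `Agent` type are illustrative, not AgentBouncr's actual internals:

```typescript
// Hypothetical sketch of an allowlist gate, not AgentBouncr's real implementation.
type Agent = { agentId: string; allowedTools: Set<string> };

function checkAllowlist(agent: Agent, tool: string): { allowed: boolean; reason?: string } {
  if (!agent.allowedTools.has(tool)) {
    // Denied before any policy rule is even evaluated
    return { allowed: false, reason: `Tool "${tool}" is not on the allowlist for ${agent.agentId}` };
  }
  return { allowed: true };
}

const claimsAgent: Agent = {
  agentId: 'claims-agent',
  allowedTools: new Set(['search_claims', 'approve_payment', 'send_email']),
};
```

Because the allowlist is checked first, a tool like file_write is rejected for this agent even if a policy rule would otherwise allow it.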

Step 4: Define a Policy

Policies are declarative JSON. No custom code needed:

governance.setPolicy({
  name: 'claims-policy',
  version: '1.0',
  rules: [
    {
      tool: 'approve_payment',
      effect: 'deny',
      condition: { amount: { gt: 5000 } },
      reason: 'Payments over 5000 require manual approval',
      requireApproval: true
    },
    {
      tool: 'send_email',
      effect: 'allow',
      rateLimit: { maxPerMinute: 10 }
    },
    {
      tool: 'file_write',
      effect: 'deny',
      condition: { path: { startsWith: '/etc/' } }
    },
    { tool: '*', effect: 'allow' }
  ],
  createdAt: new Date().toISOString(),
  updatedAt: new Date().toISOString(),
});

Let's break down what this policy does:

  1. Payments over $5,000 → denied, requires human approval
  2. Email sending → allowed, but rate-limited to 10 per minute
  3. File writes to /etc/ → denied (system directory protection)
  4. Everything else → allowed (default rule)

Rules are evaluated top-to-bottom, and the first matching rule wins. Deny-before-allow is the design principle: deny rules are checked first, so if a deny rule matches, it takes precedence no matter where it appears in the list.
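The deny-before-allow ordering can be sketched in a few lines. This is an illustrative model of the matching logic (the `Rule` type and `firstMatch` helper are my own names, not the engine's):

```typescript
// Illustrative sketch of deny-before-allow, first-match rule evaluation.
type Rule = { tool: string; effect: 'allow' | 'deny'; reason?: string };

function firstMatch(rules: Rule[], tool: string): Rule | undefined {
  const matches = (r: Rule) => r.tool === tool || r.tool === '*';
  // Deny rules are consulted first, so a matching deny wins even if an
  // allow rule (like a trailing wildcard) appears earlier in the list.
  return rules.find((r) => r.effect === 'deny' && matches(r))
      ?? rules.find((r) => r.effect === 'allow' && matches(r));
}
```

Under this model, a `{ tool: '*', effect: 'allow' }` default at the end never shadows a specific deny rule.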

Condition Operators

AgentBouncr supports 11 condition operators:

| Operator | Example | Meaning |
| --- | --- | --- |
| equals | { status: { equals: "active" } } | Exact match |
| notEquals | { role: { notEquals: "admin" } } | Not equal |
| gt | { amount: { gt: 5000 } } | Greater than |
| lt | { amount: { lt: 100 } } | Less than |
| gte | { score: { gte: 0.8 } } | Greater than or equal |
| lte | { retries: { lte: 3 } } | Less than or equal |
| startsWith | { path: { startsWith: "/etc/" } } | String prefix |
| endsWith | { file: { endsWith: ".exe" } } | String suffix |
| contains | { query: { contains: "DROP TABLE" } } | Substring match |
| in | { env: { in: ["prod", "staging"] } } | Value in set |
| matches | { email: { matches: ".*@company\\.com" } } | Regex match |
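To make the semantics concrete, here is a small sketch of how a condition object could be matched against call parameters. It covers a subset of the operators and is purely illustrative of the shape, not AgentBouncr's evaluator:

```typescript
// Sketch of applying condition operators to tool-call params (subset, illustrative only).
type Condition = Record<string, Record<string, unknown>>;

function matchesCondition(cond: Condition, params: Record<string, unknown>): boolean {
  // Every field and every operator on that field must match.
  return Object.entries(cond).every(([field, ops]) =>
    Object.entries(ops).every(([op, expected]) => {
      const actual = params[field];
      switch (op) {
        case 'equals':     return actual === expected;
        case 'gt':         return typeof actual === 'number' && actual > (expected as number);
        case 'lt':         return typeof actual === 'number' && actual < (expected as number);
        case 'startsWith': return typeof actual === 'string' && actual.startsWith(expected as string);
        case 'contains':   return typeof actual === 'string' && actual.includes(expected as string);
        case 'in':         return Array.isArray(expected) && expected.includes(actual);
        case 'matches':    return typeof actual === 'string' && new RegExp(expected as string).test(actual);
        default:           return false; // unknown operator: fail closed
      }
    })
  );
}
```

Note the fail-closed default: an unrecognized operator never silently matches.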

Step 5: Evaluate Tool Calls

Now integrate the governance check into your agent's tool execution flow:

// Before executing any tool call, evaluate it
const result = await governance.evaluate({
  agentId: 'claims-agent',
  tool: 'approve_payment',
  params: { amount: 12000, claimId: 'CLM-4821' },
});

if (result.allowed) {
  // Proceed with tool execution
  await executeToolCall('approve_payment', { amount: 12000 });
} else {
  console.log(`Denied: ${result.reason}`);
  // Handle denial — notify user, request approval, etc.
}

console.log(result.traceId);  // "00-a1b2c3..." (W3C Trace Context)

Output:

Denied: Payments over 5000 require manual approval

The evaluation is deterministic — no LLM is involved in the permission decision. Pure logic. The same input always produces the same result.

Evaluation latency: under 5 milliseconds. The governance check adds negligible overhead to your agent's workflow.

A Few More Examples

// This will be allowed (within policy)
const r1 = await governance.evaluate({
  agentId: 'claims-agent',
  tool: 'search_claims',
  params: { customerId: 'CUST-123' },
});
console.log(r1.allowed); // true

// This will be denied (system directory)
const r2 = await governance.evaluate({
  agentId: 'claims-agent',
  tool: 'file_write',
  params: { path: '/etc/passwd', content: 'hacked' },
});
console.log(r2.allowed); // false

Step 6: Verify the Audit Trail

Every evaluation is automatically recorded. Verify the integrity of the entire trail:

const verification = await db.verifyAuditChain();
console.log(verification.valid); // true

The audit trail uses a SHA-256 hash chain. Each entry includes the cryptographic hash of the previous entry. If any record is modified, deleted, or reordered after the fact, the chain breaks and valid returns false.

Each audit entry includes:

  • Trace ID — W3C Trace Context format, compatible with OpenTelemetry
  • Agent ID — Which agent made the request
  • Tool — Which tool was requested
  • Parameters — What parameters were passed
  • Decision — Allowed or denied
  • Matched rule — Which policy rule triggered the decision
  • Duration — How long the evaluation took
  • Timestamp — When it happened
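The hash-chain idea itself is easy to reproduce. Here is a self-contained sketch using Node's built-in crypto module; the `Entry` shape and function names are illustrative and do not mirror AgentBouncr's actual schema:

```typescript
import { createHash } from 'crypto';

// Illustrative SHA-256 hash chain: each entry commits to the previous entry's hash.
type Entry = { payload: string; prevHash: string; hash: string };

function sha256(s: string): string {
  return createHash('sha256').update(s).digest('hex');
}

function append(chain: Entry[], payload: string): Entry[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : 'GENESIS';
  return [...chain, { payload, prevHash, hash: sha256(prevHash + payload) }];
}

function verifyChain(chain: Entry[]): boolean {
  // Every entry must link to its predecessor and have a hash consistent with its payload.
  return chain.every((e, i) => {
    const expectedPrev = i === 0 ? 'GENESIS' : chain[i - 1].hash;
    return e.prevHash === expectedPrev && e.hash === sha256(e.prevHash + e.payload);
  });
}
```

Modifying, deleting, or reordering any entry changes the hash its successor committed to, so verification fails from that point on.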

Step 7: Kill Switch

When something goes wrong — and in production, something always eventually goes wrong — the kill switch stops everything:

governance.activateKillSwitch();

// All subsequent evaluations return denied
const result = await governance.evaluate({
  agentId: 'claims-agent',
  tool: 'search_claims',
  params: { customerId: 'CUST-123' },
});
console.log(result.allowed); // false — kill switch is active

The kill switch is deterministic. No LLM involved, no network calls, no delays. When active, every tool call is denied synchronously in the evaluate path.
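Mechanically, a deterministic kill switch is little more than a boolean flag consulted at the top of the evaluate path. A minimal sketch of the pattern (class and method names are mine, not the library's):

```typescript
// Illustrative kill-switch gate: a synchronous flag check, no I/O, no LLM.
class KillSwitch {
  private active = false;

  activate(): void { this.active = true; }
  deactivate(): void { this.active = false; }

  // Checked before any rule matching; denies everything while active.
  check(): { allowed: boolean; reason?: string } {
    return this.active
      ? { allowed: false, reason: 'Kill switch is active' }
      : { allowed: true };
  }
}
```

Because the check is a plain in-memory read, activation takes effect on the very next evaluation.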

Deactivate when the situation is resolved:

governance.deactivateKillSwitch();
// Agents resume normal operation

Step 8: Event System

Subscribe to governance events for monitoring and alerting:

governance.on('tool_call.denied', (event) => {
  console.log(`BLOCKED: ${event.agentId} tried ${event.tool}`);
  // Send to Slack, PagerDuty, your SIEM, etc.
});

governance.on('killswitch.activated', (event) => {
  console.log('KILL SWITCH ACTIVATED');
});

AgentBouncr emits 20 event types across five categories: tool calls, approvals, agent lifecycle, security, and policies.

Integrating with Vercel AI SDK

If you're using the Vercel AI SDK, there's a built-in wrapper:

import { wrapToolsWithGovernance } from '@agentbouncr/core';

const governedTools = wrapToolsWithGovernance(myVercelTools, {
  agentId: 'my-agent',
  governance,
});

// Use governedTools instead of myVercelTools
// Denied calls throw GovernanceError instead of executing

This wraps each tool's execute() function with a governance check. No manual evaluate() calls needed.
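The wrap-before-execute pattern generalizes to any framework. Here is a generic sketch of it; the `Tool`, `Evaluator`, and `wrapTool` names are illustrative, standing in for whatever shapes your framework uses:

```typescript
// Generic sketch of wrapping a tool's execute() with a governance check.
type Tool = { execute: (params: Record<string, unknown>) => Promise<unknown> };
type Evaluator = (tool: string, params: Record<string, unknown>) => Promise<{ allowed: boolean; reason?: string }>;

class GovernanceError extends Error {}

function wrapTool(name: string, tool: Tool, evaluate: Evaluator): Tool {
  return {
    async execute(params) {
      // Evaluate first; only run the underlying tool if the call is allowed.
      const decision = await evaluate(name, params);
      if (!decision.allowed) {
        throw new GovernanceError(decision.reason ?? 'Denied by governance policy');
      }
      return tool.execute(params);
    },
  };
}
```

The agent keeps calling what looks like an ordinary tool; denials surface as thrown errors rather than executed side effects.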

Integrating via HTTP API

If you're using n8n, Make, Zapier, or any platform that can make HTTP calls, the same governance is available via the Enterprise HTTP API:

curl -X POST https://your-instance.agentbouncr.com/api/evaluate \
  -H "Authorization: Bearer gov_your_token" \
  -H "Content-Type: application/json" \
  -d '{
    "agentId": "claims-agent",
    "tool": "approve_payment",
    "params": { "amount": 12000 }
  }'

Same policies, same audit trail, same dashboard. The CISO sees all agents — SDK and HTTP — in one view.

Using the CLI

AgentBouncr includes a CLI for agent management and audit operations:

npm install -g @agentbouncr/cli

# List all registered agents
governance agent list

# Start an agent
governance agent start claims-agent

# Verify audit trail integrity
governance audit verify

# Import tools from an MCP manifest
governance import --mcp ./mcp-manifest.json

What You've Built

In under 5 minutes, you added a complete bounded agency layer to your AI agent:

  • Policy engine with 11 condition operators, rate limits, and approval gates
  • Tamper-proof audit trail with SHA-256 hash chain verification
  • Deterministic kill switch with sub-millisecond activation
  • Event system with 20 event types for monitoring and alerting
  • W3C Trace Context for correlation with your observability stack

The governance layer is framework-agnostic. It works with LangChain, Vercel AI SDK, CrewAI, OpenAI Agents SDK, n8n, or any tool-calling agent.

Next Steps

  • GitHub Repository — Source code, examples, and documentation
  • agentbouncr.com — Enterprise Dashboard with PostgreSQL, multi-tenancy, approval workflows, compliance reporting, and visual policy management
  • MCP Import Guide — Import tools from Model Context Protocol manifests

AgentBouncr is source-available under the Elastic License 2.0 — free to use, modify, and distribute.
