LangChain makes it incredibly easy to build AI agents that take real-world actions. Database queries, API calls, file operations, infrastructure management — your agent can do it all with a few lines of code.
That's the problem.
When your LangChain agent has tools that can modify production databases, send emails, or scale infrastructure, you need more than prompt engineering to keep it safe. You need execution governance.
This guide shows you how to wrap LangChain tools with Vienna OS execution warrants, so every high-risk action gets proper authorization before it runs.
The Problem with Uncontrolled LangChain Tools
Here's a typical LangChain tool:
```python
from langchain.tools import tool

@tool
def scale_kubernetes(replicas: int, deployment: str) -> str:
    """Scale a Kubernetes deployment to the specified number of replicas."""
    k8s_client.scale(deployment, replicas=replicas)
    return f"Scaled {deployment} to {replicas} replicas"
```
This works great — until your agent decides to scale to 500 replicas at 3 AM. There's no approval, no risk assessment, no audit trail. The agent just... does it.
Adding Vienna OS Governance
Vienna OS wraps your existing tools with execution control. The agent still decides what to do, but Vienna OS controls whether it's allowed.
Step 1: Install the SDK
```shell
pip install vienna-sdk
```
Step 2: Create a Governed Tool Wrapper
```python
from langchain.tools import tool
from vienna_sdk import ViennaClient

vienna = ViennaClient(
    endpoint="https://your-instance.regulator.ai",
    api_key="your-api-key"
)

@tool
def scale_kubernetes(replicas: int, deployment: str) -> str:
    """Scale a Kubernetes deployment. Requires governance approval for >10 replicas."""
    # Submit intent to Vienna OS
    warrant = vienna.request_warrant(
        intent="scale_kubernetes_deployment",
        resource=deployment,
        payload={
            "target_replicas": replicas,
            "current_replicas": k8s_client.get_replicas(deployment),
            "cost_impact": estimate_cost(replicas)
        }
    )

    if warrant.status == "approved":
        k8s_client.scale(deployment, replicas=replicas)
        vienna.confirm_execution(warrant.id)
        return f"✅ Scaled {deployment} to {replicas} replicas (warrant: {warrant.id})"
    elif warrant.status == "pending":
        return f"⏳ Scaling request pending approval (warrant: {warrant.id})"
    else:
        return f"❌ Scaling request denied: {warrant.denial_reason}"
```
Step 3: Define Risk Policies
```yaml
# vienna-policies.yaml
policies:
  - name: "kubernetes-scaling"
    match:
      intent: "scale_kubernetes_deployment"
    rules:
      - condition: "payload.target_replicas <= 10"
        risk_tier: "T0"  # Auto-approve
      - condition: "payload.target_replicas <= 50"
        risk_tier: "T1"  # Single DevOps approval
      - condition: "payload.target_replicas > 50"
        risk_tier: "T2"  # Multi-party approval
      - condition: "payload.cost_impact > 10000"
        risk_tier: "T3"  # Executive approval
```
Now your agent can scale to 5 replicas instantly (T0), but scaling to 500 requires multiple approvals (T2/T3).
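To make the tier logic concrete, here is a minimal local sketch of how an engine might evaluate these rules. This function is hypothetical, not part of the Vienna SDK; it assumes replica thresholds are matched in order and that the cost rule escalates regardless of replica count (the YAML above doesn't specify precedence, so that's an assumption).

```python
def risk_tier(target_replicas: int, cost_impact: float) -> str:
    """Illustrative mirror of the vienna-policies.yaml rules above.

    Assumes first-match semantics for the replica thresholds, with the
    cost rule acting as an override that escalates to T3.
    """
    if target_replicas <= 10:
        tier = "T0"  # Auto-approve
    elif target_replicas <= 50:
        tier = "T1"  # Single DevOps approval
    else:
        tier = "T2"  # Multi-party approval
    if cost_impact > 10000:
        tier = "T3"  # Executive approval, regardless of replica count
    return tier
```

Under these assumptions, `risk_tier(5, 100)` auto-approves at T0, while `risk_tier(500, 20000)` escalates to T3.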
Governing a Complete LangChain Agent
Here's a full example with multiple governed tools:
```python
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_openai import ChatOpenAI
from vienna_sdk import ViennaClient, governed_tool

vienna = ViennaClient(endpoint="https://your-instance.regulator.ai")

@governed_tool(vienna, intent="query_database", risk_tier="T0")
def query_database(sql: str) -> str:
    """Execute a read-only database query."""
    return db.execute(sql)

@governed_tool(vienna, intent="modify_database", risk_tier="T1")
def modify_database(sql: str) -> str:
    """Execute a database write operation. Requires approval."""
    return db.execute(sql)

@governed_tool(vienna, intent="send_email", risk_tier="T2")
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to a customer. Requires multi-party approval."""
    return email_client.send(to=to, subject=subject, body=body)

@governed_tool(vienna, intent="delete_records", risk_tier="T3")
def delete_records(table: str, condition: str) -> str:
    """Delete database records. Requires executive approval."""
    return db.execute(f"DELETE FROM {table} WHERE {condition}")

# Create agent with governed tools
llm = ChatOpenAI(model="gpt-4")
tools = [query_database, modify_database, send_email, delete_records]
agent = create_openai_functions_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)

# Agent runs normally — Vienna OS handles governance transparently
result = executor.invoke({"input": "Clean up inactive users and notify them"})
```
The agent thinks and plans normally. But when it tries to delete records, Vienna OS requires executive approval. When it tries to send emails, it needs multi-party sign-off. Reads are instant.
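For intuition about what a decorator like `governed_tool` might do under the hood, here is a rough sketch. It reuses the `request_warrant` / `confirm_execution` calls shown earlier, but the implementation itself is illustrative, not the SDK's actual code, and the payload shape is an assumption.

```python
import functools

def governed_tool(client, intent: str, risk_tier: str = "T0"):
    """Illustrative sketch: request a warrant before each call, and only
    run the wrapped function if the warrant comes back approved."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warrant = client.request_warrant(
                intent=intent,
                resource=func.__name__,
                # Hypothetical payload shape — the real SDK may differ
                payload={"args": args, "kwargs": kwargs, "declared_tier": risk_tier},
            )
            if warrant.status == "approved":
                result = func(*args, **kwargs)
                client.confirm_execution(warrant.id)
                return result
            if warrant.status == "pending":
                return f"Pending approval (warrant: {warrant.id})"
            return f"Denied: {warrant.denial_reason}"
        return wrapper
    return decorator
```

The key design point is that denial and deferral return strings rather than raising, so the agent receives the outcome as a normal tool observation and can plan around it.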
Real-Time Monitoring
Vienna OS provides SSE streaming so you can watch your agents in real time:
```python
# Monitor all agent activity
for event in vienna.stream_events():
    if event.risk_tier >= "T1":
        print(f"⚠️ {event.agent_id}: {event.intent} ({event.risk_tier})")
        print(f"   Status: {event.status}")
        print(f"   Warrant: {event.warrant_id}")
```
The Audit Trail
Every action gets a complete evidence chain:
```json
{
  "warrant_id": "w_2026_03_28_a7b9c1d3",
  "agent_id": "langchain-sre-agent",
  "intent": "scale_kubernetes_deployment",
  "risk_tier": "T2",
  "submitted_at": "2026-03-28T15:30:00Z",
  "approved_at": "2026-03-28T15:32:15Z",
  "approved_by": ["alice@company.com", "bob@company.com"],
  "executed_at": "2026-03-28T15:32:16Z",
  "execution_result": "success",
  "signature": "8f2e1a9b4c7d3e6f...",
  "parameters": {
    "deployment": "api-server",
    "target_replicas": 50,
    "previous_replicas": 12
  }
}
```
SOC 2 auditors, compliance officers, and insurance companies love this.
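A practical consequence of a signed evidence chain is that records can be re-verified offline. Here is a minimal sketch assuming an HMAC-SHA256 signature over the record's canonical JSON; Vienna OS's actual signing scheme is not documented here, so treat this as an illustration of the verification pattern, not the product's implementation.

```python
import hashlib
import hmac
import json

def sign_record(record: dict, key: bytes) -> str:
    """Sign the canonical JSON form of an audit record (illustrative scheme)."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()

def verify_record(record: dict, signature: str, key: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_record(record, key), signature)
```

Because the signature covers the canonical JSON, changing any field — a timestamp, an approver, a parameter — invalidates the record, which is what makes the trail tamper-evident.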
Getting Started
- Try the demo: regulator.ai/try
- Install the Python SDK: `pip install vienna-sdk`
- Read the docs: regulator.ai/docs
- Star us on GitHub: github.com/risk-ai/regulator.ai
Your LangChain agents are powerful. Make sure they're governed too.
Originally published at regulator.ai. Vienna OS is the execution control layer for autonomous AI systems — cryptographic warrants, risk tiering, and immutable audit trails. Try it free.