DEV Community

Ritesh Kumar

How to add a governance layer to your LangChain agent in 3 lines

Every tutorial shows you how to build
an AI agent.

Nobody shows you what happens when it
does something it shouldn't.

Your agent approves a payment it wasn't
supposed to. It calls an API with the
wrong parameters. It executes an action
outside its mandate.

No audit trail. No explanation. No way
to prove what happened.

This post shows you how to fix that
in 3 lines of code.

The problem

When your AI agent takes a real-world
action — payment, approval, data export
— two things need to happen:

  1. Someone needs to decide if the action is allowed
  2. That decision needs to be recorded permanently

Most agent frameworks give you neither.
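To make those two requirements concrete, here is a minimal, framework-agnostic sketch of what a governance layer does. Everything in it is illustrative (the function, the toy rule, the in-memory log are all stand-ins, not any SDK's API): decide first, record the decision before the action runs.

```python
import datetime
import json
import uuid

AUDIT_LOG = []  # stand-in for permanent, append-only storage


def evaluate_action(action, context):
    # Step 1: decide. A toy rule -- payments over 1000 go to a human.
    if action == "payment.create" and context.get("amount", 0) > 1000:
        status = "pending"
    else:
        status = "approved"
    decision = {
        "decision_id": str(uuid.uuid4()),
        "action": action,
        "status": status,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    # Step 2: record, before the action ever executes.
    AUDIT_LOG.append(json.dumps(decision))
    return decision


decision = evaluate_action("payment.create", {"amount": 5000})
print(decision["status"])  # pending
```

A real enforcement layer replaces the toy rule with versioned policies and the list with durable storage, but the shape, decide then record, is the same.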

The solution

SOVIGL is a policy enforcement layer
that sits between your agent and the
action it wants to take.

One call. Three outcomes:

  • approved — action executes
  • pending — held for human approval
  • blocked — stopped permanently

Every outcome is permanently recorded
with a decision ID, a plain-English
explanation, a risk score, and the
policy version.

Install

```shell
pip install sovigl
```

Basic usage

```python
import sovigl

sovigl.configure(
    api_key="your-key",
    org_id="your-org",
)

decision = sovigl.evaluate(
    action="payment.create",
    context={
        "amount": 5000,
        "role": "employee",
        "user_id": "user_123",
        "agent_id": "invoice_bot",
    },
)

if decision.approved:
    execute_payment()
elif decision.pending:
    route_to_human_approver()
elif decision.blocked:
    log_and_stop()
```

What you get back

Every decision returns:

```python
decision.status                # approved / pending / blocked
decision.decision_id           # permanent immutable ID
decision.reason                # why this decision was made
decision.explanation_registry  # full explainability
decision.risk_assessment       # risk score, 0.0-1.0
decision.policy_version        # which policy was active
decision.approval_id           # human approval reference
```
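If you want those fields in your own logs, one pattern is to flatten the decision into a one-line JSON record. The `Decision` dataclass below is a stand-in that mirrors the fields listed above, purely for illustration; the real object comes back from the SDK call.

```python
import dataclasses
import json
from typing import Optional


@dataclasses.dataclass
class Decision:
    # Stand-in mirroring the fields above; not the SDK's own class.
    status: str
    decision_id: str
    reason: str
    risk_assessment: float
    policy_version: str
    approval_id: Optional[str] = None


def to_audit_record(d: Decision) -> str:
    # One JSON line per decision: easy to ship to append-only storage.
    return json.dumps(dataclasses.asdict(d), sort_keys=True)


record = to_audit_record(Decision(
    status="pending",
    decision_id="dec_001",
    reason="amount exceeds role limit",
    risk_assessment=0.72,
    policy_version="v3",
    approval_id="apr_042",
))
print(record)
```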

LangChain integration

```python
import sovigl

sovigl.configure(api_key="your-key", org_id="your-org")


def governed_payment(amount: float, role: str) -> str:
    decision = sovigl.evaluate(
        action="payment.create",
        context={"amount": amount, "role": role},
    )

    if decision.approved:
        return f"Payment approved. Audit ID: {decision.decision_id}"
    elif decision.pending:
        return f"Payment held for approval. ID: {decision.approval_id}"
    else:
        return f"Payment blocked. Reason: {decision.reason}"
```

Use as a LangChain tool

```python
from langchain.tools import tool


@tool
def process_payment(amount: float, role: str) -> str:
    """Process a payment with governance."""
    return governed_payment(amount, role)
```
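One design point worth handling in any tool like this: if the governance call itself fails (network error, timeout), you usually want to fail closed rather than let the payment through. Here is a hedged sketch of that wrapper; `flaky_evaluate` is a hypothetical stand-in for whatever policy client you actually call.

```python
def fail_closed(evaluate, action, context):
    """Block by default when the policy check itself fails."""
    try:
        return evaluate(action=action, context=context)
    except Exception as exc:
        # An unreachable policy engine means no payment goes out.
        return {"status": "blocked", "reason": f"governance unavailable: {exc}"}


def flaky_evaluate(action, context):
    # Simulates a network failure reaching the policy endpoint.
    raise ConnectionError("policy endpoint timed out")


decision = fail_closed(flaky_evaluate, "payment.create", {"amount": 5000})
print(decision["status"])  # blocked
```

Failing open is occasionally the right call for low-risk actions, but for payments the safe default is to block and surface the reason.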

Why this matters for compliance

Every decision automatically produces
evidence for:

  • EU AI Act Art.12 — audit trail
  • EU AI Act Art.13 — explainability
  • EU AI Act Art.14 — human oversight
  • NIST AI RMF — govern and measure
  • MAS FEAT — accountability
  • RBI FREE-AI — REC21 + REC24

Not claims. Structured evidence in every
API response.

Try it now

Demo mode works without an API key:

```python
import sovigl

decision = sovigl.evaluate(
    action="payment.create",
    context={"amount": 5000},
)

print(decision.status)
print(decision.decision_id)
print(decision.reason)
```

Live dashboard — no signup:
https://web-production-e334b.up.railway.app/dashboard

GitHub:
https://github.com/riteshkumar10000/sovigl-sdk

What's next

If you're building agents that take
real-world actions — payments, approvals,
data operations — SOVIGL gives you
governance without rebuilding your
agent architecture.

Free during beta. No credit card.

Questions in the comments — happy to help.