Your AI Agents Are Running Unsupervised
The EU AI Act is enforceable. SOC 2 auditors are asking about your agent workflows. And your LangChain agents are making API calls with zero audit trail.
This is the problem asqav solves. It's a Python SDK that adds governance to AI agents - cryptographic audit trails, policy enforcement, content scanning, and compliance reports. One pip install, a few lines of code, and every agent action gets a quantum-safe signature you can verify later.
The simplest version
pip install asqav
import asqav
asqav.init(api_key="sk_...")
agent = asqav.Agent.create("my-agent")
sig = agent.sign("api:call", {"model": "gpt-4", "tokens": 1200})
# sig.verification_url -> publicly verifiable proof
Every call to agent.sign() creates an ML-DSA signed record on asqav's servers. You get back a verification URL anyone can check.
What's new
For developers
Decorators that sign function calls automatically. Async support. A CLI for managing agents. And integrations for the frameworks you're already using.
@asqav.sign
def query_database(sql: str) -> list:
return db.execute(sql)
# LangChain
from asqav.extras.langchain import AsqavCallbackHandler
chain.invoke(input, config={"callbacks": [AsqavCallbackHandler()]})
# Also: CrewAI, LiteLLM, Haystack, OpenAI Agents SDK
For security teams
Content scanning catches PII, prompt injections, secrets, and toxic content before they hit production. Behavioral monitoring detects drift from baselines. Rate limiting per agent.
# Content scanning runs inline on sign_action
sig = agent.sign("chat:response", {
"output": user_facing_text # auto-scanned for PII, secrets, injections
})
# Quarantine a misbehaving agent instantly
# POST /agents/{id}/quarantine
For compliance
Generate EU AI Act Article 12/14 reports, DORA ICT risk reports, and SOC 2 Trust Services reports. PDF export. Scheduled auto-generation. Evidence collection built in.
# Generate a compliance report via API
# POST /compliance-reports
{
"framework": "eu_ai_act_article_12",
"format": "pdf"
}
# Download: GET /compliance-reports/{id}/download
Under the hood
All signatures use ML-DSA-65 (NIST FIPS 204) - the post-quantum standard. Audit trails are anchored to Bitcoin via OpenTimestamps. Multi-party signing requires multiple entities to approve high-risk actions. This isn't observability. It's cryptographic proof.
Pricing
Free - 1,000 signatures/month, 3 agents, 3 policies. No credit card.
Pro ($29/mo) - 50K signatures, 25 agents, observability, cost attribution.
Business ($99/mo) - Unlimited signatures, content scanning, behavioral monitoring, compliance reports, multi-party signing.
Get started
pip install asqav
- GitHub: github.com/jagmarques/asqav
- Docs: asqav.com/docs
- Dashboard: asqav.com
Open source SDK, MIT licensed. The governance your AI agents are missing.
Top comments (0)