OWASP has released its Top 10 for Agentic AI Systems. Here's a breakdown of the risks that matter most and how to address each one.
## The Top Risks
- Excessive Agency - agents doing more than intended
- Supply Chain Vulnerabilities - compromised tools and plugins
- Insecure Output Handling - agents producing unsafe outputs
- Insufficient Logging - no audit trail of agent actions
- Broken Access Control - agents accessing unauthorized resources
## Addressing These with Governance

Most of these risks reduce to three gaps: no visibility, no control, and no proof.
### Policy enforcement (risks 1, 5)

```python
from asqav import Asqav

client = Asqav(api_key="sk_...")

# Block excessive actions
client.create_policy(
    name="limit-external-calls",
    action_pattern="api:external:*",
    action="block_and_alert",
    severity="high",
)
```
### Audit trails (risk 4)

```python
agent = client.create_agent(name="my-agent")
sig = agent.sign("data:read:users", {"query": "active"})
# Every action now has a quantum-safe cryptographic record
```
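The SDK's record format isn't shown here, so as an illustration of why a cryptographic record matters, here is a classical stand-in: a hash-chained, tamper-evident audit log. It uses plain SHA-256, not the quantum-safe scheme the SDK advertises, and the `AuditLog` class is hypothetical:

```python
import hashlib
import json

class AuditLog:
    """Append-only, hash-chained audit trail: each entry commits to the
    previous head, so tampering with any record breaks the chain."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self.head = self.GENESIS

    def record(self, action: str, params: dict) -> str:
        """Append an action record and return the new chain head."""
        entry = {"action": action, "params": params, "prev": self.head}
        payload = json.dumps(entry, sort_keys=True).encode()
        self.head = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        return self.head

    def verify(self) -> bool:
        """Recompute the chain; any edited entry invalidates it."""
        prev = self.GENESIS
        for entry in self.entries:
            if entry["prev"] != prev:
                return False
            payload = json.dumps(entry, sort_keys=True).encode()
            prev = hashlib.sha256(payload).hexdigest()
        return prev == self.head
```

Editing any past entry changes its hash, which breaks the `prev` link of the next entry, so `verify()` fails.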
### Multi-party authorization (risks 1, 5)

Critical actions require human approval before execution.
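The pattern behind this is a quorum gate: the action is held until enough distinct humans sign off. A minimal sketch (the `ApprovalGate` class is a hypothetical illustration, not part of the Asqav SDK):

```python
class ApprovalGate:
    """Hold a critical action until a quorum of distinct approvers signs off."""

    def __init__(self, action: str, required: int = 2):
        self.action = action
        self.required = required
        self.approvers: set[str] = set()  # set => duplicates don't count twice

    def approve(self, approver: str) -> None:
        """Record one approver's sign-off."""
        self.approvers.add(approver)

    def can_execute(self) -> bool:
        """True once the quorum is met."""
        return len(self.approvers) >= self.required
```

The agent calls `can_execute()` before performing the action; until the quorum is met, the action stays queued.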
## CI/CD Scanning

The asqav-compliance scanner checks your codebase for these risks:

```yaml
- uses: jagmarques/asqav-compliance@v1
  with:
    standard: eu-ai-act
```