The EU AI Act is in force, and its obligations are phasing in over the next few years. If you build or deploy AI agents in the EU, you need to comply. Here is what matters for developers.
What the EU AI Act requires
For high-risk AI systems (which includes many autonomous agents), the Act requires:
- Logging and traceability - Record what the system did and why
- Human oversight - Humans must be able to intervene
- Risk management - Identify and mitigate risks
- Technical documentation - Detailed records of system behavior
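The human-oversight requirement, in practice, means a human can approve or block a high-risk action before it runs. A minimal sketch of such a gate (the function and its parameters are illustrative, not part of any library):

```python
def require_approval(action_type, description, approver=input):
    """Pause and ask a human before a high-risk action runs.

    `approver` is injectable so the prompt can be automated in
    tests or routed to a review queue instead of a terminal.
    """
    answer = approver(f"Allow {action_type} ({description})? [y/N] ")
    return answer.strip().lower() == "y"
```

An agent would call this before any destructive or sensitive step and skip the action when it returns False.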
The problem for AI agents
Traditional software has predictable behavior. AI agents don't. A LangChain agent might call different APIs, access different data, or take different actions each time it runs. Without governance, you have no record of what happened.
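The core of traceability is an append-only record that can't be quietly edited after the fact. A self-contained sketch of the underlying idea, a hash chain where each record commits to the previous one (this illustrates the concept, not Asqav's implementation):

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log where each record hashes the previous record,
    so altering any earlier entry breaks the chain."""

    GENESIS = "0" * 64

    def __init__(self):
        self.records = []          # list of (entry, entry_hash)
        self._prev_hash = self.GENESIS

    def record(self, agent_id, action_type, action_id):
        entry = {
            "agent_id": agent_id,
            "action_type": action_type,
            "action_id": action_id,
            "ts": time.time(),
            "prev": self._prev_hash,   # link to the previous record
        }
        entry_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._prev_hash = entry_hash
        self.records.append((entry, entry_hash))
        return entry_hash

    def verify(self):
        """Recompute every hash; False if any record was altered."""
        prev = self.GENESIS
        for entry, stored_hash in self.records:
            if entry["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != stored_hash:
                return False
            prev = recomputed
        return True
```

Editing any field of any past entry makes `verify()` fail, which is the property an auditor cares about.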
Adding compliance to your agents
Asqav is an open-source Python SDK that handles this. Install it:
pip install asqav
Register your agent and sign every action:
from asqav import Asqav

client = Asqav(api_key="sk_...")
agent = client.create_agent(name="my-agent")

# Every action gets a cryptographic audit record
client.sign(
    agent_id=agent.agent_id,
    action_type="data:read:customers",
    action_id="query-001",
)
This gives you:
- Tamper-proof audit trail with quantum-safe signatures
- Policy enforcement to block risky actions
- Compliance reports you can hand to auditors
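Policy enforcement can be as simple as checking each action type against a per-agent allowlist before the action executes. A rough sketch of the pattern (the names here are illustrative, not the Asqav API):

```python
class PolicyViolation(Exception):
    """Raised when an agent attempts an action its policy forbids."""

# Hypothetical policy: which action types each agent may perform.
POLICY = {
    "my-agent": {"data:read:customers", "email:send"},
}

def enforce(agent_name, action_type):
    """Block any action not explicitly allowed for this agent."""
    allowed = POLICY.get(agent_name, set())
    if action_type not in allowed:
        raise PolicyViolation(
            f"{agent_name} is not permitted to perform {action_type}"
        )
```

Calling `enforce` before each signed action turns the audit trail from a passive record into an active guardrail.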
CI/CD scanning
The asqav-compliance GitHub Action scans your codebase for compliance gaps:
- uses: jagmarques/asqav-compliance@v1
  with:
    standard: eu-ai-act
It catches missing audit trails and unprotected agent actions before they reach production.