I've been building AI agents for a while, and the one thing that always bugged me was this: how do you prove what your code actually did?
Logs are fine. But logs can be edited. They're not proof of anything.
So I built a decorator that adds cryptographic signatures to any Python function call. One line. No refactoring.
Before
Here's a normal function that summarizes a document using an LLM:
```python
import openai

def summarize(doc: str) -> str:
    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Summarize: {doc}"}],
    )
    return response.choices[0].message.content
```
Works great. But there's no record that it ran, what it received, or what it returned. If a regulator asks "what did your AI agent do on March 15th?" - you've got nothing.
After
```python
import openai
import asqav

asqav.init()  # uses ASQAV_API_KEY env var

@asqav.sign
def summarize(doc: str) -> str:
    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Summarize: {doc}"}],
    )
    return response.choices[0].message.content
```
That's it. One decorator.
Every time `summarize()` runs, asqav captures the function name, arguments, return value, and timestamp - then signs it cryptographically. The signature gets stored with a unique ID you can look up later.
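To make the idea concrete, here's a minimal sketch of what signing a call record can look like. This uses a plain HMAC over a canonical JSON payload - asqav's actual signing scheme, key handling, and record format aren't shown in this post, so treat every name below as illustrative:

```python
import hashlib
import hmac
import json
import time

# Hypothetical signing key: in a real service this would never live
# in application code.
SECRET = b"demo-signing-key"

def sign_record(func_name, args, result):
    """Build a call record and sign it with HMAC-SHA256."""
    record = {
        "function": func_name,
        "args": args,
        "result": result,
        "timestamp": time.time(),
    }
    # Canonical serialization (sorted keys) so verification is deterministic.
    payload = json.dumps(record, sort_keys=True).encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record, signature

record, sig = sign_record("summarize", {"doc": "..."}, "A short summary.")

# A verifier recomputes the HMAC over the stored record; any edit to the
# record changes the digest, so tampering is detectable.
payload = json.dumps(record, sort_keys=True).encode()
assert hmac.compare_digest(sig, hmac.new(SECRET, payload, hashlib.sha256).hexdigest())
```

The key property is that editing the stored record after the fact invalidates the signature - which is exactly what plain logs can't give you.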
Verifying a signature
Anyone with the signature ID can verify it. No API key needed:
```python
result = asqav.verify_signature("sig_abc123")

if result.verified:
    print(f"Valid - signed by {result.agent_name}")
```
This is a public endpoint. Your auditor, your compliance team, or a regulator can verify signatures without access to your infrastructure.
Custom action types
By default, the decorator tags everything as `function:call`. You can be more specific:
```python
@asqav.sign(action_type="llm:summarize")
def summarize(doc: str) -> str:
    ...

@asqav.sign(action_type="deploy:prod")
def deploy_model(model_id: str) -> None:
    ...
```
This makes it easier to filter and search your audit trail later.
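If you're curious how one decorator can work both bare (`@sign`) and with arguments (`@sign(action_type=...)`), here's the common Python pattern - a sketch of the technique, not asqav's actual source:

```python
import functools

def sign(func=None, *, action_type="function:call"):
    """Decorator usable as @sign or @sign(action_type="...")."""
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            result = f(*args, **kwargs)
            # The real decorator would capture and sign the call here;
            # we just record the tag for demonstration.
            wrapper.last_action = action_type
            return result
        return wrapper

    if func is not None:
        # Called bare: @sign passes the function directly.
        return decorator(func)
    # Called with arguments: @sign(action_type=...) returns the decorator.
    return decorator

@sign
def plain() -> int:
    return 1

@sign(action_type="llm:summarize")
def tagged() -> int:
    return 2
```

The trick is checking whether the first positional argument is the decorated function itself (bare usage) or `None` (parenthesized usage).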
Works with async too
If your function is a coroutine, the decorator handles it automatically:
```python
@asqav.sign
async def summarize(doc: str) -> str:
    response = await client.chat.completions.create(...)
    return response.choices[0].message.content
```
No separate async_sign decorator. It just works.
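A decorator can do this by inspecting the function it wraps and returning a matching wrapper. This is a sketch of the general technique, assuming nothing about asqav's internals:

```python
import asyncio
import functools
import inspect

def sign(func):
    """Wrap sync and async functions with a single decorator."""
    if inspect.iscoroutinefunction(func):
        @functools.wraps(func)
        async def async_wrapper(*args, **kwargs):
            result = await func(*args, **kwargs)
            # capture-and-sign would happen here
            return result
        return async_wrapper

    @functools.wraps(func)
    def sync_wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        # capture-and-sign would happen here
        return result
    return sync_wrapper

@sign
async def greet(name: str) -> str:
    return f"hello {name}"

print(asyncio.run(greet("world")))
```

Because the async path returns a coroutine-producing wrapper, `await`-ing the decorated function behaves exactly like the original.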
Install
```bash
pip install asqav
```
Get an API key at asqav.com and set it:
```bash
export ASQAV_API_KEY="sk_..."
```
Then add `@asqav.sign` to whatever functions you want auditable. That's the whole setup.
Why bother?
The EU AI Act requires audit trails for high-risk AI systems. DORA wants operational resilience records for financial services. ISO 42001 needs evidence of AI governance controls.
You could build all of this yourself. Or you could add a decorator and move on with your day.
The SDK is open source: github.com/jagmarques/asqav-sdk