AI agents handle sensitive data all the time, yet most run with zero controls on what gets passed to external APIs.
Here are the 5 most common leaks:
1. PII in tool call arguments
Your agent sends a customer name and email to an LLM for summarization. That PII just left your system. Most teams do not even know it happened.
2. API keys in agent context
Agent reads a config file, picks up a database password, passes it as context to the next tool call. Now your secrets are in an LLM provider log.
3. Prompt injection via tool outputs
A tool returns data containing hidden instructions. The agent follows them and exfiltrates data through a subsequent tool call.
4. Financial data in reasoning chains
The agent reasons about revenue numbers, customer counts, pricing. All of this ends up in trace logs that may not be access-controlled.
5. Medical or legal information
Healthcare and legal agents handle privileged information. Without scanning, this data flows freely between tools.
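All five leak classes above can be approximated with simple pattern scanning over tool-call payloads. A minimal sketch (the rules and names here are illustrative stand-ins, not asqav's actual detection logic, which would be far richer):

```python
import re

# Illustrative detection rules -- a real scanner uses much larger rule sets
# plus ML-based classifiers for things like names and medical terms.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk_[A-Za-z0-9]{8,}\b"),
    "injection": re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
}

def scan(text: str) -> list[str]:
    """Return the names of every rule that matches the given text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

findings = scan("Contact John Smith at john@example.com, SSN 123-45-6789")
# findings -> ["email", "ssn"]
```

Running this on every tool-call input and output gives you at least visibility; the harder part is doing it fast enough to sit inline on the call path.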
Detection
asqav scans every tool call input and output for PII, secrets, injection patterns, and sensitive content. When something is detected, it can block the call or flag it.
```python
import asqav

asqav.init(api_key="sk_...")
agent = asqav.Agent.create("my-agent")

# This will be flagged - PII detected
agent.sign("api:send", {"data": "John Smith, SSN 123-45-6789"})
```
The scanning runs before the action executes, so sensitive data never leaves your system.
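The block-before-execute pattern is easy to see in miniature. A hypothetical guard decorator in the same spirit (one illustrative SSN rule standing in for a full scanner; this is a sketch, not asqav's implementation):

```python
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # single illustrative rule

class SensitiveDataError(Exception):
    """Raised when a flagged argument blocks a tool call."""

def guarded(tool_fn):
    """Wrap a tool so flagged arguments abort the call before it runs."""
    def wrapper(**kwargs):
        for key, value in kwargs.items():
            if SSN.search(str(value)):
                raise SensitiveDataError(f"argument {key!r} contains an SSN-like value")
        return tool_fn(**kwargs)  # only reached when nothing was flagged
    return wrapper

@guarded
def send(data):
    print(f"sending: {data}")

send(data="quarterly report")  # executes normally
# send(data="John Smith, SSN 123-45-6789") raises SensitiveDataError
```

The key property is that the raise happens before `tool_fn` is invoked, so the sensitive value never reaches the external API.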