If you're building AI agents — chatbots, automation pipelines, document processors — you're processing personal data. Every user query contains at least an identifier. Most contain names, emails, phone numbers, or more.
Under GDPR, the EU AI Act (whose high-risk obligations take effect August 2, 2026), and Nigeria's NDPA, that processing carries concrete legal obligations:
- Audit trails of every LLM call
- PII detection before data hits external APIs
- Consent management per user per purpose
- Data Protection Impact Assessments
- Data Processing Agreements with every AI provider
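Per-user, per-purpose consent is the requirement that trips up most agent pipelines, because a single user can consent to one processing purpose and not another. A minimal sketch of that idea (illustrative only; `ConsentStore` is a hypothetical name, not agent-shield's API):

```python
from collections import defaultdict

class ConsentStore:
    """Toy per-user, per-purpose consent registry."""

    def __init__(self):
        self._grants = defaultdict(set)  # user_id -> set of consented purposes

    def grant(self, user_id: str, purpose: str) -> None:
        self._grants[user_id].add(purpose)

    def revoke(self, user_id: str, purpose: str) -> None:
        self._grants[user_id].discard(purpose)

    def allowed(self, user_id: str, purpose: str) -> bool:
        return purpose in self._grants[user_id]

store = ConsentStore()
store.grant("user-42", "llm_processing")
print(store.allowed("user-42", "llm_processing"))  # True
print(store.allowed("user-42", "marketing"))       # False
```

The point is the granularity: the check is keyed on the pair (user, purpose), so consenting to LLM processing never implies consenting to, say, marketing analytics.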
None of the major agent frameworks handle this. LangChain doesn't. CrewAI doesn't. So I built an open-source tool that does.
## agent-shield
A Python middleware that wraps any LLM call with compliance features:
```python
from agent_shield import Shield

shield = Shield()
result = shield.scan("Contact me at john@example.com or 08034567890")

print(result.pii_found)  # {'EMAIL': 1, 'PHONE_NG': 1}
print(result.redacted)   # "Contact me at [EMAIL_REDACTED] or [PHONE_REDACTED]"
```
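Detection of this kind is typically pattern-based. Here is a minimal sketch of how the two types in the example above could be matched and redacted; this is illustrative regex code, not agent-shield's actual implementation:

```python
import re

# Illustrative patterns only; the real library covers 12 PII types.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE_NG": re.compile(r"\b0[789][01]\d{8}\b"),  # Nigerian mobile format
}

def scan(text: str):
    found, redacted = {}, text
    for label, pattern in PATTERNS.items():
        matches = pattern.findall(redacted)
        if matches:
            found[label] = len(matches)
            # Redact using the type's base name, e.g. PHONE_NG -> [PHONE_REDACTED]
            redacted = pattern.sub(f"[{label.split('_')[0]}_REDACTED]", redacted)
    return found, redacted

found, redacted = scan("Contact me at john@example.com or 08034567890")
print(found)     # {'EMAIL': 1, 'PHONE_NG': 1}
print(redacted)  # Contact me at [EMAIL_REDACTED] or [PHONE_REDACTED]
```

The key design point, whatever the implementation: redaction happens *before* the text leaves your process, so the raw PII never reaches an external provider.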
**What it does:**
- Detects 12 PII types (emails, phones, Nigerian BVN/NIN, UK NI numbers, credit cards, dates of birth, IP addresses)
- Redacts PII before sending to LLM providers
- Tamper-evident audit logging (hash-chain verification)
- Per-user consent management
- Auto-generates DPIA skeletons from your audit data
- Data flow mapping (Markdown + Mermaid diagrams)
Zero dependencies for core. Works with OpenAI, Anthropic, or any provider.
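The "tamper-evident" claim rests on hash chaining: each log entry's hash covers both its payload and the previous entry's hash, so editing any past entry breaks verification of everything after it. A minimal sketch of the technique (illustrative; not the library's actual storage format):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry's predecessor

def append_entry(log: list, payload: dict) -> None:
    """Append an entry whose hash commits to the payload AND the previous hash."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"payload": payload, "prev": prev}, sort_keys=True)
    log.append({"payload": payload, "prev": prev,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log: list) -> bool:
    """Recompute the chain; any edited payload or broken link fails."""
    prev = GENESIS
    for entry in log:
        body = json.dumps({"payload": entry["payload"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"call": "chat", "pii_redacted": True})
append_entry(log, {"call": "chat", "pii_redacted": False})
print(verify(log))                         # True
log[0]["payload"]["pii_redacted"] = False  # tamper with history
print(verify(log))                         # False
```

Note that this makes tampering *detectable*, not impossible; an attacker who can rewrite the entire chain can still forge it, which is why audit logs are usually shipped to append-only storage as well.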
GitHub: github.com/Thezenmonster/agent-shield