I’ve been building AI agents at work and kept running into the same problem: every framework lets agents call any registered tool with zero safety checks. An agent with database access can run `DROP TABLE users` and nothing stops it.
So I built AgentShield-FW, a runtime firewall that intercepts every tool call and enforces configurable safety policies before execution.
• GitHub: https://github.com/Avinash-Amudala/AgentShield
• PyPI: pip install agentshield-fw
The simplest usage:
```python
import agentshield

shield = agentshield.Shield()

@shield.protect
def execute_sql(query: str) -> str:
    return db.execute(query)  # db: your database connection

# Agent tries: execute_sql("DROP TABLE users")
# → Blocked by AgentShield: Destructive SQL detected (ASI02)
```
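To give a feel for what a rule like ASI02 checks, here's a deliberately naive sketch of destructive-SQL detection. This is an illustration of the general technique, not AgentShield's actual rule engine; the pattern and function name are my own assumptions.

```python
import re

# Illustrative only: a minimal destructive-SQL pattern check.
# AgentShield's real rules are more thorough than this sketch.
DESTRUCTIVE_SQL = re.compile(
    r"\b(DROP\s+TABLE|TRUNCATE\s+TABLE|DELETE\s+FROM\s+\w+\s*;?\s*$)",
    re.IGNORECASE,
)

def is_destructive(query: str) -> bool:
    """Return True if the query matches a known destructive pattern."""
    return bool(DESTRUCTIVE_SQL.search(query))
```

A real engine layers many such rules (plus rate limits and allowlists) and runs them before the tool call ever reaches the database.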
What makes it different:
• Zero required dependencies — core runs on Python stdlib only
• 40+ pre-built rules covering SQL injection, path traversal, credential leaks, prompt injection, shell commands, rate limiting
• Mapped to OWASP Agentic Security Top 10 (ASI01-ASI10)
• Works with LangChain, MCP, CrewAI, OpenAI SDK, or any Python function
• Sub-millisecond latency (<1ms p99)
• 94.56% test coverage
• Hash-chained audit logging for tamper detection
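For readers unfamiliar with hash-chained logs: each entry includes the hash of the previous entry, so modifying or deleting any record invalidates every hash after it. Here's a generic sketch of the technique; the entry format below is my own illustration, not AgentShield's actual log schema.

```python
import hashlib
import json

GENESIS = "0" * 64  # hash placeholder for the first entry

def append_entry(log: list, event: dict) -> None:
    """Append an event, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": entry_hash})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any tampering breaks the chain."""
    prev = GENESIS
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Editing an old event (say, hiding a blocked call) changes its payload hash, so `verify_chain` fails from that point forward.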
Other projects called "AgentShield" are static scanners (analyze config files). This is a runtime firewall (intercepts live tool calls). WAF vs SAST.
MIT license. Python 3.10-3.13.