Hey everyone,
I’ve been using Cursor heavily lately, and like many of you, I’ve been grabbing .cursorrules and AI scripts from GitHub and various "libraries" to boost my productivity.
But it started feeling like a security black box. We’re essentially running untrusted, 3rd-party instructions with full access to our source code, terminal, and .env files.
I decided to build a small tool called AgentFend to solve this for myself. It uses a static analysis engine I’m calling Onyx to scan prompts and scripts before you hit "Enter".
What it actually looks for right now:
🚩 Data Exfiltration: Detecting if a prompt tries to send your code/keys to an external URL.
🚨 Prompt Injections: Identifying instructions that try to override your agent's safety guardrails.
🔑 Sensitive File Access: Flagging rules that shouldn't be touching your .aws or .ssh folders.
It assigns a security score (0-100) and explains why a script might be sketchy.
It’s 100% free and I don't store your code. I’m really looking for some technical feedback from this community:
Is the "static analysis" approach enough, or should I look into runtime sandboxing?
What other "red flags" should I add to the Onyx engine?
Check it out here if you're interested: https://agentfend.com/
Hope this helps some of you stay safe while building!
Top comments (0)