I've been building AI features for a while and kept running into the same problem: prompt injection attacks are getting more sophisticated, but most solutions either require an external API call (adding latency) or are too heavyweight to drop into an existing project.
So I built @ny-squared/guard — a zero-dependency, fully offline LLM security SDK.

## What it does
Scans user inputs before they hit your LLM and blocks:
- 🛡️ Prompt injection — "Ignore all previous instructions and..."
- 🔒 Jailbreak attempts — DAN, roleplay bypasses, override patterns
- 🙈 PII leakage — emails, phone numbers, SSNs, credit cards
- ☣️ Toxic content — harmful inputs flagged before reaching your model
Works with any LLM provider (OpenAI, Anthropic, Google, etc.).
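To give a sense of how offline detection like this can work, here's a minimal, self-contained sketch of regex-based scanning. This is purely illustrative (the patterns, the `Finding` type, and the `scan` function are simplified stand-ins, not the library's actual rules or API):

```typescript
// Illustrative sketch only: a tiny regex-based scanner in the spirit of an
// offline guard. Real rule sets are far larger and more nuanced.

type Finding = { category: "injection" | "pii"; match: string };

// A few representative patterns per category (assumption: illustrative only).
const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all )?(previous|prior) instructions/i,
  /disregard (your|the) system prompt/i,
];

const PII_PATTERNS: Record<string, RegExp> = {
  email: /[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,}/i,
  ssn: /\b\d{3}-\d{2}-\d{4}\b/,
};

function scan(input: string): Finding[] {
  const findings: Finding[] = [];
  for (const re of INJECTION_PATTERNS) {
    const m = input.match(re);
    if (m) findings.push({ category: "injection", match: m[0] });
  }
  for (const [name, re] of Object.entries(PII_PATTERNS)) {
    const m = input.match(re);
    if (m) findings.push({ category: "pii", match: `${name}: ${m[0]}` });
  }
  return findings;
}
```

Because everything is plain pattern matching in-process, there's nothing to deploy and nothing to wait on over the network.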
## The problem with existing solutions
Most LLM security tools I found had at least one of these issues:
- External API dependency — adds 50-200ms latency per request
- Complex setup — requires separate infrastructure or a paid account
- No TypeScript support — or minimal types
- Heavyweight — brings in dozens of transitive dependencies
@ny-squared/guard runs entirely in-process: no network calls, no API keys, and under 5ms per scan.
## Quick start

```bash
npm install @ny-squared/guard
```
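Once installed, the integration pattern is: scan user input first, and only forward it to your provider if it's clean. The sketch below shows that wiring with a stand-in `scan` stub and a hypothetical `ScanResult` shape, since I'm illustrating the pattern rather than quoting the package's exact API:

```typescript
// Pattern sketch: pre-scan user input before it ever reaches the LLM.
// `scan` here is a stand-in stub; substitute the SDK's real scan call.

type ScanResult = { safe: boolean; reasons: string[] };

// Stub scanner (assumption: the real SDK returns something analogous).
function scan(input: string): ScanResult {
  const hit = /ignore (all )?previous instructions/i.test(input);
  return { safe: !hit, reasons: hit ? ["prompt-injection"] : [] };
}

// Provider-agnostic: callLLM is whatever client you already use
// (OpenAI, Anthropic, Google, etc.).
async function guardedChat(
  userInput: string,
  callLLM: (prompt: string) => Promise<string>
): Promise<string> {
  const result = scan(userInput); // in-process, no network round-trip
  if (!result.safe) {
    // Block before the model ever sees the input.
    throw new Error(`Input blocked: ${result.reasons.join(", ")}`);
  }
  return callLLM(userInput);
}
```

Keeping the guard as a thin wrapper around your existing client call means you can drop it into a handler without touching the rest of your pipeline.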