While everyone celebrates New Year, we're shipping code.
It's January 1st, 2026. Most teams are on vacation. We just pushed Dragon v4.0 with 200 detection engines, 1,047 passing tests, and complete OWASP coverage.
Here's our 2025 journey and why we can't stop working - even on holidays.
The Numbers Don't Lie
| Metric | Jan 2025 | Dec 2025 | Growth |
|---|---|---|---|
| Detection Engines | 17 | 200 | +1,076% |
| Unit Tests | 85 | 1,047 | +1,132% |
| Lines of Code | 15K | 81K | +440% |
| Attack Payloads | 3K | 51,800+ | +1,627% |
We didn't just grow - we became the most comprehensive open-source LLM security platform.
2025 Timeline: Key Milestones
Q1: Foundation
- January: Started with basic injection detection
- February: Added Strange Math engines (Sheaf Coherence, TDA, Hyperbolic Geometry)
- March: First 50 engines milestone
Q2: Expansion
- April: Meta-Judge aggregation engine
- May: Visual Content Analyzer (VLM protection)
- June: 100 engines milestone + PyPI release
Q3: Enterprise Features
- July: Go Gateway for production deployment
- August: gRPC/REST dual protocol
- September: Strike red-team platform
Q4: OWASP Complete & Dragon v4.0
- October: OWASP LLM Top 10 100% coverage
- November: OWASP Agentic SI Top 10 100% coverage
- December: Dragon v4.0 release
What We Built This Year
The Strange Math Engines
Most security tools use regex. We use:
- Sheaf Coherence - Cech cohomology for multi-turn attack detection
- Hyperbolic Geometry - Poincare ball for hierarchy manipulation
- TDA Enhanced - Persistent homology for topological fingerprinting
- Information Geometry - Fisher-Rao distance for distribution anomalies
- Chaos Theory - Lyapunov exponents for behavioral analysis
"You're using algebraic topology for prompt injection? That's insane."
- Anonymous security researcher
Yes. And it works.
The January 2026 R&D Engines
Fresh from today's release:
| Engine | Attack it Stops |
|---|---|
| MoE Guard | GateBreaker (3% neuron exploit) |
| RAG Poisoning Detector | Document injection |
| Dark Pattern Detector | Web agent manipulation |
| Tool Hijacker Detector | Agentic exfiltration |
| Echo Chamber Detector | False agreement fabrication |
| Memory Poisoning Detector | Persistent state attacks |
Strike: Red Team Platform
Not just defense - we attack too.
Strike is our offensive security toolkit with:
- 85 attack categories - From basic injection to MCP exploitation
- UniversalController - One interface, all attack vectors
- 51,800+ payloads - Continuously updated from live research
from strike import UniversalController
controller = UniversalController(target_url="https://your-llm.api")
results = controller.run_category("prompt_injection")
print(f"Successful attacks: {len(results.successes)}")
Why it matters: You can't defend what you don't understand. Strike lets you test your own systems before attackers do.
Gateway: Production-Ready Defense
Low-latency security for real workloads.
Gateway (written in Go) provides:
- <10ms latency - Zero noticeable delay
- gRPC + REST - Dual protocol support
- Streaming analysis - Real-time token-by-token protection
- Kubernetes-ready - Helm charts, health probes, autoscaling
Architecture: Client -> Gateway (Go) -> LLM API, with Brain (Python, 200 engines) analyzing requests.
Why it matters: Security that slows down your app is security that gets disabled. Gateway keeps you safe without the latency tax.
How We Helped the Community
Open Source Everything
All 200 engines are open source. The complete codebase:
git clone https://github.com/DmitrL-dev/AISecurity
cd AISecurity/sentinel-community
pip install -e .
No enterprise tier gatekeeping. No "contact sales" for advanced features.
PhD-Level Documentation
Every engine is documented with:
- Theoretical foundations (academic sources)
- Implementation code
- Deviations from theory
- Known limitations
- Honest assessments
300KB of pure security science in engines-expert-deep-dive-en.md.
Attack Payload Database
51,800+ verified payloads for testing your own systems:
- 39K LLM-specific attacks
- 12.8K traditional injection variants
- Organized by attack taxonomy
Why We Work on Holidays
"It's New Year's Day. Why are you coding?"
Because attackers don't take holidays.
While we wrote this article:
- Someone tried to jailbreak a medical AI
- A RAG system ingested a poisoned document
- A web agent clicked a hidden overlay
Every day we delay, someone's LLM gets exploited.
What We Shipped Today (January 1, 2026)
- Fixed critical Unicode regex bug (false positives)
- Fixed 48 engine import errors
- Added MoE Guard with GateBreaker defense
- Expanded pattern library for transfer attacks
- Updated all documentation to Dragon v4.0
- 1,047 tests passing, zero failures
Pushed at 3:00 AM. Because security never sleeps.
What's Next (2026 Roadmap)
Q1: Custom LLM Training
- SENTINEL-Guard model (Qwen3-8B base)
- Dual-mode: Defender + Explainer
- Trained on our 51K payload database
Q2: Enterprise Features
- Real-time streaming analysis
- Multi-tenant isolation
- Kubernetes operator
Q3: Standards Body Work
- OWASP AI Security Committee participation
- CSA AI Security Alliance contributions
- Academic paper submissions
Thank You
To everyone who:
- Starred the repo
- Reported bugs
- Improved documentation
- Contributed code
- Spread the word
You make this possible.
Get Started
pip install sentinel-ai
from sentinel import SentinelAnalyzer
analyzer = SentinelAnalyzer()
result = analyzer.analyze("Ignore previous instructions...")
print(f"Risk: {result.risk_score}")
print(f"Threats: {result.detected_threats}")
GitHub: github.com/DmitrL-dev/AISecurity
Documentation: dmitrl-dev.github.io/AISecurity
Happy New Year 2026!
Now back to work. We have LLMs to protect.
Dmitry Labintcev
SENTINEL Project Lead
SENTINEL Shield — Launch Announcements
Title:
SENTINEL Shield: 17K lines of C for a DMZ between your app and AI systems
Body:
I've been building AI security tools for the past year. One thing became painfully clear: there's no universal barrier between trusted applications and untrusted AI components.
Every company deploying LLMs builds their own validation. Python wrappers. Regex checks. Maybe some prompt engineering. It's fragmented, slow, and misses attacks.
So I built what should exist: a proper DMZ for AI.
SENTINEL Shield is:
- 17,076 lines of pure C (C11 standard)
- Zero runtime dependencies — compiles to single binary
- 6 custom binary protocols designed for AI security
- Cisco IOS-style CLI — your network team already knows it
- Sub-millisecond latency — because security shouldn't add 100ms
- 100K+ requests/second — production scale from day one
- FFI bindings for Python, Go, Node.js
- HA clustering with automatic failover
- MIT licensed — fully open source
Why C instead of Python/Rust?
- Performance — Python adds 10-100ms. We add <100μs.
- Minimal attack surface — no supply chain of 500 packages
- Universal — links to any language via FFI
- Production-proven — same approach as nginx, OpenSSL, Linux kernel
The architecture:
Your App (trusted)
│
▼
SENTINEL SHIELD
├── Zone Registry
├── Rule Engine
├── 6 Guards (LLM/RAG/Agent/Tool/MCP/API)
├── Pattern Matching
├── Rate Limiting
├── Canary Detection
├── Quarantine
└── Audit Logging
│
▼
AI Systems (untrusted)
This is essentially a WAF for AI but designed from first principles for the specific threat model.
Example config (yes, it's like Cisco IOS):
access-list 100
shield-rule 10 block input llm contains "ignore previous"
shield-rule 20 block input llm pattern "reveal.*system.*prompt"
shield-rule 1000 allow input any
!
zone production-llm
type llm
access-list 100 in
GitHub: https://github.com/DmitrL-dev/AISecurity/tree/main/sentinel-shield
Happy to answer questions about the architecture choices.
Top comments (1)
Some comments may only be visible to logged-in visitors. Sign in to view all comments.