This post is now archived. For the latest updates on SENTINEL, see the new consolidated article:
SENTINEL Platform: Complete AI Security Toolkit (2026 Update Log)
The Problem: Three Languages, One Brain
I'm building SENTINEL, an open-source AI security platform. Six months ago, my architecture looked like this:
```
User → Go Gateway → Python Brain (209 ML engines) → LLM
           ↓                  ↓
     C Shield (DMZ)     IMMUNE (XDR monitoring)
           ↓
     Strike (Red Team payloads)
```
Three languages. Three runtimes. Three deployment nightmares.
- Go Gateway (400 LOC): HTTP routing, auth, rate limiting
- Python Brain (98K LOC): 209 detection engines, gRPC server
- C Shield (23K LOC): DMZ, 21 protocols, sub-ms latency
- IMMUNE (Python): XDR/EDR/MDR monitoring
- Strike (Python): Red team payloads, 39K+ attack vectors
The Go Gateway was the weakest link. It did exactly what Shield could do, but:
- Required Go runtime (~100MB)
- Added 3-5ms latency
- Needed separate deployment
- Duplicated auth/ratelimit logic
The Question That Changed Everything
"Who handles request routing?"
I stared at my architecture diagram. Shield already had:
- HTTP server with routing (`api_add_route()`)
- `/evaluate` endpoint for security checks
- SBP protocol to talk to Brain
- Auth via SZAA protocol
The Gateway was pure overhead: 100% feature overlap with Shield, zero unique value.
The Decision: Kill the Gateway
```diff
- User → Go Gateway → Python Brain → LLM
+ User → C Shield → Python Brain → LLM
```
One less language. One less runtime. One less thing to break at 3 AM.
What I Built: SLLM Protocol
600 lines of C that replaced 400 lines of Go:
```c
// include/protocols/sllm.h
shield_err_t sllm_proxy_request(
    const sllm_request_t *request,
    sllm_response_t *response
);
```
The flow:
1. User: `POST /proxy {"messages": [...]}`
2. INGRESS: `sllm_analyze_ingress()` → Brain gRPC
3. FORWARD: `sllm_forward_to_llm()` → OpenAI/Gemini/Anthropic
4. EGRESS: `sllm_analyze_egress()` → Brain gRPC
5. Return sanitized response
Multi-Provider Support
```c
typedef enum {
    SLLM_PROVIDER_OPENAI,
    SLLM_PROVIDER_GEMINI,
    SLLM_PROVIDER_ANTHROPIC,
    SLLM_PROVIDER_OLLAMA,
    SLLM_PROVIDER_CUSTOM
} sllm_provider_t;
```
Each provider has its own request/response format:
```c
// OpenAI: {"model": "gpt-4", "messages": [...]}
sllm_build_openai_body(&req, &body, &len);

// Gemini: {"contents": [{"parts": [...]}]}
sllm_build_gemini_body(&req, &body, &len);

// Anthropic: {"model": "claude-3", "messages": [...]}
sllm_build_anthropic_body(&req, &body, &len);
```
The Numbers
| Metric | Go Gateway | C Shield |
|---|---|---|
| Lines of Code | 400 | 600 (+50%) |
| Runtime Size | ~100MB | 0 |
| Latency | 3-5ms | <1ms |
| Dependencies | Go modules | Zero |
| Deploy Complexity | Separate | Integrated |
Net result: +200 LOC, -100MB, -4ms, -1 deployment target.
The Hard Parts
1. HTTP in Pure C
No curl. No libraries. Just sockets:
```c
static shield_err_t http_post(
    const char *host, int port, const char *path,
    const char *headers, const char *body,
    char **response, size_t *response_len
) {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    // ... 80 lines of socket code
}
```
Is it pretty? No. Does it work? Yes. Does it add dependencies? No.
2. JSON Without Libraries
```c
static char *extract_json_string(const char *json, const char *key) {
    char pattern[128];
    snprintf(pattern, sizeof(pattern), "\"%s\":\"", key);
    const char *start = strstr(json, pattern);
    if (!start) return NULL;
    start += strlen(pattern);
    const char *end = strchr(start, '"');  // naive: doesn't handle \" escapes
    if (!end) return NULL;
    return strndup(start, (size_t)(end - start));  // caller frees
}
```
Could I use cJSON? Sure. But that's a dependency. Shield's philosophy: zero runtime deps.
3. Graceful Degradation
```c
if (!g_sllm_config.brain_endpoint[0]) {
    // Brain unavailable - default allow
    analysis->allowed = true;
    return SHIELD_OK;
}
```
If Brain is down, Shield doesn't crash. It logs and allows (configurable).
What I Learned
1. Polyglot is Expensive
Every language is:
- A runtime to deploy
- A toolchain to maintain
- A context switch for your brain
- A potential version conflict
For a solo developer, this compounds fast.
2. C is Underrated for Infrastructure
Modern C (C11+) with good practices is:
- Fast (no GC pauses)
- Portable (runs everywhere)
- Debuggable (no runtime magic)
- Stable (APIs don't break yearly)
3. "Rewrite in Rust" Isn't Always the Answer
Rust is great. But for a project with 23K lines of working C, adding Rust means:
- New toolchain
- FFI complexity
- Mixed memory models
Sometimes the answer is: make the C better.
The New Architecture
```
                       SENTINEL PLATFORM

                     HIVE (Central Core)
        Threat Intelligence • Orchestration • Policies
  ThreatHunter • Watchdog • PQC • QRNG • Cognitive Signatures
        │                    │                    │
        ▼                    ▼                    ▼
  SHIELD (C)           BRAIN (Py)           STRIKE (Py)
  DMZ / Pre-Filter     ML/AI Engines        Red Team
  21 Protocols    ◄──► 209 Detectors   ◄──► 39K+ Payloads
  <1ms Latency         98K LOC              Crucible CTF
        │
        │ SLLM Protocol (NEW)
        ▼
                   EXTERNAL AI SYSTEMS
           OpenAI • Gemini • Anthropic • Ollama
        ▲
        │ SIEM/Telemetry
  IMMUNE (EDR/XDR/MDR)
  Agent System • Hive Mesh • DragonFlyBSD Hardened
  Kernel Modules • eBPF Monitoring • Real-time Threat Response
```
Complete ecosystem:
- HIVE: Central brain, orchestration, cognitive signatures
- SHIELD (C): DMZ, 21 protocols, <1ms, zero deps
- BRAIN (Python): 209 ML engines, 98K LOC
- STRIKE (Python): Red team, 39K+ payloads
- IMMUNE (C/Python): EDR/XDR/MDR, kernel-level
Result: Zero Go. Two languages. One platform.
Should You Do This?
Yes, if:
- You're a solo dev or small team
- Your Gateway is mostly pass-through
- You already have a C codebase
- Latency matters
No, if:
- Your Gateway has complex business logic
- You don't know C
- Your team is productive in Go
- "If it ain't broke, don't fix it" applies
Code
Full implementation: SENTINEL on GitHub
Key files:
- `include/protocols/sllm.h`: API
- `src/protocols/sllm.c`: Implementation
- `src/api/handlers.c`: `/proxy` endpoint
Building AI security infrastructure solo. 209 detection engines. Pure C DMZ. Strange Math™.
Telegram: @DmLabincev • @DLabintcev