sentinel-safety

Add Child Safety to Your Platform in 30 Minutes: A SENTINEL Integration Guide

If you're building a platform where users interact — a game, a community forum, a messaging app, an educational tool — child safety compliance is not optional. The EU DSA, UK Online Safety Act, and NCMEC mandatory reporting obligations all apply based on your user base, not your company size or headcount.

SENTINEL (https://github.com/sentinel-safety/SENTINEL) is an open-source behavioral intelligence platform that handles the full compliance stack: behavioral detection, perceptual hash matching, evidence preservation, and CyberTipline report generation. This guide walks through getting it running on your infrastructure.

Prerequisites: Docker, Docker Compose, 4GB RAM, a Linux or macOS host. Everything runs locally — no data leaves your infrastructure.

Step 1: Clone and Configure (5 minutes)

git clone https://github.com/sentinel-safety/SENTINEL.git
cd SENTINEL
cp .env.example .env

Edit .env with your platform's configuration. The required fields at minimum:

# Platform identity (used in NCMEC reports)
PLATFORM_NAME="Your Platform Name"
PLATFORM_ESP_ID="your-esp-id"

# Database
DATABASE_URL=postgresql://sentinel:sentinel@postgres:5432/sentinel

# Redis (session state and message queuing)
REDIS_URL=redis://redis:6379

# JWT for inter-service auth (paste a literal value; .env files are
# read literally by Docker Compose, so $(...) will not be expanded)
JWT_SECRET=replace-with-a-64-hex-char-secret
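One caveat on the JWT secret: Docker Compose reads `.env` values literally, so a `$(openssl rand -hex 32)` written inside the file will not be expanded. Generate the secret in your shell instead (assuming `openssl` is installed):

```shell
# Generate a 64-hex-char secret and append the literal value to .env
echo "JWT_SECRET=$(openssl rand -hex 32)" >> .env
```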

The ESP ID is your platform's identifier in NCMEC's CyberTipline system. If you don't have one yet, you can register at https://www.missingkids.org/theissues/csam.

Step 2: Start the Services (3 minutes)

SENTINEL runs as 13 microservices. Docker Compose handles the orchestration:

docker-compose up -d

On first run, this pulls images and initializes the database schema. On subsequent starts, it's under 10 seconds.

Verify all services are healthy:

docker-compose ps

You should see all services in Up or Up (healthy) state. The key services:

  • gateway (port 8000): API entry point for your platform to send events
  • behavioral-analyzer: Session-level pattern detection
  • content-scanner: Perceptual hash matching
  • evidence-store: Forensically sound evidence packaging
  • report-generator: CyberTipline report construction
  • federation-hub: Cross-platform threat intelligence (optional)

Step 3: Send Your First Event (5 minutes)

SENTINEL's gateway accepts message events via REST. When a user sends a message on your platform, forward it:

import os

import httpx

# API key for the SENTINEL gateway; read from the environment rather
# than hard-coding it
SENTINEL_API_KEY = os.environ["SENTINEL_API_KEY"]

async def forward_to_sentinel(message: dict):
    async with httpx.AsyncClient() as client:
        response = await client.post(
            "http://localhost:8000/api/v1/events/message",
            json={
                "platform_user_id": message["sender_id"],
                "session_id": message["conversation_id"],
                "content": message["text"],
                "timestamp": message["created_at"].isoformat(),
                "recipient_ids": [message["recipient_id"]],
                "metadata": {
                    "ip_address": message.get("sender_ip"),
                    "user_agent": message.get("user_agent"),
                },
            },
            headers={"Authorization": f"Bearer {SENTINEL_API_KEY}"},
        )
    return response.json()

For Node.js platforms:

const axios = require('axios');

async function forwardToSentinel(message) {
  const response = await axios.post(
    'http://localhost:8000/api/v1/events/message',
    {
      platform_user_id: message.senderId,
      session_id: message.conversationId,
      content: message.text,
      timestamp: new Date(message.createdAt).toISOString(),
      recipient_ids: [message.recipientId],
      metadata: {
        ip_address: message.senderIp,
        user_agent: message.userAgent,
      }
    },
    { headers: { Authorization: `Bearer ${process.env.SENTINEL_API_KEY}` } }
  );
  return response.data;
}

The gateway returns immediately with an event ID. Analysis happens asynchronously — your message delivery latency is unaffected.
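If you want the forwarding call itself off your delivery path as well, fire it in the background. A minimal asyncio sketch, where `forward_to_sentinel` here is a stand-in for the HTTP call above:

```python
import asyncio

forwarded: list[str] = []

async def forward_to_sentinel(message: dict) -> None:
    # Stand-in for the HTTP POST shown above; records the event instead
    await asyncio.sleep(0)
    forwarded.append(message["id"])

async def deliver_message(message: dict) -> None:
    # 1. Deliver to the recipient via your existing path ...
    # 2. Fire-and-forget the SENTINEL forward; create_task returns
    #    immediately, so delivery latency is unaffected
    asyncio.create_task(forward_to_sentinel(message))

async def main() -> None:
    await deliver_message({"id": "msg_1", "text": "hello"})
    await asyncio.sleep(0.01)  # give the background task time to finish

asyncio.run(main())
```

In production you'd keep a reference to the task (or use a queue) so failed forwards can be retried rather than silently dropped.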

Step 4: Handle Alerts (10 minutes)

When SENTINEL detects a high-risk session, it fires a webhook to your platform. Configure your webhook endpoint:

# In .env
ALERT_WEBHOOK_URL=https://your-platform.com/webhooks/sentinel
ALERT_WEBHOOK_SECRET=your-webhook-secret

Your webhook handler receives:

{
  "alert_id": "alert_01HXYZ...",
  "session_id": "conv_123",
  "platform_user_id": "user_456",
  "risk_score": 0.87,
  "risk_level": "HIGH",
  "behavioral_signals": [
    "age_solicitation_detected",
    "trust_escalation_pattern",
    "platform_exit_pressure"
  ],
  "recommended_action": "REVIEW",
  "evidence_package_id": "evp_01HABC...",
  "created_at": "2026-04-26T14:23:00Z"
}

A minimal handler that queues for human review:

import hashlib
import hmac
import os

from fastapi import FastAPI, HTTPException, Request

app = FastAPI()

# Must match ALERT_WEBHOOK_SECRET in SENTINEL's .env
WEBHOOK_SECRET = os.environ["ALERT_WEBHOOK_SECRET"]

@app.post("/webhooks/sentinel")
async def handle_sentinel_alert(request: Request):
    # Verify the HMAC-SHA256 signature before trusting the payload
    body = await request.body()
    sig = hmac.new(WEBHOOK_SECRET.encode(), body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, request.headers.get("X-Sentinel-Signature", "")):
        raise HTTPException(status_code=401)

    alert = await request.json()

    if alert["risk_level"] in ("HIGH", "CRITICAL"):
        # Hand off to your platform's review queue; never auto-ban on
        # an algorithm alone
        await queue_for_trust_and_safety_review(alert)

    return {"status": "received"}

Critical: Always route HIGH and CRITICAL alerts to human review before taking account action. SENTINEL's fairness gate applies statistical verification before flagging, but human judgment is the final step. Automated bans without review create wrongful termination liability.
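To exercise the handler locally before SENTINEL is wired up, you can compute a signature the same way the handler verifies it (hex HMAC-SHA256 over the raw request body); the payload here is illustrative:

```python
import hashlib
import hmac
import json

def sign_payload(secret: str, body: bytes) -> str:
    # Hex HMAC-SHA256 over the raw body, matching the handler's check
    return hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

# Illustrative payload; a real alert carries the full schema shown above
body = json.dumps({"alert_id": "alert_test", "risk_level": "HIGH"}).encode()
signature = sign_payload("your-webhook-secret", body)
# POST `body` to /webhooks/sentinel with header X-Sentinel-Signature: <signature>
```

Note that the signature is computed over the exact bytes of the body, so your test client must send `body` unmodified (no re-serialization).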

Step 5: Understanding What Gets Flagged (5 minutes)

SENTINEL's behavioral analyzer tracks signals across a session, not individual messages. A single message asking someone's age is not flagged. The pattern that gets flagged looks like this:

  1. Initial contact (age solicitation, shared interest framing)
  2. Trust escalation over multiple sessions (increasing personal disclosure requests)
  3. Platform exit pressure ("let's continue on Discord")
  4. Image solicitation following the above sequence

No single step triggers an alert. The risk score accumulates across the behavioral trajectory. This is why behavioral detection catches grooming that content filters miss — the individual messages are often innocuous; only the pattern is not.
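The accumulation idea can be sketched in a few lines. The signal names below come from the alert payload above, but the weights and threshold are invented for illustration; SENTINEL's actual scoring model lives in the behavioral analyzer and is more involved:

```python
# Invented weights for illustration only
SIGNAL_WEIGHTS = {
    "age_solicitation_detected": 0.25,
    "trust_escalation_pattern": 0.30,
    "platform_exit_pressure": 0.20,
    "image_solicitation": 0.35,
}
ALERT_THRESHOLD = 0.80

def session_risk(signals: list[str]) -> float:
    # Risk accumulates across the whole session trajectory
    return min(1.0, sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals))

# One signal alone stays below the alert threshold...
assert session_risk(["age_solicitation_detected"]) < ALERT_THRESHOLD
# ...but the full grooming trajectory crosses it
assert session_risk(list(SIGNAL_WEIGHTS)) >= ALERT_THRESHOLD
```

This is the core difference from per-message content filtering: no individual signal is decisive, so each message can look innocuous while the sequence does not.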

You can inspect any session's behavioral analysis:

curl -H "Authorization: Bearer $SENTINEL_API_KEY" \
  http://localhost:8000/api/v1/sessions/{session_id}/analysis

Step 6: Evidence and Reporting

When a session reaches CRITICAL risk level, or when your trust and safety team confirms a violation after review, you can generate an NCMEC CyberTipline report:

curl -X POST \
  -H "Authorization: Bearer $SENTINEL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"session_id": "conv_123", "confirmed_by": "analyst_id_456"}' \
  http://localhost:8000/api/v1/reports/generate

This returns a report object pre-populated with the required and recommended CyberTipline fields, sourced from the evidence package built at the time of flagging, and ready for human review before submission.

Evidence is automatically preserved for 90 days (configurable for law enforcement extension requests) in isolated storage with full chain of custody.

What You've Got

After these 30 minutes, your platform has:

  • Real-time behavioral analysis on all user messages
  • Perceptual hash scanning for known CSAM
  • An alert pipeline to your trust and safety workflow
  • An evidence preservation system that satisfies 18 U.S.C. § 2258A retention requirements
  • A CyberTipline report generator ready to produce compliant reports within your 24-hour reporting window

This is the same infrastructure stack that compliance teams at large platforms build over 12-18 months. SENTINEL packages it as a deployable system because the platforms that need it most — indie games, small social networks, educational tools — are the ones least likely to have the resources to build it from scratch.

Next Steps

  • Content scanning: Configure the NCMEC hash database integration for PhotoDNA-compatible matching (requires NCMEC ESP registration)
  • Federation: Enable the federation hub to share anonymized threat intelligence with other SENTINEL-enabled platforms (cross-platform grooming patterns are common)
  • Monitoring: The /metrics endpoint exposes Prometheus-compatible metrics for your observability stack
  • Tuning: Adjust risk score thresholds and behavioral signal weights in config/behavioral_rules.yml to match your platform's demographics

Documentation, architecture details, and the full API reference are at the GitHub repo: https://github.com/sentinel-safety/SENTINEL

SENTINEL is in active v1 development. Production deployments should include a human review layer. The system is designed to surface risk for human judgment, not replace it.
