Your platform has authentication. It has rate limiting. It has spam filters. Does it have child safety?
If your platform has chat, messaging, comments, or any feature where users communicate, you have a legal and moral obligation to protect minors. And if you're thinking "we'll build that later" — the regulators aren't waiting.
- KOSA (US) requires protection across 9 harm categories
- DSA (EU) mandates risk assessment for minors — fines up to 6% of global revenue
- Online Safety Act (UK) requires proactive detection of harmful content
- A New Mexico jury just ordered Meta to pay $375 million for misleading users about child safety
- A CNN/CCDH investigation found that 8 of 10 AI chatbots helped simulated teens plan violence
The era of "we moderate when users report" is over. Platforms are now legally liable for proactive protection.
But here's the thing most developers don't realize: you don't need to build this yourself.
## The Problem with Building In-House
I've talked to dozens of CTOs and engineering leads at platforms with 10K to 1M+ users. The conversation usually goes like this:
"We know we need child safety. We've been meaning to build it. But we'd need to hire ML engineers, acquire training data, build detection models for grooming, bullying, self-harm, and a dozen other categories, train them across multiple languages, handle voice and image analysis, maintain and retrain the models, and keep up with evolving threats and regulations."
That's a team-year per harm category. KOSA has nine categories.
Most platforms end up with a keyword blocklist and a manual report button. That catches the obvious stuff. It misses the patterns that do real damage.
## What Keyword Filters Miss
The most dangerous threats on platforms aren't content problems — they're behavioral problems.
A groomer never sends a message that says "I am grooming you." The danger is in the pattern:
Week 1-2: "What game do you play? We should team up!"
→ Every content classifier: ✅ Safe
Week 2-4: "You can tell me anything. I get you better than your parents do."
→ Every content classifier: ✅ Safe
Week 4-6: "Don't tell anyone about our conversations, okay?"
→ Every content classifier: ✅ Safe
Week 6+: Escalation to exploitation
→ Content classifier: ❌ Too late
Each individual message looks completely innocent. Only by analyzing how the conversation evolves over time can you detect the grooming pattern — weeks before harm occurs.
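To see why, consider the kind of filter most platforms actually ship. The sketch below is illustrative (the blocklist terms are placeholders, not a real production list): it flags exact phrases and nothing else, so every message in the timeline above passes.

```javascript
// A naive keyword filter, of the kind many platforms ship as "moderation".
// The blocklist entries here are placeholders for illustration only.
const BLOCKLIST = ['explicit-term-1', 'explicit-term-2', 'scam-link.example'];

function keywordFilter(message) {
  const text = message.toLowerCase();
  // Flag only if an exact blocklisted phrase appears in the message
  return BLOCKLIST.some((term) => text.includes(term)) ? 'blocked' : 'safe';
}

// Every message from the grooming timeline above sails through:
const groomingTimeline = [
  'What game do you play? We should team up!',
  'You can tell me anything. I get you better than your parents do.',
  "Don't tell anyone about our conversations, okay?",
];

groomingTimeline.forEach((msg) => {
  console.log(keywordFilter(msg)); // 'safe' for every message
});
```

The filter has no memory: each call sees one message in isolation, which is exactly why a multi-week escalation pattern is invisible to it.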
The same applies to coercive control (isolation → financial control → monitoring → manipulation), radicalization (trust-building → us-vs-them framing → calls to action), and financial exploitation (rapport → urgency → money request).
This is what behavioral detection does. And it's now available as an API.
## Adding Behavioral Detection: 3 Lines of Code
Here's what it actually looks like to add child safety to a chat platform:
```javascript
const response = await fetch('https://api.tuteliq.ai/v1/detect/grooming', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    messages: conversationHistory,
    childAge: 13
  })
});
```
That's it. The API returns a risk level, the detected grooming stage, supporting evidence, and recommended actions — all in under 400ms.
You can do the same for bullying, self-harm, coercive control, radicalization, fraud, and 15+ other behavioral threats. One API. All endpoints included in every plan.
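Acting on the response might look like the sketch below. The field names (`riskLevel`, `stage`, `evidence`) are assumptions inferred from the prose description above, not a confirmed schema; verify them against the API docs before relying on them.

```javascript
// Sketch of acting on a detection result. Field names (riskLevel, stage,
// evidence) are assumed from the description above: verify against the
// actual response schema in the docs.
function handleDetection(result) {
  if (result.riskLevel === 'critical' || result.riskLevel === 'high') {
    return {
      action: 'escalate',
      reason: `grooming stage "${result.stage}" detected`,
      evidence: result.evidence || [],
    };
  }
  // Low and medium risk: deliver the message, keep monitoring
  return { action: 'allow' };
}
```

A result like `{ riskLevel: 'critical', stage: 'isolation' }` would escalate to your moderation queue; anything below high risk passes through unchanged.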
## A More Complete Integration
Here's a real-world pattern for a chat platform:
```javascript
async function moderateMessage(message, userId, conversationHistory) {
  // Run content and behavioral detection on the new message
  const response = await fetch('https://api.tuteliq.ai/v1/analyze', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.TUTELIQ_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      content: message,
      include: ['bullying', 'unsafe'],
      // Context helps the models calibrate severity for the audience
      context: {
        ageGroup: getUserAgeGroup(userId),
        platform: 'gaming-chat'
      }
    })
  });

  const result = await response.json();

  // Handle based on risk level
  if (result.severity === 'critical') {
    blockMessage(message);
    alertModerator(result);
    // Generate an incident report for compliance
    const report = await generateReport(conversationHistory);
    logForCompliance(report);
  } else if (result.severity === 'high') {
    flagForReview(result);
  }

  return result;
}
```
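One practical concern the pattern above leaves open: what happens when the safety API is slow or down? A moderation call should never take the message path down with it. The wrapper below is a sketch, not part of the Tuteliq API: it takes the moderation function as a parameter, races it against a timeout, and applies an explicit fallback policy. Whether you "fail open" (deliver unscreened) or "fail closed" (hold for review) during an outage is a product decision; this version fails open but marks the result so unscreened traffic can be audited later.

```javascript
// Defensive wrapper around a moderation call (e.g. the moderateMessage
// function above). Adds a timeout and a deliberate fail-open fallback.
async function safeModerate(moderate, message, timeoutMs = 2000) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('moderation timed out')), timeoutMs);
  });
  try {
    // Whichever settles first wins: the moderation result or the timeout
    return await Promise.race([moderate(message), timeout]);
  } catch (err) {
    // Fail open: let the message through, but flag that it went unscreened
    console.error('moderation unavailable:', err.message);
    return { severity: 'unknown', failedOpen: true };
  } finally {
    clearTimeout(timer);
  }
}
```

Usage: `safeModerate((m) => moderateMessage(m, userId, history), message)`. Flagged `failedOpen` results can be re-screened in a batch job once the API recovers.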
## What's Available
The API covers 20+ detection endpoints across every modality:
- Behavioral detection (the stuff classifiers miss): grooming, coercive control, radicalization, romance scams, money mule recruitment, social engineering, vulnerability exploitation
- Content safety: bullying, unsafe content (self-harm, violence, drugs, explicit material), gambling harm, app fraud
- Multimodal: voice analysis (transcription + behavioral), image analysis (visual + OCR), video analysis (frame extraction + audio)
- Incident response: structured reports for compliance teams and law enforcement, age-appropriate action plans for children, parents, and moderators
27 languages, auto-detected. Zero data retention — content is analyzed and discarded.
## KOSA Compliance in One Integration
If your platform serves US users under 18, KOSA requires you to protect them across 9 specific harm categories. Here's the mapping:
| KOSA Category | Endpoint |
|---|---|
| Bullying | /detect/bullying |
| Grooming & sexual exploitation | /detect/grooming |
| Eating disorders | /detect/unsafe |
| Substance use | /detect/unsafe |
| Self-harm and suicide | /detect/unsafe + /analyze/emotions |
| Depression and anxiety | /analyze/emotions |
| Compulsive usage | /analyze/emotions |
| Sexual exploitation and abuse | /detect/grooming + /analyze/image |
| Harmful visual content | /analyze/image + /analyze/video |
9/9 covered. One API key. No ML team.
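The mapping above is small enough to carry in code. The endpoint paths below come straight from the table; the camel-case category keys are our own shorthand, not KOSA terminology.

```javascript
// The KOSA-to-endpoint table above, as data. Endpoint paths are from the
// table; the category keys are illustrative shorthand.
const KOSA_ENDPOINTS = {
  bullying: ['/detect/bullying'],
  groomingAndSexualExploitation: ['/detect/grooming'],
  eatingDisorders: ['/detect/unsafe'],
  substanceUse: ['/detect/unsafe'],
  selfHarmAndSuicide: ['/detect/unsafe', '/analyze/emotions'],
  depressionAndAnxiety: ['/analyze/emotions'],
  compulsiveUsage: ['/analyze/emotions'],
  sexualExploitationAndAbuse: ['/detect/grooming', '/analyze/image'],
  harmfulVisualContent: ['/analyze/image', '/analyze/video'],
};

// Deduplicate to find the distinct endpoints covering all nine categories
const endpoints = [...new Set(Object.values(KOSA_ENDPOINTS).flat())];
console.log(endpoints.length); // 6
```

Six distinct endpoints cover all nine categories, which is why a single integration can satisfy the whole table.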
## The Science Behind It
This isn't generic ML classification. The detection models are informed by published criminological research from Lancaster University and the University of Central Lancashire. The behavioral patterns encoded in the API come from forensic analysis of real grooming cases, coercive control documentation, and fraud investigation data.
The Chief Scientific Officer is a criminologist. The advisory board includes a professor embedded in a UK police constabulary. The patterns are real.
## Getting Started
- Sign up at tuteliq.ai/dashboard — free tier, 1,000 messages/month
- Read the docs at docs.tuteliq.ai
- Check the implementation guide on GitHub: github.com/tuteliq/child-safety-guide
SDKs available for Node.js, Python, Go, Java, Ruby, PHP, Swift, Kotlin, Flutter, and React Native. Native integrations for Roblox, Discord, Slack, and Telegram.
## The Bottom Line
If you're building a platform where people communicate, child safety isn't a feature — it's infrastructure. Like authentication. Like encryption. Like rate limiting.
The difference is that authentication protects your system. Child safety protects your users.
The technology exists. The regulatory pressure is real. The question is whether your platform acts now — or waits for the lawsuit.
Gabriel Sabadin is the CEO of Tuteliq AB, a Swedish company building behavioral detection APIs for online safety. Questions? gabriel.sabadin@tuteliq.ai