Tuteliq: Real-Time Harm Detection for AI Apps—Text, Voice, Image, Video
Tuteliq is a child-safety and online-harm detection layer built for AI assistants. It exposes 50 detection tools across five content surfaces: text, voice, image, video, and PDF. Unlike generic content filters, Tuteliq targets specific threats—grooming, sextortion, self-harm ideation, romance scams, deepfakes, synthetic CSAM, radicalisation, and 16+ other harms—and returns structured risk scores with confidence levels and evidence tags per message.
Every detection includes age-calibrated thresholds (so safety rules adapt to user demographics), cross-endpoint amplification (flagging patterns across multiple messages), and country-aware crisis helpline routing. Built-in GDPR tooling handles consent records, data export/deletion, audit logs, and breach reporting—so you stay compliant with KOSA, EU DSA, and regional data-protection law. Sub-second latency on a single endpoint. 27 languages. Free tier included.
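To make the "structured risk scores with age-calibrated thresholds" concrete, here is a minimal sketch of consuming such a detection. The field names (`risk_score`, `age_cohort`, `evidence`) and the threshold values are illustrative assumptions, not Tuteliq's documented schema:

```python
import json

# Hypothetical detection response -- field names and values are
# assumptions for illustration, not Tuteliq's real output schema.
raw = """
{
  "category": "grooming",
  "risk_score": 0.87,
  "confidence": "high",
  "evidence": ["age_probe", "secrecy_request"],
  "age_cohort": "13-15",
  "helpline": {"country": "GB", "number": "0800 1111"}
}
"""

detection = json.loads(raw)

# Age-calibrated thresholds: stricter cutoffs for younger cohorts.
THRESHOLDS = {"13-15": 0.60, "16-17": 0.70, "adult": 0.85}
threshold = THRESHOLDS.get(detection["age_cohort"], 0.85)

# Flag when the risk score clears the cohort's threshold.
flagged = detection["risk_score"] >= threshold
print(flagged)  # 0.87 exceeds the 0.60 cutoff for the 13-15 cohort
```

The point of the cohort lookup is that the same 0.87 score might fall below an adult threshold but well above a child-safety one, which is what "age-calibrated" implies.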
What It Does
Tuteliq plugs directly into Claude, Cursor, or any MCP-compatible client as a trust-and-safety layer. Once installed, your AI agent gains access to 50 specialized detection tools—no separate API calls or webhook plumbing needed. Feed it user messages, voice transcripts, images, or PDFs; get back structured JSON with risk categories, confidence scores, and actionable evidence. The interactive UI widgets render results inline, so you see detections without leaving your editor or chat interface.
For teams building moderated user-facing platforms, moderator dashboards, or safeguarding-aware chatbots, this means you can catch harm signals before they escalate—and prove compliance to regulators.
How to Install
Server endpoint: https://api.tuteliq.ai/mc

Add to your Claude Desktop config:
{
  "mcpServers": {
    "tuteliq": {
      "command": "npx",
      "args": ["-y", "@tuteliq/mcp"]
    }
  }
}
Authentication is OAuth 2.1 with PKCE—no manual token handling.
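"No manual token handling" works because PKCE lets the client prove it initiated the auth flow without ever shipping a stored secret: it generates a random verifier, sends only its SHA-256 challenge up front, then reveals the verifier at token exchange. A minimal sketch of the verifier/challenge pair per RFC 7636 (generic PKCE, not Tuteliq-specific code):

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Return (code_verifier, code_challenge) using the S256 method."""
    # 32 random bytes -> 43-char URL-safe verifier (within RFC 7636's
    # required 43-128 character range).
    verifier = base64.urlsafe_b64encode(
        secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # The challenge is the unpadded base64url SHA-256 of the verifier.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
print(len(verifier))  # 43
```

In practice your MCP client does this for you; the sketch just shows why there is no token to paste into a config file.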
Real-World Use Cases
- Moderate user-generated content in a community app: Run voice and image uploads through Tuteliq's detectors before they hit your feed. Catch sextortion attempts, deepfakes, and coercive-control language in real time.
- Build a safeguarding chatbot for young users: Calibrate detection thresholds by age cohort, log detections with audit trails for compliance review, and route crisis signals to helplines automatically.
- Validate synthetic media in a content-review workflow: Identify AI-generated images and deepfakes in bulk, flag them for human review, and export evidence reports for legal or regulatory teams.
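A common shape for the first use case is a pre-publish gate: score the upload, block outright above one threshold, hold for human review above a lower one. In this sketch `detect_image_harm` is a stub standing in for the MCP tool call; its name, return shape, and the threshold values are assumptions, not Tuteliq's real interface:

```python
# Hypothetical pre-publish moderation gate for a community app.
# `detect_image_harm` stands in for the real detector call.

def detect_image_harm(image_id: str) -> dict:
    # Stub result; a real call would return Tuteliq's detection output.
    return {"category": "deepfake", "risk_score": 0.91, "confidence": "high"}

def gate_upload(image_id: str, review_threshold: float = 0.5,
                block_threshold: float = 0.85) -> str:
    result = detect_image_harm(image_id)
    if result["risk_score"] >= block_threshold:
        return "blocked"          # never reaches the feed
    if result["risk_score"] >= review_threshold:
        return "held_for_review"  # queued for a human moderator
    return "published"

print(gate_upload("img_123"))  # blocked
```

The two-threshold design keeps clear-cut harm out of the feed immediately while routing ambiguous cases to moderators instead of auto-rejecting them.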
Full install guides for Claude Desktop, Cursor, Windsurf, and more at CuratedMCP.