If you operate a platform where users under 18 might be present — a game, a community forum, a tutoring app, a messaging tool — there's a good chance you've heard that child safety regulations are getting stricter.
You may have heard "DSA" and "UK Online Safety Act" mentioned. You might have a vague sense that you're probably in scope for something. But the actual requirements are surprisingly opaque, especially for smaller teams who can't afford a compliance consultant.
This post walks through what DSA and UKOSA actually require, what counts as "reasonable" compliance for a small platform, and what you'd need to build (or deploy) to demonstrate it.
Two Laws. One Problem.
The EU Digital Services Act (DSA) became fully applicable to all platforms in February 2024. It applies to any online intermediary serving users in the EU — regardless of where the platform is headquartered.
The UK Online Safety Act (UKOSA) is being implemented in phases, with core duties coming into force through early 2025 and additional categorization duties taking effect in July 2026. It applies to platforms with UK users — again, regardless of where you're based.
Both laws operate on a tiered system. The obligations on a gaming indie studio with 10,000 users are dramatically different from those on a Very Large Online Platform (VLOP) like Meta. But here's the thing smaller teams often miss: the baseline obligations apply to everyone, including platforms that have never thought of themselves as being "in scope."
What Both Laws Actually Require (Baseline)
1. You must have a process for content moderation
Both laws require platforms to have documented, functioning processes for dealing with illegal content and harmful content involving minors. "We don't really have chat" is not a defense if your platform has any user-to-user communication feature.
What this means practically:
- A written moderation policy that users can read
- A mechanism for users to report content
- A process for reviewing and acting on reports
- Documentation that you actually follow the process
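To make the last two items concrete, here is a minimal sketch of a report-intake model where every state change is recorded with a timestamp and the moderator who made it. All names (`ModerationReport`, `ReportStatus`, the fields) are illustrative assumptions, not a required schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class ReportStatus(Enum):
    RECEIVED = "received"
    UNDER_REVIEW = "under_review"
    ACTIONED = "actioned"
    DISMISSED = "dismissed"

@dataclass
class ModerationReport:
    report_id: str
    reporter_id: str
    content_ref: str            # pointer to the reported content, not a copy
    reason: str
    status: ReportStatus = ReportStatus.RECEIVED
    # (timestamp, new status, moderator) tuples: proof the process was followed
    history: list = field(default_factory=list)

    def transition(self, new_status: ReportStatus, moderator_id: str) -> None:
        """Record every state change so the review trail is reconstructable."""
        self.history.append(
            (datetime.now(timezone.utc).isoformat(), new_status.value, moderator_id)
        )
        self.status = new_status
```

The point of the `history` list is the documentation requirement: it is not enough to act on a report; you need to be able to show later that you did, when, and who reviewed it.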
2. You must have a way to report CSAM to authorities
If child sexual abuse material appears on your platform (or is generated or distributed through it), you are required to report it. US-based providers must report to the NCMEC CyberTipline. In the EU, the DSA requires notifying national authorities of suspected serious offences, and separate proposed EU legislation would establish a dedicated EU center for child sexual abuse reporting.
What this means practically:
- You need tooling that can generate evidence packages in the NCMEC reporting format (hash, timestamp, account information, content)
- You need a documented retention policy for evidence that might be needed in legal proceedings
- You need to know what your reporting obligations are in the jurisdictions you operate in
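The core of an evidence package is just what the first bullet lists: a content hash, a UTC timestamp, and account information. The sketch below shows those pieces assembled; it is an illustrative structure, not the actual CyberTipline submission format, and `build_evidence_package` is a hypothetical helper:

```python
import hashlib
from datetime import datetime, timezone

def build_evidence_package(content: bytes, account_info: dict) -> dict:
    """Assemble the core fields an abuse report needs. Illustrative only:
    the real submission format is defined by the receiving authority."""
    return {
        # SHA-256 of the content, so the report can be matched to the
        # original bytes without re-transmitting them
        "content_sha256": hashlib.sha256(content).hexdigest(),
        # UTC timestamp of detection, in ISO 8601
        "detected_at": datetime.now(timezone.utc).isoformat(),
        # Account details of the uploader/distributor
        "account": account_info,
    }
```

Hashing matters for the retention point too: the hash lets you prove later that preserved evidence is the same content that was reported.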
3. You must implement child safety measures if minors are in your user base
This is where both laws get more specific. If you have users under 18 (or if you have any reason to believe you might), you're required to implement proportionate measures to prevent harmful contact with those users.
The key word is "proportionate." A platform with 500 users has different obligations than TikTok. But "proportionate" does not mean "none."
The July 2026 Ofcom Categorization Register
In July 2026, Ofcom will publish the UK's first Platform Categorization Register under UKOSA. This register will categorize platforms into tiers — and different tiers have different mandatory obligations.
Here's what this means for smaller platforms: many platforms that currently believe they're below the threshold will discover they're not.
The categorization criteria include:
- Number of UK users
- Whether the platform allows user-to-user communication
- Whether users under 18 are present (or "likely to be present")
- Whether the platform has content that is "regulated content" under UKOSA
If you run a gaming platform with voice or text chat, and you have any UK users, you should be planning now for what category you might fall into.
What "Proactive" Child Safety Looks Like
Both laws nudge platforms toward proactive (not just reactive) safety measures. Reactive safety is: someone reports abuse, you respond. Proactive safety is: you detect patterns of potential abuse before a report is filed.
For most platforms, proactive safety has historically meant one thing: keyword filtering. Block certain words and phrases, flag messages that contain them.
There are two problems with this approach, and regulators are increasingly aware of both:
Problem 1: Keyword filters don't catch grooming.
Grooming is a process that unfolds over weeks or months. It typically begins with entirely normal, benign conversation — building trust, establishing a relationship, escalating gradually. The vocabulary of early-stage grooming looks nothing like the vocabulary regulators put on keyword lists. By the time a keyword triggers, significant harm has often already begun.
Problem 2: Keyword filters create legal liability, not just safety.
A keyword filter that misses a grooming pattern leaves a documented trail of a system that was never capable of catching it. When a regulator or plaintiff examines your moderation logs, "we had a keyword filter" is not a strong defense. "We monitored behavioral patterns and escalated to human moderators when those patterns suggested risk" is a much stronger one.
What Behavioral Detection Actually Requires
If you want to implement behavioral detection — the approach that actually works against grooming — here's what you need:
1. Multi-session context
A single-message classifier cannot detect grooming. You need a system that tracks how conversations evolve over time — across multiple sessions, over days or weeks. The risk signal comes from the trajectory, not any individual message.
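One way to sketch trajectory tracking: keep a rolling window of per-session risk scores for each conversation pair, and flag sustained escalation across the window rather than any single spike. The class name, window size, and slope threshold below are all illustrative assumptions:

```python
from collections import defaultdict, deque

class TrajectoryTracker:
    """Rolling window of per-session risk scores for each conversation pair.
    Flags sustained escalation across sessions, not individual messages."""

    def __init__(self, window: int = 5, slope_threshold: float = 0.1):
        self.window = window
        self.slope_threshold = slope_threshold
        # pair -> last `window` session scores
        self.scores = defaultdict(lambda: deque(maxlen=window))

    def record_session(self, pair: tuple, session_score: float) -> bool:
        """Record one session's score; return True if the trajectory
        shows sustained escalation over the full window."""
        history = self.scores[pair]
        history.append(session_score)
        if len(history) < self.window:
            return False  # not enough sessions yet to judge a trajectory
        # Average session-to-session increase across the window
        seq = list(history)
        deltas = [b - a for a, b in zip(seq, seq[1:])]
        return sum(deltas) / len(deltas) > self.slope_threshold
```

A pair whose scores drift from 0.1 to 0.7 over five sessions gets flagged even though no single session would trip a per-message threshold; a pair holding steady at 0.5 does not.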
2. Relationship graph tracking
Grooming often involves one adult establishing a relationship with one minor. Coordinated grooming (multiple accounts approaching the same minor) is also documented. You need to track who is talking to whom, with what frequency, and how those relationships develop.
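A minimal sketch of that tracking: a directed contact graph that counts who messages whom, with a check for the coordinated pattern described above (many distinct accounts converging on one minor). `ContactGraph` and its threshold are hypothetical names, not a prescribed design:

```python
from collections import defaultdict

class ContactGraph:
    """Directed contact graph: who messages whom, and how often.
    Surfaces minors contacted by unusually many distinct accounts."""

    def __init__(self, fan_in_threshold: int = 3):
        self.fan_in_threshold = fan_in_threshold
        # target -> {source: message count}
        self.contacts = defaultdict(lambda: defaultdict(int))

    def record_message(self, source: str, target: str) -> None:
        self.contacts[target][source] += 1

    def coordinated_approach_suspects(self, minors: set) -> dict:
        """Return minors contacted by >= threshold distinct accounts,
        with the list of accounts contacting each."""
        return {
            minor: sorted(self.contacts[minor])
            for minor in minors
            if len(self.contacts[minor]) >= self.fan_in_threshold
        }
```

In production this graph also needs the frequency and recency dimensions mentioned above (how often, and whether contact is accelerating); the fan-in check is just the simplest signal to show.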
3. Explainability for human moderators
Regulators in both the EU and UK have begun asking: when your system flags a user, what does your human moderator actually see? An opaque score from 0 to 100 is not sufficient. Moderators need to understand why a flag was triggered — both for accuracy (to make good decisions) and for accountability (to document that human review occurred).
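Concretely, that means the flag object handed to a moderator carries the contributing signals in plain language, not just the score. A minimal sketch (all names illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class RiskFlag:
    """A flag surfaced to a human moderator: a score plus the concrete
    signals that produced it, stated in plain language."""
    user_id: str
    score: float
    reasons: list = field(default_factory=list)

    def add_reason(self, signal: str, detail: str) -> None:
        """Attach one contributing signal, e.g. ('temporal', 'contact
        frequency tripled over two weeks')."""
        self.reasons.append(f"{signal}: {detail}")

    def summary(self) -> str:
        """What the moderator actually reads in the review queue."""
        return f"Risk {self.score:.2f} for {self.user_id} | " + "; ".join(self.reasons)
```

The `reasons` list serves both purposes named above: the moderator can judge whether the signals actually add up, and the record of what they saw becomes part of the accountability trail.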
4. Audit logs with forensic integrity
Both DSA and UKOSA require that you be able to demonstrate your compliance process to regulators. This means tamper-evident audit logs — records that cannot be altered after the fact — that show when a risk was detected, what action was taken, and by whom.
For legal proceedings (criminal cases, civil suits), chain-of-custody matters. Your audit log is evidence. It needs to be treated like evidence from the start.
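Hash chaining is the standard construction for tamper evidence: each entry embeds the hash of the previous entry, so editing any record after the fact breaks verification from that point forward. A minimal sketch (not SENTINEL's actual implementation):

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only log where each entry embeds the previous entry's hash,
    so any after-the-fact edit is detectable on verification."""

    GENESIS = "0" * 64  # sentinel hash for the first entry

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "prev_hash": prev_hash,
        }
        # Canonical serialization (sorted keys) so the hash is reproducible
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(record)

    def verify(self) -> bool:
        """Walk the chain; any altered entry or broken link fails."""
        prev_hash = self.GENESIS
        for record in self.entries:
            if record["prev_hash"] != prev_hash:
                return False
            body = {k: v for k, v in record.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev_hash = record["hash"]
        return True
```

For a real deployment you would anchor the head of the chain somewhere the application cannot rewrite (write-once storage, a periodic external timestamp), since an attacker who can rewrite the whole file can rebuild the whole chain.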
5. Data handling compliance
You can't build a behavioral detection system without collecting and processing behavioral data. That data collection must be GDPR-compliant (for EU and UK users), COPPA-compliant (if you have US users under 13), and consistent with your privacy policy.
This means:
- A documented lawful basis for processing behavioral data for safety purposes
- Erasure handling — when a user exercises their right to deletion, the audit log must be preserved for legal compliance but personal data must be removed
- Data minimization — you should process the minimum necessary behavioral signals, not archive raw message content
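One common pattern for reconciling erasure with audit integrity is to keep personal data out of the audit log entirely: entries reference users only by a salted pseudonym, and the pseudonym-to-person mapping lives in a separate store that can be deleted on request. A sketch under those assumptions (names are illustrative):

```python
import hashlib

class ErasableStore:
    """Audit entries reference users only by a salted pseudonym; the
    mapping from pseudonym to personal data lives in this separate table.
    Honouring a deletion request removes the mapping, leaving the audit
    log's entries (and any hash chain over them) untouched."""

    def __init__(self, salt: bytes):
        self.salt = salt
        self.personal = {}  # pseudonym -> personal data

    def pseudonym(self, user_id: str) -> str:
        """Deterministic salted token; the salt prevents trivially
        re-deriving tokens from guessed user IDs."""
        return hashlib.sha256(self.salt + user_id.encode()).hexdigest()[:16]

    def register(self, user_id: str, personal_data: dict) -> str:
        token = self.pseudonym(user_id)
        self.personal[token] = personal_data
        return token  # audit entries store only this token

    def erase(self, user_id: str) -> None:
        """Right-to-erasure: drop the personal record. Existing audit
        references remain but no longer resolve to a person."""
        self.personal.pop(self.pseudonym(user_id), None)
```

This also supports the data-minimization point: the audit log never held message content or identity in the first place, only tokens and behavioral signals.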
The Compliance Burden on Small Platforms
Here's the frustrating reality: the compliance requirements above are legitimate and proportionate. They exist to protect children. But implementing all of them from scratch is expensive — easily $500K+ in engineering cost for a full custom implementation.
This is where the market has a gap. Large platforms (Meta, Discord, Roblox, TikTok) have entire trust and safety engineering teams. Small platforms — indie game studios, EdTech startups, community forums — have maybe one person who is also doing three other jobs.
Ofcom, the UKOSA regulator, has explicitly acknowledged this gap. Its guidance notes that smaller platforms can use third-party tooling to meet their obligations, provided that tooling is well-documented and auditable. The regulation doesn't require you to build from scratch; it requires you to have a functioning, defensible compliance posture.
What This Looks Like in Practice
We built SENTINEL as an open-source answer to this gap. Here's what it covers:
Behavioral risk scoring: Four signal layers (linguistic, graph, temporal, and fairness) that monitor conversation patterns across sessions — not just individual messages. Each score comes with a plain-language explanation so moderators understand what triggered it.
Fairness gates: Before any detection model can be deployed, it must pass a demographic parity audit. If it disproportionately flags any demographic group, it cannot ship. This prevents the disparate-impact problems that have plagued algorithmic moderation systems.
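As an illustration of the kind of check a parity gate performs (the function and threshold below are assumptions for exposition, not SENTINEL's actual audit): compare per-group flag rates and fail the gate when they diverge beyond a tolerated ratio.

```python
def passes_parity_gate(flags_by_group: dict, exposure_by_group: dict,
                       max_ratio: float = 1.25) -> bool:
    """Compare per-group flag rates (flags / users exposed to the model).
    Fail if the highest group's rate exceeds the lowest group's by more
    than max_ratio. The 1.25 tolerance is an illustrative choice."""
    rates = {
        group: flags_by_group.get(group, 0) / exposure_by_group[group]
        for group in exposure_by_group
    }
    lowest, highest = min(rates.values()), max(rates.values())
    if lowest == 0:
        return highest == 0  # any flags against a zero-rate baseline fail
    return highest / lowest <= max_ratio
```

The deployment rule is then mechanical: a model whose flag rates fail this check does not ship, regardless of its accuracy numbers.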
Tamper-evident audit logs: 7-year retention with cryptographic chaining — every entry is a chain link that can be verified. Designed for legal proceedings, not just internal monitoring.
NCMEC CyberTipline reporting: Generates evidence packages in the required format. If you have a mandatory reporting obligation, the tooling to meet it is built in.
GDPR/COPPA erasure handling: When a deletion request comes in, personal data can be removed from behavioral records without destroying the audit log's forensic integrity.
Federation (opt-in): Platforms can share threat signatures without sharing raw messages. A predator banned on one platform gets flagged on federated platforms — without any platform ever seeing another platform's user data.
It's free for platforms under $100k annual revenue. Most indie studios, most EdTech startups, most community forums qualify.
Where to Start
If you're a small platform trying to figure out your compliance posture:
Establish whether you're in scope. If you have users in the EU or UK and any user-to-user communication feature, you probably are. If you have users under 18 (or can't rule it out), the child safety provisions apply.
Document what you have. Even if it's just a keyword filter and a report-abuse button, document it. A documented process is a defense. An undocumented one is not.
Understand the July 2026 UKOSA deadline. If you operate a UK-facing platform, start tracking Ofcom's categorization register announcements now. The obligations for higher-tier platforms take effect in Q3 2026.
Look at open-source tooling. You don't need to build a moderation platform from scratch. SENTINEL (and other tools in the ROOST ecosystem) are specifically designed to give smaller platforms access to the same caliber of safety infrastructure that large platforms have built internally.
One More Thing
The regulatory environment is not going to get simpler. The EU's AI Act introduces additional requirements for AI-based content moderation systems. The UK is actively expanding UKOSA. US state laws are proliferating.
But the fundamental requirement is not that complex: you need to demonstrate that you took child safety seriously, that you had proportionate processes, and that you documented what you did. That's achievable for a small platform with the right tools.
SENTINEL is an open-source behavioral intelligence platform for child safety compliance. Free for platforms under $100k revenue. GitHub: https://github.com/sentinel-safety/SENTINEL