Most conversations about AI and data miss the point.
They frame the problem as user behavior:
“You shared too much.”
“You trusted the tool.”
“You should’ve known better.”
That’s convenient. And wrong.
The real issue isn’t that people overshare.
It’s that data extraction is happening invisibly, by default, and without meaningful choice.
The Moment No One Warns You About
Think about the moments where value is actually created:
• Uploading artwork
• Posting writing
• Submitting code
• Commenting in forums
• Typing prompts, giving feedback, making edits
• Simply browsing while logged in
In those moments, you’re not “talking to AI.”
You’re interacting with platforms — many of which reserve the right to use that interaction to train models.
Sometimes they say it clearly.
Sometimes it’s buried in policy updates.
Sometimes it’s vague by design.
But almost always, the same thing is true:
You don’t know what’s happening while it’s happening.
Awareness Isn’t the Fix. Agency Is.
Telling people to “be more careful” assumes they have the information and the leverage to act.
They don’t.
Reading every Terms of Service agreement isn’t realistic.
Tracking policy changes across platforms is impossible.
Finding opt-out paths — when they exist at all — is friction by design.
The problem isn’t ignorance.
It’s lack of agency at the point of interaction.
What WTOM Actually Does
WTOM (WhoTrainedOnMe) exists for one reason:
To interrupt invisible AI training at the moment it matters and give users a real choice.
When you visit a site, WTOM checks whether that platform:
• Uses user data to train AI
• Has a history of content scraping
• Offers an opt-out or protest mechanism
• Is vague or opaque about AI usage
If there’s a match, WTOM shows a small, contextual signal — right then, on that site.
Not a dashboard.
Not a warning banner.
Not fear-based messaging.
Just clarity — paired with action.
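To make that check concrete, here’s a minimal sketch of the kind of per-site lookup a WTOM-style extension could run. The record shape, field names, and example domain are assumptions for illustration, not WTOM’s actual data model or code.

```typescript
// Illustrative sketch only: a hypothetical per-site policy record and lookup,
// not WTOM's real database schema or extension code.

type SitePolicy = {
  trainsOnUserData: boolean;     // platform uses user content to train AI
  scrapingHistory: boolean;      // documented history of content scraping
  optOutUrl?: string;            // opt-out or protest mechanism, if one exists
  policyClarity: "clear" | "vague" | "unknown";
};

// Tiny in-memory stand-in for the extension's site database.
const policies: Record<string, SitePolicy> = {
  "example-platform.com": {
    trainsOnUserData: true,
    scrapingHistory: false,
    optOutUrl: "https://example-platform.com/settings/ai-opt-out",
    policyClarity: "clear",
  },
};

// Decide whether the current site warrants a contextual signal, and what it says.
function checkSite(hostname: string): string | null {
  const policy = policies[hostname];
  if (!policy) return null; // no entry, no signal

  if (policy.trainsOnUserData && policy.optOutUrl) {
    return `This platform trains on user data. Opt out: ${policy.optOutUrl}`;
  }
  if (policy.trainsOnUserData) {
    return "This platform trains on user data. No opt-out is offered.";
  }
  if (policy.policyClarity === "vague") {
    return "This platform is vague about whether it trains AI on user data.";
  }
  return null;
}

// A content script would call something like checkSite(location.hostname)
// on page load and render the returned message as the in-page signal.
console.log(checkSite("example-platform.com"));
```

The point of the sketch is the shape of the decision, not the data: one lookup at the moment you land on a site, one short answer about training, opt-outs, and clarity.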
This Isn’t About Fear. It’s About Control.
WTOM doesn’t tell you what to do.
It doesn’t block anything.
It doesn’t assume intent.
It simply answers the question you’re almost never allowed to ask in real time:
Is this platform training on me — and can I do anything about it?
Sometimes the answer is “yes, and here’s how to opt out.”
Sometimes it’s “yes, and there is no opt-out.”
Sometimes it’s “unclear — and that’s the point.”
But at least you know before you contribute.
Why This Matters (Especially for Creators)
Artists, writers, developers, photographers — they create value constantly online.
And in many cases, that value becomes training data by default, without explicit consent and without compensation.
WTOM doesn’t solve the entire system.
But it does something rare:
It restores situational awareness and choice at the exact moment leverage normally disappears.
Where This Is Going
WTOM is early. The database is growing. Coverage is expanding. Reporting is manual and human by necessity.
But the direction is simple:
• Real-time AI training visibility
• Per-site opt-out and protest paths
• A future privacy dashboard that reflects your exposure, not just abstract risk
This isn’t about stopping AI.
It’s about stopping silent extraction.
If You Want That Control Back
WTOM is live now for:
Chrome + all Chromium browsers: Check it here
Firefox: Check it here
Use it.
Report sites.
Pressure platforms through visibility.
Because the most dangerous part of AI training today isn’t malice.
It’s that it happens quietly, while you’re busy creating.