Shubham Prajapati

Catch The AI

Over the last few months there has been a surge in AI-generated content on social media, but many platforms are now starting to rein it in.

YouTube is the first big player to introduce such a policy. It clearly states that channels that create their content with AI will be demonetized. In the policy's own words:

‘This policy is now renamed to "inauthentic content".
While this type of content has always been ineligible for monetization, the update aims to provide better clarity and identification methods.
It specifically targets videos with AI-generated commentary or those that heavily rely on recycled content with minimal alterations.’

It is a move, both direct and indirect, to curb AI-generated content on monetized, money-driven platforms.

On the other hand, Cloudflare has enabled publishers to block unauthorized scraping of their content by AI crawlers, introducing a “permission-first” model for AI companies’ access to web content. This move targets AI model training, not direct content monetization on user platforms, but reflects a broader industry trend in controlling how AI uses online content.

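The technical core of that permission-first idea is simple: recognise AI crawlers and block them unless the publisher has opted in. Below is a minimal Python sketch of such a gate. It is only an illustration, not Cloudflare's actual mechanism (which runs at its network edge), and the crawler list and function names are my own assumptions.

```python
# A minimal sketch of a "permission-first" gate for AI crawlers.
# Conceptual only: Cloudflare's real feature is configured at its network
# edge, not in application code, and the names below are illustrative.

# User-agent substrings of some well-known AI crawlers (not an exhaustive list).
AI_CRAWLER_AGENTS = {"GPTBot", "CCBot", "ClaudeBot", "PerplexityBot"}

# Crawlers this publisher has explicitly granted permission to.
ALLOWED_AI_CRAWLERS: set[str] = set()  # empty = block every AI crawler by default


def is_request_allowed(user_agent: str) -> bool:
    """Let normal traffic through; block AI crawlers unless explicitly permitted."""
    for bot in AI_CRAWLER_AGENTS:
        if bot.lower() in user_agent.lower():
            return bot in ALLOWED_AI_CRAWLERS
    return True  # not a known AI crawler, so treat it as regular traffic


if __name__ == "__main__":
    print(is_request_allowed("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))  # True
    print(is_request_allowed("GPTBot/1.1 (+https://openai.com/gptbot)"))    # False
```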

Based on these examples, we can see AI affecting platforms in two ways:

Monetization-based platforms no longer want to pay for AI-generated content.

They also do not want AI models to be trained on content created by original human writers.

In short, these platforms want their content to remain original and to keep that hard-earned work out of AI models' hands.

According to Polaris Market Research, AI-based content creation has grown by around 18–25% over the 2020–2025 period.

But why is this happening all of a sudden? Actually, it is not sudden. Over the past year or so there have been several strikes and campaigns around written content, such as the 2023 Writers Guild of America strike.

One point raised in that protest was that many studios and AI companies were using writers' content without permission to train their models.

And it is also true that newer AI models such as ChatGPT, Perplexity, and Claude have web-search capabilities and can access blog-based websites directly.

Why is this not good for content writers & content creators?
Firstly, original writers and creators put hard work and research into their content, and they earn monetization for that quality.

But AI-driven channels (AI animation, AI cartoons, AI-written scripts, AI-generated articles, AI-generated music, and so on) do not offer original, high-quality content, yet they compete for the same monetization.

At this rate, genuine research behind content creation will die out. That is why new policies like these matter for every creator.

It definitely has a big impact on the social media world. My friend Heet Vekariya and I discussed it, and two questions came up:

  1. Which platform is likely to take this kind of big step next?
  2. Which new technology field will rise because of this shift?

Let's focus on the first question.

There are many monetization- and writing-focused platforms that are likely to face this issue, such as:

Meta (Facebook, Instagram)

X (Twitter), Twitch

Other blogging-type platforms like Medium, Wattpad, Substack, dev.to, etc.

All of them may take steps to tackle AI-generated content and to protect their data from AI models and AI-powered web search.

Now the second question: which technology or field will grow because of this shift?

We discussed that too, and we see an upside to this issue.

We call it the "Catch the AI" effect.

After this policy, and more like it in the future, platforms will clearly need powerful AI detection with high accuracy, meaning very few false negatives and false positives.
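To make the false-positive / false-negative point concrete, here is a small Python sketch of how a platform could measure a detector's two error rates. The labels and predictions are made up for illustration; they are not real platform data.

```python
# A toy evaluation of an AI-content detector.
# 1 = AI-generated, 0 = human-made (labels invented for this example).

def detector_error_rates(y_true: list[int], y_pred: list[int]) -> dict[str, float]:
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # humans wrongly flagged
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # AI that slipped through
    humans = y_true.count(0) or 1
    ai_items = y_true.count(1) or 1
    return {
        "false_positive_rate": fp / humans,    # share of human content wrongly flagged
        "false_negative_rate": fn / ai_items,  # share of AI content that was missed
    }


# Six uploads, half of them AI-generated.
truth     = [1, 0, 1, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0]
print(detector_error_rates(truth, predicted))
# {'false_positive_rate': 0.333..., 'false_negative_rate': 0.333...}
```

A detector good enough for monetization decisions would need both numbers to be far lower than in this toy example, because a false positive here means demonetizing a genuine human creator.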

These platforms host visuals, audio, and text. On YouTube, for example, a single AI-generated video can combine:

Visuals (the imagery on screen)

Audio (like music)

Text (like the script in the video)

So the platforms need strong algorithms to detect AI-generated material across all of these formats, as the rough sketch below illustrates.
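Here is a minimal sketch of what that could look like: separate detectors score each modality, and the platform combines the scores into one verdict. The three scoring functions are placeholders for real trained models, and the thresholds are invented for illustration.

```python
# A sketch of combining per-modality AI-detection scores for one video.
# The scoring functions are stand-ins for real models over frames, audio,
# and the transcript; the numbers returned here are hard-coded examples.

def score_visuals(frames: list) -> float:
    return 0.82  # placeholder probability that the visuals are AI-generated

def score_audio(waveform: list) -> float:
    return 0.40  # placeholder probability that the audio is AI-generated

def score_text(transcript: str) -> float:
    return 0.91  # placeholder probability that the script is AI-written


def classify_video(frames, waveform, transcript, threshold: float = 0.7) -> dict:
    """Flag a video if the average score, or any single very strong signal, crosses a limit."""
    scores = {
        "visuals": score_visuals(frames),
        "audio": score_audio(waveform),
        "text": score_text(transcript),
    }
    average = sum(scores.values()) / len(scores)
    flagged = average >= threshold or max(scores.values()) >= 0.95
    return {"scores": scores, "average": round(average, 2), "flagged": flagged}


print(classify_video(frames=[], waveform=[], transcript="..."))
# {'scores': {...}, 'average': 0.71, 'flagged': True}
```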

There are AI detectors available on the internet today, but they are not powerful enough for data at this scale.

And this is where the new fields come in, which these platforms will need to build and improve:

AI Detection

Pattern Recognition

Content Authenticity Checkers

All of this is about catching the AI: we know how well AI can create human-like content, and stopping that will need the best AI detection available. A toy sketch of one such check follows below.
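As a small illustration of the "content authenticity checker" idea from the list above, the sketch below lets a creator register a fingerprint of an original work so that later uploads can be checked against the registry. Real provenance systems are far more involved; every name here is hypothetical.

```python
# A toy content-authenticity registry based on content hashes.
import hashlib

registry: dict[str, str] = {}  # fingerprint -> original author


def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()


def register_original(author: str, content: bytes) -> str:
    """A creator registers their original work and gets back its fingerprint."""
    fp = fingerprint(content)
    registry[fp] = author
    return fp


def check_upload(content: bytes) -> str:
    """A platform checks a new upload against the registered originals."""
    fp = fingerprint(content)
    if fp in registry:
        return f"matches a registered original by {registry[fp]}"
    return "no registered original found; send to AI-detection review"


register_original("shubham", b"my hand-written article text")
print(check_upload(b"my hand-written article text"))  # matches a registered original by shubham
print(check_upload(b"freshly generated text"))        # no registered original found; ...
```

An exact-hash match only catches verbatim copies; catching paraphrased or regenerated content is exactly where the pattern-recognition and AI-detection work mentioned above comes in.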
