The internet was once seen as a new frontier for creativity and original ideas. Today, many people describe it differently: as a junkyard or an echo chamber, where the same ideas are repeated over and over. Let's explore the idea that the internet is in a state of decline, constantly recycling its own content, which leads to a drop in quality and originality.
This didn't happen overnight. It's the result of changes in technology and culture over the last three decades.
- The Early Web (1990s - early 2000s): This was a time of passion projects. People created websites about their hobbies and interests because they wanted to share them with small, like-minded communities. Originality was natural because the web was new and not yet commercialized.
- The Rise of Social Media (mid-2000s): Platforms like Facebook and YouTube, along with blogging services, made it easy for anyone to create content. But this also meant that a few large companies started to control where and how people shared information. The goal shifted from personal expression to getting likes, shares, and followers.
- The Age of Google (2010s - 2022): Getting to the first page of Google search results became the most important goal for many creators. This led to a new way of making content called Search Engine Optimization (SEO). Instead of coming up with new ideas, people began to copy what was already popular, creating slightly different versions of the same articles and lists. We can call this the "human copy machine" 😉 phase.
- The AI Flood (2022 - Today): Artificial Intelligence (AI) can now do the job of the "human copy machine" but thousands of times faster and cheaper. This has flooded the internet with AI-generated content, creating a serious problem: AI models are now learning from other AI content, which can cause them to get progressively worse over time.
| Characteristic | Phase I: The Early Web (1990s – early 2000s) | Phase II: The Rise of Social Media (mid-2000s) | Phase III: The Age of Google (2010s – 2022) | Phase IV: The AI Flood (2022 – Today) |
|---|---|---|---|---|
| Why People Created | To share passions and build communities | To get likes, go viral, and make money | To rank high on Google and get traffic | To create content automatically and cheaply |
| Where People Created | Personal websites, forums | Blogs, social media, video sites (Blogger, Facebook, YouTube) | Search engines, large content websites | AI tools like ChatGPT |
| Creator–Audience Link | Direct, small, niche groups | Interactive, community-focused | Controlled by algorithms, business-like | Distant, no real relationship |
| Key Technology | HTML, dial-up internet, early browsers | Social media platforms, high-speed internet | Search algorithms, analytics tools | Artificial Intelligence (AI), Large Language Models (LLMs) |
| Typical Content | A personal homepage, a fan site, a forum discussion | A blog post, a viral video, a social media profile | A "how-to" guide written for Google, a list article ("listicle") | An AI-written article, an AI summary, a deepfake video |
The Internet Starts to Eat Itself (c. 2022-Present)
The current era of Artificial Intelligence (AI), which took off with the release of tools like ChatGPT in late 2022, is the logical conclusion of the previous phase (The Age of Google). AI models are extremely good at the exact task that SEO taught human writers to do: analyze huge amounts of existing text and create a new summary of it. The only difference is that an AI can do it in seconds, for almost no cost.
The result has been a massive flood of AI-generated, or "synthetic" content.
A 2025 study of 900,000 newly published web pages found that nearly 75% of them contained at least some AI-generated content.
The amount of AI content in the top 20 Google search results has grown dramatically, from around 2% in 2019 to nearly 20% by mid-2025.
Some experts now believe that 90% of all online content could be generated by AI as early as 2025.
AI Slop
This flood of AI content is making the internet a less reliable place. The web is being filled with what some call "AI slop": generic, repetitive, and often boring content that lacks any real human insight or experience.
This has made services like Google Search less useful. Around mid-2024, a wave of reports highlighted a growing problem with search quality: many users complained that results were increasingly cluttered by low-quality, AI-written websites created only for ad revenue.
Even Google's own AI-powered summaries have given dangerously wrong answers, like telling people to put glue on pizza or to eat rocks. This erodes the trust we have in the internet as a source of good information.
Model Collapse
The biggest long-term threat is a problem called "model collapse". This is a feedback loop where new AI models are trained on the low-quality AI content created by older models. It's like making a photocopy of a photocopy: each new copy comes out blurrier.
As AI models learn from other AI-generated text, they start to forget the diversity and richness of real, human-created information. Researchers have found this happens in two stages (the toy simulation after this list makes the dynamic concrete):
- First, the models forget rare or unusual information. They learn the most common patterns, which makes their output very generic.
- As the cycle continues, the models' understanding of the world becomes distorted. Their output becomes a flat, boring copy of a copy.
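To make the photocopy analogy concrete, here is a toy simulation of stage one. It deliberately assumes the simplest possible "model": a word-frequency table re-estimated from its own finite output. All the numbers are illustrative, not taken from any study, and real LLM training is vastly more complex, but the same tail-loss dynamic shows up even in this sketch:

```python
# Toy model-collapse simulation (illustrative only): each "generation"
# trains on a finite sample of the previous generation's output.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: a long-tailed "human" vocabulary (Zipf-like:
# a few common words, many rare ones).
vocab_size = 1_000
probs = 1.0 / np.arange(1, vocab_size + 1)
probs /= probs.sum()

for generation in range(1, 11):
    # "Publish" a finite corpus by sampling from the current model,
    # then "train" the next model by re-estimating word frequencies.
    corpus = rng.choice(vocab_size, size=5_000, p=probs)
    counts = np.bincount(corpus, minlength=vocab_size)
    probs = counts / counts.sum()
    # Once a rare word's count hits zero, no later model can ever
    # produce it again -- the tail of the distribution dies first.
    alive = int((probs > 0).sum())
    print(f"generation {generation:2d}: {alive:4d} of {vocab_size} words survive")
```

Each run loses vocabulary monotonically: a word that falls to zero probability can never come back, which is exactly the "forgetting rare information" stage described above.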
This feedback loop is polluting the internet's knowledge base. The web, once the main source of training data for AI, is now contaminated by AI's own output. This has led some researchers to treat data created before 2022 as a precious, "uncontaminated" resource.
Not everyone agrees, though. Some experts argue that fears of model collapse are exaggerated, a kind of doomsday scenario: degradation from training on AI-generated data is a real concern, but the most severe predictions are unlikely to play out in the real world. With careful data curation and a continued emphasis on high-quality, human-generated training data, the most catastrophic outcomes can be avoided.
The goal of content creation is shifting. It's no longer about informing a human, but about creating a massive amount of content as cheaply as possible to fill up digital space and be seen by search engines. This turns the web into a giant, automated content farm.
Even so, model collapse is a serious threat. The internet has been our collective memory. If that memory becomes corrupted by endless, degrading copies, we could enter a new kind of digital dark age, where finding true, original knowledge becomes nearly impossible because it's buried under a mountain of synthetic junk.
Human Creativity in the Age of AI
The rise of AI presents two main challenges to human creativity.
- Economic Threat: For many jobs, like writing marketing copy or making simple graphics, AI can now produce "good enough" content much faster and cheaper than a human. As businesses look to save money, many creative professionals could lose their jobs, which might mean less human creativity overall.
- Direct Threat: AI could one day become more creative than humans. For now, though, AI has major limitations: its creativity is just very good pattern-matching, and it doesn't come from real understanding or lived experience.
AI lacks the key things that make human creativity special:
- AI cannot feel joy or sadness, nor can it have the sudden flashes of insight that lead to great art.
- AI doesn't understand culture or social situations in the way humans do.
- AI can only remix what it has learned from its training data. It can't create something completely new from a unique personal vision.
AI as a Creative Partner
This leads to a more positive view: AI not as a replacement, but as a powerful assistant that can help humans be more creative.
In this partnership, AI can do the boring work and spark new ideas.
Do the Boring Work
Every creative project has tedious tasks. AI can handle things like basic research, cleaning up audio, or drafting simple text. This frees up human creators to focus on the artistic vision and emotional heart of the work.
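As a small illustration of delegating this "boring work", here is a sketch using the OpenAI Python SDK; the model name and prompt are placeholder assumptions, and any LLM provider's API would play the same role:

```python
# A minimal sketch of handing a tedious first-draft task to an LLM.
# Assumes the official OpenAI SDK (`pip install openai`) and an
# OPENAI_API_KEY in the environment; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You draft rough copy for a human editor to rework."},
        {"role": "user", "content": "Draft five alternate descriptions for a hiking backpack."},
    ],
)

# The human creator takes over from here: picking, cutting, rewriting.
print(response.choices[0].message.content)
```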
Spark New Ideas
By mixing ideas in unexpected ways, AI can give creators a starting point for something new. Musicians, for example, use AI to generate musical phrases that they then arrange into a finished song reflecting their human vision. Visual artists use tools like DALL-E to quickly test out different ideas and styles. In this mode, humans shift from being creators to being curators.
The New Value of Authenticity
In a world flooded with fake and generic AI content, authenticity, real expertise, and genuine human experience are becoming more valuable than ever. The content that AI can't create (personal stories, unique insights from real life, a distinct point of view) will be what people seek out and are willing to pay for.
The internet's content will likely split into two tiers. At the bottom will be a huge amount of cheap, mass-produced AI content. At the top will be a premium level of authentic, human-made content that focuses on originality and emotional connection.
In this new world, the most important skill for a creator won't just be making things, but having good taste and a strong vision. The successful creator will be someone who can use AI as a powerful assistant to generate ideas, but then use their uniquely human judgment to shape that raw material into something truly meaningful and authentic.
Conclusion
The Story So Far
The internet is now feeding on itself, risking a future where information quality continues to decline. It has become difficult to tell the difference between something made by a human and something made by a machine.
A New Solution?
In this new reality, we need a way to trust what we see online. The first idea was to build AI content detectors, but these are in a constant battle with AI models designed to trick them. A better, more lasting solution is not to just find what's fake, but to prove what's real. This is the idea of content provenance: creating a clear, verifiable record of where a piece of content came from.
A group of major tech and media companies, including Adobe, Microsoft, and the BBC, has formed the Coalition for Content Provenance and Authenticity (C2PA), which maintains an open standard for exactly this.
This system works through something called Content Credentials, which act like a secure "nutrition label" for digital content.
An Example: How C2PA Works
- When a photo is taken or a file is created, the creator can attach Content Credentials: a cryptographically signed piece of metadata (the "label"), so you can tell if it has been tampered with.
- This "label" contains key information, such as who created the content, what tools were used (including if AI was involved), and a history of any changes made to it.
- Camera makers like Leica, Nikon, and Canon are building this technology directly into their cameras. Software companies like Adobe are including it in their tools, and you will start to see a "CR" icon on content that carries these verifiable credentials.
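To see the mechanism behind that "label", here is a minimal sketch of the idea: bind a metadata manifest to the file's hash and sign it. This is not the real C2PA manifest format (which embeds standardized, signed manifests inside the file itself); the field names and the Ed25519 key handling below are illustrative assumptions, using the third-party Python `cryptography` package:

```python
# A minimal sketch of the *idea* behind Content Credentials: a signed
# "nutrition label" bound to the content's hash. Illustrative only;
# the real C2PA standard defines a much richer embedded format.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def make_credentials(file_bytes: bytes, creator: str, tool: str, key: Ed25519PrivateKey):
    """Create a signed label recording who made the content and how."""
    manifest = {
        "sha256": hashlib.sha256(file_bytes).hexdigest(),
        "creator": creator,
        "tool": tool,  # e.g. records whether AI was involved
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return manifest, key.sign(payload)


def verify_credentials(file_bytes: bytes, manifest: dict, signature: bytes, public_key) -> bool:
    """Check the signature AND that the file still matches its hash."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
    except InvalidSignature:
        return False  # the label itself was tampered with
    return manifest["sha256"] == hashlib.sha256(file_bytes).hexdigest()


key = Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."
manifest, sig = make_credentials(photo, "Jane Doe", "Nikon Z6 (no AI)", key)

print(verify_credentials(photo, manifest, sig, key.public_key()))         # True
print(verify_credentials(photo + b"!", manifest, sig, key.public_key()))  # False: content changed
```

The key design point is that the signature covers the hash of the content itself, so changing either the file or its label breaks verification; that is what lets a "CR" badge be more than decoration.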
The Future of a More Trustworthy Internet
The internet may feel like a junkyard, but we are not without hope. The way forward is to give people the tools to tell the difference between real and fake. Content provenance standards like C2PA do just that. They allow us to ask important questions about the content we see (Who made this? How was it made?) and get trustworthy answers. This helps real human creators prove the authenticity of their work and stand out from the sea of AI-generated media.
The future of originality online won't be about banning AI, but about building a new culture of verification. By using these tools, we can move from a world of doubt to one where we can trust what we see, ensuring that even in the junkyard, the real treasures can still be found.