Rick Cogley

Posted on • Originally published at cogley.jp

The AI vs Creator Dilemma

The web runs on a fundamental bargain: creators publish content, readers consume it, and somewhere in between, value flows back—through advertising, subscriptions, or traffic that converts to business opportunities. AI has upended this equation in ways we're only beginning to understand.

This week brought a crystallizing, sobering moment: Tailwind CSS laid off 75% of its engineering team despite being more popular than ever. The framework powers the design of sites for NASA, Shopify, and countless others. Reportedly, revenue dropped 80% and docs traffic fell 40%. The driver wasn't quality or competition; it was AI coding assistants answering questions without anyone visiting the docs.

If your business model depends on people visiting your content, the Tailwind situation should be a wake-up call. This is a preview of what's coming.

The Value Chain Has Broken

Think of how things used to work. A front-end developer building a site has a question about Tailwind utility classes. They search, land on tailwindcss.com, find an answer, and maybe notice Tailwind Plus and decide to buy it. We did. The documentation was both the product support and the marketing funnel.

Now? The same developer probably asks Cursor, Gemini, or Claude Code, and gets an answer instantly without ever leaving their editor. The question gets answered, Tailwind gets used, but Tailwind Labs sees no traffic, no views of paid-product landing pages, and therefore no business conversion opportunities.

The huge and bitter irony is that the AI models these coding assistants use were trained on that same documentation. The value flowed in one direction only.

Two Responses to the Same Problem

The divergence in how popular projects are responding reveals a genuine philosophical split.

The Embrace Strategy: Svelte

The Svelte team has taken the opposite path. As part of Advent of Svelte 2024, they introduced comprehensive AI support through standardized llms.txt documentation files. I heard on a podcast that their docs now come in multiple formats: full, for comprehensive access, and distilled versions optimized for AI agents working with a limited context window.
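The llms.txt convention itself is simple: a Markdown index at the site root that points AI agents at canonical documentation sources. A minimal sketch of the shape, with illustrative paths rather than Svelte's actual files:

```
# Svelte

> A framework for building web applications.

## Docs

- [Full documentation](https://example.com/llms-full.txt): everything, for large context windows
- [Distilled documentation](https://example.com/llms-small.txt): abridged, for small context windows
```

The format is just an H1 title, a blockquote summary, and sections of annotated links, which is why adding support is usually a small pull request.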

Svelte is also helping developers write better Svelte 5 code via a dedicated MCP (Model Context Protocol) server that AI coding assistants can leverage. The server provides tools for listing documentation sections, retrieving relevant docs on demand, and even a svelte-autofixer that analyzes generated code for issues and best practices. You can add it to Claude Code with a single command: npx sv add mcp.
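For assistants that take manual configuration, MCP servers are typically registered in a JSON file. This is an illustrative sketch of a project-level .mcp.json entry for Claude Code; the server name and launch arguments here are assumptions, so prefer the npx sv add mcp command above, which writes the correct entry for you:

```json
{
  "mcpServers": {
    "svelte": {
      "command": "npx",
      "args": ["-y", "sv", "mcp"]
    }
  }
}
```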

This represents the logical endpoint of the "embrace" philosophy: don't just make your documentation available to AI—build infrastructure that makes AI better at your ecosystem. It is a feedback loop: developers using Claude Code or Codex to scaffold Svelte projects will have better experiences because of the MCP, leading to more Svelte adoption.

Whether this translates to sustainable revenue for the Svelte team remains an open question. But so far, it's a fundamentally different bet than the one Tailwind is making.

The Protect Strategy: Tailwind

When a community member submitted a pull request adding llms.txt support to Tailwind's documentation site, founder Adam Wathan rejected it. Not because he opposes the feature in principle, but because he's trying to save his business first.

His explanation was raw:

"The reality is that 75% of the people on our engineering team lost their jobs here yesterday because of the brutal impact AI has had on our business. And making it easier for LLMs to read our docs just means less traffic to our docs which means less people learning about our paid products and the business being even less sustainable."

The discussion thread got heated. Some called him short-sighted; others pointed out the irony: AI companies worth billions built products trained on freely available documentation, then redirected the value that documentation used to capture. Of course they did. Both sides are being rational, I think, and neither is entirely right.

The Attribution Gap

Here's where I can't help but get a bit cynical. Some AI providers do indeed cite their sources. For example, Anthropic includes citations in its responses when Claude uses web search, linking back to the original content. And while I think that's a step in the right direction, the citation alone doesn't solve the problem.

A link in a footnote doesn't generate the traffic, engagement, or conversion opportunities that someone actually reading your content does. It's better than nothing, but when people get everything neatly packaged by AI, they inevitably pay less attention to where the information came from. A link is simply not the same as what was lost.

The core issue is that content has become training data and retrieval source material without the consent, compensation, or even acknowledgment of the people who created it. Your blog post, your documentation, your carefully crafted tutorial: all of it gets fed into a model that generates revenue for someone else. Then again, maybe it's the "same as it ever was", thinking of Google's ad-revenue model.

Cloudflare's Interesting Bet

Cloudflare has been building toward something potentially significant. In September 2024, they announced AI Audit, tools that show how AI bots access your content so you can make informed decisions about blocking or allowing them. On July 1, 2025, their "Content Independence Day", they rolled out "Pay per Crawl" in private beta: a marketplace where content creators set prices AI crawlers must pay to access their material. They also became the first major infrastructure provider to block AI crawlers by default on new domains.

The technical mechanism uses HTTP 402 (Payment Required), a status code reserved since HTTP/1.1 in 1997 but rarely used. When an AI crawler requests content, the publisher returns either the content with 200 OK, or a 402 with pricing information. Cloudflare acts as the payment intermediary. Major publishers including Condé Nast, TIME, The Associated Press, and Fortune have signed on.
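To make the handshake concrete, here is a minimal Python sketch of the policy a publisher's edge could apply. The crawler names and the "crawler-price" header are assumptions for illustration; Cloudflare's actual beta protocol is not fully public:

```python
# Hypothetical pay-per-crawl response policy, not Cloudflare's real protocol.
PRICED_CRAWLERS = {"GPTBot": 0.01, "ClaudeBot": 0.01}  # USD per request

def handle_request(user_agent: str, paid: bool = False):
    """Return (status, headers, body) for a content request."""
    crawler = next((c for c in PRICED_CRAWLERS if c in user_agent), None)
    if crawler is None or paid:
        # Browsers, unpriced bots, and crawlers that agreed to pay get content.
        return 200, {}, "<html>article body</html>"
    # Known AI crawler with no payment: answer 402 with the asking price.
    price = PRICED_CRAWLERS[crawler]
    return 402, {"crawler-price": f"USD {price:.2f}"}, ""
```

The crawler then decides whether the price is worth paying and, if so, retries with payment attached; the whole negotiation rides on plain HTTP semantics.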

It's clever, but I'm skeptical it will actually work. Consider the economics:

  • AI companies are already operating at significant losses
  • Investors expect returns, not new expense categories
  • The largest AI players have already scraped most of the open web
  • Future training may shift toward synthetic data and licensed premium sources

The power asymmetry is stark: a handful of well-funded AI companies have enormous leverage, while millions of individual creators have almost none. Cloudflare's marketplace assumes AI companies will voluntarily pay for content they can often get by scraping anyway. That's a bad bet on goodwill from in-the-red organizations whose business models require content costs at or near zero.

The Real Stakes

What happens if we get this wrong?

The optimistic case: A sustainable model emerges. Content creators get compensated when AI uses their work. Quality content continues to be produced because there's economic incentive to produce it. The AI ecosystem and the content ecosystem achieve symbiosis.

The pessimistic case: Value extraction continues until the sources dry up. Independent creators stop publishing detailed technical content because there's no return. Documentation becomes shallow and defensive. AI models eventually degrade because the content they depend on stops being created.

We've seen this pattern before. Ad-supported content models have repeatedly collapsed into a race to the bottom: clickbait, SEO gaming, engagement farming. The question is whether AI will accelerate that collapse or create pressure for something better.

My Personal Choice: Selective Participation

For my own content, I've decided to participate selectively. I'll use Cloudflare's tools to block AI crawlers that don't cite sources. For crawlers that do attribute, like the ones that power Claude's web search with proper citations, I'm willing to have my content accessible to AI.
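At the simplest level, the same selective policy can be expressed in robots.txt. The bot names below are real published user agents, but vendors' lists change, so verify them before relying on this; note also that robots.txt is purely advisory, whereas Cloudflare's edge rules are actually enforced:

```
# Block crawlers used for bulk training with no attribution
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Everything else, including citing search crawlers, is allowed
User-agent: *
Allow: /
```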

This isn't a principled stand, really; it's pragmatic self-defense. If my articles are going to be used, I want at least an acknowledgment of where the information came from: a citation that might lead someone back to the original source, a thin thread connecting the AI's answer to the human who wrote it.

The llms.txt standard represents something interesting: creators proactively optimizing for AI consumption rather than just having their content scraped. But until there's real compensation, I'm not interested in making extraction more efficient for systems that give nothing back.

What Would Actually Help

If I could design it, here's what a better system might look like:

  1. Mandatory attribution with real weight: Not just a footnote, but prominent source linking in AI responses. Make the citation as visible as the answer itself. This is a decision about UX that AI companies can make, keeping the content's origin front and center.

  2. Revenue sharing based on actual usage: Track when specific content informs AI responses. Pay creators proportionally. Perplexity has started doing this with their Publisher Program, though the amounts are modest.

  3. Opt-in training with compensation: Clear consent mechanisms for whether content can be used for training vs. just retrieval, with different compensation structures for each.

  4. Crawler transparency: Standardized identification of AI crawlers, their intended use (training vs. retrieval), and their citation policies. Let creators make informed decisions.

  5. Small-creator access to deals: Right now, licensing deals go to News Corp and The New York Times. The long tail of creators, the millions of blog posts and tutorials and documentation pages that AI actually depends on, gets nothing.

The Inevitable Adaptation

The web is going to change. That's not a prediction; it's already happening. The question is whether that change serves creators, AI companies, or both.

I'm not anti-AI. I use Claude extensively for my own work and am integrating it and other generative AI into my firm eSolia's ops. But I want to participate in an ecosystem where value flows both directions, where the intelligence I consume is matched by fair treatment of the intelligence I create.

For now, that means blocking bad actors, allowing good ones, and watching carefully to see which category various AI providers end up in. It means supporting efforts like Cloudflare's Pay per Crawl even while doubting they'll achieve scale. It means engaging with the llms.txt conversation while remaining skeptical of optimization that primarily benefits extractors.

The old web had its problems. The new web is still being written. I'd like some say in what we write.


This article reflects one creator's perspective on the rapidly evolving relationship between AI systems and content creators. The Tailwind CSS situation broke on January 6, 2026, with details from Adam Wathan's statements on GitHub and his "We had six months left" episode of the Adam's Morning Walk podcast.




Rick Cogley is CEO of eSolia Inc., providing bilingual IT outsourcing and infrastructure services in Tokyo, Japan.
