DEV Community

WP Agency

Posted on • Originally published at infowpagency.blogspot.com

Anthropic Pays Record $1.5 Billion to Authors: The AI Copyright Battle That Changed Everything

So here's a story that's actually pretty wild when you think about it. You know Claude, that AI chatbot that's been making waves lately? Well, its parent company Anthropic just agreed to pay authors a whopping $1.5 billion. Yeah, you read that right – billion with a 'B.'

And honestly, this isn't just about money. This might be the moment that changes how AI companies think about using other people's work forever.

What Actually Happened Here?

Let me paint you the picture. Three authors – Andrea Bartz (she writes thriller novels), Charles Graeber, and Kirk Wallace Johnson (both nonfiction writers) – basically said "Hey, wait a minute" when they found out Anthropic was using their books to train Claude without asking or paying them.

These writers filed a class action against Anthropic last year, arguing that the company unlawfully used millions of pirated books to teach its AI assistant Claude to respond to human prompts.

The kicker? The filing alleges that Anthropic used over 7 million pirated books—sourced from sites like Books3, LibGen, and PiLiMi—to train its large language models. That's not just a few books here and there. We're talking about millions of works that authors spent years writing.

The Numbers That'll Make Your Head Spin

The artificial intelligence company Anthropic has agreed to pay authors roughly $3,000 per book, plus interest. The sum will be "the largest publicly reported copyright recovery in history."

Think about that for a second. If you're an author and your book was used to train Claude, you're getting $3,000 just like that. Not bad, right?

But here's what really gets me – the company agreed to destroy the datasets that contained the pirated books. So they're not just paying up; they're actually getting rid of the stuff they shouldn't have used in the first place.

This Wasn't Always a Slam Dunk

Here's where it gets interesting. In June, a judge ruled that Anthropic's use of books to train its AI models was "fair use." So technically, they could've kept fighting this case.

But they didn't. Why? Well, my guess is that even if you win in court, spending years fighting authors while everyone's watching probably isn't great for business. Especially when you're trying to convince people that your AI company is one of the good guys.

What This Really Means for AI Companies

You know what's actually fascinating about this? It's not just about Anthropic. Every major AI company – OpenAI, Google, Meta, you name it – has been training their models on massive amounts of text from the internet. And a lot of that text? Well, let's just say the copyright situation is... complicated.

The lawsuit disputes the idea that AI systems are learning the way humans do, noting "Humans who learn from books buy lawful copies of them, or borrow them from libraries that buy [them]."

That's actually a pretty good point. When I read a book to learn something, I either bought it or borrowed it from a library that bought it. The AI companies? Not so much.

The Authors Who Started It All

I've got to give credit where it's due. Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson didn't have to take on a massive AI company backed by Amazon and Google. They alleged Anthropic AI used the contents of millions of digitized copyrighted books to train the large language models behind their chatbot, Claude, including at least two works by each plaintiff.

These weren't just random authors looking for a payday – they found their actual books being used without permission. And honestly, wouldn't you be pretty upset if someone took your work and used it to make money without asking?

The Bigger Picture

The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in how AI companies handle copyrighted content.

Think about it this way: if you're running an AI startup right now, you're probably thinking twice about just scraping whatever content you can find on the internet. This settlement basically says "you can't just take whatever you want and figure out the legal stuff later."

And to be honest, that's probably a good thing. Don't get me wrong – I love AI and what it can do. But there's got to be a fair way to do this that doesn't involve essentially stealing millions of books.

What's Next?

Here's my take: this won't be the last settlement we see. There are similar lawsuits against other AI companies, and now that there's a precedent for billion-dollar payouts, you can bet more authors (and musicians, and artists, and anyone else whose work got scraped) are going to be paying attention.

"This result is nothing short of remarkable," said Justin Nelson, one of the attorneys representing the authors. And honestly, I agree. A few years ago, who would've thought we'd see AI companies paying authors $1.5 billion?

The Bottom Line

Look, I'm not against AI. I think it's incredibly powerful and has the potential to help us in amazing ways. But this settlement feels like a step toward doing AI the right way – with permission, with compensation, and with respect for the people whose creative work makes these systems possible.

Will this slow down AI development? Maybe a little. Will it make it more expensive? Probably. But it might also make it more fair, and honestly, I think that's worth something.

What do you think? Are you surprised by how much Anthropic agreed to pay, or did you see something like this coming? And if you're a writer, are you wondering whether your work might've been used to train an AI somewhere?

I'd love to hear your thoughts on this whole situation. It feels like we're watching history being made in real time.
