Ethan Zhang

What Happened in AI This Week: From ChatGPT's $3B Milestone to Disney's OpenAI Partnership

Grab your coffee, because AI just had another absolutely wild week. We're talking billion-dollar deals, model releases that triggered "code red" alerts, and some genuinely concerning developments on the security front. If you blinked, you missed a lot.

So here's the thing: the AI landscape is moving so fast that even industry insiders struggle to keep up. This week alone, we saw major corporate partnerships, breakthrough technical releases, and growing evidence that we need to talk seriously about AI safety. Whether you're building with AI, investing in it, or just trying to understand where this is all heading, these stories matter.

Let me break down what actually happened this week in a way that makes sense over your morning brew.

The Business of AI: Billions Are Flying Around

ChatGPT Just Hit $3 Billion in Mobile Revenue

According to TechCrunch, ChatGPT's mobile app crossed the $3 billion mark in consumer spending. Yeah, you read that right. Three billion dollars. From a mobile app that barely existed two years ago.

This isn't just a feel-good number for OpenAI's investors. It signals something bigger: people are actually willing to pay for AI tools. We're past the "cool demo" phase and firmly into "I need this in my daily workflow" territory.

Disney Bets $1 Billion on OpenAI (And Brings Mickey Along)

But wait, it gets better. Ars Technica reports that Disney dropped $1 billion on OpenAI and licensed 200 of their characters for use in Sora, OpenAI's video generation platform.

Think about what this means. Disney, the company that guards their IP like Smaug guards gold, just said "here, use Mickey Mouse in your AI video generator." This is Hollywood putting serious money where their mouth is on AI-generated content. Whether that's exciting or terrifying probably depends on whether you're a Disney executive or a visual effects artist.

ChatGPT Launches an App Store

As if the revenue numbers weren't enough, ChatGPT just opened up an app store, effectively telling developers: "Build on our platform." This is the classic tech playbook: Apple did it with the iPhone, and Salesforce did it with their platform. Now OpenAI wants to become the infrastructure that other AI applications run on.

For developers, this is huge. The ChatGPT app store means you can build specialized AI tools and plug them directly into an ecosystem with millions of daily users. It's also OpenAI saying "we're not just an AI lab anymore, we're a platform company."

Oracle's $15 Billion Data Center Bet

Meanwhile, Oracle announced they're spending an additional $15 billion on data centers. Their stock actually dropped on the news because that's a massive capital expenditure. But here's the subtext: companies are preparing for AI workloads to explode. You don't spend $15 billion unless you're expecting serious demand.

Technical Breakthroughs: The Models Keep Getting Better

OpenAI Drops GPT-5.2 After Google Scare

Now, this is where it gets interesting. Ars Technica reports that OpenAI released GPT-5.2 after declaring a "code red" over competitive threats from Google. Translation: the AI arms race is real, and these companies are absolutely sprinting.

GPT-5.2 isn't just an incremental update. The fact that OpenAI felt pressured enough to push it out ahead of schedule tells you everything about how intense the competition has become. Google's advances with their models apparently spooked OpenAI enough to accelerate their release timeline.

AI That Builds Itself

Here's something that feels straight out of science fiction: OpenAI built an AI coding agent that improves itself. They're using AI to write code that makes the AI better. It's recursion all the way down.

This is a big deal because it suggests we're approaching a point where AI development could accelerate dramatically. When AI can meaningfully contribute to its own improvement, you start getting compound effects. It's exciting. It's also a little unsettling if you think about it too hard over breakfast.
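To make the "compound effects" idea concrete, here's a deliberately tiny sketch of a propose-evaluate-keep loop, the basic shape behind self-improving systems. Everything in it is hypothetical: the real agent presumably rewrites actual code and scores it on real benchmarks, while this toy just mutates a string and scores it by length.

```python
import random

def evaluate(program: str) -> float:
    """Toy fitness function: stand-in for running a benchmark suite.
    Here, longer 'programs' simply score higher."""
    return len(program)

def propose_improvement(program: str) -> str:
    """Stand-in for the model proposing a change to its own tooling.
    Here, just append a random character."""
    return program + random.choice(["a", "b", "c"])

def self_improve(program: str, iterations: int = 5) -> str:
    """Core loop: keep a candidate only if it beats the current best.
    Each accepted improvement becomes the base for the next proposal --
    that feedback is where compounding comes from."""
    best = program
    for _ in range(iterations):
        candidate = propose_improvement(best)
        if evaluate(candidate) > evaluate(best):
            best = candidate
    return best
```

The interesting property isn't any single step; it's that each accepted improvement raises the baseline for every step after it, which is why people reach for the word "recursion" here.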

Open Source Fights Back

Not everything is happening behind closed doors. A new open-weights AI coding model from Mistral is reportedly closing in on proprietary options. This matters because it means the open-source community isn't just watching from the sidelines.

The debate between open and closed AI is one of the defining tensions in the industry right now. Proprietary models from OpenAI and Google get a lot of attention, but open alternatives like this ensure that AI capabilities don't become completely locked up behind corporate paywalls.

The Dark Side: AI's Growing Pains

Okay, time to talk about the stuff that should make us all a bit uncomfortable.

Romance Scammers Get Scary Good at Face-Swapping

WIRED uncovered an ultra-realistic AI face-swapping platform called Haotian that's being used extensively in romance scams. We're talking "nearly perfect" face swaps during live video chats. You think you're video calling someone, but it's AI-generated in real-time.

The platform, distributed mainly via Telegram, was reportedly making millions. WIRED's investigation got the main channel shut down, but here's the problem: the technology is out there now. If one team can build it, others will too. The days of trusting video calls as proof of identity are numbered.

8 Million Users Had Their AI Conversations Harvested

According to Ars Technica, browser extensions with 8 million users combined were harvesting people's AI conversations. Think about what you've typed into ChatGPT. Your ideas, your questions, maybe company information. Now imagine all of that being siphoned off by a browser extension you thought was harmless.

This is exactly the kind of privacy nightmare people warned about. As AI becomes more integrated into our workflows, the attack surface for data harvesting grows dramatically.

"Slop": Word of the Year

In what might be the most cutting commentary on AI, Merriam-Webster named "slop" their word of the year, a direct reference to low-quality AI-generated content flooding the internet.

Here's where we are: AI has gotten good enough to generate massive amounts of content, but not good enough for most of it to be actually useful. The result? Content pollution. Articles that look legitimate but are just AI rehashing other AI content in an endless loop of mediocrity.

Photo Manipulation Becomes Trivial

And finally, OpenAI's new ChatGPT image generator makes faking photos easy. Not "possible with technical skill" easy. Just... easy. Type a prompt, get a fake photo that looks real.

We're entering an era where "pics or it didn't happen" no longer works as evidence. Every image needs to be treated with skepticism. That's a huge cultural shift, and we're not ready for it.

What This All Means

So here we are. AI is simultaneously hitting massive commercial success, achieving impressive technical breakthroughs, and creating serious security and trust problems.

The Disney deal and ChatGPT's $3 billion revenue show that AI has real business staying power. The GPT-5.2 release and self-improving coding agents show the technology is advancing faster than most people realize. And the scams, privacy violations, and content pollution show we're building faster than we're thinking through the consequences.

What should you watch for next? Regulatory responses are coming, whether it's around AI-generated content labeling, deepfake regulations, or privacy requirements for AI applications. The debate between open and closed AI will continue intensifying. And we'll probably see more traditional companies following Disney's lead in making major AI investments.

One thing's for sure: you're going to need a lot more coffee to keep up with where this is all going.
