
Denis Moroz

Posted on • Originally published at denismoroz.ai

What's Actually Happening in AI Right Now (Explained Like I'm Talking to a Friend)

You've seen the headlines. "AI breaks record." "New model released." "Safety protocol triggered." It's a lot. Most of it is written for people who already know what they're reading about.

I'm going to fix that. Here are the three biggest things happening in AI right now, explained the way I'd explain them to a friend over coffee.


1. Anthropic Built Something So Powerful They Decided Not to Release It

This is the one that stopped me in my tracks.

Anthropic — the company behind Claude, one of the most capable AI assistants out there — confirmed they built a new model called Claude Mythos 5. It's the first AI model to cross the 10-trillion-parameter mark, which is a way of saying it's genuinely enormous in terms of complexity.

And then they didn't release it.

Why? Because it triggered their internal ASL-4 safety protocol. ASL-4 is a classification Anthropic uses for models that are approaching capabilities they consider genuinely dangerous — not "it might write mean emails" dangerous, but "this could contribute to mass-casualty-level events in the wrong hands" dangerous.

Here's what's remarkable about this: a company voluntarily shelved a product they spent probably hundreds of millions of dollars building because their own safety red lines were crossed.

You can read that two ways. Cynically: it's a PR move — they get credit for being responsible while staying competitive. Generously: this is exactly how you'd want a powerful AI company to behave.

I lean generous, but I'm watching closely. The fact that this conversation is happening at all tells you we're entering genuinely new territory.

What this means for you: Nothing changes in your day-to-day AI use. The Claude you use (including Claude.ai and apps built on it) is a different model. But the fact that a major AI lab hit a self-imposed safety ceiling is worth knowing. It sets a precedent.


2. GPT-5.4 Is Out, and It's the Most Capable Public AI Model I've Ever Used

On the other end of the spectrum: OpenAI shipped GPT-5.4 in March, and it's the real deal.

Previous AI models were specialists. You'd use one for coding, another for writing, another for research. GPT-5.4 is the first public model that leads across all those categories at once — coding, reasoning, writing, knowing things, using your computer. One model, no tradeoffs.

The "Thinking" version of GPT-5.4 scored 75% on a benchmark called OSWorld-Verified, which tests how well AI can complete real desktop tasks (booking a flight, editing a spreadsheet, that kind of thing). That's a 28-point jump over the previous version and better than most humans score on the same test.

I've been using it. The honest take: it's noticeably better at staying on task for complex, multi-step things. It's less likely to hallucinate in ways that feel plausible but are wrong. And it's faster than I expected.

What this means for you: If you're using any AI assistant for work, now is a good time to test GPT-5.4 if you haven't. Whether it's worth upgrading your subscription depends on what you're using AI for — but for anything involving reasoning or multi-step tasks, it's a meaningful upgrade.


3. Someone Figured Out How to Make AI Use 100x Less Energy

This one doesn't have a brand name attached to it, which is probably why you haven't heard about it. But it matters.

A research team published a paper showing that combining neural networks (the math-heavy approach most modern AI uses) with old-school symbolic reasoning (basically, logic rules that humans write) can slash AI's energy consumption by up to 100 times while actually improving accuracy.
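To make the idea concrete, here's a toy sketch of what "neural plus symbolic" means in practice. The rules, the stand-in model, and the spam example are all invented for illustration, not taken from the paper; the point is just the shape of the trick: cheap hand-written logic handles the easy cases, and the expensive learned model only runs on whatever falls through.

```python
# Toy neurosymbolic sketch: symbolic rules filter the easy cases cheaply,
# so the costly "neural" model runs as rarely as possible.
# Everything here (rules, stub model, example messages) is hypothetical.

def symbolic_rules(text):
    """Hand-written logic. Returns a verdict, or None if unsure."""
    if "free money" in text.lower():
        return "spam"
    if len(text.split()) < 3:
        return "ham"  # too short to be a typical spam pitch
    return None  # fall through to the expensive model

def neural_model(text):
    """Stand-in for an expensive learned model."""
    neural_model.calls += 1  # count how often we pay the big compute cost
    return "spam" if "winner" in text.lower() else "ham"

neural_model.calls = 0

def hybrid_classify(text):
    """Try the cheap rules first; only call the model when they punt."""
    verdict = symbolic_rules(text)
    return verdict if verdict is not None else neural_model(text)

messages = [
    "FREE MONEY now!!!",
    "ok",
    "You are a winner, claim your prize",
]
results = [hybrid_classify(m) for m in messages]
print(results, "| expensive model calls:", neural_model.calls)
# Only the third message ever reaches the "neural" model.
```

In this toy, two of three messages never touch the expensive model at all. Scale that filtering up across billions of queries and you can see where the claimed energy savings would come from.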

To put that in perspective: AI training currently consumes roughly as much electricity as a small country. Data centers running AI are one of the fastest-growing sources of electricity demand worldwide. If this approach scales — and that's still an if — it could reshape the economics and environmental footprint of the entire industry.

This isn't a product. It's a research result. It'll take years to show up in things you use. But it's the kind of foundational shift that looks obvious in retrospect.

What this means for you: Nothing immediately. But if you care about AI being sustainable long-term (and you should, because runaway energy costs put a ceiling on how far this technology can go), this is early good news.


The One-Sentence Summary

April 2026 in AI: one company built something too powerful to release, one company released the most powerful public model ever, and researchers found a way to make all of it a lot cheaper to run.


That's it for this week. If you found this useful, forward it to one person who keeps asking you what's going on in AI. That's the whole goal here — making this stuff make sense.

Next up: the AI tools I actually use every week (and the ones I've deleted). Dropping in a few days.


Tags: AI news, AI explained, ChatGPT, Claude, Anthropic, OpenAI, non-technical AI
