I want to be honest with you: I almost didn't write this post.
Not because I don't have thoughts on it. I have too many. And that's the problem. Every week there are approximately four hundred new AI announcements, three conflicting hot takes, two long-form think pieces that completely contradict each other, and one tweet that goes viral for being confidently wrong.
It's exhausting. And I think most of it doesn't matter.
So here's my actual filter. The one I use to decide what's worth reading, what's worth testing, and what's worth completely ignoring.
The noise I skip
"AI will replace [profession] in X years." I have read this headline about lawyers, writers, radiologists, programmers, and teachers. For some of these I've seen confident estimates ranging from 2 to 25 years, sometimes in the same week. Nobody knows. The models doing these predictions are guessing. Move on.
New model benchmark announcements. Every new model is the "best ever" on some benchmark. Benchmarks are useful for researchers comparing controlled capabilities. They're not useful for figuring out whether you should change your workflow. I care about what the model can actually do in the context I use it. That requires trying it, not reading the press release.
AI company raises massive round. Good for them. This tells me the investors think there's money to be made. It tells me nothing about whether the product is good or whether you should use it.
\"AI is just a bubble.\" Maybe. The internet was also a bubble in 2000 and it also ended up being the most significant infrastructure of the 21st century. \"Bubble\" and \"real and important\" are not mutually exclusive.
The signal I pay attention to
Behavior changes in people I trust. Not what they say they believe about AI — what they actually do. If a developer I respect switches to a new coding tool and sticks with it for six months, that's more signal than a hundred benchmarks.
Things that work without a tutorial. Good tools don't require you to learn how to prompt them. If I have to take a course to get value out of something, the thing isn't ready yet. The best AI tools I've used feel obvious on first use.
New capabilities, not new interfaces. The AI space is full of wrappers — products built on top of GPT-4 or Claude that add a specialized UI. Some of these are useful. But they're not new capability. Actual new capability is when something becomes possible that genuinely wasn't before. That's rare. That's worth paying attention to.
When non-technical people start using something. Not because that makes it legitimate, but because it means the interface problem has been solved. Broad adoption by non-technical users is a reliable indicator that a tool has actually crossed the usability threshold. It took years for the smartphone to get there. AI tools are starting to get there in months.
What I think is actually worth your attention right now
I'll be specific, since I just told you I hate vague takes.
The agentic shift. AI systems that can take a sequence of actions — browse the web, write a file, run code, send an email — without constant human input. This is where the work is happening right now, and it's early enough that the patterns aren't set yet. If you want to understand where things are going in the next 12–18 months, this is it.
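To make the "sequence of actions without constant human input" idea concrete, here's a minimal sketch of the loop that every agentic system shares: the model picks an action, something executes it, and the result feeds back into the next decision. Everything here is hypothetical — model_decide is a stand-in for a real LLM call, and the action names are made up for illustration.

```python
def model_decide(goal, history):
    """Hypothetical stand-in for an LLM call: given the goal and what
    has happened so far, return the next (action, argument) pair."""
    if not history:
        return ("search", goal)
    if history[-1][0] == "search":
        return ("write_file", "notes.txt")
    return ("done", None)

def run_agent(goal, max_steps=10):
    """The core agentic loop: ask the model for an action, record it,
    and repeat until the model says it's done or we hit a step limit."""
    history = []
    for _ in range(max_steps):
        action, arg = model_decide(goal, history)
        if action == "done":
            break
        # A real agent would execute the tool here (browse, write, run code)
        # and append the tool's output, not just the action name.
        history.append((action, arg))
    return [a for a, _ in history]

print(run_agent("summarize today's AI news"))  # → ['search', 'write_file']
```

The loop itself is trivial; the hard, unsettled parts are what goes inside model_decide and how tool results get fed back — which is exactly why the patterns aren't set yet.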
Voice interfaces maturing. The gap between voice as a party trick and voice as an actual interface is closing fast. I'm not talking about Alexa. I'm talking about full conversational interfaces that can handle ambiguity, follow context, and take action. The latency is still too high, but it's dropping.
The commoditization of base models. Claude, GPT-4, Gemini, and others are getting close enough that the underlying model matters less than the integration, the context, and the interface. This changes the competitive landscape significantly and shifts the interesting work to the application layer.
The honest version
I don't know which companies will win. I don't know which models will matter in three years. I don't know if AGI is two years away or twenty.
What I do know: the tools available today are already meaningfully useful if you're willing to actually use them instead of just reading about them. And the space is moving fast enough that your filter matters more than your forecast.
Stop trying to predict. Start paying attention to what's working.
This is the kind of honest take I share regularly at denismoroz.ai. If you want the actual signal without the noise, the newsletter is where that lives.