DEV Community

Alberto Pertusi

Posted on • Originally published at alpe.dev

AI Is Not the Metaverse, But the Dealer Is the Same

Everyone has an opinion on AI these days. Your LinkedIn feed is drowning in it. Your uncle at Sunday lunch has one too, and he still uses Internet Explorer.

So here's mine. Not because the world needs another hot take, but because I'm tired of reading everyone else's bad ones.


AI Is an Incredible Tool. Period.

Let's get this out of the way: I think AI is genuinely transformative technology. I'm not being contrarian for the sake of it. I built this entire website in two days with AI assistance. I use it daily for work.

I'm deep in this. I'm not some outsider throwing rocks at a building I've never entered.

But being a daily user of something is exactly what gives you the right to call out the nonsense surrounding it. You don't need to hate AI to be honest about the circus that's been built around it.


No, This Is Not the Metaverse

Every few months someone publishes a piece comparing AI to the metaverse, predicting the same spectacular implosion. Let me be direct: this comparison is lazy and wrong.

The metaverse was a solution looking for a problem. Meta's Reality Labs burned through tens of billions of dollars trying to convince people they wanted to attend meetings as legless avatars. By 2026, the metaverse is essentially a punchline. Nobody's working, shopping, or socializing in persistent virtual worlds. It just... didn't happen.

AI? NVIDIA alone projected nearly $120 billion in revenue for fiscal 2025. Enterprise adoption jumped from roughly 50% in 2022 to over 78% by late 2024. People are actually using this stuff. Real products. Real revenue. Real workflows changing.

Is there hype? Absolutely. Are valuations stretched? Sam Altman himself compared the current moment to the dot-com bubble. But hype around something real is fundamentally different from hype around something nobody wanted. The internet survived the dot-com crash. AI will survive the hype cycle.

It's just not going to look like what the LinkedIn influencers are telling you.


We're Still at the Beginning (So Stop Pretending You've Figured It Out)

Here's what genuinely annoys me: everyone talking with absolute certainty about where AI is going and what you should do about it. These are the same people who were selling NFT courses two years ago and "prompt engineering masterclasses" last year.

We're in the opening minutes of AI adoption. Most companies report using AI in some capacity, but a widely cited MIT study found that 95% of organizations investing in generative AI were getting zero measurable return. Ninety-five percent. Let that sink in.

The numbers tell the same story everywhere you look. Deloitte's 2026 State of AI report found that nearly two-thirds of organizations are still stuck in the pilot stage. Only 6% have fully implemented agentic AI. Only 8.6% have AI agents running in production.

Meanwhile, Menlo Ventures reports that companies spent $37 billion on generative AI in 2025 (up 3.2x from 2024), but only 31% of use cases actually reached production.

Read that again: companies tripled their spending, and two-thirds of their experiments still haven't shipped.

This doesn't mean AI is useless. It means we're still figuring out how to use it properly. And that should make you very suspicious of anyone claiming to have the playbook.

The honest answer is: nobody knows. Not OpenAI. Not Google. Not the guy with "AI Thought Leader" in his bio who posts fifteen times a day. We're all experimenting. Some experiments are working. Most aren't. That's fine. That's how technology adoption actually works.

So do yourself a favor: take advice from people who admit they're figuring it out, not from people who pretend they've already figured it out. The latter are usually selling something.


"The Biggest Upgrade Ever!!" (Again.)

I swear, if I read one more breathless post about how "Model X version 3.7.2-turbo-ultra is THE BIGGEST LEAP IN AI HISTORY," I'm going to start blocking people professionally.

Every. Single. Release. is apparently the one that changes everything. Every benchmark improvement is treated like the moon landing. And then two weeks later, there's another release that's also the biggest leap ever, and nobody mentions the last one again.

Simon Willison documented this perfectly in his 2025 year-in-review: GPT-5 in August, GPT-5.1 in November, GPT-5.2 in December, plus Opus 4.5, Gemini 3 Pro, all launched within months of each other, each one supposedly groundbreaking.

He noted a meme that gets "updated every few months as a new company claims to have released the world's best model." And yet, the actual pace of progress appeared to slow compared to the GPT-4/o1 breakthroughs. More fanfare, fewer fireworks.

This is marketing, not progress reporting. Real progress is incremental, boring, and measured in whether the tool actually helps you ship better code, not in whether a cherry-picked benchmark went up three points.

The models are getting better. I'm not denying that. But the breathless hype cycle around every point release is exhausting and frankly insulting to anyone who's paying attention.


"It Works Great!" (Until You Try to Do Something Real)

Now, the part that'll probably get me in trouble with the AI evangelists: the tools still have serious limitations when you move past the demo.

Don't get me wrong: generating a React site, writing boilerplate, mapping 120 fields of a JSON response with Jackson into a Java record? All of that works brilliantly, and I'm genuinely grateful to our AI overlords that it's possible today. That stuff is real, tangible time saved.

But I'm not yet convinced (and I say this because I see it with my own eyes, daily) that putting an agent in a loop and letting it rip is the stroke of genius everyone claims it is. Especially because, just like we used to review our colleagues' pull requests to make sure they were solid, we're now reviewing the output of whatever agent is running the show.

Except the stakes are the same: at the end of the day, you are responsible for what runs in production. Not the agent. You.

The data backs this up. A study covered by The Register found that AI-authored code contains 1.75x more logic errors, 1.57x more security findings, and is 2.74x more likely to introduce XSS vulnerabilities than human-written code. The Qodo State of AI Code Quality report found that 65% of developers say their AI assistant "misses relevant context" and 46% actively distrust the output.

And here's my favorite stat: the METR study, a randomized controlled trial with experienced open-source developers, found that using AI tools made developers 19% slower. Not faster. Slower.

But developers believed AI had sped them up by 20%. The perception gap is wild. And here we are again: how do you even measure speed in software engineering? I still haven't figured it out. Maybe I'm not smart enough, or maybe I just don't like corporate bullshit enough :D

I experience this constantly. AI is incredible for boilerplate, for exploring unfamiliar APIs, for rubber-ducking ideas. But the moment you need it to reason about a complex domain, maintain consistency across a large codebase, or make architectural tradeoffs, it starts confidently generating code that looks right but is subtly wrong.

And "subtly wrong" in production is worse than "obviously broken." Obviously broken code fails fast. Subtly wrong code ships, and you find out three months later from a customer.
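To make "subtly wrong" concrete, here's a made-up Python illustration (invented for this post, not taken from any real AI output): the function looks correct at a glance and passes a happy-path test, but silently mishandles mismatched input lengths.

```python
def average_discounted_price(prices, discounts):
    """Average price after applying per-item discounts.

    Looks right at a glance. Subtly wrong: zip() silently truncates
    to the shorter list, so a missing discount quietly DROPS a price
    from the sum while len(prices) still counts it. No exception,
    no crash; just numbers that are wrong three months from now.
    """
    return sum(p * (1 - d) for p, d in zip(prices, discounts)) / len(prices)


# Happy path: everything lines up, the answer is correct.
print(average_discounted_price([100.0, 200.0], [0.1, 0.5]))  # 95.0

# One discount missing: the 200.0 item vanishes from the sum
# instead of raising an error.
print(average_discounted_price([100.0, 200.0], [0.1]))  # 45.0
```

Obviously broken code would have crashed on the second call. This version ships.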


The model race is over. The tooling race is on.

Here's something the "biggest model ever" crowd keeps missing: the battle has already shifted from models to tooling. GPT-5, Claude Opus, Gemini 3 Pro; they're all remarkably capable, and the gap between them keeps shrinking. The real differentiator now is what you build around the model. The IDE integrations, the context management, the workflow automation.

Enter "agents", the buzzword of 2026. Everyone and their dog is shipping an AI agent. But let's be honest about what most of these agents actually are: glorified prompt generators. You write a prompt (probably a bad one, because you're a human in a hurry). The agent wraps it in a well-structured system prompt with context from your codebase, chains a few calls together, and sends that to the model. The model does the actual work. The agent gets the credit.

That's not intelligence. That's a middleman with good formatting skills.

I'll write a dedicated post about this soon, because there's a lot to unpack. But the short version is: the biggest problem right now is that these agents lose context and forget their own rules, and you only find out because something feels off. They've reinvented while loops and called them things like "Ralph" and other creative names. We're at the point where the industry is rebranding basic control flow and calling it innovation. Siamo alla frutta, as we say in Italian: roughly, we've scraped the bottom of the barrel.
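As a caricature of the point: strip away the branding and many "agents" reduce to a while loop that wraps your prompt in a system prompt plus context, calls a model, and checks for a stop condition. This is a deliberately toy Python sketch; `call_model` and `gather_context` are hypothetical stand-ins, not any real API.

```python
def call_model(prompt: str) -> str:
    """Stand-in for a real LLM API call (hypothetical)."""
    return "DONE: looks good"


def gather_context(task: str) -> str:
    """Stand-in for pulling relevant files from a codebase (hypothetical)."""
    return f"(relevant code for: {task})"


def run_agent(task: str, max_steps: int = 5) -> list[str]:
    """The 'agent': a while loop with good formatting skills."""
    transcript = []
    step = 0
    while step < max_steps:
        # Wrap the human's hurried prompt in structure and context.
        prompt = (
            "You are a helpful coding agent.\n"
            f"Context:\n{gather_context(task)}\n"
            f"Task: {task}\n"
            "Reply with 'DONE: <answer>' when finished."
        )
        reply = call_model(prompt)  # the model does the actual work
        transcript.append(reply)
        if reply.startswith("DONE:"):
            break
        step += 1
    return transcript
```

Everything interesting happens inside `call_model`; the loop around it is the part getting the funding announcements.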

Don't get me wrong, the tooling layer matters. A lot. The difference between a raw API call and a well-orchestrated workflow is massive in practice. But let's call it what it is: engineering, not magic. The companies that will win aren't the ones with the "best" model (whatever that means this week). They're the ones building the best developer experience around models that are increasingly interchangeable.


The Dealer Analogy (My Personal Conspiracy Theory)

Alright, here's where I put on my tinfoil hat. Consider this my personal conspiracy theory, built on publicly available data and a healthy dose of Italian skepticism.

The current AI pricing model reminds me of a drug dealer offering the first taste for free.

ChatGPT Plus: $20/month. Claude Pro: $20/month. GitHub Copilot: $10/month. For tools that cost these companies significantly more to run than what you're paying.

Let's look at the actual numbers, because nobody seems to want to talk about them.

OpenAI expects to spend roughly $22 billion in 2025 against $13 billion in revenue, a net loss of $9 billion in a single year. Their own projections show $44 billion in total losses from 2023 through 2028 before maybe turning profitable in 2029. The company at the center of the AI revolution won't make money for another three years. At best.

Anthropic (the company behind Claude, the tool I use daily) is on track for $9 billion ARR by end of 2025, targeting $20-26 billion for 2026, and expects to break even by 2027-2028. Better trajectory than OpenAI, but still not profitable. Still burning cash.

Google is the interesting outlier. They can afford to play this game differently because AI isn't their whole business. It's a feature bolted onto a $400 billion revenue machine. They cut Gemini's serving costs by 78% over 2025 and are now processing 10 billion tokens per minute. But even Google's investors flinched when Alphabet announced plans to spend up to $185 billion in CapEx in 2026, effectively doubling 2025 levels. The stock dropped 5% on the news. Turns out, even "we print money" Google can scare investors with AI spending.

So let's be clear: the three biggest players in AI are collectively burning tens of billions per year, and only one of them (Google) has a profitable core business to subsidize the bleeding.

And your subscription? Twenty bucks a month.

The math does not math.

So why the low prices? Because they need you addicted first.

Look at what's already happening. ChatGPT now has a $200/month Pro tier, ten times the Plus plan. GitHub Copilot introduced "premium requests", paying extra for the good models. Microsoft announced M365 price increases for July 2026, baking AI features into baseline subscriptions whether you want them or not. Tabnine killed its free tier. Sourcegraph discontinued Cody's free and Pro plans. The free AI coding assistant market is consolidating fast. The cheap options are disappearing one by one.

See the pattern? Start cheap, get everyone dependent, then slowly turn the screws.

Once AI is embedded in every workflow, every IDE, every team process, once developers literally can't imagine working without it, what's stopping these companies from doubling the price? Tripling it? The switching costs will be enormous. The dependency will be real.

Nobody talks about this. Everyone focuses on how much revenue AI companies are generating, never on how much they're losing. OpenAI alone is losing $9 billion a year.

That money has to come from somewhere eventually. And "somewhere" is your wallet.


I'm Not a Pessimist. I'm a Realist.

I'll be honest: I'm conflicted.

Part of me is genuinely electrified. The novelty, the possibilities on the horizon, what I could build with these tools that would have taken me months just a couple of years ago. I use AI every single day and it makes me better at my job in ways I couldn't have imagined.

But another part of me is cautious, critical, and a little scared of what the world might look like in five years. Maybe I'm just getting old.

What I do know is this: an incredible technology is in its very early stages, surrounded by an ecosystem of hype merchants, unrealistic expectations, real limitations, and a pricing model designed to create dependency before sustainability.

That doesn't make it bad. It makes it normal. Every transformative technology goes through this phase. The internet did. Mobile did. Cloud computing did.

The difference is that this time, the noise is louder, the claims are bolder, and the money being burned is bigger. So the correction, when it comes, will be louder too.


The TL;DR

  • AI is genuinely incredible technology. Use it.
  • It's not the metaverse. The comparison is lazy. Stop making it.
  • We're still at the beginning. Be skeptical of anyone who acts like they've cracked the code.
  • Every model release being "the biggest ever" is marketing, not reality.
  • The real battle is tooling, not models. And "agents" are mostly prompt generators with good PR.
  • The tools have real, significant limitations once you go beyond toy examples.
  • The pricing model is designed to hook you now and charge you later. Watch the pattern.
  • If someone on social media is shouting very loud about something, they're probably trying to sell you something.

Stay curious. Stay critical. And for the love of god, stop taking career advice from LinkedIn influencers, and certainly don't take it from me :D.

- Alberto (with a healthy dose of skepticism)
