I use AI models every day at work. Not as a side project, not as a researcher. As someone who depends on these tools to ship real things. And the longer I do this, the harder it gets to ignore how wide the gap is between what the industry claims and what actually happens in practice.
Most People Do Not Like AI
The story tech companies want you to believe is that AI adoption is exploding and the world is on board. The data disagrees.
A March 2026 NBC News survey of 1,000 registered voters found that only 26% have a positive view of AI. 46% have a negative view. To put that in context: AI ranked below ICE, Trump, the Republican Party, Kamala Harris, and Stephen Colbert on the favorability scale. Only the Democratic Party and Iran scored lower. This is not a fringe concern. It is the majority opinion.
Pew Research found that half of U.S. adults say AI makes them feel more concerned than excited. Only 10% say the opposite. And that concern has been rising since 2021, not falling.
The industry's response to this is that people who actually use AI end up liking it. And there is some truth to that. Daily users are favorable toward AI by a wide margin. But that is a self-selecting group. The people who tried it and stopped, or never started, are a much larger group. Forcing adoption and hoping sentiment catches up is not a product strategy. It is a hope.
The Microsoft Copilot Problem
Copilot is the best case study we have right now, because Microsoft had every possible advantage going in.
400 million plus paid Microsoft 365 seats. Copilot embedded directly into Word, Excel, Outlook, Teams, and Windows. Tools people open every single morning. No AI product in history has had distribution like that.
Here is what happened anyway.
According to Recon Analytics data from 150,000+ U.S. respondents, when employees have access to both Copilot and ChatGPT, 76% choose ChatGPT. Only 18% pick Copilot. Only 35.8% of organizations that hold Copilot licenses have active users at all. Copilot's paid subscriber market share dropped 39% in six months. And 44% of people who stopped using it said the reason was they did not trust its answers.
Think about what that actually means. The single greatest distribution advantage in software history could not hold users who had a real choice. Getting AI in front of people is not the same thing as getting people to use it. Microsoft proved that at scale.
The Numbers Are Bad
Goldman Sachs estimates the top tech companies are on track to spend roughly 527 billion dollars on AI infrastructure in 2026 alone. For context, AI-related services are expected to generate somewhere between 25 and 51 billion dollars in traceable revenue against that spend. The gap between what is going in and what is coming back out is enormous by any historical measure. When cloud computing was at the same adoption stage in 2011, companies were spending roughly 2.4 dollars for every dollar of revenue. The current ratio is multiples higher than that, and cloud was already considered an aggressive bet at the time.
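To make that comparison concrete, here is the back-of-envelope math using only the figures cited above. The bounds come from the 25 to 51 billion dollar revenue range; treating 527 billion as a point estimate is my simplification.

```python
# Spend-to-revenue ratios, using the figures cited in the text (USD).
ai_spend_2026 = 527e9                          # Goldman Sachs infrastructure estimate
ai_revenue_low, ai_revenue_high = 25e9, 51e9   # traceable AI-services revenue range

ratio_best = ai_spend_2026 / ai_revenue_high   # most generous case
ratio_worst = ai_spend_2026 / ai_revenue_low   # least generous case

cloud_ratio_2011 = 2.4  # dollars spent per dollar of revenue, cloud at the same stage

print(f"AI: {ratio_best:.1f}x to {ratio_worst:.1f}x spend per revenue dollar")
print(f"Cloud in 2011: {cloud_ratio_2011:.1f}x")
print(f"Multiple of the cloud-era ratio: {ratio_best / cloud_ratio_2011:.1f}x "
      f"to {ratio_worst / cloud_ratio_2011:.1f}x")
```

Even the most generous reading puts the current ratio at roughly four times the cloud-era figure; the pessimistic end is closer to nine times.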
One widely cited analyst estimate puts the depreciation costs of data centers coming online in 2025 at around 40 billion dollars annually, while those same facilities are generating only 15 to 20 billion in revenue at current usage rates. The infrastructure is losing value faster than it earns money.
OpenAI ended 2025 with over 20 billion dollars in annualized revenue. That is genuinely impressive growth from almost nothing three years ago. They are also burning roughly 8 billion dollars a year on compute alone and are projected to lose 14 billion dollars in 2026 alone. Cumulative losses are projected to hit 44 billion dollars before the company expects to turn profitable, sometime around 2029.
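Taking those reported figures at face value (all of them come from the reporting cited above, not from me), the shape of the problem is easy to see:

```python
# OpenAI's reported figures, as cited in the text (all USD).
revenue_annualized = 20e9       # end of 2025
compute_burn_per_year = 8e9     # compute spend alone
projected_loss_2026 = 14e9

# Compute alone consumes this share of revenue, before payroll,
# training runs, or anything else.
compute_share = compute_burn_per_year / revenue_annualized
print(f"Compute as a share of revenue: {compute_share:.0%}")

# The projected 2026 loss measured against current annualized revenue.
loss_share = projected_loss_2026 / revenue_annualized
print(f"Projected 2026 loss vs. revenue: {loss_share:.0%}")
```

Compute alone eats 40% of revenue before a single salary is paid, and the projected loss for one year is 70% of everything the company currently brings in.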
The math does not work yet. And "yet" is carrying a lot of weight in that sentence.
Big Tech Can Afford It, Until They Cannot
The common pushback is that Google and Microsoft and Meta can just absorb these losses from their other businesses. And that is partially true. These companies print money from ads, cloud, and enterprise software. The layoffs you have seen are a rounding error compared to those cash flows.
But even that has limits. Five of the Magnificent Seven are now pushing capital expenditure to 94% of operating cash flow. Meta has already issued 30 billion dollars in corporate bonds to fund AI infrastructure. You do not borrow money at that scale unless you are extremely confident in the return, or you are terrified of being left behind.
There is also a circular money problem that does not get enough attention. Nvidia is investing in OpenAI. OpenAI buys Nvidia chips. Microsoft funds OpenAI. OpenAI runs on Azure. The same capital moves between the same small group of players. When Harvard Business Review described it as an increasingly complex web of interconnected transactions, they were being generous.
Benchmarks Are Mostly Marketing
This is the part that practitioners know and the tech press ignores.
Every model release comes with a press release full of benchmark scores. SWE-bench. MMLU. GPQA. The numbers look close. The models look competitive. Then you use them on real work.
The gap is not small.
Benchmarks test clean, well-specified problems with known answers. Real work is ambiguous, context-heavy, and does not look like anything in an evaluation set. Companies increasingly train models to perform well on known benchmarks. It is teaching to the test, but the test costs a billion dollars.
As someone who uses these tools every day: most models that claim to be competitive are not, in practice. The models that actually follow complex instructions without losing the thread, do not hallucinate with confidence, and genuinely improve your output rather than requiring you to babysit it? That is a very short list.
If real value is concentrated in one or two models while the rest produce benchmark-friendly results that fall apart in production, then the market is badly mispricing a lot of companies and a lot of enterprise contracts.
What Probably Happens Next
The pure AI labs get absorbed. Microsoft already has effective control over OpenAI through its investment structure. Google has poured billions into Anthropic. Full acquisition is the logical endpoint. The "independent AI lab" era is probably shorter than people think.
A reckoning arrives sometime between 2026 and 2028. Goldman Sachs and Deutsche Bank have both pointed to this window as when infrastructure spending has to start generating real returns or face write-downs. If revenue does not catch up, shareholder patience runs out.
Open source keeps closing the gap. DeepSeek showed last year that a smaller lab could build frontier-quality models at a fraction of the cost and release them publicly. API prices have already collapsed 90 to 97% in two years. If that continues, the entire business model of selling access to proprietary models has a serious problem.
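A 90 to 97% collapse over two years implies a brutal compound rate of decline. The compound-rate framing here is my own, assuming the drop is spread evenly across the two years:

```python
# Implied annual price decline from a 90-97% drop over two years,
# assuming equal compounding each year.
for total_drop in (0.90, 0.97):
    remaining = 1 - total_drop         # fraction of the original price left
    annual_factor = remaining ** 0.5   # per-year multiplier over two years
    annual_decline = 1 - annual_factor
    print(f"{total_drop:.0%} drop over 2 years "
          f"~= {annual_decline:.0%} decline per year")
```

That works out to prices falling roughly 68 to 83% every single year. Very few business models survive their core product deflating at that pace.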
The Honest Summary
The technology is real. Some of it is genuinely useful. This is not 2001.
But the public is skeptical, not enthusiastic. Real adoption, once you strip out employer mandates, is thinner than reported. The spending-to-revenue ratio is historically unprecedented. Most models are not as competitive as their benchmarks suggest. And the money flowing through this industry is largely circular among a small group of interconnected players.
The companies making these bets are not stupid. They might be right that this is the most important infrastructure investment in a generation. But there is a version of this story where we look back in five years and recognize the most expensive case of industry-wide FOMO ever recorded.
The emperor might not be naked. But he is wearing a lot less than the press releases suggest.
I am a developer who uses AI tools daily in production. I am not a doomer and I am not a hype merchant. I just think this industry deserves honest accounting.