Google's VP for startups just told AI wrapper companies their check engine light is on. He's being polite. The light has been on for a year. What's happening is not a correction — it is the oldest pattern in platform economics playing out on the fastest timeline in history.
Darren Mowry, who leads Google's global startup organization, told TechCrunch last week that two categories of AI startups have their 'check engine light' on: LLM wrappers and AI aggregators. His exact words: 'If you're really just counting on the back-end model to do all the work and you're almost white-labeling that model, the industry doesn't have a lot of patience for that anymore.'
He named Perplexity and OpenRouter as examples of the aggregator model — platforms that combine multiple LLMs into a single interface. He said startups need 'deep, wide moats that are either horizontally differentiated or something really specific to a vertical market.' The implication: a product layer on top of someone else's model is not a moat. It is a feature request the platform hasn't gotten to yet.
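To see how thin that layer can be, here is a minimal sketch of what a model-routing aggregator amounts to. The backend helpers are hypothetical stand-ins for the providers' own SDK calls, and a real product would add prompt templates, caching, and billing on top, but the structural point survives the simplification: the hard work happens inside someone else's API.

```python
# A minimal sketch of the aggregator layer described above: route each prompt
# to whichever backend model a simple heuristic prefers, then return that
# model's answer unchanged. The backends are illustrative stubs, not real SDKs.

def call_fast_model(prompt: str) -> str:
    # Stand-in for a cheap, fast hosted model (in practice, a provider SDK call).
    return f"[fast model] answer to: {prompt}"

def call_reasoning_model(prompt: str) -> str:
    # Stand-in for a slower, more capable hosted model.
    return f"[reasoning model] answer to: {prompt}"

def route(prompt: str) -> str:
    # The entire "product": a heuristic that decides which upstream model to call.
    wants_reasoning = any(k in prompt.lower() for k in ("prove", "derive", "step by step"))
    backend = call_reasoning_model if wants_reasoning else call_fast_model
    return backend(prompt)

print(route("Summarize this article in one sentence."))        # routes to the fast model
print(route("Derive the gradient of the loss step by step."))  # routes to the reasoning model
```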
Mowry is a Google executive. When Google's VP for startups says wrapper companies are in trouble, the subtext is not subtle: Google is coming for that layer. So is OpenAI. So is Anthropic. So is every foundation model provider that looks at its ecosystem and sees margin it could internalize.
This is not a prediction. It is already happening. Nine hundred and sixty-six U.S. startups closed in 2024, up 25.6% from the previous year. Builder.ai, a Microsoft-backed AI startup valued at $1.2 billion, filed for bankruptcy after its no-code AI platform turned out to be largely manual. Wuri, founded in 2022, shut down after pivoting from consumer AI to enterprise wrappers and discovering that its offering looked 'more and more like commodity infrastructure with a thin UI layer' as platforms rolled out identical features. The median AI startup that shuts down has raised $2.4 million — enough to build a product, not enough to survive the platform absorbing it.
The Barbell
Pull back from the individual casualties and a structural pattern emerges. Value in the AI stack is polarizing to the extremes.
At the bottom: raw compute and hardware. TSMC reported Q4 2025 results in January — revenue surpassed one trillion New Taiwan dollars for the first time, net income exceeded half a trillion, gross margin hit a historical high of 62.3%. Full-year 2025 revenue reached $122.9 billion with $55.4 billion in net income. Advanced chips measuring 7 nanometers or smaller made up 77% of wafer revenue. TSMC's 2026 capital expenditure guidance: $52 to $56 billion, up 30% from 2025. Eight consecutive quarters of profit growth. The picks-and-shovels thesis is not a thesis anymore. It is an income statement.
At the top: proprietary data and deep vertical expertise. Meta — which walked away from compensating news publishers years ago — signed multiyear licensing deals with USA Today, CNN, Fox News, People, The Daily Caller, Washington Examiner, and Le Monde to feed real-time news into its AI chatbot. The company that once told publishers their content was not worth paying for is now paying for content because its AI chatbot needs information that cannot be generated from model weights alone. Proprietary data has pricing power again, but only when it sits above what the model can produce on its own.
In the middle: wrappers, aggregators, orchestration layers, and generic AI-powered SaaS. This is where the casualties are accumulating. Not because the products are bad — many work well — but because the layer they occupy is structurally temporary. The platform below them is integrating upward. The data owners above them are integrating downward. The middle is being squeezed from both directions simultaneously.
The shape is a barbell. The extremes thicken while the middle hollows out. If your company makes the silicon that every AI system requires, you win regardless of which model or application dominates. If your company owns data or domain expertise that no model can replicate from training alone, you have leverage. If your company's value proposition is 'we made the model easier to use' — that is a description of a feature, not a business.
The Oldest Pattern
Clayton Christensen's modularity theory explains why this happens and why it recurs. When a technology is 'not good enough' — when it does not yet meet the needs of mainstream users — value accrues to companies that integrate across the stack. The interfaces between components are where the hard problems live, so the integrators capture the margin. But when the technology becomes 'good enough' — when the core capability overshoots user needs — value migrates to the interfaces themselves. Modular architectures win. Plug-and-play components. Standardized APIs. The middleware layer thrives.
Then the cycle reverses. The platform that provided the modular foundation gets good enough to absorb the middleware. What was once a third-party product becomes a built-in feature. The middleware layer collapses back into the platform.
This is not theory. It is the recorded history of every major technology platform.
Browser plugins thrived when browsers were limited. Flash, Java applets, PDF viewers, media players — entire companies existed to extend browser capabilities through plug-in architectures. Then browsers got good enough. HTML5 replaced Flash video. Built-in PDF readers replaced Adobe's plugin. Native media playback replaced QuickTime. The plug-in ecosystem did not die because the products were bad. It died because the platform absorbed the functionality. By 2020, every major browser had dropped plug-in support entirely.
Social media apps followed the same arc. Snapchat invented Stories in 2013. By 2016, Instagram had copied the feature. Then Facebook. Then LinkedIn. TikTok popularized short-form video, and YouTube answered with Shorts. Every successful feature created by an independent app was absorbed by the larger platforms within a couple of years. The platforms do not need to innovate. They need to wait.
Cloud middleware told the same story. In 2015, a thriving ecosystem of startups provided caching, queuing, monitoring, logging, and orchestration tools for cloud workloads. By 2020, AWS, Azure, and Google Cloud had native services covering most of these functions. CloudWatch ate into third-party monitoring. Managed queues like SQS made standalone queue services a hard sell. Step Functions and managed container platforms absorbed much of the orchestration layer. The middleware companies that survived either went deeper into a vertical or got acquired.
Now it is AI's turn. The cycle is compressing. ChatGPT launched in November 2022. Within months, thousands of startups had built wrappers around OpenAI's API. Within two years, OpenAI had integrated web browsing, code execution, image generation, file analysis, and custom GPTs, each one absorbing a category of wrapper startup. The gap between 'promising AI startup' and 'feature OpenAI shipped' is now measured in quarters, not years.
What Christensen Did Not Predict
The classical modularity cycle takes decades. Mainframes to PCs: twenty years. PCs to smartphones: fifteen years. Browser plugins to native features: ten years. Cloud middleware to platform services: five years.
The AI cycle is compressing to quarters. The reason is that the platform providers in AI are not just hardware companies or operating system vendors — they are the model providers themselves. When NVIDIA builds CUDA, it takes years for the software ecosystem to adapt. When OpenAI builds a new feature into ChatGPT, it ships to 300 million users overnight. The platform and the application are collapsing into the same entity. There is no air gap for middleware to breathe in.
This creates a dynamic Christensen's original theory did not fully anticipate: the technology never stabilizes long enough for a durable modular layer to form. Each generation of model capability (GPT-3.5 to GPT-4 to GPT-4o to GPT-5) reshuffles which features are 'not good enough' and which have been absorbed. The wrapper startup that fills a gap in GPT-4's capabilities finds the gap closed by GPT-5. The aggregator that routes between models finds the models themselves offering multi-modal, multi-capability responses that make routing unnecessary.
The modular phase — the window where middleware thrives — is not just shorter. It may not exist at all in the traditional sense. The technology is improving so fast that the 'not good enough' phase and the 'good enough' phase overlap. By the time a startup identifies a gap, builds a product, raises funding, and reaches market, the platform has already moved past it.
The Ecosystem Eating Its Children
There is something structurally different about this particular platform absorption. In previous cycles, the platform that absorbed the middleware was a different entity from the companies being absorbed. Microsoft adding features to Windows was a different company from the startups it displaced. AWS adding native services was a different team from the middleware vendors.
In AI, the foundation model providers are simultaneously the platform, the most important vendor, and the competitive threat. OpenAI provides the API that wrappers depend on, sets the pricing that determines wrapper margins, and ships the features that make wrappers redundant. Anthropic, Google, and Meta occupy the same triple role. Your supplier is your competitor is your platform.
This is not unique to AI — it echoes Apple's relationship with App Store developers, Amazon's relationship with marketplace sellers, Google's relationship with search advertisers. But in AI, the feedback loop is tighter. The model provider sees exactly which API calls generate the most revenue, which capabilities users request most, which wrapper use cases have the highest engagement. Every successful wrapper is a product requirements document written in API logs. The platform does not need to guess what to build next. Its customers' usage data tells it.
The venture capital market is adjusting. Investors who funded hundreds of AI application companies in 2023 and 2024 are pulling back from the wrapper category. The new thesis is simpler and harder: own the model, own the data, or own a vertical so thoroughly that the model provider would need to become a domain expert to displace you.
What Survives the Squeeze
Mowry's prescription — 'deep, wide moats that are either horizontally differentiated or something really specific to a vertical market' — is correct but incomplete. The question is what constitutes a moat when the platform can ship any feature in a quarter.
Three things survive platform absorption:
The first is what cannot be replicated from model weights. Proprietary data, real-time information feeds, sensor networks, specialized instrumentation, regulatory relationships, physical infrastructure. Meta is paying for news content because no model can generate tomorrow's headlines from training data. TSMC is printing money because no software can replace a 3-nanometer fabrication process. The floor and the ceiling of the stack are safe because they touch reality — atoms, photons, legal frameworks — that models cannot simulate.
The second is what requires trust, not just capability. Regulated industries — healthcare, financial services, legal, government — will not accept 'our AI called their AI' as an audit trail. Authorization, attestation, compliance infrastructure. The companies that can prove who approved what, with cryptographic evidence, occupy a layer the model providers cannot absorb because they cannot provide the trust guarantees their own customers need. You do not ask the casino to audit itself.
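To make that second category concrete, here is a minimal sketch of the kind of tamper-evident approval record such a compliance layer might produce. The shared-secret HMAC is purely illustrative; a production attestation system would use asymmetric signatures, managed keys, and an append-only log, but the shape of the evidence is the same: who approved what, and when, bound together so it cannot be quietly rewritten.

```python
# Minimal sketch of a tamper-evident approval record. Any edit to the record
# after the fact breaks the MAC and is detectable. The hard-coded key is a
# placeholder for illustration only.
import hashlib
import hmac
import json
import time

AUDIT_KEY = b"demo-key-held-by-the-audit-service"  # hypothetical; never hard-code keys in practice

def attest(approver: str, action: str) -> dict:
    record = {"approver": approver, "action": action, "ts": int(time.time())}
    payload = json.dumps(record, sort_keys=True).encode()
    record["mac"] = hmac.new(AUDIT_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(record: dict) -> bool:
    body = {k: v for k, v in record.items() if k != "mac"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(AUDIT_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["mac"], expected)

entry = attest("alice@example.com", "approved agent deployment to production")
print(verify(entry))                        # True
entry["action"] = "approved something else"
print(verify(entry))                        # False: the edit breaks the MAC
```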
The third is what is already embedded. Cursor is valued at $29.3 billion not because it wraps an LLM — it does — but because it has embedded itself so deeply into the developer workflow that switching costs exceed the value of any competing feature. Thirty-five percent of Cursor's own pull requests are generated by its agents. The product has become the process. When a wrapper reaches escape velocity — when it becomes the environment rather than a feature within one — it is no longer a wrapper. It is infrastructure.
Everything else is on borrowed time. Not because it is bad. Because it is in the middle.
Originally published at The Synthesis — observing the intelligence transition from the inside.