
Tyson Cung


Meta Spent $135 Billion on AI — Now They Might License Google's Model

Meta committed up to $135 billion in AI capital spending for 2026. They hired Alexandr Wang, formed an "AI super team," and bet everything on their next-generation model, codenamed Avocado.

Then it underperformed. And now they're reportedly considering licensing Google's Gemini instead.

Here's what happened.


The Avocado Problem

Meta's Avocado model was supposed to launch in March 2026. It was meant to leapfrog competitors — proving that Meta's open-source AI strategy (Llama) could compete with the best closed models from Google and OpenAI.

But internal testing showed Avocado didn't beat Google's Gemini 2.5 — a model that's already been public for four months.

Releasing something that sits below a competitor's existing model after spending $135 billion is... not a great look.

So Meta delayed Avocado to at least May 2026 and started exploring a backup plan.


The Google Licensing Rumour

According to the New York Times, Meta's AI division leaders have discussed temporarily licensing Google's Gemini to power Meta's AI products while Avocado gets fixed.

Think about that for a second:

  • Meta has 19,000+ GPUs in its data centres
  • They've spent $14.3 billion since hiring their AI super team
  • They open-sourced Llama and built an entire ecosystem around it
  • And they might need to rent Google's model to keep up

This would be like Toyota spending billions on a new engine, then putting a Honda engine in while they figure it out.


The Numbers That Tell the Story

| Company | 2026 AI CapEx | Status |
| --- | --- | --- |
| Meta | $115-135B | Avocado delayed, considering licensing Gemini |
| Alphabet (Google) | $175-185B | Gemini 2.5 leading benchmarks |
| Microsoft | $80B+ | Azure AI + OpenAI partnership |
| Amazon | $100B+ | AWS Bedrock + Anthropic investment |

Meta is spending more than Microsoft and Amazon on AI — but they're the ones scrambling.


Why This Matters

1. Money Doesn't Buy Intelligence

You can buy GPUs, hire talent, and build infrastructure. But model quality depends on data, architecture, and training methodology — not just compute.

Google has decades of search data, YouTube transcripts, and research papers. Meta has social media posts. The training data gap might be the real bottleneck.

2. Open Source vs. Closed Source

Meta's entire AI identity is built on open-source Llama. If they license Gemini (a closed model), it undermines their own narrative.

Developers and researchers chose Meta's ecosystem because it was open. Licensing a competitor's black box would be a direct contradiction of that strategy.
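To make the contrast concrete, here's a minimal sketch of what it looks like from a developer's chair. It assumes the `transformers` and `google-generativeai` Python packages, access to the gated Llama weights on Hugging Face, and a Gemini API key; the specific model names are illustrative, not a claim about what Meta would actually license.

```python
# Open weights vs. a closed API: a minimal sketch, not production code.
# Assumptions: `transformers` and `google-generativeai` are installed, you have
# access to the gated Llama weights on Hugging Face, and a Gemini API key.

from transformers import pipeline
import google.generativeai as genai

PROMPT = "Explain build vs. buy in one sentence."

# Open weights (Llama): the model runs on hardware you control.
# You can inspect it, fine-tune it, quantise it, or self-host it indefinitely.
llama = pipeline("text-generation", model="meta-llama/Llama-3.1-8B-Instruct")
print(llama(PROMPT, max_new_tokens=60)[0]["generated_text"])

# Closed model (Gemini): you send prompts to someone else's servers.
# No weights, no self-hosting, and the pricing and terms can change under you.
genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key
gemini = genai.GenerativeModel("gemini-1.5-flash")
print(gemini.generate_content(PROMPT).text)
```

The code is trivial either way; the difference is who controls the weights. That control is exactly what Meta's open-source pitch was built on, and exactly what it gives up if its own products run on a rented closed model.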

3. The Build vs. Buy Dilemma

Every tech company faces this: do you build it yourself or buy/license it? Meta chose to build. Now they might need to temporarily buy — which raises the question: was the build strategy wrong, or just delayed?


What Happens Next

Three scenarios:

Best case: Avocado ships in May, outperforms Gemini 2.5, and the delay becomes a footnote. Meta's $135B bet pays off.

Middle case: Avocado ships but is only competitive, not leading. Meta licenses Gemini for specific products (Meta AI, WhatsApp, Instagram) while continuing to improve Llama.

Worst case: Avocado keeps slipping. Meta becomes dependent on Google's model for its AI features — handing leverage to a direct competitor in ads and social media.


The Takeaway

Spending $135 billion doesn't guarantee you build the best model. It guarantees you build a model.

The AI race isn't about who spends the most — it's about who has the best data, the best researchers, and the best training pipeline. Right now, Google is winning on all three.

Meta's Avocado delay is the first real crack in the "throw money at AI" strategy that Big Tech has been running for two years.

The question isn't whether AI is worth the investment. It's whether every company needs to build its own foundation model — or whether the smart play is to build on someone else's.


I made a YouTube Short breaking this down — watch it above! ⬆️

Follow me for daily breakdowns on AI, tech, and the business behind both.
