DEV Community

brian austin

Mozilla just opposed Chrome's built-in AI. Here's why browser-level AI is dangerous.

Mozilla just filed an official opposition to Chrome's Prompt API.

If you haven't heard of Chrome's Prompt API: it's Google's proposal to bake AI directly into the browser itself. Web developers could call window.ai.prompt() and get a response — no API key, no external service, no cost. Sounds great, right?

Mozilla disagrees. And after thinking about it, I do too.

What Chrome's Prompt API actually is

Google's proposal: ship a small language model directly in Chrome. Web pages call a browser-native API to run inference locally. No API keys. No servers. Just... the browser, running AI.

The appeal is obvious:

  • Zero latency for simple tasks
  • Works offline
  • Free for developers
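In practice, pages would have to feature-detect the API, since only Chrome would ship it. Here's a rough sketch of what that looks like, using the `window.ai.prompt()` shape as the article describes it (Chrome's actual API surface has been renamed several times, so treat the exact names as illustrative; the `summarize` helper and its fallback behavior are my own invention):

```javascript
// Hypothetical feature-detection for a browser-native prompt API.
// The window.ai.prompt() surface follows this article's usage, not
// necessarily what Chrome actually ships.
async function summarize(text) {
  const ai = (typeof window !== "undefined" && window.ai) || null;
  if (ai && typeof ai.prompt === "function") {
    // Built-in model available: local inference, no API key, no network call.
    return ai.prompt(`Summarize this article:\n\n${text}`);
  }
  // No built-in model (Firefox, Safari, Node, etc.): the caller must fall
  // back to an external service or drop the feature entirely.
  return null;
}
```

That `return null` branch is exactly Mozilla's worry: every non-Chrome user lands in the fallback path, and most sites won't bother writing one.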

Why Mozilla is saying no

Mozilla's position, filed with the standards body this week: this proposal has serious problems.

1. It's Chrome-specific, not a standard
Chrome ships a model. Firefox ships a different model. Safari ships something else. Now every window.ai.prompt() call gives different results in different browsers. Web developers test in Chrome and ship — and users on Firefox get degraded or broken experiences. We've been here before. It's called the IE6 era.

2. The model is Google's
Whoever controls the model controls the answers. A Google-trained model baked into Chrome answers questions about competitors how, exactly? This is ad-model anxiety at the browser layer.

3. You can't swap it
With an external API, you can switch providers. If Claude gets better than Gemini Nano, you call Claude instead. With window.ai, you get whatever Google ships with Chrome. You have no choice. Your AI vendor is now your browser vendor.

4. Privacy is... unclear
Even "local" inference can phone home for updates, telemetry, fine-tuning data. The threat model for a Google-controlled browser AI is different from an open model you run yourself.

The deeper issue: who controls your AI access?

This is the same question playing out everywhere in 2026:

  • Ghostty left GitHub because platform control = content control
  • Anthropic's HERMES.md bug because metered billing = someone else controls your costs
  • OpenAI on Amazon Bedrock because hyperscaler integration = hyperscaler lock-in

Now: Chrome's Prompt API because browser integration = browser vendor control

The pattern is consistent. Every time AI gets embedded deeper into an existing platform, the platform gains leverage over your AI access.

What the alternative looks like

The alternative is what Mozilla is implicitly defending: AI as an external, swappable service that you call explicitly.

// The Chrome Prompt API way (Google controls the model)
const response = await window.ai.prompt("Summarize this article");

// The explicit API way (you control the model)
const res = await fetch('https://api.simplylouie.com/chat', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${API_KEY}`
  },
  body: JSON.stringify({ message: "Summarize this article" })
});
const data = await res.json();

The second version is more code. But you can:

  • Switch models without changing browsers
  • Call it from any platform (Node, Python, mobile, IoT)
  • Pay a fixed amount regardless of usage spikes
  • Audit exactly what model is running
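The "switch models" point in the list above is worth making concrete. With an explicit API, vendor choice can live in one configuration object. This is a minimal sketch; the endpoint URLs and model names are placeholders, not real services:

```javascript
// Minimal sketch of a swappable provider layer.
// URLs and model names below are hypothetical placeholders.
const PROVIDERS = {
  claude: { url: "https://api.example.com/claude/chat", model: "claude-sonnet" },
  gpt:    { url: "https://api.example.com/openai/chat", model: "gpt-4-turbo" },
};

function buildRequest(provider, message, apiKey) {
  const p = PROVIDERS[provider];
  if (!p) throw new Error(`Unknown provider: ${provider}`);
  return {
    url: p.url,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model: p.model, message }),
    },
  };
}

// Swapping vendors is a one-word change:
//   buildRequest("claude", "Summarize this", KEY)
//   buildRequest("gpt",    "Summarize this", KEY)
```

With `window.ai`, there is no equivalent of that one-word change: the "provider" is hard-wired to whichever browser the user happens to run.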

The global access dimension

There's one more thing Mozilla didn't say but probably should have:

Chrome's built-in AI runs on a small, compressed model optimized for low-end hardware. For users in Nigeria, the Philippines, Indonesia, and India — where mid-range phones are the norm — this model will be significantly worse than what a $20/month ChatGPT user gets on a server-side GPT-4.

Baking AI into the browser sounds democratizing. But if the browser AI is a Gemini Nano that can barely do basic tasks while paying users get Claude Opus or GPT-4 Turbo server-side... it's not democratizing. It's a two-tier AI system with a thin veneer of "free for everyone."

Actual AI access equality means the same model for everyone, at a price point anyone can afford.

For Indian developers, that's ₹165/month for Claude Sonnet (not Nano).
For Nigerian developers, that's ₦3,200/month.
For Filipino developers, that's ₱112/month.

Not a compressed browser model. The real thing.

→ Try it: simplylouie.com

The bottom line

Mozilla is right to push back. Not because local AI is bad — local models have legitimate use cases. But because window.ai as a standard means:

  1. Google controls what "AI" means in the browser
  2. Firefox/Safari users get a different (probably worse) experience
  3. Developers lose the ability to switch models
  4. The ad-model incentives are baked one layer deeper into your stack

The best AI setup is the one you control. That means an external API with a fixed monthly cost, not a browser-embedded model from the world's largest advertising company.


What do you think — is browser-native AI a feature or a trojan horse? Drop a comment.
