Apple has always had a Siri problem.
The assistant that launched in 2011 as a revolution in human-computer interaction gradually became a punchline — setting timers reliably, fumbling at everything else. While ChatGPT was rewriting what people expected from an AI, Siri was still botching restaurant recommendations. The gap between what Apple's assistant could do and what users wanted it to do had become embarrassing.
That changes now. In January 2026, Apple and Google announced a multi-year partnership to rebuild Siri on top of Google's Gemini models. The deal is worth approximately $1 billion per year. It's not a minor feature update — it's a full architectural replacement of what powers Siri's brain. And with WWDC 2026 kicking off June 8, we're about to see what that actually looks like in the hands of iPhone users worldwide.
So what does this really mean? Let's take it apart.
Why Apple Chose Google Over OpenAI
The short answer: OpenAI said no.
According to reporting from 9to5Mac and the Financial Times, Apple approached OpenAI first about a deeper Siri integration, and OpenAI declined to build Apple's proposed custom model. The talks reportedly foundered on both technical scope and Apple's strict privacy requirements: Siri's intelligence has to flow through Apple's Private Cloud Compute architecture, meaning Google (or anyone else powering it) can't actually see your personal data.
Google agreed to those terms. The result is a deal where Gemini 2.5 Pro — a model with roughly 8x the parameters of Apple's own AI models — becomes the reasoning engine behind Siri, while Apple's infrastructure remains the privacy layer between your data and Google's servers.
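To make that layering concrete, here's a minimal Swift sketch of the flow being described. Every type and function name below is hypothetical (Apple hasn't published this interface), and the real system involves attested server hardware, not local function calls.

```swift
import Foundation

// Hypothetical sketch only: none of these types are real Apple or Google APIs.

/// A user request after on-device processing. Personal context (contacts,
/// location, calendar) is resolved locally and never shipped upstream raw.
struct SiriRequest {
    let anonymizedQuery: String
}

/// Stand-in for the hosted model. In the real system this would be a call
/// into Apple's attested compute environment, not directly to Google.
enum GeminiBackend {
    static func complete(prompt: String) async -> String {
        "(model response for: \(prompt))"
    }
}

/// The privacy layer: the model provider only ever sees the anonymized payload.
enum PrivateCloudCompute {
    static func answer(_ request: SiriRequest) async -> String {
        // Forward only the scrubbed query to the frontier model...
        let raw = await GeminiBackend.complete(prompt: request.anonymizedQuery)
        // ...then re-attach any personal context on-device, after the response returns.
        return raw
    }
}
```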
From a user experience standpoint, the architecture matters because it's invisible. You're not switching to Gemini. Siri is still Siri — same name, same interface, same "Hey Siri" trigger. What's changing is that when you ask a hard question, the thing generating the answer is now a frontier model instead of Apple's clearly outclassed in-house one.
There's also a straightforward competitive calculus here. A former Apple executive, quoted in Fortune's coverage of the deal, described it as "a necessary byproduct of Apple's decision not to go big on its AI investments like its competitors." Apple didn't build a frontier model. They bought access to one. That's a defensible choice — but it's also an admission.
What Siri 2.0 Can Do That Siri 1.x Couldn't
The capability jump is substantial. Not subtle. Here's what's actually different.
Multi-turn conversation. Old Siri treated every request as independent: ask it something, get an answer, and by the time you asked a follow-up it had already forgotten the context. Siri 2.0 can sustain dialogues of 20+ exchanges. You can actually negotiate with it ("find a Thai restaurant near me, actually make it vegetarian-friendly, and with parking, and open until 11pm") because each constraint builds on the last.
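Here's a rough Swift sketch of what that stateful refinement means, with all names invented for illustration; a real implementation would have the model parse each utterance rather than keyword-match.

```swift
import Foundation

// Illustrative only: multi-turn constraint accumulation, the thing old Siri lacked.
struct RestaurantQuery {
    var cuisine: String?
    var vegetarianFriendly = false
    var hasParking = false
    var openUntilHour: Int?  // 24-hour clock
}

final class SiriSession {
    private var query = RestaurantQuery()

    /// Each follow-up refines the same query instead of starting from scratch.
    func handle(_ utterance: String) {
        if utterance.localizedCaseInsensitiveContains("thai") { query.cuisine = "Thai" }
        if utterance.localizedCaseInsensitiveContains("vegetarian") { query.vegetarianFriendly = true }
        if utterance.localizedCaseInsensitiveContains("parking") { query.hasParking = true }
        if utterance.localizedCaseInsensitiveContains("11pm") { query.openUntilHour = 23 }
        // The search runs against the accumulated query, not just the last utterance.
    }
}
```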
Cross-app automation. This is the one that impressed me the most in early reports. The canonical example Apple's been using internally: "Find the flight info from my email, check if it's delayed, and text my Uber driver the update." That's three services coordinated in a single request: Mail, a flight-status lookup, and Messages. Old Siri could barely handle one of those alone. Siri 2.0 chains them.
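Conceptually, the assistant compiles one utterance into a plan of tool calls. The Swift below sketches that chain with stubbed-out steps; the function names, the Flight type, and the placeholder data are all invented.

```swift
import Foundation

// Hypothetical plan for: "Find the flight info from my email, check if it's
// delayed, and text my Uber driver the update." All stubs, invented names.

struct Flight {
    let number: String
    let delayed: Bool
    let newDeparture: String?
}

/// Step 1: extract flight details from Mail.
func findFlightInMail() async -> Flight {
    Flight(number: "UA 482", delayed: true, newDeparture: "6:45 PM")  // placeholder data
}

/// Step 2: check live status against a flight-tracking service.
func status(of flight: Flight) async -> String {
    flight.delayed ? "delayed to \(flight.newDeparture ?? "TBD")" : "on time"
}

/// Step 3: send the update through Messages.
func textDriver(_ message: String) async {
    print("→ Messages: \(message)")
}

/// The assistant executes the whole chain from a single request.
func handleFlightUpdateRequest() async {
    let flight = await findFlightInMail()
    let update = await status(of: flight)
    await textDriver("Heads up: flight \(flight.number) is \(update).")
}
```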
On-screen awareness. Phase 1 (iOS 26.4, which rolled out in spring) already brought contextual awareness about what's on your screen. You can say "send this to Alex" while looking at a photo, or "add this to my notes" while reading a webpage. Siri now knows what you're looking at.
Email and document summarization. Not just "summarize this email" but full inbox management: ask Siri to catch you up on what you missed in a particular thread, or to draft a reply in your voice based on what the thread says. This kind of thing has been available in tools like Copilot for Outlook for a while. It's finally in your phone's native assistant.
Full conversational reasoning. Ask Siri to help you plan a trip, debug why your HomeKit automation isn't triggering, or explain a complex health trend in your Apple Watch data. These aren't voice commands anymore. They're conversations.
What Siri 2.0 still won't do natively: generate images, take voice calls, or run Python code in a sandbox. Those gaps exist because Gemini's capabilities are filtered through Apple's privacy and platform constraints. ChatGPT can generate images with DALL-E and handle voice calls in real time. Siri can't — at least not yet.
But there's a fix for that, and it lives in iOS 27.
iOS 27: The Multi-Model Play
Here's where Apple's strategy gets genuinely interesting.
iOS 27, expected to be previewed at WWDC on June 8, introduces what Apple's calling Siri Extensions. The concept: any AI provider that implements Apple's extension framework can plug into Siri as an available model. MacRumors broke the story on May 5 — users will be able to choose between Gemini (the default), ChatGPT, and Claude directly inside Siri.
So Siri is no longer a product. It's becoming a platform.
The implementation is clever. A custom version of Gemini powers the rebuilt Siri chatbot itself — no Google branding visible anywhere. But when you want to do something Gemini isn't the best at — image generation, say, or creative writing in ChatGPT's style — you can route that specific request to a different provider. Users can reportedly assign different voices to different AI providers, so you can tell at a glance which model is responding.
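If the reporting is accurate, the extension framework presumably centers on something like a provider protocol. The Swift below is pure speculation, since Apple hasn't published the API, but it captures the shape of what's being described, including per-provider voices.

```swift
// Speculative sketch of a Siri Extensions provider interface. Apple has not
// published this API; every name here is a guess.
protocol SiriExtensionProvider {
    /// Name shown in Siri's model picker.
    var displayName: String { get }
    /// Distinct voice, so users can tell providers apart by ear.
    var voiceID: String { get }
    /// Handle a request routed to this provider.
    func respond(to prompt: String) async throws -> String
}

struct ChatGPTProvider: SiriExtensionProvider {
    let displayName = "ChatGPT"
    let voiceID = "voice.chatgpt.default"
    func respond(to prompt: String) async throws -> String {
        "(ChatGPT answer to: \(prompt))"  // the real network call would go here
    }
}
```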
This is a meaningful reframe. Apple isn't claiming Gemini is better than Claude at everything. They're building a system where you can pick the right model for the right task, all from inside Siri. Creative writing: ChatGPT. Research with current web access: Gemini. Complex coding assistance: Claude. (Claude can be configured directly in iOS 27; AppleInsider noted that early negotiations with Anthropic reportedly fell apart over licensing terms before Anthropic eventually agreed to join the Extensions framework.)
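That per-task routing could be as simple as a lookup. The provider names are real products, but the routing table below is invented to illustrate the idea.

```swift
// Invented routing table matching the task-to-model pairings above.
enum TaskKind { case creativeWriting, webResearch, coding, general }

func preferredProvider(for task: TaskKind) -> String {
    switch task {
    case .creativeWriting: return "ChatGPT"
    case .webResearch:     return "Gemini"
    case .coding:          return "Claude"
    case .general:         return "Gemini"  // the system default
    }
}
```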
For users, the practical effect is that Siri becomes a universal access point to the AI models you already use. That's a user experience win even if the underlying models stay exactly where they are competitively.
What This Means If You're an OpenAI / ChatGPT User
Your position hasn't changed all that much — but it's gotten slightly worse.
ChatGPT is still available on iPhone. The app's not going anywhere. And iOS 27's Extensions framework explicitly includes ChatGPT as a supported provider. You can still set ChatGPT as your default AI extension in Siri if you prefer it.
What's changed: ChatGPT is no longer the default AI on the world's most popular phone. It's demoted from primary to opt-in. Across 2+ billion active Apple devices, that's not a trivial distribution shift. Analysts watching OpenAI's valuation have flagged this — losing default integration across that install base is a real blow to the growth metrics that support a $300 billion valuation.
OpenAI isn't losing iPhone users who are already ChatGPT power users. Those people will opt back in. What they're losing is the passive discovery — the people who started using AI because it was already there, who never went looking for ChatGPT but found it as Siri's backend. Those users are now finding Gemini instead.
There's also a branding dimension. Siri still says "Siri" — Gemini's name doesn't appear anywhere in the default experience. ChatGPT's name does appear when a user explicitly invokes it. One model is the invisible infrastructure, the other is the thing users have to consciously choose. That matters for mindshare.
The Competitive Ripple: Google Gains, Microsoft Gets Squeezed
Google's win here is almost embarrassingly large.
Two billion active Apple devices, all running Google's AI model in their primary assistant. No Google branding required. Every query that used to dead-end in Siri now flows through Gemini's reasoning engine. And the deal is structured as a multi-year partnership — this isn't a one-year trial.
For Google's AI narrative, the timing couldn't be better. Gemini has spent much of 2025 playing catch-up to OpenAI and Anthropic in benchmark coverage. Now it's the AI powering the world's most valuable consumer device ecosystem. The distribution argument alone is enormous.
Microsoft's position is the most awkward. Copilot has access to good models — GPT-4 and beyond — and it's been struggling to convert that into consumer traction. Now Google has locked down AI dominance across both Android (native) and iPhone (via the Apple deal), and Copilot's mobile ambitions are squeezed from both sides. Microsoft still wins in enterprise via Copilot for Microsoft 365. But the consumer battle on mobile looks increasingly like a two-player game: Apple/Gemini on one side, Samsung/Galaxy AI (also Gemini-powered for most markets) on the other.
OpenAI's consumer bet — becoming the AI people reach for on their phones — just got harder to win. Not impossible. But harder.
Is Siri 2.0 Finally Competitive with ChatGPT and Claude?
Honestly? Closer than it's ever been. But "competitive" depends heavily on the task.
For the things most people actually do with AI on their phones — asking questions, summarizing content, managing tasks, executing cross-app actions — Gemini 2.5 Pro is a genuine frontier model. On those everyday tasks, Siri 2.0 is probably as good as ChatGPT or better. Apple's integration advantage (deep OS access, on-screen context, device knowledge) pushes it ahead of anything you can do in a browser-based AI.
Where the gap remains: real-time voice conversation (ChatGPT's Advanced Voice Mode is still better), image generation (Siri can't, ChatGPT can), and highly complex coding work (Claude's the specialist choice there, and it's now available as a Siri extension for exactly that reason).
The honest user experience verdict: Siri 2.0 is the first version of Siri that I'd actually recommend for complex tasks. Not just "set a timer" or "call Mom." Actually complex tasks — the ones where previously you'd have opened the ChatGPT app and left Siri alone.
But the competitive win isn't really about Siri vs. ChatGPT as models. It's about Apple building a platform where users don't have to choose. Siri becomes the front door, and behind it you can summon Gemini, Claude, or ChatGPT depending on what you need. That's a smarter strategy than declaring one model the winner — and it's the reason this WWDC matters more for AI than any Apple event since the original iPhone.
What to Watch at WWDC 2026
June 8 is when this gets real. A few things to look for:
Apple will demo the cross-app automation live. It's the showstopper moment they'll build the keynote around, so watch how seamlessly (or not) it handles the multi-step task. Live demos at Apple events are rehearsed obsessively, but how much friction shows through still tells you a lot about the product's actual state.
The Extensions framework announcement will be the developer story. If Apple makes it easy for any AI provider to plug into Siri, that's a significant shift in the platform dynamics for the whole AI ecosystem — not just the three providers announced so far.
And watch the privacy framing. Apple will lean hard on the Private Cloud Compute angle, emphasizing that Google can't see your data even though Gemini is answering your questions. Whether that claim holds up to scrutiny from security researchers is a different question — one worth watching after the conference.
The Bottom Line
Apple's Gemini deal is the most significant change to Siri since the assistant launched 15 years ago. The numbers are real: $1 billion per year, Gemini 2.5 Pro as the underlying model, 2+ billion devices, a full rollout preview at WWDC on June 8.
What it means for you as an iPhone user: the assistant you've been ignoring for years is worth picking back up. Not because Apple suddenly became an AI powerhouse — they didn't. But because they made the pragmatic call to get out of the way and let a frontier model do the reasoning.
What it means for the AI industry: Google just secured the most valuable distribution channel in consumer AI. OpenAI's mobile loss is significant. Microsoft's consumer play gets harder. And the multi-model future — where you pick the right AI for the right task — just got Apple's weight behind it.
Siri 2.0 isn't the best AI model. It's access to several of them, from the device in your pocket, in a way that finally works.
That's enough.
For more on the AI assistant landscape, see our Gemini for Mac review and our Claude Opus 4.7 deep-dive. For the broader model competition picture, GPT-5.4 review covers where ChatGPT stands today.