What is the Technology?
If you've ever gotten frustrated at Siri for completely misunderstanding you, you're not alone. For years, Apple's voice assistant has felt more like a party trick than an actually useful tool. That's all about to change… well, hopefully. Apple is essentially rebuilding Siri from the ground up to work more like ChatGPT, meaning you can have a real, natural conversation with it instead of carefully choosing your words and hoping it figures out what you meant.
So how does something like this actually work? It all comes down to what's called a Large Language Model, or LLM. Think of it as an AI that has read basically everything on the internet, from textbooks and news articles to code and social media posts, until it became really good at understanding language and responding in a way that actually makes sense. That's why ChatGPT feels so different from the old Siri. It's not matching your words to a list of preset commands. It's actually processing what you said and figuring out the best response in real time.
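To make that contrast concrete, here is a toy sketch of the core idea behind an LLM: predicting what comes next based on what came before. Real models learn over a trillion parameters across subword tokens; this tiny bigram counter and its made-up corpus are purely illustrative.

```python
# Toy sketch of the core LLM idea: predict the next word from context.
# A real model uses learned parameters over subword tokens; this just
# counts which word tends to follow each word in a tiny corpus.
from collections import Counter, defaultdict

corpus = (
    "set a timer for ten minutes . "
    "set a reminder for tomorrow . "
    "set a timer for five minutes ."
).split()

# Count which word follows each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("set"))  # "a" — the only word ever seen after "set"
print(predict_next("a"))    # "timer" — seen twice, vs "reminder" once
```

Old-style Siri worked more like a lookup table of preset commands; the prediction approach is what lets a model respond sensibly to phrasings it has never seen verbatim.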
Apple's version is codenamed Campos, and it's built on a customized version of Google's Gemini AI model. The article notes the model has 1.2 trillion parameters, which is basically a measure of its size and capability. The bigger the number, the smarter and more capable the model tends to be. One thing worth paying attention to, though, is where all that processing actually happens. AI this powerful needs serious computing resources, and Apple is reportedly planning to run Campos on Google's cloud servers rather than entirely on your device.
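Some rough arithmetic shows why a model that size can't just live on your phone. The 2-bytes-per-parameter figure is an assumption (fp16/bf16 precision), and production systems quantize weights and may activate only a fraction of parameters per query, but the conclusion holds either way:

```python
# Back-of-the-envelope memory math for a 1.2-trillion-parameter model.
params = 1.2e12          # 1.2 trillion parameters
bytes_per_param = 2      # fp16/bf16 precision (assumption)

weights_gb = params * bytes_per_param / 1e9
print(f"{weights_gb:,.0f} GB just to hold the weights")

iphone_ram_gb = 8        # ballpark RAM on a recent iPhone (assumption)
print(f"~{weights_gb / iphone_ram_gb:,.0f}x a phone's total RAM")
```

That's roughly 2,400 GB of weights against single-digit gigabytes of phone memory, which is why the heavy lifting happens in a data center.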
Summary of the Article
To put it simply, Apple has been having a rough time in the AI race lately. When they launched Apple Intelligence in 2024, it didn't exactly blow anyone away. Features were delayed, the ones that did show up felt half-baked, and the whole thing left a lot of people wondering if Apple had lost its edge. Meanwhile, OpenAI and Google were consistently shipping impressive updates, and Samsung had already gone all in on conversational AI built right into their phones. Apple was falling behind, and everyone could see it.
According to Bloomberg reporter Mark Gurman, who has a strong track record of breaking Apple news, the company's answer to all of this is coming this fall with iOS 27. The new Siri, powered by the model codenamed Campos, will look familiar on the surface, since you'll still activate it the same way, by voice or by holding the side button. But what happens after that is a completely different story. We're talking about an assistant that can search the web, generate images, analyze files, and even control your phone settings, all through natural conversation. On top of that, it's being built into Apple's core apps, so you could have a conversation with Siri inside the Photos app to find and edit a specific picture, or ask it to write an email based on plans already sitting in your calendar.
The bigger story here isn't really about the features though. It's about Apple doing something they said they wouldn't do. For years, executives argued that users didn't want a chat interface and that AI should just quietly work in the background. That stance didn't hold up. With OpenAI building its own hardware, hiring away Apple's engineers, and showing no signs of slowing down, Apple had no choice but to get in the game.
How Does It Apply to Mobile Development?
For anyone building mobile apps, this is the kind of development you can't afford to ignore.
The most immediate impact is that the baseline for what an app needs to do is rising. I mentioned something similar in my previous blog post regarding OpenClaw. When the operating system itself can write emails, locate files, generate images, and control device settings through conversation, a lot of simple utility apps start looking redundant. Developers are going to have to think seriously about what genuine value their app offers that a built-in AI can't just handle natively. If your app's main feature is something Siri can now do in two seconds, that's a real problem worth solving sooner rather than later.
There's also a real opportunity here though. Apple will almost certainly release new APIs that let developers connect their apps directly into the Siri experience. Developers who move early to build for those integrations stand to benefit, because their app becomes part of how users interact with AI on their phone rather than competing against it. That requires staying ahead of the curve rather than reacting after the fact.
Voice and conversational interfaces are also going to stop being optional. A lot of apps currently treat voice accessibility as a nice-to-have, something tacked on rather than built in from the start. As users grow accustomed to talking to their phone and getting genuinely useful responses, their expectations everywhere else will shift too. Designing for conversation and not just taps and swipes is going to become a basic expectation across the board.
And then there's privacy, which is honestly the most complicated piece of all this. The article mentions Apple is still debating how much the chatbot should be allowed to remember about its users. That tension between personalization and privacy is something every developer working in the AI space is going to have to navigate carefully, especially on Apple's platform where privacy has always been central to the brand. How you handle user data, what you store, and how transparent you are about it is only going to matter more as AI becomes the default way people interact with their phones.
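To ground the data-handling point, here is a minimal sketch of two common data-minimization moves: pseudonymizing a user ID before it ever reaches a server, and redacting obvious contact details from a prompt. This is illustrative only; the function names and regexes are my own, and real systems (differential privacy, Apple's server-side protections) go far beyond this.

```python
# Minimal data-minimization sketch: hash the user ID and strip obvious
# contact details before a prompt leaves the device. Illustrative only.
import hashlib
import re

def pseudonymize(user_id: str, salt: str) -> str:
    """One-way hash so the server never sees the raw user ID."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

def redact(prompt: str) -> str:
    """Mask email addresses and US-style phone numbers before upload."""
    prompt = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[email]", prompt)
    prompt = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[phone]", prompt)
    return prompt

print(redact("Email jane@example.com about dinner at 555-123-4567"))
# → "Email [email] about dinner at [phone]"
```

Even a sketch like this surfaces the trade-off in the article: the more you strip before upload, the less the model can personalize its answer.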
My Opinion
As always, I have a lot of opinions, but for the sake of brevity I will keep this short. I will only address security concerns and why I think Apple was smart to partner with Google.
Security
If you read my previous blog post about OpenClaw, you know that my main concern with AI agents is security. However, I believe Apple has already addressed this concern, and nothing leads me to believe it will be any different this time. When they introduced Apple Intelligence at WWDC 2024, the main selling point was that user prompts stay on device. In the event Apple Intelligence needs more computing power, it falls back to Apple's cloud servers (what Apple calls Private Cloud Compute), but the data is anonymized and even Apple cannot link you to your prompt. That is a very different situation from what we saw with OpenClaw.
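The on-device-first setup described above can be sketched as a simple router: answer locally when the request is easy enough, and only fall back to the cloud with a minimized payload. Everything here, including the word-count heuristic and the threshold, is hypothetical, not how Apple actually routes requests.

```python
# Hypothetical sketch of on-device-first routing with a cloud fallback.
def estimate_complexity(prompt: str) -> int:
    """Crude stand-in for a real routing heuristic: more words = harder."""
    return len(prompt.split())

def handle(prompt: str, on_device_limit: int = 8) -> str:
    """Route easy requests locally; send harder ones to the cloud."""
    if estimate_complexity(prompt) <= on_device_limit:
        return f"on-device: {prompt}"
    # In a real system the payload would be anonymized before leaving
    # the phone, so the server can't tie the prompt back to the user.
    return f"cloud: {prompt}"

print(handle("set a timer for ten minutes"))
print(handle("plan my week using my calendar, email threads, and recent notes"))
```

The interesting design question, as with Campos reportedly running on Google's servers, is how little can cross that boundary while still keeping the answer useful.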
The Apple and Google Gemini Deal
This is my favorite topic. Apple partnering with Google to use their AI models is a genuinely smart move, and before you say Apple couldn't cut it in the AI race, let me explain. Apple has always been late to the game, and they are pretty transparent about it. They focus on making things better rather than being first to market. They would rather take their time and get it right than rush out something subpar.
Apple is also great at marketing. The iPhone, their largest source of revenue, is largely built on other companies' hardware technology wrapped in a beautiful product. I am not downplaying their innovation at all, because the Apple user experience is genuinely one of the best in the world. I am just pointing out that Apple built a trillion-dollar company without inventing all of the underlying hardware themselves, so partnering with Google is very on brand for them.
It also just makes economic sense. Training AI models right now is extremely expensive, so expensive that I personally do not think most LLM companies will ever see a return on their investment. By partnering with Google, Apple gets access to a great model right from the start. And if this AI technology is not just hype and it actually sticks around, they will have the time and resources to slowly train their own models down the road and potentially save a lot of money in the process.
Reference:
Mark Gurman, "Apple to Revamp Siri as a Built-In iPhone, Mac Chatbot to Fend Off OpenAI," Bloomberg, January 2026.