Last week was Apple’s Worldwide Developers Conference, and I’d like to share some thoughts about the event. As usual, there were lots of videos to catch up on, starting with the keynote, which is beginning to feel more like a media event for the general public than something specifically for Apple developers. It’s almost like an Apple product launch, aimed at a broad audience that wants a sneak peek at what’s new in macOS and iPadOS.
If you develop for the Apple ecosystem, you’re probably watching both the Apple Keynote and the WWDC25: Platforms State of the Union, which dives a little deeper into the technical details. This year, as always, there were plenty of interesting sessions and videos. Here, I’ll highlight some of my favorite talks.
There was an elephant in the room, and it was addressed right at the beginning of the keynote by Craig Federighi. It was already obvious that Apple was not ready to ship Apple Intelligence in the way it had been announced last year. A couple of months ago, John Gruber of the Daring Fireball blog wrote a rather incendiary article titled "Something Is Rotten in the State of Cupertino". I didn’t expect Apple to take it personally—but they did. So I was a bit disappointed and sad about Apple’s decision this year to decline his invitation for The Talk Show Live From WWDC 2025.
For ten years straight, his live event featured top-level Apple executives on stage discussing WWDC. In previous years, we had Craig Federighi (SVP of Software Engineering) and Greg Joswiak (SVP of Worldwide Marketing) on the podium, and it was something I always looked forward to.
The event still went ahead, with The Verge’s Nilay Patel and The Wall Street Journal’s Joanna Stern as guests.
It’s quite rare for Apple to overpromise, but it’s not all bad if they take more time to work out their approach. There are also rumors that they’ve run into problems trying to deliver a truly secure AI experience. After all, LLMs are still unpredictable—and in many ways, a security nightmare.
How can we trust our most sensitive data to an AI that can hallucinate or be vulnerable to emerging cyberattacks like prompt injection and manipulation? There’s still not enough research around security in artificial intelligence, and the risks are real.
Interestingly, just before WWDC25, a group of Apple AI engineers published a technical paper titled "The Illusion of Thinking", arguing that most LLMs aren’t really reasoning at all—we just want to believe they are. Nobody truly understands how large models work internally, and they might even contain unknown backdoors. That opens up possibilities for serious vulnerabilities.
This is just the beginning, and honestly, I think it's reasonable for Apple to be cautious. I wouldn't want to trust all my personal data to an AI system either. One of the iPhone’s biggest advantages has always been that your data stays secure—on the device.
visionOS received quite a few smaller software updates this year: it gained widgets, it can now remember where you place things in your space, and its UI was the inspiration for the new glass design that is becoming the uniform look across Apple’s operating systems. I think the Vision Pro is here to stay, even if the hardware has not been updated. There is a rumor that what Apple is really planning is an augmented reality device, which would make sense given the enhancements in the software. Maybe new glasses with see-through lenses but also a display, enabling quite a few features; that would be fascinating, and I wonder whether that is where Apple is eventually headed with new hardware in the works. At this point, of course, it is total speculation.
While it was funny to see the designers trying out the new design with glass elements on a big table, I think the new feature developers are talking about most is Apple giving API access to the on-device foundation models. This is great news indeed, and very exciting: once you give something this powerful to the developer community, you really can’t imagine what they will come up with.
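To give a sense of how small the surface area is, here is a minimal sketch of a Foundation Models call, based on the code shown in the WWDC25 sessions. The instruction and prompt strings are my own placeholders, and the details may still shift between betas.

```swift
import FoundationModels

func askOnDeviceModel() async throws {
    // The system model can be unavailable (Apple Intelligence turned off,
    // unsupported device, or the model still downloading), so check first.
    guard case .available = SystemLanguageModel.default.availability else {
        print("On-device model not available")
        return
    }

    // A session keeps context across turns; instructions steer the model's behavior.
    let session = LanguageModelSession(
        instructions: "You are a concise assistant for a hiking app."
    )

    // The prompt is processed entirely on device.
    let response = try await session.respond(
        to: "Suggest three short trail names for a coastal hike."
    )
    print(response.content)
}
```

Guided generation, tool calling, and streaming responses build on the same session object and are covered in the sessions below.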
See more in the developer videos:
- Meet the Foundation Models framework
- Code-along: Bring on-device AI to your app using the Foundation Models framework
- Deep dive into the Foundation Models framework
- Discover machine learning & AI frameworks on Apple platforms
- Explore prompt design & safety for on-device foundation models
How does Apple manage to run a model with roughly 3 billion parameters on-device? By quantizing the weights to 2-bit precision. Two bits is not much! I am amazed that this works.
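To make the storage math concrete, here is a toy sketch of what 2-bit weight quantization means. This is purely my own illustration of the idea; Apple’s actual compression pipeline is far more sophisticated.

```swift
// Toy 2-bit quantization: map each weight to one of four levels, so four
// weights pack into a single byte instead of four bytes each as Float32.
func quantize2Bit(_ weights: [Float]) -> (codes: [UInt8], scale: Float, minValue: Float) {
    let minValue = weights.min() ?? 0
    let maxValue = weights.max() ?? 0
    let scale = (maxValue - minValue) / 3  // 4 levels: 0...3
    var codes = [UInt8](repeating: 0, count: (weights.count + 3) / 4)
    for (i, w) in weights.enumerated() {
        let level: UInt8 = scale > 0 ? UInt8(((w - minValue) / scale).rounded()) : 0
        codes[i / 4] |= level << ((i % 4) * 2)  // pack 4 x 2-bit codes per byte
    }
    return (codes, scale, minValue)
}

// Reconstruct approximate weights from the packed 2-bit codes.
func dequantize2Bit(codes: [UInt8], count: Int, scale: Float, minValue: Float) -> [Float] {
    (0..<count).map { i -> Float in
        let level = (codes[i / 4] >> ((i % 4) * 2)) & 0b11
        return minValue + Float(level) * scale
    }
}

// Example: 8 weights fit in 2 bytes of codes instead of 32 bytes of Float32.
let weights: [Float] = [0.12, -0.40, 0.33, 0.05, -0.21, 0.47, -0.08, 0.29]
let q = quantize2Bit(weights)
let restored = dequantize2Bit(codes: q.codes, count: weights.count,
                              scale: q.scale, minValue: q.minValue)
print(q.codes.count, restored)  // coarse, but tiny
```

Back of the envelope: roughly 3 billion parameters at 2 bits per weight is about 0.75 GB for the weights, versus around 6 GB at 16-bit precision, which is what makes keeping such a model resident on a phone plausible.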
These models aren’t built for broad world knowledge, but they are incredibly fast. The same is true of the new on-device speech APIs: some developers have already reported a 50% speed increase over OpenAI’s Whisper for transcription tasks. See "Hands-On: How Apple’s New Speech APIs Outpace Whisper for Lightning-Fast Transcription".
Exciting days ahead!