I spent three days walking the halls of Fira Gran Via in Barcelona last week. I came back with sore feet and one clear takeaway:
AI assistants are the default now.
I work on Alexa+ and I spend my days thinking about and prototyping how AI (and voice) fits into people's actual lives: how to merge context, multi-turn conversations, and the smart home with the stuff that matters at 7am when you're making coffee. I'm building every day, trying to find the optimal architecture for my agents. And what I saw in Barcelona confirmed what I think: there is a convergence. Xiaomi, Samsung, LG, Lenovo, Honor, TECNO, Deutsche Telekom, Origen... they all arrived at the same vision.
The Browser Analogy
Remember when every company shipped a browser? At some point you stopped building for Netscape, or any specific browser, or even just one OS. You started building to web standards. The same thing is happening with AI assistants.
Many Booths Had an AI Assistant
And I'm not talking about chatbots or summarisation demos. These are full AI assistants with a name, a personality, and an ecosystem strategy behind them.
Xiaomi was showing Miloco, a complete solution built around "Xiaomi Local Copilot": an AI for your home. It sees clutter and sends the robot vacuum. It reads your sleep state and adjusts the climate. It runs on HyperOS 3 across phones, tablets, wearables, smart home devices and EVs, all sharing one context model. They call the whole thing "Human x Car x Home", and it defines a proper distributed architecture.
Then there was LG Uplus with ixi-O. Started as a call assistant, now it does a lot more. Detects spam, catches voice phishing mid-call, lets you pull up AI while you're talking to someone. Their CEO was pretty direct on stage: "AI will evolve into an agent that understands context and finds tasks on its own." I've heard variations of this sentence from almost every assistant team I've spoken to. They all got there independently.
Samsung did something I didn't expect with Galaxy AI. Bixby is becoming a proper conversational device agent, sure, but the real move is that users can now choose between Bixby, Gemini, and Perplexity. Samsung is saying the orchestration layer matters more than which model sits behind it.
Honor went somewhere completely different with the Robot Phone: a 200MP camera on a robotic arm that moves, tracks, nods, and dances. A sort of physical AI assistant with spatial awareness and emotional expression (it nods when agreeing, shakes its head when disagreeing, and so on).
Deutsche Telekom built the Magenta AI Call Assistant with ElevenLabs and what got me is that it lives in the network, not on your phone. Live translation, call summaries, mid-call Q&A, with appointment booking coming next. A telecom carrier building an AI life assistant into the network layer... didn't see that one coming.
Lenovo showed Qira, a cross-device assistant that actually remembers. You start research on a ThinkPad, pick it up on a Tab, finish on your phone, and you never repeat yourself. Unified memory across all three.
TECNO upgraded Ella so it reads and replies to WhatsApp messages, summarises YouTube videos, organises your tasks. They processed over 500M AI requests in 2025, more than half in non-English languages. They're building for emerging markets where an AI assistant might be how someone interacts with the digital world for the first time.
And Origen showed DOMIA, which replaces if-this-then-that home automation with an LLM-powered multi-agent architecture. You say "It's a bit dark" and it does something different at 2pm than at 10pm, because it actually gets the context. A few years ago this kind of ambient intelligence needed billions in R&D. Again... context.
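To make the difference concrete, here is a toy sketch of what "same utterance, different action depending on context" means. The device names and the decision logic are illustrative assumptions, not DOMIA's actual API; a real system would route the utterance through an LLM with full home state rather than a hand-written rule.

```python
from datetime import time

def handle_utterance(utterance: str, now: time) -> str:
    # Same phrase, different resolution depending on time-of-day context.
    # An if-this-then-that rule would always trigger the same single action.
    if "dark" in utterance.lower():
        if time(7, 0) <= now < time(18, 0):
            # Daytime: opening the blinds is the less disruptive fix.
            return "open_blinds"
        # Evening/night: warm, dim light fits the context better.
        return "lights_on_warm_30pct"
    return "no_action"

print(handle_utterance("It's a bit dark", time(14, 0)))  # daytime
print(handle_utterance("It's a bit dark", time(22, 0)))  # night
```

The point is not the rule itself but the shape of the interface: the handler receives context alongside the command, so the "right" action is computed, not hard-coded.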
The Silicon Is Ready Too
Qualcomm dropped the Snapdragon Wear Elite, the first wearable chip with a dedicated NPU. It runs 2B-parameter models on-device at 10 tokens/sec. Computer vision, text-to-speech, agentic AI on your wrist.
MediaTek talked about a "personal device cloud" where AI agents work together across your family's devices over Wi-Fi or 6G.
People Aren't Waiting, They're Building Their Own
The demand was there before Barcelona. We all know some examples:
OpenClaw hit 250,000 GitHub stars in 60 days. The AI agent that runs locally, connects to Claude or DeepSeek or GPT, and works through WhatsApp, Telegram, Signal. It went from a side project to a global thing so fast that Shenzhen's government started drafting policy around it.
The Rabbit R1 sold out at $199. The Humane AI Pin didn't make it, but people still paid for it. What all of these tell you is the same thing every MWC booth was responding to: people want an AI that does things in their life, not another chat window.
So What Do You Do About It?
If you're building in this space, here's what I keep coming back to.
The assistant itself isn't the moat anymore. The real question becomes how deep your integration goes into someone's actual day.
Context is where the power sits, and if your integration only works with one assistant, you've got a problem. You need to think multi-assistant, and open standards are how you get there.
Open Standards You Should Learn

Six things worth your time right now:
Model Context Protocol (MCP) - Write a tool once, and any compatible assistant can use it.
MCP Apps - Let you build UIs that any host can render.
Agent Skills - Let you package capabilities that any agent can call.
Context Hub - A way to let your AI agent fetch curated documentation about an API.
Agent-to-Agent (A2A) - How agents discover and talk to each other. If you're building an agent that needs to coordinate with other agents, this is the protocol.
On-device model APIs - Apple's Foundation Models, plus Qualcomm's and MediaTek's APIs. Optimized for latency and privacy.
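To see why "write a tool once" works, it helps to look at the wire format. MCP rides on JSON-RPC 2.0, and a tool server mainly answers two methods: `tools/list` and `tools/call`. The sketch below hand-rolls that shape for a made-up `get_weather` tool, purely for illustration; in practice you would use an official MCP SDK rather than implementing the protocol yourself.

```python
import json

# One made-up tool, described with a JSON Schema for its inputs.
TOOLS = {
    "get_weather": {
        "description": "Return a canned weather string for a city.",
        "inputSchema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
}

def handle_request(raw: str) -> str:
    """Answer the two JSON-RPC methods at the heart of MCP tool use."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
    elif req["method"] == "tools/call":
        city = req["params"]["arguments"]["city"]
        result = {"content": [{"type": "text", "text": f"Sunny in {city}"}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "unknown method"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

call = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                   "params": {"name": "get_weather",
                              "arguments": {"city": "Barcelona"}}})
print(handle_request(call))
```

Because every compatible host speaks this same request/response shape, the tool you expose here is usable from any of the assistants above without per-assistant glue code.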
What to Build Next
Add an MCP server so AI agents can use your tools or product. Consider an MCP App if you need a UI.
Publish an Agent Skill so any AI agent knows how to use your services, tools, and workflows.
Design for context, not just commands. Expose state and history, not just actions. Give agents all the information they need to make the right decision for the user.
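Here is a minimal sketch of the "state and history, not just actions" idea, using a hypothetical thermostat integration (all names are illustrative). A command-only integration would expose just `set_target`; adding a context endpoint lets an agent reason about whether to act at all, for example skipping a change the user already reverted twice today.

```python
from dataclasses import dataclass, field

@dataclass
class Thermostat:
    current_c: float = 21.0
    target_c: float = 21.0
    history: list = field(default_factory=list)

    # Action: the only thing command-only integrations expose.
    def set_target(self, target_c: float) -> None:
        self.history.append(("set_target", target_c))
        self.target_c = target_c

    # Context: state plus recent history, so an agent can decide
    # *whether* acting makes sense, not just *how* to act.
    def get_context(self) -> dict:
        return {
            "current_c": self.current_c,
            "target_c": self.target_c,
            "recent_actions": self.history[-5:],
        }

t = Thermostat()
t.set_target(19.5)
print(t.get_context())
```

The design choice is simple: the richer the read-side of your API, the better any assistant built on top of it can behave.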
Honest Reflection
Walking MWC was a good reminder that the whole industry got to the same conclusion on their own: people need AI that actually helps them run their lives.
If you're a developer, now's the time to start building for the assistant layer. Not for one assistant. For the layer underneath all of them.
I came back from MWC wanting to build faster.