The plugin era is ending
Anthropic just shipped something that reveals exactly where AI agents are headed: Claude mobile now runs live, interactive Figma, Canva, Slack, and Amplitude instances directly inside the chat interface.
Not screenshots. Not summaries. Functional canvases you can edit and push changes back to the source tool.
This is the difference between a plugin and an embedded tool. And it matters more than you think.
Plugins retrieve. Embedded tools operate.
A plugin fetches data for the conversation:
User: What's the latest in Figma?
Plugin: [fetches screenshot] → Conversation continues
An embedded tool becomes the workspace:
User: Update this Figma component
Claude: [opens live Figma canvas] → User edits directly → Changes sync to source
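The distinction is easiest to see in code. Here is a minimal sketch (all names hypothetical, not any real Figma or MCP API): a plugin hands back a detached snapshot, while an embedded tool hands back a live handle whose edits write through to the source of truth.

```python
import copy

class FigmaSource:
    """Stand-in for a design file living in the vendor's cloud."""
    def __init__(self):
        self.components = {"button": {"color": "blue"}}

def plugin_fetch(source):
    # Plugin model: return a detached snapshot; edits to it never reach the source.
    return copy.deepcopy(source.components)

class EmbeddedCanvas:
    # Embedded model: the canvas is a live handle; edits write through.
    def __init__(self, source):
        self._source = source
    def set_property(self, component, key, value):
        self._source.components[component][key] = value

figma = FigmaSource()

snap = plugin_fetch(figma)
snap["button"]["color"] = "red"
print(figma.components["button"]["color"])  # still "blue": retrieval is one-way

canvas = EmbeddedCanvas(figma)
canvas.set_property("button", "color", "red")
print(figma.components["button"]["color"])  # "red": the edit synced to source
```

Same user intent, two very different contracts: one copies state out, the other operates on it in place.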
The 52-second demo shows a designer tweaking a Figma file, a PM assembling a Canva deck, and an analyst querying Amplitude dashboards without opening a single other app.
Why the model switched
HCI research suggests each app switch costs roughly 20-40 seconds of refocus time, plus a measurable spike in cognitive load. For knowledge workers toggling between Slack, Figma, spreadsheets, and project trackers dozens of times per hour, that tax compounds into hours of lost output per week.
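A back-of-the-envelope check makes the compounding concrete. All inputs below are illustrative assumptions, not measured values:

```python
# Rough arithmetic on the switching tax: midpoint cost per switch,
# "dozens of times per hour," standard work week.
switch_cost_s = 30        # midpoint of the 20-40 second range
switches_per_hour = 24    # two dozen toggles per hour
hours_per_day = 8
days_per_week = 5

weekly_tax_hours = (switch_cost_s * switches_per_hour
                    * hours_per_day * days_per_week) / 3600
print(f"{weekly_tax_hours:.1f} hours/week")  # 8.0 hours/week
```

Under these assumptions the tax is a full working day per week, which is why "hours of lost output" is not an exaggeration.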
Anthropic's bet: collapsing those tools into a single AI-mediated surface eliminates the tax entirely.
But here's the key insight: the model doesn't just retrieve context—it renders the tool.
The MCP infrastructure underneath
The plumbing is Model Context Protocol (MCP), which Anthropic open-sourced in November 2024. It standardizes how Claude talks to third-party software. The launch partners—Figma, Canva, Slack, Asana, Box, Amplitude—aren't consumer toys. They live behind corporate firewalls, handle sensitive data, and require permissioned access.
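What "standardizes" buys you is one dispatch path for every vendor. The sketch below is not the real MCP wire format or SDK; the tool names and envelope shape are assumptions, meant only to show the pattern of a uniform request envelope routed to per-tool handlers:

```python
import json

REGISTRY = {}

def tool(name):
    """Register a handler under a namespaced tool name."""
    def register(fn):
        REGISTRY[name] = fn
        return fn
    return register

@tool("figma.get_component")
def get_component(args):
    return {"component": args["id"], "color": "blue"}

@tool("slack.post_message")
def post_message(args):
    return {"ok": True, "channel": args["channel"]}

def handle(request_json):
    # One path for every integration: parse the envelope, route, respond.
    req = json.loads(request_json)
    handler = REGISTRY[req["tool"]]
    return json.dumps({"result": handler(req["args"])})

print(handle('{"tool": "slack.post_message", "args": {"channel": "#design"}}'))
```

Because every tool speaks the same envelope, adding Figma or Amplitude is a registry entry, not a bespoke integration—which is exactly what made onboarding six launch partners tractable.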
Getting them to open up to an AI agent required something most vendors don't have: a safety-first reputation that enterprises already trust. Anthropic's Constitutional AI brand closed the enterprise trust gap.
The mobile angle matters
Laptops are where work gets done. Phones are where work gets stuck. The "I'll deal with it when I'm back at my desk" problem accounts for a large share of a knowledge worker's leaked productivity.
If Claude can turn a phone into a legitimate workspace for visual design, data analysis, and project management, it captures that stranded work.
That's the WeChat playbook applied to knowledge work: win on distribution and habit, not raw intelligence.
What changes for builders
If you're building AI tooling:
Stop thinking integration. Start thinking rendering layer. Your tool should be operable from within the agent, not just queryable by it.
Context graphs compound value. Claude knows your Figma file in the context of the Slack thread where your team debated the design, the Asana ticket that prompted it, and the Amplitude data that justified the change. Each new integration makes every other integration more valuable.
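One way to see the compounding (a simplification, assuming value roughly tracks cross-tool context links): with point-to-point plugins, each useful link between two tools has to be built separately, so a shared context graph grows with the number of tool pairs, not tools.

```python
def cross_links(n_tools):
    # Number of distinct tool pairs: n choose 2.
    return n_tools * (n_tools - 1) // 2

for n in [2, 4, 6]:
    print(n, "tools ->", cross_links(n), "cross-tool links")
# 2 -> 1, 4 -> 6, 6 -> 15: each new integration enriches all the others
```

Going from four integrations to six doesn't add two units of value on this model; it adds nine new cross-tool links.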
The switching cost is not the subscription. It's reassembling a fragmented workflow across ten apps and losing the connective tissue between them.
What to watch
Two signals over the next two quarters:
Team-level workflow templates: If Anthropic introduces shared MCP configurations that let an entire org standardize how Claude orchestrates their tools, they're selling to IT buyers, not just individual users.
Claude-native features from partners: If Figma or Slack start shipping features that assume the agent is always present, the super app thesis isn't just Anthropic's ambition—it's becoming the ecosystem's default assumption.
That's when the moat gets real.
The future isn't agents with more plugins. It's agents that become the interface for the tools you already use.
Anthropic just showed us what that looks like on a phone screen.