Skila AI

Canva Just Reinvented Itself as a Conversational AI Platform. 265M Users Got It Today.

Originally published at news.skila.ai

Canva didn't update its AI. It replaced it.

On April 18, 2026, 265 million monthly active users woke up to a version of Canva that doesn't look like Canva anymore. No templates grid on the home page. No blank canvas first. Just a chat box that asks what you're trying to ship.

I typed: "Q3 product launch campaign for a dev tools startup, brand-matched, Instagram plus LinkedIn plus a 30-second explainer." Thirty-four seconds later I had nine assets, each one editable down to the pixel, all pulling from a brand style I hadn't uploaded yet. It had inferred it from my last three designs.

This is Canva AI 2.0. And I think it just ended the design-tool category as we knew it.

What actually changed on April 18

The launch happened at Canva Create 2026 in Los Angeles on April 16. Public rollout began April 18 to the first 1 million users as a research preview, with the rest of the 265M user base queued behind them. COO Cliff Obrecht called it the biggest product overhaul since Canva launched in 2013. That's not marketing copy — the entire product architecture got rebuilt.

Four capabilities anchor the release:

  1. Conversational Design — natural-language prompts produce fully editable designs, not flattened PNGs
  2. Agentic Orchestration — one brief triggers a chain of Canva tools working together
  3. Layered Object Intelligence — every output is stacks of individual objects you can still edit
  4. Memory Library — persistent brand preferences, design history, and an auto-generated user profile

Plus connectors to Slack, Notion, Zoom, Gmail, and Google Calendar so the designs don't live in a tab you have to remember to open.
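
To make point 3 concrete: "editable down to the pixel" only works if the output is a layer tree, not a bitmap. Here's a minimal sketch of what that implies at the data level. The types are ones I made up for illustration, not Canva's actual schema.

```typescript
// Hypothetical layer model (not Canva's real API) showing why a generated
// design stays editable: every element keeps its own properties instead of
// being flattened into a single exported image.
type LayerKind = "text" | "image" | "shape" | "group";

interface DesignObject {
  id: string;
  kind: LayerKind;
  props: {
    text?: string;
    fontFamily?: string;
    fill?: string;            // e.g. a brand color like "oklch(0.72 0.15 145)"
    src?: string;             // source for image layers
  };
  children?: DesignObject[];  // groups nest, which is what makes a "stack"
}

interface GeneratedDesign {
  id: string;
  format: "instagram-post" | "linkedin-post" | "email-header";
  layers: DesignObject[];     // ordered bottom-to-top
}
```

A flattened PNG throws that tree away; keeping it is what lets a later prompt patch one object without regenerating everything else.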

The agentic part is the one that matters

Everyone's shipping conversational AI right now. Figma has prompt-to-design. Adobe has Firefly Services. What Canva did differently is chain the tools.

Here's the before/after. Six months ago, making a job posting graphic in Canva looked like this: open template → swap text → change brand colors → export → switch to LinkedIn → paste → tweak caption → schedule. Eight manual steps across two apps, roughly 12 minutes.

In Canva AI 2.0, a recruiter types: "Create a job posting graphic in our brand style and post it to LinkedIn." The agent reads the brand style from Memory Library, generates the graphic, routes it through the LinkedIn connector, drafts a caption, and queues the post. You approve. It ships.

This is not one AI model doing one thing. It's an orchestrator calling Magic Design, Brand Voice, and the LinkedIn connector in sequence, with checkpoints you can approve or reject. That's the definition of an agent loop. Canva just built one for design workflows and glued it into a product that your marketing team already pays for.
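
Here's roughly what that loop looks like in code. Every name below is a hypothetical stand-in for the tools the orchestrator would call, not Canva's actual API; the part that matters is the shape: chained tool calls with a human checkpoint before anything ships.

```typescript
// Minimal agent-loop sketch of the recruiter workflow above. All functions are
// stubs standing in for separate services.

interface BrandStyle {
  colors: string[];
  fonts: string[];
  voice: string;
}

// Tool 1: Memory Library lookup (stubbed).
async function loadBrandStyle(userId: string): Promise<BrandStyle> {
  return { colors: ["#0f172a", "#334155"], fonts: ["Inter"], voice: `direct, technical (${userId})` };
}

// Tool 2: design generation (stubbed); returns a design id.
async function generateGraphic(brief: string, brand: BrandStyle): Promise<string> {
  return `design:${brief.slice(0, 24)}:${brand.fonts[0]}`;
}

// Tool 3: caption drafting in the stored brand voice (stubbed).
async function draftCaption(brief: string, brand: BrandStyle): Promise<string> {
  return `We're hiring. ${brief} [voice: ${brand.voice}]`;
}

// Tool 4: the LinkedIn connector (stubbed).
async function queueLinkedInPost(designId: string, caption: string): Promise<void> {
  console.log("queued", designId, caption);
}

// Checkpoint: the loop pauses for a human approve/reject before side effects.
async function askForApproval(step: string, preview: unknown): Promise<boolean> {
  console.log(`[needs approval] ${step}`, preview);
  return true; // stand-in for a real approval UI
}

async function runJobPostingAgent(userId: string, brief: string): Promise<void> {
  const brand = await loadBrandStyle(userId);
  const designId = await generateGraphic(brief, brand);
  const caption = await draftCaption(brief, brand);

  // Chained tools plus a checkpoint is what makes this an agent loop,
  // not a single model call.
  if (await askForApproval("post to LinkedIn", { designId, caption })) {
    await queueLinkedInPost(designId, caption);
  }
}

runJobPostingAgent("user-123", "Job posting: senior backend engineer, remote").catch(console.error);
```

Swap the stubs for real services and you have the recruiter flow from above: brief in, one approval checkpoint, post queued.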

Memory Library is the real moat

The feature nobody's leading with in the coverage is the one that'll matter in 18 months. Memory Library stores three layers:

  • Brand preferences — colors, fonts, logo lock-ups, voice, imagery rules
  • Design history — everything you've shipped, indexed and recallable
  • An auto-generated "About Me" profile — Canva infers who you are from what you make

You don't upload any of this. You use Canva, and the memory builds itself. On my fourth prompt of the morning, the agent asked: "Should this match your usual moody photography style or the clean product-shot look you used last week?" I never told it I had a "usual." It figured that out by watching me work.
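
One way to picture it, assuming nothing about Canva's internals: the Memory Library is a store with the three layers above, and every design you ship nudges it. The shapes and names below are mine, purely for illustration.

```typescript
// Hypothetical model of the three memory layers; none of these names are
// Canva's, this is just one way the described behavior could be structured.

interface BrandPreferences {
  colors: string[];
  fonts: string[];
  imageryRules: string[];   // e.g. "moody photography", "no stock photos"
}

interface ShippedDesign {
  id: string;
  createdAt: Date;
  dominantColors: string[];
  fonts: string[];
  tags: string[];           // indexed so past work is recallable later
}

interface MemoryLibrary {
  brand: BrandPreferences;
  history: ShippedDesign[];
  aboutMe: string;          // auto-generated profile inferred from the history
}

// The "memory builds itself" part: there is no upload step, each design you
// ship just updates the stored preferences.
function recordDesign(memory: MemoryLibrary, design: ShippedDesign): MemoryLibrary {
  return {
    brand: {
      ...memory.brand,
      colors: [...new Set([...memory.brand.colors, ...design.dominantColors])],
      fonts: [...new Set([...memory.brand.fonts, ...design.fonts])],
    },
    history: [...memory.history, design],
    aboutMe: memory.aboutMe, // in practice, re-summarized from the growing history
  };
}
```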

Here's why this is a moat. The more you use Canva, the better its outputs get for you specifically. Switching to Figma or Adobe Express means training their memory from zero. That switching cost compounds every month.

I stress-tested it on a real brief

I gave it this: "Announcement for a new open-source MCP server we just published. Social carousel, email header, and a one-page landing section. Use our existing brand."

The Memory Library had no brand yet — this was a fresh test account. So I uploaded three past designs I'd saved from another project. The agent inferred: primary color oklch(0.72 0.15 145), dark-mode-first, sans-serif headlines, generous white space, no stock photography.

Thirty-eight seconds to first draft. Eight assets. Two were off — it guessed a green accent I didn't want and used an icon style that felt dated. I typed: "Kill the green accent, use a slate gray instead. And the icons should be line-art, not filled." Eleven seconds to fix. All eight assets updated at once.
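
That batch update is only possible because the outputs stay layered. Here's a sketch of the idea with made-up names and color values: the correction becomes a structured patch mapped over every asset, instead of eight separate regenerations.

```typescript
// Sketch of the "one correction, all eight assets" step. The patch shape and
// the color values are stand-ins I made up for illustration.

const GREEN_ACCENT = "oklch(0.80 0.17 150)"; // stand-in for the unwanted accent
const SLATE_GRAY = "oklch(0.45 0.02 260)";   // stand-in for the requested replacement

interface BrandPatch {
  replaceFill?: { from: string; to: string };
  iconStyle?: "line" | "filled";
}

interface Asset {
  id: string;
  layers: { fill?: string; iconStyle?: "line" | "filled" }[];
}

function applyPatch(asset: Asset, patch: BrandPatch): Asset {
  return {
    ...asset,
    layers: asset.layers.map((layer) => ({
      ...layer,
      fill:
        patch.replaceFill && layer.fill === patch.replaceFill.from
          ? patch.replaceFill.to
          : layer.fill,
      iconStyle: patch.iconStyle ?? layer.iconStyle,
    })),
  };
}

// "Kill the green accent, use a slate gray instead. Icons should be line-art."
const patch: BrandPatch = {
  replaceFill: { from: GREEN_ACCENT, to: SLATE_GRAY },
  iconStyle: "line",
};

const reviseAll = (assets: Asset[]): Asset[] => assets.map((a) => applyPatch(a, patch));
```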

Doing this by hand — in any tool — is a 90-minute job. I was done in under 4 minutes.

What it can't do (yet)

Being honest about the ceiling:

  • Video orchestration is weak. You can prompt a 30-second explainer, but scene transitions and voiceover pacing still need manual work
  • Memory Library occasionally overfits — it tried to force a brand style onto a personal project where I wanted something different
  • Connector auth is fiddly. Gmail and Calendar asked me to reauthorize twice in an hour
  • Agent reasoning is visible only as "thinking..." dots. There's no trace of why it chose what it chose
  • Pricing for the full agentic tier isn't public yet. Rollout is free for Pro users in the preview

The bigger shift this signals

For two years the AI product question has been: do you ship AI features inside your existing product, or do you rebuild the product around AI? Most companies picked the first path because the second path is terrifying — you're rewriting the UX 265 million people know.

Canva just picked the second path. The home screen isn't a grid of templates anymore. It's a prompt. That's a bet that users will accept a new interface if the outcome is 10x faster work.

If that bet lands, every design tool — Figma, Adobe Express, Framer, Sketch — is going to have the same decision forced on them by quarter's end.


Full article with more details and related resources: news.skila.ai
