DEV Community

Nico Krijnen

Posted on • Originally published at realworldarchitect.dev

MCP Apps - Finally a UI paradigm that speaks the language of intent

Earlier this month I spoke at the Dutch AI Conference. After my own talk, I attended a talk by Glenn Reyes about MCP-UI. While listening, something profound struck me, something I suspect few others have realized yet. So I figured this insight was worth writing down and sharing.

Glenn argued that due to the text-based nature of our interactions with LLMs and agents, we are throwing away decades of UX expertise. MCP-UI has been an attempt to change that, allowing interactive visual components to be provided through MCP (Model Context Protocol), an open standard that gives AI assistants the ability to connect to external tools and take real action, not just generate text. Development of MCP-UI moved fast, and interactive HTML-based interfaces are now an official part of the MCP Apps standard.

Glenn Reyes at Dutch AI Conference 2026

MCP Apps solve the "how do we make agent UIs better" problem. But sitting in that audience, a different question grabbed me: how do you design what those agent interactions should be? Because building a beautiful MCP App that solves the wrong problem is still solving the wrong problem.

That question has been answered before. Not by the AI community, but by a different group of practitioners who’ve spent the past decade perfecting collaborative techniques for exactly this: understanding complex behavior, modeling intent, and building shared language between the people who know the domain and the people who write the code. These two worlds have barely met. So let’s see if we can bring these communities together!

Troublesome state

UIs have always been built around state. Lists, forms, search bars, toggles, modals, loading spinners, all of these require state to be managed. Many frameworks have been built in attempts to tame state complexity (Redux, MobX, signals), but the fundamental model stays the same: UI reacts to data, components reflect state. Even with these frameworks, UI code is still hard to reason about, hard to test, and far away from the language of the business.

Language of the business

What do I mean by "language of the business", and why does it matter? Developers (and their coding agents) write the code. To build the right thing, they need to understand the what and why behind what they are building or changing. Otherwise they solve the wrong problem…

The understanding that is needed typically lives with non-technical people: the users of the system or the people who know how things really work. They don’t speak your technical language and they shouldn’t have to. What you need is to build deep understanding by using a shared non-technical language. More on that later…

Intent

What struck me during Glenn's talk was that MCP-UI and MCP Apps don't manage state the way we've traditionally done it for frontend components. Instead, the primitive is the message or tool call, not the component. The UI becomes a surface for expressing goals, not navigating menus. Instead of "fill form, submit, handle error state", the interaction is "user states intent, agent invokes tool, result surfaces".

State-based UI vs intent-based interaction

In MCP Apps, every meaningful interaction is a named action with context. Things like "SearchProducts", "PlaceOrder", "SummarizeDocument". Sound familiar? That's because it's very close to the commands used in EventStorming. Think of an MCP tool call as the agent's way of issuing a command to the outside world. A command is simply a request to do something: it captures what someone wants to happen in a name, along with the information needed to do it. Commands model intent directly; they express "I want this to happen" rather than "set this field to that value". They carry meaning, not mechanics.

A command like "ApproveExpenseReport" says why something needs to happen, not just how the data changes. When modeled correctly, commands speak the language of the business, not the language of the database.

MCP tool calls map to commands in domain modeling
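To make that concrete, here is a minimal sketch of a command as a named intent carrying its context, in Python. The class and field names (`ApproveExpenseReport`, `report_id`, `approved_by`) are illustrative assumptions, not from any real system:

```python
from dataclasses import dataclass

# Hypothetical command: a named intent plus the context needed to act on it.
# It says *what* should happen ("approve this expense report"),
# not *how* the data changes ("set the status field to 'approved'").
@dataclass(frozen=True)
class ApproveExpenseReport:
    report_id: str
    approved_by: str
    comment: str = ""

# An agent would pass exactly this information as tool-call arguments.
cmd = ApproveExpenseReport(report_id="exp-42", approved_by="alice")
print(cmd)
```

The command's name carries the business meaning; the fields are only the context needed to fulfill it.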

Designing Intent-based AI interactions

Let’s start with a concrete example. Imagine you’re building an MCP App for a warehouse operations team. Order pickers need to collect items from shelves and get them to the loading dock. A picker says “plan my picks for the next batch” and the agent needs to respond with an optimized walking route through the warehouse. You can’t describe a spatial route in text. This domain needs a visual interface. So how do you design it?

Instead of the usual "let the dev team dream up how this will work", let's do some collaborative modeling and build deeper understanding.

You get the warehouse team into the room and run an EventStorming session. You start by asking: “What are all the things that happen during a pick run?” Sticky notes go up: PickOrderReceived, RouteOptimized, ItemPicked, CartFull, BatchCompleted. Standard stuff, roughly what you expected.

Warehouse pick process EventStorming board

Then the warehouse supervisor adds one: ObstructionReported. You ask what that means. Turns out aisle blockages happen every day: pallet spills, restocking crews. When a planned route is blocked, pickers currently find out by walking into the obstruction and rerouting on the fly, or by shouting across the floor. This was nowhere in the requirements.

You keep going. ColdChainTimerStarted appears: frozen items have a 12-minute window from cold storage to the insulated shipping container. HazmatSegregationViolationDetected: cleaning chemicals and food products can never share a cart. The process is far richer than anyone expected. Commands pop up: PlanPickRoute, PickItem, ReportObstruction, SplitCart, SubstituteItem. Each maps directly to an MCP tool definition, named in the language of the warehouse team.
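As a sketch, here is roughly how the PlanPickRoute command from the session could be expressed as an MCP-style tool definition: a name, a description, and a JSON Schema for the input, following the shape MCP uses for tool listings. The parameter names are assumptions for illustration:

```python
# Illustrative MCP-style tool definition for one command from the session.
# The name comes straight from the warehouse team's language; the
# input parameters (picker_id, batch_id) are hypothetical.
plan_pick_route = {
    "name": "PlanPickRoute",
    "description": (
        "Plan an optimized walking route for the next pick batch, "
        "respecting cold-chain timing and hazmat segregation rules."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "picker_id": {"type": "string"},
            "batch_id": {"type": "string"},
        },
        "required": ["picker_id", "batch_id"],
    },
}
print(plan_pick_route["name"])
```

Each command from the sticky notes (PickItem, ReportObstruction, SplitCart, SubstituteItem) would get the same treatment, keeping the warehouse vocabulary intact all the way into the tool names.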

Then something interesting happens. When the supervisor explains how obstruction rerouting works, sticky notes aren’t enough. She grabs a marker and draws the floor plan on the whiteboard next to the event timeline. Aisles, shelving sections, cold storage tucked in the back corner, the loading dock. She marks where today’s restocking is blocking aisle 6 and draws the detour path a picker would take. Someone grabs colored markers: orange zones for hazmat storage, a blue snowflake near cold storage, red X’s for blockages.

Warehouse floor plan evolving from whiteboard sketch to MCP App UI

You zoom in on the tricky parts with Example Mapping. You pick PlanPickRoute and explore the rules:

  • Rule: cold chain : “Given a batch that includes frozen items, when the route is planned, then cold storage items must be picked last to stay within the 12-minute window.”
  • Rule: hazmat segregation : “Given a batch with both cleaning chemicals and food products, when the route is planned, then the cart must be split and items picked in separate runs.”
  • Edge case: “What if the only route to cold storage is blocked?” The supervisor pauses. “Good question. I think… the picker skips the cold items and comes back for them after the obstruction clears? We’ve never actually decided that.”

Example Mapping for PlanPickRoute MCP tool
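Rules like these translate almost directly into executable checks. The sketch below uses a toy `plan_route` planner invented purely for illustration, but the two assertions mirror the cold-chain and hazmat rules above:

```python
# Toy planner, invented for illustration: frozen items are picked last,
# and chemicals never share a run with food (hazmat segregation).
def plan_route(items):
    food = [i for i in items if i["category"] == "food"]
    chemicals = [i for i in items if i["category"] == "chemical"]
    frozen = [i for i in items if i["category"] == "frozen"]
    runs = []
    if food or frozen:
        runs.append(food + frozen)  # cold chain: frozen picked last
    if chemicals:
        runs.append(chemicals)      # hazmat: a separate run
    return runs

# Rule: cold chain — frozen items come last in their run
route = plan_route([{"sku": "A", "category": "frozen"},
                    {"sku": "B", "category": "food"}])
assert route[0][-1]["category"] == "frozen"

# Rule: hazmat segregation — chemicals and food end up in separate runs
route = plan_route([{"sku": "C", "category": "chemical"},
                    {"sku": "B", "category": "food"}])
assert len(route) == 2
```

The open edge case (cold storage unreachable) deliberately has no assertion yet: the supervisor hasn't decided the rule, so the code shouldn't pretend she has.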

The sketch on the wall becomes the MCP App: a floor plan with aisles, an optimized route line, obstructions in red, color-coded item categories. The events and commands from the timeline map to tool definitions. The visual the supervisor drew to explain her world is the visual the picker will use every day. The modeling session didn’t just uncover what the interactions should be, it produced the UI too.

Which techniques did you say?

Understanding how things work, and building systems that fit into that, is something we've been doing for a long time, so it's no surprise there are plenty of tools to do it effectively. I think the following four are an exceptionally good fit for designing AI agent interactions, especially when using MCP and MCP Apps.

EventStorming

This collaborative modeling technique helps you understand system behavior. It maps business processes as a storyline of domain events, triggered by commands from actors. This is exactly how an AI-driven UI interaction flows.

EventStorming timeline mapped to MCP interaction flow

With EventStorming, the initial focus is only on events: facts that have happened. That is no accident. When you think primarily about commands and actions, it is easy to fall into the trap of modeling how things work today. Commands model intention, and things may still turn out differently than intended. The big benefit of starting with events is that they force you to focus on desired outcomes, not on how you get there. That gives you the freedom to find better ways of doing things, such as a simpler command that triggers the same event, still arriving at the same outcome. Both EventStorming and Event Modeling also include sketching UI interactions directly on the wall: wireframes, screens, visual cues that show what the user sees at each step. For MCP Apps, those sketches naturally become the design for custom visual components.

Domain Storytelling

Better suited to an interview style, Domain Storytelling captures how actors work together to achieve goals through concrete work stories. These stories help you understand how the domain actually works. Again, this is a very natural fit for mapping the intent flows of AI agent and MCP interactions, told in the language of the domain. When you use this technique, you draw an easy-to-understand visual story of what the person you're talking to is explaining. The key here is the quick feedback and validation: if you misunderstood something, they will instantly point out the mistake in your drawing, or flag edge cases not yet covered.

Domain Storytelling

Event Modeling

Very similar to EventStorming, Event Modeling also models the intent (commands), the facts (events) and the views (read models). This feels like an almost perfect conceptual match for MCP tool calls, conversation history and context windows. Event Modeling organizes parts of the flow as vertical slices, each slice a complete flow from intent to state change to updated view. The key is that each slice is an isolated piece of behavior that can be implemented as one unit of work. This makes estimation and planning a breeze. Because each slice is isolated, it can also be implemented in parallel with other slices, without waiting for the rest of the system to be done. Productivity-wise, this is rather interesting now that we can put many AI agents to work in parallel, each building a piece of a system. Slices also map nicely to how MCP Apps work, each slice essentially one tool interaction: what the user asks for, what happens, and what they see as a result.

Event Modeling
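A single slice can be sketched in a few lines: intent in, fact recorded, view updated. The names below are borrowed from the warehouse example; the handler and projection are illustrative assumptions:

```python
from dataclasses import dataclass

# One vertical slice: command (intent) -> event (fact) -> read model (view).

@dataclass(frozen=True)
class ReportObstruction:      # command: the picker's intent
    aisle: int

@dataclass(frozen=True)
class ObstructionReported:    # event: the fact that happened
    aisle: int

def handle(cmd: ReportObstruction) -> ObstructionReported:
    """Command handler: decides whether the intent becomes a fact."""
    return ObstructionReported(aisle=cmd.aisle)

def blocked_aisles(events) -> set:
    """Read model: the view the MCP App renders (red X's on the floor plan)."""
    return {e.aisle for e in events if isinstance(e, ObstructionReported)}

events = [handle(ReportObstruction(aisle=6))]
view = blocked_aisles(events)
print(view)
```

Because the slice touches nothing outside itself, one agent (or one developer) can build and test it end to end without waiting on the rest of the system.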

Example Mapping

The more concrete you can make it, the easier it is to spot edge cases or incorrect assumptions. Example Mapping is an excellent tool to quickly explore the rules around a specific command or intent: given this situation, when this happens, then this should be the outcome. Perfect to capture exactly what an MCP tool should do and how it should respond in different scenarios.

Example Mapping

Nowadays, you can take the raw output of an Example Mapping session, give it to your coding assistant and turn it into a test suite. And with good behavioral tests and the right architecture guidelines, your coding agent will breeze through the implementation too.


What I find most interesting is that all of these techniques are not just similar to how MCP tools work; conceptually, they map almost one-to-one.

The real superpower

All of these are not just modeling tools, they are collaborative discovery tools. They work so well at uncovering complex behavior because they put experts and developers in the same room, working side by side without intermediaries. This direct contact creates shared understanding before a single line of code is written, and it does so far more quickly and more deeply than written specs ever could. The result: high focus during implementation, with very few questions or points of confusion. And when the inevitable unforeseen edge case emerges, the personal connections developers have built with the experts mean they know exactly who to call.

Contrast that with the traditional approach: BA → Specs → PO → Story → Tasks → Dev/AI → Code, where every handoff is a translation loss: the telephone-game effect in action. By the time the requirements reach the implementor, they are abstract, ambiguous, full of assumptions and often wrong in ways nobody notices until late. The typical result is expensive rework, long feedback cycles and clarity arriving at the worst possible moment.

Translation loss through handoffs vs direct collaborative discovery

The language aspect matters even more for our interaction with AI agents, where the "UI" is largely language. If you get the intent vocabulary wrong, the whole thing feels broken to its users. This focus on precise, shared language has its roots in Domain-Driven Design and is now more important than ever. Getting the vocabulary right is no longer just a modeling concern, it's a usability concern. I guess that deserves a blog of its own.

What this all means for you

AI agents are becoming part of how we build, design and use software. Intent-based interaction through MCP fits naturally with command- and event-based design tools. Collaborative modeling techniques like EventStorming, Domain Storytelling, Event Modeling, and Example Mapping have been perfected over the past decade for exactly this kind of problem: understanding what a system should do by working directly with the people who know. If you want to build MCP Apps that actually solve the right problems, start by getting the right people in the room.

If you’d like a hand getting started, whether it’s facilitating a good modeling session or designing MCP tool interactions that speak your domain’s language, reach out. I’d love to help.
