We've been building a dashboarding app at work. It looks great. The design is solid, the components are polished, and every time we show it to a stakeholder they love it.
It also takes forever to wire up.
Every new view is the same cycle: a stakeholder wants to see something, a developer builds an endpoint, another developer connects it to a component, someone tweaks the layout, and three days later the stakeholder sees the thing they asked for a week ago. Multiply that by every team, every quarter, every "can we just add one more metric" request, and you start to feel like you're running a custom dashboard factory instead of building a product.
We knew the components were good. The problem was the wiring.
The obvious AI answer is wrong
The first instinct is to let the AI query your database. Natural language to SQL, job done. Except it isn't.
You get hallucinated joins, filters that look right but produce wrong aggregates, and a model that has been handed direct access to your production data with nothing structural stopping it from being manipulated by whatever it reads along the way. And when something goes wrong — when a number is off, when a report looks plausible but isn't — you have no audit trail. Every query was generated fresh. Nothing is stable or reviewable.
The deeper problem is that this gives the AI the wrong job. The AI is good at understanding intent and composing structure. It is not a trustworthy query engine, and treating it like one trades one wiring problem for a much worse one.
The insight: the AI should touch the layout, not the data
What if the AI never saw your data at all?
Instead of asking the AI to fetch data, you ask it to decide what to show and where to show it — and then let your existing, validated endpoints do the actual fetching. The AI produces a layout manifest. The client executes it.
The flow looks like this. When a user submits a query, a vector search runs against a store of pre-registered endpoint descriptions and retrieves the semantically relevant subset. That subset, combined with a static component registry, gets injected into the AI context. The AI reasons about intent, selects components, computes any derived values like date ranges, assigns a bento-grid layout, and outputs a JSON document — the UI Manifest. No data flows through the AI at any point. It only ever sees descriptions of what endpoints exist, not what they return.
The client receives the manifest, renders the component shells, and each component independently fetches its own data from the URL the AI specified. Standard HTTP calls, standard backend validation, standard response handling. The data pipeline is completely unchanged. The AI just decided what to ask for and where to put it.
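To make the contract concrete, here is a sketch of what the manifest shape might look like in TypeScript. The field names mirror the example manifest shown later in the post; the exact schema is an assumption on my part, not a published spec.

```typescript
// Hypothetical types for the UI Manifest the AI emits.
// The AI outputs JSON matching this shape; the client parses it
// and never needs to know how the layout was decided.

interface GridSettings {
  colSpan: number; // columns occupied in the 12-column bento grid
  rowSpan: number;
}

interface ManifestBlock {
  component: string; // must name a component from the registry
  gridSettings: GridSettings;
  title: string;
  dataUrl: string; // a registered endpoint plus AI-computed parameters
}

interface UiManifest {
  dashboardTitle: string;
  blocks: ManifestBlock[];
}

// The client's first step: parse the AI's raw JSON output into the
// manifest shape before any rendering or fetching happens.
function parseManifest(json: string): UiManifest {
  return JSON.parse(json) as UiManifest;
}
```

Because the manifest is plain JSON, it can be validated, logged, and diffed like any other application data — which matters later when manifests get saved.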
The registry is the trust boundary
The component registry is a static list the development team maintains. Each entry defines a component name, the data type it accepts — one of Metric, Dataset, or List — and the minimum column span it requires in the 12-column grid. The AI can only reference components that exist in that list. It cannot invent a component. It cannot reference an endpoint that has not been registered.
{
  "component_registry": [
    { "name": "LineChart", "accepts": "Dataset", "minCol": 6 },
    { "name": "StatCard", "accepts": "Metric", "minCol": 3 },
    { "name": "DataTable", "accepts": "List", "minCol": 12 }
  ]
}

This is an intentional bottleneck. Every component in the registry has been built and tested by the team. Every endpoint has been written, validated, and registered with a parameter schema. The backend validates all parameters on every request before executing anything — the AI constructs a URL, the backend decides whether it is valid, and the data only moves if it passes that gate.
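A minimal sketch of that gate, assuming a hypothetical server-side endpoint registry with per-parameter schemas (the paths come from the example later in the post; the schema format is illustrative):

```typescript
// Sketch of backend validation: a request only executes if its path
// is registered AND every query parameter matches the declared schema.
// The schema format here is an assumption, not a real library API.

type ParamSchema = Record<string, "date" | "number" | "string">;

const endpointRegistry: Record<string, ParamSchema> = {
  "/api/sales/total-revenue": { startDate: "date", endDate: "date" },
  "/api/sales/weekly-trend": { startDate: "date", endDate: "date" },
};

function validateRequest(url: string): boolean {
  const [path, query = ""] = url.split("?");
  const schema = endpointRegistry[path];
  if (!schema) return false; // unregistered endpoint: rejected outright

  for (const pair of query ? query.split("&") : []) {
    const [key, value] = pair.split("=");
    const kind = schema[key];
    if (!kind) return false; // unknown parameter: rejected
    if (kind === "date" && !/^\d{4}-\d{2}-\d{2}$/.test(value)) return false;
    if (kind === "number" && Number.isNaN(Number(value))) return false;
  }
  return true; // only now does the data actually move
}
```

The point of the sketch is the failure mode: a URL the AI invents, or a parameter it shouldn't pass, dies here before any query runs.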
What you end up with is a system where the AI operates inside a box your team defines. Prompt injection in a user query cannot cause the AI to call an unregistered endpoint — it is not in context and would fail backend validation if constructed. The AI cannot see raw table names, schema details, or connection strings. There is nothing to exfiltrate during the generation phase because there is no data in the generation phase.
The registry is not just a config file. It is the security model.
What this actually produces
For a query like "show me sales versus targets from last week", the AI calculates the date range, selects a LineChart for the trend and a StatCard for the total, assigns the chart most of the grid width because it is the primary insight, and outputs:
{
  "dashboardTitle": "Weekly Performance Overview",
  "blocks": [
    {
      "component": "StatCard",
      "gridSettings": { "colSpan": 4, "rowSpan": 1 },
      "title": "Total Revenue",
      "dataUrl": "/api/sales/total-revenue?startDate=2026-03-30&endDate=2026-04-05"
    },
    {
      "component": "LineChart",
      "gridSettings": { "colSpan": 8, "rowSpan": 2 },
      "title": "Sales vs Target Trend",
      "dataUrl": "/api/sales/weekly-trend?startDate=2026-03-30&endDate=2026-04-05"
    }
  ]
}
The client renders two component shells. Each fetches its own data independently. The dashboard appears. The AI never saw a single revenue figure — it just decided where to put the boxes and which endpoints to point them at.
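The "each fetches its own data independently" step can be sketched in a few lines. This is a framework-neutral approximation with an injected fetcher — names and error handling are illustrative, not our actual client code:

```typescript
// Each manifest block kicks off its own HTTP call. One slow or failing
// endpoint only affects its own shell, never the rest of the dashboard.

interface Block {
  component: string;
  title: string;
  dataUrl: string;
}

type Fetcher = (url: string) => Promise<unknown>;

async function hydrateBlocks(blocks: Block[], fetchData: Fetcher) {
  return Promise.all(
    blocks.map(async (block) => {
      try {
        const payload = await fetchData(block.dataUrl);
        return { block, payload, error: null as string | null };
      } catch (e) {
        return { block, payload: null, error: String(e) };
      }
    })
  );
}
```

Injecting the fetcher also makes the point about the unchanged data pipeline literal: in production it is an ordinary HTTP call with cookies, auth headers, and backend validation, exactly as before.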
Each endpoint returns data in one of the three standardized payload types. A Metric is a scalar value with an optional delta. A Dataset is columnar data with named columns and rows. A List is a collection of typed records. Components know how to render their payload type. The manifest connects them. That standardization is also what makes the whole thing framework-agnostic — Blazor, React, Angular, and Vue all just need to implement the same three renderers and a manifest parser. The AI logic and the endpoint layer are completely shared.
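The three payload types map naturally onto a discriminated union. A sketch — only the three kinds and the Metric's optional delta come from the design above; the field names are my assumptions:

```typescript
// The three standardized payload types every endpoint must return one of.
// Components accept exactly one kind; the manifest wires them together.

type Payload =
  | { kind: "Metric"; value: number; delta?: number }
  | { kind: "Dataset"; columns: string[]; rows: (string | number)[][] }
  | { kind: "List"; items: Record<string, unknown>[] };

// A renderer dispatches on the kind; the compiler checks exhaustiveness,
// so adding a fourth payload type would fail to build until handled.
function describePayload(p: Payload): string {
  switch (p.kind) {
    case "Metric":
      return `metric ${p.value}`;
    case "Dataset":
      return `dataset with ${p.columns.length} columns`;
    case "List":
      return `list of ${p.items.length} records`;
  }
}
```

This is also where the framework-agnostic claim cashes out: the union is the entire rendering contract, so each framework only needs three renderers that consume it.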
The side effect: user-generated dashboards for free
Once you have a manifest, you have a JSON blob. And a JSON blob can be saved.
When a user likes what the AI built, they save the manifest. It becomes a named, persistent dashboard — no SQL knowledge required, no developer involved, no custom view built on request. The saved artifact is a pointer to endpoints your team already validated, with a layout attached. It is versionable, shareable, and auditable in the same way any other application data is. It is not an opaque generated query sitting in someone's personal schema.
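Because the saved artifact is just JSON, persistence is almost trivial. Sketched here with an in-memory store purely for illustration — a real app would use its normal persistence layer:

```typescript
// Saving a dashboard = persisting the manifest JSON under a name.
// No query is stored, only pointers to already-validated endpoints.

const savedDashboards = new Map<string, string>();

function saveDashboard(name: string, manifest: object): void {
  savedDashboards.set(name, JSON.stringify(manifest));
}

function loadDashboard(name: string): object | undefined {
  const json = savedDashboards.get(name);
  return json ? (JSON.parse(json) as object) : undefined;
}
```

Loading a saved dashboard skips the AI entirely: the client re-executes the stored manifest the same way it executed the freshly generated one.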
This is what changes the relationship between business users and data-heavy software. The gap has never been the data. The gap has been the translation layer between "I want to know X" and "here is X, rendered clearly." That translation has historically required a developer. This pattern removes that requirement without removing developer control over what is expressible.
Where this goes
The immediate application is dashboards, but the pattern is general. Any application with a curated component library and a set of validated endpoints can adopt it. The AI becomes a translator between user intent and application capability, bounded by whatever the development team chose to expose.
And at the broadest level, this is what AI in enterprise software should look like. Not an autonomous agent with broad data access, but an orchestration layer that makes existing, validated capabilities accessible to anyone who can ask a question. The organization controls what is inside the box. The AI makes the box useful to everyone.
We did not set out to design a pattern. We just wanted to stop spending three days on every dashboard request. But the solution we landed on turns out to generalize cleanly, and it solves a problem a lot of teams are going to run into as they try to add AI to software that handles real data.
The AI should not talk to your data. It should talk to your layout. Let your endpoints do the rest.