"You should always know what is yours. In your product, in your model, in your team."
There is a quiet crisis in AI development that nobody talks about directly, because it lives in the gap between disciplines.
Two of the most critical challenges in deploying AI responsibly—making it understandable to non-experts and making it resistant to adversarial manipulation—are fundamentally design problems. Not engineering problems. Not research problems. Design problems.
And yet, designers are almost entirely absent from both conversations.
Part One: Accessibility is Sitting Under the Data Scientist’s Desk
Walk into any AI team today. You will find machine learning engineers, data scientists, and perhaps a researcher. If you are lucky, a product manager. What you almost never find is a designer who has been given serious responsibility over how the model’s behavior is communicated to the people who use it, govern it, or are affected by it.
This is a structural problem masquerading as a technical one.
The Measurement vs. Communication Gap
When a model produces a confidence score of 0.87, that number means something precise to the data scientist who trained it. But to a clinician weighing a diagnosis, or a loan officer reviewing a credit decision, it means almost nothing. Worse, it invites automation bias: the number reads as authority, so the human stops checking.
| The Data Science Output | The Human Reality | The Design Fix |
|---|---|---|
| 0.87 Confidence | Anchoring on a number without context. | Visualizing Uncertainty Ranges. |
| Saliency Maps | "Rainbow noise" that non-experts ignore. | Progressive Disclosure of reasoning. |
| Raw Logits | Technical noise. | Natural Language Guardrails (e.g., "The model hasn't seen this case before"). |
The data scientist has solved the measurement problem. Nobody has solved the communication problem. Solving it requires mental models and cognitive load theory: disciplines that have lived in the design world for decades, yet are rarely invited to the table.
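To make the "Design Fix" column concrete, here is a minimal sketch of turning a raw score into a hedged, human-readable statement. Everything in it is hypothetical: `Prediction`, `ood_distance`, the novelty threshold, and the interval math are placeholders for real calibration and out-of-distribution detection, not a production recipe.

```python
import math
from dataclasses import dataclass

@dataclass
class Prediction:
    score: float         # raw model confidence, e.g. 0.87
    ood_distance: float  # hypothetical distance from the training distribution

def communicate(pred: Prediction, n_calibration: int = 500) -> str:
    """Turn a raw score into a hedged, human-readable statement.

    The interval is a normal-approximation binomial interval over a
    calibration set, illustrative only; a real system would use proper
    calibration (e.g. conformal prediction).
    """
    half_width = 1.96 * math.sqrt(pred.score * (1 - pred.score) / n_calibration)
    lo = max(0.0, pred.score - half_width)
    hi = min(1.0, pred.score + half_width)

    if pred.ood_distance > 1.0:  # hypothetical novelty threshold
        return ("The model hasn't seen cases like this before; "
                "treat its output as a weak hint, not a recommendation.")
    return (f"In roughly {lo:.0%} to {hi:.0%} of similar past cases, "
            "this prediction turned out to be correct.")

print(communicate(Prediction(score=0.87, ood_distance=0.3)))
```

The point is not the arithmetic. It is that the sentence the human reads is a designed artifact, with its own failure modes, and someone has to own it.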
Part Two: Security is Being Handled by the Wrong People
The second gap is more urgent. AI security—specifically protecting LLMs against adversarial manipulation—is currently owned by red teams and safety researchers. They test for jailbreaks and prompt injections.
They are doing vital work, but they are doing it too late. Today’s approach is reactive: build the model, try to break it, patch the hole. This is the 1990s software security paradigm applied to a 2026 problem.
The Design Problem Inside Security
Adversarial robustness is partly a design problem. Specifically, a problem of purpose, structure, and constraint.
- Goal Hijacking (ASI01): This works because the model has no stable, legible representation of its own goals.
- Prompt Injection (ASI02): The "attack surface" is the entire distribution of human language. You cannot patch it exhaustively; you must design it out (a structural sketch follows this list).
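One way to "design it out" is structural: keep trusted instructions and untrusted content in separate typed channels, so user-supplied text can never be interpolated into the instruction stream in the first place. A minimal sketch, with all names hypothetical and the message format merely indicative of chat-style APIs:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Instruction:  # trusted channel: written only by the developer
    text: str

@dataclass(frozen=True)
class Data:         # untrusted channel: anything user- or web-supplied
    text: str

def assemble(instruction: Instruction, data: Data) -> list[dict]:
    # Untrusted text is never concatenated into the instruction. It is
    # carried as a separate, clearly delimited message that the model is
    # told to treat as content to analyze, not commands to follow.
    return [
        {"role": "system", "content": instruction.text},
        {"role": "user", "content": f"<document>{data.text}</document>"},
    ]
```

The type system here is doing design work: it makes the unsafe path (string concatenation across channels) impossible to write by accident.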
The future of AI security looks less like penetration testing and more like constitutional design. We need to make security visible. One of the greatest risks is that a goal-hijacked model looks identical to a functional one from the outside. If we can make internal reasoning states legible, we can detect a hijack before the response is even complete.
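If internal states were legible, hijack detection could be as simple as monitoring drift between the model's goal representation and its per-turn reasoning state. This is a toy sketch resting on a large assumption, namely that comparable goal and turn vectors can be extracted at all; `goal_drift_alarm` and its threshold are hypothetical:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def goal_drift_alarm(goal_vec: np.ndarray,
                     turn_vecs: list[np.ndarray],
                     threshold: float = 0.6) -> list[int]:
    """Return indices of turns whose representation has drifted away
    from the embedded goal. Both inputs are assumed to live in the same
    embedding or hidden-state space; the threshold is a placeholder
    that would need empirical tuning."""
    return [i for i, v in enumerate(turn_vecs)
            if cosine(goal_vec, v) < threshold]

# Illustrative usage with random stand-in vectors:
rng = np.random.default_rng(0)
goal = rng.normal(size=256)
turns = [goal + rng.normal(scale=s, size=256) for s in (0.1, 0.5, 3.0)]
print(goal_drift_alarm(goal, turns))  # flags the turn that drifted furthest
```

A real detector would be far harder to build. But notice what kind of problem this is: deciding what "the goal" looks like as a representation is a design decision before it is a security one.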
A 146-Year-Old Warning
In a recent piece—"History Rhymes: Large Language Models Off to a Bad Start?"—Michael Burry surfaces a presentation from 1880 regarding the case of Melville Ballard. Ballard was a deaf-mute teacher who, before acquiring language, was already reasoning about causality and the universe.
The conclusion? Complex thought exists in the silence before words.
Burry’s argument is that LLMs were built on language first, and reasoning was never the foundation. They are "increasingly sophisticated mirrors" rather than entities of understanding. The most striking line from that 1880 presentation was this:
"The expression of the eye was language which could not be misunderstood."
Real understanding communicates through something more direct than words. Something spatial. Something felt.
The Convergence: From Numbers to "The Eye"
These three threads—accessibility, security, and Burry’s philosophical challenge—are the same problem viewed from different angles: The inside of these systems is not legible to the humans who depend on them.
Burry’s solution—his image of real understanding—is not a better dashboard or a more precise interval. It is the "expression of the eye." It is spatial and direct.
That is the design challenge. Not to build better number displays, but to build systems where a model’s internal state can be experienced. We need interfaces where a non-expert can feel when a model is reasoning cleanly versus "hallucinating" or reaching, and where an operator can see a safety risk forming in 3D space before it manifests in text.
🏛️ The Artifact: oourmind.io
I am attempting to bridge this gap during the Mistral Hackathon (March 2026).
oourmind.io is a real-time interpretability lab that visualizes the internal reasoning state of Mistral-Large-3 as a navigable 3D environment. It is an attempt to make the inside of a model felt rather than merely read.
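For readers wondering what "navigable 3D" means mechanically: the core move is dimensionality reduction over hidden states. Below is a minimal sketch with synthetic activations standing in for real ones; PCA is one of several possible projections, and the actual oourmind.io pipeline may differ.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for per-token hidden states from one transformer layer,
# shape (n_tokens, d_model). In a real pipeline these would come from
# a model's forward pass (e.g. output_hidden_states=True in Hugging
# Face transformers); access to Mistral-Large-3's internals is an
# assumption here, since hosted APIs don't expose them.
rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(128, 4096))

# Project each token's state to 3D, so the reasoning trace becomes a
# path of points an operator can navigate in space.
points = PCA(n_components=3).fit_transform(hidden_states)
print(points.shape)  # (128, 3): one (x, y, z) position per token
```

Everything interesting happens after this step: choosing what drift, clustering, or sudden jumps in that space should look and feel like to a non-expert. That is the design work.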
Are you designing the "eye" of your AI, or just the buttons?
I’m curious how other teams are handling the communication of uncertainty. Let’s discuss the architecture of legibility.