Fard Johnmar
50% of Your Users Don't Have Eyes

"Thirty years of change is being compressed into three years." — Satya Nadella

Adrian Levy's recent piece in UX Collective makes a striking claim: you're still designing for an architecture that no longer exists. The engineering and design map is disappearing. What's replacing it isn't a better map—it's a fundamentally different set of emerging practices.

He's describing the shift from UX to what John Maeda calls "AX" or Agentic Experience. And while that framing captures part of the situation, it does not address the harder question: What happens when your systems need to serve both humans and agents?


The Two-User Problem

For forty years, we've built software for one user type: humans. The interfaces evolved—CLI to GUI to web to mobile—but the consumer was always a person with eyes, hands, and the ability to interpret visual layouts.

That assumption no longer holds.

Agents are now a major category of application consumer. And their interface isn't visual at all—it's machine-readable. JSON schemas. Structured APIs. Discoverable capabilities.

Here's a concrete example: while writing this article, I asked my agent to look up the dev.to API documentation. This should have been an easy task, but the agent failed on its first attempt. Why? The official API documentation site is heavily JavaScript-rendered. When my agent fetched it programmatically, it got back CSS and layout markup, not the actual documentation content. To get the information it needed, the agent had to find third-party blog posts written by humans about the API.

This example reveals the problem in miniature. The documentation exists, but it's built for humans navigating a website, not for agents that need to parse capability definitions. And that gap—between human-optimized presentation and machine-readable substance—creates real issues:

  • Missed opportunities: Agents can't self-serve from your official docs
  • Accuracy drift: Third-party sources may be outdated or wrong
  • Security exposure: Content outside your control becomes a vector for misinformation—or worse

The question isn't whether to build for humans or agents. Both are now consuming your systems. The question is: how do you build for both?


Why "UI + API" Doesn't Scale

When you design primarily for humans, agents become second-class citizens:

  • Discovery is manual: Agents need documentation or hardcoded knowledge to understand capabilities
  • State is divergent: The UI might show information the API doesn't expose
  • Interaction patterns clash: Human workflows (click, wait, read, click) map poorly to agent workflows (call, parse, decide, call)

But here's the deeper, under-recognized problem: even having an API isn't enough.

I've watched agents struggle with APIs that require any procedural complexity: using GET where POST is required, failing to handle multi-step authentication flows, not responding correctly to payment-required errors. The API exists and is well documented, but the agent still fails.

The issue isn't the API itself—it's the absence of a tool layer above it. When we built GUIs for humans, we didn't just expose raw system calls. We created buttons that abstracted multi-step operations into single clicks. Agents need the same treatment: tools that bundle complexity into atomic operations they can invoke reliably.

An API says "here are all the things you can do." A tool says "here's how to accomplish this task." That distinction matters.
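To make the API-versus-tool distinction concrete, here is a minimal sketch of a tool layer. Everything in it is illustrative: `ApiClient` is a stub standing in for a real HTTP client, and the service names and workflow are assumptions, not a real API.

```python
# A sketch of a "tool layer" above a raw API. The raw API requires a
# multi-step flow (authenticate, then deploy) that agents commonly get
# wrong; the tool bundles it into one atomic operation.

class ApiClient:
    """Stand-in for a raw HTTP API with a multi-step workflow."""

    def post_auth(self, api_key):
        # Step 1: exchange an API key for a short-lived token.
        if api_key != "secret":
            raise PermissionError("invalid api key")
        return {"access_token": "tok-123"}

    def post_deploy(self, token, service, environment):
        # Step 2: deploy, which only works with a valid token.
        assert token == "tok-123", "must authenticate first"
        return {"status": "deployed", "service": service, "environment": environment}

def deploy_service(api: ApiClient, service: str, environment: str, api_key: str) -> dict:
    """The tool: one call an agent can invoke reliably.

    Internally it performs the full flow (authenticate, then deploy)
    instead of exposing each raw step to the agent.
    """
    token = api.post_auth(api_key)["access_token"]
    return api.post_deploy(token, service, environment)

result = deploy_service(ApiClient(), "billing", "staging", api_key="secret")
print(result["status"])  # deployed
```

The design choice mirrors GUI buttons: the agent never sees the token exchange, just as a human never sees the system calls behind a click.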

When you focus on agents only, humans become second-class citizens:

  • Observability is poor: JSON logs aren't dashboards
  • Intervention is clumsy: "Stop" means finding the right API call, not pressing a button
  • Trust is difficult: Humans can't verify what they can't see or don't understand

Neither approach works. We need a new design paradigm.


A Different Architecture: Dual-Native Design

What I'm calling dual-native architecture is designing systems from the ground up to serve humans and agents equally well.

Here's what that looks like practically:

1. Think About How to Serve Humans and Agents Simultaneously

Instead of "UI vs API," think of what agents and humans will need at every layer of your application:

Human Mode: Rich visual interfaces, interactive controls, contextual help
Agent Mode: Structured responses, machine-readable schemas, programmatic discovery

The key insight: defining human and agent modes isn't about adding complexity. It's a design principle that guides architectural and engineering decisions. Every command, every output, and every interaction should have data aggregation, information design, and content delivery mechanisms that share the same underlying data and logic.

2. Unified Data, Divergent Presentation

Even when systems share the same underlying data, transformation logic often leaks into the wrong layer.

You've seen this pattern: a JavaScript frontend that fetches raw data and then computes derived values client-side. Or a Python endpoint that pulls records from the database and transforms them in application code instead of letting the query do the work. The API returns one shape of data; the UI transforms it into another. An agent calling that same API gets raw data that doesn't match what humans see.

This is where drift happens. Not necessarily in the data fetching, but in the transformation layer. The UI adds computed fields. The frontend enriches context. Business logic creeps into templates. The API returns what the database gives it; the human interface shows something richer.

The dual-native approach fixes this by separating two concerns that usually get tangled together: data transformation and presentation.

First, move ALL transformation logic—computed fields, derived values, business rules—into a single shared layer. This layer produces one canonical data structure that represents the complete, enriched version of the data.

Then, and only then, hand that transformed data to a presentation layer. Ideally, the presentation layer has limited transformation features. It just formats the same data differently depending on who's asking:

Raw Data → Shared Transformation Layer → Canonical Data Model → Mode Switch → Human Render / Agent Render

The human gets a dashboard. The agent gets JSON. But both receive the same transformed data—same computed fields, same derived values, same business logic applied.

In a dual-native system, you write one handler that computes health status and returns a data structure:

{
  "status": "degraded",
  "components": [
    {"name": "database", "healthy": true, "latency_ms": 45},
    {"name": "cache", "healthy": false, "error": "connection timeout"},
    {"name": "queue", "healthy": true, "depth": 1204}
  ],
  "checked_at": "2026-03-12T14:30:00Z"
}

Then a rendering layer converts this based on who's asking:

  • Human mode: A dashboard with green/red indicators, latency graphs, and a "Cache unhealthy" alert banner
  • Agent mode: The raw JSON above, parseable and actionable

Same data. Same logic. Different presentations optimized for each consumer. When you fix a bug or add a component, both interfaces update automatically.
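The mode switch can be sketched in a few lines. This is a minimal illustration, not a real rendering layer: the canonical payload mirrors the health-check example above, and the two renderers only format it.

```python
import json

# One canonical, already-transformed payload; two presentations of it.
health = {
    "status": "degraded",
    "components": [
        {"name": "database", "healthy": True, "latency_ms": 45},
        {"name": "cache", "healthy": False, "error": "connection timeout"},
        {"name": "queue", "healthy": True, "depth": 1204},
    ],
    "checked_at": "2026-03-12T14:30:00Z",
}

def render_agent(data: dict) -> str:
    """Agent mode: the canonical data, serialized as parseable JSON."""
    return json.dumps(data)

def render_human(data: dict) -> str:
    """Human mode: the same data as a readable status summary."""
    lines = [f"System status: {data['status'].upper()}"]
    for c in data["components"]:
        icon = "OK " if c["healthy"] else "FAIL"
        detail = c.get("error") or ""
        lines.append(f"  [{icon}] {c['name']} {detail}".rstrip())
    return "\n".join(lines)

print(render_human(health))
```

Neither renderer computes anything; if a business rule changes in the transformation layer, both views pick it up automatically.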

3. Schema-Driven Capability Discovery

For agents to navigate a system natively, they need discoverable capabilities. Documentation, in the form of skill.md files, provides necessary context. Schemas they can parse aid execution.

Imagine a deployment tool that exposes its capabilities like this:

{
  "name": "deploy_service",
  "description": "Deploy a service to the specified environment",
  "parameters": {
    "type": "object",
    "properties": {
      "service": {"type": "string"},
      "environment": {"enum": ["staging", "production"]}
    },
    "required": ["service", "environment"]
  },
  "requires_approval": true,
  "risk_level": "high"
}

Notice what's included beyond standard JSON Schema: approval requirements and risk metadata. Agents can evaluate whether an operation fits their current authorization before attempting it.

This is essentially an internal MCP. Capabilities defined as data that agents can discover, validate against, and invoke programmatically.
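To show how an agent might use such a definition, here is a sketch of pre-call validation against the capability above. The checks are hand-rolled for illustration rather than delegated to a full JSON Schema validator, and `CAPABILITY` simply restates the example schema.

```python
# An agent validates a proposed call against the capability definition
# before invoking it, and can inspect authorization metadata up front.

CAPABILITY = {
    "name": "deploy_service",
    "parameters": {
        "type": "object",
        "properties": {
            "service": {"type": "string"},
            "environment": {"enum": ["staging", "production"]},
        },
        "required": ["service", "environment"],
    },
    "requires_approval": True,
    "risk_level": "high",
}

def validate_call(capability: dict, args: dict) -> list:
    """Return a list of problems; an empty list means the call is schema-valid."""
    problems = []
    params = capability["parameters"]
    for name in params["required"]:
        if name not in args:
            problems.append(f"missing required parameter: {name}")
    for name, value in args.items():
        spec = params["properties"].get(name)
        if spec is None:
            problems.append(f"unknown parameter: {name}")
        elif "enum" in spec and value not in spec["enum"]:
            problems.append(f"{name} must be one of {spec['enum']}")
    return problems

# Authorization metadata lets the agent route to a human before trying:
needs_human = CAPABILITY["requires_approval"]
print(validate_call(CAPABILITY, {"service": "billing", "environment": "qa"}))
```

Because the capability is data, the same definition can drive validation, documentation, and approval routing without drifting apart.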

4. The Human Role Shifts

There's an open question in the industry: what role will humans play when agents handle most of the cognitive labor? Some have called humans "taste masters"—determining what works based on experience and intuition. That's an accurate description, but I prefer thinking about it this way: humans are the judgment layer that agents structurally lack.

Agents are stateless, and even when given state, they struggle to reason about content and context the way humans do. An agent can execute a thousand tasks, but it can struggle to evaluate whether those tasks are worth doing (without a lot of trial and error). It is also not optimized to weigh tradeoffs that require understanding organizational politics, user sentiment, or long-term consequences that aren't in its training data or context.

Humans have cognitive superpowers: judgment, contextual reasoning, the ability to recognize when something feels wrong even before they can articulate why. Agents have different superpowers: speed, scale, tireless execution of well-defined operations.

Dual-native design gives humans tools to exercise their cognitive strengths while agents handle the execution:

  • Dashboards showing what agents are doing across the system—not to micromanage, but to maintain situational awareness
  • Health indicators that surface problems without requiring deep investigation
  • Approval gates for operations where human judgment genuinely matters
  • Audit trails that are visual, not just logged—so humans can spot patterns agents miss

Human UX becomes about guiding agents toward optimal outcomes, not moment-to-moment control. The interface should make human judgment efficient to apply, not replace it with rubber-stamping.

5. Tiered Autonomy

Some operations are safe for agents to execute autonomously. Others require human approval. Dual-native systems encode this directly:

Tier               Behavior                                     Examples
Auto-execute       No approval needed                           Read-only queries, status checks
Preview-first      Show what would happen, await confirmation   Data modifications, deployments
Always supervised  Real-time visibility, human can halt         High-risk operations, irreversible changes

Agents query these tiers programmatically. Humans see interfaces that surface higher-tier activity for review.
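A tiered dispatcher can be sketched in a few lines. The tier names, operations, and return shapes here are illustrative assumptions, not a prescribed interface.

```python
# Operations declare a tier; a dispatcher decides whether to execute,
# return a preview, or hold for human supervision.

TIERS = {
    "status_check": "auto",         # read-only: execute without approval
    "deploy_service": "preview",    # show what would happen, await confirmation
    "delete_project": "supervised", # irreversible: human can halt in real time
}

def dispatch(operation: str, execute, approved: bool = False) -> dict:
    """Run, preview, or hold an operation based on its autonomy tier."""
    tier = TIERS.get(operation, "supervised")  # unknown ops default to the safest tier
    if tier == "auto":
        return {"tier": tier, "result": execute()}
    if tier == "preview" and not approved:
        return {"tier": tier, "result": None, "awaiting": "confirmation"}
    if tier == "supervised" and not approved:
        return {"tier": tier, "result": None, "awaiting": "human supervision"}
    return {"tier": tier, "result": execute()}

print(dispatch("status_check", lambda: "ok"))
print(dispatch("deploy_service", lambda: "deployed"))
```

The same `TIERS` table that gates agent execution can feed the human dashboard, so both consumers see one source of truth for risk.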


The Historical Context

This isn't as unprecedented as it might seem. We've been here before—just with different actors.

1970s: Systems designed for operators (CLI)
1980s-90s: Systems redesigned for end users (GUI)
2000s: Systems extended for web browsers (REST APIs)
2010s: Systems transformed for mobile (responsive design + mobile APIs)

Each transition required rethinking assumptions about who the user was and what they needed. Each time, the "bolt it on later" approach failed compared to designing for the new user type from the start.

We're moving toward a new transition: systems designed or extended for AI agent consumption. And the lesson from history is clear: retrofitting isn't always an option. We need to design new types of interfaces and abstractions that serve new audiences optimally.

The difference this time: we're not replacing one user type with another. Humans don't disappear when agents arrive. Both need to be served simultaneously.


What This Means for Developers

If you're building systems that agents will interact with—and increasingly, that's every system—consider:

  1. Design for duality from day one. "Add an API later" creates second-class citizens.

  2. Make capabilities discoverable as data. skill.md files are becoming standard as the agent-level documentation layer. While these files are useful, a more powerful strategy is to combine them with direct programmatic access to system capabilities.

  3. Include authorization metadata in schemas. Let agents know what requires approval before they try.

  4. Shift human UX toward observation. Dashboards, audit trails, approval workflows. Not click-by-click control.

  5. Ensure information parity. If humans can see it, agents should be able to query it. If agents can do it, humans should be able to observe it.
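Point 5 can even be enforced as a check in CI. This is a deliberately tiny sketch with hypothetical field names; a real system would derive the two sets from its rendering layer and API schema.

```python
# A minimal information-parity check: every field the human view renders
# should be queryable by agents, and everything agents get should be
# observable by humans. Field names are illustrative.

def parity_gaps(human_view_fields: set, agent_api_fields: set) -> dict:
    """Report fields visible to one audience but not the other."""
    return {
        "hidden_from_agents": human_view_fields - agent_api_fields,
        "hidden_from_humans": agent_api_fields - human_view_fields,
    }

gaps = parity_gaps(
    {"status", "components", "checked_at", "alert_banner"},
    {"status", "components", "checked_at"},
)
print(gaps)
```

Here the check would flag `alert_banner` as something humans see that agents cannot query, which is exactly the kind of drift dual-native design tries to eliminate.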


The Emerging Pattern

Jenny Wen, who leads design for Claude at Anthropic, said it directly on Lenny's Podcast: "This design process that designers have been taught, we sort of treat it as gospel. That's basically dead."

She's right. But what replaces it isn't just "AX instead of UX." It's recognizing that our systems increasingly have two distinct users with fundamentally different needs, and building architectures that treat both as first-class citizens.

It's no longer about "humans or agents?"

It's: How do you build for both?


I've been engineering AI systems since ChatGPT launched in late 2022, creating novel architectures, optimizing memory, and maximizing multi-LLM coordination. Now I'm focused on agentic security and creating new products and services for the autonomous AI era. More at Enspektos.


What patterns have you seen for building dual-audience systems? Drop a comment—I'm particularly interested in how others are handling the observability challenge.
