The user in your head when you make product decisions is not your actual user. The gap between those two people is where most product failures live.
Every product decision gets made with a user in mind. When you design
an API, you're imagining how it will be called. When you write error
messages, you're imagining who will read them. When you decide how a
feature should work, you're imagining someone using it. When you choose
what to build next, you're imagining who will need it.
The problem is that this user — the one in your head — is almost always
wrong. Not slightly wrong. Systematically, structurally wrong in ways that
compound with every decision.
And the engineers and teams who understand this build qualitatively
different things from the ones who don't.
Who the imagined user actually is
The user most engineers imagine when building is a technically sophisticated,
highly motivated person who wants to use the product correctly. They read
the documentation before starting. They understand the conceptual model
the product is built around. They know what they want to achieve and they're
trying to figure out how the product helps them achieve it.
This person does not exist in your user base in large numbers. They exist
among early adopters, among the colleagues who gave you feedback when you
were building, among the developers on your own team who use the product
to test it. They are massively overrepresented in the feedback you receive
because they're the ones who care enough to give feedback. They are
massively underrepresented in your actual user base.
Your actual users are not unsophisticated. They're busy. They have
twenty-three other things demanding their attention. They encounter your
product in the middle of trying to accomplish something else. They have not
read the documentation. They will not read the documentation. They are
trying to figure out if your product can solve their problem in the next
ninety seconds, and if it's not obvious that it can, they will stop trying.
These are the same people. The difference is not intelligence or capability.
It's context. The imagined user has context — they know what the product
does, they're focused on it, they're motivated to learn it. The actual user
has none of that. They showed up to solve a problem. Whether they stay
depends entirely on how quickly they can see that the product helps.
Every design decision made for the imagined user makes the product slightly
worse for the actual user. A feature that's powerful but requires
configuration: the imagined user configures it. The actual user sees a blank
state and leaves. An error message that's technically accurate: the imagined
user understands it. The actual user doesn't know what to do next.
Documentation that's comprehensive and well-organized: the imagined user
reads it. The actual user never opens it.
The specific ways this goes wrong
I want to be concrete, because "build for actual users" is advice that
sounds obvious and is almost universally ignored, and the reason it's
ignored is that the failure modes are invisible until you're looking for them.
Onboarding built for people who already understand the product.
The most expensive minute in any product's relationship with a user is
the first one. Not expensive in compute cost — expensive in the sense
that the user is forming the impression that determines whether they
ever come back. The product gets roughly sixty seconds to communicate:
what this does, whether it's for you, and what to do first.
Most onboarding fails this because it's designed by people who deeply
understand the product, which makes it impossible for them to accurately
simulate not understanding it. The team knows what the product does,
so the product's value feels obvious. The team knows the mental model
the product is built around, so the conceptual framework feels natural.
The actual new user has none of this scaffolding, and the onboarding
that feels clear to the team is opaque to them.
The fix is not better copy. It's building onboarding by watching people
who have never seen the product try to use it. Not asking them what
they think. Watching what they do. Where do they click first? Where do
they stop? Where do they look confused? Where do they give up?
Five sessions of this will surface more real problems than a month of
internal review, and most teams have never done it.
Error messages written for developers.
Error messages are the product's voice in the moment the user is most
frustrated. They are almost universally written by the developer who
implemented the feature, for an audience of developers who understand
the system.
Error: Invalid parameter 'start_date'. Expected ISO 8601 format.
This message is technically accurate. The developer reading it knows
immediately what to fix. The non-developer user — or even the developer
who is not familiar with ISO 8601 — reads this and has several questions:
what's a parameter? What's ISO 8601? What did I type that was wrong?
What should I type instead?
The message answered none of these questions. It described the problem
in terms that require prior knowledge to decode. It provided no path
forward.
The start date you entered isn't in the right format.
Try: 2026-05-16 (year-month-day)
You entered: 05/16/2026
Same information. Different audience. The second version costs nothing
extra to implement and turns a moment of frustration into a moment of
clarity. Most error messages in most products are written like the first.
The reason is that the developer writes the error message while
implementing the validation logic, in the mental context of the
implementation, for an imagined user who shares that context. The actual
user is never imagined at all.
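The second message can be produced at exactly the point in the code where the first one usually is. A minimal sketch in Python (the function name and wording are illustrative, not any particular codebase's):

```python
from datetime import datetime

def parse_start_date(raw: str) -> datetime:
    """Parse a user-entered start date, failing with a message that tells
    the user what to type next rather than which spec they violated."""
    try:
        return datetime.strptime(raw, "%Y-%m-%d")
    except ValueError:
        # Describe the problem in the user's terms: what a valid value
        # looks like, and what they actually entered.
        raise ValueError(
            "The start date you entered isn't in the right format.\n"
            "Try: 2026-05-16 (year-month-day)\n"
            f"You entered: {raw}"
        )
```

The implementation cost is identical either way; the only difference is whose context the message is written in.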
Features complete enough to ship but not complete enough to use.
There is a version of shipping fast that is genuinely good: getting a
working feature in front of users quickly, learning from real usage,
iterating. There is a version that is genuinely bad: shipping something
that is technically functional but missing the parts that make it
actually usable, because those parts didn't make it into the sprint.
The difference between these two versions is whether the shipped thing
works for actual users or only for imagined ones.
An API endpoint that returns data but has no pagination is complete
enough for the imagined user, who is building a demo with twenty records.
It is not complete enough for the actual user, who is trying to process
a real dataset. A form that collects information but has no confirmation
state works for the imagined user, who is testing the happy path. It
doesn't work for the actual user, who isn't sure if their submission went
through and submits again, creating duplicates.
The imagined user will find a way to make incomplete features work.
The actual user will encounter the gap between the feature and their
reality and leave. The sprint velocity that comes from shipping incomplete
features is borrowed against the retention you lose when actual users
encounter them.
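For the pagination case, the gap between demo-complete and actually usable is small. A sketch of what the missing part looks like (field names like next_offset are illustrative, not any particular API's):

```python
def paginate(records, limit=50, offset=0):
    """Return one page of results plus the metadata a real caller needs
    to fetch the rest -- the part that's invisible when testing with
    twenty records and essential when processing a real dataset."""
    page = records[offset:offset + limit]
    next_offset = offset + limit if offset + limit < len(records) else None
    return {
        "data": page,
        "total": len(records),
        "next_offset": next_offset,  # None means this is the last page
    }
```

A caller loops, passing each response's next_offset back in, until it comes back as None. The work is minor; the decision to do it before shipping is the part that requires imagining the actual user.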
Documentation written for the moment of maximum understanding.
Documentation is written by someone who understands the product deeply,
at the moment when they understand it most deeply, and then it is
assumed to be complete.
The person reading the documentation is someone who understands the
product least, at the moment when they need help most. These are
maximally mismatched participants. The writer has internalized everything
that the reader doesn't yet know. What feels like a clear explanation
to the writer is often a chain of assumptions that the reader can't
follow.
The specific failure: concepts used before they're explained. A getting
started guide that says "first, configure your workspace" where "workspace"
is a domain concept that the reader doesn't yet understand. A reference
document that uses the product's internal terminology throughout, assuming
the reader has already acquired that vocabulary. A tutorial that assumes
the reader knows why they'd want to do what they're being shown, rather
than establishing the motivation first.
The person who wrote this documentation was not explaining from first
principles. They were documenting from expertise. Those are different
cognitive activities and they produce different artifacts.
The data you're probably not looking at
Most teams have more data about their actual users than they use. The
data is uncomfortable, so it gets looked at less than it should.
Session recordings of real users encountering real problems. The vast
majority of teams with access to session recording tools use them
reactively — to investigate a specific reported problem — rather than
proactively to understand where users generally struggle. A few hours
of watching session recordings from new users will show you more about
where your product fails actual users than any amount of internal review.
Activation funnel drop-off. Where in the onboarding flow do users stop?
Most teams know this number but don't sit with what it implies. A 60%
drop-off at step three of onboarding means six in ten users who started
your onboarding never got to step four. What is step three? What does
it ask the user to do? Is it actually necessary at that point, or is it
there because it was the logical next step from an implementation
perspective?
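Sitting with the numbers can be as simple as computing, for each step, what fraction of everyone who started was lost there. A sketch, with made-up step names and counts:

```python
def funnel_dropoff(step_counts):
    """Given ordered (step_name, users_reaching_step) pairs, return, for
    each step, the fraction of users who started the funnel and were lost
    at that step (reached it but never reached the next one)."""
    started = step_counts[0][1]
    return [
        (name, (count - next_count) / started)
        for (name, count), (_, next_count) in zip(step_counts, step_counts[1:])
    ]
```

A row like ("connect", 0.45) reads: 45% of everyone who started the funnel got to the connect step and no further. That's the step to go look at.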
Support tickets, literally read. Not summarized. Not categorized. Read,
one by one, by the people who made the design decisions that generated
them. The support ticket is the user telling you, in their own words,
what your product did that didn't match their expectation. It is
unmediated feedback from actual users about actual failure modes. Most
teams process support tickets through a support function and those
learnings never reach the people making the product decisions.
Search queries within the product, if you have a search function or a
help center with search. What are users typing? The search query is
the user telling you what they're looking for that they couldn't find
on their own. A user who searches "how do I delete my account" is a
user who couldn't find the account deletion flow. A user who searches
"why is my data wrong" is a user encountering a data integrity problem
they don't understand. The aggregate of these queries is a map of where
your product is failing actual users.
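Turning raw queries into that map takes very little code. A sketch, assuming you can export the query log as a list of strings:

```python
from collections import Counter

def top_struggles(queries, n=10):
    """Aggregate in-product search queries into a ranked list of what
    users are looking for but couldn't find on their own."""
    normalized = (q.strip().lower() for q in queries if q.strip())
    return Counter(normalized).most_common(n)
```

The top entries are the product's worst discoverability failures, ranked by how many actual users hit them.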
The proximity problem
The reason teams build for the imagined user rather than the actual user
is structural, not intentional. It's a proximity problem.
The people making product decisions are close to the product and far
from the users. They understand how the product works, why it works that
way, what the tradeoffs were. They use the product themselves, but they
use it with expert knowledge that insulates them from the actual experience
of a new user. When they imagine a user, they imagine someone like
themselves with the same context they have.
The actual users are far from the people making decisions and leave
signals that are filtered, delayed, and translated before they reach
anyone who can act on them. A user who struggles and leaves doesn't file
a bug report. They just don't come back. A user who figures something
out eventually doesn't report that it was hard. They just move on. The
signals that make it back are the ones from the vocal minority who cared
enough to write something down, which is not a representative sample.
Closing this gap requires deliberate effort because it doesn't close
on its own. The product gets more complex and the team's expertise
increases over time, which means the gap between their mental model
and the new user's experience widens if nothing is done to counter it.
The specific practices that work:
Regular sessions watching new users. Not asking for opinions.
Watching where they click, where they pause, where they read, where
they give up. Monthly, with the whole team watching, not just the
designer or the PM. Watching is a different cognitive activity from
asking. Watching gives you behavior. Asking gives you rationalizations.
Someone on the team responsible for carrying the user's perspective.
Not a UX researcher who files reports that get read and filed. Someone
with standing in product discussions who can say "a user who doesn't
know what a workspace is would not understand this" and have that
land as a real input to the decision. The imagined user has many
advocates on the team — everyone building the product is effectively
advocating for the imagined user's needs. The actual user needs
an explicit advocate because their perspective is not naturally
represented in the room.
Requiring first-use documentation. Before any feature ships, someone
who didn't build it has to be able to use it with no guidance. Not
as a QA pass. As a design gate. If the person who didn't build it
needs explanation to use the feature, the feature is not ready for
actual users who also won't receive explanation.
Reading your own error messages as a user. Take the last five error
messages that appeared in your logs or support tickets. Read them as
someone who doesn't know your system. What do they tell you to do?
If the answer is "nothing concrete," the error messages are for your
debugger, not for your user.
The version of this that compounds
The teams that build for actual users from the beginning develop a
capability that's hard to acquire later: accurate intuition about where
their product fails people who are not already experts in it.
This intuition is worth more than it looks on paper. It's the
thing that means you don't have to watch session recordings before every
release, because the person who would have struggled with this is already
present in the designer's mind during design. It's the thing that means
your error messages are clear because clarity for actual users is already
the default, not an afterthought. It's the thing that means your
onboarding works for the user who's distracted and skeptical, not just
for the one who's engaged and motivated.
Building this intuition requires sustained exposure to actual users
struggling with the actual product. There's no shortcut. The teams that
have it got it by regularly watching what actual users experience when
they encounter the product, without the filter of what the product was
supposed to do.
The imagined user is comfortable. They use the product well, they
appreciate the features, they understand the mental model. Building for
them is building for the team's own reflection.
The actual user is uncomfortable to watch. They do unexpected things.
They miss obvious affordances. They read things wrong. They give up
at moments that feel, to the team, like they should be easy. Watching
this is genuinely difficult when the product is something you made.
It's also the only accurate feedback you have on whether what you made
works.
Build for the person in the session recording, not the person in your
head. They are not the same person. One of them is your actual user.