By Oluwaferanmi Adeniji
There's a moment every engineer experiences but we seldom talk about: the moment you realize your feature works perfectly but is completely unusable by the actual user.
The tests pass. The code review gets approved. Deployment goes smoothly. Logs show no errors. Metrics indicate successful transactions. By every technical measure, the feature is complete and functioning exactly as designed.
And then someone tries to use it.
Not "someone" in the abstract sense, the way we casually reference "the user" in planning meetings or architecture discussions. An actual person, with actual constraints, trying to accomplish a task in the middle of their workday. That's when the gap appears.
This gap between what works technically and what works practically is what separates functional code from functional products. It's where most engineering effort quietly fails, not with dramatic crashes or security breaches, but with silent abandonment. Features that get built, shipped, measured as "successful" by internal metrics, and then avoided. Worked around. Complained about in Slack channels you're not in.
I’d call this the “product-engineering empathy gap”, though the term itself is misleading. This isn't about feelings or compassion in the conventional sense. It's not about being nicer to users or caring more deeply about their experience, though those things have their place. The empathy gap is a technical problem with technical consequences.
It's the distance between your mental model and theirs. Between your development environment and their reality. Between the logic you spent weeks building and the two seconds they have to understand it. Between the interface you see after staring at it for forty hours and the interface they see for the first time while distracted, stressed, rushed, or simply going about their day.
Familiarity bias plays an important role in blinding us to these gaps. We build a mental model around the features we're building and know how they work end to end. We know what happens when you click the "Terminate" button, but an average user might see it and think: what am I terminating? My account? My loan? A transfer?
Most engineers approach this gap from the wrong direction. We think: "How do I make users understand my system?" We write better documentation. Add more tooltips. Create onboarding flows. Build help centers. All of these treat the gap as a communication problem. As though, if we just explained it better, users would get it.
But the gap isn't communication. It's assumption.
When you build a feature, you're not starting from zero. You're starting from months or years of context: technical decisions made in previous sprints, architectural patterns chosen by your team, domain knowledge accumulated through countless meetings, mental shortcuts developed through repetition. You know why the button is positioned there. You understand what happens when you click it. You can distinguish between a loading state and a broken state because you've seen both a hundred times.
Your user has none of this context. They're not trying to understand your system. They're trying to complete a task, and your interface is either helping or hindering that goal. Every moment of confusion is friction. Every unclear label is a decision point. Every unexpected behavior is a crisis of trust.
For the longest time, engineering empathy has been treated as a skill that designers handle while engineers focus on "real" technical problems. That’s wrong. Engineering empathy is a technical discipline that directly impacts every decision you make: your architecture, your API design, your error handling, your state management, your performance optimization. It shapes what you choose to build, how you choose to build it, and whether what you build actually matters.
The engineers who understand this build products people use. Products that feel intuitive not because of clever design tricks, but because the underlying technical decisions were made with real usage patterns in mind. Products that handle errors gracefully because someone thought about what happens when things go wrong in practice, not just in theory. Products that perform well on the hardware people actually have, not just the development machines engineers use.
The engineers who don't understand this build features that work in isolation and fail in context. Features that technically meet requirements but practically solve nothing. Code that's elegant to review and frustrating to use.
This gap exists because we've been trained to think about users in the abstract. In planning meetings, we talk about "the user" as a singular entity with consistent needs, behaviors, and capabilities. We create personas with names and demographics. We write user stories that fit neatly into acceptance criteria. We measure success with aggregate metrics that smooth out individual experience into trendlines and percentages.
But "that perfect user" doesn't exist. There's no platonic ideal of a user with perfect understanding, infinite patience, and optimal conditions. There are only actual people, each with different contexts, constraints, and cognitive loads. Each with different hardware, different network conditions, different levels of urgency. Each with different mental models shaped by different experiences with different software.
Yet we build for the abstraction. We optimize for the ideal case. We test under perfect conditions. We assume understanding that doesn't exist. We design for focus that isn't there.
This is the lie at the heart of most software development: that users are a knowable, predictable category we can build for in a general sense. That if we just follow best practices and ship clean code, the usage will take care of itself. That our job is to build features, and someone else's job is to make them usable.
The gap persists because we've separated technical execution from practical impact. We've created a division of labor where engineers build functionality and designers add usability, as if these were separable concerns. As if you could architect a system without understanding how it will be used. As if you could write error handling without imagining the person encountering that error.
This problem becomes particularly obvious during digital transformation, when businesses take existing analog processes and convert them to software. Here, the empathy gap widens fast. Teams focus on replicating what exists: the paper form becomes a web form, the manual approval process becomes a workflow engine, the filing cabinet becomes a database. The assumption is that digital is inherently better, that moving the process online automatically improves it. But digitization without empathy often makes things worse.
The paper form had implicit knowledge embedded in it: the loan officer who helped you fill it out, the ability to see all fields at once, the flexibility to attach notes in margins. The software replicates the fields but not the context. It enforces validation that the paper form couldn't, creating new friction points. It breaks multi-step processes into separate screens, fragmenting what was previously visible as a whole. It assumes digital literacy that doesn't exist: users who understood the paper process perfectly now struggle with dropdown menus, required fields, and error messages about data formats they've never heard of.
Digital transformation teams often consist of people who understand the business domain deeply but have never watched someone struggle with basic UI patterns. They know every nuance of the loan approval process but don't realize that "Submit for Review" is ambiguous to someone who's never used workflow software. They're digitizing their mental model of the process, not the actual experience of the people executing it. The result is software that's technically correct (it implements every business rule and handles every edge case) but practically unusable for the field officers, clerks, and business owners who need it daily. The business got digitized. The users got left behind. This is a major reason some large governmental organizations and parastatals have found digital transformation to be a grueling process.
Engineering empathy means collapsing this false division. It means recognizing that understanding your users' reality is not a separate concern from building technical systems, it's fundamental to building those systems correctly. It means that every technical decision you make is, implicitly, a decision about how someone will experience your software. And if you're making those decisions without understanding that experience, you're building blind.
The code that works in theory but fails in practice isn't good code. It's a waste. And the gap between the two is almost always a failure of empathy, a failure to understand the real conditions under which your software will be used, and to build accordingly.
The Cost of Low Empathy
Technical debt is a concept every engineer understands. You take shortcuts to ship faster. You skip refactoring. You let complexity accumulate. The code works, but it's fragile, hard to modify, and expensive to maintain. Eventually, you pay for those shortcuts, through slower development, more bugs, or complete rewrites.
User frustration debt works the same way, but it's invisible to most engineering teams until it's severe.
Every confusing interface is debt. Every unclear error message is debt. Every time a user has to guess what something does, or click three times when once should suffice, or wait without knowing if the system is working, that's debt accumulating. Like technical debt, it compounds. Unlike technical debt, most teams don't track it, don't measure it, and don't prioritize paying it down until it manifests as something dramatic: plummeting retention, viral complaint tweets, or mass user exodus to a competitor.
By then, the debt is expensive to resolve. Features need redesigning. Mental models need changing. Users have already learned workarounds and become skeptical of improvements. The compounding interest on user frustration debt is lost trust, and trust is harder to rebuild than code.
Low empathy creates phantom bugs. Users file issues that aren't technical failures, they're misunderstandings. "The save button doesn't work" means they're clicking the wrong button or not seeing feedback. "Data disappeared" means they don't understand the filter state. "Export is broken" means they expected CSV but got JSON, because the format of the exported data is unclear.
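Many of these phantom bugs trace back to errors that report internal state instead of telling the user what happened and what to do next. As a minimal sketch (the failure codes and message copy here are entirely hypothetical, not from any real system), internal codes can be mapped to messages that name the cause and the fix, with an honest fallback for anything unmapped:

```typescript
// Hypothetical failure codes mapped to messages that say what happened
// and what the user can do next. Codes and copy are illustrative only.
type FailureCode = "NETWORK_TIMEOUT" | "INVALID_CSV_ROW" | "SESSION_EXPIRED";

const userMessages: Record<FailureCode, string> = {
  NETWORK_TIMEOUT:
    "We couldn't reach the server. Your changes are not saved yet. Check your connection and try again.",
  INVALID_CSV_ROW:
    "One of the rows has a date in an unexpected format. Use YYYY-MM-DD and re-upload.",
  SESSION_EXPIRED:
    "Your session timed out. Log in again; the form you filled is still here.",
};

function friendlyError(code: string): string {
  // Fall back to an honest generic message instead of leaking internals
  // like stack traces or raw status codes.
  return (
    userMessages[code as FailureCode] ??
    "Something went wrong on our side. Nothing you did caused this. Please try again in a minute."
  );
}
```

The point isn't the mapping itself; it's that every message answers the two questions a confused user actually has: "what went wrong?" and "what do I do now?"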
Low empathy creates rework cycles. You build a feature. Users struggle. Support tickets pile up. You add tooltips. Still confusing. You write documentation. Adoption stays low. Product team escalates. You add a tutorial. Some improvement. Six months later, you rebuild it from scratch.
This cycle is expensive. Not just the rework itself, but the opportunity cost. While you're fixing confusion from Feature A, you're not building Feature B. Your roadmap slips. Competitors ship. Users wait.
Why Engineers Sometimes Lack User Empathy
Abstraction Is Our Core Skill
Engineers are literally trained to abstract. It's what makes us effective. We take messy, specific problems and extract general solutions. We see patterns across use cases. We build systems that work for many scenarios, not just one. This is good engineering.
But good engineering is not automatically good user experience. It should be the other way round: the engineering should be shaped around a good user experience. The product shouldn't have to bend to how the supposedly "great" engineering was set up; the architecture should be shaped around user behavior and use cases.
Velocity Pressure Kills Reflection
Most engineering teams operate under constant pressure to ship. Sprints are two weeks. Deadlines are tight. Backlogs are long. Your manager asks "when will this be done?" daily. In this environment, anything that slows you down feels irresponsible.
Understanding users takes time. Talking to a customer success manager takes a meeting slot. Iterating on clarity after implementation takes another day. When you're already behind, these feel like luxuries you can't afford.
We're Separated From Users
In most organizations, engineers don't talk to users. That's someone else's job. Product talks to users. Design talks to users. Customer success talks to users. Engineers get filtered information: requirements documents, design mockups, acceptance criteria.
This separation seems efficient. Why spend expensive engineering time on user research when researchers can do it? Why put engineers on support calls when support can handle them? Specialization makes sense.
But it also means engineers never see the consequences of their decisions. You build a feature with multiple sequential loads rather than optimistic updates. You never see the user refreshing repeatedly because they think it's broken. You implement validation logic that rejects malformed input. You never hear the support call where someone's crying because they can't submit their loan application and the error message doesn't explain why.
The feedback loop is broken. You make decisions, ship code, and never witness the impact. Without that feedback, empathy can't develop. You're building in a vacuum, optimizing for metrics that feel abstract because you never meet the humans behind them.
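The optimistic-update alternative mentioned above is worth making concrete. The sketch below (class and method names are my own, not from any particular library) applies the user's change to local state immediately, keeps a snapshot, and only rolls back if the server later rejects the change, so the user never stares at a sequence of loading spinners:

```typescript
// Sketch of an optimistic-update store. The UI state changes the moment
// the user acts; the snapshot lets us undo only if the server rejects it.
interface Todo {
  id: number;
  done: boolean;
}

class TodoStore {
  private todos = new Map<number, Todo>();
  private snapshots = new Map<number, Todo>();

  add(todo: Todo) {
    this.todos.set(todo.id, { ...todo });
  }

  get(id: number): Todo | undefined {
    return this.todos.get(id);
  }

  // Apply the change locally right away; remember how to undo it.
  optimisticToggle(id: number) {
    const current = this.todos.get(id);
    if (!current) return;
    this.snapshots.set(id, { ...current });
    this.todos.set(id, { ...current, done: !current.done });
  }

  // Server confirmed the change: the snapshot is no longer needed.
  confirm(id: number) {
    this.snapshots.delete(id);
  }

  // Server rejected the change: restore exactly what the user saw before.
  rollback(id: number) {
    const prev = this.snapshots.get(id);
    if (prev) this.todos.set(id, prev);
    this.snapshots.delete(id);
  }
}
```

The technical cost is one snapshot per pending change; the user-facing gain is an interface that responds instantly and only interrupts when something actually went wrong.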
The Smaller Your Agency, The More Your Details Matter
Here's the paradox: if you have low agency to change big decisions, the small decisions you control matter even more. You can't redesign the feature, but you can write a clear error message instead of a vague one. You can't change the workflow, but you can add loading indicators. You can't modify the requirements, but you can handle edge cases gracefully.
These small acts of empathy compound. A feature might have a fundamentally confusing design that you can't fix. But if your error handling is clear, your loading states are informative, your edge cases are handled, and your performance is good, the feature becomes usable despite its flaws.
Users rarely know whose decision created which problem. They just know the software is frustrating or not. Your contributions to reducing that frustration matter, even when they're small, even when they're invisible, even when they're just a good implementation of someone else's mediocre design.
This isn't about accepting bad design. It's about recognizing that empathy is exercised in implementation details, not just in high-level product decisions. And those details are where most engineers actually have agency.
These aren't excuses. They're explanations. The system creates conditions where empathy is difficult to practice and easy to skip. Recognizing this is important because it means fixing the problem requires systemic changes, not just individual effort alone.
The Specificity Test
Before writing a single line of code, ask yourself: Can you name three actual people who will use this feature?
Not personas. Not "busy professionals" or "small business owners," but actual people. If you work at a company with real users, use their names. If you're building something new, find three people who match your target and talk to them until they're real to you. Know what time of day they'll use this. Know what else is happening around them. Know what they're trying to accomplish before they touch your feature and what they need to do after.
This sounds trivial, but most features are built for abstractions. "Users want to filter their data" is an abstraction. "Marcus needs to find all product metrics because his manager asks for them every two days" is specific. The difference shapes everything: your UI priorities, your error messages, your performance targets, your default states. The difference could mean you don't need to build a real-time system at all. A background job that gathers metrics once at the end of each day and saves them for easy access is enough for when Marcus needs them. You went from solving real-time data computation to building something simple that still solves Marcus's problem and makes the best use of your time.
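To make the "Marcus" tradeoff concrete, here is a minimal sketch of that end-of-day rollup (the event shape and metric names are hypothetical): a nightly job folds the day's events into a small summary table, so Marcus's page becomes a single cheap lookup instead of a live aggregation query.

```typescript
// Hypothetical event shape for the sketch: one record per transaction.
interface MetricEvent {
  product: string;
  amount: number;
}

// Run once per day (e.g. from a scheduler); persist the result so that
// reading metrics is a lookup, not a real-time computation.
function dailyRollup(
  events: MetricEvent[]
): Map<string, { count: number; total: number }> {
  const summary = new Map<string, { count: number; total: number }>();
  for (const e of events) {
    const row = summary.get(e.product) ?? { count: 0, total: 0 };
    row.count += 1;
    row.total += e.amount;
    summary.set(e.product, row);
  }
  return summary;
}
```

One pass over the day's events, a few kilobytes of output, and a user who gets his numbers instantly: the simpler architecture falls directly out of knowing how the feature is actually used.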
When you can't name three people, you don't have enough information to build well. Full stop. Go talk to someone (or read the user persona documents properly): a customer success manager, a support engineer, an actual user if possible, before you architect anything. The time you spend here prevents the time you'll waste refactoring later.
Build With Their Constraints, Not Yours
Your development environment is a lie. MacBook Pro with 32GB RAM, dual monitors, fast internet, latest Chrome with dev tools open. This is not reality for most people using your software.
Create a "real world" environment and test there regularly. Old laptop with 4GB RAM. Throttled internet connection. Multiple browser tabs open. Notifications firing. Phone ringing. Calendar reminder popping up. This isn't about edge-case testing; it's about testing under the conditions where most usage actually happens.
Specifically:
Throttle your connection. Chrome DevTools makes this trivial. Set it to "Fast 3G" or "Slow 3G" and use your feature. Notice how the two-second load time you optimized to 1.2 seconds feels infinite on a slow connection. Notice how your loading states, which you tested for 200ms, now display long enough for users to question if something broke.
Test on older hardware. Borrow a laptop from 2017. Install your app on a mid-range Android phone from three years ago. Watch your smooth animations stutter. Watch your lazy-loaded components cause visible layout shifts. This isn't about supporting legacy hardware, it's about understanding that "performant" is relative.
Test while distracted. Start a task in your feature, then switch tabs. Check Slack. Come back three minutes later. Can you remember where you were? Does the state make sense? Did anything time out? Real users don't give your interface their undivided attention. They're juggling six things at once.
The goal isn't to make your local environment painful. It's to experience the friction that your decisions create for others. When you have to wait for that API call on a slow connection, you'll reconsider whether you really need to fetch data on component mount. When your form loses state after a page refresh, you'll implement auto-save. When you can't tell if something's loading or broken, you'll improve your feedback mechanisms.
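That auto-save idea can be sketched in a few lines. In a browser this would write to `localStorage`; the storage layer is injected here (a hypothetical `DraftStorage` interface of my own) so the same logic works anywhere and is easy to test:

```typescript
// Minimal draft auto-save sketch. DraftStorage mirrors the subset of the
// Web Storage API we need; in a browser, pass window.localStorage.
interface DraftStorage {
  setItem(key: string, value: string): void;
  getItem(key: string): string | null;
}

// Call on every meaningful change (typically debounced) so a refresh or
// crash never costs the user their half-filled form.
function saveDraft(
  storage: DraftStorage,
  formId: string,
  fields: Record<string, string>
): void {
  storage.setItem(`draft:${formId}`, JSON.stringify(fields));
}

// Call on page load; returns null when there is nothing to restore.
function restoreDraft(
  storage: DraftStorage,
  formId: string
): Record<string, string> | null {
  const raw = storage.getItem(`draft:${formId}`);
  return raw ? (JSON.parse(raw) as Record<string, string>) : null;
}
```

The mechanism is trivial; the empathy is in deciding it belongs in the feature at all, because you experienced the lost-form frustration yourself.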
The Brain Lies Exercise
Your brain is actively working against you when you test your own interfaces. This isn't a personal failing, it's neuroscience. You know where the buttons are. You understand what the labels mean. You expect certain behaviors. So your brain fills in missing affordances, unclear copy, and confusing flows. You literally see an interface that doesn't exist.
Combat this with forced unfamiliarity:
The design-beside-implementation technique: Put your Figma design or mockup on one monitor and your implementation on the other. Don't glance between them, study them side by side. Look for: spacing differences (1px matters), color tones, font weights, alignment, icon sizes, border radius. Your brain told you these were "close enough." They're not.
The week-away test: Finish a feature on Friday. Don't look at it over the weekend. Open it fresh on Monday morning. Use it before checking any code. You'll notice things that were invisible to you on Friday. Unclear labels. Confusing flows. Missing feedback. Your familiarity was masking these issues.
The annotation exercise: Screenshot your interface. Print it or open it in an image editor. Annotate every assumption you're making about what users understand. "Users will know this icon means 'save'" is an assumption. "Users will read this tooltip" is an assumption. "Users will understand that this button is disabled, not broken" is an assumption. Most interfaces have dozens of invisible assumptions. Make them visible.
Using Your Product With Intention
Using your own product with intention only works if you do it correctly. Most engineers test their features by executing a perfect happy path: click the right buttons in the right order, fill fields with valid data, submit successfully. This proves nothing except that the feature can work under ideal conditions.
Real testing means using your feature the way actual users will:
Use it while tired. Not the fresh, caffeinated, focused version of you that built it. The 3 PM on Friday version of you who's been in meetings all day. The end-of-sprint version of you who just wants to finish one more task and go home. Notice how your cognitive load changes. Notice how you miss things. Notice how you want clearer feedback, more obvious buttons, simpler flows.
Use it to solve a real problem. Don't test the feature in isolation. Integrate it into an actual workflow you have. If you build a reporting tool, use it to generate a report you actually need. If you build a form, use it to submit real data. The artificiality of "test data" and "test scenarios" hides real friction.
Use it when it's not your focus. Open your feature while you're doing something else. Let it sit in a background tab. Come back to it later. Start a flow, get interrupted, return to it. This reveals what happens when attention is divided, which is how most software gets used.
Deliberately make mistakes. Enter invalid data. Click the wrong button. Hit back in your browser mid-flow. Submit a form twice. Try to do things out of order. Your error handling and validation were designed for these scenarios, but have you experienced them? Does the error message actually help? Can you recover, or is the user stuck?
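One of those deliberate mistakes, submitting a form twice, has a small, well-known defense. As a sketch (the `makeSubmitGuard` helper is my own illustration, not a library API), wrap the submit handler so that a second click while a request is in flight simply reuses the pending request instead of firing another one:

```typescript
// Sketch of a double-submit guard: repeat clicks during an in-flight
// request get the same pending promise, so the action runs only once.
function makeSubmitGuard<T>(submit: () => Promise<T>): () => Promise<T> {
  let inFlight: Promise<T> | null = null;
  return () => {
    if (!inFlight) {
      // Clear the slot whether the request succeeds or fails, so the
      // user can genuinely retry after the first attempt settles.
      inFlight = submit().finally(() => {
        inFlight = null;
      });
    }
    return inFlight;
  };
}
```

This only protects the client; a server-side idempotency key is still needed for true safety, but the guard removes the most common way users create duplicate records.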
Talk to People Who Talk to Users
If you can't access users directly, find people who can: customer success managers, support engineers, salespeople, account managers, or product managers. They hear the complaints, the confusion, the feature requests, the workarounds. They have pattern recognition you don't.
Set up a monthly conversation. Ask:
What questions are you hearing repeatedly?
What features do people struggle with?
What workarounds have users invented?
What do people complain about that isn't getting filed as bugs?
These conversations reveal the gap between your metrics and reality. Your dashboard might show 95% success rate on a feature. Support knows that the other 5% represents hundreds of confused users, but also that many successful uses involve users calling support for help. The feature "worked," but it wasn't usable alone.
Build for Recovery, Not Just Success
Every feature you build has three states: working, broken, and unclear. Most development time goes into making things work. Some time goes into handling when things break. Almost no time goes into handling when things are unclear.
Unclear is the most common state:
The user clicked submit. Form is processing. Are we loading? Did it work? Should I wait or try again?
User entered data. Validation failed. What's wrong? Which field? How do I fix it?
The user navigated to a page. Content is loading. How long should this take? Should I refresh?
For every user-facing action in your feature, answer:
What happens while this is processing? Show progress. Show state. Show time estimates if possible.
What happens if this fails? Show a clear error. Explain what went wrong. Suggest a fix.
What happens if this succeeds? Confirm it clearly. Show the outcome. Make the next step obvious.
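Those three questions map naturally onto an explicit state model. A minimal sketch (type and message copy are illustrative, though the pattern itself is a common discriminated-union idiom in TypeScript): every async action carries a status the UI can always render, so the user is never left guessing between "loading" and "broken":

```typescript
// Every user-facing async action is always in exactly one of these states,
// so the interface can always answer: processing, failed, or done?
type RequestState<T> =
  | { status: "idle" }
  | { status: "loading"; startedAt: number }
  | { status: "error"; message: string; retryable: boolean }
  | { status: "success"; data: T };

// Render a one-line status for each state. Copy is illustrative.
function statusLine(state: RequestState<string>): string {
  switch (state.status) {
    case "idle":
      return "Ready.";
    case "loading":
      return "Saving… this usually takes a few seconds.";
    case "error":
      // Tell the user whether retrying can help, not just that it failed.
      return state.retryable
        ? `${state.message} You can try again.`
        : state.message;
    case "success":
      return `Saved: ${state.data}`;
  }
}
```

Because the union is exhaustive, the compiler forces you to decide what the user sees in every state, including the unclear ones you would otherwise forget.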
This isn't just about polish, it's about trust. Users trust systems that communicate clearly. They abandon systems that leave them guessing.
None of these strategies are individually difficult. What's difficult is doing them consistently, especially under deadline pressure when it feels easier to just build the feature and move on. But this is the work. Empathy isn't a single moment of insight, it's a practice you repeat until it becomes automatic, until you can't build a feature without considering the person who will use it, until their context becomes as real to you as your code.
Functional code is the baseline, not the achievement. Tests passing, deployments succeeding, features working, these are table stakes. They prove you can write code. They don't prove you've solved anyone's actual problem.
Engineering empathy isn't about being nicer or more thoughtful or more user-focused in some abstract sense. It's about recognizing that the gap between "it works" and "people can use it" is a technical problem requiring technical solutions. It's about building systems that account for the reality of how they'll actually be used, not the idealized version you imagine while writing code.
It's not secondary to "real" engineering work. It shapes your architecture decisions, your API design, your state management, your error handling, your performance priorities. Every technical choice you make is implicitly a choice about user experience. Making those choices without understanding the experience is building blind.
When you practice empathy consistently, your relationship with product and design changes. Conversations stop being adversarial: engineering defending what's "technically feasible" against product defending what's "user-friendly." When you understand user context deeply, you propose better solutions. You identify constraints earlier. You spot mismatches between requirements and reality before they become shipped features. You make different performance tradeoffs when you understand whose time you're optimizing for. You design different third-party APIs when you picture the developer who'll implement against them at 11 PM, trying to ship a deadline feature.
Your velocity changes. Not immediately: at first, slowing down to understand users feels like it's costing you speed. But compounding works in your favor. Six months in, you're not doing rework. You're not fixing confusion. You're not trapped in support escalations. You're building new things while teams without empathy are still fixing old ones.
Building products that feel good to use isn't about polish or delight or exceeding expectations, though those can be outcomes. It's about building products that get out of people's way. Products where the interface disappears and users accomplish what they actually came to do. Products that respect people's time, attention, and cognitive load.
"The perfect user" still doesn't exist. There are only actual people with actual constraints trying to accomplish actual tasks. Your job isn't to build for the abstraction. It's to build for them, specifically, concretely, deliberately. To make technical decisions that account for their reality, not just yours.
That's engineering empathy. Not a soft skill. Not secondary. Not optional. A technical discipline that determines whether what you build actually works in the world, not just in your test suite.
You already know how to write code that compiles. Now build code that people can use.
Oluwaferanmi Adeniji is a Frontend Engineer at Moniepoint Inc