There's a particular kind of pain that every developer who's ever built a side project or a startup product knows intimately. You spend a weekend or six months building something. You think carefully about the architecture. You make sensible technology choices. You write reasonably clean code.
Then you ship it. And nothing happens.
Not negative feedback. Not users complaining about bugs or asking for features. Just silence. A few signups who never come back. A Product Hunt post that gets seven upvotes, six of which are from your friends.
This is the most common failure mode in software, and it almost never has anything to do with code quality. It's a research problem. Specifically, it's the absence of research before a single line was written.
This article is about how to do that research practically, without a UX team, without a budget, and without it taking six weeks before you can start building.
The Assumption Problem
When developers build products without talking to users first, they're not actually building from nothing. They're building from assumptions. Usually a lot of them, stacked on top of each other.
There's an assumption that the problem exists. An assumption that the problem is painful enough to motivate someone to find a solution. An assumption that the solution you're imagining maps onto how people actually think about the problem. An assumption about what they'd pay, how they'd discover it, what they'd compare it to.
None of these assumptions are unreasonable to make. The issue is that most of them never get tested.
Here's a useful way to think about it: every assumption your product rests on is a bet. Some bets are small; they're either right or wrong, but either way the cost is low. Others are load-bearing. If they're wrong, it doesn't matter how well you've executed everything else.
User research is just the process of identifying which bets are load-bearing and testing them before you go all in.
What 'User Research' Actually Means for Developers
The term has a lot of baggage. It sounds like something that requires a researcher, a recruitment budget, a testing lab, and a six-slide deck on methodology. That version of user research exists, and it's valuable, but it's not the only version, and it's not what you need at the start.
For a developer validating an idea, user research is much simpler: it's the process of talking to people who have the problem you're solving before you build the solution.
That's it. You're trying to learn three things:
- Does the problem actually exist the way you think it does?
- Is it painful enough that people are actively trying to solve it?
- What does the current workaround look like and what does that tell you about what a solution needs to do?
None of that requires a formal study. It requires conversations.
The Five Questions Worth Asking Before You Build
Most early-stage product interviews go sideways because the developer is trying to pitch their idea rather than learn from the conversation. You end up asking leading questions, getting socially acceptable answers, and walking away feeling validated when you shouldn't be.
The goal of a pre-build research conversation is to understand the person's existing behavior, not to test whether they like your solution. Here are five questions that reliably surface useful information:
1. 'Walk me through the last time you experienced this problem.'
This forces specificity. General statements about problems are almost always unreliable: people say problems matter more than their actual behavior suggests. A specific story reveals context, frequency, and severity in ways that abstract agreement never will. You'll learn which version of the problem they actually have, which is often different from the one you assumed.
2. 'What do you do about it today?'
This is the most important question you can ask. The answer tells you two things: whether the problem is genuinely painful (people with painful problems find workarounds; they don't just tolerate them), and what the real competition is. Your competition isn't the other apps in your category. It's whatever people are doing right now. That's usually a spreadsheet, a manual process, or nothing.
3. 'What's the most frustrating part of that?'
You're looking for the moment of maximum friction. This is often not the thing you assumed it was. The part of the workflow that bothers people most is usually the right place to focus both for your initial feature set and for your positioning.
4. 'Have you ever tried to find a better solution? What happened?'
This tells you whether people have enough urgency to look for a fix. If they've never searched, the problem probably isn't as acute as it feels. If they've tried other tools and abandoned them, you've just learned exactly what not to build, and you might be talking to someone who'll genuinely engage with a better option.
5. 'If this problem disappeared tomorrow, what would change for you?'
This separates problems that are mildly annoying from problems that are actually blocking something. If the answer is essentially 'nothing much,' that's important information. If the answer involves significant time saved, revenue recovered, or stress removed, you're onto something with real stakes.
How Many Conversations Do You Actually Need?
Five. Maybe seven.
This surprises people who expect research to require statistical significance. But at the discovery stage, you're not trying to quantify anything; you're trying to identify patterns. And patterns in qualitative research become evident very quickly.
After about five conversations with people who genuinely have the problem you're exploring, you'll notice that the same things keep coming up. The same frustrations. The same workarounds. The same failure modes with existing solutions. When you stop hearing things that surprise you, you've done enough discovery.
The practical implication: you can do this in a week. Between LinkedIn, Twitter, Reddit, Slack communities, and Discord servers for your target audience, five people with the right problem are reachable in three to five days if you're direct and specific in how you ask.
The ask that actually works: 'I'm exploring a problem in [space] before building anything. I'm not trying to sell you something; I just want 20 minutes to understand how you currently deal with [specific thing]. Can we talk?'
Most people say yes to that. Especially developers, who know how much bad product gets built without this step.
What to Do With What You Learn
After your conversations, the goal is to figure out whether your original hypothesis holds up, and if so, which version of it.
The things to look for:
- Consistency of pain: Did multiple people describe the same frustration without being prompted? That's signal. One person's strong opinion is noise.
- Behavioral evidence: Are people already spending time or money trying to solve this? Behavior is more reliable than stated preference.
- Surprise: What did you learn that you didn't expect? Surprises are usually the most useful output of early research: they're the places your mental model was wrong.
- The gap between what people say and what they do: If someone says the problem is urgent but has never searched for a solution and has no workaround, the urgency might be less real than they think.
If the research consistently confirms your hypothesis, you've earned more confidence in building. If it reveals that your hypothesis was wrong in specific, identifiable ways, you can adjust before you've committed anything.
If it completely invalidates the idea, that's good news. You've just saved yourself months of building the wrong thing.
A Note on Synthetic and AI-Assisted Research
One development worth knowing about: a category of tools has emerged that lets you conduct preliminary research without recruiting participants at all. They use AI to generate synthetic user personas and run structured interview sessions against them, which can be genuinely useful for early hypothesis generation: figuring out what questions to ask before you talk to real people, or stress-testing an idea when you can't get enough conversations in your timeframe.
The honest limitation: synthetic research is most useful for exploration and direction-finding. It shouldn't replace conversations with real users who have the actual problem. But as a starting point, especially for developers building in a domain where they don't have a personal network, it's a meaningful upgrade from making assumptions alone.
Think of it as doing your reading before the interviews, not instead of them.
The Actual Cost of Skipping This
Developers are generally good at estimating engineering costs. How long will it take to build this feature? How much does this infrastructure cost? Those estimates inform decisions constantly.
But most developers don't estimate the cost of building the wrong thing. The engineering hours spent on a feature nobody uses. The months of iteration trying to find product-market fit in a market that doesn't exist. The opportunity cost of not building the thing that would have worked.
Five user interviews at 20 minutes each take less than two hours of your time. That's the investment required to test your most critical assumptions before you write code.
The return on that investment, if it saves you from building the wrong thing, is measured in months. If it tells you you're right and sharpens how you think about the problem, it's still the most productive two hours you could have spent before starting.
Build the right thing. It turns out the hard part was always figuring out what that was.