"Would you pay $49/month for this?"
It's the most tempting question to ask. It's also the most useless.
When you ask someone directly whether they'd pay a specific price, they're not answering the question you think you're asking. They're doing social math. They're thinking about whether you want them to say yes. They're thinking about whether they want to seem supportive. They're thinking about whether $49 sounds reasonable in the abstract, divorced from the actual moment of pulling out their credit card.
The result: people say yes to prices they'd never actually pay. And people say no to prices they'd happily pay if the product showed up in front of them at the right moment with the right framing.
Stated preference is not real preference. And building your pricing on stated preference is like building a house on sand.
Why people lie about pricing (without meaning to)
They're not trying to mislead you. The human brain is just bad at predicting its own future behavior, especially when money is involved.
Hypothetical bias. When the money isn't real, the decision isn't real. Saying "I'd pay $49" in a conversation costs nothing. Actually paying $49 when the checkout page loads costs $49. These are fundamentally different decisions processed by different parts of the brain.
Social desirability. In a conversation with a founder who clearly cares about their product, saying "that's too expensive" feels rude. So people hedge. "Yeah, that seems reasonable." They're being polite, not honest.
Anchoring. The moment you name a price, you've anchored the conversation. Ask "would you pay $49?" and the responses cluster around $49. Ask "would you pay $99?" and the responses cluster around $99. You're not learning what they'd pay. You're learning that humans anchor to whatever number they hear first.
Context collapse. In a survey or interview, the buyer is evaluating the price in isolation. In real life, they're comparing it to their budget, their other subscriptions, their boss's expectations, and whatever else they're spending money on this month. The survey doesn't capture that context. So the answer doesn't reflect reality.
What works better than asking
There are three approaches that get closer to real willingness-to-pay. Each has trade-offs.
Approach 1: Watch behavior, don't ask for opinions.
The best pricing data comes from observing what people actually do, not what they say they'd do. A/B testing different price points on real traffic gives you the most accurate data.
The problem: you need significant traffic to get statistically meaningful results. And A/B testing prices directly is ethically and practically messy - customers talk to each other, and finding out someone else got a lower price destroys trust.
Best for companies with high traffic and the ability to segment cleanly.
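If you do have the traffic, the analysis itself is simple. Here's a minimal sketch of a two-proportion z-test comparing conversion at two price points, using only the standard library; the visitor and conversion counts are hypothetical, and note that raw conversion rate isn't the metric that matters - revenue per visitor is.

```python
from math import erf, sqrt

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did price B convert differently than price A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed
    return z, p_value

# Hypothetical test: 120 of 2,000 visitors bought at $49, 95 of 2,000 at $69.
z, p = conversion_z_test(120, 2000, 95, 2000)

# The higher price converts worse but can still win on revenue per visitor.
rev_a = 49 * 120 / 2000  # $2.94 per visitor at $49
rev_b = 69 * 95 / 2000   # ~$3.28 per visitor at $69
```

With these made-up numbers the conversion drop isn't even significant at the usual 0.05 threshold, yet the $69 arm earns more per visitor - which is exactly why you need volume before trusting the result.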
Approach 2: Use pricing-specific research methods.
Methods like Van Westendorp (four questions that map where your price stops feeling cheap and starts feeling expensive) and Gabor-Granger (testing willingness at specific price points) are designed specifically to get around the stated preference problem. They approach the price from multiple angles instead of asking directly.
The problem: they still rely on self-reporting. They're better than "would you pay $49?" but they still suffer from hypothetical bias. And they require a large enough sample to be statistically valid - usually 100+ responses.
Best for companies with an existing audience they can survey.
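To make the Van Westendorp idea concrete, here's a simplified sketch of how the four answers turn into an acceptable price range. The respondent data is hypothetical and far too small for a real study (you want 100+, as noted above); the crossing logic is a basic scan, not a full implementation of the method.

```python
def vw_curves(answers, prices):
    """Simplified Van Westendorp curves.
    answers: one dict per respondent with their four survey answers:
      'too_cheap'     - price below which quality feels suspect
      'cheap'         - price that feels like a bargain
      'expensive'     - price where it starts to feel expensive
      'too_expensive' - price at which they would refuse to buy
    """
    n = len(answers)
    def share(key, p, at_or_above):
        hits = [a for a in answers if (a[key] >= p if at_or_above else a[key] <= p)]
        return len(hits) / n
    return {
        p: {
            "too_cheap": share("too_cheap", p, True),        # falls as price rises
            "cheap": share("cheap", p, True),
            "expensive": share("expensive", p, False),       # rises as price rises
            "too_expensive": share("too_expensive", p, False),
        }
        for p in prices
    }

def first_crossing(curves, falling, rising):
    """First price where the rising curve moves above the falling one."""
    for p in sorted(curves):
        if curves[p][rising] > curves[p][falling]:
            return p
    return None

# Hypothetical answers from six respondents.
answers = [
    {"too_cheap": 19, "cheap": 29, "expensive": 59, "too_expensive": 89},
    {"too_cheap": 15, "cheap": 25, "expensive": 49, "too_expensive": 79},
    {"too_cheap": 25, "cheap": 39, "expensive": 69, "too_expensive": 99},
    {"too_cheap": 9,  "cheap": 19, "expensive": 39, "too_expensive": 69},
    {"too_cheap": 19, "cheap": 35, "expensive": 65, "too_expensive": 95},
    {"too_cheap": 15, "cheap": 29, "expensive": 55, "too_expensive": 85},
]
curves = vw_curves(answers, range(10, 101, 5))
low = first_crossing(curves, "too_cheap", "expensive")    # point of marginal cheapness
high = first_crossing(curves, "cheap", "too_expensive")   # point of marginal expensiveness
```

For this toy sample the acceptable range lands at roughly $40 to $70 - the output is a band, not a single number, which matters later.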
Approach 3: Simulate buyer behavior.
This is the newest approach and the one I built RightPrice around. Instead of asking people what they'd pay, you simulate a market of buyer personas and watch how they react.
The simulation generates AI buyers matched to your target audience. These agents interact with each other and with your offer in a simulated social environment. A reaction layer captures their sentiment, willingness to pay, objections, and points of excitement.
The key difference: the agents aren't answering a survey question. They're reacting to an offer in context - with other buyers around them, with competitor awareness, with skepticism built in. It's closer to observed behavior than stated preference.
The limitation: simulated buyers are not real buyers. The output is directional, not definitive. But when the alternative is asking 5 friends "does $49 sound right?" and getting 5 different answers, directional data is a significant upgrade.
Best for companies without a large enough audience to survey or A/B test.
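To illustrate the general shape of agent-based simulation (this is a toy sketch, not RightPrice's actual model): give each agent a private willingness to pay, then let peer sentiment from one round nudge fence-sitters in the next. The persona distribution and peer-influence weight below are invented for illustration.

```python
import random

def simulate_market(price, n_agents=500, rounds=3, peer_weight=0.1, seed=7):
    """Toy buyer-persona simulation. Each agent has a private willingness
    to pay (WTP); net peer sentiment from the previous round nudges every
    agent's effective WTP up or down a little."""
    rng = random.Random(seed)
    # Hypothetical persona: WTP roughly lognormal with a median near $45.
    wtp = [rng.lognormvariate(3.8, 0.5) for _ in range(n_agents)]
    sentiment = 0.0  # net peer sentiment in [-1, 1], starts neutral
    buyers = 0
    for _ in range(rounds):
        adjusted = [w * (1 + peer_weight * sentiment) for w in wtp]
        buyers = sum(1 for w in adjusted if w >= price)
        sentiment = (buyers / n_agents) * 2 - 1  # majority buying -> positive buzz
    return buyers / n_agents

rates = {p: simulate_market(p) for p in (29, 49, 79)}
```

Even a crude version like this shows the useful property: you get a full demand curve across price points in seconds, with social dynamics baked in - directional, as noted, but far richer than five friends' opinions.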
The hierarchy of pricing data
From most reliable to least reliable:
1. Real purchase behavior (what people actually paid) - most reliable, but you need volume and time
2. A/B test results (what price point converted better) - reliable, but hard to do cleanly with pricing
3. Simulated behavior (how AI buyer personas reacted) - directional, fast, no audience required
4. Structured research methods (Van Westendorp, Gabor-Granger) - better than asking directly, still self-reported
5. Direct questions ("would you pay $X?") - almost useless for pricing
Most founders operate at level 5 and wonder why their pricing feels off. Moving to level 3 or 4 is a significant improvement and can be done in an afternoon.
What to do with the data
Whatever method you use, look at three things:
The range, not the point. Willingness-to-pay is a range, not a number. You're looking for the band where most buyers are comfortable. Price in the upper third of that band - it maximizes revenue while staying within what the market accepts.
The objections. The number is less important than the reason. "Too expensive" is vague. "Too expensive compared to [competitor] which does X and Y" is actionable. The objections tell you what to fix - sometimes it's the price, sometimes it's the value perception, sometimes it's the packaging.
The segments. Different buyers have different willingness to pay. If one segment says $29 and another says $79, you don't average them to $54. You build two tiers. The data should inform your pricing structure, not just your price point.
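The band and segment readings translate directly into arithmetic. A minimal sketch, with invented willingness-to-pay samples for two hypothetical segments: trim the extremes to find the comfortable band, price in its upper third, and price each segment separately instead of averaging.

```python
def comfortable_band(wtp, trim=0.1):
    """The band where most buyers sit: drop the extreme 10% on each end."""
    s = sorted(wtp)
    lo = s[int(len(s) * trim)]
    hi = s[int(len(s) * (1 - trim)) - 1]
    return lo, hi

def price_upper_third(wtp):
    """Price in the upper third of the comfortable band."""
    lo, hi = comfortable_band(wtp)
    return lo + (hi - lo) * 2 / 3

# Hypothetical WTP samples from two segments - tier them, don't average them.
starters = [19, 24, 25, 29, 29, 31, 35, 39, 42, 49]
teams    = [59, 65, 69, 72, 79, 79, 85, 89, 95, 120]
starter_price = price_upper_third(starters)  # one tier
team_price = price_upper_third(teams)        # a second tier
```

With these made-up samples the two tiers land around $36 and $85 - a blended average near $60 would overprice one segment and leave money on the table with the other.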
Stop asking. Start observing.
The worst pricing data comes from asking people what they'd pay. The best pricing data comes from watching what they actually do - or simulating what they would do.
If you have the traffic, A/B test. If you have the audience, use Van Westendorp. If you have neither, simulate.
But whatever you do, stop asking "would you pay $49 for this?" The answer to that question has almost no relationship to what happens when someone actually sees $49 on a checkout page.
RightPrice simulates buyer behavior instead of asking for opinions. Code FIRST50 for free access at rightsuite.co.