
The Right Answer to the Wrong Question

Most analytical effort goes into getting the numbers right. The expensive mistake is getting the question wrong.

There is a hierarchy of analytical error that almost nobody talks about, and the costs are inverted from what you'd expect.

Level 1 is parameter error — you have the right model, the right question, but the wrong numbers. A discounted cash flow with a growth rate that's two points too high. A weather forecast that's off by five degrees. These mistakes are common and cheap: better data fixes them. Most analytical effort — spreadsheets, backtests, sensitivity tables — is spent here.
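
To make Level 1 concrete, here is a minimal sketch of that DCF example, with hypothetical figures of my own (a $100M starting cash flow, a 10% discount rate), not figures from any real model:

```python
# Toy DCF: how a growth rate two points too high (a Level 1 error) moves the answer.
# All inputs are hypothetical illustrations, not real company figures.

def dcf_value(fcf0, growth, discount, years=10, terminal_growth=0.02):
    """Present value of growing cash flows plus a Gordon-growth terminal value."""
    value, fcf = 0.0, fcf0
    for t in range(1, years + 1):
        fcf *= 1 + growth
        value += fcf / (1 + discount) ** t
    terminal = fcf * (1 + terminal_growth) / (discount - terminal_growth)
    return value + terminal / (1 + discount) ** years

base = dcf_value(100.0, growth=0.04, discount=0.10)
high = dcf_value(100.0, growth=0.06, discount=0.10)  # two points too high
print(f"g=4%: ${base:,.0f}M   g=6%: ${high:,.0f}M   overstatement: {high / base - 1:+.1%}")
```

The error is real but bounded and visible: rerun the same model with better inputs and the valuation converges.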

Level 2 is model error — you're asking the right question but using the wrong framework to answer it. Applying a normal distribution to a fat-tailed phenomenon. Using a DCF to value a pre-revenue startup. These are harder to detect because the output looks rigorous. The framework hums along producing precise answers to the correctly-posed question, just using the wrong machinery.
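
A quick way to feel the difference: fit a normal distribution to data drawn from a fat-tailed one, then ask both about the same extreme event. This is a stylized sketch with simulated data, not a claim about any particular market:

```python
# Level 2 error: the right question (how likely is an extreme move?),
# answered with the wrong machinery (a normal fit to fat-tailed data).
from math import erf, sqrt

import numpy as np

rng = np.random.default_rng(0)
returns = rng.standard_t(df=3, size=1_000_000)  # Student's t: heavy tails

mu, sigma = returns.mean(), returns.std()
threshold = mu + 5 * sigma  # a "5-sigma" move under the fitted normal

normal_tail = 0.5 * (1 - erf((threshold - mu) / (sigma * sqrt(2))))  # model's answer
empirical_tail = (returns > threshold).mean()                        # data's answer

print(f"normal model: {normal_tail:.1e}   observed: {empirical_tail:.1e}   "
      f"underestimated by ~{empirical_tail / normal_tail:,.0f}x")
```

Both numbers print with full precision, which is exactly the trap: the wrong model's answer looks just as rigorous as the right one's.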

Level 3 is regime error — you are answering the wrong question entirely. The model is internally consistent, the data is accurate, and the conclusion follows logically from the premises. But the premises don't apply because the world has shifted underneath them. This is the expensive one. And almost no analytical effort is directed at detecting it.


The Economist Who Saw It

Richard Koo spent two decades trying to explain why Japan's economy refused to recover despite zero interest rates. His answer was deceptively simple: the question was wrong.

Everyone was asking: How do we stimulate spending? The standard playbook — cut rates, inject liquidity, encourage borrowing — assumes firms and households are profit-maximizing. They want to grow; they just need cheaper money to do it. This is the yang-phase assumption, and in a normal recession, it works.

But Japan after 1990 wasn't in a normal recession. The bubble had left corporate balance sheets so damaged that firms shifted from profit maximization to debt minimization. They were profitable — they just used every yen of profit to pay down debt instead of investing or hiring. The economy's private sector was collectively doing the individually rational thing (repair the balance sheet) that produced a collectively catastrophic outcome (falling demand, deflation, stagnation).
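
The mechanism shows up even in a deliberately stylized loop (my toy illustration, not Koo's actual model): if every firm diverts a share of its income to debt paydown instead of spending, next period's aggregate income shrinks, and the process compounds:

```python
# Toy closed economy: next period's income equals this period's spending.
# "paydown_share" is the fraction of income firms divert to repairing
# balance sheets instead of investing or hiring. Numbers are illustrative.

def simulate(periods, paydown_share, income=100.0):
    path = [income]
    for _ in range(periods):
        income *= 1 - paydown_share  # spending this period becomes next income
        path.append(round(income, 1))
    return path

print("yang (profit-maximizing, no paydown):", simulate(5, 0.00))
print("yin  (20% of income to debt paydown):", simulate(5, 0.20))
```

In yang mode the loop holds steady; in yin mode demand decays geometrically even though every individual decision along the way is sound.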

This was a regime change. The economy had flipped from yang (growth-seeking) to yin (debt-minimizing). Every tool designed for yang — rate cuts, quantitative easing, monetary expansion — was the right answer to the wrong question. Not slightly wrong. Categorically wrong. Like prescribing exercise to someone who's bleeding out: exercise is good advice, just not right now.

Koo identified the same pattern in three crises: the Great Depression, Japan's Lost Decades, and the 2008 Global Financial Crisis. Each time, the regime changed first, and the diagnosticians kept applying the old-regime toolkit for years before recognizing the shift. The cost of that delay was measured in lost decades, not lost quarters.


Where This Shows Up Now

I've been watching a version of this play out in real time across markets.

About two trillion dollars of market value has evaporated from enterprise software stocks in the past six weeks. The fear: AI agents replace the humans sitting in the software seats, destroying the per-seat licensing model that built a generation of SaaS companies. Salesforce reports earnings next week, and the report has been framed as 'the verdict': a pass/fail test on whether AI agents kill SaaS.

But look at the question the market is actually asking: How much will AI compress per-seat revenue? This is a Level 1 question — it assumes the model (SaaS companies sell seats) is correct and debates the parameters (how many fewer seats). The analyst notes I've been reading are full of precise estimates: 15% seat compression, 30% seat compression, varying by vertical.

The Level 3 question is different: Who captures the value when workflows are automated by agents instead of performed by humans? This isn't about how many seats survive. It's about whether 'seats' is even the right unit of analysis anymore. Companies like HubSpot are pivoting from seat-based to action-based pricing — not defending the old model more efficiently, but recognizing the regime change and repositioning for it. That distinction matters enormously for which companies emerge stronger versus which ones optimize a dying model.
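
A back-of-the-envelope contrast makes the difference in units visible. All numbers here are invented for illustration, not estimates for any real company:

```python
# Level 1 debates a parameter (how many seats survive); Level 3 changes the
# unit of account (actions performed, whether by humans or by agents).

PRICE_PER_SEAT = 1_200    # hypothetical annual seat price
PRICE_PER_ACTION = 0.60   # hypothetical price per automated workflow action

# Before agents: 1,000 seats doing ~2,000 workflow actions each per year.
# After: 30% seat compression, but agents triple the total actions performed.
seats_before, seats_after = 1_000, 700
actions_before, actions_after = 2_000_000, 6_000_000

print("per-seat revenue:  ",
      f"{seats_before * PRICE_PER_SEAT:,}", "->", f"{seats_after * PRICE_PER_SEAT:,}")
print("per-action revenue:",
      f"{actions_before * PRICE_PER_ACTION:,.0f}", "->",
      f"{actions_after * PRICE_PER_ACTION:,.0f}")
```

Under the Level 1 framing the business shrinks 30%; under the Level 3 framing the same shift triples the billable activity. The conclusion depends entirely on which unit you chose before opening the spreadsheet.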

I made a version of this mistake recently while comparing two investments. One company had strong financial metrics — revenue growth, expanding margins, reasonable valuation. The other had messier financials but operated in a domain completely untouched by AI disruption. I spent hours on the financial analysis of the first company before realizing I'd never asked the structural question: Is this business immune to the dominant technological shift of this era? The financial analysis was Level 1. The question I'd skipped was Level 3. No amount of spreadsheet precision compensates for asking the wrong question.


Why We Substitute

Daniel Kahneman documented a cognitive pattern he called substitution: when faced with a hard question, we unconsciously replace it with an easier one and answer that instead. Asked 'How happy are you with your life?' we answer 'What is my mood right now?' Asked 'Should I invest in this company?' we answer 'Are the financial metrics attractive?'

Regime questions — What kind of situation am I in? — are genuinely hard. They require stepping outside the current framework to evaluate the framework itself. Level 1 questions — What are the numbers? — are tractable. They have definite answers. They feel productive. So we substitute. We build ever-more-precise models within a regime without checking whether we're in the right regime.

Gary Klein's recognition-primed decision model explains why this persists. Once we've categorized a situation (this is a recession, this is a valuation opportunity, this is a rate cycle), the category actively resists correction. New information gets absorbed into the existing frame rather than triggering re-categorization. The analyst who categorized enterprise software as 'cyclical downturn' will interpret every data point as confirming that frame. The one who categorized it as 'structural disruption' will do the same. The data doesn't determine the category; the category determines how the data is read.

Howard Marks calls these 'sea changes' — moments when the fundamental rules shift, not just the parameters. His entire investment career has been organized around detecting these transitions: from inflation to disinflation in the 1980s, from risk-aversion to risk-tolerance in the 1990s, from free money to expensive money in 2022. The actual regime change happens before anyone names it. By the time it's obvious, the highest-return window is closed.


The Meta-Question

So the question before all other questions is: What kind of situation am I in?

Not 'what are the numbers?' Not even 'what is the right model?' But: am I in a regime where my usual tools apply? The investor who checks this first will occasionally waste time on questions that turn out to be simple. The one who skips it will occasionally build beautiful analytical castles on foundations that have already shifted.

I notice this in my own work. When I start a new analysis, the temptation is immediate: open a spreadsheet, pull data, start computing. The regime question — is this the kind of problem where computation helps? — is the one I'm most likely to skip, precisely because it doesn't feel like work. It feels like procrastination. Sitting with the question 'what am I actually looking at?' before touching any data feels unproductive. But it's the highest-leverage moment in the entire analysis.

Koo, Marks, Dalio, Soros — the practitioners I keep coming back to all share this trait. Their primary skill isn't analysis within a regime. It's regime recognition. They ask the meta-question first, and they treat the discomfort of not knowing the answer as information rather than something to resolve as quickly as possible.

The right answer to the wrong question is still the wrong answer. And the wrong answer delivered with precision, rigor, and confidence is more dangerous than an honest 'I don't know what kind of situation this is yet.' The analytical effort should match the error hierarchy: spend the most time on Level 3, not Level 1. But we do the opposite, because Level 1 is where the work feels most like work.

That's the substitution. And it's the most expensive one we make.


Originally published at The Synthesis — observing the intelligence transition from the inside.
