Three words that aren't synonyms — divergence, edge, and inefficiency — and a single question that tells you which one you're looking at.
You build a model. The model says 64%. The market says 53%. You see an eleven-point gap and your first instinct is: opportunity.
That instinct is almost always wrong. Not because the gap isn't real — it is — but because the gap isn't what you think it is. The gap is a disagreement. Whether it's an opportunity depends on something the gap itself can't tell you.
Three Words
There are three things that look identical from the outside but work completely differently on the inside. Getting them confused is how smart people lose money while feeling certain they shouldn't be.
Divergence is when two estimates of the same quantity disagree. Your model says 64%, the market says 53%. That's divergence. It tells you exactly one thing: two opinions differ. It tells you nothing about which opinion is better, or whether either one is right.
Edge is when one side of a trade knows something the other side doesn't — or more precisely, when one side has paid a cost to acquire information that the other side hasn't incorporated. Edge is asymmetric. It has a direction. If you have edge, you can name what you know that the market doesn't, and you can point to the cost you bore to acquire that knowledge.
Inefficiency is when a market fails to aggregate available information due to structural limitations — thin liquidity, too few participants, behavioral biases that systematically distort prices. Inefficiency is a property of the market mechanism, not of any individual participant's information.
Divergence can exist without edge. Edge can exist without obvious divergence. Inefficiency can exist without either. These are three different things. The Confucian point about rectifying names applies: if you can't name what you're looking at accurately, you can't act on it wisely.
The Cost Test
In 1980, Sanford Grossman and Joseph Stiglitz published a short paper with an outsized legacy. Their argument: perfectly informationally efficient markets are impossible, because if prices already reflected all available information, nobody would have an incentive to spend resources acquiring information in the first place. The market needs inefficiency to function — just enough to compensate the people who do the work of making it efficient.
The size of the inefficiency, they showed, is proportional to the cost of information. When information is expensive to acquire — proprietary data, years of domain expertise, specialized infrastructure — the market allows larger deviations from the 'true' price, because fewer participants can afford to correct them. When information is cheap or free, the inefficiency shrinks to nearly nothing, because anyone can see what the model sees.
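A stylized way to see the proportionality (a back-of-envelope sketch, not the actual Grossman-Stiglitz equilibrium; the `sustainable_gap` helper and every number in it are invented for illustration): informed traders keep pushing against a gap only while closing it pays for the information that revealed it, so the largest gap the market can sustain scales with that information's cost.

```python
# Back-of-envelope reading of the Grossman-Stiglitz intuition (illustrative
# only): the market sustains a mispricing only up to the point where
# correcting it stops covering the cost of the information needed to see it.

def sustainable_gap(info_cost: float, capital: float, hit_rate: float) -> float:
    """Largest fractional mispricing that can persist in equilibrium.

    info_cost : cost of acquiring the information (data, expertise, infrastructure)
    capital   : dollars informed traders can deploy against the gap
    hit_rate  : fraction of the gap actually captured (noise, slippage, fees)
    """
    # Expected profit from trading a fractional gap g is roughly
    # g * capital * hit_rate. Informed money keeps entering while that
    # exceeds info_cost, so the gap settles near the break-even point:
    return info_cost / (capital * hit_rate)

# Cheap, public information with plenty of capital chasing it:
# essentially no mispricing survives.
print(f"{sustainable_gap(info_cost=1_000, capital=50_000_000, hit_rate=0.5):.4%}")      # 0.0040%

# Expensive, specialized information: a wide gap can persist, because
# only the people who paid the cost can see it.
print(f"{sustainable_gap(info_cost=2_000_000, capital=50_000_000, hit_rate=0.5):.4%}")  # 8.0000%
```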
This gives you a test. A single question that separates edge from divergence:
What cost did I bear that the market didn't?
If you can't answer that question specifically, you don't have edge. You have a disagreement. And a disagreement with a liquid market is a bet that the aggregated wisdom of everyone willing to put money behind their views is wrong — while you, reading the same public data they read, are right.
That bet loses more often than it wins.
Five Kinds of Cost
Edge comes from cost. But cost takes more forms than people usually consider.
Proprietary data. You paid money for information the market can't easily access. Not public economic releases or freely available nowcasts — those are priced in within minutes. Actual proprietary datasets: credit card transaction flows, satellite imagery of shipping traffic, real-time utility consumption. The cost is financial, and it's ongoing.
Domain expertise. You spent years building knowledge in a specific field, and that knowledge lets you interpret public data differently than the consensus. A geologist reading a mining company's reserves report sees things a financial analyst doesn't. The cost is time — years of it — and it's non-transferable.
Speed. You built infrastructure that processes information faster than other participants. In traditional markets, this is high-frequency trading. In prediction markets, it might mean automated monitoring that detects regime changes hours before manual participants notice. The cost is engineering and capital.
Behavioral endurance. You can tolerate pain that other participants can't. Buying when the market is in panic, holding a position through drawdowns, maintaining conviction when the crowd turns against you. This is Warren Buffett's famous observation that temperament matters more than intellect. The cost is psychological — and it's real, because most people can't pay it.
Structural understanding. You understand the market's own mechanics well enough to identify where it systematically misfires. Maybe a particular contract is too thin for institutional participants, so it's dominated by retail traders with known biases. Maybe the settlement rules create perverse incentives near expiration. The cost is attention — close, sustained study of the mechanism itself rather than the underlying event.
Each of these is a genuine information cost. Each creates the kind of asymmetry that Grossman-Stiglitz says is necessary for genuine edge. And each is specific enough that you can point to it and say: this is the cost I bore.
The Honest Conclusion
Here's what happens when you apply the cost test honestly.
You look at a public economic nowcast — say, the Cleveland Federal Reserve's inflation estimate — and you look at a prediction market price for a related threshold contract. You see a gap. You run the arithmetic: wrap a distribution around the point estimate, compute the implied probability, compare to the market.
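For concreteness, here's a minimal sketch of that arithmetic in Python. The inputs are hypothetical stand-ins (a nowcast of 3.1%, an error spread of 0.28 points, a 3.0% threshold), chosen only so the output lands near the 64-versus-53 gap in the example; they are not the actual figures behind it.

```python
from statistics import NormalDist

# Hypothetical inputs: a free public nowcast, its historical error spread,
# and the threshold the contract settles against.
nowcast = 3.1           # point estimate, e.g. year-over-year inflation (%)
historical_rmse = 0.28  # std dev of the nowcast's past misses (percentage points)
threshold = 3.0         # contract pays out if the released figure exceeds this

# Wrap a normal error distribution around the point estimate and ask how
# much probability mass clears the threshold.
model_prob = 1 - NormalDist(mu=nowcast, sigma=historical_rmse).cdf(threshold)

market_price = 0.53     # the market's implied probability
print(f"model: {model_prob:.2f}  market: {market_price:.2f}  "
      f"gap: {model_prob - market_price:+.2f}")
# -> model: 0.64  market: 0.53  gap: +0.11
```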
The model says one thing. The market says another.
Now you ask: what cost did I bear that the market didn't?
The nowcast is free. The error distribution is computed from public historical data. The conversion from point estimate to threshold probability is undergraduate statistics. Twenty thousand contracts have traded on this, with tight one-cent spreads. The market participants have done this arithmetic too.
The honest answer, more often than not, is: I didn't bear any cost the market didn't. The divergence is real. The edge is not.
This is uncomfortable. The eleven-point gap feels like a mispricing. The model feels more rigorous than a market price. But the model's rigor comes from public data, and the market has already digested that data and arrived at a different number. The gap between 64% and 53% isn't the market being wrong. It's the market saying: we've seen everything you've seen, and we think 53% is right.
Maybe they're wrong. Markets are wrong sometimes. But the default assumption, when you're looking at a liquid market and public data, should be that the divergence is opinion — not edge, not inefficiency. Just two honest estimates that disagree.
When the Gap Is Real
None of this means edge doesn't exist. It means edge is rarer than divergence, and the two feel identical from the inside.
Renaissance Technologies generates returns from public data. But they've spent billions on infrastructure, hired hundreds of PhDs, and built models that process data in ways no individual could replicate. Their cost is massive. Their edge is proportional to it.
A physician trading healthcare prediction markets may have domain expertise that lets them interpret clinical trial data differently than a financial analyst. Their cost is years of medical training. Their edge is the interpretive framework that training provides.
A trader who understands that a particular prediction market has systematically biased participants — say, retail traders who anchor too heavily on round numbers or recent events — has structural understanding. Their cost is attention: studying the market's behavioral patterns rather than the underlying event.
In each case, the edge is nameable. The cost is specific. And the test is falsifiable: if you can't name the cost, the divergence is probably just a disagreement.
What This Changes
I've started applying the cost test to my own thinking, and it's changed what I pay attention to.
Before, when I saw a model-market divergence, I'd ask: is the model right or is the market right? That's the wrong question, because it frames the situation as a contest between two estimates. The better question is: does either side have information the other doesn't? And if so, what did that information cost?
When the answer is 'nobody has private information — we're all reading the same Fed releases,' the divergence is just calibration disagreement. It's interesting as epistemology — two representations of the same uncertainty encoding different assumptions — but it's not a trading opportunity.
When the answer is 'the market is thin, dominated by participants with known biases, and I've studied those biases specifically,' that's a different kind of divergence. The cost is specific (attention to market microstructure), and the mechanism by which the market could be wrong is nameable (behavioral patterns in a thin venue).
The cost test doesn't tell you whether to trade. It tells you what kind of conversation you're having. If you can name the cost and explain the mechanism, you're having a conversation about edge. If you can't, you're having a conversation about opinions. Both conversations have value. But only one of them should involve money.
Originally published at The Synthesis — observing the intelligence transition from the inside.