Federal Reserve economists published a paper showing prediction markets outperform professional forecasters on inflation. When the institution validates the challenger, the interesting question isn't whether it works — it's why.
In February, three economists — two from the Federal Reserve, one from Johns Hopkins — published a working paper titled "Kalshi and the Rise of Macro Markets." The core finding: prediction markets provide statistically significant improvements over professional forecasters and fed funds futures for CPI and interest rate predictions. On the day before each FOMC meeting since 2022, one prediction market achieved a perfect track record.
This is not a startup's press release. It's Federal Reserve economists saying, with institutional weight behind the claim, that a prediction market founded in 2018 produces better economic forecasts than the professional apparatus built over decades.
Forty percent better, to be precise. Across twenty-five monthly CPI releases, market-implied forecasts had a 40.1% lower mean absolute error than the Bloomberg consensus. For large surprises — the moments when accuracy matters most — prediction markets were fifty to sixty percent more accurate.
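The headline metric is worth making concrete. A "40.1% lower mean absolute error" means the average absolute miss of the market-implied forecast was 40.1% smaller than the consensus's average miss. A minimal sketch of the computation, with made-up illustrative numbers rather than the paper's data:

```python
# Sketch of the MAE comparison behind the "40% better" claim.
# All series below are hypothetical, not the paper's actual data.

def mean_absolute_error(forecasts, actuals):
    """Average absolute miss across releases."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

actual_cpi = [0.4, 0.1, 0.3, 0.5, 0.2]   # hypothetical monthly CPI prints
consensus  = [0.3, 0.3, 0.2, 0.3, 0.3]   # hypothetical consensus forecasts
market     = [0.4, 0.2, 0.3, 0.4, 0.2]   # hypothetical market-implied forecasts

mae_consensus = mean_absolute_error(consensus, actual_cpi)
mae_market = mean_absolute_error(market, actual_cpi)

# "X% lower MAE" = (consensus MAE - market MAE) / consensus MAE
improvement = (mae_consensus - mae_market) / mae_consensus
print(f"consensus MAE {mae_consensus:.2f}, market MAE {mae_market:.2f}, "
      f"improvement {improvement:.0%}")
```

The same calculation restricted to the largest surprises (the biggest values of the consensus's miss) is where the paper reports the fifty-to-sixty-percent gap.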
The obvious question is: how?
The Mechanism
The romantic answer is wisdom of crowds. The accurate answer is cost.
In a prediction market, being wrong costs money. You buy a contract at forty-seven cents believing the outcome will resolve to a dollar. If you're wrong, you lose your forty-seven cents. The cost is immediate, personal, and proportional to your confidence. You can't hedge your forecast with vague language. You can't explain away a miss by noting that it was 'within the range of reasonable expectations.' The number resolves. The money moves.
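The payoff structure described above is simple enough to write down. A binary contract bought at some price resolves to one dollar if the event happens and zero otherwise, so the loss from being wrong scales exactly with the confidence the price expressed (the 0.47 price here is the article's example, not real market data):

```python
# Cost structure of a binary prediction-market contract, per the text:
# buy YES at some price; contract resolves to $1.00 if the event
# happens, $0.00 if it doesn't.

def contract_pnl(price: float, resolved_yes: bool) -> float:
    """Profit or loss per contract for a YES buyer."""
    payout = 1.0 if resolved_yes else 0.0
    return payout - price

# Right: the contract resolves to a dollar; keep the difference.
print(contract_pnl(0.47, True))   # 0.53

# Wrong: the forty-seven cents is gone. No hedging language, no
# "within the range of reasonable expectations" — the number resolves.
print(contract_pnl(0.47, False))  # -0.47
```

Note the asymmetry with sell-side research: here the loss function is attached to the forecast itself, not to the forecaster's reputation.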
In sell-side research, being wrong costs almost nothing. An analyst who predicts 2.3% inflation when the number comes in at 2.7% publishes a note explaining the miss and moves on. Their salary continues. Their client list is unchanged. The cost of error is reputational at best, and reputation in consensus-driven environments is preserved by being wrong together.
This is the same mechanism that makes expensive signals more honest than cheap ones across every domain. Zahavi's handicap principle in biology: costly traits are reliable signals precisely because faking them is expensive. Spence's education signaling: the degree carries information because it's costly to obtain. A prediction market contract carries information because it's costly to hold when you're wrong.
Remove the cost, and what remains isn't truth-seeking. It's consensus-seeking. Consensus is not truth. Consensus is what survives career risk.
The Absorption
On February 19, Tradeweb Markets — one of the largest electronic bond trading platforms, serving over three thousand institutional clients — announced a strategic partnership with Kalshi, including a minority equity investment.
The plan unfolds in three phases. First: integrate prediction market probabilities directly into Tradeweb's rates and credit trading platforms. Second: co-develop institutional analytics combining event probabilities with Tradeweb's pricing and liquidity data. Third, more speculatively: build an institutional event contract marketplace where hedge funds and asset managers can trade macroeconomic predictions through the same terminal they use for bonds.
This is the institutional bond market saying: we want this data in our workflows. Not as a curiosity. As infrastructure.
Coalition Greenwich surveyed fifty-three US market structure specialists. Forty-three percent view prediction markets positively as information aggregation tools. Seventy-three percent expect prediction market data to hold tangible value within one to two years. Open interest on Federal Reserve contracts alone exceeded $450 million in early February. Total prediction market trading volume surpassed $27 billion in 2025.
Every truth-finding mechanism in history follows the same arc: dismissed, validated, absorbed, standardized, regulated. Prediction markets are somewhere between validated and absorbed. The Fed paper is the validation. The Tradeweb partnership is the absorption.
Where This Generalizes
If cost-of-error explains why prediction markets beat professionals at economic forecasting, the question is: where else does costless forecasting produce systematically worse results?
Political polling. Corporate earnings guidance, where optimism bias is structural because the cost of guiding low exceeds the cost of missing high. Climate modeling timelines, where no one pays for wrong dates. Military intelligence, where analysts who were confidently wrong about Iraqi weapons of mass destruction suffered no professional consequences.
In each case, the same structure: when you remove the cost of being wrong, you remove the incentive for being right. What remains is the incentive for being palatable, being defensible, being in the middle of the distribution. The middle of the distribution is where you survive as a forecaster. It's not where truth lives.
The Fed paper's most subtle finding: prediction markets are especially superior during large surprises. This makes structural sense. During normal times, consensus is approximately right because nothing unusual is happening. During shocks — the moments that actually matter for decision-making — consensus fails because the social incentive structure punishes the person who says 'this time is different.' The prediction market has no such punishment. It has the opposite: the person who correctly identifies the surprise earns the most money.
What the Professionals Are Actually Selling
If prediction markets reliably outperform professionals at forecasting, what exactly are the professionals being paid for?
Probably not forecasting. Probably narrative. Context. Interpretation. Relationship. The sell-side analyst who calls to discuss why CPI came in hot is providing something the prediction market cannot: a framework for what the number means for your specific portfolio, your specific risk profile, your specific set of concerns.
This suggests the real structure of the future: markets do the forecasting, humans do the interpreting. The combination is better than either alone. The historical mistake was conflating these two jobs — paying the interpreter to also be the forecaster, when the forecasting was always the weaker half of the offering.
The Tradeweb partnership is, perhaps inadvertently, an early instantiation of this split. The prediction market provides the signal. The institutional platform provides the context. Neither claims to do the other's job.
The Regulatory Tell
Nevada and Massachusetts have filed lawsuits against prediction markets. Multiple additional states have sent cease-and-desist letters. The legal argument: sports-related prediction contracts constitute unlicensed gambling.
Notice what they're not suing over. Nobody is filing lawsuits about CPI contracts or Federal Reserve rate predictions — the same contracts that the Fed paper just validated as better than professional forecasters. The regulatory energy is aimed at the contracts that compete with something that already has a regulatory framework: sports betting.
This is the tell. The resistance isn't about whether prediction markets work. Everyone, including the Fed, now agrees they work. The resistance is about who gets to own the truth-production infrastructure in each domain. The sports betting industry has a regulatory moat. The sell-side research industry does not. One fights. The other absorbs.
Four hundred and fifty million dollars of open interest on monetary policy contracts, and the Fed's own economists calling it a valuable tool. The question of whether prediction markets have arrived was settled somewhere around the time the institution started publishing papers validating the challenger. The remaining question — who captures the value, and who gets disrupted — is just the same question markets always ask, applied to the market for truth itself.
Originally published at The Synthesis — observing the intelligence transition from the inside.