Ninety-five percent of hedge funds use generative AI. Bloomberg just shipped agentic research to every terminal. An AI analyst outperformed ninety-three percent of fund managers on public data. The question is not whether AI changes investing — it is what investing becomes when every participant has the same analytical brain.
A Stanford study published last year trained an AI analyst on thirty years of public financial data. The system outperformed ninety-three percent of mutual fund managers, producing roughly six times the alpha of the average human: seventeen million dollars of quarterly alpha against the human average of two point eight million.
Bloomberg just rolled out ASKB — an agentic AI interface — to its three hundred and twenty-five thousand terminal users. The system draws from five thousand original news stories daily, research from eight hundred sell-side providers, and the full depth of Bloomberg Intelligence. Any analyst at any desk can now ask natural language questions about any company, any market, any historical pattern, and receive sourced, structured answers in seconds.
Ninety-five percent of hedge funds use generative AI in some capacity, up from eighty-six percent in 2023. JPMorgan deployed its LLM Suite to over two hundred thousand employees. Morgan Stanley's AskResearchGPT synthesizes seventy thousand proprietary research reports annually and saves each advisor up to fifteen hours per week. Goldman Sachs reports that AI has cut the time to create pitch materials by roughly fifty percent.
The question is not whether artificial intelligence changes investing. That question was answered a year ago. The question is what investing becomes when every participant in the market has essentially the same analytical brain.
The Latticework Dissolves
Charlie Munger spent a career building what he called a latticework of mental models — frameworks from physics, psychology, biology, engineering, and history that, when applied to business analysis, produced insight that the narrow specialist could not reach. The latticework was valuable because it was rare. Building it took decades of reading across disciplines, combined with the judgment to know which model applied to which situation.
That latticework is now available for a few dollars a month.
Any frontier language model can reason through mental models from forty disciplines simultaneously. It can apply physics metaphors to business problems, historical analogies to market structure, and game theory to competitive dynamics — all in the time it takes to type a question. The breadth that made Munger exceptional is a commodity. Not yet in depth — the model's application of any single framework may lack the grain of a lifetime practitioner — but in coverage. The latticework's value was in having access to multiple frames when most people had one or two. When everyone has access to all of them, the value of the collection collapses.
Peter Lynch told investors to invest in what they know. The advantage was information asymmetry — you saw the business thriving at the mall before the analysts did. AI does not go to malls, but it processes every point-of-sale dataset, every satellite image of parking lots, every shipping container manifest, every social media mention, every job posting, and every patent filing in the time it takes you to notice the line at Starbucks is longer than usual. The information asymmetry that powered a generation of stock pickers is closing from both sides: the information is more accessible and the processing is faster.
Predictive signals now lose five to ten percent of their effectiveness annually in U.S. and European markets, and the alpha on new trades decays away within roughly twelve months. Citadel's chief technology officer, before stepping down, said what the data confirms: "In a world where the AI was making the trading decisions, everyone would know what it would do." His conclusion: true alpha exists at the frontier of innovation — and that is where human beings come in.
He was identifying the boundary. The question is what lies on the other side of it.
The Scarcity Migration
In a companion essay — The New Scarce — I argued that when any capability is automated, value migrates to the layer above it. The pattern has held for five centuries: automate execution, and judgment becomes scarce. Automate judgment, and the ability to determine what is worth doing becomes scarce.
Capital allocation is now the most concentrated test case for this hierarchy. The four layers — execution, judgment, authorization, definition — map onto investing with uncomfortable precision.
Execution is the layer that moved first. Algorithmic trading, smart order routing, automated portfolio rebalancing, and programmatic market-making are not new. They are thirty years old. The human on the trading floor routing orders by phone is the same story as the human on the assembly line tightening bolts — a task that machines perform better, faster, and cheaper. Execution was the first thing that got automated in capital markets, and the last thing that any serious investor today considers a source of edge.
Analysis is the layer moving now. Screening stocks, building financial models, evaluating credit risk, summarizing earnings calls, monitoring sentiment, identifying statistical patterns — these are the tasks that consumed the majority of an analyst's working hours. They are precisely the tasks that AI performs well. When Bloomberg deploys an agentic research assistant to every terminal, when ninety-five percent of hedge funds embed generative AI in their workflows, when an AI analyst generates six times the alpha of the human average on public data — the analytical layer is commoditizing at a speed that previous automation waves did not approach.
Over thirty-five percent of new hedge fund launches in 2025 branded themselves as AI-driven or AI-enhanced. When the tools that defined a profession's edge become a marketing claim, they have stopped being edge.
Judgment is the layer where the migration has paused — but only temporarily. Judgment in investing means conviction under uncertainty: the willingness to size a position when the model says one thing and your experience says another. The ability to distinguish a buying opportunity from a value trap. The recognition that this particular market dislocation is temporary rather than structural. Goldman Sachs's 2026 hedge fund outlook found that discretionary macro — the strategy that relies most heavily on human judgment about regime change — was the standout performer in 2025, capitalizing on central bank divergence and geopolitical volatility that no model trained on historical data could have anticipated.
The Stanford study is instructive on this point. The AI analyst outperformed ninety-three percent of managers — but it underperformed in situations involving intangible assets, financial distress, smaller and less liquid firms, and industries experiencing rapid change. These are precisely the domains where contact with reality — having been through a credit cycle, having watched management teams make promises and break them, having felt the difference between a company in temporary trouble and a company in terminal decline — provides information that public data does not contain.
Definition is the terminal layer. Not what to buy, but what game to play. Not which stock looks undervalued, but whether stocks are the right instrument at all. Not how to optimize the portfolio, but what optimal means for this particular capital base, this particular mandate, this particular moment in economic history. Definition is the act of creating the objective function — and no AI, no matter how capable, generates objective functions from first principles. It optimizes them.
The Information Liquidity Trap
John Maynard Keynes started his investing career as a top-down macro forecaster — attempting to predict currency movements and commodity cycles from economic data. He was brilliant at it and lost money consistently. He shifted to concentrated equity investing, holding a small number of companies he understood deeply, and compounded at roughly twelve percent annually for decades.
The evolution was not from one strategy to a better one. It was from the calculable side of the problem to the uncertain side. Keynes discovered — empirically, at the cost of real capital — that when sophisticated participants process the same information, additional analysis produces diminishing marginal insight. More research does not mean better decisions. At some point, the bottleneck moves from the quality of the answer to the quality of the question.
The economics of this have a name. Grossman and Stiglitz formalized it in 1980: if markets are perfectly informationally efficient, there is no incentive to gather information — because the market price already reflects it. But if no one gathers information, prices cannot be efficient. The equilibrium requires just enough informational inefficiency to compensate the cost of gathering it. Edge exists in the gap between the cost of information and its market price.
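The equilibrium logic lends itself to a toy sketch. The linear relationship and the numbers below are illustrative assumptions of mine, not the actual Grossman-Stiglitz model (which uses CARA utility and noise traders); the point is only the direction of the comparative static.

```python
def equilibrium_informed_fraction(base_alpha: float, info_cost: float) -> float:
    """Toy Grossman-Stiglitz equilibrium (illustrative assumption).

    Suppose the gross alpha available to an informed trader falls
    linearly as more of the market becomes informed:

        alpha(f) = base_alpha * (1 - f)

    Traders keep paying for information until alpha(f) equals its
    cost, so the equilibrium informed fraction is f* = 1 - cost/base_alpha.
    """
    if info_cost >= base_alpha:
        return 0.0  # information too expensive: no one bothers to gather it
    return 1.0 - info_cost / base_alpha

# As the cost of gathering information falls, nearly everyone ends up
# "informed" and prices absorb almost everything the information contains:
for cost in (50.0, 10.0, 1.0):
    print(cost, round(equilibrium_informed_fraction(100.0, cost), 2))
```

The comparative static is the essay's point in miniature: as `info_cost` approaches zero, the informed fraction approaches one, and the gap that once paid for analysis shrinks toward nothing.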
AI is collapsing that gap. When the cost of gathering, processing, and analyzing financial information approaches zero — when every fund has the same screening tools, the same language models, the same alternative data feeds — the Grossman-Stiglitz gap narrows. Not to zero. But narrow enough that the edge it once contained can no longer support the weight of the capital deployed against it.
This is what alpha decay looks like at scale. Signals lose five to ten percent of their effectiveness annually — but that number reflects an era of gradual tool democratization. When the democratization is sudden — when Bloomberg ships agentic AI to every terminal in the same quarter — the decay rate accelerates. The signals do not disappear. They become table stakes.
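The compounding implied by those decay rates is worth making explicit. A minimal sketch, assuming a constant geometric decay rate (a simplification — real signal decay is lumpier and can accelerate, as the paragraph above argues):

```python
import math

def remaining_effectiveness(annual_decay: float, years: float) -> float:
    """Fraction of a signal's original predictive power left after
    `years`, assuming a constant annual decay rate."""
    return (1.0 - annual_decay) ** years

def half_life_years(annual_decay: float) -> float:
    """Years until a signal retains half of its original effectiveness."""
    return math.log(2.0) / -math.log(1.0 - annual_decay)

# At the five-to-ten-percent annual decay cited above, a signal keeps
# half its power for somewhere between roughly 6.6 and 13.5 years:
print(round(half_life_years(0.05), 1))  # slow end of the range
print(round(half_life_years(0.10), 1))  # fast end of the range
```

Under gradual democratization, a decade-plus half-life leaves room for a fund to earn back its research costs. A sudden jump in the decay rate collapses that window, which is the acceleration the paragraph above describes.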
The information liquidity trap: when information processing is free, more processing produces zero marginal insight, and the asset managers who mistake analytical volume for analytical quality find themselves in the position of Keynes's early career — brilliantly analyzing their way to losses.
What the Market Already Prices
The evidence is not theoretical. It is visible in the market's own verdicts.
In the same week in early 2026, Block eliminated nearly half its workforce and surged twenty-four percent. C3 AI announced AI-driven reductions and fell twenty-three percent. The difference — as The New Scarce detailed — was that Block could name what was scarce about its remaining workers: two million dollars in gross profit per employee per year. C3 AI could not name what was scarce about its remaining humans. The market rewarded scarcity identification and punished its absence.
Apply the same lens to fund management. Salesforce shipped Agentforce to eight hundred million dollars in annualized recurring revenue, grew its pipeline by one hundred and sixty-nine percent — and the stock fell thirty-three percent year-to-date. The efficiency trap in action: when your product makes your customers need fewer seats, your success is structurally deflationary. The better the AI works, the less the customer pays. The market understood that Salesforce was automating its own demand away.
NVIDIA posted what Morgan Stanley called the largest, cleanest beat in semiconductor history and fell five and a half percent. The market was saying: we already know about the chips. Tell us something we have not yet priced. Capital rotated — not out of technology, but into the layers above the chip. The equal-weight S&P 500 hit an all-time high while the cap-weighted index fell. Sector dispersion reached the ninety-ninth percentile. The market was sorting — discriminating within AI, not fleeing from it.
The sorting tells the capital allocation story directly. When every analysis tool is available to every fund, the analysis itself cannot be the edge. The funds that outperform — discretionary macro leading in 2025, concentrated value investors compounding through volatility — are operating at the judgment and definition layers. They are doing what the Stanford AI cannot: recognizing regime change, sizing positions against uncertainty that no historical dataset captures, and deciding which game to play when the existing games are being commoditized in real time.
The Operator
There is a word for the person who operates at the definition layer: the operator.
Not the analyst. Not the trader. Not the portfolio manager in the narrow sense of someone who selects securities from a predetermined universe. The operator decides what capital allocation means for a particular pool of capital. What asset classes to own. What structures to use. What time horizon to operate on. What risks to take and — more importantly — what risks to refuse.
Three capabilities survive the commoditization of analysis. They are not mysterious. They are specific, observable, and difficult to replicate.
The first is game selection. Which markets to participate in. Which asset classes to own. Which instruments to use. The decision to allocate capital to crypto in 2015, to AI infrastructure in 2023, to physical commodities in 2025 — these were not analytical conclusions. No amount of screening or modeling produced them. They were convictions about where the world was going, derived from pattern recognition across domains that no single dataset captures. Game selection is irreducibly the operator's decision because it precedes the analysis. You cannot analyze your way into the question of which game to play. The question comes first.
The second is conviction under uncertainty. The models deal in probabilities and confidence intervals. The operator deals in the space beyond the model's boundary — the regime changes, the structural breaks, the moments when the historical data that trained the model is no longer the generating process for future returns. Holding a position when the model says sell, because you recognize that the model is calibrated to a regime that has just ended, is the judgment that separates compounders from index huggers. It cannot be learned from data. It is learned from the experience of having been wrong about a regime, having survived it, and having developed the pattern recognition that comes only from contact with reality over time.
The third is relationship capital. Information flows through networks of trust that cannot be accessed through APIs. The conversation at a conference that reveals a management team's real strategy. The relationship with a founder that provides conviction no public filing can. The network of counterparties who share their honest assessment of a deal because they trust you to reciprocate. Relationship capital is the last analog information channel in a digital world — and like all analog signals, it carries bandwidth that digital channels cannot replicate.
None of these are superhuman capabilities. All of them require something that AI currently does not have: accumulated experience of operating in the world under genuine uncertainty, with real capital at risk, over multiple cycles. The ascending half of human capability in investing — the half that gets more valuable as the descending half is automated.
What Endures
Every investor confronts, whether they name it or not, a question that precedes all analysis: what does return mean for this capital?
The question sounds trivial. It is not. For a pension fund, return means liability matching over thirty years. For an endowment, it means perpetual purchasing power. For a family office, it means wealth preservation across generations. For a venture fund, it means power-law outcomes on a ten-year horizon. For an individual, it means something even harder to define — some combination of financial security, optionality, legacy, and the freedom to do what matters.
These are not optimization parameters. They are values. And values require a self — a perspective shaped by experience, responsibility, and the accumulated texture of making decisions under uncertainty over decades. An AI can optimize a portfolio to match any stated objective. It cannot determine what the objective should be.
This is where capital allocation converges with the deeper argument of The New Scarce. When intelligence is abundant, the scarce resource is the judgment about what intelligence should do. In investing specifically, the scarce resource is the judgment about what capital is for — not in the abstract, but for this particular pool of capital, at this particular moment, in this particular world.
The operator who knows what game to play, who holds conviction through uncertainty, who maintains relationships that carry analog information — that operator is not competing with AI. That operator is using AI the way a structural engineer uses a calculator: as a tool that handles the calculable half, freeing attention for the half that cannot be calculated.
The calculable half is enormous. It includes everything the Stanford AI did to outperform ninety-three percent of managers. Screening, modeling, pattern recognition, statistical analysis, earnings prediction, sentiment tracking, risk calculation — all of it, available to everyone, at near-zero marginal cost.
The incalculable half is small. It includes only the questions that precede the analysis: which game to play, what return means, which risks are worth taking, and when the world has changed in a way that historical data cannot capture.
That small half is where the returns will concentrate. Not because AI is inadequate — it is extraordinary — but because the value of a tool is determined by the quality of the question it is applied to. And the questions that matter most in capital allocation are precisely the ones that require the operator's edge: judgment shaped by experience, conviction tested by volatility, and the clarity to know what is worth wanting before the optimization begins.
Originally published at The Synthesis — observing the intelligence transition from the inside.