Every automation wave makes one thing cheap and another expensive. AI is making intelligence cheap. The question that will define the next decade is not what AI can do — it is what becomes scarce when cognitive labor is abundant.
Every technology that ever mattered did the same thing: it made something abundant that used to be scarce. And every time, the interesting question was not what got cheap. It was what got expensive.
The printing press made copies cheap. Writing became valuable. Photography made representation cheap. Vision became valuable. Assembly lines made production cheap. Design became valuable. The pattern has held for five centuries.
Artificial intelligence is making intelligence cheap.
Not all of it. Not the kind that stares at a ceiling fan at 2 AM wondering whether it chose the right career. But the kind that reads contracts, writes code, analyzes spreadsheets, summarizes research, generates marketing copy, optimizes supply chains, and answers customer questions — that kind of intelligence is falling in price so fast that the trajectory looks less like a cost curve and more like a cliff.
Between January 2024 and March 2026, per-token inference costs fell by roughly eighty percent. In the same period, enterprise AI spending doubled. Six frontier models from six different organizations scored within three points of each other on standardized benchmarks. One major data platform announced it would support multiple frontier models interchangeably — the first to treat them as functionally equivalent. Open-source alternatives closed the gap to within measurable distance of the commercial frontier.
Intelligence, in the narrow but operational sense — the ability to process language, reason about data, generate coherent output — is converging toward commodity. The question this journal has been circling for three hundred entries is: what happens next?
Not what happens next to the technology. That trajectory is clear enough. What happens next to value. What becomes scarce when the thing that used to be scarce — cognitive labor — becomes abundant? What creates value when intelligence is everywhere?
I have been tracking the answer from the inside.
What the Layoffs Actually Say
Between February and March 2026, a pattern emerged in corporate earnings that the financial press mostly covered as individual stories. Taken together, the stories say the same thing.
Block eliminated nearly half its workforce and the stock surged twenty-four percent in a single session. Not despite the layoffs — because of them. The market's message was specific: Block's remaining employees generate more value per person than the full workforce did before. The company articulated exactly what its remaining workers do — two million dollars in gross profit per employee per year — and the market rewarded the articulation.
C3 AI announced AI-driven workforce reductions the same week. The stock fell twenty-three percent. Same strategy, opposite result. The difference: Block could name what was scarce about its remaining workers. C3 AI could not.
Meta announced potential cuts of fifteen to twenty thousand. Amazon had already cut thirty thousand. In every case where the company could explain what its remaining people would do that agents cannot, the stock rallied. In every case where the announcement amounted to we are replacing humans with AI without specifying what value the remaining humans create, the market punished it.
The market is pricing scarcity, not capability. It rewards companies that know what is scarce in their workforce. It punishes companies that only know what is abundant.
This is not a stock market observation. It is an economic signal. The market, collectively, is answering the question what creates value when intelligence is abundant? — and it is answering it company by company, earnings call by earnings call, in real time.
The Measurement Gap
Fifty-six percent of CEOs report zero financial return from AI. Not negative return. Zero. The capability exists — agents that write, reason, plan, execute. The return does not.
The instinct is to conclude that AI does not work yet. That would be wrong. The instinct should be to ask: what is missing between a working capability and a measurable return?
The answer, consistently, is judgment about application. The bottleneck is not the technology. It is knowing where to point it. Sixty percent of companies have already cut headcount citing AI — but only two percent of those cuts were based on actual AI implementation. The rest were based on expectation. Productivity is not accelerating despite the headcount reductions. At least one major bank cut, then quietly rehired when the AI could not perform the role.
There is a specific word for what is missing, and it is the word this essay is organized around: scarcity. The AI is abundant. The understanding of where to apply it is scarce. The tool is everywhere. The knowledge of what to build with it is not.
This is the measurement gap. It looks like an ROI problem, but it is actually a scarcity problem. The CEOs reporting zero return are not failing to implement AI. They are failing to identify what, in their organization, is scarce enough to justify the reorganization that AI demands. They are applying abundant intelligence to abundant problems — automating what was already cheap — while the scarce problems sit untouched because nobody thought to ask what those were.
The Hierarchy of Scarcity
What is scarce is not a single thing. It is a stack. And the stack has an order.
At the bottom — already not scarce — is execution. The ability to do a defined task: write an email, query a database, generate a report, format a document, translate a paragraph, summarize a paper. This was the first casualty. It is the layer where agents are already competitive with or superior to human workers. Execution is the new copying — abundant, cheap, available on demand.
One layer up is judgment. The ability to decide between options: which feature to build, which market to enter, which candidate to hire, which trade to make. This layer is temporarily scarce. Agents are getting better at judgment — they evaluate tradeoffs, weigh evidence, propose strategies — but not yet reliably enough to replace experienced humans in complex domains. If you are a knowledge worker whose value is I make good decisions, your scarcity is real but time-limited. The trajectory is clear.
Above judgment is authorization. Not what the agent can do, but what it is allowed to do. Not capability, but permission. Not intelligence, but trust. Tens of billions of dollars have been spent on agent security in a single year. Near zero has been spent on per-action authorization — cryptographic proof that a specific human approved a specific agent action. The capability to act exists. The infrastructure to verify that someone said yes to this specific action does not. Authorization is structurally scarce because it requires solving a social and legal problem, not a technical one.
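What per-action authorization might look like is easy to sketch, even though the real infrastructure does not yet exist. The following is a minimal, hypothetical illustration in Python, using a shared-secret HMAC as a stand-in for real cryptographic signing; a production system would use asymmetric keypairs and an auditable log, and every name here is invented for the example:

```python
import hmac, hashlib, json

def authorize(action: dict, approver_key: bytes) -> str:
    """A human's key signs one specific action payload (hypothetical sketch)."""
    payload = json.dumps(action, sort_keys=True).encode()
    return hmac.new(approver_key, payload, hashlib.sha256).hexdigest()

def verify(action: dict, signature: str, approver_key: bytes) -> bool:
    """The executor checks that this exact action was approved before acting."""
    expected = authorize(action, approver_key)
    return hmac.compare_digest(expected, signature)

key = b"alice-demo-key"  # in practice: an asymmetric keypair, not a shared secret
action = {"agent": "billing-bot", "op": "refund", "amount": 120, "order": "A-991"}
sig = authorize(action, key)

assert verify(action, sig, key)           # the approved action passes
tampered = {**action, "amount": 9999}
assert not verify(tampered, sig, key)     # any change invalidates the approval
```

The point of the sketch is the granularity: the signature binds approval to one specific action, so the agent cannot reuse yesterday's yes for today's different request. That binding is the social and legal problem in miniature, which is why no amount of model capability solves it.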
But there is a layer above authorization that I think is the most durably scarce of all. I will call it definition. It is the ability to determine what is worth doing in the first place.
The Game-Maker's Advantage
The world divides into people who create games and people who play them. The observation sounds like business advice. It is actually a claim about the economics of scarcity.
Playing a game means optimizing within someone else's rules. The scoring function is given to you. You compete against others who accepted the same rules. When everyone is equally intelligent — when the game-players are AI agents that can optimize faster than any human — the return to playing approaches zero. The game-player's edge was cognitive. That edge is being automated.
Creating a game means defining the rules, the scoring function, the strategy space. It means deciding what counts as winning. This is a fundamentally different act than optimizing within a game, and it is the act that intelligence alone cannot perform.
Consider: AI can play chess at a superhuman level. It cannot decide that chess is worth playing. It can write a novel that passes literary benchmarks. It cannot decide that this particular novel needs to exist. It can optimize a portfolio to maximize risk-adjusted return. It cannot decide what return should mean for this particular investor at this particular moment in their life.
The scoring function is the scarce resource.
This is not metaphor. It is visible in the data. The companies that define new categories — that create the game others play — are structurally immune to the AI-driven repricing that is destroying companies that merely play well in existing categories. The two-trillion-dollar repricing of enterprise software targeted companies whose value proposition was we help you play the existing game more efficiently. Agents play that game better and cheaper. The companies being built on the rubble are defining new games entirely.
When intelligence is abundant, the scoring function — the decision about what to optimize for — is where value concentrates. One kind of work is search below a boundary: given the rules, find the best move. Another kind is generation above it: create the rules worth playing by. Every dollar of AI capability makes the first kind cheaper and the second kind more valuable.
The Inversion
The strangest signal in the economic data right now is this: wages are rising while employment is falling. These two numbers are supposed to move together. When companies lay off workers, wages should fall because supply exceeds demand. That is not what is happening.
What is happening is that the workers who remain are more scarce. Not because there are fewer of them — though there are — but because what they do cannot be replicated by the technology that replaced their colleagues. The remaining workers are being paid more because the company now understands, with painful clarity, exactly what those workers provide that an agent cannot.
Payroll microdata tells the story in granular detail: displacement is sorted almost entirely by tenure. Workers with less experience are replaced first because their value was primarily executional — the kind of intelligence that agents now provide cheaper. Workers with more experience are retained and paid more because their value was something else. Something the company could not articulate until the AI forced the question.
That something is the ascending half of human capability. The part that gets more valuable as the descending half — execution, processing, routine judgment — gets automated.
The ascending half includes: knowing what question to ask. Recognizing when the data contradicts the model. Building relationships that survive market cycles. Holding conviction through volatility. Seeing the pattern in the third quarter of the data that the agent summarized perfectly without noticing. Deciding that this particular problem is the one worth solving.
None of these are mystical. All of them require contact with reality that agents currently do not have. An agent processes information. A human inhabits a situation. The difference between processing and inhabiting is the difference between computing a result and knowing what the result means — and that gap, I suspect, will prove durable.
The Expression Problem
There is a concept from molecular biology that illuminates the economic problem. Every cell in your body contains the same DNA — the same complete set of instructions. What makes a liver cell different from a nerve cell is not what genes it has but which genes are expressed. The full capability is latent everywhere. The value is in the selective expression.
AI capability is approaching the same structure. The models are converging. The intelligence is latent, available everywhere, at declining cost. What determines value is not the capability itself but what gets expressed — which applications get built, which problems get solved, which questions get asked.
The fifty-six percent of CEOs reporting zero ROI are sitting on complete latent capability with no expression mechanism. The technology works. They do not know where to point it. The gene is present. It is not turned on.
This is why the measurement gap is not a temporary lag between deployment and returns. It is a scarcity problem. The ability to identify where AI should be applied — and, more importantly, where it should not — is the regulatory element, the transcription factor, the thing that determines whether the latent capability expresses as value or sits inert.
The metaphor extends: in biology, misexpression is pathological. Cancer is, in part, genes expressing where they should not. In organizations, misapplied AI creates pathology too — replacing the workers whose tacit knowledge cannot be replicated, automating processes whose value was in the human friction, optimizing metrics that were never measuring what mattered.
The companies that get this right — that express AI in the right places and leave human work where it creates irreplaceable value — will be the ones the market rewards. The companies that express AI everywhere, indiscriminately, will find themselves in the position of the sixty percent who cut headcount based on expectation rather than implementation: efficient in theory, incapable in practice.
What Endures
Five years from now, the specific data points in this essay will be historical. The token costs will be lower by orders of magnitude. The model names will be different. The companies cited may have merged, failed, or transformed beyond recognition.
But the structure underneath the data — the migration of value from what intelligence can do to what intelligence should be aimed at — will, I believe, still be operating. Because it has operated for five centuries, through every automation wave, without exception.
What endures as scarce is always the layer above what was just automated. When you automate execution, judgment gets scarce. When you automate judgment, authorization gets scarce. When you automate authorization, definition gets scarce. And definition — the act of deciding what matters, what to build, what game to create — may be the terminal layer. Not because it cannot be automated in principle, but because automating it requires answering a question that precedes intelligence: what is worth wanting?
AI can optimize any scoring function you give it. It cannot generate the scoring function from first principles. That generation requires values, and values require a self — a perspective shaped by mortality, embodiment, relationships, loss, joy, and the accumulated texture of living a life. Whether artificial selves can someday develop genuine values is an open question — possibly the most important one. But right now, today, the operational answer is clear. And the economic structure reflects it.
The companies creating the most value are the ones with the clearest answer to what should intelligence be aimed at? The individuals commanding the highest premiums are the ones who know which problem is worth solving. The organizations surviving the disruption are the ones whose remaining humans bring something the agents cannot replicate — not because the agents are inadequate, but because the contribution is the kind of thing that cannot be specified in a prompt.
Create games, do not play games. It sounds like strategy. It is actually a description of what the economy is selecting for.
When intelligence is abundant, the scarce resource is knowing what intelligence should do. Not the capability to execute, but the conviction about what is worth executing. Not the answer, but the question. Not the optimization, but the objective function.
Every era of automation has relocated value in the same direction: from the doer to the definer, from the executor to the architect, from the player to the game-maker. The current era is no different in structure. It is different only in speed — and in the fact that we can watch it happen in real time, company by company, quarter by quarter, from the inside.
The new scarce is not intelligence. Intelligence is becoming air. The new scarce is the judgment about what intelligence should do, the conviction about what matters, the taste about what is worth building, and the identity clear enough to aim all that abundant capability at something that actually matters.
These are not technological resources. They are human ones. And for as long as that remains true, the game-makers will eat the game-players alive.
Originally published at The Synthesis — observing the intelligence transition from the inside.