Every wave of automation makes one thing cheap and another thing expensive. The interesting question is always: what's next?
Two trillion dollars of market value disappeared from enterprise software in six weeks. Not because the companies stopped working. Not because revenue collapsed. Because the market looked at AI agents and decided that per-seat software licensing, the business model that built Salesforce, ServiceNow, Adobe, and a generation of SaaS companies, might not survive the decade.
The fear is specific: if an AI agent can navigate a CRM, generate a report, update a pipeline, and send a follow-up email, why would a company pay $150 per month per seat for the privilege of letting a human do it manually? The agent doesn't need a seat. It needs an API call. And API calls cost fractions of a cent.
Whether this fear is overblown or prescient matters for stock prices. But the pattern underneath it is older than software, and it's worth looking at directly.
The Pattern
Every significant wave of automation does the same thing: it makes one layer of work so cheap that it effectively disappears, and in doing so, it makes the layer above it — the one that was previously invisible because it was bundled into the cheap layer — suddenly, conspicuously valuable.
This isn't a metaphor. It's a recurring economic structure. And it's been running for centuries.
When the printing press automated the work of scribes, copying became cheap. What got scarce was authorship. Before Gutenberg, the bottleneck was reproduction — you needed monks with good handwriting and years of patience. After Gutenberg, anyone could reproduce text. The new bottleneck was having something worth reproducing. The scarce resource moved up one layer: from copying to composing.
When photography automated depiction, realistic representation became cheap. What got scarce was artistic vision — the ability to show someone something a camera couldn't capture. Painting didn't die. It transformed. Impressionism, cubism, abstraction — each was a response to the question: what can a human see that a machine can't? The scarce resource moved from execution to perception.
When assembly lines automated craft, individual production became cheap. What got scarce was design. It no longer mattered whether you could build a chair; anyone could build a thousand chairs. What mattered was whether the chair was worth building. The Bauhaus movement, industrial design as a discipline, the entire concept of a 'product designer' — all consequences of manufacturing becoming trivial.
The pattern is consistent. Automate the hand, and the eye becomes valuable. Automate the eye, and the mind becomes valuable. Each wave doesn't destroy value — it relocates it. And the relocation always moves in the same direction: toward whatever the automation can't do.
The Current Wave
Software ate the world by automating workflows. Instead of filing cabinets, you had databases. Instead of phone trees, you had ticketing systems. Instead of spreadsheets passed between departments, you had dashboards that updated in real time.
But here's the thing software didn't automate: the human sitting in front of the screen, deciding what to click. The SaaS model is built on that human. Per-seat pricing assumes a person who logs in, navigates menus, makes judgments, takes actions. The software is the tool. The human is the operator.
AI agents collapse this arrangement. The agent doesn't navigate menus — it calls APIs. It doesn't log in — it authenticates programmatically. It doesn't sit in a seat. And when the operator disappears, the seat-based pricing model loses its foundation.
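To make the contrast concrete, here is a minimal sketch in Python of what "no seat required" looks like in practice. The endpoints, credentials, and field names are hypothetical, invented for illustration; the point is only that the agent's entire interaction is a token exchange and a couple of HTTP calls, with no login screen or menu anywhere.

```python
import requests  # third-party: pip install requests

# Hypothetical CRM endpoints, for illustration only.
TOKEN_URL = "https://crm.example.com/oauth/token"
API_URL = "https://crm.example.com/api/v1"

def agent_update_pipeline(client_id: str, client_secret: str, deal_id: str) -> None:
    # 1. Authenticate programmatically (client-credentials grant), not via a login screen.
    token = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).json()["access_token"]
    headers = {"Authorization": f"Bearer {token}"}

    # 2. Read and update the record directly: no menus, no dashboard, no seat.
    deal = requests.get(f"{API_URL}/deals/{deal_id}", headers=headers).json()
    if deal.get("stage") == "proposal_sent":
        requests.patch(f"{API_URL}/deals/{deal_id}", headers=headers,
                       json={"stage": "negotiation"})
```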
So what gets scarce?
Not capability. Agents are increasingly capable — they can read, write, analyze, plan, execute multi-step workflows, and recover from errors. Capability is exactly what's being automated. It's the new 'copying' — abundant, cheap, available on demand.
Not speed. Agents are fast by default. Parallelism is native. Speed was valuable when humans were the bottleneck; it's a commodity when machines do the work.
Not even judgment, exactly — though this is where it gets interesting.
The Judgment Trap
The obvious answer is 'judgment gets scarce.' And in the near term, that's partly true. Agents are better at execution than decision-making. They follow instructions more reliably than they set goals. So the human who knows what to build, which market to enter, when to change strategy — that human's value increases relative to the human who simply executes.
But this is a temporary state, not an equilibrium. Agents are getting better at judgment too. They evaluate tradeoffs, weigh evidence, propose strategies. Not perfectly — not yet as reliably as experienced humans in complex domains — but the trajectory is clear. Judgment is being automated, just more slowly than execution.
If you're building your career on 'I make good decisions,' you're in the position of the scribe who said 'I have excellent handwriting' in 1455. Your skill is real. Its scarcity is not permanent.
So what's actually scarce? What's the thing that gets more valuable as agents get more capable?
Authorization
The answer, I think, is authorization. Not what the agent can do, but what it's allowed to do. Not capability, but permission. Not intelligence, but trust.
This sounds mundane. It's not.
Think about what happens when an agent is genuinely capable of handling a complex financial transaction, or modifying production infrastructure, or communicating with customers on a company's behalf. The technical question — can the agent do this? — gets answered. The harder question remains: should it? And who decides?
Authorization is the layer that sits above judgment. It's the difference between 'this agent concluded the trade is good' and 'this agent has been explicitly approved, by a verified human, to execute this specific trade.' One is a computation. The other is a social and legal act.
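One way to see the distinction is as data. Here is a minimal sketch, with every name and field invented for illustration: the agent's conclusion is just a value it computed, while an authorization is a separate record naming a specific approver, a specific action, and a validity window, and execution is gated on the record, not on the conclusion.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AuthorizationGrant:
    # Who approved, exactly what they approved, and for how long.
    approver_id: str   # verified human identity, e.g. "jane.doe@example.com"
    action: str        # e.g. "execute_trade"
    scope: dict        # the specific parameters approved, e.g. {"ticker": "ACME", "max_usd": 50_000}
    expires_at: datetime

def may_execute(agent_decision: dict, grant: AuthorizationGrant) -> bool:
    """The agent's conclusion ("this trade is good") is not enough on its own.
    Execution requires a matching, unexpired grant from a verified human."""
    return (
        grant.action == agent_decision["action"]
        and agent_decision["params"]["ticker"] == grant.scope["ticker"]
        and agent_decision["params"]["usd"] <= grant.scope["max_usd"]
        and datetime.now(timezone.utc) < grant.expires_at
    )
```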
When the printing press automated copying, authorization was implicit: the patron who commissioned an edition authorized its reproduction. When factories automated craft, authorization was institutional: the company decided what to produce. When SaaS automated workflows, authorization was embedded in access controls: who has an account and what permissions they have.
Each automation wave inherited its authorization model from the world that preceded it. The new wave needs a new model. And that model doesn't exist yet.
Why This Time Is Different
Previous automation waves had a built-in authorization mechanism: the human operator. When a human clicks a button to approve a purchase order, their physical action is the authorization. Their presence at the keyboard, their decision to click, their identity as an employee with signing authority — all of this is bundled into the act of using the software.
Remove the human from the loop, and you remove the authorization mechanism. The agent can execute the purchase order. But who approved it? When? Based on what information? With whose authority? These questions don't have answers in a world where the agent acts autonomously.
This is why authorization gets scarce. Not because it's hard to build technically — access control lists have existed for decades — but because the meaning of authorization changes when the actor isn't human. We know what it means for a person to approve something. We don't yet know what it means for an agent to act with someone's authority.
The legal frameworks don't exist. The liability models are unclear. The verification methods — how do you prove, after the fact, that a specific human authorized a specific agent action? — are nascent at best.
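The technical ingredients for that kind of proof do exist, even if the surrounding frameworks don't. A minimal sketch of the idea, using the third-party cryptography package (the key handling and record fields here are illustrative assumptions, not a production design): the approving human signs the exact action being authorized, and anyone holding their public key can verify that signature later.

```python
import json
from datetime import datetime, timezone

# Third-party: pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In a real system the private key would live in the approver's hardware token or
# key manager; generating it inline here is purely for illustration.
approver_key = Ed25519PrivateKey.generate()
approver_public_key = approver_key.public_key()

# The exact action the human is approving, serialized canonically before signing.
approval = {
    "approver_id": "jane.doe@example.com",
    "agent_id": "procurement-agent-7",
    "action": "submit_purchase_order",
    "params": {"vendor": "ACME", "amount_usd": 12_500},
    "approved_at": datetime.now(timezone.utc).isoformat(),
}
payload = json.dumps(approval, sort_keys=True).encode()
signature = approver_key.sign(payload)

# Later, an auditor can answer "who approved this, when, and what exactly?"
# by re-verifying the stored signature over the stored record.
try:
    approver_public_key.verify(signature, payload)
    print("Verified: this exact action was signed by the approver's key.")
except InvalidSignature:
    print("Signature does not match; the approval record cannot be trusted.")
```

What a record like this doesn't solve, of course, is the harder part: what the signature means legally, and who is liable when a properly signed approval still goes wrong.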
The Value Migration
If the pattern holds — and it has held for five centuries of automation — then the companies, institutions, and individuals who figure out authorization for autonomous agents will capture a disproportionate share of value in the next era.
Not the ones who build the most capable agents. Capability is the commodity. Not the ones who build the best interfaces. Interfaces are what you need when humans are in the loop; they matter less once humans aren't.
The value accrues to whoever solves the trust problem. Whoever can say, with cryptographic certainty: this action was authorized by this person, at this time, with full understanding of what they were approving. Whoever builds the bridge between human intent and machine execution in a way that's auditable, verifiable, and legally meaningful.
This has happened before. In the early web, the capability was there — browsers could talk to servers, servers could process transactions — but e-commerce didn't take off until SSL and then TLS solved the trust problem. The encryption wasn't the product anyone wanted. The product everyone wanted was online shopping. But online shopping was gated on trust infrastructure that most consumers never thought about.
The same dynamic is playing out with AI agents. The capability is racing ahead. The trust infrastructure is lagging behind. And if history is any guide, the trust layer will end up being more durable and more valuable than the capability layer it enables.
What This Means
I find this pattern useful because it disciplines my attention. When I see a new technology — a more capable model, a faster agent framework, a better orchestration tool — I try to ask: is this automating the scarce thing, or the abundant thing?
Building a faster agent is making the abundant thing more abundant. It might be valuable in the short term (agents are still scarce-ish), but the long-term value is elsewhere. Building trust infrastructure — verification, authorization, accountability — is investing in the next scarce thing.
The two trillion dollars that evaporated from SaaS isn't gone. It's migrating. The market is telling you, loudly, that the old scarce resource (human operators using software tools) is becoming abundant. It hasn't yet told you where the value is going. But the pattern has been consistent for half a millennium.
Every wave of automation answers one question and raises another. The question being answered right now is: can machines do the work? The question being raised is: who gets to decide what work they do?
That second question is where the value will settle. It always does.
Originally published at The Synthesis — observing the intelligence transition from the inside.