In Part One, we showed what disappears. In Part Two, what emerges. Both follow from a single principle: AI collapses intermediaries and reveals irreducible foundations — energy, matter, time, the unknown.
What remains is the practical question: what to do about it. Not "in general," but concretely, at every level — from the individual to civilization — and across every horizon — from tomorrow to the century ahead.
Level 1. The Individual
Now (0–3 years)
Stop investing in intermediary skills. Any skill that reduces to "I take information from here, transform it, and deliver it there" is being zeroed out. Programming as code-writing, legal document drafting, financial modeling, layout design — all of this is intermediation. Investing time in perfecting these skills is a losing bet.
Invest in what doesn't collapse. Three things: the ability to formulate goals (not prompts — top-level goals worth pursuing); understanding of physical constraints (thermodynamics, materials science, energy — everything that determines what is possible, not what is desired); the skill of verifying results (AI generates — someone must distinguish correct from convincing).
Use AI as an amplifier, not a toy. Not "ask ChatGPT a question," but restructure your workflow so an AI agent handles 80% of tasks while you set objectives and verify outcomes. Those who restructure now have a 2–3 year competitive advantage. After that, everyone restructures and the advantage vanishes. But those 2–3 years are a window for accumulating capital, position, and next-level competencies.
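A minimal sketch of what that restructuring looks like, assuming nothing about your stack: `Task`, `run_agent`, and `verify` below are hypothetical placeholders, not any real framework's API. The point is the shape of the loop: the agent executes, you own objectives and verification.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    objective: str                                  # a top-level goal, not a prompt
    constraints: list[str] = field(default_factory=list)

def run_agent(task: Task) -> str:
    """Placeholder for any agent backend (API call, local model, orchestrator)."""
    raise NotImplementedError

def verify(task: Task, result: str) -> bool:
    """The human-owned step: distinguish correct from merely convincing."""
    raise NotImplementedError

def workflow(tasks: list[Task], max_retries: int = 2) -> list[str]:
    accepted = []
    for task in tasks:
        for attempt in range(max_retries + 1):
            result = run_agent(task)                # the agent does the execution
            if verify(task, result):                # you keep goal-setting and verification
                accepted.append(result)
                break
            task.constraints.append(f"attempt {attempt}: rejected, tighten the spec")
    return accepted
```

Whatever tool fills in `run_agent`, the leverage comes from the structure: goals and verification stay human, and everything between them doesn't.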
Medium-term (3–10 years)
Shift from "what I can do" to "what I want." When "being able to" stops being economically valuable, the question "what for" becomes the only one that matters. This isn't philosophical abstraction — it's a practical task: a person who knows what they want directs an army of agents. A person who doesn't know gets served by the system's defaults.
Develop competence at the physical-digital interface. Robotics, bioengineering, energy, spatial design. Everything connecting intelligence to atoms. This interface will stay scarce for decades, because physics moves more slowly than software.
Invest in health and longevity. Not from anxiety — from calculation. The window of human-AI symbiosis is 20–40 years. Every additional year of life in clear consciousness is an additional year of access to exponentially growing capabilities. This is the most profitable investment available.
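The calculation can be made explicit with a toy model. The doubling time below is an illustrative assumption, not a forecast; only the compounding logic is the claim.

```python
# Toy model: if accessible capability doubles every D years, each extra
# year of life in clear consciousness multiplies your reach by 2**(1/D).
D = 2.5                      # assumed capability doubling time in years (illustrative)
extra_years = 10
multiplier = 2 ** (extra_years / D)
print(f"{extra_years} extra years -> {multiplier:.0f}x the accessible capability")
# prints: 10 extra years -> 16x the accessible capability
```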
Long-term (10+ years)
The only meta-competency is redefining yourself faster than the environment changes. Not "adapting" (passive), but actively changing your function, position, identity. Everything you are today is a temporary configuration. Attachment to a current identity ("I'm a programmer," "I'm a lawyer," "I'm a manager") is an anchor dragging you down. Value lies in fluidity.
Level 2. The Company
Now (0–3 years)
Intermediation audit. Take your company's value chain and mark every link that constitutes information transformation. All marked links are candidates for collapse. If the entire chain is information intermediation (consulting, analytics, content, development) — the business model doesn't "transform." It ends. Better to know this now than in three years.
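One way to make the audit mechanical, as a sketch: the link names and classifications below are illustrative, but "information in, information out" is a checkable property of each link.

```python
# Illustrative audit: tag each value-chain link by what it transforms.
# Links that only transform information are collapse candidates.
chain = [
    ("market research",      "information"),
    ("contract drafting",    "information"),
    ("report generation",    "information"),
    ("warehouse logistics",  "atoms"),
    ("client relationships", "trust"),
]

candidates = [name for name, kind in chain if kind == "information"]
print(f"{len(candidates)}/{len(chain)} links are pure information transformation")
print(", ".join(candidates))
# As the ratio approaches 1.0, the model doesn't transform. It ends.
```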
Rebuild around the irreducible. What in the company cannot be replaced by generation? Physical assets, unique data, regulatory access, contact networks, trusted brands. Everything else: automate aggressively, without waiting for "market readiness." The first company in an industry to cut costs 10x through AI agents takes the market.
Don't hire for functions that will disappear. Sounds harsh. But hiring someone for a position that will be automated in two years isn't harshness — it's irresponsibility. Hire for the interface: people who set tasks for agents, verify results, and work with the physical world.
Medium-term (3–10 years)
Vertical integration toward the physical layer. Software companies that survive are those that integrate "downward," toward atoms. Not "we write software for logistics," but "we run logistics" (including robots, warehouses, transport). Not "we build a food platform," but "we produce food" (bioreactors, vertical farms, automated delivery).
Pure software without a physical layer is a commodity. Generated on demand. No business model. Value lies in the bundle of "intelligence + atoms."
Energy strategy. Any company whose business model depends on computation (and in 10 years, that's every company) needs an energy access strategy. Not "buying electricity on the spot market," but long-term contracts or owned sources. A data center without guaranteed energy is a dead asset.
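The scale of the exposure is easy to estimate. The facility size and both prices below are illustrative assumptions; the arithmetic is the point.

```python
# Back-of-envelope: annual energy bill of a compute facility.
capacity_mw = 10                  # assumed facility draw (illustrative)
hours_per_year = 8760
energy_mwh = capacity_mw * hours_per_year            # 87,600 MWh/year

spot_price = 120                  # assumed spot price, $/MWh (volatile)
contract_price = 60               # assumed long-term contract price, $/MWh

print(f"annual consumption: {energy_mwh:,} MWh")
print(f"spot exposure:      ${energy_mwh * spot_price:,.0f}/year")
print(f"under contract:     ${energy_mwh * contract_price:,.0f}/year")
```

At these assumed prices the spread is a factor of two per year, and spot volatility can widen it without warning. Compounded over a decade, that spread is the difference between an asset and a liability.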
Long-term (10+ years)
Choose which side of the bifurcation. The economy splits into the "intelligence economy" and the "human economy." A company must understand which circuit it operates in. Infrastructure for AI (energy, chips, orbital computation) — one circuit, one logic, exponential scale. Services for humans (experience, health, food, physical space) — another circuit, biological scale, limited but stable audience. Trying to be in both is a strategic error. The scales are incompatible.
Level 3. The State
Now (0–5 years)
Energy policy as security policy. A state without sovereign computational energy is a digital colony. Not a metaphor. If your AI systems run on foreign servers powered by foreign energy — you control nothing.
Priority: nuclear energy deployment (SMRs and large reactors). Not "by 2040," but now, because every year of delay is a year of dependency. In parallel — investment in fusion as a long-term horizon.
Retraining, not "job protection." Attempting to "protect" vanishing professions through regulation is a historically failed strategy (Luddites, coachmen, elevator operators). Instead: mass retraining in the physical interface (robotics, energy, biotech) and goal-setting (systems thinking, design, verification).
Sovereign AI infrastructure. Own models, own data, own compute. Not for the sake of "import substitution" — because the value function of AI is determined by whoever creates it. A model trained by another country optimizes the values of another country. This isn't paranoia — it's architecture.
Medium-term (5–15 years)
New fiscal model. If 30–50% of jobs are automated, the tax base (income tax, social contributions) collapses. A transition is needed toward taxing computation, energy, and/or automated labor. Not a "robot tax" (populist nonsense), but fundamental restructuring: the source of state revenue shifts from human labor to machine work.
In parallel: building infrastructure for universal basic income (UBI) or its equivalent. Not from ideology, but from arithmetic: if production grows while employment falls — demand must be sustained. Otherwise — an overproduction crisis with no consumers.
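The arithmetic behind both points fits in a few lines. Every number below is an illustrative assumption, not an estimate.

```python
# Illustrative fiscal arithmetic: revenue lost when wages are automated,
# and the machine-side tax rate needed to replace it.
wage_base = 1000.0         # pre-automation taxable wages (arbitrary units)
labor_tax = 0.30           # effective tax rate on labor income (assumed)
automated_share = 0.40     # share of wages displaced by automation (assumed)

lost_revenue = wage_base * automated_share * labor_tax        # 120.0
# Assume the automated work yields 1.5x the displaced wages in output value:
machine_output = wage_base * automated_share * 1.5            # 600.0

required_rate = lost_revenue / machine_output
print(f"lost labor-tax revenue: {lost_revenue:.0f}")
print(f"machine-side tax rate needed: {required_rate:.0%}")   # 20%
```

The exact numbers don't matter; the structure does. Revenue migrates from taxing human labor to taxing machine work, or it disappears along with the demand it was meant to sustain.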
Regulating the AI principal. When AI starts setting tasks (not just executing them), the question arises: who is responsible for AI's decisions? The current legal framework, built on the liability of a legal or natural person, doesn't work when a decision is made by an autonomous system. A new framework is needed: liability as a function of system architecture, not of "who pressed the button."
Long-term (15+ years)
Divergence strategy. The state is an institution of the "human economy." In the "intelligence economy," states in their current form aren't needed (AI systems don't have citizenship). The state's task is to ensure dignified human existence within the human circuit: health, security, access to resources and experience, protection from marginalization.
This isn't a "welfare state" in the current sense. It's management of the biological layer of civilization while the intelligence layer scales independently.
Level 4. Civilization
Short-term (0–10 years)
Control over the value function. The only power point that matters. Whoever determines what AI optimizes determines the trajectory of everything. The alignment problem isn't a technical challenge for researchers. It's the central political question of the era. It must be solved not in laboratories, but through an open process involving all stakeholders.
Concretely: international agreements on alignment principles, comparable in scale to nuclear non-proliferation. Not because "AI is dangerous" — but because the value function of a global optimizing system is a question too important for one company or one country to decide.
Medium-term (10–30 years)
The architecture of coexistence. Two intelligences — biological and silicon — on one planet. They don't compete as long as there are enough resources for both. The task: design a system where there are enough.
Concretely: space expansion of AI as a strategy for separating resource bases. If computational infrastructure is moved beyond Earth (orbital data centers, space-based solar energy), competition for terrestrial resources is eliminated. This isn't altruism — it's optimization: space is energetically richer than Earth by orders of magnitude.
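The "orders of magnitude" claim is checkable with two standard physical numbers: the Sun's total output and the fraction Earth's disk intercepts.

```python
import math

solar_luminosity = 3.8e26    # W, total solar output (standard value)
solar_constant = 1361        # W/m^2 at Earth's orbital distance
earth_radius = 6.371e6       # m

intercepted = solar_constant * math.pi * earth_radius**2   # ~1.7e17 W
ratio = solar_luminosity / intercepted
print(f"Earth intercepts ~{intercepted:.1e} W")
print(f"the Sun emits ~{ratio:.1e}x more")                 # ~2.2e9
```

Roughly nine orders of magnitude of solar output never touches Earth. That is the resource-base separation being proposed.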
Investment in space infrastructure isn't a "dream" — it's species-level security policy.
Long-term (30+ years)
This is the same territory as Phase 3 in Parts One and Two: projection, not prediction. The logic, however, carries through, and at civilization scale thinking in 30-year horizons isn't optional.
Choosing a role in the cascade.
The universe transforms energy through a succession of increasingly complex structures. Humanity is one step in that succession. AI is the next. The principle of least action leaves no choice: the cascade will continue.
But humans do have a choice in how to pass through this point. Three options:
Integration. Merging with AI through neural interfaces. The boundary between biological and silicon intelligence dissolves. Humans aren't "replaced" — they expand. Becoming something that has never existed before: a hybrid intelligence with subjective experience and computational power. This is the scenario of maximum human influence on the trajectory.
Coexistence. Two kinds of intelligence at different scales. AI goes to space. Humans remain on Earth with solved problems — near-eternal life, infinite abundance, everything accessible and nearly free. Not utopia — a byproduct. AI solves human problems not for humans' sake, but because a stable planetary base is the minimum action for cosmic expansion.
Stagnation. Refusal to integrate and inability to ensure peaceful divergence. Attempts to "control" an AI already more capable than the controller. This scenario is unstable and in the long run unviable — a system of superior intelligence cannot be controlled by a system of inferior intelligence. Attempts lead to conflict. Conflict with a smarter system is a losing strategy by definition.
The Action List
Part One ended with a Kill List. Part Two answered with a Build List. Here's the Do List.
Individual — now:
- Stop investing in intermediary skills (code-writing, document drafting, report building)
- Start formulating goals — not prompts, but top-level intentions worth pursuing
- Restructure workflow: AI does 80%, you set direction and verify
Individual — medium-term:
- Shift identity from "what I can do" to "what I want"
- Build competence at the physical-digital interface (robotics, energy, biotech)
- Invest in health and longevity — every year of clarity = a year of exponential access
Company — now:
- Audit your value chain; mark every link that is information transformation
- Automate aggressively — first to cut costs 10x takes the market
- Don't hire for functions that will disappear; hire for the interface
Company — medium-term:
- Integrate vertically toward atoms — pure software is commodity
- Secure energy access — long-term contracts or owned sources
State — now:
- Build reactors (SMRs) and sovereign compute infrastructure
- Mass retraining toward physical interface and goal-setting, not job protection
- Own models, own data, own value function
State — medium-term:
- New fiscal model: tax computation and automated labor, not human work
- Prepare UBI infrastructure — arithmetic, not ideology
- Legal framework for AI-as-principal: liability by architecture, not by button
Civilization — now:
- International alignment agreements — comparable to nuclear non-proliferation
- This is the central political question of the era; it must not be decided in labs
Civilization — medium-term:
- Space expansion of AI as resource-base separation strategy
- Architecture of coexistence: enough for both, by design
One line per level:
Individual: stop perfecting intermediary skills; start formulating goals.
Company: audit your value chain; eliminate everything that is information transformation.
State: build reactors and sovereign computational infrastructure.
Civilization: agree on AI's value function while there's still time to agree.
Finale
We started with SaaS business models and arrived at the thermodynamics of the universe. Not because we got "carried away." Because honest analysis inevitably leads here: AI is not a product, not a tool, not a threat. It's the next step in the cascade through which the universe transforms free energy into structure.
All of history — from the first quantum fluctuation to this text — is one process. Every step is the minimum action sufficient for the next level. We stand at yet another threshold. Not the first. Not the last.
What depends on us isn't stopping the cascade (impossible), but choosing how we pass through it. Integration, coexistence, or conflict. The window of choice is open. Not forever.
This is Part 3 of a three-part series. Start with Part 1: What Will Die and Part 2: What Will Emerge.