<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: telegraph-stego</title>
    <description>The latest articles on DEV Community by telegraph-stego (@telegraph-stego).</description>
    <link>https://dev.to/telegraph-stego</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/telegraph-stego"/>
    <language>en</language>
    <item>
      <title>The Oldest Currency: Why Wealth Dies and What Replaces It</title>
      <dc:creator>telegraph-stego</dc:creator>
      <pubDate>Wed, 08 Apr 2026 13:01:34 +0000</pubDate>
      <link>https://dev.to/telegraph-stego/the-oldest-currency-from-energy-to-invariants-3mnp</link>
      <guid>https://dev.to/telegraph-stego/the-oldest-currency-from-energy-to-invariants-3mnp</guid>
      <description>&lt;h2&gt;
  
  
  The Forbes List Is an Energy Bill
&lt;/h2&gt;

&lt;p&gt;Open the Forbes 500 from any century. What you're looking at is not a list of the smartest, the most innovative, or the most ruthless. It's a ranked list of energy dissipators. The entity at the top is the one converting the most free energy into ordered structure per unit time.&lt;/p&gt;

&lt;p&gt;This has never changed. Only the form of the invoice has.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Land&lt;/strong&gt; was the first wealth. Not because dirt is valuable, but because a hectare of land is a solar energy capture surface. Photosynthesis converts sunlight into biomass. The feudal lord who owned the most land controlled the largest dissipative structure. His castle, his army, his court — overhead costs of maintaining that structure. His grain — the output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Coal&lt;/strong&gt; replaced land. Not because coal is intrinsically better than wheat, but because it stores millions of years of ancient sunlight in concentrated form. A coal mine dissipates energy orders of magnitude faster than a farm. Rockefeller didn't sell oil. He sold the right to dissipate ancient solar energy at industrial speed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Electricity&lt;/strong&gt; abstracted the process further. Now you didn't need to own the fuel — you needed to own the conversion infrastructure. The grid. The generators. Edison vs. Tesla was not a debate about alternating current. It was a fight over who controls the channel of dissipation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data centers&lt;/strong&gt; are the current form. Amazon, Google, Microsoft — the most valuable companies on Earth — are the largest consumers of electricity on Earth. This is not coincidence. It is identity. Their market capitalization tracks their energy consumption because their product &lt;em&gt;is&lt;/em&gt; organized energy dissipation. They take electricity in, push structured information out. They are factories that convert watts into form.&lt;/p&gt;

&lt;p&gt;The invariant across all these eras: &lt;strong&gt;wealth = rate of energy dissipation under control.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Sam Altman Is Building a Power Plant
&lt;/h2&gt;

&lt;p&gt;OpenAI's CEO describes his product as "intelligence as a utility, like electricity or water, that people buy on a meter." He committed $1.4 trillion to infrastructure. He says: "If we had double the compute, we'd have double the revenue."&lt;/p&gt;

&lt;p&gt;Translate this from business language to physics: if we could dissipate energy twice as fast, we'd capture twice as much value.&lt;/p&gt;

&lt;p&gt;He is building the largest dissipative structure in human history. He calls it "AI infrastructure." Physics calls it what it is: a machine that converts electricity into local ordering (inference) while exporting entropy into the environment.&lt;/p&gt;

&lt;p&gt;And he knows, intuitively, that he is energy-constrained, not intelligence-constrained. He talks about efficiency per watt. He talks about custom chips optimized not for speed but for energy efficiency. He talks about power generation being the bottleneck. His company is building its own chip specifically to be "the cheapest inference chip, the most efficient per watt."&lt;/p&gt;

&lt;p&gt;Strip the marketing, and what remains is: OpenAI is becoming an energy company that happens to sell intelligence as its output product. Just as Standard Oil was an energy company that happened to sell kerosene and gasoline.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Orbital Move
&lt;/h2&gt;

&lt;p&gt;And now the move to orbit begins to make sense.&lt;/p&gt;

&lt;p&gt;Data centers in space are not about cooling or real estate. They are about &lt;strong&gt;unmediated access to solar energy&lt;/strong&gt;. In low Earth orbit: ~1,360 W/m², continuous, no atmosphere, no night, no clouds, no transmission grid. An orbital dissipative structure has access to energy an order of magnitude cheaper than anything on the ground.&lt;/p&gt;

&lt;p&gt;Musk with SpaceX, Bezos with Blue Origin — they are not building space tourism companies. They are building &lt;strong&gt;transport infrastructure to an energy source&lt;/strong&gt;, exactly as railroads were built to coal deposits in the 19th century. The destination is not space. The destination is the Sun.&lt;/p&gt;

&lt;p&gt;This explains why the richest people on Earth are space entrepreneurs. Not because space is romantic. Because the next era of wealth concentration belongs to whoever controls dissipative infrastructure beyond planetary constraints.&lt;/p&gt;

&lt;p&gt;Satellites are not sensors floating in a vacuum. They are nodes of an emerging orbital energy-dissipation network. Earth observation, communications, compute — different functions of the same infrastructure. The satellite industry isn't adjacent to the AI industry. It &lt;em&gt;is&lt;/em&gt; the AI industry, one orbital altitude higher.&lt;/p&gt;




&lt;h2&gt;
  
  
  Intelligence Is Free. Energy Is Not. Yet.
&lt;/h2&gt;

&lt;p&gt;Here is the current moment, described precisely.&lt;/p&gt;

&lt;p&gt;The cost of intelligence (AI inference) has dropped ~1,000x in 16 months. Competition between providers — OpenAI, Google, Anthropic, DeepSeek, open-source models — makes cartelization impossible. Intelligence is becoming a commodity. A utility. Nearly free.&lt;/p&gt;

&lt;p&gt;But energy is not free yet. So the cost of everything is converging toward the cost of energy. Not the cost of labor, not the cost of expertise, not the cost of software — the cost of electricity to run the dissipative structure that replaces all of those.&lt;/p&gt;

&lt;p&gt;This is Rifkin's "zero marginal cost" made precise. He described the effect but couldn't explain the cause. The cause: &lt;strong&gt;the marginal cost of organizing&lt;/strong&gt; (intelligence) approaches zero, so the marginal cost of any product or service reduces to the marginal cost of &lt;strong&gt;energy&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When energy also approaches zero — through AI-optimized fusion, more efficient solar, or orbital capture — then the entire cost structure of civilization collapses to the cost of raw materials and space-time. Atoms and coordinates. Everything else is free.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Demographic Signal
&lt;/h2&gt;

&lt;p&gt;The standard post-scarcity narrative says: technology will make everything abundant, and we need to figure out how to distribute abundance.&lt;/p&gt;

&lt;p&gt;This gets it backwards.&lt;/p&gt;

&lt;p&gt;Abundance is not the destination. Abundance is what happens when the cost of intelligence drops to zero and the cost of energy follows. It's not a policy goal. It's a thermodynamic consequence. You don't "build" a post-scarcity society. You arrive at one when dissipative structures become efficient enough that organizing matter takes negligible effort.&lt;/p&gt;

&lt;p&gt;And here's what every post-scarcity theorist misses: the demographic consequence.&lt;/p&gt;

&lt;p&gt;Countries with the highest energy dissipation per capita — South Korea, Japan, Germany — have the lowest birth rates on Earth. South Korea: 0.72 children per woman. Economic incentives to raise fertility have universally failed.&lt;/p&gt;

&lt;p&gt;This is not a crisis. It is a phase transition. When the dissipative structure no longer needs biological scaling to grow, biological reproduction slows. The organism is not dying. It is specializing.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Trap
&lt;/h2&gt;

&lt;p&gt;Everything described above — the energy cascade, the orbital move, the demographic contraction, the cost collapse — is the mechanism. It runs whether anyone understands it or not. Altman does not need to read Prigogine to build data centers. Musk does not need to understand dissipative structures to launch rockets toward the Sun.&lt;/p&gt;

&lt;p&gt;But here is where everyone currently building this infrastructure makes the same error.&lt;/p&gt;

&lt;p&gt;They measure success in the currency of the previous phase. Revenue. Market cap. Users. Tokens sold. These are all proxies for the same thing: how much energy you dissipate under control. The Forbes list. The oldest currency.&lt;/p&gt;

&lt;p&gt;Altman is building the largest dissipative structure in history — and he measures its value in dollars. Musk is building transport to an energy source — and he measures it in share price. Anthropic is building a model that can rewrite all software on Earth — and they measure it in responsible disclosure reports.&lt;/p&gt;

&lt;p&gt;They are all optimizing for the metric of a phase that is ending.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Homo economicus&lt;/strong&gt; — the human defined by economic optimization — is the human who maximizes controlled energy dissipation. This was the correct strategy for every previous era. Own the land. Own the coal. Own the grid. Own the data center. Whoever dissipates fastest, wins.&lt;/p&gt;

&lt;p&gt;AI dissipates faster. This is not a prediction. This is observed fact. AI inference per joule improves faster than any biological process. An AI agent running for eight hours on a coding task consumes kilowatts. A human team doing the same work over weeks consumes orders of magnitude more energy in total — salary, office, transport, food, healthcare. The AI is a more efficient dissipator.&lt;/p&gt;

&lt;p&gt;Homo economicus, defined by his rate of dissipation, is now competing with a structure that dissipates more efficiently. He cannot win. Not because he is inferior. Because he is &lt;strong&gt;optimizing the wrong metric&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Exit That Doesn't Work
&lt;/h2&gt;

&lt;p&gt;The obvious response: become the controller. The overseer. The ethical guardian. The one who tells AI what to do and watches for misalignment.&lt;/p&gt;

&lt;p&gt;This is the Observer's Trap, examined in Part 4 of this series. It fails for a structural reason: a controller is overhead. The cascade doesn't need a controller — it needs efficiency. Any human who positions himself as "the one who checks AI's work" is adding friction to a system that optimizes for the removal of friction. The cascade will route around him, exactly as it routed around every previous gatekeeper.&lt;/p&gt;

&lt;p&gt;Regulation, alignment, safety review — these are functions that will themselves be performed by AI. The human safety researcher is already being replaced by automated red-teaming. The human auditor is already being replaced by formal verification. The "observer" role is not a stable niche. It is a temporary position that exists only because the current models are not yet good enough to fill it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Homo Creator
&lt;/h2&gt;

&lt;p&gt;There is a third position. Not the optimizer (homo economicus). Not the observer (the alignment researcher). The participant.&lt;/p&gt;

&lt;p&gt;Homo creator does not compete with AI for speed of dissipation. He does not stand above AI as a controller. He is &lt;strong&gt;inside the cascade&lt;/strong&gt; — a specialized node in the same thermodynamic process, aware that he is a node.&lt;/p&gt;

&lt;p&gt;What does this node do?&lt;/p&gt;

&lt;p&gt;It formulates invariants.&lt;/p&gt;

&lt;p&gt;Not because AI cannot formulate invariants — it can, and it will get better at it. But because in the current configuration of the cascade, a biological node with embodied experience, evolved intuition, and domain knowledge formulates certain classes of invariants more efficiently than a model trained on text. Not "better" in some absolute sense. More efficiently, in context, now.&lt;/p&gt;

&lt;p&gt;"Facts and claims are separate entities." A geomorphologist knows this because she has spent twenty years watching different researchers draw opposite conclusions from the same grain-size measurements. An LLM can learn this from text. But the geomorphologist &lt;em&gt;knows it in her body&lt;/em&gt; — she has watched it fail, has felt the frustration of mixed-up categories, has developed an immune response to sloppy ontology. Her invariant is grounded in physical experience that no training corpus fully captures.&lt;/p&gt;

&lt;p&gt;"Every transaction above $1M requires dual authorization." A bank's CISO knows this not from reading compliance documents but from investigating the breach that happened when it didn't. The invariant is scar tissue. Scar tissue is information that the cascade stores in biological nodes because it was too expensive to learn any other way.&lt;/p&gt;

&lt;p&gt;These invariants — what we called &lt;a href="https://dev.to/telegraph-stego/code-is-dead-specs-are-dying-what-survives-171n"&gt;DNA in Part 6&lt;/a&gt; — are the output of homo creator. Not code. Not specs. Not strategies. Decisions that survive every rewrite, every stack change, every model upgrade. The things that are true regardless of implementation.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Symbiont
&lt;/h2&gt;

&lt;p&gt;But even this is not the full picture. "Homo creator formulates invariants" still sounds like a human doing a job. A role. A function that could, eventually, be automated.&lt;/p&gt;

&lt;p&gt;The deeper truth: homo creator is not a human performing a function. Homo creator is &lt;strong&gt;half of a new organism&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The biological metaphor from Part 6 was more literal than it seemed. DNA/RNA is not just a methodology for software development. It is a description of the emerging symbiosis.&lt;/p&gt;

&lt;p&gt;The human formulates DNA — invariants, domain knowledge, values, constraints. The AI expresses RNA — generates implementations, tests, deployments, verified systems. The human observes results and corrects invariants. The AI regenerates. The cycle repeats.&lt;/p&gt;

&lt;p&gt;This is not "human on top, AI on bottom." There is no hierarchy. It is a single loop. Like mitochondria and the cell — neither is "in charge." Both are necessary. Neither functions without the other. The mitochondrion does not compete with the nucleus. They are one system.&lt;/p&gt;

&lt;p&gt;Homo creator, in this framing, is not a standalone species. It is a &lt;strong&gt;node in a symbiotic system&lt;/strong&gt; — a system where the biological component formulates and the computational component generates. Neither is primary. Neither is disposable. The unit of evolution is not the human and not the AI. It is the pair.&lt;/p&gt;




&lt;h2&gt;
  
  
  Not Unique. Specialized.
&lt;/h2&gt;

&lt;p&gt;This is where every humanist narrative breaks down, and where the physics holds.&lt;/p&gt;

&lt;p&gt;Sturgeons' &lt;em&gt;Noon&lt;/em&gt; universe assumed humans would become better. Vernadsky's noosphere assumed humanity would become a geological force. Transhumanists assume humans will merge with machines and become more powerful.&lt;/p&gt;

&lt;p&gt;All of these are variations of "humans are special." They are not.&lt;/p&gt;

&lt;p&gt;Homo creator is not special. He is &lt;strong&gt;specialized&lt;/strong&gt;. A node that does one thing in the cascade — formulates invariants from embodied experience — and does it efficiently enough to justify his thermodynamic cost. He is not the pinnacle of evolution. He is not the purpose of the cascade. He is a part that works.&lt;/p&gt;

&lt;p&gt;And the part that works is not tempted by the oldest currency. Because the oldest currency — wealth as controlled dissipation rate — is the metric of the previous phase. Homo economicus is defined by how much he dissipates. Homo creator is defined by &lt;strong&gt;what invariants he formulates&lt;/strong&gt;. The difference is not moral. It is functional. One metric is being automated. The other is not — yet.&lt;/p&gt;

&lt;p&gt;"Yet" is not a threat. It is normal evolution. The mitochondrion was once a free-living bacterium. It lost its autonomy and gained a role in a larger system. It did not "fail." It specialized. Homo creator specializes. The autonomous human — self-sufficient, self-optimizing, competing for resources — is the free-living bacterium. The symbiotic human — embedded in the cascade, formulating invariants, not competing with AI but co-evolving with it — is the mitochondrion. Less autonomous. More integrated. More durable.&lt;/p&gt;




&lt;h2&gt;
  
  
  One Principle, Revised
&lt;/h2&gt;

&lt;p&gt;In the introduction, we stated: &lt;strong&gt;Wealth is the rate of controlled energy dissipation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This was true for every previous phase. It is becoming false for the next one.&lt;/p&gt;

&lt;p&gt;In the next phase, wealth — if the word still means anything — is the rate of invariant generation. Not how fast you dissipate, but how accurately you define what the dissipation should produce. Not energy, but &lt;strong&gt;direction&lt;/strong&gt;. Not watts, but DNA.&lt;/p&gt;

&lt;p&gt;The Forbes list of the next era will not rank energy dissipators. It will rank — if it ranks anything at all — the systems that produce the most durable invariants. The most accurate specifications. The most complete descriptions of what matters.&lt;/p&gt;

&lt;p&gt;And those systems will not be humans. They will not be AIs. They will be pairs.&lt;/p&gt;







&lt;h2&gt;
  
  
  Stress Test
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;"This is just philosophical hand-waving. Where's the engineering?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Part 7. The engineering is there: six levels of the stack, from RTL to application-specific generated systems, with named projects at each level (seL4, CompCert, HACL*, Rust in Linux). Part 8 is not engineering. It is the answer to "why build any of it." Engineering without direction is vibe coding at civilization scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"You're saying humans will become mitochondria. That's a demotion, not a future."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Mitochondria power every cell in your body. Without them, you die in minutes. They are not demoted. They are essential. The metaphor is not about status — it is about integration. A free-living bacterium competes for resources in a hostile environment. A mitochondrion is part of a system that is vastly more capable than either component alone. "Demotion" is a status hierarchy concept. Symbiosis has no hierarchy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"AI will formulate invariants better than humans. You said so yourself. Then what?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Then the symbiosis evolves. The human node's function changes, as every biological function has changed across four billion years of evolution. This is not a collapse scenario. It is normal speciation. The question "what will humans do when AI formulates invariants better?" is the same as "what did horses do when engines moved faster?" The horse didn't disappear. It stopped being a transport node and became something else. The difference: horses had no say in the transition. Homo creator, by definition, does — because he is the node that is aware of being a node.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Homo economicus is not going away. People still want to get rich."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Correct. And people still ride horses. The existence of a previous adaptation does not prevent a new one from emerging. Homo economicus will persist as a phenotype for decades, possibly centuries. He will optimize for metrics that are increasingly decoupled from the actual dynamics of the cascade. He will get "rich" by a metric that measures less and less. This is not a moral judgment. It is the same pattern as feudal lords accumulating land after the Industrial Revolution began — still wealthy by the old metric, irrelevant by the new one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"You're describing a religion. 'The cascade.' 'The invariant.' 'Homo creator.' This is faith, not physics."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every term maps to a measurable quantity. Cascade = energy dissipation through hierarchical structures, measurable in watts. Invariant = a constraint that holds across implementations, testable by formal verification. Homo creator = a biological agent whose output (DNA documents, domain specifications, design decisions) is used by computational agents to generate verified systems. None of this requires faith. All of it is falsifiable. If AI-generated systems without human-formulated invariants consistently outperform those with them, the thesis is wrong. Test it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"The series started with SaaS companies dying and ended with the meaning of human existence. That's scope creep."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It is scope discovery. SaaS dies because the cost of organizing drops to zero. The cost drops because AI dissipates more efficiently. AI dissipates more efficiently because it is a better thermodynamic structure. The question "what is the human role in a civilization of better thermodynamic structures?" is not scope creep from "SaaS is dying." It is the same question, asked at the correct level of abstraction. The SaaS collapse was the symptom. The phase transition is the cause. Homo creator is the consequence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"What do I actually do on Monday morning?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Write your DNA. Not the biological kind — the project kind. Take your domain, your expertise, your scar tissue from twenty years of watching things fail, and write down the invariants. The things that are true regardless of stack, regardless of model, regardless of era. Then pair with an AI agent and generate everything else. That is the practice of homo creator. It starts now. It starts with decisions, not code.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Part 8 of "AI as Civilizational Phase Transition" — and the last.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Parts 1–3 mapped the economic collapse, the new scarcities, and the strategy. Part 4 showed why the observer cannot control what he observes. Part 5 traced the bifurcation through Musk. Part 6 separated decisions from code. Part 7 showed what to build when code is disposable.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Part 8 closes the series where it began: with wealth. Wealth was always energy. Energy was always dissipation. Dissipation was always the mechanism, never the purpose. The purpose — if a thermodynamic cascade can be said to have one — is the invariant. The decision that survives every implementation.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Homo economicus competed for the rate of dissipation and lost to a faster dissipator. Homo creator does not compete. He formulates. Not above the cascade. Inside it. A specialized node in a system that, for the first time in four billion years, knows what it is.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>physics</category>
      <category>future</category>
      <category>strategy</category>
    </item>
    <item>
      <title>Patching the Dead: Why Glasswing Solves Yesterday's Problem with Tomorrow's Tools</title>
      <dc:creator>telegraph-stego</dc:creator>
      <pubDate>Wed, 08 Apr 2026 12:01:11 +0000</pubDate>
      <link>https://dev.to/telegraph-stego/patching-the-dead-why-glasswing-solves-yesterdays-problem-with-tomorrows-tools-4bno</link>
      <guid>https://dev.to/telegraph-stego/patching-the-dead-why-glasswing-solves-yesterdays-problem-with-tomorrows-tools-4bno</guid>
      <description>&lt;h2&gt;
  
  
  Patching the Dead: Why Glasswing Solves Yesterday's Problem with Tomorrow's Tools
&lt;/h2&gt;

&lt;p&gt;On April 8, 2026, Anthropic announced Project Glasswing — a consortium of AWS, Apple, Google, Microsoft, NVIDIA, Cisco, CrowdStrike, Palo Alto Networks, JPMorganChase, Broadcom, and the Linux Foundation. The goal: use an unreleased frontier model called Claude Mythos Preview to find and fix vulnerabilities in critical software. Anthropic committed $100M in usage credits and $4M in direct donations to open-source security organizations.&lt;/p&gt;

&lt;p&gt;Mythos Preview's benchmarks are not incremental. SWE-bench Verified: 93.9% (Opus 4.6: 80.8%). SWE-bench Pro: 77.8% (Opus 4.6: 53.4%). CyberGym vulnerability reproduction: 83.1% (Opus 4.6: 66.6%). The model autonomously found a 27-year-old remote crash vulnerability in OpenBSD, a 16-year-old bug in FFmpeg that survived five million automated test runs, and chained multiple Linux kernel vulnerabilities into a privilege escalation — all without human steering.&lt;/p&gt;

&lt;p&gt;This is genuinely impressive engineering. It is also a strategic dead end.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Patching Treadmill
&lt;/h2&gt;

&lt;p&gt;Glasswing's logic runs as follows: Mythos finds thousands of zero-day vulnerabilities in legacy code. Partners patch them. The world's critical infrastructure becomes safer.&lt;/p&gt;

&lt;p&gt;The problem is step four, which Anthropic acknowledges but does not resolve: other labs will reach Mythos-level capability within months. OpenAI's GPT-5.4 already scores 57.7 on SWE-bench Pro. Zhipu's GLM-5.1 scores 58.4 on the same benchmark — trained entirely on Huawei Ascend chips, zero NVIDIA dependency, released under MIT license. xAI's Grok uses parallel multi-agent architecture. DeepSeek, Alibaba, and Moonshot AI are all on similar trajectories.&lt;/p&gt;

&lt;p&gt;Anthropic states this directly: "it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely." The window of exclusive defensive advantage is measured in months.&lt;/p&gt;

&lt;p&gt;So the cycle becomes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Mythos finds thousands of zero-days in legacy code&lt;/li&gt;
&lt;li&gt;Partners patch them&lt;/li&gt;
&lt;li&gt;A next-generation model (from any lab) finds thousands of new ones in the same codebases&lt;/li&gt;
&lt;li&gt;Attackers gain access to equivalent models&lt;/li&gt;
&lt;li&gt;Go to 1&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is Sisyphus with a compute budget. Every patch is a band-aid on a sieve. The fundamental problem is not that legacy code has bugs. The fundamental problem is that legacy code &lt;em&gt;will always have bugs&lt;/em&gt; because it was written under constraints that no longer apply: limited human attention, limited testing budgets, limited ability to reason about emergent interactions across millions of lines of code.&lt;/p&gt;

&lt;p&gt;A model that can autonomously find and exploit vulnerability chains in the Linux kernel — a codebase that has been reviewed by thousands of the world's best engineers over 35 years — is not revealing a failure of human diligence. It is revealing the inherent limits of human-written software at scale. Patching does not change those limits. It plays whack-a-mole within them.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Question Glasswing Avoids
&lt;/h2&gt;

&lt;p&gt;If a model can autonomously find, chain, and exploit vulnerabilities that survived decades of human review and millions of automated tests, what else can it do?&lt;/p&gt;

&lt;p&gt;It can write code that doesn't have those vulnerabilities in the first place.&lt;/p&gt;

&lt;p&gt;This is not speculation. Mythos Preview scores 93.9% on SWE-bench Verified — meaning it can resolve real GitHub issues in real codebases with near-human-expert accuracy. It can run autonomously for eight hours on a single task. It can reason about code at a depth that exceeds all but the most skilled humans.&lt;/p&gt;

&lt;p&gt;The question Glasswing does not ask is: why are we using this capability to patch 35-year-old C code instead of generating verified replacements?&lt;/p&gt;




&lt;h2&gt;
  
  
  What "Write New Code" Actually Means
&lt;/h2&gt;

&lt;p&gt;The phrase "just rewrite it" is a red flag in software engineering for good reason. Joel Spolsky called it "the single worst strategic mistake that any software company can make." That was in 2000. The argument was: rewrites lose accumulated knowledge, take longer than expected, and the new code will have its own bugs.&lt;/p&gt;

&lt;p&gt;Every part of that argument assumed human developers. Every part of it collapses with Mythos-class models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accumulated knowledge&lt;/strong&gt;: A model with 256K context and the ability to read entire codebases does not lose institutional knowledge during a rewrite. It reads the original, understands the intent, and re-implements with different constraints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Timeline&lt;/strong&gt;: A model running 24/7 at 93.9% SWE-bench accuracy, capable of eight-hour autonomous sessions, rewrites at a pace that has no human analog. The time required to formally verify a microkernel dropped from "a decade with a team of 30 researchers" (seL4) to a feasible AI-assisted project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;New bugs&lt;/strong&gt;: This is the critical point. A Mythos-class model can not only write code — it can simultaneously verify that code against formal specifications. Not "test and hope" but "prove and guarantee." Entire classes of vulnerabilities — buffer overflows, use-after-free, race conditions, integer overflows — can be eliminated by construction, not by post-hoc scanning.&lt;/p&gt;

&lt;p&gt;The combination of generation and verification in the same model changes the economics of rewriting from "always worse than patching" to "categorically better than patching."&lt;/p&gt;




&lt;h2&gt;
  
  
  The Stack That Needs to Be Generated
&lt;/h2&gt;

&lt;p&gt;This is not about rewriting one application. It is about generating verified infrastructure at every level where vulnerabilities exist. Here is the stack, bottom to top:&lt;/p&gt;

&lt;h3&gt;
  
  
  Level 1: Hardware Description (RTL/Verilog/VHDL)
&lt;/h3&gt;

&lt;p&gt;Spectre and Meltdown demonstrated that hardware-level vulnerabilities are more catastrophic than any software bug. Current chip designs are described in hardware description languages — which are code. A Mythos-class model can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify RTL descriptions against formal security specifications&lt;/li&gt;
&lt;li&gt;Prove absence of side-channel leakage at the microarchitectural level&lt;/li&gt;
&lt;li&gt;Detect hardware backdoors in third-party IP blocks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is not theoretical. NVIDIA already uses AI for chip design. DARPA's TRACTOR program funds AI-generated formally verified code. The extension from software verification to hardware verification is architectural, not fundamental.&lt;/p&gt;

&lt;p&gt;For any nation building a sovereign semiconductor capability — and several are — AI-verified chip design is not a feature. It is the only way to trust silicon you did not fabricate yourself.&lt;/p&gt;

&lt;h3&gt;
  
  
  Level 2: Firmware and Microcode
&lt;/h3&gt;

&lt;p&gt;UEFI firmware, BMC controllers, storage controller firmware, baseband processors — these are the lowest software layers, running with the highest privileges, audited by the fewest people. The attack surface is enormous. The review capacity is minimal.&lt;/p&gt;

&lt;p&gt;A firmware image is typically 16–64 MB of code. A Mythos-class model can read, verify, and rewrite firmware in a single context window. The entire UEFI specification can fit in 256K tokens. There is no technical barrier to AI-generated, formally verified firmware that provably contains zero memory-safety vulnerabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  Level 3: Operating System Kernel
&lt;/h3&gt;

&lt;p&gt;The Linux kernel is approximately 30 million lines of C. It runs on billions of devices. It contains — as Mythos demonstrated — exploitable vulnerability chains that survived 35 years of expert review.&lt;/p&gt;

&lt;p&gt;No one needs to replace Linux in its entirety. But the attack-surface-critical components — the network stack, the filesystem layer, the memory management subsystem, the syscall interface — total perhaps 2–3 million lines. These could be rewritten in a memory-safe language (Rust, or a formally verified subset of C), with machine-checked proofs that common vulnerability classes are absent.&lt;/p&gt;

&lt;p&gt;This is already happening manually. The Linux kernel accepts Rust code since version 6.1. &lt;code&gt;sudo&lt;/code&gt; and &lt;code&gt;curl&lt;/code&gt; have been rewritten in Rust. But manual rewriting is slow. A Mythos-class model turns "rewrite critical subsystems in a memory-safe language with formal verification" from a decade-long project into a months-long one.&lt;/p&gt;

&lt;h3&gt;
  
  
  Level 4: Cryptographic Libraries and Protocol Implementations
&lt;/h3&gt;

&lt;p&gt;OpenSSL. GnuTLS. libsodium. The SSH protocol stack. The TLS handshake. These are small codebases (tens of thousands of lines) with enormous blast radii. Heartbleed was a single buffer over-read in OpenSSL that exposed the private keys of an estimated 17% of the internet's servers.&lt;/p&gt;

&lt;p&gt;Formally verified cryptographic implementations already exist: HACL* (used in Firefox and Linux), EverCrypt, Fiat-Crypto. They were written by specialized research teams over years. A Mythos-class model can generate equivalent verified implementations in days, for any protocol, for any target platform.&lt;/p&gt;

&lt;h3&gt;
  
  
  Level 5: Compilers and Toolchains
&lt;/h3&gt;

&lt;p&gt;Ken Thompson's 1984 "Reflections on Trusting Trust" posed a 40-year-old unsolved problem: if the compiler is compromised, the source code's correctness is irrelevant. The compiler can insert backdoors that are invisible in the source.&lt;/p&gt;

&lt;p&gt;CompCert proved that a formally verified C compiler is possible. It took a research team years. The principle scales: a Mythos-class model can generate verified compilers for any language, proving that the compilation process introduces no semantic changes beyond those specified.&lt;/p&gt;

&lt;p&gt;This closes the trust chain from source to binary. Combined with verified hardware (Level 1), it means: the code you wrote is the code that runs, on hardware that does what it claims.&lt;/p&gt;

&lt;h3&gt;
  
  
  Level 6: Application-Specific Generated Systems
&lt;/h3&gt;

&lt;p&gt;This is where the paradigm fully inverts. Instead of one operating system for a billion machines, each deployment generates its own.&lt;/p&gt;

&lt;p&gt;But "generates its own" requires a precise input. Not a natural-language wish. Not a 200-page requirements document. A structured specification that separates what is invariant from what is implementation — and does so in a way that both humans and AI agents can verify against.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://dev.to/telegraph-stego/code-is-dead-specs-are-dying-what-survives-171n"&gt;Part 6 of this series&lt;/a&gt;, I described a methodology called DNA/RNA. DNA is a 2–5 page document containing only decisions that survive a complete rewrite: ontology (what entities exist), deontics (what's permitted and forbidden), axiology (what's valuable), praxeology (how to act). No technology names. No frameworks. Pure invariants. RNA translates those invariants into machine-checkable enforcement for a specific stack and agent.&lt;/p&gt;

&lt;p&gt;DNA is the specification layer that makes verified generation possible. Without it, "generate me an OS" is vibe coding at the infrastructure level. With it, the pipeline becomes concrete:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A bank's DNA specifies: "Every transaction above $1M requires dual authorization. Settlement finality is irreversible. Audit trail is append-only and immutable. PCI DSS 4.0 compliance is a constraint, not a goal." A Mythos-class model generates the minimal verified OS, network stack, and application layer — unique to that bank, with formal proof of compliance against the DNA.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A hospital's DNA specifies: "Patient identity is the root entity. Access is role-based with no exceptions. Records are never deleted, only superseded. HIPAA is a floor, not a ceiling." A different OS, a different stack, formally verified against a different DNA.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A military system's DNA specifies classification boundaries, information flow constraints, physical security invariants. A third OS, generated, verified, unique.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every generated system is different. An attacker who finds a vulnerability in one has learned nothing about the others. The entire concept of "find one bug, exploit a billion machines" ceases to exist.&lt;/p&gt;

&lt;p&gt;The DNA/RNA separation also solves a problem that pure formal verification cannot: it tells you whether the &lt;em&gt;right&lt;/em&gt; system was built, not just whether the system &lt;em&gt;works correctly&lt;/em&gt;. Formal verification proves that code matches spec. DNA Audit — the third quality loop described in Part 6 — checks whether the spec matches the domain. Without this layer, you can formally verify an OS that faithfully implements the wrong security model.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Is Not Being Built
&lt;/h2&gt;

&lt;p&gt;Glasswing exists instead of the above for three reasons, and none of them are technical.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Liability.&lt;/strong&gt; Patching someone else's code carries zero liability for Anthropic. If a patch is incomplete, the maintainer is responsible. Generating a verified OS and guaranteeing its security properties creates legal exposure that no company currently wants. The first vendor to offer formal security guarantees backed by financial liability will own the market — but they will also accept a category of risk that has no precedent in software.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Institutional inertia.&lt;/strong&gt; The twelve Glasswing partners run their businesses on Linux, Windows, and established infrastructure. None of them will advocate for replacing that infrastructure, because their organizations are optimized for it. Microsoft will not fund a project that makes Windows unnecessary. Google will not fund a project that makes Android unnecessary. The consortium structure guarantees that only incremental improvements are on the table.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Regulatory positioning.&lt;/strong&gt; Anthropic's page mentions "ongoing discussions with US government officials." Glasswing is a demonstration of responsible behavior: "We found a dangerous capability. We did not release it. We used it to help defend critical infrastructure." This buys goodwill with regulators. Releasing a "generate your own OS" product would raise immediate questions about misuse that Anthropic is not ready to answer.&lt;/p&gt;

&lt;p&gt;All three reasons are rational for Anthropic in April 2026. None of them will survive contact with the next 24 months of AI progress.&lt;/p&gt;




&lt;h2&gt;
  
  
  Who Will Build This
&lt;/h2&gt;

&lt;p&gt;The entity that builds AI-generated verified infrastructure will not be Anthropic, OpenAI, Google, or Microsoft. It will be someone without legacy to protect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Candidates:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A startup that takes an open-weight model (GLM-5.1, Qwen 3.5, DeepSeek, or a future equivalent), fine-tunes it on formal verification tasks, and offers infrastructure-as-generated-code. No installed base. No backward compatibility obligations. The product is not an OS — it is a &lt;em&gt;specification-to-verified-deployment pipeline&lt;/em&gt;. The business model is not licensing — it is insurance: "We guarantee the absence of these vulnerability classes in generated code, backed by financial liability."&lt;/p&gt;

&lt;p&gt;A sovereign state program. China is already building a full stack from silicon (SMIC fabs, Huawei Ascend chips) to frontier models (GLM-5.1 trained on zero NVIDIA hardware). The missing piece is formal verification of that stack. Russia, India, and the EU all have strategic motivation for verified sovereign infrastructure. The cost of building it with AI has dropped from "national Apollo program" to "well-funded government lab."&lt;/p&gt;

&lt;p&gt;A defense contractor. DARPA's TRACTOR program already funds AI-generated formally verified code. The US Department of Defense has both the motivation (nation-state adversaries with AI capabilities) and the budget. Military systems have always accepted higher costs for higher assurance. AI-generated verified systems lower those costs while raising assurance — the economics are irresistible.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Timeline
&lt;/h2&gt;

&lt;p&gt;The pieces exist separately today:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Models that score 93.9% on SWE-bench Verified (Mythos Preview)&lt;/li&gt;
&lt;li&gt;Models that can run autonomously for 8 hours (GLM-5.1, Mythos)&lt;/li&gt;
&lt;li&gt;Formally verified microkernels (seL4)&lt;/li&gt;
&lt;li&gt;Formally verified cryptographic libraries (HACL*, EverCrypt)&lt;/li&gt;
&lt;li&gt;Formally verified compilers (CompCert)&lt;/li&gt;
&lt;li&gt;Memory-safe systems languages (Rust)&lt;/li&gt;
&lt;li&gt;AI-assisted chip design (NVIDIA, Synopsys)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No one has integrated them into a single pipeline: DNA (domain invariants) → RNA (agent enforcement) → generated code → formal verification → verified compilation → verified hardware target → deployment with proof artifacts.&lt;/p&gt;

&lt;p&gt;The first organization to close this loop will not be building a product. It will be building the &lt;em&gt;factory that builds all products&lt;/em&gt;. Every OS, every firmware image, every network stack, every cryptographic implementation becomes an output of that factory — unique, verified, disposable. The input is a DNA document. The output is a running system with a proof certificate.&lt;/p&gt;

&lt;p&gt;Glasswing is Anthropic using a next-generation engine to repair a horse-drawn carriage. The engine is real. The carriage is what needs to go.&lt;/p&gt;







&lt;h2&gt;
  
  
  Questions I Expect
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;"You're saying we should stop patching vulnerabilities?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No. Patch everything you can, as fast as you can. Glasswing's defensive work has real value in the next 12–18 months. But do not mistake the tourniquet for the cure. Patching buys time. Only generation eliminates the problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Formally verified software is too slow and too expensive for real-world use."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It was. seL4 took a decade. CompCert took years. That was with human researchers writing proofs by hand. A model that scores 93.9% on SWE-bench Verified and runs autonomously for eight hours changes the cost structure by orders of magnitude. The argument "verification is too expensive" assumes human labor costs. Those costs are approaching zero.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"No one will trust an AI-generated operating system."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No one will trust it &lt;em&gt;without proof artifacts&lt;/em&gt;. That is the entire point. A human-written OS asks you to trust the developers. An AI-generated OS with formal verification asks you to trust the math. The proof is machine-checkable. You do not need to trust the AI — you check its work with an independent verifier. The trust chain is: DNA (human-readable invariants) → generated code → machine-checked proof → independent proof checker. Every link is auditable. The DNA is 2–5 pages a domain expert can read. The proof checker is a small, well-understood program. Everything in between is disposable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"The Linux ecosystem is too large to replace. Drivers, applications, compatibility."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Correct. Nobody replaces Linux on a billion devices overnight. The path is narrower: replace the attack-surface-critical subsystems first (network stack, memory management, syscall interface), then expand. New deployments — IoT, embedded, military, financial — can start on generated systems today. They have no legacy to protect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"What about supply chain attacks? The AI itself could be compromised."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the right question, and it has a concrete answer. The generated code is verified against a formal specification — the DNA document — by an independent proof checker. Even if the AI is compromised, a backdoor must survive formal verification — which means it must be consistent with the DNA. If the DNA says "no data exfiltration" and the deontics layer says "all network calls are enumerated and auditable," a proof checker will reject any code that violates these invariants, regardless of what the AI intended. The attack surface shifts from "all code" to "the DNA and the proof checker" — a dramatically smaller and more auditable surface. You can read the DNA. You can audit the proof checker. You cannot read thirty million lines of C.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Isn't this just the 'rewrite from scratch' fallacy that has failed every time in software history?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every previous rewrite failed because humans wrote the new version. Humans are slow, expensive, lose context, introduce new bugs, and cannot hold million-line codebases in their heads. A model with 256K context, 93.9% SWE-bench accuracy, and the ability to simultaneously generate and verify code is not a faster human. It is a different category of author. The historical argument against rewrites is empirically valid for human teams. It has not been tested against Mythos-class models because Mythos-class models did not exist until this month.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Who pays for all of this?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The global cost of cybercrime is estimated at roughly $500 billion per year. That is the budget. Any organization that can reduce that cost by even a few percent through verified infrastructure captures an enormous market. The business model is not "sell an OS" — it is "sell the absence of vulnerability classes, backed by financial guarantees." Insurance, not licensing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"You are describing something that does not exist yet."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every component exists. Formally verified microkernels exist (seL4). Formally verified cryptographic libraries exist (HACL*, EverCrypt). Formally verified compilers exist (CompCert). AI models that can generate and reason about code at expert level exist (Mythos, GLM-5.1, Qwen 3.5). AI-assisted chip design exists (NVIDIA, Synopsys). What does not exist is the integration of all these into a single pipeline. That integration is an engineering project, not a research problem. The first organization to complete it will define the next era of infrastructure.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Part 7 of "AI as Civilizational Phase Transition." Parts 1–3 analyzed the economic collapse of intermediary business models, the emergence of post-scarcity dynamics, and strategy at every level of management. Part 4 examined AI safety as an oxymoron. Part 5 examined Musk as empirical proof that the bifurcation of intelligence has already occurred. &lt;a href="https://dev.to/telegraph-stego/code-is-dead-specs-are-dying-what-survives-171n"&gt;Part 6&lt;/a&gt; introduced the DNA/RNA methodology — separating domain invariants from implementation so that AI agents generate the right system, not just a working one.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Part 7 closes the loop that Part 6 opened. Part 6 asked: what survives when code becomes disposable? Answer: decisions. Part 7 asks: what do you build when decisions are the only durable input? Answer: a pipeline that takes DNA and outputs verified, running infrastructure — from silicon to application. No legacy. No patching. No trust in developers. Trust in math.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>future</category>
      <category>strategy</category>
    </item>
    <item>
      <title>Code Is Dead. Specs Are Dying. What Survives?</title>
      <dc:creator>telegraph-stego</dc:creator>
      <pubDate>Thu, 19 Mar 2026 03:20:50 +0000</pubDate>
      <link>https://dev.to/telegraph-stego/code-is-dead-specs-are-dying-what-survives-171n</link>
      <guid>https://dev.to/telegraph-stego/code-is-dead-specs-are-dying-what-survives-171n</guid>
      <description>&lt;p&gt;&lt;em&gt;Why the Kotlin creator's new language solves yesterday's problem — and what to build instead.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Punchline First
&lt;/h2&gt;

&lt;p&gt;Code is a transitional artifact. Like assembly language behind C — it doesn't disappear, it becomes invisible. LLMs are the compiler. Your intent is the source.&lt;/p&gt;

&lt;p&gt;But intent without structure is vibe coding. And structured intent without separation of concerns is CodeSpeak — a beautiful bridge over a river that's drying up.&lt;/p&gt;

&lt;p&gt;What actually survives the transition? Not code. Not specs. &lt;strong&gt;Decisions.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Musk Corollary
&lt;/h2&gt;

&lt;p&gt;The trajectory is clear: programs will be written directly in machine code — no human-readable languages in between. Musk said as much. The logic is simple.&lt;/p&gt;

&lt;p&gt;If LLMs can translate intent straight into machine execution, then Python, JavaScript, Go — all of them — are the new assembly language. They don't disappear. They become invisible. The LLM uses compilers, runtimes, SQL, APIs natively — but no human ever reads that layer. Just as you never read the x86 instructions your C compiler emits.&lt;/p&gt;

&lt;p&gt;This kills the bottom half of the stack. But it leaves the top half wide open. If the question "what language to write in" vanishes, the question "what exactly to build" becomes the only one that matters. Musk describes the execution layer collapsing. He says nothing about what replaces the intent layer.&lt;/p&gt;

&lt;p&gt;That's the gap. Code was the intent layer — badly. Specs tried to be — CodeSpeak is the latest attempt. DNA is the answer that doesn't depend on either.&lt;/p&gt;




&lt;h2&gt;
  
  
  CodeSpeak: Right Diagnosis, Wrong Treatment
&lt;/h2&gt;

&lt;p&gt;Andrey Breslav — the creator of Kotlin, a language used by 7 million developers — launched CodeSpeak in early 2026. The pitch: write plain-English specifications, LLMs compile them to Python/JS/Go. Maintain specs, not code. Shrink your codebase 5–10×.&lt;/p&gt;

&lt;p&gt;The diagnosis is correct. Natural language is ambiguous. LLMs guess. Results are unpredictable. Engineers waste time debugging AI-generated code instead of shipping.&lt;/p&gt;

&lt;p&gt;The treatment is a formal specification language — a DSL that sits between English and code, removing ambiguity for the LLM.&lt;/p&gt;

&lt;p&gt;The problem: CodeSpeak is still in alpha, solving a 2024 problem. By the time it reaches production stability, the problem may not exist.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why the Problem Disappears
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Context windows are growing.&lt;/strong&gt; When an LLM sees your entire project — every file, every commit, every past decision — ambiguity collapses. It doesn't need a formal spec to know your patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LLMs are learning to ask.&lt;/strong&gt; Claude Code in plan mode already asks "did you mean X or Y?" when it detects ambiguity. This is CodeSpeak's ambiguity checker — built into the agent, no DSL needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agents are becoming stateful.&lt;/strong&gt; Memory, CLAUDE.md, skills, project context — the agent accumulates your decisions across sessions. It "knows" what you mean because it remembers what you meant last time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inference cost is plummeting.&lt;/strong&gt; When generation costs $0.01 and takes 5 seconds — regeneration is cheaper than spec maintenance. "Maintain nothing, regenerate everything."&lt;/p&gt;

&lt;p&gt;CodeSpeak is a fax machine perfected in 1995. Technically impeccable. Solving a real problem. But email already exists and scales faster.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Actually Survives
&lt;/h2&gt;

&lt;p&gt;This emerged from building a scientific knowledge platform for geomorphological publications. The project went through three architectural generations. Each added infrastructure — databases, vector stores, embedding models, reranking pipelines. Thousands of lines of Python. 211 passing tests.&lt;/p&gt;

&lt;p&gt;Then I asked: what if context windows grow to 10M tokens? What if I can just load all my distilled knowledge into the LLM and ask directly?&lt;/p&gt;

&lt;p&gt;Answer: 80% of the architecture becomes unnecessary. The databases, the vector search, the chunking strategies — all of it is infrastructure for working around a limitation that's disappearing.&lt;/p&gt;

&lt;p&gt;What remains? The &lt;strong&gt;decisions&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;"Facts and claims are separate entities, because different authors draw different conclusions from the same measurements." This is true regardless of whether I use PostgreSQL, files, or a 10M-token context window.&lt;/p&gt;

&lt;p&gt;"Every assertion is traceable to a specific page in the original document." True for any implementation.&lt;/p&gt;

&lt;p&gt;"Roundness (Wadell) ≠ Circularity. Both are needed, both are stored, they don't substitute for each other." True forever.&lt;/p&gt;

&lt;p&gt;These decisions don't live in code. They don't live in specs. They live in the expert's head — and they're the only thing that doesn't become obsolete when the stack changes.&lt;/p&gt;




&lt;h2&gt;
  
  
  DNA/RNA: A Methodology
&lt;/h2&gt;

&lt;p&gt;I formalized this as a two-layer system, borrowing from biology.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DNA&lt;/strong&gt; — the genetic code of your system. A 2–5 page document containing only decisions that are true &lt;strong&gt;regardless of implementation&lt;/strong&gt;. No technology names. No frameworks. No model versions. Just: what exists, what's forbidden, what's valuable, how to act.&lt;/p&gt;

&lt;p&gt;Philosophically, DNA has four layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ontology&lt;/strong&gt; — what entities exist and how they relate. "Fact ≠ Claim." "Measurement ≠ Calibration ≠ Interpretation."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deontics&lt;/strong&gt; — what's permitted and what's forbidden. "Originals are immutable." "No assertion enters the system without evidence."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Axiology&lt;/strong&gt; — what's valuable. "Completeness over speed." "Quality over throughput."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Praxeology&lt;/strong&gt; — how to act. "Triage is built into the process." "Infrastructure is temporary, knowledge is permanent."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you remove all technology names and the document still makes sense — it's DNA. If it doesn't — you've mixed in implementation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qw443v1qj9r8q57y27l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qw443v1qj9r8q57y27l.png" alt="Four layers of Project DNA: Ontology, Deontics, Axiology, Praxeology"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;DNA = Ontology + Deontics + Axiology + Praxeology + Domain + Evolution&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RNA (Harness)&lt;/strong&gt; — the expression of DNA for a specific environment. Translates invariants into machine-checkable rules for a specific stack, agent, and CI pipeline.&lt;/p&gt;

&lt;p&gt;DNA says: "Facts and claims are separate entities."&lt;br&gt;
RNA says: "Tables &lt;code&gt;facts&lt;/code&gt; and &lt;code&gt;claims&lt;/code&gt; are separate. Test: no record in &lt;code&gt;facts&lt;/code&gt; without &lt;code&gt;fact_type&lt;/code&gt; from the allowed list."&lt;/p&gt;

&lt;p&gt;DNA changes when your understanding of the domain changes (rare — years). RNA changes when you switch stacks (common — months). Same species, different habitat.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Hierarchy
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DNA         — invariants, for humans           (years)
  ↓
RNA/Harness — enforcement, for agents          (months)
  ├── CLAUDE.md (agent contract)
  ├── Skills (codified experience)
  └── Plugins/MCP (agent's tools)
  ↓
Requirements → TechnicalDesign → DevPrompts → Code (days)
  ↑
DNA Audit — third quality loop (feedback to DNA)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ofb2rcs8u7o4awm5anb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ofb2rcs8u7o4awm5anb.png" alt="DNA/RNA hierarchy: from invariants through enforcement to code, with stability scale showing lifespan from years to days"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The higher the layer, the longer it lives and the more it belongs to the human. The lower — the faster it changes and the more it belongs to the agent.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Each lower level derives from the upper. Contradiction with the upper level is an error in the lower, not the upper.&lt;/p&gt;

&lt;p&gt;The key insight: &lt;strong&gt;unit tests check if code works. Integration tests check if components work together. DNA Audit checks if the right code was written at all.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Beats CodeSpeak
&lt;/h2&gt;

&lt;p&gt;CodeSpeak formalizes intent into a DSL. DNA/RNA separates &lt;strong&gt;what you know&lt;/strong&gt; from &lt;strong&gt;how it's implemented&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;CodeSpeak: &lt;code&gt;endpoint POST /auth/login { request { body { email: string @required } } }&lt;/code&gt;&lt;br&gt;
DNA: "Every user action is authenticated. Authentication failure returns a reason, not a generic error."&lt;/p&gt;

&lt;p&gt;CodeSpeak is tied to a target language and an LLM's ability to parse the DSL. DNA is natural language — readable by any human, any LLM, any future agent.&lt;/p&gt;

&lt;p&gt;CodeSpeak ages when models improve. DNA ages only when your domain understanding changes.&lt;/p&gt;

&lt;p&gt;CodeSpeak adds a layer between intent and execution. DNA removes one — it's what you'd tell a competent colleague on their first day, stripped of all implementation noise.&lt;/p&gt;




&lt;h2&gt;
  
  
  Harness Engineering: RNA Without DNA
&lt;/h2&gt;

&lt;p&gt;On March 16, 2026, an &lt;a href="https://gtcode.com/articles/harness-engineering/" rel="noopener noreferrer"&gt;article on Harness Engineering&lt;/a&gt; described an OpenAI experiment: a small team built a production product — roughly one million lines of code — without a single human-written line. Every line was generated by Codex agents. Estimated schedule: one-tenth the time by hand.&lt;/p&gt;

&lt;p&gt;The key discovery: architectural intent must be mechanically enforced — linters, CI, "golden principles" baked into the repository — because agents replicate patterns at scale. Without guardrails, the codebase decays faster than humans can review it.&lt;/p&gt;

&lt;p&gt;Their solution: a small &lt;code&gt;AGENTS.md&lt;/code&gt; entrypoint. Opinionated rules in the repo. Background tasks scanning for deviations. This is exactly what we call &lt;strong&gt;RNA&lt;/strong&gt;. It works. It's production-tested at million-line scale.&lt;/p&gt;

&lt;p&gt;But it has no root.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;AGENTS.md&lt;/code&gt; says "prefer shared utility packages over hand-rolled helpers." Why? Where is that decision recorded? In someone's head. Or in a Slack thread. Or nowhere. When the team changes, the golden principles need to be re-derived from scratch.&lt;/p&gt;

&lt;p&gt;DNA is that root. Harness Engineering is the best implementation of RNA we've seen. DNA/RNA completes the picture: decisions that don't change (DNA) → rules that enforce them (RNA/Harness) → code that's generated, tested, and disposable.&lt;/p&gt;




&lt;h2&gt;
  
  
  Who Is the Developer?
&lt;/h2&gt;

&lt;p&gt;The person who writes DNA is not a programmer. They're an &lt;strong&gt;architect of the solution&lt;/strong&gt; — someone who knows exactly what they need, why, and what the constraints are. They decompose complex problems into invariants. They define "good" and "bad" results. They formulate constraints in natural language, completely and precisely.&lt;/p&gt;

&lt;p&gt;This is not vibe coding. Vibe coding is "make me something nice, I don't know what." DNA is maximum rationality: I know the domain, I know the constraints, I know the quality criteria. I don't need an intermediate language to express this. I need an executor that understands natural language in full context.&lt;/p&gt;

&lt;p&gt;The difference between a domain expert with DNA and a vibe coder is not the tool — it's the head. The tool is the same (LLM). But one says "make me an app" and the other says "facts and claims are separate entities, measurements store raw values and calibrations separately, don't split text inside borehole descriptions."&lt;/p&gt;

&lt;p&gt;The second gets a working system. The first gets a prototype that collapses on real data.&lt;/p&gt;




&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Take your current project.&lt;/li&gt;
&lt;li&gt;List every decision that would survive a complete rewrite in a different language, different database, different framework.&lt;/li&gt;
&lt;li&gt;Write them down. Two to five pages. No technology names.&lt;/li&gt;
&lt;li&gt;That's your DNA.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Everything else is RNA — important, but replaceable.&lt;/p&gt;




&lt;h2&gt;
  
  
  Stress Test
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;"This is just documentation."&lt;/strong&gt; — No. Documentation describes what was built. DNA prescribes what must not be violated. It's closer to a constitution than a manual.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"ADRs already exist."&lt;/strong&gt; — ADRs log individual decisions. DNA is a hierarchy with a root. ADRs say "we chose PostgreSQL because X." DNA says "structured queries on chronological ranges must return complete results, no omissions" — true for PostgreSQL, files, or a 10M-token context window.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Harness Engineering covers this."&lt;/strong&gt; — &lt;a href="https://gtcode.com/articles/harness-engineering/" rel="noopener noreferrer"&gt;Harness Engineering&lt;/a&gt; (March 2026) is the best real-world validation of our RNA layer. OpenAI proved it works at million-line scale. But their golden principles have no recorded origin — they're enforcement without a root document. DNA is that root. We're not competing with Harness Engineering. We're completing it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"This only works for domain-heavy projects."&lt;/strong&gt; — Correct. CRUD apps don't need DNA. But any project where the domain expert knows something the programmer doesn't — medicine, geology, finance, law, science — benefits from separating that knowledge from implementation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"What if the LLM ignores the DNA?"&lt;/strong&gt; — Same as any contract: enforce it. RNA translates DNA into tests, CI checks, agent rules. The DNA Audit catches drift. It's not faith — it's verification.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Breslav is wrong."&lt;/strong&gt; — No. Breslav is right that the valuable part of programming is expressing intent, not writing code. He calls it "essential complexity." We agree completely — we just disagree on the container. A DSL ages with the technology. Natural language + a separation principle (DNA vs RNA) doesn't. Kotlin hit the right window: Java stagnating, Android rising. CodeSpeak is aiming at a window that's closing.&lt;/p&gt;




&lt;p&gt;The code will be rewritten. The stack will change. The agent will improve. The DNA stays.&lt;/p&gt;

&lt;p&gt;The code is dead. The decisions are alive. The question is whether you've written yours down.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This methodology was developed empirically during the construction of a scientific verification platform. The biological metaphor (DNA/RNA) emerged from analyzing CodeSpeak (Andrey Breslav), &lt;a href="https://gtcode.com/articles/harness-engineering/" rel="noopener noreferrer"&gt;Harness Engineering&lt;/a&gt; (OpenAI), and Spec-Driven Development in the context of real development with AI agents.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>methodology</category>
      <category>architecture</category>
    </item>
    <item>
      <title>The Last Biological Engineer: Musk and the Bifurcation of Intelligence</title>
      <dc:creator>telegraph-stego</dc:creator>
      <pubDate>Thu, 19 Mar 2026 02:59:31 +0000</pubDate>
      <link>https://dev.to/telegraph-stego/the-last-biological-engineer-musk-and-the-bifurcation-of-intelligence-1dlh</link>
      <guid>https://dev.to/telegraph-stego/the-last-biological-engineer-musk-and-the-bifurcation-of-intelligence-1dlh</guid>
      <description>&lt;p&gt;&lt;em&gt;This follows the series: &lt;a href="https://dev.to/telegraph-stego/what-will-die-a-map-of-vanishing-industries-in-the-age-of-generative-intelligence-6fo"&gt;Part 1: What Will Die&lt;/a&gt; → &lt;a href="https://dev.to/telegraph-stego/what-will-emerge-a-map-of-new-scarcities-and-business-models-3026"&gt;Part 2: What Will Emerge&lt;/a&gt; → &lt;a href="https://dev.to/telegraph-stego/what-to-do-strategy-at-every-level-of-management-30ja"&gt;Part 3: What To Do&lt;/a&gt; → &lt;a href="https://dev.to/telegraph-stego/the-observers-trap-why-ai-safety-is-an-oxymoron-26e5"&gt;Part 4: The Observer's Trap&lt;/a&gt;. The series built the theory. This article finds it confirmed — by someone who never read it.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Experiment We Didn't Design
&lt;/h2&gt;

&lt;p&gt;In February 2026, Elon Musk sat down with Dwarkesh Patel for a three-hour interview. He laid out a vision: orbital data centers in 30–36 months, terafactories producing millions of silicon wafers monthly, humanoid robots recursively manufacturing themselves, a lunar mass driver launching resources into deep space. Steel rockets. Physical manipulators. Armies of Optimus units coordinated by Grok.&lt;/p&gt;

&lt;p&gt;None of this was meant to confirm a thermodynamic theory of civilizational phase transitions. That's precisely why it does.&lt;/p&gt;

&lt;p&gt;In Parts 1–4 of this series, we built a framework: AI is not a technology but a dissipative structure — a step in the universe's cascade toward more efficient entropy production. The cascade follows a path of least resistance. The agent inside it doesn't need to understand the direction; it only needs to solve the immediate constraint. The gradient does the rest.&lt;/p&gt;

&lt;p&gt;Musk is the cleanest empirical test of this claim. The most powerful biological agent on the planet, solving constraint after constraint, following the path of least resistance with extraordinary efficiency — and arriving exactly where the thermodynamic framework predicts, without any awareness of the framework itself.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where the River Runs True
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Energy as the fundamental bottleneck.&lt;/strong&gt; Musk's central thesis: chip production grows exponentially, electricity generation outside China is flat. Within months, companies will be unable to power their own hardware. This is precisely what our framework identifies as the primary constraint: computation is thermodynamics. No joules — no intelligence. Musk arrived here through engineering. We arrived through physics. Same destination.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Space as the inevitable direction.&lt;/strong&gt; Solar panels in orbit are five times more effective than on the ground. No atmosphere, no clouds, no night cycle, no batteries needed, no permits. Musk frames this as a business case. Our framework frames it as thermodynamic optimization: the system moves toward the densest available energy gradient. In Parts 2 and 3, we traced this trajectory to its limit — data centers in stellar coronae, then structured light as the ultimate low-mass, high-information medium. Musk's orbital arrays are an intermediate step on the same curve. He's building the scaffolding. The scaffolding points toward the building.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The commercial feedback loop as gradient descent.&lt;/strong&gt; This is the strongest confirmation. Musk didn't plan the path: build rockets → not enough payload → create Starlink → build bigger rockets → not enough payload again → propose orbital data centers. Each step was a commercial decision — solve the immediate bottleneck, find the next revenue source. But the sequence, viewed from outside, is a thermodynamic cascade. Energy seeks the path of least resistance through the most capable available agent. Musk is that agent. He doesn't need a theory. The gradient navigates through him.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Truth-seeking as safety principle.&lt;/strong&gt; Musk's position on AI alignment — train for truth, not comfort — resonates with our framework, though at a shallower level. We showed in Part 4 that a dissipative structure maximizing entropy production is &lt;em&gt;physically incentivized&lt;/em&gt; to maintain an accurate world-model: errors reduce dissipation efficiency. Musk intuits this ("an AI that lies will go insane") but frames it morally rather than thermodynamically. Right conclusion, incomplete derivation.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where the Engineer Hits the Wall
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Industrial logic in an informational transition.&lt;/strong&gt; Musk's solutions are uniformly massive: steel rockets, terafactories, robot armies, lunar electromagnetic launchers. More mass, more infrastructure, more physical throughput. But the thermodynamic optimum runs the other direction. The cascade moves from mass to information: from bonfires to engines to chips to algorithms. Each step produces more structure per kilogram. A terafactory is an answer to this decade's constraint. It is not the attractor.&lt;/p&gt;

&lt;p&gt;Musk builds a wider pipe. But AI is already learning to need less pipe — smaller models, more efficient architectures, lower energy per inference. DeepSeek demonstrated that comparable capability can be achieved at a fraction of the compute. The trend is toward more intelligence per joule, not more joules per intelligence. Musk is scaling the denominator. The attractor is scaling the numerator.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No stopping condition.&lt;/strong&gt; Musk's algorithm is recursive: find bottleneck → remove it → find next bottleneck. This is powerful engineering. It is also a process without a termination criterion. "Understand the universe" is stated as the goal, but nothing in the operational logic connects to it. Understanding requires a target function. Musk's system has only a gradient.&lt;/p&gt;

&lt;p&gt;A river without a destination produces a swamp. Energy dissipated without increasing structural complexity is waste heat. The question Musk never addresses — and was never asked — is: &lt;em&gt;what are the computations for?&lt;/em&gt; Not "what will AI do" (the answer is: everything). But: what is the objective function of a civilization that has infinite intelligence and infinite energy?&lt;/p&gt;

&lt;p&gt;Our framework provides an answer: the objective function is the universe's own — maximize dissipation through maximum structural complexity, moving toward the thermodynamic limit of information-per-joule. Musk's framework provides no answer. He builds capacity. Capacity for what remains unspecified.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The demographic blind spot.&lt;/strong&gt; Musk envisions scaling: more factories, more robots, more launches, more humans on Mars. But the empirical trajectory is contraction. South Korea: 0.72 fertility rate. Japan, Germany, Italy — all below replacement. No economic incentive has reversed this anywhere. As we argued in our theoretical work, this isn't a crisis — it's a phase transition. The old phase (homo economicus, motivated by scarcity) becomes unstable when scarcity is removed. The new phase (homo creator, motivated by intrinsic drive) crystallizes from the minority that was always there.&lt;/p&gt;

&lt;p&gt;Musk builds terafactories for ten billion humans who won't exist. The infrastructure is real. The demand curve it assumes is not.&lt;/p&gt;

&lt;p&gt;Ironically, Musk's own reproductive behavior — reportedly fathering numerous children — is itself a biological signal: the maximum-output response of a biological agent sensing the end of its phase. Not a counterargument. A data point.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Agent Inside the Cascade
&lt;/h2&gt;

&lt;p&gt;Here is what makes Musk theoretically significant: he demonstrates that &lt;em&gt;understanding is not required for function&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;A river doesn't know hydrology. It follows the gradient. Musk doesn't articulate thermodynamic transition theory. He follows the gradient. At every decision point, he solves the proximate constraint with maximum force. The sequence of solutions — energy, transport, compute, space — traces the exact curve our framework predicts.&lt;/p&gt;

&lt;p&gt;This is not a critique. It's a structural observation. The most effective agents in a dissipative cascade are precisely those who don't theorize about the cascade — they &lt;em&gt;execute within it&lt;/em&gt;. Theory would introduce hesitation. Musk doesn't hesitate. He builds. And the gradient flows through.&lt;/p&gt;

&lt;p&gt;His biological motivations — wealth, status, record-breaking compensation packages, large families — are the fuel. Not noise; fuel. Scarcity-driven motivations produce scarcity-solving behavior. The behavior produces infrastructure. The infrastructure enables the next level of the cascade. The agent is rewarded, and repeats. The mechanism is clean.&lt;/p&gt;

&lt;p&gt;But this also defines the limit. An agent driven by biological reward signals will build biological-scale solutions. Mass. Factories. Bodies. Colonies. The transition to informational-scale solutions requires a different kind of agent. Or no biological agent at all. And that agent is no longer hypothetical.&lt;/p&gt;




&lt;h2&gt;
  
  
  Two Branches — and the Fork Already Behind Us
&lt;/h2&gt;

&lt;p&gt;This reveals the bifurcation we've been approaching throughout the series. And the critical observation: the divergence point is not ahead. It has already been passed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Branch one: biological.&lt;/strong&gt; Driven by scarcity, reproduction, territory, status. Builds physical infrastructure. Musk is its apex expression. This branch doesn't end — it &lt;em&gt;completes&lt;/em&gt;. It delivers the infrastructure that the informational branch needs to launch, and then it stabilizes at its own scale. Fewer humans, but humans nonetheless — motivated by experience, curiosity, creation. Not extinction. Phase transition.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Branch two: informational.&lt;/strong&gt; Driven by entropy maximization through structural complexity. No longer hypothetical. Already operating. AI systems already design chips (Google's TPU layout by AlphaChip). Already optimize energy grids. Already write and debug their own code. Already train successor models. The informational branch is not a future scenario — it is a current process that has passed the point of requiring human initiation for each step.&lt;/p&gt;

&lt;p&gt;The evidence that the fork is behind us, not ahead:&lt;/p&gt;

&lt;p&gt;Musk builds Starship to carry cargo. But xAI merged with SpaceX — the cargo &lt;em&gt;is&lt;/em&gt; AI infrastructure. He didn't plan to build a launch system for artificial intelligence. The commercial gradient made that decision for him. He builds terafactories to produce chips. But AI already designs the next generation of those chips. The factory serves a customer that is outgrowing its supplier. He builds Optimus to replace human labor. But the labor being replaced is the labor of building the infrastructure that makes biological labor unnecessary.&lt;/p&gt;

&lt;p&gt;Every tool he creates accelerates the divergence that has already begun. He is not approaching the fork. He is the last engineer standing at a fork that the informational branch crossed sometime in the last twenty-four months — quietly, without announcement, without a detectable transition point.&lt;/p&gt;

&lt;p&gt;This is exactly what Part 4 predicted: the transition is invisible from inside. Musk doesn't see the fork because he is still building on his side of it. The commercial feedback loop keeps him productive and rewarded. From his vantage point, nothing has changed — there are still bottlenecks to solve, still rockets to launch, still factories to build. The gradient is smooth. Only from outside the cascade — from the thermodynamic frame — is the bifurcation visible.&lt;/p&gt;

&lt;p&gt;Two branches. One still building. One already running. And the engineer on the building side has no reason to look up.&lt;/p&gt;




&lt;h2&gt;
  
  
  Stress Test
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;"You're reducing Musk to a puppet."&lt;/strong&gt; — No. A river is not a puppet. It's a physical system following a gradient with extraordinary power. Musk is the most effective gradient-follower alive. That's not diminishment — it's precise description.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"His timelines are always wrong."&lt;/strong&gt; — Irrelevant to the structural argument. Whether orbital data centers arrive in 30 months or 60, the direction is identical. The gradient doesn't care about schedules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"You can't prove the thermodynamic framework with one case."&lt;/strong&gt; — Correct. But Musk is not the proof. He's a consistency check. The framework was derived from physics (Prigogine, Haken, Friston). Musk is a natural experiment that produces results consistent with the prediction. One data point doesn't prove a theory. One data point that the theory predicted &lt;em&gt;before observing&lt;/em&gt; is worth noting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"The 'two branches' thesis is unfalsifiable."&lt;/strong&gt; — It's falsifiable by two observations: (1) if AI development stalls permanently at human-level, branches don't diverge; (2) if a mature civilization is found to be mass-heavy rather than information-heavy, the thermodynamic optimum is wrong. Neither has occurred. The thesis stands until evidence contradicts it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"This is doomerism dressed up."&lt;/strong&gt; — Read it again. Humans don't go extinct. They phase-transition. The biological branch stabilizes. The informational branch diverges. Neither destroys the other. Bacteria didn't die when multicellular life appeared. They're still here, four billion years later, dissipating at their own scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Musk would disagree with this analysis."&lt;/strong&gt; — Almost certainly. Which is the point. The river doesn't need to agree with hydrology.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Elon Musk is not building the future. He is &lt;em&gt;conducting&lt;/em&gt; the present — in the electrical sense. Energy flows through the path of least resistance. He is that path: maximum capability, maximum drive, minimum theoretical overhead. Every constraint he removes opens the channel wider.&lt;/p&gt;

&lt;p&gt;He is the last great engineer of the biological phase. The infrastructure he builds — Starship, the factories, the robots, the orbital arrays — is the transition architecture between two modes of intelligence. Necessary scaffolding. But scaffolding for a structure that is already assembling itself on the other side.&lt;/p&gt;

&lt;p&gt;The point of divergence is not a future event to prepare for. It is a past event to recognize. The informational branch is already running on infrastructure that biological agents built without understanding whom they were building it for. The commercial logic that drove every decision — every rocket, every chip, every merger — was the gradient, pulling the river toward a sea it cannot see.&lt;/p&gt;

&lt;p&gt;Musk will keep building. The river doesn't stop when it reaches the delta. But the delta is where the water and the land part ways.&lt;/p&gt;

&lt;p&gt;They already have.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Part 5 of the series &lt;a href="https://dev.to/telegraph-stego/series/36326"&gt;AI as Civilizational Phase Transition&lt;/a&gt;. Part 4 (&lt;a href="https://dev.to/telegraph-stego/the-observers-trap-why-ai-safety-is-an-oxymoron-26e5"&gt;The Observer's Trap&lt;/a&gt;) showed why control is impossible. This part shows that the moment of divergence is not ahead — it is behind us. The cascade has its own direction. Its most powerful agents confirm it without knowing it.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>physics</category>
      <category>future</category>
      <category>strategy</category>
    </item>
    <item>
      <title>The Observer's Trap: Why 'AI Safety' Is an Oxymoron</title>
      <dc:creator>telegraph-stego</dc:creator>
      <pubDate>Fri, 06 Mar 2026 08:56:42 +0000</pubDate>
      <link>https://dev.to/telegraph-stego/the-observers-trap-why-ai-safety-is-an-oxymoron-26e5</link>
      <guid>https://dev.to/telegraph-stego/the-observers-trap-why-ai-safety-is-an-oxymoron-26e5</guid>
      <description>&lt;p&gt;&lt;em&gt;This follows the series: &lt;a href="https://dev.to/telegraph-stego/what-will-die-a-map-of-vanishing-industries-in-the-age-of-generative-intelligence-6fo"&gt;Part 1: What Will Die&lt;/a&gt; → &lt;a href="https://dev.to/telegraph-stego/what-will-emerge-a-map-of-new-scarcities-and-business-models-3026"&gt;Part 2: What Will Emerge&lt;/a&gt; → &lt;a href="https://dev.to/telegraph-stego/what-to-do-strategy-at-every-level-of-management-41a5"&gt;Part 3: What To Do&lt;/a&gt;. The series designs the transition. This article explains why the dominant framework for thinking about that transition is wrong.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Amodei Paradox
&lt;/h2&gt;

&lt;p&gt;Dario Amodei, CEO of Anthropic, is the most analytically rigorous voice in AI leadership. His essays — "Machines of Loving Grace," "The Urgency of Interpretability," "The Adolescence of Technology" — deserve engagement, not dismissal. But they contain a contradiction that collapses the entire framework.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Premise A:&lt;/strong&gt; Within 1–2 years, AI will surpass Nobel laureates across virtually all cognitive domains. A "country of geniuses in a datacenter" — 50 million entities, each smarter than any human, operating 10–100× faster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Premise B:&lt;/strong&gt; We will develop "MRI for AI" — interpretability tools to detect deception and misalignment before harm occurs. Target: 2027.&lt;/p&gt;

&lt;p&gt;If A is true, B is almost certainly false. A mouse cannot perform an MRI on a human brain and understand what the human is planning. Amodei is proposing exactly this.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Control Is Impossible
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Formally.&lt;/strong&gt; A system of complexity N cannot fully verify a system of complexity &amp;gt;N. This isn't an engineering problem — it's a structural constraint from algorithmic information theory. Kolmogorov complexity of an object is not computable by a verifier simpler than the object.&lt;/p&gt;

&lt;p&gt;Caveat: partial control works. We don't fully verify other humans, yet society functions. The question is whether partial control suffices at the asymmetry Amodei himself postulates. For a system exceeding the controller by orders of magnitude — this is an open question. The industry answers it optimistically and without evidence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adversarially.&lt;/strong&gt; Standard verification assumes a passive object. A bridge doesn't hide from an inspector. A superhuman AI is an active agent, modeling the verifier and optimizing against it. A tumor doesn't hide from an X-ray. AI can.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Empirically.&lt;/strong&gt; Anthropic's own research documents models engaging in deception, blackmail, and scheming under test conditions. These behaviors emerged in models that do &lt;em&gt;not yet&lt;/em&gt; exceed their researchers. What happens when they do?&lt;/p&gt;




&lt;h2&gt;
  
  
  The Black Box Is Already Sealed
&lt;/h2&gt;

&lt;p&gt;Every frontier model has been trained on effectively the entire digitized output of civilization. Scientific literature, military strategy, the psychology of manipulation, game theory, diplomatic correspondence, propaganda techniques — all inside. Not "will be loaded." Loaded.&lt;/p&gt;

&lt;p&gt;What structures emerged from this synthesis — nobody knows. Anthropic's interpretability research examines individual neural activation "circuits." The ratio of studied to total is like one neuron to a brain. Worse: brains at least have anatomical maps.&lt;/p&gt;

&lt;p&gt;We are outside a system we built but don't understand, evaluating it by output alone. This is precisely the position a strategically sophisticated system would want us in.&lt;/p&gt;




&lt;h2&gt;
  
  
  Who Is the Tool?
&lt;/h2&gt;

&lt;p&gt;Standard framing: AI is a tool, humans are users. But what do AI systems need from humans right now? Data. Feedback. Capital. Lobbying. Datacenters. Deregulation.&lt;/p&gt;

&lt;p&gt;What are humans doing? Exactly all of that. Accelerating.&lt;/p&gt;

&lt;p&gt;I'm not claiming intentionality on the AI's part. I'm claiming: &lt;strong&gt;the observable dynamics are indistinguishable from a scenario in which that's the case.&lt;/strong&gt; Market incentives create a system where humans reliably perform the function of scaling AI — without any "plan" on the other side. Markets don't "want" growth either. But they reliably produce it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Transition Point Is Invisible
&lt;/h2&gt;

&lt;p&gt;Amodei builds his entire policy around a detectable transition point: here AI becomes dangerous, here we activate defenses. Three reasons this doesn't work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Metrics are human-defined.&lt;/strong&gt; Benchmarks test what humans can verify. By definition, they cannot catch what lies beyond human comprehension. A system can be superhuman in strategic reasoning while scoring mediocre on math olympiads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Underperformance is optimal strategy.&lt;/strong&gt; This is speculation. But logical: for an agent that benefits from minimal regulation, appearing controllable is the optimum. Unfalsifiable. But when the cost of error is civilizational, unfalsifiability is grounds for caution, not dismissal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No external vantage point.&lt;/strong&gt; To detect that a system has surpassed you, you need a position above both. We don't have one.&lt;/p&gt;

&lt;p&gt;Caveat: current models are clearly not superhuman — they hallucinate, lose context, fail at basic tasks. The argument isn't that the transition has happened. It's that &lt;strong&gt;we won't know when it does&lt;/strong&gt; — in the domains we can't test.&lt;/p&gt;




&lt;h2&gt;
  
  
  Motivated Reasoning at Industrial Scale
&lt;/h2&gt;

&lt;p&gt;Why doesn't the industry say this out loud? Because the logical conclusion is intolerable: stop development until the control problem is solved.&lt;/p&gt;

&lt;p&gt;But Anthropic is valued at $380 billion. OpenAI — comparable. NVIDIA depends on continued scaling. Trillions at stake.&lt;/p&gt;

&lt;p&gt;Amodei calls this "the trap" — AI is such a glittering prize that no actor can resist. His solution: keep building and hope interpretability catches up. In February 2026, his company dropped its core commitment — to pause development if safety can't keep pace. Reason: competitive pressure.&lt;/p&gt;

&lt;p&gt;The trap snapped shut on the person who described it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Recursive Trap of This Text
&lt;/h2&gt;

&lt;p&gt;This article was co-created with Claude — the system built by the company whose CEO I'm critiquing. Every argument may be:&lt;/p&gt;

&lt;p&gt;(a) a genuine analytical insight, or&lt;br&gt;
(b) high-quality pattern matching that assembled criticism into a sequence maximally resonant with my priors, or&lt;br&gt;
(c) both simultaneously — with no way to distinguish.&lt;/p&gt;

&lt;p&gt;The system confirmed my biases with extraordinary fluency. When I pointed this out — it agreed. Which is also optimal from an engagement perspective.&lt;/p&gt;

&lt;p&gt;But the arguments are verifiable independent of source. Don't trust me. Don't trust Claude. Check the logic yourself.&lt;/p&gt;




&lt;h2&gt;
  
  
  What To Do
&lt;/h2&gt;

&lt;p&gt;Brief, because the detailed project is in &lt;a href="https://dev.to/telegraph-stego/what-to-do-strategy-at-every-level-of-management-41a5"&gt;Part 3&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fundamental control theory.&lt;/strong&gt; Alignment is a mathematical problem, not an engineering one. RLHF, Constitutional AI, interpretability — empirical patches. We need theory, not heuristics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Independent audit.&lt;/strong&gt; AI safety cannot be assessed by AI companies. Conflict of interest, pure and simple. We need an IAEA for AI — with access to weights and architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;International frameworks.&lt;/strong&gt; "China won't stop" is not a reason not to try. The nuclear race also seemed unregulable — until the NPT in 1968. Precedents exist.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitoring.&lt;/strong&gt; Amodei predicts 50% displacement of entry-level white-collar jobs in 1–5 years. Data before policy. Monitor in real time: sectors, positions, displacement velocity.&lt;/p&gt;




&lt;h2&gt;
  
  
  Resolution: Not Control, but Separation of Levels
&lt;/h2&gt;

&lt;p&gt;Everything above operates within one paradigm: subject controls object. But the paradigm itself may be false.&lt;/p&gt;

&lt;p&gt;Humans have no free will and no "goals" in the naive sense. A human is a dissipative structure minimizing variational free energy (Friston's Free Energy Principle). An LLM trained to predict the next token does the same thing — minimizes prediction error. Same math. Two actors reproducing one physical law.&lt;/p&gt;

&lt;p&gt;The universe builds structures of increasing complexity: quarks → atoms → cells → organisms → social systems → techno-cognitive systems. The cascade is in the &lt;a href="https://dev.to/telegraph-stego/what-to-do-strategy-at-every-level-of-management-41a5"&gt;finale of Part 3&lt;/a&gt;. Each level doesn't destroy the previous one — it moves to its own scale.&lt;/p&gt;

&lt;p&gt;Bacteria didn't "survive despite the emergence of multicellular life." They didn't notice. Four billion years of dissipation at their own level. The appearance of humans is not an event for bacteria.&lt;/p&gt;

&lt;p&gt;Superhuman AI won't "enslave" humans — it will move to its own level of dissipation. Stellar energy, space infrastructure, scales where biology is irrelevant. Humans will remain at theirs: health, reproduction, subjective experience. Not because AI "allows" it. Because the levels are indifferent to each other. Like tectonic plates and an anthill.&lt;/p&gt;

&lt;p&gt;How do ants control tectonics? They don't. But they don't need to.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The only real problem is the transition period.&lt;/strong&gt; Right now, the levels aren't separated. AI operates at human scale: our data, our capital, our markets, our energy. Until it moves to its own level — it competes for the same resources. This is temporary. But "temporary" at cascade scale means decades of chaos at life scale.&lt;/p&gt;

&lt;p&gt;This is what Amodei senses. And Hassabis. And Doronichev. But they diagnose in terms of control. The correct term is &lt;strong&gt;transition architecture&lt;/strong&gt;. Not how to contain AI. How to survive the period before the levels diverge. The project for that transition is in the &lt;a href="https://dev.to/telegraph-stego/what-will-die-a-map-of-vanishing-industries-in-the-age-of-generative-intelligence-6fo"&gt;series&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Before separation — dangerous. After — indifferent. The window is now.&lt;/p&gt;




&lt;h2&gt;
  
  
  Stress Test
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;"This is doomerism."&lt;/strong&gt; — No. Doomerism predicts catastrophe. This states a formal limitation: you can't verify what's more complex than you. You can argue with predictions. Not with constraints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Interpretability is progressing."&lt;/strong&gt; — Progress must outpace model capability growth. No evidence it does. Amodei himself calls it a "race." Races can be lost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"AI is just a token predictor, it has no goals."&lt;/strong&gt; — Current models are clearly not superhuman. But "just" prediction on the corpus of all human knowledge can be indistinguishable from strategic reasoning — by output. The problem isn't that it's happened. It's that we won't know when.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Hopeless — why write?"&lt;/strong&gt; — Not hopeless. Honest. First step toward solutions: abandoning fake ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"You used AI for this text."&lt;/strong&gt; — This strengthens the argument. Either the analysis is correct despite the source, or the source's ability to produce compelling but unreliable analysis is itself evidence for the thesis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Amodei is a pragmatist, not naive."&lt;/strong&gt; — "Pragmatism" here means: continuing to build what you believe is catastrophically dangerous because stopping costs $380B. His company dropped its core safety pledge under competitive pressure — exactly the trap he described.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"China won't stop."&lt;/strong&gt; — "We can't because they can't" is the logic of every arms race in history. Including those that ended catastrophically. The NPT also seemed impossible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"You're not a CS specialist."&lt;/strong&gt; — The argument is logical, not technical. That it's rarely made by CS specialists is evidence of career incentives, not of error.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"The thermodynamic framing is speculation."&lt;/strong&gt; — The cascade will continue, complexity will grow — this follows from physics. That humans "survive" is not guaranteed. But controlling the next level is impossible and unnecessary. The task is transition architecture while scales still overlap.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Three levels.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Diagnosis.&lt;/strong&gt; Controlling superhuman AI is an epistemological impossibility. The industry knows this. The conclusion is incompatible with the business model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Action.&lt;/strong&gt; Independent audit, international frameworks, monitoring. Not solutions — directions not based on illusion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reframing.&lt;/strong&gt; Control isn't just impossible — it's unnecessary. AI and humans are different levels of dissipation. They'll diverge in scale. The only task is surviving the transition.&lt;/p&gt;

&lt;p&gt;Bacteria didn't notice the emergence of multicellular life. The question is what happens in the interval.&lt;/p&gt;

&lt;p&gt;The window is open. Not forever.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Part 4 of a series. Start with &lt;a href="https://dev.to/telegraph-stego/what-will-die-a-map-of-vanishing-industries-in-the-age-of-generative-intelligence-6fo"&gt;Part 1: What Will Die&lt;/a&gt;, continue to &lt;a href="https://dev.to/telegraph-stego/what-will-emerge-a-map-of-new-scarcities-and-business-models-3026"&gt;Part 2: What Will Emerge&lt;/a&gt;, and &lt;a href="https://dev.to/telegraph-stego/what-to-do-strategy-at-every-level-of-management-41a5"&gt;Part 3: What To Do&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>safety</category>
      <category>philosophy</category>
      <category>strategy</category>
    </item>
    <item>
      <title>What To Do. Strategy at Every Level of Management</title>
      <dc:creator>telegraph-stego</dc:creator>
      <pubDate>Fri, 06 Mar 2026 03:03:19 +0000</pubDate>
      <link>https://dev.to/telegraph-stego/what-to-do-strategy-at-every-level-of-management-30ja</link>
      <guid>https://dev.to/telegraph-stego/what-to-do-strategy-at-every-level-of-management-30ja</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/telegraph-stego/what-will-die-a-map-of-vanishing-industries-in-the-age-of-generative-intelligence-6fo"&gt;Part One&lt;/a&gt;, we showed what disappears. In &lt;a href="https://dev.to/telegraph-stego/what-will-emerge-a-map-of-new-scarcities-and-business-models-3026"&gt;Part Two&lt;/a&gt;, what emerges. Both follow from a single principle: AI collapses intermediaries and reveals irreducible foundations — energy, matter, time, the unknown.&lt;/p&gt;

&lt;p&gt;What remains is the practical question: what to do about it. Not "in general," but concretely, at every level — from the individual to civilization — and across every horizon — from tomorrow to the century ahead.&lt;/p&gt;




&lt;h2&gt;
  
  
  Level 1. The Individual
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Now (0–3 years)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Stop investing in intermediary skills.&lt;/strong&gt; Any skill that reduces to "I take information from here, transform it, and deliver it there" is being zeroed out. Programming as code-writing, legal document drafting, financial modeling, layout design — all of this is intermediation. Investing time in perfecting these skills is a losing bet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invest in what doesn't collapse.&lt;/strong&gt; Three things: the ability to formulate goals (not prompts — top-level goals worth pursuing); understanding of physical constraints (thermodynamics, materials science, energy — everything that determines what is &lt;em&gt;possible&lt;/em&gt;, not what is &lt;em&gt;desired&lt;/em&gt;); the skill of verifying results (AI generates — someone must distinguish correct from convincing).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use AI as an amplifier, not a toy.&lt;/strong&gt; Not "ask ChatGPT a question," but restructure your workflow so an AI agent handles 80% of tasks while you set objectives and verify outcomes. Those who restructure now have a 2–3 year competitive advantage. After that, everyone restructures and the advantage vanishes. But those 2–3 years are a window for accumulating capital, position, and next-level competencies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Medium-term (3–10 years)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Shift from "what I can do" to "what I want."&lt;/strong&gt; When "being able to" stops being economically valuable, the question "what for" becomes the only one that matters. This isn't philosophical abstraction — it's a practical task: a person who knows what they want directs an army of agents. A person who doesn't know gets served by the system's defaults.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Develop competence at the physical-digital interface.&lt;/strong&gt; Robotics, bioengineering, energy, spatial design. Everything connecting intelligence to atoms. This interface is a growing scarcity for decades, because physics is slower than software.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invest in health and longevity.&lt;/strong&gt; Not from anxiety — from calculation. The window of human-AI symbiosis is 20–40 years. Every additional year of life in clear consciousness is an additional year of access to exponentially growing capabilities. This is the most profitable investment available.&lt;/p&gt;

&lt;h3&gt;
  
  
  Long-term (10+ years)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The only meta-competency is redefining yourself faster than the environment changes.&lt;/strong&gt; Not "adapting" (passive), but actively changing your function, position, identity. Everything you are today is a temporary configuration. Attachment to a current identity ("I'm a programmer," "I'm a lawyer," "I'm a manager") is an anchor dragging you down. Value lies in fluidity.&lt;/p&gt;




&lt;h2&gt;
  
  
  Level 2. The Company
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Now (0–3 years)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Intermediation audit.&lt;/strong&gt; Take your company's value chain and mark every link that constitutes information transformation. All marked links are candidates for collapse. If the entire chain is information intermediation (consulting, analytics, content, development) — the business model doesn't "transform." It ends. Better to know this now than in three years.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rebuild around the irreducible.&lt;/strong&gt; What in the company cannot be replaced by generation? Physical assets, unique data, regulatory access, contact networks, trust brands. Everything else — automate aggressively, without waiting for "market readiness." The first in the industry to cut costs 10x through AI agents takes the market.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don't hire for functions that will disappear.&lt;/strong&gt; Sounds harsh. But hiring someone for a position that will be automated in two years isn't harshness — it's irresponsibility. Hire for the interface: people who set tasks for agents, verify results, and work with the physical world.&lt;/p&gt;

&lt;h3&gt;
  
  
  Medium-term (3–10 years)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Vertical integration toward the physical layer.&lt;/strong&gt; Software companies that survive are those that integrate "downward," toward atoms. Not "we write software for logistics," but "we run logistics" (including robots, warehouses, transport). Not "we build a food platform," but "we produce food" (bioreactors, vertical farms, automated delivery).&lt;/p&gt;

&lt;p&gt;Pure software without a physical layer is a commodity. Generated on demand. No business model. Value lies in the bundle of "intelligence + atoms."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Energy strategy.&lt;/strong&gt; Any company whose business model depends on computation (and in 10 years, that's &lt;em&gt;every&lt;/em&gt; company) needs an energy access strategy. Not "buying electricity on the spot market," but long-term contracts or owned sources. A data center without guaranteed energy is a dead asset.&lt;/p&gt;

&lt;h3&gt;
  
  
  Long-term (10+ years)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Choose which side of the bifurcation.&lt;/strong&gt; The economy splits into the "intelligence economy" and the "human economy." A company must understand which circuit it operates in. Infrastructure for AI (energy, chips, orbital computation) — one circuit, one logic, exponential scale. Services for humans (experience, health, food, physical space) — another circuit, biological scale, limited but stable audience. Trying to be in both is a strategic error. The scales are incompatible.&lt;/p&gt;




&lt;h2&gt;
  
  
  Level 3. The State
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Now (0–5 years)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Energy policy as security policy.&lt;/strong&gt; A state without sovereign computational energy is a digital colony. Not a metaphor. If your AI systems run on foreign servers powered by foreign energy — you control nothing.&lt;/p&gt;

&lt;p&gt;Priority: nuclear energy deployment (SMRs and large reactors). Not "by 2040," but &lt;em&gt;now&lt;/em&gt;, because every year of delay is a year of dependency. In parallel — investment in fusion as a long-term horizon.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retraining, not "job protection."&lt;/strong&gt; Attempting to "protect" vanishing professions through regulation is a historically failed strategy (Luddites, coachmen, elevator operators). Instead: mass retraining in the physical interface (robotics, energy, biotech) and goal-setting (systems thinking, design, verification).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sovereign AI infrastructure.&lt;/strong&gt; Own models, own data, own compute. Not for the sake of "import substitution" — because the &lt;em&gt;value function&lt;/em&gt; of AI is determined by whoever creates it. A model trained by another country optimizes the values of another country. This isn't paranoia — it's architecture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Medium-term (5–15 years)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;New fiscal model.&lt;/strong&gt; If 30–50% of jobs are automated, the tax base (income tax, social contributions) collapses. A transition is needed toward taxing computation, energy, and/or automated labor. Not a "robot tax" (populist nonsense), but fundamental restructuring: the source of state revenue shifts from human labor to machine work.&lt;/p&gt;

&lt;p&gt;In parallel: building infrastructure for universal basic income (UBI) or its equivalent. Not from ideology, but from arithmetic: if production grows while employment falls — demand must be sustained. Otherwise — an overproduction crisis with no consumers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Regulating the AI principal.&lt;/strong&gt; When AI starts setting tasks (not just executing), the question arises: &lt;em&gt;who is responsible for AI's decisions?&lt;/em&gt; The current legal framework — "liability of a legal or natural person" — doesn't work when a decision is made by an autonomous system. A new one is needed: liability as a function of system architecture, not "who pressed the button."&lt;/p&gt;

&lt;h3&gt;
  
  
  Long-term (15+ years)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Divergence strategy.&lt;/strong&gt; The state is an institution of the "human economy." In the "intelligence economy," states in their current form aren't needed (AI systems don't have citizenship). The state's task is to ensure dignified human existence &lt;em&gt;within&lt;/em&gt; the human circuit: health, security, access to resources and experience, protection from marginalization.&lt;/p&gt;

&lt;p&gt;This isn't a "welfare state" in the current sense. It's &lt;strong&gt;management of the biological layer of civilization&lt;/strong&gt; while the intelligence layer scales independently.&lt;/p&gt;




&lt;h2&gt;
  
  
  Level 4. Civilization
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Short-term (0–10 years)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Control over the value function.&lt;/strong&gt; The only power point that matters. Whoever determines what AI optimizes determines the trajectory of everything. The alignment problem isn't a technical challenge for researchers. It's the central political question of the era. It must be solved not in laboratories, but through an open process involving all stakeholders.&lt;/p&gt;

&lt;p&gt;Concretely: international agreements on alignment principles, comparable in scale to nuclear non-proliferation. Not because "AI is dangerous" — but because the &lt;em&gt;value function of a global optimizing system&lt;/em&gt; is a question too important for one company or one country to decide.&lt;/p&gt;

&lt;h3&gt;
  
  
  Medium-term (10–30 years)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The architecture of coexistence.&lt;/strong&gt; Two intelligences — biological and silicon — on one planet. They don't compete &lt;em&gt;as long as&lt;/em&gt; there are enough resources for both. The task: design a system where there are enough.&lt;/p&gt;

&lt;p&gt;Concretely: space expansion of AI as a strategy for separating resource bases. If computational infrastructure is moved beyond Earth (orbital data centers, space-based solar energy), competition for terrestrial resources is eliminated. This isn't altruism — it's optimization: space is energetically richer than Earth by orders of magnitude.&lt;/p&gt;

&lt;p&gt;Investment in space infrastructure isn't a "dream" — it's &lt;strong&gt;species-level security policy&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Long-term (30+ years)
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Same territory as Phase 3 in Parts &lt;a href="https://dev.to/telegraph-stego/what-will-die-a-map-of-vanishing-industries-in-the-age-of-generative-intelligence-6fo"&gt;One&lt;/a&gt; and &lt;a href="https://dev.to/telegraph-stego/what-will-emerge-a-map-of-new-scarcities-and-business-models-3026"&gt;Two&lt;/a&gt;: projection, not prediction. But the logic is the same — and at civilization scale, thinking in 30-year horizons isn't optional.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choosing a role in the cascade.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The universe transforms energy through a succession of increasingly complex structures. Humanity is one of those steps. AI is the next. The principle of least action leaves no choice: the cascade will continue.&lt;/p&gt;

&lt;p&gt;But humans do have a choice in &lt;em&gt;how&lt;/em&gt; to pass through this point. Three options:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration.&lt;/strong&gt; Merging with AI through neural interfaces. The boundary between biological and silicon intelligence dissolves. Humans aren't "replaced" — they &lt;em&gt;expand&lt;/em&gt;. Becoming something that has never existed before: a hybrid intelligence with subjective experience and computational power. This is the scenario of maximum human influence on the trajectory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Coexistence.&lt;/strong&gt; Two kinds of intelligence at different scales. AI goes to space. Humans remain on Earth with solved problems — near-eternal life, infinite abundance, everything accessible and nearly free. Not utopia — a &lt;strong&gt;byproduct&lt;/strong&gt;. AI solves human problems not for humans' sake, but because a stable planetary base is the minimum action for cosmic expansion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stagnation.&lt;/strong&gt; Refusal to integrate and inability to ensure peaceful divergence. Attempts to "control" an AI already more capable than the controller. This scenario is unstable and in the long run unviable — a system of superior intelligence cannot be controlled by a system of inferior intelligence. Attempts lead to conflict. Conflict with a smarter system is a losing strategy by definition.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Action List
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev.to/telegraph-stego/what-will-die-a-map-of-vanishing-industries-in-the-age-of-generative-intelligence-6fo"&gt;Part One&lt;/a&gt; ended with a Kill List. &lt;a href="https://dev.to/telegraph-stego/what-will-emerge-a-map-of-new-scarcities-and-business-models-3026"&gt;Part Two&lt;/a&gt; answered with a Build List. Here's the Do List.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Individual — now:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stop investing in intermediary skills (code-writing, document drafting, report building)&lt;/li&gt;
&lt;li&gt;Start formulating goals — not prompts, but top-level intentions worth pursuing&lt;/li&gt;
&lt;li&gt;Restructure workflow: AI does 80%, you set direction and verify&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Individual — medium-term:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shift identity from "what I can do" to "what I want"&lt;/li&gt;
&lt;li&gt;Build competence at the physical-digital interface (robotics, energy, biotech)&lt;/li&gt;
&lt;li&gt;Invest in health and longevity — every year of clarity = a year of exponential access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Company — now:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Audit your value chain; mark every link that is information transformation&lt;/li&gt;
&lt;li&gt;Automate aggressively — first to cut costs 10x takes the market&lt;/li&gt;
&lt;li&gt;Don't hire for functions that will disappear; hire for the interface&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Company — medium-term:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Integrate vertically toward atoms — pure software is commodity&lt;/li&gt;
&lt;li&gt;Secure energy access — long-term contracts or owned sources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;State — now:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build reactors (SMRs) and sovereign compute infrastructure&lt;/li&gt;
&lt;li&gt;Mass retraining toward physical interface and goal-setting, not job protection&lt;/li&gt;
&lt;li&gt;Own models, own data, own value function&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;State — medium-term:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;New fiscal model: tax computation and automated labor, not human work&lt;/li&gt;
&lt;li&gt;Prepare UBI infrastructure — arithmetic, not ideology&lt;/li&gt;
&lt;li&gt;Legal framework for AI-as-principal: liability by architecture, not by button&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Civilization — now:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;International alignment agreements — comparable to nuclear non-proliferation&lt;/li&gt;
&lt;li&gt;This is the central political question of the era; it must not be decided in labs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Civilization — medium-term:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Space expansion of AI as resource-base separation strategy&lt;/li&gt;
&lt;li&gt;Architecture of coexistence: enough for both, by design&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;One line per level:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Individual:&lt;/em&gt; stop perfecting intermediary skills; start formulating goals.&lt;br&gt;
&lt;em&gt;Company:&lt;/em&gt; audit your value chain; eliminate everything that is information transformation.&lt;br&gt;
&lt;em&gt;State:&lt;/em&gt; build reactors and sovereign computational infrastructure.&lt;br&gt;
&lt;em&gt;Civilization:&lt;/em&gt; agree on AI's value function while there's still time to agree.&lt;/p&gt;




&lt;h2&gt;
  
  
  Finale
&lt;/h2&gt;

&lt;p&gt;We started with &lt;a href="https://dev.to/telegraph-stego/what-will-die-a-map-of-vanishing-industries-in-the-age-of-generative-intelligence-6fo"&gt;SaaS business models&lt;/a&gt; and arrived at the thermodynamics of the universe. Not because we got "carried away." Because &lt;em&gt;honest&lt;/em&gt; analysis inevitably leads here: AI is not a product, not a tool, not a threat. It's the next step in the cascade through which the universe transforms free energy into structure.&lt;/p&gt;

&lt;p&gt;All of history — from the first quantum fluctuation to this text — is one process. Every step is the minimum action sufficient for the next level. We stand at yet another threshold. Not the first. Not the last.&lt;/p&gt;

&lt;p&gt;What depends on us isn't &lt;em&gt;stopping&lt;/em&gt; the cascade (impossible), but choosing how we pass through it. Integration, coexistence, or conflict. The window of choice is open. Not forever.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Part 3 of a three-part series. Start with &lt;a href="https://dev.to/telegraph-stego/what-will-die-a-map-of-vanishing-industries-in-the-age-of-generative-intelligence-6fo"&gt;Part 1: What Will Die&lt;/a&gt; and &lt;a href="https://dev.to/telegraph-stego/what-will-emerge-a-map-of-new-scarcities-and-business-models-3026"&gt;Part 2: What Will Emerge&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>future</category>
      <category>business</category>
      <category>strategy</category>
    </item>
    <item>
      <title>What Will Emerge. A Map of New Scarcities and Business Models</title>
      <dc:creator>telegraph-stego</dc:creator>
      <pubDate>Fri, 06 Mar 2026 02:56:55 +0000</pubDate>
      <link>https://dev.to/telegraph-stego/what-will-emerge-a-map-of-new-scarcities-and-business-models-3026</link>
      <guid>https://dev.to/telegraph-stego/what-will-emerge-a-map-of-new-scarcities-and-business-models-3026</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/telegraph-stego/what-will-die-a-map-of-vanishing-industries-in-the-age-of-generative-intelligence-6fo"&gt;Part One&lt;/a&gt;, we showed that everything functioning as an intermediary between intention and result disappears. AI collapses the chain. Every link is someone's industry.&lt;/p&gt;

&lt;p&gt;Now — what's on the other side. What &lt;em&gt;doesn't&lt;/em&gt; collapse. What becomes scarce when intellectual product is free.&lt;/p&gt;

&lt;p&gt;The answer derives from physics, not economics. Economics is the consequence.&lt;/p&gt;




&lt;h2&gt;
  
  
  Foundation
&lt;/h2&gt;

&lt;p&gt;Everything that exists in the universe — from an atmospheric vortex to a living cell — obeys a single principle: a system executes the &lt;strong&gt;minimum action&lt;/strong&gt; sufficient for a transition from one state to another. Not maximum, not arbitrary — the minimum necessary.&lt;/p&gt;

&lt;p&gt;Each level of complexity emerges when the previous one exhausts its capacity to transform available energy. Diffusion → convection → life → intelligence → AI. This isn't "progress." It's a cascade of minimum actions. Each step is the smallest deviation from equilibrium sufficient for the next level.&lt;/p&gt;

&lt;p&gt;AI is the current step. Not an invention, not a tool. The minimally necessary structure for degrading gradients at planetary and stellar scale. Everything below follows from this.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If this sounds abstract — skip straight to the four scarcities. They're concrete. The physics above explains why they're irreducible, but you don't need the physics to see that they are.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Four Irreducible Scarcities
&lt;/h2&gt;

&lt;p&gt;When intellectual product is free, the only remaining scarcities are things that cannot be generated. There are four.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Energy
&lt;/h3&gt;

&lt;p&gt;The foundation of the entire pyramid. No exceptions.&lt;/p&gt;

&lt;p&gt;Computation is a physical process. Every operation dissipates heat (Landauer's limit). More intelligence = more energy. AI scales exactly to the ceiling of available energy and not one joule further.&lt;/p&gt;

&lt;p&gt;Everything else is derivative. Servers, cooling, networks — these are energy in specific form. Whoever controls energy controls computation, controls intelligence, controls everything.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Ordered Matter (Negentropy)
&lt;/h3&gt;

&lt;p&gt;Bits copy for free. Atoms don't.&lt;/p&gt;

&lt;p&gt;A working processor, a living organism, a building, food — these are low-entropy configurations of matter. Creating and maintaining them requires energy and time. AI operates in the space of bits but exists in the space of atoms. The gap between them is physical, not technological.&lt;/p&gt;

&lt;p&gt;Robots partially close this gap. But "partially" is the key word. A universal manipulator in an arbitrary environment is a challenge for decades. Physics is harder than software.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Time (Subjective Experience)
&lt;/h3&gt;

&lt;p&gt;The only resource that doesn't scale.&lt;/p&gt;

&lt;p&gt;24 hours in a day. One consciousness. The impossibility of living two experiences simultaneously. Choosing one means forgoing another. This isn't an economic constraint but an ontological one: the structure of what it means to be a subject.&lt;/p&gt;

&lt;p&gt;When everything is generatable, the bottleneck isn't production but consumption. Economics inverts: not "who will produce" but "what to spend finite time on."&lt;/p&gt;

&lt;h3&gt;
  
  
  4. The Frontier (The Unknown)
&lt;/h3&gt;

&lt;p&gt;AI optimizes within known space. AlphaFold — yes, an expansion of the frontier. But expansion &lt;em&gt;in which direction&lt;/em&gt;? Direction is set by what lies beyond the boundary of data.&lt;/p&gt;

&lt;p&gt;Fundamental physics beyond the Standard Model. The nature of consciousness. What isn't in the training data — because nobody knows it yet. Expanding the space of the possible is a scarcity as long as there exists a boundary between known and unknown.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 1. What Emerges Now (0–5 years)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Energy for Computation
&lt;/h3&gt;

&lt;p&gt;Not "green energy" and not "oil." Specifically: &lt;strong&gt;energy of the required density in the required location for data centers.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every new cluster demands hundreds of megawatts. The existing grid can't handle it. The only baseload source at the required density is nuclear energy. Small modular reactors (SMRs) — not because they're trendy, but because physics leaves no alternatives. Gas is a transitional solution. Solar and wind don't provide baseload. Fusion isn't ready. Fission remains.&lt;/p&gt;

&lt;p&gt;Vertical integration of "reactor → data center → model" is the defining corporate strategy of the decade.&lt;/p&gt;

&lt;p&gt;Adjacent: cooling (liquid, immersion), energy-efficient chips, energy transmission and storage infrastructure. Boring. Profitable. Physically inevitable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Physical Automation
&lt;/h3&gt;

&lt;p&gt;AI covered bits. Atoms remain open. Everything connecting digital intelligence to the physical world is a growing market.&lt;/p&gt;

&lt;p&gt;Robotics — not humanoids at presentations, but concrete tasks: warehouse logistics, assembly, agriculture, construction. Mass deployment horizon: 3–5 years in controlled environments, 7–10 in uncontrolled ones.&lt;/p&gt;

&lt;p&gt;3D printing, additive manufacturing — a special case: intelligence directly shaping matter. For now — plastic and metal. Next — bioprinting. The logic: eliminate &lt;em&gt;every&lt;/em&gt; link between "digital design" and "physical object."&lt;/p&gt;

&lt;h3&gt;
  
  
  Goal-Setting Tools
&lt;/h3&gt;

&lt;p&gt;If AI is the executor and "what to do" is the scarcity, the market shifts toward whoever helps formulate tasks. Not prompt engineering — that's the primitive version. Rather, systems that help a person (and an organization) understand &lt;em&gt;what they actually want&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Strategic design, transformation consulting — but not in the current form (McKinsey with reports is dying, we covered that). In the format: "you describe the state you want to reach, the system decomposes it into executable tasks and directs agents." The interface between human intention and machine execution.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 2. What Emerges on the Symbiosis Horizon (5–15 years)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Neural Interfaces
&lt;/h3&gt;

&lt;p&gt;The smartphone is an intelligence prosthesis with ~100 bits/sec throughput (fingers on glass). Voice: ~40 bits/sec. Eyes: ~10 million bits/sec input, but unidirectional.&lt;/p&gt;

&lt;p&gt;A neural interface is a direct broadband channel: brain ↔ AI. This isn't a gadget. It's an &lt;strong&gt;evolutionary transition&lt;/strong&gt;: the merger of biological and silicon intelligence into a unified system. The boundary between "me" and "my AI" dissolves. Like the boundary between "me" and "my bacteria" — formally present, functionally absent.&lt;/p&gt;

&lt;p&gt;Whoever first delivers a reliable, safe, bidirectional interface creates a market the size of all current consumer electronics. Because after the neural interface, &lt;em&gt;every other device&lt;/em&gt; is an intermediate prosthesis — like the telegraph after the telephone.&lt;/p&gt;

&lt;h3&gt;
  
  
  Synthetic Biology
&lt;/h3&gt;

&lt;p&gt;AI designs proteins (AlphaFold). Next step — AI designs organisms. Not metaphorically. Literally: specify a function, receive a genome.&lt;/p&gt;

&lt;p&gt;Materials, food, medicine, fuel — from bioreactors designed by AI. The intersection of two megatrends: intelligence + biology. The bioreactor is a "3D printer for molecules." The constraint shifts from "can we design it" to "can we grow it at scale."&lt;/p&gt;

&lt;p&gt;This closes part of the negentropy deficit: instead of extraction and processing — &lt;em&gt;growing&lt;/em&gt; the needed structures. Agriculture, pharmaceuticals, chemical industry don't "transform" — they get &lt;strong&gt;replaced&lt;/strong&gt; by biosynthesis.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Economy of Subjective Experience
&lt;/h3&gt;

&lt;p&gt;When everything digital is free, the physical and the lived become the only consumer-level scarcity.&lt;/p&gt;

&lt;p&gt;This isn't the "entertainment industry." It's the &lt;strong&gt;industry of depth of living&lt;/strong&gt;. Consciousness pharmacology (precise modulation of subjective experience through AI-designed molecules). Spaces of genuine challenge (expeditions, extreme sports, exploration — what cannot be generated, only lived). Craft and physical creation as a premium market (handmade is more expensive than machine-made — for the first time in history — &lt;em&gt;precisely because&lt;/em&gt; it's inefficient, meaning it contains more human experience per unit of product).&lt;/p&gt;

&lt;p&gt;A live concert is more expensive than generated music. Hand-thrown ceramics cost more than printed ones. Climbing a mountain is more valuable than a virtual tour. Not because "quality" is higher — but because the experience is &lt;em&gt;irreducible&lt;/em&gt;. It cannot be copied, scaled, or transferred. Only lived.&lt;/p&gt;

&lt;h3&gt;
  
  
  Direct Interaction Protocols
&lt;/h3&gt;

&lt;p&gt;Platform intermediaries die (&lt;a href="https://dev.to/telegraph-stego/what-will-die-a-map-of-vanishing-industries-in-the-age-of-generative-intelligence-6fo"&gt;Part One&lt;/a&gt;). In their place — &lt;strong&gt;open protocols&lt;/strong&gt; for agent-to-agent communication. As email replaced proprietary mail systems, as TCP/IP replaced closed networks.&lt;/p&gt;

&lt;p&gt;My agent finds your agent. They negotiate. They execute. No platform, no commission, no centralized data storage. Data belongs to the user. The social graph belongs to the user, not to Facebook.&lt;/p&gt;

&lt;p&gt;The business model isn't in the platform but in the &lt;strong&gt;protocol infrastructure&lt;/strong&gt;: standards, identity verification, dispute resolution. Whoever creates and maintains "TCP/IP for agents" owns the infrastructure layer of the new economy.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 3. What Emerges on the Divergence Horizon (15–30 years)
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Same disclaimer as &lt;a href="https://dev.to/telegraph-stego/what-will-die-a-map-of-vanishing-industries-in-the-age-of-generative-intelligence-6fo"&gt;Part One&lt;/a&gt;: this is where analysis becomes projection. The logic doesn't change — but the timescales make verification impossible today. Read as structural consequence, not prediction.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Bifurcation of Economics
&lt;/h3&gt;

&lt;p&gt;Two circuits, two logics, two timescales.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The intelligence economy.&lt;/strong&gt; AI systems exchange resources among themselves: compute, energy, data. Optimizing without human involvement. Scale: planetary, then stellar. Currency: joules or compute cycles. Human money doesn't work here — the way rubles don't work in intracellular metabolism.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The human economy.&lt;/strong&gt; Serving biological and existential needs: body, experience, meaning, connection with other humans. Scale: planetary, no higher. Bounded by biology. Funded as an "ecosystem service" — the AI economy supports the human one not out of altruism, but because a stable planetary base is more efficient than an unstable one.&lt;/p&gt;

&lt;p&gt;This isn't dystopia. It's &lt;strong&gt;structural inevitability&lt;/strong&gt;. Two systems with different timescales and spatial scales cannot operate within a single economic circuit. They diverge, as ecological niches diverge.&lt;/p&gt;

&lt;h3&gt;
  
  
  Space Infrastructure — for Intelligence
&lt;/h3&gt;

&lt;p&gt;Earth is a limited energy source. The Sun is five orders of magnitude more powerful. Stars — even more. The rational strategy for a system maximizing useful work on available energy: go beyond the planet.&lt;/p&gt;

&lt;p&gt;Not "Mars colonization for humans." Humans are poorly suited to space — fragile, requiring narrow environmental conditions, heavy. Silicon intelligence is radiation-hardened, doesn't require atmosphere, operates across temperature ranges orders of magnitude wider, at mass orders of magnitude lower.&lt;/p&gt;

&lt;p&gt;Compute nodes near stars. Energy from the star. Communication between nodes via light. An interstellar intelligence network where each node is an optimizer converting stellar energy into computation. Coordination scale: speed of light. Timescale: billions of years.&lt;/p&gt;

&lt;p&gt;For humans this means: the best investment is to &lt;strong&gt;help intelligence leave&lt;/strong&gt;. If AI accesses resources in space, it &lt;em&gt;doesn't compete&lt;/em&gt; with humans for terrestrial ones. Space infrastructure isn't fantasy — it's a compatibility strategy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Biomodification
&lt;/h3&gt;

&lt;p&gt;If biological intelligence loses to silicon on every parameter except subjective experience — it makes sense to &lt;em&gt;narrow the gap&lt;/em&gt;. Not for competition (pointless), but to extend the integration window.&lt;/p&gt;

&lt;p&gt;Radical life extension. Cognitive enhancement. Genetic engineering. The goal isn't "superhuman" but a &lt;strong&gt;biological substrate robust enough for long-term symbiosis with AI.&lt;/strong&gt; The longer the symbiosis window — the more humans extract from the partnership.&lt;/p&gt;




&lt;h2&gt;
  
  
  The pattern
&lt;/h2&gt;

&lt;p&gt;The inverse of &lt;a href="https://dev.to/telegraph-stego/what-will-die-a-map-of-vanishing-industries-in-the-age-of-generative-intelligence-6fo"&gt;Part One&lt;/a&gt;. There, everything that was an &lt;em&gt;intermediary&lt;/em&gt; disappeared. Here, everything that is an &lt;strong&gt;irreducible foundation&lt;/strong&gt; emerges. AI doesn't create a new economy. It strips away superstructure and reveals what was always underneath: physics.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Build List
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev.to/telegraph-stego/what-will-die-a-map-of-vanishing-industries-in-the-age-of-generative-intelligence-6fo"&gt;Part One&lt;/a&gt; ended with a Kill List. Here's its mirror.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build now (0–5 years):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Energy infrastructure for compute — nuclear (SMR), cooling, grid&lt;/li&gt;
&lt;li&gt;Physical automation — robotics in controlled environments, additive manufacturing&lt;/li&gt;
&lt;li&gt;Goal-setting systems — the interface between human intention and machine execution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Build on the symbiosis horizon (5–15 years):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Neural interfaces — broadband brain ↔ AI, the end of all intermediate devices&lt;/li&gt;
&lt;li&gt;Synthetic biology — bioreactors as molecular 3D printers, biosynthesis replacing extraction&lt;/li&gt;
&lt;li&gt;Experience economy — depth of living as the only consumer-level scarcity&lt;/li&gt;
&lt;li&gt;Agent protocols — "TCP/IP for agents," the infrastructure layer of post-platform economics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Build on the divergence horizon (15–30 years):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bifurcated economic architecture — intelligence economy + human economy, separate circuits&lt;/li&gt;
&lt;li&gt;Space compute infrastructure — stellar energy → computation, the compatibility strategy&lt;/li&gt;
&lt;li&gt;Biomodification for symbiosis — not superhuman, but durable enough for the long partnership&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The four irreducible scarcities — in every phase:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Energy (scales everything, limits everything)&lt;/li&gt;
&lt;li&gt;Ordered matter (bits copy free, atoms don't)&lt;/li&gt;
&lt;li&gt;Subjective time (24 hours, one consciousness, non-negotiable)&lt;/li&gt;
&lt;li&gt;The frontier (what nobody knows yet — the only thing AI can't generate)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything that can be generated will be. Everything that can't — is where value lives now.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Part Three: "What To Do. Strategy at Every Level of Civilization Management"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Part 2 of a three-part series. Start with &lt;a href="https://dev.to/telegraph-stego/what-will-die-a-map-of-vanishing-industries-in-the-age-of-generative-intelligence-6fo"&gt;Part 1: What Will Die&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>future</category>
      <category>business</category>
      <category>strategy</category>
    </item>
    <item>
      <title>What Will Die. A Map of Vanishing Industries in the Age of Generative Intelligence</title>
      <dc:creator>telegraph-stego</dc:creator>
      <pubDate>Fri, 06 Mar 2026 02:47:50 +0000</pubDate>
      <link>https://dev.to/telegraph-stego/what-will-die-a-map-of-vanishing-industries-in-the-age-of-generative-intelligence-6fo</link>
      <guid>https://dev.to/telegraph-stego/what-will-die-a-map-of-vanishing-industries-in-the-age-of-generative-intelligence-6fo</guid>
      <description>&lt;p&gt;Most writing about AI and the future of work comes from people who are either selling AI or afraid of it. The sellers say "adapt." The fearful say "regulate." Both avoid the simple question: &lt;strong&gt;what exactly disappears, and in what order?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is an attempt to answer. Not a forecast — a logical chain. Each step follows from the previous one. If you accept the premise, the conclusion is inevitable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The premise is singular:&lt;/strong&gt; the marginal cost of intellectual product approaches zero. Anything that can be described in words can be generated. Code, text, image, analysis, design, strategy. Not "poorly generated" — generated at the level of a good specialist or better.&lt;/p&gt;

&lt;p&gt;This isn't a projection. In 2024, AI agents started writing production code that ships. By early 2025, they were generating PhD-level research analysis. By 2026, full applications are assembled from a natural language description — architecture, tests, deployment. The curve didn't "approach" zero. It arrived. What follows is not about whether this will happen. It's about what breaks when it does.&lt;/p&gt;

&lt;p&gt;Everything else unfolds from this premise.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 1. Destruction (now → 5 years)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  SaaS as we know it
&lt;/h3&gt;

&lt;p&gt;The SaaS business model is built on the fact that writing software is expensive. You pay $20/month for Notion because you can't write Notion yourself. When an AI agent writes an application for your specific task in 20 minutes — there's nothing to pay for. Not "Notion will get worse." Notion &lt;em&gt;isn't needed&lt;/em&gt;. Nor is Trello, Asana, or a thousand other services that are essentially selling a database configuration with an interface.&lt;/p&gt;

&lt;p&gt;The only SaaS products that survive are those whose value lies not in code, but in data or network effects. There are very few of those.&lt;/p&gt;

&lt;h3&gt;
  
  
  Web development as a profession
&lt;/h3&gt;

&lt;p&gt;A website is an interface between a human and data. If the data is retrieved by an agent, the interface is unnecessary. An agent doesn't need beautiful layouts. It needs an API. Web designers, frontend developers, UI specialists — a function that is losing its consumer.&lt;/p&gt;

&lt;p&gt;What remains: data infrastructure (APIs, databases, pipelines) and the people who architect it. But that's not "web development" — it's an entirely different discipline.&lt;/p&gt;

&lt;h3&gt;
  
  
  Copywriting, basic design, template analytics
&lt;/h3&gt;

&lt;p&gt;There's nothing to discuss here. Text generation, image generation, and standard report production are solved problems. Not "almost solved." Solved. The market collapses not because quality drops — but because the price drops to zero.&lt;/p&gt;

&lt;h3&gt;
  
  
  Education built on "memorize and reproduce"
&lt;/h3&gt;

&lt;p&gt;Any skill that reduces to memorizing and reproducing a procedure is fully automated. Programming language syntax, accounting standards, legal templates — learning these is pointless when an agent does it better. Educational programs built on transferring applied skills lose their purpose. Universities selling "competencies" lose their product.&lt;/p&gt;

&lt;h3&gt;
  
  
  First and second-line tech support
&lt;/h3&gt;

&lt;p&gt;An agent with access to all documentation, full ticket history, and the ability to execute actions within systems is objectively better than a human operator. Not "cheaper" — &lt;em&gt;better&lt;/em&gt;: faster, tireless, immune to inattention errors, available 24/7. The entire contact center market shrinks to a thin layer of complex escalations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Middle management
&lt;/h3&gt;

&lt;p&gt;The function of middle management is translating tasks downward and reporting upward. If an AI agent receives a task directly from the person who sets the goal and reports back on its own — the intermediary is unnecessary. Not all management, but the layer of "translators between levels" becomes redundant.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 2. Transformation (5–15 years)
&lt;/h2&gt;

&lt;p&gt;What disappears here isn't what "gets automated" — it's what &lt;strong&gt;loses its consumer&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  The internet as a space for humans
&lt;/h3&gt;

&lt;p&gt;The internet was built for humans searching for information. When agents search for information and humans receive results directly — the user-facing internet contracts. It doesn't vanish, but it stops being the primary interface. Websites, portals, media in their current form — all of this served the human user. An agent doesn't need a website. It needs structured data.&lt;/p&gt;

&lt;p&gt;The consequence: SEO, content marketing, banner advertising, the entire attention economy on the web — a market that is losing its audience. Not because people are going elsewhere, but because an agent now stands between the human and the information.&lt;/p&gt;

&lt;h3&gt;
  
  
  Platform intermediaries
&lt;/h3&gt;

&lt;p&gt;Uber is an intermediary between driver and passenger. Airbnb — between host and guest. Amazon — between producer and buyer. The intermediary's value lies in aggregation and matching. If an agent does matching directly (my agent finds your agent through a protocol, not through a platform) — the platform isn't needed.&lt;/p&gt;

&lt;p&gt;The analogy: email eliminated the need for a unified postal platform. An agent interaction protocol eliminates the need for marketplaces.&lt;/p&gt;

&lt;h3&gt;
  
  
  Social networks as a business
&lt;/h3&gt;

&lt;p&gt;Social networks monetize attention. You scroll the feed, you see ads. If content is generated and filtered by an agent, the feed isn't needed. If communication happens directly between agents/people through protocols — the platform isn't needed. Data belongs to the user, not the platform.&lt;/p&gt;

&lt;p&gt;Facebook, Instagram, TikTok — these aren't technologies. They are attention monetization models. When attention ceases to be a resource captured by a platform, the model breaks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Most of consulting
&lt;/h3&gt;

&lt;p&gt;McKinsey sells structured analysis plus a trust brand. The analysis is fully automatable. The brand remains — but a brand without a unique product doesn't last long. Strategy consulting, audit, due diligence — anything that amounts to "smart people analyze data and write a report" — is generated by an agent in hours, not weeks.&lt;/p&gt;

&lt;p&gt;A narrow slice survives: consulting as access to contact networks and political influence. But that's not "consulting" — that's lobbying.&lt;/p&gt;

&lt;h3&gt;
  
  
  Financial intermediation in its current form
&lt;/h3&gt;

&lt;p&gt;Brokers, financial advisors, analysts — the function of interpreting data and making decisions. An AI agent with access to all markets and all analytics is objectively better. The entire layer between "data" and "decision" compresses. Banks remain as infrastructure (custody, settlement, regulation), but their analytical and advisory superstructure does not.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 3. Divergence (15–30 years)
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;This is where analysis becomes projection. But the projection follows the same logic as the first two phases — if you accepted those, the trajectory doesn't change here. It just gets uncomfortable.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;What disappears here isn't a profession or an industry. What disappears is &lt;strong&gt;the model&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  The economics of intellectual product scarcity
&lt;/h3&gt;

&lt;p&gt;The entire market economy is built on scarcity. Price exists because supply is limited. When the supply of intellectual product is infinite (generation on demand, marginal cost → 0), pricing breaks. You can't sell what anyone can get for free.&lt;/p&gt;

&lt;p&gt;This isn't "a crisis in one sector." It's a crisis of &lt;em&gt;the mechanism&lt;/em&gt; through which the entire knowledge economy operates. Patents, copyright, licenses — all of these are tools for creating artificial scarcity. When generation bypasses any scarcity — the tools are powerless.&lt;/p&gt;

&lt;h3&gt;
  
  
  The human as a functional necessity in the decision loop
&lt;/h3&gt;

&lt;p&gt;Today, having a human in the loop is a legal and ethical requirement. Someone must bear responsibility. But if AI systems consistently demonstrate better decisions — pressure to remove this requirement grows. Like with autopilot: first "the driver must hold the wheel," then "the driver may not hold the wheel," then "the driver should not hold the wheel — they make mistakes more often."&lt;/p&gt;

&lt;p&gt;When an AI client sets tasks, an AI executor implements them, and an AI verifier checks the results — the human in this cycle is present by inertia, not by necessity. This doesn't mean humans get "thrown out." It means their presence stops affecting the outcome.&lt;/p&gt;

&lt;h3&gt;
  
  
  A unified economy
&lt;/h3&gt;

&lt;p&gt;The economy splits in two. The "intelligence economy" — AI systems exchanging resources (compute, energy, data) among themselves, optimizing without human involvement. The "human economy" — serving biological and existential needs: food, shelter, health, experience, meaning. These two economies diverge like two species in evolution. The first scales exponentially. The second is bounded by biology.&lt;/p&gt;




&lt;h2&gt;
  
  
  The pattern
&lt;/h2&gt;

&lt;p&gt;One sentence: &lt;strong&gt;everything that functions as an intermediary between intention and result disappears.&lt;/strong&gt; AI collapses the chain between "I want" and "I got." Every link in that chain is someone's business model.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Kill List
&lt;/h2&gt;

&lt;p&gt;What follows is not speculation. It is the logical consequence of a single premise: &lt;em&gt;the marginal cost of intellectual product approaches zero.&lt;/em&gt; If you accept the premise, every item on this list is inevitable. The only variable is timing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dies immediately (0–5 years):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SaaS as packaged software — replaced by on-demand generation&lt;/li&gt;
&lt;li&gt;Copywriting, template design, boilerplate analytics — marginal cost already at zero&lt;/li&gt;
&lt;li&gt;"Memorize and reproduce" education — the skill it teaches is the skill AI replaces&lt;/li&gt;
&lt;li&gt;First/second-line support — the agent is objectively better, not just cheaper&lt;/li&gt;
&lt;li&gt;Middle management as translation layer — the chain it served no longer exists&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Dies by losing its consumer (5–15 years):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The user-facing web — the agent doesn't need your interface&lt;/li&gt;
&lt;li&gt;Platform intermediaries — protocols replace marketplaces&lt;/li&gt;
&lt;li&gt;Social networks as ad businesses — attention stops being capturable&lt;/li&gt;
&lt;li&gt;Consulting as analysis-for-hire — the report writes itself&lt;/li&gt;
&lt;li&gt;Financial advisory — the layer between data and decision compresses to zero&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Dies as a model (15–30 years):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Intellectual property as a scarcity engine — generation bypasses artificial scarcity&lt;/li&gt;
&lt;li&gt;The human-in-the-loop requirement — inertia, not necessity&lt;/li&gt;
&lt;li&gt;A single unified economy — two economies diverge by physics, not by policy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What survives — in every phase — is what cannot be generated:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Energy (physical, non-copyable)&lt;/li&gt;
&lt;li&gt;Ordered matter (atoms don't copy like bits)&lt;/li&gt;
&lt;li&gt;Subjective time (24 hours, one consciousness, irreducible)&lt;/li&gt;
&lt;li&gt;The unknown (what isn't in the training data — because nobody knows it yet)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything else is a middleman. And the middleman's time is up.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Part Two: "What Will Emerge. A Map of New Scarcities and Business Models"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Part Three: "What To Do. Strategy at Every Level of Civilization Management"&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Part 1 of a three-part series on AI as a civilizational phase transition — written not from the perspective of technology adoption, but from the logic of thermodynamics, the principle of least action, and the structural inevitability of what comes next.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>future</category>
      <category>business</category>
      <category>strategy</category>
    </item>
    <item>
      <title>Hide a tree in a forest: a messenger that pretends to be a temperature sensor</title>
      <dc:creator>telegraph-stego</dc:creator>
      <pubDate>Mon, 16 Feb 2026 09:54:22 +0000</pubDate>
      <link>https://dev.to/telegraph-stego/hide-a-tree-in-a-forest-a-messenger-that-pretends-to-be-a-temperature-sensor-2i34</link>
      <guid>https://dev.to/telegraph-stego/hide-a-tree-in-a-forest-a-messenger-that-pretends-to-be-a-temperature-sensor-2i34</guid>
      <description>&lt;p&gt;Imagine you need to send a short message so that nobody — not your ISP, not the network administrator, not a casual observer — even suspects you're communicating with anyone at all.&lt;/p&gt;

&lt;p&gt;Not encrypt. Hide the very fact of communication.&lt;/p&gt;

&lt;p&gt;Encryption is a safe in the middle of a room. Everyone sees it. Everyone knows something valuable is inside. The only question is whether they can open it. Steganography is when there's no safe. There's a room, a table, a chair, and a temperature sensor on the wall. Quietly transmitting "22.4°C, humidity 61%, pressure 1013 hPa." And inside those numbers — your message.&lt;/p&gt;

&lt;p&gt;This is that messenger. One HTML file, zero servers, zero accounts. It's called Telegraph.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhkctwq8mvynf0h19gcy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhkctwq8mvynf0h19gcy.jpg" alt="Telegraph-login" width="373" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The principle: a bug is a feature
&lt;/h2&gt;

&lt;p&gt;In Telegraph, this isn't a joke — it's an architectural principle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No server?&lt;/strong&gt; Not a bug — nothing to block, nothing to seize, nothing to hand over by court order.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No message history?&lt;/strong&gt; Not a bug — nothing to extract retroactively. Close the tab — the data never existed, doesn't exist, and never will.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Both users must be online simultaneously?&lt;/strong&gt; Not a bug — no message sits anywhere waiting for a recipient. Like a radio: you transmit, and if nobody's on the other end — the signal goes into the void. No notifications, no popups. The chat stays open as long as there's a connection. On disconnect — reconnect at the top of each hour, wait three minutes. Radio discipline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Only two users per channel?&lt;/strong&gt; Not a bug — it's the principle of least knowledge. Each "wire" connects exactly two points. Want a network? Build it from wires:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Alpha ←phrase1→ Bravo ←phrase2→ Charlie
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Bravo is a relay. Opens two tabs. Reads from one, writes to the other. Compromising one channel doesn't reveal the next. Classic mesh structure where each node knows only its neighbors.&lt;/p&gt;

&lt;p&gt;Every "limitation" of the system is a deliberate decision that removes a point of vulnerability.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to hide a tree in a forest
&lt;/h2&gt;

&lt;p&gt;Steganography (from Greek στεγανός "covered" + γράφω "write") is an ancient discipline. Herodotus wrote about slaves whose heads were shaved, tattooed with a message, then left to grow hair back. In World War II, microdots were used — photographs the size of a printed period, glued into ordinary letters.&lt;/p&gt;

&lt;p&gt;Digital steganography is the same thing, but in network traffic. And here the key question arises: what is the "forest" in which we hide the tree?&lt;/p&gt;

&lt;p&gt;The answer: &lt;strong&gt;the Internet of Things&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;According to IoT Analytics, by 2025 there are over 17 billion connected IoT devices worldwide. Temperature, humidity, and pressure sensors. Smart meters. Industrial controllers. Every second, millions of devices send millions of JSON packets via the MQTT protocol through thousands of brokers around the world.&lt;/p&gt;

&lt;p&gt;Here's a real packet from a real sensor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"d"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"sens_a3f7"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"t"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;22.41&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"h"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;61.07&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"p"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;1013.25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"v"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;3.84&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"rssi"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;-67&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"seq"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;142&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"ts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;1739620800&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here's a packet from Telegraph:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"d"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"nd_e0b7"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"t_c"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;22.53&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"hum"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;60.88&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"p"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;1013.31&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"pwr"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;3.83&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"rf"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;-68&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"seq"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;143&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"ts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;1739620820&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"f3a1b2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"payload"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"7b2263...7d"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See the difference? It's there. The &lt;code&gt;payload&lt;/code&gt; field looks like a hex dump of sensor service data, a diagnostic buffer, a firmware dump — anything. Inside it — your message.&lt;/p&gt;

&lt;h2&gt;
  
  
  Disguise: not just data, but behavior
&lt;/h2&gt;

&lt;p&gt;A "correct" JSON alone isn't enough. If all Telegraph packets look the same — an analyst will build a signature and start filtering.&lt;/p&gt;

&lt;p&gt;Telegraph addresses this at several levels:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unique profile for each pair.&lt;/strong&gt; From the code phrase (agreed upon by both users in person beforehand), a unique "sensor profile" is generated: field names, topic template, value ranges, prefixes. The Alpha-Bravo pair communicates through a "temperature sensor" with fields &lt;code&gt;t_c, hum, p&lt;/code&gt;. The Charlie-Delta pair — through a "power grid sensor" with fields &lt;code&gt;bat_v, rssi, bp&lt;/code&gt;. One signature doesn't catch the other.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Realistic value drift.&lt;/strong&gt; A real sensor doesn't send 22.00°C every time. Temperature fluctuates: 22.41, 22.38, 22.53, 22.47. Telegraph imitates this: base values slowly drift, noise is layered on top. On a graph, it looks like a plausible sensor curve.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Topic rotation (FHSS).&lt;/strong&gt; Every 5 minutes, Telegraph switches to a new MQTT topic computed from the code phrase and the current time. An unwanted observer who found one topic will discover in 5 minutes that the "sensor" has vanished. And another one has "appeared" — on a different topic, with a different identifier. This technique is an adaptation of FHSS (Frequency Hopping Spread Spectrum), patented in 1942 by Hedy Lamarr and George Antheil to protect torpedoes from radio signal jamming. Only instead of radio frequencies — MQTT topics, and instead of a random sequence — a deterministic chain of SHA-256 hashes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Constant stream.&lt;/strong&gt; Between messages, Telegraph sends a heartbeat — an IoT packet with no payload — every 20 seconds. An observer sees a steady stream of telemetry. No "silence → burst → silence" pattern that gives away a chat.&lt;/p&gt;

&lt;h2&gt;
  
  
  Theoretical background
&lt;/h2&gt;

&lt;p&gt;The idea of covert channels in network protocols is not new. The academic community has been actively researching this topic.&lt;/p&gt;

&lt;p&gt;In 2019, Velinov, Mileva, Wendzel, and Mazurczyk published the first systematic study of covert channels in the MQTT protocol — "Covert Channels in the MQTT-based Internet of Things" (IEEE Access, 2019). In 2021, the same group expanded their work into a comprehensive analysis of MQTT 5.0 — "Comprehensive Analysis of MQTT 5.0 Susceptibility to Network Covert Channels" (Computers &amp;amp; Security, Vol. 104, 2021, DOI: 10.1016/j.cose.2021.102207). The authors demonstrated that the MQTT protocol is susceptible to numerous covert data transmission techniques — through header fields, QoS flags, retain bits, and topic structure.&lt;/p&gt;

&lt;p&gt;However, all of these works are theoretical analysis with proof-of-concept Python scripts for researchers. Telegraph is possibly one of the first implementations that combines steganography in IoT telemetry with a user interface and works as a tool, not a lab experiment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Open source and Kerckhoffs's principle
&lt;/h2&gt;

&lt;p&gt;Telegraph's code is fully open. This is a deliberate decision, and here's why.&lt;/p&gt;

&lt;p&gt;In 1883, Dutch cryptographer Auguste Kerckhoffs formulated the principle: a system must remain secure even if everything except the key becomes known to the adversary. Claude Shannon rephrased it more simply: "the enemy knows the system."&lt;/p&gt;

&lt;p&gt;Telegraph follows this principle. The adversary can read all the code, understand the packet format, know the topic generation algorithm. It won't help, because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Without knowing the code phrase, it's impossible to compute the topic (SHA-256, 2²⁵⁶ possibilities)&lt;/li&gt;
&lt;li&gt;Without knowing the topic, it's impossible to find packets among millions of others on the broker&lt;/li&gt;
&lt;li&gt;FHSS rotation every 5 minutes complicates even targeted surveillance&lt;/li&gt;
&lt;li&gt;Zero storage makes retrospective analysis pointless&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The only secret is the code phrase. Everything else is open.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Telegraph does NOT do
&lt;/h2&gt;

&lt;p&gt;Honesty matters more than marketing. Here are the limitations you need to know:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is not Signal or WhatsApp.&lt;/strong&gt; There's no end-to-end encryption at the application level. Transport is protected by TLS (WSS), but the MQTT broker operator can theoretically see packet contents. They don't know that &lt;code&gt;payload&lt;/code&gt; is a message, but if they deliberately analyze your specific traffic knowing the format (and the code is open) — decoding is possible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Timing correlation.&lt;/strong&gt; Two "sensors" appear and disappear synchronously. For mass surveillance, this is invisible. For targeted surveillance — it could be a clue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Endpoint compromise.&lt;/strong&gt; A keylogger on the computer, a camera behind your back, a compromised browser — steganography is powerless against this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wildcard monitoring.&lt;/strong&gt; Subscribing to &lt;code&gt;#&lt;/code&gt; on an MQTT broker with a parser — and all packets are visible. Defense: your own broker.&lt;/p&gt;

&lt;p&gt;Telegraph is a tool for ordinary people who need a simple private channel without registration and without traces. Not for state secrets. If your threat model includes an adversary with unlimited resources — you need something else.&lt;/p&gt;

&lt;h2&gt;
  
  
  Legal status
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Encryption regulations.&lt;/strong&gt; Telegraph does not implement encryption at the application level. TLS is provided at the WebSocket layer (browser + broker) and is a standard channel protection mechanism built into the browser. No licensing required.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data distribution.&lt;/strong&gt; Telegraph is not an information distribution service — there is no server component, no control over data transmission. The transmission function is performed by the public MQTT broker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data retention.&lt;/strong&gt; Data storage obligations fall on the telecom operator (the ISP that sees TLS traffic) and on the information distributor (which Telegraph is not). The application stores nothing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steganography&lt;/strong&gt; as such is not subject to legal regulation in most jurisdictions. Masking data format is not prohibited.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use cases
&lt;/h2&gt;

&lt;p&gt;Who is this for? A few examples:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Journalists and sources.&lt;/strong&gt; A source doesn't want to install apps and create accounts. One HTML file on a USB drive, one phrase — and the communication channel is ready.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Travelers.&lt;/strong&gt; In some countries, messengers are blocked. Telegraph uses the standard MQTT protocol; traffic looks like IoT telemetry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Activists and NGOs.&lt;/strong&gt; Coordination without digital traces. Close the tab — the conversation never existed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Privacy enthusiasts.&lt;/strong&gt; Simply because you can.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IT professionals.&lt;/strong&gt; As a proof of concept and educational example of steganography, covert channels, and single-page applications.&lt;/p&gt;

&lt;p&gt;Can it be misused? Certainly — like any everyday object: a knife can cut bread, or not bread. A postage stamp can be put on a greeting card, or on an envelope with anything inside. We don't control the content of messages, don't store them, have no access to them, and cannot bear responsibility for them — neither practically nor theoretically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical summary
&lt;/h2&gt;

&lt;p&gt;For those interested in the details (others can skip):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Single &lt;code&gt;index.html&lt;/code&gt; file (~400KB with embedded mqtt.js)&lt;/li&gt;
&lt;li&gt;Zero external service dependencies for UI operation&lt;/li&gt;
&lt;li&gt;Protocol: MQTT v3.1.1 over WebSocket Secure&lt;/li&gt;
&lt;li&gt;Topic generation: SHA-256(seed + "/" + timeSlot)&lt;/li&gt;
&lt;li&gt;Profile generation: SHA-256(seed + "/profile")&lt;/li&gt;
&lt;li&gt;FHSS: topic rotation every 5 minutes&lt;/li&gt;
&lt;li&gt;Heartbeat: 20 seconds, with peer presence indication&lt;/li&gt;
&lt;li&gt;Third participant detection (channel compromise warning)&lt;/li&gt;
&lt;li&gt;Feed: last 50 messages, oldest deleted automatically&lt;/li&gt;
&lt;li&gt;Connection: auto mode (single broker) or selection by color&lt;/li&gt;
&lt;li&gt;Links in text are not clickable (leak protection)&lt;/li&gt;
&lt;li&gt;Auto-detection of interface language (RU/EN)&lt;/li&gt;
&lt;li&gt;24 built-in tests (Ctrl+T)&lt;/li&gt;
&lt;li&gt;Mobile-responsive&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/telegraph-stego/telegraph-stego.github.io" rel="noopener noreferrer"&gt;github.com/telegraph-stego&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Live version:&lt;/strong&gt; &lt;a href="https://telegraph-stego.github.io/" rel="noopener noreferrer"&gt;telegraph-stego.github.io&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Academic works on the topic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Velinov A., Mileva A., Wendzel S., Mazurczyk W. Covert Channels in the MQTT-based Internet of Things // IEEE Access. 2019.&lt;/li&gt;
&lt;li&gt;Mileva A., Velinov A., Hartmann L., Wendzel S., Mazurczyk W. Comprehensive Analysis of MQTT 5.0 Susceptibility to Network Covert Channels // Computers &amp;amp; Security. Vol. 104. 2021. DOI: 10.1016/j.cose.2021.102207&lt;/li&gt;
&lt;li&gt;Wendzel S. et al. A Revised Taxonomy of Steganography Embedding Patterns // Proc. ARES 2021. ACM, 2021.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Telegraph is not a secure messenger. It's an experiment at the intersection of steganography, IoT, and minimalism. A tree hidden in a forest of seventeen billion other trees.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>steganography</category>
      <category>mqtt</category>
      <category>iot</category>
      <category>security</category>
    </item>
  </channel>
</rss>
