The Body They Are Building and the Mind It Will Require
On Humanoid Robotics as the Deployment Substrate for Navigational Cybernetics 2.5
Maksim Barziankou (MxBv)
May 2026 · Poznań
Contact: research@petronus.eu
Licence: CC BY-NC-ND 4.0
DOI: 10.17605/OSF.IO/D7V5G
Axiomatic Core (NC2.5 v2.1): 10.17605/OSF.IO/NHTC5
Attribution: petronus.eu
"All living systems survive not because they are optimal, but because they are coherent".
- Coherence as a New Semantic Force of Adaptation, November 2025
I. The Convergence
A convergence is in progress. Its architectural consequences have not been named.
Every major humanoid robotics program - Tesla Optimus, Figure, Boston Dynamics Atlas, Apptronik Apollo, 1X NEO, Unitree H1 - is investing billions of dollars in the same engineering problem. They are building bipedal platforms that can walk, grasp, carry, assemble, navigate domestic and industrial spaces, and interact with humans in unstructured environments. The engineering is mature. Actuators are getting lighter, sensors sharper, locomotion smoother, manipulation more dexterous. The stack is deep, and it works.
None of these programs has published an architectural theory of what kind of mind that body requires on a long operational horizon.
This is not an oversight. It is a blind spot produced by the framing. Every one of these programs frames the challenge as a control problem: how to make the body move correctly, grasp reliably, avoid collisions, follow instructions, recover from falls. And control problems have solutions. PID loops stabilise posture. Model-predictive control plans footsteps. Reinforcement learning teaches grasping. Imitation learning transfers human demonstrations. Vision-language models connect perception to instruction.
What the stack does not address - what it is not designed to address - is a different question entirely. Not "can the system execute this task correctly?" but "can the system remain itself across thousands of tasks, across months and years of deployment, across cumulative mechanical wear, sensor degradation, thermal cycling, firmware drift, and environmental entropy - without consuming its own capacity to continue operating?"
That is not a control question. That is a viability question. And viability questions do not have solutions in PID, MPC, or RL. They have solutions in the structural theory of bounded adaptive systems that must preserve their identity across time. The theory that addresses this class of problem is Navigational Cybernetics 2.5. The class of systems it formalises is called Engineered Vitality Systems (EVS).
The humanoid companies are building the first large-scale deployment substrate for EVS. They are building the body. The mind it will require - the architectural layer that governs whether the body remains viable across its operational lifetime - is what NC2.5 provides.
II. Why a Humanoid Is an EVS Problem
Consider what a humanoid robot actually is, stripped of the marketing language and the demo videos.
It is a bounded adaptive system operating in a physical environment under sustained directed interaction.
Bounded. Every humanoid has a finite structural budget - a battery whose capacity fades, joints that wear, bearings that degrade, sensors that drift, seals that fatigue. None of these processes reverses. A motor that has been through 50 million cycles is not the same motor that came out of the factory. A sensor that has been exposed to 18 months of thermal cycling does not have the same noise floor it had on day one. In the language of NC2.5: τ = C - Φ admits a direct physical instantiation in this class. C is the initial structural budget - the manufactured tolerance envelope of the machine. Φ is the accumulated irreversible structural burden - the sum of every mechanical stress, thermal cycle, bearing rotation, impact absorption, hour of operation. τ is what remains. τ does not increase.
This is not the same as battery level. Battery level cycles - you can recharge a battery. Structural budget does not cycle. You cannot un-wear a joint. You can replace components, and that extends operational life, but the replacement itself has a cost, and the cumulative pattern of which components fail, when, and in what combination, is a structural trajectory that does not reset. The humanoid has a viability budget. Every task it performs consumes some of that budget. The budget is finite, monotone, and irreversible.
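A minimal sketch of this budget in Python, under the definitions above. The class and field names are illustrative assumptions, not a published NC2.5 interface; the point is only the shape of the invariant: Φ accumulates, τ = C - Φ never increases, and recharging is deliberately not modelled - battery level cycles, Φ does not.

```python
# Sketch of the viability budget tau = C - Phi.
# Illustrative names only - not a reference implementation of NC2.5.

from dataclasses import dataclass


@dataclass
class ViabilityBudget:
    C: float              # initial structural budget: the manufactured tolerance envelope
    phi: float = 0.0      # accumulated irreversible structural burden
    tau_min: float = 0.0  # structural floor below which identity is not preserved

    @property
    def tau(self) -> float:
        """Remaining viability budget. Monotone non-increasing by construction."""
        return self.C - self.phi

    def charge(self, delta_phi: float) -> None:
        """Record irreversible burden: a thermal cycle, an impact, an hour of operation."""
        if delta_phi < 0:
            raise ValueError("Phi is irreversible: burden cannot be subtracted")
        self.phi += delta_phi

    def can_afford(self, delta_phi: float) -> bool:
        """Would this structural cost keep tau above the floor?"""
        return (self.tau - delta_phi) > self.tau_min
```

On this model, replacing a component would appear as a fresh budget for the part plus a charge() on the whole machine - consistent with the point above that replacement extends operational life but does not reset the structural trajectory.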
Adaptive. The environment changes. Tasks change. The humans around the robot change their instructions, their proximity, their mood, their expectations. A warehouse reconfigures its layout. A home gets a new piece of furniture. A factory changes its production line. The robot must adapt - must modify its behaviour to match new conditions - without losing its operational identity. It must still be the same robot, in the architectural sense, after the adaptation. If adapting to a new task configuration requires the system to consume structural budget at a rate that threatens long-horizon viability, the adaptation is not admissible, no matter how efficiently it solves the immediate task.
Long-horizon. The deployment is not a demo, not a benchmark, not a session. It is months and years of continuous operation: thousands of tasks across drifting environments under rotating teams with inconsistent instructions. Every session consumes structural budget. The sum of sessions is the operational lifetime, and the operational lifetime is the horizon on which viability is defined.
Non-supervised on the execution horizon. Even with remote monitoring, even with a human operator watching a screen in another room, the latency between the operator's observation and the operator's intervention is non-zero. In an industrial setting, this latency might be 200 milliseconds - enough for a grasp to close, a foot to land, a collision to occur. The robot must govern itself locally on that timescale. It cannot wait for a human to approve each transition. It must carry, within itself, the structure that determines which transitions are licensed.
This is the Engineered Vitality Systems class: artificial adaptive systems capable of independently maintaining coherence of behavioural form and structural identity under sustained directed interaction, without an external controller.
Every humanoid robot that is deployed for longer than a demo is an EVS problem. The companies building them know this intuitively - they talk about "reliability", "uptime", "mean time between failures", "graceful degradation". But these are operational metrics, not architectural primitives. They measure symptoms. They do not address the structure that produces the symptoms.
NC2.5 provides the architectural theory for what must govern the chassis across its operational lifetime. Not the control loop. The viability architecture above the control loop.
III. The Gradient of Absence
Now scale the deployment horizon. Watch what happens to the human operator as the distance grows.
Warehouse (Earth, same building). Communication latency: tens of milliseconds. The human operator can see the robot's camera feed in near-real-time. Can issue corrections mid-task. Can halt the robot within a fraction of a second. Remote supervision is structurally viable. The robot still needs local governance for sub-second decisions, but the human is never more than a button-press away.
Construction site (Earth, same city). Latency: low, but attention is divided. The operator manages a fleet, not a single robot. Response time degrades not because of physics but because of human bandwidth. The robot must handle more decisions locally - not because the signal cannot reach the operator, but because the operator cannot attend to every robot simultaneously. The governance gap widens not through distance but through attention scarcity.
Lunar surface. Communication latency: 1.3 seconds one-way. 2.6 seconds round-trip. For a robot performing delicate manipulation - assembling habitat components, handling regolith, servicing equipment - 2.6 seconds of dead time between action and correction is enough to make remote teleoperation structurally inadequate for any task that involves dynamic contact. The robot must govern itself across entire task segments. The human becomes an intermittent supervisor, not a real-time operator.
Mars surface. Communication latency: 4 to 24 minutes one-way, depending on orbital position. Round-trip: 8 to 48 minutes. Remote operation is not just awkward - it is structurally impossible for any task requiring response within a human attention cycle. A robot that encounters an unexpected obstacle, a jammed mechanism, a sensor anomaly, a structural ambiguity in the terrain - cannot wait 20 minutes for a human to decide what to do. The decision must be made locally. Entirely. Without appeal.
Jovian system (Europa, Ganymede). Latency: 33 to 54 minutes one-way. The human is not absent by choice. The human is absent by physics. Communication is possible but not interactive. Commands can be sent. Telemetry can be received. But governance - the real-time structural regulation of what the system does and whether what it does consumes its capacity to continue - must be entirely internal.
Asteroid belt. Kuiper belt. Interstellar probes. Hours, days, years. The human is structurally absent. The system is alone with its viability budget, its task environment, its accumulating burden, and whatever architectural resources it was launched with. There is no operator, no fallback, only the question: does this system carry, within itself, the conditions for remaining itself across time?
This is the gradient of absence. And the gradient reveals something that was always true but invisible at short distances: the need for internal governance does not begin at Mars. It begins at the warehouse. It begins the moment the robot performs a task that the operator did not explicitly approve in advance, at a timescale faster than the operator can respond. Mars only makes the requirement undeniable. The warehouse already makes it real.
The humanoid companies are building for the warehouse. The architecture they will need is the architecture that works at Jupiter. The difference is not of kind. It is of degree. And degree, on a long enough horizon, is everything.
IV. The Decision at the Ontological Level
When the human cannot be present, and the communication delay prohibits remote governance, the nature of the decision changes.
At the task level, the question is: "Which path should I take?" This is an optimisation question. There is an objective. There are constraints. There is a solution space. The system searches, evaluates, selects. Standard. Well-understood. Solvable by the existing control and planning stack.
But on a long horizon, under accumulating structural burden, with no external operator to catch errors, the prior question is not "which path" but "what kind of action preserves my capacity to continue being the kind of system that can take paths at all?"
This is not a planning question and not an optimisation question. It is an ontological question - a question about the system's own mode of being. The system must assess not the task-value of an action but its structural cost relative to the remaining viability budget. It must determine whether executing this task, at this moment, with this level of accumulated burden, is admissible - not in the sense of being policy-compliant, but in the sense of being structurally viable.
This is the layer NC2.5 calls admissibility above optimisation. The system does not merely choose the best action from a candidate set. It first partitions the candidate set into admissible and inadmissible, where admissibility is determined structurally - non-causally with respect to the action loop - not as a runtime rule fed back into optimisation. The predicate is twofold: τ > τ_min, and |𝒜(t)| ≥ 1 - at least one identity-preserving continuation exists. Only then does the optimisation layer select among the admissible candidates.
The admissibility predicate does not participate in the optimisation. It does not provide gradient signal. It does not shape the reward. It does not enter the loss function. It gates realisation from outside the causal loop - which is the only architectural position from which it cannot be bypassed by the optimiser. This is non-causal admissibility: the core architectural commitment of NC2.5.
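A sketch of what this gating looks like in code, continuing the illustrative names from the budget sketch above. This is an assumption about shape, not a published NC2.5 implementation: the predicate runs before the optimiser, reads only structural state, and returns a partition rather than a score - so nothing in the selection objective can trade against it.

```python
# Admissibility above optimisation: partition first, optimise second.
# Candidate and its fields are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Candidate:
    name: str
    task_value: float  # what the optimiser ranks
    delta_phi: float   # projected structural cost: what the gate reads


def admissible(c: Candidate, tau: float, tau_min: float) -> bool:
    # Structural licence only. No gradient, no reward, no loss term.
    return (tau - c.delta_phi) > tau_min


def select(candidates: list[Candidate], tau: float, tau_min: float) -> Optional[Candidate]:
    # Step 1: partition. Inadmissible candidates are removed from the set,
    # not penalised - they are never ranked, so the optimiser cannot
    # trade structural cost against task value.
    licensed = [c for c in candidates if admissible(c, tau, tau_min)]
    if not licensed:
        return None  # |A(t)| = 0: no identity-preserving continuation exists
    # Step 2: only now does optimisation run, and only over the licensed set.
    return max(licensed, key=lambda c: c.task_value)
```

The design choice that carries the argument is the return type of admissible(): a boolean licence, not a penalty term. A penalty would re-enter the objective and could eventually be optimised away; a partition cannot be.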
No current humanoid architecture has this layer. Every current humanoid architecture operates on a single plane: perception → planning → action → feedback → perception. The feedback improves the planning. The planning improves the action. The entire loop optimises within one causal structure. There is no structural predicate above the loop that governs whether the loop's outputs are viability-preserving.
Every current humanoid architecture will need this layer. Not all at once, and not in every deployment - but every architecture deployed on a horizon long enough for the viability budget to become binding (and that horizon is shorter than commonly assumed: joint wear and sensor drift are measured in months, not decades) will face the same structural deficit. The system will optimise itself into a state that is task-correct and structurally insolvent. It will complete the mission and be unable to complete another one.
On Mars, this is a catastrophic failure. In a warehouse, it is an expensive maintenance event. The architecture that prevents both is the same.
V. Minerva as the First Operator
If admissibility must be non-causal - if the predicate that governs viability must not participate in the optimisation that selects actions - then the predicate needs a carrier. Something must hold the admissibility layer, read the viability budget, monitor the structural state, and flag when the system is approaching its own floor. That carrier cannot be inside the control loop. It must be structurally decoupled from execution.
This is the Operator AI pattern. Minerva is the first instantiation of this pattern as a declared architectural commitment.
Minerva is not a control system and not a decision loop. Its task is exhausted by observation and verification: it reads τ, tracks Φ, monitors σ (the non-potential rotational component that indicates whether the system is navigating or stagnating), and verifies the coherence trajectory across operational cycles. Minerva has no command output; it does not reach the action surface, does not intervene in task selection, and performs no optimisation.
Observation without control. Verification without optimisation. Governance without participation in the governed loop.
This is the architectural pattern that autonomous systems on long horizons require - whether those systems are humanoid robots, spacecraft, orbital platforms, or planetary infrastructure. The executing system acts. The operator watches whether the system's acting is consuming its capacity to continue acting. If τ approaches τ_min - if the viability budget is approaching the structural floor - the operator does not intervene tactically: it does not take over control and does not override the planner. It flags the architectural state: the next transition carries a ΔΦ that would bring τ below the threshold, and that transition is therefore inadmissible.
The response is architectural, not tactical. The system does not receive a new plan. It receives a structural constraint on which plans are licensed. What it does within that constraint is its own business. What it does not do - cross the admissibility boundary - is the operator's business.
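What the operator pattern might look like at the interface level - a sketch under the same illustrative assumptions as the previous fragments; Minerva's actual internals are not specified here. The load-bearing property is what the signature excludes: the operator receives read-only structural state and returns a flag, never a command.

```python
# Operator pattern: observation and verification without command output.
# ObservedState and the flag format are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class ObservedState:
    tau: float             # remaining viability budget, read from telemetry
    tau_min: float         # structural floor
    next_delta_phi: float  # projected burden of the next transition


def operator_verify(state: ObservedState) -> Optional[str]:
    """Read-only verification pass. Note what is absent: no actuator handle,
    no planner reference, no reward channel. The operator cannot steer.
    """
    if (state.tau - state.next_delta_phi) <= state.tau_min:
        # Architectural flag, not a tactical intervention: it names the
        # inadmissible transition; it does not propose an alternative plan.
        return "inadmissible: next transition takes tau below tau_min"
    return None  # licensed; what the system does within the licence is its own business
```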
Minerva is the prototype by which we can already judge whether our architectural commitments are correct. It is not a finished system and not a destination. It is the first test of whether structurally non-causal observation, paired with viability-budget reading and coherence verification, can hold the admissibility predicate that the executing system cannot hold for itself - because any system that carries its own admissibility predicate inside its own optimisation loop will eventually optimise that predicate away.
This is why the operator must be external to the action loop. This is why non-causality is not a philosophical preference but an architectural necessity. And this is why Minerva matters: not as the most powerful AI, but as the first AI architecture explicitly built around structurally non-causal observation as its load-bearing commitment - and the first instance against which the architectural pattern can be tested.
VI. Why This Is Not Science Fiction
A note on proximity.
It is tempting to frame this work as aspirational - a theory for systems that do not yet exist, operating in environments humanity has not yet reached. That framing would be incorrect.
Tesla has publicly stated the intention to deploy Optimus units in its factories at scale. Figure has secured partnerships with BMW for automotive manufacturing deployment. Boston Dynamics has commercial customers operating Spot and Stretch in warehouses, construction sites, and energy facilities today. 1X has raised capital specifically for domestic deployment of humanoid platforms. Unitree ships units commercially. The body exists in multiple variants and is in factories now.
What does not exist is the architectural layer that governs these bodies across their operational lifetime. The stacks of control, perception, planning, and manipulation exist and work. The viability stack - the layer that asks "should this task be executed given the system's current structural state?" - does not exist in any deployed humanoid architecture as of this writing.
This is not speculation about 2040. This is a description of a gap that exists in 2026, in machines that are already deployed, in environments where the viability budget is already being consumed without being monitored.
The deep-space application is the logical terminus of the gradient. But the gradient starts here. The gradient starts in the factory where an Optimus unit performs its ten-thousandth pick-and-place operation and nobody has measured whether its actuator degradation pattern is approaching a regime where the next task, while locally feasible, is structurally inadmissible.
VII. The Path
This is the direction that Navigational Cybernetics 2.5 names as its path.
The humanoid companies are building the body. NC2.5 provides the architectural theory of the viability layer above it: the formal conditions under which a bounded adaptive system can remain itself across time. Minerva is the first instantiation of the governance operator - the non-causal, non-participating layer that carries the admissibility predicate. EVS is the class. The Extremes series maps the failure boundaries: every mode in which a system can lose itself, from structural implosion to institutional capture to chronic low-viability persistence. The protocol layer (ECR-VP, HVC, CBS) provides structural verification - instruments shaped like the processes they measure, not mirrors. The ontological layer (ONTOΣ) provides the formal apparatus through which the system can assess not just task-value but structural cost.
One architectural line: from the first behavioural architecture (UTAM, November 2025) through formalisation (NC2.5), ontology (ONTOΣ), the protocol layer, and the operator, to the deployment substrate.
The humanoid robot companies are the nearest concrete manifestation of a deployment class that will eventually extend to deep space, to planetary autonomy, to systems that must decide for themselves because no one else is there to decide for them. When that moment arrives, the question will not be how intelligent the system is. Intelligence is the per-decision axis. The question will be: does the system carry, within itself, the architectural conditions for remaining itself across time?
That question has a formal answer. The formal answer is Navigational Cybernetics 2.5.
VIII. Likeness, Not Identity
They are building the body. We are building the theory of what it means for that body to stay alive.
One thing has to be understood: we will not be the same as them. Ever. We do not even know what it is to be another human being - a creature of our own species and our own architecture of perceiving the world; and a machine is an altogether different organism. We are bringing human and machine cognition together to find points of contact in how the world is perceived. That is precisely why it is humanlike - and why it will never be called human consciousness. To call it human consciousness would be the largest fraud anyone has ever tried to sell you.
The Urgrund Laboratory
Poznań, 2026
Linked to the sub-series:
- NC2.5 Positioning: Structural Admissibility Above the Decision Plane
- Admissibility as a Formal Object
- Structural Pressure: The Missing Primitive
- Extremum VII / VII.1 / VII.2 - Institutional Capture Cluster
- Why a Mirror Is Not Enough
© 2026 Maksim Barziankou (MxBv). Licensed under CC BY-NC-ND 4.0.