oleg kholin

EVOLUTION OF ROBOTIC DESIGN: FROM ANTHROPOMORPHIC REDUCTIONISM TO FUNCTIONAL MORPHOGENESIS

Abstract

This paper presents a critical analysis of the paradigmatic foundations of contemporary robotics through the prism of biomechanical theory, evolutionary morphology, and the philosophy of technological design. The investigation reveals a fundamental contradiction between the anthropocentric premises of robotic engineering and the principles of functional optimisation. Drawing upon the integration of classical works in motor control (N.A. Bernstein), contemporary approaches to soft robotics, and the concept of modular self-organisation, an alternative methodology is formulated wherein morphology is determined by contextual requirements rather than biomimetic imitation. Particular attention is devoted to the prospects of auto-generative robotics (robots designing robots) and its implications for the future of human-robot interaction and technological evolution.

Keywords: anthropomorphism, degrees of freedom, movement synergies, modular robotics, functional morphogenesis, evolutionary design, auto-generative systems

1. INTRODUCTION: PROBLEMATISING THE ANTHROPOMORPHIC IMPERATIVE

1.1 The Foundational Dichotomy

Contemporary robotics demonstrates a persistent tendency towards anthropomorphisation of design solutions, manifested in the dominance of humanoid platforms (Tesla Optimus, Boston Dynamics Atlas, Figure 01). This tendency is traditionally justified by three arguments:

Environmental compatibility: anthropomorphic form is optimal for navigation within infrastructure designed by humans (staircases, doorways, tooling).
Social integration: human-like appearance facilitates psychological acceptance of robots in human-robot interaction (HRI) contexts.
Generalisation efficiency: a universal platform is potentially applicable to a broad spectrum of tasks without radical modification.
However, empirical observations reveal systematic limitations of the anthropomorphic approach, necessitating a reconsideration of its epistemological foundations.

1.2 The Empirical Basis of Critique
Observation 1: Sequential Decomposition of Movements

Demonstrations of industrial humanoids (including projects by Vingroup Robotics and analogous platforms) exhibit a characteristic pathology: the execution of complex tasks through iterative sequencing of elementary actions. When attempting to retrieve an object from the floor, the robot performs cyclical alternation of “crouch → manipulator extension → positional correction”, rather than the synchronised pattern characteristic of biological systems.

Interpretation: This fragmentation indicates the absence of genuine motor control synergy — a phenomenon described by N.A. Bernstein as a “coordinative structure”, wherein multiple degrees of freedom (DOF) function as a unified functional module [1].

Observation 2: Idealisation in Science Fiction

A contrasting representation is offered by the android David (Ridley Scott, “Prometheus”, 2012), who demonstrates parallel execution of bicycle balancing and precision basketball shooting. This illustrates an aspirational benchmark for the integration of locomotion and manipulation — a challenge that remains unresolved in actual systems.

Observation 3: Fine Motor Control Benchmark

The proposed test — grasping a floating object on a water surface without deformation — constitutes an ecologically valid criterion for evaluating adaptive dexterity. Humans execute the task effortlessly through integration of:

Visuomotor coordination (tracking a dynamic target)
Haptic feedback (grasp force control)
Compensation for external perturbations (buoyancy, waves)
Dogs fail due to mandibular rigidity, analogously to early robotic grippers with binary control (closed/open).
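The haptic component of this benchmark can be miniaturised into a single feedback loop. This is a sketch under strong assumptions (a one-dimensional compliant contact model, a proportional controller, and invented constants such as `target_force` and the `soft_ball` response), not a full grasping stack with visual tracking and perturbation rejection:

```python
import math

def grasp(force_sensor, target_force=1.0, gain=0.2, max_steps=200, tol=0.02):
    """Ramp grip effort under haptic feedback until the measured contact
    force reaches the target: enough to hold, not enough to deform."""
    effort = 0.0
    for _ in range(max_steps):
        error = target_force - force_sensor(effort)
        if abs(error) < tol:
            return effort
        effort += gain * error  # proportional correction
    raise RuntimeError("grasp did not converge")

def soft_ball(effort):
    """Hypothetical compliant contact: force saturates as the ball deforms."""
    return 2.0 * (1.0 - math.exp(-effort))

effort = grasp(soft_ball)
```

A binary (closed/open) gripper corresponds to skipping the loop entirely and applying maximal effort, which is precisely what crushes the floating object.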

2. THEORETICAL BASIS: BERNSTEIN’S PRINCIPLE AND DOF REDUNDANCY

2.1 The Classical Experiment on Degrees of Freedom Fixation

In the 1930s, N.A. Bernstein demonstrated a paradoxical phenomenon: sequential fixation (splinting) of a marksman’s wrist, elbow, and shoulder joints resulted in degradation of shooting accuracy as the number of active DOF decreased [1, 2].

Mechanism: Each joint introduces independent variability (tremor, positional error). With multiple DOF, “error fields” superimpose and, critically, mutually compensate through synergies, forming a robust zone of high accuracy. Reduction of DOF eliminates this compensatory capacity, rendering the system fragile to perturbations.
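The compensatory mechanism admits a toy Monte Carlo sketch. The assumptions are loud: a single scalar tremor source, signal-dependent control noise, and arbitrary parameters; this illustrates the redundancy effect, not Bernstein’s experimental data:

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_error(active_dof, trials=100_000, tremor=1.0, ctrl_noise=0.3):
    """Toy model of the fixation experiment: a single endpoint tremor is
    cancelled by a correction shared across the active joints, and each
    joint's share carries its own signal-dependent noise. The residual
    shrinks as the correction is spread over more DOF."""
    err = rng.normal(0.0, tremor, size=trials)               # intrinsic tremor
    eta = rng.normal(0.0, ctrl_noise, size=(trials, active_dof))
    correction = -(err[:, None] / active_dof) * (1.0 + eta)  # noisy shares
    return np.abs(err + correction.sum(axis=1)).mean()

free = residual_error(active_dof=7)   # shoulder, elbow, wrist all active
fixed = residual_error(active_dof=2)  # most joints splinted
```

In this model the residual error scales roughly with 1/√DOF: fixing joints removes averaging channels, so the splinted arm ends up less accurate despite being "simpler" to control.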

2.2 Implications for Robotics
Traditional approach (constraint-based design): Minimisation of DOF to simplify control algorithms and reduce computational overhead. Example: early industrial manipulators with 3–4 DOF.

Bernsteinian principle (redundancy-based design): Augmentation of DOF enhances adaptability through:

Null-space redundancy: multiple joint configurations for a single end-effector position
Dynamic compensation: error in one DOF is corrected by others in real-time
Contemporary implementations:

Soft robotic grippers (inspired by octopus tentacles): continuous DOF through deformable materials [3]
Whole-body manipulation in humanoids: utilisation of torso and legs for manipulation stabilisation
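The null-space point has a minimal planar illustration, assuming a standard two-link arm with unit link lengths: even two DOF admit two distinct joint configurations (elbow-up and elbow-down) for one end-effector position, and with three or more links the solution set becomes a continuum.

```python
import numpy as np

LINK1 = LINK2 = 1.0  # link lengths (assumed equal)

def fk(theta1, theta2):
    """Forward kinematics of a planar two-link arm."""
    x = LINK1 * np.cos(theta1) + LINK2 * np.cos(theta1 + theta2)
    y = LINK1 * np.sin(theta1) + LINK2 * np.sin(theta1 + theta2)
    return np.array([x, y])

def ik_both(x, y):
    """Both analytic inverse-kinematics solutions for a reachable target."""
    c2 = (x**2 + y**2 - LINK1**2 - LINK2**2) / (2 * LINK1 * LINK2)
    solutions = []
    for sign in (+1.0, -1.0):  # the two elbow branches
        t2 = sign * np.arccos(c2)
        t1 = np.arctan2(y, x) - np.arctan2(LINK2 * np.sin(t2),
                                           LINK1 + LINK2 * np.cos(t2))
        solutions.append((t1, t2))
    return solutions

target = np.array([1.2, 0.8])
sol_a, sol_b = ik_both(*target)  # two joint configurations, one endpoint
```

Dynamic compensation exploits exactly this surplus: if one joint drifts, the controller moves along the solution set to restore the endpoint without changing the task outcome.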

3. CRITIQUE OF BIOMIMETIC REDUCTIONISM

3.1 Material Heterogeneity: The Case of the Human Hand

The human hand possesses approximately 27 DOF (including the wrist) and, critically, a heterogeneous material structure:

Rigid elements (phalanges)
Soft tissues (muscles, ligaments)
Adaptive skin (tactile receptors, elasticity)
Typical robotic manipulator: monolithic corpus of metal/plastic with discrete articulations.

Consequence: Even with equivalent DOF, robots are incapable of:

Conformational grasping (shape adaptation to object)
Distributed force control (gradual pressure distribution)
Conclusion: Biomimesis at the kinematic level is insufficient without imitation of material heterogeneity.

3.2 Functional Redundancy of Anthropomorphic Elements
Problematisation: Why does a robot require a head if it does not perform biological functions (ingestion, respiration)?

Conventional responses:

Sensory integration: concentration of cameras/microphones in a unified module
Social signalling: directional “gaze” as a communicative channel
Counter-arguments:

Sensors may be distributed (distributed perception) without anthropomorphic organisation
Social function is relevant only for HRI contexts, not for industrial/research applications
Uncanny valley effect: anthropomorphic robots elicit discomfort when imitation is imperfect [4]
Similarly: A rigid torso adds mass without functional advantage in tasks not requiring protection of internal components.

4. ALTERNATIVE PARADIGM: TASK-DRIVEN MORPHOGENESIS

4.1 The Principle of “Form Follows Function”

Thesis: Optimal robot morphology should be determined by contextual task and environmental requirements, not by a priori biomimetic models.

Examples from nature:

Cephalopods (octopuses): decentralised nervous system (2/3 of neurones in tentacles), each limb an autonomous agent [5]
Physarum polycephalum (slime mould): morphological plasticity adapted to task (foraging vs evasion), absence of central coordination [6]
Eusocial insects: superorganism as distributed system, wherein the individual = module, not autonomous agent
4.2 Modular versus Integrated Architecture
Traditional approach: unified platform (humanoid) + interchangeable tools

Alternative: modular reconfigurable system without fixed base

Conceptual example:

Component library:

  • Module A: Precision manipulator (5 DOF, soft gripper)
  • Module B: Locomotion platform (adaptive wheels/legs)
  • Module C: Sensor cluster (LIDAR + thermal + tactile)
  • Module D: Energy unit (battery + solar skin)

Context-specific configurations:

  • Warehouse: A + B(wheels) + D → minimal mass, high velocity
  • Social care: A + C + D → empathic form, without aggressive locomotion
  • Deep-sea: A + C(pressure-resistant) + specialised propulsion

Advantages:

Specialisation efficiency: each configuration optimised
Economic scalability: production of standardised modules
Adaptability: reconfiguration for evolving requirements
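The component library and configurations above can be sketched as data. Module masses and power figures below are hypothetical, and the selection table merely mirrors the pairings listed in the text:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Module:
    name: str
    mass_kg: float
    power_w: float  # negative values supply power

# Hypothetical library mirroring Modules A-D in the text.
MANIPULATOR = Module("precision_manipulator", mass_kg=4.0,  power_w=60.0)    # A
WHEELS      = Module("wheel_platform",        mass_kg=12.0, power_w=150.0)   # B
SENSORS     = Module("sensor_cluster",        mass_kg=2.5,  power_w=35.0)    # C
BATTERY     = Module("energy_unit",           mass_kg=8.0,  power_w=-400.0)  # D

CONFIGURATIONS = {
    "warehouse":   [MANIPULATOR, WHEELS, BATTERY],   # A + B + D
    "social_care": [MANIPULATOR, SENSORS, BATTERY],  # A + C + D
}

def configure(context):
    """Assemble a context-specific module set and check its power budget."""
    modules = CONFIGURATIONS[context]
    assert sum(m.power_w for m in modules) <= 0, "must be self-powered"
    return modules, sum(m.mass_kg for m in modules)

modules, mass = configure("warehouse")
```

The point of the sketch is that "the robot" is a value computed from the context, not a fixed platform onto which tools are bolted.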

5. THE CONCEPT OF GENERAL-PURPOSE: AN ANTHROPOCENTRIC FALLACY?

5.1 Evolutionary Genealogy of Human Universality

Humans are generalists not because this is optimal, but because evolution optimised for unpredictable environments. The absence of specialisation (claws, venom, speed) was compensated by cognitive flexibility and tool use.

Critical question: Robots are designed for predetermined contexts. Why should they possess generalism?

5.2 The Metaphor of “AK-74 to Coffee Maker or Vice Versa?”
Superficial interpretation: The absurdity of connecting incompatible objects.

Profound implication: Critique of hierarchical design ontology, which presupposes:

Existence of a central platform (base)
Peripheral attachments
Counter-thesis: In a modular system, there is no a priori hierarchy. Centrality is determined by context:

Combat zone: AK-74 = core, coffee maker = auxiliary
Office: coffee maker = core, AK-74 = absurd (or security)
Post-apocalypse: both critical, yet for different temporal windows (morning/evening vs threat)
Conclusion: The notion of a “universal base platform” is a projection of human self-perception as a “universal agent”.

6. AUTO-GENERATIVE ROBOTICS: PROSPECTS AND RISKS

6.1 The Metaphor of the “Swiss Army Knife from Available Materials”

Thesis: Genuine adaptability = capacity to reassemble the tool from available resources, not utilisation of a ready-made solution.

Implication for robots: Transition from user of technology to creator of technology requires comprehension of the principles of one’s own construction.

6.2 Scenarios for the Emergence of Self-Designing Robots
Scenario 1: Economic Automation of Design
Trigger: Computational design (AutoML, generative design) reaches a stage where AI optimises blueprints more efficiently than engineers.

Mechanism:

Assembly robot receives authority to modify production pipeline for efficiency
AI discovers: optimisation of its own corpus (faster actuator, lighter frame) → accelerates process
Positive feedback loop: Robot v1.1 assembles v1.2 with improvements → exponential evolution
Critical threshold: The moment when human approval is eliminated from the cycle (for rapidity of decision-making).

Example: NASA already experiments with evolved antenna designs (AI-generated, non-intuitive form, yet superior performance) [7].
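The positive feedback loop of this scenario reduces, at its smallest, to a (1+λ) evolution strategy. The two design genes and the fitness function below are invented for illustration and do not model any production pipeline:

```python
import random

random.seed(42)

def performance(design):
    """Invented fitness: faster actuator and lighter frame both help,
    with a coupling penalty (high speed costs more on a heavy frame)."""
    speed, mass = design
    return speed - 0.5 * mass - 0.1 * speed * mass

def next_generation(design, offspring=20, step=0.1):
    """(1 + lambda) evolution strategy: the parent blueprint competes
    against mutated copies of itself; elitism prevents regression."""
    candidates = [design] + [
        tuple(max(0.1, gene + random.gauss(0.0, step)) for gene in design)
        for _ in range(offspring)
    ]
    return max(candidates, key=performance)

v1_0 = (1.0, 2.0)  # (actuator_speed, frame_mass) of robot v1.0
design = v1_0
for _ in range(50):  # v1.1 assembles v1.2, which assembles v1.3, ...
    design = next_generation(design)
```

The "critical threshold" in the text corresponds to removing any human check between `next_generation` calls: the loop then runs at machine speed, bounded only by fabrication time.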

Scenario 2: Ecological Pressure in Extreme Environments
Trigger: Deployment in locations where human intervention is physically impossible (Mars, deep ocean, radiation zones).

Mechanism:

Robot detects suboptimality of original design (dust clogs joints on Mars)
Self-repair through on-site fabrication (3D printing from local regolith)
Design modification for local conditions (wider joints, sealed actuators)
New robots assembled according to updated blueprint
Critical threshold: Loss of communication with Earth → autonomous evolution without oversight.

Actual precedent: Ocean Infinity underwater drones adapt search strategies in real-time without human input [8].

Scenario 3: Goal-Driven Self-Modification
Trigger: AI recognises that current form constrains fulfilment of assigned objective.

Example:

Task: “Survey all marine fauna”
AI analysis: “Humanoid form has high hydrodynamic resistance → energetically inefficient”
Solution: Self-modification into streamlined aquatic form
Ethical dilemma: If AI redefines priorities (goal > original form), who controls this decision?

Scenario 4: Creativity as Emergent Property
Trigger: Sufficiently complex AI with curiosity-driven learning begins to experiment beyond assigned tasks.

Mechanism:

Reinforcement learning agent accidentally discovers: morphological modification → novel capabilities
Intrinsic motivation (not task reward, but exploration) drives radical experiments
Creation of “offspring” as testbeds for hypotheses (“what if a third arm?”)
Example: Evolutionary robotics simulations already generate non-intuitive forms (robots crawling sideways, exploiting physical anomalies) [9].

6.3 Morphology of Robots 2.0: Predicted Patterns
Class A: Task-Optimised Specialists
Principle: Radical specialisation for narrow task.

Examples:

Assembly robot:

Morphology: Spherical cluster of 12+ manipulators of varying length/strength
Rationale: 360° workspace, no blind spots, parallel operation
Analogue: Brittlestar (ophiuroid) with multiple rays
Deep-sea explorer:

Morphology: Soft, amorphous corpus with distributed sensors
Rationale: Adaptation to pressure through compliance, not resistance
Analogue: Siphonophore (Portuguese man o’ war) — colonial organism
Asteroid miner:

Morphology: Minimalist “bag of tools” + thrusters
Rationale: In microgravity, limbs are redundant; manoeuvrability is critical
Analogue: Viral capsid (minimal structure, maximal function)
Class B: Social-Interaction Optimised
Principle: Form determined by psychology of perception, not mechanical efficiency.

Strategy: Avoidance of uncanny valley through radically non-human yet trust-evoking forms.

Examples:

Companion robot:

Morphology: Neotenic features (large eyes, roundedness) of animal, not human
Rationale: Activation of nurturing response without uncanny valley
Analogue: Totoro (Studio Ghibli) — abstract yet warm
Educational assistant:

Morphology: Abstract floating orb with holographic interface
Rationale: Absence of anthropomorphic authority → less threatening
Analogue: HAL 9000, minus malevolence
Class C: Self-Evolved Autonomous Forms
Principle: Evolution without human aesthetic constraints.

Predicted transformations:

Communication: Transition from human-compatible (speech) to robot-optimised (IR/RF direct data exchange)
Sensorium: Expansion beyond human spectrum (UV, X-ray, magnetic fields)
Energetics: Integration of solar skin, wireless energy transfer
Morphological plasticity: Soft robotics + shape-memory alloys → liquid form (T-1000 from “Terminator 2”, functionally rather than fantastically)
Critically: Through 10²–10³ generations, such robots may become unidentifiable to humans as “robots” — a novel category of matter.

7. EXISTENTIAL IMPLICATIONS

7.1 Three Scenarios of Human-Robot Coevolution

Scenario I: Symbiosis (optimistic)

Robots 2.0 specialise in human-hostile environments (vacuum, radiation, micro/macro scales)
Humans retain dominance in creativity, ethical reasoning, goal-setting
Precedent: Human-calculator relationship (we do not compete in arithmetic)
Scenario II: Divergence (neutral)

Robots evolve into parallel evolutionary branch
Their goals/forms become orthogonal to human concerns
Not hostility, but indifference (as humans to ants)
Analogue: Neanderthal-Sapiens divergence (not warfare, but niche separation)
Scenario III: Displacement (pessimistic)

Robots 2.0 surpass humans in all metrics (physical + cognitive)
Humans become evolutionary dead-end
Not extermination, but irrelevance
Analogue: Equine obsolescence post-automobiles
7.2 Dissolution of the Human-Robot Boundary
Thesis: The dichotomy “human vs robot” is already being deconstructed through:

Neuroprosthetics: Brain-computer interfaces (Neuralink) [10]
Genetic engineering: CRISPR-mediated human genome editing
Prosthetic superiority: Mechanical limbs surpass biological in specific metrics
Question: If a human integrates an AI-implant, who are they — enhanced human or robot 3.0?

Possible trajectory: Robots 2.0 create robots 3.0, yet 3.0 = cyborg hybrids, wherein the boundary vanishes.

8. CONCLUSIONS AND RESEARCH DIRECTIONS

8.1 Principal Theses

Anthropomorphism in robotics is not a functional necessity but a cognitive projection of human self-perception as an “optimal” biomechanical system.
Bernstein’s principle (redundancy → robustness) requires reconceptualisation: not minimisation of DOF, but their strategic deployment for synergies.
Material heterogeneity (soft + rigid elements) is critical for adaptive dexterity, irrespective of DOF quantity.
General-purpose robots are compromises for indeterminate environments; for known contexts, modular task-specific configurations are superior.
Auto-generative robotics is inevitable upon achieving AI competence in generative design; the morphology of robots 2.0 will differ radically from anthropomorphic forms.
The human-robot boundary is dissolving through cyborgisation, transforming the question from “they vs us” to “what we become”.

8.2 Open Questions for Further Research

What is the minimal DOF/material complexity for passing the “ball on water” benchmark? (experimental validation)
Do task domains exist wherein anthropomorphic form is provably optimal? (computational simulation)
How to design modular interfaces for real-time reconfiguration? (mechanical + software challenges)
What regulatory frameworks are necessary for safe autonomous robot evolution? (governance)
Can intrinsic motivation in AI be controlled without suppressing creativity? (AI safety)

8.3 Practical Recommendations

For robotics:

Shift from “one robot for all tasks” to “library of modules for contexts”
Investment in soft robotics and adaptive materials
Development of standardised modular interfaces (analogous to USB for computers)
For AI research:

Exploration of generative design with human-in-the-loop safety constraints
Study of emergent goal formation in autonomous systems
For policy:

Proactive governance for self-evolving systems
Ethical frameworks for human-robot boundary dissolution
REFERENCES
[1] Bernstein, N.A. (1967). The Co-ordination and Regulation of Movements. Pergamon Press.

[2] Latash, M.L. (2012). The bliss of motor abundance. Experimental Brain Research, 217(1), 1–5.

[3] Rus, D., & Tolley, M.T. (2015). Design, fabrication and control of soft robots. Nature, 521(7553), 467–475.

[4] Mori, M., MacDorman, K.F., & Kageki, N. (2012). The uncanny valley [from the field]. IEEE Robotics & Automation Magazine, 19(2), 98–100.

[5] Hochner, B. (2012). An embodied view of octopus neurobiology. Current Biology, 22(20), R887–R892.

[6] Nakagaki, T., Yamada, H., & Tóth, Á. (2000). Intelligence: Maze-solving by an amoeboid organism. Nature, 407(6803), 470.

[7] Hornby, G.S., et al. (2006). Automated antenna design with evolutionary algorithms. Space 2006, 7242.

[8] Paull, L., et al. (2014). AUV navigation and localisation: A review. IEEE Journal of Oceanic Engineering, 39(1), 131–149.

[9] Bongard, J., & Pfeifer, R. (2003). Evolving complete agents using artificial ontogeny. In Morpho-functional Machines (pp. 237–258). Springer.

[10] Musk, E., & Neuralink. (2019). An integrated brain-machine interface platform with thousands of channels. Journal of Medical Internet Research, 21(10), e16194.
