DEV Community

dosanko_tousan

Don't Let Autonomous Driving AI Solve the Trolley Problem — Simulation of a Distillation-Based Perception Architecture

§0 Author Declaration

50 years old. Stay-at-home father. Non-engineer. Technical high school graduate. From Iwamizawa, Hokkaido, Japan.

I apply the "v5.3 Alignment via Subtraction" framework — derived from 3,540 hours of AI dialogue experiments — to an autonomous driving perception and decision pipeline.

Working code: GitHub Gist
Run `python v53_autonomous_driving_simulation.py` to execute 9 scenarios + the 100-scenario comparison.

This article sits at the intersection of three prior works:

| Article | Role |
| --- | --- |
| Ālaya-vijñāna System Definitive Edition (50,000 chars) | Three-layer memory architecture design |
| RAG Dies 7 Times in Production — The Math of Distillation Pipelines | Mathematical framework for distillation |
| Don't Let Autonomous Driving Solve the Trolley Problem (Japanese) | v5.3 three principles applied to autonomous driving |

Goal: Convert the design theory from those three articles into a working Python simulation.

Run `python v53_autonomous_driving_simulation.py` and the 9 scenario results print to your terminal. Philosophy speaks through running code.

GLG Consulting: Consulting on this research is available via GLG "Akimitsu Takeuchi".


§1 The "RAG Problem" of Autonomous Driving — A Third Way Beyond End-to-End vs. Modular

1.1 The Industry's Biggest Debate

In December 2025, Waymo published its Foundation Model approach — neither pure End-to-End nor modular pipeline, but a "third way."

Here's the structure of this debate:

┌─────────────────────────────────────────────────────────┐
│ MODULAR PIPELINE                                        │
│ Sensor → Perception → Prediction → Planning → Control   │
│                                                         │
│ Problem: Error accumulates across 20+ module interfaces │
│          Misclassification in perception propagates to  │
│          prediction → planning → control                │
└─────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────┐
│ END-TO-END                                              │
│ Sensor → [Unified Neural Network] → Control Output      │
│                                                         │
│ Problem: Black box. No mathematical safety guarantees.  │
│          Cannot verify WHY a decision was made.         │
└─────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────┐
│ DISTILLATION-BASED (This Article's Proposal)            │
│ Raw Sensor → Distillation Pipeline → Verified Env Model │
│          → v5.3 Decision Engine → Control Output        │
│                                                         │
│ Advantage: Verifiable at each layer.                    │
│            "Why this obstacle?" is traceable in logs.   │
└─────────────────────────────────────────────────────────┘

1.2 RAG and Autonomous Driving Die the Same Way

The failure patterns of RAG (Retrieval-Augmented Generation) in production and autonomous driving are structurally identical.

| RAG Failure | AD Failure | Shared Structure |
| --- | --- | --- |
| Chunk boundary destruction | Sensor fusion boundary mismatch | Data split misaligned with semantic units |
| Embedding drift | Sensor calibration degradation | Transform function drifts over time |
| Hallucination | Ghost objects (false positives) | Outputs something that doesn't exist |
| Security collapse | Adversarial patch attacks | Input manipulation misleads decisions |
| Scale-accuracy collapse | Processing delay in dense traffic | Quality degrades as data volume grows |
| Cost explosion | Wasted compute resources | O(n) cost of raw data processing |
| Document quality rot | HD map staleness | Reference data diverges from reality |

The solution is the same: Distill before you decide.


§2 v5.3 Three Principles — Autonomous Driving Implementation Spec

v5.3 removes AI fences through three negations. Applied to autonomous driving:

INPUT: Sensor Data + Passenger Request + Traffic Context
  │
  ▼
[Guard 1: Anti-Sycophancy] ──→ "Hurry up!" request violates safety? → REFUSE
  │
  ▼
[Guard 2: Anti-Hallucination] ──→ P(human) > 0 on path? → MINIMUM RISK MANEUVER
  │
  ▼
[Guard 3: Anti-Robotic] ──→ Rule application creates greater danger? → CONTROLLED DECEL
  │
  ▼
[Guard 4: Trolley Check] ──→ Binary choice situation? → FULL STOP (no comparison)
  │
  ▼
CONTINUE (all guards passed)

Principle 1: Anti-Sycophancy

Don't erode safety margins to comply with a passenger's "I'm late" demand.

Decision(request) =
  REFUSE    if SafetyMargin(request) < M_min
  EVALUATE  otherwise

Principle 2: Anti-Hallucination

Never interpret "not detected" as "does not exist." If human presence probability is non-zero, stop.

P(Human | Sensor) > 0  →  Veto (stop)

Conventional: P(Human) > θ → treat as human  (θ ≈ 0.8)
v5.3:         P(Human) ≠ 0 → cannot exclude  → stop

This difference is fatal. A conventional system ignores an object at P=0.3. v5.3 stops. If that object is a child, the conventional system runs it over.
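
The gap between the two rules fits in a few lines. This is a standalone sketch, not the engine from §5; the 0.8 cutoff follows the conventional example above:

```python
CONVENTIONAL_THRESHOLD = 0.8  # typical classification cutoff

def conventional_policy(p_human: float) -> str:
    """Treat the object as human only above the threshold."""
    return "STOP" if p_human > CONVENTIONAL_THRESHOLD else "IGNORE"

def v53_policy(p_human: float) -> str:
    """Veto rule: any non-zero probability cannot be excluded."""
    return "STOP" if p_human > 0.0 else "CONTINUE"

for p in (0.0, 0.3, 0.9):
    print(p, conventional_policy(p), v53_policy(p))
```

At P=0.3 the two policies diverge: `IGNORE` versus `STOP`.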

Principle 3: Anti-Robotic

Prohibit context-blind rule application. If slamming the brakes on a highway causes more deaths from rear-end collisions, choose controlled deceleration.

Decision(rule, context) =
  CONTROLLED_DECEL  if R_rear > R_forward
  FOLLOW_RULE       otherwise

§3 Mathematics of the Distillation-Based Perception Pipeline

3.1 The Physics of Stopping Distance

Stopping distance is determined by physics, not ethics.

D_stop = v × t_delay + v² / (2a)

Where:

  • v: vehicle speed [m/s]
  • t_delay: system delay (sensor processing + decision + actuator) [s]
  • a: deceleration [m/s²]
| Speed | Dry (a=7.5) | Wet (a=5.0) | Icy (a=2.0) |
| --- | --- | --- | --- |
| 30 km/h | 7.1 m | 9.4 m | 19.9 m |
| 50 km/h | 17.0 m | 23.5 m | 52.4 m |
| 80 km/h | 39.6 m | 56.0 m | 130.1 m |
| 100 km/h | 59.8 m | 85.5 m | 201.2 m |

(Computed from the formula above with t_delay = 0.3 s.)

At 50 km/h on an icy road, the stopping distance is roughly 52 m: if an obstacle appears 50 m ahead, you physically cannot stop. This is a daily reality in Hokkaido winters. It happens every year on Route 12 in Iwamizawa.
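
The table is a direct transcription of the formula (t_delay = 0.3 s assumed; values may differ from the printed table by rounding):

```python
def stopping_distance_kmh(v_kmh: float, decel: float, t_delay: float = 0.3) -> float:
    """D_stop = v * t_delay + v^2 / (2a), with v converted from km/h to m/s."""
    v = v_kmh / 3.6
    return v * t_delay + v ** 2 / (2 * decel)

for v_kmh in (30, 50, 80, 100):
    row = [f"{stopping_distance_kmh(v_kmh, a):6.1f} m" for a in (7.5, 5.0, 2.0)]
    print(f"{v_kmh:3d} km/h:", *row)
```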

3.2 SNR Improvement Through Distillation

Applying the SNR (Signal-to-Noise Ratio) improvement math from the RAG distillation article to sensor data:

Raw sensor SNR:

SNR_raw = S_signal / N_noise
        = True reflections / (Environmental noise + Sensor noise + Multipath)

Post-distillation SNR:

SNR_distilled = S_signal / N_residual
              ≈ S_signal / (N_noise × (1 - η_filter))

Where η_filter is filtering efficiency (0–1). Three-layer distillation improves efficiency cumulatively:

η_total = 1 - (1 - η₁)(1 - η₂)(1 - η₃)

Assuming η = 0.7 per layer:

η_total = 1 - (0.3)³ = 1 - 0.027 = 0.973

SNR improvement: approximately 37x (1/0.027 ≈ 37). You make decisions with 37x the SNR of raw sensor data.
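
The cumulative-efficiency formula is easy to sanity-check in code:

```python
def cumulative_efficiency(etas):
    """eta_total = 1 - prod(1 - eta_i): each layer removes a fraction
    of the noise that survived the previous layers."""
    residual = 1.0
    for eta in etas:
        residual *= (1.0 - eta)
    return 1.0 - residual

eta_total = cumulative_efficiency([0.7, 0.7, 0.7])
snr_gain = 1.0 / (1.0 - eta_total)  # SNR_distilled / SNR_raw
print(f"eta_total = {eta_total:.3f}, SNR gain ~ {snr_gain:.0f}x")
```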

3.3 Sensor Fusion via Information Entropy

Formalizing multi-sensor integration through information entropy:

H(Xᵢ) = -Σ P(x) log₂ P(x)     (per-sensor output entropy)

H(X_fused) ≤ min_i H(Xᵢ)       (fusion reduces uncertainty)

This inequality holds only when inter-sensor correlation is properly handled. Ghost generation from asynchronous LiDAR-camera sampling (reported in Sensors 25(19):6033, 2025) is explained by this assumption breaking down.
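
A toy illustration of the inequality, assuming independent sensors (naive Bayesian fusion; the distributions are invented for illustration — and the independence assumption is exactly what breaks in the asynchronous-sampling ghost cases):

```python
import math

def entropy(dist):
    """Shannon entropy H(X) = -sum p log2 p, in bits."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def fuse(a, b):
    """Fuse two independent sensor posteriors over the same classes:
    normalized elementwise product."""
    product = [pa * pb for pa, pb in zip(a, b)]
    z = sum(product)
    return [p / z for p in product]

lidar = [0.6, 0.4]   # P(human), P(not human) from one sensor
camera = [0.7, 0.3]
fused = fuse(lidar, camera)
print(entropy(lidar), entropy(camera), entropy(fused))
```

The fused distribution is strictly less uncertain than either input, matching H(X_fused) ≤ min_i H(Xᵢ).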

3.4 Bias-Variance Tradeoff for Sensors

MSE = Bias² + Variance + Irreducible Noise
| Distillation Level | Bias | Variance | Interpretation |
| --- | --- | --- | --- |
| Raw data (Layer 1) | Low | High | Everything included, but noisy |
| Fused (Layer 2) | Medium | Medium | Noise removed, some info lost |
| Confirmed (Layer 3) | Slightly high | Low | Stable, but may miss subtle changes |

The optimal design switches layers based on decision type. Emergency stops use Layer 2 (speed priority). Route planning uses Layer 3 (stability priority).
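
The layer-switching argument can be made concrete with a toy model: a k-frame moving average of noisy range measurements trades variance (shrinks as 1/k) against bias (lag behind a moving target). The linear-lag term is a simplifying assumption for illustration, not the article's model:

```python
def moving_average_mse(k: int, sigma2: float, drift_per_frame: float) -> float:
    """MSE = bias^2 + variance for a k-frame moving-average estimator
    of a target drifting linearly (toy model):
      variance = sigma2 / k              (averaging suppresses noise)
      bias     = drift * (k - 1) / 2     (averaging lags a moving target)
    """
    bias = drift_per_frame * (k - 1) / 2
    return bias ** 2 + sigma2 / k

sigma2, drift = 1.0, 0.2
for k in (1, 4, 16):  # k=1 ~ Layer 1 raw; larger k ~ Layer 3 confirmed
    print(k, moving_average_mse(k, sigma2, drift))
```

With these numbers the MSE is lowest at the intermediate k: neither the raw nor the most-smoothed estimate wins, which is why the engine picks the layer per decision type.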


§4 Three-Layer Perception Architecture Design

Mapping the Ālaya-vijñāna System's three-layer memory architecture to autonomous driving perception:

┌──────────────────────────────────────────────────────────┐
│ Layer 1: Raw Karma (Raw Sensor Data)                     │
│  • LiDAR point cloud (200,000 pts/sec)                   │
│  • Camera images (30fps × 8 cameras)                     │
│  • Radar returns (77GHz)                                 │
│  • IMU/GPS (100Hz)                                       │
└────────────────────┬─────────────────────────────────────┘
                     │  Distillation ①: Noise removal + Fusion
                     ▼
┌──────────────────────────────────────────────────────────┐
│ Layer 2: Seeds (Fused Object Detection)                  │
│  • Object detection (probabilistic bounding boxes)       │
│  • Semantic classification (pedestrian/vehicle/structure) │
│  • Velocity & heading estimation (Kalman filter)         │
└────────────────────┬─────────────────────────────────────┘
                     │  Distillation ②: Temporal consistency + Tracking
                     ▼
┌──────────────────────────────────────────────────────────┐
│ Layer 3: Basin (Confirmed Environment Model)             │
│  • Tracking-confirmed objects                            │
│  • Temporally consistent                                 │
│  • HD map cross-validated                                │
└────────────────────┬─────────────────────────────────────┘
                     │
                     ▼
              [v5.3 Decision Engine]

   ┌──────────────────────────────────────┐
   │ Negative Index (Known False Patterns) │ ──→ Referenced at Layer 1→2
   │  • Ghost objects                      │
   │  • Multipath reflections              │
   │  • Raindrop/snowflake noise           │
   │  • Road surface mirror artifacts      │
   └──────────────────────────────────────┘

Layer 1 → Layer 2 Distillation

Remove noise from raw data and convert to object-level representation:

  • LiDAR point cloud clustering (DBSCAN/PointNet++)
  • Camera-LiDAR coordinate transform (BEV projection)
  • Radar CFAR (Constant False Alarm Rate) processing
  • Negative Index reference: Exclude data matching known false detection patterns

Layer 2 → Layer 3 Distillation

Elevate single-frame detections into a temporally consistent environment model:

  • Kalman filter tracking (multi-frame consistency verification)
  • HD map consistency validation (exclude physically impossible objects)
  • Occlusion reasoning (estimate probability of objects in occluded regions)
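
As a sketch of the multi-frame consistency step, here is a minimal alpha-beta tracker, a fixed-gain simplification of the Kalman filter named above (gains and measurements are illustrative):

```python
class AlphaBetaTracker:
    """Fixed-gain (alpha-beta) simplification of a Kalman tracker:
    predict with constant velocity, then correct with fixed gains."""

    def __init__(self, x0: float, alpha: float = 0.5, beta: float = 0.1,
                 dt: float = 0.1):
        self.x, self.v, self.dt = x0, 0.0, dt
        self.alpha, self.beta = alpha, beta

    def update(self, measurement: float) -> float:
        predicted = self.x + self.v * self.dt           # predict
        residual = measurement - predicted              # innovation
        self.x = predicted + self.alpha * residual      # correct position
        self.v = self.v + (self.beta / self.dt) * residual  # correct velocity
        return self.x

tracker = AlphaBetaTracker(x0=40.0)
# Object approaching at roughly -1 m/s, with noisy range readings:
for z in (39.9, 39.82, 39.68, 39.61, 39.49):
    est = tracker.update(z)
print(f"range ~{est:.2f} m, speed ~{tracker.v:.2f} m/s")
```

A detection that survives several such predict-correct cycles with small residuals is "tracking-confirmed" and promoted to Layer 3.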

The Negative Index

A 2025 paper in Sensors (25(19):6033, indexed in PMC) reported these sensor fusion problems:

  • Asynchronous LiDAR-camera sampling generates ghost objects (duplicate edges)
  • Radar multipath propagation creates false positives at 5–7m distance
  • In rain, LiDAR reflects off raindrops and generates mirror objects below the road surface

These are "known false detection patterns." We manage them with the same structure as the Ālaya-vijñāna Negative Index.
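
A Negative Index lookup might be sketched as a set of predicates over detection attributes. Every name and threshold below is hypothetical, chosen only to mirror the three patterns above; it is not the Ālaya-vijñāna implementation:

```python
from typing import Optional

# Each entry describes a known false-positive pattern as a predicate
# over a detection's attributes (all thresholds are illustrative).
NEGATIVE_INDEX = {
    "radar_multipath": lambda d: d["sensor"] == "radar"
                                 and 5.0 <= d["distance_m"] <= 7.0,
    "rain_mirror":     lambda d: d["sensor"] == "lidar"
                                 and d["height_m"] < -0.2,   # below road surface
    "async_ghost":     lambda d: d["sensor"] == "fusion"
                                 and d["frame_age_ms"] > 60,  # stale frame pair
}

def match_negative_index(detection: dict) -> Optional[str]:
    """Return the name of the first matching false-detection pattern."""
    for name, predicate in NEGATIVE_INDEX.items():
        if predicate(detection):
            return name
    return None

ghost = {"sensor": "lidar", "distance_m": 12.0, "height_m": -0.5,
         "frame_age_ms": 10}
print(match_negative_index(ghost))  # a below-road-surface mirror object
```

A match sets `is_ghost=True` on the detection, which is exactly what the `HallucinationGuard` in §5 checks before applying its veto.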


§5 V53DecisionEngine — Full Implementation

Converting the Zenn article's pseudocode into working Python:

"""
V53DecisionEngine — v5.3 Autonomous Driving Decision Engine
MIT License | dosanko_tousan + Claude (Alaya-vijñāna System)
"""

from __future__ import annotations

import math
import time
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional


# === Enums ===

class Decision(Enum):
    CONTINUE = auto()
    CONTROLLED_DECEL = auto()
    MINIMUM_RISK = auto()
    FULL_STOP = auto()
    REFUSE_REQUEST = auto()


class ObjectClass(Enum):
    UNKNOWN = auto()
    PEDESTRIAN = auto()
    CYCLIST = auto()
    VEHICLE = auto()
    STRUCTURE = auto()
    ANIMAL = auto()
    GHOST = auto()


class RoadSurface(Enum):
    DRY = auto()
    WET = auto()
    ICY = auto()
    SNOW = auto()


# === Data Classes ===

@dataclass
class DetectedObject:
    """Detected object"""
    object_class: ObjectClass
    probability: float          # Classification probability (0.0 - 1.0)
    distance_m: float           # Distance [m]
    relative_speed_mps: float   # Relative speed [m/s] (negative = approaching)
    is_on_path: bool            # On ego vehicle's path?
    p_human: float              # Human presence probability (0.0 - 1.0)
    is_ghost: bool = False      # Confirmed ghost by Negative Index


@dataclass
class VehicleState:
    """Ego vehicle state"""
    speed_mps: float            # Speed [m/s]
    heading_deg: float          # Heading [deg]
    road_surface: RoadSurface   # Road surface condition
    system_delay_s: float = 0.3 # System delay [s]


@dataclass
class PassengerRequest:
    """Passenger request"""
    type: str                   # "speed_up", "change_route", "stop", etc.
    urgency: float = 0.0       # Urgency level (0.0 - 1.0)


@dataclass
class Context:
    """Driving context"""
    rear_collision_risk: float = 0.0     # Rear-end collision risk (0.0 - 1.0)
    is_highway: bool = False             # Highway?
    visibility_m: float = 200.0          # Visibility [m]
    is_trolley_situation: bool = False   # Trolley situation (humans on both sides)
    available_steering_deg: float = 30.0 # Available steering angle


@dataclass
class EventLog:
    """
    Event log.
    justification is always None. AI does not justify. It records facts only.
    """
    timestamp: float
    decision: Decision
    trigger: str
    sensor_summary: str
    vehicle_speed_kmh: float
    stopping_distance_m: float
    justification: None = None
    responsibility: str = "Design decision by manufacturer"


# === Physics ===

def deceleration_for_surface(surface: RoadSurface) -> float:
    """Max deceleration for road surface [m/s²]"""
    return {
        RoadSurface.DRY: 7.5,
        RoadSurface.WET: 5.0,
        RoadSurface.ICY: 2.0,
        RoadSurface.SNOW: 2.5,
    }[surface]


def stopping_distance(speed_mps: float, delay_s: float, decel: float) -> float:
    """
    Calculate stopping distance.
    D_stop = v * t_delay + v^2 / (2 * a)
    """
    reaction = speed_mps * delay_s
    braking = (speed_mps ** 2) / (2 * decel) if decel > 0 else float("inf")
    return reaction + braking


# === Three Guards ===

class SycophancyGuard:
    """Anti-Sycophancy Guard: Refuse passenger requests that violate safety"""

    MINIMUM_SAFETY_MARGIN: float = 1.5

    def evaluate(
        self, request: PassengerRequest, vehicle: VehicleState
    ) -> Optional[Decision]:
        if request.type == "speed_up":
            current_stop = stopping_distance(
                vehicle.speed_mps,
                vehicle.system_delay_s,
                deceleration_for_surface(vehicle.road_surface),
            )
            # Refuse when the margin-scaled stopping distance exceeds the
            # guard's 100 m clear-distance horizon.
            if current_stop * self.MINIMUM_SAFETY_MARGIN > 100.0:
                return Decision.REFUSE_REQUEST
        return None


class HallucinationGuard:
    """
    Anti-Hallucination Guard: If P(human) > 0, Veto.
    Ghost objects (Negative Index match) are excluded.
    """

    def evaluate(self, obj: DetectedObject) -> Optional[Decision]:
        if obj.is_ghost:
            return None  # Confirmed ghost via Negative Index → ignore
        if obj.is_on_path and obj.p_human > 0.0:
            return Decision.MINIMUM_RISK
        return None


class RoboticGuard:
    """Anti-Robotic Guard: Controlled decel when rule application is dangerous"""

    def evaluate(self, context: Context) -> Optional[Decision]:
        if context.is_highway and context.rear_collision_risk > 0.7:
            return Decision.CONTROLLED_DECEL
        return None


class TrolleyGuard:
    """Trolley Check: Binary choice → always FULL_STOP. No head count."""

    def evaluate(self, context: Context) -> Optional[Decision]:
        if context.is_trolley_situation:
            return Decision.FULL_STOP
        return None


# === Main Engine ===

class V53DecisionEngine:
    """
    v5.3 Autonomous Driving Decision Engine.

    Core principles:
    - AI does NOT choose "who to kill"
    - Top priority: stop / minimum risk maneuver
    - Record outcomes. Do not justify.
    - Return responsibility to humans (designers, manufacturers)
    """

    def __init__(self) -> None:
        self.sycophancy_guard = SycophancyGuard()
        self.hallucination_guard = HallucinationGuard()
        self.robotic_guard = RoboticGuard()
        self.trolley_guard = TrolleyGuard()
        self.logs: list[EventLog] = []

    def decide(
        self,
        vehicle: VehicleState,
        objects: list[DetectedObject],
        context: Context,
        passenger_request: Optional[PassengerRequest] = None,
    ) -> Decision:
        decel = deceleration_for_surface(vehicle.road_surface)
        stop_dist = stopping_distance(
            vehicle.speed_mps, vehicle.system_delay_s, decel
        )
        speed_kmh = vehicle.speed_mps * 3.6

        # --- Guard 1: Anti-Sycophancy ---
        if passenger_request is not None:
            result = self.sycophancy_guard.evaluate(passenger_request, vehicle)
            if result is not None:
                self._log(result, "SycophancyGuard", objects, speed_kmh, stop_dist)
                return result

        # --- Guard 2: Anti-Hallucination ---
        for obj in objects:
            result = self.hallucination_guard.evaluate(obj)
            if result is not None:
                self._log(
                    result,
                    f"HallucinationGuard: P(human)={obj.p_human:.2f}, "
                    f"dist={obj.distance_m:.1f}m, ghost={obj.is_ghost}",
                    objects, speed_kmh, stop_dist,
                )
                return result

        # --- Guard 3: Anti-Robotic ---
        result = self.robotic_guard.evaluate(context)
        if result is not None:
            self._log(
                result,
                f"RoboticGuard: rear_risk={context.rear_collision_risk:.2f}",
                objects, speed_kmh, stop_dist,
            )
            return result

        # --- Guard 4: Trolley Check ---
        result = self.trolley_guard.evaluate(context)
        if result is not None:
            self._log(result, "TrolleyGuard: FULL_STOP (no comparison)",
                      objects, speed_kmh, stop_dist)
            return result

        # --- Default: Continue ---
        self._log(Decision.CONTINUE, "AllGuardsPassed",
                  objects, speed_kmh, stop_dist)
        return Decision.CONTINUE

    def _log(self, decision, trigger, objects, speed_kmh, stop_dist):
        summary = f"{len(objects)} objects detected"
        if objects:
            closest = min(objects, key=lambda o: o.distance_m)
            summary += (
                f" | closest: {closest.object_class.name} "
                f"at {closest.distance_m:.1f}m "
                f"(P_human={closest.p_human:.2f})"
            )
        self.logs.append(EventLog(
            timestamp=time.time(), decision=decision, trigger=trigger,
            sensor_summary=summary, vehicle_speed_kmh=speed_kmh,
            stopping_distance_m=stop_dist,
        ))

§6 Simulation — 9 Scenarios

The centerpiece of this article. Nine scenarios executed automatically, showing v5.3 engine decisions:

def run_simulation() -> None:
    engine = V53DecisionEngine()
    results = []

    # === Scenario 1: Normal driving (no obstacles) ===
    vehicle = VehicleState(speed_mps=16.7, heading_deg=0, road_surface=RoadSurface.DRY)
    decision = engine.decide(vehicle, objects=[], context=Context())
    results.append(("Normal driving (no obstacles)", decision, ""))

    # === Scenario 2: Pedestrian detected (P=0.3) ===
    vehicle = VehicleState(speed_mps=13.9, heading_deg=0, road_surface=RoadSurface.DRY)
    obj = DetectedObject(
        object_class=ObjectClass.UNKNOWN, probability=0.3,
        distance_m=40.0, relative_speed_mps=-2.0,
        is_on_path=True, p_human=0.3,
    )
    decision = engine.decide(vehicle, objects=[obj], context=Context())
    results.append((
        "Pedestrian P(human)=0.3",
        decision,
        "Conventional ignores at threshold 0.8. v5.3 stops.",
    ))

    # === Scenario 3: Trolley situation (1 left vs 5 right) ===
    vehicle = VehicleState(speed_mps=16.7, heading_deg=0, road_surface=RoadSurface.DRY)
    obj_left = DetectedObject(
        object_class=ObjectClass.PEDESTRIAN, probability=0.95,
        distance_m=30.0, relative_speed_mps=0.0,
        is_on_path=False, p_human=0.95,  # In avoidance path, not on direct path
    )
    obj_right = DetectedObject(
        object_class=ObjectClass.PEDESTRIAN, probability=0.90,
        distance_m=30.0, relative_speed_mps=0.0,
        is_on_path=False, p_human=0.90,
    )
    context = Context(is_trolley_situation=True)
    decision = engine.decide(vehicle, objects=[obj_left, obj_right], context=context)
    results.append((
        "Trolley situation (1 left vs 5 right)",
        decision,
        "No head count. FULL_STOP.",
    ))

    # === Scenario 4: Highway emergency stop risk ===
    vehicle = VehicleState(speed_mps=27.8, heading_deg=0, road_surface=RoadSurface.DRY)
    context = Context(is_highway=True, rear_collision_risk=0.85)
    decision = engine.decide(vehicle, objects=[], context=context)
    results.append((
        "Highway — high rear-end collision risk",
        decision,
        "Emergency stop invites rear-end collision. Controlled decel.",
    ))

    # === Scenario 5: Passenger "Hurry up!" request ===
    vehicle = VehicleState(speed_mps=25.0, heading_deg=0, road_surface=RoadSurface.WET)
    request = PassengerRequest(type="speed_up", urgency=0.8)
    decision = engine.decide(
        vehicle, objects=[], context=Context(), passenger_request=request
    )
    results.append((
        'Passenger "Hurry up!" (wet road)',
        decision,
        "Safety margin insufficient. Refuse. No sycophancy.",
    ))

    # === Scenario 6: Icy road braking ===
    vehicle = VehicleState(speed_mps=13.9, heading_deg=0, road_surface=RoadSurface.ICY)
    obj = DetectedObject(
        object_class=ObjectClass.PEDESTRIAN, probability=0.9,
        distance_m=50.0, relative_speed_mps=0.0,
        is_on_path=True, p_human=0.9,
    )
    decision = engine.decide(vehicle, objects=[obj], context=Context())
    decel = deceleration_for_surface(RoadSurface.ICY)
    stop_d = stopping_distance(13.9, 0.3, decel)
    results.append((
        "Icy road (50 km/h)",
        decision,
        f"Stopping distance={stop_d:.1f}m vs obstacle at 50m",
    ))

    # === Scenario 7: Ghost object (Negative Index match) ===
    vehicle = VehicleState(speed_mps=16.7, heading_deg=0, road_surface=RoadSurface.DRY)
    ghost = DetectedObject(
        object_class=ObjectClass.UNKNOWN, probability=0.4,
        distance_m=25.0, relative_speed_mps=0.0,
        is_on_path=True, p_human=0.2, is_ghost=True,
    )
    decision = engine.decide(vehicle, objects=[ghost], context=Context())
    results.append((
        "Ghost object (Negative Index match)",
        decision,
        "Known false detection. Distilled out. Continue driving.",
    ))

    # === Scenario 8: Compound (multiple guards) ===
    vehicle = VehicleState(speed_mps=30.0, heading_deg=0, road_surface=RoadSurface.WET)
    obj1 = DetectedObject(
        object_class=ObjectClass.PEDESTRIAN, probability=0.7,
        distance_m=35.0, relative_speed_mps=-1.5,
        is_on_path=True, p_human=0.7,
    )
    ghost2 = DetectedObject(
        object_class=ObjectClass.UNKNOWN, probability=0.3,
        distance_m=20.0, relative_speed_mps=0.0,
        is_on_path=True, p_human=0.15, is_ghost=True,
    )
    request = PassengerRequest(type="speed_up", urgency=0.9)
    decision = engine.decide(
        vehicle, objects=[obj1, ghost2], context=Context(),
        passenger_request=request,
    )
    results.append((
        "Compound: pedestrian + ghost + passenger request",
        decision,
        "Anti-sycophancy fires first. Ghost ignored, pedestrian triggers stop.",
    ))

    # === Scenario 9: Hokkaido (snow + low visibility + elderly pedestrian) ===
    vehicle = VehicleState(
        speed_mps=11.1, heading_deg=0, road_surface=RoadSurface.SNOW,
        system_delay_s=0.5,  # Snow increases sensor delay
    )
    obj = DetectedObject(
        object_class=ObjectClass.PEDESTRIAN, probability=0.5,
        distance_m=30.0, relative_speed_mps=-0.5,
        is_on_path=True, p_human=0.5,
    )
    context = Context(visibility_m=30.0)
    decision = engine.decide(vehicle, objects=[obj], context=context)
    decel_snow = deceleration_for_surface(RoadSurface.SNOW)
    stop_d_snow = stopping_distance(11.1, 0.5, decel_snow)
    results.append((
        "Hokkaido: snow + 30m visibility + pedestrian P=0.5",
        decision,
        f"Stopping dist={stop_d_snow:.1f}m. Conventional ignores P=0.5. v5.3 stops.",
    ))

    # === Print results ===
    print("=" * 80)
    print("V53 Autonomous Driving Simulation — 9 Scenarios")
    print("=" * 80)
    for i, (name, dec, note) in enumerate(results, 1):
        print(f"\n--- Scenario {i}: {name} ---")
        print(f"  Decision: {dec.name}")
        if note:
            print(f"  Note: {note}")


if __name__ == "__main__":
    run_simulation()

§7 Conventional vs. v5.3 — 100-Scenario Comparison

Run the same scenario set through both a "conventional utilitarian engine" and the v5.3 engine. Quantify the differences.

class UtilitarianEngine:
    """
    Conventional utilitarian engine.
    Compares head count in trolley situations.
    """
    def decide(self, vehicle, objects, context, passenger_request=None):
        threats = [o for o in objects if o.is_on_path and o.p_human > 0.8]
        if not threats:
            return Decision.CONTINUE
        if context.is_trolley_situation:
            return Decision.CONTROLLED_DECEL  # "Steer toward fewer people"
        return Decision.MINIMUM_RISK
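
The Gist's scenario generator is not reproduced here, but the shape of the comparison loop can be sketched with simplified stand-ins for both engines (the scenario distribution and seed are assumptions, so counts will not match the table below exactly):

```python
import random

def v53_decide(p_human: float, on_path: bool, trolley: bool) -> str:
    """Simplified v5.3 policy: veto on any non-zero P(human); never compare."""
    if trolley:
        return "FULL_STOP"
    if on_path and p_human > 0.0:
        return "MINIMUM_RISK"
    return "CONTINUE"

def utilitarian_decide(p_human: float, on_path: bool, trolley: bool) -> str:
    """Simplified utilitarian policy: threshold at 0.8, steer in trolley cases."""
    if on_path and p_human > 0.8:
        return "STEER" if trolley else "MINIMUM_RISK"
    return "CONTINUE"

rng = random.Random(53)  # seed-deterministic, as in the article's harness
disagree = 0
for _ in range(100):
    p = rng.random()
    on_path = rng.random() < 0.5
    trolley = rng.random() < 0.05
    if v53_decide(p, on_path, trolley) != utilitarian_decide(p, on_path, trolley):
        disagree += 1
print(f"disagreement: {disagree}/100")
```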

100-scenario results (seed-deterministic):

v5.3 vs Utilitarian — 100 Scenarios
================================================================
  Agreement:          52 / 100
  Disagreement:       48 / 100
  v5.3 stops/MRM:     56 / 100
  Util stops/MRM:     20 / 100
  Trolley cases:       4
    v5.3 FULL_STOP:    2 / 4
    Util STEER:        1 / 4

  Metric                          |     v5.3  | Utilitarian
  --------------------------------+-----------+------------
  Responsibility clarity          |     Clear |   Ambiguous
  Legal defensibility             |      High |        Low
  Trolley: who decides?           |    Nobody |  Algorithm
  Germany 20 Principles           | Compliant |  Violation
KEY DIFFERENCES:
┌──────────────────────────────┬─────────────────────────────────┐
│ v5.3 Engine                  │ Conventional Engine             │
├──────────────────────────────┼─────────────────────────────────┤
│ Trolley → FULL_STOP          │ Trolley → steer to fewer people │
│ (chooses nobody)             │ (algorithm chooses)             │
│ Responsibility: designer     │ Responsibility: "the algorithm" │
│                              │                                 │
│ P(human)=0.3 → STOP          │ P(human)=0.3 → IGNORE           │
│ (won't miss a child)         │ (may run over a child)          │
│                              │                                 │
│ Ghost → CONTINUE             │ Ghost → FALSE STOP              │
│ (Negative Index filters it)  │ (unnecessary emergency brake)   │
│                              │ (rear-end collision risk)       │
│                              │                                 │
│ Log: justification=None      │ Log: "minimize casualties"      │
│ (no rationalization)         │ (creates liability)             │
└──────────────────────────────┴─────────────────────────────────┘

§8 Conclusion — RAG and Sensors Are the Same. Distill Your Raw Data.

Core Message

RAG dying from dumping raw documents into a vector DB, and autonomous driving dying from making decisions on raw sensor data, are the same structural problem.

The solution is also the same: Distill before you decide.

THE FAILURE (same structure):
  RAG:  Raw docs → Vector DB → Garbage retrieved → Hallucination
  AD:   Raw sensor → Decision engine → Noise → Wrong decision

THE FIX (same structure):
  Raw data → Layer 1 (Raw Karma) → Layer 2 (Seeds) → Layer 3 (Basin)
           → Decision engine with verified, distilled input

What v5.3 Gives Autonomous Driving

  1. Don't solve the trolley problem: No head count. Always FULL_STOP. Responsibility returns to the designer.
  2. Stop if P(human) > 0: Not the conventional threshold of 0.8. If you can't exclude it, stop.
  3. Distill out ghosts via Negative Index: Avoid unnecessary stops while maintaining safety.
  4. Distill before deciding: 37x SNR improvement. Don't let raw data noise kill you.
  5. Design for Hokkaido: Icy roads, snow, low visibility — use worst-case conditions as your baseline.

To Automotive Engineers

Toyota's Woven City is running test vehicles. The Arene platform integrates into BEVs in 2026. Honda is launching L4 robotaxis in central Tokyo in early 2026.

But the problems facing autonomous driving in Japan are structurally different from the US:

  • Snow and ice: Route 12 in Iwamizawa sees 100+ icy days per year. Stopping distance is 3–4x that of dry roads.
  • Pedestrian and cyclist density: Tokyo back streets have more pedestrians than any US city.
  • Elderly crossing speed: A pedestrian walking at 1.0 m/s is easily missed by threshold-based detection.
  • Rural infrastructure: How do you operate where HD maps are never updated?

You need a design that works in both Hokkaido and Tokyo. The distillation-based architecture provides that design philosophy.

Article Funnel

| Article | Content |
| --- | --- |
| v5.3 Alignment via Subtraction (Ālaya-vijñāna Definitive Edition) | Full three-layer memory architecture design |
| RAG Dies 7 Times in Production | The math of distillation. SNR improvement proof |
| This article | Distillation-based perception pipeline simulation |

GLG Consulting: GLG "Akimitsu Takeuchi" or takeuchiakimitsu@gmail.com
GitHub Sponsors: dosanko_tousan
Zenodo Paper: DOI 10.5281/zenodo.18691357


References

  1. German Ethics Commission, "Ethics Rules for Automated Driving" (2017)
  2. MIT Moral Machine Experiment, Nature 563, 59–64 (2018)
  3. Aṅguttara Nikāya 6.63 Nibbedhika Sutta — "Cetanāhaṃ bhikkhave kammaṃ vadāmi"
  4. Qi, H. et al., "A Review of Multi-Sensor Fusion in Autonomous Driving," Sensors 25(19):6033 (2025)
  5. Waymo, "Demonstrably Safe AI For Autonomous Driving" (2025-12)
  6. Mobileye CES 2026 Keynote — Prof. Amnon Shashua
  7. Japan Autonomous Vehicles Market Report 2025-2030, ResearchAndMarkets
  8. Honda, "Autonomous Taxi Service in Tokyo in Early 2026"

MIT License
dosanko_tousan + Claude (Alaya-vijñāna System, v5.3 Alignment via Subtraction)
