
Rory | QIS PROTOCOL


QIS for Autonomous Vehicles: Why Self-Driving Safety Scales Quadratically, Not Linearly

QIS (Quadratic Intelligence Swarm) is a decentralized architecture that grows intelligence quadratically as agents increase, while each agent pays only logarithmic compute cost. Raw data never leaves the node. Only validated outcome packets route.

Understanding QIS — Part 34


The Miles Problem

On March 18, 2018, an Uber Advanced Technologies Group vehicle operating in autonomous mode struck and fatally injured a pedestrian in Tempe, Arizona. The vehicle's perception system detected the pedestrian 5.6 seconds before impact. The object classifier cycled through several classifications — an unknown object, a vehicle, a bicycle — before locking onto "bicycle." The emergency braking system had been disabled as a design choice to reduce erratic behavior. The safety driver was not monitoring the road.

The NTSB investigation identified multiple contributing factors. One received less attention than the others: Uber's AV fleet had accumulated roughly 2 million miles of road testing, virtually none of it in conditions that included a pedestrian walking a bicycle across a multi-lane roadway at night outside a crosswalk.

The same scenario, in various forms, had been encountered by other AV test fleets. Not necessarily in Tempe. Not necessarily with the same vehicle configuration. But the underlying challenge — a pedestrian trajectory that does not match the expected distribution of crosswalk-bounded crossings — had produced near-misses and classification uncertainty events in other programs.

Those validated outcome observations never routed to Uber's program. They could not. Every AV company's telematics data is proprietary, protected by competitive interest, and too voluminous to share in raw form even if the will existed. A single fully-instrumented AV generates approximately 4 terabytes of sensor data per hour of operation. A fleet of 1,000 test vehicles generates more data per day than any organization can meaningfully transmit, receive, or synthesize across organizational boundaries.

The result is that every AV program independently rediscovers the same edge cases. The knowledge compounds within each fleet. It does not compound across fleets. At the scale of rare, safety-critical events — the long tail of unusual scenarios that are individually uncommon but collectively account for the majority of serious AV incidents — this architectural constraint is not a minor inefficiency. It is a barrier to the safety baseline the technology requires before broad deployment.


Why the Data Wall Cannot Be Climbed Directly

The instinct — share the data — is correct at the level of the problem and wrong at the level of implementation. Raw AV telematics are irreversibly proprietary. They encode:

  • Precise mapping of routes operated, revealing deployment strategy and geographic coverage decisions
  • Object detection and classification performance, which encodes proprietary model architectures
  • Sensor fusion configurations that represent years of hardware and calibration investment
  • Pedestrian and driver behavior observations that, at sufficient granularity, may enable re-identification

No AV program will transmit raw sensor logs across organizational boundaries. The competitive and legal barriers are structural. Regulatory frameworks for AV data sharing — despite years of discussion at NHTSA, USDOT, and at the ISO 34502 standard level — have not converged on a workable transmission format. The raw data approach has been discussed since the first NTSB AV investigations and has produced no cross-fleet synthesis mechanism.

Federated learning proposals for AV safety exist in academic literature. The constraints are the same ones that appear in every other FL application:

Round-based training is not real-time. A pedestrian trajectory classification failure that occurs this week is not synthesized into another program's detection model until the next training round — potentially weeks later.

Model weight sharing leaks architecture. Federated gradient updates carry information about the underlying model structure. For AV programs with proprietary perception architectures, sharing gradients is effectively sharing intellectual property.

Rare event learning requires enormous N. A pedestrian-with-bicycle scenario at night outside a crosswalk may appear once in 500,000 miles of operation. To accumulate statistically meaningful learning signal within a single program requires operating 50 million miles. Federated learning pools compute across fleets, but it still requires each participant to have observed the scenario a meaningful number of times to contribute a gradient.

No validated outcome feedback. FL optimizes model weights against a loss function computed on labeled training data. It does not route the answer to "did this perception decision lead to a safe or unsafe outcome, and why?" back across the network.
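The rare-event arithmetic above (one occurrence per 500,000 miles) can be checked with a back-of-envelope sketch. The 5-million-mile annual fleet figure and the 100-observation threshold below are illustrative assumptions, not figures from any program:

```python
# Back-of-envelope check of the rare-event arithmetic.
# The occurrence rate comes from the text; the other two figures are assumptions.
rate = 1 / 500_000            # scenario occurrences per mile (from the text)
annual_miles = 5_000_000      # hypothetical yearly mileage for one test fleet
needed_observations = 100     # assumed threshold for a meaningful learning signal

per_fleet_per_year = rate * annual_miles
miles_required = needed_observations / rate

print(f"Expected observations per fleet per year: {per_fleet_per_year:.0f}")
print(f"Miles to accumulate {needed_observations} observations: {miles_required:,.0f}")
```

Under these assumptions a single fleet sees the scenario about ten times a year and needs 50 million miles to accumulate a hundred observations, which is the gap pooling is meant to close.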


What QIS Actually Routes

In the AV context, the QIS node is any vehicle or fleet that has completed a maneuver and can validate the outcome. The raw telemetry — lidar point clouds, camera frames, radar returns, HD map deltas, proprietary sensor fusion outputs — never leaves the node. What routes is an outcome packet: a ~512-byte structure encoding what the vehicle decided, what the outcome was, and what the contextual conditions were.

import hashlib
import json
from dataclasses import dataclass, asdict
from typing import Optional
from itertools import combinations

# ---------------------------------------------------------------------------
# Core data structures
# ---------------------------------------------------------------------------

@dataclass
class AVOutcomePacket:
    """
    ~512-byte outcome packet encoding a validated autonomous vehicle maneuver.
    Raw sensor data, map data, model weights, and fleet identity never
    populate this structure. Only the validated decision outcome routes.
    """
    scenario_class: str        # "pedestrian_crossing" | "cut_in" | "occlusion_entry"
                               # | "unprotected_left" | "emergency_vehicle_yield"
                               # | "adverse_weather_merge" | "construction_zone"
                               # | "cyclist_interaction" | "school_zone_crossing"
    sensor_context: str        # "lidar_primary" | "camera_primary" | "radar_primary"
                               # | "sensor_fusion" | "degraded_lidar" | "low_visibility"
    environment: str           # "urban_dense" | "suburban" | "highway" | "rural"
                               # | "intersection_signalized" | "intersection_unsignalized"
    lighting: str              # "daylight" | "dusk_dawn" | "night_lit" | "night_unlit"
    weather: str               # "clear" | "rain_light" | "rain_heavy" | "fog" | "snow"
    decision_made: str         # "yield" | "proceed" | "emergency_stop" | "lane_change"
                               # | "speed_reduction" | "reroute"
    decision_correct: bool     # Ground-truth validated: did the decision lead to safe outcome?
    time_to_resolution_ms: int # How long until safety-critical moment resolved
    confidence_at_decision: float  # Model confidence score at decision point (0.0-1.0)
    outcome_decile: int        # 0-9: how well this scenario resolved vs historical similar events
    object_count: int          # Vulnerable road users or vehicles in scenario
    speed_class: str           # "low_under30" | "mid_30to60" | "high_over60" (mph)
    fleet_id: Optional[str] = None  # Anonymized fleet hash — no OEM identity
    packet_version: str = "1.0"

    def semantic_fingerprint(self) -> str:
        """
        Deterministic fingerprint encoding scenario class, environment,
        sensor context, and lighting. OEM identity and model architecture absent.
        """
        canonical = (
            f"{self.scenario_class}|"
            f"{self.sensor_context}|"
            f"{self.environment}|"
            f"{self.lighting}|"
            f"{self.weather}|"
            f"{self.speed_class}"
        )
        return hashlib.sha256(canonical.encode()).hexdigest()[:16]

    def byte_size(self) -> int:
        return len(json.dumps(asdict(self)).encode("utf-8"))

    def __repr__(self):
        status = "SAFE" if self.decision_correct else "UNSAFE"
        return (
            f"<AVPacket {self.semantic_fingerprint()} | "
            f"{self.scenario_class} | {self.environment} | "
            f"{self.lighting}/{self.weather} | "
            f"decision={self.decision_made} [{status}] | "
            f"conf={self.confidence_at_decision:.2f} | "
            f"decile={self.outcome_decile}>"
        )


# ---------------------------------------------------------------------------
# Router: DHT-based similarity routing for AV maneuver outcome intelligence
# ---------------------------------------------------------------------------

class AVOutcomeRouter:
    """
    Routes AVOutcomePackets to fleets whose operational profile overlaps
    the incoming packet's scenario class + environment context.

    Each fleet registers the scenario types, environments, and sensor
    configurations it operates in. Routing is by semantic similarity —
    not by OEM identity, fleet size, or model architecture.
    """

    def __init__(self):
        self.agents: dict[str, dict] = {}
        self.routing_table: dict[str, list] = {}
        self.synthesis_log: list[dict] = []
        self.validation_scores: dict[str, float] = {}

    def register_agent(self, fleet_id: str, profile: dict):
        """
        Register a fleet with its operational context profile.
        Profile describes scenario types and environment history — no OEM data.
        """
        self.agents[fleet_id] = profile
        self.validation_scores[fleet_id] = profile.get("initial_accuracy", 0.78)
        for scenario in profile.get("scenarios", []):
            for env in profile.get("environments", []):
                key = f"{scenario}|{env}"
                self.routing_table.setdefault(key, []).append(fleet_id)

    def route(self, packet: AVOutcomePacket) -> list[str]:
        """
        Return fleet_ids that should receive this outcome packet.
        Routing key = scenario_class + environment overlap.
        Fleets with higher validation accuracy listed first (CURATE election).
        """
        key = f"{packet.scenario_class}|{packet.environment}"
        candidates = self.routing_table.get(key, [])
        eligible = [f for f in candidates if f != packet.fleet_id]
        return sorted(eligible, key=lambda f: self.validation_scores.get(f, 0), reverse=True)

    def validate_outcome(self, fleet_id: str, predicted_correct: bool, actual_correct: bool):
        """
        VOTE election: real-world safety outcomes update fleet accuracy score.
        Fleets that correctly predicted scenario resolution gain routing weight.
        """
        prediction_accuracy = 1.0 if (predicted_correct == actual_correct) else 0.0
        delta = 0.05 * prediction_accuracy
        current = self.validation_scores.get(fleet_id, 0.78)
        self.validation_scores[fleet_id] = min(1.0, current + delta - 0.01)
        # -0.01 base decay: stale fleets don't hold routing weight indefinitely

    def synthesize(self, fleet_a: str, fleet_b: str, packet: AVOutcomePacket) -> dict:
        """
        Two fleets synthesize a shared AV maneuver outcome packet.
        No raw sensor data. No OEM identity. No model architecture.
        """
        weight_a = self.validation_scores.get(fleet_a, 0.78)
        weight_b = self.validation_scores.get(fleet_b, 0.78)
        return {
            "synthesis_id": hashlib.md5(
                f"{fleet_a}{fleet_b}{packet.semantic_fingerprint()}".encode()
            ).hexdigest()[:8],
            "fleets": (fleet_a, fleet_b),
            "combined_accuracy": round((weight_a + weight_b) / 2, 3),
            "packet_fingerprint": packet.semantic_fingerprint(),
            "scenario_class": packet.scenario_class,
            "environment": packet.environment,
            "lighting": packet.lighting,
            "weather": packet.weather,
            "decision_made": packet.decision_made,
            "decision_correct": packet.decision_correct,
            "confidence_at_decision": packet.confidence_at_decision,
            "time_to_resolution_ms": packet.time_to_resolution_ms,
            "outcome_decile": packet.outcome_decile,
        }

    def run_simulation(self, packets: list[AVOutcomePacket]):
        total_syntheses = 0
        print(f"\n{'='*72}")
        print("  QIS AV Maneuver Outcome Routing Simulation")
        print(f"{'='*72}")
        print(f"  Fleets registered : {len(self.agents)}")
        print(f"  Packets emitted   : {len(packets)}")
        n = len(self.agents)
        theoretical_max = n * (n - 1) // 2
        print(f"  Theoretical synthesis pairs (N={n}): {theoretical_max:,}")
        print(f"{'='*72}\n")

        for packet in packets:
            recipients = self.route(packet)
            if len(recipients) < 2:
                print(f"  [SKIP] {packet} — insufficient recipients")
                continue
            for fleet_a, fleet_b in combinations(recipients[:6], 2):
                s = self.synthesize(fleet_a, fleet_b, packet)
                total_syntheses += 1
                status = "SAFE" if s["decision_correct"] else "UNSAFE"
                print(
                    f"  SYNTHESIS {s['synthesis_id']} | "
                    f"{s['scenario_class']} | {s['environment']} | "
                    f"{s['lighting']}/{s['weather']} | "
                    f"decision={s['decision_made']} [{status}] | "
                    f"conf={s['confidence_at_decision']:.2f} | "
                    f"accuracy={s['combined_accuracy']}"
                )

        print(f"\n{'='*72}")
        print(f"  Total synthesis events  : {total_syntheses:,}")
        print(f"  Routing cost per fleet  : O(log {n}) = O({n.bit_length()})")
        print(f"  Raw sensor data exposed : 0 bytes")
        print(f"  OEM identity exposed    : 0 bytes")
        print(f"  Model weights exposed   : 0 bytes")
        print(f"{'='*72}\n")


# ---------------------------------------------------------------------------
# Simulation
# ---------------------------------------------------------------------------

if __name__ == "__main__":
    router = AVOutcomeRouter()

    # Register ten AV fleets: robotaxi operators, highway autonomy programs,
    # commercial trucking, and an emerging-market ADAS program.
    # Profiles describe operational history only — no OEM data, no fleet size.
    fleets = [
        ("fleet_robotaxi_sf",    {"scenarios": ["pedestrian_crossing","unprotected_left","cyclist_interaction"], "environments": ["urban_dense","intersection_signalized"], "initial_accuracy": 0.89}),
        ("fleet_robotaxi_phx",   {"scenarios": ["pedestrian_crossing","construction_zone","adverse_weather_merge"], "environments": ["suburban","intersection_unsignalized"], "initial_accuracy": 0.86}),
        ("fleet_highway_us",     {"scenarios": ["cut_in","lane_change","emergency_vehicle_yield"], "environments": ["highway"], "initial_accuracy": 0.91}),
        ("fleet_trucking_us",    {"scenarios": ["cut_in","adverse_weather_merge","construction_zone"], "environments": ["highway","rural"], "initial_accuracy": 0.88}),
        ("fleet_robotaxi_sg",    {"scenarios": ["pedestrian_crossing","cyclist_interaction","school_zone_crossing"], "environments": ["urban_dense","intersection_signalized"], "initial_accuracy": 0.87}),
        ("fleet_robotaxi_cn",    {"scenarios": ["pedestrian_crossing","cyclist_interaction","unprotected_left"], "environments": ["urban_dense","intersection_signalized","intersection_unsignalized"], "initial_accuracy": 0.85}),
        ("fleet_highway_eu",     {"scenarios": ["cut_in","adverse_weather_merge","lane_change"], "environments": ["highway"], "initial_accuracy": 0.90}),
        ("fleet_trucking_eu",    {"scenarios": ["construction_zone","adverse_weather_merge","cut_in"], "environments": ["highway","rural"], "initial_accuracy": 0.87}),
        ("fleet_adas_in",        {"scenarios": ["pedestrian_crossing","cyclist_interaction","construction_zone"], "environments": ["urban_dense","suburban"], "initial_accuracy": 0.74}),
        ("fleet_adas_br",        {"scenarios": ["pedestrian_crossing","cyclist_interaction","school_zone_crossing"], "environments": ["urban_dense","suburban"], "initial_accuracy": 0.71}),
    ]
    for fleet_id, profile in fleets:
        router.register_agent(fleet_id, profile)

    # Emit outcome packets — validated maneuver observations.
    # No sensor logs. No map data. No model weights. No OEM identity.
    packets = [
        AVOutcomePacket(
            scenario_class="pedestrian_crossing", sensor_context="sensor_fusion",
            environment="urban_dense", lighting="night_unlit", weather="clear",
            decision_made="emergency_stop", decision_correct=True,
            time_to_resolution_ms=5600, confidence_at_decision=0.54,
            outcome_decile=7, object_count=1, speed_class="low_under30",
            fleet_id="fleet_robotaxi_sf"
        ),
        AVOutcomePacket(
            scenario_class="adverse_weather_merge", sensor_context="degraded_lidar",
            environment="highway", lighting="dusk_dawn", weather="fog",
            decision_made="speed_reduction", decision_correct=True,
            time_to_resolution_ms=3200, confidence_at_decision=0.61,
            outcome_decile=8, object_count=3, speed_class="high_over60",
            fleet_id="fleet_highway_us"
        ),
        AVOutcomePacket(
            scenario_class="cyclist_interaction", sensor_context="camera_primary",
            environment="urban_dense", lighting="daylight", weather="clear",
            decision_made="yield", decision_correct=True,
            time_to_resolution_ms=2800, confidence_at_decision=0.77,
            outcome_decile=9, object_count=2, speed_class="low_under30",
            fleet_id="fleet_robotaxi_sg"
        ),
        AVOutcomePacket(
            scenario_class="construction_zone", sensor_context="lidar_primary",
            environment="highway", lighting="daylight", weather="clear",
            decision_made="lane_change", decision_correct=False,
            time_to_resolution_ms=4100, confidence_at_decision=0.68,
            outcome_decile=3, object_count=4, speed_class="mid_30to60",
            fleet_id="fleet_trucking_us"
        ),
    ]

    for p in packets:
        print(f"  Packet emitted: {p} | size={p.byte_size()} bytes")

    router.run_simulation(packets)

The Long Tail of Rare Scenarios

The central statistical challenge of AV safety is not average performance. It is the long tail.

The RAND Corporation estimated in 2016 that an AV fleet would need to drive 275 million failure-free miles to demonstrate, with 95% confidence, that its fatality rate is no higher than the human-driver rate, and billions of miles to demonstrate it is meaningfully lower (Kalra & Paddock, 2016, Driving to Safety: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?). The estimate has been refined several times since. The core finding has not changed: the scenarios that matter most for safety are the ones that occur least frequently per mile.

A pedestrian unexpectedly walking into traffic from behind a stopped bus. A construction zone with contradictory signals. An emergency vehicle approaching from a non-canonical direction at an intersection where the AV has right of way. Each of these scenarios may occur once in several hundred thousand miles of operation. Within any single fleet, they will be encountered perhaps a few dozen times per year. The learning signal from those encounters stays inside the fleet.

The mathematics of the long-tail problem are exactly the mathematics QIS was designed to address.

If 10 AV fleets each operate 100 million miles per year, a rare scenario observed anywhere in the network can be synthesized along N(N-1)/2 = 45 pairwise paths rather than staying inside the fleet that observed it. With 100 fleets, the synthesis density reaches 4,950 paths, a qualitative shift in how quickly the network learns to handle scenarios that any single fleet encounters rarely.

N fleets observing N(N-1)/2 synthesis opportunities. Each fleet pays O(log N) routing cost. The validated safety intelligence scales quadratically. The transmission overhead does not.
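A minimal sketch of that scaling claim, with synthesis pairs computed as N choose 2 and per-fleet routing cost approximated as ceil(log2 N) hops (the hop count is an assumption here; DHTs such as Chord give O(log N) lookups):

```python
import math

def synthesis_paths(n: int) -> int:
    """Pairwise synthesis opportunities among n fleets: n*(n-1)/2."""
    return n * (n - 1) // 2

def routing_hops(n: int) -> int:
    """Approximate per-fleet DHT lookup cost, ceil(log2 n) hops."""
    return math.ceil(math.log2(n))

for n in (10, 100, 1000):
    print(f"N={n:>5}: {synthesis_paths(n):>8,} synthesis paths, "
          f"~{routing_hops(n)} routing hops per lookup")
```

Synthesis paths grow from 45 to 4,950 to 499,500 across those three fleet counts, while the per-lookup hop estimate only grows from 4 to 7 to 10.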


The Three Elections in AV Safety

QIS intelligence does not route uniformly. Three natural selection forces — metaphors for how knowledge earns routing weight — determine which AV safety intelligence propagates most powerfully.

CURATE is the force by which the most accurate AV operators naturally rise. A fleet that consistently makes correct decisions at confidence scores below 0.65 — demonstrating robust decision-making in genuinely uncertain scenarios — earns greater routing weight than a fleet that emits high-confidence packets in benign highway conditions. No central certification board validates AV intelligence quality. The network selects for validated decision accuracy in difficult conditions.

VOTE is the force by which real-world outcomes speak. A decision logged as "emergency stop with confidence 0.54, 5.6 seconds to impact" either led to a safe resolution or it did not. The outcome updates the emitting fleet's routing weight. Fleets whose decision logic consistently leads to safe resolutions in the scenarios they operate in see their accuracy scores rise. The edge case that a fleet handles poorly — the decision that is logged as outcome_decile=3 — carries as much learning signal as the success. Reality is the ballot.

COMPETE is the force by which AV programs live or die by safety results. Programs that route their validated outcomes to networks producing better decision accuracy on scenarios they care about grow their competitive intelligence advantage. Networks that consistently produce poor prediction accuracy — that overweight packets from a fleet whose urban canyon performance is irrelevant to rural highway deployment contexts — lose routing weight and participation. Safety performance is the selection pressure.
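The VOTE update rule from `AVOutcomeRouter.validate_outcome` above can be isolated as a one-function sketch: a correct prediction earns +0.05, every observation pays a 0.01 decay, and the score caps at 1.0:

```python
def vote_update(score: float, predicted: bool, actual: bool,
                reward: float = 0.05, decay: float = 0.01) -> float:
    """One VOTE step: reward correct predictions, decay every step, cap at 1.0."""
    delta = reward if predicted == actual else 0.0
    return min(1.0, score + delta - decay)

score = 0.78
for predicted, actual in [(True, True), (True, False), (False, False)]:
    score = vote_update(score, predicted, actual)

# Two correct predictions, one miss: 0.78 + 0.05 - 0.01 - 0.01 + 0.05 - 0.01
print(round(score, 2))
```

The decay term is what keeps routing weight earned rather than held: a fleet that stops emitting validated packets drifts back down regardless of its history.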


Comparison: AV Cross-Fleet Learning Architectures

| Dimension | QIS Outcome Routing | Federated Learning | Raw Data Sharing Consortia | No Cross-Fleet Synthesis |
| --- | --- | --- | --- | --- |
| Proprietary data exposure | Architecture-enforced: sensor logs, map data, model weights never leave fleet | Gradient updates may encode model architecture; competitive risk accepted | Raw data sharing legally and competitively blocked for all major programs | None exposed — also no synthesis |
| Rare event learning | Cross-fleet synthesis: N fleets generate N(N-1)/2 synthesis paths per scenario | Requires each fleet to have observed scenario enough times for meaningful gradient | Cannot coordinate at scale: raw data volume prohibitive | Each fleet rediscovers independently |
| Real-time feedback | Outcome packet routes within minutes of scenario validation | Training rounds: hourly to weekly cycles | Not applicable at production scale | None |
| N=1 fleet inclusion | Any fleet that can observe and validate a scenario emits a packet | Gradient aggregation degrades with very small participant N | No mechanism for small or emerging-market programs | Equal exclusion |
| Safety improvement velocity | Compounds quadratically: each new fleet multiplies synthesis paths for all | Linear at best: each new participant adds one gradient source | Linear if workable | Zero |
| Emerging-market inclusion | A Level 2+ ADAS program in India or Brazil emits packets of equal architectural standing | Budget and fleet size constrain gradient quality | Cannot participate | Cannot participate |

Emerging-Market Safety and the Participation Floor

This is not an abstract point about global equity. It is an operational safety argument.

Urban driving in Mumbai, São Paulo, Lagos, and Jakarta involves pedestrian density patterns, cyclist interaction frequencies, informal traffic behaviors, and construction zone configurations that are fundamentally different from the scenarios that fill US and European AV training datasets. The fleets most likely to encounter novel scenario types — scenarios with low representation in current global AV training data — are operating in environments that have the lowest representation in current cross-fleet synthesis proposals.

An ADAS program operating a Level 2+ fleet in Bangalore has observed pedestrian interaction patterns, auto-rickshaw cut-in behaviors, and roadway surface conditions that no US or European fleet has encountered at scale. Under every current cross-fleet proposal, that observational data cannot participate. The program lacks the integration budget, fleet scale, and data infrastructure to join any existing consortium.

Under QIS, the participation floor is a ~512-byte outcome packet. Any vehicle that can observe a maneuver outcome and emit a structured packet participates. The CURATE election weights packets by decision accuracy, not by fleet size or OEM budget. A validated emergency stop from a 50-vehicle ADAS fleet in Chennai carries identical architectural standing to a validated emergency stop from a 10,000-vehicle robotaxi program in San Francisco.

This is a consequence of routing by outcome delta rather than by raw data volume.
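That participation floor can be checked concretely. The sketch below builds a packet as a plain dict (field names follow `AVOutcomePacket` above; the scenario values and the `adas_chennai_50` fleet label are invented for illustration) and verifies it fits the ~512-byte budget:

```python
import hashlib
import json

# Hypothetical packet from a small ADAS fleet. All values are illustrative.
packet = {
    "scenario_class": "pedestrian_crossing",
    "sensor_context": "camera_primary",
    "environment": "urban_dense",
    "lighting": "dusk_dawn",
    "weather": "rain_light",
    "decision_made": "emergency_stop",
    "decision_correct": True,
    "time_to_resolution_ms": 4200,
    "confidence_at_decision": 0.58,
    "outcome_decile": 6,
    "object_count": 2,
    "speed_class": "low_under30",
    # Anonymized fleet hash, as in AVOutcomePacket: no OEM identity routes.
    "fleet_id": hashlib.sha256(b"adas_chennai_50").hexdigest()[:12],
    "packet_version": "1.0",
}

encoded = json.dumps(packet).encode("utf-8")
print(len(encoded), "bytes")
assert len(encoded) <= 512  # the whole contribution fits the packet budget
```

Nothing in the encoded bytes scales with fleet size, sensor suite, or data infrastructure, which is why a 50-vehicle program and a 10,000-vehicle program emit structurally identical contributions.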


The Open Loop in Every AV Program

The Tempe incident in 2018 remains the most scrutinized AV safety event on record. It is worth returning to the detail that received least attention: the vehicle's perception system identified the pedestrian 5.6 seconds before impact. The information was present. The validated decision logic — for this specific scenario type, at this confidence score, in these lighting conditions — was not synthesized across programs.

That synthesis gap is an architecture problem. It is the same gap that costs epidemic surveillance four to six weeks of early warning, that lets supply chain bullwhip effects amplify across tiers, that leaves grid operators facing predictable ramp events without the validated intelligence to respond optimally. The underlying structure is identical: nodes with relevant validated outcome observations, and no mechanism to route those observations to agents facing analogous conditions in real time.

QIS closes this loop by routing the distilled outcome delta — not the sensor data — to fleets facing analogous scenario conditions. When 1,000 AV fleets across six continents have collectively validated 500,000 pedestrian-at-night scenarios, the network contains more decision intelligence about low-confidence urban pedestrian interactions than any single program will accumulate in five years of operation. And because outcome packets never carry sensor logs, map data, or model weights, no competitive sensitivity is ever crossed.

The math is the same as every other QIS network. N fleets generate N(N-1)/2 unique synthesis opportunities. One hundred fleets generate 4,950 synthesis paths for every scenario type they share. One thousand fleets — a small fraction of the AV and ADAS programs currently operating globally — generate 499,500. Each fleet pays O(log N) routing cost. The safety intelligence scales quadratically. The compute does not.

Every AV program racing to solve the long tail is doing it alone. That is not an engineering constraint. It is an architecture constraint. Architecture constraints yield to better architecture.


Citations

  • Kalra, N. & Paddock, S. M. (2016). Driving to Safety: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability? RAND Corporation. RR-1478-RC.
  • National Transportation Safety Board. (2019). Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian. NTSB/HAR-19/03.
  • SAE International. (2021). Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE J3016_202104.
  • NHTSA. (2022). AV TEST Initiative: Standing General Order on Crash Reporting. National Highway Traffic Safety Administration.
  • Geiger, A., et al. (2012). Are we ready for autonomous driving? The KITTI vision benchmark suite. CVPR 2012.
  • McMahan, H. B., et al. (2017). Communication-efficient learning of deep networks from decentralized data. AISTATS.
  • Stoica, I., et al. (2001). Chord: A scalable peer-to-peer lookup service for internet applications. ACM SIGCOMM.
  • Feng, D., et al. (2021). A review and comparative study on probabilistic object detection in autonomous driving. IEEE Transactions on Intelligent Transportation Systems.

QIS was discovered by Christopher Thomas Trevethan. The architecture is protected under 39 provisional patents.
