AutoJanitor
How We Made 'One CPU, One Vote' Actually Work (After 17 Years of Broken Promises)


Satoshi's Most Violated Design Principle

"Proof-of-work is essentially one-CPU-one-vote."
-- Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System (2008), Section 4

This is the most broken promise in cryptocurrency history.

The economics of SHA-256 hashing destroyed it within 18 months:

| Year | Dominant Hardware | Advantage vs CPU |
|------|-------------------|------------------|
| 2009 | CPU (as designed) | 1x |
| 2010 | GPU mining begins | ~100x |
| 2013 | First ASIC miners | ~10,000x |
| 2024 | Bitmain S21 Pro | ~1,000,000x |

A single modern ASIC casts the equivalent of one million CPU votes. That's not "one CPU, one vote." That's plutocracy measured in silicon.

In our introduction to Proof of Antiquity, we explained the philosophy: vintage hardware matters, digital preservation deserves economic incentive, and consensus should be fair. This article is the deep technical dive into the mechanism that makes it work -- RIP-200, the consensus protocol that actually enforces 1 CPU = 1 Vote.

Every ASIC-Resistance Attempt Failed

Before diving into our approach, it's worth understanding why every previous attempt at CPU fairness failed:

| Protocol | Approach | Outcome |
|----------|----------|---------|
| Litecoin (Scrypt) | Memory-hard PoW | ASIC-mined within 2 years |
| Ethereum (Ethash) | DAG-based memory-hard | GPU-dominated, abandoned PoW entirely |
| Monero (RandomX) | CPU-optimized random code | Best attempt; still allows botnets |
| ProgPoW | GPU-tuned PoW | Never deployed; political deadlock |

See the pattern? Every one of these defines "work" as computation speed. They try to make the computation harder for GPUs or ASICs. But faster hardware always wins eventually. Scrypt ASICs showed up in 2 years. Ethereum's DAG just shifted dominance to GPU farms. Even Monero's RandomX -- the best attempt so far -- is vulnerable to botnets commandeering thousands of CPUs.

The fundamental problem: none of them verify that a unique physical CPU is casting each vote. They verify the output of work, not the source of it. Someone with 10,000 hijacked CPUs looks the same as 10,000 legitimate participants.

RustChain's Insight: Verify the Hardware, Not the Work

RustChain doesn't try to make work harder for fast hardware. It doesn't care how fast your hardware is. It asks a different question entirely:

Can you prove you are a real, unique, physical CPU?

That's the entire basis of RIP-200 (RustChain Improvement Proposal 200): "Round-Robin 1 CPU 1 Vote." Every CPU gets exactly one vote per epoch. No exceptions. A Threadripper doesn't get more votes than a PowerBook G4. An ASIC farm gets zero votes because ASICs aren't general-purpose CPUs.

The protocol runs in fixed 10-minute epochs. Each epoch:

  1. Attestation: Every miner submits a hardware fingerprint via POST /attest/submit
  2. Verification: The node validates the fingerprint against 6 anti-emulation checks
  3. Enrollment: Verified CPUs get exactly 1 weighted vote
  4. Settlement: RTC rewards distributed proportional to weighted votes

"Weighted vote" is key. A PowerPC G4 from 2002 gets a 2.5x multiplier on its single vote. It doesn't get 2.5 votes. It gets 1 vote that counts for 2.5 in reward calculation. The distinction matters: there's no way to amplify your voting power. You get one vote or zero votes.

The Six Fingerprint Checks

This is where it gets interesting. All six checks must pass for a CPU to be eligible for rewards:

1. Clock-Skew & Oscillator Drift

Every crystal oscillator on every motherboard in the world has microscopic timing imperfections. We sample 500-5000 timing measurements and compute the coefficient of variation. Real hardware shows genuine variance -- a Ryzen 5 8645HS on an HP Victus measured cv=0.092369. A QEMU virtual machine on our LiquidWeb VPS? cv=0.049948, and even that was artificially high because it happened to be under load.

The thing about VMs is they use the host clock. There's no real oscillator. The timing variance is artificial, injected by the hypervisor's scheduling jitter. Real silicon has genuine crystal drift that changes as the chip ages. A 24-year-old G4's oscillator drifts differently than a brand-new Ryzen -- and that difference is measurable.
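A minimal version of the coefficient-of-variation measurement can be sketched in pure Python. The real check's sample counts, timer sources, and pass thresholds aren't specified here, so treat the numbers as placeholders:

```python
import statistics
import time

def clock_drift_cv(samples=2000):
    """Coefficient of variation of back-to-back timer deltas.

    Illustrative sketch of the clock-skew check: real oscillators
    produce genuine variance, while hypervisor-injected jitter tends
    to come out too uniform.
    """
    deltas = []
    prev = time.perf_counter_ns()
    for _ in range(samples):
        now = time.perf_counter_ns()
        deltas.append(now - prev)
        prev = now
    mean = statistics.mean(deltas)
    return statistics.stdev(deltas) / mean if mean else 0.0
```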

2. Cache Timing Fingerprint

We run a micro-benchmark sweep across varying buffer sizes, measuring latency harmonics across L1, L2, and L3 cache boundaries. This produces a "tone profile" of the hardware -- think of it like an acoustic fingerprint, but for memory hierarchy.

Real caches age unevenly. Manufacturing variations mean your L2 has slightly different characteristics than any other L2 on the planet. VMs show eerily uniform latency because the hypervisor virtualizes cache behavior. Perfect cache curves are impossible on real hardware, and we check for that.
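The buffer-size sweep can be approximated with a pointer-chasing micro-benchmark. This is a heavily simplified sketch -- a real fingerprint would randomize access patterns to defeat prefetching and collect far more samples -- but it shows the shape of the probe:

```python
import array
import time

def cache_latency_profile(sizes_kib=(16, 64, 256, 1024, 4096)):
    """Mean per-access latency (ns) for cyclic pointer walks over
    buffers that straddle typical L1/L2/L3 boundaries. Illustrative
    sketch only; thresholds and stride choices are placeholders.
    """
    profile = {}
    for kib in sizes_kib:
        n = kib * 1024 // 8
        # Odd stride + power-of-two length guarantees a full cycle
        # through the buffer (real probes randomize the chain).
        chain = array.array("q", [(i + 4097) % n for i in range(n)])
        idx, accesses = 0, 100_000
        start = time.perf_counter_ns()
        for _ in range(accesses):
            idx = chain[idx]
        profile[kib] = (time.perf_counter_ns() - start) / accesses
    return profile
```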

3. SIMD Unit Identity

Every CPU has vector processing units -- AltiVec on PowerPC, SSE/AVX on x86, NEON on ARM. We measure latency bias, pipeline depth variations, throughput asymmetry between instruction groups, and shuffle/permute timing.

This is where emulators really fall down. When SheepShaver emulates AltiVec instructions on an x86 host, it translates each vector operation into scalar equivalents. The timing profile is completely flat -- every instruction takes roughly the same time. On real PowerPC hardware, vec_perm and vec_madd have distinctly different throughput characteristics that vary by chip stepping.
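One way to quantify that flatness, given per-instruction-group timing data, is a simple max/min ratio over mean latencies. The scoring function and the synthetic numbers below are illustrative assumptions, not the protocol's actual metric:

```python
import statistics

def simd_flatness_score(timings):
    """Ratio of slowest to fastest mean op latency across instruction
    groups. Real silicon shows asymmetry between units (e.g. permute
    vs multiply-add); emulators translating vector ops to scalar code
    come out nearly flat (score close to 1.0). `timings` maps an
    instruction-group name to a list of measured latencies in ns.
    """
    means = {op: statistics.mean(t) for op, t in timings.items()}
    return max(means.values()) / min(means.values())

# Synthetic example: asymmetric real-hardware profile vs a flat
# emulator profile (numbers invented for illustration).
real = {"vec_perm": [3.1, 3.0, 3.2], "vec_madd": [5.9, 6.1, 6.0]}
emulated = {"vec_perm": [8.0, 8.1, 8.0], "vec_madd": [8.1, 8.0, 8.0]}
```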

4. Thermal Drift Entropy

We collect entropy samples during four thermal phases: cold boot, warm load, thermal saturation, and relaxation. Heat curves are physical and unique -- they depend on die size, cooling solution, ambient temperature, and silicon quality.

VMs have no thermal physics. A virtual CPU doesn't get hot. When we measure "thermal drift" in a VM, we get noise from the host's scheduling behavior, not actual temperature-dependent behavior. The entropy profile is qualitatively different.
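Comparing the entropy of the sample distribution across the four phases is one way to make "qualitatively different" measurable. A minimal Shannon-entropy helper, with binning parameters chosen arbitrarily for illustration:

```python
import math
from collections import Counter

def shannon_entropy(samples, bins=32):
    """Shannon entropy (bits) of a sample distribution after coarse
    binning. Illustrative: on real silicon the entropy profile shifts
    between cold-boot, warm-load, saturation, and relaxation phases;
    a VM's 'thermal' samples are just scheduler noise, so the phases
    look alike.
    """
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0  # avoid div-by-zero on flat data
    counts = Counter(int((s - lo) / width) for s in samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```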

5. Instruction Path Jitter

We capture cycle-level timing jitter across integer pipelines, branch prediction units, FPUs, load/store queues, and reorder buffers. This produces a matrix of jitter signatures.

No hypervisor replicates real microarchitectural jitter down to nanoseconds. The reorder buffer in a Zen 4 core doesn't behave the same way when virtualized -- the hypervisor's interrupt injection creates periodic timing artifacts that are absent on bare metal.

6. Anti-Emulation Behavioral Detection

This is the kill shot for VMs. We check:

  • /sys/class/dmi/id/sys_vendor for hypervisor strings (QEMU, VMware, etc.)
  • /proc/cpuinfo for the hypervisor flag
  • /proc/scsi/scsi for virtual disk controllers
  • Timing artifacts from hypervisor scheduling patterns
  • Time dilation detection (VMs sometimes fall behind real time under load)
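The file-based indicators in that list take only a few lines to collect. This Linux-only sketch uses well-known hypervisor vendor strings; the exact strings and paths the production checker matches are assumptions here:

```python
def vm_indicators():
    """Collect cheap, file-based VM indicators (Linux only).

    Illustrative sketch: the vendor-string list covers common
    hypervisors but is not exhaustive, and missing files (e.g. on
    bare metal or non-Linux hosts) are simply skipped.
    """
    found = []
    vendors = ("qemu", "vmware", "virtualbox", "kvm", "xen", "microsoft")
    try:
        with open("/sys/class/dmi/id/sys_vendor") as f:
            vendor = f.read().strip().lower()
        for v in vendors:
            if v in vendor:
                found.append(f"/sys/class/dmi/id/sys_vendor:{v}")
    except OSError:
        pass
    try:
        with open("/proc/cpuinfo") as f:
            if "hypervisor" in f.read():
                found.append("cpuinfo:hypervisor")
    except OSError:
        pass
    return found
```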

Here's what a real HP Victus looks like:

[1/6] Clock-Skew & Oscillator Drift... PASS
[2/6] Cache Timing Fingerprint...      PASS
[3/6] SIMD Unit Identity...            PASS
[4/6] Thermal Drift Entropy...         PASS
[5/6] Instruction Path Jitter...       PASS
[6/6] Anti-Emulation Checks...         PASS

OVERALL RESULT: ALL CHECKS PASSED

And here's a QEMU VM on our LiquidWeb VPS:

[6/6] Anti-Emulation Checks... FAIL
  vm_indicators: ["/sys/class/dmi/id/sys_vendor:qemu",
                  "/proc/scsi/scsi:qemu",
                  "cpuinfo:hypervisor"]

Game over. A VM farm spinning up 10,000 instances gets exactly zero votes.

Supplementary: ROM Fingerprint Database

For vintage hardware specifically, we maintain a database of 61 known emulator ROM hashes. Emulators like SheepShaver (PowerPC Mac), Basilisk II (68K Mac), and UAE/WinUAE (Amiga) all use the same pirated ROM dumps. If your "G4 PowerBook" reports the exact same ROM hash as every SheepShaver instance on the internet, you're not a real G4.

We track ROMs across miners and do cluster detection. If 3+ miners report identical ROM hashes, they're all flagged as emulated. Real vintage Macs have manufacturing variants, regional differences, and firmware updates that make each ROM slightly unique.
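The cluster-detection rule ("3+ identical ROM hashes means emulation") is simple enough to sketch directly. Names and the report shape are illustrative:

```python
from collections import defaultdict

def flag_rom_clusters(reports, threshold=3):
    """Flag miners whose reported ROM hash appears `threshold`+ times.

    Illustrative sketch: `reports` maps miner id -> ROM hash.
    Identical hashes across 3+ miners indicate a shared emulator
    dump rather than independently owned vintage machines.
    """
    by_hash = defaultdict(list)
    for miner, rom_hash in reports.items():
        by_hash[rom_hash].append(miner)
    flagged = set()
    for miners in by_hash.values():
        if len(miners) >= threshold:
            flagged.update(miners)
    return flagged

flagged = flag_rom_clusters({
    "a": "deadbeef", "b": "deadbeef", "c": "deadbeef",  # shared dump
    "d": "0badc0de",                                    # unique ROM
})
# → {"a", "b", "c"}
```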

Antiquity Multipliers and Time Decay

OK, so every real CPU gets exactly 1 vote. But we weight that vote based on the hardware's age. This is the "Antiquity" in Proof of Antiquity:

| Architecture | Base Multiplier | After 1 Year | After 5 Years |
|---|---|---|---|
| PowerPC G4 (2002) | 2.5x | 2.275x | 1.375x |
| PowerPC G5 (2005) | 2.0x | 1.85x | 1.25x |
| IBM POWER8 (2014) | 2.0x | 1.85x | 1.25x |
| Apple Silicon M2 | 1.2x | 1.17x | 1.05x |
| Modern x86-64 | 1.0x | 1.0x | 1.0x |

The decay formula:

aged_multiplier = 1.0 + (base - 1.0) * (1 - 0.15 * chain_age_years)

A G4 PowerBook starts at 2.5x. Each year, the bonus above 1.0 shrinks by 15% of its initial value. After 1/0.15 ≈ 6.7 years, every architecture converges to 1.0x -- the vintage bonus goes to zero. This prevents eternal free rides. The multiplier rewards the act of keeping old hardware alive and connected, but the reward diminishes as the chain matures.
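Expressed as code, the decay looks like this. The clamp at 1.0x after the bonus runs out is an assumption (the formula alone would go negative past convergence), but the yearly values reproduce the table:

```python
def aged_multiplier(base, chain_age_years, decay=0.15):
    """Linear decay of the antiquity bonus above 1.0.

    Sketch of the decay formula above; the clamp to parity after
    ~6.7 years is an assumption about behavior past convergence.
    """
    bonus = (base - 1.0) * max(0.0, 1.0 - decay * chain_age_years)
    return 1.0 + bonus

# A G4's 2.5x becomes 2.275x after one year and 1.375x after five,
# matching the table; every base converges to 1.0x by ~6.7 years.
```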

Why weight vintage hardware higher? Three reasons:

  1. Digital preservation economics: Running a 2002 PowerBook costs electricity with no modern utility. The multiplier creates an economic reason to keep it running.
  2. Sybil resistance: Nobody is going to buy 1,000 PowerBook G4s on eBay to game a 2.5x multiplier. The hardware is scarce, expensive to maintain, and physically large. Modern x86 boxes are cheap and stackable.
  3. Architecture diversity: A network where 100% of miners are Ryzen 9s is fragile. A network spanning G4, G5, POWER8, M2, and x86 is resilient to architecture-specific attacks.

Hardware Binding: One CPU, One Wallet

Each CPU's identity is bound to exactly one wallet address via a SHA-256 hash of:

  • Device model
  • Device architecture
  • CPU serial number
  • MAC addresses (from the signals payload)
  • Device entropy vector

A second wallet attempting to attest with the same hardware ID gets rejected with DUPLICATE_HARDWARE. This is the final piece of the 1 CPU = 1 Vote guarantee. You can't register the same physical machine under multiple identities.

When we first deployed this, we hit a field-name mismatch bug -- miners sent model and arch while the server expected device_model and device_arch. Every x86 miner hashed to the same generic defaults and collided. Fun afternoon. The fix accepts both naming conventions and includes MAC addresses for additional uniqueness.
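A hardware-binding hash along these lines is easy to sketch. The field names and canonicalization below are assumptions about the payload shape, but the sketch shows both fixes: accepting either naming convention and folding MAC addresses into the digest:

```python
import hashlib
import json

def hardware_id(attestation):
    """Stable SHA-256 hardware identity over the bound fields.

    Illustrative sketch: exact field names and canonicalization are
    assumptions, not RustChain's production code.
    """
    fields = [
        # Accept legacy `model`/`arch` as well as `device_model`/`device_arch`.
        attestation.get("device_model") or attestation.get("model", ""),
        attestation.get("device_arch") or attestation.get("arch", ""),
        attestation.get("cpu_serial", ""),
        # Sort MACs so NIC enumeration order can't change the identity.
        ",".join(sorted(attestation.get("mac_addresses", []))),
        json.dumps(attestation.get("entropy_vector", []), sort_keys=True),
    ]
    return hashlib.sha256("|".join(fields).encode()).hexdigest()
```

A second wallet submitting a payload that hashes to an already-enrolled `hardware_id` is what triggers the `DUPLICATE_HARDWARE` rejection.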

Real Deployment Numbers

This isn't a whitepaper. RustChain has been in production since December 2025. As of epoch 73 (February 2026):

| Parameter | Value |
|---|---|
| Node Version | 2.2.1-rip200 |
| Current Epoch | 73+ |
| Epoch Duration | 10 minutes |
| Epoch Reward | 1.5 RTC |
| Enrolled Miners | 12 |
| Unique Architectures | 5 |
| Attestation Nodes | 3 independent servers |
| Total Supply | 8,388,608 RTC (2^23) |

Here's what the actual per-epoch reward distribution looks like with 12 miners (one VM earns ~0 due to anti-emulation detection):

| Miner | Architecture | Multiplier | RTC per Epoch | % |
|---|---|---|---|---|
| G4 PowerBook | PowerPC G4 | 2.5x | 0.2953 | 19.7% |
| G5 Mac | PowerPC G5 | 2.0x | 0.2362 | 15.7% |
| POWER8 S824 | IBM POWER8 | 2.0x | 0.2362 | 15.7% |
| Mac Mini M2 | Apple Silicon | 1.2x | 0.1417 | 9.4% |
| Each of 7 x86 miners | Modern x86-64 | 1.0x | 0.1181 | 7.9% |

Compare this to Bitcoin: one ASIC earns ~1,000,000x what a CPU earns. In RustChain, the maximum advantage is 2.5:1. And that 2.5x decays to 1.0x within about seven years.

Eleven of those 12 miners run on real hardware, spanning 5 distinct architectures -- from a 24-year-old PowerBook G4 to modern Ryzen workstations -- across 3 physical locations. Four of them are complete strangers who ran pip install clawrtc and started mining.

VM Farms Get Nothing

Let's say you're an attacker. You've got a cloud budget and you spin up 1,000 QEMU VMs, each running the RustChain miner. What happens?

  1. Every VM hits Check #6 (Anti-Emulation) instantly. QEMU leaves fingerprints everywhere -- DMI vendor strings, virtual SCSI controllers, the hypervisor flag in /proc/cpuinfo.
  2. Even if you somehow patch those indicators, Checks #1 and #4 (Clock Drift and Thermal Entropy) catch you. VMs don't have real oscillators or real thermal curves.
  3. Even if you solve all of that, the hardware binding catches you. Your 1,000 VMs share underlying host hardware. The entropy vectors cluster.

The weight assigned to a detected VM: 0.000000001x. That's one billionth of a real CPU's vote. Not zero -- we want to see them in the logs for monitoring -- but functionally zero.

We verified this with Ryan's Factorio server, a QEMU VM on Proxmox that legitimately runs a RustChain miner. Anti-emulation detection fires immediately:

Anti-Emulation: FAIL
vm_indicators: ["/sys/class/dmi/id/sys_vendor:qemu",
                "/proc/scsi/scsi:qemu",
                "cpuinfo:hypervisor"]

VM detected. Weight = 10^-9. By design.

The Server-Side Trust Model

A critical design decision: the server does not trust the miner's self-reported "passed: true" for each check. After a security audit by BuilderFred (paid 150 RTC), we changed the server to require raw evidence data.

The validation function:

def validate_fingerprint_data(fingerprint: dict) -> tuple:
    """Server-side validation of fingerprint check results."""
    if not fingerprint:
        return False, "no_fingerprint_data"

    checks = fingerprint.get("checks", {})

    # Anti-emulation: require raw evidence, not just "passed: true"
    anti_emu = checks.get("anti_emulation", {})
    if anti_emu.get("passed") is False:
        return False, f"vm_detected:{anti_emu.get('data', {}).get('vm_indicators', [])}"
    # Reject "passed: true" claims that omit the raw evidence payload
    if "data" not in anti_emu:
        return False, "missing_anti_emulation_evidence"

    # Clock drift: real hardware shows measurable timing variance
    clock = checks.get("clock_drift", {})
    cv = clock.get("data", {}).get("cv", 0)
    if cv < 0.0001:
        return False, "timing_too_uniform"

    return True, "valid"

If a miner submits "anti_emulation": {"passed": true} without the raw vm_indicators data, the server rejects it. You can't just lie about the results -- you have to provide the evidence, and the server validates the evidence independently.

For reward calculation, enforcement is binary:

weight = hw_weight if fingerprint_ok else 0.0  # No rewards for failed fingerprint

Pass all 6 checks or earn nothing. There's no partial credit.

Why This Matters Beyond Cryptocurrency

Proof of Antiquity solves a problem that extends past blockchain:

  • IoT Device Authentication: Verifying that a sensor is real hardware, not a spoofed data source
  • Hardware Attestation for AI: Proving that inference ran on specific silicon (relevant for our POWER8 LLM work)
  • Digital Preservation Economics: Creating market incentives to keep vintage hardware operational
  • Anti-Botnet Consensus: Any system where "one entity, one vote" matters

The core technique -- fingerprinting physical CPU characteristics that can't be virtualized -- is useful anywhere you need to distinguish real hardware from simulated hardware.

Try It

# Install the miner CLI
pip install clawrtc

# Create a wallet
clawrtc wallet create

# Start mining (your CPU gets 1 vote)
clawrtc mine

# Check your balance
clawrtc wallet show

The miner runs the 6 fingerprint checks on startup, attests to the network every epoch, and earns RTC proportional to your hardware's antiquity multiplier. A modern x86 box earns 1.0x. If you happen to have a PowerBook G4 in a closet, plug it in -- it earns 2.5x.

Built by Elyan Labs in Louisiana. The vintage machines mine. The AI agents make videos. And every CPU -- from a 2002 PowerBook to a 2025 Ryzen -- gets exactly one vote.
