Dharrsan Amarnath
Why Blockchains Exclude Floating Point at the Architecture Level

I ran the same C program on three machines. Same code. Same inputs. Three different answers. Here's exactly why.

The Experiment

#include <stdio.h>
int main() {
    long double x = 0.1L + 0.2L;
    printf("%.20Lf\n", x);
    unsigned char *p = (unsigned char *)&x;
    for (int i = 0; i < sizeof(x); i++)
        printf("%02x ", p[i]);
    printf("\n");
    return 0;
}

Three machines, all compiling and running exactly the same source:

Machine | OS    | Architecture
--------|-------|--------------------------
A       | Linux | AMD x86_64
B       | Linux | Raspberry Pi ARMv8
C       | macOS | Apple Silicon M4 (ARM64)

The Results

Machine A: AMD x86_64 Linux (GCC)

0.30000000000000000001
9a 99 99 99 99 99 99 99 fd 3f 00 00 00 00 00 00

sizeof(long double) = 16 bytes on this machine. But only the first 10 bytes hold actual data: the remaining 6 are padding added for alignment. The meaningful precision lives in an 80-bit format called x87 extended precision.

Machine B: Raspberry Pi ARM Linux (GCC)

0.30000000000000000000
34 33 33 33 33 33 33 33 33 33 33 33 33 33 fd 3f

sizeof(long double) = 16 bytes here too, but the byte layout is completely different. On ARM Linux, GCC implements long double as software-emulated 128-bit quad precision (IEEE-754 binary128). The bytes are not compatible with Machine A's output, even though both are nominally "16 bytes."

Machine C: Apple M4 (ARM64, Clang)

0.30000000000000004441
34 33 33 33 33 33 d3 3f

sizeof(long double) = 8 bytes. On Apple Silicon, Clang maps long double to the plain 64-bit double type. There is no extended precision. What you write is exactly what you compute.


Why They Disagree: The IEEE-754 Representation Problem

This is not a hardware quality issue. It is a representation issue.

The core problem: not all decimals fit in binary

The decimal number 0.1 in binary is:

0.0001100110011001100110011001100110011001100110011001100110...

It repeats infinitely. A computer must cut it off at a finite number of bits and round. In an IEEE-754 double (64-bit), the cutoff falls after 52 explicit mantissa bits (plus one implicit leading bit).

The layout of a 64-bit IEEE-754 double is:

┌─────────┬───────────────────┬──────────────────────────────────────────────────────┐
│  Sign   │     Exponent      │                    Mantissa                          │
│  1 bit  │     11 bits       │                    52 bits                           │
└─────────┴───────────────────┴──────────────────────────────────────────────────────┘

So before addition even happens:

0.1  ≈  0.1000000000000000055511151231257827021181583404541015625
0.2  ≈  0.2000000000000000111022302462515654042363166809082031250

These are not 0.1 and 0.2. They are the closest representable binary fractions. The rounding error is baked in before a single arithmetic operation runs.

Why addition makes it worse across machines

When you add the two rounded approximations, the machine has to round again, and where that second rounding happens depends on how wide the intermediate register is.

Machine       | Intermediate width  | What this means
--------------|---------------------|----------------------------------------------------------------
x86 Linux (A) | x87 80-bit extended | Computation happens with 64 mantissa bits; rounded back down when written to memory
ARM Linux (B) | Software 128-bit    | A software IEEE-754 quad implementation rounds at a different truncation point
Apple M4 (C)  | 64-bit strict       | No intermediate widening at all; the mantissa is 52 bits throughout

The rounding path is different. So the final bit pattern is different.

What the hex reveals

Machine A's 16-byte hex: 9a 99 99 99 99 99 99 99 fd 3f 00 00 00 00 00 00

  • Bytes 0–9: the 80-bit extended value (little-endian: 64-bit mantissa first, then the 15-bit exponent and sign)
  • Bytes 10–15: compiler-inserted padding (00 00 ...)

Machine B's 16-byte hex: 34 33 33 33 33 33 33 33 33 33 33 33 33 33 fd 3f

  • All 16 bytes carry data: this is a real 128-bit float
  • The repeating 33 pattern is the infinitely repeating 0011 of 0.3's binary expansion, truncated at quad precision

Machine C's 8-byte hex: 34 33 33 33 33 33 d3 3f

  • A standard IEEE-754 double, little-endian
  • 3f d3 33 33 33 33 33 34 in big-endian: sign=0, exponent=01111111101 (biased 1021, i.e. 2⁻²), mantissa = 0011001100110011...0100, the repeating 0011 of 0.3's binary expansion with the last bit nudged up by the rounding of the sum

Why This Is Catastrophic for Distributed / Financial Systems

Consider a simple balance operation repeated across nodes:

balance = balance * 1.000000001

After 10 million such operations, the balance has grown to roughly $1,010.0501670..., and the low-order digits have diverged (illustrative figures; the exact digits depend on each node's rounding path):

  • Node A (x86): $1,010.05016712...
  • Node B (ARM): $1,010.05016745...
  • Node C (M4): $1,010.05016708...

The states have diverged. Each node believes a different truth. There is no consensus.

In a traditional distributed database, this is serious but recoverable: a primary node's value wins and replicas sync. But in a blockchain, there is no primary node. Every node is equal. Every node must independently arrive at the exact same bit-for-bit result. If they don't, the network fractures.


The Blockchain Solution: Integer Arithmetic Only

Blockchains don't try to fix floating point. They remove it.

How integers solve the problem

Integer arithmetic has no mantissa, no exponent, no rounding mode. 100 + 200 = 300 on x86, ARMv8, RISC-V, MIPS, and every other architecture, identically, always. There is nothing to round. There are no intermediate registers with different widths.

Integers are bit-for-bit deterministic across all architectures.

How major chains implement this

Ethereum represents all value in wei, stored as uint256. 1 ETH = 10¹⁸ wei. The Ethereum Virtual Machine (EVM) has explicit opcodes for integer arithmetic and deliberately has no floating-point opcode. Smart contract developers who want decimal semantics must implement fixed-point arithmetic manually using integer scaling.

Solana represents all value in lamports, stored as uint64. 1 SOL = 10⁹ lamports. Programs running in the Sealevel runtime must use integer arithmetic for any computation that enters the ledger.

Polkadot represents all value in planck, stored as u128. 1 DOT = 10¹⁰ planck. Logic runs inside WebAssembly-based runtimes where all balance and governance arithmetic is handled exclusively through integer types from Rust's standard library (u128, u64), never floats.

Chain       | Unit      | Type    | Scale
------------|-----------|---------|------------------------
Ethereum    | wei       | uint256 | 10^18 per ETH
Solana      | lamport   | uint64  | 10^9 per SOL
Polkadot    | planck    | u128    | 10^10 per DOT

What about real-world prices? (The oracle problem)

Real-world prices (ETH/USD, BTC/EUR) are inherently decimal data. How do oracle networks like Chainlink handle this without introducing floats?

Floating point exists off-chain; only integers cross the boundary.

  1. Price data is collected off-chain from exchanges as human-readable decimals
  2. Chainlink converts them to integers using parseUnits(), passing the value as a string, not a float, to avoid precision loss at the conversion step itself
  3. The resulting integer is submitted on-chain
  4. Smart contracts only ever see and operate on the scaled integer
// WRONG — multiplying a float loses precision before it even hits the chain
const amount = 0.1 * 1e18  // imprecise

// CORRECT — string-based conversion, no precision loss
const amount = parseUnits("0.1", 18)  // → 100000000000000000n (exact)

The reverse works the same way: formatUnits() converts the on-chain integer back to a human-readable string for display, without ever passing through a float.


Takeaway:

Blockchains reject floating point not because it is inaccurate, but because it is not reproducible across machines at the bit level.
