<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dharrsan Amarnath</title>
    <description>The latest articles on DEV Community by Dharrsan Amarnath (@dharrsan-hq).</description>
    <link>https://dev.to/dharrsan-hq</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3887343%2F8646bea5-9a36-4ce9-8bc7-f7cd7df568e6.jpeg</url>
      <title>DEV Community: Dharrsan Amarnath</title>
      <link>https://dev.to/dharrsan-hq</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dharrsan-hq"/>
    <language>en</language>
    <item>
      <title>Why Blockchains Exclude Floating Point at the Architecture Level</title>
      <dc:creator>Dharrsan Amarnath</dc:creator>
      <pubDate>Mon, 20 Apr 2026 00:51:11 +0000</pubDate>
      <link>https://dev.to/dharrsan-hq/why-blockchains-exclude-floating-point-at-the-architecture-level-8a4</link>
      <guid>https://dev.to/dharrsan-hq/why-blockchains-exclude-floating-point-at-the-architecture-level-8a4</guid>
      <description>&lt;blockquote&gt;
&lt;h2&gt;
  
  
  I ran the same C program on three machines. Same code. Same inputs. Three different answers. Here's exactly why  
&lt;/h2&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Experiment
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="cp"&gt;#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;stdio.h&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;long&lt;/span&gt; &lt;span class="kt"&gt;double&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;1L&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;2L&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"%.20Lf&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kt"&gt;unsigned&lt;/span&gt; &lt;span class="kt"&gt;char&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;unsigned&lt;/span&gt; &lt;span class="kt"&gt;char&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="k"&gt;sizeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"%02x "&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
    &lt;span class="n"&gt;printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three machines, each compiling and running the same source code:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Machine&lt;/th&gt;
&lt;th&gt;OS&lt;/th&gt;
&lt;th&gt;Architecture&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;A&lt;/td&gt;
&lt;td&gt;Linux&lt;/td&gt;
&lt;td&gt;AMD x86_64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;B&lt;/td&gt;
&lt;td&gt;Linux&lt;/td&gt;
&lt;td&gt;Raspberry Pi ARMv8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C&lt;/td&gt;
&lt;td&gt;macOS&lt;/td&gt;
&lt;td&gt;Apple Silicon M4 (ARM64)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  The Results
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Machine A: AMD x86_64 Linux (GCC)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0.30000000000000001665
9f 93 54 5d e9 52 49 81 ff 3f 00 00 00 00 00 00
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;sizeof(long double)&lt;/code&gt; = &lt;strong&gt;16 bytes&lt;/strong&gt; on this machine. But only the first 10 bytes hold actual data: the remaining 6 are padding added for alignment. The meaningful precision lives in an 80-bit format called &lt;strong&gt;x87 extended precision&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Machine B: Raspberry Pi ARM Linux (GCC)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0.30000000000000004441
34 33 33 33 33 33 33 33 33 33 33 33 33 33 fd 3f
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;sizeof(long double)&lt;/code&gt; = &lt;strong&gt;16 bytes&lt;/strong&gt; here too, but the byte layout is completely different. On ARM Linux, GCC implements &lt;code&gt;long double&lt;/code&gt; as &lt;strong&gt;software-emulated 128-bit quad precision&lt;/strong&gt; (IEEE-754 binary128). The bytes are not compatible with Machine A's output, even though both are nominally "16 bytes."&lt;/p&gt;

&lt;h3&gt;
  
  
  Machine C: Apple M4 (ARM64, Clang)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0.30000000000000004
34 33 33 33 33 33 d3 3f
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;sizeof(long double)&lt;/code&gt; = &lt;strong&gt;8 bytes&lt;/strong&gt;. On Apple Silicon, Clang maps &lt;code&gt;long double&lt;/code&gt; to the same 64-bit &lt;code&gt;double&lt;/code&gt; type. There is no extended precision. What you write is exactly what you compute.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why They Disagree: The IEEE-754 Representation Problem
&lt;/h2&gt;

&lt;p&gt;This is not a hardware quality issue. It is a &lt;strong&gt;representation&lt;/strong&gt; issue.&lt;/p&gt;

&lt;h3&gt;
  
  
  The core problem: not all decimals fit in binary
&lt;/h3&gt;

&lt;p&gt;The decimal number &lt;code&gt;0.1&lt;/code&gt; in binary is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0.0001100110011001100110011001100110011001100110011001100110...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It repeats infinitely. A computer must cut it off at a finite number of bits and round. In IEEE-754 double (64-bit), that cutoff is at &lt;strong&gt;52 bits of mantissa&lt;/strong&gt;.&lt;/p&gt;
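The repeating expansion above can be reproduced with exact integer long division, no floats involved. A minimal Python sketch (any language with integer arithmetic works the same way):

```python
# Emit the binary expansion of num/den one bit at a time:
# double the numerator each step; the bit is 1 whenever it
# reaches the denominator, and the remainder carries over.
def binary_fraction_bits(num, den, n_bits):
    bits = []
    for _ in range(n_bits):
        num *= 2
        if num >= den:
            bits.append("1")
            num -= den
        else:
            bits.append("0")
    return "".join(bits)

print("0." + binary_fraction_bits(1, 10, 20))  # 0.00011001100110011001
```

The `0011` block repeats forever, which is exactly why any finite mantissa must round.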

&lt;p&gt;The layout of a 64-bit IEEE-754 double is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────┬───────────────────┬──────────────────────────────────────────────────────┐
│  Sign   │     Exponent      │                    Mantissa                          │
│  1 bit  │     11 bits       │                    52 bits                           │
└─────────┴───────────────────┴──────────────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So before addition even happens:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0.1  ≈  0.1000000000000000055511151231257827021181583404541015625
0.2  ≈  0.2000000000000000111022302462515654042363166809082031250
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These are not 0.1 and 0.2. They are the &lt;strong&gt;closest representable binary fractions&lt;/strong&gt;. The rounding error is baked in before a single arithmetic operation runs.&lt;/p&gt;
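You can verify these stored values yourself. Python's standard-library `decimal` module converts a float's stored binary value to its exact decimal form (a quick sanity check; the same inspection works in any language that exposes the raw bits):

```python
from decimal import Decimal

# Decimal(float) converts the *stored* binary value exactly,
# revealing the rounding baked in at parse time.
print(Decimal(0.1))     # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.2))     # 0.200000000000000011102230246251565404236316680908203125
print(repr(0.1 + 0.2))  # '0.30000000000000004'
```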

&lt;h3&gt;
  
  
  Why addition makes it worse across machines
&lt;/h3&gt;

&lt;p&gt;When you add the two rounded approximations, the machine has to round again, and &lt;em&gt;where&lt;/em&gt; that second rounding happens depends on how wide the intermediate register is.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Machine&lt;/th&gt;
&lt;th&gt;Intermediate register width&lt;/th&gt;
&lt;th&gt;What this means&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;x86 Linux (A)&lt;/td&gt;
&lt;td&gt;x87 80-bit extended&lt;/td&gt;
&lt;td&gt;Computation happens with 64 bits of mantissa; rounded back down when written to memory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ARM Linux (B)&lt;/td&gt;
&lt;td&gt;Software 128-bit&lt;/td&gt;
&lt;td&gt;The rounding rules of a software IEEE-754 quad implementation are used; produces a different truncation point&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Apple M4 (C)&lt;/td&gt;
&lt;td&gt;64-bit strict&lt;/td&gt;
&lt;td&gt;No intermediate widening at all; the mantissa is 52 bits throughout, start to finish&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The rounding path is different. So the final bit pattern is different.&lt;/p&gt;

&lt;h3&gt;
  
  
  What the hex reveals
&lt;/h3&gt;

&lt;p&gt;Machine A's 16-byte hex: &lt;code&gt;9f 93 54 5d e9 52 49 81 ff 3f 00 00 00 00 00 00&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bytes 0–9: the 80-bit extended value&lt;/li&gt;
&lt;li&gt;Bytes 10–15: compiler-inserted padding (&lt;code&gt;00 00 ...&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Machine B's 16-byte hex: &lt;code&gt;34 33 33 33 33 33 33 33 33 33 33 33 33 33 fd 3f&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All 16 bytes carry data: this is a real 128-bit float&lt;/li&gt;
&lt;li&gt;The repeating &lt;code&gt;33&lt;/code&gt; bytes (binary &lt;code&gt;00110011&lt;/code&gt;) are the hex form of the repeating &lt;code&gt;0011&lt;/code&gt; bit pattern in 0.3's binary expansion, as stored at 128-bit precision&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Machine C's 8-byte hex: &lt;code&gt;34 33 33 33 33 33 d3 3f&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A standard IEEE-754 double, little-endian&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;3f d3 33 33 33 33 33 34&lt;/code&gt; in big-endian: sign=0, exponent=01111111101 (biased 1021, unbiased -2), mantissa = &lt;code&gt;0011001100110011...&lt;/code&gt;, the binary of 0.3 rounded at 52 bits&lt;/li&gt;
&lt;/ul&gt;
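The 8-byte pattern is reproducible on any IEEE-754 machine. A short Python sketch using the standard-library `struct` module to dump a double's bytes and unbias its exponent:

```python
import struct

x = 0.1 + 0.2                # rounds to the nearest double, 0x3FD3333333333334
raw = struct.pack("<d", x)   # little-endian 64-bit IEEE-754
print(raw.hex(" "))          # 34 33 33 33 33 33 d3 3f

bits = int.from_bytes(raw, "little")
exponent = ((bits >> 52) & 0x7FF) - 1023   # strip sign/mantissa, remove the bias
print(exponent)                            # -2
```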




&lt;h2&gt;
  
  
  Why This Is Catastrophic for Distributed Systems
&lt;/h2&gt;

&lt;p&gt;Consider a simple balance operation repeated across nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;balance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;balance&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;1.000000001&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After 10 million such operations, the three nodes might hold (illustrative figures):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node A (x86): &lt;code&gt;$1,000.00000823...&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Node B (ARM): &lt;code&gt;$1,000.00000847...&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Node C (M4):  &lt;code&gt;$1,000.00000819...&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The states have diverged. Each node believes a different truth. There is no consensus.&lt;/p&gt;

&lt;p&gt;In a traditional distributed database, this is serious but recoverable: a primary node's value wins and the replicas sync. But in a blockchain, &lt;strong&gt;there is no primary node&lt;/strong&gt;. Every node is equal. Every node must independently arrive at the exact same bit-for-bit result. If they don't, the network fractures.&lt;/p&gt;
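The exact divergence depends on each node's hardware, but the underlying accumulation is easy to demonstrate on a single machine. A minimal Python sketch contrasting repeated float operations with the scaled-integer alternative:

```python
# Repeated float operations accumulate representation error;
# the same loop over scaled integers stays exact.
float_total = 0.0
int_total = 0              # balance in "cents" (a scaled integer)
for _ in range(10):
    float_total += 0.1
    int_total += 10        # 0.1 of a unit = 10 cents

print(float_total)         # 0.9999999999999999
print(float_total == 1.0)  # False
print(int_total == 100)    # True
```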




&lt;h2&gt;
  
  
  The Blockchain Solution: Integer Arithmetic Only
&lt;/h2&gt;

&lt;p&gt;Blockchains don't try to fix floating point. They remove it.&lt;/p&gt;

&lt;h3&gt;
  
  
  How integers solve the problem
&lt;/h3&gt;

&lt;p&gt;Integer arithmetic has no mantissa, no exponent, no rounding mode. &lt;code&gt;100 + 200 = 300&lt;/code&gt; on x86, ARMv8, RISC-V, MIPS, and every other architecture, identically, always. There is nothing to round. There are no intermediate registers with different widths.&lt;/p&gt;

&lt;p&gt;Integers are &lt;strong&gt;bit-for-bit deterministic across all architectures&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  How major chains implement this
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Ethereum&lt;/strong&gt; represents all value in &lt;strong&gt;wei&lt;/strong&gt;, stored as &lt;code&gt;uint256&lt;/code&gt;. 1 ETH = 10¹⁸ wei. The Ethereum Virtual Machine (EVM) has explicit opcodes for integer arithmetic and deliberately &lt;strong&gt;has no floating-point opcode&lt;/strong&gt;. Smart contract developers who want decimal semantics must implement fixed-point arithmetic manually using integer scaling.&lt;/p&gt;
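The integer-scaling pattern looks like this. A minimal Python sketch (Solidity contracts use the same shape with `uint256`; the names `WAD` and `wad_mul` follow a common DeFi convention and are illustrative, not a specific library's API):

```python
WAD = 10**18  # 1.0 in 18-decimal fixed point, like 1 ETH in wei

def wad_mul(a, b):
    # (a * b) / WAD, computed entirely in integers:
    # form the full-width product first, then do a single division.
    return (a * b) // WAD

one_tenth = 10**17       # 0.1 in wad units
two_tenths = 2 * 10**17  # 0.2 in wad units
print(one_tenth + two_tenths == 3 * 10**17)  # True, exactly, on every machine
print(wad_mul(5 * WAD, 3 * WAD // 2))        # 5.0 * 1.5 = 7500000000000000000 (7.5 wad)
```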

&lt;p&gt;&lt;strong&gt;Solana&lt;/strong&gt; represents all value in &lt;strong&gt;lamports&lt;/strong&gt;, stored as &lt;code&gt;uint64&lt;/code&gt;. 1 SOL = 10⁹ lamports. Programs running in the Sealevel runtime must use integer arithmetic for any computation that enters the ledger.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Polkadot&lt;/strong&gt; represents all value in &lt;strong&gt;planck&lt;/strong&gt;, stored as &lt;code&gt;u128&lt;/code&gt;. 1 DOT = 10¹⁰ planck. Logic runs inside WebAssembly-based runtimes where all balance and governance arithmetic is handled exclusively through integer types from Rust's standard library (&lt;code&gt;u128&lt;/code&gt;, &lt;code&gt;u64&lt;/code&gt;), never floats.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Chain       | Unit      | Type    | Scale
------------|-----------|---------|------------------------
Ethereum    | wei       | uint256 | 10^18 per ETH
Solana      | lamport   | uint64  | 10^9 per SOL
Polkadot    | planck    | u128    | 10^10 per DOT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What about real-world prices? (The oracle problem)
&lt;/h3&gt;

&lt;p&gt;Real-world prices (ETH/USD, BTC/EUR) are inherently decimal data. How do oracle networks like Chainlink handle this without introducing floating point?&lt;/p&gt;

&lt;p&gt;Floating point stays off-chain; only integers cross the boundary.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Price data is collected off-chain from exchanges as human-readable decimals&lt;/li&gt;
&lt;li&gt;Chainlink converts them to integers using &lt;code&gt;parseUnits()&lt;/code&gt;, passing the value as a &lt;strong&gt;string&lt;/strong&gt;, not a float, to avoid precision loss at the conversion step itself&lt;/li&gt;
&lt;li&gt;The resulting integer is submitted on-chain&lt;/li&gt;
&lt;li&gt;Smart contracts only ever see and operate on the scaled integer
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// WRONG — multiplying a float loses precision before it even hits the chain&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;amount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.1&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;e18&lt;/span&gt;  &lt;span class="c1"&gt;// imprecise&lt;/span&gt;

&lt;span class="c1"&gt;// CORRECT — string-based conversion, no precision loss&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;amount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;parseUnits&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;0.1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;// → 100000000000000000n (exact)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The reverse works the same way: &lt;code&gt;formatUnits()&lt;/code&gt; converts the on-chain integer back to a human-readable string for display, without ever passing through a float.&lt;/p&gt;
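The whole round trip fits in a few lines of pure string and integer manipulation. A Python sketch (`parse_units`/`format_units` here are simplified stand-ins for the ethers.js helpers, not a real library API; they ignore negative values and overlong fractions beyond truncation):

```python
def parse_units(value: str, decimals: int) -> int:
    # Split on the decimal point and scale with pure integer math;
    # no float is ever constructed.
    whole, _, frac = value.partition(".")
    frac = frac.ljust(decimals, "0")[:decimals]  # right-pad with zeros, then truncate
    return int(whole or "0") * 10**decimals + int(frac or "0")

def format_units(value: int, decimals: int) -> str:
    # The inverse: integer divmod, then zero-pad the fractional digits.
    whole, frac = divmod(value, 10**decimals)
    return f"{whole}.{str(frac).zfill(decimals)}"

print(parse_units("0.1", 18))                     # 100000000000000000
print(format_units(parse_units("0.1", 18), 18))   # 0.100000000000000000
```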




&lt;h2&gt;
  
  
  Takeaway
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Blockchains reject floating point not because it is inaccurate, but because &lt;strong&gt;it is not reproducible across machines at the bit level&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>web3</category>
      <category>blockchain</category>
      <category>programming</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
