Zaid Rehman
Why 0.1 + 0.2 ≠ 0.3 in JavaScript: The Truth About Floating-Point Numbers

Why 0.1 + 0.2 ≠ 0.3 in JavaScript

If you’ve ever typed this into your console:

0.1 + 0.2 === 0.3 // false
console.log(0.1 + 0.2) // 0.30000000000000004

…and felt confused, you’re not alone.

This tiny difference has confused developers for years, but once you understand floating-point numbers, it all makes sense.


🧮 What’s Going On?

JavaScript stores every number (except BigInt) as a 64-bit IEEE 754 floating-point value.

That means numbers are represented in binary, not decimal.

And here’s the problem:

Some decimal numbers simply can’t be represented exactly in binary form.

For example:

  • 1/3 is an infinite repeating decimal in base 10: 0.333333...
  • Similarly, 0.1 is an infinite repeating fraction in base 2 (binary).

JavaScript can’t store infinite digits, so it rounds the value slightly.

That’s why when you add 0.1 + 0.2, you get 0.30000000000000004.
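You can see that rounding for yourself by asking for more digits than JavaScript normally prints — a quick console experiment, nothing library-specific:

```javascript
// toFixed(20) exposes digits that the default formatting hides
console.log((0.1).toFixed(20));       // "0.10000000000000000555"
console.log((0.2).toFixed(20));       // "0.20000000000000001110"
console.log((0.1 + 0.2).toFixed(20)); // "0.30000000000000004441"
```

Neither operand is exactly what you typed, so their sum can't be either.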


💡 The Science Behind It

Every floating-point number is stored as three parts:

  1. Sign bit (positive or negative)
  2. Exponent (how large or small the number is)
  3. Mantissa (also called the significand — the significant digits of the value)

This structure gives a huge range (roughly 5 × 10⁻³²⁴ up to about 1.8 × 10³⁰⁸) but limited precision: about 15–17 significant decimal digits.

So small rounding errors are completely normal.
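You can inspect those three parts yourself by reading the raw bits of a double through a DataView — a sketch, with doubleToBits as an illustrative helper name:

```javascript
// Write a double into an 8-byte buffer, then read it back bit by bit
function doubleToBits(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x); // big-endian by default
  let bits = "";
  for (let i = 0; i < 8; i++) {
    bits += view.getUint8(i).toString(2).padStart(8, "0");
  }
  return bits;
}

const bits = doubleToBits(0.1);
console.log(bits.slice(0, 1));  // sign: "0"
console.log(bits.slice(1, 12)); // exponent: "01111111011"
console.log(bits.slice(12));    // 52-bit mantissa: repeating 1001… cut off and rounded
```

The mantissa of 0.1 is the repeating binary pattern 1001… chopped at 52 bits, which is exactly where the tiny error comes from.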


⚠️ Why It Matters

In most cases, these tiny errors don’t break anything.

But when you compare floats directly or work with money, they can cause problems.

❌ Don’t do this:

if (price === 0.3) {
  console.log("Exact match");
}

This check might fail even when the numbers look equal.

✅ Do this instead:

if (Math.abs(price - 0.3) < Number.EPSILON) {
  console.log("Close enough!");
}

Number.EPSILON is the difference between 1 and the smallest representable number greater than 1 — a good tolerance for “almost equal” checks on values near 1 (for larger magnitudes, scale the tolerance up).
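Because a fixed epsilon only suits values near 1, a common pattern — sketched here as a hypothetical approxEqual helper, not a built-in — is to scale the tolerance by the magnitude of the operands:

```javascript
// Scale the tolerance with the size of the inputs
function approxEqual(a, b, eps = Number.EPSILON) {
  return Math.abs(a - b) <= eps * Math.max(Math.abs(a), Math.abs(b), 1);
}

console.log(approxEqual(0.1 + 0.2, 0.3)); // true
console.log(approxEqual(1e16 + 2, 1e16)); // true — a gap of 2 is within tolerance at this scale
```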


💰 When You Need Perfect Precision

If you’re handling prices, interest rates, or other values where exact decimal results matter, binary rounding errors are unacceptable.

In those cases, use decimal libraries designed for precise math:

Example with decimal.js:

const Decimal = require('decimal.js');
// Pass strings so the inputs are never parsed as imprecise binary doubles first
const result = new Decimal('0.1').plus('0.2');
console.log(result.toString()); // "0.3"
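If you’d rather avoid a dependency, another common approach for money is to do all arithmetic in integer minor units (cents here) and only divide for display:

```javascript
// Integers up to Number.MAX_SAFE_INTEGER are exact, so cents never drift
const itemA = 10; // $0.10, stored as cents
const itemB = 20; // $0.20, stored as cents
const total = itemA + itemB;

console.log(total === 30);             // true — no floating-point surprise
console.log((total / 100).toFixed(2)); // "0.30" — convert only for display
```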

🧠 Key Takeaways

  • JavaScript numbers are binary floating-point values
  • Some decimals cannot be represented exactly
  • Always use tolerance (Number.EPSILON) when comparing floats
  • Round numbers only for display, not for logic
  • For money or math-heavy apps, use a decimal library
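The “round for display, not for logic” point in practice: keep the full value in your variables and format only at the output boundary.

```javascript
const sum = 0.1 + 0.2;

// Display: format at the last moment
console.log(sum.toFixed(2)); // "0.30"

// Logic: compare with a tolerance, never the formatted string
console.log(Math.abs(sum - 0.3) < Number.EPSILON); // true
```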

🔍 TL;DR

0.1 + 0.2 === 0.3 // false
// Because floating-point math is not perfect decimal math

Once you understand this, you’ll never be surprised by 0.30000000000000004 again.

Top comments (2)

Tracy Gilmore • Edited

Hi Zaid, I think the title of your excellent post should be "Why 0.1 + 0.2 != 0.3 in IEEE 754".

Extracted from MDN "The JavaScript Number type is a double-precision 64-bit binary format IEEE 754 value, like double in Java or C#."

This means the lack of precision highlighted by your post is not peculiar to JS but can be found in a number of other programming languages as well. JS is just a convenient whipping boy.

The entire problem stems from the fact that digital computers, unlike analogue ones, have to approximate values (or use a lot more memory), and irrational numbers will always be a problem whatever the technology.

Elanat Framework

Hi.
JavaScript also has specific challenges in sorting data.
Despite these challenges, JavaScript offers powerful tools and libraries to overcome them — but developers need to be aware of the pitfalls to avoid bugs and inconsistencies.