
Vipul Kumar

Why is 0.1 + 0.2 != 0.3 in Java?

At first glance, it feels like Java is broken. But the real culprit is floating-point precision.

System.out.println(0.1 + 0.2 == 0.3); // false ❌

Here’s why:
👉 Java’s double follows the IEEE 754 standard.
👉 Numbers like 0.1 and 0.2 can’t be represented exactly in binary.
👉 So 0.1 + 0.2 evaluates to 0.30000000000000004, not exactly 0.3.
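You can see this for yourself. Here's a quick sketch: passing the double literal straight into the BigDecimal constructor exposes the exact binary value the double actually stores.

import java.math.BigDecimal;

// The sum carries the binary rounding error with it
System.out.println(0.1 + 0.2); // 0.30000000000000004

// new BigDecimal(double) preserves the exact value stored in memory
System.out.println(new BigDecimal(0.1));
// 0.1000000000000000055511151231257827021181583404541015625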

Real-world implications:
👉 Financial apps don’t use double for money. Instead, they rely on BigDecimal for precise arithmetic:

import java.math.BigDecimal;

BigDecimal x = new BigDecimal("0.1");
BigDecimal y = new BigDecimal("0.2");
System.out.println(x.add(y).equals(new BigDecimal("0.3"))); // true ✅
// Note: equals also compares scale, so compareTo(...) == 0 is the safer comparison in general

👉 Lesson:
Floating-point is great for scientific calculations, but never for money.
If you’re handling currency, billing, or financial logic, reach for BigDecimal, not double.
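Here's a minimal sketch of what that usually looks like in practice. The 7% tax rate and two-decimal rounding are just illustrative assumptions, not a real billing rule:

import java.math.BigDecimal;
import java.math.RoundingMode;

// Construct from Strings (or BigDecimal.valueOf) so no binary error sneaks in
BigDecimal price = new BigDecimal("19.99");
BigDecimal taxRate = new BigDecimal("0.07"); // illustrative 7% tax

// Multiply exactly, then round to 2 decimal places as currency requires
BigDecimal tax = price.multiply(taxRate).setScale(2, RoundingMode.HALF_UP);
BigDecimal total = price.add(tax);

System.out.println(total); // 21.39
System.out.println(total.compareTo(new BigDecimal("21.39")) == 0); // true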

Have you ever been bitten by a floating-point bug in production?

I'm @vipulkumarsviit. Let's stay in touch: https://www.linkedin.com/in/vipulkumarsviit/
