Morteza Jangjoo

Why `0.1 + 0.2 != 0.3` in C# (and most other languages)

This article was originally published on my Hashnode blog: Read here

One of the most surprising things developers face — even experienced ones — is this seemingly illogical result in C#:

Console.WriteLine(0.1 + 0.2 == 0.3); // Outputs: False

Why does this happen?

The issue lies in how floating-point numbers are represented in binary.

C# (like most modern languages, including Python, JavaScript, and Java) follows the IEEE 754 standard for floating-point numbers (float and double). The problem: decimal fractions like 0.1 and 0.2 have no exact binary representation, just as 1/3 has no exact decimal representation.
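
You can see what is actually stored by printing the round-trip "G17" representation. In binary, 0.1 is the repeating fraction 0.000110011 0011..., so each literal is silently rounded to the nearest representable double:

Console.WriteLine((0.1).ToString("G17")); // 0.10000000000000001
Console.WriteLine((0.2).ToString("G17")); // 0.20000000000000001
Console.WriteLine((0.3).ToString("G17")); // 0.29999999999999999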

What you really get is:

double sum = 0.1 + 0.2;
Console.WriteLine(sum);       // 0.30000000000000004
Console.WriteLine(sum == 0.3); // False

So the sum is stored as 0.30000000000000004, while the literal 0.3 is stored as a slightly smaller value, and the comparison is False.

Is it just C#?

Nope.

Try it in Python:

print(0.1 + 0.2 == 0.3)  # False

Or JavaScript:

console.log(0.1 + 0.2 === 0.3); // False

Or Java:

System.out.println(0.1 + 0.2 == 0.3); // False

All use binary floating-point arithmetic and face the same limitation.

What should you do instead?

Use a tolerance (epsilon) for comparison:

bool AreEqual(double a, double b, double epsilon = 1e-10)
{
    return Math.Abs(a - b) < epsilon;
}

Console.WriteLine(AreEqual(0.1 + 0.2, 0.3)); // True

This is the recommended approach for comparing floating-point values in most languages.
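
One caveat: a fixed absolute epsilon like 1e-10 only works when your values are of modest magnitude. At 1e15, for example, adjacent doubles are already about 0.125 apart, so rounding error alone can exceed a 1e-10 tolerance. A relative tolerance handles this; here is a minimal sketch (the helper name and default tolerances are my own choices, not a standard API):

bool NearlyEqual(double a, double b, double relTol = 1e-9, double absTol = 1e-12)
{
    // absTol guards comparisons near zero, where a relative test becomes too strict
    double diff = Math.Abs(a - b);
    return diff <= Math.Max(relTol * Math.Max(Math.Abs(a), Math.Abs(b)), absTol);
}

Console.WriteLine(NearlyEqual(0.1 + 0.2, 0.3));   // True
Console.WriteLine(NearlyEqual(1e15 + 0.1, 1e15)); // True (the absolute-epsilon version above says False)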

Or use decimal for precision

If you're doing financial or monetary calculations in C#, use the decimal type:

decimal a = 0.1m;
decimal b = 0.2m;
Console.WriteLine(a + b == 0.3m); // True

The decimal type stores numbers in base 10, so values like 0.1 and 0.2 are represented exactly and the binary rounding issue disappears.
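
Keep in mind that decimal is not infinitely precise either; it rounds in base 10 to 28-29 significant digits, so fractions that don't terminate in decimal (like 1/3) still lose precision:

decimal third = 1m / 3m;
Console.WriteLine(third);            // 0.3333333333333333333333333333
Console.WriteLine(third * 3m);       // 0.9999999999999999999999999999
Console.WriteLine(third * 3m == 1m); // False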

Conclusion

  • Don’t compare floating-point numbers using ==.
  • Use a tolerance (epsilon) or decimal type when exact precision matters.
  • This behavior is consistent across many languages due to IEEE 754.

Follow me on Hashnode for more C# and .NET content:
Read this post on Hashnode
I’m Morteza Jangjoo, and my motto is “Explaining things I wish someone had explained to me.”
