# How to correctly compare floating point numbers for equality

Benjamin Black

IEEE 754 floating-point numbers are an approximate representation of real numbers; therefore, floating-point numbers should not be directly compared for equality.

(This issue is common to any language that uses IEEE 754 floating-point, including Javascript.)

Programmers are probably familiar with variations of this head-scratcher:

```
let f1 = 0.1 + 0.2;
let f2 = 0.3;
console.log(f1 === f2); // 'false'
```

Comparisons of IEEE 754 floating-point numbers must account for ε (epsilon), the maximum relative error in floating point numbers (defined as "the difference between 1 and the smallest floating-point number greater than 1").

In Javascript, the value of ε for IEEE 754 double-precision floating point is a static property on the `Number` object (`Number.EPSILON`).

To account for ε and correctly compare two floating-point numbers for equality, subtract one number from the other and ensure the absolute difference is smaller than ε.

```
let f1 = 0.1 + 0.2;
let f2 = 0.3;
console.log(Math.abs(f1 - f2) < Number.EPSILON); // 'true'
```

(The comparison should be strict less-than, not less-than-or-equal, by the definition of ε: `1.0 + ε ≠ 1.0` in IEEE 754 representation.)
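That definition is easy to check directly in a console (a quick sketch; the `2^-52` value is the IEEE 754 double-precision machine epsilon):

```javascript
// Number.EPSILON is the gap between 1 and the next representable double above 1.
console.log(Number.EPSILON); // 2.220446049250313e-16, i.e. 2^-52

// Adding a full epsilon to 1 is detectable...
console.log(1 + Number.EPSILON !== 1); // 'true'

// ...but half an epsilon rounds away (round-half-to-even lands back on 1.0).
console.log(1 + Number.EPSILON / 2 === 1); // 'true'
```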

**Addendum:** There is a stage 0 proposal to add decimal numbers to Javascript, which could precisely represent all decimal numbers within its range and precision.

## WARNING: Don't follow the recipe in this post **unless you're only working with fairly small numbers**

It's actually quite dangerous to use `Number.EPSILON` as a "tolerance" for number comparisons. Other languages have a similar construct (the .NET languages all have it as `double.Epsilon`, for example); however, it always comes with a warning not to use the "floating point epsilon" for comparisons.

The "epsilon" provided by the language is simply the smallest possible "increment" you can represent with that particular floating point type - the gap between 1 and the next representable value above 1. For IEEE double-precision numbers, that number (`Number.EPSILON`) is minuscule!

The problem with using it for comparisons is that floating point numbers are implemented like scientific notation, where you have a fixed number of significant digits, and an exponent that shifts the "decimal point" back and forth (hence the "floating point" thing in the name).

Double-precision floating point numbers (as used in Javascript) have about 15 significant (decimal) digits. What that means is if you want to hold a number like 1,000,000,001 (10 significant digits), then you can only hold a fraction up to about five or six decimal places. The double-precision floating point numbers 3,000,000,000.000001 and 3,000,000,000.0000011 will be seen as equal. (Note that because floats are stored as binary, it's not a case of there being exactly 15 significant decimal digits at all times - information is lost at some power of two, not a power of 10.)

`Number.EPSILON` is waaaaay smaller than .000001 - so while the example code that Benjamin gave works with a "tolerance" of `Number.EPSILON` (because the numbers being compared are all smaller than 1.0), the same recipe does NOT work properly for larger numbers.
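The original comment's failing example didn't survive, but a reconstruction under the same idea - scale the article's numbers up and the fixed `Number.EPSILON` tolerance stops working - looks like this:

```javascript
// The article's recipe works for small numbers...
let a = 0.1 + 0.2;
let b = 0.3;
console.log(Math.abs(a - b) < Number.EPSILON); // 'true'

// ...but scale the same values up and the identical recipe reports "not equal",
// because the rounding error grows with the magnitude of the numbers while
// Number.EPSILON stays fixed at 2^-52.
let f1 = 100000.1 + 200000.2;
let f2 = 300000.3;
console.log(f1 === f2);                          // 'false'
console.log(Math.abs(f1 - f2) < Number.EPSILON); // 'false' - still "not equal"!
```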

If you go hunting online, there's a fair bit of discussion on how to choose a suitable epsilon (or tolerance) for performing comparisons. After all the discussion, and some very clever code that has a good shot at figuring out a "dynamically calculated universal epsilon" (based on the largest number being compared), it always ends up boiling back down to this:

Dynamically calculated tolerances (based on the scale of the numbers being compared) aren't a universal solution, because when the numbers being compared vary wildly in size it's easy to end up with a situation that breaks the rules of equality: a can equal b, and b can equal c, yet a doesn't equal c.

Using a tolerance that changes with every single equality test in your program is a good route to having a != c somewhere - and you can be guaranteed it'll happen at annoyingly "random" times!
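To see how a scale-dependent tolerance breaks transitivity, here's a small sketch (the `roughlyEqual` helper and the 1% figure are made up for illustration):

```javascript
// A hypothetical 1% relative tolerance: "equal" if the gap is within
// 1% of the larger magnitude - so the tolerance changes with every comparison.
const roughlyEqual = (x, y) =>
  Math.abs(x - y) <= 0.01 * Math.max(Math.abs(x), Math.abs(y));

const a = 100, b = 100.9, c = 101.8;
console.log(roughlyEqual(a, b)); // 'true'  (gap 0.9, tolerance ~1.009)
console.log(roughlyEqual(b, c)); // 'true'  (gap 0.9, tolerance ~1.018)
console.log(roughlyEqual(a, c)); // 'false' (gap 1.8, tolerance ~1.018) - a == b == c, yet a != c
```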

Thar be the way to bugs me-hearties!!!

So, how do you choose a suitable tolerance for *your* program? Let's assume you're holding dimensions of a building in millimetres (where a 20 metre long building would be 20,000). Do you really care if that dimension is within .0000000001 of a millimetre of some other dimension when you're comparing? Probably not!

In this case a sensible epsilon (or tolerance) might be .01 or .001 - plug that into the `Math.abs(f1 - f2) < tolerance` expression instead. Definitely do NOT use `Number.EPSILON` for that application, since you might get a 200m long building somewhere (200,000mm) and that'll fail to compare properly to another 200m long dimension using Javascript's `Number.EPSILON`.
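As a sketch of that advice (the `sameDimension` helper, the `TOLERANCE_MM` constant, and the .001 figure are illustrative choices, not a standard API):

```javascript
// Hypothetical building-dimension comparison: all values in millimetres,
// with a tolerance chosen from the problem domain, not from Number.EPSILON.
const TOLERANCE_MM = 0.001;

const sameDimension = (d1, d2) => Math.abs(d1 - d2) < TOLERANCE_MM;

console.log(sameDimension(200000, 200000.0000000001)); // 'true'  - a 200m building, same dimension
console.log(sameDimension(200000, 200000.5));          // 'false' - half a millimetre really is different
```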

Incidentally, if you don't care whether your measurements are any closer than 1mm to each other, then you should probably just use an integer type and be done with it (if you're working in a language that has integer types - obviously the issue here is that Javascript does not have one).

Likewise, if you only care whether your measurements are within .1mm of each other, then you could use a "decimal" type (if your language has it), or store all your measurements internally as integers (again, if your language has them) containing tenths of millimetres (e.g. 20m building = 200,000 internally).
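The integer-of-tenths idea can be sketched in Javascript too (the `toTenths` helper is made up for illustration; Javascript numbers represent integers exactly up to `Number.MAX_SAFE_INTEGER`, so the comparison becomes exact):

```javascript
// Store tenths of millimetres as integers; equality is then exact.
const toTenths = (mm) => Math.round(mm * 10);

const building = toTenths(20000);   // 20m building -> 200000 tenths of a mm
const other = toTenths(20000.04);   // within .1mm, rounds to the same grid point
console.log(building === other);    // 'true'
console.log(building);              // 200000
```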

If you want to have a play around with floating point comparisons in Javascript and peek into how the numbers lose precision as they get bigger, then there's a jsfiddle I stuck together at: jsfiddle.net/r0begv7a/3/

Nice! This problem is common to most languages, thanks for the reminder!

I'm new to js, and thus far have not seen the epsilon defined in any other languages. I recall in school calculating machine epsilon as an exercise. The only other time I've dealt with comparing floats we had to define our own acceptable epsilon. Is this very handy definition something that any other languages have implemented?

They sure do. C defines the values of ε for the float, double, and long double floating point types in the standard header `float.h`, as the constants `FLT_EPSILON`, `DBL_EPSILON`, and `LDBL_EPSILON`.

Awesome, thanks!

Defining your own epsilon - as you have done before - is the correct thing to do.

The "double epsilon" defined in various languages and frameworks is not intended as some "margin of error" for performing comparisons - it's just the smallest possible increment you could ever represent with the number type.

Because we are talking about *floating* point numbers (not fixed scale/precision numbers), the level of precision specified by `Number.EPSILON` is only possible when you are storing a very small number - otherwise the precision will change depending on the number you store. See my wordy reply to the original article ( dev.to/alldanielscott/comment/b46f ) for why the code recommended in the article here is not safe!

The "correct" way would be for the ECMA group to abstract this gotcha into an override of the == and/or === operators for floats in an upcoming version of ES :D... but good tip!

That would certainly be breaking.

I feel like I wouldn't mind them doing that to `==`, which means "approximately equal" in my head, but in reality that would become a reason to use `==` with all its coercions, and we definitely don't need that.