JavaScript uses the IEEE 754 floating-point representation for its numbers. Because it is a binary representation, it can exactly represent fractions like 1/2 and 1/8. Unfortunately, the fractions we most commonly use are decimal ones, such as 1/10 and 1/100.
JavaScript numbers have plenty of precision, but binary floating point cannot exactly represent numbers like 0.1 or 0.01. It can approximate them very closely, yet the fact that such numbers can't be represented exactly can lead to subtle problems.
Example:
```js
let x = .3 - .2;
let y = .2 - .1;

x === y    // => false: they are not the same!
x === .1   // => false: .3 - .2 !== .1
y === .1   // => true:  .2 - .1 === .1
```
Because of rounding errors, the approximation of `x` is not exactly the same as the approximation of `y`. The two values are very, very close to each other, but not the same!
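Logging the two values makes the discrepancy visible. This is what a JavaScript engine prints for these two doubles:

```js
let x = .3 - .2;
let y = .2 - .1;

console.log(x); // 0.09999999999999998 (the closest double to .3 minus the closest double to .2)
console.log(y); // 0.1 (this subtraction happens to round to exactly the same double that .1 maps to)
```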
It's also important to note that this problem is not specific to JavaScript; it affects every language that uses binary floating-point numbers.
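One common way to sidestep the problem is to compare floating-point values with a small tolerance instead of `===`. Here is a minimal sketch: `Number.EPSILON` (the gap between 1 and the next representable double) is a reasonable tolerance for values near 1, and the helper name `nearlyEqual` is purely illustrative:

```js
// Illustrative helper: treat two numbers as equal if they differ by less
// than Number.EPSILON. This tolerance only suits values near 1; numbers of
// larger magnitude would need a scaled tolerance.
function nearlyEqual(a, b) {
  return Math.abs(a - b) < Number.EPSILON;
}

nearlyEqual(.3 - .2, .1); // => true
nearlyEqual(.2 - .1, .1); // => true
```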