Today I learned why float or double data types should not be used for storing currency values.
TLDR
Floats and doubles don't accurately represent the base-10 values we use for money. Floating-point values are approximations, and when they're used for values that need to be exact, arithmetic operations can give results that are slightly off. These small errors can compound over many operations and become significant.
Number Systems
The story starts with number systems and the basic difference between the human way of counting numbers (base 10) and the machine way of counting numbers (base 2).
In computing, decimal numbers are expressed as:
significand x (base^exponent)
For example, 1.25 in base 10 would be expressed as:
125 x (10^-2)
And 1.25 in base 2 would be expressed as:
101 x (2^-2)
(the binary significand 101 is 5 in decimal, so this is 5 x 0.25 = 1.25)
Note: '^' denotes exponential power
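Python can show this representation directly. The built-in float.as_integer_ratio() returns the exact numerator and denominator a float actually stores, and for 1.25 the denominator is a power of 2, confirming it fits base 2 exactly:

>>> (1.25).as_integer_ratio()   # stored exactly as 5 / 2^2
(5, 4)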
To represent money, all we usually need is to store values up to two decimal places, i.e. values that are multiples of 0.01. The problem with floating-point numbers is that most such values can't be expressed as an integer multiplied by a power of 2, so when floats or doubles are used to represent money, the arithmetic operations performed on them can be slightly off.
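By contrast, 0.01 has no finite base-2 form, so the nearest representable value gets stored instead. Back in the REPL (assuming standard IEEE 754 doubles):

>>> (0.01).as_integer_ratio()      # not 1/100: a huge numerator over a power of 2
(5764607523034235, 576460752303423488)
>>> format(0.01, '.20f')           # the stored value is only close to 0.01
'0.01000000000000000021'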
Trying it out
Assume you have a dollar and you want to subtract 18 cents from it. You would expect to have 82 cents left. But when you replicate this calculation in a Python REPL (any language that uses base-2 floating point will do), the result is slightly off.
Python 3.7.2
[Clang 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 1-0.18
0.8200000000000001
To see more examples of the error, and the way it varies across the range, try the following snippet, which subtracts each value from 0.01 to 0.99 from 1:
for i in range(1, 100):
    print(1 - i/100)
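The errors also compound, as mentioned in the TLDR above. A minimal sketch: adding one cent a hundred times with floats drifts away from the exact total of 1.0.

total = 0.0
for _ in range(100):
    total += 0.01      # each addition is rounded to the nearest double
print(total)           # slightly more than 1.0, not exactly 1.0
print(total == 1.0)    # False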
Why are my floats broken?
Floats weren't made to represent exact decimals, because between any two decimal values there are infinitely many more numbers:
0.1, 0.11, 0.111, 0.1111, 0.11111, ..., 0.2
This makes it practically impossible to create a data type that can store every possible decimal value. The floating-point representation was meant to be a trade-off between precision and range: a middle-ground solution that can represent a reasonable range of decimal numbers while using only a limited amount of memory.
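To get a feel for that trade-off in Python, sys.float_info reports the precision and range of the 64-bit doubles that back the float type:

>>> import sys
>>> sys.float_info.dig     # decimal digits that are reliably preserved
15
>>> sys.float_info.max     # the largest representable double
1.7976931348623157e+308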
In fact, the speed of floating-point operations, measured in FLOPS (floating-point operations per second), is often used as a metric of computational performance.
Do I need to be worried?
If your work involves a large number of financial transactions (or any other computations that require a high degree of accuracy), you should consider using one of the alternatives in the next section. In most other cases the rounding errors would probably be negligible. But just as a general good practice, try not to use floats and doubles for representing currency values.
Alternatives
While there are some discussions online that suggest using a long integer and representing money in its smallest unit (cents, paise, etc.), most languages have special data types for exact decimal values. Here are the ones I know about, with a short Python example after the list.
- Java : BigDecimal
- Python : decimal
- JavaScript : big.js
- Ruby : BigDecimal
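For example, here is the dollar-minus-18-cents calculation done with Python's decimal module (note that the values are created from strings, so no binary approximation creeps in):

from decimal import Decimal

price = Decimal("1.00")
print(price - Decimal("0.18"))   # 0.82, exact this time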
Read More
Algonquin College : Understanding floating point numbers
Wikipedia : Floating point arithmetic
StackOverflow : Why not use Double or Float to represent currency?