Mathematical operations are the basis of most algorithms. We increment counter variables, calculate percentages or measure distances between objects. And it just works.
Until it doesn't. Of course, there are obvious errors like reusing a variable without resetting its value or using the wrong formula for a problem. However, I'd like to shine a light on a completely different issue: the numerical stability of algorithms.
Sounds fancy and it is kinda. Because this is such a broad and potentially difficult topic, we'll only look at a simple example.
The Task
I watched this great YouTube video the other day, which covered an 1869 MIT maths entrance exam. In exercise 5 this expression should be simplified:

( (a+b)/(a-b) + (a-b)/(a+b) ) / ( (a+b)/(a-b) - (a-b)/(a+b) )
The Programmer's Solution
As a programmer you could say: wait, why would I do that? I have a super-fast computer. Let's just write a function to calculate this expression. We'll even use a substitution to make life simpler for the computer. We set

c = a + b

and

d = a - b.

Now we get this:

( c/d + d/c ) / ( c/d - d/c )
Now the calculation can be completed in only two additions, two subtractions and five divisions. This is profoundly simpler than the initial five additions, five subtractions and five divisions. Note that we didn't change the expression itself, we only gave names to its repeating parts.
We could go even further and store `c/d` and `d/c` in their own variables as well, but let's say it's good enough.
Now, we are ready to write our script:
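In JavaScript, a direct translation of the substituted expression could look like this:

```javascript
function mit1869_naive(a, b) {
  // substitution: c = a + b, d = a - b
  const c = a + b;
  const d = a - b;
  // (c/d + d/c) / (c/d - d/c), written out exactly as on paper
  return (c / d + d / c) / (c / d - d / c);
}

console.log(mit1869_naive(1, 1)); // NaN, because d is zero
```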
Feel free to play around with the function a bit. You'll notice that you get `NaN` for `a=1` and `b=1` because `d` is zero if `a == b`. Other than that, everything is fine, isn't it?
At least for now, we don't have a reason to assume that this implementation might be wrong, because all we have done is copy the formula.
Simplifying the expression
We could call it a day and stop here. But how about actually solving the problem?
We'll use the already substituted version, bring its numerator and denominator over their common denominator c*d, and then cancel it:

( c/d + d/c ) / ( c/d - d/c ) = ( (c² + d²)/(c*d) ) / ( (c² - d²)/(c*d) ) = (c² + d²) / (c² - d²)
Because we can't simplify this expression any further as it is, we'll resubstitute `c` and `d`. Before we do that, we expand the squares using the first and second binomial formula:

c² = (a + b)² = a² + 2ab + b²
d² = (a - b)² = a² - 2ab + b²

Now we resubstitute:

(c² + d²) / (c² - d²) = (2a² + 2b²) / (4ab)

The last step is to reduce this fraction, and we get:

(a² + b²) / (2ab)
We did it! If you check the video, you'll see the presenter found the same solution although we used a slightly different way.
However, is this solution really better? It requires one addition, four multiplications and one division. If we assume that every operation takes the same time (which isn't necessarily true), we have 6 operations instead of 9. However, if we had optimized `mit1869_naive` a bit more (by reusing `c/d` and `d/c`), we could've saved two divisions and ended up with 7 operations. Not a big difference between those two.
But let's say we implement this function as well:
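Again as a sketch in JavaScript:

```javascript
function mit1869_simplified(a, b) {
  // (a² + b²) / (2ab)
  return (a * a + b * b) / (2 * a * b);
}

console.log(mit1869_simplified(1, 2)); // 1.25
```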
Comparing the two functions
Looks fine. We can play around with some values and both functions will give us the same outputs. Ok, `mit1869_naive` returns `1.2500000000000002` instead of `1.25` for `a=1` and `b=2`, but that doesn't seem like a relevant difference.
However, testers and engineers alike are more interested in extreme examples. So let's set `a=0.1` and `b=100`. We get:

- `mit1869_naive(0.1, 100)`: 500.0005000000141
- `mit1869_simplified(0.1, 100)`: 500.0005
Again, the naive version seems to have some problems in the last few digits, but nothing dangerous. However, the number of valid digits shrank quite a bit, from 15 to 10. But two data points are not a trend. So, let's try more values.
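A small helper loop (just a sketch, using the two functions from above) is enough to generate such a comparison:

```javascript
const pairs = [
  [0.001, 1e3], [0.0001, 1e4], [0.00001, 1e5], [0.000001, 1e6],
  [1e-6, 1e7], [1e-7, 1e8], [1e-8, 1e8], [1e-8, 1e9],
];

for (const [a, b] of pairs) {
  console.log(a, b, mit1869_naive(a, b), mit1869_simplified(a, b));
}
```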
| a | b | naive | simplified |
|---|---|---|---|
| 0.001 | 1,000 | 50000.00000509277 | 50000.000004999994 |
| 0.0001 | 10,000 | 4999999.9984685425 | 5000000.00000005 |
| 0.00001 | 100,000 | 499999972.5076038 | 500000000 |
| 0.000001 | 1,000,000 | 50000134641.22209 | 50000000000 |
| 1e-6 | 1e7 | 4998445757347.942 | 5000000000000 |
| 1e-7 | 1e8 | 486875635391405 | 500000000000000 |
| 1e-8 | 1e8 | 3602879701896397 | 5000000000000000 |
| 1e-8 | 1e9 | -Infinity | 50000000000000000 |
As we can see, starting with `a = 0.00001` and `b = 100,000` we don't have any valid decimal places at all. And if we use really extreme values, we get invalid results.
However, our two functions are mathematically equal. What happened?
Why do these functions output different values?
The problem is: two functions that are equal on paper don't have to produce equal results once you implement them on a computer, because computers have only limited memory.
Hence, a computer cannot store real numbers exactly (sometimes it does, but you can't count on that; even `0.1` won't be stored precisely). Usually that is not a problem, because for floating point numbers as defined in IEEE-754 we have a guarantee: the relative error will always be smaller than or equal to 2^(-m), where `m` is a natural number depending on the number of bits used. For a 32-bit IEEE-754 number, `m` would be 24.
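You can check this bound yourself in JavaScript: `Math.fround` rounds a number to the nearest 32-bit float, so the relative error of that rounding should stay below 2^(-24):

```javascript
const x = 0.1;
const x32 = Math.fround(x); // nearest 32-bit float: 0.10000000149011612

// relative representation error vs. the guaranteed bound 2^(-24)
const relError = Math.abs(x - x32) / Math.abs(x);
console.log(relError);             // ~1.49e-8
console.log(relError <= 2 ** -24); // true (2^(-24) is roughly 5.96e-8)
```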
Let's look at some examples where `x` is the number we want to represent:
| x | order of magnitude | max absolute error |
|---|---|---|
| 1 | one | 59.6*1e-9 |
| 100 | hundred | 5.96*1e-6 |
| 1e6 | million | 0.0596 |
| 1e9 | billion | 59.6 |
| 1e12 | trillion | 59.6*1e3 |
| 1e15 | quadrillion | 59.6*1e6 |
It's easy to see that you can be off quite a bit, but relative to `x` the error will always be small. If you have one million dollars, you usually won't care about $0.06.
But there are situations in which you care. If we have a look at our example again, we could ask which implementation provides the correct or at least better results. To do that, we need the actual value.
Sadly, we cannot answer this question using another program [1]. Instead, we'll have to tackle the problem with pen and paper. Let's look at `a = 1e-8` and `b = 1e9`.
At first we'll look at the simplified version:

(a² + b²) / (2ab) = ((1e-8)² + (1e9)²) / (2 * 1e-8 * 1e9) = (1e-16 + 1e18) / 20

We simply inserted the values and simplified a bit. Now, we'll approximate 1e-16 + 1e18 ≈ 1e18. This is a valid assumption because 1e18 is much larger. In fact, a computer would do the same or something similar if you add these two numbers (feel free to insert `console.log` statements to test this). Now we get:

1e18 / 20 = 5e16 = 50000000000000000
This is exactly the value the second JS script calculated as well.
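If you want to verify the absorption step and the final value in a console, something along these lines should do (using `mit1869_simplified` from above):

```javascript
// 1e-16 is completely absorbed by 1e18: the sum rounds back to 1e18
console.log(1e-16 + 1e18 === 1e18); // true

// and the simplified implementation indeed returns 5e16
console.log(mit1869_simplified(1e-8, 1e9)); // 50000000000000000
```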
After this success, let's look at the naive implementation. Again, we'll assume a + b ≈ b and a - b ≈ -b if we add or subtract `a` and `b`, because `a` is much smaller. With a = 1e-8 and b = 1e9 this gives:

c = a + b ≈ 1e9
d = a - b ≈ -1e9

Now we resubstitute:

( c/d + d/c ) / ( c/d - d/c ) ≈ ( -1 + (-1) ) / ( -1 - (-1) ) = -2 / 0
Now you'll probably say: "But you cannot divide by zero!" And that's correct, however a computer does it anyway. It simply assumes that `0` is a very small number, and if you divide something by something very small, you get something very big, in this case `Infinity`. But because we divided a negative number by `0`, we get `-Infinity`.
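You can see this behaviour directly in a JavaScript console:

```javascript
console.log(-2 / 0); // -Infinity
console.log(2 / 0);  // Infinity
console.log(0 / 0);  // NaN
```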
Without the approximations, both versions would've delivered the same results. But as explained above, because there's only a finite amount of memory available for each number, the computer has to make this approximation. On a side note: this means that our second approximation wasn't valid from a mathematical standpoint, as the result was obviously incorrect. Hence, when making approximations you must always consider where the resulting values will be used and whether small differences will be relevant.
Generalizing the effects
There's an upper bound for the maximum relative error.
If you try to represent a real number `x` in a computer with a floating point number `x'`, the following upper bound for the relative error exists [3]:

|x - x'| / |x| <= 2^(-m)
The value of `m` depends on the implementation and on whether 32-bit or 64-bit floating point numbers are used.
This has an important consequence: if we add a (in magnitude) small number to a (in magnitude) large number, it's possible that we lose information about the small number.
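A one-liner is enough to see this information loss with the values from our example:

```javascript
// 1e-8 is smaller than half a ULP of 1e9, so it is absorbed completely
console.log(1e9 + 1e-8 === 1e9); // true
```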
Different implementations of the same algorithm can cause different results.
This is a direct consequence of the rule above. Sometimes one algorithm is always better than another; sometimes it depends on the input values.
Definition: Condition of a problem
A problem is well-conditioned if small changes in the input values don't cause very large changes in the result. On the other hand, a problem is ill-conditioned if small changes in the input values do cause large changes in the result [4]. (In fact, you could quantify this, but that's a large topic in itself.)
Definition: Stable algorithm
An algorithm is stable if the accumulated error resulting from rounding errors isn't larger than what we would expect for this problem, given its condition and the errors in the input values [2][5].
Example 1: If we have a well-conditioned problem and only small input variations, a stable algorithm would cause only small output variations.
Example 2: If we have an ill-conditioned problem, even a stable algorithm couldn't ensure that small input variations will only cause small output variations because the big output variations are inherent to the problem!
Example 3: This article looked at one problem. However, we found two algorithms to solve it. One is stable, one is not.
Applying our gained knowledge
While this article only scratches the surface of this important topic, you can still use what we've learned in your programs. Here are some rules of thumb that'll be useful quite often:
- Usually, it's better to use multiplications instead of sums, especially if you'd add very large positive and very large negative numbers and expect a small number (in magnitude) as the result.
- Divisions by zero should be avoided. In some programming languages exceptions will be thrown, in others it'll result in `Infinity` or `NaN`. Be careful if you divide by numbers that are almost zero.
- Rounding errors can add up. Hence, it's better to use fresh values whenever possible. Read about a worst-case failure here.
- The associative property of addition is not guaranteed on a computer. Often, it's preferable to add numbers of similar magnitude for as long as possible to approximate this property as well as possible (see the sketch after this list).
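Here's that sketch: mathematically both lines below compute 0.1 + 1e18 - 1e18 = 0.1, but the order of the additions decides whether the 0.1 survives:

```javascript
console.log((0.1 + 1e18) - 1e18); // 0   -> the 0.1 was absorbed by 1e18
console.log(0.1 + (1e18 - 1e18)); // 0.1 -> adding similar magnitudes first preserves it
```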
Disclaimer: These are rules of thumb. They can be wrong, and in some instances they will be wrong. Critical systems [6] should always be developed by professionals and not be based on articles from the internet, not even this one.
Comments, Sources
[1] Actually, you could use some tricks like symbolic computing which might work. Or you could just trust in Wolfram Alpha (which again uses symbolic computing in parts of its calculations).
[2] Be aware: Most likely, some (probably better educated) people might have different definitions for these terms. That doesn't necessarily mean that mine or their definition is wrong. However, it shows that you have to be aware of what definitions are applied in different contexts.
[3] Page 9 in "Einführung in die numerische Mathematik"
[4] Page 10 in "Einführung in die numerische Mathematik"
[5] Page 17 in "Einführung in die numerische Mathematik"
[6] Let's imagine you develop a payment system that does calculations with floating point numbers. It'd be very hard to assess all possible 'calculation chains' which could cause numerical problems.