<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Daniel Scott</title>
    <description>The latest articles on DEV Community by Daniel Scott (@alldanielscott).</description>
    <link>https://dev.to/alldanielscott</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F171349%2F62cc8e6d-57d4-47c4-8df7-2b180fbb69d7.png</url>
      <title>DEV Community: Daniel Scott</title>
      <link>https://dev.to/alldanielscott</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alldanielscott"/>
    <language>en</language>
    <item>
      <title>Why Floating Point Numbers are so Weird</title>
      <dc:creator>Daniel Scott</dc:creator>
      <pubDate>Sat, 06 Jul 2019 09:16:39 +0000</pubDate>
      <link>https://dev.to/alldanielscott/why-floating-point-numbers-are-so-weird-e03</link>
      <guid>https://dev.to/alldanielscott/why-floating-point-numbers-are-so-weird-e03</guid>
      <description>&lt;p&gt;If you've written any JavaScript before (which uses floating point numbers internally), or you've dealt with double or single precision floats in other languages then you've probably come across some version of this: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;return (0.1 + 0.2 == 0.3); // Returns FALSE !!! &lt;br&gt;
... and the walls in your office float away as the laws of mathematics begin to crumble&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Or, maybe, you've done some addition or subtraction on a couple of reasonable-looking numbers (with one or two decimal places), then printed the result to screen and been met with something like 10.700000000000001 when you were expecting a far more reasonable 10.7.&lt;/p&gt;
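&lt;p&gt;You can reproduce both flavours of the surprise in any JavaScript console:&lt;/p&gt;

```javascript
// Floating-point addition produces the closest representable double,
// not the tidy decimal you had in mind
console.log(0.1 + 0.2);        // 0.30000000000000004
console.log(0.1 + 0.2 == 0.3); // false
```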

&lt;p&gt;If you haven't gone through the whole university shebang and had floats explained from top to bottom, then you may have had a "WTF" moment or two. Here's a bit of a rundown on what is going on ...&lt;/p&gt;

&lt;h2&gt;What the floating in "floating point" means&lt;/h2&gt;

&lt;p&gt;In short, floating-point numbers are stored in memory using a form of scientific notation, which allows for a limited number of "significant digits" and a limited "scale". Scientific notation looks like this (remember back to high-school):&lt;/p&gt;

&lt;p&gt;1,200,000,000,000,000,000,000 = 1.2 x 10^21&lt;/p&gt;

&lt;p&gt;There are two significant digits in that number (1, and 2), which form the "mantissa" (or the "meat" of the number). All the zeros after the "12" are created by the exponent on base-10, which just moves the decimal point some number of places to the right. The exponent can add a lot of zeros (for a very low storage-cost), but it can't hold any "meat".&lt;/p&gt;

&lt;p&gt;A negative exponent can be used to shift the decimal point to the left and make a really tiny number.&lt;/p&gt;

&lt;p&gt;0.000,000,000,000,000,000,001,2 = 1.2 x 10^-21&lt;/p&gt;
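&lt;p&gt;JavaScript will happily show you this scientific-notation view of a number directly:&lt;/p&gt;

```javascript
// toExponential() renders a number in mantissa-and-exponent form
console.log((1200000000000000000000).toExponential());   // 1.2e+21
console.log((0.0000000000000000000012).toExponential()); // 1.2e-21
```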

&lt;h2&gt;It's all about the precision&lt;/h2&gt;

&lt;p&gt;Imagine that we have a data type that can accept 2 significant (decimal) digits and allows (decimal) exponents up to +/-21. The two example numbers above would be getting near to the largest, and the smallest, that we could represent with that data type (the largest and smallest positive values would actually be 9.9 x 10^21 and 1.0 x 10^-21 respectively).&lt;/p&gt;

&lt;p&gt;Following on from that, if I tried to hold the number 1,210,000,000,000,000,000,000 with this mythical 2-digit-precision floating-point data type, then I would be &lt;a href="https://www.urbandictionary.com/define.php?term=SOL"&gt;S.O.L&lt;/a&gt; as they say, and it would end up as 1,200,000,000,000,000,000,000, since my two-digit precision doesn't allow for 1.21 x 10^21 (that's &lt;em&gt;three&lt;/em&gt; significant digits, or a digit-too-far). &lt;/p&gt;
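&lt;p&gt;JavaScript's toPrecision() lets you simulate that mythical 2-digit type: the third significant digit simply doesn't survive.&lt;/p&gt;

```javascript
// Rounding 1.21 x 10^21 to two significant digits loses the trailing "1"
console.log((1210000000000000000000).toPrecision(2)); // 1.2e+21
```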

&lt;p&gt;This is one source of so-called "loss of precision" errors with floating point numbers. &lt;/p&gt;

&lt;h2&gt;Recurring Fractions&lt;/h2&gt;

&lt;p&gt;The other source of lost precision (which accounts for the 0.1 + 0.2 != 0.3 hilarity) is due to what can and can't be precisely represented by a base-2 number system. &lt;/p&gt;

&lt;p&gt;It's the same problem that the decimal number system has with numbers such as one-third (0.33333333333333333333333... anyone?). &lt;/p&gt;

&lt;p&gt;Computers don't store numbers as decimal, so everything that goes on inside a floating-point number in a computer is stored using a base-2 number system. &lt;/p&gt;

&lt;p&gt;Just replace all the x10^n references in the examples above with x2^n and you may start to see how some decimal (base-10) numbers fit well, while others just don't play nice. 0.1 might be a nice easy number for you or me to work with (being decimal creatures), but to a two-fingered binary bean-counter it's as awkward as 1/3 or 3/7 are in decimal.&lt;/p&gt;
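&lt;p&gt;You can actually see the recurring binary fraction for yourself, since JavaScript's toString() accepts a radix:&lt;/p&gt;

```javascript
// 0.1 in binary is the recurring fraction 0.000110011001100110011...
// (truncated and rounded to fit the 53-bit mantissa of a double)
console.log((0.1).toString(2));

// A power of two, on the other hand, fits exactly:
console.log((0.5).toString(2)); // 0.1
```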

&lt;h2&gt;A bit of wordy fun to illustrate&lt;/h2&gt;

&lt;h3&gt;The Problem: Recurring Fractions&lt;/h3&gt;

&lt;p&gt;To recreate that (binary) 0.1 + 0.2 != 0.3 problem in decimal, let's say we write a program for some mythical decimal-based computer, using a numeric data type that can store 4 significant decimal digits. Now let's try to get that program to figure out if 1/3 + 2/3 equals 1.&lt;/p&gt;

&lt;p&gt;Here we go:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Statement:&lt;/strong&gt; Store this number: 1/3rd — &lt;em&gt;for this example we're going to say that the human operator doesn't understand the decimal system and deals only in fractions. The decimal system is for deci-puters: real men use fractions!&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Action:&lt;/strong&gt; Stores .3333 — &lt;em&gt;this is the kind of thing that happens when you declare a number in your code using decimal digits, or you take decimal user input and it gets placed into memory as a binary floating point number&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Statement:&lt;/strong&gt; Store this number: 2/3rds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Action:&lt;/strong&gt; Stores .6666&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Statement:&lt;/strong&gt; Add those two numbers together&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Action:&lt;/strong&gt; Calculates .9999&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now let's try to get some sense out of what we've put in:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Question:&lt;/strong&gt; Does the total (.9999) equal 1.000?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Answer:&lt;/strong&gt; Hell no! (false)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Programmer&lt;/strong&gt;: &lt;em&gt;Tears out a few hairs and says out loud "WTF? 1/3 plus 2/3 definitely equals 1! This deci-puter is on crack!"&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;The Solution&lt;/h3&gt;

&lt;p&gt;The way around this lack of precision is to stop trying to precisely compare something that can't (and shouldn't) be precisely compared. Instead, we must decide how close we need two things to be in order for us to consider them "equal" for our purpose.&lt;/p&gt;

&lt;p&gt;Here's the correct workaround in deci-puter pseudo-speak:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Question:&lt;/strong&gt; Is .9999 close_enough to 1.000?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error: Undefined Constant:&lt;/strong&gt; WTF? What have &lt;em&gt;you&lt;/em&gt; been smoking? How close is close_enough?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Oops! Let's try again:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Statement:&lt;/strong&gt; close_enough (my chosen tolerance) is plus-or-minus .1000&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Question:&lt;/strong&gt; Is .9999 close_enough to 1.000?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Answer:&lt;/strong&gt; Yes (true) — &lt;em&gt;the difference between .9999 and 1.000 is .0001: that's really damned close, which is closer than close_enough&lt;/em&gt; &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And so you can see, if thirds were really important to people (as a species), then we'd probably be using a base-3 or a base-9 number system, because dealing with them in decimal (and binary) is inconvenient!&lt;/p&gt;

&lt;p&gt;Also, because these are recurring fractions, it doesn't matter whether we can hold 4 significant digits or 4,000 significant digits: 1/3 + 2/3 will never precisely equal 1 when fed into our "deci-puter". We'll always need to allow some tolerance, and the built-in equality operator will always (accurately) reflect the fact that (0.3333... + 0.6666... != 1).&lt;/p&gt;

&lt;h2&gt;Extending our Example to other floating-point quirks&lt;/h2&gt;

&lt;p&gt;If you were super-observant, you might have noticed that - in the previous example - there were only three decimal places in the 1.000 number, yet there were four in the .9999 number. Our pretend "decimal-system storage type" here only supports 4 significant digits, so we can't know what might be in the fourth decimal place if we also try to store a digit in the "ones" place.&lt;/p&gt;

&lt;p&gt;You can probably imagine some of the issues you might have with this pretend 4-digit floating point type if you try to compare 4,123,134 with 4,123,000. There are only 4 significant digits available to us, so these two numbers will become 4.123 x 10^6 and 4.123 x 10^6 respectively — the same number! &lt;/p&gt;
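&lt;p&gt;Again, toPrecision() can play the part of our pretend data type:&lt;/p&gt;

```javascript
// Both numbers collapse to the same 4-significant-digit value
console.log((4123134).toPrecision(4)); // 4.123e+6
console.log((4123000).toPrecision(4)); // 4.123e+6
```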

&lt;p&gt;If you start trying to store large integers in a double-precision float type then at some point (above 9,007,199,254,740,991) you'll start to run into this problem. It kicks in with a much smaller number for single-precision floats.&lt;/p&gt;
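&lt;p&gt;That threshold is exposed in JavaScript as Number.MAX_SAFE_INTEGER, and you can watch adjacent integers collide just past it:&lt;/p&gt;

```javascript
// Beyond 2^53 the gap between representable doubles is bigger than 1
console.log(Number.MAX_SAFE_INTEGER);              // 9007199254740991
console.log(9007199254740992 == 9007199254740993); // true
console.log(9007199254740992 + 1);                 // 9007199254740992
```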

&lt;p&gt;Similarly you'll hit problems if you try to work with numbers at very different scales (try subtracting .0001 from 4356 using our pretend 4-significant-digit data type!).&lt;/p&gt;
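&lt;p&gt;Real doubles hit the same wall when the scales are extreme enough; the smaller operand can vanish entirely:&lt;/p&gt;

```javascript
// 0.0001 is far below the gap between doubles at this magnitude, so it is lost
console.log(10000000000000000 + 0.0001); // 10000000000000000
console.log(1e16 + 1 - 1e16);            // 0
```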

&lt;h2&gt;Read More&lt;/h2&gt;

&lt;p&gt;So, now that you know the reasons why, you're not stuck just living with the problem: there are workarounds!&lt;/p&gt;

&lt;p&gt;Another article in this series deals with how to choose a sensible tolerance for comparing floating-point numbers in &lt;em&gt;your&lt;/em&gt; program (and also when it's best to avoid them altogether). &lt;/p&gt;

&lt;p&gt;Although it's written with JavaScript in mind, the same guidelines apply to all languages with a floating point type.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/alldanielscott/how-to-compare-numbers-correctly-in-javascript-1l4i"&gt;How to compare numbers correctly in JavaScript&lt;/a&gt;&lt;/p&gt;

</description>
      <category>float</category>
      <category>ieee</category>
      <category>double</category>
      <category>javascript</category>
    </item>
    <item>
      <title>How to compare numbers correctly in JavaScript</title>
      <dc:creator>Daniel Scott</dc:creator>
      <pubDate>Sat, 06 Jul 2019 05:24:11 +0000</pubDate>
      <link>https://dev.to/alldanielscott/how-to-compare-numbers-correctly-in-javascript-1l4i</link>
      <guid>https://dev.to/alldanielscott/how-to-compare-numbers-correctly-in-javascript-1l4i</guid>
      <description>&lt;p&gt;The advice in this post relates to JavaScript, since all numbers in JavaScript are (currently) IEEE-754 double-precision floating-point numbers. However, everything in here is equally applicable to any language that has a floating-point type.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In short: Don't use the language-provided equality test, and don't use language-provided "epsilon" constants as your "tolerance" for errors. Instead, choose your own tolerance.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, the long version (which I originally penned in response to some flawed advice I found online about how to compare numbers in JavaScript).&lt;/p&gt;

&lt;h2&gt;The problem, and a flawed approach to solving it&lt;/h2&gt;

&lt;p&gt;Take this ("bad") code, which addresses the classic floating point problem of &lt;code&gt;(0.1 + 0.2) == 0.3&lt;/code&gt; returning false:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;f1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.1&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;f2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;abs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;f1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;f2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;EPSILON&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// 'True - Yippeee!!!'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Ok, so far so good. But it fails with other inputs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;f1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;1000000.1&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;f2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;1000000.3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;abs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;f1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;f2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;EPSILON&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// '!!!!!! false !!!!!!!'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The basic pattern being used is sound: avoid a direct equality comparison, and check that your two numbers are within some tolerable difference. However, the tolerance used is badly chosen.&lt;/p&gt;

&lt;h2&gt;Why does Number.EPSILON fail the second example above?&lt;/h2&gt;

&lt;p&gt;It's actually very dangerous to use Number.EPSILON as a "tolerance" for number comparisons. &lt;/p&gt;

&lt;p&gt;Other languages have a similar construct (the .Net languages all have it as double.Epsilon for example). If you check any sound documentation for such constants, they tend to come with a warning not to use the "floating point epsilon" for comparisons.&lt;/p&gt;

&lt;p&gt;The "epsilon" provided by the language is simply the smallest possible "increment" you can represent with that particular floating point type. For IEEE double-precision numbers, that number (Number.EPSILON) is minuscule!&lt;/p&gt;

&lt;p&gt;The problem with using it for comparisons is that floating point numbers are implemented like scientific notation, where you have some small(ish) number of significant digits, and an exponent which moves the decimal point left or right (possibly a loooooooooooong way left or right). &lt;/p&gt;

&lt;p&gt;Double-precision floating point numbers (as used in JavaScript) have about 15 significant (decimal) digits. What that means is if you want to hold a number like 1,000,000,000 (10 significant digits), then you can only hold a fraction up to about five or six decimal places. The decimal values 3,000,000,000.0000001 and 3,000,000,000.00000011 both end up stored as the same double, so they will be seen as equal. (note that because floats are stored as binary, it's not a case of there being &lt;em&gt;exactly&lt;/em&gt; 15 significant decimal digits at all times - information is lost at some power of two, not a power of 10).&lt;/p&gt;

&lt;p&gt;Number.EPSILON is waaaaay smaller than differences like these - so while the first example works with a "tolerance" of Number.EPSILON (because the numbers being compared are all smaller than 1.0), the second example breaks.&lt;/p&gt;

&lt;h2&gt;There is no one-size-fits-all "epsilon" for comparisons&lt;/h2&gt;

&lt;p&gt;If you go hunting online, there's a fair bit of discussion on how to choose a suitable epsilon (or tolerance) for performing comparisons. After all the discussion, and some very clever code that has a good shot at figuring out a "dynamically calculated universal epsilon" (based on the largest number being compared) it always ends up boiling back down to this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;YOU need to choose the tolerance that makes sense for your application!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The reason dynamically calculated tolerances (based on the scale of the numbers being compared) aren't a universal solution is that when the numbers being compared vary wildly in size, it's easy to end up with a situation that breaks one of the most important rules of equality: "equality must be transitive". i.e. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;if a == b, and b == c, then a == c must evaluate as TRUE as well! &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Using a tolerance that changes with every single equality test in your program is a very good route to having a != c somewhere when you would reasonably expect a and c to be equal. You can also guarantee this will happen at annoyingly "random" times. Thar be the way to Bug Island me-hearties: enter if ye dare and may the almighty have mercy on yer soul ... arrrrrrrr**!!!&lt;/p&gt;

&lt;p&gt;** &lt;em&gt;actually ... "arrrghhhhhhhh!!!" is more appropriate&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Choosing a tolerance for your application&lt;/h2&gt;

&lt;p&gt;So, how do you select a suitable tolerance for &lt;strong&gt;your&lt;/strong&gt; program? I'm glad you asked! ...&lt;/p&gt;

&lt;p&gt;Let's assume you're holding dimensions of a building in millimetres (where a 20 metre long building would be 20,000). Do you really care whether that dimension is within .0000000001 mm of some other dimension when you're comparing them? Probably not!&lt;/p&gt;

&lt;p&gt;In this case a sensible epsilon (or tolerance) might be .01 or .001**. Plug that into the &lt;code&gt;Math.abs(f1 - f2) &amp;lt; tolerance&lt;/code&gt; expression instead. &lt;/p&gt;

&lt;p&gt;Definitely do &lt;strong&gt;NOT&lt;/strong&gt; use &lt;code&gt;Number.EPSILON&lt;/code&gt; for this application, since you &lt;em&gt;might&lt;/em&gt; get a 200m long building somewhere (200,000mm) and that may fail to compare properly to another 200m long dimension using JavaScript's &lt;code&gt;Number.EPSILON&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;** &lt;em&gt;things will tend to work out even cleaner if you use tolerances that can be represented precisely in binary. Some nice simple options are powers of two. e.g. 0.5 ( 2^-1 ), 0.25 ( 2^-2 ), 0.125 ( 2^-3 ), 0.0625 ( 2^-4 ) etc.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Avoid floating point numbers wherever you can&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;even in JavaScript where they're unavoidable&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Incidentally, if you didn't care whether your measurements in the previous example were any closer than 1mm to each other, then you should probably just use an integer type and be done with it. &lt;/p&gt;

&lt;p&gt;If you're working in JavaScript then you're [currently**] stuck with floating point numbers. The only real alternative JavaScript offers is to store your numbers as strings. This can actually be a sensible approach for large integers that only need to be tested for equality and don't need to have numeric operations performed on them (such as database primary keys). There are some more "floating-point gotchas" waiting when you get to integers big enough to contain more than about 15-16 digits! (specifically, anything larger than 9,007,199,254,740,991)&lt;/p&gt;

&lt;p&gt;Likewise (still on the "building model" example above), if you only cared whether your measurements were within 0.1mm of each other, then you could use a "decimal" type (if your language supports it), or just store all your measurements internally as integers representing tenths of millimetres (e.g. 20 metre building = 200,000 "tenth-millimetres" internally)&lt;/p&gt;

&lt;p&gt;Floating point numbers are great for what they were designed for (complex modelling of real-world measurements or coordinates), but they introduce weirdness into calculations involving money, or other things we expect to "be nice and even".&lt;/p&gt;

&lt;p&gt;** &lt;em&gt;As of mid-2019, there has been talk of introducing a "BigInt" type to JavaScript (offering an alternative to floating-point numbers), but it's not supported in many implementations yet and it hasn't worked its way through to a final ECMAScript specification yet either. Google's V8 implementation of JavaScript seems to be an &lt;a href="https://v8.dev/features/bigint"&gt;early adopter&lt;/a&gt; along with Mozilla, so you should be able to use it in current versions of Chrome, Firefox, and other V8-derived platforms now.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Why are floating point numbers so weird?&lt;/h2&gt;

&lt;p&gt;If you're not already familiar with the old 0.1+0.2 != 0.3 mind-bender, then I've thrown together a quick primer on the way floating point numbers work, which will shed some light on the madness.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/alldanielscott/why-floating-point-numbers-are-so-weird-e03"&gt;Why Floating Point Numbers are so Weird &amp;gt;&amp;gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;An interactive plaything: Go ahead and break stuff&lt;/h2&gt;

&lt;p&gt;If you want to have a play around with floating point comparisons in JavaScript and peek into how the numbers lose precision as they get bigger, then there's a jsfiddle I stuck together at: &lt;a href="https://jsfiddle.net/r0begv7a/3/"&gt;https://jsfiddle.net/r0begv7a/3/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://jsfiddle.net/r0begv7a/3//embedded//dark" width="100%" height="600"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>float</category>
      <category>ieee</category>
      <category>double</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
