Recently I updated the version of the JavaScript linter we are using in one of our projects. It was quite a jump between minor versions, so I was expecting some changes, but not too many.
I was wrong.
Quite a bunch of new rules had been added, which I had to either switch off (XO is a very opinionated linter), adjust my code to satisfy, or, in the best-case scenario, just review after they were applied automatically by the --fix command.
Let's say this bluntly:
Lint Rules are a pain in the ass.
Code that was perfectly fine and working yesterday is now marked as an error, just because of some lint configuration!
Nevertheless, I usually like checking out the new rules, because understanding the reasoning behind them forces me to read the documentation more deeply, and that often reveals details of the language I had overlooked or was not aware of at all.
One of the rules and changes I reviewed last week was this:
Prefer Number.isNaN() over isNaN()
By running xo --fix, any code calling isNaN(aPossibleNumber) was changed to Number.isNaN(aPossibleNumber).
No big deal right?
Well, if it weren't for our unit tests we would have gotten into trouble.
We have a validation method that checks for a valid userID, which is formally a number but is treated everywhere in the code as a string, for many reasons: it is passed around via query strings, it travels through different applications written in different languages, and ultimately it could even be a BigInt.
Imagine these unit tests:
test("A valid number", t=> {
t.true(validateID(12345))
})
test("A valid stringified number", t=> {
t.true(validateID("12345"))
})
test("A very big integer", t=> {
t.true(validateID("90071992547409999"))
})
test("A string that contains other than digits will throw an error", t=> {
t.throws(validateID("12345nope"))
})
test("Undefined will throw an error", t=> {
t.throws(validateID(undefined))
})
// and so on for empty string, null etc
and imagine an implementation looking something along these lines:
/**
 * Validates that the input is a number or a stringified number
 * (it does not matter whether it is a number or a string, as long as it contains only digits).
 * Throws an error if the value does not pass validation.
 */
const validateID = (val) => {
  if (!val || isNaN(val)) {
    throw new Error(`ValidationFailed: ${val} must be a number or a string containing only digits`)
  }
  return true
}
After the linter's --fix, isNaN became Number.isNaN, and since everything looked fine, I ran git commit.
Since we are using Husky, we have a Git hook that runs all our unit tests before every commit.
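For reference, such a pre-commit hook can be configured with something as simple as this (a minimal sketch assuming a Husky 4-style configuration inside package.json and an npm "test" script; newer Husky versions use scripts in a .husky/ folder instead, and our actual setup may differ):

{
  "husky": {
    "hooks": {
      "pre-commit": "npm test"
    }
  }
}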
And I am very glad we had those unit tests and that pre-commit hook, because the tests above started failing.
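To make it concrete, here is why one of those tests breaks after the change:

// After the --fix, the "only digits" test no longer throws:
Number.isNaN("12345nope") // false
// so validateID("12345nope") now returns true instead of throwing,
// and the "A string that contains other than digits..." test fails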
I checked the rule again and noticed the side note:
Number.isNaN() over isNaN() (they have slightly different behavior)
(they have slightly different behavior)
(they have slightly different behavior)
And you call that SLIGHTLY different behaviour?
Look at these slightly different outputs:
isNaN("1234ZZZ56") // true
Number.isNaN("1234ZZZ56") // false
isNaN(undefined) // true
Number.isNaN(undefined) // false
isNaN("25") // false
Number.isNaN("25") // false
isNaN("blabla") // true
Number.isNaN("blabla") // false
isNaN("12nope") // true
Number.isNaN("12nope") // false
I must admit this has nothing to do with the linter itself. Even the official docs state it:
The Number.isNaN() method determines whether the passed value is NaN and its type is Number. It is a more robust version of the original, global isNaN().
Whether it is more robust or not I cannot say. But for sure it is not a slight difference. I find it to be quite a big difference, even though it's just a matter of "semantics".
I have always thought of NaN in literal terms: anything that is not a number is going to be treated, in my understanding, as NaN.
- null will be a NaN
- undefined will be a NaN
- a string which cannot be converted to a valid number will be a NaN
For me, it was pretty straightforward, but apparently that was confusing for many because of the type coercion implicitly applied to the value.
This is the main difference.
The global isNaN coerces the value to a Number and then checks whether the result is NaN, while Number.isNaN first checks whether the value is of type Number: if it is not, it simply returns false; only if the value is a Number does it evaluate whether it is NaN.
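Roughly speaking, the two checks could be sketched like this (a simplified illustration with made-up function names, not the exact spec algorithm):

// global isNaN: coerce to Number first, then check
const isNaNLike = (value) => Number.isNaN(Number(value))

// Number.isNaN: no coercion; only the actual NaN value (of type number) qualifies.
// NaN is the only JavaScript value that is not equal to itself.
const numberIsNaNLike = (value) => typeof value === "number" && value !== value

isNaNLike("blabla") // true ("blabla" coerces to NaN)
numberIsNaNLike("blabla") // false (it is a string, not a number)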
But... if its type is Number, how can it then be Not-A-Number???
The problem is that NaN is not something we can use to represent anything that is not a number, as I had always thought; it is rather a value of its own, one that is returned by specific failing operations, like Math methods or parsing.
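A few well-known examples of operations that actually produce the NaN value:

Math.sqrt(-1) // NaN
parseInt("nope") // NaN
0 / 0 // NaN
Number(undefined) // NaN
typeof NaN // "number" (!)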
So, in that sense, Number.isNaN is more robust and safe, because a value that would normally be coerced to NaN is not actually the same value as NaN...
I still find the global isNaN more logical, but hey... who am I to disagree?
By the way, after looking at this old validation method, I thought that probably the best way to check whether a value (be it a string or a number) is, at least by our definition, not a number could be a regex.
Since Number.isNaN behaves differently from how I expect, isNaN is not robust, and parseInt would definitely not fit
(because parseInt("123456-user") would return 123456, which is definitely not what I want!), something like
const ONLY_DIGITS = /^\d+$/
would do the trick.
(/^\d+$/).test(12345) // true
(/^\d+$/).test("12345") // true
(/^\d+$/).test("123nope45") // false
(/^\d+$/).test() // false
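So a possible rewrite of the validation method could look something like this (just a sketch based on our "only digits" definition, not necessarily the exact code we ended up with):

const ONLY_DIGITS = /^\d+$/

const validateID = (val) => {
  // String(val) turns numbers (and BigInts) into their digit representation;
  // undefined, null and the empty string all fail the regex check
  if (!ONLY_DIGITS.test(String(val))) {
    throw new Error(`ValidationFailed: ${val} must be a number or a string containing only digits`)
  }
  return true
}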
What do you think?
Top comments (5)
Yes, NaN is used to indicate that a number producing operation did not produce a valid number, so it makes sense to limit it to the number type, since that's the type of things that number producing operations produce. :)
I agree that number-producing operations produce numbers,
but I am still convinced that, literally, Not-A-Number means something that is not a number, therefore I would not expect it from number-producing operations. And actually I still find it counterintuitive that something of type Number is then... well, Not-A-Number... :-)
Those interfaces could have been designed to signal an error out of band, but weren't.
So they return the error indicator like any other value, and since they return type number, this means that number must include those error indicators.
Of course, those error indicators aren't valid numbers, so you end up with things of type number which aren't numbers, and so NaN. :)
ECMAScript defines NaN as: 'Number value that is an IEEE 754-2019 "Not-a-Number" value'.
So you can point the blame for this decision at Ecma and IEEE, and wish they had decided differently, but this is the way it turned out.
There are some advantages to this design in that they avoid a need to support early exit from numeric pipelines.
And some disadvantages -- the NaNs will happily keep flowing through, contaminating everything they touch, until you actually check to see if you have some.
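For instance (a small illustration), a single NaN quietly poisons a whole computation:

const values = [1, 2, Number("oops"), 4]
const total = values.reduce((sum, n) => sum + n, 0)
// total is NaN: the one bad value flowed through every addition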
Yes, the NaN is confusing.
In IEEE 754 floating-point numbers, NaN is a special value that represents the result of an operation that does not yield a real number. For example: dividing zero by zero, or taking the square root of -1 (i is not a real number).
en.wikipedia.org/wiki/NaN
So, yeah, in most computer languages NaN IS a specific value of the number type system, i.e. it has a specific binary representation.
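You can even observe that bit pattern from JavaScript (a small illustration; the exact NaN payload is engine-dependent, this is the typical quiet NaN):

const bytes = new Uint8Array(new Float64Array([NaN]).buffer)
console.log(Array.from(bytes).join(" "))
// on a little-endian machine, typically: 0 0 0 0 0 0 248 127
// i.e. the IEEE 754 double bit pattern 0x7FF8000000000000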
So when I first saw your code calling isNaN(string), I was a bit surprised that it could even work... But then again, in JavaScript weird things happen.
So while your code seems to work fine (without the linter change), it violates the principle of least surprise, because many programmers would expect isNaN() to work like Number.isNaN(), which is technically more "correct".