
Discussion on: Has type safety gone too far?

Patrick Lafferty

> I get that this is intended to reduce the chances of a "null pointer exception" occurring but if an inexperienced programmer abuses the ! dereferencing just to get his/her code to compile then a null pointer exception will happen anyway!

Languages should not base decisions on whether or not something inconveniences an inexperienced or lazy programmer. Null is the billion-dollar mistake. Decades of experience have shown that just trusting developers to get it right doesn't work. Preventing this mistake does not harm anyone in the long run.

Everything should be non-nullable by default, because 'null' can mean too many things. Did I forget to initialize this? Did a memory allocation fail? Did some function that was meant to create an instance of this fail? Or does it simply mean there is no value, and that's perfectly normal? You can't tell, because 'null' carries no context; it could mean any of those things. So how do you fix the problem if you don't know why it broke?
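For example (a minimal Swift sketch; the names `cachedAvatar`, `loadAvatar`, and `LookupError` are made up for illustration), separating "absence is normal" from "an operation failed" keeps that context in the type instead of collapsing everything into nil:

```swift
import Foundation

enum LookupError: Error {
    case storageUnavailable
    case malformedRecord
}

// Optional: absence is an expected, ordinary outcome.
func cachedAvatar(for userID: Int) -> Data? {
    nil // nothing cached yet; callers treat this as normal
}

// Result (or a throwing function): absence because something went wrong,
// with the reason carried alongside the failure.
func loadAvatar(for userID: Int) -> Result<Data, LookupError> {
    .failure(.storageUnavailable)
}
```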

> And when you try to dereference a nullable type, you can't even use the dot notation (e.g. someObject.somePropertyOrMethod) plainly - you have to "safely" dereference it by prepending either a ? or a ! to the dot.
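Those two spellings behave very differently (a small Swift sketch with a made-up `User` type):

```swift
struct User {
    var nickname: String?
}

let user = User(nickname: nil)

// Optional chaining with ?. : the whole expression evaluates to nil
// instead of crashing, and the result is itself an Optional.
let length = user.nickname?.count   // Int?, nil here

// Force unwrapping with !. : asserts the value is present and traps
// at runtime if it is not (the "abuse" case mentioned above).
// let forced = user.nickname!.count   // crashes when nickname is nil

print(length ?? 0)
```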

This is the "Optional" concept found in many functional programming languages. It essentially wraps your type in a box, making it explicit whether a value is present or not. That's why you can't just use dot notation: the box isn't your thing; your thing is inside the box. The language forces the programmer to handle the None/nil case, instead of just hoping that they will. This is a Good Thing (TM), because the compiler can instantly point out when you haven't handled it. The other members of your team might miss it in a code review; your compiler never will.
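Concretely (a minimal Swift sketch), the compiler refuses plain dot access on the Optional until you unwrap it, and both cases have to be dealt with:

```swift
let nickname: String? = nil

// nickname.count does not compile; you must unwrap first.
if let name = nickname {
    print("Hello, \(name) (\(name.count) characters)")
} else {
    print("Hello, stranger")
}

// Optional is just an enum, so you can also handle both cases explicitly.
switch nickname {
case .some(let name):
    print("Hello, \(name)")
case .none:
    print("No nickname set")
}
```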

Everyone should strive to make things more explicit. After all, the majority of the time we spend as programmers goes to reading code, not the physical act of writing it. Explicitness shows intent: sure, it's more verbose, but you know what the programmer meant. Implicitness invites doubt: was the implicit behavior intended, or was it a mistake?

Lastly, a moment on type safety and modern languages. Swift is far, far away from the outer reaches of type systems. For that you should look at languages like Haskell and Idris, where the type system is far richer and more expressive. Idris has the concept of dependent types, which blew my mind when I first read about it. A dependent type is a type that depends on a value. Take your standard linked list type in whatever language you want. Now imagine that a list of 3 ints is a separate type from a list of 4 ints!

Sounds ridiculously restrictive, right? Actually, it's quite powerful. Consider that a head function fails on an empty list: you as the programmer have to handle that case yourself and check that the list isn't empty beforehand. With dependent types, you can define head's type as List[int, n] where n > 0. Append's type would be something like List[int, n] -> int -> List[int, n + 1], and remove's would be List[int, n] -> List[int, n - 1] where n > 0. Given all of this information, the compiler can track the list's size through your code. If you start from an empty list and never call append, the size never changes, so the compiler can prove n = 0, and calling head becomes a compile-time error. This doesn't work all the time, and sometimes you have to construct a mathematical proof to convince the compiler, but the concept itself is as groundbreaking as going from no types to types. It moves a whole class of run-time errors to compile time, and that's always a good thing.
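Swift itself can't express this, but the simplest slice of the idea, that "head of an empty list" should be a type error rather than a runtime error, can be sketched with a hypothetical `NonEmptyList` type (not real dependent types, just non-emptiness by construction):

```swift
// Not dependent types, only the simplest special case: a list type that
// is non-empty by construction, so `head` never fails at runtime.
struct NonEmptyList<Element> {
    var head: Element      // always present; no Optional, no crash
    var tail: [Element]

    // Appending preserves non-emptiness, so head stays total.
    func appending(_ element: Element) -> NonEmptyList<Element> {
        NonEmptyList(head: head, tail: tail + [element])
    }
}

extension NonEmptyList {
    // Converting from a plain array forces the emptiness check to happen
    // exactly once, at the boundary, instead of everywhere head is used.
    init?(_ elements: [Element]) {
        guard let first = elements.first else { return nil }
        self.init(head: first, tail: Array(elements.dropFirst()))
    }
}
```

In a genuinely dependently typed language like Idris, the exact length n lives in the type itself, so append really does return a list of length n + 1 and the compiler does the arithmetic for you.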