
Jonathan Böcker


Does null safety (really) bring any value to businesses?

Beating a dead horse

Here goes another article about null.

<TL;DR>

It seems to me that including nullability in type systems is an obvious technical step forward in operational security and cost effectiveness. The market is either lagging behind or I am overestimating its value. Perhaps I am part of a daring but naive minority that gives too much credit to safer type systems.

</TL;DR>

Null safety has become quite fashionable on mainstream platforms. Languages such as Swift and Kotlin are endorsed by Apple and Google for use on their platforms. Rust is all the hype in systems programming and WebAssembly. C# received non-nullable reference types in its 8th edition, and TypeScript is gaining traction as a replacement for JavaScript. The Dart team is working hard to make a similar journey with what they call Non-Null By Default (NNBD).
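To make the idea concrete, here is a minimal sketch in Kotlin of what these languages have in common: nullability becomes part of the type itself. Swift, C# 8 and Dart's NNBD express the same idea with slightly different syntax.

// Kotlin
val name: String = "Fido"      // non-nullable: the compiler guarantees this never holds null
val nickname: String? = null   // nullable: the "?" makes nullability part of the type
val oops: String = null        // compiler error: null is not a String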

At the same time, I have encountered development teams where this topic is not spoken of, almost as if the null problem that so many language creators are working on is a non-issue. Many developers go about their Java/C++/JavaScript code bases, adding value to their company as intended, not adopting this null-safe fad. Null-safe languages are rarely in the top 10 when measuring the most used or most popular languages, such as the TIOBE index and the GitHub Octoverse, with the exception of C# (the indices do not measure the usage of C# 8 and onward separately) and TypeScript (measured in open source projects).

"We manage to build software very well without null safety, thank you very much." - Mr Straw Man, for the sake of this argument

There are a million reasons not to solve this theoretical issue, and I sympathize with a lot of them. Introducing new technical decisions is always a risk, and there has to be an equal or larger benefit balancing out the risk-reward scale. It might be a good idea to compare this paradigm leap to paradigm shifts of similar magnitude. The object-oriented paradigm brought a nominal type system, which has been widely adopted and brings many benefits when statically typed:

// Java
class Cat {
    void meow() {}
}

class Dog {
    void bark() {}
}

Dog doggie = new Dog();

doggie.meow(); // Compiler error

This example might be obvious, and it is intuitive why it will not, and should not, work. It is hard to quantify how much value the nominal type system has brought to businesses, but its popularity is statistically clear. Object-oriented languages have won the market in a landslide.

It is harder to reason about, and to explain, why this should compile:

// Java
Dog doggie = null;

doggie.bark(); // Compiles but explodes at runtime

There is seemingly no good reason for this to be allowed, but it is just the way most popular languages work. It can seem a bit disorderly to a newcomer that a type system made to protect the programmer from common mistakes allows for this "shadow" type system: an extra dimension where you as a programmer have to act as a compiler and find these programmatic landmines. Landmines that plague applications in production every day, since humans are not as competent as compilers at finding type system errors.
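For contrast, here is a rough sketch of how the same Dog example plays out in a null-safe language. The syntax below is Kotlin, but Swift, C# 8 and Dart's NNBD behave much the same way: the landmine is caught at compile time instead of in production.

// Kotlin
class Dog {
    fun bark() {}
}

val doggie: Dog? = null

doggie.bark()     // Compiler error: doggie might be null, a check is required
doggie?.bark()    // Compiles: bark() is only called if doggie is not null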

"But null is a value and not a type, and should be evaluated at runtime", someone might add. True in a sense, but most type systems does not allow an instance of Cat to be assigned to a Dog-typed variable. I would argue the instantiated Cat is a value, at least when assigned to a Cat-typed variable.

Tony Hoare says it is a bad idea, and it is his invention

Most people have heard or read Tony Hoare declaring null references his "billion-dollar mistake", and it deserves to be repeated. There is a reason why so many languages deal with this. A weak but colorful analogy would be firearms, which in most cases have what is called a safety catch to somewhat reduce the risk of an accidental discharge. It is questionable whether a safety catch is a good idea when the user needs to fire the weapon as fast as possible in a threatening situation. Still, most, if not all, military and police weapons implement some kind of safety catch. This could seem unproductive if the human factor were not accounted for, but it is indubitably a good idea when considering the users, who are probably stressed and maybe even sleep deprived.

When push comes to shove, we need to compensate for the human factor through the safety of the tools. A classic risk versus consequence analysis has to be made before deciding on the tool to be used by the many.
This leaves me wondering about the rationale behind choosing a programming language without a null safety catch. Is the probability of a programmer forgetting a manual null check negligibly low, or are the eventual consequences considered tolerable?

Not all accidents can be prevented

Stray bullets are still fired despite weapon safeties, and applications crash or break even when implemented in type-safe languages. Why do language creators still bother creating these complex compiler heuristics, offering us this implied sense of security?
One could argue it gives the programmer peace of mind while focusing on the main task at hand. Why do cognitive work that can be offloaded to a compiler, says the pragmatist. The compiler can never catch all bugs, so it is better to check it properly yourself, the cynic would argue.

Return on investment (ROI) is a central concept to businesses, and technical decisions are not exempt from this reasoning. If the price of the safety catch quadruples the price of the gun, an argument can be made to just train the soldier to be more careful. If the cost of development rose out of proportion to the security gains of choosing a type-safe language, it would be no wonder that Java and C++ are still popular. It is, however, not as easy to measure alternative development costs as it is to measure the costs of different weapon configurations.

Top comments (3)

Jean-Michel 🕵🏻‍♂️ Fayard

Tony Hoare not only called it a mistake, but also brilliantly explained why:

I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.

As you can see, null itself isn't the problem; the issue is that it was not integrated into the type system to ensure that all references are absolutely safe, with checking performed automatically by the compiler.

Adding null-safety to the type system is just a logical extension of the original vision.
If you like the idea of having a static type system that automatically does lots of checks, then you have no reason not to also want null-safety integrated in the type system.
If you prefer using no static type system and use JavaScript, then OK.

I wrote about Tony Hoare here: dev.to/jmfayard/android-s-billion-...

Jonathan Böcker • Edited

Great article there, and I wholeheartedly agree!
What I fail to understand is the industry attitude towards this integration of null into the type system. It seems to be apathy at large, which begs the question I asked in the article: does it add anything to businesses? Not that I can answer the question myself, I just have to consider the fact that null-unaware type systems are not being abandoned in any significant hurry.

Jean-Michel 🕵🏻‍♂️ Fayard • Edited

Kotlin, Swift, TypeScript, GraphQL, F#, Eiffel have it, and even Java via the @Nullable annotations. Not a bad start!