While most programming languages have a single "nullish" type to represent missing values, JavaScript has two: `null` and `undefined`. That's right, two nullish types. Therefore, one of the most common recommendations is to stick to only one of them, and my recommendation is to use `undefined` and avoid `null`. This article explains why.
The creator of null pointers (Tony Hoare) is known for calling his creation a “billion-dollar mistake”:
> I call it my billion-dollar mistake (…) My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.
When we use nullish values, we want to express that something is “not there,” a “no-value.” Generally, in typed languages, we represent those as “optional values” because they can either be set or be nullish.
The direct implication is that we need to test every “optional value” for its type and the nullish value it can take.
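For instance, here is a minimal sketch in TypeScript (the function and names are invented for illustration):

```typescript
// An "optional value": the caller may pass a string or nothing at all.
function greet(name?: string): string {
  // Every optional value forces a nullish check before use.
  if (name === undefined) {
    return "Hello, stranger!";
  }
  return `Hello, ${name}!`;
}
```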
Now, imagine how bad it is for a language to have two nullish values: we need to test not for two different types but for three. This negatively affects maintenance, readability, and overall code quality, which is why the most common recommendation is to avoid nullish values as much as possible. In this article I'll discuss why I (and many other developers) prefer `undefined` over `null`. If we choose `null`, then in all those scenarios we need to explicitly set the values to `null`, which looks like this:
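A sketch of that explicit bookkeeping (the variable names are invented):

```typescript
// With null, every "no value yet" has to be spelled out by hand.
let currentUser: string | null = null;
let selectedTheme: string | null = null;

// ...and later, when the value finally arrives:
currentUser = "Luke";
```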
For me, that sounds like `undefined` with extra steps.
And if we ever need to clear a value that was already set? In that case, assign `undefined` to it:
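Something like this (again, invented names):

```typescript
let selectedItem: string | undefined = "apple";

// Clearing the value: just assign undefined.
selectedItem = undefined;
```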
We all know at this point about the bug with `typeof null`; that bug doesn't exist with `undefined`, which works as expected:
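To recap the behavior of both:

```typescript
// The historic bug: null claims to be an object.
console.log(typeof null); // "object"

// undefined reports its own type, as expected.
console.log(typeof undefined); // "undefined"
```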
Why would we use a bugged value intentionally? This is well known, but just in case: in a backup of Mozilla's original JS implementation, you can read through the code to see how `typeof null` is implemented and why it's bugged. Long story short, it was a mistake with bitwise operators. There was a proposal to fix it, but it was discarded.
Response bodies are drastically smaller if we rely on `undefined` instead of `null`. Here's a response example using each:
`Array` is a particular case: when we create a new array of a given size, the items inside it are empty slots. If we check their value, it gives us `undefined`, but they aren't taking any space in memory (for performance reasons), so if we try to loop over the array, it provides us with nothing:
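A quick demonstration of that sparse-array behavior:

```typescript
// new Array(3) creates three empty slots, not three stored values.
const sparse: undefined[] = new Array(3);

console.log(sparse.length); // 3
console.log(sparse[0]); // undefined

// The slots are empty, so iteration callbacks never run.
let visited = 0;
sparse.forEach(() => visited++);
console.log(visited); // 0
```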
When I say that we don't need `null`, folks who use it a lot (generally coming from other languages where `null` is the only nullish value) get pretty mad about such claims. The most common response I get is:

> `null` is for intentional missing values, and `undefined` should be used when the values were never set in the first place.
The first thing I think with responses like that is: Why would we ever need to make that distinction? Both are "nullish," and we don't need to differentiate between "intentionally missing" and "unintentionally missing." One common usage of `null` is to do stuff like this:
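For example (the shape of the object is made up):

```typescript
const user = {
  name: "Luke",
  // Explicitly marking the value as missing:
  middleName: null,
};
```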
But we can omit `middleName` when the user doesn't have one:
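The same object, leaving the property out entirely:

```typescript
const user: { name: string; middleName?: string } = {
  name: "Luke",
};

// Reading the absent property already gives us undefined for free.
console.log(user.middleName); // undefined
```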
And we can set `middleName` to an empty string if the user intentionally left it blank, if we need to know that for some reason:
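For instance:

```typescript
const user: { name: string; middleName?: string } = {
  name: "Luke",
  // The user saw the field and intentionally left it blank.
  middleName: "",
};
```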
And the TypeScript representation would be something like this:
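A sketch of that type:

```typescript
interface User {
  name: string;
  // Either a string (possibly empty) or absent, i.e. undefined.
  middleName?: string;
}

const complete: User = { name: "Grace", middleName: "Brewster" };
const minimal: User = { name: "Alan" };
```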
So why would we waste memory with a `null` value, or bits in a JSON response, when we can omit what is not there?
> But the API is responding with `null` (maybe written in Java), so I have to use `null` all over my app as well.
My answer to that is: we should use an API wrapper. Instead of "spreading" `null` all over our codebase, we should update our surface of contact with the API so that all `null`s are turned into `undefined`s. If we have any contact with the folks making the API, we should also voice our concern about making API responses smaller by eliminating `null` values.
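As a sketch of such a wrapper (the function name is made up), we can recursively map `null` to `undefined` at the boundary and drop the resulting keys:

```typescript
// Turn every null in an API response into undefined, recursively.
function nullsToUndefined(value: unknown): unknown {
  if (value === null) return undefined;
  if (Array.isArray(value)) return value.map(nullsToUndefined);
  if (typeof value === "object") {
    const result: Record<string, unknown> = {};
    for (const [key, val] of Object.entries(value as object)) {
      const converted = nullsToUndefined(val);
      // Omit the key entirely instead of storing undefined.
      if (converted !== undefined) result[key] = converted;
    }
    return result;
  }
  return value;
}
```

We would call this once, right where the response is parsed, so the rest of the app never sees a `null`.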
> But in React I use `null` when I want a component to not render anything.
We can use `undefined` as well.
> We have to type 5 more characters when we write `undefined` explicitly in our code.
Generally, we will rely on it implicitly (by omitting the value), but even if we had to type it every time, it would be worth it compared to all the downsides of `null`.
There are languages out there that don't have nullish values and instead rely on `Maybe`, which is a type that means "we might get a certain type or nothing." We can do a simple implementation of that in TypeScript like this:
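A minimal version (the lookup function is invented for illustration):

```typescript
// Either we get a T, or we get nothing (undefined).
type Maybe<T> = T | undefined;

function findUser(id: number): Maybe<string> {
  const users: Record<number, string> = { 1: "Luke", 2: "Leia" };
  return users[id];
}
```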
So we might get whatever type we are expecting, or `undefined`. We can also just use `?` when it's a property or argument:
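For example (names invented):

```typescript
interface Profile {
  // A Maybe<string>, written with the `?` shorthand.
  nickname?: string;
}

// `?` works the same way on arguments:
function describe(profile: Profile, emphasis?: string): string {
  return `${profile.nickname ?? "anonymous"}${emphasis ?? ""}`;
}
```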
To deal with our "Maybes," we can use operators such as nullish coalescing (`??`) and optional chaining (`?.`):
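Both in action (a made-up config shape):

```typescript
interface Config {
  theme?: { color?: string };
}

const config: Config = {};

// Optional chaining stops safely at the first nullish link...
const color = config.theme?.color;

// ...and nullish coalescing provides a fallback.
const finalColor = color ?? "tomato";
```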
We can enforce avoiding `null` by using this great ESLint plugin and adding this to our linting rules:
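Assuming the plugin in question is `eslint-plugin-no-null` (the original link didn't survive, so the exact name is an assumption), the configuration could look like this:

```javascript
// .eslintrc.js — assuming eslint-plugin-no-null is installed
module.exports = {
  plugins: ["no-null"],
  rules: {
    // Report every use of the null literal.
    "no-null/no-null": "error",
  },
};
```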
Here’s a list of some sources by other developers that share my opinion about
nullin favor of
- Null is bad by a lot of people.
- Why you should always use undefined, and never null by Fredrik Söderström.
- TypeScript coding guidelines
- A StackOverflow answer.
- The Better Parts by Douglas Crockford.
My opinion about `null` is that any code relying on it could (and should) be written with `undefined` instead. So, as usual, I close this article with a few open questions: Do we NEED to use `null`? Don't we have a way of resolving "that" issue without it?