Wouldn't programming be easy if we only dealt with "simple" values?

You know: integers, strings, booleans... writing a function that takes a string and gives an int, things like that.

Unfortunately, most of the time, the values (or the computations of them) are not so simple. They may have "quirks", such as:

The value may or may not exist!

The value may exist, but there may be more than one of them :(

Getting the value would mean involving some kind of I/O operations...

The value may exist... eventually in the future.

The value may produce errors :(

The value may depend on some kind of a state of the outside world.

etc.

This is not an exhaustive list, but I hope you can see the pattern. Those quirks are all nitty-gritty details... what if we didn't have to deal with them? What if we could write functions that act as if those quirks don't exist? Surely, as software developers, we can build some abstraction to solve this?

Well, monads to the rescue!

A monad acts as a container that abstracts away those quirks in the computations, and let us focus more on what we want to do with the contained values.

Let's get back on that list, shall we?

The value may or may not exist, handled by the Maybe monad.

The value may exist, but there may be more than one of them, handled by the List monad (yes, List is a monad!).

Getting the value would mean involving some kind of I/O operations, handled by the IO monad.

The value may exist eventually in the future, handled by the Promise/Future monad (that Promise you use in JavaScript? It's a monad!--kind of).

The value may produce errors, handled by the Error/Result/Either monad.

The value may depend on some kind of a state of the outside world, handled by the State monad.

Amazing, isn't it? What's that? "If they're just containers, what's so special about them," you say?

Well, besides being a container, a monad also defines a set of operations to work on that container. For this, let's introduce the term monadic value to refer to a simple value that is wrapped in a container. Those operations include:

return: how to wrap a "simple" value into a monadic value? You return it!

fmap: you have a function that takes a String and produces an Int. Can you use it for Maybe String to produce Maybe Int? Spoiler: yes, you fmap it!

join: oh no, I have a Maybe (Maybe String)! How can I flatten it? Use join, and it will give you Maybe String.

bind or chain or >>= (yes, that symbol, we have a name for it!): a combination of fmap + join, since that pattern (also called flatMap) occurs quite often.

liftM, liftM2, liftM3, etc.: if we have a function that takes a String and produces an Int, can we construct a function that takes a Future String and produces a Future Int? Yes, you "lift" them to the monadic world!

>=> (another weird symbol, I call it monadic compose): suppose you have a function that takes a String and outputs IO Int, and another function that takes an Int and produces an IO Boolean, can you construct a function combining the two that takes a String and produces an IO Boolean? You guessed it, you compose them with >=>!
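The operations above can be sketched in JavaScript as a tiny Maybe class. This is only an illustration under my own hypothetical encoding (JavaScript has no built-in Maybe); the names just, nothing, map, join, flatMap, and kleisli are all made up here to mirror return, fmap, join, >>=, and >=>.

```javascript
// A minimal, hypothetical Maybe so the monad operations have a home.
class Maybe {
  constructor(hasValue, value) { this.hasValue = hasValue; this.value = value; }
  static just(v) { return new Maybe(true, v); }   // "return": wrap a simple value
  static nothing() { return new Maybe(false); }
  map(f) {                                        // "fmap": apply f inside the container
    return this.hasValue ? Maybe.just(f(this.value)) : Maybe.nothing();
  }
  join() {                                        // flatten Maybe (Maybe a) to Maybe a
    return this.hasValue ? this.value : Maybe.nothing();
  }
  flatMap(f) {                                    // "bind" (>>=): map, then join
    return this.map(f).join();
  }
}

// ">=>" (monadic compose) for two Maybe-returning functions.
const kleisli = (f, g) => (x) => f(x).flatMap(g);
```

For example, `Maybe.just("abc").map(s => s.length)` gives `Just 3`, while `Maybe.nothing().map(s => s.length)` stays `Nothing` without ever calling the callback.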

So that's it about monads! I hope it gives the 5-year-old you a practical understanding of how powerful this abstraction really is.

If you want more examples, I've written a blog post about this here (this comment is actually a gist of the article).

Note: Before all the more initiated FP devs burn me, I realize that I'm conflating many things from Functors, Applicatives, and whatnot in this explanation, but I hope you forgive my omission, since omitting details might be necessary for 5-year-old explanations. I also encourage the reader to study the theoretical foundations should you be interested :D

Let's say that one day you wake up and decide that you're tired of always fixing bugs, damnit! So you set out to create a language that won't have bugs. In order to do this, you make your language extremely simple: functions will always take 1 argument, will always return 1 value, and will never have side-effects. You've also heard that types are making a come-back, so this simple language will be statically typed. Just so we can talk more easily about this language, you create a shorthand: a -> a means "a function that takes an argument of type a and returns a value of type a". So a function that sorts a list would look like List -> List.

If you need to pass two arguments, then you will do that by writing a function that takes one argument and returns a function that takes one argument and generates a value. Using our short-hand notation we can write this as a -> b -> a, meaning "a function that takes an argument of type a and returns a function that takes an argument of type b and returns a value of type a". So a function that appends a character to a string would look like String -> Char -> String.

Now that we have a simple notation for our simple language, there's one more thing you want to do to avoid bugs, Tony Hoare's billion-dollar mistake: no null references. So every function must return something.

Great! Now we can start implementing basic functions. Start with add to add two integers. Its signature is Int -> Int -> Int and we use it like so: add 2 3 returns 5. Next, mul also has signature Int -> Int -> Int and mul 2 3 returns 6. Nice! Ok, on to div for division... Ah! But what's this? If we say its signature is Int -> Int -> Int then what should div 3 0 return?

Crud!

Ok, so we need a type that represents a situation where we can't return a value: Nothing. This doesn't solve our problem completely, though, because we only want div to return Nothing if there's a division by zero. The rest of the time we would like to get back an Int. So we need another new type: Maybe a = Just a | Nothing (types like this are sometimes called a "sum type", "tagged union", "variant", or half-a-dozen other names no one can agree on). This little bit of notation means that any function that has in its signature a Maybe Int can accept either Just Int, which is just an Integer wrapped up to make it compatible with the Maybe type, or Nothing. Now we can write div's type signature as Int -> Int -> Maybe Int.
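This Maybe-returning div can be sketched in JavaScript with a hypothetical tagged-object encoding (JavaScript has no built-in Maybe; Just, Nothing, and div here are names I'm inventing for illustration):

```javascript
// Hypothetical Maybe encoding: Just wraps a value, Nothing carries none.
const Just = (value) => ({ tag: "Just", value });
const Nothing = { tag: "Nothing" };

// div :: Int -> Int -> Maybe Int — never crashes, never returns null.
const div = (x, y) => (y === 0 ? Nothing : Just(Math.trunc(x / y)));
```

So `div(4, 2)` gives `Just 2`, while `div(3, 0)` gives `Nothing` instead of blowing up.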

Problem Solved! ...or is it?

You may have heard rumor of the "Maybe Monad", but this Maybe type we've just described is not yet a monad. A monad requires not only a type but also at least two functions to work on that type. To understand why, consider what happens if you want to start chaining functions in your new minimal language. add (mul 2 3) 4 works and returns 10, since mul turns into an Int after we feed in two Ints and add takes two Ints. But what about mixing in div? We would like add (div 4 2) 1 to return 3, but it won't work because div ends with a Maybe Int and add is expecting an Int.

Time to introduce one more bit of notation to make things a bit easier to talk about: (\x -> f x) indicates an anonymous function that takes an argument x and does something with it (in this case, using it as the argument to a function f).

Ok! Now, the first thing we need is "bind" (just to keep with Haskell notation, let's use >>= for "bind"). This is a function that will know how to take our Maybe type, pull out the value (if there is one) and use it in a function. If there's not a value (that's our Nothing type), then it just passes along Nothing. So, put into our shorthand notation, this looks like:

Just x >>= f = f x
Nothing >>= _ = Nothing

(The _ just means that we don't particularly care what the second argument to "bind" is, because we're always going to return Nothing.)

We're almost there, but there's one more problem. Look what happens if we attempt to combine add and div as before (now using our new anonymous function notation and "bind"):

div 4 2 >>= (\x -> add x 1)

To understand why this is a problem, consider what the type signature of this whole thing should be. If div is returning a Just Int, then that will be passed along to add which will return Int. If, however, we swapped the 2 with a 0 then div would return Nothing and "bind", following our definition above, should return Nothing, which is a Maybe type. So in one case we get an Int and in the other a Maybe... but this is supposed to be a statically typed language!

Crud!

We're almost there. All we need to complete the job is return. This is simply a function that will know how to create an example of our type from some other type. Since Nothing should only ever be used when we don't have a value to return, the definition of return for Maybe a is quite straight-forward: return a = Just a. For other, more complicated types >>= and return could be more complicated.

Finally, now we can combine add and div:

div 4 2 >>= (\x -> return (add x 1))

Et voila! A Maybe monad!
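The whole walkthrough can be condensed into runnable JavaScript. This is a sketch under an assumed encoding (Just as a `{ value }` object, Nothing as `null`); bind, ret, add, and div are names I'm choosing here, since `return` is a reserved word in JavaScript:

```javascript
// Hypothetical Maybe encoding: Just is { value }, Nothing is null.
const just = (value) => ({ value });
const nothing = null;

// "bind" (>>=): unwrap the value and feed it to f, or pass Nothing along.
const bind = (m, f) => (m === nothing ? nothing : f(m.value));
// "return": wrap a plain value. (Renamed because `return` is reserved.)
const ret = just;

const add = (x) => (y) => x + y;                 // add :: Int -> Int -> Int
const div = (x) => (y) =>                        // div :: Int -> Int -> Maybe Int
  y === 0 ? nothing : just(Math.trunc(x / y));

// div 4 2 >>= (\x -> return (add x 1))
const good = bind(div(4)(2), (x) => ret(add(x)(1))); // Just 3
// Division by zero short-circuits straight past the lambda.
const bad = bind(div(4)(0), (x) => ret(add(x)(1))); // Nothing
```

Every expression is now uniformly a Maybe Int, so the types line up no matter which branch div takes.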

So is a Monad really that simple? Well, yes and no. Much like General Relativity, writing down the basic functions for a Monad isn't all that difficult, but this simple combination of a type and two functions opens a whole world of possibilities for strict, statically typed functional languages. Essentially, Monads allow you to defer some part of your program's evaluation while still writing functions that take one value and return one value. In the case of Maybe, we're deferring what to do about missing values or values that we cannot produce. We can use Monads to defer other things like error handling (the Either Monad) or input (the infamous IO Monad).

Monads are packaging that can re-package itself. First, let's talk about functors, since monads are a special kind of functor.

Think about chocolate: you want to eat it, but you also want to take it somewhere without it getting dirty.

You have to unwrap it to eat a piece, but put the wrapper back around it when you take it to your friends.

Or eggs: you want them in a carton so you can store them in bunches in your fridge without them rolling around, but you have to take out the ones you want before eating them.

Put your values in a list or array to store them in groups.

Put your values in a promise or task, so you can move them through your software before you've calculated them.

Put your values in a maybe or option if you're not sure whether it's okay to use them somewhere.

The nice thing about these concrete functors is that they can all be implemented with the same interface. Let's give them all a map method, and you don't have to care about the differences anymore.

functor.map(oneValue => ...) works for all of these.

Arrays and lists will call your callback for every value stored.

Promises and tasks will call your callback when the value they're calculating some time in the future is ready.

Maybes and options will call your callback right away if a value is in them; if not, they won't ever call it, so you don't have a special null case anymore.
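The maybe behavior above can be sketched with a null-based helper (maybeMap is a hypothetical name, not a standard API):

```javascript
// Call the callback only when a value exists; otherwise pass null straight through.
const maybeMap = (f, m) => (m === null ? null : f(m));

maybeMap((x) => x + 1, 2);    // 3
maybeMap((x) => x + 1, null); // null — the callback is never called
```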

The thing that makes a functor into a monad is a method, often called flatMap, that lets you return another monad of the same type from your callback and, instead of nesting them, flattens them out.

Imagine, you exchange every egg of your carton for another carton filled with a few eggs, you wouldn't be able to store every new carton inside your old carton, but maybe you would be able to store every egg of your new cartons inside the old carton. So you would have to open up every new carton, get out every egg and put it in the old one. Monads do this for you with a method called flatMap.

[1,2].map(oneValue => [oneValue * 1, oneValue * -1]) would give you [[1,-1],[2,-2]], but maybe you want [1,-1,2,-2], so you use [1,2].flatMap(oneValue => [oneValue * 1, oneValue * -1])

Same goes for the other monads.

A promise or task that results in creating another promise or task inside the map? Let's use flatMap instead so you can chain the whole thing:

getDataAsync().flatMap(data => parseDataAsync(data)).map(data => log(data))

maybeImNotUseful.flatMap(value => maybeIcreateSomethingNew(value)).map(value => ...)

This would look rather ugly without flatMap.

To say something is a Monad means it meets a certain standard protocol. That includes implementing a few specific operations and a data type.

What you get for conforming to the protocol are: a) a uniform interface which is familiar to users of other Monads b) some common helper methods for free (or cheap)

For example, List.map toString integerList and Async.map toString integerAsync both transform the value(s) inside the monad, even though their data structures and purposes are very different. map is a standard operation on Monads, so for any Monad I come across, I can understand what it is for.

Depending on the language (e.g. Haskell), you may not even have to define map yourself, because this operation (and others) can be derived automatically from more basic operations. So you could get it for free. In languages with more limited type systems, you can still get it "for cheap" by copying the same map definition among different Monads. E.g. in F#:

let map f x = bind (f >> retn) x

This definition is "mathematical" if you will, so it will not change over time or between Monads. It only requires that bind and retn (2 of the basic operations) be defined already.

Cross the river with a boat. Once you're on the boat you can do 'boat stuff', e.g. you gain the ability to 'swim'. But you don't want to stay on the boat forever.

The boat would be the monad.

...

(This is inspired by / I stole this explanation from) Bart de Smet explaining the ABC of Monads when discussing MinLinq on Channel 9. It's at 45:50 where the Ana, Bind & Cata story starts.

The essence is composition as the basic design pattern. For all that exceeds the simple composition of total functions, one wants to define operations which enable composition, be it handling of side effects or any additional processing such as concatenation of intermediate results.

Monad M is a functor from a category K to itself (preserving composition) with two so-defined operations, best understood when cast into the category Kleisli(K), where composition is enabled by "wrapping" arrow targets with M. The rest is technical: how to fit useful algorithms into composition, how to combine different so-enabled compositions (e.g., monad transformers), etc.

At least, this is how I would have explained it to my 5yr old :-)


I think this is easier to explain with pictures? LOL. But I can't draw :cry:

There's this article with pictures: adit.io/posts/2013-04-17-functors,... :smile: