Functors, Monads and better functions
DrBearhands Nov 29 '18 Updated on Dec 12, 2018 ・6 min read
This post is part of a series called functional fundamentals. See the introduction and index. That's also where you can request specific topics
Note that I am not an authoritative figure on these subjects, if you see something you think might be incorrect or incomplete, please point it out with a comment, let's prevent mistakes from spreading :)
This post was edited for clarity following feedback
In reading this, you should probably try not to think about concepts from imperative programming languages. Especially forget about weakly typed languages such as python and javascript. The isomorphism expressed here holds best with functional, strongly typed languages. Ideally, you have already written a very basic Elm or Haskell program or played around with their REPL.
It's about time to take a shallow dive into category theory. You can try to understand Functors and Monads without it, but that will often be awkward and sometimes even wrong. Category theory can also help you make better decisions about e.g. what type signatures your functions should have.
You might say category theory is the study of composition. This makes it weirdly abstract at times, but, I find, surprisingly easy to understand. At least, once your brain manages to parse all the symbols into meaning.
Category theory deals with objects (not related to OOP) and arrows. The arrows go from one object to another. Objects and arrows are very abstract concepts; category theory simply does not care about what they are, only about how arrows connect objects.
This makes category theory applicable to a lot of domains.
A category is merely "that which we are talking about". In FP, "that which we are talking about" is usually types and functions, so the objects correspond to types and the arrows correspond to functions. But in imperative programming you might use program state as objects and commands as arrows, as in Hoare logic.
The two rules that we do require for a category are identity and composition.
Identity
Identity is an arrow that goes from an object to itself and exists for every object in a category.
Composition
If there is an arrow f from object a to object b (denoted f :: a → b), and there is an arrow g :: b → c, then there must be an arrow a → c, called the composition of g after f, denoted as g ∘ f.
Isomorphism with FP
In the category of types (and functions), we have identity by the identity function, and composition by the function composition operator (<< in Elm and . in Haskell). That means type systems in functional programming indeed follow the rules of category theory. Other languages do as well, but in a bit of a weird way.
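As a small sketch (the functions f and g here are my own examples, not from the post), the category laws can be checked directly in Haskell, where id is the identity arrow and . is composition:

```haskell
-- f and g are two arrows (functions) in the category of types.
f :: Int -> Int
f = (+ 1)

g :: Int -> Int
g = (* 2)

main :: IO ()
main = do
  print ((g . f) 3)          -- composition: g after f, (3 + 1) * 2 = 8
  print ((f . id) 3 == f 3)  -- True: id is a unit for composition
  print ((id . f) 3 == f 3)  -- True: on either side
```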
Functors
Objects and arrows are part of a category. While arrows connect one object to another, it is also possible to connect one category to another, by connecting all objects from one category to objects in the other. This is called a functor if the arrows in the first category connect objects in more or less the same way as in the second category.
But what does that mean exactly?
For a functor F, which goes from any object x in one category to a corresponding object F x in another category, and any pair of arrows f :: a → b and g :: b → c, and their composition g ∘ f, there must be corresponding arrows f' :: F a → F b, g' :: F b → F c and (g ∘ f)' :: F a → F c, so that:

- if f is the identity arrow, f' is also the identity arrow (identity is preserved);
- (g ∘ f)' = g' ∘ f' (composition is preserved).
This tells us that structural rules that hold in one category, still hold in a category that is its functor.
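To make this concrete, here is a quick check of both laws for the list functor in Haskell (f, g and xs are assumed example values, not from the post):

```haskell
f :: Int -> Int
f = (+ 1)

g :: Int -> Int
g = (* 2)

xs :: [Int]
xs = [1, 2, 3]

main :: IO ()
main = do
  -- identity is preserved: mapping id changes nothing
  print (fmap id xs == xs)
  -- composition is preserved: fmap (g . f) == fmap g . fmap f
  print (fmap (g . f) xs == (fmap g . fmap f) xs)
```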
In the image above, with F being a functor represented by the dotted line, you can see that we can first follow f, then F, then g', or we can follow g ∘ f and then F, or F followed by (g ∘ f)'. The result will be the same, because the 'path' on the left has a corresponding 'path' on the right.
A functor doesn't have to be between different categories, it can be from one category to itself, in which case we call it an endofunctor.
What does this mean for functional programming? Well, in type systems, we can create a type constructor (or higher-kinded type if you prefer): List, Set, Array, Iterator, Maybe / Optional, Cmd, etc. They aren't really types by themselves, but will construct a type from another type.
A type constructor F is a functor iff. we have a function fmap that takes any function a → b and produces an F a → F b (with the preservation of identity and composition we discussed above).
This fmap function abstracts away the functor's inner structure, letting us focus on element operations. That way we can e.g. apply an operation to a list, dict, stream, promise or whatnot without resorting to loops, recursion or callback nesting.
Some Elm code samples:

- List.map ((+) 1) listOfInts will increase the values of all elements in listOfInts by 1.
- Maybe.map ((+) 1) maybeInt will increase maybeInt by 1 if it isn't Nothing.
- Cmd.map ((+) 1) someCmd (very much Elm specific) will cause the msg : Int passed to the update function in Elm's architecture to be one higher than the result of executing someCmd : Cmd Int.
In Haskell, you use typeclasses to write e.g. fmap ((+) 1) [...], but the idea is the same.
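For instance, the same fmap works on any functor (a small sketch; the types are annotated only for clarity):

```haskell
main :: IO ()
main = do
  print (fmap (+ 1) [1, 2, 3 :: Int])        -- increments every list element
  print (fmap (+ 1) (Just (41 :: Int)))      -- increments inside the Just
  print (fmap (+ 1) (Nothing :: Maybe Int))  -- nothing to increment
```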
Monads
Monads are a special type of endofunctor (functors from a category to itself). What's special about them is that they are composable.
For an endofunctor M
to be a monad, any arrow f :: a → M b
must have a counterpart f' :: M a → M b
. That's pretty much it.
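In Haskell, this counterpart is written f =<< (or, flipped, >>=). A sketch with a hypothetical half function (my own example) that only halves even numbers:

```haskell
-- half :: a -> M b; (half =<<) :: M a -> M b is its counterpart.
half :: Int -> Maybe Int
half n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

main :: IO ()
main = do
  print (half =<< Just 8)            -- Just 4
  print (half =<< (half =<< Just 8)) -- Just 2: the lifted arrows compose
  print (half =<< Just 3)            -- Nothing
```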
This is cool, because it means we can compose functions that return monads.
(example code is in Elm syntax)
E.g., suppose we have a function that parses a string into a float, parseFloat : String -> Maybe Float, and a division function that returns Nothing if we try to divide by 0, divide : Float -> Float -> Maybe Float (see dividing by 0), and we'd like to divide some number by the result of parseFloat someString.
The code divide 7 (parseFloat someString) wouldn't compile, because (parseFloat someString) is a Maybe Float, and divide 7 takes a Float. Without monads, we'd have to do something like:
case parseFloat someString of
    Just result -> divide 7 result
    Nothing -> Nothing
Forcing us to deal with the functor's internal structure explicitly.
With monads (and Maybe is a monad), we can use andThen, which converts an a → M b to an M a → M b, to lift divide 7 : Float -> Maybe Float to andThen (divide 7) : Maybe Float -> Maybe Float. This lets us write:
andThen (divide 7) (parseFloat someString)
or, leveraging the |> operator:

parseFloat someString
    |> andThen (divide 7)
Haskell doesn't have a standard |> operator, but combines |> with andThen into >>=.
parseFloat someString
    >>= divide 7
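A runnable Haskell sketch of the same pipeline (these parseFloat and divide definitions are my own hypothetical counterparts of the Elm functions above, built on readMaybe):

```haskell
import Text.Read (readMaybe)

-- Parse a string into a number, failing with Nothing on bad input.
parseFloat :: String -> Maybe Double
parseFloat = readMaybe

-- Division that refuses to divide by zero.
divide :: Double -> Double -> Maybe Double
divide _ 0 = Nothing
divide x y = Just (x / y)

main :: IO ()
main = do
  print (parseFloat "3.5" >>= divide 7)  -- Just 2.0
  print (parseFloat "0"   >>= divide 7)  -- Nothing (division by zero)
  print (parseFloat "abc" >>= divide 7)  -- Nothing (parse failure)
```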
I should note that Haskell (and probably other languages) has a few more bells and whistles for monads. This is somewhat Haskell-specific, as it relates to lazy evaluation and do blocks, so I'm not going to get into it at this point in time.
Function signatures
Category theory also teaches us what 'better' functions look like in regard to composition. Now this is a rather vague argument based on how sums and products are defined in category theory, but it's rather self-evident:
If you have f :: a → c and g :: b → c, and some h exists s.t. h :: a → b, but no h exists s.t. h :: b → a, arrow g is 'better' than f for constructing c, since we'd also have g ∘ h :: a → c.
In other words, when in doubt, implement the simplest function. This is rather like a formalization of the first two pillars of the UNIX philosophy:
This is the Unix philosophy: Write programs that do one thing [...]. Write programs to work together. [...]
Further reading
- See if you can find out why a list is a monad.
- Try to think of a functor that isn't a monad. This might take you a while.
- I would heartily recommend further CT material by Bartosz Milewski. He has a somewhat practical, written series and a video lecture series. I personally prefer the content of the latter but the format of the former, as I find videos a bit too passive for this material, though they give the more abstract concepts a stronger basis.
Still didn't get it, sorry
I think that might be my fault, there were a few places where I was unsure about the explanations I chose.
Can you tell me where you got stuck? Maybe I can improve that section.
Otherwise, Bartosz Milewski's series (linked at the bottom of the post) takes a lot more time to explain every concept (in particular the video series), so if this post doesn't work for you, you should definitely check them out.
Sorry for my short first reply, I was very frustrated reading the nth "simple" explanation. My approach would be completely different. What you are doing is describing Category Theory like a farmer would describe her "functor-monad-object-arrow-category-unit" land. What I need is a way from "function, method, class, object" land to your land.
First of all: About which objects are you talking when you say "CT deals with objects…"? Is this an abstract term for something not related to OOP objects, or just what I'm used to?
If you are talking about arrows I will think about functions in the classical programming approach. Use examples to explain how the theory is translated into functional programming.
Haskell and Elm are known for their tight connection to functional programming but Javascript is also very powerful and much more common. Adding examples from JS broadens your target audience.
You introduce "Category", now I really need a real-world example of what you are talking about, otherwise it's just abstract. And what is the "structure of arrows"? And why could that mean "F a ≠ F b"?
Then you go on with a functor from an object, but you just wrote that a functor connects a category to another one. And what is a unit? Is an endofunctor the real name of an identity functor?
In the next paragraph you talk about the Maybe/Optional and Cmd types. Never heard of them. Also, e.g. in PHP, JS, and Python, an Array is absolutely a real type, as it doesn't have to be typed.
Next I assume a typo (or isn't it?): "iff. we" → "if we". Then the sentence where I quit: "and produces something isomorphic to identity if identity is provided as input". Well, what?
Sorry I didn't get it. Maybe I'm just too old and my brain plasticity is below what is needed. But I still don't get the concept.
It looks like we just have very different backgrounds. Your feedback is very useful, I will use it to improve the post.
One of the big problems with learning / teaching FP is that people from IP (imperative programming) have expectations that don't hold for FP. So rather than lacking brain plasticity, you know too much. It is a common problem to which I have probably not given enough attention. To address your specific concerns:
Not the same. Objects in CT are pretty much undefined, nothing more than something for arrows to connect. So you can have a category where CT objects are OOP objects, but you can also have a category where objects are apples, or the number 5, or the belief in a higher power.
In programming we are mostly concerned with the cases where objects are types (FP) or states (IP) and arrows are functions (FP) or commands (IP).
You've got it, arrows are isomorphic to functions ("isomorphic" means they have the same 'shape', they act the same).
The isomorphism doesn't work well for javascript because it is neither statically typed nor functional (being functional requires no side-effects, not just passing callbacks around).
This series is about the theory of functional programming, not so much about how to make JS look like Haskell. There's other posts doing that pretty well.
I should have spent more words describing all this. It is all, indeed, very abstract though.
It's a bit much to address in a reply, please give me some time to fix the post.
I mentioned how the isomorphism doesn't really hold up for weakly typed languages. Essentially those languages don't have types, so they also don't have type constructors.
"Monads" in js are more monad-inspired.
Nope. "iff." is short for "if and only if".
Expressed in JS, a function map(callback) should return a callback cbk that acts just like a => a, i.e. cbk(foo) === (a => a)(foo). Two functions are isomorphic if they produce the same result for the same input.
Thank you for spending so much effort on explaining everything more clearly. I can't say I completely understood CT-land, but it's a lot clearer.
That has been the clearest explanation I've seen so far. It explains just enough category theory without making pages about it, but it's not just about return, bind and fmap either. So thank you! :D
Good job explaining functors and endofunctors. I am currently working my way through Bartosz Milewski's book on category theory.
That is going to take you far beyond this :)