This post is part of a series called functional fundamentals. See the introduction and index; that's also where you can request specific topics.
Note that I am not an authoritative figure on these subjects. If you see something you think might be incorrect or incomplete, please point it out with a comment; let's prevent mistakes from spreading :)
This post was edited for clarity following feedback
In reading this, you should probably try not to think about concepts from imperative programming languages. Especially forget about weakly typed languages such as Python and JavaScript. The isomorphism expressed here holds best with functional, strongly typed languages. Ideally, you have already written a very basic Elm or Haskell program or played around with their REPL.
It's about time to take a shallow dive into category theory. You can try to understand Functors and Monads without it, but that will often be awkward and sometimes even wrong. Category theory can also help you make better decisions about e.g. what type signatures your functions should have.
You might say category theory is the study of composition. This makes it weirdly abstract at times, but, I find, surprisingly easy to understand. At least, once your brain manages to parse all the symbols into meaning.
Category theory deals with objects (not related to OOP) and arrows. The arrows go from one object to another. Objects and arrows are very abstract concepts; category theory simply does not care about what they are, only about how arrows connect objects.
This makes category theory applicable to a lot of domains.
A category is merely "that which we are talking about". In FP, "that which we are talking about" is usually types and functions, so the objects correspond to types and the arrows correspond to functions. But in imperative programming you might use program state as objects and commands as arrows, as in Hoare logic.
The two rules that we do require for a category are identity and composition.
Identity
Identity is an arrow that goes from an object to itself and exists for every object in a category.
Composition
If there is an arrow f from object a to object b (denoted f :: a → b), and there is an arrow g :: b → c, then there must be an arrow a → c, called the composition of g after f, denoted as g ∘ f.
Isomorphism with FP
In the category of types (and functions), we have identity by the identity function, and composition by the function composition operator (<< in Elm and . in Haskell). That means type systems in functional programming indeed follow the rules of category theory. Other languages do as well, but in a bit of a weird way.
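To make that concrete, here is a minimal Haskell sketch; the functions double and showIt are made up for illustration. id is the identity arrow for every type, and . is arrow composition:

```haskell
-- In the category of Haskell types and functions, `id` is the identity
-- arrow for every type, and `(.)` composes arrows.
double :: Int -> Int
double x = x * 2

showIt :: Int -> String
showIt = show

-- g ∘ f: composing an Int -> Int arrow with an Int -> String arrow
-- yields an Int -> String arrow.
composed :: Int -> String
composed = showIt . double

main :: IO ()
main = do
  putStrLn (composed 21)                 -- prints 42
  print (id (5 :: Int) == 5)             -- identity does nothing
  print ((showIt . id) 21 == showIt 21)  -- id is a unit for composition
```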
Functors
Objects and arrows are part of a category. While arrows connect one object to another, it is also possible to connect one category to another, by connecting all objects from one category to objects in the other. This is called a functor if the arrows in the first category connect objects in more or less the same way as in the second category.
But what does that mean exactly?
For a functor F, which maps any object x in one category to a corresponding object F x in another category, and any pair of arrows f :: a → b and g :: b → c with their composition g ∘ f, there must be corresponding arrows f' :: F a → F b, g' :: F b → F c and (g ∘ f)' :: F a → F c, so that:
- if f is the identity arrow, f' is also the identity arrow (identity is preserved)
- (g ∘ f)' = g' ∘ f' (composition is preserved).
This tells us that structural rules that hold in one category still hold in its image under a functor.
In the image above, with F being a functor represented by the dotted line, you can see that we can first follow f, then F, then g', or we can follow g ∘ f, then F, or F followed by (g ∘ f)'. The result will be the same, because each 'path' on the left has a corresponding 'path' on the right.
A functor doesn't have to be between different categories, it can be from one category to itself, in which case we call it an endofunctor.
What does this mean for functional programming? Well, in type systems, we can create type constructors (or higher-kinded types if you prefer): List, Set, Array, Iterator, Maybe / Optional, Cmd, etc. They aren't really types by themselves, but construct a type from another type.
A type constructor F is a functor iff. we have a function fmap that takes any function a → b and produces an F a → F b (with preservation of identity and composition, as discussed above).
This fmap function abstracts away the functor's inner structure, letting us focus on operations on the elements. That way we can e.g. apply an operation to a list, dict, stream, promise or whatnot without resorting to loops, recursion or callback nesting.
Some Elm code samples:
- List.map ((+) 1) listOfInts will increase the values of all elements in listOfInts by 1.
- Maybe.map ((+) 1) maybeInt will increase maybeInt by 1 if it isn't Nothing.
- Cmd.map ((+) 1) someCmd (very much Elm-specific) will cause the msg : Int passed to Elm's update function to be one higher than the result of executing someCmd : Cmd Int.
In Haskell, you use typeclasses to write e.g. fmap ((+) 1) [...], but the idea is the same.
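A few Haskell counterparts of these samples, plus a spot-check of the two functor laws on a small list (illustrative only, not a proof):

```haskell
main :: IO ()
main = do
  -- fmap applies a plain function inside any functor:
  print (fmap (+ 1) [1, 2, 3 :: Int])        -- [2,3,4]
  print (fmap (+ 1) (Just 41 :: Maybe Int))  -- Just 42
  -- identity is preserved:
  print (fmap id [1, 2, 3 :: Int] == [1, 2, 3])
  -- composition is preserved:
  print (fmap (show . (+ 1)) [1, 2 :: Int]
           == (fmap show . fmap (+ 1)) [1, 2 :: Int])
```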
Monads
Monads are a special type of endofunctor (functors from a category to itself). What's special about them is that functions returning them can be composed (strictly speaking, the monads themselves don't compose; it's these so-called Kleisli arrows that do).
For an endofunctor M to be a monad, any arrow f :: a → M b must have a counterpart f' :: M a → M b. That's pretty much it.
This is cool, because it means we can compose functions that return monads.
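For instance, Haskell ships a Kleisli composition operator, >=> from Control.Monad. The helpers half and decrement below are made-up examples of monad-returning functions:

```haskell
import Control.Monad ((>=>))

-- Two made-up functions returning a monad (Maybe):
half :: Int -> Maybe Int
half x = if even x then Just (x `div` 2) else Nothing

decrement :: Int -> Maybe Int
decrement x = if x > 0 then Just (x - 1) else Nothing

-- Kleisli composition: a -> Maybe b and b -> Maybe c give a -> Maybe c
halfThenDecrement :: Int -> Maybe Int
halfThenDecrement = half >=> decrement

main :: IO ()
main = do
  print (halfThenDecrement 8)  -- Just 3
  print (halfThenDecrement 7)  -- Nothing (half fails)
  print (halfThenDecrement 0)  -- Nothing (decrement fails)
```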
(example code is in Elm syntax)
E.g., suppose we have a function that parses a string into a float, parseFloat : String -> Maybe Float, and a division function that returns Nothing if we try to divide by 0, divide : Float -> Float -> Maybe Float (see dividing by 0), and we'd like to divide some number by the result of parseFloat someString.
The code divide 7 (parseFloat someString) wouldn't compile, because (parseFloat someString) is a Maybe Float, and divide 7 takes a Float. Without monads, we'd have to do something like:
case parseFloat someString of
    Just result -> divide 7 result
    Nothing -> Nothing
Forcing us to deal with the functor's internal structure explicitly.
With monads (and Maybe is a monad), we can use andThen, which converts an a -> M b to an M a -> M b, to lift divide 7 : Float -> Maybe Float to andThen (divide 7) : Maybe Float -> Maybe Float. This lets us write:
andThen (divide 7) (parseFloat someString)
or, leveraging the |> operator:
parseFloat someString
    |> andThen (divide 7)
Haskell doesn't have a standard |> operator, but combines |> with andThen into >>=.
parseFloat someString
    >>= divide 7
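For completeness, here is a runnable Haskell version of this example. parseFloat and divide are assumed helpers mirroring the Elm signatures above, with parseFloat built on readMaybe:

```haskell
import Text.Read (readMaybe)

-- Assumed helpers mirroring the Elm signatures in the article:
parseFloat :: String -> Maybe Float
parseFloat = readMaybe

divide :: Float -> Float -> Maybe Float
divide _ 0 = Nothing  -- dividing by zero yields Nothing
divide x y = Just (x / y)

main :: IO ()
main = do
  print (parseFloat "3.5" >>= divide 7)   -- Just 2.0
  print (parseFloat "0" >>= divide 7)     -- Nothing (division by zero)
  print (parseFloat "oops" >>= divide 7)  -- Nothing (parse failure)
```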
I should note that Haskell (and probably other languages) has a few more bells and whistles for monads. This is somewhat Haskell-specific as it relates to lazy evaluation and do blocks, so I'm not going to get into it at this point in time.
Function signatures
Category theory also teaches us what 'better' functions look like in regard to composition. Now this is a rather vague argument, based on how sums and products are defined in category theory, but it's rather self-evident:
If you have f :: a → c and g :: b → c, and some h exists s.t. h :: a → b, but no h exists s.t. h :: b → a, arrow g is 'better' than f for constructing c, since we'd also have g ∘ h :: a → c.
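As a concrete (and entirely hypothetical) Haskell illustration of this argument: suppose c is String, a is a User record and b is a plain Int. Since an arrow User -> Int exists, the more general Int -> String function is the 'better' one to implement, and the specific one comes for free by composition:

```haskell
-- Hypothetical example: the specific arrow f :: User -> String never
-- needs to be written by hand, because h :: User -> Int exists and the
-- simpler, more general g :: Int -> String gives f = g . h.
data User = User { name :: String, age :: Int }

h :: User -> Int  -- an arrow a -> b exists...
h = age

g :: Int -> String  -- ...so the general arrow b -> c is the 'better' one
g n = show n ++ " years old"

f :: User -> String  -- the specific arrow comes for free: g ∘ h
f = g . h

main :: IO ()
main = putStrLn (f (User "Ada" 36))  -- prints 36 years old
```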
In other words, when in doubt, implement the simplest function. This is rather like a formalization of the first two pillars of the UNIX philosophy:
This is the Unix philosophy: Write programs that do one thing [...]. Write programs to work together. [...]
Further reading
- See if you can find out why a list is a monad.
- Try to think of a functor that isn't a monad. This might take you a while.
- I would heartily recommend further CT material by Bartosz Milewski. He has a somewhat practical written series and a video lecture series. I personally prefer the content of the latter but the format of the former, as I find videos a bit too passive for this material but the more abstract concepts a stronger basis.
Discussion (12)
I'm going to log some of my thoughts while reading this post; hopefully this may serve as constructive feedback, as well as notes for other readers.
Category explanation
The explanation on what a category itself is, is spot on. Incredible job there, it unifies both the formal definition and the Haskell implementation (Control.Category) extremely well.
To add to your definition, I'd suggest readers draw an intuitive analogy to sets. Most people are familiar with sets and set theory, not so much category theory. But really, a set is very similar to a category. In fact, a category can be thought of as a "set" equipped with some morphisms on its elements. Don't get scared by the word morphism! It's just a fancy word for a function. Well, a pure function, of course :)
Functors explanation
I think the definition you used for functors is quite vague in an informal way. Don't get me wrong, there's nothing wrong with informality when explaining these topics, but it seems like you tried to combine the functors in FP and the functors in category theory into one definition. A functor in Haskell is really a concrete (and constrained) application of the abstract functor concept from category theory. Meanwhile, an actual functor is "a mapping between categories".
I personally would suggest preferring the concrete functor definition over the abstract one. To say "a functor is simply a computational context that can be mapped over" feels both intuitive and correct in the context of Haskell (and therefore in pretty much all FP langs).
Monads "compose"
This wording regarding monads, "What's special about them is that they are composable.", is... problematic. This isn't really anyone's fault; it just so happens that "compose" is used ambiguously in many places in these contexts. Monads don't compose! That is their weakness.
I understand what you meant: you're talking about the composition of functions returning monads, the Kleisli arrow. But perhaps it'd be better to say, err, monads chain..? It's a bit of a difficult analogy to be made here, since skipping applicative functors has been, and still is, a historical mistake. The idea of chaining functions in a higher-order world (functor world) really bears well with applicative functors. Monads are also a pattern for chaining operations, but the monadic operation is binding, whereas an applicative functor operation is just applying. The key distinction is that a monad allows dynamically choosing what computation (functor) is executed next, depending on the previous result. Applicatives may only choose statically. See: difference between applicatives and monads.
Monads allow the higher-order world and the regular world to seamlessly interface. This is clear in the type signature of bind / >>=. Look at the function it takes as an argument: a -> m b. a is a regular value; m b is a higher-order value, a computational context! A functor! We've just let the user meddle the two worlds together. <*> and fmap don't let the user do this. This essentially gives you a get-out-of-jail-free card to design a completely imperative API.
Of course, this comes with major drawbacks, hence why many consider the focus on monads before applicatives... a historical mistake. Monads don't compose. Dynamically choosing effects is expensive. In many cases, an applicative can do everything you need, just much more efficiently. A true unsung hero; it saddens me to not see this article mention applicatives.
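A minimal Haskell sketch of the distinction this comment draws (the helper names are made up): with an applicative the shape of the computation is fixed up front, while with a monad the next computation can depend on the previous result:

```haskell
-- Applicative: both arguments are combined with a statically known
-- function; the structure of the computation cannot change mid-flight.
applicativeSum :: Maybe Int
applicativeSum = (+) <$> Just 1 <*> Just 2  -- Just 3

-- Monad: the second computation is *chosen* based on the first result.
monadicChoice :: Maybe Int
monadicChoice = Just 1 >>= \x ->
  if x > 0 then Just (x * 10) else Nothing  -- Just 10

main :: IO ()
main = do
  print applicativeSum
  print monadicChoice
```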
|> in Haskell
Actually, Haskell does have a |> in the standard :). It's & from Data.Function. It's just a flipped version of $. Nothing special. Though >>= does not combine andThen and &: >>= is andThen (but flipped). The exact Haskell alternative of andThen is =<<. They're the same thing; each can be written in terms of the other.
Covariant mapping of functions
The idea you discussed in "Function signatures" is very nice. I think that particular intuition is also a great door opener to contravariance. Contravariant functors, that is. Reverse all the arrows!
Which functor is not a monad?
+1 for the recommendation about "Try to think of a functor that isn't a monad". This is a neat little exercise once you have a decent grasp of the covariant functor hierarchy (covariant is the default one in Haskell, and therefore in other FP langs).
Some hints to aid in your journey, dear readers. Remember, a covariant functor won't be a monad if and only if the functor can proceed through bind without actually containing a value. You need the a to actually feed into a -> m b! Think of sum types! Notice how Maybe a is a monad because Nothing is not allowed to pass through bind. If Nothing was allowed, what would you pass to the a -> m b function? Nothing!
Good stuff all around. Such a refresher to see something braver than "how to make a landing page with react". Keep it up!
Fair points.
I'm curious how you came upon this article. I posted it almost 3 years ago.
EDIT: just checked out the sources you posted for applicatives. Thanks for those. Real eye-openers. Academic literature made them sound almost like some obscure technicality.
Oh whoa I didn't even notice the date. Whoops, sorry for necroing. A friend of mine posted this and cited it as "an interesting dev.to article". I frankly just assumed it was new :P.
That's pretty hilarious though. My friend must've been searching for articles on FP concepts on dev.to and happened to stumble upon your post.
No problem at all. Hey, I learned something new! I was just curious how that happens.
Still didn't get it, sorry
I think that might be my fault; there were a few places where I was unsure about the explanations I chose.
Can you tell me where you got stuck? Maybe I can improve that section.
Otherwise, Batosz Milewski's series (linked at the bottom of the post) takes a lot more time to explain every concept (in particular the video series) so if this post doesn't work for you, you should definitely check them out.
Sorry for my short first reply; I was very frustrated after reading the nth "simple" explanation. My approach would be completely different. What you are doing is describing category theory like a farmer would describe her "functor-monad-object-arrow-a-b-category-unit" land. What I need is a way from "function, method, class, object" land to your land.
First of all: about which objects are you talking when you say "CT deals with objects…"? Is this an abstract term for something not related to OOP objects, or just what I'm used to?
If you are talking about arrows I will think about functions in the classical programming approach. Use examples to explain how the theory is translated into functional programming.
Haskell and Elm are known for their tight connection to functional programming but Javascript is also very powerful and much more common. Adding examples from JS broadens your target audience.
You introduce "Category"; now I really need a real-world example of what you are talking about, otherwise it's just abstract. And what is the "structure of arrows"? And why could that mean "F a ≠ F b"?
Then you go on with a functor from an object, but you just wrote that a functor connects a category to another one. And what is a Unit? Is an Endofunctor the real name of an identity functor?
In the next paragraph you talk about the Maybe/Optional and Cmd type. Never heard of it. Also e.g. in PHP, JS, Python an Array is absolutely a real type as it doesn't have to be typed.
Next I assume a typo (or isn't it?): "iff. we" → "if we". Then the sentence at which I quit: "and produces something isomorphic to identity if identity is provided as input". Well, what?
Sorry, I didn't get it. Maybe I'm just too old and my brain plasticity is below what is needed. But I still don't get the concept.
It looks like we just have very different backgrounds. Your feedback is very useful, I will use it to improve the post.
One of the big problems with learning / teaching FP is that people from IP (imperative programming) have expectations that don't hold for FP. So rather than lacking brain plasticity, you know too much. It is a common problem to which I have probably not given enough attention. To address your specific concerns:
Not the same. Objects in CT are pretty much undefined, nothing more than something for arrows to connect. So you can have a category where CT objects are OOP objects, but you can also have a category where objects are apples, or the number 5, or the belief in a higher power.
In programming we are mostly concerned with the cases where objects are types (FP) or states (IP) and arrows are functions (FP) or commands (IP).
You've got it, arrows are isomorphic to functions ("isomorphic" means they have the same 'shape', they act the same).
The isomorphism doesn't work well for JavaScript because it is neither statically typed nor functional (being functional requires having no side effects, not just passing callbacks around).
This series is about the theory of functional programming, not so much about how to make JS look like Haskell. There are other posts doing that pretty well.
I should have spent more words describing all this. It is all, indeed, very abstract though.
It's a bit much to address in a reply, please give me some time to fix the post.
I mentioned how the isomorphism doesn't really hold up for weakly typed languages. Essentially those languages don't have types, so they also don't have type constructors.
"Monads" in js are more monadinspired.
Nope. "iff." is short for "if and only if".
Expressed in JS: given the identity callback, a function map(callback) should return a callback cbk that acts just like a => a, i.e. cbk(foo) === (a => a)(foo). Two functions are isomorphic if they produce the same result for the same input.
Thank you for spending so much effort on explaining everything much more clearly. I can't say I completely understood CT-land, but it's a lot clearer.
That has been the clearest explanation I've seen so far. It explains just enough category theory without making pages about it, but it's not just about return, bind and fmap either. So thank you! :D
Good job explaining functors and endofunctors. I am currently working my way through Bartosz Milewski's book on category theory.
That is going to take you far beyond this :)