What is a type? What defines a type, what are its instances, what is its relation to other types? There is a whole infinite ocean of types, because they do not really exist, at least not physically, buried deep in memory; no, they are abstract entities, abstract descriptions of classes of abstract entities, and they describe themselves in terms of their context, from the general to the specific. The most basic type that comes to my mind is
data Bool a = False a | True a
Could we syntactically reduce the definition by removing the
a variable? Abstracting it away means forgetting that
Bool is actually applied to some action, something happening in the world. To call
Bool we would have to ask it something, and that something would be the ephemeral a. Removing it leaves
data Bool = False | True
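The idea can be sketched in plain Haskell; MyBool and describe are hypothetical names, prefixed only to avoid clashing with the Prelude's own Bool:

```haskell
-- A minimal sum type: two possibilities and no further context.
-- MyBool and describe are illustrative names, not standard library ones.
data MyBool = MyFalse | MyTrue

-- The only way to use such a type is to ask which possibility happened.
describe :: MyBool -> String
describe MyFalse = "nothing happened"
describe MyTrue  = "something happened"

main :: IO ()
main = putStrLn (describe MyTrue)  -- prints "something happened"
```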
Notice this definition tells us nothing about the relations of
Bool, only that there are two possibilities that may happen: something
False and something
True. We already know what
True means, so this example somewhat removes the pleasure, the surprise of something unknown; by showing us terms we can at least conceptually compute in our minds, we risk not fully realizing the implications, the context of this relation itself, of a
data x = a | b event being described and applied at this place.
So what does
False represent, and what is True? Maybe let us begin by observing the structure of the type itself, the construction of its relations: how are these two terms related, defined within? But aren't we supposed to just flow with types? They are not intrinsically defined things that we open and use; they are like conceptual infinite containers that, like a Borg conduit in transwarp space, can take our thoughts to all kinds of abstract realms, memoizing them along the way into real processes. Functors encapsulate data without defining what that data is, or it could be said they define the most basic relation data can hold: the relation to another set of instances that all the data represent.
I am getting ahead of myself; what I mean to ask is: is Bool a kind of Functor? What if we compare
Maybe? We know that
data Maybe a = Nothing | Just a describes two kinds of relations,
Nothing as in there is nothing, no continuation of the process, or, to limit our response, there is nothing to do, meaning this could be an unknown unknown or a known unknown too. And then the other choice, the other defined relation: that there is something, and that something is just
Just a. This
Just, again, could be an unknown known, such as: give me back what I provided you, just the value I expect. But what about an unknown unknown? Could Just represent another happening, another function being called, which is not already contained in the
Maybe declaration? Maybe
Maybe is just that when observed externally, or observed while being applied to an action: conceptually the most basic observation we can make. Like Descartes' cogito argument of being aware of just the thinker itself, our Maybe could be realized as a set made of False and True, the gap being in seeing both variations:
- a set of Maybe containing the True and False subsets
- a set of Maybe being True, being there, happening, or being False, not being any set at all
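Maybe's two relations can be watched in a tiny sketch; safeHead is a hypothetical helper name, not something from the Prelude:

```haskell
-- Nothing: there is nothing to do, no continuation of the process.
-- Just x: there is something, and it is just the x we find.
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

main :: IO ()
main = do
  print (safeHead [1, 2, 3 :: Int])  -- Just 1
  print (safeHead ([] :: [Int]))     -- Nothing
```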
Now even though we may seem to understand conceptually what
Maybe does by observing the type declaration, we still do not know how it is defined functionally within: what would be the pure expressions that describe, computationally, the flow of types? What is the inner operation that maps into both outcomes, and how is the mapping operation applied? This mapping operation is more than just a partial function transforming a value, an input to some output, or a list of inputs.
map :: (a -> b) -> [a] -> [b]
Now if we move down through the abstraction layer, if we, so to speak, reduce the abstraction and change the frame, we see map defined as:
map :: (a -> b) -> [a] -> [b]
map _ []     = []
map f (x:xs) = f x : map f xs
Wait a minute, here again we can observe
map _ [] = [], meaning if nothing is provided to the mapping function, nothing in this case being represented as an empty list, a nil, [], then nothing is to be done with map: the same empty list will remain. Does it make any difference if we write
map someFunction [] = ? like
map f [] = ? Or even
map (+) [] = ? What about
map someSuperComplexFunction [] = ? Would we still get the empty list back? Like saying: what would one do if they get a million dollars, a list of a million dollars like
[$1, $1, $1, ... $n], but instead they get an empty list,
[]? Then there is nothing we can do with that list, right? We can't buy shit, as they say. The only thing we do have is the empty list, like an empty dream: we only know there is something that was supposed to be there, but it's not.
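This can be checked directly: whichever function we hand to map, only the empty-list equation can match, so the empty list comes straight back. A small sketch, assuming GHC and the Prelude's map:

```haskell
main :: IO ()
main = do
  -- No element ever meets the function; the nil equation fires every time.
  print (map (+ 1) ([] :: [Int]))        -- []
  print (map (* 1000000) ([] :: [Int]))  -- []
  print (map show ([] :: [Int]))         -- []
```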
Now what strikes me is that
map _ [] = [] is like a different but equivalent way of saying
False :: Bool, that
False is a value of the type Bool,
False being like a
Just Nothing returned.
When we compare
map f (x:xs) = f x : map f xs, the first thing to be observed is that both sides imply some action taking place, an event, which may or may not succeed, but which, importantly, will play out in some specified way. Here we see again the recursive definition of what would imperatively be called a loop: an abstract way of mapping some action over the context in place. This time the mapping action is not just mapping contexts into subsets; it is producing a value, a single value which somehow encapsulates the whole list provided to map.
Let us dive into the thing mapped over, the
(x:xs) part of the definition. So to map a thing with some function is to first break it down into, again, two elements, and notice the abstraction happening again: two elements, like our
False | True. We observe the thing, the list of some things, being abstracted into two things. Why just two things? So that we can define the most basic way we could map; like teaching a child, we first add or multiply two numbers, and once we understand that, we can add or multiply a million numbers, right?
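The two-at-a-time breakdown can be watched by unfolding the equations by hand; the comments below rewrite one call of map per step:

```haskell
-- map (+1) (1 : 2 : 3 : [])
--   = (+1) 1 : map (+1) (2 : 3 : [])
--   = 2 : (+1) 2 : map (+1) (3 : [])
--   = 2 : 3 : (+1) 3 : map (+1) []
--   = 2 : 3 : 4 : []
main :: IO ()
main = print (map (+ 1) [1, 2, 3 :: Int])  -- [2,3,4]
```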
So is it enough to just say
map (x:xs)? No, because then our definition of the process becomes imperative, since we are not explicitly stating how to map
(x:xs); too much is being implied. So we say: map the head of the thing, the first known thing, by realizing
f x, and here we explicitly state
f x, meaning
someFunction toTheHead is happening to a single element out of many more. That is also why we do not write
map f x in the body, which would actually be a mistake, because how do we map over a single thing that is not a list?
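A sketch of why that would be a mistake, assuming GHC: map demands a list as its second argument, so handing it a bare element is rejected by the type checker rather than given some improvised meaning:

```haskell
-- This line would not compile: an Int is not a list.
-- bad = map (+ 1) (1 :: Int)

-- Wrapping the single element in a list is the only way through:
main :: IO ()
main = print (map (+ 1) [1 :: Int])  -- [2]
```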
Then we add, or cons (short for construct), that result, the symbol for which is the colon
(:), onto the mapping of the rest of the list, all the remaining objects of the list, by stating
map f xs. Now we have a complex abstract operation defined, and yet there is something so simple at play here that I am almost shocked, and happy!
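Cons itself can be seen in isolation: every list literal is sugar for a chain of (:) applications ending in the empty list, which is exactly the shape the recursive equation rebuilds element by element:

```haskell
main :: IO ()
main = do
  print (1 : [2, 3 :: Int])          -- [1,2,3]
  print (1 : 2 : 3 : ([] :: [Int]))  -- the same list, fully desugared
```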
With pure functional programming there is this underlying principle of abstraction, a principle of composition that builds on knowledge. I feel safe within this pure world, not because I am afraid, but because I feel all cases are being covered while I am never told who the objects are or what the events are. Naturally, once we introduce side effects we will learn even more complex structures, such as the monad, that still build from the same principle as the functor. A Monad is a Functor, after all, but not all Functors are Monads!
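The shared principle can be seen through fmap, the Functor operation, which behaves for Maybe exactly as map does for lists: it transforms what is there and leaves the empty case untouched. A sketch using only the standard Prelude instances:

```haskell
main :: IO ()
main = do
  print (fmap (+ 1) (Just (1 :: Int)))      -- Just 2
  print (fmap (+ 1) (Nothing :: Maybe Int)) -- Nothing
  print (fmap (+ 1) [1, 2, 3 :: Int])       -- [2,3,4]: fmap on lists is map
```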