The Yin and Yang of Functional Programming
I, for one, am very excited about functional programming and Haskell. Normally you will see me yapping on social media about the beauty of FP, and how it's the best thing since sliced bread.
While functional programming offers many benefits, it also comes with its own set of challenges, much like anything else in life. Interestingly, the very features that are touted as advantages—immutability, ease of reasoning, and conciseness—can also become sources of difficulty. Thus, the dual nature of functional programming, like Yin and Yang, is revealed.
What is functional programming, in a nutshell?
- Functional programming is a method of program construction that emphasises functions and their application, rather than commands and their execution
- Functional programming uses simple mathematical notation that allows problems to be described clearly and concisely
- Functional programming has a simple mathematical basis that supports equational reasoning about the properties of programs.
(Source: Thinking Functionally with Haskell, Richard Bird, Chapter 1, page 1.)
I will not get into the weeds of each bullet point, but broadly they say that FP is based on simple, sound mathematical principles that make reasoning about programs easier.
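As a small taste of equational reasoning, consider the map fusion law, map f . map g = map (f . g), which can be proved by induction on lists. Here is a quick sanity check of one instance of it (a minimal sketch, not from the book):

```haskell
-- Equational reasoning in miniature: the law
--   map f . map g = map (f . g)
-- holds for all f and g; here we check a single instance of it.
main :: IO ()
main = do
  let xs = [1, 2, 3] :: [Integer]
  print (map (* 2) (map (+ 1) xs))   -- [4,6,8]
  print (map ((* 2) . (+ 1)) xs)     -- [4,6,8]
```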
I will show two cases where that is not necessarily true.
The Yin and Yang
Conciseness
Let's take the following program over three natural numbers:
[1,2,3].map(x => x+1)
// [2,3,4]
Many people might think that this code translates to:
const arr = [1, 2, 3];
const result = [];
for (let i = 0; i < arr.length; i++) {
result.push(arr[i] + 1);
}
result
// [2,3,4]
And they would be missing half the story, as this is only a partial picture of what map really does.
Great, we took six lines and turned them into one. But is fewer lines really what we are solving for?
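As an aside, that hidden loop has a direct recursive rendering in Haskell (myMap is a made-up name for illustration):

```haskell
-- map written as explicit recursion: this is the "loop" hiding
-- inside the one-liner. (myMap is a made-up name for illustration.)
myMap :: (a -> b) -> [a] -> [b]
myMap _ []       = []
myMap f (x : xs) = f x : myMap f xs

main :: IO ()
main = print (myMap (+ 1) [1, 2, 3 :: Integer])
-- prints [2,3,4]
```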
The conciseness of [1,2,3].map(x => x+1) is not about fewer characters; rather, the one-liner hides several mathematical principles that can send us down a deep rabbit hole. (Please be aware: I will be using scary words for a while.)
What this code is saying is that we are mapping over a data structure (natural numbers, which need to be encoded in the language) that has the properties of a Functor, and probably a Monad. Under the hood, we would have something like this in Haskell, hidden by the standard library:
-- We NEED to encode the definition of numbers into the lang
data Nat = Zero | Succ Nat

-- equality type class: we need this in order to compare numbers
instance Eq Nat where
  Zero   == Zero   = True
  Succ m == Succ n = m == n
  _      == _      = False

-- we need this in order to print results
instance Show Nat where
  show Zero     = "Zero"
  show (Succ n) = "Succ (" ++ show n ++ ")"

-- provide binary functions: we need these to tell the computer how to operate with numbers
instance Num Nat where
  Zero   + n = n
  Succ m + n = Succ (m + n)
  Zero   * _ = Zero
  Succ m * n = n + m * n
  abs n      = n
  ...

-- and finally, after all of that, we can map over our natural numbers
Now Haskell hides all of this from you by providing built-in types such as Integer together with their instances of Num, Eq, Semigroup, Monoid, Monad... and so on.
But there you go: it is concise mathematically speaking, yet in the end you are writing more lines of code.
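And indeed, with the standard library's instances already in place, the whole program really is a one-liner again:

```haskell
-- With GHC's built-in Integer and its stock Num, Eq and Show
-- instances, none of the machinery above has to be written by hand.
main :: IO ()
main = print (map (+ 1) [1, 2, 3 :: Integer])
-- prints [2,3,4]
```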
Immutability
Probably the focal point of any OOP vs FP debate. Many FP elitists will look with disgust at any code that reassigns variables, arguing that mutable state breaks equational reasoning about programs, while OOP pragmatists point to the performance cost of immutability. Both sides are right. I'll start by showing a beautiful canonical example of an FP function, where laziness, recursion and conciseness shine.
-- Fibonacci
fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)
At a glance, we have easily described how to compute Fibonacci numbers, but it is a terribly inefficient function: it takes exponential time to evaluate. Without getting into the details, we can improve its running time by using the tupling technique (Chapter 7.6, Thinking Functionally with Haskell, Bird):
fib :: Int -> Int
fib n = fst (fib2 n)

fib2 :: Int -> (Int, Int)
fib2 0 = (0, 1)
fib2 n = (b, a + b) where (a, b) = fib2 (n - 1)
Evaluating fib now takes linear time, but the space involved is not constant (even ignoring the fact that arbitrarily large integers cannot be stored in constant space).
In order to constrain the space of our solution we will introduce the ST monad and the strict application operator ($!) (see Chapter 10.4, Thinking Functionally with Haskell, Bird), like so:
fibST :: Int -> ST s Integer
fibST n = do { a <- newSTRef 0;
               b <- newSTRef 1;
               repeatFor n
                 (do { x <- readSTRef a;
                       y <- readSTRef b;
                       writeSTRef a y;
                       writeSTRef b $! (x + y) });
               readSTRef a }
-- we need however to run the ST monad to get a proper value
runST :: (forall s . ST s a) -> a
-- Then we can finally have an efficient Fibonacci function that looks like
fib :: Int -> Integer
fib n = runST (fibST n)
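Note that repeatFor is a helper defined in Bird's book rather than in base. A self-contained version, substituting replicateM_ from Control.Monad (my assumption of a reasonable stand-in), might look like:

```haskell
import Control.Monad (replicateM_)
import Control.Monad.ST (ST, runST)
import Data.STRef (newSTRef, readSTRef, writeSTRef)

-- Same algorithm as above, with base's replicateM_ standing in
-- for Bird's repeatFor: run the update step n times, then read a.
fibST :: Int -> ST s Integer
fibST n = do
  a <- newSTRef 0
  b <- newSTRef 1
  replicateM_ n $ do
    x <- readSTRef a
    y <- readSTRef b
    writeSTRef a y
    writeSTRef b $! (x + y)
  readSTRef a

fib :: Int -> Integer
fib n = runST (fibST n)

main :: IO ()
main = print (map fib [0 .. 9])
-- [0,1,1,2,3,5,8,13,21,34]
```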
I do not know about you, but I find this significantly harder to digest.
On the other hand, we can make a one-to-one translation of the Haskell code above into Python:
def fib(n):
    a, b = 0, 1
    for i in range(n):
        a, b = b, a + b
    return a
What this tells us is that mutability sometimes makes our code not only more efficient but also easier to understand!
Summary
As much as I love functional programming and using maths, going all-in on mathematical principles can make easy code hard to reason about, especially when dealing with performance.