Menestret Martin

Originally published at geekocephale.com

Anatomy of functional programming


In this series of articles, I will try to gather, as an anatomy atlas, the basic notions of functional programming that I have found myself explaining often lately.

The idea is to have a place to point people to when they need explanations, and to increase my own understanding of these subjects by trying to explain them as well as I can.
I'll focus more on giving the reader an intuition, a feeling about the concepts, than on the perfect, strict correctness of my explanations.

What is functional programming ?

A general definition

To begin our anatomy atlas of functional programming, the first question would be: what is functional programming ?

I would define it that way:

A style of programming which aims to avoid side effects by using, as much as possible, pure functions that manipulate immutable data.

As much as possible, here, means everywhere except, if needed, in your main function.

That way, almost your entire code base is said to be "pure" and has the nice properties we'll see later.

Your main is the only "impure" part, but you have drastically reduced the portion of your code you have to be extra careful with.

The definition we gave for functional programming introduced the notions of side effects, pure functions and immutable data that we are going to explain.

A general purpose of functional programming would be to reduce, as much as possible, the moving parts of your software.

Side effects

A function should have only one effect: the effect of computing its return value.

Any other effects triggered by that function are side effects (logging, printing, inputs and outputs of any kinds, and so on).

Functional programming does not forbid IO or logging, but it encourages you to do it explicitly rather than in secret, hidden inside functions that do not publicly declare it.

Pure functions

Pure functions are somehow a computational analogy of mathematical functions.

Purity rules

Pure functions have to respect a set of simple rules:

  • Determinism: pure functions will always return the same output when given the same inputs. Consequently, they cannot use non-local variables, global mutable states, perform IO, and so on

    • def add(a: Int, b: Int): Int = a + b is deterministic, it will always return the same output value when given the same input values
    • def rand(a: Int): Int = Random.nextInt(a) is not, each call may return different values
    • A function returning Unit should be a huge code smell, as it cannot do anything but side effect (except if it really does nothing at all, but I doubt you would want to do that...)! It is determinism's nemesis!
  • Totality: a pure function from type A => B (A is called the domain and B the co-domain) has to be defined for every value of its domain, here for every value of type A

    • def divide(a: Int, b: Int): Int = a / b is not total, it will crash if b == 0
    • def safeDivide(a: Int, b: Int): Option[Int] = Try(a / b).toOption is total by handling undefined case with None
  • Referential transparency: pure function calls should be replaceable by their results anywhere they are used, without altering the rest of the program

    def launchTheMissiles: Unit = ???
    def formattingMyStuff(s: String): String = {
        launchTheMissiles
        s.toUpperCase
    }

Here we cannot replace any call to formattingMyStuff("hi") by "HI" without altering the rest of the program, because formattingMyStuff performs a side effect, launchTheMissiles. We know that by replacing formattingMyStuff("hi") by "HI", the missiles won't be launched.

With def purelyFormattingMyStuff(s: String): String = s.toUpperCase however, any call to it could be replaced directly by the uppercased argument, and we know that the program would work the same way it did before.

To go even further, a referentially transparent function should be replaceable by a lookup table associating its inputs directly to its outputs.

Since def boolToString(b: Boolean): String = if (b) "That's true !" else "That's wrong..." is referentially transparent, it is replaceable by the following lookup table without altering the rest of the program:

Input | Output
true  | "That's true !"
false | "That's wrong..."
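To make that concrete, here is a minimal sketch (the boolToStringTable name is mine) showing that the function and the lookup table are interchangeable:

```scala
// The referentially transparent function from above
def boolToString(b: Boolean): String =
  if (b) "That's true !" else "That's wrong..."

// An equivalent lookup table: same inputs, same outputs
val boolToStringTable: Map[Boolean, String] =
  Map(true -> "That's true !", false -> "That's wrong...")

// boolToString(b) and boolToStringTable(b) can be swapped
// anywhere in the program without changing its behavior
```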

Now you can tell how functions that respect those rules limit the moving parts of your software.

They do what their signatures say they do, and they only do that. They don't move other stuff around.

How to do "real world" stuff then ?

So I'm telling you that a function's one and only effect should be to allocate some memory and use some processor power to compute its return value? And nothing else?

Without the ability to do IO, to encode randomness or to fail, it might be pretty hard to do any real world work...

Of course, functional programming lets you do all that, but it asks you to do it explicitly.

Here are some examples:

  • A function returning an Option[Int] is in fact just returning an Int, but it explicitly adds to that Int the effect of being able to fail, by wrapping it in an Option
  • A function returning an Either[String, Int] is in fact just returning an Int, but it explicitly adds the effect of potentially returning a String instead (often used to describe a failure with its reason), by wrapping it in an Either[String, Int]
  • A Task[Int] or IO[Int] (these are more or less the same thing) returns a computation that is not run yet but that will produce an Int, or fail, when executed; this is often used to explicitly represent a value obtained by doing IO. It is the description of the things to do to produce that Int, but nothing has started yet.
  • A lot of other effects can be encoded that way but going in details is some other blog post material (you can find a lot of related resources here)
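A quick sketch of the first two bullets (parseInt and safeDivide are my own hypothetical examples):

```scala
import scala.util.Try

// Option makes the possibility of failure explicit in the signature
def parseInt(s: String): Option[Int] = Try(s.toInt).toOption

// Either also carries the reason for the failure
def safeDivide(a: Int, b: Int): Either[String, Int] =
  if (b == 0) Left("Division by zero") else Right(a / b)
```

Callers are forced by the types to handle the failure case; nothing is hidden.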

Data philosophy

Data / behavior relationship

OOP and FP have two different approaches when it comes to data/behavior relationship:

  • Object oriented programming often combines data and behavior by mixing them into classes that:
    • Store data as an internal state
    • Expose methods that act on it and may mutate it
case class Player(nickname: String, var level: Int) {
    def levelUp(): Unit          = { level = level + 1 }
    def sayHi(): String          = s"Hi, I'm player $nickname, I'm lvl $level !"
}
  • Functional programming aims to completely separate data from behavior by:
    • Defining, on one side, types (ADTs) that expose no behavior and only hold data
    • Defining, on the other side, functions that take values of some of these types as inputs, act on them, and output values of some of these types (leaving the input values unchanged)
case class Player(nickname: String, level: Int)

object PlayerOperations {
    def levelUp(p: Player): Player = p.copy(level = p.level + 1)
    def sayHi(p: Player): String   = s"Hi, I'm player ${p.nickname}, I'm lvl ${p.level} !"
}
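A quick usage sketch, reusing the functional Player above, shows that the input value is left unchanged:

```scala
case class Player(nickname: String, level: Int)

def levelUp(p: Player): Player = p.copy(level = p.level + 1)

val p1 = Player("bob", 1)
val p2 = levelUp(p1)
// p1 still has level 1; p2 is a brand new value with level 2
```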

Expression problem

The expression problem is about how a language behaves when it comes to adding, to an existing code base:

  • New cases to existing types
  • New behaviors (functions) over existing types

And whether it manages to do that without having to modify the existing code.

The idea behind the expression problem is to compare how languages and programming paradigms tackle these problems.

OOP and FP have both a different answer to that problem.

Object oriented programming paradigm

  • ๐Ÿ‘ : Adding new cases to an existing data type
    • A new class extending the existing class / interface
  • ๐Ÿ‘Ž : Adding a new behavior over an existing data type
    • A new method on the appropriate super class / interface which impacts every single sub classes

Base case:

trait MyType { def behavior: String }

final case class A() extends MyType { override def behavior: String = "I'm A" }
final case class B() extends MyType { override def behavior: String = "I'm B" }

Adding a new case to an existing data type:

final case class C() extends MyType { override def behavior: String = "I'm C" }

Adding a new behavior over an existing data type:

trait MyType { 
    def behavior: String
    def newBehavior: String
}

Now you have to go back to every single class extending MyType to implement newBehavior.

Functional programming paradigm

  • 👎 : Adding new cases to an existing data type
    • A new case in your existing sum type (cf. Anatomy of an algebra), which impacts every function over that data type (you'll have to handle that new case)
  • 👍 : Adding a new behavior over an existing data type
    • A new function, nothing more

Base case:

sealed trait MyType
final case class A() extends MyType
final case class B() extends MyType

def behavior(m: MyType): String = m match {
    case A() ⇒ "I'm A"
    case B() ⇒ "I'm B"
}

Adding a new case to an existing data type:

final case class C() extends MyType

Now you have to go back to every function over MyType to pattern match on the new case.

Adding a new behavior over an existing data type:

def newBehavior(m: MyType): String = m match {
    case A() ⇒ ???
    case B() ⇒ ???
}

Immutable Data

That one is simple: a value is said to be immutable if, once evaluated, there is no way to change it.

  • That is mutability and, thus, should be avoided in functional programming:
var meaningOfLife = 41
meaningOfLife = meaningOfLife + 1
  • That piece of data is immutable:
val meaningOfLife = 42
meaningOfLife = meaningOfLife + 0
//<console>:12: error: reassignment to val
//       meaningOfLife = meaningOfLife + 0

If you really have to use mutability, say for performance reasons, I encourage you to do it with caution, by isolating and encapsulating the mutation in an immutable construct such as:

val magic = {
    var mutableMagic = 0
    mutableMagic = 42
    mutableMagic
}

That way you know that mutability won't spread and your moving part is contained in a non moving one.
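For instance, here is a sketch of my own: an imperative computation whose mutation cannot leak outside of the val:

```scala
// The vars live only inside this block; from the outside,
// sumTo100 is just an immutable Int
val sumTo100: Int = {
  var acc = 0
  var i   = 1
  while (i <= 100) {
    acc += i
    i += 1
  }
  acc
}
// sumTo100 == 5050
```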

Benefits of FP

For now, what we saw about FP was more or less only constraints.

But functional programming shouldn't only be seen as a set of constraints; as Runar Bjarnason would say, constraints buy you a lot of freedom.

That may not seem obvious but I'll try to explain why.

Equational reasoning

Pure functions, while being restrictive, allow you to reason about your program in a way that would not be possible otherwise; this is called equational reasoning.

It means that, once you figure out that f(x) = something, then everywhere f(x) appears, you can simply replace it by something, and keep reducing your problem's complexity that way as far as you need.

Equational reasoning allows you to follow the logic of your program by replacing your function calls by their results, exactly as you would do when trying to solve a mathematical equation:

  • If you had these two equations:
    2x - 3y  = 12
    y + 2x   = 180
  • Then you could isolate x in the first one:
    2x - 3y = 12
    2x      = 12 + 3y
    x       = (12 + 3y) / 2
  • And then simply replace x in the rest of your reasoning by its value:
    y + 2x                  = 180
    y + 2 * (12 + 3y) / 2   = 180
    y + 12 + 3y             = 180
    4y                      = 168
    y                       = 42

That's a powerful way to reason about complex problems.
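The same substitution works on code; a tiny sketch (double is a made-up example):

```scala
def double(x: Int): Int = x * 2

// Since double is pure, double(3) can be replaced by 6 anywhere:
val a = double(3) + double(3)
val b = 6 + 6
// a and b are the same program: a == b == 12
```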

Without pure functions, you would have to analyze every single function call, check whether those functions do anything more than returning their results, keep a mental track of the other things they do, and keep analyzing while trying not to forget what you just witnessed.

That's a good example about how constraints on one side give you a lot of freedom on other sides.

Predictable data

As your data is immutable, it is easier to keep track of its state, since its state doesn't change.
The only way to create data is by using its constructor (in a broad sense) and that, also, reduces a lot your software's moving parts.

You know for sure that, when using a piece of data, it has not been modified in the meantime by something else, you can rely on it.

If you isolate the places where changes occur by severely restricting mutation, you create a much smaller space for errors to occur and have fewer places to test.
(Neal Ford)

Moreover, it gives you thread safety by design: your data will never be in an unknown or undesirable state, which is huge in our more and more concurrent applications.

Playing with Lego

In addition to equational reasoning and immutability, I'll try to show you what else FP brings to your code with an analogy.

Do you remember how it was to play with Lego ? Well FP lets you play with functions the way you did with Lego pieces.

You had plenty of pieces that do or don't fit together, each piece being solid, immutable, and doing one single, simple thing.

Just as pure functions do.

It gives you:

  • Trivial refactoring

    • You know that you can replace this red brick by that blue one, if it respects the same contract, without affecting the rest of your construct at all for obscure reasons (side effects, mutability, ...)
  • Composability

    • You can build a construct on one side, another on the other side and plug them together to make a new, more complex, construct safely that behaves as you would expect
    • That's exactly what you'll do with your pure functions, you know they take inputs, compute and return outputs and nothing else, thus you can safely compose them and build up more and more complexity fearlessly
  • Easier testability

    • You can easily test if a piece does what it is expected to since it does nothing more than being, well, a simple, independent, side effect free, piece !
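As an illustration of composability, a small sketch (these three functions are my own examples) plugging pure functions together with andThen:

```scala
val trim:    String => String = _.trim
val upper:   String => String = _.toUpperCase
val exclaim: String => String = _ + " !"

// Each brick does one simple thing; composing them yields
// a new, more complex construct that behaves predictably
val shout: String => String = trim andThen upper andThen exclaim

shout("  hello  ") // "HELLO !"
```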

Crystallizing design patterns

To end with FP benefits, there is this curious thing called the Curry-Howard correspondence, which is a direct analogy between mathematical concepts and computational calculus (which is what we, programmers, do).

This correspondence means that a lot of useful stuff discovered and proven for decades in Math can then be transposed to programming, opening a way for a lot of extremely robust constructs for free.

In OOP, design patterns are used a lot and could be defined as idiomatic ways to solve given problems in specific contexts, but their existence won't save you from having to apply and write them again and again each time you encounter the problems they solve.

Functional programming constructs, some directly coming from category theory (mathematics), solve directly what you would have tried to solve with design patterns.

The classical functional programming toolbox gives you constructs to handle, almost for free:

  • Global states
  • Concurrency
  • Computation parallelization
  • Computation failure
  • Validation accumulation
  • Asynchronism
  • Sequentiality
  • ...

You can find here a list of how FP constructs correspond to classical OOP design patterns: Lambda, the ultimate pattern factory.

Pretty convenient !

More material

If you want to keep diving deeper, some interesting stuff can be found on my FP resources list.

Conclusion

To sum up, we saw:

  • What functional programming actually is: manipulating immutable data with pure functions
  • What side effects, pure functions and immutability are
  • The liberties that FP constraints give you (equational reasoning, predictability, composability, easier refactoring, easier testability, etc.)
  • That there exists a bridge between the world of mathematical logic and computation, and how that bridge is used to give us powerful constructs
  • How some of these constructs can address real world problems you would otherwise have solved with cumbersome, hand crafted design patterns

I'll try to keep that blog post updated.
If there are any additions, imprecision or mistakes that I should correct or if you need more explanations, feel free to contact me on Twitter or by mail !
