What is the benefit of Functional Programming?

Kasey Speakman · Updated · 6 min read

Posted at the suggestion of Pablo Rivera.

I've seen some posts lately trying to come to grips with or warn against functional programming (FP) patterns in F#. So, I thought I would take a step back and explain how to derive tangible benefits from FP, what those benefits are, and how functional patterns relate to that.

By functional patterns, I mean things like partial application, currying, immutability, union types (e.g. Option, Result), record types, etc.

The most important principle to follow to reap benefits from FP is to write pure functions. "Wait," I hear you say, "you can't make a useful app with only pure functions." And right you are. At some point, you have to perform side effects like asking for the current time or saving something to a database. Part of the learning process for FP (in F#) is getting the sense for where to do side effects. Then the continuing journey from there is how to move side effects as far out to the edges of your program as possible to maximize the amount of pure functions on the inside. The goal being that every significant decision is made by pure functions.
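As a minimal F# sketch of this idea (the names here are my own illustration, not from the article), the decision logic stays pure while the side effect of reading the clock is pushed into a thin wrapper at the edge:

```fsharp
// Pure: the output depends only on the inputs, so it is trivially testable.
let addDays (days: int) (date: System.DateTime) =
    date.AddDays(float days)

// Impure: reads the clock, so the result changes between calls.
// It contains no decision logic of its own -- it just supplies an input.
let dueDateFromNow (days: int) =
    addDays days System.DateTime.UtcNow
```

Because `addDays` makes the actual decision, tests never need to touch the clock.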

"What's so great about pure functions?" A commonly touted benefit is testability, which comes almost by definition. But in my experience it goes much further than that. Programs which are largely pure functions tend to require less time to maintain. To put it another way, pure functions allow me to fearlessly refactor them. Because of their deterministic nature combined with type safety, the compiler can do most of the hard work in making sure I haven't made a gross error. Pure functions certainly don't prevent logic errors (a common one for me: incorrectly negating a boolean). But they do prevent a lot of the hard-to-track-down and tedious problems I've dealt with in the past.

"I hear you, but I am skeptical." Yes, and you should be. First, there's a trade-off. And second, I wouldn't expect you to take my word for it. Probably the largest trade-off is with immutable data structures. Instead of mutating data (e.g. updating a value in an input array, which introduces a side effect to the caller), you return a new copy of the data with different values. As you can imagine, copying data costs more (cpu/memory) than updating in place. Generally, I find there's not a significant enough difference to worry about it. But in certain cases performance can dominate other concerns. One benefit of F# is that you can easily switch to doing mutation if you need to.
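A small F# sketch of that trade-off (names are illustrative): immutable updates copy the record, while `mutable` gives you an explicit escape hatch when performance matters.

```fsharp
type Account = { Owner: string; Balance: decimal }

// Immutable update: returns a new record; the original is untouched.
let deposit amount account =
    { account with Balance = account.Balance + amount }

let a = { Owner = "Ada"; Balance = 100m }
let b = deposit 25m a   // b.Balance = 125m, a.Balance still 100m

// When performance dominates, F# lets you opt into mutation explicitly:
let mutable total = 0m
for x in [ 1m; 2m; 3m ] do
    total <- total + x
```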

As for not taking my word for the maintenance/refactoring benefits, I can tell you how I discovered this and suggest you try it. I have to admit, I did NOT discover these benefits in depth in F#. Since F# is multi-paradigm, including OO, it's easy to skip or only go part-way toward learning how to model something with pure functions. It's also easy to get confused about how to do it with so many avenues available. My path of discovery must be credited to Elm -- a relatively young compile-to-JavaScript language/platform and the inspiration for Redux. Elm is a functional language in which you can only use immutable types and pure functions (although you can call out to JavaScript in a controlled way). At first this seems restrictive and a bit mind-bending.

Because Elm's restrictions were unfamiliar, the first code I wrote in Elm was not the greatest, even after coding in F# for several years. (To be fair most of my F# code is for back-end, including performing side-effects.) My Elm code worked, but it just didn't fit quite right or it seemed fiddly. But as I discovered different ways to model things and tried them, I found Elm was actually very amenable to refactors, even sweeping epic ones. As I learned new ways to model in an environment of pure functions, the constraints placed on my code by Elm actually allowed my code to easily grow with me. Large refactors still take a bit of work, but my experience is that it's no longer an impossible-to-plan-for amount of "risky" work. It's just normal work. And I'm not the only one with that experience.

Eventually, I came to understand that this was not because of some white magic that Evan Czaplicki wove into Elm, but because of the insightful design choice that all user code consists of pure functions. A pure function is a contract which a compiler can verify. The contract doesn't specify everything about the function, but it can help you to resolve the majority of the really boring problems at compile time.

So here is where functional patterns come into the picture. They are tactics which help ensure that the compiler can verify the contract on a pure function. I'll run through some examples.

  • On .NET at least, null has no type and so the compiler cannot verify the contract is fulfilled if you provide a null value. So there's F#'s Option (Maybe in Elm), which not only preserves type, but also lets you know you should handle the "no value" case. Now the contract can be verified again.

  • Throwing exceptions invalidates a pure function's contract, because exceptions are hidden return paths that are not declared in the signature. Enter Result***, which provides a way to return either a success value or an error value explicitly in the return type, and thus the contract can be verified again.

  • Mutation (modifying data in-place) cannot be observed or accounted-for in a function declaration. It's also the source of some really hard-to-find bugs. So enter records which are immutable by default. To "change" data, you have to make a copy of the record with different values. Now the contract can be verified again.
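The three patterns above can be sketched in a few lines of F# (the function names are my own illustration):

```fsharp
// Option makes the "no value" case explicit in the signature.
let tryDivide (x: int) (y: int) : int option =
    if y = 0 then None else Some (x / y)

// Result declares the failure path instead of throwing.
let parseAge (s: string) : Result<int, string> =
    match System.Int32.TryParse s with
    | true, n when n >= 0 -> Ok n
    | _ -> Error (sprintf "not a valid age: %s" s)

// Records are immutable by default; "changing" one means copying it.
type Person = { Name: string; Age: int }
let birthday p = { p with Age = p.Age + 1 }
```

In each case the possible outcomes are visible in the type, so the compiler can check that callers handle them.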

Since the basic primitive to provide a "contract" is a function, partial application and currying follow naturally as conveniences. Types like Option and Result are possible (in a non-awkward way) because of union types, aka algebraic data types. And you can make your own union types. They are great for modeling mutually exclusive cases with their own sets of data: for example, a PaymentType is CreditCard with card data OR Check with a check number. If you model this with a normal class, you'd likely have nullable properties for both cases. That gives the potential for one, both, or none to be present, which requires a little more work on your part to ensure the data is valid. With union types, the compiler can do some of this work for you.
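The PaymentType example might look like this in F# (the specific fields are my own illustration):

```fsharp
// Each case carries exactly the data that case needs -- no nullable
// properties for the "other" case.
type PaymentType =
    | CreditCard of cardNumber: string * expiry: string
    | Check of checkNumber: int

let describe payment =
    match payment with
    | CreditCard (num, _) -> sprintf "card ending %s" (num.Substring(num.Length - 4))
    | Check n -> sprintf "check #%d" n
// If a new case is added later, the compiler warns about every
// incomplete match, pointing you to the code that needs updating.
```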

I hope you can see that functional patterns primarily exist to support writing pure functions. You can't write a useful program entirely out of pure functions, so the goal should be to maximize their usage in order to maximize refactorability. Learning how to do this is a process. Don't fret if you need to use an impure solution to get by until you discover another way. Also realize that writing pure functions in F# requires a little discipline, because the language doesn't provide a way to enforce purity.

The basic guidelines I follow to produce pure functions are 1) do not mutate input values and 2) the result is deterministic... that is: given the same inputs, the output is always the same. I also tend to establish an outer boundary of my programs where side effects (database ops, HTTP calls, current time, etc.) are allowed. From this boundary I can call side-effect functions and pure functions. But pure functions should not call side-effect functions. So far this has worked out really well in my F# code. But I'm sure I have more to learn. :)
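That boundary can be sketched as follows (a minimal illustration, assuming a made-up greeting program):

```fsharp
// Pure core: all the decisions, no side effects. The current time is
// passed in as a value rather than read from the clock.
let decideGreeting (now: System.DateTime) (name: string) =
    if now.Hour < 12 then sprintf "Good morning, %s" name
    else sprintf "Good day, %s" name

// Impure boundary: gathers inputs, calls the pure core, performs outputs.
// The boundary calls pure functions; pure functions never call back out.
let run () =
    let now = System.DateTime.Now          // side effect: current time
    let message = decideGreeting now "Kasey"
    printfn "%s" message                   // side effect: console output
```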

*** Use Result judiciously. Since exceptions are foundational to .NET, practically all common external libraries throw exceptions. You'll create a lot of work for yourself trying to convert every interaction with an external library into a Result. My rule of thumb is to just let my code throw when I have to execute multiple operations against an external library which will throw. Higher up in the program, I'll convert an exception from my code to a Result error case. I also use Result for business logic failures like validation errors.
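A minimal sketch of that rule of thumb, assuming a hypothetical throwing `fetch` dependency: the inner code throws freely, and a single try/with higher up converts the exception into a Result.

```fsharp
// Inner code uses the throwing dependency directly...
let loadUser (fetch: int -> string) (id: int) =
    let raw = fetch id          // may throw, like most .NET library calls
    raw.ToUpperInvariant()

// ...and one layer up, a single try/with converts any exception
// into the Result error case.
let tryLoadUser fetch id : Result<string, string> =
    try Ok (loadUser fetch id)
    with ex -> Error ex.Message
```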




Throwing an exception violates the contract of a function.
Take the function Number -> Number. It takes a number as input and returns a number.
If the function throws an exception it will no longer return a number and will therefore violate its contract.
That is why you use complex types for error handling in functional languages:
Number -> Either Err Number
This function now returns an Either type that can either be an error or a Number. Notice that we can now handle exceptions, but we still do not "throw".
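In F#, the same idea is spelled with the built-in Result type (an illustrative sketch, not from the comment):

```fsharp
// Number -> Either Err Number, spelled with F#'s Result type.
let safeSqrt (x: float) : Result<float, string> =
    if x < 0.0 then Error "negative input"
    else Ok (sqrt x)
// The failure path is part of the declared return type; nothing is thrown.
```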


Exceptions in no way violate the contract of a function, they are merely part of the contract. There is no fundamental reason a pure function could not throw an exception.

I've covered this in my article: The exceptional myth of non-local flow


In functional languages, pure functions depend on the function signature to be a declared contract. An exception is not declared in the signature. Some functional languages do not even have exceptions. With the examples in your article being in C, I'm not sure that our points of view are looking at the same set of circumstances. If we were talking about C, I might be swayed to your side.


I'm saying that exceptions in general don't prevent something from being a pure function. I don't believe it makes a significant difference whether the error handling is explicit or implicit -- it can be considered part of the contract either way.

For example, in my language Leaf, errors are an implicit part of the signature, they can however be turned off.

We will have to agree to disagree then. I think any error handling strategy could work in its own set of conditions. But to say that exceptions are the best way to handle errors across the board, I can't agree with.

In web environments you usually have one or two options to manage errors: retrying IO or returning an HTTP status code 500. There's a user on the other side waiting for a reply, and in the most common request-handling model (thread-per-request) there's a thread bound to the current request waiting to be released. So you can either handle the error flow as a special flow in your business logic (like Try in Scala), or you can throw an exception, set up a catch-all handler, and return a friendly error code so the user can retry the operation.

I really don't support error flows in web environments; they usually open additional branches in the execution that make it harder to debug and detect errors. The easiest way I know to debug and fix errors is to throw exceptions close to the place the error occurs.


I second that functional is better for maintainability. By the way, nowadays C# can be as functional as F# or Scala, for example. All the necessary language features are available, which makes the switch much easier for a dev (team), allowing them to focus on methodologies and mindset instead of syntax and interop.


You can certainly write pure functions in C#. But it can be a bit awkward. Anonymous functions have overhead to set up, often requiring a type declaration in the form of Func&lt;type1, type2, returntype&gt;. Partial application becomes more verbose. C# makes it difficult to avoid nulls and thrown exceptions. (To be fair, .NET itself makes the latter hard to avoid in F# code too.) C# lacks the primary means that F# uses to avoid these: union types. (Union types are also a great modeling tool for business logic.) The C# alternative to this could be subclassing or marker interfaces, which also work well with the new pattern matching feature. However, it's quite verbose to set up compared to union types. Avoiding mutation is really tough to ensure in C#, and foreign to most of us coming from OO. It requires a high level of discipline (and training for new team members).

I'd say that you can do pure functions in C#, but it's not what is made easy there. And under deadline, I find that I will often go with the path of least resistance. If the goal is to write pure functions, F# makes that easier. F# is not perfect in that regard either... it's still pretty easy to introduce side effects like getting current time. I wish there was a way to mark modules "pure" so they would generate a compiler warning if a side effect was introduced. (I'm aware of Delegate.Sandbox, but I don't really want to run unoptimized code.) But in all, F# gets you a long way towards making pure functions easier to write.


"And under deadline, I find that I will often go with the path of least resistance"

Yes, that's the point. The new C# features are all going in a functional direction, but it's still too easy to either skip or miss core elements of FP. But it's still interesting to see how virtually all languages currently add functional elements (or new languages start functional from the beginning, like Elm), so hope for FP becoming more mainstream is still there.

Absolutely. As several recent languages show, there's no reason why FP can't exist alongside OO (and procedural) in the same language. I'd like to believe that FP becomes the dominant paradigm before long. I just hope people don't fall into the same trap I did and get discouraged with FP. The trap was that I focused on using match and Maybe and partial application, and all the other mechanics of FP. But disappointingly, I found they didn't provide much benefit, because I wasn't striving to make pure functions with them. Nobody really explained that to me, so I thought I would take a stab at it with this post. :) Best wishes!