
Why I don't believe in pure functional programming anymore

Ryan Westlund · Originally published at yujiri.xyz

My first contact with pure functional programming was Haskell. The idea sounded bizarre, but fascinating. I had faith that once I'd learned Haskell, I'd understand how to use things like monads to replace the uses of state and side effects.

But that wasn't what happened. After reaching an intermediate level in Haskell, what I learned was that although the functionality of state and side effects could be replicated, it was only in the sense that anything can be replicated in assembly: possible, but painful. Even with all the elaborate abstractions, state would never be easy. It was a huge weakness of functional programming that would plague almost any sizable project.

And I'm not blaming the paradigm for the flaws of a language. Most of my gripes with Haskell aren't the fault of pure functional programming (Idris is a remake that solves most of them), but this one is: state becomes unreasonably difficult to represent. Not 'difficult' in the sense that it stayed hard to figure out once I understood the concepts, but in the sense that making anything impure was a ton of work. Making a function deep in the logic write a log line or read an environment variable required everything below it in the stack to be refactored. Using multiple monads together required monad transformers, which meant comically convoluted type signatures (which you would alias) and a sprinkling of glue functions like liftIO and runWriterT.

Are there good things about pure functional programming? I'm not sure. You probably expected me to say yes, because we're always expected to hedge like that when we make extraordinary claims. But I'm not sure because I've seen how every other good idea in Haskell can be had without banning side effects. The main demonstration is Rust.

Rust was influenced by the ML family (the same family that Haskell comes from). It has algebraic data types and a constraint system for type parameters (traits). That lets it implement map, filter, comprehensions (filter_map), zip, monadic short-circuiting for the error-handling sum type (Result, via the ? operator), and other hallmarks of functional programming, with as much type safety and code reuse as any functional language can offer.
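
To make that concrete, here's a minimal sketch using only the standard library (the function name is made up): map over an iterator, collect into a Result so that ? short-circuits on the first error much like a monadic bind, then filter the survivors.

fn parse_evens(raw: &[&str]) -> Result<Vec<i32>, std::num::ParseIntError> {
    // map + collect into a Result: `?` short-circuits on the first parse error
    let nums: Vec<i32> = raw.iter().map(|s| s.parse()).collect::<Result<_, _>>()?;
    // filter (or filter_map, zip, ...) composes like any other pipeline step
    Ok(nums.into_iter().filter(|n| n % 2 == 0).collect())
}

// parse_evens(&["1", "2", "3"]) == Ok(vec![2])
// parse_evens(&["1", "x"]) returns the ParseIntError for "x"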

But Rust is an imperative language. And the nail in the coffin is this: everything is immutable by default, so Rust still makes mutation explicit. And I think that provides most, maybe all, of the benefits of pure functional programming.
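
A tiny sketch of what that looks like (the names here are hypothetical): bindings are read-only unless marked mut, and a callee can only mutate what its caller explicitly lends out as &mut.

fn main() {
    let xs = vec![1, 2, 3];
    // xs.push(4);           // compile error: `xs` was not declared `mut`

    let mut ys = vec![1, 2, 3];
    ys.push(4);              // mutation is opt-in and visible at the binding

    grow(&mut ys);           // callers hand out `&mut` explicitly
    println!("{:?} {:?}", xs, ys);
}

fn grow(v: &mut Vec<i32>) {
    v.push(5);
}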

What is Rust missing compared to Haskell and Idris?

  • Read-only state and side effects that don't mutate program state can still be implicit. But the story above convinced me that this is a good thing. There's a mutual exclusion here: either the language lets you do these things implicitly, or it makes them explicit, which can only be done by banning implicit effects altogether. So even if seeing these things in a type signature is nice sometimes, there's no way I'd choose a language that does it, all other things being equal.

  • You can't write code polymorphic over monads. But... when is this ever actually useful? This ability doesn't map to any particular idea that can be expressed outside of the language; it's just a type system hack. I'm not sure you ever need it if you're not working in a language that requires such hacks to get things that just come naturally in imperative languages.

And that's why I don't believe in pure functional programming anymore.

Discussion

 

Thanks for sharing your experience, Ryan.

This is my take on FP (Functional Programming)...

I think it's important to separate "pure" (or "theoretical" or "academic") FP from "practical" FP.

Practical FP you can do in pretty much any language.
A lot of it involves using only a minimal subset of a language, so the major construct that you're left with is the Function.

Here's what I don't use in JavaScript...
dev.to/functional_js/what-subset-o...

  • Practical FP means thinking in terms of functions.
  • Thinking of solving problems by composing functions.
  • Wrapping details away in functions.
  • Keeping your functions free (not members of an object), thus separating data from behavior.
  • Free functions aren't necessarily pure; pure functions I call "utility functions", and they go into utility files.
  • Mutability isn't something you avoid, it's something you formally manage.
  • One of the many great things about FP is that it's great for writing pseudocode. You can design easily with FP pseudocode.

Here's a functional pipeline pseudocode example, which is pretty much all mutation...

const p1 = pipe(
  isInColl,
  fetchFromApi,
  cleanData,
  insertIntoColl,
  updateStats,
  cacheResp
);
p1(val);
 

Nice writeup. Like some others here, I agree that pure FP is a PITA, but I suspect that it might have more to do with the type safety than the statelessness. I wonder if it's the combination of the two that causes the most pain, and that a lot of that just goes away when either constraint is lifted.

If you haven't already & have the time, I definitely recommend playing with Clojure. I personally did all of the Clojure tasks on exercism.io & really enjoyed that. Clojure does add homoiconicity to the mix, which is sort of divisive, but I personally got comfortable with it quickly and came to really enjoy the lack of "special" syntax. And it allows for powerful macros.

Debugging in Clojure is definitely not as nice as in other LISPs. But Clojure code is more readable than the others (IMO), and the wealth of library functions for dealing with collections and other data structures is the best I've seen anywhere.

FWIW, Hickey claimed, at least when it was added, that Clojure's transducer implementation can't be made type-safe, but I've seen rebuttals to that and am just not smart enough when it comes to types to know who's right.

But I think it still hints at a larger problem, where the more generic your code is, the harder the types get. This might have a chilling effect, or at least waste your time on yak-shaving, even when things are not outright impossible to make type safe.

So is it better to have easier generic code & metaprogramming, or the benefits of type safety? Perhaps it depends on what's being written, but my view on type checkers is that, since the bugs they eliminate are the ones that are easy for a machine to check, and not typically domain-level stuff, there's a ceiling on their utility, whereas the amount of crap you have to do to keep them happy seems to know no bounds.

 

Well, after my old self learned Clojure (and went through a good part of the SICP), I seriously began to ask myself this question: is type safety provided by static type checking really that big of a deal?

Dynamic typing fans will tell you that they test their code, so it's safe. Static type checking fans will tell you that most of the things are checked at compile time, so it's safer.

But the program can crash if the static type checking fans fail to write the tests they still needed, because no compiler can prevent every possible mistake from happening. The program can crash as well if the dynamic typing fans didn't test the proper things. In the middle of all that, no study on the subject has proved that one is better than the other.

So here's my conclusion for now: it doesn't matter. It's just a question of preference, which often devolves into spaces-vs-tabs style debates.

Speaking of Clojure, Spec brings a lot on the testing side. I haven't used it that much yet, but I was impressed playing with it.

 

As someone who came to Rust because he burned out multiple times trying to achieve the kind of confidence Rust's type system brings using Python unit tests, I think I'll have to disagree with that.

A type system may not be a magic wand, but it can rule out classes of bugs and, if it's powerful enough, you can do some really clever stuff.

(eg. Using Rust's type system to verify correct traversal of a finite state machine, as with things like Hyper's protection against PHP's "Can't set header. The body has already begun streaming" runtime error. Rust's ownership is key to that because it ensures you can't accidentally hold and use a reference to an old state.)
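
(For anyone curious, here's a rough hypothetical sketch of that typestate idea, not hyper's actual API: once the body has started, the headers-stage value has been consumed by move, so touching it again is a compile error instead of a runtime one.)

struct SendingHeaders { headers: Vec<(String, String)> }
struct SendingBody;

impl SendingHeaders {
    fn header(mut self, name: &str, value: &str) -> SendingHeaders {
        self.headers.push((name.to_string(), value.to_string()));
        self
    }
    // Takes `self` by value, so the headers-stage value is gone afterwards.
    fn start_body(self) -> SendingBody {
        SendingBody
    }
}

fn main() {
    let resp = SendingHeaders { headers: vec![] }.header("content-type", "text/plain");
    let _body = resp.start_body();
    // resp.header("x-late", "oops"); // compile error: `resp` was moved
}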

Also,

“Program testing can be used to show the presence of bugs, but never to show their absence!”
― Edsger W. Dijkstra

I'm at a disadvantage because I haven't written in Rust, but I would also guess that it's a strict improvement over Python. Python doesn't have good closures, or ways of being point-free, or metaprogramming, etc. So, after giving up type checking, you're not getting much back in return. Even in Ruby you'd have to give up a lot more (or perhaps admit a lot more complexity, like Scala's) to add type checking everywhere. (That said, I'm interested in trying Sorbet at some point, which lets you do gradual type checking in Ruby.)

Anyway, I tend to view bugs along a couple of axes. One of them is domain / non-domain. On the domain side, you have bugs that arise from shifting requirements, or misunderstandings of the problem's edge cases. Often these are the most costly to fix, and typically they're harder to discover. Certainly we can't just throw a tool or paradigm at them. On the other end of that spectrum, you have things that happen across codebases -- typos, dereferencing null, etc. These are amenable to tools and disciplines -- type checks, statelessness, immutability, reuse, etc.

I also picture another axis, for just the "low-level" bugs, from static (e.g. typos) to dynamic (e.g. I didn't realize this state was reachable).

Pure FP prevents bugs along the whole axis, but all code gets harder to write, and the more abstract / generic it is, the harder it gets. (In Haskell it's mostly just hard; in Elm you're actually prevented from writing something like a generic map function, AFAIK.)

Everything else addresses just one area. Impure FP addresses the state / mutability side. Type-checked imperative code addresses the other. I'd also argue that design-by-contract is somewhere in the middle.

For me, the question becomes, ignoring pure FP, what's more important to prevent? Which constructs are too useful to sacrifice? I tend to prefer having more genericity, easier metaprogramming, etc., so I mostly prefer dynamic + FP. I used to do a lot of Java and don't miss it. (But maybe I should try Rust.)

I also haven't done much with Typescript, core.typed, Sorbet, etc. If I was in a position to add type checking gradually to code I was already writing, I'd probably use that at least some of the time.

But yeah, I feel like the story with Python is that there's a lot of "we have classes, BUT" / "we have lambdas BUT" etc. No type checking but not much of anything else either, in terms of well-executed language constructs. But it's easy to get started with, and that often has a lot of value.

That's an interesting categorization. But seeing you say this...

I'm at a disadvantage because I haven't written in Rust, but I would also guess that it's a strict improvement over Python.

... makes me want to mention a few things in Python's defense. (I do think Rust is a better language than Python, but far from strictly better.)

I think Python offers the following main benefits over Rust:

  • Much cleaner syntax (I know some people consider significant indentation a bad thing and this can get heated, but I favor it strongly). Comprehensions are as readable as a sentence, and you don't need a whole bunch of "glue" function calls like .to_string() to convert string literals from &str to String (see the sketch after this list).

  • Inheritance (Rust doesn't have it, not even in the form of struct embedding)

  • A REPL. This is huge.

  • Error handling is better in some ways (though I realize it's worse in other ways): you get full stack traces with line numbers for each frame out of the box. No need for third-party crates and writing context messages yourself (except for displaying errors to end users).

  • Default argument values and keyword arguments.

  • A great stdlib and ecosystem. Rust has almost no stdlib and an ecosystem full of immature/abandoned/badly documented crates drowning out the few good ones.
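
(As an aside on the first bullet, here's the kind of glue being referred to, in a tiny made-up example: a String field can't be built straight from a string literal, which is a &str.)

struct User { name: String }

fn main() {
    // let u = User { name: "Ryan" };            // error: expected `String`, found `&str`
    let u = User { name: "Ryan".to_string() };   // the `.to_string()` glue in question
    println!("{}", u.name);
}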

Also, you say Python doesn't have metaprogramming. Are you defining metaprogramming to strictly mean macros? Because it's true Python doesn't have macros, but a lot of their use cases are filled by excellent reflection capabilities. Have you seen libraries like Pydantic (or even the stdlib dataclasses)? They use reflection to accomplish some pretty amazing stuff, like automatic model validation and cutting out all the constructor boilerplate.

I've seen more broad definitions of metaprogramming used before, by which Python has way better metaprogramming than any language without full-on macros.

A REPL. This is huge.

That's a great point, I was forgetting about the lack of a REPL in Rust and what a game-changer one is.

Also sad to hear that the Rust ecosystem isn't as good. I think I'd rather have a REPL and good libraries than type-checking, tbh.

Also, you say Python doesn't have metaprogramming.... Python has way better metaprogramming than any language without full-on macros.

Sorry, that was ambiguous, I meant that it doesn't have "good" metaprogramming, which is perhaps unfair. Certainly Python has metaprogramming. But I think of Ruby's as the benchmark -- send / define_method / method_missing / etc. are very lightweight, and can be used in many contexts where something like metaclasses would be overkill.

FWIW, I think this is often the story with Python, especially when comparing it with Ruby on a given feature. Generally the Python version will be clever, but narrow, and likely necessary because of a weakness somewhere else.

List comprehensions are a good example -- elegant syntax for the easy 20% of cases, which you pay for in the 80% where map would have been simpler.

By contrast, Ruby's solution is having a lightweight syntax for passing code into functions ("blocks"). Any function can be called with a block -- it's not opt-in, or even opt-out, it's just part of what it means to be a function. So map, filter, and reduce with a closure is very readable, and the standard lib stuff concerned with enumerables is all effortlessly higher-order.

Not only is that great when you build lists, it's great everywhere you use higher-order functions. As another specific example -- you don't need things like Python's context managers, which are sort of heavyweight. Regular ruby functions can already do that.

Also, re: Rust, lack of default arguments (and apparently overloading too) sounds really annoying. Python's evaluation of those at function-define-time is a really, really stupid gotcha, but at least Python has them. Even C has vararg functions. Good lord.

List comprehensions are a good example -- elegant syntax for the easy 20% of cases, which you pay for in the 80% where map would have been simpler.

Eh? If you ask me, list comprehensions are more readable than map in almost every case. But Python does have map in its prelude - as well as filter, and reduce is in the functools module. (I think there's also some currying stuff in there, which I've never needed.)

Also, comprehensions have serious performance benefits, which I attribute to them not constructing a lambda object and not iterating twice when you do both map/filter in one.

By contrast, Ruby's solution is having a lightweight syntax for passing code into functions ("blocks"). Any function can be called with a block -- it's not opt-in, or even opt-out, it's just part of what it means to be a function. So map, filter, and reduce with a closure is very readable, and the standard lib stuff concerned with enumerables is all effortlessly higher-order.

I've been digging into Crystal lately, which is basically statically typed Ruby. I'm finding the block solution a bit awkward and I'm not sure if it's even as powerful as just allowing functions to be passed to each other. My understanding is that you can't directly pass a function as an argument in Crystal or Ruby because just referencing its name calls it with 0 arguments. Crystal and Ruby have multiple different kinds of these things - blocks, procs, stabby lambdas - where Python just has functions and lambdas. The block syntax, while nice, seems restricted in use case to me because (unless I'm mistaken) it only allows passing a single block.

FWIW though I do see that Python's lambda story is a bit weak because they can only return something, and inline defs (which are prohibited at least in Crystal) don't handle scoping in the most desirable way. My ideal story in this department is Javascript (though I kind of hate to admit it, since I hate everything else about Javascript).

Not only is that great when you build lists, it's great everywhere you use higher-order functions. As another specific example -- you don't need things like Python's context managers, which are sort of heavyweight. Regular ruby functions can already do that.

Yeah, I realized yesterday that blocks are the equivalent of with, but I'm not sure if I like them more. I don't think context managers are heavyweight though. The contextlib.contextmanager decorator can turn a function into one: the function does its setup, yields the resource inside a try, and does its cleanup in the finally.

Also, re: Rust, lack of default arguments (and apparently overloading too) sounds really annoying. Python's evaluation of those at function-define-time is a really, really stupid gotcha, but at least Python has them. Even C has vararg functions. Good lord.

Lol. I agree that the evaluation of defaults at function-definition time is a stupid gotcha, but as for vararg functions, I'm not actually sold on them. What's the benefit compared to functions that take arrays?

Python does have map et al, but lacks the anonymous inner functions which would make them more useful for building & reducing collections.

A few things:

  • Blocks aren't merely the equal of with -- they're more like lambdas where return acts on the outer function. So (like lambdas) they cover the with case, the collection-wrangling cases, and many others.
  • You can pass multiple blocks -- proc makes blocks, lambdas, and even named functions first-class. But the "default block" has more convenient syntax.
  • I don't think it's fair to say that Python has one way to do it (lambdas) while other languages have multiple. I'd say Python has zero, unfortunately. Python's lambdas are anonymous inner expressions while every other language uses the term to mean anonymous inner functions. Likewise, some variants of BASIC have "user-defined functions" which are limited to a single statement. Beats not having them, but still not as useful as real functions.

[I edited the above to group them and clarify what I mean by lambdas]

One more example -- there's no "reduce" comprehension in Python. So you're back to loops there. They give you a sum function, but you don't have the thing that would let you write your own sum function elegantly.

I should probably stop... it's hard to talk about these things convincingly because often the problem with Python isn't with what it has, but the fact that it needed to have those things in the first place. Along those lines, yes the generator-style context managers are lighter-weight than classes, but languages with anonymous functions don't need to provide "generator-style context managers" in the first place. Anyway, I don't want to come off as hating Python. It's just behind the times in some ways.

as for vararg functions, I'm not actually sold on them. What's the benefit compared to functions that take arrays?

Not much, since varargs are syntactic sugar. But it lets you be explicit about intent -- sometimes you really are passing in a single argument that happens to be an array. Varargs let you say "this isn't an array, but a bunch of discrete arguments that I will figure out what to do with at runtime."

One more example -- there's no "reduce" comprehension in Python. So you're back to loops there. For the specific case of adding, they give you a sum function. But you don't have the thing that would let you write your own (at least in a functional style).

Huh? I just mentioned functools.reduce. It works with any function or lambda. What does Ruby's reduce get that Python's doesn't?

Oh - is it the ability to return from the outer function (halting iteration)?

I guess that could be useful... in some really obscure situation or something...

Oh I just meant Python's comprehensions -- they cover some of the places you'd use map & filter, but there's no comprehension for going down to a scalar, AFAIK. (That said, since reduce can be used to build collections, list / dict / set comprehensions do end up overlapping with it.)

And to be clear, when I said "you're back to loops" I meant for the places where you wouldn't bother to create a named inner function, or to compose named functions before calling them. (It's not that you don't have options, but I'd argue that they're not idiomatic or lightweight.)

What does Ruby's reduce get that Python's doesn't?

While reduce works the same everywhere, it's most useful in languages with anonymous inner functions. It's true of higher-order functions in general, not just reduce.

Ruby goes a step further by providing a shorthand syntax for doing this (but is not unique there -- Clojure has the #() shorthand, and JS has =>).

FWIW, since you mentioned it -- I've only felt the need to halt from inside a reducer function in Clojure. I forget why, but it probably had to do with starting from an infinite stream. It'd be an optimization to prevent the creation of intermediate collections.

 

So what you are essentially saying is that side effects caused by mutations/reassignments are less harmful than those caused by IO and hence it isn't worth abstracting from the latter? Don't you think that IO related side effects hamper local/equational reasoning the same way that mutations/reassignments do?

 

It's not so much that IO-related side effects hamper local reasoning less, but that preventing them altogether is too impractical.

 

I think this is why F# is comfortable with the fact that it is an impure language and lacks some of the more advanced features that would require monad transformers et al.

If you've had it with types, you might also want to check out Clojure.

 

I've definitely not had it with types :D (quite the opposite). Rust and Crystal are both good languages, and are quickly becoming my two favorites without much competition.

 

I've come to FP through F#. F# is an "impure" FP language, but I think it'd fall more under the pure umbrella than outside of it, regarding the points in your post.

I describe F# as "OCaml for .NET".

In my assessment (which echoes Bryan Hunter of Firefly Logic), the "big wins" for FP are:

  • succinctness and expressivity
  • immutability
  • recursion
  • pattern matching
  • higher-order functions
  • code as data
  • separation of behavior from data
  • referential transparency

In the meantime, I'll use C++. Because I can get a job using C++.

The thing that impresses me about Rust is its amazingly strong ownership model, and how it uses its novel compile-time typestate to make that happen. It is poised to eat C++'s lunch.

 

So you want to force mutability into functional programming. Maybe you didn't get as deep into the functional programming mindset as you thought you did. Especially with that claim that Rust provided "all the advantages" just because mutability has to be explicit.

There is more to functional programming and immutability. You can modify an immutable data structure, you'll just end up with a modified copy of it instead of modifying the original, and THAT is what the immutability is about: not about making your life hard by making it cumbersome to modify a data structure, but by having you be very explicit about who gets to see and operate on that modified structure.

The problem with mutable data structures isn't that they are mutable. It is that you have no idea what you cause by mutating it, because who-knows-how-many things might be accessing it.

Lazy operations are another classic. It looks like in Rust you need a library to even get lazy evaluation in the first place. But it opens so many doors in functional programming. Composing operations, operating on infinite collections, and so on.

 

So you want to force mutability into functional programming. Maybe you didn't get as deep into the functional programming mindset as you thought you did. Especially with that claim that Rust provided "all the advantages" just because mutability has to be explicit.

Please do not reach for questioning my experience as your first response to disagreement. You also misrepresent what I said, as I never claimed that Rust provided all the advantages. I said it provided "most, maybe all". I even described how I was open to the idea that there were some benefits it was missing out on, but that I believed they didn't justify the imposition of total purity because they were mutually exclusive with more important benefits.

There is more to functional programming and immutability. You can modify an immutable data structure, you'll just end up with a modified copy of it instead of modifying the original, and THAT is what the immutability is about: not about making your life hard by making it cumbersome to modify a data structure, but by having you be very explicit about who gets to see and operate on that modified structure.

What you describe here actually sounds like Rust. You can modify things, but have to be very explicit (via the ownership system) about who gets to see and operate on that modified structure. In Haskell, changing to do this requires a heavy refactor replete with arcane type system hacks.

The problem with mutable data structures isn't that they are mutable. It is that you have no idea what you cause by mutating it, because who-knows-how-many things might be accessing it.

Again, in Rust, ownership makes it clear what things are accessing it.
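
A small sketch of that explicitness (hypothetical names): while a shared reference is alive, the compiler rejects mutation, so "who can observe this change" is spelled out in the code.

fn main() {
    let mut stats = vec![1, 2, 3];

    let reader = &stats;         // shared borrow: read-only view
    // stats.push(4);            // compile error: can't mutate while `reader` is live
    println!("{}", reader.len());

    stats.push(4);               // fine: the shared borrow has ended
    println!("{:?}", stats);
}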

Lazy operations are another classic. It looks like in Rust you need a library to even get lazy evaluation in the first place. But it opens so many doors in functional programming. Composing operations, operating on infinite collections, and so on.

Rust doesn't have pervasive lazy evaluation, but it has lazy iterators which allow composing operations and operating on infinite collections. So does Python. I hear so does Javascript these days.
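
For example (a small sketch), iterator adapters are lazy, so you can compose a pipeline over an unbounded range and only pay for the elements you actually take:

fn main() {
    // Nothing is computed until `collect` drives the iterator.
    let first_five_even_squares: Vec<u64> = (1u64..)   // conceptually infinite
        .map(|n| n * n)
        .filter(|sq| sq % 2 == 0)
        .take(5)
        .collect();
    assert_eq!(first_five_even_squares, vec![4, 16, 36, 64, 100]);
}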

Are you aware of Idris, a remake of Haskell that retains pure functional programming but isn't lazy everywhere? The point I'm making by mentioning it is that laziness and functional purity are not as intertwined as you seem to think.

 

Again, in Rust, ownership makes it clear what things are accessing it.

Ownership in Rust is pretty much the same thing as having variables belong to a class in object oriented languages, from what I understood. As soon as you modified a variable, everything you ever gave this variable to now implicitly has this modified version. This is very different from the functional programming approach where you explicitly have to give the modified variant of the collection to every place you want to see the modification.

Are you aware of Idris, a remake of Haskell that retains pure functional programming but isn't lazy everywhere?

I am aware, but I never used it, and also don't really plan to - same as how I did look into Haskell a little, but opted not to use it; these days I am a Clojure user.

laziness and functional purity are not as intertwined as you seem to think.

They aren't as intertwined, but they are both part of the concepts of functional programming.

pub struct Immut<T>(T);  // the field is private outside this module
impl<T> std::ops::Deref for Immut<T> {
  type Target = T;
  // read-only access; no DerefMut is implemented
  fn deref(&self) -> &T { &self.0 }
}
// the only public way to construct an Immut
pub fn immut<T>(x: T) -> Immut<T> {
  Immut(x)
}

This guarantees that Immut can't be mutated from outside this module (Rust has module-level privacy).
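
Hypothetical usage from outside the module (assuming the code above lives in a module called immutable): reads go through Deref, but there's no path to the private field and no DerefMut, so mutation won't compile.

fn main() {
    let v = immutable::immut(vec![1, 2, 3]);
    println!("{}", v.len());   // ok: Deref hands out a `&Vec<i32>`
    // v.push(4);              // compile error: `push` needs `&mut Vec<i32>`,
    //                         // and there's no `DerefMut` to provide one
}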

Ownership in Rust is pretty much the same thing as having variables belong to a class in object oriented languages, from what I understood. As soon as you modified a variable, everything you ever gave this variable to now implicitly has this modified version. This is very different from the functional programming approach where you explicitly have to give the modified variant of the collection to every place you want to see the modification.

I don't understand what you mean. I would love to see a code example.

 

You can modify an immutable data structure, you'll just end up with a modified copy of it instead of modifying the original, and THAT is what the immutability is about: not about making your life hard by making it cumbersome to modify a data structure, but by having you be very explicit about who gets to see and operate on that modified structure

Collections are not a language feature, so there are immutable collections in Rust.

 

As soon as you modified a variable, everything you ever gave this variable to now implicitly has this modified version.

If you gave this variable to something, how can you mutate it? The variable has moved.
I tried my best