My first contact with pure functional programming was Haskell. The idea sounded bizarre, but fascinating. I had faith that once I'd learned Haskell, I'd understand how to use things like monads to replace the uses of state and side effects.
But that wasn't what happened. After reaching an intermediate level in Haskell, what I learned was that although the functionality of state and side effects could be replicated, it was only in the sense that everything can be replicated in assembly. Even with all the elaborate abstractions, state would never be easy. It was a huge weakness of functional programming, one that would plague almost any sizable project.
And I'm not blaming the paradigm for the flaws of a language. Most of my gripes with Haskell aren't the fault of pure functional programming (Idris is a remake that solves most of them), but this one is: state becomes unreasonably difficult to represent. Not 'difficult' in the sense that it was still hard to figure out once I understood the concepts, but in the sense that it was a ton of work to make anything impure. Making a function deep in the logic log, or be affected by an environment variable, required everything below it in the stack to be refactored. Using multiple monads together required monad transformers, which meant comically convoluted type signatures (which you would alias) and a sprinkling of glue functions like `liftIO` and `runWriterT`.
Are there good things about pure functional programming? I'm not sure. You probably expected me to say yes, because we're always expected to hedge like that when we make extraordinary claims. But I'm not sure because I've seen how every other good idea in Haskell can be had without banning side effects. The main demonstration is Rust.
Rust was influenced by the ML family (the same family that Haskell comes from). It has algebraic data types and a constraint system for type parameters. That lets it implement `map`, `filter`, comprehensions (`filter_map`), `zip`, monadic short-circuiting for the error-handling sum type, and other hallmarks of functional programming, with as much type safety and code reuse as any functional language can offer.
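As a sketch of how those pieces look in practice (illustrative code only, using nothing beyond the standard library):

```rust
// Iterator adapters compose lazily, like their functional counterparts.
fn demo() -> Vec<i32> {
    let xs = vec![1, 2, 3, 4, 5];
    xs.iter()
        .map(|x| x * 2)         // map
        .filter(|x| x % 3 != 0) // filter
        .zip(10..)              // zip (here with an unbounded range)
        .map(|(x, i)| x + i)
        .collect()
}

// `?` gives monadic short-circuiting over `Result`, the error-handling sum type.
fn parse_and_add(a: &str, b: &str) -> Result<i32, std::num::ParseIntError> {
    Ok(a.parse::<i32>()? + b.parse::<i32>()?)
}

fn main() {
    println!("{:?}", demo());
    println!("{:?}", parse_and_add("2", "3"));
}
```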
But Rust is an imperative language. And the nail in the coffin is this: everything is immutable by default. Rust still makes mutation explicit. And I think that provides most, maybe all, of the benefits of pure functional programming.
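A minimal sketch of what "immutable by default, mutation explicit" means in practice:

```rust
// Mutation through a reference must be declared in the signature,
// so every caller can see that `bump` mutates its argument.
fn bump(n: &mut i32) {
    *n += 1;
}

fn main() {
    let x = 5;
    // x += 1; // error[E0384]: cannot assign twice to immutable variable `x`
    let _ = x;

    let mut y = 5; // mutation has to be opted into with `mut`
    y += 1;
    assert_eq!(y, 6);

    let mut z = 0;
    bump(&mut z); // ...and is visible at the call site too
    assert_eq!(z, 1);
}
```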
What is Rust missing compared to Haskell and Idris?
Read-only state and side effects that don't mutate program state can still be implicit. But the story above convinced me that this is a good thing. There's a mutual exclusion here: either the language lets you cause these effects implicitly, or it makes them explicit, which it can only do by banning implicit effects outright. So even if seeing these things in a type signature is nice sometimes, there's no way I'd choose a language that requires it, all else being equal.
You can't write code polymorphic over monads. But... when is this ever actually useful? This ability doesn't map to any particular idea that can be expressed outside of the language; it's just a type system hack. I'm not sure you ever need it if you're not working in a language that requires such hacks to get things that come naturally in imperative languages.
And that's why I don't believe in pure functional programming anymore.
Discussion (26)
Thanks for sharing your experience Ryan.
This is my take on FP (Functional Programming)...
I think it's important to separate "pure", "theoretical", or "academic" FP from "practical" FP. Practical FP you can do in pretty much any language.
A lot of it involves using only a minimal subset of a language, so the major construct that you're left with is the Function.
Here's what I don't use in JavaScript...
dev.to/functional_js/what-subset-o...
Here's a functional pipeline pseudocode example, which is pretty much all mutation...
Nice writeup. Like some others here, I agree that pure FP is a PITA, but I suspect that it might have more to do with the type safety than the statelessness. I wonder if it's the combination of the two that causes the most pain, and that a lot of that just goes away when either constraint is lifted.
If you haven't already & have the time, I definitely recommend playing with Clojure. I personally did all of the Clojure tasks on exercism.io & really enjoyed that. Clojure does add homoiconicity to the mix, which is sort of divisive, but I personally got comfortable with it quickly and came to really enjoy the lack of "special" syntax. And it allows for powerful macros.
Debugging in Clojure is definitely not as nice as in other LISPs. But Clojure code is more readable than the others (IMO), and the wealth of library functions for dealing with collections and other data structures is the best I've seen anywhere.
FWIW, Hickey claimed, at least when transducers were added, that Clojure's transducer implementation can't be made type-safe, but I've seen rebuttals to that and am just not smart enough when it comes to types to know who's right.
But I think it still hints at a larger problem, where the more generic your code is, the harder the types get. This might have a chilling effect, or at least waste your time on yak-shaving, even when things are not outright impossible to make type safe.
So is it better to have easier generic code & metaprogramming, or the benefits of type safety? Perhaps it depends on what's being written, but my view on type checkers is that, since the bugs they eliminate are the ones that are easy for a machine to check, and not typically domain-level stuff, there's a ceiling on their utility, whereas the amount of crap you have to do to keep them happy seems to know no bounds.
Well, after my old self learned Clojure (and went through a good part of the SICP), I seriously began to ask myself this question: is type safety provided by static type checking really that big of a deal?
Dynamic typing fans will tell you that they test their code, so it's safe. Static type checking fans will tell you that most of the things are checked at compile time, so it's safer.
But the program can still crash if the static typing fans miss tests they needed, because no compiler can prevent every possible mistake. The program can crash as well if the dynamic typing fans didn't test the right things. And in the middle of all that, no study on the subject has proven that one is better than the other.
So here's my conclusion for now: it doesn't matter. It's just a question of preference, one that often devolves into spaces-vs-tabs style debates.
Speaking of Clojure, Spec brings a lot on the testing side. I haven't used it much yet, but I was impressed playing with it.
As someone who came to Rust because he burned out multiple times trying to achieve the kind of confidence Rust's type system brings using Python unit tests, I think I'll have to disagree with that.
A type system may not be a magic wand, but it can rule out classes of bugs and, if it's powerful enough, you can do some really clever stuff.
(eg. Using Rust's type system to verify correct traversal of a finite state machine, as with things like Hyper's protection against PHP's "Can't set header. The body has already begun streaming" runtime error. Rust's ownership is key to that because it ensures you can't accidentally hold and use a reference to an old state.)
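A toy version of that typestate idea (hypothetical types, not Hyper's actual API): each protocol state is its own type, and transitions take `self` by value, so an outdated state can't be reused.

```rust
// Hypothetical response builder: headers can only be set before the body starts.
struct Headers {
    lines: Vec<String>,
}

struct Body;

impl Headers {
    fn new() -> Self {
        Headers { lines: Vec::new() }
    }

    fn set_header(mut self, line: &str) -> Self {
        self.lines.push(line.to_string());
        self
    }

    // Starting the body consumes the `Headers` state entirely.
    fn start_body(self) -> Body {
        Body
    }
}

fn main() {
    let response = Headers::new().set_header("Content-Type: text/plain");
    let _body = response.start_body();
    // response.set_header("X-Too-Late: 1");
    // ^ error[E0382]: use of moved value: `response`
}
```

Because `start_body` moves the `Headers` value, "set a header after the body began" is not a runtime error but a compile error.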
Also, I'm at a disadvantage because I haven't written in Rust, but I would also guess that it's a strict improvement over Python. Python doesn't have good closures, or ways of being point-free, or metaprogramming, etc. So after giving up type checking, you're not getting much back. Even in Ruby you'd have to give up a lot more (or perhaps admit a lot more complexity, like Scala's) to add type checking everywhere. (That said, I'm interested in trying Sorbet at some point, which lets you do gradual type checking in Ruby.)
Anyway, I tend to view bugs along a couple of axes. One of them is domain / non-domain. On the domain side, you have bugs that arise from shifting requirements, or misunderstandings of the problem's edge cases. Often these are the most costly to fix, and typically they're harder to discover. Certainly we can't just throw a tool or paradigm at them. On the other end of that spectrum, you have things that happen across codebases -- typos, dereferencing null, etc. These are amenable to tools and disciplines -- type checks, statelessness, immutability, reuse, etc.
I also picture another axis, for just the "low-level" bugs, from static (e.g. typos) to dynamic (e.g. I didn't realize this state was reachable).
Pure FP prevents bugs along the whole axis, but all code gets harder to write, and the more abstract / generic it is, the harder it gets. (In Haskell, mostly it's just hard, in Elm you're actually prevented from writing something like a generic map function, AFAIK.)
Everything else addresses just one area. Impure FP addresses the state / mutability side. Type-checked imperative code addresses the other. I'd also argue that design-by-contract is somewhere in the middle.
For me, the question becomes, ignoring pure FP, what's more important to prevent? Which constructs are too useful to sacrifice? I tend to prefer having more genericity, easier metaprogramming, etc., so I mostly prefer dynamic + FP. I used to do a lot of Java and don't miss it. (But maybe I should try Rust.)
I also haven't done much with Typescript, core.typed, Sorbet, etc. If I was in a position to add type checking gradually to code I was already writing, I'd probably use that at least some of the time.
But yeah, I feel like the story with Python is that there's a lot of "we have classes, BUT" / "we have lambdas BUT" etc. No type checking but not much of anything else either, in terms of well-executed language constructs. But it's easy to get started with, and that often has a lot of value.
That's an interesting categorization. But seeing you say this...
... makes me want to mention a few things in Python's defense. (I do think Rust is a better language than Python, but far from strictly better.)
I think Python offers the following main benefits over Rust:
- Much cleaner syntax (I know some people consider significant indentation a bad thing and this can get heated, but I favor it strongly). Comprehensions are as readable as a sentence, and you don't need a whole bunch of "glue" function calls like `.to_string()` to cast string literals from `&str` to `String`.
- Inheritance (Rust doesn't have it, not even in the form of struct embedding).
- A REPL. This is huge.
- Error handling is better in some ways (though I realize it's worse in others): you get full stack traces with line numbers for each frame out of the box. No need for third-party crates and writing context messages yourself (except for displaying errors to end users).
- Default argument values and keyword arguments.
- A great stdlib and ecosystem. Rust has almost no stdlib, and an ecosystem full of immature/abandoned/badly documented crates drowning out the few good ones.
Also, you say Python doesn't have metaprogramming. Are you defining metaprogramming to strictly mean macros? Because it's true Python doesn't have macros, but a lot of their use cases are filled by excellent reflection capabilities. Have you seen libraries like Pydantic (or even the stdlib dataclasses)? They use reflection to accomplish some pretty amazing stuff, like automatic model validation and cutting out all the constructor boilerplate.
I've seen more broad definitions of metaprogramming used before, by which Python has way better metaprogramming than any language without full-on macros.
That's a great point, I was forgetting about the lack of a REPL in Rust and what a game-changer one is.
Also sad to hear that the Rust ecosystem isn't as good. I think I'd rather have a REPL and good libraries than type checking, tbh.
Sorry, that was ambiguous, I meant that it doesn't have "good" metaprogramming, which is perhaps unfair. Certainly Python has metaprogramming. But I think of Ruby's as the benchmark -- `send`/`define_method`/`method_missing`/etc. are very lightweight, and can be used in many contexts where something like metaclasses would be overkill.
FWIW, I think this is often the story with Python, especially when comparing it with Ruby on a given feature. Generally the Python version will be clever, but narrow, and likely necessary because of a weakness somewhere else.
List comprehensions are a good example -- elegant syntax for the easy 20% of cases, which you pay for in the 80% where `map` would have been simpler.
By contrast, Ruby's solution is having a lightweight syntax for passing code into functions ("blocks"). Any function can be called with a block -- it's not opt-in, or even opt-out, it's just part of what it means to be a function. So `map`, `filter`, and `reduce` with a closure is very readable, and the standard lib stuff concerned with enumerables is all effortlessly higher-order.
Not only is that great when you build lists, it's great everywhere you use higher-order functions. As another specific example -- you don't need things like Python's context managers, which are sort of heavyweight. Regular Ruby functions can already do that.
Also, re: Rust, lack of default arguments (and apparently overloading too) sounds really annoying. Python's evaluation of those at function-define-time is a really, really stupid gotcha, but at least Python has them. Even C has vararg functions. Good lord.
Eh? If you ask me, list comprehensions are more readable than `map` in almost every case. But Python does have `map` in its prelude -- as well as `filter` -- and `reduce` is in the `functools` module. (I think there's also some currying stuff in there, which I've never needed.)
Also, comprehensions have serious performance benefits, which I attribute to them not constructing a lambda object and not iterating twice when you do both a map and a filter in one.
I've been digging into Crystal lately, which is basically statically typed Ruby. I'm finding the block solution a bit awkward, and I'm not sure it's even as powerful as just allowing functions to be passed to each other. My understanding is that you can't directly pass a function as an argument in Crystal or Ruby, because just referencing its name calls it with 0 arguments. Crystal and Ruby have multiple different kinds of these things -- blocks, procs, stabby lambdas -- where Python just has functions and lambdas. The block syntax, while nice, seems restricted in use case to me because (unless I'm mistaken) it only allows passing a single block.
FWIW though I do see that Python's lambda story is a bit weak, because lambdas can only return something, and inline `def`s (which are prohibited at least in Crystal) don't handle scoping in the most desirable way. My ideal story in this department is Javascript (though I kind of hate to admit it, since I hate everything else about Javascript).
Yeah, I realized yesterday that blocks are the equivalent of `with`, but I'm not sure if I like them more. I don't think context managers are heavyweight though. The `contextlib.contextmanager` decorator can turn a function into one; the function does its setup in `try`, `yield`s the resource, and does its cleanup in `finally`.
Lol. I agree that the evaluation of defaults at function-definition time is a stupid gotcha, but as for vararg functions, I'm not actually sold on them. What's the benefit compared to functions that take arrays?
Python does have `map` et al, but lacks the anonymous inner functions which would make them more useful for building & reducing collections.
A few things:
On `with` -- blocks are more like lambdas where `return` acts on the outer function. So (like lambdas) they cover the `with` case, the collection-wrangling cases, and many others.
`proc` makes blocks, lambdas, and even named functions first-class. But the "default block" has more convenient syntax.
[I edited the above to group them and clarify what I mean by lambdas]
One more example -- there's no "reduce" comprehension in Python. So you're back to loops there. They give you a `sum` function, but you don't have the thing that would let you write your own `sum` function elegantly.
I should probably stop... it's hard to talk about these things convincingly, because often the problem with Python isn't what it has, but the fact that it needed to have those things in the first place. Along those lines, yes, the generator-style context managers are lighter-weight than classes, but languages with anonymous functions don't need to provide "generator-style context managers" in the first place. Anyway, I don't want to come off as hating Python. It's just behind the times in some ways.
Not much, since varargs are syntactic sugar. But they let you be explicit about intent -- sometimes you really are passing a single argument that happens to be an array. Varargs let you say "this isn't an array, but a bunch of discrete arguments that I will figure out what to do with at runtime."
Huh? I just mentioned `functools.reduce`. It works with any function or lambda. What does Ruby's reduce get that Python's doesn't?
Oh - is it the ability to return from the outer function (halting iteration)? I guess that could be useful... in some really obscure situation or something...
Oh I just meant Python's comprehensions -- they cover some of the places you'd use `map` & `filter`, but there's no comprehension for going down to a scalar, AFAIK. (That said, since `reduce` can be used to build collections, list / dict / set comprehensions do end up overlapping with it.)
And to be clear, when I said "you're back to loops" I meant for the places where you wouldn't bother to create a named inner function, or to compose named functions before calling them. (It's not that you don't have options, but I'd argue that they're not idiomatic or lightweight.)
While `reduce` works the same everywhere, it's most useful in languages with anonymous inner functions. That's true of higher-order functions in general, not just `reduce`.
Ruby goes a step further by providing a shorthand syntax for doing this (but is not unique there -- Clojure has the `#()` shorthand, and JS has `=>`).
FWIW, since you mentioned it -- I've only felt the need to halt from inside a reducer function in Clojure. I forget why, but it probably had to do with starting from an infinite stream. It'd be an optimization to prevent the creation of intermediate collections.
Tabs for indentation, spaces for alignment :P
Oh, and type systems with nominal types can actually encode the domain model into the type system, so you also get the "domain level stuff"
See here: pragprog.com/titles/swdddf/domain-...
Nim actually has an amazing macro system and static type safety.
So what you are essentially saying is that side effects caused by mutations/reassignments are less harmful than those caused by `IO`, and hence it isn't worth abstracting over the latter? Don't you think that `IO`-related side effects hamper local/equational reasoning the same way that mutations/reassignments do?
It's not so much that IO-related side effects hamper local reasoning less, but that preventing them altogether is too impractical.
I think this is why F# is comfortable with the fact that it is an impure language and lacks some of the more advanced features that would require monad transformers et al.
If you've had it with types, you might also want to check out Clojure.
I've definitely not had it with types :D (quite the opposite). Rust and Crystal are both good languages, and are quickly becoming my two favorites, without much competition.
I've come to FP through F#. F# is an "impure" FP language, but I think it'd fall more under the pure umbrella than outside of it, regarding the points in your post.
I describe F# as "OCaml for .NET".
In my assessment (which echoes Bryan Hunter of Firefly Logic), the "big wins" for FP are:
In the meantime, I'll use C++. Because I can get a job using C++.
The thing that impresses me about Rust is its amazingly strong ownership model, and how it uses its novel compile-time typestate to make that happen. It is poised to eat C++'s lunch.
So you want to force mutability into functional programming. Maybe you didn't get as deep into the functional programming mindset as you thought you did. Especially with that claim that Rust provided "all the advantages" just because mutability has to be explicit.
There is more to functional programming and immutability. You can modify an immutable data structure; you'll just end up with a modified copy instead of modifying the original. And THAT is what the immutability is about: not making your life hard by making it cumbersome to modify a data structure, but having you be very explicit about who gets to see and operate on that modified structure.
The problem with mutable data structures isn't that they are mutable. It is that you have no idea what you cause by mutating it, because who-knows-how-many things might be accessing it.
Lazy operations are another classic. It looks like in Rust you need a library to even get lazy evaluation in the first place. But it opens so many doors in functional programming. Composing operations, operating on infinite collections, and so on.
Please do not reach for questioning my experience as your first response to disagreement. You also misrepresent what I said, as I never claimed that Rust provided all the advantages. I said it provided "most, maybe all". I even described how I was open to the idea that there were some benefits it was missing out on, but that I believed they didn't justify the imposition of total purity because they were mutually exclusive with more important benefits.
What you describe here actually sounds like Rust. You can modify things, but you have to be very explicit (via the ownership system) about who gets to see and operate on that modified structure. In Haskell, changing code to do this requires a heavy refactor replete with arcane type system hacks.
Again, in Rust, ownership makes it clear what things are accessing it.
Rust doesn't have pervasive lazy evaluation, but it has lazy iterators which allow composing operations and operating on infinite collections. So does Python. I hear so does Javascript these days.
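For example, a sketch of composing lazy operations over an infinite sequence with Rust's standard iterators:

```rust
fn main() {
    // `0..` is an infinite range; nothing is computed until `collect` drives it.
    let result: Vec<u64> = (0u64..)
        .filter(|n| n % 2 == 0) // keep the evens
        .map(|n| n * n)         // square them
        .take(4)                // only now is the pipeline bounded
        .collect();
    assert_eq!(result, vec![0, 4, 16, 36]);
}
```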
Are you aware of Idris, a remake of Haskell that retains pure functional programming but isn't lazy everywhere? The point I'm making by mentioning it is that laziness and functional purity are not as intertwined as you seem to think.
Ownership in Rust is pretty much the same thing as having variables belong to a class in object-oriented languages, from what I understand. As soon as you modify a variable, everything you ever gave it to now implicitly has the modified version. This is very different from the functional programming approach, where you explicitly have to give the modified variant of the collection to every place you want to see the modification.
I am aware, but I never used it, and also don't really plan to - same as how I did look into Haskell a little, but opted not to use it; these days I am a Clojure user.
They aren't as intertwined, but they are both part of the concepts of functional programming.
This guarantees that `Immut` can't be mutated from outside this module (Rust has module-level privacy).
I don't understand what you mean. I would love to see a code example.
Collections are not a language feature, so there are immutable collections in Rust.
If you gave this variable to something, how can you mutate it? The variable has moved.
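A minimal sketch of the move being described here:

```rust
// `consume` takes ownership of the vector; the caller loses access to it.
fn consume(v: Vec<i32>) -> usize {
    v.len()
}

fn main() {
    let v = vec![1, 2, 3];
    let n = consume(v); // ownership of `v` moves into `consume`
    assert_eq!(n, 3);
    // v.push(4);
    // ^ error[E0382]: borrow of moved value: `v`
}
```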
I tried my best