Programming Paradigms and the Procedural Paradox

Eric Normand on August 30, 2017

I'm a collector of perspectives. I think each perspective we have within reach is another option we have to solve problems.

I believe it is the best article that I have ever read. Thumbs up!


This post has led me on a journey of discovering more about "pure" OO, in particular the Smalltalk way. What stood out most to me in pure OO is the avoidance of control structures like if and case/switch (and additionally for).

This floored me at first, because I use if and case in functional programming. Case (match in F#) statements are basically required to deal with union types. Eventually I came to this conclusion: what is wrong with control structures (in OO languages) is that they require side effects, and they make it easy to mix side effects.

In (nearly all?) OO languages, there is no way to return a value from an if or switch statement except by mutation of something outside its scope. And if you have to add something to code which already mutates one thing outside of its scope, why not another? I'm not just theory-crafting here. I have seen and done this myself many times. This pattern / problem is basically borrowed from procedural programming.

string s = null;
if (Regex.IsMatch(input, somePattern)) {
    // must mutate s
    s = input;
    // why not also mutate something else while we are here?
} else {
    // mutating s here too... who warns me if I forget this branch?
    s = String.Empty;
}

Many people would replace this with the ternary operator precisely to avoid mixing mutations. But that falls apart if there are more than 2 cases.
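For illustration, here is roughly what a multi-case ternary chain looks like (Java syntax, which matches C# here; the function and variable names are made up). It still avoids mixed mutation, but readability degrades quickly as cases are added:

```java
public class TernaryChain {
    // hypothetical 3-way classification; each added case nests another ternary
    static String size(int n) {
        return n < 10 ? "small"
             : n < 100 ? "medium"
             : "large";
    }

    public static void main(String[] args) {
        System.out.println(size(5));    // small
        System.out.println(size(50));   // medium
        System.out.println(size(500));  // large
    }
}
```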

I do not have an elegant solution in mind to do this in pure OO, or I would include that code here.

In functional programming if and case statements must return values. Some (most?) functional languages allow implicit side effects, so if can still be used for evil. But the methodology (and often the language) discourages you from doing so.

// no mutation
let s =
    if Regex.IsMatch(input, somePattern) then input
    else String.Empty

All in all, I wish I had found out about pure OO when I started programming. The way I actually did OO was procedural with objects, and I found that it failed me. I think this underlines the point that following the methodology of a given paradigm matters more than using its features. I have certainly found this to be true of functional programming as well.


This boils down more to the expression vs. statement idea. The smallest atomic building block of the procedural style is the statement: a unit of code that changes the value of the global, implicit state. The smallest atomic building block of the functional style is an expression: a unit of code that expresses a computation that returns a value.

Most languages these days have a pretty arbitrary mix of statements and expressions. For example, in Java, "1 + 1" is an expression, as is "a = 2" (which is why you can write "b = a = 2"), but "if {}" is a statement, so you can't write "a = if {}". "Functional" languages tend to "cheat" and say everything is an expression, but some expressions don't return a useful value, i.e. they return some placeholder like nil or null or Nothing.
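A minimal Java sketch of this mix (variable names are arbitrary):

```java
public class ExprVsStmt {
    public static void main(String[] args) {
        int a, b;
        // "a = 2" is an expression whose value is 2, so assignments chain:
        b = a = 2;
        System.out.println(a + " " + b); // 2 2

        // "if" is a statement, so this does not compile:
        // int c = if (a == 2) { 1 } else { 2 };

        // The ternary operator is the expression form of the same branch:
        int c = a == 2 ? 1 : 2;
        System.out.println(c); // 1
    }
}
```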

There's a good argument to be made that statements don't add anything to a language, and a language should always only be defined in terms of expressions. Existing languages that have statements are not going to get rid of them though.


I think the "placeholder" value you are looking for is called unit or () in most languages. Non-FP languages have it too. There it is called void but it is just a keyword, not a value you can return.

void and unit are generally type names, not values. My point was, even though some languages are syntactically defined such that everything is an expression and there is technically no statement in the language, there are still expressions that do not return any useful value. Clojure and Ruby would be examples of that.

Once static typing gets involved, the compiler may actively prevent using the return value of a void function, but that's a different story.

Here is a good post on void vs unit. Unit does in fact have a value and a type, unlike void. Otherwise I agree with you. Prime example of what you are saying in F#: when a function returns unit (which is a sign that it performs a side effect), and it contains an if statement, then the compiler will include an implicit else () if you don't supply one. I actually don't like that, because it hindered my understanding when I was first learning F#. The early things you do like printing to the console return unit, so it feels like if is a statement. But then later when you are doing other kinds of logic, it won't compile when you operate under that assumption.


Oh man, what a bundle of confusion this is! Where to begin?

The gist of the confusion is probably in this concluding phrase -- 'solving problems with code'. Basically, software architects, designers and implementers tend to act silly by focusing on solving problems with code. That's akin to barking up the wrong tree.

This barking up the wrong tree doesn't happen in real-world engineering. For example, you can teach someone how to build a house using prefabricated bricks. The bricks are all of uniform size, and are suitable for composing a larger object -- namely, a house.

If you now ask that architect/engineer to build you a much bigger house, would you expect them to go back to the drawing board and design and implement much larger bricks?

Well, that's exactly the folly we're seeing in the world of software architecting/designing/engineering. When asked to build a bigger 'solution', software 'engineers' typically take a deep dive into designing and building bigger 'bricks'. They seem incapable of grasping the fact that you can build bigger systems using more of the same. Everything in their world needs to be a unique, precious snowflake.


Hey Alex,

Interesting discussion!

I sympathize with this line of thought because I often find myself in arguments on the side you're taking. I.e., "can't we just do the obvious thing that we know works?" Often it comes down to programmers not being aware of prior work.

I don't know if I agree with you in general, though. Programmers reuse "bricks" all the time. We reuse "bricks" on many levels. For instance, the database software, the web framework, the compiler for our language, third-party libraries, the operating system, and more.

But I'm sure you know that. So I guess I don't know what you're referring to by "bricks" in your analogy.

I think though that it could be that we don't know what the bricks are in software yet. Look at bricks: they are a commodity that you can buy at the store, they're very reliable (old technology that has been perfected), completely interchangeable, and their properties are known. Do we have something like that in software? Further, I would argue that we do have to re-engineer bricks all the time under some conditions. That is, where bricks are not available as a commodity, you have to build the bricks yourself. This is essentially an engineering problem.

To take it one step further: bricks actually don't scale that far without advanced architecture. You could make a quite simple house out of bricks if the house is small, just by stacking the bricks. But to make a multi-story house, or a very long wall, you have to start relying on architecture. That is, techniques for arranging those bricks that are also considered "problem solving". And this problem with scaling is what Alan Kay was trying to solve with OOP. Some way to "build an arch".

So I guess I don't agree because I think someone has to go deep and reinvent everything because we still don't know how to build software all that well.

Thanks for the discussion! Looking forward to hearing your response.


Hi Eric,

I was actually referring to mesh computing.


Your analogy is somewhat misguided: if you want to build a hut, you'll probably make it out of wood; a patio would be made out of concrete; a small house can be built out of bricks, although some people like them in wood too; a skyscraper will be mostly built out of metal; when built with bricks, larger buildings do generally use larger bricks; etc. "Real engineering" is not as easy and codified as software people tend to think, and there are indeed a wide variety of bricks that exist, with different structural properties. Also, most buildings these days are not actually built out of bricks; bricks are just added as an external layer for aesthetic reasons.

There are some "bricks" we have figured out in the software industry. At the systems design level, we are starting to have a very good selection of off-the-shelf database engines, state coordination systems (ZooKeeper, etcd, ...), load balancers, as just a few examples. At the program level, this depends highly on your programming language; I'd argue Clojure's data structures, for example, are really great bricks to build pretty much anything, which could be contrasted with e.g. Java, where there is no common, built-in way to represent data so everyone invents their own classes.

I definitely agree that you would expect your bridge engineer to look at the existing catalogue of off-the-shelf bricks and try very hard to use an existing variety before she starts designing her own. It does happen, though.


I think you took my analogy in a completely wrong direction. The point is not reuse, which is what you seem to be implying. The point is rather that it is better to achieve a complex objective by utilizing more of the same, instead of introducing bloat. From an engineering perspective, many small, simple steps are always better than a few giant steps.

This approach is discussed in a bit more detail here:

Definitely agree that composing small, simple things is better than building one large, complex one.

Yes, bloat is the bane of our profession. As soon as you task any dev team to get something done, they always deliver spectacular bloat.


I think your procedural paradox is really a case of confirmation bias or ingrained familiarity. I remember way back in the dark ages (8-bit processors), thinking about how unnatural it was to define the how rather than the what. Now it seems very normal because I have been doing it for thirty years. But I don't think it has any natural or intuitive advantage.

Furthermore, the 'how' can easily mask the 'why'. And that is critical information. I cannot count the number of times I have asked, "What is this code supposed to do?" In procedural programming it is a vacuous tautology that the code is the documentation. The code is telling us what the program is doing in painstaking, tedious detail. But it is usually not obvious what the code is supposed to be doing, or what the intention of the programmer was.

Also, I think you missed one of the key aspects of Haskell, and of Scala to a lesser degree: the type system. Programming to types and strong type inference in these languages make a whole category of bugs impossible.

Fun article to read. Thanks!


Very good article!

I think the procedural paradox is mostly an acquired taste. There's been some research in computer science education, most notably by the people behind the Racket learning environment, that clearly demonstrates that it's easier for most people to learn functional programming first and learn effects on top of that later. I strongly suspect people who learn in that order would not feel the paradox you describe.

I'd like to challenge the idea that Alan Kay meant cells as "tiny things". The analogy of computers on a network is probably a better one for programmers: computers are fairly big, self-sufficient things. Cells are tiny compared to a human body, but we're very far from that level of complexity yet; cells are also self-sufficient, can perform a lot of functions, contain the whole DNA sequence of the individual, etc. In that sense a few thousand lines of code for an object doesn't seem that much (though obviously it would require better source code organization tools than typical Java-style classes to define the behaviour of a single object of that size). Alan Kay has reflected on multiple occasions that one of the big mistakes they made was making objects too small.

I like your decomposition of FP in data, effect and computation. I think most FP enthusiasts will agree that FP is what happens in the "computation" space, and that the "effect" part is a necessary evil. By that token, the functional paradigm would be to try to organize a piece of code such that the majority of the code is in that "computation" space, and as little as possible is in the "effect" one. Elm is a great example of how far a language can go: the language can be called "purely functional" because all of the side-effects are happening in the runtime, away from the programmer's code. FP is not about removing side effects from the program, it's about removing them from the (visible) code.

I would argue that these paradigms are not, as we often talk about them, properties of languages, but properties of sections of code: it's very hard to look at a list of language features and decide whether the language is functional or OO or procedural or, really, anything. It is much easier to agree on these properties for a given piece of code. Language features can nudge in one direction or the other, but that's as far as they go.


Congratulations, your article condenses the three paradigms into just a few sentences, but I have to say it gives more information than bigger posts focused on only one of them :smile:



I'd like a language to support all the features. That way, I can have a functional interface on top of my procedural, stateless app and integrate an ORM that leverages OOP concepts. Like a peanut m&m.


Scala has pretty much every feature you can imagine, and some facilities to implement the other ones as libraries.
