Is Object-Oriented Programming "Dangerous"?

Rachel Soderberg ・2 min read

This morning I came across an article that struck me as a fantastic discussion topic: Why Are So Many Developers Hating on Object-Oriented Programming? by David Cassel. "Hmm, sure it has its quirks, but I couldn't imagine someone hating or outright avoiding it," I thought to myself. I read the article and had a few opinions, but I'll save those for the end.

According to Ilya Suzdalnitski, a senior full-stack engineer, Object-Oriented Programming is a "trillion-dollar disaster" whose programs "turn out to be one big glob of global state, [which] can be mutated by anyone and anything without restrictions". Suzdalnitski's article can be found over on Medium, if anyone wants to read it.

Cassel's article provides the details of his over-email interview with Suzdalnitski, allowing for a deeper explanation of why Suzdalnitski has taken this controversial position. He explains that he stepped away from OOP when he realized his return on investment was low for the time and study he had put into the practice; these days he writes in F#, JavaScript, and C# (using functional programming styles) and "could never find any [use cases for OOP]."

The most controversial claim in Suzdalnitski's article is that "OOP is Dangerous". He claims that OOP is for cheap, inexperienced developers and that "functional programmers are typically more smart, and more expensive." Suzdalnitski has received some backlash from the development community, with some even writing their own equally provocative articles, such as Developers Who Hate On OOP Don't Know How to Use It by Gary Willoughby.

So what do I think about these articles? First, I'd like to say Cassel took a heated topic and presented it in a very neutral way. I appreciate that I can read the article without any indication of where his opinion lies in the debate. As for Suzdalnitski's article, I believe it is an excellent opinion piece, and controversy can always be fun - but I do not agree that any one language, programming style, or framework is the be-all and end-all solution to every problem. I absolutely agree that OOP will not work for every need, but I disagree just as strongly that functional programming will work for every need.

I also do not think that OOP is reserved for cheap, low-tier developers. There are many incredibly intelligent software engineers working with object-oriented programming and it takes a significant investment of time and practice to reach a level of mastery. That said, I have never tried functional programming, so I can't speak to the differences in use and learning curve.

What are your thoughts on the article? Do you agree with Suzdalnitski? Do you disagree? I'm curious what you have to say!

If you'd like to catch up with me on social media, come find me over on Twitter or LinkedIn and say hello!


I have explained it here; I found his claims very funny. "Cheap" is really the wrong word. Functional programming is like building a rocket for the commute from home to office: of course it is expensive and requires great minds, but do we really need it? I would use a simple car without understanding E=mc², and that doesn't mean the car, or I, am cheap.

This is the exact same comment I left on a different article: dev.to/akashkava/comment/7fig

Coding consists of two major parts: one is logic, and the other is organizing logic. Consider logic as the contents of a book and organizing as a book library. A book library is organized by categories, and then further organized by either author name or title in dictionary order. Imagine if you had good book contents, but the book wasn't organized properly. Imagine if everyone organized the contents of books, and books in the library, in random order.

I consider a class a way to organize logic; you still have your functional ways to write logic, but a class is well understood by everyone, and it is easy to document and organize. It is also very easy to refactor the code, because the IDE knows all references to an identifier and can safely refactor it. This is basically the reason C++, Java, and C# became more popular: developers did not need to focus on how to write code correctly and were able to focus on business needs.
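A minimal sketch of this point, with invented names: the logic (an invoice total) is identical in both styles; the class merely groups it so tools can find, document, and rename everything related to it together.

```typescript
// The same logic written two ways. Type and identifier names are
// illustrative only, not from any particular codebase.
type Item = { price: number; qty: number };

// Functional style: a plain function over data.
const total = (items: Item[]): number =>
  items.reduce((sum, i) => sum + i.price * i.qty, 0);

// OO style: the same logic organized under a class, so an IDE can
// list all Invoice-related members and refactor them safely.
class Invoice {
  constructor(private items: Item[]) {}
  total(): number {
    return this.items.reduce((sum, i) => sum + i.price * i.qty, 0);
  }
}
```

Either form computes the same result; the difference is purely in how the logic is organized and discovered.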

Doing everything in OOP or everything in functions is bad either way. You should use OOP to organize your logic, which is extremely important for a healthy long-term library, and functions to improve your logic.

Functional programming closely represents scientific computation, so it is easy to use for rocket science; OOP is more for business applications.

More than anything, I see that they are expressing the frustration of not being able to do one thing easily in OOP that they could do easily in another paradigm. That doesn't make OOP a trillion-dollar mistake!!


Functional programming is not that abstract. Just the paradigm is different, so it feels unintuitive because most people are not familiar with its patterns.


That's what intuitive means, man. It means it's similar to something you already understand. So for human beings, there are three broad categories of intuitive. Biological/environmental, sociocultural, and trade/domain. Some things are intuitive because they exist in nature. Some things are intuitive because they are taught in childhood. Some things are intuitive after having experiences in a specific domain.

OOP is intuitive based on all three, arguably, but definitely based on the latter two. Most languages people use have subjects, objects, verbs, and adjectives. We learn those languages as children and use them with all our co-workers, not just other programmers.

Advanced math, however, is only intuitive based on the third.

Okay, you are right about intuition, but still, functional programming does not mean advanced math. At least not always.

OOP has strong logical and academic bases. But in day-to-day life, we don't use words like polymorphism or encapsulation.

Most functional code is about things like being side-effect free or using higher-order functions. Things like category theory are mostly proof of concept.
The Pareto principle applies.

In day-to-day life, polymorphism is when you get a can of Pepsi instead of a bottle of Coke, and the way we get our food in a restaurant is strongly encapsulated :)
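The restaurant analogy can be sketched in code; this is an illustrative example only (all names are invented), showing polymorphism as "any drink will do" and encapsulation as "the kitchen hides how the dish is made":

```typescript
// Polymorphism: the caller asks for "a Drink" and works with whatever arrives.
interface Drink {
  describe(): string;
}

class CanOfPepsi implements Drink {
  describe() { return "a can of Pepsi"; }
}

class BottleOfCoke implements Drink {
  describe() { return "a bottle of Coke"; }
}

function serve(drink: Drink): string {
  return `You were served ${drink.describe()}.`;
}

// Encapsulation: diners call order(); how the dish is prepared stays private.
class Kitchen {
  private prepare(dish: string): string {
    return `plated ${dish}`;
  }
  order(dish: string): string {
    return this.prepare(dish);
  }
}
```

`serve` never knows or cares which concrete drink it received, and callers of `Kitchen.order` cannot reach `prepare` at all.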

It doesn't matter... At the end of the day, writing OOP takes more time and more code, and it is harder to refactor.

For example


Easy to read and understandable; whoever thinks it's not shouldn't be programming.
The same 4 lines in OOP would be maybe 30, including 3 different interfaces and 4 different classes, and refactoring such a thing is much more complicated than adding one line that describes exactly what you want.
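The original example is not reproduced in this thread, but a short declarative chain of the kind being described might look like this (data and field names are invented for illustration):

```typescript
// Illustrative only: a small declarative pipeline of the sort the
// comment contrasts with a class-and-interface heavy OOP design.
const orders = [
  { id: 1, total: 120, shipped: true },
  { id: 2, total: 45, shipped: false },
  { id: 3, total: 300, shipped: true },
];

const shippedTotals = orders
  .filter(o => o.shipped)          // keep only shipped orders
  .map(o => o.total)               // project the totals
  .reduce((sum, t) => sum + t, 0); // sum them up
```

Each line states one intent, and changing the behavior usually means adding or editing a single step in the chain.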

Plus, look closely: lots of the leading frameworks are FP... Express, React, Redux, Flutter, SwiftUI, Compose, etc.

FP + Declarative = easy to read and understand

My point is that OOP is intuitive because it tries to model reality.

I don't deny the virtues of FP, I use it.

As well as OOP.


Writing a . before the name of a function is encapsulation; even in FP, you are using some OOP. Remove the . and write everything out, and see how many problems will arise!!!

By the way, you can do that in C#:

   list.Select(x => (x.Property1, x.Property2))
       .Where(x => x.Property1 > 20)
       .Select(x => x.Property2);

The biggest advantage here is strong typing: class relationships allow the IDE to provide IntelliSense and compile-time errors well before the code executes. These are further benefits of OOP; such things are possible in FP as well, but I don't think FP IDEs come anywhere near the level OOP IDEs have already mastered.

There is no end to such arguments. As I said at the beginning, and I will repeat: you have to use the best of both worlds!! Don't overdo it by writing everything in FP just to show off; it's not worth it.

But as someone else has already pointed out, that example is OOP—and, in fact, may only be FP depending on how the functions passed into the methods are actually implemented.

list is the object (the subject)
map, filter, are methods of the object (the verbs)

Yep! That's why I think a mix of both is usually best—at least for the code I write.

Please stop pretending that natural languages have anything to do with this useless and fully artificial notion of "objects" that any of the OOP definitions is dealing with.


What? I did not say anything about natural languages

Yep, sorry, I misread your answer in a context of this thread.


Suzdalnitski's article must be a joke.

When I read it, I feel like the intention is the opposite: to promote OOP.

Somebody who takes every single advantage of OOP and strongly discredits it with the most dubious claims, waiting for other people to jump to its defence.

Either that, or he is strongly misinformed and just wants to learn from the replies.

We need to be very responsible when treating topics of programming theory, the same way we treat tutorials. We should not make such strong statements, products of our empirical experience, without additional references.

I mean, somebody can be an expert in a language, in a lot of frameworks or programming paradigms, but in order to talk about the theory, everything should come with references, like in scientific papers.

To give just one example, the section The Problem with Real World Modeling is so wrong that it seems a joke. I am sure he cannot find a scientific paper stating that the real world is not hierarchical.

The world may seem not to be hierarchical, but the complexity of the systems that compose the world is organized that way (matter, chemistry, biological systems, social systems). For more on this, read the first chapter, "Complexity," of the book Object-Oriented Analysis and Design with Applications.

Not only is the world organized that way; scientists have also discovered that the human mind organizes information using similar structures. For more on this, read about semantic computing, or simply knowledge representation and the semantics of natural language. There you will find that OOP uses a subset of the semantic structures and relations that humans use to understand the world.

OOP is not the ultimate paradigm for modeling the complexity of real-world systems, but it has the two or three relations needed to represent them in computer systems.

For practical purposes, classes, objects, and the relationships between them have allowed us to represent most of those problem domains.

The information system that runs on a computer doesn't try to model the whole world, only the part of the real system related to its information.

The article is weak in most of its arguments, but he had the courage to bring the topic to the table.

It is very hard to deny the role of OOP in an engineering process, and one should be backed by solid arguments from science and philosophy to do so. It is not enough to point out that "it is more difficult".

We should not forget that information systems exist to solve a problem in a problem domain. Those are the functional requirements. And OOP is good at representing that problem domain.

It is a fact that most of the code (85%, some years ago) corresponds to non-functional requirements (persistence with data consistency, integrity, availability, scalability with parallel processing, redundancy, resilience to failure, etc.). Many languages, frameworks, and software paradigms fit some of those parts better than others. It is a matter of the effort invested in each of them.

In the end they may merge into just one stack; in the meantime we should learn how to profit from each one and combine them to solve our problems. There might eventually be a unifying paradigm or something like it, but it would combine the good parts of all of them. Or maybe not; but then... what would the new paradigm be like?

The new paradigm would still sit between how computers work and how we understand the world.


Wrong. The real world is absolutely, definitely not hierarchical.

Find a single scientific theory that is organised into any kind of a hierarchy. No such thing exists. And theories always converge to the best possible language to describe their problem domains.


Find a single scientific theory that is organised into any kind of a hierarchy

Why should I do that? I have provided references that you obviously have not read.

Making such strong and absolute assumptions is not a polite way to answer when somebody has taken the time to provide you with information.

The homework must be done. 😉

Nevertheless, I will play along and tell you.

Even if the complexity of the real world were not organized hierarchically, your mental representation of the world is.

But the world is organized that way.

By the way... not only scientific theories are organized that way. It is everything you can understand and describe.

And there is not a single hierarchy... there are many kinds of hierarchy, called holonymy, meronymy, hyponymy, etc.

It is true that those are basic relationships from our languages, and that's why our brains use those structures... but scientists think those semantic relationships come from the evolution of our brain to fit the way complexity is organized in the real world.

I have provided references that you obviously have not read.

I did. They're irrelevant.

Once again, we're talking about the real world problems here. And there are very few problems that can be adequately represented in terms of communicating objects.

your mental representation of the world does.

No, it does not. See my example - none of the scientific theories, which are exactly the distilled, perfected ways of representing the real world, use any of these abstractions.

But, by all means, keep trying to sledgehammer entirely unfitting real-world semantics into your imaginary, worthless ideal. Good luck. But you have to understand that nobody thinks this way. Even the most fanatical OOP zealots are not actually thinking in terms of communicating objects; you always have to waste tons of effort to squish your own mental models into this entirely artificial, unnatural paradigm.

I think your answer is somewhat emotional. One piece of advice: never discard any information, and be prepared to confront your beliefs.

There is no gain in ignoring new knowledge. Or there might be... but it is not a gain in knowledge.

I don't see your examples. Do you have another source?

Look, scientists have classified all living beings into just one hierarchy... Do you think that would be possible if the complexity in nature were not organized that way?

The same happens with the whole world. The resemblance is so high that since ancient times philosophers have had the idea that concepts like "classes" exist beyond reality, and that reality is a materialization of them.

Today scientists know that there are common patterns in the organization of the real world's complexity, from the quantum world to the big galaxies.

You might think that not everything is covered by OOP, but that's because it uses only a few relations from the theory of knowledge.

Check the second reference I provided and you will see some relations that are implemented in other paradigms. Some of them fit the current state of technology better than OOP, but the statement that they are better at representing the real world, or worse... that OOP doesn't represent the real world, is outrageous.

The other programming paradigms better represent some parts of the problem in terms of the world inside the computer. There is no such thing as an immutable object in the real world.

And everything in the world that you can understand can be classified, has identity, and behaves according to its state.

More things can be added to the implementation of OOP, but the theory it is based on has solid foundations.

I don't see your examples. Have you another source?

I gave you a very wide choice, pick any. If you want something very specific - ok, let's go with, say, classical mechanics. Its language is anything but OOP, and yet it describes systems perfectly and naturally.

Take any other scientific theory, and you'll see the same - languages are declarative, they're defined as sets of rules rather than some weird "objects" and their interactions.

Look, scientists have classified all living beings in just one hierarchy...

Hint: they did not. Good luck digging out of a mess which is the current and historical prokaryota classification, for example.

Hierarchical taxonomies are very rarely useful, not to mention that they're just that, taxonomies, they bear no operational semantics whatsoever.

And everything in the world that you can understand, can be classified, has an identity and a behavior.

Now, that's a perfect example of a tunnel vision. Most of the things are rules, not interacting entities with states. And even if they can be described as such, it's more often than not the worst possible representation that is insanely hard to deal with, while a simple set of rules would define the behaviour of your entire system for its whole life span, trivially.

You are perfectly right.

He is mistakenly thinking that the hierarchies of the scientific world are classes. They are only collections of entities with some common traits.
Hierarchies have no methods, and it is not rare for things to fit not just one category but more than one (is light a wave or a particle? Oh, it's both).

I think you are mixing things up. I would never think that hierarchies have methods. How could you infer that from the previous conversation?

I would never think that hierarchies have methods.

What are you talking about?

My point is that trying to even get a proper hierarchy, even in the one and only problem domain where real-world inheritance exists, is still a fool's errand, and it always ends up in a massive mess. I referred to the prokaryota taxonomy for a reason - it suffered revision after revision and is still very far from adequate. Genetic mapping proved that all the previous hierarchies built upon visible phenotypical differences were wrong.

As for "methods" - the OOP way of modeling the reality is to think of it as objects communicating with messages. And it's wrong, through and through. Only a tiny proportion of mostly irrelevant problem domains fit well into such a model.

Take any real-world problem and try to represent it in the form of hierarchical taxonomies of communicating objects. Then compare the monstrosity you created with the natural language of that problem domain. You won't see any similarity whatsoever. The real world does not speak in objects; real humans do not think in objects.

All of the OOP methodology is 100% synthetic and does not reflect real world in any way. Even, as I demonstrated above, in the one and only problem domain where an actual real world inheritance is present.

What in the heck are you talking about? Natural languages are full of objects!

Those two sentences contain at least two. "You" and "languages."

People organize into hierarchies. Whole nations and thousands of companies are organized in hierarchies.

All of these things exist in the real world and are problem domains.

Taxonomies are rarely useful? Then why are there so many?

Only if you stretch a definition of an object far beyond any scope OOP ever dared to reach. Only if you allow multiple parallel non-rigid, evolving hierarchies. Which again destroys anything OOP could work with.

The notion of objects is useless and does not help in any way to build models for any of the real world problem domains.

Man. Stop guessing and read the references I gave you.

The current understanding of the world's origins is that everything comes from a singularity and a set of natural rules.

Everything in the universe that you can recognize comes from something of lower complexity, or from the composition of entities of lower complexity that were formed the same way.

The fact that humans have not been able to classify something only means that there is a lack of information about that specific entity.

Of course one object belongs to many hierarchies. That is because of composition; otherwise all the entities of the universe would belong to one big hierarchy of inheritance.

Humans classify everything into a hierarchy of knowledge. Have you ever opened a dictionary? Everything is defined by reference to a concept of lower complexity plus the distinctive qualities of the new concept.

A well-known exercise is to open Wikipedia, click the first definition referenced on each page, and count how many pages it takes to reach the top of the hierarchy.

You are denying a whole branch of computing called "semantic computing".

There is a lot to learn; the world wasn't born today, and most of the theory used in informatics comes from hundreds of years ago.

A word of advice - in the future, avoid exposing your superficial and totally misguided delusion of how the world works when you're trying to argue with a physicist.

Let me point you to a far more important source than any of the incoherent ramblings you've cited here - the fundamental Algorithmic Information Theory. You may have heard of it in the context of Kolmogorov complexity. Once you learn it, you'll see how horribly wrong your assumption of a simple initial state plus a set of rules is, and that no complexity in this universe would have been possible without a constant input of true white noise.

And that's just one little thing where you're totally wrong. Don't even let me start on hierarchies and objects again, you clearly do not know what you're talking about.

A word of advice - in the future, avoid exposing your superficial and totally misguided delusion of how the world works when you're trying to argue with a physicist.

I must recognize that I am not an expert in any of the fields we are dealing with here. I am just an engineer, and that means trusting the knowledge and tools developed so far by academia and building solutions with them.

I reacted this way - and I apologize for it - because the author of the article - unknown to me - is attacking a whole set of academic and industry methods and tools without a basic understanding of most of what he criticizes. In doing so, he also questions the way humans acquire, organize, and use information.

In practical terms, the discussion of how the world works physically doesn't fit here in its full extension. Only the part concerning the information it contains does, and only the part related to the problem domains we deal with.

I have a strong feeling that our brain has been optimized by "evolution", or some equivalent mechanism, to recognize and classify the information corresponding to what the world contains. With programming paradigms we are just trying to find abstraction tools to communicate that knowledge to the computer.

One programming paradigm fits that task better than another, but there would be a breaking change if you changed, for example, the basic concepts of the computer you are trying to program. Would all those programming paradigms work the same in quantum computing? Or on a computer that works like the brain? I guess not.

So pretending that reality works like a programming paradigm is a wrong way of thinking. Paradigms view reality restricted to a particular scope; they are made to be broken, not to be taken as the ultimate and immovable reality. That would be creating a dogma.

Kolmogorov complexity and the branch of Algorithmic Information Theory deal with the generation of objects by means of computation. Do you really believe that our reality is being generated by an information system? Maybe in JavaScript? ;-)

You might think that stochastic processes could also be involved in the creation of objects of greater complexity. Sorry, I have used statistics, but I don't believe in processes of a truly random nature. Calculating the probabilities of a dice throw is just a way to deal with the complexity behind all the mechanics.

By the way... can you give me a reference for this?

no complexity in this universe would have been possible without a constant input of a true white noise.

That's a good argument for more than one of the world's religions. It is very interesting.

Being a physicist, you will like the idea of a Theory of Everything: just one formula to explain the basic mechanics of the whole universe. But your mind would not be enough to derive from that formula how a share moves in the market.

Do you see it better as a big real-world state, immutable within the Planck unit of time, that is passed as a parameter to one function and gives the state for the next time unit?

I must recognize that quantum nonlocality would favour that. But now that we have a better Standard Model, including elementary particles for the forces... that means the objects also have their basic units of behaviour bound to their matter (shared state), right?

Well... in practical terms, at some level it is actually irrelevant whether the world is discrete or continuous, or even whether it is driven by a big formula or by more discrete rules. There is one organization of complex systems in the stock market, another in biological systems, another in social systems, etc. Humans started learning top-down, not bottom-up, from practical levels. The context is systems theory.

The organization of complexity has common patterns, and the basics of those patterns were studied by science; somewhere along that line some of them were taken up by OOP. OOP didn't fall from the sky, and I am sure FP didn't either.

Regarding hierarchies... ask why Amazon and other online shops use them as facets and mix them in faceted search. It is because they are everywhere... and objects are trapped in many of them.

I never said that all objects belong to just one hierarchy. Humans also use hierarchies in faceted classification, in all schemes of knowledge organization. The assertion that the real world has no hierarchies is simply wrong.

is attacking a whole set of academy

Academia is not behind any of OOP. Academic computer science was never really interested in OOP, and by now it's long in the past.

I have strong feeling that our brain has been optimized by the "evolution" or any equivalent mechanism to recognize and classify the information corresponding to what the world contains.

Just keep in mind that this classification has absolutely, totally, 100% nothing to do with any of the synthetic OOP classifications.

We do not classify things into rigid hierarchical taxonomies. As soon as you come up with a programming paradigm that operates on the complex graph classifications our minds deal with, it will become relevant to our discussion; but OOP is nowhere close to such a hypothetical model.

Do you really believe that our reality is being generated by one information system ?

Reality is an information system, by definition.

And, since it's heavily based on truly stochastic processes, we cannot say it's "generated" (as in computed from some initial state with a fixed set of rules).

By the way... can you give me a reference for this?

Pretty much any textbook on the Algorithmic Information Theory.

The very definition of the amount of information in a system is exactly this: the size of the smallest possible algorithm that reconstructs the entire system. If we could compute any future state with 100% accuracy using only the initial state (which, as you pointed out, is just a singularity) and a set of rules (the fundamental laws), the amount of information in the whole Universe would be no more than the few pages needed to write down the fundamental laws. Which is not what we observe.

The moment you add true randomness to a system, you have an infinite source of information, because white noise is incompressible: it contains exactly as much information as the number of data points you sampled.

A system fed with an infinite source of information can evolve, can generate new complexity. That's why the interpretation of quantum mechanics that relies on true randomness is so important - it explains why the Universe is evolving.

You can think of it as two kinds of processes.

One is fully deterministic:

NextState = F(PreviousState)

where F is a pure function (in the mathematical sense), and this function is exactly the minimal algorithm - the whole information your system contains (along with the state_0 it all started with).

A stochastic process is more interesting:

NextState = F(PreviousState, Noise)

And the amount of information in the next state will be state_0 + F, added to the size of the true Noise consumed by the system. The amount of information (and, therefore, the complexity) of this system can grow indefinitely.
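The two processes described here can be sketched as code. This is an illustrative toy, not physics: the function F and the state encoding are invented, and the only point shown is that a deterministic system is fully fixed by its initial state, while a stochastic one also consumes external noise at each step.

```typescript
// A toy transition function standing in for F (purely illustrative).
const F = (state: number): number => (state * 31 + 7) % 1000;

// Deterministic process: the entire future follows from state0 and F alone,
// so two runs with the same inputs always agree.
function evolveDeterministic(state0: number, steps: number): number {
  let s = state0;
  for (let i = 0; i < steps; i++) s = F(s);
  return s;
}

// Stochastic process: each step also consumes fresh noise, so the system
// keeps absorbing new information; replaying recorded noise reproduces a run.
function evolveStochastic(
  state0: number,
  steps: number,
  noise: () => number
): number {
  let s = state0;
  for (let i = 0; i < steps; i++) s = (F(s) + noise()) % 1000;
  return s;
}
```

With a zero (or recorded) noise source the stochastic run collapses to the deterministic one; with a true noise source its trajectory can no longer be reconstructed from `state0` and `F` alone.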

Do you see it better like a big real world state, immutable in the plank unit of time that is passed as parameter to one function and gives the state for the next time unit ?

See the second case above - it's a function that consumes the previous state and a huge amount of true random noise to produce the next state. Without this noise we'd never have any observable complexity in the Universe.

ask why Amazon and other online shops use them as facets and mix them in the faceted search.

Did you notice that these hierarchies are not rigid in any way?

And OOP cannot handle non-rigid, complex hierarchies.

The assertion that the real world has no hierarchies is simply wrong.

Real world does not have this kind of hierarchies that OOP is built upon.

My argument is that the kind of hierarchies OOP is built on are whichever kind of hierarchies the people writing the code choose to organize objects into. We're not actually talking about the nature of Reality-with-a-capital-R. We're talking about the way everyday people understand reality, and about how a person might tell a computer to do certain things. That is all this is.

Academics and theory are fascinating, necessary things. But I'm not a computer scientist or a physicist, and I don't need my code to prove the big bang or whatever. I write code for a living, and for that I need to stay in touch with the practical purpose of the code I'm writing. Lambda calculus is not practical for me. Expressiveness with plain words that make clearly readable sentences most people can understand is practical.

Is there overly obtuse, counterintuitive, impractical OOP? Yes. Let a million people loose on anything for a few decades and see how much noise is in the signal. But which is a simpler word more people have an existing intuitive grasp of: inheritance or currying? That's the fundamental appeal of OOP for me, and the thing I want to see out of any paradigm before I go calling it The One True Way. Which I don't do anyway, because there isn't one: my hope is to adopt the best both have to offer and avoid the worst.

I have a theory that that very benefit is part of the problem with OOP, though. As a new programmer it can make you feel like you really get it before you have the experience you need, and you go wild, drunk on unnecessary complexity.

That's a good point. In most used problem domains use computers to help us with the stuff that our minds can not do: accurately calculate with speed and storage a big amount of data.

In such cases the they become a part of our information system, an extension of our brains. In such domains we trend to analyze them according to the semantic structure more than the functional behavior. Like commerce, storage, publishing...

But there might be domains where the accuracy of the interaction with physical objects should be the priority. Like automation and communication.


Hello, you have also misunderstood the core principle of OOP. It's not about hierarchy. It's about objects with internal state which send messages to each other. Here are Alan Kay's original thoughts on OOP, summarized: wiki.c2.com/?AlanKaysDefinitionOfO...

Notice, no mention of hierarchies. Just objects, sending messages, belonging to classes.
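That message-centric view can be sketched in a few lines. This is only an illustration of the idea, not anything from Kay's own writing; `Counter` and its message names are hypothetical:

```typescript
// A Kay-style object: private state, reachable only by sending messages.
type Message = { kind: "increment" } | { kind: "report" };

class Counter {
  // Internal state is hidden; nothing outside can mutate it directly.
  private count = 0;

  // The only way to interact with the object is to send it a message.
  receive(msg: Message): number {
    switch (msg.kind) {
      case "increment":
        this.count += 1;
        return this.count;
      case "report":
        return this.count;
    }
  }
}

const c = new Counter();
c.receive({ kind: "increment" });
c.receive({ kind: "increment" });
console.log(c.receive({ kind: "report" })); // 2
```

Note there is still no hierarchy anywhere in this picture: just an object, its hidden state, and the messages it accepts.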


Hello, the hierarchies get built when you try to describe the complexity of the problem.

You cannot describe 5,000 objects that represent sale orders and 2,000 objects that represent purchase orders one by one. You describe a set of objects with a common representation: that is the class. And since there is stuff in common between them, you create Order with the commonalities and express the minimal differences in SaleOrder and PurchaseOrder. That is inheritance. When you put all of that together, there is a hierarchy.

There is not a single hierarchy; subsets of them are linked by composition, when an object is built from objects of other hierarchies (for example: a sale order contains order items, each one composed of a product, a quantity, and a price). But the product might belong to its own hierarchy that describes the different classes of products.
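The order example above might be sketched like this; the class and field names here are illustrative, not from any real codebase:

```typescript
// A product belongs to its own hierarchy, separate from orders.
class Product {
  constructor(public readonly name: string) {}
}

// Composition: an order item is built from a product, a quantity, and a price.
class OrderItem {
  constructor(
    public readonly product: Product,
    public readonly quantity: number,
    public readonly price: number,
  ) {}
}

// Inheritance: Order holds the commonalities...
abstract class Order {
  protected items: OrderItem[] = [];

  addItem(item: OrderItem): void {
    this.items.push(item);
  }

  total(): number {
    return this.items.reduce((sum, i) => sum + i.quantity * i.price, 0);
  }
}

// ...and the subclasses express only the minimal differences.
class SaleOrder extends Order {
  constructor(public readonly customer: string) { super(); }
}

class PurchaseOrder extends Order {
  constructor(public readonly supplier: string) { super(); }
}

const sale = new SaleOrder("ACME");
sale.addItem(new OrderItem(new Product("Widget"), 3, 10));
console.log(sale.total()); // 30
```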

Also, many entities that generate financial operations tend to be linked to accounts. And the accounts are classified in a hierarchy (the chart of accounts).

Depending on the functional requirements, an OOP design might or might not represent many of these hierarchies by means of objects. Nevertheless, the hierarchy must persist somehow in the data structures... because they represent entities of the problem domain.

So OOP doesn't merely have mechanisms to describe hierarchies, and hierarchies aren't merely a way to organize code by reusing previous definitions. Those hierarchies actually exist in the complexity of the real world.

The original article says:

OOP is not natural for the human brain, our thought process is centered around “doing” things — go for a walk, talk to a friend, eat pizza. Our brains have evolved to do things, not to organize the world into complex hierarchies of abstract objects.

And that is an incorrect assertion. The human brain did evolve to work with abstractions and generalizations, and natural language was formed with those structures built into it. This process requires classifying information. So yes, humans constantly organize their mental representation of the world into hierarchies.

When you ask a child to describe an object, they don't answer with the set of rules that can be applied to it. They answer with the nearest abstract concept and the characteristics that distinguish it.

Adult: What is this ?
Child: It is a chair.
Adult: And what is a chair ?
Child: An object with four legs and a back that people use to sit on.

The child doesn't remember the details of the 20 chairs in their classroom if they all seem similar for their practical purpose. The brain is somehow optimized to remember only useful information.

Basically, in the real world, behaviour is deeply bound to state.

OOP focuses on structure and state; FP focuses on behaviour.

Humans use hierarchies in their abstraction mechanisms for remembering and reasoning. They recognize new concepts by identifying differences in structure (state) and behaviour.

I am open to the idea that a representation that focuses on behaviour might be better for the performance of our machines and the scalability of our systems. But then a bigger gap might arise in the process of communication (programming), and that might be the reason why more expertise seems to be needed for FP.

There is no need to distort the notion of reality. They are just different paradigms for describing it.


Here's a small snippet from my book Refactoring TypeScript on this topic that sums up my thoughts:

It needs to be brought up: What's better - object-oriented programming or functional programming?

For starters, most people don't understand what OOP was intended to be in the first place. Similar to how Agile today is usually misunderstood (e.g. just because you are having daily stand-ups, using story points, kanban, etc. doesn't mean you are doing Agile).

Alan Kay is considered the father of OOP, in a sense. In a certain email, he gave some frank explanations about what OOP was supposed to be.

"I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning -- it took a while to see how to do messaging in a programming language efficiently enough to be useful)...

OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things...

But just to show how stubbornly an idea can hang on, all through the seventies and eighties, there were many people who tried to get by with "Remote Procedure Call" instead of thinking about objects and messages."

For those familiar with microservices, the actor model, and other advanced programming paradigms, your Spidey sense is tingling. These are actually more closely related to true OOP.

So, is FP better than true OOP?

I don't think so. I think they both have their merits. Languages like TypeScript are embracing both paradigms and allowing developers to use the tools and methods that work best for the given problem!


Yes, the truth is often somewhere in-between. I guess what I’m about to say isn’t a big surprise (coming from a guy who mostly writes about Buddhism), but the middle path is often the best choice.

I find that TypeScript allows me to write in an FP style when I want to do data manipulation and in an OO style when I want to encapsulate state changes. And due to its C-styled syntax, I can wrap the pure functions in imperative code so that OO developers can contribute without feeling like they're being pushed out of the way. It's code that everyone can enjoy! :)
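A rough sketch of that mix, with entirely made-up names, might look like this: a pure function for the data manipulation, wrapped by a small class that keeps the state change contained.

```typescript
// FP style for data manipulation: a pure function over plain data.
const applyDiscount = (prices: number[], pct: number): number[] =>
  prices.map((p) => p * (1 - pct));

// OO style to encapsulate state changes behind a small interface.
class Cart {
  private prices: number[] = [];

  add(price: number): void {
    this.prices.push(price); // the only mutation, kept inside the object
  }

  // Callers see only the result of the pure pipeline.
  checkout(discountPct: number): number {
    return applyDiscount(this.prices, discountPct).reduce((a, b) => a + b, 0);
  }
}

const cart = new Cart();
cart.add(100);
cart.add(50);
console.log(cart.checkout(0.5)); // 75
```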


I think part of the problem is that people say OOP but mean imperative programming and that imperative object-oriented programming seems to be the default nowadays. In reality, FP and OOP are more orthogonal than opposites, and it is possible to write very functional OOP, just not in Java*.

My observations, by the way, are based on the set of people I have worked with and, therefore, absolutely not statistically relevant, so here goes nothing.

Different developers value different things, and develop different skills and focus areas. Some developers seem to value most highly having code that just does what it is supposed to do. Since most of the code we are taught and exposed to out there is imperative, and maybe object-oriented, that's the current they usually follow. If they can just write one instruction after another and read that style all the same, bothering with things like function composition, purity, referential transparency, monads, etc. seems just over-complicating things. They can write programs in a style they are already familiar with and, at the end of the day, the program runs the same.

Some other developers seem to be looking for the Holy Grail of programming, a legendary piece of code so perfect that no mortal eyes can withstand it. Having your code do what it is supposed to do is nice, but have you ever felt the joy of writing code that reads like poetry, and not like a recipe from your grandma's baking book? Each piece must be abstracted at just the right level, and everything must be composable and testable. These programmers write code very differently than the first group, focusing a lot on how the code should do what it is supposed to do.

Some of these developers discover FP and look at the "horrid" code that developers in the first group write and think that the problem lies within (what they call) OOP itself. Afterwards they go around writing articles on how much OOP sucks. But it is fair to say that others just figure out how to write pleasant imperative code.

However, the problem is that imperative OOP is the default, so programmers who don't (yet, at least) have a large interest in how the code is supposed to do things stay in this sphere, while some of the developers most focused on the how feel attracted to more declarative paradigms and, not surprisingly, find a lot of like-minded programmers in that sphere.

Basically, my point is that FP forces you to think in a different way about coding than the default, so people that think more about how code should be written are more likely to make the jump. I believe that if declarative functional programming was the norm, we'd see a lot more sloppy FP and articles saying how dangerous FP is and that OOP is the right way.

* The last line of Java I wrote was in Java 1.8, for all I know this assertion could be incorrect nowadays.


Developers who hate on OOP don’t know how to use it.

I perceive objects as configurable tools for processing data throughput, i.e. processors that can be configured by the constructor and are thereafter immutable. No obstruction, no getters or setters, no data. Data passes through the object only. I know it's very similar to functional programming.
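One way to read that style, sketched with a hypothetical `Scaler` processor:

```typescript
// A "processor" object: configured once via the constructor,
// immutable afterwards, and holding no data of its own.
class Scaler {
  constructor(private readonly factor: number) {}

  // Data only passes through; the object never stores it.
  process(values: number[]): number[] {
    return values.map((v) => v * this.factor);
  }
}

const doubler = new Scaler(2);
console.log(doubler.process([1, 2, 3])); // [ 2, 4, 6 ]
```

The constructor argument is the entire configuration; every call to `process` is as predictable as a pure function with the factor partially applied.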


That sounds like a very reasonable way of using OOP. However, there is a noticeable difference when you use a functional language.

In OOP languages, whether or not an object is immutable is really up to the discipline of the programmer, and neither the language, the type system, nor the compiler will help you maintain that discipline. Anytime you write code that interacts with a data structure, you must mentally evaluate whether the data was really left alone or not. And that adds mental overhead. You might even end up looking at the internals of the object/class to assure yourself that it's safe.

In contrast, with a functional language, where immutability is the default, you can rest assured that all the data is immutable. You'll never have to check the internals. The compiler will let you know. This alone eliminates 80% of the mental load of working with any particular part of your code. You no longer have to consider the entire application context while solving local problems. You really can't get that without the guarantees provided by a functional language design.
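TypeScript can approximate part of this guarantee with `readonly`, though unlike in an FP language it is a compile-time check only and must be opted into; a small sketch with hypothetical names:

```typescript
// Readonly fields let the compiler, not discipline, enforce immutability.
interface Point {
  readonly x: number;
  readonly y: number;
}

function translate(p: Point, dx: number, dy: number): Point {
  // p.x = 5; // compile error: cannot assign to 'x' because it is read-only
  return { x: p.x + dx, y: p.y + dy }; // return a new value instead
}

const origin: Point = { x: 0, y: 0 };
const moved = translate(origin, 3, 4);
console.log(origin.x, moved.x, moved.y); // 0 3 4 (origin is untouched)
```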

On the flip side, this does force you to solve problems differently, though. If you're used to mutation and side effects as the primary way you solve problems, then switching to a functional paradigm will require you to dramatically rethink seemingly simple algorithms. FP has an answer for all the problems OOP can solve, but the methodology is usually different enough to put people off from switching.


My data is also immutable in my applications; I pass method parameters only by value. Only collections are mutable. It really requires discipline, but the result is pure, SOLID-compliant code.


You must first break your real-world problem down into totally artificial, unintuitive "objects" communicating with each other before you can apply this method. And the funny part is that most real-world problems do not really fit well into such a representation. Not that FP offers any better tools, of course; both ways are almost always wrong.


Suzdalnitski's article reads like pretty stereotypical functional-programming zealotry to me. FP has its virtues, but it also brings its own unique set of issues. The same applies to OOP and just about any other programming paradigm you can imagine.

Ultimately, the problem isn't OOP, or FP, or any of that; it's people misusing and abusing those paradigms. Yes, you can shoot yourself in the foot with OOP, but you can do so in any language that's actually realistically useful (and many that aren't, too).


The chances of shooting yourself in the foot with OOP are far greater than with FP... That's my main issue with OOP.
You need to think 100 times and write much more code, as Suzdalnitski showed in his article, therefore yes, OOP is really dangerous.

When I first started programming, most of my code was spaghetti because of OOP; the more I leaned towards FP, the cleaner my code became.


You need to think 100 times and write much more code, as Suzdalnitski showed in his article, therefore yes, OOP is really dangerous.

The exact same could be said about FP when you don't have much experience with it. I'm not saying OOP is any better, just that this particular argument (which I hear all the time) can be applied to any programming paradigm if you don't have solid experience with it and proper training in its use.

Some people just 'get' FP, and it comes as naturally to them as breathing once they learn about it. The same is true of OOP, and the traditional procedural paradigm, and state machines, and almost any other programming paradigm you can think of.

When I first started programming, most of my code was spaghetti because of OOP; the more I leaned towards FP, the cleaner my code became.

That just means that you happened to get better at writing clean code as you learned FP. For all you know, you could have just naturally developed better coding habits independent of the fact that you were learning and applying FP principles.


People should stop comparing these and focus on the best use cases for each. There is no God tool in a professional world. Visit any professional craftsman's workspace and you will see very interesting gadgets for very specific tasks that you would handle with a hammer and screwdriver at home. God tools are for amateurs who are okay with solving average problems with ease (edit: while not minding the extra time and effort spent).


Suzdalnitski says in the article: "Some might disagree with me, but the truth is that modern Java/C# OOP has never been properly designed. It never came out of a proper research institution"

And why on Earth would they even need to come out of some institution? Languages are created for specific needs, not to stroke the ego of some computer scientist who has patented/perfected the One True Language. I do not care about purely theoretical aspects implemented in language X that is supposed to make everyone so much more productive... but only so long as you understand quantum entanglement / advanced lambda calculus / the origin of the Universe. Rather: does it fulfill my needs? Can I implement something elementary in it, like mutating some data, without working around artificial limitations of the language? Does it expose a competent API, and can it integrate easily with, say, C libraries? These are the questions I ask myself when picking a language.

The truth is, the OOP aspects of the languages Suzdalnitski mentioned work absolutely fine for a lot of people (dare I say, the majority of people using those languages), and using them properly never hurt anyone, which is why most desktop/business software of today is written in, guess what, C++, C# and Java.

He just makes himself sound like every purist, elitist nerd with a CS degree out there who discovered FP and deems it the one true solution to every problem in programming. I hate this kind of people with all my soul; they only make programming worse.


Languages are created for specific needs

Well, so don't use Java and OOP for everything; they are designed for specific needs.

Functional programming is based on maths, and maths is used in every science to describe how the universe works. So functional programming can be used for everything.

Java (as an OOP language) is not worse, it is limited, because it was created for specific needs.


I think a lot of the "controversy" comes from not understanding the basic problem, which is state management.

OOP was designed to help manage state at a time when the issue was quite new.
It does this by grouping state into objects. This organization comes at the hefty price of lots of bootstrapping code, but at the time it was implemented it was revolutionary. The result was Java, C#, C++, etc.

FP does away with all that bootstrapping and suggests just writing the bare bones of what you need and then passing state along as needed. State is kept only at the code's entry point and then passed along.
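A toy sketch of that shape, with state threaded through pure functions from a single entry point (all names hypothetical):

```typescript
// State lives only at the entry point and is passed along explicitly.
interface AppState {
  readonly total: number;
}

// Pure functions: each takes the current state and returns a new one.
const deposit = (state: AppState, amount: number): AppState =>
  ({ total: state.total + amount });

const withdraw = (state: AppState, amount: number): AppState =>
  ({ total: state.total - amount });

// Entry point: the only place that holds state, threading it through.
let state: AppState = { total: 0 };
state = deposit(state, 100);
state = withdraw(state, 30);
console.log(state.total); // 70
```

Because `deposit` and `withdraw` never touch anything outside their arguments, each one can be tested and reused on its own, with no bootstrapping around it.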

This video goes over the history of state-management fairly well imho:

The question really should be: which set of ideas is best for managing state?

IMHO it is FP, since it takes away most of that pesky bootstrapping, which enables code reuse, testing, extraction into libraries, and verbose, action-oriented code.

On top of that, you do not need as much specialization in your language to support FP. Which takes loads of complexity away.

In theory FP concepts could be applied in Basic using goto...


OOP is not "dangerous", no more than FP is. It is just inadequate: it is almost always a very poor match for the actual semantics of real-world problems. Not surprisingly, FP is also very rarely an adequate fit.


In the world of software development you (almost) never need to worry about your technology being frowned upon, because give it 15 years and a new batch of genius newbies will be reinventing it as better than the standard technology they were trained on.

In a decade and some, a brilliant person who hates functional programming will invent something better... probably a new way to do JavaScript programming... it'll have two main objectives, Prudence and Reusability, each of them solving two problems found in functional programming.
They will probably call it the Micro-Ecology methodology. (It will work well with microservices and micro-frontends... and it will encourage environmentally friendly concepts!)
The only time you need to worry is if your favored technology is too RAD and doesn't need a command prompt... then beware.


The only reason to hate one or the other is if you're stuck working with someone else's code. If you work at a place that only provides Craftsman tools but you prefer Snap-on, get a new job rather than try to convince the whole Internet that Craftsman sucks.


There is no silver bullet, devs! Use the paradigm(s) that best fits/abstracts your problem/use case. Each one has pros and cons, and a great software engineer should be able to choose the best given its tradeoffs (sometimes both). That's it 🙂


Is it possible that Suzdalnitski's article was edited? I don't see the "functional programmers are typically more smart, and more expensive" line. It would be a good edit if it happened, but it should be mentioned as an edit at the end.


That was an email response from Suzdalnitski during their interview; it's toward the end of Cassel's main article.


It is... if the guy who coined the term said it is slippery.
If OOP is the way we organize our thinking, then it has completely failed us, since the original idea was about objects and their connections.
The version of OOP we are using is about classes and their dependencies. Yes, we have to make object connections through class connections. How many objects do we have in our app, how many classes do we have in our code, exactly how are they connected, and if we want to change an object connection, how do we do that? If we cannot answer those questions with precision, then fine, but they will come back as bugs anyway. The old folks could answer those questions with procedural programming. Of course you can throw back that those who cannot handle those bugs are cheap developers, but that should be another topic.
Hint: if OOP is broken for us, it is broken for others too, especially the ones who invented it. Look them up; some of them have been trying to fix OOP for a long time. It is us, with our practical thinking, who just move on with whatever we have in hand. Those guys invented things because they did not just move on.


We all have biased perspectives on certain subjects. I think that characterizing OOP as inherently mutable is kind of naive. A lot of the data can be immutable; `Collections.unmodifiableSet(nameOfSet);` is one example among many.


I don't agree with his premise even though I might share some of the same experiences. I have switched to FP. I think I was using some bad instructional material or a bad mental model for OO that basically made objects just imperative programming containers. And I always found that it became increasingly harder to maintain over time. I wish I had discovered better ways of doing OO back then. For example, I've since seen a lot of good stuff from Sandi Metz. In any case, OO is a paradigm that has gotten the job done for a long time. I think if you stay at it long enough, you can figure out how to avoid most of the footguns and write good software with it. But I found FP before I got to the equilibrium point, and now prefer it.

In my opinion, one of the biggest downsides of FP right now is that most instructional materials insist on bringing abstract math into it. It is not needed. And in fact, I avoid putting my own logic in terms of math categories. Because then I am requiring all future readers to be able to think in terms of abstract math to read the code. Which is a non-trivial burden to take on, definitely not friendly to new devs. The way we do FP is probably simpler than OO. I have an article about that approach here.

I think both OO and FP have potentially harmful approaches. In the end, they are just tools to get jobs done. It is all in how you use them.


I remember when OOP was gaining popularity. The article is right in many ways but also has some glaring flaws.

The issue with OOP lies with programmers using it in unnecessary ways. It is not the fault of OOP that people add unnecessary abstraction.

For instance, if your light isn't turning on and you decide to become an electrician to learn how to turn on the light, when all you had to do was flip the light switch, it isn't the fault of the light switch that you went way overboard. Yet if you are wiring the electricity in a high-rise, you might need to do a bit more than slap light switches on the walls to get things to work right.


Ah yes, from Ilya Suzdalnitski, the same author as "Functional Programming? Don't Even Bother, It's a Silly Toy".

Don't feed the troll...


The article on functional programming is ironic, not a denigration of functional programming; quite the contrary... Read it.


I had no idea he had written a similar article against Functional Programming! I'll have to read this when I have another opening to look at Medium (silly paywalls). Thanks for sharing!


A software engineer saying "I'm only going to use OOP" or "I'm only going to write functional code" is like a carpenter saying he is only going to use a miter saw, regardless of what he is building. Just like a carpenter, a software engineer should use the best tool for the job, regardless of which paradigm they prefer.


The situation with OOP is more complex than it being a beginner's style of programming, but it does involve that too.

OOP has the same set of problems as anything else: the Dunning-Kruger effect and cargo cultism.

It has people who start from practice and think they know it all and people who start from theory and think they know it all.

Like all languages, it also has variations in how those manifest, in the pitfalls, and in which are more likely. OOP has a surprising number of problems with people versed in theory but not practice, which becomes obvious when you consider that Java took over as the academic language of choice because it's good for "learners" (i.e., after Pascal, and now with Python sneaking in there).

My advice is, no one can really understand the inherent pros and cons of different languages without practical experience with a number of main programming paradigms and languages. The ups and downs can often go beyond a language itself but also into how people typically use it and other things such as bigoted cultures around languages.

On the surface OOP looks simple, but in practice it gives you a lot of rope to hang yourself with while also being very rigid in some respects. It's asymmetric: in some ways it lets you do too much too easily, and in others your hands are tied. Some simplifications defer complexity onto the programmer and make things worse.

The immediate problem with OOP is that strict inheritance doesn't match real-life relationships. That's a fairly well-covered problem with a lot of discussion around it; hence the wave of composition (membership) over inheritance.
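Composition over inheritance in a few lines, with purely illustrative names: a `Car` has an engine and a radio rather than inheriting from either, so neither part needs to share an ancestor with the other.

```typescript
// Small capability interfaces instead of a rigid "is-a" tree.
interface Engine { start(): string; }
interface Radio { play(): string; }

const petrolEngine: Engine = { start: () => "vroom" };
const fmRadio: Radio = { play: () => "music" };

// Composition: Car "has-a" engine and radio and delegates to them.
class Car {
  constructor(private engine: Engine, private radio: Radio) {}

  drive(): string {
    return `${this.engine.start()} + ${this.radio.play()}`;
  }
}

console.log(new Car(petrolEngine, fmRadio).drive()); // "vroom + music"
```

Swapping in an electric engine is just passing a different object; no inheritance hierarchy has to be restructured.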

A less obvious problem is surface area, action at a distance, and uncertain state, all consequences of being able to build large object graphs.

When you create a library traditionally, you just have internal and external methods; that's it. You have a very well-defined interface, and it's natural to keep related things close together. You end up with a structure not unlike a zoo, one that's relatively easy to divine, at least more often. You have visitors who can go around, animals that stay within their cages, and then the few staff who might go almost anywhere. The analogy is not exact, but it's roughly like this. With OOP, it's very easy to create all kinds of graphs that are a total mess. You end up with lions wandering around the pedestrian walkway, elephants wandering into the penguin enclosure, etc.

With OOP, each class is an interface unto itself. It's like being coerced into making a million little libraries or DLLs up front, which quickly becomes problematic when the layers above interface with them arbitrarily.

Technically speaking, some OOP features and methodologies can be used to minimise these problems, but they're not well advertised.

A key reason for this mess is the proliferation of SOLID. It's a useful set of guidelines, but following things like that to the extreme is counterproductive in application development.

When you develop core libraries, that are open source, to be used by lots of people, then you want to cater to as many use cases as possible, make things very flexible, make everything pluggable with an alternative implementation, break things up, etc. Essentially you want it so people can use your code however they need without having to modify the upstream code.

With application code it's very different. You're a leaf of the library dependency graph, or closer to the leaves; you can trivially modify code as needs are discovered, and you'll mostly have only one use case per thing rather than all the possible use cases in the world per thing.

You see this where people make everything an interface; then you count the implementations per interface and find it's nearly always one in most end applications.
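The pattern being described, sketched with hypothetical names: an interface that flexibility-minded library habits insist on, paired with the single implementation it will ever have in the application.

```typescript
// The "everything an interface" habit: flexible for library authors...
interface UserRepository {
  findName(id: number): string;
}

// ...but in most applications it gets exactly one implementation,
// adding indirection without payoff.
class SqlUserRepository implements UserRepository {
  findName(id: number): string {
    return `user-${id}`; // stand-in for a real query
  }
}

// In application code, using the concrete class directly would often do.
const repo: UserRepository = new SqlUserRepository();
console.log(repo.findName(42)); // "user-42"
```

The interface only starts paying for itself once a second implementation (a mock, a different backend) actually exists.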

The problem with OOP is that between these two extremes, programming close to the root (libraries) and close to the leaf (applications), there are radical differences in how you might approach those and how much you have to do before hand.

This scale sadly often isn't taught. The tendency is to program everything as though you're adding to the Java Standard Library for public consumption.

Personally, I mix up all kinds of programming paradigms: whatever best solves the problem for each module.

These days, most popular languages will allow you to use procedures, functions and classes.

Personally, I don't really consider someone a full programmer until they've gotten used to all of those.

The biggest problem with OOP programmers, which is starting to become a problem with functional programmers too, is that "they know how to use it" after... studying it.

Not long ago I was looking at a piece of code from someone who had "studied" functional programming and was a "master" of it (I hardly know any theory at all, though functional people reviewing my code tend to tell me they see things there that they studied). I was able to remove a tonne of nonsense that only served to obfuscate the code, such as identity functions for the sake of identity functions, simplifying the functional "equations" down to about 500 bytes from 10 KB of code, and the result was in every way superior (easier to read, modify, extend, and reuse; faster; lower resource consumption; less error-prone; predictable behaviour; testable; etc.).


Thank you for gathering the different opinions and bringing them up for discussion. Usually I stop reading when someone says "xyz is outright evil and made by the devil himself". Most of the time, there are a lot of factors coming together.

The most important part to me is that you can write good, maintainable software with both approaches. And both approaches are far from new. Even the interpretation of their tenets has changed over time as experience with them accumulated.

In my experience, people often turn away from one thing because they are facing a problem and then focus on solving that aspect. If someone was lost in a big pile of interfaces, classes using each other and being distributed over numerous namespaces, it is natural for them to find something appealing that eliminates the need for all that.

If someone is lost in a system where everything is a function, but the things they are looking for are spread across different areas and they have a hard time getting their state to all the places where it's needed, I can understand if they seek the order of interfaces and the ability to mutate state.

But there is no silver bullet in a paradigm or language. Each one has advantages, and each advantage is bought with the downsides that come with it. So instead of focusing on the negative points, I find it more helpful to look at the advantages and then try to judge whether they are a good enough reason to cope with the inevitable disadvantages. And that's the real skill you have to master as a software developer.