Originally published on my blog
What follows is my reply to "The Case Against OOP is Wildly Overstated" by Matthew MacDonald. I largely agree with the author, and this is my point of view.
It took me personally more than a decade of programming to walk up and down the developer experience ladder:
- Just make it work and learn some principles along the way
- Make it work, but follow the principles at all costs
- Don't care whether it works; blindly follow the principles regardless.
- Punish others who don't follow the principles
- Why do principles keep getting in my way? Wasn't I supposed to make something that works?
- Learn how to use principles sparingly. Pragmatically focus on making something that works instead.
- Just make it work.
Problem Origin
Where am I going with this? Like all principles, the idea of Object-Oriented Programming (OOP) originated as a way to guide programmers in solving a particular class of problems, not as the only way of solving all problems. The fact that developers occasionally get burned by using it has less to do with OOP itself than with the human brain's stubborn pursuit of making problems simpler than they are, of forcing them into a one-size-fits-all shoe box.
At the time of OOP's origination, codebases had started growing in size and complexity. The concept that we nowadays refer to as a technology's Standard Library or SDK didn't exist yet. For those to be developed without hundreds of duplications, the need arose for ways to encapsulate common logic and data. That is how classes were born (encapsulation). Classes allowed multiple independent instances (objects) to interact with each other, sharing common functionality through their ancestors (inheritance). To further ease the reuse of code, OOP provided ways to work safely with objects without explicitly knowing the implementation behind their behaviors (polymorphism).
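To make those three terms concrete, here is a minimal Python sketch (the shape classes are invented purely for illustration):

```python
import math

# Encapsulation: Shape bundles data and behavior behind one interface.
class Shape:
    def __init__(self, name: str):
        self.name = name

    def area(self) -> float:
        raise NotImplementedError

# Inheritance: Circle and Square reuse Shape's structure.
class Circle(Shape):
    def __init__(self, radius: float):
        super().__init__("circle")
        self.radius = radius

    def area(self) -> float:
        return math.pi * self.radius ** 2

class Square(Shape):
    def __init__(self, side: float):
        super().__init__("square")
        self.side = side

    def area(self) -> float:
        return self.side ** 2

# Polymorphism: the caller works with any Shape without knowing
# which concrete implementation sits behind it.
def total_area(shapes: list[Shape]) -> float:
    return sum(s.area() for s in shapes)

print(total_area([Circle(1.0), Square(2.0)]))  # ~7.14
```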
OOP quickly took the programming world by storm and allowed many of the foundational technologies we rely upon today to be built. That success was the pivotal point that led some to believe it could become the solution to all programming problems.
Applications != Libraries
As the author of the article points out, there is nothing wrong with OOP when it is used appropriately, for building the right things. One thing that hardly any programming course or practice teaches you is that developing libraries requires a different approach than developing end-user applications. Libraries need the deep hierarchies and Byzantine levels of encapsulation to ensure that the core functionality doesn't change but gets reused as many times as possible. Libraries have a potentially indefinite number of consumers and, once established, can only be extended by adding functionality that didn't exist before, or by modifying the logic hidden behind the layers of abstraction. Breaking an abstraction requires changing every possible consumer of the library.
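As a rough illustration of why a library's abstractions have to stay frozen, consider this hypothetical sketch (the Cache interface is made up): consumers program against the abstraction, so implementations can evolve freely, but changing the abstraction itself would break every caller ever written against it.

```python
from abc import ABC, abstractmethod
from typing import Optional

# The published abstraction: every consumer of the library codes against it.
class Cache(ABC):
    @abstractmethod
    def get(self, key: str) -> Optional[str]: ...

    @abstractmethod
    def set(self, key: str, value: str) -> None: ...

# Implementations can change or multiply without touching consumers...
class InMemoryCache(Cache):
    def __init__(self):
        self._data: dict[str, str] = {}

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)

    def set(self, key: str, value: str) -> None:
        self._data[key] = value

# ...but renaming get()/set() or changing their signatures would break
# every application ever written against Cache.
def warm_up(cache: Cache) -> None:
    cache.set("greeting", "hello")
```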
Applications, on the other hand, change all the time. That is their normal behavior. The problems applications try to solve change all the time; on a fundamental level, nothing in the Universe ever stays the same. Why is it, then, that programming practices teach us to treat applications as if they were libraries? This is a fundamental sin: trying to apply principles of reusability and component isolation to a problem that will have morphed by the time a solution for it gets drafted. I am not saying that laying out abstractions and sticking to principles is bad. It does help once you have identified the parts of an application that have withstood the test of time. Until then, enforcing principles over a dynamically changing problem is like trying to catch sunlight with a mirror while refusing to move the mirror an inch, waiting for the light to fall on it.
As MacDonald concludes, there is a recent proliferation of multi-paradigm programming languages: ones that offer a subset of OOP without enforcing its use, and allow the programmer to resort to other practices, e.g. Functional Programming (FP), when those better fit the problem at hand. Knowing when to stick to a certain principle, and when to pragmatically discard it where it does not apply, is what distinguishes experienced developers from the rest.
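To illustrate with one such multi-paradigm language (Python here, purely as an example; the discount code is invented), the same small task can be solved with a plain function or with a class, and nothing forces the heavier option on you:

```python
from dataclasses import dataclass

# Functional style: a plain function, no class required.
def discounted(prices: list[float], rate: float) -> list[float]:
    return [p * (1 - rate) for p in prices]

# OOP style: worthwhile once the discount carries state or must be
# swapped polymorphically; overkill before that.
@dataclass
class Discount:
    rate: float

    def apply(self, prices: list[float]) -> list[float]:
        return [p * (1 - self.rate) for p in prices]

print(discounted([10.0, 20.0], 0.1))      # [9.0, 18.0]
print(Discount(0.1).apply([10.0, 20.0]))  # [9.0, 18.0]
```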