Arguing about programming paradigms is like debating the best way to eat spaghetti – everyone has their preferred method, but in the end, we're all just trying to untangle a messy, saucy problem.
I believe all those arguments about coding paradigms, ideologies, and styles are fundamentally based on people's misconception that their code can somehow decrease the complexity of the problem they're trying to solve.
In one of the many YouTube videos arguing that OOP is bad (this one has millions of views), the author, Brian Will, poses a question towards the end:
Are we actually decreasing the total complexity of our program by splitting code into many small methods and separate classes, or are we just displacing the complexity, merely spreading it around?
He's right that structuring a program with objects doesn't decrease the total complexity, but I would argue that this is impossible regardless of how you code.
Don't get me wrong; you could easily increase the complexity if you're not careful, but that doesn't have anything to do with your paradigm of choice. I'm sure people have already come up with all kinds of FizzBuzzEnterpriseEdition using different styles and approaches.
I think Brian and most people who keep bringing up these arguments miss one simple thing: our profession is not about complexity minimization; it's about complexity management.
We take a problem and shift and displace its complexity in a way that is useful to us. We can use libraries and tools like databases, message queues, and schedulers to delegate some aspects of a problem to them, but the complexity that, for example, a database helps us deal with doesn't go anywhere. The database developers had to manage the complexity of data storage, access, availability, and so on.
So, if OOP or any other approach helps you and your team better understand the problem and manage its complexity, good for you. But if you hope to reduce the problem's complexity, not just manage it, then you're going to work in a very frustrating environment, because what you want to achieve is simply not possible.
Instead of devoting yourself to some "school of thought," you probably should:
Break down problems into smaller, more manageable pieces. This is trivial advice, yet when the same idea gets formalized under names like the "Single Responsibility Principle" or "Dependency Injection," some people grab their pitchforks. My theory is that this happens because people are introduced to these ideas before they have enough real-world experience to connect them back to the basic idea of breaking down problems. They think it's some complication "gatekeepers" came up with just to make their CS exams harder this semester. As with many things in life, if deep down you don't feel like you need it, don't bother. Don't use SRP or DI until you need them; just keep them on your radar. And when you eventually feel you need them, don't be shy about relying on them.
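To make the connection concrete, here's a minimal sketch (all names are hypothetical, invented for the demo) of SRP and DI as nothing more than "break the problem down": one piece reads data, another summarizes it, and the reader is handed in rather than hardcoded.

```python
class InMemorySource:
    """Single responsibility: producing rows (hardcoded here for the demo)."""
    def rows(self):
        return [{"name": "widget", "sold": 3}, {"name": "gadget", "sold": 5}]

class SalesReport:
    """Single responsibility: summarizing rows. The source is injected,
    so a test can pass in a fake without touching a real database or file."""
    def __init__(self, source):
        self.source = source

    def total_sold(self):
        return sum(row["sold"] for row in self.source.rows())

report = SalesReport(InMemorySource())
print(report.total_sold())  # → 8
```

No frameworks, no ceremony: "dependency injection" here is just a constructor argument. The complexity of where data comes from didn't disappear, it was displaced into a piece you can swap out.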
While some might see microservices, modularity, and serverless code as a sneaky way to make developers replaceable, the truth is, there's a lot more to it than just fulfilling management's dark fantasies. Sure, it might help with that too. But, it also means you can more easily hop between jobs or projects without getting stuck in the mire of "job security" (a.k.a. spaghetti code). Let's face it, who wouldn't prefer working with code that's a breeze to understand and maintain? It might not be as thrilling as untangling a complex web of interdependent modules, but it sure beats needing a degree in archaeology just to figure out how the darn thing works.
Update your README. Ah, documentation – the forgotten stepchild of programming, locked away in the attic. We've evolved as a society enough to only approve merge requests with unit tests, but when it comes to documentation, it's often "oh, yeah, we don't keep it up-to-date." Yikes! What was I thinking, attempting to follow the README instructions to deploy the app locally? Let's be real: getting to know a codebase is like navigating a foreign city. Sure, you can find the nearest Starbucks, but try doing it with a map from the 1800s.
Subpar software artifacts often result from stubbornly sticking to convoluted algorithms, whereas embracing the KISS principle usually leads to a decrease in cognitive load and a corresponding boost in system sturdiness. In simpler terms: if you care about something, don't overcomplicate it. I know it's tempting to craft a one-liner using just a smattering of symbols, but let's fight that urge, shall we?
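As a small illustration of that urge (a toy example, not from the original post): both snippets below count word frequencies, but only one of them is pleasant to debug at 2 a.m.

```python
text = "to be or not to be"

# The tempting one-liner: compact, but it rescans the whole list
# once per distinct word and takes a moment to decipher.
clever = {w: text.split().count(w) for w in set(text.split())}

# The boring version: one pass, and it reads exactly like what it does.
simple = {}
for word in text.split():
    simple[word] = simple.get(word, 0) + 1

print(clever == simple)  # → True
```

Same result, same problem, same total complexity; the second version just puts the complexity where a reader can see it.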
In summary, throw yourself a complexity party! Transform into a programming wizard who conjures up a "do it" button for those who thrive in other realms, like business, medical research, finance, etc. You'll enrich their lives by harnessing your passion for technical solutions and mastery over complexity.
So, stop the endless fights over programming paradigms and embark on an all-you-can-learn buffet, sampling knowledge from various sources that can prove helpful in different circumstances. Remember, it's not about reducing complexity – it's about embracing it and taming the wild beast to create a better world for you and others!
And, as you set off on this journey, never forget the immortal words of Albert Einstein: "Everything should be made as simple as possible, but not simpler." In the world of programming, that means accepting that complexity will always be a part of the job, but we can choose to dance with it rather than fight against it. Now, go out there and show that complexity who's boss!