You have a false premise. Sometimes making code extensible increases complexity, sometimes it reduces complexity.
It comes down to design. For example, if you ask me to write a calendar to display the holidays in 2019, I could hard-code images for each month with all the holidays baked into the code. My code would be non-extensible, and it would suddenly become incredibly complex as soon as you asked for the most minor modification, such as displaying different holidays per country.
Or I could write really good code for displaying a single month and a list of holidays for that month. I then just pass in parameters for each month of 2019. It turns out my implementation of the original requirements is simpler and extending it for new requirements does not overly complicate things.
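The parameterized version can be sketched in a few lines of Python (the function name and holiday data are illustrative, not from the comment):

```python
import calendar

def render_month(year, month, holidays):
    """Return the days of one month as strings, marking holidays with '*'."""
    _, days_in_month = calendar.monthrange(year, month)
    return [f"{day}*" if day in holidays else str(day)
            for day in range(1, days_in_month + 1)]

# The whole 2019 calendar is just data fed to the same small function;
# a 2020 calendar, or per-country holidays, is only a different dict.
holidays_2019 = {1: {1, 21}, 7: {4}, 12: {25}}  # month -> holiday days (illustrative)
january = render_month(2019, 1, holidays_2019.get(1, set()))
print(january[0])  # "1*"
```

Nothing about the original requirement got harder, but the "display different holidays per country" change is now a data change, not a rewrite.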
So your premise that making code extensible always adds complexity is simply false. The truth is that some things will make the code overly complicated, and some will actually make it simpler.
I took it as a caution against early optimization. Unless you're building something that explicitly calls for reuse, spending time on accounting for unknown future requirements can actually cause more work when you have to undo it later.
There was a point made about dealing with extensibility once requirements are known: if you have to use the same calendar in two places, you already have those requirements.
With experience comes the intuition to know when abstraction is needed and at what level, without first having that 3rd or 4th requirement that will inevitably come.
I worked with so many people who insisted on not "future-proofing" their code, only to end up eating everyone's time to completely rewrite their horrible, single-use code or hack it horribly to make it work for more than 1 scenario.
Anyone blindly spouting YAGNI is a mere programmer in my book, not an engineer.
That comes down to the intent of the code. With experience come habits that lead to easily extensible designs that can be readily modified to meet new requirements. But to that end, at what point are you over-engineering your code? Should systems be written to be infinitely modifiable to meet any requirement, or should you write rapid code that meets the needs of the moment and can be easily replaced with a more extensible design later? Personally, I fall into the second camp. When you write a POC or the first lines of a new system, code quality, while important, takes a back seat. That's not to say you shouldn't write quality code and use lessons learned, but when you are spending hours trying to design a highly extensible and robust system that you could have banged out in thirty minutes, you are wasting time over-optimizing for what-ifs.
It boils down to a singular postulate, just make it work then refactor.
Experience gives you insight on what not to do, and that can be invaluable in simply not writing yourself into a corner for future use, you don't even need to guess at what the future holds if you aren't locking yourself into early rigidity.
Again, if the requirements aren't to write a reusable module, you probably shouldn't be doing it quite yet. If you think you should be, you likely need to renegotiate the requirements to include it.
A POC is throwaway code, and should never be used as the basis of a real product by modifying it to add features. It's like the paper and wood model that an architect builds. The real building includes numerous engineering and design considerations that are not in the POC model.
It's the same with good software. Quality and consideration for potential future use cases must be baked in. Why? Software tends to live much longer than expected and be used in unexpected ways, and you're probably not going to be around (or remember what you did and why) when users come back asking for enhancements. Every single codebase I've ever come across where the devs only coded for current requirements, without considering the effect of time, has been an absolute mess to deal with, and impossible to alter for new or changed requirements.
I would give this a thousand thumbs up if I could.
Unfortunately, I have seen too many POCs end up in production, too many devs saying "we will never have to touch this again" -- and boom! It's back on MY desk and I have to deal with the mess. Nothing lasts longer than a provisional solution.
So yes, I'd rather err on the side of SOLID and all these principles. If once in a while I over-engineer code slightly because it is actually never touched (extended, fixed) again, then so be it. The vast majority of cases go the other way.
A hard-coded calendar isn’t very extensible but is the easiest to understand. All the months are written right there instead of importing objects and classes with month name data saved elsewhere. There is always complexity in abstraction. I took her point to mean that if you start extending before you completely understand what you will need your program to do, you have added needless complexity, which is self-defeating at best.
An overly straightforward approach is almost always too verbose, and hence hard to read.
When it comes to over-abstracting, I guess the devil is in the details. Some level of future-proofing is almost always needed. The clients do not expect that adding a feature after initial development can take so much time (when your first version is very rigid).
100% agree. This article sounds more like the butt-hurt of a junior programmer who thought they were all that and found out they really weren't. Clearly there's a lack of exposure to enterprise development rather than real experience shown here.
I've been a software engineer for more than 25 years and have consistently found the worst code to work in to be code that has been over-engineered, over-abstracted, or unnecessarily complicated. If @codingunicorn is a junior (which I doubt, given her insights), then she has already learned an important lesson that can take years to learn. The point isn't to never abstract; it is to not abstract too early.
It is the typical absolute statement of youth. It takes decades for most developers to gain enough experience to know when they should simply go for the simplest solution they can think of, and when they should give deep thought to how the code can be made extensible, and whether it should be...
The bits of extensible code that always seem to work out are the ones where you notice you are coding the same thing multiple times and decide to instead create a util or helper method. The kind that is really a bad idea is when you are doing a quick POC and spend lots of time solving problems for 5% use cases and building a framework for the next part of the project. The point of a POC is to try things, learn, and then design based on what you learned, much the same way a mechanical engineer builds a prototype to learn how it will fail.
Most software projects are somewhere in between. There are bits that should be flexible and bits where you just want anything that works.
That's because it's written by a marketer's fake persona; I would bet money the pic is a stock photo / model. Just look at the whole-ass blog.
Would be an awesome stunt, though 😂
Firstly, embedding data is a quick and dirty fix that immediately limits the useful lifetime of the code. Also, if you are tasked with building a 2019 calendar, you will be asked to produce a 2020 update. It would be so much easier and more elegant to just replace a control/data file than to rewrite the sucker. If the "black box" is well designed and documented, there is no reason to ever open it! For instance, I wrote a UPS/FedEx/etc. billing parser and accounting-info extractor 25 years ago that is still running strong. It has been recompiled for new platforms but never re-engineered. The control file is tweaked every so often to keep up with input and output changes.
I have a simple tactic that keeps things really stupid!
It's a cousin of dependency injection.
In your calendar example, when I need to get specific holidays, I would call a non-existent method like getMonthHolidays(1).
Then I would ponder where this method should come from. Is it a private method or a first-order method?
That means I am always writing some stupid code at different levels of abstraction.
This also leads to easy to understand, maintain and extend code.
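A minimal sketch of this "write the call first" tactic in Python (the function and data names here are illustrative, modeled on the getMonthHolidays example in the comment):

```python
# Step 1: write the call you wish existed, even though it doesn't yet.
def print_month_holidays(month):
    for day, name in get_month_holidays(month):  # not yet written at this point
        print(f"{month}/{day}: {name}")

# Step 2: decide where get_month_holidays belongs (private helper? first-order
# function? method on a Calendar class?) and implement it there, as plainly
# as possible.
_HOLIDAYS = {1: [(1, "New Year's Day")], 7: [(4, "Independence Day")]}  # illustrative

def get_month_holidays(month):
    return _HOLIDAYS.get(month, [])

print_month_holidays(1)  # prints: 1/1: New Year's Day
```

Each level stays "stupid" on its own, because the caller was designed before the implementation constrained it.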
I had a CS curriculum and that wasn't even taught as a quick win. That's sad!
I think you'd still use standard OO principles to code the calendar. You just might forgo adding interfaces to every utility, creating a factory to determine the type of calendar (Gregorian, Hebrew, Klingon, etc.), creating an adapter to wrap your UI components, adding a service bus to notify for click messages, etc.
Hello. I'm a solutions architect with 20 years in the industry. If designed correctly, abstraction is very useful and powerful. With that being said, I have found that most developers don't understand abstraction. That 80% of developers are unable to think abstractly and if they try to design/code abstractly they'll achieve what you outlined in your post. Final point: abstraction is not bad; it just shouldn't be done by developers who don't know how to do it.
Good point. One should not mistake "code writers" for developers, or developers for software engineers. While abstraction works for everything, it's not actually for everyone. Even more, abstract design becomes a pain when done by code writers or developers lacking real-life implementation experience.
Abstraction and indirection are different. Abstractions simplify. Indirection adds complexity. Most of the indirection I'm seeing is useless. Always 1 implementation in the factory. Why have the factory? Well, it's good design. We "could" switch out the ORM with anything now! Most of the time it's not needed and even when it is, actually making the switch still requires changes to many other parts of the code. Delaying optimization, in my experience, has usually been a much cleaner approach. When you implement the needed indirection, you know the whole problem you need to solve.
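The "always 1 implementation in the factory" complaint can be sketched like this (class names are illustrative, not from any real ORM):

```python
class PostgresOrm:
    def fetch_all(self, table):
        return f"SELECT * FROM {table}"

# Indirection: a factory that has only ever had one thing to create.
class OrmFactory:
    @staticmethod
    def create():
        return PostgresOrm()  # the "flexibility" is never exercised

# Callers go through the factory everywhere, yet swapping the ORM later
# would still ripple through query strings, transactions, and models:
orm = OrmFactory.create()
rows = orm.fetch_all("users")

# The direct version says the same thing with one less moving part:
orm = PostgresOrm()
```

The factory adds a hop for every reader today in exchange for a switch that, as the comment notes, would require touching many other parts of the code anyway.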
I think you are reading things into my comment that are not there. You may want to reread it.
Really? I think you were very clear. "I have found that most developers don't understand abstraction. That 80% of developers are unable to think abstractly and if they try to design/code abstractly they'll achieve what you outlined in your post."
In other words, you're smarter than 80% of developers.
Consider that our job is to make the user and subsequent developers feel smart. They should be able to understand our UI and our code without having to be experts at anything.
First off, you are reading things into my comment. Never did I propose that I am smarter than 80%.
Your assumption that all abstractions are hard to follow is wrong. If designed and implemented correctly they are easy to follow and use which was my main point. If this was not the case we would not see so many frameworks in our industry.
I'm sorry you're taking this personally. Your primary point seemed silly and arrogant to me. Don't defend it. Edit it.
And yes, I agree that abstractions can be useful, but mostly they're abstract. Hence the word. I've been programming professionally for about 40 years and have mentored lots and lots of developers. I don't think abstractions, in general, make for better software. Or even less software. Programmers who love them tend to rubber-stamp them all over the place, causing themselves and others to write lots of extra code around the abstractions to make them useful.
Human language is already OO - tactile, tangible nouns and verbs. If you listen to the business, not the tech, you'll hear a highly refined already refactored business model with users, stories, nouns and verbs already well defined. Any code we write that doesn't reflect that model is going to be unintuitive and is ultimately added complexity. Our job is to solve business problems, not create them...aka add as little complexity as possible.
About "If this was not the case we would not see so many frameworks in our industry": the number of frameworks is a function of creativity, not evidence of a cultural commitment to simplicity. Yes, most engineers preach simplicity, but don't actually produce it. We have so many frameworks because software people are creative and love to produce "solutions" for imaginary problems, not because they're committed to less code.
For example, what makes a good musician? A good musician is someone with a large capacity for music. When we have a capacity for something we tend to have more of it. We enjoy it. And we often create it. Technical people have a capacity for complexity. We enjoy it. And we often can't help but create it. And so, we have lots of frameworks that go in and out of vogue every year and will for the foreseeable future. This is evidence of our creativity and capacity for complexity, not of a commitment to simplicity.
There are developers who think things are complex when they are just incapable of thinking beyond the lowest level of coding.
To me, the right answer is simply to use the right style of programming where warranted. Saying the Open/Closed principle is problematic discounts 25 years of it proving to be a wonderful principle where needed.
Exactly. Any principle not fully understood or without greater context will be wrong some amount of time. And so the next generation comes, misunderstands, makes some mistakes, adds some bits, then arrives back at the same conclusion. The age-old cycle.
So true. Take Angular or React: we use them every day, but few of us know the internals. None of us knew them at all until we spent substantial time learning them. Any good reusable code takes time to learn.
Ironically, that's exactly a major reason I prefer React: it's easier to understand because it isn't as prescriptive. There are very simple look-alikes that are easy. React itself is a bit harder, but not really harder than the rxjs library that Angular uses, all by itself.
Agreed, RxJS takes a long time to grasp, but it's worth the pain, because asynchronous push notifications are almost always better than pull constructs, even asynchronous ones.
I prefer redux/flux or graphql myself.
Open/Closed is often difficult to grasp. I like to sum it up as: backwards compatible, and not closed off to new features. However, sometimes no amount of planning will ever let you know what future features you'll need, and this is why you should follow Robert C. Martin's rule: the first rule of classes is that they should be small; the second rule is that they should be smaller than that.
The power of doing this is all good. Some naysayers claim that finding the source of these methods is a problem. To that argument, following convention over configuration is all that's needed: the convention of putting all extension methods in a unique namespace is all we need. IntelliSense and Go to Definition take us to the source.
Totally agree! This is an easy one for us devs to trip over. I still find myself doing "speculative programming" after years of being a professional dev.
Two major things I've noticed when this happens.
This is certainly a tricky aspect of development and takes real thought and discipline to avoid "gold plating" code that will look beautiful but never be used. This code is equivalent to writing features that were never requested.
To avoid doing this, I personally try to balance the desire to build the overly flexible system against the YAGNI principle.
Nick Taylor (he/him)
Sometimes I wish YAGNI was called the YANNI principle. 😆 #BestPractices #DEVcommunity
19:59 - 31 Oct 2019
I agree. The balance is key. Systems are complex and designing for future-proofing often requires a qualified definition of what that means for the projects in scope and the constraints, technical and business, in play. I often ask myself how I can manage the technical debt of something I design rather than if there will be any debt at all.
I like to always start simple when it's a new problem and coalesce similar functionality as time passes. This idea really only works when you're willing to spend time refactoring the same code over and over (which I don't see as a waste of time when done correctly).
That said when an abstraction is obvious I absolutely begin with it.
I have a co-worker who is a big fan of the rule of three. It's a good general rule of thumb to follow, and I find it summarizes the same point I think you made here.
It's about finding a sweet spot, and not pursuing extremes
I’ve spent way too much time trying to manage others' poorly written, inflexible code. A minor change to requirements, say a change to a service account, breaks everything, taking hours or days to fix. I usually end up rewriting and simplifying the poor design by improving its flexibility.
Bad code design is OK for inexperienced developers. Hopefully these developers move beyond it. If not, they call developers like me to rewrite the code properly. I’ve spent decades mentoring inexperienced developers on avoiding bad habits and developing good design.
You can do the project quick and cheap, or you can do the job right. There's never a lack of work building and documenting flexible code.
This is something I have been noticing a lot particularly referencing interfaces: A tendency to write an interface "because we want to implement against an interface".
My answer as a software architect is always the same: Until you need it, the best interface is the implementation you already wrote. Particularly with fast-changing applications, you'll quickly find that you cannot - and must not - anticipate use cases for next month. You'll almost always have to redesign either way, and it is much easier to do when you know - rather than guess - the new requirements.
Depending on your language you may need to always develop against interfaces so you can use DI to unit test your code.
I definitely agree that it's hard to get the right abstraction.
Here's a great talk from Sebastian Markbage from the React team
The cover for the video says it all "No abstraction > wrong abstraction"
And here's the great blog post about the wrong abstraction by Sandi Metz which you referred to.
Nick Taylor (he/him)
Also, this popped up in my feed the other day and seems to fit with tonight’s topic. sandimetz.com/blog/2016/1/20… #devdiscuss
02:13 AM - 06 Nov 2019
I find this assertion misleading.
It's also true that no code is better than wrong code. It's always better to do nothing than to do evil, the same way that zero is always larger than any negative number. But where does that get us? Zero is not enough. We need to do good, positive work. We need good abstractions that allow our code to be flexible. Having inflexible code where every tiny change has an enormous cost is useless. We live in a fast-moving and dynamic world. The only constant is change.
For sure we need to write code and ship things and iterate over them. The main premise is, finding the right abstraction is difficult or can be difficult, so don't just create them because it makes things DRY.
I agree. DRY is a very good principle, but it should not be followed blindly. Sometimes it's not really DRY: it looks the same, but it serves two completely different purposes, and if we leave it alone, the two so-called "duplicates" will evolve along two completely different paths and change for completely different reasons. Sometimes the cost of creating a dependency to share the code outweighs the benefit of reusing it. Usually I follow the WET principle instead: Write Everything Twice, but not three times.
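A tiny sketch of the "looks the same, changes for different reasons" trap (function names and the formatting rule are invented for illustration):

```python
# Today these two formatters are identical, so DRY pressure says merge them...
def format_invoice_total(amount):
    return f"${amount:.2f}"

def format_report_total(amount):
    return f"${amount:.2f}"

# ...but they change for different reasons. Suppose reports later need
# thousands separators while the invoice (a fixed legal template) must
# not change. Kept separate, only the report formatter evolves:
def format_report_total_updated(amount):
    return f"${amount:,.2f}"
```

Had the two been merged into one shared helper, the report change would have forced either an un-merge or a flag parameter, both costlier than the duplication ever was.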
Thanks for the video!
The problem is that people prematurely optimize, and premature optimization is the root of all evil. When you optimize prematurely you create a leaky abstraction, which will require you to dive into the internals. At that point you have lost the advantage of creating that abstraction.
Really good abstractions hold strong for most use cases. Remember the last time you had to deal with the quirks of document.createElement in the browser?
Complexity makes the system harder to understand and harder to change. What's worse, our abstractions are usually wrong, because we often design them up-front, before the actual flexibility is needed.
This goes way beyond programming; it's a problem in engineering, architecture, law and politics too. Systems are created by dedicated, committed people with a clear vision of what is needed, but over time maintenance and further development are delegated to others without the same commitment or skills. The result is always the same; a steady degradation of the system. We eventually reach the point where not even the original builder is able to rectify the mistakes made; the complexity has become too great. Every participant in the project leaves their own flavor of change, with no explanation of where, how or why it differs from the original master plan.
We can argue forever about method and about which currently-fashionable magic bullet will save the day but the root cause remains, as do its effects. It's quite simple; a system will always break down once its level of complexity exceeds that at which it can be understood by those tasked with its maintenance. This is not a problem of software; it's a problem of being human.
If there is a solution - and I'm not saying there is - it's to design for the lowest common denominator. Or to accept that systems will always need to be rebuilt from the ground up, way before they reach their intended lifespan.
Your assumption that flexible code must be complex is wrong. Good abstractions are easier to grasp than zero-abstraction spaghetti code.
It is true that finding the right abstraction is hard. The solution is not giving up on abstractions, the solution is learning how to abstract correctly.
At some point, changing inflexible code becomes so hard that the project grinds to a halt and all the devs beg to throw the codebase in the trash and start everything anew. The only way to keep large codebases alive is making them flexible.
That's why there is 'soft' in software.
I can't tell you how many times I've listened to arguments like this from people too inexperienced to see how their code will evolve. It's gotten to the point where I view YAGNI as a bad word. Even though I completely agree with the concept in principle, it is often overapplied by conservative developers.
Sure, for new and inexperienced developers, overgeneralization is risky. They don't yet have a full grasp of the patterns and concepts they need to properly design simple and flexible code.
However, once a developer has some modicum of skill, domain expertise, and business understanding, they should always be expected to develop flexible, scalable code.
Occasionally, experienced developers will find themselves on newer, research-oriented projects. This path may necessitate writing less-flexible code - but the purpose here is not to go to production. It is to learn and gain insight into new technologies and patterns.
If we all held ourselves to the standards described in this article, we'd have no shortage of jobs, but we'd also have no shortage of technical issues with the software we write (and use).
Always strive to write the most clear and flexible code you can, unless you are attempting to learn some new technologies or prototype within time constraints. Your teammates and successors will thank you.
God this is so wrong. I'm getting tired of this new wave of Gen Z'ers learning to write s**t procedural PHP/etc code and thinking they've discovered some new secret to "good" programming. Sorry, but no. Just stop, please.
This just more or less sounds like a problem for novice programmers. The proliferation of languages and technologies has not changed the fact that knowing how to write syntax and knowing how to program are two separate skills.
Advocating the limitation or elimination of reusable code sounds like the battle cry of the under-experienced.
(this is not meant to sound harsh or judgmental - just an observation)
Haha it's so true. When you write code you're always thinking "how could I make this more generic so I could potentially re-use it" but in reality you're just evolving an ever-more complex code-base (guilty!).
Flexible and modular code solves two big problems:
1) The first problem it solves is probably the most obvious. Putting in some more time upfront will undoubtedly save much more time down the road if you know more development is coming. Nothing is worse than needing to change a whole system that is already running in a production environment while maintaining the same data contract. Designing your code upfront to be modular will make it much easier to add to or refactor when the time comes.
2) The second problem is probably the most forgotten, and that is testing. It really bothers me to not hear testing being talked about more. Modular code makes testing a much more pleasant experience. Following concepts like inversion of control and utilizing interfaces will make testing simple. You will be able to test the parts of the code you care about and not have to worry about how you are going to test your third-party integrations. You would instead be able to mock or stub out code that doesn't need to be tested.
In the end, it is always good to have some code modularity and flexibility to save yourself, or another developer, headaches down the road. Not only will you be able to test (or test better), you will make adding new features much easier when the time comes.
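The testing point above can be sketched with a small inversion-of-control example (the service and gateway names are invented for illustration):

```python
class OrderService:
    # The payment gateway is injected instead of constructed inside,
    # so a test can pass a stub in place of the real third-party client.
    def __init__(self, gateway):
        self._gateway = gateway

    def checkout(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self._gateway.charge(amount)

class StubGateway:
    """Records charges instead of calling a real payment provider."""
    def __init__(self):
        self.charged = []

    def charge(self, amount):
        self.charged.append(amount)
        return "ok"

# The business logic is tested without touching any third-party integration:
stub = StubGateway()
service = OrderService(stub)
assert service.checkout(42) == "ok"
assert stub.charged == [42]
```

Had OrderService built its gateway internally, every test of checkout would have needed a live (or monkey-patched) payment client.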
Refactor early and often but not before necessary. An abstraction can wrap a concrete implementation after a similar but unique use case develops. That's the time to add the complexity. Leveraging internal tools, documentation, and style standards as well as familiarizing common abstraction patterns will help fill in the complexity gaps.
I think the most important thing to keep in mind is that there are no absolutes in software development.
Computer science is the art of trade-offs.
How many people are working in the codebase? One, a dozen or thousands of Engineers?
Are you working on a long-lived, super complex SaaS product with thousands of features and hundreds or thousands of developers or a one-off project for a small shop?
Abstractions can be great if they are well thought out and make code easier to change.
I think the problem being talked about here is trying to over-solve relatively simple problems. Big, long-lived, complex systems need abstractions to make change easier to manage.
Keep things simple if you can and solve for the current problem at hand without impacting the ability to manage change in the future.
But at a certain level of scale and complexity, you will need abstractions to manage the system.
Abstractions should make it easier to reason about code by encapsulating commonly used boilerplate so you can focus on writing clean, easy to understand business logic.
Computer science is simply the science of computation. It is a mathematical discipline that has little to do with actual programming. Computer programming is an engineering discipline and should be treated as such.
The science of computation is literally balancing tradeoffs of space and time complexity. Memory usage or speed. I don't understand this comment at all.
Computer science has little to do with programming for low-scale, simple projects. You don't need computer science to update the CSS on a website.
Computer science takes a backseat with simple systems because hardware has advanced to a point that efficiency doesn't matter much if you just have a couple users.
However, Big O notation, correct use of data structures and an understanding of core CS fundamentals is important if you work on systems that have millions or billions of users and require high-throughput.
For the first 7 or so years of my development career, I shared the same sentiment that my CS degree was a waste of time. Then you work on a really complex system that has a lot of throughput and extremely demanding SLAs and you realize that CS does have a part to play in modern software engineering.
I'm saying that CS is the why but engineering is the how. I've been at this for 20 years and have an MS in EE.
I think there is a balance to be found. As a developer who works closely with the product team, I have found that it is possible to strike a balance between flexibility and ease of use (code-wise). If it's something you may need by the next quarter, you might as well bake it in now and document it well so you can get started right away when you need it. If it's more than 6 months out or not in the plans yet, then don't do it. That being said, there are a few caveats to consider. First, I think that works well with smaller teams whose members do not change often. Second, the documentation is absolutely crucial. Heck, I need a refresher on what I was doing after a long weekend... Third, you MUST watch out for over-engineering. You may have a good balance in mind but get excited and abandon it without realizing it. Fourth, and just as important, testing is absolutely crucial. In complex systems, if you don't have automated testing, it becomes easy to make a change that breaks something else.
This post title smells awfully like click bait. Flexibility is great and it's what allows any non-trivial system to evolve without requiring a rewrite. Over-engineering is the bad thing you're looking for here. But those two things should not be confused.
Strongly disagree. Abstractions that leverage the intended process are the core principle of programming itself. Dumb, straightforward code has its benefits in some specific use cases and emergencies demanding a quick response, but good code is always flexible and simple to understand.
Flexibility isn't the opposite of simplicity.
YAGNI is a principle for programmers, not engineers, and arguments for it are usually invalid strawman arguments. Good abstractions are hard, yes. But that's your job as an engineer: to THINK hard about how your designs hold up over time, not to take the easy way out and create horrible tech debt that the people who come after you have to clean up.
An engineer needs to know what to include, but shouldn't consider what not to include? That doesn't add up.
YAGNI isn't YOLO, and calling out programmers according to your arbitrary developer hierarchy doesn't actually help you or anyone else. Every acronym making their rounds in tech blogs is going to be adopted by the reader at their own level of experience, and applied to their own situation differently.
Yes, an engineer does in fact have to know when not to use things. No, it's not a horrible thing to abstract, as and where needed, but not for the sake of abstraction itself.
The entire premise of an MVP is inherently accepting tech debt, that can still be minimized (again based on experience), in favor of moving quickly towards a given end, in which case your job is going to depend on knowing which corners you can cut and which you need to support at all costs. In so doing, you can accomplish what no amount of extra planning will get you, if your highest priority is fast delivery of a specific product.
Made an account here just to post on this. I don't understand how some people who are supposedly "20 years in the industry" don't inherently see the value in this kind of post. Perhaps they were working on simple systems all those years.
Nothing complex can be made well without this principle, imo (I'm a game engine programmer, I guess, if you want to judge me that way). Programmers are the only engineers who think you can redesign a system purely in its external fashion, while changing its internals, without incurring cost if you "just abstract super good!". In reality, complexity adds cost, it adds bugs, and it adds confusion. Imagine if a building architect told you that he wanted to change how the internals of a skyscraper's foundation worked, such that it could support any building on top of it, yet it would still maintain the same external interface. I can tell you right now, that would be an impressive feat, but even if it could be done (I kind of doubt it), it would require the external world (the building) to bend to the need of its foundation.
Equally, our code ends up bending to the needs of our poor systems too often. The cost of that is development time, but maybe equally importantly, actual performance loss. How much software is slow because of our elegant abstractions (which maybe aren't that elegant)? imo -- copy pasting is the business.
Also having a Pornhub/Github shirt is :chefs-kiss:
Have to agree... I fight a lot against introducing levels of complexity and patterns that aren't needed, or at least not yet.
You can create abstractions that make life easier over the life of the project. Abstract complexity behind an easy-to-use interface. Though sometimes you're better off doing it "the hard way".
Flexibility isn't boolean. f(i) is flexible and abstract - I can change i, and the behavior is abstracted behind f(). I've never worked on a project whose team agreed to follow this "Open-Closed Principle" so I can only guess what flexibility and abstractions you find troubling. A code example would help illustrate the mental overhead and maintenance costs that are frequently overlooked or discounted.
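Since a code example was requested, here is a rough sketch of the kind of overhead I suspect people mean (Python for brevity, all names made up): the same behavior written once directly, and once behind an Open-Closed-style extension point that nothing has asked for yet.

```python
from abc import ABC, abstractmethod

# Direct version: one function, trivially readable.
def shipping_cost(weight_kg: float) -> float:
    return 5.0 + 1.2 * weight_kg

# "Open-Closed" version: an extension point nobody needs yet.
class ShippingStrategy(ABC):
    @abstractmethod
    def cost(self, weight_kg: float) -> float: ...

class FlatPlusPerKg(ShippingStrategy):
    def __init__(self, flat: float, per_kg: float):
        self.flat, self.per_kg = flat, per_kg

    def cost(self, weight_kg: float) -> float:
        return self.flat + self.per_kg * weight_kg

def shipping_cost_v2(weight_kg: float, strategy: ShippingStrategy) -> float:
    return strategy.cost(weight_kg)

# Same answer, three extra names to read, document, and maintain.
assert shipping_cost(10) == shipping_cost_v2(10, FlatPlusPerKg(5.0, 1.2))
```

The second version only pays for itself once a second strategy actually exists; until then, every reader pays the indirection tax.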
It's a difficult balance. My rule of thumb is to stay simple until complexity is needed; however, I also leave some room for future enhancements if that doesn't require too much upfront effort.
One consideration is the people that will maintain the code in the future: over the years, with people coming and going, we've learned to write the code at mid-level developer skill level, which means it should be very easy to hire someone new to start using and maintaining the code, instead of newcomers wanting to rewrite everything or creating a competing complex framework because the old one was too complex. If we have to hire someone super smart with twenty years of experience to maintain some proprietary framework, then I failed as an architect.
The premise of this article boils down to sophistication. If you find that writing dumb code works for you, then go for it, but please don't try to tell me or others not to write elegant and flexible code. In doing so you and others are projecting your own limitations upon those of us who can and do write good flexible code the first time. If you want to really grow then challenge yourself to understand why you can't rise to the occasion. After all you call yourself a coding unicorn, are you a coding unicorn?
Please study the open-closed principle again more carefully. It applies to software entities (primarily classes), not systems in general. It is primarily about encouraging subclassing in the OOP sense as a means of extending functionality, in preference to modifying functionality.
Still doesn't make much sense to me.
Modifying functionality is OK. Providing extension points comes with a cost. Be it "simple" subclassing or system-level design.
You're missing the point. This principle is all about lower-level OOP, class-based design. Don't attempt to overgeneralize it. In class-based designs, you don't need to do anything special to extend existing functionality; just create a subclass with a method overridden. The point of this principle is that OOP by its nature allows you to do that without impinging on the behavior of the system.
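That OOP sense of the principle is easy to sketch (hypothetical classes, Python used for brevity): the base class's source stays untouched ("closed"), yet its behavior is extended ("open") by overriding a single method.

```python
class Greeter:
    def greeting(self) -> str:
        return "Hello"

    def greet(self, name: str) -> str:
        # Behavior is composed from an overridable part.
        return f"{self.greeting()}, {name}!"

# Extension without modification: Greeter is never edited,
# only subclassed with one method overridden.
class LoudGreeter(Greeter):
    def greeting(self) -> str:
        return "HELLO"

assert Greeter().greet("Ada") == "Hello, Ada!"
assert LoudGreeter().greet("Ada") == "HELLO, Ada!"
```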
Almost every commenter on your post is wrong, and they have mostly failed to see what's actually happening.
You are correctly identifying a problem, but your view of the problem is blurry. Yes, the abstractions you talk about do increase complexity, and this complexity also reduces development speed, but it is not the main culprit. You think the "flexibility" trade-off isn't worth it, but this is not the full explanation.
Believe it or not, the "abstractions" you are talking about are actually less flexible. They reduce flexibility and increase complexity. There is no tradeoff; it is fundamentally bad.
A common experience people have is that they will create a seemingly flexible design in their code, only to find that an additional feature request has rendered their modular code completely inflexible. They end up doing a huge refactor or a huge hack to incorporate the feature. This happens time and time again and is the antithesis of modular code. Many people go on making all these "flexible" abstractions and attribute the failures to the nature of programming and intrinsic deficiencies in "design."
While most people think this blurry area of "design" is where the problem lies, in actuality the problem you are describing has a concrete, almost scientific/logical reason for existing. It has very little to do with your "design" choices. In fact, if you don't realize what is going on, all your "designs" will have the potential for this complexity and lack of modularity, regardless of your skill and experience.
Again, no blurry segues into "bad design examples"; the reason is highly concrete, and I will try to make this as clear as I possibly can.
It appears to you that abstractions are the culprit, and this is true to a certain extent. However, it is not the root of the issue. The root of the issue is the kinds of abstractions that people use. You specialize in Java, and therefore must use OOP and OOP design patterns. This is the heart of the problem. The very nature of OOP makes it fundamentally less modular than most other forms of programming. So, if the root paradigm is less modular, then, by extension, the design patterns and all the abstractions that sit on top of OOP are also less modular.
Why is OOP less modular? The explanation is simple: OOP forces the unionization of two smaller primitives, function and state, into a larger primitive called an object. Smaller primitives are more flexible than larger primitives. Thus OOP is less modular by its core nature.
A program is made out of pipes and fluid. Pipes are functions, data is fluid. You must keep these components separate and small like lego pieces in order for your program to have maximum modularity. You then build your program by composing these primitives together like lego pieces. Small pipes fit together to form larger compositions until you eventually build a network of pipes that data flows through like fluid.
With objects the analogy changes: pipes are glued together with other pipes to form primitives, and the data (aka fluid) is glued to the bundled pipes themselves as a union called a "pipe network" (this unionization is the object in OOP). Methods mutate state, and objects themselves can flow through pipes. So essentially, following the analogy, you have a "pipe network" where the state of the pipes is always mutating (for example: pipes that are constantly changing in diameter), and you have other pipe networks flowing through the "pipe network" to augment it with new additions of still other pipe networks.
Other words for pipe networks flowing through pipe networks are "dependency injection" and "object composition." Most design patterns are some variation of the previously mentioned patterns and therefore all suffer from the same issues.
Needless to say, nobody builds pipe networks like an object-oriented program, because the complexity is unnecessarily high and such a pipe network is not "flexible" to modification. For maximum flexibility I need access to the smallest pipe primitives, but in OOP I only have access to a bundle of pipes that are constantly mutating (an object). Objects are the primitive unit of composition in OOP, when in actuality the primitive unit should be the smallest pipe segment. Think about it: you don't build lego models with bricks glued together in arbitrary shapes. You build walls by composing bricks. The bricks are your unit of composition, and the wall, while composable, is not a primitive, and therefore can be decomposed back into bricks or half-walls... meanwhile an object cannot be decomposed; it is forever a mashup of lego bricks glued together, because in OOP that object/lego wall is your primitive.
What is the programming primitive analogous to the smallest pipe segment or singular lego brick? The function.
How do I combine functions like lego bricks? Read on:
The object was created with a certain goal, but the creators of OOP did not anticipate the cost. What the object does is reduce complexity by allowing you to think of multiple concepts as a single object. However, objects reduce complexity at the cost of modularity/flexibility.
This trade-off can be completely avoided by using functions as legos. The abstraction that "composes" functions is called function composition. This is how functions combine like lego bricks. You can look it up on Google.
The paradigm that forces you to use function composition is called functional programming. In functional programming your functions are pure and can never change state, and your data flows through a composition of functions chained together, each taking an input and feeding its output to the next segment of the pipe. Fitting fixed pipe segments together to form an unchanging pipe network is a one-to-one analogy to functional programming and composition.
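The pipe analogy is easy to make concrete. A rough sketch in Python (the `compose` helper and the sample functions are illustrative, not from any particular library):

```python
from functools import reduce

def compose(*fns):
    """Chain one-argument functions left to right, like pipe segments."""
    return lambda x: reduce(lambda acc, f: f(acc), fns, x)

# Small, pure "pipe segments"...
strip = str.strip
lower = str.lower
words = str.split

# ...snapped together into a larger pipe. Each piece stays
# independently reusable; nothing is glued to shared state.
tokenize = compose(strip, lower, words)

assert tokenize("  Duplication Is CHEAPER  ") == ["duplication", "is", "cheaper"]
```

Because each segment is stateless, any prefix of the pipe can be reused or recombined without touching the others, which is the "decompose the lego wall" property described below.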
Mind you, the style isn't perfect. You can still send "pipes" through your pipe network with first-class functions, and the paradigm breaks down at IO. You can also have functions return functions, essentially pipes that spawn new and varying pipe segments to add to the overall network. Treating functions as data creates an isomorphism with unionizing data and functions into objects, and therefore creates many of the same issues that exist by default in OOP... so use first-class functions sparingly.
You will note that I said the FP paradigm breaks down at IO. This is exactly where much of the complexity with React and Redux arises: the IO loop. If your web page were just a single static render, then all of your React components could be pure functions and your app would be perfectly modular and elegant... but because the user must interact with the page and change its state, this forms an IO loop, which pure functions do not fit well with. Hence the development of awkward patterns like FRP and Redux to deal with these issues. The benefits (and weaknesses) of FP are hard to see with React, Redux, and JS, as the front end is not the best application of FP. You really need to dive into Haskell to learn what it's all about.
Also note that you are not the only person to notice this issue with abstractions. A very popular language was created by the famous computer scientist Rob Pike with your very philosophy of writing simple and straightforward programs in mind. That language is Go, and Rob Pike eliminated classes/objects from the language to follow exactly the idea in your post. Essentially Go is going back from Java to C-style programs, where abstractions weren't as prominent. Rob Pike correctly identified the core problem as OOP. You saw much the same thing he did, but in a hazier way.
However, I don't entirely agree with Rob Pike. While fewer abstractions are better than Java's, I believe Java is bad not because of too much abstraction but because it is the WRONG abstraction. Rob is a genius, but he has had little experience with functional programs, and while Go is better than Java, the way forward is not to eliminate abstractions but to use the RIGHT abstractions. I believe functional programming is the right, and also the best possible, abstraction. However, I do not believe functional programming is the "perfect" abstraction; such perfection may not even exist.
Also, a segue on the commenters who responded to your post:
What most of these commenters are arguing is that you should have picked the right objects for your design at inception. They blame "bad design" and say you should have glued all your lego bricks into objects that are "flexible" enough to handle all future feature requests.
This is impossible unless you can see the future.
Rather to handle the future, you just don't glue the bricks together. Build the walls, but allow for the ability to decompose the wall into a smaller wall to form other primitives. If you realize that part of your lego wall can be reused to build some other concept you can always break it down to that fraction of a wall that fits what you need, you don't have to break it down all the way back to a single lego. This is maximum flexibility, maximum code reuse, with none of the over-complexity and lack of modularity of Object Oriented Programming.
Wow I wrote a lot. Hopefully that was educational. I'm going to save this for a repost as a future blog entry.
I am one to believe that you should start with the simplest possible solution and work from there; however, as soon as you need to test anything you will need abstraction. Given that you should be writing your tests as you write your code, abstractions will become useful right away. So no, I don’t believe abstractions are harmful. I believe that smelly code is harmful.
Much common sense. This is true. The world of so-called agile programming, TDD, and rolling releases does not lead to good software. Writing software is an engineering discipline and must be treated as such.
Well, it depends on what is going to be needed. If the system is planned to be greatly extended, then flexible code will likely be a great help when the time comes. However, it would be silly to do abstractions and extensible stuff when you want to do a simple thing that won't get anywhere near the first case I was talking about, exactly because of the problems you have exposed. After all, you don't want to use a sledgehammer just to crack a nut.
Thanks for writing this. I'm an old guy and have endured every "better programming" tool, paradigm, and methodology since OOP in the early 80s. I'm still an OOP fan, btw, but like most things it has been polluted by tons of dogma. SOLID and the Gang of Four are fine academic conversations, but cause more pain than progress in production. The ONLY pattern that works is KISS.
I think it's interesting to combine "flexible" and "abstraction" here. When I was new to OOP I thought that was the point of Abstraction - to be very flexible, usable for lots of scenarios. And that's kind of right... But I took it to mean I should write classes that could do a lot of different things (depending on what you needed).
Actually, abstraction should be about writing classes (or modules, prototypes, whatever) that can do 1 specific thing really well. The flexibility comes from the fact that you can take them to do that one thing in almost any context - they don't care.
I think rather than warning to avoid abstraction, we need to teach that abstraction is about separation of concerns, not about flexibility.
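To illustrate that distinction with a contrived sketch (all names hypothetical): the first class is "flexible" in the bad sense, steered by flags; the small functions below it each do one specific thing, and the flexibility falls out of the fact that they compose and don't care about context.

```python
# One class that tries to do everything, configured by flags:
# hard to reuse any single part of it.
class SwissArmyFormatter:
    def format(self, records, as_csv=False, uppercase=False):
        if as_csv:
            rows = [",".join(str(v) for v in r.values()) for r in records]
        else:
            rows = [str(r) for r in records]
        return [r.upper() for r in rows] if uppercase else rows

# Separation of concerns: each function does one thing well
# and works in any context that hands it the right input.
def to_csv_row(record: dict) -> str:
    return ",".join(str(v) for v in record.values())

def shout(line: str) -> str:
    return line.upper()

records = [{"name": "ada", "score": 3}]
assert [shout(to_csv_row(r)) for r in records] == ["ADA,3"]
```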
I disagree; I think this is the dark age of modern software development. If we worked on patterns and ways to write good abstractions, instead of telling everyone "don't do that," the community could grow up and think about the craft instead of just writing, copying, and pasting code.
If duplication is better than a wrong abstraction, a good abstraction can still solve a lot of structural problems and create beautiful and maintainable software.
Cool. OOP abstractions are just too low-level for most real-life problems. Then DSLs. But DSLs are cool only when they are not Turing-complete; otherwise we are back to the same problem of low-level general-purpose language expressiveness. Then continuous refactoring/CD/CI/tooling/code-as-data/etc. Complexity can only be dealt with when we have enough computational budget for it. Even OOP carries a pretty big run-time overhead. The same goes for garbage collection. The zero-cost abstractions of C++ (metaprogramming) increase compilation times dramatically.
Face it people. Complexity is inherent and can't be fought by 'simple tricks' of smart ideologies. Hard brute-force is the only way to go in the long run. The truth is in computational budgets. Now go and buy that new shiny Threadripper. It can make you a better programmer (if you know how to use it, of course).
"Early optimization" warnings were about finalizing your hand-coded C and assembly optimizations, not about providing extensibility and maintenance structures with ongoing growth of a system in mind.
This article is essentially clickbait, and at worst it's disinformation.
because often we design them up-front, before the actual flexibility is needed.
Up-front design is bad; follow agile, i.e. just-in-time design, and keep evolving through refactoring. Then it is not imagined flexibility but the best possible solution.
I have seen many developers do exactly that: over-engineering what could be fast, efficient, elegant, and simple solutions. It ends up being costly, and most of the time only adds complexity with marginal practical benefit.
Somewhat agree. Don't start out the gate trying to abstract a bit of code. Chances are you'd be overthinking it. Start simple, and later on you may find a repeating pattern you can reduce. More of a natural evolution of the code.
If flexible APIs couldn't be written then libc wouldn't exist. The entire world runs on libc and it hasn't required refactoring in decades. Writing reusable code simply requires sufficiently fine granularity of services.
One of the best programming blogs I have ever read. I have only been programming for 2 years, but I am already worn out from fixing others' "clever" reusable code.
Try it for 20+ years, mate; you'll ignore it, as rewriting is a waste of your company's time and money unless it has bugs, you need to add a new feature in the code area that needs changing, or you're training someone in review. :)
I promise you, you may not agree with me now, but this way of thinking comes with experience and stress avoidance!
I agree that it is possible for frequently modified function/class contracts (in the name of adjusted behavior) to become a challenge or issue. However, DRY in concept I don't think is an issue. A few minor modifications for DRY purposes is fine.
Sorry, but that's a very generalized assertion predicated on a false premise. Blanket statements tend to be born out of a lack of understanding and experience.
Maybe I misunderstood the point of the article, but I couldn't disagree more with what I got out of it.
You've captured the problem quite succinctly and it's something I try to instill with my team especially when the example set by previous developers is to use every pattern in the book to solve every problem in the future disregarding the mounting application complexity.
I mostly disagree. I will use "you" here but I don't mean you, the author of this post, directly. But as a general "you" I have observed over the years and have heard speaking similarly.
You have tried to predict where changes would be needed, and leave seams, abstractions there. Those abstractions cost you, in development time, and readability. Those abstractions make it possible to interchange behavior easily enough, without having to alter too much of the system along the way (The author has mentioned the Open-Closed Principle)
Your prediction was wrong. They (where "they" is the client, the product team, the market conditions, the environment) did require changes, but not in the places you predicted. You've fallen for a failed branch prediction and needed to bail, turn back, and make a substantial change.
The conclusion you should have reached was "make better predictions", the conclusion you have reached was "don't make predictions", which I feel is the wrong conclusion.
Adding complexity for complexity's sake is obviously bad. If you find yourself proud of how complex and smart your code is, you most likely have a problem. But complexity for the sake of solving a problem better, and leaving room to expand/extend/exchange in the future is not bad.
As you become more experienced and learn to better predict the pain points, you will better know how to allocate your software layers, where to put abstractions, and where tight coupling is acceptable.
But going from one extreme (write complex code full of abstractions, 90% of which won't be used and would need to be rewritten) to the other (write no abstractions and tightly couple everything, only untangle when you need to) is not, in my opinion, the answer. Indeed, there is no magic answer, only "Get better at predicting the future".
It's OK to be wrong; it's how humans learn. Counterexamples are popping out of my grey matter left and right.
Biggest example: everyone USES the open-closed principle every time they write a new program. Your OS, JVM, engine, compiler, or whatever interpreter you use is probably not modified by you, yet provides tons of flexibility. You create new things without mucking with their insides. Therefore, these things are closed for modification, open for extension. Operating systems are extended to serve databases, web pages, word processors, etc...
This makes some sense, but neglects that most of the time you write extensible code because it's a requirement: while the first draft doesn't need it, you know what else is coming, so designing the code to be extended makes the coder's life easier in the future. But the key thing is to document your interfaces well! And have a good example of using them.
Julia, I disagree with you slightly. Code reuse is best possible when abstraction and generality are kept in mind when doing a project. Take functional programming: functions should have some sort of generic design behind them. I once worked on a project which seemed specific, only to be given some extra functionality to extend later. Had I not designed it with some extensibility in mind, I would have had to recode the same idea multiple times. Sure enough, with flexibility in the code, I just had to feed in parameters, and it behaved as if I had recoded it.
Functional programming is almost never a good idea, however.
I've been programming for 30 years, as a career for 19. It sounds like you're talking about premature optimization, and I completely agree! Applying patterns everywhere for the sake of "patterns" or "SOLID" or "Single Responsibility" is just bad implementation. I've made a lot of money coming into projects that over-engineered themselves to death. In one app, you couldn't "new" a single entity: every entity had to be created using a factory lambda that was injected into the constructor by an IoC container! The book Refactoring, by Martin Fowler, had a great quote: "Refactor to patterns." Write it simply first. When you need a pattern, implement it. Obfuscation has a cost, which much of this indirection creates with absolutely no benefit.
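For anyone who hasn't seen that anti-pattern, here is a rough sketch of the contrast (hypothetical names, Python standing in for the real app): an entity that can only be created through an injected factory lambda, versus just constructing it.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Order:
    order_id: int

# Over-engineered: the service can't just "new up" an Order; a factory
# lambda must be injected (in the real app, wired by an IoC container).
class OrderServiceInjected:
    def __init__(self, order_factory: Callable[[int], Order]):
        self._order_factory = order_factory

    def create(self, order_id: int) -> Order:
        return self._order_factory(order_id)

# Simple first: construct directly, and introduce the factory only when
# a second concrete need actually appears ("refactor to patterns").
class OrderService:
    def create(self, order_id: int) -> Order:
        return Order(order_id)

assert OrderService().create(7) == OrderServiceInjected(Order).create(7)
```

The injected version is only worth its extra moving parts once two different factories genuinely exist; until then it is pure indirection.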
Writing extendable code without a reason to do so is simply unnecessary. However, especially when creating a framework, like Symfony (PHP), React.js, or CSS (think about it) the whole point is to create reusable functionality so that customizations can be created simply and with greater stability.
So, while it may seem that creating abstracted code is less manageable, we rely on it everyday and it serves us well.
I mean, if you'd like to create a web app from machine code, right on. But, it might take you a while.
This is hardly true. You can build something extensible (or not) which has a high level of complexity and will be hard for others to understand.
Bad architecture isn't tied to complexity or reusability; that's a rookie shortcut. It's tied to bad architects.
Abstractions might be usually wrong, or sometimes wrong, or a roll of the dice, but they are still worth exploring. Because that's the only way you ever build anything meaningful.
I would humbly submit that if you don't see a need for abstractions then you haven't worked on the types of problems or in a type of business that absolutely needs abstractions. Unfortunately (or fortunately) not everyone works in a website making company where you can slam out tutorial-like code and build apps out one by one 100% custom. Sometimes the business is entirely generic and you have to build generic or you're toast. Not everything can be bespoke and tailored.
If juniors are to become seniors, they have to have room to screw up. The only test of an abstraction is whether it makes a modular piece of code easier to "maintain". And there's no real way to learn that without trying in different contexts. What is "maintainable" depends on the people maintaining it, which means that only broad experience across many kinds of teams can teach what is maintainable in which contexts. I've seen identical OO work wonderfully and terribly depending on the teams. Same with FP and homoiconic macros. Let people prematurely optimize, as long as they are forced to clean their messes.
There are times when the original developer writes straightforward, non-extensible code and, when done, moves on to another project. Another set of devs will handle the project with new requirements. Now it is their problem to rewrite the software because it is not flexible.
When you have plenty of time, it doesn't hurt to still write extensible software. Only when you have a tight deadline, or disposable software such as a promotion website or a POC, should you write non-extensible code.
The rule is: always think of the maintainers of your software as if they are psychos living next door, eager to hunt you down.
While I do agree that some code is needlessly complex, I think this line of thought is wrong. Dangerously wrong. There must be some level of abstraction; there just has to be.
The only way I see this line of thought working is in writing nano-services so small that any change required would mean just ripping up the current implementation and starting over.
YAGNI, mentioned in the comments, is being abused here. Sure, only write what you need, but having a level of abstraction allows us to make changes easily in the future. And if done correctly, common abstractions and patterns allow us to communicate about and maintain software we didn't write.
Disagree. Your argument seems to boil down to this:
Option A is executed poorly sometimes so the concept is invalid.
Option B is superior because option A requires more forethought, planning, and more skills.
In my professional experience I've seen the opposite: systems built with little to no future-proofing, and the horribleness that inevitably occurs.
I have been working on software written by people with this mindset. They wrote a dumb piece of code years ago and made one of the core entities highly coupled to a component. They modified this entity based entirely on the requirements of that component, totally ignoring the idea that the entity could be used in another component. So I used half of the budget to understand the other component, and the other half to use the entity in the new component. Now there is no budget left for adding flexibility to this entity. Good luck to the next person who is going to build another component using it.
Always always ALWAYS simplify the problem as much as possible. That's the only surefire way to be a good coder. Sometimes you can simplify via abstraction, sometimes you simplify via duplication, sometimes via decoupling, etc. The simpler your solution, the better. If you can delete code and the product still works, then delete the code.
I had to read the whole of Sandi Metz's blog post regarding "duplication is better than the wrong abstraction." The quote is taken out of context here.
The first thing is that the keyword "wrong" has been forgotten, and people are just reading the quote as "duplication is better than abstraction." In Sandi's blog post, she chronicles how code that initially applies the correct abstraction devolves into the wrong abstraction, because each succeeding developer just throws in another parameter to change how the abstracted code works, to handle the special case flagged by that parameter. So yes, the right abstraction becomes the wrong abstraction, but the root cause here is the laziness of the succeeding developers to refactor the code correctly.
The second thing is that Sandi's post actually recommends a remedy for the poor state of code with the wrong abstraction. She recommends inlining the abstracted code back into the callers (a reverse of the "extract common code" refactoring), generating duplicate code. Then, at each place where the duplicate code is present, trim away the code that is not used. And finally, refactor again to get the right abstraction that is now needed.
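That remedy is easy to sketch with a toy example (hypothetical code, not from Sandi's post): a shared helper that accumulated a flag parameter, then inlined back into its callers and trimmed until two honest concepts reappear.

```python
# Before: the "wrong abstraction" -- one helper steered by a flag
# that was bolted on to cover a special case.
def summarize(values, as_percent=False):
    total = sum(values)
    if as_percent:
        return [v / total * 100 for v in values]
    return total

# After applying the remedy: inline the helper into each caller, trim
# the branches that caller never used, and the two real concepts
# become visible again as separate, simple functions.
def total(values):
    return sum(values)

def percentages(values):
    s = sum(values)
    return [v / s * 100 for v in values]

assert total([2, 3]) == summarize([2, 3])
assert percentages([1, 3]) == summarize([1, 3], as_percent=True)
```

From here, a third pass can extract whatever abstraction the duplicated code actually shares, which may be nothing at all.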
I think you're completely right. Simple is better than complex. Accidental complexity is considered harmful. Abstractions that you "ain't gonna need" are accidental complexity.
But what's missing in this and most other considerations is the time dimension. What someone regards as complex today is simple tomorrow, because we learn.
Take your favourite framework, for instance. Every framework out there claims to be simple to use, make things easier and quicker and lets you concentrate on business logic.
That's true, for the ones who've written it, because they know exactly what they've written it for, what they put into abstractions and what they left to the user.
The developer coming new to the framework first has to learn these abstractions, expecting this to be less work than re-implementing the abstractions.
Once you've learned it, it becomes simple, including simpler than without a framework.
If you ever work on a complex system (not a web site, an OS kernel for example) and insist on NOT applying SOLID principles, your project WILL eventually fail or maintainer of your code will use your name in horrible sentences :)
I have never heard of the use-reuse paradox. But a well-designed interface is by definition easy to use and hard to misuse. A well-designed interface is also easy to reuse, by definition. Implementation reuse is less trivial. And you will reuse automatically anyhow :)
Not sure about your experience level, but I would reconsider your source of information, as it seems to be a misleading one.
In a serious software engineering system that is expected to live more than a couple of months, none of the SOLID principles should be optional. There are even tools (static code analyzers etc.) that can catch bad practices.
The whole system is designed and coded against interfaces, and the implementation is the least important part...
System must be as simple as possible but not simpler.
Working code is usually not the most important artifact; a well-designed system is. I would rather have a few implementation bugs than a design mistake that is much harder to debug.
I agree 100%. Doing it all up front goes against agile/scrum. It would mean that you know all requirements up front in order to make all those optimizations from the beginning.
If you are doing agile/scrum, meaning not waterfall, you don't know how the entire system will turn out. All those optimizations are done, if needed, after the POC is proven to work and the business gives the green light to ship it to prod.
I think developers do all that work upfront because it is fun, it is challenging, we want to master the skill, etc., and we are not thinking about the business impact.
For very complex apps some level of abstraction is necessary, provided there is a good architect who understands the complete problem to solve. Finding that architect becomes the problem. More often than not, complete understanding is not there, so abstracting can become a real problem. When speed is of the essence, it is best to start simple and add more abstraction as you see patterns emerge. However, in the area of test automation, abstracting even a little can be devastating, because automators generally have no control over the software that they are testing.
I've been writing code since 1988, when I was 12; I started on Java in the mid-90s with Java 1.1, and you are correct in a sense but also incorrect. If you properly do discovery, which is the most important part of the project, you will very quickly see where abstraction will benefit and where it won't. Generally in true OO languages you share objects, services, and API calls and nothing more, but when you land in MVC things get a lot more complicated: you should always 100% abstract business logic from your code and keep your view separate.
Agreed. If there is no need for a framework and / or SOLID it's fine. Start developing with very simple logic. Add your tests and refactor. Consider case by case whether you need to extend or reuse.
Consider breaking up the app logic into multiple tiers, i.e. a user management API, order management API, product management API. etc.
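A hypothetical sketch of that tiering idea, with the tier names the comment suggests. The point is that each tier sits behind a narrow interface, so composing them never exposes one tier's internals to another; the in-memory objects here stand in for what would be real services.

```typescript
// Each tier is a narrow interface (names are illustrative, not from the article).
interface UserApi {
  getUser(id: string): { id: string; name: string } | undefined;
}
interface OrderApi {
  ordersFor(userId: string): string[];
}

// In-memory stand-ins; a real app would back these with separate services.
const users: UserApi = {
  getUser: (id) => (id === "u1" ? { id: "u1", name: "Ada" } : undefined),
};
const orders: OrderApi = {
  ordersFor: (userId) => (userId === "u1" ? ["o1", "o2"] : []),
};

// Composition: this function only knows the two interfaces.
function orderSummary(userId: string): string {
  const user = users.getUser(userId);
  if (!user) return "unknown user";
  return `${user.name} has ${orders.ordersFor(userId).length} orders`;
}

console.log(orderSummary("u1")); // Ada has 2 orders
```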
Btw, you have a typo in paragraph 3: "harder to charge". I think you meant harder to change.
Half of us are venting about having to clean up "reusable" components that are a half-baked 1,000-line HTML file with about 100 *ngIf directives, plus a good 600 lines of TS with the worst naming ever, because the naming is "abstract".
Frontend has the worst experience with this article's topic. Code schools and the entry level in general allow some of the most unreadable "abstraction", which becomes debt as soon as it hits the repo. Then if some hotshot PM wants testing on this monster of garbage, there's an accompanying 1,000+ lines of component tests, where again the "testing" descriptions are "abstract".
I don't agree with this at all in general terms. Yes there may be specific instances where this is the case but in most of those cases it is simply bad design to begin with.
Keeping things flexible and open means you don't have to write as much code, and if done properly it makes things easier, not harder.
It comes down to domain and technical knowledge. When you have experience in a domain, you know what abstractions are necessary because you've already seen the need for them. Don't say an MVC server backend is overkill, then 3 years later realize you're having concurrency and load-scaling issues as a competitor steals your market. Keep up to date and stay competitive.
I believe that's a thing you should consider case by case.
When you see two methods doing the same logic, just returning a different thing, you know that logic should be extracted into another method that returns something both can use for their distinct returns.
Keeping duplicate code frequently causes inconsistencies when a business rule changes.
But I can see your point. That's why you should do a refactoring work from time to time. What I've been doing recently is to keep an eye open to potential refactoring points while I'm working on an issue and then open a new refactoring issue for working on in the next sprint.
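A tiny sketch of the extraction the comment above describes, with illustrative names: two functions need the same calculation but different return types, so the shared logic moves into one helper both can call.

```typescript
// Shared core: the duplicated summing logic, extracted once.
function computeTotal(prices: number[]): number {
  return prices.reduce((sum, p) => sum + p, 0);
}

// Each caller keeps its own distinct return shape.
function totalAsNumber(prices: number[]): number {
  return computeTotal(prices);
}

function totalAsLabel(prices: number[]): string {
  return `Total: $${computeTotal(prices).toFixed(2)}`;
}

console.log(totalAsNumber([1.5, 2.5])); // 4
console.log(totalAsLabel([1.5, 2.5])); // Total: $4.00
```

When a business rule changes (say, tax is added to the total), it now changes in exactly one place.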
Sadly this is all down to either poor requirements from stakeholders or poor requirements elicitation, driving assumptions that a dev believes are non-functional extensibility requirements.
I cannot overstate how important it is not to make assumptions in our line of work.
The example another poster gave, of a different calendar country or year, should have been asked about up front. Since the customer (stakeholder) may not know in advance, you push for the non-functional in an iteration, either as part of the agile process or spiral-model prototyping, as soon as you become aware of its need. You make the customer aware that the non-functional requirement/assumption is going in and, if possible, have them sign off. This is elicitation, and it concentrates the customer's mind. It will tell you whether YAGNI applies, which matters for those big design/timesink parts of your system that are usually responsible for the extra testing needed (or the bugs that creep in).
Designing for this extensibility doesn't entail extra work or unnecessary complexity when it's part of the basic customer requirements or has been explicitly ruled out.
Caveat emptor; you will never know all requirements, even the customer can't see the future - that would be the waterfalling fallacy.
As agile implies that a system evolves, this includes going back and breaking things. So while open-closed is nice from a maintainability perspective (less code to test, in theory), ultimately it's doomed to long-run failure, as eventually that base class should change or the system be rewritten. That shouldn't mean you don't attempt to achieve it (e.g. a strategy pattern, common events, common composite models, common method sigs), as it's good practice; just don't add in things that you *might* need, like a magic unicorn feed, because you think unicorn feed is a feature that may be wanted as part of the common model and will give you flexibility in future.
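For what the strategy-pattern mention above can look like in miniature (all names hypothetical): the varying behavior is passed in as a function, so new rules are added without editing the checkout code, which stays closed for modification but open for extension.

```typescript
// The varying behavior, expressed as a type.
type PricingStrategy = (subtotal: number) => number;

// Two interchangeable strategies.
const standard: PricingStrategy = (s) => s;
const holidayDiscount: PricingStrategy = (s) => s * 0.9;

// Checkout never changes when a new strategy is introduced.
function checkout(subtotal: number, price: PricingStrategy): number {
  return Math.round(price(subtotal) * 100) / 100;
}

console.log(checkout(100, standard)); // 100
console.log(checkout(100, holidayDiscount)); // 90
```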
P.S. bug test your article, code isn't "harder to charge", unless you mean trying to get these non functionals past a customer ;)
I've seen so many bad platforms because they were designed without any thought on how to scale to needs. I've seen years wasted and companies fall because they had systems that were never meant to be built upon and couldn't keep up with client demands.
As others have said, though, people write abstract code when they shouldn't. I think this post teaches the wrong lesson: it shouldn't be "don't do it", it should be to see when you don't need it and to make sure to do it right.
Flexible, reusable code is best, especially when focused on feature-rich components. After a second or third use (feature-enhancing or problem-fixing code), you can usually see a pattern; then build reusable components that will be used again in many pieces of many apps.
Having complicated, gigantic, complexly designed, ultra-flexible Groovy code with factory patterns and icing on top in your scripted pipeline would not be much use. That's true. But abstraction is the first thing that distinguishes a coder from ordinary mortals. Using design patterns helps create a good system, and it will always be easier to maintain than refactoring an overly redundant, patches-over-patches system. I've been there, and I can tell you the technical debt cost the business big time. Don't do that.
Flexibility and support for re-use and extension come from having the right abstractions. The right abstractions come from a deep understanding of the domain you are working in - the real-world problems and scenarios your software is supposed to address. You acquire that deep understanding by listening carefully and attentively to stakeholders - the people who are sponsoring your work, and the people who will use it - and then applying critical analytical thinking to what you hear.
Look for patterns, look for general cases, look for which things are like each other and which are not. Think about life cycles and relationships between things.
Do all of that first, before you start coding, and you will end up with flexible, re-usable abstractions, which will repay whatever effort you spend many times over.
Building non-trivial systems is less about coding and more about understanding what those systems need to do.
I couldn't agree more. Extensible code is a myth. Subsequent maintenance programmers seldom touch code they can't understand in a few minutes. Doing the simplest thing first is usually the best way.
I agree. Unless you've written the code before, early abstraction leads to rewrites. Keep it short, simple, and to the point. As new requirements are discovered begin abstraction.
I was thinking about this yesterday. You are so right. I wrote flexible code for one project, and now I am having difficulty understanding what I wrote.
Perhaps pair programming, code comments, documentation and peer review would be useful then?
What do you need right now at this exact moment. Consider this and nothing else.
Agree and disagree. You gradually develop intuitions to avoid wrong abstractions. But yes, as creators, it's easy to get infatuated by the beauty of what we're creating while forgetting its utility.
Probably the worst dev article I ever read. It's just so wrong on so many levels. Without abstraction, there is no modeling to your code; it just becomes a script of statements without any soul.
The worst is when you think your colleague writes overly-abstracted code for the purpose of job security.
I see far too much of "I see this thing twice, therefore I will abstract it". But in reality, it's pure coincidence that they are duplicated. Duplicated code is sometimes bad, but not automatically bad.
Strong agree, the biggest issue I see when working on our legacy code is premature abstraction.
Can anyone provide an example of open-closed principle that leads to complexity and incorrect abstractions and inflexible code?
Nice shirt ;)
Functional programming is best for writing simple and extendable functions; extending by composition and testing with proper unit tests for the use cases is the way to go, IMO.
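A small sketch of that composition style, with made-up helper names: each function is trivial on its own and trivially testable, and the "extension" is just composing them into a pipeline rather than modifying any of them.

```typescript
// Small, single-purpose functions.
const trim = (s: string) => s.trim();
const lower = (s: string) => s.toLowerCase();
const slugify = (s: string) => s.replace(/\s+/g, "-");

// Generic left-to-right composition helper.
const pipe =
  <T>(...fns: Array<(x: T) => T>) =>
  (x: T) =>
    fns.reduce((v, f) => f(v), x);

// Extension by composition: no existing function changes.
const toSlug = pipe(trim, lower, slugify);

console.log(toSlug("  Hello World  ")); // hello-world
```

Each piece (`trim`, `lower`, `slugify`) can get its own unit test, and new pipelines reuse them freely.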
Typo in "harder to charge". Also see the article by Udi Dahan titled "The Fallacy of Reuse"; it's one of my all-time favorites.
I agree. It reminds me of an issue I've been thinking about for years. The sometimes seemingly conflicting D.R.Y. vs K.I.S.S.
Disagree. I do not see, based on this article, how you would make me a better programmer, so I will have to pass on your Instagram invite to become a better programmer.
I agree. I've seen far more bad code produced because the author was trying to make it flexible or reusable than because the author was just doing the simplest thing that met the requirements.
Abstraction is wonderful and saves millions if not billions of dollars every year.
I recommend pair programming with a senior developer.
Can you give me an example of flexible and inflexible code? Real-world preferably but will accept fanciful.
Sometimes simple code increases the number of lines or number of classes, and it becomes a headache to manage such a huge amount of code. So I prefer flexible and short code.
How do you do fellow coders I'm also 100% a real human that writes code; buy my premium Slack to see more 😉
I could not read this without thinking of that age-old question: what the heck are you talking about? In this case the answer would have been found in examples.
"Write dumb and straightforward code by default".
There are a lot of points to argue against in this article. However, I think the one you have chosen here isn't one of them.
I like straightforward and dumb code. It's easy to test and to catch bugs in. It's also easy to peer review, because the intent is generally easier to grasp than with complicated solutions.
I'm curious what experience you have had to elicit your response.
The Biggest mistake?
Wow. I thought that photo was click bait. You ARE a unicorn!
I haven't learned this pattern of abstracting everything just yet, but I'll definitely be careful when I get there
Agreed, 100%, from someone who has written 'dumb' linear code for ages and benefitted from it countless times.
This is called YAGNI, and it's been around for a while.
This is an over-generalization at best, and the "biggest mistake" clickbait is just silly.
I think you nailed it, mentioning the use-reuse paradox. The point, though, is that there is a tradeoff, and not a simple rule-of-thumb. You always have to think!
You nailed it!
That shirt is hilarious. Where can I buy one?
Please provide use case where it went wrong.
Feels like you are not following the principles correctly, or are over-engineering the concepts.
Time to revisit them?
First sentence: "The biggest mistake programmers make is writing flexible and abstract code".
Last sentence: "Add flexibility only when necessary".
Disagree. If your extension points are adding complexity, then your code is not flexible.
Or, to put it another way: You're doing it wrong.
Have been saying that for years. Abstracting for the sake of it, without understanding the reasonable life and evolution of an app, causes problems down the road. I almost always write only what is needed, but write it for future maintenance.
I'm calling bullshit and refuse to elaborate.