Programmers want to write good code. That makes sense: folks in every profession take pride in their work and want it to be of the highest quality. High-quality software includes many things, one of which is code that is easy to modify and maintain.
Software is never really “done”. There are always new versions with new features, which means more code gets written, usually on top of existing features. How the existing code was written seriously affects how difficult it will be to write the new feature.
For example, Amazon used to only sell books. Now they sell everything. Let’s say it takes you 6 months to build a website that sells books. Would you want it to take another 6 months to make your website capable of selling clothes? And another 6 months to add the ability to sell electronics?
Probably not. Ideally, adding a new type of item to sell should be quick. There’s all that infrastructure built to sell books, and most of it should be reusable. Adding clothes or electronics should take far less time: maybe a month, or even a week. That can only happen if the code to sell books was made easy to modify.
The problem is: how do you define whether code is easy to modify?
There’s little immediate feedback when writing maintainable code. You may not need to modify it for months or years after writing it. Even then, you may not do it often enough to see if your earlier decisions were good or not.
More importantly, how does a new programmer learn to make those decisions if they don’t get proper feedback until a couple of years later? There’s no foundation for making these decisions because there simply hasn’t been enough time to see the results of early attempts. Beginning programmers can only act on what sounds good to them.
The thing that sounds best is usually making sure code isn’t duplicated. Using Amazon as an example, if they have code to add books to a shopping cart, would they want to write brand new code to add electronics to a shopping cart? Probably not. They would want to reuse the shopping cart code for books.
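To make the reuse idea concrete, here is a minimal sketch (the class and field names are invented for illustration, not Amazon’s actual code) of a cart whose logic doesn’t care what kind of item it holds, so the same few lines serve books, clothes, and electronics alike:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    # Any sellable thing: a book, a shirt, a phone.
    name: str
    price_cents: int  # integer cents avoids floating-point rounding

@dataclass
class Cart:
    items: list = field(default_factory=list)

    def add(self, item: Item, quantity: int = 1) -> None:
        # One add() for every product category; no duplicated cart code.
        self.items.extend([item] * quantity)

    def total(self) -> int:
        return sum(item.price_cents for item in self.items)

cart = Cart()
cart.add(Item("The Pragmatic Programmer", 3999))
cart.add(Item("T-shirt", 1250), quantity=2)
print(cart.total())  # 6499
```

Because `Cart` only depends on a price, adding a new product category means defining new items, not rewriting cart logic.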
Like all good things, programmers can take this idea way too far. Many beginning programmers, including myself years ago, start seeing duplication where there isn’t any. More specifically, we see the possibility of duplication and we attempt to head it off at the pass.
This Medium post is a perfect illustration. The program it describes is extremely simple. Over time, the programmer starts imagining more uses for it and writes more code to make sure they will always be able to reuse what they wrote.
Because the programmer is unable to get feedback or experience the consequences of their decisions, they can only assume they are on the right path. That means thinking of even more possible ways to reuse their code. The speculation compounds over time, and they end up writing over a hundred lines of code so they can reuse something that was only a handful of lines to start with.
Maybe 100+ lines of code is worth it if that code does end up getting reused a lot?
The problem is that the programmer has accounted for possible ways to reuse the code, but not necessarily likely ways. In an attempt to cover every possibility imaginable, they have created a convoluted system that is much harder for the rest of the team to read. There is also the fact that the more code you write, the likelier it is that some of it is broken. A simple program that was probably bug-free is now a complex program that is more likely to have bugs.
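A contrived sketch of the trap (all names invented for illustration): compare a one-line discount calculation with a “flexible” strategy engine built for discount types nobody has asked for yet. Both produce the same answer for the only case that actually exists today.

```python
# The simple version: a handful of lines that solve today's problem.
def discounted_price(price_cents: int, percent_off: int) -> int:
    return price_cents * (100 - percent_off) // 100

# The speculative version: pluggable "strategies" for hypothetical
# future discounts. Same answer, but every reader now has to trace
# through the indirection, and every extra line can hide a bug.
class DiscountStrategy:
    def apply(self, price_cents: int) -> int:
        raise NotImplementedError

class PercentOff(DiscountStrategy):
    def __init__(self, percent: int):
        self.percent = percent

    def apply(self, price_cents: int) -> int:
        return price_cents * (100 - self.percent) // 100

class DiscountEngine:
    def __init__(self):
        self.strategies: list = []

    def register(self, strategy: DiscountStrategy) -> None:
        self.strategies.append(strategy)

    def apply_all(self, price_cents: int) -> int:
        for strategy in self.strategies:
            price_cents = strategy.apply(price_cents)
        return price_cents

engine = DiscountEngine()
engine.register(PercentOff(20))
print(discounted_price(1000, 20))  # 800
print(engine.apply_all(1000))      # 800
```

The engine only earns its keep if several discount types genuinely show up; until then, it is pure overhead.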
At a certain point, hopefully before the 10 years described in that post, the programmer has gained enough experience to recognize the difference between likely and unlikely possibilities. They learn to separate the code that is most likely to be reused, which needs to be built more robustly, from the code that is best kept simple.
There’s no magic bullet for gaining that ability. It requires making those decisions early on and seeing which ones were right and which were wrong. Practice makes perfect, after all. That doesn’t mean programmers eventually learn to make perfect decisions; they just gradually make better ones.
This post was originally published on blog.professorbeekums.com