I think this might be a bit of a controversial opinion nowadays, so I'll try to present it in a way that avoids misinterpretation: writing your software for generic use cases is suboptimal.
And I know I might sound like a Unix zealot here, but the old adage is (still) true: do one thing and do it well.
Ok, this should not be the kind of article where statements are presented as axioms without arguments to sustain them, so I'll walk through a theoretical project's lifecycle and examine how differently the two approaches play out.
How projects are born
I'm going to piggyback on my current project, Penna, an SLF4J backend built for the specific use case of writing JSON logs to stdout, targeted at services running in containers, Kubernetes or similar environments.
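To make that concrete, here's what the use case looks like from the application's point of view. This is a minimal sketch using the standard SLF4J API; the JSON field names in the comment are illustrative, not Penna's exact documented output.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Example {
    private static final Logger log = LoggerFactory.getLogger(Example.class);

    public static void main(String[] args) {
        // With a backend like Penna on the classpath, the expectation is a
        // single JSON line on stdout, roughly:
        // {"level":"INFO","logger":"Example","message":"user created"}
        log.info("user created");
    }
}
```

The application code only ever talks to the SLF4J API; the backend decides where the log ends up and in which format.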
Why did I come up with this idea? Well, that's a pattern I've seen in several companies I've worked for, and I'm pretty sure you've come across something similar as well.
The current status quo involves setting up an XML file where you have to describe the formatter, the layout, and so on, in detail. The information you find online isn't always up to date. It mostly works, but whenever you start a new project you have to copy the configuration from somewhere or spend time digging through countless similar, unanswered Stack Overflow questions for a recent-ish example that works for you.
At this stage, I can already tell that a generic solution imposes some level of cognitive load onto the users.
So if we abstract away here, I would say that specific software is created to address a specific problem, while generic software is specific software that hasn't outlined its boundaries clearly.
The development phase
One of the most powerful abilities one can develop is the ability to say no. As the backlog grows, it is not always wise to implement every idea that seems interesting, because that can lead to bloat.
And as you broaden the scope of your application, you enable unforeseen usages (we're creative beings, after all); I wouldn't be surprised if there is at least one company out there logging XML log messages to some manager's corporate mailbox.
So having a hard scope is a way to help us say no, even when we're weak and tempted to write a Morse code encoder for our library. No bloat.
So the lesson here is that a specific requirement allows us to write more meaningful code while avoiding misleading our users.
Runtime impact
This might be a bit more difficult to prove without concrete code (but feel free to look at the benchmarks in Penna for one case), but it is pretty intuitive that specific software is more efficient. There are multiple reasons for that: there is a single path your program has to take, there are fewer frames on the stack and fewer unnecessary abstractions, and that in turn reduces the chances for bugs.
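To illustrate the "fewer frames, fewer abstractions" point, here is a deliberately tiny sketch; the names are hypothetical (not Penna's actual internals) and it ignores real-world concerns such as JSON escaping. The generic path pays for two extra stack frames and two virtual dispatches on every call, while the specific path goes straight to its single known destination.

```java
// Hypothetical generic design: every log call walks through configurable layers.
interface Layout { String format(String message); }
interface Appender { void append(String formatted); }

final class GenericLogger {
    private final Layout layout;
    private final Appender appender;

    GenericLogger(Layout layout, Appender appender) {
        this.layout = layout;
        this.appender = appender;
    }

    void info(String message) {
        // Two extra stack frames and two virtual dispatches per call.
        appender.append(layout.format(message));
    }
}

// Hypothetical specific design: one format, one destination, no indirection.
final class SpecificLogger {
    void info(String message) {
        System.out.println("{\"level\":\"INFO\",\"message\":\"" + message + "\"}");
    }
}
```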
Sure, your specific problem can be hard and require complex computation, and that will naturally be slow, but it would likely be even slower inside a generically implemented structure.
And now I hear the hordes yelling "But performance is not important", "premature optimization is the root of all evil" or even "computers are fast today anyway". Yes, maybe. To begin with, this is not premature optimization; it is intentional, rather than accidental, software design. And even though processing power keeps increasing and resources are abundant, there is no benefit to intentionally making your software perform worse at the expense of the CPU, memory or battery of your customers' devices, or the cloud billing dollars you or your boss are bleeding monthly.
Counter-argument
Now I'm going to take off my red fedora and wear another hat. Maybe your problem space allows for several different approaches, and the cost of implementing several tiny pieces independently is significantly higher than implementing one configurable piece that offers alternative solutions.
For example, let's imagine a tool that grabs several text files and packs them together into a single one, potentially compressing the end result into a smaller package.
This theoretical tool would need to support gzip, bzip2, xz and maybe brotli, but that doesn't mean we should start with compression. It is a natural evolution for our software and, mind you, we haven't rolled our own compression algorithm here; we're delegating to third parties.
Still, in this case the problem statement is specific, and even though we have configurable compression algorithms, they can be well isolated and plugged into the software behind a single abstraction, as the sketch below illustrates. This is where good software engineering skills come in: understanding 1) when an abstraction is warranted and 2) how best to implement it. Each case calls for a different solution, and this is where our thinking jobs are (for now) still secure against greedy job-taking AIs.
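As a sketch of what that isolation could look like, here's a hypothetical `Compressor` abstraction for the packing tool. Only the gzip implementation is shown, since the JDK ships one; bzip2, xz or brotli support would sit behind the same interface, delegating to third-party libraries.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

// Hypothetical seam: the packing code writes to whatever stream this returns,
// without knowing which algorithm (if any) is behind it.
interface Compressor {
    OutputStream wrap(OutputStream sink) throws IOException;
}

final class GzipCompressor implements Compressor {
    @Override
    public OutputStream wrap(OutputStream sink) throws IOException {
        return new GZIPOutputStream(sink); // delegates to the JDK
    }
}

final class NoCompression implements Compressor {
    @Override
    public OutputStream wrap(OutputStream sink) {
        return sink; // pass-through when no compression is requested
    }
}
```

The abstraction stays small precisely because the problem statement is specific: "compress the output stream" is the only variability we actually need.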
Turns out my other hat was a panama hat: still specific. And so is the "generic" software; it's just that its scope is too big.
Falling into the trap of starting generically can cloud our judgment, eagerly presenting us with apparent needs for abstractions, leading to hasty decisions and confused users. With that, I guess I'll cut this article short, or I might risk bloating it past its necessary scope ;)
Cover photo by Martin Adams on Unsplash