Philipp Giese

Originally published at philgiese.com

Why Building Software is Hard

Every day someone new decides that writing code will be the thing they'd like to do for a living. Or maybe just for fun. And thanks to incredible frameworks and Stack Overflow, getting your first results has become somewhat easy. This is great but has misled some people into thinking that building software (i.e., larger apps that people pay for) is easy. I believe this is because some practices that lead to great software aren't visible when you first enter the field. This post will be my ever-growing list of practices and misconceptions in the software world.

Before we start

What I don't want this post to be is discouraging. As I said, you can start small and get results. You don't need to know or follow everything I'm describing from day one. Pick something, try it out, and see for yourself. Some points might work better for you (and your team) than others.


| What you think works | What actually works |
| --- | --- |
| Work on as many tasks as possible in parallel | Focus on one task at a time |
| Code reviews | Pair or swarm on tasks |
| Design and QA as separate steps | Co-create and involve people at every stage |
| Do rigorous planning up-front | Experiment and work in small increments |
| Few releases with extensive manual tests | Automate releases and do them frequently |
| Never change a running system | Continuously adapt |
| Build the big thing that handles everything | Build many small blocks that do exactly one thing |
| Have someone manage releases | Deliver continuously |
| Keep people busy all the time | Provide room for self-improvement |
| Writing software is easy | That sh*t ain't easy |

Why you are faster the fewer tasks are open in parallel

When five people work on five tasks simultaneously, you'll get five results faster, right? Wrong! Many people (want to) believe this is true because then every additional developer would make the team faster. Unfortunately, the opposite is true. How come? This sentiment assumes that you can define and work on five tasks in complete isolation, maybe on separate feature branches, because that would be required to make the above somewhat true. In real life, the five people will probably need to talk to each other because their work is interdependent in one way or another. They will also most likely change the code in ways that affect the work of the other developers. That means these developers will need to sync from time to time to check whether their work is still compatible. And with the word sync, your parallel dream is over.

It can get worse. When person A works on something that person B needs and person B relies on the fact that A finishes their task before B picks up the next task, there is much more risk involved. Any interruption of A (vacation, sickness, manager casually swinging by their desk) will become an issue for B. Imagine there is a person C who is also waiting for B. This style of working builds a massive house of cards that will come down, eventually.

A system that optimizes for parallel work ultimately optimizes for busyness over productivity. People will always have something to do, but nothing will come out of it. Lean and Kanban teach us that single item flow (i.e., working on one thing at a time) is desirable when you want to get things done. That does not mean that one person works on one thing at a time (even though that might already be an improvement in some companies). It means that the whole team works on only one thing at a time. This seems counter-intuitive at first, but this way of working has some key advantages:

  • No need for reviews — When the whole team works on a task, the review happens continuously. With that in place, there is no need for another gateway step, which speeds up the whole process.

  • No handovers — When designers and QA folks work with everyone, you don't need a design handover or a QA handover. The designers figure out the design with developers, and the QA folks help build the right thing. This removes a lot of back and forth, shortens feedback cycles, and speeds up the process.

  • No knowledge silos — As everyone works on the same thing at the same time, everyone shares the same knowledge. With that, it is no problem when people get sick or go on vacation. Everyone else is still on board, and when someone comes back, they are quickly brought up to speed on the changes.

The methods I just described minimize the time spent on wasteful activities. Doing so automatically increases the time you spend on activities that create value.

Why code reviews aren't helping you

Code reviews are a low bandwidth form of communication with a high chance for misunderstandings (Peter Hilton). When the reviewer does not share the same context as the person who wants the review, chances are high that the reviewer will miss or not fully understand the most significant bits.

The quality of a review is inversely proportional to how much the code change is in need of a proper review. This means that a 15-line change will get a lot of comments because it is easy to understand. But because it is easy to understand, the chance that the reviewer will find something groundbreaking is also low. A 1500-line change will likely get no comments because the change is just too hard to understand.

Reviews are also an interruption to flow. What does the person who wants the review do now? Best case: nothing. Worst case: start the next task already (see why you are faster the fewer tasks are open in parallel).

You can make code reviews obsolete with mob programming plus test-driven development. A code review happens all the time when the whole team works together. TDD is a forcing function to have the most important discussions at the start because everyone needs to be on the same page about what the mob is currently trying to achieve (get the current test green). It also helps to ensure you're not accidentally breaking something as you move along.
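
To make that concrete, here is a minimal sketch of one test-first step. The `formatPrice` function and the Jest-style `test`/`expect` globals are assumptions for illustration, not something from this article's codebase.

```typescript
// formatPrice.test.ts — red: the mob agrees on the goal by writing a failing test first.
import { formatPrice } from "./formatPrice";

test("formats a price in cents as a euro amount", () => {
  expect(formatPrice(1999)).toBe("19,99 €");
});

// formatPrice.ts — green: write just enough code to make the test pass, then refactor.
export function formatPrice(cents: number): string {
  const euros = (cents / 100).toFixed(2).replace(".", ",");
  return `${euros} €`;
}
```

The failing test forces the discussion about expected behavior before anyone touches the implementation, and it keeps guarding that behavior afterwards.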

Why you should avoid handoffs

Handoffs happen when people work in parallel. This means that anytime the deliverable that got handed to you isn't perfect, you're going to interrupt the flow of someone else because they need to look at something again that they thought was finished. If you're lucky, they can do that right away. Otherwise, you'll need to wait until they finish their current task. You might be able to see how this quickly adds a lot of waste to your process. The more handoffs you have, the higher the chance for waste.

Now you're in a situation where everything takes forever, and you can't make progress because everyone is waiting for something from someone else. This means deadlines will approach, and you're unable to deliver. Since people know that fixing issues takes forever, they tend to stop fixing them, which leads to crap products being released. Crap products lead to bad customer feedback, which in turn leads to more rigorous handoffs. A vicious cycle.

Some people will claim that you can fix this situation with better planning. That won't work. Certain things are impossible to know up-front, and you'll only discover them when you start working on your task.

Again, the solution is to work together. Eliminate handoffs by co-creating with designers, developers, QA folks, and customers. Then make sure you work in small increments. When designers need to watch developers code for weeks on end, they're less likely to enjoy working with them. When you ship something every other day, and you can rapidly prototype and change the software as you get new insights, the process will motivate everyone involved.

Why you should not spend too much time planning

Some people think they are smarter than everybody else. The same people think you can plan a large software project in detail. A missed deadline? A better plan could have avoided that.

These people forget that our job is not to execute plans perfectly. Our job is to build the right thing and build it right. Nothing is worse than perfectly executing the wrong plan.

A big plan will ultimately lead to large releases. Why? Because frequent small releases will likely result in feedback, and that feedback may not match your plan. This might not sound bad to you, but for people who think their job is to come up with perfect plans, this is a threat. Mostly because it can expose that what they sold as knowledge was simply an assumption. Plans also fix many things in place because to create a plan, you need to make decisions. When you decide early on about aspects that might only become important a couple of months later, it is easy to pigeonhole yourself into a fixed mindset. And fixed mindsets don't like change at all.

Instead of planning too much, you should spend time experimenting. Thanks to concepts like feature toggles, you can test changes or new features with a subset of select users before you roll them out for everyone. This helps to mitigate the risk of releasing something that nobody wants.
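
As a rough sketch of what such a toggle can look like (the names and the percentage rollout are my own invention, not a specific library's API):

```typescript
// Minimal feature-toggle sketch: roll a feature out to a percentage of users.
// Real toggle services work similarly but add targeting rules, auditing, and a UI.
type FeatureName = "new-checkout" | "dark-mode";

const rolloutPercentage: Record<FeatureName, number> = {
  "new-checkout": 10, // only 10% of users see the new checkout for now
  "dark-mode": 100,   // fully rolled out
};

// Hash the user id so the same user always gets the same answer.
function bucketFor(userId: string): number {
  let hash = 0;
  for (const char of userId) {
    hash = (hash * 31 + char.charCodeAt(0)) % 100;
  }
  return hash;
}

export function isEnabled(feature: FeatureName, userId: string): boolean {
  return bucketFor(userId) < rolloutPercentage[feature];
}

// Usage: branch on the toggle instead of waiting for a big-bang release.
// if (isEnabled("new-checkout", currentUser.id)) { renderNewCheckout(); }
```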

You should also be clear about what you think is true and what you know to be true. Tracking your hypotheses helps you determine which experiments to run first, starting with the ones that promise the most significant impact. With that, you can avoid surprises late in the process, which would otherwise lead to delays.

Why you will never find all the bugs

No one likes to introduce bugs. This is why some insist that rigorous testing must happen before each release. With that, they usually mean manual, exploratory testing. While this kind of testing definitely adds some value, I would argue that it shouldn't be done for every change.

When you try to find every edge case, you'll never ship something because you're never done searching. But when you never ship to production, you'll never discover all edge cases. Testing too much can lead to situations where people think they should plan more, outsource testing, release less often, and parallelize more work.

What to do? Accept that bugs will exist no matter what you do. Then work on reducing the risk of high-impact bugs.

  • Work in small increments — This ensures that the code changes are small when a bug appears, making it easier to identify and fix. Also, working in small increments generally reduces the risk for bugs because small changes are easier to comprehend than large ones.
  • Better automated tests — Write tests that help you find defects faster so that even while developing, you can identify issues earlier and be more confident about your changes.
  • Test Driven Development — Admittedly, that's my personal bias. TDD makes you think about your changes before you do them. This usually forces you to have the most important discussions first, leading to better code and fewer defects.
  • Focus on MTTR — MTTR, or Mean Time to Resolve, is my favorite metric. It measures how fast you are at fixing bugs. I think it is an excellent forcing function to help teams optimize their way of working so that when a bug happens, it is fixed in the fastest way possible (see the sketch after this list).
  • Use a zero-bug policy — A zero-bug policy does not mean that there are no bugs but that all bugs must be resolved before regular work happens. That's great because it means you'll never (knowingly) build features on top of broken code. This further reduces the risk of future bugs.
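
Here is a back-of-the-envelope sketch of how MTTR could be tracked; the incident shape and the hour-based unit are assumptions for illustration, not a standard.

```typescript
// Sketch: compute Mean Time to Resolve from a list of resolved incidents.
interface Incident {
  openedAt: Date;
  resolvedAt: Date;
}

export function meanTimeToResolveHours(incidents: Incident[]): number {
  if (incidents.length === 0) return 0;
  const totalMs = incidents.reduce(
    (sum, incident) => sum + (incident.resolvedAt.getTime() - incident.openedAt.getTime()),
    0
  );
  return totalMs / incidents.length / (1000 * 60 * 60);
}

// Example: two bugs, fixed after 2 hours and 6 hours → MTTR is 4 hours.
meanTimeToResolveHours([
  { openedAt: new Date("2024-01-01T10:00:00Z"), resolvedAt: new Date("2024-01-01T12:00:00Z") },
  { openedAt: new Date("2024-01-02T09:00:00Z"), resolvedAt: new Date("2024-01-02T15:00:00Z") },
]); // 4
```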

Why you should refactor

What habits did you change in the last year? Did you stop doing certain things and maybe start doing some others? Did you learn something new? Did the language you use get an update that makes it easier to achieve a certain task? All these things happen, so we need to refactor our code constantly.

The moment you write code, you put an expiry date on it. Even if the code was crafted according to current best practices and standards, the sheer passage of time will make it erode, because your tooling gets better and you keep learning new things.

When you never refactor code, it will eventually expire. This means that the cost of a large-scale refactor will equal the cost of a rewrite. It is a bit like housekeeping. When you clean up a little every day, your house will be in an acceptable state all the time. However, when you neglect cleaning for too long, things go from dirty to broken.

I like to work according to the rule that you should always leave a place a little better than you found it. Refactors don't have to be big. I'd even say they should never be big. Make small improvements every day, and they will amount to something big over time. When you get into this habit, tech debt related to outdated code won't be on your list of problems anymore.
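
To make "leave it a little better" tangible, here is the kind of tiny, in-passing refactor I mean (the example and names are invented):

```typescript
// Before: a magic number and an unclear condition you stumble over while working nearby.
function canDownload(user: { plan: string; downloadsToday: number }): boolean {
  return user.plan === "pro" || user.downloadsToday < 3;
}

// After: same behavior, but the intent is named. Two minutes of work, done in passing.
const FREE_DOWNLOADS_PER_DAY = 3;

function canDownloadRefactored(user: { plan: string; downloadsToday: number }): boolean {
  const isProUser = user.plan === "pro";
  const hasFreeDownloadsLeft = user.downloadsToday < FREE_DOWNLOADS_PER_DAY;
  return isProUser || hasFreeDownloadsLeft;
}
```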

Why that huge component might not be a good idea

When people learn to code, they inevitably encounter certain design principles. One pretty much everyone knows (maybe because it is so simple) is "don't repeat yourself," or DRY. Oh boy, do I not like this principle. For some reason, people think that any form of repetition is bad design. What this leads to are premature and very leaky abstractions. Because they are premature, these abstractions leak all over the place and lead to high coupling and low cohesion in your code base.

One symptom of that is either huge classes or long functions. For functions, you can easily spot this when people start passing around flags that control the flow. In most cases, you'd want two or more functions, each focusing on one thing. However, that would lead to some duplication, and when people think duplication is bad design, they end up with huge functions that combine many use cases.
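
Here is a sketch of that flag smell and the split that restores focus; the invoice example is invented:

```typescript
interface Invoice {
  number: string;
  customer: string;
  total: number;
}

// Smell: a boolean flag that makes one function do two jobs.
function renderInvoice(invoice: Invoice, isDraft: boolean): string {
  if (isDraft) {
    return `DRAFT – ${invoice.customer}: not yet binding`;
  }
  return `Invoice ${invoice.number} – ${invoice.customer}: ${invoice.total} €`;
}

// Better: two functions that each do exactly one thing.
// The tiny duplication between them is fine.
function renderDraftInvoice(invoice: Invoice): string {
  return `DRAFT – ${invoice.customer}: not yet binding`;
}

function renderFinalInvoice(invoice: Invoice): string {
  return `Invoice ${invoice.number} – ${invoice.customer}: ${invoice.total} €`;
}
```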

The goal of DRY was to keep things re-usable. Instead of duplicating logic into multiple places, it should be well encapsulated and re-used. That's a great idea! So why doesn't it work?

In a lot of cases, duplication is a coincidence and nothing more. If you mistake this coincidence for something more and create an abstraction out of it, then your abstraction also only works coincidentally. I always write the same code at least three times before I start thinking about extracting it. Only then will I have understood the use cases well enough to develop an abstraction that makes sense, is re-usable, and doesn't leak implementation details.

Think of abstractions and methods as a toolbox. You don't have one screwdriver that fits all different screws in all different sizes. Such a tool would be utterly useless. But multiple full-blown screwdrivers are also over the top. They would take up way too much space. So we took the re-usable part - the handle - and created a set of different bits that all fit on it. All bits still have some duplication: the metal, the part needed to make them fit into the handle. However, this duplication helps, and no one would think about removing it because that would make using the different bits much harder.

Why releasing should not be a big deal

Have you ever heard of No Release Fridays? It's a saying that teams should not release on Fridays because the release might break, and someone needs to spend their weekend fixing it. This sounds reasonable until you ask yourself why releasing is scary in the first place. When you build software so that releasing it is scary, change how you build it!

When something is hard, do it more often. That statement is true for working out and for almost anything else in life. Don't be afraid of bugs and release as soon as something hits your main branch.

When you're building a library or framework, tools like semantic release can help you not only release but also figure out the correct version bump and automatically create changelogs.
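
To illustrate the idea, here is a sketch of the convention such tools automate: commit messages in the Conventional Commits style determine the next version bump. This is not semantic release's actual implementation, just the gist.

```typescript
// Sketch: map Conventional Commit messages to a semantic-version bump.
type Bump = "major" | "minor" | "patch" | "none";

function bumpFor(commitMessages: string[]): Bump {
  if (commitMessages.some((msg) => msg.includes("BREAKING CHANGE"))) return "major";
  if (commitMessages.some((msg) => msg.startsWith("feat"))) return "minor";
  if (commitMessages.some((msg) => msg.startsWith("fix"))) return "patch";
  return "none";
}

bumpFor(["fix: handle empty cart", "feat: add PayPal checkout"]); // "minor"
```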

If you're interested in a real-world example of how to change your release process, then have a look at this article from my co-worker Tobias where he describes how we pulled it off at BRYTER.

Why people should not work all the time

We've already talked about wasteful activities such as working in parallel, code reviews, and handoffs. If you avoid all these pitfalls and crank out code using TDD in a mob all the time, you should be good, right? Unfortunately, not. In fact, you shouldn't be working 100% of your time.

That might be the point where you think I'm crazy. We're all knowledge workers, and if we're busy all the time, we're giving up a huge potential. The potential for innovation.

Think about it. When you're working head down all the time, you'll never have time to reflect. You will not read that book that gives you a great idea or watch that talk with some crazy new insight. It also removes the possibility of trying out that refactoring you thought about but never got to. I would argue that people in our business should only spend up to 70% of their work week on planned activities and leave the last 30% for their minds to wander.

Why writing software is hard

This has become the longest article I've written so far. I don't want to gatekeep anyone from getting into the field of software development. But I also wanted to highlight that writing high-quality software, and doing it in a way that produces good results as fast as possible, is hard work.

This article merely scratches the surface of what is required of software developers every day. Of course, you won't be able to follow all these rules daily, which is exactly why continuous improvement is part of the list. The goal isn't to be perfect but to do something good every day. Over time these small bits will amount to something big.

I've mentioned TDD a couple of times. This practice alone takes a lot of time to get right. Not only do you have to get the code for the product right, but also the tests. Tests should be written so that they don't need to change every time a detail in your product changes, yet they still need to make sure you don't accidentally break things as you move along.
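
A small sketch of that difference, using an invented shopping cart: the first test pins down an implementation detail and breaks on every refactor, while the second describes observable behavior and survives it.

```typescript
// A minimal cart, only here to make the example self-contained.
class Cart {
  private items: { sku: string; price: number }[] = [];
  add(item: { sku: string; price: number }): void {
    this.items.push(item);
  }
  total(): number {
    return this.items.reduce((sum, item) => sum + item.price, 0);
  }
}

// Brittle: reaches into private state and breaks whenever the internals change.
test("stores items in an internal array", () => {
  const cart = new Cart();
  cart.add({ sku: "book", price: 10 });
  expect((cart as any).items.length).toBe(1);
});

// Robust: asserts on observable behavior and survives refactors of the internals.
test("totals the prices of added items", () => {
  const cart = new Cart();
  cart.add({ sku: "book", price: 10 });
  cart.add({ sku: "pen", price: 2 });
  expect(cart.total()).toBe(12);
});
```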

The best advice is to work in baby steps and keep iterations as small as possible. Get early feedback and improve. If you want to read more on the topic, Ron Jeffries has written a great article about it!


That's it for now. I'm happy to hear your feedback on Twitter.
