Clean Code, bullshit or common sense?

Felipe Martins · Originally published at blog.fefas.net · 5 min read

Common story

Bob started to work at a new company. Soon he realized his new job would have a lot to do with a very old and complex legacy system. It was anemic code without tests, and there was no documentation either. There were no explanations of the underlying workflows, which should have represented the decisions made to meet business requirements that some stakeholder once asked for, which were then added to the backlog, implemented and delivered. However, that stakeholder had left the company, as had the original developers.

What should have been the domain was tightly coupled to infrastructure concerns… many classes had more than a thousand lines… methods had so many if statements, creating so many paths to follow… namespaces 10 levels deep… heavy use of class inheritance…

Almost every task was fixing a bug. Some bugs had been created a long time ago, others were introduced recently by a new fix. Trying to investigate the code history to figure out the reason for some changes, and how they related to other parts of the code, wasn't even helpful, because the commits were poorly organized and had poor messages…

To change a small behavior, he had to spend hours debugging and trying to understand what was written there. After a whole day of work, the result was sometimes ~10 changed or added lines. Most of the new code was yet another if statement deciding to update some value because it was wrong somehow, causing a bug… the classes didn't validate themselves… there were getters and setters everywhere…

He was experiencing bad code…

[Image: washing a bunch of dirty dishes]

Why does it happen so commonly?

Let's start by clarifying something: please, don't get me wrong! I don't want to simply say the code is a $#@& and blame developers or companies, because that won't lead us anywhere. It's true, we do have a lot of unprofessional developers, but just blaming isn't the point; to dig deeper we have to consider a broader scenario.

So, why does it happen? Why are we always wading through bad code?

There are many factors that contribute to it. Let's picture a company growing, hiring more employees to grow faster: new developers come in, old developers leave, the dynamic business changes the requirements quite constantly, the communication between IT and business has gaps, developers are so focused on achieving the sprint commitment, the rush is always putting pressure on everyone working on the project, no one cares to learn about the user-product relationship, no one cares to write tests, developers cannot invest in design as it seems not to deliver short-term value, TDD is dismissed as philosophical, the company doesn't invest time in training its teams, the most valued practice is whatever solves today's problem, a hero culture rises, we always need more logs to figure out what is going on, we need to lead the market…

I could summarize it as the result of rush and lack of study.

Given that scenario, bad code as an output is at least understandable. Our world isn't always a nice place, and we have to deal with this reality.

What is the point then?

I haven't brought you here to give you an easy answer, because one simply doesn't exist.

My goal with this post is to reply to the person who has that canned answer whenever developers complain about bad code: "Look… the system works and makes money for the company. That is what matters, and the developers are paid to maintain it."

Okay… I see the point, and I agree that "the most valuable thing is working software". It sounds like magic and can comfort us developers. However, we still have a big problem there.

Is running software really valuable?

A company that depends heavily on technology and wants to be innovative will never achieve its goals when software turns into a maintenance issue instead of pushing the business forward. Coming back to Bob's situation, it's almost impossible to add or change behaviors of the application. How could a new feature that changes the business workflow be delivered without breaking anything else? The risk is too high, the unknown side effects could be catastrophic, and so the company ends up trying to survive instead of innovating… it's just brute force now… and the competition is out there rising up.

For me, it’s hard to see the value being generated given this scenario.

Software that makes money today won't necessarily still make money in the future.

Bad code can break established companies and make startups die early… just because developers aren't on the front line doesn't mean we can't be the reason for failure.

Clean code, aka caring for code, aka testing first, is the way to build software the business can truly grow with. Let's study, let's improve our craft, let's learn how to manage the rush!

As developers, it's our responsibility to write clean code… it isn't about reaching tomorrow's goal faster, it's about reaching next year's goal faster and safer…

Have you ever asked yourself what your manager expects from you? Would you guess short-term results or long-term results?

I know, I know… perfection isn't reachable… sometimes we do have to work on urgent issues. Nevertheless, my point is that we shouldn't resign ourselves to bad code, just going along with it…

What does my experience say?

I do have a real case where we spent more time in the beginning to achieve good dev-prod environment parity, a completely isolated database for each running instance of the project, end-to-end tests for the whole application, feature and unit tests for each codebase, automated deployment, hexagonal architecture… well… a lot of topics that are unreachable and philosophical for many out there… just bullshit to them…

[Image: TDD chaos-vs-time graph]

It wasn't easy… it was challenging. However, our work resulted in a product with an almost-zero bug rate. When a new bug is discovered, a new test scenario is written. Deployments are fast, safe and can be executed anytime. New features are easy to add and old ones easy to change… those are the benefits of TDD… that is an agile team!

We should fight bad code and always do our best. There is only one real mistake: not trying…


Posted on Jan 4 '18 by Felipe Martins (@fefas)

Clean Coder and TDD evangelist delivering software ASAP (as simple as possible)

Discussion

 

Reading this made me remember an old project I worked on. It was a really big project and the company wanted it in production ASAP. So they made a lot of hasty decisions, didn't plan much, and formed a team of junior devs.
The MVP was put into production a couple of months later.
After that, the bug fixing never stopped, and the product has awful performance issues due (mostly) to architecture decisions made without enough planning.

So, in my opinion...
Companies should invest in building high quality products that are just as good on the inside as they're on the outside.
And all dev teams should take responsibility for their code and at least try to explain the consequences of bad and fast code to the rest of the company.

 

Hey Paula! Thanks for your feedback :)

What you said about "responsibility" is an important topic. It is something Robert Martin (Uncle Bob) calls a professional attitude, and I truly agree :)

I read two very interesting articles which you might also like:

 
 

So, in my experience, it's pretty well understood in most teams that investing time up front to plan a system, along with maintaining a relatively comprehensive test suite, are important factors in good software development.

But you make the claim (perhaps unintentionally) that "having care for code" is synonymous with TDD. It would be beneficial for us to talk about why test-first specifically is a requirement for maintainable software, as opposed to automated testing in general. Right now, you are simply making a claim.

 

Hey Andrew! Thanks for your feedback!

Indeed, developers are normally concerned with sharing their understanding of the system and planning the architecture… that is, of course, an important step, but it doesn't mean the resulting software will be easy to understand, well tested, with the domain decoupled from infrastructure and so on… that is what my working experience has shown me.

About the claim… yes, I did that intentionally… as I understood it, we both agree tests are important for maintainability, so the point here is "why TDD?". In my opinion, TDD is the best way to write tests, for a lot of reasons. It was simply a claim, as this theme could easily be turned into a new post. I will enumerate some of the good points of using TDD according to my experience:

  • By defining scenarios, we are challenged to gain a deep understanding of the problem being solved
  • We stay focused on software behaviours instead of "what if this happens, what if that happens"… just write the next scenario and code to make it green
  • The implementation MUST be decoupled, otherwise it is impossible to do TDD
  • It is a tool/methodology to discover the application design… let the tests guide us
  • We code just what is needed… nothing more, nothing less
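The cycle described in the list above can be sketched in a few lines of Python. The `Cart` class, its discount rule, and all names here are hypothetical, invented only to illustrate the red-green rhythm:

```python
# Hypothetical TDD cycle. Step 1 (red): write the test first,
# describing the desired behaviour before any implementation exists.
def test_cart_applies_discount_over_100():
    cart = Cart()
    cart.add("book", price=60)
    cart.add("lamp", price=50)
    # Invented business rule: orders over 100 get a 10% discount.
    assert cart.total() == 99

# Step 2 (green): write the simplest code that makes the test pass.
class Cart:
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        subtotal = sum(price for _, price in self._items)
        # Integer prices keep the discount arithmetic exact.
        return subtotal - subtotal // 10 if subtotal > 100 else subtotal

test_cart_applies_discount_over_100()
```

Each new behaviour restarts the cycle: write the next failing scenario, then just enough code to turn it green.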

It isn't easy to understand, and even harder to put into practice… I may write about it in a more consistent way with samples and will paste the link here :)

 

When faced with a huge messy code base: I adopt the "one test is better than none" philosophy.

  1. If there is no testing framework, take time to add one and make it part of the build process.
  2. Then write a test for the code you are changing and/or the bug you are fixing.
  3. Repeat 2.

This achieves a few things at a fairly low cost:
a) Your code/bug fix should never regress.
b) People may join in your effort as they see the benefit.
c) You may get manager approval and time to write more comprehensive system tests when they see the benefit.
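As a sketch of step 2 above, a regression test pinned to a bug fix might look like this in Python. The `average_order_value` function and its empty-input bug are entirely hypothetical, standing in for any untested legacy routine:

```python
# Hypothetical legacy routine: suppose it used to raise
# ZeroDivisionError when the order list was empty; the guard
# below represents the bug fix.
def average_order_value(orders):
    if not orders:
        return 0.0
    return sum(orders) / len(orders)

# The test written alongside the fix, so this bug can never
# silently regress.
def test_average_handles_empty_orders():
    assert average_order_value([]) == 0.0

def test_average_of_known_orders():
    assert average_order_value([10, 20, 30]) == 20.0

test_average_handles_empty_orders()
test_average_of_known_orders()
```

With a test runner in the build (point 1), each fix like this adds one more permanent safety net at almost no cost.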

As an example, after doing this for several months, an "old-dog-don't-teach-me-new-tricks" developer started adding tests. A few years later he told me that he could no longer code without writing tests. The system I was working on went from zero tests and no testing framework to 150 or so tests in a few months, and the managers started to brag about how many tests passed and wanted metrics on failures and… yeah… watch out… you may start a dangerous revolution.

 

Hey Harvey!

The result you achieved seems to have been great! Congrats on getting the business interested in test metrics! :)

Depending on the legacy code base, it can be really challenging. An old boss of mine once said that working with legacy code is like trying to untangle a mess of cables… you have to identify the next cable and untangle them one by one.

Anyway, there is no magic solution. That made me remember the section "The Grand Redesign in the Sky" from the Clean Code book:

"[after redesign the whole system,] the current members are demanding that the new system be redesigned because it’s such a mess."

In other words: rewriting the system is no guarantee of a better system.

 

Quick and dirty development is fine. However, you need to be prepared to retire such software with extreme prejudice once it has served its purpose. Developers chronically underestimate the time and money lost by shipping late or not at all. The whole point of an MVP is to ship it fast so you can validate your assumptions; building the perfect solution before you do that is hard because you don't have all the facts. Building a future-proof MVP is even harder, because you have very imperfect knowledge of the future. Lots of teams get sucked into building stuff perfectly that will never be used, often at great cost. Most features that sound like a good idea may never be used, or may simply be ignored by users.

Greenfield development without an MVP is highly risky and expensive. So limit the scope of your MVPs and plan for their replacement early on. Micro services are great for this since they have limited scope and relatively clean interfaces which means you can just rip them out and replace them when they become a problem.

 

Hello Jilles! Thank you for your comment!

Indeed, an MVP has nothing to do with perfection… I would even ask whether perfection is possible. I agree devs should keep business concerns in mind. However, I think we have different understandings of what an MVP is.

A Minimum Viable Product, aka MVP, is about finding the minimum necessary to launch a product… it has nothing to do with low software quality. Finding the minimum necessary to get a product running and start getting feedback soon is about understanding what your product should be. In that sense, the idea behind an MVP is to minimize features. At the same time, quality should still be a concern…

  1. As you said: who has never implemented a feature that was never used? That should be solved by identifying exactly what the minimum necessary is.
  2. But: who has ever thrown an MVP away after having validated it? Starting with a mess doesn't sound like a good deal.

There is an interesting interview with Eric Ries (who popularized the term MVP).

Of course, it is important not to waste time building a top-notch tech environment right from the first release iteration… that would just be over-engineering. It should be achieved step by step, iteration by iteration, as the application matures.

I don't know if I made myself clear. This is also an extensive topic… I think the keyword is simplification :)

 

The term MVP gets abused a lot in our industry. Early validation of assumptions (e.g. users will use this, this actually works, customers will actually pay for this, etc.) is a valid thing though.

In my experience, most software developed does not get to celebrate a second anniversary. And if it does, extensive refactoring is likely to happen several times in any case. There is a notion of throwaway software. With that in mind, quality is important only insofar as it does not slow you down. Getting stuck doing extensive changes on a shitty code base is bad, of course. However, a cheap, low-quality but functional MVP that you ship fast can be replaced easily and gives you early feedback on the assumptions you have made about the viability of your product. Engineer for replaceability rather than maintainability. Shipping months earlier ultimately buys you a lot of runway and early revenue that you can use to ship something better later. The longer it takes you to ship, the less likely it is to be the right thing.

I've been on more than a few projects where more than half of the features that PMs insisted were absolutely critical eventually were scrapped because they were not needed, redundant, or because users simply don't use them. Engineers like to over-engineer. PMs always want everything and the kitchen sink. And customers always ask for more than is good for them. However, building the wrong thing for the wrong reasons in an MVP means you are shipping the wrong things way too late thus delaying the moment you know the thing is actually viable.

A feature MVP can be as simple as a mock button in a UI plus some analytics measuring whether users actually bother to click it. Compare that to implementing backend services, investing in devops to deploy them, and finally doing the frontend work to hook everything up and gradually phase in the button via A/B testing, in the hope that users will actually click it. This stuff is really expensive, and there is an enormous amount of wasted effort in our industry.