
Clean Code, bullshit or common sense?

Felipe Martins on April 08, 2018

Common story: Bob started to work at a new company. Soon he realized his new job would have a lot to do with a very old and complex legac...
Paula Santamaría

Reading this made me remember an old project I worked on. It was a really big project and the company wanted it in production ASAP, so they made a lot of rushed decisions, didn't plan much, and staffed the team entirely with junior devs.
The MVP went into production a couple of months later.
After that, the bug fixing never stopped, and the product had awful performance issues due (mostly) to architecture decisions made without enough planning.

So, in my opinion...
Companies should invest in building high-quality products that are just as good on the inside as they are on the outside.
And all dev teams should take responsibility for their code and at least try to explain the consequences of rushed, low-quality code to the rest of the company.

Felipe Martins

Hey Paula! Thanks for your feedback :)

What you said about "responsibility" is an important topic. It is something Robert Martin (Uncle Bob) calls a professional attitude, and I truly agree :)

I read two very interesting articles which you might also like:

Paula Santamaría

Thanks! I'll check them out :D

Jilles van Gurp

Quick and dirty development is fine. However, you need to be prepared to retire that software with extreme prejudice once it has served its purpose. Developers chronically underestimate the time and money lost by shipping late or not at all. The whole point of an MVP is to ship it fast so you can validate your assumptions early; building the perfect solution first is hard because you don't yet have all the facts, and building a future-proof MVP is even harder because your knowledge of the future is very imperfect. Lots of teams get sucked into perfecting things that will never be used, often at great cost. Many features that sound like a good idea end up unused or simply ignored by users.

Greenfield development without an MVP is highly risky and expensive. So limit the scope of your MVPs and plan for their replacement early on. Microservices are great for this since they have limited scope and relatively clean interfaces, which means you can simply rip them out and replace them when they become a problem.

Felipe Martins • Edited

Hello Jilles! Thank you for your comment!

Indeed, an MVP has nothing to do with perfection... I would even ask whether perfection is possible. I agree devs should keep business concerns in mind. However, I think we have different understandings of what an MVP is.

Minimum Viable Product, aka MVP, is about finding the minimum necessary to launch a product... it has nothing to do with low software quality. Finding the minimum necessary to get a product running and to start getting feedback early is about understanding what your product is actually going to ship. In that sense, the idea behind an MVP is to minimize features. At the same time, quality should still be a concern...

  1. As you said: who has never implemented a feature that was never used? That should be solved by identifying exactly what the minimum necessary is.
  2. But: who has ever thrown an MVP away after validating it? Starting with a mess doesn't sound like a good deal.

There is this interesting interview with Eric Ries (who popularized the term MVP).

Of course, it is important not to waste time building a top-notch tech environment right from the first release iteration... that would just be over-engineering. It should be built up step by step, iteration by iteration, as the application matures.

I don't know if I managed to make myself clear. This is an extensive topic... I think the keyword is simplification :)

Jilles van Gurp

The word MVP gets abused a lot in our industry. Early validation of assumptions (e.g. users will use this, this actually works, customers will actually pay for this, etc.) is a valid thing, though.

In my experience, most software developed does not get to celebrate a second anniversary. And if it does, extensive refactoring is likely to happen several times in any case. There is a notion of throwaway software. With that in mind, quality is important only insofar as it does not slow you down. Getting stuck doing extensive changes on a shitty code base is bad, of course. However, a cheap, low-quality but functional MVP that you ship fast can be replaced easily and gives you early feedback on the assumptions you have made about the viability of your product. Engineer for replaceability rather than maintainability. Shipping months earlier ultimately buys you a lot of runway and early revenue that you can use to ship something better later. The longer it takes you to ship, the less likely it is to be the right thing.

I've been on more than a few projects where more than half of the features that PMs insisted were absolutely critical were eventually scrapped because they were not needed, were redundant, or simply went unused. Engineers like to over-engineer. PMs always want everything and the kitchen sink. And customers always ask for more than is good for them. Building the wrong thing for the wrong reasons in an MVP means you ship the wrong things way too late, delaying the moment you know the thing is actually viable.

A feature-level MVP can be as simple as a mock button in the UI plus analytics measuring whether users actually bother to click it. Compare that to implementing backend services, investing in devops to deploy them, doing the frontend work to hook everything up, and gradually phasing in the button via A/B testing in the hope that users will click it. That approach is really expensive, and there is an enormous amount of wasted effort in our industry.
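
To make the fake-door idea concrete, here is a minimal sketch in TypeScript. Everything in it (the analytics endpoint, the element id, the event name) is invented for illustration, not any specific product's API:

```typescript
// "Fake door" feature test: the button exists, but the feature behind it
// doesn't. All we measure is whether users want it.

// Hypothetical analytics helper; substitute whatever service you use.
function trackEvent(name: string, payload: Record<string, unknown>): void {
  // sendBeacon survives page unloads, which is handy for click events.
  navigator.sendBeacon(
    "/analytics/events", // made-up endpoint
    JSON.stringify({ name, ...payload, ts: Date.now() })
  );
}

const exportButton = document.querySelector<HTMLButtonElement>("#export-pdf");
exportButton?.addEventListener("click", () => {
  trackEvent("export_pdf_clicked", { variant: "fake-door" });
  // No backend exists yet: acknowledge the click and collect the signal.
  alert("Export to PDF is coming soon, thanks for your interest!");
});
```

If nobody clicks, you just saved yourself the backend, the devops, and the frontend wiring.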

Andrew Buntine

So, in my experience, it's pretty well understood in most teams that investing time up front to plan a system, along with maintaining a relatively comprehensive test suite, are important factors in good software development.

But you make the claim (perhaps unintentionally) that "having care for code" is synonymous with TDD. It would be beneficial for us to talk about why test-first specifically is a requirement for maintainable software, as opposed to automated testing in general. Right now, you are simply making a claim.

Felipe Martins

Hey Andrew! Thanks for your feedback!

Indeed, developers are normally keen to share their understanding of the system and to plan its architecture... that is, of course, an important step, but it doesn't mean the resulting software will be easy to understand, well tested, with the domain decoupled from infrastructure and so on... that is what my working experience has shown me.

About the claim... yes, I did that intentionally. As I understood it, we both agree tests are important for maintainability, so the point here is "why TDD?". In my opinion, TDD is the best way to write tests, for a lot of reasons. It was simply a claim, as this theme could easily be turned into a whole new post. I will enumerate some of the good points of using TDD according to my experience (a small sketch follows the list):

  • By defining scenarios first, we are challenged to gain a deep understanding of the problem being solved
  • We stay focused on software behaviours instead of "what if this happens, what if that happens"... just write the next scenario and code until it's green
  • The implementation MUST be decoupled, otherwise it is impossible to TDD
  • It is a tool/methodology for discovering the application design... let the tests guide us
  • We code just what is needed... nothing more, nothing less
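
To illustrate that rhythm, here is a minimal red-green sketch using TypeScript and Jest; `priceWithDiscount` and its rules are invented for the example, not taken from the article:

```typescript
// pricing.test.ts
// Step 1 (red): describe the next behaviour before any implementation exists.
import { priceWithDiscount } from "./pricing";

test("applies the given discount to the total", () => {
  expect(priceWithDiscount(200, 0.1)).toBeCloseTo(180);
});

test("rejects negative discounts", () => {
  expect(() => priceWithDiscount(200, -0.05)).toThrow(RangeError);
});
```

```typescript
// pricing.ts
// Step 2 (green): write just enough code to make the tests pass,
// nothing more, nothing less.
export function priceWithDiscount(total: number, discount: number): number {
  if (discount < 0) throw new RangeError("discount must be non-negative");
  return total * (1 - discount);
}
```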

It isn't easy to understand, and even harder to put into practice... maybe I will write about it in a more consistent way, with samples, and paste the link here :)

Harvey Thompson

When faced with a huge, messy code base, I adopt the "one test is better than none" philosophy.

  1. If there is no testing framework, take time to add one and make it part of the build process.
  2. Then write a test for the code you are changing and/or the bug you are fixing (a sketch follows below).
  3. Repeat 2.

This achieves a few things at a fairly low cost:
a) Your code/bug fix should never regress.
b) People may join in your effort as they see the benefit.
c) You may get manager approval and time to write more comprehensive system tests when they see the benefit.
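
As a concrete sketch of step 2, assuming a framework like Jest was added in step 1 (the `parseQuantity` function and its whitespace bug are invented for illustration):

```typescript
// order.test.ts
// Regression test written while fixing a bug: the legacy parseQuantity()
// used to crash on input with surrounding whitespace. Pinning the fix
// down with a test means it can never silently regress.
import { parseQuantity } from "./legacy/order";

test("parses quantities with surrounding whitespace", () => {
  expect(parseQuantity(" 3 ")).toBe(3);
});

test("still rejects non-numeric input", () => {
  expect(() => parseQuantity("abc")).toThrow();
});
```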

As an example: after I had been doing this for several months, an "old-dog-don't-teach-me-new-tricks" developer started adding tests. A few years later he told me he could no longer code without writing tests. The system I was working on went from zero tests and no testing framework to around 150 tests in a few months, and the managers started to brag about how many tests passed and wanted metrics on failures and... yeah... watch out... you may start a dangerous revolution.

Felipe Martins

Hey Harvey!

The result you achieved sounds great! Congrats on getting the business interested in test metrics! :)

Depending on the legacy code base, it can be really challenging. An old boss of mine once said that working with legacy code is like trying to untangle a mess of cables... you have to identify the next cable and untangle them one by one.

Anyway, there is no magic solution. It made me remember the section "The Grand Redesign in the Sky" from the Clean Code book:

"[after redesign the whole system,] the current members are demanding that the new system be redesigned because it’s such a mess."

In other words: rewriting the system is no guarantee of getting a better system.

Ayman Adam Dawood

Awesome article