
How to Write a Good Piece of Code

Ryland G ・ Originally published at cdevn.com ・ 5 min read

[Image: XKCD comic]

Make Sure Your Code "Can Be Good"

The first and probably most important step to writing a good piece of code is to not code at all.

  • Have you validated your assumptions?
  • What is the scope of the code?
  • How will it affect existing code?
  • Has someone already written this code?

Being able to answer questions like these is the foundation of a good piece of code.

Discuss With Others

The best way to validate your choices is by getting the input of others. Strive to be in an environment where people aren't afraid to challenge your decisions and ideals.

Even the strongest wall might look weak when viewed from the right perspective.

Break It Down

Now that you're confident your code "can be good", it's time to figure out how to actually make it good. Start by thinking in terms of APIs and attempt to break down your proposed code into the smallest pieces possible.

Understanding how to break tasks down into smaller pieces is the number one thing I see junior programmers struggle with. Remember, a chunk of code that you've broken down is one that others can help you with. Left as a monolith, it only serves to isolate you from the team.
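As a rough sketch of what "breaking it down" can look like, here is a hypothetical "import users from CSV" task (the feature, names, and validation rules are all invented for illustration) split into small, independently testable units instead of one monolithic function:

```javascript
// Parse a single CSV line into trimmed fields.
function parseCsvLine(line) {
  return line.split(',').map((field) => field.trim());
}

// Validate one parsed record; returns an error string or null.
function validateUser(fields) {
  const [name, email] = fields;
  if (!name) return 'missing name';
  if (!email || !email.includes('@')) return 'invalid email';
  return null;
}

// Orchestrate: each helper above can be reviewed, tested,
// and even assigned to a teammate on its own.
function importUsers(csvText) {
  return csvText
    .split('\n')
    .filter((line) => line.length > 0)
    .map(parseCsvLine)
    .filter((fields) => validateUser(fields) === null)
    .map(([name, email]) => ({ name, email }));
}

console.log(importUsers('Ada, ada@example.com\nBob, not-an-email'));
// → [ { name: 'Ada', email: 'ada@example.com' } ]
```

Each small function is a seam where a reviewer can push back on the design without having to understand the whole feature.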

The first part of a code design phase should very rarely touch on the implementation. Instead, you should be dealing in needs and constraints. Time spent on implementation is often wasted, because high-level API changes can invalidate implementation assumptions. In my personal experience, starting an implementation discussion with an already agreed-upon API usually makes the discussion go a lot more smoothly.

Write Tests That Define It Before Writing It (Spicy and Opinionated)


Now that you know how to break down the code, write a test for each discrete unit you've identified. Writing a test for each piece of functionality your code will expose, before you code it, is the defining trait of TDD (Test-Driven Development). There have been a number of studies on the effectiveness of TDD. While some of the studies are controversial, almost all of them report a reduction in the number of bugs after adopting TDD.

Edit: I originally made a claim of a 40%-80% reduction in bugs from TDD. After receiving comments in this Reddit thread I realized that it was an inherently biased representation of the data. I've instead included a picture of the studies' results below, so you can judge for yourself. I've also included the precursor paragraph from the author.

The results are sometimes controversial (more so in the academic studies). This is no surprise, given incomparable measurements and the difficulty in isolating TDD's effects from many other context variables. In addition, many studies don't have the statistical power to allow for generalizations. So, we advise readers to consider empirical findings within each study's context and environment.

A 2005 study found that using TDD meant writing more tests and, in turn, programmers who wrote more tests tended to be more productive. Hypotheses relating to code quality and a more direct correlation between TDD and productivity were inconclusive.

Source: Wikipedia

I believe test-driven development forces you to take the point of view of users first, and this results in a more practical and natural set of APIs.

Resist the temptation to tackle multiple tasks at once. You should be writing failing tests for a single unit of your code, followed by writing the implementation for that test. This lets you validate your design efficiently and maintain test coverage even as you add code to the codebase.

Keep Your Code Consistent

Personal style and preferences will differ between developers. What should not differ is code consistency. You should have consistent and predictable naming conventions for variables and declarations. If you use tabs, you should use tabs everywhere. If you use spaces, you should use spaces everywhere.

Many junior developers get caught up in the nuances of each choice. In reality, what's far more important is how reliable you are with your choice. At first this may seem like a relatively small task, but consistency extends far past tabs vs spaces.

The logic of your code also needs to be consistent. Why did you use a map here and a forEach over there? Why are you using var in some places but let and const in others? Predictability is one of the hardest traits to find in a programmer (or a human in general); it's also one of the most valuable.
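A contrived illustration of the kind of inconsistency meant here: the two functions below behave identically, but mixing both styles in one file forces every reader to stop and ask "why the difference?"

```javascript
// Inconsistent with the rest of a modern codebase:
// var, an anonymous function, and manual accumulation via forEach.
function doubleAllOld(values) {
  var result = [];
  values.forEach(function (v) {
    result.push(v * 2);
  });
  return result;
}

// Consistent: const, arrow functions, map.
// Neither style is "wrong" — the point is to pick one and use it everywhere.
const doubleAll = (values) => values.map((v) => v * 2);

console.log(doubleAll([1, 2, 3])); // → [ 2, 4, 6 ]
```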

Your worth as a programmer is defined by your "maximum potential value" multiplied by your "projected risk". Quality is meaningless without reliability.

Review It



If code goes into master it should be reviewed. For a review to be beneficial, the author needs to truly appreciate the value of the review process.

Never in this life will you know everything.

A good programmer writes great code and doesn't get it reviewed.

A great programmer writes decent code but puts it through a scrutinous review process.

You should account for failure in every aspect of your life, including coding. Mistakes will be made, and most often all that's needed to stop them is another set of eyes.

Ship It

Congrats, you've now written a good piece of code. It's possible to write a good piece of code without this process, but it's not possible to "always write a good piece of code" without it.

After shipping, remember to communicate with your team about what you've accomplished; it may unblock someone.

Don't Overthink It

Every rule here should be taken with a grain of salt. Should a 2 line commit to an internal README really be reviewed?

Strive for best practices but remain practical and rational; don't engineer things that didn't need to be engineered in the first place. The most important tool in your arsenal is your gut (intuition). Rules do not exist to get in your way; they exist to be consistent and reliable when you are not (and you won't be).


Discussion

 

You should account for failure in every aspect of your life, including coding. Mistakes will be made, and most often all that's needed to stop them is another set of eyes.

Liked this part; we often struggle with things that other people can pick up in a jiffy!

 

Or as I tell my colleague, “come find the obvious issue with my code that I obviously just can’t see”

 

I recently ran into an error that held me back for two days, only to find out my code was working but I was checking for results in the wrong database server.

 
  • Make sure your algorithm is clear, clean, simple to understand.

  • Make sure your code style/patterns are clear, clean, simple to understand.

  • Make sure your API is clean, simple, easy to understand.

Always separate API and implementation, so that you can change the implementation without affecting call sites.
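One way to sketch that separation, assuming a hypothetical cache module (the module and its methods are invented for illustration): call sites depend only on the exposed functions, so the storage behind them can change without touching any caller.

```javascript
// API: the only surface call sites are allowed to depend on.
function createCache() {
  // Implementation detail: a plain Map today. It could become an
  // LRU or a remote store later without changing the API below.
  const store = new Map();
  return {
    get: (key) => store.get(key),
    set: (key, value) => {
      store.set(key, value);
    },
    has: (key) => store.has(key),
  };
}

// Call sites never see the Map, only the three exposed functions.
const cache = createCache();
cache.set('user:1', { name: 'Ada' });
console.log(cache.get('user:1')); // → { name: 'Ada' }
```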

 

Many junior developers get caught up in the nuances of each choice. In reality, what's far more important is how reliable you are with your choice. At first this may seem like a relatively small task, but consistency extends far past tabs vs spaces.

Actually, they shouldn't have to make any of those choices. As you wrote, consistency is the key, and consistency can be guaranteed in most languages by linters.

In our C++ codebase, we use clang-format to reformat code at each commit then we check in our CI pipeline if actually the code is properly formatted.

You can write code in the style you want; you don't have to think about any such formatting choice, it will be taken care of. Using coding guidelines decreases the time you spend on unimportant choices and removes a good chunk of decision fatigue.

 

Tools like linters are obviously important and can be a huge time saver. That said, I think they can actually be detrimental to junior developers.

It's incredibly important that new developers understand the "why" of each thing they are instructed to do. While many linting rules are driven entirely by opinion, many also exist to prevent you from doing something stupid. If you learn to code with a linter from the start, you will simply do what the prompt tells you and have zero transferable knowledge. But if you start by learning the gotchas and real underlying constraints, the linter stops being magical.

To summarize, start without linter to build fundamentals. Once it becomes a chore, use a linter. And this is coming from someone who spends a lot of time optimizing for decision fatigue (I only own 1 style/color of shirt, pants, etc). Obviously if the code is going into a production system, it should be linted regardless.

We use tslint airbnb at work. I use Google style guide for everything else.

 

Your points are valid, and I think in the end, we agree.

I speak about roughly a million LoC codebases in enterprise environments.

You won't throw away the consistency/maintainability of your codebase just to teach people why they should not mix tabs with spaces.

Learn the fundamentals without a linter, true, but don't learn it by introducing inconsistent code to master.

Normally, you won't let that code into your codebase anyway because you have code reviews.

But if during the review you have to spend all your time on explaining why you shouldn't mix this and that, why you should avoid huge cyclomatic complexity, you won't actually review the logic behind.

After all, you end up in a worse situation than with linters.

On the other hand, you might not want to reformat your code automatically on each commit; you could instead just deal with the error messages emitted right before compilation by the CI pipeline. But I don't think that's efficient.

Read articles and books, participate in coding dojos, build your own tools and side projects, and actually read the coding guidelines you have to comply with. Try to understand them. In most cases, coding guidelines/standards don't just give you rules, but the _why_s behind them as well. If not, ask questions.

Your points are valid, and I think in the end, we agree.

Great start

I speak about roughly a million LoC codebases in enterprise environments.

You won't throw away the consistency/maintainability of your codebase just to teach people why they should not mix tabs with spaces.

Learn the fundamentals without a linter, true, but don't learn it by introducing inconsistent code to master.

I know it's unrealistic, but I wish most entry-level devs already had these skills; the education system is really failing in that sense. This is also part of the reason enterprise gets a bad rap: it optimizes for developer output and not developer development.

On the other hand, you might not want to reformat automatically your code on each commit, but you can just deal with the error messages given right before the compilation by the CI pipeline. But, I don't think that's efficient.

It's funny. When I had to learn JavaScript for my company, I did so with a linter but not integrated into vim. So my development phase would look like,

  1. Write tests
  2. Write code
  3. See tons of lint errors
  4. Fix lint errors
  5. Repeat

Because linting at the end was so much less convenient, my brain picked up on the rules really quickly. After only a week or so, my "natural linting" was such that I almost never had linter errors. To this day, I don't use an integrated linter in vim because my accuracy is so high. It's like I have a built-in linter.

But obviously all Jenkins builds etc should have a linting precursor.

Read articles and books, participate in coding dojos, build your own tools and side projects, and actually read the coding guidelines you have to comply with. Try to understand them. In most cases, coding guidelines/standards don't just give you rules, but the _why_s behind them as well. If not, ask questions.

Couldn't agree more. We really do agree in the end (literally).

Thanks for the great discussion. I hope others read it too because I really think it adds to the value of the post.

 

While I didn't get a chance to check out all the TDD articles referred in the summary article, from spot checking a few it's my feeling that these huge percentage increases in reliability come from comparisons with projects without any tests at all.

It really is never explained why writing the tests first is advantageous over writing them while you write the code. What I find, particularly in real-world code, is that 80% of it requires very basic testing, but about 10% requires extremely intensive testing to cover a lot of edge cases, and it really isn't obvious which 10% that is going to be before you actually run into the real issues.

Another disadvantage is in spike development. In the earliest stages of a module, I might have four or five different approaches that I play with in different branches. In this phase, I write very few tests if any, because I throw away all but one of the approaches before going on.

I've been in all parts of the industry for decades, in companies that do very heavy testing, and yet I haven't been in one shop that does TDD - i.e. writes the tests first.

 


From the source linked in my post

While I didn't get a chance to check out all the TDD articles referred in the summary article, from spot checking a few it's my feeling that these huge percentage increases in reliability come from comparisons with projects without any tests at all.

Obviously there are many studies listed in this table (along with another table of academic studies I've omitted). I think it's clear that many of these companies were testing before switching to TDD (such as IBM).

It really is never explained why writing the tests first is advantageous over writing them while you write the code. What I find particularly in real-world code is that 80% of it requires very basic testing, but about 10% requires extremely intensive testing to cover a lot of edge cases - and it really isn't obvious which that 10% is going to be before you actually run into the real issues.

It's both explained in my post and in the sources I link. This article by Eric Elliot should cover most of your questions. The TLDR is that TDD provides similar benefits to code reviews and generally forces your code to be more communicative and functional. I might skip TDD if I'm using a language like TypeScript, which makes it incredibly easy to define all my interfaces/APIs before I actually start coding.

Another disadvantage is in spike development. In the earliest stages of a module, I might have four or five different approaches that I play with in different branches. In this phase, I write very few tests if any, because I throw away all but one of the approaches before going on.

As I said in my post, take everything with a grain of salt. If you're in the "exploration" phase you probably don't need to be as rigid with your standards. Once you start working on a "feature", I would personally recommend writing tests.

I've been in all parts of the industry for decades, in companies that do very heavy testing, and yet I haven't been in one shop that does TDD - i.e. writes the tests first.

It's a mixed bag. It takes a lot of discipline to correctly practice TDD, and discipline does not scale. I know quite a few engineers at top-notch companies (Google, Snapchat, etc.) who are absolutely TDDers (is that a thing?). But I'm talking about purist TDD here, not writing unit tests as you code. With a broader definition, I know countless companies that fall into the category.

Thanks for the great comment and creating a wonderful discussion around TDD!

 

I agree with you that the first step should be breaking down the things that need to be done; try to create a clear view of what you are going to create.

But there are times when you have to hack at a problem first and refactor later on. IDE annotations like @TODO, @FIXME, and @REFACTOR are a great tool for revisiting the old solution.

 

I don't agree with the flow diagram, which says that when the requirements change we "throw it all out and start over". If the code is broken down well, not everything is discarded. If the requirement that changed is isolated, that is the only part that changes. #Changeability

 

It wasn't intended to be a good example. I also don't agree with it, which I think the article makes pretty clear.

 
 

Can you explain this in more context?

A good programmer writes great code and doesn't get it reviewed.

A great programmer writes decent code but puts it through a scrutinous review process.