

What programming best practice do you disagree with?

Nicolas Amabile on June 06, 2019

I've been recently asked this question in an interview and this was my answer: Pure unit tests and shallow rendering idea For ReactJS i...
Ryan Smith • Edited

80 characters per line is a common one. I feel that this is an old practice from technical limitations that lived on. I do not go overboard with extremely long lines of code, but with widescreen monitors, 80 characters seems a bit limiting.

Glenn Stovall

It's not just a technical issue. Longer lines are harder for the human mind to parse. In prose, 50-60 characters per line is considered ideal; I think more than 80 in code is pushing it. Having said that, I agree that it's not something to be too dogmatic over. We have our linter set to 125 characters per line.

Tari R. Alfaro

I'm pretty fond of 80 characters being the limit.

Antti Pihlaja • Edited

Over 80 characters per line means the project is written in some ancient language. For example Java, or another older strongly typed language where it's occasionally impossible to make the code nice & readable 😅

That, or there are some bigger issues in the code structure.

Ryan Smith

I have heard of the 50-60 being ideal, but I thought that was more for reading sentences than text in general. I could be wrong though, I haven't looked at the studies on it.

Omri Gabay

My coworkers use huddled terminals in an i3 workspace and with Vim, they appreciate having 80 characters per line.

Ghost

Same here, 80 chars are visible in 3 columns at 1080p, and also as 1 Vim column + a browser without triggering the "small screen" website format. By the way, i3 + nVim is perfect.

Casey Brooks

I prefer 120 characters, and stay pretty strict on that limit. 80 chars definitely is not enough, but I do like having a hard limit on line length.

Stef Pletinck

I appreciate code that has an 80-char limit; it means I can put two code panes next to each other without having to make the font eye-bleedingly small.

Marissa B

If I remember correctly that stemmed from the COBOL era where a punch card only had a certain number of characters across plus a blank column (the fourth I think?) for the sorting wheel to put cards in order.

I don't think I've ever adhered to a certain number of characters in C#.

Erik Pischel

Plus, in the 80s graphic cards the standard text mode was 80 columns X 24 lines.

Marble Shark

Erm, nobody insists on that anymore so no, not common at all - it's long been revised to 120.

Ryan Smith

Popular linters and formatters for JavaScript default to 80 characters:

eslint.org/docs/rules/max-len
prettier.io/docs/en/options.html#p...
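
Both defaults can be raised if a team prefers longer lines; a minimal sketch of the relevant settings (the 120 value and the config file names here are just illustrative):

// .eslintrc.js -- ESLint's max-len rule defaults to 80 characters
module.exports = {
  rules: {
    'max-len': ['error', { code: 120, ignoreUrls: true }],
  },
};

// .prettierrc.js -- keep Prettier's printWidth (default 80) in sync with the linter
module.exports = {
  printWidth: 120,
};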

FineYoungDebaser

Like everything else about JavaScript, this limit is regressive at best.

Han Mai

Yeah, multiple screens are common nowadays. Super long lines are not encouraged. For easy reading I prefer 120~130.

emptyother

I think 80 is just for encouraging good practices. Some people tend to ignore soft recommendations like "don't go overboard", but will follow hard limits.

leob

80 characters per line is horribly outdated, it would make most of my code look ugly especially in a more verbose language like PHP.

120 characters per line is fine, that's much more okay.

Paul Melero

It has another use case: showing code in a presentation (where you need to increase the base font size) keeps your code formatted and on the screen.

Jon Randy 🎖️

Spaces over tabs

Anwar

Team tab here.

Ghost

I don't see the advantage of tabs; it's just a character that you can't differentiate from another just by looking. To me it's like having two A's that look the same but are different Unicode characters. I like to think that in the future, when we solve all the big problems, this will be one of the big wars, WWX maybe, just after the big WWIX of Vim/Emacs.

mt3o

You can render tabs at any width, while spaces are fixed. This is really useful for formats like YAML (which forces you to use spaces, btw), where a small indentation width can get very clumsy.

Ghost • Edited

I would like the "r"s of my name to look like flaming swords and the "o"s like eagles, but the main purpose of written language is to visually recognize what is there and to agree on that. You can't see the difference between a tab and a bunch of spaces; that should be the first priority, and everything else comes after that, including style. Btw, you can use multiple spaces in YAML and in any language I know of; if you are reading code with single-space indentation you should murder the one who wrote it, and I think that even a judge would understand. Even 2 spaces are questionable; to me 4 are ideal, and 8, like the Linux kernel guidelines, is going too far.

mt3o

With tabs you can render them as you like ;p even as 3 or 7 spaces :)

Nabil Tharwat

I found this article helpful in understanding the difference: dev.to/alexandersandberg/why-we-sh...

Ghost

The argument exposed there, "accessibility by default", is flawed; the accessibility needs of the few are often opposed to the needs of the many. If you have trouble seeing 4-space indents, almost all editors have vertical guides, different colors, font sizes, space markers, etc. It's a weak argument for tabs. No two different characters should look alike, and unless you want to use tabs for all spacing, to me tabs are out. If I see a blank character I know what it is and can even estimate how many there are. That's why we use text editors and not word processors in the first place; otherwise we could all just use MS Word or LibreOffice Writer to program.

Michael Ober

Spaces over tabs for fixed width fonts. Tabs over spaces for proportional fonts. This way the indenting is consistent.

Erebos Manannán

I used to be strongly for tabs over spaces, nowadays I strongly prefer spaces over tabs, yet I am slightly annoyed by the fact that it's now more difficult for me to customize just how wide the indentation is.

Why I now prefer spaces, is that once the indentation is in place, it won't just go haywire on different machines. E.g.:

def some_func_name(value, use_default_args=True, whatever_other_args=None,
                   foobar=False):
    pass

This kind of a thing would be regularly painful with tabs.

Jon Randy 🎖️
Erebos Manannán

It's quite annoying when people just reply with a link instead of even sharing the gist of it and then the link for additional reference.

Tell this information to people writing PEP-8, black, gofmt and other such things. It's valuable feedback, but since fighting about formatting is not worth the time I use tools to auto-format everything and they will force the codebases to use spaces.

Will Stewart

I didn't care about this argument until I read about the accessibility issues surrounding the use of spaces for those who are visually impaired.

reddit.com/r/javascript/comments/c...

Ben Halpern

I question anything too religious around small files and tiny methods. Sometimes the better choice is to toss another method in the class so it’s easy to find rather than stash it away elsewhere in the codebase.

WoodenCode00

I think modern IDEs are pretty smart about walking you to the file/class which encapsulates the method.
For me, the method length limit is imposed by the answer to a simple question: "What does the method accomplish?" If you are not able to answer without using AND ..this.. AND ..that.. AND ... multiple times, then it is time to better encapsulate the logic and maybe change the project's design.

George

My counter to this is that with any suitably large product, having a logical folder structure that you religiously stick to, regardless of file size/content†, means you can reliably find the content that you are looking for. Okay, you occasionally deal with a silly file, but it's better than having someone recreate something and introduce inconsistency between what should be functionally identical.

For small/informal projects I understand the convenience, but otherwise I feel it can add to problems long-term, especially when working with big/multiple teams.

† Have a single exported const in a file for all I care.

Nick Hristov

Use an IDE, preferably a JetBrains one.

Omri Gabay

git grep

Nick Hristov

TDD. In any environment which is somewhat agile you will have evolving requirements. Evolving requirements means evolving code or even design.

This means that you will end up writing a lot of test code which gets thrown away.

You should totally write unit tests for your code. And you should think about how to split your code to make it testable. But don't put the cart before the horse.

IanIsFluent

Interesting! I couldn't disagree more :) To give you the confidence to refactor and improve your code (making it more readable, maintainable and easier to extend) you need good test coverage. And if you use TDD for a while you realise that it actually makes the process of writing deep internal functions in your software faster, because of the much tighter feedback loop as you make changes.

Nick Hristov

So curious. When you write your code, do you end up writing it and structuring it correctly on the first go?

My process is always write some code, evaluate the abstractions, write some more code and see if the abstractions present are still good enough. This sometimes means that I need to refactor something pretty much immediately after I have written it, because I made an assumption about something or I forgot about something. Or I realize that I have repeated functionality which needs to be DRYed up.

In other words, when I write code, the first 40% of the effort is quite fluid and subject to change.

This does not mean that I start without having a solution in my head, or a plan. It doesn't mean I haven't thought about the problem. But the reality is that my solutions are never 100% on point.

Ross • Edited

This sometimes means that I need to refactor something pretty much immediately after I have written it because I made an assumption about something or I forgot about something.

And that is exactly what TDD is supposed to support. You write a test that describes stepwise functionality until you have your application. Then you can refactor, change, rewrite as you desire and the tests still describe your use case.
I find people who are strongly against TDD tend to be the people who have been exposed to it wrongly.

The only people I think have a healthy view of TDD are those who say it's useful but not as a religious dogma. Essentially, anyone but obsessives and dismissives, which is true for any technology or methodology.

Take one of my personal projects: 100% TDD-developed. While not perfectly designed, it lends itself to a structure that is very easy to understand, compose and modify, and through TDD it is forced to evolve positively.

IanIsFluent • Edited

Yeah, this is interesting. Like anything it doesn't apply 100% of the time. The place where TDD works well is once you know what you're doing! Because otherwise, of course, you can end up testing things in the wrong place, as you realise things won't work and have to refactor them.

But in the case of bug fixing and adding simpler cases, where your tests are already there, writing a failing test first is an amazing habit to get into.

I think the counter argument to the whole 'but I'll have to keep rewriting my tests!' is, if you're not sure how to structure stuff, write the acceptance / integration test first - because you probably won't have to change that - and then start trying to work out how to satisfy that. Once you have something working, you can start TDDing with your unit tests until it is complete :)

Ross

While I wouldn't push anyone towards TDD, I would strongly push people away from integration-test-based development. The testing is not fine-grained enough and tests only a "walking skeleton"; it is slower and it allows for terrible design. It's very easy to write integration tests which pass with a mess of interdependent code, but very hard to do it with unit tests.

The major benefit of (unit) TDD is that it pushes you to think about separation of concerns, ergo, good design.

IanIsFluent

Cool. I am trying to do as Uncle Bob says ;) I want integration tests to check that the code does as the end-user needs, regardless of its internal structure.

Agree that doing ONLY end-to-end / integration tests is a bad idea, as you say.

Vince

I used to be super against testing, more so testing every edge case. Until I worked on a large project and was tasked with refactoring a complex and extremely important part of our application.

Being able to make changes confidently backed up by the tests is an amazing feeling.

IanIsFluent

Once you've had that feeling - you realise how precarious things felt before!

Rob Darby

Would you not change the test first to reflect the impending code refactor? Do you only see value in refactoring production code and not unit tests?

The whole point of TDD is to give you confidence in what you are writing.

Nick Hristov

My point is that tests have a tendency to solidify existing code.

If the code is changing before it's even released, this is time wasted.

Write the code, solve the use cases, then write your tests. Before you have something concretely written, things are in flux and writing tests at that point is a waste of time.

That said, sometimes I break this philosophy. Sometimes I write unit tests immediately after I have written a small component.

But to take TDD as an overall approach to development would be hard to justify, in my opinion. Maybe the exceptions are if you are writing machine control software or a mission-critical application. But in such cases you are probably using waterfall.

Rob Darby

Sorry, but I have to disagree. A large point of TDD is to allow for this. You write a test that describes the functionality that you want, then write the implementation. Afterwards you can refactor, change, or rewrite the implementation as much as you want and the test is still valid. If you need to change the functionality then this can be done by first updating the tests.

If you work in an agile environment as you suggested, you should be creating small, potentially shippable products. So once you have implemented something, it shouldn't be changing too much, especially before you release, as it will have been agreed before being brought into the sprint.
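
To make that cycle concrete, here is a minimal test-first sketch (Jest-style assertions; the applyDiscount example is hypothetical, not something from this thread):

// discount.test.js -- written first, it fails until applyDiscount exists
const applyDiscount = require('./discount');

test('gives 10% off orders over 100', () => {
  expect(applyDiscount({ total: 200 })).toBe(180);
});

test('leaves small orders unchanged', () => {
  expect(applyDiscount({ total: 50 })).toBe(50);
});

// discount.js -- the simplest implementation that makes the tests pass; it can
// now be refactored freely while the tests keep describing the behaviour
function applyDiscount(order) {
  return order.total > 100 ? order.total * 0.9 : order.total;
}

module.exports = applyDiscount;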

Ghost

I don't understand the benefit of doing the tests first; the important thing is to test everything possible, and doing the tests first is very weird to me. I think it depends on how you solve a problem, your mental frame: it may work for you, but to force it on everyone is, I think, a mistake. It's like forcing someone to make a drawing when they take notes, or to make a mind map: do it if it helps, but forcing it on someone it doesn't help is just cumbersome. And agile is not always best, or even possible.

Richard Cochrane

It probably sounds wishy-washy, but I hate (and my team hates) HAVING to do TDD, preferring to have it as another tool in the toolbox, to be used when the requirement supports it. I find TDD works really well when I know just what I want, i.e. I need a class for a report that can print, download via CSV or display on screen; I can take what was requested, write tests and then make sure that my code works all nicely. But sometimes dev is a bit more exploratory: the exact solution is one that we have to feel our way towards, where the requirements themselves are not completely certain and you have users who need to spend some time actually working with a rough version of the feature to refine their own requirement; there, TDD doesn't help. One could argue that this would necessitate a prototype stage as part of requirements (that occurs before dev starts), but when that prototype needs to be functional and it would be quicker (or more resource-efficient) to show something in real code rather than building a completely separate prototype, tests-first is a really big hindrance to exploring solutions.

Sean G. Wright

"DRY is a goal unto itself", or "Copying and pasting code is BAD!"

My mantra is, don't try to DRY before you are WET (Writing it Every Time).

Having good tests can go a long way towards ensuring duplicated code continues to function correctly after modifications.

And consolidating duplicated code too early can result in an abstraction that serves too many roles.

Different parts of your code will change at different speeds and for different purposes. If you force two parts to change at the same speed and for the same purpose even if they wouldn't had they not been DRY'd up, you will box yourself into corners that are hard to get out of.

Look for the patterns in your code before you start consolidating replication.

Let your application's patterns reveal themselves to you naturally instead of trying to force them prematurely under the banner of "DRY".
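
To illustrate that last point, a small hypothetical sketch (names invented for this example): two functions that look like duplicates today but encode different business rules, so consolidating them would force unrelated changes to travel together:

// Identical today, but one follows order-pricing rules and the other follows
// invoicing rules; tax rounding may later apply to invoices only.
function orderTotal(items) {
  return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}

function invoiceTotal(lines) {
  return lines.reduce((sum, line) => sum + line.price * line.quantity, 0);
}

If they turn out to change for the same reason, merging them later is cheap; un-merging a shared abstraction is not.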

David Wickes • Edited

"Don't Deploy to Production on Fridays!"

This one drives me around the bend. Especially the smug self-satisfaction that exudes from the developers who say it.

I'm not saying that they should deploy on a Friday - by the sounds of it most of them have insufficient test coverage, extended (1 hour +) deployment times and a very slow release cadence (once a week?). For them to deploy on a Friday under these conditions would be madness.

What annoys me is the lack of shame when they say it. They don't even sound like they want to deploy on a Friday, like they know that their pipeline is slow and flakey and that they aren't really doing CI let alone CD and they're perfectly happy about it because, lol, management won't let them so what are you going to do?

I'm sorry, that was a bit of a rant. It's just that it makes me angry when I see people noticing the broken windows and laughing about them rather than fixing them. "Don't deploy to production on a Friday" isn't a slogan for you to put on a t-shirt and lol about on Twitter -- it's an embarrassment to our profession.

Daniel Escoz

The problem with deploying on a Friday is that if something goes wrong after the fact, nobody's there to fix it. By deploying on a Friday you are guaranteeing that any problem will take at least 3 days to be fixed, unless there are people on call (or employees free time is completely disregarded). I don't see anything wrong with advising against that.

David Wickes • Edited

I have no problem with you or anyone else choosing not to deploy on a Friday. It's your life, you know your codebase and your production systems better than anyone else. Life is messy and imperfect.

But ...

If you're telling other people not to do it, as some sort of blanket rule, and think that it's an hilarious precondition of working as a developer... I find that really sad. We should be trying to make deployments delightful and easy so that they can happen at 5pm on a Friday without anyone worrying.

If you think that's impossible, I'd ask you to read around the subject and broaden your horizons.

Here, this is Charity Majors, who explains it all better than I can.

Anxiety related to deploys is the single largest source of technical debt in many, many orgs. Technical debt, lest we forget, is not the same as “bad code”. Tech debt hurts your people.

Saying “don’t push to production” is a code smell. Hearing it once a month at unpredictable intervals is concerning. Hearing it EVERY WEEK for an ENTIRE DAY OF THE WEEK should be a heartstopper alarm. If you’ve been living under this policy you may be numb to its horror, but just because you’re used to hearing it doesn’t make it any less noxious.

If you’re used to hearing it and saying it on a weekly basis, you are afraid of your deploys and you should fix that.

(Charity is my hero)

Scott Yeatts • Edited

Yeah, I've always taken it as a given that internal business applications deploy on Friday evening, public (non-revenue generating) apps deploy on a Monday/Tuesday during the day, and public (revenue generating) apps deploy during lowest use.

"Don't deploy on Fridays" is honestly new (within the last 5 years) to me. We all know the danger is something goes wrong... but that's why you test it first.

If the internal app is messed-up, that gives you 2 days without users, and a strong incentive to get the PO to make the call on whether or not something is critical. If it's not critical, enjoy your weekend. If it is, yep, the team's working for the weekend. Then you REALLY need to retrospect on your testing practices, and how this critical flaw made it to prod.

If low usage occurs at 2AM EST on a revenue generator, then that's just when the deployment needs to happen. You better believe I've tested that thing into the ground, cause nobody wants to wake the whole team up for a problem.

I'm the last person to say it's OK to take developer's personal time, and the first one to say "let's try and avoid late nights" but sometimes the nature of the job calls for it, just like doctors and lawyers. The key is the employer recognizing it and arranging comp time, late starts, etc.

If you're afraid of deploying to production for any reason (not just anxious... everyone should have a little anxiety any time prod changes... it keeps us on our toes and away from the "Just push to prod, it'll be fine" mentality), then there's something off in the process that needs to be addressed.

Addendum: This is all assuming they aren't in a CI/CD environment... which many MANY places are not. A lot of places deploy at the end of a sprint, and God help you if you're in a monthly, bi-monthly or quarterly release shop. Again... the testing should be there to allay those fears though.

Yawar Amin

David, most apps nowadays aren't built as single monolithic systems directly under the team's control. There are many moving parts, many components, and many backend services which are involved in bringing just one app into production-readiness. Of course we all want to test and QA our apps thoroughly–that's due diligence and we wouldn't be doing our jobs otherwise. But I think it's useful to keep in mind that not every integration point, not every interaction, might have been tested. And if something unexpected pops up, you could be stuck debugging it late on a Friday evening, tired and hungry. That's not a great way to end the week.

That's also why I like the techniques of blue-green deploys, and canary deploys–we let the new release soak for a bit and fix issues on the fly–preferably near the start of the day when we're fresh.

Glenn Stovall

Sometimes a small amount of duplicate code is okay. When the alternative involves creating complicated abstractions and couplings, you'd be better off keeping it simple and having some code that does roughly the same thing in multiple places.

Jonathan Kuhl

C# style guidelines. I don't like having the opening brace on its own line:

Don't like

public static void DoTheThing()
{
   Thing theThing = new Thing();
   theThing.Do();
}

Do like

public static void DoTheThing() {
   Thing theThing = new Thing();
   theThing.Do();
}

Also, I refuse to have a dangling comma.

const favoriteThings = [
   "whiskers on kittens",
   "bright copper kettles",
   "warm woolen mittens",
   "brown paper packages tied up with strings",
];

Ugh. No. Gross. I think internally it makes my brain expect something to follow the last item, but nothing does.

Abe Dolinger

Old thread, but in JS, the dangling comma is actually mandatory in our style guide. It makes edits easier - you can reorder or delete the last item without changing other lines. I think your point about expecting something else is valid.

Michael Ober

For functions and methods, including anonymous lambda functions, I keep the opening brace on the next line by itself. For objects and everything I put the opening brace on the line with the declaration.

Will 'favoriteThings' even compile with that final comma?

Samuele Zanca • Edited

Oof, 6 months old buuuuttt. Yes, it does compile. Just like js, it's there for convenience when editing. You can't however do that in object initializers e.g.:

var obj = new MyClass
{
  item1, //shorthand if same name
  item2 = item2, //<—this comma gives error
};

(on mobile, fingers crossed for formatting)

Jonathan Kuhl

In JavaScript, the final comma is optional. In many other languages, that might not be the case.

Ross Henderson

That code should be as few lines as possible. There's an element of refactoring that is important so that it's efficient, but it should also be understandable.

I believe you shouldn't write code as if you're going to be the one dealing with it, but as if it's for someone below your level of understanding.

Dan Greene

“I didn't have time to write a short function, so I wrote a long one instead.” - Mark Twain :)

simo

;

BK Lau

Daily SCRUM standups. It's just micromanagement in disguise. You can do them twice a week instead and save some development time.

Moritz Schramm

Sorry to hear that. The daily meeting is not for micromanaging at all. If you have the feeling that it is, then, well, you are doing it wrong.

Cubicle Buddha • Edited

My least favorite practice is a function that looks pure but actually has side effects.

Sophie The Lionhart

I don't think that's considered a best practice at all. Lol

But yes, that sucks.

giang vincent

Writing comments in every function, and at the beginning of every class. Sometimes projects get really complicated, and then you have to go back over every class you wrote just to see what it does.
What you need to do instead is create a structure and a walkthrough tutorial for when things get out of hand.

Piet Eckhart

Code comments can be like deodorant for code smells. If you feel like writing a comment to clarify something, first try to change the structure of the code itself to make things more clear. Most of the time, simple (and safe) rename refactorings can be enough. After that, try putting boolean expressions in properly named variables. If you're happy with the naming you can then extract a method out of it, so you can hide some of the implementation and the reader of your code doesn't have to mentally parse every tiny detail to get the big picture.

If you always do this your code will read like a story. Digging deeper you will find technical implementation, but at the higher level it should be understandable by first-time readers without having to peek at the lower levels. It takes a lot of practice and discipline though.
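
As a small sketch of that sequence (hypothetical JavaScript, names invented), the comment becomes a named boolean, then an extracted function, and then it has nothing left to explain:

// Stand-in data so the snippet runs on its own.
const user = { role: 'admin', emailVerified: true, suspended: false };
const showDashboard = () => console.log('dashboard shown');

// Before: a comment props up an opaque condition.
// check whether the user may see the admin dashboard
if (user.role === 'admin' && user.emailVerified && !user.suspended) {
  showDashboard();
}

// After: the condition is named, so the comment can go.
function canViewAdminDashboard(u) {
  const isTrustedAdmin = u.role === 'admin' && u.emailVerified;
  return isTrustedAdmin && !u.suspended;
}

if (canViewAdminDashboard(user)) {
  showDashboard();
}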

Yawar Amin

I kinda was in that camp ... but have changed my mind over time. Extracting functions out is not about length (although that doesn't hurt), it's really about levels of abstraction. Each function should be dealing with one level of abstraction. Anything at a lower level should be in a separate function. This leads to a top-down, layered architecture where layers are modular and swappable. More about this technique in this great talk about writing quality code: youtu.be/CQyt9Vlkbis
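
A rough sketch of what one level of abstraction per function can look like (hypothetical names, not taken from the talk):

// The top level reads like a table of contents; each helper owns one lower
// level of detail and can be swapped out independently.
function processOrder(order) {
  validateOrder(order);
  const receipt = chargeCustomer(order);
  sendConfirmation(order.email, receipt);
}

function validateOrder(order) {
  if (!order.items.length) throw new Error('empty order');
}

function chargeCustomer(order) {
  const total = order.items.reduce((sum, item) => sum + item.price, 0);
  return { total, paidAt: Date.now() };
}

function sendConfirmation(email, receipt) {
  console.log(`receipt for ${receipt.total} sent to ${email}`);
}

processOrder({ email: 'a@example.com', items: [{ price: 10 }] });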

Olivier “Ölbaum” Scherler

Yoda conditions. A lot of PHP projects write if(42 == $value) instead of if($value == 42), to avoid the mistake of forgetting an equal sign and getting an if($value = 42) statement with an assignment in it, which will thus always be true.

Except that in PHP, you need to write if(42 === $value), otherwise you are vulnerable to shady automatic type conversions. And those projects also write if(42 !== $value), if(42 < $value), etc., for symmetry, even though there is no mistake to avoid in these cases. The result is code that is annoying to read, and you have to train yourself not to forget to reverse the operands, when you have already trained yourself not to forget the third equal sign. It simply has no benefit.

Talha Mansoor

A lot of C code that I have come across also has this issue, and for the same reason: "to avoid the mistake of forgetting an equal sign".

Charlie Schliesser

In JavaScript, the tendency has been to chain lots of Array methods together to create ninja code that filters, maps, reduces, and sorts all in one fell swoop, using ES6 shorthand syntax for arguments and return values. Sometimes this is great, sometimes it's a mixed bag, and sometimes it requires a microscope to see what's going on. There are lots of situations where it's so much more pleasurable to write

for (const foo of foos) {
    foo.set(...);
    await foo.async(...);
}
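
For contrast, a hypothetical example (invented data) of the dense chained style being described:

const foos = [
  { active: true, score: 3 },
  { active: false, score: 1 },
  { active: true, score: 2 },
];

// The same flavour of work as one chain: compact, but harder to step
// through in a debugger than the explicit loop above.
const total = foos
  .filter(foo => foo.active)
  .map(foo => foo.score * 2)
  .sort((a, b) => b - a)
  .reduce((sum, n) => sum + n, 0);

console.log(total); // 10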
Sina Madani

Getters and setters for every field, or even just getters for immutable ones. When inheritance is intended I understand the rationale, but for serializable data classes, publicly accessible fields should just be made public.

Most of the time a final field with a getter can be made public; the only reason not to is to avoid using generics (by taking advantage of covariant return types).

Jonathan Kuhl

I've never fully understood getters and setters. If I have a field that has both a getter and a setter like so:

class Person {
   private String name;

   public String getName() {
     return this.name;
   }

   public void setName(String name) {
     this.name = name;
   }
}

Why not just make the field public? What's the point in making getters and setters and not just dealing with the field directly? I asked one instructor about this and he just said "it's better encapsulation" but I don't see it.

Ross

It allows you to validate and know when fields are accessed.

Michael Ober

Getters and Setters allow you to validate and normalize input. If this validation isn't needed then consider a public field.
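
For example, a rough JavaScript sketch (analogous to the Java above; the rules are illustrative) of what an accessor buys you over a bare public field:

class ValidatedPerson {
  #name = ''; // backing field stays private

  get name() {
    return this.#name;
  }

  set name(value) { // one place to validate and normalize input
    if (typeof value !== 'string' || value.trim() === '') {
      throw new Error('name must be a non-empty string');
    }
    this.#name = value.trim();
  }
}

const p = new ValidatedPerson();
p.name = '  Ada  ';
console.log(p.name); // "Ada"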

Arekusandr

Single-page applications over server-side HTML.
NoSQL DB for a 10 GB user collection.
Trying to fit unstructured data into a graph DB instead of a file.
Leetcode interviews instead of measuring engineering culture fit.
Microservices at 3-developer startups.

rhymes

🔥🔥🔥

Sean G. Wright

Yah, that makes sense. I think an intuition of the importance of being DRY in any given scenario is valuable.

I'm definitely not promoting laziness or apathy or saying that being DRY is dumb or religiously excessive.

Instead I'm saying that DRY is probably less important than SOLID and less important than the principles of DDD.

I've seen, on many occasions, code that has been preemptively consolidated because at the time of consolidation the multiple pieces looked to be the same.

What was realized later was the code did not have the same meaning or purpose and the different pieces of code would have evolved naturally at different speeds and for different business requirements over time.

What resulted was code that had to suit too many use cases or be re-separated.

The original principles of DRY were meant to encourage developers not to write the same pieces of code multiple times - but this would have been code that was for the same purpose, not just code that looks similar or the same at a given point in the project's history.

Preemptive DRYing reminds me of building a micro-services architecture before understanding the seams in the service boundaries.

Martin Fowler encourages, for many teams, building a monolith first (with SOLID principles) and then separating off pieces once the use-case patterns have been identified.

  • Let your code grow naturally.
  • Identify not just what some code does but also why it does it.
  • Know that many things can be done prematurely (optimization, architecture, DRYing).
  • Decide if something should be a cross-cutting concern applied through AOP if it feels it should be DRY'd up.

Thanks for the thoughts!

Brad Dougherty

That's the reason for using tabs though. Everyone can set their display preference to their liking. You can use a width of 2 if you prefer, while I can use the 4 that I prefer (or whatever the numbers are, it doesn't matter).

Mikael Henriksson • Edited

I have found “Don’t repeat yourself” to cause the biggest mess many times. I’m guilty of this myself, and I’ve inherited some really deeply nested inheritance chains where people thought 4-5 levels of inheritance was better than just duplicating a couple of methods in a select few classes.

Especially in Ruby, where interfaces are missing, it becomes really hard to refactor this code. Even worse is when one of those methods that all children use has if or case statements on the type of the child to skip or explicitly run some other method. BOOM, not doing that again.

I like duplication. If it really is something that can be used purely, without checking who it is that is running the code, I might extract it, but that is not my first choice.

Oh and also, I don’t write that many unit tests. I prefer business- or use-case-driven tests/design/architecture. I’ve found stubbing especially to be a pain in the neck. Many times people just throw in a double and make it run some code. I can’t even count how many times the stubbed code doesn’t even match the underlying implementation... Nah, use real-world code, test the real implementation. If performance really becomes an issue then by all means I’ll consider refactoring the tests.

One of the gems I am maintaining has more than 500 (mostly) integration tests. It connects to Redis, stores a bunch of data, etc. The test suite finishes in around 5 seconds even though I have some sleeps and the like due to the implementation being asynchronous.

I went from untrustworthy unit tests that caused problems in production to a reliable test suite that I can trust. “Don’t stub business logic unless you really have to” is my go-to.

Anton Korzunov

I strongly disagree with accepting everything thought leaders are saying; I especially like to disagree with Kent.

However, for the vast majority it is a best practice they should follow.

IanIsFluent

I'm glad I don't work with you! ;)

Anton Korzunov

I would disagree with this ;) That's a decision blindly made without any prior research.

IanIsFluent

Good point. And I need to think more about this! It's easier to manage people who DO blindly accept stuff. I wonder if that's a problem I am having?

We need to do the right thing, not the easy thing. Hmm.

Tyler Haas

Ever using OO. I don't see what it provides that the module pattern doesn't.

Prahlad Yeri • Edited

That's the reason why we have two kinds of tests: unit tests for the individual components, and integration tests for the whole package. Since the latter resembles the way your software is actually going to be used, that's where you should focus your effort.

The former (unit tests) just ensure that a component doesn't break on its own; they're only really helpful in large orgs where multiple teams develop their components separately and the build or DevOps team then integrates the whole thing.

If you are a solo dev who does both backend and frontend, it'd make sense to combine them: the integration test is the unit test for you, and doing them separately will simply be a waste of time.

IanIsFluent

Yep. I think we're discussing the difference between 'acceptance tests' and 'unit tests' - as Uncle Bob calls them.

I think 'front end' (not meant as a derogatory term!) devs tend not to think about unit testing as being worthwhile. Part of that is probably because there aren't as many layers in the front end 'project' structure as there usually is in the back end. In the back end I hope you'd expect both unit tests and acceptance tests, but if there aren't any complicated functions that need unit testing, maybe the acceptance tests are enough.

Jason C. McDonald • Edited

I don't know that I really label any practice as unqualifiedly "terrible," but I don't agree with anything being followed religiously. I tend to disagree with...

  • "No Comments" rule of Clean Coding, although I still advocate all the rules of self-commenting code.

  • TDD in and of itself. I strongly recommend testing, however!

  • Agile, unless one takes the time to determine how the practices fit into the team's workflow. (True of any project management methodology, though.)

  • Auto-generated documentation. I still recommend documentation comments, including Javadoc/Doxygen compliant ones, but I believe documentation should be written by hand.

  • Knee-jerk compliance with Clean Coding, DRY, SOLID, or literally any other coding practice. These are great in and of themselves, but you have to supply your own common sense.

That said, all of the above have their place! My issue is seldom with the practice; instead, my concern is with blind, unilateral compliance with those practices.

Han Mai

Couldn't agree more on auto-generated docs. They suck sometimes: they make short code look never-ending, and comments are everywhere.

Eileen McCall

I have a lot of opinions about JS style that might be a bit uncommon or idiosyncratic

var > let > const

{
  let x = 1; // let for block level vars
}
const pi = 3.14; // const for string/num constants
var y = 3; // var for everything else

Double quotes for strings, always

var yes = "good";
var no = 'bad';

Space before function parens in declaration, no space before invocation

function foo () { }
foo();

ONLY use arrow function when you require lexical this, and NEVER (well, almost never) use anonymous functions. Also, no arrow functions without curlies (I make exception for simple pluck operations because I'm lazy).

[1,2,3].map(function timesTwo (num) { return num * 2; });

var timesMultiplier = (num) => { return num * this.multiplier; };
[1,2,3].map(timesMultiplier);

userAPI.getUser().then(usr => usr.id); // this is fine I guess
 
Chris James

Man, I want to move to your universe. Nothing ever, ever goes wrong after deployment? You don't lose hundreds of thousands of dollars per day when your app goes down because your app never goes down? Sounds like heaven.

Why is it that when anyone talks about deploying on Friday the argument is always reduced to "well, you don't work on something important"?

I don't know you, but it's a pretty safe bet that Charity has worked on stuff at least as important and critical as you have:

I'm (Charity) an operations engineer, co-founder, and (wholly accidentally) CEO of honeycomb.io. I've been on-call for various corners of the Internet ever since I was 17 years old -- university, Second Life, Parse, Facebook.

These ideas of continuous delivery on every green build come from important projects because it's important that teams regard deployments as a non-event.

Systems will have bugs/problems no matter what; being able to detect and recover from them quickly is what is important. Not batching up releases for Monday and crossing your fingers because you have so little confidence in your own system that you're too scared to deploy it on a Friday.

 
Chris James

You keep saying strawman fallacy and then type strawman after strawman

I don't think anyone is against trying to make deployments enjoyable and easy, but to assume that your process is perfect and nothing will go wrong is pure arrogance and inexperience.

Who is arguing this?

Man, I want to move to your universe. Nothing ever, ever goes wrong after deployment?

Or this?

David Wickes • Edited

Hi Jeff!

Just to reiterate - nobody is telling you how to release your code - your code base and your process belong to you and nobody else, and nobody is judging you. I'd much rather have a constructive discussion rather than talk about logical fallacies, as I'm sure you would too.

That's rarely the case at all. As I pointed out, teams will often designate a different day for deployment. In that common situation, there's no panic, no one saying "don't deploy to production," nobody is afraid of their deploys. When you leave yourself time to respond to any possible post-deployment emergencies, you give yourself peace of mind. When Thursday Deployment Day rolls around, you're happy to release, and you're not afraid.

You're right - this is common. But...

What if it goes wrong? What if you release a bug? Actually - not 'what if'. You will release a bug, because we're all human and imperfect as you've previously intimated. So there's a bug, in production, on Friday afternoon.

Well, you'll probably have to roll back - it's a big release and it'll be hard to work out exactly which of the many changes made the break happen. The blame is for tomorrow - now you have to work on implementing your recovery plan. Now, I'm not sure how long it'll take to implement, but let's assume that it's longer than rolling back a single commit, as it's going to be bigger and might involve a few extra steps (database migrations, perhaps).

By prioritising the vision of safety you've laid out ('do the dangerous thing less often' - excuse me if I'm paraphrasing, but that's the impression you've left me with), you're prioritising an increased mean time to failure (MTTF), at the expense of mean time to recovery (MTTR). Your program will go wrong less often (you are taking extra care over those releases, as you've said), but when it does go wrong (and we all agree that it always will go wrong in the end), it will take longer to fix it.

The alternative is to prioritise MTTR over MTTF - this is what continuous delivery is all about. We deliver the code to production in smaller releases - ideally a single commit, but this is not always possible. We aim to automate as much of the quality checking of these releases as is possible - the usual suite of unit/acceptance tests, but also smoke tests and performance tests, and a lot of metrics in the production environment to see what the effect of each release is in real time. These pipelines are optimised for speed - the releases should get out as quickly as possible.

Then, when things go wrong (and they will go wrong), we can either roll back or roll forward (more often forward) very quickly as the change was small and the reasons for the regression is obvious. Ideally most of the serious possible errors will have been caught earlier by the automated tests, and so the regression shouldn't be too serious, but - as I'll say again - serious things can and will go wrong. In this scenario we aim for them to be small changes that can be fixed quickly.

So what's this got to do with releasing on Friday? Well, if your concern is that a release will take time to fix if it goes wrong (say all of Friday), and should be managed and manually monitored during its release, and so you're only doing it once a week - I'd say you're prioritising an increased mean time to failure.

This might be really important for your business, but I'd always argue that it makes more business sense to reduce the mean time to recovery. To quote Roy Osherove (who I have no idea if he's an authority, it's just a good example):

If Amazon.com was down once every three years, but it took them a whole day to recover, consumers won't care that the issue has not happened for three years. All that will be talked about is the long recovery time. But if Amazon.com was down for 3 times a day for less than one second, it would barely be noticeable.

So, there it is. Do we want one day in every three years, or three times a day for one second? Do we prioritise stability, or do we prioritise recovery? As I'm building programs that keep changing due to business requirements, I prioritise recovery, and so I release small multiple times a day, every day. That's why I promote releasing on Friday as a good idea.

One other thing

First I want to clarify that a blanket rule is not a hard and fast rule, it's a default rule you can start with to flesh out your needs.

I think your meaning would be better expressed if you said 'rule of thumb'; 'blanket', as an adjective, means "covering all cases or instances; total and inclusive".

Vitor Vanacor

If "Nothing ever, ever goes wrong after deployment?" is a strawman, it means that sometimes things do go wrong for you. And what do you do when that happens? Work on the weekend? Why not shrink the risk with a simple, general guideline? I think that is the question.

Chris James

See Dave's excellent reply dev.to/gypsydave5/comment/bli6

Nabil Tharwat • Edited

It's not only about abstractions nor length. If a function becomes too long the engine won't be able to optimise it if the size of the bytecode generated after compiling the function exceeds 64 KiB in the case of Java on JVM, and 60 KiB in the case of JavaScript on V8, even if the function is considered "hot", invoked very often, that is. So, it's more about measuring the trade-offs of using longer functions vs breaking them up into smaller ones.

Edit: I just realised this was posted over 4 months ago.

Jamie Bertram

"Write tests, not too many, mostly integration"

Words to code by!

 
Nabil Tharwat

Agreed. I wanted to add a different perspective to the discussion since most of the time the points that are focused on are abstraction, readability/testability, and length.

There are times when it's necessary to make functions long, and others to break them up into smaller units, as long as doing so has a measurable impact, makes the logic involved clear, and goes according to project goals.

Jamie Bertram

I disagree with object-oriented programming as it is commonly taught.

Florian Polster

Any resources on how you would like to have it taught?

jmc • Edited

For individual projects -- all of them. It's good to know what folks have established as good ideas, but the ideas should be applied as a result of evaluating the costs & benefits for your particular case. i.e thought-driven development.

And honestly, for many projects, cargo cult programming will probably get you through, and it'll save you from having to do too much thinking, but it's far from the most direct route.

James Eastham

Getting hung up on high code coverage from testing.

Code coverage is a great metric, but not the be all and end all. 100% coverage but no test of how the software is actually used is largely irrelevant.

Jonas Funcke

If anything, best practices are general guidelines for creating clean code. I think it really helps to work towards them (split unnecessarily long functions up, use descriptive names, etc.), but not to enforce them.

Michael Ober

Since one of the goals of good code is to make it readable, you need to consider line length. Going over 80 characters will make your code harder to read by forcing word wraps on standard paper.

Gergely Polonkai

During my 18 years of professional career unit tests saved my life so many times.

I agree that you probably donʼt need it for UI components, but the business intelligence should be thoroughly tested. Put the BI stuff in its own function, write tests for that function, then bind it to any UI element you like. Now if you change another thing your BI depends on, your BI related unit tests will fail regardless of your button.

Unit tests are what they are advertised as: utilities to test your units. The smaller these units are, the easier they are to test.
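
A tiny sketch of that split (hypothetical names, plain JavaScript for illustration):

// The business rule lives in its own function, so a unit test hits it directly.
function discountFor(total, isMember) {
  return isMember && total > 100 ? total * 0.1 : 0;
}

// Tests exercise the rule with no button, component, or DOM involved.
console.assert(discountFor(200, true) === 20);
console.assert(discountFor(200, false) === 0);

// The UI layer only wires the already-tested function to an element.
document.querySelector('#checkout')?.addEventListener('click', () => {
  console.log('discount:', discountFor(200, true));
});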

Charlie Schliesser

"at the end of the day the only thing that matters is that you can ship your app with confidence." well put – the best practices in this arena can get us there, but aren't always a good fit. Sometimes it's a mix of unit and integration, and maybe one could say there's a bit of smell there, but it's not always a perfect situation.

Charlie Schliesser

Great example! Code that abstracts away "TrackerInitializers" and then lets an array of trackers attach themselves and initialize and ... suddenly we've got code that's theoretically pure for no good reason and requires a good bit of reading to figure out what it actually does. When it's time for something to change, it's a copy and paste 2 years down the road, and the new paste may not even fit into the interfaces we've built...

scottshipp • Edited

Things that are common that I disagree with:

  • "public" as the default access modifier

  • test-last development

  • dockerize all the things

Antonio Radovcic

Wouldn't it make more sense to leave the JSX-render-testing to selenium and use unit-tests for the business-logic and other non-rendering-code?

Vince

In reality, if one of those child components has an emission, you could grab it in your tests and force the emission, then ensure your component behaves as expected when a child emits a value.

Andrew Brown 🇨🇦
  • linters
  • service objects
  • react
  • immutable data
  • GraphQL
Matei Adriel

The 4th is the only triggering one lol

 
Jonathan Kuhl

Yeah, they changed it recently (don't remember how recent though, ES6?) and now some devs think it's a great thing to do. I dunno, I hate it lol.

Klaus Donnert

I believe it threw an error in IE only.

 
Chris James

Lol