
Which contentious opinions in programming actually matter?

charliedeveloper ・ 1 min read

I think this question would breed a toxic comment section if it were posted in most places on the internet, but I think the dev.to community is up to the challenge of a positive chat about stereotypically contentious programming opinions! So hold my beer, here goes...

There are more stereotypical opinions floating around the development community than you can shake a stick at: tabs vs spaces, Windows vs Mac, nano vs vim etc...

The vast majority don't really matter in the long run. However, I am interested in people's opinions about which stereotypically contentious issues actually make a difference and have as close to a generally accepted 'correct answer' as you are likely to find in programming.

An example (in my current opinion) to mull over: "the less code you write the better". I find this breeds unreadable code that creates knowledge silos and makes junior devs spontaneously combust when they see it.

The differences in opinion on the topics raised are potential powder kegs, so please try to be respectful if you respond. If in doubt, just imagine you are discussing it with your granny.

Discussion

Michiel Hendriks

Comments in code are a defect.

Most people who've read Robert C. Martin's Clean Code will probably agree. You write comments because you failed to write self-explanatory code.

You can argue that you need comments to make clear what the pre- and post-conditions are. But that is just another failure to express things in code. Pre- and post-conditions could be specified in contracts, but most languages have no or only limited support for them.

Edit: most code quality tools will ignore comments when determining technical debt. This is wrong. Comments also need to be maintained and refactored. Wrong documentation is worse than no documentation.

nepeckman

I'd argue that this is only in the case of "what" comments. If you write a comment that describes the operations taking place in a block of code ("what" is happening), then there is probably something wrong with your code. Code should be as human readable as possible. But I think that including "why" comments is still valuable. If you write a comment that describes the reason you made certain decisions in the codebase, it can be very helpful. I don't know how you could communicate reasoning and decision making without a comment.
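To make the distinction concrete, here is a minimal Python sketch (the retry helper and its rate-limiting rationale are hypothetical, not from the thread): the "what" comment merely restates the next line, while the "why" comment records reasoning the code cannot express.

```python
import time

def fetch_with_retry(fetch, attempts=3, base_delay=1.0):
    """Call fetch(), retrying on ConnectionError with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            # "What" comment (redundant - it restates the next line):
            # sleep, doubling the delay each attempt.
            # "Why" comment (useful - it records a decision):
            # the upstream API rate-limits bursts, so we back off
            # exponentially instead of retrying immediately.
            time.sleep(base_delay * 2 ** attempt)
```

The first comment would rot the moment the sleep changes; the second stays true as long as the rate limit exists.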

Desolo Sub Humus 🌎🌍

I do still write what is taking place in a block of code, if only because what may be obvious to me may not be obvious to someone just starting out. Granted, my code generally is self-explanatory as far as I'm concerned (I could file it away, find it in a decade, and still know what it was for right off the bat), but I remember back when I still had to look up why I couldn't just use class and id interchangeably.

I consider it to be a sort of passive mentorship. Keep it short and simple, of course, but keep it clear enough that even a non-coder can get the gist of what and why.

George Offley

I think you should always write your code in a way that the next person can come along, read through, and understand what the code does. I do, however, think comments are still necessary. Human-readable code is great for showing what the code does, but comments can provide what code cannot: context. It was done a certain way because of x, y, and z. They let you enter the mindset of the original coder and follow their thought process in doing something a certain way.

Jason Steinhauser

There are some cases where comments are absolutely necessary. Some examples from my career:

  • Manually unrolling nested for loops for decreasing processing time
  • Implementing Ordinary Differential Equation solvers
  • Implementing algorithms from scientific whitepapers and having a customer that requires links to the whitepapers and explanations for any variations from the original algorithm (usually for performance).
  • Having comment-to-code ratio as a quality metric.

In general, though, most code I've written or reviewed has been pretty self-documenting. It gets tricky with numerical methods though, since we haven't fully embraced symbolic computing yet.

charliedeveloper Author

Great topic choice, Michiel.
I'm honestly not 100% sure where I stand on this one - what's your opinion on aiming to write code that doesn't need comments, but using them sparingly when needed for the particularly complicated stuff? Is it best to take an all-or-nothing approach, or is that just something to strive for?

Michiel Hendriks

If it is up to me (I will not go against the code conventions of the project), I will write only the why, and only if it can be explained in a few words - two lines max.

As for doc comments (i.e. method/class comments), I will write pre- and post-conditions if the language does not support them in other ways. As I mostly develop in Java, I do use the null annotations, and the rest I document on the method. I generally also declare all exceptions a method will throw, even unchecked exceptions.

So in the end: I will always try to write code which reads like a (weird) story, and leave out the footnotes (comments) as much as possible.

There is no all or nothing. Just aim to write as little (code and comments) as needed without sacrificing readability. (See other comments about this).

charliedeveloper Author

Sure thing, that seems like the sensible thing to do - nothing worse than going to read a comment in code and ending up trying to get through War and Peace. That's interesting what you say about Java; I use PHP a lot and have really enjoyed some of the return type hinting improvements made in the latest versions, precisely for the reason you mentioned. The other bonus is that I find it also helps minimise the number of tests I have to write, because the entry and exit criteria are much more controlled and behave more predictably.
Thanks for the answer!

@jarxg

Comments are gold for people learning by hacking somebody else's code, i.e. almost everybody at some point.

Dave Jacoby

They can be.

The code I am currently refactoring is heavily commented on how it used to work like A but now works like B - but not on what A or B actually do, much less what the whole thing does.

Shawn Reisner

This is a great question. I think there's some contention between people who are very pedantic about code style consistency across a project and those who don't think it's important. I'm in the "very pedantic" camp because having a consistent style is the easiest way to remove friction in the readability of your code. It removes one additional barrier to understanding and it's something I think is very important.

Jilles van Gurp

Instead of being pedantic about style, which is very exhausting, I find it much more constructive to do automatic formatting at build time. I've set this up on a bunch of projects. This removes this as a topic of discussion and frees everybody up to disagree about more interesting things ;-)
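One way to wire this up (a sketch, assuming a Gradle build; Spotless and google-java-format are real tools, but the plugin version shown is only illustrative):

```groovy
// build.gradle - Spotless ties formatting into the build lifecycle
plugins {
    id "com.diffplug.spotless" version "6.25.0"
}

spotless {
    java {
        googleJavaFormat() // one canonical style, no debate
    }
}
// `gradle build` now fails on unformatted code;
// `gradle spotlessApply` rewrites it in place.
```

Because the formatter's output is the single source of truth, style comments disappear from code review entirely.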

Shawn Reisner

I agree 100%. I do automatic formatting at build time because I'm so pedantic about style.

charliedeveloper Author

Thanks for the reply!
I definitely agree that being consistent is important for the readability of your code.
One thing I would be interested to hear your opinion on: how would you approach consistency of code style when updating legacy codebases?

e.g. you have historically used inline CSS styles for particular attributes, but this is now causing precedence issues with external stylesheets in other, newer parts of the system. Would you lean towards engineering it to work with the consistent old style, or encourage the team to set up a new code style and edit all the old stuff?

Apologies if the example isn't the clearest!

Shawn Reisner

As always, it depends. Without more context, my gut reaction is that if your old inline styles are causing precedence issues and making it difficult to move forward with your project because you're always having to go back and deal with these frustrating issues, I'd say it's probably time to go in and refactor those inline styles. Those constant nagging issues don't just slow you down, they make developers unhappy. And unhappy developers quit, which is another major cost. Just something to think about!

charliedeveloper Author

Good answer! I suppose that's a good rule for all of development - follow a particular way of thinking until there is overwhelming evidence to do something else!

Tom G

Using editorconfig or something similar to assist editors in automatically maintaining the style of the project is nice.
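A minimal `.editorconfig` along these lines (the settings shown are illustrative defaults, not a recommendation):

```ini
# .editorconfig - picked up automatically by most editors and IDEs
root = true

[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
indent_style = space
indent_size = 4

# Narrower sections override the defaults above
[*.{js,json,yml}]
indent_size = 2
```

Committing this file to the repository means every contributor's editor applies the same whitespace rules without any per-machine setup.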

Kasey Speakman

Popular opinion: Components are the answer, no matter the question.

This isn’t really even widely contested. People are accustomed to the UI framework rat race and accept it as normal. They go from framework to framework expecting the next one to solve all the problems of the previous one. But they are all fundamentally based on the same idea (components), with different conveniences on top.

Rob Hoelz

My take on "the less code you write the better" isn't "try to do in 5 lines what should take 50", but rather to avoid writing certain code in the first place - for example, not writing a function that an external dependency already does well (of course that brings up the issues of understandability and the size of your program, not to mention maintenance - did someone say "trade-offs"?). A better example, I think, is not writing certain features, or even entire programs, in the first place. Why add a feature to, say, a source-code-searching program like ack if you can write a small program that takes ack's output and gets you the result you want? Why write a program, however small, to process data if you can whip up a shell one-liner to do that processing for you? Anyway, that's just my 2¢!

charliedeveloper Author

Hey Rob, thanks for the considered response. I can definitely see your point of view about not reinventing the wheel and rebuilding things which already exist. You raise a good point that these stereotypical opinions can be interpreted differently from developer to developer, and I agree that working smart (using existing resources) is a great way to save time on a project.
To follow on from your point - I would be interested to hear what your personal criteria are for deciding whether an external resource/dependency is trustworthy enough to include in your project. Do you favour package downloads, online reviews, or GitHub issues solved?

Rob Hoelz

I consider a number of criteria when evaluating a library:

  • Reverse dependencies - who's already using this project? Perl's MetaCPAN shows this information readily, but you need to take it with a grain of salt, since it only shows code on CPAN that uses the library in question. So you'll naturally see fewer projects that depend on applications, or libraries that provide specialized services such as a Twitter API library.
  • How easy is it to use? This includes things like documentation, consistency, and bugs. A great example of the first two is Python's scikit-learn - it has phenomenal documentation and the API is very easy to fit in your head while offering a great breadth of features. As far as bugs go, a lot of the time you just have to run into these, but things like the issue tracker can provide hints, as you mentioned.
  • I often reach out to others in the community to get feedback on various contenders.
  • I will often read through some of the source code to get an idea of the quality of the code. Not just "wow, this code is bad, I can't rely on this" but also "will I be able to debug this when things break".

That's just a few examples, but that's the gist of how I approach external dependencies!

charliedeveloper Author

This is a great mental checklist to go through - I have made a note to try it next time I am making a decision.
I particularly like your last point; it seems so obvious now you have said it, but I can honestly say I am guilty of not diving in and having a quick look.
Thanks for taking the time to answer!

Kasey Speakman

Also a response to "the less code, the better".

I believe the original purpose of this statement was to point out that often the solution to a given problem is not a technical one. But our careers are generally focused on solving the technical problems. So sometimes we write (a lot of) code for things that don't need to be solved entirely with code.

When all you have is a hammer, every problem looks like a nail.

Great example recently: a client talked with me a few times about changing how an ID card was printed so that there was room to punch a hole for a lanyard. The first couple of times, my instinct was that I would need to make it a setting that clients could adjust. That entails a decent amount of work (UI, storage, integrating with printing code, testing, and future costs of reconciling the feature with other features / printers / etc).

But it finally occurred to me to ask: "Is there room above the hole? Can the hole punch machine be adjusted instead?" They were able to solve the problem that way. And now that is code I never had to write.

And it is not an either/or proposition. Sometimes the best solution includes some code plus some company processes. Versus a pure code solution that would be very complicated for both devs and users.

charliedeveloper Author

Great answer, this is a really interesting point to make. Being able to think about the bigger picture is a real hallmark of a good developer.

Ricardo Bánffy

Another question that has generated some heated discussions is mocking for testing. Usually, we mock for performance reasons or to test different hard-to-replicate (say, time/date dependent) behavior, but I often see what I can describe as "overmocking" - checking whether a method was called depending on input provided, or the parameters it was called with. This is a way to increase your code coverage (as code that depends on the opaque object being manipulated is counted), but often stems from hard-to-test code that's within that logic.

Ideally, all the code you build is easy to test with little setup and teardown. You shouldn't need to test the language (whether foo.bar(baz) actually calls method bar of object foo with baz as a parameter); you should test whether the parameters you generated and the method you selected are the right ones - if possible, outside the code that only depends on a correct implementation of the language runtime. To do that, implement those pieces as methods or functions (they feel like functions in my case) and test them in isolation - which will also be faster, by the way, speed being one of the excuses I see for overmocking.
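A small Python sketch of the contrast (the `build_query`/`fetch_orders` helpers are hypothetical): the "overmocked" test asserts the shape of a call, while the second tests the extracted logic directly, with no mock setup at all.

```python
from unittest.mock import Mock

# Extracted pure function: easy to test in isolation, no mocks needed.
def build_query(user_id, active_only):
    clauses = ["user_id = ?"]
    if active_only:
        clauses.append("status = 'active'")
    return "SELECT * FROM orders WHERE " + " AND ".join(clauses)

def fetch_orders(db, user_id, active_only=True):
    return db.execute(build_query(user_id, active_only), (user_id,))

# "Overmocked" style: verifies the call shape, effectively
# re-testing that the language passes arguments correctly.
def test_overmocked():
    db = Mock()
    fetch_orders(db, 42)
    db.execute.assert_called_once_with(
        "SELECT * FROM orders WHERE user_id = ? AND status = 'active'", (42,)
    )

# Testing the extracted logic: fast, and no mock to keep in sync.
def test_query_logic():
    assert "status = 'active'" in build_query(42, True)
    assert "status" not in build_query(42, False)
```

Both tests pass, but only the second one survives a refactor of how `fetch_orders` talks to the database.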

Thorsten Hirsch

Is code ownership good or bad?

...and can the answer be "it depends (...on the size of the software)"?

Let's think about this question from the other end - who gets all the blame and praise? If a tool or component is so small that it can easily be written completely by a single developer, everybody will probably know its author, because he's the one who gave it to you, or even presented it at your last team meeting, where he received a lot of praise from management for writing it. So you just go to the author and ask him to implement a feature for the next release. Code ownership comes naturally for small components, scripts, and tools that are too small for an additional product owner or demand process.

But if the software is too big for a single developer, if it can only be the result of a team, no single developer will feel code ownership for the complete product. So who will care for things like coding style and test coverage? Do we all care by choice, because we're professional? Or do we need processes and/or a product owner to enforce good code quality?

charliedeveloper Author

Good point - I agree the tendency is often to speak to the code's author on small components for future feature development, but this often leads to knowledge silos.

A potential way around this is using agile as the development process: breaking work down into small tasks encourages a dev team to 'swarm' over even small pieces of work, so everyone gets eyes on it. This, coupled with strongly enforced code reviews by the team, could potentially help. The trade-off is that smaller bodies of work will probably take a little longer because of the process.

Thorsten Hirsch

Could you please elaborate on that?

Michiel Hendriks

Using annotations for dependency injection and ORM is bad.

I prefer to write Spring XML configuration and Hibernate XML mappings instead of using annotations. In fact, I think using annotations is wrong, as they dictate the usage of the entities and thereby limit their functionality.

Take the typical case of Data Objects (or POJOs in Java). They define the structure of data. If I used annotations for persistence, I would also dictate how this class and its sub-classes are persisted. The data object would then no longer just define the structure of the data, but also, in the case of ORM, how to store it.

It would get even worse if I also wanted to use that Data Object for data exchange. Should I also add annotations to instruct how to serialize it to JSON and/or XML? It would become annotation hell.

Jilles van Gurp

I just got rid of hibernate and replaced it with JdbcTemplate and native SQL. Also replaced Java with Kotlin and I now have proper immutable entity classes without annotations, and a very simple DAO that queries the DB directly and does exactly what it says, and some simple service classes that use those with a TransactionTemplate. Not the first time I've done this on a project and hopefully the last time I need to be dealing with that.

IMHO class inheritance in general is something to avoid, and it is especially inappropriate for things that go in a database. It's called the object-relational impedance mismatch for a reason: databases don't do inheritance, and tables are not the same thing as classes. A good rule of thumb is that if you are not querying on it, there is no reason whatsoever for that thing to be a database column. Turning every class field into a column just creates a lot of complexity. Likewise, unless you are pulling it out separately, just shove it in a text field in JSON form instead of creating extra tables. Apply the YAGNI rule with extreme prejudice and keep your schemas simple.

Regarding Spring XML: yikes. This stopped being necessary with Spring 3, and as of version 5 it is not something you should be doing, period. IMHO the proper way to do dependency injection is:

  • constructor injection only. That way the only thing you need to slap on your beans is @Component (and maybe a few @Value on constructor args). Even that is optional if you use Kotlin since you can declaratively specify your beans programmatically.
  • restrict use of @Autowired to things like controllers; as of spring 4.x there is no need to use @Autowired anywhere else. Having constructor injected beans means that spring already knows to autowire the constructor and allows you to use private final fields for your dependencies (no setters and getters needed).
  • use @Configuration for initialing things that are not spring beans or that need more complex setup (e.g. thread pools).
  • this completely removes the need for constructors that do work (other than setting private final fields). Constructors MUST NOT do work.
  • do not allow any .xml files to exist in your project. Just say no. I'm old enough to remember XML being a cool new thing. That was 20 years ago. I've not worked on a project in the last 10 years where having XML around was desirable or necessary.
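
The constructor-injection idea above is framework-agnostic. A minimal sketch (in Python rather than Spring, with hypothetical `Mailer`/`UserService` classes): dependencies arrive through the constructor, the constructor does nothing but store them, and a test can substitute a fake without any container or mocking library.

```python
class Mailer:
    def send(self, to, body):
        print(f"mail to {to}: {body}")

class UserService:
    # Constructor injection: the dependency comes in; no other work happens.
    def __init__(self, mailer):
        self._mailer = mailer  # "private final" by convention

    def welcome(self, email):
        self._mailer.send(email, "Welcome aboard!")

# Wiring happens in one place - this is the part a DI container automates:
service = UserService(Mailer())

# In a test, pass a fake instead:
class FakeMailer:
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

fake = FakeMailer()
UserService(fake).welcome("a@example.com")
assert fake.sent == [("a@example.com", "Welcome aboard!")]
```

Because `UserService` never constructs its own dependencies, it neither knows nor cares whether a container, a `main()` function, or a test did the wiring.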
Ricardo Bánffy

As someone coming from both software engineering and operations, I place great value on having an intimate understanding of the actual environment your applications will run in. Knowing the ins and outs of your runtime kernel and the low-level libraries you use, and being able to anticipate issues that can arise from your persistence layer (be it a filesystem or a database), is extremely important.

Having said that, I'll go out on a limb and bring up the Windows vs. Mac vs. Linux divide, and suggest you should use the platform your code will run on as much as you can. If you program Windows desktop apps, nothing will substitute for hands-on experience with Windows and other Windows apps when making applications that seamlessly integrate with the environment. The same goes for Macs and Linux. Making a GUI app to manage services will make a lot of sense on Windows and absolutely none on Linux. If your code is deployed on Linux, nothing will help you understand your monitoring and operations more than using a Linux machine every day.

Ricardo Bánffy

Typing also acts as a form of documentation. You know a certain function expects arguments of certain types and returns values that we expect to have a certain behavior.
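A small Python sketch of the point (the `apply_discount` function is hypothetical): the signature alone documents the contract, with no separate comment to fall out of date.

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    return round(price * (1 - percent / 100), 2)

# The annotations tell a reader (and the IDE) what goes in and out:
assert apply_discount(200.0, 10.0) == 180.0
```

An IDE or a checker like mypy can then flag a call such as `apply_discount("200", 10)` before the code ever runs.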

Thorsten Hirsch

I totally agree about the benefits of static typing. But I'm pretty sure that "static typing advocates" also know how their IDE benefits from static typing. It's actually their foremost argument.

Cory O'Daniel

"Why" changes. That's why good commit messages are important - so you can see how the "why" changed over time.