DEV Community

Ben Halpern


Which principle or saying is wrong and/or misused?

Software development gets passed down as an oral and written history of mistakes and learnings — and we wind up with a lot of "rules of thumb". Some of them are not as universally useful as people make them out to be. What are they?

Latest comments (26)

perpetual . education

Passing HTML validation, scoring well on Lighthouse, and following all the accessibility best practices still doesn't mean your site is usable.

webbureaucrat

"There's always a catch" / "There are always technical tradeoffs" / "Faster, better, cheaper: pick two". This is true most of the time, but occasionally in technology someone really does just build a better mousetrap. It's important to look for the times when that happens, because when it does, the other options are dead-end technologies.

I'm in discussions at work like, "should we keep putting dozens of apps on one managed dedicated instance or should we adopt containers?" There's really no serious conversation to be had there.

Jean-Michel Plourde

The Dunning-Kruger effect is often misinterpreted and not well understood. The original study has been criticised for errors in its calculations and their interpretation, and the buzz it created, along with all the subsequent citations, helped solidify the myth.

This McGill article is a great read.

Liviu Lupei • Edited

The term "End-to-End (E2E) Testing" is often used incorrectly.

Technically, that process involves testing from the perspective of a real user.
For example, automating a scenario where a user clicks on buttons and writes text in inputs.

That's why all the components get tested in that process (from the UI to the database).

If you're using a hack, it's no longer E2E Testing, because a real user would not do that.

A common example is when you're testing a scenario that involves multiple browser tabs (e.g. SSO Login scenario).

There are some libraries out there that cannot test in multiple browser tabs (such as Cypress), so in order to automate that scenario, you would have to pass the credentials in the header or remove the target="_blank" attribute from the element that you're clicking.

That involves a hack, and that means your test no longer mimics the exact behavior of a real user.

Another one from the testing world: Accessibility Testing
Most folks think that involves checking if your elements have the title attribute (for screen readers) and if the fonts and colors are friendly for users with visual deficiencies.

But Accessibility Testing actually just means making sure that your web application works for as many users as possible.

The major mistake here is that folks forget to include cross-browser testing in this category.

So, you might have 0.01% of users who need screen readers, but you actually have 20% of users on Safari, 15% on Firefox, and maybe even some on Internet Explorer.

Savvas Stephanides

"Clean code".

People assume that "clean code" means "the code should be clean from the moment you first try to make it work". No. The very principle of clean code is "make it work, even if the code is crap. Then, once it works as you'd expect, change it to make it clean."
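A minimal, hypothetical sketch of that workflow: first a version that merely works, then a cleaner refactor that keeps the exact same behavior (the cart example and function names are made up for illustration).

```python
# Step 1: make it work — quick and ugly, but correct.
def total_messy(items):
    t = 0
    for i in range(len(items)):
        if items[i][1] > 0:
            t = t + items[i][0] * items[i][1]
    return t

# Step 2: once it works, clean it up — same behavior, clearer intent.
def total_clean(items):
    """Sum price * quantity for every line item with a positive quantity."""
    return sum(price * qty for price, qty in items if qty > 0)

# Both versions agree, so the refactor was safe.
cart = [(9.99, 2), (4.50, 0), (1.25, 4)]
assert total_messy(cart) == total_clean(cart)
```

The refactor is only trustworthy because the messy version already worked and can be used as a reference for the clean one.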

MiguelMJ

Don't reinvent the wheel.
I know there probably exists a package or library that does it better and faster, and that's tested and maintained... but what if I don't want a new dependency? What if the library introduces more bloat than I want to accept? What if I'm trying to learn?
I think it's acceptable to reinvent the wheel when you don't like the wheels you find.

perpetual . education

Yes. Pretty sure that if "the wheel" is "websites", we need to be reinvestigating them a bit.

Kasey Speakman • Edited

All of them. As part of the human learning process, we all tend to take something that worked out well in one scenario and try it on everything. In the small and the large. That's when you see posts extolling only the virtues of a new (to the author) tech or strategy. Examples: DRY, microservices. Then many people try it and are plagued by undiscovered downsides. Then they post articles condemning it. Eventually we gain a cultural understanding of where it fits and where it doesn't. That's what the Gartner hype cycle is meant to measure. And often the corpus of articles on a given topic indicates where we are with it.

peerreynders

Single Responsibility Principle (SRP):

I do not think it means what you think it means

"Gather together those things that change for the same reason, and separate those things that change for different reasons… a subsystem, module, class, or even a function, should not have more than one reason to change."

Kevlin Henney Commentary
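One way to read that quote is as a hypothetical sketch like the following: a report class that mixes formatting and persistence has two reasons to change, so SRP says to separate them (the class names and report fields here are invented for illustration).

```python
# Formatting and storage change for different reasons,
# so they live in different classes.

class ReportFormatter:
    """Changes only when the report's layout changes."""
    def format(self, data):
        return "\n".join(f"{k}: {v}" for k, v in data.items())

class ReportWriter:
    """Changes only when the storage mechanism changes."""
    def write(self, text, path):
        with open(path, "w") as f:
            f.write(text)

# Swapping the storage backend never touches the formatter, and vice versa.
formatted = ReportFormatter().format({"revenue": 100, "costs": 40})
```

The point is not "one class per method" but that a layout change and a storage change should never force edits to the same unit of code.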

Nicholas Stimpson

Maybe "Avoid Premature Optimisation". Like all these principles, it's well meaning and well founded, but traps lurk within. It's easy to reach a stage where retrofitting an optimisation, by the time it's proven to actually be needed, is WAY harder than if it had just been planned in from the start.

leob

Good point ... I think you need to think about it and plan for it, but not always implement it right away.

Bobo Brussels

This literally doesn't answer the question, but a really tremendous principle I was thinking about recently is "Principle of least surprise" — it's not prescriptive enough to be overbearing, but really has empathy for other developers and/or users baked in.

peerreynders

What an audience finds astonishing relates to their background and general familiarity. So the principle only works relative to an implied audience, which makes it somewhat subjective.