DEV Community

Discussion on: Why I'm phasing out ternary statements

Rob Suttles

Just one other consideration in this discussion might be performance. Are ternaries any more or any less performant than standard if/else statements? I'm asking this as a junior who does not know the answer. ha

Adam McGurk

That's a great question, and a really important one to be asking in the context of refactoring!

I spun up a quick perf test:
jsperf.com/testing-ternary-for-dev...

And it looks like the difference is negligible. The first time I ran it, the ternary was faster; the second time, the non-ternary was faster.
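
If the jsperf link ever goes stale, a rough local version of the same comparison might look something like this (just a minimal sketch; the iteration count and the work inside each branch are arbitrary, and micro-benchmarks like this are easily skewed by the JIT):

```js
// Minimal sketch: time the same assignment written as a ternary vs. if/else.
// Treat the numbers as rough; they vary from run to run.
function withTernary(n) {
  let total = 0;
  for (let i = 0; i < n; i++) {
    total += i % 2 === 0 ? 1 : 2;
  }
  return total;
}

function withIfElse(n) {
  let total = 0;
  for (let i = 0; i < n; i++) {
    if (i % 2 === 0) {
      total += 1;
    } else {
      total += 2;
    }
  }
  return total;
}

const N = 10_000_000;

console.time('ternary');
withTernary(N);
console.timeEnd('ternary');

console.time('if/else');
withIfElse(N);
console.timeEnd('if/else');
```

The two timings tend to land within noise of each other, which lines up with what the jsperf run showed.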

Feel free to run it yourself!!

Thanks for reading!

Joe Attardi

That sounds like premature optimization, IMHO. In the vast majority of cases it probably doesn't matter in terms of human-perceived performance. I say "probably" because there's always an exception to every rule :)

Adam McGurk

Yeah, that's probably true. This block of code is too small for any perf change to have a noticeable impact.

Great insight and thanks for reading!

Jérémie Astor

Maybe (probably) I'm wrong, but wouldn't this impact performance if it were called a HUGE number of times?

That said, I think in this case, for C at least, both would compile to the same assembly.

Joe Attardi

I mentioned "premature" optimization. If this was called a huge number of times, and it did impact performance, then it would be time to optimize. But I would be willing to bet that for most use cases, it doesn't really matter.

Riccardo Messineo

If you're a junior, I would suggest you focus on readability. Above all.
We live in "Giga" times where in 99% of contexts it doesn't matter if you use a byte more or less (or a digest cycle, or a MB of RAM, or whatever).
To be clear, I'm not saying to waste resources, but in general give more importance to readability (fewer bugs, quicker refactoring, more code reusability, and so on...).

To satisfy your genuine curiosity: both of these constructs are written in "high-level" code that will eventually be translated into "low-level" code.
I think (though I'm not sure) that they end up as the same low-level instructions and have the same performance.
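
For example, these two forms express exactly the same branch, and I'd expect the engine to treat them the same way (just an illustration, not something I've checked at the bytecode level):

```js
// The same decision written two ways; the engine sees an equivalent branch in both.
const isMember = true;

// Ternary version
const discount = isMember ? 0.1 : 0;

// if/else version
let discountAlt;
if (isMember) {
  discountAlt = 0.1;
} else {
  discountAlt = 0;
}
```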

Adam McGurk

Yeah, the perf test I linked up top showed very little difference between the two!

Bryan Winter • Edited

I have a controversial opinion on this... I'm not sure this is something really worth considering in 99.9% of use cases.

This can easily lead to premature optimization. Most code executes rarely and consumes a fraction of a fraction of your resources. Optimizing for a millionth-of-a-second benefit in code that executes, say, once per session is probably not a good use of your cognitive resources.

In fact, I'd argue that even if the if statement or the ternary were 100x more efficient than the other (and it doesn't appear to be), it would still generally be a bad idea to chase 'optimally performant' code.

Optimal performance could be achieved by building applications entirely in assembly, without frameworks or high-level languages at all. But the reason we write almost nothing in assembly is that the biggest barrier to getting a functioning application is the human writing it. Generally, if a coder is writing an if statement, you're doing a high-level check, which, compared to the billions of things a computer does every second, isn't much time at all; you'd need to write a billion lines of code to gain a few seconds of extra performance across all the applications you build over your lifetime.

Optimize for readability first; add performance improvements later, after you've had a chance to actually profile the code and figure out where your bottlenecks are.