We want our code to behave differently in different environments and operating systems, so making those decisions at compile time is the best choice, isn't it?
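For instance, a minimal sketch of that kind of compile-time branching, using the standard `_WIN32` platform macro (the function name and paths are just illustrative):

```cpp
#include <string>

// Hypothetical helper: the branch is resolved at compile time,
// so each build contains only one of the two paths.
std::string configDirectory() {
#ifdef _WIN32
    return "C:\\ProgramData\\myapp";  // Windows build
#else
    return "/etc/myapp";              // POSIX build
#endif
}
```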
Maybe the best variant is when the call sites look that simple but, at the compiler level, they are instantiated with real code or with stubs, depending on compiler directives. That way all the `#if`s are concentrated in the declaration source. In one project we used this with two axes of levels, so that, for example, DEBUG_MIN is always compiled (with a runtime check) but DEBUG_MAX only if the log level is >= MAX at compile time.

As for regrets about the preprocessor, its main problem is not preprocessing itself but its low level (it is just textual replacement). Common LISP is fundamentally based on a preprocessor, yet it is really high-level, with a content analyzer. If an analog were added to C++, most of the template magic wouldn't be needed.
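A minimal sketch of that layout under stated assumptions: DEBUG_MIN and DEBUG_MAX are the names from the comment above, while `LOG_LEVEL` and `g_runtimeLogLevel` are hypothetical names for the compile-time and runtime axes:

```cpp
// One translation unit standing in for a "debug.h" plus a caller.
// All the #if logic is concentrated here; call sites stay plain.
#include <cstdio>

#define LOG_MIN 1
#define LOG_MAX 2

// Compile-time axis: build with -DLOG_LEVEL=2 to enable DEBUG_MAX.
#ifndef LOG_LEVEL
#define LOG_LEVEL LOG_MIN
#endif

// Runtime axis: checked even in MIN builds.
static int g_runtimeLogLevel = LOG_MIN;

// DEBUG_MIN is always compiled in, guarded only by the runtime check.
#define DEBUG_MIN(msg) \
    do { if (g_runtimeLogLevel >= LOG_MIN) std::puts(msg); } while (0)

// DEBUG_MAX expands to real code only when the compile-time level is
// >= MAX; otherwise it is a stub that disappears entirely.
#if LOG_LEVEL >= LOG_MAX
#define DEBUG_MAX(msg) \
    do { if (g_runtimeLogLevel >= LOG_MAX) std::puts(msg); } while (0)
#else
#define DEBUG_MAX(msg) do { } while (0)
#endif

int main() {
    // Call sites carry no #if at all.
    DEBUG_MIN("min-level trace, always present");
    DEBUG_MAX("max-level trace, present only when LOG_LEVEL >= LOG_MAX");
}
```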
I totally agree with the concept. One question: it looks like the wrong and right samples are the same. Is that right?
Uhh. Something is going wrong with the dev.to cache in the app.
I have reported it to them, since it happens in all my code smell articles.
Try refreshing the article
Memory and CPU will always be finite.
Of course. But nowadays we are more concerned with writing declarative, less coupled, evolvable software than with optimization hacks.
We don't need to waste resources. And software maintenance and evolution is our scarcest resource now.