Our field is practically dripping with ideology. You can't pick up a textbook and simply learn how to do a certain style of programming without also hearing a slew of claims about how this particular way of doing programming is guaranteed to give you results.
On some level, you can't blame the authors and the conference speakers for making such claims. If they can't say their thing is useful, why would you buy the book? We live in a capitalist society and everyone's gotta make a buck.
It's strange to think that many of us paid good money to go to a university, just to be indoctrinated with the dominant ideology of the time. I was taught that Object Oriented Programming is the obvious evolution from Procedural Programming and that its features make it a better choice for most programming tasks. You likely went through a similar indoctrination process, although I can't speak directly to your experience.
But I'm not actually writing this to talk about Object Oriented Programming (well, I sort of am but not in the way you might think). I'm writing it to draw your attention to claims, what they look like, how weak and strong they are, and how to properly evaluate them in a scientific manner.
We are surrounded by claims that often go totally unnoticed, so simply noticing them and pointing them out confers a huge advantage if any of us want to get better at the craft and work together in a more empirical way.
So first, let's start with a very weak claim that I myself am inclined to make.
"I derive personal enjoyment and satisfaction from building video games from scratch in the C programming language on my Apple Macintosh Personal Computer. I think it is interesting to understand the lower-level components that go into a video game engine. I also believe that having this knowledge will make me better able to diagnose problems I might find in games and apps that I build using higher level languages and frameworks."
This is pretty bland and vanilla. There isn't a huge bar you have to clear to prove any of it empirically. I am making a simple set of claims about myself, so all you need to do is find some measurable way to define my "enjoyment" and "satisfaction", then measure my emotional state as I engage in the activities I say I enjoy.
Proving that I find something "interesting" is arguably much more complex, but it's still way easier than proving everyone finds that thing interesting or even proving that all programmers in the United States of America find it interesting. Because the claim is about me and not anything else, it is by definition easier to prove.
Alright, now let's kick it up a notch and try a different claim.
"Development in the C programming language improves the performance of video games and results in fewer dropped frames as well as reduced control lag and audio latency."
This is a bigger claim about a bigger set of entities. To properly prove it, you need to perform a randomized controlled trial on a few different groups, each assigned the same game but a different programming language.
You then need to measure the dropped frames, control lag, and audio latency in the video games each group produces. All of this needs to happen with a large enough sample size for the results to be statistically significant.
You may also want to control for the programming competence of the team members making the game, which naturally adds even more complexity.
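To make the statistics step concrete, here's a minimal sketch of how you might compare dropped-frame counts between two groups once the trial data is collected. Everything here is invented for illustration: the group names, the frame counts, and the sample sizes are hypothetical, and a real study would need far larger samples plus the controls discussed above.

```python
import random
from statistics import mean

# Hypothetical dropped-frame counts per test session, one value per team.
# These numbers are made up for illustration, not real measurements.
c_group = [3, 5, 2, 4, 6, 3, 5, 4]       # teams that built the game in C
other_group = [7, 6, 9, 5, 8, 7, 10, 6]  # teams using a higher-level stack

def permutation_test(a, b, trials=10_000, seed=0):
    """Two-sample permutation test on the difference of means.

    Repeatedly relabels the pooled measurements at random and returns
    the p-value: the fraction of relabelings whose mean difference is
    at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        if abs(mean(perm_a) - mean(perm_b)) >= observed:
            hits += 1
    return hits / trials

p = permutation_test(c_group, other_group)
print(f"observed difference: {mean(other_group) - mean(c_group):.2f} dropped frames")
print(f"p-value: {p:.4f}")
```

A permutation test is used here only because it makes no assumptions about how dropped frames are distributed; a t-test or any other appropriate method would serve the same argumentative point, which is that the claim only becomes decidable once you pin it down to numbers you can actually measure.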
Of course, that's if you really want to produce a convincing case. There are other ways one might go about gathering evidence for this sort of proposition.
For example, you could go and buy a bunch of video games whose development backstory you know (which languages and tools the teams claim to have used) and then test those for dropped frames, audio latency, etc. This is a pretty good way to gather evidence, but it isn't as convincing as a randomized controlled trial.
How do you really know those teams built the whole game using lower level languages and didn't "cheat" by using a framework here and there for other parts? You didn't oversee the development of the game, so you're basically taking their claims at face value.
Still, it isn't as if this sort of evidence is totally useless. It's just not as good as a randomized controlled trial.
In all likelihood, the observational approach will have a much larger sample size, since it is much easier to procure video games on the open market than it is to produce them under highly controlled lab conditions. This kind of evidence carries some weight and might be adequate for making whatever decision you care to make.
Now let's get a little more bold.
"Object Oriented Programming has no place in the video game industry. It produces low-quality games and wastes developers' time."
This is a bolder claim, but more importantly it has also lost some degree of precision. The previous claim was clearer about the tool being used. This claim is kind of vague. What, exactly, is Object Oriented Programming? There are many conflicting definitions, so it isn't particularly illuminating when it is used to describe a set of programming practices.
Just to give you a sense of this vagueness, take the above sentence and rewrite it, replacing "Object Oriented Programming" with "Dynamic Memory Allocation".
"Dynamic memory allocation has no place in the video game industry. It produces low-quality games and wastes developers' time."
Now you have something more testable. Sure, the measure of "quality" is still vague, but we could easily swap in something measurable like "more out-of-memory errors and crashes" to get back to a scientific hypothesis.
Notice how difficult it becomes to prove or say anything as the circle of generality expands. Let's expand it further with the following claim.
"Object Oriented Programming is a complete waste of time. It encourages developers to build bad habits, and it frequently results in bad products in all circumstances."
Wow, what a bafflingly vague and impossible statement to prove. First, you need to break down "Object Oriented Programming" into a set of testable programming methods, then you need to define "Bad Habits", and you also need to define "Bad Products."
Once you've gotten to the appropriate level of specificity, you then have to go about designing a series of randomized controlled trials to measure the measurable things you've defined. The sun might explode, or you might run out of money, before you're ever able to do that.
But I want to point out something else here, because it often goes totally unnoticed. Let's try to flip the claim around and see if we get anything more testable.
"Object Oriented Programming is always a great use of time. It encourages developers to build good habits, and it frequently results in good products in all circumstances."
We didn't make much progress, did we? OOP is still a nebulous concept, we haven't really defined "good habits" in a measurable way, and "good products" is still totally subjective.
As Carl Sagan famously put it, "extraordinary claims require extraordinary evidence." Both of these are extraordinary claims. We need to collect an insane amount of evidence to prove them one way or the other.
This, my friends, is the nature of a "religious" debate. People feel whatever thing they feel, and they make sweeping generalizations that simply cannot be tested.
Having one of these "debates" is like pulling teeth, because the sorts of people who engage in them aren't willing to think like scientists. You go back and forth, wasting your breath when you really ought to be out there in the world trying to find a way to collect better evidence to support your claims.
It is important to note that businesses oftentimes don't pick a particular technology or practice because they have ample empirical evidence and know it to be better. They usually pick things for purely capitalistic/opportunistic reasons.
The company goes with React Native because they've got a lot of web developers and no native developers. They pick Agile/Scrum because it's trending in the industry, and they think they can sell more of their services by advertising themselves as an Agile shop. They say they do a ton of unit tests because it might calm an uneasy customer who is worried about defects and long-term maintenance.
I've even seen some real knee-slappers.
"We picked Swift because it's less code and you don't need to type semicolons at the end of each line."
Seriously. You can't make this stuff up.
My point is, scientific and empirical thinking are totally lacking in this industry. We are awash in grandiose impossible-to-prove claims that people sling around willy-nilly, trying to impress their peers and make a few extra bucks.
So be a good scientist and make sure you only let in beliefs when they are well-defined and there is ample evidence demonstrating them to be the case.