DEV Community


The problem with thought leaders

stereobooster on August 26, 2019

In recent years (and due to the toxicity of Twitter) the term "thought leader" has become a negative thing to me. Non-scientific matter. So...
Kasey Speakman

I see this soooooo much. I think it is part of human nature to play with something and then shout out how great it was to play with. Even assuming no bias (e.g. financial), there are two problems at work here. One, the writer with surface-level experience has probably stayed on the happy path. They haven't gotten familiar with the edge cases... and in fact, may be blinded from seeing them in the "honeymoon" phase. Two, readers will often naively assume otherwise. I know I have.

My take: articles that do not substantially discuss trade-offs are more for first impressions or entertainment. They don't contribute to my decision-making process. They can't. In order to properly decide, I need trade-offs to weigh. And life has taught me that everything has them.

Eric Ahnell

People get so inclined to toot their own horn that they completely forget that not everyone else thinks like they do. While trying to lead by example is great, if nobody follows you, are you really leading?

Thorsten Hirsch

Martin Fowler writes very balanced articles when analysing new technologies and architecture trends. He's a thought leader in the area of application integration.

Scott Simontis

I totally get it. Was considering a startup and while doing market research, I actually started laughing out loud at how unscientific their "evidence" was. The unbiased paper was full of things like "this unit was offline 100% of the time and never produced data so we removed it from calculations as an outlier." Seriously? That sounds like something that needs some deep investigation considering that product cost over $100,000 to install and apparently cannot even turn on.

They had one methodology that was in use at only a single location, but it covered 110 blocks of the city, while most of the other methods covered 3-6 blocks. They claimed they couldn't draw any conclusions about the 110-block system because it had not been deployed widely enough. I failed probstat twice, but isn't 110 data points statistically more relevant than conclusions drawn by comparing a few instances of 3-6 data points?

At the end of the article they revealed there was no raw data analyzed. It was all a phone interview with customers of the system where, pardon my French, the customers were basically pulling numbers out of their ass on how much the technology had improved their operations. I was feeling discouraged and about to give up until I read that study. When I saw the lack of integrity and scientific rigor, I knew I might just have a chance.