
There Is No Substitute For Good Judgement, Especially if You Want to Be Data Driven

Ana Ulin 😻 on November 06, 2018

My recent entry on measuring a software engineer’s performance led to an interesting comment: but if you rely only on judgement, isn’t that a rec...
 
Kasey Speakman • Edited

The only way to see the value that individuals bring to a team is to be part of that team. I think disconnected parties (for example, executives) often try to substitute metrics here, instead of trusting the leaders of that team to report the information. But without context and judgement, the metrics are not meaningful.

And in fact, the whole idea of routine performance evaluations is suspect here. Why is performance being evaluated? Do we mistrust our management staff? Was there a problem? Was there a huge success? Or are we doing it because we've always done it, or because business magazines say we should do it?

What's wrong with defaulting to paying people market value for the work they do, then dealing with exceptionally good or bad performances as they occur? You know, treating employees like humans, like I want to be treated. :)

Ana Ulin 😻

And in fact, the whole idea of routine performance evaluations is suspect here.

Couldn't agree more!

You got me thinking about this question of "do we even need performance evaluations?", and I ended up writing up a new post about it: dev.to/anaulin/why-do-we-have-perf...

 
rhymes

the same statistic is often used to support opposing stances, the conclusions highly dependent on the worldview of the person analyzing the data.

So true, there's a lot of data manipulation going on right now.

I recently read an opinion that one of the foundational issues with how we use AI today is the search for the perfect answer, trying to substitute human judgement with the machine's. Instead, the scientist proposed, we should embrace fallibility and, where feasible, let the machine output a few possible answers.

The article was this one: We Need an FDA For Algorithms.
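
To make that idea concrete, here is a minimal sketch (my own illustration, not from the article) of surfacing the top few candidate answers with their scores instead of committing to a single "best" one. The candidate names and confidence values are entirely hypothetical.

```python
# Minimal sketch: present the top-k candidate answers with their scores,
# rather than committing to a single "best" answer. The scores below are
# made-up values standing in for a model's confidence output.

def top_candidates(scores: dict[str, float], k: int = 3) -> list[tuple[str, float]]:
    """Return the k highest-scoring answers, best first."""
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:k]

# Hypothetical model output: candidate answer -> confidence score.
scores = {"approve": 0.48, "escalate": 0.41, "reject": 0.11}

for answer, score in top_candidates(scores):
    print(f"{answer}: {score:.2f}")

# Output:
#   approve: 0.48
#   escalate: 0.41
#   reject: 0.11
#
# Reporting only "approve" would hide that "escalate" is almost as likely;
# showing all three leaves the final judgement to a human.
```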