All right, here's the deal:
AI is being used - reportedly for the first time - to screen job applicants in the UK and worldwide.
Is it just me, or does this smell really, really bad to you too? Please share your thoughts in the comments.
Let's discuss:
What do you think is your role, as a developer, in preventing AI from being used as a massive discrimination engine that judges humans on external traits alone?
Note: you or your company might not be working with AI right now. But rest assured, you won't be able to avoid it for much longer.
Reflections on the article for background:
The company behind the AI "... claims it enables hiring firms to interview more candidates ... and provides a more reliable and objective indicator of future performance free of human bias".
Ok.
Wow, wow, wow, wait. "Free of human bias"?
Last time I checked, humans were building AI models. And training samples!
"The algorithms select the best applicants by assessing their performances in the videos against about 25,000 pieces of ... information compiled from previous interviews of those who have gone on to prove to be good at the job".
console.log("wow, ".repeat(10));
console.log("wait, ".repeat(99));
Who gets to judge who has "proved to be good" and who hasn't? Assessing an individual's performance is one thing: if the evaluator makes a mistake (or is simply biased), they harm that one individual.
That is already bad enough, but the damage is limited to a single person.
Now, who decided it would be a good idea to scale that bias to an entire population?
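To make the scaling problem concrete, here's a minimal sketch (entirely hypothetical data and a deliberately trivial "model", not the vendor's actual algorithm): one biased evaluator labels the training interviews, and the model faithfully scales that bias to every future applicant who shares the penalized trait.

```javascript
// Hypothetical: a biased human evaluator who rejects accent "B"
// regardless of skill, and hires accent "A" when skill >= 5.
const biasedEvaluator = (candidate) =>
  candidate.accent === "B"
    ? "reject"
    : candidate.skill >= 5 ? "hire" : "reject";

// "Previous interviews" labeled by that one evaluator become training data.
const trainingData = [
  { accent: "A", skill: 9 }, { accent: "A", skill: 3 },
  { accent: "B", skill: 9 }, { accent: "B", skill: 8 },
].map((c) => ({ ...c, label: biasedEvaluator(c) }));

// Trivial "model": majority label per accent, learned from the data.
const model = {};
for (const { accent, label } of trainingData) {
  model[accent] = model[accent] || { hire: 0, reject: 0 };
  model[accent][label]++;
}
const predict = (candidate) => {
  const counts = model[candidate.accent];
  return counts.hire >= counts.reject ? "hire" : "reject";
};

// A highly skilled new applicant inherits the evaluator's bias, at scale.
console.log(predict({ accent: "B", skill: 10 })); // "reject"
console.log(predict({ accent: "A", skill: 10 })); // "hire"
```

A real system is far more complex than a majority count, but the failure mode is the same: the model never sees skill for accent "B", only the evaluator's verdicts, so "free of human bias" really means "human bias, automated".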
Please share your thoughts in the comments.
Cover image by Chaozzy Lin on Unsplash
Top comments (7)
I think this is a really important topic. There is a wide range of applications for machine learning, but I always think we should stop and ask whether a thing really should be built. For example, imagine a world where algorithms are used to determine whether or not you are breaking the law. It could very easily become an "everyone is guilty until proven innocent" situation.
Exactly, there are many applications for AI that don't affect human relations and could help to lift living standards.
But messing with human interactions - such as interviewing job applicants - is a disservice. It will create much more harm than good.
The Minority Report scenario you mentioned is way too scary, and I don't think many people realise how close we are to seeing things start to move in this direction...
I feel like I've read about some cases like this that already exist. This was the only example I could think of at the time. :)
As developers we are the ones who build the software that affects the world, and ultimately management can only tell us to build it. They can't will it into existence. The software that exists only exists by the agreement of the engineers to build it. This means both that we have a moral obligation to the people we affect by what we build and that we have collective power to refuse to build it. There can be no profits for the boss without the work of the engineer, and we should use that fact as leverage to bury any proposal that would require us to do harm to our users. Have courage, link arms, and refuse.
Couldn't agree more. I find it unbelievable to see people hiding behind ideas such as:
We can do better
I think in the future we'll get a lot of courses on how to imitate "those who have gone on to prove to be good" to get a job from an AI.
Another big issue summarized in simple terms. Thanks for adding this point!
Job applications are already somewhat... robotic, if I may say so. People on both sides are seldom comfortable being themselves.
Using AI will only make it more theatrical.
And yes... the illusionist hype of live-rich-in-Miami-selling-online-courses-effortlessly will gain one more subject to teach.