
The Future (of AI) is Female: How Hiring Bias Mitigation in NLP Can Be Great for Women Now and in the Future

Tracy Lee | ladyleet on November 13, 2019

INTRODUCTION: AI technology is often billed as an answer to the physical and mental shortcomings of the human brain and its limited productive capacity. We ...
 
Pooria A

Is there really a problem of bias in NLP-based AI? To me it feels like an arbitrary assumption made to support your chosen topic of "female".

Nico S___

There is. Another example is Google Translate: if you take a phrase in English like "she is a doctor", translate it into a language with no gendered pronouns, and then back into English, you get "he is a doctor".
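
If you want to try it yourself without clicking through the UI, here is a rough sketch using the unofficial `googletrans` package (my assumption as the tool; its API has changed between releases, this is the classic synchronous one):

```python
# Round-trip test: English -> Turkish -> English.
# Turkish has a single gender-neutral third-person pronoun ("o"),
# so the gender information is lost in the middle step.
from googletrans import Translator

translator = Translator()

original = "she is a doctor"
turkish = translator.translate(original, src="en", dest="tr").text
round_trip = translator.translate(turkish, src="tr", dest="en").text

print(f"{original!r} -> {turkish!r} -> {round_trip!r}")
# The model has to guess a pronoun on the way back, and it tends to pick
# the one that co-occurs with "doctor" most often in its training data.
```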

Nijeesh Joshy • Edited

Just trying to understand here: what should the behavior be in the ideal case? ("He" 50% of the time and "she" 50% of the time?)

Since we convert the sentence into a language with no gendered pronouns, there is no way to retain the gender when converting it back to English, right?

Pooria A • Edited

I tried it and confirmed you are right: there is a group attribution bias there, probably caused by the training data rather than by the developers. But I'm interested to know: what's your solution for this specific "problem"?

Nico S___

You got it. It's not that developers are mean and create biased AIs; it's that the data used to train them contains bias, and that bias is picked up by the AI.

Abe Dolinger

I think it's wild that people assume AI (which, at this point, means ML) will automatically eliminate bias. Surely they forget that the output is only as good as the input, and the input is our history, with all of its inherent prejudices. I think ML can be great for identifying bias, but removing it seems like a problem on the same scale as removing bias in humans.

Nico S___

It's amazing that people don't realise that by teaching an algorithm with examples, you carry whatever bias there is in the data into your algorithm.
Many data scientists have argued that "math is not biased". It's not, but the data you use to train your model is.
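
To make it concrete, here is a deliberately contrived sketch (synthetic data, scikit-learn; all names made up): the model itself is "just math", but because the historical labels favour one group, the learned scores do too.

```python
# Contrived demo: the math is neutral, the labels are not.
# Synthetic "hiring" data where past decisions favoured group 0,
# independently of the real skill signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)   # 0 or 1: the protected attribute
skill = rng.normal(0, 1, n)      # the thing we actually care about
# Historical labels: skill matters, but group 0 got a +1.0 head start.
hired = (skill + 1.0 * (gender == 0) + rng.normal(0, 1, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Same skill, different gender -> noticeably different score.
print(model.predict_proba([[0, 0.0]])[0, 1])  # higher for group 0
print(model.predict_proba([[1, 0.0]])[0, 1])  # lower for group 1
```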

rohola

Equity and equality are not equal! Forcing a 50%-50% split of men and women in workplaces doesn't mean equity, and it isn't optimal for women, men, or society as a whole. I don't know why people have such a hard time understanding this!

Nico S___

The problem is not wanting a 50-50 split of men and women, or even an equal distribution of ethnicities. The problem, amongst others, is that nowadays many companies, recruiting firms, and HR departments are using AI algorithms to screen job applicants, give them a score, and decide whether or not they move forward in the recruitment process. As the article describes with the case of Amazon, and others that keep showing up, these algorithms are heavily biased: they will score women or non-white applicants lower, and also recommend lower salaries. Because that's what the data used to train those models says.

 
Nico S___

Look at the example DHH (creator of Ruby on Rails and co-founder of Basecamp) brought to Twitter: he and his wife applied for the new Apple Card, but she was approved for less credit than he was. This happened even though they do all their finances together, and she has a better credit score than he does.
As it turned out, the Apple Card is backed by Goldman Sachs, and it uses an AI algorithm to make those decisions.
Many other people started sharing similar experiences applying for it, including Steve Wozniak, whose wife was also approved for less credit even though she too has a better credit score than her husband.
When Apple customer service was called, they all seemed to just say that the algorithm makes the decision, and no human has a way to rectify it (this is the scary part).
As you suggested, regulation and law are what will be required to ensure we don't hand over decision-making to AIs trained on biased data (that's all the data we have).
Or, we could look for a way to remove bias from the data!
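
On that last point: one published approach is "reweighing" (Kamiran & Calders, 2012), where instead of editing records you weight each training example so that the protected attribute and the label look statistically independent. A minimal pandas sketch; the column names are hypothetical:

```python
# Reweighing (Kamiran & Calders): give each (group, label) combination a
# weight so that group and label look independent in the training data.
# `df`, "gender", and "hired" are made-up names for illustration.
import pandas as pd

def reweigh(df: pd.DataFrame, group: str, label: str) -> pd.Series:
    p_group = df[group].value_counts(normalize=True)        # P(group)
    p_label = df[label].value_counts(normalize=True)        # P(label)
    p_joint = df.groupby([group, label]).size() / len(df)   # P(group, label)

    # weight(g, l) = P(g) * P(l) / P(g, l): over-represented combinations
    # get down-weighted, under-represented ones get up-weighted.
    def weight(row):
        g, l = row[group], row[label]
        return p_group[g] * p_label[l] / p_joint[(g, l)]

    return df.apply(weight, axis=1)

# The weights can then be passed to most scikit-learn estimators via
# fit(X, y, sample_weight=weights).
```

Note that this only balances the attribute you explicitly name; proxies for it (names, zip codes, word choices) pass through untouched, which is why "removing bias from data" is the hard part.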

 
Nico S___

There is no ideal case; this is not an easy problem to solve. What we can do is stop pretending that these AI algorithms are bias-free and always right and fair. They are not.

Abe Dolinger

I mean, on the one hand, we have people on Dev hacking with currently existing brain-computer interfaces. On the other hand, lol.

 
Nico S___

That's the whole point: we can't claim that AI doesn't have bias just because it's a machine, when it's being trained with data generated by humans, with all their bias.