AI Bias: What to do? (Part 2)

#ai

This is Part 2 of a three-part blog series. As discussed in the earlier blog, data really matters: if it is not treated well, it can lead to disasters. Unfortunately, many AI systems have been victims of biased data. Data can turn biased right from the initial step, which is why it should be preprocessed well. For example, a faulty percept (e.g. a faulty sensor) gives out wrong data, and if it is not monitored well, it will almost certainly lead to a tragedy. Monitoring things at this initial step is therefore one of the most important ways to avoid faulty data. This is the case of wrong or biased data arising unintentionally, through faults or mishandling.
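As a concrete illustration, here is a minimal sketch of the kind of sanity check that can catch faulty sensor readings before they pollute a dataset. The field name `temperature_c` and the valid range are hypothetical, chosen only for this example:

```python
# Minimal sketch: validate sensor readings before they enter a dataset.
# The field name "temperature_c" and the valid range are hypothetical.

VALID_RANGE = (-40.0, 85.0)  # plausible operating range for an imagined sensor

def validate_reading(reading: dict) -> bool:
    """Return True only if the reading looks trustworthy."""
    value = reading.get("temperature_c")
    if value is None:  # sensor failed to report a value at all
        return False
    if not isinstance(value, (int, float)):  # corrupted, non-numeric payload
        return False
    low, high = VALID_RANGE
    return low <= value <= high  # reject physically impossible values

readings = [
    {"temperature_c": 21.5},   # fine
    {"temperature_c": 999.0},  # faulty sensor: out of range
    {"temperature_c": None},   # faulty sensor: missing value
]

clean = [r for r in readings if validate_reading(r)]
print(f"kept {len(clean)} of {len(readings)} readings")  # kept 1 of 3
```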
But what about data that is inherently biased? Before getting into this, let us analyse what bias is. Bias can appear in both a fair and an unfair manner. Here we will analyse the unfair kind, where there is some sort of discrimination or inequality against certain individuals, groups, or communities. AI is a rose, but it has thorns attached as well. While we hope for AI to be used to reduce this discrimination, it can also fall into the wrong hands and be used to increase the bias further. Let's see some examples where this has already taken place.

Racism in US healthcare allocation:

An AI system used in the US to manage care for around 200 million patients was found to give Black patients a lower standard of care. Despite being at high risk, they were assigned to lower risk levels, which was not appropriate. This happened because the system used past healthcare spending as a proxy for medical need, and less money had historically been spent on Black patients, often because they could not afford the fees. The AI therefore perceived that they required a lower standard of care and started discriminating. The system should have been monitored well beforehand so that this issue never arose.
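To see how a proxy label like cost can produce this effect, here is a small, purely hypothetical simulation (not the actual system, and with invented numbers): two groups have identical health needs, but less is spent on one of them, so a model that ranks patients by predicted cost flags that group for extra care far less often:

```python
# Hypothetical simulation of proxy bias: cost stands in for health need.
# All numbers are invented purely for illustration.
import random

random.seed(0)

def patient(group: str) -> dict:
    need = random.gauss(50, 15)            # true health need, same for both groups
    access = 1.0 if group == "A" else 0.6  # group B historically gets less spending
    cost = need * access                   # observed cost, the proxy label
    return {"group": group, "need": need, "cost": cost}

patients = [patient("A") for _ in range(5000)] + [patient("B") for _ in range(5000)]

# "Model": flag the top 20% of patients by cost for extra care.
threshold = sorted(p["cost"] for p in patients)[int(0.8 * len(patients))]
for g in ("A", "B"):
    flagged = [p for p in patients if p["group"] == g and p["cost"] >= threshold]
    print(g, f"flagged: {len(flagged) / 5000:.1%}")
# Group B is flagged far less often despite identical need distributions.
```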

COMPAS:

This technology was used in US courts to guide sentencing by predicting the likelihood of a criminal reoffending. However, a few years ago it was reported that the algorithm's errors favoured white defendants: Black defendants who did not go on to reoffend were flagged as high risk far more often than comparable white defendants. A simple error-rate audit, sketched below, can surface exactly this kind of disparity.
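The sketch uses invented toy records (not the real COMPAS data) and computes the false positive rate per group, i.e. the share of people who did not reoffend but were still flagged as high risk:

```python
# Toy audit: compare false positive rates across groups.
# `records` is invented illustrative data, not the real COMPAS dataset.

records = [
    # (group, flagged_high_risk, actually_reoffended)
    ("white", True,  True), ("white", False, False), ("white", False, False),
    ("white", True,  False), ("black", True,  False), ("black", True,  False),
    ("black", True,  True), ("black", False, False),
]

def false_positive_rate(group: str) -> float:
    """Among people in `group` who did NOT reoffend, how many were flagged?"""
    negatives = [r for r in records if r[0] == group and not r[2]]
    false_pos = [r for r in negatives if r[1]]
    return len(false_pos) / len(negatives)

for group in ("white", "black"):
    print(group, f"FPR = {false_positive_rate(group):.2f}")
# A large gap between the groups signals the kind of bias reported for COMPAS.
```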

Only men as CEOs:

An advertising algorithm mislearnt that only men can be CEOs, because most advertisements show only men in CEO and other high-level positions. This was a case where a reinforcement learning algorithm learnt the wrong lesson, leading to confusion and sex discrimination.
This list goes on and on, making bias a bane within the boon of AI. That is why an important question arises: how can we tackle this serious issue and avoid such instances? We are aware of the severe consequences of committing such mistakes. They hurt people's sentiments and emotions, and in extreme cases they can even be life-threatening!
Though many factors contribute to bias in Artificial Intelligence, humans are a big one, since data can be manipulated and biased at the source or at any step of data mining, intentionally or unintentionally. The algorithms themselves are also a cause: many unsupervised and reinforcement learning systems are prone to such disasters if not handled carefully, because the model learns on its own, and the chances are that it learns from misleading data, making it a biased system in real-life applications, as seen above. It is necessary to look into this issue, as it can cause discomfort and disputes among users. So, how do we design a model? The EU has an answer. Its guidelines suggest that a model should be:

1) Robust

2) Ethical

3) Lawful

So how do we reduce bias? Let's see that in the next blog!
