If an AI model shows "bias" (I absolutely hate that word) toward a specific group of people, you cannot automatically call it "biased". As far as I can tell, such a situation can arise in two ways:
1: The model has not been given enough input to make the proper connections, for example, that people of different skin colors should be treated the same. Much like the Tesla car issue some time ago.
2: The model has been trained correctly, and it is simply true that some human characteristic is relevant to its decision-making process.
As much as people these days like to think that we are defined only by our actions, that simply does not hold true in some cases, and it is folly to pretend otherwise just because it does not suit our views.