I think it's wild that people assume AI (which, at this point, really means ML) will automatically eliminate bias. They seem to forget that the output is only as good as the input, and the input is our history, with all of its inherent prejudices. ML can be great for identifying bias, but removing it seems like a problem on the same scale as removing bias in humans.