Accessibility Specialist. I focus on ensuring content created, events held and company assets are as accessible as possible, for as many people as possible.
I always love ideals, but you could write a whole paper on how attempting to correct for bias is an almost impossible task.
I may tackle it in a future article, as I need to give it some deep thought, but the fact that AI has no temporal reasoning, no morals, and no understanding of our intentions means that I think current attempts at correcting biases are lacking and far too "blunt".
Ingo Steinke is a Berlin-based senior web developer focusing on front-end web development to create and improve websites and make the web more accessible, sustainable, and user-friendly.
I heard that Google has been doing years of research on AI without releasing anything in such a disputable state as ChatGPT, which Microsoft now wants to connect to Bing search somehow. AI tools will probably never fully work out of the box without humans learning how to operate them, but then again we can see how far photography has come, from a nerdy experimental apparatus to "Siri, take a picture of my dog!"
Maybe it should be up to the end users to make their intentions clear. We don't even manage to communicate with other people without misunderstanding, so how are we supposed to make machines do better? But like you said, it must be up to developers to restrict machines to prevent them from doing harm and spreading misinformation, clichés, hate speech, etc., probably both by choosing the training material and by coming up with much better algorithms, since the whole idea of deep learning is based on taking something old and replicating existing information.