That's some sort of Explain-Like-I'm-Five, and before reading to the end, I already found some important aspects that are often overlooked in the current debates, although there has been discussion about AI ethics and responsibility long before ChatGPT; see Dr. Joy Buolamwini, for example: media.mit.edu/posts/how-i-m-fighti...
To follow up on one point in your previous post about the age of creation: things are not guaranteed to be of high quality just because they have been produced by human beings. We might even succeed in improving and regulating algorithms and machine learning in a way that the creations avoid old clichés, not only for the sake of inclusiveness and diversity, but also to avoid boring, uninspiring art and outdated documentation.
I always love ideals, but you could write a whole paper on how attempting to correct for bias is an almost impossible task.
I may tackle it in a future article, as I need to give it some deep thought, but the fact that AI does not have temporal reasoning, morals, or an understanding of our intentions means that I think current attempts at correcting biases are lacking and far too "blunt". 💗
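To make "blunt" concrete, here is a toy sketch (purely hypothetical; the word list and function names are made up for illustration) of the crudest kind of correction, a keyword filter on model output. It has no notion of context or intent, so it over-blocks and under-blocks at the same time:

```python
# Hypothetical sketch of why "blunt" bias correction falls short.
# A naive post-hoc filter just pattern-matches words; it has no model of
# context, intent, or time, so it blocks harmless sentences and misses
# harmful ones phrased differently.

BLOCKED_TERMS = {"violent", "hate"}  # assumed word list for illustration

def blunt_filter(text: str) -> str:
    """Reject any output containing a blocked term, regardless of context."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & BLOCKED_TERMS:
        return "[output withheld]"
    return text

# A report *about* hate speech gets blocked...
print(blunt_filter("This article documents hate speech online."))
# ...while a genuinely harmful sentence that avoids the exact words passes.
print(blunt_filter("Those people deserve whatever happens to them."))
```

Real moderation layers are far more sophisticated, of course, but the underlying limitation is the same: without a model of intent, any filter has to trade false positives against false negatives.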
I heard that Google has been doing years of research on AI without releasing anything in such a disputable state as ChatGPT, which Microsoft now wants to connect to Bing search somehow. AI tools will probably never fully work out of the box without humans learning how to operate them, but then again, we can see how far photography has come, from a nerdy experimental apparatus to "Siri, take a picture of my dog!"
Maybe it should be up to the end users to make their intentions clear. We don't even manage to communicate with other human beings without misunderstanding; how are we supposed to make machines do better? But like you said, it must be up to developers to restrict machines to prevent them from doing harm and spreading misinformation, clichés, hate speech, etc., probably both by choosing the training material and by coming up with much better algorithms, as the whole idea of deep learning is based on taking something old and replicating existing information.
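On the "choosing the training material" side, here is a hypothetical sketch of what curation might look like (toxicity_score is a stand-in for a real content classifier, and the word list is invented for illustration). The point is only that filtering happens before training, so the model never sees, and therefore never replicates, the dropped material:

```python
# Hypothetical sketch of curating training data before learning.
# toxicity_score() is a placeholder for any real content classifier.

def toxicity_score(doc: str) -> float:
    """Placeholder classifier: fraction of flagged words in the document."""
    flagged = {"hate", "slur"}  # assumed word list for illustration
    words = doc.lower().split()
    return sum(w in flagged for w in words) / max(len(words), 1)

def curate(corpus: list[str], threshold: float = 0.1) -> list[str]:
    """Keep only documents scoring below the toxicity threshold."""
    return [doc for doc in corpus if toxicity_score(doc) < threshold]

corpus = ["a friendly tutorial on Python", "hate hate hate"]
print(curate(corpus))  # ['a friendly tutorial on Python']
```

In practice this kind of curation happens at a much larger scale with learned classifiers, but the principle is the same.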