DEV Community

Discussion on: AI and Ethics [Chapter 1]

GrahamTheDev • Edited

Is your brain fried after reading that?

Did you actually read it... probably not 😱🤣, but I hope that even skimming it sparked some thoughts and ideas, and I would love it if you shared them below.

As I said, I am learning this myself and am certainly only at the starting line in terms of knowledge in this area, so every idea or thought that people share is valuable to me. 💗

Ingo Steinke

That's some sort of Explain-Like-I'm-Five, and before reading to the end I had already found some important aspects that are often overlooked in the current debates, although there was discussion about AI ethics and responsibility long before ChatGPT; see Dr. Joy Buolamwini for example: media.mit.edu/posts/how-i-m-fighti...

To follow up on one point in your previous post about the age of creation: things are not guaranteed to be of high quality just because they have been produced by human beings. We might even succeed in improving and regulating algorithms and machine learning so that their creations avoid old clichés, not only for the sake of inclusiveness and diversity, but also to avoid boring and uninspiring art and outdated documentation.

GrahamTheDev

I always love ideals, but you could write a whole paper on how attempting to correct for bias is an almost impossible task.

I may tackle it in a future article as I need to give it some deep thought, but the fact that AI does not have temporal reasoning, morals or an understanding of our intentions means that I think current attempts at correcting biases are lacking and far too "blunt". 💗

Ingo Steinke

I heard that Google has been doing years of research on AI without releasing anything in such a disputable state as ChatGPT, which Microsoft now wants to connect to Bing search somehow. AI tools will probably never fully work out of the box without humans learning how to operate them, but then again we can see how far photography has come, from a nerdy experimental apparatus to "Siri, take a picture of my dog!"

Maybe it should be up to the end users to make their intentions clear. We don't even manage to communicate with other people without misunderstanding, so how are we supposed to make machines do better? But like you said, it must be up to developers to restrict machines to prevent them from doing harm and spreading misinformation, clichés, hate speech and so on, probably both by choosing the training material and by coming up with much better algorithms, as the whole idea of deep learning is based on taking something old and replicating existing information.