
Discussion on: What ethical considerations do AI and robotics raise?

Tea • Edited

You do have a great example right here, where Microsoft platformed a bot at scale for a few days - one bot which was by turns helpful, ignorant, deceptive, manipulative, and lacking common sense.
People will trust this bot, dismiss the bad answers they do catch as glitches, and overlook the less obvious biases and lies (this is already happening).
How would such a bot influence an election?
How would this bot respond to suicidal teenagers?

We already know what happens when we treat symptoms (quickly patching bad answers) instead of addressing root causes.

So the question here seems to be: who should be responsible for evaluating the potential for indirect harm, and for vetting these bots before they are put in everyone's hands?

Who will be liable when we find a strong correlation between text output from bots and resulting bodily harm (likely including many deaths), or between that output and political influence?

I wouldn't take this toooo seriously, but what I read is "it's only a chatbot, lol", as if words carried no weight and had no consequences.