DEV Community

Discussion on: Google Duplex - The Conversation about Ethics?

Peter Kim Frank

An AI / assistant accepting voice as an input seems quite acceptable. It should have a "trigger word" (e.g., "Alexa" or "Hey Google"), and otherwise clear its memory of the sounds around it. We've gotten pretty

Voice as an output is way more ethically grey in my mind. I would agree with @scottishross that a disclaimer would be appropriate in these situations.

What happens when the AI gets confused and the conversation leaves its predetermined bounds? Without a disclaimer, things could get very strange.

rhymes • Edited

I'm thinking about a pseudo-philosophical question:

Let's say that tomorrow we have Duplex on our phones and we all get used to robots calling to book appointments. The question is: why should robots be explicitly programmed NOT to be recognisable as robots? Why are we trying so desperately to trick our brains into thinking we're engaging with a human being, instead of just developing super advanced robots that we all know are robots and accept as such?

I don't have the answer, just the question :D

A few updates:

Should our machines sound human?

Also, Zeynep Tufekci's thread here is worth a read:

And finally, Google confirmed they're going to have these bots identify themselves as bots:

it seems as if Google is taking extra steps to assure the public that it's taking a stance of transparency following the online outcry. That includes making sure that Duplex will make itself "appropriately identified" in the future, for the benefit of all parties involved.

from Google now says controversial AI voice calling system will identify itself to humans