An AI / Assistant accepting voice as an input seems quite acceptable. It should have a "trigger word" (e.g., "Alexa" or "Hey Google") and otherwise clear its memory of the sounds around it. We've gotten pretty comfortable with voice as an input already.
Voice as an output is way more ethically grey in my mind. I would agree with @scottishross that a disclaimer would be appropriate in these situations.
What happens when the AI gets confused and the conversation leaves its pre-determined bounds? Without a disclaimer, things could get very crazy.
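To make the trigger-word and disclaimer ideas above concrete, here is a minimal, purely illustrative Python sketch (every name, class, and threshold in it is hypothetical, not any real assistant's API): nothing is stored until a wake word is heard, every bot-initiated call opens with a disclosure, and the assistant hands off to a human once it drifts outside its pre-determined bounds.

```python
# Hypothetical sketch of the behaviour described above. All identifiers and
# thresholds are made up for illustration; this is not any real assistant's API.
from dataclasses import dataclass, field

TRIGGER_WORDS = {"alexa", "hey google"}
DISCLAIMER = "Hi, this is an automated assistant calling on behalf of a customer."

@dataclass
class Assistant:
    awake: bool = False
    transcript: list[str] = field(default_factory=list)  # only filled after the trigger word

    def hear(self, phrase: str) -> None:
        """Process one chunk of audio (represented as text for the sketch)."""
        if not self.awake:
            # Before the trigger word, nothing is remembered.
            if phrase.strip().lower() in TRIGGER_WORDS:
                self.awake = True
            return
        self.transcript.append(phrase)

    def start_outbound_call(self) -> str:
        """Every bot-initiated call leads with an explicit disclosure."""
        return DISCLAIMER

    def respond(self, phrase: str, confidence: float) -> str:
        """Answer within pre-determined bounds; bail out to a human otherwise."""
        if confidence < 0.6:  # arbitrary threshold for the sketch
            return "I'm an automated system and can't help with that. Transferring you to a person."
        return f"Noted: {phrase}"

if __name__ == "__main__":
    bot = Assistant()
    bot.hear("random background chatter")  # discarded, not stored
    bot.hear("hey google")                 # wakes the assistant
    bot.hear("book a table for two")       # now remembered
    print(bot.transcript)                  # ['book a table for two']
    print(bot.start_outbound_call())
    print(bot.respond("do you take walk-ins on holidays?", confidence=0.3))
```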
I'm thinking about a pseudo-philosophical question:
Let's say that tomorrow we have Duplex on our phones and we all get used to robots calling to book appointments. The question is: why should robots be explicitly programmed NOT to be recognisable as robots? Why are we trying so desperately to trick our brains into thinking we're engaging with a human being, instead of just developing super-advanced robots that we all know are robots and accept as such?
I don't have the answer, just the question :D
A few updates:
Should our machines sound human?
Also, Zeynep Tufekci's thread here is worth a read:
Google Assistant making calls pretending to be human not only without disclosing that it's a bot, but adding "ummm" and "aaah" to deceive the human on the other end with the room cheering it... horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing.
15:12, 9 May 2018
And finally, Google confirmed they're going to have these bots identify themselves as bots:
It seems as if Google is taking extra steps to assure the public that it's taking a stance of transparency following the online outcry. That includes making sure that Duplex will make itself "appropriately identified" in the future, for the benefit of all parties involved.
from "Google now says controversial AI voice calling system will identify itself to humans"