Top comments (7)
This is one of my major concerns when it comes to self-driving cars.
Humans are hard-wired for self-preservation. We'll always try to protect ourselves. When a computer has to make that decision, however, if it's not coded to always protect its passengers first, then it's putting a value on human life. What makes my life worth more than a pedestrian's, or vice versa?
There is the moral machine by MIT: moralmachine.mit.edu/ if you fancy partaking in a thought experiment.
Interesting test. I took it, but it reminded me that this type of test might be impossible to implement in an AI. Social status, age, fitness: none of these are exactly discernible in a split second... And should they be implemented in an algorithm at all?
After all, if I were driving a vehicle in an imminent accident, in that instant of survival instinct, I'm either swerving to try to save someone's life while hoping to save mine as well, or I'm not fast enough and I'm going to hit them. No matter who they are.
What do you think?
"After all if I were driving a vehicle in an imminent accident, in that instant of survival instinct, I'm either swerving to try to save someone's life hoping to save mine as well or I'm not fast enough and going to hit them. No matter who they are."
That's exactly what the computer should do. It can calculate almost instantly (compared to our long, long split second) the odds of saving both lives vs. saving just the driver.
That's a good shout, to be fair... when humans react in those situations, they make split-second decisions based on instinct. I don't believe current hardware would be powerful enough to take in that amount of data and come to a conclusion quickly enough.
It's even more complicated than that.
Let's say we have the hardware and bandwidth to do anything: how does a sensor determine whether the person the car is likely going to run over is a female doctor or a homeless person? Do we keep every living human being in a facial recognition DB? What if their faces aren't scannable from that distance or position?
Then, aside from the implications of having such a DB available to all car manufacturers, are we literally placing a point system on the value of each human life in the eyes of society? I know we already do — after all, the MIT test is about exactly that — but do we want to codify it in an algorithm?
What if the homeless person is a good person and the female doctor is wanted for murder :D ? What do you do then? You can see where I'm going with this...
Anyway, I'm glad I don't have to ask myself these questions for a living, or even remotely try to come up with answers.
Nice point there! I think it's really far more complicated than we can imagine!
I don't know. I'm glad people far more knowledgeable than me in ethics and philosophy are working on this, and that I don't have to come up with an answer to this horrific choice.