DEV Community

Omkar Ajnadkar
What will it do? What should it do?


Top comments (7)

Natalie de Weerd

This is one of my major concerns when it comes to self-driving cars.

Humans are hard-wired for self-preservation. We'll always try to protect ourselves; however, when a computer has to make that decision, if it's not coded to always protect its passengers first, then it's putting a value on human life. What makes my life worth more than a pedestrian's, or vice versa?

There's MIT's Moral Machine (moralmachine.mit.edu/) if you fancy taking part in a thought experiment.

rhymes

Interesting test. I took it, but it reminded me that this type of test might be impossible to implement in an AI. Social status, age, fitness: none of these is exactly discernible in a split second... And should they be implemented in an algorithm at all?

After all, if I were driving a vehicle in an imminent accident, in that instant of survival instinct I'm either swerving to try to save someone's life, hoping to save mine as well, or I'm not fast enough and going to hit them. No matter who they are.

What do you think?

Greg Ruminski

"After all, if I were driving a vehicle in an imminent accident, in that instant of survival instinct I'm either swerving to try to save someone's life, hoping to save mine as well, or I'm not fast enough and going to hit them. No matter who they are."

That's exactly what the computer should do. It can calculate almost instantly (compared to our long, long split second) the odds of saving both lives versus saving just the driver.
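The odds calculation Greg describes can be sketched as a toy expected-value comparison. Everything here is hypothetical — the maneuver names, probabilities, and survivor counts are made up purely to show the shape of the computation a controller might run, not any real vehicle's decision logic:

```python
def expected_survivors(outcomes):
    """outcomes: list of (probability, survivors) pairs for one maneuver."""
    return sum(p * s for p, s in outcomes)

# Hypothetical odds for an imminent collision with one pedestrian:
swerve = [(0.7, 2), (0.3, 0)]   # 70% both survive, 30% neither does
brake = [(0.9, 1), (0.1, 0)]    # 90% only the driver survives

# Pick the maneuver with the higher expected number of survivors.
best = max(("swerve", swerve), ("brake", brake),
           key=lambda m: expected_survivors(m[1]))
print(best[0], expected_survivors(best[1]))
```

With these made-up numbers, swerving wins (expected 1.4 survivors vs. 0.9) — which is exactly the kind of outcome the thread goes on to question: the arithmetic is trivial, but choosing the probabilities and what counts as a "survivor" is the hard ethical part.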

Natalie de Weerd

That's a good shout, to be fair... when humans react in those situations, they make split-second decisions based on instinct. I don't believe current hardware would be powerful enough to take in that amount of data and come to a conclusion quickly enough.

rhymes

It's even more complicated than that.

Let's say we have the hardware and bandwidth to do anything: how does a sensor determine whether the person it's likely to run over is a female doctor or a homeless person? Do we keep every living human being in a facial recognition DB? What if their faces aren't scannable from that distance or position...

Then, aside from the implications of having such a DB available to all car manufacturers, are we literally placing a point system on the value of each human life in the eyes of society? I know we already do; after all, the MIT test is about exactly that. But do we want to codify this in an algorithm?

What if the homeless person is a good person and the female doctor is wanted for murder :D ? What do you do then? You can see where I'm going with this...

Anyway, I'm glad I don't have to ask myself these questions for a living, or even remotely try to come up with answers.

Omkar Ajnadkar

Nice point there! I think it's really far more complicated than we can imagine!

rhymes

I don't know. I'm glad people far more knowledgeable than me in ethics and philosophy are working on this, and that I don't have to come up with an answer to this horrific choice.