Discussion on: AlphaGo: Observations about Machine Intelligence

Jilles van Gurp

I tend to think about ethics in terms of risk mitigation and ass coverage. Ethical reasons are great excuses to do nothing and avoid being liable for potentially harmful effects, but usually that just delays the inevitable. Self-driving cars causing accidents is not an ethical problem but a legal and practical challenge. The math is brutally simple: as soon as AI cars cause fewer traffic deaths than distracted, drunk, or otherwise incompetent human drivers, it's ethically the right thing to let them drive. Tesla's marketing material suggests they clearly believe that has already happened. The legalities, liability, and moral responsibility when the inevitable deaths occur with self-driving cars are still worth debating. But that's not a reason to stop working on self-driving cars. Rather the opposite.
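To make the "brutally simple math" concrete, here's a toy expected-value comparison in Python. All the rates and mileage figures below are placeholder assumptions for illustration, not measured data:

```python
# Toy sketch of the expected-deaths comparison. All numbers are
# hypothetical placeholders, not real statistics.

HUMAN_FATALITY_RATE = 1.3e-8  # deaths per vehicle-mile (assumed)
AV_FATALITY_RATE = 0.9e-8     # deaths per vehicle-mile (hypothetical)
ANNUAL_MILES = 3.2e12         # total vehicle-miles per year (assumed)

human_deaths = HUMAN_FATALITY_RATE * ANNUAL_MILES
av_deaths = AV_FATALITY_RATE * ANNUAL_MILES

print(f"Expected deaths, human drivers: {human_deaths:,.0f}")
print(f"Expected deaths, self-driving:  {av_deaths:,.0f}")
print(f"Lives saved per year:           {human_deaths - av_deaths:,.0f}")

# The ethical argument reduces to one inequality:
# deploy as soon as AV_FATALITY_RATE < HUMAN_FATALITY_RATE.
```

Everything else, liability, regulation, moral responsibility, is a question of how to deploy, not whether.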

Humans are funny when it comes to risk assessment. If everyone drove self-driving cars in their current state, there would probably be a massive reduction in traffic deaths, followed by a further rapid reduction as the few remaining accidents caused by bugs, glitches, and other issues got fixed. Most traffic deaths are caused by humans; fundamentally, self-driving cars are already quite safe. Yet we're stuck with overly conservative bureaucrats holding the industry back with their insistence on ass coverage, and a legal climate that leads vendors to prefer never being liable for anything because of the financial risk of class-action suits. So we're literally killing people by exposing them to human drivers. Is that ethical or stupid?