I know of two that seem to pop up regularly.

1. AI is only as good as its training data. Some countries have laws that allow any material to be scraped for use in training sets, and while AI-generated art/music can't collect royalties, the artists whose work was scraped don't get compensated either.
2. The classic trolley problem. Imagine an AI car is driving along when someone suddenly rushes into the road (maybe to collect something they dropped, or to save a child who ran out ahead of them). Does the car continue, running that person over? Does it swerve into a large crowd on the pavement/sidewalk? Or does it slam on the emergency brakes, with a high risk of injury or worse to the passenger(s)?
Honestly, I think the trolley problem is not a problem to be solved by self-driving cars. One thing a moving car cannot do is 'do nothing'. So it might react based on the rules it already has (don't drive into people, prevent accidents, stay on the road, etc.) and try to find a way out within those boundaries. Pretty much the same way a human would... and maybe, by doing so, find a solution nobody thought of. If it does, great. If not, well, a human probably wouldn't have had a solution either.
True - it's not a problem to be solved 'by' the car itself. However, the car will do something based on at least one of two things:
1. The algorithms coded into it.
2. The training data it was given to 'learn' from.
In either case, someone has to decide beforehand what the car should do in such situations - the car's software programmers, the head of the company that designs them, or potentially even government guidelines. Someone designs or influences the car's behaviour long before it ever reaches the hypothetical critical situation, and the car will behave according to this pre-planned logic.
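To make that concrete, here's a minimal, purely hypothetical sketch of what pre-planned, rule-based logic might look like. Every rule, action name, and risk number below is invented for illustration; real autonomous-driving stacks are vastly more complex, and much of their behaviour is learned from data rather than hand-coded:

```python
# A purely hypothetical, minimal sketch of hand-coded decision rules.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    leaves_road: bool
    endangers_pedestrians: bool
    risk_to_passengers: float  # 0.0 (no risk) .. 1.0 (severe risk)

def is_permitted(action: Action) -> bool:
    """The 'rules it already has', fixed long before any emergency."""
    if action.endangers_pedestrians:  # never drive into people
        return False
    if action.leaves_road:            # stay on the road
        return False
    return True

def choose_action(candidates: list[Action]) -> Action:
    permitted = [a for a in candidates if is_permitted(a)]
    pool = permitted or candidates  # if every option breaks a rule, fall back
    # Preferring the lowest passenger risk is itself a value judgement
    # that somebody baked in ahead of time.
    return min(pool, key=lambda a: a.risk_to_passengers)

options = [
    Action("continue", leaves_road=False, endangers_pedestrians=True, risk_to_passengers=0.0),
    Action("swerve_to_pavement", leaves_road=True, endangers_pedestrians=True, risk_to_passengers=0.2),
    Action("emergency_brake", leaves_road=False, endangers_pedestrians=False, risk_to_passengers=0.6),
]
print(choose_action(options).name)  # -> emergency_brake
```

The point isn't the code itself: it's that every branch, filter, and tie-break in it encodes a moral choice someone made in advance.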
This is in contrast to a human driver, who would most likely be in panic mode, or at the very least not thinking clearly. Either way, the outcome would hardly be the result of a well thought-out plan.
If the programming were designed to cause the 'least' damage, the car might well end up 'doing nothing': with insufficient braking distance, slamming on the emergency brakes could injure more people in total than simply continuing.
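As a toy illustration of that 'least damage' objective: suppose each candidate action had an estimated expected number of injuries attached. The numbers below are completely made up, and only serve to show how hard braking can score worse than 'doing nothing':

```python
# Invented expected-injury estimates for each option; the point is only
# that, with insufficient braking distance, hard braking can rank worse
# than continuing under a 'least total damage' objective.
def least_damage(expected_injuries: dict[str, float]) -> str:
    # Pick the action with the lowest expected number of injuries.
    return min(expected_injuries, key=expected_injuries.get)

estimates = {
    "continue": 1.0,            # the person in the road is hit
    "swerve_to_pavement": 4.0,  # several people in the crowd are hit
    "emergency_brake": 1.8,     # pedestrian still hit, plus injured passengers
}
print(least_damage(estimates))  # -> continue
```

Whoever chooses those estimates and that objective function has, in effect, answered the trolley problem in advance.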
In any case, the ethics problem I'm highlighting is that of the car's designer, who pre-instructs the car on what to do in such situations - whether through coded algorithms or through the training data that shapes the outcomes.
Of course, there is another solution: install upward thrusters on the car, and it can jump over any dangers :)