With the growth we've seen in technology in recent years, it can be easy to forget that, exciting as it all is, there are important ethical considerations to take into account. What moral and ethical considerations do artificial intelligence and robotics raise? 🤖
We see the issue of trust across many forms of information sharing. Wikipedia is a prime example of a website that has drifted toward a more ideological standpoint - to the point that one of its original founders has distanced himself from it because of the bias that has crept into articles on the site.
Unfortunately it's a very hard problem to solve, but source transparency is at least a step in the right direction.
I know of two that seem to pop up regularly.
AI is only as good as its training data. Some countries have laws that allow almost any material to be scraped for use in a training set. And while AI-generated art/music can't collect royalties, the artists whose work was scraped don't get compensated either.
The classic trolley problem. Imagine an AI car is driving along and suddenly someone rushes into the road for some reason (maybe to collect something they dropped, or to save their child who ran into the road). Does the car continue, running that person over? Does it swerve into a large crowd of people on the pavement/sidewalk? Or does it hit the emergency brakes, with a high risk of injury, or worse, to the passenger(s)?
Honestly, I think the trolley problem is not a problem to be solved by self-driving cars. One thing a self-driving car cannot do is 'do nothing'. So it might react based on the rules it already has (like not driving into people, preventing accidents, staying on the road, etc.) and try to find a way within those boundaries. Pretty much the same way a human would... and maybe, by doing so, find a solution nobody thought of. If it does, great. If not, well, a human probably wouldn't have had a solution either.
True - it's not a problem to be solved 'by' the car itself. However, the car will do something based on at least one of two things: its coded algorithms, or the training data that shaped its behaviour.
In either case, someone has to decide what the car should do beforehand in such situations - the car's software programmers, the head of the company that designs them, or even potentially government guidelines. Someone designs or influences the car's behaviour long before it ever reaches the hypothetical critical situation, and the car will behave according to this pre-planned logic.
This is in contrast to a human driver, who would most likely be panicking, or at the very least not thinking clearly. Either way, the outcome would most likely not be the result of a well thought-out plan.
If the programming were made to cause the 'least' damage, the car might well 'do nothing': applying emergency brakes with insufficient braking distance could lead to more people in total being injured.
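The braking-distance point can be made concrete with a back-of-the-envelope calculation (the speed and friction values below are assumed, and reaction time is ignored):

```python
# Rough stopping distance: d = v^2 / (2 * mu * g), ignoring reaction time.
G = 9.81   # gravitational acceleration, m/s^2
MU = 0.7   # assumed tyre-road friction coefficient (dry asphalt)

def stopping_distance(speed_kmh: float) -> float:
    """Metres needed to brake from speed_kmh to a full stop."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v ** 2 / (2 * MU * G)

print(round(stopping_distance(50), 1))  # roughly 14 m from 50 km/h
```

Since the distance grows with the square of the speed, a pedestrian appearing much closer than that leaves braking alone with no chance of avoiding impact - which is exactly when the pre-planned logic has to pick among bad options.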
In any case, the ethics problem I am highlighting is that of the designer of the car, who pre-instructs the car on what to do in such situations - either through coded algorithms, or the training data that influences outcomes.
Of course, there is another solution: install upward thrusters on the car, and it can jump over any dangers :)
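The pre-planned, rule-based behaviour described in this thread could be sketched roughly like this (the rule names, penalty weights, and candidate manoeuvres are purely hypothetical, not anyone's actual system):

```python
# Hypothetical sketch: a pre-planned priority of rules, encoded as penalty
# weights, decides the car's behaviour long before any emergency occurs.
RULE_WEIGHTS = {
    "hits_pedestrian": 1000,  # worst outcome, weighted heaviest
    "harms_passenger": 100,
    "leaves_road": 10,
}

def penalty(action: dict) -> int:
    """Total penalty of an action, given which rules it violates and how often."""
    return sum(RULE_WEIGHTS[rule] * count
               for rule, count in action["violations"].items())

def choose_action(candidates: list) -> dict:
    """Pick the candidate manoeuvre that violates the pre-planned rules least."""
    return min(candidates, key=penalty)

candidates = [
    {"name": "continue",   "violations": {"hits_pedestrian": 1}},
    {"name": "swerve",     "violations": {"hits_pedestrian": 5, "leaves_road": 1}},
    {"name": "brake_hard", "violations": {"harms_passenger": 1}},
]
print(choose_action(candidates)["name"])  # -> brake_hard
```

The ethics question raised above lives entirely in the weights: whoever sets them has already decided, in advance, whose safety counts for how much.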
My main question is: how do we block AI from scraping certain code? If I have a repo, or multiple repos, that I don't want AI to have access to, how do I prevent it? How would I even know if it had been scraped? Where is the protection for code creators from having their property essentially stolen, even if it's held privately or protected by copyright and whatnot?
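For code hosted on the public web, one partial measure is a `robots.txt` file asking known AI crawlers not to fetch it. The user-agent tokens below are published crawler names (OpenAI's GPTBot, Common Crawl's CCBot, and Google-Extended), but compliance is entirely voluntary, and this does nothing for code that has already been scraped:

```
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Note this only helps for self-hosted code: on a hosting platform like GitHub, the platform controls `robots.txt`, so your options are largely limited to whatever licensing and opt-out settings the platform offers.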
You do have a great example right here, where Microsoft platformed a bot at scale for a few days - one bot which was by turns helpful, ignorant, deceptive, manipulative, and lacking in common sense.
People will trust this bot, dismiss bad answers as glitches when they are detected, and overlook the less obvious biases and lies (this is already happening).
How would such a bot influence an election?
How would this bot respond to suicidal teenagers?
We already know what happens when symptoms are treated (quickly patching bad answers) instead of root causes being addressed.
So, the question here seems to be, who should be responsible for evaluating the potential for indirect harm, and vetting these bots before they are put in everyone's hands?
Who will be liable when we find strong correlation between text output from bots and resulting bodily harm (likely including many deaths) and political influence?
I wouldn't take this toooo seriously, but what I read is "it's only a chatbot, lol" - as if words had no weight and no consequences.
The alignment problem. Do algorithms actually pursue the goal we tried to define? Are we even able to prevent baking our own biases into these machines?
Certainly replacing jobs is an ethical consideration.