BekahHW

What ethical considerations do AI and robotics raise?

With the growth we've seen in technology in recent years, it can be hard to remember that, as exciting as it is, there are important ethical considerations to take into account. What moral and ethical considerations do artificial intelligence and robotics raise? 🤖

Top comments (10)

spO0q

I read that Bard will probably include sources in its results, unlike ChatGPT.

Despite the disclaimers saying AI tools are not reliable for now, many people will take answers at face value. Even AI-generated code might work but contain many glitches, and sometimes even completely wrong approaches.

For example, if you don't craft your prompts carefully, the AI will suggest things that simply aren't available on your system, even if you mention your setup.

The other aspect is plagiarism and licensing, as the model includes various articles, books, and other content without providing useful references to what "inspired" it, like we do as authors.

Rolf Streefkerk

We see the issue of trust across many forms of information sharing.

Wikipedia is a prime example: one of its original founders has distanced himself from the site because of the ideological bias that has crept into its articles.

Unfortunately, it's a very hard problem to solve, but source transparency is at least a step in the right direction.

spO0q

Yes. The glut of information can generate lots of biased articles, and we know what happens with a copy of a copy of a copy: ultimately, it's a dead end.

It would be nice to have at least the references, because the algorithm or the model in itself doesn't really tell you anything and can even be open source.

What matters most is the set of data and the filters applied.

Anthony Fung

I know of two that seem to pop up regularly.

  • AI is only as good as its training data. Some countries have laws that mean any material can be scraped for use in a training set. While AI-generated art/music can't collect royalties, the artists don't get compensated either.

  • The classic trolley problem. Imagine an AI car is driving along and suddenly someone rushes into the road for some reason (maybe to collect something they dropped, or to save their child who ran into the road). Does the car continue, running that person over? Or does it swerve into a large crowd of people on the pavement/sidewalk? Or does it hit the emergency brakes, with a high risk of injury or worse to the passenger(s)?

Red Ochsenbein (he/him)

Honestly, I think the trolley problem is not a problem to be solved by self-driving cars. One thing a self-driving car cannot do is 'do nothing'. So it might react based on the rules it already has (like not driving into people, preventing accidents, staying on the road, etc.) and try to find a way within those boundaries - pretty much the same way a human would... and maybe find a solution nobody thought of in the process. If it does, great. If not, well, a human would probably not have a solution either.

Anthony Fung

True - it's not a problem to be solved 'by' the car itself. However, the car will do something based on at least one of two things:

  1. The algorithms coded into it.
  2. The training data given to it to 'learn'.

In either case, someone has to decide beforehand what the car should do in such situations - the car's software programmers, the head of the company that designs them, or potentially even government guidelines. Someone designs or influences the car's behaviour long before it ever reaches the hypothetical critical situation, and the car will behave according to this pre-planned logic.

This is in contrast to a human driver, who would most likely be in panic mode, or at the very least not thinking clearly. Either way, the outcome would not be the result of a well-thought-out plan.

If the programming were made to cause the 'least' damage, the car might well 'do nothing': applying emergency brakes with insufficient braking distance could lead to more people in total being injured.

In any case, the ethics problem I am highlighting is that of the designer of the car, who pre-instructs the car on what to do in such situations - either through coded algorithms, or the training data that influences outcomes.

Of course, there is another solution: install upward thrusters on the car, and it can jump over any dangers :)

Red Ochsenbein (he/him)

The alignment problem. Do algorithms actually pursue the goals we tried to define? Are we even able to prevent baking our own biases into those machines?

J3ffJessie

My main question is: how do we block AI from scraping certain code? If I have one or more repos that I don't want AI to have access to, how do I prevent it? How would I even know if it had been scraped? Where is the protection for code creators from having their property essentially stolen, even if it's held privately or protected by copyright?
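
There's no complete answer to that today. One partial measure, if you serve your code yourself (e.g. a self-hosted Git server or a personal site), is a robots.txt that asks known AI/dataset crawlers not to index it. A minimal sketch - the user-agent names CCBot (Common Crawl) and GPTBot (OpenAI) are those crawlers' published identifiers, and compliance is entirely voluntary, so this is a request rather than enforcement:

```
# robots.txt at the web root of a self-hosted code server.
# Asks known AI/dataset crawlers to skip everything.
# Note: well-behaved crawlers honor this; nothing forces them to.

User-agent: CCBot    # Common Crawl, widely used in training sets
Disallow: /

User-agent: GPTBot   # OpenAI's web crawler
Disallow: /
```

For private repos on a hosting platform, crawlers can't reach them at all; the question there is what the host's own terms of service let it do with your code, which no file you commit can change.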

Tea • Edited

You do have a great example right here, where Microsoft platformed a bot at scale for a few days - a bot which was by turns helpful, ignorant, deceptive, manipulative, and lacking common sense.
People will trust this bot, dismiss bad answers as glitches when detected, and overlook less obvious biases and lies (already happening).
How would such a bot influence an election?
How would this bot respond to suicidal teenagers?

We already know what happens when symptoms are treated (quickly patching bad answers) instead of root causes.

So, the question here seems to be, who should be responsible for evaluating the potential for indirect harm, and vetting these bots before they are put in everyone's hands?

Who will be liable when we find a strong correlation between text output from bots and resulting bodily harm (likely including many deaths) and political influence?

I wouldn't take this too seriously, but what I read is "it's only a chatbot, lol", as if words had no weight and no consequences.

Sherry Day

Certainly replacing jobs is an ethical consideration.