Yechiel Kalmenson

Originally published at blog.yechiel.me

On Artificial (Un)Intelligence

This post was originally published in Torah && Tech, the weekly newsletter I publish together with my good friend Ben Greenberg. To get the weekly issue delivered straight to your inbox, click here.


This week’s T&&T will take a different format than usual. Where most weeks we introduce a point from the week’s Torah portion and try to draw a lesson from it for our tech lives, this week I thought I would share some thoughts I had.

It will be more of a “stream of consciousness” thing and will probably read more like a Talmudic debate than a sermon. You might end up with more questions than answers (and that’s a good thing). The intent is to start a conversation. If you would like to share your thoughts on the issues raised, please feel free to reach out to Ben or me; we’d love to talk to you!

-Yechiel

[Image: a computer screen with scary green letters running across it]

Recently, an uproar erupted when a certain company in the business of issuing credit cards was discovered to have routinely extended less credit (sometimes 20x less) to women than to men in the same financial circumstances.

When called out on their discriminatory practices, the company defended itself by saying that all of the decisions were made by an algorithm and could therefore not be biased.

In this case, the effect of the biased algorithm was financial. Yet, algorithms have been called upon to make much more serious decisions, such as who should be let out on bail and for how much, and how healthcare should be administered; these are decisions with potential life-and-death implications.

A while ago, I saw a halachic discussion on whether we can hold an Artificial Intelligence (AI) liable for its actions.

Currently, that conversation is purely theoretical, as it would require AI to have an understanding of right and wrong that is far beyond anything we have at this time. But recent events did get me thinking about a related question: if someone programs an AI to make some decision, and the AI causes harm, to what extent is the one who deployed the AI liable for its actions?

Put in simpler terms, is saying “it wasn’t me, that decision was made by the algorithm” a valid defense?

Finding the Torah view on this question is understandably challenging; the Torah, after all, doesn’t speak of computers. We will have to get creative and see if we can find an analogous situation.

One approach we might take is to consider the AI as a Shliach, or an agent, of the person who deployed it.

In Halachah, a person can appoint a Shliach to do an action on their behalf, and the activity will be attributed to the appointer (the Meshaleach). For example, if you appoint someone to sell something on your behalf, the sale is attributed to you as if you executed the transaction.

What if the person appointed the Shliach to do something wrong (e.g., to steal something)?

In such a case, the Talmud rules that אין שליח לדבר עבירה (the concept of a Shliach doesn’t apply where sin is involved).

In other words, the Shliach is expected to refuse the problematic job, and if they go ahead with it anyway, they are liable for committing the sin.

Trying to apply this rule to AI, though, leads to some problems. The reasoning behind the ruling that Shlichut doesn’t apply when sin is involved is that the Shliach is expected to use their moral judgment and refuse the Shlichut. That would require that the Shliach have a sense and knowledge of right and wrong, as well as the autonomy to make their own decisions, two things AI is nowhere near achieving, as mentioned earlier.

It would seem, then, that using AI is more like using a tool. Just as a person who kills someone with an arrow can’t defend themselves by saying “it wasn’t me, it was the arrow,” using an AI might be the same thing.

You might argue that AI is different from an arrow in that once you deploy the AI, you “lose control” over it. The algorithms behind AI are a “black box”; even the programmers who programmed and trained the AI are unable to know why it makes the decisions it makes.

So unlike an arrow where there is a direct causal relationship between the act of shooting the arrow and the victim getting hurt, the causal link in AI is not so clear-cut.

But then again, it seems like the Talmud discusses a case that might be analogous here.

Regarding a case where a person lit a fire on their property and the fire got out of control and damaged a neighboring property, the Talmud says the following: “We have learned that Rabbi Yochanan said: [he is liable for] his fire just as [he is liable for] his arrow.”

A closer look at the reasoning behind Rabbi Yochanan’s ruling, however, reveals a crucial difference. The reason you are liable for your fire spreading is that fire spreading is a predictable consequence of lighting a fire. If your fire spread due to unusually strong wind, for example, then you would not be liable because the spread of the fire could not have been predicted.

One can argue that the fact that the AI made its own decision, one that could not be predicted even by those who programmed the AI and wrote the algorithms behind it, means that the AI has some sort of agency here. Maybe not enough agency to hold the AI liable, but perhaps just enough to exculpate those who deployed it (as long as they are unaware that the AI is making faulty decisions).

Is there something between the full agency of a Shliach and the complete lack of agency of a tool/fire?

Let’s look at another passage:

“One who sends fire in the hands of a child or someone who is mentally impaired is not liable by the laws of man but is liable by the laws of Heaven.”

The idea that someone can be “not liable by the laws of man but liable by the laws of Heaven” is used often in the Talmud to refer to actions that are technically legal, but still unethical. So while the court can’t prosecute a person for a fire started by a child, handing the fire off that way is still morally and ethically wrong.

So perhaps that is how we can classify AI? Like a child who has enough agency to make decisions, but not enough to distinguish right from wrong? Are companies hiding behind black-box algorithms technically legal, but morally questionable?

As I said in the beginning, I don’t know the answer to these questions, but I do hope we can start a discussion, because the days when such questions were the realm of theoretical philosophers are coming to an end faster than we think!

Top comments (8)

NaftoliOst

What a great discussion!
I remember learning the concept of "fire is essentially a derivative of an arrow" in yeshiva about 12 years ago, and that's what came to mind at the beginning of the article. I love your arguments for and against comparing that to AI.
The concept has so many diverse implications, especially when it comes to modern technology in halacha (think machine matza, printed Sefer Torah, shabbos time-switch, etc.); it's a really fascinating topic.
This reminds me of the (seemingly simple) question "which of the categories of damages does having a car crash fall under?" [a person who causes damage, an ox who gores, fire-as-a-derivative-of-an-arrow, fire-as-a-derivative-of-one's-possession, a pit in a public space...] and that's before we get on to self-driving cars!
Thanks for this great post

DrBearhands

You might argue that AI is different from an arrow in that once you deploy the AI, you “lose control” over it. The algorithms behind AI are a “black box”; even the programmers who programmed and trained the AI are unable to know why it makes the decisions it makes.

This would imply one cannot be held accountable for shooting many arrows, blindfolded, at a crowded area, for perceived monetary gain, because the blindfold makes it impossible to aim.

I refuse to believe the company in question did not know its AI would be biased. It is common knowledge that AIs often are (they always are, but that fact hasn't sunk into the public's perception yet). It is also obvious that decisions made by this AI can do harm.

So:

  • it is known an "object" can cause harm (arrows / credit decisions).
  • it is known that specific usage is likely to cause harm (shooting at a crowd / leaving decisions to an AI)
  • we deprive ourselves of some information (blindfold / "black box" models)

More generally, AIs (that are currently hip and trendy) are just models of correlations. What this company is saying is essentially a variant of "but numbers show women spend too much".
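
To make the "models of correlations" point concrete, here is a minimal toy sketch (assuming Python with numpy and scikit-learn; the "shopping score" feature, the numbers, and the bias mechanism are all invented for illustration and have nothing to do with the actual company's system). The model never receives gender as an input, yet it reproduces the gender gap baked into its biased training labels through a correlated proxy feature:

```python
# Toy sketch: a model of correlations reproducing bias it was never
# explicitly given. All features and numbers are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants. Gender is never handed to the model directly...
is_woman = rng.integers(0, 2, size=n)

# ...but a hypothetical "shopping score" correlates with gender,
# and the historical approval labels were themselves biased against women.
shopping_score = rng.normal(loc=is_woman.astype(float), scale=0.5)
income = rng.normal(loc=50.0, scale=10.0, size=n)  # in thousands

historically_approved = (
    (income > 48.0) & ~((is_woman == 1) & (rng.random(n) < 0.5))
).astype(int)

# Gender is deliberately excluded from the feature matrix.
X = np.column_stack([shopping_score, income])
model = LogisticRegression(max_iter=1000).fit(X, historically_approved)

preds = model.predict(X)
print("approval rate, men:  ", preds[is_woman == 0].mean())
print("approval rate, women:", preds[is_woman == 1].mean())
# The gap persists: the model learned the bias through the correlated proxy.
```

Dropping the protected attribute from the inputs doesn't remove the bias; it just hides it behind whatever proxies correlate with it.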

Thomas H Jones II

To me it's less:

I refuse to believe the company in question did not know its AI would be biased. It is common knowledge that AIs often are (they always are, but that fact hasn't sunk into the public's perception yet). It is also obvious that decisions made by this AI can do harm.

Than it simply being irresponsible to assume that a "black box" is always going to do the right thing. If you initiate an ongoing process, service, etc., you should be monitoring the results it produces. Even if you don't understand why results are occurring, you should be able to tell that the results are negative and take action to stop them. Failure to monitor and act is, to me, what creates culpability.

Thomas H Jones II

One of the things missing here is the ongoing actions of the agent. Much of what it's being compared to are "once and done" deleterious actions. The problem with AIs isn't really "one and done". Indeed, the whole point of creating an AI is to create a long-running process to address a number of discrete problems.

In the case of someone using an agent for a single action, sure, the notions above apply. However, in the case of ongoing agency, you bear a responsibility for evaluating each of your agent's actions to ensure they were done in a way that reflects your values. If you use an agent over and over, it implies that you endorse the results produced by that agent.

Effectively, "once is an accident," but beyond that...

Yechiel Kalmenson

That is very true!

That's similar to a case in the Talmud where someone's ox gored another ox: the owner of the ox has to pay half the damage, but that only applies the first and second times. From the third time onwards, the owner has to pay the entire damage, because by then it's known that the ox is aggressive, so the owner should have been more careful.

Elliott Bignell

What is the Talmudic position on damage caused by straying livestock? It seems to me that an AI, especially a robot, has many of the aspects of an animal. As seen by people, it is widely presumed to have agency but no moral understanding or liability. There must be a wealth of case law on injuries due to animals released deliberately and inadvertently.

Yechiel Kalmenson

Yes! I can't believe my thoughts didn't go in that direction!

Damage caused by livestock is a huge topic in the Talmud!

In general, people are responsible for ensuring that their animals do not cause damage. How much they are liable for, though, depends on the type of damage, the animal's intent (was it in anger, like an ox goring a rival ox? Was it for the animal's pleasure, like a goat eating up the neighbor's vegetable patch?), and whether it's a repeat offense (you have a stronger responsibility to keep your ox in check if it's known to be aggressive and has gored a few times in the past).

In fact, we could probably have an entire discussion just trying to figure out what case would be the most analogous to AI 😊

Lorraine Lee

I tend to blame the problems inherent in surveillance capitalism not so much on immaturity or incompetence as on information asymmetry. There's something in the New Testament that speaks to my concern: "Woe unto you, lawyers! for ye have taken away the key of knowledge: ye entered not in yourselves, and them that were entering in ye hindered." Perhaps in those times it was the lawyers, but today perhaps it's anyone who's signed an NDA.

How extreme is the level of information asymmetry surrounding a typical transaction in the market economy as we know it? I contend that it’s the informational equivalent of shooting fish in a barrel. Modern websites and mobile apps are designed specifically to transmit signal (behavioral data and other actionable data) in one direction and noise (basically bloat) in the other. “Basic informational realities of the universe” aside, this “Maxwell’s Dæmon” approach to accumulating informational advantage seems to work. Maybe it doesn’t, and the businesses who are spending billions on the services of data brokers are basically buying snake oil. Then again, maybe we’re living in the best of all possible worlds.

At any rate, when it comes to individual economic decisions, it appears to me that there are some very high-bandwidth channels for accessing data that might be used for decision support. It also appears that the information landscape visible to individuals is largely controlled by business. Which price quotes get offered up for price comparison, for example, is a question of which vendors have an exclusivity deal with the “comparison shopping” website. One thing that does not exist is a way to run queries against the combined product offerings of the global economy.

Hopefully, reverse engineering is not a crime.