On Artificial (Un)Intelligence

Yechiel Kalmenson on November 22, 2019

This post was originally published in Torah && Tech, the weekly newsletter I publish together with my good friend Ben Greenberg. To get the...

NaftoliOst

What a great discussion!
I remember learning the concept of "fire is essentially a derivative of an arrow" in yeshiva about 12 years ago, and that's what came to mind at the beginning of the article. I love your arguments for and against comparing that to AI.
The concept has so many diverse implications, especially when it comes to modern technology in halacha (think machine matza, a printed Sefer Torah, a Shabbos time-switch, etc.); it's a really fascinating topic.
This reminds me of the (seemingly simple) question "which of the categories of damages does a car crash fall under?" [a person who causes damage, an ox who gores, fire-as-a-derivative-of-an-arrow, fire-as-a-derivative-of-one's-possession, a pit in a public space...] and that's before we even get to self-driving cars!
Thanks for this great post

DrBearhands

You might argue that AI is different than an arrow in that once you deploy the AI, you “lose control” over it. The algorithms behind AI are a “black box,” even the programmers who programmed and trained the AI are unable to know why it makes the decisions it makes.

This would imply one cannot be held accountable for shooting many arrows, blindfolded, into a crowded area for perceived monetary gain, because the blindfold makes it impossible to aim.

I refuse to believe the company in question did not know its AI would be biased. It is common knowledge that AIs often are (they always are, but that fact hasn't sunk into the public's perception yet). It is also obvious that decisions made by this AI can do harm.

So:

  • it is known that an "object" can cause harm (arrows / credit decisions);
  • it is known that specific usage is likely to cause harm (shooting at a crowd / leaving decisions to an AI);
  • we deprive ourselves of some information (blindfold / "black box" models).

More generally, AIs (the ones that are currently hip and trendy) are just models of correlations. What this company is saying is essentially a variant of "but the numbers show women spend too much".
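
To make that concrete, here is a minimal sketch (mine, not the article's; the scenario, feature names, and numbers are all invented) of how a purely correlational model absorbs bias from its training history:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical applicants: income is the "legitimate" signal;
# group membership (0/1) ought to be irrelevant.
income = rng.normal(50.0, 15.0, n)
group = rng.integers(0, 2, n)

# Biased historical labels: past approvals penalized group 1
# regardless of income.
approved = (income - 10.0 * group + rng.normal(0.0, 5.0, n)) > 45.0

# Fit a plain logistic regression by gradient descent.
income_z = (income - income.mean()) / income.std()
X = np.column_stack([np.ones(n), income_z, group])
w = np.zeros(3)
for _ in range(5_000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - approved) / n

print("learned weights (intercept, income, group):", w)
# The weight on `group` comes out strongly negative: the model has
# "learned" the historical discrimination as if it were signal.
```

Nothing in the code mentions prejudice; the bias rides in entirely on the correlations in the historical data.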

Thomas H Jones II

To me it's less:

I refuse to believe the company in question did not know its AI would be biased. It is common knowledge that AIs often are (they always are, but that fact hasn't sunk into the public's perception yet). It is also obvious that decisions made by this AI can do harm.

Than it simply being irresponsible to assume that a "black box" is always going to do the right thing. If you initiate an ongoing process, service, etc., you should be monitoring the results it produces. Even if you don't understand why the results are occurring, you should be able to tell that they are negative and take action to stop them. Failure to monitor and act is, to me, what creates culpability.
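
As a sketch of what that monitoring could look like in practice (the function names and the 0.8 threshold are my own invention, loosely following the "four-fifths" rule of thumb), one could periodically compare outcome rates across groups in the live decision log, without ever opening the black box:

```python
from collections import defaultdict

def approval_rates_by_group(decisions):
    """decisions: iterable of (group_label, approved) pairs
    pulled from the live decision log."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def flag_disparate_impact(decisions, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold`
    times the best-off group's rate."""
    rates = approval_rates_by_group(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Example: reviewing a day's worth of logged decisions.
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
print(flag_disparate_impact(log))  # {'B': 0.333...}
```

Even a check this crude would surface the kind of skew discussed above; the culpability question then becomes whether anyone was looking.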

Thomas H Jones II

One of the things missing here is the ongoing action of the agent. Much of what's being compared are "once and done" deleterious actions. The problem with AIs isn't really "once and done". Indeed, the whole point of creating an AI is to create a long-running process that addresses a number of discrete problems.

In the case of someone using an agent for a single action, sure, the notions above apply. However, in the case of ongoing agency, you bear a responsibility for evaluating each of your agent's actions to ensure they were done in a way that reflects your values. If you use an agent over and over, it implies that you support the results produced by that agent.

Effectively, "once is an accident," but beyond that...

Yechiel Kalmenson

That is very true!

That's similar to a case in the Talmud: if someone's ox gored another ox, the owner of the ox has to pay half the damage, but that only applies the first and second time. From the third time onward the owner has to pay the entire damage, because by now it's known that the ox is aggressive, so the owner should have been more careful.

Elliott Bignell

What is the Talmudic position on damage caused by straying livestock? It seems to me that an AI, especially a robot, has many of the aspects of an animal. It is widely presumed to have agency but no moral understanding or liability. There must be a wealth of case law on injuries due to animals released deliberately and inadvertently.

Yechiel Kalmenson

Yes! I can't believe my thoughts didn't go in that direction!

Damage caused by livestock is a huge topic in the Talmud!

In general, people are responsible for ensuring that their animals do not do damage. How liable they are, though, depends on the type of damage, on the animal's intent (was it in anger, like an ox goring a rival ox? Was it for the animal's pleasure, like a goat eating up the neighbor's vegetable patch?), and on whether it's a repeat offense (you have a stronger responsibility to keep your ox in check if it's known to be aggressive and has gored a few times in the past).

In fact, we could probably have an entire discussion just trying to figure out what case would be the most analogous to AI 😊

Lorraine Lee

I tend to blame the problems inherent in surveillance capitalism not so much on immaturity or incompetence as on information asymmetry. There's something in the New Testament that speaks to my concern: "Woe unto you, lawyers! for ye have taken away the key of knowledge: ye entered not in yourselves, and them that were entering in ye hindered." Perhaps in those times it was the lawyers, but today perhaps it's anyone who's signed an NDA.

How extreme is the level of information asymmetry surrounding a typical transaction in the market economy as we know it? I contend that it’s the informational equivalent of shooting fish in a barrel. Modern websites and mobile apps are designed specifically to transmit signal (behavioral data and other actionable data) in one direction and noise (basically bloat) in the other. “Basic informational realities of the universe” aside, this “Maxwell’s Dæmon” approach to accumulating informational advantage seems to work. Maybe it doesn’t, and the businesses that are spending billions on the services of data brokers are basically buying snake oil. Then again, maybe we’re living in the best of all possible worlds.

At any rate, when it comes to individual economic decisions, it appears to me that there are some very high-bandwidth channels for accessing data that might be used for decision support. It also appears that the information landscape visible to individuals is largely controlled by business. Which price quotes get offered up for price comparison, for example, is a question of which vendors have an exclusivity deal with the “comparison shopping” website. One thing that does not exist is a way to run queries against the combined product offerings of the global economy.

Hopefully, reverse engineering is not a crime.