On Artificial Un(intelligence)

Yechiel Kalmenson on November 22, 2019

This post was originally published in Torah && Tech, the weekly newsletter I publish together with my good friend Ben Greenberg.

You might argue that AI is different from an arrow in that once you deploy the AI, you “lose control” over it. The algorithms behind AI are a “black box”; even the programmers who built and trained the AI are unable to know why it makes the decisions it makes.

This would imply that one cannot be held accountable for shooting many arrows, blindfolded, into a crowded area for perceived monetary gain, because the blindfold makes it impossible to aim.

I refuse to believe the company in question did not know its AI would be biased. It is common knowledge that AIs often are (they always are, but that fact hasn't sunk into the public's perception yet). It is also obvious that decisions made by this AI can do harm.


  • It is known that an "object" can cause harm (arrows / credit decisions).
  • It is known that a specific usage is likely to cause harm (shooting at a crowd / leaving decisions to an AI).
  • We deprive ourselves of some information (a blindfold / "black box" models).

More generally, AIs (the kind that are currently hip and trendy) are just models of correlations. What this company is saying is essentially a variant of "but the numbers show women spend too much".
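To make that concrete, here is a minimal sketch (with entirely hypothetical data and group labels) of why "just a model of correlations" is no defense: a model that does nothing but record correlations will faithfully reproduce whatever skew its training data contains.

```python
# Illustrative sketch, hypothetical data: a "model" that is nothing but
# recorded correlations reproduces whatever bias its training data contains.

def train(records):
    """Learn the average approved credit limit per group -- pure correlation."""
    totals, counts = {}, {}
    for group, limit in records:
        totals[group] = totals.get(group, 0) + limit
        counts[group] = counts.get(group, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

def predict(model, group):
    return model[group]

# If historical decisions were already skewed, the "numbers" are skewed too.
history = [("men", 20000), ("men", 18000), ("women", 9000), ("women", 11000)]
model = train(history)

print(predict(model, "men"))    # 19000.0
print(predict(model, "women"))  # 10000.0 -- the old bias, faithfully learned
```

Nothing in the code "hates" anyone; the disparity comes entirely from the data it was handed, which is exactly why deploying such a model without scrutiny is a choice, not an accident.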


To me it's less:

> I refuse to believe the company in question did not know its AI would be biased. It is common knowledge that AIs often are (they always are, but that fact hasn't sunk into the public's perception yet). It is also obvious that decisions made by this AI can do harm.

Than it simply being irresponsible to assume that a "black box" is always going to do the right thing. If you initiate an ongoing process, service, etc., you should be monitoring the results it produces. Even if you don't understand why the results are occurring, you should be able to tell that they are negative and take action to stop them. Failure to monitor and act is, to me, what creates culpability.
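The monitoring described above doesn't require seeing inside the black box, only watching its outcomes. A rough sketch (hypothetical data, group labels, and threshold; the 0.8 cutoff echoes the informal "four-fifths rule" used in hiring audits):

```python
# Sketch of outcome monitoring for a black-box decision system.
# We only need the system's outputs, not its internals.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs emitted by the system."""
    approved, total = {}, {}
    for group, ok in decisions:
        total[group] = total.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / total[g] for g in total}

def disparity_alert(decisions, threshold=0.8):
    """Flag when the worst-off group's approval rate falls below
    `threshold` times the best-off group's rate."""
    rates = approval_rates(decisions)
    worst, best = min(rates.values()), max(rates.values())
    return best > 0 and worst / best < threshold

decisions = [("A", True)] * 8 + [("A", False)] * 2 + \
            [("B", True)] * 4 + [("B", False)] * 6
print(disparity_alert(decisions))  # True -- time to stop and investigate
```

A check this simple is exactly why "we couldn't have known" rings hollow: the negative results are measurable even when the reasons for them are not.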


What is the Talmudic position on damage caused by straying livestock? It seems to me that an AI, especially a robot, has many of the aspects of an animal: it is widely presumed to have agency but no moral understanding or liability. There must be a wealth of case law on injuries due to animals released deliberately and inadvertently.


Yes! I can't believe my thoughts didn't go in that direction!

Damage caused by livestock is a huge topic in the Talmud!

In general, people are responsible for ensuring that their animals do not do damage. How much they are liable, though, depends on the type of damage, on the animal's intent (was it in anger, like an ox goring a rival ox? Was it for the animal's pleasure, like a goat eating up the neighbor's vegetable patch?), and on whether it's a repeat offense (you have a stronger responsibility to keep your ox in check if it's known to be aggressive and has gored a few times in the past).

In fact, we could probably have an entire discussion just trying to figure out what case would be the most analogous to AI 😊


One of the things missing here is the ongoing action of the agent. Much of what it's being compared to are "one and done" deleterious actions. The problem with AIs isn't really "one and done". Indeed, the whole point of creating an AI is to create a long-running process to address a number of discrete problems.

In the case of someone using an agent for a single action, sure, the notions above apply. However, in the case of ongoing agency, you bear a responsibility for evaluating each of your agent's actions to ensure they were done in a way that reflects your values. If you use an agent over and over, it implies that you support the results produced by that agent.

Effectively, "once is an accident," but beyond that...


That is very true!

That's similar to a case in the Talmud: when someone's ox gores another ox, the owner has to pay half the damage, but that only applies to the first and second time. From the third time onwards the owner has to pay the entire damage, because by then it's known that the ox is aggressive, so the owner should have been more careful.
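The escalating-liability rule described above can even be written down as a function (a deliberate oversimplification for illustration; the actual halacha draws many more distinctions than this):

```python
# Simplified sketch of the Talmudic goring-ox rule described above:
# half damages for the first two incidents, full damages from the third on,
# because by then the owner is on notice that the ox is dangerous.

def ox_liability(prior_gorings, damage):
    """Return what the owner owes, given how many times the ox gored before."""
    if prior_gorings < 2:
        return damage / 2   # first or second incident: half damage
    return damage           # third incident onward: the owner was warned

print(ox_liability(0, 1000))  # 500.0
print(ox_liability(2, 1000))  # 1000
```

The structure maps neatly onto the earlier point about AI: liability grows with what the owner can be presumed to know about the agent's past behavior.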


I tend to blame the problems inherent in surveillance capitalism not so much on immaturity or incompetence as on information asymmetry. There's something in the New Testament that speaks to my concern: "Woe unto you, lawyers! for ye have taken away the key of knowledge: ye entered not in yourselves, and them that were entering in ye hindered." Perhaps in those times it was the lawyers, but today perhaps it's anyone who's signed an NDA.

How extreme is the level of information asymmetry surrounding a typical transaction in the market economy as we know it? I contend that it’s the informational equivalent of shooting fish in a barrel. Modern websites and mobile apps are designed specifically to transmit signal (behavioral data and other actionable data) in one direction and noise (basically bloat) in the other.

“Basic informational realities of the universe” aside, this “Maxwell’s Dæmon” approach to the accumulation of informational advantage seems to work. Maybe it doesn’t, and the businesses spending billions on the services of data brokers are basically buying snake oil. Then again, maybe we’re living in the best of all possible worlds.

At any rate, when it comes to individual economic decisions, it appears to me that there are some very high-bandwidth channels for accessing data that might be used for decision support. It also appears that the information landscape visible to individuals is largely controlled by business. Which price quotes get offered up for price comparison, for example, is a question of which vendors have an exclusivity deal with the “comparison shopping” website. One thing that does not exist is a way to run queries against the combined product offerings of the global economy.

Hopefully, reverse engineering is not a crime.


If an AI model shows "bias" (I absolutely hate that word) towards a specific group of people, you cannot automatically call it "biased". As far as I can tell, such a situation can occur for one of two reasons:

1. The model has not been given enough input to be able to make the proper connections that, for example, people of different colors are to be treated the same. Much like the Tesla car issue some time ago.
2. The model has been trained correctly, and it is actually correct that some human characteristic is detrimental to its decision-making process.

As much as people these days like to think that we are defined only by our actions, it simply does not hold true in some cases, and it is folly to pretend otherwise just because it does not suit our views.
