
re: On Artificial Un(intelligence)


You might argue that AI is different from an arrow in that once you deploy the AI, you "lose control" over it. The algorithms behind AI are a "black box": even the programmers who built and trained the AI cannot know why it makes the decisions it makes.

This would imply one cannot be held accountable for shooting many arrows, blindfolded, into a crowded area for perceived monetary gain, because the blindfold makes it impossible to aim.

I refuse to believe the company in question did not know its AI would be biased. It is common knowledge that AIs often are (they always are, but that fact hasn't sunk into the public's perception yet). It is also obvious that decisions made by this AI can do harm.

So:

  • it is known an "object" can cause harm (arrows / credit decisions);
  • it is known that specific usage is likely to cause harm (shooting at a crowd / leaving decisions to an AI);
  • we deprive ourselves of some information (a blindfold / "black box" models).

More generally, AIs (of the kind that are currently hip and trendy) are just models of correlations. What this company is saying is essentially a variant of "but the numbers show women spend too much".
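To illustrate the "models of correlations" point, here is a minimal sketch with entirely hypothetical data: a "model" that does nothing but average past outcomes per feature value will reproduce whatever bias is baked into its training history.

```python
# Toy sketch (hypothetical data and names): a "model" that only learns
# correlations from biased historical decisions reproduces that bias.
from collections import defaultdict

# Hypothetical historical credit decisions, already biased by past reviewers:
# at identical incomes, women received lower limits.
history = [
    {"gender": "F", "income": 80_000, "limit": 5_000},
    {"gender": "M", "income": 80_000, "limit": 10_000},
    {"gender": "F", "income": 60_000, "limit": 4_000},
    {"gender": "M", "income": 60_000, "limit": 8_000},
]

def train(records, feature):
    """'Train' by averaging past outcomes per feature value -- pure correlation."""
    totals, counts = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[feature]] += r["limit"]
        counts[r[feature]] += 1
    return {k: totals[k] / counts[k] for k in totals}

model = train(history, "gender")
print(model)  # {'F': 4500.0, 'M': 9000.0} -- the historical bias, faithfully encoded
```

Nothing in the code "intends" to discriminate; the bias is simply a correlation in the data, which is exactly what such a model is built to capture.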


To me it's less:

I refuse to believe the company in question did not know its AI would be biased. It is common knowledge that AIs often are (they always are, but that fact hasn't sunk into the public's perception yet). It is also obvious that decisions made by this AI can do harm.

Than that it is simply irresponsible to assume a "black box" will always do the right thing. If you initiate an ongoing process, service, etc., you should be monitoring the results it produces. Even if you don't understand why the results occur, you should be able to tell that they are negative and act to stop them. Failure to monitor and act is, to me, what creates culpability.
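That kind of monitoring doesn't require opening the black box. A hedged sketch (hypothetical field names and threshold) of auditing only the model's outputs for disparate outcomes across groups:

```python
# Hypothetical monitoring sketch: treat the model as a black box and
# audit its *outputs* -- flag when one group's average outcome diverges
# too far from another's (a crude disparate-impact check).
from statistics import mean

def audit(decisions, group_key, outcome_key, max_ratio=1.25):
    """Return per-group averages and whether the spread exceeds max_ratio."""
    by_group = {}
    for d in decisions:
        by_group.setdefault(d[group_key], []).append(d[outcome_key])
    averages = {g: mean(v) for g, v in by_group.items()}
    hi, lo = max(averages.values()), min(averages.values())
    return averages, (hi / lo) > max_ratio  # True => stop and investigate

# Hypothetical sample of the model's recent decisions:
decisions = [
    {"group": "F", "limit": 4_500},
    {"group": "M", "limit": 9_000},
]
averages, flagged = audit(decisions, "group", "limit")
print(averages, flagged)  # flagged is True: results are skewed 2:1
```

The point is that "we can't explain the model" is no defense: a check like this needs no access to the model's internals at all, only to the decisions it actually made.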
