Discussion on: Would you willingly participate in creating a product like these?

DrBearhands

I feel there are a number of misconceptions going on here. So an AI 101 is in order:

Supervised learning classifiers - which include most of the examples above - learn a relation between input and output features in a given data set. They don't invent that relation; it already exists in the data.
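
As a minimal sketch of what "learning an existing relation" means (scikit-learn is my assumption here, and the toy data is invented purely for illustration):

```python
# Minimal supervised-learning sketch. The data below is invented for the example.
from sklearn.linear_model import LogisticRegression

# Input features (X) and output labels (y): the relation is already in the data.
X = [[0.1], [0.4], [0.35], [0.8], [0.9], [0.75]]
y = [0, 0, 0, 1, 1, 1]  # labels track the single feature

clf = LogisticRegression()
clf.fit(X, y)                         # the classifier recovers the existing relation
print(clf.predict([[0.2], [0.85]]))   # expected: [0 1]
```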

Now, for the caveats:

It is possible to "overfit" a model, meaning it performs well on the data it was trained on but poorly on new data. You can detect and prevent this by reserving a subset of your data set to validate your results.
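
A sketch of that validation idea, again assuming scikit-learn; a large gap between training accuracy and held-out accuracy is the classic sign of overfitting:

```python
# Detecting overfitting with a held-out validation set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# An unconstrained decision tree can memorize its training set.
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

train_acc = clf.score(X_train, y_train)
val_acc = clf.score(X_val, y_val)
# If train_acc is much higher than val_acc, the model has overfit.
print(f"train accuracy: {train_acc:.2f}, validation accuracy: {val_acc:.2f}")
```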

The classifier will learn any bias that is present in your data set. This is a common and well-understood problem, and it has produced some interesting urban legends. It is not exclusive to AI but affects any data analysis. For instance, medical science is based mostly on research done on white, highly educated, young men, simply because (a) participants are sourced from students, (b) variables must be eliminated, so reducing variation in race/age/gender is actually good, and (c) men don't have menstrual cycles muddying the data. You can't blame scientists for this; they just can't find enough participants (privacy laws don't help here either). Bias exists in humans in spades, which is why you can't, for example, train an AI on past court rulings.
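
A toy demonstration of that point, with a deliberately biased, invented data set: the classifier faithfully reproduces whatever skew its sample contains.

```python
# A classifier reproduces bias baked into its sample. All data here is invented.
from sklearn.linear_model import LogisticRegression

# Feature 0 is a genuinely relevant signal; feature 1 is an irrelevant group
# attribute that, in this biased sample, happens to track the label perfectly.
X = [[1.0, 1], [0.9, 1], [0.8, 1], [0.1, 0], [0.2, 0], [0.0, 0]]
y = [1, 1, 1, 0, 0, 0]

clf = LogisticRegression().fit(X, y)

# Two individuals with the same relevant signal but different group attribute:
# the model's certainties differ, because the bias was in the data it saw.
print(clf.predict_proba([[0.5, 1], [0.5, 0]]))
```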

Most models (I can't think of any counterexamples right now) will give you a numerical certainty value rather than an enumerable answer.
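
Concretely (scikit-learn again, as an assumed example): most classifiers expose the underlying certainty, and the hard label is just a threshold over it.

```python
# The hard label is derived from a numerical certainty score.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression().fit(X, y)

print(clf.predict(X[:1]))        # enumerable answer, e.g. [0]
print(clf.predict_proba(X[:1]))  # underlying certainty, e.g. [[0.97 0.03]]
```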

Finally, models have recall and precision scores. In short, your model might simply not be very good.
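
For reference, a quick sketch of those two scores (the true/false positive counts below are invented for the example):

```python
# Precision and recall from a set of predictions.
# precision = TP / (TP + FP): of everything flagged, how much was right.
# recall    = TP / (TP + FN): of everything real, how much was found.
from sklearn.metrics import precision_score, recall_score

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

print(precision_score(y_true, y_pred))  # 0.75 (3 TP, 1 FP)
print(recall_score(y_true, y_pred))     # 0.75 (3 TP, 1 FN)
```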

Ethics

But let's suppose the classifiers are well made and have found a correct correlation.

There are still ethical concerns left:

  1. Correlation is not causation. Suppose the alt-right is right and skin color is correlated with criminal behavior, but, as with most things, the correlation isn't perfect (say even 95%). Should we judge the remaining 5% for something they have not done, because of a characteristic they cannot change? Of course not. More generally, you want to judge people based on the things that matter, not on whatever happens to be correlated with them.
  2. Some civil disobedience is necessary. Nobody likes terrorists and pedophiles, but without civil disobedience we wouldn't have a democracy. Technology, including AI, might make civil disobedience impossible if we aim for total security.
  3. One could argue that knowledge is power and that AI gives certain people too much of it. Consider, e.g., the persecution of homosexuals in certain countries.