DEV Community

WebOccult Technologies

Six Misconceptions About Artificial Intelligence

Artificial intelligence is not easy to understand, and many myths and inaccuracies about it are widespread. However, whether you are a decision-maker in a company, a politician, an activist, or simply a consumer, AI will interfere more and more in your daily life, so it is essential to understand the technology and its challenges in order to make informed decisions. Here are six common misconceptions, among many others:

Machines learn on their own

It can give that impression. But on the one hand, machines are not yet at the stage where they decide their own scope, and on the other, there is always a considerable amount of human work upstream. Seasoned specialists formulate the problem, prepare the models, select appropriate training data sets, eliminate potential biases induced by that data, and so on. They also make the software evolve according to its performance. There is a lot of human brain time behind every AI model.
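The human steps above can be sketched with a deliberately tiny toy task (the problem, data, and classifier here are all hypothetical, chosen only to make each human decision visible):

```python
# 1. Humans formulate the problem: classify a part as "heavy" or "light" from its weight.
# 2. Humans prepare and label the training data (and decide what counts as each class).
training_data = [(1.2, "light"), (1.5, "light"), (3.8, "heavy"), (4.1, "heavy")]

# 3. The "learning" itself: the machine merely averages what humans gave it
#    (a nearest-mean classifier).
def train(data):
    means = {}
    for label in {lbl for _, lbl in data}:
        values = [x for x, lbl in data if lbl == label]
        means[label] = sum(values) / len(values)
    return means

def predict(means, x):
    # Pick the class whose mean is closest to the new measurement.
    return min(means, key=lambda label: abs(means[label] - x))

model = train(training_data)

# 4. Humans evaluate the result and decide whether it is good enough to deploy.
print(predict(model, 1.0))  # → light
print(predict(model, 4.0))  # → heavy
```

The only fully automatic step is the averaging; everything around it is human judgment.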

Machines demonstrate objectivity

Nothing is, of course, further from the truth. Machines and their software are human creations. In machine learning, objectivity depends primarily on the neutrality of the training data submitted to the model. Cognitive bias is practically inevitable, and the difficulty in preparing the data is to limit this bias as much as possible. It often happens that a model reproduces a confirmation bias inherited from its human creators: if we introduce biased data into a system, even unintentionally, we get biased results as output.
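A toy illustration of that last point, using entirely hypothetical historical decisions: a "model" that simply learns approval frequencies from past outcomes will faithfully echo whatever skew those outcomes contain.

```python
# Hypothetical past decisions: group_a was approved 3 times out of 4,
# group_b only 1 time out of 4. The data, not the code, carries the bias.
historical_decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def learn_approval_rate(decisions):
    # Learn each group's approval frequency from the historical record.
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [approved for g, approved in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = learn_approval_rate(historical_decisions)
# The "objective" model reproduces the skew in its training data.
print(model["group_a"], model["group_b"])  # → 0.75 0.25
```

Nothing in the code favors either group; the disparity comes entirely from the data it was handed.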

AI stands for machine learning

Almost all current applications of artificial intelligence do indeed fall under machine learning. But machine learning, based on the idea that machines can learn and adapt through experience, is just one tool of artificial intelligence. Perhaps one day we will discover new methods for solving problems where machine learning fails, for example, questions for which we do not have large amounts of qualified data. Artificial intelligence, on the other hand, refers to the more general idea of machines performing tasks in an "intelligent" way, that is, operating in ways that resemble human intelligence. That said, the concept of artificial intelligence has no commonly accepted definition, and its limits are unclear. Sometimes we should really talk about complex information processing or cognitive automation, but that would undoubtedly be less sexy.

AI will cut jobs

As with the automation and robotics of recent decades, it would be more accurate to say that artificial intelligence technologies will replace some jobs and transform others. They will profoundly change the work landscape, as previous industrial revolutions did, but probably not reduce the overall number of jobs. Just as robotization made it possible to eliminate repetitive manual tasks, artificial intelligence makes it possible to eliminate repetitive intellectual tasks and to work in new, more innovative ways. And as with robotization, an application based on AI can be more efficient at specific tasks than any human.

AI, not helpful in my business

And why would that be? Artificial intelligence can improve interactions with customers, analyze data faster, aid decision-making, generate early alerts on upcoming disruptions, and more. Why deprive yourself of it? It also has several practical applications in an industrial environment, particularly computer vision, which, for example, makes it possible to detect a defective part with far more efficiency and speed than a human operator. Rejecting AI means giving up the benefits of automation, at the risk of putting the company at a competitive disadvantage. AI should be understood as the logical extension of the industrial revolution of automation and robotization.
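The defect-detection idea can be reduced to a minimal sketch (the images, expected brightness, and tolerance below are all hypothetical stand-ins; a real system would use a trained vision model rather than a fixed threshold):

```python
def mean_brightness(image):
    """Average pixel value of a grayscale image given as rows of 0-255 ints."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def is_defective(image, expected=200, tolerance=30):
    """Flag the part when its brightness falls outside the expected band."""
    return abs(mean_brightness(image) - expected) > tolerance

good_part = [[210, 205], [198, 202]]     # uniform, bright surface
scratched_part = [[210, 40], [35, 202]]  # dark scratch pixels drag the mean down

print(is_defective(good_part))       # → False
print(is_defective(scratched_part))  # → True
```

The point is the shape of the task, compare what the camera sees against what a good part should look like, which a machine can repeat thousands of times an hour without fatigue.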

Super-intelligent machines will overtake humans

Current AI applications are still narrow; that is, they address a well-defined problem. Generalized intelligence, capable of tackling many different tasks just as human intelligence does, is not yet on the agenda and still belongs to the register of science fiction. But in 1865, the journey from the Earth to the Moon also belonged to science fiction, and we know what it is today. So one cannot make a definitive prediction and claim that the idea is entirely false. However, it seems wise to think that we will not see, in our lifetime, super-robots that overtake humans in everything.
