Ibrahim Shamma

Why AI is overrated

AI has been a thing since at least the hobbyist magazines of the '60s.

It turned out to be harder than expected. Once in a while there is a popularized breakthrough, and we get a few years of frenzy. And it's getting worse: companies lean on press releases more and more, and the war for attention in the media is serious business.

Where does the problem lie?

1. Hyped by whom?

There are two main parties that benefit from this hype:

  • Cloud providers:
    They give away open source tools (TensorFlow, for example), which is an indirect way to pull you into their clouds to train your models and make them tons of money.

  • Companies, and especially startups:
    You know how shiny it looks to investors when you use flashy words like ML & AI, even if the problem could be solved with traditional code.

2. ML is Alchemy

The harsh reality is that we have no idea how it works. At least, not with the kind of mathematical / analytical model that CS is known for. In some ways it's really impressive that these piles of numbers actually work, almost magically, but it's intimidating to build on something so opaque.
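To make that concrete, here is a minimal sketch (scikit-learn; the toy XOR task is just for illustration). Even when a tiny network works, its learned weights explain nothing to a human reader:

```python
# A tiny network on XOR: even when it works, the "explanation"
# is just opaque matrices of floating-point numbers.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

net = MLPClassifier(hidden_layer_sizes=(4,), solver="lbfgs",
                    max_iter=1000, random_state=0).fit(X, y)

print(net.predict(X))  # usually [0 1 1 0], though tiny nets can fail to converge
print(net.coefs_)      # the learned weights: correct, but unreadable
```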

3. Limited Computing Power

Despite advances in hardware, computational limitations still exist because of the sheer scale of training required for complex problems.
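For a sense of scale, here is a back-of-envelope sketch. It assumes the common approximation that training a transformer costs roughly 6 FLOPs per parameter per token, plus illustrative GPT-3-scale numbers; the GPU throughput figure is an assumption, not a measurement:

```python
# Rough training-cost estimate using the ~6 * params * tokens heuristic.
# The parameter/token counts are GPT-3-scale figures; the GPU throughput
# is an assumed sustained rate, so treat the result as an order of magnitude.
params = 175e9        # model parameters
tokens = 300e9        # training tokens
flops = 6 * params * tokens

gpu_flops_per_s = 100e12  # assumed ~100 TFLOP/s sustained per GPU
gpu_years = flops / gpu_flops_per_s / (3600 * 24 * 365)
print(f"{flops:.2e} FLOPs ≈ {gpu_years:.0f} GPU-years")  # ~3.15e+23 FLOPs ≈ 100 GPU-years
```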

4. Specific Application Focus

Machine learning techniques suffer from a narrow application focus, so their problem-solving abilities are narrow too. Every machine learning technique is designed to work within a limited scope, in a limited domain.

There is no go-to way to build models.

Programmers cannot take an approach built to evaluate text, for example, and feed visual data into the same network; it would just produce nothing interesting. This narrow application focus cannot be trained away. Instead, you must develop another sophisticated model, perhaps using different methodologies, and that new model will again be confined to a single domain.

All programmers can do is build more complicated networks to make them more useful. The major shortcoming of existing machine learning approaches is their lack of reusability: real-world problems are multidimensional and conditions vary, so the network must be redesigned whenever new factors are introduced.
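A minimal sketch of this (scikit-learn, with made-up data): a pipeline trained on text simply cannot ingest pixels, let alone do anything interesting with them:

```python
# A text classifier expects strings; handing it an image array fails
# outright instead of producing anything useful.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

text_model = make_pipeline(CountVectorizer(), MultinomialNB())
text_model.fit(["free money now", "meeting at noon"], [1, 0])  # 1 = spam, 0 = ham

image = np.random.rand(32, 32, 3)  # a random "image"
try:
    text_model.predict([image])
except Exception as e:
    print(type(e).__name__)  # AttributeError: pixels are not text
```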

5. Many Maintenance and Debugging Issues

This issue is single-handedly holding back the field.
Because of this specialized focus, networks used for complex tasks are typically complex themselves and employ multiple techniques/layers. Each technique may need its own expert programmer to work on its part of the code, which makes neural-network-based programs hard to maintain and debug.
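As an illustration (a minimal PyTorch-style sketch; the task and shapes are made up), a single model for a video task can stack vision, sequence, and output components, each of which is its own sub-discipline:

```python
# One model, several specialties: convolutional vision, recurrent sequence
# modeling, and a classification head. A subtle bug in any block can
# silently degrade the whole network, and each block may need its own expert.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(             # vision expertise
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # -> (batch*time, 16, 1, 1)
        )
        self.rnn = nn.LSTM(16, 32, batch_first=True)  # sequence expertise
        self.head = nn.Linear(32, num_classes)        # output head

    def forward(self, frames):                # frames: (batch, time, 3, H, W)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, 16)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])          # predict from the last timestep

print(FrameClassifier()(torch.randn(2, 5, 3, 32, 32)).shape)  # torch.Size([2, 10])
```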

6. Correlation is not Causation!

Neural network methods encode correlations in the data they process; they do not understand causation or the relationships behind the data. Despite their successes, they are still a ‘dumb’ technique, akin to historical brute-force methods that lean on raw computational power. This limits what they can do, and it still takes trained people to draw conclusions from what they produce.
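A minimal sketch of the trap (NumPy + scikit-learn, synthetic data): the model latches onto a spurious "shortcut" feature that merely correlates with the label, and collapses the moment that correlation breaks:

```python
# Feature 0 is a weak but genuine signal; feature 1 is a near-perfect
# spurious correlate that flips at deployment time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)
cause = y + 0.5 * rng.normal(size=n)        # real signal, noisy
shortcut = y + 0.05 * rng.normal(size=n)    # spurious, almost noiseless
X_train = np.column_stack([cause, shortcut])

model = LogisticRegression().fit(X_train, y)

shortcut_flipped = (1 - y) + 0.05 * rng.normal(size=n)  # correlation breaks
X_test = np.column_stack([cause, shortcut_flipped])

print("train accuracy:", model.score(X_train, y))  # near 1.0
print("test accuracy :", model.score(X_test, y))   # far below chance: it learned the shortcut
```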

However, there is a reason why machine learning has lately acquired popularity. The historical contrast between Deep Blue and AlphaGo exemplifies why. Deep Blue was a chess-playing computer built by IBM that, in 1997, finally defeated Chess Grandmaster Garry Kasparov. A machine had beaten a human at a game widely thought, at the time, to require human intelligence. Yet Deep Blue did not have human-level intelligence; it won through explicit programming and sheer computing speed.

A comparable artificial intelligence (AI) called AlphaGo, developed by Google's DeepMind, used reinforcement learning and neural networks to beat professional human Go players, starting in 2015.

In stark contrast to Deep Blue, AlphaGo is a machine that learns rather than being explicitly programmed.
DeepMind showed AlphaGo the rules of Go and exposed it to a large number of amateur games; it then played against different versions of itself, learning from its mistakes and gradually improving.

Explicit program

If email text contains X-WORD, then mark as spam;
If email contains signup, then CHECK SOMETHING-ELSE;
If email contains ...

Learning program

Try to classify emails;
Change self to reduce errors;
Repeat;

It's a pretty powerful difference.
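In runnable form (a minimal scikit-learn sketch; the tiny dataset and keywords are made up), the two programs from the pseudocode above might look like this:

```python
# Explicit program: every rule is written by hand.
def rule_based_is_spam(email: str) -> bool:
    return "winner" in email.lower() or "free money" in email.lower()

# Learning program: the model adjusts itself to reduce its own errors.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = ["free money, you are a winner", "lunch at noon?",
          "claim your free prize", "project update attached"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

learned = make_pipeline(CountVectorizer(), MultinomialNB()).fit(emails, labels)

print(rule_based_is_spam("free money inside"))    # True, but only for rules we thought of
print(learned.predict(["claim your prize now"]))  # [1], inferred from the data
```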

However, learning programs still suffer from all the limitations listed above.

Conclusion

ML is unquestionably helpful today and will stay that way in the years ahead. However, it cannot address every problem, and it is still hard to tell in advance which problems ML techniques can solve quickly. Trial and error costs money, but right now it's necessary. That makes the field dull for everyone except researchers. Perhaps in 20 years, implementing your idea will be as simple as looking up a scenario in a table is in civil engineering today, with a 99.999% assurance that it will work.
