5 Things You Can't Do with Deep Learning

Rebecca Beris
Technology writer and editor working on deep learning, software testing, cybersecurity projects and more

Deep learning has fascinated people around the globe. Investors and entrepreneurs are racing to fund and invent commercial, profitable products made for or by AI, and those products rely on machine learning, the discipline of helping machines learn, for their continued health and improvement. While deep learning, a subfield of machine learning, has helped advance AI, the field isn't bulletproof. You can't do everything with deep learning.

What Is Deep Learning?

Deep learning is a subset of machine learning that uses artificial neural networks to teach machines. The concept is inspired by the design of the human brain, in which networks of neurons are the main medium through which learning occurs. In deep learning, neural networks can be implemented in hardware, software, or a combination of both.

Deep learning methodologies use a set of techniques called representation learning to help teach the machine to classify data based on a set of sample values. For that purpose, each neural network contains:

  • An input layer - composed of units that represent data. For example, pixels representing images.
  • One or more hidden layers - composed of hidden units or neurons.
  • An output layer - composed of labels that identify input categories. For example, cats, dogs, people.

What Is a Deep Learning Neural Network?

Unlike regular neural networks, which can get by with a single hidden layer, deep learning requires two or more hidden layers.

In a diagram of a neural network, each circle represents a neuron. As in the human brain, the job of the neurons is to transmit information. Deep learning applies this process to machine learning. To help neurons transmit information, we give the neurons specific roles:

  • Input neurons - receive information.
  • Hidden neurons - take in the input and produce an output.
  • Output neurons - forward information.

To help the neurons communicate the information, we create connections between them.
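The layer roles above can be sketched in plain NumPy. This is a minimal illustration, not a real trained model: the layer sizes, random weights, and the ReLU/softmax activation choices are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Hidden neurons take in input and produce an output.
    return np.maximum(0.0, x)

def softmax(x):
    # Output neurons turn raw scores into label probabilities.
    e = np.exp(x - x.max())
    return e / e.sum()

# Connections between layers, represented as weight matrices.
W1 = rng.normal(size=(4, 8))   # input layer (4 units) -> hidden layer 1
W2 = rng.normal(size=(8, 8))   # hidden layer 1 -> hidden layer 2 (deep!)
W3 = rng.normal(size=(8, 3))   # hidden layer 2 -> output (3 labels)

def forward(x):
    h1 = relu(x @ W1)          # first hidden layer
    h2 = relu(h1 @ W2)         # second hidden layer
    return softmax(h2 @ W3)    # output layer: probabilities over labels

probs = forward(np.array([0.5, -1.0, 2.0, 0.1]))
print(probs)  # three label probabilities that sum to 1
```

With two hidden layers, this network qualifies as "deep" in the sense described above; a single hidden layer would make it a regular shallow network.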

Deep Learning Use Cases

Deep learning has long been recognized for its contributions to improving machine learning capabilities in the fields of speech recognition, image recognition, and translation. The Google Translate of old, which produced laughable translations, has been replaced by a far better version that aids people all over the world.

As we help machines develop better and more efficient modes of learning, their capabilities and expertise increase. Nowadays, enterprises worldwide seek to develop products with or for the Artificial Intelligence (AI) market. Thanks to affordable cloud computing services, many businesses have access to the resources needed. According to Statista, demand for AI products is expected to keep climbing, bolstering the U.S. deep learning services market to $780 million in 2025.

The Limitations of Deep Learning

1. Overly Dependent on Data

Deep learning models are highly dependent on the availability of extensive volumes of training data to learn abstractions. This dependence on large volumes of data is in stark contrast to the ease at which humans learn abstractions using limited data. For example, in a 1999 study entitled Rule Learning by Seven-Month-Old Infants, the results indicated that infants can rapidly abstract algebra-like rules about languages using low levels of input information.

Given that artificial general intelligence is a primary goal of much of the research into deep learning and other AI fields, the gap between deep learning systems and humans in performing abstraction from limited data is striking. In domains where the training data for learning abstractions is scarce, deep learning falls short.

2. Current Deep Learning Is Superficial

The actual learning that occurs in deep learning models is arguably more superficial than it is deep. In a 2015 video that went viral on YouTube, Google DeepMind used a deep reinforcement learning algorithm to develop a winning strategy for the Atari game Breakout: building a tunnel through a wall. While the video was entertaining, the degree of actual learning was shallow.

By implementing minor variations to the game’s code, such as changing the wall’s location, researchers were able to show that a similar DeepMind system failed to beat the game using this strategy. The deep learning model in question wasn’t able to come to a well-formed, solid understanding of what a wall is.

3. Easy to Spoof

Computer vision, which is the field concerned with helping computers gain high-level understanding from digital images or video, is an area in which deep learning has shown real promise. However, severe limitations exist in deep learning for computer vision, such as the visual classification of slightly altered objects.

It is relatively straightforward to spoof, or fool, deep learning models, as a 2018 paper demonstrated. Its authors cast a bright light on the subject through an experiment with traffic signs: when they altered the physical appearance of stop signs, the AI misclassified them as speed limit signs. Such mistakes could cause traffic jams or lead to accidents.
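The core idea behind such spoofing can be shown on a toy model: nudge an input just enough, in a direction derived from the model's own weights, to flip its prediction. This is a generic adversarial-example sketch with a made-up linear classifier, not the physical stop-sign attack from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(10, 2))       # toy model: 10 input features -> 2 classes

def predict(x):
    return int(np.argmax(x @ W))

x = rng.normal(size=10)
original = predict(x)
other = 1 - original

# Direction that raises the other class's score relative to the original.
direction = W[:, other] - W[:, original]

# Smallest uniform step (plus 1%) that closes the score gap between classes.
margin = float((x @ W)[original] - (x @ W)[other])
epsilon = 1.01 * margin / np.abs(direction).sum()

# A small, signed nudge to every feature is enough to flip the label.
x_spoofed = x + epsilon * np.sign(direction)
print(original, predict(x_spoofed))  # the prediction flips
```

Real attacks on deep vision models follow the same principle but use the network's gradients instead of explicit weights, and the perturbations can be small enough to be invisible to humans.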

4. Lack of Transparency Into Neural Networks

The lack of transparency into the neural networks that underpin deep learning makes it hard to know how or why these networks make decisions. While the networks may excel at specific tasks, such as classifying images, it's hard to determine which part of the input matters most for a classification to be made.
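One common probe into this black box is occlusion sensitivity: zero out each input feature in turn and measure how much the model's top score drops, taking the largest drop as the "most important" feature. The model below is a stand-in linear scorer, purely for illustration; the technique itself carries over to real networks by occluding image patches.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(6, 3))        # toy model: 6 input features -> 3 classes

def top_score(x):
    # Score of the model's preferred class for input x.
    return float((x @ W).max())

x = rng.normal(size=6)
base = top_score(x)

importance = []
for i in range(len(x)):
    occluded = x.copy()
    occluded[i] = 0.0              # "cover up" feature i
    importance.append(base - top_score(occluded))

print(int(np.argmax(importance)))  # index of the most influential feature
```

Probes like this only approximate what the network is doing; they do not explain the decision process itself, which is why transparency remains a genuine limitation.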

Given that many of the more sensationalized reports on deep learning focus on financial or healthcare use cases, the lack of transparency could be seen as a limitation for the human users in charge of developing or maintaining these models. If developers can’t debug the system, they might need to retire the product before the business is hit with serious financial and health ramifications.

5. Requires a Stable World

Deep learning works well under stable conditions with specific circumstances and rules, such as solving puzzles or playing games. The real world is more complex, volatile, and ever-changing. Practical applications of deep learning models should be taken with a grain of salt. At present, many AI products are like budding students - they require human supervision.

Conclusion

Deep learning is a method. There is nothing inherently flawed about the statistical techniques and mathematics underpinning deep learning. Like any other method, deep learning has its strengths and weaknesses. Study the field and use it wisely as part of your machine learning strategy.
