History of Neural Networks

In 1943, the neuroscientist Warren McCulloch and the logician Walter Pitts, both working in Chicago, proposed the first mathematical model of a neural network. Since then, neural networks have received a massive amount of attention across numerous fields of study. Much of the excitement around them stems from the fact that they attempt to model the human brain: they are a family of algorithms that try to recognise underlying relationships in a data set through a mechanism that mimics the way the human mind operates. While the capabilities of computers at numerical computation arguably far exceed human capabilities, the human brain possesses abilities that would be highly desirable in a computer: recognising features even in the presence of noise; understanding, interpreting, and acting on uncertain or probabilistic information; making decisions and inferences based on previous experience; and learning by transferring knowledge. These abilities are the motivation for modelling and understanding the human brain.

Maass, in an early paper introducing neural models, stated that three generations of neural networks can be distinguished when network models are classified according to their computational units. The McCulloch-Pitts neuron defines the first generation. It is a threshold unit: the neuron sums the weighted inputs it receives from connected units and produces an output only if that sum exceeds its threshold. First-generation networks therefore use perceptrons, or threshold gates, as their computational units. The simple perceptron model, consisting of McCulloch-Pitts neurons arranged in an input and an output layer, was introduced by Rosenblatt in the late 1950s.

Minsky and Papert later exposed a fundamental limitation of the perceptron: a single-layer model can only solve problems that are linearly separable. Their finding sharply diminished the use and popularity of neural networks. The most important subsequent development was the backpropagation learning algorithm, which made it possible to train multilayered networks. Although the idea had appeared in earlier work, it was not fully appreciated at the time. Backpropagation overcame the limitation of threshold gates and revived interest in neural networks. Because it relies on smooth, differentiable activation functions as computational units, Maass describes such networks as the second generation, while the third generation of neural models uses spiking neurons as its computational units.
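To make the difference between the first two generations concrete, here is a minimal Python sketch of the two kinds of computational unit. The function names, weights, and threshold values are illustrative choices for this post, not taken from the original papers:

```python
import numpy as np

def mcculloch_pitts_unit(inputs, weights, threshold):
    """First generation: a hard threshold gate. The unit fires (1)
    only if the weighted sum of its inputs reaches the threshold."""
    return 1 if np.dot(weights, inputs) >= threshold else 0

def sigmoid_unit(inputs, weights, bias):
    """Second generation: a smooth, differentiable activation
    function, which is what lets backpropagation push gradients
    through multilayered networks."""
    return 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))

# With equal weights and a threshold of 1, the threshold unit acts
# as a logical OR gate -- one of the Boolean functions McCulloch and
# Pitts showed their neurons could realise.
x = np.array([1, 0])
w = np.array([1.0, 1.0])
print(mcculloch_pitts_unit(x, w, threshold=1.0))  # 1 (fires)
print(sigmoid_unit(x, w, bias=-1.0))              # 0.5 (graded output)
```

A single layer of units like the first one is exactly what cannot compute XOR, which is the linear separability problem Minsky and Papert pointed out; the second, differentiable kind of unit is what backpropagation-trained multilayer networks are built from.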


REFERENCES

  1. W. Maass, “Networks of Spiking Neurons: The Third Generation of Neural Network Models,” Neural Networks, vol. 10, no. 9, pp. 1659–1671, 1997.
  2. B. Warner and M. Misra, “Understanding Neural Networks as Statistical Tools,” The American Statistician, vol. 50, no. 4, pp. 284–293, 1996.
  3. K. Kumar and G. S. M. Thakur, “Advanced Applications of Neural Networks and Artificial Intelligence: A Review,” International Journal of Information Technology and Computer Science, vol. 4, no. 6, pp. 57–68, 2012.
  4. K. O’Shea and R. Nash, “An Introduction to Convolutional Neural Networks,” 2015. [Online]. Available: http://arxiv.org/abs/1511.08458
  5. F. Ponulak and A. Kasinski, “Introduction to Spiking Neural Networks: Information Processing, Learning and Applications,” Acta Neurobiologiae Experimentalis, vol. 71, no. 4, pp. 409–433, 2011.
  6. F. Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Washington, DC: Spartan Books, 1962.
  7. J. W. Donahoe and D. C. Palmer, “The Interpretation of Complex Human Behavior: Some Reactions to Parallel Distributed Processing, edited by J. L. McClelland, D. E. Rumelhart, and the PDP Research Group,” Journal of the Experimental Analysis of Behavior, vol. 51, no. 3, pp. 399–416, 1989.
