Spiking neural networks (SNNs) have been broadly described as the third generation of neural networks, computationally more advanced than the first and second generations. The mathematical modelling of spiking neurons can be traced back over a hundred years, to the 1907 paper on the excitability of nerves published by Louis Lapicque. SNNs are a special, enhanced category of artificial neural networks (ANNs) in which the neuronal models communicate and interact via series of spikes. The architecture of an SNN is usually similar to that of an ANN, but it is largely inspired by biological properties and therefore processes computations in a more biological way. Studies have shown that spiking neural networks possess good temporal data-processing capability, are useful in studying the pattern and behaviour of biological neural networks, and provide useful tools for analysing elementary processes in the brain such as information processing, plasticity and learning. The focus of spiking neural networks is the development of more biologically plausible neuronal models. A phenomenological model, such as the spike response model, is used to model the processing unit of an SNN, the spiking neuron, while information is transferred between neurons by electrical impulses known as spikes or action potentials, which are released when a neuron is excited. Since SNNs operate on trains of spikes, they are capable of handling large amounts of data while capturing the dynamics of biological neurons.
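Lapicque's model survives today as the leaky integrate-and-fire (LIF) neuron, the simplest spiking-neuron model in common use. The sketch below is a minimal Euler-method simulation of a single LIF neuron; all parameter values (membrane time constant, resistance, threshold, reset) are illustrative assumptions, not taken from any particular study.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=-65e-3,
                 v_reset=-65e-3, v_thresh=-50e-3, r_m=10e6):
    """Euler integration of a leaky integrate-and-fire neuron.

    input_current : array of external currents (A), one value per time step
    Returns the membrane-potential trace (V) and a list of spike times (s).
    """
    v = v_rest
    trace, spikes = [], []
    for step, i_ext in enumerate(input_current):
        # Leak toward the resting potential plus the driving input current.
        v += (-(v - v_rest) + r_m * i_ext) * dt / tau
        if v >= v_thresh:              # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset                # reset after the action potential
        trace.append(v)
    return np.array(trace), spikes

# A constant 2 nA input for 100 ms drives the neuron above threshold,
# producing a regular spike train.
current = np.full(100, 2e-9)
trace, spikes = simulate_lif(current)
```

With these assumed parameters the steady-state potential (-45 mV) sits above the threshold (-50 mV), so the neuron fires periodically; weaker input would leave it silent, which is the model's basic spiking/non-spiking distinction.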
SNNs also represent and integrate several kinds of information, such as the time dimension, frequency and phase, and they provide solutions in applied engineering such as classification, event detection, signal processing, spatial navigation, speech recognition, data analytics, fault-tolerant computing and robotic control. The mathematical models for spiking neurons do not fully capture the complex computational ability of the biological brain; however, they are more realistic than the first- and second-generation neural network models because they depict the output of the biological neuron, which aids in theoretically investigating the use of time as a resource for both interaction and computation.
Feed-Forward Networks: Interest in the behaviour of feed-forward networks increased dramatically after the backpropagation learning algorithm was introduced. In a feed-forward network, the connections between nodes form no cycles; information flows in one direction only, from each layer to the next.
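In the spiking setting, a feed-forward layer reduces to one operation per time step: integrate the weighted presynaptic spikes, threshold, and reset. A minimal sketch, in which the layer sizes, weights, threshold and decay factor are all arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 input neurons feeding 3 output neurons.
weights = rng.uniform(0.0, 1.0, size=(3, 4))
threshold = 1.0

def feed_forward_step(spikes_in, v, decay=0.9):
    """One time step of a feed-forward spiking layer.

    spikes_in : binary vector of presynaptic spikes at this step
    v         : membrane potentials of the output layer
    Returns the updated potentials and the binary output spikes.
    """
    v = decay * v + weights @ spikes_in        # integrate weighted input spikes
    spikes_out = (v >= threshold).astype(float)
    v = np.where(spikes_out == 1.0, 0.0, v)    # reset the neurons that fired
    return v, spikes_out

# Drive the layer with a random input spike train for 10 steps.
v = np.zeros(3)
for _ in range(10):
    spikes_in = (rng.random(4) < 0.5).astype(float)
    v, spikes_out = feed_forward_step(spikes_in, v)
```

The unidirectional flow described above is visible in the code: `spikes_out` never influences `spikes_in`, so no cycle exists.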
Recurrent Networks: Recurrent networks have been used to investigate neural information processing involved in the formation of associative memories, also referred to as working memory. Research has shown that recurrent networks can be difficult to develop and train, although they are the natural choice for models dealing with decision making or sequence-prediction problems.
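A recurrent spiking population differs from the feed-forward case only in that each step's output spikes are fed back through a recurrent weight matrix at the next step, so past activity influences future activity. A minimal sketch, with arbitrary assumed population size, weights and thresholds:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 5                                  # hypothetical population of 5 neurons
w_rec = rng.normal(0.0, 0.5, size=(n, n))
np.fill_diagonal(w_rec, 0.0)           # no self-connections
threshold, decay = 1.0, 0.9

v = np.zeros(n)
spikes = np.zeros(n)
history = []
for _ in range(20):
    i_ext = (rng.random(n) < 0.2).astype(float)   # sparse external drive
    # Recurrent input: the previous step's spikes are fed back through w_rec.
    v = decay * v + w_rec @ spikes + i_ext
    spikes = (v >= threshold).astype(float)
    v = np.where(spikes == 1.0, 0.0, v)           # reset neurons that fired
    history.append(spikes.copy())
```

The feedback loop (`spikes` appearing on both sides of the update) is what lets such a network sustain activity after the input is removed, the mechanism usually invoked for working memory.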
Hybrid Networks: A hybrid network combines feed-forward and recurrent topologies. The interactions between its sub-populations can be either unidirectional or reciprocal. The types of hybrid networks shall be discussed in further write-ups.
The topology of the spiking neural network in the brain changes dynamically as a result of the learning process. This write-up deals only with the introduction, history and topology of spiking networks; a review of models using spiking neural networks shall follow in subsequent write-ups. As stated earlier, SNNs are more computationally advanced and, being largely inspired by the human brain, have the ability to process computations better. However, SNNs lack the brain's ability to self-repair. Hence, researchers have introduced a new type of SNN called the spiking astrocyte neural network, which shall be introduced in the next write-up.
- A. Kasinski, “Introduction to spiking neural networks: Information processing, learning and applications,” Tech. Rep., 2011.
- W. Maass, “Networks of spiking neurons: The third generation of neural network models,” Neural Networks, vol. 10, no. 9, pp. 1659–1671, 1997.
- J. Liu, L. J. McDaid, J. Harkin, S. Karim, A. P. Johnson, D. M. Halliday, A. M. Tyrrell, J. Timmis, A. G. Millard, and J. Hilder, “Autonomous Learning Paradigm for Spiking Neural Networks,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 11727 LNCS. Springer Verlag, 2019, pp. 737–744.
- L. F. Abbott, “Lapicque’s introduction of the integrate-and-fire model neuron (1907),” Brain Research Bulletin, vol. 50, no. 5/6, pp. 303–304, 1999.
- N. Brunel and M. C. Van Rossum, “Lapicque’s 1907 paper: From frogs to integrate-and-fire,” Biological Cybernetics, vol. 97, no. 5-6, pp. 337–339, 12 2007.
- ——, “Quantitative investigations of electrical nerve excitation treated as polarization,” Biological Cybernetics, vol. 97, no. 5-6, pp. 341–349, 12 2007.
- S. Ghosh-Dastidar, H. Adeli, and A. G. Lichtenstein, “Spiking neural networks,” Tech. Rep. 4, 2009. [Online]. Available: www.worldscientific.com
- A. P. Johnson, J. Liu, A. G. Millard, S. Karim, A. M. Tyrrell, J. Harkin, J. Timmis, L. J. McDaid, and D. M. Halliday, “Homeostatic fault tolerance in spiking neural networks: A dynamic hardware perspective,” IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 65, no. 2, pp. 687–699, 2 2018.
- A. Taherkhani, A. Belatreche, Y. Li, G. Cosma, L. P. Maguire, and T. M. McGinnity, “A review of learning in biologically plausible spiking neural networks,” Neural Networks, vol. 122, pp. 253–272, 2 2020.
- T. D. Sanger, “Optimal unsupervised learning in a single-layer linear feedforward neural network,” Neural Networks, vol. 2, no. 6, pp. 459–473, 1989.