About this Research Topic
Spiking neural network (SNN), a sub-category of brain-inspired neural networks, mimics biological neural codes, dynamics, and circuitry. One particular observation is that the brain performs complex computation with high precision locally (at the dendritic and neuronal level) while transmitting the outputs of these local computations in a binary code (at the network level). SNN has achieved superior performance in processing noisy signals and complex, sparse spatio-temporal information with high power efficiency under an event-driven computing paradigm. Hence, SNN shows great potential both for investigating biologically realistic models of human cognition and for developing efficient machine learning devices. One can draw inspiration from this design principle, and many others, while working to advance the field of SNN.
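The design principle above, high-precision local dynamics combined with binary output, can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron. This is only an illustrative sketch, not a model endorsed by any particular paper; the time constant, threshold, and reset values are arbitrary choices.

```python
import numpy as np

def lif_neuron(input_current, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron.

    The membrane potential evolves as a continuous, high-precision
    local state, but the neuron communicates only binary spikes.
    All parameter values here are illustrative.
    """
    v = v_reset
    spikes = []
    for i_t in input_current:
        # Leaky integration of the input current toward the resting level
        v += dt / tau * (-(v - v_reset) + i_t)
        if v >= v_th:          # threshold crossing -> emit a binary spike
            spikes.append(1)
            v = v_reset        # reset the membrane after spiking
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant suprathreshold input yields a regular binary spike train
out = lif_neuron(np.full(100, 1.5))
```

The analog membrane variable `v` carries the precise local computation, while downstream neurons would see only the 0/1 spike train, mirroring the brain's local-precision, binary-transmission scheme described above.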
Recent years have seen significant progress across a wide spectrum of sub-fields in artificial intelligence (AI), e.g. image processing, speech recognition, and machine translation. These successes have been largely driven by advances in systematic learning theories (e.g. stochastic gradient descent), explicit benchmarks (i.e. various tasks and datasets), friendly programming tools (e.g. TensorFlow, PyTorch), and efficient processing platforms (e.g. GPU/TPU). These advances can be broadly classified under learning, benchmarking, programming, and execution. SNN is still at an early stage in all four aspects, but has already generated great interest in both the AI and neuroscience communities.
There are many relevant but still open questions in the field of SNN. First, how does one train a deep SNN? An SNN typically has multi-dimensional spatio-temporal dynamics akin to a recurrent neural network (RNN) in deep learning, but its activations are discrete events, which require a different treatment for the network to be properly trained on a given task. Second, while there are a few public datasets for SNN evaluation, based either on the dynamic vision sensor (DVS), e.g. N-MNIST, or on static images, e.g. MNIST, CIFAR10, and ImageNet, most of them (especially those commonly used in deep learning) fail to explore the true nature of SNN and hence cannot properly evaluate the strengths and weaknesses of a newly proposed SNN. Many of the image datasets fail to exploit the temporal dimension of SNN computation, while sound datasets are still relatively new to the community. Either way (static image or sound), SNN has yet to scale up to a level of performance (in terms of accuracy and dataset size) convincing to the larger AI community. Moreover, the human brain is a complex system that receives sensory inputs of multiple modalities while performing actions in smooth trajectories. Given that SNN closely mimics the brain, the benchmarks we adopt should better reflect the cognitive tasks the brain typically undertakes; this may help distinguish SNN research from the narrow AI that deep learning specializes in. Beyond benchmarks, the community still lacks an open programming framework that would allow researchers to easily conceptualize, train, and share new SNNs. The importance of programming frameworks such as TensorFlow and PyTorch to the progress of deep learning cannot be overstated.
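One widely studied answer to the training question is the surrogate-gradient approach: the forward pass keeps the non-differentiable binary spike, while the backward pass substitutes a smooth derivative. The sketch below is a minimal, framework-agnostic illustration; the fast-sigmoid surrogate shape and the parameter `alpha` are illustrative assumptions, not a prescription from this Research Topic.

```python
import numpy as np

def spike_forward(v, v_th=1.0):
    """Forward pass: non-differentiable Heaviside step at the threshold."""
    return (v >= v_th).astype(float)

def surrogate_grad(v, v_th=1.0, alpha=2.0):
    """Backward pass: a smooth surrogate derivative (fast-sigmoid shape).

    The true derivative of the spike function is zero almost everywhere,
    so backpropagation replaces it with this non-zero approximation.
    """
    return alpha / (2.0 * (1.0 + alpha * np.abs(v - v_th)) ** 2)

# Toy membrane potentials for one layer of neurons
v = np.array([0.2, 0.9, 1.1, 1.8])
s = spike_forward(v)    # binary spikes used in the forward pass
g = surrogate_grad(v)   # smooth gradient used in the backward pass
```

Note that the surrogate gradient peaks near the threshold and decays away from it, so neurons close to firing receive the strongest learning signal; this is what makes gradient-based training of deep SNNs tractable despite the discrete activations.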
Recent advances in state-of-the-art in-silico modeling of spiking neural networks, as demonstrated by the IBM TrueNorth, Intel Loihi, and ETH DYNAP chips, attest to the great potential of implementing SNN in hardware and have drawn further attention to the field. On the one hand, neuromorphic hardware architecture and design remains an active field of research. On the other hand, SNN is poorly supported by conventional hardware such as the GPU (the current workhorse of deep learning), which is designed for dense, high-precision matrix operations rather than the asynchronous computation of SNN. Well-designed neuromorphic hardware should also be highly energy efficient, making it well suited for both cloud and edge computing.
This Research Topic aims to bring together research including, but not limited to, the above topics, spanning theory and algorithms, evaluation frameworks, software engineering, hardware architecture and systems, and emerging applications.
Topics relevant to this special issue include, but are not limited to:
● Learning algorithms for very deep and large-scale SNNs
● SNN benchmarking frameworks (e.g. tasks and datasets suited to SNN)
● Acceleration of SNN computing for both training and inference
● Friendly and efficient SNN programming tools
● Neuromorphic hardware for dedicated SNN applications in the cloud or at the edge
● SNN-oriented applications
Keywords: Deep Spiking Neural Networks, SNN Learning Algorithms, Programming Framework, SNN Benchmarks, Neuromorphics
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.