About this Research Topic
Towards the long-standing dream of artificial intelligence, two solution paths have been paved: (i) neuroscience-driven neuromorphic computing and (ii) computer-science-driven machine learning. The former aims to harness neuroscience for insights into brain-like processing by studying the detailed implementation of neural dynamics, circuits, coding, and learning. Although our understanding of how the brain works is still very limited, this biologically plausible approach holds appealing promise for future general intelligence. In contrast, the latter aims to solve practical tasks, typically formulated as optimizing a cost function, with high accuracy, eschewing most neuroscience details in favor of brute-force optimization over large volumes of data.
With the help of big data (e.g., ImageNet), high-performance processors (e.g., GPUs, TPUs), effective training algorithms (e.g., artificial neural networks trained with gradient descent), and easy-to-use design tools (e.g., PyTorch, TensorFlow), machine learning has achieved superior performance across a broad spectrum of scenarios. Although acclaimed for its biological plausibility and low-power advantage (a benefit of spike signals and event-driven processing), neuromorphic computing faces ongoing debate and skepticism because it usually performs worse than machine learning on practical tasks, especially in terms of accuracy.
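To make the spike-based, event-driven paradigm concrete, the sketch below simulates a leaky integrate-and-fire (LIF) neuron, one of the simplest spiking models used in neuromorphic computing. All parameter values (time constant, threshold, reset) are illustrative assumptions, not taken from any specific hardware or study.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron.
# Parameters tau, v_th, v_reset, and dt are illustrative assumptions.

def lif_step(v, input_current, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Advance the membrane potential v by one time step; return (new_v, spiked)."""
    # Leaky integration: the potential decays toward rest while integrating input.
    v = v + dt * (-v + input_current) / tau
    if v >= v_th:           # crossing the threshold emits a binary spike event
        return v_reset, 1   # reset the potential after spiking
    return v, 0

# Drive the neuron with a constant current and record the sparse spike train.
v, spikes = 0.0, []
for t in range(100):
    v, s = lif_step(v, input_current=1.5)
    spikes.append(s)
```

The output is a sparse binary spike train rather than a dense activation vector; downstream computation need only occur when a spike arrives, which is the intuition behind the low-power claim for event-driven hardware.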
This performance gap stems from a variety of factors: incomplete understanding of neuronal models, connection topologies, coding schemes, and learning algorithms; discrete state spaces with limited precision; and weak external support in the form of benchmark datasets, computing platforms, and programming tools, none of which are as mature as their machine learning counterparts. Moreover, researchers in different sub-domains of neuromorphic computing usually pursue distinct optimization objectives, such as reproducing cell-level or circuit-level neural behaviors, emulating brain-like functionality at the macro level, or simply reducing execution cost from a hardware perspective. Without a clear target, and judged solely on application accuracy, neuromorphic computing will continue to lag behind machine learning models.
Therefore, we need to rethink the true advantages of the human brain (e.g., strong generalization, multi-modal processing and association, and memory-based computing) and the goal of neuromorphic computing, rather than forcing it into a head-on confrontation with machine learning. We should make efforts to understand and bridge the current "gap" between neuromorphic computing and machine learning.
This Research Topic aims to bring together research including, but not limited to:
● High-performance neuromorphic algorithms exploring neuronal models, network architectures, and coding and learning schemes;
● Benchmarking metrics, datasets, and tasks for neuromorphic computing;
● Optimization of neuromorphic model processing on GPUs or specialized platforms, covering both inference and training;
● Design tools for building neuromorphic models, especially those that integrate with the interfaces of mainstream machine learning tools;
● Heterogeneous systems based on neuromorphic computing and machine learning.
Keywords: Neuromorphic Computing, Machine Learning, Supervised/Unsupervised Learning, Neural Coding, Model Evaluation
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.