Neuromorphic computing systems, encompassing both digital and analog neural accelerators, promise to revolutionize AI processing by making it more sustainable and energy-efficient. These systems draw inspiration from the biological brain, adopting event-based and dataflow-driven processing paradigms that exhibit extensive parallelism and exploit spatio-temporal sparsity in connectivity and communication. They also aim to co-localize computation with memory state, further enhancing efficiency. Despite advances in neuromorphic processor engineering and the successful training of spiking neural network models, a critical question remains: "Is neuromorphic processing more energy-efficient?" Current solutions have yet to fully capitalize on the cornerstones of neuromorphic processing, such as asynchronous event-driven execution and on-device learning. This gap underscores the need for further research to validate and quantify the effectiveness of neuromorphic systems in achieving sustainable AI.

This Research Topic aims to explore and address the challenges of making neuromorphic processing more energy-efficient. The primary objectives are to investigate whether neuromorphic systems can indeed boost AI acceleration efficiency and to identify best practices for exploiting their unique features. Specific questions include how effective algorithm-hardware co-optimizations are, what potential on-device learning and adaptation offer, and what role synaptic delays play in enhancing model performance. By answering these questions, the research aims to pave the way for a richer and more diverse edge-AI application ecosystem.

To gather further insights into the boundaries and limitations of neuromorphic computing, we welcome articles addressing, but not limited to, the following themes:

- Applications, datasets, and benchmarks for demonstrating learning and adaptation on neuromorphic platforms
- On-line or on-device model learning/adaptation to improve the efficiency of neuromorphic platforms
- Algorithm-hardware co-optimizations and adaptation for neuromorphic processing
- Exploiting synaptic (axonal, dendritic) delays in models (see the illustrative sketch after this list)
- Hardware-aware or hardware-in-the-loop training for non-deterministic processing on digital asynchronous event-driven and/or analog neuromorphic platforms
- Multi-timescale and delay-based parameterization of neural network models for hardware efficiency and performance
- Model mapping and scheduling for neuromorphic processors
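To make the synaptic-delay theme concrete, the sketch below shows one common way delays are modeled in software: each synapse postpones its input events by a fixed number of time steps before they reach a leaky integrate-and-fire neuron, so the delay values become tunable parameters alongside the weights. This is a minimal illustrative sketch in NumPy, not the interface of any particular neuromorphic platform; all names, constants, and the random inputs are assumptions made for illustration.

```python
# Minimal sketch (not any specific platform's API): a leaky integrate-and-fire
# neuron with per-synapse transmission delays, showing how delay parameters
# spread input events over time. All constants here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

T = 100                                    # simulation steps
n_syn = 4                                  # number of input synapses
w = rng.uniform(0.2, 0.6, n_syn)           # synaptic weights (assumed range)
d = rng.integers(1, 10, n_syn)             # per-synapse delays, in steps
tau = 20.0                                 # membrane time constant
v_th = 1.0                                 # firing threshold
decay = np.exp(-1.0 / tau)                 # per-step membrane leak factor

# Random input spike trains, one row per synapse
in_spikes = rng.random((n_syn, T)) < 0.05

# Delay line: a buffer of future synaptic current arrivals
buffer = np.zeros(T + int(d.max()) + 1)
for s in range(n_syn):
    for t in np.nonzero(in_spikes[s])[0]:
        buffer[t + d[s]] += w[s]           # each event lands d[s] steps later

v, out = 0.0, []
for t in range(T):
    v = decay * v + buffer[t]              # leak plus delayed synaptic input
    if v >= v_th:                          # threshold crossing -> output spike
        out.append(t)
        v = 0.0                            # reset membrane potential

print("output spike times:", out)
```

Because the delays only shift when events arrive, an event-driven implementation can realize them cheaply with time-stamped event queues rather than dense buffers; the dense buffer above is used purely for clarity.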
By addressing these themes, the Research Topic aims to contribute significantly to the field of neuromorphic computing, ultimately making AI more sustainable and efficient.

Topic Editor Manolis Sifalakis is employed by Imec (Eindhoven, Netherlands). All other Topic Editors declare no competing interests with regard to the Research Topic subject.