Spiking Neural Networks at Scale: Learning Algorithms, Architectures, and Real-World Applications


About this Research Topic

Submission deadlines

  • Manuscript Summary Submission Deadline: 24 December 2025
  • Manuscript Submission Deadline: 13 April 2026

This Research Topic is currently accepting articles.

Background

Spiking Neural Networks (SNNs) are a cutting-edge approach to artificial intelligence, designed to emulate the brain's architecture and function. Their inherent ability to process temporal information through event-driven, sparse, and dynamic computation makes them a promising low-power alternative to conventional AI systems, especially when paired with neuromorphic processors. Despite substantial advances in the field, including the scaling of SNNs in the deep learning era, a significant challenge remains: closing the performance gap with traditional artificial neural networks (ANNs) while retaining SNNs' low-energy benefits. Improving the performance of large-scale SNNs could transform current AI workloads by offering low-latency, energy-efficient solutions for high-resolution vision, language processing, and large-scale perception, among other domains.
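For readers new to the area, the sketch below shows the discrete-time leaky integrate-and-fire (LIF) dynamics behind the event-driven, sparse computation described above. It is a minimal illustration only; the time constant, threshold, and input statistics are assumed values, not taken from any particular work.

```python
import numpy as np

def lif_forward(inputs, tau=10.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Simulate a leaky integrate-and-fire (LIF) neuron over T time steps.

    inputs: array of shape (T,) with the input current at each step.
    Returns a binary spike train of shape (T,).
    """
    decay = np.exp(-dt / tau)      # membrane leak per step
    v = v_reset
    spikes = np.zeros_like(inputs)
    for t, i_t in enumerate(inputs):
        v = decay * v + i_t        # leaky integration of input current
        if v >= v_th:              # threshold crossing emits a spike event
            spikes[t] = 1.0
            v = v_reset            # hard reset after spiking
    return spikes

# Most steps emit no spike, so downstream computation is sparse and
# event-driven: work is done only where the output is 1.
rng = np.random.default_rng(0)
out = lif_forward(rng.uniform(0.0, 0.3, size=100))
print(f"{int(out.sum())} spikes over {out.size} steps")
```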

This Research Topic aims to address the remaining challenges in SNN development, focusing on three areas: learning algorithms, architectures, and practical applications. Objectives include innovating in surrogate-gradient and conversion-based learning methods, integrating biologically inspired learning mechanisms such as STDP variants and self-supervised learning, and refining optimization and calibration designs to boost model expressivity at lower memory and compute cost. Architecturally, the goal is to reconcile event-driven, low-power SNN operation with modern model families such as state-space models and Transformers. Finally, the topic seeks to validate SNN capabilities in real-world scenarios by identifying key applications, migrating tasks from traditional ANNs to SNNs, and demonstrating measurable improvements in accuracy, latency, and energy use.
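As a concrete illustration of the surrogate-gradient idea mentioned above: the spike function is a non-differentiable Heaviside step whose derivative is zero almost everywhere, so training replaces that derivative with a smooth surrogate during backpropagation. The sketch below uses a fast-sigmoid surrogate; the sharpness alpha = 2.0 and the firing threshold are illustrative assumptions, not any specific submission's method.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; fast-sigmoid surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, v_minus_th):
        ctx.save_for_backward(v_minus_th)
        return (v_minus_th >= 0).float()   # binary spike; true gradient is zero a.e.

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_th,) = ctx.saved_tensors
        alpha = 2.0  # surrogate sharpness (assumed value)
        # Smooth stand-in for the Dirac delta at the threshold
        surrogate = 1.0 / (1.0 + alpha * v_minus_th.abs()) ** 2
        return grad_output * surrogate

# Usage: the forward output is binary, yet the membrane potential still
# receives a nonzero gradient through the surrogate.
v = torch.randn(8, requires_grad=True)
spikes = SurrogateSpike.apply(v - 1.0)     # threshold of 1.0 (assumed)
spikes.sum().backward()
print(spikes, v.grad)
```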

The scope of this Research Topic spans several focal themes:

- High-performance spiking neural networks across neuron models, network architectures, coding strategies, and learning schemes.

- Methodologies for unsupervised, self-supervised, and ANN-auxiliary training, as well as online learning, in large-scale and complex SNNs.

- Fundamental applications that exploit the unique strengths of SNNs, such as event-based sensing, low-latency perception, and energy-efficient intelligence.

- Acceleration techniques for SNNs, including GPU-based and neuromorphic-hardware acceleration, as well as architecture-driven routing/mapping algorithms for deploying large-scale SNNs on neuromorphic hardware.

- System-level optimization and practical applications of SNN-based solutions.

Article types and fees

This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:

  • Case Report
  • Clinical Trial
  • Community Case Study
  • Data Report
  • Editorial
  • FAIR² Data
  • FAIR² DATA Direct Submission
  • General Commentary
  • Hypothesis and Theory

Articles accepted for publication by our external editors following rigorous peer review incur a publishing fee, charged to authors, institutions, or funders.

Keywords: Spiking Neural Networks, Neuromorphic Computing, Brain-inspired Computing, Efficient Training, Artificial Intelligence

Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

Manuscripts can be submitted to this Research Topic via the main journal or any other participating journal.
