
ORIGINAL RESEARCH article

Front. Neurosci.

Sec. Neuromorphic Engineering

Volume 19 - 2025 | doi: 10.3389/fnins.2025.1662886

This article is part of the Research Topic: Algorithm-Hardware Co-Optimization in Neuromorphic Computing for Efficient AI

Efficient Spiking Convolutional Neural Networks Accelerator with Multi-Structure Compatibility

Provisionally accepted
Jiadong Wu1, Lun Lu1, Yinan Wang1*, Zhiwei Li1, Changlin Chen1, Qingjiang Li1, Kairang Chen2
  • 1National University of Defense Technology, Changsha, China
  • 2Chongqing Polytechnic University of Electronic Technology, Chongqing, China

The final, formatted version of the article will be published soon.

Spiking Neural Networks (SNNs) offer excellent computational energy efficiency and biological plausibility. Among them, Spiking Convolutional Neural Networks (SCNNs) deliver significantly improved performance, showing promise for low-power and brain-like computing applications. To achieve hardware acceleration for SCNNs, we propose an efficient FPGA accelerator architecture with multi-structure compatibility. The architecture supports both traditional convolutional and residual topologies, and can be adapted to diverse requirements ranging from small networks to complex ones. It uses a clock-driven scheme to perform convolution and neuron updates on the spike-encoded image at each timestep, and increases SCNN computation speed through hierarchical pipelining and channel-parallelization strategies. To address the limitation that current accelerators support only simple networks, the architecture combines configuration and scheduling methods, including grouped-reuse computation and line-by-line multi-timestep computation, to accelerate deep networks with many channels and large feature maps. Based on the proposed accelerator architecture, we evaluated networks at two scales, a small-scale LeNet and a deep residual SCNN, for object detection. Experiments show that the proposed accelerator achieves a maximum recognition speed of 1605 frames/s at a 100 MHz clock for the LeNet network, consuming only 0.65 mJ per image. Furthermore, combined with the proposed configuration and scheduling methods, the accelerator accelerates every residual module in the deep residual SCNN, reaching a processing speed 2.59 times that of a CPU while consuming only 16.77% of the CPU's power. These results demonstrate that the proposed accelerator architecture achieves higher energy efficiency, broader compatibility, and wider applicability.
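To illustrate the clock-driven scheme described above, the sketch below steps a toy leaky integrate-and-fire (LIF) layer through several timesteps: at each tick, the convolution output (here a fixed input current) is integrated into the membrane potential, a spike is emitted on threshold crossing, and the fired neuron is reset. The neuron model, leak factor, and threshold are illustrative assumptions, not parameters taken from the article:

```python
def lif_update(v, input_current, v_thresh=1.0, leak=0.9):
    """One clock-driven timestep of a leaky integrate-and-fire layer.

    v and input_current are lists of floats (one entry per neuron).
    Returns (new_v, spikes). Leak and threshold values are illustrative.
    """
    new_v, spikes = [], []
    for vi, ii in zip(v, input_current):
        vi = leak * vi + ii        # leaky integration of the input current
        if vi >= v_thresh:         # threshold crossing -> emit a spike
            spikes.append(1)
            new_v.append(0.0)      # reset the fired neuron's potential
        else:
            spikes.append(0)
            new_v.append(vi)
    return new_v, spikes

# Toy run: 4 neurons driven by a constant current over 3 timesteps,
# mimicking per-timestep processing of a spike-encoded input.
v = [0.0] * 4
for _ in range(3):
    v, s = lif_update(v, [0.6, 0.2, 1.2, 0.0])
```

In a hardware realization, each such update maps to one clock-driven pass over the feature map, with the channel dimension unrolled for parallelism.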

Keywords: spiking neural networks, spiking convolutional neural networks, artificial neural networks, brain-like computing, hardware accelerator, FPGA

Received: 09 Jul 2025; Accepted: 02 Sep 2025.

Copyright: © 2025 Wu, Lu, Wang, Li, Chen, Li and Chen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Yinan Wang, National University of Defense Technology, Changsha, China

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.