ORIGINAL RESEARCH article
Front. Neurosci.
Sec. Neuromorphic Engineering
Volume 19 - 2025 | doi: 10.3389/fnins.2025.1665778
This article is part of the Research Topic "Neuromorphic Computing and AI for Energy-Efficient and Adaptive Edge Intelligence".
Balancing Accuracy and Efficiency: Co-Design of Hybrid Quantization and Unified Computing Architecture for Spiking Neural Networks
Provisionally accepted
1 Beijing Institute of Technology, Beijing, China
2 Sichuan TianFu New Area Beijing Institute of Technology Innovation Equipment Research Institute, Chengdu, China
3 Beijing Institute of Technology Chongqing Innovation Center, Chongqing, China
The deployment of Spiking Neural Networks (SNNs) on resource-constrained edge devices is hindered by a critical algorithm-hardware mismatch: a fundamental trade-off between the accuracy degradation caused by aggressive quantization and the resource redundancy stemming from traditional decoupled hardware designs. To bridge this gap, we present an algorithm-hardware co-design framework centered on a Ternary-8-bit Hybrid Weight Quantization (T8HWQ) scheme. Our approach recasts SNN computation into a unified "8-bit × 2-bit" paradigm by quantizing first-layer weights to 2 bits and subsequent-layer weights to 8 bits. This standardization directly enables a unified processing element (PE) architecture, eliminating the resource redundancy inherent in decoupled designs. To mitigate the accuracy degradation caused by aggressive first-layer quantization, we propose a channel-wise dual compensation strategy that combines channel-wise quantization optimization with adaptive-threshold neurons, using reparameterization to restore model accuracy without incurring additional inference overhead. Building upon T8HWQ, we propose a unified computing architecture that overcomes the inefficiencies of traditional decoupled designs by efficiently multiplexing processing arrays. Experimental results support our approach: on CIFAR-100, our method achieves near-lossless accuracy (<0.7% degradation versus full precision) with a single time step, matching state-of-the-art low-bit SNNs. At the hardware level, implementation results on the Xilinx Virtex-7 platform show that our unified computing unit saves 20.2% of lookup table (LUT) resources compared with traditional decoupled architectures, and the design delivers a 6× throughput improvement over state-of-the-art SNN accelerators at comparable resource utilization and lower power consumption. This integrated solution advances the practical implementation of high-performance, low-latency SNNs on resource-constrained edge devices.
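To make the quantization scheme concrete, the following is a minimal PyTorch-style sketch of channel-wise ternary quantization for a first-layer weight tensor, with the per-channel scale folded into the neuron's firing threshold. The function names, the quantile-based sparsity heuristic, and the closed-form scale are illustrative assumptions for this sketch; the paper's exact thresholding and compensation rules may differ.

    # Hypothetical sketch of channel-wise ternary quantization with
    # threshold reparameterization; not the authors' reference code.
    import torch

    def ternary_quantize_channelwise(w: torch.Tensor, sparsity: float = 0.7):
        """Quantize a conv weight tensor [C_out, C_in, k, k] to {-1, 0, +1}
        with one scale per output channel (a common ternary scheme)."""
        flat = w.abs().flatten(1)                   # [C_out, C_in*k*k]
        # Per-channel magnitude threshold: weights below it are zeroed.
        delta = flat.quantile(sparsity, dim=1).view(-1, 1, 1, 1)
        mask = (w.abs() > delta).float()
        t = torch.sign(w) * mask                    # ternary codes {-1, 0, +1}
        # Per-channel scale alpha minimizing ||w - alpha * t||^2 in closed form.
        num = (w * t).flatten(1).sum(dim=1)         # sum of |w| over active entries
        den = mask.flatten(1).sum(dim=1).clamp(min=1.0)
        alpha = (num / den).view(-1, 1, 1, 1)
        return t, alpha

    def fold_scale_into_threshold(theta: torch.Tensor, alpha: torch.Tensor):
        """Reparameterize: instead of multiplying the membrane input by alpha
        at inference, divide each channel's firing threshold by alpha."""
        return theta / alpha.flatten()

Because each per-channel scale alpha is strictly positive, the spiking test alpha * u >= theta is equivalent to u >= theta / alpha, so absorbing the scale into the threshold preserves the firing decision exactly while removing the multiplication from the inference path; this is the arithmetic behind the "no additional inference overhead" claim.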
Keywords: spiking neural networks (SNNs), quantization, field-programmable gate array (FPGA), algorithm-hardware co-design, unified processing elements, resource-constrained devices.
Received: 14 Jul 2025; Accepted: 22 Sep 2025.
Copyright: © 2025 Li, Xu, Dong, Lan, Liu, Chen, Zhuang, Xie and Chen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
He Chen, chenhe@bit.edu.cn
Yizhuang Xie, xyz551_bit@bit.edu.cn
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.