ORIGINAL RESEARCH article
Front. Neurosci.
Sec. Neuromorphic Engineering
Volume 19 - 2025 | doi: 10.3389/fnins.2025.1661916
This article is part of the Research Topic: Spiking Neural Networks: Enhancing Learning Through Neuro-Inspired Adaptations.
Bidirectional Dynamic Threshold SNN for Enhanced Object Detection with Rich Spike Information
Provisionally accepted
1 Beijing Institute of Technology, Beijing, China
2 Center of Brain Sciences, Institute of Basic Medical Sciences, Beijing, China
3 Institute of Automation and Control Equipment, Beijing, China
Spiking Neural Networks (SNNs), inspired by neuroscience principles, have gained attention for their energy efficiency. However, directly trained SNNs lag behind Artificial Neural Networks (ANNs) in accuracy on complex tasks such as object detection due to the limited information capacity of binary spike feature maps. To address this, we propose BD-SNN, a new directly trained SNN equipped with Bidirectional Dynamic Threshold neurons (BD-LIF). BD-LIF neurons emit +1 and -1 spikes and dynamically adjust their thresholds, enhancing the network's information encoding capacity and activation efficiency. BD-SNN incorporates two new all-spike residual blocks, BD-Block1 and BD-Block2, for efficient information extraction and multi-scale feature fusion, respectively. Experiments on the COCO and Gen1 datasets show that BD-SNN improves accuracy by 3.1% and 2.8%, respectively, over the state-of-the-art EMS-YOLO method, validating its superior performance across diverse input scenarios. The project will be available at https://github.com/Ganpei576/BD-SNN.
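The abstract does not give the exact update equations of BD-LIF, so the following is a minimal sketch of one plausible reading of a bidirectional dynamic-threshold neuron: a leaky integrator that emits +1 spikes above a positive threshold and -1 spikes below a mirrored negative threshold, with the threshold magnitude raised after each spike and relaxing back to a baseline. The class name BDLIF and the parameters tau, v_th, th_decay, and th_step are illustrative assumptions, not the authors' implementation.

```python
import torch

class BDLIF:
    """Hypothetical sketch of a bidirectional dynamic-threshold LIF neuron.

    Emits ternary spikes in {-1, 0, +1}: +1 when the membrane potential
    crosses a positive threshold, -1 when it crosses the mirrored negative
    threshold. Spiking raises a dynamic threshold offset, which otherwise
    decays back toward zero (one plausible reading of "dynamic threshold").
    """

    def __init__(self, tau=2.0, v_th=1.0, th_decay=0.9, th_step=0.1):
        self.tau = tau            # membrane time constant (leak)
        self.v_th_base = v_th     # baseline threshold magnitude
        self.th_decay = th_decay  # relaxation rate of the dynamic offset
        self.th_step = th_step    # threshold increment per emitted spike
        self.v = None             # membrane potential state
        self.dth = None           # dynamic threshold offset state

    def forward(self, x):
        if self.v is None:
            self.v = torch.zeros_like(x)
            self.dth = torch.zeros_like(x)
        # Leaky integration of the input current.
        self.v = self.v + (x - self.v) / self.tau
        th = self.v_th_base + self.dth
        pos = (self.v >= th).float()    # +1 spike locations
        neg = (self.v <= -th).float()   # -1 spike locations
        spikes = pos - neg              # ternary output in {-1, 0, +1}
        # Soft reset: subtract the signed threshold where a spike fired.
        self.v = self.v - spikes * th
        # Spiking raises the threshold; otherwise it relaxes to baseline.
        self.dth = self.th_decay * self.dth + self.th_step * (pos + neg)
        return spikes

# Usage: run one neuron layer over T timesteps of a feature map.
neuron = BDLIF()
for _ in range(4):
    out = neuron.forward(torch.randn(2, 16))  # ternary spike map per step
```

Compared with a standard LIF, the ternary output roughly doubles the per-spike information content, which is the capacity gain the abstract attributes to BD-LIF; the adaptive offset is one simple way to realize threshold dynamics.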
Keywords: RGB and Event, spiking neural networks, neuromorphic computing, neuron model, object detection
Received: 08 Jul 2025; Accepted: 03 Sep 2025.
Copyright: © 2025 Wu, Wang, Song, Zhao, Zhou, Meng and Liao. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Gang Wang, Center of Brain Sciences, Institute of Basic Medical Sciences, Beijing, China
Yong Song, Beijing Institute of Technology, Beijing, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.