<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in Neuroscience | Neuromorphic Engineering section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/neuroscience/sections/neuromorphic-engineering</link>
        <description>RSS Feed for Neuromorphic Engineering section in the Frontiers in Neuroscience journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>13 May 2026 12:06:15 GMT</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1827009</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1827009</link>
        <title><![CDATA[Federated training of spiking neural networks on edge hardware for audio processing]]></title>
        <pubDate>13 May 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Swaroop S. Kaimal</author><author>Ashwin JB</author><author>S. Sofana Reka</author><author>Prakash Venugopal</author>
        <description><![CDATA[Spiking Neural Networks (SNNs) have recently attracted significant attention for their event-driven processing and their potential for energy-efficient computation on neuromorphic hardware. SNNs employ spike-based learning paradigms, which require specialized training procedures such as surrogate gradient descent. At the same time, Federated Learning allows collaborative model training on decentralized devices while preserving data privacy. However, to date, little research has examined the suitability of federated learning for ARM-based hardware. This work investigates whether federated training of SNNs is feasible on ARM-based hardware, using the Raspberry Pi 5 as a widely available, low-cost edge computing device for audio signal processing tasks. We perform a comparative analysis of federated SNNs and federated convolutional neural networks on ARM processors and evaluate their performance under different data partitioning strategies using Dirichlet-based splits and various federated averaging algorithms. This work further investigates the impact of data heterogeneity and aggregation strategies on model convergence, communication overhead, and latency in distributed training paradigms. The results provide important insights into the trade-offs of FL-SNN implementations on von Neumann architectures and their applications in decentralized neuromorphic computing for audio processing.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1780751</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1780751</link>
        <title><![CDATA[Computational modeling of spatiotemporal afterimage visual perception with spiking neural networks]]></title>
        <pubDate>17 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Irena Byzalov</author><author>Hadar Cohen Duwek</author><author>Elishai Ezra Tsur</author>
        <description><![CDATA[Contour-induced afterimages constitute an important class of chromatic visual illusions, in which an illusory color percept emerges after exposure to a chromatic field. Their striking features are dual polarity (the perception of both complementary and inducer hues) and the capacity to extend into naive, non-adapted regions, indicating the involvement of neural mechanisms beyond established models of simple neural adaptation. In this work, we realized the perceptual afterimage effect with a biologically plausible spiking neural network. We compared the results with experimental findings from human participants, demonstrating how the complex temporal evolution of a visual illusion can emerge from its constituent spiking dynamics. Our neural design models a wide range of phenomena, including positive, negative, and combined afterimage configurations, as well as the effects of alternating and open contours. By intrinsically incorporating the temporal dimension through its spiking dynamics, the model accurately reproduces the temporal evolution of the perceived color, including the alternating polarity observed with successive contours. We show that a single, unified, and biologically plausible spiking architecture can account for both veridical color and the complex set of contour-induced afterimage phenomena, suggesting that a common, active neural process, chromatic filling-in, is responsible for the different forms of perceived color. From an engineering perspective, our model exemplifies neuromorphic computational processing of event-based representations of visual data without reducing them to static frames, and enables systematic analysis of inference error and illusory afterimages through configurable parameters, offering conceptual guidance for designing bio-inspired neuromorphic imaging pipelines.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1766765</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1766765</link>
        <title><![CDATA[Learned adaptive properties for mitigation of weight perturbations in embedded spiking networks]]></title>
        <pubDate>11 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Sarah Luca</author><author>T. Patrick Xiao</author><author>Frances S. Chance</author><author>Sapan Agarwal</author><author>Corinne Teeter</author><author>G. William Chapman</author>
        <description><![CDATA[Recent years have seen an increased importance of neural network inference in edge-based scenarios, which impose size and power constraints that require novel computing devices. These same edge scenarios may require operating over long periods of time, or exposure to extreme environments, resulting in a drift of neural network weights that degrades performance. In search of neural network approaches that perform robustly under these conditions, we propose a biologically inspired mechanism for the dynamic adaptation of within-neuron parameters, guided by a global context signal carrying information about perturbations and variability in incoming stimuli. Specifically, we demonstrate that adaptive voltage thresholds or neuronal time constants, when informed by a global context signal, can enable network-level mechanisms to recover from perturbed synaptic weights. Consistent with prior literature, the context-modulated approach is effective for recurrent, but not feedforward, networks, because it modulates network-level dynamics. We demonstrate that this approach successfully recovers performance in image classification and spatiotemporal tracking tasks under idealized and Gaussian noise, as well as under realistic perturbations from a memristive device exposed to ionizing radiation. Finally, we discuss how this approach enables the design of robust and energy-efficient neuromorphic systems that perform well even in resource-constrained scenarios and extreme environments, such as edge processing.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1755119</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1755119</link>
        <title><![CDATA[Operational manifolds in spiking neural networks]]></title>
        <pubDate>18 Feb 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Szymon Mazurek</author><author>Jakub Caputa</author><author>Piotr Maj</author><author>Maciej Wielgosz</author>
        <description><![CDATA[Spiking Neural Networks (SNNs) can be more energy-efficient than conventional deep networks, but their performance and stability depend strongly on neuron hyperparameters and inference-time state handling. Here we study how leaky integrate-and-fire (LIF) parameters and deployment policies jointly shape operating regimes, accuracy–energy trade-offs, and robustness. We introduce the notion of an operational manifold: a contiguous region in neuron hyperparameter space where spiking activity remains balanced (neither silent nor saturated) while task performance is maintained. Focusing on the membrane time constant (τm) and firing threshold (Vth), we map this manifold via systematic grid sweeps across multiple architectures and datasets. To quantify efficiency, we estimate synaptic operation (SOP) cost during inference and define composite scores that couple normalized accuracy with SOP-based energy proxies, enabling the identification of accuracy–energy frontiers within the manifold. We further examine inference-time state handling by comparing reset and carry policies for membrane potentials. On static, i.i.d. inputs, reset improves accuracy for short inference windows, while carry introduces cross-sample interference that diminishes as the integration horizon grows, highlighting the importance of state management in streaming deployments. Furthermore, we analyze robustness through progressive input perturbations and show that leaving the operational manifold is accompanied by increased inter-neuronal spike-train correlations and more synchronized firing. Summary statistics of correlation distributions (including skewness, kurtosis, and tail mass) provide informative, label-free indicators of noise exposure and internal instability. 
Together, these results provide practical guidance for selecting neuron hyperparameters and inference policies that achieve energy-efficient and stable SNN operation, and they suggest correlation-based diagnostics as lightweight health monitors for deployed neuromorphic systems.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1697163</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1697163</link>
        <title><![CDATA[Evolving spiking neural networks: the role of neuron models and encoding schemes in neuromorphic learning]]></title>
        <pubDate>06 Feb 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Bastian Loyola-Jara</author><author>Gabriela Fernández-Rodríguez</author><author>Javier Baladron</author>
        <description><![CDATA[This study investigates the impact of neuron models and encoding schemes on the performance of spiking neural networks trained using the NeuroEvolution of Augmenting Topologies (NEAT) algorithm. By evaluating both classification and reinforcement learning tasks, we compare the performance of the Leaky Integrate-and-Fire (LIF) and Izhikevich neuron models across various input and output coding strategies. Our results demonstrate that the Izhikevich model outperforms the simpler LIF model in all but one task, where both showed comparable results. These findings emphasize that the choice of neuron model is as critical as the encoding scheme in neuromorphic learning and highlight the importance of task-specific configuration. The study also showcases the potential of simulation frameworks for prototyping and optimizing neuromorphic systems.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1772958</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1772958</link>
        <title><![CDATA[Bidirectional cross-day alignment of neural spikes and behavior using a hybrid SNN-ANN algorithm]]></title>
        <pubDate>04 Feb 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Binjie Hong</author><author>Zihang Xu</author><author>Tengyu Zhang</author><author>Tielin Zhang</author>
        <description><![CDATA[Recent advances in deep learning have enabled effective interpretation of neural activity patterns from electroencephalogram signals; however, challenges persist in invasive brain signals for cross-day neural decoding and simulation tasks. The inherent non-stationarity of neural dynamics and representational drift across recording sessions fundamentally limit the generalization capabilities of existing approaches. We present AlignNet, a novel framework that establishes cross-modal alignment between spiking patterns and behavioral semantics through U-based representation learning. Our architecture employs hybrid SNN-ANN autoencoders to encode neural spikes and behavior into a shared latent space, where the neural spike autoencoder incorporates multiple neuron nodes following convolution layers, and the behavior autoencoder comprises standard convolution layers. These two representations are optimized through contrastive objectives to achieve session-invariant feature learning. To address cross-day adaptation challenges, we introduce a pretraining strategy leveraging multi-session single monkey experiment data, followed by task-specific fine-tuning for neural decoding and simulation. Comprehensive evaluations demonstrate that AlignNet achieves superior performance under both single-day and cross-day conditions; meanwhile, our pretrained model effectively executes decoding and simulation tasks after fine-tuning. The hybrid SNN-ANN representations exhibit temporal consistency across multi-day recording spikes while retaining behavioral semantics, thereby advancing cross-day neural interface applications.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1738140</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1738140</link>
        <title><![CDATA[Spike-based Q-learning in a non-von Neumann architecture]]></title>
        <pubDate>03 Feb 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Donghyuk Shin</author><author>Hyeongcheol Jo</author><author>Hyeseung Jang</author><author>Yoo Ho Jeong</author><author>YeonJoo Jeong</author><author>Joon Young Kwak</author><author>Jongkil Park</author><author>Suyoun Lee</author><author>Inho Kim</author><author>Jong-Keuk Park</author><author>Seongsik Park</author><author>Hyun Jae Jang</author><author>Hyung-Min Lee</author><author>Jaewook Kim</author>
        <description><![CDATA[Non-von Neumann architectures overcome the memory-compute separation of von Neumann systems by distributing computation and memory locally, thereby reducing data-transfer bottlenecks and power consumption. These features are particularly advantageous for reinforcement learning (RL) workloads that rely on frequent value-function updates across large state-action spaces. When combined with event-driven spiking neural networks (SNNs), non-von Neumann architectures can further improve overall computational efficiency by leveraging the sparse nature of spike-based processing. In this study, we propose a hardware-feasible SNN-based non-von Neumann architecture that performs Q-learning, one of the most widely known reinforcement learning algorithms. The proposed architecture maps states and actions to individual neurons using one-hot encoding and locally stores each state–action pair's Q-value in the corresponding synapse. To enable each synapse to update its local Q-value based on the next state maximum Q stored in other synapses, a neuron group connected through a lateral inhibition structure is employed to produce the maximum Q, which is then globally transmitted to all synapses. A delay circuit is also added to align the next-state and current-state values to ensure temporally consistent updates. Each synapse locally generates a learning selection signal and combines it with the globally transmitted signals to update only the target synapse. The proposed architecture was validated through simulations on the Cart-pole benchmark, showing stable learning performance under low-bit precision and achieving comparable accuracy to software-based Q-learning with sufficient bit precision.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2026.1771268</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2026.1771268</link>
        <title><![CDATA[Editorial: Theoretical advances and practical applications of spiking neural networks, volume II]]></title>
        <pubDate>28 Jan 2026 00:00:00 GMT</pubDate>
        <category>Editorial</category>
        <author>Gaetano Di Caterina</author><author>Malu Zhang</author><author>Jundong Liu</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2025.1768235</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2025.1768235</link>
        <title><![CDATA[Astrocyte-gated multi-timescale plasticity for online continual learning in deep spiking neural networks]]></title>
        <pubDate>27 Jan 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Zhengshan Dong</author><author>Wude He</author>
        <description><![CDATA[Spiking Neural Networks (SNNs) offer a paradigm of energy-efficient, event-driven computation that is well-suited for processing asynchronous sensory streams. However, training deep SNNs robustly in an online and continual manner remains a formidable challenge. Standard Backpropagation-through-Time (BPTT) suffers from a prohibitive memory bottleneck due to the storage of temporal histories, while local plasticity rules often fail to balance the trade-off between rapid acquisition of new information and the retention of old knowledge (the stability-plasticity dilemma). Motivated by the tripartite synapse in biological systems, where astrocytes regulate synaptic efficacy over slow timescales, we propose Astrocyte-Gated Multi-Timescale Plasticity (AGMP). AGMP is a scalable, online learning framework that augments eligibility traces with a broadcast teaching signal and a novel astrocyte-mediated gating mechanism. This slow astrocytic variable integrates neuronal activity to dynamically modulate plasticity, suppressing updates in stable regimes while enabling adaptation during distribution shifts. We evaluate AGMP on a comprehensive suite of neuromorphic benchmarks, including N-Caltech101, DVS128 Gesture, and Spiking Heidelberg Digits (SHD). Experimental results demonstrate that AGMP achieves accuracy competitive with offline BPTT while maintaining constant O(1) temporal memory complexity. Furthermore, in rigorous Class-Incremental Continual Learning scenarios (e.g., Split CIFAR-100), AGMP significantly mitigates catastrophic forgetting without requiring replay buffers, outperforming state-of-the-art online learning rules. This work provides a biologically grounded, hardware-friendly path toward autonomous learning agents capable of lifelong adaptation.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2025.1735068</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2025.1735068</link>
        <title><![CDATA[An event-based opto-tactile skin]]></title>
        <pubDate>23 Jan 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Mohammadreza Koolani</author><author>Simeon Bamford</author><author>Petr Trunin</author><author>Simon F. Müller-Cleve</author><author>Matteo Lo Preti</author><author>Fulvio Mastrogiovanni</author><author>Lucia Beccai</author><author>Chiara Bartolozzi</author>
        <description><![CDATA[This paper presents a neuromorphic, event-driven tactile sensing system for soft, large-area skin, based on Dynamic Vision Sensors (DVS) integrated with a flexible silicone optical waveguide skin. Instead of repetitively scanning embedded photoreceivers, this design uses a stereo vision setup comprising two DVS cameras looking sideways through the skin. The design produces events as changes in brightness are detected, and estimates press positions on the 2D skin surface through triangulation, using Density-Based Spatial Clustering of Applications with Noise (DBSCAN) to find the center of mass of contact events resulting from pressing actions. The system is evaluated over a 4,620 mm² probed area of the skin using a meander raster scan. Across the 95% of presses visible to both cameras, press localization achieved a Root-Mean-Squared Error (RMSE) of 4.66 mm. The results highlight the potential of this approach for wide-area, flexible, and responsive tactile sensors in soft robotics and interactive environments. Moreover, we examined how the system performs when the amount of event data is strongly reduced. Using stochastic down-sampling, the event stream was reduced to 1/1,024 of its original size. Under this extreme reduction, the average localization error increased only slightly (from 4.66 mm to 9.33 mm), and the system still produced valid press localizations for 85% of the trials. This reduction in pass rate is expected, as some presses no longer produce enough events to form a reliable cluster for triangulation. These results show that the sensing approach remains functional even with very sparse event data, which is promising for reducing power consumption and computational load in future implementations. The system exhibits a detection latency distribution with a characteristic width of 31 ms.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2025.1658490</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2025.1658490</link>
        <title><![CDATA[Overcoming quadratic hardware scaling for a fully connected digital oscillatory neural network]]></title>
        <pubDate>15 Jan 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Bram F. Haverkort</author><author>Aida Todri-Sanial</author>
        <description><![CDATA[Computing with coupled oscillators, or oscillatory neural networks (ONNs), has recently attracted a lot of interest due to the potential for massive parallelism and energy-efficient computing. However, to date, ONNs have primarily been explored either analytically or through analog circuit implementations. This paper shifts the focus to the digital implementation of ONNs, examining various design architectures. We first report on an existing digital ONN design based on a recurrent architecture. The major challenge in scaling such recurrent architectures is the quadratic increase in coupling hardware with network size. To overcome this challenge, we introduce a novel hybrid architecture that balances serialization and parallelism in the coupling elements and shows near-linear hardware scaling with network size, with a scaling exponent of about 1.2. Furthermore, we evaluate the benefits and costs of these different digital ONN architectures in terms of time to solution and resource usage on field-programmable gate array (FPGA) emulation. The proposed hybrid architecture allows for a 10.5× increase in the number of oscillators while using 5 bits to represent the coupling weights and 4 bits to represent the oscillator phase on a Zynq-7020 FPGA board. The near-linear scaling is a major step toward implementing large-scale ONN architectures. To the best of our knowledge, this work presents the largest fully connected digital ONN architecture implemented thus far, with a total of 506 fully connected oscillators.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2025.1735027</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2025.1735027</link>
        <title><![CDATA[Sequential analysis and its applications to neuromorphic engineering]]></title>
        <pubDate>09 Jan 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Shivaram Mani</author><author>Saeed Afshar</author><author>Travis Monk</author>
        <description><![CDATA[Introduction: Neuromorphic circuits operate by comparing fluctuating signals to thresholds. This operation underpins sensing and computation in both neuromorphic architectures and biological nervous systems. Rigorous analysis of such systems is rarely attempted because the statistical tools to study them are both inaccessible and largely unknown to the neuromorphic community. Methods: We offer a gentle introduction to one such tool, sequential analysis, a classical framework that addresses a particular class of threshold-crossing problems. We define the formal problem analyzed in sequential analysis and present Abraham Wald's elegant methodology for solving it. Results: We then apply this framework to three examples in neuromorphic engineering, demonstrating how it can serve as a benchmark, proxy model, and design tool. Our introduction is understandable without prior training in probability or statistics. Discussion: Sequential analysis provides the statistical limits of circuit performance, tractable abstractions of complex circuit behavior, and constructive rules for circuit design. It establishes rigorous statistical baselines for evaluating hardware. It links low-level circuit parameters to observable dynamics, clarifying the computational role of neuromorphic architectures. By translating performance goals into optimal thresholds and design parameters, it offers principled prescriptions that go beyond empirical tuning.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2025.1716204</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2025.1716204</link>
        <title><![CDATA[A hybrid Spiking Neural Network–Transformer architecture for motor imagery and sleep apnea detection]]></title>
        <pubDate>12 Dec 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Duc Thien Pham</author><author>Maryam Khoshkhooy Titkanlou</author><author>Roman Mouček</author>
        <description><![CDATA[Introduction: Motor imagery (MI) classification and sleep apnea (SA) detection are two critical tasks in brain-computer interface (BCI) and biomedical signal analysis. Traditional deep learning models have shown promise in these domains, but often struggle with temporal sparsity and energy efficiency, especially in real-time or embedded applications. Methods: In this study, we propose SpiTranNet, a novel architecture that deeply integrates Spiking Neural Networks (SNNs) with Transformers through Spiking Multi-Head Attention (SMHA), where spiking neurons replace standard activation functions within the attention mechanism. This integration enables biologically plausible temporal processing and energy-efficient computation while maintaining global contextual modeling capabilities. The model is evaluated across three physiological datasets, including one electroencephalography (EEG) dataset for MI classification and two electrocardiography (ECG) datasets for SA detection. Results: Experimental results demonstrate that the hybrid SNN-Transformer model achieves competitive accuracy compared to conventional machine learning and deep learning models. Discussion: This work highlights the potential of neuromorphic-inspired architectures for robust and efficient biomedical signal processing across diverse physiological tasks.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2025.1746610</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2025.1746610</link>
        <title><![CDATA[Editorial: Algorithm-hardware co-optimization in neuromorphic computing for efficient AI]]></title>
        <pubDate>04 Dec 2025 00:00:00 GMT</pubDate>
        <category>Editorial</category>
        <author>Amirreza Yousefzadeh</author><author>Alberto Patiño-Saucedo</author><author>Guido De Croon</author><author>Manolis Sifalakis</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2025.1687815</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2025.1687815</link>
        <title><![CDATA[SSEL: spike-based structural entropic learning for spiking graph neural networks]]></title>
        <pubDate>28 Nov 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Shuangming Yang</author><author>Yuzhu Wu</author><author>Badong Chen</author>
        <description><![CDATA[Spiking Neural Networks (SNNs) offer transformative, event-driven neuromorphic computing with unparalleled energy efficiency, representing a third-generation AI paradigm. Extending this paradigm to graph-structured data via Spiking Graph Neural Networks (SGNNs) promises energy-efficient graph cognition, yet existing SGNN architectures exhibit critical fragility under adversarial topology perturbations. To address this challenge, this study presents the Spike-based Structural Entropy Learning framework (SSEL), which introduces structural entropy theory into the learning objectives of SGNNs. The core innovation establishes structural entropy-guided topology refinement: by minimizing structural entropy, we derive a sparse topological graph that intrinsically prunes noisy edges while preserving critical low-entropy connections. To further enforce robustness, we develop an entropy-driven topological gating mechanism that restricts spiking message propagation exclusively to entropy-optimized edges, systematically eliminating adversarial pathways. Crucially, this co-design strategy synergizes two sparsity sources: structural sparsity from the entropy-minimized graph topology and event-driven sparsity from spike-based computation. This dual mechanism not only ensures exceptional robustness (64.58% accuracy vs. 30.14% for the baseline under 0.1 salt-and-pepper noise) but also enables ultra-low energy consumption, achieving a 97.28% reduction compared to conventional GNNs while maintaining state-of-the-art accuracy (85.31% on Cora). This work demonstrates that the principled minimization of structural entropy is a powerful strategy for enhancing the robustness of Spiking Graph Neural Networks. The SSEL framework successfully mitigates the impact of adversarial topological perturbations while capitalizing on the energy-efficient nature of spike-based computation, underscoring the significant potential of combining information-theoretic graph principles with neuromorphic computing paradigms.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2025.1623497</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2025.1623497</link>
        <title><![CDATA[Review of deep learning models with Spiking Neural Networks for modeling and analysis of multimodal neuroimaging data]]></title>
        <pubDate>14 Nov 2025 00:00:00 GMT</pubDate>
        <category>Review</category>
        <author>Ayesha Khan</author><author>Vickie Shim</author><author>Justin Fernandez</author><author>Nikola K. Kasabov</author><author>Alan Wang</author>
        <description><![CDATA[Medical imaging has become an essential tool for identifying and treating neurological conditions. Traditional deep learning (DL) models have made tremendous advances in neuroimaging analysis; however, they face difficulties when modeling complicated spatiotemporal brain data. Spiking Neural Networks (SNNs), which are inspired by real neurons, provide a promising option for efficiently processing spatiotemporal data. This review discusses recent improvements in using SNNs for multimodal neuroimaging analysis. Quantitative and thematic analyses were conducted on 21 selected publications to assess trends, research topics, and geographical contributions. Results show that SNNs outperform traditional DL approaches in classification, feature extraction, and prediction tasks, especially when combining multiple modalities. Despite their potential, challenges of multimodal data fusion, computational demands, and limited large-scale datasets persist. We discuss the growth of SNNs in the analysis, prediction, and diagnosis of neurological data, and emphasize future directions and improvements toward more efficient and clinically applicable models.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2025.1729354</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2025.1729354</link>
        <title><![CDATA[Editorial: Novel memristor-based devices and circuits for neuromorphic and AI applications, volume II]]></title>
        <pubdate>2025-11-12T00:00:00Z</pubdate>
        <category>Editorial</category>
        <author>Heba Abunahla</author><author>Said Al-Sarawi</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2025.1667541</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2025.1667541</link>
        <title><![CDATA[TDE-3: an improved prior for optical flow computation in spiking neural networks]]></title>
        <pubdate>2025-11-03T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Matthew Yedutenko</author><author>Federico Paredes-Vallés</author><author>Lyes Khacef</author><author>Guido de Croon</author>
        <description><![CDATA[Motion detection is a primary task required for robotic systems to perceive and navigate in their environment. The bioinspired neuromorphic Time-Difference Encoder (TDE-2) proposed in the literature combines event-based sensors and processors with spiking neural networks to provide real-time and energy-efficient motion detection by extracting temporal correlations between two points in space. However, on the algorithmic level, this design leads to a loss of direction selectivity of individual TDEs in textured environments. In the present work, we propose an augmented 3-point TDE (TDE-3) with an additional inhibitory input that makes TDE-3 direction selectivity robust in textured environments. We developed a procedure to train the new TDE-3 using backpropagation through time and surrogate gradients to linearly map input velocities into an output spike count or an Inter-Spike Interval (ISI). Using synthetic data, we compared training and inference with spike count and ISI with respect to changes in stimulus dynamic range, spatial frequency, and level of noise. ISI turns out to be more robust toward variation in spatial frequency, whereas the spike count is a more reliable training signal in the presence of noise. We conducted an in-depth quantitative investigation of optical flow coding with TDEs and compared TDE-2 vs. TDE-3 in terms of energy efficiency and coding precision. The results show that at the network level, both detectors achieve similar precision (20° angular error, 88% correlation with the ground truth). However, due to the more robust direction selectivity of individual TDEs, the TDE-3 based network spikes less and is hence more energy efficient. The reported precision is on par with model-based methods, but the spike-based processing of the TDEs allows more energy-efficient inference on neuromorphic hardware. Additionally, we employed TDE-2 and TDE-3 to estimate ego-motion and obtained results competitive with those achieved by neural networks with 1.5 × 10⁵ parameters.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2025.1652274</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2025.1652274</link>
        <title><![CDATA[Spiking neural networks for EEG signal analysis using wavelet transform]]></title>
        <pubdate>2025-10-16T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Li Yuan</author><author>Jian Wei</author><author>Ying Liu</author>
        <description><![CDATA[Introduction: Brain-computer interfaces (BCIs) leverage EEG signal processing to enable human-machine communication and have broad application potential. However, existing deep learning-based BCI methods face two critical limitations that hinder their practical deployment: reliance on manual EEG feature extraction, which constrains their ability to adaptively capture complex neural patterns, and high energy consumption characteristics that make them unsuitable for resource-constrained portable BCI devices requiring edge deployment. Methods: To address these limitations, this work combines wavelet transform for automatic feature extraction with spiking neural networks for energy-efficient computation. Specifically, we present a novel spiking transformer that integrates a spiking self-attention mechanism with discrete wavelet transform, termed SpikeWavformer. SpikeWavformer enables automatic EEG signal time-frequency decomposition, eliminates manual feature extraction, and provides energy-efficient classification decision-making, thereby enhancing the model's cross-scene generalization while meeting the constraints of portable BCI applications. Results: Experimental results demonstrate the effectiveness and efficiency of SpikeWavformer in emotion recognition and auditory attention decoding tasks. Discussion: These findings indicate that SpikeWavformer can address the key limitations of existing BCI methods and holds promise for practical deployment in portable, resource-constrained scenarios.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fnins.2025.1665778</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fnins.2025.1665778</link>
        <title><![CDATA[Balancing accuracy and efficiency: co-design of hybrid quantization and unified computing architecture for spiking neural networks]]></title>
        <pubdate>2025-10-15T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Jiahao Li</author><author>Ming Xu</author><author>Heng Dong</author><author>Bin Lan</author><author>Yuxin Liu</author><author>He Chen</author><author>Yin Zhuang</author><author>Yizhuang Xie</author><author>Liang Chen</author>
        <description><![CDATA[The deployment of Spiking Neural Networks (SNNs) on resource-constrained edge devices is hindered by a critical algorithm-hardware mismatch: a fundamental trade-off between the accuracy degradation caused by aggressive quantization and the resource redundancy stemming from traditional decoupled hardware designs. To bridge this gap, we present a novel algorithm-hardware co-design framework centered on a Ternary-8-bit Hybrid Weight Quantization (T8HWQ) scheme. Our approach recasts SNN computation into a unified "8-bit × 2-bit" paradigm by quantizing first-layer weights to 2 bits and subsequent layers to 8 bits. This standardization directly enables the design of a unified processing element (PE) architecture, eliminating the resource redundancy inherent in decoupled designs. To mitigate the accuracy degradation caused by aggressive first-layer quantization, we first propose a channel-wise dual compensation strategy. This method synergizes channel-wise quantization optimization with adaptive threshold neurons, leveraging reparameterization techniques to restore model accuracy without incurring additional inference overhead. Building upon T8HWQ, we propose a novel unified computing architecture that overcomes the inefficiencies of traditional decoupled designs by efficiently multiplexing processing arrays. Experimental results support our approach: on CIFAR-100, our method achieves near-lossless accuracy (<0.7% degradation vs. full precision) with a single time step, matching state-of-the-art low-bit SNNs. At the hardware level, implementation results on the Xilinx Virtex 7 platform demonstrate that our unified computing unit conserves 20.2% of lookup table (LUT) resources compared to traditional decoupled architectures. This work delivers a 6× throughput improvement over state-of-the-art SNN accelerators, with comparable resource utilization and lower power consumption. Our integrated solution thus advances the practical implementation of high-performance, low-latency SNNs on resource-constrained edge devices.]]></description>
      </item>
      </channel>
    </rss>