OPINION article

Front. Comput. Neurosci., 08 January 2026

Volume 19 - 2025 | https://doi.org/10.3389/fncom.2025.1737839

This article is part of the Research Topic: Neuromorphic and Deep Learning Paradigms for Neural Data Interpretation and Computational Neuroscience

Bridging neuromorphic computing and deep learning for next-generation neural data interpretation


Manyun Zhang1,2, Tianlei Wang3* and Zhiyuan Zhu2*
  • 1Key Laboratory of Songliao Aquatic Environment, Ministry of Education, Jilin Jianzhu University, Changchun, China
  • 2College of Electronic Engineering, Southwest University, Chongqing, China
  • 3Tianjin Key Laboratory of Building Green Functional Materials, Tianjin Chengjian University, Tianjin, China

1 Introduction

The rapid advancement of electrophysiological techniques, brain imaging, and brain–machine interfaces (BMIs) has ushered neuroscience into an era of data explosion. Confronted with neural data that are high-dimensional, highly nonlinear, and exhibit complex temporal dependencies, conventional statistical and signal processing methods—often reliant on linear assumptions or low-dimensional projections—struggle to reveal the true mechanisms of brain activity (Livezey and Glaser, 2021). In response to this challenge, deep learning (DL) and neuromorphic computing (NC) have emerged as two promising yet conceptually distinct computational paradigms. Deep learning has demonstrated remarkable capabilities in data-driven modeling, achieving significant breakthroughs in neural signal decoding and cognitive state identification. However, its inherent limitations—high energy consumption, limited interpretability, and low biological plausibility—restrict its deeper application in computational neuroscience. In contrast, neuromorphic computing, inspired by the event-driven and local plasticity properties of biological neural systems, offers unique advantages in low-power adaptive processing. Nonetheless, it still faces challenges in training algorithms and scalability (Ivanov et al., 2022; Schuman et al., 2022). To address these complementary shortcomings, this article proposes a hybrid framework that integrates neuromorphic computing with deep learning, aiming to harmonize biological plausibility with high computational performance, thereby opening new pathways for developing next-generation models and tools for neural data interpretation (Liu et al., 2024).

2 Biological inspiration of neuromorphic computing

Neuromorphic computing seeks to emulate the structural and functional organization of the brain, enabling computational systems to operate in ways that resemble biological neural processing. Its central idea is to incorporate event-driven and asynchronous communication, allowing computation to occur only when an event is triggered rather than at fixed time intervals (Davies et al., 2021). This mechanism markedly reduces redundant operations and energy consumption, mirroring the physiological principle by which neurons fire only when their membrane potential surpasses a threshold.
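To make the threshold principle concrete, the sketch below simulates a leaky integrate-and-fire (LIF) neuron in plain Python/NumPy; the time constant, threshold, and input drive are illustrative assumptions, not values from any cited hardware.

```python
# Minimal sketch (illustrative parameters): a leaky integrate-and-fire neuron
# that emits a discrete event only when its membrane potential crosses a
# threshold, i.e., event-driven rather than clock-driven computation.
import numpy as np

def lif_simulate(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Simulate an LIF neuron and return the spike times (step indices)."""
    v = v_rest
    spike_times = []
    for t, i_t in enumerate(input_current):
        # Leaky integration: dv/dt = (-(v - v_rest) + i) / tau
        v += dt * (-(v - v_rest) + i_t) / tau
        if v >= v_thresh:            # threshold crossing -> fire an event
            spike_times.append(t)
            v = v_reset              # reset membrane after the spike
    return spike_times

rng = np.random.default_rng(0)
drive = rng.uniform(0.0, 2.0, size=1000)   # noisy input drive
print(lif_simulate(drive)[:10])            # sparse events, no fixed clock ticks
```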

At the heart of neuromorphic computing lies the Spiking Neural Network (SNN), which represents information through discrete electrical impulses, or spikes, that are temporally encoded to convey meaning (Guo et al., 2022; Michaelis et al., 2022). Learning in SNNs is commonly governed by Spike-Timing-Dependent Plasticity (STDP)—a local rule that adjusts synaptic strength based on the precise timing of pre- and postsynaptic spikes. This biologically grounded mechanism enables SNNs to capture causal relationships in neural activity, making them more faithful to real neural dynamics than conventional Artificial Neural Networks (ANNs). Consequently, SNNs excel in handling temporal sequences, sparse representations, and real-time responses.
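As a minimal illustration of the STDP rule described above, the following sketch computes the weight change for a single pre/post spike pair using the common exponential-window form; the amplitudes and time constants are illustrative assumptions rather than values from the article.

```python
# Minimal sketch of pair-based STDP with exponential windows; parameters are
# illustrative assumptions.
import numpy as np

def stdp_dw(delta_t_ms, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair.

    delta_t_ms = t_post - t_pre. Pre-before-post (> 0) potentiates;
    post-before-pre (<= 0) depresses, matching the causal logic of STDP.
    """
    if delta_t_ms > 0:
        return a_plus * np.exp(-delta_t_ms / tau_plus)
    return -a_minus * np.exp(delta_t_ms / tau_minus)

for dt in (-40, -10, 10, 40):
    print(f"delta_t = {dt:+d} ms -> dw = {stdp_dw(dt):+.5f}")
```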

In recent years, hardware implementations such as Intel Loihi, IBM TrueNorth, and BrainScaleS have demonstrated the promise of neuromorphic architectures for low-power, massively parallel computation (Davies et al., 2021; Michaelis et al., 2022). For example, the Loihi chip integrates on-chip plasticity circuits that support localized learning, achieving power efficiency several orders of magnitude better than traditional GPUs. Simultaneously, rapid advances in memristor technology have introduced new opportunities for neuromorphic hardware (Guo et al., 2022; Duan et al., 2024; Xiao et al., 2024). Memristors—devices that exhibit non-volatility, tunable conductance, and synapse-like behavior—enable in-memory computing, merging data storage and processing to emulate synaptic functionality directly on the chip.
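The in-memory computing idea can be sketched as follows: in a memristor crossbar, stored conductances double as synaptic weights, and applying input voltages yields output currents that physically compute a vector-matrix product via Ohm's and Kirchhoff's laws. The simulation below mimics this with a plain dot product; all device values are illustrative assumptions.

```python
# Minimal sketch of crossbar-style in-memory computing: the multiply-accumulate
# happens where the weights are stored. Device values are illustrative.
import numpy as np

rng = np.random.default_rng(3)
G = rng.uniform(1e-6, 1e-4, size=(4, 8))  # conductances (siemens), one memristor per crosspoint
v_in = rng.uniform(0.0, 0.5, size=4)      # input voltages on the 4 row lines

i_out = v_in @ G    # each column current sums V * G contributions (Kirchhoff)
print(i_out)        # analog vector-matrix product computed "in memory"
```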

Despite these advances, several challenges remain. The discrete nature of spikes makes training difficult, as standard backpropagation cannot be directly applied. Scalability also remains a concern: current systems struggle to maintain learning efficiency and robustness in large-scale data environments. Furthermore, the ecosystem of algorithms and software frameworks is still immature, lacking standardized interfaces for widespread adoption (Schuman et al., 2022). A recent Nature report highlights key breakthroughs in inter-chip communication, scalable architecture, and event-driven scheduling, signaling an important step toward large-scale, next-generation brain-inspired computing systems (Kudithipudi et al., 2025).

3 Breakthroughs and limitations of deep learning in neural data analysis

As the dominant paradigm in contemporary artificial intelligence, deep learning (DL) has achieved remarkable success in the analysis of neural data owing to its hierarchical feature extraction and nonlinear approximation capabilities (Livezey and Glaser, 2021; Zhou et al., 2024). Architectures such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Transformers have been widely applied to the decoding of electroencephalography (EEG), local field potentials (LFP), and functional magnetic resonance imaging (fMRI) signals. These models can automatically extract multilayer representations, enabling the discovery of hidden relationships between neural activity and cognitive states.

In practical applications, CNNs have demonstrated high accuracy and robustness in EEG-based emotion recognition and brain–machine interface (BMI) command classification tasks (Li et al., 2024). RNNs and Long Short-Term Memory (LSTM) networks have exhibited outstanding capabilities in modeling the temporal dynamics of neural signals and predicting brain activity patterns (Bittar and Garner, 2022). Moreover, Transformer-based architectures have shown a superior ability to capture global dependencies across multimodal neural datasets, significantly improving the interpretability and scalability of neural decoding frameworks (Chen et al., 2023). Collectively, these advances indicate that deep learning not only facilitates complex pattern recognition in brain data but also provides statistical insights into the structure of neural processes.
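For concreteness, the sketch below shows the kind of compact 1D CNN often used for EEG trial classification; the channel count, window length, and four-class setup are illustrative assumptions, not the architecture of any study cited above.

```python
# A hedged sketch of a compact 1D CNN for EEG classification; layer sizes and
# the 4-class setup are illustrative assumptions.
import torch
import torch.nn as nn

class EEGConvNet(nn.Module):
    def __init__(self, n_channels=22, n_samples=256, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),  # temporal filters
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                    # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = EEGConvNet()
dummy = torch.randn(8, 22, 256)              # 8 trials, 22 electrodes, 256 samples
print(model(dummy).shape)                    # torch.Size([8, 4])
```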

However, the advantages of deep learning are accompanied by several intrinsic limitations. First, artificial neural networks rely on continuous activation functions and global backpropagation, which diverge from the local learning and synaptic plasticity observed in biological neural systems, resulting in limited biological plausibility. Second, the computational process of deep networks depends on large-scale matrix multiplications and parallel processing, leading to extremely high power consumption—orders of magnitude greater than that of biological brains. Third, deep models often suffer from poor interpretability; although their predictive performance is high, the internal representations rarely align with specific neurophysiological structures or mechanisms. In addition, conventional deep learning models lack temporal precision, making it difficult to accurately capture spike-based neural dynamics with millisecond resolution (Klos and Memmesheimer, 2025; Guo et al., 2023).

Therefore, while deep learning has proven powerful in feature extraction and cognitive modeling of neural data, achieving a genuine transition from “prediction” to “understanding” requires the incorporation of computational paradigms that align more closely with neurophysiological mechanisms. Neuromorphic computing offers a biologically inspired and energy-efficient complement to deep learning, paving the way toward a more interpretable and power-efficient framework for neural data modeling.

4 A hybrid framework integrating neuromorphic computing and deep learning

To reconcile the strong representational capacity of deep learning with the biological plausibility of neuromorphic computing, this work proposes a Hybrid Neuromorphic–Deep Learning Framework (Liu et al., 2024; Zhao et al., 2022). Figure 1 illustrates the overall conceptual framework of this hybrid approach. The framework integrates event-driven spiking computation with end-to-end deep feature learning, forming a multilayer neural data-processing system that is interpretable, energy-efficient, and aligned with neurophysiological dynamics.

Figure 1. Conceptual framework of integrating neuromorphic computing and deep learning for neural data interpretation. (a) Neuromorphic system schematic showing the correspondence between biological neurons and memristor-based artificial synapses, illustrating spiking behavior and spike-timing-dependent plasticity (STDP). (b) Hybrid learning layer demonstrating the integration of spiking neural networks (SNNs) and artificial neural networks (ANNs) into a unified hybrid architecture for event-to-vector transformation. (c) Deep learning module showing representative deep architectures, including Transformer, RNN, and CNN, applied to neural decoding and cognitive modeling. (d) Application layer depicting memristor-based artificial intelligence chips and their potential uses in neural signal analysis, brain–machine interfaces, and cognitive modeling. (a) Adapted with permission from Guo et al. (2022). Copyright 2022, Royal Society of Chemistry. (b) Reprinted with permission from Livezey and Glaser (2021). Copyright 2021, Oxford University Press. (c) Reproduced with permission from Zhao et al. (2022). Copyright 2022, Nature Publishing Group. (d) Reproduced in part with permission under a Creative Commons License from Sun et al. (2023). Copyright 2023, American Chemical Society.

At the architectural level, the framework consists of three major components: a neuromorphic front-end, a hybrid learning layer, and a deep interpretation back-end. The neuromorphic front-end extracts event-driven signals—such as spike trains—from multimodal neural recordings while performing noise suppression and initial temporal encoding (Jin et al., 2025). Through the use of spiking neural networks (SNNs) or memristor-based neuromorphic circuits, this stage enables low-power, real-time preprocessing at the hardware level (Duan et al., 2024; Jin et al., 2025). Recent progress in two-dimensional-material neuromorphic chips has further enhanced efficiency, sensitivity, and scalability, offering robust hardware support for hybrid neural architectures (Zhang et al., 2025).
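One simple way such a front-end can convert a continuous recording into sparse, event-driven signals is delta (threshold-crossing) encoding, sketched below; the threshold and the synthetic LFP-like signal are illustrative assumptions.

```python
# Minimal sketch of delta (threshold-crossing) encoding: emit an event only
# when the signal has changed enough since the last event. Parameters and the
# synthetic signal are illustrative assumptions.
import numpy as np

def delta_encode(signal, threshold=0.1):
    """Return (time_index, polarity) events for changes exceeding `threshold`."""
    events = []
    reference = signal[0]
    for t, x in enumerate(signal[1:], start=1):
        if x - reference >= threshold:
            events.append((t, +1))          # upward crossing event
            reference = x
        elif reference - x >= threshold:
            events.append((t, -1))          # downward crossing event
            reference = x
    return events

t = np.linspace(0, 1, 500)
lfp_like = np.sin(2 * np.pi * 5 * t) \
    + 0.05 * np.random.default_rng(1).standard_normal(500)
events = delta_encode(lfp_like)
print(len(events), events[:5])   # far fewer events than raw samples
```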

The hybrid learning layer serves as the interface between the neuromorphic and deep-learning modules. Its role is to map sparse spike-based events into higher-level feature vectors. This can be achieved using surrogate-gradient optimization or biologically inspired local plasticity rules (Zhou et al., 2024; Chen et al., 2023), integrating local learning mechanisms with global gradient-based adaptation for improved interpretability and flexibility.
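The surrogate-gradient idea can be sketched in a few lines of PyTorch: the forward pass keeps the non-differentiable spike, while the backward pass substitutes a smooth surrogate derivative so gradients can cross the spiking layer. The fast-sigmoid surrogate and its slope below are common but illustrative choices, not the specific rule of any cited work.

```python
# Minimal sketch of surrogate-gradient training through a hard threshold.
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0).float()              # hard threshold: spike if v >= 0

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        slope = 10.0                         # illustrative surrogate steepness
        surrogate = 1.0 / (1.0 + slope * v.abs()) ** 2  # fast-sigmoid derivative
        return grad_output * surrogate

v = torch.randn(5, requires_grad=True)
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()
print(spikes, v.grad)                        # binary output, smooth gradient
```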

To operationalize the event-to-vector transformation within this layer, several established spike-to-vector conversion and differentiable training strategies can be adopted. For example, Meng et al. (2022) introduced a differentiable spike-representation learning method that maps temporal spike sequences into continuous vector spaces, enabling cross-domain transformation from event-based signals to feature embeddings. Similarly, Zhang et al. (2023) demonstrated that surrogate-gradient–based direct training in hybrid SNN–ANN networks can effectively extract and vectorize event-driven features. Furthermore, hybrid neural frameworks such as the Hybrid Neural Network (HNN) proposed by Zhao et al. (2022) have verified that constructing learnable interaction layers between ANNs and SNNs enables efficient cross-domain feature projection, providing a feasible technical route for implementing the hybrid module in our framework. In addition, the analysis by Liu et al. (2024) of the mechanisms and information flow in hybrid neural systems offers further theoretical support for the mapping strategy adopted here.
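As a concrete, hedged example of one such event-to-vector mapping, the sketch below convolves each unit's spike train with an exponential kernel (a synaptic "trace") and reads out the final trace as a continuous feature vector; the time constant and firing statistics are illustrative assumptions.

```python
# Minimal sketch of one event-to-vector strategy: leaky accumulation of spikes
# into a continuous trace readable by a deep back-end. Parameters illustrative.
import numpy as np

def spikes_to_vector(spike_trains, tau=30.0):
    """spike_trains: (n_units, n_timesteps) binary array -> (n_units,) vector."""
    n_units, n_steps = spike_trains.shape
    trace = np.zeros(n_units)
    decay = np.exp(-1.0 / tau)
    for t in range(n_steps):
        trace = trace * decay + spike_trains[:, t]  # leaky accumulation of events
    return trace

rng = np.random.default_rng(2)
spikes = (rng.random((16, 200)) < 0.05).astype(float)  # 16 units, ~5% firing prob
print(spikes_to_vector(spikes).round(2))               # continuous feature vector
```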

The deep interpretation back-end applies advanced neural models—such as Transformers, graph convolutional networks (GCNs), or recurrent neural networks (RNNs)—for pattern recognition, brain-state decoding, and cognitive representation learning. This stage further benefits from pre-training and knowledge-transfer mechanisms, improving generalization and abstraction quality (Liu et al., 2024; Zhao et al., 2022). Information flows from event-driven spike streams at the neuromorphic front-end, through the hybrid mapping layer, to high-level cognitive inference in the deep model. With a closed-loop design, feedback from the deep model can dynamically regulate neuromorphic parameters, enabling online learning and forming an adaptive, self-optimizing neural system.
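A conceptual sketch of how the three stages could compose is given below; this is an assumption-level illustration, not the authors' implementation: windowed spike counts stand in for the event front-end, a learnable linear embedding plays the role of the hybrid layer, and a small Transformer encoder serves as the deep interpretation back-end.

```python
# Hedged end-to-end sketch: event front-end -> hybrid embedding -> deep back-end.
import torch
import torch.nn as nn

class HybridDecoder(nn.Module):
    def __init__(self, n_units=64, d_model=64, n_classes=3):
        super().__init__()
        self.to_embedding = nn.Linear(n_units, d_model)      # hybrid event-to-vector layer
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backend = nn.TransformerEncoder(layer, num_layers=2)  # deep back-end
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, spike_windows):        # (batch, time_windows, n_units) spike counts
        z = self.to_embedding(spike_windows)
        h = self.backend(z)
        return self.head(h.mean(dim=1))      # pooled brain-state logits

model = HybridDecoder()
spike_counts = torch.poisson(torch.full((2, 20, 64), 0.3))  # mock event-driven input
print(model(spike_counts).shape)             # torch.Size([2, 3])
```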

Compared with the HNN framework introduced by Zhao et al. (2022), which primarily targets efficient heterogeneous ANN–SNN co-inference, the proposed framework extends beyond coupling mechanisms to incorporate a neuromorphic sensory front-end and a deep interpretive back-end. This results in a cross-scale architecture spanning event encoding, hybrid feature learning, and neurodynamics-aligned interpretation. Consequently, the present framework emphasizes event-driven representation, biological interpretability, and alignment with neurophysiological mechanisms, offering broader conceptual scope and greater relevance for neural data analysis.

This hybrid framework offers several notable advantages. Event-driven computation reduces redundant operations and energy consumption. Spike-based representations correspond directly to neuronal firing patterns, improving interpretability. Deep neural models provide scalable abstraction for high-dimensional neural datasets. Finally, the integration of local plasticity with online adaptation imparts robustness and flexibility, forming a promising foundation for next-generation self-adaptive brain-machine systems.

5 Discussion and outlook

The integration of neuromorphic computing and deep learning represents not only a technological complementarity but also a profound paradigm shift in computational neuroscience (Liu et al., 2024; Meng et al., 2022). This cross-level integration provides a systematic pathway for neural information processing that bridges biological inspiration and high-dimensional modeling, allowing models to simultaneously capture neurodynamic plausibility and the abstract representational power of deep networks. Through this hybrid framework, computational neuroscience is gradually transitioning from mere signal fitting toward functional interpretation of neural mechanisms.

In the domain of neural encoding and decoding, hybrid models can more accurately characterize the dynamic firing patterns of neuronal populations, enabling precise recognition of complex neural processes such as motor intention, perceptual representation, and cognitive load (Livezey and Glaser, 2021). By combining the temporal precision of spike-based events with the hierarchical representations of deep networks, researchers can achieve a multi-scale description of neural information, uncovering the diversity and plasticity inherent in neural coding.

For brain connectivity modeling, integrating spike-driven event models with Graph Neural Networks (GNNs) offers a promising approach to uncover causal interactions and functional topologies among neural circuits (Zhao et al., 2022). Such frameworks not only facilitate the reconstruction of dynamic brain networks but also provide computational insights for the early diagnosis and intervention of neurological disorders such as epilepsy and Alzheimer's disease.

In brain–machine interfaces (BMIs) and neural rehabilitation, the hybrid architecture enables real-time signal decoding on low-power neuromorphic hardware, supporting adaptive and online-learning-based neural communication systems (Duan et al., 2024; Sun et al., 2023). By combining the efficient temporal processing of spiking neural networks with the high-level pattern recognition capability of deep learning, these systems can dynamically adjust decoding strategies while maintaining energy efficiency—thereby enhancing self-learning capabilities for intelligent neuroprosthetic control and rehabilitation.

From a hardware–intelligence co-design perspective, the rapid progress of memristor technology and three-dimensional integrated circuits is driving a deep convergence between neuromorphic chips and AI accelerators (Xiao et al., 2024; Jin et al., 2025). This cross-layer collaboration enables brain-inspired learning and adaptive cognition in edge-computing environments, making it feasible to achieve efficient, real-time neural computation directly on localized devices.

Looking forward, the core objective of computational neuroscience will revolve around achieving a balanced trade-off between energy efficiency, interpretability, and scalability. By combining the biological realism of neuromorphic computing with the abstraction capabilities of deep learning, a new generation of neural intelligent systems may emerge—systems that not only elucidate the computational principles and information flow of the brain but also drive the advancement of adaptive intelligent chips, brain-inspired computing platforms, and next-generation brain–machine interface technologies.

Despite the significant potential of the hybrid neuromorphic–deep learning framework, several limitations remain that must be addressed in future research. First, event-driven encoding is inherently sensitive to noise and may not be suitable for neural recording modalities with low temporal resolution or high measurement noise—such as calcium imaging or fMRI—which restricts its applicability across modalities. Second, training hybrid ANN–SNN systems often incurs substantial computational overhead. Cross-domain gradient propagation can introduce instability, and the field still lacks a unified strategy for multimodal fusion across spike-based and continuous representations. Third, current neuromorphic hardware faces practical challenges, including variability and limited reproducibility in memristive devices, as well as bandwidth constraints in inter-chip communication. These factors hinder large-scale deployment and stable on-chip training. Finally, although the deep interpretation module enables powerful high-dimensional feature abstraction, its biological interpretability remains imperfect and cannot yet fully align with real neurophysiological mechanisms. Therefore, applying this framework to real neural data analysis and brain–machine interface systems will require careful balancing among algorithmic design, hardware implementation, and neuroscientific validation.

6 Conclusion

We argue that the fusion of neuromorphic computing and deep learning constitutes a paradigm shift for computational neuroscience, moving the field beyond isolated algorithms toward a holistic paradigm that embraces cross-level abstraction, biological plausibility, and stringent energy constraints (Liu et al., 2024). The synergistic integration of event-driven processing with deep hierarchical learning is pivotal, enabling not only a more profound interpretation of neural dynamics but also the co-design of intelligent and energy-efficient hardware. This hybrid framework establishes a new foundation for future research, poised to significantly advance our capabilities in decoding neural computation, diagnosing neurological disorders, and engineering adaptive, brain-inspired intelligence (Zhao et al., 2022; Sun et al., 2023).

Author contributions

MZ: Writing – original draft. TW: Supervision, Writing – review & editing. ZZ: Conceptualization, Funding acquisition, Supervision, Writing – review & editing.

Funding

The author(s) declared that financial support was received for this work and/or its publication. This research was supported by the Open Fund Program of the Key Laboratory of Songliao Aquatic Environment, Ministry of Education, Jilin Jianzhu University (No. JLJUSLKF042024010), the Open Foundation of the Tianjin Key Laboratory of Building Green Functional Materials (JZ-2023006), and the New Chongqing Youth Innovation Talent Project (CSTB2024NSCQ-QCXMX0072).

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was not used in the creation of this manuscript.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Bittar, A., and Garner, P. N. (2022). A surrogate gradient spiking baseline for speech command recognition. Front. Neurosci. 16:865897. doi: 10.3389/fnins.2022.865897

Chen, T., Wang, S., Gong, Y., Wang, L., and Duan, S. (2023). Surrogate gradient scaling for directly training spiking neural networks. Appl. Intell. 53, 27966–27981. doi: 10.1007/s10489-023-04966-x

Davies, M., Wild, A., Orchard, G., Sandamirskaya, Y., Fonseca Guerra, G. A., and Joshi, P. (2021). Advancing neuromorphic computing with Loihi: a survey of results and outlook. Proc. IEEE 109, 911–934. doi: 10.1109/JPROC.2021.3067593

Duan, X., Cao, Z., Gao, K., Yan, W., Sun, S., Zhou, G., et al. (2024). Memristor-based neuromorphic chips. Adv. Mater. 36:e2310704. doi: 10.1002/adma.202310704

Guo, T., Pan, K., Jiao, Y., Sun, B., Du, C., Mills, J. P., et al. (2022). Versatile memristor for memory and neuromorphic computing. Nanoscale Horiz. 7, 299–310. doi: 10.1039/D1NH00481F

Guo, Y., Huang, X., and Ma, Z. (2023). Direct learning-based deep spiking neural networks: a review. Front. Neurosci. 17:1209795. doi: 10.3389/fnins.2023.1209795

Ivanov, D., Chezhegov, A., Kiselev, M., Grunin, A., and Larionov, D. (2022). Neuromorphic artificial intelligence systems. Front. Neurosci. 16:959626. doi: 10.3389/fnins.2022.959626

Jin, B., Wang, Z., Wang, T., and Meng, J. (2025). Memristor-based artificial neural networks for hardware neuromorphic computing. Research 8:0758. doi: 10.34133/research.0758

Klos, C., and Memmesheimer, R.-M. (2025). Smooth exact gradient descent learning in spiking neural networks. Phys. Rev. Lett. 134:027301. doi: 10.1103/PhysRevLett.134.027301

Kudithipudi, D., Schuman, C. D., Vineyard, C. M., Pandit, T., Merkel, C., Kubendran, R., et al. (2025). Neuromorphic computing at scale. Nature 637, 801–812. doi: 10.1038/s41586-024-08253-8

Li, Y., Zhao, F., Zhao, D., and Zeng, Y. (2024). Directly training temporal spiking neural network for neural decoding. arXiv [Preprint] arXiv:2406.19645v1. doi: 10.2139/ssrn.4580621

Liu, F., Zheng, H., Ma, S., Zhang, W., Liu, X., Chua, Y., et al. (2024). Advancing brain-inspired computing with hybrid neural networks. Natl. Sci. Rev. 11:nwae066. doi: 10.1093/nsr/nwae066

Livezey, J. A., and Glaser, J. I. (2021). Deep learning approaches for neural decoding across architectures and recording modalities. Brief Bioinform. 22, 1577–1591. doi: 10.1093/bib/bbaa355

Meng, Q., Xiao, Z., Yan, S., Wang, Y., Lin, Z., and Luo, Z.-Q. (2022). Training high-performance low-latency spiking neural networks by differentiation on spike representation. arXiv [Preprint] arXiv:2205.00459. doi: 10.48550/arXiv.2205.00459

Michaelis, C., Lehr, A. B., Oed, W., and Tetzlaff, C. (2022). Brian2Loihi: an emulator for the neuromorphic chip Loihi using the spiking neural network simulator Brian. Front. Neuroinform. 16:1015624. doi: 10.3389/fninf.2022.1015624

Schuman, C. D., Kulkarni, S. R., Parsa, M., Mitchell, J. P., Date, P., and Kay, B. (2022). Opportunities for neuromorphic computing algorithms and applications. Nat. Comput. Sci. 2, 10–19.

Sun, B., Chen, Y., Zhou, G., Cao, Z., Yang, C., Du, J., et al. (2023). Memristor-based artificial chips. ACS Nano. 18, 14–27. doi: 10.1021/acsnano.3c07384

Xiao, Y., Gao, C., Jin, J., Sun, W., Wang, B., Bao, Y., et al. (2024). Recent progress in neuromorphic computing from memristive devices to chips. Adv. Devices Instrum. 5:0044. doi: 10.34133/adi.0044

Zhang, G., Luo, Q., Yao, J., Zhong, S., Wang, H., Xue, F., et al. (2025). All-in-one neuromorphic hardware with 2D material technology: current status and future perspective. Chem. Soc. Rev. 54, 8196–8242. doi: 10.1039/D5CS00251F

Zhang, Y., Xiang, T., Liu, Q., Han, Y., Guo, X., and Hao, Y. (2023). Hybrid spiking fully convolutional neural network for semantic segmentation. Electronics 12:3565. doi: 10.3390/electronics12173565

Zhao, R., Yang, Z., Zheng, H., Wu, Y., Liu, F., Wu, Z., et al. (2022). A framework for the general design and computation of hybrid neural networks. Nat. Commun. 13:3427. doi: 10.1038/s41467-022-30964-7

Zhou, C., Zhang, H., Yu, L., Ye, Y., Zhou, Z., Huang, L., et al. (2024). Direct training high-performance deep spiking neural networks: a review of theories and methods. Front. Neurosci. 18:1383844. doi: 10.3389/fnins.2024.1383844

Keywords: brain–machine interface, computational neuroscience, deep learning, hybrid neural networks, neural decoding, neuromorphic computing

Citation: Zhang M, Wang T and Zhu Z (2026) Bridging neuromorphic computing and deep learning for next-generation neural data interpretation. Front. Comput. Neurosci. 19:1737839. doi: 10.3389/fncom.2025.1737839

Received: 02 November 2025; Revised: 03 December 2025;
Accepted: 09 December 2025; Published: 08 January 2026.

Edited by:

Chenglong Zou, Peking University, China

Reviewed by:

A. N. M. Nafiul Islam, Alfred University, United States

Copyright © 2026 Zhang, Wang and Zhu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Zhiyuan Zhu, zyuanzhu@swu.edu.cn; Tianlei Wang, wangtl@tcu.edu.cn
