<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0">
      <channel xmlns:content="http://purl.org/rss/1.0/modules/content/">
        <title>Frontiers in Signal Processing | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/signal-processing</link>
        <description>RSS Feed for Frontiers in Signal Processing | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator, version 1</generator>
        <pubDate>Thu, 23 Apr 2026 12:04:36 +0000</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1761302</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1761302</link>
        <title><![CDATA[A patch-wise deep residual network (PwDRU-Net102) for multimodal MRI brain tumor segmentation]]></title>
        <pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Manu Singh</author><author>Tanu Singh</author><author>Vinod Patidar</author>
        <description><![CDATA[Gliomas are among the most severe types of brain tumors and can be life-threatening without early detection. Accurate and timely segmentation of brain tumors from MRI scans is crucial for effective treatment planning; however, it remains challenging due to significant variation in tumor shape, size, and location. This paper proposes a 2D Patch-wise Deep Residual U-Net with 102 convolutional layers for automatic tumor segmentation. The approach divides MRI scans into uniform, non-overlapping patches to achieve precise localization and better preserve local features. Residual blocks with identity mapping help mitigate vanishing gradient issues, while dropout layers reduce overfitting during training. T1, T2, and FLAIR modalities from the BraTS 2019 and 2020 datasets were used to evaluate the model. Experimental results show high segmentation accuracy on BraTS 2020, with Dice Similarity Coefficients (DSC) of 0.9136 for the whole tumor (WT), 0.7143 for the tumor core (TC), and 0.7028 for the enhancing tumor (ET). The paper demonstrates that patch-wise deep residual architectures, even with limited training data, can deliver reliable and robust brain tumor segmentation.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1728615</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1728615</link>
        <title><![CDATA[Temporal convolutional network architectures: a novel simultaneous spatio-temporal model for comparative analysis]]></title>
        <pubDate>Fri, 17 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Milad Jabbari</author><author>Eisa Aghchehli</author><author>Chenfei Ma</author><author>Kianoush Nazarpour</author>
        <description><![CDATA[Introduction: Conventional temporal-based deep learning models often fail to extract inter-channel information from electromyographic (EMG) signals. Existing spatio-temporal approaches typically combine spatial and temporal networks sequentially, but this strategy increases model complexity and parameter count. Method: We introduce a simultaneous spatio-temporal convolutional deep network, which integrates spatial and temporal feature extraction connections within a single, explainable deep network. Results: To evaluate the new architecture through a comprehensive comparative analysis, we compared its performance and model size with three other established decoding methods, using two internal and two publicly available EMG databases. We report that applying convolutional filters in both the spatial and temporal directions simultaneously enhances myoelectric decoding accuracy. Finally, we explain the proposed model using the saliency-map method. Discussion: The findings indicate that the proposed simultaneous spatio-temporal configuration offers reliable classification performance and is well suited for real-time on-board deployment. The proposed model explains how simultaneous spatio-temporal convolution enhances the contribution of both the temporal and spatial components of EMG activity, resulting in improved classification performance.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1794236</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1794236</link>
        <title><![CDATA[Dual-prediction and adaptive complexity for reversible watermarking of color images]]></title>
        <pubDate>Thu, 16 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Chen Cui</author><author>Li Li</author><author>Hao Du</author><author>Wen Wang</author>
        <description><![CDATA[To expand the application scope of digital watermarking for color images widely used in social communication, this paper proposes a reversible watermarking scheme for color images. This scheme integrates bidirectional prediction correction embedding, dual-dimensional complexity evaluation, and multi-threshold joint control to achieve adaptive optimization prediction of embedding parameters. First, by defining reference and associated channels, bidirectional correction is achieved through forward utilization of the reference image to predict and embed data into associated images, while simultaneously utilizing associated images backward to optimize and correct points with large disturbances during the embedding of the reference image. Second, a dual-dimensional complexity evaluation model is constructed by fusing local variance and visual saliency features, which accurately characterizes both macroscopic edge structures and microscopic texture details of images to precisely localize optimal embedding regions. Finally, multi-threshold joint constraints facilitate adaptive selection of optimal embedding sizes, mitigating distortion induced by capacity expansion and invalid pixel shifts. Experimental results demonstrate that the proposed scheme outperforms state-of-the-art methods, adapting better to the content characteristics of color images and achieving a superior balance between image quality and embedding performance.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1778118</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1778118</link>
        <title><![CDATA[Automatic monitoring herbage prehensions in grazing cows using audio signals and deep learning techniques]]></title>
        <pubDate>Tue, 14 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Roberta Avanzato</author><author>Ludovica Beritelli</author><author>Salvatore Bognanno</author><author>Francesco Beritelli</author><author>Marcella Avondo</author>
        <description><![CDATA[Background: Accurate monitoring of feeding behavior in grazing ruminants, particularly the detection of prehension events, is a central challenge for Precision Livestock Farming (PLF). Traditional methods, such as accelerometers, show limitations in the reliable identification of individual events. Acoustic analysis based on deep learning is emerging as a non-invasive and promising alternative. Methods: This study presents two main contributions: (i) a web-based software platform (built on React.js and TensorFlow.js) for the annotation, visualization, and in-browser inference of audio signals; (ii) a comparative analysis of several 2D-CNN architectures (DenseNet-121, ResNet-101, EfficientNet-B7, and YOLO11s-cls) for the classification of prehension events. Models were trained and tested on a dataset of logarithmic spectrograms (500 ms) derived from audio recordings acquired via collars on cattle. Results: Analysis revealed high performance across all architectures. Although DenseNet-121 achieved the highest weighted metrics (Accuracy 83.7%, AUC 0.90), the YOLO11s-cls model demonstrated remarkable competitiveness, achieving nearly identical accuracy (83.1%) but with significantly superior computational efficiency (4.5 ms inference time). Crucially for field applications, YOLO exhibited excellent rejection of non-relevant sounds, with a 91% Specificity on the “no-prehension” class. Conclusions: The study validates the efficacy of spectrogram-based 2D-CNNs for ingestion monitoring and identifies YOLO as a promising candidate for efficiency-oriented deployment scenarios, offering a favorable trade-off between predictive reliability and low-latency requirements. The developed platform further supports this transition from research to in-field application.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1827692</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1827692</link>
        <title><![CDATA[Editorial: Emerging optimization, learning and signal processing for next-generation wireless communications and networking]]></title>
        <pubDate>Thu, 09 Apr 2026 00:00:00 +0000</pubDate>
        <category>Editorial</category>
        <author>Dionysis Kalogerias</author><author>Le Liang</author><author>Mark Eisen</author><author>Athina Petropulu</author><author>Leandros Tassiulas</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1779355</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1779355</link>
        <title><![CDATA[Optical character recognition based document image quality assessment]]></title>
        <pubDate>Thu, 09 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>R. Krithika</author><author>J. Joshan Athanesious</author><author>S. Kiruthika</author>
        <description><![CDATA[Optical Character Recognition (OCR) systems play a crucial role in digitizing documents. However, their performance significantly deteriorates when handling low-quality images. Even advanced OCR systems struggle if the input is visually or structurally poor. Therefore, achieving high OCR accuracy requires assessing document image quality in terms of how well characters can be recognized, not just visual clarity. In this work, we propose a Document Image Quality Assessment (DIQA) model that predicts OCR accuracy without requiring the actual execution of an OCR engine. To assess document image quality for OCR performance, twelve distinct features are extracted that capture various aspects of sharpness, focus, edge clarity, and structural distortion. Instead of relying on subjective human opinion scores, we generate labels by measuring the actual OCR accuracy using modern engines like PaddleOCR and Keras OCR. These accuracy scores, calculated using the Levenshtein distance, serve as ground truth labels for training. Using the extracted features and corresponding OCR-based labels, we train the machine learning models to learn the relationship between image characteristics and OCR performance. The proposed models are evaluated using statistical metrics such as RMSE, PLCC, and SROCC to determine the most effective predictor. Our experiments demonstrate the importance of using OCR scores as labels, and the results show that our approach yields improved performance compared to existing baseline methodologies.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1843671</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1843671</link>
        <title><![CDATA[Retraction: Bayesian nonparametric learning and knowledge transfer for object tracking under unknown time-varying conditions]]></title>
        <pubDate>Wed, 08 Apr 2026 00:00:00 +0000</pubDate>
        <category>Retraction</category>
        
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1776807</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1776807</link>
        <title><![CDATA[Self-face viewing attenuates cardiac modulation of corticospinal excitability]]></title>
        <pubDate>Tue, 07 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Milana Makarova</author><author>Nikita Fedosov</author><author>Irina Mikhailova</author><author>Maria Nikolaeva</author><author>Alexei Ossadtchi</author><author>Alexey Tumyalis</author><author>Maria Volodina</author>
        <description><![CDATA[Introduction: While self-referential attention is thought to enhance interoceptive sensitivity, its effect on cardiac modulation of corticospinal excitability remains unexplored. This pilot study investigated how viewing one’s own face (self-face processing) modulates the cardiac-phase coupling of motor output and whether this heart-brain coupling depends on interoceptive accuracy (heartbeat perception). Methods: In 15 healthy adults, motor-evoked potentials (MEPs) were elicited via transcranial magnetic stimulation (TMS) at three fixed time points following the R-peak (0, 250, and 500 ms) during presentation of either self-face or other-face pictures. A Modulation Index was derived from log-transformed MEPs to quantify cardiac-phase modulation strength. Interoceptive accuracy was assessed via a heartbeat-counting task. Results: Contrary to the hypothesis that self-face viewing would enhance cardiac–motor coupling through inward attentional focus, self-face processing significantly reduced the overall magnitude of cardiac-phase modulation. This attenuation was most pronounced at 0 ms and 250 ms post-R-peak, corresponding to the systolic phase. Across conditions, higher interoceptive accuracy predicted stronger modulation, though this relationship showed a tendency toward attenuation during self-face viewing (interaction p = 0.059). Discussion: The results of this pilot TMS study suggest that, in a task requiring explicit evaluation of facial stimuli, self-face viewing acts as a potent exteroceptive stimulus that diverts attention away from interoceptive signals, thereby weakening the cardiac-cycle influence on motor excitability. These findings highlight the context-dependency of self-processing effects and suggest a possible link between HCT-based interoceptive accuracy and heart–brain–body coupling.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1715921</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1715921</link>
        <title><![CDATA[The role of signal preprocessing on the discriminability of canonical time-series characteristics and classification among individuals with and without Parkinson’s disease during serious game interaction]]></title>
        <pubDate>Tue, 24 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Maria Fernanda Soares de Almeida</author><author>Ariana Moura Cabral</author><author>Leandro Rodrigues da Silva Souza</author><author>Mila Figueira Nozella</author><author>Camille Marques Alves</author><author>Pedro Henrique Bernardes Caetano Milken</author><author>Maria Olivia Domingos Rezio Ramos</author><author>Luanne Cardoso Mendes</author><author>Adriano de Oliveira Andrade</author>
        <description><![CDATA[Introduction: Over the past decade, there has been a significant increase in studies using biomedical signals for objective monitoring of Parkinson’s disease (PD) motor symptoms. Inertial sensors are widely employed to record motion, producing time-series data that capture the underlying motor condition of patients. A major challenge in the field is classifying these signals to discriminate healthy subjects from PD individuals and distinguish motor conditions among patients. While many studies focus on feature classification, there is a lack of research on the influence of signal preprocessing. Methods: To fill this gap, we evaluate data from healthy subjects and PD patients during interaction with the RehaBEElitation serious game. We employed the catch22 feature set to extract robust time-series characteristics. To evaluate the influence of preprocessing on classification between healthy individuals and patients in on and off medication states, four strategies were adopted. Results: Initially, features extracted from raw data showed limited accuracy due to noise and voluntary movements. Subsequent interpolation to address discontinuities produced inconsistent results. The third strategy involved wavelet decomposition, which effectively mitigated trends and motion artifacts, resulting in a significant increase in accuracy across all models and confirming the vital role of sophisticated signal filtering. The fourth strategy combined interpolation and wavelet decomposition, achieving the best results with optimal separation (Accuracy = 100.0%) in binary classification and significant improvement in the multi-class problem. Discussion: Our findings establish that signal conditioning is pivotal for maximizing discriminative power. To further validate our findings, we benchmarked our pipeline against the RandOm Convolutional KErnel Transform (ROCKET) using a RidgeClassifierCV. The catch22 with Random Forest (RF) classifier, using a wavelet-based approach, achieved a balanced accuracy of 76.0% in the multiclass task, demonstrating superior performance compared to the ROCKET + RidgeClassifierCV framework (69.0%) while maintaining a more compact and computationally efficient feature representation.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1727948</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1727948</link>
        <title><![CDATA[Implementation of selected ISO/IEC 29794-5 measures and proposing alternatives]]></title>
        <pubDate>Tue, 17 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Paulina Otlik</author><author>Wojciech Wodo</author>
        <description><![CDATA[Face recognition is currently one of the most popular forms of biometric verification. As the effectiveness and security of this solution increase, so does its use in specialized fields. Considering that the verification process involves thousands of people, often under varying lighting conditions and with equipment of different parameters, biometric samples are of mixed quality. Therefore, there is a need to define the conditions under which a biometric sample is objectively good for a face recognition system. To address this, the international standard ISO/IEC 29794-5:2025 was developed; it defines quality measures along with a description of a suggested implementation in which the majority of the substantive work has already been completed. The aim of this work is to provide a non-proprietary implementation of the ISO/IEC 29794-5 standard for face image quality assessment and to compare its performance against the OFIQ reference implementation. More broadly, this study examines the common challenge that biometric standards sometimes propose ideas that may not be the most effective in real-life operational scenarios. This paper includes the implementation of two systems for assessing face image quality based on selected measures from the standard. The first system follows the implementation suggested directly by the standard, while the second utilizes the latest scientific and commercial solutions. Ultimately, these systems are compared using a database of photographs differentiated by demographics and quality.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1691777</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1691777</link>
        <title><![CDATA[EEG-based cognitive load estimation during the use of a virtual wheelchair simulator]]></title>
        <pubDate>Mon, 16 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Débora Pereira Salgado</author><author>Felipe Roque Martins</author><author>Angela Abreu Rosa de Sá</author><author>Ronan Flynn</author><author>Niall Murray</author><author>Eduardo Lázaro Martins Naves</author>
        <description><![CDATA[Introduction: Driving a powered wheelchair is a complex task that requires the integration of motor, visual, and cognitive skills. The development of assistive technologies without appropriate assessment methods that help bridge the gap between users and developers may lead to abandonment and reduced engagement. Most assessments rely on explicit measures, such as performance metrics, or subjective tools like interviews and questionnaires. In contrast, implicit measures allow continuous inference of mental states during task execution. This study proposes the use of blink indices derived from electroencephalographic (EEG) signals as implicit metrics to estimate cognitive load during the use of a virtual reality wheelchair training simulator. Methods: A total of 25 participants (14 females and 11 males; mean age 26.50 ± 5.7 years) completed a predefined route using a virtual wheelchair simulator. Blink parameters, including frequency, duration, and velocity, were extracted from EEG signals during task performance. After completing the simulation, participants responded to the NASA Task Load Index (NASA-TLX) to assess subjective cognitive load, as well as the System Usability Scale (SUS) and the Igroup Presence Questionnaire (IPQ). Results: The findings showed that higher mental-visual demand was associated with decreases in blink frequency, duration, and velocity. Correlation analyses between NASA-TLX scores and blink parameters revealed weak to moderate associations. These results suggest partial convergence between subjective and physiological measures of cognitive load. Discussion: Blink-based indices derived from EEG signals provide relevant information regarding cognitive demand during wheelchair simulator use. However, blink parameters alone are insufficient to reliably infer cognitive load. When combined with subjective questionnaires, implicit physiological metrics may offer a more comprehensive assessment than questionnaires alone, supporting the development and refinement of assistive training technologies.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1761293</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1761293</link>
        <title><![CDATA[A graph generation pipeline for critical infrastructures based on heuristics, images and depth data]]></title>
        <pubDate>Fri, 06 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Mike Diessner</author><author>Yannick E. Tarant</author>
        <description><![CDATA[Virtual representations of physical critical infrastructures, such as water or energy plants, are used for simulations and digital twins to ensure resilience and continuity of their services. These models usually require 3D point clouds from laser scanners that are expensive to acquire and require specialist knowledge to use. In this article, we present a prototypical graph generation pipeline based on photogrammetry. The pipeline detects relevant objects and predicts their relations using RGB images and depth data generated by a stereo camera. This more cost-effective approach uses deep learning for object detection and instance segmentation of the objects, and employs user-defined heuristics or rules to infer their relations. Results on two hydraulic systems show that this strategy can produce graphs close to the ground truth. While this study focuses on hydraulic systems, the general process can be used to tailor the method to other types of infrastructures and applications. The user-defined rules create transparency, qualifying the pipeline for use in the high-stakes decision-making required for critical infrastructures.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1745291</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1745291</link>
        <title><![CDATA[Equation-level parameterized fusion reformulation for multimodal epileptic seizure detection using interaction control and data-quality screening]]></title>
        <pubDate>Fri, 06 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Abdul-Mumin Khalid</author><author>Musah Sulemana</author><author>Iddrisu Wahab Abdul</author>
        <description><![CDATA[Epileptic seizure detection remains challenging due to noise, inter-subject variability, and the poor generalization ability of unimodal learning models. To address these limitations, this study proposes an equation-level multimodal fusion reformulation for epileptic seizure detection that integrates EEG, ECG, EMG, and ACC signals using adaptive parameterized fusion and interaction control. The framework introduces four interpretable parameters: a fusion exponent (ρ), an interaction weight (δ), a stabilization factor (λ), and a synergy amplifier (η), which jointly regulate modality contribution, nonlinear cross-modal interaction, numerical stability, and synergistic enhancement within a unified mathematical formulation applicable to both traditional and deep learning models. The study is conducted on a multimodal dataset comprising recordings from 120 clinically diagnosed epilepsy patients, including 60 patients from Tamale Teaching Hospital and 60 from publicly available datasets. Signals were sampled at 512 Hz and segmented into 2-second windows with 50% overlap, yielding approximately 1,024,000 labeled samples. A formal Data Quality Assurance (DQA) model and a Novel Cosine Similarity (NCS) index were employed to assess signal reliability and cross-source alignment prior to fusion. Twelve machine learning and deep learning classifiers were evaluated using a strict patient-wise data split to prevent data leakage. Experimental results demonstrate consistent performance improvements across all models following equation-level reformulation. Traditional machine learning models improved from baseline accuracies of approximately 55–67% to 82–92%, while deep learning models improved from 70–82% to 89–97.9%, with the Transformer-based model achieving the highest performance. These results confirm that equation-level multimodal fusion provides a generalizable, interpretable, and computationally efficient approach for robust epileptic seizure detection.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1764383</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1764383</link>
        <title><![CDATA[Airborne IMT users in precision agriculture: Monte-Carlo analysis of UAV interference in 694–2690 MHz bands]]></title>
        <pubDate>Thu, 05 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Alexandr Solochshenko</author><author>Karina Turzhanova</author><author>Alexander Pastukh</author><author>Valery Tikhvinskiy</author><author>Yelizaveta Vitulyova</author>
        <description><![CDATA[The integration of unmanned aerial vehicles (UAVs) into precision agriculture, as envisioned in modern agricultural systems, promises significant gains in crop monitoring, yield forecasting, and targeted agro-technical interventions. However, the use of IMT frequency bands for real-time UAV communications introduces new spectrum sharing and compatibility challenges. Unlike terrestrial user equipment, airborne agricultural drones always operate outdoors, above the base-station downtilt, with predominantly line-of-sight (LoS) propagation to multiple cells, drastically altering compatibility conditions and potentially increasing interference to other operators. This paper proposes a Monte Carlo-based simulation framework for analyzing the interference generated by such UAVs in IMT frequency allocations across 694–2690 MHz. Simulations model rural and urban macrocell deployments typical of large-scale farmlands, incorporating 3D antenna patterns, altitude-dependent air-to-ground channel models, realistic LTE/NR power-control schemes, and UAV operational patterns. Key metrics include aggregate uplink interference at victim cells, downlink degradation at UAVs, and cross-link interference in TDD systems. Results show that even low-power UAV transmissions can exceed harmful interference thresholds in multiple adjacent-channel cells. Operational recommendations are provided to ensure coexistence of precision-agriculture UAVs with terrestrial IMT networks.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1680796</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1680796</link>
        <title><![CDATA[Alzheimer’s detection using discrete wavelet transform based image fusion and vision information transformer]]></title>
        <pubDate>Wed, 04 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Amar A. Dum</author><author>Kshama V. Kulhalli</author><author>Priyanka Singh</author>
        <description><![CDATA[Alzheimer’s disease (AD) is the most prevalent form of dementia and a major cause of mortality among older adults. Magnetic resonance imaging (MRI) and positron emission tomography (PET) are commonly used for AD diagnosis. Despite extensive research, the accuracy of automated detection methods remains limited. This study proposes a highly accurate AD classification model by integrating complementary information from MRI and PET scans. The images are fused using a discrete wavelet transform (DWT), augmented, and subsequently classified using a Vision Transformer (ViT). Comprehensive evaluation across nine performance metrics shows that the proposed ViT-based framework achieves 97.68% accuracy, surpassing benchmark transfer learning models and state-of-the-art methods. Ablation studies and comparative analysis further confirm the robustness and reliability of the proposed approach for AD detection.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2025.1729918</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2025.1729918</link>
        <title><![CDATA[Residual life prediction of progressive failure bearings based on NGO-AVMD hybrid domain features]]></title>
        <pubDate>Mon, 16 Feb 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Jiong Zhou</author>
        <description><![CDATA[Accurate bearing Remaining Useful Life (RUL) prediction is vital for equipment availability, cost reduction, and safety. Existing data-driven methods often yield insufficient accuracy due to single-scale feature extraction and poor differentiation of failure modes. This paper proposes a hybrid-domain feature extraction method, integrating original vibration signals with Adaptive Variational Mode Decomposition optimized by Northern Goshawk Optimization (NGO-AVMD) reconstructed signals and additional deep features. These mixed-domain features are used to compute a health index that effectively distinguishes progressive and sudden bearing failure modes. Focusing on progressive degradation, a multi-attention Temporal Convolutional Network (TCN) is then employed for RUL prediction, using these features as input. Validated on the PHM2012 dataset, the method achieves an R2 of 98%, demonstrating its high accuracy in bearing life prediction.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1792985</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1792985</link>
        <title><![CDATA[Editorial: MmWave technologies as opportunistic ISAC for environmental monitoring]]></title>
        <pubDate>Fri, 13 Feb 2026 00:00:00 +0000</pubDate>
        <category>Editorial</category>
        <author>Congzheng Han</author><author>Jonatan Ostrometzky</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2025.1715540</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2025.1715540</link>
        <title><![CDATA[Singular spectrum analysis of near-infrared spectroscopy signal classification for mental arithmetic and rest state]]></title>
        <pubDate>Wed, 11 Feb 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Kanwardeep Singh Gahlot</author><author>Anukul Pandey</author><author>Sachin Taran</author><author>Rahul Thakur</author>
        <description><![CDATA[A brain–computer interface (BCI) creates a direct communication link between the human brain and a computer. The premise behind near-infrared spectroscopy (NIRS) is that increased oxygen consumption in the brain leads to increased blood flow due to nerve connections. NIRS is a non-invasive procedure; changes in oxyhemoglobin (Oxy-Hb) and deoxyhemoglobin (Deoxy-Hb) parameters can be easily utilized to detect brain hemodynamics. This study is based on the Oxy-Hb parameter to classify mental arithmetic and rest states of the brain using singular spectrum analysis (SSA). SSA yields a better-denoised signal and decomposes it into principal components for analysis of these states. Oxy- and Deoxy-Hb patterns are temporary and unstable, so features such as power bandwidth, entropy, and complexity were extracted for classification. The reported accuracy in existing methods is 79.4% for antagonistic single-trial classification and 86.9% for graph NIRS methods. The present study’s mean accuracy was 98.4% based on a set of selected features using filtering detrending (FD)-SSA, thus reducing the cost of misclassification. Finally, classification models were evaluated using Matthew’s correlation coefficient, precision, F1-score, and recall, resulting in 0.889, 0.968, 0.966, and 0.963, respectively.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1783015</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1783015</link>
        <title><![CDATA[Correction: Convergence analysis of hyperparameter-free MCC-based channel-estimation for mmWave MIMO systems]]></title>
        <pubDate>Tue, 03 Feb 2026 00:00:00 +0000</pubDate>
        <category>Correction</category>
        <author>Vimal Bhatia</author><author>Rajat Kumar</author><author>Rangeet Mitra</author><author>Sandesh Jain</author><author>Vidya Bhasker Shukla</author><author>K. Venkateswaran</author><author>Ondrej Krejcar</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1724468</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1724468</link>
        <title><![CDATA[Research on heart rate estimation algorithm based on dynamic PPG]]></title>
        <pubDate>Mon, 02 Feb 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Jiawei Guo</author><author>Shiyuan Chen</author><author>Ting Lan</author><author>Ruochen Li</author><author>Lichao Wang</author><author>Yunchong Wu</author><author>Jun Zhong</author><author>Wei Zhu</author>
        <description><![CDATA[Heart rate is one of the most vital physiological parameters and is widely used clinically to assess human health status. In recent years, wearable devices based on photoplethysmography (PPG) have been extensively applied in real-time monitoring. However, PPG signals are susceptible to interference from various types of noise during acquisition, particularly motion artifacts (MA), which pose a significant challenge to the accurate extraction of physiological parameters. This study focuses on heart rate extraction from dynamic PPG signals and explores denoising methods combining traditional signal processing and machine learning techniques. The main contributions of this paper are as follows: existing algorithms are further improved by integrating a support vector machine (SVM). A more comprehensive signal quality assessment is performed via the SVM, which incorporates time-domain and frequency-domain statistical characteristics of both PPG signals and triaxial acceleration (ACC) signals. In addition, the short-time Fourier transform (STFT) is integrated to capture time-varying characteristics, thereby mitigating the impact of local signal quality degradation on the analysis of full-window signals. For spectral peak tracking, a Gaussian window is adopted to optimize the spectral search range, and a comprehensive analysis is conducted by fusing spectral amplitude information with historical heart rate data. Experimental results demonstrate that the heart rate error on the test set is 1.71 beats per minute (BPM).]]></description>
      </item>
      </channel>
    </rss>