<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0">
      <channel xmlns:content="http://purl.org/rss/1.0/modules/content/">
        <title>Frontiers in Signal Processing | Biomedical Signal Processing section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/signal-processing/sections/biomedical-signal-processing</link>
        <description>RSS Feed for Biomedical Signal Processing section in the Frontiers in Signal Processing journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>2026-05-04T04:23:19.378+00:00</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1728615</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1728615</link>
        <title><![CDATA[Temporal convolutional network architectures: a novel simultaneous spatio-temporal model for comparative analysis]]></title>
        <pubDate>2026-04-17T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Milad Jabbari</author><author>Eisa Aghchehli</author><author>Chenfei Ma</author><author>Kianoush Nazarpour</author>
        <description><![CDATA[Introduction: Conventional temporal deep learning models often fail to extract inter-channel information from electromyographic (EMG) signals. Existing spatio-temporal approaches typically combine spatial and temporal networks sequentially, but this strategy increases model complexity and parameter count. Method: We introduce a simultaneous spatio-temporal convolutional deep network, which integrates spatial and temporal feature extraction connections within a single, explainable deep network. Results: To evaluate the new architecture through a comprehensive comparative analysis, we compared its performance and model size with three other established decoding methods, using two internal and two publicly available EMG databases. We report that applying convolutional filters in the spatial and temporal directions simultaneously enhances myoelectric decoding accuracy. Finally, we explain the proposed model using the saliency-maps method. Discussion: The findings indicate that the proposed simultaneous spatio-temporal configuration offers reliable classification performance and is well suited for real-time on-board deployment. The proposed model shows how simultaneous spatio-temporal convolution enhances the contribution of both temporal and spatial components of EMG activity, resulting in improved classification performance.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1776807</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1776807</link>
        <title><![CDATA[Self-face viewing attenuates cardiac modulation of corticospinal excitability]]></title>
        <pubDate>2026-04-07T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Milana Makarova</author><author>Nikita Fedosov</author><author>Irina Mikhailova</author><author>Maria Nikolaeva</author><author>Alexei Ossadtchi</author><author>Alexey Tumyalis</author><author>Maria Volodina</author>
        <description><![CDATA[Introduction: While self-referential attention is thought to enhance interoceptive sensitivity, its effect on cardiac modulation of corticospinal excitability remains unexplored. This pilot study investigated how viewing one’s own face (self-face processing) modulates the cardiac-phase coupling of motor output and whether this heart-brain coupling depends on interoceptive accuracy (heartbeat perception). Methods: In 15 healthy adults, motor-evoked potentials (MEPs) were elicited via transcranial magnetic stimulation (TMS) at three fixed time points following the R-peak (0, 250, and 500 ms) during presentation of either self-face or other-face pictures. A Modulation Index was derived from log-transformed MEPs to quantify cardiac-phase modulation strength. Interoceptive accuracy was assessed via a heartbeat-counting task. Results: Contrary to the hypothesis that self-face viewing would enhance cardiac–motor coupling through inward attentional focus, self-face processing significantly reduced the overall magnitude of cardiac-phase modulation. This attenuation was most pronounced at 0 ms and 250 ms post-R-peak, corresponding to the systolic phase. Across conditions, higher interoceptive accuracy predicted stronger modulation, though this relationship showed a tendency toward attenuation during self-face viewing (interaction p = 0.059). Discussion: The results of this pilot TMS study suggest that, in a task requiring explicit evaluation of facial stimuli, self-face viewing acts as a potent exteroceptive stimulus that diverts attention away from interoceptive signals, thereby weakening the cardiac-cycle influence on motor excitability. These findings highlight the context-dependency of self-processing effects and suggest a possible link between HCT-based interoceptive accuracy and heart-brain-body coupling.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1715921</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1715921</link>
        <title><![CDATA[The role of signal preprocessing on the discriminability of canonical time-series characteristics and classification among individuals with and without Parkinson’s disease during serious game interaction]]></title>
        <pubDate>2026-03-24T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Maria Fernanda Soares de Almeida</author><author>Ariana Moura Cabral</author><author>Leandro Rodrigues da Silva Souza</author><author>Mila Figueira Nozella</author><author>Camille Marques Alves</author><author>Pedro Henrique Bernardes Caetano Milken</author><author>Maria Olivia Domingos Rezio Ramos</author><author>Luanne Cardoso Mendes</author><author>Adriano de Oliveira Andrade</author>
        <description><![CDATA[Introduction: Over the past decade, there has been a significant increase in studies using biomedical signals for objective monitoring of Parkinson’s disease (PD) motor symptoms. Inertial sensors are widely employed to record motion, producing time-series data that capture the underlying motor condition of patients. A major challenge in the field is classifying these signals to discriminate healthy subjects from PD individuals and to distinguish motor conditions among patients. While many studies focus on feature classification, there is a lack of research on the influence of signal preprocessing. Methods: To fill this gap, we evaluated data from healthy subjects and PD patients during interaction with the RehaBEElitation serious game. We employed the catch22 feature set to extract robust time-series characteristics. To evaluate the influence of preprocessing on classification between healthy individuals and patients in on- and off-medication states, four strategies were adopted. Results: Initially, features extracted from raw data showed limited accuracy due to noise and voluntary movements. Subsequent interpolation to address discontinuities produced inconsistent results. The third strategy involved wavelet decomposition, which effectively mitigated trends and motion artifacts, resulting in a significant increase in accuracy across all models and confirming the vital role of sophisticated signal filtering. The fourth strategy combined interpolation and wavelet decomposition, achieving the best results with optimal separation (Accuracy = 100.0%) in binary classification and significant improvement in the multi-class problem. Discussion: Our findings establish that signal conditioning is pivotal for maximizing discriminative power. To further validate our findings, we benchmarked our pipeline against the RandOm Convolutional KErnel Transform (ROCKET) using a RidgeClassifierCV. The catch22 with Random Forest (RF) classifier, using a wavelet-based approach, achieved a balanced accuracy of 76.0% in the multi-class task, demonstrating superior performance compared to the ROCKET-RidgeClassifierCV framework (69.0%) while maintaining a more compact and computationally efficient feature representation.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1691777</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1691777</link>
        <title><![CDATA[EEG-based cognitive load estimation during the use of a virtual wheelchair simulator]]></title>
        <pubDate>2026-03-16T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Débora Pereira Salgado</author><author>Felipe Roque Martins</author><author>Angela Abreu Rosa de Sá</author><author>Ronan Flynn</author><author>Niall Murray</author><author>Eduardo Lázaro Martins Naves</author>
        <description><![CDATA[Introduction: Driving a powered wheelchair is a complex task that requires the integration of motor, visual, and cognitive skills. Developing assistive technologies without appropriate assessment methods to bridge the gap between users and developers may lead to abandonment and reduced engagement. Most assessments rely on explicit measures, such as performance metrics, or subjective tools like interviews and questionnaires. In contrast, implicit measures allow continuous inference of mental states during task execution. This study proposes the use of blink indices derived from electroencephalographic (EEG) signals as implicit metrics to estimate cognitive load during the use of a virtual reality wheelchair training simulator. Methods: A total of 25 participants (14 females and 11 males; mean age 26.50 ± 5.7 years) completed a predefined route using a virtual wheelchair simulator. Blink parameters, including frequency, duration, and velocity, were extracted from EEG signals during task performance. After completing the simulation, participants responded to the NASA Task Load Index (NASA-TLX) to assess subjective cognitive load, as well as the System Usability Scale (SUS) and the Igroup Presence Questionnaire (IPQ). Results: The findings showed that higher mental-visual demand was associated with decreases in blink frequency, duration, and velocity. Correlation analyses between NASA-TLX scores and blink parameters revealed weak to moderate associations. These results suggest partial convergence between subjective and physiological measures of cognitive load. Discussion: Blink-based indices derived from EEG signals provide relevant information regarding cognitive demand during wheelchair simulator use. However, blink parameters alone are insufficient to reliably infer cognitive load. When combined with subjective questionnaires, implicit physiological metrics may offer a more comprehensive assessment than questionnaires alone, supporting the development and refinement of assistive training technologies.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1745291</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1745291</link>
        <title><![CDATA[Equation-level parameterized fusion reformulation for multimodal epileptic seizure detection using interaction control and data-quality screening]]></title>
        <pubDate>2026-03-06T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Abdul-Mumin Khalid</author><author>Musah Sulemana</author><author>Iddrisu Wahab Abdul</author>
        <description><![CDATA[Epileptic seizure detection remains challenging due to noise, inter-subject variability, and the poor generalization ability of unimodal learning models. To address these limitations, this study proposes an equation-level multimodal fusion reformulation for epileptic seizure detection that integrates EEG, ECG, EMG, and ACC signals using adaptive parameterized fusion and interaction control. The framework introduces four interpretable parameters: a fusion exponent (ρ), an interaction weight (δ), a stabilization factor (λ), and a synergy amplifier (η), which jointly regulate modality contribution, nonlinear cross-modal interaction, numerical stability, and synergistic enhancement within a unified mathematical formulation applicable to both traditional and deep learning models. The study is conducted on a multimodal dataset comprising recordings from 120 clinically diagnosed epilepsy patients, including 60 patients from Tamale Teaching Hospital and 60 from publicly available datasets. Signals were sampled at 512 Hz and segmented into 2-second windows with 50% overlap, yielding approximately 1,024,000 labeled samples. A formal Data Quality Assurance (DQA) model and a Novel Cosine Similarity (NCS) index were employed to assess signal reliability and cross-source alignment prior to fusion. Twelve machine learning and deep learning classifiers were evaluated using a strict patient-wise data split to prevent data leakage. Experimental results demonstrate consistent performance improvements across all models following the equation-level reformulation: traditional machine learning models improved from baseline accuracies of approximately 55–67% to 82–92%, while deep learning models improved from 70–82% to 89–97.9%, with the Transformer-based model achieving the highest performance. These results confirm that equation-level multimodal fusion provides a generalizable, interpretable, and computationally efficient approach to robust epileptic seizure detection.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1680796</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1680796</link>
        <title><![CDATA[Alzheimer’s detection using discrete wavelet transform based image fusion and vision information transformer]]></title>
        <pubDate>2026-03-04T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Amar A. Dum</author><author>Kshama V. Kulhalli</author><author>Priyanka Singh</author>
        <description><![CDATA[Alzheimer’s disease (AD) is the most prevalent form of dementia and a major cause of mortality among older adults. Magnetic resonance imaging (MRI) and positron emission tomography (PET) are commonly used for AD diagnosis. Despite extensive research, the accuracy of automated detection methods remains limited. This study proposes a highly accurate AD classification model that integrates complementary information from MRI and PET scans. The images are fused using a discrete wavelet transform (DWT), augmented, and subsequently classified using a Vision Transformer (ViT). Comprehensive evaluation across nine performance metrics shows that the proposed ViT-based framework achieves 97.68% accuracy, surpassing benchmark transfer learning models and state-of-the-art methods. Ablation studies and comparative analysis further confirm the robustness and reliability of the proposed approach for AD detection.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2025.1715540</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2025.1715540</link>
        <title><![CDATA[Singular spectrum analysis of near-infrared spectroscopy signal classification for mental arithmetic and rest state]]></title>
        <pubDate>2026-02-11T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Kanwardeep Singh Gahlot</author><author>Anukul Pandey</author><author>Sachin Taran</author><author>Rahul Thakur</author>
        <description><![CDATA[A brain–computer interface (BCI) is a direct communication pathway between the human brain and a computer. The premise behind near-infrared spectroscopy (NIRS) is that increased oxygen consumption in active brain regions leads to increased blood flow. NIRS is a non-invasive procedure; changes in oxyhemoglobin (Oxy-Hb) and deoxyhemoglobin (Deoxy-Hb) parameters can be readily used to detect brain hemodynamics. This study uses the Oxy-Hb parameter to classify mental arithmetic and rest states of the brain using singular spectrum analysis (SSA). SSA yields a better-denoised signal and decomposes it into principal components for analysis of these states. Oxy- and Deoxy-Hb patterns are transient and unstable, so features such as power bandwidth, entropy, and complexity were extracted for classification. The reported accuracy of existing methods is 79.4% for antagonistic single-trial classification and 86.9% for graph-NIRS methods. The present study’s mean accuracy was 98.4% based on a set of selected features using filtering detrending (FD)-SSA, thus reducing the cost of misclassification. Finally, classification models were evaluated using scores such as the Matthews correlation coefficient, precision, F1-score, and recall, yielding 0.889, 0.968, 0.966, and 0.963, respectively.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2026.1724468</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2026.1724468</link>
        <title><![CDATA[Research on heart rate estimation algorithm based on dynamic PPG]]></title>
        <pubDate>2026-02-02T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Jiawei Guo</author><author>Shiyuan Chen</author><author>Ting Lan</author><author>Ruochen Li</author><author>Lichao Wang</author><author>Yunchong Wu</author><author>Jun Zhong</author><author>Wei Zhu</author>
        <description><![CDATA[Heart rate is one of the most vital physiological parameters and is widely used clinically to assess human health status. In recent years, wearable devices based on photoplethysmography (PPG) have been extensively applied in real-time monitoring. However, PPG signals are susceptible to interference from various types of noise during acquisition, particularly motion artifacts (MA), which pose a significant challenge to the accurate extraction of physiological parameters. This study focuses on heart rate extraction from dynamic PPG signals and explores denoising methods combining traditional signal processing and machine learning techniques. The main contributions of this paper are as follows: existing algorithms are further improved by integrating a support vector machine (SVM). A more comprehensive signal quality assessment is performed via the SVM, which incorporates the time-domain and frequency-domain statistical characteristics of both PPG signals and triaxial acceleration (ACC) signals. In addition, the short-time Fourier transform (STFT) is integrated to capture time-varying characteristics, thereby mitigating the impact of local signal quality degradation on the analysis of full-window signals. For spectral peak tracking, a Gaussian window is adopted to optimize the spectral search range, and a comprehensive analysis is conducted by fusing spectral amplitude information with historical heart rate data. Experimental results demonstrate that the heart rate error on the test set is 1.71 beats per minute (BPM).]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2025.1700044</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2025.1700044</link>
        <title><![CDATA[Comparing compressive sensing and downsampling for COVID-19 diagnosis from cough and speech audio signals]]></title>
        <pubDate>2026-01-22T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Leticia Silva</author><author>Alan Floriano</author><author>Carlos Valadão</author><author>Eliete Caldeira</author><author>Sridhar Krishnan</author><author>Teodiano Bastos Filho</author>
        <description><![CDATA[Introduction: Since the onset of the COVID-19 pandemic, extensive research has focused on developing non-invasive approaches for diagnosing respiratory syndromes using biomedical signals, particularly cough and speech audio. Time-frequency representations combined with machine learning models have shown potential in identifying acoustic biomarkers associated with respiratory conditions. Although many existing approaches demonstrate high performance, their use may be limited in resource-constrained environments due to processing or implementation demands. Methods: In this study, we propose an end-to-end approach for COVID-19 inference based on compressed time-domain audio signals. The method combines two temporal signal compression strategies, Downsampling (DS) and Compressive Sensing (CS), with a Convolutional Neural Network (CNN) trained directly on the waveforms. This design eliminates the need for handcrafted features or spectrograms, aiming to reduce computational complexity while preserving classification performance. Results: To evaluate the proposed structure, we used data from two open-access datasets, one for coughing and one for speech. Experimental results, assessed using accuracy and F1-score metrics, indicate that CS outperformed DS in most scenarios, particularly at high compression rates (e.g., 200 Hz and 100 Hz). Discussion: These findings support the use of compressed audio-based classification in real-world embedded and mobile health systems, where computational efficiency is essential.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2025.1707422</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2025.1707422</link>
        <title><![CDATA[Unraveling cardiac arrhythmia frequency: comparative analysis using time and frequency domain algorithms]]></title>
        <pubDate>2026-01-12T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Laura Diaz-Maue</author><author>Annette Witt</author><author>Lina Elshareif</author><author>Holger Nobach</author>
        <description><![CDATA[During cardiac arrhythmia, the heart frequency is an important physiological parameter that can be identified by analyzing electrocardiogram (ECG) signals. However, accurate frequency estimation becomes increasingly challenging as the ECG morphology becomes more complex, for example, during transitions from tachycardia to fibrillation. In this paper, the authors compare seven conventional and novel time- and frequency-domain methods for cardiac arrhythmia frequency analysis, including an algorithm used in implantable cardioverter defibrillators. The objective of this study is to identify the approaches that reveal the potential presence of a dominant frequency and its role in characterizing different arrhythmia types. By evaluating the strengths and weaknesses of each method, the authors aim to establish an informative framework for extracting meaningful insights from electrocardiogram data in the context of cardiac arrhythmia frequency. To assess the statistical relevance of the methods, a dataset comprising 112 ECGs from arrhythmic murine hearts was analyzed. Additionally, a human arrhythmia dataset was examined to validate the presented techniques. The R library containing the frequency determination algorithms, together with the murine dataset, is made available to the reader for further testing and supplementation.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2025.1679555</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2025.1679555</link>
        <title><![CDATA[A computational approach for prediction of exons using static encoding methods, digital filter and windowing technique]]></title>
        <pubDate>2025-11-27T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Shaik Benarjee</author><author>Vaegae Naveen Kumar</author>
        <description><![CDATA[Introduction: Identifying protein-coding regions in eukaryotic deoxyribonucleic acid (DNA) remains difficult due to the sparse and uneven distribution of exons. Methods: This work investigates four static encoding schemes, namely integer, Voss, paired numeric, and Electron-Ion Interaction Potential (EIIP), to improve exon prediction using genomic signal processing. Two benchmark sequences, Caenorhabditis elegans cosmid F56F11.4 and mouse apolipoprotein A-IV (M13966.1), were analyzed in MATLAB. A Cauer (elliptic) band-pass filter was used to isolate the period-3 component, and a Blackman-Harris window was applied to reduce spectral leakage. Among the assessed techniques, the elliptic filter in conjunction with EIIP-based encoding achieved the most distinct separation between coding and non-coding regions, identifying every exon segment with minimal noise. Results and discussion: The technique obtained 84% sensitivity, 96% specificity, and 94% accuracy on the C. elegans cosmid sequence and 86.5% sensitivity, 93% specificity, and 91% accuracy on the M13966.1 gene sequence. Conclusion: These results show that the combination of EIIP encoding, the Cauer filter, and Blackman-Harris windowing offers a reliable and effective method for identifying exons.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2025.1555876</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2025.1555876</link>
        <title><![CDATA[Editorial: Smart biomedical signal analysis with machine intelligence]]></title>
        <pubDate>2025-01-31T00:00:00Z</pubDate>
        <category>Editorial</category>
        <author>Tilendra Choudhary</author><author>Pulakesh Upadhyaya</author><author>Mohammad Zavid Parvez</author><author>Shaik Rafi Ahamed</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2024.1479244</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2024.1479244</link>
        <title><![CDATA[Markerless vision-based knee osteoarthritis classification using machine learning and gait videos]]></title>
        <pubDate>2024-11-21T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Slim Ben Hassine</author><author>Ala Balti</author><author>Sabeur Abid</author><author>Mohamed Moncef Ben Khelifa</author><author>Mounir Sayadi</author>
        <description><![CDATA[Introduction: Knee osteoarthritis (KOA) is a major health issue affecting millions worldwide. This study employs machine learning algorithms to analyze human gait using kinematic data, aiming to enhance the diagnosis and detection of KOA. By adopting this approach, we contribute to the development of effective diagnostic methods for KOA, a prevalent joint condition. Methods: The methodology is structured around several critical steps to optimize the model’s performance. These steps include extracting kinematic features from video data to capture essential gait dynamics, applying data filtering and reduction techniques to remove noise and enhance data quality, and calculating key gait parameters to boost the model’s predictive power. The machine learning model is trained on these refined features, validated through cross-validation for robust performance assessment, and tested on unseen data to ensure generalizability. Results: Our approach demonstrates significant improvements in classification accuracy, highlighting its potential for early and precise KOA detection. The model achieves high classification accuracy, indicating its effectiveness in distinguishing KOA-related gait patterns. Discussion: Furthermore, a comparative analysis with another model trained on the same dataset demonstrates the superiority of our method, suggesting that the proposed approach serves as a reliable tool for early KOA detection and could improve clinical diagnostic workflows.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2024.1496320</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2024.1496320</link>
        <title><![CDATA[Corrigendum: Editorial: Physiological signal processing for wellness]]></title>
        <pubDate>2024-10-15T00:00:00Z</pubDate>
        <category>Correction</category>
        <author>Rakesh Chandra Joshi</author><author>Navchetan Awasthi</author><author>Priyadarsan Parida</author><author>Manob Jyoti Saikia</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2024.1384744</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2024.1384744</link>
        <title><![CDATA[CovRoot: COVID-19 detection based on chest radiology imaging techniques using deep learning]]></title>
        <pubDate>2024-09-20T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Ahashan Habib Niloy</author><author>S. M. Farah Al Fahim</author><author>Mohammad Zavid Parvez</author><author>Shammi Akhter Shiba</author><author>Faizun Nahar Faria</author><author>Md. Jamilur Rahman</author><author>Emtiaz Hussain</author><author>Tasmi Tamanna</author>
        <description><![CDATA[The world first learned of the existence of COVID-19 (SARS-CoV-2) in December 2019. Initially, doctors struggled to diagnose the increasing number of patients due to the limited availability of testing kits. To help doctors with preliminary diagnosis of the virus, researchers around the world have developed radiology imaging techniques using Convolutional Neural Networks (CNNs). Some previous methods were based on X-ray images and others on CT scan images; few addressed both image types, and those proposed models were limited to detecting only COVID and NORMAL cases. This limitation motivated us to propose a 42-layer CNN model that handles complex scenarios (COVID, NORMAL, and PNEUMONIA_VIRAL) and more complex scenarios (COVID, NORMAL, PNEUMONIA_VIRAL, and PNEUMONIA_BACTERIA). Furthermore, our proposed model shows better performance in COVID-19 detection than previously proposed models.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2024.1396077</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2024.1396077</link>
        <title><![CDATA[Enhancement of single-lead dry-electrode ECG through wavelet denoising]]></title>
        <pubDate>2024-05-28T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Abdelrahman Abdou</author><author>Sridhar Krishnan</author>
        <description><![CDATA[Neonatal electrocardiogram (ECG) monitoring is an important diagnostic tool for identifying cardiac issues in infants at birth. Long-term remote neonatal dry-electrode ECG monitoring solutions can be an additional step toward preventive healthcare. In these solutions, power- and computation-efficient embedded signal processing techniques for denoising newborn ECGs can help increase neonatal wearable time. Wavelet denoising is an appropriate denoising mechanism with low computational complexity that can be implemented on embedded microcontrollers for long-term remote ECG monitoring. Discrete wavelet transform (DWT) denoising for neonatal dry-electrode ECG using different wavelet families is investigated. The wavelet families and mother wavelets used include Daubechies (db1, db2, db3, db4, and db6), symlets (sym5), and coiflets (coif5). Different levels of additive white Gaussian noise (AWGN) were added to 19 newborn ECG signals, and denoising was performed to select the appropriate wavelets for neonatal dry-electrode ECG. The selected wavelets were then subjected to real noise additions of baseline wander and electrode motion to determine their robustness and accuracy. Signal-to-noise ratio (SNR), mean squared error (MSE), and power spectral density (PSD) were used to examine denoising performance. The db1, db2, and db3 wavelets were eliminated from the analysis because the 30 dB AWGN led to a negative SNR improvement for at least one newborn ECG, removing important ECG information. db4 and sym5 were eliminated because their waveform morphology differs from the QRS complex of the dry-electrode newborn ECG. db6 and coif5 were selected for their highest SNR improvement and lowest MSE of 6.26 × 10⁻⁶ and 1.65 × 10⁻⁷, respectively, compared to other wavelets. Their wavelet shapes more closely resemble the QRS morphology of a newborn ECG, validating their selection. db6 and coif5 showed similar denoising performance, reducing electrode-motion and baseline-wander noise in ECG signals by 10 dB and 14 dB, respectively. Further denoising of inherent dry-electrode noise was observed. DWT with coif5 or db6 wavelets is appropriate for denoising newborn dry-electrode ECGs in long-term neonatal monitoring solutions under different noise types. Their similarity to newborn dry-electrode ECGs yields accurate and robust reconstructed denoised signals.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2024.1362754</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2024.1362754</link>
        <title><![CDATA[Feasibility, functionality, and user experience with wearable technologies for acute exacerbation monitoring in patients with severe COPD]]></title>
        <pubdate>2024-03-19T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Olivia C. Iorio</author><author>Felix-Antoine Coutu</author><author>Dany Malaeb</author><author>Bryan A. Ross</author>
        <description><![CDATA[Background: The increasing interest in remote patient monitoring technologies in patients with chronic obstructive pulmonary disease (COPD) requires a phased and stepwise investigative approach, which includes high-risk clinical subgroups who stand to benefit most from such innovations. Methods: Patients aged >40 years with spirometry-confirmed COPD presenting with a current acute exacerbation (ECOPD) were recruited from a tertiary centre Day Hospital in this prespecified feasibility study. Heart rate, respiratory rate, oxygen saturation, skin temperature, and daily activity and overnight sleep quality parameters were collected remotely by a wearable biometric wristband and ring for 21 consecutive days. “Total ambulatory wear time” and “percent of useable data” were calculated for eligible vital sign parameters. Correlation and agreement between cardiorespiratory vital sign data were assessed using Spearman’s rho and Bland-Altman analysis, respectively. User experience was measured with end-of-study System Usability Scale (SUS) questionnaires. Results: Nine participants (mean age 66.8 ± 8.4 years, 22% female, mean FEV1 1.4 L (34.1% predicted), with “severe” (56%) or “very severe” (44%) COPD) experiencing a current ECOPD were included. Wear time was 94% (wristband) and 88.2% (ring) of the total ambulatory study period. Wristband-obtained data (every 1 min, artefact-free) revealed that 99.2% of heart rate and 98.6% of temperature data were useable, whereas only 17.6% of respiratory rate data were useable. Ring-obtained data (every 5 min, “average” and “good” quality) revealed that 84.5% of heart rate data were useable. Cross-sectional analyses against nurse-obtained vital signs revealed correlation coefficients of 0.56 (p = 0.11) and 0.81 (p = 0.0086) for wristband-obtained and ring-obtained heart rate, respectively, and only 0.15 (p = 0.74) for wristband-obtained respiratory rate, without evidence of systematic/proportional bias. Longitudinal inter-device analyses of heart rate and respiratory rate demonstrated correlations of 0.86 (p < 0.001) and 0.65 (p < 0.001), respectively. Finally, end-of-study SUS scores were 86.4/100 (wristband) and 89.2/100 (ring). Conclusion: Older adults with severe/very severe COPD experiencing a current ECOPD were capable of autonomous physiological data collection, upload, and transmission from their home environment over several weeks using sophisticated wearable biometric technology, with favourable user experiences. Cross-sectional and longitudinal comparative results call into question the paradigm of single sets of infrequent/interval vital sign checks as the current “gold standard” in frontline clinical practice.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2024.1391335</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2024.1391335</link>
        <title><![CDATA[Editorial: Physiological signal processing for wellness]]></title>
        <pubdate>2024-03-05T00:00:00Z</pubdate>
        <category>Editorial</category>
        <author>Rakesh Chandra Joshi</author><author>Navchetan Awasthi</author><author>Priyadarsan Parida</author><author>Manob Jyoti Saikia</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2024.1322334</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2024.1322334</link>
        <title><![CDATA[Comparative study of time-frequency transformation methods for ECG signal classification]]></title>
        <pubdate>2024-01-29T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Min-Seo Song</author><author>Seung-Bo Lee</author>
        <description><![CDATA[In this study, we highlighted the growing need for automated electrocardiogram (ECG) signal classification using deep learning to overcome the limitations of traditional ECG interpretation algorithms that can lead to misdiagnosis and inefficiency. The application of convolutional neural networks (CNNs) to ECG signals is gaining significant attention owing to their exceptional image-classification capabilities. However, we addressed the lack of standardized methods for converting 1D ECG signals into 2D-CNN-compatible input images by using time-frequency methods and selecting the hyperparameters associated with these methods, particularly the choice of transform function. Furthermore, we investigated the effects of fine-tuned training, a technique in which pre-trained weights are adapted to a specific dataset, on 2D-CNNs for ECG classification. We conducted the experiments using the MIT-BIH Arrhythmia Database, focusing on classifying premature ventricular contractions (PVCs), abnormal heartbeats originating from the ventricles. We employed several CNN architectures pre-trained on ImageNet and fine-tuned them using the proposed ECG datasets. We found that using the Ricker wavelet function outperformed other feature extraction methods, with an accuracy of 96.17%. We provided crucial insights into CNNs for ECG classification, underscoring the significance of fine-tuning and of hyperparameter selection in image transformation methods. The findings provide valuable guidance for researchers and practitioners, improving the accuracy and efficiency of ECG analysis using 2D-CNNs. Future research avenues may include advanced visualization techniques and extending CNNs to multiclass classification, expanding their utility in medical diagnosis.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2024.1321861</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2024.1321861</link>
        <title><![CDATA[The performance of domain-based feature extraction on EEG, ECG, and fNIRS for Huntington’s disease diagnosis via shallow machine learning]]></title>
        <pubdate>2024-01-25T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Sucheer Maddury</author>
        <description><![CDATA[Introduction: The early detection of Huntington’s disease (HD) can substantially improve patient quality of life. Current HD diagnosis methods rely on complex biomarkers such as clinical and imaging factors; however, these methods have high time and resource demands. Methods: Quantitative biomedical signaling has the potential to expose abnormalities in HD patients. In this project, we explored biomedical signaling for HD diagnosis in high detail. We used a dataset collected at a clinic with 27 HD-positive patients, 36 controls, and 6 unknowns, comprising EEG, ECG, and fNIRS recordings. We first preprocessed the data and then presented a comprehensive feature extraction procedure covering statistical, Hjorth, slope, wavelet, and power spectral features. We then applied several shallow machine learning techniques to classify HD-positive patients from controls. Results: The highest performance was achieved by the extremely randomized trees algorithm, with an ROC AUC of 0.963 and an accuracy of 91.353%. Discussion: The results improve on competing methodologies and show the promise of biomedical signals for early prognosis of HD.]]></description>
      </item>
      </channel>
    </rss>