
ORIGINAL RESEARCH article

Front. Pediatr.

Sec. Neonatology

Objective Interpretation of Intrapartum Cardiotocography Images Using Attention-Guided Convolutional Neural Networks

Provisionally accepted
Xinghe Zhou1,2, Tianxin Qiu2, Chunxia Lin1*, Juan Zhou1, Shiling Jiang1, Litao Wang1, Li Feng1, Xinhao Wang2, Qingshan You2
  • 1The First People's Hospital of Longquanyi District Chengdu, Chengdu, China
  • 2Civil Aviation Flight University of China, Guanghan, China

The final, formatted version of the article will be published soon.

Objective: Automated analysis of Electronic Fetal Monitoring (EFM) is essential for the precise assessment of fetal health. However, the subjective, expertise-dependent interpretation of conventional cardiotocogram (CTG) tracings hinders diagnostic consistency. This study aims to develop an objective interpretation approach, comprising a systematic preprocessing pipeline for signal reconstruction and an attention-guided convolutional neural network for pattern classification, to mitigate the risk of missed diagnoses.

Methods: A computer vision-based deep learning approach was developed. The workflow begins with a systematic preprocessing pipeline in which raw CTG images undergo grid removal, resampling, and curve reconstruction to generate standardized signal inputs. These signals are analyzed by a classifier based on the EfficientNet-B0 architecture, enhanced with a Convolutional Block Attention Module (CBAM); the attention mechanism enables the model to focus on clinically significant morphological features. The model was trained on a private clinical dataset using clinician-labeled FIGO classifications (Normal vs. Suspicious/Abnormal) as the primary outcome. To evaluate its clinical utility and robustness, the model was externally validated on the public CTU-UHB dataset, using objective umbilical artery pH values (pH ≥ 7.05 vs. pH < 7.05) as the benchmark.

Results: On the internal clinical dataset, the model achieved an accuracy of 92.66% and a macro-average F1-score of 92.14%. On the external CTU-UHB dataset, it maintained an accuracy of 95.65%. These results indicate that the proposed algorithm aligns with expert visual classification and remains consistent when validated against objective physiological outcomes (pH values). This consistency across benchmarks supports the potential robustness and clinical relevance of the learned morphological features.

Conclusion: This study presents an objective method for intrapartum CTG analysis. By integrating signal standardization with automated feature learning, the proposed approach addresses the inherent subjectivity of manual interpretation and may serve as a clinical decision support tool to improve the consistency of fetal status assessment.
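The abstract describes the preprocessing pipeline only at a high level (grid removal, resampling, curve reconstruction). As a rough illustration of that idea, the sketch below shows one way such a step could be implemented with OpenCV and NumPy. The thresholding strategy, the morphological kernel, and the target sample count are illustrative assumptions, not the authors' published parameters.

```python
# Hedged sketch: strip the printed CTG grid, trace the FHR curve column by
# column, and resample it onto a uniform axis. All numeric choices below
# (Otsu thresholding, 3x3 opening, 4800 output samples) are assumptions.
import cv2
import numpy as np

def reconstruct_fhr_curve(image_path: str, n_samples: int = 4800) -> np.ndarray:
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Assumption: the trace is darker than the faint grid, so an inverse
    # Otsu threshold keeps curve pixels and drops most grid lines.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Remove thin residual grid lines with a small morphological opening.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

    h, w = mask.shape
    xs, ys = [], []
    for col in range(w):
        rows = np.flatnonzero(mask[:, col])
        if rows.size:                      # mean row index per column
            xs.append(col)
            ys.append(h - rows.mean())     # flip so larger values = higher FHR

    # Resample to a fixed-length signal; columns with no detected curve
    # pixel are filled by linear interpolation.
    grid = np.linspace(0, w - 1, n_samples)
    return np.interp(grid, xs, ys)
```

The per-column averaging and linear interpolation are only one plausible reading of "curve reconstruction"; the published method may use a different tracing or gap-filling strategy.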
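Likewise, the classifier is described as EfficientNet-B0 augmented with a CBAM attention block. A minimal PyTorch sketch of that combination is given below, assuming the CBAM block (standard sequential channel-then-spatial attention) is inserted after the backbone's final feature stage; the insertion point, reduction ratio, and head layout are assumptions for illustration only.

```python
# Hedged sketch: EfficientNet-B0 features -> CBAM -> pooled linear head for
# binary CTG classification (Normal vs. Suspicious/Abnormal). Not the
# authors' code; layer placement and hyperparameters are assumed.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        # Squeeze spatial dims with average and max pooling, share one MLP.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        return x * torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Channel-wise average and max maps feed a learned spatial mask.
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x):
        return self.spatial(self.channel(x))

class CTGClassifier(nn.Module):
    """EfficientNet-B0 backbone, CBAM on its 1280-channel output, 2-class head."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = efficientnet_b0(weights=None)   # pretrained weights optional
        self.features = backbone.features
        self.cbam = CBAM(1280)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(1280, num_classes)

    def forward(self, x):
        x = self.cbam(self.features(x))
        return self.head(self.pool(x).flatten(1))

if __name__ == "__main__":
    model = CTGClassifier()
    print(model(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 2])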

Keywords: CBAM, Computer Vision, deep learning, EfficientNet, Electronic fetal monitoring, Fetal status classification, Intrapartum monitoring

Received: 02 Oct 2025; Accepted: 26 Jan 2026.

Copyright: © 2026 Zhou, Qiu, Lin, Zhou, Jiang, Wang, Feng, Wang and You. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Chunxia Lin

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.