ORIGINAL RESEARCH article
Front. Physiol.
Sec. Computational Physiology and Medicine
Volume 16 - 2025 | doi: 10.3389/fphys.2025.1629238
A YOLOv11-Based AI System for Keypoint Detection of Auricular Acupuncture Points in Traditional Chinese Medicine
Provisionally accepted
1 Changshu Hospital Affiliated to Nanjing University of Chinese Medicine, Changshu City, China
2 Changshu Hospital Affiliated to Soochow University, Changshu City, China
Objective: This study aims to develop an artificial intelligence model and web-based application for the automatic detection of 21 commonly used auricular acupoints based on the YOLOv11 neural network.
Methods: A total of 660 human ear images were collected from three medical centers. The LabelMe annotation tool was used to label the images with bounding boxes and keypoints, which were then converted into a format compatible with the YOLO model. Using this dataset, transfer learning and fine-tuning were performed on different-sized versions of the YOLOv11 neural network. Model performance was evaluated on validation and test sets using metrics such as mean average precision (mAP) at different thresholds, recall, and detection speed. The best-performing model was then deployed as a web application using the Streamlit library in the Python environment.
Results: Five versions of the YOLOv11 keypoint detection model were developed: YOLOv11n, YOLOv11s, YOLOv11m, YOLOv11l, and YOLOv11x. Among them, YOLOv11x achieved the highest performance on the validation set, with a precision of 0.991, recall of 0.976, mAP50 of 0.983, and mAP50-95 of 0.625, though it exhibited the longest inference latency (19 ms/image). On the external test set, YOLOv11x achieved an ear recognition accuracy of 0.996, sensitivity of 0.996, and an F1-score of 0.998. For auricular acupoint localization, the model achieved an mAP50 of 0.982, precision of 0.975, and recall of 0.976. The model was successfully deployed as a web application accessible on both mobile and desktop platforms to accommodate diverse user needs.
Conclusion: The YoloEar21 web application, developed based on YOLOv11x and Streamlit, demonstrates strong recognition performance and user-friendly accessibility. It provides automatic identification of 21 commonly used auricular acupoints across various scenarios for diverse users and shows promising potential for clinical application.
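The Methods describe converting LabelMe bounding-box and keypoint annotations into a YOLO-compatible format. The abstract does not give the authors' conversion script, so the following is a minimal sketch assuming the standard LabelMe JSON layout (a "rectangle" shape for the ear plus one "point" shape per acupoint) and the Ultralytics YOLO pose label format (class, normalized box, then 21 normalized keypoint triplets); the function name and keypoint ordering are hypothetical.

```python
import json

def labelme_to_yolo_pose(json_path, keypoint_order):
    """Convert one single-ear LabelMe annotation into a YOLO pose label line:
    class cx cy w h kx1 ky1 v1 ... kx21 ky21 v21 (all coordinates normalized)."""
    with open(json_path, encoding="utf-8") as f:
        data = json.load(f)
    img_w, img_h = data["imageWidth"], data["imageHeight"]

    # Ear bounding box: assumed to be the single LabelMe "rectangle" shape.
    rect = next(s for s in data["shapes"] if s["shape_type"] == "rectangle")
    (x1, y1), (x2, y2) = rect["points"]
    cx, cy = (x1 + x2) / 2 / img_w, (y1 + y2) / 2 / img_h
    bw, bh = abs(x2 - x1) / img_w, abs(y2 - y1) / img_h

    # Acupoints: assumed to be LabelMe "point" shapes labelled by acupoint name.
    points = {s["label"]: s["points"][0]
              for s in data["shapes"] if s["shape_type"] == "point"}
    kpts = []
    for name in keypoint_order:            # fixed order of the 21 acupoint names
        if name in points:
            px, py = points[name]
            kpts += [px / img_w, py / img_h, 2]   # 2 = labelled and visible
        else:
            kpts += [0.0, 0.0, 0]                 # 0 = keypoint not labelled
    fields = [0, cx, cy, bw, bh] + kpts           # class 0 = "ear"
    return " ".join(f"{v:.6f}" if isinstance(v, float) else str(v)
                    for v in fields)
```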
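Transfer learning and fine-tuning of the differently sized YOLOv11 variants can be reproduced in outline with the Ultralytics Python API. This is a sketch only: the pretrained checkpoint name follows current Ultralytics releases, and the dataset file ear_keypoints.yaml, the epoch count, and the image size are assumptions, since the abstract does not report the authors' exact training configuration.

```python
from ultralytics import YOLO

# Start from a pretrained pose checkpoint for transfer learning; the paper
# compares the n/s/m/l/x variants, with YOLOv11x performing best.
model = YOLO("yolo11x-pose.pt")

# ear_keypoints.yaml is a hypothetical dataset config; for 21 acupoints it
# would declare, e.g.:
#   kpt_shape: [21, 3]    # 21 keypoints, each (x, y, visibility)
#   names: {0: ear}
#   train: images/train
#   val: images/val
model.train(data="ear_keypoints.yaml", epochs=100, imgsz=640)

# Validation reports precision, recall, mAP50, and mAP50-95 for both the
# ear boxes and the acupoint keypoints.
metrics = model.val()
print(metrics.box.map50, metrics.pose.map50, metrics.pose.map)
```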
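For deployment, the abstract states that the best model was wrapped in a Streamlit web application (YoloEar21) usable from mobile and desktop browsers. A minimal illustration of that pattern follows; the weight path best.pt and the interface text are placeholders, not the authors' actual application code.

```python
import streamlit as st
from PIL import Image
from ultralytics import YOLO


@st.cache_resource  # load the model once per server process
def load_model():
    return YOLO("best.pt")  # hypothetical path to the fine-tuned weights


st.title("Auricular acupoint detection (demo sketch)")
uploaded = st.file_uploader("Upload an ear image", type=["jpg", "jpeg", "png"])

if uploaded is not None:
    image = Image.open(uploaded).convert("RGB")
    results = load_model().predict(image, conf=0.5)
    annotated = results[0].plot()      # BGR array with boxes and keypoints drawn
    st.image(annotated[..., ::-1],     # convert BGR -> RGB for display
             caption="Detected auricular acupoints")
```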
Keywords: Auricular Acupoints, deep learning, keypoint detection, YOLO, artificial intelligence
Received: 16 May 2025; Accepted: 23 Jun 2025.
Copyright: © 2025 Wang, Yin, Zhang, Xia, Su and Chen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Yue Su, Changshu Hospital Affiliated to Nanjing University of Chinese Medicine, Changshu City, China
Jian Chen, Changshu Hospital Affiliated to Soochow University, Changshu City, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.