
ORIGINAL RESEARCH article

Front. Artif. Intell.

Sec. Medicine and Public Health

This article is part of the Research Topic: Artificial Intelligence and Medical Image Processing

A Lightweight Face Image–Based Auxiliary Detection Model for Autism Spectrum Disorder

Provisionally accepted
Chenkai Liao, Wenqiu Zhu*
  • Hunan University of Technology, Zhuzhou, China

The final, formatted version of the article will be published soon.

Early diagnosis of Autism Spectrum Disorder (ASD) plays a crucial role in improving patients' quality of life. In recent years, face image–based ASD detection has attracted increasing attention as an auxiliary diagnostic approach. However, existing lightweight models still show limitations in capturing fine-grained facial features. To address this problem, this paper proposes a lightweight face image–based auxiliary detection model for ASD, termed MN-ASD. First, MobileNetV4-S is selected as the baseline framework, and the FReLU activation function is introduced to strengthen spatial feature modeling and enhance the model's ability to capture subtle facial details. Second, to further improve detection accuracy, the Coordinate Attention (CA) module is incorporated at the final stage; by jointly modeling positional and channel dependencies, CA enables the model to better locate key facial regions and enhances global feature extraction. Finally, experimental results demonstrate that the proposed method achieves an accuracy of 93.33% on the Kaggle dataset, with precision, recall, and F1-score all surpassing those of other mainstream lightweight networks. These results validate the effectiveness of MN-ASD in capturing facial features typical of ASD and indicate its suitability for early auxiliary screening in resource-constrained scenarios.
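The abstract names two additions to the MobileNetV4-S backbone: the FReLU (funnel) activation and a Coordinate Attention (CA) block at the final stage. As orientation only, the PyTorch sketch below implements these two modules following their original papers; the exact placement inside MobileNetV4-S, the channel width, and the reduction ratio are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of the two blocks named in the abstract, assuming a standard
# PyTorch setup. Channel sizes and the attachment point are illustrative only.
import torch
import torch.nn as nn


class FReLU(nn.Module):
    """Funnel ReLU: y = max(x, T(x)), where T is a 3x3 depthwise conv + BN."""
    def __init__(self, channels: int):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=1, groups=channels, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        return torch.maximum(x, self.bn(self.spatial(x)))


class CoordinateAttention(nn.Module):
    """Coordinate Attention: factorizes global pooling into per-height and
    per-width pooling, then re-weights the feature map along both axes."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        # Direction-aware pooling: (N, C, H, 1) and (N, C, W, 1)
        x_h = x.mean(dim=3, keepdim=True)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (N, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (N, C, 1, W)
        return x * a_h * a_w


if __name__ == "__main__":
    # Shape check only; the 960-channel, 7x7 final-stage feature map is a guess.
    feat = torch.randn(2, 960, 7, 7)
    out = CoordinateAttention(960)(FReLU(960)(feat))
    print(out.shape)  # torch.Size([2, 960, 7, 7])
```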

Keywords: Autism Spectrum Disorder, Face image, Lightweight, MobileNetV4, Coordinate Attention

Received: 21 Sep 2025; Accepted: 24 Nov 2025.

Copyright: © 2025 Liao and Zhu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Wenqiu Zhu

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.