ORIGINAL RESEARCH article
Front. Built Environ.
Sec. Transportation and Transit Systems
Road Damage Detection Method Based on UAV Imagery and YOLO-SCX
Provisionally accepted
1 Qinghai Nationalities University, Xining, China
2 Ningbo Huadong Nuclear Industry Investigation & Design Institute Group Co., Ltd, Ningbo, China
Automated road damage detection using Unmanned Aerial Vehicle (UAV) imagery is technically constrained by small target dimensions and complex environmental backgrounds. To address these issues within a low computational budget, this study proposes YOLO-SCX, a computationally efficient object detection architecture built on the YOLOv5n baseline. The methodological novelty of this work lies in the systematic integration of three structural optimizations designed for aerial sensing: (1) the Convolutional Block Attention Module (CBAM) to suppress background noise; (2) a Grouped SPPCSPC module to strengthen multi-scale feature fusion; and (3) a Decoupled Head to optimize classification and regression tasks independently. The research utilizes a composite dataset of 1,500 images derived from the UAV-RDD and CrackForest sources, rigorously partitioned into training (1,000), validation (250), and testing (250) sets. Experimental results on the held-out test set demonstrate that YOLO-SCX achieves a mean Average Precision (mAP@0.5) of 66.3% and a Precision of 79.2%, absolute improvements of 5.8 and 6.0 percentage points, respectively, over the baseline. Furthermore, the model maintains an inference speed of 185 FPS with 8.7 million parameters, confirming its suitability for real-time edge deployment relative to heavier architectures such as YOLOv7 and YOLOv8.
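To make the attention step concrete, the sketch below is a minimal PyTorch implementation of a CBAM block of the kind the abstract describes, following the standard channel-then-spatial formulation (Woo et al., 2018). The module name, the reduction ratio of 16, and the 7×7 spatial kernel are illustrative assumptions, not the authors' exact YOLO-SCX configuration.

```python
# Minimal CBAM sketch (standard formulation); assumptions noted in the lead-in.
import torch
import torch.nn as nn


class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: shared MLP applied to global avg- and max-pooled features.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )
        # Spatial attention: 7x7 conv over concatenated channel-wise avg and max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention weights, shape (N, C, 1, 1).
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention weights, shape (N, 1, H, W).
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))


# Usage sketch: refine a backbone feature map before the detection head.
feat = torch.randn(1, 256, 40, 40)
refined = CBAM(256)(feat)
print(refined.shape)  # torch.Size([1, 256, 40, 40])
```

In the aerial-imagery setting the abstract targets, the intended effect of such a block is to down-weight responses from cluttered road-side background so that small damage targets dominate the features passed to the detection head.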
Keywords: attention mechanism, decoupled head, multi-scale feature fusion, road damage detection, UAV imagery, YOLOv5
Received: 07 Aug 2025; Accepted: 16 Feb 2026.
Copyright: © 2026 Yang, Ma, Zhu and Yu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Zhengfeng Ma
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
