EDITORIAL article
Front. Artif. Intell.
Sec. Machine Learning and Artificial Intelligence
Volume 8 - 2025 | doi: 10.3389/frai.2025.1715198
This article is part of the Research Topic: Artificial Intelligence in Visual Inspection
Artificial Intelligence in Visual Inspection
Provisionally accepted
- 1 School of Mechanical Engineering, Zhejiang University, Hangzhou, China
- 2 State Key Laboratory of Fluid Power and Mechatronic Systems, Zhejiang University, Hangzhou, China
- 3 Xi’an Jiaotong-Liverpool University, Suzhou, China
- 4University of Huddersfield, Huddersfield, United Kingdom
The contributing articles exemplify a field that is rapidly maturing. They move beyond mere proof-of-concept demonstrations to offer sophisticated solutions for complex, real-world scenarios. A common thread is the innovative fusion of techniques: combining image enhancement with detection, leveraging hybrid model architectures, and integrating classical computer vision with deep learning to create systems that are greater than the sum of their parts.

A significant challenge in real-world visual inspection is operating under suboptimal conditions. The article by Liu et al. tackles this head-on, addressing the critical problem of drone-view object detection in low-light environments. Their proposed parallel joint encoding network is a notable innovation, co-optimizing image enhancement (using Zero-DCE++) and object detection (using a lightweight YOLOv5) within a single framework. This bidirectional approach, enhanced by specialized modules for feature modulation, significantly improves robustness against noise and insufficient illumination, as demonstrated on benchmark nighttime drone datasets. This work is vital for extending the operational window of drones in applications like nighttime surveillance, search and rescue, and traffic monitoring.

Shifting from the skies to the streets, the application of AI for public safety is explored by Deshpande. His work on automatic rider helmet violation detection in Indian smart cities tackles a pressing real-world problem. The proposed two-step pipeline smartly leverages the strengths of different tools: the NVIDIA TAO toolkit with DetectNet for efficient rider and vehicle identification, and YOLOv8 for accurate helmet and license plate detection. By creating a custom dataset for a complex, real-world scenario and achieving high accuracy, this research provides a practical, deployable blueprint for automated traffic enforcement, with the potential to save lives.

The most demanding domain for visual inspection is often medicine, where precision can be a matter of life and death. Three articles in this Topic address this with remarkable sophistication. First, Koshy and Anbarasi introduce HMA-Net for breast ultrasound image segmentation. Their hybrid framework masterfully combines a ConvMixer-based encoder with a ConvNeXt-based decoder, augmented by multi-head attention. This architecture is specifically designed to capture both local textures and global contextual dependencies in ultrasound images, a key challenge due to their noisy and complex nature. The model's exceptional performance on standard datasets, achieving a Dice coefficient of over 99%, underscores its potential as a powerful tool for aiding the early and accurate detection of breast cancer.

Similarly, Mochurad enhances medical image segmentation by integrating classical and deep learning techniques. Her approach for chest X-ray segmentation addresses the perennial challenge of low contrast and overlapping structures by preprocessing images with Sobel and Scharr edge detection filters. This simple yet highly effective strategy provides the subsequent U-Net model with enhanced boundary information, guiding it to achieve superior accuracy in segmenting the lungs, heart, and clavicles. This work demonstrates that hybrid methodologies can yield significant gains without necessarily increasing model complexity.
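To make this edge-aware preprocessing idea concrete, the sketch below stacks Sobel and Scharr gradient magnitudes with the raw image as extra input channels for a segmentation network. It is a minimal illustration assuming OpenCV, NumPy, and PyTorch; the function name, the three-channel layout, and the normalization are assumptions for exposition, not the exact pipeline of the paper.

```python
# Sketch: edge-enhanced preprocessing for chest X-ray segmentation.
# Illustrative only; the paper's exact pipeline may differ.
import cv2
import numpy as np
import torch

def edge_enhanced_tensor(gray: np.ndarray) -> torch.Tensor:
    """Stack a raw X-ray with its Sobel and Scharr gradient magnitudes.

    gray: single-channel uint8 image, shape (H, W).
    Returns a float tensor of shape (3, H, W) in [0, 1], ready to feed
    a U-Net whose first convolution expects 3 input channels.
    """
    img = gray.astype(np.float32) / 255.0

    # Sobel gradients (3x3 kernels) in x and y, combined into a magnitude map.
    sx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    sy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    sobel = cv2.magnitude(sx, sy)

    # Scharr gradients: same idea, with 3x3 kernels that approximate the
    # true image derivative more accurately than Sobel's.
    cx = cv2.Scharr(img, cv2.CV_32F, 1, 0)
    cy = cv2.Scharr(img, cv2.CV_32F, 0, 1)
    scharr = cv2.magnitude(cx, cy)

    # Normalize each edge map to [0, 1] so no single channel dominates.
    sobel /= (sobel.max() + 1e-8)
    scharr /= (scharr.max() + 1e-8)

    stacked = np.stack([img, sobel, scharr], axis=0)  # (3, H, W)
    return torch.from_numpy(stacked)
```

Supplying explicit edge channels lets the network see boundary cues from its very first layer, which is one plausible reading of why such a hybrid strategy helps on low-contrast anatomy.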
Finally, Farhan et al. tackle the complexity of 3D brain tumor segmentation from MRI with a strategic ensemble approach. Their dual-modality method moves beyond single MRI sequences, instead combining complementary modalities (e.g., T1ce and FLAIR) to exploit their synergistic information. Furthermore, the incorporation of Grad-CAM visualizations to create an XAI-MRI system is a crucial step forward. The system not only achieves high segmentation accuracy but also provides explainable AI (XAI) heatmaps, building essential trust with clinicians by making the model's decision-making process transparent and actionable in a diagnostic context (a minimal Grad-CAM sketch is given after the concluding paragraph below).

In conclusion, the research presented in this Topic vividly illustrates that the future of visual inspection is intelligent, hybrid, and trustworthy. These studies show that the next frontier is not just about building more accurate models, but about crafting robust systems that can function in the real world, seamlessly combine diverse techniques, and, especially in medicine, explain their reasoning. The collective findings provide a robust foundation for the next generation of AI-powered inspection systems that will enhance safety, improve healthcare outcomes, and drive industrial innovation. The journey from theoretical algorithm to practical tool is well underway, and the work showcased in this collection is leading the way.
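As a closing technical note on the explainability theme, the sketch below shows one standard way to generate Grad-CAM heatmaps for a convolutional model in PyTorch. The model, the choice of target layer, and the scalar target (summed logits) are assumptions for illustration, and it is written in 2D for brevity; Farhan et al. operate on 3D MRI volumes.

```python
# Sketch: Grad-CAM heatmap for a CNN, in the spirit of the XAI-MRI
# system discussed above. Model, layer, and target are assumptions.
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer):
    """Return a Grad-CAM heatmap for an input batch x of shape (N, C, H, W)."""
    feats, grads = [], []

    # Capture the target layer's activations and their gradients via hooks.
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.append(go[0]))

    model.eval()
    out = model(x)              # e.g., per-pixel tumor logits
    score = out.sum()           # scalar target: total tumor evidence
    model.zero_grad()
    score.backward()

    h1.remove()
    h2.remove()

    fmap, grad = feats[0], grads[0]                 # (N, K, h, w)
    weights = grad.mean(dim=(2, 3), keepdim=True)   # global-average-pool grads
    cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
    # Rescale each heatmap to [0, 1] for overlay on the input image.
    cam = cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-8)
    return cam                                      # (N, 1, H, W)
```

In practice, the heatmap is overlaid on the input slice so a clinician can verify that the regions driving the prediction coincide with the segmented tumor.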
Keywords: AI-driven visual inspection, robustness, hybrid models, real-world applicability, explainable AI (XAI)
Received: 29 Sep 2025; Accepted: 09 Oct 2025.
Copyright: © 2025 Cao, Xu and Zeng. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Yanlong Cao, sdcaoyl@zju.edu.cn
Zhijie Xu, zhijie.xu@xjtlu.edu.cn
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.