ORIGINAL RESEARCH article

Front. Mar. Sci.

Sec. Ocean Observation

YOLOv8n-PFA: A Parallel Fusion Attention Network for Enhanced Target Detection in Challenging Environments

  • 1. Huazhong University of Science and Technology, Wuhan, China

  • 2. International Science and Technology Cooperation Offshore Center for Ship and Marine Intelligent Equipment and Technology, Wuhan, China

  • 3. Wuhan Belt and Road Joint Laboratory on Ship and Marine Intelligent Equipment & Technology, Wuhan, China

  • 4. Mehran University of Engineering & Technology, Jamshoro, Pakistan

  • 5. Peking University, Beijing, China

  • 6. Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia

  • 7. South Valley University, Qena, Egypt

The final, formatted version of the article will be published soon.

Abstract

Underwater target detection is a critical component of marine environment monitoring and ocean exploration, enabling the collection of valuable data in challenging underwater environments. Accurate detection remains difficult due to low illumination and complex background interference, which often lead to missed or inaccurate detection of small and blurred objects. Although recent convolutional neural network-based detectors have improved performance, many existing methods are computationally expensive, limiting their deployment on resource-constrained underwater platforms. To address these challenges, we propose YOLOv8n-PFA, a lightweight and high-precision underwater object detection framework. The proposed method introduces a novel Parallel Fusion Attention (PFA) module that jointly models channel and spatial attention in parallel with residual connections. This enhances discriminative object features while suppressing background noise. In addition, the Wise Intersection over Union (WIoUv3) loss is incorporated to stabilize training and improve localization accuracy. Furthermore, depth-wise convolutions (DWConv) are strategically applied to reduce model parameters and computational cost. Extensive experiments demonstrate that YOLOv8n-PFA achieves a mean Average Precision (mAP) of 84.2% on the URPC2020 dataset with 2.68 M parameters and 7.7 GFLOPs, and 84.8% mAP on the RUOD dataset with 2.98 M parameters and 7.9 GFLOPs. To further validate generalization, the PFA module was also integrated into YOLOv11n, achieving 84.7% mAP on URPC2020 and 85.3% on RUOD with only 2.76 M parameters and 6.5 GFLOPs. Across both datasets, the proposed method improves mAP by 2.8-4.1% over the respective baselines while maintaining a lightweight footprint, demonstrating its scalability across YOLO generations. These results indicate that the proposed framework provides an effective and efficient solution for real-time underwater target detection in challenging marine environments.
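
For illustration only, the following minimal PyTorch sketch shows how a Parallel Fusion Attention block of the kind described in the abstract could be structured: channel and spatial attention computed in parallel, fused multiplicatively, and combined with the input through a residual connection. The specific layer choices here (a squeeze-and-excitation-style channel branch, a 7x7 spatial convolution, and the reduction ratio) are assumptions made for the sketch, not the authors' published design.

# Hypothetical PFA block sketch; layer choices are assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class PFA(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention branch: global average pooling -> bottleneck MLP -> sigmoid.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention branch: channel-wise mean/max maps -> 7x7 conv -> sigmoid.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Both branches see the same input, i.e. they run in parallel rather than in sequence.
        ca = self.channel_att(x)  # shape (B, C, 1, 1)
        sa = self.spatial_att(
            torch.cat(
                [x.mean(dim=1, keepdim=True), x.max(dim=1, keepdim=True).values],
                dim=1,
            )
        )  # shape (B, 1, H, W)
        # Fuse the two attention maps and keep a residual path back to the input.
        return x + x * ca * sa

# Example usage: the block preserves the feature-map shape,
# so it can be inserted into a YOLO-style backbone or neck.
feat = torch.randn(1, 64, 40, 40)
out = PFA(64)(feat)  # out.shape == (1, 64, 40, 40)

Because the block leaves the tensor shape unchanged, it can in principle be dropped into YOLOv8n or YOLOv11n feature pathways without modifying downstream layers, which is consistent with the abstract's claim of scalability across YOLO generations.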

Keywords

lightweight deep learning models, marine environment monitoring, Parallel Fusion Attention (PFA), real-time target detection, underwater object detection

Received

06 December 2025

Accepted

16 February 2026

Copyright

© 2026 Rashid, Wang, Ahmed, Ahmed, Mohsan, Alabdulkreem and Mostafa. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Samih M. Mostafa

Disclaimer

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
