ORIGINAL RESEARCH article

Front. Artif. Intell.

Sec. Machine Learning and Artificial Intelligence

Volume 8 - 2025 | doi: 10.3389/frai.2025.1622100

This article is part of the Research Topic: Artificial Intelligence in Visual Inspection.

Parallel Joint Encoding for Drone-view Object Detection under Low-light Conditions

Provisionally accepted
Liwen Liu1, Bo Zhou1, Qiqin Li1, Gui Fu1*, You Wang1, Hongyu Chu2
  • 1Civil Aviation Flight University of China, Guanghan, China
  • 2Southwest University of Science and Technology, Mianyang, China

The final, formatted version of the article will be published soon.

Under low-light conditions, the accuracy of drone-view object detection algorithms is frequently compromised by noise and insufficient illumination. Herein, we propose a parallel neural network that concurrently performs image enhancement and object detection for drone-view object detection in nighttime environments. Our coevolutionary framework establishes bidirectional gradient propagation pathways between the network modules, improving the robustness of feature representations through the joint optimization of the photometric correction and detection objectives. The illumination enhancement network employs Zero-DCE++, which adaptively adjusts the brightness distribution without requiring paired training data. In our model, object detection is performed using a lightweight YOLOv5 architecture that achieves good detection accuracy while maintaining real-time performance. To further optimize feature extraction, we introduce a spatially adaptive feature modulation module and a high- and low-frequency adaptive feature enhancement block. The former dynamically modulates the input features through multiscale feature fusion, enhancing the model's ability to perceive local and global information. The latter enhances semantic representation and edge details through the parallel processing of spatial contextual information and feature refinement. Experiments on the VisDrone2019 (Night) and DroneVehicle (Night) datasets show that the proposed method improves 3.13% and 3.
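As a rough illustration of the enhancement stage the abstract describes, the sketch below implements the iterative light-enhancement curve that the published Zero-DCE/Zero-DCE++ formulation applies to each pixel, LE(x) = x + α·x·(1 − x), repeated for a fixed number of iterations. This is a simplified stand-in, not the authors' code: the real Zero-DCE++ network predicts per-pixel curve parameter maps with a lightweight CNN, whereas here α is a single scalar chosen by hand.

```python
import numpy as np

def enhance_curve(img, alpha, iterations=8):
    """Iteratively apply the Zero-DCE light-enhancement curve
    LE(x) = x + alpha * x * (1 - x) to an image in [0, 1].

    For 0 <= alpha <= 1 the output stays in [0, 1], and dark
    pixels are brightened more strongly than bright ones.
    alpha is a scalar here for simplicity; Zero-DCE++ predicts
    a per-pixel map of curve parameters instead.
    """
    x = np.asarray(img, dtype=np.float64)
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)
    return x

# Toy low-light pixel intensities (normalized to [0, 1]).
dark = np.array([0.05, 0.20, 0.50, 0.90])
bright = enhance_curve(dark, alpha=0.6)
```

Because the curve is monotonic and bounded on [0, 1], it brightens the image without clipping, which is what makes it suitable as a differentiable pre-processing stage that the detection loss can back-propagate through in a joint framework like the one proposed here.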

Keywords: Drone-view object detection, Image Enhancement, unmanned aerial vehicle (UAV), Low-light conditions, Parallel neural network

Received: 02 May 2025; Accepted: 25 Aug 2025.

Copyright: © 2025 Liu, Zhou, Li, Fu, Wang and Chu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Gui Fu, Civil Aviation Flight University of China, Guanghan, China

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.