AUTHOR=Hu Dengshu, Wang Ke, Zhang Cuijin, Liu Zheng, Che Yukui, Dong Shoubing, Kong Chuirui
TITLE=Target-aware unregistered infrared and visible image fusion
JOURNAL=Frontiers in Physics
VOLUME=13
YEAR=2025
URL=https://www.frontiersin.org/journals/physics/articles/10.3389/fphy.2025.1599968
DOI=10.3389/fphy.2025.1599968
ISSN=2296-424X
ABSTRACT=Introduction: Infrared (IR) and visible (VI) image fusion can provide richer texture details for subsequent object detection tasks. Conversely, object detection can offer semantic information about targets, which in turn helps improve the quality of the fused images. As a result, joint learning approaches that integrate infrared-visible image fusion and object detection have attracted increasing attention. Methods: Existing methods, however, typically assume that the input source images are perfectly aligned spatially, an assumption that does not hold in real-world applications. To address this issue, we propose a novel method that enables mutual enhancement between infrared-visible image fusion and object detection and is specifically designed to handle misaligned source images. The core idea is to use the object detection loss, propagated via backpropagation, to guide the training of the fusion network, while a specially designed loss function mitigates the modality gap between infrared and visible images. Results: Comprehensive experiments on three public datasets demonstrate the effectiveness of our approach. Discussion: In addition, our approach can be applied at other radiation frequencies where different modalities require image fusion, for example the radio-frequency, X-ray, and gamma-ray modalities used in medical imaging.
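
The Methods paragraph above describes detection-loss gradients flowing back into the fusion network alongside a modality-gap loss. The sketch below illustrates that joint-training idea only: it assumes a PyTorch setup, and FusionNet, ToyDetector, and modality_gap_loss are hypothetical stand-ins, not the paper's actual architectures or loss functions.

```python
# Minimal sketch of the joint fusion/detection training step described in the
# abstract, assuming a PyTorch setup. FusionNet, ToyDetector, and
# modality_gap_loss are hypothetical placeholders, not the paper's method.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Toy fusion network: maps a concatenated IR/VI pair to one fused image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, ir, vi):
        return self.net(torch.cat([ir, vi], dim=1))

class ToyDetector(nn.Module):
    """Stand-in detector: predicts a per-pixel objectness logit map."""
    def __init__(self):
        super().__init__()
        self.head = nn.Conv2d(1, 1, 3, padding=1)

    def forward(self, x):
        return self.head(x)

def modality_gap_loss(fused, ir, vi):
    # Placeholder for the paper's "specially designed" loss; here a simple
    # L1 term pulling the fused image toward both source modalities.
    return (fused - ir).abs().mean() + (fused - vi).abs().mean()

fusion, detector = FusionNet(), ToyDetector()
opt = torch.optim.Adam(
    list(fusion.parameters()) + list(detector.parameters()), lr=1e-4
)
bce = nn.BCEWithLogitsLoss()

ir = torch.rand(4, 1, 64, 64)                       # dummy infrared batch
vi = torch.rand(4, 1, 64, 64)                       # dummy (possibly misaligned) visible batch
target = (torch.rand(4, 1, 64, 64) > 0.95).float()  # dummy objectness labels

fused = fusion(ir, vi)
det_loss = bce(detector(fused), target)      # detection loss on the fused image
fus_loss = modality_gap_loss(fused, ir, vi)  # bridges the IR/VI modality gap
loss = fus_loss + det_loss                   # detection gradients flow back into the fusion net
opt.zero_grad()
loss.backward()
opt.step()
```

Because both losses are summed before backward(), the detector's supervision shapes the fusion network's parameters; this is the mutual-enhancement mechanism the abstract refers to.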