ORIGINAL RESEARCH article

Front. Neurorobot.

Volume 19 - 2025 | doi: 10.3389/fnbot.2025.1603964

This article is part of the Research Topic: Perceiving the World, Planning the Future: Advanced Perception and Planning Technology in Robotics.

Depth-Aware Unpaired Image-to-Image Translation for Autonomous Driving Test Scenario Generation Using a Dual-Branch GAN

Provisionally accepted
Donghao Shi1,2*, Chenxin Zhao1,2*, Cunbin Zhao1,2, Zhou Fang1,2, Chonghao Yu1,2, Jian Li1,2, Minjie Feng1,2
  • 1Advanced Manufacturing Metrology Research Center, Zhejiang Institute of Quality Sciences, Hangzhou, Zhejiang Province, China
  • 2Key Laboratory of Acoustics and Vibration Applied Measuring Technology, State Administration for Market Regulation, Hangzhou, Zhejiang Province, China

The final, formatted version of the article will be published soon.

Reliable visual perception is essential for autonomous driving test scenario generation, yet adverse weather and lighting variations pose significant challenges to simulation robustness and generalization. Traditional unpaired image-to-image translation methods rely primarily on RGB-based transformations, often resulting in geometric distortions and loss of structural consistency, which can degrade the realism and accuracy of generated test scenarios. To address these limitations, we propose a Depth-Aware Dual-Branch Generative Adversarial Network (DAB-GAN) that explicitly incorporates depth information to preserve spatial structures during scenario generation. The dual-branch generator processes both RGB and depth inputs, ensuring geometric fidelity, while a self-attention mechanism enhances spatial dependencies and local detail refinement. This enables the creation of realistic, structure-preserving test environments that are crucial for evaluating autonomous driving perception systems, especially under adverse weather conditions. Experimental results demonstrate that DAB-GAN outperforms existing unpaired image-to-image translation methods, achieving superior visual fidelity while maintaining depth-aware structural integrity. This approach provides a robust framework for generating diverse and challenging test scenarios, enhancing the development and validation of autonomous driving systems under various real-world conditions.

Despite the importance of real-world testing, collecting diverse adverse weather data remains costly, time-consuming, and logistically challenging. Moreover, real-world datasets often suffer from imbalance and limited coverage of extreme conditions, restricting their utility in comprehensive validation and robustness assessment (Agarwal et al., 2024; Lan et al., 2024; Li Y. et al., 2023). As a result, simulation-based testing has become a critical tool for autonomous driving development, allowing the controlled generation of challenging environmental conditions to enhance model reliability and adaptability (Biagiola and Tonella, 2024; Huang et al., 2025; Sadid and Antoniou, 2024). A key requirement for effective simulation is the ability to generate photorealistic and geometrically consistent test scenarios that accurately reflect real-world conditions.
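The dual-branch idea described above (separate RGB and depth feature paths fused before a self-attention stage) can be illustrated at toy scale. The sketch below is an assumption-laden simplification, not the authors' implementation: the projection sizes, random weights, and function names are all illustrative, and the branches are reduced to linear projections so the fusion and attention pattern stays visible.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feats, d_k=8, seed=0):
    # Single-head scaled dot-product attention over N spatial
    # positions; feats has shape (N, C). Weights are random here
    # purely for illustration (they would be learned in a GAN).
    rng = np.random.default_rng(seed)
    c = feats.shape[1]
    Wq, Wk, Wv = (rng.standard_normal((c, d_k)) for _ in range(3))
    q, k, v = feats @ Wq, feats @ Wk, feats @ Wv
    attn = softmax(q @ k.T / np.sqrt(d_k))   # (N, N) spatial dependencies
    return attn @ v                          # (N, d_k)

def dual_branch_fuse(rgb, depth, c_out=16, seed=0):
    # rgb: (H, W, 3), depth: (H, W, 1). Each modality is projected
    # by its own branch, then the features are concatenated so the
    # attention stage sees both appearance and geometry.
    rng = np.random.default_rng(seed)
    h, w = rgb.shape[:2]
    W_rgb = rng.standard_normal((3, c_out))
    W_dep = rng.standard_normal((1, c_out))
    f_rgb = rgb.reshape(h * w, 3) @ W_rgb      # RGB branch
    f_dep = depth.reshape(h * w, 1) @ W_dep    # depth branch
    fused = np.concatenate([f_rgb, f_dep], axis=1)  # (H*W, 2*c_out)
    return self_attention(fused)

rgb = np.random.default_rng(1).random((4, 4, 3))
depth = np.random.default_rng(2).random((4, 4, 1))
out = dual_branch_fuse(rgb, depth)
print(out.shape)  # (16, 8): one attended feature vector per pixel
```

In a full model the linear projections would be convolutional encoders and the fused, attention-refined features would feed a decoder producing the translated image; the point of the sketch is only that depth enters as a parallel branch rather than being appended as a fourth RGB channel.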

Keywords: Autonomous Driving, Unpaired image-to-image translation, Depth map, Self-attention mechanism, Generative adversarial network

Received: 01 Apr 2025; Accepted: 08 May 2025.

Copyright: © 2025 Shi, Zhao, Zhao, Fang, Yu, Li and Feng. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence:
Donghao Shi, Advanced Manufacturing Metrology Research Center, Zhejiang Institute of Quality Sciences, Hangzhou, Zhejiang Province, China
Chenxin Zhao, Advanced Manufacturing Metrology Research Center, Zhejiang Institute of Quality Sciences, Hangzhou, Zhejiang Province, China

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.