AUTHOR=Mai Christian, Liniger Jesper, Pedersen Simon
TITLE=Semantic segmentation using synthetic images of underwater marine-growth
JOURNAL=Frontiers in Robotics and AI
VOLUME=Volume 11 - 2024
YEAR=2025
URL=https://www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2024.1459570
DOI=10.3389/frobt.2024.1459570
ISSN=2296-9144
ABSTRACT=
Introduction: Subsea applications have recently received increasing attention due to the global expansion of offshore energy, seabed infrastructure, and maritime activities; complex inspection, maintenance, and repair tasks in this domain are regularly carried out with pilot-controlled, tethered remotely operated vehicles to reduce the use of human divers. However, collecting and precisely labeling submerged data is challenging due to uncontrollable and harsh environmental factors. Synthetic environments offer cost-effective, controlled alternatives to real-world operations, with access to detailed ground-truth data. This study investigates that potential by rendering detailed, labeled underwater datasets and applying them to machine learning.
Methods: Two synthetic datasets with over 1,000 rendered images each were used to train DeepLabV3+ neural networks with an Xception backbone. The datasets include environmental classes such as seawater and seafloor, offshore structure components, ship hulls, and several marine-growth classes. The machine-learning models were trained using transfer learning and data augmentation techniques.
Results: Testing showed high accuracy in segmenting synthetic images. Testing on real-world imagery yielded promising results for two of the three studied cases, though challenges in distinguishing some classes persist.
Discussion: This study demonstrates the efficiency of synthetic environments for training subsea machine-learning models but also highlights important limitations in certain cases. Improvements can be pursued by introducing layered species into the synthetic environments and by improving the quality of real-world optical information; better color representation, reduced compression artifacts, and minimized motion blur are key focus areas. Future work involves more extensive evaluation against expert-labeled datasets to validate and enhance real-world application accuracy.
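The Methods paragraph names the overall training recipe: a DeepLabV3+ segmentation network with an Xception backbone, initialized via transfer learning and trained with data augmentation on rendered images. The sketch below illustrates what such a setup could look like; it is not the authors' implementation. The library choices (segmentation_models_pytorch, albumentations), the class count, the augmentation transforms, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a DeepLabV3+/Xception training setup with transfer learning
# and data augmentation, as described in the abstract. All library choices,
# class counts, and hyperparameters are assumptions, not the paper's code.
import torch
import segmentation_models_pytorch as smp
import albumentations as A

# Hypothetical number of labels (environment, structure, and marine-growth
# classes); the actual label set is defined by the synthetic datasets.
NUM_CLASSES = 7

model = smp.DeepLabV3Plus(
    encoder_name="xception",      # Xception backbone
    encoder_weights="imagenet",   # transfer learning from ImageNet weights
    in_channels=3,
    classes=NUM_CLASSES,
)

# Example augmentation pipeline for the rendered training images.
augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.3),
    A.GaussNoise(p=0.2),
])

loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, masks: torch.Tensor) -> float:
    """One optimization step on a batch of images and per-pixel class labels."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)         # shape: (N, NUM_CLASSES, H, W)
    loss = loss_fn(logits, masks)  # masks shape: (N, H, W), integer class ids
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, the augmentation pipeline would be applied to each image/mask pair inside a dataset loader before batching, and the trained model would then be evaluated on both synthetic and real-world test imagery as outlined in the Results paragraph.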