
ORIGINAL RESEARCH article

Front. Plant Sci.

Sec. Technical Advances in Plant Science

Volume 16 - 2025 | doi: 10.3389/fpls.2025.1698843

Structure-Aware Completion of Plant 3D LiDAR Point Clouds via a Multi-Resolution GAN-Inversion Network

Provisionally accepted
Zhiming Wei1,2, Jianing Long3, Zhihong Zhang4, Xinyu Xue2*, Yitian Sun1, Qinglong Li1, Wu Liu5, Jingxin Shen1, Zhikai Zhang6, Xiaoju Li6, Zhengguo Ma6
  • 1Shandong Academy of Agricultural Machinery Sciences, Jinan, China
  • 2Nanjing Institute of Agricultural Mechanization Ministry of Agriculture and Rural Affairs, Nanjing, China
  • 3China Agricultural University, Beijing, China
  • 4Shanghai Institute of Technology, Shanghai, China
  • 5Bureau of Agriculture and Rural Affairs of Yanggu County, Liaocheng, China
  • 6Qingdao Agricultural University, Qingdao, China

The final, formatted version of the article will be published soon.

Three-dimensional (3D) point cloud data acquired by sensors such as Light Detection and Ranging (LiDAR) are foundational across diverse application domains, including precision agriculture, mobile robotics, autonomous driving, construction and infrastructure inspection, and cultural heritage documentation. However, environmental disturbances and sensor limitations frequently produce incomplete or noisy data, which significantly impairs the accuracy of downstream applications. To address this challenge, this research introduces a novel unsupervised deep learning method, the Multi-Resolution Completion Net (MRC-Net), designed to complete imperfect point cloud data while preserving both global structure and fine details. Building on the ShapeInversion method, MRC-Net integrates a Generative Adversarial Network (GAN) inversion approach with multi-resolution principles. Its core architecture comprises an encoder for feature extraction, a generator for data completion, and a discriminator to validate data integrity and detail. The model's key innovations are its multi-resolution degradation mechanism and a multi-scale discriminator, which function synergistically to capture both macro-structures and micro-details. The model was evaluated extensively on multiple datasets. On virtual datasets such as CRN, MRC-Net performed comparably to leading supervised networks, achieving an average Chamfer Distance (CD) of 8.0 and an F1 score of 91.3. More significantly, on a custom dataset designed to simulate agricultural applications, MRC-Net demonstrated strong practical utility. When completing standard objects such as regular cartons, it achieved a CD of 3.3 and an F1 score of 97.3. For structurally complex simulated plants, the model maintained the integrity of the overall structure despite minor imperfections, yielding average CD and F1 scores of 8.6 and 88.1, respectively.
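The Chamfer Distance and F1 metrics reported above can be sketched in a few lines of NumPy. This is a minimal illustration of the standard definitions only; the distance-threshold `tau` and any scaling factor applied to CD (papers often report CD multiplied by 10^3 or 10^4) are assumptions, as the article does not state its conventions here.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between point sets p (N, 3) and q (M, 3).

    Mean squared nearest-neighbour distance, averaged in both directions.
    The scaling convention (e.g. a x10^3 factor) is left to the caller.
    """
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)  # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def f1_score(p, q, tau=0.01):
    """F1 at threshold tau: harmonic mean of precision (fraction of predicted
    points within tau of a reference point) and recall (the reverse)."""
    d = np.sqrt(np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1))
    precision = (d.min(axis=1) < tau).mean()
    recall = (d.min(axis=0) < tau).mean()
    return 2 * precision * recall / max(precision + recall, 1e-12)
```

For clouds larger than a few thousand points, the pairwise-distance matrix above becomes expensive; a k-d tree nearest-neighbour query is the usual substitute.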
The experimental results confirm that MRC-Net represents a significant advance in point cloud completion. This study holds the potential to provide a reliable and accurate data foundation for downstream applications such as autonomous navigation, high-precision 3D modeling, and agricultural robotics, thereby making a valuable contribution to enhancing data quality in precision agriculture.
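The GAN-inversion idea the method builds on — freezing a pretrained generator and searching its latent space for a complete shape that explains the partial scan — can be illustrated with a deliberately tiny toy problem. Everything here is hypothetical and not the authors' implementation: a fixed random linear map stands in for a trained generator, and an alternating nearest-point-matching / least-squares loop stands in for the gradient-based latent optimization used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pretrained generator": a fixed random linear map from a 4-D
# latent vector to a 10-point cloud (a real GAN generator is a deep network).
W = rng.normal(size=(4, 30))

def generate(z):
    return (z @ W).reshape(10, 3)

def nearest(partial, full):
    """For each observed point: index of its nearest generated point,
    plus the mean squared nearest-neighbour distance (one-sided CD)."""
    d2 = np.sum((partial[:, None, :] - full[None, :, :]) ** 2, axis=-1)
    return d2.argmin(axis=1), d2.min(axis=1).mean()

# Simulate a partial LiDAR scan: only 5 of the 10 true points are observed.
z_true = rng.normal(size=4)
partial = generate(z_true)[:5]

# Inversion loop: alternate matching with a least-squares latent update.
# Point j of generate(z) uses columns 3j..3j+2 of W, so under a fixed
# matching the fit is linear in z and solvable in closed form.
z = rng.normal(size=4)
losses = []
for _ in range(30):
    idx, loss = nearest(partial, generate(z))
    losses.append(loss)
    A = np.concatenate([W[:, 3 * j:3 * j + 3] for j in idx], axis=1)  # (4, 15)
    b = partial.reshape(-1)                                           # (15,)
    z, *_ = np.linalg.lstsq(A.T, b, rcond=None)

completed = generate(z)  # full cloud, including the unobserved points
```

Because each matching step and each least-squares step can only lower the one-sided fitting loss, the loop's loss sequence is non-increasing — mirroring how the real method refines a latent code until the generator's output agrees with the partial input.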

Keywords: Plant canopy architecture, 3D plant modeling, point cloud completion, deep learning, multi-resolution, GAN Inversion

Received: 04 Sep 2025; Accepted: 23 Oct 2025.

Copyright: © 2025 Wei, Long, Zhang, Xue, Sun, Li, Liu, Shen, Zhang, Li and Ma. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Xinyu Xue, 735178312@qq.com

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.