
METHODS article

Front. Plant Sci.

Sec. Sustainable and Intelligent Phytoprotection

Volume 16 - 2025 | doi: 10.3389/fpls.2025.1621934

GCASSN: A Graph Convolutional Attention Synergistic Segmentation Network for 3D Plant Point Cloud Segmentation

Provisionally accepted
Yibo Zou1,2, Haoqiang Wang1,2, Feng Zhang3, Yan Ge1,2, Ming Chen1,2*
  • 1Shanghai Ocean University, Shanghai, China
  • 2Key Laboratory of Fisheries Information, Ministry of Agriculture and Rural Affairs, Shanghai 201306, China
  • 3Bright Food Group Shanghai Chongming Farm Co., Ltd., Shanghai 202162, China

The final, formatted version of the article will be published soon.

Plant phenotyping analysis serves as a cornerstone of agricultural research. 3D point clouds largely alleviate the leaf overlap and occlusion problems of two-dimensional images and have become a popular medium for plant phenotyping research. Fast and effective plant point cloud segmentation is the foundation of, and key to, subsequent analysis of plant phenotypic parameters. To balance lightweight design and segmentation precision, we propose a Graph Convolutional Attention Synergistic Segmentation Network (GCASSN) tailored to plant point cloud data. The framework mainly comprises (1) Trans-net, which normalizes input point clouds into canonical poses, and (2) the Graph Convolutional Attention Synergistic Module (GCASM), which integrates graph convolutional networks (GCNs) for local feature extraction with self-attention mechanisms that capture global contextual dependencies, so that the two branches complement each other. On plant 3D point cloud segmentation with the Plant3D and Phone4D datasets, the model achieves state-of-the-art performance with 95.46% mean accuracy and 90.41% mean intersection-over-union (mIoU), surpassing mainstream methods (PointNet, PointNet++, DGCNN, PCT, and Point Transformer). Computational efficiency is competitive, with inference time and parameter count only slightly exceeding those of DGCNN. Without parameter tuning, it attains 85.47% mIoU and 82.9% mean class IoU on ShapeNet, demonstrating strong generalizability. The proposed method fully extracts both local detail features and global features of plants, and completes the plant point cloud segmentation task efficiently and robustly, laying a solid foundation for plant phenotype analysis. The code for GCASSN is available at https://github.com/fallovo/GCASSN.git.
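To make the GCASM idea concrete, the following is a minimal NumPy sketch of the two-branch design the abstract describes: a local branch that aggregates features over a k-nearest-neighbor graph (EdgeConv-style, as in DGCNN) and a global branch using scaled dot-product self-attention, whose outputs are concatenated. All function names, weight shapes, and the max-pooling/concatenation choices here are illustrative assumptions, not the authors' implementation (see the repository linked above for the real code).

```python
import numpy as np

def knn_indices(points, k):
    # Pairwise squared distances; pick the k nearest neighbors of each point
    # (excluding the point itself) to build the local graph.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    return np.argsort(d2, axis=1)[:, :k]          # (N, k)

def graph_conv(feats, idx, w):
    # EdgeConv-style local aggregation: relative edge features (x_j - x_i),
    # a shared linear map + ReLU, then max-pooling over the neighborhood.
    edge = feats[idx] - feats[:, None, :]         # (N, k, C_in)
    return np.maximum(edge @ w, 0.0).max(axis=1)  # (N, C_out)

def self_attention(feats, wq, wk, wv):
    # Scaled dot-product self-attention over all points: every point
    # attends to every other, capturing global context.
    q, k, v = feats @ wq, feats @ wk, feats @ wv
    scores = q @ k.T / np.sqrt(q.shape[1])        # (N, N)
    a = np.exp(scores - scores.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)             # row-wise softmax
    return a @ v                                  # (N, C_out)

def gcasm_block(points, feats, k, w_g, wq, wk, wv):
    # One synergistic block: concatenate local (graph) and global
    # (attention) features so the branches complement each other.
    idx = knn_indices(points, k)
    local = graph_conv(feats, idx, w_g)
    glob = self_attention(feats, wq, wk, wv)
    return np.concatenate([local, glob], axis=1)  # (N, 2 * C_out)
```

In a full segmentation network, blocks like this would be stacked and followed by a per-point classification head; here a single block only illustrates how local graph features and global attention features are fused.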

Keywords: Plant Phenotype, 3D plant point cloud, Graph convolutional neural network, Self-attention mechanism, feature extraction

Received: 02 May 2025; Accepted: 25 Aug 2025.

Copyright: © 2025 Zou, Wang, Zhang, Ge and Chen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Ming Chen, Shanghai Ocean University, Shanghai, China

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.