ORIGINAL RESEARCH article
Front. Plant Sci.
Sec. Sustainable and Intelligent Phytoprotection
Real-Time Segmentation and Phenotypic Analysis of Rice Seeds Using YOLOv11-LA and RiceLCNN
Provisionally accepted
- 1School of Artificial Intelligence, Changchun University of Science and Technology, Changchun, China
- 2Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, China
- 3School of Data Science and Artificial Intelligence, Jilin Engineering Normal University, Changchun, China
- 4College of Electrical and Information Engineering, Jilin Engineering Normal University, Changchun, China
- 5Southwest University of Science and Technology, Mianyang, China
Real-time, accurate detection and classification of rice seeds are vital for improving agricultural productivity, ensuring grain quality, and promoting smart agriculture. In recent years, significant advances have been made in rice seed detection and classification, mainly through deep learning, particularly convolutional neural networks (CNNs) and attention-based models. Early methods such as threshold segmentation and single-grain classification were effective in specific scenarios but often struggled with computational efficiency and latency, particularly under high-density seed agglutination and real-time processing requirements. This study proposes an integrated intelligent analysis model that combines object detection, real-time tracking, precise classification, and high-accuracy phenotypic measurement. To address the limitations of existing approaches, we employ the lightweight YOLOv11-LA model for real-time grain segmentation, paired with the RiceLCNN classifier for efficient classification. YOLOv11-LA utilizes separable convolutions, CBAM attention mechanisms, and module pruning strategies; it maintains detection accuracy while reducing the number of parameters by 63.2%, decreasing computational complexity by 51.6%, and increasing mAP@0.5:0.95 by 1.9%. Integrating the DeepSORT algorithm addresses the challenges of multi-object tracking, enabling real-time tracking and accurate counting of multiple seed types while substantially reducing duplicate identifications and frame loss. For classification, we designed the lightweight RiceLCNN network, which has only 20.7% of the parameters of MobileNetV3 and achieved classification accuracies of 89.78% on a private dataset and 96.32% on a public benchmark dataset.
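The parameter savings that motivate the separable convolutions in YOLOv11-LA can be illustrated with a short sketch. This is a generic comparison of a standard convolution against its depthwise-separable factorization; the layer shapes below are hypothetical examples, not taken from the paper's architecture:

```python
# Compare weight counts of a standard k x k convolution with a
# depthwise separable convolution (depthwise k x k + pointwise 1 x 1),
# the factorization commonly used in lightweight detectors.

def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise conv (one k x k filter per input channel)
    followed by a 1 x 1 pointwise conv mixing channels."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

if __name__ == "__main__":
    c_in, c_out, k = 128, 256, 3          # hypothetical layer shape
    std = conv_params(c_in, c_out, k)
    sep = separable_conv_params(c_in, c_out, k)
    print(f"standard:  {std:,} parameters")   # 294,912
    print(f"separable: {sep:,} parameters")   # 33,920
    print(f"reduction: {1 - sep / std:.1%}")
```

The whole-model reduction reported in the paper (63.2%) is smaller than this per-layer figure because only some layers are factorized and pruned.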
In addition, through sub-pixel edge detection and a dynamic scale calibration mechanism, the system accurately captures phenotypic features such as seed size and roundness, with measurement errors kept within 0.1 mm. Experimental validation demonstrates that the model outperforms existing lightweight methods in detection speed, classification accuracy, and morphometric reliability, underscoring its potential for industrial application.
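The phenotypic measurements above can be sketched in simplified form. This is a minimal stand-in, not the paper's sub-pixel pipeline: it uses the common 4&#960;A/P&#178; roundness definition and a fixed mm-per-pixel scale obtained from a reference object, with all function names and values hypothetical:

```python
import math

def calibrate_scale(ref_length_mm: float, ref_length_px: float) -> float:
    """mm-per-pixel scale from a reference object of known physical size."""
    return ref_length_mm / ref_length_px

def roundness(area_px: float, perimeter_px: float) -> float:
    """4*pi*A / P^2: equals 1.0 for a perfect circle, < 1 for elongated seeds."""
    return 4.0 * math.pi * area_px / (perimeter_px ** 2)

def seed_axes_mm(major_px: float, minor_px: float, scale: float):
    """Convert pixel-space axis lengths to millimetres."""
    return major_px * scale, minor_px * scale

if __name__ == "__main__":
    scale = calibrate_scale(10.0, 400.0)   # a 10 mm reference spans 400 px
    # Sanity check: a circle of radius r has area pi*r^2, perimeter 2*pi*r
    r = 50.0
    print(f"roundness of a circle: {roundness(math.pi * r * r, 2 * math.pi * r):.3f}")
    print(f"seed axes (mm): {seed_axes_mm(280.0, 120.0, scale)}")  # (7.0, 3.0)
```

In practice the area and perimeter would come from sub-pixel contour extraction of each segmented seed, and the scale would be re-estimated dynamically per frame, as the paper describes.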
Keywords: YOLOv11-LA, RiceLCNN, rice seeds, object detection, classification
Received: 30 Jul 2025; Accepted: 20 Nov 2025.
Copyright: © 2025 Zhang, Song, Liu, Xu and Xiayidan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Weiwei Xu, xuww@jlenu.edu.cn
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
