ORIGINAL RESEARCH article

Front. Plant Sci.

Sec. Technical Advances in Plant Science

This article is part of the Research Topic: Plant Phenotyping for Agriculture

Deep Learning-Based Methods for Phenotypic Trait Extraction in Rice Panicles

Provisionally accepted
Zhiao Wang1*, Ruihang Li1, Wei Li1, Xiaoding Ma1*, Yan Shen1*, Maomao Li2, Binhua Hu3, Ming Tang4, Guomin Zhou1, Jian Wang1, Jianhua Zhang1*
  • 1Chinese Academy of Agricultural Sciences (CAAS), Beijing, China
  • 2Jiangxi Academy of Agricultural Sciences, Nanchang, China
  • 3Sichuan Academy of Agricultural Sciences, Chengdu, China
  • 4Heilongjiang Academy of Agricultural Sciences, Harbin, China

The final, formatted version of the article will be published soon.

Key panicle traits, such as grain number, panicle length, and grain dimensions, are direct determinants of rice yield and market quality. As a staple food for over half of the world's population, rice plays a vital role in global food security, making the high-precision and high-throughput measurement of these traits critical for accelerating genetic improvement and modern rice breeding. In this study, 5300 high-quality rice panicle images (covering loose/normal/dense panicle types and milk/dough/full maturity/over-ripe stages) were used, with 3290 for training, 940 for validation, and 470 for testing. A deep learning model integrating object detection (OPG-YOLOv8), dynamic depth-first search (DFS) pruning algorithm, and linear regression was proposed, comprising modules for grain counting, panicle length extraction, grain length/width extraction, and maturity assessment. Experimental results showed that the panicle length extraction algorithm achieved a coefficient of determination (R²) of 0.9583, root mean square error (RMSE) of 5.69 mm, mean absolute error (MAE) of 4.91 mm, and mean absolute percentage error (MAPE) of 2.03%. For grain counting, a two-stage method combining density classification and regression correction effectively improved accuracy: R² values for loose, normal, and dense panicles were 0.9799, 0.9551, and 0.9278, respectively. Grain length extraction yielded an R² of 0.8823, MAE of 0.94 mm, and RMSE of 1.05 mm, while grain width extraction showed a MAPE of 6.64% with acceptable measurement precision. The OPG-YOLOv8 model outperformed comparative models (YOLOv8, YOLOv7, etc.) with a mAP50 of 99.10%, 59.6M parameters, and an inference speed of 154 FPS, balancing accuracy, computational complexity, and efficiency. 
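The abstract names the panicle-length module only as a "dynamic depth-first search (DFS) pruning algorithm," without further detail. One common way to realize this idea on a binarized panicle skeleton is to treat the skeleton as a graph and take the longest path through it (the tree diameter), which implicitly prunes short side branches because they never extend the longest path. The sketch below is an illustration of that standard technique, not the paper's implementation; the graph representation and function names are assumptions.

```python
from collections import defaultdict

def farthest_node(graph, start):
    """DFS from `start`; return the farthest node and its depth in edges.
    Assumes the skeleton graph is a tree (no cycles), so excluding the
    parent node is enough to avoid revisiting."""
    best_node, best_depth = start, 0
    stack = [(start, None, 0)]
    while stack:
        node, parent, depth = stack.pop()
        if depth > best_depth:
            best_node, best_depth = node, depth
        for nb in graph[node]:
            if nb != parent:
                stack.append((nb, node, depth + 1))
    return best_node, best_depth

def skeleton_main_axis_length(edges):
    """Longest path through a skeleton tree via two DFS passes:
    the farthest node from any start is one tip of the main axis, and
    the farthest node from that tip gives the full axis length.
    Length is in skeleton edges; multiply by the mm-per-pixel scale
    to obtain a physical panicle length."""
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)
        graph[b].append(a)
    tip, _ = farthest_node(graph, next(iter(graph)))
    _, length = farthest_node(graph, tip)
    return length
```

For example, a Y-shaped skeleton with edges `[(0,1), (1,2), (2,3), (2,4), (4,5)]` has its longest path 0-1-2-4-5 (4 edges); the 2-3 side branch is effectively pruned.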
Additionally, the yellowness value of rice panicle images (quantified via RGB normalization, gamma correction, and Hunter Lab parameter calculation) provided a reliable reference for maturity assessment, with an overall classification accuracy of 92.0%. A web-based "Rice Panicle Trait Extraction System" (developed with the Django framework) was constructed, enabling one-stop service from image input to data export with an average processing time of 5 seconds per image. This study provides important technical support for rice breeding and promotes the intelligent and digital transformation of modern agricultural breeding.
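The yellowness computation is described only as a sequence of RGB normalization, gamma correction, and Hunter Lab calculation. A minimal sketch of that standard color-science pipeline is shown below, using the sRGB inverse-gamma transfer function, the sRGB-to-XYZ matrix, and Hunter Lab with the Hunter b component (positive = yellow) as the yellowness measure. The D65 white point and the Ka/Kb coefficients used here are approximate textbook values; the paper's exact formulation and constants may differ.

```python
import math

# Approximate D65 white point and Hunter Ka/Kb coefficients (assumed values).
XN, YN, ZN = 95.047, 100.0, 108.883
KA, KB = 172.30, 67.20

def srgb_to_linear(c8: int) -> float:
    """Normalize an 8-bit channel to [0, 1] and undo the sRGB gamma encoding."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_hunter_lab(r: int, g: int, b: int):
    """8-bit sRGB -> Hunter L, a, b (via linear RGB and CIE XYZ)."""
    rl, gl, bl = (srgb_to_linear(v) for v in (r, g, b))
    # Linear RGB -> CIE XYZ (sRGB/D65 matrix), scaled to 0-100.
    x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) * 100.0
    y = (0.2126 * rl + 0.7152 * gl + 0.0722 * bl) * 100.0
    z = (0.0193 * rl + 0.1192 * gl + 0.9505 * bl) * 100.0
    yr = y / YN
    sq = math.sqrt(yr) if yr > 0 else 1e-9
    hunter_l = 100.0 * sq
    hunter_a = KA * (x / XN - yr) / sq          # +red / -green
    hunter_b = KB * (yr - z / ZN) / sq          # +yellow / -blue
    return hunter_l, hunter_a, hunter_b

def yellowness(r: int, g: int, b: int) -> float:
    """Hunter b used as a yellowness score for a pixel (or image mean color)."""
    return rgb_to_hunter_lab(r, g, b)[2]
```

Averaging this score over panicle pixels gives a single yellowness value per image, which can then be thresholded into the four maturity stages; the thresholds themselves would come from the labeled training data.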

Keywords: deep learning, rice, phenotypic traits, panicle traits, precision extraction

Received: 22 Oct 2025; Accepted: 20 Nov 2025.

Copyright: © 2025 Wang, Li, Li, Ma, Shen, Li, Hu, Tang, Zhou, Wang and Zhang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence:
Zhiao Wang, wza20010922@icloud.com
Xiaoding Ma, maxiaoding@caas.cn
Yan Shen, yanshen@caas.cn
Jianhua Zhang, zhangjianhua@caas.cn

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.