ORIGINAL RESEARCH article

Front. Plant Sci.

Sec. Technical Advances in Plant Science

Volume 16 - 2025 | doi: 10.3389/fpls.2025.1443882

Automated dynamic phenotyping of whole oilseed rape (Brassica napus) plants from images collected under controlled conditions

Provisionally accepted
  • 1The Alan Turing Institute, London, United Kingdom
  • 2Zalando SE, Berlin, Germany
  • 3Rothamsted Research, Harpenden, Hertfordshire, United Kingdom
  • 4University of Cambridge, Cambridge, England, United Kingdom

The final, formatted version of the article will be published soon.

Recent advancements in sensor technologies have enabled the collection of many large, high-resolution plant image datasets that could be used to non-destructively explore the effects of genetics, environment and management factors on phenotype, the physical traits exhibited by plants. The phenotype data captured in these datasets could then be integrated into models of plant development and crop yield to more accurately predict how plants may grow under changing management practices and climate conditions, better ensuring future food security. However, automated methods capable of reliably and efficiently extracting meaningful measurements of individual plant components (e.g. leaves, flowers, pods) from imagery of whole plants are currently lacking. In this study, we explore the interdisciplinary application of MapReader, a computer vision pipeline for annotating and classifying patches of larger images that was originally developed for semantic exploration of historical maps, to time-series images of whole oilseed rape (Brassica napus) plants. Models were trained to classify five plant structures (branches, leaves, pods, flower buds and flowers) in patches derived from whole plant images, as well as background patches. Three modelling methods are compared: (i) 6-label multi-class classification; (ii) a chain of binary classifiers; and (iii) binary classification of plant versus background patches, followed by 5-label multi-class classification of plant structures. The combined plant/background binarization and 5-label multi-class approach, using a 'resnext50d_4s2x40d' model architecture for both the binary and multi-class components, was found to produce the most accurate patch classification for whole B. napus plant images (macro-averaged F1-score = 88.50, weighted average F1-score = 97.71).
This combined binary and 5-label multi-class classification approach demonstrates performance similar to that of the top-performing MapReader 'railspace' classification model. This highlights the potential applicability of the MapReader model framework to image data from across scientific and humanities domains, and the flexibility it provides in creating pipelines with different modelling approaches. The pipeline for dynamic plant phenotyping from whole plant images developed in this study could potentially be applied to imagery from varied laboratory conditions, and to image datasets of other plants of both agricultural and conservation concern.
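The best-performing approach (iii) chains two classifiers: a binary screen separating plant from background patches, then a 5-label multi-class model applied only to patches judged to be plant material. The decision logic can be sketched as follows; this is a minimal illustrative sketch, not the authors' code, and both classifier functions are hypothetical stand-ins for the trained 'resnext50d_4s2x40d' models described in the abstract.

```python
# Illustrative sketch of the combined binary + 5-label multi-class pipeline
# (approach iii). The two classifier functions below are hypothetical stubs
# standing in for trained CNN models; a real pipeline would pass image
# patches through those networks instead.

PLANT_LABELS = ["branch", "leaf", "pod", "flower bud", "flower"]

def is_plant_patch(patch):
    """Stage 1 (hypothetical stub): binary plant vs. background classifier.

    A trained model would return a probability from patch pixels; here a
    toy rule on a precomputed 'green_fraction' feature stands in for it.
    """
    return patch.get("green_fraction", 0.0) > 0.1

def classify_structure(patch):
    """Stage 2 (hypothetical stub): 5-label multi-class classifier.

    Stands in for a trained model predicting one of PLANT_LABELS.
    """
    label = patch.get("predicted_structure", "leaf")
    return label if label in PLANT_LABELS else "leaf"

def classify_patch(patch):
    """Combined pipeline: screen out background, then label the structure."""
    if not is_plant_patch(patch):
        return "background"
    return classify_structure(patch)
```

For example, a patch with little plant material is labelled "background" without ever reaching the multi-class stage, which is the efficiency argument for the two-stage design: the 5-label model only sees patches already known to contain plant structures.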

Keywords: plant phenotyping, computer vision, machine learning, image analysis, plant development

Received: 04 Jun 2024; Accepted: 14 Apr 2025.

Copyright: © 2025 Corcoran, Hosseini, Siles, Kurup and Ahnert. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Evangeline Corcoran, The Alan Turing Institute, London, United Kingdom

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.