ORIGINAL RESEARCH article

Front. Plant Sci.
Sec. Technical Advances in Plant Science
Volume 15 - 2024 | doi: 10.3389/fpls.2024.1365266
This article is part of the Research Topic: Vision, Learning, and Robotics: AI for Plants in the 2020s

Development of a machine vision-based weight prediction system of butterhead lettuce (Lactuca sativa L.) using deep learning models for industrial plant factory

Provisionally accepted
  • Department of Biosystems Engineering, Seoul National University, Seoul, Republic of Korea

The final, formatted version of the article will be published soon.

    Indoor agriculture, especially plant factories, has become essential because of the advantage of cultivating crops year-round to address global food shortages, and plant factories have been growing in scale as they become commercialized. To maximize yield and profit, an on-site system that non-destructively estimates the fresh weight of crops is needed to support decisions on harvest timing. However, a multi-layer growing environment shared with on-site workers is too confined and crowded for developing a high-performance system. This research developed a machine vision-based fresh weight estimation system that monitors crops from the transplant stage to harvest with less physical labor in an on-site industrial plant factory. A linear motion guide with a camera rail moving in both the x-axis and y-axis directions was built and mounted on a cultivation rack less than 35 cm tall to obtain consistent top-view images of the crops. A Raspberry Pi 4 controlled the system to capture images automatically every hour. Fresh weight was measured manually eleven times over four months to serve as the ground-truth weight for the models. The acquired images were preprocessed and used to develop weight prediction models based on manual and automatic feature extraction. The performance of the models was compared, and the best-performing model was the automatic feature extraction-based model using a convolutional neural network (CNN; ResNet18). The CNN-based model, which automatically extracts features from images, performed considerably better than any of the manual feature extraction-based models, with a coefficient of determination (R²) of 0.95 and a root mean square error (RMSE) of 8.06 g. However, a multilayer perceptron model (MLP_2) was more appropriate for on-site adoption, since its inference time was around nine times faster than the CNN's with only a slightly lower R² (0.93). Through this study, field workers in a confined indoor farming environment can measure the fresh weight of crops non-destructively and easily, helping them decide on the spot when to harvest.
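    The abstract describes a ResNet18-based regression model that predicts fresh weight directly from top-view crop images. The following is a minimal sketch of that kind of setup in PyTorch, not the authors' code: the image size, normalization constants, optimizer, learning rate, and the dummy batch are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' implementation): a ResNet18
# backbone adapted for single-output regression of fresh weight in grams.
import torch
import torch.nn as nn
from torchvision import models, transforms


def build_weight_regressor(pretrained: bool = True) -> nn.Module:
    """ResNet18 with its classification head replaced by one regression output."""
    weights = models.ResNet18_Weights.DEFAULT if pretrained else None
    backbone = models.resnet18(weights=weights)
    backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # predicted fresh weight (g)
    return backbone


# Preprocessing assumed for the hourly top-view images from the rail camera.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = build_weight_regressor()
criterion = nn.MSELoss()  # the reported RMSE is the square root of this loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch; in practice the targets are
# the manually measured ground-truth fresh weights described in the abstract.
images = torch.randn(8, 3, 224, 224)
weights_g = torch.rand(8, 1) * 200.0
optimizer.zero_grad()
loss = criterion(model(images), weights_g)
loss.backward()
optimizer.step()
```

    For on-site inference, the same backbone would be run in evaluation mode on each captured image; the abstract notes that a lighter MLP on manually extracted features traded a small drop in R² for roughly nine times faster inference.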

    Keywords: Controlled environment agriculture, Convolutional Neural Networks, Commercialized plant factory, Computer Vision, data acquisition system, indoor farming, Linear motion guide, Regression model

    Received: 04 Jan 2024; Accepted: 10 May 2024.

    Copyright: © 2024 Kim, Moon, Park, Kim and Chung. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Soo Chung, Department of Biosystems Engineering, Seoul National University, Seoul, 151-742, Republic of Korea

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.