<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Plant Sci.</journal-id>
<journal-title>Frontiers in Plant Science</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Plant Sci.</abbrev-journal-title>
<issn pub-type="epub">1664-462X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpls.2021.701038</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Plant Science</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Automatic Diagnosis of Rice Diseases Using Deep Learning</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Deng</surname> <given-names>Ruoling</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1315980/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Tao</surname> <given-names>Ming</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Xing</surname> <given-names>Hang</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1429082/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Yang</surname> <given-names>Xiuli</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1386092/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Liu</surname> <given-names>Chuang</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Liao</surname> <given-names>Kaifeng</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1429052/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Qi</surname> <given-names>Long</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1318720/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>College of Engineering, South China Agricultural University</institution>, <addr-line>Guangzhou</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>Lingnan Guangdong Laboratory of Modern Agriculture</institution>, <addr-line>Guangzhou</addr-line>, <country>China</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Angelica Galieni, Council for Agricultural and Economics Research (CREA), Italy</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Mohsen Yoosefzadeh Najafabadi, University of Guelph, Canada; Pilar Hernandez, Spanish National Research Council, Spain</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Long Qi <email>qilong&#x00040;scau.edu.cn</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Technical Advances in Plant Science, a section of the journal Frontiers in Plant Science</p></fn></author-notes>
<pub-date pub-type="epub">
<day>19</day>
<month>08</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>12</volume>
<elocation-id>701038</elocation-id>
<history>
<date date-type="received">
<day>27</day>
<month>04</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>20</day>
<month>07</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2021 Deng, Tao, Xing, Yang, Liu, Liao and Qi.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Deng, Tao, Xing, Yang, Liu, Liao and Qi</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p>Rice disease has serious negative effects on crop yield, and the correct diagnosis of rice diseases is the key to avoiding these effects. However, the existing disease diagnosis methods for rice are neither accurate nor efficient, and special equipment is often required. In this study, an automatic diagnosis method was developed and implemented in a smartphone app. The method was developed using deep learning based on a large dataset that contained 33,026 images of six types of rice diseases: leaf blast, false smut, neck blast, sheath blight, bacterial stripe disease, and brown spot. The core of the method was the Ensemble Model, in which submodels were integrated. Finally, the Ensemble Model was validated using a separate set of images. Results showed that the three best submodels were DenseNet-121, SE-ResNet-50, and ResNeSt-50, in terms of several attributes such as learning rate, precision, recall, and disease recognition accuracy. Therefore, these three submodels were selected and integrated in the Ensemble Model. The Ensemble Model minimized confusion among the different types of disease, reducing misdiagnosis. Using the Ensemble Model to diagnose the six types of rice diseases, an overall accuracy of 91% was achieved, which is considered reasonably good given the similar appearance of some rice diseases. The smartphone app allowed the client to use the Ensemble Model on the web server through a network, which was convenient and efficient for the field diagnosis of rice leaf blast, false smut, neck blast, sheath blight, bacterial stripe disease, and brown spot.</p></abstract>
<kwd-group>
<kwd>convolutional neural network</kwd>
<kwd>rice disease</kwd>
<kwd>ensemble learning</kwd>
<kwd>diagnosis</kwd>
<kwd>deep learning</kwd>
</kwd-group>
<counts>
<fig-count count="11"/>
<table-count count="2"/>
<equation-count count="6"/>
<ref-count count="49"/>
<page-count count="15"/>
<word-count count="8650"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>Rice is an important crop in agriculture. However, crop diseases can significantly reduce its yield and quality, which is a great threat to food supplies around the world. Thus, disease control is critical for rice production. The key to successful disease control is a correct and fast diagnosis of diseases, so that pesticide control measures can be applied in a timely manner. Currently, the most widely used method to diagnose rice crop diseases is manual judgment based on the appearance of diseases (Sethy et al., <xref ref-type="bibr" rid="B39">2020</xref>). However, there are not enough people with the skills to make such judgments in a timely manner. Therefore, a more efficient and convenient method for disease diagnosis of rice is required.</p>
<p>Over the past decades, researchers have used computer vision technology in agriculture for estimating crop yields (Gong et al., <xref ref-type="bibr" rid="B16">2013</xref>; Deng et al., <xref ref-type="bibr" rid="B12">2020</xref>), detecting crop nutritional deficiencies (Xu et al., <xref ref-type="bibr" rid="B44">2011</xref>; Baresel et al., <xref ref-type="bibr" rid="B6">2017</xref>; Tao et al., <xref ref-type="bibr" rid="B42">2020</xref>), estimating geometric sizes of crop (Liu et al., <xref ref-type="bibr" rid="B27">2019</xref>), and recognizing weeds (Jiang et al., <xref ref-type="bibr" rid="B23">2020</xref>). Several different approaches of computer vision have also been used for the diagnosis of crop diseases, such as image processing, pattern recognition, support vector machine, and hyperspectral detection (Ngugi et al., <xref ref-type="bibr" rid="B30">2020</xref>). Multi-spectral remote sensing images of tomato fields were used for cluster analysis to differentiate healthy tomatoes from diseased ones (Zhang et al., <xref ref-type="bibr" rid="B47">2005</xref>). The shape and texture features of rice bacterial leaf blight, sheath blight, and blast were extracted using a support vector machine. A genetic algorithm and a support vector machine were used to detect the diseased leaves of different crops (Singh and Misra, <xref ref-type="bibr" rid="B40">2017</xref>). Islam et al. (<xref ref-type="bibr" rid="B22">2018</xref>) detected the RGB value of an affected portion, and then used Naive Bayes to classify rice brown spot, bacterial blight, and blast. Infrared thermal imaging technology that provides temperature information of crop has also been used to detect tomato mosaic disease and wheat leaf rust (Zhu et al., <xref ref-type="bibr" rid="B49">2018</xref>). Although some of these existing methods could achieve reasonably high accuracies for crop disease diagnosis, most of them rely on manual extraction of disease features. 
As a result, their feature representation ability is limited, and the results are difficult to generalize. Also, some methods need special equipment that is not always readily available to users. All these drawbacks make it difficult to apply these methods for crop disease diagnosis.</p>
<p>Deep learning technology can be implemented in crop disease diagnosis methods to overcome these drawbacks. In recent years, deep learning has been widely used in image classification, object detection, and content recommendation. Researchers have already used deep learning to detect diseases of various crops. Lu et al. (<xref ref-type="bibr" rid="B28">2017a</xref>) proposed an in-field automatic disease diagnosis system, which could achieve identification and localization for wheat diseases. Ozguven and Adem (<xref ref-type="bibr" rid="B31">2019</xref>) first applied a convolutional neural network (CNN), Faster R-CNN, to images of sugar beet leaves to detect spot disease. Karlekar and Seal (<xref ref-type="bibr" rid="B25">2020</xref>) proposed SoyNet, which was applied to soybean leaf images for disease diagnosis. Deep learning also plays an important role in disease diagnosis of many other crops, such as tomato (Rangarajan et al., <xref ref-type="bibr" rid="B36">2018</xref>; Agarwal et al., <xref ref-type="bibr" rid="B1">2020</xref>), cassava (Sambasivam and Opiyo, <xref ref-type="bibr" rid="B37">2020</xref>), tulip (Polder et al., <xref ref-type="bibr" rid="B34">2019</xref>), and millet (Coulibaly et al., <xref ref-type="bibr" rid="B10">2019</xref>). Deep learning has also been applied to detecting rice crop diseases. For example, Kamal et al. (<xref ref-type="bibr" rid="B24">2019</xref>) combined a depthwise separable convolution architecture with Reduced MobileNet. Various recognition accuracies have been reported. Chen et al. (<xref ref-type="bibr" rid="B9">2020</xref>) used Enhanced VGGNet with an Inception Module through transfer learning, which had an accuracy of 92% in the classification of rice diseases. Rahman et al. (<xref ref-type="bibr" rid="B35">2020</xref>) proposed a two-stage small CNN architecture, which achieved 93.3% accuracy with smaller model sizes. Some efforts have been made to improve the accuracy. 
For instance, Picon et al. (<xref ref-type="bibr" rid="B33">2019</xref>) used a dataset of five crops, 17 diseases, and 121,955 images, then proposed three different CNN architectures that incorporate contextual non-image meta-data. Arnal Barbedo (<xref ref-type="bibr" rid="B4">2019</xref>) proposed a method of image classification based on individual lesions and spots, testing 14 plants and 79 diseases, which improved the accuracy compared with using original images.</p>
<p>Relying on a single predictive model may cause a machine learning algorithm to overfit (Ali et al., <xref ref-type="bibr" rid="B3">2014</xref>; Feng et al., <xref ref-type="bibr" rid="B15">2020</xref>). To address this problem, ensemble learning, which combines the predictions of a set of models, has been used (Dietterich, <xref ref-type="bibr" rid="B13">2000</xref>). With the development of computer technology, ensemble learning has been used for prediction in disease diagnosis (Albert, <xref ref-type="bibr" rid="B2">2020</xref>), soybean yield (Yoosefzadeh-Najafabadi et al., <xref ref-type="bibr" rid="B45">2021</xref>), protein binding hot spots (Hu et al., <xref ref-type="bibr" rid="B20">2017</xref>), and wheat grain yield (Fei et al., <xref ref-type="bibr" rid="B14">2021</xref>). Since these studies have proven the feasibility of ensemble learning, ensemble techniques were used in this research to improve the accuracy of disease diagnosis.</p>
<p>In summary, deep learning is a promising technology for disease diagnosis of various crops with which high accuracy can be achieved. Existing research on the use of deep learning for rice diseases dealt with a limited number of rice diseases. Various types of rice diseases have been observed in rice fields, such as rice leaf blast, false smut, neck blast, sheath blight, bacterial stripe disease, and brown spot. The aim of this study was to increase the accuracy, efficiency, affordability, and convenience of rice disease diagnosis. The specific objectives of this study were to (1) develop a deep learning network model for diagnosing six different types of rice diseases, (2) evaluate the performance of the model, and (3) implement the diagnosis method in a cloud-based mobile app and test it in an application.</p>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>Materials and Methods</title>
<sec>
<title>Model Development and Testing</title>
<sec>
<title>Data Acquisition</title>
<p>Deep learning requires a large number of training images to achieve good results (Barbedo, <xref ref-type="bibr" rid="B5">2018</xref>). Thus, a total of 33,026 images of rice diseases were collected over a 2-year period for the development of a disease diagnosis model. Among these images, 9,354 were of rice leaf blast, 4,876 of rice false smut, 3,894 of rice neck blast, 6,417 of rice sheath blight, 6,727 of rice bacterial stripe, and 1,758 of rice brown spot diseases. The characteristics of rice leaf blast are large spindle-shaped lesions with grayish centers and brown edges. For false smut disease, the pathogen is a fungus that infects rice flowers and turns them into rice false smut balls, which are the only visible feature of the disease. For rice neck blast disease, node and neck lesions often occur at the same time and share a similar characteristic, a blackish to grayish brown color. For rice sheath blight disease, lesions on the leaves are usually irregular in shape, and after a period of infection, the center is usually grayish white with brown edges. For rice bacterial stripe disease, young lesions ooze bacterial droplets that dry into yellow beads, which eventually develop into orange-yellow stripes on the leaves. For rice brown spot disease, the spots are initially small, round, and dark brown to purplish brown, and fully developed spots are round to elliptic with light brown to gray centers and reddish-brown edges. Example images of each disease are in the <xref ref-type="supplementary-material" rid="SM1">Supplementary Material</xref>. The images were from four locations in China: (1) Baiyun Base of The Guangdong Academy of Agricultural Sciences, Guangzhou, Guangdong, (2) Laibin, Guangxi, (3) Binyang, Guangxi, and (4) the Chinese Academy of Sciences, Hefei, Anhui. 
These images were taken using mobile phones with high resolution (more than 1 megapixel), so that the characteristics of rice diseases could be clearly captured. To prepare for model development, the images were split into a training set, a validation set, and a test set at a ratio of 7:2:1. The split was applied randomly within each of the six disease categories; the resulting numbers of images in the three sets were 23,096; 6,684; and 3,246, respectively.</p>
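The per-class 7:2:1 split described above can be sketched as a stratified shuffle. The function and file names below are illustrative, not from the original study:

```python
import random

def stratified_split(images_by_class, ratios=(0.7, 0.2, 0.1), seed=42):
    """Split each class's images into train/val/test with the given ratios."""
    rng = random.Random(seed)
    train, val, test = [], [], []
    for label, paths in images_by_class.items():
        paths = list(paths)
        rng.shuffle(paths)                 # randomize within the class
        n_train = int(len(paths) * ratios[0])
        n_val = int(len(paths) * ratios[1])
        train += [(p, label) for p in paths[:n_train]]
        val += [(p, label) for p in paths[n_train:n_train + n_val]]
        test += [(p, label) for p in paths[n_train + n_val:]]
    return train, val, test

# Class sizes as reported in the study; file names are hypothetical.
counts = {"leaf_blast": 9354, "false_smut": 4876, "neck_blast": 3894,
          "sheath_blight": 6417, "bacterial_stripe": 6727, "brown_spot": 1758}
images = {k: [f"{k}_{i}.jpg" for i in range(v)] for k, v in counts.items()}
train, val, test = stratified_split(images)
```

Because the split is applied class by class, each set keeps roughly the same disease proportions as the full dataset.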
</sec>
<sec>
<title>Image Preprocessing</title>
<p>Image preprocessing and data augmentation were performed to reduce the overfitting of models, as illustrated in <xref ref-type="fig" rid="F1">Figure 1</xref>. Before the model read an image, the short side of the image was scaled to 256 pixels, and the long side was scaled proportionally, to reduce the computational load of the model. Then, a random affine transformation was applied, which could randomly translate, rotate, scale, deform, and shear the image. At the same time, Gaussian blur and image flipping were applied randomly. Finally, the resized image was randomly cropped to a 224 &#x000D7; 224 pixel square area as the actual training image. These processes expanded the dataset and reduced the overfitting of the model on the original dataset without altering the characteristics of rice diseases.</p>
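The geometric arithmetic of the scale-then-crop steps can be illustrated in plain Python; in practice a library such as torchvision provides equivalent transforms (e.g., `Resize`, `RandomAffine`, `GaussianBlur`, `RandomCrop`). The helpers below are a hypothetical sketch of the arithmetic only, not the study's code:

```python
import random

def resize_short_side(width, height, target=256):
    """Scale so the short side equals `target`, keeping the aspect ratio."""
    scale = target / min(width, height)
    return round(width * scale), round(height * scale)

def random_crop_box(width, height, crop=224, rng=random):
    """Pick a random crop x crop square inside the resized image."""
    x = rng.randint(0, width - crop)
    y = rng.randint(0, height - crop)
    return x, y, x + crop, y + crop   # left, top, right, bottom

w, h = resize_short_side(4032, 3024)  # a typical phone-camera resolution
box = random_crop_box(w, h)
```

Because the crop location is random, each epoch sees a different 224 x 224 view of the same photograph, which is what expands the effective dataset.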
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Steps of the image preprocessing for expanding dataset and reducing the overfitting of models.</p></caption>
<graphic xlink:href="fpls-12-701038-g0001.tif"/>
</fig>
<p>Next, the mean and standard deviation of the ImageNet dataset were applied for normalization, to make the image color distribution as similar as possible across images. As the numbers of images of the different types of diseases were not equal, the under-represented rice brown spot images were over-sampled in preprocessing at a ratio of three times. This process was repeated for each training epoch; therefore, the set of images that each model read differed from epoch to epoch, and the number of image samples in the dataset was effectively increased in this way.</p>
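The 3x over-sampling of the brown spot class can be sketched by repeating its indices when building each epoch's sample list. Names and label strings are illustrative; a framework-level equivalent would be PyTorch's `WeightedRandomSampler`:

```python
import random

def epoch_indices(labels, oversample=None, seed=None):
    """Build one epoch's index list, repeating indices of over-sampled classes."""
    oversample = oversample or {"brown_spot": 3}  # hypothetical label key
    rng = random.Random(seed)
    idx = []
    for i, label in enumerate(labels):
        idx.extend([i] * oversample.get(label, 1))
    rng.shuffle(idx)  # a fresh shuffle (and sample list) every epoch
    return idx

# Tiny illustrative label list: 5 leaf blast images, 2 brown spot images.
labels = ["leaf_blast"] * 5 + ["brown_spot"] * 2
idx = epoch_indices(labels, seed=0)
```

Each brown spot image appears three times per epoch, while every other image appears once, matching the 3:1 ratio described above.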
</sec>
<sec>
<title>Convolutional Neural Network (CNN) Models</title>
<p>The structure of a convolutional neural network has a crucial influence on the performance of the final model, so it was necessary to compare the performance of different networks in the diagnosis of rice diseases. Five network structures were selected and tested: ResNet, DenseNet, SENet, ResNeXt, and ResNeSt. These networks are described below.</p>
<p>ResNet (He et al., <xref ref-type="bibr" rid="B18">2016</xref>) is a widely used network model, which uses residual blocks to enhance the depth of the CNN. The structure of the residual block is shown in <xref ref-type="fig" rid="F2">Figure 2A</xref>. By directly connecting the input and the output, ResNet can reduce the problems of vanishing and exploding gradients, thus allowing deeper networks and better results. DenseNet (Huang et al., <xref ref-type="bibr" rid="B21">2017</xref>) uses a dense connection, which connects each layer to every other layer (<xref ref-type="fig" rid="F2">Figure 2B</xref>). Since DenseNet allows features to be reused, it can generate many features with a small number of convolution kernels. As a result, it can reduce gradient loss and enhance the propagation of features, and the number of parameters is greatly reduced. SE-ResNet (Hu et al., <xref ref-type="bibr" rid="B19">2020</xref>) presents the &#x0201C;Squeeze-and-Excitation&#x0201D; block, which can establish the relationship between channels and adaptively recalibrate the channel-wise feature responses. The SE block can be added to different networks. <xref ref-type="fig" rid="F2">Figure 2C</xref> shows the SE block with ResNet. ResNeXt (Xie et al., <xref ref-type="bibr" rid="B43">2017</xref>) is an improved version of ResNet that was designed with a multi-branch architecture and grouped convolutions to make channels wider (<xref ref-type="fig" rid="F2">Figure 2D</xref>). ResNeXt can improve accuracy without increasing parameter complexity while reducing the number of hyperparameters. ResNeSt (Zhang et al., <xref ref-type="bibr" rid="B46">2020</xref>) proposes Split-Attention blocks based on SENet, SKNet, and ResNeXt, which apply attention across groups of feature maps (<xref ref-type="fig" rid="F2">Figure 2E</xref>). This structure combines channel attention and feature-map attention to improve performance without increasing the number of parameters.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Structures of different convolutional neural network (CNN) models tested. <bold>(A)</bold> Residual Block, <bold>(B)</bold> Dense Block, <bold>(C)</bold> SE Block with ResNet, <bold>(D)</bold> ResNeXt, <bold>(E)</bold> ResNeSt Block.</p></caption>
<graphic xlink:href="fpls-12-701038-g0002.tif"/>
</fig>
<p>Based on the five network structures above, five network models were selected for subsequent training: ResNet-50, DenseNet-121, SE-ResNet-50, ResNeXt-50, and ResNeSt-50. The MACs (number of multiply&#x02013;accumulate operations) and Params of these five network models are shown in <xref ref-type="table" rid="T1">Table 1</xref>. MACs is an index of the computational cost of the model, and Params is the number of model parameters. Except for DenseNet-121, the numbers of calculations and parameters of the models are very close, which means that their speed and model size are close to each other. Despite the small Params and MACs of DenseNet-121, its occupation of training resources is still close to that of the other models because of feature reuse, although it is more economical in model inference. Therefore, comparisons among these network models were largely free of confounding differences in hardware resource utilization.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Parameters of the models.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Model</bold></th>
<th valign="top" align="center"><bold>MACs (G)</bold></th>
<th valign="top" align="center"><bold>Params (M)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">ResNet-50</td>
<td valign="top" align="center">4.109</td>
<td valign="top" align="center">23.520</td>
</tr>
<tr>
<td valign="top" align="left">DenseNet-121</td>
<td valign="top" align="center">2.865</td>
<td valign="top" align="center">6.960</td>
</tr>
<tr>
<td valign="top" align="left">SE-ResNet-50</td>
<td valign="top" align="center">4.118</td>
<td valign="top" align="center">26.035</td>
</tr>
<tr>
<td valign="top" align="left">ResNeXt-50</td>
<td valign="top" align="center">4.257</td>
<td valign="top" align="center">22.992</td>
</tr>
<tr>
<td valign="top" align="left">ResNeSt-50</td>
<td valign="top" align="center">5.398</td>
<td valign="top" align="center">25.447</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>Evaluation of the Models</title>
<p>The performance of the five network models was compared so that the best models could be selected. For each network model, the results of disease prediction fell into four categories: true positive (TP), the type of disease was correctly predicted; false positive (FP), another type of disease was predicted as this disease; true negative (TN), other types of disease were correctly predicted as not being this disease; and false negative (FN), this disease was predicted as another type of disease. These outputs were used to determine the performance indicators: accuracy, precision, recall rate, F1 score, and Matthews correlation coefficient (MCC), as shown in Equations (1&#x02013;5). The accuracy and MCC were evaluated over all types of diseases, and the other indicators were evaluated for a single type of disease:</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mi>A</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>6</mml:mn></mml:mrow></mml:munderover></mml:mstyle><mml:mi>T</mml:mi><mml:msub><mml:mrow><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E2"><label>(2)</label><mml:math id="M2"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:msub><mml:mrow><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>T</mml:mi><mml:msub><mml:mrow><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mi>F</mml:mi><mml:msub><mml:mrow><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E3"><label>(3)</label><mml:math id="M3"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:msub><mml:mrow><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>T</mml:mi><mml:msub><mml:mrow><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:mi>F</mml:mi><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E4"><label>(4)</label><mml:math id="M4"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mi>F</mml:mi><mml:msub><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E5"><label>(5)</label><mml:math id="M5"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mi>M</mml:mi><mml:mi>C</mml:mi><mml:mi>C</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:mi>P</mml:mi><mml:mo>*</mml:mo><mml:mi>T</mml:mi><mml:mi>N</mml:mi><mml:mo>-</mml:mo><mml:mi>F</mml:mi><mml:mi>P</mml:mi><mml:mo>*</mml:mo><mml:mi>F</mml:mi><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:msqrt><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>T</mml:mi><mml:mi>P</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mi>F</mml:mi><mml:mi>P</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>T</mml:mi><mml:mi>P</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mi>F</mml:mi><mml:mi>N</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>T</mml:mi><mml:mi>N</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mi>F</mml:mi><mml:mi>P</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>T</mml:mi><mml:mi>N</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mi>F</mml:mi><mml:mi>N</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msqrt></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>N</italic> is the number of all test images, A is accuracy, P is precision, R is recall rate, F1 is the F1 score, <italic>i</italic> is the <italic>i</italic>th type of disease, and <italic>TP</italic><sub><italic>i</italic></sub>, <italic>FP</italic><sub><italic>i</italic></sub>, and <italic>FN</italic><sub><italic>i</italic></sub> are the numbers of true positives, false positives, and false negatives, respectively, in the <italic>i</italic>th type of disease. MCC is essentially the correlation coefficient between the observed and predicted binary classifications; it returns a value between &#x02212;1 and &#x0002B;1. The coefficient &#x0002B;1 means perfect prediction, 0 means no better than random prediction, and &#x02212;1 means complete discrepancy between prediction and observation.</p>
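Equations (2)-(5) can be checked with a small worked example; the confusion counts below are hypothetical, chosen only to illustrate the calculations:

```python
import math

def per_class_metrics(tp, fp, fn):
    """Precision, recall, and F1 for one disease class (Equations 2-4)."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)
    return p, r, f1

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient (Equation 5)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom

# Hypothetical counts for one disease class in a 1,000-image test set.
p, r, f1 = per_class_metrics(tp=90, fp=10, fn=20)
phi = mcc(tp=90, tn=880, fp=10, fn=20)
```

With these counts, precision is 0.9, recall is 9/11, and F1 works out to exactly 6/7, illustrating that F1 is the harmonic mean of precision and recall.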
<p>The loss value is another indicator used to evaluate the models. Different from the other indicators, the loss evaluates the degree of fit on the training set instead of the test set. Although it cannot directly represent the performance of the model, the fitting condition of the model can be estimated from the changes in the loss during the training process. Here, the cross-entropy loss function was selected (De Boer et al., <xref ref-type="bibr" rid="B11">2005</xref>).</p>
</sec>
<sec>
<title>Fine-Tuning of the Models</title>
<p>The models were fine-tuned using transfer learning to reduce training time. Transfer learning means applying the knowledge learned from one dataset to another, which has been proven effective for plant disease recognition (Kaya et al., <xref ref-type="bibr" rid="B26">2019</xref>; Chen et al., <xref ref-type="bibr" rid="B9">2020</xref>). Here, models fully trained on the ImageNet dataset were trained again on the rice disease dataset. Since the 1,000 classes of ImageNet do not correspond to the six rice disease categories identified in this study, the last layer of each model was modified to output six classes. Therefore, before training on rice diseases, the parameters of the models were initialized from the pre-trained models, except for the last layers. The weights of the last layers were initialized with the method of He et al. (<xref ref-type="bibr" rid="B17">2015</xref>), and the biases of the last layers were initialized from a uniform distribution.</p>
<p>After the pre-training, the models trained on the rice disease dataset were already able to extract basic features, such as the edges and contours of leaves and spots; thus, the models could converge faster. The training policies of the five models were the same: the batch size was 64, the data loader process number was eight, the max epoch was 200, the optimizer was stochastic gradient descent (SGD) with 0.9 momentum, and the initial learning rate was 0.001. To make the model converge quickly in the early stage and continue to improve in the later stage, a variable learning rate was applied. In the first five epochs, warm-up was used, i.e., the learning rate increased linearly from 0 to the initial learning rate, which enabled the model to stabilize rapidly on a large dataset. Subsequently, the learning rate decreased to 0 over 30 epochs following the cosine function, then returned to the initial learning rate, and this decrease was repeated until the max epoch was reached.</p>
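One reading of this schedule (5-epoch linear warm-up followed by repeated 30-epoch cosine cycles) can be sketched as a per-epoch function; the exact restart behavior is an assumption based on the description above:

```python
import math

def learning_rate(epoch, base_lr=0.001, warmup=5, cycle=30):
    """Per-epoch learning rate: linear warm-up, then repeating cosine decay."""
    if epoch < warmup:
        return base_lr * (epoch + 1) / warmup   # linear ramp from 0 to base_lr
    t = (epoch - warmup) % cycle                # position within a cosine cycle
    return base_lr * 0.5 * (1 + math.cos(math.pi * t / cycle))

# Schedule over the full 200-epoch run described in the text.
lrs = [learning_rate(e) for e in range(200)]
```

At the end of warm-up the rate reaches the initial 0.001; halfway through each cosine cycle it has fallen to 0.0005, and at the start of the next cycle it restarts at 0.001.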
</sec>
<sec>
<title>Ensemble Learning</title>
<p>Ensemble learning combines multiple submodels into a single model so that if one submodel fails, the others can correct its errors (Caruana et al., <xref ref-type="bibr" rid="B7">2004</xref>). In this study, ensemble learning was achieved by combining the three best network submodels, selected from the five submodels after comparing their performance. The ensemble algorithm implemented here was voting. The output of each selected network submodel was first normalized with the Softmax function (Equation 6), and then the output scores of the three submodels were averaged to obtain the final scores of all classes, as illustrated in <xref ref-type="fig" rid="F3">Figure 3</xref>. The class with the highest score was the diagnosed disease for the input image.</p>
<disp-formula id="E6"><label>(6)</label><mml:math id="M6"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mi>&#x003C3;</mml:mi><mml:msub><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mstyle mathvariant="bold"><mml:mtext>z</mml:mtext></mml:mstyle></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>z</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:msubsup><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>K</mml:mi></mml:mrow></mml:msubsup><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>z</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msup></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <bold>z</bold> is a vector of K real numbers, <italic>z</italic><sub><italic>i</italic></sub> and <italic>z</italic><sub><italic>j</italic></sub> are the <italic>i</italic>th and <italic>j</italic>th number of <bold>z</bold> respectively, and &#x003C3;(<bold>z</bold>) is the output vector whose value is between 0 and 1.</p>
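The voting scheme of Equation (6) amounts to averaging the softmax outputs of the submodels and taking the argmax. A minimal numpy sketch (the logit values in the usage example are made up for illustration):

```python
import numpy as np

def softmax(z):
    # Equation (6), with the max subtracted for numerical stability
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logits_list):
    # Average the softmax scores of the submodels; the argmax is the diagnosis
    probs = np.mean([softmax(z) for z in logits_list], axis=0)
    return probs, int(np.argmax(probs))

# Usage: three submodels' raw scores over six disease classes
probs, pred = ensemble_predict([
    np.array([0.0, 3.0, 0.0, 0.0, 0.0, 0.0]),  # one model favors class 1
    np.array([0.0, 0.0, 4.0, 0.0, 0.0, 0.0]),  # two models strongly favor class 2
    np.array([0.0, 0.0, 4.0, 0.0, 0.0, 0.0]),
])
```

Because the scores are normalized before averaging, no single submodel's logit scale can dominate the vote.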
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Architecture of the Ensemble Model for rice disease diagnosis.</p></caption>
<graphic xlink:href="fpls-12-701038-g0003.tif"/>
</fig>
</sec>
</sec>
<sec>
<title>Model Implementation and Application</title>
<p>The Ensemble Model was implemented in an app consisting of software architecture and a user interface. The software system had two parts: the client and the server. The client runs on the smartphone, while the server runs on a server computer. As the Ensemble Model was trained and run under PyTorch 1.5.0 with CUDA 9.2, which is based on the Python language (Paszke et al., <xref ref-type="bibr" rid="B32">2019</xref>), Python was chosen for the server-side development. Django, a Python-based free and open-source web framework, was used to build a stable web server. The client transmits a rice disease image to the web server. When the server receives a POST request from the client, it invokes the Ensemble Model to diagnose the image and returns the results to the client in JSON format (<xref ref-type="fig" rid="F4">Figure 4</xref>). The results include status information, the disease category, and a probability score. After the client receives the JSON data, it parses and displays the data on the screen for the user to view. This separation of front end and back end facilitates subsequent functional expansion and support for more platforms in future development.</p>
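As an illustration of the JSON payload described above, a server-side helper might look like the following. The field names (`status`, `disease`, `probability`) are assumptions based on the description, not the actual API of the app.

```python
import json

def build_response(disease, score, ok=True):
    # Hypothetical response builder mirroring the described payload:
    # status information, disease category, and probability score
    payload = {
        "status": "success" if ok else "error",
        "disease": disease,
        "probability": round(float(score), 4),
    }
    return json.dumps(payload)
```

In a Django view, a string like this (or an equivalent dict) would be returned to the client in response to the POST request, and the mobile client would parse it for display.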
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Software architecture of the system.</p></caption>
<graphic xlink:href="fpls-12-701038-g0004.tif"/>
</fig>
<p>The user interface for the mobile client was written using Flutter. Flutter is a cross-platform open-source software kit developed by Google, which can be used to develop applications for Android, iOS, Windows, Mac, Linux, and Google Fuchsia. Therefore, the app developed in this study can be used in the Android platform and also in other operating systems after some compilations.</p>
<p>To test the generalization of the Ensemble Model, the app was used to recognize rice diseases in a different test set of rice disease images sourced from Google and provided by Shenzhen SenseAgro Technology Co. Ltd. (Shenzhen, Guangdong, China). This set includes 50 images for each of the six types of disease, totaling 300 images. With these images, the performance of the Ensemble Model in practical application was evaluated. For clarity, this image set is called the independent test set, while the images split from the original dataset are called the split test set.</p>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec>
<title>Model Training and Testing Results</title>
<sec>
<title>Performance Comparisons of the Five Network Submodels</title>
<p>After fine-tuning and training, the loss value was low for all five submodels, with minimum loss values below 0.002 (<xref ref-type="fig" rid="F5">Figure 5A</xref>). The learning rate was the same for all the submodels, ranging from 0 to 0.001 (<xref ref-type="fig" rid="F5">Figure 5B</xref>). The disease diagnosis accuracy on the training set of rice disease images was high for all the submodels, meaning all had fit the training set well, although SE-ResNet-50, DenseNet-121, and ResNeSt-50 had better accuracies (over 99%) (<xref ref-type="fig" rid="F5">Figure 5C</xref>). When the submodels were applied to the validation set and test set, the disease diagnosis accuracy was also high for all the submodels, particularly for SE-ResNet-50, DenseNet-121, and ResNeSt-50, which achieved accuracies of over 99% (<xref ref-type="fig" rid="F5">Figure 5D</xref>).</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Comparisons in performance of the five different submodels. <bold>(A)</bold> Loss value, <bold>(B)</bold> learning rate, <bold>(C)</bold> validation accuracy, and <bold>(D)</bold> training accuracy.</p></caption>
<graphic xlink:href="fpls-12-701038-g0005.tif"/>
</fig>
<p>A confusion matrix is a table that makes it easy to see whether a model is mislabeling one class as another, so the performance of the five submodels can be visualized using confusion matrixes. <xref ref-type="fig" rid="F6">Figure 6</xref> shows the confusion matrixes on the split test set of images for the six types of rice diseases. The rows of the confusion matrixes are the actual types of disease, while the columns are the predicted types. The diagonal values represent correct recognitions by the model, i.e., true positives (TP) and true negatives (TN). The off-diagonal values represent incorrect recognitions, i.e., false positives (FP) and false negatives (FN), where smaller values mean fewer misrecognitions. The diagonal values were large and the other values were small, showing that all the submodels were quite effective in diagnosing the various rice diseases. The depth of the color indicates the proportion of the number at that position to the total of its row; therefore, the color on the diagonal represents the recall rate for that disease. According to the confusion matrixes, the DenseNet-121, SE-ResNet-50, and ResNeSt-50 submodels outperformed the other two submodels in avoiding confusion between diseases, especially for the leaf blast, false smut, and sheath blight rice diseases.</p>
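The construction of such a confusion matrix, and the row-normalized recall reflected by the diagonal colors, can be sketched as:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    # Rows are actual classes, columns are predicted classes (as in Figure 6)
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_class_recall(cm):
    # Diagonal over row totals: the proportion indicated by the diagonal color
    return np.diag(cm) / cm.sum(axis=1)
```

With two classes and predictions `[0, 1, 1, 1]` against truth `[0, 0, 1, 1]`, the matrix is `[[1, 1], [0, 2]]` and the recalls are 0.5 and 1.0.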
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>Confusion matrixes of the five different submodels; images used were from the split test set. <bold>(A)</bold> ResNet-50, <bold>(B)</bold> DenseNet-121, <bold>(C)</bold> SE-ResNet-50, <bold>(D)</bold> ResNeXt-50, and <bold>(E)</bold> ResNeSt-50.</p></caption>
<graphic xlink:href="fpls-12-701038-g0006.tif"/>
</fig>
<p>To further verify the confusion matrix results, the MCC of each disease for each submodel was also calculated, as shown in <xref ref-type="table" rid="T2">Table 2</xref>. According to the MCC values, the DenseNet-121, SE-ResNet-50, and ResNeSt-50 submodels outperformed the other two submodels in avoiding confusion between diseases, especially for the leaf blast, false smut, and sheath blight rice diseases.</p>
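A per-class MCC like those in Table 2 can be derived from the confusion matrix by treating each disease one-vs-rest; the sketch below uses the standard binary MCC formula (this is the conventional computation, assumed to match the authors' procedure).

```python
import numpy as np

def mcc_one_vs_rest(cm, k):
    # Treat class k as "positive" and all other classes as "negative"
    tp = cm[k, k]
    fn = cm[k].sum() - tp      # actual k, predicted as something else
    fp = cm[:, k].sum() - tp   # predicted k, actually something else
    tn = cm.sum() - tp - fn - fp
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return float(tp * tn - fp * fn) / denom if denom else 0.0
```

An MCC of 1 indicates perfect recognition of that class; values near 0 indicate performance no better than chance.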
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>MCC values of the five different submodels.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><inline-graphic xlink:href="fpls-12-701038-i0001.tif"/></th>
<th valign="top" align="center"><bold>ResNet-50</bold></th>
<th valign="top" align="center"><bold>DenseNet-121</bold></th>
<th valign="top" align="center"><bold>SE-ResNet-50</bold></th>
<th valign="top" align="center"><bold>ResNeXt-50</bold></th>
<th valign="top" align="center"><bold>ResNeSt-50</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Rice leaf blast</td>
<td valign="top" align="center">0.978</td>
<td valign="top" align="center">0.995</td>
<td valign="top" align="center">0.994</td>
<td valign="top" align="center">0.978</td>
<td valign="top" align="center">0.995</td>
</tr>
<tr>
<td valign="top" align="left">Rice false smut</td>
<td valign="top" align="center">0.986</td>
<td valign="top" align="center">0.996</td>
<td valign="top" align="center">0.995</td>
<td valign="top" align="center">0.985</td>
<td valign="top" align="center">0.996</td>
</tr>
<tr>
<td valign="top" align="left">Rice neck blast</td>
<td valign="top" align="center">0.977</td>
<td valign="top" align="center">0.997</td>
<td valign="top" align="center">0.993</td>
<td valign="top" align="center">0.976</td>
<td valign="top" align="center">0.994</td>
</tr>
<tr>
<td valign="top" align="left">Rice sheath blight</td>
<td valign="top" align="center">0.979</td>
<td valign="top" align="center">0.996</td>
<td valign="top" align="center">0.997</td>
<td valign="top" align="center">0.982</td>
<td valign="top" align="center">0.990</td>
</tr>
<tr>
<td valign="top" align="left">Rice bacterial stripe</td>
<td valign="top" align="center">0.989</td>
<td valign="top" align="center">0.993</td>
<td valign="top" align="center">0.996</td>
<td valign="top" align="center">0.983</td>
<td valign="top" align="center">0.992</td>
</tr>
<tr>
<td valign="top" align="left">Rice brown spot</td>
<td valign="top" align="center">0.949</td>
<td valign="top" align="center">0.994</td>
<td valign="top" align="center">0.988</td>
<td valign="top" align="center">0.950</td>
<td valign="top" align="center">0.991</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The precision, recall, and F1 score of each submodel for each disease were determined using Equations (2&#x02013;4). <xref ref-type="fig" rid="F7">Figure 7</xref> compares the boxplots of the precision, recall, and F1 score values for the five models: ResNet-50, DenseNet-121, SE-ResNet-50, ResNeXt-50, and ResNeSt-50. The boxplots suggest that DenseNet-121 is clearly better than the other four submodels, whether compared by precision, recall, or F1 score. After DenseNet-121, SE-ResNet-50 and ResNeSt-50 are better than ResNet-50 and ResNeXt-50 in precision, recall, and F1 score. In summary, DenseNet-121, ResNeSt-50, and SE-ResNet-50 had the best overall performance among the five submodels tested.</p>
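Per-class precision, recall, and F1 as in Equations (2&#x02013;4) can be computed one-vs-rest from the confusion-matrix counts; a minimal sketch:

```python
import numpy as np

def precision_recall_f1(cm, k):
    # Per-class metrics computed one-vs-rest from the confusion matrix
    tp = cm[k, k]
    fp = cm[:, k].sum() - tp   # other classes predicted as k
    fn = cm[k].sum() - tp      # class k predicted as something else
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

The F1 score is the harmonic mean of precision and recall, so it penalizes a submodel that trades one for the other.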
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p>Boxplots of precision, recall, and F1 score of the different submodels.</p></caption>
<graphic xlink:href="fpls-12-701038-g0007.tif"/>
</fig>
</sec>
<sec>
<title>Visualization of the Three Best Submodels</title>
<p>Based on the discussion above, the three best submodels were DenseNet-121, ResNeSt-50, and SE-ResNet-50. Their performance was further examined using three visualization methods: Grad CAM (Selvaraju et al., <xref ref-type="bibr" rid="B38">2016</xref>), Grad CAM&#x0002B;&#x0002B; (Chattopadhyay et al., <xref ref-type="bibr" rid="B8">2017</xref>), and Guided Backpropagation (Springenberg et al., <xref ref-type="bibr" rid="B41">2015</xref>). A CAM (class activation map) shows the areas most relevant to a particular category, mapped onto the original image (Zhou et al., <xref ref-type="bibr" rid="B48">2015</xref>). Grad CAM is calculated as the weighted sum of the feature maps, with weights derived from the gradients of the corresponding class, so a CAM can be generated without changing the structure of the model. Grad CAM&#x0002B;&#x0002B; is an improved version of Grad CAM that weights the output gradients at the pixel level, and it generally produces better maps than Grad CAM. Guided Backpropagation uses backpropagation to calculate the output-to-input gradient while blocking the backpropagation of negative gradients, finding the points of the image that maximize the activation of a feature; in the results, these points usually appear as the contours of features. To make the Guided Backpropagation images clearer, the images were post-processed with a high-pass filter based on the Sobel operator. Maps from these three visualization methods were generated for each of the three selected submodels on each of the six types of disease (<xref ref-type="fig" rid="F8">Figure 8</xref>). In the Grad CAM and Grad CAM&#x0002B;&#x0002B; maps, the red areas are the activation areas to which the model paid more attention during diagnosis, whereas the blue areas had no positive effect on the result. In the Guided Backpropagation maps, the contours in which the model was interested are highlighted, making the basis of the diagnosis easy to identify. Comparing the maps among the three submodels, the general shapes and locations of the active (red) areas in the Grad CAM and Grad CAM&#x0002B;&#x0002B; maps are similar. However, the boundaries of the active areas from DenseNet-121 (<xref ref-type="fig" rid="F8">Figure 8A</xref>) are not as well-defined as those from the other two submodels (<xref ref-type="fig" rid="F8">Figures 8B,C</xref>). The locations of the active areas from SE-ResNet-50 also appear to better reflect the disease locations shown in the original images (<xref ref-type="fig" rid="F8">Figure 8C</xref>). In the Guided Backpropagation maps, the contours of objects of interest from DenseNet-121 (<xref ref-type="fig" rid="F8">Figure 8A</xref>) are not as distinct as those from ResNeSt-50 (<xref ref-type="fig" rid="F8">Figure 8B</xref>), with SE-ResNet-50 (<xref ref-type="fig" rid="F8">Figure 8C</xref>) intermediate in this regard. Overall, all three selected submodels showed good disease identification ability, as visually observed, and they complement each other in the Ensemble Model.</p>
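The Grad CAM computation described above (a weighted sum of feature maps, with per-channel weights obtained by globally averaging the class gradients) can be sketched in numpy. The tensor shapes are assumptions for illustration; a real implementation would capture the feature maps and gradients from a network's last convolutional layer.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    # feature_maps, gradients: arrays of shape (channels, height, width)
    weights = gradients.mean(axis=(1, 2))              # one weight per channel
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum over channels
    return np.maximum(cam, 0.0)                        # ReLU: keep positive influence only
```

The resulting map is then upsampled to the input resolution and overlaid on the image, with red indicating high activation.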
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p>Visualization of rice disease diagnosis results from the three best submodels: <bold>(A)</bold> DenseNet-121, <bold>(B)</bold> ResNeSt-50, and <bold>(C)</bold> SE-ResNet-50.</p></caption>
<graphic xlink:href="fpls-12-701038-g0008.tif"/>
</fig>
</sec>
<sec>
<title>Performance of the Ensemble Model</title>
<p>To show the performance of the Ensemble Model, which is a combination of DenseNet-121, ResNeSt-50, and SE-ResNet-50, its confusion matrix was calculated. The diagonal of the confusion matrix showed high TP values (<xref ref-type="fig" rid="F9">Figure 9A</xref>), meaning the Ensemble Model had an accuracy of over 99%. The boxplots of the performance indicators of the Ensemble Model (precision, recall, and F1 score) are shown in <xref ref-type="fig" rid="F9">Figure 9B</xref>. The boxplots show that the Ensemble Model had no outliers in precision, recall, or F1 score, indicating that its performance in identifying diseases is very stable. These results demonstrate that the Ensemble Model performed well in recognizing all six types of rice diseases.</p>
<fig id="F9" position="float">
<label>Figure 9</label>
<caption><p>Test results of the Ensemble Model for different types of rice disease with the split test set of images. <bold>(A)</bold> The confusion matrix and <bold>(B)</bold> the boxplots of the precision, recall, and F1 score for the Ensemble model in diagnosing six rice diseases.</p></caption>
<graphic xlink:href="fpls-12-701038-g0009.tif"/>
</fig>
</sec>
</sec>
<sec>
<title>Application of the Ensemble Model</title>
<p>In the rice disease diagnosis app, the user interface is composed of several parts, as shown in <xref ref-type="fig" rid="F10">Figure 10</xref>. The main interface was for taking photos or uploading existing pictures. The photo interface was used for taking disease images and uploading them. The picture-selecting interface was used to select the existing disease pictures in the mobile phone for uploading. Considering the time required for network uploading, a wait interface was provided to improve user experience. After the client received the data returned by the server, the result interface displayed the results of the recognition of the disease image by the model.</p>
<fig id="F10" position="float">
<label>Figure 10</label>
<caption><p>Components of the user interface in the rice disease diagnosis app.</p></caption>
<graphic xlink:href="fpls-12-701038-g0010.tif"/>
</fig>
<p>To test the performance of the app in a practical application, the independent test set of images from different sources (Google images and SenseAgro) was used to verify the generalization of the Ensemble Model and the performance of the app. The boxplots of precision, recall, and F1 score for the Ensemble Model are shown in <xref ref-type="fig" rid="F11">Figure 11</xref>. The boxplots show that the Ensemble Model had a small degree of dispersion in precision, recall, and F1 score, indicating that its performance in identifying diseases is relatively stable. The F1 score varied from 0.83 to 0.97 across the different types of disease. As for the overall performance, the accuracy across all the diseases was 91%. As the F1 scores are over 0.8 and the accuracy is over 90% for all the diseases, the rice disease diagnosis app is considered effective.</p>
<fig id="F11" position="float">
<label>Figure 11</label>
<caption><p>Boxplots of precision, recall, and F1 score for the Ensemble Model, tested with the independent test set of images.</p></caption>
<graphic xlink:href="fpls-12-701038-g0011.tif"/>
</fig>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>Rice leaf blast, rice false smut, rice neck blast, rice sheath blight, rice bacterial stripe, and rice brown spot are common diseases during the growth of rice. The identification of these diseases is of practical importance and can provide ideas for the identification of other rice diseases in the future. In this study, the dataset was split into a training set, a validation set, and a test set using a ratio of 7:2:1. From the training results, the ratio made full use of the data obtained from the collection and enabled the model to learn the important features of each disease. Considering that the test set obtained from splitting this dataset has a large similarity with the training set, various disease images from different sources were collected to form an independent test set. The test results of the independent test set demonstrate that the network designed in this study is generalizable and can be applied in practice. Therefore, the division of the data set and the selection of the test set are appropriate for this study.</p>
<sec>
<title>Comparison of the Submodels</title>
<p>The convergence speeds of DenseNet-121, ResNeSt-50, and SE-ResNet-50 were high (<xref ref-type="fig" rid="F5">Figure 5</xref>): they reached a stable level after about 30 epochs, while ResNet-50 and ResNeXt-50 became relatively stable only after 100 epochs. Throughout the training process, DenseNet-121, ResNeSt-50, and SE-ResNet-50 were more accurate than ResNet-50 and ResNeXt-50, and their accuracy and loss curves were also smoother. This indicates that DenseNet-121, ResNeSt-50, and SE-ResNet-50 have faster convergence, higher accuracy, and more stable convergence states.</p>
<p>The confusion matrixes show that most diagnosis results were correct, and that some diseases were more easily misrecognized than others (<xref ref-type="fig" rid="F6">Figure 6</xref>). Some submodels confused rice leaf blast with brown spot, because the early characteristics of the two diseases are very similar: both produce small brown spots that are difficult to distinguish with the naked eye. Rice false smut and rice neck blast are also easily confused because they both appear at the ear of the rice, which could sometimes lead to misjudgment by the submodels.</p>
<p><xref ref-type="fig" rid="F7">Figure 7</xref> provides a more intuitive view of the performance of the different submodels on different diseases. DenseNet-121, ResNeSt-50, and SE-ResNet-50 perform better than the other two submodels; the gap is most pronounced for rice brown spot. Each of the three submodels has its own advantages for different diseases: DenseNet-121 performed better on rice neck blast and rice brown spot; SE-ResNet-50 performed better on rice bacterial stripe; and ResNeSt-50 was more balanced across the diseases. Therefore, considering their better performance, DenseNet-121, ResNeSt-50, and SE-ResNet-50 were selected as the submodels of the Ensemble Model.</p>
</sec>
<sec>
<title>Visualization Analysis of the Models</title>
<p>The learning behavior of the different networks for the different diseases can be seen in <xref ref-type="fig" rid="F8">Figure 8</xref>. For rice leaf blast disease, characterized by large spindle-shaped lesions with grayish centers and brown edges, all three submodels were sensitive to the whole spot area, so all of them could accurately learn the characteristics of this disease. In detail, the active areas in the Grad CAM and Grad CAM&#x0002B;&#x0002B; maps of ResNeSt-50 were the most precise, and in its Guided Backpropagation maps the spots were the most obvious. Therefore, the feature extraction of ResNeSt-50 for rice leaf blast was the best.</p>
<p>For rice false smut disease, the pathogen is a fungus that infects rice flowers and turns them into rice false smut balls, which are the only visible feature of the disease. The heatmaps of the three submodels are very similar: the part containing the rice false smut ball is focused on, while the surrounding healthy rice is ignored, which means the three submodels learned the same characteristics of rice false smut.</p>
<p>For rice neck blast disease, node and neck lesions often occur at the same time and share a similar characteristic, a blackish to grayish-brown color. DenseNet-121 and SE-ResNet-50 focus on both the neck and the node of the rice, while ResNeSt-50 focuses mainly on the node, which means that the feature extraction ability of ResNeSt-50 for rice neck blast is poor compared with the other two submodels, as it did not fully learn the characteristics of both the node and the neck.</p>
<p>For rice sheath blight disease, lesions on the leaves are usually irregular in shape; after a period of infection, the center is usually grayish-white and the edges are brown. The Grad CAM heatmaps of the three submodels are similar, and all of them attend to the lesions.</p>
<p>For rice bacterial stripe disease, bacteria ooze from young lesions and dry on the plant, leaving yellow beads, and the lesions eventually develop into orange-yellow stripes on the leaves. DenseNet-121 and SE-ResNet-50 focus on most of the spots, while ResNeSt-50 focuses only on the upper spots, which means ResNeSt-50 is weaker than the other two submodels in feature extraction for rice bacterial stripe disease.</p>
<p>For rice brown spot disease, the spots are initially small, round, and dark brown to purplish brown; fully developed spots are round to elliptic with light brown to gray centers and reddish-brown edges. DenseNet-121 performs poorly in feature learning and is sensitive to only some of the features, while the maps of the other two submodels cover most of the disease spots.</p>
<p>It should be noted that these heatmaps can only indicate which features the model paid more attention to, showing that the model learned the features of the spots rather than unrelated features. However, this does not correspond exactly to the final classification score, because classification also depends on the differences between diseases: it is not enough to learn the characteristics of one disease; learning the characteristics that distinguish the various diseases also affects the final classification performance. Therefore, although the heatmaps of some models are not perfect for some diseases, those diseases can still be well-classified.</p>
</sec>
<sec>
<title>Performance of the Ensemble Model</title>
<p>The results of the Ensemble Model tested with the split test set of images (<xref ref-type="fig" rid="F9">Figure 9</xref>) showed that combining the scores of the different models greatly reduced the confusion between diseases. This indicates that the Ensemble Model combines the advantages of each submodel, solving the problem of a single model misjudging some diseases. Meanwhile, the precision, recall, and F1 scores of the Ensemble Model were also more stable than those of any single model.</p>
<p>The F1 scores of the Ensemble Model for each disease were measured using the independent test set of images, and the overall accuracy of the Ensemble Model on the independent test set was 91% (<xref ref-type="fig" rid="F11">Figure 11</xref>). Compared with the results on the split test set, the accuracy was reduced but still high. The best recognition was of the rice sheath blight and rice bacterial stripe diseases; their indicator scores were close to one, similar to the results on the split test set. This means that the Ensemble Model generalizes best for these two diseases. The indicators for rice leaf blast, rice false smut, and rice neck blast were all around 0.9, which was mainly caused by confusion between diseases; the samples coming from different sources also had some influence. The F1 score for brown spot disease was close to 0.8. On one hand, rice brown spot had the fewest training samples of all the diseases, even though data augmentation was performed. On the other hand, rice leaf blast and rice brown spot have similar characteristics, which can easily cause confusion. In general, the performance of the Ensemble Model on the independent test set was satisfactory, indicating that the rice disease diagnosis app is reliable enough to be applied in the field.</p>
<p>Since the dataset used for training and testing in this study differs from those in previous studies, and the targeted diseases differ, a direct comparison cannot be made. However, the Ensemble Model designed in this study performed better on the split test set than previous studies did on their corresponding datasets (Lu et al., <xref ref-type="bibr" rid="B29">2017b</xref>; Rahman et al., <xref ref-type="bibr" rid="B35">2020</xref>), which indicates that the Ensemble Model is effective. The results on the independent test set also demonstrate its good generalization. Therefore, compared with previous applications, the proposed smartphone app provides higher accuracy, which is the most important performance indicator of such an application. To facilitate adoption, easy operation and simplicity are key features for farmers to quickly take up the app. Finally, cost is a barrier to the commercialization of any technology; the low cost of the app will attract many users.</p>
</sec>
</sec>
<sec sec-type="conclusions" id="s5">
<title>Conclusion</title>
<p>In this study, a dataset containing 33,026 images of six types of rice diseases was established. Based on these images, five submodels, ResNet-50, ResNeXt-50, DenseNet-121, ResNeSt-50, and SE-ResNet-50, were trained and tested, achieving over 98% accuracy and F1 scores over 0.95. Among them, DenseNet-121, SE-ResNet-50, and ResNeSt-50 performed best. Visual analysis confirmed that the submodels had learned the characteristics of the rice diseases well. Subsequently, the Ensemble Model, an integration of these three submodels, produced accurate judgments of confusable diseases, according to the confusion matrix analysis. As a result, the F1 scores reached more than 0.99 for each of the six types of disease. When tested with independently sourced images, the Ensemble Model achieved 91% accuracy, indicating that it has enough generalization ability to be implemented in a rice disease diagnosis app for field applications. With a software system that included both server and client, the smartphone app provided a high-accuracy, easy-to-operate, simple, and low-cost means of recognizing rice diseases. A limitation is that the Ensemble Model has many parameters, which may affect the speed of identification. Future studies will investigate network pruning to reduce the number of parameters.</p>
</sec>
<sec sec-type="data-availability-statement" id="s6">
<title>Data Availability Statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="s7">
<title>Author Contributions</title>
<p>RD conceptualized the experiment, selected the algorithms, collected and analyzed the data, and wrote the manuscript. MT trained the algorithms, collected and analyzed data, and wrote the manuscript. HX analyzed the data. CL and KL collected the data. XY and LQ supervised the project. All the authors discussed and revised the manuscript.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s8">
<title>Publisher&#x00027;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back><sec sec-type="supplementary-material" id="s9">
<title>Supplementary Material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fpls.2021.701038/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/fpls.2021.701038/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Image_1.JPEG" id="SM1" mimetype="image/jpeg" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Supplementary Figure 1</label>
<caption><p>Sample images illustrating disease levels.</p></caption>
</supplementary-material>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Agarwal</surname> <given-names>M.</given-names></name> <name><surname>Singh</surname> <given-names>A.</given-names></name> <name><surname>Arjaria</surname> <given-names>S.</given-names></name> <name><surname>Sinha</surname> <given-names>A.</given-names></name> <name><surname>Gupta</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>ToLeD: tomato leaf disease detection using convolution neural network</article-title>. <source>Procedia Comput. Sci.</source> <volume>167</volume>, <fpage>293</fpage>&#x02013;<lpage>301</lpage>. <pub-id pub-id-type="doi">10.1016/j.procs.2020.03.225</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Albert</surname> <given-names>B. A.</given-names></name></person-group> (<year>2020</year>). <article-title>Deep learning from limited training data: novel segmentation and ensemble algorithms applied to automatic melanoma diagnosis</article-title>. <source>IEEE Access</source> <volume>8</volume>, <fpage>31254</fpage>&#x02013;<lpage>31269</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2020.2973188</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ali</surname> <given-names>I.</given-names></name> <name><surname>Cawkwell</surname> <given-names>F.</given-names></name> <name><surname>Green</surname> <given-names>S.</given-names></name> <name><surname>Dwyer</surname> <given-names>N.</given-names></name></person-group> (<year>2014</year>). <article-title>Application of statistical and machine learning models for grassland yield estimation based on a hypertemporal satellite remote sensing time series</article-title>. <source>Int. Geosci. Remote Sens. Symp.</source> <volume>2014</volume>, <fpage>5060</fpage>&#x02013;<lpage>5063</lpage>. <pub-id pub-id-type="doi">10.1109/IGARSS.2014.6947634</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Arnal Barbedo</surname> <given-names>J. G.</given-names></name></person-group> (<year>2019</year>). <article-title>Plant disease identification from individual lesions and spots using deep learning</article-title>. <source>Biosyst. Eng.</source> <volume>180</volume>, <fpage>96</fpage>&#x02013;<lpage>107</lpage>. <pub-id pub-id-type="doi">10.1016/j.biosystemseng.2019.02.002</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barbedo</surname> <given-names>J. G. A.</given-names></name></person-group> (<year>2018</year>). <article-title>Impact of dataset size and variety on the effectiveness of deep learning and transfer learning for plant disease classification</article-title>. <source>Comput. Electron. Agric.</source> <volume>153</volume>, <fpage>46</fpage>&#x02013;<lpage>53</lpage>. <pub-id pub-id-type="doi">10.1016/j.compag.2018.08.013</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baresel</surname> <given-names>J. P.</given-names></name> <name><surname>Rischbeck</surname> <given-names>P.</given-names></name> <name><surname>Hu</surname> <given-names>Y.</given-names></name> <name><surname>Kipp</surname> <given-names>S.</given-names></name> <name><surname>Hu</surname> <given-names>Y.</given-names></name> <name><surname>Barmeier</surname> <given-names>G.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Use of a digital camera as alternative method for non-destructive detection of the leaf chlorophyll content and the nitrogen nutrition status in wheat</article-title>. <source>Comput. Electron. Agric.</source> <volume>140</volume>, <fpage>25</fpage>&#x02013;<lpage>33</lpage>. <pub-id pub-id-type="doi">10.1016/j.compag.2017.05.032</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Caruana</surname> <given-names>R.</given-names></name> <name><surname>Niculescu-Mizil</surname> <given-names>A.</given-names></name> <name><surname>Crew</surname> <given-names>G.</given-names></name> <name><surname>Ksikes</surname> <given-names>A.</given-names></name></person-group> (<year>2004</year>). <article-title>Ensemble selection from libraries of models</article-title>, in <source>Proceedings, Twenty-First International Conference on Machine Learning, ICML 2004</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM Press</publisher-name>), <fpage>137</fpage>&#x02013;<lpage>144</lpage>. <pub-id pub-id-type="doi">10.1145/1015330.1015432</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Chattopadhyay</surname> <given-names>A.</given-names></name> <name><surname>Sarkar</surname> <given-names>A.</given-names></name> <name><surname>Howlader</surname> <given-names>P.</given-names></name> <name><surname>Balasubramanian</surname> <given-names>V. N.</given-names></name></person-group> (<year>2017</year>). <article-title>Grad-CAM&#x0002B;&#x0002B;: improved visual explanations for deep convolutional networks</article-title>, in <source>Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV) 2018</source> (<publisher-loc>Lake Tahoe, NV</publisher-loc>), <fpage>839</fpage>&#x02013;<lpage>847</lpage>. <pub-id pub-id-type="doi">10.1109/WACV.2018.00097</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>J.</given-names></name> <name><surname>Chen</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>D.</given-names></name> <name><surname>Sun</surname> <given-names>Y.</given-names></name> <name><surname>Nanehkaran</surname> <given-names>Y. A.</given-names></name></person-group> (<year>2020</year>). <article-title>Using deep transfer learning for image-based plant disease identification</article-title>. <source>Comput. Electron. Agric.</source> <volume>173</volume>:<fpage>105393</fpage>. <pub-id pub-id-type="doi">10.1016/j.compag.2020.105393</pub-id><pub-id pub-id-type="pmid">33121188</pub-id></citation></ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Coulibaly</surname> <given-names>S.</given-names></name> <name><surname>Kamsu-Foguem</surname> <given-names>B.</given-names></name> <name><surname>Kamissoko</surname> <given-names>D.</given-names></name> <name><surname>Traore</surname> <given-names>D.</given-names></name></person-group> (<year>2019</year>). <article-title>Deep neural networks with transfer learning in millet crop images</article-title>. <source>Comput. Ind.</source> <volume>108</volume>, <fpage>115</fpage>&#x02013;<lpage>120</lpage>. <pub-id pub-id-type="doi">10.1016/j.compind.2019.02.003</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>De Boer</surname> <given-names>P. T.</given-names></name> <name><surname>Kroese</surname> <given-names>D. P.</given-names></name> <name><surname>Mannor</surname> <given-names>S.</given-names></name> <name><surname>Rubinstein</surname> <given-names>R. Y.</given-names></name></person-group> (<year>2005</year>). <article-title>A tutorial on the cross-entropy method</article-title>. <source>Ann. Oper. Res.</source> <volume>134</volume>, <fpage>19</fpage>&#x02013;<lpage>67</lpage>. <pub-id pub-id-type="doi">10.1007/s10479-005-5724-z</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Deng</surname> <given-names>R.</given-names></name> <name><surname>Jiang</surname> <given-names>Y.</given-names></name> <name><surname>Tao</surname> <given-names>M.</given-names></name> <name><surname>Huang</surname> <given-names>X.</given-names></name> <name><surname>Bangura</surname> <given-names>K.</given-names></name> <name><surname>Liu</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Deep learning-based automatic detection of productive tillers in rice</article-title>. <source>Comput. Electron. Agric.</source> <volume>177</volume>:<fpage>105703</fpage>. <pub-id pub-id-type="doi">10.1016/j.compag.2020.105703</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dietterich</surname> <given-names>T. G.</given-names></name></person-group> (<year>2000</year>). <article-title>Ensemble methods in machine learning</article-title>. <source>Lect. Notes Comput. Sci.</source> <volume>1857</volume>, <fpage>1</fpage>&#x02013;<lpage>15</lpage>. <pub-id pub-id-type="doi">10.1007/3-540-45014-9_1</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fei</surname> <given-names>S.</given-names></name> <name><surname>Hassan</surname> <given-names>M. A.</given-names></name> <name><surname>He</surname> <given-names>Z.</given-names></name> <name><surname>Chen</surname> <given-names>Z.</given-names></name> <name><surname>Shu</surname> <given-names>M.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Assessment of ensemble learning to predict wheat grain yield based on UAV-multispectral reflectance</article-title>. <source>Remote Sens</source>. <volume>13</volume>:<fpage>2338</fpage>. <pub-id pub-id-type="doi">10.3390/rs13122338</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Feng</surname> <given-names>L.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name> <name><surname>Ma</surname> <given-names>Y.</given-names></name> <name><surname>Du</surname> <given-names>Q.</given-names></name> <name><surname>Williams</surname> <given-names>P.</given-names></name> <name><surname>Drewry</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Alfalfa yield prediction using UAV-based hyperspectral imagery and ensemble learning</article-title>. <source>Remote Sens.</source> <volume>12</volume>:<fpage>2028</fpage>. <pub-id pub-id-type="doi">10.3390/rs12122028</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gong</surname> <given-names>A.</given-names></name> <name><surname>Yu</surname> <given-names>J.</given-names></name> <name><surname>He</surname> <given-names>Y.</given-names></name> <name><surname>Qiu</surname> <given-names>Z.</given-names></name></person-group> (<year>2013</year>). <article-title>Citrus yield estimation based on images processed by an Android mobile phone</article-title>. <source>Biosyst. Eng.</source> <volume>115</volume>, <fpage>162</fpage>&#x02013;<lpage>170</lpage>. <pub-id pub-id-type="doi">10.1016/j.biosystemseng.2013.03.009</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>He</surname> <given-names>K.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Ren</surname> <given-names>S.</given-names></name> <name><surname>Sun</surname> <given-names>J.</given-names></name></person-group> (<year>2015</year>). <article-title>Delving deep into rectifiers: surpassing human-level performance on ImageNet classification</article-title>. <source>Proc. IEEE Int. Conf. Comput. Vis.</source> <volume>2015</volume>, <fpage>1026</fpage>&#x02013;<lpage>1034</lpage>. <pub-id pub-id-type="doi">10.1109/ICCV.2015.123</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>He</surname> <given-names>K.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Ren</surname> <given-names>S.</given-names></name> <name><surname>Sun</surname> <given-names>J.</given-names></name></person-group> (<year>2016</year>). <article-title>Deep residual learning for image recognition</article-title>, in <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source> (<publisher-loc>Las Vegas, NV</publisher-loc>), <fpage>770</fpage>&#x02013;<lpage>778</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR.2016.90</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hu</surname> <given-names>J.</given-names></name> <name><surname>Shen</surname> <given-names>L.</given-names></name> <name><surname>Albanie</surname> <given-names>S.</given-names></name> <name><surname>Sun</surname> <given-names>G.</given-names></name> <name><surname>Wu</surname> <given-names>E.</given-names></name></person-group> (<year>2020</year>). <article-title>Squeeze-and-excitation networks</article-title>. <source>IEEE Trans. Pattern Anal. Mach. Intell.</source> <volume>42</volume>, <fpage>2011</fpage>&#x02013;<lpage>2023</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2019.2913372</pub-id><pub-id pub-id-type="pmid">31034408</pub-id></citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hu</surname> <given-names>S.-S.</given-names></name> <name><surname>Chen</surname> <given-names>P.</given-names></name> <name><surname>Wang</surname> <given-names>B.</given-names></name> <name><surname>Li</surname> <given-names>J.</given-names></name></person-group> (<year>2017</year>). <article-title>Protein binding hot spots prediction from sequence only by a new ensemble learning method</article-title>. <source>Amino Acids</source> <volume>49</volume>, <fpage>1773</fpage>&#x02013;<lpage>1785</lpage>. <pub-id pub-id-type="doi">10.1007/s00726-017-2474-6</pub-id><pub-id pub-id-type="pmid">28766075</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>G.</given-names></name> <name><surname>Liu</surname> <given-names>Z.</given-names></name> <name><surname>Van Der Maaten</surname> <given-names>L.</given-names></name> <name><surname>Weinberger</surname> <given-names>K. Q.</given-names></name></person-group> (<year>2017</year>). <article-title>Densely connected convolutional networks</article-title>, in <source>Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017</source> (<publisher-loc>Honolulu, HI</publisher-loc>), <fpage>2261</fpage>&#x02013;<lpage>2269</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR.2017.243</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Islam</surname> <given-names>T.</given-names></name> <name><surname>Sah</surname> <given-names>M.</given-names></name> <name><surname>Baral</surname> <given-names>S.</given-names></name> <name><surname>Roychoudhury</surname> <given-names>R.</given-names></name></person-group> (<year>2018</year>). <article-title>A faster technique on rice disease detection using image processing of affected area in agro-field</article-title>, in <source>Proceedings of the International Conference on Inventive Communication and Computational Technologies, ICICCT 2018</source> (<publisher-loc>Coimbatore</publisher-loc>: <publisher-name>Institute of Electrical and Electronics Engineers Inc.</publisher-name>), <fpage>62</fpage>&#x02013;<lpage>66</lpage>. <pub-id pub-id-type="doi">10.1109/ICICCT.2018.8473322</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jiang</surname> <given-names>H.</given-names></name> <name><surname>Zhang</surname> <given-names>C.</given-names></name> <name><surname>Qiao</surname> <given-names>Y.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name> <name><surname>Zhang</surname> <given-names>W.</given-names></name> <name><surname>Song</surname> <given-names>C.</given-names></name></person-group> (<year>2020</year>). <article-title>CNN feature based graph convolutional network for weed and crop recognition in smart farming</article-title>. <source>Comput. Electron. Agric.</source> <volume>174</volume>:<fpage>105450</fpage>. <pub-id pub-id-type="doi">10.1016/j.compag.2020.105450</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kamal</surname> <given-names>K. C.</given-names></name> <name><surname>Yin</surname> <given-names>Z.</given-names></name> <name><surname>Wu</surname> <given-names>M.</given-names></name> <name><surname>Wu</surname> <given-names>Z.</given-names></name></person-group> (<year>2019</year>). <article-title>Depthwise separable convolution architectures for plant disease classification</article-title>. <source>Comput. Electron. Agric.</source> <volume>165</volume>:<fpage>104948</fpage>. <pub-id pub-id-type="doi">10.1016/j.compag.2019.104948</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Karlekar</surname> <given-names>A.</given-names></name> <name><surname>Seal</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>SoyNet: soybean leaf diseases classification</article-title>. <source>Comput. Electron. Agric.</source> <volume>172</volume>:<fpage>105342</fpage>. <pub-id pub-id-type="doi">10.1016/j.compag.2020.105342</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kaya</surname> <given-names>A.</given-names></name> <name><surname>Keceli</surname> <given-names>A. S.</given-names></name> <name><surname>Catal</surname> <given-names>C.</given-names></name> <name><surname>Yalic</surname> <given-names>H. Y.</given-names></name> <name><surname>Temucin</surname> <given-names>H.</given-names></name> <name><surname>Tekinerdogan</surname> <given-names>B.</given-names></name></person-group> (<year>2019</year>). <article-title>Analysis of transfer learning for deep neural network based plant classification models</article-title>. <source>Comput. Electron. Agric.</source> <volume>158</volume>, <fpage>20</fpage>&#x02013;<lpage>29</lpage>. <pub-id pub-id-type="doi">10.1016/j.compag.2019.01.041</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>H.</given-names></name> <name><surname>Ma</surname> <given-names>X.</given-names></name> <name><surname>Tao</surname> <given-names>M.</given-names></name> <name><surname>Deng</surname> <given-names>R.</given-names></name> <name><surname>Bangura</surname> <given-names>K.</given-names></name> <name><surname>Deng</surname> <given-names>X.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>A plant leaf geometric parameter measurement system based on the android platform</article-title>. <source>Sensors</source> <volume>19</volume>:<fpage>1872</fpage>. <pub-id pub-id-type="doi">10.3390/s19081872</pub-id><pub-id pub-id-type="pmid">31010148</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lu</surname> <given-names>J.</given-names></name> <name><surname>Hu</surname> <given-names>J.</given-names></name> <name><surname>Zhao</surname> <given-names>G.</given-names></name> <name><surname>Mei</surname> <given-names>F.</given-names></name> <name><surname>Zhang</surname> <given-names>C.</given-names></name></person-group> (<year>2017a</year>). <article-title>An in-field automatic wheat disease diagnosis system</article-title>. <source>Comput. Electron. Agric.</source> <volume>142</volume>, <fpage>369</fpage>&#x02013;<lpage>379</lpage>. <pub-id pub-id-type="doi">10.1016/j.compag.2017.09.012</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lu</surname> <given-names>Y.</given-names></name> <name><surname>Yi</surname> <given-names>S.</given-names></name> <name><surname>Zeng</surname> <given-names>N.</given-names></name> <name><surname>Liu</surname> <given-names>Y.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name></person-group> (<year>2017b</year>). <article-title>Identification of rice diseases using deep convolutional neural networks</article-title>. <source>Neurocomputing</source> <volume>267</volume>, <fpage>378</fpage>&#x02013;<lpage>384</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2017.06.023</pub-id><pub-id pub-id-type="pmid">32550561</pub-id></citation></ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ngugi</surname> <given-names>L. C.</given-names></name> <name><surname>Abelwahab</surname> <given-names>M.</given-names></name> <name><surname>Abo-Zahhad</surname> <given-names>M.</given-names></name></person-group> (<year>2020</year>). <article-title>Recent advances in image processing techniques for automated leaf pest and disease recognition &#x02013; a review</article-title>. <source>Inf. Process. Agric.</source> <volume>4</volume>:<fpage>4</fpage>. <pub-id pub-id-type="doi">10.1016/j.inpa.2020.04.004</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ozguven</surname> <given-names>M. M.</given-names></name> <name><surname>Adem</surname> <given-names>K.</given-names></name></person-group> (<year>2019</year>). <article-title>Automatic detection and classification of leaf spot disease in sugar beet using deep learning algorithms</article-title>. <source>Phys. A Stat. Mech. Appl.</source> <volume>535</volume>:<fpage>122537</fpage>. <pub-id pub-id-type="doi">10.1016/j.physa.2019.122537</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Paszke</surname> <given-names>A.</given-names></name> <name><surname>Gross</surname> <given-names>S.</given-names></name> <name><surname>Massa</surname> <given-names>F.</given-names></name> <name><surname>Lerer</surname> <given-names>A.</given-names></name> <name><surname>Bradbury</surname> <given-names>J.</given-names></name> <name><surname>Chanan</surname> <given-names>G.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>PyTorch: an imperative style, high-performance deep learning library</article-title>. <source>Adv. Neural Inf. Process. Syst.</source> <volume>32</volume>, <fpage>8026</fpage>&#x02013;<lpage>8037</lpage>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://arxiv.org/pdf/1912.01703.pdf">https://arxiv.org/pdf/1912.01703.pdf</ext-link></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Picon</surname> <given-names>A.</given-names></name> <name><surname>Seitz</surname> <given-names>M.</given-names></name> <name><surname>Alvarez-Gila</surname> <given-names>A.</given-names></name> <name><surname>Mohnke</surname> <given-names>P.</given-names></name> <name><surname>Ortiz-Barredo</surname> <given-names>A.</given-names></name> <name><surname>Echazarra</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <article-title>Crop conditional Convolutional Neural Networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions</article-title>. <source>Comput. Electron. Agric.</source> <volume>167</volume>:<fpage>105093</fpage>. <pub-id pub-id-type="doi">10.1016/j.compag.2019.105093</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Polder</surname> <given-names>G.</given-names></name> <name><surname>Van de Westeringh</surname> <given-names>N.</given-names></name> <name><surname>Kool</surname> <given-names>J.</given-names></name> <name><surname>Khan</surname> <given-names>H. A.</given-names></name> <name><surname>Kootstra</surname> <given-names>G.</given-names></name> <name><surname>Nieuwenhuizen</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>Automatic detection of tulip breaking virus (TBV) using a deep convolutional neural network</article-title>. <source>IFAC-PapersOnLine</source> <volume>52</volume>, <fpage>12</fpage>&#x02013;<lpage>17</lpage>. <pub-id pub-id-type="doi">10.1016/j.ifacol.2019.12.482</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rahman</surname> <given-names>C. R.</given-names></name> <name><surname>Arko</surname> <given-names>P. S.</given-names></name> <name><surname>Ali</surname> <given-names>M. E.</given-names></name> <name><surname>Iqbal Khan</surname> <given-names>M. A.</given-names></name> <name><surname>Apon</surname> <given-names>S. H.</given-names></name> <name><surname>Nowrin</surname> <given-names>F.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Identification and recognition of rice diseases and pests using convolutional neural networks</article-title>. <source>Biosyst. Eng.</source> <volume>194</volume>, <fpage>112</fpage>&#x02013;<lpage>120</lpage>. <pub-id pub-id-type="doi">10.1016/j.biosystemseng.2020.03.020</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rangarajan</surname> <given-names>A. K.</given-names></name> <name><surname>Purushothaman</surname> <given-names>R.</given-names></name> <name><surname>Ramesh</surname> <given-names>A.</given-names></name></person-group> (<year>2018</year>). <article-title>Tomato crop disease classification using pre-trained deep learning algorithm</article-title>. <source>Procedia Comput. Sci.</source> <volume>133</volume>, <fpage>1040</fpage>&#x02013;<lpage>1047</lpage>. <pub-id pub-id-type="doi">10.1016/j.procs.2018.07.070</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sambasivam</surname> <given-names>G.</given-names></name> <name><surname>Opiyo</surname> <given-names>G. D.</given-names></name></person-group> (<year>2020</year>). <article-title>A predictive machine learning application in agriculture: Cassava disease detection and classification with imbalanced dataset using convolutional neural networks</article-title>. <source>Egypt. Inform. J.</source> <volume>2</volume>:<fpage>7</fpage>. <pub-id pub-id-type="doi">10.1016/j.eij.2020.02.007</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Selvaraju</surname> <given-names>R. R.</given-names></name> <name><surname>Cogswell</surname> <given-names>M.</given-names></name> <name><surname>Das</surname> <given-names>A.</given-names></name> <name><surname>Vedantam</surname> <given-names>R.</given-names></name> <name><surname>Parikh</surname> <given-names>D.</given-names></name> <name><surname>Batra</surname> <given-names>D.</given-names></name></person-group> (<year>2016</year>). <article-title>Grad-CAM: visual explanations from deep networks <italic>via</italic> gradient-based localization</article-title>. <source>arXiv [Preprint]</source>. <pub-id pub-id-type="doi">10.1109/ICCV.2017.74</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sethy</surname> <given-names>P. K.</given-names></name> <name><surname>Barpanda</surname> <given-names>N. K.</given-names></name> <name><surname>Rath</surname> <given-names>A. K.</given-names></name> <name><surname>Behera</surname> <given-names>S. K.</given-names></name></person-group> (<year>2020</year>). <article-title>Image processing techniques for diagnosing rice plant disease: a survey</article-title>. <source>Procedia Comput. Sci.</source> <volume>167</volume>, <fpage>516</fpage>&#x02013;<lpage>530</lpage>. <pub-id pub-id-type="doi">10.1016/j.procs.2020.03.308</pub-id></citation>
</ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Singh</surname> <given-names>V.</given-names></name> <name><surname>Misra</surname> <given-names>A. K.</given-names></name></person-group> (<year>2017</year>). <article-title>Detection of plant leaf diseases using image segmentation and soft computing techniques</article-title>. <source>Inf. Process. Agric.</source> <volume>4</volume>, <fpage>41</fpage>&#x02013;<lpage>49</lpage>. <pub-id pub-id-type="doi">10.1016/j.inpa.2016.10.005</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Springenberg</surname> <given-names>J. T.</given-names></name> <name><surname>Dosovitskiy</surname> <given-names>A.</given-names></name> <name><surname>Brox</surname> <given-names>T.</given-names></name> <name><surname>Riedmiller</surname> <given-names>M.</given-names></name></person-group> (<year>2015</year>). <article-title>Striving for simplicity: the all convolutional net</article-title>, in <source>3rd International Conference on Learning Representations, ICLR 2015 - Workshop Track Proceedings</source> (International Conference on Learning Representations, ICLR). Available online at: <ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/1412.6806v3">https://arxiv.org/abs/1412.6806v3</ext-link> (accessed October 19, 2020).</citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tao</surname> <given-names>M.</given-names></name> <name><surname>Ma</surname> <given-names>X.</given-names></name> <name><surname>Huang</surname> <given-names>X.</given-names></name> <name><surname>Liu</surname> <given-names>C.</given-names></name> <name><surname>Deng</surname> <given-names>R.</given-names></name> <name><surname>Liang</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Smartphone-based detection of leaf color levels in rice plants</article-title>. <source>Comput. Electron. Agric.</source> <volume>173</volume>:<fpage>105431</fpage>. <pub-id pub-id-type="doi">10.1016/j.compag.2020.105431</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Xie</surname> <given-names>S.</given-names></name> <name><surname>Girshick</surname> <given-names>R.</given-names></name> <name><surname>Doll&#x000E1;r</surname> <given-names>P.</given-names></name> <name><surname>Tu</surname> <given-names>Z.</given-names></name> <name><surname>He</surname> <given-names>K.</given-names></name></person-group> (<year>2017</year>). <article-title>Aggregated residual transformations for deep neural networks</article-title>, in <source>2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source> (<publisher-loc>Honolulu, HI</publisher-loc>), <fpage>5987</fpage>&#x02013;<lpage>5995</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR.2017.634</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>G.</given-names></name> <name><surname>Zhang</surname> <given-names>F.</given-names></name> <name><surname>Shah</surname> <given-names>S. G.</given-names></name> <name><surname>Ye</surname> <given-names>Y.</given-names></name> <name><surname>Mao</surname> <given-names>H.</given-names></name></person-group> (<year>2011</year>). <article-title>Use of leaf color images to identify nitrogen and potassium deficient tomatoes</article-title>. <source>Pattern Recognit. Lett.</source> <volume>32</volume>, <fpage>1584</fpage>&#x02013;<lpage>1590</lpage>. <pub-id pub-id-type="doi">10.1016/j.patrec.2011.04.020</pub-id></citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yoosefzadeh-Najafabadi</surname> <given-names>M.</given-names></name> <name><surname>Earl</surname> <given-names>H. J.</given-names></name> <name><surname>Tulpan</surname> <given-names>D.</given-names></name> <name><surname>Sulik</surname> <given-names>J.</given-names></name> <name><surname>Eskandari</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>Application of machine learning algorithms in plant breeding: predicting yield from hyperspectral reflectance in soybean</article-title>. <source>Front. Plant Sci.</source> <volume>11</volume>:<fpage>2169</fpage>. <pub-id pub-id-type="doi">10.3389/fpls.2020.624273</pub-id><pub-id pub-id-type="pmid">33510761</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>H.</given-names></name> <name><surname>Wu</surname> <given-names>C.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name> <name><surname>Zhu</surname> <given-names>Y.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name> <name><surname>Lin</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2020</year>). <source>ResNeSt: Split-Attention Networks</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://arxiv.org/abs/2004.08955">http://arxiv.org/abs/2004.08955</ext-link> (accessed July 9, 2020).</citation>
</ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>M.</given-names></name> <name><surname>Qin</surname> <given-names>Z.</given-names></name> <name><surname>Liu</surname> <given-names>X.</given-names></name></person-group> (<year>2005</year>). <article-title>Remote sensed spectral imagery to detect late blight in field tomatoes</article-title>. <source>Precis. Agric.</source> <volume>6</volume>, <fpage>489</fpage>&#x02013;<lpage>508</lpage>. <pub-id pub-id-type="doi">10.1007/s11119-005-5640-x</pub-id></citation>
</ref>
<ref id="B48">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Zhou</surname> <given-names>B.</given-names></name> <name><surname>Khosla</surname> <given-names>A.</given-names></name> <name><surname>Lapedriza</surname> <given-names>A.</given-names></name> <name><surname>Oliva</surname> <given-names>A.</given-names></name> <name><surname>Torralba</surname> <given-names>A.</given-names></name></person-group> (<year>2015</year>). <article-title>Learning deep features for discriminative localization</article-title>, in <source>Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. 2016-December</source>, <fpage>2921</fpage>&#x02013;<lpage>2929</lpage>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://arxiv.org/abs/1512.04150">http://arxiv.org/abs/1512.04150</ext-link> (accessed January 8, 2021).</citation>
</ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhu</surname> <given-names>W.</given-names></name> <name><surname>Chen</surname> <given-names>H.</given-names></name> <name><surname>Ciechanowska</surname> <given-names>I.</given-names></name> <name><surname>Spaner</surname> <given-names>D.</given-names></name></person-group> (<year>2018</year>). <article-title>Application of infrared thermal imaging for the rapid diagnosis of crop disease</article-title>. <source>IFAC-PapersOnLine</source> <volume>51</volume>, <fpage>424</fpage>&#x02013;<lpage>430</lpage>. <pub-id pub-id-type="doi">10.1016/j.ifacol.2018.08.184</pub-id></citation>
</ref>
</ref-list>
<fn-group>
<fn fn-type="financial-disclosure"><p><bold>Funding.</bold> The research was funded by the Natural Science Foundation of China (No. 51875217), the National Science Foundation for Young Scientists of China (No. 31801258), the Science Foundation of Guangdong for Distinguished Young Scholars (No. 2019B151502056), and the Earmarked Fund for Modern Agro-industry Technology Research System (No. CARS-01-43).</p>
</fn>
</fn-group>
</back>
</article>