<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="2.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Plant Sci.</journal-id>
<journal-title>Frontiers in Plant Science</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Plant Sci.</abbrev-journal-title>
<issn pub-type="epub">1664-462X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpls.2022.991191</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Plant Science</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Evaluation of residual plastic film pollution in pre-sowing cotton field using UAV imaging and semantic segmentation</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Zhai</surname>
<given-names>Zhiqiang</given-names>
</name>
<xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
<xref rid="aff2" ref-type="aff"><sup>2</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/1902669/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Chen</surname>
<given-names>Xuegeng</given-names>
</name>
<xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
<xref rid="aff2" ref-type="aff"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Zhang</surname>
<given-names>Ruoyu</given-names>
</name>
<xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
<xref rid="aff2" ref-type="aff"><sup>2</sup></xref>
<xref rid="c001" ref-type="corresp"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/1905073/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Qiu</surname>
<given-names>Fasong</given-names>
</name>
<xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
<xref rid="aff2" ref-type="aff"><sup>2</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/1949655/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Meng</surname>
<given-names>Qingjian</given-names>
</name>
<xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
<xref rid="aff2" ref-type="aff"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Yang</surname>
<given-names>Jiankang</given-names>
</name>
<xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
<xref rid="aff2" ref-type="aff"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Wang</surname>
<given-names>Haiyuan</given-names>
</name>
<xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
<xref rid="aff2" ref-type="aff"><sup>2</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>College of Mechanical and Electrical Engineering, Shihezi University</institution>, <addr-line>Shihezi</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>Key Laboratory of Northwest Agricultural Equipment, Ministry of Agriculture and Rural Affairs</institution>, <addr-line>Shihezi</addr-line>, <country>China</country></aff>
<author-notes>
<fn id="fn0001" fn-type="edited-by"><p>Edited by: Muhammad Naveed Tahir, Pir Mehr Ali Shah Arid Agriculture University, Pakistan</p></fn>
<fn id="fn0002" fn-type="edited-by"><p>Reviewed by: Rui Xu, University of Georgia, Georgia; Baohua Zhang, Nanjing Agricultural University, China; Guang Yang, Shihezi University, China; Jiangbo Li, Beijing Academy of Agriculture and Forestry Sciences, China</p></fn>
<corresp id="c001">&#x002A;Correspondence: Ruoyu Zhang, <email>zryzju@gmail.com</email></corresp>
<fn id="fn0003" fn-type="other"><p>This article was submitted to Sustainable and Intelligent Phytoprotection, a section of the journal Frontiers in Plant Science</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>09</day>
<month>09</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>13</volume>
<elocation-id>991191</elocation-id>
<history>
<date date-type="received">
<day>11</day>
<month>07</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>24</day>
<month>08</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2022 Zhai, Chen, Zhang, Qiu, Meng, Yang and Wang.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Zhai, Chen, Zhang, Qiu, Meng, Yang and Wang</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>To accurately evaluate residual plastic film pollution in pre-sowing cotton fields, a method based on a modified U-Net model was proposed in this research. Images of pre-sowing cotton fields were collected using UAV imaging from different heights under different weather conditions. Residual films were manually labelled, and the degree of residual film pollution was defined based on the residual film coverage rate. The modified U-Net model for evaluating residual film pollution was built by simplifying the U-Net model framework and introducing the inception module, and its evaluation results were compared with those of the U-Net, SegNet, and FCN models. The segmentation results showed that the modified U-Net model had the best performance, with a mean intersection over union (MIOU) of 87.53%. The segmentation results on images taken on cloudy days were better than those on images taken on sunny days, and accuracy gradually decreased with increasing image-acquiring height. The evaluation results of residual film pollution showed that the modified U-Net model outperformed the other models. The coefficient of determination (R<sup>2</sup>), root mean square error (RMSE), mean relative error (MRE) and average evaluation time per image of the modified U-Net model on the CPU were 0.9849, 0.0563, 5.33% and 4.85&#x2009;s, respectively. The results indicate that UAV imaging combined with the modified U-Net model can accurately evaluate residual film pollution. This study provides technical support for the rapid and accurate evaluation of residual plastic film pollution in pre-sowing cotton fields.</p>
</abstract>
<kwd-group>
<kwd>UAV imaging</kwd>
<kwd>deep learning</kwd>
<kwd>cotton field</kwd>
<kwd>residual film</kwd>
<kwd>pollution</kwd>
</kwd-group>
<contract-sponsor id="cn1">National Natural Science Foundation of China<named-content content-type="fundref-id">10.13039/501100001809</named-content>
</contract-sponsor>
<contract-sponsor id="cn2">Shihezi University<named-content content-type="fundref-id">10.13039/501100004317</named-content>
</contract-sponsor>
<counts>
<fig-count count="11"/>
<table-count count="2"/>
<equation-count count="9"/>
<ref-count count="25"/>
<page-count count="12"/>
<word-count count="5750"/>
</counts>
</article-meta>
</front>
<body>
<sec id="sec1" sec-type="intro">
<title>Introduction</title>
<p>Plastic film mulching is an agricultural technique that can improve soil temperature, reduce soil water loss, suppress weed growth, and improve crop water use efficiency, yield, and quality (<xref ref-type="bibr" rid="ref18">Yan et al., 2014</xref>; <xref ref-type="bibr" rid="ref17">Xue et al., 2017</xref>). However, much of the waste plastic film remains in the soil after harvesting. Because plastic film is made from polyethylene, it breaks down only slowly under natural conditions, decomposing over time into residual film and microplastics (<xref ref-type="bibr" rid="ref7">Qi et al., 2020</xref>; <xref ref-type="bibr" rid="ref20">Zhang et al., 2022</xref>); complete decomposition of plastic film in the soil requires 200 to 400&#x2009;years (<xref ref-type="bibr" rid="ref5">He et al., 2009</xref>). The increase in residual film in the soil has caused a series of serious problems, such as soil structure damage, decreased soil quality, and crop yield loss (<xref ref-type="bibr" rid="ref3">Dong et al., 2015</xref>).</p>
<p>Cotton is one of the major cash crops in the world (<xref ref-type="bibr" rid="ref1">Akter et al., 2018</xref>; <xref ref-type="bibr" rid="ref2">Alves et al., 2020</xref>). China is one of the world&#x2019;s leading cotton growers, and Xinjiang Province has become an important region for high-quality cotton production in China. In 2021, Xinjiang&#x2019;s cotton production reached 5.129 million tons, accounting for approximately 89.5 percent of China&#x2019;s total cotton output. Due to the arid climate in Xinjiang, farms have used film mulching in cotton planting for a long time. However, the accumulation of plastic film waste has caused serious white pollution to farmland (<xref ref-type="bibr" rid="ref22">Zhao et al., 2017</xref>).</p>
<p>Controlling farmland residual film pollution is a systematic undertaking. In addition to developing residual film recycling machines, efficient and accurate monitoring of residual film pollution is of great significance, as it provides a reference for reducing residual film pollution in farmland.</p>
<p>At present, residual film pollution is mostly evaluated by collecting residual films manually. For example, <xref ref-type="bibr" rid="ref21">Zhang et al. (2016)</xref> studied the status and distribution characteristics of residual film in Xinjiang and found that film thickness was significantly negatively correlated with the amount of residual film. <xref ref-type="bibr" rid="ref13">Wang et al. (2022)</xref> analyzed residual film pollution in northwest China and found that plastic debris residing in soil tends to be fragmented, which could make plastic film recovery more challenging and cause severe soil pollution. <xref ref-type="bibr" rid="ref4">He et al. (2018)</xref> and <xref ref-type="bibr" rid="ref14">Wang et al. (2018)</xref> used manual stratified sampling to monitor cotton fields with different durations of film mulching according to the weight and area of residual film. They found that residual film content increased year by year as film mulching continued and that the residual film broke down and moved into the deep soil during crop cultivation. However, manual collection of residual films, with its high labour intensity and low efficiency, cannot meet the requirements for rapid monitoring of residual film pollution. Therefore, there is an urgent need for an efficient method for evaluating farmland residual film pollution.</p>
<p>With the rapid development of UAV remote sensing and deep learning technology, UAV imaging combined with semantic segmentation has been increasingly widely used in agriculture. <xref ref-type="bibr" rid="ref23">Zhao et al. (2019)</xref> collected UAV RGB and multispectral images of rice lodging and proposed a U-shaped network-based method for rice lodging identification, finding that the Dice coefficients for RGB and multispectral images were 0.9442 and 0.9284, respectively. <xref ref-type="bibr" rid="ref26">Zou et al. (2021)</xref> proposed a weed density evaluation method using UAV imaging and modified U-Net, and the intersection over union (IOU) was 93.40%. <xref ref-type="bibr" rid="ref6">Li et al. (2022)</xref> proposed a method for high-density cotton yield estimation based on low-altitude UAV imaging and CD-SegNet. They found that the segmentation accuracy reached 90%, and the average error of the estimated yield was 6.2%.</p>
<p>In recent years, some scholars have preliminarily explored UAV imaging-based detection of plastic film-mulched areas and identification of residual film. <xref ref-type="bibr" rid="ref25">Zhu et al. (2019)</xref> proposed a method for extracting plastic film-mulched areas in farmland from UAV images; the white and black film-mulched areas were extracted from UAV remote sensing images with an accuracy of 94.84%. <xref ref-type="bibr" rid="ref9">Sun et al. (2018)</xref> proposed an approach for estimating plastic film-mulched areas based on UAV images and deep learning, in which five fully convolutional network (FCN) models were built by multiscale fusion; the optimal identification accuracy, 97%, was achieved by the FCN-4s model. <xref ref-type="bibr" rid="ref12">Tarantino and Figorito (2012)</xref> used an object-oriented nearest neighbour classification method to extract mulching information from aerial images. In addition, focusing on farmland residual film pollution, <xref ref-type="bibr" rid="ref16">Wu et al. (2020)</xref> proposed a method for identifying plastic film residue using UAV images and a segmentation algorithm. To overcome the influence of light on the accuracy of residual film identification, a pulse coupled neural network based on the S component was built, and the average identification rate was 87.49%. However, that research targeted farmland that had not been ploughed after the autumn harvest, where the residual film retained good continuity and low fragmentation.</p>
<p>Rapid detection of residual film pollution in pre-sowing cotton fields is of great significance for monitoring whether farmland meets the conditions required for sowing. Before sowing in spring, the agricultural mulch is broken into film fragments as the cotton field undergoes a series of operations, such as straw crushing, ploughing, and field preparation. Compared with detecting plastic film-mulched areas after sowing in spring or plastic film residue after harvest in autumn, evaluating residual film pollution in pre-sowing cotton fields is therefore more difficult.</p>
<p>Aiming at detecting the residual film coverage rate on the pre-sowing cotton field surface, <xref ref-type="bibr" rid="ref19">Zhai et al. (2022)</xref> proposed a detection method based on pixel blocks and machine learning; however, the mean intersection over union (MIOU) was only 71.25%, and the images were acquired by near-ground imaging, which is not convenient for rapid monitoring of residual film pollution. Therefore, this study proposes a method for evaluating residual film pollution in pre-sowing cotton fields based on UAV imaging and a deep learning semantic segmentation algorithm, aiming to achieve rapid and accurate identification of residual films in pre-sowing cotton fields. This study provides a theoretical basis for further research on equipment for the rapid and accurate evaluation of residual film pollution.</p>
</sec>
<sec id="sec2" sec-type="materials|methods">
<title>Materials and methods</title>
<sec id="sec3">
<title>Data acquisition</title>
<p>Residual film images were collected from Shihezi City, Xinjiang, China (43&#x00B0;26&#x2032;&#x2009;~&#x2009;45&#x00B0;20&#x2032;N, 84&#x00B0;58&#x2032;&#x2009;~&#x2009;86&#x00B0;24&#x2032;E, 450.8&#x2009;m a.s.l.), which has a temperate continental climate. The main crop in this area is cotton, and drip irrigation under plastic film mulching has been widely adopted in cotton planting (<xref ref-type="bibr" rid="ref15">Wang et al., 2021</xref>). The amount of mulch film (thickness: approximately 0.008&#x2009;mm) used during sowing was between 75 and 120&#x2009;kg&#x00B7;hm<sup>&#x2212;2</sup>. After harvesting in autumn, the straw was crushed and returned to the field, and the films were recovered. Ploughing and other operations were carried out in the cotton fields before sowing in spring.</p>
<p>In this study, UAV images of 20 residual plastic film-polluted cotton fields were collected using a DJI M200 aircraft (DJI Innovation Technology Co., Ltd.) equipped with a Zenmuse X4S camera from 10:00 to 19:00 on sunny and cloudy days from April 5 to April 15, 2021. The image resolution was 5,472&#x2009;&#x00D7;&#x2009;3,078 pixels. As shown in <xref rid="fig1" ref-type="fig">Figure 1</xref>, images were acquired by flying along waypoints. Each cotton field had 10 flight points in a straight line, with a distance of 20&#x2009;m between adjacent points. The flight speed of the UAV was 3&#x2009;m/s, the camera was angled at 90&#x00B0;, perpendicular to the ground, and the image-acquiring heights were 5, 7, and 9&#x2009;m. A total of 600 images were collected. The distribution of the original UAV image data of residual film in the cotton fields is shown in <xref rid="tab1" ref-type="table">Table 1</xref>. The 600 images were divided into a training set (480 images), a validation set (60 images), and a test set (60 images).</p>
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption><p>UAV image acquisition <bold>(A)</bold> and flight control parameters <bold>(B)</bold>.</p></caption>
<graphic xlink:href="fpls-13-991191-g001.tif"/>
</fig>
<table-wrap position="float" id="tab1">
<label>Table 1</label>
<caption><p>Original UAV image data distribution of residual film in cotton field.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th/>
<th align="center" valign="top">5&#x2009;m</th>
<th align="center" valign="top">7&#x2009;m</th>
<th align="center" valign="top">9&#x2009;m</th>
<th align="center" valign="top">Total</th>
</tr>
</thead>
<tbody>
<tr>
<td align="char" valign="top" char=".">Sunny</td>
<td align="center" valign="top">100</td>
<td align="center" valign="top">100</td>
<td align="center" valign="top">100</td>
<td align="center" valign="top">300</td>
</tr>
<tr>
<td align="char" valign="top" char=".">Cloudy</td>
<td align="center" valign="top">100</td>
<td align="center" valign="top">100</td>
<td align="center" valign="top">100</td>
<td align="center" valign="top">300</td>
</tr>
<tr>
<td align="char" valign="top" char=".">Total</td>
<td align="center" valign="top">200</td>
<td align="center" valign="top">200</td>
<td align="center" valign="top">200</td>
<td align="center" valign="top">600</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="sec4">
<title>Image labelling and data enhancement</title>
<p>The images were annotated using Adobe Photoshop CS5 (Adobe Systems Inc., United States): all residual films were manually outlined and filled with blue. Then, threshold segmentation was used for binarization; residual film pixels were labelled 1, and background pixels such as soil were labelled 0. The annotation results are shown in <xref rid="fig2" ref-type="fig">Figure 2</xref>.</p>
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption><p>Image labelling: <bold>(A)</bold> Original image; <bold>(B)</bold> Labeled image.</p></caption>
<graphic xlink:href="fpls-13-991191-g002.tif"/>
</fig>
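<p>As a minimal illustration of the binarization step, a blue-filled annotation can be converted to a 0/1 label mask as follows (a sketch, not the authors' code: the channel-difference rule and the <code>blue_margin</code> threshold are assumptions about how "blue-filled" pixels are separated from soil-coloured background):</p>

```python
import numpy as np

def binarize_annotation(rgb, blue_margin=50):
    """Convert a manually annotated RGB image to a 0/1 label mask.

    Pixels painted blue (blue channel clearly above red and green) are
    labelled 1 (residual film); everything else is 0 (background).
    ``blue_margin`` is an assumed threshold, not a value from the paper.
    """
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    mask = (b - np.maximum(r, g)) > blue_margin
    return mask.astype(np.uint8)

# Toy 2x2 annotation: one pure-blue pixel, three soil-coloured pixels.
img = np.array([[[0, 0, 255], [120, 100, 80]],
                [[130, 110, 90], [125, 105, 85]]], dtype=np.uint8)
mask = binarize_annotation(img)
```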
<p>As the original images were too large to use directly for training, the image resolution was resized to 1,200&#x2009;&#x00D7;&#x2009;600 pixels to accelerate model calculation. In addition, the training set was augmented during model training. In each training epoch, random cropping (size: 1,024&#x2009;&#x00D7;&#x2009;512 pixels), random horizontal flipping, random vertical flipping, and brightness adjustment were applied for data enhancement. Each training epoch thus produced 480 new training images, and 55 epochs of training were conducted, yielding a total of 26,400 enhanced images for training.</p>
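<p>The enhancement steps described above can be sketched as follows (an illustrative NumPy sketch, not the authors' pipeline, which was built on TensorFlow; the flip probabilities and the &#x00B1;30 brightness range are assumptions). The crop and flips are applied identically to the image and its label so the two stay aligned; the brightness shift is applied to the image only.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, label, crop_hw=(512, 1024)):
    """Randomly crop, flip, and brightness-adjust one image/label pair.

    crop_hw is (rows, cols), i.e. a 1,024 x 512 pixel crop from the
    1,200 x 600 resized image.
    """
    ch, cw = crop_hw
    h, w = image.shape[:2]
    y = rng.integers(0, h - ch + 1)          # random crop offset
    x = rng.integers(0, w - cw + 1)
    image, label = image[y:y + ch, x:x + cw], label[y:y + ch, x:x + cw]
    if rng.random() < 0.5:                   # random left-right flip
        image, label = image[:, ::-1], label[:, ::-1]
    if rng.random() < 0.5:                   # random up-down flip
        image, label = image[::-1], label[::-1]
    # Brightness adjustment: assumed additive shift in [-30, 30].
    delta = rng.integers(-30, 31)
    image = np.clip(image.astype(np.int16) + delta, 0, 255).astype(np.uint8)
    return image, label

img = np.zeros((600, 1200, 3), dtype=np.uint8)
lab = np.zeros((600, 1200), dtype=np.uint8)
aug_img, aug_lab = augment(img, lab)
```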
</sec>
<sec id="sec5">
<title>Residual film images segmentation network structure</title>
<p>The U-Net model is a common semantic segmentation network with a &#x201C;U&#x201D; shape (<xref ref-type="bibr" rid="ref8">Ronneberger et al., 2015</xref>; <xref ref-type="bibr" rid="ref24">Zhou et al., 2020</xref>) for image segmentation (<xref rid="fig3" ref-type="fig">Figure 3A</xref>). The left part of the network, the &#x201C;encoder,&#x201D; repeatedly applies two convolution layers followed by one down-sampling layer. The right part of the network, the &#x201C;decoder,&#x201D; uses a deconvolution layer to connect to the feature map output by the &#x201C;encoder,&#x201D; after which two convolution operations are performed. Finally, a 1&#x2009;&#x00D7;&#x2009;1 convolution maps the channels to the desired number of categories. Based on the original U-Net model, a modified U-Net model was proposed in this research (<xref rid="fig3" ref-type="fig">Figure 3B</xref>): the number of convolution layers was reduced to shorten the running time, and the inception module was introduced to increase the generalization and learning ability of the network. In the down-sampling path, the inception module replaced the ordinary 3&#x2009;&#x00D7;&#x2009;3 convolutional layer, and a 1&#x2009;&#x00D7;&#x2009;1 convolution layer was connected after the inception module to compress the input information and reduce the model size.</p>
<fig position="float" id="fig3">
<label>Figure 3</label>
<caption><p>Network structure: <bold>(A)</bold> Structure of U-Net; <bold>(B)</bold> Structure of modified U-Net.</p></caption>
<graphic xlink:href="fpls-13-991191-g003.tif"/>
</fig>
<p>Depth and width are important parameters affecting convolutional neural networks. While increasing network depth and width, the inception module also avoids an excessive number of parameters and reduces the computational cost (<xref ref-type="bibr" rid="ref10">Szegedy et al., 2015</xref>). The inception module used in this study is shown in <xref rid="fig4" ref-type="fig">Figure 4</xref>. Features of cotton fields at different scales were extracted using 1&#x2009;&#x00D7;&#x2009;1 and 3&#x2009;&#x00D7;&#x2009;3 convolutional layers; the multiscale inception module is therefore well suited to the multimorphic, multiscale, and randomly distributed residual films in pre-sowing cotton fields. In the inception module, branches of different scales and functions are fused through cascade connections, realizing the fusion of multiscale image features.</p>
<fig position="float" id="fig4">
<label>Figure 4</label>
<caption><p>Structure of the inception module.</p></caption>
<graphic xlink:href="fpls-13-991191-g004.tif"/>
</fig>
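<p>A minimal Keras sketch of such a multi-branch block is shown below. The exact branch layout and filter counts of the paper's module are not fully specified in the text, so this is an assumed arrangement of the named elements: parallel 1&#x2009;&#x00D7;&#x2009;1 and 3&#x2009;&#x00D7;&#x2009;3 convolution branches, concatenation for multiscale fusion, and a trailing 1&#x2009;&#x00D7;&#x2009;1 convolution to compress the channels.</p>

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_block(x, filters):
    """Inception-style block in the spirit of the paper's module.

    Two parallel branches extract features at different scales; their
    outputs are concatenated and compressed by a 1x1 convolution, as
    described for the down-sampling path of the modified U-Net.
    """
    b1 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    merged = layers.Concatenate()([b1, b3])  # multiscale feature fusion
    # 1x1 convolution after the module reduces channels and model size.
    return layers.Conv2D(filters, 1, padding="same", activation="relu")(merged)

inputs = tf.keras.Input(shape=(512, 1024, 3))
outputs = inception_block(inputs, 32)
model = tf.keras.Model(inputs, outputs)
```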
</sec>
<sec id="sec6">
<title>Training for residual film detection</title>
<p>The deep learning model was trained on hardware consisting of an Intel(R) Xeon(R) W-2223 CPU @ 3.60&#x2009;GHz with 128&#x2009;GB of memory and an NVIDIA GeForce RTX 3090 graphics card with 24&#x2009;GB of memory. The software environment was Windows 10, CUDA 11.2, CUDNN 8.1.1, Python 3.8, and TensorFlow-GPU 2.5.</p>
<p>To simulate the actual application scenario, the hardware and software for the residual film pollution evaluation included an Intel (R) Xeon (R) CPU E3-1230&#x2009;V2 @ 3.30&#x2009;GHz, without GPU acceleration, 16&#x2009;GB memory, Windows 10 operating system, Python 3.7, and TensorFlow-CPU 2.3.</p>
<p>In this study, in the segmentation of residual films, each pixel is classified as either a residual film pixel or a background pixel. As in other binary classification networks, the &#x201C;sparse categorical cross-entropy&#x201D; function was used as the loss function. The network was trained with gradient descent: the Adam optimizer was used with an initial learning rate of 0.001, and the batch size was 6. During iterative training, changes in accuracy and loss were recorded, and only the best model was saved. Training converged and was stopped after 55 epochs.</p>
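<p>The training configuration above can be sketched in Keras as follows. The optimizer, loss function, learning rate, batch size, and epoch count are taken from the text; the one-layer stand-in model, the checkpoint filename, and the monitored metric are illustrative assumptions.</p>

```python
import tensorflow as tf

# Stand-in for the modified U-Net: a single 1x1 convolution producing
# two class channels (residual film / background).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(2, 1, input_shape=(512, 1024, 3)),
])

# Hyperparameters from the text: Adam, initial learning rate 0.001,
# sparse categorical cross-entropy loss.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# "Only the best model was saved": a checkpoint that keeps the model
# with the lowest validation loss (filename is illustrative).
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.keras", monitor="val_loss", save_best_only=True)

# Training call as described (batch size 6, 55 epochs); commented out
# because train_ds / val_ds are not defined in this sketch.
# model.fit(train_ds, validation_data=val_ds, epochs=55, batch_size=6,
#           callbacks=[checkpoint])
```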
</sec>
<sec id="sec7">
<title>Network segmentation performance evaluation</title>
<p>In this study, the accuracy, F1-score, and mean intersection over union (MIOU) were used to assess segmentation performance. The F1-score combines precision and recall. The segmentation time and the number of model parameters were used to assess segmentation speed and model size, respectively.</p>
<disp-formula id="EQ1">
<label>(1)</label> <mml:math id="M1">
<mml:mrow>
<mml:mi mathvariant="normal">Accuracy</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="normal">TP</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="normal">TN</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">TP</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="normal">FP</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="normal">TN</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="normal">FN</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x00D7;</mml:mo>
<mml:mn>100</mml:mn>
<mml:mi>&#x0025;</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ2">
<label>(2)</label>
<mml:math id="M2">
<mml:mrow>
<mml:mi mathvariant="normal">Precision</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="normal">TP</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">TP</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="normal">FP</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x00D7;</mml:mo>
<mml:mn>100</mml:mn>
<mml:mi>&#x0025;</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ3">
<label>(3)</label>
<mml:math id="M3">
<mml:mrow>
<mml:mi mathvariant="normal">Recall</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="normal">TP</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">TP</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="normal">FN</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x00D7;</mml:mo>
<mml:mn>100</mml:mn>
<mml:mi>&#x0025;</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ4">
<label>(4)</label>
<mml:math id="M4">
<mml:mrow>
<mml:mi mathvariant="normal">F</mml:mi>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="normal">score</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#x00D7;</mml:mo>
<mml:mi mathvariant="normal">Precision</mml:mi>
<mml:mo>&#x00D7;</mml:mo>
<mml:mi mathvariant="normal">Recall</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">Precision</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="normal">Recall</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x00D7;</mml:mo>
<mml:mn>100</mml:mn>
<mml:mi>&#x0025;</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ5">
<label>(5)</label>
<mml:math id="M5">
<mml:mrow>
<mml:mi mathvariant="normal">IOU</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:mo>&#x00D7;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="normal">TP</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">TP</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="normal">FP</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="normal">FN</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="normal">TN</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">TN</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="normal">FN</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi mathvariant="normal">FP</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x00D7;</mml:mo>
<mml:mn>100</mml:mn>
<mml:mi>&#x0025;</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where TP, TN, FP, and FN denote the numbers of true positive, true negative, false positive, and false negative pixels, respectively.</p>
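<p>Equations 1&#x2013;5 can be computed directly from the four pixel counts. The following NumPy sketch (illustrative, not the authors' code) implements them for a binary mask:</p>

```python
import numpy as np

def segmentation_metrics(pred, true):
    """Accuracy, Precision, Recall, F1-score and MIOU (Equations 1-5)
    for binary masks (1 = residual film, 0 = background), in percent."""
    pred = np.asarray(pred).ravel()
    true = np.asarray(true).ravel()
    tp = int(np.sum((pred == 1) & (true == 1)))  # true positives
    tn = int(np.sum((pred == 0) & (true == 0)))  # true negatives
    fp = int(np.sum((pred == 1) & (true == 0)))  # false positives
    fn = int(np.sum((pred == 0) & (true == 1)))  # false negatives
    accuracy = (tp + tn) / (tp + fp + tn + fn) * 100      # Equation 1
    precision = tp / (tp + fp) * 100                      # Equation 2
    recall = tp / (tp + fn) * 100                         # Equation 3
    f1 = 2 * precision * recall / (precision + recall)    # Equation 4
    # Equation 5: mean of the IOUs of the film and background classes.
    miou = 0.5 * (tp / (tp + fp + fn) + tn / (tn + fn + fp)) * 100
    return accuracy, precision, recall, f1, miou

acc, p, r, f1, miou = segmentation_metrics([1, 1, 0, 0], [1, 0, 0, 0])
```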
</sec>
<sec id="sec8">
<title>Evaluation of residual film pollution</title>
<p>The residual film coverage rate was used as the evaluation index of residual film pollution. For an image of size M&#x2009;&#x00D7;&#x2009;N, the residual film coverage rate L is the ratio of the number of residual film pixels [p(x, y) = 1] to the total number of pixels in the image (<xref ref-type="disp-formula" rid="EQ6">Equation 6</xref>).</p>
<disp-formula id="EQ6">
<label>(6)</label>
<mml:math id="M6">
<mml:mrow>
<mml:mi mathvariant="normal">L</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi mathvariant="normal">x</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="normal">y</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">M</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="normal">N</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mi mathvariant="normal">p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="normal">x,y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="normal">M</mml:mi>
<mml:mo>&#x00D7;</mml:mo>
<mml:mi mathvariant="normal">N</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x00D7;</mml:mo>
<mml:mn>100</mml:mn>
<mml:mi>&#x0025;</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
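<p>Equation 6 amounts to the fraction of residual film pixels in the predicted binary mask. A NumPy sketch (illustrative only; the mask layout is assumed to follow the 0/1 labelling described above):</p>

```python
import numpy as np

def coverage_rate(mask):
    """Residual film coverage rate L (Equation 6): the proportion of
    residual film pixels [p(x, y) = 1] in an M x N binary mask, in %."""
    mask = np.asarray(mask)
    return float(mask.sum()) / mask.size * 100

# Example: 6 of 600 rows fully covered by film -> L = 1.0 %.
mask = np.zeros((600, 1200), dtype=np.uint8)
mask[:6, :] = 1
L = coverage_rate(mask)
```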
<p>To test the accuracy of the modified U-Net in residual film pollution evaluation, the L values of the 60 test images were calculated. Then, the relationship between the predicted residual film coverage rate (L<sub>1</sub>) and the true residual film coverage rate (L<sub>2</sub>) was evaluated by regression analysis. The coefficient of determination (R<sup>2</sup>), root mean square error (RMSE), and mean relative error (MRE) were selected as evaluation indexes.</p>
<disp-formula id="EQ7">
<label>(7)</label>
<mml:math id="M7">
<mml:mrow>
<mml:msup>
<mml:mi mathvariant="normal">R</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi mathvariant="normal">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi mathvariant="normal">N</mml:mi>
</mml:msubsup>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="normal">L</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi mathvariant="normal">L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi mathvariant="normal">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi mathvariant="normal">N</mml:mi>
</mml:msubsup>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="normal">L</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:mover accent="true">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="normal">L</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="true">&#x00AF;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ8">
<label>(8)</label>
<mml:math id="M8">
<mml:mrow>
<mml:mi mathvariant="normal">RMSE</mml:mi>
<mml:mo>=</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi mathvariant="normal">N</mml:mi>
</mml:mfrac>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi mathvariant="normal">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi mathvariant="normal">N</mml:mi>
</mml:munderover>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="normal">L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi mathvariant="normal">L</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:math></disp-formula>
<disp-formula id="EQ9">
<label>(9)</label>
<mml:math id="M9">
<mml:mrow>
<mml:mi mathvariant="normal">MRE</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi mathvariant="normal">N</mml:mi>
</mml:mfrac>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi mathvariant="normal">i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi mathvariant="normal">N</mml:mi>
</mml:munderover>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="normal">L</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi mathvariant="normal">L</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>|</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="normal">L</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x00D7;</mml:mo>
<mml:mn>100</mml:mn>
<mml:mi>&#x0025;</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where L<sub>1</sub> and L<sub>2</sub> are the i-th predicted and true L values of the N samples, respectively.</p>
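The three evaluation indices in Eqs. (7)–(9) can be computed directly from their definitions; a minimal Python sketch (the coverage-rate values below are hypothetical, for illustration only, not data from the study):

```python
import math

def evaluation_indices(l_pred, l_true):
    """R^2, RMSE, and MRE (%) between predicted (L1) and true (L2)
    residual film coverage rates, following Eqs. (7)-(9)."""
    n = len(l_true)
    mean_true = sum(l_true) / n
    ss_res = sum((t - p) ** 2 for p, t in zip(l_pred, l_true))
    ss_tot = sum((t - mean_true) ** 2 for t in l_true)
    r2 = 1.0 - ss_res / ss_tot                       # Eq. (7)
    rmse = math.sqrt(ss_res / n)                     # Eq. (8)
    mre = 100.0 * sum(abs(p - t) / t                 # Eq. (9)
                      for p, t in zip(l_pred, l_true)) / n
    return r2, rmse, mre

# Hypothetical coverage rates (%), for illustration only
pred = [2.1, 3.4, 1.8, 4.0]
true = [2.0, 3.5, 2.0, 4.2]
r2, rmse, mre = evaluation_indices(pred, true)
```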
</sec>
</sec>
<sec id="sec9" sec-type="results">
<title>Results</title>
<sec id="sec10">
<title>Training process of the modified U-Net</title>
<p><xref rid="fig5" ref-type="fig">Figure 5</xref> shows the changes in loss and accuracy on the training and validation sets as the number of iterations increased during model training. The training and validation curves followed the same trend: the loss first dropped and then stabilized, while the accuracy first rose and then stabilized. After approximately 10 epochs of training, both loss and accuracy remained stable. Furthermore, the loss and accuracy of the training and validation sets remained close throughout, indicating no over-fitting. After 55 epochs of iterative training, both the loss and the accuracy had converged, indicating that the model achieved good training results. At the end of training, the loss and accuracy on the validation set were 0.0037 and 99.85%, respectively.</p>
<fig position="float" id="fig5">
<label>Figure 5</label>
<caption><p>Loss and accuracy changes during training: <bold>(A)</bold> Training and validation loss; <bold>(B)</bold> Training and validation accuracy.</p></caption>
<graphic xlink:href="fpls-13-991191-g005.tif"/>
</fig>
</sec>
<sec id="sec11">
<title>Residual film segmentation results</title>
<sec id="sec12">
<title>Segmentation results of different models</title>
<p>The modified U-Net model was compared with state-of-the-art methods, namely SegNet, FCN, and U-Net. The segmentation results of the different models are shown in <xref rid="tab2" ref-type="table">Table 2</xref>. The modified U-Net model had the best performance and prediction accuracy on the test set. Its accuracy was 99.72%, which was 0.25, 0.04, and 0.03 percentage points higher than that of SegNet, FCN, and U-Net, respectively. Its F1-score was 85.59%, which was 14.35, 2.91, and 1.83 percentage points higher than that of SegNet, FCN, and U-Net, respectively. Its MIOU was 87.53%, which was 10.02, 2.23, and 1.39 percentage points higher than that of SegNet, FCN, and U-Net, respectively. In terms of segmentation speed, the modified U-Net model had the shortest average segmentation time per image, 192.50&#x2009;ms, and the fewest parameters, 3.14&#x2009;&#x00D7;&#x2009;10<sup>6</sup>, approximately one tenth of those of the original U-Net model. Therefore, the modified U-Net model improved both the accuracy and the speed of residual film segmentation, facilitating rapid and accurate identification of residual film.</p>
<table-wrap position="float" id="tab2">
<label>Table 2</label>
<caption><p>Segmentation results of different models.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Model</th>
<th align="center" valign="top">Accuracy (%)</th>
<th align="center" valign="top">F1-score (%)</th>
<th align="center" valign="top">MIOU (%)</th>
<th align="center" valign="top">Time (ms)</th>
<th align="center" valign="top">Parameters (10<sup>6</sup>)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="char" valign="top" char=".">SegNet</td>
<td align="char" valign="top" char=".">99.47</td>
<td align="char" valign="top" char=".">71.24</td>
<td align="char" valign="top" char=".">77.51</td>
<td align="char" valign="top" char=".">251.33</td>
<td align="char" valign="top" char=".">31.82</td>
</tr>
<tr>
<td align="char" valign="top" char=".">FCN</td>
<td align="char" valign="top" char=".">99.68</td>
<td align="char" valign="top" char=".">82.68</td>
<td align="char" valign="top" char=".">85.3</td>
<td align="char" valign="top" char=".">204.83</td>
<td align="char" valign="top" char=".">26.37</td>
</tr>
<tr>
<td align="char" valign="top" char=".">U-Net</td>
<td align="char" valign="top" char=".">99.69</td>
<td align="char" valign="top" char=".">83.76</td>
<td align="char" valign="top" char=".">86.14</td>
<td align="char" valign="top" char=".">245.17</td>
<td align="char" valign="top" char=".">31.06</td>
</tr>
<tr>
<td align="char" valign="top" char=".">Modified U-Net</td>
<td align="char" valign="top" char=".">99.72</td>
<td align="char" valign="top" char=".">85.59</td>
<td align="char" valign="top" char=".">87.53</td>
<td align="char" valign="top" char=".">192.50</td>
<td align="char" valign="top" char=".">3.14</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="sec13">
<title>Residual film segmentation results in different weather conditions</title>
<p>To study the influence of weather conditions on the segmentation results, the segmentation of cotton field images acquired in sunny and cloudy weather was compared (<xref rid="fig6" ref-type="fig">Figure 6</xref>). Regardless of the model used, segmentation performance on images acquired on cloudy days was better than on sunny days. <xref rid="fig7" ref-type="fig">Figure 7</xref> shows the MIOU of the different models for images acquired under different weather conditions. Under the same weather conditions, the SegNet model had the worst segmentation performance, followed by the FCN and U-Net models. The modified U-Net model performed best, with MIOU reaching 85.44 and 89.63% on sunny and cloudy days, respectively.</p>
<fig position="float" id="fig6">
<label>Figure 6</label>
<caption><p>Residual film segmentation results under different weather conditions.</p></caption>
<graphic xlink:href="fpls-13-991191-g006.tif"/>
</fig>
<fig position="float" id="fig7">
<label>Figure 7</label>
<caption><p>MIOU of different models under different weather conditions.</p></caption>
<graphic xlink:href="fpls-13-991191-g007.tif"/>
</fig>
</sec>
<sec id="sec14">
<title>Segmentation of images acquired at different heights</title>
<p>To study the effect of image acquisition height on the residual film segmentation results, the segmentation of images acquired at heights of 5, 7, and 9&#x2009;m was compared (<xref rid="fig8" ref-type="fig">Figure 8</xref>). Segmentation performance gradually decreased as height increased. <xref rid="fig9" ref-type="fig">Figure 9</xref> shows the MIOU of the different models for images acquired at different heights. At each height, the SegNet model had the worst identification results, followed by the FCN and U-Net models. The modified U-Net model achieved the best results, with MIOU reaching 90.55, 87.72, and 84.32% at 5, 7, and 9&#x2009;m, respectively.</p>
<fig position="float" id="fig8">
<label>Figure 8</label>
<caption><p>Residual film segmentation results of images acquired at different heights.</p></caption>
<graphic xlink:href="fpls-13-991191-g008.tif"/>
</fig>
<fig position="float" id="fig9">
<label>Figure 9</label>
<caption><p>MIOU of different models based on the images acquired at different heights.</p></caption>
<graphic xlink:href="fpls-13-991191-g009.tif"/>
</fig>
</sec>
</sec>
<sec id="sec15">
<title>Residual film pollution evaluation results</title>
<p>The regression analysis results comparing the UAV image-based evaluation with the manual evaluation for the different models are shown in <xref rid="fig10" ref-type="fig">Figure 10</xref>. The regression result of the modified U-Net model was slightly better than that of the other models, with a regression equation of y&#x2009;=&#x2009;0.9477x&#x2009;+&#x2009;0.7305. The R<sup>2</sup>, RMSE, and MRE were 0.9849, 0.0563, and 5.33%, respectively. Moreover, the intercepts of the regression equations of all models were positive.</p>
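The slope and intercept of such a regression line can be obtained by ordinary least squares. A minimal sketch, using hypothetical coverage-rate values rather than the study's measurements:

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Slope: covariance of x and y divided by variance of x
    a = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
    b = mean_y - a * mean_x
    return a, b

# Hypothetical true (manual) vs. predicted (UAV-based) coverage rates (%)
true_rate = [1.0, 2.0, 3.0]
pred_rate = [2.1, 3.9, 6.0]
slope, intercept = fit_line(true_rate, pred_rate)
```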
<fig position="float" id="fig10">
<label>Figure 10</label>
<caption><p>Regression analysis results of the UAV image-based evaluation and manual evaluation: <bold>(A)</bold> SegNet; <bold>(B)</bold> FCN; <bold>(C)</bold> U-Net; <bold>(D)</bold> Modified U-Net.</p></caption>
<graphic xlink:href="fpls-13-991191-g010.tif"/>
</fig>
<p>The average evaluation time for the 60 test-set images on the CPU was statistically analyzed for each model, and the evaluation times differed only slightly. The time required to evaluate residual film pollution on the CPU is shown in <xref rid="fig11" ref-type="fig">Figure 11</xref>. The modified U-Net model had the shortest average evaluation time, 4.85&#x2009;s, which was 41.07% less than that of the U-Net model.</p>
<fig position="float" id="fig11">
<label>Figure 11</label>
<caption><p>Time required by different models for residual film evaluation on the CPU.</p></caption>
<graphic xlink:href="fpls-13-991191-g011.tif"/>
</fig>
</sec>
</sec>
<sec id="sec16" sec-type="discussions">
<title>Discussion</title>
<p>This study identified residual film and evaluated residual film pollution in cotton fields before sowing using low-altitude UAV imaging and deep learning. Based on the traditional U-Net model, a residual film semantic segmentation model with a modified U-Net structure was proposed. This model effectively segmented residual film from UAV images; the MIOU of the residual film recognition results reached 87.53%, which was 16.28 percentage points higher than that of the residual film pixel-block identification method (<xref ref-type="bibr" rid="ref19">Zhai et al., 2022</xref>). In this study, the residual film coverage rate was used to evaluate residual film pollution, and a rapid and accurate evaluation was achieved based on the semantic segmentation results. The R<sup>2</sup> of the modified U-Net model was 0.9849, the RMSE was 0.0563, the MRE was 5.33%, and the average evaluation time per image was 4.85&#x2009;s on the CPU. These results indicate that the modified U-Net model can rapidly and accurately evaluate residual film pollution.</p>
<p>The residual film pollution evaluation method proposed in this study was mainly designed to identify residual film on the surface of cotton fields before sowing and to evaluate the degree of pollution based on the proportion of residual film pixels. Because the surface of a cotton field includes residual film, soil, straw, drip irrigation belts, etc., labelling each item pixel by pixel is very difficult. Therefore, during labelling, only residual film was manually labelled (1), while soil, straw, and other items were marked as non-residual film (0), so the network performed a binary classification. Owing to soil adhering to the surface of the residual film, reflections from soil blocks, and other factors, both false positive (FP) and false negative (FN) detections occurred in this study. An FP occurs when the segmentation model mistakenly identifies soil, straw, or other samples as residual film; an FN occurs when the model mistakenly identifies residual film as soil, straw, etc.</p>
<p>This study proposed a residual film semantic segmentation model based on a modified U-Net. The segmentation task is a binary classification: distinguishing residual film from non-residual film. Therefore, the feature extraction of the traditional U-Net model was simplified to reduce the number of parameters and speed up computation. Moreover, a multiscale feature extraction inception module was introduced to achieve accurate segmentation of residual films of different sizes by fusing multiscale image features. This modified network may not perform as well on other, more complex images, but it outperformed several traditional semantic segmentation models, including U-Net, SegNet, and FCN, on this task.</p>
<p>This study compared identification performance on sunny and cloudy days and found that performance on cloudy days was better. This may be because, on sunny days, reflections from soil blocks cause them to be misjudged as residual film. In addition, comparing the effect of image acquisition height on residual film segmentation showed that the lower the height, the better the segmentation, possibly because images acquired at lower heights have higher definition. However, when the height was too low, wind from the UAV&#x2019;s rotors could blow residual film away, affecting the pollution evaluation. Therefore, in practical applications, the UAV flight height should be chosen with care while ensuring image definition.</p>
<p>The residual film pollution evaluation method in this paper has application value for the control of residual film pollution. The evaluation system achieves a rapid and accurate evaluation of residual film pollution, and rapid evaluation of the degree of pollution can provide a reference for objectively assessing the seeding suitability of cotton fields during the spring sowing stage. In addition, this study provides theoretical support for detecting residual film pollution in the cotton field plough layer using UAV imaging: by studying the correlation between surface and plough-layer residual film pollution, rapid prediction of plough-layer pollution could be realized. Compared with manual sampling for monitoring residual film pollution, the approach in this study saves manpower and reduces time costs.</p>
</sec>
<sec id="sec17" sec-type="conclusions">
<title>Conclusion</title>
<p>In this paper, images of residual film pollution in pre-sowing cotton fields were collected by a UAV imaging system. A residual film segmentation model was built by modifying the U-Net model. Finally, residual film pollution was evaluated based on the residual film coverage rate. Analysis of the test results showed the following:</p>
<list list-type="simple">
<list-item>
<p>(1) A modified U-Net model was proposed by simplifying the U-Net model and introducing an inception module; it achieved accurate segmentation of residual film in cotton fields before sowing, with a segmentation MIOU of 87.53%.</p>
</list-item>
<list-item>
<p>(2) The identification performance on cloudy days was better than that on sunny days, and the identification performance for residual film gradually decreased with increasing image acquisition height.</p>
</list-item>
<list-item>
<p>(3) The modified U-Net model outperformed the other models in residual film pollution evaluation, with an R<sup>2</sup> of 0.9849, an RMSE of 0.0563, an MRE of 5.33%, and an average evaluation time per image of 4.85&#x2009;s on the CPU.</p>
</list-item>
<list-item>
<p>(4) This study provides a theoretical reference for further development of evaluation technology and equipment for residual film pollution based on UAV imaging.</p>
</list-item>
</list>
</sec>
<sec id="sec18" sec-type="data-availability">
<title>Data availability statement</title>
<p>The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.</p>
</sec>
<sec id="sec19">
<title>Author contributions</title>
<p>ZZ: methodology, model design, data analysis, and original manuscript writing. FQ: data collection and data analysis. QM: data analysis and revision. JY: data collection and revision. HW: data collection. XC and RZ: methodology, editing, revision, supervision, and funding. All authors contributed to the article and approved the submitted version.</p>
</sec>
<sec id="sec20" sec-type="funding-information">
<title>Funding</title>
<p>The authors gratefully acknowledge the financial support provided by the National Natural Science Foundation of China (32060412), the High-level Talents Research Initiation Project of Shihezi University (CJXZ202104), the earmarked fund for China Agriculture Research System (CARS-15-17), and the Graduate Education Innovation Project of Xinjiang Autonomous Region (XJ2022G082).</p>
</sec>
<sec id="conf1" sec-type="COI-statement">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
<p>The reviewer GY declared a shared affiliation with the authors to the handling editor at the time of review.</p>
</sec>
<sec id="sec100" sec-type="disclaimer">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<ack>
<p>The authors would also like to thank Mengyun Zhang and Fengjie Cai for their assistance in the experiment.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="ref1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Akter</surname> <given-names>T.</given-names></name> <name><surname>Islam</surname> <given-names>A. K. M. A.</given-names></name> <name><surname>Rasul</surname> <given-names>M. G.</given-names></name> <name><surname>Kundu</surname> <given-names>S.</given-names></name></person-group> (<year>2018</year>). <article-title>Evaluation of genetic diversity in short duration cotton (<italic>Gossypium hirsutum</italic> L.)</article-title>. <source>J. Cotton Res.</source> <volume>1</volume>, <fpage>15</fpage>&#x2013;<lpage>20</lpage>. doi: <pub-id pub-id-type="doi">10.1186/s42397-018-0018-6</pub-id></citation></ref>
<ref id="ref2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alves</surname> <given-names>A. N.</given-names></name> <name><surname>Souza</surname> <given-names>W. S. R.</given-names></name> <name><surname>Borges</surname> <given-names>D. L.</given-names></name></person-group> (<year>2020</year>). <article-title>Cotton pests classification in field-based images using deep residual networks</article-title>. <source>Comput. Electron. Agric.</source> <volume>174</volume>:<fpage>105488</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compag.2020.105488</pub-id></citation></ref>
<ref id="ref3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dong</surname> <given-names>H.</given-names></name> <name><surname>Liu</surname> <given-names>T.</given-names></name> <name><surname>Han</surname> <given-names>Z.</given-names></name> <name><surname>Sun</surname> <given-names>Q. M.</given-names></name> <name><surname>Li</surname> <given-names>R.</given-names></name></person-group> (<year>2015</year>). <article-title>Determining time limits of continuous film mulching and examining residual effects on cotton yields and soil properties</article-title>. <source>J. Environ. Biol.</source> <volume>36</volume>, <fpage>677</fpage>&#x2013;<lpage>684</lpage>.</citation></ref>
<ref id="ref4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>He</surname> <given-names>H. J.</given-names></name> <name><surname>Wang</surname> <given-names>Z. H.</given-names></name> <name><surname>Guo</surname> <given-names>L.</given-names></name> <name><surname>Zheng</surname> <given-names>X. R.</given-names></name> <name><surname>Zhang</surname> <given-names>J. Z.</given-names></name> <name><surname>Li</surname> <given-names>W. B.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Distribution characteristics of residual film over a cotton field under long-term film mulching and drip irrigation in an oasis agroecosystem</article-title>. <source>Soil Tillage Res.</source> <volume>180</volume>, <fpage>194</fpage>&#x2013;<lpage>203</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.still.2018.03.013</pub-id></citation></ref>
<ref id="ref5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>He</surname> <given-names>W. Q.</given-names></name> <name><surname>Yan</surname> <given-names>C. R.</given-names></name> <name><surname>Zhao</surname> <given-names>C. X.</given-names></name> <name><surname>Chang</surname> <given-names>R. Q.</given-names></name> <name><surname>Liu</surname> <given-names>Q.</given-names></name> <name><surname>Liu</surname> <given-names>S.</given-names></name></person-group> (<year>2009</year>). <article-title>Study on the pollution by plastic mulch film and its countermeasures in China</article-title>. <source>J. Agro-Environ. Sci.</source> <volume>28</volume>, <fpage>533</fpage>&#x2013;<lpage>538</lpage>.</citation></ref>
<ref id="ref6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>F.</given-names></name> <name><surname>Bai</surname> <given-names>J. Y.</given-names></name> <name><surname>Zhang</surname> <given-names>M. Y.</given-names></name> <name><surname>Zhang</surname> <given-names>R. Y.</given-names></name></person-group> (<year>2022</year>). <article-title>Yield estimation of high-density cotton fields using low-altitude UAV imaging and deep learning</article-title>. <source>Plant Methods</source> <volume>18</volume>:<fpage>55</fpage>. doi: <pub-id pub-id-type="doi">10.1186/s13007-022-00881-3</pub-id>, PMID: <pub-id pub-id-type="pmid">35477580</pub-id></citation></ref>
<ref id="ref7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Qi</surname> <given-names>R. M.</given-names></name> <name><surname>Jones</surname> <given-names>D. L.</given-names></name> <name><surname>Li</surname> <given-names>Z.</given-names></name> <name><surname>Liu</surname> <given-names>Q.</given-names></name> <name><surname>Yan</surname> <given-names>C. R.</given-names></name></person-group> (<year>2020</year>). <article-title>Behavior of microplastics and plastic film residues in the soil environment: a critical review</article-title>. <source>Sci. Total Environ.</source> <volume>703</volume>:<fpage>134722</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.scitotenv.2019.134722</pub-id>, PMID: <pub-id pub-id-type="pmid">31767311</pub-id></citation></ref>
<ref id="ref8"><citation citation-type="other"><person-group person-group-type="author"><name><surname>Ronneberger</surname> <given-names>O.</given-names></name> <name><surname>Fischer</surname> <given-names>P.</given-names></name> <name><surname>Brox</surname> <given-names>T.</given-names></name></person-group> (<year>2015</year>). U-net: convolutional networks for biomedical image segmentation. In <italic>18th International Conference on Medical Image Computing and Computer-Assisted Intervention</italic>, Vol. 9351, 234&#x2013;241.</citation></ref>
<ref id="ref9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sun</surname> <given-names>Y.</given-names></name> <name><surname>Han</surname> <given-names>J. Y.</given-names></name> <name><surname>Chen</surname> <given-names>Z. B.</given-names></name> <name><surname>Shi</surname> <given-names>M. C.</given-names></name> <name><surname>Fu</surname> <given-names>H. P.</given-names></name> <name><surname>Yang</surname> <given-names>M.</given-names></name></person-group> (<year>2018</year>). <article-title>Monitoring method for UAV image of greenhouse and plastic-mulched landcover based on deep learning</article-title>. <source>Trans. Chinese Soc. Agri. Machinery</source> <volume>49</volume>, <fpage>133</fpage>&#x2013;<lpage>140</lpage>. doi: <pub-id pub-id-type="doi">10.6041/j.issn.1000-1298.2018.02.018</pub-id></citation></ref>
<ref id="ref10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Szegedy</surname> <given-names>C.</given-names></name> <name><surname>Liu</surname> <given-names>W.</given-names></name> <name><surname>Jia</surname> <given-names>Y. Q.</given-names></name> <name><surname>Sermanet</surname> <given-names>P.</given-names></name> <name><surname>Reed</surname> <given-names>S.</given-names></name> <name><surname>Anguelov</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>Going deeper with convolutions</article-title>. <source>IEEE Confer Comp. Vision Patter. Recog.</source> <volume>2015</volume>, <fpage>1</fpage>&#x2013;<lpage>9</lpage>. doi: <pub-id pub-id-type="doi">10.1109/CVPR.2015.7298594</pub-id></citation></ref>
<ref id="ref12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tarantino</surname> <given-names>E.</given-names></name> <name><surname>Figorito</surname> <given-names>B.</given-names></name></person-group> (<year>2012</year>). <article-title>Mapping rural areas with widespread plastic covered vineyards using true color aerial data</article-title>. <source>Remote Sens.</source> <volume>4</volume>, <fpage>1913</fpage>&#x2013;<lpage>1928</lpage>. doi: <pub-id pub-id-type="doi">10.3390/rs4071913</pub-id></citation></ref>
<ref id="ref13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>S. Y.</given-names></name> <name><surname>Fan</surname> <given-names>T. L.</given-names></name> <name><surname>Cheng</surname> <given-names>W. L.</given-names></name> <name><surname>Wang</surname> <given-names>L.</given-names></name> <name><surname>Zhao</surname> <given-names>G.</given-names></name> <name><surname>Li</surname> <given-names>S. Z.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>Occurrence of macroplastic debris in the long-term plastic film-mulched agricultural soil: a case study of Northwest China</article-title>. <source>Sci. Total Environ.</source> <volume>831</volume>:<fpage>154881</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.scitotenv.2022.154881</pub-id>, PMID: <pub-id pub-id-type="pmid">35364156</pub-id></citation></ref>
<ref id="ref14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Z. H.</given-names></name> <name><surname>He</surname> <given-names>H. J.</given-names></name> <name><surname>Zheng</surname> <given-names>X. R.</given-names></name> <name><surname>Zhang</surname> <given-names>J. Z.</given-names></name> <name><surname>Li</surname> <given-names>W. H.</given-names></name></person-group> (<year>2018</year>). <article-title>Effect of cotton stalk returning to fields on residual film distribution in cotton fields under mulched drip irrigation in typical oasis area in Xinjiang</article-title>. <source>Trans. Chinese Soc. Agri. Engineer.</source> <volume>34</volume>, <fpage>120</fpage>&#x2013;<lpage>127</lpage>. doi: <pub-id pub-id-type="doi">10.11975/j.issn.1002-6819.2018.21.015</pub-id></citation></ref>
<ref id="ref15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>S.</given-names></name> <name><surname>Wang</surname> <given-names>X. J.</given-names></name> <name><surname>Ji</surname> <given-names>C. R.</given-names></name> <name><surname>Guo</surname> <given-names>Y. Y.</given-names></name> <name><surname>Hu</surname> <given-names>Q. R.</given-names></name> <name><surname>Yang</surname> <given-names>M. F.</given-names></name></person-group> (<year>2021</year>). <article-title>Effects of climate change on cotton growth and development in Shihezi in recent 40 years</article-title>. <source>Agric. Eng.</source> <volume>11</volume>, <fpage>132</fpage>&#x2013;<lpage>136</lpage>.</citation></ref>
<ref id="ref16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wu</surname> <given-names>X. M.</given-names></name> <name><surname>Liang</surname> <given-names>C. J.</given-names></name> <name><surname>Zhang</surname> <given-names>D. B.</given-names></name> <name><surname>Yu</surname> <given-names>L. H.</given-names></name> <name><surname>Zhang</surname> <given-names>F. G.</given-names></name></person-group> (<year>2020</year>). <article-title>Identification method of plastic film residue based on UAV remote sensing images</article-title>. <source>Trans. Chinese Soc. Agri. Machinery</source> <volume>51</volume>, <fpage>189</fpage>&#x2013;<lpage>195</lpage>. doi: <pub-id pub-id-type="doi">10.6041/j.issn.1000-1298.2020.08.021</pub-id></citation></ref>
<ref id="ref17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xue</surname> <given-names>Y. H.</given-names></name> <name><surname>Cao</surname> <given-names>S. L.</given-names></name> <name><surname>Xu</surname> <given-names>Z. Y.</given-names></name> <name><surname>Jin</surname> <given-names>T.</given-names></name> <name><surname>Jia</surname> <given-names>T.</given-names></name> <name><surname>Yan</surname> <given-names>C. R.</given-names></name></person-group> (<year>2017</year>). <article-title>Status and trends in application of technology to prevent plastic film residual pollution</article-title>. <source>J. Agro-Environ. Sci.</source> <volume>36</volume>, <fpage>1595</fpage>&#x2013;<lpage>1600</lpage>. doi: <pub-id pub-id-type="doi">10.11654/jaes.2017-0298</pub-id></citation></ref>
<ref id="ref18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yan</surname> <given-names>C. R.</given-names></name> <name><surname>Liu</surname> <given-names>E. K.</given-names></name> <name><surname>Shu</surname> <given-names>F.</given-names></name> <name><surname>Liu</surname> <given-names>Q.</given-names></name></person-group> (<year>2014</year>). <article-title>Review of agricultural plastic mulching and its residual pollution and prevention measures in China</article-title>. <source>J. Agricu. Resour. Environ.</source> <volume>31</volume>, <fpage>95</fpage>&#x2013;<lpage>102</lpage>. doi: <pub-id pub-id-type="doi">10.13254/j.jare.2013.0223</pub-id></citation></ref>
<ref id="ref19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhai</surname> <given-names>Z. Q.</given-names></name> <name><surname>Chen</surname> <given-names>X. G.</given-names></name> <name><surname>Qiu</surname> <given-names>F. S.</given-names></name> <name><surname>Meng</surname> <given-names>Q. J.</given-names></name> <name><surname>Wang</surname> <given-names>H. Y.</given-names></name> <name><surname>Zhang</surname> <given-names>R. Y.</given-names></name></person-group> (<year>2022</year>). <article-title>Detecting surface residual film coverage rate in pre-sowing cotton fields using pixel block and machine learning</article-title>. <source>Trans. Chinese Soc. Agri. Engineer.</source> <volume>38</volume>, <fpage>140</fpage>&#x2013;<lpage>147</lpage>. doi: <pub-id pub-id-type="doi">10.11975/j.issn.1002-6819.2022.06.016</pub-id></citation></ref>
<ref id="ref20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>Z. Q.</given-names></name> <name><surname>Cui</surname> <given-names>Q. L.</given-names></name> <name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Zhu</surname> <given-names>X. Z.</given-names></name> <name><surname>Zhao</surname> <given-names>S. L.</given-names></name> <name><surname>Duan</surname> <given-names>C. J.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>A critical review of microplastics in the soil-plant system: distribution, uptake, phytotoxicity and prevention</article-title>. <source>J. Hazard. Mater.</source> <volume>424</volume>:<fpage>127750</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.jhazmat.2021.127750</pub-id>, PMID: <pub-id pub-id-type="pmid">34838359</pub-id></citation></ref>
<ref id="ref21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>D.</given-names></name> <name><surname>Liu</surname> <given-names>H. B.</given-names></name> <name><surname>Hu</surname> <given-names>W. L.</given-names></name> <name><surname>Qin</surname> <given-names>X. H.</given-names></name> <name><surname>Ma</surname> <given-names>X. W.</given-names></name> <name><surname>Yan</surname> <given-names>C. R.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>The status and distribution characteristics of residual mulching film in Xinjiang</article-title>. <source>J.Integrat. Agricul.</source> <volume>15</volume>, <fpage>2639</fpage>&#x2013;<lpage>2646</lpage>. doi: <pub-id pub-id-type="doi">10.1016/S2095-3119(15)61240-0</pub-id></citation></ref>
<ref id="ref22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>Y.</given-names></name> <name><surname>Chen</surname> <given-names>X. G.</given-names></name> <name><surname>Wen</surname> <given-names>H. J.</given-names></name> <name><surname>Zheng</surname> <given-names>X.</given-names></name> <name><surname>Niu</surname> <given-names>Q.</given-names></name> <name><surname>Kang</surname> <given-names>J. M.</given-names></name></person-group> (<year>2017</year>). <article-title>Research status and Prospect of control Technology for Residual Plastic Film Pollution in farmland</article-title>. <source>Trans. Chinese Soc. Agri. Machinery</source> <volume>48</volume>, <fpage>1</fpage>&#x2013;<lpage>14</lpage>. doi: <pub-id pub-id-type="doi">10.6041/j.issn.1000-1298.2017.06.001</pub-id></citation></ref>
<ref id="ref23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>X.</given-names></name> <name><surname>Yuan</surname> <given-names>Y. T.</given-names></name> <name><surname>Song</surname> <given-names>M. D.</given-names></name> <name><surname>Ding</surname> <given-names>Y.</given-names></name> <name><surname>Lin</surname> <given-names>F. F.</given-names></name> <name><surname>Liang</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Use of unmanned aerial vehicle imagery and deep learning UNet to extract rice lodging</article-title>. <source>Sensors</source> <volume>19</volume>:<fpage>3859</fpage>. doi: <pub-id pub-id-type="doi">10.3390/s19183859</pub-id>, PMID: <pub-id pub-id-type="pmid">31500150</pub-id></citation></ref>
<ref id="ref24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhou</surname> <given-names>D. Y.</given-names></name> <name><surname>Li</surname> <given-names>M.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Qi</surname> <given-names>J. T.</given-names></name> <name><surname>Liu</surname> <given-names>K.</given-names></name> <name><surname>Cong</surname> <given-names>X.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Detection of ground straw coverage under conservation tillage based on deep learning</article-title>. <source>Comput. Electron. Agric.</source> <volume>172</volume>:<fpage>105369</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compag.2020.105369</pub-id></citation></ref>
<ref id="ref25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhu</surname> <given-names>X. F.</given-names></name> <name><surname>Li</surname> <given-names>S. B.</given-names></name> <name><surname>Xiao</surname> <given-names>G. F.</given-names></name></person-group> (<year>2019</year>). <article-title>Method on extraction of area and distribution of plastic-mulched farmland based on UAV images</article-title>. <source>Trans. Chinese Soc. Agri. Engineer.</source> <volume>35</volume>, <fpage>106</fpage>&#x2013;<lpage>113</lpage>. doi: <pub-id pub-id-type="doi">10.11975/j.issn.1002-6819.2019.04.013</pub-id></citation></ref>
<ref id="ref26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zou</surname> <given-names>K. L.</given-names></name> <name><surname>Chen</surname> <given-names>X.</given-names></name> <name><surname>Zhang</surname> <given-names>F.</given-names></name> <name><surname>Zhou</surname> <given-names>H.</given-names></name> <name><surname>Zhang</surname> <given-names>C. L.</given-names></name></person-group> (<year>2021</year>). <article-title>A field weed density evaluation method based on UAV imaging and modified U-net</article-title>. <source>Remote Sens.</source> <volume>13</volume>:<fpage>310</fpage>. doi: <pub-id pub-id-type="doi">10.3390/rs13020310</pub-id></citation></ref>
</ref-list>
</back>
</article>