Original Research Article
Learning Semantic Graphics using Convolutional Encoder-Decoder Network for Autonomous Weeding in Paddy Field
- 1 Department of Electronics Engineering, College of Engineering, Chonbuk National University, South Korea
- 2 Division of Electronic Engineering, Intelligent Systems and Robotics Laboratory, Chonbuk National University, South Korea
Weeds in agricultural farms are aggressive growers that compete with the crop for nutrients and other resources and reduce production. The increasing use of chemicals to control them has unintended consequences for human health and the environment. In this work, a novel neural network training method is proposed that combines semantic graphics for data annotation with an advanced encoder-decoder network for (a) automatic crop line detection and (b) weed (wild millet) detection in paddy fields. The detected crop lines act as guiding lines for an autonomous weeding robot performing inter-row weeding, whereas the detection of weeds enables autonomous intra-row weeding. The proposed data annotation method, semantic graphics, is intuitive, and the desired targets can be annotated easily with minimal labor. The proposed "Extended Skip Network" is an improved deep convolutional encoder-decoder network for efficient learning of semantic graphics. Quantitative evaluations of the proposed method demonstrated an improvement of 8.04% in mean intersection over union (mIoU) and a significantly higher recall compared to a popular deep-learning-based object detection approach on the wild-millet detection problem. The proposed method of learning semantic graphics with the enhanced Extended Skip Network leads to improvements of 2.08% in IoU and 14.75% in mean pixel deviation over the baseline network.
Keywords: semantic graphics, Extended Skip Network, autonomous weeding, crop line extraction, encoder-decoder network, convolutional neural network
Received: 04 Jun 2019;
Accepted: 10 Oct 2019.
Copyright: © 2019 Adhikari, Yang and Kim. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Prof. Hyongsuk Kim, Division of Electronic Engineering, Intelligent Systems and Robotics Laboratory, Chonbuk National University, Jeonju, South Korea, firstname.lastname@example.org