AUTHOR=Zhang Fenyun, Sun Hongwei, Xie Shuang, Dong Chunwang, Li You, Xu Yiting, Zhang Zhengwei, Chen Fengnong TITLE=A tea bud segmentation, detection and picking point localization based on the MDY7-3PTB model JOURNAL=Frontiers in Plant Science VOLUME=14 YEAR=2023 URL=https://www.frontiersin.org/journals/plant-science/articles/10.3389/fpls.2023.1199473 DOI=10.3389/fpls.2023.1199473 ISSN=1664-462X ABSTRACT=The identification and localization of tea picking points is a prerequisite for achieving automatic picking of famous tea. However, because the color of tea buds closely resembles that of both young and old leaves, it is difficult even for the human eye to distinguish them accurately. To address the problem of segmenting, detecting, and localizing tea picking points in the complex environment of mechanical picking of famous tea, this paper proposes a new model, MDY7-3PTB, which combines the high-precision segmentation capability of DeepLabv3+ with the rapid detection capability of YOLOv7. The model performs segmentation first, followed by detection and finally localization of tea buds, yielding accurate identification of the tea bud picking point. The DeepLabv3+ feature extraction network was replaced with the more lightweight MobileNetV2 network to improve computation speed. In addition, multiple attention mechanisms (CBAM) were fused into the feature extraction and ASPP modules to further optimize model performance. Moreover, to address class imbalance in the dataset, the Focal Loss function was used to correct the imbalance and improve segmentation, detection, and positioning accuracy. The MDY7-3PTB model achieved a mean intersection over union of 86.61%, a mean pixel accuracy of 93.01%, and a mean recall of 91.78% on the tea bud segmentation dataset.
For tea bud picking point recognition and positioning, the model achieved a mean average precision of 93.52%, a weighted average of precision and recall (F1 score) of 93.17%, a precision of 97.27%, and a recall of 89.41%. The model showed significant improvements in all respects over existing mainstream YOLO series detection models, with strong versatility and robustness. The method eliminates the influence of the background and directly detects the tea bud picking points with almost no missed detections, providing accurate two-dimensional coordinates for the picking points. This provides a strong theoretical basis for future automated tea bud picking.