AUTHOR=Qu Hongchun, Zheng Chaofang, Ji Hao, Huang Rui, Wei Dianwen, Annis Seanna, Drummond Francis
TITLE=A deep multi-task learning approach to identifying mummy berry infection sites, the disease stage, and severity
JOURNAL=Frontiers in Plant Science
VOLUME=15
YEAR=2024
URL=https://www.frontiersin.org/journals/plant-science/articles/10.3389/fpls.2024.1340884
DOI=10.3389/fpls.2024.1340884
ISSN=1664-462X
ABSTRACT=Mummy berry is a serious disease that can cause up to 70 percent yield loss in lowbush blueberries. Practical mummy berry disease detection, stage classification, and severity estimation remain major challenges for computer vision-based approaches because images taken in lowbush blueberry fields are usually mixtures of different plant parts (leaves, buds, flowers, and fruits) against a very complex background. Further difficulties include data scarcity due to the high cost of manual labelling; tiny, low-contrast disease features that are interfered with and occluded by healthy plant parts; and overly complicated deep neural networks that are hard to deploy. Using real, raw blueberry field images, this research proposed a deep multi-task learning (MTL) approach that simultaneously accomplishes three disease detection tasks: infection site identification, stage classification, and severity estimation. Further incorporating novel superimposed attention modules and grouped convolutions into the deep neural network enables disease feature extraction from both the channel and spatial perspectives, achieving better detection performance in open, complex environments while lowering computational cost and accelerating convergence. Experimental results demonstrated that our approach achieved higher detection accuracy than state-of-the-art deep learning models, with three main advantages: 1) field images mixed with various types of lowbush blueberry plant organs under a complex background can be used for disease detection; 2) parameter sharing among the tasks greatly reduced the required number of training samples and saved 60% of the training time (spanning data preparation, model development, and exploration) relative to training the three tasks separately; and 3) the model used only about one-sixth of the network parameters (23.98M vs. 138.36M) and roughly one-fourteenth of the computational cost (1.13G vs. 15.48G FLOPs) of the widely used convolutional neural network VGG16. These features make our solution very promising for future mobile deployment, such as a drone-carried sensor for real-time field surveillance.
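
The paper's code is not included in this record. As a rough illustration of the hard-parameter-sharing multi-task design the abstract describes (one shared backbone with grouped convolutions and a superimposed channel-plus-spatial attention block, feeding three task heads), the PyTorch-style sketch below may help; every layer size, class count, and module name (ChannelSpatialAttention, MummyBerryMTL, n_sites, n_stages, n_severity) is an illustrative assumption, not the authors' implementation.

# Minimal, hypothetical sketch of the multi-task architecture described in the
# abstract. Assumptions: CBAM-style channel/spatial attention, groups=8 for the
# grouped convolution, and arbitrary class counts for the three tasks.
import torch
import torch.nn as nn


class ChannelSpatialAttention(nn.Module):
    """Superimposes channel attention and spatial attention on a feature map
    (in the spirit of CBAM-style blocks; the paper's exact design may differ)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, learn per-channel weights.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: a 7x7 convolution over the channel-mean map.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(1, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)                             # reweight channels
        x = x * self.spatial_gate(x.mean(dim=1, keepdim=True))   # reweight locations
        return x


class MummyBerryMTL(nn.Module):
    """Hard parameter sharing: one backbone, three lightweight task heads."""

    def __init__(self, n_sites: int = 4, n_stages: int = 3, n_severity: int = 5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            # Grouped convolution: groups=8 cuts this layer's parameters and
            # FLOPs roughly 8x relative to a dense 3x3 convolution.
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1, groups=8),
            nn.ReLU(inplace=True),
            ChannelSpatialAttention(128),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.site_head = nn.Linear(128, n_sites)          # infection site
        self.stage_head = nn.Linear(128, n_stages)        # disease stage
        self.severity_head = nn.Linear(128, n_severity)   # severity level

    def forward(self, x: torch.Tensor):
        shared = self.backbone(x)  # features computed once, shared by all tasks
        return self.site_head(shared), self.stage_head(shared), self.severity_head(shared)


if __name__ == "__main__":
    model = MummyBerryMTL()
    site, stage, severity = model(torch.randn(2, 3, 224, 224))
    print(site.shape, stage.shape, severity.shape)  # (2, 4) (2, 3) (2, 5)

In a setup like this, a joint objective would typically sum the per-task cross-entropy losses so that a single backward pass updates the shared backbone for all three tasks at once, which is the mechanism behind the abstract's claim of reduced training samples and training time versus training the tasks separately.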