ORIGINAL RESEARCH article
Front. Plant Sci.
Sec. Sustainable and Intelligent Phytoprotection
Volume 16 - 2025 | doi: 10.3389/fpls.2025.1611301
Deep-Broad Learning Network Model for Precision Identification and Diagnosis of Grape Leaf Diseases
Provisionally accepted
1 School of Mechanical Engineering, Anhui University of Technology, Ma’anshan, China
2 School of Engineering, Anhui Agricultural University, Hefei, Anhui Province, China
3 College of Information Engineering, Shaoxing Vocational & Technical College, Shaoxing, Zhejiang Province, China
4 School of Horticulture, Anhui Agricultural University, Hefei, Anhui Province, China
This paper addresses the rapid, precise, and efficient identification and diagnosis of grape leaf diseases by proposing the Deep-Broad Learning Network Model (ABLSS), which combines a Broad Learning System (BLS) with deep learning techniques. The model is optimized with the Adam algorithm on top of BLS and incorporates an LTM mechanism, which significantly improves learning efficiency, stability, and recognition accuracy. Additionally, drawing on deep-learning network optimization techniques, a SENet attention mechanism is inserted between the mapping and enhancement layers of the BLS. Furthermore, building on the U-Net segmentation model, the method integrates dilated spatial pyramid pooling and a feature pyramid network: dilated convolutions with varying dilation rates capture multi-scale contextual information, providing rich semantic information and high-resolution detail during decoding. This improves the ABLSS model's ability to identify small disease spots. Experimental results show that the ABLSS model achieves the highest recognition accuracy for three grape leaf diseases with similar visual features, with an average accuracy improvement of 7.69% over BLS and 4.48% over deep learning networks. The segmentation model reaches an MIoU of 86.61%, a 6.48% improvement over the original U-Net, and an MPA of 90.23%, an 8.09% improvement over the original U-Net. These results demonstrate that the proposed method significantly improves recognition accuracy for small and irregular complex images. The ABLSS model also recognizes images 0.375 seconds faster than the deep learning network, a 72.12% speed improvement, thereby substantially enhancing the efficiency of fine-feature recognition.
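The abstract does not give implementation details of the SENet attention step, but the general squeeze-and-excitation idea (global pooling, a bottleneck of two fully connected transforms, and sigmoid channel gates) can be illustrated with a minimal NumPy sketch. All shapes, weights, and the reduction ratio below are illustrative assumptions, not the authors' actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def se_attention(features, w_reduce, w_expand):
    """Squeeze-and-Excitation channel attention (illustrative sketch).
    features: (C, H, W) feature map, e.g. the output of a mapping layer.
    w_reduce: (C, C//r) bottleneck weights; w_expand: (C//r, C).
    """
    # Squeeze: global average pooling -> one descriptor per channel
    z = features.mean(axis=(1, 2))                    # (C,)
    # Excitation: FC reduce -> ReLU -> FC expand -> sigmoid gates
    s = np.maximum(z @ w_reduce, 0.0)                 # (C//r,)
    gates = 1.0 / (1.0 + np.exp(-(s @ w_expand)))     # (C,), each in (0, 1)
    # Recalibrate: scale every channel by its learned gate
    return features * gates[:, None, None]

C, r = 8, 2
x = rng.standard_normal((C, 16, 16))
w1 = rng.standard_normal((C, C // r)) * 0.1
w2 = rng.standard_normal((C // r, C)) * 0.1
y = se_attention(x, w1, w2)
print(y.shape)  # (8, 16, 16)
```

Because each gate lies in (0, 1), the block can only attenuate channels, letting the network emphasize disease-relevant feature channels relative to the rest.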
The ABLSS model combines the high recognition accuracy of deep learning with the fast processing speed of Broad Learning, while overcoming the limitations of BLS in recognizing complex images. This study provides valuable support for the development of smart-orchard technologies and the optimization of learning network models.
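The multi-scale mechanism the abstract attributes to dilated spatial pyramid pooling rests on one property: a kernel of size k applied with dilation d covers a receptive field of (k - 1) * d + 1 inputs without adding parameters. A minimal 1-D sketch (the function name and toy data are illustrative, not from the paper) makes this concrete:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Valid' 1-D convolution with a dilated kernel.
    The receptive field grows to (len(kernel) - 1) * dilation + 1
    while the parameter count stays at len(kernel).
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        # Sample inputs `dilation` steps apart
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
k = np.array([1.0, 1.0, 1.0])
print(dilated_conv1d(x, k, dilation=1))  # receptive field 3
print(dilated_conv1d(x, k, dilation=2))  # receptive field 5
```

Running the same kernel at several dilation rates in parallel and pooling the results is what lets a pyramid of this kind combine fine local detail with wider context, which is relevant when small lesions must be segmented against large leaf regions.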
Keywords: Grape leaf diseases, Disease recognition, Broad Learning, Lesion segmentation, Deep learning, Disease diagnosis
Received: 14 Apr 2025; Accepted: 01 Jul 2025.
Copyright: © 2025 Liu, Feng, Zhao, Fang, Quan and Zhang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Yangyang Liu, School of Mechanical Engineering, Anhui University of Technology, Ma’anshan, China
Longzhe Quan, School of Engineering, Anhui Agricultural University, Hefei, Anhui Province, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.