ORIGINAL RESEARCH article
Front. Plant Sci.
Sec. Sustainable and Intelligent Phytoprotection
Volume 16 - 2025 | doi: 10.3389/fpls.2025.1608515
An Intelligent Identification for Pest and Disease Detection in Wheat Leaf based on Environmental Data Using Multimodal Data Fusion
Provisionally accepted
Fuyang Normal University, Fuyang, China
The rapid development of intelligent technologies has transformed many industries, and agriculture has benefited greatly from precision-farming innovations. One notable advance is improved pest and disease identification, which supports better crop-health management and higher yields. This paper presents a novel multimodal data fusion model to meet the growing need for accurate and timely identification of wheat pests and diseases. It combines image processing, sensor-derived environmental data, and machine learning for reliable wheat pest and disease diagnosis. First, deep-learning image analysis detects early-stage pests and diseases on wheat leaves. Second, environmental data such as temperature and humidity refine the diagnosis. Third, a data fusion process integrates the image and environmental data for joint analysis. Finally, the proposed model is compared with previous methods on several criteria. Experimental results show that the proposed technique achieves a detection accuracy of 96.5%, precision of 94.8%, recall of 97.2%, F1 score of 95.9%, MCC of 0.91, and AUC-ROC of 98.4%, with a training time of 15.3 hours and an inference time of 180 ms. Improvements over CNN-based and SVM-based techniques are analyzed, and the model can be adapted for real-time use and extended to additional crops and diseases.
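To illustrate the fusion step described above, the following is a minimal sketch of late fusion: image-derived features (as would come from a deep-learning image model) are concatenated with min-max-normalized environmental sensor readings, and a toy linear classifier scores the fused vector. All function names, feature values, weights, and sensor ranges here are illustrative assumptions, not the paper's actual implementation.

```python
import math

def fuse_features(image_features, env_readings, env_ranges):
    """Late fusion: concatenate image-derived features with
    min-max-normalized environmental readings (e.g., temperature, humidity).
    `env_ranges` maps each sensor name to its (min, max) calibration range.
    Sensors are processed in sorted-name order for a deterministic layout."""
    env_norm = [
        (env_readings[name] - lo) / (hi - lo)
        for name, (lo, hi) in sorted(env_ranges.items())
    ]
    return list(image_features) + env_norm

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_disease_prob(fused, weights, bias):
    """Toy linear classifier over the fused vector (illustrative weights only)."""
    score = sum(w * f for w, f in zip(weights, fused)) + bias
    return sigmoid(score)

# Example: three hypothetical image features plus temperature/humidity readings.
fused = fuse_features(
    image_features=[0.8, 0.1, 0.4],
    env_readings={"temperature": 28.0, "humidity": 75.0},
    env_ranges={"temperature": (0.0, 40.0), "humidity": (0.0, 100.0)},
)
prob = predict_disease_prob(fused, weights=[0.5, -0.2, 0.3, 0.4, 0.6], bias=-0.5)
```

In practice the image features would come from a trained CNN and the classifier head would itself be learned; this sketch only shows how the two modalities can be combined into a single feature vector before classification.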
Keywords: machine learning, wheat leaf pests, disease detection, image processing, intelligent identification, agricultural technology, multimodal data fusion
Received: 09 Apr 2025; Accepted: 11 Jul 2025.
Copyright: © 2025 XU. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: SHENG-HE XU, Fuyang Normal University, Fuyang, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.