ORIGINAL RESEARCH article
Front. Plant Sci.
Sec. Sustainable and Intelligent Phytoprotection
This article is part of the Research Topic: Advanced Imaging and Phenotyping for Sustainable Plant Science and Precision Agriculture 4.0
CNNAttLSTM: An Attention-Enhanced CNN–LSTM Architecture for High-Precision Jackfruit Leaf Disease Classification
Provisionally accepted
- 1 Chitkara University, Rajpura, India
- 2 King Khalid University, Abha, Saudi Arabia
- 3 The Government Sadiq College Women University Bahawalpur, Bahawalpur, Pakistan
- 4 Gachon University, Seongnam, Republic of Korea
- 5 University Centre for Research & Development, Chandigarh University, Sahibzada Ajit Singh Nagar, India
- 6 Universidad Europea del Atlantico, Santander, Spain
Abstract: Jackfruit cultivation is highly susceptible to leaf diseases that reduce yield, quality, and farmer income, yet early diagnosis remains difficult due to the limitations of manual inspection. This study presents a hybrid deep learning framework, CNNAttLSTM, which integrates Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM) units, and an attention mechanism for multi-class classification of algal leaf spot, black spot, and healthy jackfruit leaves. The methodology begins with data preprocessing: all jackfruit leaf images are resized to 224×224 pixels, normalized to the [0, 1] range, and divided into ordered 56×56 spatial patches. These patches are passed through sequential Conv2D, MaxPooling2D, and GlobalAveragePooling2D layers for spatial feature extraction, followed by LSTM units that capture sequential dependencies and an attention mechanism that assigns adaptive weights to diagnostically significant regions before final classification. Although the dataset consists of static images, treating the ordered patches as a pseudo-temporal sequence allows the LSTM to learn contextual dependencies across different regions of the same leaf, effectively enabling temporal modelling of spatial feature relationships. Experiments were conducted on the publicly available Kaggle jackfruit leaf disease dataset of 38,019 images, using its predefined training, validation, and testing splits. The proposed CNNAttLSTM achieved 99% classification accuracy, outperforming baseline CNN (86%) and CNN-LSTM (98%) models while remaining computationally efficient: the architecture contains only 3.7 million parameters (an 18% reduction compared to CNN-LSTM), converges within 45 minutes of training on an NVIDIA Tesla T4 GPU, and performs inference in 22 milliseconds per image (batch size = 1). These metrics demonstrate low computational cost and confirm the model's suitability for real-time, mobile, and edge-device deployment. Overall, the findings highlight CNNAttLSTM as a robust and efficient solution for automated jackfruit leaf disease detection and broader precision agriculture applications.
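The abstract describes the pipeline entirely in terms of standard Keras layers (patch splitting, Conv2D/MaxPooling2D/GlobalAveragePooling2D feature extraction, LSTM, attention, softmax classification). The following is a minimal sketch of that pipeline in TensorFlow/Keras; the filter counts, LSTM width, and the simple additive attention used here are illustrative assumptions, since the article names only the layer types and the patch layout.

```python
# Minimal sketch of the CNNAttLSTM idea (patches -> CNN -> LSTM -> attention),
# assuming TensorFlow/Keras. Filter counts, LSTM width, and the additive
# attention form are illustrative choices, not taken from the article.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3            # algal leaf spot, black spot, healthy
IMG_SIZE, PATCH = 224, 56
N = IMG_SIZE // PATCH      # 4 patches per side -> a 16-step pseudo-temporal sequence


def to_patch_sequence(x):
    """Split (batch, 224, 224, 3) images into an ordered sequence of 16 patches."""
    x = tf.reshape(x, (-1, N, PATCH, N, PATCH, 3))
    x = tf.transpose(x, (0, 1, 3, 2, 4, 5))              # group row/column patch indices
    return tf.reshape(x, (-1, N * N, PATCH, PATCH, 3))   # (batch, 16, 56, 56, 3)


# Per-patch spatial feature extractor: Conv2D -> MaxPooling2D -> GlobalAveragePooling2D
patch_cnn = models.Sequential([
    layers.Input((PATCH, PATCH, 3)),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),                      # one 64-d vector per patch
])

inputs = layers.Input((IMG_SIZE, IMG_SIZE, 3))            # pixel values already scaled to [0, 1]
seq = layers.Lambda(to_patch_sequence)(inputs)            # (batch, 16, 56, 56, 3)
feats = layers.TimeDistributed(patch_cnn)(seq)            # (batch, 16, 64)
hidden = layers.LSTM(128, return_sequences=True)(feats)   # contextual features per patch

# Additive attention over the 16 patch positions: one adaptive weight per patch,
# then a weighted sum that emphasizes diagnostically informative regions.
scores = layers.Dense(1)(hidden)                          # (batch, 16, 1)
weights = layers.Softmax(axis=1)(scores)
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([weights, hidden])

outputs = layers.Dense(NUM_CLASSES, activation="softmax")(context)
model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

With a 224×224 input and 56×56 patches, the Lambda layer yields a 16-step sequence, so the LSTM and the attention weights operate over 16 patch positions per leaf image; the reported parameter count and timings in the abstract refer to the authors' actual architecture, not to this sketch.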
Keywords: Agricultural AI, attention mechanisms, CNNAttLSTM Model, Computational efficiency, Disease diagnosis, image classification, Jackfruit leaf classification, Plant disease detection
Received: 08 Oct 2025; Accepted: 02 Dec 2025.
Copyright: © 2025 Tuteja, Al-Yarimi, Ikram, Gupta, Rehman, Singh, Delgado Noya and Dzul Lopez. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Ateeq Ur Rehman
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
