
ORIGINAL RESEARCH article

Front. Artif. Intell., 05 January 2026

Sec. Pattern Recognition

Volume 8 - 2025 | https://doi.org/10.3389/frai.2025.1558358

This article is part of the Research Topic: AI-Enabled Breakthroughs in Computational Imaging and Computer Vision.

Anatomical study and early diagnosis of dome galls in Cordia Dichotoma using DeepSVM model

Said Khalid Shah1, Mazliham Bin Mohd Su’ud2*, Aurangzeb Khan1, Muhammad Mansoor Alam3, Muhammad Ayaz1
  • 1Department of Computer Science, University of Science and Technology, Bannu, Khyber Pakhtunkhwa, Pakistan
  • 2Department of Computer Science, Multimedia University Cyberjaya Campus, Persiaran Multimedia, Cyberjaya, Malaysia
  • 3Department of Computer Science, Riphah International University, Islamabad, Pakistan

Introduction: Artificial intelligence (AI), particularly deep learning (DL), offers automated solutions for the early detection of plant diseases to improve crop yield. However, training accurate models on real-field data remains challenging due to overfitting and limited generalization. As observed in prior studies, traditional CNNs often struggle with real-environment variability, and transfer learning can lead to unstable training on domain-specific leaf datasets. This study focuses on detecting dome galls, a disease of Cordia dichotoma, by formulating a binary classification task (healthy vs. diseased leaves) using a custom dataset of 3,900 leaf images collected from real field environments.

Methods: Initially, both custom CNNs and transfer learning models were trained and compared. Among them, a modified ResNet-50 architecture showed promising results but suffered from overfitting and unstable convergence. To address this, the final sigmoid activation layer was replaced with a Support Vector Machine (SVM), and L2 regularization was applied to reduce overfitting. This hybrid DeepSVM architecture stabilized training and improved model robustness. Image preprocessing and augmentation techniques were applied to increase variability and further mitigate overfitting.

Results: The final model was evaluated on a separate test set of 400 images, and the results remained stable across repeated runs. DeepSVM achieved an accuracy of 94.50% and an F1-score of 94.47%, outperforming other well-known models like VGG-16, InceptionResNetv2, and MobileNet-V2.

Conclusion: These results indicate that the proposed DeepSVM approach offers better generalization and training stability than conventional CNN classifiers, potentially aiding in automated disease monitoring for precision agriculture.

1 Introduction

Agriculture is a cornerstone of global food security and economic stability, yet crop productivity remains highly vulnerable to pathogen-induced diseases. Illnesses caused by pests and pathogens lead to an estimated $200 million in global economic losses annually. With the global population increasing by 1.6% each year, the demand for food and agricultural products continues to rise (Ashwinkumar et al., 2022). Experts are adopting new methods and technologies to detect and identify plant diseases at early stages and thereby reduce the losses they cause (Ullah et al., 2023). However, diseases such as dome galls in Cordia dichotoma remain understudied despite their economic impact, necessitating dedicated data-driven detection frameworks. Since the rise of artificial intelligence techniques such as machine learning (ML) and deep learning (DL), the automated classification of image data, including early-stage plant diseases, has been an active research area for the last decade (Bhosale et al., 2024). Various convolutional neural network (CNN) models, trained from scratch and with transfer learning, have been deployed in real environments with satisfactory results (Altalak et al., 2022; Zhu et al., 2024).

Compared to previous naked-eye classification, which was time-consuming and less reliable, the new automated methods produce accurate and timely results without requiring field experts, because they can be used by non-expert users with smartphone or drone cameras (Li et al., 2019). Early identification of dome galls in Cordia dichotoma leaves is essential due to their negative impact on crop yields and economic losses. Conventional detection methods lack speed and precision, necessitating a novel approach. By employing DL models customized for dome gall detection, this research offers a transformative solution to quickly identify and mitigate the spread of this specific disease. This advancement is vital for preserving crop health, ensuring sustainable agriculture, and meeting global food demands (Bhosale et al., 2023).

In the proposed study, dome galls, a disease that occurs in Cordia dichotoma, were addressed. The genus Cordia belongs to the family Boraginaceae (Bhattacharya and Saha, 2013). It includes about 300 species of trees and shrubs, most of which are indigenous to warmer regions of the world. It is utilized in various industries, including medicine, agriculture, office supplies, musical instruments, furniture, watercraft, painting, and energy (Ferahtia, 2021; Ganjare et al., 2011; Matias et al., 2015). Dome galls are a common disease of Cordia dichotoma affecting the leaves in the form of multiple dome-like structures. The reviewed literature indicates that no one has previously addressed this problem, and there is no publicly available dataset on the Internet. Therefore, images were collected from real scenarios for training and testing the DeepSVM model. Various villages in Bannu district, Khyber Pakhtunkhwa, Pakistan, were visited, and a custom dataset was created. The image data was divided into two categories: dome galls and normal images. Most plant leaf disease symptoms involve color change or wilting, but the target disease is structurally distinct, producing multiple raised surface areas called dome galls. Classifying the early stage of the disease demands particular care, because early-stage diseased leaves closely resemble healthy ones, making them difficult for ML and DL models to separate. To increase the performance of the proposed model, training data, preprocessing, and data augmentation techniques play an important role.

The proposed study used the transfer learning technique with ResNet-50 as the backbone for the plant leaf disease classification model, using pre-trained weights from the ImageNet dataset. To adapt the model to the specific task, three fully connected (FC) layers were added on top of the ResNet-50. An SVM was used as the final/output layer instead of the sigmoid layer to identify leaf images as healthy or infected. The study’s key contributions include the following:

• A novel dataset of Cordia leaves was created, comprising 3,900 images captured in a real environment and labeled into two categories, healthy and diseased (3,500 for training and validation, 400 for testing)

• The study embraces a novel approach by fusing morpho-anatomical insight and automated computational systems to better understand the root cause of the disease

• The leaves were subjected to careful morpho-anatomical examination using stereo and light microscopy. The disease was diagnosed as mite-induced dome galls on the leaf surface

• The novel DeepSVM model was trained and fine-tuned based on transfer learning to differentiate between healthy leaves and the dome gall’s early symptoms

• The performance of the DeepSVM model was compared with that of previous state-of-the-art (SOTA) models on two publicly available datasets to assess the generalizability of the proposed approach

The rest of the paper is arranged as follows: Section II reviews the relevant literature, while Section III outlines the methods used in our study. In Section IV, we present results and discussions. Finally, we offer some concluding thoughts and suggestions for future research in Section V.

2 Related work

Due to ongoing developments in DL and computer vision, experts are attempting to integrate these new technologies into various fields, including the agriculture industry (Zhu et al., 2025). In agriculture, they serve multiple purposes, such as plant disease detection, plant species identification, pest detection, and fruit ripeness assessment. The CNN is a type of DL model used for image classification, image segmentation, object detection, and recognition. Most previous research on automatic plant disease detection and classification utilized CNN models with transfer learning. Pre-trained models (VGG-16, ResNet-50, MobileNet, EfficientNet, DenseNet, Inception, etc.) are mostly used for transfer learning because they help generalize an image classification model when the training dataset is small. In contrast, training from scratch is only feasible when several thousand training images are available, which is an arduous task (Hungilo et al., 2019).

Jakjoud et al. (2019) trained VGG-16 with various optimizers, including SGD, Adagrad, RMSprop, and Adadelta. They used a dataset of 13,692 images split 80% for training and 20% for testing, with an input size of 100 × 100. They concluded that Adadelta was the most accurate, while SGD was the fastest and came second in accuracy. Zekiwos and Bruck (2021) used a cotton leaf dataset of 2,400 images with 4 classes: one healthy and three diseased. Augmentation techniques were used to enhance the training process, and pre-processing techniques such as image resizing and normalization were applied. A customized CNN model was trained with three convolutional layers for feature extraction and two fully connected layers for classification. They trained the model with and without augmentation and reported a 15% increase in accuracy with data augmentation. K-fold cross-validation was used. Adam and RMSprop optimizers were compared, and Adam was recommended with 96.4% accuracy. Nandhini and Ashokkumar (2021) used a tomato leaf image dataset of 11,942 images, with one healthy class and nine diseased classes, taken from PlantVillage. Using the Keras package, they created a CNN model with three convolutional layers, max pooling, two dense layers, and one output layer for classification. Ghosal and Sarkar (2020) built a CNN model for rice leaf disease classification. They used VGG-16 as the backbone for transfer learning to overcome the drawback of a small training dataset. The model was trained on a small custom dataset of 1,649 photos divided into four classes. Data augmentation was utilized to lessen overfitting, and 92.4% accuracy was claimed. Bhujel and Shakya (2022) used a rice leaf image dataset from Kaggle containing 524 photos for each of 4 categories. They used a CNN model with transfer learning, with EfficientNetB0 and EfficientNetB3 as base frameworks and newly fine-tuned FC layers, and applied a cyclical learning rate (CLR). They claimed 83.99 and 89.18% accuracy after 15 epochs, respectively.

A pre-trained network, DenseNet-121, was used by Chakraborty et al. (2021) to identify plant leaf ailments. Training images were collected from the well-known PlantDoc platform. Multiple optimization techniques were tried while fine-tuning the proposed approach, but SGD was chosen due to its stable behavior during training. They claimed 92.5% accuracy after 10 epochs. Krishnamoorthy et al. (2021) trained a CNN model with transfer learning, using InceptionResNetV2 as the backbone. They used 1,000 images for training and 300 for validation, collected from Kaggle across 4 classes. Image augmentation and preprocessing techniques were used to generalize the model. Dropout, batch normalization, and global average pooling were used in the head of the model to reduce overfitting. They claimed 95.67% accuracy on the unseen test dataset. Hong et al. (2020) used transfer learning to train five tomato leaf disease classification models, including ResNet-50, Xception, MobileNet, ShuffleNet, and DenseNet121-Xception, with nine disease classes and a healthy class. ResNet-50 was recommended as the best model after applying image preprocessing and data augmentation. The training set consisted of 13,112 photos downloaded from PlantVillage.

Kumar and Vani (2019) trained six models, Xception, LeNet, EfficientNet, VGG-16, VGG-19, and ResNet-50, with fine-tuning for tomato disease classification. The models were trained separately on color and grayscale segmented images with the SGD optimizer, and their results were compared. They claimed that VGG-16 performed best with color images, achieving 99.5% accuracy. Shijie et al. (2017) used a CNN model based on transfer learning, with VGG-16 as the backbone. Data augmentation techniques were used, and 89% accuracy was achieved. They used a tomato leaf dataset with 400 images per class, divided into 65% for training, 25% for validation, and 10% for testing. They also tested the model with SVM as the final layer but obtained 1% lower accuracy. They used a batch size of 40, a 0.001 learning rate, and an SGD optimizer over 80 epochs.

Chowdhury et al. (2021) replaced the GoogleNet backbone with a new backbone made up of numerous convolutional layers, batch normalization (BN), and max pooling. They trained their custom backbone on the PlantVillage dataset and compared the results to the GoogleNet model’s output, claiming that their model performed better than the original. DeepPlantNet, a novel 28-layer deep learning model consisting of three FC layers and twenty-five convolutional layers, was presented by Ullah et al. (2023). This distinctive plant disease classification system uses Leaky ReLU, BN, fire modules, and a combination of 3 × 3 and 1 × 1 filters. With average accuracy rates of 98.49 and 99.85% for the eight-class and three-class classification schemes, respectively, DeepPlantNet successfully classified plant illnesses into ten categories. This method helps experts and farmers quickly detect and treat plant illnesses, presenting a viable way to lower agricultural losses. In a study by Sujatha et al. (2021), several pre-built ML and DL models were compared on a custom dataset of citrus leaves comprising five categories. The results showed that deep learning surpassed machine learning approaches, with random forest (RF) producing the lowest precision and VGG-Net the highest.

Syed-Ab-Rahman et al. (2022) employed transfer learning to achieve 94.37% accuracy using a Faster R-CNN model trained on citrus leaf images available on Kaggle. Roy and Bhaduri (2021) used a modified version of YOLOv4 with transfer learning on an apple leaf dataset with three classes and achieved 91.2% accuracy. They used an image augmentation approach to expand the training set and prepare the model for complex-background images. Ullah et al. (2022) suggested DeepPestNet, an end-to-end deep learning network, to identify and categorize crop pests. The model has 11 layers, including 8 convolutional layers and 3 dense layers. The authors used image augmentation techniques such as rotation, flipping, and blurring to expand the training set and evaluate the robustness of the suggested method. On Deng’s crop pest dataset with 10 pest classes they reported 100% accuracy, and 98.92% on the Kaggle pest dataset.

Jamjoom et al. (2023) used conventional ML methods for plant leaf disease classification. For data preprocessing, discrete cosine transformation (DCT) and color space conversions were used. The training images were segmented with K-means clustering, and local binary pattern (LBP) and grey-level co-occurrence matrix (GLCM) features were used for feature extraction. Radial basis and polynomial kernel approaches were used for feature classification, and SVM was used for disease type identification. Arshad et al. (2023) used preprocessing techniques including data augmentation, and the U-Net model for region-of-interest segmentation. The CNN models VGG-19 and Inception-V3 were used in an ensemble for feature extraction, and transformers were used for the identification of potato diseases. The CNN is considered the building block of recent computer vision applications. Bhatti et al. (2023) used a custom CNN model equipped with preprocessing and data augmentation for feature extraction, and transfer learning based on Inception-V3 for disease identification. The study used a hybrid training dataset taken from PlantVillage and PlantDoc with sixty classes and claimed 99% testing accuracy.

Designing a good model for early diagnosis of plant leaf diseases requires thorough knowledge of the related literature to appreciate the difficulty of the problem. Except for a few diseases for which datasets are available on the internet, images of plant diseases are scarce. Collecting target-disease leaf images from real-world environments is time-consuming and labor-intensive. Training a generalized model for such a new disease requires deep knowledge of CNN structure and parameters and their role in training. In the next section, these issues are addressed systematically.

3 Methodology

The proposed approach comprises two main steps, i.e., anatomical and computational studies. The anatomical study investigates the root causes of possible diseases in the plant leaves. The computational study uses a fine-tuned CNN with SVM to classify Cordia dichotoma leaves as healthy or diseased. The details of each methodological step are elaborated below:

3.1 Anatomical studies

A sharp blade was used to cut the galled leaf segments anatomically, and the thinnest pieces were stained. The protocol listed below was followed. Fresh solutions of safranin and methyl blue, each 5% in water, were combined in an equal ratio. The galled segments were submerged in the mixture for 10–20 min before being transferred to a solution of 50% alcohol. The sections were moved to 95% alcohol for 5 min, washed with pure alcohol for 5 min, mounted in a drop of glycerol, and dried. Different resolutions of an optical microscope (Lebomed LB-201) were used to observe the sections, and photos were taken using a Vivo S1 Pro camera (Verhertbruggen et al., 2017).

3.2 Computational studies

In this study, a novel model named DeepSVM was developed to identify Cordia dichotoma leaf images as healthy or diseased. Transfer learning was utilized by adopting ResNet-50 as the backbone, which allowed us to leverage the pre-trained weights of the ResNet-50 model, previously trained on a large dataset (ImageNet), for feature extraction. On top of the ResNet-50 backbone, three FC layers were added to learn higher-level representations specific to the domain image set. Finally, an SVM was used as the output layer for classification, a popular choice for such tasks. The results showed that the proposed approach achieved high accuracy in classifying plant leaf images as healthy or diseased, demonstrating the effectiveness of combining transfer learning and SVM in this context. The flow of the model is shown in Figure 1.

Figure 1
Flowchart depicting a deep learning pipeline for leaf classification. The process starts with the Coldia Leaf Dataset, followed by data preprocessing. The dataset is split into 80 percent and 20 percent portions, subjected to data augmentation, and used for DeepSVM model training. Model testing follows, leading to classification results.

Figure 1. Workflow of the suggested method.

3.3 Data collection

Developing real-time applications using DL necessitates a dataset of images for training the model. Unfortunately, no online dataset of Cordia leaves is currently available to train and validate a DL model for identifying healthy and unhealthy Cordia leaves. Therefore, a custom dataset containing 3,500 images was carefully collected for model training, with an additional set of 400 images allocated for testing. A detailed distribution of the dataset is provided in Table 1. The leaf images were manually annotated with binary class labels: Healthy and Dome Galls. The leaves of Cordia dichotoma show variations in shape due to several factors, including soil type, overall plant health, climatic conditions, water availability, pest attacks, and nutrient deficiencies. Therefore, images were collected from diverse locations, including urban and rural areas within the Bannu district of Khyber Pakhtunkhwa (KP), Pakistan, ensuring that the dataset accurately reflected real-world scenarios. Most of these images were captured in the semi-hilly Baka Khel subdivision of district Bannu. The original image resolution was 2016 × 4704 pixels, captured using a TECNO Camon 20 camera (64 MP, f/1.7 wide lens, 2 MP depth sensor). Images were collected under real field conditions at different times of the day (morning, noon, afternoon, and evening) to ensure variability. All images were resized to 256 × 256 pixels for model training. The training images were split 80% for training and 20% for validation, with the 400 independent test images held out (200 per class). Importantly, employing authentic, real-world images rather than pre-existing internet datasets can significantly enhance the performance of DL systems (Nagaraju and Chawla, 2020). A sample of the pictures gathered for the suggested work is shown in Figure 2. The dataset is available on request for researchers to experiment with the data and further improve the model accuracy.

Table 1

Table 1. Distribution of Cordia dichotoma leaf images across training, validation, and test sets.

Figure 2
Four side-by-side images of leaves on a white background. (a) Healthy leaf with uniform green color. (b) Diseased leaf with small dark spots. (c) Diseased leaf with visible brown patches. (d) Healthy leaf with an even surface and no discoloration.

Figure 2. Sample images from training dataset. (a) Healthy; (b) diseased; (c) diseased; (d) healthy.

3.4 Pre-processing

Image preprocessing is a crucial step in training a DL classifier for computer vision tasks. Removing inadequate images, such as those of poor quality, blurry, or with low contrast, from a training dataset is essential in preparing the data for training a DL model. Inadequate images can negatively affect the model’s accuracy and result in poor performance (Li et al., 2025). Each image must be inspected manually to assess its quality and remove inadequate images. This is time-consuming, especially for a large dataset, but removing such images can improve the model’s accuracy by ensuring it is trained on high-quality data. Image resizing is also essential in preparing images for a DL model, since CNN-based approaches are sensitive to the shape and size of the input photos. Therefore, all images should be resized to a standard size before being fed into the model. For the proposed study, all input images were downsized to 256 × 256 pixels. This resizing is performed using bilinear interpolation, a common method that estimates the pixel value at a non-integer coordinate (x, y) as the weighted average of the four nearest neighboring pixels. The interpolated value is computed using Equation (1) as follows:

I(x, y) ≈ (1 − a)(1 − b)·I(x₀, y₀) + a(1 − b)·I(x₁, y₀) + (1 − a)b·I(x₀, y₁) + ab·I(x₁, y₁)     (1)

Where:

x₀ = ⌊x⌋, x₁ = ⌈x⌉.

y₀ = ⌊y⌋, y₁ = ⌈y⌉.

a = x − x₀, b = y − y₀.

I(x, y): interpolated pixel intensity.

I(x₀, y₀), etc.: intensities of the four neighboring pixels.
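As a concrete illustration, Equation (1) translates directly into code. The sketch below is a minimal didactic version (not the paper's pipeline); the image is indexed as img[row][col], and integer coordinates fall back to the exact pixel value:

```python
import math

def bilinear_interpolate(img, x, y):
    """Estimate the intensity at a non-integer (x, y) from the four
    nearest pixels, following Equation (1)."""
    x0, x1 = math.floor(x), math.ceil(x)
    y0, y1 = math.floor(y), math.ceil(y)
    a, b = x - x0, y - y0
    return ((1 - a) * (1 - b) * img[y0][x0]    # top-left weight
            + a * (1 - b) * img[y0][x1]        # top-right weight
            + (1 - a) * b * img[y1][x0]        # bottom-left weight
            + a * b * img[y1][x1])             # bottom-right weight
```

For example, the midpoint of a 2 × 2 patch with intensities 0, 10, 20, 30 interpolates to 15.0, the average of the four neighbors.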

Image normalization is a method used in image processing to bring an image’s pixel values into a predetermined range or scale. For many DL algorithms, such as CNNs, to learn effectively, the pixel values must fall within a similar range, which is typically achieved through normalization. In the proposed approach, each image was normalized to the range 0 to 1 by dividing each pixel value by 255, i.e., using the transformation (rescale = 1./255). This converts the original 8-bit pixel values (ranging from 0 to 255) into floating-point values between 0 and 1. These image preparation methods help increase the model’s accuracy and improve its generalization to unseen data.
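The rescale step maps 8-bit intensities into [0, 1]. A NumPy sketch (the synthetic image below is a stand-in for an actual resized leaf photo):

```python
import numpy as np

# A synthetic 8-bit image standing in for a resized 256 x 256 leaf photo.
img = np.full((256, 256, 3), 200, dtype=np.uint8)

# rescale = 1./255: map integer intensities in [0, 255] to floats in [0, 1].
normalized = img.astype(np.float32) / 255.0
```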

3.5 Data augmentation

Data augmentation is a strategy employed to expand the size of a training dataset by applying different modifications to the original images, such as rotation, mirror imaging, cropping, and scaling. The augmented data is then employed for training the proposed model. The significance of data augmentation in CNN model training is rooted in its capacity to enhance the model’s ability to generalize: by introducing variations to the source images, the model learns to identify the same object in diverse configurations and orientations, enhancing its resilience to input variations. Moreover, data augmentation serves as a countermeasure against overfitting, since adding a variety of changes to the training data creates a more relevant and diverse image set. In the proposed approach, transformations such as brightness adjustment (0.4–1.5), horizontal and vertical flipping, height shift (0.2), width shift (0.2), rotation (30°), and zooming (0.2) were employed. Some sample images post-augmentation are illustrated in Figure 3.
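A few of the listed transformations can be sketched with NumPy. This is a dependency-light illustration rather than the paper's actual pipeline (frameworks such as Keras expose these as generator options); rotation and zoom are omitted here because they require interpolation:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((256, 256, 3)).astype(np.float32)  # normalized leaf image

h_flip = img[:, ::-1, :]          # horizontal flip (mirror imaging)
v_flip = img[::-1, :, :]          # vertical flip
# Brightness jitter in the stated range (0.4-1.5), clipped back to [0, 1].
bright = np.clip(img * rng.uniform(0.4, 1.5), 0.0, 1.0)
# Width shift by up to 20% of the image width (wrap-around for simplicity).
shift = int(0.2 * img.shape[1])
w_shift = np.roll(img, shift, axis=1)
```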

Figure 3
Eight images showing a leaf with different transformations: (a) Original, (b) Brightness adjusted, (c) Horizontally flipped, (d) Vertically flipped, (e) Width shifted, (f) Height shifted, (g) Rotated, and (h) Zoomed. Each variation demonstrates a distinct modification of the leaf's appearance.

Figure 3. Augmentation methods used in the proposed work. (a) Original; (b) brightness; (c) horizontal flip; (d) vertical flip; (e) width shift; (f) height shift; (g) rotation (h) zooming.

3.6 Proposed DeepSVM approach

The DeepSVM model proposed in this study integrates ResNet-50 as its pre-trained backbone, augmented with three FC layers and an SVM as the output layer. ResNet-50 is a well-known pre-trained model developed by Microsoft researchers in 2015. The general architecture of ResNet-50 is depicted in Figure 4. The architecture starts with a convolutional layer having 64 feature maps with a 7 × 7 kernel, followed by a max pooling layer. It then stacks sixteen residual blocks comprising 48 convolutional layers, three per block, followed by a global average pooling layer. The number of feature maps in the convolutional layers increases progressively from input to output. Convolutional layers operate by applying a filter or kernel to an input matrix or image, performing element-wise multiplication followed by summation, to produce an output matrix known as a feature map. The extracted features are then sent to the FC layers for classification, which is described as follows:

Figure 4
Diagram illustrating a deep residual network architecture. Panel (a) depicts the initial stage with an image of a leaf, followed by layers of convolution and pooling. Panel (b) shows blocks with configurations of 1x1, 3x3, and 1x1 convolutional layers, reflecting variations in kernel size and feature depth. Panel (c) explains skip connections with pathways allowing residual learning, enhancing model efficiency. Orange boxes indicate convolutions; arrows represent data flow, connecting operations for computational learning enhancement.

Figure 4. (a) General architecture of ResNet-50 (b) skip connection (Conv Block) (c) skip connection.

Let I = [a(i, j)] be the input matrix of order c × d, and let K = [k(m, n)] be a smaller filter matrix of order k × k (k odd). The filter slides across the input matrix, and at each position it performs an element-wise multiplication with the corresponding region of the input. The results are then summed to generate the output matrix, as defined in Equation 2:

C(i, j) = Σ_{m,n=1}^{k} a(i + m, j + n) · k(m, n)     (2)

Where “m” and “n” denote the positions of the kernel indices, and the sum is computed across these indices.
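A direct NumPy translation of Equation 2 (note that, as in most DL frameworks, the kernel is applied without flipping, i.e., as cross-correlation):

```python
import numpy as np

def conv2d_valid(inp, kernel):
    """Slide the kernel over the input; at each position, element-wise
    multiply and sum (Equation 2), producing one feature-map entry."""
    k = kernel.shape[0]
    rows = inp.shape[0] - k + 1
    cols = inp.shape[1] - k + 1
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            out[i, j] = np.sum(inp[i:i + k, j:j + k] * kernel)
    return out
```

For a 3 × 3 input and a 2 × 2 kernel of ones, each output entry is simply the sum of the corresponding 2 × 2 patch.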

The ResNet (Residual Network) models are famous for tackling the challenge of vanishing gradients within deep neural networks by incorporating skip connections. There are two types of skip connections: identity blocks (identity skip connections) and convolutional blocks (projection skip connections). Identity skip connections are the basic form, where the input to a layer is added directly to its output, as shown in Figure 4c. In ResNet-50, these connections are used in the residual blocks where the input and output dimensions are the same. Projection skip connections are utilized when the dimensions of the output and input of a residual component are not equal. In such cases, a 1 × 1 convolutional layer adjusts the dimensionality of the input to match the output dimension, as shown in Figure 4b. These connections are used in the residual blocks within ResNet-50 where downsampling is required. The skip connections, alternatively labeled shortcut connections, serve as a mechanism within deep neural networks to mitigate the vanishing gradient issue. In a neural network, gradients are employed for weight updates during training. However, as the gradient is propagated through the network, it can become very small, especially in deep networks, making it difficult to update the weights effectively. This can lead to slower convergence or the network becoming stuck in a local minimum. Skip connections solve this issue by providing an alternate, more direct path for the gradient through the network. This is achieved by adding a connection that skips one or more layers. It works as follows:

Y = F(x) + x     (3)

Equation 3 shows how a residual layer works: x is the input and Y is the output. The function F(x) processes the input to extract useful information, and the original input x is then added element-wise to the result of F(x). This helps the network learn more effectively by preserving some of the original information.
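A toy sketch of Equation 3, with an arbitrary placeholder function standing in for the block's learned transformation F:

```python
def residual_block(x, F):
    """Equation 3: output = F(x) + x, added element-wise, so the
    gradient has a direct path around F."""
    return [fx + xi for fx, xi in zip(F(x), x)]

# Placeholder F(x) = 0.5 * x; a real block would apply convolutions.
y = residual_block([1.0, 2.0, 3.0], lambda v: [0.5 * vi for vi in v])
# y == [1.5, 3.0, 4.5]
```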

ResNet-50, having undergone extensive training on the ImageNet dataset with over 1.2 million images across 1,000 classes, brings a wealth of diverse features to the model. This pre-training on ImageNet establishes ResNet-50 as a popular choice for transfer learning in various computer vision tasks such as object detection, image segmentation, and classification. Transfer learning exploits the knowledge gained during pre-training, allowing fine-tuning on a new dataset or task with limited labeled examples. This approach often leads to enhanced performance and faster convergence, especially when dealing with smaller datasets. The transfer learning process involves using the pre-trained model’s weights as a starting point, replacing the final layer(s) with task-specific ones, and then training these new layers on the new dataset, while the pre-trained layers are frozen or fine-tuned with a reduced learning rate. In the proposed study, three FC layers were used for feature classification. Each FC layer was followed by a dropout layer to reduce overfitting. An SVM with L2 regularization was used as the output layer for disease identification.
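A hypothetical Keras sketch of such a head is shown below. The layer widths, dropout rate, L2 coefficient, and optimizer are illustrative assumptions, not the paper's reported configuration (Table 2 holds that); the SVM-like behavior comes from a linear output unit trained with hinge loss and L2 weight regularization:

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

def build_deepsvm(input_shape=(256, 256, 3), weights="imagenet"):
    """Sketch of a DeepSVM-style model: frozen ResNet-50 features,
    three FC layers with dropout, and a linear SVM-style output."""
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights=weights,
        input_shape=input_shape, pooling="avg")
    backbone.trainable = False  # transfer learning: freeze pre-trained layers

    x = backbone.output
    for units in (512, 256, 128):        # three FC layers (sizes assumed)
        x = layers.Dense(units, activation="relu")(x)
        x = layers.Dropout(0.5)(x)       # dropout after each FC layer
    # Linear unit + L2 penalty + hinge loss = a soft-margin linear SVM.
    out = layers.Dense(1, activation="linear",
                       kernel_regularizer=regularizers.l2(0.01))(x)
    model = models.Model(backbone.input, out)
    model.compile(optimizer="adam", loss="hinge", metrics=["accuracy"])
    return model
```

Note that Keras' hinge loss expects labels encoded as −1/+1, so 0/1 class labels would be mapped via y * 2 − 1 before training.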

The SVM is the final/output layer for classifying the dome gall disease. SVM, an ML technique, is employed for classification and regression tasks. In classification, SVM segregates data instances into distinct categories by identifying the hyperplane that maximizes the separation margin between them. This hyperplane serves as the boundary, ensuring the greatest possible gap between data instances of different classes. SVM is a potent algorithm that addresses linear and nonlinear classification and regression problems. Its widespread adoption spans diverse domains, including image classification, text categorization, and bioinformatics. In the proposed approach, SVM is employed for binary classification using the hinge loss function along with L2 regularization. The mathematical representation is provided in Equation 4 below:

minimize: \( \tfrac{1}{2}\lVert \omega \rVert^{2} + C \sum_{i} \max\bigl(0,\ 1 - y_{i}(\omega^{T} x_{i} + b)\bigr) \)     (4)

In this context, ω represents the weight vector, b is the bias term, and C is the regularization parameter that controls the trade-off between maximizing the margin and minimizing classification errors. yᵢ ∈ {−1, +1} denotes the binary class label of the i-th training example, while xᵢ is its corresponding feature vector. The symbol ∑ indicates summation over all training examples. With these definitions, the hinge loss function used in the L2-regularized binary SVM is expressed in Equation 5 as follows:

\( \max\bigl(0,\ 1 - y_{i}(\omega^{T} x_{i} + b)\bigr) \)     (5)

This loss function imposes a penalty on the model when a training sample is misclassified or falls on the wrong side of the margin. If the model correctly classifies a sample with sufficient confidence (a margin of at least 1), the loss is zero. The hinge loss is convex, so the objective has a single global minimum, and although it is not differentiable exactly at the point where the argument of the max function is zero, it is piecewise linear and remains straightforward to optimize with subgradient methods. Table 2 contains the architecture and training configuration of the proposed DeepSVM framework.
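Equations 4 and 5 can be verified numerically. The following sketch evaluates the per-sample hinge loss and the full L2-regularized objective for a toy weight vector and three hand-picked samples (illustrative values, not trained parameters):

```python
import numpy as np

def hinge_losses(w, b, X, y):
    """Per-sample hinge loss max(0, 1 - y_i (w^T x_i + b))  (Equation 5)."""
    margins = y * (X @ w + b)
    return np.maximum(0.0, 1.0 - margins)

def svm_objective(w, b, X, y, C=1.0):
    """L2-regularized objective 0.5*||w||^2 + C * sum of hinge losses (Equation 4)."""
    return 0.5 * np.dot(w, w) + C * hinge_losses(w, b, X, y).sum()

X = np.array([[2.0, 0.0],    # confidently positive (margin 2, loss 0)
              [-2.0, 0.0],   # confidently negative (margin 2, loss 0)
              [0.5, 0.0]])   # inside the margin (margin 0.5, loss 0.5)
y = np.array([1.0, -1.0, 1.0])
w, b = np.array([1.0, 0.0]), 0.0

losses = hinge_losses(w, b, X, y)   # → [0.0, 0.0, 0.5]
obj = svm_objective(w, b, X, y)     # 0.5*||w||^2 + 0.5 = 1.0
```

Note that the correctly classified points with margins above 1 contribute nothing to the loss; only the point inside the margin is penalized.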


Table 2. Model architecture and training configuration of the proposed DeepSVM framework.

4 Results and discussion

4.1 Training environment

In this study, Google Colab was used to develop and train the models. Google Colab offers a cost-effective alternative, providing a free GPU-enabled environment. Integrated with Google services such as Gmail and Drive, it allows easy access to datasets and models, and its interactive notebooks, equipped with pre-installed libraries, streamline coding and save setup time. Colab’s Tesla T4 GPU, with 16 GB of memory, accelerated model training for faster convergence and improved results. The Keras library was used for developing the CNN models. This combination democratizes access to advanced hardware, giving researchers in image analysis an efficient, collaborative platform.

4.2 Performance measurement metrics

To assess the model’s effectiveness, we employed five metrics: accuracy, precision, recall, F1-score, and specificity (Srinivasu et al., 2025). Accuracy measures the overall correctness of the model’s predictions (Equation 6). Precision and recall measure the model’s ability to correctly identify positive and negative samples (Equations 7, 8, respectively). The F1-score, the harmonic mean of precision and recall, provides a balanced measure of performance (Equation 9). Specificity, the proportion of correctly identified negative samples, is given in Equation 10.

\( \text{Accuracy} = \dfrac{TP + TN}{\text{Total Samples}} \)     (6)

\( \text{Precision} = \dfrac{TP}{TP + FP} \)     (7)

\( \text{Sensitivity (Recall)} = \dfrac{TP}{TP + FN} \)     (8)

\( \text{F1-score} = 2 \times \dfrac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \)     (9)

\( \text{Specificity} = \dfrac{TN}{TN + FP} \)     (10)

Here, true positives (TP) are dome gall leaves correctly classified as diseased; true negatives (TN) are healthy leaves correctly classified as healthy; false positives (FP) are healthy leaves wrongly classified as dome galls; and false negatives (FN) are dome gall leaves wrongly classified as healthy.
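These definitions translate directly into code. The counts below are hypothetical, chosen only so that the accuracy matches the 94.50% reported later on 400 test images; they are not taken from the paper’s confusion matrix:

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute Equations 6-10 from confusion-matrix counts."""
    total = tp + tn + fp + fn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # sensitivity
    return {
        "accuracy": (tp + tn) / total,
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "specificity": tn / (tn + fp),
    }

# Hypothetical counts for a 400-image test set (illustration only).
m = classification_metrics(tp=180, tn=198, fp=2, fn=20)
# m["accuracy"] = 378/400 = 0.945, m["specificity"] = 198/200 = 0.99
```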

4.3 Experiments and results

4.3.1 Anatomical results for dome galls

A tiny mite (Tyrophagus putrescentiae) was isolated from galled tissues of Cordia dichotoma, as shown in Figure 5. This mite is the possible inducer of dome galls in C. dichotoma. A diversity of calcium oxalate crystals was observed in mite-induced gall tissues in Cordia (Figure 6). These crystals are proposed to be released by the plant tissues as a defense against mite-induced stress. Anatomical observations of transverse sections of the leaves indicated the development of various trichomes, proposed to be developed by the plant as physical barriers against the mites. Irregular extensions of vascular tissues were also noticeable (Figure 7).


Figure 5. (a) Mite, surface morphology; (b) side view.


Figure 6. (a) Healthy leaf section; (b) initiation of dome gall; (c) transverse section of leaf gall segment; (d) trichome; (e) trichome, closer view; (f) multiple trichomes.


Figure 7. (a) A trichome with its secretion; (b) irregular xylem vessel; (c) multiple trichomes; (d) multiple.

In plants, galls are anomalous fleshy or woody outgrowths, sometimes referred to as warts and tubercles when small or knots when large, usually containing a web of complex, branched vascular tissues distributed irregularly (Lu et al., 2019). During tumor induction, cell hypertrophy is usually the first noticeable response of the host plant organ (dos Santos Isaias et al., 2014). The presence of calcium oxalate crystals of varying shapes and dimensions within gall tissues is illustrated in Figure 8.


Figure 8. Calcium oxalate crystals of various dimensions.

The gall tissues are often accompanied by trichomes and other modified cells, which act as physical barriers that possibly protect the tissues from further damage (Ferreira and dos Santos Isaias, 2014). Plant galls are usually distinct structures induced by the invasion of pathogenic organisms such as insects, mites, nematodes, and microbes (Anand and Ramani, 2021). Gall initiation is accompanied by rapid cell division and differentiation of parenchyma cells to provide supplementary vasculature to the growing gall. Dolzblasz et al. (2018) reported that leafy galls develop a complex network of vascular tissues to ensure the transport of water and dissolved minerals to the growing apices of the gall. Karabourniotis et al. (2020) reported anatomical and chemical modifications and structural diversity in trichomes as a plant strategy to overcome biotic and abiotic stresses, and noted that trichomes function as physical barriers protecting plant tissues against foreign invaders. Nakata (2012) reports that plants synthesize a diversity of calcium oxalate crystals when under stress; these crystals play a vital role in regulating cellular calcium and protecting plants from biotic and abiotic stresses (Gómez-Espinoza et al., 2021).

4.3.2 Results with the proposed DeepSVM framework

In this study, a novel model, DeepSVM, was trained and fine-tuned on a training set of 3,500 images for the accurate identification of early symptoms of dome galls, a prevalent leaf disease in Cordia plants. Early dome gall symptoms closely resemble healthy leaves, making them difficult to classify. The model’s architecture featured ResNet-50 as its backbone, three FC layers, and an SVM as the final output layer. Training extended to 98 epochs, with early stopping used to mitigate overfitting by discontinuing training when validation performance degraded. Evaluated on 400 previously unseen images, the model achieved an accuracy of 94.50%. Comprehensive results are presented in Table 3. The model achieved high accuracy, precision, recall, and F1-score, indicating its effectiveness in classifying plant leaves as healthy or diseased. These findings suggest that the developed model could be a valuable tool for diagnosing plant disease, helping farmers and agricultural experts identify and treat diseased plants promptly.
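The early-stopping rule used here (halt when validation performance stops improving) can be sketched as follows; the patience value is an assumption for illustration, since the paper does not state one:

```python
def early_stopping_epoch(val_losses, patience=5):
    """Return the epoch at which training halts: the first epoch at which
    validation loss has not improved for `patience` consecutive epochs."""
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return len(val_losses) - 1      # ran to the end without triggering

# Validation loss improves, then plateaus; patience runs out three epochs
# after the best epoch (epoch 3, loss 0.35).
losses = [1.0, 0.6, 0.4, 0.35, 0.36, 0.37, 0.36, 0.38, 0.37, 0.39]
stop = early_stopping_epoch(losses, patience=3)   # stops at epoch 6
```

In a Keras workflow, the same behavior is typically obtained with the built-in EarlyStopping callback monitoring validation loss.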


Table 3. Outcomes of DeepSVM on the test images.

In this experiment, a comprehensive exploration of model training strategies for the early symptom identification of dome galls was undertaken. Employing deep and shallow architectures, experiments ranged from training models from scratch to utilizing transfer learning. The customized CNN model, featuring seven convolutional layers with BN after each block, two FC layers, dropout layers, and an output layer, served as the baseline for the proposed research, with detailed results in Table 4.


Table 4. Performance comparison of DeepSVM with other models.

Training oscillation in DL denotes erratic fluctuations in metrics such as loss or accuracy during training; it often arises from inappropriate learning rates or model architecture and impedes convergence. Incorporating L2 regularization in the final layer effectively aligned results with the research objectives. Training and validation accuracy and loss curves are depicted in Figure 9. Additionally, MobileNet-v2 (Sandler et al., 2018), VGG-16 (Simonyan and Zisserman, 2014), InceptionResNet-V2 (Naveenkumar et al., 2021), and VGG-19 (Simonyan and Zisserman, 2014) were trained with the same parameters and FC layers, and their respective results are outlined in Table 4. The custom CNN model performed better with two FC layers rather than three. The confusion matrix of the proposed model, detailing TPs, TNs, FPs, and FNs, is depicted in Figure 10.


Figure 9. Training performance of the proposed DeepSVM: (a) accuracy; (b) loss.


Figure 10. Confusion matrix of the proposed model.

The DeepSVM, employing ResNet-50 as a backbone, demonstrated strong performance in classifying Cordia dichotoma leaf images as healthy or infected. The use of ReLU activation, the SGD optimizer with momentum (0.9), and the hinge loss function contributed to the model’s success. ResNet-50’s deep 50-layer architecture allowed it to learn complex features, addressing the vanishing gradient issue through residual connections, which facilitated efficient gradient propagation and reduced the risk of overfitting. The model incorporated three FC layers, progressively reducing neurons (512, 256, and 128) toward the output layer, with a dropout layer (dropout ratio 0.3) and L2 regularization in each FC layer. This design enabled the model to capture intricate nonlinear relationships between features and class labels. The final SVM layer outperformed a sigmoid output, showcasing SVM’s effectiveness in handling complex data and generalizing well to unseen instances (Tang, 2013). DeepSVM, with ResNet-50 as the backbone and an SVM as the final layer, presented a robust solution for accurate leaf image classification.

4.4 Ablation study

The proposed DeepSVM model underwent a series of iterative experiments involving continuous adjustments to its architecture and hyperparameters. Since transfer learning was used, the main focus was on the head (FC layers) of the CNN model: the number of FC layers and the number of neurons per layer. The model was initially trained with a single fully connected (FC) layer of 1,024 neurons, followed by experiments with two FC layers of 512 and 256 neurons, respectively. The best results, however, were achieved with three FC layers of 512, 256, and 128 neurons. Most of the previous literature used optimizers including Adam and stochastic gradient descent (SGD); in this study, both were tried, and SGD with momentum (0.9) outperformed Adam. The learning rate was adjusted multiple times during experimentation, with optimal performance achieved at a final value of 0.0001. To reduce overfitting, an L2 regularizer was applied in the final layer, initially with a sigmoid activation function. With three FC layers, SGDM (momentum = 0.9), a learning rate of 0.0001, a sigmoid activation function, and an L2 regularizer, a notable improvement was observed in training and validation accuracy as well as loss. However, a non-negligible amount of oscillation persisted in the training and validation curves, potentially affecting the model’s generalization in real-world applications. Replacing the sigmoid activation function with an SVM greatly enhanced the results and reduced the oscillation. The resulting model, containing a backbone of 50 convolutional layers, three FC layers, and an SVM output layer, was named DeepSVM.
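The winning ablation settings (SGD with momentum 0.9 on the hinge objective) can be illustrated end to end on synthetic data. This is a schematic subgradient-descent sketch of a linear SVM, not the authors’ training loop; the hinge term is averaged rather than summed, and the learning rate is larger than the paper’s 0.0001, purely so the toy example converges in a few hundred steps:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic linearly separable data (illustrative; not the leaf dataset).
X = rng.standard_normal((200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

def objective_and_grad(w, b, C=1.0):
    """Mean-hinge variant of Equation 4 and its subgradient."""
    margins = y * (X @ w + b)
    active = margins < 1.0                          # samples inside the margin
    loss = 0.5 * np.dot(w, w) + C * np.maximum(0.0, 1.0 - margins).mean()
    gw = w - (C / len(y)) * (y[active][:, None] * X[active]).sum(axis=0)
    gb = -(C / len(y)) * y[active].sum()
    return loss, gw, gb

w, b = np.zeros(2), 0.0
vw, vb = np.zeros(2), 0.0
lr, momentum = 0.05, 0.9                            # momentum as in the ablation
loss0, _, _ = objective_and_grad(w, b)
for _ in range(300):
    _, gw, gb = objective_and_grad(w, b)
    vw = momentum * vw - lr * gw                    # velocity update
    vb = momentum * vb - lr * gb
    w, b = w + vw, b + vb
loss1, _, _ = objective_and_grad(w, b)              # lower than loss0
```

Momentum accumulates a velocity term that smooths successive subgradient steps, which is one reason it damps the training oscillation discussed above.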

4.5 Performance evaluation of the DeepSVM on public datasets

The results on two publicly available datasets show that the proposed model is adaptable and performs well beyond its original task. On the Potato Leaf Disease (PLD) dataset, which consists of 4,072 images spanning healthy, early blight, and late blight classes, the model demonstrated improved precision in classifying plant leaf diseases. On the pulmonary X-ray image dataset, which comprises 6,432 medical images in three classes (normal, COVID-19, and pneumonia), the model achieved an accuracy of 98.00%, revealing its capacity to recognize intricate patterns within radiographic images and suggesting opportunities to enhance medical diagnostic processes. The performance metrics in Table 5 highlight the model’s versatility and efficacy in addressing the distinct challenges of plant disease detection and medical image analysis. These findings demonstrate the generalizability of the proposed approach and its potential to make a substantial contribution to both healthcare and agriculture.


Table 5. Performance assessment of the DeepSVM on public datasets.

4.6 Comparison of the DeepSVM with other SOTA methods

The proposed approach is evaluated on a novel dataset curated for dome galls. We could not find related studies on similar datasets against which to compare the proposed approach directly with SOTA methods. Therefore, to assess the effectiveness of the proposed DeepSVM approach, it was evaluated on a publicly available dataset: the model was trained and tested on the PLD dataset to create a solid baseline for the work. The detailed results are shown in Table 6. The model achieved a test accuracy of 97.50%, a testament to the efficacy of the approach. A comparative analysis was then conducted against prior research employing the same publicly available dataset; the detailed outcomes are outlined in Table 6, offering insight into the advancements made by the proposed model.


Table 6. Comparison of DeepSVM method with previous work on a public dataset.

4.7 Limitations of the study

DeepSVM, a novel model crafted for the early symptom identification of dome galls, achieved an accuracy of 94.50% on the test dataset, a significant stride forward in plant leaf disease classification. However, an avenue for improvement lies in computational efficiency: DeepSVM required a longer training duration than contemporaneously trained models, particularly those employing a sigmoid activation function in the final layer and the Adam optimizer. Forthcoming research will optimize the DeepSVM training process and explore alternative optimization strategies, including different optimizers and parameters, aiming for an optimal balance between high accuracy and reduced computational time to strengthen the model’s practical applicability in real-world scenarios. Moreover, the proposed DeepSVM approach is a black box that takes leaf images as input and predicts whether the input image is healthy or unhealthy. To better interpret its results, we aim to introduce eXplainable Artificial Intelligence (XAI) into the approach to reduce its opacity and improve the fairness of its results.

5 Conclusion and future work

This study aimed to create a model that can spot early signs of dome galls, a leaf ailment affecting Cordia plants, and introduced a new method called DeepSVM. The model was built with ResNet-50 as the backbone, three FC layers, and an SVM in place of the sigmoid activation function as the final layer. A custom, class-balanced dataset of 3,500 images of healthy and diseased leaves was used for training, with preprocessing and data augmentation applied to enhance generalization and reduce overfitting, resulting in a test accuracy of 94.50%. The study demonstrated the model’s effectiveness in early symptom classification for plant leaf diseases, offering a reliable tool for early plant disease detection that enhances crop productivity and outperforms traditional algorithms. It encourages integrating anomalous histological feature extraction with the DeepSVM model to further enhance performance. The framework can be extended to other agricultural applications, improving existing machine and deep learning methods, and ultimately supports farmers in selecting effective pesticides, reducing costs, and maintaining crop quality through timely diagnosis.

There is scope for further research to improve the proposed approach: accuracy may be increased and training time reduced without compromising performance. Collaborating with domain experts to amass a more comprehensive and diverse dataset, stratified into subcategories such as initial, mature, and severe dome gall cases, would enhance method evaluation. Extending the proposed method to other plant species for early leaf disease symptom identification holds promise. Moreover, exploring its application to the classification of human diseases could broaden its applicability, fostering advances in both plant and human disease classification techniques.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

SS: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. MS: Conceptualization, Data curation, Funding acquisition, Project administration, Resources, Software, Supervision, Writing – review & editing. AK: Supervision, Writing – review & editing, Writing – original draft. MAl: Writing – review & editing. MAy: Data curation, Investigation, Software, Writing – review & editing.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that Generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Altalak, M., Ammad Uddin, M., Alajmi, A., and Rizg, A. (2022). Smart agriculture applications using deep learning technologies: a survey. Appl. Sci. 12:5919. doi: 10.3390/app12125919


Anand, P. P., and Ramani, N. (2021). Dynamics of limited neoplastic growth on Pongamia pinnata (L.)(Fabaceae) leaf, induced by Aceria pongamiae (Acari: Eriophyidae). BMC Plant Biol. 21, 1–18. doi: 10.1186/s12870-020-02777-7


Arshad, F., Mateen, M., Hayat, S., Wardah, M., Al-Huda, Z., Gu, Y. H., et al. (2023). PLDPNet: end-to-end hybrid deep learning framework for potato leaf disease prediction. Alex. Eng. J. 78, 406–418. doi: 10.1016/j.aej.2023.07.076


Ashikuzzaman, M., Roy, K., Lamon, A., and Abedin, S. (2024). Potato leaf disease detection by deep learning: a comparative study. In 2024 6th international conference on electrical engineering and Information & Communication Technology (ICEEICT) (pp. 278–283). IEEE.


Ashwinkumar, S., Rajagopal, S., Manimaran, V., and Jegajothi, B. (2022). Automated plant leaf disease detection and classification using optimal MobileNet based convolutional neural networks. Mater Today Proc 51, 480–487. doi: 10.1016/j.matpr.2021.05.584


Bhattacharya, P., and Saha, A. (2013). Evaluation of reversible contraceptive potential of Cordia dichotoma leaves extract. Rev. Bras 23, 342–350. doi: 10.1590/S0102-695X2013005000020


Bhatti, U. A., Bazai, S. U., Hussain, S., Fakhar, S., Ku, C. S., Marjan, S., et al. (2023). Deep learning-based trees disease recognition and classification using hyperspectral data. Comp. Materials Continua 77, 681–697. doi: 10.32604/cmc.2023.037958


Bhosale, Y. H., Patnaik, K. S., Zanwar, S. R., Singh, S. K., Singh, V., and Shinde, U. B. (2024). Thoracic-net: explainable artificial intelligence (XAI) based few shots learning feature fusion technique for multi-classifying thoracic diseases using medical imaging. Multimed. Tools Appl. 84, 5397–5433. doi: 10.1007/s11042-024-20327-3


Bhosale, Y. H., Zanwar, S. R., Ali, S. S., Vaidya, N. S., Auti, R. A., and Patil, D. H. (2023). Multi-plant and multi-crop leaf disease detection and classification using deep neural networks, machine learning, image processing with precision agriculture-a review. In 2023 international conference on computer communication and informatics (ICCCI) (pp. 1–7). IEEE.


Bhujel, S., and Shakya, S. (2022). Rice leaf diseases classification using discriminative fine tuning and CLR on efficientNet. J. Soft Computing Paradigm 4, 172–187. doi: 10.36548/jscp.2022.3.006


Chakraborty, A., Kumer, D., and Deeba, K. (2021). Plant leaf disease recognition using fastai image classification. In 2021 5th international conference on computing methodologies and communication (ICCMC) (pp. 1624–1630). Erode, India: IEEE.


Chowdhury, M. E., Rahman, T., Khandakar, A., Ayari, M. A., Khan, A. U., Khan, M. S., et al. (2021). Automatic and reliable leaf disease detection using deep learning techniques. AgriEngineering 3, 294–312. doi: 10.3390/agriengineering3020020


Dolzblasz, A., Banasiak, A., and Vereecke, D. (2018). Neovascularization during leafy gall formation on Arabidopsis thaliana upon Rhodococcus fascians infection. Planta 247, 215–228. doi: 10.1007/s00425-017-2778-5


dos Santos Isaias, R. M., de Oliveira, D. C., da Silva Carneiro, R. G., and Kraus, J. E. (2014). Developmental anatomy of galls in the Neotropics: arthropods stimuli versus host plant constraints. Neotropical insect galls, 15–34. doi: 10.1007/978-94-017-8783-3_2


Ferahtia, A. (2021). Surface water quality assessment in semi-arid region (El Hodna watershed, Algeria) based on water quality index (WQI). Romania: Babes-Bolyai University.


Ferreira, B. G., and dos Santos Isaias, R. M. (2014). Floral-like destiny induced by a galling Cecidomyiidae on the axillary buds of Marcetia taxifolia (Melastomataceae). Flora-Morphology, Distribution, Functional Ecology of Plants 209, 391–400. doi: 10.1016/j.flora.2014.06.004


Ganjare, A. B., Nirmal, S. A., Rub, R. A., Patil, A. N., and Pattan, S. R. (2011). Use of Cordia dichotoma bark in the treatment of ulcerative colitis. Pharm. Biol. 49, 850–855. doi: 10.3109/13880209.2010.551539


Ghosal, S., and Sarkar, K. (2020). Rice leaf diseases classification using CNN with transfer learning. In 2020 IEEE Calcutta Conference (Calcon) (pp. 230–236). IEEE.


Gómez-Espinoza, O., González-Ramírez, D., Méndez-Gómez, J., Guillén-Watson, R., Medaglia-Mata, A., and Bravo, L. A. (2021). Calcium oxalate crystals in leaves of the extremophile plant Colobanthus quitensis (Kunth) Bartl. (Caryophyllaceae). Plants 10:1787. doi: 10.3390/plants10091787


Hong, H., Lin, J., and Huang, F. (2020). Tomato disease detection and classification by deep learning. In 2020 international conference on big data, artificial intelligence and internet of things engineering (ICBAIE) (pp. 25–29). IEEE.


Hungilo, G. G., Emmanuel, G., and Emanuel, A. W. (2019). Image processing techniques for detecting and classification of plant disease: A review. Paper presented at the 2019 international conference


Islam, M., Dinh, A., Wahid, K., and Bhowmik, P. (2017). Detection of potato diseases using image segmentation and multiclass support vector machine. In 2017 IEEE 30th Canadian conference on electrical and computer engineering (CCECE) (pp. 1–4). IEEE.


Jakjoud, F., Anas, H., and Bouaaddi, A. (2019). Deep learning application for plant diseases detection. Paper presented at the BDIoT'19: the 4th international conference on big data and internet of things. 1–6.


Jamjoom, M., Elhadad, A., Abulkasim, H., and Abbas, S. (2023). Plant leaf diseases classification using improved k-means clustering and svm algorithm for segmentation. Comp. Materials & Continua 76, 367–382. doi: 10.32604/cmc.2023.037310


Karabourniotis, G., Liakopoulos, G., Nikolopoulos, D., and Bresta, P. (2020). Protective and defensive roles of non-glandular trichomes against multiple stresses: structure–function coordination. J. For. Res. 31, 1–12. doi: 10.1007/s11676-019-01034-4


Krishnamoorthy, N., Prasad, L. N., Kumar, C. P., Subedi, B., Abraha, H. B., and Sathishkumar, V. E. (2021). Rice leaf diseases prediction using deep neural networks with transfer learning. Environ. Res. 198:111275. doi: 10.1016/j.envres.2021.111275


Kumar, A., and Vani, M. (2019). Image based tomato leaf disease detection. In 2019 10th international conference on computing, communication and networking technologies (ICCCNT) (pp. 1–6). IEEE.


Kurmi, Y., Saxena, P., Kirar, B. S., Gangwar, S., Chaurasia, V., and Goel, A. (2022). Deep CNN model for crops’ diseases detection using leaf images. Multidim. Syst. Sign. Process. 33, 981–1000. doi: 10.1007/s11045-022-00820-4


Li, B., Chen, Z., Lu, L., Qi, P., Zhang, L., Ma, Q., et al. (2025). Cascaded frameworks in underwater optical image restoration. Information Fusion 117:102809. doi: 10.1016/j.inffus.2024.102809


Li, L. H., Chu, Y. S., Chu, J. Y., and Guo, S. H. (2019). A machine learning approach for detection plant disease: taking orchid as example. In Proceedings of the 3rd international conference on vision, image and signal processing (pp. 1–6).


Lu, Q., Chen, H., Wang, C., Yang, Z. X., Lü, P., Chen, M. S., et al. (2019). Macro-and microscopic analyses of anatomical structures of Chinese gallnuts and their functional adaptation. Sci. Rep. 9:5193. doi: 10.1038/s41598-019-41656-6


Mahum, R., Munir, H., Mughal, Z. U. N., Awais, M., Sher Khan, F., Saqlain, M., et al. (2023). A novel framework for potato leaf disease detection using an efficient deep learning model. Hum. Ecol. Risk Assess. Int. J. 29, 303–326. doi: 10.1080/10807039.2022.2064814


Matias, E. F. F., Alves, E. F., Silva, M. K. D. N., Carvalho, V. R. D. A., Coutinho, H. D. M., Costa, J. G. M. D., et al. (2015). The genus cordia: botanists, ethno, chemical and pharmacological aspects. Rev. Bras 25, 542–552. doi: 10.1016/j.bjp.2015.05.012


Nagaraju, M., and Chawla, P. (2020). Systematic review of deep learning techniques in plant disease detection. Int. J. Syst. Assur. Eng. Manag. 11, 547–560. doi: 10.1007/s13198-020-00972-1


Nakata, P. A. (2012). Plant calcium oxalate crystal formation, function, and its impact on human health. Front. Biol. 7, 254–266. doi: 10.1007/s11515-012-1224-0


Nandhini, S., and Ashokkumar, K. (2021). Analysis on prediction of plant leaf diseases using deep learning. In 2021 international conference on artificial intelligence and smart systems (ICAIS) (pp. 165–169). IEEE.


Naveenkumar, M., Srithar, S., Kumar, B. R., Alagumuthukrishnan, S., and Baskaran, P. (2021). InceptionResNetV2 for plant leaf disease classification. In 2021 fifth international conference on I-SMAC (IoT in social, Mobile, analytics and cloud) (I-SMAC) (pp. 1161–1167). Palladam, India: IEEE.


Roy, A. M., and Bhaduri, J. (2021). A deep learning enabled multi-class plant disease detection model based on computer vision. Ai 2, 413–428. doi: 10.3390/ai2030026


Rozaqi, A. J., and Sunyoto, A. (2020). Identification of disease in potato leaves using convolutional neural network (CNN) algorithm. In 2020 3rd international conference on information and communications technology (ICOIACT) (pp. 72–76). IEEE.

Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L. C. (2018). MobileNetV2: inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4510–4520).

Sanjeev, K., Gupta, N. K., Jeberson, W., and Paswan, S. (2021). Early prediction of potato leaf diseases using ANN classifier. Oriental J. Computer Sci. Technol. 13, 129–134. doi: 10.13005/ojcst13.0203.11

Shijie, J., Peiyi, J., and Siping, H. (2017). Automatic detection of tomato diseases and pests based on leaf images. In 2017 Chinese Automation Congress (CAC) (pp. 2537–2510). IEEE.

Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.

Singh, A., and Kaur, H. (2021). Potato plant leaves disease detection and classification using machine learning methodologies. In IOP Conference Series: Materials Science and Engineering (Vol. 1022, No. 1, p. 012121). IOP Publishing.

Srinivasu, P. N., Kumari, G. L. A., Narahari, S. C., Ahmed, S., and Alhumam, A. (2025). Exploring the impact of hyperparameter and data augmentation in YOLO V10 for accurate bone fracture detection from X-ray images. Sci. Rep. 15:9828. doi: 10.1038/s41598-025-93505-4

Sujatha, R., Chatterjee, J. M., Jhanjhi, N. Z., and Brohi, S. N. (2021). Performance of deep learning vs machine learning in plant leaf disease detection. Microprocess. Microsyst. 80:103615. doi: 10.1016/j.micpro.2020.103615

Syed-Ab-Rahman, S. F., Hesamian, M. H., and Prasad, M. (2022). Citrus disease detection and classification using end-to-end anchor-based deep learning model. Appl. Intell. 52, 927–938. doi: 10.1007/s10489-021-02452-w

Tang, Y. (2013). Deep learning using support vector machines. arXiv preprint arXiv:1306.0239.

Ullah, N., Khan, J. A., Al Qathrady, M., El-Sappagh, S., and Ali, F. (2023). An effective approach for plant leaf diseases classification based on a novel DeepPlantNet deep learning model. Front. Plant Sci. 14:1212747.

Ullah, N., Khan, J. A., Alharbi, L. A., Raza, A., Khan, W., and Ahmad, I. (2022). An efficient approach for crops pests' recognition and classification based on novel DeepPestNet deep learning model. IEEE Access 10, 73019–73032. doi: 10.1109/ACCESS.2022.3189676

Ullah, N., Khan, J. A., Almakdi, S., Alshehri, M. S., Al Qathrady, M., Aldakheel, E. A., et al. (2023). A lightweight deep learning-based model for tomato leaf disease classification. Comput. Mater. Contin. 77, 3969–3992. doi: 10.32604/cmc.2023.041819

Verhertbruggen, Y., Walker, J. L., Guillon, F., and Scheller, H. V. (2017). A comparative study of sample preparation for staining and immunodetection of plant cell walls by light microscopy. Front. Plant Sci. 8:1505. doi: 10.3389/fpls.2017.01505

Zekiwos, M., and Bruck, A. (2021). Deep learning-based image processing for cotton leaf disease and pest diagnosis. J. Electr. Comput. Eng. 2021, 1–10. doi: 10.1155/2021/9981437

Zhu, Z., Li, X., Ma, Q., Zhai, J., and Hu, H. (2025). FDNet: Fourier transform guided dual-channel underwater image enhancement diffusion network. Sci. China Technol. Sci. 68:1100403.

Zhu, Z., Li, X., Zhai, J., and Hu, H. (2024). PODB: a learning-based polarimetric object detection benchmark for road scenes in adverse weather conditions. Information Fusion 108:102385. doi: 10.1016/j.inffus.2024.102385

Keywords: classification, Cordia dichotoma, DeepSVM, dome galls, fine-tuning, ResNet-50, SVM, transfer learning

Citation: Shah SK, Su’ud MBM, Khan A, Alam MM and Ayaz M (2026) Anatomical study and early diagnosis of dome galls in Cordia Dichotoma using DeepSVM model. Front. Artif. Intell. 8:1558358. doi: 10.3389/frai.2025.1558358

Received: 10 January 2025; Revised: 30 November 2025; Accepted: 09 December 2025;
Published: 05 January 2026.

Edited by:

Xiaobo Li, Tianjin University, China

Reviewed by:

Yogesh Bhosale, Birla Institute of Technology, India
Shangwei Deng, Tianjin University, China

Copyright © 2026 Shah, Su’ud, Khan, Alam and Ayaz. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Mazliham Bin Mohd Su’ud, mazliham@mmu.edu.my

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.