
ORIGINAL RESEARCH article

Front. Public Health, 30 August 2022
Sec. Digital Public Health

COVID-19 classification using chest X-ray images: A framework of CNN-LSTM and improved max value moth flame optimization

  • 1Department of Computer Science, HITEC University, Taxila, Pakistan
  • 2Department of Mathematics, University of Leicester, Leicester, United Kingdom
  • 3College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
  • 4Department of Electrical Engineering, College of Engineering, King Khalid University, Abha, Saudi Arabia
  • 5Department of Electrical Engineering, Faculty of Engineering, Aswan University, Aswan, Egypt
  • 6Institute for Neuro- and Bioinformatics, University of Lübeck, Lübeck, Germany
  • 7Faculty of Computers and Information, South Valley University, Qena, Egypt

Coronavirus disease 2019 (COVID-19) is a highly contagious disease that has claimed the lives of millions of people worldwide in the last 2 years. Because of the disease's rapid spread, it is critical to diagnose it at an early stage in order to reduce the rate of spread. Images of the lungs are used to diagnose this infection. In the last 2 years, many studies have been introduced to help with the diagnosis of COVID-19 from chest X-ray images. Because all researchers are looking for a quick method to diagnose this virus, deep learning-based computerized techniques are more suitable as a second opinion for radiologists. In this article, we look at the issues of multisource fusion and redundant features. We propose a CNN-LSTM and improved max value features optimization framework for COVID-19 classification to address these issues. In the proposed architecture, the original images are acquired and the contrast is increased using a combination of filtering algorithms. The dataset is then augmented to increase its size, and the augmented data are used to train two deep learning networks called Modified EfficientNet B0 and CNN-LSTM. Both networks are built from scratch and extract information from the deep layers. Following feature extraction, a serial based maximum value fusion technique is proposed to combine the best information of both deep models. However, some redundant information is also noted; therefore, an improved max value based moth flame optimization algorithm is proposed. Through this algorithm, the best features are selected and finally classified through machine learning classifiers. The experimental process was conducted on three publicly available datasets and achieved improved accuracy over existing techniques. Moreover, a classifier-based comparison is also conducted, in which the cubic support vector machine gives the best accuracy.

Introduction

In December 2019, Wuhan, Hubei Province, China, became the epicenter of a pneumonia epidemic of unknown cause, attracting national and international attention (1). The current outbreak of coronavirus disease 2019 (COVID-19), a coronavirus-associated acute respiratory illness, is the third worldwide pandemic in less than two decades (2). COVID-19 is caused by the SARS-CoV-2 virus (3). In severe cases, this illness can cause organ failure and breathing difficulties. Aside from the medical consequences, the disease has had a massive economic and environmental impact on the world (4, 5).

Coronavirus disease 2019 detection methods include nucleic acid-based assays and the polymerase chain reaction (PCR). The traditional reverse transcription polymerase chain reaction (RT-PCR) method of COVID-19 detection is time-consuming (6). Artificial intelligence (AI) technologies have been widely used to combat the COVID-19 outbreak and its complications. To identify COVID-19 cases from X-ray images, an automated method is required; it is also the least expensive approach compared with the COVID-19 laboratory test. Human examination of these images, on the other hand, is a difficult and time-consuming task, and an expert physician is always required for accurate classification. As a result, it is critical to analyze these images as soon as possible using a reliable method. In clinics, computerized approaches assist radiologists in confirming their subjective findings and detecting COVID-19 (7).

The AI-based estimation methods rely on data from the patient's symptoms. A person infected with the coronavirus may exhibit no signs or symptoms, which makes identifying an infectious individual extremely difficult (8). AI-based techniques fall into two categories: traditional feature-based approaches and deep learning-based techniques. Traditional feature-based algorithms include preprocessing procedures, handcrafted features (such as shape, texture, and geometric characteristics), removal of extraneous features, and classification. In deep learning architectures, raw images are fed into convolutional neural network (CNN) models, which extract features from the convolutional layers and perform classification using the fully connected layers. Following that, a few researchers used feature optimization methods to select the best features before classifying them with the Softmax classifier.

Using deep learning (DL), several techniques have been introduced for COVID-19 diagnosis and classification using chest X-ray and CT images (9–15). Additionally, CNN models are useful in the deployment of sophisticated COVID-19 pneumonia detection systems (16). Numerous strategies for identifying COVID-19 have been presented, all of which make use of deep CNN features and generate more precise findings than manual feature-based methods (17). In a few studies, the researchers focused on feature fusion techniques to obtain richer information about an image, fusing features from different sources into one feature matrix. Özkaya et al. (18) fused deep features for COVID-19 classification using a feature ranking method. Shankar et al. (19) introduced an entropy-based handcrafted and deep features fusion approach for better classification of COVID-19. Ragab et al. (20) combined several features in a concatenated fashion. These techniques performed well in terms of accuracy, but on the other hand, computational time increased significantly. A few researchers introduced feature reduction techniques to resolve the problem of high computational time, but the reduction process decreases accuracy by dropping some important features (21, 22).

Feature selection is an important research area nowadays, and many techniques have been introduced in the literature. In contrast to feature reduction techniques, feature selection is the process of selecting a subset of the originally extracted features instead of generating new ones. The purpose of feature selection techniques is to reduce computational time by selecting only important features based on some selection criterion or fitness function. A few important feature selection techniques are genetic algorithm-based selection, particle swarm optimization-based selection, entropy-based selection, bee colony optimization-based selection, and many more (23).

In recent years, much research has been done on the detection and classification of COVID-19 in X-ray and CT-scan images (24). These works followed some traditional techniques and showed improved accuracy (25); however, COVID-19 patients are increasing day by day worldwide. A large amount of data has been generated in the form of chest X-ray and CT images that is not feasible to classify through traditional techniques: such techniques work well for smaller datasets, but for large datasets, accuracy is degraded (26). For this reason, there is room for improving accuracy through the development of deep learning architectures. In this article, we propose a new architecture based on deep learning and improved moth flame optimization for COVID-19 classification. Our major contributions are as follows:

• A contrast enhancement technique is proposed based on the fusion of the output of local and global filters. The resultant enhanced image is further utilized for the augmentation process.

• Proposed a CNN-LSTM architecture and trained it using deep transfer learning, updating all layers from scratch instead of freezing a few layers.

• Proposed a new features fusion technique named Serial based Maximum Information.

• An improved max value based moth flame optimization algorithm is proposed for best features selection.

Related study

Many computerized techniques have been introduced for COVID-19 in recent years by computer vision researchers (27). Several researchers focused on traditional techniques, while others used deep learning architectures for the detection and classification of COVID-19 from chest X-ray images. Ibrahim et al. (28) presented a deep learning method for multiclass classification problems covering COVID-19, pneumonia, and normal cases. They used a pre-trained CNN model named AlexNet and trained it on selected COVID-19 datasets. They considered both binary and multiclass problems and achieved accuracies of 94.43, 98.19, and 95.78%, respectively. The limitation of this study was the lack of training data. Ismael and Sengür (24) presented a deep-learning-based technique for classifying COVID-19 and normal (healthy) chest X-ray images. They followed several sequential steps, including deep feature extraction, fine-tuning of pre-trained CNNs, and end-to-end training of a fine-tuned CNN model. Pre-trained CNN models such as ResNet18, VGG16, and VGG19 were used for training and feature extraction, and the extracted deep features were finally classified using a Support Vector Machine (SVM) classifier. The fine-tuned ResNet50 deep model gave a better accuracy of 92.60% than the other methods. The drawback of this method was the small number of training samples. Ketu and Mishra (29) introduced a CNN-LSTM deep learning model that can accurately detect COVID-19 infection. The proposed approach extracts useful information from the convolutional layers; a Long Short-Term Memory (LSTM) network is then designed to extract features that are fused with the CNN features. The limitation of the presented method was the reliability and suitability of the model on other series of data. Nivetha et al. (30) presented a new classification technique for COVID-19 based on a Neighborhood Rough Neural Network (NRNN) algorithm. The presented method performed better than existing algorithms such as Backpropagation Neural Network (BNN), Decision Tree, and Naive Bayes classifiers. The accuracies of NRNN were 98, 92, 100, and 100%, which were significantly better than those of the other methods. Moreover, NRNN requires less training data than the existing methods. Shastri et al. (31) introduced a novel neural network based framework for COVID-19 classification. They used the CheXImageNet CNN model for classification, tested it on open-access data covering both binary and multiclass problems, and achieved 100% accuracy in both settings. Khan et al. (32) described a deep learning technique using three pre-trained models named EfficientNet B1, NasNetMobile, and MobileNetV2. Before training the deep models, they performed data augmentation; moreover, they optimized the hyper-parameters to improve accuracy. The described model achieved 96.13% accuracy, which was better than the existing methods. The limitation of the described work was the use of heavy-weight models that required a long computation time.

Imagawa et al. (33) presented a hybrid framework for the classification of COVID-19 images. They used two pre-trained deep learning models, AlexNet and ResNet34, with and without transfer learning; classification was performed with both fine-tuned models and attained improved accuracy. De Falco et al. (34) designed an evolutionary algorithm based approach for COVID-19 classification. Sarki et al. (35) developed a deep learning system for the classification and valid detection of coronavirus using chest images. They evaluated traditional networks and also developed a CNN from scratch, trained on binary-class and multiclass datasets. Öztürk et al. (36) designed a machine learning method for the classification of viral epidemics by analyzing chest X-ray and CT images. They applied hand-crafted feature extraction to make the data more convenient and optimized the features using stacked auto-encoder and principal component analysis techniques. Al-Zubaidi et al. (37) applied CNNs for the classification of COVID-19 images, using GoogleNet for training and extracting automated features from the images. The above methods have several gaps, such as not performing well on imbalanced datasets and requiring high computational time. Shazia et al. (38) presented a neural network based system for COVID-19 detection from chest X-ray images. They used three pre-trained models, fine-tuned them, trained the fine-tuned models through transfer learning, and obtained improved accuracy. Shazia et al. (39) presented a comparative study of several deep learning models for COVID-19 classification from chest X-ray images. They evaluated seven pre-trained deep models, including VGG16 and ResNet50, and attained a classification accuracy of 99.48%. Joloudari et al. (40) combined a CNN model with a global feature extractor for the classification of COVID-19-infected and healthy patients. They used 10-fold cross-validation and obtained an accuracy of 96.71%.

Proposed methodology

Figure 1 depicts the proposed CNN-LSTM deep learning and features optimization architecture. In this diagram, the original images are acquired and the contrast is enhanced using a combination of filtering algorithms. Then, to expand the size of the dataset, data augmentation is applied, and the augmented data are used to train two deep learning networks: Modified EfficientNet B0 and CNN-LSTM. Both networks are trained from scratch and extract information from the deep layers. Following feature extraction, serial based maximum value fusion is carried out, and the fused vector is then refined using the improved moth flame optimization technique. Finally, machine learning classifiers such as support vector machines (SVM) and neural networks are used to classify the best optimal features. A quick description of each sub-step is given below.


Figure 1. The proposed architecture of coronavirus disease 2019 (COVID-19) classification using an efficient network and CNN-LSTM.

Dataset collection and normalization

The experimental approach in this research uses three publicly available datasets: COVID-19 Radiography (https://www.kaggle.com/datasets/tawsifurrahman/covid19-radiography-database), COVID-GAN and COVID-Net small chest X-ray (https://www.kaggle.com/yash612/covidnet-mini-and-gan-enerated-chest-xray), and Chest X-Ray (Pneumonia, COVID-19, Tuberculosis) (https://www.kaggle.com/datasets/jtiptj/chest-xray-pneumoniacovid19tuberculosis). There are four classes in the COVID-19 Radiography dataset: COVID-19, lung opacity, normal, and viral pneumonia. COVID-19, normal, and pneumonia are the three classes in the COVID-GAN and COVID-Net small chest X-ray dataset. COVID-19, normal, pneumonia, and tuberculosis are the four classes in the Chest X-Ray dataset. As shown in Table 1, the number of images in each dataset is insufficient to train deep learning models. Furthermore, these datasets are imbalanced; therefore, we used data augmentation. Three simple operations are applied for data augmentation: rotate 90 degrees, flip left, and flip right. Figure 2 depicts the effect of each operation graphically. The number of images after the augmentation phase is also shown in Table 1.
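As a rough Python sketch of this augmentation step (the directory layout and file naming are hypothetical, and "flip left/right" is interpreted here as horizontal and vertical flips, which is an assumption rather than the authors' exact implementation):

```python
import os
import numpy as np
from PIL import Image

def augment_folder(src_dir: str, dst_dir: str) -> None:
    """Apply the three augmentations (90-degree rotation, horizontal flip,
    vertical flip) to every image in src_dir and save the variants."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        img = np.asarray(Image.open(os.path.join(src_dir, name)))
        stem, _ = os.path.splitext(name)
        variants = {
            "rot90": np.rot90(img),    # rotate by 90 degrees
            "fliplr": np.fliplr(img),  # flip left-right
            "flipud": np.flipud(img),  # flip up-down
        }
        for tag, aug in variants.items():
            Image.fromarray(aug).save(os.path.join(dst_dir, f"{stem}_{tag}.png"))
```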


Table 1. Brief description of selected datasets.


Figure 2. Sample images after the data augmentation step.

Contrast enhancement

The enhancement of an input image is an important step to improve the quality of the original image; based on this step, we obtain a brighter image (41). The primary motivation of this step in this study is to better visualize the COVID-19 positive images compared with the healthy ones. The COVID-19 Radiography Database images have low contrast and poor quality; therefore, we designed a hybrid approach based on the fusion of different filtering outputs. Three filters are used: top-hat and bottom-hat filtering, pixel-value adjustment, and a sharpening filter.

Consider that the COVID-19 database has $n$ images $D \in \mathbb{R}^n$, where every image is denoted by $k_n(x, y)$ and $(x, y) \in \mathbb{R}$. Every image has a size of $N \times M = 512 \times 512$. Assume that $s$ is a structuring element of size 11, $\circ$ is the opening operator, and $\bullet$ is the closing operator; then top-hat and bottom-hat filtering are defined as follows:

$k_{top}(x,y) = k_n(x,y) - \left(k_n(x,y) \circ s\right)$    (1)
$k_{bottom}(x,y) = \left(k_n(x,y) \bullet s\right) - k_n(x,y)$    (2)
$f(x,y) = k_n(x,y) + k_{top}(x,y) - k_{bottom}(x,y)$    (3)

Where $f(x, y)$ is the resultant top-bottom hat filtered image, $k_{top}(x, y)$ is the top-hat filtered image, and $k_{bottom}(x, y)$ is the bottom-hat filtered image, respectively. This image is further refined using a pixel-value adjustment filter, which raises the image's brightness by mapping the input pixel intensities to new values such that, on average, 1% of the data is saturated at the low and high input intensities. The notation $i$ is the intensity value of the image, and gamma ($\gamma$) is a coefficient that determines the shape of the mapping between the input range $(a, b)$ and the output range $(c, d)$.

$A_n(x,y) = \left(\dfrac{i-a}{b-a}\right)^{\gamma}(d-c) + c$    (4)

After that, the resultant image $A_n(x,y)$ is sharpened using the unsharp masking method, which is applied to increase the contrast along the edges. The radius is 2, which specifies the extent of the region around each pixel over which gray levels are smoothed, and the amount is 1, which leads to a greater enhancement of the contrast of the sharpened pixels. Unsharp masking is denoted as:

$g_n(x,y) = A_n(x,y) - k_{smooth}(x,y)$    (5)
$S_{sharp_n}(x,y) = k_n(x,y) + a \times g_n(x,y)$    (6)

where $k_{smooth}(x, y)$ is a smoothed version of $k_n(x, y)$, $S_{sharp_n}(x,y)$ is the unsharp-mask filtered image, and $a$ is a scaling variable that controls the amount of sharpening. Hence, the resultant image is defined as follows:

$R_n(x,y) = f(x,y) + A_n(x,y) + S_{sharp_n}(x,y)$    (7)

Where $R_n(x, y)$ represents the resultant contrast-enhanced image, presented visually in Figure 3.
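For illustration, the following Python/OpenCV sketch implements Equations (1)–(7). The rectangular structuring element, the 1% saturation limits, $\gamma = 1$, and the final min-max rescaling are assumptions where the text leaves details open; this is a sketch, not the authors' exact implementation.

```python
import cv2
import numpy as np

def hybrid_enhance(k: np.ndarray) -> np.ndarray:
    """Hybrid contrast enhancement following Eqs. (1)-(7), grayscale input."""
    s = cv2.getStructuringElement(cv2.MORPH_RECT, (11, 11))  # element of size 11
    k = k.astype(np.float32)

    # Eqs. (1)-(3): top-hat and bottom-hat filtering fused with the original
    k_top = k - cv2.morphologyEx(k, cv2.MORPH_OPEN, s)
    k_bottom = cv2.morphologyEx(k, cv2.MORPH_CLOSE, s) - k
    f = k + k_top - k_bottom

    # Eq. (4): map intensities from [a, b] to [c, d], saturating ~1% of pixels
    # at both ends (similar to MATLAB's imadjust); gamma = 1 is assumed
    a, b = np.percentile(k, (1, 99))
    c, d = 0.0, 255.0
    A = np.clip((k - a) / max(b - a, 1e-6), 0, 1) ** 1.0 * (d - c) + c

    # Eqs. (5)-(6): unsharp masking with radius 2 and amount a = 1
    g = A - cv2.GaussianBlur(A, (0, 0), sigmaX=2)
    S_sharp = k + 1.0 * g

    # Eq. (7): sum the three filter outputs and rescale for display
    R = f + A + S_sharp
    return cv2.normalize(R, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```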


Figure 3. Sample resultant images of hybrid contrast enhancement technique.

EfficientNet deep features

EfficientNet is a deep neural network design and scaling method that uses a compound coefficient to uniformly scale the depth, width, and resolution variables. Unlike common practice, which scales these variables arbitrarily, the EfficientNet scaling approach uses a pre-determined set of scaling coefficients to alter network width, depth, and resolution uniformly (42). This model was trained on 1,000 classes (the ImageNet dataset) and accepts input images of size 224 × 224 × 3. We fine-tune this model by replacing the original fully-connected layer with a new fully-connected layer that matches the target classes. The updated model was trained using a transfer learning strategy on the target datasets. Transfer learning (TL) is the process of reusing a previously learned model for a new task: the knowledge gained from previous work is applied to improve prediction on a related task. The primary goal of TL is to solve the target domain efficiently, and it is an excellent technique when the target-domain dataset is much smaller than the source-domain dataset (43). A domain $D = \{F, P(f)\}$ consists of two parts: a feature space $F$ and a marginal probability distribution $P(f)$, where $F = \{f \mid f_i \in F,\ i = 1, 2, \dots, N\}$ and $N$ is the number of items in the dataset. Typically, two domains are considered distinct if their feature spaces or marginal probability distributions differ. Given a specific domain $D$, a task is represented as $T = \{R, f(\ast)\}$. It also consists of two parts: a label space $R$ and a mapping function $f(\ast)$, where $R = \{\mathfrak{r} \mid \mathfrak{r}_i \in R,\ i = 1, 2, \dots, N\}$ is the label space of the corresponding instances in $D$. The mapping function $f(\ast)$, also written as $f(x) = P(\mathfrak{r} \mid f)$, is a non-linear and implicit function that connects the input items to the predicted decision and is intended to be learned from the provided datasets.

Given an original (source) domain $D_o = \{F_o, P_o(f_o)\}$ with original task $T_o = \{R_o, f_o(\ast)\}$, and a target domain $D_T = \{F_T, P_T(f_T)\}$ with target task $T_T = \{R_T, f_T(\ast)\}$, transfer learning intends to develop a more accurate mapping function $f_T(\ast)$ for the target task $T_T$ by utilizing the transferable knowledge acquired in the original domain $D_o$ and task $T_o$. This contrasts with traditional machine learning and deep learning, where the domain and task are the same in the original and target settings ($D_o = D_T$ and $T_o = T_T$); transfer learning addresses problems where the domain and task of the original and target situations differ (i.e., $D_o \neq D_T$ and/or $T_o \neq T_T$). Hence, deep transfer learning can be expressed as learning a function $f_{oT}(\ast): F_T \rightarrow R_T$ for a transfer learning task based on $[D_o, D_T, T_o, T_T]$. This process is visually illustrated in Figure 4, which shows how the original model weights and parameters are transferred to the updated model, which is subsequently trained on the COVID-19 datasets. After training, deep features of dimension N × 1,280 are extracted from the global average pooling layer.
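A minimal Keras sketch of this setup is given below, assuming TensorFlow; the optimizer, learning rate, and class count are illustrative, not taken from the paper. The 1,000-class head is replaced, all layers remain trainable (no freezing), and the N × 1,280 deep features are read from the global average pooling output.

```python
import tensorflow as tf

NUM_CLASSES = 4  # e.g., the COVID-19 Radiography dataset

# EfficientNetB0 with ImageNet weights; drop the original 1000-class head
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")  # pooled output is 1280-D

# New fully connected layer matching the target classes
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
model = tf.keras.Model(base.input, outputs)

# All layers stay trainable, i.e., the whole network is fine-tuned
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])

# After training, deep features (N x 1280) come from the pooling layer
feature_extractor = tf.keras.Model(base.input, base.output)
```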


Figure 4. Visual process of transfer learning for the training of fine-tuned deep models.

LSTM-CNN features

LSTM is a category of recurrent neural network (RNN) that is good at learning long-term relationships through its gated memory cells (44). Feedforward neural networks have a "low memory" problem, which leads to poor performance on sequential and time-series tasks. To extract characteristics from time-series and sequence data, recurrent models use cyclic links in their hidden layer. Despite this capability, the well-known vanishing gradient problem hinders plain RNNs from learning long-term associations. Input, output, and forget are the three basic gates in any LSTM network. Within this framework, the LSTM learns long-term connections by "forgetting" and "preserving" information, allowing it to maintain a controlled flow of input (45). More precisely, the input gate $J_t$, together with the candidate state $n_t^{\ast}$, regulates the new knowledge stored in the memory state $n_t$ at time $t$. The forget gate $F_t$ regulates which previous knowledge must be removed from or retained in the memory block at time $t-1$, while the output gate $y_t$ determines which information may be used as the memory cell's output. Mathematically, the gates are defined as follows:

$J_t = \sigma\left(M_I x_t + N_I h_{t-1} + b_I\right)$,    (8)
$F_t = \sigma\left(M_g x_t + N_g h_{t-1} + b_g\right)$,    (9)
$n_t^{\ast} = \tanh\left(M_n x_t + N_n h_{t-1} + b_n\right)$,    (10)
$n_t = F_t \odot n_{t-1} + J_t \odot n_t^{\ast}$,    (11)
$y_t = \sigma\left(M_y x_t + N_y h_{t-1} + b_y\right)$,    (12)

Where $x_t$ represents the input, the $M_{\ast}$ and $N_{\ast}$ are weight matrices, the $b_{\ast}$ are bias vectors, $\sigma$ is the sigmoid function, and $\odot$ represents element-wise multiplication. Finally, the hidden state $h_t$, which carries the output of the memory block, is evaluated as:

$h_t = y_t \odot \tanh(n_t)$.    (13)

In our study, we utilize the LSTM together with convolutional layers, called CNN-LSTM. It consists of a convolutional layer with a filter size of 5 and 20 filters, followed by a pooling layer and an LSTM layer with 200 hidden units. Furthermore, a fully connected layer, a Softmax layer, and a classification layer are added. The features are extracted from the LSTM layer, yielding a feature vector of dimension N × 200. Figure 5 illustrates the workflow of the proposed CNN-LSTM.
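The tensor layout between the convolutional and LSTM layers is not fully specified in the text; the Keras sketch below is one plausible reading that treats each row of the pooled feature map as one time step, while keeping the stated configuration (20 filters of size 5, one pooling layer, 200 LSTM hidden units):

```python
import tensorflow as tf

NUM_CLASSES = 4
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 1)),
    # Convolutional layer: 20 filters of size 5, as stated in the text
    tf.keras.layers.Conv2D(20, kernel_size=5, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    # Reshape the 112x112x20 feature maps into a 112-step sequence (assumption)
    tf.keras.layers.Reshape((112, 112 * 20)),
    # LSTM layer with 200 hidden units; its output is the N x 200 feature vector
    tf.keras.layers.LSTM(200, name="lstm_features"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Features are extracted from the LSTM layer after training
feature_extractor = tf.keras.Model(
    model.input, model.get_layer("lstm_features").output)
```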


Figure 5. Proposed CNN-LSTM architecture.

Proposed features fusion

The process of combining information from two or more sources to enrich the description of an object is known as feature fusion. The serial-based features fusion technique is a simple but effective fusion method for combining data from multiple sources into a single matrix without losing any features. Simple serial based fusion is formulated as follows:

Consider two feature vectors $f_1$ and $f_2$ with dimensions N × 1,280 and N × 200, respectively. Then the serially fused vector has dimension N × 1,480, based on the following equation.

$SF_u(v) = \left[\, f_1 \;\; f_2 \,\right]_{N \times (k_1 + k_2)}$    (14)

This process combines all features in one matrix $SF_u(v)$, but analysis of the results shows that several of the combined features contain unrelated information; therefore, we resolve this problem by employing a new formulation called serial based maximum information.

$mx = \mathrm{MAX}\left(SF_u(v)\right)$    (15)
$V_1 = \mathrm{compare}(f_1, mx)$    (16)
$V_2 = \mathrm{compare}(f_2, mx)$    (17)
$\widetilde{SF}_u(v) = \left[\, V_1 \;\; V_2 \,\right]_{N \times (\tilde{k}_1 + \tilde{k}_2)}$    (18)

Where $\widetilde{SF}_u(v)$ is the updated fused vector of dimension N × 980. This fused vector is further refined using the improved moth flame optimization approach.
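The compare(·) operation in Equations (16)–(17) is not fully specified in the text. The NumPy sketch below is one reading, keeping the feature columns of each source whose maximum activation reaches the mean of the column-wise maxima of the fused matrix; the threshold rule is an assumption, not the authors' exact criterion.

```python
import numpy as np

def serial_max_fusion(f1: np.ndarray, f2: np.ndarray) -> np.ndarray:
    """Sketch of serial maximum-information fusion, Eqs. (14)-(18)."""
    # Eq. (14): plain serial fusion, shape N x (k1 + k2)
    fused = np.concatenate([f1, f2], axis=1)

    # Eq. (15): maximum information per fused feature column
    mx = fused.max(axis=0)
    threshold = mx.mean()  # assumed comparison criterion

    # Eqs. (16)-(17): compare each source against the maximum criterion
    v1 = f1[:, f1.max(axis=0) >= threshold]
    v2 = f2[:, f2.max(axis=0) >= threshold]

    # Eq. (18): updated fused vector (N x 980 in the paper's experiments)
    return np.concatenate([v1, v2], axis=1)
```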

Features optimization

In the field of Computer Vision (CV), feature selection is the process of selecting the best subset of features from the original feature vector to improve accuracy and reduce computational time. The dimension of the solution space grows exponentially with the number of features in the data collection; as a result, exhaustive search strategies cannot reach the optimal solution in practice, and feature selection techniques continue to struggle with getting stuck in local optima (46). The idea is that using a subset of features improves classifier performance and enables quicker classification, resulting in an equivalent or even better accuracy rate than when all features are used (47). In this study, we implemented an improved moth flame optimization (IMFO) for best feature selection. We start with a fused feature vector of dimension N × 980, and after applying the IMFO, the size of the resultant vector is N × 751.

In this algorithm, moths and flames are the main concepts: the moths are the candidate solutions, and their positions in the search space are the problem variables. Due to the population-based nature of the MFO algorithm, a set of n moths is employed as search agents in the search space. Flames represent the best n locations of moths discovered so far; each moth searches around a flame and updates it if a better solution is discovered. Consequently, flames are d-dimensional vectors as well. A particular moth updates its location based on the following formulation:

$S(M_m, F_n) = D_m \cdot e^{cr} \cdot \cos(2\pi r) + F_n$    (19)

Where $D_m$ is the Euclidean distance between the $m$th moth and the $n$th flame, $c$ is a coefficient describing the shape of the logarithmic spiral, $M_m$ represents the $m$th moth, $F_n$ represents the $n$th flame, and $r$ is a random number in $[-1, 1]$. A moth's next location is determined with respect to a flame; as a result, a hyper-ellipse may be considered in all dimensions surrounding the flame, and the moth's new location will be contained inside this region. To stress exploitation even more, $r$ is drawn from the range $[k, 1]$, where $k$ decreases linearly from −1 to −2 over the course of the iterations and is referred to as the convergence rate. To increase the likelihood of convergence to a global solution, each moth is required to update its location using just one of the flames. At each iteration, once the flames list has been updated, the flames are sorted according to their fitness values, and the moths then adjust their locations with respect to their assigned flames. To facilitate extensive exploitation of the most promising solutions, the number of flames to be tracked is lowered in proportion to the number of iterations:

$K_{flames} = \mathrm{round}\left(K - l \cdot \dfrac{K-1}{Z}\right)$    (20)

Where $K$ is the maximum number of flames, $l$ is the current iteration number, and $Z$ represents the maximum number of iterations. The selected $K$ flames (features) are normalized, and the maximum values are selected as follows:

$NZ_i = \dfrac{K_i - \mu}{S}$    (21)

Where $K_i$ denotes the selected flames, $\mu$ is the mean value, $S$ denotes the standard deviation, and $NZ_i$ is the normalized feature vector. After that, the maximum features are computed as follows:

$Best = \max\limits_{1 \le i \le \tilde{nr}} \left(NZ_i\right)$    (22)

Where $\tilde{nr}$ represents the maximum number of iterations (100). Finally, the features $K_i$ are compared with the best-selected features through Equation (23). Quadratic SVM is chosen as the fitness function, and its performance is measured through the mean squared error rate (MSER). The final selection is defined as follows:

$FS_i = \begin{cases} \widetilde{FS}(i), & \text{for } Best \ge T \\ \text{Discard}, & \text{elsewhere} \end{cases}$, where $T = 0.4$    (23)
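A simplified NumPy/scikit-learn sketch of this selection step is given below. It reproduces only the normalization, max-value, and thresholding logic of Equations (21)–(23) together with the quadratic-SVM fitness, not the full moth flame search; treating the feature columns themselves as the candidate flames is an assumption.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def imfo_select(features: np.ndarray, labels: np.ndarray,
                T: float = 0.4) -> np.ndarray:
    """Max-value selection step of the improved MFO, Eqs. (21)-(23)."""
    # Eq. (21): z-score normalization of the candidate features
    mu, sd = features.mean(axis=0), features.std(axis=0) + 1e-12
    nz = (features - mu) / sd

    # Eq. (22): best (maximum) normalized response per feature column
    best = nz.max(axis=0)

    # Eq. (23): keep a feature only if its best value reaches T = 0.4
    selected = features[:, best >= T]

    # Fitness: quadratic SVM, scored here by cross-validated accuracy
    # (the paper reports the mean squared error rate, i.e., an error measure)
    fitness = cross_val_score(SVC(kernel="poly", degree=2),
                              selected, labels, cv=10).mean()
    print(f"kept {selected.shape[1]} features, fitness = {fitness:.3f}")
    return selected
```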

Results and analysis

The detailed experimental results of the proposed framework are presented in this section in tabular form and visual graphs. Three datasets were utilized for the experimental process; details of each are given in Section Dataset collection and normalization, and the results for each dataset are presented separately in the subsections below. Each dataset is divided 50:50 with the cross-validation value set to 10. Several classifiers are utilized for the classification comparison, and each classifier's performance is assessed using several measures: sensitivity rate, precision rate, F1-score, accuracy, and computational time. The entire experimental process was run in MATLAB R2021b on a personal desktop computer with 16 GB of RAM and an 8 GB graphics card.
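A scikit-learn sketch of this protocol follows. How the 50:50 split interacts with the 10-fold cross-validation is not spelled out in the text, so running cross-validation on the training half and testing on the held-out half is an assumption, as is macro-averaging the per-class metrics.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.svm import SVC

def evaluate(features: np.ndarray, labels: np.ndarray) -> None:
    """50:50 split with 10-fold cross-validation and a cubic SVM."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.5, stratify=labels, random_state=0)

    clf = SVC(kernel="poly", degree=3)  # cubic SVM = degree-3 polynomial kernel
    preds = cross_val_predict(clf, X_tr, y_tr, cv=10)

    print("accuracy   :", accuracy_score(y_tr, preds))
    print("sensitivity:", recall_score(y_tr, preds, average="macro"))
    print("precision  :", precision_score(y_tr, preds, average="macro"))
    print("F1-score   :", f1_score(y_tr, preds, average="macro"))

    # Final check on the held-out 50% after fitting on the training half
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```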

COVID-19 radiography database results

Proposed fusion

The classification results of the proposed fusion method on the COVID-19 Radiography dataset are presented in Table 2. This table shows that the maximum attained accuracy is 93.4%, obtained by the QSVM classifier. For this classifier, the noted sensitivity rate is 93.25%, the precision rate is 94.12%, the F1-score is 93.68%, the FPR is 0.025, and the AUC is 0.98. These values were also computed for the rest of the classifiers, and based on the numerical values, the performance of QSVM is better than that of the other classifiers. The computed measures of the QSVM classifier can be further verified using the confusion matrix illustrated in Figure 6. The computational time of each classifier was noted for this experiment: the minimum time is 105.98 (s) for the Medium Neural Network classifier, whereas the maximum execution time is 1,122.5 (s). Moreover, a clear picture of the variation in time across the classifiers is shown in Figure 7; from this figure, it is observed that the Medium Neural Network classifier needs less time for execution than the rest of the classifiers.


Table 2. Proposed features fusion method results on COVID-19 Radiography Database.


Figure 6. Confusion matrix of QSVM using proposed fusion method on COVID-19 Radiography Database.


Figure 7. Illustration of computational time for COVID-19 Radiography Database after proposed fusion method.

Proposed IMFO

The proposed optimization results for the COVID-19 Radiography dataset are presented in Table 3. This table shows the best accuracy of 93.0% for the CSVM classifier. For this classifier, the sensitivity rate is 92.82%, the precision rate is 93.00%, the F1-score is 92.90%, the FPR is 0.025, and the AUC is 0.98. These values were likewise calculated for the other classifiers, and based on the statistical figures, CSVM performs better than the remaining classifiers. The scores of the CSVM classifier can be further confirmed using the confusion matrix illustrated in Figure 8. For this experiment, the computing time of each classifier was recorded: the quickest time is 65.55 (s) for the Medium Neural Network classifier, while the highest execution time is 554.32 (s). Additionally, Figure 9 illustrates the variation in time across the classifiers; as shown in this figure, the Medium Neural Network classifier requires less time to execute than the other classifiers.


Table 3. Proposed features optimization results on COVID-19 Radiography Database.


Figure 8. Confusion matrix of CSVM using proposed optimization method on COVID-19 Radiography Database.


Figure 9. Illustration of computational time for COVID-19 Radiography Database after proposed optimization method.

COVID-GAN and COVID-Net mini chest X-ray dataset

Fusion method

The classification results of the proposed fusion method on the COVID-GAN and COVID-Net mini chest X-ray dataset are presented in Table 4. This table shows that the CSVM classifier achieved the best accuracy of 94.2%; its sensitivity is 94.00%, precision rate 94.26%, F1-score 94.12%, FPR 0.03, and AUC 0.99. These values were also generated for the other classifiers, and CSVM outperforms the rest in terms of the statistical numbers. A confusion matrix is also illustrated in Figure 10 to confirm the CSVM performance. Computation times were noted for each classifier: the best time is 30.626 (s) for the Medium Neural Network classifier, whereas the highest execution time is 75.685 (s). Figure 11 depicts the execution time of all selected classifiers and shows that the Medium Neural Network classifier takes the least time.


Table 4. Proposed features fusion method results on COVID-GAN and COVID-net mini chest X-ray dataset.


Figure 10. Confusion matrix of COVID-GAN and COVID-Net mini chest X-ray dataset after fusion.


Figure 11. Illustration of computational time for COVID-GAN and COVID-Net mini chest X-ray dataset after proposed fusion method.

Proposed IMFO

The proposed features optimization results on the COVID-GAN and COVID-Net mini chest X-ray dataset are presented in Table 5. The CSVM classifier attained an accuracy of 94.5%, with a sensitivity of 94.30%, a precision rate of 94.63%, an F1-score of 94.46%, an FPR of 0.03, and an AUC of 0.99. Figure 12 illustrates the confusion matrix of CSVM, confirming the computed values. We also noted the classifiers' computational time during the testing process: the Medium Neural Network classifier has the shortest duration of 11.943 (s), whereas the highest execution time is 36.492 (s). The computational time of each classifier is also plotted in Figure 13.


Table 5. Proposed features optimization method results on COVID-GAN and COVID-Net mini chest X-ray dataset.


Figure 12. Confusion matrix of COVID-GAN and COVID-Net mini chest X-ray dataset using proposed optimization method.


Figure 13. Illustration of computational time for COVID-GAN and COVID-Net mini chest X-ray dataset after proposed optimization method.

Chest X-ray dataset (pneumonia, COVID-19, tuberculosis)

Fusion results

Classification results for the Chest X-Ray (Pneumonia, COVID-19, Tuberculosis) dataset are shown in Table 6. This table shows that the CSVM classifier attained the best accuracy of 98.3%. Among the other calculated measures, the sensitivity rate is 98.32%, the precision rate is 98.32%, the F1-score is 98.32%, the FPR is 0.05, and the AUC is 1. These values were also calculated for the other classifiers, and based on the numerical values, CSVM outperforms the other classifiers. Figure 14 illustrates the confusion matrix of CSVM, used to confirm the calculated values. For each classifier, the execution time was also noted, as plotted in Figure 15. This figure shows that the minimum noted time is 30.022 (s) for the Medium Neural Network classifier, whereas the maximum noted time is 145.99 (s) for the MGSVM.


Table 6. Proposed features fusion method results on chest X-ray dataset.


Figure 14. Confusion matrix of chest X-ray dataset for proposed fusion method.


Figure 15. Illustration of computational time for chest X-ray dataset using the proposed fusion method.

IMFO method

The classification results based on the proposed optimization method are given in Table 7. This table demonstrates that the CSVM classifier attained the best accuracy of 98.5%. For the other measures, the sensitivity rate is 98.50%, the precision value is 98.22%, the F1-score is 98.35%, the FPR is 0.005, and the AUC is 1. These statistics were also generated for the other learners, and based on the statistical results, CSVM beats the other listed classifiers. The CSVM performance can be further confirmed using the confusion matrix illustrated in Figure 16. The execution time of each classifier was also noted: the minimum time is 32.537 (s) for the Medium Neural Network classifier, and the maximum reported time is 75.75 (s) for the Tri-layered Neural Network. The times of all classifiers are also plotted in Figure 17. Overall, it is observed that the proposed optimization method performed well on all selected datasets.


Table 7. Proposed features optimization method results on chest X-ray dataset.


Figure 16. Confusion matrix of chest X-ray dataset for proposed optimization method.


Figure 17. Illustration of computational time for chest X-ray dataset using proposed optimization method.

In the end, a detailed comparison with some recent techniques is presented in Table 8. In this table, several recently published techniques are listed, all of which use deep learning frameworks; the highest recently reported accuracy is 98.1% (50). A COVID-19 diagnosis method using deep learning on chest X-ray images achieved an accuracy of 94.7%. Classification of chest X-ray images using deep learning approaches achieved 96.1% accuracy. Detection of COVID-19 from chest CT images through a joint edge-cloud computing framework achieved 96.4% (48). A deep learning method for automatically diagnosing COVID-19 from chest X-ray images attained 96.6% accuracy (49). Using LSTM with an attention mechanism for COVID-19 detection and nodule segmentation on chest CT scans gained a high accuracy of 98.1% (50). In comparison, our proposed method achieves 98.5%.


Table 8. Comparison of the proposed framework with recent techniques.

Conclusion

In this paper, we proposed an automated framework based on deep learning and an improved optimization algorithm for COVID-19 classification using chest X-ray images. In the proposed framework, contrast enhancement is performed first to improve the quality of the infected region, and data augmentation is then used to increase the number of training samples. Following that, a CNN-LSTM architecture is created and trained with deep transfer learning. In addition, an EfficientNet deep model is fine-tuned, and feature extraction is performed for both developed models. Later, instead of using the original serial-based approach, the proposed fusion approach is used to better combine the information. The analysis of the fused feature vector reveals several redundant features; thus, a new features optimization technique is proposed. The proposed optimization method improves accuracy while also shortening classification time. The work's limitation is the controlled vector size during the fusion process. Furthermore, the optimization technique appears to have removed some important features, which may have resulted in a reduction in final accuracy.

Data availability statement

Publicly available datasets were analyzed in this study. This data can be found here: COVID-19 Radiography (https://www.kaggle.com/datasets/tawsifurrahman/covid19-radiography-database), COVID-GAN and COVID-Net small chest x-ray (https://www.kaggle.com/yash612/covidnet-mini-and-gan-enerated-chest-xray), and Chest X-Ray (Pneumonia, COVID-19, Tuberculosis) (https://www.kaggle.com/datasets/jtiptj/chest-xray-pneumoniacovid19tuberculosis).

Author contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for supporting this study through the Large Groups Project under grant number (RGP.2/16/43).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Wang C, Horby PW, Hayden FG, Gao GF. A novel coronavirus outbreak of global health concern. Lancet. (2020) 395:470–3. doi: 10.1016/S0140-6736(20)30185-9


2. Zahoor S, Lali IU, Khan MA, Javed K, Mehmood W. Breast cancer detection and classification using traditional computer vision techniques: a comprehensive review. Curr Med Imaging. (2020) 16:1187–200. doi: 10.2174/1573405616666200406110547


3. Stoecklin SB, Rolland P, Silue Y, Mailles A, Campese C, Simondon A, et al. First cases of coronavirus disease 2019 (COVID-19) in France: surveillance, investigations and control measures, January 2020. Eurosurveillance. (2020) 25:2000094. doi: 10.2807/1560-7917.ES.2020.25.6.2000094


4. Guan WJ, Ni ZY, Hu Y, Liang WH, Ou CQ, He JX, et al. Clinical characteristics of coronavirus disease 2019 in China. New Engl J Med. (2020) 382:1708–20. doi: 10.1056/NEJMoa2002032


5. Kumar V, Alshazly H, Idris SA, Bourouis S. Evaluating the impact of covid-19 on society, environment, economy, and education. Sustainability. (2021) 13:13642. doi: 10.3390/su132413642


6. Hayakijkosol O, Jaroenram W, Owens L, Elliman J. Reverse transcription polymerase chain reaction (RT-PCR) detection for Australian Cherax reovirus from redclaw crayfish (Cherax quadricarinatus). Aquaculture. (2021) 530:735881. doi: 10.1016/j.aquaculture.2020.735881


7. Khozeimeh F, Sharifrazi D, Izadi NH, Joloudari JH, Shoeibi A, Alizadehsani R, et al. Combining a convolutional neural network with autoencoders to predict the survival chance of COVID-19 patients. Sci Rep. (2021) 11:1–18. doi: 10.1038/s41598-021-93543-8


8. Kumar V, Singh D, Kaur M, Damaševičius R. Overview of current state of research on the application of artificial intelligence techniques for COVID-19. PeerJ Comp Sci. (2021) 7:e564. doi: 10.7717/peerj-cs.564


9. Sharifrazi D, Alizadehsani R, Roshanzamir M, Joloudari JH, Shoeibi A, Jafari M, et al. Fusion of convolution neural network, support vector machine and Sobel filter for accurate detection of COVID-19 patients using X-ray images. Biomed Signal Process Control. (2021) 68:102622. doi: 10.1016/j.bspc.2021.102622


10. Ayoobi N, Sharifrazi D, Alizadehsani R, Shoeibi A, Gorriz JM, Moosaei H, et al. Time series forecasting of new cases and new deaths rate for COVID-19 using deep learning methods. Results Phys. (2021) 27:104495. doi: 10.1016/j.rinp.2021.104495


11. Alizadehsani R, Sharifrazi D, Izadi NH, Joloudari JH, Shoeibi A, Gorriz JM, et al. Uncertainty-aware semi-supervised method using large unlabeled and limited labeled covid-19 data. ACM Trans Multimedia Comput Commun Appl. (2021) 17:1–24. doi: 10.1145/3462635


12. Alyasseri ZA, Al-Betar MA, Doush IA, Awadallah MA, Abasi AK, Makhadmeh SN, et al. Review on COVID-19 diagnosis models based on machine learning and deep learning approaches. Expert Syst. (2022) 39:e12759. doi: 10.1111/exsy.12759


13. Alshazly H, Linse C, Barth E, Martinetz T. Explainable COVID-19 detection using chest CT scans and deep learning. Sensors. (2021) 21:455. doi: 10.3390/s21020455


14. Kini AS, Gopal Reddy AN, Kaur M, Satheesh S, Singh J, Martinetz T, et al. Ensemble deep learning and internet of things-based automated COVID-19 diagnosis framework. Contrast Media Mol Imaging. (2022) 2022:7377502. doi: 10.1155/2022/7377502


15. Alshazly H, Linse C, Abdalla M, Barth E, Martinetz T. COVID-Nets: deep CNN architectures for detecting COVID-19 using chest CT scans. PeerJ Comput Sci. (2021) 7:e655. doi: 10.7717/peerj-cs.655


16. Afifi A, Hafsa NE, Ali MA, Alhumam A, Alsalman S. An ensemble of global and local-attention based convolutional neural networks for COVID-19 diagnosis on chest X-ray images. Symmetry. (2021) 13:113. doi: 10.3390/sym13010113


17. Dansana D, Kumar R, Bhattacharjee A, Hemanth DJ, Gupta D, Khanna A. Early diagnosis of COVID-19-affected patients based on X-ray and computed tomography images using deep learning algorithm. Soft Comput. (2020) 1–9. doi: 10.1007/s00500-020-05275-y


18. Özkaya U, Öztürk S, Barstugan M. Coronavirus (COVID-19) classification using deep features fusion and ranking technique. In: Hassanien AE, Dey N, Elghamrawy S, editors. Big Data Analytics and Artificial Intelligence Against COVID-19: Innovation Vision and Approach, Vol. 78. Cham: Springer (2020). doi: 10.1007/978-3-030-55258-9_17


19. Shankar K, Perumal E. A novel hand-crafted with deep learning features based fusion model for COVID-19 diagnosis and classification using chest X-ray images. Complex Intelligent Syst. (2021) 7:1277–93. doi: 10.1007/s40747-020-00216-6


20. Ragab DA, Attallah O. FUSI-CAD: coronavirus (COVID-19) diagnosis based on the fusion of CNNs and handcrafted features. PeerJ Comput Sci. (2020) 6:e306. doi: 10.7717/peerj-cs.306


21. Khan MA, Arshad H, Nisar W, Javed MY, Sharif M. An integrated design of fuzzy c-means and nca-based multi-properties feature reduction for brain tumor recognition. In: Priya E, Rajinikanth V, editors. Signal and Image Processing Techniques for the Development of Intelligent Healthcare Systems. Singapore: Springer (2021). doi: 10.1007/978-981-15-6141-2


22. Lu X, Duan X, Mao X, Li Y, Zhang X. Feature extraction and fusion using deep convolutional neural networks for face detection. Math Prob Eng. (2017) 2017, 1–18. doi: 10.1155/2017/1376726


23. Kira K, Rendell LA. A practical approach to feature selection. In: Sleeman D, Edwards P, editors. Machine Learning Proceedings 1992. Morgan Kaufmann (1992). p. 249–56. doi: 10.1016/B978-1-55860-247-2.50037-1


24. Ismael AM, Sengür A. Deep learning approaches for COVID-19 detection based on chest X-ray images. Expert Syst Appl. (2021) 164:114054. doi: 10.1016/j.eswa.2020.114054


25. Dey N, Zhang YD, Rajinikanth V, Pugalenthi R, Raja NS. Customized VGG19 architecture for pneumonia detection in chest X-rays. Pattern Recog Lett. (2021) 143:67–74. doi: 10.1016/j.patrec.2020.12.010


26. Abed M, Mohammed KH, Abdulkareem GZ, Begonya M, Salama A, Maashi MS, et al. A comprehensive investigation of machine learning feature extraction and classification methods for automated diagnosis of COVID-19 based on X-ray images. Comput Mater Continua. (2021) 66:3289–310. doi: 10.32604/cmc.2021.012874


27. Kulathilake KA, Abdullah NA, Lachyan AS, Bandara AM, Patel DD, Lai KW. Restoring lesions in low-dose computed tomography images of COVID-19 using deep learning. In: 6th Kuala Lumpur International Conference on Biomedical Engineering 2021. IFMBE Proceedings, Vol. 86. Springer (2022). p. 405–13.


28. Ibrahim AU, Ozsoz M, Serte S, Al-Turjman F, Yakoi PS. Pneumonia classification using deep learning from chest X-ray images during COVID-19. Cogn Comput. (2021) 1–13. doi: 10.1007/s12559-020-09787-5


29. Ketu S, Mishra PK. India perspective: CNN-LSTM hybrid deep learning model-based COVID-19 prediction and current status of medical resource availability. Soft Comput. (2022) 26:645–64. doi: 10.1007/s00500-021-06490-x


30. Nivetha S, Inbarani HH. Neighborhood rough neural network approach for COVID-19 image classification. Neural Process Lett. (2022) 54:1919–41. doi: 10.1007/s11063-021-10712-6


31. Shastri S, Kansal I, Kumar S, Singh K, Popli R, Mansotra V. CheXImageNet: a novel architecture for accurate classification of Covid-19 with chest x-ray digital images using deep convolutional neural networks. Health Technol. (2022) 12:193–204. doi: 10.1007/s12553-021-00630-x


32. Khan E, Rehman MZ, Ahmed F, Alfouzan FA, Alzahrani NM, Ahmad J. Chest X-ray classification for the detection of COVID-19 using deep learning techniques. Sensors. (2022) 22:1211. doi: 10.3390/s22031211


33. Imagawa K, Shiomoto K. Performance change with the number of training data: a case study on the binary classification of COVID-19 chest X-ray by using convolutional neural networks. Comput Biol Med. (2022) 142:105251. doi: 10.1016/j.compbiomed.2022.105251


34. De Falco I, De Pietro G, Sannino G. Classification of Covid-19 chest X-ray images by means of an interpretable evolutionary rule-based approach. Neural Comput Appl. (2022) 1–11. doi: 10.1007/s00521-021-06806-w


35. Sarki R, Ahmed K, Wang H, Zhang Y, Wang K. Automated detection of COVID-19 through convolutional neural network using chest x-ray images. PLoS ONE. (2022) 17:e0262052. doi: 10.1371/journal.pone.0262052


36. Öztürk S, Özkaya U, Barstugan M. Classification of Coronavirus (COVID-19) from X-ray and CT images using shrunken features. Int J Imag Syst Technol. (2021) 31:5–15. doi: 10.1002/ima.22469


37. Al-Zubaidi EA, Mijwil MM. Medical image classification for coronavirus disease (COVID-19) using convolutional neural networks. Iraqi J Sci. (2021) 62:2740–7. doi: 10.24996/ijs.2021.62.8.27


38. Shazia A, Xuan TZ, Chuah JH, Mohafez H, Lai KW. Detection of COVID-19 on chest X-Ray using neural networks. In: Kuala Lumpur International Conference on Biomedical Engineering. Cham: Springer (2022). p. 415–23.


39. Shazia A, Xuan TZ, Chuah JH, Usman J, Qian P, Lai KW. A comparative study of multiple neural network for detection of COVID-19 on chest X-ray. EURASIP J Adv Signal Process. (2021) 2021:1–16. doi: 10.1186/s13634-021-00755-1


40. Joloudari JH, Azizi F, Nodehi I, Nematollahi MA, Kamrannejhad F, Mosavi A, et al. DNN-GFE: a deep neural network model combined with global feature extractor for COVID-19 diagnosis based on CT scan images. EasyChair (2021).


41. Muzammil SR, Maqsood S, Haider S, Damaševičius R. CSID: a novel multimodal image fusion algorithm for enhanced clinical diagnosis. Diagnostics. (2020) 10:904. doi: 10.3390/diagnostics10110904


42. Marques G, Agarwal D, de la Torre Díez I. Automated medical diagnosis of COVID-19 through EfficientNet convolutional neural network. Appl Soft Comput. (2020) 96:106691. doi: 10.1016/j.asoc.2020.106691


43. Pan SJ. Transfer learning. Learning. (2020) 21:1–2.


44. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. (1997) 9:1735–80. doi: 10.1162/neco.1997.9.8.1735


45. Livieris IE, Pintelas E, Pintelas P. A CNN–LSTM model for gold price time-series forecasting. Neural Comput Appl. (2020) 32:17351–60. doi: 10.1007/s00521-020-04867-x


46. Mirjalili S. Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowledge Based Syst. (2015) 89:228–49. doi: 10.1016/j.knosys.2015.07.006


47. Guyon I, Elisseeff A. An introduction to variable and feature selection. J Mach Learn Res. (2003) 3:1157–82.


48. Singh VK, Kolekar MH. Deep learning empowered COVID-19 diagnosis using chest CT scan images for collaborative edge-cloud computing platform. Multimedia Tools Appl. (2022) 81:3–30. doi: 10.1007/s11042-021-11158-7


49. Bhattacharyya A, Bhaik D, Kumar S, Thakur P, Sharma R, Pachori RB. A deep learning based approach for automatic detection of COVID-19 cases using chest X-ray images. Biomed Signal Process Control. (2022) 71:103182. doi: 10.1016/j.bspc.2021.103182


50. Ter-Sarkisov A. One shot model for COVID-19 classification and lesions segmentation in chest CT scans using LSTM with attention mechanism. IEEE Intelligent Syst. (2022) 1–1. doi: 10.1101/2021.02.16.21251754


Keywords: coronavirus, enhancement, deep learning, LSTM, optimization

Citation: Hamza A, Attique Khan M, Wang S-H, Alqahtani A, Alsubai S, Binbusayyis A, Hussein HS, Martinetz TM and Alshazly H (2022) COVID-19 classification using chest X-ray images: A framework of CNN-LSTM and improved max value moth flame optimization. Front. Public Health 10:948205. doi: 10.3389/fpubh.2022.948205

Received: 26 May 2022; Accepted: 01 August 2022;
Published: 30 August 2022.

Edited by:

Khin Wee Lai, University of Malaya, Malaysia

Reviewed by:

Shazia Anis, University of Malaya, Malaysia
Roohallah Alizadehsani, Deakin University, Australia
Saneera Hemantha Kulathilake, Rajarata University of Sri Lanka, Sri Lanka

Copyright © 2022 Hamza, Attique Khan, Wang, Alqahtani, Alsubai, Binbusayyis, Hussein, Martinetz and Alshazly. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Muhammad Attique Khan, attique.khan@hitecuni.edu.pk
