
ORIGINAL RESEARCH article

Front. Neuroinform., 16 January 2023
Volume 16 - 2022 | https://doi.org/10.3389/fninf.2022.1063048

bSRWPSO-FKNN: A boosted PSO with fuzzy K-nearest neighbor classifier for predicting atopic dermatitis disease

Yupeng Li1 Dong Zhao1* Zhangze Xu2 Ali Asghar Heidari3 Huiling Chen2* Xinyu Jiang4,5 Zhifang Liu4,5 Mengmeng Wang4,5 Qiongyan Zhou4 Suling Xu4*
  • 1College of Computer Science and Technology, Changchun Normal University, Changchun, Jilin, China
  • 2College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, China
  • 3School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran, Iran
  • 4Department of Dermatology, The Affiliated Hospital of Medical School, Ningbo University, Ningbo, China
  • 5School of Medicine, Ningbo University, Ningbo, Zhejiang, China

Introduction: Atopic dermatitis (AD) is an allergic disease whose intense itching severely troubles patients. However, the diagnosis of AD depends on clinicians’ subjective judgment, so cases may be missed or misdiagnosed.

Methods: This paper establishes, for the first time, a medical prediction model based on an enhanced particle swarm optimization (SRWPSO) algorithm and the fuzzy K-nearest neighbor (FKNN) classifier, called bSRWPSO-FKNN, and applies it to a dataset of patients with AD. In SRWPSO, the Sobol sequence is introduced into particle swarm optimization (PSO) to distribute the particles of the initial population more uniformly, thereby improving the population’s diversity and its traversal of the search space. In addition, a random replacement strategy and an adaptive weight strategy are added to the population-updating process of PSO to overcome its poor convergence accuracy and its tendency to fall into local optima. The core of bSRWPSO-FKNN is to optimize the classification performance of FKNN through a binary version of SRWPSO.

Results: To establish the scientific significance of this study, this paper first demonstrates the core advantages of SRWPSO over well-known algorithms through benchmark function validation experiments. It then demonstrates the practical medical significance and effectiveness of bSRWPSO-FKNN on nine public datasets and a medical dataset.

Discussion: The 10 times 10-fold cross-validation experiments demonstrate that bSRWPSO-FKNN can identify the key features of AD, including the content of lymphocytes (LY), Cat dander, Milk, Dermatophagoides Pteronyssinus/Farinae, Ragweed, Cod, and Total IgE. Therefore, the established bSRWPSO-FKNN method provides practical assistance in diagnosing AD.

1. Introduction

Atopic dermatitis (AD) is a chronic inflammatory skin disease accompanied by allergic reactions, characterized by itchy eczematous skin lesions and dry skin; it is common in childhood and affects at least 20% of children worldwide (Rehbinder et al., 2020; Asano et al., 2022; Johansson et al., 2022). Because AD is considered the starting point of the atopic march, which progresses to food allergy, asthma, and allergic rhinitis, it is important to recognize AD and intervene early (Spergel, 2021). Many diagnostic criteria for AD exist worldwide, such as the Hanifin and Rajka criteria, the Williams criteria, and the International Study of Asthma and Allergies in Childhood (ISAAC) questionnaire, all of which depend on the subjective judgment of dermatologists (Williams et al., 1994; Williams, 1996). The Williams criteria are the primary basis for diagnosing AD and comprise an itchy skin condition in the last 12 months plus three minor criteria (Williams, 1996; Williams et al., 1996). Because diagnosis relies on clinicians’ extensive experience, some patients are missed or misdiagnosed. Diagnostic sensitivity improves when clinical findings are combined with serology results (Poto et al., 2022). Thus, comprehensive evaluation combining clinical symptoms with serology is gradually attracting attention.

However, exploring such questions relies on vast amounts of relevant data. Medical information systems, which help clinicians provide quicker and more accurate diagnoses, are a major subject of research (Li C. et al., 2022; Liu S. et al., 2022; Zhuang et al., 2022). In this regard, manual analysis is impractical because it is time-consuming, inefficient, and error-prone for such large volumes of data. Consequently, it is essential to establish a machine learning model for studying AD, which will help clinicians and researchers explore the factors and pathogenic features affecting AD more effectively and efficiently. Several machine learning studies have already focused on AD. Gustafson et al. (2017) described a machine learning-based phenotyping algorithm that obtained higher positive predictive values (PPV) than previous low-sensitivity algorithms and demonstrated the utility of natural language processing (NLP) and machine learning in electronic health record (EHR)-based phenotyping. Suhendra et al. (2019) proposed a machine learning algorithm that successfully combined a multi-class SVM classifier to classify and predict AD severity based on skin color, texture, and redness with an overall accuracy of about 0.86. Maintz et al. (2021) analyzed the association of 130 factors with AD severity based on a machine learning gradient boosting approach, cross-validated tuning, and multinomial logistic regression, demonstrating that the associations identified among AD patients contribute to a deeper understanding, prevention, and treatment of AD.

Guimarães et al. (2020) established a fully automated method based on a convolutional neural network (CNN) combined with multiphoton tomography (MPT) imaging to predict AD morbidity successfully. Li X. et al. (2021) used three machine learning models for AI-assisted AD diagnosis and subclassification of AD severity, extracting features from volumetric vascular structures in 3D Raster Scanning Photoacoustic Mesoscopy (RSOM) images together with clinical information. Jiang et al. (2022) developed a precise and automatic machine learning classifier based on transcriptomic and microbiota data to predict the risk of AD; this method accurately distinguished 161 subjects with AD from healthy individuals. Holm et al. (2021) developed two machine learning models to predict AD and explore the relationship between various immune markers in the serum of AD patients and AD disease severity based on clinically obtained biomarkers. Clayton et al. (2021) conducted transcriptome profiling of dermatological biopsies for AD; they performed cross-validation across different skin inflammation conditions and disease stages using co-expression clustering and machine learning tools, ultimately revealing the impact of keratin-forming cell programming on skin inflammation and suggesting that perturbation of uniaxial immune signaling alone may not be sufficient to resolve keratin-forming cell immunophenotype abnormalities. Berna et al. (2021) constructed a machine learning framework for exploring the association between AD pathogenesis and low-frequency, rare alleles. However, because of the variety of factors that influence the physiological status of AD, and although the above scholars have conducted a series of explorations into the prevention, diagnosis, and treatment of AD, the existing studies remain inadequate.

Therefore, to further explore the key factors affecting the physiological condition of AD, we propose a novel and effective feature selection method, bSRWPSO-FKNN, by combining a swarm intelligence optimization algorithm with machine learning techniques. To make the feature selection performance of the combination of particle swarm optimization (PSO) and the FKNN more outstanding, we first enhance PSO itself. Thus, an improved variant of PSO combining Sobol sequence population initialization (SOB), a random replacement strategy (RRS), and an adaptive weight strategy (AWS), named SRWPSO, is proposed for the first time. In SRWPSO, this study exploits the uniform distribution of low-discrepancy sequences through the SOB, which enhances the diversity of the initial population and the traversal of the population space, making it easier for PSO to find good particle positions at the beginning. The RRS and AWS are also introduced into PSO, where they cooperate to overcome PSO’s poor convergence ability and its tendency to fall into local optima. Moreover, the comprehensive performance of SRWPSO is demonstrated on the 30 benchmark functions of CEC 2014 through mechanism combination verification experiments, quality analysis experiments, comparison experiments with traditional algorithms, comparison experiments with famous variants, and comparison experiments with new peer variants. The benchmark function validation experiments show that SRWPSO, under the action of the three enhancement strategies, has the best all-around performance among the well-known algorithms involved in the comparison. Then, to apply SRWPSO to feature selection, a binary version of SRWPSO, named bSRWPSO, is proposed. Finally, this paper combines bSRWPSO with FKNN to propose the bSRWPSO-FKNN model.

Furthermore, to verify the feature selection performance of the model, this article first uses nine public UCI datasets to compare bSRWPSO with 10 other binary versions of algorithms, all combined with FKNN, through 10-fold cross-validation experiments. It then sets up a series of 10-fold cross-validation comparison experiments on a medical dataset, including comparisons of bSRWPSO combined with five classifiers, comparisons of bSRWPSO-FKNN with other well-known classification models, and comparisons of 11 FKNN models combined with swarm intelligence algorithms. We analyze the experimental results and demonstrate that bSRWPSO-FKNN has a significant core advantage over all the methods involved in the comparison experiments in terms of the following evaluation indicators: accuracy, sensitivity, Matthews correlation coefficient (MCC), and F-measure. Finally, based on bSRWPSO-FKNN and the medical dataset (AD), the key features affecting AD are extracted through 10 times 10-fold cross-validation experiments, mainly including the content of lymphocytes (LY), Cat dander, Milk, Dermatophagoides Pteronyssinus/Farinae, Ragweed, Cod, and Total IgE. The correctness and validity of the experimental results are also verified in the context of clinical medical practice. The main contributions of this study are summarized below.

1. An improved variant is proposed based on the PSO, named SRWPSO, which has stronger convergence in global optimization tasks.

2. A binary algorithm is proposed for solving discrete problems, named bSRWPSO.

3. A novel and efficient medical prediction method is proposed by combining bSRWPSO and FKNN, named bSRWPSO-FKNN.

4. The bSRWPSO-FKNN is successfully applied to AD prediction and provides a scientific approach to diagnosing AD and other disorders.

The rest of the paper is structured as follows. Section 2 describes the main work related to this article. Section 3 introduces the operating principle of the original PSO. Section 4 presents the improvement process of SRWPSO. Section 5 describes the proposed bSRWPSO-FKNN. Section 6 sets up a series of benchmark function experiments to verify the advantages of SRWPSO. Section 7 sets up a series of feature selection experiments for bSRWPSO-FKNN and validates the potential of the method through 10 times 10-fold cross-validation experiments. Finally, section 8 summarizes the paper and outlines future work.

2. Related works

In recent years, feature selection technology based on swarm intelligence algorithms and machine learning techniques has gained wide attention in the field of medical diagnosis. Furthermore, many excellent machine learning methods have also been developed and applied to link diseases with various factors (El-Kenawy et al., 2020; Liu et al., 2020; Houssein et al., 2021; Hu et al., 2022a; Liu S. et al., 2021; Li Y. et al., 2022). For example, Hu et al. (2022b) presented a predictive framework based on an improved binary Harris hawk optimization (HHO) algorithm combined with a kernel extreme learning machine (KELM), which provides adequate technical support for early and accurate assessment of COVID-19 and differentiation of disease severity. Hu et al. (2022c) proposed a diagnostic model based on an improved binary mutation quantum grey wolf optimizer (MQGWO) and the FKNN techniques. They validated the model for hypoalbuminemia by predicting trends in serum albumin levels.

Liu et al. (2020) used the suggested COSCA method to optimize the two critical parameters of the SVM and thereby proposed a medical model, named COSCA-SVM, that can automatically predict cervical hyperextension injury. Wu S. et al. (2021) combined an improved variant of the sine cosine algorithm (LSCA) with the FKNN technique to propose a medical predictive model, named LSCA-FKNN, and successfully validated its effectiveness on three medical datasets and lupus nephritis. Based on the proposed dispersed foraging sine cosine algorithm (DFSCA) and the KELM, Xia et al. (2022) established a new machine learning model called DFSCA-KELM; the medical diagnostic significance of the model was successfully confirmed on six public datasets from the UCI library and two real medical cases. Yang X. et al. (2022) proposed a feature selection framework called BSWEGWO-KELM and successfully verified the framework’s effectiveness by analyzing 1,940 records from 178 HD patients. Ye H. et al. (2021) proposed a predictive model that utilizes the HHO to optimize the FKNN, called HHO-FKNN, and successfully used it to distinguish the severity of COVID-19, one of the most difficult cases in medicine (Li H. et al., 2021). Zuo et al. (2013) proposed an effective and efficient diagnostic system for Parkinson’s disease (PD) based on a particle swarm optimization (PSO)-enhanced FKNN, which provides strong technical support for the diagnosis of PD.

Optimization methods are among the oldest approaches for quickly finding feasible solutions, either by using deterministic and gradient information (Cao et al., 2020b, 2021a,b) or without it (the metaheuristic class). As an emerging evolutionary computing technique, swarm intelligence algorithms have attracted the attention of more and more researchers. As the problems to be solved have grown more complex, many different swarm intelligence optimization algorithms have gradually emerged to suit different problems. Examples include ant colony optimization based on continuous optimization (ACOR) (Dorigo, 1992; Dorigo and Caro, 1999; Socha and Dorigo, 2008), the particle swarm optimizer (PSO) (Cao et al., 2020a), differential evolution (DE) (Storn and Price, 1997), the sine cosine algorithm (SCA) (Mirjalili, 2016), grey wolf optimization (GWO) (Mirjalili et al., 2014), hunger games search (HGS) (Yang et al., 2021), Harris hawks optimization (HHO) (Heidari et al., 2019b), the slime mould algorithm (SMA) (Li S. et al., 2020), the Runge Kutta optimizer (RUN) (Ahmadianfar et al., 2021), weighted mean of vectors (INFO) (Ahmadianfar et al., 2022), the colony predation algorithm (CPA) (Tu et al., 2021b), the whale optimization algorithm (WOA) (Mirjalili and Lewis, 2016), the bat-inspired algorithm (BA) (Yang, 2010), moth-flame optimization (MFO) (Mirjalili, 2015), wind-driven optimization (WDO) (Bayraktar et al., 2010), and so on. As time has progressed, the drawbacks of traditional swarm intelligence algorithms have also gradually emerged as problems change, mainly slow convergence speed and low convergence accuracy. Therefore, many scholars have proposed a series of optimization variants based on the traditional algorithms. Examples include hybridizing SCA with DE (SCADE) (Nenavath and Jatoth, 2018), chaotic BA (CBA) (Adarsh et al., 2016), modified SCA (m_SCA) (Qu et al., 2018), chaotic random spare ACO (RCACO) (Dorigo, 1992; Dorigo and Caro, 1999; Zhao et al., 2021), ACO with Cauchy and greedy Levy mutations (CLACO) (Dorigo, 1992; Dorigo and Caro, 1999; Liu L. et al., 2021), hybridizing SCA with PSO (SCA_PSO) (Nenavath et al., 2018), double adaptive random spare reinforced WOA (RDWOA) (Chen et al., 2019), boosted GWO (OBLGWO) (Heidari et al., 2019a), fuzzy self-tuning PSO (FSTPSO) (Nobile et al., 2018), and so on. Furthermore, they have been well applied in many fields, such as resource allocation (Deng et al., 2022a), feature selection (Hu et al., 2022a; Liu Y. et al., 2022), complex optimization problems (Deng et al., 2022b), robust optimization (He et al., 2019, 2020), fault diagnosis (Yu et al., 2021), scheduling problems (Gao et al., 2020; Han et al., 2021; Wang et al., 2022), medical diagnosis (Chen et al., 2016; Wang et al., 2017), multi-objective problems (Hua et al., 2021; Deng et al., 2022d), solar cell parameter identification (Ye X. et al., 2021), expensive optimization problems (Li J.-Y. et al., 2020; Wu S.-H. et al., 2021), gate resource allocation (Deng et al., 2020a, 2022b), and airport taxiway planning (Deng et al., 2022c).

Inspired by the foraging behavior of bird flocks, Kennedy and Eberhart (1995) proposed PSO, a stochastic search algorithm based on group collaboration developed by simulating the foraging behavior of bird flocks in 1995. Since then, many scholars have researched and developed various variants of PSO for different problems. Zhou Q. et al. (2021) proposed a human-knowledge-integrated particle swarm optimization (Hi-PSO) scheme to globally optimize the design of the hydraulic-electromagnetic energy-harvesting shock absorber (HESA) for road vehicles. Nagra et al. (2019) put forward a mixed population algorithm (GSADMSPSO) that combines dynamic multi-swarm PSO (DMSPSO) and a gravitational search algorithm. Wang et al. (2021) proposed a dynamic modified chaotic PSO algorithm (DMO). Tu et al. (2020) proposed a novel quantum-inspired PSO (MQPSO) algorithm for electromagnetic applications. Zhen et al. (2020) proposed a hybrid optimization method (WPA-PSO) based on the wolf pack algorithm (WPA) and PSO and showed that it has obvious advantages over a single algorithm in estimating and predicting the parameters of software reliability models. The improved PSO algorithms above show stronger capability for solving problems in one or several specific fields. However, there is no free lunch (Wolpert and Macready, 1997). In other words, the above methods gain enhancements on some problems while exposing drawbacks on others. Based on the above studies, we can conclude that PSO is an excellent swarm intelligence optimization algorithm, but there is still much room for improvement. Therefore, this paper proposes an improved version of PSO (SRWPSO) and uses it to help the classifier obtain better results in feature selection experiments.

3. An overview of PSO

During the food search, PSO evaluates the fitness value of each individual at a location with a special evaluation function and uses this value to characterize the likelihood that the searching individual will find food there. Theoretically, the lower the evaluation value, the better the location. In addition, PSO introduces a memory mechanism for each searching individual to record that individual’s current optimal position. The best position among all individuals in the flock then determines the best foraging point for the whole flock, which is the global optimal position for the entire solution process. The PSO model is described in the section below.

Before updating the particle population, the PSO initializes a random population space X, as shown in Eq. 1.

$$X_{mn}=\begin{pmatrix} X_{1,1} & X_{1,2} & X_{1,3} & \cdots & X_{1,n}\\ X_{2,1} & X_{2,2} & X_{2,3} & \cdots & X_{2,n}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ X_{m,1} & X_{m,2} & X_{m,3} & \cdots & X_{m,n} \end{pmatrix} \qquad (1)$$

where Xmn represents an initial population space, m represents the number of individuals in the population, and n represents the number of dimensions of each individual.

For each particle, the current position is a potential solution to the optimization problem, and its fitness value is obtained by a special evaluation function. This value is then compared with the fitness value recorded for that individual; if it is smaller than the previous value, the record is replaced and the individual’s optimal position is updated. In each search process, the optimal position of each particle is recorded by pB, as shown in Eq. 2.

$$pB_i=\left(pB_{i,1},\,pB_{i,2},\,pB_{i,3},\,\ldots,\,pB_{i,dim}\right) \qquad (2)$$

where pBi records the best foraging position found by the ith particle in the current population, and dim indicates that each individual has dim dimensions.

The updating method is shown in Eq. 3. In the equation, Xi(t + 1) represents the position of the individual after the current update process, pBi(t + 1) represents the best position obtained by the current individual after the t + 1th update, and f() represents the evaluation method for calculating the fitness value of each individual.

$$pB_i(t+1)=\begin{cases} X_i(t+1), & \text{if } f(X_i(t+1))<f(pB_i(t))\\ pB_i(t), & \text{otherwise} \end{cases} \qquad (3)$$

For the whole particle population, the current search position of all particles becomes one of the candidates for the global optimal solution. The PSO will use the whole update process of the population to find the only global optimal target position and record it with Eq. 4, which is updated in the way shown in Eq. 5.

$$gBest=\left(gBest_1,\,gBest_2,\,gBest_3,\,\ldots,\,gBest_{dim}\right) \qquad (4)$$
$$gBest=\begin{cases} X_i(t+1), & \text{if } f(X_i(t+1))<f(gBest)\\ gBest, & \text{otherwise} \end{cases} \qquad (5)$$

where gBest indicates the global optimal position.

Of course, the key role of the PSO in updating the population of individuals is the movement vector of each particle, as shown in Eq. 6. Based on this vector, PSO can control the update direction and movement step of each particle, as represented by Eq. 7.

$$V_i=\left(V_{i,1},\,V_{i,2},\,V_{i,3},\,\ldots,\,V_{i,dim}\right) \qquad (6)$$
$$V_{i,j}(t+1)=V_{i,j}(t)+c_1\cdot rand\cdot\left(pB_{i,j}(t)-X_{i,j}(t)\right)+c_2\cdot rand\cdot\left(gBest_j-X_{i,j}(t)\right) \qquad (7)$$

where c1 and c2 are learning factors controlling the movement of the particles toward pB and gBest, respectively, and rand is a random number in [0, 1]. To keep the particles moving within certain limits for a better search, PSO constrains the velocity vector to [−Vmax, Vmax]. Velocities exceeding these bounds are handled as shown in Eq. 8.

$$V_{i,j}=\begin{cases} V_{max}, & V_{i,j}>V_{max}\\ -V_{max}, & V_{i,j}<-V_{max} \end{cases} \qquad (8)$$

Finally, the update formula for individuals is shown in Eq. 9.

$$X_{i,j}(t+1)=X_{i,j}(t)+V_{i,j}(t+1) \qquad (9)$$

In summary, the workflow of the traditional PSO is shown in Algorithm 1 and Figure 1.

Algorithm 1. Pseudocode for the PSO

Input: The fitness function F(x), maximum evaluation number (MaxFEs), population size (N), dimension (dim)
Output: the best location (gBest)

Initialize a random population X
Initialize the parameters: FEs, t, Vmax, c1, c2
Initialize the velocity vector: V = zeros(N, dim)
Initialize the optimal position and grade of the current individual: pB = zeros(N, dim), pB_score
Initialize position vector and score for the best location: gBest, gBest_score
While (FEs < MaxFEs)
    For i = 1 : size(X, 1)
        Keep each particle in the search space
        Calculate the fitness value for every search particle
        FEs = FEs + 1
        Update the locations and scores of gBest and pBi
    End for
    For i = 1 : size(X, 1)
        For j = 1 : size(X, 2)
            Update the velocity vector Vi,j by Eq. 7 and Eq. 8
            Update the location of particles by Eq. 9
        End for
    End for
    t = t + 1
End while
Return gBest


Figure 1. The workflow diagram of the PSO.

In summary, the time complexity of the traditional PSO is easy to determine and is mainly affected by initialization, population updating, and fitness value calculation. Population initialization is the main component of the initialization phase, with complexity O(Initializing) = O(N × dim). The population updating phase has complexity O(Updating) = O(N × dim), and the fitness value calculation phase has complexity O(Calculating) = O(N × dim). Thus, O(PSO) = O(N × dim) + O(T × (N × dim)) = O((N × dim) × (T + 1)). Here, N denotes the population size, dim denotes the number of dimensions of each particle, and T = MaxFEs/M denotes the number of iterations, determined by the total number of evaluations (MaxFEs) and the number of evaluations (M) performed in each iteration.
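For readers who prefer code to pseudocode, the following is a minimal NumPy sketch of the update cycle in Eqs. 1–9. It is an illustrative reimplementation, not the authors’ code; the parameter values (c1 = c2 = 2, Vmax = 6) and the sphere test function are assumptions chosen only for the example.

```python
import numpy as np

def pso(fitness, lb, ub, N=30, dim=30, max_fes=10000, c1=2.0, c2=2.0, v_max=6.0):
    """A minimal PSO loop following Eqs. 1-9 (illustrative parameter values)."""
    X = lb + np.random.rand(N, dim) * (ub - lb)           # Eq. 1: random initial population
    V = np.zeros((N, dim))                                # velocity vectors, Eq. 6
    pB = X.copy()                                         # personal best positions, Eq. 2
    pB_score = np.array([fitness(x) for x in X])          # personal best fitness values
    fes = N
    g = np.argmin(pB_score)
    gBest, gBest_score = pB[g].copy(), pB_score[g]        # global best, Eq. 4

    while fes < max_fes:
        r1, r2 = np.random.rand(N, dim), np.random.rand(N, dim)
        V = V + c1 * r1 * (pB - X) + c2 * r2 * (gBest - X)   # Eq. 7
        V = np.clip(V, -v_max, v_max)                        # Eq. 8: velocity clamping
        X = np.clip(X + V, lb, ub)                           # Eq. 9, kept inside the bounds
        for i in range(N):
            f = fitness(X[i]); fes += 1
            if f < pB_score[i]:                              # Eq. 3: update personal best
                pB[i], pB_score[i] = X[i].copy(), f
                if f < gBest_score:                          # Eq. 5: update global best
                    gBest, gBest_score = X[i].copy(), f
    return gBest, gBest_score

# Example: minimize the sphere function in 10 dimensions
best, score = pso(lambda x: np.sum(x**2), lb=-100.0, ub=100.0, dim=10)
```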

4. The proposed SRWPSO

4.1. Sobol sequence

Based on studies related to metaheuristic algorithms, it can be found that the distribution of the initial population affects the convergence performance of metaheuristic algorithms to some extent (Rahnamayan et al., 2007; Kazimipour et al., 2014; Dokeroglu et al., 2019). Therefore, this study adopts a more uniformly distributed low-discrepancy random sequence (the Sobol sequence) instead of the traditional pseudo-random method, attempting to improve the diversity of the population and the algorithm’s traversal of the population space through low-discrepancy sample points, thus enhancing the efficiency of the algorithm in finding the global optimal solution.

In addition, many scholars have conducted related research on population initialization. For example, Yang X. et al. (2022) used a sinusoidal initialization strategy (SS) to initialize the population of the GWO algorithm and successfully enhanced the search capability of traditional GWO. Qi et al. (2022a) combined the Levy flight strategy with the traditional initialization method, proposed a more effective Levy flight initialization method, and successfully used it to improve the WOA. Arora and Anand (2019) used the Circle chaos method to initialize the population and improve the Grasshopper optimization algorithm (GOA). The initialization steps in this study are as follows.

Step 1: The initialized population space takes a range of values [lb, ub]. lb denotes the lower bound of the population’s space, and ub denotes the upper bound of the population space.

Step 2: The Sobol sequence generates sample points Si with low-discrepancy properties in [0, 1].

Step 3: The initialization method is defined as Eq. 10.

$$X_i = lb + S_i \times (ub - lb) \qquad (10)$$

where Xi denotes the i-th particle in the population, Si is the i-th Sobol sample point, and i ∈ [1, N].

Step 4: Repeat Step 3 N times according to the population size N.
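The four steps above can be realized, for example, with SciPy’s quasi-Monte Carlo module. The sketch below only illustrates Eq. 10; the use of scipy.stats.qmc.Sobol and the scrambling option are implementation assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.stats import qmc

def sobol_init(N, dim, lb, ub, seed=None):
    """Steps 1-4: map Sobol sample points in [0, 1] to the search range [lb, ub] (Eq. 10)."""
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)   # low-discrepancy sequence generator
    S = sampler.random(N)                                  # Step 2: N points in [0, 1]^dim
    return lb + S * (ub - lb)                              # Step 3 repeated N times (Step 4)

X0 = sobol_init(N=30, dim=10, lb=-100.0, ub=100.0, seed=0)
```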

4.2. Random replacement strategy

To develop the population in a better direction, many scholars have tried to enhance the ability of traditional swarm intelligence algorithms for population updating by various methods. For example, the random replacement strategy has been effectively used in the literature (Gupta and Deep, 2018; Chen et al., 2019; Zhao et al., 2021). This strategy enriches the diversity of the population of individuals by replacing the position vector in the j-th dimension of the current individual with the position vector in the same dimension of the current swarm optimal individual. Thus, it improves the chance of exploiting the optimal individual.

Inspired by the method, this paper introduces the random replacement strategy into PSO, with the difference that this study transforms the object being replaced. In this improvement process, we combine the characteristic of PSO to record the current best position of each particle and achieve the improvement of the traditional PSO by replacing the position vector on the j-th dimension of the current best position of each particle with the position vector on the j-th dimension of the best individual of the population, as shown in Eq. 11.

$$pB_{i,j} = gBest_j \qquad (11)$$

During the search process, as the current optimal positions recorded by the algorithm approach the global best position, we cannot exclude the possibility that they already contain excellent position components in some dimensions. Therefore, a probability parameter is introduced into the replacement strategy, as shown in Eq. 12.

$$pB_{i,j}(t+1)=\begin{cases} gBest_j, & a < C\\ pB_{i,j}, & \text{otherwise} \end{cases} \qquad (12)$$
$$C = \tan\left(\pi\left(rand-0.5\right)\right) \qquad (13)$$
$$a = 1 - FEs/MaxFEs \qquad (14)$$

where C denotes a Cauchy random number, and a is a decay factor that decays linearly from 1 to 0 as the number of evaluations increases.
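A compact sketch of Eqs. 11–14 is given below for clarity. It assumes pB and gBest are NumPy arrays as in the earlier PSO sketch and is only one possible reading of the strategy, not the authors’ code.

```python
import numpy as np

def random_replacement(pB, gBest, fes, max_fes):
    """Eqs. 11-14: dimension-wise replacement of personal bests with gBest components."""
    N, dim = pB.shape
    a = 1.0 - fes / max_fes                                # Eq. 14: linearly decaying factor
    for i in range(N):
        for j in range(dim):
            C = np.tan(np.pi * (np.random.rand() - 0.5))   # Eq. 13: Cauchy random number
            if a < C:                                      # Eq. 12: probabilistic replacement
                pB[i, j] = gBest[j]                        # Eq. 11
    return pB
```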

4.3. Adaptive weight strategy

From the perspective of convergence speed and accuracy, the traditional PSO is easily trapped in local optima and lacks the ability to escape them in the early and middle stages of the updating process. To remedy this deficiency, this paper introduces the adaptive weight ω into the velocity vector of the traditional PSO. The purpose is to improve the diversity of individuals in the population by increasing the perturbation capacity of the velocity vector, which helps the particles explore and exploit the global optimum better, as shown in Eq. 15.

$$\omega = \left(1 - \frac{FEs}{MaxFEs}\right)^{\beta} \qquad (15)$$
$$\beta = 1 - C_1 \times S / MaxFEs \qquad (16)$$

where β stands for a perturbation parameter under the control of C1 and S, giving the possibility of jumping out of the linearly decreasing trajectory when ω decreases linearly from 1 to 0. C1, like C, denotes a Cauchy random number. S denotes an adaptive parameter with an initial value of 0.01, which is updated, as shown in Eq. 17.

$$S=\begin{cases} S/2, & \text{if } gBest \text{ updated}\\ S+1, & \text{otherwise} \end{cases} \qquad (17)$$

Therefore, the update of the velocity vector after the introduction of the adaptive weight ω can be expressed as Eq. 18.

$$V_{i,j}(t+1)=\omega V_{i,j}(t)+c_1\cdot rand\cdot\left(pB_{i,j}(t)-X_{i,j}(t)\right)+c_2\cdot rand\cdot\left(gBest_j-X_{i,j}(t)\right) \qquad (18)$$
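The adaptive weight and the modified velocity update of Eqs. 15–18 can be sketched as follows; the helper function names and the values of c1 and c2 are illustrative assumptions rather than the authors’ implementation.

```python
import numpy as np

def adaptive_weight(fes, max_fes, S):
    """Eqs. 15-16: inertia weight with a Cauchy-perturbed exponent."""
    C1 = np.tan(np.pi * (np.random.rand() - 0.5))   # Cauchy random number
    beta = 1.0 - C1 * S / max_fes                   # Eq. 16
    return (1.0 - fes / max_fes) ** beta            # Eq. 15

def update_S(S, gbest_updated):
    """Eq. 17: halve S when gBest improves, otherwise increment it."""
    return S / 2.0 if gbest_updated else S + 1.0

def velocity_update(V, X, pB, gBest, omega, c1=2.0, c2=2.0):
    """Eq. 18: velocity update with the adaptive weight omega (c1, c2 are assumed values)."""
    r1, r2 = np.random.rand(*V.shape), np.random.rand(*V.shape)
    return omega * V + c1 * r1 * (pB - X) + c2 * r2 * (gBest - X)
```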

4.4. Implementation of SRWPSO

In order to improve the overall performance of PSO, this paper combines PSO with the three optimization strategies introduced above for the first time and proposes an enhanced PSO named SRWPSO. First, this study introduces the Sobol sequence into PSO to initialize the particle population, enhancing the algorithm’s traversal of the population space by improving the overall quality of the initial population. Next, to improve the chance of moving toward the global optimal position, this study introduces a random replacement strategy based on the current best position of each particle. Finally, an adaptive weight strategy is introduced to increase the perturbation capacity of the particle population by enhancing the scalability of the velocity vector, thereby improving the algorithm’s ability to escape local traps during the search. The specific framework of the enhanced SRWPSO is shown in Algorithm 2 and Figure 2.

Algorithm 2. Pseudocode for the SRWPSO

Input: The fitness function F(x), maximum evaluation number (MaxFEs), population size (N), dimension (dim)
Output: the best location (gBest)

Initialize a random population X by Eq. 10
Initialize the parameters: FEs, t, Vmax, c1, c2, S
Initialize the velocity vector: V = zeros(N, dim)
Initialize the optimal position and grade of the current individual: pB = zeros(N, dim), pB_score
Initialize position vector and score for the best location: gBest, gBest_score
For i = 1 : size(X, 1)
    Keep each particle in the search space
    Calculate the fitness value for every search particle
    FEs = FEs + 1
    Update the locations and scores of gBest and pBi
End for
While (FEs < MaxFEs)
    For i = 1 : size(X, 1)
        Update the position of particles by Eq. 12
        Keep each particle in the search space
        Calculate the fitness value for every search particle
        FEs = FEs + 1
        Update the locations and scores of gBest and pBi
    End for
    Update the adaptive weight ω by Eq. 15
    For i = 1 : size(X, 1)
        For j = 1 : size(X, 2)
            Update the velocity vector Vi,j by Eq. 18
            Update the location of particles by Eq. 9
        End for
    End for
    For i = 1 : size(X, 1)
        Keep each particle in the search space
        Calculate the fitness value for every search particle
        FEs = FEs + 1
        Update the locations and scores of gBest and pBi
        Update the adaptive factor S by Eq. 17
    End for
    t = t + 1
End while
Return gBest


Figure 2. The workflow diagram of the SRWPSO.

Analyzing the above workflow, we can see that the complexity of SRWPSO is mainly determined by the population size (N), the dimension size (dim), and the maximum number of evaluations (MaxFEs). If the fitness value is calculated M times in one iteration, the number of iterations is T = MaxFEs/M. The overall time complexity is O(SRWPSO) = O(Sobol initialization) + O(assessment and selection of initialization) + O(random replacement strategy) + O(adaptive weight strategy). The complexity of the Sobol initialization is O(N × dim). The complexity of the assessment and selection of the initialization is O(N × dim). The complexity of the random replacement strategy is O(N × dim + 2 × N × dim). The complexity of the adaptive weight strategy is O(2 × 2 × N × dim). In conclusion, O(SRWPSO) = O(2 × N × dim) + T × (O(4 × N × dim) + O(3 × N × dim)) = O(N × dim + T × (7 × N × dim)).

5. The proposed bSRWPSO-FKNN

5.1. Binary conversion method

It is well known that feature selection is a binary-based discretization problem. However, the SRWPSO in this paper is proposed based on a continuous problem. Therefore, in order to make the SRWPSO applicable to the feature selection experiments, this subsection provides a binary conversion method suitable for the SRWPSO for converting from the continuous problem to the feature selection problem and finally proposes a novel discrete binary version of the SRWPSO, named bSRWPSO. The following is a partial description of the binary conversion process of the SRWPSO.

(1) Initialize the problem domain as [0, 1]. Each dimension of each individual represents an attribute of the problem, and each feature takes a value between 0 and 1.

(2) Discretize the continuous problem. As shown in Eq. 19, the obtained feature values are transformed into 0 or 1 by the V-transformation equation, indicating whether the feature is selected: 1 indicates that the feature is selected and 0 indicates that it is not.

$$X_d(t+1)=\begin{cases} \neg X_d(t), & V(X_d(t)) \geq r\\ X_d(t), & \text{otherwise} \end{cases} \qquad (19)$$

where r is a random number from 0 to 1, Xd denotes the binary transformed position of the search agent, and V(⋅) is a V-shaped discretization equation, as shown in Eq. 20.

$$V(x) = \left|\tanh(x)\right| \qquad (20)$$
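A possible realization of the V-shaped conversion in Eqs. 19–20 is sketched below. Because the printed equation does not fully specify the update rule, the bit-flip interpretation used here is an assumption based on common practice with V-shaped transfer functions.

```python
import numpy as np

def v_transfer(x):
    """Eq. 20: V-shaped transfer function."""
    return np.abs(np.tanh(x))

def binarize(X_continuous, X_binary):
    """Eq. 19 (one common reading): flip a bit when the transfer value exceeds a random threshold."""
    r = np.random.rand(*X_continuous.shape)
    flip = v_transfer(X_continuous) >= r
    return np.where(flip, 1 - X_binary, X_binary)   # 1 = feature selected, 0 = not selected
```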

5.2. Fuzzy K-nearest neighbor

K-nearest neighbors (KNN) (Cover and Hart, 1967; Jadhav et al., 2018; Tang et al., 2020) is a simple, efficient, nonparametric classification method proposed by Cover and Hart (1967) and one of the best-known machine learning algorithms of the 20th century. In KNN, a sample is assigned to the most common class among its K nearest neighbors. Keller et al. (1985) combined fuzzy set theory with KNN and proposed a fuzzy version, named the FKNN (Keller et al., 1985; Chen et al., 2011, 2013; Mailagaha Kumbure et al., 2020). Unlike the crisp class assignment of KNN, FKNN assigns each sample fuzzy memberships to the different classes according to Eq. 21.

$$\mu_i(x)=\frac{\sum_{j=1}^{k}\mu_{i,j}\left(1/\left\|x-x_j\right\|^{2/(m-1)}\right)}{\sum_{j=1}^{k}\left(1/\left\|x-x_j\right\|^{2/(m-1)}\right)} \qquad (21)$$

In the above equation, i = 1, 2, 3, …, C and j = 1, 2, 3, …, k. C denotes the number of classes and k represents the number of the nearest neighbors. In calculating the contribution of each neighbor to the affiliation value, the FKNN method determines the weight of the distance in the calculation process by using the fuzzy strength parameter m, which is usually taken as m ∈ (1, ∞). ∥ x-xj ∥ is calculated using the Euclidean distance, which denotes the distance between x and its j-th nearest neighbor xj. μi,j is the membership degree of the pattern xj from the training set to the class i, among the k nearest neighbors of x.
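The membership computation of Eq. 21 can be sketched as follows; the choice of k = 5 and m = 2 is illustrative, and handling zero distances with a small epsilon is an implementation assumption.

```python
import numpy as np

def fknn_membership(x, X_train, U_train, k=5, m=2.0):
    """Eq. 21: fuzzy class memberships of query x from its k nearest training neighbours.

    X_train: (n, d) training samples; U_train: (n, C) their fuzzy memberships;
    m: fuzzy strength parameter (m > 1). Values of k and m here are illustrative.
    """
    d = np.linalg.norm(X_train - x, axis=1)                  # Euclidean distances ||x - x_j||
    nn = np.argsort(d)[:k]                                   # indices of the k nearest neighbours
    w = 1.0 / (d[nn] ** (2.0 / (m - 1.0)) + 1e-12)           # distance weights (eps avoids /0)
    return (U_train[nn] * w[:, None]).sum(axis=0) / w.sum()  # weighted memberships, Eq. 21

# The query is assigned to the class with the largest membership:
# y_pred = np.argmax(fknn_membership(x, X_train, U_train))
```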

5.3. Implementation of bSRWPSO-FKNN

This section proposes a novel feature prediction model, named bSRWPSO-FKNN, based on the binary SRWPSO and the FKNN, which provides technical support for conducting feature selection experiments. The principle is to use bSRWPSO’s global search ability to optimize the feature subsets produced during the experiment, obtaining a better and more suitable subset for feature selection, and then to use the FKNN to classify samples on the obtained subset. In this way, we not only exploit the potential of the FKNN but also improve the efficiency and accuracy of the classification experiments.

In addition, to better achieve the classification performance of the bSRWPSO-FKNN, this paper provides an evaluation method based on error rate and feature subset for aiding feature prediction, as shown in Eq. 22.

$$Fitness = \alpha \cdot Error + \beta \cdot \frac{R}{D} \qquad (22)$$

where Error denotes the classification error rate, whose sum with the classification accuracy equals 1; D denotes the total number of features in the dataset involved in feature selection; R denotes the number of features in the subset obtained by the feature selection experiment; α and β are two important weight parameters with α + β = 1, and α = 0.99 reflects the importance of the error rate.
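Eq. 22 amounts to a one-line weighted sum; a small sketch with α = 0.99 (as stated above) is shown below, where the example numbers are hypothetical.

```python
def feature_selection_fitness(error_rate, n_selected, n_total, alpha=0.99):
    """Eq. 22: weighted sum of classification error and feature-subset size (alpha + beta = 1)."""
    beta = 1.0 - alpha
    return alpha * error_rate + beta * (n_selected / n_total)

# e.g. a hypothetical error rate of 0.10 with 7 of 49 features selected:
# fitness = feature_selection_fitness(0.10, 7, 49)   # = 0.99*0.10 + 0.01*(7/49)
```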

In summary, the workflow of the bSRWPSO-FKNN proposed in this paper is shown in Figure 3.


Figure 3. The basic workflow of the bSRWPSO-FKNN.

6. Benchmark function validation

In this section, experiments are conducted to test the performance of SRWPSO on 30 benchmark functions from CEC 2014. The convergence process of SRWPSO is analyzed from several aspects, and its ability to escape local optima and search for the global optimum is fully demonstrated.

6.1. Experimental setup

In order to verify the comprehensive ability of SRWPSO, this section sets up performance verification experiments for SRWPSO from five aspects: mechanism combination verification experiments, quality analysis experiments, comparison experiments with traditional algorithms, comparison experiments with famous variants, and comparison experiments with new peer variants. Combined with the experimental results, this section analyzes the convergence process of SRWPSO and demonstrates its excellent performance. Table 1 gives the specific details of the CEC 2014 benchmark function set. The parameters of the algorithms involved in this paper are shown in Table 2.


Table 1. Description of the 30 benchmark functions.


Table 2. The parameters of the algorithms involved in this article.

To make the test outcomes more persuasive, we use two representative statistical measures in the analysis, namely the average value (AVG) and the standard deviation (STD). The AVG represents the comprehensive capability of the algorithm: the smaller its value, the better the algorithm’s overall performance. The STD reflects the stability of the algorithm: the smaller its value, the more stable the performance. To further examine the comparative results, two popular statistical methods are used in the analysis: the Wilcoxon signed-rank test (García et al., 2010) and the Friedman test (García et al., 2010). The symbols “+,” “=,” and “–” in the Wilcoxon signed-rank test mean that the performance of SRWPSO is superior to, equal to, and inferior to that of the competitor, respectively. In the tables, the best results are highlighted in bold. Finally, some convergence curves are drawn to visualize the convergence behavior of the algorithms.
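Both statistical tests are available in SciPy; the sketch below shows how they can be computed for results of this kind. The result arrays are random placeholders, not values from the paper, and the algorithm set is illustrative.

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

# results[a] holds one score per CEC 2014 function for algorithm a (placeholder values here)
results = {
    "SRWPSO": np.random.rand(30),
    "PSO":    np.random.rand(30),
    "DE":     np.random.rand(30),
}

# Pairwise Wilcoxon signed-rank test: SRWPSO vs. each competitor (p < 0.05 => significant)
for name in ("PSO", "DE"):
    stat, p = wilcoxon(results["SRWPSO"], results[name])
    print(f"SRWPSO vs {name}: p = {p:.4f}")

# Friedman test over all algorithms across the 30 functions
stat, p = friedmanchisquare(*results.values())
print(f"Friedman test: statistic = {stat:.2f}, p = {p:.4f}")
```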

In addition, to balance the influence of the experimental process on the outcomes, the experimental environment was unified both internally and externally. As displayed in Table 3, the study fixes the population size, number of test runs, target dimension, and other settings of each algorithm to eliminate the influence of internal experimental parameters on performance. Note that the maximum number of evaluations is used in this experiment instead of the number of iterations, from which the number of iterations can be derived. As shown in Table 4, the study uses the same experimental equipment to avoid interference from the external experimental environment, further increasing the fairness and scientific rigor of the comparison.


Table 3. The parameter setting of the experiment.


Table 4. Description of the experimental environment.

6.2. Impacts of components

In this part, the experimental process of SRWPSO is presented. In this process, this paper explores the impact of three improved strategies on the performance of PSO based on the CEC 2014 benchmark function set. Table 5 shows the different combinations in the improvement process. In the table, the SOB represents the Sobol initialization strategy, the RRS represents the random replacement strategy, and the AWS represents the adaptive weight strategy.


Table 5. The results of strategy combinations.

Supplementary Appendix Table 1 reflects the effects of the different combinations of strategies on the comprehensive performance of PSO through AVG and STD. The data show that SRWPSO achieves the largest share of best results among the 30 test functions, especially on the unimodal and composition functions. For AVG, a smaller value obtained by SRWPSO indicates better performance on the problem, and the more often SRWPSO achieves the minimum AVG across the 30 test functions, the more adaptable it is to different problems. For STD, a smaller value indicates more stable performance on the corresponding problem, and the number of minimum STD values likewise reflects the adaptability of the corresponding algorithm to different problems to a certain extent. In addition, the performance of PSO under a single strategy is not outstanding compared with the traditional PSO, and is even slightly worse on some problems. Under two strategies, PSO performs relatively well in terms of overall capability, especially SWPSO and RWPSO. Of course, the table shows that SRWPSO, under the combined action of the three strategies, exhibits the best overall ability in this comparison test.

Supplementary Appendix Table 2 presents the p-values obtained from the Wilcoxon signed-rank test. In the analysis, results smaller than 0.05 are bolded in the table, indicating that the superior ability of SRWPSO is statistically significant with high confidence relative to the algorithms participating in the comparison. The table shows that p-values below 0.05 account for a clear majority compared with those above 0.05, especially relative to the performance of the traditional PSO on the 30 benchmark functions. This indicates that the SRWPSO proposed in this paper outperforms the single-strategy variants, the two-strategy variants, and the original PSO in the comparison experiments.

To enhance the persuasiveness of the experimental results, the results of the Wilcoxon signed-rank test are given in Table 6. It is easy to see from the table that SRWPSO shows the best comprehensive performance in this experiment, and its average Wilcoxon signed-rank value is much smaller than that of the second-ranked SWPSO. In addition, SRWPSO performs better than SWPSO on 15 of the 30 benchmark problems and shows similar optimization capability on 14. Compared with the traditional PSO, SRWPSO is even more outstanding, performing better on 26 benchmark functions, equally on three, and worse on only one.


Table 6. The results of Wilcoxon signed-rank test of different versions.

To further strengthen the authority of the experimental results, the statistical results of the Friedman test are given in Figure 4. As seen from the figure, the Friedman statistic obtained by SRWPSO is 2.59, the smallest among the comparison results. It is not only much smaller than that of the traditional PSO, which ranks at the bottom, but also smaller than that of SWPSO, which ranks second. This again indicates that the comprehensive performance of SRWPSO is relatively the best in this experiment and provides further justification for the SRWPSO proposed in this paper.


Figure 4. The results of the Friedman test of different versions.

6.3. The qualitative analysis of SRWPSO

Figure 5 analyzes the performance of SRWPSO from several perspectives. Figure 5A provides a three-dimensional view of the benchmark function. Figure 5B marks the two-dimensional distribution of the historical search positions of SRWPSO during the search, where the red markers indicate the best positions throughout the search process and the black dots indicate the historical search positions. Figure 5C shows the change in the first dimension of each position during the iterations. Figure 5D gives the variation of the average fitness of all individuals in the population during the iterations. Figure 5E provides the convergence curves of SRWPSO and PSO. The three-dimensional and two-dimensional distributions show that SRWPSO is able to obtain better global optimal solutions on benchmark functions of different complexity. The variation of the first dimension of each position shows that the amplitude of oscillation at the beginning of the iterations is small; as the number of iterations increases, the amplitude of oscillation increases and then stabilizes to a certain extent, indicating that individuals can better traverse the search space and increase the diversity of the population, thereby enhancing the ability to escape local optima. Similarly, Figure 5D shows that the average fitness values of SRWPSO oscillate with large amplitude on F12, F16, F17, and F21, again indicating population diversity during the search process. The convergence curves of SRWPSO and PSO show that the final convergence accuracy of SRWPSO is better than that of PSO, and Figure 5E also shows that the convergence ability of SRWPSO is much stronger than that of PSO. The convergence curves on F1, F2, and F16 show that SRWPSO has a solid ability to escape from local optima; each inflection point on a convergence curve represents a successful escape from a local optimum.


Figure 5. The analysis results of SRWPSO and PSO from multiple perspectives. See section 6.3 for details.

The results of the equilibrium analysis of the corresponding benchmark functions in Figure 5 are given in Figure 6. Comparing the equilibrium plots of SRWPSO and PSO, it is easy to observe that the exploitation capability of SRWPSO is significantly improved relative to PSO, so that SRWPSO, based on the three optimization strategies, reaches a better balance between the exploration and exploitation stages, making both its convergence speed and final convergence accuracy better than those of PSO.


Figure 6. The balance analysis results of SRWPSO and PSO.

6.4. Comparison with traditional algorithms

This subsection discusses the experimental results comparing SRWPSO with nine well-known traditional algorithms to further demonstrate the core advantages of SRWPSO. The traditional algorithms involved in this comparison include ant colony optimization based on continuous optimization (ACOR) (Dorigo, 1992; Dorigo and Caro, 1999; Socha and Dorigo, 2008), differential evolution (DE) (Storn and Price, 1997), the sine cosine algorithm (SCA) (Mirjalili, 2016), HHO (Heidari et al., 2019b), grey wolf optimization (GWO) (Mirjalili et al., 2014), the whale optimization algorithm (WOA) (Mirjalili and Lewis, 2016), the bat-inspired algorithm (BA) (Yang, 2010), moth-flame optimization (MFO) (Mirjalili, 2015), and wind-driven optimization (WDO) (Bayraktar et al., 2010).

Supplementary Appendix Table 3 gives the results of SRWPSO compared with the nine traditional algorithms in terms of AVG and STD. In terms of the number of best solutions obtained on the 30 benchmark functions, SRWPSO ranks first in this experiment, which indicates that it not only has relatively the best comprehensive performance but also good adaptability to different problems. It is likewise evident that SRWPSO has a tremendous advantage over the other nine algorithms in finding the global optimum.

The analysis of the Wilcoxon signed-rank test in Table 7 shows that SRWPSO ranks first in this comparison experiment with an overall mean of 1.53, which is 1.67 smaller than the average score of DE, ranked second overall; SRWPSO outperforms DE on 20 functions. The p-values obtained during the Wilcoxon signed-rank test are presented in Supplementary Appendix Table 4. The values bolded in the table are less than 0.05, indicating that the corresponding results are statistically credible. The table shows that the p-values are essentially less than 0.05 in all the comparisons, indicating that the optimal solutions obtained by SRWPSO are credible when compared with the other nine traditional algorithms.


Table 7. The results of the Wilcoxon signed-rank test of SRWPSO with traditional algorithms.

To further demonstrate the performance of SRWPSO, Figure 7 analyzes the experimental results based on the Friedman test. It is clear from the figure that SRWPSO ranks first, obtaining a Friedman test result of 2.02, while DE ranks second with a score of 3.48. This is further evidence that SRWPSO has a clear advantage over the basic algorithms. Figure 8 shows representative convergence curves of SRWPSO compared with the other nine traditional algorithms. SRWPSO has significantly better convergence accuracy than the other algorithms and converges significantly faster on F11, F16, F29, and F30. On F1, F2, and F3, the convergence curves of SRWPSO have obvious inflection points compared with the other algorithms, which indicates that SRWPSO has a stronger ability to escape from local optima on this type of problem. Overall, SRWPSO is more competitive than the other nine traditional algorithms in searching for the global optimum. Therefore, the core advantages of SRWPSO are also well demonstrated in comparison with the other basic algorithms.


Figure 7. The results of Friedman test of SRWPSO with traditional algorithms.


Figure 8. The convergence curves of SRWPSO with traditional algorithms.

Figure 9 shows the time cost consumed by all algorithms in this experiment when run on the 30 benchmark functions. Each color in the figure represents an algorithm, and the results are in seconds. It is easy to see that SRWPSO has a higher cost in the optimization tasks relative to these original classical algorithms, which is understandable given the several improvement strategies introduced in SRWPSO. However, the difference compared with ACOR and DE is not large, and SRWPSO even consumes less time than they do on most functions. This indicates that the computational cost of SRWPSO is competitive with that of some well-known original algorithms.


Figure 9. The time complexity evaluation results of SRWPSO with traditional algorithms.

6.5. Comparison with famous variants

To further verify that the comprehensive performance of SRWPSO has core advantages, this subsection compares SRWPSO with nine well-known variants of algorithms proposed in recent years, mainly hybridizing SCA with DE (SCADE) (Nenavath and Jatoth, 2018), chaotic BA (CBA) (Adarsh et al., 2016), chaotic random spare ACO (RCACO) (Dorigo, 1992; Dorigo and Caro, 1999; Zhao et al., 2021), modified SCA (m_SCA) (Qu et al., 2018), ACO with Cauchy and greedy levy mutations (CLACO) (Dorigo, 1992; Dorigo and Caro, 1999; Liu L. et al., 2021), hybridizing SCA with PSO (SCA_PSO) (Nenavath et al., 2018), double adaptive random spare reinforced WOA (RDWOA) (Chen et al., 2019), boosted GWO (OBLGWO) (Heidari et al., 2019a), and fuzzy self-tuning PSO (FSTPSO) (Nobile et al., 2018).

Supplementary Appendix Table 5 analyzes the AVG and STD obtained after 30 independent runs. SRWPSO obtains the largest number of minimum AVG values, which indicates that its convergence performance and adaptability to the problems are superior to those of the other nine well-known variants in this comparison experiment. SRWPSO also obtains the largest number of minimum STD values, which indicates that its performance is more stable.

The analytical results of the Wilcoxon signed-rank test are given in Table 8. As seen from the table, SRWPSO achieves relatively better global optimal solutions for most of the functions and ranks first in this experiment with an overall mean of 1.87. In addition, the second column of the table shows that SRWPSO outperforms the second-ranked CLACO on 17 of the 30 benchmark functions, the third-ranked RCACO on 19, and even the bottom-ranked FSTPSO on all 30. This indicates that the comprehensive performance of SRWPSO has a powerful core advantage over all the algorithms participating in this experiment. To further demonstrate this advantage, Supplementary Appendix Table 6 analyzes the p-values obtained in the Wilcoxon signed-rank test. The bolded data in the table indicate p-values less than 0.05, again indicating that the superiority of SRWPSO over the other nine well-known variants on the corresponding problems is statistically credible. Thus, the table clearly shows that SRWPSO performs better in most of the comparisons.


Table 8. The results of the Wilcoxon signed-rank test of SRWPSO with famous variants.

The results of the Friedman test given in Figure 10 show that SRWPSO ranks first with 2.37 and CLACO ranks second with 3.34, which proves that SRWPSO outperforms the other nine methods. To further analyze the convergence capability of SRWPSO, some of the convergence curves from this comparison experiment are given in Figure 11. From the figure, it can be observed that SRWPSO has the best convergence accuracy on the listed benchmark functions. In terms of convergence speed, SRWPSO is relatively better on F2, F3, F11, F16, and F30, while its ability to continuously find the global optimum is well demonstrated on F1, F2, F17, and F21. Thus, the above analysis strongly demonstrates that the comprehensive performance of SRWPSO still has significant advantages compared with other advanced variants.


Figure 10. The results of Friedman test of SRWPSO with famous variants.


Figure 11. The convergence curves of SRWPSO with famous variants.

Figure 12 shows the time cost consumed by SRWPSO with 9 other famous variants for 30 optimization tasks. Each color in the figure represents an algorithm, and the experimental results are in seconds. SRWPSO consumes less than m_SCA and CLACO in all 30 optimization tasks, with the most prominent advantage over CLACO in particular. In addition, it is not difficult to find that SRWPSO consumes less than RCACO in most optimization tasks upon closer inspection. The difference compared to SCADE and CBA is also not very large. Of course, SRWPSO also has some weaknesses against several other variants, which are caused by introducing optimization strategies with different complexity to algorithms with different complexity. In conclusion, SRWPSO has good computational efficiency in comparison with famous variants.


Figure 12. The time complexity evaluation results of SRWPSO with famous variants.

6.6. Comparison with new peer variants

To highlight the core strengths of SRWPSO, this section compares SRWPSO with seven new peer variants: the enhanced WOA (EWOA) (Tu et al., 2021a), the elite evolutionary strategy-based HHO (EESHHO) (Li C. et al., 2021), ACOR boosted with directional crossover (DX) and directional mutation (DM) (XMACOR) (Qi et al., 2022b), the cellular grey wolf optimizer with a topological structure (CAGWO) (Lu et al., 2018), the multi-core SCA (SGLSCA) (Zhou W. et al., 2021), the improved GWO (IGWO) (Cai et al., 2019), and the HHO based on Gaussian mutation and cuckoo search (GCHHO) (Song et al., 2021).

Supplementary Appendix Table 7 compares SRWPSO with the new peer variants. SRWPSO clearly performs best on the unimodal functions (F1, F2, and F3) and the composition functions (F23–F30). In addition, SRWPSO attains the optimal AVG and STD on the largest number of functions in this experiment, which indicates that it is relatively the most adaptable to different problems; in other words, it offers not only better optimization ability but also better stability. Supplementary Appendix Table 8 reports the p-values of the comparisons between SRWPSO and the new peer variants. Entries marked in black indicate p-values greater than 0.05 and therefore lack statistical significance; the remaining entries are statistically significant and provide strong evidence in favor of SRWPSO. Since most of the p-values are below 0.05, SRWPSO is better than the other algorithms on the corresponding functions.

Table 9 shows the results of the Wilcoxon signed-rank test of SRWPSO with the new peer variants. SRWPSO ranks first in this test with a mean rank of 1.90, which is 0.9 lower than that of the second-ranked SGLSCA. Specifically, SRWPSO is stronger than SGLSCA on 13 functions, equal on 14, and worse on only 3. In addition, SRWPSO is superior to EWOA, XMACOR, CAGWO, IGWO, and GCHHO on 22 or more functions. In conclusion, this test shows that SRWPSO retains a significant advantage over the new peer variants.


Table 9. The results of the Wilcoxon signed-rank test of SRWPSO with new peer variants.

Figure 13 presents the results of the Friedman test of SRWPSO with the new peer variants. SRWPSO obtains the best score, which is 0.93 smaller than that of the second-ranked SGLSCA and 3.9 smaller than that of the last-ranked CAGWO. Figure 14 shows nine convergence curves of SRWPSO and the new peer variants; SRWPSO obtains the best convergence accuracy in all nine plots. Moreover, compared with the other variants, the convergence curves of SRWPSO on F1, F2, F3, F5, and F16 show clear inflection points, indicating a stronger ability to escape local optima on those functions, and SRWPSO also converges faster overall. In conclusion, SRWPSO has a very clear core advantage over the new peer variants.


Figure 13. The results of Friedman test of SRWPSO with new peer variants.


Figure 14. The convergence curves of SRWPSO and new peer variants.

Figure 15 presents the time cost of SRWPSO and the new peer variants in this experiment. Each color represents one algorithm, and the results are reported in seconds. SRWPSO consumes significantly less time than EWOA and XMACOR on all 30 functions. Although SRWPSO is more time-consuming than GCHHO, the difference is small except on F1, F2, and F3. SRWPSO is, however, slower than EESHHO, SGLSCA, and IGWO; this is mainly because the introduced strategies add different amounts of overhead to algorithms whose base complexities already differ. In conclusion, SRWPSO maintains competitive computational efficiency relative to the new peer variants. The SRWPSO and future improved PSO variants can be applied in different fields, such as human activity recognition (Qiu et al., 2022), dynamic module detection (Ma et al., 2020; Li D. et al., 2021), recommender systems (Li et al., 2014, 2017), smart contract vulnerability detection (Zhang L. et al., 2022), privacy protection of electronic medical records (Wu et al., 2022), named entity recognition (Yang Z. et al., 2022), structured sparsity optimization (Zhang X. et al., 2022), microgrid planning (Cao et al., 2021c), location-based services (Wu et al., 2020; Wu Z. et al., 2021), disease prediction (Su et al., 2019; Li L. et al., 2021), medical data processing (Guo et al., 2022), drug discovery (Zhu et al., 2018; Li Y. et al., 2020), and object tracking (Zhang et al., 2015).


Figure 15. The time complexity evaluation results of SRWPSO with new peer variants.

7. Feature selection experiments and analysis

In this section, the performance of bSRWPSO-FKNN is first tested and validated on nine public datasets from the UCI repository. The model is then validated a second time on a medical dataset, and the key features affecting the incidence of AD are successfully extracted through 10 times 10-fold cross-validation experiments interpreted in combination with clinical practice.

7.1. Experimental setup

To verify that the proposed bSRWPSO-FKNN performs better in feature selection, a series of comparative tests is conducted between bSRWPSO and well-known algorithms in this field on nine public datasets and one medical dataset. The details of the public datasets are described in Table 10. The binary swarm intelligence algorithms involved in the tests are bSCGWO, bGWO, bGSA, bPSO, bALO, bBA, bSSA, bQGWO, bHHO, and bSMA. Apart from converting the compared swarm intelligence methods to binary discrete versions suitable for feature selection (see the sketch below), the parameters unique to each algorithm remain unchanged, as shown in Table 11.
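
As noted in the conclusion, a V-shaped transfer function is used to discretize SRWPSO into bSRWPSO. The sketch below illustrates the general idea with the common |tanh| transfer function and a simple stochastic thresholding rule; both choices are illustrative assumptions rather than the exact transformation used in this paper.

```python
import numpy as np

def v_shaped_binarize(position, rng):
    """Map a continuous particle position to a 0/1 feature-selection mask."""
    prob = np.abs(np.tanh(position))                      # V-shaped transfer value in [0, 1)
    return (rng.random(position.shape) < prob).astype(int)

rng = np.random.default_rng(3)
continuous_position = rng.normal(size=8)                  # one particle, 8 candidate features
print(v_shaped_binarize(continuous_position, rng))        # e.g., [1 0 1 ...]
```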


Table 10. A detailed description of the public dataset.


Table 11. Key parameters of the experiment.

To verify the core advantages of bSRWPSO-FKNN, four evaluation criteria, namely Accuracy, Sensitivity, MCC, and F-measure, are applied to evaluate the methods involved in the comparison experiments, and the average value (AVG) and standard deviation (STD) obtained during the experiments are analyzed and compared. Here, unlike in the benchmark function validation, a larger AVG indicates stronger average performance of a method. In addition, the optimal experimental results are highlighted in bold. The details of the evaluation criteria are described in Table 12.


Table 12. Detailed description of the evaluation criteria.

In the table above, true positives (TP) and true negatives (TN) are correct predictions, meaning that the positive and negative classes are correctly predicted, respectively. False positives (FP) and false negatives (FN) are errors: a false positive means a negative case is wrongly predicted as positive, and a false negative means a positive case is wrongly predicted as negative.
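
Using these counts, the four criteria in Table 12 can be computed directly, as in the sketch below, which follows the standard formulas for Accuracy, Sensitivity, MCC, and F-measure (the example counts are arbitrary).

```python
import math

def evaluation_criteria(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)                          # true positive rate
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    return accuracy, sensitivity, mcc, f_measure

print(evaluation_criteria(tp=85, tn=80, fp=10, fn=6))
```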

In addition, this paper evaluates the models participating in the feature selection experiments with the Friedman test and reports the corresponding Friedman rankings, which show more intuitively that bSRWPSO-FKNN has relatively better feature selection performance. All experiments were conducted under fair conditions common across computing platforms, following the guidelines for unbiased comparisons in preceding AI-based work (Duan et al., 2022; Jin et al., 2022; Yang B. et al., 2022); implementation-specific factors are assumed to affect all compared approaches equally, regardless of how each approach is used (Li H. et al., 2021; Zhang Z. et al., 2022; Lu et al., 2023). Finally, to ensure fairness, the internal settings of all experiments are kept consistent, and the external experimental environment is the same as in the benchmark function experiments.

7.2. Public dataset experiments

The classification accuracies of bSRWPSO and the other ten binary algorithms are given in Table 13. bSRWPSO achieves the largest average classification accuracy on all nine public datasets tested, ranking first on each of them, and its stability is also the strongest among the compared methods. Table 14 shows the average rankings of the 11 methods over the nine public datasets obtained by the Friedman test. As shown in the second column of the table, the Friedman mean value of bSRWPSO over the nine public datasets is 1.22, the smallest among all compared algorithms, indicating that bSRWPSO ranks first in this classification accuracy test.


Table 13. The analysis results of accuracy.


Table 14. The results of Friedman ranking for accuracy.

Table 15 reports the AVG and STD of the sensitivity of each method in this experiment. bSRWPSO performs best on the nine public datasets; except for Australian, SpectEW, and Wielaw, its sensitivity on the other six public datasets is above 97%. bSRWPSO also exhibits the smallest STD on the largest number of public datasets, indicating the most stable adaptation to different classification problems. In addition, Table 16 compiles the Friedman mean values of each algorithm over the nine public datasets, according to which bSRWPSO ranks first, indicating that its sensitivity is relatively the best among the compared methods.


Table 15. The analysis results of sensitivity.


Table 16. The results of Friedman ranking for sensitivity.

Table 17 shows the AVG and STD of the MCC for bSRWPSO and the other compared algorithms. Except for Australian, heart, and SpectEW, the AVG of bSRWPSO on all other public datasets lies between 0.92 and 1. Combined with the STD obtained in the experiment, this shows that the overall performance of bSRWPSO ranks first. As shown in Table 18, the Friedman average ranking of bSRWPSO over the nine public datasets is also first.


Table 17. The analysis results of MCC.


Table 18. The results of Friedman ranking for MCC.

Table 19 analyzes the performance of bSRWPSO on the nine public datasets in terms of the F-measure, giving the AVG, which reflects average capability, and the STD, which reflects stability. The minimum mean of bSRWPSO occurs on the SpectEW dataset but is still above 0.88; overall, its F-measure exceeds 0.95 and ranks first on every dataset. The comprehensive Friedman ranking for the F-measure is given in Table 20, where bSRWPSO ranks first among the 11 compared algorithms with an average ranking of 1, which is 3 lower than that of the second-ranked bGSA, while bPSO ranks fourth with a score of 4.56. Therefore, the three improvement strategies introduced in this paper significantly improve the classification performance of bPSO.


Table 19. The analysis results of F-measure.


Table 20. The results of Friedman ranking for F-measure.

In summary, this section verifies the comprehensive performance of bSRWPSO in feature selection experiments through classification accuracy, sensitivity, MCC, and F-measure. The comparison with ten other methods demonstrates that the classification capability of bSRWPSO has a strong competitive advantage. Therefore, the bSRWPSO proposed in this paper is a novel method with substantial classification capability that can be used for feature selection.

7.3. AD dataset experiments

7.3.1. AD dataset

This medical dataset includes 181 patients diagnosed with AD who were enrolled at the Department of Dermatology of the Affiliated Hospital of Medical School, Ningbo University, from May 2021 to March 2022. Primary demographic data such as sex and age are included, and typical laboratory characteristics comprising the blood routine examination, the blood serum allergen test, and total serum IgE are gathered. The clinical and laboratory results of the patients with AD are presented in Table 21. Continuous data are expressed as mean ± standard deviation, and categorical data are described as percentages. The Ethics Commission of the Affiliated Hospital of Medical School approved this medical dataset (No. KY20191208).


Table 21. The characteristics dataset of 181 patients with AD.

7.3.2. Medical validation experiments

To further demonstrate the classification capability of bSRWPSO, this section sets up four comparison experiments on the specific medical dataset. First, to illustrate the core advantage of combining bSRWPSO with FKNN for feature selection, bSRWPSO is combined with FKNN, the kernel extreme learning machine (KELM), KNN, SVM, and MLP, respectively. Second, to verify that the classification ability of bSRWPSO-FKNN is better than that of classical classifiers, bSRWPSO-FKNN is compared with five classical classifiers on the medical dataset, namely BP, CART, RandomF, AdaBoost, and ELMforFS. Third, bSRWPSO and ten other binary swarm intelligence optimization algorithms are each combined with FKNN, and the classification advantages of bSRWPSO-FKNN are verified through feature-classification comparison experiments. Finally, bSRWPSO-FKNN is applied in ten times 10-fold cross-validation experiments on the medical dataset, which successfully extract the key features affecting the onset of AD.
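
As background on the base classifier used throughout these experiments, the sketch below shows a minimal fuzzy K-nearest neighbor prediction step in the spirit of Keller et al. (1985), with crisp neighbor memberships, Euclidean distance, fuzziness m = 2, and K = 3; these settings and the toy data are illustrative assumptions and do not reflect the configuration selected by bSRWPSO.

```python
import numpy as np

def fknn_predict(X_train, y_train, x, k=3, m=2.0, eps=1e-12):
    """Return the predicted class and fuzzy class memberships of sample x."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]                               # indices of the K nearest neighbors
    w = 1.0 / (d[idx] ** (2.0 / (m - 1.0)) + eps)         # fuzzy distance weights
    classes = np.unique(y_train)
    memberships = np.array([w[y_train[idx] == c].sum() for c in classes]) / w.sum()
    return classes[np.argmax(memberships)], memberships

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1], [1.2, 0.8]])
y = np.array([0, 0, 1, 1, 1])
print(fknn_predict(X, y, np.array([0.95, 0.9])))          # expected to favor class 1
```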

Figure 16 shows the results of the comparative experiments combining bSRWPSO with each of the five machine learning algorithms. As seen from the box plots, bSRWPSO-FKNN obtains the most concentrated results across the four evaluation criteria, indicating that its classification performance is relatively the most stable in this experiment. In the figure, the marker × represents the average value of each group of data; the average of all four evaluation criteria for the combination of bSRWPSO and FKNN is 1, which is greater than that of the other four combinations, indicating that bSRWPSO-FKNN delivers the best classification performance on this medical dataset.


Figure 16. Evaluation results for different combinations.

Figure 17 shows the results of comparing bSRWPSO-FKNN with the five classical classifiers. The box plots show that bSRWPSO-FKNN has an undeniable competitive advantage in this comparison: its classification performance is relatively more stable, and its comprehensive classification ability is the best. In contrast, the results of the other five classical classifiers are relatively scattered across all four evaluation criteria, indicating that their classification ability is unstable. Therefore, bSRWPSO-FKNN retains the core competitive advantage in this experiment.


Figure 17. The results of bSRWPSO with classical classifiers.

Figure 18 shows the results of comparing bSRWPSO with ten other binary swarm intelligence algorithms. The figure presents six box plots corresponding to six evaluation criteria: Accuracy, Sensitivity, MCC, F-measure, Error, and Time. Error denotes the error rate of the classification method, so Error and classification accuracy sum to 1. Time reflects the time spent by the classification method on the feature experiments; the larger the value, the more time is needed to extract the key features successfully. Comparing Accuracy, Sensitivity, MCC, and F-measure, bSRWPSO obtains the largest results, indicating that it forms the most successful combination with FKNN among all the swarm intelligence algorithms involved in the feature experiments; its classification performance is not only the best but also the most stable. Comparing the Error plots, bSRWPSO has the smallest probability of error during the experiments. However, comparing the Time of the 11 methods shows that bSRWPSO has some shortcomings in time complexity: although its time cost is much lower than that of bSCGWO and bQGWO, it is higher than that of the other eight compared methods.


Figure 18. Comparison results of bSRWPSO with other binary algorithms.

To further demonstrate the core advantages of bSRWPSO-FKNN, Table 22 gives the averages (AVG) and ranking results (Rank) of the Friedman test for Accuracy, Sensitivity, MCC, and F-measure. bSRWPSO obtains the minimum average under all four evaluation criteria and therefore ranks first under each of them.


Table 22. The results of the Friedman test.

As shown in Figure 19, the convergence curve of bSRWPSO lies below those of the other methods once the maximum number of iterations is reached, which means that bSRWPSO achieves the relatively best optimization accuracy among the methods involved in the experiments. Therefore, bSRWPSO is the most effective in optimizing the classification ability of FKNN.


Figure 19. Convergence curve of feature selection method.

Figure 20 shows the results of the ten times 10-fold cross-validation of bSRWPSO on the AD dataset. The vertical axis indicates the number of times each attribute is selected, and the horizontal axis indicates the different attributes of AD. Features F5, F15, F20, F22, F27, F30, and F36 are the ones selected by bSRWPSO-FKNN, denoting the content of LY, Cat dander, Milk, Dermatophagoides Pteronyssinus/Farinae, Ragweed, Cod, and Total IgE, respectively. Interpreted in combination with clinical medicine, these features have medical reference value. This experiment therefore shows that bSRWPSO-FKNN is scientific and practical for predicting the development of AD.
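
The counting behind Figure 20 can be organized as in the following sketch, where run_feature_selection is a hypothetical stand-in for one bSRWPSO-FKNN search on a training fold and the data arrays are random placeholders sized to the 181-patient dataset (the number of candidate features used here is an assumption).

```python
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold

def run_feature_selection(X_train, y_train, n_features, rng):
    # Placeholder: a real run would return the binary mask found by bSRWPSO-FKNN.
    return rng.integers(0, 2, size=n_features)

rng = np.random.default_rng(4)
X = rng.random((181, 36))                     # placeholder data sized like the AD dataset
y = rng.integers(0, 2, size=181)
counts = np.zeros(X.shape[1], dtype=int)

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
for train_idx, _ in cv.split(X, y):
    counts += run_feature_selection(X[train_idx], y[train_idx], X.shape[1], rng)

print("Selection counts per feature:", counts)
```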


Figure 20. The results of the 10 times 10-fold cross-validation experiments.

8. Conclusion and future works

This study proposes an improved particle swarm optimizer, SRWPSO, which combines the Sobol-based initialization (SOB), the random replacement strategy (RRS), and the adaptive weight strategy (AWS) with the original PSO. A binary version, bSRWPSO, is then derived, and combining bSRWPSO with FKNN yields a novel feature prediction model, bSRWPSO-FKNN. In SRWPSO, the SOB improves the quality of the initial swarm and its coverage of the initial population space. The RRS strengthens the ability of the original PSO to escape local optima and thus improves its convergence accuracy. The AWS perturbs the search process and enhances exploration by controlling the displacement vector. Together, the RRS and AWS increase the diversity of the particle swarm, which in turn improves the exploration and exploitation capability of the original PSO. The performance analysis experiments comparing SRWPSO with the original PSO show that the combination of the three strategies not only increases population diversity but also balances exploration and exploitation, leading to stronger convergence. The comparisons of SRWPSO with the original PSO, nine original algorithms, and nine improved algorithms on 30 benchmark functions show that the core advantages of SRWPSO are faster convergence, higher convergence accuracy, and a greater ability to escape local optima.

For bSRWPSO, a V-shaped binary transformation is introduced to successfully discretize SRWPSO. In the feature selection experiments of bSRWPSO-FKNN, the model first uses bSRWPSO to search for an optimized feature subset of each dataset, and FKNN then performs classification prediction on the optimized subset. A series of validation experiments demonstrates that the model performs relatively the best among all methods involved. In addition, ten times 10-fold cross-validation experiments of bSRWPSO-FKNN on a specific medical dataset successfully extract the key features affecting the onset of AD, mainly including the content of lymphocytes (LY), Cat dander, Milk, Dermatophagoides Pteronyssinus/Farinae, Ragweed, Cod, and Total IgE. Finally, the selected key features are discussed and analyzed in conjunction with clinical practice, demonstrating that bSRWPSO-FKNN possesses practical medical significance.
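
As an illustration of the SOB component mentioned above, the sketch below draws a Sobol low-discrepancy initial population with scipy.stats.qmc and scales it into the search bounds; the library, the scrambling option, and the parameter values are assumptions made for demonstration rather than a description of the authors' implementation.

```python
from scipy.stats import qmc

def sobol_init_population(n_particles, dim, lower, upper, seed=0):
    """Initialize a swarm with a Sobol sequence instead of uniform random numbers."""
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    unit_samples = sampler.random(n_particles)            # low-discrepancy points in [0, 1)^dim
    return qmc.scale(unit_samples, lower, upper)          # map into the search bounds

# 32 particles (a power of two keeps the Sobol sampler balanced) in a 10-dimensional
# search space bounded by [-100, 100] in every dimension.
population = sobol_init_population(32, 10, [-100.0] * 10, [100.0] * 10)
print(population.shape)                                   # (32, 10)
```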

However, the approach proposed in this paper has limitations. While the three improvement strategies greatly enhance the performance of the original PSO, they also increase its time complexity, which means that SRWPSO solves problems at a higher cost than the original PSO. Since bSRWPSO-FKNN is built on SRWPSO, the increase in the model's time complexity is inevitable, as can be seen in Figure 18. Therefore, reducing the time complexity of this approach will be one of the essential directions for future work. In addition, more effective strategies can be explored to further improve the original PSO.

Data availability statement

The original contributions presented in this study are included in the article/Supplementary material, further inquiries can be directed to the corresponding authors.

Author contributions

YL, ZX, AH, XJ, ZL, MW, and QZ: writing—original draft, writing—review and editing, software, visualization, and investigation. DZ, HC, and SX: conceptualization, methodology, formal analysis, investigation, writing—review and editing, funding acquisition, and supervision. All authors contributed to the article and approved the submitted version.

Funding

This work was supported by grants from Major Science and Technology Program for Medicine and Health in Zhejiang Province (No. WKJ-ZJ-2012), Funded by the Project of NINGBO Leading Medical & Health Discipline, Project Number: 2022-F23, Natural Science Foundation of Jilin Province (YDZJ202201ZYTS567), “Thirteenth Five-Year” Science and Technology Project of Jilin Provincial Department of Education (JJKH20200829KJ), Changchun Normal University Ph.D., Research Startup Funding Project (BS [2020]), Natural Science Foundation of Zhejiang Province (LZ22F020005), and National Natural Science Foundation of China (62076185 and U1809209).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fninf.2022.1063048/full#supplementary-material

References

Adarsh, B. R., Raghunathan, T., Jayabarathi, T., and Yang, X. S. (2016). Economic dispatch using chaotic bat algorithm. Energy 96, 666–675. doi: 10.1016/j.energy.2015.12.096


Ahmadianfar, I., Heidari, A., Gandomi, A., Chu, X., and Chen, H. (2021). RUN beyond the metaphor: An Efficient Optimization Algorithm Based on Runge Kutta Method. Expert Syst. Applic. 181:115079. doi: 10.1016/j.eswa.2021.115079


Ahmadianfar, I., Heidari, A., Noshadian, S., Chen, H., and Gandomi, A. (2022). INFO: An efficient optimization algorithm based on weighted mean of vectors. Expert Syst. Applic. 195:116516. doi: 10.1016/j.eswa.2022.116516


Arora, S., and Anand, P. (2019). Chaotic grasshopper optimization algorithm for global optimization. Neural Comput. Applic. 31, 4385–4405. doi: 10.1007/s00521-018-3343-2


Asano, K., Tamari, M., Zuberbier, T., Yasudo, H., Morita, H., Fujieda, S., et al. (2022). Diversities of allergic pathologies and their modifiers: Report from the second DGAKI-JSA meeting. Allergol. Int. 71, 310–317. doi: 10.1016/j.alit.2022.05.003


Bayraktar, Z., Komurcu, M., and Werner, D. H. (2010). “Wind Driven Optimization (WDO): A novel nature-inspired optimization algorithm and its application to electromagnetics,” in Proceedings of the 2010 IEEE Antennas and Propagation Society International Symposium, Toronto. doi: 10.1109/APS.2010.5562213


Berna, R., Mitra, N., Hoffstad, O., Wubbenhorst, B., Nathanson, K., Margolis, D., et al. (2021). Using a machine learning approach to identify low-frequency and rare FLG alleles associated with remission of atopic dermatitis. JID Innov. 1:100046. doi: 10.1016/j.xjidi.2021.100046


Cai, Z., Gu, J., Jie, L., and Zhang, Q. (2019). Evolving an optimal kernel extreme learning machine by using an enhanced grey wolf optimization strategy. Expert Syst. Applic. 138:112814. doi: 10.1016/j.eswa.2019.07.031


Cao, B., Fan, S., Zhao, J., Tian, S., Zheng, Z., Yan, Y., et al. (2021a). Large-scale many-objective deployment optimization of edge servers. IEEE Trans. Intell. Transp. Syst. 22, 3841–3849. doi: 10.1109/TITS.2021.3059455


Cao, B., Li, M., Liu, X., Zhao, J., Cao, W., and Lv, Z. (2021b). Many-objective deployment optimization for a drone-assisted camera network. IEEE Trans. Netw. Sci. Eng. 8, 2756–2764. doi: 10.1109/TNSE.2021.3057915


Cao, X., Sun, X., Xu, Z., Zeng, B., and Guan, X. (2021c). Hydrogen-based networked microgrids planning through two-stage stochastic programming with mixed-integer conic recourse. IEEE Trans. Automat. Sci. Eng. 19, 3672–3685. doi: 10.1109/TASE.2021.3130179


Cao, B., Zhao, J., and Lv, Z. (2020b). Diversified personalized recommendation optimization based on mobile data. IEEE Trans. Intell. Transp. Syst. 22, 2133–2139. doi: 10.1109/TITS.2020.3040909


Cao, B., Gu, Y., Lv, Z., Yang, S., Zhao, J., and Li, Y. (2020a). RFID reader anticollision based on distributed parallel particle swarm optimization. IEEE Internet Things J. 8, 3099–3107. doi: 10.1109/JIOT.2020.3033473


Chen, H., Yang, C., Heidari, A. A., and Zhao, X. (2019). An efficient double adaptive random spare reinforced whale optimization algorithm. Expert Syst. Applic. 154:113018. doi: 10.1016/j.eswa.2019.113018


Chen, H.-L., Huang, C., Yu, X., Xu, X., Sun, X., Wang, G., et al. (2013). An efficient diagnosis system for detection of Parkinson’s disease using fuzzy k-nearest neighbor approach. Expert Syst. Applic. 40, 263–271. doi: 10.1016/j.eswa.2012.07.014


Chen, H.-L., Wang, G., Ma, C., Cai, Z.-N., Liu, W.-B., and Wang, S.-J. (2016). An efficient hybrid kernel extreme learning machine approach for early diagnosis of Parkinson×s disease. Neurocomputing 184, 131–144. doi: 10.1016/j.neucom.2015.07.138


Chen, H.-L., Yang, B., Wang, G., Liu, J., Xu, X., Wang, S., et al. (2011). A novel bankruptcy prediction model based on an adaptive fuzzy k-nearest neighbor method. Knowledge-Based Syst. 24, 1348–1359. doi: 10.1016/j.knosys.2011.06.008


Clayton, K., Vallejo, A., Sirvent, S., Davies, J., Porter, G., Reading, I., et al. (2021). Machine learning applied to atopic dermatitis transcriptome reveals distinct therapy-dependent modification of the keratinocyte immunophenotype. Br. J. Dermatol. 184, 913–922. doi: 10.1111/bjd.19431


Cover, T., and Hart, P. (1967). Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 13, 21–27. doi: 10.1109/TIT.1967.1053964


Deng, W., Ni, H., Liu, Y., Chen, H., and Zhao, H. (2022a). An adaptive differential evolution algorithm based on belief space and generalized opposition-based learning for resource allocation. Appl. Soft Comput. 127:109419. doi: 10.1016/j.asoc.2022.109419


Deng, W., Xu, J., Gao, X. Z., and Zhao, H. (2022b). An enhanced MSIQDE algorithm with novel multiple strategies for global optimization problems. IEEE Trans. Syst. Man Cybern. Syst. 52, 1578–1587. doi: 10.1109/TSMC.2020.3030792


Deng, W., Zhang, X., Zhou, Y., Liu, Y., Zhou, X., and Chen, H. (2022d). An enhanced fast non-dominated solution sorting genetic algorithm for multi-objective problems. Inf. Sci. 585, 441–453. doi: 10.1016/j.ins.2021.11.052


Deng, W., Zhang, L., Zhou, X., Sun, Y., Zhu, W., Chen, H., et al. (2022c). Multi-strategy particle swarm and ant colony hybrid optimization for airport taxiway planning problem. Inf. Sci. 612, 576–593. doi: 10.1016/j.ins.2022.08.115


Deng, W., Xu, J., Zhao, H., and Song, Y. (2020a). “A Novel Gate Resource Allocation Method Using Improved PSO-Based QEA,” in Proceedings of the IEEE Transactions on Intelligent Transportation Systems, Piscataway, NJ.


Deng, W., Xu, J. J., Song, Y. J., and Zhao, H. M. (2020b). An effective improved co-evolution ant colony optimization algorithm with multi-strategies and its application. Int. J. Bio-Inspir. Comput. 16, 158–170. doi: 10.1504/IJBIC.2020.10033314


Dokeroglu, T., Sevinc, E., Kucukyilmaz, T., and Cosar, A. (2019). A survey on new generation metaheuristic algorithms. Comput. Ind. Eng. 137:106040. doi: 10.1016/j.cie.2019.106040


Dorigo, M. (1992). Optimization, Learning and Natural Algorithms. Ph.D. thesis. Milan: University in Milan.


Dorigo, M., and Caro, G. D. (1999). “The ant colony optimization meta-heuristic,” in New ideas in optimization, eds D. Corne, M. Dorigo, and F. Glover (Noida, IN: McGraw-Hill Ltd).


Duan, C., Deng, H., Xiao, S., Xie, J., Li, H., Zhao, X., et al. (2022). Accelerate gas diffusion-weighted MRI for lung morphometry with deep learning. Eur. Radiol. 32, 702–713. doi: 10.1007/s00330-021-08126-y


El-Kenawy, E. S. M., Ibrahim, A., Mirjalili, S., Eid, M., and Hussein, S. (2020). Novel feature selection and voting classifier algorithms for COVID-19 classification in CT images. IEEE Access 8, 179317–179335. doi: 10.1109/ACCESS.2020.3028012


Gao, D., Wang, G.-G., and Pedrycz, W. (2020). Solving fuzzy job-shop scheduling problem using DE algorithm improved by a selection mechanism. IEEE Trans. Fuzzy Syst. 28, 3265–3275. doi: 10.1109/TFUZZ.2020.3003506


García, S., Fernandez, A., Luengo, J., and Herrera, F. (2010). Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Inf. Sci. 180, 2044–2064. doi: 10.1016/j.ins.2009.12.010


Guimarães, P., Batista, A., Zieger, M., Kaatz, M., and Koenig, K. (2020). Artificial intelligence in multiphoton tomography: atopic dermatitis diagnosis. Sci. Rep. 10:7968. doi: 10.1038/s41598-020-64937-x


Guo, K., Chen, T., Ren, S., Li, N., Hu, M., and Kang, J. (2022). Federated learning empowered real-time medical data processing method for smart healthcare. IEEE/ACM Trans. Comput. Biol. Bioinform. doi: 10.1109/TCBB.2022.3185395 [Epub ahead of print].


Gupta, S., and Deep, K. (2018). A novel random walk grey wolf optimizer. Swarm Evol. Comput. 44, 101–112. doi: 10.1016/j.swevo.2018.01.001


Gustafson, E., Pacheco, J., Wehbe, F., Silverberg, J., and Thompson, W. A. (2017). Machine learning algorithm for identifying atopic dermatitis in adults from electronic health records. IEEE Int. Conf. Healthc. Inform. 2017, 83–90. doi: 10.1109/ICHI.2017.31


Han, X., Han, Y., Chen, Q., Li, J., Sang, H., Liu, Y., et al. (2021). Distributed flow shop scheduling with sequence-dependent setup times using an improved iterated greedy algorithm. Compl. Syst. Model. Simul. 1, 198–217. doi: 10.23919/CSMS.2021.0018


He, Z., Yen, G. G., and Ding, J. (2020). Knee-based decision making and visualization in many-objective optimization. IEEE Trans. Evol. Comput. 25, 292–306. doi: 10.1109/TEVC.2020.3027620


He, Z., Yen, G. G., and Lv, J. (2019). Evolutionary multiobjective optimization with robustness enhancement. IEEE Trans. Evol. Comput. 24, 494–507. doi: 10.1109/TEVC.2019.2933444


Heidari, A. A., Mirjalili, S., Faris, H., Aljarah, I., Mafarja, M., and Chen, H. (2019b). Harris hawks optimization: Algorithm and applications. Fut. Gener. Comput. Syst. 97, 849–872. doi: 10.1016/j.future.2019.02.028


Heidari, A. A., Ali Abbaspour, R., and Chen, H. (2019a). Efficient boosted grey wolf optimizers for global search and kernel extreme learning machine training. Appl. Soft Comput. 81:105521. doi: 10.1016/j.asoc.2019.105521


Holm, J. G., Hurault, G., Agner, T., Clausen, M., Kezic, S., Tanaka, R., et al. (2021). Immunoinflammatory biomarkers in serum are associated with disease severity in atopic dermatitis. Dermatology 237, 513–520. doi: 10.1159/000514503


Houssein, E. H., Abdelminaam, D. S., Hassan, H. N., Al-Sayed, M. M., and Nabil, E. (2021). A hybrid barnacles mating optimizer algorithm with support vector machines for gene selection of microarray cancer classification. IEEE Access 9, 64895–64905. doi: 10.1109/ACCESS.2021.3075942


Hu, J., Gui, W. Y., Heidari, A. A., Cai, Z. N., Liang, G. X., Chen, H. L., et al. (2022a). Dispersed foraging slime mould algorithm: Continuous and binary variants for global optimization and wrapper-based feature selection. Knowledge-Based Syst. 237:107761. doi: 10.1016/j.knosys.2021.107761


Hu, J., Han, Z., Heidari, A., Shou, Y., Ye, H., Wang, L., et al. (2022b). Detection of COVID-19 severity using blood gas analysis parameters and Harris hawks optimized extreme learning machine. Comput. Biol. Med. 142:105166. doi: 10.1016/j.compbiomed.2021.105166


Hu, J., Liu, Y., Heidari, A., Bano, Y., Ibrohimov, A., Liang, G., et al. (2022c). An effective model for predicting serum albumin level in hemodialysis patients. Comput. Biol. Med. 140:105054. doi: 10.1016/j.compbiomed.2021.105054


Hua, Y., Liu, Q., Hao, K., and Jin, Y. (2021). A survey of evolutionary algorithms for multi-objective optimization problems with irregular pareto fronts. IEEE/CAA J. Automat. Sin. 8, 303–318. doi: 10.1109/JAS.2021.1003817


Jadhav, S., He, H., and Jenkins, K. (2018). Information gain directed genetic algorithm wrapper feature selection for credit rating. Appl. Soft Comput. 69, 541–553. doi: 10.1016/j.asoc.2018.04.033


Jiang, Z., Li, J., Kong, N., Kim, J., Kim, B., Lee, M., et al. (2022). Accurate diagnosis of atopic dermatitis by combining transcriptome and microbiota data with supervised machine learning. Sci. Rep. 12:290. doi: 10.1038/s41598-021-04373-7


Jin, K., Yan, Y., Chen, M., Wang, J., Pan, X., Liu, X., et al. (2022). Multimodal deep learning with feature level fusion for identification of choroidal neovascularization activity in age-related macular degeneration. Acta Ophthalmol. 100, e512–e520. doi: 10.1111/aos.14928


Johansson, E. K., Bergström, A., Kull, I., Melén, E., Jonsson, M., Lundin, S., et al. (2022). Prevalence and characteristics of atopic dermatitis among young adult females and males-report from the Swedish population-based study BAMSE. J. Eur. Acad. Dermatol. Venereol. 36, 698–704. doi: 10.1111/jdv.17929


Kazimipour, B., Li, X., and Qin, A. K. (2014). “A review of population initialization techniques for evolutionary algorithms,” in Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Piscataway, NJ. doi: 10.1109/CEC.2014.6900618


Keller, J. M., Gray, M. R., and Givens, J. A. (1985). A fuzzy K-nearest neighbor algorithm. IEEE Trans. Syst. Man Cybern. 15, 580–585. doi: 10.1109/TSMC.1985.6313426


Kennedy, J., and Eberhart, R. (1995). “Particle swarm optimization,” in Proceedings of the ICNN’95 – International Conference on Neural Networks, Houston, TX.


Li, C., Dong, M., Li, J., Xu, G., Chen, X., Liu, W., et al. (2022). Efficient medical big data management with keyword-searchable encryption in healthchain. IEEE Syst. J. 1:12. doi: 10.1109/JSYST.2022.3173538


Li, Y., Zhao, D., Liu, G., Liu, Y., Bano, Y., Ibrohimov, A., et al. (2022). Intradialytic hypotension prediction using covariance matrix-driven whale optimizer with orthogonal structure-assisted extreme learning machine. Front. Neuroinform. 16:956423. doi: 10.3389/fninf.2022.956423


Li, C., Li, J., Chen, H., and Heidari, A. A. (2021). Memetic harris hawks optimization: Developments and perspectives on project scheduling and QoS-aware web service composition. Expert Syst. Applic. 171:114529. doi: 10.1016/j.eswa.2020.114529


Li, D., Zhang, S., and Ma, X. (2021). Dynamic module detection in temporal attributed networks of cancers. IEEE/ACM Trans. Comput. Biol. Bioinform. doi: 10.1109/TCBB.2021.3069441 [Epub ahead of print].


Li, H., Zhao, X., Wang, Y., Lou, X., Chen, S., Deng, H., et al. (2021). Damaged lung gas exchange function of discharged COVID-19 patients detected by hyperpolarized 129Xe MRI. Sci. Adv. 7:eabc8180. doi: 10.1126/sciadv.abc8180


Li, L., Gao, Z., Wang, Y., Zhang, M., Ni, J., Zheng, C., et al. (2021). SCMFMDA: Predicting microRNA-disease associations based on similarity constrained matrix factorization. PLoS Comput. Biol. 17:e1009165. doi: 10.1371/journal.pcbi.1009165


Li, X., Park, S., Paknezhad, M., Dinish, U. S., Binte Ebrahim Attia, A., Weng, Y., et al. (2021). “Atopic Dermatitis Classification Models of 3D Optoacoustic Mesoscopic Images,” in Proceedings of the European Conferences on Biomedical Optics 2021 (ECBO), Munich. doi: 10.1117/12.2615991


Li, J., Chen, C., Chen, H., and Tong, C. (2017). Towards context-aware social recommendation via individual trust. Knowledge-Based Syst. 127, 58–66. doi: 10.1016/j.knosys.2017.02.032


Li, J., Zheng, X., Chen, S., Song, W., and Chen, D. (2014). An efficient and reliable approach for quality-of-service-aware service composition. Inf. Sci. 269, 238–254. doi: 10.1016/j.ins.2013.12.015


Li, J.-Y., Zhan, Z., Wang, C., Jin, H., and Zhang, J. (2020). Boosting data-driven evolutionary algorithm with localized data generation. IEEE Trans. Evol. Comput. 24, 923–937. doi: 10.1109/TEVC.2020.2979740


Li, S., Chen, H., Wang, M., Heidari, A., and Mirjalili, S. (2020). Slime mould algorithm: A new method for stochastic optimization. Fut. Gener. Comput. Syst. 111, 300–323. doi: 10.1016/j.future.2020.03.055


Li, Y., Li, X., Hong, J., Wang, Y., Fu, J., Yang, H., et al. (2020). Clinical trials, progression-speed differentiating features and swiftness rule of the innovative targets of first-in-class drugs. Brief. Bioinform. 21, 649–662. doi: 10.1093/bib/bby130


Liu, G., Jia, W., Wang, M., Heidari, A. A., Chen, H., Luo, Y., et al. (2020). Predicting cervical hyperextension injury: A covariance guided sine cosine support vector machine. IEEE Access 8, 46895–46908. doi: 10.1109/ACCESS.2020.2978102


Liu, L., Zhao, D., Yu, F., Heidari, A., Li, C., Ouyang, J., et al. (2021). Ant colony optimization with Cauchy and greedy Levy mutations for multilevel COVID 19 X-ray image segmentation. Comp. Biol. Med. 136:104609. doi: 10.1016/j.compbiomed.2021.104609


Liu, S., An, J., Zhao, J., Zhao, S., Lv, H., and Wang, S. (2021). Drug-target interaction prediction based on multisource information weighted fusion. Contrast Media Mol. Imaging 2021:6044256. doi: 10.1155/2021/6044256


Liu, S., Yang, B., Wang, Y., Tian, J., Yin, L., and Zheng, W. (2022). 2D/3D multimode medical image registration based on normalized cross-correlation. Appl. Sci. 12:2828. doi: 10.3390/app12062828


Liu, Y., Heidari, A. A., Cai, Z., Liang, G., Chen, H., Pan, Z., et al. (2022). Simulated annealing-based dynamic step shuffled frog leaping algorithm: Optimal performance design and feature selection. Neurocomputing 503, 325–362. doi: 10.1016/j.neucom.2022.06.075


Lu, C., Gao, L., and Yi, J. (2018). Grey wolf optimizer with cellular topological structure. Expert Syst. Applic. 107, 89–114. doi: 10.1016/j.eswa.2018.04.012


Lu, S., Yang, B., Xiao, Y., and Liu, S. (2023). Iterative reconstruction of low-dose CT based on differential sparse. Biomed. Signal Process. Control 79:104204. doi: 10.1016/j.bspc.2022.104204


Ma, X., Sun, P. G., and Gong, M. (2020). An integrative framework of heterogeneous genomic data for cancer dynamic modules based on matrix decomposition. IEEE/ACM Trans. Comput. Biol. Bioinform. 19, 305–316. doi: 10.1109/TCBB.2020.3004808


Mailagaha Kumbure, M., Luukka, P., and Collan, M. (2020). A new fuzzy k-nearest neighbor classifier based on the Bonferroni mean. Pattern Recogn. Lett. 140, 172–178. doi: 10.1016/j.patrec.2020.10.005


Maintz, L., Welchowski, T., Herrmann, N., Brauer, J., Kläschen, A., Fimmers, R., et al. (2021). Machine learning–based deep phenotyping of atopic dermatitis: Severity-associated factors in adolescent and adult patients. JAMA Dermatol. 157, 1414–1424. doi: 10.1001/jamadermatol.2021.3668


Mirjalili, S. (2015). Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowledge-Based Syst. 89, 228–249. doi: 10.1016/j.knosys.2015.07.006


Mirjalili, S. (2016). SCA: A sine cosine algorithm for solving optimization problems. Knowledge-Based Syst. 96, 120–133. doi: 10.1016/j.knosys.2015.12.022


Mirjalili, S., and Lewis, A. (2016). The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67. doi: 10.1016/j.advengsoft.2016.01.008


Mirjalili, S., Mirjalili, S. M., and Lewis, A. (2014). Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61. doi: 10.1016/j.advengsoft.2013.12.007


Nagra, A. A., Han, F., Ling, Q., and Mehta, S. (2019). An improved hybrid method combining gravitational search algorithm with dynamic multi swarm particle swarm optimization. IEEE Access 7, 50388–50399. doi: 10.1109/ACCESS.2019.2903137


Nenavath, H., and Jatoth, R. K. (2018). Hybridizing sine cosine algorithm with differential evolution for global optimization and object tracking. Appl. Soft Comput. 62, 1019–1043. doi: 10.1016/j.asoc.2017.09.039


Nenavath, H., Kumar Jatoth, D. R., and Das, D. S. (2018). A synergy of the sine-cosine algorithm and particle swarm optimizer for improved global optimization and object tracking. Swarm Evol. Comput. 43, 1–30. doi: 10.1016/j.swevo.2018.02.011


Nobile, M. S., Cazzaniga, P., Besozzi, D., Colombo, R., Mauri, G., and Pasi, G. (2018). Fuzzy Self-Tuning PSO: A settings-free algorithm for global optimization. Swarm Evol. Comput. 39, 70–85. doi: 10.1016/j.swevo.2017.09.001


Poto, R., Quinti, I., Marone, G., Taglialatela, M., de Paulis, A., Casolaro, V., et al. (2022). IgG Autoantibodies Against IgE from Atopic Dermatitis Can Induce the Release of Cytokines and Proinflammatory Mediators from Basophils and Mast Cells. Front. Immunol. 13:880412. doi: 10.3389/fimmu.2022.880412


Qi, A., Zhao, D., Yu, F., Heidari, A., Chen, H., and Xiao, L. (2022a). Directional mutation and crossover for immature performance of whale algorithm with application to engineering optimization. J. Comput. Design Eng. 9, 519–563. doi: 10.1093/jcde/qwac014


Qi, A., Zhao, D., Yu, F., Heidari, A., Wu, Z., Cai, Z., et al. (2022b). Directional mutation and crossover boosted ant colony optimization with application to COVID-19 X-ray image segmentation. Comput. Biol. Med. 148:105810. doi: 10.1016/j.compbiomed.2022.105810


Qiu, S., Zhao, H., Jiang, N., Wang, Z., Liu, L., An, Y., et al. (2022). Multi-sensor information fusion based on machine learning for real applications in human activity recognition: State-of-the-art and research challenges. Inf. Fus. 80, 241–265. doi: 10.1016/j.inffus.2021.11.006


Qu, C., Zeng, Z., Dai, J., Yi, Z., and He, W. A. (2018). Modified sine-cosine algorithm based on neighborhood search and greedy levy mutation. Comput. Intell. Neurosci. 2018:4231647. doi: 10.1155/2018/4231647


Rahnamayan, S., Tizhoosh, H. R., and Salama, M. M. A. (2007). A novel population initialization method for accelerating evolutionary algorithms. Comput. Math. Applic. 53, 1605–1614. doi: 10.1016/j.camwa.2006.07.013


Rehbinder, E. M., Advocaat Endre, K., Lødrup Carlsen, K., Asarnoj, A., Stensby Bains, K., Berents, T., et al. (2020). Predicting skin barrier dysfunction and atopic dermatitis in early infancy. J. Allergy Clin. Immunol. 8, 664–673. doi: 10.1016/j.jaip.2019.09.014


Socha, K., and Dorigo, M. (2008). Ant colony optimization for continuous domains. Eur. J. Oper. Res. 185, 1155–1173. doi: 10.1016/j.ejor.2006.06.046


Song, S., Wang, P., Heidari, A. A., Wang, M., Zhao, X., Chen, H., et al. (2021). Dimension decided Harris hawks optimization with Gaussian mutation: Balance analysis and diversity patterns. Knowledge-Based Syst. 215:106425. doi: 10.1016/j.knosys.2020.106425


Spergel, J. M. (2021). The atopic march: Where we are going? Can we change it? Ann. Allergy Asthma Immunol. 127, 283–284. doi: 10.1016/j.anai.2021.06.022


Storn, R., and Price, K. (1997). Differential Evolution – A Simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optimiz. 11, 341–359. doi: 10.1023/A:1008202821328


Su, Y., Li, S., Zheng, C., and Zhang, X. (2019). A heuristic algorithm for identifying molecular signatures in cancer. IEEE Trans. NanoBiosci. 19, 132–141. doi: 10.1109/TNB.2019.2930647


Suhendra, R., Arnia, F., Idroes, R., Earlia, N., and Suhartono, E. (2019). “A Novel Approach to Multi-class Atopic Dermatitis Disease Severity Scoring using Multi-class SVM,” in Proceedings of the 2019 IEEE International Conference on Cybernetics and Computational Intelligence (CyberneticsCom), Banda Aceh. doi: 10.1109/CYBERNETICSCOM.2019.8875693


Tang, H., Xu, Y., Lin, A., Heidari, A. A., Wang, M., Chen, H., et al. (2020). Predicting green consumption behaviors of students using efficient firefly grey wolf-assisted K-nearest neighbor classifiers. IEEE Access 8, 35546–35562. doi: 10.1109/ACCESS.2020.2973763


Tu, J., Chen, H., Wang, M., and Gandomi, A. (2021b). The colony predation algorithm. J. Bionic Eng. 18, 674–710. doi: 10.1007/s42235-021-0050-y


Tu, J., Chen, H., Liu, J., Heidar, A., Zhang, X., Wang, M., et al. (2021a). Evolutionary biogeography-based whale optimization methods with communication structure: Towards measuring the balance. Knowledge-Based Syst. 212:106642. doi: 10.1016/j.knosys.2020.106642


Tu, S., Rehman, O., Rehman, S., Ullah, S., Waqas, M., and Zhu, R. (2020). A Novel Quantum Inspired Particle Swarm Optimization Algorithm for Electromagnetic Applications. IEEE Access 8, 21909–21916. doi: 10.1109/ACCESS.2020.2968980


Wang, G.-G., Gao, D., and Pedrycz, W. (2022). Solving multi-objective fuzzy job-shop scheduling problem by a hybrid adaptive differential evolution algorithm. IEEE Trans. Ind. Inform. 18, 8519–8528. doi: 10.1109/TII.2022.3165636


Wang, M., Chen, H., Yang, B., Zhao, X., Hu, L., Cai, Z., et al. (2017). Toward an optimal kernel extreme learning machine using a chaotic moth-flame optimization strategy with applications in medical diagnoses. Neurocomputing 267, 69–84. doi: 10.1016/j.neucom.2017.04.060


Wang, X., Fu, X., Dong, J., and Jiang, J. (2021). Dynamic modified chaotic particle swarm optimization for radar signal sorting. IEEE Access 9, 88452–88466. doi: 10.1109/ACCESS.2021.3091005


Williams, H. C. (1996). Diagnostic criteria for atopic dermatitis. Lancet 348, 1391–1392. doi: 10.1016/S0140-6736(05)65466-9


Williams, H. C., Burney, P., Pembroke, A., and Hay, R. (1994). The U.K. Working Party’s Diagnostic Criteria for Atopic Dermatitis. I. Derivation of a minimum set of discriminators for atopic dermatitis. Br. J. Dermatol. 131, 383–396. doi: 10.1111/j.1365-2133.1994.tb08530.x


Williams, H. C., Burney, P., Pembroke, A., and Hay, R. (1996). Validation of the U.K. diagnostic criteria for atopic dermatitis in a population setting. U.K. Diagnostic Criteria for Atopic Dermatitis Working Party. Br. J. Dermatol. 135, 12–17. doi: 10.1111/j.1365-2133.1996.tb03599.x


Wolpert, D. H., and Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1, 67–82. doi: 10.1109/4235.585893


Wu, S., Mao, P., Li, R., Cai, Z., Heidari, A., Xia, J., et al. (2021). Evolving fuzzy k-nearest neighbors using an enhanced sine cosine algorithm: Case study of lupus nephritis. Comput. Biol. Med. 135:104582. doi: 10.1016/j.compbiomed.2021.104582


Wu, S.-H., Zhan, Z.-H., and Zhang, J. (2021). SAFE Scale-adaptive fitness evaluation method for expensive optimization problems. IEEE Trans. Evol. Comput. 25, 478–491. doi: 10.1109/TEVC.2021.3051608


Wu, Z., Li, G., Shen, S., Lian, X., Chen, E., and Xu, G. (2021). Constructing dummy query sequences to protect location privacy and query privacy in location-based services. World Wide Web 24, 25–49. doi: 10.1007/s11280-020-00830-x


Wu, Z., Wang, R., Li, Q., Lian, X., Xu, G., Chen, E., et al. (2020). A location privacy-preserving system based on query range cover-up for location-based services. IEEE Trans. Vehic. Technol. 69, 5244–5254. doi: 10.1109/TVT.2020.2981633


Wu, Z., Xuan, S., Xie, J., Lin, C., and Lu, C. (2022). How to ensure the confidentiality of electronic medical records on the cloud: A technical perspective. Comput. Biol. Med. 147:105726. doi: 10.1016/j.compbiomed.2022.105726


Xia, J., Yang, D., Zhou, H., Chen, Y., Zhang, H., Liu, T., et al. (2022). Evolving kernel extreme learning machine for medical diagnosis via a disperse foraging sine cosine algorithm. Comput. Biol. Med. 141:105137. doi: 10.1016/j.compbiomed.2021.105137


Yang, B., Xu, S., Chen, H., Zheng, W., and Liu, C. (2022). Reconstruct dynamic soft-tissue with stereo endoscope based on a single-layer network. IEEE Trans. Image Process. 31, 5828–5840. doi: 10.1109/TIP.2022.3202367


Yang, X., Zhao, D., Yu, F., Heidari, A. A., Bano, Y., Ibrohimov, A., et al. (2022). An optimized machine learning framework for predicting intradialytic hypotension using indexes of chronic kidney disease-mineral and bone disorders. Comput. Biol. Med. 145:105510. doi: 10.1016/j.compbiomed.2022.105510


Yang, X.-S. (2010). “A New Metaheuristic Bat-Inspired Algorithm,” in Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), eds J. R. González, D. A. Pelta, C. Cruz, G. Terrazas, and N. Krasnogor (Berlin: Springer Berlin Heidelberg). doi: 10.1007/978-3-642-12538-6_6


Yang, Y., Chen, H., Heidari, A., and Gandomi, A. H. (2021). Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl. 177:114864. doi: 10.1016/j.eswa.2021.114864

Yang, Z., Ma, J., Chen, H., Zhang, J., and Chang, Y. (2022). Context-aware attentive multi-level feature fusion for named entity recognition. IEEE Trans. Neural Netw. Learn. Syst. doi: 10.1109/TNNLS.2022.3178522 [Epub ahead of print].

Ye, H., Wu, P., Zhu, T., Xiao, Z., Zhang, X., Zheng, L., et al. (2021). Diagnosing coronavirus disease 2019 (COVID-19): Efficient Harris hawks-inspired fuzzy K-nearest neighbor prediction methods. IEEE Access 9, 17787–17802. doi: 10.1109/ACCESS.2021.3052835

Ye, X., Liu, W., Li, H., Wang, M., Chi, C., Liang, G., et al. (2021). Modified whale optimization algorithm for solar cell and PV module parameter identification. Complexity 2021:8878686. doi: 10.1155/2021/8878686

Yu, H., Yuan, K., Li, W., Zhao, N., Chen, W., Huang, C., et al. (2021). Improved butterfly optimizer-configured extreme learning machine for fault diagnosis. Complexity 2021:6315010. doi: 10.1155/2021/6315010

Zhang, L., Wang, J., Wang, W., Jin, Z., Su, Y., and Chen, H. (2022). Smart contract vulnerability detection combined with multi-objective detection. Comput. Netw. 217:109289. doi: 10.1016/j.comnet.2022.109289

Zhang, X., Zheng, J., Wang, D., Tang, G., Zhou, Z., and Lin, Z. (2022). Structured sparsity optimization with non-convex surrogates of l2,0-norm: A unified algorithmic framework. IEEE Trans. Pattern Anal. Mach. Intell. doi: 10.1109/TPAMI.2022.3213716 [Epub ahead of print].

Zhang, X. Q., Hu, W., Xie, N., Bao, H., and Maybank, S. A. (2015). Robust tracking system for low frame rate video. Int. J. Comput. Vis. 115, 279–304. doi: 10.1007/s11263-015-0819-8

Zhang, Z., Wang, L., Zheng, W., Yin, L., Hu, R., Yang, B., et al. (2022). Endoscope image mosaic based on pyramid ORB. Biomed. Signal Process. Control 71:103261. doi: 10.1016/j.bspc.2021.103261

Zhao, D., Liu, L., Yu, F., Heidari, A. A., Wang, M., Liang, G., et al. (2021). Chaotic random spare ant colony optimization for multi-threshold image segmentation of 2D Kapur entropy. Knowledge-Based Syst. 216:106510. doi: 10.1016/j.knosys.2020.106510

Zhen, L., Liu, Y., Dongsheng, W., and Wei, Z. (2020). Parameter estimation of software reliability model and prediction based on hybrid wolf pack algorithm and particle swarm optimization. IEEE Access 8, 29354–29369. doi: 10.1109/ACCESS.2020.2972826

Zhou, Q., Guo, S., Xu, L., Guo, X., Williams, H., Xu, H., et al. (2021). Global optimization of the hydraulic-electromagnetic energy-harvesting shock absorber for road vehicles with human-knowledge-integrated particle swarm optimization scheme. IEEE/ASME Trans. Mechatron. 26, 1225–1235. doi: 10.1109/TMECH.2021.3055815

Zhou, W., Wang, P., Heidari, A., Wang, M., and Chen, H. (2021). Multi-core sine cosine optimization: Methods and inclusive analysis. Expert Syst. Applic. 164:113974. doi: 10.1016/j.eswa.2020.113974

Zhu, F., Li, X., Yang, S., and Chen, Y. (2018). Clinical success of drug targets prospectively predicted by in silico study. Trends Pharmacol. Sci. 39, 229–231. doi: 10.1016/j.tips.2017.12.002

Zhuang, Y., Jiang, N., and Xu, Y. (2022). Progressive distributed and parallel similarity retrieval of large CT image sequences in mobile telemedicine networks. Wireless Commun. Mob. Comput. 2022, 1–13. doi: 10.1155/2022/6458350

Zuo, W.-L., Wang, Z.-Y., Liu, T., and Chen, H.-L. (2013). Effective detection of Parkinson’s disease using an adaptive fuzzy K-nearest neighbor approach. Biomed. Signal Process. Control 8, 364–373. doi: 10.1016/j.bspc.2013.02.006

Keywords: swarm intelligence optimization, FKNN, feature selection, machine learning, atopic dermatitis

Citation: Li Y, Zhao D, Xu Z, Heidari AA, Chen H, Jiang X, Liu Z, Wang M, Zhou Q and Xu S (2023) bSRWPSO-FKNN: A boosted PSO with fuzzy K-nearest neighbor classifier for predicting atopic dermatitis disease. Front. Neuroinform. 16:1063048. doi: 10.3389/fninf.2022.1063048

Received: 06 October 2022; Accepted: 05 December 2022;
Published: 16 January 2023.

Edited by: Fernando De La Prieta, University of Salamanca, Spain

Reviewed by: Shuaiqi Liu, Hebei University, China; Essam Halim Houssein, Minia University, Egypt

Copyright © 2023 Li, Zhao, Xu, Heidari, Chen, Jiang, Liu, Wang, Zhou and Xu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Dong Zhao, zd-hy@163.com; Huiling Chen, chenhuiling.jlu@gmail.com; Suling Xu, xusuling@nub.edu.cn
