AOPs-SVM: A Sequence-Based Classifier of Antioxidant Proteins Using a Support Vector Machine

Antioxidant proteins play important roles in countering oxidative damage in organisms. Because identifying antioxidant proteins through biological experiments is time-consuming and costly, their accurate computational identification is a challenging but valuable task. For these reasons, we propose a machine-learning model, named AOPs-SVM, built on sequence features and a support vector machine. In a jackknife cross-validation test on the benchmark dataset, the proposed AOPs-SVM classifier achieved a sensitivity of 0.68, a specificity of 0.985, an average accuracy of 0.942, an MCC of 0.741, and an AUC of 0.832, outperforming existing classifiers. The experimental results demonstrate that AOPs-SVM is an effective classifier and contributes to research on antioxidant proteins. A web server providing open access was built at http://server.malab.cn/AOPs-SVM/index.jsp.


INTRODUCTION
The antioxidant system in organisms has the ability to prevent damage caused by reactive oxygen species (ROS) (Siswoyo et al., 2011). ROS, which include hydrogen peroxide, singlet oxygen, the superoxide anion radical, the hydroxyl radical, and nitric oxide, are products of metabolism and affect fatty acids, proteins, and DNA (Sögüt et al., 2003). An excess of ROS or depression of the antioxidant system can lead to oxidative stress (Zima et al., 2001; Krishnaiah et al., 2007), which in turn can lead to a series of pathological conditions such as heart disease, malaria, neurodegenerative diseases, AIDS, and cancer, as well as the aging process (Ames, 1983; Gey, 1990; Ames et al., 1993; Smith et al., 1996; Diaz et al., 1997; Yang et al., 2019a).
Natural antioxidants are regarded as the second antioxidant defense line in organisms (Yigit et al., 2014) and have recently attracted increasing attention from researchers. Such antioxidants are mainly extracted from dietary sources such as fruits, vegetables, and foods rich in carotenoids and vitamin A (Geetha et al., 2002; Podsedek, 2007; Tang et al., 2019a,b). When these antioxidants are consumed, they scavenge ROS and minimize oxidative stress, thus reducing the risk to organisms. Many extracted or purified proteins are used as natural antioxidants, including soy proteins, lactoferrin, casein, β-lactoglobulin, canola proteins, yam dioscorin, egg albumen proteins, maize zein, egg yolk phosvitin, and potato patatin. In addition, proteins extracted from fertilized eggs, jellyfish, white beans, chickpeas, melinjo (Gnetum gnemon) seeds, and Ginkgo biloba seeds have also been reported to have antioxidant properties (Rajalakshmi and Narasimhan, 1996; Chiue et al., 1997; Maheswari et al., 1997; Kouoh et al., 1999; Satué-Gracia et al., 2000; Hou et al., 2001; Liu et al., 2003; Cumby et al., 2008; Huang et al., 2010; Li et al., 2017). In vitro assay systems are commonly employed to identify the antioxidant activity of a new protein, including scavenging effects on DPPH and ABTS, inhibition of linoleic acid autoxidation, chelating or reducing-power capabilities, and protection against hydroxyl radical-mediated DNA damage (Liu et al., 2003; Dastmalchi et al., 2008; Sachindra and Bhaskar, 2008; Huang et al., 2010; Fu et al., 2018). However, in vitro experiments are time-consuming and inefficient. Therefore, to increase the success rate, it is desirable to develop a classifier to screen for antioxidant proteins prior to the in vitro experiment.
Recently, several researchers have taken computational approaches to identifying antioxidant proteins. Fernández-Blanco et al. used star graph topological indices and random forests to develop a model for identifying antioxidant proteins (Fernández-Blanco et al., 2013). However, when analyzing their dataset, we found that redundant sequences were not removed from the training data. The resulting sequence similarity inflates performance estimates, which makes the reported results unreliable. In 2013, Feng et al. developed a Naive Bayes model based on sequence features (Feng et al., 2013b), and in 2016, they constructed a model named AodPred based on a support vector machine with 3-gap dipeptide features (Feng et al., 2016). Xu et al. also used a support vector machine to construct a model to identify antioxidant proteins. The latter two models were built on the same training dataset, from which redundant sequences had been removed. Analysis of their results indicates that there is room to improve the identification accuracy. The training set of our model is the same as that of the two models mentioned above. In the bioinformatics field, applying computational methods to identify a particular protein mainly relies on machine-learning techniques. The process can be divided into two main steps: (1) extracting features from protein sequences, and (2) constructing classifiers.
The first step is to extract discriminative features from a protein sequence. Sequence-order information, or its combination with biochemical characteristics of proteins, is a common approach. The most popular is the pseudo amino acid composition (PseAAC) method proposed by Shen and Chou (2006); many methods based on PseAAC have since emerged (Liu et al., 2015; Zhu et al., 2015, 2018; Chen et al., 2016; Tang et al., 2016; Yang et al., 2016). There are also features that capture evolutionary and secondary structure information, primarily the PSI-BLAST (Altschul et al., 1997) and PSI-PRED (Jones, 1999) profiles. A dimension-reduction algorithm is then often applied to reduce the redundant information of the extracted features (Liu, 2017; Tang et al., 2018; Xue et al., 2018; Tan et al., 2019; Zhu et al., 2019); these include ANOVA (Anderson, 2001; Ding and Li, 2015; Li et al., 2019b), mRMR (Peng et al., 2005), and MRMD (Zou et al., 2016b). These algorithms rank the features using certain criteria and then select an optimal subset. In the second step, classification algorithms are applied to the optimal feature set to construct a model. The support vector machine has been widely used and has obtained good results (Ding and Dubchak, 2001; Shamim et al., 2007; Yang and Chen, 2011; Feng et al., 2013a; Zou et al., 2016a; Ding et al., 2017; Chen et al., 2019). Other classification methods, such as the hidden Markov model (Bouchaffra and Tan, 2006), random forests (Dehzangi et al., 2010), and neural networks (Chen et al., 2007), have also been used in this step, as have ensemble classifiers: for example, Zou et al. proposed libD3C (Lin et al., 2014), which integrates multiple weak classifiers and votes on the final result.

Benchmark Dataset
We used the same dataset as Feng et al. and Xu et al. The positive dataset was generated as follows.
(1) Sequences annotated as "antioxidant" in the Universal Protein Resource (UniProt) (2014_02 release) were selected. (2) Sequences containing ambiguous residues such as "B," "X," and "Z" were eliminated because of their uncertain meaning. (3) Only protein sequences labeled "reviewed" were considered, to ensure that the selected sequences had been verified through experiments. The negative dataset was constructed from a list of PISCES-culled PDB (Wang and Dunbrack, 2003) proteins with sequence identity values <20%, in the same manner as Fernández-Blanco et al. (2013). These steps resulted in 710 positive samples and 1,567 negative samples. To avoid a low-quality dataset distorting the prediction results, the CD-HIT program (Fu et al., 2012) was applied with a 60% threshold to obtain the benchmark dataset. The final dataset included 253 antioxidant proteins and 1,552 non-antioxidant proteins, which can be expressed as follows:

Set = Set+ ∪ Set−,

where Set+ represents the positive dataset (the 253 antioxidant proteins), Set− represents the negative dataset (the 1,552 non-antioxidant proteins), and the "∪" symbol indicates that the benchmark dataset consists of the positive and negative datasets. The proportion of positive to negative samples is ∼1:6, i.e., an unbalanced dataset.
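The ambiguous-residue filter in step (2) and the class imbalance noted above can be sketched in Python. The sequences and record names here are illustrative stand-ins, not the actual UniProt entries:

```python
# Drop sequences containing ambiguous residues, as in step (2) of the
# positive-dataset construction. Toy records stand in for UniProt data.
AMBIGUOUS = {"B", "X", "Z"}

def is_unambiguous(seq: str) -> bool:
    """True if the sequence contains none of the ambiguous residues."""
    return not (set(seq.upper()) & AMBIGUOUS)

seqs = {"P1": "MKTAYIAKQR", "P2": "MKXAYB", "P3": "GGAVLIM"}
kept = {name: s for name, s in seqs.items() if is_unambiguous(s)}
print(sorted(kept))  # ['P1', 'P3']  (P2 is removed: contains X and B)

# The final benchmark ratio of negatives to positives is ~6:1.
print(round(1552 / 253, 1))  # 6.1
```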

Feature Extraction
In this study, we used the feature extraction algorithm (abbreviated as 473D) proposed by Wei et al. (2015). This algorithm generates 473 discrete features based on the PSI-BLAST (Altschul et al., 1997) and PSI-PRED (Jones, 1999) profiles; the former contains the evolutionary information and the latter the secondary structure information of the protein sequence. First, a protein with L amino acid residues is defined as:

S = A1 A2 A3 ... AL,

where Ai denotes the ith amino acid residue of the protein sequence. Then, the 473D feature is extracted from the protein sequence in the following steps.
(1) Extract 20 features from the position-specific scoring matrix (PSSM). PSI-BLAST produces an L × 20 matrix M_PSSM whose entry p_i,n equals the mutation score of the ith amino acid residue in protein sequence S against the nth amino acid residue in the amino acid alphabet. The entries of M_PSSM are grouped by column and averaged to form 20 values, which are combined into a vector F_pssm of length 20 (Wei et al., 2015):

F_pssm = {f_1, f_2, ..., f_20}, with f_n = (1/L) Σ_{i=1..L} p_i,n,

where f_n equals the average score of the residues in sequence S mutating to the nth amino acid residue in the evolutionary process.
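The column-averaging step can be sketched in a few lines; the toy 4 × 20 matrix below stands in for a real PSI-BLAST PSSM:

```python
import numpy as np

# Step (1): average each of the 20 PSSM columns to get F_pssm.
# Toy PSSM for a length-4 sequence; real values come from PSI-BLAST.
pssm = np.array([
    [1.0] * 20,
    [3.0] * 20,
    [2.0] * 20,
    [2.0] * 20,
])

f_pssm = pssm.mean(axis=0)  # one average mutation score per alphabet residue

print(f_pssm.shape)  # (20,)
print(f_pssm[0])     # 2.0  (mean of 1, 3, 2, 2)
```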
(2) Extract 20 one-gram and 400 two-gram features from the frequency matrix. Each entry of the PSSM, multiplied by the corresponding background frequency, is taken as an exponent with base two, and the frequency matrix is obtained by this power operation (Wei et al., 2015):

M_frequency = [2^(p_i,n × bf_n)], i = 1..L, n = 1..20,

where p_i,n is the entry in the ith row and nth column of the PSSM and bf_n is the background frequency of the nth amino acid in the amino acid alphabet (the values of bf_n are provided at http://server.malab.cn/AOPs-SVM/data.jsp). A consensus sequence is generated from the first to the Lth row of M_frequency per the following criterion: for the ith row, determine the largest entry 2^(p_i,j × bf_j) across the columns and choose the jth amino acid of the alphabet. Repeating this step L times generates a new consensus sequence S_c. Since each amino acid residue in S is thereby replaced by its most frequent substitute, S_c can be viewed as the evolutionary result of S. One-gram and two-gram algorithms are then used to extract occurrence-frequency features from S_c. The one-gram algorithm calculates the frequency of each of the 20 amino acid residues in the sequence, and the two-gram algorithm calculates the frequency of each of the 20 × 20 possible adjacent residue pairs (Wei et al., 2015):

F_1-gram(A_j) = O(A_j)/L, F_2-gram(A_i A_j) = O(A_i A_j)/(L − 1),

where O(x) is the number of occurrences of x in S_c, A_j is a residue of the amino acid alphabet, A_i A_j is an adjacent residue pair, and L is the sequence length. Then, by proportionally weighting F_1-gram and F_2-gram, the 420 features are obtained (Wei et al., 2015). (3) Extract six features from the PSI-PRED secondary structure sequence.
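The consensus-sequence construction and the n-gram counts can be sketched as follows. The toy frequency matrix is a stand-in for 2^(p_i,n × bf_n); only the per-row argmax matters for the consensus:

```python
from collections import Counter

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"

def consensus(freq_matrix):
    """Per row, pick the alphabet residue with the largest entry."""
    return "".join(
        ALPHABET[max(range(20), key=row.__getitem__)] for row in freq_matrix
    )

def one_gram(seq):
    """Frequency of each of the 20 residues in the sequence."""
    counts = Counter(seq)
    return {a: counts[a] / len(seq) for a in ALPHABET}

def two_gram(seq):
    """Frequency of each adjacent residue pair in the sequence."""
    pairs = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    return {p: c / (len(seq) - 1) for p, c in pairs.items()}

# Toy 3 x 20 frequency matrix: row maxima at columns 0 (A), 1 (C), 0 (A).
toy = [[0] * 20 for _ in range(3)]
toy[0][0], toy[1][1], toy[2][0] = 5, 7, 2

s_c = consensus(toy)
print(s_c)                  # ACA
print(one_gram(s_c)["A"])   # 0.666...
print(two_gram(s_c)["AC"])  # 0.5
```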
The PSI-PRED program generates a secondary structure sequence S_structure from protein sequence S (Wei et al., 2015), in which each amino acid residue of the protein sequence is replaced by one of the letters H, E, and C, representing the secondary structure states of helix, strand, and coil, respectively. From S_structure, five features are extracted (Wei et al., 2015), built from Count_h, Count_e, and Count_c, the total numbers of H, E, and C in S_structure; Posi_h, Posi_e, and Posi_c, the position indices of H, E, and C (normalized by the sequence length, e.g., Posi_e/(L(L − 1))); and Max_Length_e and Max_Length_h, the largest numbers of consecutive E and H. Then, S_structure is transformed into the segment sequence S_segment by deleting the coil states; runs of consecutive H and E are treated as segments H and E and expressed as α and β, respectively (Zhang et al., 2011). For instance, the structure sequence EECCCHHHEEECHHHEECCEE transforms into the segment sequence βαβαββ. Finally, the frequency of the segment pattern βαβ in S_segment is defined as the sixth feature (Wei et al., 2015), where Count_βαβ is the total number of βαβ segments.
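The segment-sequence transformation and the βαβ count can be sketched directly from the worked example in the text (using "a" for α and "b" for β):

```python
import re

def to_segments(structure: str) -> str:
    """Collapse each run of H or E (coils act as separators) into one
    segment symbol: a = alpha (H run), b = beta (E run)."""
    return "".join(
        "a" if m.group(0)[0] == "H" else "b"
        for m in re.finditer(r"H+|E+", structure)
    )

def count_bab(segments: str) -> int:
    """Overlapping count of the beta-alpha-beta pattern."""
    return sum(1 for i in range(len(segments) - 2) if segments[i:i + 3] == "bab")

seg = to_segments("EECCCHHHEEECHHHEECCEE")
print(seg)             # bababb  (i.e., the text's segment sequence)
print(count_bab(seg))  # 2
```

Note that the runs must be found before coils are discarded; otherwise the trailing EE-CC-EE would merge into a single β segment instead of two.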
PSI-PRED also outputs an L × 3 probability matrix M_probability, where pro_i,1, pro_i,2, and pro_i,3 are the probability values of the ith amino acid residue being predicted as the secondary structure states "C," "H," and "E," respectively; the matrix thus has L rows. Three global structural features are calculated by averaging each column (Wei et al., 2015). Then M_probability is divided into λ sub-matrices, and the same three column averages are calculated separately for each, yielding λ × 3 local features. We chose λ = 8 in this study, represented as (Wei et al., 2015):

F_pro_local = {f_pro_local_1, f_pro_local_2, ..., f_pro_local_8},

where each f_pro_local_i consists of the three column averages of the ith sub-matrix. Therefore, there are 8 × 3 elements in the vector F_pro_local. Finally, the above features are concatenated in order to form the 473D feature vector (Wei et al., 2015).
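The global and local structural features can be sketched with NumPy; the random matrix below is a stand-in for real PSI-PRED probabilities:

```python
import numpy as np

# Global features: column means of the L x 3 probability matrix.
# Local features: split into lambda = 8 row blocks, take column means
# of each block, giving 8 x 3 = 24 values.
rng = np.random.default_rng(0)
L, lam = 80, 8
m_prob = rng.random((L, 3))          # columns stand in for P(C), P(H), P(E)

f_global = m_prob.mean(axis=0)       # 3 global structural features
blocks = np.array_split(m_prob, lam, axis=0)
f_local = np.concatenate([b.mean(axis=0) for b in blocks])

print(f_global.shape, f_local.shape)  # (3,) (24,)
```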

Feature Selection
Feature selection aims to select a subset of features to improve the generalization capacity of the learning models. The Max-Relevance-Max-Distance (MRMD) algorithm (Zou et al., 2016b) was utilized for feature selection. It has two steps: ranking the features and selecting an optimal feature set. First, the MRMD score of each feature vector is calculated as the sum of a relevance value and a distance value. The relevance value measures how related a feature is to the target class vector and equals the Pearson correlation coefficient (Xu and Deng, 2018) between the feature and the target class vector (Zou et al., 2016b):

RV_i = Σ_{k=1..N} (f_i^k − f̄_i)(c^k − c̄) / sqrt( Σ_{k=1..N} (f_i^k − f̄_i)² × Σ_{k=1..N} (c^k − c̄)² ),

where f̄_i = (1/N) Σ_{k=1..N} f_i^k and, similarly, c̄ = (1/N) Σ_{k=1..N} c^k. Here f_i is the ith feature vector, c is the target class vector (consisting of 0s and 1s in this study), RV_i is the relevance value of the ith feature vector, N is the number of elements in a feature vector (equal to the total number of samples in the dataset), and f_i^k denotes the kth element of f_i. The distance value measures feature redundancy and is calculated with the Euclidean distance function (Zou et al., 2016b; Dong et al., 2019):

DV_i = (1/(M − 1)) Σ_{j≠i} ED(f_i, f_j), with ED(f_i, f_j) = sqrt( Σ_{k=1..N} (f_i^k − f_j^k)² ),

where DV_i is the distance value of the ith feature vector, M is the number of features, and ED(f_i, f_j) denotes the Euclidean distance between the ith and jth feature vectors. The MRMD score of feature f_i is then MRMD_i = RV_i + DV_i (Zou et al., 2016b). Sorting the feature set in descending order of MRMD score yields a new feature set F′ = {f_1′, f_2′, ..., f_n′}. Candidate subsets are constructed by adding the features of F′ one at a time in ranking order: {f_1′}, {f_1′, f_2′}, ..., {f_1′, ..., f_n′}. Each subset is then fed into a random forest to construct a model, and the subset with the best performance is selected as the optimal feature set.
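A minimal sketch of the MRMD ranking step follows, assuming the relevance term uses the magnitude of the Pearson correlation and the distance term averages the pairwise Euclidean distances; function names and the toy data are ours, standing in for the 473D feature matrix:

```python
import numpy as np

def mrmd_scores(X, y):
    """Per-feature MRMD score: |Pearson correlation with y| (relevance)
    plus mean Euclidean distance to the other feature columns (distance)."""
    n = X.shape[1]
    rv = np.array([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in range(n)])
    dv = np.array([
        np.mean([np.linalg.norm(X[:, i] - X[:, j]) for j in range(n) if j != i])
        for i in range(n)
    ])
    return rv + dv

rng = np.random.default_rng(1)
X = rng.random((30, 5))
y = np.where(np.arange(30) < 15, 0.0, 1.0)  # binary class vector
X[:, 2] += y                                # make feature 2 class-correlated

scores = mrmd_scores(X, y)
ranking = np.argsort(scores)[::-1]  # descending: candidate subsets f1', f1'f2', ...
print(len(scores))  # 5
```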

Support Vector Machine
The support vector machine (SVM) has been widely used in bioinformatics and has performed excellently (Cao et al., 2014; Stephenson et al., 2019). SVM is a method based on the theory of the Vapnik-Chervonenkis dimension (Vapnik et al., 1994) and structural risk minimization. It maps low-dimensional data to a high-dimensional space and uses a hyperplane to separate differently labeled data. In this study, we chose the toolbox LIBSVM 3.21 (Chang and Lin, 2011) to execute the SVM; it can be downloaded from https://www.csie.ntu.edu.tw/~cjlin/libsvm/. The default kernel function, the radial basis function (RBF), was adopted, and the Python program grid.py in the LIBSVM 3.21 toolbox was used to search for the optimal values of the penalty constant C and the kernel width parameter γ. To correctly evaluate a model on an unbalanced dataset, the official website provides a tool that enables LIBSVM to conduct cross-validation with respect to other criteria, including F-score, AUC (Area Under Curve), precision, recall, and more (available at https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/eval/index.html).
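An RBF-SVM with a (C, γ) grid search can be sketched with scikit-learn, whose SVC class wraps LIBSVM; this is a stand-in for calling grid.py directly, with a coarsened grid and toy data for brevity, and F1 as the scoring criterion as in the paper:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Toy two-class data standing in for the 176D optimal feature set.
rng = np.random.default_rng(0)
X = rng.random((60, 4))
y = (X[:, 0] + X[:, 1] > 1).astype(int)

# Coarse powers-of-two grid for C and gamma (grid.py searches a finer one).
param_grid = {
    "C": [2.0 ** e for e in range(-5, 16, 5)],
    "gamma": [2.0 ** e for e in range(-15, 4, 5)],
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, scoring="f1", cv=3)
search.fit(X, y)

print(search.best_params_)  # best (C, gamma) pair under F1
```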

Proposed Classifier Flowchart
We proposed a sequence-based classifier using a support vector machine, named AOPs-SVM; a flowchart is presented in Figure 1. The AOPs-SVM procedure consists of three phases: (1) feature extraction, (2) feature selection, and (3) model generation. In phase (1), the input protein sequences are processed by the PSI-BLAST and PSI-PRED programs, and the resulting profiles are used to generate 473-dimension (473D) discrete vectors containing evolutionary and secondary structure information. In phase (2), these 473D vectors are fed into the MRMD method for ranking, and the optimal feature set is selected by random forest. In the model generation phase, the SVM is applied to generate a model on the optimal feature set. Lastly, this model is optimized by selecting the optimal values of the penalty constant C and the kernel width parameter γ via a grid search in terms of F1 score.

Measurement
There are three kinds of evaluation methods commonly used in bioinformatics: the independent test, k-fold cross-validation, and the jackknife test (Wei et al., 2017a,b, 2018; Chen et al., 2018; Liu et al., 2018a,b; Ding et al., 2019; Lv et al., 2019; Yang et al., 2019b). In a jackknife test, each sample is tested by a model trained on all other samples. In this study, we applied the jackknife test, as it is the most rigorous and least arbitrary method. Considering the unbalanced dataset used, sensitivity (Sn), specificity (Sp), accuracy (Acc), and the Matthews correlation coefficient (MCC) were employed as the evaluation metrics.

FIGURE 1 | The AOPs-SVM workflow. (A) In the feature extraction phase, two types of profiles are constructed using the PSI-BLAST and PSI-PRED programs; 473D discrete vectors are then generated by combining evolutionary and secondary structure information, including 20D PSSM features, 20D one-gram and 400D two-gram features, 6D secondary structure sequence features, and 27D global and local structural features. (B) In the feature selection phase, the 473D features are ranked by MRMD score and the optimal feature set is selected by random forest. (C) In the model generation phase, the optimal feature set is fed into the SVM to generate the AOPs-SVM model, which is optimized via a grid search.
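The jackknife protocol described above (each sample predicted by a model trained on all the others) can be sketched with a trivial stand-in classifier; the nearest-centroid rule and the toy data below replace the SVM purely for illustration:

```python
import numpy as np

def nearest_centroid_predict(X_train, y_train, x):
    """Trivial stand-in classifier: assign x to the closer class centroid."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (10, 2)), rng.normal(3, 1, (10, 2))])
y = np.array([0] * 10 + [1] * 10)

preds = []
for i in range(len(X)):
    mask = np.arange(len(X)) != i          # leave sample i out
    preds.append(nearest_centroid_predict(X[mask], y[mask], X[i]))

acc = np.mean(np.array(preds) == y)
print(acc)
```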
Frontiers in Bioengineering and Biotechnology | www.frontiersin.org

The F1 score was used as the criterion for optimizing the model.
The metrics are defined as:

Sn = TP / (TP + FN)
Sp = TN / (TN + FP)
Acc = (TP + TN) / (TP + FP + FN + TN)
MCC = (TP × TN − FP × FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN)),

where TP, FP, FN, and TN indicate the numbers of true positives, false positives, false negatives, and true negatives, respectively. In addition, the Area Under Curve (AUC) is an important metric that accurately measures the overall performance of the model: it is the area enclosed by the receiver operating characteristic (ROC) curve and the coordinate axes. The ROC curve is plotted with (1 − Sp) as the X-coordinate and Sn as the Y-coordinate. The larger the AUC value, the better the performance of the model.
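These definitions (plus the F1 score used for model optimization) can be checked numerically; the four counts below are illustrative, not results from the paper:

```python
import math

# Illustrative confusion-matrix counts.
TP, FP, FN, TN = 40, 10, 20, 130

sn = TP / (TP + FN)                                   # sensitivity (recall)
sp = TN / (TN + FP)                                   # specificity
acc = (TP + TN) / (TP + FP + FN + TN)                 # accuracy
mcc = (TP * TN - FP * FN) / math.sqrt(
    (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN)
)                                                     # Matthews corr. coeff.
f1 = 2 * TP / (2 * TP + FP + FN)                      # F1 score

print(round(sn, 3), round(sp, 3), round(acc, 3))  # 0.667 0.929 0.85
```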

Determination of Parameters
There are two groups of parameters to determine in the proposed classifier: those of the random forest in the feature selection phase, and those of the SVM in the model generation phase. The random forest parameters were initialized as follows: the number of trees was set to 100; the number of features used in random selection was set to 0; the seed of the random number generator was set to 1; and the maximum tree depth was set to 0 (unlimited). The grid.py parameter selection tool was applied to optimize the SVM in the model generation phase with the F1 criterion under the jackknife test. It searched for the optimal values of the penalty constant C and the kernel width parameter γ on a logarithmic scale: log2 C ranged over [−5, 15] with a step of 0.5, and log2 γ ranged over [3, −15] with a step of −0.5.
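The two search grids can be enumerated directly; the variable names are ours:

```python
import numpy as np

# log2(C) from -5 to 15 in steps of 0.5; log2(gamma) from 3 down to -15
# in steps of -0.5 (end offsets of 0.25 keep the endpoints inclusive).
log2_c = np.arange(-5, 15.25, 0.5)
log2_gamma = np.arange(3, -15.25, -0.5)

c_values = 2.0 ** log2_c          # candidate penalty constants
gamma_values = 2.0 ** log2_gamma  # candidate kernel widths

print(len(log2_c), len(log2_gamma))  # 41 37
```

Each (C, γ) pair in the resulting 41 × 37 grid is evaluated, and the pair with the best F1 is kept.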

Performance of the Proposed Classifier
The 473D features were extracted in the feature extraction phase and ranked by MRMD score. The random forest method was applied and a 176D optimal feature set was selected. Then, this optimal feature set was fed into the SVM model, which was optimized to generate the proposed AOPs-SVM classifier. To evaluate the performance of the proposed classifier, we conducted a series of comparisons, the results of which are presented in Figure 2.

FIGURE 2 | Performance comparisons. (B) Comparison with three other traditional classifiers on the optimal feature set (176D): RandomForest-176D, BayesNet-176D, and AdaBoostM1-176D denote RandomForest, BayesNet, and AdaBoostM1 on the optimal feature set, respectively. (C) Comparison with other SVM models based on optimal feature sets generated by ANOVA and mRMR, which produced 302D and 180D optimal feature sets, respectively. (D) Comparison with state-of-the-art methods. "<" denotes that the Sn and Sp of SeqSVM are <0.65 and <0.935, respectively.

FIGURE 3 | The MRMD score and composition of the optimal feature set. The X-coordinate corresponds to the 473 features; the Y-coordinate is the value of the MRMD score and participation rate. The orange vertical line represents the MRMD score of the 176D optimal feature set. Just as with the original feature set, the optimal feature set also consists of six feature types. The six horizontal lines represent the participation rates of each feature type: 20D one-gram features (yellow); 400D two-gram features (blue); 20D PSSM features (black); 6D secondary structure features (red); and 27D global and local features (green). The participation rate is defined as the number of features of each type in the 176D set divided by the total number of features of that type. For example, the 6D secondary structure features (red) are all selected for inclusion in the optimal feature set, so their participation rate is 1.
The proposed AOPs-SVM classifier achieved 94.2% in accuracy, 0.68 in sensitivity, 0.985 in specificity, 0.741 in MCC, and 0.832 in AUC. As seen in Figure 2A, AOPs-SVM achieves the same performance as SVM-473D, which is much better than SVM-473D-weight. This demonstrates that the feature selection phase effectively removes data redundancy while shrinking the feature set from 473D to 176D. In Figure 2B, although random forest, BayesNet, and AdaBoostM1 all achieve high specificity scores, they perform poorly in sensitivity, with two of them even falling below random classification. This shows that the SVM produces a more balanced result on the optimal feature set than the three other candidate classifiers. Figure 2C shows that AOPs-SVM is superior to SVM-mRMR-180D and SVM-ANOVA-302D. This result demonstrates that the MRMD algorithm not only yields a lower dimension (176D), but also retains the important features in the optimal feature set. In Figure 2D, the performance of the proposed classifier is compared with AodPred (Feng et al., 2016) and SeqSVM in terms of sensitivity, specificity, and accuracy. AOPs-SVM is slightly below AodPred in sensitivity, but outperforms both classifiers in specificity and accuracy.

Feature Contribution and Importance Analysis
Section Performance of the Proposed Classifier noted that the proposed AOPs-SVM classifier was trained on the optimal feature set (176D), and achieved the same performance as the SVM trained on the original feature set (473D). This demonstrates that the optimal feature set retained the important features. The MRMD score and feature composition of the optimal feature set (176D) are shown in Figure 3.
Comparing the six horizontal lines, the 20D PSSM features, the 6D secondary structure features, and the 27D global and local features, corresponding to F_pssm; F_H, F_C, F_E, F_Max_H, F_Max_E, F_frequency_βαβ; and F_pro_local, F_pro_global, respectively, achieve the highest participation rate, reaching 100%. The latter two feature groups come from the PSI-PRED profile, indicating that the secondary structure information extracted from PSI-PRED profiles contributes strongly to the antioxidant protein identification task. Combining the MRMD score with the participation rate, the 20D PSSM features, i.e., F_pssm, obtain the 20 highest MRMD scores, and all of them appear in the 176D optimal feature set. This indicates that the 20 evolutionary features in F_pssm are the most relevant to the target classification while carrying the least redundant information. Therefore, we can conclude from a bioinformatics perspective that F_pssm can be selected as an important marker for identifying antioxidant proteins. The MRMD scores of these 20 F_pssm features are shown in Table 1, where the odd-numbered rows give the order numbers of the features together with the corresponding mutating residues, and the even-numbered rows give the MRMD scores.

CONCLUSIONS
In this paper, we proposed a novel approach for identifying antioxidant proteins and constructed a classifier called AOPs-SVM. The 473D discrete features, including evolutionary information and secondary structure information, were extracted from the training set. To eliminate redundant data, the MRMD algorithm was applied and the 176D optimal feature set was obtained. Then, AOPs-SVM was generated by an SVM model based on the optimal feature set. Experimental results show that the proposed classifier is superior to other classifiers, including state-of-the-art methods. In addition, we analyzed the contribution and composition of the optimal feature set using bioinformatics techniques. In the future, we will attempt to improve on the performance achieved in this study by (1) searching for and combining potentially significant features, as well as using a more effective feature selection approach (Yu et al., 2018); and (2) adopting other classification algorithms, such as extreme learning machines and deep learning (Cao et al., 2017; Long et al., 2017; Conover et al., 2019; Hou et al., 2019; Zhang et al., 2019; Zou et al., 2019).

DATA AVAILABILITY
Publicly available datasets were analyzed in this study. This data can be found here: http://server.malab.cn/AOPs-SVM/data.jsp.

AUTHOR CONTRIBUTIONS
CM, QZ, and SJ wrote the paper, participated in the research design, and developed the web server. LW and FG participated in preparation of the manuscript. CM, SJ, LW, FG, and QZ read and approved the final manuscript.