SYSTEMATIC REVIEW article

Front. Aging Neurosci., 06 May 2021

Sec. Parkinson’s Disease and Aging-related Movement Disorders

Volume 13 - 2021 | https://doi.org/10.3389/fnagi.2021.633752

Machine Learning for the Diagnosis of Parkinson's Disease: A Review of Literature

  • 1. Chemosensory Neuroanatomy Lab, Department of Anatomy, Université du Québec à Trois-Rivières (UQTR), Trois-Rivières, QC, Canada

  • 2. Laboratoire d'Imagerie, de Vision et d'Intelligence Artificielle (LIVIA), Department of Software and IT Engineering, École de Technologie Supérieure, Montreal, QC, Canada

  • 3. Centre de Recherche de l'Hôpital du Sacré-Coeur de Montréal, Centre Intégré Universitaire de Santé et de Services Sociaux du Nord-de-l'Île-de-Montréal (CIUSSS du Nord-de-l'Île-de-Montréal), Montreal, QC, Canada


Abstract

Diagnosis of Parkinson's disease (PD) is commonly based on medical observations and assessment of clinical signs, including the characterization of a variety of motor symptoms. However, traditional diagnostic approaches may suffer from subjectivity, as they rely on the evaluation of movements that are sometimes subtle to the human eye and therefore difficult to classify, leaving room for misclassification. Meanwhile, early non-motor symptoms of PD may be mild and can be caused by many other conditions; they are therefore often overlooked, making diagnosis of PD at an early stage challenging. To address these difficulties and to refine the diagnosis and assessment procedures of PD, machine learning methods have been implemented for the classification of PD versus healthy controls or patients with similar clinical presentations (e.g., movement disorders or other Parkinsonian syndromes). To provide a comprehensive overview of the data modalities and machine learning methods that have been used in the diagnosis and differential diagnosis of PD, we conducted a literature review of studies published up to February 14, 2020, using the PubMed and IEEE Xplore databases. A total of 209 studies were included and extracted for relevant information, with an investigation of their aims, sources of data, types of data, machine learning methods and associated outcomes. These studies demonstrate a high potential for the adoption of machine learning methods and novel biomarkers in clinical decision making, leading to increasingly systematic, informed diagnosis of PD.

Introduction

Parkinson's disease (PD) is one of the most common neurodegenerative diseases, with a prevalence of about 1% in the population above 60 years old and 1–2 affected people per 1,000 overall (Tysnes and Storstein, 2017). The estimated global population affected by PD more than doubled from 1990 to 2016 (from 2.5 million to 6.1 million), a result of both a growing elderly population and increased age-standardized prevalence rates (Dorsey et al., 2018). PD is a progressive neurological disorder associated with motor and non-motor features (Jankovic, 2008); its motor impairments span multiple aspects of movement, including planning, initiation and execution (Contreras-Vidal and Stelmach, 1995).

As the disease develops, movement-related symptoms such as tremor, rigidity and difficulties in movement initiation can be observed, prior to cognitive and behavioral alterations including dementia (Opara et al., 2012). PD severely affects patients' quality of life (QoL), social functioning and family relationships, and places heavy economic burdens at the individual and societal levels (Johnson et al., 2013; Kowal et al., 2013; Yang and Chen, 2017).

The diagnosis of PD is traditionally based on motor symptoms. Despite the establishment of cardinal signs of PD in clinical assessments, most of the rating scales used in the evaluation of disease severity have not been fully evaluated and validated (Jankovic, 2008). Although non-motor symptoms (e.g., cognitive changes such as problems with attention and planning, sleep disorders, and sensory abnormalities such as olfactory dysfunction) are present in many patients prior to the onset of PD (Jankovic, 2008; Tremblay et al., 2017), they lack specificity, are complicated to assess and/or vary from patient to patient (Zesiewicz et al., 2006). Therefore, non-motor symptoms do not yet allow for an independent diagnosis of PD (Braak et al., 2003), although some have been used as supportive diagnostic criteria (Postuma et al., 2015).

Machine learning techniques are being increasingly applied in the healthcare sector. As its name implies, machine learning allows a computer program to learn and extract meaningful representations from data in a semi-automatic manner. For the diagnosis of PD, machine learning models have been applied to a multitude of data modalities, including handwritten patterns (Drotár et al., 2015; Pereira et al., 2018), movement (Yang et al., 2009; Wahid et al., 2015; Pham and Yan, 2018), neuroimaging (Cherubini et al., 2014a; Choi et al., 2017; Segovia et al., 2019), voice (Sakar et al., 2013; Ma et al., 2014), cerebrospinal fluid (CSF) (Lewitt et al., 2013; Maass et al., 2020), cardiac scintigraphy (Nuvoli et al., 2019), serum (Váradi et al., 2019), and optical coherence tomography (OCT) (Nunes et al., 2019). Machine learning also allows for combining different modalities, such as magnetic resonance imaging (MRI) and single-photon emission computed tomography (SPECT) data (Cherubini et al., 2014b; Wang et al., 2017), in the diagnosis of PD. By using machine learning approaches, we may therefore identify relevant features that are not traditionally used in the clinical diagnosis of PD and rely on these alternative measures to detect PD in preclinical stages or atypical forms.

In recent years, the number of publications on the application of machine learning to the diagnosis of PD has increased. Although previous studies have reviewed the use of machine learning in the diagnosis and assessment of PD, they were limited to the analysis of motor symptoms, kinematics, and wearable sensor data (Ahlrichs and Lawo, 2013; Ramdhani et al., 2018; Belić et al., 2019). Moreover, some of these reviews only included studies published between 2015 and 2016 (Pereira et al., 2019). In this study, we aim to (a) comprehensively summarize all published studies that applied machine learning models to the diagnosis of PD, for an exhaustive overview of data sources, data types, machine learning models, and associated outcomes, (b) assess and compare the feasibility and efficiency of different machine learning methods in the diagnosis of PD, and (c) provide machine learning practitioners interested in the diagnosis of PD with an overview of previously used models and data modalities and the associated outcomes, together with recommendations on how experimental protocols and results could be reported to facilitate reproducibility. Overall, the application of machine learning to clinical and non-clinical data of different modalities has often led to high diagnostic accuracies in human participants, which may encourage the adoption of machine learning algorithms and novel biomarkers in clinical settings to support more accurate and informed decision making.

Methods

Search Strategy

A literature search was conducted on the PubMed (https://pubmed.ncbi.nlm.nih.gov) and IEEE Xplore (https://ieeexplore.ieee.org/search/advanced/command) databases on February 14, 2020 for all returned results. Boolean search strings used are shown in Table 1. No additional filters were applied in the literature search. All retrieved studies were systematically identified, screened and extracted for relevant information following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Moher et al., 2009).

Table 1

Database | Boolean search string
PubMed | (“Parkinson Disease”[Mesh] OR Parkinson*) AND (“Machine Learning”[Mesh] OR machine learn* OR machine-learn* OR deep learn* OR deep-learn*) AND (human OR patient) AND (“Diagnosis”[Mesh] OR diagnos* OR detect* OR classif* OR identif*) NOT review[Publication Type]
IEEE Xplore | (Parkinson*) AND (machine learn* OR machine-learn* OR deep learn* OR deep-learn*) AND (human OR patient) AND (diagnosis OR diagnose OR diagnosing OR detection OR detect OR detecting OR classification OR classify OR classifying OR identification OR identify OR identifying)

Boolean search strings used for the retrieval of relevant publications on PubMed and IEEE Xplore databases.
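Boolean strings like the PubMed query in Table 1 can be assembled programmatically from their term groups, which makes the search protocol easier to reproduce and audit. A minimal sketch (the term groups are taken from Table 1; the helper and variable names are our own, and straight quotes are used in place of the table's typographic quotes):

```python
# Assemble the PubMed Boolean search string of Table 1 from its term groups.
# Helper and variable names are illustrative, not part of the original protocol.

def any_of(terms):
    """Join alternative terms with OR and wrap them in parentheses."""
    return "(" + " OR ".join(terms) + ")"

disease = any_of(['"Parkinson Disease"[Mesh]', "Parkinson*"])
method = any_of(['"Machine Learning"[Mesh]', "machine learn*", "machine-learn*",
                 "deep learn*", "deep-learn*"])
subject = any_of(["human", "patient"])
task = any_of(['"Diagnosis"[Mesh]', "diagnos*", "detect*", "classif*", "identif*"])

query = f"{disease} AND {method} AND {subject} AND {task} NOT review[Publication Type]"
print(query)
```

The same helper can generate the IEEE Xplore variant by substituting its term groups.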

Inclusion and Exclusion Criteria

Studies that used machine learning methods and satisfied one or more of the following criteria were included:

  • Classification of PD from healthy controls (HC),

  • Classification of PD from Parkinsonism (e.g., progressive supranuclear palsy (PSP) and multiple system atrophy (MSA)), and

  • Classification of PD from other movement disorders (e.g., essential tremor (ET)).

Studies falling into one or more of the following categories were excluded:

  • Studies related to Parkinsonism or/and diseases other than PD that did not involve classification or detection of PD (e.g., differential diagnosis of PSP, MSA, and other atypical Parkinsonian disorders),

  • Studies not related to the diagnosis of PD (e.g., subtyping or severity assessment, analysis of behavior, disease progression, treatment outcome prediction, identification, and localization of brain structures or parameter optimization during surgery),

  • Studies related to the diagnosis of PD, but performed analysis and assessed model performance at sample level (e.g., classification using individual MRI scans without aggregating scan-level performance to patient level),

  • Classification of PD from non-Parkinsonism (e.g., Alzheimer's disease),

  • Studies that did not use metrics measuring classification performance,

  • Studies using organisms other than humans (e.g., Caenorhabditis elegans, mice or rats),

  • Studies that did not provide sufficient or accurate descriptions of the machine learning methods, datasets or subjects used (e.g., did not provide sample size, or incorrectly described the dataset(s) used),

  • Publications that were not original journal articles or conference proceedings (e.g., reviews and viewpoint papers), and

  • Studies in languages other than English.
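The sample-level vs. patient-level distinction in the third exclusion criterion can be made concrete: predictions for individual scans or recordings must be aggregated per subject (e.g., by majority vote) before computing accuracy. A minimal sketch of such aggregation, with illustrative data and function names of our own:

```python
from collections import Counter

def patient_level_accuracy(sample_preds, sample_labels, subject_ids):
    """Aggregate sample-level predictions to one prediction per subject
    by majority vote, then score accuracy at the patient level."""
    votes, truth = {}, {}
    for pred, label, sid in zip(sample_preds, sample_labels, subject_ids):
        votes.setdefault(sid, []).append(pred)
        truth[sid] = label  # all samples of a subject share one diagnosis
    correct = 0
    for sid, preds in votes.items():
        majority = Counter(preds).most_common(1)[0][0]
        correct += (majority == truth[sid])
    return correct / len(votes)

# Three voice samples per subject; subject "s2" has one misclassified sample
# but is still correctly classified after aggregation.
preds  = [1, 1, 1,  1, 0, 0,  0, 1, 0]
labels = [1, 1, 1,  0, 0, 0,  0, 0, 0]
sids   = ["s1"] * 3 + ["s2"] * 3 + ["s3"] * 3
print(patient_level_accuracy(preds, labels, sids))  # patient-level: 1.0
```

Here the sample-level accuracy would be 7/9, while all three subjects are correctly classified at the patient level.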

Data Extraction

The following information is included in the data extraction table: (1) objectives, (2) type of diagnosis (diagnosis, differential diagnosis, sub-typing), (3) data source, (4) data type, (5) number of subjects, (6) machine learning method(s), splitting strategy and cross validation, (7) associated outcomes, (8) year, and (9) reference.

For studies published online first and archived in another year, “year of publication” was defined as the year during which the study was published online. If this information was unavailable, the year in which the article was copyrighted was regarded as the year of publication. For studies that introduced novel models and used existing models merely for comparison, information related to the novel models was extracted. Classification of PD and scans without evidence for dopaminergic deficit (SWEDD) was treated as subtyping (Erro et al., 2016).

Study Objectives

To outline the different goals and objectives of included studies, we have further categorized them based on the type of diagnosis and their general aim. From the perspective of diagnostics, these studies could be divided into (a) the diagnosis or detection of PD (which compares data collected from PD patients and healthy controls), (b) differential diagnosis (discrimination between patients with idiopathic PD and patients with atypical Parkinsonism), and (c) sub-typing (discrimination among sub-types of PD).

Included studies were also analyzed for their general aim. Studies focusing on the development of novel technical approaches for the diagnosis of PD (e.g., new machine learning and deep learning models and architectures, data acquisition devices, and feature extraction algorithms that had not been previously presented and/or employed) were defined as (a) "methodology" studies. Studies that validated or investigated the application of previously published and validated machine learning and deep learning models, and/or the feasibility of data modalities not commonly used in the machine learning-based diagnosis of PD (e.g., CSF data), were defined as (b) "clinical application" studies.

Model Evaluation

In the present study, accuracy was used to compare the performance of machine learning models. For each data type, we summarized the types of machine learning models that led to the per-study highest accuracy. Because some studies tested only one machine learning model, we define the "model associated with the per-study highest accuracy" as (a) the only model implemented in a study, or (b) in studies that used multiple models, the model that achieved the highest accuracy or that was highlighted by the authors. Results are expressed as mean (SD).

For studies reporting both training and testing/validation accuracy, testing or validation accuracy was considered. For studies that reported both validation and test accuracy, test accuracy was considered. For studies with more than one dataset or classification problem (e.g., HC vs. PD and HC vs. idiopathic hyposmia vs. PD), accuracy was averaged across datasets or classification problems. For studies that reported classification accuracy for each group of subjects individually, accuracy was averaged across groups. For studies reporting a range of accuracies or accuracies given by different cross validation methods or feature combinations, the highest accuracies were considered. In studies that compared HC with diseases other than PD or PD with diseases other than Parkinsonism, diagnosis of diseases other than PD or Parkinsonism (e.g., amyotrophic lateral sclerosis) was not considered. Accuracy of severity assessment was not considered.
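The selection rules above can be summarized in code. The following is a sketch under the stated rules only; the record structure and function name are our own, not taken from any reviewed study:

```python
from statistics import mean, stdev

def per_study_accuracy(study):
    """Reduce one study's reported accuracies to a single value:
    prefer test over validation over training accuracy; within each
    classification problem keep the highest reported accuracy (e.g.,
    across cross-validation schemes or feature combinations); then
    average across problems or datasets."""
    for split in ("test", "validation", "training"):
        if split in study:
            return mean(max(accs) for accs in study[split])
    raise ValueError("no accuracy reported")

# Hypothetical study records: each inner list is one classification problem.
studies = [
    {"training": [[0.99]], "test": [[0.92, 0.90]]},  # test preferred over training
    {"validation": [[0.88]], "training": [[0.95]]},  # no test set reported
    {"test": [[0.80], [0.90]]},                      # two problems, averaged
]
accs = [per_study_accuracy(s) for s in studies]
summary = (mean(accs), stdev(accs))  # results expressed as mean (SD)
print(accs, summary)
```

With these three hypothetical records, the per-study accuracies are 0.92, 0.88 and 0.85, summarized as mean (SD) over studies.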

Results

Literature Review

Based on the search criteria, we retrieved 427 (PubMed) and 215 (IEEE Xplore) search results, for a total of 642 publications. After removing duplicates, we screened 593 publications for titles and abstracts, following which we excluded 313 based on the exclusion criteria and examined 280 full-text articles. Overall, we included 209 research articles for data extraction (Figure 1 and see Supplementary Materials for a full list of included studies). All articles were published from the year 2009 onwards, and an increase in the number of papers published per year was observed (Supplementary Figure 1).

Figure 1

Data Source and Sample Size

In 93 out of 209 studies (43.1%), original data were collected from human participants. In 108 studies (51.7%), data used were from public repositories and databases, including University of California at Irvine (UCI) Machine Learning Repository (Dua and Graff, 2018) (n = 44), Parkinson's Progression Markers Initiative (Marek et al., 2011) (PPMI; n = 33), PhysioNet (Goldberger et al., 2000) (n = 15), HandPD dataset (Pereira et al., 2015) (n = 6), mPower database (Bot et al., 2016) (n = 4), and 6 other databases (Mucha et al., 2018; Vlachostergiou et al., 2018; Bhati et al., 2019; Hsu et al., 2019; Taleb et al., 2019; Wodzinski et al., 2019; Table 2).

Table 2

Data source/Database | Number of studies | Percentage
Independent recruitment of human participants | 93 | 43.06%
UCI Machine Learning Repository | 44 | 20.37%
PPMI database | 33 | 15.28%
PhysioNet | 15 | 6.94%
HandPD dataset | 6 | 2.78%
mPower database | 4 | 1.85%
Other databases (1 PACS, 1 PaHaW, 1 PC-GITA database, 1 PDMultiMC database, 1 Neurovoz corpus, 1 The NTUA Parkinson Dataset) | 6 | 2.78%
Collected postmortem | 1 | 0.46%
Commercially sourced | 1 | 0.46%
Acquired at another institution | 1 | 0.46%
From another study | 1 | 0.46%
From the author's institutional database | 1 | 0.46%
Others (1 PPMI + Sheffield Teaching Hospitals NHS Foundation Trust; 1 PPMI + Seoul National University Hospital cohort; 1 UCI + collected from participants) | 3 | 1.39%

Source of data of the included studies.

PACS, Picture Archiving and Communication System; PaHaW, Parkinson's Disease Handwriting Database.

In 3 studies, data from public repositories were combined with data from local databases or participants (Agarwal et al., 2016; Choi et al., 2017; Taylor and Fenner, 2017). In the remaining studies, data were sourced (Wahid et al., 2015) from another study (Fernandez et al., 2013), collected at another institution (Segovia et al., 2019), obtained from the authors' institutional database (Nunes et al., 2019), collected postmortem (Lewitt et al., 2013), or commercially sourced (Váradi et al., 2019).

The 209 studies had an average sample size of 184.6 (289.3), with a smallest sample size of 10 (Kugler et al., 2013), and a largest sample size of 2,289 (Tracy et al., 2019; Figure 2A). For studies that recruited human participants (n = 93), data from an average of 118.0 (142.9) participants were collected (range: 10–920; Figure 2B). For other studies (n = 116), an average sample size of 238.1 (358.5) was reported (range: 30–2,289; Figure 2B). For a description of average accuracy reported in these studies in relation to sample size, see Figure 2C.

Figure 2

Study Objectives

In the included studies, although "diagnosis of PD" was used as the search criterion, machine learning had been applied for diagnosis (PD vs. HC), differential diagnosis (idiopathic PD vs. atypical Parkinsonism) and sub-typing (differentiation of sub-types of PD) purposes. Most studies focused on diagnosis (n = 168, 80.4%) or differential diagnosis (n = 20, 9.6%). Fourteen studies performed both diagnosis and differential diagnosis (6.7%), 5 studies (2.4%) diagnosed and subtyped PD, and 2 studies (1.0%) included diagnosis, differential diagnosis, and subtyping.

Among the included studies, a total of 132 studies (63.2%) implemented and tested a machine learning method, a model architecture, a diagnostic system, a feature extraction algorithm, or a device for non-invasive, low-cost data acquisition that had not previously been established for the detection and early diagnosis of PD (methodology studies). In 77 studies (36.8%), previously proposed and validated machine learning methods were tested in clinical settings for early detection of PD, identification of novel biomarkers, or examination of uncommonly used data modalities for the diagnosis of PD (e.g., CSF; clinical application studies).

Comparing Studies With Different Objectives

Source of Data

In the 132 studies that proposed or tested novel machine learning methods (i.e., methodology studies), a majority used data from publicly available databases (n = 89, 67.4%). Data collected from human participants were used in 41 studies (31.1%) and the two remaining studies (1.5%) used commercially sourced data or data from both existing public databases and local participants specifically recruited for the study. Out of the 77 studies that used machine learning models in clinical settings (i.e., clinical application studies), 52 (67.5%) collected data from human participants, 22 (28.6%) used data from public databases. Two (2.6%) studies obtained data from a database and a local cohort, and 1 (1.3%) study collected data postmortem.

Data Modality

In methodology studies, the most commonly used data modalities were voice recordings (n = 51, 38.6%), movement (n = 35, 26.5%), and MRI data (n = 15, 11.4%). For studies on clinical applications, MRI data (n = 21, 27.3%), movement (n = 16, 20.8%), and SPECT imaging data (n = 12, 15.6%) were of high relevance. All studies using CSF features (n = 5) focused on validation of existing machine learning models in a clinical setting (Figure 3A).

Figure 3

Number of Subjects

The average sample size was 137.1 for the 132 methodology studies (Figure 3B). For 41 out of the 132 studies that used data from recruited human participants, the average sample size was 81.7 (Figure 3C). In the 77 studies on clinical applications, the average sample size was 266.2 (Figure 3B). For 52 out of the 77 clinical studies that collected data from recruited participants, the average sample size was 145.9 (Figure 3C).

Machine Learning Methods Applied to the Diagnosis of PD

We divided 448 machine learning models from the 209 studies into 8 categories: (1) support vector machine (SVM) and variants (n = 132 from 130 studies), (2) neural networks (n = 76 from 62 studies), (3) ensemble learning (n = 82 from 57 studies), (4) nearest neighbor and variants (n = 33 from 33 studies), (5) regression (n = 31 from 31 studies), (6) decision tree (n = 28 from 27 studies), (7) naïve Bayes (n = 26, from 26 studies), and (8) discriminant analysis (n = 12 from 12 studies). A small percentage of models used did not fall into any of the categories (n = 28, used in 24 studies).

On average, 2.14 machine learning models per study were applied to the diagnosis of PD. One study may have used more than one category of models. For a full description of data types used to train each type of machine learning models and the associated outcomes, see Supplementary Materials and Supplementary Figure 2.
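The multi-model protocol described above (training several model families on one dataset with cross validation, then reporting the best-performing one) can be sketched with scikit-learn. This is an illustrative sketch on synthetic data standing in for a voice-feature dataset; the model choices and 10-fold scheme mirror common practice across the reviewed studies rather than any single paper:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a small acoustic-feature dataset (e.g., 195 voice
# samples and 22 features, imbalanced classes, as in the UCI Parkinson's data).
X, y = make_classification(n_samples=195, n_features=22, n_informative=6,
                           weights=[0.25, 0.75], random_state=0)

# One representative model per category (SVM, nearest neighbor, ensemble,
# regression, naive Bayes).
models = {
    "SVM (RBF)": SVC(kernel="rbf"),
    "k-NN": KNeighborsClassifier(n_neighbors=3),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
results = {}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)  # scale inside each fold
    results[name] = cross_val_score(pipe, X, y, cv=cv, scoring="accuracy").mean()

best = max(results, key=results.get)  # "model with the per-study highest accuracy"
for name, acc in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} {acc:.3f}")
```

Scaling inside the pipeline (rather than before splitting) avoids leaking test-fold statistics into training, a pitfall relevant to the small datasets common in this literature.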

Performance Metrics

Various metrics have been used to assess the performance of machine learning models (Table 3). The most common metric was accuracy (n = 174, 83.3%), which was used individually (n = 55) or in combination with other metrics (n = 119) in model evaluation. Among the 174 studies that used accuracy, some combined accuracy with sensitivity (i.e., recall) and specificity (n = 42), or with sensitivity, specificity and AUC (n = 16), or with recall (i.e., sensitivity), precision and F1 score (n = 7) for a more systematic understanding of model performance. A total of 35 studies (16.7%) used metrics other than accuracy. In these studies, the most commonly used performance metrics were AUC (n = 19), sensitivity (n = 17), and specificity (n = 14), and the three were often applied together (n = 9), with or without other metrics.

Table 3

Performance metric | Definition | Number of studies
Accuracy | | 174
Sensitivity (recall) | | 110
Specificity (TNR) | | 94
AUC | The two-dimensional area under the Receiver Operating Characteristic (ROC) curve | 60
MCC | | 9
Precision (PPV) | | 31
NPV | | 8
F1 score | | 25
Others (7 kappa; 4 error rate; 3 EER; 1 MSE; 1 LOR; 1 confusion matrix; 1 cross validation score; 1 YI; 1 FPR; 1 FNR; 1 G-mean; 1 PE; 5 combination of metrics) | N/A | 28

Performance metrics used in the evaluation of machine learning models.

TNR, true negative rate; AUC, Area under the ROC Curve; MCC, Matthews correlation coefficient; PPV, positive predictive value; NPV, negative predictive value; EER, equal error rate; MSE, mean squared error; LOR, log odds ratio; YI, Youden's Index; FPR, false positive rate; FNR, false negative rate; PE, probability excess.
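Most metrics in Table 3 derive from the four confusion-matrix counts. The following is a minimal reference implementation of the common ones; the formulas are the standard definitions, not taken from any particular reviewed study, and the example counts are hypothetical:

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Compute common binary-classification metrics from confusion-matrix
    counts (tp/fp/tn/fn = true/false positives/negatives)."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)                  # recall / true positive rate
    specificity = tn / (tn + fp)                  # true negative rate (TNR)
    precision   = tp / (tp + fp)                  # positive predictive value
    npv         = tn / (tn + fn)                  # negative predictive value
    f1          = 2 * precision * sensitivity / (precision + sensitivity)
    mcc = (tp * tn - fp * fn) / math.sqrt(        # Matthews correlation coeff.
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    youden      = sensitivity + specificity - 1   # Youden's Index (YI)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision,
            "npv": npv, "f1": f1, "mcc": mcc, "youden": youden}

# Hypothetical example: 23 PD and 8 HC subjects, with 21 true positives
# and 7 true negatives.
m = classification_metrics(tp=21, fp=1, tn=7, fn=2)
print({k: round(v, 3) for k, v in m.items()})
```

With imbalanced groups such as 23 PD vs. 8 HC, accuracy alone can be misleading, which is why many of the reviewed studies report sensitivity, specificity, AUC or MCC alongside it.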

Data Types and Associated Outcomes

Out of 209 studies, 122 (58.4%) applied machine learning methods to movement-related data, i.e., voice recordings (n = 55, 26.3%), movement data (n = 51, 24.4%), or handwritten patterns (n = 16, 7.7%). Imaging modalities analyzed included MRI (n = 36, 17.2%), SPECT (n = 14, 6.7%), and positron emission tomography (PET; n = 4, 1.9%). Five studies analyzed CSF samples (2.4%). In 18 studies (8.6%), a combination of different types of data was used.

Ten studies (4.8%) used data that do not belong to any categories mentioned above, such as single nucleotide polymorphisms (Cibulka et al., 2019) (SNPs), electromyography (EMG) (Kugler et al., 2013), OCT (Nunes et al., 2019), cardiac scintigraphy (Nuvoli et al., 2019), Patient Questionnaire of Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS) (Prashanth and Dutta Roy, 2018), whole-blood gene expression profiles (Shamir et al., 2017), transcranial sonography (Shi et al., 2018) (TCS), eye movements (Tseng et al., 2013), electroencephalography (EEG) (Vanegas et al., 2018), and serum samples (Váradi et al., 2019).

Given that studies used different data modalities and sources, and sometimes different samples of the same database, a summary of model performance, instead of direct comparison across studies, is provided.

Voice Recordings (n = 55)

The 49 studies that used accuracy to evaluate machine learning models achieved an average accuracy of 90.9 (8.6) % (Figure 4A), ranging from 70.0% (Kraipeerapun and Amornsamankul, 2015; Ali et al., 2019a) to 100.0% (Hariharan et al., 2014; Abiyev and Abizade, 2016; Ali et al., 2019c; Dastjerd et al., 2019). In 3 studies, the highest accuracy was achieved by two types of machine learning models individually, namely regression or SVM (Ali et al., 2019a), neural network or SVM (Hariharan et al., 2014), and ensemble learning or SVM (Mandal and Sairam, 2013). The per-study highest accuracy was achieved with SVM in 23 studies (39.7%), with neural network in 16 studies (27.6%), with ensemble learning in 7 studies (12.1%), with nearest neighbor in 3 studies (5.2%), and with regression in 2 studies (3.4%). Models that do not belong to any given categories led to the per-study highest accuracy in 7 studies (12.1%; Figure 4B).

Figure 4

Voice recordings from the UCI machine learning repository were used in 42 studies (Table 4). Among the 42 studies, 39 used accuracy to evaluate classification performance, with an average accuracy of 92.0 (9.0) %; the lowest accuracy was 70.0% and the highest was 100.0%. Eight out of 9 studies that collected voice recordings from human participants used accuracy as the performance metric, and the average, lowest and highest accuracies were 87.7 (6.8) %, 77.5%, and 98.6%, respectively. The 4 remaining studies used data from the Neurovoz corpus (n = 1), the mPower database (n = 1), the PC-GITA database (n = 1), or data from both the UCI machine learning repository and human participants (n = 1). Two of these 4 studies used accuracy to evaluate model performance and reported accuracies of 81.6% and 91.7%.

Table 4

Objectives | Type of diagnosis | Source of data | Number of subjects (n) | Machine learning method(s), splitting strategy and cross validation | Outcomes | Year | References
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | Fuzzy neural system with 10-fold cross validation | Testing accuracy = 100% | 2016 | Abiyev and Abizade, 2016
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | RPART, C4.5, PART, Bagging CART, random forest, Boosted C5.0, SVM | SVM: Accuracy = 97.57%; Sensitivity = 0.9756; Specificity = 0.9987; NPV = 0.9995 | 2019 | Aich et al., 2019
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | DBN of 2 RBMs | Testing accuracy = 94% | 2016 | Al-Fatlawi et al., 2016
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | EFMM-OneR with 10-fold cross validation or 5-fold cross validation | Accuracy = 94.21% | 2019 | Sayaydeha and Mohammad, 2019
Classification of PD from HC | Diagnosis | UCI machine learning repository | 40; 20 HC + 20 PD | Linear regression, LDA, Gaussian naïve Bayes, decision tree, KNN, SVM-linear, SVM-RBF with leave-one-subject-out cross validation | Logistic regression or SVM-linear accuracy = 70% | 2019 | Ali et al., 2019a
Classification of PD from HC | Diagnosis | UCI machine learning repository | 40; 20 HC + 20 PD | LDA-NN-GA with leave-one-subject-out cross validation | Training: Accuracy = 95%, Sensitivity = 95%. Test: Accuracy = 100%, Sensitivity = 100% | 2019 | Ali et al., 2019c
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | NNge with AdaBoost with 10-fold cross validation | Accuracy = 96.30% | 2018 | Alqahtani et al., 2018
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | Logistic regression, KNN, naïve Bayes, SVM, decision tree, random forest, DNN with 10-fold cross validation | KNN accuracy = 95.513% | 2018 | Anand et al., 2018
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | MLP with a train-validation-test ratio of 50:20:30 | Training accuracy = 97.86%; Test accuracy = 92.96%; MSE = 0.03552 | 2012 | Bakar et al., 2012
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31 (8 HC + 23 PD) for dataset 1 and 68 (20 HC + 48 PD) for dataset 2 | FKNN, SVM, KELM with 10-fold cross validation | FKNN accuracy = 97.89% | 2018 | Cai et al., 2018
Classification of PD from HC | Diagnosis | UCI machine learning repository | 40; 20 HC + 20 PD | SVM, logistic regression, ET, gradient boosting, random forest with train-test split ratio = 80:20 | Logistic regression accuracy = 76.03% | 2019 | Celik and Omurca, 2019
Classification of PD from HC | Diagnosis | UCI machine learning repository | 40; 20 HC + 20 PD | MLP, GRNN with a training-test ratio of 50:50 | GRNN: Error rate = 0.0995 (spread parameter = 195.1189); Error rate = 0.0958 (spread parameter = 1.2); Error rate = 0.0928 (spread parameter = 364.8) | 2016 | Çimen and Bolat, 2016
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | ECFA-SVM with 10-fold cross validation | Accuracy = 97.95%; Sensitivity = 97.90%; Precision = 97.90%; F-measure = 97.90%; Specificity = 96.50%; AUC = 97.20% | 2017 | Dash et al., 2017
Classification of PD from HC | Diagnosis | UCI machine learning repository | 40; 20 HC + 20 PD | Fuzzy classifier with 10-fold cross validation, leave-one-out cross validation or a train-test ratio of 70:30 | Accuracy = 100% | 2019 | Dastjerd et al., 2019
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | Averaged perceptron, BPM, boosted decision tree, decision forests, decision jungle, locally deep SVM, logistic regression, NN, SVM with 10-fold cross-validation | Boosted decision trees: Accuracy = 0.912105; Precision = 0.935714; F-score = 0.942368; AUC = 0.966293 | 2017 | Dinesh and He, 2017
Classification of PD from HC | Diagnosis | UCI machine learning repository | 50; 8 HC + 42 PD | KNN, SVM, ELM with a train-validation ratio of 70:30 | SVM: Accuracy = 96.43%; MCC = 0.77 | 2017 | Erdogdu Sakar et al., 2017
Classification of PD from HC | Diagnosis | UCI machine learning repository | 252; 64 HC + 188 PD | CNN with leave-one-person-out cross validation | Accuracy = 0.869; F-measure = 0.917; MCC = 0.632 | 2019 | Gunduz, 2019
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | SVM, logistic regression, KNN, DNN with a train-test ratio of 70:30 | DNN: Accuracy = 98%; Specificity = 95%; Sensitivity = 99% | 2018 | Haq et al., 2018
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | SVM-RBF, SVM-linear with 10-fold cross validation | Accuracy = 99%; Specificity = 99%; Sensitivity = 100% | 2019 | Haq et al., 2019
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | LS-SVM, PNN, GRNN with conventional (train-test ratio of 50:50) and 10-fold cross validation | LS-SVM, PNN or GRNN: Accuracy = 100%; Precision = 100%; Sensitivity = 100%; Specificity = 100%; AUC = 100 | 2014 | Hariharan et al., 2014
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | Random tree, SVM-linear, FBANN with 10-fold cross validation | FBANN: Accuracy = 97.37%; Sensitivity = 98.60%; Specificity = 93.62%; FPR = 6.38%; Precision = 0.979; MSE = 0.027 | 2014 | Islam et al., 2014
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | SVM-linear with 5-fold cross validation | Error rate ~0.13 | 2012 | Ji and Li, 2012
Classification of PD from HC | Diagnosis | UCI machine learning repository | 40; 20 HC + 20 PD | Decision tree, random forest, SVM, GBM, XGBoost | SVM-linear: FNR = 10%; Accuracy = 0.725 | 2018 | Junior et al., 2018
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | CART, SVM, ANN | SVM accuracy = 93.84% | 2020 | Karapinar Senturk, 2020
Classification of PD from HC | Diagnosis | UCI machine learning repository | Dataset 1: 31; 8 HC + 23 PD. Dataset 2: 40; 20 HC + 20 PD | EWNN with a train-test ratio of 90:10 and cross validation | Dataset 1: Accuracy = 92.9%; Ensemble classification accuracy = 100.0%; Sensitivity = 100.0%; MCC = 100.0%. Dataset 2: Accuracy = 66.3%; Ensemble classification accuracy = 90.0%; Sensitivity = 93.0%; Specificity = 97.0%; MCC = 87.0% | 2018 | Khan et al., 2018
Classification of PD from HC | Diagnosis | UCI machine learning repository | 40; 20 HC + 20 PD | Stacked generalization with CMTNN with 10-fold cross validation | Accuracy = ~70% | 2015 | Kraipeerapun and Amornsamankul, 2015
Classification of PD from HC | Diagnosis | UCI machine learning repository | 40; 20 HC + 20 PD | HMM, SVM | HMM: Accuracy = 95.16%; Sensitivity = 93.55%; Specificity = 91.67% | 2019 | Kuresan et al., 2019
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | IGWO-KELM with 10-fold cross validation | Iteration number = 100; Accuracy = 97.45%; Sensitivity = 99.38%; Specificity = 93.48%; Precision = 97.33%; G-mean = 96.38%; F-measure = 98.34% | 2017 | Li et al., 2017
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | SCFW-KELM with 10-fold cross validation | Accuracy = 99.49%; Sensitivity = 100%; Specificity = 99.39%; AUC = 99.69%; F-measure = 0.9966; Kappa = 0.9863 | 2014 | Ma et al., 2014
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | SVM-RBF with 10-fold cross validation | Accuracy = 96.29%; Sensitivity = 95.00%; Specificity = 97.50% | 2016 | Ma et al., 2016
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | Logistic regression, NN, SVM, SMO, Pegasos, AdaBoost, ensemble selection, FURIA, rotation forest, Bayesian network with 10-fold cross-validation | Average accuracy across all models = 97.06%; SMO, Pegasos, or AdaBoost accuracy = 98.24% | 2013 | Mandal and Sairam, 2013
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | Logistic regression, KNN, SVM, naïve Bayes, decision tree, random forest, ANN | ANN: Accuracy = 94.87%; Specificity = 96.55%; Sensitivity = 90% | 2018 | Marar et al., 2018
Classification of PD from HC | Diagnosis | UCI machine learning repository | Dataset 1: 31; 8 HC + 23 PD. Dataset 2: 40; 20 HC + 20 PD | KNN | Dataset 1 accuracy = 90%; Dataset 2 accuracy = 65% | 2017 | Moharkan et al., 2017
Classification of PD from HC | Diagnosis | UCI machine learning repository | 31; 8 HC + 23 PD | Rotation forest ensemble with 10-fold cross validation | Accuracy = 87.1% | 2011 | Ozcift and Gulten, 2011
Kappa error = 0.63
AUC = 0.860
Classification of PD from HCDiagnosisUCI machine learning repository31; 8 HC + 23 PDRotation forest ensembleAccuracy = 96.93%2012Ozcift, 2012
Kappa = 0.92
AUC = 0.97
Classification of PD from HCDiagnosisUCI machine learning repository31; 8 HC + 23 PDSVM-RBF with 10-fold cross validation or a train-test ratio of 50:5010-fold cross validation:2016Peker, 2016
Accuracy = 98.95%
Sensitivity = 96.12%
Specificity = 100%
F-measure = 0.9795
Kappa = 0.9735
AUC = 0.9808
Classification of PD from HCDiagnosisUCI machine learning repository31; 8 HC + 23 PDELM with 10-fold cross validationAccuracy = 88.72%2016Shahsavari et al., 2016
Recall = 94.33%
Precision = 90.48%
F-score = 92.36%
Classification of PD from HCDiagnosisUCI machine learning repository31; 8 HC + 23 PDEnsemble learning with 10-fold cross validationAccuracy = 90.6%2019Sheibani et al., 2019
Sensitivity = 95.8%
Specificity = 75%
Classification of PD from HCDiagnosisUCI machine learning repository31; 8 HC + 23 PDGLRA, SVM, bagging ensemble with 5-fold cross validationBagging:2017Wu et al., 2017
Sensitivity = 0.9796
Specificity = 0.6875
MCC = 0.6977
AUC = 0.9558
SVM:
Sensitivity = 0.9252
specificity = 0.8542
MCC = 0.7592
AUC = 0.9349
Classification of PD from HCDiagnosisUCI machine learning repository31; 8 HC + 23 PDDecision tree classifier, logistic regression, SVM with 10-fold cross validationSVM:2011Yadav et al., 2011
Accuracy = 0.76
Sensitivity = 0.9745
Specificity = 0.13
Classification of PD from HCDiagnosisUCI machine learning repository80; 40 HC + 40 PDKNN, SVM with 10-fold cross validationSVM:2019Yaman et al., 2020
Accuracy = 91.25%
Precision = 0.9125
Recall = 0.9125
F-Measure = 0.9125
Classification of PD from HCDiagnosisUCI machine learning repository31; 8 HC + 23 PDMAP, SVM-RBF, FLDA with 5-fold cross validationMAP:2014Yang et al., 2014
Accuracy = 91.8%
Sensitivity = 0.986
Specificity = 0.708
AUC = 0.94
Classification of PD from other disordersDifferential diagnosisCollected from participants50; 30 PD + 9 MSA + 5 FND + 1 somatization + 1 dystonia + 2 CD + 1 ET + 1 GPDSVM, KNN, DA, naïve Bayes, classification tree with LOSOSVM-linear:2016Benba et al., 2016a
Accuracy = 90%
Sensitivity = 90%
Specificity = 90%
MCC = 0.794067
PE = 0.788177
Classification of PD from other disordersDifferential diagnosisCollected from participants40; 20 PD + 9 MSA + 5 FND + 1 somatization + 1 dystonia + 2 CD + 1ET + 1 GPDSVM (RBF, linear, polynomial, and MLP kernels) with LOSOSVM-linear accuracy = 85%2016Benba et al., 2016b
Classification of PD from HC and assess the severity of PDDiagnosisCollected from participants52; 9 HC + 43 PDSVM-RBF with cross validationAccuracy = 81.8%2014Frid et al., 2014
Classification of PD from HCDiagnosisCollected from participants54; 27 HC + 27 PDSVM with stratified 10-fold cross validation or leave-one-out cross validationAccuracy = 94.4%2018Montaña et al., 2018
Specificity = 100%
Sensitivity = 88.9%
Classification of PD from HCDiagnosisCollected from participants40; 20 HC + 20 PDKNN, SVM-linear, SVM-RBF with leave-one-subject-out or summarized leave-one-outSVM-linear:2013Sakar et al., 2013
Accuracy = 77.50%
MCC = 0.5507
Sensitivity = 80.00%
Specificity = 75.00%
Classification of PD from HCDiagnosisCollected from participants78; 27 HC + 51 PDKNN, SVM-linear, SVM-RBF, ANN, DNN with leave-one-out cross validationSVM-RBF:2017Sztahó et al., 2017
Accuracy = 84.62%
Precision = 88.04%
Recall = 78.65%
Classification of PD from HC and assess the severity of PDDiagnosisCollected from participants88; 33 HC + 55 PDKNN, SVM-linear, SVM-RBF, ANN, DNN with leave-one-subject-out cross validationSVM-RBF:2019Sztahó et al., 2019
Accuracy = 89.3%
Sensitivity = 90.2%
Specificity = 87.9%
Classification of PD from HCDiagnosisCollected from participants43; 10 HC + 33 PDRandom forests, SVM with 10-fold cross validation and a train-test ratio of 90:10SVM accuracy = 98.6%2012Tsanas et al., 2012
Classification of PD from HCDiagnosisCollected from participants99; 35 HC + 64 PDRandom forest with internal out-of-bag (OOB) validationEER = 19.27%2017Vaiciukynas et al., 2017
Classification of PD from HCDiagnosisUCI machine learning repository and participants40 and 28; 20 HC + 20 PD and 28 PD, respectivelyELMTraining data:2016Agarwal et al., 2016
Accuracy = 90.76%
MCC = 0.815
Test data:
Accuracy = 81.55%
Classification of PD from HCDiagnosisThe Neurovoz corpus108; 56 HC + 52 PDSiamese LSTM-based NN with 10-fold cross- validationEER = 1.9%2019Bhati et al., 2019
Classification of PD from HCDiagnosismPower database2,289; 2,023 HC + 246 PDL2-regularized logistic regression, random forest, gradient boosted decision trees with 5-fold cross validationGradient boosted decision trees:2019Tracy et al., 2019
Recall = 0.797
Precision = 0.901
F1-score = 0.836
Classification of PD from HCDiagnosisPC-GITA database100; 50 HC + 50 PDResNet with train-validation ratio of 90:10Precision = 0.922019Wodzinski et al., 2019
Recall = 0.92
F1-score = 0.92
Accuracy = 91.7%

Studies that applied machine learning models to voice recordings to diagnose PD (n = 55).

ANN, artificial neural network; AUC, area under the receiver operating characteristic (ROC) curve; CART, classification and regression trees; CD, cervical dystonia; CMTNN, complementary neural network; CNN, convolutional neural network; DA, discriminant analysis; DBN, deep belief network; DNN, deep neural network; ECFA, enhanced chaos-based firefly algorithm; EFMM-OneR, enhanced fuzzy min-max neural network with the OneR attribute evaluator; ELM, extreme learning machine; ET, extra trees or essential tremor; EWNN, evolutionary wavelet neural network; FBANN, feedforward back-propagation based artificial neural network; FKNN, fuzzy k-nearest neighbor; FLDA, Fisher's linear discriminant analysis; FND, functional neurological disorder; FNR, false negative rate; FPR, false positive rate; FURIA, fuzzy unordered rule induction algorithm; GA, genetic algorithm; GBM, gradient boosting machine; GLRA, generalized logistic regression analysis; GPD, generalized paroxysmal dystonia; GRNN, general(ized) regression neural network; HC, healthy control; HMM, hidden Markov model; IGWO-KELM, improved gray wolf optimization and kernel(-based) extreme learning machine; KELM, kernel-based extreme learning machine; KNN, k-nearest neighbors; LDA, linear discriminant analysis; LOSO, leave-one-subject-out; LS-SVM, least-squares support vector machine; LSTM, long short-term memory; MAP, maximum a posteriori decision rule; MCC, Matthews correlation coefficient; MLP, multilayer perceptron; MSA, multiple system atrophy; MSE, mean squared error; NN, neural network; NNge, non-nested generalized exemplars; NPV, negative predictive value; PD, Parkinson's disease; PNN, probabilistic neural network; RBM, restricted Boltzmann machine; ResNet, residual neural network; RPART, recursive partitioning and regression trees; SCFW-KELM, subtractive clustering features weighting and kernel-based extreme learning machine; SMO, sequential minimal optimization; SVM, support vector machine; SVM-linear, support vector machine with linear kernel; SVM-RBF, support vector machine with radial basis function kernel; XGBoost, extreme gradient boosting.
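The studies above report classification performance through a handful of recurring metrics, all of which derive from the binary confusion matrix. As a minimal illustration (not any study's actual code), the following stdlib-only sketch computes the metrics named in the table from hypothetical true/false positive and negative counts:

```python
from math import sqrt

def binary_metrics(tp, fp, tn, fn):
    """Derive the commonly reported metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)      # recall / true positive rate
    specificity = tn / (tn + fp)      # true negative rate
    precision = tp / (tp + fp)        # positive predictive value (PPV)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    # Matthews correlation coefficient (MCC), robust to class imbalance
    mcc = (tp * tn - fp * fn) / sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    return {
        "accuracy": accuracy,
        "sensitivity": sensitivity,
        "specificity": specificity,
        "precision": precision,
        "f_measure": f_measure,
        "mcc": mcc,
    }

# Hypothetical counts for a 31-subject cohort (23 PD, 8 HC):
m = binary_metrics(tp=22, fp=1, tn=7, fn=1)
print({k: round(v, 3) for k, v in m.items()})
```

Note that MCC stays informative on imbalanced cohorts such as the 8 HC + 23 PD UCI dataset, where plain accuracy can look high for a trivial majority-class classifier.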

Movement Data (n = 51)

Of the 51 studies, the 43 that used accuracy to assess model performance achieved an average accuracy of 89.1 (8.3)%, ranging from 62.1% (Prince and de Vos, 2018) to 100.0% (Surangsrirat et al., 2016; Joshi et al., 2017; Pham, 2018; Pham and Yan, 2018; Figure 4A). One study reported three machine learning methods (SVM, nearest neighbor and decision tree) that individually achieved the same highest accuracy (Félix et al., 2019). Across the 51 studies, the per-study highest accuracy was achieved with SVM in 22 studies (41.5%), with ensemble learning in 13 studies (24.5%), with neural networks in 9 studies (17.0%), with nearest neighbor in 4 studies (7.5%), with discriminant analysis in 1 study (1.9%), with naïve Bayes in 1 study (1.9%), and with decision tree in 1 study (1.9%). Models that do not belong to any of the given categories were associated with the highest per-study accuracy in 2 studies (3.8%; Figure 4B).
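Averages of this form — "89.1 (8.3)%" — are a mean with the standard deviation in parentheses. A minimal sketch of how such a summary is computed, using hypothetical per-study accuracies and assuming the sample (n − 1) standard deviation:

```python
import statistics

def summarize_accuracies(accs):
    """Format a list of per-study accuracies (in percent) as 'mean (SD) %'."""
    mean = statistics.mean(accs)
    sd = statistics.stdev(accs)  # sample standard deviation, n - 1 denominator
    return f"{mean:.1f} ({sd:.1f}) %"

# Hypothetical per-study accuracies:
print(summarize_accuracies([62.1, 86.4, 90.3, 95.2, 100.0]))
```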

Among the 33 studies that collected movement data from recruited participants, 25 used accuracy in model evaluation, with an average accuracy of 87.0 (7.3)% (Table 5). The lowest and highest accuracies were 64.1% (Martínez et al., 2018) and 100.0% (Surangsrirat et al., 2016), respectively. Fifteen studies used data from the PhysioNet database (Table 5) and had an average accuracy of 94.4 (4.6)%, with accuracies ranging from 86.4 to 100%. Three studies used data from the mPower database (n = 2) or data sourced from another study (n = 1); their average accuracy was 80.6 (16.2)%.

Table 5

Objectives | Type of diagnosis | Source of data | Number of subjects (n) | Machine learning method(s), splitting strategy and cross validation | Outcomes | Year | References
Classification of PD from HC | Diagnosis | Collected from participants | 103; 71 HC + 32 PD | Ensemble method of 8 models (SVM, MLP, logistic regression, random forest, NSVC, decision tree, KNN, QDA) | Sensitivity = 96%; specificity = 97%; AUC = 0.98 | 2017 | Adams, 2017
Classification of PD, HC and other neurological stance disorders | Diagnosis and differential diagnosis | Collected from participants | 293; 57 HC + 27 PD + 49 AVS + 12 PNP + 48 CA + 16 DN + 25 OT + 59 PPV | Ensemble method of 7 models (logistic regression, KNN, shallow and deep ANNs, SVM, random forest, extra-randomized trees) with 90% training and 10% testing data in stratified k-fold cross-validation | 8-class classification accuracy = 82.7% | 2019 | Ahmadi et al., 2019
Classification of PD from HC | Diagnosis | Collected from participants | 137; 38 HC + 99 PD | SVM with leave-one-out cross validation | PD vs. HC accuracy = 92.3%; mild vs. severe accuracy = 89.8%; mild vs. HC accuracy = 85.9% | 2016 | Bernad-Elazari et al., 2016
Classification of PD from HC | Diagnosis | Collected from participants | 30; 14 HC + 16 PD | SVM (linear, quadratic, cubic, Gaussian kernels), ANN, with 5-fold cross-validation | Classification with ANN: accuracy = 89.4%; sensitivity = 87.0%; specificity = 91.8%. Severity assessment with ANN: accuracy = 95.0%; sensitivity = 90.0%; specificity = 99.0% | 2019 | Buongiorno et al., 2019
Classification of PD from HC | Diagnosis | Collected from participants | 28; 12 HC + 16 PD | NN with a train-validation-test ratio of 70:15:15, SVM with leave-one-out cross-validation, logistic regression with 10-fold cross validation | SVM: accuracy = 85.71%; sensitivity = 83.5%; specificity = 87.5% | 2017 | Butt et al., 2017
Classification of PD from HC | Diagnosis | Collected from participants | 28; 12 HC + 16 PD | Logistic regression, naïve Bayes, SVM with 10-fold cross validation | Naïve Bayes: accuracy = 81.45%; sensitivity = 76%; specificity = 86.5%; AUC = 0.811 | 2018 | Butt et al., 2018
Classification of PD from HC | Diagnosis | Collected from participants | 54; 27 HC + 27 PD | Naïve Bayes, LDA, KNN, decision tree, SVM-linear, SVM-RBF, majority of votes with 5-fold cross validation | Majority of votes (weighted) accuracy = 96% | 2018 | Caramia et al., 2018
Classification of PD, HC and IH | Diagnosis | Collected from participants | 90; 30 PD + 30 HC + 30 IH | SVM, random forest, naïve Bayes with 10-fold cross validation | Random forest: HC vs. PD: accuracy = 0.950, F-measure = 0.947. HC + IH vs. PD: accuracy = 0.917, F-measure = 0.912. HC vs. IH vs. PD: accuracy = 0.789, F-measure = 0.796 | 2019 | Cavallo et al., 2019
Classification of PD from HC and classification of HC, MCI, PDNOMCI, and PDMCI | Diagnosis, differential diagnosis and subtyping | Collected from participants | PD vs. HC: 75; 50 HC + 25 PD. Subtyping: 52; 18 HC + 16 PDNOMCI + 9 PDMCI + 9 MCI | Decision tree, naïve Bayes, random forest, SVM, adaptive boosting (with decision tree or random forest) with 10-fold cross validation | Adaptive boosting with decision tree: PD vs. HC: accuracy = 0.79, AUC = 0.82. Subtyping (HOA vs. MCI vs. PDNOMCI vs. PDMCI): accuracy = 0.85, AUC = 0.96 | 2015 | Cook et al., 2015
Classification of PD from HC | Diagnosis | Collected from participants | 580; 424 HC + 156 PD | Hidden Markov models with nearest neighbor classifier with cross validation and train-test ratio of 66.6:33.3 | Accuracy = 85.51% | 2017 | Cuzzolin et al., 2017
Classification of PD from HC | Diagnosis | Collected from participants | 80; 40 HC + 40 PD | Random forest, SVM with 10-fold cross validation | SVM-RBF: accuracy = 85%; sensitivity = 85%; specificity = 82%; PPV = 86%; NPV = 83% | 2017 | Djurić-Jovičić et al., 2017
Classification of PD from HC | Diagnosis | Collected from participants | 13; 5 HC + 8 PD | SVM-RBF with leave-one-out cross validation | 100% of HC and PD classified correctly (confusion matrix) | 2014 | Dror et al., 2014
Classification of PD from HC | Diagnosis | Collected from participants | 75; 38 HC + 37 PD | SVM with leave-one-out cross validation | Accuracy = 85.61%; sensitivity = 85.95%; specificity = 85.26% | 2014 | Drotár et al., 2014
Classification of PD from ET | Differential diagnosis | Collected from participants | 24; 13 PD + 11 ET | SVM-linear, SVM-RBF with leave-one-out cross validation | Accuracy = 83% | 2016 | Ghassemi et al., 2016
Classification of PD from HC | Diagnosis | Collected from participants | 41; 22 HC + 19 PD | SVM, decision tree, random forest, linear regression with 10-fold and leave-one-individual-out (L1O) cross validation | SVM accuracy = 0.89 | 2018 | Klein et al., 2017
Classification of PD from HC | Diagnosis | Collected from participants | 74; 33 young HC + 14 elderly HC + 27 PD | SVM with 10-fold cross validation | Sensitivity = ~90% | 2017 | Javed et al., 2018
Classification of PD from HC and assess the severity of PD | Diagnosis | Collected from participants | 55; 20 HC + 35 PD | SVM with leave-one-out cross validation | PD diagnosis: accuracy = 89%; precision = 0.91; recall = 0.94. Severity assessment: HYS 1 accuracy = 72%; HYS 2 accuracy = 77%; HYS 3 accuracy = 75%; HYS 4 accuracy = 33% | 2016 | Koçer and Oktay, 2016
Classification of PD from HC | Diagnosis | Collected from participants | 45; 20 HC + 25 PD | Naïve Bayes, logistic regression, SVM, AdaBoost, C4.5, BagDT with 10-fold stratified cross-validation apart from BagDT | BagDT: sensitivity = 82%; specificity = 90%; AUC = 0.94 | 2015 | Kostikis et al., 2015
Classification of PD from HC | Diagnosis | Collected from participants | 40; 26 HC + 14 PD | Random forest with leave-one-subject-out cross-validation | Accuracy = 94.6%; sensitivity = 91.5%; specificity = 97.2% | 2017 | Kuhner et al., 2017
Classification of PD from HC | Diagnosis | Collected from participants | 177; 70 HC + 107 PD | ESN with 10-fold cross validation | AUC = 0.852 | 2018 | Lacy et al., 2018
Classification of PD from HC | Diagnosis | Collected from participants | 39; 16 young HC + 12 elderly HC + 11 PD | LDA with leave-one-out cross validation | Multiclass classification (young HC vs. age-matched HC vs. PD): accuracy = 64.1%; sensitivity = 47.1%; specificity = 77.3% | 2018 | Martínez et al., 2018
Classification of PD from HC | Diagnosis | Collected from participants | 38; 10 HC + 28 PD | SVM-Gaussian with leave-one-out cross validation | Training accuracy = 96.9%; test accuracy = 76.6% | 2018 | Oliveira H. M. et al., 2018
Classification of PD from HC | Diagnosis | Collected from participants | 30; 15 HC + 15 PD | SVM-RBF, PNN with 10-fold cross validation | SVM-RBF: accuracy = 88.80%; sensitivity = 88.70%; specificity = 88.15%; AUC = 88.48 | 2015 | Oung et al., 2015
Classification of PD from HC | Diagnosis | Collected from participants | 45; 14 HC + 31 PD | Deep-MIL-CNN with LOSO or RkF | With LOSO: precision = 0.987; sensitivity = 0.9; specificity = 0.993; F1-score = 0.943. With RkF: precision = 0.955; sensitivity = 0.828; specificity = 0.979; F1-score = 0.897 | 2019 | Papadopoulos et al., 2019
Classification of PD, HC and post-stroke | Diagnosis and differential diagnosis | Collected from participants | 11; 3 HC + 5 PD + 3 post-stroke | MTFL with 10-fold cross validation | PD vs. HC AUC = 0.983 | 2017 | Papavasileiou et al., 2017
Classification of PD from HC | Diagnosis | Collected from participants | 182; 94 HC + 88 PD | LSTM, CNN-1D, CNN-LSTM with 5-fold cross-validation and a training-test ratio of 90:10 | CNN-LSTM: accuracy = 83.1%; precision = 83.5%; recall = 83.4%; F1-score = 81%; kappa = 64% | 2019 | Reyes et al., 2019
Classification of PD from HC | Diagnosis | Collected from participants | 60; 30 HC + 30 PD | Naïve Bayes, KNN, SVM with leave-one-out cross validation | SVM: accuracy = 95%; precision = 0.951; AUC = 0.950 | 2019 | Ricci et al., 2020
Classification of PD, HC and IH | Diagnosis and differential diagnosis | Collected from participants | 90; 30 HC + 30 PD + 30 IH | SVM-polynomial, random forest, naïve Bayes with 10-fold cross validation | HC vs. PD, naïve Bayes or random forest: precision = 0.967, recall = 0.967, specificity = 0.967, accuracy = 0.967, F-measure = 0.967. HC + IH vs. PD, random forest: precision = 1.000, recall = 0.933, specificity = 1.000, accuracy = 0.978, F-measure = 0.966. Multiclass classification, random forest: precision = 0.784, recall = 0.778, specificity = 0.889, accuracy = 0.778, F-measure = 0.781 | 2018 | Rovini et al., 2018
Classification of PD, HC and IH | Diagnosis and differential diagnosis | Collected from participants | 45; 15 HC + 15 PD + 15 IH | SVM-polynomial, random forest with 5-fold cross validation | HC vs. PD, random forest: precision = 1.000, recall = 1.000, specificity = 1.000, accuracy = 1.000, F-measure = 1.000. Multiclass classification (HC vs. IH vs. PD), random forest: precision = 0.930, recall = 0.911, specificity = 0.956, accuracy = 0.911, F-measure = 0.920 | 2019 | Rovini et al., 2019
Classification of PD from ET | Differential diagnosis | Collected from participants | 52; 32 PD + 20 ET | SVM-linear with 10-fold cross validation | Accuracy = 1; sensitivity = 1; specificity = 1 | 2016 | Surangsrirat et al., 2016
Classification of PD from HC | Diagnosis | Collected from participants | 12; 10 HC + 2 PD | Naive Bayes, LogitBoost, random forest, SVM with 10-fold cross-validation | Random forest: accuracy = 92.29%; precision = 0.99; recall = 0.99 | 2017 | Tahavori et al., 2017
Classification of PD from HC | Diagnosis | Collected from participants | 39; 16 HC + 23 PD | SVM-RBF with 10-fold stratified cross validation | Sensitivity = 88.9%; specificity = 100%; precision = 100%; FPR = 0.0% | 2010 | Tien et al., 2010
Classification of PD from HC | Diagnosis | Collected from participants | 60; 30 HC + 30 PD | Logistic regression, naïve Bayes, random forest, decision tree with 10-fold cross validation | Random forest: accuracy = 82%; false negative rate = 23%; false positive rate = 12% | 2018 | Urcuqui et al., 2018
Classification of PD from HC | Diagnosis | PhysioNet | 47; 18 HC + 29 PD | SVM, KNN, random forest, decision tree | SVM with cubic kernel: accuracy = 93.6%; sensitivity = 93.1%; specificity = 94.1% | 2017 | Alam et al., 2017
Classification of PD from HC | Diagnosis | PhysioNet | 34; 17 HC + 17 PD | MLP, SVM, decision tree | MLP: accuracy = 91.18%; sensitivity = 1; specificity = 0.83; error = 0.09; AUC = 0.92 | 2018 | Alaskar and Hussain, 2018
Classification of PD from HC and assess the severity of PD | Diagnosis | PhysioNet | 166; 73 HC + 93 PD | 1D-CNN, 2D-CNN, LSTM, decision tree, logistic regression, SVM, MLP | 2D-CNN and LSTM accuracy = 96.0% | 2019 | Alharthi and Ozanyan, 2019
Classification of PD from HC | Diagnosis | PhysioNet | 146; 60 HC + 86 PD | SVM-Gaussian with 3- or 5-fold cross validation | Accuracy = 100%, 88.88%, and 100% in three test groups | 2019 | Andrei et al., 2019
Classification of PD from HC | Diagnosis | PhysioNet | 166; 73 HC + 93 PD | ANN, SVM, naïve Bayes with cross validation | ANN accuracy = 86.75% | 2017 | Baby et al., 2017
Classification of PD from HC | Diagnosis | PhysioNet | 31; 16 HC + 15 PD | SVM-linear, KNN, naïve Bayes, LDA, decision tree with leave-one-out cross validation | SVM, KNN and decision tree accuracy = 96.8% | 2019 | Félix et al., 2019
Classification of PD from HC | Diagnosis | PhysioNet | 31; 16 HC + 15 PD | SVM-linear with leave-one-out cross validation | Accuracy = 100% | 2017 | Joshi et al., 2017
Classification of PD from HC | Diagnosis | PhysioNet | 165; 72 HC + 93 PD | KNN, CART, decision tree, random forest, naïve Bayes, SVM-polynomial, SVM-linear, K-means, GMM with leave-one-out cross validation | SVM: accuracy = 90.32%; precision = 90.55%; recall = 90.21%; F-measure = 90.38% | 2019 | Khoury et al., 2019
Classification of ALS, HD, PD from HC | Diagnosis | PhysioNet | 64; 16 HC + 15 PD + 13 ALS + 20 HD | String grammar unsupervised possibilistic fuzzy C-medians with FKNN, with 4-fold cross validation | PD vs. HC accuracy = 96.43% | 2018 | Klomsae et al., 2018
Classification of PD from HC | Diagnosis | PhysioNet | 166; 73 HC + 93 PD | Logistic regression, decision trees, random forest, SVM-linear, SVM-RBF, SVM-polynomial, KNN with cross validation | KNN: accuracy = 93.08%; precision = 89.58%; recall = 84.31%; F1-score = 86.86% | 2018 | Mittra and Rustagi, 2018
Classification of PD from HC | Diagnosis | PhysioNet | 85; 43 HC + 42 PD | LS-SVM with leave-one-out, 2- or 10-fold cross validation | Leave-one-out cross validation: AUC = 1; sensitivity = 100%; specificity = 100%; accuracy = 100%. 10-fold cross validation: AUC = 0.89; sensitivity = 85.00%; specificity = 73.21%; accuracy = 79.31% | 2018 | Pham, 2018
Classification of PD from HC | Diagnosis | PhysioNet | 165; 72 HC + 93 PD | LS-SVM with leave-one-out, 2-, 5- or 10-fold cross validation | Accuracy = 100%; sensitivity = 100%; specificity = 100%; AUC = 1 | 2018 | Pham and Yan, 2018
Classification of PD from HC | Diagnosis | PhysioNet | 166; 73 HC + 93 PD | DCALSTM with stratified 5-fold cross validation | Sensitivity = 99.10%; specificity = 99.01%; accuracy = 99.07% | 2019 | Xia et al., 2020
Classification of HC, PD, ALS and HD | Diagnosis and differential diagnosis | PhysioNet | 64; 16 HC + 15 PD + 13 ALS + 20 HD | SVM-RBF with 10-fold cross validation | PD vs. HC: accuracy = 86.43%; AUC = 0.92 | 2009 | Yang et al., 2009
Classification of PD, HD, ALS and ND from HC | Diagnosis | PhysioNet | 64; 16 HC + 15 PD + 13 ALS + 20 HD | Adaptive neuro-fuzzy inference system with leave-one-out cross validation | PD vs. HC: accuracy = 90.32%; sensitivity = 86.67%; specificity = 93.75% | 2018 | Ye et al., 2018
Classification of PD from HC and assess the severity of PD | Diagnosis | mPower database | 50; 22 HC + 28 PD | Random forest, bagged trees, SVM, KNN with 10-fold cross validation | Random forest: PD vs. HC accuracy = 87.03%; PD severity assessment accuracy = 85.8% | 2017 | Abujrida et al., 2017
Classification of PD from HC | Diagnosis | mPower database | 1,815; 866 HC + 949 PD | CNN with 10-fold cross validation | Accuracy = 62.1%; F1 score = 63.4%; AUC = 63.5% | 2018 | Prince and de Vos, 2018
Classification of PD from HC | Diagnosis | Dataset from Fernandez et al., 2013 | 49; 26 HC + 23 PD | KFD-RBF, naïve Bayes, KNN, SVM-RBF, random forest with 10-fold cross validation | Random forest accuracy = 92.6% | 2015 | Wahid et al., 2015

Studies that applied machine learning models to movement data to diagnose PD (n = 51).

ALS, amyotrophic lateral sclerosis; ANN, artificial neural network; AUC, area under the receiver operating characteristic (ROC) curve; AVS, acute unilateral vestibulopathy; BagDT, bootstrap aggregation for a random forest of decision trees; CA, anterior lobe cerebellar atrophy; CART, classification and regression trees; DCALSTM, dual-modal network in which each branch has a convolutional network followed by an attention-enhanced bi-directional LSTM; DN, downbeat nystagmus syndrome; ESN, echo state network; FKNN, fuzzy k-nearest neighbor; GMM, Gaussian mixture model; HC, healthy control; HD, Huntington's disease; IH, idiopathic hyposmia; KFD, kernel Fisher discriminant; KNN, k-nearest neighbors; LDA, linear discriminant analysis; LOSO, leave-one-subject-out; LS-SVM, least-squares support vector machine; LSTM, long short-term memory; MCI, mild cognitive impairment; MIL, multiple-instance learning; MLP, multilayer perceptron; MTFL, multi-task feature learning; NN, neural network; NPV, negative predictive value; NSVC, nu-support vector classification; OT, primary orthostatic tremor; PD, Parkinson's disease; PDMCI, PD participants who met criteria for mild cognitive impairment; PDNOMCI, PD participants with no indication of mild cognitive impairment; PNN, probabilistic neural network; PNP, sensory polyneuropathy; PPV, positive predictive value or phobic postural vertigo; QDA, quadratic discriminant analysis; RkF, repeated k-fold; SVM, support vector machine; SVM-Poly, support vector machine with polynomial kernel; SVM-RBF, support vector machine with radial basis function kernel.
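Several of the movement studies above use leave-one-subject-out (LOSO) validation, which holds out all recordings from one subject at a time so that no subject contributes to both training and testing. A minimal stdlib-only sketch of the split logic (an illustration, not any study's actual pipeline):

```python
def leave_one_subject_out(subject_ids):
    """Yield (train_idx, test_idx) pairs where every recording from one
    subject is held out together. subject_ids[i] names the subject that
    produced recording i; subjects may contribute multiple recordings."""
    for held_out in sorted(set(subject_ids)):
        test_idx = [i for i, s in enumerate(subject_ids) if s == held_out]
        train_idx = [i for i, s in enumerate(subject_ids) if s != held_out]
        yield train_idx, test_idx

# Six recordings from three hypothetical subjects:
ids = ["s1", "s1", "s2", "s2", "s3", "s3"]
for train, test in leave_one_subject_out(ids):
    print(train, test)
```

Grouping by subject matters because per-recording k-fold splits can leak subject-specific gait or tremor signatures between training and test folds, inflating reported accuracy.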

MRI (n = 36)

The average accuracy of the 32 studies that used accuracy to evaluate the performance of machine learning models was 87.5 (8.0)%. In these studies, the lowest accuracy was 70.5% (Liu L. et al., 2016) and the highest was 100.0% (Cigdem et al., 2019; Figure 4A). Out of the 36 studies, the per-study highest accuracy was obtained with SVM in 21 studies (58.3%), with neural networks in 8 studies (22.2%), with discriminant analysis in 3 studies (8.3%), with regression in 2 studies (5.6%), and with ensemble learning in 1 study (2.8%). One study (2.8%) obtained its highest per-study accuracy using models that do not belong to any of the given categories (Figure 4B). In 8 of the 36 studies, neural networks were applied directly to MRI data, while the remaining studies trained machine learning models on extracted features, e.g., cortical thickness and volumes of brain regions, to diagnose PD.
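The feature-based approach reduces each scan to a fixed-length vector (e.g., regional cortical-thickness values) before classification. As a minimal sketch of that idea — using a simple nearest-centroid rule on hypothetical two-feature vectors, not any study's actual model — classification then operates on the extracted features rather than the images:

```python
from math import dist
from statistics import mean

def fit_centroids(features, labels):
    """Compute the mean feature vector per class
    (features: list of equal-length numeric vectors)."""
    centroids = {}
    for lab in set(labels):
        rows = [f for f, l in zip(features, labels) if l == lab]
        centroids[lab] = [mean(col) for col in zip(*rows)]
    return centroids

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest (Euclidean distance)."""
    return min(centroids, key=lambda lab: dist(centroids[lab], x))

# Hypothetical cortical-thickness-style feature vectors for HC and PD subjects:
X = [[2.5, 2.4], [2.6, 2.5], [2.1, 2.0], [2.0, 1.9]]
y = ["HC", "HC", "PD", "PD"]
c = fit_centroids(X, y)
print(predict(c, [2.55, 2.45]))  # prints "HC"
```

The studies in Table 6 mostly use stronger classifiers (SVM, logistic regression, CNNs), but the pipeline shape — extract features per subject, fit, predict — is the same.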

Out of the 17 studies that used MRI data from the PPMI database, 16 used accuracy to evaluate model performance, with an average accuracy of 87.9 (8.0)%; the lowest and highest accuracies were 70.5 and 99.9%, respectively (Table 6). In 16 of the 19 studies that acquired MRI data from human participants, accuracy was used to evaluate classification performance, and an average accuracy of 87.0 (8.1)% was achieved; the lowest reported accuracy was 76.2% and the highest was 100% (Table 6).

Table 6

ObjectivesType of diagnosisSource of dataNumber of
subjects (n)
Machine learning method(s), splitting strategy and cross validationOutcomesYearReferences
Classification of PD from MSADifferential diagnosisCollected from participants150; 54 HC + 65 PD + 31 MSASVM with leave-one-out-cross validationMSA vs. PD:2019Abos et al., 2019
Accuracy = 0.79
Sensitivity = 0.71
Specificity = 0.86
MSA vs. HC:
Accuracy = 0.79
Sensitivity = 0.84
Specificity = 0.74
MSA vs. subsample of PD:
Accuracy = 0.84
Sensitivity = 0.77
Specificity = 0.90
Classification of PD from MSADifferential diagnosisCollected from participants151; 59 HC + 62 PD + 30 MSASVM with leave-one-out-cross validationAccuracy = 77.17%2019Baggio et al., 2019
Sensitivity = 83.33%
Specificity = 74.19%
Classification of PD from HCDiagnosisCollected from participants94; 50 HC + 44 PDCNN with 85 subjects for training and 9 for testingTraining accuracy = 95.24%2019Banerjee et al., 2019
Testing accuracy = 88.88%
Classification of PD from HCDiagnosisCollected from participants47; 26 HC + 21 PDSVM-linear with leave-one-out cross validationAccuracy = 93.62%2015Chen et al., 2015
Sensitivity = 90.47%
Specificity = 96.15%
Classification of PD from PSPDifferential diagnosisCollected from participants78; 57 PD + 21 PSPSVM with leave-one-out cross validationAccuracy = 100%2013Cherubini et al., 2014a
Sensitivity = 1
Specificity = 1
Classification of PD, MSA, PSP and HCDiagnosis and differential diagnosisCollected from participants106; 36 HC + 35 PD + 16 MSA + 19 PSPElastic Net regularized logistic regression with nested 10-fold cross validationHC vs. PD/MSA-P/PSP:2017Du et al., 2017
AUC = 0.88
Sensitivity = 0.80
Specificity = 0.83
PPV = 0.82
NPV = 0.81
HC vs. PD:
AUC = 0.91
Sensitivity = 0.86
Specificity = 0.80
PPV = 0.82
NPV = 0.89
PD vs. MSA/PSP:
AUC = 0.94
Sensitivity = 0.86
Specificity = 0.87
PPV = 0.88
NPV = 0.84
PD vs. MSA:
AUC = 0.99
Sensitivity = 0.97
Specificity = 1.00
PPV = 1.00
NPV = 0.93
PD vs. PSP:
AUC = 0.99
Sensitivity = 0.97
Specificity = 1.00
PPV = 1.00
NPV = 0.94
MSA vs. PSP:
AUC = 0.98
Sensitivity = 0.94
Specificity = 1.00
PPV = 1.00
NPV = 0.93
Classification of HC, PD, MSA and PSPDiagnosis and differential diagnosisCollected from participants64; 22 HC + 21 PD + 11 MSA + 10 PSPSVM-linear with leave-one-out cross validationPD vs. HC:2011Focke et al., 2011
Accuracy = 41.86%
Sensitivity = 38.10%
Specificity = 45.45%
PD vs. MSA:
Accuracy = 71.87%
Sensitivity = 36.36%
Specificity = 90.48%
PD vs. PSP:
Accuracy = 96.77%
Sensitivity = 90%
Specificity = 100%
MSA vs. PSP:
Accuracy = 76.19%
MSA vs. HC:
Accuracy = 78.78%
Sensitivity = 54.55%
Specificity = 90.91%
PSP vs. HC:
Accuracy = 93.75%
Sensitivity = 90.00%
Specificity = 95.45%
Classification of PD and atypical PDDifferential diagnosisCollected from participants40; 17 PD + 23 atypical PDSVM-RBF with 10-fold cross-validationAccuracy = 97.50%2012Haller et al., 2012
TPR = 0.94
FPR = 0.00
TNR = 1.00
FNR = 0.06
Classification of PD and other forms of ParkinsonismDifferential diagnosisCollected from participants36; 16 PD + 20 other ParkinsonismSVM-RBF with 10-fold cross validationAccuracy = 86.92%2012Haller et al., 2013
TP = 0.87
FP = 0.14
TN = 0.87
FN = 0.13
Classification of HC, PD, PSP, MSA-C and MSA-PDiagnosis and differential diagnosisCollected from participants464; 73 HC + 204 PD + 106 PSP + 21 MSA-C + 60 MSA-PSVM-RBF with 10-fold cross validationPD vs. HC:2016Huppertz et al., 2016
Sensitivity = 65.2%
Specificity = 67.1%
Accuracy = 65.7%
PD vs. PSP:
Sensitivity = 82.5%
Specificity = 86.8%
Accuracy = 85.3%
PD vs. MSA-C:
Sensitivity = 76.2%
Specificity = 96.1%
Accuracy = 94.2%
PD vs. MSA-P:
Sensitivity = 86.7%
Specificity = 92.2%
Accuracy = 90.5%
Classification of PD from HCDiagnosisCollected from participants42; 21 HC + 21 PDSVM-linear with stratified 10-fold cross validationAccuracy = 78.33%2017Kamagata et al., 2017
Precision = 85.00%
Recall = 81.67%
AUC = 85.28%
Classification of PD, PSP, MSA-P and HCDiagnosis and differential diagnosisCollected from participants419; 142 HC + 125 PD + 98 PSP + 54 MSA-PCNN with train-validation ratio of 85:15PD:2019Kiryu et al., 2019
Sensitivity = 94.4%
Specificity = 97.8%
Accuracy = 96.8%
AUC = 0.995
PSP:
Sensitivity = 84.6%
Specificity = 96.0%
Accuracy = 93.7%
AUC = 0.982
MSA-P:
Sensitivity = 77.8%
Specificity = 98.1%
Accuracy = 95.2%
AUC = 0.990
HC:
Sensitivity = 100.0%
Specificity = 97.5%
Accuracy = 98.4%
AUC = 1.000
Classification of PD from HCDiagnosisCollected from participants65; 31 HC + 34 PDFCP with 36 out of the 65 subjects as the training setAUC = 0.9972016Liu H. et al., 2016
Classification of PD, PSP, MSA-C and MSA-PDifferential diagnosisCollected from participants85; 47 PD + 22 PSP + 9 MSA-C + 7 MSA-PSVM-linear with leave-one-out cross validation4-class classification (MSA-C vs. MSA-P vs. PSP vs. PD) accuracy = 88%2017Morisi et al., 2018
Classification of PD from HCDiagnosisCollected from participants89; 47 HC + 42 PDBoosted logistic regression with nested cross-validationAccuracy = 76.2%2019Rubbert et al., 2019
Sensitivity = 81%
Specificity = 72.7%
Classification of PD, PSP and HCDiagnosis and differential diagnosisCollected from participants84; 28 HC + 28 PSP + 28 PDSVM-linear with leave-one-out cross validationPD vs. HC:2014Salvatore et al., 2014
Accuracy = 85.8%
Specificity = 86.0%
Sensitivity = 86.0%
PSP vs. HC:
Accuracy = 89.1%
Specificity = 89.1%
Sensitivity = 89.5%
PSP vs. PD:
Accuracy = 88.9%
Specificity = 88.5%
Sensitivity = 89.5%
Classification of PD, APS (MSA, PSP) and HCDiagnosis and differential diagnosisCollected from participants100; 35 HC + 45 PD + 20 APSCNN-DL, CR-ML, RA-ML with 5-fold cross-validationPD vs. HC with CNN-DL:2019Shinde et al., 2019
Test accuracy = 80.0%
Test sensitivity = 0.86
Test specificity = 0.70
Test AUC = 0.913
PD vs. APS with CNN-DL:
Test accuracy = 85.7%
Test sensitivity = 1.00
Test specificity = 0.50
Test AUC = 0.911
Classification of PD from HCDiagnosisCollected from participants101; 50 HC + 51 PDSVM-RBF with leave-one-out cross validationSensitivity = 92%
Specificity = 87%
2017Tang et al., 2017
Classification of PD from HCDiagnosisCollected from participants85; 40 HC + 45 PDSVM-linear with leave-one-out, 5-fold, 0.632 (1-1/e) bootstrap, and 2-fold cross validationAccuracy = 97.7%2016Zeng et al., 2017
Classification of PD from HCDiagnosisPPMI database543; 169 HC + 374 PDRLDA with JFSS with 10-fold cross validationAccuracy = 81.9%2016Adeli et al., 2016
Classification of PD from HCDiagnosisPPMI database543; 169 HC + 374 PDRFS-LDA with 10-fold cross validationAccuracy = 79.8%2019Adeli et al., 2019
Classification of PD from HCDiagnosisPPMI database543; 169 HC + 374 PDRandom forest (for feature selection and clinical score); SVM with 10-fold stratified cross validationAccuracy = 0.932018Amoroso et al., 2018
AUC = 0.97
Sensitivity = 0.93
Specificity = 0.92
Classification of PD, HC and prodromalDiagnosisPPMI database906; 203 HC + 66 prodromal + 637 PDMLP, XgBoost, random forest, SVM with 5-fold cross validationMLP:2020Chakraborty et al., 2020
Accuracy = 95.3%
Recall = 95.41%
Precision = 97.28%
F1-score = 94%
Classification of PD from HCDiagnosisPPMI databaseDataset 1: 15; 6 HC + 9 PDSVM with leave-one-out cross validationDataset 1:2014Chen et al., 2014
EER = 87%
Dataset 2: 39; 21 HC + 18 PDAccuracy = 80%
AUC = 0.907
Dataset 2:
EER = 73%
Accuracy = 68%
AUC = 0.780
Classification of PD from HCDiagnosisPPMI database80; 40 HC + 40 PDNaïve Bayes, SVM-RBF with 10-fold cross validationSVM:2019Cigdem et al., 2019
Accuracy = 87.50%
Sensitivity = 85.00%
Specificity = 90.00%
AUC = 90.00%
Classification of PD from HCDiagnosisPPMI database37; 18 HC + 19 PDSVM-linear with leave-one-out cross validationAccuracy = 94.59%2017Kazeminejad et al., 2017
Classification of PD, HC and SWEDDDiagnosis and subtypingPPMI database238; 62 HC + 142 PD + 34 SWEDDJoint learning with 10-fold cross validationHC vs. PD:2018Lei et al., 2019
Accuracy = 91.12%
AUC = 94.88%
HC vs. SWEDD:
Accuracy = 94.89%
AUC = 97.80%
PD vs. SWEDD:
Accuracy = 92.12%
AUC = 93.82%
Classification of PD and SWEDD from HCDiagnosisPPMI databaseBaseline: 238; 62 HC + 142 PD + 34 SWEDD
12 months: 186; 54 HC + 123 PD + 9 SWEDD
24 months: 127; 7 HC + 88 PD + 22 SWEDD
SSAE with 10-fold cross validation
HC vs. PD:
Accuracy = 85.24%, 88.14%, and 96.19% for baseline, 12 m, and 24 m
HC vs. SWEDD:
Accuracy = 89.67%, 95.24%, and 93.10% for baseline, 12 m, and 24 m
2019Li et al., 2019
Classification of PD from HCDiagnosisPPMI database112; 56 HC + 56 PDRLDA with 8-fold cross validationAccuracy = 70.5%2016Liu L. et al., 2016
AUC = 71.1
Classification of PD from HCDiagnosisPPMI database60; 30 HC + 30 PDSVM, ELM with train-test ratio of 80:20ELM:2016Pahuja and Nagabhushan, 2016
Training accuracy = 94.87%
Testing accuracy = 90.97%
Sensitivity = 0.9245
Specificity = 0.9730
Classification of PD from HCDiagnosisPPMI database172; 103 HC + 69 PDMulti-kernel SVM with 10-fold cross validation2017Peng et al., 2017
Accuracy = 85.78%
Specificity = 87.79%
Sensitivity = 87.64%
AUC = 0.8363
Classification of PD from HCDiagnosis and subtypingPPMI database109; 32 HC + 77 PD (55 PD-NC + 22 PD-MCI)SVM with 2-fold cross validationPD vs. HC:2016Peng et al., 2016
Accuracy = 92.35%
Sensitivity = 0.9035
Specificity = 0.9431
AUC = 0.9744
PD-MCI vs. HC:
Accuracy = 83.91%
Sensitivity = 0.8355
Specificity = 0.8587
AUC = 0.9184
PD-MCI vs. PD-NC:
Accuracy = 80.84%
Sensitivity = 0.7705
Specificity = 0.8457
AUC = 0.8677
Classification of PD, HC and SWEDDDiagnosis and subtypingPPMI database831; 245 HC + 518 PD + 68 SWEDDLSSVM-RBF with cross validationAccuracy = 99.9%
Specificity = 100%
Sensitivity = 99.4%
2015Singh and Samavedham, 2015
Classification of PD, HC and SWEDDDiagnosis and differential diagnosisPPMI database741; 262 HC + 408 PD + 71 SWEDDLSSVM-RBF with 10-fold cross validationPD vs. HC accuracy = 95.37%2018Singh et al., 2018
PD vs. SWEDD accuracy = 96.04%
SWEDD vs. HC accuracy = 93.03%
Classification of PD from HCDiagnosisPPMI database408; 204 HC + 204 PDCNN (VGG and ResNet)ResNet50 accuracy = 88.6%2019Yagis et al., 2019
Classification of PD from HCDiagnosisPPMI database754; 158 HC + 596 PDFCN, GCN with 5-fold cross validationAUC = 95.37%2018Zhang et al., 2018

Studies that applied machine learning models to MRI data to diagnose PD (n = 36).

APS, atypical parkinsonian syndromes; AUC, area under the receiver operating characteristic (ROC) curve; CNN, convolutional neural network; CNN-DL, convolutional neural network with discriminative localization; CR-ML, contrast ratio classifier; EER, equal error rate; ELM, extreme learning machine; FCN, fully connected network; FCP, folded concave penalized (learning); FN, false negative; FNR, false negative rate; FP, false positive; FPR, false positive rate; GCN, graph convolutional network; HC, healthy control; JFSS, joint feature-sample selection; LSSVM, least-squares support vector machine; MLP, multilayer perceptron; MSA, multiple system atrophy; MSA-C, multiple system atrophy with a cerebellar syndrome; MSA-P, multiple system atrophy with a parkinsonian type; PD, Parkinson's disease; PD-MCI, PD participants who met criteria for mild cognitive impairment; PD-NC, PD participants with no indication of mild cognitive impairment; PSP, progressive supranuclear palsy; RA-ML, radiomics based classifier; ResNet, residual neural network; RFS-LDA, robust feature-sample linear discriminant analysis; RLDA, robust linear discriminant analysis; SSAE, stacked sparse auto-encoder; SVM, support vector machine; SVM-RBF, support vector machine with radial basis function kernel; SWEDD, PD with scans without evidence of dopaminergic deficit; TN, true negative; TNR, true negative rate; TP, true positive; TPR, true positive rate; XgBoost, extreme gradient boosting.

Handwriting Patterns (n = 16)

Fifteen out of 16 studies used accuracy in model evaluation, with an average accuracy of 87.0 (6.3)% (Table 7). Among these studies, the lowest accuracy was 76.44% (Ali et al., 2019b) and the highest was 99.3% (Pereira et al., 2018; Figure 4A). The highest per-study accuracy was obtained with neural networks in 6 studies (37.5%), SVM in 5 (31.3%), ensemble learning in 4 (25.0%), and naïve Bayes in 1 (6.3%; Figure 4B).

Table 7

ObjectivesType of diagnosisSource of dataType of dataNumber of subjects (n)Machine learning method(s), splitting strategy and cross validationOutcomesYearReferences
Classification of PD from HCDiagnosisHandPDHandwritten patterns92; 18 HC + 74 PDLDA, KNN, Gaussian naïve Bayes, decision tree, Chi2 with AdaBoost with 5- or 4-fold stratified cross validationChi2 with AdaBoost:
Accuracy = 76.44%
Sensitivity = 70.94%
Specificity = 81.94%
2019Ali et al., 2019b
Classification of PD (PD + SWEDD) from HCDiagnosisPPMI databaseMore than one388; 194 HC + 168 PD + 26 SWEDDEnsemble method of several SVM with linear kernel with leave-one-out cross validationAccuracy = 94.38%2018Castillo-Barnes et al., 2018
Classification of PD from HCDiagnosisPPMI databaseMore than one586; 184 HC + 402 PDMLP, BayesNet, random forest, boosted logistic regression with a train-test ratio of 70:30Boosted logistic regression:
Accuracy = 97.159%
AUC curve = 98.9%
2016Challa et al., 2016
Classification of tPD from rETDifferential diagnosisCollected from participantsMore than one30; 15 tPD + 15 rETMulti-kernel SVM with leave-one-out cross validationAccuracy = 100%2014Cherubini et al., 2014b
Classification of PD, HC and atypical PDDiagnosis, differential diagnosis and subtypingPPMI database and SNUH cohortSPECT imaging dataPPMI: 701; 193 HC + 431 PD + 77 SWEDD
SNUH: 82 PD
CNN with train-validation ratio of 90:10PPMI:
Accuracy = 96.0%
Sensitivity = 94.2%
Specificity = 100%
SNUH:
Accuracy = 98.8%
Sensitivity = 98.6%
Specificity = 100%
2017Choi et al., 2017
Classification of PD from HCDiagnosisCollected from participantsOther270; 120 HC + 150 PDRandom forestClassification error = 49.6% (rs11240569)
Classification error = 44.8% (rs708727)
Classification error = 49.3% (rs823156)
2019Cibulka et al., 2019
Classification of PD from HCDiagnosisHandPDHandwritten patterns92; 18 HC + 74 PDNaïve Bayes, OPF, SVM with cross-validationSVM-RBF accuracy = 85.54%2018de Souza et al., 2018
Classification of PD from HCDiagnosisPPMI databaseMore than one1194; 816 HC + 378 PDBoostParkAccuracy = 0.901
AUC-ROC = 0.977
AUC-PR = 0.947
F1-score = 0.851
2017Dhami et al., 2017
Classification of PD and HC, and PD + SWEDD and HCDiagnosisPPMI databaseMore than one430; 127 HC + 263 PD + 40 SWEDDAdaBoost, SVM, naïve Bayes, decision tree, KNN, K-Means with 5-fold cross validationPD vs. HC (adaboost):
Accuracy = 0.98954704
Sensitivity = 0.97831978
Specificity = 0.99796748
PPV = 0.99723757
NPV = 0.98396794
LOR = 10.0058805
PD + SWEDD vs HC (adaboost):
Accuracy = 0.9825784
Sensitivity = 0.97560976
Specificity = 0.98780488
PPV = 0.98360656
NPV = 0.98181818
LOR = 8.08332861
2016Dinov et al., 2016
Classification of PD from HCDiagnosisCollected from participantsCSFCohort 1: 160; 80 HC + 80 PD
Cohort 2: 60; 30 HC + 30 PD
Elastic Net and gradient boosted regression with 10-fold cross validationEnsemble of 60 decision trees identified with gradient boosted model:
Sensitivity = 85%
Specificity = 75%
PPV = 77%
NPV = 83%
AUC = 0.77
2018Dos Santos et al., 2018
Classification of PD from HCDiagnosisCollected from participantsHandwritten patterns75; 38 HC + 37 PDSVM-RBF with stratified 10-fold cross-validationAccuracy = 88.13%
Sensitivity = 89.47%
Specificity = 91.89%
2015Drotár et al., 2015
Classification of PD from HCDiagnosisCollected from participantsHandwritten patterns75; 38 HC + 37 PDKNN, ensemble AdaBoost, SVMSVM:
Accuracy = 81.3%
Sensitivity = 87.4%
Specificity = 80.9%
2016Drotár et al., 2016
Classification of IPD, VaP and HCDifferential diagnosisCollected from participantsMore than one45; 15 HC + 15 IPD + 15 VaPMLP, DBN with 10-fold cross validationIPD + VaP vs HC with MLP:
Accuracy = 95.68%
Specificity = 98.08%
Sensitivity = 92.44%
VaP vs. IPD with DBN:
Accuracy = 75.33%
Specificity = 72.31%
Sensitivity = 79.18%
2018Fernandes et al., 2018
Classification of PD from HCDiagnosisCollected from participantsMore than one75; 15 HC + 60 PD
Blood: 75; 15 HC + 60 PD
FDOPA PET: 58; 14 HC + 44 PD
FDG PET: 67; 16 HC + 51 PD
SVM-linear, random forest with leave-one-out cross validationSVM AUC for FDOPA + metabolomics: 0.98
SVM AUC for FDG + metabolomics: 0.91
2019Glaab et al., 2019
Classification of PD, HC and SWEDDDiagnosis and subtypingPPMI databaseMore than one666; 415 HC + 189 PD + 62 SWEDDEPNN, PNN, SVM, KNN, classification tree with train-test ratio of 90:10EPNN: PD vs SWEDD vs HC accuracy = 92.5%
PD vs HC accuracy = 98.6%
SWEDD vs HC accuracy = 92.0%
PD vs. SWEDD accuracy = 95.3%
2015Hirschauer et al., 2015
Classification of PD from HC and assess the severity of PDDiagnosisPicture Archiving and Communication System (PACS)SPECT imaging data202; 6 HC + 102 mild PD + 94 severe PDLinear regression, SVM-RBF with a train-test ratio of 50:50SVM-RBF:
Sensitivity = 0.828
Specificity = 1.000
PPV = 0.837
NPV = 0.667
Accuracy = 0.832
AUC = 0.845
Kappa = 0.680
2019Hsu et al., 2019
Classification of PD from VPDifferential diagnosisCollected from participantsSPECT imaging data244; 164 PD + 80 VPLogistic regression, LDA, SVM with 10-fold cross-validationSVM:
Accuracy = 0.904
Sensitivity = 0.954
Specificity = 0.801
AUC = 0.954
2014Huertas-Fernández et al., 2015
Classification of PD from HCDiagnosisCollected from participantsSPECT imaging data208; 108 HC + 100 PDSVM, KNN, NM with 3-fold cross validationSVM:
Sensitivity = 89.02%
Specificity = 93.21%
AUC = 0.9681
2012Illan et al., 2012
Classification of PD from HCDiagnosisCollected from participantsHandwritten patterns72; 15 HC + 57 PDCNN with 10-fold cross validation or leave-one-out cross validationAccuracy = 88.89%2018Khatamino et al., 2018
Classification of PD from HCDiagnosisCollected from participantsOther10; 5 HC + 5 PDSVM with leave-one-subject-out cross validationSensitivity = 0.90
Specificity = 0.90
2013Kugler et al., 2013
Classification of PD from HCDiagnosisUCI machine learning repositoryHandwritten patterns72; 15 HC + 57 PDSVM-linear, SVM-RBF, KNN with leave-one-subject-out cross validationSVM-linear:
Accuracy = 97.52%
MCC = 0.9150
F-score = 0.9828
2019İ et al., 2019
Classification of PD from HCDiagnosisCollected postmortemCSF105; 57 HC + 48 PDSVM with 10-fold cross validationSensitivity = 65%
Specificity = 79%
AUC = 0.79
2013Lewitt et al., 2013
Classification of PD from HCDiagnosisCollected from participantsCSF78; 42 HC + 36 PDRandom forest and extreme gradient tree boosting with 10-fold cross validationExtreme gradient tree boosting:
Specificity = 78.6%
Sensitivity = 83.3%
AUC = 83.9%
2018Maass et al., 2018
Classification of PD from HC or NPHDiagnosis and differential diagnosisCollected from participantsCSF157; 68 HC + 82 PD + 7 NPHSVM with 10-fold cross validation or leave-one-out cross validationCohort 1, PD vs HC:
AUC = 0.76
Cohort 2, PD vs HC:
AUC = 0.78
Cohort 3, PD vs HC:
AUC = 0.31
Cohort 4, PD vs NPH:
AUC = 0.88
2020Maass et al., 2020
Classification of PD from HCDiagnosisPPMI databaseMore than one550; 157 HC + 342 PD + 51 SWEDDSVM, random forest, MLP, logistic regression, KNN with nested cross-validationMotor features, SVM:
Accuracy = 78.4%
AUC = 84.7%
Non-motor features, KNN:
Accuracy = 82.2%
AUC = 88%
2018Mabrouk et al., 2019
Classification of PD from HCDiagnosisPPMI databaseSPECT imaging data642; 194 HC + 448 PDCNN (LENET53D, ALEXNET3D) with 10-fold stratified cross-validationALEXNET3D:
Accuracy = 94.1%
AUC = 0.984
2018Martinez-Murcia et al., 2018
Classification of PD from HCDiagnosisCollected from participantsHandwritten patterns75; 10 HC + 65 PDMLP, non-linear SVM, random forest, logistic regression with stratified 10-fold cross-validationMLP:
Accuracy = 84%
Sensitivity = 75.7%
Specificity = 88.9%
Weighted Kappa = 0.65
AUC = 0.86
2015Memedi et al., 2015
Classification of PD from HCDiagnosisParkinson's Disease Handwriting Database (PaHaW)Handwritten patterns69; 36 HC + 33 PDRandom forest with stratified 7-fold cross-validationAccuracy = 89.81%
Sensitivity = 88.63%
Specificity = 90.87%
MCC = 0.8039
2018Mucha et al., 2018
Classification of PD, MSA, PSP, CBS and HCDifferential diagnosisCollected from participantsSPECT imaging data578; 208 HC + 280 PD + 21 MSA + 41 PSP + 28 CBSSVM with 5-fold cross-validationAccuracy = 58.4–92.9%2019Nicastro et al., 2019
Classification of PD from HCDiagnosisCollected from participantsHandwritten patterns30; 15 HC + 15 PDKNN, decision tree, random forest, SVM, AdaBoost with 3-fold cross validationRandom forest accuracy = 0.912018Nõmm et al., 2018
Classification of HC, AD and PDDiagnosis and differential diagnosisThe authors' institutional OCT databaseOther75; 27 HC + 28 PD + 20 ADSVM-RBF with 2-, 5- and 10-fold cross validationAccuracy = 87.7%
HC sensitivity = 96.2%
HC specificity = 88.2%
PD sensitivity = 87.0%
PD specificity = 100.0%
2019Nunes et al., 2019
Classification of idiopathic PD, atypical Parkinsonian and ETDifferential diagnosisCollected from participantsOther85; 50 idiopathic PD + 26 atypical PD + 9 ETSVM, random forest with leave-one-out cross validationSVM accuracy = 100%
Random forest accuracy = 98.5%
2019Nuvoli et al., 2019
Classification of PD from HCDiagnosisPPMI databaseSPECT imaging data654; 209 HC + 445 PDSVM-linear with leave-one-out cross validationAccuracy = 97.86%
Sensitivity = 97.75%
Specificity = 98.09%
2015Oliveira and Castelo-Branco, 2015
Classification of PD from HCDiagnosisPPMI databaseSPECT imaging data652; 209 HC + 443 PDSVM-linear, KNN, logistic regression with leave-one-out cross validationSVM-linear:
Accuracy = 97.9%
Sensitivity = 98.0%
Specificity = 97.6%
2017Oliveira F. et al., 2018
Classification of PD and non-PD (ET, drug-induced Parkinsonism)Differential diagnosisCollected from participantsSPECT imaging data90; 56 PD + 34 non-PDSVM-RBF with leave-one-out or 5-fold cross validationAccuracy = 95.6%2014Palumbo et al., 2014
Classification of PD from HCDiagnosisCollected from participantsHandwritten patterns55; 18 HC + 37 PDNaïve Bayes, OPF, SVM-RBF with 10-fold cross validationNaïve Bayes accuracy = 78.9%2015Pereira et al., 2015
Classification of PD from HCDiagnosisHandPDHandwritten patterns92; 18 HC + 74 PDNaïve Bayes, OPF, SVM-RBF with cross-validationSVM-RBF recognition rate (sensitivity) = 66.72%2016Pereira et al., 2016a
Classification of PD from HCDiagnosisExtended HandPD dataset with signals extracted from a smart penHandwritten patterns35; 21 HC + 14 PDCNN with cross validation with a train:test ratio of 75:25 or 50:50Accuracy = 87.14%2016Pereira et al., 2016b
Classification of PD from HCDiagnosisHandPDHandwritten patterns92; 18 HC + 74 PDCNN, OPF, SVM, naïve Bayes with train-test split = 50:50CNN-Cifar10 accuracy = 99.30%
Early stage accuracy with CNN-ImageNet = 96.35% or 94.01% for Exam 3 or Exam 4
2018Pereira et al., 2018
Classification of PD from HCDiagnosisUCI machine learning repositoryMore than oneDataset 1: 40; 20 HC + 20 PD
Dataset 2: 77; 15 HC + 62 PD
Random forest, KNN, SVM-RBF, ensemble method with 5-fold cross validationEnsemble method:
Accuracy = 95.89%
Specificity = 100%
Sensitivity = 91.43%
2019Pham et al., 2019
Classification of PD from HCDiagnosisPPMI databaseMore than one618; 195 HC + 423 PDSVM-linear, SVM-RBF, classification tree with a train-test ratio of 70:30SVM-RBF, test set:
Accuracy = 85.48%
Sensitivity = 90.55%
Specificity = 74.58%
AUC = 88.22%
2014Prashanth et al., 2014
Classification of PD from HCDiagnosis and subtypingPPMI databaseSPECT imaging data715; 208 HC + 427 PD + 80 SWEDDSVM, naïve Bayes, boosted trees, random forest with 10-fold cross validationSVM:
Accuracy = 97.29%
Sensitivity = 97.37%
Specificity = 97.18%
AUC = 99.26
2016Prashanth et al., 2017
Classification of PD from HCDiagnosisPPMI databaseMore than one584; 183 HC + 401 PDNaïve Bayes, SVM-RBF, boosted trees, random forest with 10-fold cross validationSVM:
Accuracy = 96.40%
Sensitivity = 97.03%
Specificity = 95.01%
AUC = 98.88%
2016Prashanth et al., 2016
Classification of PD from HCDiagnosisPPMI databaseOther626; 180 HC + 446 PDLogistic regression, random forests, boosted trees, SVM with cross validationAccuracy > 95%
AUC > 95%
Random forests:
Accuracy = 96.20–97.14% (95% CI)
2018Prashanth and Dutta Roy, 2018
Classification of PD from HCDiagnosismPower databaseMore than one133 out of 1,513 with complete source data; 46 HC + 87 PDLogistic regression, random forests, DNN, CNN, Classifier Ensemble, Multi-Source Ensemble learning with stratified 10-fold cross validationEnsemble learning:
Accuracy = 82.0%
F1-score = 87.1%
2019Prince et al., 2019
Classification of PD from HCDiagnosisHandPDHandwritten patterns35; 21 HC + 14 PDBidirectional Gated Recurrent Units with a train-validation-test ratio of 40:10:50 or 65:10:25The Spiral dataset:
Accuracy = 89.48%
Precision = 0.848
Recall = 0.955
F1-score = 0.897
The Meander dataset:
Accuracy = 92.24%
Precision = 0.952
Recall = 0.883
F1-score = 0.924
2019Ribeiro et al., 2019
Classification of PD from HCDiagnosisCollected from participantsHandwritten patterns130; 39 elderly HC + 40 young HC + 39 PD + 6 PD (validation set) + 6 HC (validation set)KNN, SVM-Gaussian, random forest with leave-one-out cross validationSVM for PD vs young HC:
Accuracy = 94.0%
Sensitivity = 0.94
Specificity = 0.94
F1-score = 0.94
SVM for PD vs elderly HC:
Accuracy = 89.3%
Sensitivity = 0.89
Specificity = 0.89
F1-score = 0.89
Random forest for validation set:
Accuracy = 83.3%
Sensitivity = 0.92
Specificity = 0.93
F1-score = 0.92
2019Rios-Urrego et al., 2019
Classification of IPD from non-IPDDifferential diagnosisCollected from participantsPET imaging87; 39 IPD + 48 non-IPD (24 MSA + 24 PSP)SVM with leave-one-out cross validationAccuracy = 78.16%
Sensitivity = 69.29%
Specificity = 85.42%
2015Segovia et al., 2015
Classification of PD from HCDiagnosisDataset from “Virgen de la Victoria” hospitalSPECT imaging data189; 94 HC + 95 PDSVM with 10-fold cross validationAccuracy = 94.25%
Sensitivity = 91.26%
Specificity = 96.17%
2019Segovia et al., 2019
Classification of PD from HCDiagnosisCollected from participantsOther486; 233 HC + 205 PD + 48 NDDSVM-linear with leave-batch-out cross validationValidation AUC = 0.79
Test AUC = 0.74
2017Shamir et al., 2017
Classification of PD from HCDiagnosisCollected from participantsPET imaging350; 225 HC + 125 PDGLS-DBN with a train-validation ratio of 80:20Test dataset 1:
Accuracy = 90%
Sensitivity = 0.96
Specificity = 0.84
AUC = 0.9120
Test dataset 2:
Accuracy = 86%
Sensitivity = 0.92
Specificity = 0.80
AUC = 0.8992
2019Shen et al., 2019
Classification of PD from HCDiagnosisCollected from participantsOther33; 18 HC + 15 PDSMMKL-linear with leave-one-out cross validationAccuracy = 84.85%
Sensitivity = 80.00%
Specificity = 88.89%
YI = 68.89%
PPV = 85.71%
NPV = 84.21%
F1 score = 82.76%
2018Shi et al., 2018
Classification of PD from HCDiagnosisCollected from participantsMore than onePlasma samples: 156; 76 HC + 80 PD;
CSF samples: 77; 37 HC + 40 PD
PLS, random forest with 10-fold cross validation with train-test ratio of 70:30PLS:
AUC (plasma) = 0.77
AUC (CSF) = 0.90
2018Stoessel et al., 2018
Classification of PD from HCDiagnosisPPMI databaseSPECT imaging data658; 210 HC + 448 PDLogistic Lasso with 10-fold cross validationTest errors:
FP = 2.83%
FN = 3.78%
Net error = 3.47%
2017Tagare et al., 2017
Classification of PD from HCDiagnosisPDMultiMCHandwritten patterns42; 21 HC + 21 PDCNN, CNN-BLSTM with stratified 3-fold cross validationCNN:
Accuracy = 83.33%
Sensitivity = 85.71%
Specificity = 80.95%
CNN-BLSTM:
Accuracy = 83.33%
Sensitivity = 71.43%
Specificity = 95.24%
2019Taleb et al., 2019
Classification of PD from HCDiagnosisPPMI database and local databaseSPECT imaging dataLocal: 304; 113 Non-PDD + 191 PD
PPMI: 657; 209 HC + 448 PD
SVM with stratified, nested 10-fold cross-validationLocal data:
Accuracy = 0.88 to 0.92
PPMI:
Accuracy = 0.95 to 0.97
2017Taylor and Fenner, 2017
Classification of PD from HCDiagnosisCollected from participantsCSF87; 43 HC + 44 PDLogistic regressionSensitivity = 0.797
Specificity = 0.800
AUC = 0.833
2017Trezzi et al., 2017
Classification of PD from HCDiagnosisCollected from participantsOther38; 24 HC + 14 PDSVM-RFE with repeated leave-one-out bootstrap validationAccuracy = 89.6%2013Tseng et al., 2013
Classification of MSA and PDDifferential diagnosisCollected from participantsMore than one85; 25 HC + 30 PD + 30 MSA-PNNAUC = 0.7752019Tsuda et al., 2019
Classification of PD from HCDiagnosisCollected from participantsOther59; 30 HC + 29 PDLogistic regression, decision tree, extra treeExtra tree AUC = 0.994222018Vanegas et al., 2018
Classification of PD from HCDiagnosisCommercially sourcedOther30; 15 HC + 15 PDDecision treeCross validation score = 0.86 (male)
Cross validation score = 0.63 (female)
2019Váradi et al., 2019
Classification of PD from HCDiagnosisCollected from participantsMore than one84; 40 HC + 44 PDCNN with train-validation-test ratio of 80:10:10Accuracy = 97.6%
AUC = 0.988
2018Vásquez-Correa et al., 2019
Classification of PD and ParkinsonismDifferential diagnosisThe NTUA Parkinson DatasetMore than one78; 55 PD + 23 ParkinsonismMTL with DNNAccuracy = 0.91
Precision = 0.83
Sensitivity = 1.0
Specificity = 0.83
AUC = 0.92
2018Vlachostergiou et al., 2018
Classification of PD from HCDiagnosisPPMI databaseMore than one534; 165 HC + 369 PDpGTL with 10-fold cross validationAccuracy = 97.4%2017Wang et al., 2017
Classification of PD from HCDiagnosisPPMI databaseSPECT imaging data645; 207 HC + 438 PDCNN with train-validation-test ratio of 60:20:20Accuracy = 0.972
Sensitivity = 0.983
Specificity = 0.962
2019Wenzel et al., 2019
Classification of PD from HCDiagnosisCollected from participantsPET imagingCohort 1: 182; 91 HC + 91 PD
Cohort 2: 48; 26 HC + 22 PD
SVM-linear, SVM-sigmoid, SVM-RBF with 5-fold cross validationCohort 1:
Accuracy = 91.26%
Sensitivity = 89.43%
Specificity = 93.27%
Cohort 2:
Accuracy = 90.18%
Sensitivity = 82.05%
Specificity = 92.05%
2019Wu et al., 2019
Classification of PD, MSA and PSPDifferential diagnosisCollected from participantsPET imaging920; 502 PD + 239 MSA + 179 PSP3D residual CNN with 6-fold cross validationClassification of PD:
Sensitivity = 97.7%
Specificity = 94.1%
PPV = 95.5%
NPV = 97.0%
Classification of MSA:
Sensitivity = 96.8%
Specificity = 99.5%
PPV = 98.7%
NPV = 98.7%
Classification of PSP:
Sensitivity = 83.3%
Specificity = 98.3%
PPV = 90.0%
NPV = 97.8%
2019Zhao et al., 2019

Studies that applied machine learning models to handwritten patterns, SPECT, PET, CSF, other data types and combinations of data to diagnose PD (n = 67).

AD, Alzheimer's disease; AUC or AUC-ROC, area under the receiver operating characteristic (ROC) curve; AUC-PR, area under the precision-recall (PR) curve; BLSTM, bidirectional long short-term memory; CBS, corticobasal syndrome; CNN, convolutional neural network; CSF, cerebrospinal fluid; DBN, deep belief network; DNN, deep neural network; EPNN, enhanced probabilistic neural network; ET, essential tremor; FN, false negative; FP, false positive; GLS-DBN, group Lasso sparse deep belief network; HC, healthy control; IPD, idiopathic Parkinson's disease; KNN, k-nearest neighbors; LDA, linear discriminant analysis; LOR, log odds ratio; MCC, Matthews correlation coefficient; MLP, multilayer perceptron; MSA, multiple system atrophy; MSA-P, Parkinson's variant of multiple system atrophy; MTL, multi-task learning; NDD, neurodegenerative disease; NM, nearest mean; non-PDD, patients without pre-synaptic dopaminergic deficit; NPH, normal pressure hydrocephalus; NPV, negative predictive value; OPF, optimum-path forest; PD, Parkinson's disease; PET, positron emission tomography; pGTL, progressive graph-based transductive learning; PLS, partial least square; PNN, probabilistic neural network; PPV, positive predictive value; PSP, progressive supranuclear palsy; rET, essential tremor with rest tremor; SMMKL, soft margin multiple kernel learning; SPECT, single-photon emission computed tomography; SVM, support vector machine; SVM-RBF, support vector machine with radial basis function kernel; SVM-RFE, support vector machine-recursive feature elimination; SWEDD, PD with scans without evidence of dopaminergic deficit; tPD, tremor-dominant Parkinson's disease; VaP or VP, vascular Parkinsonism; YI, Youden's Index.

SPECT (n = 14)

The average accuracy of the 12 out of 14 studies that used accuracy to measure the performance of machine learning models was 94.4 (4.2)% (Table 7). The lowest reported accuracy was 83.2% (Hsu et al., 2019) and the highest was 97.9% (Oliveira F. et al., 2018; Figure 4A). SVM led to the highest per-study accuracy in 10 of the 14 studies (71.4%), neural networks in 3 (21.4%), and regression in 1 (7.1%; Figure 4B).

PET (n = 4)

All 4 studies used sensitivity and specificity in model evaluation (Table 7), while 3 also used accuracy. The average accuracy of these 3 studies was 85.6 (6.6)%, with the lowest accuracy at 78.16% (Segovia et al., 2015) and the highest at 90.72% (Wu et al., 2019; Figure 4A). Two of the 4 studies (50.0%) obtained the highest per-study accuracy with SVM (Segovia et al., 2015; Wu et al., 2019) and the other two (50.0%) with neural networks (Figure 4B).

CSF (n = 5)

All 5 studies used AUC, rather than accuracy, to evaluate machine learning models (Table 7). The average AUC was 0.8 (0.1); the lowest AUC was 0.6825 (Maass et al., 2020) and the highest was 0.839 (Maass et al., 2018). Two studies obtained the highest per-study AUC with ensemble learning, 2 with SVM and 1 with regression (Figure 4B).

Other Types of Data (n = 10)

Only 5 of the 10 studies used accuracy to measure the performance of machine learning models (Table 7). Their average accuracy was 91.9 (6.4)%, with the lowest at 84.85% (Shi et al., 2018) and the highest at 100% (Nuvoli et al., 2019; Figure 4A). Out of the 10 studies, 5 (50%) achieved the highest per-study accuracy with SVM, 3 (30%) with ensemble learning, 1 (10%) with decision trees and 1 (10%) with a machine learning model outside the given categories (Figure 4B).

Combination of More Than One Data Type (n = 18)

Out of the 18 studies that used more than one type of data, 15 used accuracy in model evaluation (Table 7). The average accuracy was 92.6 (6.1)%; the lowest and highest accuracy among the 15 studies were 82.0% (Prince et al., 2019) and 100.0% (Cherubini et al., 2014b), respectively (Figure 4A). The highest per-study accuracy was achieved with ensemble learning in 6 studies (33.3%), neural networks in 5 (27.8%), SVM in 4 (22.2%), regression in 1 (5.6%) and nearest neighbor in 1 (5.6%). One study (5.6%) obtained its highest per-study accuracy with a machine learning model outside the given categories (Figure 4B).

Discussion

Principal Findings

In this review, we present results from published studies that applied machine learning to the diagnosis and differential diagnosis of PD. Since the number of included papers was relatively large, we focused on a high-level summary rather than a detailed description of methodology and direct comparison of outcomes of individual studies. We also provide an overview of sample size, data source and data type, for a more in-depth understanding of methodological differences across studies and their outcomes. Furthermore, we assessed (a) how large the participant pool/dataset was, (b) to what extent new data (i.e., unpublished, raw data acquired from locally recruited human participants) were collected and used, and (c) the feasibility of machine learning and the possibility of introducing new biomarkers in the diagnosis of PD. Overall, methodology studies that proposed and tested novel technical approaches (e.g., machine learning and deep learning models, data acquisition devices, and feature extraction algorithms) have repeatedly shown that features extracted from data modalities such as voice recordings and handwritten patterns can yield high patient-level diagnostic performance while keeping data acquisition accessible and non-invasive. Nevertheless, only a small number of studies further validated these technical approaches in clinical settings using local human participants recruited specifically for these studies, indicating a gap between model development and clinical application.

A per-study diagnostic accuracy above chance level was achieved in all studies that used accuracy in model evaluation (Figure 4A). Apart from studies using CSF data, which measured model performance with AUC, classification accuracy associated with the 8 other data types ranged from 85.6% (PET) to 94.4% (SPECT), with an average of 89.9 (3.0)%. Therefore, although the small number of studies for some data types may not allow a generalizable prediction of how well these data types can help differentiate PD from HC or atypical Parkinsonian disorders, the application of machine learning to a variety of data types led to high accuracy in the diagnosis of PD. In addition, accuracy significantly above chance level was achieved with all machine learning models (Supplementary Table 1); SVM, neural networks and ensemble learning were among the most popular model choices and proved applicable to a wide variety of data modalities. Compared with other models, they led to the per-study highest classification accuracy in >50% of all cases (50.7, 51.9, and 52.3%, respectively; Supplementary Table 1). Despite the high diagnostic accuracy and performance reported, a number of studies did not specify their data splitting strategies or use of cross validation. For data modalities such as 3D MRI scans, when 2D slices are extracted from 3D volumes, multiple slices can be generated for one subject. Having data from the same subject spread across the training, validation and test sets constitutes a biased data split (Wen et al., 2020), causing data leakage and overestimation of model performance, thus compromising the reproducibility of published results.
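The subject-wise splitting described above can be enforced mechanically. As a minimal sketch (the slice counts, feature dimensions and labels below are illustrative, not taken from any reviewed study), scikit-learn's `GroupKFold` keeps every slice of a given subject in a single fold, so no subject contributes data to both the training and the test set:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Illustrative data: 10 subjects, 4 2D slices each, 5 features per slice.
rng = np.random.default_rng(0)
n_subjects, slices_per_subject = 10, 4
X = rng.normal(size=(n_subjects * slices_per_subject, 5))
y = np.repeat(rng.integers(0, 2, size=n_subjects), slices_per_subject)  # one label per subject
groups = np.repeat(np.arange(n_subjects), slices_per_subject)           # subject ID per slice

# GroupKFold splits on subject IDs, not on individual slices:
# every subject's slices land entirely in the training fold or the test fold.
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups):
    assert set(groups[train_idx]).isdisjoint(groups[test_idx])
```

Splitting on raw slice indices instead (e.g., a plain `KFold` over `X`) would scatter each subject's slices across folds, reproducing exactly the leakage discussed above.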

As previously discussed (Belić et al., 2019), although satisfactory diagnostic outcomes could be achieved, sample sizes in a few studies were extremely small (<15 subjects). The application of some machine learning models, especially neural networks, typically relies on large datasets. Nevertheless, collecting data from a large pool of participants remains challenging in clinical studies, and the data generated are commonly of high dimensionality and small sample size (Vabalas et al., 2019). One solution to this challenge is to combine data from a local cohort with public repositories, including PPMI, the UCI machine learning repository, PhysioNet and many others, depending on the type of data collected from the local cohort. Furthermore, when there is a large difference in group size (i.e., a class imbalance problem), labeling all samples as the majority class may yield a deceptively high accuracy. In this case, evaluating machine learning models with additional metrics, including precision, recall and F1 score, is recommended (Jeni et al., 2013).
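To illustrate why accuracy alone can mislead under class imbalance, consider a toy test set with hypothetical counts (a sketch assuming scikit-learn, not data from any reviewed study):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Toy imbalanced test set: 90 healthy controls (0), 10 PD patients (1).
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100  # a degenerate model labeling everyone as the majority class

# Accuracy looks strong, yet the model detects no patients at all.
print(accuracy_score(y_true, y_pred))                 # 0.9
print(precision_score(y_true, y_pred, zero_division=0))  # 0.0
print(recall_score(y_true, y_pred, zero_division=0))     # 0.0
print(f1_score(y_true, y_pred, zero_division=0))         # 0.0
```

Precision, recall and F1 expose the failure that the 90% accuracy conceals.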

Although high diagnostic accuracy of PD has already been achieved in clinical settings, machine learning approaches have reached comparably high accuracy, as shown in the present review, and models such as SVM and neural networks are particularly useful for (a) diagnosing PD using data modalities that have been overlooked in clinical decision making (e.g., voice), and (b) identifying highly relevant features within these data. For example, combining machine learning models with feature selection techniques allows the relative importance of features in a large feature space to be assessed so that the most differentiating ones can be selected, which is conventionally challenging using manual approaches. Regarding novel markers that allow for non-invasive diagnosis with relatively high accuracy, e.g., handwritten patterns, only a small number of studies have been conducted, mostly using data from published databases. Given that these databases generally include handwritten patterns from a small number of diagnosed PD patients, sometimes fewer than 15, it would be of great importance to validate the use of handwritten patterns for early diagnosis of PD in larger-scale clinical studies. In the meantime, diagnosing PD using more than one data modality has led to promising results. Accordingly, supplying clinicians with non-motor data and machine learning approaches may support clinical decision making in patients with ambiguous symptom presentations, and/or improve diagnosis at an earlier stage.
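Feature ranking of this kind can be sketched on synthetic data; here a univariate F-test (one of many possible selection techniques, assuming scikit-learn) stands in for the various approaches used across studies:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic stand-in for a wide feature space (e.g., many vocal features),
# of which only a handful actually carry diagnostic signal.
X, y = make_classification(n_samples=120, n_features=30, n_informative=5,
                           n_redundant=0, random_state=0)

# Score every feature against the class label and keep the 5 strongest.
selector = SelectKBest(score_func=f_classif, k=5).fit(X, y)
top = np.argsort(selector.scores_)[::-1][:5]
print("most discriminative feature indices:", top)

X_reduced = selector.transform(X)  # shape (120, 5)
```

In practice the selector would be fit on training data only, inside the cross-validation loop, to avoid the leakage issues discussed above.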

An issue observed in many included studies was the insufficient or inaccurate description of methods or results. Some studies failed to provide accurate information on the number and type of subjects (for example, methodology studies on early diagnosis of PD that lacked a table summarizing subject characteristics, making it difficult to determine the disease stage of recruited patients) or on how machine learning models were implemented, trained and tested. Occasionally, authors omitted basic information such as the number of subjects and their medical conditions and instead referred to another publication. Although we attempted to list model hyperparameters and cross-validation strategies in the data extraction table, many included studies did not make this information available in the main text, which may hinder replication of their results. In addition, rounding errors and inconsistent reporting of results were observed. Furthermore, although we treated the differentiation of PD from SWEDD as subtyping, there is ongoing controversy over whether it should be considered differential diagnosis or subtyping (Lee et al., 2014; Erro et al., 2016; Chou, 2017; Kwon et al., 2018). Given these limitations, clinicians interested in adopting machine learning models or implementing diagnostic systems based on novel biomarkers are advised to interpret published results with care. In this context, we would like to stress the need for uniform reporting standards in studies using machine learning.

In both machine learning research and clinical settings, appropriately interpreting published results and methodologies is a necessary step toward understanding state-of-the-art methods. Vagueness in reporting therefore not only compromises the interpretation of results but also makes further methodological developments based on published research unnecessarily challenging. Moreover, for medical doctors interested in learning how machine learning methods could be applied in their domains, insufficient description of methods may lead to incorrect model implementation and failed replication.

To enable efficient replication of published results, detailed descriptions of (a) model and architecture (hyperparameters, number and type of layers, layer-specific parameter settings, regularization strategies, activation functions), (b) implementation (programming language, machine learning and deep learning libraries used, model training and testing, metrics and model evaluation, validation strategy, optimization), and (c) version numbers of the software/libraries used for both preprocessing and model implementation are highly desirable, as newer software versions may lead to differences at the pre-processing and model implementation stages (Chepkoech et al., 2016).
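The version information in (c) can be captured programmatically and reported alongside results. A minimal sketch; the dictionary layout is arbitrary, and the listed libraries are examples only:

```python
import platform

import numpy
import sklearn

# Record the exact software environment used for preprocessing and
# model implementation, so readers can reproduce both stages.
environment = {
    "python": platform.python_version(),
    "numpy": numpy.__version__,
    "scikit-learn": sklearn.__version__,
}
print(environment)
```

Saving this dictionary with the study outputs (or pinning versions in a requirements file) removes one common obstacle to replication.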

Given how common imbalanced datasets are in the medical sciences, reporting model performance with a confusion matrix provides a more comprehensive picture of a model's ability to discriminate between PD and healthy controls. In the meantime, due to the costs associated with acquiring patient data, researchers often need to expand data collected from a local cohort with data sourced from publicly available databases or published studies. Nevertheless, unclear descriptions of data acquisition and pre-processing protocols in some published studies may complicate the integration of newly acquired and previously published data. Taken together, to facilitate early, refined diagnosis of PD, to enable efficient application of novel machine learning approaches in clinical settings, and to improve the reproducibility of studies on machine learning-based diagnosis and assessment of PD, greater transparency in reporting data collection, pre-processing protocols, model implementation, and study outcomes is required.
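A confusion matrix makes the per-class behavior explicit, rather than collapsing it into a single accuracy figure. A sketch with hypothetical predictions (assuming scikit-learn; 1 = PD, 0 = control):

```python
from sklearn.metrics import confusion_matrix

# Hypothetical predictions on an imbalanced test set of 100 subjects:
# 90 controls (85 labeled correctly, 5 false alarms) and
# 10 PD patients (7 detected, 3 missed).
y_true = [0] * 85 + [0] * 5 + [1] * 7 + [1] * 3
y_pred = [0] * 85 + [1] * 5 + [1] * 7 + [0] * 3

# For labels [0, 1], ravel() yields (TN, FP, FN, TP).
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TN={tn} FP={fp} FN={fn} TP={tp}")  # TN=85 FP=5 FN=3 TP=7
```

Reporting all four cells lets readers recompute sensitivity, specificity, precision and F1 themselves, which a lone accuracy value does not allow.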

Limitations

In the present study, we excluded research articles in languages other than English and results published as conference abstracts, posters, and talks. Despite the ongoing discussion of the advantages and importance of including conference abstracts in systematic reviews and reviews (Scherer and Saldanha, 2019), conference abstracts often do not report sufficient key information, which is why we excluded them; this may, however, introduce a publication and reporting bias. In addition, since the aim of the present review was to assess and summarize published studies on the detection and early diagnosis of PD, a few large-scale, multi-centric studies on subtyping and/or severity assessment of PD were excluded. Given the current challenges in subtyping, severity assessment and prognosis of PD, reviewing these studies would be a further step toward a more systematic understanding of the application of machine learning to neurodegenerative diseases.

Moreover, due to the high inter-study variance in data sources and presentation of results, it was challenging to directly compare outcomes associated with each type of model across studies, as some studies failed to indicate whether model performance was evaluated on a test set, and/or did not report results for models that did not yield the best per-study performance. Results of published studies were therefore discussed and summarized based on the data and machine learning models used. For data modalities such as PET (n = 4) or CSF (n = 5), the number of studies was too small, despite the high total number of included studies, to assess the general performance of machine learning techniques applied to these data.

Conclusions

To the best of our knowledge, the present study is the first review to include results from all studies that applied machine learning methods to the diagnosis of PD. Here, we presented the included studies in a high-level summary, providing access to information including (a) machine learning methods that have been used in the diagnosis of PD and their associated outcomes, (b) types of clinical, behavioral and biometric data that could be used for rendering more accurate diagnoses, (c) potential biomarkers for assisting clinical decision making, and (d) other highly relevant information, including databases that could be used to enlarge and enrich smaller datasets. In summary, machine learning-assisted diagnosis of PD holds high potential for a more systematic clinical decision-making process, while the adoption of novel biomarkers may facilitate diagnosis of PD at an earlier stage. Machine learning approaches therefore have the potential to provide clinicians with additional tools to screen, detect or diagnose PD.

Statements

Data availability statement

The original contributions generated for the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.

Author contributions

JM conceived and designed the study, collected the data, performed the analysis, and wrote the paper. CD and JF supervised the research. All authors contributed to the article and approved the submitted version.

Funding

JM was supported by the Québec Bio-Imaging Network Postdoctoral Fellowship (FRSQ—Réseaux de recherche thématiques; Dossier: 35450). JF was supported by FRQS (#283144), Parkinson Québec, Parkinson Canada (PPG-2020-0000000061), and CIHR (#PJT173514).

Acknowledgments

We thank Dr. Antje Haehner for her comments on the manuscript. This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and Québec Bio-Imaging Network.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnagi.2021.633752/full#supplementary-material

References

  • 1

    Abiyev, R. H., Abizade, S. (2016). Diagnosing Parkinson's diseases using fuzzy neural system. Comput. Math. Methods Med. 2016:1267919. 10.1155/2016/1267919

  • 2

    Abos, A., Baggio, H. C., Segura, B., Campabadal, A., Uribe, C., Giraldo, D. M., et al. (2019). Differentiation of multiple system atrophy from Parkinson's disease by structural connectivity derived from probabilistic tractography. Sci. Rep. 9:16488. 10.1038/s41598-019-52829-8

  • 3

    Abujrida, H., Agu, E., Pahlavan, K. (2017). Smartphone-based gait assessment to infer Parkinson's disease severity using crowdsourced data, in 2017 IEEE Healthcare Innovations and Point of Care Technologies (HI-POCT) (Bethesda, MD), 208–211. 10.1109/HIC.2017.8227621

  • 4

    Adams, W. R. (2017). High-accuracy detection of early Parkinson's Disease using multiple characteristics of finger movement while typing. PLoS ONE 12:e0188226. 10.1371/journal.pone.0188226

  • 5

    Adeli, E., Shi, F., An, L., Wee, C.-Y., Wu, G., Wang, T., et al. (2016). Joint feature-sample selection and robust diagnosis of Parkinson's disease from MRI data. NeuroImage 141, 206–219. 10.1016/j.neuroimage.2016.05.054

  • 6

    Adeli, E., Thung, K.-H., An, L., Wu, G., Shi, F., Wang, T., et al. (2019). Semi-supervised discriminative classification robust to sample-outliers and feature-noises. IEEE Trans. Pattern Anal. Mach. Intell. 41, 515–522. 10.1109/TPAMI.2018.2794470

  • 7

    Agarwal, A., Chandrayan, S., Sahu, S. S. (2016). Prediction of Parkinson's disease using speech signal with Extreme Learning Machine, in 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT) (Chennai), 3776–3779. 10.1109/ICEEOT.2016.7755419

  • 8

    Ahlrichs, C., Lawo, M. (2013). Parkinson's disease motor symptoms in machine learning: a review. arXiv preprint arXiv:1312.3825. 10.5121/hiij.2013.2401

  • 9

    Ahmadi, S. A., Vivar, G., Frei, J., Nowoshilow, S., Bardins, S., Brandt, T., et al. (2019). Towards computerized diagnosis of neurological stance disorders: data mining and machine learning of posturography and sway. J. Neurol. 266(Suppl 1), 108–117. 10.1007/s00415-019-09458-y

  • 10

    Aich, S., Kim, H., Younga, K., Hui, K. L., Al-Absi, A. A., Sain, M. (2019). A supervised machine learning approach using different feature selection techniques on voice datasets for prediction of Parkinson's disease, in 2019 21st International Conference on Advanced Communication Technology (ICACT) (PyeongChang), 1116–1121. 10.23919/ICACT.2019.8701961

  • 11

    Alam, M. N., Garg, A., Munia, T. T. K., Fazel-Rezai, R., Tavakolian, K. (2017). Vertical ground reaction force marker for Parkinson's disease. PLoS ONE 12:e0175951. 10.1371/journal.pone.0175951

  • 12

    Alaskar, H., Hussain, A. (2018). Prediction of Parkinson disease using gait signals, in 2018 11th International Conference on Developments in eSystems Engineering (DeSE) (Cambridge), 23–26. 10.1109/DeSE.2018.00011

  • 13

    Al-Fatlawi, A. H., Jabardi, M. H., Ling, S. H. (2016). Efficient diagnosis system for Parkinson's disease using deep belief network, in 2016 IEEE Congress on Evolutionary Computation (CEC) (Vancouver, BC), 1324–1330. 10.1109/CEC.2016.7743941

  • 14

    Alharthi, A. S., Ozanyan, K. B. (2019). Deep learning for ground reaction force data analysis: application to wide-area floor sensing, in 2019 IEEE 28th International Symposium on Industrial Electronics (ISIE) (Vancouver, BC), 1401–1406. 10.1109/ISIE.2019.8781511

  • 15

    Ali, L., Khan, S. U., Arshad, M., Ali, S., Anwar, M. (2019a). A multi-model framework for evaluating type of speech samples having complementary information about Parkinson's disease, in 2019 International Conference on Electrical, Communication, and Computer Engineering (ICECCE) (Swat), 1–5. 10.1109/ICECCE47252.2019.8940696

  • 16

    Ali, L., Zhu, C., Golilarz, N. A., Javeed, A., Zhou, M., Liu, Y. (2019b). Reliable Parkinson's disease detection by analyzing handwritten drawings: construction of an unbiased cascaded learning system based on feature selection and adaptive boosting model. IEEE Access 7, 116480–116489. 10.1109/ACCESS.2019.2932037

  • 17

    Ali, L., Zhu, C., Zhang, Z., Liu, Y. (2019c). Automated detection of Parkinson's disease based on multiple types of sustained phonations using linear discriminant analysis and genetically optimized neural network. IEEE J. Transl. Eng. Health Med. 7, 1–10. 10.1109/JTEHM.2019.2940900

  • 18

    Alqahtani, E. J., Alshamrani, F. H., Syed, H. F., Olatunji, S. O. (2018). Classification of Parkinson's disease using NNge classification algorithm, in 2018 21st Saudi Computer Society National Computer Conference (NCC) (Riyadh), 1–7. 10.1109/NCG.2018.8592989

  • 19

    Amoroso, N., La Rocca, M., Monaco, A., Bellotti, R., Tangaro, S. (2018). Complex networks reveal early MRI markers of Parkinson's disease. Med. Image Anal. 48, 12–24. 10.1016/j.media.2018.05.004

  • 20

    Anand, A., Haque, M. A., Alex, J. S. R., Venkatesan, N. (2018). Evaluation of machine learning and deep learning algorithms combined with dimensionality reduction techniques for classification of Parkinson's disease, in 2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT) (Louisville, KY), 342–347. 10.1109/ISSPIT.2018.8642776

  • 21

    Andrei, A., Tăuțan, A., Ionescu, B. (2019). Parkinson's disease detection from gait patterns, in 2019 E-Health and Bioengineering Conference (EHB) (Iasi), 1–4. 10.1109/EHB47216.2019.8969942

  • 22

    Baby, M. S., Saji, A. J., Kumar, C. S. (2017). Parkinsons disease classification using wavelet transform based feature extraction of gait data, in 2017 International Conference on Circuit, Power and Computing Technologies (ICCPCT) (Kollam), 1–6. 10.1109/ICCPCT.2017.8074230

  • 23

    Baggio, H. C., Abos, A., Segura, B., Campabadal, A., Uribe, C., Giraldo, D. M., et al. (2019). Cerebellar resting-state functional connectivity in Parkinson's disease and multiple system atrophy: characterization of abnormalities and potential for differential diagnosis at the single-patient level. NeuroImage Clin. 22:101720. 10.1016/j.nicl.2019.101720

  • 24

    Bakar, Z. A., Ispawi, D. I., Ibrahim, N. F., Tahir, N. M. (2012). Classification of Parkinson's disease based on Multilayer Perceptrons (MLPs) neural network and ANOVA as a feature extraction, in 2012 IEEE 8th International Colloquium on Signal Processing and its Applications (Malacca), 63–67. 10.1109/CSPA.2012.6194692

  • 25

    Banerjee, M., Chakraborty, R., Archer, D., Vaillancourt, D., Vemuri, B. C. (2019). DMR-CNN: a CNN tailored for DMR scans with applications to PD classification, in 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019) (Venice), 388–391. 10.1109/ISBI.2019.8759558

  • 26

    Belić, M., Bobić, V., Badža, M., Šolaja, N., Đurić-Jovičić, M., Kostić, V. S. (2019). Artificial intelligence for assisting diagnostics and assessment of Parkinson's disease–a review. Clin. Neurol. Neurosurg. 184:105442. 10.1016/j.clineuro.2019.105442

  • 27

    Benba, A., Jilbab, A., Hammouch, A. (2016a). Discriminating between patients with Parkinson's and neurological diseases using cepstral analysis. IEEE Trans. Neural Syst. Rehab. Eng. 24, 1100–1108. 10.1109/TNSRE.2016.2533582

  • 28

    Benba, A., Jilbab, A., Hammouch, A., Sandabad, S. (2016b). Using RASTA-PLP for discriminating between different neurological diseases, in 2016 International Conference on Electrical and Information Technologies (ICEIT) (Tangiers), 406–409. 10.1109/EITech.2016.7519630

  • 29

    Bernad-Elazari, H., Herman, T., Mirelman, A., Gazit, E., Giladi, N., Hausdorff, J. M. (2016). Objective characterization of daily living transitions in patients with Parkinson's disease using a single body-fixed sensor. J. Neurol. 263, 1544–1551. 10.1007/s00415-016-8164-6

  • 30

    Bhati, S., Velazquez, L. M., Villalba, J., Dehak, N. (2019). LSTM siamese network for Parkinson's disease detection from speech, in 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP) (Ottawa, ON), 1–5. 10.1109/GlobalSIP45357.2019.8969430

  • 31

    Bot, B. M., Suver, C., Neto, E. C., Kellen, M., Klein, A., Bare, C., et al. (2016). The mPower study, Parkinson disease mobile data collected using ResearchKit. Sci. Data 3, 1–9. 10.1038/sdata.2016.11

  • 32

    Braak, H., Del Tredici, K., Rüb, U., De Vos, R. A., Steur, E. N. J., Braak, E. (2003). Staging of brain pathology related to sporadic Parkinson's disease. Neurobiol. Aging 24, 197–211. 10.1016/S0197-4580(02)00065-9

  • 33

    Buongiorno, D., Bortone, I., Cascarano, G. D., Trotta, G. F., Brunetti, A., Bevilacqua, V. (2019). A low-cost vision system based on the analysis of motor features for recognition and severity rating of Parkinson's Disease. BMC Med. Inform. Decision Making 19(Suppl 9):243. 10.1186/s12911-019-0987-5

  • 34

    Butt, A. H., Rovini, E., Dolciotti, C., Bongioanni, P., De Petris, G., Cavallo, F. (2017). Leap motion evaluation for assessment of upper limb motor skills in Parkinson's disease. IEEE Int. Conf. Rehabil. Robot. 2017, 116–121. 10.1109/ICORR.2017.8009232

  • 35

    Butt, A. H., Rovini, E., Dolciotti, C., De Petris, G., Bongioanni, P., Carboncini, M. C., et al. (2018). Objective and automatic classification of Parkinson disease with Leap Motion controller. Biomed. Eng. Online 17:168. 10.1186/s12938-018-0600-7

  • 36

    Cai, Z., Gu, J., Wen, C., Zhao, D., Huang, C., Huang, H., et al. (2018). An intelligent Parkinson's disease diagnostic system based on a chaotic bacterial foraging optimization enhanced fuzzy KNN approach. Comp. Math. Methods Med. 2018:2396952. 10.1155/2018/2396952

  • 37

    Caramia, C., Torricelli, D., Schmid, M., Munoz-Gonzalez, A., Gonzalez-Vargas, J., Grandas, F., et al. (2018). IMU-based classification of Parkinson's disease from gait: a sensitivity analysis on sensor location and feature selection. IEEE J. Biomed. Health Inf. 22, 1765–1774. 10.1109/JBHI.2018.2865218

  • 38

    Castillo-Barnes, D., Ramírez, J., Segovia, F., Martínez-Murcia, F. J., Salas-Gonzalez, D., Górriz, J. M. (2018). Robust ensemble classification methodology for I123-Ioflupane SPECT images and multiple heterogeneous biomarkers in the diagnosis of Parkinson's disease. Front. Neuroinf. 12:53. 10.3389/fninf.2018.00053

  • 39

    Cavallo, F., Moschetti, A., Esposito, D., Maremmani, C., Rovini, E. (2019). Upper limb motor pre-clinical assessment in Parkinson's disease using machine learning. Parkinsonism Relat. Disord. 63, 111–116. 10.1016/j.parkreldis.2019.02.028

  • 40

    Celik, E., Omurca, S. I. (2019). Improving Parkinson's disease diagnosis with machine learning methods, in 2019 Scientific Meeting on Electrical-Electronics & Biomedical Engineering and Computer Science (EBBT) (Istanbul), 1–4. 10.1109/EBBT.2019.8742057

  • 41

    Chakraborty, S., Aich, S., Kim, H.-C. (2020). 3D textural, morphological and statistical analysis of voxel of interests in 3T MRI scans for the detection of Parkinson's disease using artificial neural networks. Healthcare 8:E34. 10.3390/healthcare8010034

  • 42

    Challa, K. N. R., Pagolu, V. S., Panda, G., Majhi, B. (2016). An improved approach for prediction of Parkinson's disease using machine learning techniques, in 2016 International Conference on Signal Processing, Communication, Power and Embedded System (SCOPES) (Paralakhemundi), 1446–1451. 10.1109/SCOPES.2016.7955679

  • 43

    Chen, Y., Storrs, J., Tan, L., Mazlack, L. J., Lee, J.-H., Lu, L. J. (2014). Detecting brain structural changes as biomarker from magnetic resonance images using a local feature based SVM approach. J. Neurosci. Methods 221, 22–31. 10.1016/j.jneumeth.2013.09.001

  • 44

    Chen, Y., Yang, W., Long, J., Zhang, Y., Feng, J., Li, Y., et al. (2015). Discriminative analysis of Parkinson's disease based on whole-brain functional connectivity. PLoS ONE 10:e0124153. 10.1371/journal.pone.0124153

  • 45

    Chepkoech, J. L., Walhovd, K. B., Grydeland, H., Fjell, A. M., Alzheimer's Disease Neuroimaging Initiative (2016). Effects of change in FreeSurfer version on classification accuracy of patients with Alzheimer's disease and mild cognitive impairment. Hum. Brain Mapp. 37, 1831–1841. 10.1002/hbm.23139

  • 46

    Cherubini, A., Morelli, M., Nisticó, R., Salsone, M., Arabia, G., Vasta, R., et al. (2014a). Magnetic resonance support vector machine discriminates between Parkinson disease and progressive supranuclear palsy. Move. Disord. 29, 266–269. 10.1002/mds.25737

  • 47

    Cherubini, A., Nisticó, R., Novellino, F., Salsone, M., Nigro, S., Donzuso, G., et al. (2014b). Magnetic resonance support vector machine discriminates essential tremor with rest tremor from tremor-dominant Parkinson disease. Move. Disord. 29, 1216–1219. 10.1002/mds.25869

  • 48

    Choi, H., Ha, S., Im, H. J., Paek, S. H., Lee, D. S. (2017). Refining diagnosis of Parkinson's disease with deep learning-based interpretation of dopamine transporter imaging. NeuroImage Clin. 16, 586–594. 10.1016/j.nicl.2017.09.010

  • 49

    Chou, K. L. (2017). Diagnosis and Differential Diagnosis of Parkinson Disease. Waltham, MA: UpToDate.

  • 50

    Cibulka, M., Brodnanova, M., Grendar, M., Grofik, M., Kurca, E., Pilchova, I., et al. (2019). SNPs rs11240569, rs708727, and rs823156 in SLC41A1 do not discriminate between Slovak patients with idiopathic Parkinson's disease and healthy controls: statistics and machine-learning evidence. Int. J. Mol. Sci. 20:4688. 10.3390/ijms20194688

  • 51

    Cigdem, O., Demirel, H., Unay, D. (2019). The performance of local-learning based clustering feature selection method on the diagnosis of Parkinson's disease using structural MRI, in 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC) (Bari), 1286–1291. 10.1109/SMC.2019.8914611

  • 52

    Çimen, S., Bolat, B. (2016). Diagnosis of Parkinson's disease by using ANN, in 2016 International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC) (Jalgaon), 119–121. 10.1109/ICGTSPICC.2016.7955281

  • 53

    Contreras-Vidal, J., Stelmach, G. E. (1995). Effects of Parkinsonism on motor control. Life Sci. 58, 165–176. 10.1016/0024-3205(95)02237-6

  • 54

    Cook, D. J., Schmitter-Edgecombe, M., Dawadi, P. (2015). Analyzing activity behavior and movement in a naturalistic environment using smart home techniques. IEEE J. Biomed. Health Inf. 19, 1882–1892. 10.1109/JBHI.2015.2461659

  • 55

    Cuzzolin, F., Sapienza, M., Esser, P., Saha, S., Franssen, M. M., Collett, J., et al. (2017). Metric learning for Parkinsonian identification from IMU gait measurements. Gait Posture 54, 127–132. 10.1016/j.gaitpost.2017.02.012

  • 56

    Dash, S., Thulasiram, R., Thulasiraman, P. (2017). An enhanced chaos-based firefly model for Parkinson's disease diagnosis and classification, in 2017 International Conference on Information Technology (ICIT) (Bhubaneswar), 159–164. 10.1109/ICIT.2017.43

  • 57

    Dastjerd, N. K., Sert, O. C., Ozyer, T., Alhajj, R. (2019). Fuzzy classification methods based diagnosis of Parkinson's disease from speech test cases. Curr. Aging Sci. 12, 100–120. 10.2174/1874609812666190625140311

  • 58

    de Souza, J. W. M., Alves, S. S. A., Rebouças, E. d. S., Almeida, J. S., Rebouças Filho, P. P. (2018). A new approach to diagnose Parkinson's disease using a structural cooccurrence matrix for a similarity analysis. Comput. Intell. Neurosci. 2018:7613282. 10.1155/2018/7613282

  • 59

    Dhami, D. S., Soni, A., Page, D., Natarajan, S. (2017). Identifying Parkinson's patients: a functional gradient boosting approach. Artif. Intell. Med. Conf. Artif. Intell. Med. 10259, 332–337. 10.1007/978-3-319-59758-4_39

  • 60

    Dinesh, A., He, J. (2017). Using machine learning to diagnose Parkinson's disease from voice recordings, in 2017 IEEE MIT Undergraduate Research Technology Conference (URTC) (Cambridge, MA), 1–4. 10.1109/URTC.2017.8284216

  • 61

    Dinov, I. D., Heavner, B., Tang, M., Glusman, G., Chard, K., Darcy, M., et al. (2016). Predictive big data analytics: a study of Parkinson's disease using large, complex, heterogeneous, incongruent, multi-source and incomplete observations. PLoS ONE 11:e0157077. 10.1371/journal.pone.0157077

  • 62

    Djurić-Jovičić, M., Belić, M., Stanković, I., Radovanović, S., Kostić, V. S. (2017). Selection of gait parameters for differential diagnostics of patients with de novo Parkinson's disease. Neurol. Res. 39, 853–861. 10.1080/01616412.2017.1348690

  • 63

    Dorsey, E. R., Elbaz, A., Nichols, E., Abd-Allah, F., Abdelalim, A., Adsuar, J. C., et al. (2018). Global, regional, and national burden of Parkinson's disease, 1990–2016: a systematic analysis for the Global Burden of Disease Study 2016. Lancet Neurol. 17, 939–953. 10.1016/S1474-4422(18)30295-3

  • 64

    Dos Santos, M. C. T., Scheller, D., Schulte, C., Mesa, I. R., Colman, P., Bujac, S. R., et al. (2018). Evaluation of cerebrospinal fluid proteins as potential biomarkers for early stage Parkinson's disease diagnosis. PLoS ONE 13:e0206536. 10.1371/journal.pone.0206536

  • 65

    Dror, B., Yanai, E., Frid, A., Peleg, N., Goldenthal, N., Schlesinger, I., et al. (2014). Automatic assessment of Parkinson's disease from natural hands movements using 3D depth sensor, in 2014 IEEE 28th Convention of Electrical & Electronics Engineers in Israel (IEEEI) (Eilat), 1–5. 10.1109/EEEI.2014.7005763

  • 66

    Drotár, P., Mekyska, J., Rektorová, I., Masarová, L., Smékal, Z., Faundez-Zanuy, M. (2014). Analysis of in-air movement in handwriting: a novel marker for Parkinson's disease. Comp. Methods Progr. Biomed. 117, 405–411. 10.1016/j.cmpb.2014.08.007

  • 67

    Drotár, P., Mekyska, J., Rektorová, I., Masarová, L., Smékal, Z., Faundez-Zanuy, M. (2015). Decision support framework for Parkinson's disease based on novel handwriting markers. IEEE Trans. Neural Syst. Rehabil. Eng. 23, 508–516. 10.1109/TNSRE.2014.2359997

  • 68

    Drotár, P., Mekyska, J., Rektorová, I., Masarová, L., Smékal, Z., Faundez-Zanuy, M. (2016). Evaluation of handwriting kinematics and pressure for differential diagnosis of Parkinson's disease. Artif. Intell. Med. 67, 39–46. 10.1016/j.artmed.2016.01.004

  • 69

    Du, G., Lewis, M. M., Kanekar, S., Sterling, N. W., He, L., Kong, L., et al. (2017). Combined diffusion tensor imaging and apparent transverse relaxation rate differentiate Parkinson disease and atypical Parkinsonism. Am. J. Neuroradiol. 38, 966–972. 10.3174/ajnr.A5136

  • 70

    Dua, D., Graff, C. (2018). UCI Machine Learning Repository. Irvine, CA: University of California, School of Information and Computer Science.

  • 71

    Erdogdu Sakar, B., Serbes, G., Sakar, C. O. (2017). Analyzing the effectiveness of vocal features in early telediagnosis of Parkinson's disease. PLoS ONE 12:e0182428. 10.1371/journal.pone.0182428

  • 72

    Erro, R., Schneider, S. A., Stamelou, M., Quinn, N. P., Bhatia, K. P. (2016). What do patients with scans without evidence of dopaminergic deficit (SWEDD) have? New evidence and continuing controversies. J. Neurol. Neurosurg. Psychiatry 87, 319–323. 10.1136/jnnp-2014-310256

  • 73

    Félix, J. P., Vieira, F. H. T., Cardoso, Á. A., Ferreira, M. V. G., Franco, R. A. P., Ribeiro, M. A., et al. (2019). A Parkinson's disease classification method: an approach using gait dynamics and detrended fluctuation analysis, in 2019 IEEE Canadian Conference of Electrical and Computer Engineering (CCECE) (Edmonton, AB), 1–4. 10.1109/CCECE.2019.8861759

  • 74

    Fernandes, C., Fonseca, L., Ferreira, F., Gago, M., Costa, L., Sousa, N., et al. (2018). Artificial neural networks classification of patients with Parkinsonism based on gait, in 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) (Madrid), 2024–2030. 10.1109/BIBM.2018.8621466

  • 75

    Fernandez, K. M., Roemmich, R. T., Stegemöller, E. L., Amano, S., Thompson, A., Okun, M. S., et al. (2013). Gait initiation impairments in both Essential Tremor and Parkinson's disease. Gait Posture 38, 956–961. 10.1016/j.gaitpost.2013.05.001

  • 76

    Focke, N. K., Helms, G., Scheewe, S., Pantel, P. M., Bachmann, C. G., Dechent, P., et al. (2011). Individual voxel-based subtype prediction can differentiate progressive supranuclear palsy from idiopathic Parkinson syndrome and healthy controls. Human Brain Mapp. 32, 1905–1915. 10.1002/hbm.21161

  • 77

    Frid, A., Safra, E. J., Hazan, H., Lokey, L. L., Hilu, D., Manevitz, L., et al. (2014). Computational diagnosis of Parkinson's disease directly from natural speech using machine learning techniques, in 2014 IEEE International Conference on Software Science, Technology and Engineering (Ramat Gan), 50–53. 10.1109/SWSTE.2014.17

  • 78

    Ghassemi, N. H., Marxreiter, F., Pasluosta, C. F., Kugler, P., Schlachetzki, J., Schramm, A., et al. (2016). Combined accelerometer and EMG analysis to differentiate essential tremor from Parkinson's disease. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2016, 672–675. 10.1109/EMBC.2016.7590791

  • 79

    Glaab, E., Trezzi, J.-P., Greuel, A., Jäger, C., Hodak, Z., Drzezga, A., et al. (2019). Integrative analysis of blood metabolomics and PET brain neuroimaging data for Parkinson's disease. Neurobiol. Dis. 124, 555–562. 10.1016/j.nbd.2019.01.003

  • 80

    Goldberger, A. L., Amaral, L. A., Glass, L., Hausdorff, J. M., Ivanov, P. C., Mark, R. G., et al. (2000). PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation 101, e215–e220. 10.1161/01.CIR.101.23.e215

  • 81

    Gunduz, H. (2019). Deep learning-based Parkinson's disease classification using vocal feature sets. IEEE Access 7, 115540–115551. 10.1109/ACCESS.2019.2936564

  • 82

    Haller, S., Badoud, S., Nguyen, D., Barnaure, I., Montandon, M. L., Lovblad, K. O., et al. (2013). Differentiation between Parkinson disease and other forms of Parkinsonism using support vector machine analysis of susceptibility-weighted imaging (SWI): initial results. Eur. Radiol. 23, 12–19. 10.1007/s00330-012-2579-y

  • 83

    Haller, S., Badoud, S., Nguyen, D., Garibotto, V., Lovblad, K. O., Burkhard, P. R. (2012). Individual detection of patients with Parkinson disease using support vector machine analysis of diffusion tensor imaging data: initial results. Am. J. Neuroradiol. 33, 2123–2128. 10.3174/ajnr.A3126

  • 84

    Haq, A. U., Li, J., Memon, M. H., Khan, J., Din, S. U., Ahad, I., et al. (2018). Comparative analysis of the classification performance of machine learning classifiers and deep neural network classifier for prediction of Parkinson disease, in 2018 15th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP) (Chengdu), 101–106.

  • 85

    Haq, A. U., Li, J. P., Memon, M. H., Khan, J., Malik, A., Ahmad, T., et al. (2019). Feature selection based on L1-norm support vector machine and effective recognition system for Parkinson's disease using voice recordings. IEEE Access 7, 37718–37734. 10.1109/ACCESS.2019.2906350

  • 86

    Hariharan, M., Polat, K., Sindhu, R. (2014). A new hybrid intelligent system for accurate detection of Parkinson's disease. Comput. Methods Programs Biomed. 113, 904–913. 10.1016/j.cmpb.2014.01.004

  • 87

    Hirschauer, T. J., Adeli, H., Buford, J. A. (2015). Computer-aided diagnosis of Parkinson's disease using enhanced probabilistic neural network. J. Med. Syst. 39:179. 10.1007/s10916-015-0353-9

  • 88

    Hsu, S.-Y., Lin, H.-C., Chen, T.-B., Du, W.-C., Hsu, Y.-H., Wu, Y.-C., et al. (2019). Feasible classified models for Parkinson disease from (99m)Tc-TRODAT-1 SPECT imaging. Sensors 19:1740. 10.3390/s19071740

  • 89

    Huertas-FernándezI.García-GómezF. J.García-SolísD.Benítez-RiveroS.Marín-OyagaV. A.JesúsS.et al. (2015). Machine learning models for the differential diagnosis of vascular parkinsonism and Parkinson's disease using [(123)I]FP-CIT SPECT. Eur. J. Nucl. Med. Mol. Imag.42, 112119. 10.1007/s00259-014-2882-8

  • 90

    HuppertzH.-J.MöllerL.SüdmeyerM.HilkerR.HattingenE.EggerK.et al. (2016). Differentiation of neurodegenerative parkinsonian syndromes by volumetric magnetic resonance imaging analysis and support vector machine classification. Mov. Disord.31, 15061517. 10.1002/mds.26715

  • 91

    IK.UlukayaS.ErdemO. (2019). Classification of Parkinson's disease using dynamic time warping, in 2019 27th Telecommunications Forum (TELFOR) (Belgrade), 14

  • 92

    IllanI. A.GorrzJ. M.RamirezJ.SegoviaF.Jimenez-HoyuelaJ. M.Ortega LozanoS. J. (2012). Automatic assistance to Parkinson's disease diagnosis in DaTSCAN SPECT imaging. Med. Phys.39, 59715980. 10.1118/1.4742055

  • 93

    IslamM. S.ParvezI.HaiD.GoswamiP. (2014). Performance comparison of heterogeneous classifiers for detection of Parkinson's disease using voice disorder (dysphonia), in 2014 International Conference on Informatics, Electronics & Vision (ICIEV) (Dhaka), 17. 10.1109/ICIEV.2014.6850849

  • 94

    JankovicJ. (2008). Parkinson's disease: clinical features and diagnosis. J. Neurol. Neurosurg. Psychiatry79, 368376. 10.1136/jnnp.2007.131045

  • 95

    JavedF.ThomasI.MemediM. (2018). A comparison of feature selection methods when using motion sensors data: a case study in Parkinson's disease. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc.2018, 54265429. 10.1109/EMBC.2018.8513683

  • 96

    JeniL. A.CohnJ. F.De La TorreF. (2013). Facing imbalanced data–recommendations for the use of performance metrics, in 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (Geneva: IEEE), 245251. 10.1109/ACII.2013.47

  • 97

    JiW.LiY. (2012). Energy-based feature ranking for assessing the dysphonia measurements in Parkinson detection. IET Signal Proc.6, 300305. 10.1049/iet-spr.2011.0186

  • 98

    JohnsonS. J.DienerM. D.KaltenboeckA.BirnbaumH. G.SiderowfA. D. (2013). An economic model of P arkinson's disease: implications for slowing progression in the United States. Move. Disord.28, 319326. 10.1002/mds.25328

  • 99

    JoshiD.KhajuriaA.JoshiP. (2017). An automatic non-invasive method for Parkinson's disease classification. Comput. Methods Programs Biomed.145, 135145. 10.1016/j.cmpb.2017.04.007

  • 100

    JuniorS. B.CostaV. G. T.ChenS.GuidoR. C. (2018). U-healthcare system for pre-diagnosis of Parkinson's disease from voice signal, in 2018 IEEE International Symposium on Multimedia (ISM) (Taichung), 271274.

  • 101

    KamagataK.ZaleskyA.HatanoT.Di BiaseM. A.El SamadO.SaikiS.et al. (2017). Connectome analysis with diffusion MRI in idiopathic Parkinson's disease: evaluation using multi-shell, multi-tissue, constrained spherical deconvolution. NeuroImage. Clin.17, 518529. 10.1016/j.nicl.2017.11.007

  • 102

    Karapinar SenturkZ. (2020). Early diagnosis of Parkinson's disease using machine learning algorithms. Med. Hypoth.138:109603. 10.1016/j.mehy.2020.109603

  • 103

    KazeminejadA.GolbabaeiS.Soltanian-ZadehH. (2017). Graph theoretical metrics and machine learning for diagnosis of Parkinson's disease using rs-fMRI, in 2017 Artificial Intelligence and Signal Processing Conference (AISP) (Shiraz), 134139. 10.1109/AISP.2017.8324124

  • 104

    KhanM. M.MendesA.ChalupS. K. (2018). Evolutionary Wavelet Neural Network ensembles for breast cancer and Parkinson's disease prediction. PLoS ONE13:e0192192. 10.1371/journal.pone.0192192

  • 105

    KhataminoP.IC.ÖzyilmazL. (2018). A deep learning-CNN based system for medical diagnosis: an application on Parkinson's disease handwriting drawings, in 2018 6th International Conference on Control Engineering & Information Technology (CEIT) (Istanbul), 16. 10.1109/CEIT.2018.8751879

  • 106

    KhouryN.AttalF.AmiratY.OukhellouL.MohammedS. (2019). Data-driven based approach to aid Parkinson's disease diagnosis. Sensors19:242. 10.3390/s19020242

  • 107

    KiryuS.YasakaK.AkaiH.NakataY.SugomoriY.HaraS.et al. (2019). Deep learning to differentiate parkinsonian disorders separately using single midsagittal MR imaging: a proof of concept study. Eur. Radiol.29, 68916899. 10.1007/s00330-019-06327-0

  • 108

    KleinY.DjaldettiR.KellerY.BacheletI. (2017). Motor dysfunction and touch-slang in user interface data. Scientific reports7:4702. 10.1038/s41598-017-04893-1

  • 109

    KlomsaeA.AuephanwiriyakulS.Theera-UmponN. (2018). String grammar unsupervised possibilistic fuzzy C-medians for gait pattern classification in patients with neurodegenerative diseases. Comput. Intell. Neurosci.2018:1869565. 10.1155/2018/1869565

  • 110

    KoçerA.OktayA. B. (2016). Nintendo Wii assessment of Hoehn and Yahr score with Parkinson's disease tremor. Technol. Health Care24, 185191. 10.3233/THC-151124

  • 111

    KostikisN.Hristu-VarsakelisD.ArnaoutoglouM.KotsavasiloglouC. (2015). A smartphone-based tool for assessing Parkinsonian hand tremor. IEEE J. Biomed. Health Inf.19, 18351842. 10.1109/JBHI.2015.2471093

  • 112

    KowalS. L.DallT. M.ChakrabartiR.StormM. V.JainA. (2013). The current and projected economic burden of Parkinson's disease in the United States. Move. Disord.28, 311318. 10.1002/mds.25292

  • 113

    KraipeerapunP.AmornsamankulS. (2015). Using stacked generalization and complementary neural networks to predict Parkinson's disease, in 2015 11th International Conference on Natural Computation (ICNC) (Zhangjiajie), 12901294. 10.1109/ICNC.2015.7378178

  • 114

    KuglerP.JaremenkoC.SchlachetzkiJ.WinklerJ.KluckenJ.EskofierB. (2013). Automatic recognition of Parkinson's disease using surface electromyography during standardized gait tests. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc.2013, 57815784. 10.1109/EMBC.2013.6610865

  • 115

    KuhnerA.SchubertT.CenciariniM.WiesmeierI. K.CoenenV. A.BurgardW.et al. (2017). Correlations between motor symptoms across different motor tasks, quantified via random forest feature classification in Parkinson's disease. Front. Neurol.8:607. 10.3389/fneur.2017.00607

  • 116

    KuresanH.SamiappanD.MasundaS. (2019). Fusion of WPT and MFCC feature extraction in Parkinson's disease diagnosis. Technol. Health Care27, 363372. 10.3233/THC-181306

  • 117

    KwonD.-Y.KwonY.KimJ.-W. (2018). Quantitative analysis of finger and forearm movements in patients with off state early stage Parkinson's disease and scans without evidence of dopaminergic deficit (SWEDD). Parkinsonism Related Disord.57, 3338. 10.1016/j.parkreldis.2018.07.012

  • 118

    LacyS. E.SmithS. L.LonesM. A. (2018). Using echo state networks for classification: A case study in Parkinson's disease diagnosis. Artif. Intell. Med.86, 5359. 10.1016/j.artmed.2018.02.002

  • 119

    LeeM. J.KimS. L.LyooC. H.LeeM. S. (2014). Kinematic analysis in patients with Parkinson's disease and SWEDD. J. Parkinsons Dis.4, 421430. 10.3233/JPD-130233

  • 120

    LeiH.HuangZ.ZhouF.ElazabA.TanE.-L.LiH.et al. (2019). Parkinson's disease diagnosis via joint learning from multiple modalities and relations. IEEE J. Biomed. Health Inf.23, 14371449. 10.1109/JBHI.2018.2868420

  • 121

    LewittP. A.LiJ.LuM.BeachT. G.AdlerC. H.GuoL.et al. (2013). 3-hydroxykynurenine and other Parkinson's disease biomarkers discovered by metabolomic analysis. Move. Disord.28, 16531660. 10.1002/mds.25555

  • 122

    LiQ.ChenH.HuangH.ZhaoX.CaiZ.TongC.et al. (2017). An enhanced grey wolf optimization based feature selection wrapped kernel extreme learning machine for medical diagnosis. Comput. Math. Methods Med.2017:9512741. 10.1155/2017/9512741

  • 123

    LiS.LeiH.ZhouF.GardeziJ.LeiB. (2019). Longitudinal and Multi-modal Data Learning for Parkinson's Disease Diagnosis via Stacked Sparse Auto-encoder, in 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019) (Venice), 384387. 10.1109/ISBI.2019.8759385

  • 124

    LiuH.DuG.ZhangL.LewisM. M.WangX.YaoT.et al. (2016). Folded concave penalized learning in identifying multimodal MRI marker for Parkinson's disease. J. Neurosci. Methods268, 16. 10.1016/j.jneumeth.2016.04.016

  • 125

    LiuL.WangQ.AdeliE.ZhangL.ZhangH.ShenD. (2016). Feature selection based on iterative canonical correlation analysis for automatic diagnosis of Parkinson's disease. Med. Image Comput. Computer Assist. Interv.9901, 18. 10.1007/978-3-319-46723-8_1

  • 126

    MaC.OuyangJ.ChenH.-L.ZhaoX.-H. (2014). An efficient diagnosis system for Parkinson's disease using kernel-based extreme learning machine with subtractive clustering features weighting approach. Computat. Math. Methods Med.2014:985789. 10.1155/2014/985789

  • 127

    MaH.TanT.ZhouH.GaoT. (2016). Support Vector Machine-recursive feature elimination for the diagnosis of Parkinson disease based on speech analysis, in 2016 Seventh International Conference on Intelligent Control and Information Processing (ICICIP) (Siem Reap), 3440. 10.1109/ICICIP.2016.7885912

  • 128

    MaassF.MichalkeB.LehaA.BoergerM.ZerrI.KochJ.-C.et al. (2018). Elemental fingerprint as a cerebrospinal fluid biomarker for the diagnosis of Parkinson's disease. J. Neurochemistry145, 342351. 10.1111/jnc.14316

  • 129

    MaassF.MichalkeB.WillkommenD.LehaA.SchulteC.TöngesL.et al. (2020). Elemental fingerprint: reassessment of a cerebrospinal fluid biomarker for Parkinson's disease. Neurobiol. Dis.134:104677. 10.1016/j.nbd.2019.104677

  • 130

    MabroukR.ChikhaouiB.BentabetL. (2019). Machine learning based classification using clinical and DaTSCAN SPECT imaging features: a study on Parkinson's disease and SWEDD. IEEE Trans. Rad. Plasma Med. Sci.3, 170177. 10.1109/TRPMS.2018.2877754

  • 131

    MandalI.SairamN. (2013). Accurate telemonitoring of Parkinson's disease diagnosis using robust inference system. Int. J. Med. Informatics82, 359377. 10.1016/j.ijmedinf.2012.10.006

  • 132

    MararS.SwainD.HiwarkarV.MotwaniN.AwariA. (2018). Predicting the occurrence of Parkinson's disease using various classification models, in 2018 International Conference on Advanced Computation and Telecommunication (ICACAT) (Bhopal), 15. 10.1109/ICACAT.2018.8933579

  • 133

    MarekK.JenningsD.LaschS.SiderowfA.TannerC.SimuniT.et al. (2011). The parkinson progression marker initiative (PPMI). Progress Neurobiol.95, 629635. 10.1016/j.pneurobio.2011.09.005

  • 134

    MartínezM.VillagraF.CastelloteJ. M.PastorM. A. (2018). Kinematic and kinetic patterns related to free-walking in Parkinson's disease. Sensors18:4224. 10.3390/s18124224

  • 135

    Martinez-MurciaF. J.GórrizJ. M.RamírezJ.OrtizA. (2018). Convolutional neural networks for neuroimaging in Parkinson's disease: is preprocessing needed?Int. J. Neural Syst.28:1850035. 10.1142/S0129065718500351

  • 136

    MemediM.SadikovA.GroznikV.ŽabkarJ.MožinaM.BergquistF.et al. (2015). Automatic spiral analysis for objective assessment of motor symptoms in Parkinson's disease. Sensors15, 2372723744. 10.3390/s150923727

  • 137

    MittraY.RustagiV. (2018). Classification of subjects with Parkinson's disease using gait data analysis, in 2018 International Conference on Automation and Computational Engineering (ICACE) (Greater Noida), 8489. 10.1109/ICACE.2018.8687022

  • 138

    MoharkanZ. A.GargH.ChodhuryT.KumarP. (2017). A classification based Parkinson detection system, in 2017 International Conference On Smart Technologies For Smart Nation (SmartTechCon) (Bengaluru), 15091513. 10.1109/SmartTechCon.2017.8358616

  • 139

    MoherD.LiberatiA.TetzlaffJ.AltmanD. G. (2009). Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann. Inter. Med.151, 264269. 10.7326/0003-4819-151-4-200908180-00135

  • 140

    MontañaD.Campos-RocaY.PérezC. J. (2018). A Diadochokinesis-based expert system considering articulatory features of plosive consonants for early detection of Parkinson's disease. Comp. Methods Programs Biomed.154, 8997. 10.1016/j.cmpb.2017.11.010

  • 141

    MorisiR.MannersD. N.GneccoG.LanconelliN.TestaC.EvangelistiS.et al. (2018). Multi-class parkinsonian disorders classification with quantitative MR markers and graph-based features using support vector machines. Parkinsonism Relat. Disord.47, 6470. 10.1016/j.parkreldis.2017.11.343

  • 142

    MuchaJ.MekyskaJ.Faundez-ZanuyM.Lopez-De-IpinaK.ZvoncakV.GalazZ.et al. (2018). Advanced Parkinson's disease dysgraphia analysis based on fractional derivatives of online handwriting, in 2018 10th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT) (Moscow), 16. 10.1109/ICUMT.2018.8631265

  • 143

    NicastroN.WegrzykJ.PretiM. G.FleuryV.Van de VilleD.GaribottoV.et al. (2019). Classification of degenerative parkinsonism subtypes by support-vector-machine analysis and striatal (123)I-FP-CIT indices. J. Neurol.266, 17711781. 10.1007/s00415-019-09330-z

  • 144

    NõmmS.Bard,õšK.ToomelaA.MedijainenK.TabaP. (2018). Detailed analysis of the Luria's alternating seriestests for Parkinson's disease diagnostics, in 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA) (Orlando, FL), 13471352. 10.1109/ICMLA.2018.00219

  • 145

    NunesA.SilvaG.DuqueC.JanuárioC.SantanaI.AmbrósioA. F.et al. (2019). Retinal texture biomarkers may help to discriminate between Alzheimer's, Parkinson's, and healthy controls. PLoS One14:e0218826. 10.1371/journal.pone.0218826

  • 146

    NuvoliS.SpanuA.FravoliniM. L.BianconiF.CascianelliS.MadedduG.et al. (2019). [(123)I]Metaiodobenzylguanidine (MIBG) cardiac scintigraphy and automated classification techniques in Parkinsonian disorders. Mol. Imaging Biol.22, 703710. 10.1007/s11307-019-01406-6

  • 147

    OliveiraF. P. M.Castelo-BrancoM. (2015). Computer-aided diagnosis of Parkinson's disease based on [(123)I]FP-CIT SPECT binding potential images, using the voxels-as-features approach and support vector machines. J. Neural Eng.12:026008. 10.1088/1741-2560/12/2/026008

  • 148

    OliveiraF. P. M.FariaD. B.CostaD. C.Castelo-BrancoM.TavaresJ. M. R. S. (2018). Extraction, selection and comparison of features for an effective automated computer-aided diagnosis of Parkinson's disease based on [(123)I]FP-CIT SPECT images. Eur. J. Nucl. Med. Mol. Imaging45, 10521062. 10.1007/s00259-017-3918-7

  • 149

    OliveiraH. M.MachadoA. R. P.AndradeA. O. (2018a). On the use of t-distributed stochastic neighbor embedding for data visualization and classification of individuals with Parkinson's disease. Comput. Math. Methods Med.2018:8019232. 10.1155/2018/8019232

  • 150

    OparaJ.BrolaW.LeonardiM.BłaszczykB. (2012). Quality of life in Parkinsons disease. J. Med. Life5:375.

  • 151

    OungQ. W.HariharanM.LeeH. L.BasahS. N.SarilleeM.LeeC. H. (2015). Wearable multimodal sensors for evaluation of patients with Parkinson disease, in 2015 IEEE International Conference on Control System, Computing and Engineering (ICCSCE) (Penang), 269274. 10.1109/ICCSCE.2015.7482196

  • 152

    OzciftA. (2012). SVM feature selection based rotation forest ensemble classifiers to improve computer-aided diagnosis of Parkinson disease. J. Med. Syst.36, 21412147. 10.1007/s10916-011-9678-1

  • 153

    OzciftA.GultenA. (2011). Classifier ensemble construction with rotation forest to improve medical diagnosis performance of machine learning algorithms. Comp. Methods Progr. Biomed.104, 443451. 10.1016/j.cmpb.2011.03.018

  • 154

    PahujaG.NagabhushanT. N. (2016). A novel GA-ELM approach for Parkinson's disease detection using brain structural T1-weighted MRI data, in 2016 Second International Conference on Cognitive Computing and Information Processing (CCIP) (Mysuru), 16. 10.1109/CCIP.2016.7802848

  • 155

    PalumboB.FravoliniM. L.BurestaT.PompiliF.ForiniN.NigroP.et al. (2014). Diagnostic accuracy of Parkinson disease by support vector machine (SVM) analysis of 123I-FP-CIT brain SPECT data: implications of putaminal findings and age. Medicine93:e228. 10.1097/MD.0000000000000228

  • 156

    PapadopoulosA.KyritsisK.KlingelhoeferL.BostanjopoulouS.ChaudhuriK. R.DelopoulosA. (2019). Detecting Parkinsonian tremor from IMU data collected in-the-wild using deep multiple-instance learning. IEEE J. Biomed. Health Inform.24, 25592569. 10.1109/JBHI.2019.2961748

  • 157

    PapavasileiouI.ZhangW.WangX.BiJ.ZhangL.HanS. (2017). Classification of neurological gait disorders using multi-task feature learning, in 2017 IEEE/ACM International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE) (Philadelphia, PA), 195204. 10.1109/CHASE.2017.78

  • 158

    PekerM. (2016). A decision support system to improve medical diagnosis using a combination of k-medoids clustering based attribute weighting and SVM. J. Med. Syst.40:116. 10.1007/s10916-016-0477-6

  • 159

    PengB.WangS.ZhouZ.LiuY.TongB.ZhangT.et al. (2017). A multilevel-ROI-features-based machine learning method for detection of morphometric biomarkers in Parkinson's disease. Neurosci. Lett.651, 8894. 10.1016/j.neulet.2017.04.034

  • 160

    PengB.ZhouZ.GengC.TongB.ZhouZ.ZhangT.et al. (2016). Computer aided analysis of cognitive disorder in patients with Parkinsonism using machine learning method with multilevel ROI-based features, in 2016 9th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI) (Datong), 17921796. 10.1109/CISP-BMEI.2016.7853008

  • 161

    PereiraC. R.PereiraD. R.da SilvaF. A.HookC.WeberS. A.PereiraL. A.et al. (2015). A step towards the automated diagnosis of parkinson's disease: Analyzing handwriting movements, in 2015 IEEE 28th International Symposium on Computer-Based Medical Systems (Sao Carlos: IEEE), 171176. 10.1109/CBMS.2015.34

  • 162

    PereiraC. R.PereiraD. R.RosaG. H.AlbuquerqueV. H. C.WeberS. A. T.HookC.et al. (2018). Handwritten dynamics assessment through convolutional neural networks: An application to Parkinson's disease identification. Artif. Intell. Med.87, 6777. 10.1016/j.artmed.2018.04.001

  • 163

    PereiraC. R.PereiraD. R.SilvaF. A.MasieiroJ. P.WeberS. A. T.HookC.et al. (2016a). A new computer vision-based approach to aid the diagnosis of Parkinson's disease. Comp. Methods Progr. Biomed.136, 7988. 10.1016/j.cmpb.2016.08.005

  • 164

    PereiraC. R.PereiraD. R.WeberS. A.HookC.de AlbuquerqueV. H. C.PapaJ. P. (2019). A survey on computer-assisted Parkinson's disease diagnosis. Artif. Intell. Med.95, 4863. 10.1016/j.artmed.2018.08.007

  • 165

    PereiraC. R.WeberS. A. T.HookC.RosaG. H.PapaJ. P. (2016b). Deep learning-aided Parkinson's disease diagnosis from handwritten dynamics, in 2016 29th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI) (São Paulo), 340346. 10.1109/SIBGRAPI.2016.054

  • 166

    PhamH. N.DoT. T. T.ChanK. Y. J.SenG.HanA. Y. K.LimP.et al. (2019). Multimodal detection of Parkinson disease based on vocal and improved spiral test, in 2019 International Conference on System Science and Engineering (ICSSE) (Dong Hoi), 279284. 10.1109/ICSSE.2019.8823309

  • 167

    PhamT. D. (2018). Pattern analysis of computer keystroke time series in healthy control and early-stage Parkinson's disease subjects using fuzzy recurrence and scalable recurrence network features. J. Neurosci. Methods307, 194202. 10.1016/j.jneumeth.2018.05.019

  • 168

    PhamT. D.YanH. (2018). Tensor decomposition of gait dynamics in Parkinson's disease. IEEE Trans. Bio-Med. Eng.65, 18201827. 10.1109/TBME.2017.2779884

  • 169

    PostumaR. B.BergD.SternM.PoeweW.OlanowC. W.OertelW.et al. (2015). MDS clinical diagnostic criteria for Parkinson's disease. Move. Disord.30, 15911601. 10.1002/mds.26424

  • 170

    PrashanthR.Dutta RoyS. (2018). Early detection of Parkinson's disease through patient questionnaire and predictive modelling. In. J. Med. Inform.119, 7587. 10.1016/j.ijmedinf.2018.09.008

  • 171

    PrashanthR.Dutta RoyS.MandalP. K.GhoshS. (2016). High-accuracy detection of early Parkinson's disease through multimodal features and machine learning. Int. J. Med. Inform.90, 1321. 10.1016/j.ijmedinf.2016.03.001

  • 172

    PrashanthR.RoyS. D.MandalP. K.GhoshS. (2014). Parkinson's disease detection using olfactory loss and REM sleep disorder features, in Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2014, 57645767. 10.1109/EMBC.2014.6944937

  • 173

    PrashanthR.RoyS. D.MandalP. K.GhoshS. (2017). High-accuracy classification of Parkinson's disease through shape analysis and surface fitting in 123I-Ioflupane SPECT imaging. IEEE J. Biomed. Health Inform.21, 794802. 10.1109/JBHI.2016.2547901

  • 174

    PrinceJ.AndreottiF.VosM. D. (2019). Multi-source ensemble learning for the remote prediction of Parkinson's disease in the presence of source-wise missing data. IEEE Trans. Biomed. Eng.66, 14021411. 10.1109/TBME.2018.2873252

  • 175

    PrinceJ.de VosM. (2018). A deep learning framework for the remote detection of Parkinson's disease using smart-phone sensor data, in 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (Honolulu, HI), 31443147. 10.1109/EMBC.2018.8512972

  • 176

    RamdhaniR. A.KhojandiA.ShyloO.KopellB. H. (2018). Optimizing clinical assessments in Parkinson's disease through the use of wearable sensors and data driven modeling. Front. Comput. Neurosci.12:72. 10.3389/fncom.2018.00072

  • 177

    ReyesJ. F.MontealegreJ. S.CastanoY. J.UrcuquiC.NavarroA. (2019). LSTM and convolution networks exploration for Parkinson's diagnosis, in 2019 IEEE Colombian Conference on Communications and Computing (COLCOM) (Barranquilla), 14. 10.1109/ColComCon.2019.8809160

  • 178

    RibeiroL. C. F.AfonsoL. C. S.PapaJ. P. (2019). Bag of samplings for computer-assisted Parkinson's disease diagnosis based on recurrent neural networks. Comp. Biol. Med.115:103477. 10.1016/j.compbiomed.2019.103477

  • 179

    RicciM.LazzaroG. D.PisaniA.MercuriN. B.GianniniF.SaggioG. (2020). Assessment of motor impairments in early untreated parkinson's disease patients: the wearable electronics impact. IEEE J. Biomed. Health Inform.24, 120130. 10.1109/JBHI.2019.2903627

  • 180

    Rios-UrregoC. D.Vásquez-CorreaJ. C.Vargas-BonillaJ. F.NöthE.LoperaF.Orozco-ArroyaveJ. R. (2019). Analysis and evaluation of handwriting in patients with Parkinson's disease using kinematic, geometrical, and non-linear features. Comput. Methods Progr. Biomed.173, 4352. 10.1016/j.cmpb.2019.03.005

  • 181

    RoviniE.MaremmaniC.MoschettiA.EspositoD.CavalloF. (2018). Comparative motor pre-clinical assessment in parkinson's disease using supervised machine learning approaches. Annals Biomed. Eng.46, 20572068. 10.1007/s10439-018-2104-9

  • 182

    RoviniE.MoschettiA.FioriniL.EspositoD.MaremmaniC.CavalloF. (2019). Wearable sensors for prodromal motor assessment of parkinson's disease using supervised learning, in 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 43184321. 10.1109/EMBC.2019.8856804

  • 183

    RubbertC.MathysC.JockwitzC.HartmannC. J.EickhoffS. B.HoffstaedterF.et al. (2019). Machine-learning identifies Parkinson's disease patients based on resting-state between-network functional connectivity. Br. J. Radiol.92:20180886. 10.1259/bjr.20180886

  • 184

    SakarB. E.IsenkulM. E.SakarC. O.SertbasA.GurgenF.DelilS.et al. (2013). Collection and analysis of a Parkinson speech dataset with multiple types of sound recordings. IEEE J. Biomed. Health Inform.17, 828834. 10.1109/JBHI.2013.2245674

  • 185

    SalvatoreC.CerasaA.CastiglioniI.GallivanoneF.AugimeriA.LopezM.et al. (2014). Machine learning on brain MRI data for differential diagnosis of Parkinson's disease and progressive supranuclear palsy. J. Neurosci. Methods222, 230237. 10.1016/j.jneumeth.2013.11.016

  • 186

    SayaydehaO. N. A.MohammadM. F. (2019). Diagnosis of the Parkinson disease using enhanced fuzzy min-max neural network and OneR attribute evaluation method, in 2019 International Conference on Advanced Science and Engineering (ICOASE) (Zakho-Duhok), 6469. 10.1109/ICOASE.2019.8723870

  • 187

    SchererR. W.SaldanhaI. J. (2019). How should systematic reviewers handle conference abstracts? A view from the trenches. Syst. Rev.8:264. 10.1186/s13643-019-1188-0

  • 188

    SegoviaF.GórrizJ. M.RamírezJ.LevinJ.SchuberthM.BrendelM.et al. (2015). Analysis of 18F-DMFP PET data using multikernel classification in order to assist the diagnosis of Parkinsonism, in 2015 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC) (San Diego, CA), 14. 10.1109/NSSMIC.2015.7582227

  • 189

    SegoviaF.GórrizJ. M.RamírezJ.Martínez-MurciaF. J.Castillo-BarnesD. (2019). Assisted diagnosis of Parkinsonism based on the striatal morphology. Int. J. Neural Syst.29:1950011. 10.1142/S0129065719500114

  • 190

    ShahsavariM. K.RashidiH.BakhshH. R. (2016). Efficient classification of Parkinson's disease using extreme learning machine and hybrid particle swarm optimization, in 2016 4th International Conference on Control, Instrumentation, and Automation (ICCIA) (Qazvin), 148154. 10.1109/ICCIAutom.2016.7483152

  • 191

    ShamirR.KleinC.AmarD.VollstedtE.-J.BoninM.UsenovicM.et al. (2017). Analysis of blood-based gene expression in idiopathic Parkinson disease. Neurology89, 16761683. 10.1212/WNL.0000000000004516

  • 192

    SheibaniR.NikookarE.AlaviS. E. (2019). An ensemble method for diagnosis of Parkinson's disease based on voice measurements. J. Med. Signals Sens.9, 221226. 10.4103/jmss.JMSS_57_18

  • 193

    ShenT.JiangJ.LinW.GeJ.WuP.ZhouY.et al. (2019). Use of overlapping group LASSO sparse deep belief network to discriminate Parkinson's disease and normal control. Front. Neurosci.13:396. 10.3389/fnins.2019.00396

  • 194

    ShiJ.YanM.DongY.ZhengX.ZhangQ.AnH. (2018). Multiple kernel learning based classification of Parkinson's disease with multi-modal transcranial sonography, in 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (Honolulu, HI), 6164. 10.1109/EMBC.2018.8512194

  • 195

    ShindeS.PrasadS.SabooY.KaushickR.SainiJ.PalP. K.et al. (2019). Predictive markers for Parkinson's disease using deep neural nets on neuromelanin sensitive MRI. NeuroImage. Clin.22:101748. 10.1016/j.nicl.2019.101748

  • 196

    SinghG.SamavedhamL. (2015). Unsupervised learning based feature extraction for differential diagnosis of neurodegenerative diseases: a case study on early-stage diagnosis of Parkinson disease. J. Neurosci. Methods256, 3040. 10.1016/j.jneumeth.2015.08.011

  • 197

    SinghG.SamavedhamL.LimE. C.-H.Alzheimer's Disease NeuroimagingI.Parkinson Progression MarkerI. (2018). Determination of imaging biomarkers to decipher disease trajectories and differential diagnosis of neurodegenerative diseases (DIsease TreND). J. Neurosci. Methods305, 105116. 10.1016/j.jneumeth.2018.05.009

  • 198

    StoesselD.SchulteC.Teixeira Dos SantosM. C.SchellerD.Rebollo-MesaI.DeuschleC.et al. (2018). Promising metabolite profiles in the plasma and CSF of early clinical Parkinson's disease. Front. Aging Neurosci.10:51. 10.3389/fnagi.2018.00051

  • 199

    SurangsriratD.ThanawattanoC.PongthornseriR.DumninS.AnanC.BhidayasiriR. (2016). Support vector machine classification of Parkinson's disease and essential tremor subjects based on temporal fluctuation. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc.2016, 63896392. 10.1109/EMBC.2016.7592190

  • 200

    SztahóD.TulicsM. G.VicsiK.ValálikI. (2017). Automatic estimation of severity of Parkinson's disease based on speech rhythm related features, in 2017 8th IEEE International Conference on Cognitive Infocommunications (CogInfoCom) (Debrecen), 000011000016. 10.1109/CogInfoCom.2017.8268208

  • 201

    SztahóD.ValálikI.VicsiK. (2019). Parkinson's disease severity estimation on Hungarian speech using various speech tasks, in 2019 International Conference on Speech Technology and Human-Computer Dialogue (SpeD) (Timisoara), 16. 10.1109/SPED.2019.8906277

  • 202

    TagareH. D.DeLorenzoC.ChelikaniS.SapersteinL.FulbrightR. K. (2017). Voxel-based logistic analysis of PPMI control and Parkinson's disease DaTscans. NeuroImage152, 299311. 10.1016/j.neuroimage.2017.02.067

  • 203

    TahavoriF.StackE.AgarwalV.BurnettM.AshburnA.HoseinitabatabaeiS. A.et al. (2017). Physical activity recognition of elderly people and people with parkinson's (PwP) during standard mobility tests using wearable sensors, in 2017 International Smart Cities Conference (ISC2) (Wuxi), 14. 10.1109/ISC2.2017.8090858

  • 204

    TalebC.KhachabM.MokbelC.Likforman-SulemL. (2019). Visual representation of online handwriting time series for deep learning Parkinson's disease detection, in 2019 International Conference on Document Analysis and Recognition Workshops (ICDARW) (Sydney, NSW), 2530. 10.1109/ICDARW.2019.50111

  • 205

    TangY.MengL.WanC. M.LiuZ. H.LiaoW. H.YanX. X.et al. (2017). Identifying the presence of Parkinson's disease using low-frequency fluctuations in BOLD signals. Neurosci. Lett.645, 16. 10.1016/j.neulet.2017.02.056

  • 206

    TaylorJ. C.FennerJ. W. (2017). Comparison of machine learning and semi-quantification algorithms for (I123)FP-CIT classification: the beginning of the end for semi-quantification?EJNMMI Phys.4:29. 10.1186/s40658-017-0196-1

  • 207

    TienI.GlaserS. D.AminoffM. J. (2010). Characterization of gait abnormalities in Parkinson's disease using a wireless inertial sensor system, in Annu. Int. Conf. IEEE Eng. Med. Biol. Soc.2010, 33533356. 10.1109/IEMBS.2010.5627904

  • 208

    TracyJ. M.ÖzkancaY.AtkinsD. C.Hosseini GhomiR. (2019). Investigating voice as a biomarker: deep phenotyping methods for early detection of Parkinson's disease. J. Biomed. Inform.104:103362. 10.1016/j.jbi.2019.103362

  • 209

    TremblayC.MartelP. D.FrasnelliJ. (2017). Trigeminal system in Parkinson's disease: a potential avenue to detect Parkinson-specific olfactory dysfunction. Parkinsonism Relat. Dis.44, 8590. 10.1016/j.parkreldis.2017.09.010

  • 210

    TrezziJ.-P.GalozziS.JaegerC.BarkovitsK.BrockmannK.MaetzlerW.et al. (2017). Distinct metabolomic signature in cerebrospinal fluid in early parkinson's disease. Mov. Disord.32, 14011408. 10.1002/mds.27132

  • 211

    TsanasA.LittleM. A.McSharryP. E.SpielmanJ.RamigL. O. (2012). Novel speech signal processing algorithms for high-accuracy classification of Parkinson's disease. IEEE Trans. Bio. Med. Eng.59, 12641271. 10.1109/TBME.2012.2183367

  • 212

    TsengP.-H.CameronI. G. M.PariG.ReynoldsJ. N.MunozD. P.IttiL. (2013). High-throughput classification of clinical populations from natural viewing eye movements. J. Neurol.260, 275284. 10.1007/s00415-012-6631-2

  • 213

    TsudaM.AsanoS.KatoY.MuraiK.MiyazakiM. (2019). Differential diagnosis of multiple system atrophy with predominant parkinsonism and Parkinson's disease using neural networks. J. Neurol. Sci.401, 1926. 10.1016/j.jns.2019.04.014

  • 214

    TysnesO.-B.StorsteinA. (2017). Epidemiology of Parkinson's disease. J. Neural Trans.124, 901905. 10.1007/s00702-017-1686-y

  • 215

    UrcuquiC.CastañoY.DelgadoJ.NavarroA.DiazJ.MuñozB.et al. (2018). Exploring Machine Learning to Analyze Parkinson's Disease Patients, in 2018 14th International Conference on Semantics, Knowledge and Grids (SKG) (Guangzhou), 160166. 10.1109/SKG.2018.00029

  • 216

    Vabalas, A., Gowen, E., Poliakoff, E., and Casson, A. J. (2019). Machine learning algorithm validation with a limited sample size. PLoS ONE 14:e0224365. doi: 10.1371/journal.pone.0224365

  • 217

    Vaiciukynas, E., Verikas, A., Gelzinis, A., and Bacauskiene, M. (2017). Detecting Parkinson's disease from sustained phonation and speech signals. PLoS ONE 12:e0185613. doi: 10.1371/journal.pone.0185613

  • 218

    Vanegas, M. I., Ghilardi, M. F., Kelly, S. P., and Blangero, A. (2018). Machine learning for EEG-based biomarkers in Parkinson's disease, in 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) (Madrid), 2661–2665. doi: 10.1109/BIBM.2018.8621498

  • 219

    Váradi, C., Nehéz, K., Hornyák, O., Viskolcz, B., and Bones, J. (2019). Serum N-glycosylation in Parkinson's disease: a novel approach for potential alterations. Molecules 24:2220. doi: 10.3390/molecules24122220

  • 220

    Vásquez-Correa, J. C., Arias-Vergara, T., Orozco-Arroyave, J. R., Eskofier, B., Klucken, J., and Nöth, E. (2019). Multimodal assessment of Parkinson's disease: a deep learning approach. IEEE J. Biomed. Health Inform. 23, 1618–1630. doi: 10.1109/JBHI.2018.2866873

  • 221

    Vlachostergiou, A., Tagaris, A., Stafylopatis, A., and Kollias, S. (2018). Multi-task learning for predicting Parkinson's disease based on medical imaging information, in 2018 25th IEEE International Conference on Image Processing (ICIP) (Athens), 2052–2056. doi: 10.1109/ICIP.2018.8451398

  • 222

    Wahid, F., Begg, R. K., Hass, C. J., Halgamuge, S., and Ackland, D. C. (2015). Classification of Parkinson's disease gait using spatial-temporal gait features. IEEE J. Biomed. Health Inform. 19, 1794–1802. doi: 10.1109/JBHI.2015.2450232

  • 223

    Wang, Z., Zhu, X., Adeli, E., Zhu, Y., Nie, F., Munsell, B., et al. (2017). Multi-modal classification of neurodegenerative disease by progressive graph-based transductive learning. Med. Image Anal. 39, 218–230. doi: 10.1016/j.media.2017.05.003

  • 224

    Wen, J., Thibeau-Sutre, E., Diaz-Melo, M., Samper-González, J., Routier, A., Bottani, S., et al. (2020). Convolutional neural networks for classification of Alzheimer's disease: overview and reproducible evaluation. Med. Image Anal. 63:101694. doi: 10.1016/j.media.2020.101694

  • 225

    Wenzel, M., Milletari, F., Krüger, J., Lange, C., Schenk, M., Apostolova, I., et al. (2019). Automatic classification of dopamine transporter SPECT: deep convolutional neural networks can be trained to be robust with respect to variable image characteristics. Eur. J. Nucl. Med. Mol. Imaging 46, 2800–2811. doi: 10.1007/s00259-019-04502-5

  • 226

    Wodzinski, M., Skalski, A., Hemmerling, D., Orozco-Arroyave, J. R., and Nöth, E. (2019). Deep learning approach to Parkinson's disease detection using voice recordings and convolutional neural network dedicated to image classification, in 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (Berlin), 717–720. doi: 10.1109/EMBC.2019.8856972

  • 227

    Wu, Y., Chen, P., Yao, Y., Ye, X., Xiao, Y., Liao, L., et al. (2017). Dysphonic voice pattern analysis of patients in Parkinson's disease using minimum interclass probability risk feature selection and bagging ensemble learning methods. Comput. Math. Methods Med. 2017:4201984. doi: 10.1155/2017/4201984

  • 228

    Wu, Y., Jiang, J.-H., Chen, L., Lu, J.-Y., Ge, J.-J., Liu, F.-T., et al. (2019). Use of radiomic features and support vector machine to distinguish Parkinson's disease cases from normal controls. Ann. Transl. Med. 7:773. doi: 10.21037/atm.2019.11.26

  • 229

    Xia, Y., Yao, Z., Ye, Q., and Cheng, N. (2020). A dual-modal attention-enhanced deep learning network for quantification of Parkinson's disease characteristics. IEEE Trans. Neural Syst. Rehabil. Eng. 28, 42–51. doi: 10.1109/TNSRE.2019.2946194

  • 230

    Yadav, G., Kumar, Y., and Sahoo, G. (2011). Predication of Parkinson's disease using data mining methods: a comparative analysis of tree, statistical, and support vector machine classifiers. Indian J. Med. Sci. 65, 231–242. doi: 10.4103/0019-5359.107023

  • 231

    Yagis, E., Herrera, A. G. S. D., and Citi, L. (2019). Generalization performance of deep learning models in neurodegenerative disease classification, in 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) (San Diego, CA), 1692–1698. doi: 10.1109/BIBM47256.2019.8983088

  • 232

    Yaman, O., Ertam, F., and Tuncer, T. (2020). Automated Parkinson's disease recognition based on statistical pooling method using acoustic features. Med. Hypotheses 135:109483. doi: 10.1016/j.mehy.2019.109483

  • 233

    Yang, J.-X., and Chen, L. (2017). Economic burden analysis of Parkinson's disease patients in China. Parkinson's Dis. 2017:8762939. doi: 10.1155/2017/8762939

  • 234

    Yang, M., Zheng, H., Wang, H., and McClean, S. (2009). Feature selection and construction for the discrimination of neurodegenerative diseases based on gait analysis, in 2009 3rd International Conference on Pervasive Computing Technologies for Healthcare (London), 1–7. doi: 10.4108/ICST.PERVASIVEHEALTH2009.6053

  • 235

    Yang, S., Zheng, F., Luo, X., Cai, S., Wu, Y., Liu, K., et al. (2014). Effective dysphonia detection using feature dimension reduction and kernel density estimation for patients with Parkinson's disease. PLoS ONE 9:e88825. doi: 10.1371/journal.pone.0088825

  • 236

    Ye, Q., Xia, Y., and Yao, Z. (2018). Classification of gait patterns in patients with neurodegenerative disease using adaptive neuro-fuzzy inference system. Comput. Math. Methods Med. 2018:9831252. doi: 10.1155/2018/9831252

  • 237

    Zeng, L.-L., Xie, L., Shen, H., Luo, Z., Fang, P., Hou, Y., et al. (2017). Differentiating patients with Parkinson's disease from normal controls using gray matter in the cerebellum. Cerebellum 16, 151–157. doi: 10.1007/s12311-016-0781-1

  • 238

    Zesiewicz, T. A., Sullivan, K. L., and Hauser, R. A. (2006). Nonmotor symptoms of Parkinson's disease. Expert Rev. Neurother. 6, 1811–1822. doi: 10.1586/14737175.6.12.1811

  • 239

    Zhang, X., He, L., Chen, K., Luo, Y., Zhou, J., and Wang, F. (2018). Multi-view graph convolutional network and its applications on neuroimage analysis for Parkinson's disease. Annu. Symp. Proc. 2018, 1147–1156.

  • 240

    Zhao, Y., Wu, P., Wang, J., Li, H., Navab, N., Yakushev, I., et al. (2019). A 3D deep residual convolutional neural network for differential diagnosis of Parkinsonian syndromes on 18F-FDG PET images, in 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 3531–3534. doi: 10.1109/EMBC.2019.8856747

Keywords

Parkinson's disease, machine learning, deep learning, diagnosis, differential diagnosis

Citation

Mei J, Desrosiers C and Frasnelli J (2021) Machine Learning for the Diagnosis of Parkinson's Disease: A Review of Literature. Front. Aging Neurosci. 13:633752. doi: 10.3389/fnagi.2021.633752

Received: 26 November 2020

Accepted: 22 March 2021

Published: 06 May 2021

Volume: 13 - 2021

Edited by: Christian Gaser, Friedrich Schiller University Jena, Germany

Reviewed by: Erika Rovini, Sant'Anna School of Advanced Studies, Italy; Silke Weber, Sao Paulo State University, Brazil

*Correspondence: Jie Mei

Disclaimer

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
