Performance of Machine Learning for Tissue Outcome Prediction in Acute Ischemic Stroke: A Systematic Review and Meta-Analysis

Machine learning (ML) has been proposed for lesion segmentation in acute ischemic stroke (AIS). This study aimed to provide a systematic review and meta-analysis of the overall performance of current ML algorithms for final infarct prediction from baseline imaging. We performed a comprehensive literature search for eligible studies developing ML models for core infarcted tissue estimation on admission CT or MRI in AIS patients. Eleven studies meeting the inclusion criteria were included in the quantitative analysis. Study characteristics, model methodology, and predictive performance of the included studies were extracted. A meta-analysis was conducted on the Dice similarity coefficient (DSC) score using a random-effects model to assess the overall predictive performance. Study heterogeneity was assessed by the Cochran's Q and Higgins' I² tests. The pooled DSC score of the included ML models was 0.50 (95% CI 0.39–0.61), with high heterogeneity observed across studies (I² 96.5%, p < 0.001). Sensitivity analyses using the one-study-removed method showed that the adjusted overall DSC score ranged from 0.47 to 0.52. Subgroup analyses indicated that the DL-based models outperformed the conventional ML classifiers, with the best performance observed in DL algorithms combined with CT data. Despite the presence of heterogeneity, current ML-based approaches for final infarct prediction showed moderate but promising performance. Before such models can be well integrated into the clinical stroke workflow, future investigations should train ML models on large-scale, multi-vendor data, validate them on external cohorts, and adopt formalized reporting standards to improve model accuracy and robustness.


INTRODUCTION
Stroke is a life-threatening disease, accounting for approximately 10% of all deaths and carrying an estimated lifetime risk of 25% worldwide (1). Recanalization of the occluded vessel is the only effective treatment to restore blood flow and prevent neurological deterioration. Early studies established time windows of 4.5 h for intravenous thrombolysis (IVT) and 6 h for endovascular thrombectomy (EVT) from symptom onset (2)(3)(4). Recent advances in endovascular approaches have broadened the boundaries of eligible patient selection and expanded the time window to 24 h by using advanced neuroimaging (5,6).
Currently, acute stroke imaging allows estimation of the ischemic core and penumbra by predefined imaging thresholds. An apparent diffusion coefficient (ADC) threshold between 600 and 625 × 10⁻⁶ mm²/s remains a robust parameter for infarct core estimation on MRI, and a decreased relative cerebral blood flow (rCBF) threshold of <30% has been extensively used to quantify final infarct size in CT-based methods. The mismatch between the infarct core and the perfusion deficit identified by a time to maximum of the residue function (Tmax) delay of >6 s delineates the tissue at risk (7,8). Despite the ease of applying single-valued thresholds to predict ischemic tissue outcome, conventional thresholds derived from approximately linear statistical models are likely to miss the heterogeneity of stroke lesion development from baseline imaging. Moreover, thresholds based on a single imaging modality disregard the complementary information of multimodal imaging, limiting their reliability in delineating infarct lesions.
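As a schematic illustration of the voxel-wise application of these single-valued thresholds (the array names and values are hypothetical, not taken from any cited study):

```python
# Minimal sketch of threshold-based core/penumbra estimation on perfusion maps.
# Assumptions: rcbf_ratio is the voxel-wise CBF relative to the contralateral
# mean, and tmax is the Tmax map in seconds; both are hypothetical inputs.
import numpy as np

def threshold_maps(rcbf_ratio: np.ndarray, tmax: np.ndarray):
    core = rcbf_ratio < 0.30          # CT perfusion core: rCBF < 30%
    hypoperfused = tmax > 6.0         # perfusion deficit: Tmax > 6 s
    penumbra = hypoperfused & ~core   # mismatch = salvageable tissue at risk
    return core, penumbra
```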
Recent advances in machine learning (ML) offer promising applications in medical imaging by learning informative features and patterns from structured input data. These advances have also driven the emergence of the deep learning (DL) subfield, which has shown impressive results in medical image processing without prior selection of relevant features (9,10). Given the suboptimal performance of conventional thresholding methods, initial studies applied ML- and DL-based approaches and showed clear advantages for more precise prediction of the final infarct lesion from baseline imaging (11)(12)(13)(14)(15)(16)(17). These promising results inspired investigators to propose novel model methodologies by improving algorithm architectures, combining multi-modality input parameters, and applying the models in different clinical scenarios.
Although studies on this topic are growing in number, few have reviewed the general applications of state-of-the-art ML-based approaches to ischemic core estimation. Therefore, we conducted this systematic review and meta-analysis to provide an overview of the potential advantages and remaining challenges of ML-based model methodologies for final infarct lesion prediction from acute stroke imaging, to evaluate the overall performance of existing approaches, and to provide suggestions for future research that may aid acute ischemic stroke (AIS) management.

METHODS
This systematic review and meta-analysis was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (18).

Literature Search and Study Selection
We comprehensively searched the PubMed, EMBASE, Cochrane Library, Science Direct, Springer, and IEEE Xplore Digital Library databases from inception to May 31, 2022, with the following keywords: "machine learning", "deep learning", "neural network", "stroke", "cerebrovascular event", "cerebral infarct", "computed tomography", "magnetic resonance imaging". Studies that developed ML algorithms for predicting the final infarct lesion from baseline acute stroke imaging were included. Eligible studies satisfying the following inclusion criteria were included in the meta-analysis: (1) the study cohort consisted of AIS patients; (2) the study described ML algorithms for predicting ischemic core tissue from baseline CT or MR imaging; (3) the reference standard (i.e., ground truth) was the true infarct lesion segmented on follow-up imaging; (4) prediction performance was reported as a Dice similarity coefficient (DSC) score; (5) the imaging sets for algorithm training and testing were clearly defined; (6) the article was published with full text; and (7) the article was in English. Review articles, conference abstracts, letters, case reports or studies including fewer than 10 patients, and non-human research were excluded. If studies came from the same cohort or compared different algorithms on the same dataset, we retained only the article with the largest sample size or the best-performing algorithm in the quantitative synthesis, so that sample duplication or overlap would not affect the overall pooled effect size.
One investigator (XW) read the titles and abstracts of all records. After preliminary screening, potentially eligible articles were shortlisted. Two investigators (XW and YF) independently read the full-text articles to assess eligibility, with disagreements resolved by discussion and consensus.

Data Extraction and Quality Assessment
Two investigators (XW and YF) independently extracted data from the included studies using a predefined data extraction sheet. Disagreements were re-evaluated and resolved by a third investigator (NZ). The extracted data included: (1) first author and year; (2) source of the dataset; (3) sample size, including the total patient number and the numbers in the training, validation, and test sets; (4) model methodology, including algorithm type, input parameters, and reference standard; (5) predictive performance, including the primary performance metric of the DSC score and the secondary metrics of area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, precision, recall, and volume error between the prediction result and the reference standard.
To assess the quality of ML-based diagnostic accuracy studies, Collins and Moons initially introduced a modified version of the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis statement specific to machine learning (TRIPOD-ML) (19). However, the TRIPOD-ML guideline is complex and covers a broad range of ML applications. The Radiology editorial board has developed a list of nine key considerations to improve the soundness and applicability of artificial intelligence research in diagnostic imaging (20). We adapted these items as the quality assessment criteria in our study. Two investigators (XW and YF) independently evaluated the risks of bias using this questionnaire, with disagreements resolved by discussion and consensus.

Statistical Assessment
We estimated the overall performance of the ML models using the DSC score, a commonly used volume-based performance metric for target segmentation. The DSC score represents the overlap between the predicted segmentation and the reference standard, ranging from 0 (no overlap) to 1 (complete overlap). For effect size calculation in the meta-analysis, the mean DSC score with standard deviation (SD) or 95% confidence interval (CI) was required. When a study reported the DSC score as median and interquartile range (IQR), the mean and SD were estimated using the quantile estimation method described by Wan et al. (21). With q1 denoting the first quartile, m the median, q3 the third quartile, and n the sample size, the sample mean (X̄) and SD (S) were estimated as X̄ ≈ (q1 + m + q3)/3 and S ≈ (q3 − q1)/(2Φ⁻¹[(0.75n − 0.125)/(n + 0.25)]), where Φ⁻¹ is the inverse standard normal cumulative distribution function (for large n, S ≈ (q3 − q1)/1.35).
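For illustration, a minimal Python sketch (not the software used in this study; function names are ours) of the two quantities just described, the DSC between binary masks and the Wan et al. median/IQR conversion:

```python
# Sketch of the DSC metric and the Wan et al. (2014) median/IQR-to-mean/SD
# conversion used to prepare effect sizes for the meta-analysis.
import numpy as np
from scipy.stats import norm

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """DSC = 2|A∩B| / (|A| + |B|) for binary masks; 1 = complete overlap."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

def wan_mean_sd(q1: float, m: float, q3: float, n: int):
    """Estimate sample mean and SD from first quartile, median, third quartile."""
    mean = (q1 + m + q3) / 3.0
    sd = (q3 - q1) / (2.0 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
    return mean, sd
```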
A random-effects meta-analysis was performed, and forest plots were generated to depict the effect sizes of individual studies and the overall performance. Heterogeneity across the included studies was assessed using the Cochran's Q and Higgins' I² tests, where a p-value < 0.05 in the Cochran's Q test and a Higgins' I² value > 75% indicated significant heterogeneity (22). Because of the high heterogeneity observed in this study, a sensitivity analysis using the one-study-removed method was conducted to assess the influence of individual studies on the pooled result. Subgroup analyses were performed according to algorithm type (conventional ML classifiers vs. deep neural networks) and imaging modality of the model input (CT vs. MR data). Publication bias was examined with a funnel plot and Egger's bias test (23). Statistical analyses were performed using the STATA 17.0 statistical package (StataCorp, Stata Statistical Software). A two-sided p-value < 0.05 was considered statistically significant.
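As a hedged illustration of the pooling procedure (the actual analysis was run in STATA 17.0; this Python sketch assumes the standard DerSimonian-Laird estimator), given per-study mean DSC, SD, and sample size:

```python
# Sketch of DerSimonian-Laird random-effects pooling with Cochran's Q and
# Higgins' I²; inputs are per-study mean DSC scores, SDs, and sample sizes.
import numpy as np

def random_effects_pool(means, sds, ns):
    y = np.asarray(means, float)
    v = np.asarray(sds, float) ** 2 / np.asarray(ns, float)   # variance of each mean
    w = 1.0 / v                                               # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)        # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (w.sum() - np.sum(w**2) / w.sum()))  # between-study variance
    w_star = 1.0 / (v + tau2)                                 # random-effects weights
    pooled = np.sum(w_star * y) / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0     # Higgins' I² (%)
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), q, i2
```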

RESULTS
A total of 3,298 publications were initially identified through database searching. After removal of 281 duplicate records, the remaining 3,017 publications underwent preliminary screening. Based on title and abstract, 2,870 articles were excluded, and 147 articles were assessed for eligibility by two investigators independently. After full-text review, 38 studies were included in the systematic review. Among them, 11 studies that met the inclusion criteria and provided sufficient quantitative data were included in the meta-analysis; 27 studies were excluded for the following reasons: 14 studies proposed ML models trained and tested on duplicate datasets, 6 studies did not clearly define the training and testing sets, and 7 studies did not report a complete DSC score with standard deviation or interquartile range. The literature search flow diagram is presented in Figure 1.

Study Characteristics
All except two studies (45,48) reported validation methods for the proposed model, including the use of an independent test set, k-fold cross-validation, and leave-one-out cross-validation. External validation was performed by only one study (13). Thirteen studies adopted conventional ML algorithms, including k-nearest neighbor (24), general linear regression (47), random forest (13,15,25,34,36,38,41,48), and gradient boosting (11,26,36) classifiers. Twenty-five studies proposed DL-based approaches, consisting of an artificial neural network (ANN) (31) and various types of convolutional neural networks (CNNs), including popular architectures such as 2D and 3D U-Net (12,16,17,27,28,39,40,43,49,50), residual network (ResNet) (12,29,37,50), recurrent residual U-Net (R2U-Net) (52), and DeepMedic (32). Four studies applied modifications of the common rectified linear unit (ReLU) activation function for non-linear transformation after each convolution operation, including parametric ReLU, noisy ReLU, and leaky ReLU (32,33,40,51). To address the class imbalance issue, 6 studies used hybrid loss functions for target lesion segmentation (12,16,17,29,42,52). Four studies introduced optimization strategies such as data augmentation during training (28,29,37,39). The reference standard for model training was the actual infarct lesion, manually or semi-automatically segmented on follow-up CT or MR images acquired at a wide range of time intervals, from 1 h to 90 days after baseline imaging.
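As an illustrative sketch of the hybrid loss idea mentioned above (assuming a PyTorch setting; the soft-Dice form and the weighting scheme are our assumptions, not a specific study's implementation):

```python
# Sketch of a hybrid segmentation loss combining soft Dice loss with voxel-wise
# binary cross-entropy to counter the class imbalance between infarcted and
# healthy voxels; alpha balances the two terms.
import torch
import torch.nn.functional as F

def hybrid_loss(logits: torch.Tensor, target: torch.Tensor,
                alpha: float = 0.5, eps: float = 1e-6) -> torch.Tensor:
    prob = torch.sigmoid(logits)                       # voxel-wise probabilities
    inter = (prob * target).sum()
    dice = (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    bce = F.binary_cross_entropy_with_logits(logits, target)  # target: float mask
    return alpha * (1 - dice) + (1 - alpha) * bce
```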

Performance for Core Infarcted Tissue Prediction: A Meta-Analysis
Eleven studies were included in the meta-analysis (11,13,16,25,31,38,40,43,50,52,54). The methodological quality assessment of the included studies is shown in Table 2. Three studies clearly defined all three image sets (training, validation, and test) (11,16,31). Only one study determined model performance using an external test set (13). Imaging data from one study were collected from four major manufacturers (44), five studies reported using single-vendor data (11,25,38,40,54), and the remainder did not report vendor information (13,16,31,43,52). Although all the included studies clearly defined their validation methods, the relationship between the number of training images and model performance (i.e., sample size estimation) was not carefully evaluated. All studies described the data pre-processing procedure, trained their models using acceptable reference standards, and reported predictive performance assessed by multiple performance metrics. Algorithms from two studies were partially publicly available on GitHub (11,40).
The overall performance of the 11 predictive models is presented in Figure 2. The pooled DSC score was 0.50 (95% CI 0.39-0.61). The Cochran's Q test p-value was < 0.001 and the Higgins' I² was 96.5%, indicating high heterogeneity across the included studies. We conducted a sensitivity analysis by removing one study at each step (Figure 3). The adjusted overall DSC score ranged from 0.47 (95% CI 0.41-0.53) after removing the study by Zhu et al. (54) to 0.52 (95% CI 0.41-0.63) after removing the study by McKinley et al. (25). The funnel plot showed an asymmetrical shape, and not all studies fell within the pseudo-95% CI region, suggesting potential publication bias among the included studies (Figure 4). However, Egger's test showed no statistically significant publication bias (p = 0.565).

DISCUSSION
In this study, we reviewed the performance of ML-based approaches for final infarct lesion prediction from acute stroke imaging. The overall predictive performance of the ML algorithms was moderate, with a pooled DSC score of 0.50 (95% CI 0.39-0.61; Higgins' I² = 96.5%, p < 0.001). Subgroup analyses indicated that the DL-based models outperformed the conventional ML classifiers, with the best performance observed in DL algorithms combined with pre-processed CT data. Although high heterogeneity was present across studies, current ML algorithms still showed promising performance for ischemic tissue outcome prediction from baseline imaging.

Estimating the final infarct lesion from baseline imaging is complex because of the heterogeneity of lesion shape, location, and progression. The aim of ML applications is to extract the maximum amount of predictive power from the available multi-modality imaging information, where conventional thresholding methods seem inadequate (10). A few studies compared their proposed ML-based approaches with conventional thresholds for core infarcted tissue delineation and showed significant improvement using the ML-based methods (11,(13)(14)(15)17). For instance, when an attention-gated U-Net was trained with baseline MR diffusion and perfusion parameters, the prediction model outperformed the ADC < 620 × 10⁻⁶ mm²/s threshold, with more precise segmentation (DSC score 0.53 vs. 0.45), higher discriminating power (AUC 0.92 vs. 0.71), and smaller volume error (median 9 vs. 12 ml) (14). Such strategies take advantage of the data-processing ability of ML algorithms to provide rapid and reliable assessment, which is promising for supporting clinical management.
Conventional ML classifiers include linear regression and decision-tree-based methods. Grosser et al. compared the performance of three classical ML algorithms for infarct core estimation and found that decision-tree ensembles (random forest and gradient boosting) performed better than a linear regression model (36), indicating the necessity of non-linear algorithms for stroke lesion segmentation. In the 2017 ISLES challenge, where uniformly pre-processed data were provided for model training and testing, almost all top-ranking teams employed DL algorithms instead of conventional ML classifiers (28). Our finding was consistent with the results of the ISLES 2017 challenge, indicating the advantages of DL algorithms for final infarct prediction. However, a recent study reported contradictory results, in which a U-Net model performed less well than two decision-tree classifiers (DSC score 0.48 vs. 0.53 and 0.51) (11). A possible reason is the relatively small sample size available for fully training a DL algorithm. The inherent data-dependent characteristic of DL algorithms means that, when trained on sufficient data, model performance continues to improve, whereas classical ML approaches tend to plateau. Moreover, most well-performing DL models were customized from baseline architectures; innovative modifications of algorithm architectures and training strategies may further improve the predictive performance of DL models.
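To make the voxel-wise formulation used by these conventional classifiers concrete, a schematic random forest sketch on synthetic data (not Grosser et al.'s pipeline; feature and label arrays are hypothetical) follows:

```python
# Sketch of voxel-wise classification: each voxel is a sample whose features
# are the multimodal parameter values at that location, and whose label is
# infarcted vs. non-infarcted on follow-up imaging.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_voxels, n_params = 10_000, 4             # e.g., CBF, CBV, Tmax, ADC per voxel
X = rng.normal(size=(n_voxels, n_params))  # stacked multimodal features (synthetic)
y = rng.integers(0, 2, size=n_voxels)      # flattened follow-up infarct mask (synthetic)

clf = RandomForestClassifier(n_estimators=100, class_weight="balanced")
clf.fit(X, y)                              # non-linear, voxel-wise classifier
prob_map = clf.predict_proba(X)[:, 1]      # reshaped back to image space in practice
```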
Most of the previous works have chosen MR images for model training, given the high tissue contrast and the sensitivity of MR diffusion for infarct core detection. Our study revealed that the DSC score of MR-input models stabilized at around 0.45, whether using ML classifiers or DL models. In clinical practice, CT is more widely available for acute stroke triage, detecting large vessel occlusion, and selecting candidates for revascularization (8,55). Studies developing models trained on CT perfusion data emerged later yet achieved comparable or better performance than MR-input models (12,29,31,33,42,44,47,50,52,53). One study employing a random forest classifier using features extracted from multi-phase CT angiography presented less satisfactory performance, with a DSC score of 0.22 (15). However, a more recent study based on the R2U-Net algorithm using non-contrast CT data with intensity normalization and histogram equalization showed promising performance, with a DSC score of 0.54 (52). Theoretically, different imaging modalities and parameters provide complementary information, and thus combining multimodal imaging data with reasonable pre-processing should enhance the overall predictive performance. In addition, several approaches incorporated clinical data such as stroke severity, reperfusion status, and time variables (13,28,33,34,48). Multi-dimensional input information consisting of imaging and non-imaging data is expected to yield better prediction models, which is a direction for future research.

In our study, we chose the DSC score as the primary performance metric: a commonly used volume-based metric that captures both lesion size and location information for target segmentation. Other segmentation metrics, such as the Jaccard index, were less frequently reported in this research field, and the Hausdorff distance and surface distance are distance-based metrics that are less suitable for final infarct lesion prediction (56). Although the ROC curve is more familiar in diagnostic accuracy studies, its efficacy has been challenged for class-imbalanced tasks such as infarct core prediction: the large number of "healthy" voxels drives AUC values toward high levels and reduces their discriminating power. From a clinical standpoint, we included the volume error as a secondary performance metric, which enables intuitive assessment of the size difference between the prediction result and the reference standard, as the estimated core infarct volume is critical for identifying eligible patients who would benefit from treatment in the late time window.
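As a minimal sketch of the volume-error metric just described (the voxel spacing is an assumed parameter), the computation reduces to a scaled voxel count difference:

```python
# Sketch of the volume error between predicted and reference binary masks,
# expressed in millilitres (1 ml = 1000 mm³); voxel_mm3 is an assumed spacing.
import numpy as np

def volume_error_ml(pred: np.ndarray, ref: np.ndarray, voxel_mm3: float = 1.0) -> float:
    v_pred = pred.astype(bool).sum() * voxel_mm3 / 1000.0
    v_ref = ref.astype(bool).sum() * voxel_mm3 / 1000.0
    return v_pred - v_ref
```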
Although ML-based approaches have provided promising results for final infarct lesion prediction, they have not yet achieved wide acceptance and implementation in clinical practice. Most of the proposed models were trained on datasets with small sample sizes, which are deemed insufficient to train an ML algorithm (especially a DL algorithm), leading to an overall moderate predictive performance. Many studies used k-fold cross-validation to provide an unbiased evaluation with a small sample size. However, the true predictive performance may be overestimated without independent external validation (57). Another limitation was data heterogeneity, as models trained on single-center cohorts using single-vendor data have reduced generalizability. The one study that validated its approach on an external cohort reported less satisfactory performance, with a median DSC score of 0.39 (13). There is an emerging trend to build large multi-vendor, multi-institution diagnostic datasets, with initial implementations on chest X-ray data (56). A similar dataset for stroke lesion segmentation would be helpful. In addition, a standardized methodologic procedure is also warranted, including the definition of the clinical cohort, imaging protocols, reference standard, model training and validation process, and clinical evaluation of model performance.
Our study has several limitations. First, heterogeneity was high across studies owing to differences in study cohorts, algorithm types, and input parameters. We performed sensitivity analyses and found no obvious deviation of the adjusted effect sizes from the main effect size. We also conducted subgroup analyses to explore the heterogeneity and found a downward trend of heterogeneity within subgroups. Nevertheless, re-evaluation of the overall model performance will be needed as more relevant studies accumulate. Second, as an emerging field of artificial intelligence in imaging, there was no consensus on reporting standards; consequently, 13 studies were excluded before the meta-analysis for lacking a definition of the image sets or complete DSC results, which might result in an incomplete assessment of the available studies. Third, although publication bias examined by Egger's test was not significant, the funnel plot showed an asymmetrical shape. We excluded 14 studies because of dataset duplication or overlap to avoid distorting the overall pooled effect size, which might contribute to the risk of publication bias.

CONCLUSION
In this study, we conducted a systematic review and meta-analysis of current studies using ML algorithms for infarct core prediction. Despite the heterogeneity across studies, the overall performance of ML-based predictive methods is moderate but promising, with better predictive performance presented in the DL-based approaches. However, before such models can be well integrated into the clinical stroke workflow, future studies should train ML-based approaches on large-scale, multi-vendor data, validate them on external cohorts, and adopt formalized reporting standards to improve model accuracy and robustness.

DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.