ORIGINAL RESEARCH article

Front. Mol. Biosci., 10 August 2022
Sec. Molecular Diagnostics and Therapeutics
Volume 9 - 2022 | https://doi.org/10.3389/fmolb.2022.910688

Machine learning prediction of postoperative unplanned 30-day hospital readmission in older adult

  • 1Department of Anesthesiology, West China Hospital, Sichuan University and The Research Units of West China (2018RU012), Chinese Academy of Medical Sciences, Chengdu, China
  • 2Department of Anesthesiology, The Second Clinical Medical College, North Sichuan Medical College, Nanchong Central Hospital, Nanchong, China
  • 3College of Computer Science, Sichuan University, Chengdu, China

Background: Although unplanned hospital readmission is an important indicator for monitoring the perioperative quality of hospital care, few published studies of hospital readmission have focused on surgical patient populations, especially the elderly. We aimed to investigate whether machine learning approaches can be used to predict postoperative unplanned 30-day hospital readmission in older surgical patients.

Methods: We extracted demographic, comorbidity, laboratory, surgical, and medication data of patients older than 65 years who underwent surgery under general anesthesia at West China Hospital, Sichuan University from July 2019 to February 2021. Several machine learning approaches were applied to evaluate whether unplanned 30-day hospital readmission can be predicted. Model performance was assessed using the following metrics: AUC, accuracy, precision, recall, and F1 score. Calibration was assessed using the Brier score. A feature ablation analysis was performed, and the change in AUC with the removal of each feature was assessed to determine feature importance.

Results: A total of 10,535 unique surgeries and 10,358 unique elderly surgical patients were included. The overall 30-day unplanned readmission rate was 3.36%. The AUCs of the six machine learning algorithms predicting postoperative 30-day unplanned readmission ranged from 0.6865 to 0.8654. The RF + XGBoost algorithm performed best overall, with an AUC of 0.8654 (95% CI, 0.8484–0.8824), accuracy of 0.9868 (95% CI, 0.9834–0.9902), precision of 0.3960 (95% CI, 0.3854–0.4066), recall of 0.3184 (95% CI, 0.259–0.3778), and F1 score of 0.4909 (95% CI, 0.3907–0.5911). The Brier scores of the six machine learning algorithms ranged from 0.0464 to 0.3721, with RF + XGBoost showing the best calibration capability. The five most important features of the RF + XGBoost model were operation duration, white blood cell count, BMI, total bilirubin concentration, and blood glucose concentration.

Conclusion: Machine learning algorithms can accurately predict postoperative unplanned 30-day readmission in elderly surgical patients.

Trial registration: http://www.chictr.org.cn/showproj.aspx?proj=35795, ChiCTR, ChiCTR1900021290

Background

The unplanned hospital readmission rate is one of the most widely used indicators of hospital care quality (Gupta and Fonarow, 2018). Because of its substantial contribution to medical resource costs, unplanned hospital readmission is increasingly recognized as an important public health concern, especially in developed countries (Jencks et al., 2009; Axon and Williams, 2011). Geriatric surgical patients, who are vulnerable to chronic illnesses, are at higher risk of unplanned hospital readmission owing to these compounding factors. Although not all of these readmissions are preventable, it is critical to propose an effective framework for their early identification. A substantial body of models exists to identify patients at risk of unplanned readmission (Miotto et al., 2016; Kansagara et al., 2011; van Walraven et al., 2012; Johnson et al., 2019). However, most of them were developed for a specific disease cluster and cannot be extrapolated to the entire postoperative population, particularly elderly surgical patients (Ali and Gibbons, 2017; Ko et al., 2020; Kong and Wilkinson, 2020; Mišić et al., 2020; Sander et al., 2020; Shebeshi et al., 2020; Wasfy et al., 2020; Amritphale et al., 2021).

Recently, machine learning (ML) algorithms have been considered potential tools for developing clinical predictive models because of their ability to handle multidimensional datasets and make accurate predictions (Deo, 2015; Jordan and Mitchell, 2015). Because ML algorithms can capture nonlinear relationships and interactions between predictors, they are increasingly used in medical modeling. In this study, we aimed to investigate whether ML-based algorithms can accurately predict postoperative unplanned 30-day readmission in an elderly surgical patient cohort using input features such as demographic, comorbidity, laboratory, surgical, and medication data.

Methods

Data extraction

This study was registered in the Chinese Clinical Trial Registry (ChiCTR-1900021290), and ethical approval was obtained from the Ethical Review Board of West China Hospital, Sichuan University, China. All relevant clinical data were prospectively collected during routine anesthesia risk assessment, intraoperative recording, and postoperative follow-up using a structured data schema designed by our institution. We extracted perioperative information of patients older than 65 years who underwent surgery under general anesthesia at West China Hospital, Sichuan University from July 2019 to February 2021. For patients with multiple admission records, only the first admission was included for analysis. For patients who underwent multiple surgeries during a single hospitalization, all of their surgeries were included. A flow chart describing the inclusion and exclusion process is shown in Figure 1.

FIGURE 1. Flow chart of inclusion and exclusion process for overall data set.

Model endpoint definition

The label “postoperative 30-day unplanned readmission” was defined as follows: readmission due to the same surgical disease or postoperative complications within 30 days postoperatively in an unplanned fashion. Our professional follow-up personnel collected this information by telephone 30 days after surgery.

Data preprocessing

Few admissions had missing data. Variables with a missing data rate greater than 30% were excluded from model development. For numeric variables with a missing data rate below 5%, the median of each variable was used for imputation. For numeric variables with a missing data rate between 5% and 30%, we compared several imputation techniques using mean absolute error (MAE) as the evaluation metric. To estimate this score relative to a complete dataset, we excluded all rows containing missing values and then randomly removed some values to create a new version of the dataset with artificially missing data. We then compared the performance of a random forest (RF) regressor on the complete original dataset with its performance on the altered dataset under each imputation technique. The comparison, presented in Figure 2, identified the imputation technique with the lowest MAE for each missing-data group.
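A minimal sketch of this comparison, following the common scikit-learn pattern of scoring a downstream random forest regressor on differently imputed data, is given below. Here `X_full` and `y_full` denote the complete-case feature matrix and a numeric target, and the masking fraction and imputer settings are illustrative assumptions rather than the exact values used in the study.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import IterativeImputer, KNNImputer
from sklearn.linear_model import BayesianRidge
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.RandomState(0)

def add_missing(X, frac=0.1):
    """Randomly mask a fraction of entries to create artificial missingness."""
    X = X.copy()
    mask = rng.rand(*X.shape) < frac
    X[mask] = np.nan
    return X

def score_imputer(imputer, X_missing, y):
    """Cross-validated MAE of an RF regressor trained on imputed data."""
    pipe = make_pipeline(imputer, RandomForestRegressor(random_state=0))
    scores = cross_val_score(pipe, X_missing, y, cv=5,
                             scoring="neg_mean_absolute_error")
    return -scores.mean()

# X_missing = add_missing(X_full)          # complete-case data with artificial gaps
# imputers = {
#     "KNN": KNNImputer(),
#     "BR":  IterativeImputer(estimator=BayesianRidge(), random_state=0),
#     "DTR": IterativeImputer(estimator=DecisionTreeRegressor(), random_state=0),
#     "ETR": IterativeImputer(estimator=ExtraTreesRegressor(), random_state=0),
#     "KNR": IterativeImputer(estimator=KNeighborsRegressor(), random_state=0),
# }
# maes = {name: score_imputer(imp, X_missing, y_full) for name, imp in imputers.items()}
# The same RF regressor scored on X_full (no masking) serves as the "full data" reference.
```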

FIGURE 2. Imputation techniques in different missing data groups. FD, Full data; KNN, k nearest neighbor; BR, BayesianRidge; DTR, DecisionTreeRegressor; ETR, ExtraTreesRegressor; KNR, KNeighborsRegressor; MAE, Mean Absolute Error. BayesianRidge performed the best with the lowest MAE among all imputation techniques.

Considering the extremely imbalanced classification between readmitted and non-readmitted samples (the readmission rate was only 3.36%), we both oversampled and undersampled the training set using the Synthetic Minority Over-sampling Technique (SMOTE) combined with Edited Nearest Neighbors (ENN). SMOTE can generate noisy samples by interpolating new points between marginal outliers and inliers, and ENN then cleans the space resulting from oversampling. Using the SMOTE + ENN (SMOTEENN) algorithm provided by the imbalanced-learn Python library, we achieved a more balanced distribution of readmitted and non-readmitted samples (Lemaître et al., 2017).
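A minimal sketch of this resampling step with imbalanced-learn, assuming `X_train` and `y_train` hold the training features and readmission labels; the random seed is an illustrative assumption.

```python
from imblearn.combine import SMOTEENN

smote_enn = SMOTEENN(random_state=0)
# Only the training split is resampled; the test set is left untouched so that
# evaluation still reflects the true 3.36% readmission rate.
# X_train_res, y_train_res = smote_enn.fit_resample(X_train, y_train)
```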

Our data were randomly divided into a training set and a test set using a 70–30 split. Models were fitted on the training data (70%) and evaluated on the test data (30%). Each split preserved the proportion of readmitted and non-readmitted cases in the entire dataset. This random split was repeated ten times.
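A minimal sketch of such a repeated, class-preserving split with scikit-learn; the seed scheme is an illustrative assumption.

```python
from sklearn.model_selection import train_test_split

def stratified_splits(X, y, n_repeats=10, test_size=0.3):
    """Yield repeated 70-30 splits that preserve the readmission proportion."""
    for seed in range(n_repeats):
        # stratify=y keeps the readmitted/non-readmitted ratio equal in both sets
        yield train_test_split(X, y, test_size=test_size,
                               stratify=y, random_state=seed)
```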

Feature selection

We focused on features that are easily accessible and available before, rather than only after, discharge. For the preoperative laboratory data, we kept the last value recorded prior to surgery. Before feature selection, we had 145 initial available variables. In model development, feature selection reduces the number of attributes and allows a subset of relevant features to be retained. Feature selection algorithms generally fall into three classes: filter, wrapper, and embedded methods. In this study, we used the wrapper method because it measures the usefulness of features based on classifier performance during the search process, in which different combinations of features are evaluated and compared by scores based on predictive model accuracy (Chandrashekar and Sahin, 2014).

To eliminate irrelevant, weakly relevant, or redundant features, reduce model overfitting, and improve model generalization, we used a multilayer perceptron (MLP) as the estimator within a genetic algorithm (GA), a stochastic search algorithm based on the mechanics of evolution and natural selection (Torkamanian-Afshar et al., 2021). The GA uses three operators, namely selection, crossover, and mutation, to improve the quality of solutions. We used Distributed Evolutionary Algorithms in Python (DEAP) to implement the GA; the procedure returns the optimal feature subset as a binary array with the best accuracy score (Rainville et al., 2014). The independent probability for each attribute to be flipped was 0.1 in the flip-bit mutation. Tournament selection was used as the selection operator with a tournament size of 3. The population size was 100, the crossover probability was 0.5, and the mutation probability was 0.2.
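A minimal sketch of this GA-based wrapper selection with DEAP and an MLP estimator follows, using the operator settings reported above (flip-bit mutation with indpb = 0.1, tournament size 3, population 100, crossover probability 0.5, mutation probability 0.2). `X` is assumed to be a NumPy feature matrix and `y` the labels; the MLP settings and the number of generations are illustrative assumptions.

```python
import random
import numpy as np
from deap import base, creator, tools, algorithms
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def make_ga_toolbox(X, y, n_features):
    # Maximize cross-validated accuracy of the MLP on the selected feature subset.
    creator.create("FitnessMax", base.Fitness, weights=(1.0,))
    creator.create("Individual", list, fitness=creator.FitnessMax)

    toolbox = base.Toolbox()
    toolbox.register("attr_bool", random.randint, 0, 1)
    toolbox.register("individual", tools.initRepeat, creator.Individual,
                     toolbox.attr_bool, n_features)
    toolbox.register("population", tools.initRepeat, list, toolbox.individual)

    def evaluate(individual):
        mask = np.array(individual, dtype=bool)
        if not mask.any():
            return (0.0,)                       # empty subset gets the worst score
        clf = MLPClassifier(max_iter=300)       # illustrative MLP settings
        score = cross_val_score(clf, X[:, mask], y, cv=5,
                                scoring="accuracy").mean()
        return (score,)

    toolbox.register("evaluate", evaluate)
    toolbox.register("mate", tools.cxTwoPoint)                 # crossover
    toolbox.register("mutate", tools.mutFlipBit, indpb=0.1)    # flip-bit mutation
    toolbox.register("select", tools.selTournament, tournsize=3)
    return toolbox

# toolbox = make_ga_toolbox(X_train, y_train, X_train.shape[1])
# pop = toolbox.population(n=100)
# pop, _ = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2,
#                              ngen=20, verbose=False)          # ngen is assumed
# best_mask = tools.selBest(pop, k=1)[0]   # binary array marking selected features
```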

The full list of features includes demographic data (e.g., age, gender, and body mass index [BMI]), laboratory tests obtained prior to surgery (e.g., glucose concentration and oxygen saturation), descriptive intraoperative vital signs (e.g., systolic blood pressure), comorbidities (e.g., hypertension), and surgery descriptions (e.g., surgery type and anesthesia).

Model creation, training, and testing

This study considered several widely used model types: logistic regression, MLP, RF, extreme gradient boosting (XGBoost), and light gradient boosting machine (LGBM). The latter three are bagging or boosting ensemble learning algorithms. XGBoost is an optimized distributed gradient boosting library designed for strong predictive power; it grows trees greedily rather than building the full tree structure, and provides parallel tree boosting that solves regression, classification, and ranking problems quickly and accurately. LGBM is a high-performance gradient boosting framework based on decision trees; it grows trees leaf-wise by selecting the best-fitting leaf, whereas most other boosting algorithms grow trees depth- or level-wise. LGBM is fast because it uses a histogram-based algorithm that speeds up training. We used MAEs as weights to combine RF and XGBoost into a hybrid model.
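A minimal sketch of such a hybrid is given below. The text only states that MAEs were used as weights, so the inverse-MAE weighting, the hyperparameters, and the use of a held-out validation split are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import mean_absolute_error
from xgboost import XGBClassifier

def fit_hybrid(X_train, y_train, X_val, y_val):
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
    xgb = XGBClassifier(n_estimators=500, eval_metric="logloss").fit(X_train, y_train)

    # MAE of each model's predicted readmission probability on the validation split
    mae_rf = mean_absolute_error(y_val, rf.predict_proba(X_val)[:, 1])
    mae_xgb = mean_absolute_error(y_val, xgb.predict_proba(X_val)[:, 1])

    # Lower MAE -> larger weight (inverse-MAE weighting, normalized to sum to 1)
    w_rf, w_xgb = 1.0 / mae_rf, 1.0 / mae_xgb
    total = w_rf + w_xgb
    w_rf, w_xgb = w_rf / total, w_xgb / total

    def predict_proba(X):
        # Weighted average of the two models' probability estimates
        return w_rf * rf.predict_proba(X)[:, 1] + w_xgb * xgb.predict_proba(X)[:, 1]

    return predict_proba
```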

One advantage of the abovementioned algorithms is that we can easily calculate importance scores for all input features; a feature with a higher score has a larger effect on the model prediction. The RandomForestClassifier, LogisticRegression, and MLPClassifier used in this study are from scikit-learn. The XGBClassifier and LGBMClassifier were implemented using the xgboost and lightgbm Python packages, respectively.

Model hyperparameters were tuned before training to improve algorithm performance. We used RandomizedSearchCV and GridSearchCV provided by scikit-learn. Five-fold cross-validation was applied to the training set: each of the five partitions served once as the validation fold and four times as part of the training folds, and the metrics were averaged across folds. Before optimization, all classifier parameters were set to their default values. We first ran a random search with 200 iterations, then determined a smaller range around the parameters selected in the previous step, and finally ran a grid search over this small set of hyperparameter values.
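A minimal sketch of this two-stage tuning, shown for the RF classifier; the parameter ranges are illustrative assumptions, while the 200 random-search iterations and five-fold cross-validation follow the description above.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV

# Stage 1: broad random search (200 iterations, 5-fold CV)
broad_space = {
    "n_estimators": list(range(100, 1001, 100)),
    "max_depth": [None, 5, 10, 20, 40],
    "min_samples_leaf": [1, 2, 5, 10],
}
random_search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=broad_space,
    n_iter=200, cv=5, scoring="roc_auc", n_jobs=-1, random_state=0,
)
# random_search.fit(X_train_res, y_train_res)

# Stage 2: exhaustive grid search over a narrow range around random_search.best_params_
narrow_grid = {
    "n_estimators": [400, 500, 600],
    "max_depth": [10, 20],
    "min_samples_leaf": [1, 2],
}
grid_search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid=narrow_grid, cv=5, scoring="roc_auc", n_jobs=-1,
)
# grid_search.fit(X_train_res, y_train_res)
```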

We used block bootstrapping to generate confidence intervals (CIs) for the performance metrics on the test set. Rather than resampling individual procedures, we resampled patients 1,000 times, included all predictions belonging to each sampled patient in the bootstrap sample, and derived the CIs from the sorted performance metrics across bootstrap samples.
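A minimal sketch of a patient-level bootstrap of this kind; `patient_ids`, `y_true`, and `y_score` are assumed to be NumPy arrays aligned by prediction, and AUC is used as the example metric.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_ci(y_true, y_score, patient_ids, metric=roc_auc_score,
                 n_boot=1000, seed=0):
    """95% CI from a patient-level (block) bootstrap of a performance metric."""
    rng = np.random.RandomState(seed)
    unique_ids = np.unique(patient_ids)
    stats = []
    for _ in range(n_boot):
        sampled = rng.choice(unique_ids, size=len(unique_ids), replace=True)
        # Keep every prediction belonging to each sampled patient (block resampling).
        idx = np.concatenate([np.where(patient_ids == pid)[0] for pid in sampled])
        if len(np.unique(y_true[idx])) < 2:
            continue  # metric undefined when only one class is present
        stats.append(metric(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [2.5, 97.5])
    return lo, hi
```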

Evaluation metrics

Model performance was assessed using the following metrics: area under the ROC curve (AUC), accuracy, precision, recall, and F1 score. The ROC curve is a visualization tool that illustrates model performance by plotting the true-positive rate against the false-positive rate as the threshold for classifying positives is varied; each threshold yields one pair of false-positive and true-positive rates. Model calibration was evaluated with the Brier score and calibration plots. The 95% CIs of the abovementioned indicators were calculated from 1,000 bootstrap resamples. A feature ablation analysis was performed, and the change in AUC with the removal of each feature was assessed to determine feature importance.
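A minimal sketch of this evaluation step with scikit-learn; the 0.5 decision threshold and the number of calibration bins are illustrative assumptions.

```python
from sklearn.metrics import (roc_auc_score, accuracy_score, precision_score,
                             recall_score, f1_score, brier_score_loss)
from sklearn.calibration import calibration_curve

def evaluate(y_true, y_prob, threshold=0.5):
    """Discrimination metrics at a fixed threshold plus calibration statistics."""
    y_pred = (y_prob >= threshold).astype(int)
    metrics = {
        "AUC": roc_auc_score(y_true, y_prob),
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "F1": f1_score(y_true, y_pred),
        "Brier": brier_score_loss(y_true, y_prob),
    }
    # Calibration curve: fraction of observed positives vs. mean predicted probability
    frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=10)
    return metrics, (mean_pred, frac_pos)
```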

Results

Characteristics of the patients

Inclusion and exclusion criteria were strictly followed during the entire screening process. A flow chart indicating the inclusion and exclusion process is shown in Figure 1. Finally, a total of 10,358 elderly patients were included. The overall 30-day unplanned readmission rate was 3.36%. The demographic data and surgery-related information of patients are shown in Table 1.

TABLE 1. Summary of demographic characteristics and perioperative data in this cohort.

Model performance

The AUCs of the six ML algorithms predicting postoperative 30-day unplanned readmission ranged from 0.6371 to 0.7686 with all features included (Table 2) and from 0.6865 to 0.8654 with selected features (Table 3). The RF + XGBoost classifier with selected features performed best overall, with an AUC of 0.8654 (95% CI, 0.8484–0.8824), accuracy of 0.9868 (95% CI, 0.9834–0.9902), precision of 0.3960 (95% CI, 0.3854–0.4066), recall of 0.3184 (95% CI, 0.259–0.3778), and F1 score of 0.4909 (95% CI, 0.3907–0.5911) (Table 3). The ROC curves of all six ML algorithms predicting postoperative unplanned 30-day hospital readmission are shown in Figure 3, and the corresponding Precision-Recall (P-R) curves are shown in Figure 4.

TABLE 2. Performance of classification models including all features.

TABLE 3. Performance of classification models including selected features.

FIGURE 3. The ROC curves and AUCs of six ML algorithms predicting postoperative unplanned 30-day hospital readmission in this cohort. ROC, receiver operating characteristic; AUC, area under the curve; RF, random forest; LR, logistic regression; XGBoost, eXtreme Gradient Boosting; LGBM, Light Gradient Boosting Machine; MLP, Multilayer Perceptron.

FIGURE 4. The P-R curves of six ML algorithms predicting postoperative unplanned 30-day hospital readmission in this cohort. P-R, Precision-Recall; RF, random forest; LR, logistic regression; XGBoost, eXtreme Gradient Boosting; LGBM, Light Gradient Boosting Machine; MLP, Multilayer Perceptron.

The Brier score of the RF + XGBoost model predicting postoperative 30-day unplanned readmission was 0.0372 (95% CI, 0.0371–0.0372), showing the best calibration capability among all the ML algorithms (Table 4).

TABLE 4. Calibration of classification models including selected features.

Feature importance

After performing a feature ablation analysis, we found that the five most important features of the RF + XGBoost model were operation duration, white blood cell count, BMI, total bilirubin concentration, and blood glucose concentration. Figure 5 presents the feature importance of three models (RF, XGBoost, and RF + XGBoost) predicting postoperative unplanned 30-day hospital readmission.

FIGURE 5. The feature importance of three ML algorithms predicting postoperative unplanned 30-day hospital readmission. RF, random forest; XGBoost, eXtreme Gradient Boosting.

Discussion

We used five individual ML models and one hybrid model to predict 30-day postoperative unplanned readmission in elderly patients. To analyze the performance of the proposed framework, we compared the advantages and benefits of the hybrid model against the traditional ML models. Among all the algorithms, the RF + XGBoost hybrid model performed best overall, with an AUC of 0.8654 (95% CI, 0.8484–0.8824) and a Brier score of 0.0372 (95% CI, 0.0371–0.0372). Among the single ML algorithms, RF had nearly the best performance in predicting 30-day postoperative unplanned readmission, which has been reported previously (Peng et al., 2010; Hsieh et al., 2011; Alickovic and Subasi, 2016; Gowd et al., 2019). In addition, all ML models tended to perform similarly to or better than traditional approaches (van Walraven et al., 2010; Cotter et al., 2012; Donzé et al., 2013; Low et al., 2017).

In the RF + XGBoost model, the five most important features were operation duration, white blood cell count, BMI, total bilirubin concentration, and glucose concentration. A long duration of surgery is an important factor contributing to multiple postoperative complications, including unplanned 30-day postoperative readmission (Phan et al., 2017; Polites et al., 2017). An increased white blood cell count usually indicates a higher likelihood of infection, and postoperative infection is an important reason for unplanned readmission, for example lung infection requiring anti-infective treatment or wound infection requiring readmission for debridement or reoperation. A higher BMI is closely associated with a higher incidence of hypertension, coronary heart disease, and diabetes, while a reduced BMI is a sign of malnutrition and frailty in the elderly (Graboyes et al., 2018; Sperling et al., 2018; Workman et al., 2020; Cutler et al., 2021). Hyperbilirubinemia reflects underlying hemolysis and hepatic dysfunction; such patients have decreased tolerance for massive intraoperative blood loss, hypotension, and hepatic ischemia (Liao et al., 2013; Arvind et al., 2021). Elevated blood glucose, as seen in type 2 diabetes mellitus and impaired fasting glucose, is associated with postoperative infections, which are common causes of postoperative unplanned readmission (Jones et al., 2017; Martin et al., 2019).

To improve the performance of unplanned readmission risk prediction, we combined the RF and XGBoost classifiers using weights derived from their MAEs. Our study demonstrates that the combined model can perform significantly better than the individual models in predicting unplanned readmission. Meanwhile, among all the models, MLP did not achieve relatively good scores, which may be because the neural network algorithm is relatively complex for small, imbalanced datasets. The performance of ML algorithms is closely related to the imbalance rate of the label (here, the rate of unplanned readmission). When the proportion of positive samples is very low (<10%), ML algorithms easily overfit. In this study, the 30-day unplanned readmission rate was below 5%, so patients were highly likely to be predicted as negative samples. Although we used SMOTEENN as a resampling method to reduce the imbalance, the classification performance still has much room for improvement, as seen from the recall and F1 scores. The Brier score of the hybrid model was 0.0372 (95% CI, 0.0371–0.0372), which was also the lowest among all the algorithms.

Our analysis of postoperative patients provides three key insights into the prediction of unplanned readmission. First, ML is a powerful artificial intelligence approach that uses data to imitate the way humans learn and make decisions, gradually improving its accuracy; in this study, nearly all models achieved an AUC above 0.7, whereas previously published models predicting unplanned readmission achieved AUCs in the range of 0.54–0.92 (Artetxe et al., 2018). Second, hybrid models may perform better than individual models. Third, effective data processing is essential to support decision-making. Strategies to reduce potentially avoidable 30-day readmissions may help improve the quality of care and outcomes.

Limitations

Some potential limitations should be considered. First, we did not include information on hospital personnel, which is undoubtedly related to patient outcomes. Second, this is a single-center study, and most of the patients came from western China, so further external validation is needed. Third, the sample size is relatively small compared with some retrospective studies. Fourth, some missing data during collection and follow-up were inevitable.

Conclusion

ML algorithms can accurately predict postoperative unplanned 30-day readmission in elderly surgical patients.

Data availability statement

The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving human participants were reviewed and approved by the ethical review board of West China Hospital, Sichuan University, China. The ethics committee waived the requirement of written informed consent for participation.

Author contributions

Conception and design: LLi and TZ. Administrative support: LLu and TZ. Collection and assembly of data: LLi and LW. Data analysis and interpretation: LLi and LW. Manuscript writing and editing: LLi, LW, LLu, and TZ. All authors read and approved the final manuscript.

Funding

This study was supported by the National Key R&D Program of China (2018YFC2001800), Sichuan Provincial Science and Technology Key R&D Projects (2019YFG0491), and research funds from the Sichuan Provincial Health Commission (20PJ298).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Ali, A. M., and Gibbons, C. E. (2017). Predictors of 30-day hospital readmission after hip fracture: a systematic review. Injury 48 (2), 243–252. doi:10.1016/j.injury.2017.01.005

Alickovic, E., and Subasi, A. (2016). Medical decision support system for diagnosis of heart arrhythmia using DWT and random forests classifier. J. Med. Syst. 40 (4), 108. doi:10.1007/s10916-016-0467-8

Amritphale, A., Fonarow, G. C., Amritphale, N., Omar, B., and Crook, E. D. (2021). All-cause unplanned readmissions in the United States. Insights from the Nationwide readmission database. Intern. Med. J. [Epub ahead of print]. doi:10.1111/imj.15581

Artetxe, A., Beristain, A., and Graña, M. (2018). Predictive models for hospital readmission risk: A systematic review of methods. Comput. Methods Programs Biomed. 164, 49–64. doi:10.1016/j.cmpb.2018.06.006

Arvind, V., London, D. A., Cirino, C., Keswani, A., and Cagle, P. J. (2021). Comparison of machine learning techniques to predict unplanned readmission following total shoulder arthroplasty. J. Shoulder Elb. Surg. 30 (2), e50–e59. doi:10.1016/j.jse.2020.05.013

Axon, R. N., and Williams, M. V. (2011). Hospital readmission as an accountability measure. JAMA 305 (5), 504–505. doi:10.1001/jama.2011.72

Chandrashekar, G., and Sahin, F. (2014). A survey on feature selection methods. Comput. Electr. Eng. 40 (1), 16–28. doi:10.1016/j.compeleceng.2013.11.024

Cotter, P. E., Bhalla, V. K., Wallis, S. J., and Biram, R. W. (2012). Predicting readmissions: Poor performance of the LACE index in an older UK population. Age Ageing 41 (6), 784–789. doi:10.1093/ageing/afs073

Cutler, H. S., Collett, G., Farahani, F., Ahn, J., Nakonezny, P., Koehler, D., et al. (2021). Thirty-day readmissions and reoperations after total elbow arthroplasty: a national database study. J. Shoulder Elb. Surg. 30 (2), e41–e49. doi:10.1016/j.jse.2020.06.033

Deo, R. C. (2015). Machine learning in medicine. Circulation 132 (20), 1920–1930. doi:10.1161/CIRCULATIONAHA.115.001593

Donzé, J., Aujesky, D., Williams, D., and Schnipper, J. L. (2013). Potentially avoidable 30-day hospital readmissions in medical patients: derivation and validation of a prediction model. JAMA Intern. Med. 173 (8), 632–638. doi:10.1001/jamainternmed.2013.3023

Gowd, A. K., Agarwalla, A., Amin, N. H., Romeo, A. A., Nicholson, G. P., Verma, N. N., et al. (2019). Construct validation of machine learning in the prediction of short-term postoperative complications following total shoulder arthroplasty. J. Shoulder Elb. Surg. 28 (12), e410–e421. doi:10.1016/j.jse.2019.05.017

Graboyes, E. M., Schrank, T. P., Worley, M. L., Momin, S. R., Day, T. A., Huang, A. T., et al. (2018). Thirty-day readmission in patients undergoing head and neck microvascular reconstruction. Head. Neck 40 (7), 1366–1374. doi:10.1002/hed.25107

Gupta, A., and Fonarow, G. C. (2018). The hospital readmissions reduction program-learning from failure of a healthcare policy. Eur. J. Heart Fail. 20 (8), 1169–1174. doi:10.1002/ejhf.1212

Hsieh, C. H., Lu, R. H., Lee, N. H., Chiu, W. T., Hsu, M. H., Li, Y. C., et al. (2011). Novel solutions for an old disease: diagnosis of acute appendicitis with random forest, support vector machines, and artificial neural networks. Surgery 149 (1), 87–93. doi:10.1016/j.surg.2010.03.023

Jencks, S. F., Williams, M. V., and Coleman, E. A. (2009). Rehospitalizations among patients in the Medicare fee-for-service program. N. Engl. J. Med. 360 (14), 1418–1428. doi:10.1056/NEJMsa0803563

Jones, C. E., Graham, L. A., Morris, M. S., Richman, J. S., Hollis, R. H., Wahl, T. S., et al. (2017). Association between preoperative hemoglobin A1c levels, postoperative hyperglycemia, and readmissions following gastrointestinal surgery. JAMA Surg. 152 (11), 1031–1038. doi:10.1001/jamasurg.2017.2350

Jordan, M. I., and Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects. Science 349 (6245), 255–260. doi:10.1126/science.aaa8415

Kansagara, D., Englander, H., Salanitro, A., Kagen, D., Theobald, C., Freeman, M., et al. (2011). Risk prediction models for hospital readmission: a systematic review. JAMA 306 (15), 1688–1698. doi:10.1001/jama.2011.1515

Ko, D. T., Khera, R., Lau, G., Qiu, F., Wang, Y., Austin, P. C., et al. (2020). Readmission and mortality after hospitalization for myocardial infarction and heart failure. J. Am. Coll. Cardiol. 75 (7), 736–746. doi:10.1016/j.jacc.2019.12.026

Kong, C. W., and Wilkinson, T. M. A. (2020). Predicting and preventing hospital readmission for exacerbations of COPD. ERJ Open Res. 6 (2), 00325. doi:10.1183/23120541.00325-2019

Lemaître, G., Nogueira, F., and Aridas, C. K. (2017). Imbalanced-learn: A python toolbox to tackle the curse of imbalanced datasets in machine learning. J. Mach. Learn. Res. 18 (1), 559–563. doi:10.48550/arXiv.1609.06570

Liao, J. C., Chen, W. J., Chen, L. H., Niu, C. C., Fu, T. S., Lai, P. L., et al. (2013). Complications associated with instrumented lumbar surgery in patients with liver cirrhosis: a matched cohort analysis. Spine J. 13 (8), 908–913. doi:10.1016/j.spinee.2013.02.028

Low, L. L., Liu, N., Ong, M. E. H., Ng, E. Y., Ho, A. F. W., Thumboo, J., et al. (2017). Performance of the LACE index to identify elderly patients at high risk for hospital readmission in Singapore. Med. Baltim. 96 (19), e6728. doi:10.1097/MD.0000000000006728

Martin, L. A., Kilpatrick, J. A., Al-Dulaimi, R., Mone, M. C., Tonna, J. E., Barton, R. G., et al. (2019). Predicting ICU readmission among surgical ICU patients: Development and validation of a clinical nomogram. Surgery 165 (2), 373–380. doi:10.1016/j.surg.2018.06.053

Miotto, R., Li, L., Kidd, B. A., and Dudley, J. T. (2016). Deep patient: An unsupervised representation to predict the future of patients from the electronic health records. Sci. Rep. 6, 26094. doi:10.1038/srep26094

Mišić, V. V., Gabel, E., Hofer, I., Rajaram, K., and Mahajan, A. (2020). Machine learning prediction of postoperative emergency department hospital readmission. Anesthesiology 132 (5), 968–980. doi:10.1097/ALN.0000000000003140

Johnson, P. C., Xiao, Y., Wong, R. L., D'Arpino, S., Moran, S. M. C., Lage, D. E., et al. (2019). Potentially avoidable hospital readmissions in patients with advanced cancer. J. Oncol. Pract. 15 (5), e420–e427. doi:10.1200/JOP.18.00595

Peng, S. Y., Chuang, Y. C., Kang, T. W., and Tseng, K. H. (2010). Random forest can predict 30-day mortality of spontaneous intracerebral hemorrhage with remarkable discrimination. Eur. J. Neurol. 17 (7), 945–950. doi:10.1111/j.1468-1331.2010.02955.x

Phan, K., Kim, J. S., Capua, J. D., Lee, N. J., Kothari, P., Dowdell, J., et al. (2017). Impact of operation time on 30-day complications after adult spinal deformity surgery. Glob. Spine J. 7 (7), 664–671. doi:10.1177/2192568217701110

Polites, S. F., Potter, D. D., Glasgow, A. E., Klinkner, D. B., Moir, C. R., Ishitani, M. B., et al. (2017). Rates and risk factors of unplanned 30-day readmission following general and thoracic pediatric surgical procedures. J. Pediatr. Surg. 52 (8), 1239–1244. doi:10.1016/j.jpedsurg.2016.11.043

Rainville, F. M. D., Fortin, F. A., Gardner, M. A., Parizeau, M., and Gagné, C. (2014). Deap: enabling nimbler evolutions. SIGEVOlution 6 (2), 17–26. doi:10.1145/2597453.2597455

Sander, C., Oppermann, H., Nestler, U., Sander, K., von Dercks, N., Meixensberger, J., et al. (2020). Early unplanned readmission of neurosurgical patients after treatment of intracranial lesions: a comparison between surgical and non-surgical intervention group. Acta Neurochir. 162 (11), 2647–2658. doi:10.1007/s00701-020-04521-4

Shebeshi, D. S., Dolja-Gore, X., and Byles, J. (2020). Unplanned readmission within 28 Days of hospital discharge in a longitudinal population-based cohort of older Australian women. Int. J. Environ. Res. Public Health 17 (9), 3136. doi:10.3390/ijerph17093136

Sperling, C. D., Xia, L., Berger, I. B., Shin, M. H., Strother, M. C., Guzzo, T. J., et al. (2018). Obesity and 30-day outcomes following minimally invasive Nephrectomy. Urology 121, 104–111. doi:10.1016/j.urology.2018.08.002

Torkamanian-Afshar, M., Nematzadeh, S., Tabarzad, M., Najafi, A., Lanjanian, H., Masoudi-Nejad, A., et al. (2021). In silico design of novel aptamers utilizing a hybrid method of machine learning and genetic algorithm. Mol. Divers. 25 (3), 1395–1407. doi:10.1007/s11030-021-10192-9

van Walraven, C., Dhalla, I. A., Bell, C., Etchells, E., Stiell, I. G., Zarnke, K., et al. (2010). Derivation and validation of an index to predict early death or unplanned readmission after discharge from hospital to the community. CMAJ 182 (6), 551–557. doi:10.1503/cmaj.091117

van Walraven, C., Jennings, A., and Forster, A. J. (2012). A meta-analysis of hospital 30-day avoidable readmission rates. J. Eval. Clin. Pract. 18 (6), 1211–1218. doi:10.1111/j.1365-2753.2011.01773.x

Wasfy, J. H., Hidrue, M. K., Ngo, J., Tanguturi, V. K., Cafiero-Fonseca, E. T., Thompson, R. W., et al. (2020). Association of an acute myocardial infarction readmission-reduction program with mortality and readmission. Circ. Cardiovasc. Qual. Outcomes 13 (5), e006043. doi:10.1161/CIRCOUTCOMES.119.006043

Workman, K. K., Angerett, N., Lippe, R., Shin, A., and King, S. (2020). Thirty-day unplanned readmission after total knee arthroplasty at a teaching community hospital: Rates, reasons, and risk factors. J. Knee Surg. 33 (2), 206–212. doi:10.1055/s-0038-1677510

Keywords: machine learning, unplanned hospital readmission, surgery, prediction, elderly

Citation: Li L, Wang L, Lu L and Zhu T (2022) Machine learning prediction of postoperative unplanned 30-day hospital readmission in older adult. Front. Mol. Biosci. 9:910688. doi: 10.3389/fmolb.2022.910688

Received: 01 April 2022; Accepted: 06 July 2022;
Published: 10 August 2022.

Edited by:

Charles Hsu, Qatar University, Qatar

Reviewed by:

Ana Cláudia Coelho, University of Trás-os-Montes and Alto Douro, Portugal
Xianfeng Shen, Hubei University of Medicine, China

Copyright © 2022 Li, Wang, Lu and Zhu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Li Lu, luli@scu.edu.cn; Tao Zhu, xwtao_zhu@sina.cn

These authors have contributed equally to this work
