DOI: 10.1055/s-0045-1810755
Application of machine learning algorithms to recipient-related data for the prediction of short-term survival following liver transplantation
Background and Objective: Current prediction of short-term survival after liver transplantation (LT) relies primarily on linear clinical scores such as the MELD, Donor-MELD, or Balance-of-Risk score. However, these scores often provide limited predictive accuracy and depend on donor-related parameters that only become available shortly before transplantation, restricting their usefulness for early risk stratification on the waiting list. The aim of this study was to develop and evaluate a recipient-based machine learning (ML) model to predict short-term post-transplant survival using only variables available before organ allocation.
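For reference, the MELD score mentioned above is a simple linear combination of log-transformed laboratory values (UNOS formulation; the usual lower bounds of 1.0 and the creatinine cap of 4.0 mg/dL are applied before taking logarithms):

\[
\text{MELD} = 9.57\,\ln(\text{creatinine [mg/dL]}) + 3.78\,\ln(\text{bilirubin [mg/dL]}) + 11.2\,\ln(\text{INR}) + 6.43
\]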
Materials and Methods: Clinical data from 1260 LT recipients were used to train and validate the models. Several algorithms were evaluated, including Random Forest, XGBoost, support vector machines (SVMs), and a neural network. Model discrimination was assessed using receiver operating characteristic (ROC) curves and additional evaluation metrics. SHAP (SHapley Additive exPlanations) analysis was used to quantify the relative importance of each variable based on its Shapley value.
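The modeling code itself is not part of this abstract; the following Python sketch only illustrates the described workflow under stated assumptions: a hypothetical recipient table recipients.csv with a binary one-year survival column survival_1y, and no preprocessing such as imputation of missing laboratory values.

```python
# Minimal sketch of the described model comparison; file name and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Recipient-only variables available before organ allocation (imputation etc. omitted)
df = pd.read_csv("recipients.csv")
X, y = df.drop(columns=["survival_1y"]), df["survival_1y"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=500, random_state=42),
    "XGBoost": XGBClassifier(eval_metric="logloss", random_state=42),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True, random_state=42)),
    "Neural Network": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=42),
    ),
}

# Discrimination is assessed with the area under the ROC curve on a held-out split
for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```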
Results: The final Random Forest model, developed using a subset of clinically relevant parameters selected from the metadata and SHAP analysis, demonstrated excellent predictive performance for 1-year post-transplant survival, achieving an AUC of 0.88. Among the top predictors, hemoglobin emerged as a strong positive factor for survival, while elevated C-reactive protein was associated with a significantly reduced likelihood of predicted survival. Additional key variables included leukocyte count, international normalized ratio, and serum creatinine—each negatively associated with survival. Bilirubin (both direct and total) and serum sodium contributed moderately to the model’s prediction, while iron and albumin had minor yet still relevant impacts. In contrast, demographic and static recipient characteristics such as age, sex, and body size showed minimal individual predictive value.
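The abstract does not specify how the reduced predictor set was derived in code; the sketch below, continuing from the hypothetical variables of the previous example, shows one plausible SHAP-guided reduction to a compact Random Forest. It is an illustration, not the authors' implementation.

```python
# Illustrative SHAP-based ranking and refit on a reduced predictor set;
# reuses X_train, X_test, y_train, y_test and the fitted models from the sketch above.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rf = models["Random Forest"]            # fitted Random Forest from the previous sketch
explainer = shap.TreeExplainer(rf)
sv = explainer.shap_values(X_test)
if isinstance(sv, list):                # older SHAP versions: one array per class
    sv = sv[1]
elif sv.ndim == 3:                      # newer SHAP versions: (samples, features, classes)
    sv = sv[:, :, 1]

# Rank variables by their mean absolute SHAP contribution to predicted survival
importance = pd.Series(np.abs(sv).mean(axis=0), index=X_test.columns)
importance = importance.sort_values(ascending=False)
print(importance.head(10))              # e.g. hemoglobin, CRP, leukocytes, INR, creatinine, ...

# Refit a compact Random Forest on the top-ranked predictors only
top_features = importance.head(10).index
rf_final = RandomForestClassifier(n_estimators=500, random_state=42)
rf_final.fit(X_train[top_features], y_train)
auc = roc_auc_score(y_test, rf_final.predict_proba(X_test[top_features])[:, 1])
print(f"Reduced-model AUC: {auc:.2f}")
```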
Conclusion: This study demonstrates that advanced machine learning approaches based solely on recipient data can improve the prediction of postoperative survival in LT recipients. These findings highlight the potential of data-driven models to support early risk stratification, patient prioritization, and clinical decision-making in LT programs.
Publication History
Article published online:
04 September 2025
© 2025. Thieme. All rights reserved.
Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany