Summary
Objectives: The ratio of observed to expected mortality (standardized mortality ratio, SMR)
is a key indicator of quality of care. We use PreControl Charts to investigate the behavior
over time of the SMR of an existing tree-model for predicting mortality in intensive
care units (ICUs), and its implications for hospital ranking. We compare the results
to those of a logistic regression model.
Methods: We calculated SMRs of 30 equally-sized consecutive subsets from a total of 12,143
ICU patients aged 80 years or older and plotted them on a PreControl Chart. We calculated
individual hospital SMRs in 2009, with and without repeated recalibration of the models
on earlier data.
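The per-subset SMR computation described above can be sketched as follows. This is a minimal illustration, not the paper's code: it assumes the SMR of a subset is the sum of observed deaths divided by the sum of the model's predicted mortality probabilities (expected deaths), with patients kept in temporal order; the data, function name, and subset count default are illustrative.

```python
import numpy as np

def subset_smrs(observed, predicted, n_subsets=30):
    """Split patients (in temporal order) into equally sized consecutive
    subsets and return one SMR (observed/expected deaths) per subset."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    size = len(observed) // n_subsets  # patients per subset
    smrs = []
    for i in range(n_subsets):
        o = observed[i * size:(i + 1) * size].sum()   # observed deaths
        e = predicted[i * size:(i + 1) * size].sum()  # expected deaths
        smrs.append(o / e)
    return smrs

# Illustrative data: a well-calibrated model yields SMRs scattered around 1,
# which is the pattern a control chart would flag deviations from.
rng = np.random.default_rng(0)
p = rng.uniform(0.1, 0.6, 12000)                # predicted probabilities
y = (rng.uniform(size=12000) < p).astype(int)   # simulated outcomes
smrs = subset_smrs(y, p)
```

Plotting these 30 SMRs against the chart's control limits then shows whether the model's calibration drifts over time.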
Results: The overall SMR of the tree-model was stable over time, in contrast to logistic regression.
Both models were stable after repeated recalibration. On the whole validation set, the
overall SMR of the tree-model differed statistically significantly from that of the
logistic regression model (1.00 ± 0.012 vs. 0.94 ± 0.01), and the tree-model performed
worse (AUC 0.76 ± 0.005 vs. 0.79 ± 0.004; Brier score 0.17 ± 0.012 vs. 0.16 ± 0.010).
The range of individual hospital SMRs in 2009 was 0.53–1.31 for the tree-model and
0.64–1.27 for logistic regression. The proportion of individual hospitals with SMR >1,
hinting at poor quality of care, decreased from 38% to 29% after recalibration for the
tree-model, and increased from 15% to 35% for logistic regression.
Conclusions: Although the tree-model seemingly has a longer shelf life than the logistic
regression model, its SMR may be less useful for quality-of-care assessment because it
responds insufficiently to changes in the population over time.
Keywords
Prognostic models - intensive care - predictive performance - temporal validation
- statistical process control