DOI: 10.1055/a-2630-4192
Evaluating Equity in Usage and Effectiveness of the CONCERN Early Warning System
Funding The American Nurses Foundation Reimagining Nursing Initiative approved a grant supporting this study.

Abstract
Background
The CONCERN Early Warning System (CONCERN EWS) is an artificial intelligence-based clinical decision support system (AI-CDSS) for the prediction of clinical deterioration, leveraging signals from nursing documentation patterns. While a recent multisite randomized controlled trial (RCT) demonstrated its effectiveness in reducing inpatient mortality and length of stay, evaluating implementation outcomes is essential to ensure equitable results across patient populations.
Objectives
This study aims to (1) assess whether clinicians' usage of the CONCERN EWS, as measured by CONCERN Detailed Prediction Screen launches, varied by patient demographic characteristics, including sex, race, ethnicity, and primary language; and (2) evaluate whether the CONCERN EWS's effectiveness in reducing the risk of in-hospital mortality varied across patient demographic groups.
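The abstract does not name the statistical test behind aim (1); as a minimal illustrative sketch only, per-encounter launch counts could be regressed on demographic indicators, for example with a Poisson model in statsmodels. The input file and all column names (launch_count, sex, race, ethnicity, primary_language) are hypothetical placeholders, not the study's actual variables.

```python
# Hedged sketch: do screen-launch counts differ by patient demographics?
# All file and column names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

usage = pd.read_csv("screen_launches_per_encounter.csv")  # hypothetical file

# Poisson regression of launch counts on demographic group indicators
model = smf.poisson(
    "launch_count ~ C(sex) + C(race) + C(ethnicity) + C(primary_language)",
    data=usage,
).fit()

# Near-zero, non-significant group coefficients would be consistent with
# equitable usage across demographic groups.
print(model.summary())
```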
Methods
We conducted a retrospective observational analysis of electronic health record log files and clinical outcomes from a multisite, pragmatic, cluster-RCT involving four hospitals across two health care systems. Equity in usage was assessed by comparing CONCERN Detailed Prediction Screen launches across demographic groups, and effectiveness was examined by comparing the risk of in-hospital mortality between intervention and usual care groups using Cox proportional hazards models adjusted for patient characteristics.
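A minimal sketch of the adjusted Cox proportional hazards comparison described above, using the Python lifelines library. The input file, column names (time_in_hospital_days, died_in_hospital, intervention, age, sex, race_ethnicity, primary_language), and covariate set are assumptions for illustration, not the study's actual specification.

```python
# Hedged sketch of an adjusted Cox proportional hazards model for in-hospital
# mortality, intervention vs. usual care. Column names are placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("encounters.csv")  # hypothetical: one row per encounter

# One-hot encode categorical adjusters; intervention and died_in_hospital are
# assumed to be 0/1 indicators in the source data.
model_df = pd.get_dummies(
    df[["time_in_hospital_days", "died_in_hospital", "intervention",
        "age", "sex", "race_ethnicity", "primary_language"]],
    columns=["sex", "race_ethnicity", "primary_language"],
    drop_first=True,
    dtype=float,
)

cph = CoxPHFitter()
cph.fit(
    model_df,
    duration_col="time_in_hospital_days",
    event_col="died_in_hospital",
)
cph.print_summary()  # exp(coef) for `intervention` is the adjusted hazard ratio
```

Because the trial was cluster-randomized, a complete analysis would also need to account for clustering of encounters within randomized units, which this sketch omits.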
Results
Clinicians' CONCERN Detailed Prediction Screen launches did not significantly differ by patients' demographic characteristics, suggesting equitable usage. The CONCERN EWS was significantly associated with a reduced risk of in-hospital mortality overall (adjusted hazard ratio [HR] = 0.644, 95% CI: 0.532–0.778, p < 0.0001), with consistent effectiveness across most groups. Notably, patients whose primary language was not English experienced a greater reduction in mortality risk than patients whose primary language was English (adjusted HR = 0.419, 95% CI: 0.287–0.610, p = 0.0082).
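One hedged way to probe a subgroup finding like the language result above is an intervention-by-language interaction term in the Cox model; the self-contained sketch below uses placeholder file and column names and is not the authors' actual specification.

```python
# Hedged sketch of a subgroup check: does the intervention's association with
# mortality differ by primary language? Column names are placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("encounters.csv")  # hypothetical: one row per encounter
df["non_english"] = (df["primary_language"] != "English").astype(int)
df["intervention_x_non_english"] = df["intervention"] * df["non_english"]

cph = CoxPHFitter()
cph.fit(
    df[["time_in_hospital_days", "died_in_hospital",
        "intervention", "non_english", "intervention_x_non_english", "age"]],
    duration_col="time_in_hospital_days",
    event_col="died_in_hospital",
)
cph.print_summary()
# exp(b_intervention): adjusted HR among English-primary-language patients.
# exp(b_intervention + b_interaction): adjusted HR among patients whose primary
# language is not English; a significant interaction term indicates the
# intervention effect differs between the two groups.
```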
Conclusion
This study presents a case of evaluating equity in AI-CDSS usage and effectiveness, contributing to a limited body of literature. While the findings suggest equitable engagement and effectiveness, ongoing evaluations are needed to understand the observed variability and to ensure responsible implementation.
Keywords
clinical decision support systems - artificial intelligence - implementation science - electronic health records - nursing informatics
Protection of Human and Animal Subjects
Institutional review boards at the researchers' institutions (blinded for review) approved the protocol with a waiver of consent.
Authors' Contributions
Conceptualization: R.Y.L., S.C.R., K.D.C., P.C.D. Data curation, formal analysis, methodology: R.Y.L., H.J., G.L., S.C.R. Writing—original draft: R.Y.L. Writing—review and editing: R.Y.L., K.D.C., P.C.D., G.L., H.J., T.D., and S.C.R.
Publication History
Received: 21 February 2025
Accepted: 07 June 2025
Accepted Manuscript online: 10 June 2025
Article published online: 13 August 2025
© 2025. Thieme. All rights reserved.
Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany