Appl Clin Inform 2025; 16(04): 879-891
DOI: 10.1055/a-2647-1069
Special Issue on CDS Failures

Assessing Medication CDS Usability: Pilot Results from 10 Outpatient Clinics

Zoe Co
1   Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan, United States
2   Department of General Internal Medicine, Brigham and Women's Hospital, Boston, Massachusetts, United States
David W. Bates
1   Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan, United States
3   Clinical and Quality Analysis, Mass General Brigham, Somerville, Massachusetts, United States
4   Harvard Medical School, Boston, Massachusetts, United States
Jessica M. Cole
5   Division of Epidemiology, University of Utah School of Medicine, Salt Lake City, Utah, United States
Raj Ratwani
6   MedStar Health National Center for Human Factors in Healthcare, Washington, District of Columbia, United States
David C. Classen
5   Division of Epidemiology, University of Utah School of Medicine, Salt Lake City, Utah, United States

Funding: None.

Abstract

Objectives

This study aimed to develop a human factors assessment for medication-related clinical decision support (CDS), based on the Instrument for Evaluating Human Factors Principles in Medication-Related Decision Support Alerts (I-MeDeSA), a previously validated tool for assessing the integration of human factors principles in CDS, and to pilot the assessment in 10 outpatient clinics across the United States.

Methods

The human factors assessment was developed from past validations of I-MeDeSA. Modifications included rewording questions and reformatting answer choices as check boxes, allowing respondents to select multiple options. We also added a section on how clinicians resolved alerts. Each clinic received a percentage score reflecting how well its CDS adhered to human factors principles. To complete the assessment, the testing team at each clinic triggered a high-severity drug–drug interaction (DDI) alert and then answered the assessment questions about it. The assessment was piloted in 10 outpatient clinics, each of which used a different commercial electronic health record (EHR) system.

Results

The final assessment comprised five sections and 12 questions addressing the timing, visual presentation, severity, content, and actions of the DDI alert. The mean overall score was 62%. Clinics' EHRs performed best on the sections covering the timing and visual presentation of the alert. However, in the “actions” section, 40% of the clinics could bypass high-severity alerts without any safeguards in place.

Conclusion

We found substantial variability in how well human factors principles were integrated into the design and delivery of DDI alerts across the outpatient clinics, and some clinics lacked important medication safeguards. Outpatient clinics can use this assessment to guide safety improvement initiatives.

Protection of Human and Animal Subjects

No human subjects were involved in this study; all testing used fictitious patient scenarios.

Publication History

Received: 28 January 2025

Accepted: 1 July 2025

Article published online: 20 August 2025

© 2025. Thieme. All rights reserved.

Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany