Appl Clin Inform 2018; 09(04): 817-830
DOI: 10.1055/s-0038-1675210
State of the Art/Best Practice Paper
Georg Thieme Verlag KG Stuttgart · New York

An Evidence-Based Tool for Safe Configuration of Electronic Health Records: The eSafety Checklist

Pritma Dhillon-Chattha
1   Alberta Health Services, Edmonton, Alberta, Canada
2   Department of Nursing, Yale University, Orange, Connecticut, United States
,
Ruth McCorkle
2   Department of Nursing, Yale University, Orange, Connecticut, United States
,
Elizabeth Borycki
3   School of Health Information Science, University of Victoria, Victoria, British Columbia, Canada

Address for correspondence

Pritma Dhillon-Chattha, DNP, MHA, RN
Department of Nursing, Yale University
Orange, Connecticut
United States   

Publication History

Received: 21 March 2018

Accepted: 10 September 2018

Publication Date: 14 November 2018 (online)

 

Abstract

Background Electronic health records (EHRs) are transforming the way health care is delivered. They are central to improving the quality of patient care and are credited with making health care more accessible, reliable, and safe. However, in recent years, evidence suggests that specific features and functions of EHRs can introduce new, unanticipated patient safety concerns, which can be mitigated by safe configuration practices.

Objective This article outlines the development of a detailed and comprehensive evidence-based checklist of safe configuration practices for use by clinical informatics professionals when configuring hospital-based EHRs.

Methods A literature review was conducted to synthesize evidence on safe configuration practices; data were analyzed to elicit themes of common EHR system capabilities. Two rounds of testing were completed with end users to inform checklist design and usability. This was followed by a four-member expert panel review, where each item was rated for clarity (clear, not clear), and importance (high, medium, low).

Results An expert panel consisting of three clinical informatics professionals and one health information technology expert reviewed the checklist for clarity and importance. Medium and high importance ratings were considered affirmative responses. Of the 870 items contained in the original checklist, 535 (61.4%) received 100% affirmative agreement among all four panelists. Clinical panelists had a higher affirmative agreement rate of 75.5% (656 items). Upon detailed analysis, items with 100% clinician agreement were retained in the checklist, with the exception of 47 items; a further 33 items were added, resulting in a total of 642 items in the final checklist.

Conclusion Safe implementation of EHRs requires consideration of both technical and sociotechnical factors through close collaboration of health information technology and clinical informatics professionals. The recommended practices described in this checklist provide systems implementation guidance that should be considered when EHRs are being configured, implemented, audited, or updated, to improve system safety and usability.



Background and Significance

Electronic health records (EHRs) are transforming health care and are often credited with improving the quality, safety, and efficiency of care delivery. The Institute of Medicine (IOM) states that, “more than any other health technology to date, computers and communication technologies will affect the lives of patients in the twenty-first century.”[1] EHRs, a type of health information technology (HIT), can reduce patient safety incidents, but can also cause technology-induced errors if configured and/or used in an unsafe manner.[2] The literature on EHR safety has matured over the last decade, but it remains a relatively new area of safety science with limited evidence, standards, and tools.

In 2012, the Office of the National Coordinator (ONC) for Health Information Technology commissioned the IOM to review the evidence on the impact of HIT (including EHRs) on patient safety and to recommend actions to be taken. In the report titled Health IT and Patient Safety: Building Safer Systems for Better Care,[1] the IOM found that HIT can improve patient safety under the right conditions, but that those conditions cannot be replicated easily. The committee discovered that the information needed for an objective analysis of the safety of HIT was not available.[1] Instead, they focused on ways to make information about the magnitude of harm discoverable. They offered a vision of how the discipline of safety science can be better integrated into a HIT-enabled world, and provided specific recommendations to establish a HIT safety management framework that included monitoring and evaluation of incidents at both organizational and national levels.[1]

Canada responded to the IOM report by developing national standards titled 2013 eSafety Guidelines that offer program-level guidance for the inclusion of safety in design, implementation, and use of EHRs.[3] Published by Digital Health Canada (formerly known as COACH), the guidelines coined the term eSafety and defined it as the safety of HIT; the policies, processes, and practices which serve to protect patients against harm resulting from the development, implementation, and use of HIT solutions and software.[3] That same year the ONC published the Health Information Technology Patient Safety Action & Surveillance Plan.[4] Both guidelines called for adoption of eSafety frameworks in public and private health care organizations and more stringent policies and programs to support the safe implementation, use, and continuous improvement of EHRs.[3] [4]

It is estimated that approximately one-third of patient safety incidents following an EHR implementation are caused by its configuration and use.[5] In an audit conducted by Magrabi et al[6] of the U.S. Food and Drug Administration database, 42 reports of patient harm and 4 deaths among 436 critical incidents involving EHRs were reported over a 30-month period ending July 2010. A more recent study analyzed EHR-related patient safety incidents across 23 fully digital hospitals in Finland over a 2-year period and showed the proportion of incidents to be markedly higher. The study found that human–computer interaction problems were the most frequently reported, and that technology-induced errors pose a significant safety risk in fully digital hospitals.[7] Identifying HIT-related patient safety events, however, can be challenging. They are often categorized under other, more predominant root causes; for example, order entry errors are often categorized as medication-related events instead of HIT-related events.[8] Free-text narratives in patient safety event reports often reveal HIT contributing factors; however, the need to manually review these narratives limits the identification of HIT hazards.[8] Since mandatory reporting of HIT events is not required, and patient safety events overall are grossly underreported,[9] we can presume that actual rates of error are much higher.

To better understand errors and near misses associated with EHRs, Dr. Adelman, Chief Patient Safety Officer at Columbia University Medical Center, created the “wrong-patient retract-and-reorder measure,” which became the first HIT safety measure to be endorsed by the National Quality Forum (NQF Measure #2723).[10] The measure estimates unreported near misses by capturing how often providers placed an order on the wrong patient and retracted it within 2 minutes.[10] It was discovered that 6,885 wrong-patient near-miss errors occurred at Columbia University Medical Center over a 12-month period. Based on this, the wrong-patient electronic order rate was estimated at 14 incidents per day, which was significantly higher than the rate of reported incidents. Dr. Adelman concluded that proactive audits of EHRs reveal significantly higher error and near-miss rates, which can be reduced by safer configuration of EHRs.[10]

The eSafety literature has focused predominantly on the development and adoption of high-level frameworks, policies, and processes, with very limited attention to tactical configuration guidance on preventing unsafe human–computer interaction problems—the most frequently reported incidents. In recent years, however, tools have begun to emerge to support organizations with configuration activities. Sengstack[11] published a 48-item computerized provider order entry (CPOE) checklist in 2010 for clinical informatics professionals to reference when configuring CPOE modules to reduce unanticipated harms. Given the increased adoption of CPOE over the last decade, this checklist does not reflect significant recent learning, and it is limited to one specific module, excluding additional key EHR capabilities that have contributed to eSafety incidents.

The National Center for Cognitive Informatics and Decision Making in Healthcare developed a set of 10 “Safety Enhanced Design Briefs” in 2013.[12] They cover a variety of topics including effective table design, effective use of color, medication lists, and results management. The briefs are rich in tacit and practical knowledge to aid in reducing human–computer interaction problems and provide interventions that are in direct control of clinical informatics professionals (as opposed to program and policy recommendations requiring leadership approval and assignment of resources). These one-page briefs are excellent high-level design resources, but lack the level of detail informatics professionals seek during configuration and implementation. Additional guides have emerged from organizations such as the Institute for Safe Medication Practices and ECRI Institute; however, they focus on program level recommendations or provide guidance on one specific capability such as the copy and paste function,[13] and are in formats that are difficult to consume and translate into system configurations.

Sittig et al[14] published an organizational self-assessment strategy encompassing a set of nine tools called the “SAFER Guides” in 2014 to optimize eSafety. The nine topics are: (1) organizational responsibilities, (2) system interfaces, (3) contingency planning, (4) high priority practices, (5) system configuration, (6) patient identification, (7) CPOE with decision support, (8) test results reporting and follow-up, and (9) clinical communication. Cumulatively, the SAFER Guides provide a total of 158 recommended practices that span organizational policy development, staff education, and tactical configuration items. The tools were designed to assess safety at a program level and, therefore, do not provide many detailed configuration interventions for front-line informatics professionals.

Setting

This article describes the development of a comprehensive (642 item) eSafety checklist of detailed user interface configuration recommendations to assist health and clinical informatics professionals in applying evidence-based safety practices during configuration of EHRs. The need for this checklist was identified and validated by the sponsor organization, Alberta Health Services (AHS), in preparation for the implementation of its new provincial clinical information system, ConnectCare.

AHS was founded in 2008 after merging nine former health regions and three agencies to create one provincial health service. It is Canada's first and largest province-wide, fully integrated health system, responsible for delivering publicly funded health services to more than 4.2 million Albertans.[15] With approximately 109,000 employees and over 650 sites, AHS is the fifth largest employer in Canada.[15] Following the merger, AHS inherited approximately 1,300 legacy HIT systems, including four major hospital-based EHRs. The fragmentation of systems created inefficiencies in care and potential patient safety hazards; therefore, in 2016, AHS was granted $400 million in funding from the provincial government over 4 years to acquire and implement a new province-wide EHR. With safety being a core value at AHS, the adoption of the Canadian eSafety guidelines, including policies, procedures, and tools for the safe design and use of EHRs, was identified as a priority. The eSafety checklist was developed to address an information gap identified by AHS clinical informatics professionals: the lack of a consolidated resource of eSafety configuration interventions that can be directly applied to improve human–computer interactions and minimize unanticipated errors. The checklist was developed to be system-agnostic and, therefore, can be used to support the configuration or optimization of any hospital-based EHR.



Objective

The objective of the project was to consolidate current evidence on safe configuration practices for hospital-based EHRs into a user-friendly format for use by health and clinical informatics professionals. Project phases included: (1) synthesis of evidence on safe configuration practices, (2) organization of evidence into an easy-to-use checklist, and (3) validation of checklist content by a panel of experts.

Project scope included providing configuration guidance on key EHR modules that are common among various vendors and used frequently by clinicians within hospital settings. The tool was designed specifically for use by information technology and clinical informatics professionals during the configuration phase of EHR implementations. It is assumed that the system in place meets applicable regulatory and meaningful use requirements. The tool is a new instrument; a detailed checklist of this kind has not otherwise been developed to support safe configuration of EHRs.



Methods

Phase One Methods

To synthesize evidence on safe configuration practices, our first project phase, a literature search was conducted in November 2015 using Ovid, PubMed, Scopus, and Google Scholar with the search terms listed in [Table 1]. Because much of the eSafety literature focuses on high-level adoption of programs and policies, we developed our search terms in an effort to return articles focused instead on safe user interface configuration, using common terms for EHRs and their key capabilities. The search was restricted to English-language, peer-reviewed journal articles published since 2005. Searching was supplemented by scanning the references of included articles. The search returned 418 articles in total; upon initial screening of titles and abstracts, 67 articles were identified as duplicates and 211 were found to be irrelevant, leaving 140 articles for full-text review. Seven additional articles were identified by reviewing bibliographies, yielding a total of 147. Based on full-text review, 107 articles were excluded because safe user interface configuration was not a main objective of the paper, leaving 40 articles for detailed analysis. A secondary literature scan was conducted using the same databases and search terms in August 2017, limited to publications between 2015 and 2017; one additional article was included, yielding a total of 41 journal articles ([Fig. 1]).
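The screening arithmetic above can be checked directly. The following sketch simply retraces the reported counts; the variable names are illustrative, not part of the study's methods:

```python
# Counts reported for the November 2015 search.
initial_hits = 418
duplicates = 67
irrelevant_by_title_abstract = 211

# Title/abstract screening, then bibliography scanning.
full_text_candidates = initial_hits - duplicates - irrelevant_by_title_abstract
assert full_text_candidates == 140
full_text_candidates += 7  # articles found by reviewing bibliographies
assert full_text_candidates == 147

# Full-text review, then the August 2017 secondary scan.
excluded_at_full_text = 107
included_2015 = full_text_candidates - excluded_at_full_text
assert included_2015 == 40
included_total = included_2015 + 1  # one article added in 2017
assert included_total == 41
```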

Fig. 1 Literature search.
Table 1

Search terms and restrictions

Concept 1: Electronic health record; Health information technology; eHealth; Health information system; Hospital information system; Clinical information system; Medication administration record; Computerized provider order entry; Clinical decision support system; Patient portal; Clinical communication; Electronic referral; Result management

AND

Concept 2: Safety; Electronic safety; Safety management; Patient safety; Incident; Error

AND

Concept 3: Configuration; Design; Usability; User interface; User interface design; User-centered design

Note: Restrictions: published since 2005; peer-reviewed journal; English language.


The same search terms (excluding publication year parameters) were used to conduct a Web search in November 2015 and August 2017 to synthesize relevant gray literature, standards, best practice guidelines, and lessons learned from reputable international agencies and organizations. The searches returned 103 unique articles that were reviewed for inclusion, of which 46 items were deemed relevant in 2015, with an additional 10 items included in 2017.

In total, 41 peer-reviewed journal articles and 56 gray literature items were included ([Fig. 1]). Each item was assessed for evidence level and quality by the project lead using the Johns Hopkins Nursing Evidence-Based Practice Model.[16] Data were extracted in an evidence table that detailed article design, purpose, outcome, configuration recommendations, and limitations.

Finally, recommended practices were organized according to a list of eight core EHR functionalities identified by the IOM Committee on Data Standards and Patient Safety.[17] Six of the eight core functionalities (health information and data, clinical decision support, order management, result management, clinical communication, and patient portal) aligned with recommended practices extracted from the literature review. Two functionalities (administrative processes, population health management) were excluded because they were outside the project scope. Five additional core functionalities that were common among EHR vendors and had frequently been implicated in unanticipated safety incidents were identified from the data; a complete list of EHR functionalities considered and included in the checklist is provided in [Table 2].

Table 2

Core EHR functionalities

Identified by IOM, 2003[17]: Population Health Management; Administrative Processes; Health Information & Data; Order Management; Decision Support; Result Management; Electronic Communication; Patient Support

Additional themes from literature review: Quality Improvement; System-Wide Settings; Patient Identification; Medication Management; Referral Management

Final functionalities included in checklist: Global Settings; Patient Identification; Clinical Documentation; Order Management; Clinical Decision Support; Medication Management; Referral Management; Result Management; Clinical Communication; Patient Portal

Abbreviations: EHR, electronic health record; IOM, Institute of Medicine.




Phase Two Methods

The second phase of the project was to develop a checklist that was easy to navigate and use by its end users—health and clinical informatics professionals. This was accomplished through two rounds of end-user testing—a process unique to the development of this tool that differentiates it from existing tools. Through a survey of end users conducted by the project lead, it was determined that Microsoft Excel was their software of choice and would support greater adoption of the tool. An Excel template was designed for the eSafety checklist in collaboration with AHS human factors experts, with the 11 EHR functions that emerged from phase one serving as separate tabs in the Excel workbook. Standard formatting was used within each tab, which initially included columns for: item number, category, safety dimension, recommended practice, compliance, comments, and source. Additional administrative tabs titled home, instructions and start, version control, and references were included in the workbook. Each tab was populated with test content and/or items in preparation for initial end-user testing.

Testing was conducted on the initial template design by eight end users in April 2017. Potential checklist users were selected based on their role, expertise, and professional background. Two users (one with clinical informatics expertise and one with information technology expertise) were selected from each of the four major EHR systems (Allscripts, Epic, Meditech, Metavision) in use at AHS. Participation in testing was entirely voluntary and confidential to the project committee. Trained human factors safety specialists designed and conducted the testing through one-on-one standardized semistructured interviews with participants, using an online meeting platform with screen-sharing capabilities. Deidentified data from testing were captured in a standardized spreadsheet and summarized by the human factors team into a PowerPoint presentation for the project committee.

A second round of usability testing was conducted by the human factors team in October 2017, once the checklist content was complete. Six end users representing three different EHR systems participated in the second round of testing. Testing was conducted online in one-to-one sessions with participants, but this time using two to three scenarios and standardized semistructured interview questions. Deidentified data were recorded and results were summarized into a PowerPoint for the project committee. Both rounds of testing greatly informed iterative design and usability of the eSafety checklist.



Phase Three Methods

The third and final phase of the project was to validate the checklist content by a panel of experts. Panel members were selected based on their professional expertise and experience in this subject area, local and national recognition, scholarship, and responsiveness to requests for participation.[18] Specifically, the panel included expert representation from the following domains: nursing informatics, medical informatics, HIT architecture, eSafety, and academia. Panel participation was voluntary, and upon acceptance of our request, each panelist was briefed on the project, the checklist, and the rating instructions during a one-to-one online meeting. Upon participation in the online briefing and receipt of written consent, panelists were sent a paper and electronic copy of rating instructions and the checklist.

Panelists were asked to rate each system capability, subcategory, and item within the checklist for clarity (clear/not clear) and importance (high/medium/low); definitions of each rating were provided to panelists and are listed in [Table 3]. Due to the checklist's length, panelists were given 1 month (December 15, 2017 to January 14, 2018) to complete their independent evaluations.

Table 3

Expert panel rating definitions

Clear: Recommendation is clear, direct, and easily understood.

Not clear: Recommendation lacks sufficient detail to be easily understood; there is a risk of misinterpretation.

High importance: A critical requirement that, if not applied, has a high likelihood to result in patient harm in the near future. System is not acceptable unless this requirement is satisfied.

Medium importance: A major requirement that, if not applied, might result in patient harm in the near future. Would enhance safety, but the system is not unacceptable if absent.

Low importance: A minor requirement that is unlikely to result in patient harm and would be nice to have if system and resources permit.



Results

Testing Results

The first round of testing gathered feedback on checklist utility, instruction clarity, and relevance of tab and column headings. On average, testers rated the overall value of the checklist to their work as 4.1 on a 5-point Likert scale, with 5 being high value. Of note, IT users rated the value lower than clinical informatics users did. Based on feedback, several changes were made to the instructions tab for improved clarity, particularly regarding when the checklist should be used and the guidance that only the sections relevant to the project at hand need to be completed. Participants agreed that the EHR functionality tabs were distinct, necessary, and without significant overlap. Specific feedback on changes to workbook tab labels included: (1) change “Home” to “About,” (2) add a “Glossary” tab, (3) change “System Wide Settings” to “Global Settings,” (4) change “Health Information & Data” to “Clinical Documentation,” and (5) change “Personal Health Management” to “Patient Portal.” Testers also reviewed the standard column headers within the EHR capability tabs; there was strong consensus to remove the “Category” and “eSafety Dimension” columns because they caused confusion. It was also suggested to add an “Evidence Level and Quality” column for each recommended configuration item. Further general feedback included: use consistent language, eliminate abbreviations, and ensure each item provides only one recommendation (e.g., do not recommend a font type and size on one line; separate these into two recommended configuration practices).

The second round of testing focused on the usability of the tool: participants were asked to use the checklist to improve the safety of three different EHR screenshots. Four participants rated the checklist as user friendly, while two rated it as difficult to use. IT participants again rated the checklist lower than clinical informatics professionals did. Specific feedback on changes to tab labels in the second round included: (1) create a tab labeled “Instructions and Scoring” and remove this content from the “About” tab, and (2) remove the “Quality Assurance” (QA) tab and instead add QA as a subsection within each system capability tab. Further general feedback included: provide greater clarity on how items are scored, provide “tips and tricks” for Excel navigation, improve the clarity of subheadings in system capability tabs, and reduce redundancy between the medication management and order management tabs.



Expert Panel Results

Seven panelists were invited to review the checklist, of whom six accepted the initial email invitation. Five went on to sign the consent (one did not respond) and participated in online briefings, after which they were provided the rating instructions and checklist. Four of the five participants returned completed ratings; one participant advised that they were unable to commit the time and effort required for a thorough review within the prescribed period over the holiday season. Two of the final panelists were nurses with eSafety expertise, one was a clinical informatics physician, and one was a nonclinician with a computer science background and eSafety specialization. All four panelists lived and worked in North America, and all but one held academic appointments.

Panel ratings for clarity (clear, not clear) and importance (high, medium, low) were summarized in a master Excel file by the project lead. Items were considered for retention and inclusion in the eSafety checklist if they achieved at least 78% affirmative responses for importance (“high” or “medium” ratings); this ensured a level of agreement greater than chance.[18] To achieve this level of agreement in a four-member panel, 100% affirmative responses were required for an item to be considered for retention. Of 870 items reviewed by the panelists, 100% affirmative agreement was achieved on 535 (61.4%) items. When nonclinical panelist ratings were removed, 100% affirmative agreement among clinical panelists (n = 3) was achieved on 656 (75.5%) items. Items that were rated with medium or high importance by all three clinical panelists but rated with low importance by the IT panelist were reviewed and considered in detail by the project team. Upon review, it was decided that these items (n = 121) would be retained in the checklist because they directly impacted human–computer interaction and safety from a clinical end-user perspective—something that an IT user may not be privy to. Due to poor overall agreement on QA items in each category (n = 98), it was decided that further research and testing is required on these items for inclusion, and therefore, all QA items were excluded from the checklist, including those that achieved 100% clinical panelist agreement (n = 47).
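The retention rule above can be made concrete: with four raters, the only achievable agreement fractions are multiples of 25%, so a 78% threshold forces unanimity. A minimal sketch of that reasoning and of the final item count (variable names are illustrative):

```python
import math

panel_size = 4
threshold = 0.78  # minimum fraction of affirmative (high/medium) ratings

# Smallest count of affirmative ratings whose fraction meets the threshold;
# with four panelists, 3/4 = 75% falls short, so unanimity (4/4) is required.
min_affirmative = math.ceil(threshold * panel_size)
assert min_affirmative == 4

# Reported item counts: clinician-agreed items, minus excluded QA items,
# plus patient portal items retained after re-review.
final_items = 656 - 47 + 33
assert final_items == 642
```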

The “Patient Portal” tab received 0% agreement among the four panelists because a clinical panelist rated all items within this tab with “low importance.” The panelist indicated they had never worked with patient portals, and therefore, did not feel capable of rating these items. The project team discussed this tab at length and agreed it was important to retain in the checklist due to its high visibility among patients, and because many organizations (and clinicians) lack experience pertaining to patient portals. Therefore, the ratings of one panelist were excluded in this section. Of 70 items in the patient portal tab, 33 (47.1%) items received 100% affirmative agreement among the remaining three panelists. Therefore, these items were retained, bringing the final total of the checklist to 642 items across 10 core EHR capabilities.

Finally, of the retained items, those rated “not clear” were reviewed and edited for clarity based on panelist notes and feedback.



Discussion

Ninety-seven articles pertaining to eSafety were analyzed ([Table 4]), resulting in 870 unique configuration practice recommendations extracted from the literature. Recommendations were organized by 10 key EHR system capabilities and input into their respective tabs within the eSafety checklist ([Table 5]). Data in each tab were analyzed for natural themes and grouped into subcategories that can be expanded and collapsed by the user, allowing for quick navigation; a screenshot of the Clinical Decision Support tab is provided in [Fig. 2].

Fig. 2 Clinical Decision Support tab of eSafety checklist: electronic health record (EHR) capabilities are divided into tabs across the bottom of the checklist in orange. Each capability tab has the same standard format, including standard columns. Capability-specific expandable subcategories are listed within each tab for quick user navigation. This figure shows the “Drug-drug interaction decision support” subcategory expanded. Users can indicate their compliance level and add specific comments to provide rationale, if necessary. As the user completes the checklist, compliance is automatically calculated in the “Instructions & Scoring” tab. Red diamonds indicate that further information/help is available upon hovering over the cell.
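The automatic compliance calculation described in the caption can be illustrated with a short sketch. The function name, entry values, and scoring convention below are hypothetical, intended only to show one plausible way a per-tab compliance rate could be computed, not the checklist's actual Excel formula:

```python
def compliance_rate(entries):
    """Percent of applicable checklist items marked compliant.

    `entries` holds one compliance entry per recommended practice;
    items marked "N/A" are excluded from the denominator.
    """
    applicable = [e for e in entries if e != "N/A"]
    if not applicable:
        return 0.0
    return round(100 * applicable.count("Yes") / len(applicable), 1)

# Hypothetical entries for one capability tab: 3 of 4 applicable items compliant.
tab_entries = ["Yes", "Yes", "No", "N/A", "Yes"]
print(compliance_rate(tab_entries))  # 75.0
```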
Table 4

Evidence cited in eSafety checklist

Capability columns: Global settings; Patient identification; Clinical documentation; Order management; Clinical decision support; Medication management; Referral management; Result management; Clinical communication; Patient portal; Other.

1. Sengstack, 2010: * * * * *
2. Sittig and Singh, 2011: * * *
3. Meeks et al, 2014: *
4. Sittig et al, 2014: x x x x x
5. Sittig et al, 2014: *
6. McCoy et al, 2013: *
7. Singh et al, 2011: *
8. Sittig et al, 2007: x x x x
9. Magrabi et al, 2009: *
10. Magrabi et al, 2012: * *
11. Magrabi et al, 2013: *
12. Baker and Norton, 2004: *
13. Digital Health Canada, eSafety Guidelines, 2013: *
14. ONC HIT, HIT Patient Safety Action Plan, 2013: *
15. Wallace et al, 2013: *
16. Huckvale et al, 2010: *
17. Joint Commission, Safely Implementing HIT, 2008: *
18. Institute of Medicine, HIT and Patient Safety, 2012: *
19. ONC HIT, Progress on HIT Patient Safety Action Plan, 2014: *
20. Kushniruk et al, 2013: *
21. ISMP, Safe eCommunication & Drug Nomenclature, 2015: * * * *
22. ISMP, Guidelines for Standard Order Sets, 2010: * * *
23. ISMP, Tallman Lettering, 2015: *
24. ISMP, FDA Look-Alike Drugs, 2016: *
25. ISMP, Human Over-Reliance on Technology, 2016: x
26. McCoy et al, 2012: x
27. IOM, Key Capabilities of EHRs, 2003: *
28. Westbrook et al, 2012: *
29. Menon et al, 2014: x x
30. Alberta Health Services, 2009: x x x x x x x
31. Partnership for HIT Patient Safety, Copy & Paste Toolkit, 2016: x
32. Cusack and Poon, 2007: x x x x x
33. Classen and Adelman, 2016: * x
34. Roper Anderson et al, 2013: x
35. Horsky et al, 2005: x * *
36. National Quality Forum, HIT Patient Safety Measures, 2015: x x x x x x x
37. Osheroff et al, 2013: *
38. Centers for Medicare and Medicaid Services, CDS, 2014: * *
39. Belden et al, 2014: * * * * * x
40. Joint Commission, Sentinel Event: Safe Use of HIT, 2015: *
41. Koppel et al, 2005: * * * *
42. Adelman et al, 2013: x x
43. Ash et al, SAFER Guide: CPOE, 2014: * * * * * *
44. Ash et al, SAFER Guide: Pt. ID, 2014: *
45. Ash et al, SAFER Guide: Clin. Comm., 2014: * * *
46. Ash et al, SAFER Guide: Test Results, 2014: * x *
47. Ash et al, SAFER Guide: High Priority Prac., 2014: * * * * x
48. Ash et al, SAFER Guide: Config. Practices, 2014: * *
49. Lowry et al, 2012: * * * * * *
50. NCCD, CDS, 2013: *
51. NCCD, Drug–Drug Interactions, 2013: *
52. Phansalkar et al, 2012: *
53. Paterno et al, 2009: *
54. Zhang and Walji, 2011: *
55. Saleem et al, 2016: * * * * *
56. Partnership for Health IT Patient Safety, Pt. ID Toolkit, 2017: * *
57. HealthIT.gov, Definitions, 2017: *
58. NCCD, General Design, 2017: * x * * *
59. Alberta Health Services, 2014: *
60. Alberta Health Services, 2015: * * * * * x x *
61. Campbell et al, 2006: *
62. Dang and Dearholt, 2017: *
63. Healthcare Human Factors, User Interface Guidelines, 2016: * * *
64. Phansalkar et al, 2010: * *
65. AUS Commission on Safety and Quality in Health Care, 2016: *
66. The Electronic Medication Reconciliation Group, 2017: x *
67. Adelman et al, 2015: *
68. Zahabi et al, 2015: * x x *
69. Beeler et al, 2014: *
70. Marcilly et al, 2015: * x
71. Lehmann, 2015: * *
72. Kuperman et al, 2007: * *
73. Khajouei and Jaspers, 2008: * *
74. Silow-Carroll et al, 2012: *
75. Wright et al, 2011: * *
76. Schnipper et al, 2008: *
77. Esquivel et al, 2012: * *
78. Jirjis et al, 2005: * *
79. Gupte et al, 2016: *
80. Straus et al, 2011: *
81. Osborn et al, 2011: * * *
82. Dalal et al, 2011: * *
83. Liddy et al, 2015: * *
84. Marcilly et al, 2016: *
85. Yackel and Embi, 2009: *
86. Menon et al, 2016: *
87. Bourgeois et al, 2009: * *
88. ECRI Institute, Pt. ID Errors, 2016: x
89. NCCD, Preventing Medication Order Errors, 2013: *
90. NCCD, Medication Reconciliation, 2013: *
91. NCCD, Problem List, 2013: *
92. NCCD, Table Design, 2013: * *
93. NCCD, Timely Result Management, 2013: *
94. NCCD, Effective Use of Color, 2013: *
95. NCCD, Reducing Wrong Patient Selection, 2013: *
96. Kern et al, 2009: x x
97. Wiegers, 1999: *

Abbreviations: CDS, clinical decision support; CPOE, computerized provider order entry; EHR, electronic health record; FDA, Food and Drug Administration; HIT, health information technology; IOM, Institute of Medicine; ISMP, Institute for Safe Medication Practices; NCCD, National Center for Cognitive Informatics and Decision Making in Healthcare; ONC, Office of the National Coordinator.

Note: * denotes a recommended practice retained in the final checklist; x denotes a recommended practice excluded from the final checklist. Please refer to [Supplementary Appendix A] (available in the online version) for details of the studies cited in the table.


Table 5

Complete list of checklist tabs and expandable subsections

About
i. Instructions & Scoring
ii. Glossary

1. Global Settings
 • Consistency and standards in design
 • Clear navigation
 • Match between system and world
 • Minimalist design
 • Designed to prevent errors from use
 • Minimize human memory load
 • Informative feedback
 • Enable user flexibility and efficiency
 • Useful error messages
 • Clear closure to tasks
 • Reversible actions
 • Clear and concise use of users' language
 • Users control system actions
 • Help and documentation

2. Patient Identification
 • Patient name & birthdate formatting
 • Patient demographics and identifiers
 • Patient banner
 • Patient information display
 • Patient record creation & merge
 • User notifications

3. Clinical Documentation
 • Allergies
 • Problem list
 • Patient status and consent
 • Structured charting templates & notes
 • Age and unit measures
 • Pediatric-specific documentation
 • Clinical reference material

4. Order Management
 • Computerized provider order entry design principles
 • Order sets
 • Order forms
 • Order entry
 • Order verification
 • Order communication

5. Clinical Decision Support (CDS)
 • CDS design policies and principles
 • CDS alert display
 • CDS alert components & language
 • Recommended high-severity, clinically significant drug–drug interaction pair alerts
 • Drug–drug interaction decision support
 • Drug–allergy interaction decision support
 • Drug–laboratory interaction decision support
 • Drug–condition/age interaction decision support
 • Duplicate order decision support
 • Formulary decision support
 • Drug dosing decision support
 • Point-of-care alerts and reminders
 • Order facilitators
 • Relevant information display
 • Expert systems
 • Workflow support

6. Medication Management
 • Medication display settings
 • Dose expression
 • Medication name
 • Medication ordering
 • Medication reconciliation

7. Referral Management
 • Referral request
 • Referral tracking
 • Referral communication and notifications

8. Result Management
 • Structured data
 • Result tracking
 • Result notification & delivery
 • Pending results
 • Results display
 • Results follow-up

9. Clinical Communication
 • Secure messaging
 • Message delivery, notification & tracking
 • Clinician communication management workflow
 • Communication records

10. Patient Portal
 • Patient portal access for adults
 • Patient portal access for minors
 • Patient portal content availability
 • Patient data entry

iii. Version Control
iv. References

During usability testing, participants commented on the value of the tool in ensuring consistency of systems design (particularly when multiple EHRs are in use), and in providing justification for evidence-based design decisions. Testers further commented that comprehensive evidence on tactical eSafety interventions is both sparse and difficult to find; the checklist therefore addresses a significant information gap. Although vendor reference documents such as "Style Guides" exist for current EHRs in use, they are not focused on safety, nor are they detailed or easy to use. Most testers appreciated and preferred the level of detail in the checklist, commenting that it is necessary to configure, implement, and evaluate system safety, and that it distinguishes the checklist from other tools, which focus predominantly on project methodologies or high-level strategies. For example, the SAFER Guides provide a total of 158 recommended practices, of which 28 were relevant and included in the eSafety checklist. Despite its detail, users found the tool easy to navigate due to its user-friendly, intuitive design. One user commented on the checklist's automatic scoring function: "…[it is] nice to quantify the work that we do – we probably have looked at many of these elements over the years, it's nice to finally have it in one place." Two users commented that although the full detailed checklist is necessary and includes many best practices they were not aware of, an abbreviated version containing only high-priority items would be helpful.

Expert panelists also commented that the checklist is very detailed, yet practical for immediate user interface configuration, in comparison to other EHR safety tools that provide policy and project guidance. Panelists agreed that a high-priority version of the checklist would be helpful, although it may still be lengthy given the large volume of practices rated as highly important. Although panelists were asked to rate each tab heading, subcategory heading, and item in the checklist for importance and clarity, most returned only item-specific ratings, making it difficult to eliminate entire categories or subcategories for a trimmed-down version.
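The affirmative-agreement analysis described in the Abstract (medium and high importance ratings counted as affirmative; an item reached 100% agreement when every panelist rated it medium or high) can be made concrete with a small sketch; the rating labels, item numbers, and toy data below are illustrative assumptions, not the authors' actual analysis:

```python
# Illustrative computation of "affirmative agreement": medium and high
# importance ratings count as affirmative, and an item has 100% affirmative
# agreement only when every panelist rates it medium or high.
# Rating labels and the toy data are assumptions for illustration.

AFFIRMATIVE = {"medium", "high"}

def unanimous_affirmative(ratings_by_item):
    """Items that every panelist rated medium or high importance."""
    return [item for item, ratings in sorted(ratings_by_item.items())
            if all(r in AFFIRMATIVE for r in ratings)]

ratings = {
    "4.4.5": ["high", "high", "high", "low"],   # clinical/IT panelists split
    "5.1.2": ["high", "medium", "high", "high"],
    "2.3.1": ["medium", "medium", "high", "medium"],
}
print(unanimous_affirmative(ratings))  # ['2.3.1', '5.1.2']
```

Applying the same function to the clinical panelists' ratings alone would reproduce the paper's second figure of merit, the 75.5% clinician-only agreement rate.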

Arguably, the most important finding was that, during both user testing and expert panel review, IT professionals tended to rate more items as having low importance and/or relevance than clinical informatics professionals did. Items rated unanimously high by clinical participants were frequently rated unanimously low by IT participants; for example, item 4.4.5, "Responsibility for the test result is assigned when the test is ordered," was rated this way. This dichotomy of opinion about the extent to which configuration choices affect patient safety is both alarming and significant. Clinical informatics users commented that the checklist will greatly improve user interfaces and system safety, whereas information technology users were less enthusiastic and questioned its function. There appears to be a substantial disparity between what IT users and clinical users perceive as affecting system safety, highlighting the need for closer collaboration among clinical and IT staff to ensure safe systems configuration.

A key strength of the project was the multiple usability review cycles conducted on the checklist by diverse end users who work with different EHR systems. This ensured the tool was user-friendly, fit for purpose and adoption, and generalizable to multiple EHR vendor solutions. Testers commented that they had not come across a similar tool for ensuring their configuration approach minimized unanticipated harm. Another strength was the interdisciplinary expert panel review, which not only helped determine the items retained in or excluded from the eSafety checklist, but further confirmed the need for closer collaboration among eSafety experts.

This project has several limitations. Detailed evidence review and data extraction, including evidence level and quality assessment, was conducted by a single reviewer. All relevant configuration practices were included in the checklist irrespective of evidence level and quality; evidence strength was documented in the checklist to offer end users additional information when deciding whether to implement a recommended practice. Finally, expert panelists did not take evidence strength into account during their review. Although end users appreciated the length and detail of the checklist, its length was a barrier to achieving a robust expert panel review. A larger panel would have been ideal, particularly one with an equal number of clinical informatics and HIT professionals to help validate the differences in opinion between these groups of experts; a larger panel would also have strengthened results pertaining to patient portal recommendations.



Conclusion

Although EHRs significantly improve patient safety, they also introduce unique and unintended consequences.[14] The results of this project underscore the importance of close collaboration between clinical informatics and IT professionals to identify and address sociotechnical factors that affect EHR use and cause unanticipated patient harm. The eSafety checklist is a resource for implementers that compiles emerging evidence on eSafety best practices in a user-friendly format, allowing for effective translation to practice. Although the checklist was developed for use by AHS, the tool is system agnostic and is therefore generalizable to any hospital-based EHR. The eSafety checklist builds upon existing tools to offer more practical and detailed guidance for front-line informatics staff configuring EHR user interfaces. End-user testing ensured the tool was usable and met the sponsor organization's needs for adoption; evaluation of its impact and effectiveness remains outstanding. A pilot implementation is currently underway at AHS, where the tool will be evaluated for both qualitative and quantitative outcomes to inform future iterations.



Clinical Relevance Statement

The eSafety checklist helps build organizational safety competence in user interface design and fosters effective dialogue between IT teams and clinical informatics professionals so that EHR safety can be addressed collaboratively. The checklist compiles emerging eSafety evidence into a succinct, easy-to-navigate format for effective translation to practice.



Multiple Choice Question

What is eSafety?

  a. The protection and security of information on the Internet.

  b. The protection of personal information and identity in a digital environment.

  c. The protection of patients through safe design of electronic health records.

  d. Safe and secure transmission of health information between digital systems.

Correct Answer: The correct answer is option c. eSafety is defined as the safety of HIT: the policies, processes, and practices that serve to protect patients against harm resulting from the development, implementation, and use of HIT solutions and software.[3]



Conflict of Interest

None.

Acknowledgments

We acknowledge the Chief Medical Information Office, the Chief Information Office, and the Human Factors Department at AHS for their collaboration on this work. We further acknowledge the significant time commitment of our four expert panelists in reviewing the checklist and providing their thoughtful and candid feedback. A copy of the current version of the eSafety checklist can be requested for access and use by contacting the first author by following the link: https://docs.google.com/forms/d/1SxLdcZZo0oAAmynXQuMqtKdmKQJDA9l0aYI8K6qFkLg/edit.

Protection of Human and Animal Subjects

This quality improvement project was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects, and was reviewed by the Yale Human Research Protection Program, and Alberta Innovates: A Project Ethics Community Consensus Initiative (ARECCI). The project was granted operational approval by Alberta Health Services (AHS) in accordance with applicable AHS quality improvement policies and procedures.


Supplementary Material

  • References

  • 1 IOM (Institute of Medicine). Health IT and Patient Safety: Building Safer Systems for Better Care. Washington, DC: The National Academies Press; 2012
  • 2 Kushniruk AW, Triola MM, Borycki EM, Stein B, Kannry JL. Technology induced error and usability: the relationship between usability problems and prescription errors when using a handheld application. Int J Med Inform 2005; 74 (7-8): 519-526
  • 3 Digital Health Canada. eSafety Guidelines: eSafety for eHealth. Toronto, ON; 2013
  • 4 Office of the National Coordinator. Health IT Patient Safety Action & Surveillance Plan. ONC; 2013. Available at: https://www.healthit.gov/sites/default/files/safety_plan_master.pdf. Accessed September 26, 2015
  • 5 Westbrook JI, Reckmann M, Li L, et al. Effects of two commercial electronic prescribing systems on prescribing error rates in hospital in-patients: a before and after study. PLoS Med 2012; 9 (01) e1001164
  • 6 Magrabi F, Ong MS, Runciman W, Coiera E. Using FDA reports to inform a classification for health information technology safety problems. J Am Med Inform Assoc 2012; 19 (01) 45-53
  • 7 Palojoki S, Mäkelä M, Lehtonen L, Saranto K. An analysis of electronic health record-related patient safety incidents. Health Informatics J 2017; 23 (02) 134-145
  • 8 Fong A, Howe JL, Adams KT, Ratwani RM. Using active learning to identify health information technology related patient safety events. Appl Clin Inform 2017; 8 (01) 35-46
  • 9 Classen DC, Resar R, Griffin F, et al. 'Global trigger tool' shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff (Millwood) 2011; 30 (04) 581-589
  • 10 Adelman J. Wrong patient errors V. Proceedings of the AHRQ National Web Conference on Assessing Safety Risks Associated with EHRs; August 29, 2016
  • 11 Sengstack P. CPOE configuration to reduce medication errors. J Healthc Inf Manag 2010; 24 (04) 26-34
  • 12 National Center for Cognitive Informatics & Decision Making in Healthcare, The University of Texas School of Biomedical Informatics. Safety enhanced design briefs. Available at: https://sbmi.uth.edu/nccd/SED/Briefs/. Accessed September 26, 2016
  • 13 Tsou AY, Lehmann CU, Michel J, Solomon R, Possanza L, Gandhi T. Safe practices for copy and paste in the EHR. Systematic review, recommendations, and novel model for health IT collaboration. Appl Clin Inform 2017; 8 (01) 12-34
  • 14 Sittig DF, Ash JS, Singh H. The SAFER guides: empowering organizations to improve the safety and effectiveness of electronic health records. Am J Manag Care 2014; 20 (05) 418-423
  • 15 Alberta Health Services. Annual Report. Edmonton, AB; 2016–17. Available at: https://www.albertahealthservices.ca/assets/about/publications/ahs-pub-2016-2017-annual-report.pdf. Accessed September 24, 2017
  • 16 Dang D, Dearholt S. Johns Hopkins Nursing Evidence-Based Practice: Model and Guidelines. 3rd ed. Indianapolis, IN: Sigma Theta Tau International; 2017
  • 17 IOM (Institute of Medicine). Key Capabilities of an Electronic Health Record System: Letter Report. Washington, DC: The National Academies Press; 2003
  • 18 Lazenby M, Dixon J, Coviello J, McCorkle R. Instructions on Using Expert Panels to Rate Evidence-Based Content. New Haven, CT: Yale University; 2014



Fig. 1 Literature search.