Appl Clin Inform 2021; 12(01): 164-169
DOI: 10.1055/s-0041-1723023
Case Report

A Perioperative Care Display for Understanding High Acuity Patients

Laurie Lovett Novak
1   Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, United States
,
Jonathan Wanderer
2   Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, Tennessee, United States
,
David A. Owens
3   Vanderbilt University Owen Graduate School of Management, Nashville, Tennessee, United States
,
Daniel Fabbri
1   Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, United States
,
Julian Z. Genkins
4   Department of Medicine, University of California San Francisco, San Francisco, California, United States
,
Thomas A. Lasko
1   Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, United States
Funding L.L.N., D.O., D.F., J.W., and T.A.L. report grant R01 EB0020666 from the National Institute of Biomedical Imaging and Bioengineering.
 

Abstract

Background The data visualization literature asserts that the details of the optimal data display must be tailored to the specific task, the background of the user, and the characteristics of the data. The general organizing principle of a concept-oriented display is known to be useful for many tasks and data types.

Objectives In this project, we used general principles of data visualization and a co-design process to produce a clinical display tailored to a specific cognitive task, chosen from the anesthesia domain but with clear generalizability to other clinical tasks. To support the work of the anesthesia-in-charge (AIC), our task was to depict, for a given day, the acuity level and complexity of each patient scheduled for surgery the following day. The AIC uses this information to optimally allocate anesthesia staff and providers across operating rooms.

Methods We used a co-design process to collaborate with participants who work in the AIC role. We conducted two in-depth interviews with AICs and engaged them in subsequent input on iterative design solutions.

Results Through a co-design process, we found (1) the need to carefully match the level of detail in the display to the level required by the clinical task, (2) the impedance caused by irrelevant information on the screen, such as icons relevant only to other tasks, and (3) the desire for a specific but optional trajectory of increasingly detailed textual summaries.

Conclusion This study reports a real-world clinical informatics development project that engaged users as co-designers. Our process led to the user-preferred design of a single binary flag to identify the subset of patients needing further investigation, and then a trajectory of increasingly detailed, text-based abstractions for each patient that can be displayed when more information is needed.



Background and Significance

Electronic Health Record Data Complexity

Most clinical information is entered into the electronic health record (EHR) as tabular numeric data or narrative text, but these native formats do not always facilitate the cognitive processes required by clinical care.[1] [2] [3] In particular, raw EHR data are often far too detailed and too dispersed throughout the record for a clinician to quickly assemble an answer to a specific clinical question about a given patient.[4] Historically, many methods of displaying EHR data have been proposed as general solutions to this problem,[5] [6] [7] [8] [9] [10] [11] but the details of an optimal display must usually be tailored to the specific task, the background of the user, and the characteristics of the data.[2] [3] [12] [13]

The findings reported here come from a larger initiative to improve visualization of data for anesthesia and other clinical work at Vanderbilt University Medical Center. In this project, we designed a clinical display that was matched to a less common type of clinical task. Instead of answering a specific question about a single patient, our goal was to answer the same, predefined question for many patients at once, and to identify those whose answers need extra attention. We arrived at this goal with input from anesthesia colleagues. Our predefined question came from the daily task of assigning anesthesia providers to surgical procedures, but our findings can generalize to other clinical tasks such as “charge” roles that involve planning for future shifts of patient care.



Perioperative Environment and Anesthesia-In-Charge Role

The perioperative environment at the Vanderbilt University Hospital (VUH) is large and complex. As a tertiary care center, we have 864 licensed beds and perform approximately 40,000 surgical procedures per year. In addition to a significant volume of procedures, we also support the education of a variety of trainees. Our anesthesiology service is staffed by anesthesia faculty members, anesthesia residents, certified registered nurse anesthetists, and student nurse anesthetists. Our supervision requirements allow anesthesia faculty to staff up to two rooms if an anesthesia resident or student nurse anesthetist is involved, or up to four rooms if only certified registered nurse anesthetists are involved. This variable staffing ratio makes it important for the anesthesia-in-charge (AIC) to have a clear view of the acuity of the cases being staffed in order to make appropriate assignments. Many, but not all, patients undergo evaluation at the Preoperative Evaluation Center, where clinical documentation is generated that specifically evaluates the risk of undergoing anesthesia and surgery.

VUH uses Epic (Epic Systems, Verona, Wisconsin, United States) as its EHR. This integrated enterprise system is used for most aspects of perioperative care, including operative case scheduling and documentation of patient conditions. In conjunction with Epic, the anesthesiology service uses a separate system for entering the assignments of specific anesthesia staff to specific operating rooms. Bidirectional interfaces between this system and Epic bring operative case scheduling information into the staff assignment system and import staff assignments into Epic.

Every day, surgical patients are assigned specific rooms and times, and anesthesia providers must then be assigned to each patient, accounting for the surgical procedure being performed; the patient's comorbidities, acuity, and complexity; the experience of the provider; and the case load of attending anesthesiologists supervising multiple rooms. At VUH, this assignment task is performed by a few anesthesiologists who rotate through the role of AIC.



Objective

Previous discussions with AICs had determined that the assignment task required substantial time and cognitive effort, arising in part from the work of understanding each patient's condition in enough detail to match them with an appropriate anesthesia provider. Many low-acuity, low-complexity patients can be matched with any provider, including supervised trainees. Likewise, the most complex patients are easily matched to the most experienced providers. Those in between need further investigation to make an appropriate assignment. But in a high-volume surgical center, the AIC does not have enough time to read even a one-page summary for every patient scheduled that day to determine where each patient lies on this continuum. Our display task was to indicate a rough level of patient complexity and then draw attention to the subset of patients who truly needed a deeper look by the AIC. Our objective was to develop a visualization with the potential to lower the cognitive burden, uncertainty, and time requirements of making the daily assignments.



Methods

This project was approved by the Vanderbilt University Institutional Review Board.

User Engagement in Co-Design

Given the small number of AICs in the institution, we took a co-design approach.[14] We define co-design by adapting Sanders's definition to specify software design: "(software) designers and people not trained in (software) design working together in the design development process." Co-design, also referred to as participatory design,[15] [16] differs from user- or human-centered design in that potential users of a technology or process always participate in the design work. Other members of the team learn about the work through their involvement, and the user-participants in turn learn about software development, issues with data, and, in this case, visualization. In addition to the AICs, our team included data scientists (some of whom were also physicians), a social scientist, and a design scholar.

We conducted two in-depth interviews with a participant who works full-time in the AIC role. The first interview was to understand the information needs in this role, and the second interview was to gain feedback on the preliminary version of the tool and further refine our understanding of the AIC information needs. We also interviewed a participant who performs the AIC role intermittently. In addition, after each version of the tool was implemented, we engaged these and two other AICs to ensure that the implemented changes were available to them in our production EHR environment and to obtain their feedback. This co-design approach is feasible when there are a small number of people performing the role for which technology is being designed. In traditional research terminology, our “sample” included the person who performed the role full-time and their backup.

Interviews were recorded and transcribed. Our data included notes from meetings, transcripts from interviews, and drawings that were created during the interviews and meetings. Data were analyzed by the team using a qualitative data analysis coding tool. Three team members coded the data using an open coding (i.e., no a priori framework) approach.[17] The team reviewed the coding in meetings and discussed design themes that emerged, and these were used to establish the initial design. Subsequent informal assessment meetings with AIC participants resulted in adaptations to the design. These sessions were not documented and analyzed with the formality of the initial interviews, given the iterative methodology.



Iterative Development Cycle

We used an iterative development cycle in which participants gave feedback after each design update. The following section details our experience in implementing the iterative development cycle. Ultimately, there were six versions of the design. As described above, participants were formally interviewed for the first two iterations, and we documented subsequent feedback without formal interviews through direct communication with the developer.



Results

Initial Interviews and the Work of the Anesthesia-In-Charge

Initial interviews established that the work of the AIC is complex, involving clinical and social components, and information about people, spaces, technology, and medical procedures. Information used to make provider assignments was located in a variety of systems, including the electronic health record, the perioperative information system, an equipment tracking system, messages from various personnel, and other sources. The AIC estimated that 90% of scheduled cases were planned 1 day in advance, with more complex cases being planned 2 to 3 days out. Information used included the surgical specialty and specific surgeon (of which there are hundreds), the procedure, the complexity of the case, and the baseline health of the patient. The AIC kept track of information about the various surgeons, including specific people or roles they preferred to work with, types of cases they perform, and other factors. The AIC tracked patient factors including medical conditions, previous anesthesia complications, cardiovascular issues, malformations in the face or airway, and pulmonary issues. The AIC also took note of the patient's ASA score. The ASA Physical Status is a subjective preoperative summary of a patient's clinical state, defined by the American Society of Anesthesiologists (ASA), and is predictive of both perioperative and postoperative outcomes.[18] Its integer values range from healthy (1) through severe systemic disease (3) to brain dead (6). The AIC also used information about room closures, equipment, and special requests. The primary AIC noted that if they were able to identify the sickest patients from the list, they would assign those cases first. The extant process involved assigning resident physicians first, then student nurse anesthetists, then certified registered nurse anesthetists, and finally attending physicians.



Iterative Design Refinements

In our working meetings, we reviewed all of our data and evaluated strategies for the design of a visualization of the data needed by the AICs. [Fig. 1] depicts options the team considered. Our first attempt was an icon that displayed ASA status, the presence of a difficult airway, and the severity of problems in multiple organ systems, arranged as a 3 × 3 square ([Fig. 2]). We had planned to compute the values displayed in the icon with sophisticated data science methods that would infer the severity of problems in each system from structured and unstructured information in the EHR. These displays looked very useful for individual patients; however, when collected into a display showing an icon for each surgical patient, AIC feedback was that the result induced information overload. With so many patients on a single page, a much simpler indicator was needed.

Fig. 1 Image of the whiteboard from a design session that depicts priority clinical characteristics, options for creating and altering the visualization, and various layouts.
Fig. 2 An initial guess at a summary abstraction indicating a patient's complexity and acuity. Locations and colors indicate organ systems, and the degree of fill represents the degree of disease severity for that system. Systems are arranged top to bottom by relevance to anesthesia planning (top: airway, cardiovascular, pulmonary; middle: endocrine, renal, hepatic; bottom: neurologic, American Society of Anesthesiologists status, rare conditions; an important allergy is indicated here). The red border indicates high overall acuity.
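
To make this first iteration concrete, the following is a minimal Python/matplotlib sketch of the 3 × 3 icon described above and captioned in [Fig. 2]. It is our illustrative reconstruction, not the project's implementation: the organ-system layout follows the figure caption, while the single fill color and the example severity values are hypothetical.

```python
# Illustrative sketch (not the authors' implementation) of the 3 x 3
# complexity/acuity icon in Fig. 2. Severity values are hypothetical.
import matplotlib.pyplot as plt
import matplotlib.patches as patches

SYSTEMS = [  # arranged top to bottom by relevance to anesthesia planning
    ["airway", "cardiovascular", "pulmonary"],
    ["endocrine", "renal", "hepatic"],
    ["neurologic", "ASA status", "rare conditions"],
]

def draw_icon(ax, severities, high_acuity):
    """severities maps system name -> disease severity in [0, 1]."""
    for row in range(3):
        for col in range(3):
            x, y = col, 2 - row  # draw row 0 of SYSTEMS at the top
            sev = severities.get(SYSTEMS[row][col], 0.0)
            ax.add_patch(patches.Rectangle((x, y), 1, 1, fill=False, lw=0.5))
            # Degree of fill encodes severity (one color is used here; the
            # actual icon varied color by organ system).
            ax.add_patch(patches.Rectangle((x, y), 1, sev, color="tab:blue"))
    if high_acuity:  # red border flags high overall acuity
        ax.add_patch(patches.Rectangle((0, 0), 3, 3, fill=False,
                                       edgecolor="red", lw=3))
    ax.set_xlim(-0.2, 3.2)
    ax.set_ylim(-0.2, 3.2)
    ax.set_aspect("equal")
    ax.axis("off")

fig, ax = plt.subplots(figsize=(2, 2))
draw_icon(ax, {"airway": 0.9, "cardiovascular": 0.6, "renal": 0.3}, True)
plt.show()
```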

In the second iteration, we designed a single binary flag that signaled the need for further evaluation by the AIC. The absence of the flag indicated a low-complexity patient who needed no further investigation by the AIC. The presence of the flag indicated that the patient had one or more predefined conditions relevant to the provision of anesthesia, such as a known difficult airway, any history of severe heart failure, implantation of a ventricular assist device, a history of pulmonary hypertension, a history of moderate, severe, or critical aortic stenosis, a history of malignant hypertension, or a history of refusing blood products. These conditions were chosen by the AICs, and their presence was inferred by applying logical rules to information extracted from the patient's preoperative assessments, problem list, and past medical history. The output flag value was fed into the "snapboard" system that AICs use for reviewing surgical cases when making assignments, and was visually represented with a lightning bolt icon (see [Fig. 3] for the lightning bolt on a subsequent version).
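
The rule logic behind the flag is simple enough to sketch. The snippet below illustrates, under our own assumptions, the kind of rule-based computation described above: the condition names mirror the list in the text, but the record fields and function names are hypothetical placeholders rather than the production rule set.

```python
# Hypothetical sketch of the binary "needs further evaluation" flag.
# Condition names mirror the text; record fields are assumptions.

FLAG_CONDITIONS = {
    "difficult_airway",
    "severe_heart_failure",
    "ventricular_assist_device",
    "pulmonary_hypertension",
    "aortic_stenosis_moderate_or_worse",
    "malignant_hypertension",
    "refuses_blood_products",
}

def needs_further_evaluation(patient_record: dict) -> bool:
    """Return True if any predefined anesthesia-relevant condition is present.

    The record is assumed to already hold condition codes extracted from the
    preoperative assessments, problem list, and past medical history.
    """
    extracted = set(patient_record.get("extracted_conditions", []))
    return bool(extracted & FLAG_CONDITIONS)

# Example: this patient would receive the lightning bolt icon.
print(needs_further_evaluation(
    {"extracted_conditions": ["pulmonary_hypertension", "hyperlipidemia"]}
))  # -> True
```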

Fig. 3 Snapboard image depicting several design elements, including the lightning bolt icon, quick-summary mouse hover, and sidebar report (fictional test data shown).

This design was implemented, and subsequent assessment by the AIC co-design team members identified usability issues. First, the icon was not visually obvious due to the presence of a variety of other anesthesia resource icons for each surgical case. Second, it was challenging to understand the entire cohort at once because it did not fit on a single screen, and scrolling was so slow that users tried to avoid it whenever possible. To address these issues, we created a version of the snapboard that is used only for making assignments, allowing us to remove the anesthesia resource icons that were contributing to visual clutter, and we altered the height of the surgical case line to allow more cases to be viewed simultaneously.

The high latency of scrolling was an Epic property that we were unable to change, but our other revisions attempted to minimize the amount of scrolling that was needed. This turned out to be the only aspect of Epic architecture that constrained our final design. Other Epic constraints might have limited a more complex display design, but the simplicity of the design preferred by our users avoided those limits.

The next iteration provided a summary level of detail for each patient on mouse hover. Most of our surgical patients have undergone evaluation at our Preoperative Evaluation Center, which results in the generation of an anesthesia preoperative evaluation. This preoperative evaluation has been structured to provide a quick summary, as previously described.[19] To address the AICs' information need, we developed functionality that extracts the quick, one-sentence summary statement within the "history of present illness" section of the anesthesia preoperative evaluation and displays it when the user hovers over a flagged surgical case ([Fig. 3], yellow tooltip).
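
As a rough illustration of this extraction step, the sketch below pulls a first-sentence summary out of an HPI section with a regular expression. The section header and note layout here are our assumptions for illustration; the deployed functionality relied on the structured note template cited above rather than free-text pattern matching.

```python
# Assumed-layout sketch of extracting the one-sentence HPI summary.
import re

def hpi_summary(note_text: str) -> str | None:
    """Return the first sentence of the HPI section, or None if absent."""
    match = re.search(
        r"history of present illness:?\s*(.+?)(?:\n\s*\n|\Z)",
        note_text,
        flags=re.IGNORECASE | re.DOTALL,
    )
    if not match:
        return None
    section = match.group(1).strip()
    # Keep only the text up to the first sentence boundary.
    return re.split(r"(?<=[.!?])\s", section, maxsplit=1)[0]

note = """History of Present Illness:
62-year-old with severe aortic stenosis presenting for valve replacement. Reports stable symptoms.
"""
print(hpi_summary(note))
# -> 62-year-old with severe aortic stenosis presenting for valve replacement.
```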

This iteration was deployed into production and evaluated by our AICs. We verified that the changes had successfully addressed the concerns previously raised, and no new issues were identified during this pilot deployment.

Our final iteration added detailed information in a sidebar report when a case was selected via a mouse click ([Fig. 3], procedure note at right). This report includes detailed surgical case scheduling information, patient medications, past medical history, past surgical history, as well as the full content of the most recent anesthesia preoperative evaluation.

Given the small number of users, we did not conduct a formal evaluation of our display. But informal follow-up indicated that the final version decreased the time and effort needed to make provider assignments, and AICs were enthusiastic about the improvements.



Discussion

We used an iterative approach to design a clinical display targeted at understanding the acuity level of all patients in a moderately sized cohort. In this approach, interviews conducted by a multidisciplinary team alternated with design updates. Some aspects of the final preferred design surprised us.

First, the appropriate level of abstraction was not what the design team first imagined, even after the initial user interviews. An abstraction that was appropriate for a single record in isolation imposed too heavy a cognitive load when replicated for each record in the population and combined with all of the other information on the screen that served other purposes. The users' solution to this problem was to first identify the subset of patients for whom no further investigation was needed (the low-complexity, low-acuity patients, indicated by the absence of the icon), and then to answer, one at a time, the question of what was complex about the remaining patients.

Second, we were surprised that, except for the top-level flag, the final preferred design included only text-based abstractions rather than graphical ones. We would normally expect a graphical display to be preferred because, among other things, graphical displays allow for easier pattern recognition, which is typically the preferred cognitive mode for clinicians.[4] [20] [21] In our final design, the pattern recognition step happens at the population-level display; once the patients needing further attention are identified, the problem switches to a search-type cognitive task of understanding why each of those patients needs extra attention, which in this case is better supported by a short text list. The preference for text raises an interesting question for further research and highlights the depth of task understanding needed to design effective data displays.

This was a small study, but its results agree with other research indicating that an effective way to manage information overload is not to simply filter out information, but to summarize details into a more abstract form.[4] [22] In this case, the binary flag is the simplest possible abstraction to indicate the presence of a complex patient, the one-sentence summary indicates the dimensions in which the patient is complex, and the full report gives the details of that complexity. Each of these levels of abstraction was carefully tuned to the clinical requirements, and neither the appropriate number of levels nor the appropriate amount of summarization at each level was obvious at the beginning of the project. We expect that this design of progressive abstraction or summarization will prove useful for other clinical tasks in which a limit on the amount of information on the screen at one time is an important constraint. This type of design promotes trust in the summarization by allowing more detail to be revealed as desired,[4] in contrast to pure filtering or data-hiding designs, which users tend to mistrust.[23] [24]
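
The three levels can be pictured as a small data model with progressive disclosure, as in the sketch below. The field names and the rendering interface are hypothetical illustrations of the concept only, not the deployed Epic configuration.

```python
# Conceptual sketch of progressive abstraction: each level reveals more
# detail only on demand. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class CaseAbstraction:
    flagged: bool     # level 1: binary flag, shown for every case
    summary: str      # level 2: one-sentence summary, shown on hover
    full_report: str  # level 3: full detail, shown on click (sidebar)

    def render(self, level: int) -> str:
        """Expose only as much detail as the requested level allows."""
        if level <= 1:
            return "FLAG" if self.flagged else ""
        if level == 2:
            return self.summary
        return self.full_report

case = CaseAbstraction(
    flagged=True,
    summary="62-year-old with severe aortic stenosis for valve replacement.",
    full_report="Scheduling details, medications, histories, full preop evaluation...",
)
print(case.render(1))  # flag only
print(case.render(2))  # hover summary
```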

The limitations of this study include the representation of only one hospital, a focus on visualizing data for work performed by only a few individuals, and the informal evaluation. However, the work of the AIC is central to the safe and efficient operation of surgical services in the hospital and thus deserves an optimized visual presentation of the vast clinical data resources involved. Additionally, the insights from our study are likely useful to the development of visualizations for people in other "charge" roles, for example, charge nurses, who consider the needs of an entire clinic or unit in planning for a future shift of clinical care.



Conclusion

Our iterative co-design process explored ways to visualize and understand a population of patients to facilitate the task of making appropriate assignments of anesthesia providers. Our process led to the user-preferred design of a single binary flag to identify the subset of patients needing further investigation, and then a trajectory of increasingly detailed, text-based abstractions for each patient that can be displayed when more information is needed.



Clinical Relevance Statement

This study provides a description of a real-world project in which a designed informatics solution was implemented. We also described methods for engaging users in the process, establishing a precedent for others designing tools that have a small number of users.



Multiple Choice Questions

  1. In a co-design project, users are considered:

    a. Research subjects, studied using questionnaires

    b. Topic experts

    c. Partners in designing and refining a product

    d. Design scholars

    Correct Answer: The correct answer is option c. Users of products are seen as partners in co-design projects, helping other members of the team arrive at a workable design.

  2. Optimal data displays are tailored to the background of the user, characteristics of the data, and:

    a. The specific task

    b. The dimension of time

    c. The programming language

    d. The machine learning algorithm

    Correct Answer: The correct answer is option a. Data displays should support tasks being performed by the user.



Conflict of Interest

None declared.

Acknowledgments

The authors are grateful for the participation of the anesthesia-in-charge personnel who helped design the innovations described here.

Protection of Human and Animal Subjects

This project was approved by the Vanderbilt University Institutional Review Board.


  • References

  • 1 Powsner SM, Tufte ER. Graphical summary of patient status. Lancet 1994; 344 (8919): 386-389
  • 2 Bauer DT, Guerlain S, Brown PJ. The design and evaluation of a graphical display for laboratory data. J Am Med Inform Assoc 2010; 17 (04) 416-424
  • 3 Torsvik T, Lillebo B, Mikkelsen G. Presentation of clinical laboratory results: an experimental comparison of four visualization techniques. J Am Med Inform Assoc 2013; 20 (02) 325-331
  • 4 Lasko TA, Owens DA, Fabbri D, Wanderer JP, Genkins JZ, Novak LL. User-centered clinical display design issues for inpatient providers. Appl Clin Inform 2020; 11 (05) 700-709
  • 5 Crisan A, McKee G, Munzner T, Gardy JL. Evidence-based design and evaluation of a whole genome sequencing clinical report for the reference microbiology laboratory. PeerJ 2018; 6: e4218
  • 6 Wanderer JP, Nelson SE, Ehrenfeld JM, Monahan S, Park S. Clinical data visualization: the current state and future needs. J Med Syst 2016; 40 (12) 275
  • 7 West VL, Borland D, Hammond WE. Innovative information visualization of electronic health record data: a systematic review. J Am Med Inform Assoc 2015; 22 (02) 330-339
  • 8 Feblowitz JC, Wright A, Singh H, Samal L, Sittig DF. Summarization of clinical information: a conceptual model. J Biomed Inform 2011; 44 (04) 688-699
  • 9 Waller RG, Wright MC, Segall N. et al. Novel displays of patient information in critical care settings: a systematic review. J Am Med Inform Assoc 2019; 26 (05) 479-489
  • 10 Wright MC, Borbolla D, Waller RG. et al. Critical care information display approaches and design frameworks: a systematic review and meta-analysis. J Biomed Inform X 2019; 3: 100041
  • 11 Rind A, Wang TD, Aigner W. et al. Interactive information visualization to explore and query electronic health records. Found Trends Hum-Comput Interact. 2013; 5 (03) 207-298
  • 12 El-Kareh R, Hasan O, Schiff GD. Use of health information technology to reduce diagnostic errors. BMJ Qual Saf 2013; 22 (Suppl. 02) ii40-ii51
  • 13 Alberdi E, Becher J-C, Gilhooly K. et al. Expertise and the interpretation of computerized physiological data: implications for the design of computerized monitoring in neonatal intensive care. Int J Hum Comput Stud 2001; 55 (03) 191-216
  • 14 Sanders EBN, Stappers PJ. Co-creation and the new landscapes of design. CoDesign 2008; 4 (01) 5-18
  • 15 Jeffery AD, Novak LL, Kennedy B, Dietrich MS, Mion LC. Participatory design of probability-based decision support tools for in-hospital nurses. J Am Med Inform Assoc 2017; 24 (06) 1102-1110
  • 16 Gregory J. Scandinavian approaches to participatory design. Int J Eng Educ 2003; 19 (01) 62-74
  • 17 Bernard HR, Wutich A, Ryan GW. Analyzing Qualitative Data: Systematic Approaches. New York: SAGE Publications; 2016
  • 18 American Society of Anesthesiologists. ASA Physical status classification system. Accessed April 5, 2020 from: https://www.asahq.org/standards-and-guidelines/asa-physical-status-classification-system
  • 19 Hagaman DH, Ehrenfeld JM, Terekhov M. et al. Compliance is contagious: using informatics methods to measure the spread of a documentation standard from a preoperative clinic. J Perianesth Nurs 2018; 33 (04) 436-443
  • 20 Croskerry P. From mindless to mindful practice--cognitive bias and clinical decision making. N Engl J Med 2013; 368 (26) 2445-2448
  • 21 Saposnik G, Redelmeier D, Ruff CC, Tobler PN. Cognitive biases associated with medical decisions: a systematic review. BMC Med Inform Decis Mak 2016; 16 (01) 138
  • 22 Woods DD, Watts JC. How not to have to navigate through too many displays. In: Helander MG, Landauer TK, Prabhu PV. eds. Handbook Of Human-Computer Interaction. Amsterdam, Netherlands: Elsevier; 1997: 617-650
  • 23 Jensen LG, Bossen C. Factors affecting physicians' use of a dedicated overview interface in an electronic health record: the importance of standard information and standard documentation. Int J Med Inform 2016; 87: 44-53
  • 24 Hsu W, Taira RK, El-Saden S, Kangarloo H, Bui AA. Context-based electronic health record: toward patient specific healthcare. IEEE Trans Inf Technol Biomed 2012; 16 (02) 228-234

Address for correspondence

Laurie L. Novak, PhD, MHSA
Department of Biomedical Informatics, Vanderbilt University Medical Center
2525 West End Avenue, Nashville, TN 37203
United States

Publication History

Received: 11 June 2020

Accepted: 22 December 2020

Article published online:
03 March 2021

© 2021. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany
