Keywords
data visualization - anesthesiology - human-computer interaction - requirements analysis and design - machine learning - co-design
Background and Significance
Electronic Health Record Data Complexity
Most clinical information is entered into the electronic health record (EHR) as tabular numeric data or narrative text, but these native formats do not always facilitate the cognitive processes required by clinical care.[1] [2] [3] In particular, raw EHR data are often much too detailed and dispersed throughout the record for a clinician to quickly assemble an answer to a specific clinical question about a given patient.[4] Historically, many different methods of displaying EHR data have been proposed as general solutions to this problem,[5] [6] [7] [8] [9] [10] [11] but the details of an optimal display must usually be tailored to the specific task, the background of the user, and the characteristics of the data.[2] [3] [12] [13]
The findings reported here come from a larger initiative to improve visualization
of data for anesthesia and other clinical work at Vanderbilt University Medical Center.
In this project, we designed a clinical display that was matched to a less common
type of clinical task. Instead of answering a specific question about a single patient, our goal was to answer the same predefined question for many patients at once, and to identify those whose answers need extra attention. We arrived at this goal with
input from anesthesia colleagues. Our predefined question came from the daily task
of assigning anesthesia providers to surgical procedures, but our findings can generalize
to other clinical tasks such as “charge” roles that involve planning for future shifts
of patient care.
Perioperative Environment and Anesthesia-In-Charge Role
The perioperative environment at the Vanderbilt University Hospital (VUH) is large
and complex. As a tertiary care center, we have 864 licensed beds and perform approximately
40,000 surgical procedures per year. In addition to a significant volume of procedures,
we also support education of a variety of trainees. Our anesthesiology service is
staffed by anesthesia faculty members, anesthesia residents, certified registered
nurse anesthetists, and student nurse anesthetists. Our supervision requirements allow
anesthesia faculty to staff up to two rooms if an anesthesia resident or student nurse
anesthetist is involved, or up to four rooms if only certified registered nurse anesthetists
are involved. This variable staffing ratio makes it important for the anesthesia-in-charge (AIC) to have a clear view of the acuity of the cases being staffed in order to make appropriate assignments. Many, but not all, patients undergo evaluation at the Preoperative Evaluation
Center, where clinical documentation is generated that specifically evaluates risk
for undergoing anesthesia and surgery.
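To make this staffing constraint concrete, the sketch below encodes the supervision rules just described in Python; the role names and helper functions are hypothetical illustrations for this paper, not part of the production assignment system.

from enum import Enum

class Provider(Enum):
    RESIDENT = "anesthesia resident"
    SRNA = "student nurse anesthetist"
    CRNA = "certified registered nurse anesthetist"

def max_rooms_for_faculty(supervised_rooms: list[Provider]) -> int:
    # Up to two rooms when a resident or student nurse anesthetist is
    # involved; up to four rooms when only CRNAs are involved.
    if any(p in (Provider.RESIDENT, Provider.SRNA) for p in supervised_rooms):
        return 2
    return 4

def assignment_is_valid(supervised_rooms: list[Provider]) -> bool:
    # One list entry per room the faculty member would supervise.
    return len(supervised_rooms) <= max_rooms_for_faculty(supervised_rooms)

Under these rules, for example, a faculty member supervising three CRNA-only rooms is within the cap, but adding a resident's room to that load would not be.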
VUH uses Epic (Epic Systems, Verona, Wisconsin, United States) as our EHR. This integrated
enterprise system is used for most aspects of perioperative care, including operative
case scheduling and documentation of patient conditions. In conjunction with Epic,
the anesthesiology service uses a separate system for entering the assignments of
specific anesthesia staff to specific operating rooms. Bi-directional interfaces between this system and Epic bring operative case scheduling information into the staff assignment system and import staff assignments back into Epic.
Every day, surgical patients are assigned specific rooms and times, and anesthesia
providers must then be assigned to each patient, accounting for the surgical procedure
being performed, the patient's comorbidities, acuity, and complexity, the experience
of the provider, and the case load for attending anesthesiologists supervising multiple
rooms. At VUH, this assignment task is performed by a few anesthesiologists who rotate through the role of AIC.
Objective
Previous discussions with AICs had determined that the assignment task required substantial time and cognitive effort, arising in part from the work of understanding each patient's condition in enough detail to match them with an appropriate anesthesia provider. Many low-acuity and low-complexity patients can be matched with any provider,
including supervised trainees. Likewise, the most complex patients are easily matched
to the most experienced providers. Those in between need further investigation to
make an appropriate assignment. But in a high-volume surgical center, the AIC does not have enough time to read even a one-page summary for every patient scheduled that day to determine where each patient lies on this continuum. Our display task
was to indicate a rough level of patient complexity and then draw attention to the
subset of patients who truly needed a deeper look by the AIC. Our objective was to
develop a visualization with the potential to lower the cognitive burden, uncertainty,
and time requirements of making the daily assignments.
Methods
This project was approved by the Vanderbilt University Institutional Review Board.
User Engagement in Co-Design
Given the small number of AICs in the institution, we took a co-design approach.[14] We define co-design by adapting Sanders's definition to specify software design:
“(software) designers and people not trained in (software) design working together
in the design development process.” Co-design, also referred to as participatory design,[15] [16] differs from user- or human-centered design in that potential users of a technology or process always participate in the design work. Other members of the team learn about the work through their involvement, and in turn the user-participants learn about software development, issues with data, and, in this case, visualization. In addition
to the AICs, our team included data scientists, some of whom were also physicians,
a social scientist, and a design scholar.
We conducted two in-depth interviews with a participant who works full-time in the
AIC role. The first interview was to understand the information needs in this role,
and the second interview was to gain feedback on the preliminary version of the tool
and further refine our understanding of the AIC's information needs. We also interviewed
a participant who performs the AIC role intermittently. In addition, after each version
of the tool was implemented, we engaged these and two other AICs to ensure that the
implemented changes were available to them in our production EHR environment and to
obtain their feedback. This co-design approach is feasible when only a small number of people perform the role for which technology is being designed. In traditional research terminology, our “sample” included the person who performed the role full-time and their backup.
Interviews were recorded and transcribed. Our data included notes from meetings, transcripts
from interviews, and drawings that were created during the interviews and meetings.
Data were analyzed by the team using a qualitative data analysis coding tool. Three
team members coded the data using an open coding (i.e., no a priori framework) approach.[17] The team reviewed the coding in meetings and discussed design themes that emerged,
and these were used to establish the initial design. Subsequent informal assessment
meetings with AIC participants resulted in adaptations to the design. These sessions
were not documented and analyzed with the formality of the initial interviews, given
the iterative methodology.
Iterative Development Cycle
We used an iterative development cycle in which participants gave feedback after each
design update. The following section details our experience in implementing the iterative
development cycle. Ultimately, there were six versions of the design. As described
above, participants were formally interviewed for the first two iterations, and we
documented subsequent feedback without formal interviews through direct communication
with the developer.
Results
Initial Interviews and the Work of the Anesthesia-In-Charge
Initial interviews established that the work of the AIC is complex, involving clinical
and social components, and information about people, spaces, technology, and medical
procedures. Information used to make provider assignments was located in a variety
of systems, including the electronic health record, the perioperative information
system, an equipment tracking system, messages from various personnel, and other sources.
The AIC estimated that 90% of scheduled cases were planned 1 day in advance, with
more complex cases being planned 2 to 3 days out. Information used included the surgical
specialty and specific surgeon (of which there are hundreds), the procedure, the complexity
of the case, and the baseline health of the patient. The AIC kept track of information about the various surgeons, including specific people or roles they preferred to work with, types of cases they performed, and other factors. The AIC tracked patient factors
including medical conditions, previous anesthesia complications, cardiovascular issues,
malformations in the face or airway, and pulmonary issues. The AIC also took note
of the patient's ASA score. The ASA Physical Status is a subjective preoperative summary
of a patient's clinical state, defined by the American Society of Anesthesiologists
(ASA), and is predictive of both perioperative and postoperative outcomes.[18] Its integer values range from healthy (1) through severe systemic disease (3) to
brain dead (6). The AIC also used information about room closures, equipment, and
special requests. The primary AIC noted that if they were able to identify the sickest
patients from the list, they would assign those cases first. The existing process involved assigning resident physicians first, then student nurse anesthetists, then certified registered nurse anesthetists, and finally attending physicians.
Iterative Design Refinements
In our working meetings, we reviewed all of our data and evaluated strategies for visualizing the data needed by the AICs. [Fig. 1] depicts options the team considered. Our first attempt at indicating patient information was an icon that showed ASA status, the presence of a difficult airway, and the severity of problems in multiple organ systems, arranged as a 3 × 3 square ([Fig. 2]). We had planned to compute the values displayed in the icon with sophisticated data science methods that would infer the severity of problems in each system from structured and unstructured information in the EHR. These displays looked very useful for individual patients; however, when collected into a display showing an icon for each surgical patient, AIC feedback was that the result induced information overload. With so many patients on a single page, a much simpler indicator was needed.
Fig. 1 Image of the whiteboard from a design session that depicts priority clinical characteristics,
options for creating and altering the visualization, and various layouts.
Fig. 2 An initial guess at a summary abstraction indicating a patient's complexity and acuity. Locations/colors indicate organ systems, and the degree of fill represents the degree of disease severity for that system. Systems are arranged top to bottom by relevance to anesthesia planning (top: airway, cardiovascular, pulmonary; middle: endocrine, renal, hepatic; bottom: neurologic, American Society of Anesthesiologists status, rare conditions; an important allergy is indicated here). The red border indicates high overall acuity.
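As an illustration of the icon's underlying data, the following minimal sketch models the 3 × 3 layout from [Fig. 2]; the field names, the 0-to-1 severity scale, and the rendering helper are our own hypothetical assumptions, and in the project the severity values themselves were to be inferred from structured and unstructured EHR data.

from dataclasses import dataclass

# Organ systems arranged top to bottom by relevance to anesthesia planning,
# mirroring the layout described in the Fig. 2 caption.
ICON_GRID = [
    ["airway", "cardiovascular", "pulmonary"],
    ["endocrine", "renal", "hepatic"],
    ["neurologic", "asa_status", "rare_conditions"],
]

@dataclass
class PatientIcon:
    severity: dict[str, float]  # per-system severity on a hypothetical 0.0-1.0 scale
    high_acuity: bool           # drives the red border in Fig. 2

    def cell_fill(self, row: int, col: int) -> float:
        # Degree of fill for one cell; systems with no inferred value render empty.
        return self.severity.get(ICON_GRID[row][col], 0.0)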
In the second iteration, we designed a single binary flag that signaled the need for
further evaluation by the AIC. The absence of the flag indicated a low-complexity
patient that needed no further investigation by the AIC. The presence of the flag
indicated that the patient had one or more predefined conditions relevant to the provision
of anesthesia, such as a known difficult airway, any history of severe heart failure, implantation of a ventricular assist device, a history of pulmonary hypertension, a history of moderate, severe, or critical aortic stenosis, a history of malignant hypertension, or a history of refusing blood products. These conditions were chosen by the AICs, and their presence was inferred using logical rules applied to information extracted from the patient's preoperative assessments, problem list, and past medical history. The output flag value was fed into the “snapboard” system that AICs use to review surgical cases when making assignments, and was visually represented with a lightning bolt icon (see [Fig. 3] for the lightning bolt on a subsequent version).
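The sketch below illustrates the kind of logical rule involved; the condition identifiers and the extracted-record format are hypothetical placeholders, not the actual extraction logic or Epic data structures.

# Predefined conditions chosen by the AICs; the identifiers here are
# hypothetical stand-ins for the rules applied to extracted EHR data.
COMPLEX_CONDITIONS = [
    "known_difficult_airway",
    "severe_heart_failure",
    "ventricular_assist_device",
    "pulmonary_hypertension",
    "aortic_stenosis_moderate_or_worse",
    "malignant_hypertension",
    "refuses_blood_products",
]

def needs_aic_review(extracted: dict[str, bool]) -> bool:
    # Show the lightning bolt if any predefined condition is present;
    # the flag's absence marks a low-complexity patient.
    return any(extracted.get(condition, False) for condition in COMPLEX_CONDITIONS)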
Fig. 3 Snapboard image depicting several design elements, including the lightning bolt icon, quick-summary mouse hover, and sidebar report (fictional test data are used in the image).
This design was implemented, and subsequent assessment by the AIC co-design team members
identified usability issues. First, the icon was not visually obvious due to the presence
of a variety of other anesthesia resource icons for each surgical case. Second, it was challenging to take in the entire cohort at once because the case list did not fit on a single screen, and scrolling was so slow that users tried to avoid it whenever possible.
To address these issues, we created a version of the snapboard that is used only for
making assignments, allowing us to remove the anesthesia resource icons that were
contributing to visual clutter, and we altered the height of the surgical case line
to allow more cases to be viewed simultaneously.
The high latency of scrolling was an Epic property that we were unable to change,
but our other revisions attempted to minimize the amount of scrolling that was needed.
This turned out to be the only aspect of Epic architecture that constrained our final
design. Other Epic constraints might have limited a more complex display design, but
the simplicity of the design preferred by our users avoided those limits.
The next iteration provided a summary level of detail for each patient on mouse hover.
Most of our surgical patients have undergone evaluation at our Preoperative Evaluation
Center, which results in the generation of an anesthesia preoperative evaluation.
This preoperative evaluation has been structured to provide a quick summary, as previously
described.[19] To address our AICs' information need, we developed functionality that extracts the one-sentence summary statement from the “history of present illness” section of the anesthesia preoperative evaluation and displays it when the user hovers over a flagged surgical case ([Fig. 3], yellow tooltip).
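The sketch below suggests one way such an extraction could work; the section-parsing regular expression, function name, and assumed note format are illustrations only, not the production extraction logic.

import re

def extract_hpi_summary(note_text: str) -> str | None:
    # Find the "history of present illness" section of the note; the
    # section layout assumed here (header, then text, then a blank line)
    # is hypothetical.
    match = re.search(
        r"history of present illness[:\s]+(.+?)(?:\n\n|$)",
        note_text,
        flags=re.IGNORECASE | re.DOTALL,
    )
    if not match:
        return None
    section = match.group(1).strip()
    # Keep only the first sentence for the hover tooltip.
    return re.split(r"(?<=[.?!])\s", section, maxsplit=1)[0]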
This iteration was deployed into production and evaluated by our AICs. We verified
that the changes had successfully addressed the concerns previously raised, and no
new issues were identified during this pilot deployment.
Our final iteration added detailed information in a sidebar report when a case was
selected via a mouse click ([Fig. 3], procedure note at right). This report includes detailed surgical case scheduling information, patient medications, past medical history, past surgical history, and the full content of the most recent anesthesia preoperative evaluation.
Given the small number of users, we did not conduct a formal evaluation of our display. However, informal follow-up indicated that the final version decreased the time and effort needed to make provider assignments, and AICs were enthusiastic about the improvements.
Discussion
We used an iterative approach to design a clinical display targeted at understanding the acuity level of all patients in a moderately sized cohort. In this approach, interviews conducted by a multidisciplinary team alternated with design updates. Some aspects of the final preferred design surprised us.
First, the appropriate level of abstraction was not what the design team first imagined, even after the initial user interviews, because an abstraction that was appropriate for a single record in isolation imposed too heavy a cognitive load when replicated for every record in the population and combined with all of the other information on the screen. The users' solution to this problem was to first identify the subset of patients for whom no further investigation was needed (the low-complexity, low-acuity patients, indicated by the absence of the icon), and then to answer, one at a time, the question of what was complex about the remaining patients.
Second, we were surprised that, except for the top-level flag, the final preferred design included only text-based abstractions rather than graphical abstractions. We would normally expect a graphical display to be preferred because, among other things, graphical displays allow for easier pattern recognition, which is the typically preferred cognitive mode for clinicians.[4] [20] [21] In our final design, the pattern recognition step happens at the population-level display; then, once the patients needing further attention are identified, the problem switches to a search-type cognitive task of understanding why each of those patients needs extra attention, which in this case is better served by a short text list.
The preference for text raises an interesting question for further research and highlights
the depth of task understanding needed in designing effective data displays.
This was a small study, but its results agree with other research indicating that an effective way to manage information overload is not simply to filter out information, but to summarize details into a more abstract form.[4] [22] In this case, the binary flag is the simplest possible abstraction to indicate the presence of a complex patient, the one-sentence summary indicates the dimensions in which the patient is complex, and the full report gives the details of that complexity. Each of these levels of abstraction was carefully tuned to the clinical requirements, and neither the appropriate number of levels nor the appropriate amount of summarization at each level was obvious at the beginning of the project. We expect that this design of progressive abstraction or summarization will prove useful for other clinical tasks in which a limit on the amount of information on the screen at one time is an important constraint. This type of design promotes trust in the summarization by allowing more detail to be revealed as desired,[4] in contrast to pure filtering or data-hiding designs, which users tend to mistrust.[23] [24]
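As a schematic of this progressive abstraction, the sketch below maps interaction depth to detail level; the function and field names are hypothetical and illustrate the pattern rather than the Epic implementation.

# Hypothetical progressive-disclosure pattern: each interaction level
# reveals a more detailed, pre-tuned abstraction of the same patient.
def render(patient: dict, level: int) -> str:
    if level == 0:
        # Population view: the binary flag alone (lightning bolt icon or nothing).
        return "FLAG" if patient["needs_review"] else ""
    if level == 1:
        # Mouse hover: the one-sentence summary of the patient's complexity.
        return patient["hpi_summary"]
    # Mouse click: the full sidebar report with scheduling, medications, and history.
    return patient["full_report"]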
The limitations of this study include the representation of only one hospital, a focus on visualizing data for work that is performed by only a few individuals, and the informal evaluation. However, the work of the AIC is central to the safe and efficient operation of surgical services in the hospital and thus deserves an optimized visual presentation of vast clinical data resources. Additionally, the insights from our study are likely useful for developing visualizations for people in other “charge” roles, for example, charge nurses, who consider the needs of an entire clinic or unit in planning for a future shift of clinical care.
Conclusion
Our iterative co-design process explored ways to visualize and understand a population
of patients to facilitate the task of making appropriate assignments of anesthesia
providers. Our process led to the user-preferred design of a single binary flag to identify the subset of patients needing further investigation, followed by a progression of increasingly detailed, text-based abstractions for each patient that can be displayed when more information is needed.
Clinical Relevance Statement
This study provides a description of a real-world project in which a designed informatics solution was implemented. We also described methods for engaging users in the process, establishing a precedent for others designing tools that have only a small number of users.
Multiple Choice Questions
-
In a co-design project, users are considered:
Correct Answer: The correct answer is option c. Users of products are seen as partners in co-design projects, helping other members of the team arrive at a workable design.
-
Optimal data displays are tailored to the background of the user, characteristics
of the data, and:
Correct Answer: The correct answer is option a. Data displays should support the tasks being performed by the user.