3 Results
3.1 Diversity in Design Phase
While the design approaches used in the reviewed studies are diverse, they all fit
into the simplified HF/E design cycle model shown in [Figure 1] [[1]]. We therefore use this design cycle model to describe the diverse body of recent
HF/E-focused health informatics research included in this synthesis. The studies included
in this synthesis each addressed some subset of the understanding, creation, and/or evaluation phases of the design cycle. In doing so, HF/E-focused health informatics
research seeks to understand user needs (both clinician and patient) and/or to create
and evaluate novel health informatics tools to address those needs.
Fig. 1 HF/E Design Cycle [[1]].
Numerous HF/E design frameworks and models used by researchers and practitioners within
the health informatics community align with the HF/E design cycle shown in [Figure 1], including human-centered design (HCD), user-centered design (UCD), and the system
development life cycle. HCD, as defined by ISO, is both process- and outcome-focused:
an “approach to systems design and development that aims to make interactive
systems more usable” [[4]]. HCD typically includes three general phases that are aligned with the HF/E design
cycle: Inspiration (Understand), Ideation (Create), and Implementation (Create + Evaluate)
[[5]]. UCD, while related to HCD, focuses more tightly on the needs of the specific end
users of a design, and is more commonly used in health informatics research [[4]]. UCD consists of iterative design cycles that involve understanding end user
needs (Understand) and designing and iteratively refining prototypes with close involvement
of end users (Create) before deploying final designs (Create + Evaluate). The Office
of the National Coordinator for Health Information Technology (ONC) now requires
“user-centered design processes to be applied to EHR technology that includes certain
capabilities” [[6]]. The exact UCD process is not prescribed by ONC, but resources exist from organizations
such as NIST to provide guidance on the UCD process with respect to EHRs [[7]]. Yet, one recent comprehensive study showed that vendors vary significantly in
the quality of their UCD practices, ranging from well-developed UCD processes to fundamental
misconceptions of the UCD process [[8]]. Several studies included in this synthesis addressed the value of following a
user- or human-centered design approach broadly, either conducting UCD or HCD in their
own work or noting the importance of these approaches in health informatics design
[[2], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17]]. However, for the reasons above, readers of HF/E-focused health informatics manuscripts
should be careful when interpreting the authors’ use of the (often confused) terms HCD
and UCD, and not assume their design process is of high quality. Other manuscripts
synthesize how user perspectives and needs can be integrated into various stages of
a health informatics-focused version of the systems development life cycle (SDLC),
including project planning, analysis (Understand), design of the system (Create),
implementation (Create + Evaluate), and system support/maintenance (Evaluate) [[18], [19]].
As shown in [Table 1], the bulk of studies included in this synthesis focused on understanding user needs
or on system evaluation efforts, but there was also a significant amount of work aimed
at creating technologies to meet user needs. These technology creation efforts
may be driven by a recent shift in design capability to a broader community beyond
EHR vendors, as the technology-focused portion of this synthesis describes the large
volume of recent work focused on EHR customizations and mobile app development.
Table 1
HF/E design cycle phase addressed, and methods used in studies.
HF/E Design Cycle Phase
  Understand: [9, 17, 20-51]
  Create: [9, 11, 15, 30, 41, 46, 50-70]
  Evaluate: [9, 11, 14, 15, 17, 30, 41, 46-49, 52-59, 66-126]
Study Methods
  Heuristic analysis: [16, 23, 41, 62, 69, 70, 121]
  Observation: [20, 22, 28, 32, 33, 36, 45, 72, 74, 93, 120, 124]
  Focus groups: [7, 9, 12, 15, 19, 22, 26, 31, 34, 41, 51, 55, 91, 100, 107, 108, 111, 118, 122, 127-133]
  Cognitive walkthroughs: [70, 121]
  Interviews: [11, 18, 20, 24-26, 29, 31-34, 36, 37, 47, 48, 51, 52, 55, 57, 58, 70, 74, 78, 79, 82-84, 88, 93, 99, 112, 114, 117, 119, 120, 122, 126, 134]
  Structured usability testing: [7, 16, 18, 66, 67, 69, 120, 132]
  User performance during simulated tasks: [54, 69, 93, 105, 113, 118, 123, 125, 135]
  Surveys: [14, 30, 31, 35, 48, 52, 55, 57, 64, 66, 73, 76, 83, 87, 88, 91, 93, 99, 101, 103, 107-109, 111, 114, 115, 120]
  Questionnaires: [14, 18, 48, 89, 91, 116-118, 120, 126]
The scope of analysis within which this HF/E design cycle occurs is diverse. For simplicity
in communication, the HF/E community often partitions its scope of work into the
three separate domains of cognitive, physical, and organizational ergonomics. Not
surprisingly, HF/E-focused health informatics research often focuses on one of these
three domains. Because health informatics technologies are often designed to support
information-intensive work, many HF/E-focused health informatics technologies aim
to support clinician and/or patient cognition. Commonly used cognitive ergonomics-focused
approaches, such as cognitive task analysis, focus on supporting and improving a range
of processes, including clinician and patient comprehension, decision-making, and distributed
or team cognition, as well as reducing errors [[18], [136]]. Some HF/E-focused health informatics research attempts to capture all three domains
(cognitive, physical, organizational), focusing on the complex sociotechnical systems
within which health informatics technologies are used. For example, SEIPS 2.0, a “sociotechnical
work system → process outcomes” model based on the Systems Engineering Initiative
for Patient Safety model (SEIPS), helps HF/E researchers and practitioners capture
and evaluate the system elements impacting and being impacted by health informatics
technologies [[137], [138]]. Another sociotechnical model developed by Sittig and Singh is “designed to address
the socio-technical challenges involved in design, development, implementation, use,
and evaluation of HIT within complex adaptive healthcare systems” [[139]].
Studies included in this synthesis used diverse methods to conduct the “understand
→ create → evaluate” HF/E design cycle phases, shown in [Table 1]. Many studies used or reviewed the use of one or more qualitative HF/E approaches,
such as heuristic analysis, observations, focus groups, cognitive walkthroughs, and
interviews. Others used or reviewed the use of one or more quantitative HF/E methods
such as structured usability testing, user performance measurements during simulated
tasks, surveys, and questionnaires. Many studies combined qualitative and quantitative
methods, such as quantitative measurements during simulated tasks followed by debriefing
interviews.
This large body of health informatics work using a diverse set of HF/E methods to
carry out the “understand → create → evaluate” cycle is exciting. The volume of studies
focused on understanding user needs, creating health informatics technologies to meet
those needs, and evaluating technologies shows that HF/E frameworks and methods are
becoming embedded in the activities of the health informatics community. The health
informatics community would benefit from HF/E-focused health informatics researchers
and practitioners focusing significant effort on consolidating methodological “understand
→ create → evaluate” best practices and ensuring that those best practices are accessible
to the broad health informatics community. For example, evaluations of users’ perceptions
of system usability ranged widely. While different methods and instruments measure different
constructs, it would be helpful to come to agreement on best practices for qualitative
and quantitative HF/E methods and to make those recommendations accessible to the broad
health informatics community. Doing so would help the HF/E-focused health informatics
community better synthesize findings from these labor- and resource-intensive studies.
3.2 Diversity in Technology Types
There are numerous ways we could categorize the technologies in the studies included
in this synthesis, such as the mode of delivery (e.g., handheld, desktop, etc.),
interaction type (e.g., voice, touch, gesture, etc.), or underlying algorithmic approach
(e.g., practice guideline-based, AI, machine learning, etc.). In this synthesis, we
focused on categorizing studies based on the expansion of health informatics design
opportunities supported by the (relatively) recent ability to customize EHR interfaces,
open APIs that allow developers to directly create new software tools leveraging EHR
data, and increasingly accessible platforms for mobile app development.
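To make this expanded design opportunity concrete, the sketch below illustrates how a developer might retrieve patient data from an EHR through an open, FHIR-style API. It is a minimal illustration rather than a method drawn from any included study; the server URL, patient identifier, and access token are hypothetical placeholders, and a production SMART on FHIR application would add proper authorization, error handling, and paging.

```python
# Minimal sketch of pulling EHR data through an open (FHIR-style) API.
# The base URL, patient ID, and token below are hypothetical placeholders.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"       # hypothetical FHIR endpoint
PATIENT_ID = "12345"                             # hypothetical patient resource id
HEADERS = {"Authorization": "Bearer <token>",    # token obtained via SMART on FHIR OAuth2
           "Accept": "application/fhir+json"}

# Request blood pressure observations (LOINC 85354-9) for one patient,
# newest first, limited to the ten most recent entries.
resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": PATIENT_ID, "code": "85354-9",
            "_sort": "-date", "_count": 10},
    headers=HEADERS,
    timeout=30,
)
resp.raise_for_status()
bundle = resp.json()

# Each entry is a FHIR Observation; systolic/diastolic values sit in components.
for entry in bundle.get("entry", []):
    obs = entry["resource"]
    when = obs.get("effectiveDateTime", "unknown time")
    for comp in obs.get("component", []):
        name = comp["code"]["coding"][0]["display"]
        value = comp["valueQuantity"]["value"]
        unit = comp["valueQuantity"]["unit"]
        print(f"{when}: {name} = {value} {unit}")
```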
As shown in [Table 2], while some studies included in this synthesis were focused on physicians interacting
with existing EHR computerized provider order entry (CPOE) interfaces, most studies
in clinical settings focused on customizations to EHRs. Many technologies focused
on the creation and evaluation of customized clinical decision support tools. The
application of these customized clinical decision support tools varied widely in context,
including diagnostic support [[17], [132]], antibiotic stewardship [[36], [70]], screening for and management of chronic conditions [[53], [91], [119]], and identifying individuals at risk for varied clinical outcomes [[50], [69], [87], [118]].
Table 2
Technology types addressed in studies.
Technology types
  CPOE interfaces: [105, 113]
  Customized clinical decision support tools: [17, 29, 33, 34, 36, 37, 49, 50, 53, 67, 69, 70, 81, 87, 91, 99, 118, 119, 126, 129, 130, 132, 142]
  Customized data displays: [10, 11, 50-52, 55, 61, 66, 72, 84, 92, 96, 98, 123-125]
  Mobile apps: [13, 15, 23, 25-27, 30, 35, 40, 41, 47, 54, 56-58, 63, 75, 85, 108, 110, 115-117, 121, 122, 127, 131]
Many studies focused on the creation and evaluation of customized data displays. These
customized data displays also focused on a diverse set of application areas, including
integrated dashboards [[10], [61], [124]], critical care displays [[96]], opioid management [[123]], plan of care tools [[125]], and patient-focused communication [[11], [55], [98]]. A smaller number of studies addressed non-EHR integrated information systems [[9], [65], [68], [83], [112], [120]], and EHR training design [[46], [59], [135]].
Many studies focused on creating and evaluating mobile apps, typically aimed at addressing
the needs of patients and consumers. These mobile apps addressed a variety of health conditions and needs, such as diabetes and hypertension [[63], [122]], cardiovascular health [[30], [58], [121]], cancer care [[58], [127]], mental health [[13], [23], [25], [57], [110]], seizure management [[57]], bladder monitoring [[56]], tuberculosis treatment [[15]], and parental education [[40], [41]]. A small number of studies focused on telehealth [[10], [14], [140]] and personal health records or patient portals [[38], [109], [141]].
While evaluations of EHR usability are still critical [[2], [3]], the ability of HF/E-focused health informatics researchers and practitioners to
be designers of new technologies – rather than purely evaluators – is critical for
health informatics technologies of the future to be useful to and usable by a variety
of end users. HF/E-focused health informatics researchers and practitioners must therefore
advocate for continued development of accessible design resources and platforms that
allow them to be innovators.
3.3 Outcomes of Interest
HF/E methods aim to improve how individuals interact with complex systems. Qualitative
and quantitative HF/E methods can help to understand and effect positive change in
a range of human-system interactions, including supporting health information technology
user cognition and understanding how health informatics technologies affect and are
affected by complex sociotechnical systems within which they are embedded. These positive
impacts of HF/E-focused health informatics research include improving the safety (reducing
risk of injury or death), performance (increasing productivity, quality, and efficiency)
and satisfaction (acceptance, comfort, and wellbeing) of health informatics technologies
[[1]]. While HF/E-focused health informatics research ideally improves all three of these
outcomes, the relative weighting of safety, performance, and satisfaction typically
depends on the context of application, as shown in [Figure 2]. The length of each leg of the triangle represents the relative importance of that
outcome, with a longer leg meaning that outcome is typically weighted more heavily
in that domain.
Fig. 2 Relative weighting of HF/E outcomes by domain [[1]].
Part of the challenge in designing and evaluating health informatics technologies
is the lack of clarity about the domain(s) within which a particular technology is deployed.
For example, EHRs and customized CDS and visual displays are typically deployed in
high-risk workplaces. Yet, as we show below, the bulk of the evaluation measures from
the HF/E-focused health informatics work included in this synthesis focused on measuring
(often only) end user satisfaction. There are a multitude of likely reasons for this,
including a known lack of system usability, an assumption about the interrelatedness
of these measures (e.g., the impact of performance on satisfaction), and the relative
ease of measuring perceived satisfaction. This tension becomes even greater as we
consider how mobile health data might be integrated into the EHR – where the implementation
of these technologies involves a consumer product and related technology deployed
in a high-risk workplace.
While patient safety is often a motivator for the development of health informatics
technologies, it is difficult to measure, and is therefore infrequently directly assessed
– especially considering the HF/E concept of safety focuses on reducing risk of injury
or death. As shown in [Table 3], some studies in this synthesis instead focused on how EHR designs might negatively
influence patient safety, including CPOE ordering accuracy [[105]], unexpected use of free text data entry [[71]], discrepancies in documentation during patient transfers [[86]], lack of patient identification during CPOE [[113]], and appropriate responses to alerts [[134], [143]]. Other studies used proxy measures that may be correlated with, or lead to, safety
issues. For example, a recent analysis of an inpatient safety dashboard in the context
of opioid management focused on measuring user performance (i.e., time on task, mouse
clicks, mouse movement, cognitive load, and task inaccuracy) [[123]]. Another analysis of a patient safety dashboard measured system usage and perceived
satisfaction [[124]]. A recent study focused on developing and evaluating a dashboard targeting acute
kidney injury (AKI) to improve patient safety measured system usage and performance
with respect to six quality indicators, but the quality measures were developed by
end users – not validated safety measures [[126]]. The measures in these studies are important and likely related to safety outcomes,
but the relative rarity of patient safety events and lack of empirically validated
safety markers makes directly measuring the impact of health informatics design on
safety quite difficult.
Table 3
Outcomes addressed in studies.
Outcome measures
Safety
  Potential impact of design on safety: [71, 86, 105, 113, 134, 143]
  Proxy measures: [123, 124, 126]
Performance
  Accuracy and completeness: [9, 16, 17, 52, 57, 93, 96, 98, 102, 117, 118, 125]
  Efficiency: [16, 17, 52, 54, 65, 76, 80, 83, 93, 95, 96, 99, 101, 104, 111, 123, 125]
Satisfaction
  Interviews and focus groups: [9, 10, 15, 26, 30, 37, 41, 46, 51, 55, 64, 74, 79, 82, 91, 93, 114, 119, 120, 122, 134]
  SUS: [9, 16, 46, 54, 57, 72, 117, 120]
  TAM: [46, 83, 99]
  UTAUT: [54, 82, 116]
  Health-ITUES: [84, 124]
  PSSUQ: [69]
  Computer System Usability Questionnaire: [125]
  TTF: [144, 145]
  SUMI: [89]
  Object-action interface questionnaire: [126]
  NuHISS: [103]
  Self-developed questionnaires: [14, 48, 73, 75, 77, 87, 91, 114, 115]
A larger set of studies focused on measuring aspects of performance (e.g., productivity,
quality, and efficiency) via measures related to time to complete tasks, markers of
task completion, and errors while completing those tasks. Many studies assessing performance
measured how accurate or complete the user interaction with the system was, with those
interactions varying from a layperson interpreting a visualization to a clinician
documenting a patient encounter. As shown in [Table 3], many studies focused on improving the efficiency of health informatics technologies,
by measuring current inefficiencies due to issues such as dispersion of information
across areas in the interface or interrupted workflows, and in some cases designing
tools that improve (or at least maintain) user time to complete tasks of interest.
The level of granularity of these time measures, however, ranged from milliseconds
(e.g., eye movements when searching for information) to hours or days (e.g., time
to complete results review).
Numerous studies assessed users’ satisfaction with a given technology. As shown in [Table 3], many studies used interviews and focus groups to glean users’ perceptions. Others
used a variety of questionnaire and survey instruments based on the System Usability
Scale (SUS), Technology Acceptance Model (TAM), Unified Theory of Acceptance and Use of
Technology (UTAUT), Health-ITUES, Post-Study System Usability Questionnaire (PSSUQ),
Computer System Usability Questionnaire, Task-Technology Fit (TTF), Software Usability
Measurement Inventory (SUMI), Object-Action Interface (OAI), and National Usability-focused
HIS Scale (NuHISS), as well as self-developed questionnaires.
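Because the SUS appears so often among these instruments, a worked scoring example may be helpful. The sketch below applies the standard SUS scoring rule (odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is multiplied by 2.5 to yield a 0-100 score); the responses shown are hypothetical.

```python
# Standard SUS scoring: 10 items rated 1-5.
# Odd items (positively worded) contribute (response - 1);
# even items (negatively worded) contribute (5 - response);
# the summed contributions are scaled by 2.5 to a 0-100 range.
def sus_score(responses: list[int]) -> float:
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical responses from one participant (not from any cited study).
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```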
Health informatics researchers and practitioners clearly see the need for their work
to jointly address the HF/E goals of improving safety, performance, and satisfaction.
Recent research demonstrates the breadth of methods and measures being used by the
HF/E-focused health informatics community to assess these three outcomes. Because
injuries or safety events of a specific type are relatively infrequent or difficult
to detect, safety is often measured via the prevalence of potentially unsafe actions
or via proxy measures that may be correlated with safety. While performance and satisfaction
are frequently assessed, there is relatively little cohesion around a standard set
of methods or measures to use when evaluating health informatics technologies. By
coming to agreement on methods to measure these outcomes, the health informatics community
can better synthesize findings across studies.