Keywords: FHIR - user-centered design - electronic health record - interoperability - research informatics
Background and Significance
Pneumonia is the most common serious infection in children and ranks among the top reasons for pediatric hospitalization.[1] Hospitalization rates vary widely among children's hospitals, even after accounting for differences in population or illness severity.[1] Risk prediction tools using real-time clinical data may improve care consistency and identify impending critical illness. Working toward that end, our team previously developed and validated a proportional odds ordinal logistic regression model using electronic health record (EHR) data to predict pneumonia severity in hospitalized children.[1] [2]
To be useful, models require timely, integrated, and interpretable presentation within EHR workflows. Users' perceived risk is influenced by how risk is presented, with contributing factors including numerical presentation, risk framing, and visual design elements. Accurate interpretation and effective communication of risk can be achieved by following best practices for visually presenting the results of clinical prediction models, as suggested by Van Belle and colleagues.[3] [4]
Each scenario is unique, requiring informed and thoughtful design to ensure model results are displayed in an interpretable way to the right person at the right time in the workflow. User-centered design (UCD) frameworks can optimize the chance for clear communication by incorporating best practices and iterative prototyping into the design process.[5] The Vanderbilt University Medical Center (VUMC) Center for Research and Innovation in Systems Safety (CRISS) has previously demonstrated the successful use of such frameworks.[6] [7]
Design scalability relies on cross-system interoperability. Health Level Seven's Fast Healthcare Interoperability Resources (FHIR) standard aims to provide consistent data standards and application programming interfaces (APIs) for health care interoperability.[8] SMART on FHIR extends FHIR by setting API standards and leveraging web standards for user authorization and authentication.[8]
As part of the Improving CarE for Community Acquired Pneumonia in Children (ICE-CAP)
pragmatic randomized clinical trial (identifier: NCT06033079), we were tasked with
integrating the previously derived prognostic model for children with pneumonia into
clinical workflows for real-time use in two pediatric emergency departments (EDs).
The trial required implementation with two different vendor systems: Epic (Epic Systems
Corporation, Verona, Wisconsin, United States) at the Monroe Carell Jr. Children's
Hospital at the VUMC and Cerner (Cerner Corporation, North Kansas City, Missouri,
United States) at the University of Pittsburgh Medical Center Children's Hospital
of Pittsburgh (UPMC).
This study describes the design, implementation, and testing of an interoperable,
EHR-integrated, SMART on FHIR-based tool for clinicians caring for children with pneumonia
in the ED. We first present our four-part sequential design process: interface design,
architectural design, testing, and deployment at VUMC. We then describe the process
of deploying the application at UPMC.
Objective
To support a pragmatic, EHR-based randomized controlled trial, we applied UCD methods,
evidence-based risk communication strategies, and interoperable software architecture
to design, test, and deploy a prognostic tool for children with pneumonia in the ED.
In the following sections, we describe four critical phases of the project: User Interface Design, Software Architecture, Evaluation and Deployment, and Second Site Deployment, each containing its respective Methods and Results.
User Interface Design
Methods
We followed national standards[9] and a UCD framework ([Fig. 1]) consisting of the following steps: problem analysis and user research, conceptual design and early prototyping, user interface development with iterative review and high-fidelity prototyping, formative evaluation, and postdeployment summative evaluation.
Herein, we describe the application of each phase to the design of our prognostic
tool.
Fig. 1 User-centered design framework. Reproduced with permission of Matthew Weinger MD,
Russ Beebe, & Vanderbilt University Medical Center 2014.
Setting
VUMC and UPMC are large, academic quaternary referral centers in Nashville, Tennessee, United States and Pittsburgh, Pennsylvania, United States, respectively. VUMC performed
model design and validation, application design, and initial deployment. UPMC subsequently
adapted and deployed the tool.
Prognostic Model Information
The previously described multivariable proportional odds logistic regression model uses age, sex, race, temperature, heart rate, respiratory rate, systolic blood pressure, and the ratio of partial pressure of arterial oxygen to fraction of inspired oxygen (PaO2:FiO2) as predictors, applying restricted cubic splines for continuous variables to relax linearity assumptions and interaction terms to account for age-dependent vital sign norms.[2] When true PaO2 and FiO2 measurements were unavailable, we estimated them using oxygen flow rates and arterial oxygen saturation.[10] [11] [12] [13] The model predicts three severity classes based on the most severe outcome experienced during the encounter: very severe represents respiratory failure requiring invasive mechanical ventilation, shock requiring vasopressors, or death; severe represents other intensive care unit (ICU)-level care; and mild–moderate represents all other admitted or discharged children not requiring ICU care. Our team previously validated the model, which demonstrated very good discrimination and calibration in the ED setting.[2] [14]
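For readers unfamiliar with ordinal models, the sketch below illustrates how a proportional odds model converts a single linear predictor into probabilities for the three severity classes. The intercepts and linear predictor shown are illustrative placeholders, not the published model coefficients (which use restricted cubic splines and age interactions).

```typescript
// Illustrative sketch only: how a proportional odds model yields three
// ordered class probabilities from one linear predictor. Values are NOT
// the published ICE-CAP model coefficients.
const expit = (x: number): number => 1 / (1 + Math.exp(-x));

interface SeverityProbabilities {
  mildModerate: number; // no ICU-level care
  severe: number;       // other ICU-level care
  verySevere: number;   // mechanical ventilation, vasopressors, or death
}

function severityProbabilities(
  linearPredictor: number,        // X*beta from a fitted model (hypothetical here)
  interceptSevereOrWorse: number, // logit intercept for P(severe or worse)
  interceptVerySevere: number     // logit intercept for P(very severe)
): SeverityProbabilities {
  // Cumulative probabilities of being at or above each severity threshold
  const pSevereOrWorse = expit(interceptSevereOrWorse + linearPredictor);
  const pVerySevere = expit(interceptVerySevere + linearPredictor);
  return {
    mildModerate: 1 - pSevereOrWorse,
    severe: pSevereOrWorse - pVerySevere,
    verySevere: pVerySevere,
  };
}

// Example with made-up numbers: probabilities sum to 1 across the three classes
console.log(severityProbabilities(0.4, -2.0, -3.5));
```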
Problem Analysis and User Research
We conducted user research at VUMC, UPMC, and Primary Children's Hospital in Salt
Lake City, Utah, United States. (The Utah team contributed to user research but did
not participate in the implementation aim described in this manuscript). User research
consisted of field observations with interactive probing (i.e., contextual inquiry)
as well as formal interviews with individual providers and nurses in the EDs and ICUs.[15] We examined care processes for children with pneumonia and how clinicians make prognostic-
and disposition-related decisions. Participants included faculty physicians, resident
and fellow physicians, advanced practice providers (APPs), and nurses.
Conceptual Design and Early Prototyping
Based on the results of problem analysis and user research (discussed in “Results”),
we proceeded to wireframe development followed by iterative refinement informed by
formative usability testing.[16] We incorporated evidence-based design principles into our prototypes with attention
paid to numeracy, risk-framing, and visual communication strategies. Informed by user
preference for risk aversion, we employed negative framing in presenting our predicted
outcomes. Negative framing emphasizes the potential downside, such as stating there
is a “5% probability of a very severe outcome,” rather than highlighting the positive,
like a “95% probability of a less severe outcome.” We incorporated Van Belle's techniques
for presenting risk models, including the use of lines for representation of relative
risks, use of hue for categorical differences and saturation/brightness for scale
changes, and simultaneous use of both numbers and graphs.[3] [4]
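As a simplified illustration of this encoding strategy (not the production interface code), the sketch below maps outcome category to hue and predicted probability to saturation and brightness; the specific hues and scaling are assumptions for demonstration only.

```typescript
// Hue distinguishes outcome categories; saturation/brightness scales with risk.
type SeverityClass = 'mild-moderate' | 'severe' | 'very severe';

// Distinct hues for categorical differences (degrees on the HSL color wheel)
const CLASS_HUE: Record<SeverityClass, number> = {
  'mild-moderate': 210, // blue
  'severe': 35,         // orange
  'very severe': 0,     // red
};

function riskColor(severity: SeverityClass, probability: number): string {
  // Saturation grows and lightness dims as the predicted probability increases
  const saturation = Math.round(30 + 70 * probability);
  const lightness = Math.round(70 - 25 * probability);
  return `hsl(${CLASS_HUE[severity]}, ${saturation}%, ${lightness}%)`;
}

// Pair the color with an explicit number, per the "numbers and graphs" guidance
console.log("5% probability of a very severe outcome", riskColor('very severe', 0.05));
```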
Close collaboration between the design, development, and analytic teams assured accurate
backend functionality and feasible user interfaces. Originally, we intended to perform
in-person user evaluations. However, due to the coronavirus disease 2019 pandemic,
we adapted our approach and conducted individual virtual usability tests, allowing
users to interact with high-fidelity prototypes over a videoconferencing platform.
Results
Problem Analysis and User Research
The CRISS team observed or interviewed 39 pediatric emergency medicine team members:
12 faculty physicians, 9 fellows, 11 residents, 4 APPs, and 3 triage nurses. Overall
user preference themes included risk averseness, accessibility, and importance of
timing. Risk averseness refers to clinicians' reluctance to accept risk estimates from a tool that are lower than their clinical gestalt suggests. Quotes suggestive of this theme included: "…people are unlikely to downgrade…" and "the tool should focus on catching misses." Accessibility refers to the need for risk communications to be embedded and easily accessed within the EHR itself (i.e., clinicians would not leave the EHR to obtain risk scores). The importance of timing refers to the need to display model results after testing (e.g., laboratory studies or chest X-rays) was completed, but before clinicians committed to disposition decisions.
Conceptual Design and Early Prototyping
Early design efforts yielded several prototypes, progressing from paper prototypes
to wireframe diagrams and, finally, high-fidelity prototypes. An image of the high-fidelity
prototype is available in the [Supplementary Materials] (available in the online version).
Software Architecture
As we approached the final design, we turned our attention to software architecture
and EHR integration. This section details our implementation in Epic at VUMC. Interoperability
issues faced when adapting the application for Cerner at UPMC are discussed later
in the manuscript.
Methods
Clinical Decision Support Considerations
Design requirements and constraints mandated that the system: (1) integrate with an
existing pneumonia identification system (PIS) [VUMC only]; (2) facilitate trial enrollment
and randomization; (3) allow for downstream reporting; and (4) function within both
Epic and Cerner.
The PIS at VUMC is a natural language processing (NLP) system employing a random forest model that predicts the likelihood of pneumonia based on chest X-ray radiology reports and files its results to Epic flowsheet rows, which can then trigger clinical decision support (CDS).[17] [18]
To address these constraints, we began our architectural design using the five "rights" of CDS ([Table 1]).[19]
Table 1 Five "rights" design requirements for pediatric pneumonia prognostic tool
Right information: model score (and visualization); predictor values; relative contribution of predictors.
To the right person: pediatric emergency medicine attendings, fellows, residents, nurse practitioners, and physician assistants.
In the right format: as determined by the user research and interface design described above, including risk framing and numerical risk communication.
In the right channel: as close to the EHR as possible.
At the right time in workflow: while the user is focused on the patient; after model predictors are available; after the X-ray report is interpreted, allowing the pneumonia identification system to function.
Abbreviation: EHR, electronic health record.
Given the PIS's ability to trigger native EHR workflows, we chose to use an interruptive
alert (Epic BestPractice Advisory or BPA) to initiate the CDS workflow. We then considered
how to implement model calculations, the user interface for model result display,
and the randomization and enrollment module.
For model calculation, we first considered Epic- and Cerner-native clinical scoring systems, but neither could handle ordinal outcomes at the time. We then considered Epic's Cognitive Computing platform, a proprietary system that integrates advanced predictive models into EHR workflows, but enterprise-level licensing costs, trial deadlines, and interoperability concerns made this impractical. We ultimately chose to create a custom application designed to address our statistical, interface, and interoperability needs. During the design phase of this project, both Epic and Cerner maintained FHIR APIs, which facilitated interoperable design. Therefore, we chose the SMART on FHIR platform to integrate the application with the EHR.[8]
At VUMC, we used Epic's Active Guidelines functionality to launch our custom application.
Active Guidelines allows for simplified authentication and authorization, communication
of application context information, and application interface rendering directly within
an Epic browser window. Local expertise and server infrastructure led us to implement
the application using JavaScript via the Angular framework (Google, LLC, Mountain
View, California, United States).
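The manuscript does not reproduce the application code; as an illustrative sketch only, the snippet below shows what an EHR-embedded SMART on FHIR launch can look like when using the open-source SMART JavaScript client (fhirclient), which we assume here for demonstration. The client identifier, scopes, and redirect target are placeholders.

```typescript
// Sketch of a SMART on FHIR EHR launch; not the production application code.
import FHIR from "fhirclient";

// launch page: redirect the browser to the EHR's authorization endpoint
FHIR.oauth2.authorize({
  clientId: "pneumonia-prognosis-app", // hypothetical client ID
  scope: "launch patient/Patient.read patient/Observation.read openid fhirUser",
  redirectUri: "index.html",
});

// app page (after redirect): complete the launch and read launch context
FHIR.oauth2.ready().then(async (client) => {
  const patient = await client.patient.read(); // patient in EHR context
  const vitals = await client.request(
    `Observation?patient=${client.patient.id}&category=vital-signs`
  );
  console.log(patient, vitals);
});
```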
Knowledge Management Considerations
Prior to development efforts, we performed Epic FHIR server validation. We first located
predictor variables in Epic based on observational and interview-based user research,
which allowed for verification of clinical workflows and variation in documentation
practice. We successfully identified all predictors as data elements in Epic.
We then built an application entry in Epic's App Orchard to serve as a test harness
for the FHIR server and its exposed APIs. This entry point facilitated the creation
of unique client identifiers, the exposure of FHIR web services, server endpoints,
and other application settings. Later, the application entry facilitated application
deployment.
We used Postman (Postman Inc, San Francisco, California, United States) to evaluate
web services exposed by Epic's FHIR server. Local expertise with FHIR version STU3 (as well as anticipated changes in the R4 version during the time course of application development) led us to use STU3. We then explored the FHIR resource requests needed to retrieve each model predictor.
While most patient data (including name, age, demographics, and vital signs) could be retrieved through standard FHIR requests using either default or U.S. Core profiles, we encountered barriers concerning oxygen supplementation data, which were not available through Epic's FHIR implementation. To resolve this issue, we used the "Observation.Search" request to extract the required data from Epic's native flowsheets, which are not part of the standard core profiles, successfully meeting our validation criteria. A screenshot of the API validation process of an Observation.Search request is included in the [Supplementary Materials] (available in the online version).
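As an illustration of the kind of request validated in Postman, the sketch below issues a FHIR Observation search; the base URL is a placeholder, and the example LOINC code stands in for the site-specific flowsheet identifiers we actually queried.

```typescript
// Sketch of an Observation.Search-style request; endpoint and codes are placeholders.
const FHIR_BASE = "https://ehr.example.org/fhir/stu3"; // hypothetical endpoint

async function searchObservations(
  accessToken: string,
  patientId: string,
  code: string // e.g., LOINC "59408-5" (oxygen saturation by pulse oximetry)
) {
  const url =
    `${FHIR_BASE}/Observation?patient=${patientId}` +
    `&code=${encodeURIComponent(code)}&_sort=date&_count=50`;
  const response = await fetch(url, {
    headers: {
      Authorization: `Bearer ${accessToken}`,
      Accept: "application/fhir+json",
    },
  });
  if (!response.ok) throw new Error(`FHIR search failed: ${response.status}`);
  return response.json(); // a FHIR Bundle of matching Observations
}
```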
The Observation.Search request will return multiple values that meet the specified criteria. For most predictors (age, sex, temperature, heart rate, respiratory rate, systolic blood pressure), we used the first documented values during the visit. Our estimation of the PaO2:FiO2 ratio requires contemporaneous SpO2 measurements and FiO2 measurements or estimates, depending on the modality of oxygen provided. To account for this, we used the first measured SpO2 value and the closest PaO2 or FiO2 value within 1 hour of the SpO2 measurement. If a measured or estimated PaO2 or FiO2 was unavailable during that window, room air was assumed.
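A simplified sketch of this selection rule follows, with the data reduced to timestamped values rather than full FHIR Observation resources; it is illustrative, not the production implementation.

```typescript
// Sketch: first documented SpO2 of the visit, paired with the closest FiO2
// within 1 hour; otherwise assume room air (FiO2 0.21).
interface TimedValue {
  value: number;
  time: Date;
}

const ROOM_AIR_FIO2 = 0.21;
const ONE_HOUR_MS = 60 * 60 * 1000;

function pairSpo2WithFio2(
  spo2Readings: TimedValue[],
  fio2Readings: TimedValue[]
): { spo2: TimedValue; fio2: number } | null {
  if (spo2Readings.length === 0) return null;

  // First documented SpO2 during the visit
  const spo2 = [...spo2Readings].sort(
    (a, b) => a.time.getTime() - b.time.getTime()
  )[0];

  // Closest FiO2 within 1 hour of that SpO2 measurement
  const candidates = fio2Readings
    .filter((f) => Math.abs(f.time.getTime() - spo2.time.getTime()) <= ONE_HOUR_MS)
    .sort(
      (a, b) =>
        Math.abs(a.time.getTime() - spo2.time.getTime()) -
        Math.abs(b.time.getTime() - spo2.time.getTime())
    );

  // If nothing qualifies within the window, assume the patient was on room air
  return { spo2, fio2: candidates.length > 0 ? candidates[0].value : ROOM_AIR_FIO2 };
}
```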
Enrollment and Randomization Module
The pragmatic nature of the clinical trial required enrollment to be EHR based and
embedded within clinical care under waiver of informed consent. The waiver of informed
consent was justified based on feasibility concerns. Given the narrow time window
for clinical decision-making, it was critical that clinicians be the ones to enroll
patients. It would have been neither practical nor safe, from either a timing or a user interface standpoint, for a research assistant to act as an intermediary and interrupt the clinical decision-making process. Given the conservative framing of
the tool from a clinical decision-making standpoint, this strategy was considered
to confer the least risk to the patient.
We designed the system to handle encounter enrollment and randomization to control
and intervention groups. To facilitate EHR-based randomization, we used a random number generator within Epic via MUMPS code to file a random integer (0 or 1) to a patient-level data structure (SmartData Element), with the two values corresponding to the control and intervention arms, respectively, in a roughly 1:1 ratio (subject to variation from the random number generator). We used interruptive alerts (BPAs) to implement the randomization and enrollment module and to launch the custom app within Epic. Upon successful randomization and enrollment, the application launched immediately and automatically via Active Guidelines.
Reporting Considerations
From conception, we designed the system to use EHR-native data structures to enable
downstream reporting. Specifically, our platform needed to write back enrollment,
randomization, and model result data. Since Epic's FHIR servers were able to write
to study-specific data elements using flowsheet rows via the Observation.Create FHIR
resource request, we used this approach. [Table 2] shows the data elements the application returns to Epic.
Table 2 Application data elements returned to Epic via FHIR interface (model data element: available in FHIR U.S. Core?)
Predictors
SpO2 value displayed: Yes
FiO2 value displayed: No
O2 flow rate displayed: No
O2 delivery displayed: No
RR value displayed: Yes
HR value displayed: Yes
Systolic BP value displayed: Yes
Age displayed: Yes
Temperature value displayed: Yes
Gender displayed: Yes
Race displayed: Yes
Outcomes
Predicted non-ICU percentage risk displayed: –
Predicted non-ICU raw score displayed: –
Predicted ICU percentage risk displayed: –
Predicted ICU raw score displayed: –
Intubation/shock or worse percentage risk displayed: –
Intubation/shock or worse raw value displayed: –
Model run date/time stamp: –
Abbreviations: BP, blood pressure; FHIR, Fast Healthcare Interoperability Resources; HR, heart rate; ICU, intensive care unit; RR, respiratory rate.
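For illustration, the write-back step described above can be sketched as a FHIR Observation.Create call (a POST of an Observation resource); the coding shown is a placeholder rather than the study's actual flowsheet row identifiers.

```typescript
// Sketch of writing a model result back to the EHR via Observation.Create.
// Codes and identifiers are placeholders; actual flowsheet rows are site-specific.
async function writeModelResult(
  fhirBase: string,
  accessToken: string,
  patientId: string,
  encounterId: string,
  predictedVerySevereRisk: number // e.g., 0.05 for a 5% predicted risk
) {
  const observation = {
    resourceType: "Observation",
    status: "final",
    code: {
      // Placeholder coding for the study-specific flowsheet row
      coding: [{ system: "urn:example:study-flowsheet", code: "VERY-SEVERE-RISK" }],
      text: "Predicted risk of very severe pneumonia outcome",
    },
    subject: { reference: `Patient/${patientId}` },
    // Observation.context in STU3 (Observation.encounter in R4)
    context: { reference: `Encounter/${encounterId}` },
    effectiveDateTime: new Date().toISOString(),
    valueQuantity: { value: predictedVerySevereRisk * 100, unit: "%" },
  };

  const response = await fetch(`${fhirBase}/Observation`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/fhir+json",
    },
    body: JSON.stringify(observation),
  });
  if (!response.ok) throw new Error(`Observation.Create failed: ${response.status}`);
  return response.json();
}
```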
We also considered several other platforms. We refrained from using an external database
managed by the custom development team, given the required downstream federation of
data sources and the absence of a longitudinally available database administrator.
We similarly elected not to use REDCap,[20] as there were anticipated challenges with integrating REDCap data with Epic-native structures in an automated fashion without additional custom development.
Finally, a dashboard was created using Tableau (Tableau Software, Mountain View, California,
United States) as the presentation layer and Epic's Clarity database as the backend
layer for longitudinal study reporting.
Results
The final architectural design is shown in [Fig. 2]. The enrollment and randomization alert is displayed in [Fig. 3]. The custom application is shown in [Fig. 4]. A screenshot of the reporting dashboard is shown in [Fig. 5]. A video of the final workflow is included in the [Supplementary Materials] (available in the online version).
Fig. 2 Final software architectural design at VUMC for custom application. VUMC, The Vanderbilt
University Medical Center.
Fig. 3 Epic-based enrollment and randomization module via BestPractice Advisory interruptive
alert.
Fig. 4 Final application user interface integrated within Epic EHR workflow at VUMC. This
is a test patient in a test environment. EHR, electronic health record; VUMC, The
Vanderbilt University Medical Center.
Fig. 5 Enrollment dashboard deployed using Tableau Server to monitor ongoing enrollment,
randomization, and model views.
Evaluation and Deployment
Methods
All modules underwent rigorous testing in distinct environments: development for initial
code checks, proof of concept for feasibility, and test for functionality. They were
first examined individually (component testing) and then together through integrated
end-to-end testing prior to migration to the production environment. End-to-end integration
testing required real-time collaboration from the following teams:
Local EHR client access team for enabling FHIR endpoints within the EHR.
EHR integration team for application access.
Custom development team for real-time debugging and application deployment.
Server administration for app migration.
EHR ED team for EHR build migration through development, test, and production environments.
Clinical subject matter experts for evaluating score validity and app behavior.
User acceptance testing was performed before the application was available to end
users. Test cases were written using clinical scenarios extracted from a prior study
within the same grant.
Results
We unit tested the CDS system, FHIR app, and FHIR app dataflows separately, which
identified several configuration problems requiring correction prior to end-to-end
testing. During end-to-end testing, we identified environmental differences between
test and production environments requiring correction for the app to function. Having
the development and clinical teams present in real time during end-to-end testing
sessions allowed for prompt resolution and successful deployment.
The prognostic tool was successfully deployed to a production environment in November
2020. Between then and study completion in November 2022, the enrollment workflow
at VUMC triggered during 1,310 patient encounters, of which 333 (25.4%) were deemed
eligible by clinicians for enrollment. Of these, 147 (44.1%) were randomized to the
intervention group. Among the intervention group, the model was viewed in the FHIR
application by at least one user for 120 (81.6%) encounters. Enrollment for the trial
is now complete, and results associated with clinician decision-making and formal
usability evaluation with iterative updates will be submitted for publication upon
completion of the primary analyses.
Second Site Deployment (University of Pittsburgh Medical Center)
Methods
Technology, interoperability, workflow, and interface challenges arose when transitioning
the application for use at UPMC. The application was developed using the Angular framework,
with which UPMC's developers were unfamiliar. The necessary training and familiarization
took additional time that had not been anticipated. Regular meetings between the development
teams from both sites minimized this time cost.
The VUMC application server was deployed in a Linux environment, whereas UPMC employs
primarily Windows Internet Information Services servers (Microsoft Corporation, Redmond,
Washington, United States). Deployment at UPMC therefore required provisioning a new Linux server and the Nginx web server (F5 Inc, Seattle, Washington, United States) for app delivery.
There were between-site variations in vendor FHIR server implementations, resulting in inconsistent FHIR resource availability. Some variables, such as oxygen delivery, required a "back door" through Epic's native web services, and Cerner's FHIR server lacked equivalent functions. To address this, UPMC developers carefully mapped concepts using LOINC (Logical Observation Identifiers Names and Codes) codes and extracted many candidate variables using a mix of FHIR and Cerner-native web services.
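The sketch below illustrates the style of concept map that resulted. The vital-sign LOINC codes are standard, but the native-service identifier is a hypothetical placeholder; the actual mappings at UPMC are not reproduced here.

```typescript
// Illustrative predictor-to-terminology map: FHIR retrieval by LOINC where
// possible, vendor-native services where no reliable FHIR path existed.
interface PredictorMapping {
  predictor: string;
  loinc?: string;          // retrieved via FHIR Observation search
  nativeService?: string;  // fallback to a vendor-native web service
}

const PREDICTOR_MAP: PredictorMapping[] = [
  { predictor: "Heart rate", loinc: "8867-4" },
  { predictor: "Respiratory rate", loinc: "9279-1" },
  { predictor: "Systolic blood pressure", loinc: "8480-6" },
  { predictor: "Body temperature", loinc: "8310-5" },
  { predictor: "SpO2", loinc: "59408-5" },
  // Oxygen delivery had no reliable FHIR path at either site, so it falls
  // back to a native service (identifier below is hypothetical)
  { predictor: "Oxygen delivery", nativeService: "cerner:custom-flowsheet-service" },
];
```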
Beyond the challenges from an application and server standpoint, one-to-one mappable CDS workflows do not exist between Epic and Cerner for triggering and displaying the application, and there was differing infrastructure for randomizing and enrolling patients. For instance, since the NLP-based workflow was not available at UPMC, we required a different approach to enrollment. Using Cerner's Discern rule-based framework, we initiated the workflow when a clinician opened the patient's chart and the following criteria were present: a resulted chest X-ray and an ED chief complaint of respiratory problem, cough, or congestion/upper respiratory infection. Patients were randomized to proceed further based on whether their encounter ID was even or odd. If randomized, the EHR displayed a passive alert via an icon on the Cerner ED track board (a department-level, spreadsheet-style tool that allows clinicians to monitor ED care). If this icon was selected, clinicians were shown a Cerner PowerForm (data entry module) that described eligibility criteria and facilitated enrolling the patient in the study. If enrolled, the FHIR application was launched using Cerner's mPage (web integration) functionality, and the remainder of the workflow was similar to VUMC's. [Fig. 6] depicts the workflow and architecture at UPMC.
Fig. 6 Parallel software architectural design at UPMC describing enrollment and application
launch. UPMC, University of Pittsburgh Medical Center.
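The parity-based randomization rule described above can be sketched as follows; which parity maps to which arm is an assumption made here for illustration, as the manuscript states only that even versus odd encounter IDs determined assignment.

```typescript
// Sketch of parity-based arm assignment from the encounter identifier.
function assignArmByEncounterId(encounterId: number): "intervention" | "control" {
  return encounterId % 2 === 0 ? "intervention" : "control";
}

// Example usage with a hypothetical encounter ID
console.log(assignArmByEncounterId(1047)); // "control"
```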
Weekly meetings with the clinical, UCD, and development teams from both sites ensured
both application and EHR workflow versions matched as closely as possible while maintaining
the integrity of key metrics for downstream reporting. Ultimately, a similar test
strategy was employed for testing each component and end-to-end integration of the
app at UPMC before the tool was deployed in production.
Results
A PowerPoint file depicting the UPMC workflow is included in the [Supplementary Materials] (available in the online version). The pediatric pneumonia prognostic tool was deployed
in a production environment from November 2020 through November 2022. During the study
period, the UPMC EHR interface triggered during 3,117 encounters. Clinicians deemed
201 of 3,117 (6.4%) patients eligible for enrollment, of whom 117 (58.2%) were randomized
to the intervention group. Among the intervention group, the model was viewed in the
FHIR application by at least one user during 81 (69.2%) encounters.
Discussion
We employed UCD and CDS principles to design, implement, test, and deploy a custom,
EHR-integrated FHIR application facilitating enrollment, randomization, model visualization,
data capture, and reporting in support of a randomized pragmatic clinical trial. We
learned many lessons during this process related to knowledge management, integrated
custom app maintenance, and interoperability. We suspect the differences in enrollment rates between sites were related to differences in the enrollment workflow (i.e., the presence or absence of the PIS NLP tool), which could lead to a broader range of patients being included at UPMC than at VUMC. This highlights the value of tools such as the PIS for focusing enrollment and reducing alert fatigue.
Knowledge Management
Due to the dynamic nature of medical knowledge and representation, EHR data structures
and features change regularly. Ideally, upcoming EHR upgrades or local configuration
changes, such as where vital signs are recorded, would be perfectly communicated to
all teams who use them. However, communication failures are inevitable with downstream
consequences for custom apps. For instance, between implementing the PIS and the custom
application, many flowsheet rows in Epic at the VUMC representing oxygen delivery
changed due to clinical needs. This change was only detected by observing clinicians
and nurses at the point of care. Communication with end users and frequent regression
testing can identify these changes early.
Another knowledge management challenge relates to changing standards (e.g., FHIR)
and individual standard implementations (e.g., by vendor systems). Support teams usually
broadcast these changes, but communication failures or nonbroadcast, developer-level feature changes (e.g., flowsheet access in Epic) can lead to unidentified bugs. For
example, after deploying our app, Epic changed how its FHIR Server handled patient
encounter information during application launch, which led to application failure
and the erroneous exclusion of an eligible encounter. Furthermore, this error was
only detected after our team noticed aberrant behavior from the application and collaborated
with our developers and the EHR vendor to identify the bug. Solutions to minimize
the impact of these changes include scheduled regression testing of both custom apps
and launch harnesses, frequent communication with vendor technical support staff,
and ensuring that custom development teams are included in new feature reviews from
vendor systems.
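One way to operationalize such scheduled regression testing is a periodic smoke test against a known test patient; the sketch below is illustrative, with placeholder endpoint, queries, and alerting mechanism rather than our actual test harness.

```typescript
// Sketch of a scheduled smoke test: exercise the FHIR requests the app depends
// on and report any that stop working after an upgrade or configuration change.
const REQUIRED_QUERIES = [
  "Patient/test-patient-id",
  "Observation?patient=test-patient-id&code=8867-4",  // heart rate
  "Observation?patient=test-patient-id&code=59408-5", // SpO2
];

async function runSmokeTest(fhirBase: string, accessToken: string): Promise<string[]> {
  const failures: string[] = [];
  for (const query of REQUIRED_QUERIES) {
    try {
      const res = await fetch(`${fhirBase}/${query}`, {
        headers: {
          Authorization: `Bearer ${accessToken}`,
          Accept: "application/fhir+json",
        },
      });
      if (!res.ok) failures.push(`${query}: HTTP ${res.status}`);
    } catch (err) {
      failures.push(`${query}: ${String(err)}`);
    }
  }
  return failures; // a non-empty list would trigger a notification to the team
}
```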
Integrated Custom App Maintenance
In contrast to a traditional software development environment where detailed automated
unit tests can be deployed, EHR systems typically use what-you-see-is-what-you-get
editors that reduce barriers to entry but allow for small environmental differences
to go undetected. For instance, during the implementation of this project, there were several times when inconsistent browser allowlists, data structure identifiers,
and user security settings between development, test, and production environments
variably impacted our app in each environment. Furthermore, variable enabling of vendor
FHIR server features can unexpectedly impact integrated applications. Strategies for
managing these challenges include rigorous regression testing performed in each environment
and robust application logging to facilitate debugging after unexpected clinical events,
which are often hard to simulate or anticipate in test environments. An example where
logging was helpful was a case where a patient had more than 1,000 vital signs charted
over several visits, which exceeded the FHIR server's capacity. This had not been
anticipated during design, and the audit log allowed us to detect and address this
issue. Finally, implementing a “debug mode” for developers in production environments
(without writing to production data structures) was useful for identifying configuration
idiosyncrasies between environments. Formal risk analysis during architectural design
might help teams designing similar applications anticipate some of the challenges
we encountered prior to deployment.[21]
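As an example, defensive handling of very large result sets (such as the encounter with more than 1,000 charted vital signs) might follow FHIR Bundle paging links with an explicit cap and audit logging; the sketch below is illustrative, and the cap and logging mechanism are assumptions.

```typescript
// Sketch: page through a FHIR search result via Bundle "next" links, cap the
// total retrieved, and log enough to reconstruct a failure afterward.
const MAX_RESOURCES = 500;

async function fetchAllPages(firstPageUrl: string, accessToken: string) {
  const resources: unknown[] = [];
  let url: string | undefined = firstPageUrl;

  while (url && resources.length < MAX_RESOURCES) {
    const res = await fetch(url, {
      headers: {
        Authorization: `Bearer ${accessToken}`,
        Accept: "application/fhir+json",
      },
    });
    if (!res.ok) {
      console.error(`Audit: page request failed (${res.status}) for ${url}`);
      break;
    }
    const bundle = await res.json();
    for (const entry of bundle.entry ?? []) resources.push(entry.resource);

    // FHIR paging: the Bundle's "next" link points to the following page
    url = (bundle.link ?? []).find((l: { relation: string }) => l.relation === "next")?.url;
  }

  console.info(`Audit: retrieved ${resources.length} resources (cap ${MAX_RESOURCES})`);
  return resources;
}
```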
Interoperability Concerns
Converting our app for use at a second health system with differences in expertise,
preferred web development frameworks, operating systems, web server platforms, and
EHR vendors presented many challenges. While the site-specific differences were anticipated
early during the design phase, the FHIR-related challenges were not. Marketing materials
sometimes present FHIR as a simple solution for interoperability between vendor systems
or between sites using the same EHR. However, substantial differences exist in FHIR
server implementation and configuration between vendor systems and even different
sites using the same system. Therefore, configuration at a new site can be quite time
intensive due to data validation and application testing requirements.
Furthermore, the knowledge management burden of maintaining an integrated application is substantial for a single site, and surveillance for drift in how clinical concepts are represented across multiple systems is an even greater challenge, requiring intentional, longitudinal effort. In the same vein, we recommend that teams developing similar applications prospectively identify the necessary data structures natively within each EHR and within each EHR's FHIR server before architectural design, as this approach would identify problems early (such as the representation of oxygen delivery in our application). Dorr et al describe an approach to assessing data adequacy using FHIR in the setting of hypertension management that would be useful for teams pursuing projects like ours in the future.[22] Lobach et al provide another useful example of how future investigators can evaluate and report the viability of FHIR app integration in an EHR environment during application design.[23]
Finally, the FHIR standard, while robustly designed for customizability, still has gaps in the U.S. Core profile (as demonstrated by the amount of effort needed to model oxygenation) that add substantial complexity when using the standard. In short, FHIR applications are not simply plug-and-play; successful interoperability requires informaticians and developers with expertise in the FHIR standard, the local EHR and FHIR configuration, and local custom development resources.
Regulatory Concerns
In 2022, after we designed and deployed our application, the Food and Drug Administration
(FDA) released recommendations about what type of CDS software is subject to FDA oversight
as a device.[24] While our application was not subject to these guidelines due to our timeline, it
likely would not have been considered a device under the FDA criteria. Future developers
of applications like ours should be mindful of these guidelines and review them with
their local legal and compliance offices.
Conclusion
The union of UCD principles, modern clinical data standards, and a multilayer testing
plan facilitated implementation of a real-time CDS application that displays prognostic
information for children presenting to the ED with pneumonia while facilitating a
pragmatic, EHR-based randomized clinical trial at two health systems with different
EHR vendors. Careful planning, iterative design, knowledge management, and rigorous
testing were critical for successful implementation.
Clinical Relevance Statement
This study demonstrates the feasibility of using an interoperable FHIR application to deliver real-time pediatric pneumonia prognostication in multiple EDs with different EHR vendor systems in support of a randomized clinical trial. It serves as an example of
how the synergy between UCD, risk communication best practices, and the application
of interoperable clinical data standards can lead to more effective health care applications.
However, this research also underscores the complexities inherent in developing such
systems, emphasizing the necessity for meticulous planning and collaboration to ensure
successful implementation and operation.
Multiple-Choice Questions
In deploying an interoperable FHIR application for pediatric pneumonia prognostication,
which aspect is crucial for ensuring effective data exchange across diverse EHR systems?
a. EHR vendor server interoperability standards availability
b. Familiarity with interoperability standards
c. Health information exchange integration
d. Predictive analytics models
Correct Answer: The correct answer is option b. Interoperability standards like FHIR
are essential for enabling effective data exchange across different EHR systems. Given
recent legislation, nearly all systems offer these capabilities, including FHIR servers.
However, developer familiarity with the nuances that distinguish between each vendor's
implementation is key for successful interoperability.
Which principle is most critical in the UCD of health care applications like the FHIR
application for pediatric pneumonia?
a. Knowledge of pediatric pneumonia treatment guidelines
b. Compliance with best design practices for clinicians with visual impairment
c. Alignment with clinical workflow and user needs
d. Implementation of advanced machine learning algorithms
Correct Answer: The correct answer is option c. Aligning the design of health care
applications with clinical workflow and user needs is vital in a UCD approach. This
requires extensive user research, iterative prototyping, and repeated testing. This
ensures that the application supports clinicians' actual work processes and preferences,
thereby enhancing usability and effectiveness in clinical settings.