
DOI: 10.1055/a-2505-7743
“Be Really Careful about That”: Clinicians' Perceptions of an Intelligence Augmentation Tool for In-Hospital Deterioration Detection
Abstract
Objective This study aimed to explore clinicians' perceptions and preferences of prototype intelligence augmentation (IA)-based visualization displays of in-hospital deterioration risk scores to inform future user interface design and implementation in clinical care.
Methods Prototype visualization displays incorporating an IA-based early warning score (EWS) for in-hospital deterioration were developed using cognitive theory and user-centered design principles. The displays featured variations of EWS and clinical data arranged in multipatient and single-patient views. Physician and nurse participants with at least 5 years of clinical experience were recruited to participate in semistructured qualitative interviews focused on understanding their experiences with IA and thoughts and preferences about the prototype displays. A thematic analysis was performed on these data.
Results Six themes were identified: (1) clinicians perceive IA as valuable with some caveats related to function and context; (2) individual differences among users influence preferences for customizability; (3) EWS are particularly useful for patient triage; (4) need for patient-centered contextual information to complement EWS; (5) perspectives related to understanding the EWS composition; and (6) design preferences that focus on clarity for interpretation of information.
Conclusion This study demonstrates clinicians' interest in and reservations about IA tools for clinical deterioration. The findings underscore the importance of understanding clinicians' cognitive needs and of framing IA-generated tools as complementary supports. Clinicians' focus on high-level pattern-matching information, their comments about the value of consistency with familiar views (e.g., this is “how I usually see things”), and their questions about score interpretation (e.g., the age of the data, what the model “knows”) suggest some of the challenges of IA implementation. The findings also identify design implications, including the need to contextualize the EWS for the patient's specific situation, incorporate trend information, and explain the display's purpose for clinical use.
Keywords
intelligence augmentation - in-hospital deterioration - patient-centered - visualization displays - artificial intelligence
Background and Significance
Early warning scores (EWSs) are developed to predict future patient deterioration and support clinicians in saving patient lives.[1] [2] [3] Inpatient care is fast-paced and high pressure, and decisions involve a complex process of cognition and action that is rife with human fragility (e.g., errors, decision biases).[4] [5] [6] [7] Advanced computational or machine learning approaches to identifying patients at risk of clinical deterioration are promising, but complex issues hamper uptake and widespread adoption.[2] [8] [9] Intelligence augmentation, referred to here by the acronym IA, means supplementing human cognition with technology; it is distinguished from artificial intelligence (AI), defined broadly as tasks a machine performs instead of a human.[1]
The incorporation of IA into clinical care is a revolutionary development. However, the implementation of clinical decision support (CDS) tools is challenging even without IA.[10] EWSs may be viewed as unnecessary, time-consuming, or excessively challenging to use in team-based care environments.[11] [12] [13] [14] IA-specific barriers, including limited clinician interest, exacerbate CDS implementation hurdles.[10] [15] Few randomized clinical trials have assessed the impacts of IA-enabled EWSs in clinical care, and these studies alone will not provide sufficient information about implementation.[16] [17] Sociotechnical approaches are crucial to understanding the complex personal, technical, work system, and societal implications of IA in clinical care.[7] [15] [18] [19] [20]
Studies of IA-based systems have focused on patient end users,[21] but a need remains for clinician-focused studies that incorporate data visualizations and are designed around cognitive processes. Cognitive theories such as Wickens' model of human information processing point to a preattentive store of sensory information, including visual and auditory information about a patient, that decays from memory very quickly. Subsequent higher-level cognitive processes such as categorization support decision-making in a feedback loop, by which perception is further shaped by the decision pathway. This suggests the need for designs that first attract clinician attention and then supplement it with more complex information.[22] [23] Dual-process theory categorizes thinking as intuitive, fast, and automatic (system 1) or deliberate, slow, and controlled (system 2). This theory also highlights the potential to coordinate information presentation with cognitive processes by drawing attention to key information quickly and then providing detailed displays with far more patient information to explore.[24] [25] [26] Prototype designs in this study were developed based on cognitive theory and user-centered design principles identified in prior work by our team.[23] [27] [28] This qualitative study aimed to generate guidance for the theory-based design of user interfaces (UIs) for IA-based risk-scoring approaches.
Methods
The study consisted of two phases: (1) the design of prototype IA visualization displays and (2) clinician interviews stimulated by the presentation of iterations of the displays.
Design of Prototype Intelligence Augmentation Visualization Displays
Steps involved in generating the prototype displays included:
- Performing a review of the literature on advanced computational model approaches to risk score information displays to identify, for example, UI designs and important design questions.[2] [23]
- Identifying key design elements for interview guide development, such as the type of risk score (e.g., general deterioration, sepsis), individual-patient versus multipatient displays, trend visualization, and approaches to IA explanation.
- Generating prototype data visualization displays that combined both general and condition-specific EWSs with clinical data.
Information in the displays was drawn from de-identified clinical data and the resulting electronic Cardiac Arrest Risk Triage (eCART) deterioration risk scores.[29] Initial display variations presented to participants included multipatient views with EWSs for multiple systems (e.g., sepsis, respiratory), physiology, and laboratory test results ([Fig. 1]), and individual-patient detail views that included EWSs trending over time and views of physiology by hour ([Figs. 2] and [3]).[30]
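To make the layout logic concrete, the sketch below shows one way a row of a multipatient view might combine an eCART-style deterioration score with a color band and selected physiology. It is a minimal illustration only: the thresholds, field names, and the `band_for` helper are assumptions for demonstration, not the study's or eCART's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative risk bands; the actual eCART cut points are not reported in this study.
BANDS = [(0.0, "low"), (0.3, "medium"), (0.6, "high"), (0.85, "critical")]


def band_for(score: float) -> str:
    """Map a 0-1 deterioration risk score to a display band (hypothetical thresholds)."""
    label = BANDS[0][1]
    for threshold, name in BANDS:
        if score >= threshold:
            label = name
    return label


@dataclass
class PatientRow:
    """One row of a hypothetical multipatient view: overall EWS plus selected vitals/labs."""
    bed: str
    ews: float                      # deterioration risk score, assumed scaled 0-1
    heart_rate: int
    resp_rate: int
    latest_lactate: Optional[float] = None

    def display(self) -> str:
        lactate = f"{self.latest_lactate:.1f}" if self.latest_lactate is not None else "--"
        return (f"{self.bed:<8} EWS {self.ews:.2f} [{band_for(self.ews):<8}] "
                f"HR {self.heart_rate}  RR {self.resp_rate}  Lactate {lactate}")


if __name__ == "__main__":
    unit = [PatientRow("Bed 12", 0.72, 118, 26, 3.1),
            PatientRow("Bed 14", 0.18, 84, 16)]
    # Sort so the highest-risk patients surface first, mirroring a triage-oriented view.
    for row in sorted(unit, key=lambda r: r.ews, reverse=True):
        print(row.display())
```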






Clinician Interviews
Recruitment
A “snowball” recruitment strategy was used: clinical leaders in the investigators' health care systems were approached for recommendations of physicians or nurses with 5 or more years of clinical experience in critical care environments, and the recommended potential interviewees were emailed an invitation to participate. The goal was to design general visualization tools rather than tailoring them to a specific user population. A semistructured interview guide was developed by the author team based on cognitive theories; the study objectives were to assess (1) clinicians' experience with AI-based EWSs in clinical care, (2) visualization preferences for attention-drawing and detailed information, and (3) clinicians' explanations of those preferences (see [Supplementary Appendix A], available in the online version). Qualitative interviews of approximately 1 hour in length were conducted over a virtual meeting platform and stimulated by the prototype visualization displays. Interviews were conducted by female investigators with sociotechnical and qualitative data collection expertise. Demographic data were collected during the interview.
Refinement of Visualization Displays
After seven interviews, participant feedback was used to update the prototype visualization displays. Based on the theoretically informed design and specific research questions, seven interviews provided adequate information power for the iteration of the designs.[31] [32] In the second phase of interviews, displays were iterated to include a single EWS depicting general deterioration and to align displays with preferences from the first set of interviews ([Fig. 4]). Single-patient displays included comet graphs, which displayed variability and the age of numerical data using a combination of color and shapes (e.g., more data observations yield a longer comet tail; [Fig. 5]), and varying time frames for trends ([Fig. 6]).
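As a rough illustration of the comet encoding described above (longer tails for more observations, color reflecting data age), the sketch below maps timestamped values to glyph properties. The 24-hour window, age cut points, and color labels are assumptions for the sake of example, not the prototypes' actual parameters.

```python
from datetime import datetime, timedelta
from typing import List, Tuple


def comet_glyph(observations: List[Tuple[datetime, float]],
                now: datetime,
                window: timedelta = timedelta(hours=24)) -> dict:
    """Illustrative comet encoding: tail length grows with the number of recent
    observations, and color reflects the age of the newest value."""
    recent = [(t, v) for t, v in observations if now - t <= window]
    if not recent:
        # Nothing observed inside the window: show a greyed-out placeholder.
        return {"value": None, "tail_length": 0, "color": "grey"}
    newest_time, newest_value = max(recent, key=lambda tv: tv[0])
    age = now - newest_time
    if age <= timedelta(hours=4):
        color = "dark"      # fresh value, full-intensity color
    elif age <= timedelta(hours=12):
        color = "medium"
    else:
        color = "light"     # older value, faded
    return {"value": newest_value, "tail_length": len(recent), "color": color}


# Example: three heart-rate observations over the last day yield a three-segment tail.
now = datetime(2024, 8, 8, 12, 0)
obs = [(now - timedelta(hours=20), 92.0),
       (now - timedelta(hours=9), 101.0),
       (now - timedelta(hours=2), 110.0)]
print(comet_glyph(obs, now))  # {'value': 110.0, 'tail_length': 3, 'color': 'dark'}
```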






Data Analysis
Information power, rather than saturation, was used to determine adequate sample size because the information power model is suited to theory-based design.[33] The sample size also matched the recommendation for informatics studies using qualitative methods.[34] Each interview was audio-recorded, transcribed, and coded using NVivo software.[35] For the thematic analysis, three interview transcripts were coded collaboratively by the entire coding team (J.M.B., A.D., U.S., M.N., A.J., and K.M-K.) to generate an initial codebook based closely on the words of the participants. The 12 subsequent transcripts were coded individually and in duplicate, with additional codes added to the codebook, and any coding discrepancies were resolved through discussion. Codes were grouped into representative themes through discussion and synthesis among the coding team.[32] [36] [37]
Results
A total of 15 clinicians from diverse practice environments participated in the study ([Table 1]). Thematic analysis results including sample quotes are presented below, in [Table 2], and in [Supplementary Table S1] (available in the online version).
Abbreviations ([Table 1]): CTICU, cardiothoracic intensive care unit; NICU, neonatal ICU; SICU, surgical ICU.
Abbreviation ([Table 2]): EWS, early warning score.
Theme 1: Clinicians Perceived Intelligence Augmentation as Valuable with Some Caveats Related to Function and Context
Clinicians were generally interested in the incorporation of IA into practice (although they often did not distinguish between IA and AI) and highlighted the potential to improve efficiency. They addressed important caveats related to the function, context, and data that inform an IA visualization. Their reflections about how they might use (or not use) IA tools in the future signal potential user needs.
Clinicians recognized that computer-aided tasks support efficiency, “I know that the computer could do this in a second,” and that IA-based tools may be valuable for prediction, “AI or informatics is going to be able to pick up in trends that you're just not.” Respondents noted that advanced computational model systems are built on data and algorithms, some of which might be problematic, particularly given challenges of explaining IA, unreliability, or the potential for biased output, “AI models that are black boxes that end up leading to serious problems with inequity…” or other issues with data quality. Many clinicians understood that model performance problems can arise from the underlying data (e.g., if data are not measured frequently enough), suggesting user needs related to information about data quality, particularly when advanced computational model outputs are inconsistent with other clinical findings.
Some clinicians thought that EWSs were not needed given their own personal clinical vigilance, “Am I supposed to go see every patient or just be more vigilant? But I'm already vigilant.” There were caveats related to the novelty of AI in clinical care: AI may be “just another technology,” and not likely to influence medicine in the extreme ways suggested by the promises of AI. Some clinicians believed that close patient observation and clinical judgment were unlikely to be surpassed by IA, “people get so fixated and excited about the technology that they forget to take care of the patient.” In contrast, others mentioned the potential for the EWS to draw attention to important parameters, “So what if you had a scale that predicted heart failure that was different from sepsis? Would we have caught it?” highlighting examples of complex information related to sepsis diagnosis that could potentially be recognized sooner using IA. Clinicians indicated that once AI/IA is established in practice, concerns may recede because people “resist change in medicine. But then once it's out…no one questions.” Participants thought that IA would be most likely to be implemented successfully if clinicians recognized its complementary value.
Taken together, this theme reflects clinicians' beliefs about the capacity of AI to improve efficiency and patient care alongside concerns about data quality and appropriate use of AI/IA in clinical environments. Clinicians referenced the special role of a human and information that a human has that a computer either cannot have or likely does not have.
Theme 2: Individual Differences among Users Influence Preferences for Customizability
Clinicians described their preferences for customizing data visualizations. Differences in preferences related to the role of the user and to their cognitive approach (e.g., a cognitive pattern-matching strategy such as a preference for a display similar to those they already use).
Clinicians noted that EWS would be interpreted differently by individuals in separate roles, “based on the background or the lens of the different users.” User roles influenced preferences related to the time horizon of data trends, “I think if I was on an ICU, I would want probably a slightly longer, like several days, maybe three days, 72 hours….” compared to shorter time windows for views in an emergency department. Clinicians pointed to novice versus expert characteristics, “We've got such brand-new nurses. They're still trying to figure out how to just get through the day.” Another clinician referred to differentiating high-priority information, “Some things I want to be alerted about differently than others and some that would be interruptive, some that would be non-interruptive.”
What is cognitively intuitive is shaped both by past experience and attentional control. One clinician described the potential for cognitive overload if the IA system is not harmoniously embedded in the surrounding electronic health record, “be really careful about that because as a clinician, when I come to the clinical data systems, there's already an intuitive color-coding and bolding scheme for honing my attention in on things. And if this system is going to use a different scheme… that's going to be a problem.”
Another clinician described individual differences as driven more by preference than by logical (or system 2, deliberate, dual-process) cognitive processing, “I suspect most of their opinions are not based in data or good reason, and it's just because that's how they feel about it.” This clinician also addressed the need for approaches to IA that do not focus only on preferences but also on systematic visualization studies to assess impacts on patient care outcomes, “I would try and deconstruct that…with data…this is what it's best based on all the studies we've done.”
Clinician responses in this theme characterized the importance of individual differences as a factor shaping preferences, such as for time trends, and more generally as a “lens” to filter information. Clinicians also identified features that draw, or could draw, their attention, consistent both with Wickens' preattentive sensory input (e.g., intuitive color coding) and with dual-process theory, for example, pattern matching to a familiar clinical data system (system 1) and interest in systematic, logical processing (system 2).
Theme 3: Early Warning Scores Are Particularly Useful for Patient Prioritization
Clinicians noted the value of the EWS to identify patients who need care soon—a triage process. Information needs included identifying dynamic changes and the need for clinician attention to be drawn to key determinant factors.
Clinicians viewed the EWS as useful for detecting dynamic change, “So this (score) would be something that would be a part of an acute care person's every moment on their floor, real-time, right?” Clinicians pointed to different cognition during the process of triage (faster pattern matching system 1 approach) as compared to a slower, system 2 process to explore a detailed view of patient information, “I do that in a different workflow. …I'm looking at my list and I'm saying, ‘Who's sick and not sick?’” suggesting that the multipatient display was a particularly powerful use case for the EWS.
Clinicians noted that high-priority triage information supports quick identification and related actions. An example is that of subtle neurological changes—recognizing those more quickly because a higher EWS could permit quicker recognition of a potentially correctable condition, “we get called to neuro changes quite a bit…the nurses are like, ‘They're just more sleepy today,’ or, ‘They're saying things that are off…’” suggesting that neurological changes may be a high priority for attention.
Clinicians expressed a need to identify the directional change in EWS trends over time. “Even if (the score) was high, if it's going down, I'm probably not going to pull my attention to that because I'm going to look at the ones that are getting worse.” One emergency department clinician commented on the utility of trending EWS for communication, “to let the hospitalist who I'm admitting to know what level of care they will need.”
Clinicians recognized that the EWS could support triage, identified specific categories of information that are a high priority for triage, and the importance of trend-related information.
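One simple way to surface the directional change clinicians asked for is to derive a worsening/improving flag from the most recent scores, as in the hedged sketch below. The lookback of three scores, the tolerance value, and the labels are arbitrary illustrative choices, not part of the eCART model or the study prototypes.

```python
from typing import List


def ews_trend(scores: List[float], lookback: int = 3, tolerance: float = 0.02) -> str:
    """Classify the recent direction of an EWS series (illustrative logic only):
    compare the latest score against the mean of the preceding values in the lookback."""
    if len(scores) < 2:
        return "insufficient data"
    recent = scores[-lookback:]
    baseline = sum(recent[:-1]) / (len(recent) - 1)
    delta = recent[-1] - baseline
    if delta > tolerance:
        return "worsening"    # rising risk may warrant attention even if the absolute score is modest
    if delta < -tolerance:
        return "improving"    # a high but falling score may be a lower priority, per participants
    return "stable"


# Example: a high but improving score versus a lower but worsening one.
print(ews_trend([0.82, 0.74, 0.65]))  # improving
print(ews_trend([0.20, 0.28, 0.41]))  # worsening
```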
Theme 4: Need for Patient-Specific Contextual Information
Clinicians expressed a need for visual integration of clear, patient-focused information related to the EWS and the importance of general framing for the score.
One clinician stated, “If [the score] was framed as if you want to know why this score is three, click here, then I would be in the mindset of, okay, now it's telling me how it got to the score of three rather than it's trying to help me take care of the patient. Those are two very different things.” Clinicians also expressed the need to view data within a patient-specific context, “if I know that the patient who has a heart rate of eighty-four has a hemoglobin of six, and yesterday their hemoglobin was nine, I'm going to be really worried because that's an inappropriately low heart rate for their acute blood loss anemia… .”
One clinician highlighted an example of when critical safety information was challenging to find and the importance of including crucial patient context to support understanding the score, “Another thing that we did … for our emergency department was actually add how much supplemental oxygen they're on….” Clinicians also pointed to the need for actions related to communicating with others, “And so that'll get our attention when you say, ‘This is an 80% chance of mortality.’ We got to talk to the family….”
Taken together, these responses demonstrated that AI should support clinicians in caring for their specific patients and should provide additional context for interpreting the EWS in clinical decisions.
Theme 5: Perspectives Related to Understanding the Composition of the Early Warning Score
Understanding how the EWS was calculated—the quantitative values of the variables driving the score—was a low priority for some clinicians. When information (quantitative or semiquantitative) could support their understanding of how to use the score to care for the patient, clinicians were more engaged.
Some clinicians valued semiquantitative information, “I think just defining… the moderate and high range [of contribution] is pretty sufficient.” When one clinician was asked if they would like to see clinical data values contributing to increased or decreased risk they responded, “Yes, that's helpful. Because what is deteriorating the patient is helpful at that time.”
Clinicians asked questions related to understanding how clinical documentation is used in the calculation of the EWS. “…the AVPU (alert, verbally responsive, pain responsive, unresponsive) score—is it just the last documented? And if the predictive model…., discounted somebody's documentation, but it's still showing that the…neurotic status is altered… .” In response, the interviewer explained how clinical data can be used to inform the AI predictive model based on population data (e.g., patients with these risk factors are likely to show this specific outcome). Some clinicians indicated that understanding the components would be useful as part of the big picture, not necessarily during triage of the patient, but during flexible times to explore a more detailed explanation, “And then in my free time, I want to understand the (score in the) tool better.”
Considering how they would use information about EWS composition, a clinician noted the tradeoff between components that reflect fixed versus actionable information and how that would impact care, “if age is 95% of the value, I'm not going to change her age.” Clinicians also questioned whether the score is responsive to the up-to-the-minute clinical changes of the patient, “I gave them a bolus of IV fluid, I'm going to want to go and see what does their blood pressure look like 15 minutes later, 30 minutes later. How do they look? … And it's not clear to me that this score is going to know those answers.”
Given clinical demands, minimal curiosity, and the need for actionable information, there is a clear need to frame the purpose of the EWS. There were indications that some clinicians did not understand prediction models well (e.g., “How does it know what I did?”), which could diminish the potential for successful implementation.
Theme 6: Design Preferences Focus on Clarity for Interpretation of Information
Clinicians considered the semiquantitative presentation of EWS contributing variables and color in the display to be valuable. Clinicians pointed to the importance of simplicity but also the ability to customize based on specific preferences.
Clinicians addressed ways in which the mathematical representation of the EWS can influence interpretation, such as how odds ratios might be challenging because of difficulty understanding the reference group or what a percentile might refer to, “it's just easier to quantify by having the base integers… rather than a percentile.” Color was considered useful for indicating whether the EWS was low, medium, high, or critical. Many clinicians saw value in customizable views centered on personal preferences, “I like simplicity in the context of how I would like it,” a reminder of the importance of personalizing information and supporting clinicians' need for autonomy.
Color was perceived as a valuable clue to the age of the displayed data, “This lab is more than 24 hours old…. Have it be greyed out….” Comet graphs, which were designed to show the range of time of specific values (see [Fig. 2], bottom middle panel for comet graph images), were not popular among our participants, “coming from somebody who's not used to looking at the graphs on the far right, I don't tend towards those....”
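The grey-out preference can be read as a simple age threshold applied to displayed values, sketched below. The 24-hour cutoff echoes the participant quote, while the function name and style labels are hypothetical and only illustrate the idea.

```python
from datetime import datetime, timedelta


def value_style(observed_at: datetime, now: datetime,
                stale_after: timedelta = timedelta(hours=24)) -> str:
    """Return a display style for a lab or vital sign based on its age: values older
    than the stale threshold (24 hours here, per the participant's example) are greyed out."""
    return "greyed-out" if now - observed_at > stale_after else "normal"


# Example: a lab drawn 30 hours ago renders greyed out; one from 3 hours ago renders normally.
now = datetime(2024, 8, 8, 12, 0)
print(value_style(datetime(2024, 8, 7, 6, 0), now))  # greyed-out
print(value_style(datetime(2024, 8, 8, 9, 0), now))  # normal
```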
Generally, clinicians preferred design features that helped them understand the patient data and features of customization that allowed them to understand the score as a characteristic of the patient's condition (e.g., trends over time). As in other themes, the value of information that supports understanding priorities quickly (e.g., using color to attract or deflect attention) is consistent with the need to support a transition from preattentive sensory storage to deeper processing, in line with Wickens' theory.[22] [23]
Discussion
This study elicited clinician perceptions and preferences for prototypes incorporating data visualizations and an IA-based EWS for patient deterioration. Clinicians generally had positive impressions of the displays alongside concerns about data quality and score reliability. Customizability preferences related to individual differences among clinicians. The EWS was seen as particularly useful for prioritization. Having sufficient patient-focused information to contextualize the EWS and related data was identified as important. Some clinicians were curious about how the score was calculated but were not always motivated to investigate this deeply themselves given job demands. To that end, preferred design features supported quick synthesis of the included information.
The results of this study point to (1) the importance of framing the purpose of IA-based EWS systems for clinical use, (2) the need to match tool function to individual differences of the clinical user, (3) the importance of trend data showing change over time, and (4) the need for transparency to support user assessment of tool validity. These findings are consistent with work by others on the need to match the functions of a tool to the clinical user[38] and the need for information contextualized by trend and treatment information.[39] Other studies of individual differences, including how novices or experts would use a tool, also reinforce the gaps we found, which can be reduced through display design.[5] [40] [41] The need for transparency and for users to be able to assess the validity of the recommendations and/or score is also well described in other work.[42] [43]
There were participants who explicitly referenced the potential of the IA system design to support efficiency. No clinicians referenced the potential for less burnout, another hope for the impact of IA implementation.[15] Notable design features for consideration included showing the age of patient data, highlighting parameter severity using color, and ensuring that trajectories of data were easy to understand, which suggests that efficiency was valued in implicit ways. Clinicians referenced special human-like characteristics that potentially relate to concerns about replacement by IA (e.g., “take care of the patient, that mantra is never going to get lost”).
Across themes, clinicians addressed the importance of supporting their attention to crucial issues (e.g., a tool that supports them in not missing key information, attention regulation). There were hints that IA is seen as potentially threatening, unnecessary for tasks clinicians already perform, and overhyped. This study is unique in examining a theory-based visualization design and demonstrating results consistent with the applied theoretical foundations. Customizability based on the clinician's role and cognitive processing was identified as an important design component.[26] [44] This finding is consistent with dual-process theory and with recognition-primed decision-making models of naturalistic decision-making among experts.[25] [28] [45] Clinicians in this study explicitly addressed the importance of intuitive, fast processing (system 1), consistent with dual-process theory, and the “lens” of the clinician viewing the information, consistent with Wickens' theory of human information processing.[22] [25] Of note, participants' focus on high-level pattern-matching information and on “how things are” or “how I usually see things” suggests some of the challenges of presenting something “new” to clinicians. Potentially, presenting information that fits existing workflows or cognitive patterns will neglect some of the potential to disrupt those workflows and patterns.
In clinicians' descriptions of what they would like IA to do (e.g., recommend treatment, demonstrate knowledge of what the clinician is doing for treatment), clinicians are speaking to hopes about what AI/IA could do—and reflecting needs for flexible adjustment to patient-centered factors including treatment input. Taken together, this suggests that although disruptive change may be challenging given existing workflows and patterns, when care is demonstrably improved for patients by disruption clinicians are likely to welcome it.
Study strengths include discussion stimulated by prototype displays that incorporated complex visualizations and the incorporation of participant feedback into the iterative design.[2] [46] [47] The sample size was appropriate for thematic analysis, a method designed to promote exploration of the complexity of clinician views.[34] [36] However, our sample size was not sufficient to compare providers in different roles or across practice settings systematically.[48] In addition, some study participants worked in multiple critical care settings, which also precluded systematic comparisons by practice setting. Finally, the study prototypes were not suitable for extensive exploration by clinicians of their specific patient data or details of the predictive model.
Conclusion
The study findings highlight that understanding clinicians' cognitive needs and how to frame IA-based tools as a complementary support to address them is important for the future implementation of AI-based solutions.[15] [38] [39] [49] Augmenting clinician engagement with a focus on individual differences using a cognitive lens is important. Our findings point to the difficulty of implementing solutions with the potential for disruptive change. It may be important to refine approaches to assessing clinician needs based on cognitive patterns and individual differences to help IA achieve its promise for clinicians and the health care system.
Clinical Relevance Statement
IA-enabled prediction models are being used in clinical care. In our study, we worked directly with clinicians to assess feedback on ways to display information from an EWS risk score. Implications include ways to frame information and display information to optimize clinical use.
Multiple-Choice Questions
1. What are some characteristics of design features preferred by clinicians in this study?
a. Features of customization that allowed them to understand the score as a characteristic of the patient condition (e.g., trends over time).
b. Features promoting standardization that allowed for faster decision-making.
c. Features incorporating non-clinical data unrelated to patient conditions.
d. Features presenting information in a narrative format rather than graphical.
Correct Answer: The correct answer is option a. Several clinicians stated their preference for customization. One clinician stated, “I would say I see value in the customization. I like simplicity, generally speaking, but I like simplicity in the context of how I would like it.” Other clinicians pointed out that the tool would need to be customized according to the role of the user. One clinician stated, “I think if I was on an ICU, I would want probably a slightly longer, like several days, maybe three days, 72 hours type thing. And if I was in an emergency department, probably 8 hours or so.”
2. According to the study's findings, what is a crucial hurdle for future implementation of IA solutions?
a. Training patients to advocate for themselves when IA falls short.
b. Establishing IA as the authority in decision-making, rather than relying on clinical expertise.
c. Understanding clinicians' cognitive needs and how to frame IA as a complementary support to address those needs.
d. Implementing machine learning to improve the IA solutions over time.
Correct Answer: The correct answer is option c. It is crucial to understand clinicians' cognitive needs because a tool that does not support a clinician will add to their cognitive load rather than reduce it, leading to the tool never being used, even if it is a highly powerful tool.
Conflict of Interest
None declared.
Acknowledgments
We thank Dana Edelson for providing eCART scores used to provide realistic patient risk score data. We thank Rachel Dalrymple (Department of Biomedical Informatics at the University of Utah) for her assistance in the preparation of this manuscript. Rachel was not compensated for this work beyond her typical employment compensation. Additional study subteams participated in the development of the EWS which was essential to the work in this study.
Protection of Human and Animal Subjects
This project was reviewed and approved by the Institutional Review Boards at Idaho State University and the University of Utah. Participants provided verbal consent for participation at the beginning of each session.
Note
The views expressed in this paper are those of the authors and do not necessarily represent the position or policy of the U.S. Department of Veterans Affairs or the United States Government.
References
- 1 Sadiku MNO, Ashaaola TJ, Ajayi-Majebi A, Musa SM. Augmented intelligence. Int J Sci Adv 2021; 2 (05) 772-776
- 2 Wan YJ, Wright MC, McFarland MM. et al. Information displays for automated surveillance algorithms of in-hospital patient deterioration: a scoping review. J Am Med Inform Assoc 2023; 31 (01) 256-273
- 3 Korach ZT, Cato KD, Collins SA. et al. Unsupervised machine learning of topics documented by nurses about hospitalized patients prior to a rapid-response event. Appl Clin Inform 2019; 10 (05) 952-963
- 4 Croskerry P, Campbell SG, Petrie DA. The challenge of cognitive science for medical diagnosis. Cogn Res Princ Implic 2023; 8 (01) 13
- 5 Patel VL, Kaufman DR, Arocha JF. Emerging paradigms of cognition in medical decision-making. J Biomed Inform 2002; 35 (01) 52-75
- 6 Corazza GR, Lenti MV, Howdle PD. Diagnostic reasoning in internal medicine: a practical reappraisal. Intern Emerg Med 2021; 16 (02) 273-279
- 7 Patel VL, Kannampallil TG. Cognitive informatics in biomedicine and healthcare. J Biomed Inform 2015; 53: 3-14
- 8 Peelen RV, Eddahchouri Y, Koeneman M, van de Belt TH, van Goor H, Bredie SJ. Algorithms for prediction of clinical deterioration on the general wards: a scoping review. J Hosp Med 2021; 16 (10) 612-619
- 9 Jahandideh S, Ozavci G, Sahle BW, Kouzani AZ, Magrabi F, Bucknall T. Evaluation of machine learning-based models for prediction of clinical deterioration: a systematic literature review. Int J Med Inform 2023; 175: 105084
- 10 Mann D, Hess R, McGinn T. et al. Adaptive design of a clinical decision support tool: what the impact on utilization rates means for future CDS research. Digit Health 2019; 5: 2055207619827716
- 11 Petersen JA, Rasmussen LS, Rydahl-Hansen S. Barriers and facilitating factors related to use of early warning score among acute care nurses: a qualitative study. BMC Emerg Med 2017; 17 (01) 36
- 12 Baig MM, GholamHosseini H, Afifi S, Lindén M. A systematic review of rapid response applications based on early warning score for early detection of inpatient deterioration. Inform Health Soc Care 2021; 46 (02) 148-157
- 13 Wood C, Chaboyer W, Carr P. How do nurses use early warning scoring systems to detect and act on patient deterioration to ensure patient safety? A scoping review. Int J Nurs Stud 2019; 94: 166-178
- 14 Chua WL, Wee LC, Lim JYG. et al. Automated rapid response system activation-Impact on nurses' attitudes and perceptions towards recognising and responding to clinical deterioration: mixed-methods study. J Clin Nurs 2023; 32 (17-18): 6322-6338
- 15 Baxter SL, Bass JS, Sitapati AM. Barriers to implementing an artificial intelligence model for unplanned readmissions. ACI Open 2020; 4 (02) e108-e113
- 16 Plana D, Shung DL, Grimshaw AA, Saraf A, Sung JJY, Kann BH. Randomized clinical trials of machine learning interventions in health care: a systematic review. JAMA Netw Open 2022; 5 (09) e2233946
- 17 Mello MM, Shah NH, Char DS. President Biden's executive order on artificial intelligence-implications for health care organizations. JAMA 2024; 331 (01) 17-18
- 18 Coiera E. The last mile: where artificial intelligence meets reality. J Med Internet Res 2019; 21 (11) e16323
- 19 Salwei ME, Carayon P. A sociotechnical systems framework for the application of artificial intelligence in health care delivery. J Cogn Eng Decis Mak 2022; 16 (04) 194-206
- 20 Lazar S, Nelson A. AI safety on whose terms?. Science 2023; 381 (6654) 138
- 21 Shields C, Cunningham SG, Wake DJ. et al. User-centered design of a novel risk prediction behavior change tool augmented with an artificial intelligence engine (MyDiabetesIQ): a sociotechnical systems approach. JMIR Hum Factors 2022; 9 (01) e29973
- 22 Wickens CD, Helton WS, Hollands JG, Banbury S. Engineering Psychology and Human Performance. Routledge; 2021
- 23 Wright MC. Chapter 14 - Information visualization and integration. In: Greenes RA, Del Fiol G. eds. Clinical Decision Support and Beyond (Third Edition). Oxford: Academic Press; 2023: 435-463
- 24 Kahneman D, Klein G. Conditions for intuitive expertise: a failure to disagree. Am Psychol 2009; 64 (06) 515-526
- 25 Croskerry P. A universal model of diagnostic reasoning. Acad Med 2009; 84 (08) 1022-1028
- 26 Weir CR, Rubin MA, Nebeker J, Samore M. Modeling the mind: how do we design effective decision-support?. J Biomed Inform 2017; 71S: S1-S5
- 27 Hegarty M. The cognitive science of visual-spatial displays: implications for design. Top Cogn Sci 2011; 3 (03) 446-474
- 28 Klein GA. Sources of power: How People Make Decisions. MIT press; 2017
- 29 Churpek MM, Yuen TC, Winslow C. et al. Multicenter development and validation of a risk stratification tool for ward patients. Am J Respir Crit Care Med 2014; 190 (06) 649-655
- 30 Wright MC, Borbolla D, Waller RG. et al. Critical care information display approaches and design frameworks: a systematic review and meta-analysis. J Biomed Inform 2019; 100S: 100041
- 31 Braun V, Clarke V. To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qual Res Sport Exerc Health 2021; 13 (02) 201-216
- 32 Braun V, Clarke V. Conceptual and design thinking for thematic analysis. Qual Psychol 2022; 9 (01) 3-26
- 33 Malterud K, Siersma VD, Guassora AD. Sample size in qualitative interview studies: guided by information power. Qual Health Res 2016; 26 (13) 1753-1760
- 34 Ancker JS, Benda NC, Reddy M, Unertl KM, Veinot T. Guidance for publishing qualitative research in informatics. J Am Med Inform Assoc 2021; 28 (12) 2743-2748
- 35 QSR International Pty Ltd. NVivo (released March 2020); 2020.
- 36 Braun V, Clarke V, Hayfield N, Terry G. Thematic Analysis. In: Liamputtong P. ed. Handbook of Research Methods in Health Social Sciences. Singapore: Springer; 2019
- 37 Braun V, Clarke V. What can “thematic analysis” offer health and wellbeing researchers?. Int J Qual Stud Health Well-being 2014; 9: 26152
- 38 Schütze D, Holtz S, Neff MC. et al. Requirements analysis for an AI-based clinical decision support system for general practitioners: a user-centered design process. BMC Med Inform Decis Mak 2023; 23 (01) 144
- 39 Helman S, Terry MA, Pellathy T. et al. Engaging clinicians early during the development of a graphical user display of an intelligent alerting system at the bedside. Int J Med Inform 2022; 159: 104643
- 40 Reese TJ, Del Fiol G, Tonna JE. et al. Impact of integrated graphical display on expert and novice diagnostic performance in critical care. J Am Med Inform Assoc 2020; 27 (08) 1287-1292
- 41 Reese TJ, Segall N, Del Fiol G. et al. Iterative heuristic design of temporal graphic displays with clinical domain experts. J Clin Monit Comput 2021; 35 (05) 1119-1131
- 42 Hwang J, Lee T, Lee H, Byun S. A clinical decision support system for sleep staging tasks with explanations from artificial intelligence: user-centered design and evaluation study. J Med Internet Res 2022; 24 (01) e28659
- 43 Zhang Z, Citardi D, Wang D, Genc Y, Shan J, Fan X. Patients' perceptions of using artificial intelligence (AI)-based technology to comprehend radiology imaging data. Health Informatics J 2021 ;27(2):14604582211011215
- 44 Cheng L, Senathirajah Y. Using clinical data visualizations in electronic health record user interfaces to enhance medical student diagnostic reasoning: randomized experiment. JMIR Hum Factors 2023; 10: e38941
- 45 Klein GA. A recognition-primed decision (RPD) model of rapid decision making. In: Decision Making in Action. Ablex; 1993. Vol. 5(4), pp. 138-147
- 46 Johnson CM, Johnson TR, Zhang J. A user-centered framework for redesigning health care interfaces. J Biomed Inform 2005; 38 (01) 75-87
- 47 Patel S, Pierce L, Jones M. et al. Using participatory design to engage physicians in the development of a provider-level performance dashboard and feedback system. Jt Comm J Qual Patient Saf 2022; 48 (03) 165-172
- 48 Miller K, Kowalski R, Capan M, Wu P, Mosby D, Arnold R. Assessment of nursing response to a real-time alerting tool for sepsis: a provider survey. Am J Hosp Med 2017; 1 (03) 2017.021
- 49 Moorman LP. Principles for real-world implementation of bedside predictive analytics monitoring. Appl Clin Inform 2021; 12 (04) 888-896
Publication History
Received: 08 August 2024
Accepted: 16 December 2024
Article published online:
30 April 2025
© 2025. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany











