CC BY 4.0 · ACI Open 2022; 06(02): e66-e75
DOI: 10.1055/s-0042-1751088
Original Article

Physicians' Perceptions and Expectations of an Artificial Intelligence-Based Clinical Decision Support System in Cancer Care in an Underserved Setting

Rubina F. Rizvi*
1   IBM Watson Health, Cambridge, Massachusetts, United States
,
Srinivas Emani*
2   Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
3   Department of Behavioral, Social, and Health Education Sciences, Rollins School of Public Health, Emory University, Atlanta, Georgia, United States
,
Hermano A. Lima Rocha
4   Department of Community Health, Federal University of Ceará, Fortaleza, Ceará, Brazil
,
Camila Machado de Aquino
5   Department of Maternal and Child Health, Faculty of Medicine, Federal University of Ceará, Fortaleza, Ceará, Brazil
,
Pamela M. Garabedian
6   Clinical and Quality Analysis, Partners HealthCare, Somerville, Massachusetts, United States
,
Angela Rui
6   Clinical and Quality Analysis, Partners HealthCare, Somerville, Massachusetts, United States
,
Carlos André Moura Arruda
5   Department of Maternal and Child Health, Faculty of Medicine, Federal University of Ceará, Fortaleza, Ceará, Brazil
,
Megan Sands-Lincoln
1   IBM Watson Health, Cambridge, Massachusetts, United States
,
Ronen Rozenblum
2   Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
,
Winnie Felix
1   IBM Watson Health, Cambridge, Massachusetts, United States
,
Gretchen P. Jackson
7   Departments of Surgery, Pediatrics, and Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, United States
,
Sérgio F. Juacaba
8   Department of Surgery, Cancer Institute of Ceará, Fortaleza, Ceará, Brazil
,
David W. Bates
2   Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States
Funding This work has been supported by IBM Watson Health (Cambridge, MA, United States), which is not responsible for the content or recommendations made.
 

Abstract

Objectives Artificial intelligence (AI) tools are being increasingly incorporated into health care. However, few studies have evaluated users' expectations of such tools, prior to implementation, specifically in an underserved setting.

Methods We conducted a qualitative research study employing semistructured interviews of physicians at The Instituto do Câncer do Ceará, Fortaleza, Brazil. The interview guide focused on anticipated, perceived benefits and challenges of using an AI-based clinical decision support system tool, Watson for Oncology. We recruited physician oncologists, working full or part-time, without prior experience with any AI-based tool. The interviews were taped and transcribed in Portuguese and then translated into English. Thematic analysis using the constant comparative approach was performed.

Results Eleven oncologists participated in the study. The following overarching themes and subthemes emerged from the analysis of interview transcripts: theme-1, “general context” including (1) current setting, workload, and patient population and (2) existing challenges in cancer treatment, and theme-2, “perceptions around the potential use of an AI-based tool,” including (1) perceived benefits and (2) perceived challenges. Physicians expected that the implementation of an AI-based tool would result in easy access to the latest clinical recommendations, facilitate standardized cancer care, and allow it to be delivered with greater confidence and efficiency. Participants had several concerns such as availability of innovative treatments in resource-poor settings, treatment acceptance, trust, physician autonomy, and workflow disruptions.

Conclusion This study provides physicians' anticipated perspectives, both benefits and challenges, about the use of an AI-based tool in cancer treatment in a resource-limited setting.



Introduction

The use of artificial intelligence (AI) based on machine learning, natural language processing, and expert systems has accelerated in the health care domain. AI-based applications have been developed for a wide variety of areas ranging from public health and epidemiology[1] to more specialized care such as mental health,[2] cardiovascular medicine,[3] radiology,[4] and genomics research.[5] AI tools are currently being designed for use in many targeted health care applications, such as diagnostics, care coordination, patient monitoring, and clinical decision support systems (CDSSs).

According to the Centers for Disease Control and Prevention, CDSSs are software tools that use data to provide prompts and reminders that help health care providers implement evidence-based clinical guidelines at the point of care.[6] CDSSs support clinicians in making informed decisions about patients by providing timely information, usually at the point of care.[7] AI-based CDSSs have gained much attention in recent years.[8]

Earlier studies have shown that CDSSs help deliver higher-quality health care and improve patient outcomes.[9] Integrating AI into CDSSs has furthered the growth of medicine by enhancing humans' analytic capabilities.[10] The application of such technologies in managing care delivery for various health conditions and at different phases of care has shown promising outcomes.[11] [12] [13] [14] They have been increasingly incorporated into complex and rapidly evolving disciplines, including oncology.[15]

Optimal adoption and integration of CDSSs require ongoing evaluation of their usability, workflow integration, and user satisfaction in real-world settings where they are intended for use.

To the best of our knowledge, few studies have evaluated the expectations of AI-based CDSSs from the viewpoint of their intended users, especially in underserved settings. One published study evaluated the utility of a CDSS for treatment selection in depression as perceived by physicians.[16] Another examined pharmacists' perceptions of a machine learning model for the identification of atypical medication orders.[17] A few other studies, conducted in economically advanced countries, have sought to understand how health professionals perceive the incorporation of AI into their practices and what influences their views.[18] [19] However, none of these studies examined the anticipated perceptions of future users of the technology, especially in underserved settings, which is the main aim of this study.

To maximize the benefits that an AI-based tool could offer, it is imperative that we understand the expectations of prospective users and the experiences of actual users and patient populations across all potential use settings, including marginalized communities[20] and resource-poor countries.[21]



Objectives

The objective of this study was to evaluate expectations of physicians who were naïve to the use of AI in cancer care in an underresourced setting. Additionally, the study aimed to identify multilevel factors that could impede or support the use of an AI tool in cancer treatment in such settings.



Methods

Study Setting

The study employed a qualitative research design using semistructured interviews to investigate physicians' perceptions and expectations regarding the future use of the Watson for Oncology (WfO) AI system in an underserved setting.

IBM's WfO is an AI-based CDSS for oncology treatment selection that provides ranked, evidence-based therapeutic options to oncologists for consideration.[22] The tool is trained by Memorial Sloan Kettering Cancer Center (MSKCC)[23] using test cases and expert input, with recommendations that are consistent with established guidelines and published evidence. All information entered is verified by oncologists at MSKCC, and WfO data are updated to the latest information every 1 to 2 months. WfO currently supports 13 common cancer types and is available in seven languages across 15 countries.[24] [25] [26] [27] WfO has been implemented in hospitals around the world and has been used to train junior physicians and fellows, to support communication during multidisciplinary team meetings, and to support physicians in the decision-making process.[25] [26] [27]

The setting of the study was the Instituto do Câncer do Ceará (ICC) in Fortaleza in northeastern Brazil. The cancer center serves more than half of the cancer patients in the state of Ceará, which has nine million residents. The State of Ceará constitutes an underserved setting in Brazil, with a high proportion of poverty (70%) compared with 23% in the southeastern part of Brazil.[28] [29] Approximately 37% of the population has low literacy, and health indicators are poor, especially for infant mortality, immunizations, and infectious diseases.[30]

With the aid of local research personnel, we recruited a convenience sample of 11 physicians at ICC who provided patient care for the cancers that WfO covers (breast, prostate, cervical, gastric, lung, thyroid, colon, and rectal). At the time of study enrollment, none of these 11 physicians had ever used WfO; physicians who had used WfO, or who were unsure whether they had, were excluded.



Interview Guide

A semistructured interview guide was developed to collect data from physicians about their viewpoints on using an AI-based system like WfO. The interview guide consisted of 19 open-ended questions grouped into five sections covering job role overview, the patient population at ICC, perceived ease of use and usefulness of WfO, perceived productivity and efficiency of using WfO, and other comments. Demographic information was collected at the end of the guide. The full interview guide in English is provided in [Supplementary Appendix A] (available in the online version). The interview guide was translated into Portuguese (the local language) for data collection.

Local qualitative researchers at ICC (C.M. and A.M.) conducted the individual, face-to-face, semistructured interviews in Portuguese. All data were audio recorded on a HIPAA-compliant portable device. Study participants provided written informed consent prior to the interview. For data privacy purposes, study participants were assigned unique identification numbers, and interviews were deidentified prior to analysis. This study was approved by the local ICC Institutional Review Board.



Data Analysis

Interviews were first transcribed, deidentified, and then translated into English by a translation agency with expertise in Portuguese-to-English translation.[31] A thematic analysis approach[32] [33] guided by the constant comparison method was employed using NVivo V.12.6.0, a qualitative data analysis tool.[34] Interview transcripts were systematically examined by two members of the qualitative research team (P.G. and A.R.) to generate a coding scheme, which was reviewed and refined by the remaining research team members (S.E. and R.R.). The final 24 codes were used by the two researchers (P.G. and A.R.) to code and analyze the interview transcripts. Codes were then regrouped into overarching themes. Repetitive phrases that confirmed the same idea by the interviewer or interviewee were coded only once. Statements that expressed multiple concepts were assigned multiple codes. A physician informaticist (R.R.) acted as the third reviewer, performing the final adjudication and addressing any discrepancies in the codebook during the coding process.



Results

Participants

In total, 11 physician oncologists with varied subspecialties, working as either full-time or part-time employees at the ICC, participated in this study. The majority of participants were male (N = 7, 64%) and between the ages of 41 and 50 years (N = 5, 46%). Physician participants were either surgical (N = 7, 64%) or medical oncologists (N = 4, 36%), and just under half had practiced at ICC for less than 5 years (N = 5, 46%). Overall health care experience varied widely across the study cohort. Interviews lasted 17 minutes on average (range: 9.7–34.5 minutes).



Thematic Analysis

Interview transcripts were thematically analyzed and presented at three hierarchical levels: codes, subthemes, and overarching themes. Codes captured one or more insights about the data and represented the most granular level of interpretation (e.g., patient load, manual review, decision support, and trust; [Fig. 1]). The codes were first grouped into higher-level subthemes centered around a common concept or idea (i.e., current setting, workload, and patient population; existing challenges in cancer treatment; perceived benefits/promoters; and perceived challenges/barriers of using AI-based tools). Finally, the subthemes were grouped into the highest-level themes conveying the overall meaning of the coded data (i.e., theme 1: general context and theme 2: perceptions around the potential use of an AI-based tool).

Fig. 1 Thematic analysis describing themes, subthemes, and codes.

Theme 1: General context

The interviewees described several contextual factors, such as the underserved setting, workload, and existing challenges, as detailed below.

(a) Current setting, workload, and patient population

According to the participants, ICC is one of the busiest training hospitals in Fortaleza, Brazil, with a high patient volume and the greatest proportion of patients aged ≥50 years. Patients typically had low income and education levels and were covered by the Sistema Único de Saúde (SUS), Brazil's public health system. Participants pointed to the high patient turnover, the resulting increase in workload, and varied patient experiences when discussing the integration of technology at ICC. However, they did not make explicit remarks on whether the existing setting would be affected by this new technology. Many participants reserved their opinion on how WfO would influence their workflow and patient care at ICC because they had not yet experienced the use of WfO in a real-world setting.



(b) Existing challenges in cancer treatment in an underserved setting

Participants' treatment recommendations in practice were based on their individual clinical knowledge and supported by multiple external resources (i.e., guidelines and current scientific literature), as well as a laborious review process. Clinicians frequently looked to international guidelines for treatment-related decision support; among the most commonly cited were those of the American Cancer Society,[35] the European Society for Medical Oncology,[36] and the NCCN.[37] In addition, a few participants mentioned the challenge of information overload and, at times, outdated literature. Participants also noted the need for geographic considerations, since recommendations and treatment options may not be applicable across regions (e.g., substituting an expensive drug with limited availability with one that is lower cost and readily available), given the resources available in their country and the clinic's unique setting. One participant added:

“Generally, these consensuses [guideline], the majority of them are European or American, those we use currently, despite of there being many countries, different concepts... Ok, there is the European, there is the American, there are some too from Asia, Japan... Depending on the type of cancer, a lot more is produced. In reality, what we do is a compilation of all these consensuses and see which one is more appropriate to our reality, which is very different from theirs. Also because we, theoretically for the Entrevista de Saúde, we are considered a third world country, right? So, we kind of adapt their reality to ours, even some treatments, whatever can be done.” [Participant # 02]



Theme 2: Perceptions around the potential use of an AI-based tool

(a) Perceived benefits

Participants felt that WfO would provide support so that their decisions were based on the most current scientific evidence. Some participants used the words "assurance" and "security" in reference to the additional confidence that WfO would offer in the decision-making process. It was noted that when a physician's opinion differed from WfO's, it might prompt the physician to research and confirm the suggestions through evidence, ultimately adding a layer of security in reaching a final conclusion. Quotations supporting these perceptions are reported in [Table 1].

Table 1 Frequently mentioned perceived benefits resulting from the implementation of an AI-based tool

Participant ID | Mapping to codes | Relevant quotes
#03 | Decision support | "But it is going to help also in the diagnose of treatment, but as an assistive technology."
#12 | Decision support | "I believe that anything that proposes to add to the decision-making and benefits the patient is always welcome."
#03 | Decision support | "I think it gives us greater security. And at the same time, Watson gives another, what is the word... another guidance. [We] have to research to see where we went wrong, you know? So, it is really going to help with that too. If it matches, I feel more secure."
#02 | Current, scientific, evidence based | "I see it as a great ally in this, if you carry out a course of action that is mentioned in the latest guidelines, if you follow a course of action that, including Watson, that is in real-time, it is saying that in reality that is the best course of action."
#13 | Current, scientific, evidence based | "It's an artificial intelligence tool that will help us to make choices regarding treatments, based on the little I know. It is based on entering of patients' data, age, staging, and everything. It will give us treatment options that are references based on current scientific knowledge."
#01 | Learnability | "Watson - for Oncology - still is not superior to current intelligence, but that is only a matter of time and data for it to really surpass existing protocols, because it can learn, right? That's what I understood that he takes in all the... papers, all articles, and it does not only take in the words and phrases, but it actually interprets, right? I thought it was, it is brilliant."
#01 | Recommendation stratification by evidence | "I believe that, from the moment you have a Watson in your life that goes, "look, evidence level one is this, evidence level two is that," it makes our lives a lot easier, doesn't it? Based on something that came out twenty minutes ago. So, I believe that to be really, critical."
#02 | Interoperability with EHRs | "Do not know if it would be possible and plausible - Watson [integrated] into the medical record. If that was possible, that would be perfect because you can enter the relevant information from the medical record, which you'd have to enter anyway, and this information would be then outsourced to Watson and then, done, you would not have to do two jobs."




(b) Perceived challenges

Participants expressed multiple concerns related to WfO's capabilities in cancer treatment recommendations and decision-making, including concerns about new drug/treatment availability, insurance coverage, and patients who were not medically fit to receive a recommended treatment or who refused to try new treatments ([Table 2]).

Table 2 Frequently mentioned perceived challenges resulting from the implementation of an AI-based tool

Participant ID | Mapping to codes | Relevant quotes
#01 | Lack the human factor/intelligence | "…we know that, in reality, the book sometimes says one thing, no, for this one you have to use chemotherapy, this one you will have to do surgery. But we have to see our [reality], we have to adapt also to our reality. Sometimes the patient also has other comorbidities and the risk of a surgery is too high, at times the patient does not want it…"
#01 | Treatment delivery related barriers | "This issue of, sometimes doing a protocol that alien to our reality, because no, we are in a sub-developed country, we have limited resources, you see? Then, we can't always, like, apply American literature here, unfortunately, because of social differences, really. This is another concern. That sometimes you touch that which would be in an ideal world, but that, unfortunately, due to lack of resources, you can't manage."
#02 | Treatment delivery related barriers | "… in the NCCNs, they already have these new therapies, already with indication, like "oh, for patients with neoplasia stage 4 this is the ideal. For us, no. This is not a reality. So we have to use, uh, somewhat older consensuses, still based on old concepts, because we do not have this new technology, because the [insurance] does not approve it."
#03 | Treatment delivery related barriers | "I think... What weighs the most is the lack of availability, you know. Certain therapies. Not everything that is in the literature, for example, is available to a SUS [the name of a public insurance] patient. So we have to work with what we have. [For example] new drugs for prostate cancer, angiogenesis inhibitor for kidney cancer, there isn't much availability."
#05 | Treatment delivery related barriers | "…the majority of patients we see are from SUS, Sistema Único de Saúde [the name of a public insurance]. For those, I really doubt that it is going to add anything, because it is going to suggest a bunch of treatments, a bunch of things that our patients do not have access to, you know."
#01 | Treatment delivery related barriers | "Considering that Watson, I imagine it is based on the most recent literature, on the most recent discoveries, on the most recent papers, on the most recent guidelines, it is going to suggest a bunch of treatments to us that our patients do not have access to, that our patients will not follow through."
#02 | Treatment delivery related barriers | "There is no point in Watson saying, uh, let's say, "We need to do a… esophagectomy," which is a complex surgery, but me, uh, "no, the esophagectomy is necessary but the patient has a cardiopathy, I do not have a cardio-ICU in this hospital." This is for me impractical, this patient will not be treated with an esophagectomy, at least not here."
#09 | Physician autonomy | "I believe that what bothers doctors is when [we are told], "Look, this is here, you are going to follow this." Their autonomy is taken away. As a doctor, as a person who graduated that long ago, I think that to be the obstacle."
#01 | Legal liability | "But I believe that in about ten years or so, who knows, it will be the standard, to use Watson for everything. What I sometimes fear is that it be made a rule, shoved down our throats, you know, "Look, if you can't get Watson, it is wrong," you know? And I believe that we here have a strong social bias, you see? So, at times, I catch myself thinking in the legal issues. Let's say that I did not use Watson and we get to a point [...], "But why didn't you follow? See here, evidence A, B, C." You get it?"
#05 | Integration with workflow | "I'd have to see what it could add to my practice in the day-to-day, in what I do. What it could change. If you already work and follow scientifically recognized guidelines, theoretically believed to be accurate, that is done correctly, then your work follows what is based on the scientific literature. I would like to know, like, I don't know what it would add to the practice in the day-to-day."


Other anticipated challenges with WfO included concerns about the potential learning curve and workflow integration. Half of the participants felt that WfO would be hindered by the fact that it is an AI system and could not feel human empathy toward the patient. Participants relayed that the practice of oncology relies on a physician's ability to connect with patients, and WfO would not be able to form the human connection found in the physician–patient relationship. As one participant described:

"…the doctor's conscience, awareness [is not there]. So, it does not see the patient, it does not know the case, it is not part of the discussions, it does not know the interests of the patient, it does not know the patient's objectives…" [Participant # 08]

Several participants feared that WfO would take the place of the primary decision-maker and ultimately take away the physician's autonomy to make treatment recommendations. However, they also mentioned that humans have a deeper understanding of a patient's circumstances than a machine could comprehend.

"I think about it [this concern] too [i.e.,] of having Watson above the physician, though the doctor is the one [directly] dealing with [the patient and] with that situation and may have [deeper] insights [such as] seeing that the patient can't afford or the family does not want or they live too far away to do the therapy." [Participant # 01]

When asked how WfO might impact efficiency, participants expressed concerns regarding the additional work required to input data into both WfO and their current electronic health record (EHR). One participant said:

“Whether you want it or not, it will require time for the consultation plus the time for implementation into Watson, to put [data] into the platform.” [Participant # 02]



Discussion

This study assessed the perspectives of physicians who were prospective users of an AI-based tool for cancer care in an underserved clinical setting, and it identified both potential benefits and challenges/barriers of AI-based CDSSs in such settings. With respect to benefits, physicians indicated that easy access to the latest clinical recommendations could result in enhanced, standardized cancer care, delivered with greater confidence and efficiency. This functionality is a potential benefit in all settings, but it is particularly important in high-volume, underresourced settings where physicians have little time to keep pace with scientific advances.[38] [39] They also expected more efficiency in task performance facilitated by an automated solution, provided the tool is interoperable with the existing EHR and avoids duplication of work. Several studies have reported similar benefits of WfO and comparable AI-based technologies in underserved settings, mostly after tool implementation,[40] [41] [42] [43] and have highlighted existing gaps in the implementation and acceptance of AI in health care across various cultural and economic backgrounds.[44] Users in Mexico reported that clinics lacking expertise in a particular subspecialty would benefit, as would medical students and residents.[19] In China, one study demonstrated that an AI-based tool promoted the standardization and personalization of treatment,[29] and another suggested that a majority of Chinese users approved of the quality (86.3%) and comprehensibility (88.2%) of the treatment options and rationale offered by AI-based tools.[20]

Despite these perceived benefits, physicians raised several challenges/barriers, such as concerns about autonomy in decision-making when there is discordance, the possibility that overreliance on technology would diminish human empathy, the learning curve that new users might face, and workflow disruptions arising from suboptimal integration with everyday routines. Concerns specific to underserved settings included the availability of innovative treatments in resource-poor settings, including advanced tools and skill sets; the costs associated with such treatments; reservations about treatment acceptance by patients, particularly because of their varied levels of education and technology acceptance; and the development of trust among physicians, given the uncertainties between what is known and unknown.

Additional factors impeding physicians' enthusiasm were their shared fears about losing autonomy and their conviction that a machine should only be used as an aid to human clinicians in the decision-making process. Participants relayed that the field of oncology relies on a physician's ability to connect with the patient, and a machine would not be able to form the human connection found in the physician–patient relationship. Finally, clinicians shared concerns about a steep learning curve, an increased need for time and effort, and disruption of routine workflow.

Transferability of a health intervention refers to its generalizability to a wider population, another setting, another time, and/or a different context. This is particularly relevant to the implementation of advanced technologies in resource-limited settings.[45] Our future work entails studies focused on evaluating implementation strategies that promote health equity and on understanding the perspectives of various stakeholders in integrating a social agenda and evidence-based practices into cancer care in diverse contexts.

One of the biggest concerns to emerge from our study was the availability of resources in low- and middle-income countries like Brazil,[46] including limited access to high-cost therapies, differences between the United States' and other countries' cancer treatment guidelines, obtaining national regulatory approvals, and contending with national pharmaceuticals. Several examples of localization-related challenges have been reported in studies conducted in countries such as Mexico and China,[41] [47] and our recently published review includes additional examples.[48] Prior studies have also demonstrated that implementation and use of AI-based tools in resource-poor countries require a strong understanding of local social contexts, infrastructure requirements, and the availability of other resources.[21] [49] In particular, there is an important need to assess the role of local social and cultural context in training an AI tool that has been developed in a Western setting. For example, in a country like India with multiple ethnic groups, AI tools need to take patient ethnicity into account in their design and development.[50]

Trust in recommendations made by AI is another important factor that may influence adoption. This factor appeared important at ICC, where physicians expressed reservations about building trust in technology, especially when used for cancer care. Concerns also persisted around patient acceptance of specific treatments, varied education levels, and technology acceptance. In the current era, in which interactions between humans and technologies are increasing, trust is an essential psychological mechanism for dealing with the uncertainties between what is known and unknown. The accuracy and reliability of AI in health care have increased considerably, but issues of trust persist when there is a lack of understanding about how a system operates, commonly referred to as the "black box."[51] There are ongoing efforts to enhance the interpretability and explainability of the outputs generated by AI models, such as the use of user-centered design and the presentation of predictive model output to clinicians through visualizations and dashboards.[51] These explanatory capabilities would enhance clinicians' ability to transfer this knowledge to their patients, especially in cancer care.[52]

While a postimplementation study was not within the scope of our current research, future research could evaluate whether physicians report similar facilitators and barriers to this AI tool and whether new concerns emerge as a result of its use in this specific setting.

Furthermore, continued research should explore how patient characteristics, clinician workload, availability of resources and treatments, and other challenges common in underresourced settings will impact the use and benefits of advanced technologies in these settings. Evidence has shown that shared and informed decision-making helps render better care and furthers ethical goals.[53] [54] With shared decision-making, patients are put at the center of health care,[53] whether or not technology is involved in the decision-making process. Previous studies have shown that clinical practice guidelines can help reduce unwarranted variation in care by improving the decision-making of physicians and patients.[18] Whether technology has a role in standardizing treatment options and improving decision-making is an area that needs further exploration. In countries like Brazil, a high prevalence of poverty and a population with limited education, resources, and awareness of their rights raise concerns about patient autonomy.[55] The impact of integrating advanced technologies in such a scenario needs further investigation.

This study has several limitations. We did not include radiation oncologists in our study and therefore did not capture their expectations of the AI tool, which may differ from those of other oncologists. We also focused on physicians rather than other potential clinical users (e.g., nurses, trainees, and other allied health professionals). The study was conducted in one setting, and its findings may not be generalizable to other settings.



Conclusion

This study provides an initial understanding of how physicians anticipate that an AI-based tool could influence cancer care. Physicians mentioned potential benefits along with several barriers arising from the integration of advanced technologies into cancer care delivery in an underserved setting. The learnings from this study provide an opportunity for AI developers to enhance the utility of such tools by addressing the concerns of users practicing in unique settings across the world.



Clinical Relevance Statement

This study provides stakeholders with a novel opportunity to anticipate the expectations of future users prior to AI CDSS implementation in unique settings. The study suggests that the use of AI-based CDSS technologies could help physicians in underresourced settings keep pace with the most current, evidence-based recommendations and practices. Additionally, CDSSs in busy settings could help alleviate some existing challenges, such as heavy workload, information overload, and manual reviews. However, the use of AI in health care, particularly cancer care, poses many challenges, especially in underserved countries, such as loss of autonomy, paucity of resources, and lack of trust. Additional research is needed to understand the balance between the value added by innovative AI CDSS technology and the anticipated challenges.



Conflict of Interest

S.E., P.M.G., A.R., R.R., and D.W.B. received salary support from a grant funded by IBM Watson Health. D.W.B. has received research support and consults for EarlySense, which makes patient safety monitoring systems. He receives cash compensation from CDI (Negev), which is a not-for-profit incubator for health IT startups. He receives equity from ValeraHealth, which makes software to help patients with chronic diseases, Clew, which makes software to support clinical decision-making in intensive care, and MDClone, which takes clinical data and produces deidentified versions of it. He consults for and receives equity from AESOP, which makes software to reduce medication error rates, and FeelBetter. He has received research support from MedAware. R.R., W.F., and M.S.-L. are employed by IBM Watson Health, and G.P.J. was employed at IBM Watson Health when the research was conducted and is now employed by Intuitive Surgical. The remaining authors declare no conflicts of interest.

Acknowledgments

The authors thank the physicians at Instituto do Câncer do Ceará (ICC) who participated in this study. They thank Rezzan Hekmat, IBM Watson Health, for help with project coordination and management.

Protection of Human and Animal Subjects

The study was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects and was reviewed and approved by the ICC Institutional Review Board.


* Equal contributor first authors.


Supplementary Material

  • References

  • 1 Snowdon JL, Robinson B, Staats C. et al. Empowering caseworkers to better serve the most vulnerable with a cloud-based care management solution. Appl Clin Inform 2020; 11 (04) 617-621
  • 2 Graham S, Depp C, Lee EE. et al. Artificial intelligence for mental health and mental illnesses: an overview. Curr Psychiatry Rep 2019; 21 (11) 116
  • 3 Krittanawong C, Zhang H, Wang Z, Aydar M, Kitai T. Artificial intelligence in precision cardiovascular medicine. J Am Coll Cardiol 2017; 69 (21) 2657-2664
  • 4 Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in radiology. Nat Rev Cancer 2018; 18 (08) 500-510
  • 5 Toh C, Brody JP. Evaluation of a genetic risk score for severity of COVID-19 using human chromosomal-scale length variation. Hum Genomics 2020; 14 (01) 36
  • 6 Cdcgov. Implementing Clinical Decision Support Systems | CDC | DHDSP. @CDCgov. Updated 2021–07–22T02:23:13Z. Accessed June 07, 2022 at: https://www.cdc.gov/dhdsp/pubs/guides/best-practices/clinical-decision-support.htm
  • 7 Clinical Decision Support. . Accessed June 07, 2022 at: https://www.ahrq.gov/cpi/about/otherwebsites/clinical-decision-support/index.html
  • 8 Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digit Med 2020; 3 (01) 17
  • 9 Murphy EV. Clinical decision support: effectiveness in improving quality processes and clinical outcomes and factors that may influence success. Yale J Biol Med 2014; 87 (02) 187-197
  • 10 Giordano C, Brennan M, Mohamed B, Rashidi P, Modave F, Tighe P. Accessing artificial intelligence for clinical decision-making. Front Digit Health 2021; 3: 645232
  • 11 Araujo SM, Sousa P, Dutra I. Clinical decision support systems for pressure ulcer management: systematic review. JMIR Med Inform 2020; 8 (10) e21621
  • 12 Minian N, Lingam M, Moineddin R. et al. Impact of a web-based clinical decision support system to assist practitioners in addressing physical activity and/or healthy eating for smoking cessation treatment: protocol for a hybrid type I randomized controlled trial. JMIR Res Protoc 2020; 9 (09) e19157
  • 13 Vani A, Kan K, Iturrate E. et al. Leveraging clinical decision support tools to improve guideline-directed medical therapy in patients with atherosclerotic cardiovascular disease at hospital discharge. Cardiol J 2020; DOI: 10.5603/CJ.a2020.0126.
  • 14 Romero-Brufau S, Wyatt KD, Boyum P, Mickelson M, Moore M, Cognetta-Rieke C. Implementation of artificial intelligence-based clinical decision support to reduce hospital readmissions at a regional hospital. Appl Clin Inform 2020; 11 (04) 570-577
  • 15 Pawloski PA, Brooks GA, Nielsen ME, Olson-Bullis BA. A systematic review of clinical decision support systems for clinical oncology practice. J Natl Compr Canc Netw 2019; 17 (04) 331-338
  • 16 Tanguay-Sela M, Benrimoh D, Popescu C. et al. Evaluating the perceived utility of an artificial intelligence-powered clinical decision support system for depression treatment using a simulation center. Psychiatry Res 2022; 308: 114336
  • 17 Hogue S-C, Chen F, Brassard G. et al. Pharmacists' perceptions of a machine learning model for the identification of atypical medication orders. J Am Med Inform Assoc 2021; 28 (08) 1712-1718
  • 18 Laï M-C, Brian M, Mamzer M-F. Perceptions of artificial intelligence in healthcare: findings from a qualitative survey study among actors in France. J Transl Med 2020; 18 (01) 14
  • 19 Rho MJ, Park J, Moon HW. et al. Dr. Answer AI for prostate cancer: intention to use, expected effects, performance, and concerns of urologists. Prostate Int 2021; 10 (01) 38-44
  • 20 Lazarus JV, Baker L, Cascio M. et al; Nobody Left Outside initiative. Novel health systems service design checklist to improve healthcare access for marginalised, underserved communities in Europe. BMJ Open 2020; 10 (04) e035621
  • 21 Wahl B, Cossy-Gantner A, Germann S, Schwalbe NR. Artificial intelligence (AI) and global health: how can AI contribute to health in resource-poor settings?. BMJ Glob Health 2018; 3 (04) e000798
  • 22 Saiz FS, Sanders C, Stevens R. et al. Artificial intelligence clinical evidence engine for automatic identification, prioritization, and extraction of relevant clinical oncology research. JCO Clin Cancer Inform 2021; 5: 102-111
  • 23 The Memorial Sloan Kettering Cancer Center (MSKCC). 2019
  • 24 IBM Watson for Oncology. 2019 . Accessed June 7, 2022 at: https://www.ibm.com/us-en/marketplace/clinical-decision-support-oncology
  • 25 Rocha HAL, Emani S, Arruda CAM. et al. Nonuser physician perspectives about an oncology clinical decision-support system: a qualitative study. J Clin Oncol 2020; 38 (15) DOI: 10.1200/JCO.2020.38.15_suppl.e14061.
  • 26 Arriaga Y, Hekmat R, Draulis K. et al. Abstract P4-14-05: a systematic review of concordance studies using Watson for Oncology (WfO) to support breast cancer treatment decisions: a four-year global experience. Cancer Res 2020; 80 (4_suppl): P4-14-05
  • 27 Arriaga YE, Hekmat R, Draulis K. et al. A review of gynecological cancers studies of concordance with individual clinicians or multidisciplinary tumor boards for an artificial intelligence-based clinical decision-support system. J Clin Oncol 2020; 38 (15_suppl) DOI: 10.1200/JCO.2020.38.15_suppl.e14070.
  • 28 Gradín C. Why is poverty so high among Afro-Brazilians? A decomposition analysis of the racial poverty gap. J Dev Stud 2009; 45 (09) 1426-1452
  • 29 Ferreira FH, Lanjouw P, Neri M. A robust poverty profile for Brazil using multiple data sources. Rev Bras Econ 2003; 57 (01) 59-92
  • 30 Cufino Svitone E, Garfield R, Vasconcelos MI, Araujo Craveiro V. Primary health care lessons from the northeast of Brazil: the Agentes de Saúde Program. Rev Panam Salud Publica 2000; 7 (05) 293-302
  • 31 Translations EB. . Accessed June 7, 2022 at: https://www.ebtranslations.com/
  • 32 Marks DF, Yardley L. Research methods for clinical and health psychology. Sage 2004
  • 33 Collins SA, Rozenblum R, Leung WY. et al. Acute care patient portals: a qualitative study of stakeholder perspectives on current practices. J Am Med Inform Assoc 2017; 24 (e1): e9-e17
  • 34 Qualitative Data Analysis Software | NVivo. Accessed June 6, 2022 at: https://www.qsrinternational.com/nvivo-qualitative-data-analysis-software/home
  • 35 American Cancer Society | Information and Resources about for Cancer: Breast, Colon, Lung, Prostate, Skin. 2020
  • 36 Esmo. ESMO. Accessed June 7, 2022 at: https://www.esmo.org/
  • 37 About NCCN. Accessed June 7, 2022 at: https://www.nccn.org/about/default.aspx
  • 38 Ngiam KY, Khor IW. Big data and machine learning algorithms for health-care delivery. Lancet Oncol 2019; 20 (05) e262-e273
  • 39 Clifford GD. The use of sustainable and scalable health care technologies in developing countries. Innov Entrep Health 2016; 3: 35-46
  • 40 Lee K, Lee SH, Preininger A, Shim J, Jackson G. Patient satisfaction with oncology clinical decision support in South Korea. J Clin Oncol 2019; 37 (15_suppl) DOI: 10.1200/JCO.2019.37.15_suppl.e18329.
  • 41 Sarre-Lazcano C, Armengol Alonso A, Huitzil Melendez FD. et al. Cognitive computing in oncology: a qualitative assessment of IBM Watson for oncology in Mexico. J Clin Oncol 2017; 35 (15_suppl) DOI: 10.1200/JCO.2017.35.15_suppl.e18166.
  • 42 Li T, Chen C, Zhang S-S. et al. Deployment and integration of a cognitive technology in China: experiences and lessons learned. J Clin Oncol 2019; 37 (15_suppl) DOI: 10.1200/JCO.2019.37.15_suppl.6538.
  • 43 Fang J, Zhu Z, Wang H. et al. The establishment of a new medical model for tumor treatment combined with Watson for Oncology, MDT and patient involvement. J Clin Oncol 2018; 36 (15_suppl) DOI: 10.1200/JCO.2018.36.15_suppl.e18504.
  • 44 Mahajan A, Vaidya T, Gupta A, Rane S, Gupta S. Artificial intelligence in healthcare in developing nations: the beginning of a transformative journey. Cancer Research, Statistics, and Treatment 2019; 2 (02) 182
  • 45 Schloemer T, Schröder-Bäck P. Criteria for evaluating transferability of health interventions: a systematic review and thematic synthesis. Implement Sci 2018; 13 (01) 88
  • 46 Paim J, Travassos C, Almeida C, Bahia L, Macinko J. The Brazilian health system: history, advances, and challenges. Lancet 2011; 377 (9779): 1778-1797
  • 47 Zou F-W, Tang Y-F, Liu C-Y, Ma J-A, Hu C-H. Concordance study between IBM Watson for oncology and real clinical practice for cervical cancer patients in China: a retrospective analysis. Front Genet 2020; DOI: 10.3389/fgene.2020.00200.
  • 48 Emani S, Rui A, Rocha HAL. et al. Physicians' perceptions of and satisfaction with artificial intelligence in cancer treatment: a clinical decision support system experience and implications for low-middle-income countries. JMIR Cancer 2022; 8 (02) e31461
  • 49 Rocha HAL, Dankwa-Mullan I, Meneleu P. et al. Using implementation science to examine impact of a social responsibility agenda on addressing cancer health disparities in Ceará, Brazil. J Clin Oncol 2020; 38 (15_suppl) DOI: 10.1200/JCO.2020.38.15_suppl.e19071.
  • 50 Pradhan K, John P, Sandhu N. Use of artificial intelligence in healthcare delivery in India. J Hosp Manage Health Pol 2021; 5 DOI: 10.21037/jhmhp-20-126.
  • 51 Turchioe MR, Benda NC, Liu LG, Wang F, Miller KE. Designing a window into the “black box”: user-centered design for improving interpretability of predictive models. Panel discussion. AMIA Annual Symposium proceedings/AMIA Symposium (e-pub ahead of print). 2020
  • 52 Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res 2020; 22 (06) e15154
  • 53 Stiggelbout AM, Van der Weijden T, De Wit MP. et al. Shared decision making: really putting patients at the centre of healthcare. BMJ 2012; 344: e256
  • 54 Alexander MB. Disclosing deviations: using guidelines to nudge and empower physician-patient decision making. Nev LJ 2018; 19: 867
  • 55 Mendonça VS, Custódio EM. Nuances and challenges of medical malpractice in Brazil: victims and their perception. Rev Bioet 2016; 24: 136-146

Address for correspondence

Srinivas Emani, PhD
Division of General Internal Medicine, Brigham and Women's Hospital
1620 Tremont Street, OBC-3, Boston, MA 02120
United States   

Publication History

Received: 25 January 2022

Accepted: 23 May 2022

Article published online:
30 July 2022

© 2022. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution License, permitting unrestricted use, distribution, and reproduction so long as the original work is properly cited. (https://creativecommons.org/licenses/by/4.0/)

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany
