DOI: 10.1055/a-2647-1210
Health Consumers' Use and Perceptions of Health Information from Generative Artificial Intelligence Chatbots: A Scoping Review
Funding This work was supported by a start-up grant from MU Sinclair School of Nursing.

Abstract
Background
Health consumers can use generative artificial intelligence (GenAI) chatbots to seek health information. As GenAI chatbots continue to improve and be adopted, it is crucial to examine how health information generated by such tools is used and perceived by health consumers.
Objective
To conduct a scoping review of health consumers' use and perceptions of health information from GenAI chatbots.
Methods
Arksey and O'Malley's five-step protocol was used to guide the scoping review. Following PRISMA guidelines, relevant empirical papers published on or after January 1, 2019, were retrieved between February and July 2024. Thematic and content analyses were performed.
Results
We retrieved 3,840 titles and reviewed 12 papers comprising 13 studies (quantitative = 5, qualitative = 4, and mixed methods = 4). ChatGPT was used in 11 studies, and GPT-3 in two. Most studies were conducted in the United States (n = 4). The studies involved general and specific (e.g., medical imaging, psychological health, and vaccination) health topics. One study explicitly used a theory. Eight studies were rated as excellent in quality. Studies were categorized as user experience studies (n = 4), consumer surveys (n = 1), and evaluation studies (n = 8). Five studies examined health consumers' use of health information from GenAI chatbots. Perceptions focused on: (1) accuracy, reliability, or quality; (2) readability; (3) trust or trustworthiness; (4) privacy, confidentiality, security, or safety; (5) usefulness; (6) accessibility; (7) emotional appeal; (8) attitude; and (9) effectiveness.
Conclusion
Although health consumers can use GenAI chatbots to obtain accessible, readable, and useful health information, negative perceptions of their accuracy, trustworthiness, effectiveness, and safety serve as barriers that must be addressed to mitigate health-related risks, improve health beliefs, and achieve positive health outcomes. More theory-based studies are needed to better understand how exposure to health information from GenAI chatbots affects health beliefs and outcomes.
Keywords
chatbots - consumer health informatics - generative artificial intelligence - health information - scoping review
Protection of Human and Animal Subjects
Human and/or animal subjects were not included in the project.
Authors' Contributions
J.R.B. conceptualized, managed, and supervised the project and developed the search terms. J.R.B., D.H., and M.F. performed a manual search and analyzed the extracted data. J.R.B., G.P.S., R.Q.D.T., and C.E.R. reviewed search results and screened the references. All authors extracted the data and revised and approved the final version of the manuscript. J.R.B. and D.H. drafted the manuscript.
Publication History
Received: 07 March 2025
Accepted: 01 July 2025
Accepted Manuscript online: 02 July 2025
Article published online: 27 August 2025
© 2025. Thieme. All rights reserved.
Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany