Appl Clin Inform 2025; 16(04): 892-902
DOI: 10.1055/a-2647-1210
Review Article

Health Consumers' Use and Perceptions of Health Information from Generative Artificial Intelligence Chatbots: A Scoping Review

John Robert Bautista
1   Sinclair School of Nursing, University of Missouri-Columbia, Columbia, Missouri, United States
2   Institute for Data Science and Informatics, University of Missouri-Columbia, Columbia, Missouri, United States
,
Drew Herbert
1   Sinclair School of Nursing, University of Missouri-Columbia, Columbia, Missouri, United States
,
Matthew Farmer
1   Sinclair School of Nursing, University of Missouri-Columbia, Columbia, Missouri, United States
,
Ryan Q. De Torres
3   College of Nursing, University of the Philippines-Manila, Manila, Philippines
,
Gil P. Soriano
4   Department of Nursing, National University, Manila, Philippines
,
Charlene E. Ronquillo
5   School of Nursing, University of British Columbia Okanagan, Canada

Funding This work was supported by a start-up grant from MU Sinclair School of Nursing.

Abstract

Background

Health consumers can use generative artificial intelligence (GenAI) chatbots to seek health information. As GenAI chatbots continue to improve and be adopted, it is crucial to examine how health information generated by such tools is used and perceived by health consumers.

Objective

To conduct a scoping review of health consumers' use and perceptions of health information from GenAI chatbots.

Methods

Arksey and O'Malley's five-step protocol was used to guide the scoping review. Following PRISMA guidelines, relevant empirical papers published on or after January 1, 2019, were retrieved between February and July 2024. Thematic and content analyses were performed.

Results

We retrieved 3,840 titles and reviewed 12 papers comprising 13 studies (quantitative = 5, qualitative = 4, and mixed methods = 4). ChatGPT was used in 11 studies, while two studies used GPT-3. Studies were most often conducted in the United States (n = 4). The studies involved general and specific (e.g., medical imaging, psychological health, and vaccination) health topics. One study explicitly used a theory. Eight studies were rated as excellent in quality. Studies were categorized as user experience studies (n = 4), consumer surveys (n = 1), and evaluation studies (n = 8). Five studies examined health consumers' use of health information from GenAI chatbots. Perceptions focused on: (1) accuracy, reliability, or quality; (2) readability; (3) trust or trustworthiness; (4) privacy, confidentiality, security, or safety; (5) usefulness; (6) accessibility; (7) emotional appeal; (8) attitude; and (9) effectiveness.

Conclusion

Although health consumers can use GenAI chatbots to obtain accessible, readable, and useful health information, negative perceptions of their accuracy, trustworthiness, effectiveness, and safety serve as barriers that must be addressed to mitigate health-related risks, improve health beliefs, and achieve positive health outcomes. More theory-based studies are needed to better understand how exposure to health information from GenAI chatbots affects health beliefs and outcomes.

Protection of Human and Animal Subjects

Human and/or animal subjects were not included in the project.


Authors' Contributions

J.R.B. conceptualized, managed, and supervised the project and developed the search terms. J.R.B., D.H., and M.F. performed a manual search and analyzed the extracted data. J.R.B., G.P.S., R.Q.D.T., and C.E.R. reviewed search results and screened the references. All authors extracted the data and revised and approved the final version of the manuscript. J.R.B. and D.H. drafted the manuscript.



Publication History

Received: 07 March 2025

Accepted: 01 July 2025

Accepted Manuscript online:
02 July 2025

Article published online:
27 August 2025

© 2025. Thieme. All rights reserved.

Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany