J Neurol Surg B Skull Base 2025; 86(06): 688-693
DOI: 10.1055/a-2436-4222
Original Article

Accuracy and Completeness of Bard and Chat-GPT 4 Responses for Questions Derived from the International Consensus Statement on Endoscopic Skull-Base Surgery 2019

Authors

  • Yavar Abgin

    1   California Northstate University College of Medicine, Elk Grove, California, United States
  • Kayla Umemoto

    1   California Northstate University College of Medicine, Elk Grove, California, United States
  • Andrew Goulian

    1   California Northstate University College of Medicine, Elk Grove, California, United States
  • Missael Vasquez

    2   Otolaryngology Department, Cedars-Sinai Medical Center, Los Angeles, California, United States
  • Sean Polster

    3   Department of Neurological Surgery, The University of Chicago Medicine, Chicago, Illinois, United States
  • Arthur Wu

    2   Otolaryngology Department, Cedars-Sinai Medical Center, Los Angeles, California, United States
  • Christopher Roxbury

    4   Department of Otolaryngology - Head and Neck Surgery, The University of Chicago Medicine, Chicago, Illinois, United States
  • Pranay Soni

    5   Department of Neurological Surgery, Cleveland Clinic Neurological Institute, Cleveland, Ohio, United States
  • Omar G. Ahmed

    6   Department of Otolaryngology - Head and Neck Surgery, Houston Methodist Hospital, Houston, Texas, United States
  • Dennis M. Tang

    7   Division of Otolaryngology, Cedars-Sinai Medical Center, Los Angeles, California, United States

Abstract

Artificial intelligence large language models (LLMs), such as Chat Generative Pre-Trained Transformer 4 (Chat-GPT) by OpenAI and Bard by Google, emerged in late 2022 and 2023 as tools for answering questions, providing information, and offering suggestions to the layperson. These LLMs affect how information is disseminated, so it is essential to compare their answers against expert consensus in the corresponding field. The International Consensus Statement on Endoscopic Skull-Base Surgery 2019 (ICAR:SB) is a multidisciplinary international collaboration that critically evaluated and graded the current literature.

Objectives

To evaluate the accuracy and completeness of Chat-GPT and Bard responses to questions derived from the ICAR:SB policy statements.

Design

Thirty-four questions were created based on the ICAR:SB policy statements and input into Chat-GPT and Bard. Two rhinologists and two neurosurgeons graded the accuracy and completeness of the LLM responses using a 5-point Likert scale. The Wilcoxon rank-sum and Kruskal–Wallis tests were used for analysis.
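As a point of illustration, both tests named above are nonparametric and therefore suit ordinal Likert ratings: the Wilcoxon rank-sum test compares two independent groups, and the Kruskal–Wallis test extends the comparison to three or more groups. Below is a minimal sketch of such an analysis using SciPy; all rating values and variable names are hypothetical placeholders, not the study's data.

```python
from scipy.stats import ranksums, kruskal

# Hypothetical 5-point Likert accuracy ratings for each model's responses
chatgpt_accuracy = [5, 4, 5, 5, 4, 5, 3, 5]
bard_accuracy = [4, 3, 4, 5, 3, 4, 4, 3]

# Wilcoxon rank-sum test: do the two models' rating distributions differ?
stat, p = ranksums(chatgpt_accuracy, bard_accuracy)
print(f"Wilcoxon rank-sum: statistic = {stat:.3f}, p = {p:.4f}")

# Kruskal-Wallis test: compare ratings across more than two groups,
# e.g., the four individual raters (two rhinologists, two neurosurgeons)
rhinologist_1 = [5, 4, 5, 5, 4, 5, 3, 5]
rhinologist_2 = [4, 4, 5, 4, 3, 4, 4, 4]
neurosurgeon_1 = [5, 5, 5, 4, 5, 5, 4, 5]
neurosurgeon_2 = [4, 5, 4, 4, 5, 4, 5, 5]
h, p_kw = kruskal(rhinologist_1, rhinologist_2, neurosurgeon_1, neurosurgeon_2)
print(f"Kruskal-Wallis: H = {h:.3f}, p = {p_kw:.4f}")
```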

Setting

Online.

Participants

None.

Outcomes

Comparison of mean accuracy and completeness scores between (1) responses generated by Chat-GPT versus Bard and (2) ratings assigned by rhinologists versus neurosurgeons.

Results

The Wilcoxon rank-sum test showed statistically significant differences in (1) the accuracy (p < 0.001) and completeness (p < 0.001) of Chat-GPT responses compared with Bard responses and (2) the accuracy (p < 0.001) and completeness (p < 0.001) ratings assigned by rhinologists compared with neurosurgeons.

Conclusion

Chat-GPT responses were overall more accurate and complete than Bard responses, although both models scored highly on both measures. Overall, rhinologists assigned lower ratings than neurosurgeons. Further research is needed to better understand the full potential of LLMs.

Publication History

Received: 07 March 2024

Accepted: 06 October 2024

Accepted Manuscript online:
08 October 2024

Article published online:
30 October 2024

© 2024. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany