Abstract
Artificial intelligence large language models (LLMs), such as Chat Generative Pre-Trained
Transformer 4 (Chat-GPT) by OpenAI and Bard by Google, emerged in late 2022 and early 2023
as tools for answering questions, providing information, and offering suggestions to the layperson.
These LLMs influence how information is disseminated, making it essential to compare their
answers with those of experts in the corresponding field. The International Consensus Statement
on Endoscopic Skull-Base Surgery 2019 (ICAR:SB) is a multidisciplinary international
collaboration that critically evaluated and graded the current literature.
Objectives
To evaluate the accuracy and completeness of Chat-GPT and Bard responses to questions
derived from the ICAR:SB policy statements.
Design
Thirty-four questions were created from the ICAR:SB policy statements and entered into
Chat-GPT and Bard. Two rhinologists and two neurosurgeons graded the accuracy and
completeness of each LLM response on a 5-point Likert scale. The Wilcoxon rank-sum
and Kruskal–Wallis tests were used for statistical analysis.
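As context for the analysis above, the following is a minimal sketch of how the two nonparametric tests could be applied, assuming hypothetical Likert-score arrays (variable names and placeholder values are illustrative only; the study's actual data are not reproduced here):

import numpy as np
from scipy.stats import ranksums, kruskal

rng = np.random.default_rng(0)

# Hypothetical 5-point Likert accuracy scores for the 34 questions,
# flattened across the four raters (34 x 4 = 136 scores per model).
chatgpt_scores = rng.integers(3, 6, size=136)  # placeholder values
bard_scores = rng.integers(2, 6, size=136)     # placeholder values

# Wilcoxon rank-sum test comparing the two models' score distributions.
stat, p = ranksums(chatgpt_scores, bard_scores)
print(f"Wilcoxon rank-sum: statistic = {stat:.3f}, p = {p:.4f}")

# Kruskal–Wallis test comparing more than two groups at once,
# e.g., the score distributions of the four individual raters.
rater_groups = [rng.integers(3, 6, size=34) for _ in range(4)]
stat, p = kruskal(*rater_groups)
print(f"Kruskal–Wallis: H = {stat:.3f}, p = {p:.4f}")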
Setting
Online.
Participants
None.
Outcomes
Comparison of mean accuracy and completeness scores between (1) responses generated
by Chat-GPT and Bard and (2) ratings assigned by rhinologists and neurosurgeons.
Results
The Wilcoxon rank-sum test showed statistically significant differences in (1) the
accuracy (p < 0.001) and completeness (p < 0.001) of Chat-GPT responses compared with Bard responses and (2) the accuracy (p < 0.001) and completeness (p < 0.001) ratings given by rhinologists compared with neurosurgeons.
Conclusion
Chat-GPT responses were more accurate and complete than those of Bard, although both
models scored highly on both measures. Overall, rhinologists assigned lower grades than neurosurgeons.
Further research is needed to better understand the full potential of LLMs.
Keywords: artificial intelligence - large language models - Chat-GPT 4 - endoscopic skull-base
surgery