DOI: 10.1055/a-2794-6130
Comment on “The Pediatric Surgeon's AI Toolbox: How Large Language Models Like ChatGPT Are Simplifying Practice and Expanding Global Access”
We would like to comment on “The Pediatric Surgeon's AI Toolbox: How Large Language Models Like ChatGPT Are Simplifying Practice and Expanding Global Access.”[1] A strength of this study is its narrative review design, which draws on large and diverse databases such as PubMed, Scopus, Embase, and Google Scholar across a 10-year period, as well as policy documents from international organizations such as the WHO, FDA, and the EU. However, narrative reviews are limited in systematic rigor and quantitative synthesis. Data selection by a single author raises the possibility of selection and confirmation bias, especially when the quality of included studies is not evaluated against established frameworks such as PRISMA or CASP. Furthermore, the study's restriction to English-language sources may have excluded valuable data from countries with differing medical AI policies.
In terms of data analysis, the study used a qualitative synthesis method, which is effective for identifying trends and addressing practical concerns. However, it lacks statistical measures of effectiveness or impact, such as the average time saved by using LLMs or the degree of content validity compared with expert physicians. The absence of inter-rater reliability for evaluating message quality or communication outcomes makes it impossible to determine whether LLMs are genuinely effective in practice. Furthermore, the work included several preprints that had not been peer-reviewed, which may have affected the accuracy of its conclusions.
Based on reanalysis and reinterpretation, the evidence suggests that LLMs have greater potential to improve clinical communication and documentation than to support direct medical decision-making. LLMs are effective at reducing documentation burden and improving family understanding, but they cannot replace clinical judgment, which still requires a “clinician-in-the-loop” to prevent errors and dataset biases. This analysis also raises concerns about how AI technology will affect legal liability and professional ethics when integrated into real-world systems, particularly for pediatric patients, who are subject to legal restrictions on consent and personal data.
Broader questions for debate include: (1) To what extent can LLMs replace medical assistants while maintaining the quality of patient care? (2) What ethical and legal criteria should govern the use of artificial intelligence in communicating with the families of sick children? (3) How should “privacy-preserving AI” systems be designed to comply with child health data requirements across countries? (4) Should scholars develop a consistent, international strategy for evaluating LLMs? Addressing these questions will contribute to a more informed, safe, and sustainable use of LLMs in pediatric surgery.
Declaration of GenAI Use
The authors used a language-editing tool (QuillBot) in the preparation of this article.
Data Availability Statement
The data that support the findings of this study are available from the corresponding author, H. Daungsupawong, upon reasonable request.
Contributors' Statement
H.D. contributed 50% of the ideas and was responsible for conceptualization, data analysis, validation, visualization, and manuscript preparation, including original drafting and review and editing, and approved the final version. V.W. contributed the remaining 50% of the ideas and was responsible for conceptualization, supervision, validation, and visualization, and approved the final version.
Publication History
Received: 15 October 2025
Accepted: 21 January 2026
Article published online:
03 February 2026
© 2026. Thieme. All rights reserved.
Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany
Reference
- 1 Tinajero CAC. The pediatric surgeon's AI toolbox: how large language models like ChatGPT are simplifying practice and expanding global access. Eur J Pediatr Surg 2025. Epub ahead of print
