DOI: 10.1055/a-2771-3155
Reply to Yu et al.
We are grateful to Yu et al. for their outstanding comments [1], particularly their thoughtful emphasis on transparency, usability, scalability, safety, and workflow integration. The Matsu Islands have implemented a series of online artificial intelligence (AI) interpretation systems [1] [2] [3]. Physicians are informed of the limitations of each model, in particular that these models serve as decision support tools rather than standalone diagnostic systems.
On the mobile picture archiving and communication system platform, the per-image interpretations with heatmaps and the per-patient probability-based diagnoses, including the applied thresholds, are readily accessible to physicians, together with instructions that report the sensitivity and specificity of each diagnosis.
The models were developed using high-performance computing resources. After optimization, the final models required approximately 10 GB of RAM, two central processing unit cores, and 10 GB of storage, enabling computation without a graphics processing unit. Given the relatively stable prevalence of Helicobacter pylori infection and premalignant gastric conditions in typical Asian populations, the predictive values observed in real-world practice are expected to align closely with those reported in the study.
Regarding scalability, images were collected from a single predominant endoscopy system with consistent imaging conditions. The model was trained on data from multiple hospitals [4], resembling a federated learning-like approach and reducing the impact of site-to-site variability. Nonetheless, real-world benefits and potential harms must continue to be monitored through ongoing safety and quality assurance procedures.
Regarding workflow integration, these probability scores and categorical flags can be stored directly in the electronic health record, allowing endoscopists to view AI-derived information alongside routine reports. Existing approaches often characterize AI mainly by its technical functions, with an emphasis on fulfilling regulatory requirements. Instead, AI governance should function as an enabling architecture that aligns developers, clinicians, institutions, and regulators toward shared population health goals [5]. Further work is needed to establish these interconnections and feedback loops, and to promote trust, patient safety, innovation, and the equitable diffusion of AI technologies.
Publication History
Article published online:
20 February 2026
© 2026. Thieme. All rights reserved.
Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany
References
- 1 Chiang TH, Hsu YN, Chen MH. et al. A rural-to-center artificial intelligence model for diagnosing Helicobacter pylori infection and premalignant gastric conditions using endoscopy images captured in routine practice. Endoscopy 2025
- 2 Hsu TK, Lai IP, Tsai MJ. et al. A deep learning approach for the screening of referable age-related macular degeneration – model development and external validation. J Formos Med Assoc 2024
- 3 Lin CS, Liu WT, Tsai DJ. et al. AI-enabled electrocardiography alert intervention and all-cause mortality: a pragmatic randomized clinical trial. Nat Med 2024; 30: 1461-1470
- 4 Lee YC, Chao YT, Lin PJ. et al. Quality assurance of integrative big data for medical research within a multihospital system. J Formos Med Assoc 2022; 121: 1728-1738
- 5 Lekadir K, Frangi AF, Porras AR. et al; FUTURE-AI Consortium. FUTURE-AI: international consensus guideline for trustworthy and deployable artificial intelligence in healthcare. BMJ 2025; 388: e081554
