Methods Inf Med 2022; 61(05/06): 195-200
DOI: 10.1055/a-1900-7351
Short Paper

Medical Text Prediction and Suggestion Using Generative Pretrained Transformer Models with Dental Medical Notes

Joseph Sirrianni (1), Emre Sezgin (1), Daniel Claman (2), Simon L. Linwood (1, 3)

Author Affiliations
1   The Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, Ohio, United States
2   Pediatric Dentistry, Nationwide Children's Hospital, Columbus, Ohio, United States
3   School of Medicine, University of California, Riverside, California, United States
Funding This study was supported by Award Number UL1TR002733 from the National Center for Advancing Translational Sciences. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Center for Advancing Translational Sciences or the National Institutes of Health.

Abstract

Background Generative pretrained transformer (GPT) models are among the latest large pretrained natural language processing models. They enable model training with limited datasets and reduce dependency on large labeled datasets, which are scarce and costly to establish and maintain. There is rising interest in exploring the use of GPT models in health care.

Objective We investigate the performance of GPT-2 and GPT-Neo models for medical text prediction using 374,787 free-text dental notes.

Methods We fine-tuned pretrained GPT-2 and GPT-Neo models for next-word prediction on a dataset of over 374,000 manually written sections of dental clinical notes. Each model was trained on 80% of the dataset, validated on 10%, and tested on the remaining 10%. We report model performance in terms of next-word prediction accuracy and loss. For comparison, we also fine-tuned a non-GPT pretrained neural network model, XLNet (large), for next-word prediction. Additionally, we annotated each token in 100 randomly sampled notes by category (e.g., names, abbreviations, clinical terms, punctuation) and compared the performance of each model by token category.
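The study does not specify its training tooling; the following is a minimal sketch of the kind of causal language model fine-tuning the Methods describe, assuming the Hugging Face transformers and datasets libraries. The file name, column names, and hyperparameters are illustrative placeholders, not values from the study, and the notes file would need to be de-identified.

```python
# Sketch: fine-tune GPT-2 (or GPT-Neo) for next-word prediction with an
# 80/10/10 train/validation/test split, assuming Hugging Face libraries.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL_NAME = "gpt2"  # e.g., "EleutherAI/gpt-neo-1.3B" for GPT-Neo

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Hypothetical file: one note section per line.
raw = load_dataset("text", data_files={"train": "dental_notes.txt"})["train"]

# 80/10/10 split, as described in the Methods.
split = raw.train_test_split(test_size=0.2, seed=42)
heldout = split["test"].train_test_split(test_size=0.5, seed=42)
train_ds, val_ds, test_ds = split["train"], heldout["train"], heldout["test"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_tok = train_ds.map(tokenize, batched=True, remove_columns=["text"])
val_tok = val_ds.map(tokenize, batched=True, remove_columns=["text"])

# mlm=False gives causal (next-word) language modeling labels.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="gpt2-dental-notes",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    evaluation_strategy="epoch",
)

trainer = Trainer(model=model, args=args, train_dataset=train_tok,
                  eval_dataset=val_tok, data_collator=collator)
trainer.train()
```

Next-word prediction accuracy on the held-out test set would then be the fraction of tokens for which the model's top-1 prediction matches the token actually written in the note.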

Results Both models achieved acceptable accuracy scores (GPT-2: 76%; GPT-Neo: 53%), and the GPT-2 model also performed better in the manual evaluation, especially for names, abbreviations, and punctuation. Both GPT models outperformed XLNet in terms of accuracy. We share lessons learned, insights, and suggestions for future implementations.

Conclusion The results suggest that pretrained models have the potential to assist medical charting in the future. Our study presents one of the first applications of GPT models to medical notes.

Ethical Approval

This study was approved by the Institutional Review Board (IRB) of Nationwide Children's Hospital (IRB No: 00000877).


Author Contributions

S. L. L. conceived the idea. All authors contributed to the design of the study. J. S. designed the experiments and conducted the analysis. D. C. supported retrieval of the dataset. S. L. L. and D. C. supervised all parts of the study. J. S. and E. S. drafted the manuscript. All authors contributed to the manuscript and approved the final version.


Data Availability

The datasets used in this study include private and sensitive information (e.g., medical records, personal health information) that cannot be shared publicly. Inquiries may be directed to the corresponding author.




Publication History

Received: 08 March 2022

Accepted: 11 July 2022

Accepted Manuscript online: 14 July 2022

Article published online: 15 November 2022

© 2022. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany