Appl Clin Inform 2025; 16(05): 1493-1506
DOI: 10.1055/a-2707-2959
Research Article

Leveraging a Large Language Model for Streamlined Medical Record Generation: Implications for Health Care Informatics

Authors

  • Yi-Ling Chiang

    1   Division of Clinical Informatics, Department of Digital Medicine, Taichung Veterans General Hospital, Taichung, Taiwan
    2   Department of Industrial Engineering and Enterprise Information, Tunghai University, Taichung, Taiwan
  • Kuei-Fen Yang

    3   Department of Medical Administration, Medical Records Management Section, Taichung Veterans General Hospital, Taichung, Taiwan
  • Pin-Chih Su

    1   Division of Clinical Informatics, Department of Digital Medicine, Taichung Veterans General Hospital, Taichung, Taiwan
  • Shang-Feng Tsai

    1   Division of Clinical Informatics, Department of Digital Medicine, Taichung Veterans General Hospital, Taichung, Taiwan
    4   Division of Nephrology, Department of Internal Medicine, Taichung Veterans General Hospital, Taichung, Taiwan
    5   Department of Life Science, Tunghai University, Taichung, Taiwan
    6   Department of Post-Baccalaureate Medicine, College of Medicine, National Chung Hsing University, Taichung, Taiwan
  • Kai-Li Liang

    6   Department of Post-Baccalaureate Medicine, College of Medicine, National Chung Hsing University, Taichung, Taiwan
    7   Department of Otolaryngology, Taichung Veterans General Hospital, Taichung, Taiwan
    8   Department of Medical Administration, Taichung Veterans General Hospital, Taichung, Taiwan

Abstract

Objectives

This study aimed to leverage a large language model (LLM) to improve the efficiency and thoroughness of medical record documentation. It focused on aiding clinical staff in creating structured summaries with the help of an LLM and on assessing the quality of these artificial intelligence (AI)-proposed records compared with those produced by physicians.

Methods

A team of specialists, including data engineers, physicians, and medical information experts, was assembled under the direction of policymakers at the study hospital to develop guidelines for medical summaries produced by an LLM (Llama 3.1). The LLM drafts admission notes, weekly summaries, and discharge notes for physicians to review and edit. The validated Physician Documentation Quality Instrument (PDQI-9) was used to compare the quality of physician-authored and LLM-generated medical records.
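The article does not publish implementation details of the note-assistance workflow. The following minimal sketch only illustrates how a self-hosted Llama 3.1 endpoint might be prompted to propose a structured discharge note for physician review; the endpoint URL, model identifier, section headings, and prompt wording are assumptions for illustration, not the hospital's actual configuration.

```python
import requests

# Assumed local, OpenAI-compatible chat endpoint (e.g., as exposed by vLLM or Ollama);
# not the study hospital's actual deployment.
LLM_URL = "http://localhost:8000/v1/chat/completions"

# Illustrative guideline-style instructions; the real prompt templates are not published.
SYSTEM_PROMPT = (
    "You are a clinical documentation assistant. Draft a structured discharge note "
    "with the sections: Diagnosis, Hospital Course, Procedures, Medications at "
    "Discharge, and Follow-up Plan. Use only the facts provided."
)

def draft_discharge_note(clinical_facts: str) -> str:
    """Ask the LLM to propose a discharge note draft for physician review and editing."""
    payload = {
        "model": "llama-3.1",  # assumed model identifier on the local server
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": clinical_facts},
        ],
        "temperature": 0.2,  # low temperature to favor conservative, source-faithful drafts
    }
    response = requests.post(LLM_URL, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```

In the workflow described above, such a draft would never be filed directly; it is presented to the attending physician for review and editing before entering the record.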

Results

No significant difference was observed in the total PDQI-9 scores between physician-drafted and AI-generated weekly summaries and discharge notes (p = 0.129 and p = 0.873, respectively). However, the total PDQI-9 scores for admission notes differed significantly between physicians and the AI (p = 0.004). Significant differences were also found at the item level between physician and AI notes. After deployment of the note-assistance function in our hospital, its use gradually increased.
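The abstract does not name the statistical test behind the reported p-values. As a purely illustrative sketch, PDQI-9 total scores from the two note sources could be compared with a nonparametric test such as the Mann-Whitney U test; the scores below are made-up placeholders, not study data.

```python
from scipy import stats

# Hypothetical PDQI-9 total scores for illustration only (not the study's data).
physician_totals = [40, 38, 42, 36, 41, 39, 37, 43]
ai_totals = [39, 40, 41, 38, 42, 40, 36, 41]

# Two-sided Mann-Whitney U test comparing the two groups of total scores.
stat, p_value = stats.mannwhitneyu(physician_totals, ai_totals, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```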

Conclusion

LLMs show considerable promise for enhancing the efficiency and quality of medical record summaries. Successful integration of LLM-assisted documentation requires regular quality assessments, continuous support, and training. Implementing LLM assistance can allow clinical staff to concentrate on higher-value tasks, potentially improving overall health care delivery.

Protection of Human and Animal Subjects

This study was performed in compliance with the World Medical Association Declaration of Helsinki on ethical principles for medical research involving human subjects and was reviewed by the Institutional Review Board of Taichung Veterans General Hospital (approval number: CE24503B). Informed consent was obtained from all the participants.


Note

The authors confirm that all figures presented in this work have been fully anonymized. They contain no information that could be used to identify individual patients, including but not limited to names, initials, dates, medical record numbers, or institutional identifiers. Furthermore, no third-party copyrighted material has been included in the figures.




Publication History

Received: 16 April 2025

Accepted: 22 September 2025

Accepted Manuscript online:
25 September 2025

Article published online:
29 October 2025

© 2025. Thieme. All rights reserved.

Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany