Appl Clin Inform
DOI: 10.1055/a-2327-4121
Research Article

Evaluation of a Digital Scribe: Conversation Summarization for Emergency Department Consultation Calls

Emre Sezgin¹, Joseph Winstead Sirrianni², Kelly Kranz³

1   Nationwide Children's Hospital, Columbus, United States
2   IT Research Innovation - Data Science, Nationwide Children's Hospital, Columbus, United States
3   Nationwide Children's Hospital, Columbus, United States
Supported by: National Center for Advancing Translational Sciences (grant UM1TR004548)

Objective: We present a proof-of-concept digital scribe system, a clinical conversation summarization pipeline for Emergency Department (ED) consultation calls, to support clinical documentation, and report its performance.

Materials and Methods: We use four pre-trained large language models to build the digital scribe system: T5-small, T5-base, PEGASUS-PubMed, and BART-Large-CNN, via zero-shot and fine-tuning approaches. Our dataset includes 100 referral conversations among ED clinicians and the corresponding medical records. We report ROUGE-1, ROUGE-2, and ROUGE-L scores to compare model performance. In addition, we annotated the transcriptions to assess the quality of the generated summaries.

Results: The fine-tuned BART-Large-CNN model shows the strongest summarization performance, with the highest ROUGE scores (ROUGE-1 F1 = 0.49, ROUGE-2 F1 = 0.23, ROUGE-L F1 = 0.35). In contrast, PEGASUS-PubMed lags notably (ROUGE-1 F1 = 0.28, ROUGE-2 F1 = 0.11, ROUGE-L F1 = 0.22). BART-Large-CNN's performance drops by more than 50% under the zero-shot approach. Annotations show that BART-Large-CNN achieves 71.4% recall in identifying key information and a 67.7% accuracy rate.

Discussion: The BART-Large-CNN model demonstrates a strong grasp of clinical dialogue structure, indicated by its performance both with and without fine-tuning. Despite instances of high recall, the model's performance is variable, particularly in achieving consistent correctness, suggesting room for refinement. Its recall also varies across information categories.

Conclusion: The study provides evidence of the potential of AI-assisted tools to support clinical documentation. Future work should expand the scope to additional language models and hybrid approaches, and include comparative analyses measuring documentation burden and human factors.
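The article does not publish its source code. As a minimal sketch of the zero-shot arm of the pipeline described above, the snippet below runs the four named models through the Hugging Face `transformers` summarization pipeline; the checkpoint identifiers are the public counterparts of the models in the abstract, and the transcript, generation lengths, and preprocessing are placeholder assumptions, not the authors' settings.

```python
# Hypothetical zero-shot summarization sketch (not the authors' code).
from transformers import pipeline

# Public Hugging Face checkpoints corresponding to the four models evaluated.
CHECKPOINTS = [
    "t5-small",
    "t5-base",
    "google/pegasus-pubmed",
    "facebook/bart-large-cnn",
]

# Placeholder: a de-identified ED consultation call transcript.
transcript = "Caller: ED calling about a 6-year-old with fever and rash. Consultant: ..."

for name in CHECKPOINTS:
    summarizer = pipeline("summarization", model=name)
    # Generation lengths are assumptions; the paper does not report decoding settings.
    result = summarizer(transcript, max_length=128, min_length=32, do_sample=False)
    print(f"{name}: {result[0]['summary_text']}")
```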
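The ROUGE-1/ROUGE-2/ROUGE-L comparison reported in the Results could be reproduced with the Hugging Face `evaluate` wrapper around `rouge_score`; the paper does not state which implementation it used, so this is a sketch under that assumption, with placeholder predictions and references.

```python
# Hypothetical ROUGE evaluation sketch (tooling assumed, not stated in the paper).
import evaluate

rouge = evaluate.load("rouge")

# Placeholders: model-generated summaries and reference summaries from medical records.
predictions = ["6yo with fever and rash, referred for possible Kawasaki disease."]
references = ["Six-year-old with five days of fever and rash; rule out Kawasaki disease."]

scores = rouge.compute(predictions=predictions, references=references)
# `evaluate`'s ROUGE returns aggregated F1 scores keyed by metric name.
print(scores["rouge1"], scores["rouge2"], scores["rougeL"])
```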



Publication History

Received: 08 January 2024

Accepted after revision: 14 May 2024

Accepted Manuscript online: 15 May 2024

© Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany