Methods Inf Med 2017; 56(03): 248-260
DOI: 10.3414/ME16-01-0091
Paper
Schattauer GmbH

Increasing the Efficiency on Producing Radiology Reports for Breast Cancer Diagnosis by Means of Structured Reports[*]

A Comparative Study
J. Damian Segrelles
1   Instituto de Instrumentación para Imagen Molecular (I3M), Centro mixto CSIC – Universitat Politècnica de València, Valencia, Spain
,
Rosana Medina
2   Department of Radiology, University Hospital Dr. Peset, Valencia, Spain
,
Ignacio Blanquer
1   Instituto de Instrumentación para Imagen Molecular (I3M), Centro mixto CSIC – Universitat Politècnica de València, Valencia, Spain
3   GIBI230 – Biomedical Imaging Research Group, La Fe University and Polytechnic Hospital, Valencia, Spain
,
Luis Martí-Bonmatí
3   GIBI230 – Biomedical Imaging Research Group, La Fe University and Polytechnic Hospital, Valencia, Spain
4   Medical Imaging Department, La Fe University and Polytechnic Hospital, Valencia, Spain
Funding INDIGO – DataCloud receives funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement RIA 653549.

Publication History

Received: 09 August 2016

Accepted in revised form: 09 January 2017

Publication date: 24 January 2018 (online)

Summary

Background: Radiology reports are commonly written as free text using voice recognition devices. Structured reports (SRs) have high potential, but they are usually considered more difficult to fill in, so their adoption in clinical practice is perceived to reduce efficiency. However, some studies have shown that, in certain cases, producing an SR may take less time than producing a plain-text report. This work focuses on defining and demonstrating a methodology to evaluate the productivity of software tools for producing radiology reports. A set of SR templates for breast cancer diagnosis based on BI-RADS was developed using this method, and their efficiency with respect to free-text reports was analysed.

Material and Methods: The proposed methodology compares the Elapsed Time (ET) needed to produce a set of radiological reports. Free-text reports were produced with the speech recognition devices used in clinical practice; structured reports were generated with a web application built on the TRENCADIS framework. A team of six radiologists with three different levels of experience in breast cancer diagnosis was recruited. Each radiologist produced 50 reports for mammography, 50 for ultrasound and 50 for MRI using both approaches. In addition, the Relative Efficiency (REF) was computed for each report as the ratio of the ETs obtained with the two methods. Student's t-test was applied to compare the ETs and the ANOVA test to compare the REFs; both tests were computed with the SPSS software.
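The REF and ET comparisons described above can be sketched in plain Python. The timings below are hypothetical, the study itself used SPSS, and a paired Student's t-test on per-report differences is assumed here (the abstract does not state the paired/unpaired variant or the direction of the REF ratio):

```python
import math
from statistics import mean, stdev

# Hypothetical elapsed times (seconds) for the same five reports,
# once dictated as free text and once filled in as structured reports (SR).
et_free = [120, 95, 150, 110, 130]
et_sr = [100, 90, 120, 100, 115]

# Relative Efficiency (REF): here taken as free-text ET over SR ET per
# report, so REF > 1 means the structured report was produced faster.
ref = [f / s for f, s in zip(et_free, et_sr)]

# Paired Student's t statistic on the per-report ET differences.
diffs = [f - s for f, s in zip(et_free, et_sr)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))

print([round(r, 2) for r in ref])  # per-report REF values
print(round(t, 2))                 # paired t statistic
```

With a t statistic in hand, the p-value follows from the t distribution with n − 1 degrees of freedom, which is what a package such as SPSS (or SciPy) reports directly.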

Results: The study produced three DICOM-SR templates for breast cancer diagnosis on mammography, ultrasound and MRI, using RadLex terms based on the BI-RADS 5th edition. For radiologists with a high or intermediate profile, Student's t-test showed that the difference in ET was statistically significant only for mammography and ultrasound. The ANOVA test grouping the REF by modality indicated no significant differences between mammograms and ultrasound scans, but both differed significantly from MRI. The ANOVA test of the REF within each modality indicated significant differences only in mammography (ANOVA p = 0.024) and ultrasound (ANOVA p = 0.008). The ANOVA test for each radiologist profile indicated significant differences for the high (ANOVA p = 0.028) and medium (ANOVA p = 0.045) profiles.
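The one-way ANOVA used to compare REF across modalities can be illustrated with a minimal F-statistic computation in pure Python (the REF values below are hypothetical; the actual tests were run in SPSS):

```python
from statistics import mean

# Hypothetical per-report REF values grouped by modality.
groups = {
    "mammography": [1.3, 1.2, 1.4],
    "ultrasound": [1.25, 1.35, 1.3],
    "mri": [0.9, 1.0, 0.95],
}

all_values = [v for g in groups.values() for v in g]
grand_mean = mean(all_values)
k = len(groups)           # number of groups (modalities)
n_total = len(all_values)

# Between-group sum of squares: how far each group mean sits from the
# grand mean, weighted by group size.
ssb = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups.values())
# Within-group sum of squares: spread of values inside each group.
ssw = sum((v - mean(g)) ** 2 for g in groups.values() for v in g)

# One-way ANOVA F statistic: between-group variance over within-group
# variance, with k - 1 and n_total - k degrees of freedom.
f_stat = (ssb / (k - 1)) / (ssw / (n_total - k))
print(round(f_stat, 2))
```

A large F (relative to the F distribution with those degrees of freedom) corresponds to a small p-value, i.e. the group means are unlikely to be equal, which mirrors the mammography/ultrasound vs. MRI split reported above.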

Conclusions: In this work, we have defined and demonstrated a methodology to evaluate the productivity of software tools for producing radiology reports in breast cancer. Our evaluation shows that adopting structured reporting for mammography and ultrasound studies in breast cancer diagnosis improves report-production performance.

* Supplementary material published on our website https://doi.org/10.3414/ME16-01-0091


 