Z Orthop Unfall 2020; 158(S 01): S142
DOI: 10.1055/s-0040-1717495
Poster
DKOU20-670 Basic Research > 28. Imaging - Navigation - Robotics

Maintaining the spatial relation to improve deep-learning-assisted diagnosis for magnetic resonance imaging of the knee

W Nikolas 1, L Jan 1, M Carina 1, Rüdiger von Eisenhart-Rothe 1, B Rainer 1

*   = presenting author

1   Klinik für Orthopädie und Sportorthopädie, Klinikum rechts der Isar, Technische Universität München, München
 

Objectives For the diagnosis of knee injuries, Magnetic Resonance Imaging (MRI) is the preferred approach. However, interpretation is time-consuming and subject to diagnostic error. Since image-based deep learning analysis has improved significantly in recent years, it can serve as a useful tool to reduce both the time required for diagnosis and the error rate. To support the development of such methods, new training datasets have been released, such as the MRNet dataset [1]. This dataset contains 1,370 knee MRI exams with labels for general abnormalities and for specific diagnoses of anterior cruciate ligament (ACL) and meniscal tears.

Bien et al. developed a baseline for diagnosis with promising results [1]. They use a transfer learning approach based on AlexNet [2], a two-dimensional convolutional neural network (CNN), which transforms each slice of an MRI scan into a feature vector. The information obtained across all slices is then compressed by a MaxPool operation, so that only the maximum values over all slices remain. As a result, the approach lacks a three-dimensional understanding of the data: a permutation of the slices would not change the output.
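For illustration only, a minimal PyTorch-style sketch of such a slice-wise baseline is given below; the class name, input shapes and the final linear layer are assumptions for this sketch, not the exact implementation of Bien et al. [1].

```python
import torch
import torch.nn as nn
from torchvision import models


class MRNetBaselineSketch(nn.Module):
    """Sketch of an MRNet-style baseline: a pretrained 2D CNN encodes each
    slice, and a max over the slice dimension collapses the exam to one vector."""

    def __init__(self):
        super().__init__()
        # Pretrained 2D AlexNet as per-slice feature extractor (transfer learning)
        self.backbone = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(256, 1)

    def forward(self, x):
        # x: (num_slices, 3, H, W) -- one MRI exam, slices stacked along dim 0
        feats = self.pool(self.backbone(x)).flatten(1)   # (num_slices, 256)
        # Max over the slice dimension: the order of slices no longer matters
        exam_feat, _ = feats.max(dim=0)                  # (256,)
        return self.classifier(exam_feat)                # scalar logit per exam
```

Because the maximum is taken over the slice axis, shuffling the slices of an exam leaves the prediction unchanged, which is exactly the loss of spatial relation described above.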

To improve the interpretability and accuracy of the existing baseline, a new architecture is developed with the goal of maintaining the spatial information between slices.

Methods A straightforward solution to maintain the three-dimensional relation is to increase the network dimension from two to three. However, training a three-dimensional CNN from scratch is not feasible here, as the training dataset is too small for generalisation and the inconsistent number of slices per scan results in poor data efficiency.

Instead, the existing approach of Bien et al. [1] can be extended by replacing the MaxPool operation with a tool from time-series prediction, the Gated Recurrent Unit (GRU) [3]. The GRU can memorize past information, in this case previously seen slices, and thereby build up a more sophisticated understanding of the exam. The modified architecture is shown in the attached figure, and a code sketch follows below.

Fig. 1 Visualization of the modified architecture of MRNet [1]. The change is marked in red: the MaxPool layer is replaced by the GRU layer [3]. Dimensions are annotated below.
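A minimal sketch of this modification, again in PyTorch style and under the same assumptions as the baseline sketch (class name and hidden size are illustrative, not the published configuration): the per-slice feature vectors are fed to the GRU in slice order, and its final hidden state summarizes the whole exam.

```python
import torch
import torch.nn as nn
from torchvision import models


class MRNetGRUSketch(nn.Module):
    """Sketch of the modified architecture: per-slice features are processed
    by a GRU in slice order, so the relation between slices is preserved."""

    def __init__(self, hidden_size=256):
        super().__init__()
        self.backbone = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        # The GRU replaces the MaxPool over slices
        self.gru = nn.GRU(input_size=256, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (num_slices, 3, H, W) -- one exam, slices in anatomical order
        feats = self.pool(self.backbone(x)).flatten(1)    # (num_slices, 256)
        # Treat the slice axis as a sequence: (batch=1, num_slices, 256)
        _, h_n = self.gru(feats.unsqueeze(0))
        # h_n: final hidden state, (1, 1, hidden_size), summarizing the exam
        return self.classifier(h_n.squeeze(0).squeeze(0)) # scalar logit per exam
```

In contrast to the MaxPool baseline, the output of this sketch does depend on the order of the slices, which is what is meant by maintaining the spatial relation.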

Results and Conclusion The new architecture requires a longer training time, as more parameters need to be trained.

However, these changes prove beneficial: compared to the baseline by Bien et al. [1], the accuracies on the validation dataset increase to 96.9 % for abnormalities (+3.2 %), 98.5 % for ACL tears (+2.0 %) and 89.9 % for meniscal tears (+5.2 %).

Furthermore, the preserved spatial relation can be used for increased interpretability: the activation mappings can be visualized for the whole MRI exam rather than for each slice separately, which helps to focus attention on the regions most relevant for the diagnosis.
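How the activation mappings are computed is not specified in this abstract; as a hedged illustration, a generic class-activation-map-style sketch could weight the per-slice backbone feature maps and normalize the resulting heat maps jointly over the whole exam rather than per slice. The function name and its inputs below are hypothetical.

```python
import torch


def cam_per_slice(feature_maps, channel_weights):
    """Hypothetical sketch: project channel weights back onto per-slice
    feature maps and normalize the heat maps over the whole exam.

    feature_maps:    (num_slices, C, h, w) -- backbone output before pooling
    channel_weights: (C,)                  -- per-channel importance weights
    """
    # Weighted sum over channels gives one heat map per slice
    cams = torch.einsum('schw,c->shw', feature_maps, channel_weights)
    # Joint normalization over the exam keeps the slices comparable,
    # instead of rescaling each slice separately
    cams = (cams - cams.min()) / (cams.max() - cams.min() + 1e-8)
    return cams  # (num_slices, h, w)
```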

Keywords Assisted Diagnosis; Knee; MRI; Meniscus Tear; ACL Tear; Deep Learning



Publication History

Article published online:
15 October 2020

© 2020. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany