CC BY-NC-ND 4.0 · Yearb Med Inform 2021; 30(01): 155-156
DOI: 10.1055/s-0041-1726527
Section 4: Sensor, Signal and Imaging Informatics
Best Paper Selection


Gemein LAW, Schirrmeister RT, Chrabąszcz P, Wilson D, Boedecker J, Schulze-Bonhage A, Hutter F, Ball T. Machine-learning-based diagnostics of EEG pathology. https://www.sciencedirect.com/science/article/pii/S1053811920305073?via%3Dihub

Karimi D, Dou H, Warfield SK, Gholipour A. Deep learning with noisy labels: Exploring techniques and remedies in medical image analysis. https://www.sciencedirect.com/science/article/abs/pii/S1361841520301237?via%3Dihub

Langner T, Strand R, Ahlström H, Kullberg J. Large-scale biometry with interpretable neural network regression on UK Biobank body MRI. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7576214/

Saito H, Aoki T, Aoyama K, Kato Y, Tsuboi A, Yamada A, Fujishiro M, Oka S, Ishihara S, Matsuda T, Nakahori M, Tanaka S, Koike K, Tada T. Automatic detection and classification of protruding lesions in wireless capsule endoscopy images based on a deep convolutional neural network. https://www.giejournal.org/article/S0016-5107(20)30132-2/fulltext



Appendix 1: Content Summaries of Selected Best Papers for the 2021 IMIA Yearbook, Section Sensors, Signals, and Imaging Informatics (CB)

Gemein LAW, Schirrmeister RT, Chrabąszcz P, Wilson D, Boedecker J, Schulze-Bonhage A, Hutter F, Ball T

Machine-learning-based diagnostics of EEG pathology

Neuroimage 2020 Oct 15;220:117021

The analysis of clinical electroencephalograms (EEGs) is a time-consuming and demanding process that requires years of training. Algorithms for automatic EEG diagnosis, such as machine learning (ML) methods, could therefore be of tremendous benefit to clinicians. In this work, end-to-end decoding with deep neural networks was compared with feature-based decoding using a large set of hand-crafted features. Approximately 3,000 recordings from the Temple University Hospital EEG Corpus (TUEG), the largest publicly available collection of EEG recordings to date, were used. For feature-based pathology decoding, Random Forest (RF), Support Vector Machine (SVM), Riemannian geometry (RG), and auto-sklearn (ASC) classifiers were used, while three convolutional neural network (CNN) architectures were applied for end-to-end pathology decoding: the 4-layer Braindecode Deep4 ConvNet (BD-Deep4), the Braindecode Shallow ConvNet (BD-Shallow), and a temporal convolutional network (TCN). The main result of this study is that EEG pathology decoding accuracy lies in a narrow range of 81-86% across a wide range of analysis strategies, network architectures, feature-based classifiers and ensembles, and datasets. Based on the feature visualizations, features extracted in the theta and delta frequency ranges at temporal electrode positions were considered informative. Feature correlation analysis showed strong correlations between features extracted at different electrode positions. Besides the finding that there is no statistical evidence that the deep neural networks studied perform better than the feature-based approach, this work shows that a somewhat elaborate feature-based approach can achieve decoding results similar to those of deep end-to-end methods. The authors recommend decoding specific pathologies to limit the consequences of label noise in EEG pathology decoding. This work provides a remarkable and objective comparison between deep learning and feature-based methods based on numerous experiments, including cross-validation, bootstrapping, and input-signal perturbation strategies.
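
As an illustration of the feature-based branch of such a comparison, the following minimal Python sketch extracts band-power features per channel and feeds them to a Random Forest classifier. It is not the authors' pipeline; the sampling rate, channel count, frequency bands, and synthetic data are all assumptions made for illustration.

    # Minimal sketch of feature-based EEG pathology decoding:
    # per-channel band-power features followed by a Random Forest.
    # All data below are synthetic stand-ins for real recordings.
    import numpy as np
    from scipy.signal import welch
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    fs = 100                                 # sampling rate in Hz (assumed)
    n_rec, n_ch, n_samp = 200, 21, 6000      # recordings x channels x samples (synthetic)
    X_raw = rng.standard_normal((n_rec, n_ch, n_samp))
    y = rng.integers(0, 2, n_rec)            # 0 = normal, 1 = pathological (synthetic labels)

    bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_power_features(rec):
        # Welch power spectral density per channel, averaged within each band.
        f, psd = welch(rec, fs=fs, nperseg=fs * 2, axis=-1)
        feats = []
        for lo, hi in bands.values():
            mask = (f >= lo) & (f < hi)
            feats.append(np.log(psd[:, mask].mean(axis=-1) + 1e-12))
        return np.concatenate(feats)         # one feature per channel and band

    X = np.stack([band_power_features(r) for r in X_raw])
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())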

Karimi D, Dou H, Warfield SK, Gholipour A

Deep learning with noisy labels: Exploring techniques and remedies in medical image analysis

Med Image Anal 2020 Oct;65:101759

Label noise is unavoidable in many medical image datasets. It can be caused by limited attention or expertise of the human annotator, the subjective nature of labeling, or errors in computerized labeling systems. This is especially concerning for medical applications, where datasets are typically small, labeling requires domain expertise and suffers from high inter- and intra-observer variability, and erroneous predictions may influence decisions that directly impact human health. The authors reviewed the state of the art in label-noise handling in deep learning and investigated how these methods have been applied to medical image analysis. Their key recommendations for accounting for label noise are: label cleaning and pre-processing, adaptations of network architectures, the use of label-noise-robust loss functions, data re-weighting, label consistency checks, and the choice of training procedures. They underpin their findings with experiments on three medical datasets in which label noise was introduced by the systematic error of a human annotator, by inter-observer variability, or by noise generated from an algorithm. Their results argue for careful curation of the data used to train deep learning algorithms for medical image analysis. Furthermore, the authors recommend integrating label-noise analyses into the development process for robust deep learning models.
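
To make the notion of a label-noise-robust loss function concrete, the following minimal PyTorch sketch implements the generalized cross-entropy loss of Zhang and Sabuncu (2018) as one example of the kind of remedy surveyed in this literature. It is purely illustrative and not taken from the review.

    # Generalized cross-entropy (GCE): interpolates between cross-entropy (q -> 0)
    # and the noise-tolerant mean absolute error (q = 1). Illustrative sketch only.
    import torch
    import torch.nn.functional as F

    def generalized_cross_entropy(logits, targets, q=0.7):
        # logits: (N, C) raw scores; targets: (N,) integer class labels.
        probs = F.softmax(logits, dim=1)
        p_true = probs.gather(1, targets.unsqueeze(1)).squeeze(1).clamp_min(1e-7)
        return ((1.0 - p_true.pow(q)) / q).mean()

    # Tiny usage example on random data.
    logits = torch.randn(8, 2, requires_grad=True)
    targets = torch.randint(0, 2, (8,))
    loss = generalized_cross_entropy(logits, targets)
    loss.backward()
    print(float(loss))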

Langner T, Strand R, Ahlström H, Kullberg J

Large-scale biometry with interpretable neural network regression on UK Biobank body MRI

Sci Rep 2020 Oct 20;10(1):17752

This work presents a novel neural network approach for image-based regression to infer 64 biological metrics (beyond age) from neck-to-knee body MRIs, with relevance for cardiovascular and metabolic diseases. Image data were collected from the UK Biobank study and linked to extensive metadata comprising non-imaging properties such as measurements of body composition from dual-energy X-ray absorptiometry (DXA) imaging, patient-related parameters such as age, sex, height, and weight, and additional biomarkers of cardiac and metabolic health, including pulse rate, liver fat, and grip strength. The authors adapted and optimized a previously presented regression pipeline for age estimation using a ResNet50 architecture that requires neither manual intervention nor direct access to reference segmentations. Based on 31,172 magnetic resonance imaging (MRI) scans, the neural network was trained and cross-validated on simplified, two-dimensional representations of the MR images and evaluated via generated predictions and saliency maps for all examined properties. The work is noteworthy for its extensive validation of both the whole framework and the predictions, demonstrating robust performance and outperforming a linear regression baseline in all examined cases. Saliency analysis showed that the trained neural network accurately targets specific body regions, organs, and limbs of interest; the network can emulate measurements from different modalities, including DXA or atlas-based MRI segmentation, and on average correctly targets specific structures on either side of the body. The authors impressively demonstrate how convolutional neural network regression can be applied effectively in MRI and offer a first valuable, fully automated approach for measuring a wide range of important biological metrics from a single neck-to-knee body MRI.
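
The general idea of ResNet50-based image regression combined with a gradient saliency map can be illustrated with the minimal PyTorch sketch below. The random input stands in for a two-dimensional MRI representation, and the head, preprocessing, and saliency variant are assumptions for illustration rather than the authors' pipeline.

    # CNN regression with a ResNet50 backbone plus a gradient saliency map (sketch).
    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    model = resnet50(weights=None)                 # no pretraining assumed here
    model.fc = nn.Linear(model.fc.in_features, 1)  # single continuous target (e.g., age)
    model.eval()

    image = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for a 2D MRI projection
    prediction = model(image)

    # Gradient-based saliency: magnitude of d(prediction)/d(input) per pixel.
    prediction.sum().backward()
    saliency = image.grad.abs().amax(dim=1)        # (1, 224, 224) map over color channels
    print(prediction.item(), saliency.shape)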

Saito H, Aoki T, Aoyama K, Kato Y, Tsuboi A, Yamada A, Fujishiro M, Oka S, Ishihara S, Matsuda T, Nakahori M, Tanaka S, Koike K, Tada T

Automatic detection and classification of protruding lesions in wireless capsule endoscopy images based on a deep convolutional neural network

Gastrointest Endosc 2020 Jul;92(1):144-151.e1

Wireless capsule endoscopy (WCE) is an established examination method for the diagnosis of small-bowel diseases. Reading WCE recordings takes a physician 1 to 2 hours on average for a correct diagnosis, and automated detection and classification of protruding lesions of various types in WCE images remains challenging. In this work, a deep neural network architecture, the single shot multibox detector (SSD) based on a deep convolutional neural network (CNN) with 16 or more layers, was trained on 30,584 WCE images from 292 patients collected from multiple centers and tested on an independent set of 17,507 images from 93 patients, including 7,507 images of protruding lesions from 73 patients. All regions showing protruding lesions were manually annotated by six independent expert endoscopists, representing the ground truth for training the network. The CNN's performance was evaluated by ROC analysis, revealing an AUC of 0.911, a sensitivity of 90.7%, and a specificity of 79.8% at the optimal cut-off value of 0.317 for the probability score. In a subanalysis by category of protruding lesion, the sensitivities ranged from 77.0% to 95.8% for the detection of polyps, nodules, epithelial tumors, submucosal tumors, and venous structures. In individual patient analyses, the detection rate of protruding lesions was 98.6%. The rates of concordance between the labeling by the CNN and that of three expert endoscopists were between 42% and 83% for the different morphological structures. A false-positive/false-negative error analysis was reported, indicating some limitations of the current approach in terms of the imbalanced number of cases, color diversity, and variation of structures in the images. The work is notable for the clinical applicability of this new computer-aided system, which achieves good diagnostic performance in detecting protruding lesions in small-bowel capsule endoscopy.
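
The reported operating point can be illustrated with a minimal Python sketch that computes the AUC, sensitivity, and specificity from per-image probability scores at a fixed cut-off. The scores and labels below are synthetic; only the 0.317 cut-off is taken from the paper, so the printed numbers will not match the reported results.

    # ROC-style evaluation of per-image probability scores at a fixed cut-off (sketch).
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, 1000)                                    # 1 = protruding lesion present
    scores = np.clip(0.35 * y_true + rng.normal(0.3, 0.2, 1000), 0, 1)   # synthetic CNN probability scores

    cutoff = 0.317
    y_pred = scores >= cutoff
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))

    print("AUC:", roc_auc_score(y_true, scores))
    print("Sensitivity:", tp / (tp + fn))
    print("Specificity:", tn / (tn + fp))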



No conflict of interest has been declared by the author(s).

Publication History

Article published online:
03 September 2021

© 2021. IMIA and Thieme. This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany