Rofo 2022; 194(06): 605-612
DOI: 10.1055/a-1718-4128
Review

Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities

Thomas Küstner
1   Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany

Tobias Hepp
1   Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany

Ferdinand Seith
2   Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany

Abstract

Background Machine learning (ML) is considered an important technology for future data analysis in health care.

Methods The inherently technology-driven fields of diagnostic radiology and nuclear medicine will both benefit from ML in terms of image acquisition and reconstruction. Within the next few years, this will lead to accelerated image acquisition, improved image quality, a reduction of motion artifacts and – for PET imaging – reduced radiation exposure and new approaches for attenuation correction. Furthermore, ML has the potential to support decision making by a combined analysis of data derived from different modalities, especially in oncology. In this context, we see great potential for ML in multiparametric hybrid imaging and the development of imaging biomarkers.

Results and Conclusion In this review, we will describe the basics of ML, present approaches in hybrid imaging of MRI, CT, and PET, and discuss the specific challenges associated with it and the steps ahead to make ML a diagnostic and clinical tool in the future.

Key Points:

  • ML provides a viable clinical solution for the reconstruction, processing, and analysis of hybrid imaging obtained from MRI, CT, and PET.

Citation Format

  • Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Fortschr Röntgenstr 2022; 194: 605 – 612






Background

Machine learning (ML) has become ubiquitous and is considered an important technology for future health care data analysis. Scientific interest is high: the number of publications listed on PubMed for the search query “machine learning” has increased exponentially from 57 in the year 2000 to 16 722 in 2020. In 2017, more than 500 clinical research projects applying ML techniques received total funding of 264 million USD from the National Institutes of Health [1], and new journals such as Nature Machine Intelligence were founded to pool expertise and build platforms for the increasing number of submitted papers.

Radiology is inherently technology-driven: data processing has been an integral element of the field since the introduction of a standard for archiving and transferring imaging data in 1985 [2]. For decades, the main focus of imaging research has been improving image quality, reducing radiation exposure, accelerating image acquisition, or – for nuclear medicine – developing new tracers. Although ML has the potential to accelerate image acquisition [3] or support attenuation correction in positron emission tomography (PET) [4] in multiparametric and hybrid imaging, a substantial innovation of ML is its support of image interpretation, i. e., diagnostic decision support [5] [6], across all imaging modalities.

Oncologic imaging is a major field of application for ML. Cancer is one of the leading causes of death worldwide, treatment options are evolving rapidly, and over the last two decades health spending on cancer has increased faster than cancer incidence (the total cost of cancer in Europe was €199 billion in 2018 [7]). Although new biomarkers such as circulating cell-free tumor DNA (i. e., liquid biopsy) are on the rise [8], imaging plays a crucial role in therapy planning and response assessment in clinical trials. Despite the broad spectrum of available (semi-)quantitative imaging techniques in radiology and tracers in nuclear medicine, accepted criteria that define cancer or evaluate treatment response based on more than one imaging parameter of a tumor (e. g., diameter in RECIST or glucose consumption in PERCIST) are rare (e. g., PI-RADS). Multiparametric oncologic hybrid imaging, i. e., the simultaneous acquisition of anatomical information and (several) functional tissue parameters using two different scan technologies (PET combined with computed tomography, PET/CT, or magnetic resonance imaging, PET/MRI), aims to provide deeper insight into tumor biology. Nonetheless, being superior to conventional imaging is a challenging task: it requires technical knowledge of all imaging modalities and the pitfalls of their combination, as well as an understanding of cancer biology and the specific mechanism of action of the applied therapy. ML extracts imaging features and has the potential to support the interpretation and acceptance of multiparametric oncologic imaging by providing new biomarkers of clinical relevance. In the present review, we discuss techniques for image preparation, automated lesion segmentation, and data analysis. Finally, we discuss possible approaches to overcome current limitations.



Machine learning basics

Different types of tasks can be solved using ML, such as image segmentation (e. g., delineation of lesions), image classification (e. g., benign versus malignant lesions), and regression tasks (e. g., estimation of lesion permeability). The main purpose of ML is to train a mathematical model that, based on the provided data, learns a representation with respect to the underlying task, as shown in [Fig. 1]. The learning model can be any (non-)linear parametric model that maps the inputs to the model outputs. The mapping function can be, for example, a neural network (or any other parametric model) whose parameters are optimized under a given cost function. The cost function, also known as the error or loss function, is a quantitative measure of the match between the model’s predicted output and the desired target (depending on the type of learning).
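
To make these ingredients concrete, the following is a minimal sketch of a parametric model optimized under a loss function, assuming PyTorch as the framework (the review itself is framework-agnostic; the data, layer sizes, and two-class task are purely illustrative):

```python
# Minimal sketch of the components described above: a parametric model,
# a cost/loss function, and iterative parameter optimization (PyTorch
# is an assumed framework choice; data and sizes are illustrative).
import torch
import torch.nn as nn

model = nn.Sequential(              # a simple (non-)linear parametric mapping
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 2),              # e. g., benign vs. malignant logits
)
loss_fn = nn.CrossEntropyLoss()     # quantifies prediction/target match
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 64)             # hypothetical input feature vectors
y = torch.randint(0, 2, (32,))      # hypothetical target labels

for step in range(100):             # iterative optimization of the parameters
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)     # mismatch between prediction and target
    loss.backward()                 # gradients of the loss w.r.t. parameters
    optimizer.step()
```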

Fig. 1 Overview of the machine learning (ML) training, validation, and test phases. During training, the model learns its parameters from a pool of (labeled) data, depending on the type of learning (Fig. 2). ML hyperparameters are optimized on a separate validation set. The best trained model is applied during inference/testing on new, unseen data. The input data can be images or pre-processed features, with the output (classification or regression) depending on the underlying application.

The input to the model can be an image, numerical information, or any pre-processed representation of these inputs from previous feature extraction steps, such as the features used in radiomics [9]. The output varies depending on the respective task. While regression tasks, e. g., image reconstruction or image-to-image translation, require a continuous-valued output per voxel, classification tasks, e. g., image segmentation or treatment response prediction, provide a discrete-valued output on a global (whole image) or local (patch or voxel) scale.

During the training process, ML models are provided evidence in the form of data samples so that the model parameters can be learned to predict a reasonable output for any new (unseen) input. During testing, the model is fixed, and the trained parameters generate the output from new, unseen test data. The models perform a sequence of operations on the inputs to yield the task-specific output. Their main aim is to generalize based on the learned experience, i. e., to minimize the associated empirical risk. The models are usually trained and tuned iteratively on a training and a validation set, with the final trained model being applied to unseen test data. The training, validation, and test sets are disjoint sets, ideally with distinct patients, to minimize bias.
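
As an illustration of the patient-disjoint splitting described above, the sketch below uses scikit-learn’s GroupShuffleSplit (an assumed tool choice; the feature arrays and patient IDs are hypothetical placeholders):

```python
# Sketch of a patient-disjoint train/validation/test split (60/20/20),
# as suggested above to minimize bias. GroupShuffleSplit keeps all
# samples of one patient within a single set. Data are placeholders.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

X = np.random.rand(100, 10)                # 100 samples, 10 features each
y = np.random.randint(0, 2, 100)           # task-specific labels
patients = np.random.randint(0, 20, 100)   # patient ID per sample

outer = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
trainval_idx, test_idx = next(outer.split(X, y, groups=patients))

inner = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, val_idx = next(inner.split(X[trainval_idx], y[trainval_idx],
                                      groups=patients[trainval_idx]))

# No patient appears in both the training and the validation set.
assert not (set(patients[trainval_idx][train_idx])
            & set(patients[trainval_idx][val_idx]))
```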

Types of learning can be differentiated based on the availability of label information and the type of label integration during training, as illustrated in [Fig. 2]. In principle, this depends on whether and how a human observer is involved during training. In supervised learning, data samples exist in the database along with their task-specific labels. Labeling can be very time- and cost-intensive and often requires human interaction, ranging from data sorting and curation to annotating structures within the image. In semi-supervised learning, both labeled and unlabeled data are included in the learning process; the unlabeled data provide additional information about the underlying data distribution. Self-supervised learning circumvents the problem of external labels: the input data itself is used to guide the learning. In a similar sense, in reinforcement learning the model receives feedback as rewards or penalties based on its current prediction, which drives the training procedure. Active learning integrates an oracle into the training procedure, which is periodically queried to either label or select the next most meaningful samples for training. The oracle is in most cases a human observer but can also be another algorithm. In unsupervised learning, no labeled data is available that could be leveraged to guide the training; the network purely learns to identify patterns in the data. Common approaches are clustering [10] (e. g., k-means or Gaussian mixture models), principal component analysis (PCA), (variational) autoencoders [11] [12], deep belief networks [13], and generative adversarial networks (GANs) [14]. Transfer learning investigates the possibility of transferring knowledge between models or tasks; it can involve sharing information from simpler to more complex tasks or from a source domain to another (but similar) target domain [15] [16] [17]. Federated learning [18] [19] [20] trains a model across multiple decentralized devices, where each device holds its own set of training data and only the model weights are shared across devices. This allows training across highly heterogeneous datasets. Federated learning has vast potential in medicine, where sharing data across multiple centers is challenging due to data protection and data privacy [21].
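
As a small example of the unsupervised setting, the following sketch clusters hypothetical lesion feature vectors with PCA and k-means, two of the approaches named above, assuming scikit-learn (the data and the number of clusters are illustrative):

```python
# Unsupervised example: cluster hypothetical radiomics-style feature
# vectors without any labels, using PCA and k-means as named above
# (scikit-learn is an assumed tool choice; data are illustrative).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

features = np.random.rand(200, 30)        # 200 lesions, 30 features each

reduced = PCA(n_components=2).fit_transform(features)   # dimensionality reduction
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(reduced)
print(clusters[:10])                      # cluster index per lesion
```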

Fig. 2 Overview of types of learning with human observer involvement.


Machine learning-based processing and analysis of multiparametric and hybrid imaging

In the field of medical imaging, ML methods have been proposed to support the human observer in the task of interest, with a recent shift from hand-crafted radiomic features [22] towards data-driven deep learning features. The applications of ML in the processing and analysis of multiparametric and hybrid imaging range from the acquisition side to the derivation of a diagnostic biomarker, as depicted exemplarily in [Fig. 3].

Fig. 3 Multiparametric and hybrid imaging data processing steps: Acquisition, reconstruction, post-processing and analysis, with exemplary use cases of machine learning-based methods within these processing steps.

On the acquisition side, ML can accelerate the imaging sequence – especially for MRI, in which long protocols can be expected for multiparametric imaging [23] [24] [25]. Sampling below the Nyquist-Shannon limit requires incoherently and randomly sampled data points that can be sparsely represented in a transform domain [26]. Reconstruction is usually performed iteratively with non-linear methods, which for conventional techniques can require substantial computation power and time. ML can reduce this workload: an appropriate reconstruction model is trained offline before usage and then allows inference within a few seconds. In some cases, this also further improves reconstructed image quality or reduces acquisition time (on the order of 2 to 8 times, and around 16 times or higher in some applications in which more degrees of freedom can be exploited) compared to conventional reconstruction algorithms [27] [28] [29] [30] [31].
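
The following sketch illustrates the principle of such a learned reconstruction for undersampled MRI: a small residual CNN maps a zero-filled (aliased) image to an artifact-reduced one. It assumes PyTorch and is a deliberately simplified stand-in for the unrolled, data-consistent, complex-valued networks of the cited works [27] [28] [29] [30] [31]:

```python
# Simplified learned MRI reconstruction: a residual CNN removes aliasing
# from a zero-filled reconstruction of 4-fold undersampled k-space.
# PyTorch is an assumed framework; data and architecture are illustrative.
import torch
import torch.nn as nn

class ResidualRecon(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, zero_filled):
        # the network learns the aliasing artifacts, which are subtracted
        return zero_filled - self.net(zero_filled)

# Hypothetical k-space: retain ~25 % of the phase-encoding lines.
kspace = torch.fft.fft2(torch.randn(1, 1, 128, 128))
mask = (torch.rand(128, 1) < 0.25).float()
zero_filled = torch.fft.ifft2(kspace * mask).abs()

recon = ResidualRecon()(zero_filled)      # single-pass inference
```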

Similar advancements can be achieved in PET imaging. Instead of administering the full dose, a reduced dose (in the range of a 50 % to 90 % reduction of the administered dose) or, alternatively, a shorter imaging duration can be used, with the former being given preference [32] [33]. The reduction of the long PET reconstruction times of iterative algorithms, such as maximum-likelihood expectation maximization (MLEM) or its incrementally updated version, ordered subset expectation maximization (OSEM), has been studied with end-to-end trained PET reconstruction models [34]. To improve the image quality of these reconstructions, imaging data from other modalities such as MRI can be leveraged in a joint reconstruction [35] [36].
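
For reference, a toy MLEM iteration is sketched below in NumPy. A random matrix stands in for the real PET system model, so the sketch only illustrates the multiplicative update that learned reconstructions such as [34] aim to replace or accelerate:

```python
# Toy MLEM iteration in NumPy. The random matrix A stands in for the
# real PET system model (projector); activity and counts are simulated.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((200, 100))            # toy system matrix (bins x voxels)
x_true = rng.random(100) * 10         # hypothetical activity distribution
y = rng.poisson(A @ x_true)           # noisy measured counts

x = np.ones(100)                      # uniform initial estimate
sens = A.T @ np.ones(200)             # sensitivity image A^T 1
for _ in range(50):                   # multiplicative MLEM update
    x *= (A.T @ (y / np.maximum(A @ x, 1e-9))) / sens
```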

Reliable quantitative PET reconstruction depends on accurate attenuation coefficients derived from (simultaneous) CT or MR imaging. ML has shown promising advances here as well, e. g., to predict features missing in the images (such as bone in MRI) or to learn a more generalizable realization [4] [37]. If imaging data from other modalities are not available for deriving an attenuation map, the PET data itself can serve as the source, with an ML model trained to replicate CT-derived attenuation maps in an image-to-image translation [38]. Besides attenuation correction, physiological motion can have a severe impact on the obtained image quality. Motion correction with motion models derived from other imaging modalities or surrogate measures [39] [40] [41] [42] [43] [44] [45] [46] [47] can be integrated [48] to compensate for motion-induced blurring and aliasing in PET.
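
A hedged sketch of such an image-to-image translation is given below: a small encoder-decoder maps uncorrected PET slices to pseudo-CT attenuation maps under an L1 loss. This is a simplified supervised stand-in for the GAN-based approach of [38], assuming PyTorch; all shapes and data are illustrative:

```python
# Simplified supervised stand-in for GAN-based attenuation-map synthesis:
# an encoder-decoder maps uncorrected PET slices to CT-derived attenuation
# maps under an L1 loss. All data and shapes are illustrative assumptions.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # encode (128 -> 64)
    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),     # decode (64 -> 128)
)
l1 = nn.L1Loss()
opt = torch.optim.Adam(generator.parameters(), lr=2e-4)

pet = torch.randn(8, 1, 128, 128)      # uncorrected PET slices (toy data)
mu_ct = torch.randn(8, 1, 128, 128)    # CT-derived attenuation maps (toy data)

for _ in range(10):                    # supervised image-to-image training
    opt.zero_grad()
    loss = l1(generator(pet), mu_ct)   # pixel-wise match to the pseudo-CT map
    loss.backward()
    opt.step()
```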

Before or intertwined with image analysis, segmentation can be employed to focus the model’s attention on the region of interest, with the additional benefit of automatically streamlining the processing workflow. Such automatic methods can support the segmentation of lesions [49] [50] [51] or organs/tissues of interest [52]. In this context, multiparametric and hybrid imaging can be utilized to provide distinct and non-redundant information for better localization of the target region or for more robustness against outliers and residual imaging artifacts; a minimal example of such a multi-channel input is sketched below. Melanoma lesion segmentation from hybrid data with an ML-based solution is shown in [Fig. 4] in comparison to a manually labeled ground truth.
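
A minimal sketch of such a multiparametric segmentation input is given below, assuming PyTorch: co-registered PET, CT, and MR volumes are stacked as input channels of a small convolutional network (the published models [49] [50] [51] [52] use considerably deeper U-Net-style architectures):

```python
# Multi-channel segmentation input: co-registered hybrid volumes are
# stacked along the channel axis of a small 3D CNN. PyTorch is an
# assumed framework; volumes, sizes, and the network are illustrative.
import torch
import torch.nn as nn

seg_net = nn.Sequential(
    nn.Conv3d(3, 16, 3, padding=1), nn.ReLU(),   # 3 modality channels in
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 1),                         # 1 lesion logit per voxel
)

pet = torch.randn(1, 1, 32, 64, 64)              # hypothetical volumes
ct = torch.randn(1, 1, 32, 64, 64)
mr = torch.randn(1, 1, 32, 64, 64)
volume = torch.cat([pet, ct, mr], dim=1)         # stack as input channels

mask = torch.sigmoid(seg_net(volume)) > 0.5      # binary lesion mask
```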

Fig. 4 Exemplary melanoma lesion segmentation in two patients with a machine learning-based segmentation network from hybrid imaging data in comparison to the manually labeled expert ground-truth. The segmented lesions are depicted in red.

The analysis of imaging data for multiparametric and hybrid imaging [53] [54] [55] mainly supports cancer classification (e. g., lung nodules in chest CT [56], skin lesions [57], or lymphoma and lung cancer [58]), disease classification [59] [60] [61], and the detection of melanoma [62] [63] [64] [65] [66], abnormalities, and tumors [67] [68] [69] [70] [71] [72] [73]. These models often combine contextual, non-imaging, and imaging information in an end-to-end fashion using multi-stream convolutional neural networks (CNNs) that accommodate multiple sources of information (e. g., imaging and non-imaging data) or multiple representations of the input (e. g., imaging modalities, scales, or orientations) [74] [75].
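
A two-stream model of the kind described above might look as follows. This is a sketch with assumed, illustrative branch sizes and inputs (an image branch for, e. g., co-registered PET + CT patches and a dense branch for non-imaging values such as age or laboratory results), not the architecture of the cited works [74] [75]:

```python
# Two-stream sketch: a CNN branch for image patches and a dense branch
# for non-imaging values, fused before classification. All names and
# sizes are illustrative assumptions (PyTorch as assumed framework).
import torch
import torch.nn as nn

class TwoStreamClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.image_stream = nn.Sequential(
            nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),  # e. g., PET + CT channels
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),     # -> 8 image features
        )
        self.clinical_stream = nn.Sequential(
            nn.Linear(4, 8), nn.ReLU(),                # 4 non-imaging values
        )
        self.head = nn.Linear(16, 2)                   # fused features -> 2 classes

    def forward(self, image, clinical):
        fused = torch.cat([self.image_stream(image),
                           self.clinical_stream(clinical)], dim=1)
        return self.head(fused)

logits = TwoStreamClassifier()(torch.randn(4, 2, 64, 64), torch.randn(4, 4))
```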

The obtained image analysis [76] [77] [78] and image-based disease diagnostics and prognostics [79] [80] can serve as biomarkers that could later be integrated into diagnostic decision-making. However, despite the high performance, reduced processing time, and improved workflow demonstrated by ML models, most methods have only been studied in a laboratory environment, and clinical adoption is still limited [81]. So far, ML methods have primarily been proposed with a specific task in mind and were driven by learning patterns from the provided database. ML is good at finding these patterns in data but cannot explain how they are connected or to what extent the estimation is reasonable or reliable. Domain knowledge (e. g., population, prevalence, imaging application, imaging hardware, imaging conditions) and expert knowledge (e. g., targeted pathology, extractable information) are valuable sources of information that are only partially considered in ML-based solutions. Models could, however, adapt better to changing scenarios and domains if causal information exchange were considered. Although multiparametric hybrid data may allow increased information sharing among imaging data samples, handling this data brings several challenges – different imaging orientations, modalities, contrasts, and so on – that need to be addressed and accounted for in ML processing. Furthermore, widespread usage is also limited by the generalizability of the models due to a lack of large and diverse datasets. Transfer learning or federated learning strategies could help to mitigate these problems in the future. Transferring knowledge between domains [82], generalizing models better across domains [83], including experts’ decision-making in model predictions [84] [85], and examining the influence of medical imaging meta-information [86] can help to shape the next generation of ML models for multiparametric and hybrid imaging.



Conclusion

In this review, we discussed current state-of-the-art approaches in ML with a focus on hybrid imaging. Although the results are promising, there is general skepticism about ML as a “black box”-like tool. ML extracts data from multiparametric images and relates this information to biological or clinical endpoints. An important point of criticism is the missing underlying biological rationale of this entirely data-driven approach, which stands in contrast to biomarker development driven by a biology-based hypothesis [87]. Of course, radiologic images are influenced by tissue properties (e. g., photon attenuation, proton density, T1/T2 times, diffusivity, glucose consumption) on a – more or less – molecular scale, and a validation of imaging features against a histopathologic or genetic ground truth would increase the acceptance of imaging data analysis. However, the link between a genetic code or cell surface markers and the Hounsfield units/signal intensities/SUVs (or their combination) of pixels in a macroscopic CT/MRI/PET image is rather complex, and proving this link might be too ambitious for in-vivo images. This is all the more important as the supposed reference standard, such as pathology, might not be more precise than the imaging biomarker to be validated [88]. Therefore, a post-hoc generation of hypotheses and a validation through clinical endpoints might be preferable for ML techniques. On this note, preference should be given to self-explaining ML strategies that output their prediction together with an explanation for that prediction, turning output interpretation into explanation [89].

To make ML an accepted diagnostic tool, several steps lie ahead. A major issue for ML is the limited amount and the heterogeneity of available training data. The Quantitative Imaging Biomarkers Alliance (QIBA) [90] and The Cancer Imaging Archive (TCIA) [91] are initiatives aiming to make quantitative imaging more robust and to pool cancer imaging data. To enable multicenter and multidisciplinary data analysis, the prerequisites for digital medicine in Europe need to be created. The patchwork of regulations throughout the European health systems, including strategies for data security and privacy as well as ethical and legal concerns, needs to be overcome [92]. Besides regulatory and bureaucratic concerns, ML studies need computational power and engineering effort; a digital infrastructure is therefore needed to run ML algorithms in the clinical routine. Regarding ML systems, a stringent standardization and description of the analytical methods in publications is crucial. For future implementations in health care, traceability and auditability of ML systems are required [93]. With these steps, ML can reach its full acceptance and potential in daily clinical usage.



Conflict of Interest

The authors declare that they have no conflict of interest.

  • References

  • 1 Annapureddy AR, Angraal S, Caraballo C. et al The National Institutes of Health funding for clinical research applying machine learning techniques in 2017. NPJ Digital Medicine 2020; 3: 13
  • 2 Mildenberger P, Eichelberg M, Martin E. Introduction to the DICOM standard. Eur Radiol 2002; 12: 920-927
  • 3 Willemink MJ, Noël PB. The evolution of image reconstruction for CT-from filtered back projection to artificial intelligence. Eur Radiol 2019; 29: 2185-2195
  • 4 Lee JS. A Review of Deep-Learning-Based Approaches for Attenuation Correction in Positron Emission Tomography. IEEE Transactions on Radiation and Plasma Medical Sciences 2021; 5: 160-184
  • 5 Syeda-Mahmood T. Role of Big Data and Machine Learning in Diagnostic Decision Support in Radiology. J Am Coll Radiol 2018; 15: 569-576
  • 6 Hosny A, Parmar C, Quackenbush J. et al Artificial intelligence in radiology. Nat Rev Cancer 2018; 18: 500-510
  • 7 Hofmarcher T, Lindgren P, Wilking N. et al The cost of cancer in Europe 2018. Eur J Cancer 2020; 129: 41-49
  • 8 Ignatiadis M, Sledge GW, Jeffrey SS. Liquid biopsy enters the clinic – implementation issues and future challenges. Nat Rev Clin Oncol 2021; 18: 297-312
  • 9 Gillies RJ, Kinahan PE, Hricak H. Radiomics: Images Are More than Pictures, They Are Data. Radiology 2016; 278: 563-577
  • 10 Bishop C. Pattern recognition and machine learning. Springer; 2006
  • 11 Kingma DP, Welling M. An Introduction to Variational Autoencoders. Foundations and Trends in Machine Learning 2019; 12: 307-392
  • 12 Kramer MA. Nonlinear principal component analysis using autoassociative neural networks. AIChE Journal 1991; 37: 233-243
  • 13 Hinton GE, Salakhutdinov RR. Reducing the dimensionality of data with neural networks. Science 2006; 313: 504-507
  • 14 Goodfellow IJ, Pouget-Abadie J, Mirza M. et al Generative adversarial nets. In: Advances in Neural Information Processing Systems; 2014: 2672-2680
  • 15 Pan SJ, Yang Q. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering 2010; 22: 1345-1359
  • 16 Raghu M, Zhang C, Kleinberg J. et al Transfusion: Understanding transfer learning for medical imaging. In: Advances in Neural Information Processing Systems; 2019
  • 17 Tan C, Sun F, Kong T. et al A survey on deep transfer learning. In, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics): Springer Verlag; 2018: 270–279
  • 18 McMahan HB, Moore E, Ramage D. et al Communication-efficient learning of deep networks from decentralized data. In: Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS); 2017
  • 19 Li T, Sahu AK, Talwalkar A. et al Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine 2020; 37: 50-60
  • 20 Yang Q, Liu Y, Chen T. et al Federated machine learning: Concept and applications. ACM Transactions on Intelligent Systems and Technology 2019; 10: 1-19
  • 21 Rieke N, Hancox J, Li W. et al The future of digital health with federated learning. npj Digital Medicine 2020; 3: 1-7
  • 22 Lambin P, Leijenaar RTH, Deist TM. et al Radiomics: the bridge between medical imaging and personalized medicine. Nature Reviews Clinical Oncology 2017; 14: 749-762
  • 23 Hammernik K, Klatzer T, Kobler E. et al Learning a variational network for reconstruction of accelerated MRI data. Magnetic Resonance in Medicine 2018; 79: 3055-3071
  • 24 Zhu B, Liu JZ, Cauley SF. et al Image reconstruction by domain-transform manifold learning. Nature 2018; 555: 487-492
  • 25 Sandino CM, Cheng JY, Chen F. et al Compressed Sensing: From Research to Clinical Practice with Deep Neural Networks: Shortening Scan Times for Magnetic Resonance Imaging. IEEE Signal Processing Magazine 2020; 37: 117-127
  • 26 Lustig M, Donoho D, Pauly JM. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine 2007; 58: 1182-1195
  • 27 Knoll F, Hammernik K, Zhang C. et al Deep learning methods for parallel magnetic resonance image reconstruction. arXiv preprint arXiv:1904.01112; 2019
  • 28 Hyun CM, Kim HP, Lee SM. et al Deep learning for undersampled MRI reconstruction. Phys Med Biol 2018; 63: 135007
  • 29 Lin DJ, Johnson PM, Knoll F. et al Artificial Intelligence for MR Image Reconstruction: An Overview for Clinicians. J Magn Reson Imag 2020; DOI: 10.1002/jmri.27078.
  • 30 Küstner T, Fuin N, Hammernik K. et al CINENet: deep learning-based 3D cardiac CINE MRI reconstruction with multi-coil complex-valued 4D spatio-temporal convolutions. Scientific Reports 2020; 10: 1-13
  • 31 Sandino CM, Lai P, Vasanawala SS. et al Accelerating cardiac cine MRI using a deep learning-based ESPIRiT reconstruction. Magnetic Resonance in Medicine 2020; DOI: 10.1002/mrm.28420
  • 32 Kaplan S, Zhu YM. Full-Dose PET Image Estimation from Low-Dose PET Image Using Deep Learning: a Pilot Study. J Digit Imaging 2019; 32: 773-778
  • 33 Katsari K, Penna D, Arena V. et al Artificial intelligence for reduced dose 18F-FDG PET examinations: a real-world deployment through a standardized framework and business case assessment. EJNMMI Physics 2021; 8: 25
  • 34 Häggström I, Schmidtlein CR, Campanella G. et al DeepPET: A deep encoder–decoder network for directly solving the PET image reconstruction inverse problem. Medical Image Analysis 2019; 54: 253-262
  • 35 Wang YJ, Baratto L, Hawk KE. et al Artificial intelligence enables whole-body positron emission tomography scans with minimal radiation exposure. Eur J Nucl Med Mol Imaging 2021; DOI: 10.1007/s00259-021-05197-3.
  • 36 Knoll F, Holler M, Koesters T. et al Joint MR-PET Reconstruction Using a Multi-Channel Image Regularizer. IEEE Trans Med Imaging 2017; 36: 1-16
  • 37 Liu F, Jang H, Kijowski R. et al Deep Learning MR Imaging-based Attenuation Correction for PET/MR Imaging. Radiology 2018; 286: 676-684
  • 38 Armanious K, Hepp T, Küstner T. et al Independent attenuation correction of whole body [18F]FDG-PET using a deep learning approach with Generative Adversarial Networks. EJNMMI Research 2020; 10: 53
  • 39 Catana C. Motion correction options in PET/MRI. Semin Nucl Med 2015; 45: 212-223
  • 40 Grimm R, Furst S, Souvatzoglou M. et al Self-gated MRI motion modeling for respiratory motion compensation in integrated PET/MRI. Med Image Anal 2015; 19: 110-120
  • 41 Lamare F, Ledesma Carbayo MJ, Cresson T. et al List-mode-based reconstruction for respiratory motion correction in PET using non-rigid body transformations. Phys Med Biol 2007; 52: 5187-5204
  • 42 Manber R, Thielemans K, Hutton BF. et al Practical PET Respiratory Motion Correction in Clinical PET/MR. J Nucl Med 2015; 56: 890-896
  • 43 Küstner T, Schwartz M, Martirosian P. et al MR-based respiratory and cardiac motion correction for PET imaging. Med Image Anal 2017; 42: 129-144
  • 44 Gratz M, Ruhlmann V, Umutlu L. et al Impact of respiratory motion correction on lesion visibility and quantification in thoracic PET/MR imaging. PLOS ONE 2020; 15: e0233209
  • 45 Marin T, Djebra Y, Han PK. et al Motion correction for PET data using subspace-based real-time MR imaging in simultaneous PET/MR. Phys Med Biol 2020; 65: 235022
  • 46 Kolbitsch C, Prieto C, Tsoumpas C. et al A 3D MR-acquisition scheme for nonrigid bulk motion correction in simultaneous PET-MR. Med Phys 2014; 41: 082304
  • 47 Munoz C, Kolbitsch C, Reader AJ. et al MR-Based Cardiac and Respiratory Motion-Compensation Techniques for PET-MR Imaging. PET Clin 2016; 11: 179-191
  • 48 McClelland JR, Hawkes DJ, Schaeffter T. et al Respiratory motion models: a review. Med Image Anal 2013; 17: 19-42
  • 49 Kamnitsas K, Ledig C, Newcombe VFJ. et al Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical Image Analysis 2017; 36: 61-78
  • 50 Ghafoorian M, Karssemeijer N, Heskes T. et al Non-uniform patch sampling with deep convolutional neural networks for white matter hyperintensity segmentation. In, 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI); 2016: 1414–1417
  • 51 Brosch T, Tang LYW, Yoo Y. et al Deep 3D Convolutional Encoder Networks With Shortcuts for Multiscale Feature Integration Applied to Multiple Sclerosis Lesion Segmentation. IEEE Transactions on Medical Imaging 2016; 35: 1229-1239
  • 52 Hesamian MH, Jia W, He X. et al Deep Learning Techniques for Medical Image Segmentation: Achievements and Challenges. JDI 2019; 32: 582-596
  • 53 Morin O, Vallières M, Jochems A. et al A Deep Look Into the Future of Quantitative Imaging in Oncology: A Statement of Working Principles and Proposal for Change. Int J Radiat Oncol Biol Phys 2018; 102: 1074-1082
  • 54 Parmar C, Barry JD, Hosny A. et al Data Analysis Strategies in Medical Imaging. Clin Cancer Res 2018; 24: 3492-3499
  • 55 Xue Y, Chen S, Qin J. et al Application of Deep Learning in Automated Analysis of Molecular Images in Cancer: A Survey. Contrast Media & Molecular Imaging 2017; 2017: 9512370
  • 56 Setio AAA, Ciompi F, Litjens G. et al Pulmonary Nodule Detection in CT Images: False Positive Reduction Using Multi-View Convolutional Networks. IEEE Transactions on Medical Imaging 2016; 35: 1160-1169
  • 57 Kawahara J, Hamarneh G. Multi-resolution-Tract CNN with Hybrid Pretrained and Skin-Lesion Trained Layers. Cham: Springer International Publishing; 2016: 164-171
  • 58 Sibille L, Seifert R, Avramovic N. et al (18)F-FDG PET/CT Uptake Classification in Lymphoma and Lung Cancer by Using Deep Convolutional Neural Networks. Radiology 2020; 294: 445-452
  • 59 Li R, Zhang W, Suk HI. et al Deep learning based imaging data completion for improved brain disease diagnosis. Med Image Comput Comput Assist Interv 2014; 17: 305-312
  • 60 Anthimopoulos M, Christodoulidis S, Ebner L. et al Lung Pattern Classification for Interstitial Lung Diseases Using a Deep Convolutional Neural Network. IEEE Transactions on Medical Imaging 2016; 35: 1207-1216
  • 61 Fakoor R, Ladhak F, Nazi A. et al Using deep learning to enhance cancer diagnosis and classification. In, Proceedings of the international conference on machine learning: ACM New York, USA; 2013
  • 62 Codella NC, Nguyen Q-B, Pankanti S. et al Deep learning ensembles for melanoma recognition in dermoscopy images. IBM Journal of Research and Development 2017; 61: 1-5
  • 63 Yu L, Chen H, Dou Q. et al Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks. IEEE Transactions on Medical Imaging 2017; 36: 994-1004
  • 64 Esteva A, Kuprel B, Novoa RA. et al Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017; 542: 115-118
  • 65 Jafari MH, Nasr-Esfahani E, Karimi N. et al Extraction of skin lesions from non-dermoscopic images for surgical excision of melanoma. International Journal of Computer Assisted Radiology and Surgery 2017; 12: 1021-1030
  • 66 Nasr-Esfahani E, Samavi S, Karimi N. et al Melanoma detection by analysis of clinical images using convolutional neural network. In, 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); 2016: 1373–1376
  • 67 Cireşan DC, Giusti A, Gambardella LM. et al Mitosis Detection in Breast Cancer Histology Images with Deep Neural Networks. Berlin, Heidelberg: Springer Berlin Heidelberg; 2013: 411-418
  • 68 Sirinukunwattana K, Raza SEA, Tsang Y. et al Locality Sensitive Deep Learning for Detection and Classification of Nuclei in Routine Colon Cancer Histology Images. IEEE Transactions on Medical Imaging 2016; 35: 1196-1206
  • 69 Wang H, Cruz-Roa A, Basavanhally A. et al Mitosis detection in breast cancer pathology images by combining handcrafted and convolutional neural network features. J Med Imaging (Bellingham) 2014; 1: 034003
  • 70 Cruz-Roa A, Basavanhally A, González F. et al Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks. In, Medical Imaging 2014: Digital Pathology: International Society for Optics and Photonics; 2014: 904103
  • 71 Roth HR, Lu L, Seff A. et al A new 2.5D representation for lymph node detection using random sets of deep convolutional neural network observations. Medical image computing and computer-assisted intervention: MICCAI International Conference on Medical Image Computing and Computer-Assisted Intervention 2014; 17: 520-527
  • 72 Wang D, Khosla A, Gargeya R. et al Deep learning for identifying metastatic breast cancer. arXiv preprint arXiv:1606.05718; 2016
  • 73 Shin H-C, Roth HR, Gao M. et al Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE transactions on medical imaging 2016; 35: 1285-1298
  • 74 Barbu A, Lu L, Roth H. et al An analysis of robust cost functions for CNN in computer-aided diagnosis. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2018; 6: 253-258
  • 75 Roth HR, Lu L, Liu J. et al Improving computer-aided detection using convolutional neural networks and random view aggregation. IEEE transactions on medical imaging 2015; 35: 1170-1181
  • 76 Fourcade A, Khonsari RH. Deep learning in medical image analysis: A third eye for doctors. J Stomatol Oral Maxillofac Surg 2019; 120: 279-288
  • 77 Shen D, Wu G, Suk H-I. Deep learning in medical image analysis. Annual review of biomedical engineering 2017; 19: 221-248
  • 78 Litjens G, Kooi T, Bejnordi BE. et al A survey on deep learning in medical image analysis. Medical image analysis 2017; 42: 60-88
  • 79 Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. ZMedPhys 2019; 29: 102-127
  • 80 Hosny A, Parmar C, Quackenbush J. et al Artificial intelligence in radiology. Nature Reviews Cancer 2018; 18: 500-510
  • 81 Dayhoff JE, DeLeo JM. Artificial neural networks: opening the black box. Cancer 2001; 91: 1615-1635
  • 82 Chen KT, Schürer M, Ouyang J. et al Generalization of deep learning models for ultra-low-count amyloid PET/MRI using transfer learning. European Journal of Nuclear Medicine and Molecular Imaging 2020; 47: 2998-3007
  • 83 Schölkopf B. Causality for machine learning. arXiv preprint arXiv:1911.10500; 2019
  • 84 Cypko MA, Stoehr M, Kozniewski M. et al Validation workflow for a clinical Bayesian network model in multidisciplinary decision making in head and neck oncology treatment. Int J Comput Assist Radiol Surg 2017; 12: 1959-1970
  • 85 Lucas PJ, van der Gaag LC, Abu-Hanna A. Bayesian networks in biomedicine and health-care. Artif Intell Med 2004; 30: 201-214
  • 86 Maier-Hein L, Eisenmann M, Reinke A. et al Why rankings of biomedical image analysis competitions should be interpreted with care. Nature Communications 2018; 9: 5217
  • 87 Tomaszewski MR, Gillies RJ. The Biological Meaning of Radiomic Features. Radiology 2021; 298: 505-516
  • 88 Elmore JG, Longton GM, Carney PA. et al Diagnostic concordance among pathologists interpreting breast biopsy specimens. Jama 2015; 313: 1122-1132
  • 89 Elton DC. Self-explaining AI as an alternative to interpretable AI. In, International Conference on Artificial General Intelligence: Springer; 2020: 95–106
  • 90 Shukla-Dave A, Obuchowski NA, Chenevert TL. et al Quantitative imaging biomarkers alliance (QIBA) recommendations for improved precision of DWI and DCE-MRI derived biomarkers in multicenter oncology trials. J Magn Reson Imaging 2019; 49: e101-e121
  • 91 Clark K, Vendt B, Smith K. et al The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository. J Digit Imaging 2013; 26: 1045-1057
  • 92 Bukowski M, Farkas R, Beyan O. et al Implementation of eHealth and AI integrated diagnostics with multidisciplinary digitized data: are we ready from an international perspective?. European Radiology 2020; 30: 5510-5524
  • 93 European Commission. Ethics Guidelines for Trustworthy AI. 2019

Correspondence

Dr. Thomas Küstner
Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen
Hoppe-Seyler-Str. 3
72076 Tübingen
Germany   
Phone: +49/70 71/2 98 05 07   

Publication History

Received: 22 June 2021

Accepted: 25 November 2021

Article published online:
24 February 2022

© 2022. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

