Ultraschall in Med 2018; 39(04): 379-381
DOI: 10.1055/a-0642-9545
Editorial
© Georg Thieme Verlag KG Stuttgart · New York

Machine learning and deep learning applied in ultrasound

„Machine“ und „Deep Learning“ im Ultraschall angewandt
Lea Marie Pehrson
1  Copenhagen University College, Denmark
2  Department of Diagnostic Radiology, Rigshospitalet, Denmark
,
Carsten Lauridsen
1  Copenhagen University College, Denmark
2  Department of Diagnostic Radiology, Rigshospitalet, Denmark
,
Michael Bachmann Nielsen
2  Department of Diagnostic Radiology, Rigshospitalet, Denmark
3  University of Copenhagen, Department of Clinical Medicine, Denmark

Publication History

Publication Date:
02 August 2018 (online)

We live in an exciting time, often called the Information Age, signifying that the amount of information in the world is growing exponentially. The use of medical imaging is increasing rapidly [1]. From 1996 to 2010 the number of ultrasound examinations approximately doubled, while the number of CT and MRI examinations almost tripled and quadrupled, respectively [2]. The large amount of data and the need for early detection of pathologies are highly demanding [3]. Due to the increasing workload, data handling needs to become faster and more precise. This editorial showcases some of the basic principles of machine learning and deep learning within ultrasound imaging.

Machine learning is a method that grew out of artificial intelligence, in which the computer captures patterns in data sets and uses these patterns extensively in decision making. Machine learning offers a range of capabilities with regard to medical imaging [4] [5]. The main purpose of the different types of algorithms is to improve diagnostic accuracy and the consistency of image interpretation. The observations and predictions are constructed on the basis of the data presented to the algorithm.

Deep learning has attracted a lot of attention over the past two years. It emerged from machine learning and automatically learns hierarchical features. The algorithm consists of multiple layers composed of simple, nonlinear modules. The data is transformed into representations that allow the algorithm to discriminate between classes [6]. Deep learning algorithms can learn from former mistakes, whereas traditional machine learning algorithms cannot: machine learning algorithms are constructed using hand-engineered features, and these features cannot be adjusted once the algorithm has been configured.

As the name suggests, machine and deep learning algorithms can learn to detect. This is usually done by either supervised or unsupervised learning. In supervised learning, labeled data is presented to the algorithm: each image comes with an appropriate classification outcome. The classifier can, for example, be trained to output a value of 1 for input images on which a malignancy occurs, while for images with no pathology or with benign lesions, the algorithm is taught to output the value 0 [7]. Once the training of the algorithm has been completed, it can begin classifying unseen images.
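The 0/1 training scheme described above can be sketched in a few lines of code. The sketch below is purely illustrative, with synthetic one-dimensional "image features" (a hypothetical lesion score, not real ultrasound data): a logistic-regression classifier is trained on labeled examples (1 = malignant, 0 = benign) and then applied to unseen feature values.

```python
import math
import random

# Synthetic labeled training data: a hypothetical 1-D lesion score,
# with label 0 (benign) drawn around 2.0 and label 1 (malignant) around 4.0.
random.seed(0)
data = [(random.gauss(2.0, 0.5), 0) for _ in range(50)] + \
       [(random.gauss(4.0, 0.5), 1) for _ in range(50)]

# Logistic regression trained by stochastic gradient descent.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid output in (0, 1)
        w -= lr * (p - y) * x                     # gradient step on the weight
        b -= lr * (p - y)                         # gradient step on the bias

def classify(x):
    """Return 1 (malignant) or 0 (benign) for an unseen feature value."""
    return int(1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5)

print(classify(1.5), classify(4.5))
```

Real systems use far richer features and models, but the principle is the same: the labeled examples shape the decision rule, which is then applied to images the algorithm has never seen.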

The algorithm must be trained correctly to allow classification. From a supervised learning point of view, this requires large amounts of well-annotated images or scans [8], which can be challenging to obtain. One way of mitigating this is weakly supervised learning, a method that reduces the amount of information that must be annotated and extracted [7]. Decreasing the number of details annotated by the expert simplifies the process and reduces the annotation effort. An example of weakly labeled data is an image on which a tumor is annotated while its precise location or boundaries are not. The disadvantage of this method is that the number of annotated images needs to be substantially larger for the algorithm to learn to the same degree.

In unsupervised learning, unlabeled data is presented to the algorithm. The algorithm searches and analyzes the data to detect clusters or tendencies, which can then be used to determine features that distinguish between the different classes. This is especially useful when working with content-based retrieval. A feature selection algorithm is often applied to reduce the number of features to a smaller composite set [7].
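The cluster-finding idea can be illustrated with k-means, a standard unsupervised algorithm. This is a minimal sketch on synthetic two-dimensional feature vectors (hypothetical texture and echogenicity values, not real data): no labels are given, yet the algorithm recovers the two groups from the data alone.

```python
import random

# Unlabeled synthetic feature vectors: two hidden groups,
# one centred near (1, 1) and one near (4, 4).
random.seed(1)
points = [(random.gauss(1, 0.3), random.gauss(1, 0.3)) for _ in range(40)] + \
         [(random.gauss(4, 0.3), random.gauss(4, 0.3)) for _ in range(40)]

# k-means (Lloyd's algorithm) with k = 2, crudely initialised
# from the first and last data points.
centroids = [points[0], points[-1]]
for _ in range(10):
    groups = [[], []]
    for p in points:
        # Assign each point to its nearest centroid (squared distance).
        d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
        groups[d.index(min(d))].append(p)
    # Move each centroid to the mean of its assigned points.
    centroids = [tuple(sum(v) / len(g) for v in zip(*g)) for g in groups if g]

print(sorted(round(c[0]) for c in centroids))  # centres recovered near 1 and 4
```

The recovered cluster centres can then serve as features for downstream tasks such as content-based retrieval, exactly the role the editorial describes for clusters and tendencies found without labels.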

The workload that goes into handcrafting features within machine learning has prompted researchers to look at algorithms that are able to acquire features from data without human intervention, such as deep learning. In ultrasound, acoustic patterns are neither obvious nor easily engineered. Given their ability to extract non-linear features from data, deep learning algorithms are an especially good choice when working with ultrasound.
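Why stacked nonlinear modules matter can be shown on a toy problem. The sketch below (illustrative only, and far smaller than any ultrasound model) trains a two-layer network in plain Python on XOR, a pattern that no single linear, hand-engineered feature can separate: the hidden layer learns its own nonlinear features, and the training error drops.

```python
import math
import random

# A tiny two-layer network: 2 inputs -> H hidden sigmoid units -> 1 output.
random.seed(0)
H = 4
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR: not linearly separable, so a single linear feature cannot solve it.
X, Y = [(0, 0), (0, 1), (1, 0), (1, 1)], [0, 1, 1, 0]

def forward(x):
    h = [sig(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(H)]
    return h, sig(sum(W2[j] * h[j] for j in range(H)) + b2)

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(X, Y)) / len(X)

before = mse()
for _ in range(3000):                       # backpropagation training loop
    for x, y in zip(X, Y):
        h, o = forward(x)
        d_o = (o - y) * o * (1 - o)         # gradient at the output unit
        for j in range(H):
            d_h = d_o * W2[j] * h[j] * (1 - h[j])  # gradient at hidden unit j
            W2[j] -= 0.5 * d_o * h[j]
            W1[j][0] -= 0.5 * d_h * x[0]
            W1[j][1] -= 0.5 * d_h * x[1]
            b1[j] -= 0.5 * d_h
        b2 -= 0.5 * d_o

print(round(before, 3), round(mse(), 3))    # training error decreases
```

Deep networks used on ultrasound images stack many more such layers, but the mechanism is the same: the features are adjusted by training rather than engineered by hand.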

Chan et al. reported the first evidence that the ROC curve for radiologists' detection of clustered micro-calcifications improved significantly when a computer output was available [9]. Brattain et al. published an in-depth review of machine learning in February 2018, discussing the opportunities within medical ultrasound, the methods applied and the status of the research [7]. The review showcases some of the opportunities that machine learning and deep learning offer medical ultrasound. The authors surveyed 56 papers with the aim of providing insight into the progress and the best approaches within ultrasound imaging. In particular, approaches using deep learning are highlighted and compared to approaches using handcrafted features [7]. Litjens et al. published a review of the literature covering different anatomical regions, modalities and architectures, which included over 300 contributions and explains in depth the relevant concepts of medical image analysis [8].

Becker et al. demonstrated the use of deep learning in a study applying generic software for industrial image analysis to diagnose breast cancer on breast ultrasound images. The results show high accuracy, comparable to human readers [10]. The algorithm allowed real-time analysis during the ultrasound examination, potentially optimizing detection during the examination itself [10]. Given the same amount of training data, the deep learning algorithm learned faster and better than a human reader with no prior experience [10].

The development of machine and deep learning from idea to clinical product will require close collaboration between the medical and data sciences, and it begins with the definition of needs. The concept of deep learning can potentially be applied to all imaging modalities and examinations, allowing new standards for image interpretation systems. The first results are beginning to make their way into ultrasound conferences, and we expect them to appear in journals like ours in the coming years.