
DOI: 10.1055/s-0045-1814382
A Deep Neural Network-Based Computer-Assisted Prostate Segmentation in Biparametric Magnetic Resonance Images
Funding: None.
Abstract
Introduction
Prostate cancer is one of the most common cancers in men. To reduce the associated mortality, early identification is essential, and accurate diagnosis of the cancer stage is critical for effective treatment planning.
Objectives
We propose a computer-assisted detection and diagnosis system that uses prostate segmentation along with detection and prediction of prostate cancer grades utilizing biparametric magnetic resonance images.
Materials and Methods
The proposed study included 236 patients who underwent biparametric magnetic resonance imaging scans, which generated T2-weighted and diffusion-weighted images of 183 patients with cancer and 53 patients without cancer. The Prostate Imaging Reporting and Data System (PI-RADS) scores ranged from 1 to 5. We first generated a prostate probabilistic map using a two-way approach and then employed a rule-based algorithm to identify the clinically significant region within the segmented prostate. Once the clinically significant regions were confirmed, classifiers were used to predict cancer grades.
Results
The proposed system achieved a Dice similarity coefficient of 89.4% and a Hausdorff distance of 7.78 mm. The area under the receiver operating characteristic curve (AUC) was used as an indicator of classifier performance, with a value of 0.91, and the accuracy of the combined modality was 90.48%.
Conclusion
The stacked autoencoder was used to overcome challenges such as blurred prostate boundaries and variations in prostate size and shape among subjects by extracting hidden features. The combined modality achieved higher classification accuracy and AUC than each individual modality. The random forest classifier demonstrated reasonably enhanced performance compared with the K-nearest neighbor (k-NN) and support vector machine classifiers.
Introduction
Prostate cancer is a common and slowly developing malignancy in men. In 2018, a total of 1,276,106 new cases were recorded, with 358,989 deaths reported across 20 world regions.[1] Most prostate cancer patients do not show noteworthy symptoms until the disease progresses to a more advanced stage. When prostate cancer starts to develop rapidly within and beyond the prostate, it becomes dangerous. Prostate cancer is becoming more common in India, where estimates indicated that the number of patients would double by 2020.[2] Hence, it is essential to find the exact location of the cancerous region inside the prostate for advanced treatment planning, and timely detection of cancer is crucial to reducing mortality. At present, transrectal ultrasound, digital rectal examination, and magnetic resonance imaging (MRI) are established medical procedures to identify prostate cancer. Multiparametric MRI (mpMRI) is a popular procedure in prostate cancer workup because it facilitates lesion detection and cancer staging. The Prostate Imaging-Reporting and Data System (PI-RADS) provides standardized guidelines for prostate cancer detection and tumor scoring on mpMRI, yielding superior diagnostic outcomes compared with any single MRI modality.[3] [4]
Prostate cancer detection accuracy using MRI is inconsistent and depends heavily on the reader's experience.[5] [6] The many slices in each mpMRI sequence also make the analysis process time-consuming. Therefore, to help readers overcome these difficulties, an automated computer-assisted detection system is essential in prostate cancer diagnosis. A reliable system must address unequal prostate shape and size among patients, discrepancies in slice intensity, and poorly defined prostate boundaries. In computer-assisted detection systems, researchers now employ deep learning and machine learning (ML) methods for prostate cancer detection.[7] [8] Computer-aided detection of disease is helpful for radiologists, enabling accurate analysis in a short time. In the ML methodology, the system learns patterns from large datasets using different analytical tools to predict the current state of the disease. In deep learning, different types of neural networks are structured to perform object detection in images.[9] The diagnostic performance of radiologists in prostate cancer is enhanced by using ML algorithms. An ML-based predictive system has been designed to detect cancerous regions in different prostate zones using radiomics features.[10] Logistic regression along with ML mechanisms has been reported to achieve high diagnostic accuracy in prostate cancer detection.[11] One key advantage of deep learning is its ability to learn hierarchical feature representations and extract high-level semantic features.
Active contour models (ACMs) and deep learning have been used to separate the prostate from MR images with improved accuracy.[12] Organs have been identified in MRI using a stacked autoencoder to extract features from the datasets.[13] Deep learning has become an increasingly popular and valuable approach in prostate cancer research, as it provides higher segmentation accuracy, reduces researchers' effort through automated optimization, and delivers consistent, robust, and highly discriminative results. Deep learning techniques are scalable: they can handle large volumes of data and can be trained on diverse datasets from different sources, enhancing generalizability across populations. The accuracy and efficiency of deep learning techniques continue to improve as more data become accessible and models are fine-tuned.
The advancement of deep learning has yielded substantial progress in the realm of prostate cancer investigation, particularly in the domain of medical imaging and MRI segmentation. Overall, deep learning represents a powerful tool in the arsenal of prostate cancer investigators, offering enhanced capabilities for MRI segmentation, diagnosis, and ultimately, improving patient outcomes through more precise treatment planning and monitoring. Motivated by the above advantages, we have deployed deep learning mechanisms in our study to segment prostate boundaries.
Materials and Methods
Data Collection
In the proposed study, all patients underwent mpMRI scanning at Nanavati Max Super Specialty Hospital, Mumbai. The dataset contained T2-weighted (T2W) and diffusion-weighted imaging (DWI) sequences of 236 patients (53 patients without cancer and 183 patients with cancer) with PI-RADS scores of 1 to 5; the PI-RADS v2 version of the scoring system was used. Dynamic contrast-enhanced (DCE) imaging was omitted, making the protocol a biparametric MRI (bpMRI) study. We included a total of 5,664 two-dimensional bpMRI slices in our dataset. The data were collected using 3 Tesla MRI scanners (GE Medical Systems) with surface coils. The DWI sequences, used to measure water diffusion in prostate tissues, were obtained with b-values of 0, 100, 400, and 1,000 s/mm². The apparent diffusion coefficient (ADC) map was derived from DWI for b-values of 100, 400, and 800 s/mm² using the scanner's built-in tool. All acquired sequences were saved in DICOM format with dimensions of 512 × 512 × 24. MATLAB R2017a was used exclusively to evaluate the proposed systems. The dataset was split into a training/validation set (75%) and a test set (25%). Histopathological confirmation was obtained for all patients who underwent prostate biopsy or surgery. The Gleason scoring system was used to grade prostate cancer on histopathology, and the results were correlated with the corresponding MRI findings.
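Although the ADC maps in this study came from the scanner's built-in tool, the underlying computation is a mono-exponential fit, S(b) = S0·exp(−b·ADC), per pixel. The sketch below (an illustration, not the vendor's algorithm; the function name is ours) recovers ADC as the negative slope of a log-linear least-squares fit of signal against b-value:

```python
import numpy as np

def adc_map(signals, b_values):
    """Estimate an ADC map from DWI signals via a mono-exponential fit.

    signals : array of shape (n_b, H, W) -- one image per b-value
    b_values: array of shape (n_b,) in s/mm^2
    Fits ln S(b) = ln S0 - b * ADC per pixel, so the ADC is the
    negative slope of log-signal versus b.
    """
    b = np.asarray(b_values, dtype=float)
    log_s = np.log(np.clip(signals, 1e-6, None)).reshape(len(b), -1)
    # np.polyfit fits every pixel column at once; [0] is the slope row.
    slope = np.polyfit(b, log_s, 1)[0]
    return (-slope).reshape(signals.shape[1:])  # ADC in mm^2/s
```

Using more than two b-values, as in this protocol, makes the fitted slope less sensitive to noise in any single acquisition.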
Overview of the Proposed Computer-Assisted Detection System
The presented computer-assisted detection system, shown in [Fig. 1], includes two different approaches to cancer detection in prostate bpMRI. The input data comprise T2W MRI sequences, DWI, and ADC maps. The first approach is a traditional ML technique, which includes several steps: prostate segmentation, feature extraction, model training, and model testing. The second approach is a deep learning technique, which uses deep neural networks to learn hidden features for prostate analysis and image tagging.


First Approach: Traditional Machine Learning Pipeline
The first approach in the diagram follows a modular pipeline that includes manual feature engineering and traditional ML methods. The procedure begins with bpMRI scans: T2W images, DWI, and ADC maps. Initially, image pre-processing is performed by normalizing the input image to a domain where malignancy can be identified more easily. To reduce inter-patient discrepancy, most MRI images are normalized; we normalized the T2W images using the Gaussian normalization method. Image registration was performed prior to feature extraction by aligning the different scanning modalities to correct distorted images. The first approach incorporates the following techniques, which are discussed in detail below.
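Gaussian normalization of T2W intensities is commonly implemented as a z-score transform. A minimal sketch, assuming per-slice normalization (the exact variant used in the study is not specified; the function name and optional mask are ours):

```python
import numpy as np

def gaussian_normalize(image, mask=None):
    """Gaussian (z-score) normalization of a T2W slice.

    Shifts and scales intensities to zero mean and unit standard
    deviation, optionally computing the statistics only over pixels
    inside a mask, which reduces inter-patient intensity variation.
    """
    img = np.asarray(image, dtype=float)
    ref = img[mask] if mask is not None else img
    return (img - ref.mean()) / (ref.std() + 1e-8)
```

After this step, intensities from different patients and scan sessions live on a comparable scale, which stabilizes both feature extraction and classifier training.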
Prostate Segmentation
Segmentation Algorithms
Segmentation of organs is a crucial stage in medical image analysis. Segmentation is the technique of separating the prostate from surrounding structures in MRI. Researchers have used different algorithms to perform prostate segmentation with a high degree of accuracy.[14] T2W images provide clear structural information about the prostate and hence are widely used for prostate segmentation. Improved segmentation performance has been achieved for inadequately defined regions and pixel intensities using atlas-based segmentation methods.[15] [16] An active contour model (ACM) performs prostate segmentation using geometric information, optimization, and physics-based constraints. The ACM separates the prostate in MR images using a probabilistic map and appearance-based evidence. Final segmentation is achieved by moving the curve toward the prostate border using internal and external energies. Internal energy confines the curve at the border and preserves smoothness, and is given by [Eq. (1)]
E_int = (1/2) ∫ [ α(n) |u′(n)|² + β(n) |u″(n)|² ] dn    (1)
where u(n) is the curve to be approximated, and the two terms in the above equation represent the elasticity and stiffness of the curve. External energy controls the movement of the curve close to the border and preserves the appropriate shape of the curve, as specified by [Eq. (2)]
E_ext = − ∫ |∇I(u(n))|² dn    (2)
The ACM was used to perform prostate segmentation from MRI, and [Fig. 2] illustrates the segmentation results obtained with the proposed segmentation technique.


Feature Extraction
One of the most important tasks in medical image analysis is feature extraction from the region of interest, and feature selection improves classifier performance. In medical image processing, texture, volume, intensity, shape, and statistical features are widely used. Initially, intensity features were largely used in medical image analysis but did not deliver adequate outcomes, owing to limitations such as contrast and illumination variations in MRI. Therefore, we extracted vision features such as Haar features, local binary patterns (LBPs), and histograms of oriented gradients (HOG) in our work to carry out classification and address the shortcomings of intensity features. These vision-based features were used collectively to perform segmentation in the presence of diverse illumination and slight rotations, yielding improved segmentation performance.
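Of the descriptors named above, the LBP is the simplest to make concrete. The sketch below is an illustrative numpy implementation of the basic 3×3 LBP (the study's actual extraction pipeline is not shown; libraries such as scikit-image provide optimized, multi-radius versions):

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 local binary pattern (LBP) codes for interior pixels.

    Each pixel is compared with its 8 neighbours; every neighbour that
    is >= the centre contributes one bit, giving an 8-bit texture code
    that is robust to monotonic illumination changes.
    """
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]  # centre pixels
    # Neighbour offsets in a fixed clockwise order starting top-left.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes -- the texture feature vector."""
    h, _ = np.histogram(lbp_image(img), bins=bins, range=(0, bins))
    return h / h.sum()
```

The normalized code histogram, rather than the code image itself, is what would be fed to a classifier, since it is invariant to where in the patch each texture occurs.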
Model Training and Testing
Once the prostate is segmented, the next task is to detect the cancerous region using rule-based procedures. DWI and ADC maps are used together to confirm the clinically significant region in the segmented prostate. Classification of prostate cancer in the peripheral zone has been performed effectively using support vector machines (SVMs), achieving high accuracy.[17] In this research, we employed the following classifiers: K-nearest neighbor (k-NN), SVM, and random forest (RF).
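As a concrete illustration of the simplest of these classifiers, a minimal k-NN can be written directly in numpy (a sketch only; in practice library implementations such as those in scikit-learn or MATLAB's toolboxes would be used, and the function name here is ours):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Minimal k-nearest-neighbour classifier (majority vote).

    For each test vector, find the k training vectors with the smallest
    Euclidean distance and predict the most frequent label among them.
    """
    X_train = np.asarray(X_train, float)
    X_test = np.asarray(X_test, float)
    y_train = np.asarray(y_train)
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)  # distances to all training points
        nearest = y_train[np.argsort(d)[:k]]     # labels of the k closest
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])  # majority vote
    return np.array(preds)
```

SVM and RF replace the distance-and-vote rule with a learned decision boundary and an ensemble of decision trees, respectively, but consume the same feature vectors.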
Second Approach: Deep Learning-Based Pipeline
The second approach is more modern and relies on end-to-end deep learning for automated feature learning and classification. In this approach, we selected an atlas segmentation system to isolate the prostate from MRI, using a selective and iterative method to estimate performance levels for atlas selection. Probabilistic maps were derived using atlas-based segmentation techniques, and features were then learned from these maps using autoencoders. In a recent study, researchers extracted hidden features to perform prostate segmentation using deep learning and sparse autoencoders.[18]
Deep learning is currently a major focus of research because it can automatically learn features hierarchically from high-level representations.[19] In clinical settings, valuable information is obtained using deep learning techniques, which employ supervised or unsupervised methodologies to perform prostate cancer analysis.[20]
In deep learning, autoencoders play an important role in the feature representation of input data. An autoencoder is a non-linear feed-forward neural network commonly used for dimensionality reduction. It performs encoding as well as decoding operations on the input data, minimizing the reconstruction error by learning the weights, as given by [Eq. (3)]
J(W, W′, b, b′) = Σᵢ ‖ xᵢ − σ(W′ σ(W xᵢ + b) + b′) ‖²    (3)
where W, W′, b, and b′ are the weights and intercepts among the layers. Features were thus efficiently learned from unseen images using autoencoders. An autoencoder is a basic structure that acquires latent features from an untagged input image; a sparsity constraint term is added to obtain the objective function. Using these latent features, image tagging is performed and the prostate is separated from the background in the input MRI. Once the prostate is segmented, the cancerous region is detected and classified as per PI-RADS guidelines. The second approach appears simple and flexible, but it requires a large number of images for algorithm training. As a result, we overcame the problem of lesion detection and located the lesion in the segmented prostate on T2W MRI. The proposed multi-modality deep learning approach performs significantly better than single-modality MRI methods and has the potential to enhance the diagnostic capabilities of radiologists.
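The encode/decode/reconstruct cycle described above can be sketched as a single-layer forward pass (an illustration of the loss in Eq. (3) only, assuming sigmoid activations; training the weights, stacking layers, and the sparsity penalty are omitted, and the function names are ours):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def autoencoder_loss(X, W, b, W_dec, b_dec):
    """Summed squared reconstruction error of a one-layer autoencoder.

    Encoder:  h     = sigmoid(W x + b)        (latent features)
    Decoder:  x_hat = sigmoid(W_dec h + b_dec)
    Training would minimize this quantity (plus a sparsity penalty,
    for a sparse autoencoder) with respect to the weights and intercepts.
    """
    H = sigmoid(X @ W.T + b)               # encode: (n_samples, n_hidden)
    X_hat = sigmoid(H @ W_dec.T + b_dec)   # decode: (n_samples, n_features)
    return float(np.sum((X - X_hat) ** 2))
```

In a stacked autoencoder, the latent codes H of one trained layer become the inputs X of the next, which is how hierarchical hidden features are built up.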
Primary and Secondary Outcomes
The primary outcome of our study is automatic segmentation of the prostate using vision-based features and deep-learned features in T2W, DWI, and ADC imaging sequences. The secondary outcome is to detect tumors in the segmented prostate and predict their grade using PI-RADS guidelines. We developed an algorithm to automatically segment the prostate, detect tumors, and grade prostate cancer from bpMRI with high specificity.
Statistical Analysis
ANOVA tests were used to examine the relationships between the two data groups (our own dataset and the online dataset) and the performance index parameters. Statistical significance was defined as p-values less than 0.05, and all statistical tests were two-sided. Analyses were performed using the RStudio IDE.
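The analyses were run in R, but the one-way ANOVA F-statistic they rest on is straightforward to state in code. A minimal sketch (for illustration; in practice one would call R's `aov` or `scipy.stats.f_oneway`, and the function name here is ours):

```python
import numpy as np

def anova_f(*groups):
    """One-way ANOVA F-statistic for two or more groups of measurements.

    F = (between-group mean square) / (within-group mean square);
    a large F (equivalently, a small p-value) indicates that the group
    means differ by more than within-group noise would explain.
    """
    groups = [np.asarray(g, float) for g in groups]
    n_total = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

The p-value is then the upper tail of the F(df_between, df_within) distribution at this statistic.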
Inclusion and Exclusion Criteria
The inclusion criterion was men with prostate cancer who underwent mpMRI with PI-RADS scores from 1 to 5. Men with multiple lesions in the segmented MRI data were excluded from the study.
Ethical Approval
This study was approved by the institutional ethics committee of the hospital (approval no.: BHN/5168/2017, dated March 14, 2017), and the study was conducted in accordance with the 1964 Declaration of Helsinki and its later amendments. As hospital-based data were collected for analysis, patient consent was not required, and all patient data were anonymized.
Results
To assess the performance of the presented approaches, we considered prostate segmentation and tumor detection accuracy. In the first part, we measured the performance of prostate segmentation in T2W images using different performance indices and features, while in the second part, we assessed cancer detection accuracy using different classifiers. We conducted the experiments using both individual imaging modalities and the combined modality. The segmentation analysis used several metrics, including the Dice similarity coefficient (DSC), precision, Hausdorff distance (HD), and average surface distance (ASD). The training and testing data were used to identify patterns accurately and precisely predict cancerous regions in the dataset. [Table 1] provides the average values of the different performance indices obtained using the proposed methods.
Abbreviations: ASD, average surface distance; DSC, Dice similarity coefficient; HD, Hausdorff distance.
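The two headline overlap metrics, DSC and HD, can be made concrete for binary masks. A sketch (illustrative numpy only; production pipelines typically use library routines, apply voxel spacing to get millimetres, and the function names are ours):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks (1 = perfect overlap)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between foreground pixels of two masks.

    The largest distance from a point in one set to its nearest point in
    the other, taken in both directions (pixel units here; multiply by
    voxel spacing to obtain mm).
    """
    pa = np.argwhere(np.asarray(a, bool))
    pb = np.argwhere(np.asarray(b, bool))
    # Pairwise Euclidean distances via broadcasting: shape (|A|, |B|).
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

DSC rewards bulk overlap, whereas HD penalizes the single worst boundary error, which is why the two are reported together.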
The mean DSC achieved using FCM was 86.3%, with a minimum of 81.6% and a maximum of 91.7%. The average DSC value indicated that our two-way approach segmented the entire test set efficiently, with an accuracy of 88.7%. The precision, which signifies the resemblance between the automatically and manually segmented prostate, averaged 86.7% and ranged from 81.3 to 94.1%. The HD, representing the distance between the two borders, had an average value of 8.85 mm, ranging from 6.3 to 9.7 mm. The ASD indicated a small error percentage in prostate segmentation, with an average of 1.9 mm, fluctuating between 1.5 and 3.7 mm. Using the FCM-DM technique, the average values of the indices were: DSC, 87.1%; precision, 87.1%; HD, 8.48 mm; and ASD, 1.81 mm. Similarly, the average values for the SAE and SAE-DM techniques were 88.1%, 87.0%, 7.87 mm, and 1.58 mm, and 89.4%, 89.2%, 7.78 mm, and 1.52 mm, respectively.
Once the prostate had been segmented and features effectively extracted, rule-based computer-assisted detection identified clinically significant and clinically insignificant prostate cancer regions. In our study, lesions in both the peripheral zone and the transition zone were evaluated. We deployed k-NN, SVM, and RF classifiers to perform the tumor classification tasks. All classifiers were trained to predict the presence of clinically significant and clinically insignificant prostate cancer regions in the segmented prostate and were evaluated on the test dataset. The combination of these classifiers with the SAE has not been extensively explored in prostate cancer analysis; therefore, we used this combination to predict clinically significant and clinically insignificant prostate cancer regions. The proposed system was assessed using the area under the receiver operating characteristic curve (AUC), and true positives, true negatives, false positives, and false negatives were determined. Additionally, we computed the accuracy, sensitivity, and specificity of the classifiers used in our research.
A total of 218 lesions were analyzed using the proposed system, including 53 clinically insignificant lesions and 165 clinically significant lesions. Among the 218 lesions, 23 were PI-RADS 1, 30 were PI-RADS 2, 49 were PI-RADS 3, 62 were PI-RADS 4, and 54 were PI-RADS 5. The identified regions were assigned PI-RADS grades 1 to 5 with reasonable accuracy, sensitivity, and specificity. We achieved good classification results, with an accuracy of 90.4%, because the classifiers were trained on sufficiently large datasets. The performance of the various modalities is summarized in [Table 2]; the highest classifier performance was obtained with the combined modalities.
Abbreviations: ADC, apparent diffusion coefficient; AUC, area under the curve; DWI, diffusion-weighted imaging; PC, prostate cancer.
The accuracy obtained using the T2W modality was 87.63%, while the ADC and DWI modalities provided accuracies of 89.32 and 88.53%, respectively. The sensitivity and specificity achieved using the T2W modality were 86.33 and 86.13%, respectively, while DWI provided 87.59% for both sensitivity and specificity. The AUC values for T2W, ADC, and DWI were 0.86, 0.88, and 0.87, respectively. These values were obtained by evaluating each modality individually and were lower than those obtained by combining all three modalities. The accuracy, sensitivity, specificity, and AUC obtained with the combined modality were 90.48%, 91.25%, 91.17%, and 0.91, respectively. [Fig. 3] illustrates the ROC curves of the different modalities and classifiers used in the presented work. The performance of the classifiers was also compared using AUC values: the highest AUC, 0.91, was obtained with the RF classifier, while the k-NN and SVM classifiers yielded 0.87 and 0.89, respectively.
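The AUC values above have a simple probabilistic reading: the AUC equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A sketch of that rank-based (Mann-Whitney) computation (illustrative only; library routines such as scikit-learn's `roc_auc_score` are the practical choice, and the function name here is ours):

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation.

    Counts, over all positive/negative pairs, how often the positive
    case receives the higher score (ties count as 0.5).
    """
    labels = np.asarray(labels)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]           # all pairwise score gaps
    wins = (diff > 0).sum() + 0.5 * (diff == 0).sum()
    return wins / (len(pos) * len(neg))
```

An AUC of 0.91, as reported for the RF classifier on the combined modality, therefore means a 91% chance that a clinically significant lesion outranks an insignificant one.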


Discussion
A supervised stacked sparse autoencoder approach was used in the current research to automatically segment the prostate, with and without a deformable model. [Fig. 4] provides qualitative outcomes from four distinct example cases obtained using the proposed segmentation technique.


The segmentation results from our approach are shown in yellow, while the manually created ground truth, annotated by skilled radiologists, is depicted as a red contour. The segmented outcome of every 12th case in the test set (from the initial fold of the experiment) is showcased in [Fig. 4], along with the corresponding index values.
The fivefold analysis yielded quantitative evaluation readings for all 52 test cases from our dataset. The proposed system delivered significant performance in prostate segmentation using bpMRI, based on a comparison of visual results with quantitative measures. Initially, we examined the findings of our CAD system by analyzing supervised stacked sparse autoencoders without a deformable model and with a deformable model for each data fold.
The mean values of each index across all five folds showed low variation and small standard deviations, indicating the reliability and reproducibility of the proposed method. The mean DSC value (87.61% ± 2.50%) suggests that the proposed system successfully segmented all 52 test cases with a satisfactory level of accuracy and a reasonable standard deviation.
The precision of the automated segmentation was 88.04%, with a small standard deviation of 2.71%. This indicates a strong similarity between the volumes of the automatically segmented regions and those manually segmented by an experienced radiologist. Additionally, the HD was 7.62 mm with a standard deviation of 0.85 mm, representing the largest minimal distance between the two boundaries. The ASD was 1.87 mm with a standard deviation of 0.35 mm, indicating a low segmentation error rate. The small standard deviations obtained in our experiments demonstrate the robustness and consistency of the proposed method. These quantitative assessments are further supported by visual examination, reinforcing the reliability of the findings.
[Table 3] compares the presented method with existing methods. While others used binary classification, we used a multiclass classification method. Direct comparison with results reported on private datasets is difficult because of differences in the data used. In our study, we used the publicly available PROMISE online dataset, which consists of 80 cases, along with a private dataset to validate our results. Our method demonstrated superior performance compared with existing methods, particularly in accurately segmenting the prostate.
Abbreviation: DSC, Dice similarity coefficient.
Our approach aligns with recent literature demonstrating that bpMRI—utilizing PI-RADS v2.1 scoring—can provide diagnostic accuracy comparable to mpMRI for detecting clinically significant prostate cancer.[21] [22] [23] Particularly, PI-RADS v2.1 guidelines indicate that when T2-weighted imaging and DWI/ADC sequences are of diagnostic quality, DCE imaging plays a minor role in determining the PI-RADS assessment category. Therefore, the omission of DCE imaging does not prohibit the application of PI-RADS v2.1 in bpMRI protocols.
Limitation of Study
In our study, we included images that contain only one lesion; thus, a limitation of the presented approach is that it is oriented toward single-lesion image collections. In future work, we will include examination of images of patients with multiple lesions in the segmented prostate, and we also aim to detect and predict cancer in the central zone of the prostate by including DCE images.
Conclusion
A computer-assisted diagnosis system was designed to detect and predict the presence of cancer in the prostate. A two-way segmentation approach was used to segment the prostate in bpMRI, and the proposed method automatically located the prostate with good segmentation accuracy. A stacked autoencoder was used to overcome challenges such as blurred prostate boundaries and variations in prostate shape and size among subjects by extracting hidden features. To precisely locate clinically significant regions within the segmented prostate, a rule-based tumor detection methodology was employed. Higher classification accuracy and AUC were achieved using the combined modalities compared with individual modalities. The RF classifier provided reasonably enhanced performance compared with the k-NN and SVM classifiers.
Conflict of Interest
None declared.
Authors' Contributions
The manuscript has been read and approved by all authors and each author believes that the manuscript represents honest and genuine work. All requirements for authorship have been met while drafting this manuscript.
References
- 1 Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin 2018; 68 (06) 394-424
- 2 Jain S, Saxena S, Kumar A. Epidemiology of prostate cancer in India. Meta Gene 2014; 2: 596-605
- 3 Weinreb JC, Barentsz JO, Choyke PL. et al. PI-RADS prostate imaging–reporting and data system: 2015, version 2. Eur Urol 2016; 69 (01) 16-40
- 4 Reisæter LA, Fütterer JJ, Halvorsen OJ. et al. 1.5-T multiparametric MRI using PI-RADS: a region by region analysis to localize the index-tumor of prostate cancer in patients undergoing prostatectomy. Acta Radiol 2015; 56 (04) 500-511
- 5 Rosenkrantz AB, Ayoola A, Hoffman D. et al. The learning curve in prostate MRI interpretation: self-directed learning versus continual reader feedback. AJR Am J Roentgenol 2017; 208 (03) W92-W100
- 6 Gatti M, Faletti R, Calleris G. et al. Prostate cancer detection with biparametric magnetic resonance imaging (bpMRI) by readers with different experience: performance and comparison with multiparametric (mpMRI). Abdom Radiol (NY) 2019; 44 (05) 1883-1893
- 7 Erickson BJ, Korfiatis P, Akkus Z, Kline TL. Machine learning for medical imaging. Radiographics 2017; 37 (02) 505-515
- 8 Alkadi R, Taher F, El-Baz A, Werghi N. A deep learning-based approach for the detection and localization of prostate cancer in T2 magnetic resonance images. J Digit Imaging 2019; 32 (05) 793-807
- 9 Zaharchuk G, Gong E, Wintermark M, Rubin D, Langlotz CP. Deep learning in neuroradiology. AJNR Am J Neuroradiol 2018; 39 (10) 1776-1784
- 10 Ginsburg SB, Algohary A, Pahwa S. et al. Radiomic features for prostate cancer detection on MRI differ between the transition and peripheral zones: preliminary findings from a multi-institutional study. J Magn Reson Imaging 2017; 46 (01) 184-193
- 11 Wu M, Krishna S, Thornhill RE, Flood TA, McInnes MDF, Schieda N. Transition zone prostate cancer: logistic regression and machine-learning models of quantitative ADC, shape and texture features are highly accurate for diagnosis. J Magn Reson Imaging 2019; 50 (03) 940-950
- 12 Cheng R, Roth HR, Lu L. et al. Active appearance model and deep learning for more accurate prostate segmentation on MRI. Med Imaging: Image Processing 2016; 9784: 678-686. SPIE
- 13 Shin HC, Orton MR, Collins DJ, Doran SJ, Leach MO. Stacked autoencoders for unsupervised feature learning and multiple organ detection in a pilot study using 4D patient data. IEEE Trans Pattern Anal Mach Intell 2013; 35 (08) 1930-1943
- 14 Fassia MK, Balasubramanian A, Woo S. et al. Deep learning prostate MRI segmentation accuracy and robustness: a systematic review. Radiol Artif Intell 2024; 6 (04) e230138
- 15 Wang S, Burtt K, Turkbey B, Choyke P, Summers RM. Computer aided-diagnosis of prostate cancer on multiparametric MRI: a technical review of current research. BioMed Res Int 2014; 2014 (01) 789561
- 16 Kalinic H. Atlas-Based Image Segmentation: A Survey. Croatian Sci Bibliography; 2009: 1-7
- 17 Vos PC, Barentsz JO, Karssemeijer N, Huisman HJ. Automatic computer-aided detection of prostate cancer based on multiparametric magnetic resonance image analysis. Phys Med Biol 2012; 57 (06) 1527-1542
- 18 Guo Y, Gao Y, Shen D. Deformable MR prostate segmentation via deep feature learning and sparse patch matching. IEEE Trans Med Imaging 2016; 35 (04) 1077-1089
- 19 Cheng R, Roth HR, Lu L. et al. Active appearance model and deep learning for more accurate prostate segmentation on MRI. Paper presented at: Medical Imaging. Vol. 9784 of Proceedings of the SPIE:678–686; 2016
- 20 Domingues I, Pereira G, Martins P. et al. Using deep learning techniques in medical imaging: a systematic review of applications on CT and PET. Artif Intell Rev 2020; 53 (06) 4093-4160
- 21 Bertelli E, Vizzi M, Marzi C. et al. Biparametric vs. Multiparametric MRI in the detection of cancer in transperineal targeted-biopsy-proven peripheral prostate cancer lesions classified as PI-RADS Score 3 or 3+1: the added value of ADC quantification. Diagnostics (Basel) 2024; 14 (15) 1608
- 22 Schelb P, Kohl S, Radtke JP. et al. Classification of cancer at prostate MRI: deep learning versus clinical PI-RADS assessment. Radiology 2019; 293 (03) 607-617
Publication History
Article published online:
15 December 2025
© 2025. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution License, permitting unrestricted use, distribution, and reproduction so long as the original work is properly cited. (https://creativecommons.org/licenses/by/4.0/)
Thieme Medical and Scientific Publishers Pvt. Ltd.
A-12, 2nd Floor, Sector 2, Noida-201301 UP, India
References
- 1 Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin 2018; 68 (06) 394-424
- 2 Jain S, Saxena S, Kumar A. Epidemiology of prostate cancer in India. Meta Gene 2014; 2: 596-605
- 3 Weinreb JC, Barentsz JO, Choyke PL. et al. PI-RADS prostate imaging–reporting and data system: 2015, version 2. Eur Urol 2016; 69 (01) 16-40
- 4 Reisæter LA, Fütterer JJ, Halvorsen OJ. et al. 1.5-T multiparametric MRI using PI-RADS: a region by region analysis to localize the index-tumor of prostate cancer in patients undergoing prostatectomy. Acta Radiol 2015; 56 (04) 500-511
- 5 Rosenkrantz AB, Ayoola A, Hoffman D. et al. The learning curve in prostate MRI interpretation: self-directed learning versus continual reader feedback. AJR Am J Roentgenol 2017; 208 (03) W92-W100
- 6 Gatti M, Faletti R, Calleris G. et al. Prostate cancer detection with biparametric magnetic resonance imaging (bpMRI) by readers with different experience: performance and comparison with multiparametric (mpMRI). Abdom Radiol (NY) 2019; 44 (05) 1883-1893
- 7 Erickson BJ, Korfiatis P, Akkus Z, Kline TL. Machine learning for medical imaging. Radiographics 2017; 37 (02) 505-515
- 8 Alkadi R, Taher F, El-Baz A, Werghi N. A deep learning-based approach for the detection and localization of prostate cancer in T2 magnetic resonance images. J Digit Imaging 2019; 32 (05) 793-807
- 9 Zaharchuk G, Gong E, Wintermark M, Rubin D, Langlotz CP. Deep learning in neuroradiology. AJNR Am J Neuroradiol 2018; 39 (10) 1776-1784
- 10 Ginsburg SB, Algohary A, Pahwa S. et al. Radiomic features for prostate cancer detection on MRI differ between the transition and peripheral zones: preliminary findings from a multi-institutional study. J Magn Reson Imaging 2017; 46 (01) 184-193
- 11 Wu M, Krishna S, Thornhill RE, Flood TA, McInnes MDF, Schieda N. Transition zone prostate cancer: logistic regression and machine-learning models of quantitative ADC, shape and texture features are highly accurate for diagnosis. J Magn Reson Imaging 2019; 50 (03) 940-950
- 12 Cheng R, Roth HR, Lu L. et al. Active appearance model and deep learning for more accurate prostate segmentation on MRI. Proc SPIE Med Imaging: Image Processing 2016; 9784: 678-686
- 13 Shin HC, Orton MR, Collins DJ, Doran SJ, Leach MO. Stacked autoencoders for unsupervised feature learning and multiple organ detection in a pilot study using 4D patient data. IEEE Trans Pattern Anal Mach Intell 2013; 35 (08) 1930-1943
- 14 Fassia MK, Balasubramanian A, Woo S. et al. Deep learning prostate MRI segmentation accuracy and robustness: a systematic review. Radiol Artif Intell 2024; 6 (04) e230138
- 15 Wang S, Burtt K, Turkbey B, Choyke P, Summers RM. Computer aided-diagnosis of prostate cancer on multiparametric MRI: a technical review of current research. BioMed Res Int 2014; 2014 (01) 789561
- 16 Kalinic H. Atlas-Based Image Segmentation: A Survey. Croatian Sci Bibliography; 2009: 1-7
- 17 Vos PC, Barentsz JO, Karssemeijer N, Huisman HJ. Automatic computer-aided detection of prostate cancer based on multiparametric magnetic resonance image analysis. Phys Med Biol 2012; 57 (06) 1527-1542
- 18 Guo Y, Gao Y, Shen D. Deformable MR prostate segmentation via deep feature learning and sparse patch matching. IEEE Trans Med Imaging 2016; 35 (04) 1077-1089
- 19 Cheng R, Roth HR, Lu L. et al. Active appearance model and deep learning for more accurate prostate segmentation on MRI. Paper presented at: Medical Imaging. Vol. 9784 of Proceedings of the SPIE:678–686; 2016
- 20 Domingues I, Pereira G, Martins P. et al. Using deep learning techniques in medical imaging: a systematic review of applications on CT and PET. Artif Intell Rev 2020; 53 (06) 4093-4160
- 21 Bertelli E, Vizzi M, Marzi C. et al. Biparametric vs. multiparametric MRI in the detection of cancer in transperineal targeted-biopsy-proven peripheral prostate cancer lesions classified as PI-RADS score 3 or 3+1: the added value of ADC quantification. Diagnostics (Basel) 2024; 14 (15) 1608
- 22 Schelb P, Kohl S, Radtke JP. et al. Classification of cancer at prostate MRI: deep learning versus clinical PI-RADS assessment. Radiology 2019; 293 (03) 607-617
- 23 Siegel C. Re: classification of cancer at prostate MRI: deep learning versus clinical PI-RADS assessment. J Urol 2020; 204 (03) 597
- 24 Tian Z, Liu L, Zhang Z, Fei B. Superpixel-based segmentation for 3D prostate MR images. IEEE Trans Med Imaging 2016; 35 (03) 791-801
- 25 Vincent G, Guillard G, Bowes M. Fully automatic segmentation of the prostate using active appearance models. In: MICCAI Grand Challenge: Prostate MR Image Segmentation 2012; 2012
- 26 Mahapatra D, Buhmann JM. Prostate MRI segmentation using learned semantic knowledge and graph cuts. IEEE Trans Biomed Eng 2014; 61 (03) 756-764
- 27 Toth R, Madabhushi A. Multifeature landmark-free active appearance models: application to prostate MRI segmentation. IEEE Trans Med Imaging 2012; 31 (08) 1638-1650
- 28 Milletari F, Navab N, Ahmadi SA. V-net: Fully convolutional neural networks for volumetric medical image segmentation. Paper presented at: 2016 Fourth International Conference on 3D Vision (3DV); 2016: 565-571
- 29 Zhu Q, Du B, Turkbey B. et al. Deeply-supervised CNN for prostate segmentation. Paper presented at: 2017 International Joint Conference on Neural Networks (IJCNN); 2017: 178-184
- 30 Liao S, Gao Y, Oto A. et al. Representation learning: a unified deep learning framework for automatic prostate MR segmentation. Paper presented at: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2013: 16th International Conference, Nagoya, Japan, September 22–26, 2013, Proceedings, Part II 16; 2013: 254-261
- 31 Li A, Li C, Wang X. et al. Automated segmentation of prostate MR images using prior knowledge enhanced random walker. Paper presented at: 2013 International Conference on Digital Image Computing: Techniques and Applications (DICTA): IEEE; 2013: 1-7
- 32 Yu L, Yang X, Chen H. et al. Volumetric ConvNets with mixed residual connections for automated prostate segmentation from 3D MR images. Paper presented at: Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 31; No. 1; 2017
- 33 Meglič J, Sunoqrot MRS, Bathen TF, Elschot M. Label-set impact on deep learning-based prostate segmentation on MRI. Insights Imaging 2023; 14 (01) 157