Endoscopy 2020; 52(S 01): S329
DOI: 10.1055/s-0040-1705298
ESGE Days 2020 ePoster presentations
Thursday, April 23, 2020 09:00 – 17:00 Endoscopic technology ePoster area
© Georg Thieme Verlag KG Stuttgart · New York

AUTOMATED CLASSIFICATION OF GASTRIC NEOPLASMS IN ENDOSCOPIC IMAGES USING A CONVOLUTIONAL NEURAL NETWORK

CS Bang 1, BJ Cho 2, GH Baik 2
1   Hallym University College of Medicine, Internal Medicine, Chuncheon, Korea, Republic of
2   Hallym University College of Medicine, Chuncheon, Korea, Republic of
Publication Date: 23 April 2020 (online)

Aims Visual inspection, lesion detection, and differentiation between malignant and benign features are key aspects of an endoscopist's role. The use of machine learning for the recognition and differentiation of images has been increasingly adopted in clinical practice. This study aimed to establish convolutional neural network (CNN) models to automatically classify gastric neoplasms based on endoscopic images.

Methods Endoscopic white-light images of pathologically confirmed gastric lesions were collected and classified into five categories: advanced gastric cancer, early gastric cancer, high-grade dysplasia, low-grade dysplasia, and non-neoplasm. Three pretrained CNN models were fine-tuned using a training dataset. The classification performance of the models was evaluated using a test dataset and a prospective validation dataset.

Results A total of 5017 images were collected from 1269 patients, among which 812 images from 212 patients were used as the test dataset. An additional 200 images from 200 patients were collected and used for prospective validation. For the five-category classification, the weighted average accuracy of the Inception-ResNet-v2 model reached 84.6 %. The mean areas under the curve (AUC) of the model for differentiating gastric cancer and neoplasm were 0.877 and 0.927, respectively. In prospective validation, the Inception-ResNet-v2 model showed lower performance than the endoscopist with the best performance (five-category accuracy 76.4 % vs. 87.6 %; cancer 76.0 % vs. 97.5 %; neoplasm 73.5 % vs. 96.5 %; P < 0.001). However, there was no statistical difference between the Inception-ResNet-v2 model and the endoscopist with the worst performance in the differentiation of gastric cancer (accuracy 76.0 % vs. 82.0 %) and neoplasm (AUC 0.776 vs. 0.865).
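The metrics reported above (five-category accuracy plus binary AUCs for the cancer and neoplasm groupings) can be sketched as below. The toy predictions and the mapping of the five categories into binary groups (AGC + EGC as "cancer"; everything except non-neoplasm as "neoplasm") are illustrative assumptions, not the study's data or definitions.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Assumed category ordering: 0=AGC, 1=EGC, 2=HGD, 3=LGD, 4=non-neoplasm
y_true = np.array([0, 1, 2, 3, 4, 0, 1, 2])

# Toy softmax outputs, one row per image (illustrative, not study data).
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 5))
logits[np.arange(8), y_true] += 2.0  # bias scores toward the true class
y_prob = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Five-category accuracy (the abstract reports 84.6 % for Inception-ResNet-v2).
acc5 = accuracy_score(y_true, y_prob.argmax(axis=1))

# Binary "cancer" grouping: AGC + EGC vs. the rest (assumed mapping).
cancer_true = (y_true <= 1).astype(int)
cancer_score = y_prob[:, :2].sum(axis=1)
auc_cancer = roc_auc_score(cancer_true, cancer_score)

# Binary "neoplasm" grouping: all categories except non-neoplasm (assumed).
neo_true = (y_true <= 3).astype(int)
neo_score = y_prob[:, :4].sum(axis=1)
auc_neo = roc_auc_score(neo_true, neo_score)
```

Summing the class probabilities within each binary group before computing the AUC is one common way to derive a binary score from a multi-class classifier; the abstract does not state which method the authors used.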

Conclusions The evaluated deep-learning models have the potential for clinical application in classifying gastric cancer or neoplasm on endoscopic white-light images.