Article

Deep Learning Multi-Domain Model Provides Accurate Detection and Grading of Mucosal Ulcers in Different Capsule Endoscopy Types

1 Penta-AI, Tel Aviv 6701101, Israel
2 Faculty of Medicine, Ben-Gurion University of the Negev, Be’er Sheva 8410501, Israel
3 Department of Internal Medicine E, Sheba Medical Center, Tel Hashomer, Ramat Gan 5262100, Israel
4 Sackler School of Medicine, Tel Aviv University, P.O.B 39040, Tel Aviv 6997801, Israel
5 Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, Ramat Gan 5262100, Israel
6 Department of Internal Medicine F, Sheba Medical Center, Tel Hashomer, Ramat Gan 5262100, Israel
7 Internal Medicine B, Assuta Medical Center, Ashdod, Israel; Ben-Gurion University of the Negev, Be’er Sheva 8410501, Israel
8 Sami Sagol AI Hub, ARC, Sheba Medical Center, Tel Hashomer, Ramat Gan 5262100, Israel
* Author to whom correspondence should be addressed.
Diagnostics 2022, 12(10), 2490; https://doi.org/10.3390/diagnostics12102490
Submission received: 31 August 2022 / Revised: 6 October 2022 / Accepted: 8 October 2022 / Published: 14 October 2022
(This article belongs to the Special Issue Current and Future Use of Capsule Endoscopy)

Abstract

Background and Aims: The aim of our study was to create an accurate patient-level combined algorithm for the identification of ulcers on capsule endoscopy (CE) images from two different capsules. Methods: We retrospectively collected CE images from the PillCam SB3 capsule and the PillCam Crohn capsule. Machine learning (ML) algorithms were trained to classify small bowel CE images as either normal or ulcerated mucosa: a separate model for each capsule type, a cross-domain model (training the model on one capsule type and testing on the other), and a combined model. Results: The dataset included 33,100 CE images: 20,621 PillCam SB3 images and 12,479 PillCam Crohn images, of which 3582 were colonic images. There were 15,684 normal mucosa images and 17,416 ulcerated mucosa images. While the separate models for each capsule type achieved excellent accuracy (average AUCs of 0.95 and 0.98, respectively), the cross-domain model achieved a wide range of accuracies (0.569–0.88) with an AUC of 0.93. The combined model achieved the best results, with an average AUC of 0.99 and an average mean patient accuracy of 0.974. Conclusions: A combined model for two different capsules provided high and consistent diagnostic accuracy. Creating a holistic AI model for automated capsule reading is an essential part of the refinement required before ML models can be adapted to clinical practice.

1. Introduction

Capsule endoscopy (CE), in clinical use since 2000, is a reliable, noninvasive diagnostic tool that has revolutionized the assessment of the small-bowel mucosa [1,2,3,4,5]. CE is a sensitive and accurate clinical tool for diagnosing and monitoring Crohn’s disease (CD) [3,4,6,7,8,9,10] and has good prognostic value for relapse in patients in clinical remission [11]. CE is recommended in international Crohn’s disease guidelines together with cross-sectional imaging [12,13]. Despite the well-described merits of CE, the clinical performance of this modality could be further augmented by shortening reading time, improving interobserver variability, and implementing precise scoring algorithms. In the past few years, artificial intelligence (AI) deep learning algorithms known as convolutional neural networks (CNNs) have revolutionized the field of computer vision, offering remarkable, near-human accuracy in different image analysis tasks, including medical image analysis [14]. Several research groups have tested the ability of AI algorithms to diagnose various small intestinal lesions on CE, including bleeding [15,16,17], angioectasia [18,19,20], intestinal stricture [21], signs of celiac disease [22,23], and hookworm infection [24], achieving high sensitivity and specificity. In CD, deep learning has proven accurate in detecting and grading ulcers and strictures on CE [21,25,26,27,28,29,30,31,32,33]. There is still a long way to go before AI-based capsule reading algorithms can be implemented in clinical practice, and several challenges remain. The main potential obstacles to patient-level implementation are the marked variability of images between examinations, with marked dissimilarities in image characteristics such as color hue, brightness, and contrast, differences in ulcer shape and size, and the quality of bowel preparation.
Another challenge in developing machine learning algorithms is adapting them to various platforms and capsule types (single-head capsules, double-head capsules, different manufacturers, and future capsule endoscopes).
The aim of our study was to create an accurate cross-domain model for the identification and grading of ulcers on CE, using two capsule models (PillCam Crohn and PillCam SB3, Medtronic) in CD patients.

2. Materials and Methods

2.1. Study Design

We randomly selected CE videos of patients diagnosed with CD, as well as of healthy subjects, from our database and downloaded de-identified images of both ulcerated and normal mucosa. All patients were diagnosed and followed by the Department of Gastroenterology at Sheba Medical Center. The images were obtained with the PillCam SB3 and PillCam Crohn capsules (Medtronic Ltd., Dublin, Ireland) and reviewed with the Rapid 9 capsule reading software (Medtronic Ltd., Dublin, Ireland). The extracted images were labeled by gastroenterology fellows supervised by capsule experts. Both ulcers and erosions were considered “ulcerated mucosa.” For patients diagnosed with CD, we aimed to extract a comparable number of pathological and normal images. An institutional review board granted approval for this retrospective study.
Identification of small bowel ulcerated mucosa (Experiments 1–3):
Classification of CE images of the small bowel into normal mucosa and ulcerated mucosa was evaluated through 3 experiments:
Experiment 1: The model was trained on CE images from each of the capsules separately and was tested on images from the same capsule type.
Experiment 2 and experiment 3 evaluated “domain transfer”:
Experiment 2: The model that had been trained on CE images from each capsule type (in experiment 1) was tested on CE images from the other capsule type, i.e., the model was trained on CE images from the PillCam SB3 capsule and was tested on CE images from PillCam Crohn capsule, and vice versa (cross-domain).
Experiment 3: The model was trained and tested on a combined dataset of CE images from both capsule types (combined model).
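The three training/testing configurations above can be sketched as follows. This is a hypothetical illustration only: the image lists and the `train_and_test` function are placeholders, and the per-patient fold splitting used in the actual study is omitted for brevity.

```python
# Hypothetical sketch of experiments 1-3; the file lists and
# train_and_test() are placeholders, not the paper's actual pipeline.
sb3_images = [f"sb3_{i}.png" for i in range(4)]
crohn_images = [f"crohn_{i}.png" for i in range(4)]

experiments = {
    "exp1_separate": [(sb3_images, sb3_images),        # train and test on SB3
                      (crohn_images, crohn_images)],   # train and test on Crohn
    "exp2_cross":    [(sb3_images, crohn_images),      # train SB3, test Crohn
                      (crohn_images, sb3_images)],     # train Crohn, test SB3
    "exp3_combined": [(sb3_images + crohn_images,
                       sb3_images + crohn_images)],    # combined dataset
}

def train_and_test(train_set, test_set):
    """Placeholder: train the CNN on train_set, report metrics on test_set."""
    return {"n_train": len(train_set), "n_test": len(test_set)}

results = {name: [train_and_test(tr, te) for tr, te in runs]
           for name, runs in experiments.items()}
```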
Identification of colonic ulcerated mucosa (Experiment 4):
Classification of CE images of the colon into normal mucosa and ulcerated mucosa was done by training and testing the model on CE images from the PillCam Crohn capsule.
Ulcer grading (Experiment 5):
The model was trained and tested on a combined dataset of CE images of the small bowel from both capsule types. Differentiation between grade 1 (mild) ulceration and grade 3 (severe) ulceration was evaluated. Grade 2 (intermediate) ulcerations were omitted due to anticipated label noise: because they have intermediate properties, inter-reader agreement on them is lower, and according to a previous study [28], the performance of machine learning (ML) algorithms in defining them is expected to be lower as well.
Software and hardware:
The models were developed in Python (ver. 3.6.5, 64-bit), utilizing the open-source PyTorch and PyTorch Lightning libraries as the backend for the CNN algorithms and the open-source scikit-learn library (ver. 0.20.2) for evaluation metrics. Models were trained and evaluated on an Intel i7 CPU and a Tesla V100 GPU.

2.2. Neural Network Model

Deep learning is a subtype of AI mainly involving artificial neural networks. A CNN, a subtype of deep learning, is optimized for solving computer vision tasks by employing pattern recognition [34]. EfficientNet is a CNN architecture and scaling method that uniformly scales all dimensions of depth, width, and resolution using a compound coefficient. The compound scaling method is justified by the intuition that if the input image is bigger, the network needs more layers to increase the receptive field and more channels to capture finer-grained patterns in the bigger image. The base EfficientNet-B0 network is built on the inverted bottleneck residual blocks of MobileNetV2, with the addition of squeeze-and-excitation blocks. We used EfficientNet-B4 as the training network. Google’s EfficientNet family of networks has shown state-of-the-art results on the ImageNet dataset [35]. The network’s weights were initialized using weights trained on the 1.2 million everyday color images of ImageNet. All the computer vision tasks in the study were binary classification tasks using binary cross-entropy loss. The models’ final neurons were sigmoid neurons outputting class probabilities. The network was trained on capsule endoscopy images. In experiments 1 and 2, only a single capsule type was used for training, while in the multi-domain case (experiment 3), both PillCam SB3 and PillCam Crohn images were used for training, letting the network learn from two different modalities. Preprocessing of capsule images included cropping of the images’ borders and legends. Images were then resized to a 516 × 516 matrix, and pixel values were normalized to 0–1 by dividing by 255.
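The preprocessing steps described above (border crop, resize to 516 × 516, scale to 0–1) can be sketched as follows. This is a minimal illustration, not the study's code; in particular, the 32-pixel crop margin is our assumption, as the paper does not state the exact crop geometry.

```python
import numpy as np
from PIL import Image

def preprocess(img: Image.Image, border: int = 32) -> np.ndarray:
    """Crop the border/legend region, resize to 516x516, scale pixels to 0-1.

    The 32-pixel border width is a placeholder; the paper does not state
    the exact crop geometry.
    """
    w, h = img.size
    img = img.crop((border, border, w - border, h - border))  # drop borders/legends
    img = img.resize((516, 516), Image.BILINEAR)              # paper's input size
    return np.asarray(img, dtype=np.float32) / 255.0          # normalize to [0, 1]

# Usage with a synthetic frame standing in for a capsule image.
frame = Image.fromarray(np.random.randint(0, 256, (576, 576, 3), dtype=np.uint8))
x = preprocess(frame)
```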
The following parameters were used for training the network:
- For ulcer identification: 3 epochs; batch size 12; Adam optimization with a learning rate of 5 × 10⁻⁵. The network output was a binary classification layer: non-ulcerated mucosa versus ulcerated mucosa images.
- For ulcer grading: 3 epochs; batch size 12; Adam optimization with a learning rate of 5 × 10⁻⁵. The network output was a binary classification layer: grade 1 (mild) ulcerations vs. grade 3 (severe) ulcerations.
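The training setup above (Adam at 5 × 10⁻⁵, batch size 12, binary cross-entropy) can be sketched in PyTorch as follows. A tiny stand-in CNN replaces EfficientNet-B4 so the sketch runs without pretrained weights, and the random tensors stand in for (downsized) CE image batches.

```python
import torch
import torch.nn as nn

# Stand-in for EfficientNet-B4 (the paper's backbone); a tiny CNN keeps the
# sketch self-contained and quick to run.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 1),  # single logit; sigmoid is folded into the loss below
)

optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)  # as in the paper
loss_fn = nn.BCEWithLogitsLoss()  # binary cross-entropy, as in the paper

# One toy training step on random tensors standing in for 516x516 CE images
# (downsized here to keep the example fast).
images = torch.rand(12, 3, 64, 64)             # batch size 12, as in the paper
labels = torch.randint(0, 2, (12, 1)).float()  # 0 = normal, 1 = ulcerated

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

In practice this step would be repeated over all batches for the 3 epochs reported in the paper.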

2.3. Class Activation Maps

Class activation maps (CAMs) were used to analyze which image regions led to the network’s classification decisions regarding ulcers. For this purpose, we applied gradient-weighted class activation mapping (Grad-CAM) [36]. This algorithm uses the gradients of the target label, flowing into the final convolutional layer, to produce a coarse localization map highlighting the important regions in the image.
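The Grad-CAM computation can be sketched as follows on a toy network (the study applies the method of Selvaraju et al. [36] to the final convolutional layer of EfficientNet-B4; the tiny conv layer and head here are stand-ins):

```python
import torch
import torch.nn as nn

# Toy stand-ins for the backbone's final conv layer and classification head.
conv = nn.Conv2d(3, 4, kernel_size=3, padding=1)
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(4, 1))

x = torch.rand(1, 3, 32, 32)   # stand-in for a preprocessed CE image
feats = conv(x)                # activations of the target conv layer
feats.retain_grad()            # keep the gradients flowing into this layer
logit = head(feats)            # score for the target class (ulcer)
logit.sum().backward()         # gradients of the target score w.r.t. feats

# Channel weights = global-average-pooled gradients;
# CAM = ReLU of the weighted sum of feature maps.
weights = feats.grad.mean(dim=(2, 3), keepdim=True)
cam = torch.relu((weights * feats).sum(dim=1)).squeeze(0)
cam = cam / (cam.max() + 1e-8)  # normalize to [0, 1] for heatmap overlay
```

The resulting `cam` is the coarse localization map that, upsampled and overlaid on the input frame, highlights the suspected ulcer area.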

2.4. Metrics

Accuracies were calculated using a cut-off probability of 0.5. Receiver operating characteristic (ROC) curves were plotted for the network results by varying the operating threshold. The area under the ROC curve (AUC) and accuracies were calculated both patient-wise and for each of the five folds. Sub-analyses included the AUCs of ulcerated vs. non-ulcerated mucosa images and of grade 1 vs. grade 3 ulceration images.
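The image-level and patient-level metrics described above can be computed as in the following sketch, using scikit-learn and toy data standing in for the network's sigmoid outputs:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy predictions standing in for the network's sigmoid outputs.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])            # 1 = ulcerated mucosa
y_prob = np.array([0.1, 0.4, 0.8, 0.9, 0.3, 0.2, 0.7, 0.6])
patient = np.array([1, 1, 1, 2, 2, 2, 3, 3])           # patient ID per image

auc = roc_auc_score(y_true, y_prob)        # AUC over all images
y_pred = (y_prob >= 0.5).astype(int)       # cut-off probability of 0.5
acc = float((y_pred == y_true).mean())     # image-level accuracy

# Mean patient accuracy: accuracy within each patient's image set,
# then averaged across patients.
per_patient = [float((y_pred[patient == p] == y_true[patient == p]).mean())
               for p in np.unique(patient)]
mean_patient_acc = float(np.mean(per_patient))
```

In the study these quantities were additionally computed per fold, over five folds with non-overlapping patients.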

3. Results

3.1. Study Population

The entire dataset included 33,100 CE images. There were 20,621 PillCam SB3 CE images and 12,479 PillCam Crohn CE images, of which 3582 were from the colon. By findings, there were 15,684 normal mucosa images and 17,416 ulcerated mucosa images. Data regarding the small bowel CE images are presented in Table 1.
For the colon, we collected PillCam Crohn CE images: 1597 normal mucosa images and 1985 ulcerated mucosa images. Colonic ulcerated mucosa images were not graded.

3.2. Identification of Small Bowel Ulcerated Mucosa (Experiments 1–3)

The dataset included 20,621 PillCam SB3 CE images and 12,479 PillCam Crohn CE images of the small bowel.
Experiment 1: Separate models
The model was trained and tested separately on CE images from each capsule type. For both capsule types, classification accuracy was excellent, with an area under the curve (AUC) above 0.95 in most folds. For each capsule type, the ROC AUC, accuracy, and mean patient accuracy over five different folds, with non-overlapping patients in each, are presented in Table 2 and shown in Figure 1.
Experiment 2: Cross Domain
To justify the need for a global model, the models trained separately on each capsule type (“domain”) were tested on the other capsule type’s images. With the model trained on PillCam SB3 CE images and tested on PillCam Crohn CE images, the accuracy and mean patient accuracy were 0.569 and 0.545, respectively; nevertheless, it achieved a high AUC of 0.921 (Figure 2a). The model trained on PillCam Crohn CE images and tested on PillCam SB3 CE images achieved high accuracy (0.877) and a mean patient accuracy of 0.88, with an AUC of 0.948 (Figure 2b).
Experiment 3: Combined model
The model was trained and tested on a combined dataset of CE images from both capsule types and achieved excellent accuracy in the identification of ulcerated mucosa with an area under the curve (AUC) of over 0.98. Over five different folds, with non-overlapping patients in each, the ROC AUC, accuracy, and mean patient accuracy are presented in Table 3 and shown in Figure 3.

3.3. Identification of Colonic Ulcerated Mucosa (Experiment 4)

The model was trained and tested on 3582 CE images from the PillCam Crohn capsule: 1597 normal mucosa images and 1985 ulcerated mucosa images. There was excellent accuracy in the identification of colonic ulcerated mucosa, with an area under the curve (AUC) above 0.98 in most folds. Over five different folds, with non-overlapping patients in each, the ROC AUC, accuracy, and mean patient accuracy are presented in Table 4 and shown in Figure 4.

3.4. Ulcer Grading (Experiment 5)

This experiment included 10,898 CE images of small bowel ulcerated mucosa from both capsule types. The model was pre-trained on both capsule types, as the combined model had produced the best results in ulcer identification. There was excellent accuracy in classifying ulcerations as grade 1 or grade 3, with an area under the curve (AUC) of 0.99. Over five different folds, with non-overlapping patients in each, the ROC AUC, accuracy, and mean patient accuracy are presented in Table 5, and ROC curves for each fold are shown in Figure 5.

3.5. Class Activation Map

Class activation maps (CAMs) were used to analyze which image regions led to the network’s classification of images. Gradient-weighted class activation mapping (Grad-CAM) produces a coarse localization map highlighting the important regions in the image. Applying this algorithm to the images enables visual presentation of the ulcer area in the CE images (Figure 6).

4. Discussion

In recent years, several studies have examined the utility and efficacy of ML in the identification of different pathologies of the small bowel mucosa. Previous studies [25,26,27,28,29,30,31,32,33] have demonstrated the high accuracy of ML models in the identification and grading of small bowel ulcers in CE images of CD patients. A recent meta-analysis of the identification of gastrointestinal ulcers showed a combined sensitivity of 93% and a combined specificity of 92% [37]. These studies are almost exclusively limited to a single capsule type, meaning the ML model is trained and tested on CE images of one capsule type.
Our study shows that the CNN algorithm was able to detect ulcerations in established CD patients with an AUC of 0.98 and above. Applying the same algorithm to CE images originating from both capsule types, PillCam SB3 and PillCam Crohn, produced accurate diagnostic capability with a high AUC.
The present study reinforces the results of previous studies on the ability of ML to detect and grade small bowel ulcers of CD patients, here in two capsule types commonly used worldwide. In addition, it shows the same ability to detect ulcers in the colon. The uniqueness of this study lies in the development of a combined model for two different capsules, which not only preserved accuracy but improved it. This is part of the refinement required in ML models on the way to adapting them to clinical practice, where we hope there will be one algorithm that can be adapted to all existing and future capsule types.
Recently, Houdeville et al. [38] addressed this issue in a different clinical setting: the detection of angiectasias in small bowel CE images. They evaluated an ML algorithm trained on CE images from the PillCam SB3 capsule on CE images from another manufacturer’s capsule (Mirocam). The model achieved high sensitivity and specificity (96.1% and 97.8%, respectively). This resembles part of experiment 2 (cross-domain) in our study, but with a different lesion type and in only one direction; they did not test training the model on Mirocam images and testing it on PillCam SB3 CE images. In our study, the cross-domain model showed high accuracy when trained on PillCam Crohn CE images and tested on PillCam SB3 CE images, but low accuracy (0.569) in the opposite direction. Their study has the advantage of dealing with capsules from different manufacturers. In our study, however, we took another step and examined a combined model on a combined dataset of CE images from two capsule types, analyzing both the identification and the grading of ulcers.
Another aspect examined in our study is the analysis of the performance of the model on individual patients, i.e., the identification of ulcers in a set of images from the same patient. The individual patient-level analysis provided high and consistent diagnostic accuracy with shortened reading time. This is also more appropriate for clinical practice, where the algorithm is supposed to read one capsule of one patient at a time.
As for misdetections, false-negative errors may be secondary to the small diameter of an ulcer or its suboptimal visibility due to contents in the bowel lumen. In the diagnosis of ulcers and aphthae in Crohn’s disease, the clinical importance of missing a single ulcer or aphtha is low; it matters only for quantifying the inflammation. Misdetections can be more significant in the diagnosis of other pathologies, such as angiodysplasias or polyps. Regarding visibility, future algorithms should include an assessment of the degree of cleanliness, so that the model can alert in case of low visibility.
This study had several limitations. First, the analysis was devoted to ulcerations and aphthae only; any other pathology will require similar training. In the future, there may be a combined model that includes all possible small bowel lesions. Second, although we used CE images from two different capsule types, both are from the same manufacturer, suggesting they may share common imaging characteristics that contributed to the excellent accuracy of the combined model. Future studies should include images from capsules from different manufacturers. Third, the number of evaluated colon images in this study was substantially smaller than the number of evaluated small bowel images. However, this is only a secondary outcome of this study, since the colonic images are from one capsule type, the PillCam Crohn capsule, and a combined model for colonic images is beyond the scope of this study. Future studies should also address additional small bowel and colonic pathologies, including other inflammatory etiologies. Finally, as in previous studies, we used separate still CE images in the analysis; the next step should be to identify and quantify signs of inflammation in an individual video.

5. Conclusions

While single capsule models performed well on validation sets from the same domain, they performed poorly on the other capsule’s test sets. Developing a combined model for two different capsules provided high and consistent diagnostic accuracy. Creating a holistic AI model for automated capsule reading is an essential part of the refinement required in ML models on the way to adapting them to clinical practice.

Author Contributions

Conceptualization, U.K.; methodology, U.K., E.K., and T.K.; software, T.K., N.S., Y.L., O.M., and Y.M.; formal analysis, T.K., N.S., Y.L., O.M., and Y.M.; data curation, O.S., R.S., A.A., O.U., S.K., L.D., and R.M.Y.; writing—original draft preparation, R.M.Y., and T.K.; writing—review and editing, S.S., U.K., E.K., S.B.H., and R.E.; supervision, U.K., R.E., and R.M.Y.; project administration, U.K., T.K., and R.M.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Sheba Medical Center.

Informed Consent Statement

Patient consent was not required as we used a database of de-identified images.

Data Availability Statement

Data are available upon request from the corresponding author.

Conflicts of Interest

S.B.H. has received consulting and advisory board fees and/or research support from AbbVie, MSD, Janssen, Takeda, and CellTrion. U.K. has received speaker fees from Abbvie, Janssen, and Takeda; research support from Takeda and Janssen; and consulting fees from Takeda and CTS. R.E. has received advisory and/or research support from Abbvie, Janssen, Takeda, and Medtronic. R.M.Y. has received consulting fees from Medtronic. None of the other authors have any conflicts to declare.

References

1. Kopylov, U.; Seidman, E.G. Diagnostic modalities for the evaluation of small bowel disorders. Curr. Opin. Gastroenterol. 2015, 31, 111–117.
2. Kopylov, U.; Seidman, E.G. Clinical applications of small bowel capsule endoscopy. Clin. Exp. Gastroenterol. 2013, 6, 129–137.
3. Eliakim, R. Video capsule endoscopy of the small bowel. Curr. Opin. Gastroenterol. 2008, 24, 159–163.
4. Pennazio, M.; Spada, C.; Eliakim, R.; Keuchel, M.; May, A.; Mulder, C.J.; Rondonotti, E.; Adler, S.N.; Albert, J.; Baltes, P.; et al. Small-bowel capsule endoscopy and device-assisted enteroscopy for diagnosis and treatment of small-bowel disorders: European Society of Gastrointestinal Endoscopy (ESGE) clinical guideline. Endoscopy 2015, 47, 352–386.
5. Mishkin, D.S.; Chuttani, R.; Croffie, J.; DiSario, J.; Liu, J.; Shah, R.; Somogyi, L.; Tierney, W.; Song, L.M.K.; Petersen, B.T.; et al. ASGE technology status evaluation report: Wireless capsule endoscopy. Gastrointest. Endosc. 2006, 63, 539–545.
6. Kopylov, U.; Koulaouzidis, A.; Klang, E.; Carter, D.; Ben-Horin, S.; Eliakim, R. Monitoring of small bowel Crohn’s disease. Exp. Rev. Gastroenterol. Hepatol. 2017, 11, 1047–1058.
7. Melmed, G.Y.; Dubinsky, M.C.; Rubin, D.T.; Fleisher, M.; Pasha, S.F.; Sakuraba, A.; Tiongco, F.; Shafran, I.; Fernandez-Urien, I.; Rosa, B.; et al. Utility of video capsule endoscopy for longitudinal monitoring of Crohn’s disease activity in the small bowel: A prospective study. Gastrointest. Endosc. 2018, 88, 947–955.
8. Eliakim, R. Video capsule endoscopy of the small bowel. Curr. Opin. Gastroenterol. 2010, 26, 129–133.
9. Waterman, M.; Eliakim, R. Capsule enteroscopy of the small intestine. Abdom. Imaging 2009, 34, 452–458.
10. Kopylov, U.; Nemeth, A.; Koulaouzidis, A.; Makins, R.; Wild, G.; Afif, W.; Bitton, A.; Johansson, G.W.; Bessissow, T.; Eliakim, R.; et al. Small bowel capsule endoscopy in the management of established Crohn’s disease: Clinical impact, safety, and correlation with inflammatory biomarkers. Inflamm. Bowel Dis. 2015, 21, 93–100.
11. Ben-Horin, S.; Lahat, A.; Amitai, M.M.; Klang, E.; Yablecovitch, D.; Neuman, S.; Levhar, N.; Selinger, L.; Rozendorn, N.; Turner, D.; et al. Assessment of small bowel mucosal healing by video capsule endoscopy for the prediction of short-term and long-term risk of Crohn’s disease flare: A prospective cohort study. Lancet Gastroenterol. Hepatol. 2019, 4, 519–528.
12. Maaser, C.; Sturm, A.; Vavricka, S.R.; Kucharzik, T.; Fiorino, G.; Annese, V.; Calabrese, E.; Baumgart, D.C.; Bettenworth, D.; Nunes, P.B.; et al. ECCO-ESGAR Guideline for Diagnostic Assessment in IBD Part 1: Initial diagnosis, monitoring of known IBD, detection of complications. J. Crohn’s Colitis 2019, 13, 144–164.
13. Sturm, A.; Maaser, C.; Calabrese, E.; Annese, V.; Fiorino, G.; Kucharzik, T.; Vavricka, S.R.; Verstockt, B.; van Rheenen, P.; Tolan, D.; et al. ECCO-ESGAR Guideline for Diagnostic Assessment in IBD Part 2: IBD scores and general principles and technical aspects. J. Crohn’s Colitis 2019, 13, 273–284.
14. Soffer, S.; Ben-Cohen, A.; Shimon, O.; Amitai, M.M.; Greenspan, H.; Klang, E. Convolutional Neural Networks for Radiologic Images: A Radiologist’s Guide. Radiology 2019, 290, 590–606.
15. Jia, X.; Meng, M.Q.-H. A deep convolutional neural network for bleeding detection in wireless capsule endoscopy images. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; Volume 639.
16. Aoki, T.; Yamada, A.; Kato, Y.; Saito, H.; Tsuboi, A.; Nakada, A.; Niikura, R.; Fujishiro, M.; Oka, S.; Ishihara, S.; et al. Automatic detection of blood content in capsule endoscopy images based on a deep convolutional neural network. J. Gastroenterol. Hepatol. 2020, 35, 1196–1200.
17. Jia, X.; Meng, M.Q.-H. Gastrointestinal bleeding detection in wireless capsule endoscopy images using handcrafted and CNN features. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju Island, Korea, 11–15 July 2017; Volume 3154.
18. Leenhardt, R.; Vasseur, P.; Li, C.; Saurin, J.C.; Rahmi, G.; Cholet, F.; Becq, A.; Marteau, P.; Histace, A.; Dray, X. A neural network algorithm for detection of GI angiectasia during small-bowel capsule endoscopy. Gastrointest. Endosc. 2019, 89, 189–194.
19. Tsuboi, A.; Oka, S. Artificial intelligence using a convolutional neural network for automatic detection of small-bowel angioectasia in capsule endoscopy images. Dig. Endosc. 2020, 32, 382–390.
20. Mascarenhas Saraiva, M.; Ribeiro, T.; Afonso, J.; Andrade, P.; Cardoso, P.; Ferreira, J.; Cardoso, H.; Macedo, G. Deep Learning and Device-Assisted Enteroscopy: Automatic Detection of Gastrointestinal Angioectasia. Medicina 2021, 57, 1378.
21. Klang, E.; Grinman, A.; Soffer, S.; Margalit Yehuda, R.; Barzilay, O.; Amitai, M.M.; Konen, E.; Ben-Horin, S.; Eliakim, R.; Barash, Y.; et al. Automated Detection of Crohn’s Disease Intestinal Strictures on Capsule Endoscopy Images Using Deep Neural Networks. J. Crohn’s Colitis 2021, 15, 749–756.
22. Wang, X.; Qian, H.; Ciaccio, E.J.; Lewis, S.K.; Bhagat, G.; Green, P.H.; Xu, S.; Huang, L.; Gao, R.; Liu, Y. Celiac disease diagnosis from video capsule endoscopy images with residual learning and deep feature extraction. Comput. Methods Programs Biomed. 2020, 187, 105236.
23. Stoleru, C.A.; Dulf, E.H.; Ciobanu, L. Automated detection of celiac disease using Machine Learning Algorithms. Sci. Rep. 2022, 12, 4071.
24. He, J.Y.; Wu, X.; Jiang, Y.G.; Peng, Q.; Jain, R. Hookworm Detection in Wireless Capsule Endoscopy Images With Deep Learning. IEEE Trans. Image Process. 2018, 27, 2379–2392.
25. Klang, E.; Barash, Y.; Margalit, R.Y.; Soffer, S.; Shimon, O.; Albshesh, A.; Ben-Horin, S.; Amitai, M.M.; Eliakim, R.; Kopylov, U. Deep learning algorithms for automated detection of Crohn’s disease ulcers by video capsule endoscopy. Gastrointest. Endosc. 2020, 91, 606–613.e2.
26. Barash, Y.; Azaria, L.; Soffer, S.; Margalit Yehuda, R.; Shlomi, O.; Ben-Horin, S.; Eliakim, R.; Klang, E.; Kopylov, U. Ulcer severity grading in video capsule images of patients with Crohn’s disease: An ordinal neural network solution. Gastrointest. Endosc. 2021, 93, 187–192.
27. Fan, S.; Xu, L.; Fan, Y.; Wei, K.; Li, L. Computer-aided detection of small intestinal ulcer and erosion in wireless capsule endoscopy images. Phys. Med. Biol. 2018, 63, 165001.
28. Alaskar, H.; Hussain, A.; Al-Aseem, N.; Liatsis, P.; Al-Jumeily, D. Application of convolutional neural networks for automated ulcer detection in wireless capsule endoscopy images. Sensors 2019, 19, 1265.
29. Aoki, T.; Yamada, A.; Aoyama, K.; Saito, H.; Tsuboi, A.; Nakada, A.; Niikura, R.; Fujishiro, M.; Oka, S.; Ishihara, S.; et al. Automatic detection of erosions and ulcerations in wireless capsule endoscopy images based on a deep convolutional neural network. Gastrointest. Endosc. 2019, 89, 357–363.
30. Wang, S.; Xing, Y.; Zhang, L.; Gao, H.; Zhang, H. A systematic evaluation and optimization of automatic detection of ulcers in wireless capsule endoscopy on a large dataset using deep convolutional neural networks. Phys. Med. Biol. 2019, 64, 235014.
31. Klang, E.; Kopylov, U.; Mortensen, B.; Damholt, A.; Soffer, S.; Barash, Y.; Konen, E.; Grinman, A.; Yehuda, R.M.; Buckley, M.; et al. A Convolutional Neural Network Deep Learning Model Trained on CD Ulcers Images Accurately Identifies NSAID Ulcers. Front. Med. 2021, 8, 656493.
32. Afonso, J.; Saraiva, M.M.; Ferreira, J.P.S.; Cardoso, H.; Ribeiro, T.; Andrade, P.; Parente, M.; Jorge, R.N.; Macedo, G. Automated detection of ulcers and erosions in capsule endoscopy images using a convolutional neural network. Med. Biol. Eng. Comput. 2022, 60, 719–725.
33. Ferreira, J.P.S.; de Mascarenhas Saraiva, M.J.D.Q.E.C.; Afonso, J.P.L.; Ribeiro, T.F.C.; Cardoso, H.M.C.; Ribeiro Andrade, A.P.; de Mascarenhas Saraiva, M.N.G.; Parente, M.P.L.; Natal Jorge, R.; Lopes, S.I.O.; et al. Identification of Ulcers and Erosions by the Novel Pillcam™ Crohn’s Capsule Using a Convolutional Neural Network: A Multicentre Pilot Study. J. Crohn’s Colitis 2022, 16, 169–172.
34. Klang, E. Deep learning and medical imaging. J. Thorac. Dis. 2018, 10, 1325–1328.
35. Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv 2019, arXiv:1905.11946.
36. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. arXiv 2017, arXiv:1610.02391.
37. Bang, C.S.; Lee, J.J.; Baik, G.H. Computer-Aided Diagnosis of Gastrointestinal Ulcer and Hemorrhage Using Wireless Capsule Endoscopy: Systematic Review and Diagnostic Test Accuracy Meta-analysis. J. Med. Internet Res. 2021, 23, e33267.
38. Houdeville, C.; Souchaud, M.; Leenhardt, R.; Beaumont, H.; Benamouzig, R.; McAlindon, M.; Grimbert, S.; Lamarque, D.; Makins, R.; Saurin, J.C.; et al. A multisystem-compatible deep learning-based algorithm for detection and characterization of angiectasias in small-bowel capsule endoscopy. A proof-of-concept study. Dig. Liver Dis. 2021, 53, 1627–1631.
Figure 1. Experiment 1—Identification of small bowel ulcerated mucosa according to capsule type: (a) PillCam SB3 CE images, (b) PillCam Crohn CE images.
Figure 2. Experiment 2: (a) Classification of PillCam Crohn CE images by a model trained on PillCam SB3 CE images; (b) Classification of PillCam SB3 CE images by a model trained on PillCam Crohn CE images.
Figure 3. Experiment 3—Classification of PillCam Crohn CE images by a combined model.
Figure 4. Experiment 4—Identification of colonic ulcerated mucosa.
Figure 5. Experiment 5—Grading of mucosal ulcerations in small bowel CE images.
Figure 6. Class activation map.
Table 1. Small bowel CE images.
|               | PillCam SB3 | PillCam Crohn | Total  |
|---------------|-------------|---------------|--------|
| Normal        | 10,248      | 3839          | 14,087 |
| Ulcer grade 1 | 5398        | 2147          | 7545   |
| Ulcer grade 2 | 3006        | 1527          | 4533   |
| Ulcer grade 3 | 1969        | 1384          | 3353   |
| Total         | 20,621      | 8897          | 29,518 |
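As a consistency check, the small bowel counts in Table 1 can be reconciled with the overall totals reported in the abstract (33,100 images, of which 3582 are colonic). A minimal sketch, using only the numbers stated in the paper:

```python
# Image counts taken from Table 1 (small bowel only).
small_bowel = {
    "normal": 14_087,
    "ulcer_grade_1": 7_545,
    "ulcer_grade_2": 4_533,
    "ulcer_grade_3": 3_353,
}

sb_total = sum(small_bowel.values())             # small bowel images (Table 1 total)
sb_ulcerated = sb_total - small_bowel["normal"]  # ulcerated small bowel images
colonic_total = 33_100 - sb_total                # remainder is colonic (abstract: 3582)

print(sb_total, sb_ulcerated, colonic_total)     # 29518 15431 3582
```

The remainder matches the 3582 colonic images reported in the abstract, confirming that the small bowel and colonic subsets partition the full dataset.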
Table 2. Experiment 1—Identification of small bowel ulcerated mucosa according to capsule type.
PillCam SB3 CE images:

| Fold | Accuracy | Mean Patient Accuracy | ROC AUC |
|------|----------|-----------------------|---------|
| 0    | 0.966    | 0.977                 | 0.993   |
| 1    | 0.978    | 0.976                 | 0.995   |
| 2    | 0.974    | 0.985                 | 0.996   |
| 3    | 0.935    | 0.980                 | 0.983   |
| 4    | 0.975    | 0.982                 | 0.997   |

PillCam Crohn CE images:

| Fold | Accuracy | Mean Patient Accuracy | ROC AUC |
|------|----------|-----------------------|---------|
| 0    | 0.935    | 0.893                 | 0.982   |
| 1    | 0.915    | 0.908                 | 0.967   |
| 2    | 0.978    | 0.977                 | 0.997   |
| 3    | 0.954    | 0.954                 | 0.991   |
| 4    | 0.924    | 0.918                 | 0.974   |
Table 3. Classification of PillCam Crohn CE images by a combined model.
| Fold | Accuracy | Mean Patient Accuracy | ROC AUC |
|------|----------|-----------------------|---------|
| 0    | 0.941    | 0.978                 | 0.984   |
| 1    | 0.975    | 0.975                 | 0.998   |
| 2    | 0.958    | 0.982                 | 0.989   |
| 3    | 0.963    | 0.967                 | 0.991   |
| 4    | 0.963    | 0.968                 | 0.992   |
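The summary figures quoted in the abstract for the combined model (average AUC 0.99, average mean patient accuracy 0.974) follow from averaging the per-fold values in Table 3. A minimal sketch of that cross-fold aggregation:

```python
# Per-fold metrics of the combined model on PillCam Crohn CE images (Table 3).
folds = [
    {"accuracy": 0.941, "mean_patient_accuracy": 0.978, "roc_auc": 0.984},
    {"accuracy": 0.975, "mean_patient_accuracy": 0.975, "roc_auc": 0.998},
    {"accuracy": 0.958, "mean_patient_accuracy": 0.982, "roc_auc": 0.989},
    {"accuracy": 0.963, "mean_patient_accuracy": 0.967, "roc_auc": 0.991},
    {"accuracy": 0.963, "mean_patient_accuracy": 0.968, "roc_auc": 0.992},
]

def cross_fold_mean(key):
    """Average one metric over the five cross-validation folds."""
    return sum(f[key] for f in folds) / len(folds)

for key in ("accuracy", "mean_patient_accuracy", "roc_auc"):
    print(f"{key}: {cross_fold_mean(key):.3f}")
```

Averaging gives a mean ROC AUC of 0.991 and a mean patient accuracy of 0.974, consistent with the values reported in the abstract.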
Table 4. Experiment 4—Identification of colonic ulcerated mucosa.
| Fold | Accuracy | Mean Patient Accuracy | ROC AUC |
|------|----------|-----------------------|---------|
| 0    | 0.896    | 0.951                 | 0.980   |
| 1    | 0.478    | 0.740                 | 0.770   |
| 2    | 0.937    | 0.948                 | 0.989   |
| 3    | 0.931    | 0.905                 | 0.986   |
| 4    | 0.828    | 0.795                 | 0.946   |
Table 5. Experiment 5—Grading of mucosal ulcerations in small bowel CE images.
| Fold | Accuracy | Mean Patient Accuracy | ROC AUC |
|------|----------|-----------------------|---------|
| 0    | 0.972    | 0.979                 | 0.995   |
| 1    | 0.965    | 0.954                 | 0.992   |
| 2    | 0.960    | 0.925                 | 0.990   |
| 3    | 0.950    | 0.916                 | 0.950   |
| 4    | 0.948    | 0.932                 | 0.989   |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Kratter, T.; Shapira, N.; Lev, Y.; Mauda, O.; Moshkovitz, Y.; Shitrit, R.; Konyo, S.; Ukashi, O.; Dar, L.; Shlomi, O.; et al. Deep Learning Multi-Domain Model Provides Accurate Detection and Grading of Mucosal Ulcers in Different Capsule Endoscopy Types. Diagnostics 2022, 12, 2490. https://doi.org/10.3390/diagnostics12102490
