Review

The Future of Minimally Invasive Capsule Panendoscopy: Robotic Precision, Wireless Imaging and AI-Driven Insights

by
Miguel Mascarenhas
1,2,3,*,†,
Miguel Martins
1,2,†,
João Afonso
1,2,3,
Tiago Ribeiro
1,2,3,
Pedro Cardoso
1,2,3,
Francisco Mendes
1,2,
Patrícia Andrade
1,2,3,
Helder Cardoso
1,2,3,
João Ferreira
4,5 and
Guilherme Macedo
1,2,3
1
Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal
2
WGO Gastroenterology and Hepatology Training Center, 4200-047 Porto, Portugal
3
Faculty of Medicine, University of Porto, 4200-427 Porto, Portugal
4
Department of Mechanic Engineering, Faculty of Engineering, University of Porto, 4200-065 Porto, Portugal
5
DigestAID—Digestive Artificial Intelligence Development, 455/461, 4200-135 Porto, Portugal
*
Author to whom correspondence should be addressed.
†
These authors contributed equally to this work.
Cancers 2023, 15(24), 5861; https://doi.org/10.3390/cancers15245861
Submission received: 4 November 2023 / Revised: 4 December 2023 / Accepted: 13 December 2023 / Published: 15 December 2023
(This article belongs to the Section Cancer Informatics and Big Data)

Simple Summary

The exponential growth in artificial intelligence development, particularly its application in capsule endoscopy, serves as a compelling model for gastroenterologists. This review focuses on the latest advancements in capsule endoscopy, analyzing the potential benefits and ethical challenges that artificial intelligence may bring to the field of minimally invasive capsule panendoscopy, while also offering insights into future directions. Specifically in the context of oncological gastrointestinal screening, alternative strategies for enhancing this process still need to be explored. Artificial intelligence-enhanced capsule panendoscopy has the potential to shape this future positively by addressing time constraints and improving accessibility through the use of highly efficient diagnostic models.

Abstract

In the early 2000s, the introduction of single-camera wireless capsule endoscopy (CE) redefined the study of the small bowel. Progress continued with the development of double-camera devices, first for the colon and rectum and then for panenteric assessment. A further step was magnetic capsule endoscopy (MCE), particularly when assisted by a robotic arm, designed to enhance gastric evaluation. Indeed, as CE provides full visualization of the entire gastrointestinal (GI) tract, a minimally invasive capsule panendoscopy (CPE) could be a feasible alternative, despite its time-consuming nature and learning curve, assuming appropriate bowel cleansing has been carried out. Recent progress in artificial intelligence (AI), particularly in the development of convolutional neural networks (CNNs) for auxiliary CE reading (detection and diagnosis), may provide the missing link toward establishing the use of panendoscopy, although prospective studies are still needed to validate these models in actual clinical scenarios. Recent CE advancements are discussed here, focusing on the current evidence on CNN development, its real-life implementation potential and the associated ethical challenges.

1. Introduction to Panendoscopy and Its Challenges

Capsule endoscopy (CE) is a minimally invasive procedure that was initially conceived for evaluation of the small bowel and has achieved a high diagnostic yield for the detection of small bowel lesions [1]. The notion of a panenteric examination (e.g., for Crohn’s disease assessment) emerged with the development and implementation of colon capsule endoscopy [2]. In fact, since CE allows for the evaluation of the whole gastrointestinal (GI) tract, the concept of a single minimally invasive panendoscopy has become quite a tempting idea [3]. Technical feasibility and expected favorable patient tolerance both support the use of this method. Nevertheless, there are several challenges in implementing it.
Firstly, the implementation of capsule panendoscopy (CPE) would further increase the reading burden of an already time-consuming exam. Without auxiliary procedural automation, this would most likely reduce the cost-effectiveness of a gastroenterology department, not to mention that many medical institutions would lack the experience or resources required to perform it [4]. More importantly, by considerably increasing the number of frames that must be reviewed, reader fatigue and monotony would increase, potentially leading to missed lesions and overlooked decisive frames.
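To put the reading burden in concrete terms, a rough back-of-the-envelope estimate follows; the transit time, average capture rate and screening speed used below are illustrative assumptions, not device specifications.

```python
# Rough, illustrative estimate of the reviewing workload of capsule
# panendoscopy. All numbers below are assumptions for illustration.
avg_fps = 4            # assumed average adaptive capture rate (device max is ~6 fps)
transit_hours = 8      # assumed whole-gut transit covered by battery life

total_frames = avg_fps * 3600 * transit_hours
print(f"~{total_frames:,} frames per exam")              # ~115,200 frames

frames_reviewed_per_second = 20   # assumed sustained human screening speed
minutes = total_frames / frames_reviewed_per_second / 60
print(f"~{minutes:.0f} min of uninterrupted reading")    # ~96 min
```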
Secondly, the diagnostic accuracy of CE in assessing the esophagus and stomach is still suboptimal. In addition to the inability to insufflate the lumen, an inherent constraint of CE in any anatomical area, other limitations must be considered. In the esophagus, the capsule moves quickly, especially if swallowed in a sitting/orthostatic position, which can reduce the number of mucosal frames and may be associated with incomplete visualization of the Z-line [5]. In the stomach, which, unlike the small bowel, is not a cylindrical structure, some areas, particularly proximal ones, may be overlooked, since capsule movement there depends entirely on peristalsis, even when dual-headed endoscopic capsules are used [6].
Lastly, while adequate bowel preparation is already one of the main concerns of capsule enteric evaluation, it becomes even more decisive in the scenario of CPE. In fact, an effective and reproducible method of bowel preparation that is widely accepted and tolerated by patients has yet to be found, not only for small bowel CE but also for colon CE [7]. Even though numerous studies have been conducted in this domain, including systematic reviews with meta-analysis, it remains challenging to reach a final conclusion due to heterogeneity in how researchers assess mucosal cleansing [8,9]. No current method is simultaneously time-efficient, consistent and free of inter-observer variability; neither operator-dependent scales nor color-intensity-based automated methods have fully addressed this issue [7,8,9]. The development of a standardized method and its integration into CE reading tools most likely needs to come first, thus facilitating the subsequent design of an appropriate clinical trial to determine the most beneficial preparation.
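As an illustration of the color-intensity principle mentioned above, the sketch below scores a frame by its red-to-green pixel ratio; the cutoff value and the direction of the threshold are assumptions chosen for illustration, not the parameters of any validated scale.

```python
import numpy as np

def cleansing_score(frame_rgb: np.ndarray, threshold: float = 1.5) -> float:
    """Fraction of pixels judged 'clean' from the red/green intensity ratio.

    Clean mucosa tends to be reddish, while dark bile and debris lower the
    red/green ratio; the 1.5 cutoff is an arbitrary illustrative assumption.
    """
    r = frame_rgb[..., 0].astype(float)
    g = frame_rgb[..., 1].astype(float) + 1e-6   # avoid division by zero
    return float(((r / g) >= threshold).mean())

# Usage: average the per-frame score over a whole video segment.
frame = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)  # stand-in frame
print(f"clean fraction: {cleansing_score(frame):.2f}")
```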

2. Wireless Capsule Endoscopy: A Pill-Sized Revolution in Gastrointestinal Imaging

Single-camera capsules were the first to be developed in the early 2000s, initially with lower resolution and a lower capturing frame rate [1]. Over time, improvements were made to camera resolution, capturing frame rate and battery power, and software refinement as well as hardware advancements took place with the introduction of real-time viewers [10]. This progress eventually led to the development of adaptive frame rate technology, whereby the faster the capsule progresses, the higher the capture rate, reaching a maximum of six frames per second [10].
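A minimal sketch of the adaptive frame rate idea follows; the speed thresholds, the linear mapping and the function name are illustrative assumptions, since the actual in-capsule motion estimation is proprietary.

```python
def adaptive_frame_rate(speed_mm_s: float,
                        min_fps: int = 2,
                        max_fps: int = 6,
                        speed_for_max_mm_s: float = 5.0) -> int:
    """Map an estimated capsule speed to a capture rate between min and max.

    Illustrative only: real devices derive speed from proprietary motion
    estimates, and the numeric thresholds here are assumptions.
    """
    fraction = min(max(speed_mm_s / speed_for_max_mm_s, 0.0), 1.0)
    return round(min_fps + fraction * (max_fps - min_fps))

for speed in (0.0, 2.5, 5.0, 10.0):
    print(f"{speed:4.1f} mm/s -> {adaptive_frame_rate(speed)} fps")
```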
Dual-camera capsules were introduced in 2006 [2]. First-generation designs went into sleep mode shortly after ingestion for power-saving reasons, reactivating only in the small bowel. Their capturing frame rate was poor, resulting in lower sensitivity in detecting polyps compared to second-generation models [2,11]. These later devices became available in 2009, offered a wider viewing angle and came with an adaptive frame rate of up to 35 frames per second, a valuable inclusion to preserve battery [11]. More recently, in 2016, a third-generation design was introduced that is able to stay operational without interruption along the entire GI tract [12]. Initially intended to assess inflammatory bowel disease patients more accurately, it rapidly prompted discussions of CPE.
Since its introduction, CE has established itself as the first-line method for assessing the small bowel mucosa. Its two main indications are suspected mid-gastrointestinal bleeding and diagnosis/follow-up in cases of suspected or confirmed small bowel Crohn’s disease [13]. Moreover, it can also be used to monitor hereditary polyposis syndromes, mainly Peutz–Jeghers syndrome, and to rule out small intestine tumors [13]. It is also applicable to the evaluation of nonresponsive or refractory celiac disease, when the diagnosis of celiac disease is uncertain, or in malabsorption syndromes [13]. Additionally, dual-camera capsules have improved visualization of the colonic mucosa by enabling greater visibility of both haustra and areas located behind folds [11]. As a result, they have improved the capsule’s overall diagnostic yield, not only for protuberant lesions but also for other mucosal lesions [11]. Consequently, CE has become a possible alternative for colorectal cancer (CRC) screening, mostly when a prior colonoscopy was incomplete or there is an increased risk of complications or a contraindication to conventional colonoscopy or sedation [14,15,16].
CE is generally safe and well tolerated, with few contraindications. Caution is warranted in patients with swallowing disorders, due to the risk of aspiration [17]. Additionally, it requires clinical assessment of the risk of capsule retention [13,18]. This is particularly applicable to patients with established Crohn’s disease (ECD), in whom the risk of retention is increased, and whenever obstructive symptoms are present [13,19]. Given the high risk of CE retention in Crohn’s disease, the inability to distinguish high-risk from low-risk patients based on clinical presentation alone, and the indisputable effectiveness of patency testing, the safest approach would be to pursue patency testing before CE in all ECD patients [13,19,20]. Moreover, there is also an increased risk of retention in patients with previous gastrointestinal surgery or radiation therapy of the abdomen and pelvis, as well as in chronic users of non-steroidal anti-inflammatory drugs and patients with a personal history of small bowel tumors [17]. In these cases, a patency capsule might also be considered [17,21]. The use of CE in individuals with implantable cardiac devices (pacemakers, defibrillators and left ventricular assist devices) should not be contraindicated, since several studies have shown that it is safe [22].

3. Robotic-Assisted Panendoscopy: Advancements and Benefits

In addition to wireless CE, magnetically controlled capsule endoscopy (MCE) has emerged as an alternative method to evaluate the upper GI tract (Figure 1) [23]. In this case, the capsule contains a magnet and, after being swallowed, can be steered in real time by a magnetic field generated outside the patient, using translational and rotational forces [24].
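The underlying magnetostatics can be stated compactly. For a capsule with internal magnetic moment $\mathbf{m}$ in an external field $\mathbf{B}$, the standard expressions for the torque and the translational force are:

$$\boldsymbol{\tau} = \mathbf{m} \times \mathbf{B}, \qquad \mathbf{F} = \nabla\left(\mathbf{m} \cdot \mathbf{B}\right)$$

The torque rotates the capsule into alignment with the field direction, whereas translation requires a field gradient; the robotic arm positions the external magnet so as to shape both.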
There are three types of magnetic control systems: hand-held magnets, electromagnetic coil systems (comparable to present-day MRI scanners) and robotic arms [25,26,27,28]. All remain confined to very few centers; of the three, the robotic arm is the most widely used and studied, mainly for assessment of the gastric mucosa, given its operability (either manual or automatic), tolerability (the exam is conducted without patient movement) and ease of installation (compared to larger electromagnetic coil systems) [23].
The development and implementation of MCE for gastric assessment addresses one of the shortcomings of wireless CE, as it does not depend entirely on stomach peristalsis to move. Although the protocol is not fully established, patients are typically asked to drink 1 L of water (generally mixed with an anti-foaming agent) 10 min prior to the start of the exam, to enhance gastric distention [28]. The capsule is then maneuvered through this water interface, enabling evaluation of the gastric mucosa. In fact, there is evidence that MCE’s diagnostic accuracy for detecting gastric lesions may be comparable to that of gold standard upper endoscopy, with overall agreement exceeding 90% [27]. MCE may thus serve as a safe and effective alternative for gastric assessment, beyond wireless CE, in patients who cannot tolerate esophagogastroduodenoscopy.
Furthermore, the implementation of MCE controlled by a robotic arm could also contribute to panendoscopic evaluation of the whole GI tract. For example, a patient could ingest the capsule lying down (to maximize the assessment of the esophageal mucosa), followed by an extensive evaluation protocol of the stomach with the help of magnetic fields [28]. Then, when the capsule enters the duodenum, the patient would be able to leave the examination bed and move as in wireless CE, allowing for the remaining panenteric assessment.
When it comes to contraindications, they are similar to those outlined previously in wireless CE. The presence of a magnetic field adds extra contraindications, comparable to those applied to MRI, namely the presence of implanted electronic devices, non-MRI-compatible pacemakers and/or magnetic metal foreign bodies [28].
From a diagnostic standpoint, it should be highlighted that MCE’s ability to evaluate the fundus is still incomplete, with some studies reporting failure to visualize it in 20% of instances [29]. Furthermore, comparing wireless CE and MCE remains difficult thus far, given the lack of head-to-head research between them.

4. Artificial Intelligence in Panendoscopy: Enhancing Diagnostic Accuracy

In recent years, artificial intelligence (AI) has gained relevance in diverse fields of medical practice, particularly in specialties with a strong imaging and diagnostic component [30]. Gastroenterology has always been marked by ground-breaking achievements, using highly innovative technologies to improve patient care. It is therefore not surprising that it is also leading the advancement of AI technologies in healthcare.
Over the past decade, AI-related developments have been achieved in two areas of computational science: machine learning (ML) and deep learning (DL). Although the two fields emerged around the same period, the lack of adequate computational power initially limited the widespread adoption of DL models, so technology first embraced ML algorithms, whose aim is to complete a task by analyzing patterns automatically. ML, however, requires a supervised phase to ensure proper data annotation [31].
With the current availability of ample computational resources, DL models have gained significant momentum. These models are a subset of machine learning that is also used for automatic pattern identification but, unlike classical ML, does not necessarily require human interaction to train the model, displaying supervised or unsupervised learning potential [32,33]. They involve neural networks with multiple layers (three or more), structured in a hierarchical, human-brain-inspired architecture capable of performing more complicated tasks by sequentially combining inputs from various layers, ranging from lower-level to higher-level ones [34]. One example of a DL model is the convolutional neural network (CNN), which, as the name suggests, has a multilayer neural network structure used to automatically analyze complex visual data, mimicking the neurobiological process [35].
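To make the layered structure concrete, the sketch below (in PyTorch) stacks three convolutional layers in front of a linear classifier for binary frame classification. It is a didactic toy, far shallower than the ResNet- or Xception-based models listed in Table 1, and the class name is invented for illustration.

```python
import torch
import torch.nn as nn

class TinyCEClassifier(nn.Module):
    """Minimal CNN for binary frame classification (normal vs. lesion).

    A didactic sketch only; published CE models use far deeper backbones
    (ResNet, Xception, etc.), typically pretrained on large image datasets.
    """
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> (B, 64, 1, 1)
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

logits = TinyCEClassifier()(torch.randn(4, 3, 224, 224))  # 4 RGB frames
print(logits.shape)  # torch.Size([4, 2])
```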
Some ML-based capsule software add-ons already assist the gastroenterologist in image-pattern analysis. They were developed for several purposes, including color image analysis (e.g., automatically detecting blood, as in PillCam’s Suspected Blood Indicator), topographic segmentation (e.g., automatically recognizing distinct anatomical sections) and video adjustment (e.g., reducing the duration of a video by displaying the frames with the highest probability of being abnormal, as in PillCam’s Top 100) [36]. These tools have helped to reduce the reading burden, although their rate of missed lesions remains higher than that of the DL models developed since [37]. There has therefore been exponential interest in the development and validation of DL models for CE. Table 1 provides an overview of the published work regarding CNN development for CE.
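The video-adjustment idea reduces, in essence, to ranking frames by a per-frame abnormality score and keeping the best k. The generic ranking step is sketched below; the vendor algorithms themselves are proprietary, and the function name and scores are illustrative.

```python
import heapq

def top_k_frames(frame_scores, k=100):
    """Indices of the k frames with the highest abnormality scores.

    Generic ranking step behind reading-time reducers such as 'Top 100';
    the scores would come from a per-frame classifier over the full video.
    """
    best = heapq.nlargest(k, enumerate(frame_scores), key=lambda pair: pair[1])
    return sorted(index for index, _ in best)   # chronological order for review

scores = [0.01, 0.93, 0.20, 0.88, 0.05]         # toy per-frame probabilities
print(top_k_frames(scores, k=2))                # [1, 3]
```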
Convolutional neural networks were initially developed using frames from single-camera capsules, later expanding to dual-camera capsules. Specific CNNs were applied first in the small bowel, then in the colon and rectum, and finally in both anatomical regions, each excelling at detecting a particular lesion type [37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54]. Nevertheless, a sequential approach in which each specific CNN is applied one at a time for an AI-assisted review of a CE video, while logical, might not be the most efficient strategy.
Complex CNNs have started to emerge, offering the capability of detecting multiple types of lesions at once [55,56,57,58,59]. The first paper in this field, published by Ding et al., demonstrated the potential of a CNN-based approach to assist the reading of wireless CE, with an AI system providing simultaneous detection of a wide range of lesions. Despite the novelty of being the first published complex model, its findings remain a topic of debate, as the CNN can accurately detect various types of lesions but fails to differentiate between them [55]. The CNN described in that study serves as the core technology for the newly developed DL solution (ProScan™, AnX Robotica, Plano, TX, USA) to be incorporated into the reading software of the NaviCam SB system (AnX Robotica, Plano, TX, USA). Although the hardware has received clearance from the Food and Drug Administration (FDA), this clearance has not been granted for ProScan™, which currently awaits approval for commercial use. Other groups have also developed DL models that are most often used in the small bowel but are also capable of being used in the colon [56,57,58,59].
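A common way to obtain detection plus differentiation in a single pass is a multi-label output head with one sigmoid per lesion class, as sketched below. This is a generic pattern, not a reproduction of Ding et al.’s architecture; the lesion list and feature dimension are illustrative assumptions.

```python
import torch
import torch.nn as nn

LESION_TYPES = ["angiectasia", "ulcer", "protruding_lesion", "blood"]  # illustrative

class MultiLesionHead(nn.Module):
    """Multi-label head: one sigmoid output per lesion type, so a single
    forward pass both detects and differentiates lesions. A sketch only;
    it does not reproduce any specific published architecture."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.fc = nn.Linear(feat_dim, len(LESION_TYPES))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.fc(features))  # independent per-class probabilities

probs = MultiLesionHead()(torch.randn(1, 512))   # features from a CNN backbone
print({name: round(float(p), 2) for name, p in zip(LESION_TYPES, probs[0])})
```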
Beyond panenteric AI-enhanced mucosal evaluation, some groups have also sought to develop DL solutions for assessing the stomach, first using MCE capsules [60,61]. A CNN trained on various types of wireless CE capsules has since been published, representing another important step toward the AI-enhanced panendoscopy vision [62].
The technological readiness of these algorithms in CE is currently at an early stage, spanning from experimental work to demonstration pilots, with some still in the research phase focused on concept validation. To fully understand the potential of AI in CE, prospective and multicentric studies are still required, since most research conducted so far has been retrospective. The role of this DL-based technology in identifying esophageal lesions by CE also remains to be explored: CE yields few esophageal images, which limits the establishment of esophagus-only databases. Nevertheless, the development of such models may be a pivotal step towards minimally invasive AI-enhanced CPE.

5. Integrating Robotic Systems and Wireless Capsules: A Synergistic Approach

As previously discussed, CPE allows for visualization of the whole GI system, particularly if proper bowel preparation is carried out [58]. This could be a valuable asset in evaluating cases of inflammatory bowel disease and overt GI bleeding [63,64,65]. Moreover, with AI advancements, CPE could become a cost-effective cancer screening method. Given the different types of capsules already available, it is debatable whether technology should advance towards wireless CE panendoscopy or robotic MCE panendoscopy.
From a global perspective, wireless CE is more widespread, in contrast to MCE, which is currently available in only a few centers. Although CNNs have been published for the detection of gastric lesions with both modalities, more groups are working on AI for wireless CE. While no studies have compared the diagnostic performance of wireless CE and MCE, panendoscopy based on wireless CE could prove more affordable and effective. In fact, wireless panendoscopy has the potential advantage of being performable in a homecare setting, without the patient having to attend a clinic or hospital.
Nonetheless, in countries with a high prevalence of stomach cancer, choosing robotic panendoscopy to screen for both gastric and colorectal cancer could be a reasonable approach, taking into account the MCE features previously discussed. Having both wireless and robotic CE available expands and diversifies the toolkit of minimally invasive CE. Robotic CE has the potential to address specific limitations of wireless CE, offering enhanced stomach visualization. Moreover, it could create possibilities for tissue sampling and even therapeutic interventions, given the increased control over capsule propulsion [66].

6. Overcoming Limitations: AI-Assisted Navigation in Panendoscopy

The implementation of ever less invasive diagnostic and therapeutic procedures has contributed to the evolution of medicine. It may therefore be anticipated that alternative diagnostic modalities for assessing the digestive system will continue to be developed, in addition to the current gold standard upper and lower endoscopy. The underlying idea is that, whereas CE classically focuses on the small bowel, it may be capable of assessing the whole GI tract, starting from the esophagus and progressing through the stomach to the small bowel, colon and rectum.
CPE has the potential to change the way GI diseases are evaluated. The case of GI oncological screening is a challenge worth mentioning, since colorectal and gastric cancers are two of the top five malignancies affecting countries with a high human development index [67]. Although CE could serve as an alternative non-invasive screening method and be able to assess these two anatomical locations at once, it would be too time-consuming and would probably result in non-negligible false negative rates. In this clinical scenario, this would only be possible with the aid of AI technology, significantly reducing CPE’s health-related burden (Figure 2) [68].
Aside from being time-consuming and costly, CPE has additional constraints: it is single-use and cannot perform therapeutic interventions (although robotic CE may change this) [66]. Despite these limitations, its disruptive potential deserves emphasis: in the long run, with DL technology optimization, CPE could become a cost-effective first-line option for population-based oncological screening of the GI tract. This rests on the notion that its diagnostic accuracy would be comparable to that of the current gold standard, while being more acceptable to most patients, as it is less invasive and requires no air inflation, radiation or sedation. Upper and lower endoscopy would then mostly be reserved for clearing diagnostic uncertainties, obtaining tissue for histological and molecular analysis and treating CPE-detected lesions.
On top of this, by reserving conventional upper and lower endoscopy for a more interventional role, gastroenterology could work towards becoming greener [69]. This is, in fact, one of the field’s pressing concerns, as endoscopy involves a large amount of single-use disposable material and considerable resources for adequate device disinfection [70]. If AI-assisted CPE proves to be cost-effective, it has the potential to significantly reduce the number of unnecessary exams, particularly those with a primarily diagnostic aim, lowering endoscopy’s carbon footprint.

7. Improving Patient Experience: Wireless Capsule Endoscopy vs. Traditional Endoscopy

The importance of upper and lower endoscopy in advancing the field of gastroenterology cannot be overstated, as they successfully combine diagnostic and therapeutic functionalities. Nonetheless, they are invasive procedures with a non-negligible risk of complications [71]. Furthermore, they may cause discomfort in a proportion of patients and may even be poorly tolerated by some individuals [72].
These procedures can certainly be supplemented with sedation, which improves patient comfort while also enhancing the gastroenterologist’s diagnostic and therapeutic capabilities. However, sedation may also carry a higher risk of complications, increase recovery times and escalate costs (including lost working days) [72].
Compared to traditional endoscopy, CPE may improve the patient experience. Patients would still need to follow a low-fiber diet for a few days and take an oral bowel preparation whose substance, timing and dose have yet to be established and optimized [21,73]. Following ingestion of the capsule, the patient would need to confirm the capsule’s passage from the stomach (or complete a robotic capsule gastric protocol) and take additional boosters once the capsule has reached the duodenum (at 0 and 3 h) [68]. Even so, wireless CPE could be more easily incorporated into a widespread daily routine. Moreover, it can potentially decrease patient reluctance, as it is less invasive and does not require air inflation, radiation or sedation [3].

8. Ethical Considerations and Challenges in AI-Assisted Panendoscopy

AI’s widespread acceptance is dependent on addressing three main groups of bioethical challenges, encompassing data acquisition (input), model development (AI tool) and the impact of AI-generated responses on clinical practice (output) (Figure 3) [74].
In the first place, it is important to acknowledge that developing a CNN is a complex task requiring the acquisition and standardization of an extensive volume of data. With information becoming more readily available and possibly collected without individuals’ knowledge, privacy concerns may arise. Moreover, as cyberattacks become more frequent, there is an obvious need for appropriate data protection (e.g., respecting the General Data Protection Regulation (EU) 2016/679), as well as non-traceability [75,76]. Current innovations in healthcare blockchain may mitigate these concerns, given its decentralized data framework of chronological, immutable blocks [77].
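The core property alluded to here, that each block’s hash covers its predecessor so retroactive tampering becomes detectable, can be shown in a few lines. The sketch below is a toy illustration of that single idea, with invented record fields, not a production blockchain.

```python
import hashlib
import json
import time

def add_block(chain: list, record: dict) -> None:
    """Append a record whose hash covers the previous block, making
    retroactive tampering detectable. Toy illustration only."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash, "ts": time.time()}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

chain: list = []
add_block(chain, {"exam_id": "anon-001", "event": "video uploaded"})      # fields invented
add_block(chain, {"exam_id": "anon-001", "event": "CNN inference run"})
print(chain[1]["prev"] == chain[0]["hash"])  # True: blocks are linked
```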
Furthermore, the inherent selection bias present in the dataset used to train a deep learning model must be dealt with [78]. On the one hand, the effectiveness of an AI algorithm is directly related to the quality of its training data; if these are insufficient or inaccurate, inappropriate CNN development may result [78]. On the other hand, even with a high-quality dataset, the model’s training population may lack proper representation, compromising its external validity [74]. In addition, extended training on one population may result in overfitting, such that the model does not yield equivalent diagnostic performance when exposed to different data [79]. Assuring data quality is a pivotal role of the medical community in AI-related research.
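The external-validity failure mode can be demonstrated with synthetic stand-ins for image features: a model fitted on an internal cohort is scored on a deliberately shifted external cohort, and the AUC gap is the warning sign. The cohorts, features and shift below are fabricated purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Internal cohort: the label depends on feature 0.
X_int = rng.normal(0.0, 1.0, (500, 20))
y_int = (X_int[:, 0] + rng.normal(0, 0.5, 500) > 0).astype(int)
# External cohort: shifted distribution, and the signal sits elsewhere.
X_ext = rng.normal(0.7, 1.3, (200, 20))
y_ext = (X_ext[:, 1] + rng.normal(0, 0.5, 200) > 0.7).astype(int)

model = LogisticRegression(max_iter=1000).fit(X_int, y_int)
auc_int = roc_auc_score(y_int, model.predict_proba(X_int)[:, 1])
auc_ext = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print(f"internal AUC {auc_int:.2f} vs. external AUC {auc_ext:.2f}")
# A large internal-external gap is the warning sign discussed above.
```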
Two clinical scenarios must also be considered. The first relates to the “black-box” nature of this AI technology, since it may detect patterns (in this case, lesions) that physicians cannot perceive [80]. Although this explainability problem arises elsewhere in modern medicine (e.g., drugs that improve a patient’s prognosis through an unknown mechanism), decisions based on AI rather than made by humans face greater resistance [74]. The second is when the model fails to detect a relevant lesion, resulting in a false negative. Here an accountability problem may remain, since the absence of a human decision will not exempt someone from taking responsibility for an untoward event [74].
Even if AI-assisted CE performance proves equivalent to that of experienced endoscopists in assessing full-length CE videos in prospective validation studies, it is still necessary to discuss which mode of presentation is better: while reading the video (e.g., a box delimiting the lesion as the video plays) or prior to it (e.g., the DL software analyzes the full video and selects the most relevant frames for the physician) [81]. In the first case, the model is simpler to understand, but there is a risk of ignoring surrounding areas, and the reduction in reading time would be smaller. The second would be a less biased approach to image assessment, although there is a higher risk of incomplete visualization of the video, and the gastroenterologist may struggle to comprehend the model’s frame selection. Emerging solutions like heatmaps may address this by delimiting the area with the maximum probability of lesion presence (Figure 4).
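One widely used way to produce such heatmaps is Grad-CAM, sketched below for a stand-in classifier. The published models behind Figure 4 may use a different saliency method, and the untrained ResNet-18 here is only a placeholder for a CE frame classifier.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()   # untrained stand-in for a CE classifier
acts, grads = {}, {}
layer = model.layer4                    # last convolutional block

layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for one CE frame
model(x)[0].max().backward()                         # gradient of the top class score

weights = grads["v"].mean(dim=(2, 3), keepdim=True)  # global-average-pooled gradients
cam = F.relu((weights * acts["v"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalized heatmap
print(cam.shape)  # torch.Size([1, 1, 224, 224]) -- overlay on the frame
```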
AI integration in real practice requires well-regulated channels. Some technologies have previously received FDA clearance as AI/ML-Based Software as a Medical Device (SaMD). In general, regulation is written in such a way that any changes made after the original market authorization require premarket FDA review [82]. Nonetheless, since CNNs evolve and adapt quickly, it is essential to recognize that new frameworks capable of adequately regulating them are still needed.

9. Concluding Remarks—Enabling the Goal of Establishing the Use of Panendoscopy: Robotic and Wireless Capsule Endoscopy Assisted by Artificial Intelligence

The exponential growth of AI publications reporting excellent diagnostic accuracy and efficient processing power has the potential to disrupt the current paradigm.
Before long, gastroenterologists will possess two major tools to provide better care for patients. One is conventional endoscopy, whose therapeutic potential keeps pushing its traditional boundaries. The other is the ongoing advancement of AI technology in the specialty. While the first is widely acknowledged as one of the primary factors motivating doctors to pursue the field, the second is still viewed with considerable caution.
The medical community may be apprehensive about these ongoing technological advancements. Still, they should be embraced as a new era, comparable to the changes seen after industrialization and the emergence of the World Wide Web and search engines. Integrating AI and big data knowledge into medical professionals’ core curriculum is an important step as well. On the one hand, doctors must partner with engineers and data scientists to craft such technology, since medical expertise is crucial to the development of valid databases. On the other hand, even without direct involvement in model creation, doctors should understand AI studies well enough to judge whether their findings apply to their patients.
Currently, the majority of studies on deep learning model development in CE are based on still frames or video segments, and there is still no FDA-approved SaMD capable of multiorgan evaluation and suitable for various devices. Conducting prospective, multicentric studies that assess AI models on full videos in real clinical scenarios is a necessary milestone before AI-assisted minimally invasive CPE can be considered for implementation in daily routine.

Author Contributions

Conceptualization, M.M. (Miguel Mascarenhas) and M.M. (Miguel Martins); resources, J.A., T.R., P.C. and F.M.; resources, J.A., P.C., F.M. and J.F.; writing—original draft preparation, M.M. (Miguel Mascarenhas) and M.M. (Miguel Martins); writing—review and editing, M.M. (Miguel Mascarenhas), M.M. (Miguel Martins), T.R., H.C. and J.F.; supervision—P.A., H.C. and G.M.; project administration—M.M. (Miguel Mascarenhas), P.A., H.C. and G.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

J.F. is a paid employee of DigestAID—Digestive Artificial Intelligence Development. The other authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of the data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Iddan, G.; Meron, G.; Glukhovsky, A.; Swain, P. Wireless capsule endoscopy. Nature 2000, 405, 417. [Google Scholar] [CrossRef] [PubMed]
  2. Eliakim, R.; Fireman, Z.; Gralnek, I.M.; Yassin, K.; Waterman, M.; Kopelman, Y.; Lachter, J.; Koslowsky, B.; Adler, S.N. Evaluation of the PillCam Colon capsule in the detection of colonic pathology: Results of the first multicenter, prospective, comparative study. Endoscopy 2006, 38, 963–970. [Google Scholar] [CrossRef] [PubMed]
  3. Eliakim, R.; Adler, S.N. Colon PillCam: Why not just take a pill? Dig. Dis. Sci. 2015, 60, 660–663. [Google Scholar] [CrossRef]
  4. Piccirelli, S.; Mussetto, A.; Bellumat, A.; Cannizzaro, R.; Pennazio, M.; Pezzoli, A.; Bizzotto, A.; Fusetti, N.; Valiante, F.; Hassan, C.; et al. New Generation Express View: An Artificial Intelligence Software Effectively Reduces Capsule Endoscopy Reading Times. Diagnostics 2022, 12, 1783. [Google Scholar] [CrossRef] [PubMed]
  5. Park, J.; Cho, Y.K.; Kim, J.H. Current and Future Use of Esophageal Capsule Endoscopy. Clin. Endosc. 2018, 51, 317–322. [Google Scholar] [CrossRef]
  6. Kim, J.H.; Nam, S.J. Capsule Endoscopy for Gastric Evaluation. Diagnostics 2021, 11, 1792. [Google Scholar] [CrossRef]
  7. Spada, C.; McNamara, D.; Despott, E.J.; Adler, S.; Cash, B.D.; Fernández-Urién, I.; Ivekovic, H.; Keuchel, M.; McAlindon, M.; Saurin, J.C.; et al. Performance measures for small-bowel endoscopy: A European Society of Gastrointestinal Endoscopy (ESGE) Quality Improvement Initiative. United Eur. Gastroenterol. J. 2019, 7, 614–641. [Google Scholar] [CrossRef]
  8. Tabone, T.; Koulaouzidis, A.; Ellul, P. Scoring Systems for Clinical Colon Capsule Endoscopy-All You Need to Know. J. Clin. Med. 2021, 10, 2372. [Google Scholar] [CrossRef]
  9. Rosa, B.; Margalit-Yehuda, R.; Gatt, K.; Sciberras, M.; Girelli, C.; Saurin, J.C.; Valdivia, P.C.; Cotter, J.; Eliakim, R.; Caprioli, F.; et al. Scoring systems in clinical small-bowel capsule endoscopy: All you need to know! Endosc. Int. Open 2021, 9, E802–E823. [Google Scholar] [CrossRef]
  10. Cortegoso Valdivia, P.; Pennazio, M. Chapter 2—Wireless capsule endoscopy: Concept and modalities. In Artificial Intelligence in Capsule Endoscopy; Mascarenhas, M., Cardoso, H., Macedo, G., Eds.; Academic Press: Cambridge, MA, USA, 2023; pp. 11–20. [Google Scholar] [CrossRef]
  11. Eliakim, R.; Yassin, K.; Niv, Y.; Metzger, Y.; Lachter, J.; Gal, E.; Sapoznikov, B.; Konikoff, F.; Leichtmann, G.; Fireman, Z.; et al. Prospective multicenter performance evaluation of the second-generation colon capsule compared with colonoscopy. Endoscopy 2009, 41, 1026–1031. [Google Scholar] [CrossRef]
  12. Tontini, G.E.; Rizzello, F.; Cavallaro, F.; Bonitta, G.; Gelli, D.; Pastorelli, L.; Salice, M.; Vecchi, M.; Gionchetti, P.; Calabrese, C. Usefulness of panoramic 344°-viewing in Crohn’s disease capsule endoscopy: A proof of concept pilot study with the novel PillCam™ Crohn’s system. BMC Gastroenterol. 2020, 20, 97. [Google Scholar] [CrossRef] [PubMed]
  13. Pennazio, M.; Rondonotti, E.; Despott, E.J.; Dray, X.; Keuchel, M.; Moreels, T.; Sanders, D.S.; Spada, C.; Carretero, C.; Cortegoso Valdivia, P.; et al. Small-bowel capsule endoscopy and device-assisted enteroscopy for diagnosis and treatment of small-bowel disorders: European Society of Gastrointestinal Endoscopy (ESGE) Guideline—Update 2022. Endoscopy 2023, 55, 58–95. [Google Scholar] [CrossRef] [PubMed]
  14. Vuik, F.E.R.; Nieuwenburg, S.A.V.; Moen, S.; Spada, C.; Senore, C.; Hassan, C.; Pennazio, M.; Rondonotti, E.; Pecere, S.; Kuipers, E.J.; et al. Colon capsule endoscopy in colorectal cancer screening: A systematic review. Endoscopy 2021, 53, 815–824. [Google Scholar] [CrossRef]
  15. Kjølhede, T.; Ølholm, A.M.; Kaalby, L.; Kidholm, K.; Qvist, N.; Baatrup, G. Diagnostic accuracy of capsule endoscopy compared with colonoscopy for polyp detection: Systematic review and meta-analyses. Endoscopy 2021, 53, 713–721. [Google Scholar] [CrossRef]
  16. Möllers, T.; Schwab, M.; Gildein, L.; Hoffmeister, M.; Albert, J.; Brenner, H.; Jäger, S. Second-generation colon capsule endoscopy for detection of colorectal polyps: Systematic review and meta-analysis of clinical trials. Endosc. Int. Open 2021, 9, E562–E571. [Google Scholar] [CrossRef]
  17. Nakamura, M.; Kawashima, H.; Ishigami, M.; Fujishiro, M. Indications and Limitations Associated with the Patency Capsule Prior to Capsule Endoscopy. Intern. Med. 2022, 61, 5–13. [Google Scholar] [CrossRef] [PubMed]
  18. Garrido, I.; Andrade, P.; Lopes, S.; Macedo, G. Chapter 5—The role of capsule endoscopy in diagnosis and clinical management of inflammatory bowel disease. In Artificial Intelligence in Capsule Endoscopy; Mascarenhas, M., Cardoso, H., Macedo, G., Eds.; Academic Press: Cambridge, MA, USA, 2023; pp. 69–90. [Google Scholar] [CrossRef]
  19. Pasha, S.F.; Pennazio, M.; Rondonotti, E.; Wolf, D.; Buras, M.R.; Albert, J.G.; Cohen, S.A.; Cotter, J.; D’Haens, G.; Eliakim, R.; et al. Capsule Retention in Crohn’s Disease: A Meta-analysis. Inflamm. Bowel Dis. 2020, 26, 33–42. [Google Scholar] [CrossRef]
  20. Silva, M.; Cardoso, H.; Macedo, G. Patency Capsule Safety in Crohn’s Disease. J. Crohn’s Colitis 2017, 11, 1288. [Google Scholar] [CrossRef]
  21. Rondonotti, E.; Spada, C.; Adler, S.; May, A.; Despott, E.J.; Koulaouzidis, A.; Panter, S.; Domagk, D.; Fernandez-Urien, I.; Rahmi, G.; et al. Small-bowel capsule endoscopy and device-assisted enteroscopy for diagnosis and treatment of small-bowel disorders: European Society of Gastrointestinal Endoscopy (ESGE) Technical Review. Endoscopy 2018, 50, 423–446. [Google Scholar] [CrossRef]
  22. Tabet, R.; Nassani, N.; Karam, B.; Shammaa, Y.; Akhrass, P.; Deeb, L. Pooled Analysis of the Efficacy and Safety of Video Capsule Endoscopy in Patients with Implantable Cardiac Devices. Can. J. Gastroenterol. Hepatol. 2019, 2019, 3953807. [Google Scholar] [CrossRef]
  23. Liao, Z.; Zou, W.; Li, Z.S. Clinical application of magnetically controlled capsule gastroscopy in gastric disease diagnosis: Recent advances. Sci. China Life Sci. 2018, 61, 1304–1309. [Google Scholar] [CrossRef]
  24. Shamsudhin, N.; Zverev, V.I.; Keller, H.; Pane, S.; Egolf, P.W.; Nelson, B.J.; Tishin, A.M. Magnetically guided capsule endoscopy. Med. Phys. 2017, 44, e91–e111. [Google Scholar] [CrossRef] [PubMed]
  25. Swain, P.; Toor, A.; Volke, F.; Keller, J.; Gerber, J.; Rabinovitz, E.; Rothstein, R.I. Remote magnetic manipulation of a wireless capsule endoscope in the esophagus and stomach of humans (with videos). Gastrointest. Endosc. 2010, 71, 1290–1293. [Google Scholar] [CrossRef] [PubMed]
  26. Rahman, I.; Afzal, N.A.; Patel, P. The role of magnetic assisted capsule endoscopy (MACE) to aid visualisation in the upper GI tract. Comput. Biol. Med. 2015, 65, 359–363. [Google Scholar] [CrossRef] [PubMed]
  27. Liao, Z.; Hou, X.; Lin-Hu, E.Q.; Sheng, J.Q.; Ge, Z.Z.; Jiang, B.; Hou, X.H.; Liu, J.Y.; Li, Z.; Huang, Q.Y.; et al. Accuracy of Magnetically Controlled Capsule Endoscopy, Compared With Conventional Gastroscopy, in Detection of Gastric Diseases. Clin. Gastroenterol. Hepatol. 2016, 14, 1266–1273.e1. [Google Scholar] [CrossRef] [PubMed]
  28. He, C.; Wang, Q.; Jiang, X.; Jiang, B.; Qian, Y.-Y.; Pan, J.; Liao, Z. Chapter 13—Magnetic capsule endoscopy: Concept and application of artificial intelligence. In Artificial Intelligence in Capsule Endoscopy; Mascarenhas, M., Cardoso, H., Macedo, G., Eds.; Academic Press: Cambridge, MA, USA, 2023; pp. 217–241. [Google Scholar] [CrossRef]
  29. Liao, Z.; Duan, X.D.; Xin, L.; Bo, L.M.; Wang, X.H.; Xiao, G.H.; Hu, L.H.; Zhuang, S.L.; Li, Z.S. Feasibility and safety of magnetic-controlled capsule endoscopy system in examination of human stomach: A pilot study in healthy volunteers. J. Interv. Gastroenterol. 2012, 2, 155–160. [Google Scholar] [CrossRef] [PubMed]
  30. Currie, G.; Hawk, K.E.; Rohren, E.; Vial, A.; Klein, R. Machine Learning and Deep Learning in Medical Imaging: Intelligent Imaging. J. Med. Imaging Radiat. Sci. 2019, 50, 477–487. [Google Scholar] [CrossRef] [PubMed]
  31. Handelman, G.S.; Kok, H.K.; Chandra, R.V.; Razavi, A.H.; Lee, M.J.; Asadi, H. eDoctor: Machine learning and the future of medicine. J. Intern. Med. 2018, 284, 603–619. [Google Scholar] [CrossRef]
  32. Kim, J.; Kim, J.; Jang, G.J.; Lee, M. Fast learning method for convolutional neural networks using extreme learning machine and its application to lane detection. Neural Netw. 2017, 87, 109–121. [Google Scholar] [CrossRef]
  33. Afonso, J.; Martins, M.; Ferreira, J.; Mascarenhas, M. Chapter 1—Artificial intelligence: Machine learning, deep learning, and applications in gastrointestinal endoscopy. In Artificial Intelligence in Capsule Endoscopy; Mascarenhas, M., Cardoso, H., Macedo, G., Eds.; Academic Press: Cambridge, MA, USA, 2023; pp. 1–10. [Google Scholar] [CrossRef]
  34. Amisha; Malik, P.; Pathania, M.; Rathaur, V.K. Overview of artificial intelligence in medicine. J. Family Med. Prim. Care 2019, 8, 2328–2331. [Google Scholar] [CrossRef]
  35. Li, N.; Zhao, X.; Yang, Y.; Zou, X. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network. Comput. Intell. Neurosci. 2016, 2016, 7942501. [Google Scholar] [CrossRef]
  36. Fisher, M.; Mackiewicz, M. Colour Image Analysis of Wireless Capsule Endoscopy Video: A Review. In Color Medical Image Analysis; Celebi, M.E., Schaefer, G., Eds.; Springer: Dordrecht, The Netherlands, 2013; pp. 129–144. [Google Scholar] [CrossRef]
  37. Aoki, T.; Yamada, A.; Kato, Y.; Saito, H.; Tsuboi, A.; Nakada, A.; Niikura, R.; Fujishiro, M.; Oka, S.; Ishihara, S.; et al. Automatic detection of blood content in capsule endoscopy images based on a deep convolutional neural network. J. Gastroenterol. Hepatol. 2020, 35, 1196–1200. [Google Scholar] [CrossRef]
  38. Afonso, J.; Saraiva, M.M.; Ferreira, J.P.S.; Ribeiro, T.; Cardoso, H.; Macedo, G. Performance of a convolutional neural network for automatic detection of blood and hematic residues in small bowel lumen. Dig. Liver Dis. 2021, 53, 654–657. [Google Scholar] [CrossRef]
  39. Leenhardt, R.; Vasseur, P.; Li, C.; Saurin, J.C.; Rahmi, G.; Cholet, F.; Becq, A.; Marteau, P.; Histace, A.; Dray, X. A neural network algorithm for detection of GI angiectasia during small-bowel capsule endoscopy. Gastrointest. Endosc. 2019, 89, 189–194. [Google Scholar] [CrossRef]
  40. Tsuboi, A.; Oka, S.; Aoyama, K.; Saito, H.; Aoki, T.; Yamada, A.; Matsuda, T.; Fujishiro, M.; Ishihara, S.; Nakahori, M.; et al. Artificial intelligence using a convolutional neural network for automatic detection of small-bowel angioectasia in capsule endoscopy images. Dig. Endosc. 2020, 32, 382–390. [Google Scholar] [CrossRef]
  41. Houdeville, C.; Souchaud, M.; Leenhardt, R.; Beaumont, H.; Benamouzig, R.; McAlindon, M.; Grimbert, S.; Lamarque, D.; Makins, R.; Saurin, J.C.; et al. A multisystem-compatible deep learning-based algorithm for detection and characterization of angiectasias in small-bowel capsule endoscopy. A proof-of-concept study. Dig. Liver Dis. 2021, 53, 1627–1631. [Google Scholar] [CrossRef]
  42. Ribeiro, T.; Saraiva, M.M.; Ferreira, J.P.S.; Cardoso, H.; Afonso, J.; Andrade, P.; Parente, M.; Jorge, R.N.; Macedo, G. Artificial intelligence and capsule endoscopy: Automatic detection of vascular lesions using a convolutional neural network. Ann. Gastroenterol. 2021, 34, 820–828. [Google Scholar] [CrossRef]
  43. Aoki, T.; Yamada, A.; Aoyama, K.; Saito, H.; Tsuboi, A.; Nakada, A.; Niikura, R.; Fujishiro, M.; Oka, S.; Ishihara, S.; et al. Automatic detection of erosions and ulcerations in wireless capsule endoscopy images based on a deep convolutional neural network. Gastrointest. Endosc. 2019, 89, 357–363.e2. [Google Scholar] [CrossRef] [PubMed]
  44. Klang, E.; Barash, Y.; Margalit, R.Y.; Soffer, S.; Shimon, O.; Albshesh, A.; Ben-Horin, S.; Amitai, M.M.; Eliakim, R.; Kopylov, U. Deep learning algorithms for automated detection of Crohn’s disease ulcers by video capsule endoscopy. Gastrointest. Endosc. 2020, 91, 606–613.e2. [Google Scholar] [CrossRef] [PubMed]
  45. Barash, Y.; Azaria, L.; Soffer, S.; Margalit Yehuda, R.; Shlomi, O.; Ben-Horin, S.; Eliakim, R.; Klang, E.; Kopylov, U. Ulcer severity grading in video capsule images of patients with Crohn’s disease: An ordinal neural network solution. Gastrointest. Endosc. 2021, 93, 187–192. [Google Scholar] [CrossRef] [PubMed]
  46. Afonso, J.; Saraiva, M.J.M.; Ferreira, J.P.S.; Cardoso, H.; Ribeiro, T.; Andrade, P.; Parente, M.; Jorge, R.N.; Saraiva, M.M.; Macedo, G. Development of a Convolutional Neural Network for Detection of Erosions and Ulcers With Distinct Bleeding Potential in Capsule Endoscopy. Tech. Innov. Gastrointest. Endosc. 2021, 23, 291–296. [Google Scholar] [CrossRef]
  47. Saito, H.; Aoki, T.; Aoyama, K.; Kato, Y.; Tsuboi, A.; Yamada, A.; Fujishiro, M.; Oka, S.; Ishihara, S.; Matsuda, T.; et al. Automatic detection and classification of protruding lesions in wireless capsule endoscopy images based on a deep convolutional neural network. Gastrointest. Endosc. 2020, 92, 144–151.e1. [Google Scholar] [CrossRef] [PubMed]
  48. Mascarenhas Saraiva, M.; Afonso, J.; Ribeiro, T.; Ferreira, J.; Cardoso, H.; Andrade, P.; Gonçalves, R.; Cardoso, P.; Parente, M.; Jorge, R.; et al. Artificial intelligence and capsule endoscopy: Automatic detection of enteric protruding lesions using a convolutional neural network. Rev. Esp. Enferm. Dig. 2021, 115, 75–79. [Google Scholar] [CrossRef]
  49. Yamada, A.; Niikura, R.; Otani, K.; Aoki, T.; Koike, K. Automatic detection of colorectal neoplasia in wireless colon capsule endoscopic images using a deep convolutional neural network. Endoscopy 2021, 53, 832–836. [Google Scholar] [CrossRef] [PubMed]
  50. Saraiva, M.M.; Ferreira, J.P.S.; Cardoso, H.; Afonso, J.; Ribeiro, T.; Andrade, P.; Parente, M.P.L.; Jorge, R.N.; Macedo, G. Artificial intelligence and colon capsule endoscopy: Development of an automated diagnostic system of protruding lesions in colon capsule endoscopy. Tech. Coloproctol. 2021, 25, 1243–1248. [Google Scholar] [CrossRef] [PubMed]
  51. Ribeiro, T.; Mascarenhas, M.; Afonso, J.; Cardoso, H.; Andrade, P.; Lopes, S.; Ferreira, J.; Mascarenhas Saraiva, M.; Macedo, G. Artificial intelligence and colon capsule endoscopy: Automatic detection of ulcers and erosions using a convolutional neural network. J. Gastroenterol. Hepatol. 2022, 37, 2282–2288. [Google Scholar] [CrossRef] [PubMed]
  52. Majtner, T.; Brodersen, J.B.; Herp, J.; Kjeldsen, J.; Halling, M.L.; Jensen, M.D. A deep learning framework for autonomous detection and classification of Crohn’s disease lesions in the small bowel and colon with capsule endoscopy. Endosc. Int. Open 2021, 9, E1361–E1370. [Google Scholar] [CrossRef]
  53. Ferreira, J.P.S.; de Mascarenhas Saraiva, M.; Afonso, J.P.L.; Ribeiro, T.F.C.; Cardoso, H.M.C.; Ribeiro Andrade, A.P.; de Mascarenhas Saraiva, M.N.G.; Parente, M.P.L.; Natal Jorge, R.; Lopes, S.I.O.; et al. Identification of Ulcers and Erosions by the Novel Pillcam™ Crohn’s Capsule Using a Convolutional Neural Network: A Multicentre Pilot Study. J. Crohn’s Colitis 2022, 16, 169–172. [Google Scholar] [CrossRef] [PubMed]
  54. Mascarenhas Saraiva, M.; Ferreira, J.P.S.; Cardoso, H.; Afonso, J.; Ribeiro, T.; Andrade, P.; Parente, M.P.L.; Jorge, R.N.; Macedo, G. Artificial intelligence and colon capsule endoscopy: Automatic detection of blood in colon capsule endoscopy using a convolutional neural network. Endosc. Int. Open 2021, 9, E1264–E1268. [Google Scholar] [CrossRef]
  55. Ding, Z.; Shi, H.; Zhang, H.; Meng, L.; Fan, M.; Han, C.; Zhang, K.; Ming, F.; Xie, X.; Liu, H.; et al. Gastroenterologist-Level Identification of Small-Bowel Diseases and Normal Variants by Capsule Endoscopy Using a Deep-Learning Model. Gastroenterology 2019, 157, 1044–1054.e5. [Google Scholar] [CrossRef]
  56. Aoki, T.; Yamada, A.; Kato, Y.; Saito, H.; Tsuboi, A.; Nakada, A.; Niikura, R.; Fujishiro, M.; Oka, S.; Ishihara, S.; et al. Automatic detection of various abnormalities in capsule endoscopy videos by a deep learning-based system: A multicenter study. Gastrointest. Endosc. 2021, 93, 165–173.e1. [Google Scholar] [CrossRef] [PubMed]
  57. Mascarenhas Saraiva, M.J.; Afonso, J.; Ribeiro, T.; Ferreira, J.; Cardoso, H.; Andrade, A.P.; Parente, M.; Natal, R.; Mascarenhas Saraiva, M.; Macedo, G. Deep learning and capsule endoscopy: Automatic identification and differentiation of small bowel lesions with distinct haemorrhagic potential using a convolutional neural network. BMJ Open Gastroenterol. 2021, 8, e000753. [Google Scholar] [CrossRef] [PubMed]
  58. Mascarenhas, M.; Ribeiro, T.; Afonso, J.; Ferreira, J.P.S.; Cardoso, H.; Andrade, P.; Parente, M.P.L.; Jorge, R.N.; Mascarenhas Saraiva, M.; Macedo, G. Deep learning and colon capsule endoscopy: Automatic detection of blood and colonic mucosal lesions using a convolutional neural network. Endosc. Int. Open 2022, 10, E171–E177. [Google Scholar] [CrossRef] [PubMed]
  59. Xie, X.; Xiao, Y.F.; Zhao, X.Y.; Li, J.J.; Yang, Q.Q.; Peng, X.; Nie, X.B.; Zhou, J.Y.; Zhao, Y.B.; Yang, H.; et al. Development and Validation of an Artificial Intelligence Model for Small Bowel Capsule Endoscopy Video Review. JAMA Netw. Open 2022, 5, e2221992. [Google Scholar] [CrossRef] [PubMed]
  60. Xia, J.; Xia, T.; Pan, J.; Gao, F.; Wang, S.; Qian, Y.Y.; Wang, H.; Zhao, J.; Jiang, X.; Zou, W.B.; et al. Use of artificial intelligence for detection of gastric lesions by magnetically controlled capsule endoscopy. Gastrointest. Endosc. 2021, 93, 133–139.e4. [Google Scholar] [CrossRef]
  61. Pan, J.; Xia, J.; Jiang, B.; Zhang, H.; Zhang, H.; Li, Z.S.; Liao, Z. Real-time identification of gastric lesions and anatomical landmarks by artificial intelligence during magnetically controlled capsule endoscopy. Endoscopy 2022, 54, E622–E623. [Google Scholar] [CrossRef]
  62. Mascarenhas, M.; Mendes, F.; Ribeiro, T.; Afonso, J.; Cardoso, P.; Martins, M.; Cardoso, H.; Andrade, P.; Ferreira, J.; Saraiva, M.M.; et al. Deep Learning and Minimally Invasive Endoscopy: Automatic Classification of Pleomorphic Gastric Lesions in Capsule Endoscopy. Clin. Transl. Gastroenterol. 2023, 14, e00609. [Google Scholar] [CrossRef]
  63. Eliakim, R. The impact of panenteric capsule endoscopy on the management of Crohn’s disease. Therap. Adv. Gastroenterol. 2017, 10, 737–744. [Google Scholar] [CrossRef]
  64. Eliakim, R.; Yablecovitch, D.; Lahat, A.; Ungar, B.; Shachar, E.; Carter, D.; Selinger, L.; Neuman, S.; Ben-Horin, S.; Kopylov, U. A novel PillCam Crohn’s capsule score (Eliakim score) for quantification of mucosal inflammation in Crohn’s disease. United Eur. Gastroenterol. J. 2020, 8, 544–551. [Google Scholar] [CrossRef]
  65. Mussetto, A.; Arena, R.; Fuccio, L.; Trebbi, M.; Tina Garribba, A.; Gasperoni, S.; Manzi, I.; Triossi, O.; Rondonotti, E. A new panenteric capsule endoscopy-based strategy in patients with melena and a negative upper gastrointestinal endoscopy: A prospective feasibility study. Eur. J. Gastroenterol. Hepatol. 2021, 33, 686–690. [Google Scholar] [CrossRef]
  66. Kim, S.H.; Chun, H.J. Capsule Endoscopy: Pitfalls and Approaches to Overcome. Diagnostics 2021, 11, 1765. [Google Scholar] [CrossRef]
  67. Cancer Today IARC. Global Cancer Observatory: Cancer Today. Available online: https://gco.iarc.fr/today (accessed on 30 August 2023).
  68. Ribeiro, T.; Fernández-Urien, I.; Cardoso, H. Chapter 15—Colon capsule endoscopy and artificial intelligence: A perfect match for panendoscopy. In Artificial Intelligence in Capsule Endoscopy; Mascarenhas, M., Cardoso, H., Macedo, G., Eds.; Academic Press: Cambridge, MA, USA, 2023; pp. 255–269. [Google Scholar] [CrossRef]
  69. Baddeley, R.; Aabakken, L.; Veitch, A.; Hayee, B. Green Endoscopy: Counting the Carbon Cost of Our Practice. Gastroenterology 2022, 162, 1556–1560. [Google Scholar] [CrossRef] [PubMed]
  70. Sebastian, S.; Dhar, A.; Baddeley, R.; Donnelly, L.; Haddock, R.; Arasaradnam, R.; Coulter, A.; Disney, B.R.; Griffiths, H.; Healey, C.; et al. Green endoscopy: British Society of Gastroenterology (BSG), Joint Accreditation Group (JAG) and Centre for Sustainable Health (CSH) joint consensus on practical measures for environmental sustainability in endoscopy. Gut 2023, 72, 12–26. [Google Scholar] [CrossRef] [PubMed]
  71. Levy, I.; Gralnek, I.M. Complications of diagnostic colonoscopy, upper endoscopy, and enteroscopy. Best. Pract. Res. Clin. Gastroenterol. 2016, 30, 705–718. [Google Scholar] [CrossRef] [PubMed]
  72. Helmers, R.A.; Dilling, J.A.; Chaffee, C.R.; Larson, M.V.; Narr, B.J.; Haas, D.A.; Kaplan, R.S. Overall Cost Comparison of Gastrointestinal Endoscopic Procedures With Endoscopist- or Anesthesia-Supported Sedation by Activity-Based Costing Techniques. Mayo Clin. Proc. Innov. Qual. Outcomes 2017, 1, 234–241. [Google Scholar] [CrossRef] [PubMed]
  73. Silva, V.M.; Rosa, B.; Mendes, F.; Mascarenhas, M.; Saraiva, M.M.; Cotter, J. Chapter 11—Small bowel and colon cleansing in capsule endoscopy. In Artificial Intelligence in Capsule Endoscopy; Mascarenhas, M., Cardoso, H., Macedo, G., Eds.; Academic Press: Cambridge, MA, USA, 2023; pp. 181–197. [Google Scholar] [CrossRef]
  74. Mascarenhas, M.; Afonso, J.; Ribeiro, T.; Andrade, P.; Cardoso, H.; Macedo, G. The Promise of Artificial Intelligence in Digestive Healthcare and the Bioethics Challenges It Presents. Medicina 2023, 59, 790. [Google Scholar] [CrossRef] [PubMed]
  75. Kruse, C.S.; Frederick, B.; Jacobson, T.; Monticone, D.K. Cybersecurity in healthcare: A systematic review of modern threats and trends. Technol. Health Care 2017, 25, 1–10. [Google Scholar] [CrossRef] [PubMed]
  76. Regulation (EU). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance); Publications Office of the European Union: Luxembourg, 2016; pp. 1–88. [Google Scholar]
  77. Mascarenhas, M.; Santos, A.; Macedo, G. Chapter 12—Introducing blockchain technology in data storage to foster big data and artificial intelligence applications in healthcare systems. In Artificial Intelligence in Capsule Endoscopy; Mascarenhas, M., Cardoso, H., Macedo, G., Eds.; Academic Press: Cambridge, MA, USA, 2023; pp. 199–216. [Google Scholar] [CrossRef]
  78. Suresh, H.; Guttag, J.V. A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. In Proceedings of the EAAMO’21: Proceedings of the 1st ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, New York, NY, USA, 5–9 October 2021. [Google Scholar] [CrossRef]
  79. Ying, X. An Overview of Overfitting and its Solutions. J. Phys. Conf. Ser. 2019, 1168, 022022. [Google Scholar] [CrossRef]
  80. Price, W.N., II. Black-Box Medicine. Harv. J. Law. Technol. 2014, 28, 419. [Google Scholar]
  81. Messmann, H.; Bisschops, R.; Antonelli, G.; Libânio, D.; Sinonquel, P.; Abdelrahim, M.; Ahmad, O.F.; Areia, M.; Bergman, J.; Bhandari, P.; et al. Expected value of artificial intelligence in gastrointestinal endoscopy: European Society of Gastrointestinal Endoscopy (ESGE) Position Statement. Endoscopy 2022, 54, 1211–1231. [Google Scholar] [CrossRef]
  82. FDA. Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan; FDA Statement: Silver Spring, MD, USA, 2021.
Figure 1. Various types of capsule endoscopy devices.
Figure 2. Conventional reading time vs. artificial intelligence (AI)-enhanced capsule endoscopy assessment.
Figure 3. Essential criteria for development of trustworthy AI in capsule endoscopy.
Figure 4. Examples of generated heatmaps.
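Heatmaps such as those in Figure 4 are commonly produced with Grad-CAM-style class activation mapping, which weights a CNN's last convolutional feature maps by the gradient of the predicted class score. The sketch below is a minimal illustration of that general technique, not the code used in the reviewed studies; the Keras/TensorFlow model, the layer name (shown here as the final convolutional activation of a Keras Xception backbone) and the preprocessing are all assumptions.

```python
# Minimal Grad-CAM sketch (illustrative only; assumes a Keras/TensorFlow
# Xception-based frame classifier similar to those listed in Table 1).
import numpy as np
import tensorflow as tf

def gradcam_heatmap(model, frame, conv_layer="block14_sepconv2_act"):
    """Return a heatmap in [0, 1] for the model's top-scoring class.

    `frame` is a single preprocessed image of shape (H, W, 3);
    `conv_layer` defaults to the last conv activation of Keras' Xception.
    """
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(frame[None, ...])  # add batch axis
        top_class = tf.argmax(preds[0])
        score = preds[:, top_class]                      # score of the top class
    grads = tape.gradient(score, conv_maps)              # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))      # pool gradients per channel
    heatmap = tf.einsum("hwc,c->hw", conv_maps[0], weights)
    heatmap = tf.nn.relu(heatmap)                        # keep positive evidence only
    return (heatmap / (tf.reduce_max(heatmap) + 1e-8)).numpy()
```

The resulting map is typically resized to the frame's resolution and overlaid as a semi-transparent color map, yielding displays like those shown in Figure 4.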
Table 1. Overview of the published work regarding convolutional neural network (CNN) development for capsule endoscopy.
| Group | Publication, Author, Year | Study Aim | Capsule Types | Centers (n) | Exams (n) | Total Frames (n) | Lesion Frames (n) | Type of CNN | Dataset Methods | Analysis Methods | Classification Categories | SEN (%) | SPE (%) | AUC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Specific CNN for Small Bowel Lesions | Aoki, 2020 [37] | Detection of blood | SB2, SB3 | 1 | 66 | 38,055 | 6711 | ResNet | Frame labeling of all datasets (normal vs. blood content) | Train–test split (73–26%) | Blood | 97 | 100 | 100 |
| | Afonso, 2021 [38] | Detection of blood | SB3 | 2 | 1483 | 23,190 | 13,515 | Xception | Frame labeling of all datasets (normal vs. blood or hematic residues) | Staged incremental frame design with train–test split (80–20%) | Blood/hematic residues | 98 | 98 | 100 |
| | Leenhardt, 2019 [39] | Detection of angiectasia | SB3 | French national still-frame database (from 13 centers) | NA | 1200 | 600 | YOLO | Previous manual annotation of all angiectasias for the French national database | Deep feature extraction from the manually annotated dataset (300 lesion and 300 normal frames); validation with classification of a new dataset (300 lesion and 300 normal frames) | Angiectasia | 100 | 96 | NK |
| | Tsuboi, 2020 [40] | Detection of angiectasia | SB2, SB3 | 2 | 169 | 12,725 | 2725 | SSD | Manual annotation of all angiectasias | CNN trained exclusively on positive frames (2237 with angiectasias); validation on mixed data with positive and negative frames (488 angiectasias and 10,000 normal frames) | Angiectasia | 99 | 98 | 100 |
| | Houdeville, 2021 [41] | Detection of angiectasia | SB3, Mirocam | NA | NA | 12,255 | 613 | YOLO | Previously trained on SB3 devices | Validation with 626 new SB3 still frames and 621 new Mirocam still frames | Angiectasia (SB3) | 97 | 99 | NK |
| | | | | | | | | | | | Angiectasia (Mirocam) | 96 | 98 | NK |
| | Ribeiro, 2021 [42] | Detection of vascular lesions + categorization of bleeding potential | SB3 | 2 | 1483 | 11,588 | 2063 | Xception | Frame labeling of all datasets (normal (N) vs. red spots (P1V) vs. angiectasia or varices (P2V)) | Train–test split (80–20%) with 3 × 3 confusion matrix | N vs. all | 90 | 97 | 98 |
| | | | | | | | | | | | P1V vs. all | 92 | 95 | 97 |
| | | | | | | | | | | | P2V vs. all | 94 | 95 | 98 |
| | Aoki, 2019 [43] | Detection of ulcerative lesions | SB2, SB3 | 1 | 180 | 15,800 | 5800 | SSD | Manual annotation of all ulcers or erosions | CNN trained exclusively on positive frames (5630 with ulcers); validation on mixed data with positive and negative frames (440 lesions and 10,000 normal frames) | Ulcers or erosions | 88 | 91 | 96 |
| | Klang, 2020 [44] | Detection of ulcers + differentiation from normal mucosa | SB3 | 1 | 49 | 17,640 | 7391 | Xception | Frame labeling of all datasets (normal vs. ulcer) | 5-fold cross-validation with train–test split (80–20%) | Ulcers (mean of cross-validation) | 95 | 97 | 99 |
| | Barash, 2021 [45] | Categorization of severity grade of ulcers | SB3 | 1 | NK | 1546 (random selection of ulcer frames from the Klang dataset [44]) | 1546 | ResNet | Frame labeling of all datasets (mild ulceration (1) vs. moderate ulceration (2) vs. severe ulceration (3)) | Train–test split (80–20%) with 3 × 3 confusion matrix | 1 vs. 2 | 34 | 71 | 57 |
| | | | | | | | | | | | 2 vs. 3 | 73 | 91 | 93 |
| | | | | | | | | | | | 1 vs. 3 | 91 | 91 | 96 |
| | Afonso, 2021 [46] | Detection of ulcerative lesions + categorization of bleeding potential | SB3 | 2 | 2565 | 23,720 | 5675 | Xception | Frame labeling of all datasets (normal (N) vs. erosions (P1E) vs. ulcers with uncertain/intermediate bleeding potential (P1U) vs. ulcers with high bleeding potential (P2U)) | Train–test split (80–20%) with 4 × 4 confusion matrix | N vs. all | 94 | 91 | 98 |
| | | | | | | | | | | | P1E vs. all | 73 | 96 | 95 |
| | | | | | | | | | | | P1U vs. all | 72 | 96 | 96 |
| | | | | | | | | | | | P2U vs. all | 91 | 99 | 100 |
| | Saito, 2020 [47] | Detection of protruding lesions | SB2, SB3 | 3 | 385 | 48,091 | 38,091 | SSD | Manual annotation of all protruding lesions (polyps, nodules, epithelial tumors, submucosal tumors, venous structures) | CNN trained exclusively on positive frames (30,584 with protruding lesions); validation on mixed data with positive and negative frames (7507 lesions and 10,000 normal frames) | Protruding lesions | 91 | 80 | 91 |
| | Saraiva, 2021 [48] | Detection of protruding lesions + categorization of bleeding potential | SB3 | 1 | 1483 | 18,625 | 2830 | Xception | Frame labeling of all data (normal (N) vs. protruding lesions with uncertain/intermediate bleeding potential (P1PR) vs. protruding lesions with high bleeding potential (P2PR)) | Train–test split (80–20%) with 3 × 3 confusion matrix | N vs. all | 92 | 99 | 99 |
| | | | | | | | | | | | P1PR vs. all | 96 | 94 | 99 |
| | | | | | | | | | | | P2PR vs. all | 97 | 98 | 100 |
| Specific CNN for Colonic Lesions | Yamada, 2021 [49] | Detection of colorectal neoplasias | COLON2 | 1 | 184 | 20,717 | 17,783 | SSD | Manual annotation of all colorectal neoplasias (polyps and cancers) | CNN trained exclusively on positive frames (15,933 with colorectal neoplasias); validation on mixed data with positive and negative frames (1805 lesions and 2934 normal frames) | Colorectal neoplasias | 79 | 87 | 90 |
| | Saraiva, 2021 [50] | Detection of protruding lesions | COLON2 | 1 | 24 | 3640 | 860 | Xception | Frame labeling of all datasets (normal vs. protruding lesions: polyps, epithelial tumors, subepithelial lesions) | Train–test split (80–20%) | Protruding lesions | 91 | 93 | 97 |
| | Ribeiro, 2022 [51] | Detection of ulcerative lesions | COLON2 | 2 | 124 | 37,319 | 3570 | Xception | Frame labeling of all datasets (normal vs. ulcers or erosions) | Train–validation (for hyperparameter tuning)–test split (70–20–10%) | Ulcers or erosions | 97 | 100 | 100 |
| | Majtner, 2021 [52] | Panenteric (small bowel and colon) detection of ulcerative lesions | CROHN | 1 | 38 | 77,744 | 2748 | ResNet | Frame labeling of all datasets (normal vs. ulcers or erosions) | Train–validation–test split (70–20–10%) with patient split | Ulcers or erosions | 96 | 100 | NK |
| | Ferreira, 2022 [53] | Panenteric (small bowel and colon) detection of ulcerative lesions | CROHN | 2 | 59 | 24,675 | 5300 | Xception | Frame labeling of all datasets (normal vs. ulcers or erosions) | Train–test split (80–20%) | Ulcers or erosions | 98 | 99 | 100 |
| | Saraiva, 2021 [54] | Detection of blood | COLON2 | 1 | 24 | 5825 | 2975 | Xception | Frame labeling of all datasets (normal vs. blood or hematic residues) | Train–test split (80–20%) | Blood or hematic residues | 100 | 93 | 100 |
| Complex CNN for Enteric and Colonic Lesions | Ding, 2019 [55] | Detection of abnormal findings in the small bowel, without discrimination capacity | NaviCam | 77 | 1970 | 158,235 + validation set | NK | ResNet | Frame labeling of the training set (inflammation, ulcer, polyps, lymphangiectasia, bleeding, vascular disease, protruding lesion, lymphatic follicular hyperplasia, diverticulum, parasite, normal) | Testing with 5000 independent CE videos | Abnormal findings | 100 | 100 | NK |
| | Aoki, 2021 [56] | Detection of multiple types of lesions in the small bowel | SB3 | 3 | NK | 66,028 + validation set | 44,684 | Combined 3 SSD + 1 ResNet | Manual annotation of all mucosal breaks, angiectasias, protruding lesions and blood content | CNN trained on mixed data with positive and negative frames (44,684 lesions and 21,344 normal frames); validation on 379 full videos | Mucosal breaks vs. other lesions | 96 | 99 | NK |
| | | | | | | | | | | | Angiectasias vs. other lesions | 79 | 99 | NK |
| | | | | | | | | | | | Protruding lesions vs. other lesions | 100 | 95 | NK |
| | | | | | | | | | | | Blood content vs. other lesions | 100 | 100 | NK |
| | Saraiva, 2021 [57] | Detection of multiple types of lesions in the small bowel + categorization of bleeding potential | SB3, OMOM | 2 | 5793 | 53,555 | 35,545 | Xception | Frame labeling of all data (normal (N) vs. lymphangiectasias (P0L) vs. xanthomas (P0X) vs. erosions (P1E) vs. ulcers with uncertain/intermediate bleeding potential (P1U) vs. ulcers with high bleeding potential (P2U) vs. red spots (P1RS) vs. vascular lesions (angiectasias or varices) (P2V) vs. protruding lesions with uncertain/intermediate bleeding potential (P1P) vs. protruding lesions with high bleeding potential (P2P) vs. blood or hematic residues) | Train–test split (80–20%) with 11 × 11 confusion matrix | N vs. all | 92 | 96 | 99 |
| | | | | | | | | | | | P0L vs. all | 88 | 99 | 99 |
| | | | | | | | | | | | P0X vs. all | 85 | 98 | 99 |
| | | | | | | | | | | | P1E vs. all | 73 | 99 | 97 |
| | | | | | | | | | | | P1U vs. all | 81 | 99 | 99 |
| | | | | | | | | | | | P2U vs. all | 94 | 98 | 100 |
| | | | | | | | | | | | P1RS vs. all | 80 | 99 | 98 |
| | | | | | | | | | | | P2V vs. all | 91 | 99 | 100 |
| | | | | | | | | | | | P1P vs. all | 93 | 99 | 99 |
| | | | | | | | | | | | P2P vs. all | 94 | 100 | 99 |
| | | | | | | | | | | | Blood vs. all | 99 | 100 | 100 |
| | Saraiva, 2022 [58] | Detection of pleomorphic lesions or blood in the colon | COLON2 | 2 | 124 | 9005 | 5930 | Xception | Frame labeling of all datasets (normal (N) vs. blood or hematic residues (B) vs. mucosal lesions (ML), including ulcers, erosions, vascular lesions (red spots, angiectasia and varices) and protruding lesions (polyps, epithelial tumors, submucosal tumors and nodes)) | Train–test split (80–20%) with 3 × 3 confusion matrix | N vs. all | 97 | 96 | 100 |
| | | | | | | | | | | | Blood vs. all | 100 | 100 | 100 |
| | | | | | | | | | | | ML vs. all | 92 | 99 | 90 |
| | Xie, 2022 [59] | Detection of multiple types of lesions in the small bowel + differentiation from normal mucosa | OMOM | 51 | 5825 | 757,770 | NK | EfficientNet + YOLO | Frame labeling of all datasets: protruding lesions (venous structure, nodule, mass/tumor, polyp(s)), flat lesions (angiectasia, plaque (red), plaque (white), red spot, abnormal villi), mucosa (lymphangiectasia, erythematous, edematous), excavated lesions (erosion, ulcer, aphtha) and content (blood, parasite) | CNN trained on mixed data with positive and negative frames; validation on 2898 full videos | Venous structure vs. all | 98 | 100 | NK |
| | | | | | | | | | | | Nodule vs. all | 97 | 100 | NK |
| | | | | | | | | | | | Mass or tumor vs. all | 95 | 100 | NK |
| | | | | | | | | | | | Polyp vs. all | 95 | 100 | NK |
| | | | | | | | | | | | Angiectasia vs. all | 96 | 100 | NK |
| | | | | | | | | | | | Plaque (red) vs. all | 94 | 100 | NK |
| | | | | | | | | | | | Plaque (white) vs. all | 95 | 100 | NK |
| | | | | | | | | | | | Red spot vs. all | 96 | 100 | NK |
| | | | | | | | | | | | Abnormal villi vs. all | 95 | 100 | NK |
| | | | | | | | | | | | Lymphangiectasia vs. all | 98 | 100 | NK |
| | | | | | | | | | | | Erythematous mucosa vs. all | 95 | 100 | NK |
| | | | | | | | | | | | Edematous mucosa vs. all | NK | NK | NK |
| | | | | | | | | | | | Erosion vs. all | NK | NK | NK |
| | | | | | | | | | | | Ulcer vs. all | NK | NK | NK |
| | | | | | | | | | | | Aphtha vs. all | NK | NK | NK |
| | | | | | | | | | | | Blood vs. all | NK | NK | NK |
| | | | | | | | | | | | Parasite vs. all | NK | NK | NK |
| Complex CNN for Gastric Lesions | Xia, 2021 [60] | Detection of multiple types of lesions + differentiation from normal mucosa | NaviCam MCE | 1 | 797 | 1,023,955 | NK | ResNet | Frame labeling of the training set (erosions, polyps, ulcers, submucosal tumors, xanthomas, normal) | Testing with 100 independent CE videos | Pleomorphic lesions | 96 | 76 | 84 |
| | Pan, 2022 [61] | Real-time detection of both gastric anatomic landmarks and different types of lesions | NaviCam MCE | 1 | 906 | 34,062 + validation set | NK | ResNet | Frame labeling of all datasets (ulcerative lesions (ulcers and erosions), protruding lesions (polyps and submucosal tumors), xanthomas, normal mucosa) | Prospective validation on 50 CE exams | Gastric lesions | 99 | NK | NK |
| | | | | | | | | | | | Anatomic landmarks | 94 | NK | NK |
| | Saraiva, 2023 [62] | Detection of pleomorphic gastric lesions | SB3, CROHN, OMOM | 2 | 107 | 12,918 | 6074 | Xception | Frame labeling of all datasets (normal vs. pleomorphic lesions (vascular, ulcerative or protruding lesions, or blood/hematic residues)) | Train–test split (80–20%) with patient-split design and 3-fold cross-validation during training | Pleomorphic lesions (mean of cross-validation) | 88 | 92 | 96 |
| | | | | | | | | | | | Pleomorphic lesions (test set) | 97 | 96 | 100 |

SEN: sensitivity (%); SPE: specificity (%); AUC: area under the curve; NK: not known; NA: not applicable; CNN: convolutional neural network; CE: capsule endoscopy; MCE: magnetically controlled capsule endoscopy.
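As a practical footnote to the metrics reported in Table 1, the sketch below shows how sensitivity (SEN), specificity (SPE) and AUC are conventionally computed from the outputs of a binary lesion-vs.-normal frame classifier, and how a patient-split design (as used in [52,62]) keeps frames from a single exam out of both training and test sets. The arrays are illustrative placeholders, not data from any reviewed study.

```python
# Illustrative computation of SEN, SPE and AUC, plus a patient-level split;
# the labels, scores and exam ids below are made-up placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import GroupShuffleSplit

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])       # 1 = lesion frame, 0 = normal
y_score = np.array([0.9, 0.2, 0.8, 0.4, 0.1, 0.3, 0.7, 0.6])
y_pred = (y_score >= 0.5).astype(int)             # threshold the CNN output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sen = tp / (tp + fn)                              # sensitivity (true-positive rate)
spe = tn / (tn + fp)                              # specificity (true-negative rate)
auc = roc_auc_score(y_true, y_score)              # area under the ROC curve
print(f"SEN={sen:.0%}  SPE={spe:.0%}  AUC={auc:.2f}")

# Patient-split design: every frame from a given exam stays on one side of
# the split, so near-duplicate frames cannot leak from training into testing.
exam_ids = np.array([1, 1, 2, 2, 3, 3, 4, 4])     # one id per frame's source exam
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(y_score, y_true, groups=exam_ids))
```

Frame-level splits that ignore the source exam tend to inflate reported performance, which is one reason the patient-split and prospective designs in Table 1 are the more demanding benchmarks.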