Search Results (23)

Search Parameters:
Keywords = endoscopic video analysis

10 pages, 1466 KiB  
Review
Non-Robotic Endoscopic-Assisted Internal Mammary Artery Harvest—A Historical Review and Recent Advancements
by De Qing Görtzen, Fleur Sampon, Joost Ter Woorst and Ferdi Akca
J. Cardiovasc. Dev. Dis. 2025, 12(2), 68; https://doi.org/10.3390/jcdd12020068 - 13 Feb 2025
Viewed by 641
Abstract
Background: The non-robotic endoscopic harvest of the internal mammary artery (IMA) facilitates minimally invasive bypass grafting while minimizing chest wall trauma. The technique was pioneered in the early 1990s and has recently regained popularity due to its accessibility and reproducibility. This review aims to provide an overview of endoscopic IMA harvest from its inception to the present. Methods: In August 2024, a literature search was performed using the electronic databases of the Cochrane Controlled Trials Register (CCTR) and PubMed. To obtain optimal search results, the keywords “thoracoscopic”, “endoscopic”, “minimally invasive”, “video-assisted”, “video-assisted thoracoscopic surgery VATS”, and “internal mammary artery” or “internal thoracic artery” were used, excluding the term “robotic”. References from the extracted articles were also reviewed to identify additional studies on endoscopic IMA harvest. Results: A total of 17 articles were included in the final analysis. Left internal mammary artery (LIMA) harvest times between 17 and 164 min were reported, with LIMA injury rates between 0.7% and 2.2%. Conclusions: After a 15-year period without scientific publications, interest in the endoscopic-assisted approach has been rekindled in recent years due to the reduction in chest trauma compared to direct-vision harvest and the widespread availability of conventional endoscopic tools. This renewed focus underscores the potential to make minimally invasive coronary surgery available in all centers. Full article
(This article belongs to the Special Issue New Advances in Minimally Invasive Coronary Surgery)

16 pages, 2668 KiB  
Article
Localization of Capsule Endoscope in Alimentary Tract by Computer-Aided Analysis of Endoscopic Images
by Ruiyao Zhang, Boyuan Peng, Yiyang Liu, Xinkai Liu, Jie Huang, Kohei Suzuki, Yuki Nakajima, Daiki Nemoto, Kazutomo Togashi and Xin Zhu
Sensors 2025, 25(3), 746; https://doi.org/10.3390/s25030746 - 26 Jan 2025
Viewed by 854
Abstract
Capsule endoscopy is a common method for detecting digestive diseases. The location of a capsule endoscope should be constantly monitored through a visual inspection of the endoscopic images by medical staff to confirm the examination’s progress. In this study, we proposed a computer-aided analysis (CADx) method for the localization of a capsule endoscope. First, a classifier based on a Swin Transformer was proposed to classify each frame of the capsule endoscopy videos as an image of the stomach, small intestine, or large intestine. Then, a K-means algorithm was used to correct outliers in the classification results. Finally, a localization algorithm was proposed to determine the position of the capsule endoscope in the alimentary tract. The proposed method was developed and validated using videos of 204 consecutive cases. The proposed CADx, based on a Swin Transformer, showed a precision of 93.46%, 97.28%, and 98.68% for the classification of endoscopic images recorded in the stomach, small intestine, and large intestine, respectively. Compared with the landmarks identified by endoscopists, the proposed method demonstrated an average transition time error of 16.2 s in locating the intersection of the stomach and small intestine, and of 13.5 s in locating that of the small intestine and the large intestine, based on the 20 validation videos with an average length of 3261.8 s. The proposed method accurately localizes the capsule endoscope in the alimentary tract and may replace the laborious real-time visual inspection in capsule endoscopic examinations. Full article
(This article belongs to the Special Issue Advances in Optical Sensing, Instrumentation and Systems: 2nd Edition)
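The post-processing idea described in this abstract (clean up noisy per-frame organ labels, then read off the transition times) can be illustrated with a minimal Python sketch. The snippet below substitutes a simple sliding-window majority vote for the authors' K-means outlier correction, and the label encoding, window size, and frame rate are assumptions chosen purely for illustration.

```python
import numpy as np
from collections import Counter

# Hypothetical label encoding for illustration: 0 = stomach, 1 = small intestine, 2 = large intestine.

def smooth_labels(frame_labels, window=31):
    """Replace each frame label by the majority label inside a centered window."""
    labels = np.asarray(frame_labels)
    half = window // 2
    smoothed = labels.copy()
    for i in range(len(labels)):
        lo, hi = max(0, i - half), min(len(labels), i + half + 1)
        smoothed[i] = Counter(labels[lo:hi].tolist()).most_common(1)[0][0]
    return smoothed

def transition_times(smoothed, fps=2.0):
    """Frames (and times in seconds) at which the predicted organ changes."""
    changes = np.flatnonzero(np.diff(smoothed) != 0) + 1
    return [(int(f), int(smoothed[f]), f / fps) for f in changes]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.concatenate([np.zeros(400), np.ones(1200), np.full(800, 2)]).astype(int)
    noisy = truth.copy()
    flips = rng.choice(len(noisy), size=60, replace=False)   # simulate classifier outliers
    noisy[flips] = rng.integers(0, 3, size=60)
    print(transition_times(smooth_labels(noisy)))
```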

29 pages, 17674 KiB  
Article
Noise-Perception Multi-Frame Collaborative Network for Enhanced Polyp Detection in Endoscopic Videos
by Haoran Li, Guoyong Zhen, Chengqun Chu, Yuting Ma and Yongnan Zhao
Electronics 2025, 14(1), 62; https://doi.org/10.3390/electronics14010062 - 27 Dec 2024
Viewed by 737
Abstract
The accurate detection and localization of polyps during endoscopic examinations are critical for early disease diagnosis and cancer prevention. However, the presence of artifacts and noise, along with the high similarity between polyps and surrounding tissues in color, shape, and texture, complicates polyp detection in video frames. To tackle these challenges, we deployed multivariate regression analysis to refine the model and introduced a Noise-Suppressing Perception Network (NSPNet) designed for enhanced performance. NSPNet leverages wavelet transform to enhance the model’s resistance to noise and artifacts while improving a multi-frame collaborative detection strategy for dynamic polyp detection in endoscopic videos, efficiently utilizing temporal information to strengthen features across frames. Specifically, we designed a High-Low Frequency Feature Fusion (HFLF) framework, which allows the model to capture high-frequency details more effectively. Additionally, we introduced an improved STFT-LSTM Polyp Detection (SLPD) module that utilizes temporal information from video sequences to enhance feature fusion in dynamic environments. Lastly, we integrated an Image Augmentation Polyp Detection (IAPD) module to improve performance on unseen data through preprocessing enhancement strategies. Extensive experiments demonstrate that NSPNet outperforms nine state-of-the-art (SOTA) methods across four datasets on key performance metrics, including F1-score and recall. Full article
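The high-/low-frequency separation that motivates the HFLF design can be sketched with a plain 2-D wavelet transform. This is not the paper's module; the wavelet choice, the detail-magnitude map, and the coefficient-thresholding step below are assumptions meant only to illustrate the general idea (PyWavelets is used for the transform).

```python
import numpy as np
import pywt  # PyWavelets

def split_frequencies(frame, wavelet="haar"):
    """Single-level 2-D DWT: return the low-frequency approximation and a
    high-frequency map combining the horizontal/vertical/diagonal detail bands."""
    cA, (cH, cV, cD) = pywt.dwt2(frame.astype(np.float32), wavelet)
    high = np.sqrt(cH**2 + cV**2 + cD**2)   # magnitude of the detail coefficients
    return cA, high

def suppress_weak_details(frame, wavelet="haar", keep=0.9):
    """Zero the weakest detail coefficients (a crude noise filter) and reconstruct."""
    cA, (cH, cV, cD) = pywt.dwt2(frame.astype(np.float32), wavelet)
    thr = np.quantile(np.abs(np.stack([cH, cV, cD])), 1.0 - keep)
    cH, cV, cD = [np.where(np.abs(c) < thr, 0.0, c) for c in (cH, cV, cD)]
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

if __name__ == "__main__":
    frame = np.random.rand(256, 256)          # stand-in for a grayscale video frame
    low, high = split_frequencies(frame)
    print(low.shape, high.shape, suppress_weak_details(frame).shape)
```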

10 pages, 987 KiB  
Article
Cerebral Aneurysms and Arteriovenous Malformation: Preliminary Experience with the Use of Near-Infrared Fluorescence Imaging Applied to Endoscopy
by Denis Aiudi, Alessio Iacoangeli, Andrea Mattioli, Alessio Raggi, Mauro Dobran, Gabriele Polonara, Riccardo Gigli, Maurizio Iacoangeli and Maurizio Gladi
J. Pers. Med. 2024, 14(12), 1117; https://doi.org/10.3390/jpm14121117 - 22 Nov 2024
Cited by 1 | Viewed by 803
Abstract
Background/Objectives: Indocyanine green video angiography, integrated into the operative microscope, is frequently used in cerebrovascular surgery. This technology is often preferred, for cost or availability, to Doppler or intraoperative DSA (digital subtraction angiography). On the same grounds, in our preliminary experience, the SPY mode of the Stryker endoscope, which allows high-definition visualization of fluorescence, could partially substitute for the aforementioned devices. Methods: A retrospective analysis was conducted on a series of five patients suffering from cerebral aneurysm or AVM (arteriovenous malformation) who underwent, during the last year, surgical treatment with the aid of the microscope supported by the Stryker endoscope in the SPY mode for the visualization of the fluorescence emitted by indocyanine green. Results: All aneurysms were completely excluded from the cerebrovascular circulation, with no residual filling at the neck and no occlusion of adjacent vessels; the complete removal of the nidus in all the AVMs was achieved with no residues. Conclusions: The intraoperative use of indocyanine green was a safe, rapid, and effective technique within a preliminary case study of “regular—not giant” aneurysms and superficially located AVMs. The endoscopic technique in the SPY mode allowed us to partially substitute for the use of Doppler, intraoperative angiography, and integrated microscope video angiography. For these purposes, we propose, in selected cases, the support of the endoscope in the SPY mode during the microsurgical procedure in order to visualize the fluorescence of indocyanine green. Full article
(This article belongs to the Special Issue Clinical and Experimental Surgery in Personalized Molecular Medicine)

24 pages, 3240 KiB  
Article
ESFPNet: Efficient Stage-Wise Feature Pyramid on Mix Transformer for Deep Learning-Based Cancer Analysis in Endoscopic Video
by Qi Chang, Danish Ahmad, Jennifer Toth, Rebecca Bascom and William E. Higgins
J. Imaging 2024, 10(8), 191; https://doi.org/10.3390/jimaging10080191 - 7 Aug 2024
Viewed by 2133
Abstract
For patients at risk of developing either lung cancer or colorectal cancer, the identification of suspect lesions in endoscopic video is an important procedure. The physician performs an endoscopic exam by navigating an endoscope through the organ of interest, be it the lungs or intestinal tract, and performs a visual inspection of the endoscopic video stream to identify lesions. Unfortunately, this entails a tedious, error-prone search over a lengthy video sequence. We propose a deep learning architecture that enables the real-time detection and segmentation of lesion regions from endoscopic video, with our experiments focused on autofluorescence bronchoscopy (AFB) for the lungs and colonoscopy for the intestinal tract. Our architecture, dubbed ESFPNet, draws on a pretrained Mix Transformer (MiT) encoder and a decoder structure that incorporates a new Efficient Stage-Wise Feature Pyramid (ESFP) to promote accurate lesion segmentation. In comparison to existing deep learning models, the ESFPNet model gave superior lesion segmentation performance for an AFB dataset. It also produced superior segmentation results for three widely used public colonoscopy databases and nearly the best results for two other public colonoscopy databases. In addition, the lightweight ESFPNet architecture requires fewer model parameters and less computation than other competing models, enabling the real-time analysis of input video frames. Overall, these studies point to the combined superior analysis performance and architectural efficiency of the ESFPNet for endoscopic video analysis. Lastly, additional experiments with the public colonoscopy databases demonstrate the learning ability and generalizability of ESFPNet, implying that the model could be effective for region segmentation in other domains. Full article
(This article belongs to the Special Issue Advancements in Imaging Techniques for Detection of Cancer)
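A stage-wise feature-pyramid decoder of the kind described here can be sketched in a few lines of PyTorch. The channel sizes, fusion order, and layer choices below are assumptions rather than the published ESFP design; the sketch only shows how multi-scale encoder features can be reduced, upsampled, and fused stage by stage into a segmentation map.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StageWiseFPDecoder(nn.Module):
    """Fuse multi-scale encoder features stage by stage, deep to shallow,
    into a single-channel segmentation logit map (channel sizes are assumed)."""
    def __init__(self, in_channels=(64, 128, 320, 512), mid=64):
        super().__init__()
        self.reduce = nn.ModuleList([nn.Conv2d(c, mid, kernel_size=1) for c in in_channels])
        self.fuse = nn.ModuleList([nn.Conv2d(mid * 2, mid, kernel_size=3, padding=1)
                                   for _ in in_channels[:-1]])
        self.head = nn.Conv2d(mid, 1, kernel_size=1)

    def forward(self, feats):
        # feats: list of 4 feature maps ordered shallow (high-res) to deep (low-res)
        x = self.reduce[-1](feats[-1])
        for i in range(len(feats) - 2, -1, -1):          # walk back toward the shallow stages
            skip = self.reduce[i](feats[i])
            x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear", align_corners=False)
            x = F.relu(self.fuse[i](torch.cat([x, skip], dim=1)))
        return self.head(x)

if __name__ == "__main__":
    feats = [torch.randn(1, c, s, s) for c, s in zip((64, 128, 320, 512), (88, 44, 22, 11))]
    print(StageWiseFPDecoder()(feats).shape)   # torch.Size([1, 1, 88, 88])
```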

13 pages, 2034 KiB  
Article
An Automated Video Analysis System for Retrospective Assessment and Real-Time Monitoring of Endoscopic Procedures (with Video)
by Yan Zhu, Ling Du, Pei-Yao Fu, Zi-Han Geng, Dan-Feng Zhang, Wei-Feng Chen, Quan-Lin Li and Ping-Hong Zhou
Bioengineering 2024, 11(5), 445; https://doi.org/10.3390/bioengineering11050445 - 30 Apr 2024
Viewed by 1761
Abstract
Background and Aims: Accurate recognition of endoscopic instruments facilitates quantitative evaluation and quality control of endoscopic procedures. However, no relevant research has been reported. In this study, we aimed to develop a computer-assisted system, EndoAdd, for automated endoscopic surgical video analysis based on our dataset of endoscopic instrument images. Methods: Large training and validation datasets containing 45,143 images of 10 different endoscopic instruments and a test dataset of 18,375 images collected from several medical centers were used in this research. Annotated image frames were used to train the state-of-the-art object detection model, YOLO-v5, to identify the instruments. Based on the frame-level prediction results, we further developed a hidden Markov model to perform video analysis and generate heatmaps to summarize the videos. Results: EndoAdd achieved high accuracy (>97%) on the test dataset for all 10 endoscopic instrument types. The mean average accuracy, precision, recall, and F1-score were 99.1%, 92.0%, 88.8%, and 89.3%, respectively. The area under the curve values exceeded 0.94 for all instrument types. Heatmaps of endoscopic procedures were generated for both retrospective and real-time analyses. Conclusions: We successfully developed an automated endoscopic video analysis system, EndoAdd, which supports retrospective assessment and real-time monitoring. It can be used for data analysis and quality control of endoscopic procedures in clinical practice. Full article
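The video-level step described here (smoothing frame-level predictions with a hidden Markov model) can be illustrated with a small Viterbi decoder. The two-state example, transition matrix, and emission probabilities below are assumptions for illustration and are not the EndoAdd parameters.

```python
import numpy as np

def viterbi(obs_probs, trans, prior):
    """Most likely state sequence given per-frame class probabilities (emissions),
    a state-transition matrix, and a prior over the initial state."""
    n_frames, n_states = obs_probs.shape
    log_obs, log_trans = np.log(obs_probs + 1e-12), np.log(trans + 1e-12)
    score = np.log(prior + 1e-12) + log_obs[0]
    back = np.zeros((n_frames, n_states), dtype=int)
    for t in range(1, n_frames):
        cand = score[:, None] + log_trans            # score of moving from state i to state j
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_obs[t]
    path = [int(score.argmax())]
    for t in range(n_frames - 1, 0, -1):             # backtrack from the last frame
        path.append(int(back[t, path[-1]]))
    return path[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Hypothetical two-state example: 0 = "no instrument visible", 1 = "instrument visible".
    true_states = np.repeat([0, 1, 0], [50, 80, 50])
    obs = np.where(rng.random(len(true_states)) < 0.85, true_states, 1 - true_states)
    obs_probs = np.stack([1.0 - obs, obs], axis=1) * 0.9 + 0.05   # soften to pseudo-probabilities
    trans = np.array([[0.98, 0.02], [0.02, 0.98]])                # sticky transitions
    smoothed = np.array(viterbi(obs_probs, trans, prior=np.array([0.5, 0.5])))
    print("frame errors before/after smoothing:",
          int((obs != true_states).sum()), int((smoothed != true_states).sum()))
```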

16 pages, 6324 KiB  
Article
Simultaneous High-Speed Video Laryngoscopy and Acoustic Aerodynamic Recordings during Vocal Onset of Variable Sound Pressure Level: A Preliminary Study
by Peak Woo
Bioengineering 2024, 11(4), 334; https://doi.org/10.3390/bioengineering11040334 - 29 Mar 2024
Cited by 3 | Viewed by 1353
Abstract
Voicing requires frequent starts and stops at various sound pressure levels (SPL) and frequencies. Prior investigations using rigid laryngoscopy with oral endoscopy have shown variations in the duration of the vibration delay between normal and abnormal subjects. However, these studies were not physiological because the larynx was viewed using rigid endoscopes. We adapted a method to perform simultaneous high-speed naso-endoscopic video recording while acquiring the sound pressure, fundamental frequency, airflow rate, and subglottic pressure. This study aimed to investigate voice onset patterns in normophonic males and females during the onset of variable SPL and correlate them with acoustic and aerodynamic data. Materials and Methods: Three healthy males and three healthy females were studied by simultaneous high-speed video laryngoscopy and recording with the production of the gesture [pa:pa:] at soft, medium, and loud voices. The fiber optic endoscope was threaded through a pneumotachograph mask for the simultaneous recording and analysis of acoustic and aerodynamic data. Results: The average increase in the sound pressure level (SPL) for the group was 15 dB, from 70 to 85 dB. The fundamental frequency increased by an average of 10 Hz. The flow was increased in two subjects, reduced in two subjects, and remained the same in two subjects as the SPL increased. There was a steady increase in the subglottic pressure from soft to loud phonation. Compared to soft-to-medium phonation, a significant increase in glottal resistance was observed with medium-to-loud phonation. Videokymogram analysis showed the onset of vibration for all voiced tokens without the need for full glottis closure. In loud phonation, there is a more rapid onset of a larger amplitude and prolonged closure of the glottal cycle; however, more cycles are required to achieve the intended SPL. There was a prolonged closed phase during loud phonation. Fast Fourier transform (FFT) analysis of the kymography waveform signal showed greater second- and third-harmonic energy above the fundamental frequency with loud phonation. There was an increase in the adjustments in the pharynx, with the base of the tongue tilting, shortening of the vocal folds, and pharyngeal constriction. Conclusion: Voice onset occurs in all modalities, without the need for full glottal closure. There was a more significant increase in glottal resistance with loud phonation than with soft or medium phonation. Vibration analysis of the voice onset showed that more time was required during loud phonation before the oscillation stabilized to a steady state. With increasing SPL, there were significant variations in vocal tract adjustments. The most apparent change was the increase in tongue tension with posterior displacement of the epiglottis. There was an increase in pre-phonation time during loud phonation. Patterns of muscle tension dysphonia with laryngeal squeezing, shortening of the vocal folds, and epiglottis tilting with increasing loudness are features of loud phonation. These observations show that flexible high-speed video laryngoscopy can reveal features that cannot be observed with rigid video laryngoscopy. An objective analysis of the digital kymography signal can be conducted in selected cases. Full article
(This article belongs to the Special Issue The Biophysics of Vocal Onset)
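The FFT analysis of the kymography waveform mentioned in this abstract amounts to estimating the fundamental and comparing the energy at its harmonics. The sketch below uses a synthetic waveform, an assumed sampling rate, and crude peak picking; it only illustrates the kind of measurement involved.

```python
import numpy as np

def harmonic_energy(signal, fs, n_harmonics=3):
    """Estimate the fundamental f0 and the energy at harmonics 1..n relative to harmonic 1."""
    sig = signal - signal.mean()
    spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    f0 = freqs[1:][spec[1:].argmax()]                    # crude fundamental estimate (skip DC)
    energies = []
    for k in range(1, n_harmonics + 1):
        idx = int(np.argmin(np.abs(freqs - k * f0)))     # nearest bin to each harmonic
        energies.append(float(spec[idx]))
    return f0, [e / energies[0] for e in energies]

if __name__ == "__main__":
    fs, f0 = 4000.0, 220.0                 # assumed kymogram line rate and fundamental
    t = np.arange(0, 0.5, 1.0 / fs)
    soft = np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * 2 * f0 * t)
    loud = (np.sin(2 * np.pi * f0 * t) + 0.4 * np.sin(2 * np.pi * 2 * f0 * t)
            + 0.3 * np.sin(2 * np.pi * 3 * f0 * t))
    for name, sig in (("soft", soft), ("loud", loud)):
        est_f0, rel = harmonic_energy(sig, fs)
        print(name, round(float(est_f0), 1), [round(r, 2) for r in rel])
```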

16 pages, 5371 KiB  
Article
Deep-Learning-Enabled Computer-Aided Diagnosis in the Classification of Pancreatic Cystic Lesions on Confocal Laser Endomicroscopy
by Tsung-Chun Lee, Clara Lavita Angelina, Pradermchai Kongkam, Hsiu-Po Wang, Rungsun Rerknimitr, Ming-Lun Han and Hsuan-Ting Chang
Diagnostics 2023, 13(7), 1289; https://doi.org/10.3390/diagnostics13071289 - 29 Mar 2023
Cited by 8 | Viewed by 2514
Abstract
Accurate classification of pancreatic cystic lesions (PCLs) is important to facilitate proper treatment and to improve patient outcomes. We utilized the VGG19 convolutional neural network (CNN) to develop a computer-aided diagnosis (CAD) system for the classification of subtypes of PCLs in endoscopic ultrasound-guided needle-based confocal laser endomicroscopy (nCLE). Using a retrospectively collected set of 22,424 nCLE video frames (50 videos) as the training/validation set and 11,047 nCLE video frames (18 videos) as the test set, we developed and compared the diagnostic performance of three CNNs with distinct methods of designating the region of interest. The diagnostic accuracy for subtypes of PCLs by CNNs with manual, maximal rectangular, and U-Net algorithm-designated ROIs was 100%, 38.9%, and 66.7% on a per-video basis and 88.99%, 73.94%, and 76.12% on a per-frame basis, respectively. Our per-frame analysis suggested differential levels of diagnostic accuracy among the five subtypes of PCLs, where non-mucinous PCLs (serous cystic neoplasm: 93.11%, cystic neuroendocrine tumor: 84.31%, and pseudocyst: 98%) had higher diagnostic accuracy than mucinous PCLs (intraductal papillary mucinous neoplasm: 84.43% and mucinous cystic neoplasm: 86.1%). Our CNN demonstrated superior specificity compared to the state-of-the-art for the classification of mucinous PCLs (IPMN and MCN), with high specificity (94.3% and 92.8%, respectively) but low sensitivity (46% and 45.2%, respectively). This suggests the complementary role of CNN-enabled CAD systems, especially for clinically suspected mucinous PCLs. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
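Adapting a pretrained VGG19 to a five-class PCL classifier, as described above, typically means replacing its final fully connected layer. The torchvision sketch below shows that step; the class count follows the abstract, while the frozen backbone, input size, and preprocessing are assumptions, and the ROI-designation strategies compared in the paper are not reproduced.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5   # SCN, cystic NET, pseudocyst, IPMN, MCN (per the abstract)

def build_vgg19_classifier(num_classes=NUM_CLASSES):
    """Load an ImageNet-pretrained VGG19 (torchvision >= 0.13) and replace its last FC layer."""
    model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
    for p in model.features.parameters():    # optionally freeze the convolutional backbone
        p.requires_grad = False
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
    return model

if __name__ == "__main__":
    model = build_vgg19_classifier().eval()
    frames = torch.randn(2, 3, 224, 224)     # stand-in for preprocessed nCLE video frames
    with torch.no_grad():
        print(model(frames).softmax(dim=1).shape)   # torch.Size([2, 5])
```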

13 pages, 1106 KiB  
Article
A DSC Test for the Early Detection of Neoplastic Gastric Lesions in a Medium-Risk Gastric Cancer Area
by Valli De Re, Stefano Realdon, Roberto Vettori, Alice Zaramella, Stefania Maiero, Ombretta Repetto, Vincenzo Canzonieri, Agostino Steffan and Renato Cannizzaro
Int. J. Mol. Sci. 2023, 24(4), 3290; https://doi.org/10.3390/ijms24043290 - 7 Feb 2023
Cited by 2 | Viewed by 2460
Abstract
In this study, we aimed to assess the accuracy of the proposed novel, noninvasive serum DSC test in predicting the risk of gastric cancer before the use of upper endoscopy. To validate the DSC test, we enrolled two series of individuals living in Veneto and Friuli-Venezia Giulia, Italy (n = 53 and n = 113, respectively), who were referred for an endoscopy. The classification used for the DSC test to predict gastric cancer risk combines the coefficient of the patient’s age and sex and serum pepsinogen I and II, gastrin 17, and anti-Helicobacter pylori immunoglobulin G concentrations in two equations: Y1 and Y2. The coefficient of variables and the Y1 and Y2 cutoff points (>0.385 and >0.294, respectively) were extrapolated using regression analysis and an ROC curve analysis of two retrospective datasets (300 cases for the Y1 equation and 200 cases for the Y2 equation). The first dataset included individuals with autoimmune atrophic gastritis and first-degree relatives with gastric cancer; the second dataset included blood donors. Demographic data were collected; serum pepsinogen, gastrin G17, and anti-Helicobacter pylori IgG concentrations were assayed using an automatic Maglumi system. Gastroscopies were performed by gastroenterologists using an Olympus video endoscope with detailed photographic documentation during examinations. Biopsies were taken at five standardized mucosa sites and were assessed by a pathologist for diagnosis. The accuracy of the DSC test in predicting neoplastic gastric lesions was estimated to be 74.657% (65%CI; 67.333% to 81.079%). The DSC test was found to be a useful, noninvasive, and simple approach to predicting gastric cancer risk in a population with a medium risk of developing gastric cancer. Full article
(This article belongs to the Special Issue Cancer Prevention with Molecular Target Therapies 3.0)
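The DSC test is described as two regression equations over age, sex, and serum markers, each compared against a cutoff (Y1 > 0.385, Y2 > 0.294). The sketch below reproduces only that structure; the coefficients and units are placeholders invented for illustration, since the published values are not given in the abstract.

```python
from dataclasses import dataclass

# Placeholder coefficients for illustration only; the published Y1/Y2 coefficients
# are not reproduced in the abstract above. Only the cutoffs come from the abstract.
Y1_COEF = {"intercept": -2.0, "age": 0.02, "male": 0.3,
           "pg1": -0.01, "pg2": 0.05, "g17": 0.02, "hp_igg": 0.01}
Y1_CUTOFF = 0.385
Y2_CUTOFF = 0.294

@dataclass
class Patient:
    age: float
    male: bool
    pepsinogen_1: float   # ng/mL (assumed unit)
    pepsinogen_2: float   # ng/mL (assumed unit)
    gastrin_17: float     # pmol/L (assumed unit)
    hp_igg: float         # EIU (assumed unit)

def dsc_score(p: Patient, coef=Y1_COEF) -> float:
    """Linear combination of demographics and serum markers (structure only)."""
    return (coef["intercept"] + coef["age"] * p.age + coef["male"] * p.male
            + coef["pg1"] * p.pepsinogen_1 + coef["pg2"] * p.pepsinogen_2
            + coef["g17"] * p.gastrin_17 + coef["hp_igg"] * p.hp_igg)

def high_risk(p: Patient, cutoff=Y1_CUTOFF) -> bool:
    return dsc_score(p) > cutoff

if __name__ == "__main__":
    p = Patient(age=62, male=True, pepsinogen_1=25.0, pepsinogen_2=18.0,
                gastrin_17=9.0, hp_igg=40.0)
    print(round(dsc_score(p), 3), high_risk(p))
```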

16 pages, 5822 KiB  
Article
Multi-Stage Temporal Convolutional Network with Moment Loss and Positional Encoding for Surgical Phase Recognition
by Minyoung Park, Seungtaek Oh, Taikyeong Jeong and Sungwook Yu
Diagnostics 2023, 13(1), 107; https://doi.org/10.3390/diagnostics13010107 - 29 Dec 2022
Cited by 6 | Viewed by 2511
Abstract
Many studies concerning surgical video analysis are being conducted due to its growing importance in many medical applications. In particular, it is very important to be able to recognize the current surgical phase because the phase information can be utilized in various ways both during and after surgery. This paper proposes an efficient phase recognition network, called MomentNet, for cholecystectomy endoscopic videos. Unlike LSTM-based networks, MomentNet is based on a multi-stage temporal convolutional network. Moreover, to improve the phase prediction accuracy, the proposed method adopts a new loss function to supplement the general cross-entropy loss function. The new loss function significantly improves the performance of the phase recognition network by constraining undesirable phase transitions and preventing over-segmentation. In addition, MomentNet effectively applies positional encoding techniques, which are commonly applied in transformer architectures, to the multi-stage temporal convolution network. By using the positional encoding techniques, MomentNet can provide important temporal context, resulting in higher phase prediction accuracy. Furthermore, MomentNet applies a label smoothing technique to suppress overfitting and replaces the backbone network for feature extraction to further improve the network performance. As a result, MomentNet achieves 92.31% accuracy in the phase recognition task with the Cholec80 dataset, which is 4.55% higher than that of the baseline architecture. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
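Two of the ingredients named here, sinusoidal positional encoding injected into frame features and a stage of dilated temporal convolutions, can be sketched compactly in PyTorch. The feature dimension, dilation schedule, and seven-phase output below are assumptions (seven phases matches the Cholec80 annotation protocol), and the moment loss and label smoothing are not reproduced.

```python
import math
import torch
import torch.nn as nn

def positional_encoding(length, dim):
    """Standard sinusoidal positional encoding, shape (length, dim)."""
    pos = torch.arange(length, dtype=torch.float32).unsqueeze(1)
    div = torch.exp(torch.arange(0, dim, 2, dtype=torch.float32) * (-math.log(10000.0) / dim))
    pe = torch.zeros(length, dim)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

class DilatedTemporalStage(nn.Module):
    """One stage of residual dilated 1-D convolutions over per-frame features (assumed sizes)."""
    def __init__(self, dim=64, n_classes=7, n_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Conv1d(dim, dim, kernel_size=3, padding=2**i, dilation=2**i)
             for i in range(n_layers)])
        self.out = nn.Conv1d(dim, n_classes, kernel_size=1)

    def forward(self, x):            # x: (batch, dim, frames)
        pe = positional_encoding(x.shape[-1], x.shape[1]).T.unsqueeze(0).to(x.device)
        x = x + pe                   # inject temporal position information
        for conv in self.layers:
            x = x + torch.relu(conv(x))     # residual dilated convolution
        return self.out(x)           # per-frame phase logits

if __name__ == "__main__":
    feats = torch.randn(1, 64, 500)         # e.g. 500 frames of 64-d features
    print(DilatedTemporalStage()(feats).shape)   # torch.Size([1, 7, 500])
```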

11 pages, 1594 KiB  
Article
An Optimal Artificial Intelligence System for Real-Time Endoscopic Prediction of Invasion Depth in Early Gastric Cancer
by Jie-Hyun Kim, Sang-Il Oh, So-Young Han, Ji-Soo Keum, Kyung-Nam Kim, Jae-Young Chun, Young-Hoon Youn and Hyojin Park
Cancers 2022, 14(23), 6000; https://doi.org/10.3390/cancers14236000 - 5 Dec 2022
Cited by 12 | Viewed by 2239
Abstract
We previously constructed a VGG-16-based artificial intelligence (AI) model (image classifier [IC]) to predict the invasion depth in early gastric cancer (EGC) using endoscopic static images. However, images cannot capture the spatio-temporal information available during real-time endoscopy; as a result, the AI trained on static images could not estimate invasion depth accurately and reliably. Thus, we constructed a video classifier [VC] using videos for real-time depth prediction in EGC. We built the VC by attaching sequential layers to the last convolutional layer of IC v2, using video clips. We computed the standard deviation (SD) of the output probabilities for a video clip and the frame-level sensitivities to assess consistency. The sensitivity, specificity, and accuracy of IC v2 for static images were 82.5%, 82.9%, and 82.7%, respectively. However, for video clips, the sensitivity, specificity, and accuracy of IC v2 were 33.6%, 85.5%, and 56.6%, respectively. The VC analyzed the videos better, with a sensitivity of 82.3%, a specificity of 85.8%, and an accuracy of 83.7%. Furthermore, the mean SD was lower for the VC than for IC v2 (0.096 vs. 0.289). The AI model developed utilizing videos can predict invasion depth in EGC more precisely and consistently than image-trained models, and is more appropriate for real-world situations. Full article
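The consistency measure used above, the standard deviation of a model's output probability across the frames of a clip, is straightforward to reproduce. The sketch below assumes per-frame probabilities are already available and uses synthetic numbers purely to show how a steadier (video-trained) model yields a lower SD.

```python
import numpy as np

def clip_consistency(frame_probs):
    """Mean and standard deviation of the positive-class probability across a clip's frames."""
    probs = np.asarray(frame_probs, dtype=float)
    return float(probs.mean()), float(probs.std())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical per-frame probabilities of "deep invasion" for a 100-frame clip.
    image_model = np.clip(0.55 + rng.normal(0.0, 0.30, 100), 0, 1)   # jittery frame-wise model
    video_model = np.clip(0.80 + rng.normal(0.0, 0.08, 100), 0, 1)   # steadier clip-wise model
    for name, p in (("image classifier", image_model), ("video classifier", video_model)):
        mean, sd = clip_consistency(p)
        print(f"{name}: mean={mean:.2f}, SD={sd:.2f}")
```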

14 pages, 3333 KiB  
Article
Cost-Effective Design of a Miniaturized Zoom Lens for a Capsule Endoscope
by Wen-Shing Sun, Chuen-Lin Tien and Ping-Yi Chen
Micromachines 2022, 13(11), 1814; https://doi.org/10.3390/mi13111814 - 24 Oct 2022
Cited by 1 | Viewed by 2129
Abstract
This paper presents a miniaturized design of a 2× zoom lens for application to a one-megapixel image sensor in a capsule endoscope. The zoom lens is composed of four lenses, including three plastic aspheric lenses and one glass spherical lens, and adopts a three-lens group design. This capsule endoscope is mainly for observation of the small intestine, which has a radius of about 12.5 mm. The height of the object is thus set to 12.5 mm. The object surface is designed to be a curved surface with a radius of curvature of 15 mm. The focal length of the zoom lens ranges from 1.064 mm to 2.039 mm, the full angle of view ranges from 60° to 143°, the f-number is F/2.8–F/3.5, the zoom lens is 11.6 mm in length, and the maximum effective diameter of the zoom lens is 6 mm. The zoom lens design is divided into six segments, corresponding to the different magnifications from Zoom 1 to Zoom 6. The magnification ratios are −0.0845, −0.0984, −0.1150, −0.1317, −0.1482, and −0.1690, respectively. Comparing the positions from Zoom 1 to Zoom 6, the maximum optical distortion is −14.89% for Zoom 1 and 1.45% for Zoom 6. The maximum vertical video distortion is 8.19% for Zoom 1 and 1.00% for Zoom 6. At a 1.0 field of view, the minimum relative illuminance is 71.8% at a magnification of M = −0.1317. Finally, we perform a tolerance analysis and a lens resolution analysis at different zooming positions. Our design can obtain high-quality images for a capsule endoscope. Full article
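As a quick consistency check on the quoted specification, the optical zoom ratio follows directly from the stated focal-length range:

```latex
\text{zoom ratio} = \frac{f_{\max}}{f_{\min}} = \frac{2.039\ \text{mm}}{1.064\ \text{mm}} \approx 1.92 \approx 2\times
```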

6 pages, 215 KiB  
Article
The Utility of Video Recording in Assessing Bariatric Surgery Complications
by Marius Nedelcu, Sergio Carandina, Patrick Noel, Henry-Alexis Mercoli, Marc Danan, Viola Zulian, Anamaria Nedelcu and Ramon Vilallonga
J. Clin. Med. 2022, 11(19), 5573; https://doi.org/10.3390/jcm11195573 - 22 Sep 2022
Cited by 2 | Viewed by 1439
Abstract
Introduction: Recording every procedure could diminish the postoperative complication rates in bariatric surgery. The aim of our study was to evaluate the correlation between recording every bariatric operation and its postoperative analysis in relation to early or late postoperative complications. Methods: Seven hundred fifteen patients who underwent a bariatric procedure between January 2018 and December 2019 were included in a retrospective analysis. These comprised 589 laparoscopic sleeve gastrectomies (LSGs), 110 Roux-en-Y gastric bypasses (RYGBs), and 16 gastric bands (LAGBs). Video recording was used systematically, and all patients were enrolled in the IFSO registry. Results: There were 15 patients (2.1%) with surgical postoperative complications: 5 leaks, 8 hemorrhages, and 2 stenoses. Most complications followed LSG, except for two, which occurred after RYGB. In four cases a site of active bleeding was identified. After reviewing the video, in three cases the site was correlated with an event which occurred during the initial procedure. Three out of five cases of leak following sleeve gastrectomy were treated purely endoscopically, and no potential correlated mechanism was identified. Two other possible benefits were observed: a better evaluation of the gastric pouch for the treatment of a post-bypass ulcer and the review of one peroperative incident. Two negative diagnostic laparoscopies were performed. The benefit of the systematic video recording was singled out in eight cases. All the other cases were completed by laparoscopy with no conversion. Conclusion: Recording every bariatric procedure could help in understanding the mechanism of certain complications, especially when the analysis is performed within the team. Still, recording the procedure did not prevent the negative diagnostic laparoscopies, but it could play a significant medico-legal role in the future. Full article
(This article belongs to the Section General Surgery)
15 pages, 6363 KiB  
Article
Gastric Cancer Angiogenesis Assessment by Dynamic Contrast Harmonic Imaging Endoscopic Ultrasound (CHI-EUS) and Immunohistochemical Analysis—A Feasibility Study
by Victor Mihai Sacerdoțianu, Bogdan Silviu Ungureanu, Sevastiţa Iordache, Sergiu Marian Cazacu, Daniel Pirici, Ilona Mihaela Liliac, Daniela Elena Burtea, Valeriu Șurlin, Cezar Stroescu, Dan Ionuț Gheonea and Adrian Săftoiu
J. Pers. Med. 2022, 12(7), 1020; https://doi.org/10.3390/jpm12071020 - 21 Jun 2022
Cited by 1 | Viewed by 1884
Abstract
The tumor vascular perfusion pattern in gastric cancer (GC) may be an important prognostic factor with therapeutic implications. Non-invasive methods such as dynamic contrast harmonic imaging endoscopic ultrasound (CHI-EUS) may provide details about tumor perfusion and could also offer another perspective for angiogenesis assessment. Methods: We included 34 patients with GC (adenocarcinoma) whose CHI-EUS examinations were performed before any treatment decision. We analyzed eighty video sequences with dedicated software for quantitative analysis of the vascular patterns of specific regions of interest (ROI). As a result, a time-intensity curve (TIC), along with other derived parameters, was automatically generated: peak enhancement (PE), rise time (RT), time to peak (TTP), wash-in perfusion index (WiPI), ROI area, and others. We performed CD105 and CD31 immunostaining to calculate the vascular diameter (vd) and the microvascular density (MVD), and the results were compared with CHI-EUS parameters. Results: High statistical correlations (p < 0.05) were observed between TIC analysis parameters and both MVD and vd (CD31). Strong correlations were also found between tumor grade and 7 CHI-EUS parameters (p < 0.005). Conclusions: GC angiogenesis assessment by CHI-EUS is feasible and may be considered for future studies based on TIC analysis. Full article
(This article belongs to the Special Issue Advances of Personalized Medicine for Gastric Cancer)
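The time-intensity-curve parameters listed above (peak enhancement, rise time, time to peak) can be extracted from a mean-ROI intensity trace with a few lines of numpy. The synthetic curve, frame rate, baseline window, and the 10-90% rise-time definition below are assumptions, not the definitions used by the dedicated analysis software.

```python
import numpy as np

def tic_parameters(intensity, fps):
    """Peak enhancement, time to peak, and 10-90% rise time from a mean-ROI intensity trace."""
    y = np.asarray(intensity, dtype=float)
    base = y[: int(fps)].mean()                 # baseline: first second of the sequence
    enh = y - base
    peak_idx = int(enh.argmax())
    peak_enhancement = float(enh[peak_idx])
    time_to_peak = peak_idx / fps
    rising = enh[: peak_idx + 1]
    t10 = int(np.argmax(rising >= 0.1 * peak_enhancement))
    t90 = int(np.argmax(rising >= 0.9 * peak_enhancement))
    rise_time = (t90 - t10) / fps
    return peak_enhancement, time_to_peak, rise_time

if __name__ == "__main__":
    fps, t = 10.0, np.arange(0, 30, 0.1)        # 30 s clip at an assumed 10 frames/s
    curve = 20 + 60 * (1 - np.exp(-np.clip(t - 3, 0, None) / 4)) * np.exp(-np.clip(t - 3, 0, None) / 15)
    pe, ttp, rt = tic_parameters(curve, fps)
    print(f"PE={pe:.1f} a.u., TTP={ttp:.1f} s, RT={rt:.1f} s")
```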

31 pages, 12259 KiB  
Article
Hybrid and Deep Learning Approach for Early Diagnosis of Lower Gastrointestinal Diseases
by Suliman Mohamed Fati, Ebrahim Mohammed Senan and Ahmad Taher Azar
Sensors 2022, 22(11), 4079; https://doi.org/10.3390/s22114079 - 27 May 2022
Cited by 42 | Viewed by 4044
Abstract
Every year, nearly two million people die as a result of gastrointestinal (GI) disorders. Lower gastrointestinal tract tumors are one of the leading causes of death worldwide. Thus, early detection of the type of tumor is of great importance in the survival of patients. Additionally, removing benign tumors in their early stages has more risks than benefits. Video endoscopy technology is essential for imaging the GI tract and identifying disorders such as bleeding, ulcers, polyps, and malignant tumors. A video endoscopy examination generates 5000 frames, which require extensive analysis, and reviewing every frame takes a long time. Thus, artificial intelligence techniques, which are better able to diagnose and can assist physicians in making accurate diagnostic decisions, address these challenges. In this study, multiple methodologies were developed; the work was divided into four proposed systems, each with more than one diagnostic method. The first proposed system utilizes artificial neural network (ANN) and feed-forward neural network (FFNN) algorithms based on hybrid features extracted by three algorithms: local binary pattern (LBP), gray level co-occurrence matrix (GLCM), and fuzzy color histogram (FCH). The second proposed system uses the pre-trained CNN models GoogLeNet and AlexNet, based on the extraction of deep feature maps and their classification with high accuracy. The third proposed method uses hybrid techniques consisting of two blocks: the first block uses CNN models (GoogLeNet and AlexNet) to extract feature maps, while the second block uses the support vector machine (SVM) algorithm to classify the deep feature maps. The fourth proposed system uses ANN and FFNN based on hybrid features combining the CNN models (GoogLeNet and AlexNet) with the LBP, GLCM, and FCH algorithms. All the proposed systems achieved superior results in diagnosing endoscopic images for the early detection of lower gastrointestinal diseases. All systems produced promising results; the FFNN classifier based on the hybrid features extracted by GoogLeNet, LBP, GLCM, and FCH achieved an accuracy of 99.3%, precision of 99.2%, sensitivity of 99%, specificity of 100%, and AUC of 99.87%. Full article
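The hand-crafted half of the hybrid pipeline described above, LBP and GLCM texture features fed to a feed-forward classifier, can be sketched with scikit-image and scikit-learn. The fuzzy color histogram and CNN feature branches are omitted, the parameter choices and toy data are assumptions, and sklearn's MLPClassifier stands in for the FFNN; this is a simplification of the first proposed system, not a reproduction of it.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def texture_features(gray):
    """Concatenate an LBP histogram with a few GLCM statistics for one grayscale image."""
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    stats = [float(graycoprops(glcm, prop)[0, 0])
             for prop in ("contrast", "homogeneity", "energy", "correlation")]
    return np.concatenate([hist, stats])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in data: "smooth" vs. "textured" 64x64 patches playing the role of two classes.
    smooth = [rng.normal(128, 5, (64, 64)).clip(0, 255).astype(np.uint8) for _ in range(20)]
    rough = [rng.normal(128, 60, (64, 64)).clip(0, 255).astype(np.uint8) for _ in range(20)]
    X = np.array([texture_features(im) for im in smooth + rough])
    y = np.array([0] * 20 + [1] * 20)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(X[::2], y[::2])                               # train on every other sample
    print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```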
