Search Results (16)

Search Parameters:
Keywords = deep learning algorithm (DLA)

14 pages, 1368 KiB  
Article
Automatic Active Contour Algorithm for Detecting Early Brain Tumors in Comparison with AI Detection
by Mohammed Almijalli, Faten A. Almusayib, Ghala F. Albugami, Ziyad Aloqalaa, Omar Altwijri and Ali S. Saad
Processes 2025, 13(3), 867; https://doi.org/10.3390/pr13030867 - 15 Mar 2025
Viewed by 510
Abstract
The automatic detection of objects in medical images is an essential component of the diagnostic procedure. Early-stage brain tumor detection has progressed significantly with the use of deep learning algorithms (DLA), particularly convolutional neural networks (CNN). The drawback is that these algorithms require a training phase involving a large database of several hundred images, which can be time-consuming and demand complex computational infrastructure. This study aimed to comprehensively evaluate a proposed method, based on an active contour algorithm, for identifying and distinguishing brain tumors in magnetic resonance images. We tested the proposed algorithm on 50 brain images, focusing on glioma tumors, while 2000 images from the BRATS 2021 Challenge were used for the DLA. The proposed segmentation method consists of an anisotropic diffusion filter for pre-processing, active contour segmentation (Chan-Vese), and morphological operations for segmentation refinement. We evaluated its performance using accuracy, precision, sensitivity, specificity, the Jaccard index, the Dice index, and the Hausdorff distance. The proposed method achieved an average of 0.96 across the first six performance metrics, which is higher than most classical image segmentation methods and comparable to deep learning methods, whose average performance score is 0.98. These results indicate its ability to detect brain tumors accurately and rapidly. The results section provides both numerical and visual insights into the similarity between segmented and ground-truth tumor areas. The findings highlight the potential of computer-based methods in improving brain tumor identification using magnetic resonance imaging. Future work must validate the efficacy of these segmentation approaches across different brain tumor categories and improve computing efficiency so the technology can be integrated into clinical workflows.
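
A minimal sketch of the kind of pipeline this abstract describes (smoothing, Chan-Vese active contour, morphological refinement) plus the Dice and Jaccard overlap metrics, using scikit-image. All parameters are illustrative assumptions, a Gaussian filter stands in for the anisotropic diffusion step, and `max_num_iter` follows recent scikit-image versions; this is not the authors' exact implementation.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import chan_vese
from skimage.morphology import binary_opening, remove_small_objects, disk

def segment_tumor(img):
    """Segment a grayscale MRI slice (floats in [0, 1])."""
    smoothed = gaussian(img, sigma=1.0)             # stand-in for anisotropic diffusion
    mask = chan_vese(smoothed, mu=0.25, max_num_iter=300)
    mask = binary_opening(mask, disk(2))            # morphological refinement
    return remove_small_objects(mask, min_size=50)  # drop noise-sized blobs

def dice(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def jaccard(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    return inter / np.logical_or(pred, truth).sum()
```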

22 pages, 8660 KiB  
Article
Ship Contour: A Novel Ship Instance Segmentation Method Using Deep Snake and Attention Mechanism
by Chen Chen, Songtao Hu, Feng Ma, Jie Sun, Tao Lu and Bing Wu
J. Mar. Sci. Eng. 2025, 13(3), 519; https://doi.org/10.3390/jmse13030519 - 8 Mar 2025
Viewed by 798
Abstract
Ship instance segmentation technologies enable the identification of ship targets and their contours, serving as an auxiliary tool for monitoring and tracking and providing critical support for maritime and port safety management. However, due to the different shapes and sizes of ships, as well as the complexity and variability of lighting and weather, existing ship instance segmentation approaches frequently struggle to produce correct contour segmentations. To address this issue, this paper introduces Ship Contour, a real-time, contour-based ship instance segmentation method that detects ship targets using an improved CenterNet algorithm. The method uses DLA-60 (deep layer aggregation) as the core network to ensure detection accuracy and speed, and it integrates an efficient channel attention (ECA) mechanism to boost feature extraction capability. Furthermore, a Mish activation function replaces ReLU to better suit deep network learning. These improvements to CenterNet enhance model robustness and effectively reduce missed and false detections. In response to the low accuracy of the original end-to-end deep snake method in extracting ship target edge contours, a scale- and translation-invariant normalization scheme is employed to enhance contour quality. To validate the effectiveness of the proposed method, this research builds a dedicated dataset of up to 2300 images. Experiments demonstrate that the method achieves competitive performance, with an AP0.5:0.95 of 63.6% and an AR0.5:0.95 of 67.4%.
(This article belongs to the Section Ocean Engineering)
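
A short sketch of the efficient channel attention (ECA) pattern the paper integrates into CenterNet, in PyTorch; the kernel size and the usage example are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention: a 1D conv over pooled channel statistics."""
    def __init__(self, k: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                    # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))               # global average pooling -> (B, C)
        w = self.conv(w.unsqueeze(1))        # cross-channel 1D convolution
        w = torch.sigmoid(w).squeeze(1)      # per-channel weights in (0, 1)
        return x * w[:, :, None, None]       # reweight the feature maps

feats = torch.randn(2, 64, 128, 128)         # hypothetical backbone features
print(ECA()(feats).shape)                    # torch.Size([2, 64, 128, 128])
```

Recent PyTorch releases also ship `nn.Mish`, so the activation swap mentioned in the abstract is a one-line change in the same framework.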

11 pages, 1519 KiB  
Article
Development and Validation of a Deep-Learning-Based Algorithm for Detecting and Classifying Metallic Implants in Abdominal and Spinal CT Topograms
by Moon-Hyung Choi, Joon-Yong Jung, Zhigang Peng, Stefan Grosskopf, Michael Suehling, Christian Hofmann and Seongyong Pak
Diagnostics 2024, 14(7), 668; https://doi.org/10.3390/diagnostics14070668 - 22 Mar 2024
Viewed by 1298
Abstract
Purpose: To develop and validate a deep-learning-based algorithm (DLA) designed to segment and classify metallic objects in topograms of abdominal and spinal CT. Methods: DLA training for implant segmentation and classification was based on a U-net-like architecture with 263 annotated hip implant topograms and 2127 annotated spine implant topograms. The trained DLA was validated with internal and external datasets. Two radiologists independently reviewed the external dataset, which consisted of 2178 abdominal anteroposterior (AP) topograms and 515 spine AP and lateral topograms, all collected consecutively. Sensitivity and specificity were calculated per pixel row and per patient. Pairwise intersection over union (IoU) was also calculated between the DLA and the two radiologists. Results: The performance parameters of the DLA were consistently >95% in internal validation, both per pixel row and per patient. The DLA saves 27.4% of reconstruction time on average in patients with metallic implants compared with the existing iMAR. The sensitivity and specificity of the DLA during external validation were greater than 90% for the detection of spine implants on all three topogram types and for the detection of hip implants on abdominal AP and spinal AP topograms. The IoU was greater than 0.9 between the DLA and the radiologists. However, DLA training could not be performed for hip implants on spine lateral topograms. Conclusions: A prototype DLA that detects metallic implants of the spine and hip on abdominal and spinal CT topograms improves the scan workflow, with good performance for both implant types.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
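
A sketch of the evaluation quantities named in the abstract: per-pixel-row sensitivity and specificity (treating each row as implant-present or not), and pairwise IoU between two binary masks. The row-wise reduction is an assumption based on the abstract's wording.

```python
import numpy as np

def row_labels(mask):
    """Collapse a 2D binary mask to one label per pixel row."""
    return mask.any(axis=1)

def sensitivity_specificity(pred_rows, true_rows):
    tp = np.sum(pred_rows & true_rows)
    tn = np.sum(~pred_rows & ~true_rows)
    fn = np.sum(~pred_rows & true_rows)
    fp = np.sum(pred_rows & ~true_rows)
    return tp / (tp + fn), tn / (tn + fp)

def iou(mask_a, mask_b):
    """Pairwise intersection over union, e.g. DLA vs. one radiologist."""
    return np.logical_and(mask_a, mask_b).sum() / np.logical_or(mask_a, mask_b).sum()
```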

11 pages, 3789 KiB  
Article
Deep Learning Algorithm for Tumor Segmentation and Discrimination of Clinically Significant Cancer in Patients with Prostate Cancer
by Sujin Hong, Seung Ho Kim, Byeongcheol Yoo and Joo Yeon Kim
Curr. Oncol. 2023, 30(8), 7275-7285; https://doi.org/10.3390/curroncol30080528 - 1 Aug 2023
Cited by 5 | Viewed by 2529
Abstract
Background: We investigated the feasibility of a deep learning algorithm (DLA) based on apparent diffusion coefficient (ADC) maps for the segmentation and discrimination of clinically significant cancer (CSC, Gleason score ≥ 7) from non-CSC in patients with prostate cancer (PCa). Methods: Data from a total of 149 consecutive patients who had undergone 3T MRI and been pathologically diagnosed with PCa were initially collected. The labelled data (148 images for GS6, 580 images for GS7) were used for tumor segmentation with a convolutional neural network (CNN). For classification, 93 images for GS6 and 372 images for GS7 were used. For external validation, 22 consecutive patients from five different institutions (25 images for GS6, 70 images for GS7), representing different MR machines, were recruited. Results: U-Net and DenseNet were used for segmentation and classification, respectively. The tumor Dice scores for internal and external validation were 0.822 and 0.7776, respectively. For classification, the accuracies of internal and external validation were 73% and 75%, respectively. For external validation, the diagnostic predictive values for CSC (sensitivity, specificity, positive predictive value, and negative predictive value) were 84%, 48%, 82%, and 52%, respectively. Conclusions: Tumor segmentation and discrimination of CSC from non-CSC are feasible using a DLA developed from ADC maps (b2000) alone.
(This article belongs to the Topic Artificial Intelligence in Cancer, Biology and Oncology)
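
A sketch of the four diagnostic predictive values reported for external validation, computed from binary predictions; the example arrays are illustrative, not study data.

```python
import numpy as np

def diagnostic_metrics(pred, truth):
    """Sensitivity, specificity, PPV, and NPV from boolean CSC labels."""
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn)}

pred = np.array([1, 1, 0, 1, 0, 1], dtype=bool)    # CSC predicted by the DLA
truth = np.array([1, 0, 0, 1, 1, 1], dtype=bool)   # CSC on pathology
print(diagnostic_metrics(pred, truth))
```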

11 pages, 1257 KiB  
Article
Deep Learning Enhances Radiologists’ Detection of Potential Spinal Malignancies in CT Scans
by Leonard Gilberg, Bianca Teodorescu, Leander Maerkisch, Andre Baumgart, Rishi Ramaesh, Elmer Jeto Gomes Ataide and Ali Murat Koç
Appl. Sci. 2023, 13(14), 8140; https://doi.org/10.3390/app13148140 - 13 Jul 2023
Cited by 7 | Viewed by 3778
Abstract
Incidental spinal bone lesions, potential indicators of malignancies, are frequently underreported in abdominal and thoracic CT imaging due to scan focus and diagnostic bias towards patient complaints. Here, we evaluate a deep-learning algorithm (DLA) designed to support radiologists’ reporting of incidental lesions during routine clinical practice. The present study is structured into two phases: unaided and AI-assisted. A total of 32 scans from multiple radiology centers were selected randomly and independently annotated by two experts. The U-Net-like architecture-based DLA used for the AI-assisted phase showed a sensitivity of 75.0% in identifying potentially malignant spinal bone lesions. Six radiologists of varying experience levels participated in this observational study. During routine reporting, the DLA helped improve the radiologists’ sensitivity by 20.8 percentage points. Notably, DLA-generated false-positive predictions did not significantly bias radiologists in their final diagnosis. These observations clearly indicate that using a suitable DLA improves the detection of otherwise missed potentially malignant spinal cases. Our results further emphasize the potential of artificial intelligence as a second reader in the clinical setting.

11 pages, 1923 KiB  
Article
Artificial Intelligence Approach for Early Detection of Brain Tumors Using MRI Images
by Adham Aleid, Khalid Alhussaini, Reem Alanazi, Meaad Altwaimi, Omar Altwijri and Ali S. Saad
Appl. Sci. 2023, 13(6), 3808; https://doi.org/10.3390/app13063808 - 16 Mar 2023
Cited by 27 | Viewed by 8179
Abstract
Artificial intelligence (AI) is one of the most promising approaches to health innovation. The use of AI in image recognition considerably extends findings beyond the constraints of human sight. The application of AI in medical imaging, which relies on image interpretation, is beneficial for automatic diagnosis. Diagnostic radiology is evolving from a subjective perceptual skill into a more objective science thanks to AI. Automatic object detection in medical images is an essential AI technology in medicine. Detecting brain tumors at an early stage is well advanced with convolutional neural networks (CNN) and deep learning algorithms (DLA). The problem is that those algorithms require a training phase with a large database of more than 500 images, which is time-consuming and demands complex, expensive computational infrastructure. This study proposes a classical automatic segmentation method for detecting brain tumors at an early stage using MRI images. It is based on a multilevel thresholding technique driven by harmony search optimization (HSO); the algorithm was adapted to MRI brain segmentation, and parameter selection was optimized for the purpose. Multiple thresholds, based on the variance and entropy functions, break the histogram into multiple portions, with a different color associated with each portion. To eliminate tiny areas assumed to be noise and to detect brain tumors, morphological operations followed by connected-component analysis are applied after segmentation. Brain tumor detection performance is judged using parameters such as accuracy, the Dice coefficient, and the Jaccard index. The results are compared to those acquired manually by experts in the field, and further compared with different CNN and DLA approaches on the brain image dataset of the “BraTS 2017 challenge”, using the average Dice index as the performance measure. The results of the proposed approach were competitive in accuracy with those obtained by CNN and DLA methods and much better in terms of execution time, computational complexity, and data management.
(This article belongs to the Special Issue Advances in Medical Image Analysis and Computer-Aided Diagnosis)
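
A sketch of the multilevel-thresholding-plus-cleanup idea using scikit-image, with multi-Otsu standing in for the paper's harmony-search-optimized thresholds; the class count and size cutoff are illustrative assumptions.

```python
import numpy as np
from skimage.filters import threshold_multiotsu
from skimage.measure import label, regionprops
from skimage.morphology import binary_opening, remove_small_objects, disk

def detect_candidates(img, classes=4, min_size=100):
    """Split the histogram into `classes` portions and keep large bright regions."""
    thresholds = threshold_multiotsu(img, classes=classes)  # stand-in for HSO thresholds
    portions = np.digitize(img, bins=thresholds)            # one "color" per portion
    candidate = portions == classes - 1                     # brightest portion
    candidate = binary_opening(candidate, disk(2))          # morphological cleanup
    candidate = remove_small_objects(candidate, min_size=min_size)
    return [region.bbox for region in regionprops(label(candidate))]
```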

23 pages, 3326 KiB  
Article
An Intelligent Task Scheduling Model for Hybrid Internet of Things and Cloud Environment for Big Data Applications
by Souvik Pal, N. Z. Jhanjhi, Azmi Shawkat Abdulbaqi, D. Akila, Faisal S. Alsubaei and Abdulaleem Ali Almazroi
Sustainability 2023, 15(6), 5104; https://doi.org/10.3390/su15065104 - 14 Mar 2023
Cited by 30 | Viewed by 3235
Abstract
One of the most significant issues in Internet of Things (IoT) cloud computing is task scheduling. Recent developments in IoT-based technologies have led to a meteoric rise in the demand for cloud storage. Sophisticated planning methodologies are required to load IoT services onto cloud resources efficiently while satisfying the requirements of the applications. This is important because several processes must be well prepared on different virtual machines to maximize resource usage and minimize waiting times. The heterogeneous features of IoT can make the tasks of different IoT applications difficult to schedule in a cloud-based computing architecture. With the rise in IoT sensors and the need to access information quickly and reliably, fog cloud computing, the integration of fog and cloud networks, is proposed to meet these demands. One of the most important necessities in a fog cloud setting is efficient task scheduling, as this can reduce the time it takes to process data and improve quality of service (QoS). The overall processing time of IoT programs should be kept as short as possible by effectively planning and managing their workloads within the task-scheduling constraints. Finding the ideal approach is challenging, especially for big data systems, because task scheduling is a complex issue. This research presents a Deep Learning Algorithm-based Big Data Task Scheduling System (DLA-BDTSS) for Internet of Things (IoT) and cloud computing applications. To reduce energy costs and end-to-end delay, an optimized deep-learning-based scheduling model is used to analyze and process the various tasks. The method employs a multi-objective strategy to shorten the makespan and maximize resource utilization. A regional exploration search technique improves the optimization algorithm’s capacity to exploit data and avoid getting stuck in local optima. DLA-BDTSS was compared with other well-known task allocation methods on real trace information and in the CloudSim tools. The investigation showed that DLA-BDTSS performed better than the other well-known algorithms and converged faster, making it beneficial for big data task scheduling scenarios; it achieved an 8.43% improvement in outcomes, with an execution time of 34 s and a fitness value of 76.8%.
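
To make the scheduling objective concrete, here is a classical greedy baseline for the same problem (assign each task to the virtual machine that would finish it earliest, then read off the makespan); this is a comparison heuristic sketched under assumed inputs, not the paper's deep-learning scheduler.

```python
def greedy_schedule(task_lengths, vm_speeds):
    """Longest-task-first greedy assignment; returns (assignment, makespan)."""
    finish = [0.0] * len(vm_speeds)          # running finish time per VM
    assignment = []
    for length in sorted(task_lengths, reverse=True):
        vm = min(range(len(vm_speeds)),
                 key=lambda v: finish[v] + length / vm_speeds[v])
        finish[vm] += length / vm_speeds[vm]
        assignment.append(vm)
    return assignment, max(finish)           # makespan = latest VM finish time

plan, makespan = greedy_schedule([40, 10, 25, 5, 30], vm_speeds=[1.0, 2.0])
print(plan, makespan)                        # [1, 0, 1, 1, 0] 37.5
```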

7 pages, 970 KiB  
Brief Report
Artificial Intelligence Based Analysis of Corneal Confocal Microscopy Images for Diagnosing Peripheral Neuropathy: A Binary Classification Model
by Yanda Meng, Frank George Preston, Maryam Ferdousi, Shazli Azmi, Ioannis Nikolaos Petropoulos, Stephen Kaye, Rayaz Ahmed Malik, Uazman Alam and Yalin Zheng
J. Clin. Med. 2023, 12(4), 1284; https://doi.org/10.3390/jcm12041284 - 6 Feb 2023
Cited by 16 | Viewed by 2455
Abstract
Diabetic peripheral neuropathy (DPN) is the leading cause of neuropathy worldwide, resulting in excess morbidity and mortality. We aimed to develop an artificial intelligence deep learning algorithm to classify the presence or absence of peripheral neuropathy (PN) in participants with diabetes or pre-diabetes using corneal confocal microscopy (CCM) images of the sub-basal nerve plexus. A modified ResNet-50 model was trained to perform the binary classification of PN (PN+) versus no PN (PN−) based on the Toronto consensus criteria. A dataset of 279 participants (149 PN−, 130 PN+) was used to train (n = 200), validate (n = 18), and test (n = 61) the algorithm, utilizing one image per participant. The dataset consisted of participants with type 1 diabetes (n = 88), type 2 diabetes (n = 141), and pre-diabetes (n = 50). The algorithm was evaluated using diagnostic performance metrics and attribution-based methods (gradient-weighted class activation mapping (Grad-CAM) and Guided Grad-CAM). In detecting PN+, the AI-based DLA achieved a sensitivity of 0.91 (95% CI: 0.79–1.0), a specificity of 0.93 (95% CI: 0.83–1.0), and an area under the curve (AUC) of 0.95 (95% CI: 0.83–0.99). Our deep learning algorithm demonstrates excellent results for the diagnosis of PN using CCM. A large-scale prospective real-world study is required to validate its diagnostic efficacy prior to implementation in screening and diagnostic programmes.
(This article belongs to the Section Clinical Neurology)
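
A sketch of the kind of modified ResNet-50 binary classifier the study describes, in PyTorch; the two-logit head, input size, and ImageNet initialization are assumptions for illustration (torchvision ≥ 0.13 API).

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone with the final layer swapped for binary PN-/PN+ output.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

x = torch.randn(1, 3, 224, 224)                 # one CCM image, resized and normalized
prob_pn = torch.softmax(model(x), dim=1)[0, 1]  # predicted probability of PN+
```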

14 pages, 3992 KiB  
Article
Vehicle Tracking Algorithm Based on Deep Learning in Roadside Perspective
by Guangsheng Han, Qiukun Jin, Hui Rong, Lisheng Jin and Libin Zhang
Sustainability 2023, 15(3), 1950; https://doi.org/10.3390/su15031950 - 19 Jan 2023
Cited by 3 | Viewed by 2583
Abstract
Traffic intelligence has become an important part of the development of many countries and of the automobile industry. Roadside perception is an important part of the intelligent transportation system; it realizes effective perception of road environment information using sensors installed on the roadside. Vehicles are the main road targets in most traffic scenes, so tracking large numbers of vehicles is an important subject in the field of roadside perception. Considering the characteristics of vehicle-like rigid targets seen from the roadside view, a vehicle tracking algorithm based on deep learning is proposed. Firstly, we optimized a DLA-34 network and designed a block-N module; channel attention and spatial attention modules were then added at the front of the network to improve its overall feature extraction ability and computational efficiency. Next, a joint loss function was designed to improve the intra-class and inter-class discrimination ability of the tracker, allowing it to better distinguish vehicles of similar appearance and color, alleviate the identity-switch problem, and improve robustness and real-time performance. Finally, the experimental results showed that the method performed well on the vehicle tracking task from the roadside perspective and can meet the practical demands of complex traffic scenes.
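
A sketch of the channel-plus-spatial attention pattern placed at the front of the backbone, in PyTorch. This follows the widely used CBAM-style design; the reduction ratio and kernel size are assumptions, since the abstract does not specify the module internals.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Channel attention followed by spatial attention (CBAM-style sketch)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                              # x: (B, C, H, W)
        ca = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))))
        x = x * ca[:, :, None, None]                   # reweight channels
        stats = torch.cat([x.mean(1, keepdim=True),    # avg- and max-pooled maps
                           x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(stats))  # reweight locations
```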

9 pages, 4339 KiB  
Article
Prediction of Cobb Angle Using Deep Learning Algorithm with Three-Dimensional Depth Sensor Considering the Influence of Garment in Idiopathic Scoliosis
by Yoko Ishikawa, Terufumi Kokabu, Katsuhisa Yamada, Yuichiro Abe, Hiroyuki Tachi, Hisataka Suzuki, Takashi Ohnishi, Tsutomu Endo, Daisuke Ukeba, Katsuro Ura, Masahiko Takahata, Norimasa Iwasaki and Hideki Sudo
J. Clin. Med. 2023, 12(2), 499; https://doi.org/10.3390/jcm12020499 - 7 Jan 2023
Cited by 11 | Viewed by 3098
Abstract
Adolescent idiopathic scoliosis (AIS) is the most common pediatric spinal deformity. Early detection of deformity and timely intervention, such as brace treatment, can help inhibit progressive changes. A three-dimensional (3D) depth-sensor imaging system with a convolutional neural network was previously developed to predict the Cobb angle. The purpose of the present study was to (1) evaluate the performance of the deep learning algorithm (DLA) in predicting the Cobb angle and (2) assess its predictive ability depending on the presence or absence of clothing, in a prospective analysis. We included 100 subjects with suspected AIS. The correlation coefficient between the actual and predicted Cobb angles was 0.87, and the mean absolute error and root mean square error were 4.7° and 6.0°, respectively, for Adam’s forward-bending posture without underwear. There were no significant differences in the correlation coefficients between the groups with and without underwear in the forward-bending posture. The performance of the DLA with a 3D depth sensor was validated using an independent external validation dataset. Because the psychological burden that naked-body imaging places on children and adolescents cannot be ignored, scoliosis examination with underwear is a valuable alternative in clinics and schools.
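
The three agreement statistics quoted above are straightforward to reproduce; a minimal sketch with illustrative angle values (not study data):

```python
import numpy as np

actual = np.array([12.0, 25.0, 18.0, 31.0])      # radiographic Cobb angles (deg)
predicted = np.array([14.5, 22.0, 20.0, 28.5])   # hypothetical DLA predictions

r = np.corrcoef(actual, predicted)[0, 1]             # correlation coefficient
mae = np.mean(np.abs(actual - predicted))            # mean absolute error
rmse = np.sqrt(np.mean((actual - predicted) ** 2))   # root mean square error
print(f"r = {r:.2f}, MAE = {mae:.1f} deg, RMSE = {rmse:.1f} deg")
```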

19 pages, 1521 KiB  
Review
Artificial Intelligence and Corneal Confocal Microscopy: The Start of a Beautiful Relationship
by Uazman Alam, Matthew Anson, Yanda Meng, Frank Preston, Varo Kirthi, Timothy L. Jackson, Paul Nderitu, Daniel J. Cuthbertson, Rayaz A. Malik, Yalin Zheng and Ioannis N. Petropoulos
J. Clin. Med. 2022, 11(20), 6199; https://doi.org/10.3390/jcm11206199 - 20 Oct 2022
Cited by 17 | Viewed by 4108
Abstract
Corneal confocal microscopy (CCM) is a rapid, non-invasive in vivo ophthalmic technique for imaging the cornea. Historically, it was utilised in the diagnosis and clinical management of corneal epithelial and stromal disorders. However, over the past 20 years, CCM has been increasingly used to image sub-basal small nerve fibres in a variety of peripheral neuropathies and central neurodegenerative diseases. CCM has been used to identify subclinical nerve damage and to predict the development of diabetic peripheral neuropathy (DPN). The complex structure of the corneal sub-basal nerve plexus can be readily analysed through nerve segmentation, with manual or automated quantification of parameters such as corneal nerve fibre length (CNFL), nerve fibre density (CNFD), and nerve branch density (CNBD). Large quantities of 2D corneal nerve images lend themselves to the application of artificial intelligence (AI)-based deep learning algorithms (DLA). Indeed, DLAs have demonstrated performance comparable to manual quantification of corneal nerve morphology, and superior to automated quantification. Recently, our end-to-end classification with a three-class AI model demonstrated high sensitivity and specificity in differentiating healthy volunteers from people with and without peripheral neuropathy. We believe there is significant scope and need to apply AI to help differentiate between peripheral neuropathies as well as central neurodegenerative disorders. AI has significant potential to enhance the diagnostic and prognostic utility of CCM in the management of both peripheral and central neurodegenerative diseases.
(This article belongs to the Special Issue Corneal Confocal Microscopy and the Nervous System)
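
As an illustration of the automated morphology parameters the review discusses, here is a sketch that estimates corneal nerve fibre length (CNFL) from a binary nerve segmentation by skeletonization; the pixel pitch and field-of-view area are assumed values, and real pipelines apply more careful length corrections.

```python
import numpy as np
from skimage.morphology import skeletonize

def cnfl(nerve_mask, mm_per_pixel=0.0011, field_area_mm2=0.16):
    """Crude CNFL estimate in mm/mm^2 from a boolean nerve mask."""
    skeleton = skeletonize(nerve_mask)         # 1-pixel-wide nerve centerlines
    length_mm = skeleton.sum() * mm_per_pixel  # pixel count as a length proxy
    return length_mm / field_area_mm2
```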

13 pages, 2765 KiB  
Article
StarDist Image Segmentation Improves Circulating Tumor Cell Detection
by Michiel Stevens, Afroditi Nanou, Leon W. M. M. Terstappen, Christiane Driemel, Nikolas H. Stoecklein and Frank A. W. Coumans
Cancers 2022, 14(12), 2916; https://doi.org/10.3390/cancers14122916 - 13 Jun 2022
Cited by 33 | Viewed by 4502
Abstract
After a CellSearch-processed circulating tumor cell (CTC) sample is imaged, a segmentation algorithm selects nucleic acid positive (DAPI+), cytokeratin-phycoerythrin-expressing (CK-PE+) events for further review by an operator. Failures in this segmentation can result in missed CTCs. The CellSearch segmentation algorithm was not designed to handle samples with high cell density, such as diagnostic leukapheresis (DLA) samples. Here, we evaluate the deep-learning-based segmentation method StarDist as an alternative to CellSearch segmentation. CellSearch image archives from 533 whole blood samples and 601 DLA samples were segmented using CellSearch and StarDist and inspected visually. In 442 blood samples from cancer patients, StarDist segmented 99.95% of the CTCs segmented by CellSearch, produced good outlines for 98.3% of these CTCs, and segmented 10% more CTCs than CellSearch. Visual inspection of the segmentations of DLA images showed that StarDist continues to perform well when the cell density is very high, whereas CellSearch failed and generated extremely large segmentations (up to 52% of the sample surface). Moreover, in a detailed examination of seven DLA samples, StarDist segmented 20% more CTCs than CellSearch. Segmentation is a critical first step for CTC enumeration in dense samples, and StarDist segmentation convincingly outperformed CellSearch segmentation.
(This article belongs to the Special Issue The 5th ACTC: “Liquid Biopsy in Its Best”)
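
For orientation, running a published pretrained StarDist model on a fluorescence image takes only a few lines; the model name below is a public StarDist default and the file name is hypothetical, so this sketches the approach rather than the authors' exact setup.

```python
from csbdeep.utils import normalize
from skimage.io import imread
from stardist.models import StarDist2D

model = StarDist2D.from_pretrained("2D_versatile_fluo")   # public pretrained model
img = imread("cellsearch_frame.tif")                      # hypothetical DAPI frame
labels, details = model.predict_instances(normalize(img, 1, 99.8))
print(labels.max(), "objects segmented")                  # one integer label per cell
```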

11 pages, 12645 KiB  
Article
Accuracy and Efficiency of Right-Lobe Graft Weight Estimation Using Deep-Learning-Assisted CT Volumetry for Living-Donor Liver Transplantation
by Rohee Park, Seungsoo Lee, Yusub Sung, Jeeseok Yoon, Heung-Il Suk, Hyoungjung Kim and Sanghyun Choi
Diagnostics 2022, 12(3), 590; https://doi.org/10.3390/diagnostics12030590 - 25 Feb 2022
Cited by 14 | Viewed by 2549
Abstract
CT volumetry (CTV) has been widely used for pre-operative graft weight (GW) estimation in living-donor liver transplantation (LDLT), and the use of a deep-learning algorithm (DLA) may further improve its efficiency. However, its accuracy has not been well determined. To evaluate the efficiency and accuracy of DLA-assisted CTV in GW estimation, we performed a retrospective study of 581 consecutive LDLT donors who donated a right-lobe graft. Right-lobe graft volume (GV) was measured on CT using software implementing the DLA for automated liver segmentation. In the development group (n = 207), a volume-to-weight conversion formula was constructed by linear regression between the CTV-measured GV and the intraoperative GW. In the validation group (n = 374), the agreement between the estimated and measured GWs was assessed using the Bland–Altman 95% limit of agreement (LOA). The mean processing time for GV measurement was 1.8 ± 0.6 min (range, 1.3–8.0 min). In the validation group, the GW was estimated using the conversion formula (estimated GW [g] = 206.3 + 0.653 × CTV-measured GV [mL]), and the Bland–Altman 95% LOA between the estimated and measured GWs was −1.7% ± 17.1%. DLA-assisted CT volumetry allows time-efficient and accurate estimation of GW in LDLT.
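
The conversion formula and the Bland-Altman limits of agreement quoted above can be sketched directly; the GV/GW arrays here are illustrative, not donor data.

```python
import numpy as np

def estimate_gw(gv_ml):
    """Volume-to-weight conversion from the abstract: GW [g] from GV [mL]."""
    return 206.3 + 0.653 * gv_ml

gv = np.array([700.0, 850.0, 920.0, 780.0])        # CTV-measured volumes (mL)
measured = np.array([660.0, 790.0, 815.0, 720.0])  # intraoperative weights (g)

diff_pct = 100 * (estimate_gw(gv) - measured) / measured
bias, sd = diff_pct.mean(), diff_pct.std(ddof=1)
print(f"bias = {bias:.1f}%, 95% LOA = {bias - 1.96 * sd:.1f}% to {bias + 1.96 * sd:.1f}%")
```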

21 pages, 6350 KiB  
Article
Evaluating the Work Productivity of Assembling Reinforcement through the Objects Detected by Deep Learning
by Jiaqi Li, Xuefeng Zhao, Guangyi Zhou, Mingyuan Zhang, Dongfang Li and Yaochen Zhou
Sensors 2021, 21(16), 5598; https://doi.org/10.3390/s21165598 - 19 Aug 2021
Cited by 9 | Viewed by 2512
Abstract
With the rapid development of deep learning, computer vision has assisted in solving a variety of problems in engineering construction. However, very few computer vision-based approaches have been proposed for evaluating work productivity. Therefore, taking a super high-rise project as a case study and using the object information detected by a deep learning algorithm, a computer vision-based method for evaluating the productivity of assembling reinforcement is proposed. Firstly, a CenterNet-based detector that can accurately distinguish the various entities related to assembling reinforcement is established, with DLA-34 selected as the backbone; the mAP reaches 0.9682, and detecting a single image takes as little as 0.076 s. Secondly, the trained detector is applied to the video frames, producing images with detection boxes and files with box coordinates. The positional relationship between the detected work objects and the detected workers is used to determine how many workers (N) have participated in the task, and the time (T) taken to perform the process is obtained from the change in the work object’s coordinates. Finally, productivity is evaluated according to N and T. The authors use four actual construction videos for validation, and the results show that the productivity evaluation is generally consistent with actual conditions. The contribution of this research to construction management is twofold: on the one hand, without affecting the normal behavior of workers, a connection between individual workers and the work object is established, and work productivity evaluation is realized; on the other hand, the proposed method has a positive effect on improving the efficiency of construction management.
(This article belongs to the Section Sensor Networks)
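
A sketch of deriving N and T from per-frame detections, as the abstract describes: workers are counted when their box center is near the work object, and T is read from the frames in which the work object's coordinates change. The box format, distance threshold, movement tolerance, and frame rate are all illustrative assumptions.

```python
import numpy as np

def near(worker_box, object_box, max_dist=150):
    """Center-to-center distance test; boxes are [x1, y1, x2, y2]."""
    wx, wy = np.mean(worker_box[::2]), np.mean(worker_box[1::2])
    ox, oy = np.mean(object_box[::2]), np.mean(object_box[1::2])
    return np.hypot(wx - ox, wy - oy) < max_dist

def evaluate(frames, fps=25):
    """frames: list of (object_box, [worker_boxes]) per video frame."""
    n = max(sum(near(w, obj) for w in workers) for obj, workers in frames)
    moving = [i for i in range(1, len(frames))
              if not np.allclose(frames[i][0], frames[i - 1][0], atol=2)]
    t = (moving[-1] - moving[0]) / fps if moving else 0.0
    return n, t                                # workers involved, duration (s)
```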

11 pages, 248 KiB  
Article
Testing a Deep Learning Algorithm for Detection of Diabetic Retinopathy in a Spanish Diabetic Population and with MESSIDOR Database
by Marc Baget-Bernaldiz, Romero-Aroca Pedro, Esther Santos-Blanco, Raul Navarro-Gil, Aida Valls, Antonio Moreno, Hatem A. Rashwan and Domenec Puig
Diagnostics 2021, 11(8), 1385; https://doi.org/10.3390/diagnostics11081385 - 31 Jul 2021
Cited by 27 | Viewed by 3227
Abstract
Background: The aim of the present study was to test our deep learning algorithm (DLA) in reading retinographies. Methods: We tested our DLA, built on convolutional neural networks, on 14,186 retinographies from our population and 1200 images extracted from MESSIDOR. The retinal images were graded both by the DLA and, independently, by four retina specialists. Results of the DLA were compared in terms of accuracy (ACC), sensitivity (S), specificity (SP), positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC), distinguishing between the identification of any type of DR (any DR) and referable DR (RDR). Results: In testing the DLA for identifying any DR in our population, the results were: ACC = 99.75, S = 97.92, SP = 99.91, PPV = 98.92, NPV = 99.82, and AUC = 0.983. When detecting RDR, the results were: ACC = 99.66, S = 96.7, SP = 99.92, PPV = 99.07, NPV = 99.71, and AUC = 0.988. In testing the DLA for identifying any DR with MESSIDOR, the results were: ACC = 94.79, S = 97.32, SP = 94.57, PPV = 60.93, NPV = 99.75, and AUC = 0.959. When detecting RDR, the results were: ACC = 98.78, S = 94.64, SP = 99.14, PPV = 90.54, NPV = 99.53, and AUC = 0.968. Conclusions: Our DLA performed well, both in detecting any DR and in classifying eyes with RDR, in a sample of retinographies from type 2 DM patients in our population and in the MESSIDOR database.
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease)
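
The six reported metrics follow directly from graded labels and model scores; a minimal sketch with illustrative arrays (not study data), using scikit-learn for the AUC:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1])               # any-DR ground truth
y_score = np.array([0.1, 0.4, 0.8, 0.9, 0.2, 0.7])  # DLA output scores
y_pred = (y_score >= 0.5).astype(int)               # assumed decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("ACC", (tp + tn) / len(y_true), "S", tp / (tp + fn),
      "SP", tn / (tn + fp), "PPV", tp / (tp + fp),
      "NPV", tn / (tn + fn), "AUC", roc_auc_score(y_true, y_score))
```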