Topic Editors

Faculty of Automatic Control, Electronics and Computer Science, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
Institute of Electronics, Lodz University of Technology, Wolczanska 211/215, 90-924 Łódź, Poland
Department of Biocybernetics and Biomedical Engineering, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Cracow, Poland
Department of Radiology, Jagiellonian University Medical College, 19 Kopernika Street, 31-501 Cracow, Poland

AI in Medical Imaging and Image Processing

Abstract submission deadline
closed (31 October 2024)
Manuscript submission deadline
31 December 2024
Viewed by
46030

Topic Information

Dear Colleagues,

In modern healthcare, the importance of computer-aided diagnosis is quickly becoming obvious, with clear benefits for medical professionals and patients. The automation of processes traditionally carried out by human professionals is also growing in importance. Image analysis can be supported by networks that carry out multilayer analyses of patterns, collectively called artificial intelligence (AI). When supplied with large datasets of input data, such networks can suggest a result with low error bias. Medical imaging focused on pattern detection is typically supported by AI algorithms. AI can serve as an important aid in three major steps of decision making in the medical imaging workflow: detection (image segmentation), recognition (assignment to a class), and result description (transformation of the result into natural language). The implementation of AI algorithms can contribute to the standardization of the diagnostic process and markedly reduce the time needed to detect pathology and describe the results. With AI support, medical specialists may work more effectively, which can improve healthcare quality. As AI has been a topic of interest for some time, there are many approaches to and techniques for its implementation, based on different computing methods and designed to work in various systems. The aim of this Topic is to present the current knowledge of AI methods used in medical systems, with their applications in different fields of diagnostic imaging. Our goal is for this collection of works to contribute to the exchange of knowledge, resulting in a better understanding of the technical aspects of AI and its applications in modern radiology.

Dr. Karolina Nurzynska
Prof. Dr. Michał Strzelecki
Prof. Dr. Adam Piorkowski
Dr. Rafał Obuchowicz
Topic Editors

Keywords

  • artificial intelligence
  • computer-aided diagnosis
  • medical imaging
  • image analysis
  • image processing

Participating Journals

Journal Name                  Impact Factor  CiteScore  Launched Year  First Decision (median)  APC
BioMed                        -              -          2021           20.3 days                CHF 1000
Cancers                       4.5            8.0        2009           16.3 days                CHF 2900
Diagnostics                   3.0            4.7        2011           20.5 days                CHF 2600
Journal of Clinical Medicine  3.0            5.7        2012           17.3 days                CHF 2600
Tomography                    2.2            2.7        2015           23.9 days                CHF 2400

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics cooperates with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of these benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint record;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (41 papers)

21 pages, 15071 KiB  
Article
MaskAppendix: Backbone-Enriched Mask R-CNN Based on Grad-CAM for Automatic Appendix Segmentation
by Emre Dandıl, Betül Tiryaki Baştuğ, Mehmet Süleyman Yıldırım, Kadir Çorbacı and Gürkan Güneri
Diagnostics 2024, 14(21), 2346; https://doi.org/10.3390/diagnostics14212346 - 22 Oct 2024
Viewed by 374
Abstract
Background: A leading cause of emergency abdominal surgery, appendicitis is a common condition affecting millions of people worldwide. Automatic and accurate segmentation of the appendix from medical imaging is a challenging task, due to its small size, variability in shape, and proximity to other anatomical structures. Methods: In this study, we propose a backbone-enriched Mask R-CNN architecture (MaskAppendix) on the Detectron platform, enhanced with Gradient-weighted Class Activation Mapping (Grad-CAM), for precise appendix segmentation on computed tomography (CT) scans. In the proposed MaskAppendix deep learning model, the ResNet101 network is used as the backbone. By integrating Grad-CAM into the MaskAppendix network, our model improves feature localization, allowing it to better capture subtle variations in appendix morphology. Results: We conduct extensive experiments on a dataset of abdominal CT scans, demonstrating that our method achieves state-of-the-art performance in appendix segmentation, outperforming traditional segmentation techniques in terms of both accuracy and robustness. In the automatic segmentation of the appendix region in CT slices, a DSC of 87.17% was achieved with the proposed approach, and the results obtained have the potential to improve clinical diagnostic accuracy. Conclusions: This framework provides an effective tool for aiding clinicians in the diagnosis of appendicitis and other related conditions, reducing the potential for diagnostic errors and enhancing clinical workflow efficiency. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
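The DSC reported above is a standard overlap metric; for reference, a minimal NumPy sketch for binary masks (not the authors' evaluation code, and the random masks stand in for predicted and ground-truth appendix regions):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

rng = np.random.default_rng(0)
pred, truth = rng.random((256, 256)) > 0.5, rng.random((256, 256)) > 0.5
print(f"DSC = {dice_coefficient(pred, truth):.4f}")
```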

20 pages, 3823 KiB  
Article
Pulmonary Fissure Segmentation in CT Images Using Image Filtering and Machine Learning
by Mikhail Fufin, Vladimir Makarov, Vadim I. Alfimov, Vladislav V. Ananev and Anna Ananeva
Tomography 2024, 10(10), 1645-1664; https://doi.org/10.3390/tomography10100121 - 9 Oct 2024
Viewed by 490
Abstract
Background: Both lung lobe segmentation and lung fissure segmentation are useful in the clinical diagnosis and evaluation of lung disease. It is often of clinical interest to quantify each lobe separately because many diseases are associated with specific lobes. Fissure segmentation is important for a significant proportion of lung lobe segmentation methods, as well as for assessing fissure completeness, since there is an increasing requirement for the quantification of fissure integrity. Methods: We propose a method for the fully automatic segmentation of pulmonary fissures on lung computed tomography (CT) based on U-Net and PAN models using a Derivative of Stick (DoS) filter for data preprocessing. Model ensembling is also used to improve prediction accuracy. Results: Our method achieved an F1 score of 0.916 for right-lung fissures and 0.933 for left-lung fissures, which are significantly higher than the standalone DoS results (0.724 and 0.666, respectively). We also performed lung lobe segmentation using fissure segmentation. The lobe segmentation algorithm shows results close to those of state-of-the-art methods, with an average Dice score of 0.989. Conclusions: The proposed method segments pulmonary fissures efficiently and has low memory requirements, which makes it suitable for further research in this field involving rapid experimentation. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
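Model ensembling of the kind mentioned here can be as simple as averaging per-voxel probabilities before thresholding; a minimal sketch under that assumption (the paper's exact fusion rule may differ, and the random arrays stand in for U-Net and PAN outputs):

```python
import numpy as np

def ensemble_mask(prob_maps, threshold=0.5):
    """Average per-voxel fissure probabilities from several models, then binarize."""
    return np.mean(np.stack(prob_maps), axis=0) >= threshold

rng = np.random.default_rng(1)
unet_prob, pan_prob = rng.random((512, 512)), rng.random((512, 512))
fissure_mask = ensemble_mask([unet_prob, pan_prob])
print(fissure_mask.mean())  # fraction of voxels labelled as fissure
```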

13 pages, 2246 KiB  
Article
Opportunistic Screening for Low Bone Mineral Density in Adults with Cystic Fibrosis Using Low-Dose Computed Tomography of the Chest with Artificial Intelligence
by Matthias Welsner, Henning Navel, Rene Hosch, Peter Rathsmann, Florian Stehling, Annie Mathew, Sivagurunathan Sutharsan, Svenja Strassburg, Dirk Westhölter, Christian Taube, Sebastian Zensen, Benedikt M. Schaarschmidt, Michael Forsting, Felix Nensa, Mathias Holtkamp, Johannes Haubold, Luca Salhöfer and Marcel Opitz
J. Clin. Med. 2024, 13(19), 5961; https://doi.org/10.3390/jcm13195961 - 7 Oct 2024
Viewed by 750
Abstract
Background: Cystic fibrosis bone disease (CFBD) is a common comorbidity in adult people with cystic fibrosis (pwCF), resulting in an increased risk of bone fractures. This study evaluated the capacity of artificial intelligence (AI)-assisted low-dose chest CT (LDCT) opportunistic screening for detecting low bone mineral density (BMD) in adult pwCF. Methods: In this retrospective single-center study, 65 adult pwCF (mean age 30.1 ± 7.5 years) underwent dual-energy X-ray absorptiometry (DXA) of the lumbar vertebrae L1 to L4 to determine BMD and corresponding z-scores and completed LDCTs of the chest within three months as part of routine clinical care. A fully automated CT-based AI algorithm measured the attenuation values (Hounsfield units [HU]) of the thoracic vertebrae Th9–Th12 and first lumbar vertebra L1. The ability of the algorithm to diagnose CFBD was assessed using receiver operating characteristic (ROC) curves. Results: HU values of Th9 to L1 and DXA-derived BMD and the corresponding z-scores of L1 to L4 showed a strong correlation (all p < 0.05). The area under the curve (AUC) for diagnosing low BMD was highest for L1 (0.796; p = 0.001) and Th11 (0.835; p < 0.001), resulting in a specificity of 84.9% at a sensitivity level of 75%. The HU threshold values for distinguishing normal from low BMD were <197 (L1) and <212 (Th11), respectively. Conclusions: Routine LDCT of the chest with the fully automated AI-guided determination of thoracic and lumbar vertebral attenuation values is a valuable tool for predicting low BMD in adult pwCF, with the best results for Th11 and L1. However, further studies are required to define clear threshold values. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
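Choosing an HU cut-off at a fixed 75% sensitivity, as described above, is a standard ROC exercise; a hedged sketch with synthetic data (the group sizes, distributions, and variable names are illustrative only, not the study's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
low_bmd = rng.integers(0, 2, 65)                      # 1 = low BMD on DXA
hu = np.where(low_bmd == 1, rng.normal(185, 30, 65),  # lower attenuation
              rng.normal(235, 30, 65))                # normal BMD

# lower HU means lower BMD, so use -hu as the decision score
fpr, tpr, thr = roc_curve(low_bmd, -hu)
i = np.argmin(np.abs(tpr - 0.75))                     # operating point near 75% sensitivity
print(f"AUC = {roc_auc_score(low_bmd, -hu):.3f}, "
      f"cut-off: HU < {-thr[i]:.0f}, specificity = {1 - fpr[i]:.2f}")
```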

16 pages, 6662 KiB  
Article
Fully Automated Detection of the Appendix Using U-Net Deep Learning Architecture in CT Scans
by Betül Tiryaki Baştuğ, Gürkan Güneri, Mehmet Süleyman Yıldırım, Kadir Çorbacı and Emre Dandıl
J. Clin. Med. 2024, 13(19), 5893; https://doi.org/10.3390/jcm13195893 - 2 Oct 2024
Viewed by 804
Abstract
Background: The accurate segmentation of the appendix with well-defined boundaries is critical for diagnosing conditions such as acute appendicitis. The manual identification of the appendix is time-consuming and highly dependent on the expertise of the radiologist. Method: In this study, we propose a fully automated approach to the detection of the appendix using a deep learning architecture based on the U-Net with specific training parameters in CT scans. The proposed U-Net architecture is trained on an annotated original dataset of abdominal CT scans to segment the appendix efficiently and with high performance. In addition, to extend the training set, data augmentation techniques are applied for the created dataset. Results: In experimental studies, the proposed U-Net model is implemented using hyperparameter optimization and the performance of the model is evaluated using key metrics to measure diagnostic reliability. In segmenting the appendix in CT slices, the trained U-Net model achieved a Dice Similarity Coefficient (DSC), Volumetric Overlap Error (VOE), Average Symmetric Surface Distance (ASSD), Hausdorff Distance 95 (HD95), Precision (PRE) and Recall (REC) of 85.94%, 23.29%, 1.24 mm, 5.43 mm, 86.83% and 86.62%, respectively. Moreover, our model outperforms other methods by leveraging the U-Net's ability to capture spatial context through encoder–decoder structures and skip connections, providing a correct segmentation output. Conclusions: The proposed U-Net model showed reliable performance in segmenting the appendix region, with some limitations in cases where the appendix was close to other structures. These improvements highlight the potential of deep learning to significantly improve clinical outcomes in appendix detection. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
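For reference, the DSC and VOE reported above are tied by the identity VOE = 1 − DSC/(2 − DSC), since VOE is one minus the Jaccard index; a minimal sketch computing both from the same masks (not the authors' evaluation code):

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, truth: np.ndarray):
    """Return (DSC, VOE) for two non-empty binary masks; VOE = 1 - Jaccard."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dsc = 2.0 * inter / (pred.sum() + truth.sum())
    return dsc, 1.0 - inter / union
```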

14 pages, 828 KiB  
Article
Lightweight MRI Brain Tumor Segmentation Enhanced by Hierarchical Feature Fusion
by Lei Zhang, Rong Zhang, Zhongjie Zhu, Pei Li, Yongqiang Bai and Ming Wang
Tomography 2024, 10(10), 1577-1590; https://doi.org/10.3390/tomography10100116 - 1 Oct 2024
Viewed by 776
Abstract
Background: Existing methods for MRI brain tumor segmentation often suffer from excessive model parameters and suboptimal performance in delineating tumor boundaries. Methods: To address this issue, a lightweight MRI brain tumor segmentation method, enhanced by hierarchical feature fusion (EHFF), is proposed. This method reduces model parameters while improving segmentation performance by integrating hierarchical features. Initially, a fine-grained feature adjustment network is crafted and guided by global contextual information, leading to the establishment of an adaptive feature learning (AFL) module. This module captures the global features of MRI brain tumor images through macro perception and micro focus, adjusting spatial granularity to enhance feature details and reduce computational complexity. Subsequently, a hierarchical feature weighting (HFW) module is constructed. This module extracts multi-scale refined features through multi-level weighting, enhancing the detailed features of spatial positions and alleviating the lack of attention to local position details in macro perception. Finally, a hierarchical feature retention (HFR) module is designed as a supplementary decoder. This module retains, up-samples, and fuses feature maps from each layer, thereby achieving better detail preservation and reconstruction. Results: Experimental results on the BraTS 2021 dataset demonstrate that the proposed method surpasses existing methods. Dice similarity coefficients (DSC) for the three semantic categories ET, TC, and WT are 88.57%, 91.53%, and 93.09%, respectively. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)

10 pages, 757 KiB  
Article
Effectiveness of an Artificial Intelligence Software for Limb Radiographic Fracture Recognition in an Emergency Department
by Guillaume Herpe, Helena Nelken, Tanguy Vendeuvre, Jeremy Guenezan, Clement Giraud, Olivier Mimoz, Antoine Feydy, Jean-Pierre Tasu and Rémy Guillevin
J. Clin. Med. 2024, 13(18), 5575; https://doi.org/10.3390/jcm13185575 - 20 Sep 2024
Viewed by 827
Abstract
Objectives: To assess the impact of an Artificial Intelligence (AI) limb bone fracture diagnosis software (AIS) on emergency department (ED) workflow and diagnostic accuracy. Materials and Methods: A retrospective study was conducted in two phases—without AIS (Period 1: 1 January 2020–30 June 2020) and with AIS (Period 2: 1 January 2021–30 June 2021). Results: Among 3720 patients (1780 in Period 1; 1940 in Period 2), the discrepancy rate decreased by 17% (p = 0.04) after AIS implementation. Clinically relevant discrepancies showed no significant change (−1.8%, p = 0.99). The mean length of stay in the ED was reduced by 9 min (p = 0.03), and expert consultation rates decreased by 1% (p = 0.38). Conclusions: AIS implementation reduced the overall discrepancy rate and slightly decreased ED length of stay, although its impact on clinically relevant discrepancies remains inconclusive. Key Point: After AI software deployment, the rate of radiographic discrepancies decreased by 17% (p = 0.04), but the change in clinically relevant discrepancies was not significant (−2%, p = 0.99). Length of patient stay in the emergency department decreased by 5% with AI (p = 0.03). Bone fracture AI software streamlines ED workflow, but its clinical effectiveness remains to be demonstrated. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)

14 pages, 1218 KiB  
Article
Are Preoperative CT Findings Useful in Predicting the Duration of Laparoscopic Appendectomy in Pediatric Patients? A Single Center Study
by Ismail Taskent, Bunyamin Ece and Mehmet Ali Narsat
J. Clin. Med. 2024, 13(18), 5504; https://doi.org/10.3390/jcm13185504 - 18 Sep 2024
Viewed by 567
Abstract
Background/Objectives: Preoperative computed tomography (CT) imaging plays a vital role in accurately diagnosing acute appendicitis and assessing the severity of the condition, as well as the complexity of the surgical procedure. CT imaging provides detailed information on the anatomical and pathological aspects of appendicitis, allowing surgeons to anticipate technical challenges and select the most appropriate surgical approach. This retrospective study aimed to investigate the correlation between preoperative CT findings and the duration of laparoscopic appendectomy (LA) in pediatric patients. Methods: This retrospective study included 104 pediatric patients diagnosed with acute appendicitis via contrast-enhanced CT who subsequently underwent laparoscopic appendectomy (LA) between November 2021 and February 2024. CT images were meticulously reviewed by two experienced radiologists blinded to the clinical and surgical outcomes. The severity of appendicitis was evaluated using a five-point scale based on the presence of periappendiceal fat, fluid, extraluminal air, and abscesses. Results: The average operation time was 51.1 ± 21.6 min. Correlation analysis revealed significant positive associations between operation time and neutrophil count (p = 0.014), C-reactive protein levels (p = 0.002), symptom-to-operation time (p = 0.004), and appendix diameter (p = 0.017). The total CT score also showed a significant correlation with operation time (p < 0.001). Multiple regression analysis demonstrated that a symptom duration of more than 2 days (p = 0.047), time from CT to surgery (p = 0.039), and the presence of a periappendiceal abscess (p = 0.005) were independent predictors of prolonged operation time. In the perforated appendicitis group, the presence of a periappendiceal abscess on CT was significantly associated with prolonged operation time (p = 0.020). In the non-perforated group, the presence of periappendiceal fluid was significantly related to longer operation times (p = 0.026). Conclusions: In our study, preoperative CT findings, particularly the presence of a periappendiceal abscess, were significantly associated with prolonged operation times in pediatric patients undergoing laparoscopic appendectomy. Elevated CRP levels, the time between CT imaging and surgery, and a symptom duration of more than 2 days were also found to significantly impact the procedure’s duration. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)

14 pages, 898 KiB  
Article
Interrater Variability of ML-Based CT-FFR in Patients without Obstructive CAD before TAVR: Influence of Image Quality, Coronary Artery Calcifications, and Location of Measurement
by Robin F. Gohmann, Adrian Schug, Christian Krieghoff, Patrick Seitz, Nicolas Majunke, Maria Buske, Fyn Kaiser, Sebastian Schaudt, Katharina Renatus, Steffen Desch, Sergey Leontyev, Thilo Noack, Philipp Kiefer, Konrad Pawelka, Christian Lücke, Ahmed Abdelhafez, Sebastian Ebel, Michael A. Borger, Holger Thiele, Christoph Panknin, Mohamed Abdel-Wahab, Matthias Horn and Matthias Gutberlet
J. Clin. Med. 2024, 13(17), 5247; https://doi.org/10.3390/jcm13175247 - 4 Sep 2024
Viewed by 797
Abstract
Objectives: CT-derived fractional flow reserve (CT-FFR) can improve the specificity of coronary CT-angiography (cCTA) for ruling out relevant coronary artery disease (CAD) prior to transcatheter aortic valve replacement (TAVR). However, little is known about the reproducibility of CT-FFR and the influence of diffuse coronary artery calcifications or segment location. The objective was to assess the reliability of machine-learning (ML)-based CT-FFR prior to TAVR in patients without obstructive CAD and to assess the influence of image quality, coronary artery calcium score (CAC), and the location of measurement within the coronary tree. Methods: Patients assessed for TAVR, without obstructive CAD on cCTA were evaluated with ML-based CT-FFR by two observers with differing experience. Differences in absolute values and categorization into hemodynamically relevant CAD (CT-FFR ≤ 0.80) were compared. Results in regard to CAD were also compared against invasive coronary angiography. The influence of segment location, image quality, and CAC was evaluated. Results: Of the screened patients, 109/388 patients did not have obstructive CAD on cCTA and were included. The median (interquartile range) difference of CT-FFR values was −0.005 (−0.09 to 0.04) (p = 0.47). Differences were smaller with high values. Recategorizations were more frequent in distal segments. Diagnostic accuracy of CT-FFR between both observers was comparable (proximal: Δ0.2%; distal: Δ0.5%) but was lower in distal segments (proximal: 98.9%/99.1%; distal: 81.1%/81.6%). Image quality and CAC had no clinically relevant influence on CT-FFR. Conclusions: ML-based CT-FFR evaluation of proximal segments was more reliable. Distal segments with CT-FFR values close to the given threshold were prone to recategorization, even if absolute differences between observers were minimal and independent of image quality or CAC. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)

14 pages, 2364 KiB  
Article
Repurposing the Public BraTS Dataset for Postoperative Brain Tumour Treatment Response Monitoring
by Peter Jagd Sørensen, Claes Nøhr Ladefoged, Vibeke Andrée Larsen, Flemming Littrup Andersen, Michael Bachmann Nielsen, Hans Skovgaard Poulsen, Jonathan Frederik Carlsen and Adam Espe Hansen
Tomography 2024, 10(9), 1397-1410; https://doi.org/10.3390/tomography10090105 - 1 Sep 2024
Viewed by 998
Abstract
The Brain Tumor Segmentation (BraTS) Challenge has been a main driver of the development of deep learning (DL) algorithms and provides by far the largest publicly available expert-annotated brain tumour dataset but contains solely preoperative examinations. The aim of our study was to facilitate the use of the BraTS dataset for training DL brain tumour segmentation algorithms for a postoperative setting. To this end, we introduced an automatic conversion of the three-label BraTS annotation protocol to a two-label annotation protocol suitable for postoperative brain tumour segmentation. To assess the viability of the label conversion, we trained a DL algorithm using both the three-label and the two-label annotation protocols. We assessed the models pre- and postoperatively and compared the performance with a state-of-the-art DL method. The DL algorithm trained using the BraTS three-label annotation misclassified parts of 10 out of 41 fluid-filled resection cavities in 72 postoperative glioblastoma MRIs, whereas the two-label model showed no such inaccuracies. The tumour segmentation performance of the two-label model both pre- and postoperatively was comparable to that of a state-of-the-art algorithm for tumour volumes larger than 1 cm³. Our study enables using the BraTS dataset as a basis for the training of DL algorithms for postoperative tumour segmentation. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
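A three-to-two label conversion of this kind can be expressed as a simple relabelling of the BraTS annotation values (1 = necrotic core, 2 = edema, 4 = enhancing tumour, per the BraTS convention); the mapping below is an illustrative assumption, not the paper's published rule:

```python
import numpy as np

def convert_to_two_labels(brats_labels: np.ndarray) -> np.ndarray:
    """Collapse the BraTS three-label protocol into a two-label one.
    Assumed mapping: enhancing tumour -> 1, all other tumour tissue -> 2."""
    out = np.zeros_like(brats_labels)
    out[brats_labels == 4] = 1              # contrast-enhancing tumour
    out[np.isin(brats_labels, (1, 2))] = 2  # necrosis + edema
    return out
```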

14 pages, 10901 KiB  
Article
EpidermaQuant: Unsupervised Detection and Quantification of Epidermal Differentiation Markers on H-DAB-Stained Images of Reconstructed Human Epidermis
by Dawid Zamojski, Agnieszka Gogler, Dorota Scieglinska and Michal Marczyk
Diagnostics 2024, 14(17), 1904; https://doi.org/10.3390/diagnostics14171904 - 29 Aug 2024
Viewed by 600
Abstract
The integrity of the reconstructed human epidermis generated in vitro can be assessed using histological analyses combined with immunohistochemical staining of keratinocyte differentiation markers. Technical differences during the preparation and capture of stained images may influence the outcome of computational methods. Due to the specific nature of the analyzed material, no annotated datasets or dedicated methods are publicly available. Using a dataset with 598 unannotated images showing cross-sections of in vitro reconstructed human epidermis stained with DAB-based immunohistochemistry reaction to visualize four different keratinocyte differentiation marker proteins (filaggrin, keratin 10, Ki67, HSPA2) and counterstained with hematoxylin, we developed an unsupervised method for the detection and quantification of immunohistochemical staining. The pipeline consists of the following steps: (i) color normalization; (ii) color deconvolution; (iii) morphological operations; (iv) automatic image rotation; and (v) clustering. The most effective combination of methods includes (i) Reinhard’s normalization; (ii) Ruifrok and Johnston color-deconvolution method; (iii) proposed image-rotation method based on boundary distribution of image intensity; and (iv) k-means clustering. The results of the work should enhance the performance of quantitative analyses of protein markers in reconstructed human epidermis samples and enable the comparison of their spatial distribution between different experimental conditions. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
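Steps (ii) and (v) of the pipeline map onto standard tooling: scikit-image implements the Ruifrok and Johnston deconvolution as rgb2hed, and scikit-learn provides k-means. A condensed sketch on the library's bundled IHC sample image (not the authors' full pipeline, which also normalizes and rotates):

```python
import numpy as np
from skimage import data
from skimage.color import rgb2hed
from sklearn.cluster import KMeans

ihc = data.immunohistochemistry()  # bundled H-DAB sample image
dab = rgb2hed(ihc)[:, :, 2]        # Ruifrok & Johnston deconvolution, DAB channel

# cluster DAB intensity into background / weak / strong staining
km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(dab.reshape(-1, 1)).reshape(dab.shape)
strong = labels == np.argmax(km.cluster_centers_)
print(f"strongly stained fraction: {strong.mean():.1%}")
```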

14 pages, 1275 KiB  
Article
VerFormer: Vertebrae-Aware Transformer for Automatic Spine Segmentation from CT Images
by Xinchen Li, Yuan Hong, Yang Xu and Mu Hu
Diagnostics 2024, 14(17), 1859; https://doi.org/10.3390/diagnostics14171859 - 25 Aug 2024
Viewed by 837
Abstract
The accurate and efficient segmentation of the spine is important in the diagnosis and treatment of spine malfunctions and fractures. However, it is still challenging because of large inter-vertebra variations in shape and cross-image localization of the spine. In previous methods, convolutional neural networks (CNNs) have been widely applied as a vision backbone to tackle this task. However, these methods are challenged in utilizing the global contextual information across the whole image for accurate spine segmentation because of the inherent locality of the convolution operation. Compared with CNNs, the Vision Transformer (ViT) has been proposed as another vision backbone with a high capacity to capture global contextual information. However, when the ViT is employed for spine segmentation, it treats all input tokens equally, including vertebrae-related tokens and non-vertebrae-related tokens. Additionally, it lacks the capability to locate regions of interest, thus lowering the accuracy of spine segmentation. To address this limitation, we propose a novel Vertebrae-aware Vision Transformer (VerFormer) for automatic spine segmentation from CT images. Our VerFormer is designed by incorporating a novel Vertebrae-aware Global (VG) block into the ViT backbone. In the VG block, the vertebrae-related global contextual information is extracted by a Vertebrae-aware Global Query (VGQ) module. Then, this information is incorporated into query tokens to highlight vertebrae-related tokens in the multi-head self-attention module. Thus, this VG block can leverage global contextual information to effectively and efficiently locate spines across the whole input, improving the segmentation accuracy of VerFormer. Driven by this design, the VerFormer demonstrates a solid capacity to capture more discriminative dependencies and vertebrae-related context in automatic spine segmentation. The experimental results on two spine CT segmentation tasks demonstrate the effectiveness of our VG block and the superiority of our VerFormer in spine segmentation. Compared with other popular CNN- or ViT-based segmentation models, our VerFormer shows superior segmentation accuracy and generalization. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)

15 pages, 1533 KiB  
Article
MRI T2w Radiomics-Based Machine Learning Models in Imaging Simulated Biopsy Add Diagnostic Value to PI-RADS in Predicting Prostate Cancer: A Retrospective Diagnostic Study
by Jia-Cheng Liu, Xiao-Hao Ruan, Tsun-Tsun Chun, Chi Yao, Da Huang, Hoi-Lung Wong, Chun-Ting Lai, Chiu-Fung Tsang, Sze-Ho Ho, Tsui-Lin Ng, Dan-Feng Xu and Rong Na
Cancers 2024, 16(17), 2944; https://doi.org/10.3390/cancers16172944 - 23 Aug 2024
Viewed by 650
Abstract
Background: Currently, prostate cancer (PCa) prebiopsy medical image diagnosis mainly relies on mpMRI and PI-RADS scores. However, PI-RADS has its limitations, such as inter- and intra-radiologist variability and the potential for imperceptible features. The primary objective of this study is to evaluate the effectiveness of a machine learning model based on radiomics analysis of MRI T2-weighted (T2w) images for predicting PCa in prebiopsy cases. Method: A retrospective analysis was conducted using 820 lesions (363 cases, 457 controls) from The Cancer Imaging Archive (TCIA) Database for model development and validation. An additional 83 lesions (30 cases, 53 controls) from Hong Kong Queen Mary Hospital were used for independent external validation. The MRI T2w images were preprocessed, and radiomic features were extracted. Feature selection was performed using Cross Validation Least Angle Regression (CV-LARS). Using three different machine learning algorithms, a total of 18 prediction models and 3 shape control models were developed. The performance of the models, including the area under the curve (AUC) and diagnostic values such as sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), were compared to the PI-RADS scoring system for both internal and external validation. Results: All the models showed significant differences compared to the shape control model (all p < 0.001, except SVM model PI-RADS+2 Features p = 0.004, SVM model PI-RADS+3 Features p = 0.002). In internal validation, the best model, based on the LR algorithm, incorporated 3 radiomic features (AUC = 0.838, sensitivity = 76.85%, specificity = 77.36%). In external validation, the LR (3 features) model outperformed PI-RADS in predictive value with AUC 0.870 vs. 0.658, sensitivity 56.67% vs. 46.67%, specificity 92.45% vs. 84.91%, PPV 80.95% vs. 63.64%, and NPV 79.03% vs. 73.77%. Conclusions: The machine learning model based on radiomics analysis of MRI T2w images, along with simulated biopsy, provides additional diagnostic value to the PI-RADS scoring system in predicting PCa. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
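A hedged sketch of the selection-then-classification idea described above, substituting scikit-learn's cross-validated LassoLars for the paper's CV-LARS and synthetic features for the T2w radiomic ones:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LassoLarsCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=820, n_features=100, n_informative=5,
                           random_state=0)  # stand-in for radiomic features
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

sel = LassoLarsCV(cv=5).fit(Xtr, ytr)  # cross-validated LARS path
keep = np.flatnonzero(sel.coef_)       # features with non-zero weights

clf = LogisticRegression(max_iter=1000).fit(Xtr[:, keep], ytr)
print("AUC:", roc_auc_score(yte, clf.predict_proba(Xte[:, keep])[:, 1]))
```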

15 pages, 2761 KiB  
Article
Preoperative OCT Characteristics Contributing to Prediction of Postoperative Visual Acuity in Eyes with Macular Hole
by Yoko Mase, Yoshitsugu Matsui, Koki Imai, Kazuya Imamura, Akiko Irie-Ota, Shinichiro Chujo, Hisashi Matsubara, Hiroharu Kawanaka and Mineo Kondo
J. Clin. Med. 2024, 13(16), 4826; https://doi.org/10.3390/jcm13164826 - 15 Aug 2024
Viewed by 699
Abstract
Objectives: To develop a machine learning logistic regression algorithm that can classify patients with an idiopathic macular hole (IMH) into those with good or poor vision at 6 months after a vitrectomy. In addition, to determine its accuracy and the contribution of the preoperative OCT characteristics to the algorithm. Methods: This was a single-center, cohort study. The classifier was developed using preoperative clinical information and the optical coherence tomographic (OCT) findings of 43 eyes of 43 patients who had undergone a vitrectomy. The explanatory variables were selected using a filtering method based on statistical significance and variance inflation factor (VIF) values, and the objective variable was the best-corrected visual acuity (BCVA) at 6 months postoperation. The discrimination threshold of the BCVA was 0.15 logarithm of the minimum angle of resolution (logMAR) units. Results: The performance of the classifier was 0.92 for accuracy, 0.73 for recall, 0.60 for precision, 0.74 for F-score, and 0.84 for the area under the curve (AUC). In logistic regression, the standard regression coefficients were 0.28 for preoperative BCVA, 0.13 for outer nuclear layer defect length (ONL_DL), −0.21 for outer plexiform layer defect length (OPL_DL) − (ONL_DL), and −0.17 for (OPL_DL)/(ONL_DL). In terms of IMH morphology, a stenosis pattern narrowing from the OPL to the ONL of the MH had a significant effect on the postoperative BCVA at 6 months. Conclusions: Our results indicate that (OPL_DL) − (ONL_DL) had a similar contribution to preoperative visual acuity in predicting the postoperative visual acuity. This model had a strong performance, suggesting that the preoperative visual acuity and MH characteristics in the OCT images were crucial in forecasting the postoperative visual acuity in IMH patients. Thus, it can be used to classify MH patients into groups with good or poor postoperative visual acuity, and the classification was comparable to that of previous studies using deep learning. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
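Filtering explanatory variables by VIF, as described, is commonly done by iteratively dropping the worst offender; a small sketch using statsmodels (the threshold, column names, and induced collinearity are illustrative, not the study's data):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

def filter_by_vif(df: pd.DataFrame, threshold: float = 5.0) -> pd.DataFrame:
    """Iteratively drop the predictor with the highest VIF above threshold."""
    X = df.copy()
    while X.shape[1] > 1:
        vifs = pd.Series([variance_inflation_factor(X.values, i)
                          for i in range(X.shape[1])], index=X.columns)
        if vifs.max() <= threshold:
            break
        X = X.drop(columns=vifs.idxmax())
    return X

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(43, 3)),
                  columns=["bcva", "onl_dl", "opl_dl"])  # illustrative features
df["combo"] = df["onl_dl"] + df["opl_dl"] + rng.normal(0, 0.01, 43)  # collinear
print(filter_by_vif(df).columns.tolist())
```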

15 pages, 1070 KiB  
Article
Reproducibility and Repeatability in Focus: Evaluating LVEF Measurements with 3D Echocardiography by Medical Technologists
by Marc Østergaard Nielsen, Arlinda Ljoki, Bo Zerahn, Lars Thorbjørn Jensen and Bent Kristensen
Diagnostics 2024, 14(16), 1729; https://doi.org/10.3390/diagnostics14161729 - 9 Aug 2024
Viewed by 708
Abstract
Three-dimensional echocardiography (3DE) is currently the preferred method for monitoring left ventricular ejection fraction (LVEF) in cancer patients receiving potentially cardiotoxic anti-neoplastic therapy. In Denmark, however, the traditional standard for LVEF monitoring has been rooted in nuclear medicine departments utilizing equilibrium radionuclide angiography (ERNA). Although ERNA remains a principal modality, there is an emerging trend towards the adoption of echocardiography for this purpose. Given this context, assessing the reproducibility of 3DE among non-specialized medical personnel is crucial for its clinical adoption in such departments. To assess the feasibility of 3DE for LVEF measurements by technologists, we evaluated the repeatability and reproducibility of two moderately experienced technologists. They performed 3DE on 12 volunteers over two sessions, with a collaborative review of the results from the first session before the second session. Two-way intraclass correlation values increased from 0.03 to 0.77 across the sessions. This increase in agreement was mainly due to the recognition of false low measurements. Our findings underscore the importance of incorporating reproducibility exercises in the context of 3DE, especially when operated by technologists. Additionally, routine review of the acquisitions by physicians is deemed necessary. Provided these hurdles are adequately managed, 3DE for LVEF measurement can be adopted by technologists. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
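Inter-rater agreement of this kind is typically quantified with a two-way intraclass correlation; a minimal sketch using pingouin on made-up LVEF readings for 12 volunteers and two technologists (all values synthetic):

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
true_lvef = rng.normal(60, 5, 12)
df = pd.DataFrame({
    "subject": list(range(12)) * 2,
    "rater": ["tech_A"] * 12 + ["tech_B"] * 12,
    "lvef": np.concatenate([true_lvef + rng.normal(0, 2, 12),
                            true_lvef + rng.normal(0, 2, 12)]),
})
icc = pg.intraclass_corr(data=df, targets="subject", raters="rater", ratings="lvef")
print(icc[["Type", "ICC", "CI95%"]])
```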

19 pages, 5027 KiB  
Article
Brain Tumor Detection and Classification Using an Optimized Convolutional Neural Network
by Muhammad Aamir, Abdallah Namoun, Sehrish Munir, Nasser Aljohani, Meshari Huwaytim Alanazi, Yaser Alsahafi and Faris Alotibi
Diagnostics 2024, 14(16), 1714; https://doi.org/10.3390/diagnostics14161714 - 7 Aug 2024
Viewed by 3588
Abstract
Brain tumors are a leading cause of death globally, with numerous types varying in malignancy, and only 12% of adults diagnosed with brain cancer survive beyond five years. This research introduces a hyperparametric convolutional neural network (CNN) model to identify brain tumors, with significant practical implications. By fine-tuning the hyperparameters of the CNN model, we optimize feature extraction and systematically reduce model complexity, thereby enhancing the accuracy of brain tumor diagnosis. The critical hyperparameters include batch size, layer counts, learning rate, activation functions, pooling strategies, padding, and filter size. The hyperparameter-tuned CNN model was trained on three different brain MRI datasets available at Kaggle, producing outstanding performance scores, with an average value of 97% for accuracy, precision, recall, and F1-score. Our optimized model is effective, as demonstrated by our methodical comparisons with state-of-the-art approaches. Our hyperparameter modifications enhanced the model performance and strengthened its capacity for generalization, giving medical practitioners a more accurate and effective tool for making crucial judgments regarding brain tumor diagnosis. Our model is a significant step in the right direction toward trustworthy and accurate medical diagnosis, with practical implications for improving patient outcomes. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
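A hedged sketch of tuning over several of the hyperparameter dimensions the authors list (block count, filters, kernel size, activation, learning rate), expressed here with KerasTuner's random search rather than the authors' own procedure; the input shape and class count are placeholders:

```python
import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(128, 128, 1)))
    for i in range(hp.Int("conv_blocks", 2, 4)):
        model.add(tf.keras.layers.Conv2D(
            filters=hp.Choice(f"filters_{i}", [16, 32, 64]),
            kernel_size=hp.Choice("kernel", [3, 5]),
            padding="same",
            activation=hp.Choice("activation", ["relu", "elu"])))
        model.add(tf.keras.layers.MaxPooling2D())
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(4, activation="softmax"))  # e.g. 4 tumour classes
    model.compile(
        optimizer=tf.keras.optimizers.Adam(hp.Float("lr", 1e-4, 1e-2, sampling="log")),
        loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy", max_trials=20)
# tuner.search(x_train, y_train, validation_split=0.2, epochs=10)
```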

14 pages, 2341 KiB  
Article
Evaluation of Machine Learning Classification Models for False-Positive Reduction in Prostate Cancer Detection Using MRI Data
by Malte Rippa, Ruben Schulze, Georgia Kenyon, Marian Himstedt, Maciej Kwiatkowski, Rainer Grobholz, Stephen Wyler, Alexander Cornelius, Sebastian Schindera and Felice Burn
Diagnostics 2024, 14(15), 1677; https://doi.org/10.3390/diagnostics14151677 - 2 Aug 2024
Viewed by 1143
Abstract
In this work, several machine learning (ML) algorithms, both classical ML and modern deep learning, were investigated for their ability to improve the performance of a pipeline for the segmentation and classification of prostate lesions using MRI data. The algorithms were used to perform a binary classification of benign and malignant tissue visible in MRI sequences. The model choices include support vector machines (SVMs), random decision forests (RDFs), and multi-layer perceptrons (MLPs), along with radiomic features that are reduced by applying PCA or mRMR feature selection. Modern CNN-based architectures, such as ConvNeXt, ConvNet, and ResNet, were also evaluated in various setups, including transfer learning. To optimize the performance, different approaches were compared and applied to whole images, as well as gland, peripheral zone (PZ), and lesion segmentations. The contribution of this study is an investigation of several ML approaches regarding their performance in prostate cancer (PCa) diagnosis algorithms. This work delivers insights into the applicability of different approaches for this context based on an exhaustive examination. The outcome is a recommendation or preference for which machine learning model or family of models is best suited to optimize an existing pipeline when the model is applied as an upstream filter. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
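The comparison the study performs can be framed as a small benchmarking harness; a sketch with scikit-learn stand-ins for the classical models (synthetic features in place of radiomics; PCA to 20 components is an arbitrary choice):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=60, n_informative=8,
                           random_state=0)  # benign vs. malignant stand-in

models = {
    "SVM": make_pipeline(StandardScaler(), PCA(n_components=20), SVC()),
    "RDF": RandomForestClassifier(n_estimators=300, random_state=0),
    "MLP": make_pipeline(StandardScaler(), PCA(n_components=20),
                         MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000)),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} ± {auc.std():.3f}")
```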

14 pages, 2354 KiB  
Article
Identifying Acute Aortic Syndrome and Thoracic Aortic Aneurysm from Chest Radiography in the Emergency Department Using Convolutional Neural Network Models
by Yang-Tse Lin, Bing-Cheng Wang and Jui-Yuan Chung
Diagnostics 2024, 14(15), 1646; https://doi.org/10.3390/diagnostics14151646 - 30 Jul 2024
Viewed by 923
Abstract
(1) Background: Identifying acute aortic syndrome (AAS) and thoracic aortic aneurysm (TAA) in busy emergency departments (EDs) is crucial due to their life-threatening nature, necessitating timely and accurate diagnosis. (2) Methods: This retrospective case-control study was conducted in the ED of three hospitals. Adult patients visiting the ED between 1 January 2010 and 1 January 2020 with a chief complaint of chest or back pain were enrolled in the study. The collected chest radiograph (CXR) data were divided into training (80%) and testing (20%) datasets. The training dataset was trained by four different convolutional neural network (CNN) models. (3) Results: A total of 1625 patients were enrolled in this study. The InceptionV3 model achieved the highest F1 score of 0.76. (4) Conclusions: Analysis of CXRs using a CNN-based model provides a novel tool for clinicians to evaluate ED patients with chest pain and suspected AAS and TAA. The integration of such imaging tools into the ED could be considered in the future to enhance the diagnostic workflow for clinically fatal diseases. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
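A minimal transfer-learning sketch in Keras for the kind of binary CXR classifier described (InceptionV3 backbone); the input size, classification head, and training settings are assumptions, not the study's configuration:

```python
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze ImageNet features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # AAS/TAA vs. normal
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```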

10 pages, 4375 KiB  
Article
Deep Learning in Cardiothoracic Ratio Calculation and Cardiomegaly Detection
by Jakub Kufel, Iga Paszkiewicz, Szymon Kocot, Anna Lis, Piotr Dudek, Łukasz Czogalik, Michał Janik, Katarzyna Bargieł-Łączek, Wiktoria Bartnikowska, Maciej Koźlik, Maciej Cebula, Katarzyna Gruszczyńska and Zbigniew Nawrat
J. Clin. Med. 2024, 13(14), 4180; https://doi.org/10.3390/jcm13144180 - 17 Jul 2024
Viewed by 1031
Abstract
Objectives: The purpose of this study is to evaluate the performance of our deep learning algorithm in calculating cardiothoracic ratio (CTR) and thus in the assessment of cardiomegaly or pericardial effusion occurrences on chest radiography (CXR). Methods: From a database of 8000 CXRs, 13 folders with a comparable number of images were created. Then, 1020 images were chosen randomly, in proportion to the number of images in each folder. Afterward, CTR was calculated using RadiAnt Digital Imaging and Communications in Medicine (DICOM) Viewer software (2023.1). Next, heart and lung anatomical areas were marked in 3D Slicer. From these data, we trained an AI model which segmented heart and lung anatomy and determined the CTR value. Results: Our model achieved an Intersection over Union metric of 88.28% for the augmented training subset and 83.06% for the validation subset. The F1-scores for these subsets were 90.22% and 90.67%, respectively. In the comparative analysis of artificial intelligence (AI) vs. humans, the neural network yielded significantly lower transverse thoracic diameter (TTD) (p < 0.001), transverse cardiac diameter (TCD) (p < 0.001), and CTR (p < 0.001) values. Conclusions: The results confirm that there is a significant correlation between the measurements made by human observers and the neural network. After validation in clinical conditions, our method may be used as a screening test or advisory tool when a specialist is not available, especially in Intensive Care Units (ICUs) or Emergency Departments (ERs), where time plays a key role. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
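Given heart and lung segmentations, the CTR itself reduces to a ratio of maximal transverse diameters; a NumPy sketch under the common definition (using the lung masks' combined extent to approximate the internal thoracic diameter, which may differ from the study's exact measurement):

```python
import numpy as np

def max_transverse_diameter(mask: np.ndarray) -> int:
    """Widest horizontal extent of a binary mask, measured per image row."""
    widths = [cols[-1] - cols[0] + 1
              for row in mask if (cols := np.flatnonzero(row)).size]
    return max(widths, default=0)

def cardiothoracic_ratio(heart_mask: np.ndarray, lung_mask: np.ndarray) -> float:
    tcd = max_transverse_diameter(heart_mask)  # transverse cardiac diameter
    ttd = max_transverse_diameter(lung_mask)   # transverse thoracic diameter
    return tcd / ttd                           # CTR > 0.5 suggests cardiomegaly
```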

11 pages, 1014 KiB  
Article
Optimizing GPT-4 Turbo Diagnostic Accuracy in Neuroradiology through Prompt Engineering and Confidence Thresholds
by Akihiko Wada, Toshiaki Akashi, George Shih, Akifumi Hagiwara, Mitsuo Nishizawa, Yayoi Hayakawa, Junko Kikuta, Keigo Shimoji, Katsuhiro Sano, Koji Kamagata, Atsushi Nakanishi and Shigeki Aoki
Diagnostics 2024, 14(14), 1541; https://doi.org/10.3390/diagnostics14141541 - 17 Jul 2024
Cited by 1 | Viewed by 1180
Abstract
Background and Objectives: Integrating large language models (LLMs) such as GPT-4 Turbo into diagnostic imaging faces a significant challenge, with current misdiagnosis rates ranging from 30–50%. This study evaluates how prompt engineering and confidence thresholds can improve diagnostic accuracy in neuroradiology. Methods: We analyze 751 neuroradiology cases from the American Journal of Neuroradiology using GPT-4 Turbo with customized prompts to improve diagnostic precision. Results: Initially, GPT-4 Turbo achieved a baseline diagnostic accuracy of 55.1%. By reformatting responses to list five diagnostic candidates and applying a 90% confidence threshold, the precision of the top diagnosis increased to 72.9%, with the candidate list containing the correct diagnosis in 85.9% of cases, reducing the misdiagnosis rate to 14.1%. However, this threshold reduced the number of cases for which a diagnosis was returned. Conclusions: Strategic prompt engineering and high confidence thresholds significantly reduce misdiagnoses and improve the precision of LLM diagnostics in neuroradiology. More research is needed to optimize these approaches for broader clinical implementation, balancing accuracy and utility. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
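The 90% confidence rule amounts to post-processing the model's structured reply; a sketch assuming the prompt asks for five candidates with self-reported confidences in JSON (the format and field names are hypothetical, not the study's protocol):

```python
import json

def accept_diagnosis(reply_json: str, threshold: float = 0.90):
    """Return the top diagnosis only if its self-reported confidence
    clears the threshold; otherwise abstain (None)."""
    candidates = sorted(json.loads(reply_json),
                        key=lambda c: c["confidence"], reverse=True)
    top = candidates[0]
    return top["dx"] if top["confidence"] >= threshold else None

reply = ('[{"dx": "glioblastoma", "confidence": 0.93},'
         ' {"dx": "CNS lymphoma", "confidence": 0.04}]')
print(accept_diagnosis(reply))  # -> glioblastoma
```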

9 pages, 787 KiB  
Brief Report
Enhanced Diagnostic Precision: Assessing Tumor Differentiation in Head and Neck Squamous Cell Carcinoma Using Multi-Slice Spiral CT Texture Analysis
by Lays Assolini Pinheiro de Oliveira, Diana Lorena Garcia Lopes, João Pedro Perez Gomes, Rafael Vinicius da Silveira, Daniel Vitor Aguiar Nozaki, Lana Ferreira Santos, Gabriela Castellano, Sérgio Lúcio Pereira de Castro Lopes and Andre Luiz Ferreira Costa
J. Clin. Med. 2024, 13(14), 4038; https://doi.org/10.3390/jcm13144038 - 10 Jul 2024
Viewed by 1050
Abstract
This study explores the efficacy of texture analysis by using preoperative multi-slice spiral computed tomography (MSCT) to non-invasively determine the grade of cellular differentiation in head and neck squamous cell carcinoma (HNSCC). In a retrospective study, MSCT scans of patients with HNSCC were analyzed and classified based on their histological grade as moderately differentiated, well-differentiated, or poorly differentiated. The location of the tumor was categorized as either in the bone or in soft tissues. Segmentation of the lesion areas was conducted, followed by texture analysis. Eleven GLCM parameters across five different distances were calculated. Median values and correlations of texture parameters were examined in relation to tumor differentiation grade by using Spearman's correlation coefficient and Kruskal–Wallis and Dunn tests. Forty-six patients were included, predominantly female (87%), with a mean age of 66.7 years. Texture analysis revealed significant parameter correlations with histopathological grades of tumor differentiation. The study identified no significant age correlation with tumor differentiation, which underscores the potential of texture analysis as an age-independent biomarker. The strong correlations between texture parameters and histopathological grades support the integration of this technique into the clinical decision-making process. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
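GLCM parameters at multiple distances map onto scikit-image's graycomatrix and graycoprops; a condensed sketch of the feature-extraction and Spearman-correlation steps (the ROI, grades, and feature values are synthetic stand-ins, not the study's data):

```python
import numpy as np
from scipy.stats import spearmanr
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
roi = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # stand-in lesion ROI

glcm = graycomatrix(roi, distances=[1, 2, 3, 4, 5],
                    angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=256, symmetric=True, normed=True)
contrast = graycoprops(glcm, "contrast").mean(axis=1)  # one value per distance

# correlate one texture feature with ordinal differentiation grade across patients
grades = [1, 2, 3, 1, 2]             # hypothetical histological grades
feature = [2.1, 3.4, 5.0, 1.9, 3.1]  # hypothetical per-patient contrast values
rho, p = spearmanr(feature, grades)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```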

27 pages, 12614 KiB  
Article
Attention-Based Deep Learning Approach for Breast Cancer Histopathological Image Multi-Classification
by Lama A. Aldakhil, Haifa F. Alhasson and Shuaa S. Alharbi
Diagnostics 2024, 14(13), 1402; https://doi.org/10.3390/diagnostics14131402 - 1 Jul 2024
Viewed by 1412
Abstract
Breast cancer diagnosis from histopathology images is often time consuming and prone to human error, impacting treatment and prognosis. Deep learning diagnostic methods offer the potential for improved accuracy and efficiency in breast cancer detection and classification. However, they struggle with limited data and subtle variations within and between cancer types. Attention mechanisms provide feature refinement capabilities that have shown promise in overcoming such challenges. To this end, this paper proposes the Efficient Channel Spatial Attention Network (ECSAnet), an architecture built on EfficientNetV2 and augmented with a convolutional block attention module (CBAM) and additional fully connected layers. ECSAnet was fine-tuned using the BreakHis dataset, employing Reinhard stain normalization and image augmentation techniques to minimize overfitting and enhance generalizability. In testing, ECSAnet outperformed AlexNet, DenseNet121, EfficientNetV2-S, InceptionNetV3, ResNet50, and VGG16 in most settings, achieving accuracies of 94.2% at 40×, 92.96% at 100×, 88.41% at 200×, and 89.42% at 400× magnifications. The results highlight the effectiveness of CBAM in improving classification accuracy and the importance of stain normalization for generalizability. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
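CBAM itself is well documented in the literature; a compact PyTorch rendition of the channel-then-spatial attention it applies (a generic re-implementation for illustration, not the ECSAnet code):

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Toy convolutional block attention module: channel then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # channel attention from average- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # spatial attention from channel-wise average and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(2, 64, 32, 32)
print(CBAM(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```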

15 pages, 5298 KiB  
Article
Development and Validation of an Artificial Intelligence Model for Detecting Rib Fractures on Chest Radiographs
by Kaehong Lee, Sunhee Lee, Ji Soo Kwak, Heechan Park, Hoonji Oh and Jae Chul Koh
J. Clin. Med. 2024, 13(13), 3850; https://doi.org/10.3390/jcm13133850 - 30 Jun 2024
Viewed by 1067
Abstract
Background: Chest radiography is the standard method for detecting rib fractures. Our study aims to develop an artificial intelligence (AI) model that, with only a relatively small amount of training data, can identify rib fractures on chest radiographs and accurately mark their precise locations, thereby achieving a diagnostic accuracy comparable to that of medical professionals. Methods: For this retrospective study, we developed an AI model using 540 chest radiographs (270 normal and 270 with rib fractures) labeled for use with Detectron2, which incorporates a faster region-based convolutional neural network (R-CNN) enhanced with a feature pyramid network (FPN). The model's ability to classify radiographs and detect rib fractures was assessed. Furthermore, we compared the model's performance to that of 12 physicians, including six board-certified anesthesiologists and six residents, through an observer performance test. Results: Regarding the radiographic classification performance of the AI model, the sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) were 0.87, 0.83, and 0.89, respectively. In terms of rib fracture detection performance, the sensitivity, false-positive rate, and jackknife alternative free-response receiver operating characteristic (JAFROC) figure of merit (FOM) were 0.62, 0.3, and 0.76, respectively. The AI model showed no statistically significant difference in the observer performance test compared to 11 of 12 and 10 of 12 physicians, respectively. Conclusions: We developed an AI model trained on a limited dataset that demonstrated a rib fracture classification and detection performance comparable to that of an experienced physician. Full article

21 pages, 5833 KiB  
Article
Developing the Benchmark: Establishing a Gold Standard for the Evaluation of AI Caries Diagnostics
by Julian Boldt, Matthias Schuster, Gabriel Krastl, Marc Schmitter, Jonas Pfundt, Angelika Stellzig-Eisenhauer and Felix Kunz
J. Clin. Med. 2024, 13(13), 3846; https://doi.org/10.3390/jcm13133846 - 29 Jun 2024
Viewed by 1094
Abstract
Background/Objectives: The aim of this study was to establish a histology-based gold standard for the evaluation of artificial intelligence (AI)-based caries detection systems on proximal surfaces in bitewing images. Methods: Extracted human teeth were used to simulate intraoral situations, including caries-free teeth, teeth with artificially created defects, and teeth with natural proximal caries. All 153 simulations were radiographed from seven angles, resulting in 1071 in vitro bitewing images. Histological examination of the carious lesion depth was performed twice by an expert. A total of thirty examiners analyzed all the radiographs for caries. Results: We generated in vitro bitewing images to evaluate the performance of AI-based carious lesion detection against a histological gold standard. Across all examiners, the sensitivity was 0.565, the Matthews correlation coefficient (MCC) 0.578, and the area under the curve (AUC) 76.1%. The histology receiver operating characteristic (ROC) curve significantly outperformed the examiners’ ROC curve (p < 0.001). The examiners distinguished induced defects from true caries in 54.6% of cases and correctly classified 99.8% of all teeth. Expert caries classification of the histological images showed a high level of agreement (intraclass correlation coefficient (ICC) = 0.993). Examiner performance varied with caries depth (p ≤ 0.008), except between E2 and E1 lesions (p = 1), while central beam eccentricity, gender, occupation, and experience had no significant influence (all p ≥ 0.411). Conclusions: This study successfully established an unbiased dataset to evaluate AI-based caries detection on bitewing surfaces and compare it to human judgement, providing a standardized assessment for fair comparison between AI technologies and helping dental professionals to select reliable diagnostic tools.
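The reported sensitivity, MCC, and AUC can all be computed against such a gold standard with scikit-learn; a minimal sketch follows, where the label and score arrays are illustrative stand-ins, not the study's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, matthews_corrcoef, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])            # histological gold standard
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])            # examiner's binary call
y_score = np.array([.9, .2, .4, .8, .1, .6, .7, .3])   # graded confidence rating

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
mcc = matthews_corrcoef(y_true, y_pred)
auc = roc_auc_score(y_true, y_score)
print(f"sensitivity={sensitivity:.3f}  MCC={mcc:.3f}  AUC={auc:.3f}")
```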

19 pages, 7513 KiB  
Article
Augmenting Radiological Diagnostics with AI for Tuberculosis and COVID-19 Disease Detection: Deep Learning Detection of Chest Radiographs
by Manjur Kolhar, Ahmed M. Al Rajeh and Raisa Nazir Ahmed Kazi
Diagnostics 2024, 14(13), 1334; https://doi.org/10.3390/diagnostics14131334 - 24 Jun 2024
Viewed by 1068
Abstract
In this research, we introduce a network that can identify pneumonia, COVID-19, and tuberculosis using X-ray images of patients’ chests. The study emphasizes tuberculosis, COVID-19, and healthy lung conditions, discussing how advanced neural networks, like VGG16 and ResNet50, can improve the detection of lung issues from images. To prepare the images for the model’s input requirements, we enhanced them through data augmentation techniques for training purposes. We evaluated the model’s performance by analyzing the precision, recall, and F1 scores across training, validation, and testing datasets. The results show that the ResNet50 model outperformed VGG16 in both accuracy and robustness, displaying superior ROC AUC values in both validation and test scenarios. Particularly impressive were ResNet50’s precision and recall rates, nearing 0.99 for all conditions in the test set. On the other hand, VGG16 also performed well during testing, detecting tuberculosis with a precision of 0.99 and a recall of 0.93. Our study highlights the effectiveness of our deep learning method by showcasing the advantage of ResNet50 over approaches such as VGG16. This improvement rests on data augmentation and class balancing, which enhance classification accuracy and position our approach as an advance in applying state-of-the-art deep learning to medical imaging. By enhancing the accuracy and reliability of diagnosing ailments such as COVID-19 and tuberculosis, our models have the potential to transform care and treatment strategies, highlighting their role in clinical diagnostics.
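The transfer-learning setup described here follows a common recipe; a minimal sketch with torchvision is given below, assuming a three-class head (healthy / tuberculosis / COVID-19) and simple augmentations, both of which are our own illustrative choices.

```python
import torch.nn as nn
from torchvision import models, transforms

# Augmentation pipeline for training images (illustrative choices)
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# ImageNet-pretrained ResNet50 with the 1000-way head replaced by 3 classes
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 3)
```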

14 pages, 3006 KiB  
Article
Lung Disease Detection Using U-Net Feature Extractor Cascaded by Graph Convolutional Network
by Pshtiwan Qader Rashid and İlker Türker
Diagnostics 2024, 14(12), 1313; https://doi.org/10.3390/diagnostics14121313 - 20 Jun 2024
Viewed by 1197
Abstract
Computed tomography (CT) scans have recently emerged as a major technique for the fast diagnosis of lung diseases via image classification techniques. In this study, we propose a method for the diagnosis of COVID-19 disease with improved accuracy by utilizing graph convolutional networks (GCN) at various layer formations and kernel sizes to extract features from CT scan images. We apply a U-Net model to aid in segmentation and feature extraction. In contrast with previous research that retrieves deep features from convolutional filters and pooling layers, which fail to fully consider the spatial connectivity of the nodes, we employ GCNs for classification and prediction to capture spatial connectivity patterns, which provides a significant benefit in modeling the associations between regions. We process the extracted deep features into an adjacency matrix that encodes a graph structure and pass it to a GCN along with the original image graph and the largest kernel graph. We combine these graphs to form one block of graph input and then pass it through a GCN with an additional dropout layer to avoid overfitting. Our findings show that the suggested framework, called the feature-extracted graph convolutional network (FGCN), performs better in identifying lung diseases compared to recently proposed deep learning architectures that are not based on graph representations. The proposed model also outperforms a variety of transfer learning models commonly used for medical diagnosis tasks, highlighting the abstraction potential of the graph representation over traditional methods.
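A minimal sketch of the graph-convolution step over extracted deep features is shown below: build a similarity-based adjacency matrix, symmetrically normalize it with self-loops, and apply one GCN layer with dropout. The kNN graph construction and layer sizes are our own illustrative assumptions, not the FGCN specification.

```python
import torch
import torch.nn.functional as F

def knn_adjacency(x: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Connect each node (one deep feature vector) to its k nearest neighbours."""
    d = torch.cdist(x, x)                                # pairwise distances
    idx = d.topk(k + 1, largest=False).indices[:, 1:]    # skip self-distance
    a = torch.zeros(x.size(0), x.size(0))
    a.scatter_(1, idx, 1.0)
    return torch.maximum(a, a.T)                         # symmetrize

def gcn_layer(x, a, weight, p_drop=0.5):
    a_hat = a + torch.eye(a.size(0))                     # add self-loops
    d_inv_sqrt = a_hat.sum(1).pow(-0.5)
    a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    h = F.relu(a_norm @ x @ weight)                      # propagate + transform
    return F.dropout(h, p=p_drop, training=True)         # dropout against overfitting

nodes = torch.randn(64, 128)          # 64 nodes with 128-dim deep features (placeholder)
w = torch.randn(128, 32) * 0.1
out = gcn_layer(nodes, knn_adjacency(nodes), w)
```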

10 pages, 1068 KiB  
Article
Breast Cancer Screening Using Inverse Modeling of Surface Temperatures and Steady-State Thermal Imaging
by Nithya Sritharan, Carlos Gutierrez, Isaac Perez-Raya, Jose-Luis Gonzalez-Hernandez, Alyssa Owens, Donnette Dabydeen, Lori Medeiros, Satish Kandlikar and Pradyumna Phatak
Cancers 2024, 16(12), 2264; https://doi.org/10.3390/cancers16122264 - 19 Jun 2024
Viewed by 1015
Abstract
Cancer is characterized by increased metabolic activity and vascularity, leading to temperature changes in cancerous tissues compared to normal cells. This study focused on patients with abnormal mammogram findings or a clinical suspicion of breast cancer, exclusively those confirmed by biopsy. Utilizing an ultra-high-sensitivity thermal camera and prone patient positioning, we measured surface temperatures and integrated them with an inverse modeling technique based on heat transfer principles to predict malignant breast lesions. In a cohort of 25 breast tumors, our technique accurately predicted all tumors, with maximum errors below 5 mm in size and less than 1 cm in tumor location. Predictive efficacy was unaffected by tumor size, location, or breast density, with no aberrant predictions in the contralateral normal breast. Together, the infrared temperature profiles and inverse modeling successfully predicted breast cancer, highlighting the approach’s potential in breast cancer screening.
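The inverse-modeling idea is to fit unknown source parameters so that a forward model reproduces the measured surface temperatures. The toy sketch below uses a Gaussian warm-spot surrogate purely for illustration; the study's actual forward model is a bioheat-transfer simulation, and all parameter names here are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Surface measurement grid (arbitrary units)
xy = np.stack(np.meshgrid(np.linspace(-5, 5, 40), np.linspace(-5, 5, 40)), -1).reshape(-1, 2)

def forward(params, xy):
    """Toy forward model: a warm spot that widens with source depth."""
    x0, y0, depth, power = params
    r2 = (xy[:, 0] - x0) ** 2 + (xy[:, 1] - y0) ** 2
    return power * np.exp(-r2 / (2 * depth ** 2))

true = np.array([1.2, -0.8, 1.5, 0.9])
measured = forward(true, xy) + np.random.normal(0, 0.01, len(xy))  # noisy "camera" data

# Inverse step: least-squares fit of source location, depth, and power
fit = least_squares(lambda p: forward(p, xy) - measured, x0=[0, 0, 1, 0.5])
print("estimated (x, y, depth, power):", fit.x.round(2))
```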

12 pages, 1621 KiB  
Review
Artificial Intelligence in Coronary Artery Calcium Scoring
by Afolasayo A. Aromiwura and Dinesh K. Kalra
J. Clin. Med. 2024, 13(12), 3453; https://doi.org/10.3390/jcm13123453 - 13 Jun 2024
Viewed by 938
Abstract
Cardiovascular disease (CVD), particularly coronary heart disease (CHD), is the leading cause of death in the US, with a high economic impact. Coronary artery calcium (CAC) is a known marker for CHD and a useful tool for estimating the risk of atherosclerotic cardiovascular disease (ASCVD). Although CAC scoring (CACS) is recommended for informing the decision to initiate statin therapy, the current standard requires a dedicated CT protocol, which is time-intensive and contributes to radiation exposure. Non-dedicated CT protocols can be leveraged to visualize calcium and reduce overall cost and radiation exposure; however, they mainly provide visual estimates of coronary calcium and have disadvantages such as motion artifacts. Artificial intelligence is a growing field involving software that independently performs human-level tasks and is well suited to improving CACS efficiency and repurposing non-dedicated CT for calcium scoring. We present a review of current studies on automated CACS across various CT protocols and discuss consideration points in clinical application and some barriers to implementation.

18 pages, 3096 KiB  
Article
Causal Forest Machine Learning Analysis of Parkinson’s Disease in Resting-State Functional Magnetic Resonance Imaging
by Gabriel Solana-Lavalle, Michael D. Cusimano, Thomas Steeves, Roberto Rosas-Romero and Pascal N. Tyrrell
Tomography 2024, 10(6), 894-911; https://doi.org/10.3390/tomography10060068 - 6 Jun 2024
Viewed by 1385
Abstract
In recent years, artificial intelligence has been used to assist healthcare professionals in detecting and diagnosing neurodegenerative diseases. In this study, we propose a methodology to analyze functional magnetic resonance imaging (fMRI) signals and perform classification between Parkinson’s disease patients and healthy participants using machine learning algorithms. In addition, the proposed approach provides insights into the brain regions affected by the disease. The fMRI data from the PPMI and 1000-FCP datasets were pre-processed to extract time series from 200 brain regions per participant, resulting in 11,600 features. Causal Forest and Wrapper Feature Subset Selection algorithms were used for dimensionality reduction, resulting in a subset of features based on their heterogeneity and association with the disease. We utilized logistic regression and XGBoost algorithms to perform PD detection, achieving 97.6% accuracy, a 97.5% F1 score, 97.9% precision, and 97.7% recall by analyzing sets with fewer than 300 features in a population including men and women. Finally, Multiple Correspondence Analysis was employed to visualize the relationships between brain regions and each group (women with Parkinson’s, female controls, men with Parkinson’s, male controls). Associations between the Unified Parkinson’s Disease Rating Scale questionnaire results and affected brain regions in different groups were also obtained to show another use case of the methodology. This work proposes a methodology to (1) classify patients and controls with machine learning and the Causal Forest algorithm and (2) visualize associations between brain regions and groups, providing high-accuracy classification and enhanced interpretability of the correlation between specific brain regions and the disease across different groups.
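A sketch of the classification stage is shown below: reduce the ~11,600 features to a small subset, then fit XGBoost and report the metrics used above. A generic univariate filter stands in for the Causal Forest / wrapper selection step, and the arrays are placeholders.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score
from xgboost import XGBClassifier

X = np.random.rand(200, 11600)       # placeholder fMRI feature matrix
y = np.random.randint(0, 2, 200)     # PD vs. control labels (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Keep fewer than 300 features, as in the abstract (filter is a stand-in)
selector = SelectKBest(f_classif, k=300).fit(X_tr, y_tr)

clf = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
clf.fit(selector.transform(X_tr), y_tr)

pred = clf.predict(selector.transform(X_te))
print(accuracy_score(y_te, pred), f1_score(y_te, pred))
```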

21 pages, 10826 KiB  
Article
Breast Cancer Diagnosis Method Based on Cross-Mammogram Four-View Interactive Learning
by Xuesong Wen, Jianjun Li and Liyuan Yang
Tomography 2024, 10(6), 848-868; https://doi.org/10.3390/tomography10060065 - 1 Jun 2024
Viewed by 868
Abstract
Computer-aided diagnosis systems play a crucial role in the diagnosis and early detection of breast cancer. However, most current methods focus primarily on the dual-view analysis of a single breast, thereby neglecting the potentially valuable information between bilateral mammograms. In this paper, we propose a Four-View Correlation and Contrastive Joint Learning Network (FV-Net) for the classification of bilateral mammogram images. Specifically, FV-Net focuses on extracting and matching features across the four views of bilateral mammograms while maximizing the similarity of corresponding features and the dissimilarity of non-corresponding ones. Through the Cross-Mammogram Dual-Pathway Attention Module, feature matching between bilateral mammogram views is achieved, capturing the consistent and complementary features across mammograms and effectively reducing feature misalignment. In the reconstituted feature maps derived from bilateral mammograms, the Bilateral-Mammogram Contrastive Joint Learning module performs associative contrastive learning on positive and negative sample pairs within each local region. This aims to maximize the correlation between similar local features and enhance the differentiation between dissimilar features across the bilateral mammogram representations. Our experimental results on a test set comprising 20% of the combined Mini-DDSM and Vindr-mamo datasets, as well as on the INbreast dataset, show that our model exhibits superior performance in breast cancer classification compared to competing methods.
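For intuition, region-wise contrastive learning between bilateral feature maps can be sketched as below: corresponding local regions form positive pairs and all others act as negatives. This is a generic InfoNCE-style stand-in, not the paper's exact loss; tensor shapes and the temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def region_contrastive_loss(left: torch.Tensor, right: torch.Tensor, tau: float = 0.1):
    """left/right: (R, D) tensors of R local region embeddings per breast."""
    l = F.normalize(left, dim=1)
    r = F.normalize(right, dim=1)
    logits = l @ r.T / tau               # cosine similarities between all region pairs
    targets = torch.arange(l.size(0))    # region i on the left matches region i on the right
    return F.cross_entropy(logits, targets)

loss = region_contrastive_loss(torch.randn(16, 256), torch.randn(16, 256))
```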

17 pages, 17399 KiB  
Article
Evaluating Cellularity Estimation Methods: Comparing AI Counting with Pathologists’ Visual Estimates
by Tomoharu Kiyuna, Eric Cosatto, Kanako C. Hatanaka, Tomoyuki Yokose, Koji Tsuta, Noriko Motoi, Keishi Makita, Ai Shimizu, Toshiya Shinohara, Akira Suzuki, Emi Takakuwa, Yasunari Takakuwa, Takahiro Tsuji, Mitsuhiro Tsujiwaki, Mitsuru Yanai, Sayaka Yuzawa, Maki Ogura and Yutaka Hatanaka
Diagnostics 2024, 14(11), 1115; https://doi.org/10.3390/diagnostics14111115 - 28 May 2024
Viewed by 1157
Abstract
The development of next-generation sequencing (NGS) has enabled the discovery of cancer-specific driver gene alterations, making precision medicine possible. However, accurate genetic testing requires a sufficient amount of tumor cells in the specimen. The evaluation of the tumor content ratio (TCR) from hematoxylin and eosin (H&E)-stained images has been found to vary between pathologists, making it an important challenge to obtain an accurate TCR. In this study, three pathologists exhaustively labeled all cells in 41 regions from 41 lung cancer cases as either tumor, non-tumor, or indistinguishable, thus establishing a “gold standard” TCR. We then compared the accuracy of the TCR estimated by 13 pathologists based on visual assessment with the TCR calculated by an AI model that we have developed. It is a compact and fast model that follows a fully convolutional neural network architecture and produces cell detection maps, which can be efficiently post-processed to obtain tumor and non-tumor cell counts from which the TCR is calculated. Its raw cell detection accuracy is 92%, while its classification accuracy is 84%. The results show that the error between the gold standard TCR and the AI calculation was significantly smaller than that between the gold standard TCR and the pathologists’ visual assessments (p < 0.05). Additionally, the robustness of AI models across institutions is a key issue, and we demonstrate that the variation in AI was smaller than that in the average of pathologists when evaluated by institution. These findings suggest that the accuracy of tumor cellularity assessments in clinical workflows is significantly improved by the introduction of robust AI models, leading to more efficient genetic testing and ultimately to better patient outcomes.
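The TCR itself is a simple count ratio computed after the detection maps are post-processed into per-class counts; a small sketch (function and argument names are illustrative):

```python
def tumor_content_ratio(n_tumor: int, n_non_tumor: int) -> float:
    """TCR = tumor cells / (tumor + non-tumor cells)."""
    return n_tumor / (n_tumor + n_non_tumor)

print(tumor_content_ratio(n_tumor=420, n_non_tumor=980))  # -> 0.3
```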

11 pages, 880 KiB  
Article
Anomaly Detection and Biomarkers Localization in Retinal Images
by Liran Tiosano, Ron Abutbul, Rivkah Lender, Yahel Shwartz, Itay Chowers, Yedid Hoshen and Jaime Levy
J. Clin. Med. 2024, 13(11), 3093; https://doi.org/10.3390/jcm13113093 - 24 May 2024
Viewed by 847
Abstract
Background: To design a novel anomaly detection and localization approach based on artificial intelligence methods applied to optical coherence tomography (OCT) scans for retinal diseases. Methods: High-resolution OCT scans from the publicly available Kaggle dataset and a local dataset were analyzed with four state-of-the-art self-supervised frameworks. The backbone model of all the frameworks was a pre-trained convolutional neural network (CNN), which enabled the extraction of meaningful features from OCT images. Anomalous images included choroidal neovascularization (CNV), diabetic macular edema (DME), and the presence of drusen. Anomaly detectors were evaluated by commonly accepted performance metrics, including the area under the receiver operating characteristic curve, F1 score, and accuracy. Results: A total of 25,315 high-resolution retinal OCT slabs were used for training. The test and validation sets consisted of 968 and 4000 slabs, respectively. The best-performing anomaly detector achieved an area under the receiver operating characteristic curve of 0.99. All frameworks were shown to achieve high performance and generalize well across the different retinal diseases. Heat maps were generated to visualize the quality of the frameworks’ ability to localize anomalous areas of the image. Conclusions: This study shows that with the use of pre-trained feature extractors, the frameworks tested can generalize to the domain of retinal OCT scans and achieve high image-level ROC-AUC scores. The localization results of these frameworks are promising and successfully capture areas that indicate the presence of retinal pathology. Moreover, such frameworks have the potential to uncover new biomarkers that are difficult for the human eye to detect. Frameworks for anomaly detection and localization can potentially be integrated into clinical decision support and automatic screening systems that will aid ophthalmologists in patient diagnosis, follow-up, and treatment design. This work establishes a solid basis for further development of automated anomaly detection frameworks for clinical use.
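A shared recipe behind such frameworks can be sketched as follows: embed slabs with a frozen pretrained CNN, then score anomalies by distance to normal-only training features. This kNN-style scoring is a generic stand-in; the four frameworks in the study differ in their exact scoring, and all inputs below are placeholders.

```python
import torch
from torchvision import models
from sklearn.neighbors import NearestNeighbors

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()     # expose the 512-d penultimate features
backbone.eval()

@torch.no_grad()
def embed(batch):                     # batch: (N, 3, 224, 224)
    return backbone(batch).numpy()

normal_feats = embed(torch.randn(100, 3, 224, 224))   # healthy training slabs
knn = NearestNeighbors(n_neighbors=5).fit(normal_feats)

test_feats = embed(torch.randn(8, 3, 224, 224))
dists, _ = knn.kneighbors(test_feats)
anomaly_score = dists.mean(axis=1)    # larger distance = more anomalous
```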

16 pages, 1888 KiB  
Article
Prediction of Lymph Node Metastasis in T1 Colorectal Cancer Using Artificial Intelligence with Hematoxylin and Eosin-Stained Whole-Slide-Images of Endoscopic and Surgical Resection Specimens
by Joo Hye Song, Eun Ran Kim, Yiyu Hong, Insuk Sohn, Soomin Ahn, Seok-Hyung Kim and Kee-Taek Jang
Cancers 2024, 16(10), 1900; https://doi.org/10.3390/cancers16101900 - 16 May 2024
Viewed by 1114
Abstract
According to the current guidelines, additional surgery is performed for endoscopically resected specimens of early colorectal cancer (CRC) with a high risk of lymph node metastasis (LNM). However, the rate of LNM is 2.1–25.0% in cases treated endoscopically followed by surgery, indicating a high rate of unnecessary surgeries. Therefore, this study aimed to develop an artificial intelligence (AI) model using H&E-stained whole slide images (WSIs) without handcrafted features, employing surgically and endoscopically resected specimens, to predict LNM in T1 CRC. To validate with an independent cohort, we developed a model with four versions comprising various combinations of training and test sets using H&E-stained WSIs from endoscopically (400 patients) and surgically resected specimens (881 patients): Version 1, Train and Test: surgical specimens; Version 2, Train and Test: endoscopic and surgically resected specimens; Version 3, Train: endoscopic and surgical specimens and Test: surgical specimens; Version 4, Train: endoscopic and surgical specimens and Test: endoscopic specimens. The area under the curve (AUC) of the receiver operating characteristic curve was used to determine the accuracy of the AI model for predicting LNM, with 5-fold cross-validation in the training set. Our AI model with H&E-stained WSIs and without annotations showed good predictive performance in the validation of an independent cohort in a single center. The AUC of our model was 0.758–0.830 in the training set and 0.781–0.824 in the test set, higher than that of previous AI studies with only WSI. Moreover, the AI model with Version 4, which showed the highest sensitivity (92.9%), reduced unnecessary additional surgery by 14.2 percentage points relative to the current guidelines (68.3% vs. 82.5%). This demonstrates the feasibility of using an AI model with only H&E-stained WSIs to predict LNM in T1 CRC.
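The evaluation protocol, 5-fold cross-validated AUC on the training cohort, can be sketched as follows; the slide-level feature vectors, labels, and classifier are placeholders, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X = np.random.rand(400, 64)           # slide-level feature vectors (placeholder)
y = np.random.randint(0, 2, 400)      # LNM present / absent (placeholder)

aucs = []
for tr, va in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
    aucs.append(roc_auc_score(y[va], clf.predict_proba(X[va])[:, 1]))

print(f"cross-validated AUC: {np.mean(aucs):.3f} ± {np.std(aucs):.3f}")
```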

18 pages, 2277 KiB  
Article
Revolutionizing Radiological Analysis: The Future of French Language Automatic Speech Recognition in Healthcare
by Mariem Jelassi, Oumaima Jemai and Jacques Demongeot
Diagnostics 2024, 14(9), 895; https://doi.org/10.3390/diagnostics14090895 - 25 Apr 2024
Viewed by 1221
Abstract
This study introduces a specialized Automatic Speech Recognition (ASR) system, leveraging the Whisper Large-v2 model, specifically adapted for radiological applications in the French language. The methodology focused on adapting the model to accurately transcribe medical terminology and diverse accents within the French language context, achieving a notable Word Error Rate (WER) of 17.121%. This research involved extensive data collection and preprocessing, utilizing a wide range of French medical audio content. The results demonstrate the system’s effectiveness in transcribing complex radiological data, underscoring its potential to enhance medical documentation efficiency in French-speaking clinical settings. The discussion extends to the broader implications of this technology in healthcare, including its potential integration with electronic health records (EHRs) and its utility in medical education. This study also explores future research directions, such as tailoring ASR systems to specific medical specialties and languages. Overall, this research contributes significantly to the field of medical ASR systems, presenting a robust tool for radiological transcription in the French language and paving the way for advanced technology-enhanced healthcare solutions.
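A minimal transcription-and-WER sketch with the Whisper Large-v2 checkpoint via Hugging Face transformers is shown below; the audio array and reference text are placeholders, the generate keyword arguments follow recent library versions, and the study's domain adaptation step is omitted.

```python
import numpy as np
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from jiwer import wer

processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")

audio = np.random.randn(16000 * 5).astype(np.float32)   # 5 s at 16 kHz (placeholder)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
ids = model.generate(inputs.input_features, language="fr", task="transcribe")
hypothesis = processor.batch_decode(ids, skip_special_tokens=True)[0]

# Word Error Rate against a (placeholder) reference transcript
print(wer("scanner thoracique sans injection", hypothesis))
```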

14 pages, 2340 KiB  
Article
Classification of Osteophytes Occurring in the Lumbar Intervertebral Foramen
by Abdullah Emre Taçyıldız and Feyza İnceoğlu
Tomography 2024, 10(4), 618-631; https://doi.org/10.3390/tomography10040047 - 19 Apr 2024
Viewed by 1591
Abstract
Background: Surgeons have limited knowledge of the lumbar intervertebral foramina. This study aimed to classify osteophytes in the lumbar intervertebral foramen, determine their pathoanatomical characteristics, discuss their potential biomechanical effects, and contribute to developing surgical methods. Methods: We conducted a retrospective, non-randomized, single-center study involving 1224 patients. The gender, age, and anatomical location of the osteophytes in the lumbar intervertebral foramina of the patients were recorded. Results: Two hundred and forty-nine (20.34%) patients had one or more osteophytes in their lumbar 4 (L4) and lumbar 5 (L5) foramina. Of the 4896 foramina, 337 (6.88%) contained different types of osteophytes. Four anatomical types of osteophytes were found: mixed osteophytes in 181 (3.69%) foramina, osteophytes from the lower endplate of the superior vertebra in 91 (1.85%) foramina, osteophytes from the junction of the pedicle and lamina of the upper vertebra in 39 (0.79%) foramina, and osteophytes from the upper endplate of the lower vertebra in 26 (0.53%) foramina. The L4 foramen contained a significantly higher number of osteophytes than the L5 foramen. Osteophyte development increased significantly with age, with no difference between males and females. Conclusions: The findings show that osteophytic extrusions, which alter the natural anatomical structure of the lumbar intervertebral foramina, are common and can narrow the foramen.

17 pages, 9206 KiB  
Article
Contrast Agent Dynamics Determine Radiomics Profiles in Oncologic Imaging
by Martin L. Watzenboeck, Lucian Beer, Daria Kifjak, Sebastian Röhrich, Benedikt H. Heidinger, Florian Prayer, Ruxandra-Iulia Milos, Paul Apfaltrer, Georg Langs, Pascal A. T. Baltzer and Helmut Prosch
Cancers 2024, 16(8), 1519; https://doi.org/10.3390/cancers16081519 - 16 Apr 2024
Viewed by 1063
Abstract
Background: The reproducibility of radiomics features extracted from CT and MRI examinations depends on several physiological and technical factors. The aim was to evaluate the impact of contrast agent timing on the stability of radiomics features using dynamic contrast-enhanced perfusion CT (dceCT) or MRI (dceMRI) in prostate and lung cancers. Methods: Radiomics features were extracted from dceCT or dceMRI images in patients with biopsy-proven peripheral prostate cancer (pzPC) or biopsy-proven non-small cell lung cancer (NSCLC), respectively. Features that showed significant differences between contrast phases were identified using linear mixed models. An L2-penalized logistic regression classifier was used to predict class labels for pzPC and unaffected prostate regions of interest (ROIs). Results: Nine pzPC and 28 NSCLC patients, who were imaged with dceCT and/or dceMRI, were included in this study. After normalizing for individual enhancement patterns by defining seven individual phases based on a reference vessel, 19, 467, and 128 out of 1204 CT features showed significant temporal dynamics in healthy prostate parenchyma, prostate tumors, and lung tumors, respectively. CT radiomics-based classification accuracy of healthy and tumor ROIs was highly dependent on the contrast agent phase. For dceMRI, 899 and 1027 out of 1118 features were significantly dependent on the time after contrast agent injection for prostate and lung tumors, respectively. Conclusions: CT and MRI radiomics features in both prostate and lung tumors are significantly affected by interindividual differences in contrast agent dynamics.
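The phase-dependence test can be sketched per feature as a linear mixed model with contrast phase as a fixed effect and patient as the grouping factor; the long-format DataFrame below is illustrative, not the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# columns: patient id, contrast phase (1..7 in the study), one feature's value
df = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "phase":   [1, 2, 3] * 4,
    "value":   [0.8, 1.1, 1.4, 0.7, 1.0, 1.3, 0.9, 1.2, 1.5, 0.8, 1.0, 1.4],
})

# Random intercept per patient; a small p-value for "phase" flags a feature
# whose value depends on contrast agent timing.
m = smf.mixedlm("value ~ phase", df, groups=df["patient"]).fit()
print(m.pvalues["phase"])
```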

16 pages, 2832 KiB  
Article
Impact of Deep Learning Denoising Algorithm on Diffusion Tensor Imaging of the Growth Plate on Different Spatial Resolutions
by Laura Santos, Hao-Yun Hsu, Ronald R. Nelson, Jr., Brendan Sullivan, Jaemin Shin, Maggie Fung, Marc R. Lebel, Sachin Jambawalikar and Diego Jaramillo
Tomography 2024, 10(4), 504-519; https://doi.org/10.3390/tomography10040039 - 2 Apr 2024
Viewed by 1163
Abstract
To assess the impact of a deep learning (DL) denoising reconstruction algorithm applied to identical patient scans acquired with two different voxel dimensions, representing distinct spatial resolutions, this IRB-approved prospective study was conducted at a tertiary pediatric center in compliance with the Health Insurance Portability and Accountability Act. A General Electric Signa Premier unit (GE Medical Systems, Milwaukee, WI) was employed to acquire two DTI (diffusion tensor imaging) sequences of the left knee of each child at 3T: an in-plane 2.0 × 2.0 mm² acquisition with a section thickness of 3.0 mm, and a 2 mm³ isovolumetric voxel; neither had an intersection gap. For image acquisition, a multi-band DTI with a fat-suppressed single-shot spin-echo echo-planar sequence (20 non-collinear directions; b-values of 0 and 600 s/mm²) was utilized. The MR vendor provided a commercially available DL model, which was applied with 75% noise reduction settings to the same subjects’ DTI sequences at the two spatial resolutions. We compared DTI tract metrics from both DL-reconstructed scans and non-denoised scans for the femur and tibia at each spatial resolution. Differences were evaluated using the Wilcoxon signed-rank test and Bland–Altman plots. When comparing DL versus non-denoised diffusion metrics in the femur and tibia using the 2 mm × 2 mm × 3 mm voxel dimension, there were no significant differences in tract count (p = 0.1, p = 0.14), tract volume (p = 0.1, p = 0.29), or tibial tract length (p = 0.16); femur tract length exhibited a significant difference (p < 0.01). All diffusion metrics (tract count, volume, length, and fractional anisotropy (FA)) derived from the DL-reconstructed scans were significantly different from the non-denoised scan DTI metrics in both the femoral and tibial physes using the 2 mm³ voxel size (p < 0.001). DL reconstruction resulted in a significant decrease in femorotibial FA for both voxel dimensions (p < 0.01). Leveraging denoising algorithms could address the drawbacks of the lower signal-to-noise ratios (SNRs) associated with smaller voxel volumes and capitalize on their better spatial resolution, allowing for more accurate quantification of diffusion metrics.
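The statistical comparison used here, a paired Wilcoxon signed-rank test plus a Bland–Altman plot, can be sketched as follows for one diffusion metric; the per-subject arrays are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import wilcoxon

fa_plain = np.random.normal(0.45, 0.05, 20)              # non-denoised FA per subject
fa_dl = fa_plain + np.random.normal(-0.01, 0.02, 20)     # DL-reconstructed FA

print(wilcoxon(fa_plain, fa_dl))                         # paired signed-rank test

# Bland–Altman: mean vs. difference with mean bias and 1.96 SD limits
mean = (fa_plain + fa_dl) / 2
diff = fa_plain - fa_dl
plt.scatter(mean, diff)
for y in (diff.mean(), diff.mean() + 1.96 * diff.std(), diff.mean() - 1.96 * diff.std()):
    plt.axhline(y, linestyle="--")
plt.xlabel("mean FA"); plt.ylabel("difference (plain - DL)")
```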

17 pages, 2263 KiB  
Article
Characterization of CD34+ Cells from Patients with Acute Myeloid Leukemia (AML) and Myelodysplastic Syndromes (MDS) Using a t-Distributed Stochastic Neighbor Embedding (t-SNE) Protocol
by Cathrin Nollmann, Wiebke Moskorz, Christian Wimmenauer, Paul S. Jäger, Ron P. Cadeddu, Jörg Timm, Thomas Heinzel and Rainer Haas
Cancers 2024, 16(7), 1320; https://doi.org/10.3390/cancers16071320 - 28 Mar 2024
Viewed by 1314
Abstract
Using multi-color flow cytometry analysis, we studied the immunophenotypical differences between leukemic cells from patients with AML/MDS and hematopoietic stem and progenitor cells (HSPCs) from patients in complete remission (CR) following their successful treatment. The panel of markers included CD34, CD38, CD45RA, and CD123 as representatives for a hierarchical hematopoietic stem and progenitor cell (HSPC) classification, as well as programmed death ligand 1 (PD-L1). Rather than restricting the evaluation to a 2- or 3-dimensional analysis, we applied a t-distributed stochastic neighbor embedding (t-SNE) approach to obtain deeper insight into and segregation between leukemic cells and normal HSPCs. For that purpose, we created a t-SNE map, which resulted in the visualization of 27 cell clusters based on their similarity concerning the composition and intensity of antigen expression. Two of these clusters were “leukemia-related”, containing a great proportion of CD34+/CD38− hematopoietic stem cells (HSCs) or CD34+ cells with a strong co-expression of CD45RA/CD123, respectively. CD34+ cells within the latter cluster were also highly positive for PD-L1, reflecting their immunosuppressive capacity. Beyond this proof-of-principle study, the inclusion of additional markers will be helpful to refine the differentiation between normal HSPCs and leukemic cells, particularly in the context of minimal residual disease detection and antigen-targeted therapeutic interventions. Furthermore, we suggest a protocol for the assignment of new cell ensembles in quantitative terms, via a numerical value, the Pearson coefficient, based on a similarity comparison of the t-SNE pattern with a reference.
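The embedding step can be sketched as below: project multi-marker event intensities into 2-D with t-SNE, then quantify the similarity of a new sample's pattern to a reference. The density-binning comparison is one plausible reading of the Pearson-based protocol, and all event values are placeholders.

```python
import numpy as np
from sklearn.manifold import TSNE

events = np.random.rand(5000, 5)   # 5000 cells x 5 marker intensities (placeholder)
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(events)

# One way to compare a new sample to a reference pattern: Pearson correlation
# between binned 2-D densities of the two t-SNE maps (illustrative assumption).
ref_density, _, _ = np.histogram2d(embedding[:, 0], embedding[:, 1], bins=30)
new_density = ref_density + np.random.normal(0, 1, ref_density.shape)
print(np.corrcoef(ref_density.ravel(), new_density.ravel())[0, 1])
```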

12 pages, 2975 KiB  
Article
Validation of Left Atrial Volume Correction for Single Plane Method on Four-Chamber Cine Cardiac MRI
by Hosamadin Assadi, Nicholas Sawh, Ciara Bailey, Gareth Matthews, Rui Li, Ciaran Grafton-Clarke, Zia Mehmood, Bahman Kasmai, Peter P. Swoboda, Andrew J. Swift, Rob J. van der Geest and Pankaj Garg
Tomography 2024, 10(4), 459-470; https://doi.org/10.3390/tomography10040035 - 25 Mar 2024
Viewed by 1356
Abstract
Background: Left atrial (LA) assessment is an important marker of adverse cardiovascular outcomes. Cardiovascular magnetic resonance (CMR) accurately quantifies LA volume and function based on biplane long-axis imaging. We aimed to validate single-plane-derived LA indices against the biplane method to simplify the post-processing of cine CMR. Methods: In this study, 100 patients from Leeds Teaching Hospitals were used as the derivation cohort. Bias correction for the single-plane method was applied and subsequently validated in 79 subjects. Results: There were significant differences between the biplane and single-plane mean LA maximum and minimum volumes and LA ejection fraction (EF) (all p < 0.01). After correcting for biases in the validation cohort, significant correlations were observed for all LA indices (correlation coefficients of 0.89 to 0.98). The area under the curve (AUC) for the single-plane method to predict biplane cutoffs was 0.97 for LA maximum volume ≥ 112 mL, 0.99 for LA minimum volume ≥ 44 mL, 1 for LA stroke volume (SV) ≤ 21 mL, and 1 for LA EF ≤ 46% (all p < 0.001). Conclusions: LA volumetric and functional assessment by the single-plane method has a systematic bias compared to the biplane method. After bias correction, single-plane LA volume and function are comparable to the biplane method.
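The bias-correction workflow can be sketched as below: compute a single-plane volume, learn a linear correction against biplane volumes on the derivation cohort, then apply it to new subjects. The area-length approximation V = 8A²/(3πL) is a common single-plane estimator and is an assumption here, not necessarily the paper's exact formula; all cohort values are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def single_plane_volume(area_cm2: float, length_cm: float) -> float:
    """Area-length approximation for a single long-axis view (assumed formula)."""
    return 8 * area_cm2 ** 2 / (3 * np.pi * length_cm)

rng = np.random.default_rng(0)
biplane = rng.normal(90, 25, 100)                    # derivation-cohort volumes (mL)
single = biplane * 1.12 + rng.normal(5, 6, 100)      # systematically biased estimates

# Learn the linear bias correction, then apply it to single-plane values
corr = LinearRegression().fit(single.reshape(-1, 1), biplane)
corrected = corr.predict(single.reshape(-1, 1))
print(corr.coef_[0], corr.intercept_)
```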

14 pages, 1931 KiB  
Article
Automatic Active Lesion Tracking in Multiple Sclerosis Using Unsupervised Machine Learning
by Jason Uwaeze, Ponnada A. Narayana, Arash Kamali, Vladimir Braverman, Michael A. Jacobs and Alireza Akhbardeh
Diagnostics 2024, 14(6), 632; https://doi.org/10.3390/diagnostics14060632 - 16 Mar 2024
Viewed by 1443
Abstract
Background: Identifying active lesions in magnetic resonance imaging (MRI) is crucial for the diagnosis and treatment planning of multiple sclerosis (MS). Active lesions on MRI are identified following the administration of gadolinium-based contrast agents (GBCAs). However, recent studies have reported that repeated administration of GBCAs results in the accumulation of Gd in tissues. In addition, GBCA administration increases healthcare costs. Thus, reducing or eliminating GBCA administration for active lesion detection is important for improved patient safety and reduced healthcare costs. Current state-of-the-art methods for identifying active lesions in brain MRI without GBCA administration utilize data-intensive deep learning methods. Objective: To implement nonlinear dimensionality reduction (NLDR) methods, locally linear embedding (LLE) and isometric feature mapping (Isomap), which are less data-intensive, for automatically identifying active lesions on brain MRI in MS patients without the administration of contrast agents. Materials and Methods: Fluid-attenuated inversion recovery (FLAIR), T2-weighted, proton density-weighted, and pre- and post-contrast T1-weighted images were included in the multiparametric MRI dataset used in this study. Subtracted pre- and post-contrast T1-weighted images were labeled by experts as active lesions (ground truth). The unsupervised methods, LLE and Isomap, were used to reconstruct the multiparametric brain MR images into a single embedded image. Active lesions were identified on the embedded images and compared with the ground truth lesions. The performance of the NLDR methods was evaluated by calculating the Dice similarity (DS) index between the observed and identified active lesions in the embedded images. Results: LLE and Isomap were applied to 40 MS patients, achieving median DS scores of 0.74 ± 0.1 and 0.78 ± 0.09, respectively, outperforming current state-of-the-art methods. Conclusions: The NLDR methods, Isomap and LLE, are viable options for the identification of active MS lesions on non-contrast images and could potentially be used as a clinical decision tool.
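The NLDR step can be sketched as below: embed voxel-wise multiparametric intensities with Isomap (or LLE) into a single component, threshold the embedded image, and score overlap with ground truth using Dice. The thresholding rule and all arrays are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding, Isomap

voxels = np.random.rand(2000, 4)   # FLAIR, T2, PD, pre-contrast T1 per voxel (placeholder)
embedded = Isomap(n_components=1).fit_transform(voxels).ravel()
# LocallyLinearEmbedding(n_components=1) is used the same way

pred = embedded > np.percentile(embedded, 99)        # candidate active-lesion voxels
truth = np.zeros_like(pred); truth[:100] = True      # placeholder ground truth mask

dice = 2 * (pred & truth).sum() / (pred.sum() + truth.sum())
print(f"Dice = {dice:.2f}")
```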

13 pages, 3128 KiB  
Article
Impact of AI-Based Post-Processing on Image Quality of Non-Contrast Computed Tomography of the Chest and Abdomen
by Marcel A. Drews, Aydin Demircioğlu, Julia Neuhoff, Johannes Haubold, Sebastian Zensen, Marcel K. Opitz, Michael Forsting, Kai Nassenstein and Denise Bos
Diagnostics 2024, 14(6), 612; https://doi.org/10.3390/diagnostics14060612 - 13 Mar 2024
Cited by 1 | Viewed by 1571
Abstract
Non-contrast computed tomography (CT) is commonly used for the evaluation of various pathologies, including pulmonary infections and urolithiasis, but, especially in low-dose protocols, image quality is reduced. To improve this, deep learning-based post-processing approaches are being developed. We therefore aimed to compare the objective and subjective image quality of different reconstruction techniques and a deep learning-based software on non-contrast chest and low-dose abdominal CTs. In this retrospective study, non-contrast chest CTs of patients suspected of COVID-19 pneumonia and low-dose abdominal CTs of patients suspected of urolithiasis were analyzed. All images were reconstructed using filtered back-projection (FBP) and post-processed using an artificial intelligence (AI)-based commercial software (PixelShine (PS)). Additional iterative reconstruction (IR) was performed for the abdominal CTs. Objective and subjective image quality were evaluated. AI-based post-processing led to an overall significant noise reduction independent of the protocol (chest or abdomen) while maintaining image information (max. difference in SNR 2.59 ± 2.9 and CNR 15.92 ± 8.9, p < 0.001). Post-processing of FBP-reconstructed abdominal images was even superior to IR alone (max. difference in SNR 0.76 ± 0.5, p ≤ 0.001). Subjective assessments verified these results, partly suggesting benefits, especially in soft-tissue imaging (p < 0.001). All in all, the deep learning-based denoising, which was non-inferior to IR, offers an opportunity for image quality improvement, especially in institutions using older scanners without IR availability. Further studies are necessary to evaluate potential dose reduction benefits.
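The objective image-quality readout, SNR and CNR from homogeneous regions of interest, can be sketched as follows; the image and ROI coordinates are placeholders.

```python
import numpy as np

img = np.random.normal(60, 12, (512, 512))   # placeholder HU image

tissue = img[100:140, 100:140]               # soft-tissue ROI (placeholder coords)
background = img[300:340, 300:340]           # reference ROI

snr = tissue.mean() / tissue.std()
cnr = abs(tissue.mean() - background.mean()) / background.std()
print(f"SNR={snr:.2f}  CNR={cnr:.2f}")
```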

10 pages, 1275 KiB  
Article
Performance of ECG-Derived Digital Biomarker for Screening Coronary Occlusion in Resuscitated Out-of-Hospital Cardiac Arrest Patients: A Comparative Study between Artificial Intelligence and a Group of Experts
by Min Ji Park, Yoo Jin Choi, Moonki Shim, Youngjin Cho, Jiesuck Park, Jina Choi, Joonghee Kim, Eunkyoung Lee and Seo-Yoon Kim
J. Clin. Med. 2024, 13(5), 1354; https://doi.org/10.3390/jcm13051354 - 27 Feb 2024
Cited by 3 | Viewed by 1229
Abstract
Acute coronary syndrome accounts for a significant portion of the cardiac etiology of out-of-hospital cardiac arrest (OHCA), and immediate coronary angiography has been proposed to improve survival. This study evaluated the effectiveness of an AI algorithm in diagnosing near-total or total occlusion of coronary arteries in OHCA patients who regained spontaneous circulation. Conducted from 1 July 2019 to 30 June 2022 at a tertiary university hospital emergency department, it involved 82 OHCA patients, with 58 qualifying after exclusions. The AI used was the Quantitative ECG (QCG™) system, which provides a STEMI diagnostic score ranging from 0 to 100. The QCG score’s diagnostic performance was compared to assessments by two emergency physicians and three cardiologists. Among the patients, coronary occlusion was identified in 24. The QCG score differed significantly between the occlusion and non-occlusion groups, with the former scoring higher. The QCG biomarker had an area under the curve (AUC) of 0.770, outperforming the expert group’s AUC of 0.676, and demonstrated 70.8% sensitivity and 79.4% specificity. These findings suggest that the AI-based ECG biomarker could predict coronary occlusion in resuscitated OHCA patients and was non-inferior to the consensus of the expert group.
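Turning a continuous score such as the 0–100 STEMI score into a sensitivity/specificity pair requires choosing an operating point on the ROC curve; a sketch using Youden's J is shown below (one common choice, not necessarily the study's; labels and scores are placeholders).

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y = np.random.randint(0, 2, 58)        # occlusion yes/no (placeholder labels)
score = np.random.rand(58) * 100       # QCG-style 0-100 score (placeholder)

fpr, tpr, thr = roc_curve(y, score)
j = np.argmax(tpr - fpr)               # Youden's J: maximize sensitivity + specificity - 1
print(f"AUC={roc_auc_score(y, score):.3f}  cutoff={thr[j]:.1f}  "
      f"sens={tpr[j]:.2f}  spec={1 - fpr[j]:.2f}")
```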
