Search Results (100)

Search Parameters:
Keywords = artificial retina

14 pages, 3646 KB  
Article
Diagnostic Accuracy and Real-Life Advantages of the MONA.health Artificial Intelligence Software in Screening for Diabetic Retinopathy and Maculopathy
by Martina Tomić, Romano Vrabec, Toma Babić, Kristina Kljajić and Tomislav Bulum
Diagnostics 2026, 16(5), 730; https://doi.org/10.3390/diagnostics16050730 - 1 Mar 2026
Viewed by 1193
Abstract
Background/Objectives: We aimed to evaluate the diagnostic accuracy of the MONA.health artificial intelligence (AI) software (Version 1.0.0; MONA.health, Leuven, Belgium) and compare its advantages in screening for diabetic retinopathy (DR) and diabetic macular edema (DME) with standard fundus photography. Methods: This cross-sectional, real-life instrument validation study was conducted at the Vuk Vrhovac University Clinic in Zagreb during routine DR screening and included 296 patients (592 eyes) with diabetes. Following standard fundus photography using a 45° Zeiss VISUCAM NM/FA camera (Carl Zeiss Meditec AG, Jena, Germany), each patient also underwent imaging with an automated portable retinal camera (NFC-600, Crystalvue Ophthalmic Instruments, Taoyuan City, Taiwan). Two retina specialists independently graded images from the standard camera, while images from the NFC-600 were analyzed using the MONA.health AI software. Results: Among the 592 eyes, human grading identified 81 with any DR, including 17 with mild NPDR, 64 with referable DR (moderate/severe NPDR or PDR), and 13 with DME. The MONA.health AI software identified 65 eyes with referable DR and 19 with DME. For MONA DR screening compared to the standard fundus camera, the area under the curve, sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, negative likelihood ratio, kappa agreement, diagnostic odds ratio, and diagnostic effectiveness were 99.74%, 100%, 99.81%, 99.33%, 100%, 528.00, 0.00, 0.99, infinity, and 99.85%, respectively. For MONA DME screening, these metrics were 97.97%, 100%, 98.95%, 85.93%, 100%, 95.67, 0.00, 0.81, infinity, and 99.02%, respectively. The MONA AI screening process required 1 day of training and approximately 5 min for image capture and analysis, compared to 7 days of training and 13 min for image acquisition and grading with the standard method. 
Conclusions: These findings demonstrate that the MONA.health AI software matches the accuracy of standard fundus photography for screening and early detection of referable DR and DME, while offering a faster, simpler, and more user-friendly workflow that significantly reduces the time to obtain screening results. Full article
(This article belongs to the Special Issue Innovative Diagnostic Approaches in Retinal Diseases)
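The battery of screening metrics reported in this abstract (sensitivity, specificity, PPV, NPV, likelihood ratios, kappa, diagnostic odds ratio, and diagnostic effectiveness) all derive from a 2×2 confusion matrix. A minimal sketch, using hypothetical counts chosen for illustration rather than the study's exact table:

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy metrics from a 2x2 confusion matrix."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                      # sensitivity (recall)
    spec = tn / (tn + fp)                      # specificity
    ppv = tp / (tp + fp)                       # positive predictive value
    npv = tn / (tn + fn)                       # negative predictive value
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
    lr_neg = (1 - sens) / spec
    acc = (tp + tn) / n                        # diagnostic effectiveness
    # Cohen's kappa: observed agreement vs. agreement expected by chance
    pe = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
    kappa = (acc - pe) / (1 - pe)
    dor = lr_pos / lr_neg if lr_neg > 0 else float("inf")
    return {"sensitivity": sens, "specificity": spec, "ppv": ppv, "npv": npv,
            "lr+": lr_pos, "lr-": lr_neg, "accuracy": acc,
            "kappa": kappa, "dor": dor}

# Hypothetical counts for illustration (not the study's exact table):
# all 64 referable-DR eyes flagged, 1 false positive among 528 negatives.
m = screening_metrics(tp=64, fp=1, fn=0, tn=527)
print(round(m["specificity"], 4), round(m["lr+"], 2))  # 0.9981 528.0
```

Note how a sensitivity of 100% (no false negatives) makes the negative likelihood ratio 0 and the diagnostic odds ratio diverge, matching the "infinity" values reported above.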

23 pages, 4501 KB  
Article
Complexity-Driven Adversarial Validation for Corrupted Medical Imaging Data
by Diego Renza, Jorge Brieva and Ernesto Moya-Albor
Information 2026, 17(2), 125; https://doi.org/10.3390/info17020125 - 29 Jan 2026
Viewed by 359
Abstract
Distribution shifts commonly arise in real-world machine learning scenarios in which the fundamental assumption that training and test data are drawn from independent and identically distributed samples is violated. In the case of medical data, such distribution shifts often occur during data acquisition and pose a significant challenge to the robustness and reliability of artificial intelligence systems in clinical practice. Additionally, quantifying these shifts without training a model remains a key open problem. This paper proposes a comprehensive methodological framework for evaluating the impact of such shifts on medical image datasets under artificial transformations that simulate acquisition variations, leveraging the Cumulative Spectral Gradient (CSG) score as a measure of multiclass classification complexity induced by distributional changes. Building on prior work, the proposed approach is meaningfully extended to twelve 2D medical imaging benchmarks from the MedMNIST collection, covering both binary and multiclass tasks, as well as grayscale and RGB modalities. We evaluate the metric by analyzing its robustness to clinically inspired distribution shifts that are systematically simulated through motion blur, additive noise, brightness and contrast variation, and sharpness variation, each applied at three severity levels. This results in a large-scale benchmark that enables a detailed analysis of how dataset characteristics, transformation types, and distortion severity influence distribution shifts. The findings show that while the metric remains generally stable under noise and focus distortions, it is highly sensitive to variations in brightness and contrast. The proposed methodology is also compared against Cleanlab’s widely used Non-IID score on the RetinaMNIST dataset using a pre-trained ResNet-50 model, including both class-wise analysis and correlation assessment between metrics. 
Finally, interpretability is incorporated through class activation map analysis on BloodMNIST and its corrupted variants to support and contextualize the quantitative findings. Full article
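The clinically inspired corruptions described above (motion blur, additive noise, brightness/contrast variation) at graded severities can be simulated in a few lines of NumPy. The kernel widths and scale factors below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def motion_blur(img, severity):
    """Horizontal box-filter blur; kernel width grows with severity (1-3)."""
    k = 2 * severity + 1
    kernel = np.ones(k) / k
    # convolve each row; mode="same" keeps the image shape
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)

def additive_noise(img, severity):
    """Zero-mean Gaussian noise; standard deviation grows with severity."""
    return np.clip(img + rng.normal(0, 0.05 * severity, img.shape), 0, 1)

def brightness_contrast(img, severity):
    """Scale contrast about the image mean, then shift brightness."""
    alpha, beta = 1 + 0.2 * severity, 0.1 * severity
    return np.clip(alpha * (img - img.mean()) + img.mean() + beta, 0, 1)

img = rng.random((8, 8))          # toy grayscale "scan" in [0, 1]
for sev in (1, 2, 3):
    for transform in (motion_blur, additive_noise, brightness_contrast):
        out = transform(img, sev)
        assert out.shape == img.shape
```

A complexity score such as CSG would then be computed on the clean and corrupted copies of each dataset and the difference used to quantify the induced shift.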

20 pages, 2189 KB  
Review
Gravity in the Eye: How ‘Gravitational Ischemia’ in the Retina May Be Released and Resolved Through Rapid Eye Movement (REM), a Component of Gravity Opposition Physiology
by J. Howard Jaster, Joshua Ong and Giulia Ottaviani
Physiologia 2025, 5(4), 55; https://doi.org/10.3390/physiologia5040055 - 12 Dec 2025
Cited by 1 | Viewed by 1001
Abstract
This narrative review of rapid eye movement (REM) focuses on its primary etiology and how it fits into the larger framework of neurophysiology and general physiology. Arterial blood flow in the retina may be sensitive to the full overlying ‘weight’ of its adjacent and contiguous vitreous humor caused by the humoral mass effect in the Earth’s gravitational field. During waking hours of the day, this ‘weight’ is continuously shifted in position due to changing head position and eye movements associated with ordinary environmental observations. This reduces its impact on any one point on the retinal field. However, during sleep, the head may maintain a relatively constant position (often supine), and observational eye movements are minimal, leaving essentially one retinal area exposed at the ‘bottom’ of each eye, relative to gravity. During sleep, REM may provide a mechanism for frequently repositioning the retina with respect to the weight it incurs from its adjacent (overlying) vitreous humor. Our findings were consistent with the intermittent terrestrial nocturnal development of ‘gravitational ischemia’ in the retina, wherein the decreased blood flow is accompanied metabolically by decreased oxygen tension, a critically important metric, with a detrimental influence on nerve-related tissue generally. However, the natural mechanisms for releasing and resolving gravitational ischemia, which likely involve glymphatics and cerebrospinal fluid shifts, as well as REM, may gradually fail in old age. Concurrently associated with old age in some individuals is the deposition of alpha-synuclein and/or tau in the retina, together with similar deposition in the brain, and it is also associated with the development of Parkinson’s disease and/or Alzheimer’s disease, possibly as a maladaptive attempt to release and resolve gravitational ischemia. 
This suggests that a key metabolic parameter of Parkinson’s disease and Alzheimer’s disease may be a lack of oxygen in some neural tissues. There is some evidence that oxygen therapy (hyperbaric oxygen) may be an effective supplemental treatment. Many of the cardinal features of spaceflight-associated neuro-ocular syndrome (SANS) may potentially be explained as features of gravity opposition physiology, which becomes unopposed by gravity during spaceflight. Gravity opposition physiology may, in fact, create significant challenges for humans involved in long-duration space travel (long-term microgravity). Possible solutions may include the use of artificial gravitational fields in space, such as centrifuges. Full article

10 pages, 1718 KB  
Proceeding Paper
Explainability of Diabetic Retinopathy Detection and Classification with Deep Learning Hybrid Architecture: AlterNet-K and ResNet-101
by Lavkush Gupta, Richa Gupta, Parul Agarwal and Suraiya Praveen
Chem. Proc. 2025, 18(1), 141; https://doi.org/10.3390/ecsoc-29-26888 - 13 Nov 2025
Viewed by 513
Abstract
Diabetic retinopathy (DR) is an eye disease that threatens irreversible blindness and is always challenging to detect and diagnose in time. Many invasive ophthalmic procedures exist in medical science for diagnosing the eyes, and all of them require highly skilled medical practitioners with operational knowledge of sensitive organs like the retina and its tiny vessels. Owing to the dearth of retinal specialists, the sensitivity of ocular tissue, and the complexity of retinal therapy, invasive procedures are time-consuming, costly, and slow. Fundus images capture the visual information of the rear part of the retina. As lesions progress across the surface of the retinal tissue, electrical signals can no longer reach the visual cortex, causing the blurry vision or vision loss experienced by patients. Older methods of diagnosing DR lesions and symptoms from retinal fundus images take time, delaying treatment and reducing the chance of success; early diagnosis from fundus images can therefore save the effort and time of both doctors and patients. Artificial intelligence (AI) techniques can learn the tissue structures of the eye’s anatomy and provide an analysis of the disease from retinal fundus images. The process first applies image preprocessing, followed by segmentation and filtering, and then classifies the disease using an AI-based model. The proposed model is trained on a dataset of DR images to predict accurate results, and an Explainable AI (XAI) algorithm is then used to decide whether the model’s diagnosis is correctly classified. The rapid growth and strong outcomes of machine learning and deep learning algorithms are reasons to adopt them to enhance the early diagnosis and treatment of patients. Full article
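The preprocessing-then-segmentation-then-classification pipeline described in the abstract can be sketched minimally. Green-channel extraction is a common fundus preprocessing step (vessels and lesions show the most contrast there); the intensity threshold below is a hypothetical stand-in for segmentation, not the paper's method:

```python
import numpy as np

def preprocess_fundus(rgb):
    """Minimal fundus preprocessing: green-channel extraction
    plus min-max normalization to [0, 1]."""
    green = rgb[..., 1].astype(float)
    lo, hi = green.min(), green.max()
    return (green - lo) / (hi - lo + 1e-8)

def threshold_segment(gray, t=0.5):
    """Crude intensity threshold as a stand-in for lesion segmentation."""
    return (gray > t).astype(np.uint8)

rng = np.random.default_rng(1)
fake_fundus = rng.integers(0, 256, size=(16, 16, 3), dtype=np.uint8)
g = preprocess_fundus(fake_fundus)
mask = threshold_segment(g)
print(g.shape, mask.dtype)  # (16, 16) uint8
```

A classifier (here AlterNet-K or ResNet-101) would then consume the preprocessed image, and an XAI method would highlight the regions driving its prediction.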

19 pages, 2953 KB  
Article
Independent Mutations in the LRP2 Gene Mediating Telescope Eyes and Celestial Eyes in Goldfish
by Rongni Li, Bo Zhang, Yansheng Sun and Jingyi Li
Int. J. Mol. Sci. 2025, 26(21), 10625; https://doi.org/10.3390/ijms262110625 - 31 Oct 2025
Viewed by 826
Abstract
After intensive artificial selection, the development of celestial eyes in goldfish involves the eyeballs protruding and turning upwards. The celestial eye goldfish is thus an excellent model for both evolutionary and human ocular disease studies. Here, two mapping populations of goldfish with segregating eye phenotypes in the offspring were constructed. Through whole-genome sequencing and RNA-seq of eyeball samples, a premature stop codon in exon 38 of the LRP2 gene was identified as the top candidate mutation for the celestial eye in goldfish. Fatty acid metabolism and epidermal cell functions, especially keratocyte-related functions, were inhibited in the eyeballs of celestial eye goldfish, while inflammatory reactions and extracellular matrix secretion were stimulated. These results suggest dysfunction of both the cornea and the retina in the celestial eye goldfish, which could be the result of the truncated LRP2 protein. In addition, the same gene, LRP2, underlies similar phenotypes (celestial eye and telescope eye) in goldfish, yet these phenotypes share no mutations. In conclusion, this study identified the candidate mutation for the celestial eye in goldfish for the first time, and parallel evolution of similar phenotypes at the molecular level under artificial selection was observed. These findings provide insights into the developmental and evolutionary processes of morphological changes in the eyes of goldfish. Full article
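A premature stop codon of the kind reported in exon 38 of LRP2 can be located by scanning a coding sequence in-frame for the three stop triplets. The sequences below are made up for illustration, not goldfish LRP2:

```python
# Scan a coding sequence for a premature (early in-frame) stop codon.
STOP = {"TAA", "TAG", "TGA"}

def first_stop_codon(cds):
    """Return the 0-based codon index of the first in-frame stop, or None."""
    for i in range(0, len(cds) - len(cds) % 3, 3):
        if cds[i:i + 3] in STOP:
            return i // 3
    return None

wild_type = "ATGGCTGCTAAAGGT" + "TAA"   # stop only at the final codon
mutant    = "ATGGCTTAGAAAGGT" + "TAA"   # point change creates an early TAG

assert first_stop_codon(wild_type) == 5
assert first_stop_codon(mutant) == 2
```

A stop appearing well before the annotated end of the open reading frame, as in the mutant here, is what truncates the encoded protein.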

37 pages, 2371 KB  
Review
Visual Neurorestoration: An Expert Review of Current Strategies for Restoring Vision in Humans
by Jonathon Cavaleri, Michelle Lin, Kevin Wu, Zachary Gilbert, Connie Huang, Yu Tung Lo, Vahini Garimella, Jonathan C. Dallas, Robert G. Briggs, Austin J. Borja, Jae Eun Lee, Patrick R. Ng, Kimberly K. Gokoffski and Darrin J. Lee
Brain Sci. 2025, 15(11), 1170; https://doi.org/10.3390/brainsci15111170 - 30 Oct 2025
Viewed by 4878
Abstract
Visual impairment impacts nearly half a billion people globally. Corrective glasses, artificial lens replacement, and medical management have markedly improved the management of diseases inherent to the eye, such as refractive errors, cataracts, and glaucoma. However, therapeutic strategies for retinopathies, optic nerve damage, and distal optic pathways remain limited. The complex optic apparatus comprises multiple neural structures that transmit information from the retina to the diencephalon to the cortex. Over the last few decades, innovations have emerged to address the loss of function at each step of this pathway. Given the retina’s lack of regenerative potential, novel treatment options have focused on replacing lost retinal cell types through cellular replacement with stem cells, restoring lost gene function with genetic engineering, and imparting new light sensation capabilities with optogenetics. Additionally, retinal neuroprosthetics have shown efficacy in restoring functional vision, and neuroprosthetic devices targeting the optic nerve, thalamus, and cortex are in early stages of development. Non-invasive neuromodulation has also shown some promise in modulating the visual cortex. Recently, the first in-human whole-eye transplant was performed. While functional vision was not restored, the feasibility of such a transplant with viable tissue graft at one year was demonstrated. Subsequent studies are now focused on guidance cues for axonal regeneration past the graft site to reach the lateral geniculate nucleus. Although the methods discussed above have shown promise individually, improvements in vision have been modest at best. Achieving the goal of restoration of functional vision will clearly require further development of cellular therapies, genetic engineering, transplantation, and neuromodulation. 
A concerted multidisciplinary effort involving scientists, engineers, ophthalmologists, neurosurgeons, and reconstructive surgeons will be necessary to restore vision for patients with vision loss from these challenging pathologies. In this expert review article, we describe the current literature in visual neurorestoration with respect to cellular therapeutics, genetic therapies, optogenetics, neuroprosthetics, non-invasive neuromodulation, and whole-eye transplant. Full article
(This article belongs to the Special Issue Novel Neuroimaging of Neurological and Psychiatric Disorders)

20 pages, 45835 KB  
Article
Computer Vision-Assisted Spatial Analysis of Mitoses and Vasculature in Lung Cancer
by Anna Timakova, Alexey Fayzullin, Vladislav Ananev, Egor Zemnuhov, Vadim Alfimov, Alexey Baranov, Yulia Smirnova, Vitaly Shatalov, Natalia Konukhova, Evgeny Karpulevich, Peter Timashev and Vladimir Makarov
J. Clin. Med. 2025, 14(21), 7526; https://doi.org/10.3390/jcm14217526 - 23 Oct 2025
Viewed by 769
Abstract
Background/Objectives: Lung cancer is characterized by significant microstructural heterogeneity among different histological types. Artificial intelligence and digital pathology instruments can facilitate morphological analysis by introducing calculated metrics that distinguish different tissue patterns. Methods: We used computer vision models to calculate a number of morphometric features of tumor vascularization and proliferation. We used two frameworks to process whole-slide images: (1) the LVI-PathNet framework for vascular detection, based on the SegFormer architecture; and (2) the Mito-PathNet framework for mitotic figure detection, based on the RetinaNet detector and an ensemble classification model. The results were visualized in segmented and gradient heatmaps. Results: SegFormer for vessel segmentation achieved the following quality metrics: IoU = 0.96, FBeta-score = 0.98, and AUC-ROC = 0.98. The RetinaNet + CNN ensemble achieved the following quality metrics: specificity = 0.96 and sensitivity = 0.97. Analysis of the obtained parameters allowed us to identify trophic patterns of lung cancer (proliferative-vascular, hypoxic, proliferative, vascular, and inactive), graded by degree of aggressiveness, which can serve as potential targets for therapy. Conclusions: The analysis of the obtained parameters allowed us to identify distinct quantitative characteristics for each histological type of lung cancer. These patterns could potentially become markers for therapeutic choices, such as antiangiogenic and hypoxia-inducible factor therapy. Full article
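The IoU score used above to evaluate vessel segmentation is a straightforward overlap ratio between predicted and reference masks; a dependency-free sketch:

```python
def iou(mask_a, mask_b):
    """Intersection over Union of two binary segmentation masks
    (nested lists of 0/1 values), as used to score vessel segmentation."""
    inter = sum(a & b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    union = sum(a | b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    return inter / union if union else 1.0  # two empty masks agree perfectly

pred  = [[1, 1, 0], [0, 1, 0]]
truth = [[1, 1, 0], [0, 0, 1]]
print(iou(pred, truth))  # 2 overlapping cells out of 4 marked -> 0.5
```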

15 pages, 457 KB  
Review
Use of AI Histopathology in Breast Cancer Diagnosis
by Valentin Ivanov, Usman Khalid, Jasmin Gurung, Rosen Dimov, Veselin Chonov, Petar Uchikov, Gancho Kostov and Stefan Ivanov
Medicina 2025, 61(10), 1878; https://doi.org/10.3390/medicina61101878 - 20 Oct 2025
Cited by 4 | Viewed by 2329
Abstract
Background and Objectives: Breast cancer (BC) is a global health concern for women; the disease contributes to significant morbidity and mortality. A key element in the diagnosis of BC is the histopathological diagnosis, which determines patient management and therapy. However, BC is a multifaceted disease, limiting access to early diagnosis and, therefore, treatment. Artificial intelligence (AI) is transforming diagnostics in the medical field, especially in the detection of BC, and the increased availability of digital slides has facilitated its effective integration into breast cancer diagnosis. Diagnosis poses a great challenge, even for experienced pathologists, due to the heterogeneity of this malignancy, and analysing microscopic slides requires a considerable amount of a pathologist’s time. Implementation of AI into routine workflows holds potential to improve diagnostic sensitivity and inter-observer concordance, and to increase efficiency by reducing review time, thereby helping to alleviate the burden of diagnosing BC. Previous studies mainly address imaging modalities or oncology broadly, while few specifically concentrate on the histopathological aspect of breast cancer. This review aims to explore the novel synthesis of AI advancements in digital pathology, including tumour classification, grading, lymph node staging, and biomarker evaluation, and to discuss their potential incorporation into clinical workflows. We also discuss the current barriers and prospects for future advancements. Materials and Methods: A literature search was conducted in PubMed and Google Scholar using the mentioned keywords. Articles published in English until July 2025 were reviewed and synthesised narratively. Results: Recent studies demonstrate that AI models such as convolutional neural networks (CNNs), YOLO, and RetinaNet achieve high accuracy in tumour detection, histological grading, lymph node metastasis localisation, and biomarker analysis. 
The reported performance values range from 75% to over 95% accuracy across various tasks, with gains in diagnostic sensitivity and inter-observer concordance, and reduced review time in assisted workflows. However, certain limitations, such as data variability, external validation in clinical practice, and ethical concerns, restrict the growth and optimal performance of AI and its clinical applicability. Conclusions: The future for AI looks promising, as it is rapidly evolving. By analysing evidence across multiple domains, this review evaluates both opportunities and persisting barriers, offering practical overviews for future clinical transition. AI cannot replace pathologists; however, it has the capabilities to enhance diagnostic precision, efficiency, and ultimately patient outcomes. It is only a matter of time before AI is adopted into healthcare. Full article
(This article belongs to the Section Oncology)

13 pages, 2033 KB  
Article
Deep Learning-Based Segmentation of Geographic Atrophy: A Multi-Center, Multi-Device Validation in a Real-World Clinical Cohort
by Hasenin Al-khersan, Simrat K. Sodhi, Jessica A. Cao, Stanley M. Saju, Niveditha Pattathil, Avery W. Zhou, Netan Choudhry, Daniel B. Russakoff, Jonathan D. Oakley, David Boyer and Charles C. Wykoff
Diagnostics 2025, 15(20), 2580; https://doi.org/10.3390/diagnostics15202580 - 13 Oct 2025
Cited by 1 | Viewed by 1368
Abstract
Background: To report a deep learning-based algorithm for automated segmentation of geographic atrophy (GA) among patients with age-related macular degeneration (AMD). Methods: Validation of a deep learning algorithm was performed using optical coherence tomography (OCT) images from patients in routine clinical care diagnosed with GA, with and without concurrent nAMD. For model construction, a 3D U-Net architecture was used with the output modified to generate a 2D mask. Accuracy of the model was assessed relative to the manual labeling of GA with the Dice similarity coefficient (DSC) and correlation r2 scores. Results: The OCT data set included 367 scans from the Spectralis (Heidelberg, Germany) from 55 eyes in 33 subjects; 267 (73%) scans had concurrent nAMD. In parallel, 348 scans were collected using the Cirrus (Zeiss), from 348 eyes in 326 subjects; 101 (29%) scans had concurrent nAMD. For Spectralis data, the mean DSC score was 0.83 and r2 was 0.91. For Cirrus data, the mean DSC score was 0.82 and r2 was 0.88. Conclusions: The reported deep learning algorithm demonstrated strong agreement with manual grading of GA secondary to AMD on the OCT data set from routine clinical practice. The model performed well across two OCT devices as well as amongst patients with GA with concurrent nAMD, suggesting applicability in the clinical space. Full article
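The Dice similarity coefficient (DSC) used above to assess agreement with manual GA labels can be computed directly from two binary masks; a minimal NumPy sketch:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 1, 1], [0, 1, 0]])
print(dice(pred, truth))  # 2*3 / (3 + 4) ≈ 0.857
```

DSC weights the overlap twice relative to the mask sizes, so it is always at least as large as IoU on the same pair of masks (DSC = 2·IoU / (1 + IoU)).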

25 pages, 7213 KB  
Article
Interpreting Deep Neural Networks in Diabetic Retinopathy Grading: A Comparison with Human Decision Criteria
by Sangeeta Biswas, Md. Ahanaf Arif Khan, Md. Hasnain Ali, Johan Rohdin, Subrata Pramanik, Md. Iqbal Aziz Khan, Sanjoy Kumar Chakravarty and Bimal Kumar Pramanik
Life 2025, 15(9), 1473; https://doi.org/10.3390/life15091473 - 19 Sep 2025
Viewed by 1516
Abstract
Diabetic retinopathy (DR) causes visual impairment and blindness in millions of diabetic patients globally. Fundus image-based Automatic Diabetic Retinopathy Classifiers (ADRCs) can ensure regular retina checkups for many diabetic patients and reduce the burden on the limited number of retina experts by referring only those patients who require their attention. Over the last decade, numerous deep neural network-based algorithms have been proposed for ADRCs to distinguish the severity levels of DR. However, it has not been investigated whether DNN-based ADRCs consider the same criteria as human retina professionals (HRPs), i.e., whether they follow the same grading scale when making decisions about the severity level of DR, which may put the reliability of ADRCs into question. In this study, we investigated this issue by experimenting on publicly available datasets using MobileNet-based ADRCs and analyzing the output of the ADRCs using two eXplainable artificial intelligence (XAI) techniques named Gradient-weighted Class Activation Map (Grad-CAM) and Integrated Gradients (IG). Full article
(This article belongs to the Special Issue Retinal Diseases: From Molecular Mechanisms to Therapeutics)
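Integrated Gradients, one of the two XAI techniques applied in this study, attributes a model's output to its inputs by integrating the gradient along a straight path from a baseline to the input. A toy NumPy sketch on a hand-written scoring function (not an actual ADRC model) illustrates the completeness property, where attributions sum to f(x) − f(baseline):

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=100):
    """Riemann-sum approximation of Integrated Gradients:
    (x - baseline) * ∫₀¹ ∇f(baseline + a(x - baseline)) da."""
    alphas = (np.arange(steps) + 0.5) / steps        # midpoint rule
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy "classifier score": f(x) = 3*x0 + x1^2 ; its gradient is (3, 2*x1).
f = lambda x: 3 * x[0] + x[1] ** 2
grad_f = lambda x: np.array([3.0, 2 * x[1]])

x, baseline = np.array([1.0, 2.0]), np.zeros(2)
attr = integrated_gradients(grad_f, x, baseline)
print(attr)   # ≈ [3.0, 4.0]; sums to f(x) - f(baseline) = 7
```

In a real ADRC, the gradient would come from automatic differentiation of the network with respect to the fundus image pixels, producing a per-pixel attribution map analogous to a Grad-CAM heatmap.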

34 pages, 945 KB  
Review
Artificial Intelligence in Ocular Transcriptomics: Applications of Unsupervised and Supervised Learning
by Catherine Lalman, Yimin Yang and Janice L. Walker
Cells 2025, 14(17), 1315; https://doi.org/10.3390/cells14171315 - 26 Aug 2025
Cited by 3 | Viewed by 2870
Abstract
Transcriptomic profiling is a powerful tool for dissecting the cellular and molecular complexity of ocular tissues, providing insights into retinal development, corneal disease, macular degeneration, and glaucoma. With the expansion of microarray, bulk RNA sequencing (RNA-seq), and single-cell RNA-seq technologies, artificial intelligence (AI) has emerged as a key strategy for analyzing high-dimensional gene expression data. This review synthesizes AI-enabled transcriptomic studies in ophthalmology from 2019 to 2025, highlighting how supervised and unsupervised machine learning (ML) methods have advanced biomarker discovery, cell type classification, and eye development and ocular disease modeling. Here, we discuss unsupervised techniques, such as principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE), uniform manifold approximation and projection (UMAP), and weighted gene co-expression network analysis (WGCNA), now the standard in single-cell workflows. Supervised approaches are also discussed, including the least absolute shrinkage and selection operator (LASSO), support vector machines (SVMs), and random forests (RFs), and their utility in identifying diagnostic and prognostic markers in age-related macular degeneration (AMD), diabetic retinopathy (DR), glaucoma, keratoconus, thyroid eye disease, and posterior capsule opacification (PCO), as well as deep learning frameworks, such as variational autoencoders and neural networks that support multi-omics integration. Despite challenges in interpretability and standardization, explainable AI and multimodal approaches offer promising avenues for advancing precision ophthalmology. Full article
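PCA, the first of the unsupervised techniques surveyed, projects a cells × genes expression matrix onto a few principal components. A self-contained NumPy sketch (via SVD of the mean-centered matrix) on synthetic data with two simulated cell populations:

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via SVD of the mean-centered matrix (rows = cells, cols = genes);
    returns the projection of each cell onto the top principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(42)
# Toy "expression matrix": 60 cells x 20 genes; population A (first 30
# cells) overexpresses genes 0-4, simulating a distinct cell type.
X = rng.normal(size=(60, 20))
X[:30, :5] += 4.0
scores = pca(X, n_components=2)
print(scores.shape)  # (60, 2); PC1 separates the two populations
```

In practice, single-cell workflows apply PCA first and then run t-SNE or UMAP on the leading components for visualization and clustering.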

26 pages, 30652 KB  
Article
Hybrid ViT-RetinaNet with Explainable Ensemble Learning for Fine-Grained Vehicle Damage Classification
by Ananya Saha, Mahir Afser Pavel, Md Fahim Shahoriar Titu, Afifa Zain Apurba and Riasat Khan
Vehicles 2025, 7(3), 89; https://doi.org/10.3390/vehicles7030089 - 25 Aug 2025
Cited by 1 | Viewed by 1965
Abstract
Efficient and explainable vehicle damage inspection is essential due to the increasing complexity and volume of vehicular incidents. Traditional manual inspection approaches are not time-effective, prone to human error, and lead to inefficiencies in insurance claims and repair workflows. Existing deep learning methods, such as CNNs, often struggle with generalization, require large annotated datasets, and lack interpretability. This study presents a robust and interpretable deep learning framework for vehicle damage classification, integrating Vision Transformers (ViTs) and ensemble detection strategies. The proposed architecture employs a RetinaNet backbone with a ViT-enhanced detection head, implemented in PyTorch using the Detectron2 object detection technique. It is pretrained on COCO weights and fine-tuned through focal loss and aggressive augmentation techniques to improve generalization under real-world damage variability. The proposed system applies the Weighted Box Fusion (WBF) ensemble strategy to refine detection outputs from multiple models, offering improved spatial precision. To ensure interpretability and transparency, we adopt numerous explainability techniques—Grad-CAM, Grad-CAM++, and SHAP—offering semantic and visual insights into model decisions. A custom vehicle damage dataset with 4500 images has been built, consisting of approximately 60% curated images collected through targeted web scraping and crawling covering various damage types (such as bumper dents, panel scratches, and frontal impacts), along with 40% COCO dataset images to support model generalization. Comparative evaluations show that Hybrid ViT-RetinaNet achieves superior performance with an F1-score of 84.6%, mAP of 87.2%, and 22 FPS inference speed. In an ablation analysis, WBF, augmentation, transfer learning, and focal loss significantly improve performance, with focal loss increasing F1 by 6.3% for underrepresented classes and COCO pretraining boosting mAP by 8.7%. 
Additional architectural comparisons demonstrate that our full hybrid configuration not only maintains competitive accuracy but also achieves up to 150 FPS, making it well suited for real-time use cases. Robustness tests under challenging conditions, including real-world visual disturbances (smoke, fire, motion blur, varying lighting, and occlusions) and artificial noise (Gaussian and salt-and-pepper), confirm the model's generalization ability. This work contributes a scalable, explainable, and high-performance solution for real-world vehicle damage diagnostics.
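The focal loss used for fine-tuning down-weights well-classified examples so training concentrates on hard ones, which is why it helps underrepresented damage classes. A minimal sketch of the binary form (the alpha=0.25, gamma=2 defaults come from the original RetinaNet work and are an assumption here; the abstract does not state the values used):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for a single prediction.
    p: predicted probability of the positive class, y: 0/1 label."""
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    a_t = alpha if y == 1 else 1.0 - alpha  # class-balancing weight
    # The (1 - p_t)^gamma factor shrinks the loss of confident, correct
    # predictions far more than that of misclassified ones.
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

easy = focal_loss(0.95, 1)  # confident correct prediction: tiny loss
hard = focal_loss(0.30, 1)  # poorly classified positive: much larger loss
```

With gamma=0 and alpha=0.5 this reduces (up to a constant factor) to ordinary cross-entropy, which makes the down-weighting effect easy to verify in isolation.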
22 pages, 1329 KB  
Review
Visual Field Examinations for Retinal Diseases: A Narrative Review
by Ko Eun Kim and Seong Joon Ahn
J. Clin. Med. 2025, 14(15), 5266; https://doi.org/10.3390/jcm14155266 - 25 Jul 2025
Viewed by 5524
Abstract
Visual field (VF) testing remains a cornerstone in assessing retinal function by measuring how well different parts of the retina detect light. It is essential for early detection, monitoring, and management of many retinal diseases. By mapping retinal sensitivity, VF exams can reveal functional loss before structural changes become visible. This review summarizes how VF testing is applied across key conditions: hydroxychloroquine (HCQ) retinopathy, age-related macular degeneration (AMD), diabetic retinopathy (DR) and diabetic macular edema (DME), and inherited retinal dystrophies such as retinitis pigmentosa (RP). Traditional methods like Goldmann kinetic perimetry and simple tools such as the Amsler grid help identify large or central VF defects. Automated perimetry (e.g., the Humphrey Field Analyzer) provides detailed, quantitative data critical for detecting subtle paracentral scotomas in HCQ retinopathy and central vision loss in AMD. Frequency-doubling technology (FDT) reveals early neural deficits in DR before blood vessel changes appear. Microperimetry offers precise, localized sensitivity maps for macular diseases. Despite its value, VF testing faces challenges including patient fatigue, variability in responses, and interpretation of unreliable results. Recent advances in artificial intelligence, virtual reality perimetry, and home-based perimetry systems are improving test accuracy, accessibility, and patient engagement. Integrating VF exams with these emerging technologies promises more personalized care, earlier intervention, and better long-term outcomes for patients with retinal disease.
(This article belongs to the Special Issue New Advances in Retinal Diseases)
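The pointwise logic behind sensitivity mapping can be illustrated with a toy deviation map: measured sensitivities are compared against normative values at each test location, and deep negative deviations flag scotomas. Every number below, including the -10 dB cutoff, is an illustrative assumption, not a value from any normative database:

```python
# Toy total-deviation map: measured sensitivities (dB) minus normative
# values at the same (x, y) test points; strongly negative values flag
# possible scotomas, e.g. paracentral defects in HCQ retinopathy.
normative = {(3, 3): 32.0, (-3, 3): 31.0, (3, -3): 33.0, (-3, -3): 32.0}
measured  = {(3, 3): 30.0, (-3, 3): 12.0, (3, -3): 32.0, (-3, -3): 31.0}

deviation = {pt: measured[pt] - normative[pt] for pt in normative}
scotoma_points = [pt for pt, d in deviation.items() if d <= -10.0]
# Only (-3, 3) crosses the cutoff, mimicking a deep paracentral defect.
```

Real perimeters additionally correct for age and overall sensitivity (pattern deviation), but the per-point comparison above is the core of the map.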
26 pages, 5939 KB  
Article
Multi-Resolution UAV Remote Sensing for Anthropogenic Debris Detection in Complex River Environments
by Peaceibisia Jack, Trent Biggs, Daniel Sousa, Lloyd Coulter, Sarah Hutmacher and Hilary McMillan
Remote Sens. 2025, 17(13), 2172; https://doi.org/10.3390/rs17132172 - 25 Jun 2025
Cited by 1 | Viewed by 1726
Abstract
Anthropogenic debris in urban floodplains poses significant environmental and ecological risks, with an estimated 4 to 12 million metric tons entering oceans annually via riverine transport. While remote sensing and artificial intelligence (AI) offer promising tools for automated debris detection, most existing datasets focus on marine environments with homogeneous backgrounds, leaving a critical gap for complex terrestrial floodplains. This study introduces the San Diego River Debris Dataset, a multi-resolution UAV imagery collection with ground reference data designed to support automated detection of anthropogenic debris in urban floodplains. The dataset includes manually annotated debris objects captured under diverse environmental conditions using two UAV platforms (DJI Matrice 300 and DJI Mini 2) across spatial resolutions ranging from 0.4 to 4.4 cm. We benchmarked five deep learning architectures (RetinaNet, SSD, Faster R-CNN, DetReg, Cascade R-CNN) to assess detection accuracy across varying image resolutions and environmental settings. Cascade R-CNN achieved the highest accuracy (0.93) at 0.4 cm resolution, with accuracy declining rapidly at resolutions above 1 cm and 3.3 cm. Spatial analysis revealed that 51% of debris was concentrated within unsheltered encampments, which occupied only 2.6% of the study area. Validation confirmed a strong correlation between predicted debris extent and field measurements, supporting the dataset's operational reliability. This openly available dataset fills a gap in environmental monitoring resources and provides guidance for future research and deployment of UAV-based debris detection systems in urban floodplain areas.
(This article belongs to the Section AI Remote Sensing)
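Detection accuracy in benchmarks like this one is conventionally scored by intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal sketch (the 0.5 threshold is the common convention, not a value stated in the abstract):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detection counts as a true positive when its IoU with a
# ground-truth box meets the threshold:
pred = (10, 10, 50, 50)
gt = (12, 12, 48, 52)
hit = iou(pred, gt) >= 0.5
```

Metrics such as mAP aggregate these per-box matches across confidence thresholds; coarser imagery shrinks small debris objects toward single pixels, which is one intuition for why accuracy falls off at lower resolutions.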
37 pages, 3931 KB  
Review
Retinal Imaging as a Window into Cardiovascular Health: Towards Harnessing Retinal Analytics for Precision Cardiovascular Medicine
by Jay Bharatsingh Bisen, Hayden Sikora, Anushree Aneja, Sanjiv J. Shah and Rukhsana G. Mirza
J. Cardiovasc. Dev. Dis. 2025, 12(6), 230; https://doi.org/10.3390/jcdd12060230 - 17 Jun 2025
Cited by 2 | Viewed by 6140
Abstract
Rising morbidity and mortality from cardiovascular disease (CVD) have increased interest in precision and preventive management to reduce long-term sequelae. While retinal imaging has traditionally been recognized for identifying vascular changes in systemic conditions such as hypertension and type 2 diabetes mellitus, a new ophthalmologic field, cardiac-oculomics, has linked retinal biomarker changes to other cardiovascular diseases with retinal manifestations. Several imaging modalities visualize the retina, including color fundus photography (CFP), optical coherence tomography (OCT), and OCT angiography (OCTA), which visualize the retinal surface, the individual retinal layers, and the microvasculature within those layers, respectively. In these modalities, imaging-derived biomarkers can arise from CVD and have been linked to the presence, progression, or risk of developing a range of CVDs, including hypertension, carotid artery disease, valvular heart disease, cerebral infarction, atrial fibrillation, and heart failure. Promising artificial intelligence (AI) models have been developed to complement existing risk-prediction tools, but standardization and clinical trials are needed for clinical adoption. Beyond risk estimation, there is growing interest in assessing real-time cardiovascular status to track vascular changes following pharmacotherapy, surgery, or acute decompensation. This review offers an up-to-date assessment of the cardiac-oculomics literature and aims to raise awareness among cardiologists and encourage interdepartmental collaboration.
(This article belongs to the Section Imaging)