Search Results (5,121)

Search Parameters:
Keywords = optical features

34 pages, 3959 KB  
Article
Multimodal Video Summarization Using Machine Learning: A Comprehensive Benchmark of Feature Selection and Classifier Performance
by Elmin Marevac, Esad Kadušić, Nataša Živić, Nevzudin Buzađija, Edin Tabak and Safet Velić
Algorithms 2025, 18(9), 572; https://doi.org/10.3390/a18090572 - 10 Sep 2025
Abstract
The exponential growth of user-generated video content necessitates efficient summarization systems for improved accessibility, retrieval, and analysis. This study presents and benchmarks a multimodal video summarization framework that classifies segments as informative or non-informative using audio, visual, and fused features. Sixty hours of annotated video across ten diverse categories were analyzed. Audio features were extracted with pyAudioAnalysis, while visual features (colour histograms, optical flow, object detection, facial recognition) were derived using OpenCV. Six supervised classifiers—Naive Bayes, K-Nearest Neighbors, Logistic Regression, Decision Tree, Random Forest, and XGBoost—were evaluated, with hyperparameters optimized via grid search. Temporal coherence was enhanced using median filtering. Random Forest achieved the best performance, with 74% AUC on fused features and a 3% F1-score gain after post-processing. Spectral flux, grayscale histograms, and optical flow emerged as key discriminative features. The best model was deployed as a practical web service using TensorFlow and Flask, integrating informative segment detection with subtitle generation via beam search to ensure coherence and coverage. System-level evaluation demonstrated low latency and efficient resource utilization under load. Overall, the results confirm the strength of multimodal fusion and ensemble learning for video summarization and highlight their potential for real-world applications in surveillance, digital archiving, and online education.
(This article belongs to the Special Issue Visual Attributes in Computer Vision Applications)
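
The benchmark stage of this pipeline maps onto standard Python tooling. Below is a minimal sketch of the classifier comparison and temporal post-processing, assuming the fused audio/visual features have already been extracted into a matrix (the placeholder data stands in for the pyAudioAnalysis/OpenCV features the authors describe, and the hyperparameter grid is illustrative):

```python
import numpy as np
from scipy.ndimage import median_filter
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Placeholder for the fused audio+visual segment features and labels
# (informative = 1, non-informative = 0), kept in temporal order.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))
y = rng.integers(0, 2, size=1000)

# shuffle=False keeps the held-out segments temporally ordered.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

# Grid search over Random Forest hyperparameters (illustrative grid).
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
    scoring="roc_auc",
    cv=5,
)
grid.fit(X_tr, y_tr)

# Median filtering over consecutive segment predictions enforces
# temporal coherence, the post-processing step reported to add ~3% F1.
smoothed = median_filter(grid.predict(X_te), size=5)
```
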
22 pages, 5732 KB  
Article
Explainable Transformer-Based Framework for Glaucoma Detection from Fundus Images Using Multi-Backbone Segmentation and vCDR-Based Classification
by Hind Alasmari, Ghada Amoudi and Hanan Alghamdi
Diagnostics 2025, 15(18), 2301; https://doi.org/10.3390/diagnostics15182301 - 10 Sep 2025
Abstract
Glaucoma is an eye disease caused by increased intraocular pressure (IOP) that affects the optic nerve head (ONH), leading to vision problems and irreversible blindness. Background/Objectives: Glaucoma is the second leading cause of blindness worldwide, and the number of people affected increases each year, with the total expected to reach 111.8 million by 2040. This escalating trend is alarming given the shortage of ophthalmology specialists relative to the population. This study proposes an explainable end-to-end pipeline for automated glaucoma diagnosis from fundus images and evaluates the performance of Vision Transformers (ViTs) relative to traditional CNN-based models. Methods: The proposed system uses three datasets: REFUGE, ORIGA, and G1020. It begins with YOLOv11 for detection of the optic disc. The optic disc (OD) and optic cup (OC) are then segmented using U-Net with ResNet50, VGG16, and MobileNetV2 backbones, as well as MaskFormer with a Swin-Base backbone. Glaucoma is classified based on the vertical cup-to-disc ratio (vCDR). Results: MaskFormer outperforms all other models in segmentation across all metrics (IoU OD, IoU OC, DSC OD, and DSC OC), with scores of 88.29%, 91.09%, 93.83%, and 93.71%, respectively. For classification, it achieved an accuracy of 84.03% and an F1-score of 84.56%. Conclusions: By relying on the interpretable vCDR feature, the proposed framework enhances transparency and aligns with the principles of explainable AI, offering a trustworthy solution for glaucoma screening. Our findings show that Vision Transformers offer a promising approach to achieving high segmentation performance with explainable, biomarker-driven diagnosis.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
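
The vCDR-based decision step is straightforward once segmentation masks exist. A minimal sketch, assuming binary optic-disc and optic-cup masks and a hypothetical screening threshold (the paper's exact cut-off is not stated in the abstract):

```python
import numpy as np

def vertical_diameter(mask: np.ndarray) -> int:
    """Vertical extent (in pixels) of a binary mask."""
    rows = np.where(mask.any(axis=1))[0]
    return int(rows[-1] - rows[0] + 1) if rows.size else 0

def vcdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary segmentation masks."""
    disc = vertical_diameter(disc_mask)
    return vertical_diameter(cup_mask) / disc if disc else 0.0

# Hypothetical threshold: a vCDR above ~0.6 is commonly treated as suspicious.
def classify(disc_mask, cup_mask, threshold=0.6):
    return "glaucoma suspect" if vcdr(disc_mask, cup_mask) > threshold else "normal"
```
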

24 pages, 7007 KB  
Article
M4MLF-YOLO: A Lightweight Semantic Segmentation Framework for Spacecraft Component Recognition
by Wenxin Yi, Zhang Zhang and Liang Chang
Remote Sens. 2025, 17(18), 3144; https://doi.org/10.3390/rs17183144 - 10 Sep 2025
Abstract
With the continuous advancement of on-orbit services and space intelligence sensing technologies, the efficient and accurate identification of spacecraft components has become increasingly critical. However, complex lighting conditions, background interference, and limited onboard computing resources present significant challenges to existing segmentation algorithms. To address these challenges, this paper proposes a lightweight spacecraft component segmentation framework for on-orbit applications, termed M4MLF-YOLO. Based on the YOLOv5 architecture, we propose a refined lightweight design strategy that aims to balance segmentation accuracy and resource consumption in satellite-based scenarios. MobileNetV4 is adopted as the backbone network to minimize computational overhead. Additionally, a Multi-Scale Fourier Adaptive Calibration Module (MFAC) is designed to enhance multi-scale feature modeling and boundary discrimination capabilities in the frequency domain. We also introduce a Linear Deformable Convolution (LDConv) to explicitly control the spatial sampling span and distribution of the convolution kernel, thereby linearly adjusting the receptive field coverage range to improve feature extraction capabilities while effectively reducing computational costs. Furthermore, the efficient C3-Faster module is integrated to enhance channel interaction and feature fusion efficiency. A high-quality spacecraft image dataset, comprising both real and synthetic images, was constructed, covering various backgrounds and component types, including solar panels, antennas, payload instruments, thrusters, and optical payloads. Environment-aware preprocessing and enhancement strategies were applied to improve model robustness. Experimental results demonstrate that M4MLF-YOLO achieves excellent segmentation performance while maintaining low model complexity, with precision reaching 95.1% and recall reaching 88.3%, representing improvements of 1.9% and 3.9% over YOLOv5s, respectively. The mAP@0.5 also reached 93.4%. In terms of lightweight design, the model parameter count and computational complexity were reduced by 36.5% and 24.6%, respectively. These results validate that the proposed method significantly enhances deployment efficiency while preserving segmentation accuracy, showcasing promising potential for satellite-based visual perception applications.
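
The abstract does not specify how the MFAC module calibrates features in the frequency domain, but the generic pattern (FFT, learnable re-weighting, inverse FFT) can be sketched. Everything below, including the per-channel gain, is an illustrative assumption rather than the authors' design:

```python
import torch
import torch.nn as nn

class FourierCalibration(nn.Module):
    """Illustrative frequency-domain re-weighting; the paper's MFAC is
    more elaborate and its details are assumptions here."""
    def __init__(self, channels: int):
        super().__init__()
        # One learnable gain per channel, applied to the full spectrum.
        self.gain = nn.Parameter(torch.ones(channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        spec = torch.fft.rfft2(x, norm="ortho")           # complex spectrum
        spec = spec * self.gain                           # per-channel calibration
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")

out = FourierCalibration(16)(torch.randn(2, 16, 32, 32))  # shape preserved
```
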

11 pages, 1178 KB  
Article
Optical Coherence Tomography- and Optical Coherence Tomography Angiography-Based Evaluation in Treatment-Naïve Non-Exudative Macular Neovascularization
by Geun Young Moon, Jong Seok Park and Ki Woong Bae
J. Clin. Med. 2025, 14(18), 6375; https://doi.org/10.3390/jcm14186375 - 10 Sep 2025
Abstract
Background/Objectives: We evaluated the clinical features and natural course of treatment-naïve non-exudative macular neovascularization (NE MNV) associated with age-related macular degeneration in Korean patients. Methods: This retrospective longitudinal study of 21 eyes of 21 patients with NE MNV involved a chart review of best corrected visual acuity (BCVA), optical coherence tomography (OCT), and OCT angiography parameters. Results: This study included 13 men (13/21, 61.9%) and 8 women (8/21, 38.1%), with a mean age of 71.5 ± 9.1 years. The average follow-up period was 15.1 ± 11.8 (range 6.0–49.6) months, and 14 eyes (66.7%) demonstrated exudative changes on OCT scans. The baseline BCVA was 0.15 ± 0.18 logMAR. The initial central macular thickness (CMT), subfoveal choroidal thickness, and outer retinal layer thickness were 265.3 ± 37.1, 245.2 ± 95.2, and 86.6 ± 5.3 μm, respectively. Cox proportional hazards analysis revealed that older age (hazard ratio [HR]: 1.096, 95% confidence interval [CI]: 1.002–1.200; p = 0.045), larger baseline CMT (HR: 1.025, 95% CI: 1.002–1.049; p = 0.035), and larger baseline MNV (HR: 1.618, 95% CI: 1.035–2.529; p = 0.035) were significant risk factors for exudative changes. Conclusions: We observed the clinical features and natural course of NE MNV in Korean patients and identified old age, initially thick CMT, and larger MNV size at baseline as significant risk factors for exudative changes. For eyes with NE MNV that carry risk factors for exudative conversion, more frequent observation is recommended to ensure appropriate management.
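
The risk-factor analysis is a standard Cox proportional hazards model. A minimal sketch with the lifelines library, using a hypothetical table whose columns mirror the reported covariates (follow-up duration, exudative-conversion flag, age, baseline CMT, and baseline MNV size):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 60
# Hypothetical chart-review table; the columns mirror the reported covariates.
df = pd.DataFrame({
    "age":       rng.normal(71.5, 9.1, n).round(1),
    "cmt_um":    rng.normal(265, 37, n).round(1),
    "mnv_mm2":   rng.gamma(2.0, 0.6, n).round(2),
    "months":    rng.uniform(6, 50, n).round(1),
    "exudative": rng.integers(0, 2, n),     # 1 = exudative change observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="exudative")
cph.print_summary()   # hazard ratios = exp(coef) with 95% CIs, as reported
```
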

15 pages, 6498 KB  
Article
A Ring-Core Anti-Resonant Photonic Crystal Fiber Supporting 90 Orbital Angular Momentum Modes
by Huimin Shi, Linghong Jiang, Chao Wang, Junjun Wu, Limian Ren and Pan Wang
Photonics 2025, 12(9), 906; https://doi.org/10.3390/photonics12090906 - 10 Sep 2025
Abstract
To address the issues of limited orbital angular momentum (OAM) mode count, poor transmission quality, and complex cladding structures in ring-core photonic crystal fibers, a novel OAM-supporting ring-core anti-resonant photonic crystal fiber is designed. This fiber features a high-index-doped ring core surrounded by a three-layer anti-resonant nested-tube cladding. Numerical simulations based on the finite element method indicate that the designed fiber can reliably transmit up to 90 OAM modes within the wavelength range of 1530–1620 nm. The fiber also demonstrates outstanding performance, achieving a peak effective refractive index difference of 0.0041 while maintaining remarkably low confinement loss between 10⁻¹² dB/m and 10⁻⁸ dB/m. The minimum effective mode field area is 101.41 μm², and the maximum nonlinear coefficient is 1.05 W⁻¹·km⁻¹. The dispersion is flat, with a minimum dispersion variation of merely 0.5394 ps/(nm·km). The mode purity is greater than 98.5%, and the numerical aperture ranges from 0.0689 to 0.089. The designed fiber has broad application prospects in long-haul optical communication and high-speed data transmission.
(This article belongs to the Special Issue Optical Fiber Communication: Challenges and Opportunities)
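
The reported nonlinear coefficient is consistent with the standard relation γ = 2πn₂/(λA_eff). A quick sanity check, assuming the textbook silica nonlinear index n₂ ≈ 2.6 × 10⁻²⁰ m²/W (a typical value, not taken from the paper):

```python
import math

n2 = 2.6e-20           # m^2/W, typical nonlinear index of silica (assumed)
wavelength = 1.55e-6   # m, centre of the 1530-1620 nm band
a_eff = 101.41e-12     # m^2, reported minimum effective mode field area

gamma = 2 * math.pi * n2 / (wavelength * a_eff)   # W^-1 m^-1
print(f"{gamma * 1e3:.2f} W^-1 km^-1")            # ~1.04, close to the reported 1.05
```
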

20 pages, 4568 KB  
Article
Dual-Branch Transformer–CNN Fusion for Enhanced Cloud Segmentation in Remote Sensing Imagery
by Shengyi Cheng, Hangfei Guo, Hailei Wu and Xianjun Du
Appl. Sci. 2025, 15(18), 9870; https://doi.org/10.3390/app15189870 - 9 Sep 2025
Abstract
Cloud coverage and obstruction significantly affect the usability of remote sensing images, making cloud detection a key prerequisite for optical remote sensing applications. Among existing cloud detection methods, U-shaped convolutional networks alone are limited in modeling long-range context, while Vision Transformers fall short in capturing local spatial features. To address these issues, this study proposes a dual-branch framework, TransCNet, which combines Transformer and CNN architectures to enhance the accuracy and effectiveness of cloud detection. TransCNet pairs two encoder branches: a Transformer branch capturing global dependencies and a CNN branch extracting local details. A novel feature aggregation module enables the complementary fusion of multi-level features from both branches at each encoder stage, enhanced by channel attention mechanisms. To mitigate feature dilution during decoding, aggregated features compensate for the information lost in sampling operations. Evaluations on 38-Cloud, SPARCS, and a high-resolution Landsat-8 dataset demonstrate TransCNet's competitive performance across metrics, effectively balancing global semantic understanding against local edge preservation for clearer cloud boundary detection. The approach resolves key limitations of existing cloud detection frameworks through synergistic multi-branch feature integration.
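
The aggregation idea, concatenating the Transformer and CNN streams and letting channel attention weight the result, can be sketched in a few lines of PyTorch. This is a generic SE-style fusion written under stated assumptions, not TransCNet's actual module:

```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """Fuse a Transformer feature map and a CNN feature map of equal shape."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.project = nn.Conv2d(2 * channels, channels, kernel_size=1)
        # Squeeze-and-excitation style channel attention.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, t_feat: torch.Tensor, c_feat: torch.Tensor) -> torch.Tensor:
        fused = self.project(torch.cat([t_feat, c_feat], dim=1))
        return fused * self.attn(fused)   # channel-wise re-weighting

# Example: fuse two 64-channel feature maps from the two branches.
block = FusionBlock(64)
out = block(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
```
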

11 pages, 2022 KB  
Article
Thickness Influences on Structural and Optical Properties of Thermally Annealed (GaIn)₂O₃ Films
by Shiyang Zhang, Fabi Zhang, Tangyou Sun, Zanhui Chen, Xingpeng Liu, Haiou Li, Shifeng Xie, Wanli Yang and Yue Li
Nanomaterials 2025, 15(18), 1385; https://doi.org/10.3390/nano15181385 - 9 Sep 2025
Abstract
This work explores the relationship between film thickness and the structural, morphological, and optical features of thermally annealed (GaIn)₂O₃ thin films grown by pulsed laser deposition at room temperature. The thickness of the (GaIn)₂O₃ films varied from 20 to 391 nm with increasing deposition time. The film with a thickness of about 105 nm showed the largest grain size as well as the strongest XRD peak intensity, as measured by atomic force microscopy and X-ray diffraction. Studies of the optical properties show that the bandgap decreased from 5.14 to 4.55 eV as the film thickness changed from 20 to 391 nm. Film thickness had a significant impact on the structure, morphology, and optical properties of (GaIn)₂O₃, and the PLD growth mode notably influenced film quality. The results suggest that optimizing the film thickness is essential for improving film quality and achieving the target bandgap.
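
Bandgaps of wide-gap oxide films are commonly extracted from a Tauc plot, extrapolating the linear region of (αhν)² versus photon energy to zero. A sketch assuming direct allowed transitions and toy absorbance data, since the abstract does not state the extraction procedure:

```python
import numpy as np

# Toy spectrum: photon energy (eV) and absorption coefficient (1/cm)
# shaped like a direct-gap absorption edge near 4.9 eV.
hv = np.linspace(4.0, 5.6, 200)
alpha = np.sqrt(np.clip(2e5 * (hv - 4.9), 0, None)) + 50.0

tauc = (alpha * hv) ** 2            # (alpha*h*nu)^2 for direct allowed transitions

# Fit the steep linear region and extrapolate to the energy axis.
sel = tauc > 0.5 * tauc.max()
slope, intercept = np.polyfit(hv[sel], tauc[sel], 1)
print(f"Estimated bandgap: {-intercept / slope:.2f} eV")
```
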

15 pages, 1786 KB  
Article
Application of Gaussian SVM Flame Detection Model Based on Color and Gradient Features in Engine Test Plume Images
by Song Yan, Yushan Gao, Zhiwei Zhang and Yi Li
Sensors 2025, 25(17), 5592; https://doi.org/10.3390/s25175592 - 8 Sep 2025
Abstract
This study presents a flame detection model based on real experimental data collected during turbopump hot-fire tests of a liquid rocket engine. In these tests, a MEMRECAM ACS-1 M40 high-speed camera—serving as an optical sensor within the test instrumentation system—captured plume images for analysis. To detect abnormal flame phenomena in the plume, a Gaussian support vector machine (SVM) model was developed using image features derived from both color and gradient information. Six representative frames containing visible flames were selected from a single test-failure video. These images were segmented in the YCbCr color space using the k-means clustering algorithm to distinguish flame from non-flame pixels. A 10-dimensional feature vector was constructed for each pixel and then reduced to five dimensions using the Maximum Relevance Minimum Redundancy (mRMR) method. The reduced vectors were used to train the Gaussian SVM model. The model achieved 97.6% detection accuracy despite being trained on a limited dataset. It has been successfully applied in multiple subsequent engine tests and has proven effective in detecting ablation-related anomalies. By combining real-world sensor data acquisition with intelligent image-based analysis, this work enhances monitoring capabilities in rocket engine development.
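
The segmentation-then-classification pipeline can be sketched with OpenCV and scikit-learn. OpenCV's conversion code is COLOR_BGR2YCrCb (channel order Y, Cr, Cb); the per-pixel features and labels below are placeholders, since the paper's 10-dimensional descriptor and mRMR reduction are not reproduced here:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)  # stand-in plume frame
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)

# Coarse flame/non-flame pixel grouping via k-means in the YCrCb space;
# in practice the flame cluster would be identified by its brightness.
pixels = ycrcb.reshape(-1, 3).astype(np.float32)
pixel_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)

# Gaussian (RBF) SVM on per-pixel feature vectors; the 5-D mRMR-reduced
# descriptors from the paper are replaced with placeholders here.
X = rng.normal(size=(500, 5))
y = rng.integers(0, 2, size=500)
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
```
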

21 pages, 25636 KB  
Article
SARFT-GAN: Semantic-Aware ARConv Fused Top-k Generative Adversarial Network for Remote Sensing Image Denoising
by Haotian Sun, Ruifeng Duan, Guodong Sun, Haiyan Zhang, Feixiang Chen, Feng Yang and Jia Cao
Remote Sens. 2025, 17(17), 3114; https://doi.org/10.3390/rs17173114 - 7 Sep 2025
Abstract
Optical remote sensing images play a pivotal role in numerous applications, notably feature recognition and scene semantic segmentation. Nevertheless, their efficacy is frequently compromised by various noise types, which detrimentally impact practical usage. We have meticulously crafted a novel attention module amalgamating Adaptive Rectangular Convolution (ARConv) with Top-k Sparse Attention. This design dynamically modifies feature receptive fields, effectively mitigating superfluous interference and enhancing multi-scale feature extraction. Concurrently, we introduce a Semantic-Aware Discriminator, leveraging visual-language prior knowledge derived from the Contrastive Language–Image Pretraining (CLIP) model, steering the generator towards a more realistic texture reconstruction. This research introduces an innovative image denoising model termed the Semantic-Aware ARConv Fused Top-k Generative Adversarial Network (SARFT-GAN). Addressing shortcomings in traditional convolution operations, attention mechanisms, and discriminator design, our approach facilitates a synergistic optimization between noise suppression and feature preservation. Extensive experiments on RRSSRD, SECOND, a private Jilin-1 set, and real-world NWPU-RESISC45 images demonstrate consistent gains. Across three noise levels and four scenarios, SARFT-GAN attains state-of-the-art perceptual quality—achieving the best FID in all 12 settings and strong LPIPS—while remaining competitive on PSNR/SSIM.
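
Top-k sparse attention keeps, for each query, only the k largest attention logits and renormalizes over those, which suppresses low-relevance interactions. A minimal self-contained sketch (the combination with ARConv is not shown):

```python
import torch

def topk_sparse_attention(q, k, v, top_k=8):
    """Attention where each query attends only to its top-k keys."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5   # (B, Nq, Nk)
    kth = scores.topk(top_k, dim=-1).values[..., -1:]       # k-th largest score
    masked = scores.masked_fill(scores < kth, float("-inf"))
    weights = masked.softmax(dim=-1)                        # zero off the top-k
    return weights @ v

q = k = v = torch.randn(1, 64, 32)
out = topk_sparse_attention(q, k, v)      # (1, 64, 32)
```
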

25 pages, 20160 KB  
Article
A Robust Framework Fusing Visual SLAM and 3D Gaussian Splatting with a Coarse-Fine Method for Dynamic Region Segmentation
by Zhian Chen, Yaqi Hu and Yong Liu
Sensors 2025, 25(17), 5539; https://doi.org/10.3390/s25175539 - 5 Sep 2025
Abstract
Existing visual SLAM systems with neural representations excel in static scenes but fail in dynamic environments where moving objects degrade performance. To address this, we propose a robust dynamic SLAM framework combining classic geometric features for localization with learned photometric features for dense mapping. Our method first tracks objects using instance segmentation and a Kalman filter. We then introduce a cascaded, coarse-to-fine strategy for efficient motion analysis: a lightweight sparse optical flow method performs a coarse screening, while a fine-grained dense optical flow clustering is selectively invoked for ambiguous targets. By filtering features on dynamic regions, our system drastically improves camera pose estimation, reducing Absolute Trajectory Error by up to 95% on dynamic TUM RGB-D sequences compared to ORB-SLAM3, and generates clean dense maps. The 3D Gaussian Splatting backend, optimized with a Gaussian pyramid strategy, ensures high-quality reconstruction. Validations on diverse datasets confirm our system's robustness, achieving accurate localization and high-fidelity mapping in dynamic scenarios while reducing motion analysis computation by 91.7% over a dense-only approach.
(This article belongs to the Section Navigation and Positioning)
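
The cascaded motion check can be sketched with OpenCV: cheap sparse Lucas-Kanade flow screens each tracked region first, and dense Farneback flow is invoked only when the sparse result is ambiguous. The thresholds and region format below are illustrative assumptions:

```python
import cv2
import numpy as np

def region_motion(prev_gray, gray, box, coarse_thresh=1.0, margin=0.5):
    """Coarse-to-fine motion decision for one tracked region (x, y, w, h)."""
    x, y, w, h = box
    roi_prev, roi_cur = prev_gray[y:y+h, x:x+w], gray[y:y+h, x:x+w]

    # Coarse pass: sparse Lucas-Kanade flow on a handful of corners.
    pts = cv2.goodFeaturesToTrack(roi_prev, maxCorners=20,
                                  qualityLevel=0.01, minDistance=5)
    if pts is not None:
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(roi_prev, roi_cur, pts, None)
        good = status.ravel() == 1
        if good.any():
            mag = np.linalg.norm((nxt - pts)[good], axis=-1).mean()
            if abs(mag - coarse_thresh) > margin:   # unambiguous: decide now
                return mag > coarse_thresh

    # Fine pass: dense Farneback flow, only for ambiguous regions.
    flow = cv2.calcOpticalFlowFarneback(roi_prev, roi_cur, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return np.linalg.norm(flow, axis=-1).mean() > coarse_thresh

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (240, 320), dtype=np.uint8)
b = np.roll(a, 3, axis=1)                        # simulated horizontal motion
print(region_motion(a, b, (40, 40, 160, 120)))   # expected: True (moving)
```
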

23 pages, 9993 KB  
Article
Morphological Characterization of Aspergillus flavus in Culture Media Using Digital Image Processing and Radiomic Analysis Under UV Radiation
by Oscar J. Suarez, Daniel C. Ruiz-Ayala, Liliana Rojas Contreras, Manuel G. Forero, Jesús A. Medrano-Hermosillo and Abraham Efraim Rodriguez-Mata
Agriculture 2025, 15(17), 1888; https://doi.org/10.3390/agriculture15171888 - 5 Sep 2025
Abstract
The identification of Aspergillus flavus (A. flavus), a fungus known for producing aflatoxins, poses a taxonomic challenge due to its morphological plasticity and similarity to closely related species. This article proposes a computational approach for its characterization across four culture media, using ultraviolet (UV) radiation imaging and radiomic analysis. Images were acquired with a camera controlled by a Raspberry Pi and processed to extract 408 radiomic features (102 per color channel and grayscale). Shapiro–Wilk and Levene's tests were applied to verify normality and homogeneity of variances as prerequisites for an analysis of variance (ANOVA). Nine features showed statistically significant differences and, together with the culture medium type as a categorical variable, were used in a supervised classification stage with cross-validation. Classification using Support Vector Machines (SVM) achieved 97% accuracy on the test set. The results showed that the morphology of A. flavus varies significantly depending on the medium under UV radiation, with malt extract agar being the most discriminative. This non-invasive and low-cost approach demonstrates the potential of radiomics combined with machine learning to capture morphological patterns useful in the differentiation of fungi with optical response under UV radiation.
(This article belongs to the Section Crop Protection, Diseases, Pests and Weeds)
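
The statistical screening chain (normality, variance homogeneity, then ANOVA, then a supervised stage) is standard SciPy/scikit-learn territory. A sketch on placeholder data standing in for the 408 radiomic features:

```python
import numpy as np
from scipy import stats
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder: one radiomic feature measured on colonies from four media.
groups = [rng.normal(loc=m, size=30) for m in (0.0, 0.2, 0.5, 0.1)]

# Prerequisites for ANOVA: normality per group, homogeneity of variances.
normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups)
homogeneous = stats.levene(*groups).pvalue > 0.05

if normal and homogeneous:
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"ANOVA p = {p_value:.4f}")   # keep the feature if significant

# Supervised stage: SVM with cross-validation on the retained features.
X = rng.normal(size=(120, 9))           # the 9 significant features (placeholder)
y = rng.integers(0, 4, size=120)        # culture-medium class labels
print(cross_val_score(SVC(), X, y, cv=5).mean())
```
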

29 pages, 3367 KB  
Article
Small Object Detection in Synthetic Aperture Radar with Modular Feature Encoding and Vectorized Box Regression
by Xinmiao Du and Xihong Wu
Remote Sens. 2025, 17(17), 3094; https://doi.org/10.3390/rs17173094 - 5 Sep 2025
Abstract
Object detection in synthetic aperture radar (SAR) imagery poses significant challenges due to low resolution, small objects, arbitrary orientations, and complex backgrounds. Standard object detectors often fail to capture sufficient semantic and geometric cues for such tiny targets. To address this issue, a new Convolutional Neural Network (CNN) framework called the Deformable Vectorized Detection Network (DVDNet) is proposed, specifically designed for detecting small, oriented, and densely packed objects in SAR images. DVDNet consists of Grouped-Deformable Convolution for adaptive receptive field adjustment to diverse object scales, a Local Binary Pattern (LBP) Enhancement Module that enriches texture representations and enhances the visibility of small or camouflaged objects, and a Vector Decomposition Module that enables accurate regression of oriented bounding boxes via learnable geometric vectors. DVDNet is embedded in a two-stage detection architecture and is particularly effective in preserving the fine-grained features critical for small object localization. Its performance is validated on two SAR small-target detection datasets, HRSID and SSDD, where it achieves 90.9% mAP and 87.2% mAP, respectively. The generalizability of DVDNet was also verified on a self-built SAR ship dataset and the optical remote sensing dataset HRSC2016. All these experiments show that DVDNet outperforms standard detectors. Notably, our framework shows substantial gains in precision and recall on small-object subsets, validating the importance of combining deformable sampling, texture enhancement, and vector-based box representation for high-fidelity small object detection in complex SAR scenes.
(This article belongs to the Special Issue Deep Learning Techniques and Applications of MIMO Radar Theory)
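
Vector-based box regression replaces the angle parameter of an oriented box with 2-D vectors, sidestepping angular periodicity problems. The decomposition below illustrates the general idea only; the paper's exact parameterization may differ:

```python
import numpy as np

def obb_corners(cx, cy, wx, wy, hx, hy):
    """Corners of an oriented box from its center and two half-extent
    vectors w = (wx, wy) and h = (hx, hy), assumed perpendicular."""
    c = np.array([cx, cy])
    w, h = np.array([wx, wy]), np.array([hx, hy])
    return np.stack([c + w + h, c + w - h, c - w - h, c - w + h])

# A box centered at (10, 10): half-width 4 at 30 degrees, half-height 2.
theta = np.deg2rad(30)
u = np.array([np.cos(theta), np.sin(theta)])       # orientation unit vector
corners = obb_corners(10, 10, *(4 * u), *(2 * np.array([-u[1], u[0]])))
print(corners)
```
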

37 pages, 12368 KB  
Article
Machine Learning-Based Analysis of Optical Coherence Tomography Angiography Images for Age-Related Macular Degeneration
by Abdullah Alfahaid, Tim Morris, Tim Cootes, Pearse A. Keane, Hagar Khalid, Nikolas Pontikos, Fatemah Alharbi, Easa Alalwany, Abdulqader M. Almars, Amjad Aldweesh, Abdullah G. M. ALMansour, Panagiotis I. Sergouniotis and Konstantinos Balaskas
Biomedicines 2025, 13(9), 2152; https://doi.org/10.3390/biomedicines13092152 - 5 Sep 2025
Abstract
Background/Objectives: Age-related macular degeneration (AMD) is the leading cause of visual impairment among the elderly. Optical coherence tomography angiography (OCTA) is a non-invasive imaging modality that enables detailed visualisation of retinal vascular layers. However, clinical assessment of OCTA images is often challenging due to high data volume, pattern variability, and subtle abnormalities. This study aimed to develop automated algorithms to detect and quantify AMD in OCTA images, thereby reducing ophthalmologists' workload and enhancing diagnostic accuracy. Methods: Two texture-based algorithms were developed to classify OCTA images without relying on segmentation. The first algorithm used whole local texture features, while the second applied principal component analysis (PCA) to decorrelate and reduce texture features. Local texture descriptors, including rotation-invariant uniform local binary patterns (LBPriu2), local binary patterns (LBP), and binary robust independent elementary features (BRIEF), were combined with machine learning classifiers such as support vector machine (SVM) and K-nearest neighbour (KNN). OCTA datasets from Manchester Royal Eye Hospital and Moorfields Eye Hospital, covering healthy, dry AMD, and wet AMD eyes, were used for evaluation. Results: The first algorithm achieved a mean area under the receiver operating characteristic curve (AUC) of 1.00 ± 0.00 for distinguishing healthy eyes from wet AMD. The second algorithm showed superior performance in differentiating dry AMD from wet AMD (AUC 0.85 ± 0.02). Conclusions: The proposed algorithms demonstrate strong potential for rapid and accurate AMD diagnosis in OCTA workflows. By reducing manual image evaluation and associated variability, they may support improved clinical decision-making and patient care.
(This article belongs to the Section Molecular and Translational Medicine)
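
The second algorithm's pipeline (rotation-invariant uniform LBP histograms, PCA decorrelation, then a classifier) maps directly onto scikit-image and scikit-learn; skimage's method='uniform' computes the rotation-invariant uniform variant. A sketch on placeholder images:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

P, R = 8, 1    # neighbourhood size and radius for the LBP operator

def lbp_histogram(image):
    """Rotation-invariant uniform LBP histogram of one OCTA en-face image."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
images = rng.integers(0, 256, (60, 64, 64), dtype=np.uint8)  # placeholder scans
y = rng.integers(0, 2, size=60)            # healthy vs wet AMD (placeholder)

X = np.array([lbp_histogram(im) for im in images])
model = make_pipeline(PCA(n_components=5), SVC()).fit(X, y)
```
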

27 pages, 19273 KB  
Article
Deciphering Photographic Papers: Material Insights into 20th-Century Ilford and Kodak Sample Books
by Laura-Cassandra Vălean, Sílvia O. Sequeira, Susana França de Sá and Élia Roldão
Heritage 2025, 8(9), 361; https://doi.org/10.3390/heritage8090361 - 4 Sep 2025
Abstract
Fiber-based black-and-white developing-out papers (DOPs) were among the most widely used photographic supports of the 20th century. Their broad use, structural complexity, and range of surface finishes, alongside evolving manufacturing practices, underscore the importance of understanding their material composition for authentication, dating, and conservation purposes. This study presents a multi-analytical characterization of three DOP sample sets: two from Ilford (ca. 1950) and one from Kodak (1972), complementing previous research with a deeper insight into general features, stratigraphy, and composition. Initial non-sampling techniques, including thickness measurements, colorimetry, optical microscopy, and UV–visible induced fluorescence, were used to classify papers into visually and physically distinct groups. This informed a targeted sampling strategy for further stratigraphic and compositional analysis using Scanning Electron Microscopy with Energy Dispersive X-ray Spectroscopy (SEM-EDS), X-ray fluorescence (XRF), Raman spectroscopy, and fiber/pulp identification tests. Significant differences were observed in base tint, surface gloss, optical brightening agents, fillers, and fiber content. Notable findings include the presence of iron(III) oxide–hydroxide pigment in Ilford cream papers, anatase titanium dioxide (TiO₂) in a baryta-less Ilford sample, and the shift to more uniform tones and mixed pulps in Kodak papers by the 1970s. These results offer valuable insights into historical manufacturing and support improved dating and characterization of photographic papers.

29 pages, 4936 KB  
Article
Choline Acetate-, L-Carnitine- and L-Proline-Based Deep Eutectic Solvents: A Comparison of Their Physicochemical and Thermal Properties in Relation to the Nature and Molar Ratios of HBAs and HBDs
by Luca Guglielmero, Angelica Mero, Spyridon Koutsoumpos, Sotiria Kripotou, Konstantinos Moutzouris, Lorenzo Guazzelli and Andrea Mezzetta
Int. J. Mol. Sci. 2025, 26(17), 8625; https://doi.org/10.3390/ijms26178625 - 4 Sep 2025
Abstract
The search for more sustainable alternatives to traditional organic solvents, within the green chemistry framework, is driving increasing interest in deep eutectic solvents (DESs), especially natural-based ones (NADESs). The intense activity around DESs as innovative media for many applications, and in the search for novel types of DESs, is not matched by equal rigor in their characterization or in the study of their physicochemical properties. Comparative studies that relate a wide range of properties to DES structure would clearly benefit the rational development of the field. In this work, a panel of DESs featuring choline acetate, L-carnitine, and L-proline as hydrogen bond acceptors (HBAs) and ethylene glycol, glycerol, and levulinic acid as hydrogen bond donors (HBDs) in 1:2 and 1:3 molar ratios has been prepared and characterized. Their density, viscosity, and optical properties were thoroughly investigated at various temperatures, analyzing the influence of composition in terms of HBA type, HBD type, and molar ratio. All the proposed DESs were also thermally characterized by TGA and DSC, providing a description of their thermal behavior over a wide temperature range and determining their thermal stability and thermal degradation profiles.
