Search Results (325)

Search Parameters:
Keywords = illumination correction

16 pages, 7343 KB  
Article
Accelerated Super-Resolution Reconstruction for Structured Illumination Microscopy Integrated with Low-Light Optimization
by Caihong Huang, Dingrong Yi and Lichun Zhou
Micromachines 2025, 16(9), 1020; https://doi.org/10.3390/mi16091020 - 3 Sep 2025
Viewed by 327
Abstract
Structured illumination microscopy (SIM) with π/2 phase-shift modulation traditionally relies on frequency-domain computation, which greatly limits processing efficiency. In addition, the illumination regime inherent in structured illumination techniques often results in poor visual quality of reconstructed images. To address these dual challenges, this study introduces DM-SIM-LLIE (Differential Low-Light Image Enhancement SIM), a novel framework that integrates two synergistic innovations. First, the study pioneers a spatial-domain computational paradigm for π/2 phase-shift SIM reconstruction. Through system differentiation, mathematical derivation, and algorithm simplification, an optimized spatial-domain model is established. Second, an adaptive local overexposure correction strategy is developed, combined with RUAS, a zero-shot deep learning algorithm, to enhance the quality of reconstructed structured-illumination images. Experimental validation using specimens such as fluorescent microspheres and bovine pulmonary artery endothelial cells demonstrates the advantages of this approach: compared with traditional frequency-domain methods, reconstruction is accelerated fivefold while maintaining equivalent lateral resolution and excellent axial resolution. The image quality of the low-light enhancement algorithm after local overexposure correction is superior to that of existing methods. These advances significantly increase the application potential of SIM technology in time-sensitive biomedical imaging scenarios that require high spatiotemporal resolution.
(This article belongs to the Special Issue Advanced Biomaterials, Biodevices, and Their Application)
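
The listing gives no equations for the spatial-domain reconstruction, but the four-step π/2 phase-shifting algebra such methods build on can be sketched directly. A minimal NumPy illustration; the frame model I_k = A + B·cos(φ + kπ/2) and all names here are assumptions for illustration, not the authors' DM-SIM-LLIE derivation:

```python
import numpy as np

def four_step_demodulation(frames):
    """Spatial-domain demodulation of four pi/2 phase-shifted frames.

    frames: array of shape (4, H, W) with I_k = A + B*cos(phi + k*pi/2).
    Returns the DC term A, modulation amplitude B, and phase phi,
    computed pixel-wise without any Fourier transform.
    """
    i0, i1, i2, i3 = frames
    a = frames.mean(axis=0)                # DC (widefield) component
    b = 0.5 * np.hypot(i0 - i2, i3 - i1)   # I0-I2 = 2B cos(phi), I3-I1 = 2B sin(phi)
    phi = np.arctan2(i3 - i1, i0 - i2)     # modulation phase
    return a, b, phi

# Synthetic example: a 64x64 patch under a pi/2 phase-shifted pattern.
rng = np.random.default_rng(0)
h = w = 64
yy, xx = np.mgrid[:h, :w]
phase = 2 * np.pi * xx / 16.0
frames = np.stack([100 + 40 * np.cos(phase + k * np.pi / 2) for k in range(4)])
frames += rng.normal(0, 1, frames.shape)   # sensor noise
a, b, phi = four_step_demodulation(frames)
print(a.mean(), b.mean())                  # ~100 and ~40
```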

23 pages, 3731 KB  
Article
Efficient Navigable Area Computation for Underground Autonomous Vehicles via Ground Feature and Boundary Processing
by Miao Yu, Yibo Du, Xi Zhang, Ziyan Ma and Zhifeng Wang
Sensors 2025, 25(17), 5355; https://doi.org/10.3390/s25175355 - 29 Aug 2025
Viewed by 381
Abstract
Accurate boundary detection is critical for autonomous trackless rubber-wheeled vehicles in underground coal mines, as it prevents lateral collisions with tunnel walls. Unlike open-road environments, underground tunnels suffer from poor illumination, water mist, and dust, which degrade visual imaging. To address these challenges, this paper proposes a navigable-area computation method for underground autonomous vehicles based on ground feature and boundary processing, consisting of three core steps. First, a real-time point cloud correction process via pre-correction and dynamic update aligns ground point clouds with the LiDAR coordinate system to ensure parallelism. Second, corrected point clouds are projected onto a 2D grid map using a grid-based method, effectively mitigating the impact of ground unevenness on boundary extraction. Third, an adaptive boundary completion method is designed to resolve boundary discontinuities in junctions and shunting chambers. Additionally, the method emphasizes continuous extraction of boundaries over extended periods by integrating temporal context, ensuring the continuity of boundary detection during vehicle operation. Experiments on real underground vehicle data validate that the method achieves accurate detection and consistent tracking of dual-sided boundaries across straight tunnels, curves, intersections, and shunting chambers, meeting the requirements of underground autonomous driving. This work provides a rule-based, real-time solution feasible under limited computing power, offering critical safety redundancy when deep learning methods fail in harsh underground environments.
(This article belongs to the Special Issue Intelligent Traffic Safety and Security)
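
As a rough sketch of the grid-based projection step (step two), here is how corrected, non-ground LiDAR points might be binned into a 2D grid for boundary extraction. The cell size, ranges, and count threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def project_to_grid(points, cell=0.2, x_range=(0.0, 40.0), y_range=(-10.0, 10.0)):
    """Project 3D LiDAR points (N, 3) onto a 2D occupancy grid.

    Each cell stores the number of non-ground returns falling into it;
    thresholding the counts yields candidate wall/boundary cells that are
    insensitive to local ground unevenness.
    """
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid = np.zeros((nx, ny), dtype=np.int32)
    np.add.at(grid, (ix[keep], iy[keep]), 1)   # accumulate point counts per cell
    return grid

# Hypothetical usage: points above the corrected ground plane.
pts = np.random.rand(10000, 3) * [40, 20, 3] + [0, -10, 0]
occupancy = project_to_grid(pts) > 3           # boolean boundary-candidate mask
```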

23 pages, 1804 KB  
Article
Automatic Algorithm-Aided Segmentation of Retinal Nerve Fibers Using Fundus Photographs
by Diego Luján Villarreal
J. Imaging 2025, 11(9), 294; https://doi.org/10.3390/jimaging11090294 - 28 Aug 2025
Viewed by 540
Abstract
This work presents an image processing algorithm for personalized segmentation and mapping of retinal nerve fiber layer (RNFL) bundle trajectories in the human retina. To segment RNFL bundles, preprocessing steps were used for noise reduction and illumination correction, and blood vessels were removed. The image was fed to a maximum–minimum modulation algorithm to isolate retinal nerve fiber (RNF) segments. A modified Garway-Heath map categorizes RNF orientation, assuming designated sets of orientation angles for aligning RNF direction. Bezier curves fit RNFs from the center of the optic disk (OD) to their corresponding ends. Fundus images from five different databases (n = 300) were tested, with 277 healthy normal subjects and 33 classified as diabetic without any sign of diabetic retinopathy. The algorithm successfully traced fiber trajectories per fundus across all regions identified by the Garway-Heath map. The resulting trace images were compared to the Jansonius map, reaching an average efficiency of 97.44% and performing well even on low-resolution images. The average mean difference in orientation angles of the included images was 11.01 ± 1.25 and the average RMSE was 13.82 ± 1.55. A 24-2 visual field (VF) grid pattern was overlaid onto the fundus to relate the VF test points to the intersection of RNFL bundles and their entry angles into the OD. The mean standard deviation (95% limit) was 13.5° (median 14.01°), ranging from less than 1° to 28.4° for 50 out of 52 VF locations. The influence of optic parameters was explored using multiple linear regression. Average angle trajectories in the papillomacular region were significantly influenced (p < 0.00001) by the latitudinal optic disk position and disk–fovea angle. Given that the basic biometric ground truth data (only fovea and OD centers) are publicly accessible, the algorithm can be customized to individual eyes and can distinguish fibers accurately by considering unique anatomical features.
(This article belongs to the Special Issue Progress and Challenges in Biomedical Image Analysis—2nd Edition)
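
The preprocessing the abstract names (noise reduction plus illumination correction) is a standard fundus pipeline; a plausible OpenCV sketch follows. The green-channel choice, blur scale, and CLAHE settings are assumptions, not the paper's exact steps:

```python
import cv2

def preprocess_fundus(path):
    """Illumination correction and denoising for a fundus photograph.

    A large-kernel blur estimates the slowly varying illumination field;
    dividing it out flattens brightness before contrast enhancement.
    """
    img = cv2.imread(path)
    green = img[:, :, 1]                       # RNFL contrast is best in green
    background = cv2.GaussianBlur(green, (0, 0), sigmaX=30)
    flat = cv2.divide(green, background, scale=128)   # illumination correction
    flat = cv2.medianBlur(flat, 3)             # impulse-noise reduction
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(flat)                   # locally contrast-enhanced image
```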

12 pages, 810 KB  
Opinion
Pharmacological Modulation of Pupil Size in Presbyopia: Optical Modeling and Clinical Implications
by Pablo De Gracia and Andrew D. Pucker
J. Clin. Med. 2025, 14(17), 6040; https://doi.org/10.3390/jcm14176040 - 26 Aug 2025
Viewed by 602
Abstract
Presbyopia is a ubiquitous age-related condition characterized by reduced near focusing ability due to lenticular stiffening. Pharmacologic agents such as pilocarpine have re-emerged as a less-invasive treatment option by inducing miosis and thereby enhancing depth of focus. However, the optimal pharmacologically induced pupil size that balances improved near vision with sufficient retinal illuminance remains undetermined. In this work, we present for the first time a direct integration of advanced theoretical modeling with a systematic synthesis of clinical trial outcomes to define the optimal target pupil size for pharmacologic presbyopia correction. We modeled visual performance using the Visual Strehl Ratio of the Optical Transfer Function (VSOTF) and convolved images of optotypes across a range of pupil diameters from 1.5 mm to 3.5 mm. This combined optical–clinical approach allowed us to quantitatively compare modeled image quality and depth of focus predictions with real-world clinical efficacy data from pilocarpine-based interventions. Simulations showed that smaller pupil sizes (1.5–2.5 mm) significantly extended depth of focus compared to standard multifocal optics while maintaining image quality within acceptable limits. These findings align with clinical trials of pilocarpine formulations, which commonly achieve post-treatment pupil diameters in the 2.0–2.5 mm range and are associated with clinically meaningful gains in near vision. Our analysis uniquely demonstrates that these clinically achieved pupil sizes closely match the theoretically optimal 2.0–3.0 mm range identified in our modeling, strengthening the evidence base for drug design and patient selection. These results reinforce the role of pharmacologically controlled pupil size as a central target in presbyopia management. By explicitly linking predictive optical modeling with aggregated clinical outcomes, we introduce a novel framework to guide future pharmacologic development strategies and refine clinical counseling in the emerging era of presbyopia therapeutics.
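
A back-of-the-envelope version of the trade-off the modeling quantifies: geometrically, near blur scales with pupil diameter times residual defocus, while retinal illuminance scales with pupil area. This toy calculation (all values illustrative; the paper's VSOTF modeling is far more complete and also accounts for diffraction) shows why intermediate pupil sizes are attractive:

```python
import numpy as np

def near_blur_arcmin(pupil_mm, residual_defocus_d):
    """Geometric blur-disc diameter (arcmin) for a given pupil and defocus.

    Small-angle geometric optics: angular blur ~ pupil diameter x defocus.
    Ignores diffraction and aberrations, so it overstates the benefit of
    very small pupils (where diffraction dominates).
    """
    blur_rad = (pupil_mm * 1e-3) * residual_defocus_d
    return np.degrees(blur_rad) * 60

for d in [1.5, 2.0, 2.5, 3.0, 3.5]:
    blur = near_blur_arcmin(d, residual_defocus_d=2.5)  # 40 cm reading distance
    illuminance = (d / 3.5) ** 2   # retinal illuminance relative to 3.5 mm pupil
    print(f"{d:.1f} mm: blur {blur:5.1f} arcmin, illuminance x{illuminance:.2f}")
```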

26 pages, 62819 KB  
Article
Low-Light Image Dehazing and Enhancement via Multi-Feature Domain Fusion
by Jiaxin Wu, Han Ai, Ping Zhou, Hao Wang, Haifeng Zhang, Gaopeng Zhang and Weining Chen
Remote Sens. 2025, 17(17), 2944; https://doi.org/10.3390/rs17172944 - 25 Aug 2025
Viewed by 611
Abstract
The acquisition of nighttime remote-sensing visible-light images is often accompanied by low-illumination effects and haze interference, resulting in significant image quality degradation and greatly affecting subsequent applications. Existing low-light enhancement and dehazing algorithms can handle each problem individually, but their simple cascade cannot effectively address unknown real-world degradations. Therefore, we design a joint processing framework, WFDiff, which fully exploits the advantages of Fourier–wavelet dual-domain features and innovatively integrates the reverse diffusion process through differentiable operators to construct a multi-scale degradation collaborative correction system. Specifically, in the reverse diffusion process, a dual-domain feature interaction module is designed, and the joint probability distribution of the generated image and real data is constrained through differentiable operators: on the one hand, a global frequency-domain prior is established by jointly constraining Fourier amplitude and phase, effectively maintaining the radiometric consistency of the image; on the other hand, wavelets are used to capture high-frequency details and edge structures in the spatial domain to improve the prediction process. On this basis, a cross-overlapping-block adaptive smoothing estimation algorithm is proposed, which achieves dynamic fusion of multi-scale features through a differentiable weighting strategy, effectively solving the problem of restoring images of different sizes and avoiding local inconsistencies. In view of the current lack of remote-sensing data for low-light haze scenarios, we constructed the Hazy-Dark dataset. Physical experiments and ablation experiments show that the proposed method outperforms existing single-task or simple cascade methods in terms of image fidelity, detail recovery capability, and visual naturalness, providing a new paradigm for remote-sensing image processing under coupled degradations.
(This article belongs to the Section AI Remote Sensing)
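
A minimal NumPy/PyWavelets sketch of the Fourier–wavelet dual-domain split described above: amplitude/phase constraints in the frequency domain plus wavelet detail-band constraints in the spatial domain. The loss weights and the Haar basis are assumptions; the paper implements these constraints as differentiable operators inside a diffusion model:

```python
import numpy as np
import pywt

def dual_domain_features(img):
    """Split an image into the two feature domains used for the constraints.

    Fourier amplitude/phase carry global radiometry; the wavelet detail
    bands carry local edges and texture.
    """
    spec = np.fft.fft2(img)
    amplitude, phase = np.abs(spec), np.angle(spec)
    ll, (lh, hl, hh) = pywt.dwt2(img, "haar")    # one-level 2D DWT
    return amplitude, phase, (lh, hl, hh)

def dual_domain_loss(pred, target, w_amp=1.0, w_phase=1.0, w_wave=1.0):
    """Weighted L1 losses over both domains (illustrative weighting)."""
    a1, p1, d1 = dual_domain_features(pred)
    a2, p2, d2 = dual_domain_features(target)
    l_amp = np.abs(a1 - a2).mean()
    l_phase = np.abs(np.angle(np.exp(1j * (p1 - p2)))).mean()  # wrapped diff
    l_wave = sum(np.abs(b1 - b2).mean() for b1, b2 in zip(d1, d2))
    return w_amp * l_amp + w_phase * l_phase + w_wave * l_wave
```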

30 pages, 1292 KB  
Review
Advances in UAV Remote Sensing for Monitoring Crop Water and Nutrient Status: Modeling Methods, Influencing Factors, and Challenges
by Xiaofei Yang, Junying Chen, Xiaohan Lu, Hao Liu, Yanfu Liu, Xuqian Bai, Long Qian and Zhitao Zhang
Plants 2025, 14(16), 2544; https://doi.org/10.3390/plants14162544 - 15 Aug 2025
Viewed by 708
Abstract
With the advancement of precision agriculture, Unmanned Aerial Vehicle (UAV)-based remote sensing has been increasingly employed for monitoring crop water and nutrient status due to its high flexibility, fine spatial resolution, and rapid data acquisition capabilities. This review systematically examines recent research progress and key technological pathways in UAV-based remote sensing for crop water and nutrient monitoring. It provides an in-depth analysis of UAV platforms, sensor configurations, and their suitability across diverse agricultural applications. The review also highlights critical data processing steps—including radiometric correction, image stitching, segmentation, and data fusion—and compares three major modeling approaches for parameter inversion: vegetation index-based, data-driven, and physically based methods. Representative application cases across various crops and spatiotemporal scales are summarized. Furthermore, the review explores factors affecting monitoring performance, such as crop growth stages, spatial resolution, illumination and meteorological conditions, and model generalization. Despite significant advancements, current limitations include insufficient sensor versatility, labor-intensive data processing chains, and limited model scalability. Finally, the review outlines future directions, including the integration of edge intelligence, hybrid physical–data modeling, and multi-source, three-dimensional collaborative sensing. This work aims to provide theoretical insights and technical support for advancing UAV-based remote sensing in precision agriculture.
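
Of the three modeling approaches the review compares, the vegetation index route is the simplest to illustrate. A sketch of NDVI, a standard index used as an input feature for water and nutrient status models; the band-array inputs are assumed to be radiometrically corrected reflectance:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index from UAV multispectral bands.

    NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]; it rises with canopy
    vigor and is a common proxy in index-based inversion models.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)   # eps avoids division by zero
```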

22 pages, 8901 KB  
Article
D3Fusion: Decomposition–Disentanglement–Dynamic Compensation Framework for Infrared-Visible Image Fusion in Extreme Low-Light
by Wansi Yang, Yi Liu and Xiaotian Chen
Appl. Sci. 2025, 15(16), 8918; https://doi.org/10.3390/app15168918 - 13 Aug 2025
Viewed by 480
Abstract
Infrared-visible image fusion quality is critical for nighttime perception in autonomous driving and surveillance but suffers severe degradation under extreme low-light conditions, including irreversible texture loss in visible images, thermal boundary diffusion artifacts, and overexposure under dynamic non-uniform illumination. To address these challenges, a Decomposition–Disentanglement–Dynamic Compensation framework, D3Fusion, is proposed. Firstly, a Retinex-inspired Decomposition Illumination Net (DIN) decomposes inputs into enhanced images and degradative illumination maps for joint low-light recovery. Secondly, an illumination-guided encoder and a multi-scale differential compensation decoder dynamically balance cross-modal features. Finally, a progressive three-stage training paradigm from illumination correction through feature disentanglement to adaptive fusion resolves optimization conflicts. Compared to state-of-the-art methods on the LLVIP, TNO, MSRS, and RoadScene datasets, D3Fusion achieves an average improvement of 1.59% in standard deviation (SD), 6.9% in spatial frequency (SF), 2.59% in edge intensity (EI), and 1.99% in visual information fidelity (VIF), demonstrating superior performance in extreme low-light scenarios. The framework effectively suppresses thermal diffusion artifacts while mitigating exposure imbalance, adaptively brightening scenes while preserving texture details in shadowed regions. This significantly improves fusion quality for nighttime images by enhancing salient information, establishing a robust solution for multimodal perception under illumination-critical conditions.
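
The Decomposition Illumination Net is described as Retinex-inspired; a classical single-scale Retinex-style decomposition conveys the idea. This stand-in uses a Gaussian illumination estimate (the paper learns the decomposition instead) to split a low-light frame into reflectance and an illumination map:

```python
import cv2
import numpy as np

def retinex_decompose(img_bgr, sigma=25, eps=1e-4):
    """Classical Retinex-style decomposition of a low-light image.

    Illumination is estimated as a heavy blur of the max channel
    (smooth, spatially varying light); reflectance = image / illumination.
    """
    img = img_bgr.astype(np.float32) / 255.0
    illum = cv2.GaussianBlur(img.max(axis=2), (0, 0), sigma)
    illum = np.clip(illum, eps, 1.0)           # avoid division by near-zero
    reflectance = img / illum[..., None]
    return np.clip(reflectance, 0, 1), illum   # enhanced image, light map
```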

18 pages, 314 KB  
Article
Wittgenstein and Johnson: Notes on a Neglected Appreciation
by Brian R. Clack
Religions 2025, 16(8), 1043; https://doi.org/10.3390/rel16081043 - 12 Aug 2025
Viewed by 425
Abstract
M. O’C. Drury and Norman Malcolm both report that Wittgenstein gave them copies of Samuel Johnson’s Prayers and Meditations, a book that he said he valued highly. Given that Wittgenstein’s commentators have mined the ideas of other religious thinkers he admired (Kierkegaard, Tolstoy, and so on) in order to illuminate his ambiguous thinking about religion, it is perhaps strange that this voiced appreciation of Johnson’s prayers has not been further investigated. The purpose of this paper is to correct that neglect. This is done by way of an exploration of the nature and content of Johnson’s prayers, and an analysis of how these prayers reflect the tormented state of Johnson’s mind and his concerns about indolence, death and judgment. Wittgenstein had noted that Malcolm would only like Johnson’s prayers if he looked at them “from the angle from which I see them”, something which in the context of his letter to Malcolm suggests the very “human” quality of these prayers, and their origin in Johnson’s personal struggles. A description of Wittgenstein’s own struggles (which mirror to some extent those of Johnson in their worries about indolence, judgment, and a guilt that requires confession) can then form the background to an understanding, not just of Wittgenstein’s personal spiritual state of mind, but of his philosophical account of religious belief and the turbulent human passions from which religion arises. Significant points of contact are noted between the respective thinking of Wittgenstein and Johnson, suggestive of new avenues of research that might profitably be explored.
(This article belongs to the Special Issue New Work on Wittgenstein's Philosophy of Religion)
14 pages, 31941 KB  
Article
PriKMet: Prior-Guided Pointer Meter Reading for Automated Substation Inspections
by Haidong Chu, Jun Feng, Yidan Wang, Weizhen He, Yunfeng Yan and Donglian Qi
Electronics 2025, 14(16), 3194; https://doi.org/10.3390/electronics14163194 - 11 Aug 2025
Viewed by 442
Abstract
Despite the rapid advancement of smart-grid technologies, automated pointer meter reading in power substations remains a persistent challenge due to complex electromagnetic interference and dynamic field conditions. Traditional computer vision methods, typically designed for ideal imaging environments, exhibit limited robustness against real-world perturbations such as illumination fluctuations, partial occlusions, and motion artifacts. To address this gap, we propose PriKMet (Prior-Guided Pointer Meter Reader), a novel meter reading algorithm that integrates deep learning with domain-specific priors through three key contributions: (1) a unified hierarchical framework for joint meter detection and keypoint localization, (2) an intelligent meter reading method that fuses predefined inspection route information with perception results, and (3) an adaptive offset correction mechanism for UAV-based inspections. Extensive experiments on a comprehensive dataset of 3237 substation meter images demonstrate the superior performance of PriKMet, which achieves state-of-the-art results: 99.4% AP50 for meter detection and 85.5% meter reading accuracy. The real-time processing capability of the method offers a practical solution for modernizing power infrastructure monitoring. This approach effectively reduces reliance on manual inspections in complex operational environments while enhancing the intelligence of power maintenance operations.
(This article belongs to the Special Issue Advances in Condition Monitoring and Fault Diagnosis)
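
The reading step after detection and keypoint localization is essentially angle interpolation along the dial arc. A geometric sketch under assumed keypoints; the pivot, tip, and scale-endpoint names and the 0–1.6 dial range are hypothetical, not PriKMet's actual outputs:

```python
import numpy as np

def meter_reading(center, tip, zero_pt, full_pt, v_min=0.0, v_max=1.6):
    """Convert localized keypoints to a dial reading by angle interpolation.

    center: dial pivot; tip: pointer tip; zero_pt/full_pt: scale endpoints.
    All are (x, y) pixel coordinates; v_min/v_max is the dial's value range.
    """
    def angle(p):
        return np.arctan2(p[1] - center[1], p[0] - center[0])

    a0, a1, ap = angle(zero_pt), angle(full_pt), angle(tip)
    sweep = (a1 - a0) % (2 * np.pi)         # clockwise arc from zero to full
    t = ((ap - a0) % (2 * np.pi)) / sweep   # fraction of the arc swept by tip
    return v_min + t * (v_max - v_min)

# Pointer a third of the way up the scale on a synthetic dial.
print(meter_reading((100, 100), (60, 60), (40, 140), (160, 140)))  # ~0.51
```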

21 pages, 5260 KB  
Article
LapECNet: Laplacian Pyramid Networks for Image Exposure Correction
by Yongchang Li and Jing Jiang
Appl. Sci. 2025, 15(16), 8840; https://doi.org/10.3390/app15168840 - 11 Aug 2025
Viewed by 270
Abstract
Images captured under complex lighting conditions often suffer from local under- or overexposure and detail loss. Existing methods typically process illumination and texture information in a mixed manner, making it difficult to simultaneously achieve precise exposure adjustment and preservation of detail. To address this challenge, we propose LapECNet, an enhanced Laplacian pyramid network architecture for image exposure correction and detail reconstruction. Specifically, it decomposes the input image into different frequency bands of a Laplacian pyramid, enabling separate handling of illumination adjustment and detail enhancement. The framework first decomposes the image into three feature levels. At each level, we introduce a feature enhancement module that adaptively processes image features across different frequency bands using spatial and channel attention mechanisms. After enhancing the features at each level, we further propose a dynamic aggregation module that learns adaptive weights to hierarchically fuse multi-scale features, achieving context-aware recombination of the enhanced features. Extensive experiments on the public MSEC benchmark demonstrated that our method gave improvements of 15.4% in PSNR and 7.2% in SSIM over previous methods. On the LCDP dataset, our method achieved improvements of 7.2% in PSNR and 13.9% in SSIM over previous methods.
(This article belongs to the Special Issue Recent Advances in Parallel Computing and Big Data)
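
The decomposition LapECNet starts from is the standard Laplacian pyramid. An OpenCV sketch of a three-level build and its inverse; the enhancement and aggregation modules the paper inserts between these two steps are omitted:

```python
import cv2
import numpy as np

def build_laplacian_pyramid(img, levels=3):
    """Decompose an image into Laplacian detail bands plus a residual.

    The bands isolate detail at each scale, so illumination (residual)
    and texture (high-frequency bands) can be processed separately.
    """
    img = img.astype(np.float32)               # bands need signed values
    gaussians = [img]
    for _ in range(levels - 1):
        gaussians.append(cv2.pyrDown(gaussians[-1]))
    bands = []
    for i in range(levels - 1):
        h, w = gaussians[i].shape[:2]
        up = cv2.pyrUp(gaussians[i + 1], dstsize=(w, h))
        bands.append(gaussians[i] - up)        # detail band at this scale
    bands.append(gaussians[-1])                # low-frequency residual
    return bands

def reconstruct(bands):
    """Invert the decomposition by upsampling and re-adding each band."""
    img = bands[-1]
    for band in reversed(bands[:-1]):
        h, w = band.shape[:2]
        img = cv2.pyrUp(img, dstsize=(w, h)) + band
    return img
```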

13 pages, 692 KB  
Article
Contrast Sensitivity Comparison of Daily Simultaneous-Vision Center-Near Multifocal Contact Lenses: A Pilot Study
by David P. Piñero, Ainhoa Molina-Martín, Elena Martínez-Plaza, Kevin J. Mena-Guevara, Violeta Gómez-Vicente and Dolores de Fez
Vision 2025, 9(3), 67; https://doi.org/10.3390/vision9030067 - 1 Aug 2025
Viewed by 459
Abstract
Our purpose is to evaluate the binocular contrast sensitivity function (CSF) in a presbyopic population and compare the results obtained with four different simultaneous-vision center-near multifocal contact lens (MCL) designs for distance vision under two illumination conditions. Additionally, chromatic CSF (red-green and blue-yellow) was evaluated. A randomized crossover pilot study was conducted. Four daily disposable lens designs, based on simultaneous-vision and center-near correction, were compared. The achromatic CSF was measured binocularly using the CSV1000e test under two lighting conditions: room light on and off. Chromatic CSF was measured using the OptoPad-CSF test. Comparison of achromatic results with room lighting showed a statistically significant difference only for 3 cpd (p = 0.03) between the baseline visit (with spectacles) and all MCLs. Comparison of achromatic results without room lighting showed no statistically significant differences between the baseline and all MCLs for any spatial frequency (p > 0.05 in all cases). Comparison of CSF-T results showed a statistically significant difference only for 4 cpd (p = 0.002). Comparison of CSF-D results showed no statistically significant difference for all frequencies (p > 0.05 in all cases). The MCL designs analyzed provided satisfactory achromatic contrast sensitivity results for distance vision, similar to those obtained with spectacles, with no remarkable differences between designs. Chromatic contrast sensitivity for the red-green and blue-yellow mechanisms revealed some differences from the baseline that should be further investigated in future studies.

16 pages, 5703 KB  
Article
Document Image Shadow Removal Based on Illumination Correction Method
by Depeng Gao, Wenjie Liu, Shuxi Chen, Jianlin Qiu, Xiangxiang Mei and Bingshu Wang
Algorithms 2025, 18(8), 468; https://doi.org/10.3390/a18080468 - 26 Jul 2025
Viewed by 537
Abstract
Due to diverse lighting conditions and photo environments, shadows are almost ubiquitous in images, especially document images captured with mobile devices. Shadows not only seriously affect the visual quality and readability of a document but also significantly hinder image processing. Although shadow removal research has achieved good results in natural scenes, specific studies on document images are lacking. To effectively remove shadows in document images, a dark illumination correction network is proposed, which mainly consists of two modules: shadow detection and illumination correction. First, a simplified shadow-corrected attention block is designed to combine spatial and channel attention; it is used to extract features, detect the shadow mask, and correct the illumination. Then, the shadow detection block detects shadow intensity and outputs a soft shadow mask that gives the probability of each pixel belonging to shadow. Lastly, the illumination correction block corrects dark illumination with the soft shadow mask and outputs a shadow-free document image. Our experiments on five datasets show that the proposed method achieves state-of-the-art results, proving the effectiveness of illumination correction.
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
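
For contrast with the learned approach, the classical illumination-correction baseline for document shadows fits in a few lines: estimate the paper background per channel, then divide it out. The kernel size is an assumption, and this is a conventional baseline, not the paper's dark illumination correction network:

```python
import cv2

def remove_document_shadow(img_bgr, kernel=21):
    """Classical illumination-correction baseline for document shadows.

    Morphological closing with a large kernel wipes out text strokes and
    leaves the (shaded) paper background; dividing each channel by that
    background flattens the illumination so shadowed paper returns to white.
    """
    se = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel, kernel))
    result = []
    for ch in cv2.split(img_bgr):
        background = cv2.morphologyEx(ch, cv2.MORPH_CLOSE, se)  # text removed
        background = cv2.medianBlur(background, kernel)         # smooth estimate
        result.append(cv2.divide(ch, background, scale=255))    # flatten light
    return cv2.merge(result)
```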

21 pages, 3825 KB  
Article
Light Propagation and Multi-Scale Enhanced DeepLabV3+ for Underwater Crack Detection
by Wenji Ai, Jiaxuan Zou, Zongchao Liu, Shaodi Wang and Shuai Teng
Algorithms 2025, 18(8), 462; https://doi.org/10.3390/a18080462 - 24 Jul 2025
Viewed by 292
Abstract
This paper proposes an enhanced DeepLabV3+ model for robust underwater crack detection that achieves state-of-the-art performance (82.5% IoU, 85.6% F1) through three integrated innovations: a physics-based light propagation correction model for illumination distortion, multi-scale feature extraction for variable crack dimensions, and curvature flow-guided loss for boundary precision. Our approach significantly outperforms DeepLabV3+, SCTNet, and LarvSeg by 10.6–13.4% IoU, demonstrating particular strength in detecting small cracks (78.1% IoU) under challenging low-light/high-turbidity conditions. The solution provides a practical framework for automated underwater infrastructure inspection.
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (3rd Edition))
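
The physics-based light propagation correction suggests a Beer–Lambert-style compensation for wavelength-dependent attenuation; a toy version follows. The per-channel coefficients and the known path length are illustrative assumptions, not the paper's calibrated model:

```python
import numpy as np

def compensate_attenuation(img_rgb, distance_m, beta=(0.6, 0.2, 0.1)):
    """Toy physics-based correction for underwater color attenuation.

    Red light decays fastest underwater; multiplying each channel by
    exp(beta_c * d) undoes Beer-Lambert attenuation over path length d.
    beta values here are illustrative per-channel (R, G, B) coefficients.
    """
    img = img_rgb.astype(np.float32) / 255.0
    gains = np.exp(np.asarray(beta) * distance_m)   # per-channel recovery gain
    return np.clip(img * gains, 0.0, 1.0)
```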

35 pages, 4256 KB  
Article
Automated Segmentation and Morphometric Analysis of Thioflavin-S-Stained Amyloid Deposits in Alzheimer’s Disease Brains and Age-Matched Controls Using Weakly Supervised Deep Learning
by Gábor Barczánfalvi, Tibor Nyári, József Tolnai, László Tiszlavicz, Balázs Gulyás and Karoly Gulya
Int. J. Mol. Sci. 2025, 26(15), 7134; https://doi.org/10.3390/ijms26157134 - 24 Jul 2025
Viewed by 758
Abstract
Alzheimer’s disease (AD) involves the accumulation of amyloid-β (Aβ) plaques, whose quantification plays a central role in understanding disease progression. Automated segmentation of Aβ deposits in histopathological micrographs enables large-scale analyses but is hindered by the high cost of detailed pixel-level annotations. Weakly supervised learning offers a promising alternative by leveraging coarse or indirect labels to reduce the annotation burden. We evaluated a weakly supervised approach to segment and analyze thioflavin-S-positive parenchymal amyloid pathology in AD and age-matched brains. Our pipeline integrates three key components, each designed to operate under weak supervision. First, robust preprocessing (including retrospective multi-image illumination correction and gradient-based background estimation) was applied to enhance image fidelity and support training, as such models rely heavily on image features. Second, class activation maps (CAMs), generated by SqueezeNet, a compact deep classifier, were used to identify and coarsely localize amyloid-rich parenchymal regions from patch-wise image labels, serving as spatial priors for subsequent refinement without requiring dense pixel-level annotations. Third, a patch-based convolutional neural network, U-Net, was trained on synthetic data generated from micrographs based on CAM-derived pseudo-labels via an extensive object-level augmentation strategy, enabling refined whole-image semantic segmentation and generalization across diverse spatial configurations. To ensure robustness and unbiased evaluation, we assessed the segmentation performance of the entire framework using patient-wise group k-fold cross-validation, explicitly modeling generalization across unseen individuals, which is critical in clinical scenarios. Despite relying on weak labels, the integrated pipeline achieved strong segmentation performance, with an average Dice similarity coefficient (≈0.763) and Jaccard index (≈0.639), widely accepted metrics for assessing segmentation quality in medical image analysis. The resulting segmentations were also visually coherent, demonstrating that weakly supervised segmentation is a viable alternative in histopathology, where acquiring dense annotations is prohibitively labor-intensive and time-consuming. Subsequent morphometric analyses of the automatically segmented Aβ deposits revealed size-, structural complexity-, and global geometry-related differences across brain regions and cognitive status. These findings confirm that deposit architecture exhibits region-specific patterns and reflects underlying neurodegenerative processes, thereby highlighting the biological relevance and practical applicability of the proposed image-processing pipeline for morphometric analysis.
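
The first pipeline component, retrospective multi-image illumination correction, can be approximated without any calibration image by estimating the shading field from the image stack itself. A minimal median-based sketch; the paper's exact estimator, and its gradient-based background step, are not specified in the listing:

```python
import numpy as np

def retrospective_flat_field(stack, eps=1e-6):
    """Retrospective illumination correction from an image stack alone.

    stack: array of shape (N, H, W), many fields of view from one session.
    The per-pixel median across images approximates the shading pattern
    (vignetting, uneven lamp field); dividing it out equalizes brightness.
    """
    stack = stack.astype(np.float64)
    shading = np.median(stack, axis=0)         # structure cancels, shading stays
    shading /= shading.mean() + eps            # normalize to unit-mean field
    return stack / (shading + eps)             # flat-field-corrected stack
```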

9 pages, 1583 KB  
Article
Snapshot Quantitative Phase Imaging with Acousto-Optic Chromatic Aberration Control
by Christos Alexandropoulos, Laura Rodríguez-Suñé and Martí Duocastella
Sensors 2025, 25(14), 4503; https://doi.org/10.3390/s25144503 - 20 Jul 2025
Viewed by 504
Abstract
The transport of intensity equation enables quantitative phase imaging from only two axially displaced intensity images, facilitating the characterization of low-contrast samples like cells and microorganisms. However, the rapid selection of the correct defocused planes, crucial for real-time phase imaging of dynamic events, remains challenging. Additionally, the different images are normally acquired sequentially, further limiting phase-reconstruction speed. Here, we report on a system that addresses these issues and enables user-tuned defocusing with snapshot phase retrieval. Our approach is based on combining multi-color pulsed illumination with acousto-optic defocusing for microsecond-scale chromatic aberration control. By illuminating each plane with a different color and using a color camera, the information to reconstruct a phase map can be gathered in a single acquisition. We detail the fundamentals of our method, characterize its performance, and demonstrate live phase imaging of a freely moving microorganism at speeds of 150 phase reconstructions per second, limited only by the camera’s frame rate.
(This article belongs to the Special Issue Optical Imaging for Medical Applications)
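
Phase retrieval from two axially displaced intensities uses the transport of intensity equation; under a uniform-intensity approximation it reduces to a Poisson equation solvable by FFT. A standard two-plane inversion sketch (the paper's contribution is acquiring both planes in one color-multiplexed shot; nothing here is specific to their system):

```python
import numpy as np

def tie_phase(i_plus, i_minus, dz, wavelength, pixel, i0=None):
    """Phase retrieval from two defocused intensities via the TIE.

    Uniform-intensity approximation: laplacian(phi) = -(k / I0) * dI/dz,
    solved with an FFT-based Poisson solver. i_plus/i_minus are the two
    axially displaced images (separation 2*dz); pixel is the pixel size.
    """
    k = 2 * np.pi / wavelength
    if i0 is None:
        i0 = 0.5 * (i_plus + i_minus).mean()
    didz = (i_plus - i_minus) / (2 * dz)       # axial intensity derivative
    rhs = -(k / i0) * didz

    ny, nx = rhs.shape
    fx = np.fft.fftfreq(nx, d=pixel)
    fy = np.fft.fftfreq(ny, d=pixel)
    f2 = (2 * np.pi) ** 2 * (fx[None, :] ** 2 + fy[:, None] ** 2)
    f2[0, 0] = np.inf                          # drop the unrecoverable DC term
    phi = np.fft.ifft2(np.fft.fft2(rhs) / (-f2)).real
    return phi
```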
