Search Results (529)

Search Parameters:
Keywords = vision restoration

8 pages, 1047 KB  
Proceeding Paper
Image Colorization of Fruits and Vegetables Using Convolutional Kolmogorov–Arnold Networks
by Mico Kent P. Malatag, Jhanna D. Vicente and John Paul T. Cruz
Eng. Proc. 2026, 134(1), 58; https://doi.org/10.3390/engproc2026134058 - 16 Apr 2026
Abstract
Image colorization transforms monochrome images into full-colored versions, which improves image restoration in fields such as art, history, and medicine. AI models such as convolutional neural networks and generative adversarial networks are widely used, but they have limitations in generalization and interpretability. Therefore, we applied the Convolutional Kolmogorov–Arnold Network (CKAN), a neural architecture that adds convolutional layers to the Kolmogorov–Arnold Network, to colorize grayscale images of fruits and vegetables. A dataset of different varieties of fruits and vegetables was used, and the model’s performance was evaluated using the structural similarity index (SSIM) and mean squared error (MSE). In testing, the CKAN-colorized images consistently achieved high SSIM scores (up to 0.9) and low MSE scores (<100.0). These results confirm CKAN’s potential for effective image colorization and highlight its possible applications in other computer vision tasks.
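
The SSIM/MSE evaluation this abstract describes can be sketched with plain NumPy. This is not the authors' code; `ssim_global` below is a simplified single-window SSIM (standard implementations slide an 11×11 window over the image), so absolute scores will differ slightly from library versions.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    return float(np.mean((a - b) ** 2))

def ssim_global(a, b, data_range=255.0):
    """Simplified single-window SSIM (Wang et al. formula, no sliding window)."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (a.var() + b.var() + c2))
```

Against the thresholds quoted above, a colorization would pass with `ssim_global(pred, gt) >= 0.9` and `mse(pred, gt) < 100.0` on 8-bit images.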

19 pages, 151357 KB  
Article
An Energy-Efficient Zero-Shot AI-ISP for Real-Time Low-Light Enhancement with Intelligent Vehicles
by Fangzhou He, Bowen Liu, Zhicheng Dong, Jie Li, Jun Luo and Dongcai Zhao
Mathematics 2026, 14(8), 1324; https://doi.org/10.3390/math14081324 - 15 Apr 2026
Abstract
Conventional Image Signal Processors (ISPs) employ manually crafted designs with limited adaptability, resulting in suboptimal performance in dynamic environments for both visual quality and machine vision applications. While deep learning facilitates adaptive AI-ISPs, supervised approaches encounter domain-shift limitations and substantial computational demands that impede edge deployment. This work introduces an adaptive zero-shot AI-ISP that dynamically optimizes processing pipelines without requiring paired training data. The proposed architecture implements dual specialized subnetworks for illumination estimation and denoising enhancement, operating collaboratively under Retinex theory principles to achieve boundary-aware illumination mapping and noise-resilient image restoration. Additionally, a physically constrained loss function is introduced to enhance color fidelity and noise suppression. For practical implementation, an FPGA-accelerated computing engine replaces transposed convolution with optimized bilinear interpolation, eliminating artifacts while achieving superior memory efficiency through customized buffering architectures. A comprehensive evaluation demonstrates highly competitive performance: a PSNR of 19.91/16.62 and an SSIM of 0.591/0.475 on the LSRW-Huawei/Nikon datasets, alongside NIQE scores of 2.065/3.025 on the DCIM and TM-DIED datasets. The hardware implementation attains 42.5 GOPS/W power efficiency, a 35.4× and 7.3× improvement over conventional CPU and GPU platforms, establishing a comprehensive edge-deployment solution for next-generation intelligent image processing systems.
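
The Retinex principle behind the dual-subnetwork design (decompose an image into illumination and reflectance) can be illustrated with a minimal single-scale sketch. The box-filter illumination estimate below is an assumption for brevity; the paper learns this mapping with a subnetwork instead.

```python
import numpy as np

def single_scale_retinex(img, ksize=15, eps=1e-6):
    """Retinex decomposition I = R * L: estimate illumination L by
    box-blurring the image, then recover reflectance R = I / L.
    img: 2-D float array; ksize: odd blur width."""
    pad = ksize // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(ksize) / ksize
    # separable box blur: rows, then columns
    L = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    L = np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, L)
    R = img / (L + eps)
    return R, L
```

Enhancement methods in this family brighten by adjusting L while leaving R (the scene content) untouched.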

22 pages, 3734 KB  
Article
CLEAR: A Cognitive LLM-Empowered Adaptive Restoration Framework for Robust Ship Detection in Complex Maritime Scenarios
by Min Li, Xinyu Zhao and Yunfeng Wan
Remote Sens. 2026, 18(8), 1142; https://doi.org/10.3390/rs18081142 - 12 Apr 2026
Abstract
Ship detection in remote sensing imagery serves as a cornerstone of modern maritime surveillance. Existing visible-light detectors suffer from severe performance degradation in adverse environmental conditions (e.g., fog, low light) due to domain gaps. Traditional global enhancement methods often lack adaptability, leading to “negative transfer”, where artifacts are introduced into clean images or the enhancement is mismatched to the degradation type. To address these challenges, we propose the CLEAR (Cognitive Large Language Model (LLM)-Empowered Adaptive Restoration) framework. Inspired by the dual-process theory of cognition, we introduce a dynamic switching mechanism between fast perception and deep reasoning. Rather than processing all images indiscriminately, CLEAR utilizes a hybrid gating mechanism to efficiently filter nominal samples, triggering a Vision–Language Model (VLM) only when necessary to diagnose degradation and dispatch targeted restoration operators. Extensive experiments on the constructed HRSC-Robust dataset demonstrate that CLEAR achieves an overall mean Average Precision (mAP) at 0.5 Intersection-over-Union (IoU) of 86.92%, outperforming the baseline by 7.74%. Notably, it establishes a “fail-safe” mechanism for optical degradations. By adaptively resolving fog and low-light degradations, it effectively mitigates detector blindness, exemplified by a doubled recall rate (52.52%) in dark scenarios. Furthermore, a confidence-based sparse triggering strategy ensures operational efficiency, maintaining a throughput of ~11.8 FPS in nominal conditions. This work validates the potential of VLMs for interpretable and robust remote sensing tasks.
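
The mAP@0.5 figure above rests on the standard IoU overlap test between predicted and ground-truth boxes; a minimal sketch (unrelated to CLEAR's implementation):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection counts as a true positive at the 0.5 threshold when `iou(pred, gt) >= 0.5`; mAP then averages precision over recall levels and classes.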

22 pages, 13987 KB  
Article
SDTformer: Scale-Adaptive Differential Transformer Network for Remote Sensing Image Dehazing
by Boyu Liu and Qi Zhang
Remote Sens. 2026, 18(8), 1136; https://doi.org/10.3390/rs18081136 - 11 Apr 2026
Abstract
In Transformer-based image restoration models, the self-attention mechanism often introduces attention noise from irrelevant contextual features, hindering the recovery of underlying clear content. Although many methods have been proposed to suppress attention noise, most existing approaches are developed for general vision tasks and fail to generalize to remote sensing image dehazing, where large-scale spatial structures pose additional challenges for attention modeling. Effectively modeling scale-aware attention to suppress redundant activations is therefore crucial for remote sensing image dehazing. In this paper, we propose the scale-adaptive differential Transformer (SDTformer), an architecture designed to suppress attention noise through a differential attention mechanism, thereby improving reconstruction fidelity. Specifically, the model incorporates a scale-adaptive differential self-attention module, which models contextual dependencies across different spatial scales and reduces redundant contextual interference by computing differential attention maps. Additionally, a dynamic differential feed-forward network is proposed to adaptively select informative spatial features, strengthening feature aggregation. To further enhance feature representation, a gated fusion module is introduced to aggregate multi-scale features generated by different encoder blocks, which facilitates the learning process of each decoder block and improves the final reconstruction performance. Extensive experimental results on commonly used benchmarks show that our method achieves favorable performance against state-of-the-art approaches.
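
SDTformer's module is not spelled out in the abstract; the sketch below shows the generic differential-attention idea it builds on, subtracting a second softmax attention map so that noise attended by both maps cancels. The scalar weight `lam` is a hypothetical stand-in for a learned parameter.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def differential_attention(q1, k1, q2, k2, v, lam=0.5):
    """Differential attention: a1 - lam * a2 applied to the values.
    q*, k*: (n, d) query/key halves; v: (n, dv) values."""
    d = q1.shape[-1]
    a1 = softmax(q1 @ k1.T / np.sqrt(d))   # primary attention map
    a2 = softmax(q2 @ k2.T / np.sqrt(d))   # "noise" attention map
    return (a1 - lam * a2) @ v
```

When both halves attend to the same irrelevant context, the subtraction zeroes that contribution out, which is the noise-cancelling effect the abstract refers to.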

33 pages, 515 KB  
Article
From Nonviolence to Reconciliation: The Prophetic Political Ethics of War and Peace
by Harris Sadik Kirazli
Religions 2026, 17(4), 449; https://doi.org/10.3390/rel17040449 - 4 Apr 2026
Abstract
This article re-examines Islamic ethics of war and peace by returning to the formative Meccan–Medinan trajectory of the Prophet Muḥammad’s life, where early Islamic moral reasoning developed amid persecution, migration, diplomacy, and armed conflict. Contemporary debates frequently portray Islam either as a tradition that sacralizes violence through jihad or as one that reduces peace to purely inward spirituality. Both perspectives obscure the historically grounded ethical discourse that emerged within the early Muslim community. This study argues that the Qurʾān—understood within the Islamic tradition as the authoritative source of ethical guidance—together with prophetic practice articulated a coherent moral framework governing the use of force, the pursuit of peace, and the restoration of social order after conflict. Drawing on Qurʾānic discourse, canonical ḥadīth, classical tafsīr and sīrah literature, and modern scholarship in Islamic studies, religious ethics, and conflict resolution theory, the article reconstructs how early Islamic sources represent the ethical regulation of violence. The analysis identifies a threefold trajectory in prophetic practice: a Meccan phase characterized by nonviolent endurance and moral witness under persecution; a Medinan phase marked by constitutional governance, plural coexistence, and tightly regulated defensive warfare; and a culminating ethic of negotiated peace and post-conflict reconciliation exemplified in the Treaty of Ḥudaybiyyah and the Conquest of Mecca. Taken together, these stages reveal an integrated moral vision in which force is neither celebrated nor treated as a default instrument of political expansion, but permitted only under strict ethical constraints shaped by justice (ʿadl), mercy (raḥma), proportionality, and the protection of communal life.
By reconstructing this early prophetic framework, the article demonstrates that Islamic sources contain significant internal resources for limiting violence, regulating warfare, and prioritizing reconciliation. In doing so, it contributes to contemporary scholarship on Islamic ethics and situates the prophetic model within broader global debates on the moral regulation of war, peacebuilding, and post-conflict justice.
(This article belongs to the Special Issue The Ethics of War and Peace: Religious Traditions in Dialogue)
23 pages, 21945 KB  
Article
From “Housing Security” to “Housing Quality”: The Common Implications of Japan’s UR Rental Housing Experience for China’s Affordable Housing and South Korea’s Public Housing
by Xue-Rui Wang, Ting Huang, Xin-Yan Chen and Byung-Kweon Jun
Buildings 2026, 16(7), 1412; https://doi.org/10.3390/buildings16071412 - 2 Apr 2026
Abstract
This study focuses on the commonalities and differences in the public housing systems of three East Asian countries, using Japan’s UR Rental Housing as a case study. It employs a composite methodology that integrates architectural typology and cross-cultural comparison, constructing theoretical linkages within a three-dimensional framework of “social institutions–cultural context–spatial structure”. The research emphasizes three key dimensions: (1) The evolution of policy frameworks and their underlying socio-cultural drivers; (2) The spatial layout logic and functional concepts embedded in residential unit planning; (3) The transformation and inheritance of traditional residential values in contemporary housing design. The study strictly adheres to a progressive logic of “sample construction–type decoding–paradigm extraction–cross-domain comparison–theoretical feedback”. It begins by analyzing the core issues in the supply structure and spatial adaptability of affordable housing in China and South Korea. Next, it systematically examines the policy evolution and spatial design paradigms of Japan’s UR Rental Housing. Subsequently, it constructs a comparative analytical matrix for public housing in China, Japan, and South Korea, identifying transferable common experiences and pathways requiring localized adaptation. Finally, it proposes targeted recommendations across three dimensions, namely policy framework, spatial design, and community building: (1) At the policy level, a full lifecycle governance framework is advocated; (2) In spatial design, the principles of “compactness and efficiency” are emphasized, alongside enhanced flexibility and cultural relevance; (3) In community building, efforts are directed toward activating interpersonal connections and strengthening the social functional attributes of housing.
This study emphasizes transnational comparability and knowledge transferability, aiming to provide practical insights for China’s affordable housing reforms and South Korea’s public housing modernization. It seeks to promote cross-national learning and collaborative innovation in the regional housing sector, offering both theoretical reference and practical pathways to realize the shared vision of “restoring housing to a human scale”.
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)

26 pages, 6199 KB  
Article
WeatherMAR: Complementary Masking of Paired Tokens for Adverse-Weather Image Restoration
by Junyuan Ma, Qunbo Lv and Zheng Tan
J. Imaging 2026, 12(4), 154; https://doi.org/10.3390/jimaging12040154 - 2 Apr 2026
Abstract
Image restoration under adverse weather conditions has attracted increasing attention because of its importance for both human perception and downstream vision applications. Existing methods, however, are often designed for a single degradation type. We present WeatherMAR, a multi-weather restoration framework that formulates adverse-weather restoration as a paired-domain completion problem in a shared continuous token space. Specifically, WeatherMAR concatenates degraded and clean token sequences into a joint paired-domain sequence and performs restoration through masked autoregressive modeling, in which self-attention enables direct cross-domain interaction. To strengthen conditional learning while avoiding trivial paired correspondences, we introduce complementary bidirectional masking together with an optional reverse objective used only during training to encourage degradation-aware representations. WeatherMAR further employs a conditional diffusion objective for continuous token prediction and adopts a progress-to-step schedule to improve inference efficiency. Extensive experiments on standard multi-weather benchmarks, including Snow100K, Outdoor-Rain, and RainDrop, show that WeatherMAR achieves the best PSNR/SSIM on Snow100K-S (38.14/0.9684), the best SSIM on Outdoor-Rain (0.9396), and the best PSNR on Snow100K-L (32.58) and RainDrop (33.12). These results demonstrate that paired-domain token completion provides an effective solution for adverse-weather restoration.
(This article belongs to the Topic Computer Vision and Image Processing, 3rd Edition)
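
The paired-domain setup can be sketched in a few lines: degraded and clean token sequences are concatenated into one joint sequence, and a random mask plus its complement give the two bidirectional training views. This illustrates the stated idea only; it is not WeatherMAR's code.

```python
import numpy as np

def paired_sequence(degraded_tokens, clean_tokens):
    """Concatenate degraded and clean tokens into one joint sequence."""
    return np.concatenate([degraded_tokens, clean_tokens], axis=0)

def complementary_masks(n_tokens, mask_ratio=0.5, rng=None):
    """A random boolean mask and its complement: every position hidden in
    the forward pass is visible in the reverse pass, and vice versa."""
    rng = rng or np.random.default_rng(0)
    k = int(n_tokens * mask_ratio)
    idx = rng.permutation(n_tokens)
    fwd = np.zeros(n_tokens, dtype=bool)
    fwd[idx[:k]] = True      # True = masked, i.e. to be predicted
    return fwd, ~fwd
```

Because the two masks never overlap, the model cannot solve either view by copying the corresponding token from the other, which is the "trivial paired correspondence" the abstract guards against.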

20 pages, 41296 KB  
Article
Frequency-Domain Feature Learning Network for Joint Image Demosaicing and Denoising
by Donghui Zhang, Feiyu Li, Jun Yang and Le Yang
Mathematics 2026, 14(7), 1175; https://doi.org/10.3390/math14071175 - 1 Apr 2026
Abstract
The methods employed for image demosaicing and denoising play a pivotal role in image acquisition and restoration, and have been extensively studied over the past few decades. Traditionally, these tasks are performed sequentially, with demosaicing followed by denoising, or vice versa, treating each process independently. While this approach can enhance image quality, it often leads to issues such as color inaccuracies and information loss, as the outcome of the first task influences the second. Consequently, the integration of joint demosaicing and denoising (JDD) has become a focal point in recent research. Deep convolutional neural networks have shown promising results in addressing JDD challenges. This study introduces an end-to-end network, termed the Frequency-domain Features learning Network (FFNet), designed to tackle the JDD problem. Unlike conventional methods that focus on spatial domain features, FFNet utilizes frequency-domain (FD) characteristics to capture both global and local image details. Based on the vision Transformer architecture, FFNet consists of two key components: a global Fourier block (GFB), which uses global attention to determine the weights of FD parameters, and an MLP-based local Fourier block (LFB), which improves local feature extraction. These blocks are integrated with a channel attention mechanism to form the frequency-domain attention block (FAB), the core element of FFNet. Extensive experimental results on benchmark datasets demonstrate that FFNet achieves superior performance in terms of both quantitative metrics (PSNR/SSIM) and visual quality compared to existing state-of-the-art JDD methods. Furthermore, we provide a comprehensive analysis of its computational efficiency, including parameter count, FLOPs, and inference time, showing a competitive trade-off between performance and complexity.
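
The core frequency-domain operation of a global Fourier block, weighting spectra elementwise and transforming back, can be sketched as follows; the fixed `weights` array is a stand-in assumption for the attention-derived FD parameters the paper describes.

```python
import numpy as np

def global_fourier_filter(x, weights):
    """Frequency-domain feature mixing: rFFT -> elementwise complex
    weighting -> inverse rFFT.  x: (H, W) real feature map;
    weights: (H, W // 2 + 1) complex filter in rfft2 layout."""
    spec = np.fft.rfft2(x)
    return np.fft.irfft2(spec * weights, s=x.shape)
```

Because multiplication in the frequency domain is global circular convolution in the spatial domain, one cheap elementwise product mixes information across the whole image, which is why FD blocks capture global detail at low cost.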

22 pages, 2481 KB  
Article
Human Corneal Stromal Stem Cell Treatment Reduces Established Opacities in Chronic Corneal Scarring
by Kira L. Lathrop, Julia T. Coelho, Christine Chandran, Syeda R. Ali, Moira L. Geary, Deepinder K. Dhaliwal, Vishal Jhanji, Mithun Santra and Gary H. F. Yam
Cells 2026, 15(7), 615; https://doi.org/10.3390/cells15070615 - 30 Mar 2026
Abstract
Corneal fibrosis, clinically referred to as corneal scarring, disrupts the normal architecture and transparency of the cornea and remains a major cause of visual impairment worldwide. Although corneal transplantation can restore vision, its effectiveness is constrained by limited accessibility, donor tissue shortages, and the risk of allograft rejection. Treatments with human corneal stromal stem cells (hCSSCs) have demonstrated scarless healing in preclinical models of acute corneal injury. Here, we report that hCSSCs also modulated pre-existing corneal opacities. We established a reproducible in vivo model of chronic corneal opacity. Given that scar severity varies among corneas even after identical injuries, we developed a non-invasive, image-based method to quantify opacity volume longitudinally in individual corneas. Using this approach, we evaluated the scar-reducing potential of three hCSSC batches previously shown to inhibit acute scarring. Following cell treatment, the pre-existing opacity volumes gradually decreased. In vitro, hCSSCs exposed to pro-inflammatory stimulus exhibited increased metalloproteinase (MMP) activity relative to tissue inhibitor of metalloproteinase (TIMP), as indicated by an elevated MMP2/TIMP2 ratio. This shift may promote matrix remodeling and scar resolution. Overall, our findings provide proof-of-concept for hCSSC-based therapy as a strategy to reduce established corneal scarring and restore corneal transparency.

18 pages, 21058 KB  
Article
MSSA-Net: Multi-Modal Structural and Semantic-Adaptive Network for Low-Light Image Enhancement
by Tianxiang Chen, Xiaoyi Wang, Tongshun Zhang and Qiuzhan Zhou
Sensors 2026, 26(7), 2059; https://doi.org/10.3390/s26072059 - 25 Mar 2026
Abstract
Low-light image enhancement (LLIE) remains challenging due to severe degradation of high-frequency structures and semantic ambiguity under extreme darkness. Although existing methods achieve satisfactory brightness recovery, they often suffer from structural inconsistency and semantic drift, as diverse scenes are typically processed with uniform enhancement strategies or static text prompts. To address these issues, we propose a Multi-Modal Structural and Semantic-Adaptive Network (MSSA-Net) under a structure-anchored paradigm. First, we design a Multi-Scale Self-Refinement Block (MSRB) to enhance degraded visible representations through multi-scale feature extraction and progressive refinement. Meanwhile, a pseudo-infrared structural prior derived from the input image is introduced to provide noise-insensitive geometric cues. These cues are extracted via a Structure-Guided Cross-Attention (SGCA) module to produce structure-dominant features. The refined visible features and structural features are then adaptively integrated through an adaptive residual fusion (ARF) module to achieve balanced restoration. Furthermore, we develop a Large Multi-modal Model (LMM)-Driven Scene-Adaptive Attention mechanism that generates instance-aware scene tags from a coarse preview and injects semantic embeddings into visual features. Extensive experiments demonstrate that MSSA-Net improves structural fidelity, brightness recovery, and semantic naturalness across multiple benchmarks.
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems—2nd Edition)
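
The SGCA module's internals are not given in the abstract; as a plain single-head sketch of the stated idea, visible-image tokens act as queries over structural-prior tokens used as keys and values:

```python
import numpy as np

def cross_attention(query_feats, struct_feats):
    """Single-head cross-attention: queries from visible features (n, d),
    keys/values from the structural prior (m, d)."""
    d = query_feats.shape[-1]
    scores = query_feats @ struct_feats.T / np.sqrt(d)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn = e / e.sum(axis=-1, keepdims=True)   # each row sums to 1
    return attn @ struct_feats
```

Real implementations add learned Q/K/V projection matrices; they are omitted here for brevity.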

23 pages, 5417 KB  
Article
A Method for Underwater Image Enhancement Utilizing Polarization Inspired by the Mantis Shrimp’s Multi-Dimensional Visual Imaging Mechanism
by Qingyu Liu, Ruixin Li, Congcong Li, Canrong Chen, Yifan Huang, Huayu Yang and Fei Yuan
J. Mar. Sci. Eng. 2026, 14(6), 582; https://doi.org/10.3390/jmse14060582 - 21 Mar 2026
Abstract
Optical attenuation caused by absorption and scattering in turbid water significantly degrades underwater image quality, making reliable underwater imaging a challenging problem. Underwater polarization imaging has attracted increasing attention because of its ability to suppress scattered light and provide additional polarization cues. However, existing polarization-based enhancement approaches often adapt conventional underwater image enhancement strategies, and the multi-dimensional characteristics of polarization information are not always fully utilized, which may limit detail restoration in complex underwater environments. To address this issue, this paper proposes a bio-inspired underwater polarization image enhancement framework motivated by the polarization vision mechanism of marine organisms. Specifically, a two-stage architecture consisting of a Polarization Adversarial Network (PAN) and a Polarization Enhancement Network (PEN) is designed. The PAN incorporates a Bionic Antagonistic Module (BAM) to exploit complementary information among polarization channels, while Salient Feature Extraction (SFE) is introduced to reduce redundant feature interference. The subsequent PEN integrates a frequency-aware Mamba-based structure to enhance feature representation and improve detail reconstruction. Experiments on simulated underwater polarization datasets indicate that the proposed framework can effectively suppress backscattering and improve structural detail visibility in challenging underwater scenes, demonstrating competitive performance compared with representative traditional and learning-based methods.
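
The polarization cues such a system works from are conventionally computed from four polarizer-angle captures via the Stokes parameters; this standard computation (not the paper's network) is:

```python
import numpy as np

def stokes_dolp(i0, i45, i90, i135, eps=1e-9):
    """Stokes parameters, degree of linear polarization (DoLP) and angle
    of polarization (AoP) from intensities at 0/45/90/135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)
    aop = 0.5 * np.arctan2(s2, s1)
    return s0, dolp, aop
```

Differences in DoLP between backscattered veiling light and object-reflected light are what polarization-based descattering exploits.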

29 pages, 29045 KB  
Article
Liproxstatin-1 Attenuates Retinal Ischemia–Reperfusion Injury by Suppressing EGR1-Mediated Ferroptosis
by Wei Huang, Yue Dong, Xuan Zhou, Huishan Lin, Jingwei Yao, Zhuoyi Wu, Weng Ian Tam, Yuheng Tan, Chengguo Zuo and Mingkai Lin
Antioxidants 2026, 15(3), 391; https://doi.org/10.3390/antiox15030391 - 19 Mar 2026
Abstract
Retinal ischemia–reperfusion (I/R) injury results in irreversible vision loss largely through retinal ganglion cell (RGC) death, with ferroptosis being a key mechanism. This study evaluated the therapeutic potential of the ferroptosis inhibitor Liproxstatin-1 (Lip-1) and deciphered its underlying mechanism. Using a mouse retinal I/R model and primary RGC cultures subjected to oxygen–glucose deprivation/reoxygenation (OGD/R), we demonstrated that Lip-1 effectively inhibits ferroptosis. Lip-1 treatment preserved retinal architecture (as assessed by H&E staining and SD-OCT) and partially restored visual function (as measured by electroretinography). Integrated molecular analyses—including immunofluorescence, Western blotting, and RNA sequencing—showed that Lip-1 downregulates early growth response 1 (EGR1), thereby inhibiting p53 and consequently restoring solute carrier family 7 member 11 (xCT) expression. Crucially, lentivirus-mediated EGR1 knockdown attenuated OGD/R-induced ferroptosis, confirming its pivotal role. Our work defines a coherent EGR1–p53–xCT signaling axis driving ferroptosis in retinal I/R injury and identifies Lip-1 as a neuroprotective agent targeting this pathway. These findings establish a druggable ferroptotic cascade and provide a mechanistic rationale for targeting EGR1 in the treatment of ischemic retinopathies.
(This article belongs to the Section ROS, RNS and RSS)

25 pages, 8614 KB  
Article
Underwater Image Restoration Integrating Monocular Depth Estimation with a Physical Imaging Model
by Tianchi Zhang, Hongwei Qin, Qiang Liu and Xing Liu
J. Mar. Sci. Eng. 2026, 14(6), 563; https://doi.org/10.3390/jmse14060563 - 18 Mar 2026
Abstract
Underwater images suffer from quality degradation such as haze, detail blurring, color distortion, and low contrast due to factors like light scattering and wavelength-dependent attenuation in water. This severely hinders target detection tasks for Autonomous Underwater Vehicles (AUVs) that rely on image information. Although deep learning-based methods have gained widespread attention, existing approaches still face challenges such as insufficient feature extraction and limited generalization in complex real-world scenes. Methods based on physical models, on the other hand, rely heavily on depth information, which is difficult to obtain accurately. To address these issues, this paper proposes a novel underwater image restoration method that integrates depth estimation with the Akkaynak-Treibitz physical imaging model. In the depth estimation stage, efficient and robust feature extraction is achieved through a lightweight encoder–decoder architecture combined with a channel–spatial hybrid attention mechanism. To overcome the inherent scale-ambiguity problem in monocular depth estimation, which prevents direct output of absolute depth consistent with the real scene, sparse depth priors are introduced. Subsequently, adaptive depth binning and depth-map optimization are realized via an m-Vision Transformer and convolutional regression. In the image restoration stage, the acquired high-quality depth map is combined with the Akkaynak-Treibitz physical imaging model for inverse solving, achieving high-quality restoration from degraded to clear images. Experimental results demonstrate that the proposed method outperforms mainstream depth estimation methods (LapDepth, UDepth, etc.) and mainstream image restoration methods (CLAHE, FUnIE-GAN, etc.) in terms of evaluation metrics and visual perceptual quality.
On the extremely degraded UIEB-S dataset, the proposed method achieves SSIM = 0.8954, UCIQE = 0.6107, and PSNR = 23.35 dB. Compared to CLAHE and FUnIE-GAN, SSIM improved by 2.8% and 16.7%, UCIQE by 9.6% and 14.3%, and PSNR by 22.5% and 13.9%, respectively. Comprehensive subjective and objective evaluations validate the effectiveness of the proposed method in addressing image quality degradation, particularly its outstanding capability in severe color-cast correction and detail recovery.
(This article belongs to the Section Ocean Engineering)
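The restoration stage described above inverts a depth-dependent imaging model per color channel. A minimal sketch of that inversion, assuming the common two-term form (direct signal attenuated by `beta_D`, backscatter governed by `beta_B` and veiling light `B_inf`); the coefficient values below are hypothetical, not the paper's fitted parameters:

```python
import numpy as np

def restore_with_depth(degraded, depth, beta_D, beta_B, B_inf):
    """Invert I_c = J_c * exp(-beta_D_c * z) + B_inf_c * (1 - exp(-beta_B_c * z))
    for each color channel c, given a per-pixel depth map z of shape (H, W)."""
    z = depth[..., None]                          # broadcast depth over channels
    backscatter = B_inf * (1.0 - np.exp(-beta_B * z))
    direct = np.clip(degraded - backscatter, 0.0, None)  # remove veiling light
    restored = direct * np.exp(beta_D * z)        # undo range-dependent attenuation
    return np.clip(restored, 0.0, 1.0)

# toy example with hypothetical coefficients (red attenuates fastest in water)
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, size=(4, 4, 3))
depth = np.full((4, 4), 2.0)                      # 2 m everywhere, for illustration
out = restore_with_depth(img, depth,
                         beta_D=np.array([0.40, 0.12, 0.10]),
                         beta_B=np.array([0.35, 0.20, 0.18]),
                         B_inf=np.array([0.05, 0.15, 0.20]))
```

The division of labor matches the abstract: depth estimation supplies `z`, and the physical model then reduces restoration to a closed-form per-pixel inversion.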
14 pages, 3141 KB  
Article
Enhanced Real-Time Detector for Industrial Vision-Based Corn Impurity Detection
by Xiao Zhang, Yuhang Bian, Xiangdong Li, Haoze Yu, Dong Li and Min Wu
Foods 2026, 15(6), 1065; https://doi.org/10.3390/foods15061065 - 18 Mar 2026
Viewed by 240
Abstract
The effective cleaning of corn prior to storage is crucial for ensuring grain quality and safety. Traditional Convolutional Neural Network (CNN)-based detection methods often struggle to maintain accuracy in scenarios with dense occlusions. Furthermore, limitations in image quality and feature representation hinder their generalization to diverse impurity types. To address these challenges, this paper proposes an enhanced real-time detector transformer model named RT-DETR-CD (Real-Time Detector Transformer with Convolution and Dynamic Upsampling) for corn impurity detection based on industrial vision. This approach integrates Receptive Field Attention Convolutions (RFAConv) to enhance sensitivity to local texture details and employs the dynamic upsampling operator DySample to restore high-frequency edge information. Additionally, a novel Inner-Shape-IoU loss function is introduced to accelerate bounding box regression for objects with varying aspect ratios. Images were captured using FLIR industrial cameras under controllable annular LED illumination. Experiments on a self-built dataset demonstrate that the proposed model achieves a 4.7% improvement in mean average precision (mAP) and operates at 68 frames per second (FPS), outperforming the original RT-DETR model in both accuracy and speed. This work provides a practical solution for real-time, high-precision impurity detection on grain processing lines. Full article
(This article belongs to the Section Food Analytical Methods)
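The Inner-Shape-IoU loss mentioned above builds on standard IoU-based bounding-box regression. As a reference point, a plain IoU loss for axis-aligned boxes can be sketched as follows (the Inner and Shape weighting terms that make the paper's variant aspect-ratio-aware are not reproduced here):

```python
import numpy as np

def iou_loss(pred, target):
    """1 - IoU for axis-aligned boxes in (x1, y1, x2, y2) format."""
    ix1 = np.maximum(pred[..., 0], target[..., 0])
    iy1 = np.maximum(pred[..., 1], target[..., 1])
    ix2 = np.minimum(pred[..., 2], target[..., 2])
    iy2 = np.minimum(pred[..., 3], target[..., 3])
    inter = np.clip(ix2 - ix1, 0, None) * np.clip(iy2 - iy1, 0, None)
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    union = area_p + area_t - inter
    return 1.0 - inter / np.maximum(union, 1e-9)

a = np.array([0.0, 0.0, 2.0, 2.0])
b = np.array([1.0, 1.0, 3.0, 3.0])
# identical boxes give loss 0; here a and b overlap in a 1x1 region,
# so inter = 1, union = 4 + 4 - 1 = 7, and the loss is 1 - 1/7
```

Variants such as Shape-IoU add penalty terms on the box's own scale and aspect ratio so that gradients remain informative for elongated objects, which is the regression behavior the abstract targets.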
20 pages, 48606 KB  
Article
GMUD-Net: Global Modulated Unbalanced Dual-Branch Network for Image Restoration in Various Degraded Environments
by Shengchun Wang, Yingjie Liu and Huijie Zhu
Appl. Sci. 2026, 16(6), 2854; https://doi.org/10.3390/app16062854 - 16 Mar 2026
Viewed by 239
Abstract
Image restoration has wide applications in computer vision, yet existing methods have limitations: CNNs struggle to capture long-range dependencies, while transformers handle local details poorly and incur high computational complexity. Additionally, existing dual-branch networks fail to define a clear dominant–auxiliary role between branches, leading to redundancy and high computational costs. This paper proposes a Global Modulated Unbalanced Dual-Branch Network (GMUD-Net), which adopts an unbalanced structure with a CNN as the main branch and a transformer as the auxiliary branch. The CNN branch achieves strong restoration capability by integrating the global–local hybrid backbone block (GLBB) and the frequency-based global attention module (FGAM). As the key building block of the CNN branch, GLBB combines a local backbone branch, a global Fourier branch, and a residual branch to fuse local details with global context, while FGAM leverages the fast Fourier transform at the bottleneck to enhance cross-channel interaction and improve global restoration performance. The lightweight transformer branch employs efficient cross-channel attention to provide complementary global cues, which are filtered and injected into the CNN branch via the global attention guidance block (GAG). These designs combine the advantages of CNNs and transformers while significantly reducing computational burden, offering a new paradigm for dual-branch architectures. Experiments show that, compared with existing algorithms, the proposed method achieves state-of-the-art or highly competitive performance in both quantitative and qualitative evaluations across nine datasets. Full article
(This article belongs to the Special Issue AI-Driven Image and Signal Processing)
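The appeal of a Fourier branch, as used by GLBB and FGAM above, is that one FFT gives every output position a global receptive field. A minimal sketch of the idea, with the module's learnable frequency weights and cross-channel attention replaced by a fixed low-pass mask (an assumption for illustration only):

```python
import numpy as np

def fourier_global_filter(feat, keep_ratio=0.25):
    """Globally mix each channel of a (C, H, W) feature map by filtering
    in the frequency domain; every output pixel depends on all input pixels."""
    C, H, W = feat.shape
    F = np.fft.fftshift(np.fft.fft2(feat, axes=(-2, -1)), axes=(-2, -1))
    mask = np.zeros((H, W))                       # keep only low spatial frequencies
    h, w = int(H * keep_ratio), int(W * keep_ratio)
    mask[H // 2 - h:H // 2 + h + 1, W // 2 - w:W // 2 + w + 1] = 1.0
    F_filtered = F * mask
    out = np.fft.ifft2(np.fft.ifftshift(F_filtered, axes=(-2, -1)), axes=(-2, -1))
    return out.real

feat = np.random.default_rng(1).normal(size=(2, 16, 16))
smoothed = fourier_global_filter(feat)            # same shape, globally mixed
```

A learned version would replace `mask` with per-channel trainable weights, which is what makes a frequency-domain block an inexpensive substitute for full spatial self-attention.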