Search Results (7)

Search Parameters:
Keywords = multi-scale Retinex decomposition

38 pages, 14848 KB  
Article
Image Sand–Dust Removal Using Reinforced Multiscale Image Pair Training
by Dong-Min Son, Jun-Ru Huang and Sung-Hak Lee
Sensors 2025, 25(19), 5981; https://doi.org/10.3390/s25195981 - 26 Sep 2025
Viewed by 464
Abstract
This study proposes an image-enhancement method to address the challenges of low visibility and color distortion in images captured during yellow sandstorms by an image-sensor-based outdoor surveillance system. The technique combines traditional image processing with deep learning to improve image quality while preserving color consistency during transformation. Conventional methods can partially improve color representation and reduce blurriness in sand–dust environments; however, they are limited in their ability to restore fine details and sharp object boundaries. In contrast, the proposed method incorporates Retinex-based processing into the training phase, enabling enhanced clarity and sharpness in the restored images. The proposed framework comprises three main steps. First, a cycle-consistent generative adversarial network (CycleGAN) is trained with unpaired images to generate synthetically paired data. Second, CycleGAN is retrained using these generated images along with clear images obtained through multiscale image decomposition, allowing the model to transform dust-interfered images into clear ones. Finally, color preservation is achieved by selecting the A and B chrominance channels from the small-scale model to maintain the original color characteristics. The experimental results confirmed that the proposed method effectively restores image color and removes sand–dust-related interference, thereby providing enhanced visual quality under sandstorm conditions. Specifically, it outperformed algorithm-based dust-removal methods such as Sand-Dust Image Enhancement (SDIE), Chromatic Variance Consistency and Gamma Correction-Based Dehazing (CVCGCBD), and Rank-One Prior (ROP+), as well as machine-learning-based methods including the Fusion strategy and the Two-in-One Low-Visibility Enhancement Network (TOENet), achieving a Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) score of 17.238, which demonstrates improved perceptual quality, and a Local Phase Coherence-Sharpness Index (LPC-SI) value of 0.973, indicating enhanced sharpness. Both metrics showed superior performance compared to conventional methods. When applied to Closed-Circuit Television (CCTV) systems, the proposed method is expected to mitigate the adverse effects of color distortion and image blurring caused by sand–dust, thereby effectively improving visual clarity in practical surveillance applications.
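
As an illustration of the color-preservation step described in the abstract above (keeping the luminance of a detail-enhanced output while taking the A and B chrominance channels from a second, color-preserving output), a minimal Python/OpenCV sketch is given below. The function and image names (merge_lab_channels, detail_img, color_img) are hypothetical stand-ins, not the authors' code.

```python
# Minimal sketch of the chrominance-channel recombination idea: take L from a
# detail-enhanced result and A/B from a color-preserving result. Illustrative
# only; not the paper's exact pipeline.
import cv2
import numpy as np

def merge_lab_channels(detail_img: np.ndarray, color_img: np.ndarray) -> np.ndarray:
    """Combine L from detail_img with A/B from color_img (both BGR uint8)."""
    detail_lab = cv2.cvtColor(detail_img, cv2.COLOR_BGR2LAB)
    color_lab = cv2.cvtColor(color_img, cv2.COLOR_BGR2LAB)
    merged = detail_lab.copy()
    merged[..., 1:] = color_lab[..., 1:]          # replace A and B channels only
    return cv2.cvtColor(merged, cv2.COLOR_LAB2BGR)
```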

22 pages, 8901 KB  
Article
D3Fusion: Decomposition–Disentanglement–Dynamic Compensation Framework for Infrared-Visible Image Fusion in Extreme Low-Light
by Wansi Yang, Yi Liu and Xiaotian Chen
Appl. Sci. 2025, 15(16), 8918; https://doi.org/10.3390/app15168918 - 13 Aug 2025
Viewed by 711
Abstract
Infrared-visible image fusion quality is critical for nighttime perception in autonomous driving and surveillance but suffers severe degradation under extreme low-light conditions, including irreversible texture loss in visible images, thermal boundary diffusion artifacts, and overexposure under dynamic non-uniform illumination. To address these challenges, a Decomposition–Disentanglement–Dynamic Compensation framework, D3Fusion, is proposed. First, a Retinex-inspired Decomposition Illumination Net (DIN) decomposes inputs into enhanced images and degradative illumination maps for joint low-light recovery. Second, an illumination-guided encoder and a multi-scale differential compensation decoder dynamically balance cross-modal features. Finally, a progressive three-stage training paradigm, from illumination correction through feature disentanglement to adaptive fusion, resolves optimization conflicts. Compared with state-of-the-art methods on the LLVIP, TNO, MSRS, and RoadScene datasets, D3Fusion achieves an average improvement of 1.59% in standard deviation (SD), 6.9% in spatial frequency (SF), 2.59% in edge intensity (EI), and 1.99% in visual information fidelity (VIF), demonstrating superior performance in extreme low-light scenarios. The framework effectively suppresses thermal diffusion artifacts while mitigating exposure imbalance, adaptively brightening scenes while preserving texture details in shadowed regions. This significantly improves fusion quality for nighttime images by enhancing salient information, establishing a robust solution for multimodal perception under illumination-critical conditions.
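
Two of the fusion-quality metrics cited above, standard deviation (SD) and spatial frequency (SF), have standard definitions; a minimal sketch of how they can be computed on a grayscale fused image is given below. This reflects the common definitions only, not the paper's exact evaluation protocol (EI and VIF are omitted).

```python
# Common no-reference fusion metrics: SD (global contrast) and SF (gradient
# energy along rows and columns). Inputs are grayscale images as NumPy arrays.
import numpy as np

def standard_deviation(img: np.ndarray) -> float:
    """SD: standard deviation of pixel intensities."""
    return float(np.std(img.astype(np.float64)))

def spatial_frequency(img: np.ndarray) -> float:
    """SF: sqrt(RF^2 + CF^2) from horizontal and vertical pixel differences."""
    f = img.astype(np.float64)
    rf = np.sqrt(np.mean((f[:, 1:] - f[:, :-1]) ** 2))   # row frequency
    cf = np.sqrt(np.mean((f[1:, :] - f[:-1, :]) ** 2))   # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))
```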

15 pages, 7340 KB  
Article
Multi-Modular Network-Based Retinex Fusion Approach for Low-Light Image Enhancement
by Jiarui Wang, Yu Sun and Jie Yang
Electronics 2024, 13(11), 2040; https://doi.org/10.3390/electronics13112040 - 23 May 2024
Cited by 2 | Viewed by 1726
Abstract
Current low-light image enhancement techniques prioritize increasing image luminance but fail to address issues such as color distortion and the loss of intricate image detail. To address these often-overlooked issues, this paper proposes a multi-module optimization network for enhancing low-light images by integrating deep learning with Retinex theory. First, we create a decomposition network to separate the illumination and reflection components of the low-light image. An enhanced global spatial attention (GSA) module is incorporated into the decomposition network to boost its flexibility and adaptability; this module improves the extraction of comprehensive information from the image and guards against information loss. To increase the luminosity of the illumination component, we then construct an enhancement network. A Multiscale Guidance Block (MSGB) is integrated into the enhancement network, together with multilayer dilated convolutions that expand the receptive field and strengthen the network's feature-extraction capability. The proposed method outperforms existing approaches in both objective metrics and subjective evaluations, highlighting the merits of the procedure outlined in this paper.
(This article belongs to the Special Issue Modern Computer Vision and Image Analysis)
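
The decomposition step described above follows the usual Retinex-Net pattern of predicting a reflectance map and an illumination map from the input image. A minimal PyTorch sketch of such a decomposition network is given below; it omits the paper's GSA and MSGB modules, so it is a structural illustration only, not the authors' architecture.

```python
# Minimal Retinex-style decomposition network: maps a 3-channel low-light image
# to a 3-channel reflectance map and a 1-channel illumination map.
import torch
import torch.nn as nn

class DecompositionNet(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.out = nn.Conv2d(channels, 4, 3, padding=1)  # 3 reflectance + 1 illumination

    def forward(self, x: torch.Tensor):
        y = self.out(self.features(x))
        reflectance = torch.sigmoid(y[:, :3])
        illumination = torch.sigmoid(y[:, 3:4])
        return reflectance, illumination
```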

16 pages, 4611 KB  
Article
Tunnel Crack Detection Method and Crack Image Processing Algorithm Based on Improved Retinex and Deep Learning
by Jie Wu and Xiaoqian Zhang
Sensors 2023, 23(22), 9140; https://doi.org/10.3390/s23229140 - 13 Nov 2023
Cited by 13 | Viewed by 2944
Abstract
Tunnel cracks are the main factors that cause damage and collapse of tunnel structures. How to detect tunnel cracks efficiently and effectively avoid the safety accidents they cause is a current research hotspot. To meet the need for efficient detection of tunnel cracks, a tunnel crack detection method based on improved Retinex and deep learning is proposed in this paper. Tunnel crack images collected by optical imaging equipment are processed with an image enhancement algorithm that performs multi-scale Retinex decomposition with improved central filtering to improve their contrast. An improved VGG19 network model is constructed to achieve efficient segmentation of tunnel crack images through deep learning and to produce the segmented binary image. The Zhang–Suen fast parallel-thinning method is then used to obtain a single-pixel-wide skeleton map, from which the length and width of the tunnel cracks are obtained. The feasibility and effectiveness of the proposed method are verified by experiments. Compared with other methods in the literature, the maximum deviation in tunnel crack length is about 5 mm and the maximum deviation in tunnel crack width is about 0.8 mm. The experimental results show that the proposed method has a shorter detection time and higher detection accuracy. The results of this paper can provide a strong basis for the health evaluation of tunnels.
(This article belongs to the Section Sensing and Imaging)
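
A rough sketch of the skeleton-based measurement step described above is given below: a binary crack mask is thinned to a one-pixel-wide centerline, crack length is estimated from the skeleton, and mean width as area divided by length. scikit-image's skeletonize is used here in place of the paper's Zhang–Suen implementation, and pixel_size_mm is a hypothetical calibration factor, so this is a simplification rather than the authors' method.

```python
# Estimate crack length and mean width from a binary segmentation mask via
# skeletonization; pixel_size_mm converts pixel counts to millimetres.
import numpy as np
from skimage.morphology import skeletonize

def measure_crack(mask: np.ndarray, pixel_size_mm: float) -> tuple[float, float]:
    """mask: boolean crack segmentation; returns (length_mm, mean_width_mm)."""
    skeleton = skeletonize(mask.astype(bool))
    length_px = int(skeleton.sum())            # approximate centerline length
    area_px = int(mask.astype(bool).sum())
    mean_width_px = area_px / max(length_px, 1)
    return length_px * pixel_size_mm, mean_width_px * pixel_size_mm
```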

16 pages, 6525 KB  
Article
X-ray Single Exposure Imaging and Image Processing of Objects with High Absorption Ratio
by Yanxiu Liu, Kaitai Li, Dan Ding, Ye Li and Peng Zhao
Sensors 2023, 23(5), 2498; https://doi.org/10.3390/s23052498 - 23 Feb 2023
Cited by 2 | Viewed by 2134
Abstract
The dynamic range of an X-ray digital imaging system is very important when detecting objects with a high absorption ratio. In this paper, a ray-source filter is used to remove the low-energy ray components, which cannot penetrate the high-absorptivity object, thereby reducing the integral X-ray intensity. This enables effective imaging of high-absorptivity objects while avoiding image saturation of low-absorptivity objects, thus achieving single-exposure imaging of objects with a high absorption ratio. However, this method reduces image contrast and weakens the image structure information. Therefore, this paper proposes a contrast enhancement method for X-ray images based on Retinex. First, based on Retinex theory, a multi-scale residual decomposition network decomposes the image into an illumination component and a reflection component. Then, the contrast of the illumination component is enhanced through a U-Net model with a global–local attention mechanism, and the reflection component is detail-enhanced using an anisotropic diffused residual dense network. Finally, the enhanced illumination and reflection components are fused. The results show that the proposed method can effectively enhance the contrast of X-ray single-exposure images of high-absorption-ratio objects and can fully display the structural information of images on devices with a low dynamic range.
(This article belongs to the Section Sensing and Imaging)
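
A simplified classical stand-in for the decompose-enhance-fuse pipeline described above is sketched below: illumination is estimated with a large Gaussian blur, the reflectance is the ratio image, the two parts are enhanced separately (here only a gamma curve is applied to the illumination), and they are multiplied back together. The learned components in the paper (multi-scale residual decomposition network, attention U-Net, residual dense network) are deliberately replaced by these placeholders for illustration.

```python
# Classical Retinex-style decomposition and recombination on a grayscale image.
import cv2
import numpy as np

def retinex_enhance(gray: np.ndarray, sigma: float = 40.0, gamma: float = 0.6) -> np.ndarray:
    f = gray.astype(np.float64) / 255.0 + 1e-6
    illumination = cv2.GaussianBlur(f, (0, 0), sigma)   # smooth illumination estimate
    reflectance = f / illumination                       # structure/detail component
    illumination = np.power(illumination, gamma)         # compress dynamic range, brighten
    out = np.clip(illumination * reflectance, 0.0, 1.0)
    return (out * 255.0).astype(np.uint8)
```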

19 pages, 6265 KB  
Article
EIEN: Endoscopic Image Enhancement Network Based on Retinex Theory
by Ziheng An, Chao Xu, Kai Qian, Jubao Han, Wei Tan, Dou Wang and Qianqian Fang
Sensors 2022, 22(14), 5464; https://doi.org/10.3390/s22145464 - 21 Jul 2022
Cited by 13 | Viewed by 3952
Abstract
In recent years, deep convolutional neural network (CNN)-based image enhancement has shown outstanding performance. However, due to the uneven illumination and low contrast of endoscopic images, medical endoscopic image enhancement using CNNs is still an exploratory and challenging task. An endoscopic image enhancement network (EIEN) based on Retinex theory is proposed in this paper to solve these problems. The structure consists of three parts: a decomposition network, an illumination correction network, and a reflection component enhancement algorithm. First, the decomposition network of pre-trained Retinex-Net is retrained on the endoscopic image dataset, and the images are then decomposed into illumination and reflection components by this network. Second, the illumination components are corrected by the proposed self-attention-guided multi-scale pyramid structure, where the pyramid captures the multi-scale information of the image. The self-attention mechanism is based on the imaging nature of the endoscopic image: the inverse image of the illumination component is fused with the features of the green and blue channels of the image to be enhanced, generating a weight map that reassigns weights along the spatial dimension of the feature map and thus avoids the loss of detail during multi-scale feature fusion and image reconstruction. The reflection component is enhanced by sub-channel stretching and weighted fusion, which strengthens vascular information and image contrast. Finally, the enhanced illumination and reflection components are multiplied to obtain the reconstructed image. We compare the results of the proposed method with six other methods on a test set. The experimental results show that EIEN enhances the brightness and contrast of endoscopic images and highlights vascular and tissue information, and the method achieves the best results in terms of both visual perception and objective evaluation.
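
As an illustration of the "sub-channel stretching and weighted fusion" idea used for the reflection component, a minimal sketch is given below: each color channel is stretched between low and high percentiles and then blended with the original channel using a per-channel weight. The percentile bounds and weights are placeholder values, not the paper's settings.

```python
# Per-channel percentile stretching followed by weighted blending with the
# original reflection component (float image in [0, 1], shape (H, W, 3)).
import numpy as np

def stretch_and_fuse(reflection: np.ndarray,
                     weights=(0.6, 0.8, 0.8),
                     low_pct: float = 1.0, high_pct: float = 99.0) -> np.ndarray:
    out = np.empty_like(reflection)
    for c in range(3):
        ch = reflection[..., c]
        lo, hi = np.percentile(ch, [low_pct, high_pct])
        stretched = np.clip((ch - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
        out[..., c] = weights[c] * stretched + (1.0 - weights[c]) * ch
    return out
```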

24 pages, 9933 KB  
Article
Image Enhancement for Inspection of Cable Images Based on Retinex Theory and Fuzzy Enhancement Method in Wavelet Domain
by Xuhui Ye, Gongping Wu, Le Huang, Fei Fan and Yongxiang Zhang
Symmetry 2018, 10(11), 570; https://doi.org/10.3390/sym10110570 - 1 Nov 2018
Cited by 10 | Viewed by 3414
Abstract
Inspection images of power transmission lines provide visual interaction for the operator and environmental perception for the cable inspection robot (CIR). However, inspection images are often contaminated by severe outdoor working conditions such as uneven illumination, low contrast, and speckle noise. Therefore, this paper proposes a novel method based on Retinex and fuzzy enhancement to improve the quality of the inspection images. A modified multi-scale Retinex (MSR) is proposed to compensate for uneven illumination by processing the low-frequency components after wavelet decomposition. In addition, a fuzzy enhancement method is proposed to refine edge information and improve contrast by processing the high-frequency components. A noise reduction procedure based on soft thresholding is used to avoid noise amplification. Experiments on a self-built standard test dataset show that the algorithm can improve the image quality by three to four times. Compared with several other methods, the experimental results demonstrate that the proposed method achieves better enhancement performance, with more homogeneous illumination and higher contrast. Further research will focus on improving the real-time performance and parameter adaptation of the algorithm.
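
A hedged sketch of the wavelet-domain processing described above follows: a single-level 2-D DWT splits the image into low- and high-frequency bands, the approximation band receives a simple Retinex-style illumination correction, the detail bands are soft-thresholded to avoid amplifying speckle noise, and the bands are recombined. The paper's modified MSR and fuzzy enhancement operators are replaced here by placeholder operations, and the wavelet choice and threshold are assumptions.

```python
# Wavelet-domain enhancement sketch: illumination correction on the low-frequency
# band, soft thresholding of the high-frequency bands, then reconstruction.
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def enhance_inspection_image(gray: np.ndarray, thresh: float = 5.0) -> np.ndarray:
    f = gray.astype(np.float64)
    cA, (cH, cV, cD) = pywt.dwt2(f, "db4")               # single-level 2-D DWT
    illum = gaussian_filter(cA, sigma=15) + 1e-6          # smooth illumination estimate
    cA = cA / illum * illum.mean()                        # placeholder Retinex-style correction
    cH, cV, cD = (pywt.threshold(c, thresh, mode="soft")  # suppress speckle noise
                  for c in (cH, cV, cD))
    out = pywt.idwt2((cA, (cH, cV, cD)), "db4")
    return np.clip(out, 0, 255).astype(np.uint8)
```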
