Search Results (65)

Search Parameters:
Keywords = image defogging

11 pages, 2091 KB  
Article
Underwater Image Enhancement Method Based on Vision Mamba
by Yongjun Wang, Zhuo Chen, Maged Al-Barashi and Zeyu Tang
Electronics 2025, 14(17), 3411; https://doi.org/10.3390/electronics14173411 - 27 Aug 2025
Abstract
To address issues like haze, blurring, and color distortion in underwater images, this paper proposes a novel underwater image enhancement model called U-Vision Mamba, built on the Vision Mamba framework. The core innovation lies in a U-shaped network encoder for multi-scale feature extraction, combined with a novel multi-scale sparse attention fusion module to effectively aggregate these features. This fusion module leverages sparse attention to capture global context while preserving fine details. The decoder then refines these aggregated features to generate high-quality underwater images. Experimental results on the UIEB dataset demonstrate that U-Vision Mamba significantly reduces image blurring and corrects color distortion, achieving a PSNR of 25.65 dB and an SSIM of 0.972. Both comprehensive subjective evaluation and objective metrics confirm the model’s superior performance and robustness, making it a promising solution for improving the clarity and usability of underwater imagery in applications like marine exploration and environmental monitoring. Full article
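Several entries in these results report PSNR in decibels; as a quick reference for how that figure is computed, here is a minimal NumPy sketch of the metric (a generic illustration, not the authors' evaluation code):

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a uniform error of 1 gray level over an 8-bit image
ref = np.full((4, 4), 100, dtype=np.uint8)
out = ref + 1
print(round(psnr(ref, out), 2))  # mse = 1, so 10*log10(255**2) ≈ 48.13
```

Higher is better; the 25.65 dB reported above is measured against the UIEB reference images.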

22 pages, 2839 KB  
Article
Multi-Scale Image Defogging Network Based on Cauchy Inverse Cumulative Function Hybrid Distribution Deformation Convolution
by Lu Ji and Chao Chen
Sensors 2025, 25(16), 5088; https://doi.org/10.3390/s25165088 - 15 Aug 2025
Viewed by 278
Abstract
The aim of this study was to address the issue of significant performance degradation in existing defogging algorithms under extreme fog conditions. Traditional Taylor series-based deformable convolutions are limited by local approximation errors, while the heavy-tailed characteristics of the Cauchy distribution can more successfully model outliers in fog images. The following improvements are made: (1) A displacement generator based on the inverse cumulative distribution function (ICDF) of the Cauchy distribution is designed to transform uniform noise into sampling points with a long-tailed distribution. A novel double-peak Cauchy ICDF is proposed to dynamically balance the heavy-tailed characteristics of the Cauchy ICDF, enhancing the modeling capability for sudden changes in fog concentration. (2) An innovative Cauchy–Gaussian fusion module is proposed to dynamically learn and generate hybrid coefficients, combining the complementary advantages of the two distributions to dynamically balance the representation of smooth regions and edge details. (3) Tree-based multi-path and cross-resolution feature aggregation is introduced, achieving local–global feature adaptive fusion through adjustable window sizes (3/5/7/11) for parallel paths. Experiments on the RESIDE dataset demonstrate that the proposed method achieves a 2.26 dB improvement in the peak signal-to-noise ratio compared to that obtained with the TaylorV2 expansion attention mechanism, with an improvement of 0.88 dB in heavily hazy regions (fog concentration > 0.8). Ablation studies validate the effectiveness of Cauchy distribution convolution in handling dense fog and conventional lighting conditions. This study provides a new theoretical perspective for modeling in computer vision tasks, introducing a novel attention mechanism and multi-path encoding approach. Full article
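The displacement generator described above maps uniform noise through the inverse cumulative distribution function of the Cauchy distribution, whose closed form is x = loc + scale·tan(π(u − 1/2)). A minimal sketch of that transform and its heavy tails (function and parameter names are illustrative, not the paper's):

```python
import numpy as np

def cauchy_icdf(u, loc=0.0, scale=1.0):
    """Inverse CDF of the Cauchy distribution: maps uniform(0, 1) samples
    to a heavy-tailed distribution, the transform the displacement
    generator applies to its noise input."""
    return loc + scale * np.tan(np.pi * (u - 0.5))

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
samples = cauchy_icdf(u)
# Heavy tails: for a standard Cauchy, P(|x| > 10) ≈ 6.3%, far more mass
# than any Gaussian places that far out
print((np.abs(samples) > 10).mean() > 0.05)  # True
```

The long-tailed sampling points are what let the deformable convolution reach across sudden changes in fog concentration.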

21 pages, 2657 KB  
Article
A Lightweight Multi-Stage Visual Detection Approach for Complex Traffic Scenes
by Xuanyi Zhao, Xiaohan Dou, Jihong Zheng and Gengpei Zhang
Sensors 2025, 25(16), 5014; https://doi.org/10.3390/s25165014 - 13 Aug 2025
Viewed by 402
Abstract
In complex traffic environments, image degradation due to adverse factors such as haze, low illumination, and occlusion significantly compromises the performance of object detection systems in recognizing vehicles and pedestrians. To address these challenges, this paper proposes a robust visual detection framework that integrates multi-stage image enhancement with a lightweight detection architecture. Specifically, an image preprocessing module incorporating ConvIR and CIDNet is designed to perform defogging and illumination enhancement, thereby substantially improving the perceptual quality of degraded inputs. Furthermore, a novel enhancement strategy based on the Horizontal/Vertical-Intensity color space is introduced to decouple brightness and chromaticity modeling, effectively enhancing structural details and visual consistency in low-light regions. In the detection phase, a lightweight state-space modeling network, Mamba-Driven Lightweight Detection Network with RT-DETR Decoding, is proposed for object detection in complex traffic scenes. This architecture integrates VSSBlock and XSSBlock modules to enhance detection performance, particularly for multi-scale and occluded targets. Additionally, a VisionClueMerge module is incorporated to strengthen the perception of edge structures by effectively fusing multi-scale spatial features. Experimental evaluations on traffic surveillance datasets demonstrate that the proposed method surpasses the mainstream YOLOv12s model in terms of mAP@50–90, achieving a performance gain of approximately 1.0 percentage point (from 0.759 to 0.769). While ensuring competitive detection accuracy, the model exhibits reduced parameter complexity and computational overhead, thereby demonstrating superior deployment adaptability and robustness. This framework offers a practical and effective solution for object detection in intelligent transportation systems operating under visually challenging conditions. Full article
(This article belongs to the Section Sensing and Imaging)

21 pages, 16422 KB  
Article
DCE-Net: An Improved Method for Sonar Small-Target Detection Based on YOLOv8
by Lijun Cao, Zhiyuan Ma, Qiuyue Hu, Zhongya Xia and Meng Zhao
J. Mar. Sci. Eng. 2025, 13(8), 1478; https://doi.org/10.3390/jmse13081478 - 31 Jul 2025
Viewed by 231
Abstract
Sonar is the primary tool used for detecting small targets at long distances underwater. Due to the influence of the underwater environment and imaging mechanisms, sonar images face challenges such as a small number of target pixels, insufficient data samples, and uneven category distribution. Existing target detection methods are unable to effectively extract features from sonar images, leading to high false positive rates and affecting the accuracy of target detection models. To counter these challenges, this paper presents a novel sonar small-target detection framework named DCE-Net that refines the YOLOv8 architecture. The Detail Enhancement Attention Block (DEAB) utilizes multi-scale residual structures and channel attention mechanism (AM) to achieve image defogging and small-target structure completion. The lightweight spatial variation convolution module (CoordGate) reduces false detections in complex backgrounds through dynamic position-aware convolution kernels. The improved efficient multi-scale AM (MH-EMA) performs scale-adaptive feature reweighting and combines cross-dimensional interaction strategies to enhance pixel-level feature representation. Experiments on a self-built sonar small-target detection dataset show that DCE-Net achieves an mAP@0.5 of 87.3% and an mAP@0.5:0.95 of 41.6%, representing improvements of 5.5% and 7.7%, respectively, over the baseline YOLOv8. This demonstrates that DCE-Net provides an efficient solution for underwater detection tasks. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Underwater Sonar Images)

21 pages, 2514 KB  
Article
Investigations into Picture Defogging Techniques Based on Dark Channel Prior and Retinex Theory
by Lihong Yang, Zhi Zeng, Hang Ge, Yao Li, Shurui Ge and Kai Hu
Appl. Sci. 2025, 15(15), 8319; https://doi.org/10.3390/app15158319 - 26 Jul 2025
Viewed by 257
Abstract
To address the concerns of contrast deterioration, detail loss, and color distortion in images produced under haze conditions in scenarios such as intelligent driving and remote sensing detection, an algorithm for image defogging that combines Retinex theory and the dark channel prior is proposed in this paper. The method builds a two-stage optimization framework: in the first stage, global contrast enhancement is achieved by Retinex preprocessing, which effectively improves detail in dark areas and the accuracy of the transmittance map and atmospheric light estimation; in the second stage, a compensation model for the dark channel prior is constructed, and a depth-map-guided transmittance correction mechanism is introduced to obtain a refined transmittance map. At the same time, the atmospheric light intensity is accurately estimated via the Otsu algorithm and edge constraints, which effectively suppresses the halo artifacts and color deviation that dark channel prior defogging produces in sky regions. Experiments on self-collected data and public datasets show that the proposed algorithm offers better detail preservation (the visible edge ratio improves by at least 0.1305) and color reproduction (the saturated pixel ratio drops to about 0) in subjective evaluation, and its average gradient ratio reaches a maximum of 3.8009, an improvement of 36–56% over the classical DCP and Tarel algorithms. The method provides a robust image defogging solution for computer vision systems under complex meteorological conditions. Full article
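The dark channel prior that this and several neighboring entries build on reduces to a channel-wise minimum followed by a local minimum filter, from which a coarse transmittance map follows. A generic sketch (window size, `omega`, and the function names are illustrative defaults from the literature, not this paper's implementation):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over the RGB channels, then a local minimum
    filter over a patch: the 'dark channel' of a (H, W, 3) image."""
    min_rgb = img.min(axis=2)                # min over color channels
    return minimum_filter(min_rgb, size=patch)

def transmission(img, atmos, patch=15, omega=0.95):
    """Coarse transmittance estimate t = 1 - omega * dark_channel(I / A);
    omega < 1 keeps a trace of haze so distant scenes look natural."""
    normed = img.astype(np.float64) / atmos  # normalize by atmospheric light
    return 1.0 - omega * dark_channel(normed, patch)

# Toy check: an image whose darkest channel is 0.2 gives t = 1 - 0.95 * 0.2
img = np.zeros((5, 5, 3))
img[..., 0], img[..., 1], img[..., 2] = 0.2, 0.5, 0.9
print(round(transmission(img, atmos=1.0)[0, 0], 2))  # 0.81
```

In practice this coarse map is then refined, e.g. with the depth-map-guided correction the abstract describes or with guided filtering.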

15 pages, 126037 KB  
Article
An Improved Dark Channel Prior Method for Video Defogging and Its FPGA Implementation
by Lin Wang, Zhongqiang Luo and Li Gao
Symmetry 2025, 17(6), 839; https://doi.org/10.3390/sym17060839 - 27 May 2025
Viewed by 564
Abstract
In fog, rain, snow, haze, and other complex environments, images captured by imaging equipment are prone to blurring, contrast degradation, and other problems. The resulting decline in image quality fails to satisfy the requirements of application scenarios such as video surveillance, satellite reconnaissance, and target tracking. To address the shortcomings of the traditional dark channel prior algorithm in video defogging, this paper proposes an improved guided filtering algorithm that refines the transmittance image and reduces the halo effect of the traditional method. Meanwhile, a gamma correction method is proposed to recover the defogged image and enhance image details in low-light environments. A parallel symmetric pipeline design on the FPGA improves the system's overall stability, and the improved dark channel prior algorithm is realized through hardware–software co-design between an ARM processor and the FPGA. Experiments show that the algorithm improves the Underwater Image Quality Measure (UIQM), Average Gradient (AG), and Information Entropy (IE) of the image, while the system stably processes video at a resolution of 1280 × 720 @ 60 fps. Board-level analysis of power consumption and resource usage shows that the FPGA draws only 2.242 W, placing the hardware circuit design in the low-power category. Full article
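The gamma correction step this abstract applies after defogging is a pointwise power law on normalized intensities; a minimal sketch (the exponent here is illustrative, not the paper's value):

```python
import numpy as np

def gamma_correct(img, gamma=0.6):
    """Pointwise power-law correction: out = in**gamma on [0, 1] values.
    gamma < 1 lifts dark regions (useful after low-light defogging);
    gamma > 1 darkens."""
    norm = img.astype(np.float64) / 255.0
    out = np.power(norm, gamma)
    return (out * 255.0).round().astype(np.uint8)

dark = np.array([[16, 64, 255]], dtype=np.uint8)
print(gamma_correct(dark, 0.5))  # [[ 64 128 255]]
```

Note the square-root-like curve lifts the dark pixels strongly while leaving white untouched, which is why it recovers detail in low-light regions.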
(This article belongs to the Section Engineering and Materials)

19 pages, 5898 KB  
Article
Multi-Module Combination for Underwater Image Enhancement
by Zhe Jiang, Huanhuan Wang, Gang He, Jiawang Chen, Wei Feng and Gaosheng Luo
Appl. Sci. 2025, 15(9), 5200; https://doi.org/10.3390/app15095200 - 7 May 2025
Viewed by 544
Abstract
Underwater observation and operation by divers and underwater robots still largely depend on optical methods such as cameras and video. However, the poor quality of images captured in murky waters greatly hinders underwater operations in such areas. To address degraded images, this paper proposes a multi-module combination method (UMMC) for underwater image enhancement, a new solution for processing a single image. Specifically, the process consists of five modules working in tandem, giving UMMC the flexibility to address key challenges such as color distortion, haze, and low contrast. The UMMC framework starts with a color deviation detection module that intelligently separates images with and without color deviation, followed by a color and white balance correction module to restore accurate color. Effective defogging is then performed using a rank-one prior matrix-based approach, while a reference curve transformation adaptively enhances the contrast. Finally, the fusion module combines the visibility and contrast functions with reference to two weights to produce clear and natural results. Extensive experimental results demonstrate the effectiveness of the proposed method, which performs well against existing algorithms on both real and synthetic data. Full article
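White balance correction of the kind UMMC's second module performs is often illustrated with the gray-world baseline: scale each channel so its mean matches the overall mean intensity. A minimal sketch (a standard baseline for intuition, not the paper's actual module):

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: assume the average scene color is gray,
    so rescale each RGB channel's mean to the global mean intensity."""
    img = img.astype(np.float64)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    gain = means.mean() / means               # per-channel correction gains
    return np.clip(img * gain, 0, 255).round().astype(np.uint8)

# A cast where red is suppressed and blue boosted (typical underwater look)
tinted = np.zeros((2, 2, 3))
tinted[..., 0], tinted[..., 1], tinted[..., 2] = 50, 100, 150
print(gray_world(tinted)[0, 0])  # [100 100 100]
```

Real underwater correction is more involved (strong red attenuation violates the gray-world assumption), which is presumably why UMMC gates the correction behind a color deviation detector.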

19 pages, 5870 KB  
Article
Tilt-Induced Error Compensation with Vision-Based Method for Polarization Navigation
by Meng Yuan, Xindong Wu, Chenguang Wang and Xiaochen Liu
Appl. Sci. 2025, 15(9), 5060; https://doi.org/10.3390/app15095060 - 2 May 2025
Viewed by 542
Abstract
To rectify significant heading calculation errors in polarized light navigation for unmanned aerial vehicles (UAVs) under tilted states, this paper proposes a method for compensating horizontal attitude angles based on horizon detection. First, a defogging enhancement algorithm that integrates Retinex theory with dark channel prior is adopted to improve image quality in low-illumination and hazy environments. Second, a dynamic threshold segmentation method in the HSV color space (Hue, Saturation, and Value) is proposed for robust horizon region extraction, combined with an improved adaptive bilateral filtering Canny operator for edge detection, aimed at balancing detail preservation and noise suppression. Then, the progressive probabilistic Hough transform is used to efficiently extract parameters of the horizon line. The calculated horizontal attitude angles are utilized to convert the body frame to the navigation frame, achieving compensation for polarization orientation errors. Onboard experiments demonstrate that the horizontal attitude angle estimation error remains within 0.3°, and the heading accuracy after compensation is improved by approximately 77.4% relative to uncompensated heading accuracy, thereby validating the effectiveness of the proposed algorithm. Full article

21 pages, 19981 KB  
Article
Research on Image Segmentation and Defogging Technique of Coal Gangue Under the Influence of Dust Gradient
by Zhenghan Qin, Judong Jing, Libao Li, Yong Yuan, Yong Li and Bo Li
Appl. Sci. 2025, 15(4), 1947; https://doi.org/10.3390/app15041947 - 13 Feb 2025
Viewed by 704
Abstract
To address the challenges of low accuracy in coal gangue image recognition and poor segmentation performance under the influence of dust in underground coal mines, a scaled simulation platform was constructed to replicate the longwall top coal caving face. This platform utilized real coal gangue particles as the raw material and employed dust simulation to mimic the dust conditions typically found in coal mines. Images of coal gangue without dust and under varying dust concentrations were then collected for analysis. In parallel, an improved DeeplabV3+ coal gangue image segmentation model is proposed, where ResNeSt is employed as the backbone network of DeeplabV3+, thereby enhancing the model’s capability to extract features of both coal and gangue. Furthermore, two channel attention modules (ECAs) are incorporated to augment the model’s ability to recognize edge features in coal gangue images. A class-label smoothing training strategy was adopted for model training. The experimental results indicate that, compared to the original DeepLabV3+ model, the optimized model achieves improvements of 3.14%, 4.70%, and 3.83% in average accuracy, mean intersection over union (mIoU), and mean pixel accuracy, respectively. Furthermore, the number of parameters was reduced from 44.18 M to 43.86 M, the floating-point operations decreased by 8.33%, and the frames per second (FPS) increased by 45.03%. When compared to other models such as UNet, PSANet, and SegFormer, the proposed model demonstrates superior performance in coal gangue segmentation, accuracy, and parameter efficiency. A method combining dark channel prior and Gaussian weighting was employed for defogging coal gangue images under varying dust concentration conditions. The recognition performance of the coal gangue images before and after defogging was assessed across different dust concentrations. The model’s segmentation accuracy and practical applicability were validated through defogging and segmentation of both indoor and underground dust images. The recognition accuracy of coal and gangue, before and after defogging, improved by 6.8–71.8% and 5.8–45.8%, respectively, as the dust concentration increased, thereby demonstrating the model’s effectiveness in coal gangue image defogging segmentation in underground dust environments. Full article
(This article belongs to the Special Issue Novel Technologies in Intelligent Coal Mining)

11 pages, 1981 KB  
Article
Image Dehazing Technique Based on DenseNet and the Denoising Self-Encoder
by Kunxiang Liu, Yue Yang, Yan Tian and Haixia Mao
Processes 2024, 12(11), 2568; https://doi.org/10.3390/pr12112568 - 16 Nov 2024
Cited by 1 | Viewed by 1695
Abstract
The application value of low-quality photos taken in foggy conditions is significantly lower than that of clear images. As a result, restoring the original image information and enhancing the quality of degraded images captured in hazy weather are crucial. Commonly used deep learning techniques like DehazeNet, AOD-Net, and Li have shown encouraging progress in image dehazing applications. However, these methods suffer from shallow network structures that limit their estimation capability, from reliance on atmospheric scattering models to generate the final results, which makes them prone to error accumulation, and from unstable training and slow convergence. To address these problems, this paper proposes an improved end-to-end convolutional neural network method based on the denoising self-encoder and DenseNet (DAE-DenseNet). The denoising self-encoder forms the main body of the network: the encoder extracts features from hazy images, the decoder reconstructs those features to recover the image, and a boosting module further fuses features locally and globally before the dehazed image is output. Testing the defogging effect on a public dataset, the PSNR of DAE-DenseNet reaches 22.60, much higher than that of the other methods. Experiments show that the proposed dehazing method outperforms the compared algorithms to a certain extent, with no color oversaturation or over-dehazing in the results. The dehazed images are the closest to the real images and look natural and comfortable, making the dehazing effect very competitive. Full article

21 pages, 7944 KB  
Article
A Method for All-Weather Unstructured Road Drivable Area Detection Based on Improved Lite-Mobilenetv2
by Qingyu Wang, Chenchen Lyu and Yanyan Li
Appl. Sci. 2024, 14(17), 8019; https://doi.org/10.3390/app14178019 - 7 Sep 2024
Cited by 4 | Viewed by 1544
Abstract
This paper presents an all-weather drivable area detection method based on deep learning, addressing the challenges of recognizing unstructured roads and achieving clear environmental perception under adverse weather conditions in current autonomous driving systems. The method enhances the Lite-Mobilenetv2 feature extraction module and integrates a pyramid pooling module with an attention mechanism. Moreover, it introduces a defogging preprocessing module suitable for real-time detection, which transforms foggy images into clear ones for accurate drivable area detection. The experiments adopt a transfer learning-based training approach, training an all-road-condition semantic segmentation model on four datasets that include both structured and unstructured roads, with and without fog. This strategy reduces computational load and enhances detection accuracy. Experimental results demonstrate a 3.84% efficiency improvement compared to existing algorithms. Full article
(This article belongs to the Special Issue Novel Research on Image and Video Processing Technology)

24 pages, 7011 KB  
Article
A Comprehensive Review of Traditional and Deep-Learning-Based Defogging Algorithms
by Minxian Shen, Tianyi Lv, Yi Liu, Jialiang Zhang and Mingye Ju
Electronics 2024, 13(17), 3392; https://doi.org/10.3390/electronics13173392 - 26 Aug 2024
Cited by 4 | Viewed by 3481
Abstract
Images captured under adverse weather conditions often suffer from blurred textures and muted colors, which can impair the extraction of reliable information. Image defogging has emerged as a critical solution in computer vision to enhance the visual quality of such foggy images. However, there remains a lack of comprehensive studies that consolidate both traditional algorithm-based and deep learning-based defogging techniques. This paper presents a comprehensive survey of the currently proposed defogging techniques. Specifically, we first provide a fundamental classification of defogging methods: traditional techniques (including image enhancement approaches and physical-model-based defogging) and deep learning algorithms (such as network-based models and training strategy-based models). We then delve into a detailed discussion of each classification, introducing several representative image fog removal methods. Finally, we summarize their underlying principles, advantages, disadvantages, and give the prospects for future development. Full article

14 pages, 3271 KB  
Article
A MSA-YOLO Obstacle Detection Algorithm for Rail Transit in Foggy Weather
by Jian Chen, Donghui Li, Weiqiang Qu and Zhiwei Wang
Appl. Sci. 2024, 14(16), 7322; https://doi.org/10.3390/app14167322 - 20 Aug 2024
Cited by 4 | Viewed by 1674
Abstract
Obstacles on rail transit significantly compromise operational safety, particularly under dense fog conditions. To address missed and false detections in traditional rail transit detection methods, this paper proposes a multi-scale adaptive YOLO (MSA-YOLO) algorithm. The algorithm incorporates six filters: defog, white balance, gamma, contrast, tone, and sharpen, to remove fog and enhance image quality. However, determining the hyperparameters of these filters is challenging. We employ a multi-scale adaptive module to optimize filter hyperparameters, enhancing fog removal and image quality. Subsequently, YOLO is utilized to detect obstacles on rail transit tracks. The experimental results are encouraging, demonstrating the effectiveness of our proposed method in foggy scenarios. Full article

16 pages, 14680 KB  
Article
Adaptive Image-Defogging Algorithm Based on Bright-Field Region Detection
by Yue Wang, Fengying Yue, Jiaxin Duan, Haifeng Zhang, Xiaodong Song, Jiawei Dong, Jiaxin Zeng and Sidong Cui
Photonics 2024, 11(8), 718; https://doi.org/10.3390/photonics11080718 - 31 Jul 2024
Cited by 2 | Viewed by 1447
Abstract
Image defogging is an essential technology used in traffic safety monitoring, military surveillance, satellite and remote sensing image processing, medical image diagnostics, and other applications. Current methods often rely on various priors, with the dark-channel prior being the most frequently employed. However, halo and bright-field color distortion issues persist. To further improve image quality, an adaptive image-defogging algorithm based on bright-field region detection is proposed in this paper. Modifying the dark-channel image improves the abrupt changes in gray value in the traditional dark-channel image. By setting the first and second lower limits of transmittance and introducing an adaptive correction factor to adjust the transmittance of the bright-field region, the limitations of the dark-channel prior in extensive ranges and high-brightness areas can be significantly alleviated. In addition, a guide filter is utilized to enhance the initial transmittance image, preserving the details of the defogged image. The results of the experiment demonstrate that the algorithm presented in this paper effectively addresses the mentioned issues and has shown outstanding performance in both objective evaluation and subjective visual effects. Full article
(This article belongs to the Special Issue Challenges and Future Directions in Adaptive Optics Technology)

22 pages, 5638 KB  
Article
A Method for Defogging Sea Fog Images by Integrating Dark Channel Prior with Adaptive Sky Region Segmentation
by Kongchi Hu, Qingyan Zeng, Junyan Wang, Jianqing Huang and Qi Yuan
J. Mar. Sci. Eng. 2024, 12(8), 1255; https://doi.org/10.3390/jmse12081255 - 25 Jul 2024
Cited by 4 | Viewed by 1403
Abstract
Due to the detrimental impact of fog on image quality, dehazing maritime images is essential for applications such as safe maritime navigation, surveillance, environmental monitoring, and marine research. Traditional dehazing techniques, which depend on presupposed conditions, often fail to perform effectively, particularly when processing sky regions within marine fog images in which these conditions are not met. This study proposes a dark channel prior marine image dehazing method based on adaptive sky region segmentation. It effectively addresses challenges associated with traditional marine image dehazing methods, improving dehazing results affected by bright targets in the sky area and mitigating the grayish appearance caused by the dark channel. The method exploits the grayscale discontinuity at region boundaries: the grayscale value with the fewest discontinuity regions in the grayscale histogram serves as a segmentation threshold adapted to the characteristics of sea fog images, segmenting bright areas such as the sky, after which grayscale gradients identify differences between bright areas and accurately distinguish the boundaries between sky and non-sky regions. By comparing area parameters, non-sky blocks are filled, which adaptively eliminates interference from other bright non-sky areas and accurately locks onto the sky area. Furthermore, this study proposes an enhanced dark channel prior approach that optimizes transmittance locally within the sky area and globally across the image, achieved with a transmittance optimization algorithm combined with guided filtering. The atmospheric light estimate is refined through iterative adjustments, ensuring consistent brightness between the dehazed and original images. Image reconstruction employs the calculated atmospheric light and transmittance values through an atmospheric scattering model. Finally, gamma-correction technology ensures that images more accurately replicate natural colors and brightness levels. Experimental outcomes demonstrate substantial improvements in the contrast, color saturation, and visual clarity of marine fog images. Additionally, a set of foggy marine image datasets is developed for monitoring purposes. Compared with traditional dark channel prior dehazing techniques, the new approach significantly improves fog removal, enhancing the clarity of images obtained from maritime equipment and effectively mitigating the risk of maritime transportation accidents. Full article
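The atmospheric scattering model used in the reconstruction step above is shared by most prior-based methods in this list: I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the hazy image, J the scene radiance, t the transmittance, and A the atmospheric light. A minimal sketch of the inversion, flooring t as is conventional to avoid noise amplification (variable names are illustrative):

```python
import numpy as np

def recover_radiance(hazy, atmos, t, t_min=0.1):
    """Invert I = J*t + A*(1 - t) for the scene radiance J, flooring the
    transmittance so near-zero estimates do not blow up sensor noise."""
    t = np.maximum(t, t_min)[..., None]      # broadcast t over color channels
    return (hazy.astype(np.float64) - atmos) / t + atmos

# Synthetic check: haze a known scene with the model, then invert it
J = np.full((2, 2, 3), 0.4)                  # true radiance
A = 0.9                                      # atmospheric light
t = np.full((2, 2), 0.5)                     # transmittance
I = J * t[..., None] + A * (1.0 - t[..., None])
print(np.allclose(recover_radiance(I, A, t), J))  # True
```

Everything upstream (sky segmentation, transmittance refinement, atmospheric light estimation) exists to make the `atmos` and `t` fed into this inversion accurate.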
(This article belongs to the Section Ocean Engineering)
