Article

A Study on the Performance of Adaptive Neural Networks for Haze Reduction with a Focus on Precision

1 Department of Electrical Engineering, College of Engineering, Jouf University, Sakakah 72388, Saudi Arabia
2 School of Electrical Engineering, Southeast University, Nanjing 210096, China
3 Institute for Production Technology and Systems (IPTS), Leuphana Universität Lüneburg, 21335 Lüneburg, Germany
* Authors to whom correspondence should be addressed.
Mathematics 2024, 12(16), 2526; https://doi.org/10.3390/math12162526
Submission received: 18 July 2024 / Revised: 6 August 2024 / Accepted: 14 August 2024 / Published: 15 August 2024

Abstract: Visual clarity is significantly compromised, and the efficacy of numerous computer vision tasks is impeded by the widespread presence of haze in images. Innovative approaches that accurately minimize haze while preserving image features are needed to address this difficulty. This investigation of the haze removal problem brings to light the difficulties of current methods and the need to create better ones. The main goal is to provide a region-specific haze reduction approach by utilizing an Adaptive Neural Training Net (ANTN). The suggested technique uses adaptive training procedures with external haze images, pixel-segregated images, and haze-reduced images. Iteratively comparing spectral differences in hazy and non-hazy areas improves accuracy and decreases haze reduction errors. This study shows that the recommended strategy significantly improves upon existing methods in training ratio, region differentiation, and precision. The results demonstrate that the proposed method is effective, with a 9.83% drop in error rate and a 14.55% drop in differentiation time. The findings highlight the value of adaptive neural networks for haze reduction without losing image quality. The research concludes with a positive outlook on the future of haze reduction methods, which should lead to better visual clarity and overall performance across a wide range of computer vision applications.

1. Introduction

Haze reduction is an image-processing module that reduces damp haze or fog in images. Haze reduction is an important task for improving the readability and flexibility range of fog images [1]. Haze occurs due to smoke, dust particles, and other elements in the atmosphere. Haze reduction is commonly used in remote sensing applications, which identify the haze in the images [2]. Haze reduction improves the recovery process and provides a feasible solution to the users. The primary use of haze reduction is to enhance the security services provided by image-processing systems [3]. Haze reduction uses a depth estimation method to estimate the exact impact of haze. The estimation method identifies the haze and ranks problems based on severity [4]. The estimation method also predicts the cause of haze in the fog image, which reduces the computational complexity of the systems. Filtering-based haze reduction techniques are also used to reduce the haze in given fog images [5].
Current methods struggle to estimate haze in diverse image regions, resulting in inconsistent or inadequate haze removal. Early atmospheric correction using DOS (dark object subtraction) removes uniform haze, but since the scene may not contain a dark zone, these approaches cannot prevent over-dehazing and spectral distortion. Haze can also be removed by subtracting a haze thickness map scaled by a linear regression coefficient, which models the linear relationship between the haze thickness and each spectral band. This approach can remove context-aware haze, whereas DOS-based solutions only remove homogeneous haze; however, estimating the regression coefficient from the remote sensing picture is sensitive to haze particles. Multimodal neural networks acquire supplementary data from specialized sensors and reconstruct the scene. Due to differences in sensor information, this method requires a separate algorithm to reconstruct the data, but it improves outcomes by making the sensors work together. Complex neural networks utilized in real-world applications are computationally expensive. A sample haze image from an aerial view is illustrated in Figure 1.
In the above illustration, it can be seen that the region and pixel distribution vary with the input size, texture, and background colors. Such images require a common filtering step that reduces the haze impacting their structure. The identified features produce relevant information to improve the quality of the fog images [6].
Haze reduction is largely accomplished through the process of pixel and area segregation. A pyramid spatially weighted pixel attention network (PSPAN) is used for image dehazing using fog images [7]. The PSPAN addresses the issues arising from the single fog image and reduces the complexity in image-processing systems. The PSPAN extracts the key features from images using a multi-scale feature extraction technique [8]. The image-dehazing procedure benefits greatly from the retrieved features. The PSPAN finds the image’s visual representation and characteristics, which lowers the dehazing process’s energy consumption level. Pixel aggregation in fog photos is made more efficient by the PSPAN [9,10]. The method for separating regions is called a level-aware progressive network, or LAP-Net. From individual fog images, the LAP-Net extracts haze patches and pixels. The data required for image dehazing are generated from the collected information; these data predict the functional capabilities and feasibility range of the fog images. The LAP-Net increases the overall performance and reliability level of image dehazing via the segregation process [11,12].
The detection and prediction processes often make use of neural network (NN) methods and algorithms to enhance their accuracy. Reducing haze in fog images is accomplished using NN-based algorithms [13]. For dehazing, a convolutional neural network (CNN) algorithm-based single-image approach is employed. The method’s true objective is to record the motion and activities taking place outside in the fog images. Specifically, the degrading problems in the images are identified and removed using the CNN algorithm. The haze reduction procedure makes use of individual fog images that include the relevant data. The CNN algorithm-based method improves the overall visual quality range of the fog images. It also improves the effectiveness level of the image processing systems [14,15].
Image haze reduction research previously relied on prior-based methods, but it has since adopted artificial neural network learning methods. This haze removal study highlights the challenges of present technologies and the need for better ones. First, they need powerful computers: complex neural networks with many parameters work well for region-specific haze problems. Second, additional sensor input can enhance outcomes by promoting a complementary relationship between sensors; however, data reconstruction requires a distinct method due to variations in sensor information. Third, they have limited general-purpose use, as complex neural networks utilized in real-world applications are computationally expensive. A region-specific haze reduction technique using an ANTN is the major goal in addressing these difficulties. The proposed method uses adaptive training with external haze, pixel-segregated, and haze-reduced pictures. This article introduces Region-specific Haze Reduction (RHR) using the Adaptive Neural Training Net to address digital-image-processing challenges such as haze reduction. Iteratively comparing hazy and non-hazy spectral differences enhances accuracy and reduces haze reduction errors. According to this study, the recommended strategy outperforms existing methods in training ratio, region differentiation, and accuracy.
For its haze reduction capabilities, the ANTN draws upon the Res-Net-50 architecture. The training dataset consists of picture sets that include both hazy and clear versions. To concentrate on the strength of haze in specific regions, we divide each image into grids of n × n pixels.
Compared to previous approaches, this paper’s innovative use of the ANTN to reduce haze in specific regions stands out. The ANTN model separates images into n × n pixel grids, enabling personalized haze removal based on varied intensity levels within each zone, in contrast to standard systems that apply uniform dehazing across full images. This method is more precise and better at preserving image information. Additionally, the ANTN integrates adaptive training and residual learning, drawing inspiration from Res-Net-50, to enhance feature extraction and gradient flow. This method provides a more stable and extensible answer for practical uses, outperforming other studies in the field in terms of processing time and error rates. This study tackles the shortcomings of existing dehazing methods and offers a fresh, high-performance solution to digital image processing by outlining an accurate structure for region-specific processing and making use of sophisticated neural network topologies.
A Visual Attention Dehazing Network (VADN) using multi-level modification is used for image dehazing. The VADN identifies the maps of haze and also detects the prior images. The VADN method uses multi-level features which are collected from the single fog image. Information useful for dehazing can be derived from the multi-level characteristics. Image-dehazing technologies are rendered more effective, and their importance range is expanded by the VADN model [16,17]. The contributions are listed below:
  • A novel region-specific haze reduction method is introduced for suppressing spectral intensity variations observed under different pixel distributions;
  • Designing a ResNet-50-inspired architecture with three layers for intensity variation validation, haze, and de-hazed input training for identifying precise convergence;
  • The proposed method is validated using a well-known dataset using spectral intensities, classification, and distribution assessments;
  • The proposed technique is compared with the existing approaches from the discussion below using different metrics for proving its effectiveness.
The rest of the paper includes Section 2 discussing the recent literature review on the proposed topic. Section 3 entails the proposed Region-specific Haze Reduction using Adaptive Neural Training Net methodology. Section 4 discusses the obtained results from the simulations. Finally, the conclusion of the paper is drawn in Section 5.

2. Related Works

Yu et al. [18] proposed a color enhancement scene estimation approach for haze removal. The proposed approach uses single images to remove the hazes. The main aim of the approach is to identify the smoke and fog in the images. It also detects the image’s color difference, which minimizes the computation process’s complexity. The proposed approach improves the accuracy of haze removal, enhancing the systems’ performance range.
He et al. [19] presented HALP, an innovative haze removal algorithm for visible Remote Sensing Images (RSIs), which tackles issues including reduced contrast and color distortion. It is composed of two parts: an estimate of the transmission based on side window filters and an estimate of the non-uniform ambient light. The results of the experimental assessments demonstrate that HALP is more effective than the current algorithms in recovering damaged RSIs. Nevertheless, there are still many unanswered questions about HALP, such as how to measure its resilience, maximize computing efficiency, integrate data from several sensors, and design intuitive interfaces.
An improved multi-scale network for dehazing single images was created by Liu et al. [20]. The developed method is a double dehaze network that uses an intra-task knowledge transfer to train the datasets. Reducing the time-consuming level of the estimation procedure predicts the enhancement level of the single image. Image dehazing’s general adaptability and practicalities are both enhanced by the created network.
A deep joint neural model was developed for haze removal and color correction by Zhang et al. [21]. The important factors which are necessary for haze removal are estimated based on scattering features. It is also used as a dehazing method that evaluates the relevant variables for the elimination process. Experimental findings demonstrate that the proposed model enhances the efficacy and dependability of haze removal devices.
Yan et al. [22] proposed a new image visibility restoration for haze removal using color correction and composite channel prior (CCP). Simple subtraction and multiplication are implemented in the method to calculate the haze channels in the images. The actual goal is to estimate the color and brightness range of the images. The suggested approach improves the picture quality, which boosts the systems’ efficiency.
A method for dehazing a single image that makes use of unsharp masking and color expansion was presented by Ngo et al. [23]. The approach employs deep learning algorithms to extract visual information in real-time. Image dehazing’s whole computational cost range is decreased by the presented method. It is a pre-processing technique that eliminates unwanted factors from the images. The reliability and efficiency of the dehazing process are both enhanced by the new technology.
A local multi-scale feature aggregation network (LMFA-Net) was developed for picture dehazing by Liu et al. [24]. The network learns the precise correlation between the images’ clear ratings and their hazy factors. Feature extraction is used to seek out the most relevant features for subsequent processing. By improving the process’s quality, the LMFA-Net raises the dehazing speed ratio. Both the systems’ performance and their significance range are enhanced by the developed LMFA-Net.
A novel approach to single-image dehazing at night has been put forward by Si et al. [25]. Detecting haze in nighttime photographs is the real goal of the method. The suggested technique estimates the images’ nighttime effects and dark channels. The proposed method is frequently employed to eliminate unwanted haze from the images. The dehazing process is made more robust and feasible using the suggested strategy.
A transformer and residual attention fusion method called TransRA was created by Dong et al. [26] to dehaze images captured by remote sensing. Addressing the degraded quality of optical remote-sensing pictures is the primary objective. The features and haze in the provided photographs are detected by the remote-sensing technology. The approach makes use of a U-shaped autoencoder to store the haze features for subsequent tasks. A wider range of system performance is achieved by the created method.
Zhang et al. [27] introduced a guided generative adversarial dehazing network (GGADN). The introduced method uses a GAN to identify the haze in single images. It also uses a loss function that guides the module during the detection process. The GAN is mainly used here to reduce the complexity of eliminating the haze while preserving quality. The introduced method increases effectiveness in training the images for the haze removal process.
Li et al. [28] proposed an image-dehazing model using the pix2pix framework for single images. The suggested model is mostly employed to reduce issues by identifying nonlinear properties and characteristics. GAN is also used in the model to stabilize the feature that provides optimal information for the detection process. The suggested approach improves the quantitative quality of individual photos when compared to other models.
Xiao et al. [29] developed a self-supervised zero-shot dehazing network (SZDNet) for dehazing. Dark channel prior, which gathers the relevant information for dehazing, is used in the network. A multichannel quad-tree algorithm is used in the network to estimate the light values from the images. Dehazing jobs are improved in quality and quantity range, and computation complexity is reduced. The created SZDNet enhances the degree of image-dehazing performance.
An approach for dehazing single images was suggested by Yang et al. [30] using the DAE model. It removes the hazes and unwanted patterns from the single images. It is a robust approach that adjusts the characteristics of the haze removal process. The DAE model provides effective services to the applications. The proposed method recovers the information that enhances the feasibility range of the systems.
Adaptive single-image dehazing utilizing joint local–global illumination adjustment was presented by Hu et al. [31]. It uses global atmospheric lights to displace the hazes and shorten the removal time. The introduced model estimates the local differences that are presented in the images. The presented model enhances the haze removal process’s adaptability and reliability level, according to the experimental results.
Li et al. [32] designed a multi-scale single-image dehazing using Gaussian pyramids for single images. It addresses the hazes which cause various problems in single images. The proposed method also filters the problems based on the complexity and severity range. The computational process uses less energy due to the introduced method. A wider range of performance in the haze reduction procedure is achieved by the designed method.
Jin et al. [33] aim to improve vision in low-light areas and reduce glow in a single nighttime haze image. The system takes cues from generated glow pairs to manage glow effects. In particular, they suggest using an APSF-guided glow-rendering system once the light-source-aware network has been trained to identify the origins of illumination in nighttime photos. The framework is then trained on the rendered images, which results in the suppression of glow. In addition, gradient-adaptive convolution is used to pick up textures and edges in foggy environments. After learning an attention map, the network is fine-tuned using gamma correction to increase low-light intensity; this attention is strong in areas with low light levels but very weak in areas with considerable haze or glow. The method’s efficacy is proven by extensive examination of real-world nighttime hazy photographs. On the GTA5 nighttime haze dataset, the results show that the method outperforms state-of-the-art methods by 13%, achieving a PSNR of 30.38 dB.
Han et al. [34] provide a single-input, lightweight U-Net architectural neural network model for fog (haze) removal based on the dark channel prior (DCP). Operating the conventional DCP requires a significant level of computing complexity. To overcome this, the suggested model uses a two-stage neural network architecture to accomplish high-quality fog removal, swapping out the computationally difficult operations of the traditional DCP for easily accelerated convolution operations. With just 2 million parameters, the suggested model is easy to understand and use, and it makes good use of available resources. According to the experimental results, the suggested neural network model outperforms the traditional DCP with an average PSNR of 26.65 dB and an SSIM of 0.88, improvements of 11.5 dB and 0.22, respectively.

3. Region-Specific Haze Reduction Using Adaptive Neural Training Net

The article establishes an approach, called Region-specific Haze Reduction (RHR) utilizing the Adaptive Neural Training Net, to evaluate the demanding issues of haze reduction in images through digital image processing. The suggested model, the Region-specific Haze Reduction (RHR) model using the ANTN, learns the haze-intensity levels by carefully evaluating the input image and dividing it into an n × n pixel grid. The first thing the model does is look at the spectral intensity of every pixel, looking for differences that could indicate different amounts of haze. By analyzing each pixel individually, it identifies areas of haze, no matter how slight. Calculating the spectral intensity, and, in particular, the dark channel values, helps to reveal regions where haze is most prevalent. Using these values, we may estimate the transmission map, which shows the amount of absorbance and scattering in the picture. Finding the lightest spots in the dark channel is another way to measure the atmospheric light; these spots usually correspond to places where the haze is most concentrated. The picture is divided into separate areas according to the different haze intensities found after the superfluous fog components are eliminated, which decreases the spectral intensity. This region segregation is vital because it allows for targeted haze reduction, which improves visual clarity and preserves critical image information by processing each part of the image based on its specific haze characteristics. The suggested ANTN is an end-to-end dehazing model that incorporates adaptive training procedures using external haze photos, pixel-segregated images, and haze-reduced images. The network achieves accuracy and reduces errors by iteratively comparing spectral variations among hazy and non-hazy regions. Under this methodology, the model employs a neural network structure to analyze input photographs that have a hazy appearance and generate output images that are free from haze. Consequently, the process is fully integrated, which means it directly transforms unclear input photographs into clear images without the need for any intermediate stages or external methods.
The RHR technique divides the source image into sections with different intensities of haze. This segmentation improves accuracy and preserves image information while removing haze from specific regions. The input image is divided into grids following the n × n distribution, and each grid may exhibit different haze qualities. Using the dark channel, the haze intensity for each location is estimated as given in Equation (1):
$$H(x) = \min_{y \in \delta(x)} \left( \min_{c \in \{r,g,b\}} I^{c}(y) \right) \qquad (1)$$
where $I^{c}(y)$ is the color intensity of the pixel in color channel $c$, and $\delta(x)$ is the local patch centered at pixel $x$. The transmission map $t(x)$ represents the portion of light that is not scattered and reaches the camera. It is estimated in Equation (2):
$$t(x) = 1 - \mu H(x) \qquad (2)$$
where μ is the constant that controls the amount of haze removed. Finding the most illuminated pixels in the blurry image allows us to estimate the atmospheric light A. Typically, these pixels are the ones most impacted by haze. The transmission map and ambient light are used to recover the haze-free image J in Equation (3):
$$J(x) = \frac{I(x) - A}{\max(t(x),\, t_0)} + A \qquad (3)$$
where $t_0$ is a lower bound to prevent division by zero.
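To make Equations (1)-(3) concrete, the following is a minimal NumPy sketch of the dark-channel pipeline they describe. The patch size, $\mu$, and $t_0$ values are illustrative assumptions rather than the paper's tuned settings, and the top-0.1% rule for estimating the atmospheric light is a common heuristic, not a detail given in the text.

```python
# A minimal sketch of Equations (1)-(3): dark channel, transmission map,
# and scene recovery. Patch size, mu, and t0 are illustrative choices.
import numpy as np

def dark_channel(image, patch=15):
    """Eq. (1): per-pixel minimum over color channels, then a local
    minimum filter over a patch x patch neighborhood."""
    min_rgb = image.min(axis=2)                      # min over c in {r,g,b}
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(image, mu=0.95, t0=0.1):
    """Eqs. (2)-(3): t(x) = 1 - mu*H(x); atmospheric light A taken from
    the brightest dark-channel pixels; J(x) = (I(x) - A)/max(t, t0) + A."""
    H = dark_channel(image)
    t = 1.0 - mu * H                                  # Eq. (2)
    # Heuristic: A is the mean color of the top 0.1% brightest dark-channel pixels.
    flat = H.ravel()
    idx = flat.argsort()[-max(1, flat.size // 1000):]
    A = image.reshape(-1, 3)[idx].mean(axis=0)
    t = np.maximum(t, t0)[..., None]                  # lower bound t0, Eq. (3)
    return np.clip((image - A) / t + A, 0.0, 1.0)
```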

3.1. Problem Identification

The main impediment in de-hazing arises from the protruding and hidden objects due to spectral differences within the hazy environment. To evaluate this problem, the proposed RHR method targets segmenting the input image into distinct regions based on the varying levels of haze intensity across an n × n pixel grid. The recurrent comparison of spectral differences in hazy and non-hazy locations improves haze elimination. The algorithm iteratively removes haze from a rural landscape with agricultural dust by segmenting the image into sections of differing haze intensity and correcting spectral mismatches. Using the iterative method to reduce errors and improve image clarity yields a crisper, more detailed image with little haze, keeping details like distant vistas and delicate textures. The program first analyzes spectral variations in the dark channel to identify hazy regions in a cityscape image covered by dense fog. Low-intensity pixels indicate increased haze density. The model refines haze estimates by continuously changing the transmission map and comparing it to clear sky regions. This allows the model to discern actual image characteristics from haze-induced distortions, minimizing mistakes. Figure 2 is a schematic depiction of the approach that has been suggested.
This segregation, illustrated in Figure 2, aids in an efficacious understanding of the distribution of haze in the image. In the interval, the regions with less convergence are consistently categorized in a linear n × n distribution. Using this approach, the method efficaciously produces a de-hazed image by iteratively consolidating the spectral differences and adjusting the neural network’s parameters. This improves object visibility and readability in hazy images by eliminating the impact of spectral differences and improving the separation between hazy and non-hazy regions. The proposed Region-specific Haze Reduction using Adaptive Neural Training Net provides a digital image processing method that helps to control the challenges related to haze reduction and enhances image clarity.
The suggested ANTN method on region-specific processing allows it to more accurately simulate and remove differing amounts of haze across various areas of the picture, leading to clearer and more consistent dehazing, in contrast to traditional hazing methods that frequently assume homogeneous haze distribution across the image. Improved color adjustment and reconstruction are the results of a clearer separation of actual image colors from haze-induced distortions. For ANTN to learn from different structural trends and better maintain features, edges, and textures, it uses pixel-segregated photos and external haze images. This makes the dehazed images sharper and more detailed.

3.2. Method Explanation

The haze input is examined to determine the n × n distribution of the pixels in the images. This evaluation process helps in understanding how the haze is distributed across the images at a particular level, helping to determine the regions of differing intensities. This pixel-level estimation is the significant step that enables the subsequent segmentation and processing of the image, and thus it supports the effectiveness of the haze reduction and enhanced visual clarity. The building elements incorporate Res-Net-50-inspired features such as batch normalization, convolutional layers, and ReLU activation functions. Their use improves gradient flow and feature extraction, and they make learning residual functions easier. The spectral intensity is reduced after eliminating unnecessary fog in the given images. After that, the region segregation is performed based on the haze input for the detection of the n × n distribution. The haze input for the detection operation is calculated using the following equation:
$$X = \{x_1, x_2, \ldots, x_n\}, \qquad W = \{w_1, w_2, \ldots, w_n\}, \qquad X^{*} = \sum_{n=1}^{W} X(W), \qquad \frac{X^{*}}{X} = \sum_{n=1}^{W} \frac{X(W)}{X}$$
where $X$ is represented as the haze input which is analyzed for the detection of the pixel operation, and $W$ is denoted as the different regions. Now, the n × n distribution is examined for the region segregation process based on the haze input. Using the equation $P_{ij} = (w_i - w_j)(w_i - w_j)$, the pixel distribution is evaluated. This operation helps in partitioning the image into a grid of small blocks, each containing n × n pixels. The image is separated into numerous localized regions, which allows for a more precise assessment of the haze distribution across the entire given image. The estimation of the n × n distribution enables the identification of the different segments of the images, which is derived from the haze input. The adaptive training procedure utilizes a deep residual network structure inspired by Res-Net-50. This architecture is highly effective at retrieving hierarchical information from images, which is essential for comprehending and eliminating haze on many levels. External haze pictures, photos with segmented pixels, and images after haze reduction are used to train this network. As a whole, the Res-Net-50-inspired design’s superior residual learning capabilities make it an ideal fit for the haze reduction model’s adaptive training operation, facilitating efficient learning and leading to better results when dehazing complicated photos. The process of estimating the n × n distribution in the input image is elucidated by the succeeding equation:
$$P(w_i) = \frac{P(x_i, w_i)}{W}\ \forall W, \qquad P = \frac{w \times i \times m}{W}, \qquad P_{ij} = \sum_{n=1}^{x} \left(w_i x - w_j x\right)^2, \qquad P = \begin{bmatrix} P_{11} & \cdots & P_{1n} \\ \vdots & \ddots & \vdots \\ P_{n1} & \cdots & P_{nn} \end{bmatrix}$$
where $P$ is represented as the pixel distribution of the given image, $i$ is signified as the grid of the small blocks, and $j$ is characterized as the distribution of the haze. This distribution of the pixels is important in estimating the region segregation for the reduction in the haze of the images. By breaking down the image into these distinct regions, this method trains the neural network to determine and differentiate between the hazy and non-hazy regions, enhancing its ability to efficaciously reduce the haze in the images. This evaluation of the n × n distribution enhances the subsequent stages of the proposed technique, which helps in improving the object visibility and image clarity in the presence of the haze. This process is explained by the following equation:
$$K_i = \sum_{n=1} (x_{i1}, x_{i2}, \ldots, x_{in}), \qquad \sum K_i = 1, \qquad K = 1 - \frac{w_i x_i}{w_j x_j}\ \text{where } j = 1, 2, \ldots, n$$
$$K = \{K_1, K_2, \ldots, K_n\}\ \text{where } \sum_{i=1} K_i = 1, \qquad K = i \times k = \{k_i, k_j, \ldots, k_n\}$$
where $K$ is the operation of the breaking-down of the image. The region segregation process is performed depending on the n × n distribution of the haze input. The region segregation is performed based on the training images, and thus it helps in determining the haze-free outputs. The equation $L_{ij} = \sum_{n=1}(w_i P_i + w_j P_j)$ is used for region segregation, and it also helps in detecting the spectral intensity of the images with respect to the reduction in the haze. This process analyzes the localized regions after the estimation of the pixel distribution in the image. This segmentation enables the focused analysis of haze-intensity differentiation across the different parts of the images. The resulting segregated regions support the focused operation and neural training for the images, which helps in reducing the haze. After this process, the spectral differences and distribution are applied to the improvement of the image clarity. The spectral difference estimation using a sample input is presented in Figure 3 with the segregated regions.
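As a rough illustration of the segregation step just described, the sketch below partitions an image into an n × n grid and scores each cell by its mean dark-channel value as a stand-in for the per-region haze intensity. The grid size and the scoring rule are assumptions for demonstration, not the authors' exact procedure.

```python
# A sketch of the n x n segregation step: split the image into grid cells
# and estimate a per-cell haze-intensity score. Grid size n is illustrative.
import numpy as np

def segregate_regions(image, n=8):
    """Split an HxWx3 image into an n x n grid and return the grid cells
    together with a per-cell haze-intensity estimate (the mean of the
    channel-wise minimum, a patch-free stand-in for Eq. (1))."""
    h, w, _ = image.shape
    hs, ws = h // n, w // n
    intensity = np.zeros((n, n))
    cells = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            cell = image[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
            cells[i][j] = cell
            intensity[i, j] = cell.min(axis=2).mean()  # higher value => denser haze
    return cells, intensity
```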
Figure 3 is presented as a table, as shown above; the spectral difference of different $(n \times n)$ regions is analyzed in this representation. The haze input exhibits $L_{ij}$ variations observed between $w_i$ and $w_j$. This is contrary to the original (haze-free) input due to its even spectral dispersion. The Res-Net-50-inspired architecture requires this variation for identifying convergence. In particular, the training is pursued using $(w_i - w_j) \to K$ and $(w_j - w_i) \to L$ in the Res-Net-50 architecture. Based on this, the n × n distributions are guided for region segregation. The proposed model’s deep-learning training techniques use a structured approach based on a Res-Net-50-inspired design with three critical layers. Training the model on photographs captured during periods of external haze allows it to understand how different types of haze impact the sharpness of the images. Pixel-segmented pictures then divide the haze intensity into separate areas so the network can comprehend and handle different amounts of haze in each segment. The last step is to train the model using dehazed photos, which represent the desired result. During this iterative training process, which makes use of adaptive learning, the network readjusts its weights depending on the differences between hazy inputs and dehazed outcomes, as well as the intensity of haze in different regions. Reducing haze from complicated images is made easier and faster by the Res-Net-50-inspired architecture’s use of residual blocks, which also guarantee efficient gradient circulation and feature extraction. The segregation pursues major differences between successive $n$s for identifying convergence. The process of region segregation is explained by the following equation:
$$L_j = \frac{\sum_{n=1} (w_i x_i - w_j x_j)}{\sum_{n=1} P_{ij}}, \qquad L_j(w) = \sum_{n=1;\,k=1} \frac{x_i w_j}{W}$$
$$P_{ij}(w + x) = \frac{\int \sigma_x X(x)\,dx}{\int \sigma_x W(w)\,dw}, \qquad P_{ij}(w_i) = \frac{\int \sigma_w X(x)\,dx}{\int \sigma_w x_i(x_i)\,dx_i}, \qquad P_{ij}(x_i) = \frac{\int \sigma_x w(w)\,dw}{\int \sigma_x w_i(w_i)\,dw_i}$$
$$\sum_{n=1} P_{ij}(W + L) = L_{ij}(P + W) - \frac{L_{ij}}{P_{ij} + W_{ij}}$$
where L is denoted as the region segregation operation, and σ is the training given using input images. Based on the training images, the spectral intensity is estimated, and then the training images aid in detecting the haze-free outcome. The adaptive training operation engages a Res-Net-50-inspired architecture comprising three layers. It takes haze intensity maps and segmented pictures as input. Convolutional layers, batch normalization, and ReLU activation functions are part of these blocks, which were inspired by Res-Net-50. Improved gradient flow and feature extraction are two outcomes of their use in learning residual functions. When compared against both older methods and more recent deep-learning models, the ANTN approach outperforms them all because of its revolutionary region-specific haze removal and sophisticated neural network design. This comprehensive analysis and comparison showcase the innovative and efficient nature of the suggested method, showing that it might be used in practical image dehazing scenarios. This network is trained using the external haze images, images with the pixels that are organized into segments, and then the images after the haze reduction. The inspired architecture for training is illustrated in Figure 4.
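The following is a hedged PyTorch sketch of the kind of Res-Net-50-inspired building block the text describes (convolution, batch normalization, ReLU, and a residual skip, stacked in three stages). The channel widths and layer counts are illustrative assumptions, since the paper does not state its exact configuration.

```python
# A minimal sketch of a Res-Net-50-inspired block and a three-stage stack
# mapping a hazy RGB image to a de-hazed estimate. Sizes are illustrative.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))  # residual skip aids gradient flow

class ANTNSketch(nn.Module):
    """Three residual stages between an input and an output convolution."""
    def __init__(self):
        super().__init__()
        self.head = nn.Conv2d(3, 64, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(64) for _ in range(3)])
        self.tail = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, x):
        return torch.sigmoid(self.tail(self.blocks(self.head(x))))
```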
The proposed RHR model’s segregation section is crucial for properly handling the images’ heterogeneous haze distribution. In order to improve accuracy and maintain fine details, the simulation can apply customized dehazing procedures to each section of the image by separating it into smaller regions with different amounts of haze. Removing haze is more effective when performed locally rather than by applying a uniform dehazing technique to the entire image, since the model can better handle areas with varying haze densities. This guarantees that areas with heavy and light haze are treated appropriately and improves the overall clarity of the image. The Res-Net-50-based architecture is illustrated in Figure 4. The n × n distribution is split into $(1 \times 1)$ and $(3 \times 3)$, and $K$’s $P_{i,j}$ is verified. If $P_{i,j} = 1$, then $1 \times 1\,(n \times n)$ is even, and $L(i,j)$ is identified with ease. On the other hand, if $P_{i,j} \neq 1$, then both $(1 \times 1)$ and $(3 \times 3)\,(n \times n)$ require training, which is adapted using the haze-reduced training inputs. The $W(P_{i,j})$ is distinguished in the first layer, for which $w_i - w_j$ or $w_j - w_i$ is estimated. If $w_i - w_j > 0$, then the $(3 \times 3)$ to $(1 \times 1)$ convergence is high and, therefore, the spectral difference is suppressed. Based on the spectral difference, the $K$ variations are adapted for the training using $(i, j)$ (Figure 4). This training approach draws inspiration from the convolutional neural network, with repetition in its second layer. The process of training the images via the CNN is described by the following equation:
$$B(P_{ij}) = \begin{cases} 1 & \text{if } b > x \\ e^{1 - \frac{m}{x}} & \text{if } m \le x \end{cases}, \qquad B(w_i) = B(P) \cdot \sum_{n=1} W_n P_{ij}$$
$$\sum_{n=1} (w_1 \cdots w_n) = \sum_{n=1} \frac{\sum_{i=1} W_n x_i}{\sum_{j=1} W_n x_j} = \sum_{n=1} (W \times m) P_{ij}, \qquad m(P_{ij}) = \frac{\sum_{n=1} b}{\sum_{n=1} L_{ij}} L(m)$$
where $B$ is signified as the first layer of the CNN, where the training is provided for the input images. Then, the spectral intensity is estimated based on the training images and region segregation. Using the equation $S_j(L_{ij}) = \sum_{n=1}(x + y)L_{ij}$, the spectral intensity of the images is determined based on the region segregation and training images. After separating the images into n × n pixels based on the haze distribution, this method utilizes the training images to detect the spectral differences between the hazy and non-hazy regions within these segments.
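Before the formal expression below, a minimal sketch of this spectral-difference check might look as follows. The per-cell intensity maps are assumed to come from the grid-segregation sketch above, and the paired haze-free reference and the flagging threshold are illustrative assumptions.

```python
# A hedged sketch: compare per-cell haze-intensity scores of a hazy input
# against a clear reference and flag cells whose deviation is large.
import numpy as np

def spectral_difference(hazy_intensity, clear_intensity, thresh=0.08):
    """Return (mask, diff): a boolean mask of grid cells classified as hazy
    and the per-cell intensity gap used to classify them."""
    diff = np.abs(hazy_intensity - clear_intensity)
    return diff > thresh, diff
```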
$$a_{kl} = \begin{cases} a_{ij} & \text{for } k = i,\ k = j \\ a_{kj} & \text{otherwise} \end{cases}, \qquad x_k = \begin{cases} x_i & \text{for } k = 1 \\ x_j & \text{otherwise} \end{cases}, \qquad x_j = (i_j, n_{ij})$$
$$(i_j, L_j) = (i_j, L_{ij}) + (i_j, W_{ij}), \qquad A(L_i) = \sigma(L_i), \qquad A(W_i) = \sigma(W_i)$$
$$S_{mj}(X, Y) = \frac{\sum_{i=1}^{n} \sigma_X(x_i)\,\sigma_Y(x_i)}{\sum_{j=1}^{n} \sigma_X(x_i)\,\sigma_Y(y_i)}, \qquad X(L_y) = \sum_{n=1} (X, Y), \qquad S_K(X, Y) = \frac{|X \cap Y|}{|X|}$$
This method determines the measure of spectral intensity by contrasting the pixel values in the training images, as illustrated by the equation presented above, where $a$ is signified as the spectral intensity of the images and $S$ is represented as the outcome of the training process as per the CNN approach. For the sample input, the intensity variations across various regions and their values are presented in Table 1.
The $K$ value from the hazy input is used to validate the spectral intensity in Table 1. The correlation is identified in the third layer of the Res-Net-50 architecture. The validation handles both cases, i.e., $(w_i - w_j) > (w_j - w_i)$ and the reverse. The variation in $(3 \times 1)$ and $(3 \times 3)$ (without pixel change) identifies the best-fit solution. The condition $Y = T$ in the experiment holds when $w_i = w_j$, where the difference is almost null; however, this ideal case cannot be achieved until the image is de-hazed (refer to Table 1). Because the raw values in Table 1 do not by themselves represent the intensity changes across different locations, the proposed ANTN model is expected to provide performance metrics that match the observed data, showing that it can manage region-specific haze intensities and capture these changes more properly than previous haze reduction approaches. This spectral intensity estimation enables the method to evaluate the varying degrees of haze across the different regions of the image, which contributes to the subsequent reduction. It is then illuminated by the subsequent equation:
$$X \cap Y = \{x \mid x \in A \wedge x \in B\}, \qquad X(A \cap B)(x) = \sum_{n=1} (\sigma_A(x), \sigma_B(x)), \qquad \sigma_A(x) = 1 - \sigma_A(x) = \sigma_B(x) = 0$$
$$X \cup Y = \{x \mid x \in A \vee x \in B\}, \qquad X(A \cup B)(x) = \sum_{n=1} (\sigma_A(x), \sigma_B(x)), \qquad \sigma_A(x) = 1 - \sigma_A(x) = \sigma_B(x) = 1$$
where $Y$ represents the varying degrees of the haze in the images. Then, the second layer of the CNN performs the repetition operation, which helps in detecting the convergence after determining the spectral intensity in the images. The repetition process is responsible for segregating the n × n distribution into haze and non-haze using the determined spectral differences in the images. The process of repetition helps in evaluating the convergence, and thus it aids in detecting the haze-free output. Next, the discovered convergence is useful for pinpointing the exact spectral intensity. An equation describing the CNN’s second-layer repetition process is provided below:
$$\bar{Q} = \{x \mid x \in A,\ \sigma_{\bar{X}}(x) = 1 - \sigma_A(x),\ \sigma_A(x) = 0\}$$
$$\bar{K} = \{k \mid k \in B,\ \sigma_{\bar{K}}(x) = 1 - \sigma_K(x),\ \sigma_K(x) = 1\}, \qquad \sigma(x) = \sigma(x_i)$$
where $Q$ is the repetition operation, which helps in the convergence detection. The convergence is ascertained by evaluating the spectral intensity during the repeated evaluations in the network’s second layer. This convergence factor estimates the degree to which the textural intensities of the segregated regions in the images converge. The equation $\underline{S+r}(Z) = \underline{S}(Z) + \underline{r}(Z)$ helps in detecting the convergence depending on the spectral intensity. Based on the above computation, the convergence for the four segregations in Table 1 is validated in Table 2. The $3 \times 1$ and $3 \times 3$ images differ in two main respects: the amount of haze present and their dimensions. By adjusting the picture sizes of the $1 \times 1$ and $1 \times 3$ images, the pixel variation was fitted at 0.1.
The convergence estimation using $Y \to W$ is analyzed using $\underline{S}(Z) + \underline{r}(Z)$, provided $\sigma(x) = \sigma(x_i)$ at any $(i, j)$. This varies with the regions identified for $L$ and $K$ under $\bar{Q}$, representing $\{x \mid x \in A\}$. Based on this representation, the haze converging at any identified region is computed. If the convergence is not feasible, then the available $P_{ij}$ in that $W$ and the $\sigma$ are instigated from the first layer of the architecture. Therefore, the convergence at any point is identified for $S_j(L_{ij})$ across multiple $Y$, from which the de-hazing is performed (Table 2). Higher convergence indicates a consistent spectral pattern across the regions, while lower convergence indicates a greater disparity: if the spectral intensity is high, the convergence is low, and if the spectral intensity is low, the convergence is high. The process of detecting the convergence based on the spectral intensity is elucidated by the equation given below:
$$\overline{(S + r)}(Z) = \bar{S}(Z) + \bar{r}(Z), \qquad K\bar{r} = \begin{cases} K\underline{P}(Z),\ K\bar{r}(Z), & K \ge 0 \\ K\bar{r}(Z),\ K\underline{P}(Z), & k < 0 \end{cases}$$
$$\tilde{S} = \tilde{r} = \underline{S}(Z) = \underline{r}(Z) = 0, \qquad \underline{S}(Z) = W(1 - Z) - \sigma, \qquad \bar{S} - r = W + (1 - Z)P, \qquad \bar{S} - \underline{S} = \sigma + P(1 - Z)$$
where $r$ is designated as the detection of the convergence operation, and $W$ is signified as the regions of the detected convergence. Now, based on the determined convergence factor, this method evaluates the spectral difference between the adaptive end-pixels of the segmented regions. This involves analyzing the distinct textural intensities, which are validated and converged within the CNN’s second layer. As a result, the variation in haze levels between the areas is lessened. The following equation clarifies this step:
$$G = \begin{bmatrix} w & s \\ s & w \end{bmatrix}, \qquad \begin{aligned} G_{11}(Z + \bar{S} + r) + \cdots + G_{1n}(Z + \bar{S} + r) &= \underline{w}_1(Z) + \bar{w}_1(Z) \\ G_{21}(Z + \bar{S} + r) + \cdots + G_{2n}(Z + \bar{S} + r) &= \underline{w}_2(Z) + \bar{w}_2(Z) \\ &\;\;\vdots \\ G_{n1}(Z + \bar{S} + r) + \cdots + G_{nn}(Z + \bar{S} + r) &= \underline{w}_n(Z) + \bar{w}_n(Z) \end{aligned}$$
$$\underline{S}(Z) = (w + S)^{-1}\,\underline{w}(Z), \qquad \bar{S} + r = (w + S)^{-1}\,\bar{w}(Z), \qquad G^{-1} = \begin{bmatrix} w & s \\ s & w \end{bmatrix}$$
where $G$ is the detection of the spectral difference based on the convergence factor. The haze-free output is then estimated using the deep-learning technique based on the convergence factors and the spectral difference. If the spectral difference is high, the second layer is retrained with the maximum and minimum differences, preventing de-hazing errors; the determined images are fed back as input so that further training reduces the spectral difference. Only by reducing the spectral intensity in the images is the haze-free output obtained. In this operation, the convergence value is compared with the previously obtained values, using the equation $S_n(w_n) = Z_n(x + n^2)$, where $Z$ denotes the detected images and their processing. Based on the network’s analysis, these factors are incorporated into the mitigation of the haze effects. The network uses the estimated spectral differences to improve regions with significant haze, generating the de-hazed output. This technique ensures that the final image retains improved object visibility and clarity and efficaciously minimizes the impact of haze. The equations below show how the deep-learning technique recognizes haze-free outputs:
$$\underline{U}(Z) = \frac{G(Z) - w(Z)}{2}, \qquad \bar{U}(Z) = \frac{G(Z) + w(Z)}{2}, \qquad \underline{U}(Z) = \frac{G(Z) - w(1 - Z)}{2}, \qquad \bar{U}(Z) = \frac{G(Z) + w(1 - Z)}{2}$$
$$S_n(w_n) = 2n^2, \qquad S_n(w_n) = Z_n(x + n^2), \qquad K_n(L_n) = n^2, \qquad K_n(L_n) = W_n(Y + n^2)$$
$$B(w) + A(w) = n^2, \qquad \sum_{n=1} A(w)\,2n = \sum_{n=1} x_{ij}$$
$$\varphi = \frac{1}{1 - sr} + \frac{1}{sr}, \qquad \varphi_b = \frac{1}{s}, \qquad \varphi = \frac{1}{1 + w^2}$$
$$\varphi(x) = \begin{cases} 1 & \text{if } x \in A \\ 0 & \text{if } x \notin A \end{cases}, \qquad \varphi(s_j) = \begin{cases} 0 & \text{if } S \le 0 \\ 1 & \text{if } S > 0 \end{cases}$$
where $U$ is represented as the outcome of these processed factors, and $\varphi$ is represented as the haze-free outputs. This process helps in detecting the haze-free output with the help of the deep-learning technique using the CNN approach. The acquired less-convergent regions are arranged in a linear n × n pattern to generate de-hazed images, which reduces the haze where the spectral difference is larger. After segregating the regions based on the acquired haze images, the haze-free output is evaluated with the three layers of the CNN, which is inspired by the Res-Net-50-based architecture. Table 3 presents the sample input and output with the intensity variation analyzed under different $n \times n$ distributions.
In Table 3, each image varies in length and width depending on the pixels. Both the $3 \times 1$ and $3 \times 3$ shots contain haze, and the dimensions of the two sets of photos differ; the image sizes were adjusted to achieve a pixel variation of 0.1 in the $1 \times 1$ and $1 \times 3$ images. With minor adjustments, each image is displayed in one of three formats: $1 \times 3$, $3 \times 1$, and $3 \times 3$. Haze reduction can be achieved more precisely with $3 \times 1$ photographs because their intensity variations capture a greater range of haze levels across smaller regions, whereas $1 \times 3$ images may average out these variations across larger areas.
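Pulling the pieces together, a schematic of the iterative refinement loop described in this section is sketched below, reusing the `segregate_regions` and `dehaze` helpers from the earlier sketches. The tolerance, the iteration cap, and re-applying a fixed dehazing step in place of the paper's adaptive weight updates are all simplifying assumptions.

```python
# A schematic loop: cells whose per-region spectral scores have not
# converged trigger another haze-removal pass. Parameters are illustrative.
import numpy as np

def iterative_dehaze(image, n=8, tol=0.02, max_iter=5):
    output = image.copy()
    prev = None
    for _ in range(max_iter):
        _, intensity = segregate_regions(output, n)   # per-region haze score
        if prev is not None and np.abs(intensity - prev).max() < tol:
            break                                      # spectral differences converged
        prev = intensity
        output = dehaze(output, mu=0.9)                # re-apply haze removal
    return output
```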

4. Results and Discussion

The comparative analysis discussion is presented using error rate, precision, training ratio, region differentiation, and differentiation time metrics. The existing GGADN (Guided Generative Adversarial Dehazing Network) [27], BPFD-Net (Network Based on Pix2Pix Framework De-Haze) [28], and SZDNet (Self-supervised Zero-shot De-Hazing Network) [29] methods are considered from the background section alongside the proposed method.

4.1. Image Dataset Description

The images used in the above Figures and Tables are obtained from the “Dense-Haze” dataset [33]. This dataset contains 33 pairs of hazy and de-hazed images generated using haze machines. The haze and de-hazed image validations consider the structural index, signal-to-noise ratio, and homogeneity; of these, only the intensity variation feature is incorporated in the proposed method. All 33 pairs are used for training the three-layer Res-Net-50-inspired architecture. Although the “Dense-Haze” dataset is helpful as a baseline, it is challenging to train a robust CNN model with it. Data augmentation techniques such as rotating, scaling, flipping, and adjusting colors can artificially increase the dataset’s size and variability, allowing the model to learn more robust features. Performance can be further improved by pre-training the model on a bigger, related dataset and then fine-tuning it on the “Dense-Haze” dataset via transfer learning, exploiting already-learnt features. For the sake of generalizability and robustness, it is also advisable to augment the dataset with more photos: a larger and more diverse dataset would make it simpler to train a model that performs well in different real-world scenarios. Including instances from datasets beyond the one used here, with photographs depicting various environments, hazy conditions, and lighting scenarios, would bring out the model’s adaptability and validate its efficacy across a wider range of variables, supporting the assertions made about the model’s performance and offering a more thorough assessment of its practical relevance.
The suggested technique makes use of the “Dense-Haze” dataset, which includes 33 image pairs, each consisting of one hazy and one haze-free image. Because the dataset contains scenarios with varying degrees of haze intensity, it supports thorough training and testing of haze-removal techniques. The photographs depict a wide range of landscapes and urban areas, giving a solid foundation on which to train the network. The Dense-Haze dataset is an effort to promote robust solutions for real and diverse hazy settings, with the goal of considerably pushing the state of the art in single-image dehazing. A thorough quantitative and qualitative assessment of cutting-edge single-image-dehazing methods on the Dense-Haze dataset shows that, as expected, current dehazing methods underperform in dense homogeneous hazy scenes and may need significant improvement when measured using classic picture quality metrics such as PSNR, SSIM, and CIEDE2000.
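For reference, PSNR as cited above can be computed as follows (a standard definition, with inputs assumed to be floats in [0, 1]):

```python
# Peak signal-to-noise ratio between a reference and a restored image.
import numpy as np

def psnr(reference, restored, peak=1.0):
    mse = np.mean((reference - restored) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```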
Data pre-processing techniques, including normalization, scaling, and segmentation into n × n grids, are part of the network configuration for analyzing variations in haze intensity. The design, inspired by Res-Net-50, incorporates adaptive neural training, residual blocks, and three convolutional layers. The training process combines the Adam optimizer, data augmentation, and the MSE and SSIM loss functions. To improve accuracy and decrease error rates, this approach iteratively analyzes spectral differences between hazy and non-hazy areas, paying special attention to pixel-level intensity fluctuations, in order to reduce haze while preserving image features.
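A hedged sketch of such a training loop is given below; the learning rate, batch size, and the `ANTNSketch` model from the earlier sketch are illustrative assumptions, and the SSIM loss term mentioned above is omitted for brevity.

```python
# A minimal PyTorch training-loop sketch: Adam optimizer, MSE loss over
# paired hazy / haze-free images. Hyperparameters are illustrative.
import torch
from torch.utils.data import DataLoader

def train(model, paired_dataset, epochs=50, lr=1e-4):
    """paired_dataset yields (hazy, clear) tensor pairs; any flip or
    rotation augmentation should be applied identically to both images
    inside the dataset so the pair stays aligned."""
    loader = DataLoader(paired_dataset, batch_size=4, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    mse = torch.nn.MSELoss()
    for _ in range(epochs):
        for hazy, clear in loader:
            opt.zero_grad()
            loss = mse(model(hazy), clear)   # SSIM term omitted in this sketch
            loss.backward()
            opt.step()
    return model
```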

4.2. Error Rate

The error reduction process is described in Algorithm 1.
Algorithm 1. Error Reduction Process
1. $\bar{Q} = \{x \mid x \in A\}$.
2. The given haze input is trained for the reduction of errors, $\sigma_{\bar{X}}(x) = 1 - \sigma_A(x)$.
3. For $\bar{Q} = x \in A$:
4. $K$ is the operation used in region segregation.
5. Training images $\sigma$ are used for the region segregation, $\sigma_{\bar{K}}(x) = 1 - \sigma_K(x)$.
In Algorithm 1, $\{x \mid x \in A\}$ denotes the specific elements or data points in set $A$ that are being assessed or handled to reduce error. $\sigma_{\bar{X}}(x)$ shows the error rate or error metric related to the haze reduction procedure for a certain data point $x$: it shows how effectively the model has reduced haze for that specific element. $\sigma_A(x)$ represents the pre-reduction error or haze intensity linked to data point $x$; it is the baseline inaccuracy, or the amount of haze impacting the input image (Figure 5).
The three-layer CNN, which takes the design elements from the Res-Net-50 architecture, helps to reduce the error rate in this procedure. After estimating the haze input, the n × n pixel distribution in the given image is determined. Afterwards, the input haze image is used to perform the region segregation operation, which enhances the visibility and clarity of the image and helps to reduce errors in the haze reduction procedure. This is achieved by consistently repeating the training procedure in the second layer of the CNN. Testing and validation allow one to assess the suggested Region-specific Haze Reduction (RHR) model’s error rate using performance metrics from the ANTN. By accurately predicting and reducing haze while retaining image information, these models typically strive to accomplish a lower error rate of 0.075 compared to other haze reduction strategies. For real-world applications, a decrease in error rate by the RHR–ANTN model would indicate that it outperforms benchmark methods in minimizing haze-induced artifacts and improving picture clarity.
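Since the text does not pin down the exact error metric, the following sketch assumes a mean absolute per-pixel deviation between the de-hazed output and the ground-truth clear image:

```python
# An assumed error-rate measure: mean absolute per-pixel deviation of the
# de-hazed output from the ground-truth clear image (floats in [0, 1]).
import numpy as np

def error_rate(dehazed, ground_truth):
    return float(np.mean(np.abs(dehazed - ground_truth)))
```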

4.3. Precision

The precision is higher in this process with the aid of the region segregation outcome and the n × n distribution detection outcomes. By partitioning the images into distinct regions depending on the haze distribution and arranging them in the n × n format, this process helps in determining the haze differentiations. This analysis enables the precise determination of the regions and their pixel distribution, and it also helps in acquiring the levels of haze intensity. This approach efficaciously adapts its de-hazing efforts to the particular factors of each segment, which helps in reducing the haze in the given input images. When compared to alternative techniques, such as SZDNet (with a rate of 0.89 at its peak) and BPFD-Net (with a range of about 0.81), the suggested model’s precision of 0.95 stands out. By taking a focused approach to treating varied amounts of haze over different parts of a picture, the suggested model achieves improved precision performance, as seen in Figure 6. The model successfully differentiates between actual picture features and distortions caused by haze by dividing the picture into n × n pixel grids and continuously comparing the spectra of hazy and non-hazy regions. By focusing on individual regions, it reduces artifacts and maintains fine features while lowering haze in each segment. The precision enhancement is described as follows (Algorithm 2):
Algorithm 2. Precision enhancement process
1. $P(w_i) = P(x_i, w_i)/W$.
2. The pixel distribution is detected precisely in different regions, $P_{ij} = (w_i - w_j)(w_i - w_j)$.
3. For $W(P) = (w \times i \times m)/W$, $W(P) \in P$.
4. The distribution of the pixels is used in enhancing the precision, $P_{ij} = \sum_{n=1}^{x}(w_i x - w_j x)^2$.

4.4. Training Ratio

The training ratio is efficacious in this method with the help of the three CNN layers, and also, the region segregation of the input images is gainful in it (Figure 7). This network is trained using a balanced combination of external haze images, pixel-segregated images, and haze-reduced images. The region segregation of input images offers an effective understanding of the haze distribution in the images. By analyzing these two factors, this approach enhances the ability to reduce the haze precisely. This training ratio is also enhanced by determining the spectral intensity along with the precise convergence. This training operation supports the process for the detection of the haze intensities precisely, to improve the image visibility and clarity in the resulting images. The haze is reduced by providing consistent training to the input images that are based on the perfect n × n distribution. The spectral intensity is estimated with the help of the training images in the first layer of the CNN layers. By consolidating these factors and three layers of procedures, the training ratio is enhanced in this process.

4.5. Region Differentiation

The region differentiation is effective in this process after the determination of the haze input and the n × n distribution (Figure 8). The pixel distribution is checked for the n × n format, and then the segregation operation is performed. This process classifies the image into distinct regions depending on varying haze levels. By targeting localized variations in haze intensity, the approach produces more precise and effective haze reduction. Through rigorous segmentation of the input image into an n × n pixel grid, where each grid cell represents a different region with its own haze intensity level, the suggested RHR model achieves a region differentiation of about 0.93. Instead of analyzing and treating haze uniformly across the image, this segmentation lets the model focus on specific regions. The model adjusts its dehazing method to each segmented region's haze concentration and intensity by measuring spectral differences. This focused strategy improves visual clarity and haze reduction accuracy by processing each area according to its haze conditions, leading to a more thorough and uniform image restoration. Residual learning in the ResNet-50-inspired architecture improves feature extraction and gradient flow, further improving the model's accuracy and clarity. This region differentiation is described in Algorithm 3 below.
Algorithm 3. Region differentiation
1. $L_{ij} = w_i P_i + w_j P_j$
2. The region differentiation operation with different regions, $L_{jw} = \sum_{n=1,\,k=1}^{x} \frac{i\, w_j}{W}$.
3. $P_{ij}(w+x) = \frac{\sigma \int_{x}^{X} x\, dx}{\sigma \int_{w}^{W} w\, dw}$, $P_{ij} \in L$.
4. $\sigma$ is used in training the input images in region differentiation, $L(P+W)$.
5. Region differentiation is effective with the precise pixel distribution and haze input, $P_{ij}(W+L) = \frac{L_{ij}(P+W)}{L_{ij} P_{ij} + W_{ij}}$.
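A minimal sketch of step 1, assuming P holds per-region pixel statistics and w per-region weights produced by the segregation stage, is given below; the subsequent thresholding of L against the haze input is omitted.

```python
import numpy as np

def region_differentiation(P: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Step 1 of Algorithm 3: L_ij = w_i * P_i + w_j * P_j over region pairs.

    P holds per-region pixel statistics and w per-region weights, both
    taken from the earlier segregation step; large L_ij values would mark
    strongly differing region pairs.
    """
    wp = w * P                           # elementwise w_i * P_i
    return wp[:, None] + wp[None, :]     # L_ij for all region pairs

P = np.array([0.12, 0.45, 0.30, 0.80])  # example per-region haze statistics
w = np.array([1.0, 1.2, 0.8, 1.5])      # example region weights
print(region_differentiation(P, w))
```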

4.6. Differentiation Time

Using a three-layer CNN and a deep-learning technique to find the haze-free output shortens the time required for region differentiation in this procedure (Figure 9). The region segregation is performed based on the training images and thus helps in determining the haze-free outputs. The pixel distribution is important in estimating the region segregation for reducing the haze in the images. The equation $L_{ij} = \sum_{n=1} \left(w_i P_i + w_j P_j\right)$ helps in detecting the region segregation in less time, and it also helps in determining the spectral intensity and the convergence factor. After estimating the distribution of picture pixels, this method examines the localized regions. The segmentation process allows a targeted examination of the variations in haze intensity throughout the various image regions. The spectral intensity is estimated based on the training images, which then aid in detecting the haze-free outcome.
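A simple way to measure this differentiation time in practice is a wall-clock harness around the grid-segregation step, as in the illustrative sketch below; the harness is our construction, not the paper's benchmarking protocol.

```python
import time
import numpy as np

def grid_haze(image: np.ndarray, n: int = 3) -> np.ndarray:
    """Minimal n x n segregation: mean dark channel per grid cell."""
    cells = [np.array_split(row, n, axis=1)
             for row in np.array_split(image, n, axis=0)]
    return np.array([[c.min(axis=2).mean() for c in row] for row in cells])

def mean_runtime(image: np.ndarray, n: int = 3, repeats: int = 10) -> float:
    """Average wall-clock time of the segregation step over several runs."""
    start = time.perf_counter()
    for _ in range(repeats):
        grid_haze(image, n)
    return (time.perf_counter() - start) / repeats

img = np.random.rand(512, 512, 3)
print(f"mean differentiation time: {mean_runtime(img):.4f} s")
```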
In summary, the suggested approach boosts accuracy by 9.38%, the training ratio by 11.77%, and region differentiation by 12.11%. The error rate is reduced by 9.83%, and the differentiation time is reduced by 14.55%, with this suggested strategy.

4.7. Computation Time

The suggested RHR using the ANTN has a reasonably efficient computation time. Normalizing, scaling, and segmenting data into n × n grids takes only milliseconds per image. The training process uses a ResNet-50-inspired architecture, which increases computation, but an adaptive optimizer such as Adam helps reach convergence faster. Although the method incurs processing overhead from the iterative spectral-difference analysis and error-reduction steps, it remains competitive, often taking only a few seconds per image, which makes it suitable for near-real-time applications. As Figure 10 shows, the computation time of the proposed model is lower than that of the other methods.
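For orientation, a minimal PyTorch sketch of a three-layer convolutional network with a ResNet-style skip connection, trained with Adam, is shown below. The layer widths, the MSE loss, and the random stand-in data are assumptions; the actual RHR–ANTN architecture is more elaborate.

```python
import torch
import torch.nn as nn

class TinyDehazeNet(nn.Module):
    """Three conv layers with a residual (ResNet-style) skip connection."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # predict a haze residual, add to input

model = TinyDehazeNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One illustrative step on random tensors standing in for hazy/clear pairs:
hazy = torch.rand(4, 3, 64, 64)
clear = torch.rand(4, 3, 64, 64)
opt.zero_grad()
loss = loss_fn(model(hazy), clear)
loss.backward()
opt.step()
print(f"loss after one Adam step: {loss.item():.4f}")
```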
Performance measurements and comparative studies show that the ANTN reduces haze better than other approaches. Adaptive training and the ResNet-50-inspired design provide region-specific processing, which improves dehazing outcomes by managing different haze strengths. Compared to benchmark approaches, the error rates are lower, and measures such as PSNR and SSIM are higher. Rigorous testing and validation show that the ANTN approach outperforms existing techniques in precision, visual clarity, and overall efficacy, demonstrating its robustness in real-world haze reduction applications.
The ANTN outperforms other approaches for several reasons. The model's region-specific haze reduction technique, which splits the image into n × n grids and processes each region separately, provides more exact handling of haze intensity and lowers artifacts. The ResNet-50-inspired architecture improves feature extraction and gradient flow, helping the network learn and adapt to complicated haze patterns. Iteratively comparing spectral variations between hazy and non-hazy areas improves haze-removal accuracy. The model's precision, visual clarity, and performance show that it manages haze better than traditional methods.

5. Conclusions

In this article, region-specific haze reduction using an adaptive neural training net was introduced. The method engaged a ResNet-50-based architecture with three layers, each serving a particular purpose. The main innovation lies in the adaptive training process of the neural network using external haze images, pixel-segregated images, and haze-reduced images. This training procedure imitates a convolutional neural network but with a distinctive repetition in its second layer, which is responsible for discerning between hazy and non-hazy regions using spectral differences. The texture intensity of these segregated regions is recurrently evaluated in the second layer to coincide with the distribution of these regions. This convergence factor helps in evaluating the spectral difference between the adaptive end pixels of the segmented regions. When the difference is significant, the second layer is trained with the maximum and minimum differences to prevent errors in the de-hazing process. The error rate is reduced by 9.83%, and the differentiation time is reduced by 14.55%, using this suggested strategy. The ultimate objective is to develop a specialized hardware structure that allows the suggested neural network framework to function in real time. Additional investigation is necessary, including streamlining the neural network, developing specialized buffers, and optimizing the storage system.
The suggested work has a potentially limited capacity to generalize across various real-world scenarios because of its dependence on a small dataset for training and evaluation. External haze photos and pixel-segregated images make the model more flexible, but it might not handle extreme or diverse circumstances well, given the limited variety of haze patterns and surroundings in the dataset. The model may also not be scalable or efficient in real-world settings: the computational complexity of its ResNet-50-inspired design and its repetitive processing may lead to high processing times and resource needs.

Author Contributions

Methodology, A.A., K.K., G.A., P.M., M.A. and M.D.A.; Software, A.A., K.K., G.A., P.M., M.A. and M.D.A.; Validation, A.A., K.K., G.A., P.M., M.A. and M.D.A.; Formal analysis, A.A., K.K., G.A., P.M., M.A. and M.D.A.; Investigation, A.A., K.K., G.A., P.M., M.A. and M.D.A.; Resources, P.M. and M.D.A.; Data curation, G.A.; Writing—original draft, A.A., K.K., G.A., P.M., M.A. and M.D.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia for funding this research work through project number 223202.

Data Availability Statement

The data will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Lee, Y.-H.; Tang, S.-J. A design of image dehazing engine using DTE and DAE techniques. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 2880–2895.
2. He, Y.; Li, C.; Bai, T. Remote Sensing Image Haze Removal Based on Superpixel. Remote Sens. 2023, 15, 4680.
3. Fan, Y.; Rui, X.; Poslad, S.; Zhang, G.; Yu, T.; Xu, X.; Song, X. A better way to monitor haze through image based upon the adjusted LeNet-5 CNN model. Signal Image Video Process. 2020, 14, 455–463.
4. Zhang, Y.; Xu, T.; Tian, K. PSPAN: Pyramid spatially weighted pixel attention network for image dehazing. Multimed. Tools Appl. 2023, 83, 11367–11385.
5. Kuthadi, V.M.; Selvaraj, R.; Baskar, S.; Shakeel, P.M. Data security tolerance and portable based energy-efficient framework in sensor networks for smart grid environments. Sustain. Energy Technol. Assess. 2022, 52, 102184.
6. Albekairi, M.; Kaaniche, K.; Abbas, G.; Mercorelli, P.; Alanazi, M.D.; Almadhor, A. Advanced Neural Classifier-Based Effective Human Assistance Robots Using Comparable Interactive Input Assessment Technique. Mathematics 2024, 12, 2500.
7. Liu, Y.; Yu, L.; Wang, Z.; Pan, B. Neutralizing the impact of heat haze on digital image correlation measurements via deep learning. Opt. Lasers Eng. 2023, 164, 107522.
8. Chi, W.; Ning, Y.; Liu, W.; Liu, R.; Li, J.; Wang, L. Development of a glue- and heat-sealable acorn kernel meal/κ-carrageenan composite film with high-haze and UV-shield for packaging grease. Ind. Crop. Prod. 2023, 204, 117250.
9. Yin, S.; Yang, X.; Wang, Y.; Yang, Y.-H. Visual attention dehazing network with multi-level features refinement and fusion. Pattern Recognit. 2021, 118, 108021.
10. Chen, L.; Tang, C.; Xu, M.; Lei, Z. Enhancement and denoising method for low-quality MRI, CT images via the sequence decomposition Retinex model, and haze removal algorithm. Med. Biol. Eng. Comput. 2021, 59, 2433–2448.
11. Pethuraj, M.S.; Aboobaider, B.B.M.; Salahuddin, L.B. Analyzing CT images for detecting lung cancer by applying the computational intelligence-based optimization techniques. Comput. Intell. 2022, 39, 930–949.
12. Kang, M.; Jung, M. A single image dehazing model using total variation and inter-channel correlation. Multidimens. Syst. Signal Process. 2020, 31, 431–464.
13. Memon, S.; Arain, R.H.; Mallah, G.A. AMSFF-Net: Attention-based multi-stream feature fusion network for single image dehazing. J. Vis. Commun. Image Represent. 2023, 90, 103748.
14. Dong, W.; Wang, C.; Sun, H.; Teng, Y.; Liu, H.; Zhang, Y.; Zhang, K.; Li, X.; Xu, X. End-to-End Detail-Enhanced Dehazing Network for Remote Sensing Images. Remote Sens. 2024, 16, 225.
15. Wang, Y.; Yin, S.; Basu, A. A multi-scale attentive recurrent network for image dehazing. Multimed. Tools Appl. 2021, 80, 32539–32565.
16. Lai, Y.; Hua, M.; Zhu, T. Single Image Dehazing Based on Convolutional Neural Network Using Boundary Constraint. Pattern Recognit. Image Anal. 2021, 31, 616–624.
17. Huang, S.; Zhang, Y.; Zhang, O. Image Haze Removal Method Based on Histogram Gradient Feature Guidance. Int. J. Environ. Res. Public Health 2023, 20, 3030.
18. Yu, S.; Seo, D.; Paik, J. Haze removal using deep convolutional neural network for Korea Multi-Purpose Satellite-3A (KOMPSAT-3A) multispectral remote sensing imagery. Eng. Appl. Artif. Intell. 2023, 123, 106481.
19. He, Y.; Li, C.; Li, X. Remote Sensing Image Dehazing Using Heterogeneous Atmospheric Light Prior. IEEE Access 2023, 11, 18805–18820.
20. Liu, X.; Shi, Z.; Wu, Z.; Chen, J.; Zhai, G. GridDehazeNet+: An enhanced multi-scale network with intra-task knowledge transfer for single image dehazing. IEEE Trans. Intell. Transp. Syst. 2022, 24, 870–884.
21. Zhang, T.; Yang, X.; Wang, X.; Wang, R. Deep joint neural model for single image haze removal and color correction. Inf. Sci. 2020, 541, 16–35.
22. Yan, Y.; Jinlong, Z.; Ce, L.; Haowen, Z.; Xiang, L. Visibility restoration of haze and dust image using color correction and composite channel prior. Vis. Comput. 2022, 39, 2795–2809.
23. Ngo, D.; Lee, G.-D.; Kang, B. Singe Image Dehazing With Unsharp Masking and Color Gamut Expansion. IEEE Access 2022, 10, 102462–102474.
24. Liu, Y.; Hou, X. Local multi-scale feature aggregation network for real-time image dehazing. Pattern Recognit. 2023, 141, 109599.
25. Si, Y.; Yang, F.; Chong, N. A novel method for single nighttime image haze removal based on gray space. Multimed. Tools Appl. 2022, 81, 43467–43484.
26. Dong, P.; Wang, B. TransRA: Transformer and residual attention fusion for single remote sensing image dehazing. Multidimens. Syst. Signal Process. 2022, 33, 1119–1138.
27. Zhang, J.; Dong, Q.; Song, W. GGADN: Guided generative adversarial dehazing network. Soft Comput. 2023, 27, 1731–1741.
28. Li, S.; Lin, J.; Yang, X.; Ma, J.; Chen, Y. BPFD-Net: Enhanced dehazing model based on Pix2pix framework for single image. Mach. Vis. Appl. 2021, 32, 124.
29. Xiao, X.; Ren, Y.; Li, Z.; Zhang, N.; Zhou, W. Self-supervised zero-shot dehazing network based on dark channel prior. Front. Optoelectron. 2023, 16, 7.
30. Yang, Y.; Wang, Z.; Hong, W.; Yue, H. Single image dehazing algorithm based on double exponential attenuation model. Multimed. Tools Appl. 2021, 80, 15701–15718.
31. Hu, H.-M.; Zhang, H.; Zhao, Z.; Li, B.; Zheng, J. Adaptive single image dehazing using joint local-global illumination adjustment. IEEE Trans. Multimed. 2019, 22, 1485–1495.
32. Li, Z.; Shu, H.; Zheng, C. Multi-scale single image dehazing using Laplacian and Gaussian pyramids. IEEE Trans. Image Process. 2021, 30, 9270–9279.
33. Jin, Y.; Lin, B.; Yan, W.; Yuan, Y.; Ye, W.; Tan, R.T. Enhancing visibility in nighttime haze images using guided APSF and gradient adaptive convolution. In Proceedings of the 31st ACM International Conference on Multimedia (MM '23), Ottawa, ON, Canada, 29 October–3 November 2023; pp. 2446–2457.
34. Han, Y.; Kim, J.; Lee, J.; Nah, J.-H.; Ho, Y.-S.; Park, W.-C. Efficient Haze Removal from a Single Image Using a DCP-Based Lightweight U-Net Neural Network Model. Sensors 2024, 24, 3746.
Figure 1. Sample haze images from aerial view.
Figure 2. Schematic illustration of the proposed RHR–ANTN.
Figure 3. Spectral difference estimation.
Figure 4. Training architecture.
Figure 5. Error rate comparison.
Figure 6. Precision comparisons.
Figure 7. Training ratio comparisons.
Figure 8. Region differentiation comparisons.
Figure 9. Differentiation time comparisons.
Figure 10. Computation time.
Table 1. Intensity variations across various regions (segregated-region and intensity images omitted; entries are the variation of each segregated region relative to each grid).

Segregated region | (1 × 1) | (1 × 3) | (3 × 1) | (3 × 3)
(1 × 1)           | 0       | −0.3    | 0.2     | 0.41
(1 × 3)           | 0.11    | 0       | −0.4    | −0.2
(3 × 1)           | −0.6    | −0.2    | 0       | −0.4
(3 × 3)           | −1.2    | −0.8    | −1.4    | 0
Table 2. Convergence assessment for (n × n): identified regions and convergence plots for the 1 × 1, 1 × 3, 3 × 1, and 3 × 3 grids (images omitted).
Table 3. Sample input and output under (n × n): three hazy inputs with their haze-reduced outputs and the variation observed under the 1 × 3, 3 × 1, and 3 × 3 grids (images omitted).