Article

Infrared Image Enhancement Based on Adaptive Guided Filter and Global–Local Mapping

1 Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Photonics 2024, 11(8), 717; https://doi.org/10.3390/photonics11080717
Submission received: 23 June 2024 / Revised: 15 July 2024 / Accepted: 29 July 2024 / Published: 31 July 2024

Abstract
Infrared image enhancement technology plays a crucial role in improving image quality, addressing issues such as low contrast, lack of sharpness, and poor visual effects in original images. However, existing decomposition-based algorithms struggle to balance detail enhancement and noise suppression and to exploit global and local information effectively. This paper proposes a method for enhancing details in infrared images using adaptive guided filtering and global–local mapping. First, the original image is decomposed into a base layer and a detail layer through the adaptive guided filter and a difference of Gaussian filter. The detail layer is then enhanced using a detail gain factor. Finally, the base layer and the enhanced detail layer are merged and remapped to a lower gray-level range. Experimental results demonstrate significant improvements in global and local contrast as well as sharpness, with a nearly threefold improvement in average gradient across various scenes.

1. Introduction

The new generation of infrared (IR) imaging systems has made significant advancements, offering high resolution, a wide dynamic range, and real-time capability. However, various factors still degrade original infrared images, including the low-pass effect of the optical lens's point spread function, atmospheric interference, stray radiation, and air turbulence. As a result, these images exhibit poor uniformity, low contrast, weak signal strength, low sharpness, and suboptimal visual effects. To address these challenges, infrared imaging systems employ high-performance analog-to-digital converters for high dynamic range (HDR) image acquisition. Nevertheless, the grayscale resolution perceivable by the human eye and displayable on standard monitors falls short of the bit width of HDR infrared images. It therefore becomes essential to apply corrective post-processing steps such as contrast enhancement and detail enhancement, so that the visual quality aligns more closely with human perception.
The existing literature on HDR infrared image enhancement methods can be broadly classified into three categories: mapping-based methods, decomposition-based methods, and gradient domain methods [1]. Mapping-based enhancement methods usually exhibit limited detail enhancement capability and are susceptible to amplifying local noise [2,3,4,5]. Nevertheless, owing to their straightforward principles, low computational complexity, and ease of hardware implementation, methods such as contrast limited adaptive histogram equalization (CLAHE) and plateau histogram equalization (PHE) remain prevalent in the latest generation of infrared imaging systems. Gradient domain enhancement methods can mitigate halo artifacts and gradient reversal issues; in practice, however, compared with decomposition-based methods they generally involve higher complexity, reduce noise less effectively, and offer inferior detail enhancement [6,7,8].
With the advent of high-performance edge-preserving filters [9,10,11], decomposition-based enhancement methods have emerged as the predominant approach for infrared image enhancement over the past decade. Zuo et al. proposed infrared image display and detail enhancement based on a bilateral filter (BF&DDE) [12]. This method employs the Gaussian filter to mitigate gradient artifacts caused by the bilateral filter, thus significantly improving detail enhancement capability; however, its high computational cost poses a significant challenge to real-time processing. Liu et al. introduced a detail enhancement method for infrared images based on a guided image filter (GF&DDE) [13], which employs a guided filter to obtain enhanced images and yields a significant improvement in computational efficiency. Chen et al. proposed a real-time infrared image enhancement method based on a fast guided filter and plateau histogram equalization (FGF&PE) [14]. It strikes a good balance between detail enhancement and noise reduction while keeping the overall grayscale distribution consistent with the original image. Ouyang et al. proposed an infrared image detail enhancement algorithm based on a parameter-adaptive guided filter (PAGF&IDE) [15], which improves the scene adaptability of detail enhancement methods and reduces noise levels while enhancing image detail. Chen et al. proposed an ultra-fast short-wave IR image enhancement method based on the difference of Gaussian (DoG) filter [17]. It effectively highlights image details through mask sharpening [16] and exhibits high operating efficiency; however, it is unsuitable for medium-wave or long-wave IR images with a lower signal-to-noise ratio. Focusing on the characteristics of infrared images such as high dynamic range, low contrast, and inconspicuous details, Zhang et al. proposed an infrared image enhancement method based on the gradient domain guided filter [18]. It incorporates adaptive double-plateau histogram equalization and an edge-aware weighting factor to optimize the layer images, resulting in improved overall brightness and enhanced detail.
The decomposition-based enhancement method utilizes filters to decompose images, thereby compressing the dynamic range to improve brightness and global contrast by processing the base layer. Simultaneously, it enhances local details and reduces noise by processing the detail layer. There are three key factors in this method:
  • Accurate decomposition of the base and detail layers to avoid artifacts;
  • Dynamic range compression: the method should provide excellent dynamic range compression and brightness improvement while maintaining global contrast;
  • Detail layer gain factor: because edges in infrared images are typically blurred, a well-designed detail gain factor is needed to enhance the detail layer and improve the visibility of local details.
This paper presents a decomposition-based enhancement method that leverages the adaptive guided filter (AGF) as the decomposition filter to improve scene adaptability. For detail layer processing, an edge-aware factor serves as the detail gain function, subtly adjusting the detail gain to highlight details while reducing noise. The method then integrates global and local mapping, using CLAHE and the Global Adaptation method, to compress the dynamic range and enhance both global and local contrast. Experimental evaluations on images from diverse scenes demonstrate significant improvements in dynamic range mapping, noise reduction, detail enhancement, and adaptability.

2. Related Work

2.1. Guided Filter

The guided filter [10] is an explicit local linear filter, which uses a guidance image $I$ to filter an input image $p$. Mathematically, the filter output at pixel $i$ is defined as

$$q_i = \sum_{j} W_{ij}(I)\, p_j, \qquad (1)$$
where $W_{ij}$ is the filter kernel function and $i$ and $j$ denote pixel indices. The filter assumes a local linear model, so the output $q$ can be expressed as

$$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k, \qquad (2)$$
where $\omega_k$ is a window of radius $r$ centered on pixel $k$, and $(a_k, b_k)$ is a pair of constant linear coefficients. To make the output image $q$ approximate the input image $p$, an energy function is built using the least squares method:

$$E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - p_i)^2 + \varepsilon a_k^2 \right], \qquad (3)$$
where $\varepsilon$ is a regularization parameter preventing $a_k$ from reaching an unreasonably large value. The linear coefficients are obtained by linear regression:

$$a_k = \frac{\frac{1}{|\omega|}\sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \varepsilon}, \qquad b_k = \bar{p}_k - a_k \mu_k, \qquad (4)$$
where $|\omega|$ is the number of pixels within the window $\omega_k$; $\mu_k$ and $\sigma_k^2$ denote the mean and variance of the guidance image $I$ in $\omega_k$, respectively; and $\bar{p}_k$ is the mean of the input image $p$ in $\omega_k$.
The guidance image $I$ is the same as the input image $p$ in most edge-preserving applications. In that case, the linear coefficients reduce to

$$a_k = \frac{\sigma_k^2}{\sigma_k^2 + \varepsilon}, \qquad b_k = (1 - a_k)\,\bar{p}_k. \qquad (5)$$
Since pixel $i$ is covered by $|\omega|$ different windows, the values of $q_i$ computed from different windows are not identical. Averaging $a_k$ and $b_k$ over all windows containing $i$ recasts Equation (2) as

$$q_i = \bar{a}_i I_i + \bar{b}_i, \qquad (6)$$

where $\bar{a}_i = \frac{1}{|\omega|}\sum_{k \in \omega_i} a_k$ and $\bar{b}_i = \frac{1}{|\omega|}\sum_{k \in \omega_i} b_k$. According to Equation (5), a small intensity variation within the kernel window drives $a_k$ toward 0 and $b_k$ toward $\bar{p}_k$, so the region is smoothed; an abrupt intensity change drives $a_k$ toward 1 and $b_k$ toward 0, so the edge is preserved.
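As a concrete reference, the following minimal Python sketch implements Equations (2)-(6) for the common case $I = p$, using box filters for the window statistics. The function name and boundary handling are illustrative, not the authors' implementation; it also returns the edge-aware weight $\bar{a}$, which Section 3.2 reuses as a detail gain factor.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(p, r, eps):
    """Guided filter with the input as its own guide (I = p), per Eqs. (2)-(6).

    Returns the filtered base layer and the averaged coefficient a_bar.
    Boundary handling follows scipy's default reflection, which differs
    slightly from a normalized box filter at the image borders.
    """
    p = np.asarray(p, dtype=np.float64)
    size = 2 * r + 1
    mean_p = uniform_filter(p, size)                          # mu_k = p_bar_k when I = p
    var_p = np.maximum(uniform_filter(p * p, size) - mean_p**2, 0.0)  # sigma_k^2
    a = var_p / (var_p + eps)                                 # Equation (5)
    b = (1.0 - a) * mean_p
    a_bar = uniform_filter(a, size)                           # averaged coefficients, Eq. (6)
    b_bar = uniform_filter(b, size)
    return a_bar * p + b_bar, a_bar
```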

2.2. Difference of Gaussian Filter

The DoG filter is commonly employed for detecting edges [19] or blobs [20]. It is defined as the difference between two standard Gaussian kernels:

$$DoG(i, j, \sigma_1, \sigma_2) = \frac{1}{2\pi\sigma_1^2} e^{-(i^2+j^2)/(2\sigma_1^2)} - \frac{1}{2\pi\sigma_2^2} e^{-(i^2+j^2)/(2\sigma_2^2)}, \qquad (7)$$

where $(i, j)$ denotes the pixel coordinates and $\sigma_1$ and $\sigma_2$ (with $\sigma_1 < \sigma_2$) are the standard deviations of the two Gaussian functions. The DoG filter approximates the Laplacian of Gaussian (LoG) filter when $\sigma_2 / \sigma_1 = 1.6$, which gives better edge detection performance [19]. From a signal viewpoint, a Gaussian filter passes low-frequency content and cuts off high-frequency information; subtracting two low-pass filters with different cutoff frequencies therefore forms a bandpass filter. The DoG filter extracts the edge feature information that characterizes the details in this frequency band, as shown in Figure 1.
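Because Equation (7) is linear, the band-pass response can also be computed as the difference of two Gaussian-blurred copies of the image rather than by convolving with an explicit DoG kernel. A minimal sketch follows; the function name is illustrative, and the default sigma values anticipate those derived in Section 3.1.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_filter(img, sigma1=1.088, sigma2=1.7):
    """Band-pass filtering per Eq. (7): subtract a wide Gaussian low-pass
    from a narrow one, keeping the edge/detail frequency band."""
    img = np.asarray(img, dtype=np.float64)
    return gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)
```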

2.3. Contrast Limited Adaptive Histogram Equalization

To mitigate the intrinsic limitations of the histogram equalization (HE) algorithm's indiscriminate enhancement, alternative algorithms such as PHE, adaptive histogram equalization (AHE), and CLAHE have been devised. PHE has a poor ability to highlight image details, and its plateau threshold must be adjusted manually. AHE is prone to block effects and amplifies local noise. CLAHE can effectively improve the local contrast of the image and better highlight details. For an infrared image with $L$ gray levels, the histogram can be expressed as

$$H(l) = n_l, \quad l = 0, 1, \ldots, L-1, \qquad (8)$$
where $n_l$ represents the number of pixels with gray level $l$ in the image. The dynamic range of the image enhanced by the HE algorithm is $R$ (256 for an 8-bit image). The enhancement result for gray level $l$ is therefore

$$y_l = T(l) = \frac{R-1}{N} \sum_{k=0}^{l} H(k), \qquad (9)$$
where $T(l)$ is the mapping function that maps gray level $l$ of the input image to $y_l$ of the output image and $N$ is the total number of image pixels.
The HE algorithm exaggerates the amplification of gray levels with high histogram counts. Because such levels come to occupy a broad grayscale range, HE is prone to over-enhancing infrared images: noise is heightened in uniform regions, while gray levels corresponding to detailed regions fail to receive the desired enhancement ratio. The CLAHE algorithm addresses this issue by partitioning the image into a grid of non-overlapping rectangular sub-blocks and performing HE within each sub-block. To prevent over-enhancement, each sub-histogram is clipped and its excess redistributed, avoiding excessive enhancement in uniform areas while achieving superior contrast enhancement in detailed regions. Pixels exceeding the clipping threshold are reassigned through an iterative adjustment process. The clipping threshold is calculated as
$$T_{\mathrm{clip}} = \frac{N_b}{L} + \alpha \left( N_b - \frac{N_b}{L} \right), \qquad (10)$$

where $N_b$ is the number of pixels in each sub-block and $\alpha \in [0, 1]$ is a clip factor. Since the mapping function of each clipped sub-histogram is independent of the others, bilinear interpolation is employed between blocks to eliminate block effects.
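The clipping-and-redistribution step is the core difference between CLAHE and plain AHE. A minimal sketch of Equation (10) for a single sub-block histogram is given below; it performs one redistribution pass, whereas the iterative process described above would repeat until no bin exceeds the threshold. The function name is illustrative.

```python
import numpy as np

def clip_and_redistribute(hist, alpha, L=256):
    """Clip a sub-block histogram at T_clip (Eq. 10) and spread the
    clipped excess uniformly over all gray levels (single pass)."""
    hist = np.asarray(hist, dtype=np.float64)
    n_b = hist.sum()                               # pixels in the sub-block
    t_clip = n_b / L + alpha * (n_b - n_b / L)     # Equation (10)
    excess = np.maximum(hist - t_clip, 0.0).sum()
    return np.minimum(hist, t_clip) + excess / L
```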

3. Proposed Method

In this section, the proposed method is described in detail. First, the original HDR infrared image is decomposed into the base layer and detail layer 1 through adaptive guided filtering. Simultaneously, the DoG filter is applied to extract detail layer 2, which contains richer high-frequency information. Next, to balance detail enhancement and noise suppression, a detail gain function is devised from the edge-aware weight factor of the adaptive guided filter; it compresses the dynamic range of the fused, weighted detail layer so that details are enhanced and displayed effectively. Finally, the base layer and the enhanced detail layer are linearly fused, followed by global and local histogram mapping to improve image contrast and visual discrimination. Figure 2 illustrates the workflow of the proposed method.
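Before each stage is detailed, the workflow of Figure 2 can be summarized as a code-level sketch. The helper functions refer to the illustrative Python sketches given alongside the corresponding equations in this paper; `clahe_mapping` stands in for any standard CLAHE routine (Section 2.3), and the window radius is an assumed value, so this is a sketch of the pipeline rather than the authors' implementation.

```python
def enhance_hdr_ir(img):
    """High-level sketch of the proposed pipeline (Figure 2)."""
    eps = adaptive_eps(img, r=8)                     # scene-adaptive epsilon, Eq. (11)
    base, a_bar = guided_filter(img, r=8, eps=eps)   # AGF decomposition, Eqs. (2)-(6)
    detail_1 = img - base                            # detail layer 1
    detail_2 = dog_filter(img)                       # detail layer 2, Eq. (7)
    detail = detail_1 + detail_2                     # Eq. (12)
    detail = (10.0 * a_bar + 0.2) * detail           # edge-aware gain, Eq. (15)
    fused = base + detail                            # linear fusion
    fused = clahe_mapping(fused)                     # local mapping, Section 2.3
    return global_adaptation(fused)                  # global mapping, Eqs. (16)-(17)
```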

3.1. Input Image Decomposition

Akin to the conventional decomposition-based approach, an edge-preserving filter is employed for image decomposition in this work. However, a fixed filtering parameter $\varepsilon$ does not achieve the optimal decomposition across all scenes; adaptively adjusting the parameter according to scene information is essential for an effective decomposition. The crux of the adaptive guided filter lies in modifying the parameter $\varepsilon$ according to the original image $I_{\mathrm{in}}$. Equation (5) shows that the local variance $\sigma_k^2$ and the parameter $\varepsilon$ are the two variables that determine the filtering effect. Specifically, $\sigma_k^2$ is determined by the grayscale of the infrared image, while $\varepsilon$ greatly impacts the quality of the detail layer. In the conventional guided filter, calculating $a_k$ and $b_k$ requires the variance of each image window (i.e., the local variance). The local variance $\sigma_k^2$ reflects the edge and texture of image details, which can significantly aid processing and improve the adaptability of the visual masking effect. Here, the variance of every window is computed, and $\varepsilon$ is set to the statistical mean of the window variances:
$$\varepsilon = \frac{\sum_n N_n \times \sigma_n^2}{\sum_n N_n}, \qquad (11)$$

where $N_n$ represents the number of windows whose local variance $\sigma_k^2$ falls at the $n$-th level of the variance histogram. It is worth noting that only variance levels accounting for more than 0.1% of all windows are counted, which avoids the influence of outlier variance values on $\varepsilon$. According to Equation (11), images with rich texture information yield a greater $\varepsilon$ from the statistical distribution of $\sigma_k^2$; conversely, images with sparse details and texture adaptively obtain a lower $\varepsilon$, accommodating different scenes.
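A possible realization of Equation (11), under the stated 0.1% cutoff, is sketched below. The histogram bin count is an assumption, since the paper does not specify how the variance levels are quantized, and the function name is illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_eps(img, r, min_fraction=0.001, bins=256):
    """Scene-adaptive regularization per Eq. (11): the count-weighted mean
    of the local-variance histogram, ignoring levels below the 0.1% cutoff."""
    img = np.asarray(img, dtype=np.float64)
    size = 2 * r + 1
    mean = uniform_filter(img, size)
    var = np.maximum(uniform_filter(img * img, size) - mean**2, 0.0)  # sigma_k^2 map
    counts, edges = np.histogram(var, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = counts >= min_fraction * var.size          # drop outlier variance levels
    return float(np.sum(counts[keep] * centers[keep]) / np.sum(counts[keep]))
```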
In this work, the original HDR infrared image $I_{\mathrm{in}}$ is decomposed into the base layer $I_{\mathrm{base}}$ and detail layer $I_{\mathrm{detail\_1}}$. The guidance image $I$ is identical to $I_{\mathrm{in}}$ during decomposition. To obtain more edge information, the DoG filter is used to extract edge features $I_{\mathrm{detail\_2}}$ from $I_{\mathrm{in}}$, which are then merged with $I_{\mathrm{detail\_1}}$ to create the final detail layer $I_{\mathrm{detail}}$:

$$I_{\mathrm{detail}} = I_{\mathrm{detail\_1}} + I_{\mathrm{detail\_2}}. \qquad (12)$$
Figure 3a depicts the detail layer $I_{\mathrm{detail\_1}}$ obtained through AGF decomposition, and Figure 3b shows the detail layer $I_{\mathrm{detail}}$ generated by superimposing the edges extracted via DoG filtering. The texture of the superimposed detail layer is visibly clearer.
The edges in an image are defined by high local contrast, and superimposing visually distinct edges further enhances the visual distinctiveness of the corresponding locations. As mentioned above, the key parameters of the DoG filter are the standard deviations of the two Gaussian functions ($\sigma_1$ and $\sigma_2$) and the filter window size $win\_size$. Under the condition $\sigma_2 / \sigma_1 = 1.6$, the DoG filter approaches the LoG filter, which responds most strongly to edges or spot-like targets of size $2\sqrt{2}\sigma$. The system of equations for the two standard deviations of the DoG filter is

$$edge\_size = \frac{2\sqrt{2}\sigma_1 + 2\sqrt{2}\sigma_2}{2}, \qquad \frac{\sigma_2}{\sigma_1} = 1.6. \qquad (13)$$
The edge size $edge\_size$ of infrared images typically ranges from 3 to 8 pixels. To compare the detail extraction performance of different DoG filter parameters, different pairs of $win\_size$ and $edge\_size$ are combined, and the DoG filtering results (extracted edge images) on the same image are shown in Figure 4.
The results in Figure 4 show that the window size of the DoG filter has a significantly greater impact on image sharpness than the standard deviations, which are determined by the edge size of the image. A filter with a small window size is susceptible to noise, whereas a large window size widens the edge transition area and degrades image quality, so a window size of 7 is kept in the subsequent experiments. The standard deviations are governed by the edge scale, and an edge size of 4 works well in most of the intended applications. Solving Equation (13) accordingly yields $\sigma_1 = 1.088$ and $\sigma_2 = 1.7$. Multiplying these two standard deviations by $2\sqrt{2}$ gives 3.077 and 4.923, which closely bracket the typical edge sizes. Consequently, the explicit array of the DoG filter with a window size of 7 is
$$G = \begin{bmatrix}
-0.0029 & -0.0061 & -0.0090 & -0.0099 & -0.0090 & -0.0061 & -0.0029 \\
-0.0061 & -0.0107 & -0.0088 & -0.0047 & -0.0088 & -0.0107 & -0.0061 \\
-0.0090 & -0.0088 & 0.0168 & 0.0398 & 0.0168 & -0.0088 & -0.0090 \\
-0.0099 & -0.0047 & 0.0398 & 0.0775 & 0.0398 & -0.0047 & -0.0099 \\
-0.0090 & -0.0088 & 0.0168 & 0.0398 & 0.0168 & -0.0088 & -0.0090 \\
-0.0061 & -0.0107 & -0.0088 & -0.0047 & -0.0088 & -0.0107 & -0.0061 \\
-0.0029 & -0.0061 & -0.0090 & -0.0099 & -0.0090 & -0.0061 & -0.0029
\end{bmatrix}. \qquad (14)$$
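For reproducibility, the two standard deviations and an explicit kernel can be generated as follows. This is a sketch: the printed coefficients in Equation (14) appear to be normalized, so the raw values produced below match them only approximately, and the function name is illustrative.

```python
import numpy as np

def dog_kernel(edge_size=4.0, win_size=7):
    """Solve Eq. (13) for sigma1, sigma2 and sample Eq. (7) on a win_size grid.
    edge_size = 4 gives sigma1 ~= 1.088 and sigma2 ~= 1.7."""
    sigma1 = edge_size / (2.6 * np.sqrt(2.0))   # sqrt(2)*(s1 + s2) = edge_size, s2 = 1.6*s1
    sigma2 = 1.6 * sigma1
    r = win_size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    d2 = x * x + y * y
    g1 = np.exp(-d2 / (2 * sigma1**2)) / (2 * np.pi * sigma1**2)
    g2 = np.exp(-d2 / (2 * sigma2**2)) / (2 * np.pi * sigma2**2)
    return g1 - g2   # positive center, negative surround
```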
As shown in Figure 5a, the original HDR infrared image is displayed using the linear mapping (LM) method. Figure 5b depicts the base layer obtained by decomposing the original image with the AGF, and Figure 5c shows the detail layer obtained by AGF and DoG filtering.

3.2. The Sharpening and Noise Reduction of the Detail Layer

The processing of the detail layer is mainly based on the human visual masking effect: the detail gain in smooth areas should be low to avoid noise amplification, while the detail gain in textured areas should be larger to enhance visibility [12,13,14]. The edge-aware factor $\bar{a}$ of the guided filter is a particularly suitable noise visibility function for measuring the richness of detail. In flat areas with small grayscale changes, $\bar{a}_i$ tends toward 0, while at detailed edges with large grayscale changes, $\bar{a}_i$ is close to 1, which protects rich edge detail.
Combining the detail gain factor $\bar{a}$, the gain applied to the detail layer is computed as

$$I_{\mathrm{dp}} = (g_{\max} \times \bar{a} + g_{\min}) \times I_{\mathrm{detail}}, \qquad (15)$$
where $I_{\mathrm{dp}}$ represents the gained detail layer, and $g_{\max}$ and $g_{\min}$ are the largest and smallest gain coefficients, which together define the gain range. In general, $g_{\min} \leq 1$; to avoid noise amplification, $g_{\min}$ can be set to 0.2. $g_{\max}$ can be selected as needed, provided it is greater than $g_{\min}$; a larger $g_{\max}$ generates more pronounced, even exaggerated, details. In the experiments, $g_{\max} = 10$ yields satisfactory results.
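A minimal sketch of Equation (15), with the parameter defaults above (function name illustrative):

```python
def gain_detail(detail, a_bar, g_min=0.2, g_max=10.0):
    """Edge-aware detail gain, Eq. (15): flat regions (a_bar ~ 0) receive the
    floor gain g_min; strong edges (a_bar ~ 1) receive up to g_max + g_min."""
    return (g_max * a_bar + g_min) * detail
```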
Figure 5d shows the result of processing the detail layer $I_{\mathrm{detail}}$ with Equation (15). It can be observed that the details in the detail layer have been significantly enhanced.

3.3. Dynamic Range Mapping

As illustrated in Figure 2, even after superimposing the processed detail and base layers, the image retains a high dynamic range. Appropriate mapping methods must therefore be employed to compress the dynamic range, enhance detail information, and improve the contrast of each brightness area. As one of the most representative local mapping methods, CLAHE (Section 2.3) sets a suitable clipping threshold to redistribute the excess of the histogram, mitigating over-enhancement while highlighting local details and effectively improving contrast within textured sub-blocks. However, as illustrated in Figure 5e, CLAHE alone proves inadequate for the display problems in dark and bright areas; maintaining the overall contrast and brightness within dark areas is particularly challenging.
According to the theory of human visual brightness perception, an infrared image can be divided into three brightness areas: under-illuminated, well-illuminated, and over-illuminated. These areas should be enhanced independently, with the under-illuminated area enhanced to a greater extent than the well- or over-illuminated areas. In this work, the Global Adaptation method proposed by Ahn et al. [21] is used to remap the fused image $I_{\mathrm{fc}}$:
$$I_{\mathrm{out}} = 255 \times \frac{\log\left(I_{\mathrm{fc}} / \bar{I}_{\mathrm{fc}} + 1\right)}{\log\left(I_{\mathrm{fc\_max}} / \bar{I}_{\mathrm{fc}} + 1\right)}, \qquad (16)$$

$$\bar{I}_{\mathrm{fc}} = \exp\left( \frac{1}{M \times N} \sum_{x, y} \log\left(I_{\mathrm{fc}}(x, y) + l\right) \right), \qquad (17)$$
where $I_{\mathrm{out}}$ is the enhanced output image, $\bar{I}_{\mathrm{fc}}$ is the logarithmic mean gray level of $I_{\mathrm{fc}}$, $I_{\mathrm{fc\_max}}$ is the maximum value of $I_{\mathrm{fc}}$, $M \times N$ is the number of pixels, and $l$ is a small constant that avoids taking the logarithm of zero. As the log-average gray level grows, the mapping function transitions from logarithmic toward linear in shape; thus, scenes with a low log-average gray level are boosted more strongly than brighter scenes.
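A minimal sketch of Equations (16) and (17); the constant $l$ and the function name are illustrative, and a non-negative fused image is assumed.

```python
import numpy as np

def global_adaptation(fused, l=1e-6):
    """Log-mean global tone mapping per Ahn et al. [21], Eqs. (16)-(17).
    Assumes a non-negative input image."""
    fused = np.asarray(fused, dtype=np.float64)
    log_mean = np.exp(np.mean(np.log(fused + l)))     # Eq. (17)
    return 255.0 * np.log(fused / log_mean + 1.0) \
           / np.log(fused.max() / log_mean + 1.0)     # Eq. (16)
```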
Figure 5f shows the resulting image processed by the proposed method. As evident from the figure, the dynamic range of the original image has been effectively compressed, the overall brightness of the image has been well adjusted, and details from dark to bright areas have been significantly enhanced.

4. Experimental Results and Discussion

To validate the effectiveness and scene adaptability of the proposed method, several comparative experiments are conducted on four real HDR infrared images from different scenes, shown in Figure 6. These images were all captured by a mid-wave infrared camera with a resolution of 1280 × 1024. The first image depicts an indoor setting, whereas the remaining three show outdoor scenes at varying distances.
The proposed method is qualitatively and quantitatively compared with five state-of-the-art infrared image enhancement methods: PE [4], BF&DDE [12], GF&DDE [13], FGF&PE [14], and PAGF&IDE [15]. For a fair comparison, the parameters of each algorithm use the default values recommended in the original works. All experiments are implemented in MATLAB and run on a personal computer with an Intel i5-9300H CPU and 16 GB of RAM.

4.1. Qualitative Comparison

Figure 7 shows the results of Image 1 processed by the different methods. Image 1 is a 12-bit indoor image with an uncomplicated scene and a small temperature difference between the human subject and the background objects, making it suitable for testing an enhancement algorithm's contrast improvement and background noise reduction. As shown in Figure 7, the dynamic range compression capability of all comparison methods is limited: the brightness of areas other than the human body and chair is insufficient, and the background information is not effectively highlighted. PE lacks the ability to highlight details. BF&DDE exhibits over-enhancement and introduces artifacts at edges, as indicated by the yellow arrow. GF&DDE and FGF&PE yield similar enhancement effects, failing to fully capture the detailed texture. PAGF&IDE demonstrates improved sharpness but inevitably amplifies noise. In contrast, the proposed method substantially enhances the brightness and local contrast of the image while preserving global information clearly and without significant noise amplification.
Image 2 is a 14-bit outdoor image captured under intense sunlight; the scene is complex and rich in fine detail. As depicted in Figure 8, PE, a global mapping method, exhibits limited efficacy in enhancing contrast and highlighting details. BF&DDE and GF&DDE notably enhance image contrast but also amplify noise, yielding a notably grainy image. FGF&PE and PAGF&IDE enhance image details without significant noise amplification, although some fine edge details remain blurred. Notably, the proposed method demonstrates substantial advantages in local contrast, image acutance, and noise reduction.
The results of Image 3 processed by the different methods are presented in Figure 9. Image 3 is captured at nighttime and characterized by a low scene temperature, with the grayscale range expanded by hot sources on the ground. Among all comparison methods, PE produces the image with the lowest brightness, worst sharpness, and lowest contrast: invalid gray levels occupied by a minimal proportion of pixels hamper the tone mapping by consuming dynamic range during the mapping process. The detail enhancement of BF&DDE is significantly better than that of PE, but the edges are not sharp enough and the overall visual effect is unnatural. GF&DDE and FGF&PE do not match BF&DDE in enhancing tiny local details such as leaves and eaves. PAGF&IDE highlights local details but suffers from poor local contrast. In contrast, the proposed method outperforms all other methods in contrast, acutance, and overall visual effect.
As shown in Figure 10, Image 4 is a 16-bit image captured under sunny conditions, encompassing both distant and close targets; the effective signal is weak and the contrast low. When PE linearly maps the original 16-bit dynamic range to an 8-bit scale, the information is compressed, resulting in a darker image that hampers detailed observation. Among the five compared methods, BF&DDE demonstrates superior detail enhancement; nevertheless, as indicated by the arrow, it introduces halos along strong edges that degrade visual perception. GF&DDE, FGF&PE, and PAGF&IDE exhibit average overall enhancement, with GF&DDE failing to effectively suppress noise, and FGF&PE and PAGF&IDE yielding darker grayscales and less pronounced contrast enhancement than BF&DDE. The proposed method excels in global and local contrast enhancement, surpassing all other methods while maintaining minimal graininess and clear details for easy observation. As a result, it achieves an outstanding visual effect.

4.2. Quantitative Evaluation

The average gradient (AG) [8] and the perception-based image quality evaluator (PIQE) [18,22] are introduced to quantitatively assess the performance of the different methods. AG is the mean gradient magnitude over all pixels of the image and indicates its level of detail; a greater AG value corresponds to better contrast and richer detail. It is expressed as

$$AG = \frac{1}{(M-1) \times (N-1)} \sum_{x=1}^{M-1} \sum_{y=1}^{N-1} \sqrt{ \frac{ \left( \frac{\partial I(x,y)}{\partial x} \right)^2 + \left( \frac{\partial I(x,y)}{\partial y} \right)^2 }{2} }, \qquad (18)$$
where $M$ and $N$ denote the numbers of rows and columns of the image, respectively; $I(x, y)$ represents the grayscale value at $(x, y)$; and $\partial I(x,y)/\partial x$ and $\partial I(x,y)/\partial y$ are the first-order derivatives of the image along the $x$ and $y$ directions, respectively.
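Equation (18) can be evaluated directly with finite differences; a minimal sketch (function name illustrative):

```python
import numpy as np

def average_gradient(img):
    """Average gradient, Eq. (18): the mean RMS of the horizontal and
    vertical first-order differences over their common (M-1) x (N-1) grid."""
    img = np.asarray(img, dtype=np.float64)
    gx = np.diff(img, axis=1)[:-1, :]   # dI/dx, cropped to (M-1, N-1)
    gy = np.diff(img, axis=0)[:, :-1]   # dI/dy, cropped to (M-1, N-1)
    return float(np.mean(np.sqrt((gx**2 + gy**2) / 2.0)))
```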
PIQE is a no-reference image quality index based on perceptual features, which uses the block structure and noise characteristics of the image to compute a quality score:

$$PIQE = \frac{ \sum_{k=1}^{N_{sa}} D_{sk} + C_1 }{ N_{sa} + C_1 }, \qquad (19)$$
where $D_{sk}$ is the distortion score of the $k$-th spatially active block, $N_{sa}$ is the number of spatially active blocks in the image, and $C_1$ is a constant used for numerical stability. From Equation (19), a lower PIQE indicates better image quality.
Table 1 presents the AG and PIQE values of the four test images processed by the different methods. PE, as a global tone mapping method, yields higher overall brightness but lacks clarity in detail. The proposed method achieves the best AG values, indicating superior local contrast enhancement and image sharpening. Moreover, it exhibits a lower PIQE value, further verifying the elevated level of image quality.
All methods are run 10 times, and the mean running times are shown in Table 2. It should be emphasized that the running times of BF&DDE and GF&DDE are much shorter than those reported in the recent literature [12,13], owing to the more powerful computer used here. The running time of the proposed method is lower than that of most decomposition-based methods, including BF&DDE, GF&DDE, and PAGF&IDE. Its average running time is not significantly different from that of FGF&PE, but it performs better in detail enhancement. In addition, the adaptive guided filter and the DoG filter in the proposed method can be parallelized for acceleration, and the dynamic range mapping can be implemented with a lookup table to further improve efficiency.

4.3. Merits and Limitations

The proposed method performs stably across different scenarios and, compared with the other methods, strikes a good balance between detail enhancement and computational cost. Moreover, the architecture can be further accelerated by parallel computation.
However, the proposed method also has some limitations. First, it is not fast enough to meet the real-time processing requirement (≥25 fps) for images with a 1280 × 1024 resolution, which have recently become increasingly popular; rapid enhancement of HDR infrared images is therefore a worthwhile direction for further research. Second, the method cannot fully preserve the tiny texture details of the original HDR infrared image without introducing halos, gradient reversal, or flat-area noise, so obtaining a clean detail layer remains difficult.

5. Conclusions

In light of the inherent characteristics of infrared images, such as high dynamic range, low contrast, and blurred detail, this paper proposes an HDR detail enhancement method that integrates global and local mapping.
Initially, an adaptive guided filter is used to extract coarse-level detail information, supplemented by the DoG filter for fine-level detail, thereby enhancing high-frequency content. To improve adaptability across multiple scenes, an adaptive visual mask is constructed from the statistical characteristics of the image variance histogram, enabling a more accurate detail gain function for processing the detail layer, effectively suppressing noise while enhancing the visibility of detail. Finally, a combination of global and local mapping compresses the dynamic range of the image: CLAHE enhances contrast in local areas, while the Global Adaptation method improves overall brightness.
In diverse scenarios, the enhanced images display superior grayscale utilization and substantially improved local contrast and overall brightness, effectively suppressing noise while enhancing detail. Both subjective and objective evaluations show that the proposed method has significant advantages.

Author Contributions

Conceptualization, C.L.; investigation, H.Z.; methodology, H.Z.; software, H.Z. and Z.C.; writing—original draft, H.Z.; writing—review and editing, Z.C. and J.C.; funding acquisition, J.C. and C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the West Light Foundation of the Chinese Academy of Sciences (XAB2022YN06) and the Photon Plan in the Xi’an Institute of Optics and Precision Mechanics of the Chinese Academy of Sciences (Grant No. S24-025-III).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhou, Y.; Zhu, Y.; Zeng, B.; Hu, J.; Ouyang, H.; Li, Z. Review of high dynamic range infrared image enhancement algorithms. Laser Technol. 2018, 5, 1–18. [Google Scholar]
  2. Jähne, B. Digital Image Processing; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  3. Karel, Z. Contrast Limited Adaptive Histogram Equalization. In Graphics Gems; Academic Press: Cambridge, MA, USA, 1994; pp. 474–485. [Google Scholar]
  4. Vickers, V.E. Plateau equalization algorithm for real-time display of high-quality infrared imagery. Opt. Eng. 1996, 35, 1921–1926. [Google Scholar] [CrossRef]
  5. Li, S.; Jin, W.; Li, L.; Li, Y. An improved contrast enhancement algorithm for infrared images based on adaptive double plateaus histogram equalization. Infrared Phys. Technol. 2018, 90, 164–174. [Google Scholar] [CrossRef]
  6. Fattal, R.; Lischinski, D.; Werman, M. Gradient domain high dynamic range compression. ACM Trans. Graph. 2002, 21, 249–256. [Google Scholar] [CrossRef]
  7. Kim, J.H.; Kim, J.H.; Jung, S.W.; Noh, C.K.; Ko, S.J. Novel contrast enhancement scheme for infrared image using detail-preserving stretching. Opt. Eng. 2011, 50, 077002. [Google Scholar] [CrossRef]
  8. Zhang, F.; Xie, W.; Ma, G.; Qin, Q. High dynamic range compression and detail enhancement of infrared images in the gradient domain. Infrared Phys. Technol. 2014, 67, 441–454. [Google Scholar] [CrossRef]
  9. Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the IEEE International Conference on Computer Vision, Bombay, India, 4–7 January 1998; pp. 839–846. [Google Scholar]
  10. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  11. Farbman, Z.; Fattal, R.; Lischinski, D.; Szeliski, R. Edge-preserving decompositions for multi-scale tone and detail manipulation. ACM Trans. Graph. 2008, 27, 1–10. [Google Scholar] [CrossRef]
  12. Zuo, C.; Chen, Q.; Liu, N.; Ren, J.; Sui, X. Display and detail enhancement for high-dynamic-range infrared images. Opt. Eng. 2011, 50, 127401. [Google Scholar] [CrossRef]
  13. Liu, N.; Zhao, D. Detail enhancement for high-dynamic-range infrared images based on guided image filter. Infrared. Phys. Technol. 2014, 67, 138–147. [Google Scholar] [CrossRef]
  14. Chen, Y.; Kang, J.U.; Zhang, G.; Cao, J.; Xie, Q.; Kwan, C. Real-time infrared image detail enhancement based on fast guided image filter and plateau equalization. Appl. Opt. 2020, 59, 6407–6416. [Google Scholar] [CrossRef] [PubMed]
  15. Ouyang, H.; Xia, L.; Li, Z.; He, Y.; Zhu, X.; Zhou, Y. An infrared image detail enhancement algorithm based on parameter adaptive guided filtering. Infrared Technol. 2022, 44, 1324–1331. [Google Scholar]
  16. Gooch, B.; Reinhard, E.; Gooch, A. Human facial illustrations: Creation and psychophysical evaluation. ACM Trans. Graph. 2004, 23, 27–44. [Google Scholar] [CrossRef]
  17. Chen, Y.; Zhang, H.; Zhao, Z.; Wang, Z.; Wang, H.; Kwan, C. Ultra-fast detail enhancement for a short-wave infrared image. Appl. Opt. 2022, 61, 5112–5120. [Google Scholar] [CrossRef] [PubMed]
  18. Zhang, F.; Dai, Y.; Chen, Y.; Peng, X.; Zhu, X.; Zhou, R.; Peng, J. Display method for high dynamic range infrared image based on gradient domain guided image filter. Opt. Eng. 2024, 63, 013105. [Google Scholar] [CrossRef]
  19. Marr, D.; Hildreth, E. Theory of edge detection. Proc. R. Soc. Lond.-Ser. B Biol. Sci. 1980, 207, 187–217. [Google Scholar]
  20. Wang, X.; Lv, G.; Xu, L. Infrared dim target detection based on visual attention. Infrared Phys. Technol. 2012, 55, 513–521. [Google Scholar] [CrossRef]
  21. Ahn, H.; Keum, B.; Kim, D.; Lee, H.S. Adaptive local tone mapping based on retinex for high dynamic range images. In Proceedings of the IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 11–14 January 2013; pp. 153–156. [Google Scholar]
  22. Venkatanath, N.; Praneeth, D.; Bh, M.C.; Channappayya, S.S.; Medasani, S.S. Blind image quality evaluation using perception based features. In Proceedings of the National conference on communications (NCC), Mumbai, India, 27 February–1 March 2015; pp. 1–6. [Google Scholar]
Figure 1. Principle diagram of the DoG filter.
Figure 2. Workflow of the proposed method.
Figure 3. The detail layer. (a) The detail layer yielded by AGF. (b) The detail layer yielded by the AGF and DoG filter.
Figure 4. DoG filtering results with different parameters.
Figure 5. Enhancement of an HDR image. (a) The original image with linear mapping; (b) the base layer yielded by AGF; (c) the detail layer yielded by AGF and DoG; (d) the detail layer after gain and dynamic range compression; (e) the fused image with dynamic range compression; and (f) the resulting image.
Figure 6. Test images. (a) Image 1; (b) Image 2; (c) Image 3; and (d) Image 4.
Figure 7. Enhancement results of Image 1. (a) PE; (b) BF&DDE; (c) GF&DDE; (d) FGF&PE; (e) PAGF&IDE; and (f) proposed method.
Figure 8. Enhancement results of Image 2. (a) PE; (b) BF&DDE; (c) GF&DDE; (d) FGF&PE; (e) PAGF&IDE; and (f) proposed method.
Figure 9. Enhancement results of Image 3. (a) PE; (b) BF&DDE; (c) GF&DDE; (d) FGF&PE; (e) PAGF&IDE; and (f) proposed method.
Figure 10. Enhancement results of Image 4. (a) PE; (b) BF&DDE; (c) GF&DDE; (d) FGF&PE; (e) PAGF&IDE; and (f) proposed method.
Table 1. Results of the average gradient (AG) and the perception-based image quality evaluator (PIQE).

Test Image | Index | PE     | BF&DDE | GF&DDE | FGF&PE | PAGF&IDE | Proposed
Image 1    | AG    | 4.816  | 4.984  | 3.306  | 7.588  | 5.562    | 9.191
Image 1    | PIQE  | 60.884 | 55.021 | 63.564 | 67.448 | 70.283   | 54.670
Image 2    | AG    | 24.489 | 21.451 | 28.745 | 24.929 | 19.356   | 45.091
Image 2    | PIQE  | 34.915 | 35.835 | 38.213 | 32.103 | 34.438   | 30.496
Image 3    | AG    | 14.544 | 9.914  | 11.499 | 13.352 | 12.547   | 34.563
Image 3    | PIQE  | 43.142 | 48.261 | 56.488 | 41.182 | 43.223   | 32.914
Image 4    | AG    | 12.918 | 12.458 | 14.059 | 15.348 | 15.446   | 40.924
Image 4    | PIQE  | 54.687 | 45.075 | 53.944 | 47.052 | 49.060   | 47.575
Table 2. Running time comparison of different methods.

Test Image | PE     | BF&DDE | GF&DDE | FGF&PE | PAGF&IDE | Proposed
Image 1    | 0.0290 | 2.9155 | 0.5642 | 0.1307 | 0.5966   | 0.2076
Image 2    | 0.0534 | 3.3794 | 0.7610 | 0.1493 | 0.5460   | 0.1983
Image 3    | 0.0277 | 2.7369 | 0.5669 | 0.1378 | 0.5975   | 0.1690
Image 4    | 0.0285 | 3.2745 | 0.6392 | 0.1425 | 0.5345   | 0.1744