Article

Ghost-Free Multi-Exposure Image Fusion Technology Based on the Multi-Scale Block LBP Operator

Xinrong Ye, Zhengping Li and Chao Xu
1 School of Integrated Circuits, Anhui University, Hefei 230601, China
2 Anhui Engineering Laboratory of Agro-Ecological Big Data, Hefei 230601, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(19), 3129; https://doi.org/10.3390/electronics11193129
Submission received: 26 August 2022 / Revised: 21 September 2022 / Accepted: 26 September 2022 / Published: 29 September 2022

Abstract

This paper proposes a ghost-free multi-exposure image fusion technique based on the multi-scale block LBP (local binary pattern) operator. The method consists of two main stages: first, the texture variation, brightness, and spatial consistency weight maps of the source images are computed, and these three image features are combined into an initial weight map; then, a multi-resolution method fuses the images according to this weight map to obtain the resulting image. The main advantage of this technique lies in extracting the details of the source images with the multi-scale block LBP operator, which preserves the details of the brightest and darkest areas in high dynamic range scenes as well as the texture features of the source images. Another advantage is a new LBP operator-based motion detection method for fusing multi-exposure images of dynamic scenes containing moving objects. In addition, this paper also studies two spatially consistent weight distribution methods and compares and discusses the effects of these two methods on the results of dynamic image fusion. Extensive experimental comparisons demonstrate the superiority and feasibility of the proposed method.

1. Introduction

One of the basic functions of digital cameras is to capture real-world information as naturally and vividly as the human eye does. However, current digital camera technology still has many limitations in capturing rich texture details, vivid colors, and brightness. This is because the dynamic range of a real scene is large (from $10^{-6}$ to $10^{6}$ cd/m$^2$ [1]), while the dynamic range captured by the sensor of an ordinary camera [2] is small by comparison. A single photo can therefore capture only a small part of the real-world dynamic range, so in HDR (high dynamic range) scenes the picture is often underexposed or overexposed. The most direct way to deal with this problem is to use high dynamic range devices [3] to acquire and display real scenes. However, these devices are often expensive and cannot be used universally [4].
In recent years, software-based solutions have gradually entered people’s field of vision. Compared with hardware-based methods, software-based methods are easy to implement, inexpensive, and suitable for ordinary cameras [5]. Existing software-based solutions fall into two main categories: HDR imaging techniques and multi-exposure image fusion (MEF). HDR imaging technology uses multiple exposures of images captured with ordinary cameras to estimate the camera response function (CRF) [6,7], resulting in a high dynamic range image. Then, the image is compressed, and the tone mapping method [8] is used to convert the high dynamic range image into a low dynamic range (LDR) image so that the image can be visualized on common display devices [9,10,11].
However, the complexity of HDR imaging technology is high and the time required is long, which is not suitable for ordinary cameras [12]. Different from HDR imaging technology, the multi-exposure fusion method does not need to construct a HDR image. It extracts pixels with more information, better exposure, and higher image quality from the input multiple-exposure low dynamic range image, and then fuses them. The resulting fused image does not need to be processed and can be displayed on an ordinary display device [13]. Compared with HDR imaging technology, the multi-exposure image fusion method has lower computational complexity and faster speed, so this type of method is the first choice for ordinary cameras [12].
In the existing multi-exposure image fusion techniques, the source image sequence is obtained under different exposures, and the difference in exposure causes the loss of image details. When there are moving objects in the captured image, if the images are fused directly, ghosting artifacts can appear in the resulting image. The elimination of ghosting artifacts is a major difficulty in the current multi-exposure fusion technology.
As early as the 1980s, Burt et al. [14] proposed the Laplacian pyramid decomposition, which can effectively fuse two images and has since been used in many image fusion techniques. Mertens et al. [12] proposed a multi-resolution fusion method that uses contrast, saturation, and exposure as three quality measures to construct a weight map and then fuses the images with Laplacian pyramid decomposition. The images fused by this method have good saturation but lack detailed information, and the method is not suitable for processing images of dynamic scenes.
Shen et al. [15] proposed a method based on the generalized random walk framework, which deals with the color distortion problem in which the fused image lacks correct color information.
The method proposed by Gu et al. [16] fuses multi-exposure images in a gradient field, generates the gradient value of each pixel by maximizing the structure tensor, and derives the pixel value from the gradient field that best represents the geometric features in the scene. This method can extract some detailed information, but the resulting image is darker, and the details in the dark areas are lost.
Li and Kang et al. [17] used a multi-exposure fusion method based on median filtering and recursive filtering. This method uses color dissimilarity to eliminate the influence of moving objects, and to a certain extent, it can eliminate ghosting caused by motion, but it does not process the color information effectively, resulting in color distortion of the fused image.
Shen et al. [18] proposed a new hybrid exposure weight metric, characterized by the use of mixed exposure weights to guide Laplacian pyramid enhancement. This method can maintain the color appearance and texture structure of the image but cannot preserve edges well, and the algorithm has high complexity and low efficiency.
Bruce et al. [19] calculated the information entropy and the neighborhood radius of the pixel in the logarithmic domain and then set the weight according to the value of the information entropy of each pixel point. The result of this method is that although the information entropy is high, the overall color of the image is dark and distorted.
The main feature of the method of Li and Wei et al. [20] is the use of structure tensors to extract image details. The resulting images have good color saturation and preserve image texture details well, but when this method handles image sequences of dynamic scenes, ghosting artifacts appear in the resulting images.
Hayat et al. [21] proposed a multi-exposure image fusion technique based on multi-resolution fusion, which estimates the weight map using contrast, saturation, exposure, and color dissimilarity, and finally uses a multi-resolution method of pyramid decomposition to obtain the fused image. The benefit of this technique is that it can avoid the seam problem well, but when moving objects appear in multiple images, ghosting artifacts can still be generated.
The method of Huang et al. [22] produces images with rich colors and appropriate exposure, but some texture details are lost in the overexposed and underexposed areas of the image, and fusing image sequences of dynamic scenes results in significant ghosting.
Existing methods thus suffer from brightness imbalance, insufficient retention of detail information in light and dark areas, and ghosting artifacts when fusing dynamic images. To solve these problems, this paper constructed a new ghost-free multi-exposure image fusion model based on the multi-scale block LBP operator. In this method, for multi-exposure image sequences containing moving objects, the multi-scale block LBP operator [8,23,24,25,26,27] was used for local texture extraction in bright and dark areas and for removing the ghosting caused by moving objects. On this basis, a new brightness adaptive method was also proposed to ensure that the fused image had better visibility. After constructing the weight map, the discontinuous and noisy initial weight map was refined using filters, and finally the weight maps were fused by a multi-resolution method to obtain the final result image. The main contributions of this paper can be summarized in the following three points:
  • This method took advantage of the multi-scale block LBP operator (MBLBP) and applied it to the multi-exposure image fusion method for the first time. We designed two quality metrics based on the multi-scale block LBP for local texture extraction and ghost removal in images, respectively.
  • According to the brightness characteristics of the image, a new brightness adaptive method was proposed, which could adaptively adjust the brightness weight of the pixels in the source image sequence according to the brightness and darkness of the pixels.
  • A new method of multi-exposure fusion based on the multi-scale block LBP was proposed. This method could fuse multi-exposure image sequences captured in static scenes and dynamic scenes.
The rest of this article is organized as follows. Section 2 gives a detailed explanation of the proposed technique. Section 3 contains the comparison of experimental results as well as the parameter evaluation. Section 4 discusses the conclusions, limitations, and future work of this methodology.

2. Proposed Method

In this paper, a ghost-free multi-exposure image fusion technique based on the multi-scale block LBP operator was proposed. Our method constructed an initial weight map by computing three image features: texture features, luminance features, and spatial consistency features. We used the multi-scale LBP operator for local texture measurement, which could compute spatial details from LDR source image sequences. A new motion detection method based on the LBP operator was also proposed for fusing image sequences of dynamic scenes containing moving objects. Finally, the method used fast-guided filters to initially refine the weights, and a pyramid decomposition method to fuse images.
The above quality metric calculation methods and experimental results are discussed in detail in the following subsections. Figure 1 shows an overall flow chart of our method.

2.1. Texture Change Weight

When fusing image sequences captured in static scenes, two image features were considered: contrast and brightness. We used contrast as one of the weights so that the fused image retained more texture and edge information. The multi-resolution method [12] retained enough detailed information in the normally exposed regions when fusing multi-exposure images; however, since the texture details of the bright and dark areas were affected by the brightness, some of those details were lost. To address this problem, we proposed a subregion texture detail extraction method based on the multi-scale block LBP.
We let $I_i(x, y)$ denote the source image sequence, where $i = 1, 2, \ldots, K$ indexes the images in the sequence. In a set of multi-exposure images, the average brightness of each pixel $(x, y)$ represented the brightness of that pixel in the HDR scene and was calculated as follows:
$$L(x, y) = \frac{1}{K} \sum_{i=1}^{K} l_i(x, y) \tag{1}$$
where $L(x, y)$ was the normalized average brightness of the pixel at position $(x, y)$ in the image sequence, and $l_i(x, y)$ was the brightness value of the pixel at position $(x, y)$ in the $i$th image.
Next, we used the average brightness $L(x, y)$ at each pixel to divide each image into a bright region $B_i(x, y)$, a dark region $D_i(x, y)$, and a normally exposed region $N_i(x, y)$, calculated as follows:
$$I_i^g(x, y) = \begin{cases} N_i(x, y), & \alpha \le L(x, y) \le 1 - \alpha \\ B_i(x, y), & 1 - \alpha \le L(x, y) \\ D_i(x, y), & L(x, y) \le \alpha \end{cases} \tag{2}$$
where $I_i^g(x, y)$ was the grayscale image and $\alpha$ was the luminance threshold, which usually takes a value of 0.04–0.12 [28]. In this paper, $\alpha$ was set to 0.1 (as in previous methods [17,29]). When the average brightness of the pixel at $(x, y)$ was less than 0.1, the pixel belonged to the dark region; when it was greater than 0.9, the pixel belonged to the bright region; otherwise, it belonged to the normally exposed region. For convenience, the bright and dark regions of the image were denoted together as $IN_i(x, y)$.
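As an illustration, the following Python sketch (not the authors' code) partitions each image into bright, dark, and normally exposed regions from the per-pixel average luminance, following Formulas (1) and (2); using the plain RGB mean as the per-image luminance is an assumption of this example.

```python
# Minimal sketch of Eqs. (1)-(2): partition the scene into bright, dark,
# and normally exposed regions from the average luminance over K exposures.
import numpy as np

def partition_regions(images, alpha=0.1):
    # images: list of K float RGB arrays in [0, 1]
    # per-image luminance (simple RGB mean; the exact definition is an assumption)
    lum = np.stack([img.mean(axis=2) for img in images])   # shape (K, H, W)
    L = lum.mean(axis=0)                                    # Eq. (1): average over exposures
    dark   = L < alpha               # D_i(x, y)
    bright = L > 1.0 - alpha         # B_i(x, y)
    normal = ~(dark | bright)        # N_i(x, y)
    return normal, bright, dark
```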
For the normally exposed region of the image, the Scharr operator was used to extract texture detail information, and the contrast of each pixel was calculated as follows:
$$G_x = \begin{pmatrix} -3 & 0 & 3 \\ -10 & 0 & 10 \\ -3 & 0 & 3 \end{pmatrix} * N_i(x, y) \tag{3}$$
$$G_y = \begin{pmatrix} -3 & -10 & -3 \\ 0 & 0 & 0 \\ 3 & 10 & 3 \end{pmatrix} * N_i(x, y) \tag{4}$$
$G_x$ and $G_y$ represented the texture changes along the x-axis and y-axis of the image, respectively, and the symbol $*$ denoted the convolution operator. The texture change weight in the normally exposed region was then computed from these convolution results as follows:
$$W_{N_i}^{C}(x, y) = \sqrt{G_x^2 + G_y^2} \tag{5}$$
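A minimal sketch of Formulas (3)–(5), assuming grayscale inputs in [0, 1]; SciPy's convolution routine is used here purely for illustration, and the border handling is an implementation choice.

```python
# Sketch of the texture weight for normally exposed pixels (Eqs. (3)-(5)):
# convolve with the Scharr kernels and take the gradient magnitude.
import numpy as np
from scipy.ndimage import convolve

SCHARR_X = np.array([[-3, 0, 3], [-10, 0, 10], [-3, 0, 3]], dtype=np.float64)
SCHARR_Y = SCHARR_X.T

def scharr_texture_weight(gray, normal_mask):
    gx = convolve(gray, SCHARR_X, mode="nearest")
    gy = convolve(gray, SCHARR_Y, mode="nearest")
    w = np.sqrt(gx ** 2 + gy ** 2)
    return w * normal_mask            # keep only the normally exposed region
```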
To preserve the details of the bright and dark regions in the input image sequence, we used the multi-scale block LBP operator to extract the texture details of these regions. Because this operator had rotation invariance and grayscale invariance and was robust to illumination, it could better extract the texture detail information of these regions. It was calculated as follows:
$$S_i(x, y) = \mathrm{MBLBP}(IN_i(x, y)) \tag{6}$$
$\mathrm{MBLBP}(\cdot)$ was the operator for extracting local texture features of the image. For more details on LBPs, please refer to [25,26,27]. $S_i(x, y)$ was the encoded value of the pixel at $(x, y)$, that is, the LBP eigenvalue, which reflected the texture information of the central pixel $(x, y)$ and its neighborhood.
Then, the fast local Laplacian filter [30] was used to enhance the detail information in $S_i(x, y)$ while retaining the edge information. The texture change weight in the bright and dark regions was as follows:
$$W_{IN_i}^{C}(x, y) = LF_{\mu_1, \mu_2}(S_i(x, y)) \tag{7}$$
$LF_{\mu_1, \mu_2}(\cdot)$ stood for the fast local Laplacian filtering operation, and $\mu_1$ and $\mu_2$ were its parameters. $\mu_1$ represented the edge magnitude threshold: intensity variations smaller than $\mu_1$ were treated as detail, variations larger than $\mu_1$ were treated as edges and preserved, and a value of $\mu_2$ less than 1 meant that the detail was enhanced.
Finally, the two texture change weights were combined to obtain the final contrast weight map, as follows:
$$W_i^{C}(x, y) = W_{N_i}^{C}(x, y) + W_{IN_i}^{C}(x, y) \tag{8}$$
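The sketch below illustrates one common way of computing a multi-scale block LBP code map of the kind used in Formula (6): block means obtained with a box filter are compared across the eight neighbouring blocks of each pixel. The neighbour layout, the border handling, and the omission of the fast local Laplacian enhancement of Formula (7) are assumptions of this illustration, not details taken from the paper.

```python
# Illustrative multi-scale block LBP: each pixel is coded by comparing the
# mean of the eight neighbouring s x s blocks with the mean of the central
# block. Block means come from a box filter, so the code reduces to an
# ordinary 8-neighbour LBP on the mean-filtered image.
import numpy as np
from scipy.ndimage import uniform_filter

def mb_lbp(gray, scale=15):
    means = uniform_filter(gray, size=scale, mode="nearest")
    code = np.zeros(gray.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = np.roll(means, shift=(dy * scale, dx * scale), axis=(0, 1))
        code |= ((neighbour >= means).astype(np.uint8) << bit)
    return code   # 8-bit LBP eigenvalue per pixel
```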

2.2. Brightness

If an image was underexposed or overexposed, it appeared very dark or very bright, its brightness was close to 0 or close to 1, and a large amount of information was lost. Therefore, the basic idea of the brightness weight in this paper was to assign a small weight to the overexposed and underexposed areas of the image and a higher weight to pixels with a brightness close to 0.5 [21]. The brightness weight was computed as follows:
$$W_i^{B}(x, y) = R_l \times G_l \times B_l \tag{9}$$
$R_l$, $G_l$, and $B_l$ represented the Gaussian curves of the red, green, and blue channels [21], respectively, calculated as follows:
$$X_l = \exp\!\left(-\frac{(X - 0.5)^2}{\delta^2}\right), \quad X \in \{R, G, B\} \tag{10}$$
To simplify the formula, the variable $X$ was used to represent the red, green, and blue channels (for example, when $X$ took the value $R$, the formula represented the Gaussian curve of the red channel).
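A small sketch of the base brightness weight of Formulas (9) and (10); the value δ = 0.2 is taken from the later description of Formula (11) and is assumed to apply here as well.

```python
# Sketch of Eqs. (9)-(10): a Gaussian curve centred at 0.5 is evaluated on
# each colour channel and the three responses are multiplied.
import numpy as np

def brightness_weight(img, delta=0.2):
    # img: float RGB image in [0, 1], shape (H, W, 3)
    gauss = np.exp(-((img - 0.5) ** 2) / delta ** 2)
    return gauss[..., 0] * gauss[..., 1] * gauss[..., 2]   # R_l * G_l * B_l
```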
However, some pixels in the image were possibly inherently bright or dark rather than being too bright or too dark due to over- or underexposure. In this case, appropriate weights were needed for the pixels in the bright and dark areas to better preserve the characteristics of the source image. This paper developed a novel and flexible bell curve function and proposed a luminance adaptive function to adjust the weight of pixels in the light and dark areas. The improved calculation was as follows:
$$X_l = \frac{\gamma}{(X - \eta(x_l, X))^2 + \gamma}\,\exp\!\left(-\frac{(X - \eta(x_l, X))^2}{\delta^2}\right), \quad X \in \{R, G, B\},\; x_l \in \{r_l, g_l, b_l\} \tag{11}$$
The usage of the variables $X$ and $x_l$ here was similar to Formula (10) (for example, when $X$ took the value $R$, $x_l$ was $r_l$, and the formula represented the brightness weight curve of the red channel), where $\delta$ was 0.2 [12] and $\gamma$ was 1. $\eta(r_l, R)$, $\eta(g_l, G)$, and $\eta(b_l, B)$ were adaptive functions, and $r_l$, $g_l$, and $b_l$ were the average luminance values of the pixels in the red, green, and blue channels at position $(x, y)$ in the source image sequence, computed with Formula (1). According to the average brightness of the pixel in each channel, the bright, dark, and normal areas of each channel were divided as in Formula (2).
Taking the red channel $\eta(r_l, R)$ as an example, with $B_i^r(x, y)$ the bright area of the channel, $D_i^r(x, y)$ the dark area, and $N_i^r(x, y)$ the normally exposed area, the adaptive function was:
$$\eta(r_l, R) = \begin{cases} 0.5 - \lambda \times \log\!\left(1 + \frac{1}{K}\sum_{i=1}^{K} l_{r,i}(x, y)\right), & (x, y) \in B_i^r(x, y) \\[4pt] 0.5, & (x, y) \in N_i^r(x, y) \\[4pt] 0.5 + \lambda \times \log\!\left(1 + \frac{1}{K}\sum_{i=1}^{K} l_{r,i}(x, y)\right), & (x, y) \in D_i^r(x, y) \end{cases} \tag{12}$$
$l_{r,i}(x, y)$ represented the luminance value of pixel $(x, y)$ of the $i$th image in the red channel. The adaptive functions $\eta(g_l, G)$ and $\eta(b_l, B)$ in the green and blue channels were calculated in the same way as in the red channel.
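The sketch below illustrates the adaptive brightness weight of Formulas (11) and (12) for a single colour channel; the constant λ (`lam`) is not given a numerical value in this section, so the value used here is purely a placeholder.

```python
# Sketch of Eqs. (11)-(12) for one channel. channel_stack holds that channel
# for the K exposures; bright/dark masks come from the partition of Eq. (2)
# applied to the channel's average luminance. lam is an assumed placeholder.
import numpy as np

def adaptive_channel_weight(channel_stack, bright, dark,
                            delta=0.2, gamma=1.0, lam=0.1):
    mean_lum = channel_stack.mean(axis=0)                    # (1/K) * sum_i l_i(x, y)
    eta = np.full_like(mean_lum, 0.5)                        # normally exposed: eta = 0.5
    eta[bright] = 0.5 - lam * np.log1p(mean_lum[bright])     # bright region
    eta[dark]   = 0.5 + lam * np.log1p(mean_lum[dark])       # dark region
    bell  = gamma / ((channel_stack - eta) ** 2 + gamma)     # flexible bell curve
    gauss = np.exp(-((channel_stack - eta) ** 2) / delta ** 2)
    return bell * gauss                                      # weight for each of the K images
```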

2.3. Spatial Consistency

When the image sequence contained moving objects (such as moving people), the weight map also needed to account for the effect of the moving objects; otherwise, ghosting could appear in the resulting image. To solve this problem, this paper proposed a new method for constructing the spatial consistency weight term based on the multi-scale block LBP (MBLBP). Figure 2 shows a flow diagram of the motion detection based on the multi-scale LBP.
First, we computed the LBP feature of each pixel in the source images:
$$T_i(x, y) = \mathrm{MBLBP}(I_i^g(x, y)) \tag{13}$$
where $I_i^g(x, y)$ was the grayscale image. For any two different images $I_i(x, y)$ and $I_j(x, y)$ $(i \ne j)$ in the sequence, the local similarity between them was measured by the Euclidean distance between $T_i(x, y)$ and $T_j(x, y)$ at pixel $(x, y)$, calculated as follows:
$$d_{i,j}(x, y)^2 = \left\lVert T_i(x, y) - T_j(x, y) \right\rVert_2^2 \tag{14}$$
where $\lVert \cdot \rVert_2$ denoted the 2-norm.
Then, the spatial consistency weight term of the image in a moving scene was constructed as follows:
$$W_i^{S1}(x, y) = \sum_{j = 1,\, j \ne i}^{K} \exp\!\left(-\frac{d_{i,j}(x, y)^2}{2\delta_d^2}\right) \tag{15}$$
Here, $W_i^{S1}(x, y)$ represented the initial spatial consistency weight calculated using Formula (15) (S1 was simply a label that distinguished the spatial consistency weight from the texture and brightness weights above and had no other practical significance). The standard deviation $\delta_d$ controlled the influence of the local similarity $d_{i,j}(x, y)$ on the weight $W_i^{S1}(x, y)$; $\delta_d$ was set to 0.05, a value that has been used in another multi-exposure method [29] and whose suitability has been demonstrated by previous work. The design idea was that if pixel $(x, y)$ belonged to a motion region in image $I_i$, the distance $d_{i,j}(x, y)$ between image $I_i$ and every $I_j$ $(i \ne j)$ at $(x, y)$ increased, so the spatial consistency weight $W_i^{S1}(x, y)$ decreased, reducing the weight of image $I_i$ at pixel $(x, y)$.
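A sketch of the motion-detection weight of Formulas (13)–(15), reusing the mb_lbp() helper from the earlier sketch; normalising the LBP codes to [0, 1] so that δ_d = 0.05 operates on a comparable scale is an assumption of this illustration.

```python
# Sketch of Eqs. (13)-(15): pairwise squared distances between MB-LBP codes
# measure local dissimilarity, and the spatial-consistency weight of each
# image decays with that dissimilarity.
import numpy as np

def spatial_consistency_weights(gray_images, scale=15, delta_d=0.05):
    # codes normalised to [0, 1] (an implementation assumption)
    codes = [mb_lbp(g, scale).astype(np.float64) / 255.0 for g in gray_images]
    K = len(codes)
    weights = []
    for i in range(K):
        w = np.zeros_like(codes[i])
        for j in range(K):
            if j == i:
                continue
            d2 = (codes[i] - codes[j]) ** 2          # Eq. (14), per-pixel squared distance
            w += np.exp(-d2 / (2.0 * delta_d ** 2))  # Eq. (15)
        weights.append(w)
    return weights                                    # W_i^{S1}, one map per input image
```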
When the moving objects in the image sequence moved frequently, a "split-channel" approach was adopted to remove the influence of ghosting artifacts more effectively. We computed the LBP eigenvalues $T_{r_i}(x, y)$, $T_{g_i}(x, y)$, and $T_{b_i}(x, y)$ of the red, green, and blue channels at pixel $(x, y)$ of the source image, and then computed $d_{r_{i,j}}(x, y)^2$, $d_{g_{i,j}}(x, y)^2$, and $d_{b_{i,j}}(x, y)^2$ for pixel $(x, y)$ in each channel. The local similarity between the $i$th and $j$th images was then calculated as follows:
$$D_{i,j}(x, y) = d_{r_{i,j}}(x, y)^2 \times d_{g_{i,j}}(x, y)^2 \times d_{b_{i,j}}(x, y)^2 \tag{16}$$
The spatial consistency weight term of the image constructed in this way was calculated as follows:
$$W_i^{S2}(x, y) = \sum_{j = 1,\, j \ne i}^{K} \exp\!\left(-\frac{D_{i,j}(x, y)}{2\delta_d^2}\right) \tag{17}$$
where $W_i^{S2}(x, y)$ was the initial spatial consistency weight calculated using Equation (17). Formula (17) was derived from Formula (15) and added only one step, the computation of $D_{i,j}(x, y)$. When the motion frequency of moving objects in the source image sequence was relatively high, the ghost-removal effect of Equation (17) was better than that of Equation (15); in other cases, the two formulas gave the same result, but Formula (17) had higher complexity. For simplicity, the following discussion of spatial consistency weights is based on Formula (17) unless otherwise stated (including the flow charts of the spatial consistency weights in Figure 1 and Figure 2).
Finally, the weight map was refined by a morphological operator to remove the influence of noise, and the final spatial consistency weight $W_i^{S}(x, y)$ was obtained (the superscripts S and S2, like S1 above, were simply labels used to distinguish these weights from the others and had no other practical significance).
Figure 3 shows the weight map for each quality measure of the method; Figure 3a shows the source image sequence "Arch". Figure 3b shows the local contrast weight map, a texture detector that assigned higher weights to pixels with more detail. Figure 3c shows the luminance weight map, which assigned higher weights to pixels with an average luminance value between 0.1 and 0.9; pixels in light and dark areas used the brightness adaptation function to adjust their weights. The spatial consistency weight in Figure 3d detected moving objects by computing the texture differences between the input images of the source sequence, removing the ghosting artifacts that would otherwise affect the fused image.

2.4. Weight Map Estimation

According to the previous calculations, three image features (texture change weight, brightness weight, and spatial consistency weight) were obtained. In this step, these weight terms were combined to obtain the initial weight map. So that the proposed method extracted the highest-quality regions, the different weight maps were combined by pixel-wise multiplication, calculated as follows:
$$W_i^{initial}(x, y) = \begin{cases} W_i^{C}(x, y) \times W_i^{B}(x, y), & \text{for a static scene,} \\ W_i^{C}(x, y) \times W_i^{B}(x, y) \times W_i^{S}(x, y), & \text{for a dynamic scene,} \end{cases} \tag{18}$$
where $W_i^{C}(x, y)$, $W_i^{B}(x, y)$, and $W_i^{S}(x, y)$ represented the texture change weight, brightness weight, and spatial consistency weight of image $i$, respectively, and $W_i^{initial}(x, y)$ was the initial weight map of this method.
In a static scene, the initial weight map did not need to consider the spatial consistency weight, which was used only for dynamic scenes with moving objects to eliminate the ghosting artifacts caused by them. After generating the initial weight map, to ensure that the weights at each pixel $(x, y)$ summed to 1, the weight map was normalized as follows:
$$W_i^{I}(x, y) = \frac{W_i^{initial}(x, y) + \varepsilon}{\sum_{i=1}^{K}\left(W_i^{initial}(x, y) + \varepsilon\right)} \tag{19}$$
ε was a small positive number (for more details about ε, please refer to [21,29]).
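A short sketch of Formulas (18) and (19); the numerical value chosen for ε here is only a placeholder, since the text specifies it only as a small positive number.

```python
# Sketch of Eqs. (18)-(19): combine the weight maps by pixel-wise
# multiplication and normalise so that the weights sum to 1 at every pixel.
import numpy as np

def initial_weights(contrast, brightness, spatial=None, eps=1e-12):
    # contrast, brightness, spatial: lists of per-image weight maps;
    # spatial is None for static scenes. eps is a placeholder value.
    K = len(contrast)
    if spatial is None:                      # static scene
        w = [contrast[i] * brightness[i] for i in range(K)]
    else:                                    # dynamic scene
        w = [contrast[i] * brightness[i] * spatial[i] for i in range(K)]
    total = sum(wi + eps for wi in w)
    return [(wi + eps) / total for wi in w]  # weights now sum to 1 per pixel
```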

2.5. Weight Map Refinement

Since the initial weight map was usually affected by noise, we needed to denoise it with a filter before fusing the weight map. When we used the smoothing filter to refine the weight map, we needed to keep the sharpness of the image edge texture while removing the influence of noise, so the edge preservation smoothing filter was selected. Many existing filters could be used to optimize the weight map, such as recursive filters [31], bilateral filters [32], etc. Due to the low complexity of the fast-guided filter [33], it was used in this method to refine the initial weight map, which was calculated as follows:
$$W_i^{F}(x, y) = \mathrm{FGF}_{r,\, ep,\, \epsilon}\!\left(W_i^{I}(x, y),\, W_i^{I}(x, y)\right) \tag{20}$$
$W_i^{F}(x, y)$ represented the refined weight map and $\mathrm{FGF}_{r,\, ep,\, \epsilon}(I, W)$ represented the fast-guided filtering operation. $r$, $ep$, and $\epsilon$ were the parameters of the filter: $r$ was the window radius, $ep$ was the regularization parameter that controlled the smoothness of the filter, and $\epsilon$ was the subsampling rate. $I$ and $W$ represented the guide image and the image to be filtered, respectively. In this method, the weight map $W_i^{I}(x, y)$ was used as both the guide image and the input image. Finally, the weight map was normalized to obtain the final weight map $W_i(x, y)$.
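As an approximation of Formula (20), the sketch below uses the (non-subsampled) guided filter from opencv-contrib in place of the fast guided filter; the radius and regularization values are placeholders, not the parameters used in the paper.

```python
# Sketch of the weight-map refinement of Eq. (20): the weight map serves as
# both guide and input, and the refined maps are renormalised.
# Requires the opencv-contrib-python package for cv2.ximgproc.
import cv2
import numpy as np

def refine_weights(weights, radius=8, eps=1e-3):
    refined = []
    for w in weights:
        w32 = w.astype(np.float32)
        refined.append(cv2.ximgproc.guidedFilter(w32, w32, radius, eps))
    total = sum(refined) + 1e-12
    return [r / total for r in refined]      # renormalise to obtain W_i(x, y)
```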

2.6. Image Fusion

If the image was fused at a single resolution (that is, the weight map was directly fused with the image), due to the different exposure of each input image, they had different absolute intensities, which led to seam problems in the fused image. To solve this problem, a pyramid-based multi-resolution method [12] was used to fuse images in this paper. In this method, the source image was decomposed into a Laplacian pyramid, the above weight map was decomposed into a Gaussian pyramid, and then the Laplacian pyramid and the Gaussian pyramid were fused at each level, and the calculation was as follows:
$$L\{F\}^{l} = \sum_{i=1}^{K} G\{W_i(x, y)\}^{l}\, L\{I_i(x, y)\}^{l} \tag{21}$$
$G\{W_i(x, y)\}^{l}$ denoted the Gaussian pyramid of the weight map, $L\{I_i(x, y)\}^{l}$ denoted the Laplacian pyramid of the input image, $L\{F\}^{l}$ was the fused Laplacian pyramid, and $l$ denoted the pyramid level. Finally, we reconstructed the image from $L\{F\}^{l}$ to obtain the final result. Since the multi-resolution method fused image features rather than image intensities, no seam problem appeared in the image fusion process.
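A compact sketch of the pyramid fusion of Formula (21); the pyramid depth and the final clipping are implementation choices of this illustration.

```python
# Sketch of Eq. (21): Gaussian pyramids of the weight maps multiply Laplacian
# pyramids of the inputs level by level; the fused pyramid is then collapsed.
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = []
    for l in range(levels - 1):
        up = cv2.pyrUp(gp[l + 1], dstsize=(gp[l].shape[1], gp[l].shape[0]))
        lp.append(gp[l] - up)
    lp.append(gp[-1])
    return lp

def fuse(images, weights, levels=6):
    # images: float32 RGB arrays in [0, 1]; weights: matching 2-D weight maps
    fused = None
    for img, w in zip(images, weights):
        lp = laplacian_pyramid(img.astype(np.float32), levels)
        gp = gaussian_pyramid(w.astype(np.float32), levels)
        contrib = [l * g[..., None] for l, g in zip(lp, gp)]   # weight each level
        fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]
    out = fused[-1]
    for l in range(levels - 2, -1, -1):      # collapse the fused pyramid
        out = cv2.pyrUp(out, dstsize=(fused[l].shape[1], fused[l].shape[0])) + fused[l]
    return np.clip(out, 0.0, 1.0)
```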

3. Experimental Analysis

In this section, to verify the effectiveness of this method, 14 sets of multi-exposure image sequences (including static scenes and dynamic scenes) shot in different scenes were collected for experiments. Details about each set of image sequences are described in Table 1.
This paper compared the method with eight existing techniques [12,16,18,19,20,21,22,36], and selected five representative multi-exposure image sequences from them for subjective evaluation, as shown in Figure 4. Among them, three groups (Figure 4a–c) were multi-exposure image sequences of static scenes, and the other two groups (Figure 4d,e) were multi-exposure image sequences of dynamic scenes. Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9 show subjective evaluations of our method.

3.1. Subjective Evaluation

To demonstrate the effectiveness of the method proposed in this paper, in this section we evaluate the method in detail in both static and dynamic scenarios.

3.1.1. Static Scene

Figure 5 shows the processing results of different methods on the “Lighthouse” image sequence. To more clearly compare the processing results of the image sequence “Lighthouse” by different methods, we used the form of a list to compare the main features (well-defined edges, correct brightness or color, and exposure) of the fused images. A detailed comparison of Figure 5 is shown in Table 2 below.
Figure 6 shows the processing results of the “House” image sequence, which captured an indoor room with an outdoor garden outside the window. A detailed comparison of Figure 6 is shown in Table 3 below.
Figure 7a–h show the processing results of the previous multi-exposure fusion methods and the method proposed in this paper on the image sequence “Studio”. A detailed comparison of Figure 7 is shown in Table 4 below.

3.1.2. Dynamic Scene

This subsection presents experimental results of processing image sequences captured in dynamic scenes, discusses related results, and shows close-ups of some of the results for better comparison.
The LBP-based fusion method proposed in this paper had one variable parameter: the scale factor, which controlled the size of each pixel's LBP neighborhood and was set to 15. When constructing the spatial consistency weight term, to distinguish the two ways of constructing it, we denoted the method using Formula (15) as MBLBP-1 and the method using Formula (17) as MBLBP-2.
The processing results of the “Candle” image sequence by different methods are shown in Figure 8a–h. To more clearly compare the processing results of the image sequence “Candle” by different methods, we used the form of a list to compare the main features (well-defined edges, correct brightness or color, and effects of ghosting artifacts) of the fused images. A detailed comparison of Figure 8 is shown in Table 5 below.
Figure 9a–h shows the processing results of the previous multi-exposure fusion methods and the method proposed in this paper on the image sequence "Garden". When the moving frequency of moving objects in an image sequence was high, it was difficult to completely remove ghosting artifacts, because in some areas only a small number of the source images were free of moving objects. Figure 9a–d,f shows the results of Mertens et al. [12], Shen et al. [18], Li and Wei et al. [20], the software Photomatix [36], and Huang et al. [22], respectively. It could be seen from the locally enlarged areas in the figure that these fused images all had very obvious ghosting artifacts. In Figure 9e, although the method of Hayat et al. [21] partially removed the ghosting in the image, obvious shadows could still be observed in the enlarged area. The results of our method are shown in Figure 9g,h; as could be seen from the figure, our method removed ghosting artifacts better than the previous methods. Comparing Figure 9g,h showed that when the moving objects in the image sequence moved frequently, using the MBLBP-2 method for the spatial consistency weight gave better results than the MBLBP-1 method.

3.2. Objective Evaluation

In this subsection, we objectively evaluate and compare the proposed method and seven existing techniques using two quality metrics: $Q^{AB/F}$ [37] and the structural similarity index MEF-SSIM [38].
Table 6 and Table 7 give the $Q^{AB/F}$ [37] and MEF-SSIM [38] values of Mertens et al. [12], Gu et al. [16], Shen et al. [18], Bruce et al. [19], Li and Wei et al. [20], Hayat et al. [21], Huang et al. [22], and the proposed method (MBLBP-2), respectively. The higher the values of MEF-SSIM and $Q^{AB/F}$, the better the image quality. From Table 6 and Table 7, it could be concluded that, in most cases, the method in this paper gave better results than the existing algorithms.

3.2.1. Evaluation Using Q A B / F

The image quality metric $Q^{AB/F}$ is an objective quality assessment metric for fused images that reflects how much information from the source image sequence is retained after fusion. $Q^{AB/F}$ represented the degree of preservation of edge detail information and was calculated as follows:
$$Q^{AB/F} = \frac{\sum_{x=1}^{H_1}\sum_{y=1}^{H_2}\left(Q^{AF}(x, y)\,\omega^{A}(x, y) + Q^{BF}(x, y)\,\omega^{B}(x, y)\right)}{\sum_{x=1}^{H_1}\sum_{y=1}^{H_2}\left(\omega^{A}(x, y) + \omega^{B}(x, y)\right)} \tag{22}$$
Here, $Q^{AF}(x, y) = Q_g^{AF}(x, y)\, Q_0^{AF}(x, y)$, where $Q_g^{AF}(x, y)$ and $Q_0^{AF}(x, y)$ were the edge strength and orientation preservation values at pixel $(x, y)$, respectively, and $\omega^{A}(x, y)$ and $\omega^{B}(x, y)$ represented the weight coefficients of each edge. The higher the $Q^{AB/F}$ value, the more edge detail information from the input image sequence was preserved in the fused image.
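For illustration, the aggregation step of Formula (22) might look as follows, assuming the per-pixel preservation maps Q^{AF}, Q^{BF} and the edge-based weights ω^A, ω^B have already been computed following [37].

```python
# Sketch of the aggregation in Eq. (22): a weighted average of the per-pixel
# edge preservation values over the whole image.
import numpy as np

def q_abf(q_af, q_bf, w_a, w_b):
    num = (q_af * w_a + q_bf * w_b).sum()
    den = (w_a + w_b).sum()
    return num / den
```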
Table 6 compares the $Q^{AB/F}$ values of the proposed method with those of seven other algorithms. It could be observed from Table 6 that our method preserved image details well in most cases. The $Q^{AB/F}$ value of this method was lower than that of other methods for the image sequences "Chinese Garden", "Garden", "Kluki", and "Laurenziana". When refining the weight map, to prevent the contour details of the image from being blurred as much as possible, the weight map itself was used as the guide image of the filter, which might have partially affected the $Q^{AB/F}$ of the fused image.

3.2.2. Evaluation Using MEF-SSIM

The MEF-SSIM is one of the objective quality indicators widely used to evaluate fused images. This indicator could detect the overall similarity between the resulting image of this method and the corresponding source image sequence. The maximum value of MEF-SSIM was one and the minimum value was zero. The higher the value, the higher the similarity between the resulting image and the source image sequence, and the better the image quality. The specific calculation method of this indicator was as follows:
$$Q(Y) = \prod_{m=1}^{M} \left[Q_m(Y)\right]^{\beta_m} \tag{23}$$
where $Q(Y)$ was the final MEF-SSIM value, $M$ was the maximum number of scales into which the image could be decomposed, $Q_m(Y)$ was the average image quality at scale $m$, and $\beta_m$ was the corresponding weight.
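The corresponding aggregation of Formula (23) is straightforward once the per-scale scores and exponents are available, as in the following sketch.

```python
# Sketch of Eq. (23): multiply the per-scale quality scores, each raised to
# its corresponding weight, as in the MEF-SSIM formulation of [38].
import numpy as np

def mef_ssim_overall(q_per_scale, beta):
    return float(np.prod(np.asarray(q_per_scale) ** np.asarray(beta)))
```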
From Table 7, we could see that the proposed method achieved the maximum MEF-SSIM value in most cases. When dealing with the image sequences "Arch", "Belgium House", "Cave", and "Lighthouse", the MEF-SSIM value of this method was lower relative to other methods. This was because, when extracting the texture details of the overexposed and underexposed areas, the algorithm enhanced the detail information of those areas to a certain extent in order to better preserve it. Therefore, when brighter and darker regions occupied a relatively large proportion of the source image sequence, this had a certain impact on the MEF-SSIM of the fused image.

3.2.3. Analysis of Free Parameter

The free parameter (the scale factor) was set to 15 in all experiments, as previously described. We used the evaluation metrics MEF-SSIM and $Q^{AB/F}$ to examine the influence of this parameter on the experimental results. We set the scale factor to 12, 15, 18, 21, 24, and 28 and, for each value, tested the MEF-SSIM and $Q^{AB/F}$ values of the method (MBLBP-2) on each image sequence. Then, for each value, the averages of these two quality indicators over the image sequences were calculated. Figure 10 shows the trend of the quality indicators MEF-SSIM and $Q^{AB/F}$ when our method used different scale factors; Figure 10a,b demonstrates that when this parameter was set to 15, the proposed method achieved the highest MEF-SSIM and $Q^{AB/F}$ values.

3.2.4. Comparing the Performance of the LBP and the Multi-Scale LBP

In this section, we applied the LBP with a fixed-size neighborhood and compared its results with those of the multi-scale block LBP-based method proposed in this paper, as shown in Figure 11.
From Figure 11, it could be concluded that when the method applied the LBP operator with a fixed neighborhood size, the evaluation indicators MEF-SSIM and Q A B / F of the obtained image sequence processing results were generally lower than the processing results of the method proposed in this paper. The method based on the multi-scale block LBP in this paper exhibited great advantages.

4. Conclusions and Future Work

In this paper, a novel ghost-free multi-exposure image fusion method was proposed. In this model, we used a novel multi-scale LBP operator-based method to extract local texture features of bright and dark regions from source images. If the captured image contained moving objects, we adopted the method based on the MBLBP operator to remove ghosting artifacts in fused images. Through quantitative and qualitative analysis of this method, the experimental results showed that this method was superior to many representative image fusion methods in both visual quality and objective analysis. This method could obtain images with good contrast, color and brightness, and could remove ghosting artifacts in fused images. In addition, this paper also studied two spatially consistent weight distribution methods and compared and discussed the effects of these two methods on the results of dynamic image fusion.
The methods proposed in this paper followed the weighted sum-based framework for multi-exposure image fusion without ghosting, but these methods still suffer from some common limitations. When processing an image sequence containing moving objects, if moving objects appear in most positions of the image sequence, and the same moving object appears in different positions in different images, the effect of this method in eliminating ghosting artifacts may not be good enough. For example, in the image sequence “Garden”, there were many moving people in the scene. Since a certain pixel position in most of the source images was covered by different moving objects or the same moving objects, it was difficult to eliminate all the ghosting in this scenario. To solve this limitation, it is one of our future tasks to improve the multi-exposure image fusion method and develop a method that can eliminate ghosting more effectively.
In the future, we intend to further exploit the advantages of the multi-scale LBP operator to improve the ghost removal method in this paper and address the limitations mentioned above.

Author Contributions

Methodology, X.Y.; software, Z.L.; validation, Z.L. and C.X.; resources, X.Y. and Z.L.; data curation, C.X.; writing—original draft preparation, X.Y.; writing—review and editing, Z.L.; visualization, X.Y.; supervision, Z.L.; project administration, Z.L.; and funding acquisition, Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by The National Key Research and Development Program of China (2019YFC0117800).

Data Availability Statement

Data sharing not applicable.

Acknowledgments

Input image sequences were provided by Li Huang, School of Electronic Information Engineering, Anhui University, Hefei Anhui, China.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jiao, S.; Liu, Y.; Liu, W. The Synthesis of High Dynamic Range Image Based on Local Variance. In Proceedings of the 7th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 13–14 December 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 560–563. [Google Scholar]
  2. Wang, S.; Zhao, Y. A Novel Patch-Based Multi-Exposure Image Fusion Using Super-Pixel Segmentation. IEEE Access 2020, 8, 39034–39045. [Google Scholar] [CrossRef]
  3. Reinhard, E.; Heidrich, W.; Debevec, P.; Pattanaik, S.; Ward, G.; Myszkowski, K. High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting; Morgan Kaufmann: Burlington, MA, USA, 2010. [Google Scholar]
  4. Shao, H.; Jiang, G.; Yu, M.; Song, Y.; Jiang, H.; Peng, Z.; Chen, F. Halo-Free Multi-Exposure Image Fusion Based on Sparse Representation of Gradient Features. Appl. Sci. 2018, 8, 1543. [Google Scholar] [CrossRef]
  5. Akçay, Ö.; Erenoğlu, R.; Avşar, E. The effect of JPEG Compression in Close Range Photogrammetry. Int. J. Eng. Geosci. 2017, 2, 35–40. [Google Scholar] [CrossRef]
  6. Huo, Y.; Zhang, X. Single image-based HDR image generation with camera response function estimation. IET Image Processing 2017, 11, 1317–1324. [Google Scholar] [CrossRef]
  7. Pribyl, B.; Chalmers, A.; Zemcik, P.; Hooberman, L.; Cadik, M. Evaluation of feature point detection in high dynamic range imagery. J. Vis. Commun. Image Represent. 2016, 38, 141–160. [Google Scholar] [CrossRef]
  8. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  9. Khan, I.R.; Rahardja, S.; Khan, M.M.; Movania, M.M.; Abed, F. A Tone-Mapping Technique Based on Histogram Using a Sensitivity Model of the Human Visual System. IEEE Trans. Ind. Electron. 2018, 65, 3469–3479. [Google Scholar] [CrossRef]
  10. Eilertsen, G.; Mantiuk, R.K.; Unger, J. A comparative review of tone-mapping algorithms for high dynamic range video. Comput. Graph. Forum 2017, 36, 565–592. [Google Scholar] [CrossRef]
  11. Lee, D.-H.; Fan, M.; Kim, S.-W.; Kang, M.-C.; Ko, S.-J. High dynamic range image tone mapping based on asymmetric model of retinal adaptation. Signal Processing-Image Commun. 2018, 68, 120–128. [Google Scholar] [CrossRef]
  12. Mertens, T.; Kautz, J.; van Reeth, F. Exposure Fusion. In Proceedings of the 15th Pacific Conference on Computer Graphics and Applications (PG’07), Washington, DC, USA, 29 October–2 November 2007; pp. 382–390. [Google Scholar]
  13. Kou, F.; Li, Z.; Wen, C.; Chen, W. Edge-preserving smoothing pyramid based multi-scale exposure fusion. J. Vis. Commun. Image Represent. 2018, 53, 235–244. [Google Scholar] [CrossRef]
  14. Burt, P.J.; Adelson, E.H. The Laplacian Pyramid as a Compact Image Code. In Readings in Computer Vision; Morgan Kaufmann: Burlington, MA, USA, 1987. [Google Scholar]
  15. Shen, R.; Cheng, I.; Shi, J.; Basu, A. Generalized Random Walks for Fusion of Multi-Exposure Images. IEEE Trans. Image Processing 2011, 20, 3634–3646. [Google Scholar] [CrossRef] [PubMed]
  16. Gu, B.; Li, W.; Wong, J.; Zhu, M.; Wang, M. Gradient field multi-exposure images fusion for high dynamic range image visualization. J. Vis. Commun. Image Represent. 2012, 23, 604–610. [Google Scholar] [CrossRef]
  17. Li, S.; Kang, X. Fast Multi-exposure Image Fusion with Median Filter and Recursive Filter. IEEE Trans. Consum. Electron. 2012, 58, 626–632. [Google Scholar] [CrossRef]
  18. Shen, J.; Zhao, Y.; Yan, S.; Li, X. Exposure Fusion Using Boosting Laplacian Pyramid. IEEE Trans. Cybern. 2014, 44, 1579–1590. [Google Scholar] [CrossRef]
  19. Bruce, N.D.B. ExpoBlend: Information preserving exposure blending based on normalized log-domain entropy. Comput. Graph. 2014, 39, 12–23. [Google Scholar] [CrossRef]
  20. Li, Z.; Wei, Z.; Wen, C.; Zheng, J. Detail-Enhanced Multi-Scale Exposure Fusion. IEEE Trans. Image Processing 2017, 26, 1243–1252. [Google Scholar] [CrossRef]
  21. Hayat, N.; Imran, M. Multi-exposure image fusion technique using multi-resolution blending. IET Image Processing 2019, 13, 2554–2561. [Google Scholar] [CrossRef]
  22. Huang, L.; Li, Z.; Xu, C.; Feng, B. Multi-exposure image fusion based on feature evaluation with adaptive factor. IET Image Processing 2021, 15, 3211–3220. [Google Scholar] [CrossRef]
  23. Liao, S.; Zhu, X.; Lei, Z.; Zhang, L.; Li, S.Z. Learning multi-scale block local binary patterns for face recognition. In Proceedings of the International Conference on Biometrics, Seoul, Korea, 27–29 August 2007; pp. 828–837. [Google Scholar]
  24. Ojala, T.; Pietikäinen, M.; Harwood, D. A Comparative Study of Texture Measures with Classification Based on Feature Distributions. Pattern Recognit. 1996, 29, 51–59. [Google Scholar] [CrossRef]
  25. Verma, A.; Baljon, M.; Mishra, S.; Kaur, I.; Saini, R.; Saxena, S.; Sharma, S.K. Secure Rotation Invariant Face Detection System for Authentication. CMC—Comput. Mater. Contin. 2022, 70, 1955–1974. [Google Scholar] [CrossRef]
  26. Kamarajugadda, K.K.; Polipalli, T.R. Stride towards aging problem in face recognition by applying hybrid local feature descriptors. Evol. Syst. 2019, 10, 689–705. [Google Scholar] [CrossRef]
  27. Kar, C.; Banerjee, S. Intensity prediction of tropical cyclone using multilayer multi-block local binary pattern and tree-based classifiers over North Indian Ocean. Comput. Geosci. 2021, 154, 104798. [Google Scholar] [CrossRef]
  28. Jinno, T.; Okuda, M. Multiple Exposure Fusion for High Dynamic Range Image Acquisition. IEEE Trans. Image Process. 2012, 21, 358–365. [Google Scholar] [CrossRef] [PubMed]
  29. Liu, Y.; Wang, Z. Dense SIFT for ghost-free multi-exposure fusion. J. Vis. Commun. Image Represent. 2015, 31, 208–224. [Google Scholar] [CrossRef]
  30. Aubry, M.; Paris, S.; Hasinoff, S.W.; Kautz, J.; Durand, F. Fast Local Laplacian Filters: Theory and Applications. ACM Trans. Graph. 2014, 33, 1–14. [Google Scholar] [CrossRef]
  31. Gastal, E.; Oliveira, M.M. Domain transform for edge-aware image and video processing. Acm Trans. Graph. 2011, 30, 1–12. [Google Scholar] [CrossRef]
  32. Paris, S.; Durand, F. A Fast Approximation of the Bilateral Filter Using a Signal Processing Approach. Int. J. Comput. Vis. 2009, 81, 24–52. [Google Scholar] [CrossRef]
  33. Li, Z.G.; Zheng, J.H.; Rahardja, S. Detail-Enhanced Exposure Fusion. IEEE Trans. Image Process. 2012, 21, 4672–4676. [Google Scholar]
  34. Ma, K. Available online: https://kedema.org/ (accessed on 16 July 2022).
  35. HDRsoft Gallery. Available online: http://www.hdrsoft.com/gallery (accessed on 10 July 2022).
  36. Photomatix Pro6.3. Available online: https://www.hdrsoft.com/ (accessed on 3 July 2022).
  37. Xydeas, C.S.; Petrovic, V. Objective image fusion performance measure. Electron. Lett. 2000, 36, 308–309. [Google Scholar] [CrossRef]
  38. Ma, K.; Zeng, K.; Wang, Z. Perceptual Quality Assessment for Multi-Exposure Image Fusion. IEEE Trans. Image Process. 2015, 24, 3345. [Google Scholar] [CrossRef]
Figure 1. Overall flow diagram of the proposed method.
Figure 2. Flow diagram of the motion detection based on the multi-scale LBP.
Figure 3. Comparison of different weights. (a) Source image sequence "Arch". (b) Texture weights. (c) The brightness weights. (d) The spatial consistency weights.
Figure 4. Source image sequences used in experiments. (a) "Lighthouse", 340 × 512 × 3; (b) "House", 340 × 512 × 4; (c) "Studio", 341 × 512 × 5; (d) "Candle", 364 × 512 × 10; (e) "Garden", 754 × 1024 × 5.
Figure 5. Results of different methods on the image sequence "Lighthouse". (a) Mertens et al. [12]. (b) Gu et al. [16]. (c) Shen et al. [18]. (d) Bruce et al. [19]. (e) Photomatix [36]. (f) Hayat et al. [21]. (g) Huang et al. [22]. (h) Proposed method.
Figure 6. Results of different methods on the image sequence "House". (a) Mertens et al. [12]. (b) Gu et al. [16]. (c) Shen et al. [18]. (d) Bruce et al. [19]. (e) Photomatix [36]. (f) Hayat et al. [21]. (g) Huang et al. [22]. (h) Proposed method.
Figure 7. Results of different methods on the image sequence "Studio". (a) Mertens et al. [12]. (b) Gu et al. [16]. (c) Shen et al. [18]. (d) Bruce et al. [19]. (e) Photomatix [36]. (f) Hayat et al. [21]. (g) Huang et al. [22]. (h) Proposed method.
Figure 8. Results of different methods on the image sequence "Candle". (a) Mertens et al. [12]. (b) Shen et al. [18]. (c) DE-MEF [20]. (d) Photomatix [36]. (e) Hayat et al. [21]. (f) Huang et al. [22]. (g) MBLBP-1. (h) MBLBP-2.
Figure 9. Results of different methods on the image sequence "Garden". (a) Mertens et al. [12]. (b) Shen et al. [18]. (c) DE-MEF [20]. (d) Photomatix [36]. (e) Hayat et al. [21]. (f) Huang et al. [22]. (g) MBLBP-1. (h) MBLBP-2.
Figure 10. (a) MEF-SSIM analysis of applying different scale factors. (b) $Q^{AB/F}$ analysis of applying different scale factors.
Figure 11. (a) MEF-SSIM analysis of applying neighborhoods of fixed-size LBP and multi-scale block LBP. (b) $Q^{AB/F}$ analysis of applying neighborhoods of fixed-size LBP and multi-scale block LBP.
Table 1. Information on image sequences relevant to this article.

Image Sequence | Size | Image Origin
Arno | 339 × 512 × 3 | Ma K [34]
Arch | 669 × 1024 × 5 | Ma K [34]
Belgium House | 384 × 512 × 9 | Ma K [34]
Cave | 384 × 512 × 4 | Ma K [34]
Chinese Garden | 340 × 512 × 3 | Ma K [34]
Candle | 364 × 512 × 10 | Ma K [34]
Forrest | 683 × 1024 × 5 | Ma K [34]
Garden | 754 × 1024 × 5 | Ma K [34]
House | 340 × 512 × 4 | Ma K [34]
Kluki | 341 × 512 × 3 | Ma K [34]
Laurenziana | 356 × 512 × 3 | Ma K [34]
Lighthouse | 340 × 512 × 3 | HDRsoft [35]
Office | 340 × 512 × 6 | Ma K [34]
Studio | 341 × 512 × 5 | HDRsoft [35]
Table 2. Detailed comparison of Figure 5.

Method | Well-Defined Edges | Correct Brightness or Color | Well-Exposed
Mertens et al. [12] | No | Yes | Yes
Gu et al. [16] | No | No | No
Shen et al. [18] | No | No | Yes
Bruce et al. [19] | No | Yes | No
Photomatix [36] | No | No | Yes
Hayat et al. [21] | No | Yes | Yes
Huang et al. [22] | Yes | No | No
Proposed method | Yes | Yes | Yes
Table 3. Detailed comparison of Figure 6.

Method | Well-Defined Edges | Correct Brightness or Color | Well-Exposed
Mertens et al. [12] | No | Yes | Yes
Gu et al. [16] | Yes | No | No
Shen et al. [18] | Yes | No | Yes
Bruce et al. [19] | No | No | No
Photomatix [36] | No | No | No
Hayat et al. [21] | No | Yes | Yes
Huang et al. [22] | No | Yes | Yes
Proposed method | Yes | Yes | Yes
Table 4. Detailed comparison of Figure 7.

Method | Well-Defined Edges | Correct Brightness or Color | Well-Exposed
Mertens et al. [12] | No | Yes | Yes
Gu et al. [16] | No | No | No
Shen et al. [18] | No | No | No
Bruce et al. [19] | No | No | No
Photomatix [36] | No | No | No
Hayat et al. [21] | No | Yes | Yes
Huang et al. [22] | No | Yes | Yes
Proposed method | Yes | Yes | Yes
Table 5. Detailed comparison of Figure 8.

Method | Well-Defined Edges | Correct Brightness or Color | Whether to Eliminate Ghosting
Mertens et al. [12] | Yes | Yes | No
Shen et al. [18] | No | No | No
Li et al. [20] | Yes | No | No
Photomatix [36] | No | No | No
Hayat et al. [21] | No | No | Yes
Huang et al. [22] | Yes | Yes | No
MBLBP-1 | Yes | Yes | Yes
MBLBP-2 | Yes | Yes | Yes
Table 6. The test result of indicator $Q^{AB/F}$.

Dataset | Mertens [12] | Gu [16] | Shen [18] | Bruce [19] | Li [20] | Hayat [21] | Huang [22] | Proposed
Arno | 0.687 | 0.336 | 0.486 | 0.617 | 0.680 | 0.687 | 0.629 | 0.693
Arch | 0.692 | 0.527 | 0.460 | 0.623 | 0.652 | 0.683 | 0.697 | 0.716
Belgium House | 0.731 | 0.609 | 0.572 | 0.322 | 0.724 | 0.731 | 0.735 | 0.758
Cave | 0.809 | 0.743 | 0.592 | 0.722 | 0.804 | 0.809 | 0.719 | 0.854
Chinese Garden | 0.774 | 0.619 | 0.499 | 0.715 | 0.882 | 0.499 | 0.825 | 0.796
Candle | 0.624 | 0.603 | 0.551 | 0.586 | 0.610 | 0.583 | 0.625 | 0.642
Forrest | 0.392 | 0.533 | 0.346 | 0.671 | 0.393 | 0.616 | 0.574 | 0.698
Garden | 0.566 | 0.512 | 0.428 | 0.567 | 0.677 | 0.622 | 0.672 | 0.665
House | 0.674 | 0.566 | 0.447 | 0.564 | 0.651 | 0.674 | 0.707 | 0.707
Kluki | 0.745 | 0.631 | 0.524 | 0.682 | 0.722 | 0.745 | 0.759 | 0.763
Laurenziana | 0.746 | 0.365 | 0.538 | 0.727 | 0.745 | 0.746 | 0.814 | 0.767
Lighthouse | 0.708 | 0.634 | 0.515 | 0.647 | 0.706 | 0.708 | 0.702 | 0.712
Office | 0.671 | 0.529 | 0.513 | 0.550 | 0.691 | 0.671 | 0.675 | 0.677
Studio | 0.710 | 0.395 | 0.478 | 0.521 | 0.662 | 0.710 | 0.741 | 0.748

Note: Bold data in the table represent optimal data.
Table 7. The test result of indicator MEF-SSIM.

Dataset | Mertens [12] | Gu [16] | Shen [18] | Bruce [19] | Li [20] | Hayat [21] | Huang [22] | Proposed
Arno | 0.903 | 0.890 | 0.846 | 0.946 | 0.961 | 0.903 | 0.949 | 0.973
Arch | 0.842 | 0.901 | 0.723 | 0.848 | 0.817 | 0.941 | 0.869 | 0.938
Belgium House | 0.893 | 0.897 | 0.709 | 0.859 | 0.778 | 0.893 | 0.918 | 0.902
Cave | 0.961 | 0.894 | 0.788 | 0.944 | 0.950 | 0.908 | 0.935 | 0.956
Chinese Garden | 0.950 | 0.876 | 0.767 | 0.859 | 0.975 | 0.932 | 0.920 | 0.989
Candle | 0.895 | 0.912 | 0.859 | 0.901 | 0.921 | 0.891 | 0.897 | 0.928
Forrest | 0.791 | 0.877 | 0.744 | 0.898 | 0.821 | 0.901 | 0.894 | 0.930
Garden | 0.835 | 0.855 | 0.809 | 0.817 | 0.865 | 0.867 | 0.827 | 0.867
House | 0.915 | 0.878 | 0.717 | 0.863 | 0.903 | 0.898 | 0.919 | 0.937
Kluki | 0.929 | 0.911 | 0.815 | 0.876 | 0.924 | 0.894 | 0.931 | 0.936
Laurenziana | 0.985 | 0.755 | 0.881 | 0.963 | 0.991 | 0.982 | 0.990 | 0.991
Lighthouse | 0.863 | 0.838 | 0.711 | 0.891 | 0.866 | 0.939 | 0.857 | 0.893
Office | 0.977 | 0.916 | 0.756 | 0.954 | 0.976 | 0.973 | 0.979 | 0.983
Studio | 0.920 | 0.774 | 0.880 | 0.911 | 0.883 | 0.920 | 0.911 | 0.928

Note: Bold data in the table represent optimal data.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

