Article

Multiscale Image Matting Based Multi-Focus Image Fusion Technique

Sarmad Maqsood, Umer Javed, Muhammad Mohsin Riaz, Muhammad Muzammil, Fazal Muhammad and Sunghwan Kim *

1 Faculty of Engineering and Technology, International Islamic University, Islamabad 44000, Pakistan
2 Department of Electrical Engineering, City University of Science and Information Technology, Peshawar 25000, Pakistan
3 Center for Advanced Studies in Telecommunication, COMSATS University, Islamabad 44000, Pakistan
4 School of Electrical Engineering, University of Ulsan, Ulsan 44610, Korea
* Author to whom correspondence should be addressed.
Electronics 2020, 9(3), 472; https://doi.org/10.3390/electronics9030472
Submission received: 2 February 2020 / Revised: 4 March 2020 / Accepted: 8 March 2020 / Published: 12 March 2020

Abstract

Multi-focus image fusion is an essential method for obtaining an all-in-focus image from multiple source images. The fused image discards the out-of-focus regions and retains only sharp, focused content. A novel multiscale image fusion system based on contrast enhancement, spatial gradient information, and multiscale image matting is proposed to extract the focused-region information from multiple source images. In the proposed approach, the multi-focus source images are first refined with an image enhancement algorithm so that the intensity distribution is improved for better visualization. An edge detection method based on spatial gradients is then employed to obtain edge information from the contrast-stretched images. This improved edge information is further utilized by a multiscale window technique to produce local and global activity maps. A trimap and decision maps are then obtained from the information provided by these near- and far-focus activity maps. Finally, the fused image is generated by applying the enhanced decision maps and a fusion rule. The proposed multiscale image matting (MSIM) makes full use of spatial consistency and the correlation among source images and therefore achieves superior performance at object boundaries compared with region-based methods. The performance of the proposed method is compared with several recent techniques through qualitative and quantitative evaluation.

1. Introduction

During image acquisition, one of the most important objectives is to obtain a focused region of interest. However, because of the limited depth of field, only the focused region contains sharp edges, whereas the remaining regions appear blurred. Multi-focus image fusion, which combines images with different focused objects, has recently received tremendous attention from researchers. The fused image offers high quality and contains more detailed information [1,2]. Several methods have been developed to fuse multiple images; they are broadly grouped into transform-domain and spatial-domain approaches [3,4].
Transform-domain methods fuse the corresponding transform coefficients and apply the inverse transformation to construct the fused image. Spatial-domain methods are further classified into pixel-based [5,6] and region-based methods [7,8]; they form the fused image by selecting the pixels, regions, or blocks that are in focus. In dynamic scenes, transform-domain methods merge coefficients without considering spatial properties, which introduces artifacts into the fused image. Furthermore, pixel- and region-based methods are unable to produce the best fusion results for images with complicated texture patterns [1].
Zhang et al. [9] used morphological operations to extract focused regions; however, this technique suffers from block artifacts. De et al. [10] also detected the focused region with morphological processing and suggested a technique for calculating an optimized block size, but the fused result still suffers from blocking effects. Later, Bai et al. [11] presented a quadtree decomposition and weighted focus-measure based image fusion technique; it nevertheless yields inaccurate segmentation and poor visual quality in smooth regions. Yin et al. [12] proposed a method based on joint dictionary learning and singular value decomposition (SVD), but it is computationally expensive because of the individual training of the sub-dictionaries and the SVD computation.
Li et al. [13] explored guided filtering (GFF) and spatial information to improve the fusion results by mitigating block effects. Zhang et al. [14] proposed a multi-focus fusion scheme based on a visual saliency method. Recently, image matting has been used to effectively differentiate the focused and out-of-focus regions. Matting methods can be broadly categorized as supervised or unsupervised. Supervised methods require user-specified foreground and background regions, known as a trimap; such techniques therefore need human experts, are time consuming, and produce inconsistent results for images with highly textured backgrounds. Unsupervised methods are preferable because no user interaction is required to achieve a good matting result. Chen et al. [15] used a parametric edge-based method; however, it does not account for artifacts in smooth regions, and its performance depends heavily on hand-crafted features that require considerable expert knowledge. Li et al. [16] proposed multifocus matting (MFM) based image fusion by combining the focused region with its neighboring pixels. This method marginally improves the fusion results and overcomes some shortcomings of spatial-domain methods.
Xiao et al. [17] used depth information to segment an image into focused and blurred regions. Zhang et al. [18] made use of the log spectrum, the Fourier transform, and Bayesian techniques. In [19], a definite focus region is detected using novel multi-scale gradient information. Liu et al. [20] proposed a scale-invariant transform to detect focused regions; however, this technique fails to deliver sharp edges of the focused regions. In [21], the focus information was extracted using texture features. Baohua et al. [22] performed near- and far-focus region detection using sparse representation and guided filtering. In [23], a structure tensor was used to detect high- and low-frequency components, but this technique often fails to provide a visible difference between focused and defocused regions. Liu et al. [24] presented a convolutional neural network (CNN) based multi-focus image fusion technique; however, its precision in recognizing focused blocks is low.
In this paper, a novel multi-focus image fusion method is presented that uses contrast stretching and spatial gradients to enhance the edges of the source images. A multiscale sliding window method detects local and global intensity variations to generate initial activity maps, which are further processed to generate a trimap. An enhanced image matting technique is then used to generate the decision maps. Finally, the fused image is obtained by processing the source images with the enhanced decision maps and applying the fusion rule.

2. Proposed Fusion Technique

The schematic diagram of the proposed algorithm is shown in Figure 1. In the first step, a contrast enhancement scheme is applied to the source images. In the second step, the intensity-transformed image is processed by an edge detection method. In a multi-focus image fusion scheme, the selection of the near-focus and far-focus regions plays a vital role: the region that is in focus during image acquisition has sharper edges than the out-of-focus region, so these edges can be detected easily by applying an appropriate edge detection method.
Edge detection schemes rely heavily on the intensity distribution of an image. A poor intensity distribution can lead to an oversaturated, undersaturated, dark, or bright image, and in any of these cases an edge detection algorithm cannot perform well. To improve the intensity distribution, an intensity transformation is therefore performed first. Figure 2 shows the improvement in edge information by comparing the images before and after the contrast enhancement scheme. In the next step, a sliding window technique with two different scales is applied to both edge-detected images to generate activity maps; both local and global intensity variations are analyzed, and fine details are more prominent under the smaller window scale. These activity maps are then fused together and processed to generate a trimap. Next, the trimap undergoes an image matting transformation to produce refined decision maps, which yield the final fused image. The proposed fusion scheme, along with the equation references, is elaborated in Algorithm 1.
Let $I_i$ denote the source color images of size $M \times N$, where $m = 1, 2, \ldots, M$, $n = 1, 2, \ldots, N$, and $i \in \{1, 2\}$ indexes the near- and far-focus images, respectively.
Algorithm 1: Proposed MSIM based Fusion Technique.
Require: $I_i$, $i \in \{1, 2\}$
  Step 1. Apply contrast enhancement to $I_i$ using Equation (1).
  Step 2. Compute the edge maps of $I_i$ using Equations (2)–(5).
  Step 3. Compute the activity maps $G_{i,\sigma}$ from $Z_i$ with $\sigma = 1$ using Equation (6).
  Step 4. Compute the smoothed activity maps $G_{i,\sigma}(s,t)$ with $\sigma = 1$ by sum-filtering $G_{i,\sigma}$ using Equation (7).
  Step 5. Compute the score maps $\zeta_{i,\sigma}$ with $\sigma = 1$ from $G_{i,\sigma}$ using Equations (8) and (9).
  Step 6. Repeat Steps 3, 4, and 5 with $\sigma = 3$.
  Step 7. Compute the near-focus ($D_1$) and far-focus ($D_2$) decision maps using Equation (10).
  Step 8. Generate the trimap $T$ using Equation (11).
  Step 9. Generate the alpha matte $\alpha$ using Equation (14).
  Step 10. Generate the fused image $I_F$ using Equation (15).
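For orientation, the sketch below strings Algorithm 1 together in Python for 8-bit grayscale inputs. It assumes the helper functions sketched in Sections 2.1, 2.2, and 2.3 below (equalize_hist, local_stimuli_weight, focus_decision_maps, build_trimap, matting_laplacian_gray, fuse_with_matting) are in scope; these names are illustrative choices of this sketch, not the authors' code.

```python
import numpy as np

def msim_fuse(I1, I2):
    """Algorithm 1 end to end for 8-bit grayscale inputs (illustrative sketch)."""
    # Steps 1-2: contrast enhancement (Eq. (1)) and SSGSM-style activity maps (Eqs. (2)-(5))
    Z1 = local_stimuli_weight(equalize_hist(I1))
    Z2 = local_stimuli_weight(equalize_hist(I2))
    # Steps 3-7: multiscale score maps and the combined decision maps (Eqs. (6)-(10))
    D1, D2 = focus_decision_maps(Z1, Z2)
    # Step 8: trimap from the two decision maps (Eq. (11))
    trimap = build_trimap(D1, D2)
    # Steps 9-10: alpha matte from the matting Laplacian, then fusion (Eq. (15))
    L = matting_laplacian_gray(I1.astype(float) / 255.0)
    return fuse_with_matting(I1.astype(float), I2.astype(float), trimap, L)
```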

2.1. Contrast Enhancement

Histogram equalization is an effective method for improving low-contrast images. Non-parametric modified histogram equalization (NMHE) [25] is adopted to enhance the contrast while preserving the mean brightness of the source image $I_i$, i.e.,
$I_i \leftarrow \mathrm{NMHE}(I_i) \quad (1)$
Contrast enhancement improves the distribution of pixel intensities and makes fine details more prominent. Figure 2 shows the improvement in edge information. Figure 2a,b show the far- and near-focus source images, respectively, and their gradients are shown in Figure 2c,d. The contrast-enhanced near- and far-focus images are displayed in Figure 2e,f, and their respective edge maps are shown in Figure 2g,h. The figures clearly show that, after the enhancement algorithm, the gradients of the source images are greatly improved.
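NMHE is not a standard library routine; as a rough stand-in, the minimal sketch below applies plain global histogram equalization to an 8-bit grayscale image. NMHE additionally modifies the histogram to preserve mean brightness, which this sketch does not attempt.

```python
import numpy as np

def equalize_hist(gray):
    """Global histogram equalization of an 8-bit grayscale image
    (a rough stand-in for the NMHE step of Eq. (1))."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum() / gray.size                 # normalized cumulative histogram
    lut = np.round(255.0 * cdf).astype(np.uint8)    # intensity mapping table
    return lut[gray]
```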

2.2. Edge Detection

Edge detection on the contrast-enhanced images is performed with the spatial stimuli gradient sketch model (SSGSM) [26], which focuses primarily on salient intensity points and edges in an image; the unknown region of the coarse decision maps is later determined from the concentrated information in both activity-level maps. The weight of the local stimuli is obtained by detecting the local variation of the perceived brightness at each position. The perceived brightness $P_i$ of an image is given in Equation (2) as
$P_i = \vartheta \log_{10}(I_i) \quad (2)$
where $I_i$ represents the source image and $\vartheta$ denotes a scaling factor.
Gradients capture the sharp intensity variations in the image. Mathematically, the weight is computed from the total difference of the perceived brightness along the x and y directions. The intensity variations of $P_i$ along the x and y axes are denoted by $\varrho_i^x$ and $\varrho_i^y$, respectively, and are calculated from the corresponding gradients $B_i^x$ and $B_i^y$, as given in Equations (3) and (4):
$[B_i^x, B_i^y] \leftarrow \nabla P_i \quad (3)$
$\varrho_i^x = B_i^x\, e^{|B_i^x|}, \qquad \varrho_i^y = B_i^y\, e^{|B_i^y|} \quad (4)$
The weight of the local stimuli $Z_i$ is then expressed as in Equation (5):
$Z_i = \sqrt{(\varrho_i^x)^2 + (\varrho_i^y)^2} \quad (5)$
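A minimal NumPy sketch of Equations (2)–(5) follows, assuming a grayscale image and a scaling factor theta (its value is not specified in this section); the +1 inside the logarithm is an addition of this sketch to avoid log10(0).

```python
import numpy as np

def local_stimuli_weight(I, theta=1.0):
    """Sketch of Eqs. (2)-(5): perceived brightness, its gradients, the
    exponentially weighted variations, and the local-stimuli weight Z_i."""
    P = theta * np.log10(I.astype(float) + 1.0)   # Eq. (2); +1 avoids log10(0)
    By, Bx = np.gradient(P)                       # Eq. (3): gradients along y and x
    rho_x = Bx * np.exp(np.abs(Bx))               # Eq. (4)
    rho_y = By * np.exp(np.abs(By))
    return np.sqrt(rho_x ** 2 + rho_y ** 2)       # Eq. (5)
```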

2.3. Focus Maps

A multiscale sliding window technique is applied to acquire diverse focus maps from the activity maps $Z_i$. Two sliding window scales are used to generate the focus maps. First, a 9 × 9 window is initialized by setting $k = 9$, $l = 9$, and $\sigma = 1$ in Equation (6). The activity maps are divided into blocks of 9 × 9 pixels by using spatial-domain filters, as in Equations (6) and (7):
$G_{i,\sigma}(m,n) = \sum_{q_1=-\sigma k}^{\sigma k} \sum_{q_2=-\sigma l}^{\sigma l} w(q_1, q_2)\, Z_i(m+q_1, n+q_2) \quad (6)$
$G_{i,\sigma}(s,t) = \sum_{(m,n)\in\Omega} G_{i,\sigma}(m,n) \quad (7)$
The activity of each block is stored in the form of map scores. Furthermore, the sums of intensity levels in each near-focus block ($G_{1,\sigma=1}(s,t)$) and far-focus block ($G_{2,\sigma=1}(s,t)$) are calculated and compared with one another to update the score maps ($\zeta_{i,\sigma=1}$), as given in Equations (8) and (9):
$\zeta_{1,\sigma}(m,n) = \begin{cases} 1, & \text{if } G_{1,\sigma}(s,t) > G_{2,\sigma}(s,t) \\ 0, & \text{otherwise} \end{cases} \quad (8)$
$\zeta_{2,\sigma}(m,n) = 1 - \zeta_{1,\sigma}(m,n) \quad (9)$
Similarly, 27 × 27 blocks of pixels are generated by setting $k = 9$, $l = 9$, and $\sigma = 3$ in Equation (6). The activity maps in each near-focus ($G_{1,\sigma=3}(s,t)$) and far-focus block ($G_{2,\sigma=3}(s,t)$) are calculated and compared with one another to update the score maps ($\zeta_{i,\sigma=3}$), as in Equations (8) and (9).
These multiple sliding windows result in multiple near- and far-focus maps, and the multiscale processing reduces the blocking artifacts in the coarse decision maps. Each map offers different characteristic information, which plays a key role in improving the focus maps and the fused image: a small window captures local intensity variations, whereas a large window extracts global variations in an image. This approach has demonstrated better visual quality than the existing methods. The information from the multiscale near-focus maps ($\zeta_{1,\sigma=1}$ and $\zeta_{1,\sigma=3}$) and far-focus maps ($\zeta_{2,\sigma=1}$ and $\zeta_{2,\sigma=3}$) is combined to form a single near-focus map ($D_1$) and far-focus map ($D_2$), respectively, carrying the attributes of both scales, as in Equation (10).
$D_i(m,n) = \mathrm{AND}\big(\zeta_{i,\sigma=1}(m,n),\ \zeta_{i,\sigma=3}(m,n)\big) \quad (10)$
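The multiscale score and decision maps of Equations (6)–(10) can be sketched as follows. A uniform box filter stands in for the window $w(q_1, q_2)$ (an assumption; the paper does not pin down the kernel), and a mean rather than a sum is used since only the comparison in Equation (8) matters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def focus_decision_maps(Z1, Z2, k=9, scales=(1, 3)):
    """Sketch of Eqs. (6)-(10): filter the activity maps at two window
    scales, compare them per pixel, and AND the scales together."""
    score_maps = []
    for sigma in scales:
        size = sigma * k                      # 9x9, then 27x27 neighbourhoods
        G1 = uniform_filter(Z1, size=size)    # Eqs. (6)-(7): local activity
        G2 = uniform_filter(Z2, size=size)
        zeta1 = (G1 > G2).astype(np.uint8)    # Eq. (8)
        zeta2 = 1 - zeta1                     # Eq. (9)
        score_maps.append((zeta1, zeta2))
    # Eq. (10): keep a pixel only if both scales agree
    D1 = score_maps[0][0] & score_maps[1][0]
    D2 = score_maps[0][1] & score_maps[1][1]
    return D1, D2
```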
After obtaining the focus maps, the next step is to generate a trimap that segments the given images into three different regions, i.e., definitely focused, definitely defocused, and unknown. Pixels from the focused region have a greater focus value than pixels in the defocused region [27]. The trimap $T$ of $I_1$ is generated by using $D_1$ and $D_2$, as in Equation (11).
$T \leftarrow \mathrm{TriMap}(D_1, D_2) \quad (11)$
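The exact TriMap operator of Equation (11) is not spelled out above, so the sketch below encodes one plausible rule: pixels on which both decision maps agree become definite foreground or background, and conflicting pixels form the unknown region (encoded as 0.5, an assumption of this sketch).

```python
import numpy as np

def build_trimap(D1, D2):
    """One plausible realization of Eq. (11): agreement of the two
    decision maps gives definite regions, disagreement gives unknown."""
    trimap = np.full(D1.shape, 0.5, dtype=float)   # unknown region
    trimap[(D1 == 1) & (D2 == 0)] = 1.0            # definite near-focus
    trimap[(D1 == 0) & (D2 == 1)] = 0.0            # definite far-focus
    return trimap
```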
Image matting treats a given image $I$ as a composite of a foreground $I^{Fore}$ and a background $I^{back}$, where each pixel is assumed to be a linear combination of $I^{Fore}$ and $I^{back}$. Let $\alpha$ denote the foreground opacity of each pixel; then an image $I$ can be represented as
$I_i = \alpha_i I_i^{Fore} + (1 - \alpha_i) I_i^{back} \quad (12)$
In [28], the quadratic cost function for α is derived as,
$J(\alpha) = \alpha^{T} L \alpha \quad (13)$
where $L$ is the $N \times N$ matting Laplacian matrix.
$L$ is a symmetric positive semi-definite matrix and is defined in [28] as $L = H - W$, where $H$ is a diagonal matrix and $W$ is a symmetric affinity matrix. The neighborhood affinity $W_M$ is given as
$W_M(i,j) = \sum_{k \,:\, (i,j) \in w_k} \frac{1}{|w_k|} \left( 1 + (\chi_i - \phi_k)^{T} \Big( \nu_k + \frac{\varepsilon}{|w_k|} \Gamma \Big)^{-1} (\chi_j - \phi_k) \right) \quad (14)$
where $|w_k|$ denotes the number of pixels in the window $w_k$, $\phi_k$ and $\nu_k$ represent the mean and variance of the intensities in $w_k$, respectively, $\chi$ represents the pixel color, $\varepsilon$ is a regularization parameter, and $\Gamma$ is the identity matrix.
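A simplified construction of the matting Laplacian $L = H - W$ with the window affinity of Equation (14) is sketched below for grayscale images; the paper works on color images, where $\phi_k$ is a mean vector and $\nu_k$ a covariance matrix, whereas scalar window statistics are used here for brevity.

```python
import numpy as np
from scipy import sparse

def matting_laplacian_gray(img, eps=1e-7, win_rad=1):
    """Grayscale matting Laplacian in the spirit of [28] and Eq. (14)."""
    h, w = img.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    win_size = (2 * win_rad + 1) ** 2

    rows, cols, vals = [], [], []
    for y in range(win_rad, h - win_rad):
        for x in range(win_rad, w - win_rad):
            win = img[y - win_rad:y + win_rad + 1, x - win_rad:x + win_rad + 1].ravel()
            win_idx = idx[y - win_rad:y + win_rad + 1, x - win_rad:x + win_rad + 1].ravel()
            mu, var = win.mean(), win.var()
            d = win - mu
            # Affinity of every pixel pair (i, j) inside this window, Eq. (14)
            aff = (1.0 + np.outer(d, d) / (var + eps / win_size)) / win_size
            rows.append(np.repeat(win_idx, win_size))
            cols.append(np.tile(win_idx, win_size))
            vals.append(aff.ravel())

    W = sparse.coo_matrix((np.concatenate(vals),
                           (np.concatenate(rows), np.concatenate(cols))),
                          shape=(n, n)).tocsr()
    H = sparse.diags(np.asarray(W.sum(axis=1)).ravel())   # diagonal degree matrix
    return H - W                                          # L = H - W
```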
Finally, the alpha matte $\alpha$ obtained from the source images and the trimap corresponds to the focused region of $I_i$, and the fused image is constructed as in Equation (15):
$I_F(m,n) = \alpha(m,n)\, I_1(m,n) + (1 - \alpha(m,n))\, I_2(m,n) \quad (15)$
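Given the Laplacian and the trimap, one common way to obtain the alpha matte is to minimize Equation (13) subject to the trimap constraints through a regularized sparse linear solve, and then composite with Equation (15). The trimap encoding (1 = near focus, 0 = far focus, 0.5 = unknown) and the penalty weight lam are assumptions of this sketch, not values given by the paper.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def fuse_with_matting(I1, I2, trimap, L, lam=100.0):
    """Solve (L + lam*C) alpha = lam*C*t for alpha, then apply Eq. (15)."""
    h, w = trimap.shape
    known = (trimap != 0.5).ravel().astype(float)     # indicator of constrained pixels
    C = sparse.diags(known)
    b = lam * known * trimap.ravel()
    alpha = spsolve((L + lam * C).tocsc(), b).reshape(h, w)
    alpha = np.clip(alpha, 0.0, 1.0)
    a = alpha if I1.ndim == 2 else alpha[..., None]   # broadcast over color channels
    return a * I1 + (1.0 - a) * I2                    # Eq. (15)
```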

3. Results and Discussion

To show the superiority of the proposed MSIM, a comparison was performed with the discrete wavelet transform (DWT) [29], guided filtering based fusion (GFF) [13], the discrete cosine transform (DCT) [30], dense SIFT (DSIFT) [20], the multi-scale morphological focus-measure (MSMFM) [9], and a convolutional neural network (CNN) [24] on a multi-focus image dataset [31]. The proposed method was evaluated through both subjective and objective assessments. All algorithms were tested on an Acer laptop with an Intel(R) Core™ i7 2.6 GHz processor and 12 GB RAM under the MATLAB R2018b environment, and all were executed using the original codes made available by their authors.

3.1. Comparison of Image Matting Result

Generally, a supervised (user-specified) trimap produces better matting results than an unsupervised one; hence, in practice, user-specified trimaps are often needed to achieve high-quality matting. However, making a user-supervised trimap takes time and skill and is not feasible for all kinds of images. In this paper, two image matting strategies are considered, i.e., focus-map based matting and feature based matting. The results of the proposed method are compared with feature based matting and closed-form matting [28]. It is clearly observed that the proposed matting produces better results than the existing techniques (Figure 3).

3.2. Comparison of Image Fusion with Other Methods

The proposed technique is tested on grayscale, color, and dynamic images. Figure 4 shows the results of the proposed MSIM for the “Lab” image. The near- and far-focus source inputs are presented in Figure 4a,b, respectively. The fused results produced by the other methods and the proposed technique are given in Figure 4c–i. To further investigate the effectiveness, the differences between the near-focus image and the fused images are shown in Figure 4j–p. Close-up views enclosed by red and yellow boxes are also shown at the bottom of their respective difference images. The DWT, DCT, and DSIFT methods produce poor edge information and contain artifacts (as shown in the close-ups). Furthermore, the GFF, MSMFM, and CNN methods also provide limited information about the focused regions compared with the proposed MSIM technique. Similarly, Figure 5 illustrates the results produced by the existing algorithms and the proposed method for the “Globe” images. To further analyze the results, close-up views of important regions are placed at the bottom of each difference image. In this image, the boundary region of the hand is difficult to detect since it lies on the focus transition point. The fusion results of the other techniques in Figure 5j–o show distorted regions and a lack of sharpness in the highlighted region, whereas the proposed MSIM method successfully fuses the complementary information from both images, as shown in Figure 5p. It is also important to evaluate the different algorithms on the color images shown in Figure 6a,b and Figure 7a,b. The outcomes of the existing techniques and the proposed method on “Flower” and “Boy” are shown in Figure 6c–i and Figure 7c–i, respectively, and the differences between the fused and out-of-focus source images are illustrated in Figure 6j–p and Figure 7j–p. In both the Flower and Boy images, the existing techniques are unable to mitigate the artifacts and blur in the focus transition area (as noted in the close-ups of the difference images), while the proposed MSIM preserves contrast and detail using the edge features and the multiscale image matting technique.
Another challenge for multi-focus fusion is performance in dynamic scenes, which arise either from camera movement or from object motion, so it is important to verify the effectiveness of MSIM against the existing methods on such scenes. Figure 8a,b shows the near- and far-focus “Girl” images, respectively. The results of MSIM and the existing techniques are shown in Figure 8c–i, while Figure 8j–p shows the difference images. As highlighted by the red and yellow boxes, the DWT, GFF, DCT, DSIFT, and MSMFM methods are unable to completely fuse the focused regions. Moreover, the CNN produces erosion in the fused image, whereas the proposed MSIM successfully mitigates the inconsistencies and limitations of the existing techniques.
These visualizations show that the existing methods produce artifacts, erosion, and halo effects and are unable to produce sharp boundaries between the near- and far-focus regions. The MSIM technique not only identifies the near- and far-focus regions accurately but also fuses the complementary information effectively.

3.3. Objective Evaluation Metrics

Five of the most commonly used metrics are evaluated to verify the superiority and effectiveness of the proposed MSIM method: mutual information (MI) [32], the spatial structural similarity (SSS) metric $Q^{AB/F}$ [33], feature mutual information (FMI) [34], entropy (EN) [35], and visual information fidelity (VIF) [36]. Table 1 shows that the proposed MSIM gives better objective assessment results than the existing methods. Although the results of the existing techniques are comparable in some cases (Flower and Boy), the metric values obtained with the proposed MSIM generally surpass those of the existing techniques. Together with the visual comparisons, this indicates that MSIM produces a visually pleasant, high-quality fusion result in almost all cases and outperforms the existing multi-focus fusion methods.
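For reference, the entropy and mutual information entries of Table 1 can be computed along the following lines. This is a sketch only; the exact normalizations used by the metrics in [32,35] may differ from these plain definitions.

```python
import numpy as np

def entropy(img):
    """Shannon entropy (bits) of an 8-bit grayscale image (cf. EN [35])."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(a, b, bins=256):
    """Mutual information between two images from their joint histogram
    (cf. MI [32], often reported as MI(source1, fused) + MI(source2, fused);
    that aggregation is an assumption, not the paper's exact definition)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())
```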

3.4. Comparison of Computational Efficiency

In this section, the computational efficiency of the different fusion methods is compared. The execution times of these schemes for different images are shown in Table 1. The results show that the proposed MSIM, DSIFT, and GFF consume less time than the DCT, DWT, MSMFM, and CNN algorithms. The MSMFM algorithm uses a multi-scale morphological gradient based feature and therefore takes a longer processing time than DSIFT. GFF integrates the source images using a global weight based scheme; it takes little computation time and produces satisfactory results.
The proposed method utilizes contrast enhancement, SSGSM based edge extraction, and sliding-window based local and global operations to create the activity maps and the trimap. The sliding-window processing, activity-map generation and comparison, and trimap generation are time-consuming tasks. Although the proposed algorithm consumes more processing time than some of the existing methods (e.g., DSIFT and GFF), it produces the best unsupervised image matting and image fusion results.

4. Conclusions

A multiscale image fusion technique is presented for the accurate construction of trimaps, decision maps, and fused images. First, the source images are pre-processed using the NMHE histogram equalization method, and their gradients are computed using SSGSM. A multiscale sliding window technique then calculates the focus maps from the source images, and the focus information is further processed to extract an accurate focused region. The proposed MSIM is robust to noise interference, is flexible enough to combine various fusion strategies, and provides better fusion performance both visually and quantitatively than other state-of-the-art methods on multi-focus image datasets. In the future, the proposed scheme will be extended to other application areas of image processing.

Author Contributions

Conceptualization: S.M. and U.J.; methodology: S.M. and U.J.; software: S.M., U.J. and M.M.; validation: S.M., U.J. and M.M.R.; formal analysis: S.M., U.J. and M.M.R.; investigation: S.M., U.J., M.M.R., and F.M.; data curation: S.M. and U.J.; writing—original draft preparation: S.M., U.J. and M.M.R.; supervision: U.J., and S.K.; project administration: F.M. and S.K.; funding acquisition: S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Research Program through the National Research Foundation of Korea (NRF-2019R1A2C1005920).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, S.; Kang, X.; Fang, L.; Hu, J.; Yin, H. Pixel-level image fusion. A survey of the state of the art. Inf. Fus. 2017, 33, 100–112. [Google Scholar] [CrossRef]
  2. Maqsood, S.; Javed, U. Multi-modal Medical Image Fusion based on Two-scale Image Decomposition and Sparse Representation. Biomed. Signal Process. Control 2020, 57, 101810. [Google Scholar] [CrossRef]
  3. Thang, C.; Anh, D.; Khan, A.W.; Karim, P.; Sally, V. Multi-Focus Fusion Technique on Low-Cost Camera Images for Canola Phenotyping. Sensors 2018, 18, 1887. [Google Scholar]
  4. Goshtasby, A.A.; Nikolov, S. Image fusion: Advances in the state of the art. Inf. Fus. 2007, 8, 114–118. [Google Scholar] [CrossRef]
  5. Yang, Y.; Yang, M.; Huang, S.; Ding, M.; Sun, J. Robust sparse representation combined with adaptive PCNN for multifocus image fusion. IEEE Access 2018, 6, 20138–20151. [Google Scholar] [CrossRef]
  6. Eltoukhy, H.A.; Kavusi, S. A computationally efficient algorithm for multi-focus image reconstruction. SPIE Electr. Imaging Proc. 2003, 332–341. [Google Scholar]
  7. Zribi, M. Non-parametric and region-based image fusion with Bootstrap sampling. Inf. Fus. 2010, 11, 85–94. [Google Scholar] [CrossRef]
  8. Qilei, L.; Xiaomin, Y.; Wei, W.; Kai, L.; Gwanggil, J. Multi-Focus Image Fusion Method for Vision Sensor Systems via Dictionary Learning with Guided Filter. Sensors 2018, 18, 2143. [Google Scholar]
  9. Zhang, Y.; Bai, X.; Wang, T. Boundary finding based multi-focus image fusion through multi-scale morphological focus-measure. Inf. Fus. 2017, 35, 81–101. [Google Scholar] [CrossRef]
  10. De, I.; Chanda, B. Multi-focus image fusion using a morphology-based focus measure in a quad-tree structure. Inf. Fus. 2013, 14, 136–146. [Google Scholar] [CrossRef]
  11. Bai, X.; Zhang, Y.; Zhou, F.; Xue, B. Quadtree-based multi-focus image fusion using a weighted focus-measure. Inf. Fus. 2015, 22, 105–118. [Google Scholar] [CrossRef]
  12. Yin, H.; Li, Y.; Chai, Y.; Liu, Z.; Zhu, Z. A novel sparse-representation based multi-focus image fusion approach. Neurocomputing 2016, 216, 216–229. [Google Scholar] [CrossRef]
  13. Li, S.; Kang, X.; Hu, J. Image fusion with guided filtering. IEEE Trans. Image Process. 2013, 22, 2864–2875. [Google Scholar] [PubMed]
  14. Zhang, B.; Lu, X.; Peo, H.; Liu, H. Multi-focus image fusion algorithm based on focused region extraction. Neurocomputing 2016, 174, 733–748. [Google Scholar] [CrossRef]
  15. Chen, Y.; Guan, J.; Cham, W.K. Robust Multi-Focus Image Fusion Using Edge Model and Multi-Matting. IEEE Trans. Image Process. 2017, 27, 1526–1541. [Google Scholar] [CrossRef]
  16. Li, S.; Kang, X.; Hu, J.; Yang, B. Image matting for fusion of multi-focus images in dynamic scenes. Inf. Fus. 2013, 14, 147–162. [Google Scholar] [CrossRef]
  17. Xiao, J.; Liu, T.; Zhang, Y.; Zou, B.; Lei, J.; Li, Q. Multi-focus image fusion based on depth extraction with inhomogeneous diffusion equation. Signal Process. 2016, 125, 171–186. [Google Scholar] [CrossRef]
  18. Zhang, X.; Li, X.; Feng, Y. A new multifocus image fusion based on spectrum comparison. Signal Process. 2016, 123, 127–142. [Google Scholar] [CrossRef]
  19. Zhou, Z.; Li, S.; Wang, B. Multi-scale weighted gradient-based fusion for multifocus images. Inf. Fus. 2014, 20, 60–72. [Google Scholar] [CrossRef]
  20. Liu, Y.; Liu, S.; Wang, Z. Multi-focus image fusion with dense sift. Inf. Fus. 2015, 23, 139–155. [Google Scholar] [CrossRef]
  21. Liu, Z.; Chai, Y.; Yin, H.; Zhou, J.; Zhu, Z. A novel multi-focus image fusion approach based on image decomposition. Inf. Fus. 2017, 35, 102–116. [Google Scholar] [CrossRef]
  22. Baohua, Z.; Xiaoqi, L.; Haiquan, P.; Yanxian, L.; Wentao, Z. Multi-focus image fusion based on sparse decomposition and background detection. Dig. Signal Process. 2015, 58, 50–63. [Google Scholar]
  23. Li, H.; Li, X.; Yu, Z.; Mao, C. Multifocus image fusion by combining with mixed-order structure tensors and multiscale neighborhood. Inf. Sci. 2016, 349–350, 25–40. [Google Scholar] [CrossRef]
  24. Liu, Y.; Chen, X.; Peng, H.; Wang, Z. Multi-focus image fusion with a deep convolutional neural network. Inf. Fus. 2017, 36, 191–207. [Google Scholar] [CrossRef]
  25. Poddar, S.; Tewary, S.; Sharma, D.; Karar, V.; Ghosh, A.; Pal, S.K. Non-parametric modified histogram equalisation for contrast enhancement. IET Image Process. 2013, 7, 641–652. [Google Scholar] [CrossRef] [Green Version]
  26. Mathew, J.J.; James, A.P. Spatial stimuli gradient sketch model. IEEE Signal Process. Lett. 2015, 22, 1336–1339. [Google Scholar] [CrossRef] [Green Version]
  27. Gonzalez, R.C.; Woods, R.E.; Eddins, S. Digital Image Processing Using MATLAB; Prentice Hall: New York, NY, USA, 2004. [Google Scholar]
  28. Levin, A.; Lischinski, D.; Weiss, Y. A closed-form solution to natural image matting. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 228–242. [Google Scholar] [CrossRef] [Green Version]
  29. Liu, Y.; Wang, Z. Multi-focus image fusion based on wavelet transform and adaptive block. J. Image Graph. 2013, 18, 10. [Google Scholar]
  30. Phamila, Y.A.V.; Amutha, R. Discrete Cosine Transform based fusion of multi-focus images for visual sensor networks. Signal Process. 2014, 95, 161–170. [Google Scholar] [CrossRef]
  31. Hong, R.; Yang, Y.; Wang, M.; Hua, X. Learning Visual Semantic Relationships for Efficient Visual Retrieval. IEEE Trans. Big Data 2015, 1, 152–161. [Google Scholar] [CrossRef]
  32. Hossny, M.; Nahavandi, S.; Creighton, D. Comments on information measure for performance of image fusion. Electron. Lett. 2008, 44, 1066–1067. [Google Scholar] [CrossRef] [Green Version]
  33. Petrović, V.S.; Xydeas, C.S. Sensor noise effects on signal-level image fusion performance. Inf. Fus. 2003, 4, 167–183. [Google Scholar] [CrossRef]
  34. Haghighat, M.B.A.; Aghagolzadeh, A.; Seyedarabi, H. A non-reference image fusion metric based on mutual information of image features. Comput. Electr. Eng. 2011, 37, 744–756. [Google Scholar] [CrossRef]
  35. Liu, Y.; Liu, S.; Wang, Z. A general framework for image fusion based on multiscale transform and sparse representation. Inf. Fus. 2015, 24, 147–164. [Google Scholar] [CrossRef]
  36. Han, Y.; Cai, Y.; Cao, Y.; Xu, X. A new image fusion performance metric based on visual information fidelity. Inf. Fus. 2013, 14, 127–135. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the proposed approach for the image fusion algorithm.
Figure 2. Results of edge detection after contrast enhancement. (a) Near-focus image; (b) far-focus image; (c,d) gradients of (a,b) obtained with the spatial stimuli gradient sketch model (SSGSM) [26]; (e,f) contrast enhancement of (a,b) using non-parametric modified histogram equalization (NMHE) [25]; (g,h) gradients of (e,f) obtained with SSGSM [26].
Figure 3. Results of trimap and alpha matte generation on the Flower image (a): (b–d) trimaps of (a); (e–g) alpha mattes of (a).
Figure 4. Results of image fusion and their difference images on the “Lab” source images. (c–i) Fused images obtained through the fusion schemes. (j–p) Difference images obtained from the fusion results and Figure 4b.
Figure 5. Results of image fusion and their difference images on the “Globe” source images. (c–i) Fused images obtained through the fusion schemes. (j–p) Difference images obtained from the fusion results and Figure 5b.
Figure 6. Results of image fusion and their difference images on the “Flower” source images. (c–i) Fused images obtained through the fusion schemes. (j–p) Difference images obtained from the fusion results and Figure 6b.
Figure 7. Results of image fusion and their difference images on the “Boy” source images. (c–i) Fused images obtained through the fusion schemes. (j–p) Difference images obtained from the fusion results and Figure 7a.
Figure 8. Results of image fusion and their difference images on the “Girl” source images. (c–i) Fused images obtained through the fusion schemes. (j–p) Difference images obtained from the fusion results and Figure 8b.
Table 1. The quantitative assessment of different fusion methods.
Image    Fusion Method    MI [32]   Q^{AB/F} [33]   FMI [34]   EN [35]   VIF [36]   Time (s)

Lab
  DWT [29]      8.2152   0.7239   0.8190   7.0474   0.9138    4.02
  GFF [13]      7.9114   0.7279   0.8191   7.0602   0.9149    3.11
  DCT [30]      8.5263   0.7460   0.9197   6.9819   0.9143   11.56
  DSIFT [20]    8.5212   0.7478   0.9097   7.0759   0.9171    6.65
  MSMFM [9]     8.7995   0.6864   0.9196   6.9885   0.9161    5.79
  CNN [24]      8.6812   0.7471   0.9196   6.9974   0.9159    7.88
  Proposed      8.8322   0.7474   0.9386   7.1759   0.9980    5.08

Globe
  DWT [29]      8.1910   0.7246   0.8892   7.7037   0.9240   11.09
  GFF [13]      8.7664   0.7726   0.8935   7.7412   0.9476    9.78
  DCT [30]      9.1845   0.7731   0.8939   7.6990   0.9374   10.98
  DSIFT [20]    9.1435   0.7746   0.8938   7.6989   0.9437    5.51
  MSMFM [9]     9.3739   0.7711   0.8940   7.7389   0.9439    6.55
  CNN [24]      9.2397   0.7701   0.8927   7.7458   0.9472    8.01
  Proposed      9.4316   0.7733   0.8943   7.7479   0.9480    7.06

Flower
  DWT [29]      5.6452   0.6536   0.8773   7.1701   0.9064   16.93
  GFF [13]      7.3290   0.6944   0.8908   7.1915   0.9200   10.13
  DCT [30]      7.8561   0.6785   0.8861   7.4331   0.9263   12.03
  DSIFT [20]    8.0057   0.6947   0.8857   7.4316   0.9304    4.61
  MSMFM [9]     7.9233   0.6930   0.8915   7.1873   0.9140    5.98
  CNN [24]      3.0773   0.6951   0.8912   7.1872   0.9177    7.32
  Proposed      8.1458   0.7940   0.8936   7.5897   0.9367    6.91

Boy
  DWT [29]      7.5321   0.7206   0.8814   7.5371   0.8935    9.95
  GFF [13]      7.6316   0.7448   0.8717   7.5310   0.8097    5.97
  DCT [30]      8.0852   0.7409   0.8714   7.5669   0.9035   11.08
  DSIFT [20]    8.1765   0.7437   0.8721   7.5388   0.9048    3.80
  MSMFM [9]     8.2081   0.7418   0.8717   7.2402   0.9026    7.55
  CNN [24]      2.9966   0.7466   0.8819   7.5386   0.9072    8.01
  Proposed      8.2961   0.7487   0.8826   7.5673   0.9077    7.75

Girl
  DWT [29]      5.6226   0.5939   0.8189   7.8477   0.6486    5.39
  GFF [13]      8.0820   0.6902   0.8213   7.8475   0.6448    5.88
  DCT [30]      8.7222   0.6788   0.8168   7.8226   0.7332    6.44
  DSIFT [20]    8.8774   0.6834   0.8215   7.8429   0.7394    4.04
  MSMFM [9]     9.0368   0.5427   0.8219   7.6549   0.7397    7.11
  CNN [24]      8.8580   0.6919   0.8208   7.8607   0.7417   10.21
  Proposed      9.0970   0.6936   0.8415   7.8749   0.7468    5.65
