1. Introduction
Statistical analysis of images is restricted by the limited depth of focus of imaging sensors: focus cannot be concentrated equally on all objects present in a scene [1]. A feasible solution to this problem is composite imaging. Composite imaging is one of the techniques used in Multi-focus Image Fusion (MIF), which combines multiple images of the same scene captured at different focus levels [2]. Both spatial and transform domain methods are applicable in MIF [3]. Transform-based methods are also called multiresolution algorithms. The main principle of transform domain algorithms is to maintain perceptual vision with accurate information in a multiresolution representation. Various studies indicate that several multiresolution methods have been developed, such as the discrete wavelet transform (DWT), stationary wavelet transform (SWT), and double density discrete wavelet transform (DDDWT) [4,5,6,7,8,9,10,11,12,13,14,15]. Lack of spatial orientation selectivity is the main issue with pyramid-based approaches, which causes blocking effects in the fused image. These pitfalls can be avoided by using DWT. However, DWT has issues with directionality, shift invariance, and aliasing, and shift-invariance and directional selectivity are primary factors influencing the quality of fused images. Traditional wavelet-based fusion techniques introduce ringing artifacts into the fused images, which restricts the use of DWT for image fusion.
The DTCWT [16,17], one of the most accurate transforms, overcomes the DWT's limitations in shift invariance and directional sensitivity. The directional selectivity and near-shift invariance of DTCWT allow it to properly represent features in the fused image. Designing filters for DTCWT is more challenging, since biorthogonality and phase constraints must be met. The qshift DTCWT is a technique that simplifies filter design in DTCWT and produces superior fusion outcomes. The qshift DTCWT has succeeded as a multiresolution transform intended for image fusion because it can capture directional and shift-invariant characteristics.
The objective of the proposed approach is to create a high-quality fused image that is smoother, has improved visual quality, and is free of distortions and noise, so that users can easily perceive its details. The majority of MIF algorithms have inadequate spatial resolution, which causes blurring. The qshiftN DTCWT approach has a significant impact on fused images: it effectively enhances the resolution of fused images and yields high-quality results. The LP [18,19,20] and MPCA [19] methods also do better in terms of lowering additive noise, reducing distortion, and preserving edges and other crucial content such as image sectors with higher contrast. As shown by the visual and quantitative results, the proposed method eliminates these problems and produces better quality-measurement results. Furthermore, the proposed formulation performs well in MIF.
Several approaches for MIF have been proposed in the past decades. For example, in the Nonsubsampled Contourlet Transform (NSCT) domain, Chinmaya Panigrahy et al. proposed an effective image fusion model using an enhanced adaptive pulse coupled neural network (PCNN). Their methodology fuses the subbands of the source images obtained by the NSCT algorithm, and the adaptive linking strength is estimated using a new fractal-dimension-based focus measure (FDFM) algorithm [21]. A review of region-based fusion techniques was presented by Bikash Meher et al., based on a classification of region-based fusion approaches; fusion objective assessment indicators are emphasized for the comparison of the surveyed approaches [22]. Lin He and colleagues proposed a MIF approach for improving imaging systems, in which a cascade forest is incorporated into MIF to estimate the influence of fusion rules [23]. Samet Aymaz et al. proposed a unique MIF approach based on a super-resolution hybrid method [24].
In the DWT domain, Zeyu Wang et al. [25] proposed a novel MIF approach that uses a convolutional neural network (CNN) to combine the benefits of both spatial and transform domain approaches. Instead of using image blocks or source images, the CNN is employed to amplify features and build different decision maps for different frequency subbands; an additional benefit of the CNN approach is that it allows an adaptive fusion rule in the fusion process. Amin-Naji et al. [26] derived two important focus metrics, the energy of the Laplacian and the variance of the Laplacian, and used them to evaluate the correlation coefficient between the source blocks and artificially blurred blocks in the discrete cosine transform (DCT) domain. A new approach for MIF was proposed by Samet Aymaz et al. [27]: a super-resolution method is applied for contrast enhancement, SWT combined with a discrete Meyer filter is used for decomposition, and the final image is obtained by a new gradient-based fusion rule. Wavelet transforms are introduced by Jinjiang Li et al. [28] to extract high- and low-frequency coefficients; in addition, deep convolutional neural networks are implemented to generate a high-quality fused image by directly learning the mapping between the high-frequency and low-frequency components of the source images [29]. Mansour Nejati et al. [30] presented a new focus metric based on the surface area of regions obtained by the encircled method, and demonstrated this measure's ability to discriminate blurred regions in the fusion method. Bingzhe Wei et al. [31] proposed a novel fusion method that applies a CNN to assist sparse representation (SR), gaining a fused image with more precise and abundant information while impressively reducing computational complexity. Chenglang Zhang [32] proposed a novel MIF approach based on the multiscale transform (MST) and convolution sparse representation (CSR) to address the inherent defects of both MST- and SR-based fusion methods. The proposed approach is compared against the approaches discussed in the literature [21,22,23,24,25,26,27,28,30].
The following are the essential contributions of this work:
- (i) A hybrid method (i.e., qshiftN DTCWT and LP) with MPCA is introduced for the fusion of multi-focus images;
- (ii) The method combines multiple source images into a fused image with better image quality, good directionality, a high degree of shift-invariance, and better visual quality, while retaining more information than the source images;
- (iii) Using the MPCA method, the amount of redundant data is decreased and the most significant components of the source images are extracted;
- (iv) The depth-of-field (DOF) of the advanced imaging system is extended;
- (v) The analysis is carried out both quantitatively and qualitatively;
- (vi) The performance of the proposed approach improves on state-of-the-art techniques developed in recent years.
The rest of the paper is organized as follows:
Section 2 explains the proposed fusion methodology as well as the fusion methods implemented.
Section 3 presents the results of the experimentation.
Section 4 presents the conclusions.
2. The Proposed Fusion Approach
This paper proposes a hybrid approach with MPCA to overcome the blurring and spatial distortions of other algorithms. The algorithm is a novel approach in MIF because this hybrid technique with MPCA outperforms other algorithms proposed in recent years. In the proposed method, the fusion procedure is performed individually on row and column images, which are then averaged to eliminate any noise or distortion generated by the fusion process. The noise elimination process is explained in Section 2.1. Then, the source images are decomposed into low-frequency (LF) and high-frequency (HF) components using LP, which provides information on the sharp contrast changes to which the human visual system is principally sensitive. The LP method is explained in Section 2.2. Then, qshiftN DTCWT is used to fuse the low- and high-frequency components to produce a fused image with good directionality and a high degree of shift-invariance. The qshiftN DTCWT method is explained in detail in Section 2.3. After fusing the low- and high-frequency components, the inverse DTCWT (IDTCWT) is applied to reconstruct them. In the proposed method, MPCA is used to improve the efficiency of the hybrid approach (i.e., qshiftN DTCWT and LP) by reducing the redundant data and extracting the essential components of the fused image (i.e., the all-in-focus image). MPCA also emphasizes the elements that have the most significant impact and are robust to noise. MPCA therefore reduces blurring and spatial distortions, so the fused image has more detailed clarity, clearer edges, and better visual and machine perception. The MPCA method is explained in Section 2.4. Finally, the fused image is formed and available for comparison. Various objective quality metrics are calculated to assess the proposed method's quality; these measures are described in Section 2.6 and Section 2.7, respectively. Figure 1 depicts the flow diagram of the proposed technique, detailed in Section 2.5.
2.1. Noise Elimination Process
The image g(x, y) of size M × N is separated into rows, and the rows are concatenated to generate 1-D vector data g(x) of size MN [18]. It is shown in Algorithm 1.
Algorithm 1 Converting a two-dimensional array to a one-dimensional array
Input: Two-Dimensional Image (I), No. of Rows (M), and No. of Columns (N)
Output: One-Dimensional Vector Data (I′)
Steps:
Begin
  for i = 1 to M
    I′((i − 1) × N + 1 : i × N) = I(i, 1 : N)
End
The 2-D image can be restored from the 1-D vector data by inverting this technique, as shown in Algorithm 2.
Algorithm 2 Converting a one-dimensional array to a two-dimensional array
Input: One-Dimensional Vector Data (I′), No. of Rows (M), and No. of Columns (N)
Output: Two-Dimensional Image (I)
Steps:
Begin
  for i = 1 to M
    I(i, 1 : N) = I′((i − 1) × N + 1 : i × N)
End
Likewise, the image g(x, y) of size M × N is separated into columns, and these columns are concatenated to generate 1-D vector data g(x) of size MN. In the notation of Algorithm 1, the operation is I′ = C2DT1D(I, M, N).
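As a concrete illustration, a minimal Python/NumPy sketch of Algorithms 1 and 2 is given below. The function names C2DT1D and C1DT2D follow the notation above; the row-major ordering is as described, and the column-wise variant simply uses column-major (Fortran) ordering.

```python
import numpy as np

def C2DT1D(I: np.ndarray, M: int, N: int) -> np.ndarray:
    """Algorithm 1: concatenate the M rows of an M x N image into a vector of length MN."""
    assert I.shape == (M, N)
    return I.reshape(M * N)  # row-major flattening

def C1DT2D(v: np.ndarray, M: int, N: int) -> np.ndarray:
    """Algorithm 2: restore the M x N image from the 1-D vector data."""
    assert v.size == M * N
    return v.reshape(M, N)

def C2DT1D_cols(I: np.ndarray, M: int, N: int) -> np.ndarray:
    """Column-wise variant: concatenate the N columns instead of the rows."""
    return I.reshape(M * N, order="F")  # column-major (Fortran) ordering

# Round-trip check: converting to 1-D and back recovers the image exactly.
img = np.arange(12).reshape(3, 4)
assert np.array_equal(C1DT2D(C2DT1D(img, 3, 4), 3, 4), img)
```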
2.2. Laplacian Pyramid (LP)
The Laplacian pyramid [18,19,20] reveals the strong contrast changes to which the human visual system is most sensitive, and it can localize information in both the spatial and frequency domains. LP is used to extract the most relevant elements of the fused image. LP also places a premium on the elements that have the greatest effect and are resistant to noise; as a result, LP minimizes blurring and spatial distortions. The technique for constructing and reconstructing a Laplacian pyramid is shown below. On vector data, image reduction is performed by taking the DCT and applying the inverse DCT (IDCT) to the first half of the coefficients. The reduction function IR is used to conduct level-to-level image reduction.
Reduction Function (Image): the reduction function IR halves the data length from level to level, and the expansion function IE doubles it.
Image Reduction (IR) using DCT: the DCT of the data is taken and the IDCT is applied to the first half of the coefficients.
Expand Image (IE) using DCT: the DCT coefficients are zero-padded to twice their length before the IDCT is applied.
Each image to be fused is formed into a pyramid using Equations (8) and (9). The constructed Laplacian pyramids are denoted by I1 for the first image and I2 for the second image, and the image fusion rule combines them level by level.
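A hedged Python sketch of these operations is given below. It assumes scipy's orthonormal DCT as a stand-in for the DCT used in the paper, vector lengths that are powers of two, and, because the printed fusion rule is not reproduced above, a common per-coefficient absolute-maximum selection rule; the paper's exact Equations (8) and (9) may differ in normalization.

```python
import numpy as np
from scipy.fft import dct, idct

def reduce_dct(x):
    """IR: keep the first half of the DCT coefficients, then invert."""
    c = dct(x, norm="ortho")
    return idct(c[: x.size // 2], norm="ortho")

def expand_dct(x):
    """IE: zero-pad the DCT coefficients to twice the length, then invert."""
    c = dct(x, norm="ortho")
    return idct(np.concatenate([c, np.zeros(x.size)]), norm="ortho")

def laplacian_pyramid(x, levels=3):
    """Each level stores the detail x - IE(IR(x)); the top stores the residual."""
    pyr = []
    for _ in range(levels):
        r = reduce_dct(x)
        pyr.append(x - expand_dct(r)[: x.size])
        x = r
    pyr.append(x)
    return pyr

def fuse_pyramids(p1, p2):
    """Assumed rule: pick the coefficient with the larger magnitude per sample."""
    return [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(p1, p2)]

def reconstruct(pyr):
    """Invert the pyramid: expand the residual and add back each detail level."""
    x = pyr[-1]
    for d in reversed(pyr[:-1]):
        x = expand_dct(x)[: d.size] + d
    return x
```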
2.3. qshiftN Dual-Tree Complex Wavelet Transform
The critically sampled DWT exhibits shift-variance issues in 1-D and poor directional sensitivity in N-D. The DTCWT approach is near-shift-invariant, computationally economical, and directionally selective. The DTCWT [33,34] is an improved wavelet transform that generates real and imaginary transform coefficients. The DTCWT uses two 2-channel FIR filter banks: the output of one filter bank (Tree A) is taken as the real component, whereas the output of the other (Tree B) is taken as the imaginary component. For a d-dimensional signal, the DTCWT uses two critically sampled filter banks in parallel, with a redundancy of 2^d. The three stages of a 1-D DTCWT filter bank are shown in Figure 2. While DWT-fused images have broken borders, DTCWT-fused images are smooth and unbroken. Compared to DWT, which only delivers the constrained directions (0°, 45°, 90°), DTCWT produces six subbands oriented at ±15°, ±45°, and ±75°, both real and imaginary, which improves transform accuracy and preserves more detailed features.
The odd/even filter approach originally proposed for the DTCWT, however, has a number of drawbacks:
There is no clear symmetry in the sub-sampling structure;
The frequency responses of the two trees differ slightly;
The filter sets must be biorthogonal rather than orthogonal because they are linear-phase, so energy conservation does not hold for the signals and coefficients.
Each of these drawbacks is reduced or solved by the qshiftN DTCWT illustrated in Figure 3, in which all filters above level 1 are even-length and much shorter. Above level 1, delays of 1/4 and 3/4 of a sample replace the original DTCWT's 0 and 1/2, so the required half-sample delay difference between the trees is achieved with asymmetric equal-length filters. Because of this asymmetry, the wavelets can be made perfectly orthonormal. Tree-A filters are used for the forward transform, while Tree-B filters are their time reverse and also serve as the reconstruction filters, since they all belong to the same orthonormal set. The two trees therefore have the same frequency response. The individual impulse responses are asymmetric about their midpoints, but the combined complex impulse responses are symmetric, so symmetric extension continues to work at the frame's edges.
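For readers who want to experiment, the sketch below uses the open-source Python dtcwt package as an assumed implementation choice (the paper does not name one); 'qshift_b' selects 14-tap Q-shift filters above level 1, and the averaging and absolute-maximum rules are illustrative stand-ins for the paper's fusion rules.

```python
import numpy as np
import dtcwt

def fuse_dtcwt(img1, img2, nlevels=4):
    # qshift_b selects 14-tap Q-shift filters above level 1.
    transform = dtcwt.Transform2d(biort="near_sym_b", qshift="qshift_b")
    p1 = transform.forward(img1.astype(float), nlevels=nlevels)
    p2 = transform.forward(img2.astype(float), nlevels=nlevels)

    # Assumed rules: average the lowpass bands; per orientation, keep the
    # complex highpass coefficient with the larger magnitude.
    low = 0.5 * (p1.lowpass + p2.lowpass)
    high = tuple(
        np.where(np.abs(h1) >= np.abs(h2), h1, h2)
        for h1, h2 in zip(p1.highpasses, p2.highpasses)
    )
    # Reconstruction corresponds to the IDTCWT step described above.
    return transform.inverse(dtcwt.Pyramid(low, high))
```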
2.4. Modified Principal Component Analysis (MPCA)
MPCA is used to turn correlated variables into uncorrelated variables. This method is useful for analyzing data and determining the optimal features for data representation. The first principal component represents the data with the greatest variance; each subsequent component captures as much as possible of the remaining variance. The data is thus well represented by the first principal component, which also gives the direction of maximum variation. In this paper, the MPCA approach is used to determine the best-represented value of each subband of the source images after implementing the LP-based qshiftN DTCWT method. These values are then multiplied by the matched source image subbands. MPCA's goal is to transfer data from the original space to the eigenspace: by keeping the components with the largest eigenvalues, the variance of the data is enhanced and the covariance is lowered.
Specifically, this method removes redundant data from the source images and extracts the most significant components. Furthermore, MPCA prioritizes the components with the greatest impact and resistance to noise; as a result, MPCA decreases blurring and spatial distortions. The steps of the MPCA algorithm are as follows (a sketch in code is given after the list):
- 1. Arrange each source subband as a column vector and compute the mean of each vector
- 2. Subtract the means and compute the covariance matrix of the centered data
- 3. Calculate the eigenvalues and eigenvectors of the covariance matrices
- 4. Choose the first principal component in the order of the eigenvectors
- 5. Finally, obtain the feature-extracted image
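The sketch below follows the classical PCA fusion formulation of steps 1-5; whether MPCA departs from it in further details is not specified above, so this is an assumption-laden illustration rather than the authors' exact procedure.

```python
import numpy as np

def pca_fuse(b1, b2):
    # Steps 1-2: arrange the two (sub)bands as rows, centre them, and form
    # the 2 x 2 covariance matrix.
    data = np.stack([b1.ravel(), b2.ravel()]).astype(float)
    data -= data.mean(axis=1, keepdims=True)
    cov = np.cov(data)
    # Step 3: eigen-decomposition of the covariance matrix.
    vals, vecs = np.linalg.eigh(cov)
    # Step 4: first principal component (largest eigenvalue), normalised
    # into non-negative fusion weights.
    pc = np.abs(vecs[:, np.argmax(vals)])
    w = pc / pc.sum()
    # Step 5: the feature-extracted (fused) image.
    return w[0] * b1 + w[1] * b2
```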
2.5. Flow Diagram of Proposed Approach
The flow diagram of the complete fusion algorithm is depicted in Figure 1, which comprises two processes: LP-based qshiftN-DTCWT image fusion and MPCA. LP is used for decomposition, DTCWT is used for image fusion, and MPCA is used for feature extraction, as shown in Algorithm 3.
Algorithm 3 LP-based qshiftN DTCWT image fusion process and MPCA
Input: Multi-focus images
Output: All-in-focus image
Steps:
- (i) Load the multi-focus source images;
- (ii) Two multi-focus images (I1 and I2) are used as the source images for the image fusion technique, and each image is divided into row (I1 and I2) and column (I1 and I2) pixels (see the sketch after this algorithm);
- (iii) The row and column pixels of the multi-focus images are converted from a two-dimensional image into a one-dimensional array of data;
- (iv) The Laplacian pyramid is used to divide the resulting 1-D array data of I1 into low-frequency (row and column) and high-frequency (row and column) elements; the I2 image is split into low (row and column) and high (row and column) frequency components in the same way;
- (v) The primary fusion procedure is performed on the row elements (both low- and high-frequency) of I1 and I2 to produce the fused low- and high-frequency row components; similarly, the fusion procedure is applied to the column elements to produce the fused low- and high-frequency column components;
- (vi) The fused row and column frequency components are filtered using the inverse Laplacian pyramid algorithm to produce the row and column elements;
- (vii) The row and column elements of the 1-D array data are converted into a 2-D image;
- (viii) Using qshiftN DTCWT, a final fused image is created from the filtered row and column frequency elements;
- (ix) MPCA is applied to the image fused by qshiftN DTCWT-LP;
- (x) The feature-extracted image, i.e., the all-in-focus image, is obtained.
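As referenced in step (ii), the following sketch shows how the same 1-D fusion routine can be applied to the row- and column-concatenated data and the two restored results averaged to suppress fusion noise; fuse_1d is a hypothetical placeholder for the LP-based fusion of Section 2.2.

```python
import numpy as np

def fuse_rows_cols(I1, I2, fuse_1d):
    """Apply a 1-D fusion routine to row- and column-concatenated data,
    restore both results to 2-D, and average them (steps (ii)-(vii))."""
    M, N = I1.shape
    rows = fuse_1d(I1.reshape(M * N), I2.reshape(M * N)).reshape(M, N)
    cols = fuse_1d(
        I1.reshape(M * N, order="F"), I2.reshape(M * N, order="F")
    ).reshape(M, N, order="F")
    return 0.5 * (rows + cols)  # averaging suppresses fusion noise/distortion

# Example with a trivial placeholder rule; the real fuse_1d would be the
# LP-based fusion of Section 2.2.
I1 = np.random.rand(8, 8)
I2 = np.random.rand(8, 8)
fused = fuse_rows_cols(I1, I2, np.maximum)
```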
2.6. Evaluation of the Proposed Method’s Effectiveness
In this section, the performance of the proposed technique is compared with that of state-of-the-art techniques in two ways: subjectively and objectively. Subjective assessment is a qualitative evaluation of how good the fused image looks. Objective assessment, also called quantitative evaluation, is done by comparing the values of many image fusion efficiency metrics. This quantitative method, called “objective analysis”, is based on mathematical modeling and examines how similar the fused image is to the images from which it was made. Quantitative analysis can be carried out in two ways: with and without a reference image [11,21,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51].
This paper compares fourteen metrics: SF, E(F), SD, AG, RMSE, CC, QAB/F, LAB/F, NAB/F, SSIM, QE, QW, FMI, and PSNR; these measures are explained in Section 2.7.
2.7. Measuring Performance with Objective Quality Metrics
E(F) (Entropy): It assists in the extraction of meaningful information from an image. A high entropy value indicates that the image carries more information.
AG (Average-Gradient): It determines the sharpness and clarity of an image. A high AG value indicates that the fused image has more clarity and sharpness.
CC (Correlation-Coefficient): It assesses the similarity of the all-in-focus image to the input images. For a better fusion process, a higher CC value is desired.
SSIM (Structural-Similarity-Index-Measure): It compares the local patterns of pixel brightness between two images. SSIM takes values in the range −1 to 1.
QE (Edge-dependent Fusion Quality): This metric considers features of the human visual system, such as sensitivity to edge detail. A greater QE value suggests that the fusion process is more efficient.
SD (Standard Deviation): It reflects the contrast of the fused image; a higher SD value indicates higher contrast, and noise is more likely to impact images with lower contrast.
SF (Spatial-Frequency): It is used to determine the overall activity level of an image. A high SF value indicates that the activity level of the all-in-focus image is high.
RMSE (Root Mean Square Error): It is useful for calculating the per-pixel variations caused by image fusion methods. The value of RMSE rises as the similarity decreases.
PSNR (Peak-Signal-to-Noise-Ratio): It compares the similarity of the produced fused image and the reference image to determine image quality. The higher the PSNR value, the better the fusion results.
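For reference, the sketch below implements several of these measures in Python under their common textbook definitions; the exact normalizations used in the paper (histogram bin count, gradient scaling, 8-bit peak value) are assumptions.

```python
import numpy as np

def entropy(img):
    """E(F): Shannon entropy of the grey-level histogram (8-bit assumed)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def average_gradient(img):
    """AG: mean magnitude of the local intensity gradient."""
    gx, gy = np.gradient(img.astype(float))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2))

def spatial_frequency(img):
    """SF: combined row and column frequency of the image."""
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def rmse(ref, fused):
    """RMSE between a reference image and the fused image."""
    return np.sqrt(np.mean((ref.astype(float) - fused.astype(float)) ** 2))

def psnr(ref, fused, peak=255.0):
    """PSNR in dB, relative to an assumed 8-bit peak value."""
    e = rmse(ref, fused)
    return float("inf") if e == 0 else 20 * np.log10(peak / e)
```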
In addition, objective image fusion effectiveness assessment via gradient information [11] is examined. Assessing total fusion performance (TFP), fusion loss (FL), and fusion artifacts (FA) provides a complete analysis of fusion performance. The procedure for calculating these metrics is detailed in [11], and their symbolic representation is as follows: QAB/F denotes the total amount of information transferred from the source images to the all-in-focus image, and higher QAB/F values indicate better performance; LAB/F denotes the total loss of information, and lower LAB/F values indicate better performance; NAB/F denotes the noise or artifacts added to the fused image by the fusion process, and lower NAB/F values indicate better performance.
3. Experimental Results
This paper proposes a qshiftN DTCWT and MPCA in the Laplacian pyramid domain. The quality measures SD, QAB/F, E(F), AG, SF, CC, SSIM, QE, QW, FMI, LAB/F, NAB/F, RMSE, and PSNR were employed to assess the algorithm's quality. These metrics are used to compare the proposed technique with the methods that have been published in the past, and they measure the resemblance and robustness of the fused images against distortions. Source images commonly used in MIF are adopted for comparison. Experiments are also carried out on many images from various areas and datasets [52]. The proposed approach yields good results on these images; however, they are not included in the paper because the techniques contrasted with the proposed approach do not report outcomes for them. Desk, balloon, book, clock, flower, lab, leaf, leopard, flowerpot, Pepsi, wine, and craft images are used for comparison with the methodologies in the literature [21,22,23,24,25,26,27,28,30]. In addition, the outcomes of the proposed technique for certain tested source images are presented. The images are of various sizes and qualities, and the proposed method is applicable to any multi-focus images, not only those presented in this work.
3.1. Outcomes for Some of the Tested Images
Several grayscale images are used to evaluate the proposed technique. SF, QAB/F, QE, AG, E(F), SSIM, SD, and CC are used to analyze these images. Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8 show the visual outcomes for the images of a balloon, a leopard, a calendar, a bottle of wine, and a craft, respectively. Table 1 displays the results of the proposed method for specific tested images; the reference-based measurement outcomes (i.e., RMSE and PSNR) of certain tested multi-focus images are depicted in Table 1. Table 2 compares the proposed technique with methods in the literature using these criteria. The letter X indicates that the corresponding article does not report the measurement for the stated image. For the comparison with methods in the literature, the flowerpot, clock, Pepsi, cameraman, desk, book, lab, and flower images are used, and the best outcomes are shown in bold. These criteria measure the robustness of the proposed technique to deformation, and the outcomes suggest that the proposed technique performs well on the reference-based measurements.
3.2. Comparison of Multi-Focus Image (i.e., Clock)
The first multi-focus image evaluated is the clock, illustrated in Figure 9. Figure 9a represents the original image, and Figure 9b,c illustrate the left-focused and right-focused images, respectively. In a left-focused image, the left side of the image is in focus while the right side is not; in a right-focused image, the right side is in focus while the left side is not. Figure 9d shows the all-in-focus image created when the approach is applied. E(F), AG, CC, QAB/F, SSIM, QE, SF, and QW are calculated to assess the performance of the proposed methodology. Finally, the performance of the proposed approach is compared with that of other methods previously published in the literature. The results of the comparison are shown in Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8. The letter X indicates that the corresponding article does not report the metric for the depicted image. Compared with the literature [21,22,24,27,28,30], the proposed method is more successful than those approaches, and the best outcomes are indicated in bold.
3.3. Comparison of Multi-Focus Image (i.e., Desk)
The second multi-focus image evaluated is the desk, illustrated in Figure 10. Figure 10a represents the original image, and Figure 10b,c illustrate the left-focused and right-focused images, respectively. In a left-focused image, the left side of the image is in focus while the right side is not; in a right-focused image, the right side is in focus while the left side is not. Figure 10d shows the all-in-focus image created after the method is applied. The following parameters are computed to evaluate the performance of the proposed methodology: E(F), AG, CC, QAB/F, SSIM, QE, FMI, SD, QW, LAB/F, NAB/F, and SF. Finally, the performance of the proposed approach is compared with that of other methods previously published in the literature. The results of the comparison are shown in Table 9, Table 10, Table 11, Table 12, Table 13, Table 14 and Table 15. The letter X indicates that the corresponding article does not report the metric for the depicted image. Compared with the literature [21,23,24,25,26,27,30], the proposed method is more successful than those approaches, and the best outcomes are indicated in bold.
3.4. Comparison of Multi-Focus Image (i.e., Book)
The third multi-focus image evaluated is the book, illustrated in Figure 11. Figure 11a represents the original image, and Figure 11b,c illustrate the left-focused and right-focused images, respectively. In a left-focused image, the left side of the image is in focus while the right side is not; in a right-focused image, the right side is in focus while the left side is not. Figure 11d shows the all-in-focus image created after the method is applied. The following parameters are computed to evaluate the performance of the proposed methodology: E(F), AG, CC, QAB/F, SSIM, QE, QW, LAB/F, NAB/F, and SF. Finally, the performance of the proposed approach is compared with that of other methods previously published in the literature. The results of the comparison are shown in Table 16, Table 17, Table 18, Table 19, Table 20 and Table 21. The letter X indicates that the corresponding article does not report the metric for the depicted image. Compared with the literature [21,24,25,26,27,30], the proposed method is more successful than those approaches, and the best outcomes are indicated in bold.
3.5. Comparison of Multi-Focus Image (i.e., Flower)
The fourth multi-focus image evaluated is the flower, illustrated in Figure 12. Figure 12a represents the original image, and Figure 12b,c illustrate the left-focused and right-focused images, respectively. In a left-focused image, the left side of the image is in focus while the right side is not; in a right-focused image, the right side is in focus while the left side is not. Figure 12d shows the all-in-focus image created when the approach is applied. E(F), AG, CC, QAB/F, SSIM, and QE are calculated to assess the performance of the proposed methodology. Finally, the performance of the proposed approach is compared with that of other methods previously published in the literature. The comparison results are shown in Table 22 and Table 23. The letter X indicates that the corresponding article does not report the metric for the depicted image. Compared with the literature [21,24], the proposed method is more successful than those approaches, and the best outcomes are indicated in bold.
3.6. Comparison of Multi-Focus Image (i.e., Lab)
The fifth multi-focus image evaluated is the lab, illustrated in Figure 13. Figure 13a represents the original image, and Figure 13b,c illustrate the left-focused and right-focused images, respectively. In a left-focused image, the left side of the image is in focus while the right side is not; in a right-focused image, the right side is in focus while the left side is not. Figure 13d shows the all-in-focus image created after the method is applied. The following parameters are computed to evaluate the performance of the proposed methodology: E(F), AG, CC, QAB/F, SSIM, QE, QW, and SF. Finally, the performance of the proposed approach is compared with that of other methods previously published in the literature. The results of the comparison are shown in Table 24, Table 25, Table 26, Table 27, Table 28 and Table 29. The letter X indicates that the corresponding article does not report the metric for the depicted image. Compared with the literature [21,24,25,27,28,30], the proposed method is more successful than those approaches, and the best outcomes are indicated in bold.
3.7. Comparison of Multi-Focus Image (i.e., Leaf)
The sixth multi-focus image evaluated is the leaf, illustrated in Figure 14. Figure 14a represents the original image, and Figure 14b,c illustrate the left-focused and right-focused images, respectively. In a left-focused image, the left side of the image is in focus while the right side is not; in a right-focused image, the right side is in focus while the left side is not. Figure 14d shows the all-in-focus image created after the method is applied. The following parameters are computed to evaluate the performance of the proposed methodology: E(F), AG, CC, QAB/F, SSIM, QE, and SF. Finally, the performance of the proposed approach is compared with that of other methods previously published in the literature. The results of the comparison are shown in Table 30, Table 31 and Table 32. The letter X indicates that the corresponding article does not report the metric for the depicted image. Compared with the literature [21,24,30], the proposed method is more successful than those approaches, and the best outcomes are indicated in bold.
3.8. Comparison of Multi-Focus Image (i.e., Pepsi)
The seventh multi-focus image evaluated is the Pepsi image, illustrated in Figure 15. Figure 15a represents the original image, and Figure 15b,c illustrate the left-focused and right-focused images, respectively. In a left-focused image, the left side of the image is in focus while the right side is not; in a right-focused image, the right side is in focus while the left side is not. Figure 15d shows the all-in-focus image created when the approach is applied. AG, QAB/F, QE, SF, and QW are calculated to assess the performance of the proposed methodology. Finally, the performance of the proposed approach is compared with that of other methods previously published in the literature. The results of the comparison are shown in Table 33, Table 34, Table 35 and Table 36. The letter X indicates that the corresponding article does not report the metric for the depicted image. Compared with the literature [24,25,27,30], the proposed method is more successful than those approaches, and the best outcomes are indicated in bold.
3.9. Comparison of Multi-Focus Image (i.e., Flowerpot)
The eighth multi-focus image evaluated is the flowerpot, illustrated in Figure 16. Figure 16a represents the original image, and Figure 16b,c illustrate the left-focused and right-focused images, respectively. In a left-focused image, the left side of the image is in focus while the right side is not; in a right-focused image, the right side is in focus while the left side is not. Figure 16d shows the all-in-focus image created after the method is applied. QE and QW are computed to evaluate the performance of the proposed methodology. Finally, the performance of the proposed approach is compared with that of other methods previously published in the literature. The results of the comparison are shown in Table 37. The letter X indicates that the corresponding article does not report the metric for the depicted image. Compared with the literature [25], the proposed method is more successful than those approaches, and the best outcomes are indicated in bold.
3.10. Analysis of a Few More Image Pairs
A single strategy will never produce the ideal subjective and objective results for all image pairs. For this reason, eight multi-focus image pairs (shown in Figure 17) are used in the next experiment to demonstrate the average performance of the various techniques. For the image pairs in Figure 17, the proposed method produced the fused images depicted in Figure 18; as Figure 18 shows, the fusion results of the proposed approach are satisfactory for all of the image pairs tested. The average objective assessment of the various methodologies for the image pairs in Figure 17 is presented in Table 38. Compared with the approaches described in the literature [21], the proposed method is more successful, and the best outcomes of the various methods are highlighted in bold.
4. Conclusions
Traditional wavelet-based fusion algorithms create ringing distortions in the fused image due to their lack of directional selectivity and shift-invariance. The proposed methodology exploits the benefits of a hybrid approach for the image fusion process: the hybrid method uses LP for decomposition, DTCWT for image fusion, and MPCA for feature extraction. The fused image produced by the proposed method has better image quality, extracts the relevant information from the source images with good directionality and a high degree of shift-invariance thanks to the hybrid approach with MPCA, and thereby achieves better visual quality. Several pairs of multi-focus images are used to assess the performance of the proposed method. The experiments conducted on standard test pairs of multi-focus images show that the proposed method performs better than other methods in most cases, both in terms of quantitative parameters and in terms of visual quality. The proposed work is validated on many datasets by evaluating quantitative measures such as E(F), AG, SD, SSIM, and QAB/F. It is evident from the results that the proposed method produces better visual perception, better clarity, and less distortion. In this work, the proposed technique is used to fuse only grayscale images; the application of the proposed method to other areas, such as medical image processing and infrared-visible image processing, should be part of future exploration.