#### *4.2. Quality Measures*

Generally, the performance of a hyperspectral pansharpening method can be assessed both subjectively and objectively. Subjective evaluation judges, for example, how closely the colors of the fused HS image match those of the reference HS image, whereas objective indexes quantify the fusion quality. This paper is limited to the five most widely used indexes, i.e., cross correlation (CC) [42], spectral angle mapper (SAM) [43], root mean squared error (RMSE), erreur relative globale adimensionnelle de synthèse (ERGAS) [44], and universal image quality index (UIQI) [45]. The CC is a spatial measure, and the SAM is a spectral measure. RMSE, ERGAS, and UIQI are global spectral and spatial measures. The formal definitions of these indexes are provided below. In the definitions, the matrix *FH* = [*h*1, ... , *hm*] ∈ *Rλ*×*m* denotes the fused HS image with *λ* bands and *m* pixels. *RH* ∈ *Rλ*×*m* represents the reference HS image. *RHl* and *FHl* represent the *l*th columns of *RH* and *FH*, respectively. *RHj* and *FHj* represent the *j*th rows of *RH* and *FH*, respectively. *X*, *Y* ∈ *R*1×*m* denote two single-band images, and *Xi* denotes the *i*th element of *X*.

(1) Cross correlation: The *CC* measures the degree of the geometric distortion. It is defined as follows:

$$\text{CC}(RH, FH) = \frac{1}{\lambda} \sum\_{j=1}^{\lambda} \text{CCS}(RH^j, FH^j) \tag{25}$$

The *CCS* characterizes the geometric distortion of a single-band image as follows:

$$\text{CCS}(X, Y) = \frac{\sum\_{i=1}^{m} \left(X\_i - \mu\_X\right) \left(Y\_i - \mu\_Y\right)}{\sqrt{\sum\_{i=1}^{m} \left(X\_i - \mu\_X\right)^2 \sum\_{i=1}^{m} \left(Y\_i - \mu\_Y\right)^2}}\tag{26}$$

where *μX* and *μY* are the means of *X* and *Y*, respectively. The optimal value of *CC* is 1.
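As a minimal sketch, Eqs. (25) and (26) can be implemented directly in NumPy, assuming the images are stored as (bands × pixels) arrays as in the notation above:

```python
import numpy as np

def ccs(x, y):
    # Eq. (26): correlation coefficient of two single-band images
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

def cc(rh, fh):
    # Eq. (25): mean of the band-wise correlation coefficients
    return float(np.mean([ccs(rh[j], fh[j]) for j in range(rh.shape[0])]))
```

For a fused image identical (or affinely related, band by band) to the reference, `cc` returns 1, matching the optimal value stated above.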

(2) Spectral angle mapper: The *SAM* measures the spectral distortion between the fused image *FH* and the reference image *RH*, which is defined as:

$$SAM(RH, FH) = \frac{1}{m} \sum\_{l=1}^{m} \arccos\left(\frac{\langle RH\_{l}, FH\_{l} \rangle}{||RH\_{l}|| \, ||FH\_{l}||}\right)\tag{27}$$

The *SAM* is a spectral measure; the smaller the *SAM* value, the better the fusion performance.

(3) Root mean squared error: The *RMSE*, which measures the average difference between the two matrices *RH* and *FH*, is defined as

$$RMSE(RH, FH) = \frac{\sqrt{trace[(RH - FH)^T (RH - FH)]}}{\sqrt{m \ast \lambda}} \tag{28}$$

The optimal value of *RMSE* is 0.
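A compact NumPy sketch of Eqs. (27) and (28), again for (bands × pixels) arrays; the `clip` guards against rounding pushing the cosine slightly outside [−1, 1], and the trace in Eq. (28) reduces to the sum of squared errors:

```python
import numpy as np

def sam(rh, fh):
    # Eq. (27): mean spectral angle (in radians) over all m pixels
    dots = (rh * fh).sum(axis=0)
    norms = np.linalg.norm(rh, axis=0) * np.linalg.norm(fh, axis=0)
    return float(np.arccos(np.clip(dots / norms, -1.0, 1.0)).mean())

def rmse(rh, fh):
    # Eq. (28): trace[(RH-FH)^T (RH-FH)] equals the sum of squared errors
    return float(np.sqrt(((rh - fh) ** 2).sum() / rh.size))
```

Both functions return 0 for a perfect reconstruction, which is the optimal value of each measure.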

(4) Erreur relative globale adimensionnelle de synthèse: The *ERGAS*, which is a global measure, is defined as

$$ERGAS(RH, FH) = 100c \sqrt{\frac{1}{\lambda} \sum\_{j=1}^{\lambda} \left(\frac{RMSE\_j}{\mu\_j}\right)^2} \tag{29}$$

where $RMSE\_j = \sqrt{trace[(RH^j - FH^j)^T (RH^j - FH^j)]/m}$ is the *RMSE* of the *j*th band, *c* represents the ratio of the linear resolution between the *P* and HS images, and *μj* is the mean value of the *j*th band of the reference image. The optimal value of *ERGAS* is 0.
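Eq. (29) can be sketched in NumPy as follows, with `c` supplied by the caller as the resolution ratio between the *P* and HS images:

```python
import numpy as np

def ergas(rh, fh, c):
    # Eq. (29): relative dimensionless global error in synthesis
    m = rh.shape[1]
    rmse_j = np.sqrt(((rh - fh) ** 2).sum(axis=1) / m)   # per-band RMSE
    mu_j = rh.mean(axis=1)                               # per-band reference mean
    return float(100.0 * c * np.sqrt(((rmse_j / mu_j) ** 2).mean()))
```

For identical inputs the result is 0, the optimal value; a uniform per-band error of 0.1 on a unit-mean reference with c = 1/4 gives 100 · 0.25 · 0.1 = 2.5.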

(5) Universal image quality index: The *UIQI*, which evaluates the similarity of the reference image *RH* and the fused image *FH*, is defined as

$$UIQI(RH, FH) = \frac{4\sigma\_{RF} \mu\_R \mu\_F}{(\sigma\_R^2 + \sigma\_F^2)(\mu\_R^2 + \mu\_F^2)}\tag{30}$$

where $\mu\_R$, $\sigma\_R^2$, $\mu\_F$, and $\sigma\_F^2$ are the sample means and variances of the reference image *RH* and the fused image *FH*, and $\sigma\_{RF}$ is the covariance of the two images. The ideal value of the *UIQI* is 1.
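A sketch of Eq. (30) computed globally over the whole image pair; note that in practice the UIQI is often averaged over small sliding windows, which this simplified version omits:

```python
import numpy as np

def uiqi(rh, fh):
    # Eq. (30): global universal image quality index
    r, f = rh.ravel(), fh.ravel()
    mu_r, mu_f = r.mean(), f.mean()
    cov = ((r - mu_r) * (f - mu_f)).mean()   # sigma_RF
    return float(4 * cov * mu_r * mu_f /
                 ((r.var() + f.var()) * (mu_r ** 2 + mu_f ** 2)))
```

When *FH* equals *RH*, the covariance equals the common variance and the index evaluates to its ideal value of 1.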

#### *4.3. Analysis of the Influence of Parameter α*

In the experiments, *α* is the parameter that determines the quantity of the injected spatial details and thus directly influences the fusion performance. To select an optimal *α*, the proposed method is performed on the Salinas dataset with different *α* settings, and the five quality measures are used to investigate the effect of *α* on the fusion performance. For clarity, the five quality measures are normalized to [0, 1] by min-max normalization and displayed in one figure. Figure 4 shows the performance of the proposed method with different *α* settings. The CC and UIQI values increase as *α* is increased from 0 to 0.1, with the CC value reaching its maximum at *α* = 0.1. In addition, the values of SAM, RMSE, and ERGAS all decrease as *α* is increased from 0 to 0.1 and increase once *α* exceeds 0.1. Therefore, we conclude that the proposed method performs best when *α* = 0.1. We have also evaluated the proposed method on various other hyperspectral remote sensing images and found that *α* = 0.1 gives the best performance there as well. Therefore, *α* is set to 0.1 in this paper.
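The min-max normalization used to plot the five curves on a common [0, 1] scale can be sketched as follows; the CC values and the *α* grid below are illustrative placeholders, not measured results:

```python
import numpy as np

def minmax_normalize(values):
    # map a metric curve onto [0, 1] for joint display in one figure
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

# illustrative CC values over a hypothetical alpha grid (not measured data)
alphas = [0.0, 0.05, 0.1, 0.15, 0.2]
cc_curve = minmax_normalize([0.90, 0.95, 0.99, 0.97, 0.94])
best_alpha = alphas[int(np.argmax(cc_curve))]  # -> 0.1 for these illustrative numbers
```

For error-type measures (SAM, RMSE, ERGAS), the best setting is read off with `argmin` instead.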

**Figure 4.** Performance of the proposed method with different *α* settings.

#### *4.4. Experiments on Simulated Hyperspectral Remote Sensing Datasets*

The Salinas dataset, Pavia University dataset, and Washington DC dataset are all simulated datasets. For a simulated dataset, a reference high spatial resolution HS image is given. The simulated *P* image and the simulated low spatial resolution HS image are obtained by Wald's protocol [44]. The reference high spatial resolution HS image can then be used to evaluate the performance of the fused image.
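One simple way to generate the simulated inputs under Wald's protocol is to spatially degrade the reference HS image and to synthesize the *P* image from its bands. The sketch below uses plain box averaging and a uniform spectral response as stand-ins; actual implementations typically use an MTF-shaped blur and a measured sensor spectral response:

```python
import numpy as np

def degrade_hs(ref, ratio):
    # low-resolution HS image via ratio x ratio box averaging (stand-in for an MTF blur);
    # ref has shape (bands, height, width), with height and width divisible by ratio
    lam, h, w = ref.shape
    return ref.reshape(lam, h // ratio, ratio, w // ratio, ratio).mean(axis=(2, 4))

def simulate_pan(ref):
    # P image as the mean over the spectral bands (uniform spectral response)
    return ref.mean(axis=0)
```

The fused output is then compared against `ref` with the quality measures of Section 4.2.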

#### 4.4.1. Salinas Dataset

The color displays of the fused HS images obtained by different methods are shown in Figure 5b–h. As reported in some articles, the PCA method generates serious spectral distortion. The fused image obtained by the GFPCA method looks blurry, since the injected spatial information appears to be insufficient, although it exhibits less spectral distortion than the PCA method. The edges in the fused images obtained by the HySure and MGH methods appear too sharp due to the artifacts that occur around the edges. The CNMF and Sparse Representation methods preserve the spectral information of the original HS image well. However, the edges in the vegetation and roof areas are not clear in the fused images obtained by these two methods. The halo artifacts and the blurring problems are eliminated by the proposed method, which performs well in both the spectral and spatial aspects.

To further compare the visual quality of the fused images obtained by the different fusion methods, Figure 6 shows the difference images between the fused HS images and the reference HS image. Here, each difference image is generated by subtracting the reference image from the corresponding fused image pixel by pixel. The difference image for the proposed method shows a light blue color over almost the entire image. In other words, the proposed method yields the smallest difference between the reference HS image and the fused image among all the compared methods, which further demonstrates its outstanding fusion performance. The quality metrics of the different methods for the Salinas dataset are shown in Table 1. Considering the five quality metrics together, the proposed method gives the optimal values of all the quality metrics for the Salinas dataset. This means that the proposed method performs well in both the spectral and spatial aspects.

**Figure 5.** Visual comparison of different hyperspectral pansharpening methods for Salinas dataset. (**a**) Reference; (**b**) PCA; (**c**) GFPCA; (**d**) HySure; (**e**) MGH; (**f**) CNMF; (**g**) Sparse Representation; (**h**) Proposed.

**Figure 6.** Visual comparison of difference images (light blue means small differences) between each fused HS image and the reference HS image (Salinas dataset). (**a**) PCA; (**b**) GFPCA; (**c**) HySure; (**d**) MGH; (**e**) CNMF; (**f**) Sparse Representation; (**g**) Proposed.

**Table 1.** Quality metrics of different methods for Salinas dataset.


#### 4.4.2. Pavia University Dataset

Figure 7a shows the reference HS image of the Pavia University dataset. Figure 7b–h show the fused images obtained by the different pansharpening methods. Visually comparing these fused images with the reference one leads to a conclusion similar to that of the previous experiment. The spatial and spectral quality of the fused image obtained by the PCA method is unsatisfactory; the spectral distortion caused by the PCA method is the most visible, especially in the vegetation areas. The GFPCA method improves on the spectral aspect, but the spatial quality of its fused image needs further improvement. The HySure and MGH methods produce halo artifacts around edges, although such artifacts make the edges appear sharper. The CNMF method introduces spectral distortion, since the color of its fused image does not match that of the reference image in the roof area. By contrast, the fused images produced by the Sparse Representation method and the proposed method are the closest to the reference one.

**Figure 7.** Visual comparison of different hyperspectral pansharpening methods for Pavia University dataset. (**a**) Reference; (**b**) PCA; (**c**) GFPCA; (**d**) HySure; (**e**) MGH; (**f**) CNMF; (**g**) Sparse Representation; (**h**) Proposed.

The visual quality of the fused images obtained by the different methods can also be assessed through the difference images between the fused HS images and the reference HS image. Figure 8 shows these difference images, in which defects stand out against a flat background. The difference image of the proposed method is almost entirely light blue, with only a few yellow regions. Based on the comparison of the difference images, the proposed method indeed displays the best visual quality. Table 2 shows the objective quality assessment of the different methods for the Pavia University dataset. The proposed method clearly shows the best objective performance in most measures, including CC, SAM, RMSE, and ERGAS, and its UIQI value is the second largest. This further demonstrates that the proposed method achieves state-of-the-art fusion performance.

**Table 2.** Quality metrics of different methods for Pavia University dataset.


**Figure 8.** Visual comparison of difference images (light blue means small differences) between each fused HS image and the reference HS image (Pavia University dataset). (**a**) PCA; (**b**) GFPCA; (**c**) HySure; (**d**) MGH; (**e**) CNMF; (**f**) Sparse Representation; (**g**) Proposed.

#### 4.4.3. Washington DC Dataset

Figure 9 shows the visual comparison of the fused images obtained by the different fusion methods for the Washington DC dataset. The reference HS image is displayed in Figure 9a, and Figure 9b–h show the fused images. It is apparent that the fused image obtained by the PCA method suffers from both spectral and spatial distortion. The fused image obtained by the GFPCA method shows improvement, but its spatial quality is not improved noticeably. A visual comparison shows that the MGH method performs well in the spectral aspect, but its fused image looks blurry in some areas because the injected spatial details are insufficient. The CNMF and Sparse Representation methods are close to the reference image in the spectral aspect, but their spatial quality in the edge regions is unsatisfactory. The fused results obtained by the HySure method and the proposed fusion method show superior performance in both the spectral and spatial aspects.

**Figure 9.** Visual comparison of different hyperspectral pansharpening methods for Washington DC dataset. (**a**) Reference; (**b**) PCA; (**c**) GFPCA; (**d**) HySure; (**e**) MGH; (**f**) CNMF; (**g**) Sparse Representation; (**h**) Proposed.

Figure 10 shows the visual comparison of the difference images between the fused HS images and the reference HS image for the Washington DC dataset. The proposed method indeed performs best in achieving the objective that the fused HS image should be as close as possible to the HS image that a high-resolution sensor would acquire. The quality metrics of the different methods for the Washington DC dataset are shown in Table 3. For the Washington DC dataset, the proposed method gives the smallest RMSE and ERGAS values and the optimal CC and UIQI values. Although the objective assessment of the proposed method is not always the best, it achieves a very stable performance in terms of the five widely used quality metrics. This means that the proposed method performs well in providing spatial details while preserving the spectral information.

**Figure 10.** Visual comparison of difference image (light blue means small differences) between each fused HS image and the reference HS image (Washington DC dataset). (**a**) PCA; (**b**) GFPCA; (**c**) HySure; (**d**) MGH; (**e**) CNMF; (**f**) Sparse Representation; (**g**) Proposed.


**Table 3.** Quality metrics of different methods for Washington DC dataset.

#### *4.5. Experiments on Real Hyperspectral Remote Sensing Datasets*

The Hyperion dataset, which is a real hyperspectral dataset, is utilized to evaluate the performance of the proposed method in real applications. For the real HS image, fusion is performed at the full scale for the subjective evaluation. The dimensions of the test *P* image are 210 × 150, and the size of the experimental HS image is 70 × 50. Figure 11a,b show the interpolated HS image and the *P* image, respectively. Figure 11c–i display the results of the different pansharpening methods. Visually comparing these fused HS images with the original HS image shows that blocking artifacts exist in the fused image obtained by the PCA method. The result obtained by the GFPCA method again looks blurry in this experiment. The HySure, MGH, and CNMF methods preserve the spectral information effectively, but their spatial quality is poor. The spectral distortion of the Sparse Representation method is visible in some areas. The proposed method preserves the spectral information well and greatly improves the spatial quality of the original HS image.

The high spatial resolution HS image of a real dataset is generally not available, so fusion is performed at a degraded scale for the objective evaluation. Specifically, following the literature [46], we degrade the original HS and *P* images and fuse the degraded images, using the original HS image as the reference. The fused image is compared with the original HS image to evaluate the objective performance. Table 4 shows the quality metrics of the different methods for the Hyperion dataset. The proposed method shows the best objective performance in terms of all the measures, including CC, SAM, RMSE, ERGAS, and UIQI.

**Figure 11.** Visual comparison of different hyperspectral pan-sharpening methods for Hyperion dataset. (**a**) Interpolated HS image; (**b**) *P* image; (**c**) PCA; (**d**) GFPCA; (**e**) HySure; (**f**) MGH; (**g**) CNMF; (**h**) Sparse Representation; (**i**) Proposed.


**Table 4.** Quality metrics of different methods for Hyperion dataset.

To further verify the validity of the proposed method on real HS images, the experiment is performed on another Hyperion image. The test *P* image is of size 300 × 300 pixels, and the size of the test HS image is 100 × 100. Figure 12a,b show the interpolated HS image and the *P* image, respectively. The fused images obtained by the different methods are displayed in Figure 12c–i. The color of the fused image obtained by the PCA method does not match that of the original HS image in some areas. The GFPCA method produces serious spatial distortion, although it performs better in the spectral aspect than the PCA method. The fused images obtained by the HySure and Sparse Representation methods appear too sharp due to the artifacts that occur around the edges. The color of the fused images obtained by the MGH, CNMF, and proposed methods is close to that of the original HS image, which indicates the superiority of these pansharpening methods in spectral preservation. However, the spatial quality of the CNMF method at some edges is unsatisfactory. By contrast, the fused images produced by the proposed method and the MGH method achieve outstanding fusion performance in both the spectral and spatial aspects. Table 5 shows the objective quality evaluation of each method for the Hyperion dataset. The proposed method performs best in terms of most of the indexes, while the MGH method obtains the best ERGAS value. Although the objective performance of the proposed method is not always the best, it is stable. Based on the visual comparison and objective evaluation, we conclude that the proposed method achieves excellent performance on the real hyperspectral dataset in terms of both the objective and subjective evaluations.

**Figure 12.** Visual comparison of different hyperspectral pan-sharpening methods for Hyperion dataset. (**a**) Interpolated HS image; (**b**) *P* image; (**c**) PCA; (**d**) GFPCA; (**e**) HySure; (**f**) MGH; (**g**) CNMF; (**h**) Sparse Representation; (**i**) Proposed.


**Table 5.** Quality metrics of different methods for Hyperion dataset.
