Article
Peer-Review Record

Improved Generalized IHS Based on Total Variation for Pansharpening

Remote Sens. 2023, 15(11), 2945; https://doi.org/10.3390/rs15112945
by Xuefeng Zhang 1,2, Xiaobing Dai 1,2,*, Xuemin Zhang 1,2, Yuchen Hu 1,2, Yingdong Kang 1,2 and Guang Jin 1,2
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 27 April 2023 / Revised: 25 May 2023 / Accepted: 30 May 2023 / Published: 5 June 2023
(This article belongs to the Special Issue Machine Vision and Advanced Image Processing in Remote Sensing II)

Round 1

Reviewer 1 Report

This interesting paper took an optimisation approach to develop the CS-based pansharpening methods and build the GIHS-TV framework. Faced with the issue of spectral information loss in fused images, the authors suggested a method using L1-TV to constrain the spectral-spatial information in the new spatial components, effectively reducing spectral distortion.

The article overall seems well written and worthy of publication. The references are also up-to-date, illustrating the state of the art in research. However, I find it appropriate to add the following to the cited references: Dong, W.; Xiao, S.; Li, Y.; Qu, J. Hyperspectral Pansharpening Based on Intrinsic Image Decomposition and Weighted Least Squares Filter. Remote Sens. 2018, 10, 445. https://doi.org/10.3390/rs10030445

Asking for more detail in the description of the methods and presentation of the results, I suggest a re-reading to eliminate some inaccuracies. E.g.:

- Lines 137, 240, 248, 253, 257, 260, 264, 279: all figures should be cited in the main text as Figure 1, Figure 2, etc., without abbreviating in Fig. 1, Fig. 2 etc.

- Lines 198, 288: all tables should be cited in the main text as Table 1 and Table 2, without abbreviating in Tab. 1, Tab. 2 etc.

- Lines 311, 312: Although not part of the main text, please correct typing mistakes 

Author Response

Response to the comments of Reviewer #1

Point #1: This interesting paper took an optimization approach to develop the CS-based pansharpening methods and build the GIHS-TV framework. Faced with the issue of spectral information loss in fused images, the authors suggested a method using L1-TV to constrain the spectral-spatial information in the new spatial components, effectively reducing spectral distortion.

Response #1: Thank you for recognizing the innovation in our work. Following your useful suggestions, we have added a more solid theoretical foundation and further experimental details in the revision. Our detailed responses and the corresponding revisions to your questions and suggestions follow below. Thanks again for your hard work!
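
For readers skimming this exchange, the following is a minimal Python sketch of the general component-substitution idea with a TV-regularized intensity. It is an illustration only, not the authors' exact GIHS-TV model: the use of scikit-image's Chambolle TV denoiser, the equal-weight intensity, and all function and parameter names are our assumptions.

```python
# Illustrative sketch of a GIHS-style substitution step with a
# TV-regularized intensity component (assumption, not the paper's model).
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def gihs_tv_sketch(ms, pan, weight=0.1):
    """ms: (H, W, B) multispectral image interpolated to the PAN grid;
    pan: (H, W) panchromatic band; weight: TV trade-off hyperparameter."""
    intensity = ms.mean(axis=2)                 # GIHS intensity component
    # TV regularization keeps strong PAN edges while suppressing noise;
    # `weight` plays the role of the lambda discussed in the paper.
    new_intensity = denoise_tv_chambolle(pan, weight=weight)
    detail = new_intensity - intensity          # injected spatial detail
    return ms + detail[..., None]               # add detail to every band
```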

 

Point #2: The article overall seems well written and worthy of publication. The references are also up-to-date, illustrating the state of the art in research. However, I find it appropriate to add the following to the cited references: Dong, W.; Xiao, S.; Li, Y.; Qu, J. Hyperspectral Pansharpening Based on Intrinsic Image Decomposition and Weighted Least Squares Filter. Remote Sens. 2018, 10, 445. https://doi.org/10.3390/rs10030445

Response #2: Thank you for your reminder and suggestion. We studied the paper you recommended; it is highly relevant and has a very positive bearing on our work, so we have added it to our references in line 133 (ref. 44).

 

Point #3: Asking for more detail in the description of the methods and presentation of the results, I suggest a re-reading to eliminate some inaccuracies. E.g.:

- Lines 137, 240, 248, 253, 257, 260, 264, 279: all figures should be cited in the main text as Figure 1, Figure 2, etc., without abbreviating in Fig. 1, Fig. 2 etc.

- Lines 198, 288: all tables should be cited in the main text as Table 1 and Table 2, without abbreviating in Tab. 1, Tab. 2 etc.

- Lines 311, 312: Although not part of the main text, please correct typing mistakes 

Response #3: Thank you sincerely for your careful reading. We have corrected the abbreviation issues and typing errors you mentioned throughout the manuscript, and the sentences containing inaccuracies have been rewritten to ensure correct and precise expression in the revision.

Reviewer 2 Report

This manuscript proposed a GIHS algorithm combined with TV regularization (GIHS-TV). It seems TV could improve the performance on many samples, but the experiments are not sufficient for publication in Remote Sensing.

(1) The authors should analyze the computational complexity, and compare the execution time with other methods.

(2) The authors did not report the sensitivity analysis on the important hyper-parameter, lambda. A performance curve versus different values of lambda should be displayed. In line 233, lambda is set to 1 and 2. In line 258, lambda is set to 10. Does this mean that the method is very sensitive to lambda?

(3) Deep-learning-based methods are not compared in Section 4 or mentioned in Section 2. If the authors are unable to run DL code, some representative methods should be reviewed at least in Section 2. For example, Pansharpening by Convolutional Neural Networks [PNN (RS 2016)], PanNet: A deep network architecture for pan-sharpening [PanNet (ICCV 2017)], Deep Gradient Projection Networks for Pan-sharpening [GPPNN (CVPR 2021)].

(4) SAM and SCC are used for reduced-scale experiments. QNR, D_S, D_Lambda are used for full-scale experiments. However, from Figures 2-5, it seems that all visual images are for full-scale experiments. Is there any visual inspection for reduced-scale experiments?

 

Author Response

Response to the comments of Reviewers #2

Point #1: This manuscript proposed a GIHS algorithm combined with TV regularization (GIHS-TV). It seems TV could improve the performance on many samples, but the experiments are not sufficient for publication in Remote Sensing.

Response #1: First of all, thank you for pointing out the shortcomings of our study, and for your positive and constructive comments and suggestions on our manuscript. We have taken your comments on board and improved the article. Our GIHS-TV method has a solid theoretical foundation, and the overall implementation is relatively efficient and simple. At the same time, the experimental results fully verify the fusion effect on remote sensing images.

Indeed, our work still has room for improvement, and we will continue to refine it. We have added more details in the revision, especially supplementary experiments on computational complexity, the sensitivity of the hyperparameters, and visual inspection at reduced scale. In addition, residual experiments were conducted to analyze the spectral fidelity of the proposed model. Our item-by-item responses and the corresponding revisions to your questions and suggestions follow below. Thanks again for your hard work.

 

Point #2: The authors should analyze the computational complexity, and compare the execution time with other methods.

Response #2: Thank you sincerely for your advice. Computational complexity analysis is very important and meaningful in pansharpening algorithm research. Following your useful suggestions, we have provided more experimental details about computation time on P17, Lines 400-416. Compared to ATWT-M2 and ATWT-M3, GIHS-TV needs a slightly longer computation time (less than 3 s) due to its additional optimization iterations. However, compared to P+XS, a method of the same type, our method is more than 10 times faster (Table 3, P17) thanks to the sparsity of the proposed model.
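
A per-method timing comparison of this kind might be scripted along the following lines. This is a hypothetical harness: the `methods` mapping and the fusion callables it contains are assumptions, not the authors' benchmark code.

```python
# Hypothetical timing harness for comparing fusion methods.
import time

def time_methods(methods, ms, pan, repeats=5):
    """methods: dict mapping a method name to a callable fuse(ms, pan).
    Returns the mean wall-clock seconds per call for each method."""
    results = {}
    for name, fuse in methods.items():
        start = time.perf_counter()
        for _ in range(repeats):
            fuse(ms, pan)                      # run the fusion method
        results[name] = (time.perf_counter() - start) / repeats
    return results
```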

 

Point #3: The authors did not report the sensitivity analysis on the important hyper-parameter, lambda. A performance curve versus different values of lambda should be displayed. In line 233, lambda is set to 1 and 2. In line 258, lambda is set to 10. Does this mean that the method is very sensitive to lambda?

Response #3: Thank you sincerely for your careful reading. The hyperparameter λ controls the proportion of edge information injected from the PAN image into the model, and we found the optimal value through extensive experimental analysis. In line 258, lambda = 10 was simply a careless mistake. Following your useful suggestions, we give further experimental details about the hyperparameters in our method (Figures 5 and 6, Lines 330-344). The hyperparameters should be selected according to the visual quality of the fused image and the evaluation indicators.
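
A sensitivity sweep of the sort the reviewer requests could be sketched as follows, assuming the illustrative `gihs_tv_sketch` from the response to Reviewer 1 and a simple spectral angle mapper (SAM) as the quality index; neither is the authors' actual evaluation code.

```python
# Illustrative lambda sweep: fuse at several values and plot a metric.
import numpy as np
import matplotlib.pyplot as plt

def sam(ref, fused, eps=1e-8):
    """Mean spectral angle (radians) between reference and fused pixels."""
    num = (ref * fused).sum(axis=2)
    den = np.linalg.norm(ref, axis=2) * np.linalg.norm(fused, axis=2) + eps
    return np.arccos(np.clip(num / den, -1.0, 1.0)).mean()

def lambda_sweep(ms, pan, ref, lambdas=(0.5, 1, 2, 5, 10)):
    """Fuse with each lambda and plot SAM versus lambda (lower is better)."""
    scores = [sam(ref, gihs_tv_sketch(ms, pan, weight=lam)) for lam in lambdas]
    plt.plot(lambdas, scores, marker="o")
    plt.xlabel("lambda")
    plt.ylabel("SAM (rad)")
    plt.show()
    return scores
```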

 

Point #4: Deep-learning-based methods are not compared in Section 4 or mentioned in Section 2. If the authors are unable to run DL code, some representative methods should be reviewed at least in Section 2. For example, Pansharpening by Convolutional Neural Networks [PNN (RS 2016)], PanNet: A deep network architecture for pan-sharpening [PanNet (ICCV 2017)], Deep Gradient Projection Networks for Pan-sharpening [GPPNN (CVPR 2021)].

Response #4: Thank you for your advice on the deep learning literature. To improve the "Related works" section, we have carefully studied the papers you mentioned and rewritten the review to be more comprehensive. The content you recommended has been added at the end of "Related works", Lines 160-168.

 

Point #5: SAM and SCC are used for reduced-scale experiments. QNR, D_S, D_Lambda are used for full-scale experiments. However, from Figures 2-5, it seems that all visual images are for full-scale experiments. Is there any visual inspection for reduced-scale experiments?

Response #5: All visual images were at full scale in the previous version. We have conducted experiments at reduced resolution as a supplement. Figure 13 (P17, Lines 404-408) shows the performance and visual effect in most scenes: GIHS-TV performs with high spectral fidelity and richer spatial detail. At the same time, the performance is consistent with that at full resolution.
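
For readers unfamiliar with the reduced-resolution (Wald-protocol) setup mentioned in this exchange, a minimal sketch follows: both inputs are degraded by the sensor resolution ratio, fusion is run at the reduced scale, and the original multispectral image serves as the reference. The pre-interpolated input convention, the generic `fuse` callable, and the ratio are assumptions for illustration, not the authors' experimental pipeline.

```python
# Illustrative reduced-resolution (Wald-protocol) evaluation setup.
from skimage.transform import rescale, resize

def reduced_scale_eval(ms, pan, fuse, ratio=4):
    """ms: (H, W, B) multispectral image interpolated to the PAN grid;
    pan: (H, W) panchromatic band; fuse: callable fuse(ms, pan)."""
    # Degrade both inputs by the resolution ratio before fusing...
    pan_lr = rescale(pan, 1 / ratio, anti_aliasing=True)
    ms_lr = rescale(ms, 1 / ratio, channel_axis=2, anti_aliasing=True)
    fused_lr = fuse(ms_lr, pan_lr)            # fuse at reduced scale
    # ...so the original ms, resampled to the fused grid, is the reference
    # against which SAM, SCC, and similar indices can be computed.
    ref = resize(ms, fused_lr.shape, anti_aliasing=True)
    return fused_lr, ref
```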

Round 2

Reviewer 2 Report

No other comments. 
