Article
Peer-Review Record

CARNet: Context-Aware Residual Learning for JPEG-LS Compressed Remote Sensing Image Restoration

Remote Sens. 2022, 14(24), 6318; https://doi.org/10.3390/rs14246318
by Maomei Liu 1, Lei Tang 2, Lijia Fan 3, Sheng Zhong 1, Hangzai Luo 1,* and Jinye Peng 1
Submission received: 13 November 2022 / Revised: 7 December 2022 / Accepted: 10 December 2022 / Published: 13 December 2022

Round 1

Reviewer 1 Report

1. Since the last layers of a CNN are used, why are there no dense layers?

2. In addition to grayscale images, it is recommended to add some three-channel color JPEG images (with three filters) for verification, so as to better highlight the academic contribution of the proposed deep-learning architecture.

3. For the theoretical derivation, please provide a more complete proof and derivation process to demonstrate the reliability of the method.

4. Given the rapid progress of deep learning research, please add some comparisons with, or discussion of, works from 2021 or even 2022.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

 

In the abstract, "LS" is used without explaining its meaning.

The authors propose a methodology not for JPEG images but for near-lossless JPEG-LS images. The explanation of what JPEG-LS images are is too compact, and the difference between the two types of images is not understandable (lines 25-27).

Also, the explanation of "near-lossless compressed image" (lines 43 to 45) is confusing.
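A one-line formulation may clarify the distinction the reviewer points to (a sketch based on the standard JPEG-LS NEAR parameter, not taken from the manuscript): near-lossless JPEG-LS guarantees a per-pixel error bound,

    |x_i - \hat{x}_i| \le \mathrm{NEAR} \quad \text{for every pixel } i,

where x_i is the original value and \hat{x}_i the reconstructed one. Setting NEAR = 0 reduces to lossless coding, whereas standard lossy JPEG provides no such per-pixel guarantee.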

The "attention feature fusion mechanism" (line 65) needs more detail so that the reader can understand what it does, in particular the "attention" and "fusion" components.

The state-of-the-art review (lines 113-139) is very short. Of the references cited there, [14-15] and [21-29], a total of only 11 references, only 5 are works published in the last 5 years.

Figure 2 has two flowcharts, but the relationship between the upper and lower flowcharts is not clear. The two flowcharts should either be merged or separated into two images.

The results in Figure 6 suggest that LS-PSNR and LS-SSIM have a much wider distribution range. While this is true, the median values show that LS-PSNR and LS-SSIM give worse results (lower scores) than PSNR and SSIM.

The quantitative results (lines 287-292) presented by the authors are incomplete: it is not enough to state that they are better; the numbers that demonstrate the improvement should be given. A statistical analysis of the results must also be presented to confirm whether the proposed methodology is effectively better or only marginally so.
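A minimal sketch of the kind of statistical analysis being requested, assuming per-image scores are available (the names references, restored_ours, and restored_baseline below are hypothetical, not from the manuscript): a paired Wilcoxon signed-rank test on per-image PSNR.

    # Illustrative only: paired significance test on per-image PSNR scores.
    # `references`, `restored_ours`, `restored_baseline` are hypothetical,
    # aligned lists of ground-truth and restored images (NumPy arrays).
    import numpy as np
    from scipy.stats import wilcoxon
    from skimage.metrics import peak_signal_noise_ratio

    def psnr_scores(references, restorations, data_range=255):
        # Per-image PSNR so the two methods can be compared pairwise.
        return np.array([
            peak_signal_noise_ratio(ref, res, data_range=data_range)
            for ref, res in zip(references, restorations)
        ])

    def compare_methods(references, restored_ours, restored_baseline):
        ours = psnr_scores(references, restored_ours)
        base = psnr_scores(references, restored_baseline)
        # Wilcoxon signed-rank test on the paired per-image differences.
        stat, p_value = wilcoxon(ours, base)
        print(f"median PSNR: ours {np.median(ours):.2f} dB, "
              f"baseline {np.median(base):.2f} dB, p = {p_value:.4f}")
        return p_value

Reporting the median scores alongside the p-value would address both parts of the comment: the concrete numbers and whether the difference is statistically meaningful.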

Detail: it is not recommended to fill pages with figures only (Figures 9, 10, 11, and 12). A good scientific paper interleaves figures with text. Some figures seem redundant; if all of them are necessary to explain the results, they can be placed in an appendix.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 2 Report

Please provide the new version with the changes clearly marked, for example highlighted in yellow or using Microsoft Word's Track Changes.

Author Response

Thank you for your advice. We apologize that the previous revision did not include a marked version. We now provide the marked manuscript, with all changes highlighted in yellow. Thanks again for your valuable advice.

Round 3

Reviewer 2 Report

Do not start a sentence with a reference, but with the author's name. Example: "[32] proposed" must be "Fan [32] proposed (...)". This comment also applies to references [33], [34], and [15] on lines 139-146.
