Article
Peer-Review Record

Validation of Expanded Trend-to-Trend Cross-Calibration Technique and Its Application to Global Scale

Remote Sens. 2022, 14(24), 6216; https://doi.org/10.3390/rs14246216
by Ramita Shah 1, Larry Leigh 2,*, Morakot Kaewmanee 2 and Cibele Teixeira Pinto 2
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3:
Submission received: 19 October 2022 / Revised: 3 December 2022 / Accepted: 6 December 2022 / Published: 8 December 2022

Round 1

Reviewer 1 Report

This paper describes a calibration method developed at SDSU. It is one of a series of papers that enhance the overall approach, here by using collective PICS across the globe to increase the number of clear-sky observations over PICS and thereby reduce the time needed to confidently obtain the inter-calibration reflectance ratio between sensors. The paper is very comprehensive in explaining the full method. However, I find it difficult to determine which parts of the calibration methodology have been improved or what additional validation is unique to this study. I approve this paper for publication once the following concerns have been addressed.

 

Specific comments

Lines 101 and 125: "Juliana et al. [15]" -> "Fajardo et al. [15]".

 

Line 87: "Prathana et al. [14]" -> "Khakurel et al. [14]".

 

Table 1: The equatorial crossing time of Aqua is 13:30, but only Terra's is listed.

 

Figure 1 is somewhat confusing with regard to the BRDF normalization, which is applied independently to the target and reference sensors to bring each to a common reference angle. I would use two boxes and state between them that both are normalized to the same reference angle. Similarly, the temporal trends are sensor specific, so I would make two boxes there as well.

 

Figs. 2d and 2e are over Australia and West Texas, which are sites that are not vegetation free. After a rain event these surfaces could change their surface reflectance.

 

Line 296: What atmospheric profile are the simulated wavelength radiances based on? Do the SBAF conversions in Fig. 1 use site-specific atmospheric profiles, in particular for water-vapor absorption?

 

Line 301: Referring to the sensor for which the calibration is derived as the "calibrated sensor" makes it sound as if it has already been calibrated. I would use the terms "reference" and "target" sensor rather than "reference" and "calibrated" sensor.

 

Line 351: "Julina" appears to be a typo.

 

Line 351: "Reference geometry" is ambiguous. Is the reference geometry a fixed, static set of angles (independent of the observations), or is it the observation angles of the reference sensor?

 

Line 378: Do larger windows (in days) dampen the day-to-day variability?
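To make the question concrete, here is a minimal sketch (synthetic data, and assuming a simple centered moving-window mean, which may differ from the authors' actual windowing):

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
days = pd.date_range("2020-01-01", periods=365, freq="D")
# Synthetic TOA reflectance: a stable mean plus day-to-day noise
reflectance = pd.Series(0.55 + rng.normal(0.0, 0.02, days.size), index=days)

for window in (8, 16, 32):
    smoothed = reflectance.rolling(window, center=True).mean()
    print(f"{window:>3}-day window: std = {smoothed.std():.4f}")

# For uncorrelated noise the residual scatter drops roughly as
# 1/sqrt(window), so larger windows do dampen day-to-day variability,
# at the cost of smearing any real short-term signal.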

 

Line 399: I may be missing something, but I thought Cluster 13 is being used as an aggregate. Does "the site" refer to an individual location?

 

Line 402: Spatial homogeneity is the non-uniformity of the site when Cluster 13 is cloud free. Identifying cloud-free pixels will alter both the number of clear-sky pixels and their spatial distribution for a given day. Can the definition of spatial uniformity be refined in the text?
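For reference, one common way to quantify this (an illustrative assumption, not necessarily the metric used in the manuscript) is the coefficient of variation computed over the clear-sky pixels only, so the statistic changes as the cloud mask changes:

import numpy as np

def spatial_cv(reflectance, clear_mask):
    # Coefficient of variation (std/mean) over clear-sky pixels only
    clear = reflectance[clear_mask]
    return np.std(clear) / np.mean(clear)

# Example: a 100x100 scene with all pixels flagged clear
scene = np.random.default_rng(1).normal(0.55, 0.01, (100, 100))
mask = np.ones_like(scene, dtype=bool)
print(f"CV = {spatial_cv(scene, mask):.3%}")  # scenes are often kept only if CV is below a few percent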

 

Figure 6 caption: Can the time period over which the calibration inter-comparison was performed be stated?

 

Line 476: Can you speculate why the red band differs while the other bands agree?

 

Line 498: Can some text be added that explains the differences between the various techniques?

 

Table 2: The comparison between T2T-NA and T2T-Global seems close. It would be interesting to see T2T-Global without NA. Are most of the clear-sky observations from NA, with only a small contribution from the rest of the globe? Can the sampling frequencies for NA and global-minus-NA be stated?

 

Fig. 8: The NIR 0.86 µm band is impacted by both vegetation and water vapor, and the Niger locations have more water vapor than the Libya locations. Some text on this should be included, since the other visible channels and the 1.6 µm channel, which are not impacted by water vapor, show good agreement. At 2.2 µm the MODIS channel sits in a water-vapor absorption band, whereas the ETM+ band is much broader and includes both the water-vapor absorption and the window. So for this channel, accounting for the water vapor is crucial.
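For context, the band-width argument follows directly from the usual SBAF definition in the cross-calibration literature (written here in its standard form, not copied from the manuscript): the broader the relative spectral response RSR(\lambda), the more of the absorption feature it integrates over.

\mathrm{SBAF} = \frac{\bar{\rho}_{\mathrm{ref}}}{\bar{\rho}_{\mathrm{tgt}}}
= \frac{\int \rho(\lambda)\,\mathrm{RSR}_{\mathrm{ref}}(\lambda)\,d\lambda \,\big/ \int \mathrm{RSR}_{\mathrm{ref}}(\lambda)\,d\lambda}
       {\int \rho(\lambda)\,\mathrm{RSR}_{\mathrm{tgt}}(\lambda)\,d\lambda \,\big/ \int \mathrm{RSR}_{\mathrm{tgt}}(\lambda)\,d\lambda}

Since \rho(\lambda) carries the water-vapor absorption, variability in column water vapor changes the numerator and denominator unequally whenever the two RSRs sample the absorption feature differently, which is exactly the ETM+ SWIR2 versus MODIS 2.2 µm situation described above.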

 

Fig. 9: Please state what the black Hyperion line represents. Is it the average of all 2300 Hyperion image spectra?

 

Line 604: Landsat-7's wide spectral response for the NIR weights the radiance contribution toward shorter wavelengths, and the larger width also contains more absorption. For SWIR-2, the shorter wavelengths are in water-vapor absorption, whereas the rest of the band resides in the window.

 

Fig. 11: The MODIS/Landsat-8 difference is mostly spectral, and it is good to see that after SBAF the average reflectances are closer. Because of the water-vapor variability, there are large seasonal cycles and daily noise.

 

Line 636: Is 0.0356% an absolute or a relative reflectance difference with respect to the model?

 

Figures 12 and 13: Could the magenta points be made smaller so that the black-dot reflectance distribution can be visualized?

 

Figure 15: There are many more seasonal outliers for Terra MODIS than for Landsat (Fig. 14), yet the MSG smoothing does not seem to be impacted. Does the 3rd-order polynomial fit use least-squares distance? If so, the outliers would have a greater impact than the points close to the mean. Perhaps the large black dots hide how many more data points there are near the mean, and the outliers just seem exaggerated.
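A minimal numerical check of that intuition (synthetic data, and assuming an ordinary least-squares polynomial fit, which may differ from the authors' fitting routine):

import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)
y = 0.55 + 0.01 * t + rng.normal(0.0, 0.002, t.size)  # slow drift + noise
y_out = y.copy()
y_out[::25] += 0.05  # inject 8 large outliers

for label, data in (("clean", y), ("with outliers", y_out)):
    coeffs = np.polyfit(t, data, 3)  # ordinary least-squares cubic
    print(f"{label:>13}: fit at t=0.5 -> {np.polyval(coeffs, 0.5):.4f}")

# With ~200 points clustered near the mean, a handful of outliers pulls
# the cubic only slightly, consistent with the guess that the dense
# points dominate the fit even though least squares weights squared
# residuals.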

 

Line 687: Were Terra MODIS C6.1 reflectances used? The dataset versions should be added to the table, e.g., "Terra MODIS C6.1".

 

Fig. 19: I am curious what the authors believe causes the seasonal variations seen for some sensor pairs, while for other pairs the T2T gain trends are flat. Does this suggest on-board calibration variation or methodology inadequacies?

 

Line 798: "average is within 0.5-1% difference on average for all sensor pairs and bands". I would agree this is true looking at Table 5 for the green, NIR, SWIR1, and SWIR2 bands; for the red, CA, and blue bands, the difference is more in the range of 1-2%.

 

Line 832: This paper seems to be an extension of the work by Shrestha, Khakurel, Hasan, Fajardo, etc. Please outline more clearly in the text which methodology and validation have been enhanced. Is the main extension of this work the use of global PICS to improve the sampling, as summarized by this paragraph?

 

Line 864: Applying this method to African and Asian domains is promising. Applying it to US GEO satellites would be difficult due to the lack of stable PICS in the Western Hemisphere. Maybe I am missing something here.

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report


Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

 

The authors present an exciting paper about developing and validating the expanded T2T cross-calibration technique. The expanded T2T cross-calibration technique was validated by obtaining consistent results compared to several trusted cross-calibration techniques. After validation, the expanded T2T technique was applied at the global scale using EPICS Global. Six sensors were calibrated with Landsat 8 as the reference sensor for this analysis. The authors also analyzed the difference between EPICS Global and North Africa with this technique to evaluate the model's performance in global applications. The expanded T2T technique was computed using Monte Carlo simulation. The manuscript is clear, relevant to the field, well structured, and scientifically sound. Its results are reproducible based on the details given in the methods section. The manuscript is well written and should be of great interest to readers. One small remark on the conclusion: the authors should say more about their future work.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report


Comments for author File: Comments.pdf

Author Response

Thank you. Please see the attachment.

Author Response File: Author Response.docx
