Peer-Review Record

Climate Data Records from Meteosat First Generation Part I: Simulation of Accurate Top-of-Atmosphere Spectral Radiance over Pseudo-Invariant Calibration Sites for the Retrieval of the In-Flight Visible Spectral Response

Remote Sens. 2018, 10(12), 1959; https://doi.org/10.3390/rs10121959
by Yves M. Govaerts 1,*, Frank Rüthrich 2, Viju O. John 2 and Ralf Quast 3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 5 November 2018 / Revised: 29 November 2018 / Accepted: 3 December 2018 / Published: 5 December 2018
(This article belongs to the Special Issue Assessment of Quality and Usability of Climate Data Records)

Round 1

Reviewer 1 Report

This study validates the libRadtran version 2 radiative transfer model (RTM) spectral radiances and associated inputs used to predict at-sensor radiances for determining the spectral ageing of the MFG spectral response functions (SRFs). The RTM radiances are validated over the Libyan desert site by comparison with BRFs derived from MODIS and MERIS observations. The RTM radiances are then used to compute the Meteosat gains over clear-sky ocean, the Libyan desert, and deep convective clouds (DCC). Consistent scene-type gains from MSG HRV would validate the RTM radiances, since the HRV SRFs are well known. This is part of a three-part analysis to properly calibrate MFG using all known sensor information and inputs.

 

This is an important paper for the calibration community. I recommend publication after these comments are addressed.

 

Comments

 

Line 12. The authors estimate an uncertainty of 5% for the RTM-simulated at-sensor radiances. How does this compare with other studies using simulated radiances? This background information seemed to be lacking in the paper.

 

For example, the following paper works through an input-by-input uncertainty analysis of an RTM. How does its uncertainty compare with that of this study?

Chen et al. 2015, Uncertainty Evaluation of an In-Flight Absolute Radiometric Calibration Using a Statistical Monte Carlo Method, IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 5, May 2015.
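For reference, an input-perturbation Monte Carlo analysis of the kind described by Chen et al. can be sketched as follows; the forward model, state variables, and perturbation magnitudes below are illustrative placeholders, not values taken from either study.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_toa_radiance(aot, h2o, surf_albedo):
    """Placeholder forward model standing in for a full RTM run (e.g. libRadtran)."""
    return 300.0 * surf_albedo * np.exp(-0.1 * aot) * (1.0 - 0.02 * h2o)

# Nominal state variables and assumed 1-sigma input uncertainties (illustrative values only)
nominal = {"aot": 0.20, "h2o": 2.0, "surf_albedo": 0.45}
sigma = {"aot": 0.05, "h2o": 0.30, "surf_albedo": 0.01}

n_samples = 10_000
radiances = np.empty(n_samples)
for i in range(n_samples):
    perturbed = {k: rng.normal(v, sigma[k]) for k, v in nominal.items()}
    radiances[i] = simulate_toa_radiance(**perturbed)

# The spread of the simulated radiances gives the propagated relative uncertainty
print(f"Relative TOA radiance uncertainty: {100 * radiances.std() / radiances.mean():.1f}%")
```

Comparing the relative uncertainty obtained this way with the 5% estimate quoted in the paper would make the comparison with earlier studies explicit.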

 

Section 4.3 line 212.

The impact of inserting stratospheric volcanic aerosol on the RTM at-sensor clear-sky ocean radiance would be very difficult to validate. How do the authors convince themselves that the appropriate amount of aerosol has been included during volcanic eruptions?

 

Line 230. I believe that the water droplet radius should be 10 µm rather than 20 µm, as shown in Fig. 4.

 

Line 232. After vertically integrating the Fig. 4 COT profile, is the total COT then between 70 and 120, as stated in the text?
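As a quick consistency check of the kind implied here, the column-integrated COT is simply the sum of the per-layer optical thicknesses; a minimal sketch with placeholder values (not the actual Fig. 4 profile):

```python
import numpy as np

# Illustrative per-layer cloud optical thickness values (not the actual Fig. 4 profile)
layer_cot = np.array([5.0, 12.0, 20.0, 25.0, 18.0, 10.0, 5.0])

# The column-integrated COT is the sum of the per-layer optical thicknesses;
# if the figure shows extinction [1/km] instead, multiply each layer by its geometric thickness first.
total_cot = layer_cot.sum()
print(f"Vertically integrated COT: {total_cot:.0f}")  # compare against the 70-120 range quoted in the text
```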

 

Line 248. No collection, version, or edition numbers are given for the OLI, MODIS, MERIS, and MSI L1B datasets. This is extremely important for reproducing the results: calibration improvements are made with every reprocessing, and these could affect the BRDF analysis.

 

For example, the Aqua-MODIS response-versus-scan-angle (RVS) issues could impact the BRF accuracy as a function of view angle.

Angal et al. 2018, Results From the Deep Convective Clouds-Based Response Versus Scan-Angle Characterization for the MODIS Reflective Solar Bands, IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 2, February 2018.

Bhatt et al. 2017, Characterizing response versus scan-angle for MODIS reflective solar bands using deep convective clouds, J. Appl. Remote Sens., vol. 11, no. 1, p. 016014, 2017.

 

Line 255. For Meteosat there is only one view angle for the Libyan desert site, which limits the set of solar and azimuth angles that need to be validated. Evaluating all angles of the BRF may therefore not be representative of the RTM difference under the angular conditions actually observed by Meteosat.

 

The Libya-4 BRF validation is performed using sun-synchronous polar-orbiting data. Do these satellites observe the Libyan desert under the same view and solar angular conditions as Meteosat?
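For reference, the fixed Meteosat viewing geometry over the site can be computed directly from the geostationary orbit geometry; the sketch below uses assumed Libya-4 coordinates (roughly 28.55°N, 23.39°E) and a sub-satellite point at 0° longitude, which are illustrative assumptions rather than values from the paper.

```python
import numpy as np

R_E = 6371.0     # mean Earth radius [km]
R_GEO = 42164.0  # geostationary orbital radius [km]

def geo_view_zenith(site_lat_deg, site_lon_deg, ssp_lon_deg=0.0):
    """View zenith angle [deg] of a geostationary satellite as seen from a ground site."""
    lat = np.radians(site_lat_deg)
    dlon = np.radians(site_lon_deg - ssp_lon_deg)
    cos_gamma = np.cos(lat) * np.cos(dlon)  # Earth-central angle between site and sub-satellite point
    slant = np.sqrt(R_GEO**2 + R_E**2 - 2.0 * R_GEO * R_E * cos_gamma)
    cos_vza = (R_GEO * cos_gamma - R_E) / slant
    return np.degrees(np.arccos(cos_vza))

# Assumed Libya-4 coordinates; Meteosat sub-satellite point at 0 deg longitude
print(f"Meteosat view zenith angle at Libya-4: {geo_view_zenith(28.55, 23.39):.1f} deg")
```

Only overpasses whose view geometry falls close to this fixed angle (and to the Meteosat solar geometries) would be directly comparable.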

 

To derive a true BRF, all of the angular conditions must be sampled for a particular solar zenith angle. A typical sun-synchronous sensor cannot provide all of the view and azimuth sampling. How is Eq. 5 evaluated without complete angular coverage?
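One common workaround when full angular coverage is unavailable is to bin the available observations by solar/view geometry and restrict the comparison with simulations to populated bins; the minimal sketch below uses placeholder bin widths, column names, and data, and is not necessarily how Eq. 5 of the paper is evaluated.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000

# Hypothetical multi-sensor observations: BRF plus solar zenith, view zenith and relative azimuth angles
obs = pd.DataFrame({
    "brf": rng.uniform(0.40, 0.60, n),
    "sza": rng.uniform(10.0, 60.0, n),
    "vza": rng.uniform(0.0, 50.0, n),
    "raa": rng.uniform(0.0, 180.0, n),
})

# Bin by geometry (assumed 5/5/15 degree bin widths) and average only within populated bins,
# so simulations are compared against observations at the geometries actually sampled.
obs["sza_bin"] = (obs["sza"] // 5).astype(int)
obs["vza_bin"] = (obs["vza"] // 5).astype(int)
obs["raa_bin"] = (obs["raa"] // 15).astype(int)

binned = obs.groupby(["sza_bin", "vza_bin", "raa_bin"])["brf"].agg(["mean", "count"])
print(binned[binned["count"] >= 5].head())
```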

 

I would think that validating the clear-sky ocean and DCC BRFs with MODIS and MERIS radiances would also be worthwhile, since these targets encompass a larger domain and therefore more angular bins are sampled for the BRF analysis with simulated data. Why was this not performed?

 

Fig. 7: Could the number of points be mentioned in the figure caption? I cannot determine the distribution of the points in the plot.

 

Fig. 7: What could cause the bimodal distribution of the Met-7 DCC pixel count and RTM radiance pairs?

 

Fig. 7: Why does the range of the Libya-4 RTM pixel radiances differ between Met-7 and HRVIS? Is not the complete year of data used for HRVIS in 2013, as for Met-7 in 2005? The whole range of solar zenith angles should be represented in both plots.

 

Line 295. What is the DCC BT threshold uncertainty for MVIRI, and how does that translate into a visible uncertainty?

 

Line 295. How will the authors determine whether a calibration gain difference is due to the SRF and not to other factors? A few sentences should be added to help readers understand how much impact the simulated radiances will have on the final calibration gain.

 


Author Response

The reply to reviewer 1 is attached in the author cover letter.

Author Response File: Author Response.pdf

Reviewer 2 Report

I have to say the results from this study are very important to the FCDR as well as the GSICS communities. Therefore, I highly recommend publishing this paper after only a few minor revisions.

1. The retrieved in-flight SRF result is mainly determined by the accuracy of the forward simulation if we assume there is no instrument non-linearity. Therefore, how much uncertainty would be expected in the SRF retrieval given a ±2% uncertainty in the simulation? Or can the authors give a specific conclusion on whether it is reliable to use this method to retrieve the SRF, and why?

2. Can the proposed DCC identification method differentiate real DCC from thick cirrus cloud (see the illustrative sketch below)? In addition, how did the authors deal with instrument signal saturation over DCC? As far as I know, several MODIS channels saturate over DCC.

3. A detailed table listing all the inputs for the forward simulation would be highly recommended, e.g., how are airborne dust and the cloud profiles accurately modelled (AOT, layer height, thickness, size distribution, refractive index, non-sphericity)?
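For context on point 2, a typical IR-based DCC pixel selection combines a cold 11 µm brightness temperature threshold with a spatial homogeneity test to reject thin or broken cirrus; the sketch below illustrates that generic approach with assumed thresholds, not the authors' actual criteria.

```python
import numpy as np
from scipy.ndimage import generic_filter

def select_dcc_pixels(bt11, bt_max=205.0, std_max=1.0, window=3):
    """Flag candidate deep convective cloud pixels in an 11-um brightness temperature image.

    Criteria (typical of DCC calibration studies, not necessarily those of the paper):
    - very cold 11-um brightness temperature (convective cores), and
    - low local BT variability, to reject thin or inhomogeneous cirrus edges.
    """
    cold = bt11 < bt_max
    local_std = generic_filter(bt11, np.std, size=window, mode="nearest")
    homogeneous = local_std < std_max
    return cold & homogeneous

# Example with synthetic data
bt = 200.0 + 10.0 * np.random.default_rng(0).random((50, 50))
mask = select_dcc_pixels(bt)
print(f"{mask.sum()} candidate DCC pixels flagged")
```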

Author Response

The reply to reviewer 2 is attached in the author cover letter.

Author Response File: Author Response.pdf

Reviewer 3 Report

Review of

 

Climate Data Records from Meteosat First Generation: Simulation of Accurate Top-Of-Atmosphere Spectral Radiance over Pseudo-Invariant Calibration Sites for the Retrieval of the In-Flight MVIRI Visible Spectral Response

 

by Govaerts et al.

 

 

General comments:

 

This is an interesting paper addressing the key issue of reprocessing long time series of radiometric observations derived from spaceborne instruments. The paper is reasonably well written and I have only minor editorial comments. Figure quality is poor and could be improved. From a user perspective, indicating how the calibrated TOA data are or will be made available and used would be useful.

 

Recommendation: minor revisions.

 

 

Particular comments:

 

- General: too many "has been"/"have been". Most of them should be replaced by “was”/“were”.

- Title: could probably be shortened. Please avoid using acronyms in the title that very few readers know, such as “MVIRI”. Just delete “MVIRI”?

- Abstract: please avoid using project acronyms in the abstract of a scientific paper. Delete “Within the FIDUCEO project,”

- L. 47: replace “objective” by “objectives”

- L. 80 (Figure 1): “Met-7” is used in this Figure and elsewhere but is not defined.

- L. 98 (“state variables”): give examples here and/or refer to Table 1.

- L. 105 (“accurately match”): unclear, please explain.

- L. 169 (Eq. 3): Why is Sun-Earth distance needed? Please explain.

- L. 228 (Figure 4): the “um” unit is unclear (presumably “µm”). The color lines should be thicker.

- L. 269 (“between targets”): do you mean “across targets”?

-  L. 278 (Figure 7): units of y-axes are missing

-  L. 278 (Table 4): units are missing

- L. 329 (Conclusion): It should be mentioned here how the calibrated TOA data are or will be made available. Will MTG be a game-changer?

 

 


Author Response

The reply to reviewer 3 is attached in the author cover letter.

Author Response File: Author Response.pdf
