Article
Peer-Review Record

Assessment of New Satellite Missions within the Framework of Numerical Weather Prediction

Remote Sens. 2020, 12(10), 1580; https://doi.org/10.3390/rs12101580
by Stuart Newman 1,*, Fabien Carminati 1, Heather Lawrence 1,†, Niels Bormann 2, Kirsti Salonen 2 and William Bell 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 27 February 2020 / Revised: 7 May 2020 / Accepted: 9 May 2020 / Published: 15 May 2020

Round 1

Reviewer 1 Report

Manuscript title: Assessment of new satellite missions within the framework of numerical weather prediction

Authors: Stuart Newman, Fabien Carminati, Heather Lawrence, Niels Bormann, Kirsti Salonen and William Bell

Summary:

The European Union GAIA-CLIM (Gap Analysis for Integrated Atmospheric ECV CLImate Monitoring) project examined the calibration/validation of Earth observation data sets using non-satellite reference data. The authors have explored the role of NWP frameworks for assessing the data quality of recent satellite missions at two centers, ECMWF and the Met Office. As a demonstration of the utility of NWP systems for characterizing satellite measurements, they show examples in this paper of anomaly detection, such as identifying geographically and temporally varying calibration biases and radio frequency interference. They also acknowledge limitations in the use of NWP for validation, particularly uncertainties in surface emission, which remain poorly constrained. This means that while we can identify inter-satellite biases for surface-sensitive channels on microwave imagers (at frequencies typically below 89 GHz), it is difficult to assign an absolute uncertainty to differences between observed radiances and NWP model equivalents. The manuscript is well organized. I have a few points for the authors to modify in order to improve the paper's contents. The paper could be publishable in Remote Sensing with major revisions.

 

Major comments:

  1. Line 351-354 & Figure 3: “Recently, a physically-based correction has been developed for FY-3C MWRI to mitigate the effects of hot load reflector emission on the calibration [42], resulting in a substantially reduced ascending/descending bias. MWRI data are now being assimilated successfully at NWP centers such as the Met Office alongside other comparable microwave imagers.”

Can you show the reduced ascending/descending bias for comparison with Figure 3? You have only shown the solar-dependent biases. Also, please state whether the ascending/descending biases correspond to the positive/negative biases, respectively. In addition, please discuss or review previous theory on how the ascending/descending (solar-dependent) bias is reduced.

Minor comments:

  1. Line 423-425 & Figure 7: “The NWP framework is acting as an intercalibration standard, revealing in this case that the AMSR2 channel is warmer than the RTTOV calculation by almost 4 K while the GMI channel matches the model/RTTOV calculation to within 1 K.” Are the “4 K” and “1 K” values globally averaged? Please specify. The same question applies to Figure 8.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

Newman et al. present an excellent paper describing some of the work done as
part of the GAIA-CLIM project. It presents a rigorous methodology and
detailed discussion to assess what role NWP can play in assessing new
satellite missions. I have no major comments on the methodology or discussion
and recommend this paper can be published after minor improvements of the
presentation, which I will outline in detail below. Numbers refer to "page
number - line number".

1. Introduction

1-30-32: "The exploitation ... recognized standard", this is true
for many applications, and it is indeed critically important for climate
applications, but as stated here it may be too absolute. In particular
imagers can still be useful for some users such as forecasters to visually
identify cloud systems, even when reflectances (for solar channels)
or radiances (for terrestrial channels) are well outside the design
specification by a poorly characterised amount.

1-43: What are "comparator NWP analysis"? I know a comparator as a piece
of electronics hardware.

2.1. NWP assessments of current satellite missions

2-80: Acronym "RTTOV" is not explained.

2.1.1. Satellite data description

3-98: Acronym "GCOM-W1" not explained.

3-99 / 3-121: JAXA is spelt out in 3-99 without mentioning the acronym but
explained in 3-121, the acronym should probably be moved here from 3-121.

2.1.2. Data assimilation configurations

4-150: Acronym "FASTEM" not explained.

2.1.3. Data selection

4-168: The term "observation error" has the potential for confusion here. To
users outside the data assimilation community, this may be understood to mean
instrument error or measurement error. My field is not data assimilation, but
if I'm not mistaken, in data assimilation the measurement error is only one of
several components of the observation error. Considering that the target
journal here is not specifically about data assimilation, I would recommend to
define carefully what is meant here, or to avoid the term altogether (I note the
term is not used elsewhere in the paper). How does it relate to instrument
error, measurement error, bias, or O-B?
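For concreteness, here is a minimal sketch (with entirely hypothetical numbers, not taken from the manuscript) of the common data-assimilation convention in which the assigned observation error combines instrument noise with representativeness and forward-model error in quadrature, assuming independence:

```python
import math

# Illustrative decomposition (hypothetical values): the "observation error"
# used in data assimilation typically combines measurement noise with
# representativeness and radiative-transfer (forward-model) error,
# added in quadrature when the components are assumed independent.
sigma_instrument = 0.3   # K, instrument noise (e.g. NEDT), hypothetical
sigma_represent = 0.4    # K, representativeness error, hypothetical
sigma_forward = 0.2      # K, forward-model error, hypothetical

sigma_obs = math.sqrt(sigma_instrument**2 + sigma_represent**2 + sigma_forward**2)
print(f"assumed total observation error: {sigma_obs:.2f} K")
```

Under this convention the assigned observation error is necessarily larger than the measurement error alone, which is why conflating the two terms can mislead readers outside the data-assimilation community.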

5-185: "a scattering index", how is this defined?

2.2. Gruan processor

5-202: I would mention here that the GRUAN uncertainties do not include error
covariance estimates (in your paper only mentioned on 7-255).

5-206 / 5-210: You write that you simulate top-of-atmosphere radiances,
but also that profile uncertainties are "mapped" to observation space.
Do you mean that with RTTOV, you propagate a profile of uncertainties
(from GRUAN) to estimate the uncertainty in the top-of-atmosphere
radiance? Or is this "mapping" different from a normal propagation of
uncertainties? Please clarify.
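To illustrate what a "normal" linear propagation would look like (this is a sketch with hypothetical Jacobian and uncertainty values, not the authors' GRUAN processor code): given a radiance y = H(x) with Jacobian J = dy/dx, a profile covariance S maps to a radiance-space variance J S Jᵀ.

```python
import numpy as np

# Sketch of linear uncertainty propagation to observation space
# (hypothetical Jacobian and uncertainties, for illustration only).
rng = np.random.default_rng(0)

n_levels = 5                                  # toy vertical grid
jacobian = rng.uniform(0.0, 0.1, n_levels)    # dTb/dT per level [K/K], hypothetical
u_profile = np.full(n_levels, 0.5)            # per-level temperature uncertainty [K]

# Uncorrelated errors: diagonal covariance
cov_uncorr = np.diag(u_profile**2)
u_tb_uncorr = np.sqrt(jacobian @ cov_uncorr @ jacobian)

# Fully correlated errors: rank-one covariance u u^T
cov_corr = np.outer(u_profile, u_profile)
u_tb_corr = np.sqrt(jacobian @ cov_corr @ jacobian)

print(f"uncorrelated: {u_tb_uncorr:.3f} K, fully correlated: {u_tb_corr:.3f} K")
```

The two cases differ, which is exactly why the absence of error covariance estimates in the GRUAN products (my comment at 5-202) matters for the propagated radiance uncertainty.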

2.2.1. Sampling for GRUAN comparison

6, Table 2: Please add the coordinates including the elevation to this table.
The elevation is particularly relevant to interpret the remark at 17-567.

6-225 "significant" — how is this significance tested for?

3.1.1. Detection of geographic biases

9-348: Acronym "SSMI/S" not explained.

10-362 and 10-364: Why is day defined as 08-15 and night as 20-03? In both
cases the window spans 4 hours before and 3 hours after the centre (noon or
midnight). Why is this definition not symmetrical?

11, Figure 4: This figure needs to be improved. The vertical axis label
should indicate that what is shown is a brightness temperature difference
rather than a brightness temperature. The horizontal axis should show
frequencies, like the vertical axis in Figure 8. Blue and green may
not be the best colours considering colour blindness. There should
be no line drawn between different channel indices, because this is
a discontinuous quantity (the value at "channel 3.5" has no meaning).
The order of the axes (channel horizontal, BT vertical) is inconsistent
with Figure 8 (channel vertical, BT horizontal). If the channel index is
replaced by a description of channel frequency and polarisation, the channel
number itself becomes optional.

3.1.2. Detection of time-dependent biases

12, Figure 5: Please state the complete frequency and polarisation in the
panel titles, like done in the vertical axis in Figure 8, so that the reader
does not need to refer to the parenthetical remark in the figure label (in
this case the channel number is optional).

3.1.4. Inter-satellite comparison

14, Figure 8. This figure is better than Figure 4 but it is still not very
meaningful to draw a line between the different channels, some type of bar
graph might be a more appropriate visualisation here.

3.2.1. IASI-NG

15, Figure 9: see earlier comments about connecting lines between channels.

15-496: Apart from errors resulting from fully random or fully systematic
effects, any errors resulting from a combination, such as mentioned in 20-682,
are very likely to be important and should probably be mentioned before the
discussion section.

16-511: There is also by definition a 5% possibility that there is agreement
within 95% by chance, due to random sampling — unless I misunderstand what
exactly is meant here.
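The point can be checked numerically (an illustrative simulation, not tied to the manuscript's data): even for a perfectly characterised instrument, Gaussian O-B differences fall outside the k = 1.96 expanded uncertainty about 5% of the time purely by random sampling.

```python
import numpy as np

# Monte Carlo check of coverage by chance: normalised O-B values drawn
# from a standard Gaussian should fall within +/- 1.96 sigma (the 95%
# interval) about 95% of the time, and outside it about 5% of the time.
rng = np.random.default_rng(42)

n = 100_000
omb = rng.normal(loc=0.0, scale=1.0, size=n)   # normalised O-B, sigma = 1
frac = (np.abs(omb) <= 1.96).mean()            # fraction inside the interval
print(f"fraction within k=1.96: {frac:.3f}")
```

So observing 95% agreement is consistent with, but does not by itself demonstrate, a correct uncertainty budget.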

16, Figure 10: see earlier comments about connecting lines between channels.

16-527: see my comment at 15-496

17-552: see my comment at 15-496

3.2.2. MWS

17-564 and Figure 11: Please be consistent with the number of significant
digits shown between text (currently 5) and figure (currently 4); I would
adapt the text to the figure axis label in this case and use 4 significant
digits to describe the channels.

17-567: "from the surface contribution", how is this shown and how do the
authors reach this conclusion? How much does the surface influence those
temperature sounding channels in those cases? Table 2 needs elevations to aid
in the interpretation. Do those cases with surface contributions to
temperature sounding channels all arise from GRUAN sites with a higher
elevation?

17, Figure 11: see earlier comments about connecting lines between channels.

18, Figure 12: see earlier comments about connecting lines between channels.

18-577: how is 85% "slightly" below 95%? That would seem to be a big
difference.

18-590: Less consistency in sub-tropics; but you are less likely to have a
surface contribution, is this then surprising considering the remark at
17-567?

5. Conclusion

20-710: "traceable uncertainties for NWP fields", I am not sure this can be
stated. For an individual collocation, you can trace those NWP data points
via the GRUAN collocation, although strictly speaking even that would require
a complete characterisation of the matchup uncertainty, which is difficult.
But for the vast majority of NWP data points, such a traceability is not
established, unless you can trace them via the collocated NWP data point,
which I don't think you can. Although the authors have indeed stated it is
just an "attempt", even this attempt has been limited to mid-latitude sites.
I think it would be more accurate to replace "traceable" by "an improved
assessment of", which also matches the title and abstract of the paper.

21-720: "tests the limits", is this a euphemism for "currently impossible"?
It seems to be significantly beyond the limits, considering you have combined
uncertainties larger than 1 K for humidity sounding channels.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

Manuscript title: Assessment of new satellite missions within the framework of numerical weather prediction

Authors: Stuart Newman, Fabien Carminati, Heather Lawrence, Niels Bormann, Kirsti Salonen and William Bell

Reviewer:

Summary:

The European Union GAIA-CLIM (Gap Analysis for Integrated Atmospheric ECV CLImate Monitoring) project examined the calibration/validation of Earth observation data sets using non-satellite reference data. The authors have explored the role of NWP frameworks for assessing the data quality of recent satellite missions at two centers, ECMWF and the Met Office. As a demonstration of the utility of NWP systems for characterizing satellite measurements, they show examples in this paper of anomaly detection, such as identifying geographically and temporally varying calibration biases and radio frequency interference. They also acknowledge limitations in the use of NWP for validation, particularly uncertainties in surface emission, which remain poorly constrained. This means that while we can identify inter-satellite biases for surface-sensitive channels on microwave imagers (at frequencies typically below 89 GHz), it is difficult to assign an absolute uncertainty to differences between observed radiances and NWP model equivalents. The manuscript is well organized and the authors have modified the manuscript according to my previous comments. I have a few minor points for the authors. The paper could be publishable in Remote Sensing with minor revisions.

 

Minor comments:

  1. Line 216: What is inside the “()”? Please double check.
  2. Line 231, Eq.8: Please double check the square root. The equation formula in previous version (manuscript version 1) seems better.
  3. Line 239, Eq. 10: Please double check the square root. The equation formula in previous version (manuscript version 1) seems better.
  4. Line 654: “Figures 9-12”: How about Figure 13? Please double check.

 

Author Response

We thank the reviewer for their comments. We address these in turn:

Line 216: we had omitted a cross-reference to Table 2, we have now corrected this. Thanks for spotting it.

Lines 231, 239: the square root signs have been rendered oddly in the pdf that was produced from the Word document. We are not sure why, but anticipate that, should the paper be accepted, this will be addressed at the proof stage.

Line 654: the reviewer is absolutely right, we had not updated the numbering after adding a figure and we have corrected the text to read "Figures 10-13".

 
