Article
Peer-Review Record

Intercomparison of Remote Sensing Retrievals: An Examination of Prior-Induced Biases in Averaging Kernel Corrections

Remote Sens. 2020, 12(19), 3239; https://doi.org/10.3390/rs12193239
by Hai Nguyen * and Jonathan Hobbs
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 3 August 2020 / Revised: 25 September 2020 / Accepted: 28 September 2020 / Published: 5 October 2020

Round 1

Reviewer 1 Report

General Comments

In this paper, the authors perform a linear analysis and an accompanying numerical simulation to
highlight the error, induced by comparing multiple retrievals, that is due to
differences in their averaging kernels alone. Such differences arise from different
observational error specifications, prior state uncertainties, and prior mean states.
The results are of some theoretical interest, but it is hard to place them in the
context of other sources of error in remote sensing retrievals, and the assumptions
made in the text simplify the retrieval problem to the point where such placement
would likely be hard to achieve by simple comparison with previous literature.
To name a few examples: different retrievals use different forward models and different
state vectors; different instruments will have different spectral resolutions,
inter-calibration errors, etc. The one implication that the authors focus on, namely
attempted algorithm improvements, is important but not the only one. Comparisons
between satellite retrievals and validation data are often used to correct the
satellite data. What are the implications of this work for the "corrected" satellite
observations? Could large ensembles of retrievals and averaging kernels somehow
reduce the impacts of this error? The proportionality of the bias to x_C-x_T suggests
that this could be the case if the retrievals contain the true states in their
envelope, as would be the case with an unbiased posterior state with scatter driven
by noise.

To make the results in this paper most useful to the community, I suggest the authors
consider:
- misspecified observation errors (i.e., the true observation error is unknown or the radiances
have a non-random error); this could make a large difference in how important this
error source is
- a simple forward model error
- a bias-corrected posterior state; note that this is especially applicable to different
satellites with different bias characteristics

Without a bit more context, I think that this article is probably more fitting for an
applied mathematics or statistics journal than RS.

Specific Comments

171: typo in specification of prior covariance

294: biases in the forward model and data are also mitigated by the prior assumption,
so this suggestion of loosening the prior uncertainty will have other impacts.

Author Response

Please see the attached PDF file.

Author Response File: Author Response.pdf

Reviewer 2 Report

Review of the paper “Intercomparison of remote sensing retrievals: an examination of prior-induced biases in averaging kernel corrections” by Hai Nguyen et al.

 

The focus of this paper is to address a misconception in the literature: that the averaging kernel correction removes the bias introduced by prior misspecification in either (or both) of two optimal estimation retrievals being compared. The authors demonstrate, from analytic equations and from an illustrative set of retrieval experiments, that a non-zero prior-induced bias remains due to misspecification of the prior.
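The residual bias the reviewer summarizes here can be reproduced in a minimal scalar sketch. All of the following is a hedged illustration, not the paper's formulation: a linear forward model y = x + noise, a scalar state, and the specific numerical values are assumptions chosen only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (assumed) setup: scalar state, linear forward model
x_T = 400.0                 # true state
S_e = 1.0                   # observation-error variance (both instruments)
x_c = 395.0                 # comparison prior mean, misspecified (x_c != x_T)
x_a1, x_a2 = 398.0, 402.0   # each retrieval's own prior mean
S_a1, S_a2 = 4.0, 1.0       # each retrieval's prior variance

# Scalar averaging kernels A_i = S_a,i / (S_a,i + S_e)
A1 = S_a1 / (S_a1 + S_e)
A2 = S_a2 / (S_a2 + S_e)

n = 200_000
y1 = x_T + rng.normal(0.0, np.sqrt(S_e), n)
y2 = x_T + rng.normal(0.0, np.sqrt(S_e), n)

# Optimal-estimation retrievals
xhat1 = x_a1 + A1 * (y1 - x_a1)
xhat2 = x_a2 + A2 * (y2 - x_a2)

# Averaging-kernel correction: restate each retrieval
# against the common comparison prior x_c
xtilde1 = xhat1 + (1.0 - A1) * (x_c - x_a1)
xtilde2 = xhat2 + (1.0 - A2) * (x_c - x_a2)

# The corrected difference is still biased:
# E[xtilde1 - xtilde2] = (A1 - A2) * (x_T - x_c), nonzero when x_c != x_T
bias_mc = float(np.mean(xtilde1 - xtilde2))
bias_analytic = (A1 - A2) * (x_T - x_c)
print(bias_mc, bias_analytic)  # both near 1.5 in this toy setup
```

In this sketch the correction removes the dependence on each retrieval's own prior mean, yet the expected corrected difference remains proportional to (x_T - x_c), matching the prior-induced bias the authors derive.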

The paper is well written, concise, well-structured and makes its points in a clear manner. The paper should be published after minor corrections.

 

Major comments

The only sentence which I found troubling in the text is the one on page 8, line 287 (restated on lines 485-491): “Therefore, when a researcher is unsure of the accuracy of the prior means xa,1 and xa,2, it is a good idea to choose a prior with a ‘larger’ prior covariance matrix as the comparison prior”. Yes, the less informative prior (with larger Sa values) will decrease the prior-related bias, but as the authors state on line 301, “In real applications, the prior-induced bias is likely only one component of the overall bias”. The purpose of OE is to bring the best (most accurate) a priori information and the best (most accurate) observations into a retrieval that is weighted by both the prior and observational uncertainties. I find the deliberate selection of less accurate prior information troublesome. If both priors are uncertain, and one is unsure of the accuracy of both prior means, then use of the less informative prior is justified because it better reflects the general uncertainty of the observational situation. The use of the less accurate prior should not be justified solely by its mathematical ability to reduce the prior-induced bias.
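The trade-off discussed above can be seen in the same scalar setting (again an illustrative assumption, not the paper's multivariate formulation): widening both prior variances pushes the scalar averaging kernels A_i = S_a,i / (S_a,i + S_e) toward one, so the prior-induced bias (A1 - A2)(x_T - x_c) shrinks, even though each retrieval is then less constrained by its prior.

```python
# Sketch: bias of the averaging-kernel-corrected difference as the
# (assumed, illustrative) prior variances are scaled up together.
S_e = 1.0                 # observation-error variance
x_T, x_c = 400.0, 395.0   # true state and misspecified comparison prior mean

biases = []
for scale in (1.0, 10.0, 100.0):
    S_a1, S_a2 = 4.0 * scale, 1.0 * scale   # widened prior variances
    A1 = S_a1 / (S_a1 + S_e)                # averaging kernels -> 1
    A2 = S_a2 / (S_a2 + S_e)
    biases.append((A1 - A2) * (x_T - x_c))  # prior-induced bias

print(biases)  # magnitudes decrease as the priors become less informative
```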

Lines 367 and 369 were confusing; I read the lines and stopped my reading cadence. The phrase “In the experiments” implies (on an initial reading) that there is just one set of simulation calculations, so encountering “Similarly, for some experiments” on line 369 was confusing. It may be helpful to state on line 367 “In one simulation experiment” and on line 369 “In other simulation experiments” to avoid confusion.

I very much like the design and execution of Table 4. Nicely done!

I do think the sentence on line 433 is correct, in that the relative bias is reduced, but on line 435 I do not consider -0.012 to be close to -0.006 pm; the two numbers differ by a factor of two. Please revise the text.

 

Minor comments

The paper contains some minor English phrasing issues in several of the sentences.

Line 44. Change to “One simple corollary”

Line 149 Change to “in the OE algorithm”

Line 171 change perhaps to {xa,i, Sai }

Line 174 change to “an instrument to have an”

Line 266 change to “In many applications,”

Line 291 change to “bias in a validation study”

Line 370 change to “based upon the operational prior”

Figure 1. The x and y labels are too small. Does Sw refer to Sa? It may be best to be consistent throughout the text and have the same subscript letter for the working prior in the analytic equation and simulation study sections of the paper.

Line 411 change to “with the theoretical expression”

Line 422 change to “bias in the intercomparison”

 

Author Response

Please see the attached PDF file.

Author Response File: Author Response.pdf

Reviewer 3 Report

This paper demonstrates analytically that comparisons of OE remote sensing retrievals are subject to biases resulting from uncertainties in the prior state, despite the common misconception that averaging kernel corrections will remove such biases. I found this paper to be well-written and informative and have just a few minor comments. The main one is that some context for the magnitude of the prior-induced bias would be useful to include; the authors should consider simulating other theoretical sources of bias (due to instrument calibration, cloud/aerosol effects, etc.) to show how the prior-induced bias might compare. I also have some questions regarding the implications for data assimilation that the authors briefly mention. I would recommend publication after these are addressed.

Specific comments:

Line 55-56: It wasn’t immediately clear here that ‘E’ referred to the ‘expected value’. This becomes clear later in the paper, but it should be explicitly defined here.

Line 84: ‘Similarly, in data assimilation it is sometimes necessary…’ the authors briefly mention here and later (Lines 264 and 448) that their results have implications for data assimilation. However, since this typically involves applying the retrieval observation operator to a modeled atmospheric state (rather than an intercomparison of retrievals), it’s not clear why the prior-induced bias would come into play in this application. Can the authors elaborate?

In a similar vein, the authors may want to consider discussing the fact that several studies have used chemical transport models as an intercomparison platform to indirectly validate trace gas retrievals (such as Zhu et al. (2016), https://doi.org/10.5194/acp-16-13477-2016). Is this another way in which we can avoid prior-induced biases, in addition to weakening the prior constraint matrix for the retrieval?

Technical corrections

Line 41: ‘misspeficiation’ should be misspecification

Line 44: ‘simply’ should be ‘simple’

Line 174: ‘instrument have’ should be ‘instrument to have’

Line 238: ‘misspecificiation’ should be misspecification

Line 275: ‘mean’ should be ‘means’

Line 318: ‘as a mean to’ should be ‘as a means to’

Line 370: ‘based the’ should be ‘based on the’

Author Response

Please see the attached PDF file.

Author Response File: Author Response.pdf
