Article
Peer-Review Record

A Coupled Evaluation of Operational MODIS and Model Aerosol Products for Maritime Environments Using Sun Photometry: Evaluation of the Fine and Coarse Mode

Remote Sens. 2022, 14(13), 2978; https://doi.org/10.3390/rs14132978
by Jeffrey S. Reid 1,*, Amanda Gumber 2, Jianglong Zhang 3, Robert E. Holz 2, Juli I. Rubin 4, Peng Xian 1, Alexander Smirnov 5,6, Thomas F. Eck 6,7, Norman T. O’Neill 8, Robert C. Levy 6, Elizabeth A. Reid 1, Peter R. Colarco 6, Angela Benedetti 9 and Taichu Tanaka 10
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 6 May 2022 / Revised: 8 June 2022 / Accepted: 10 June 2022 / Published: 22 June 2022
(This article belongs to the Special Issue Passive Remote Sensing of Oceanic Whitecaps)

Round 1

Reviewer 1 Report

This paper seeks to discern skill and examine some possible additional degrees of freedom in estimating key aerosol parameters using remotely sensed and assimilated AOD measurements compared against a 'truth' AOD dataset. The complexity embodied in this is vast. Establishing a 'truth' dataset with some fidelity is difficult, and the authors spend some time describing the MAN and AERONET data used for this purpose. Establishing a 'truth' dataset with sufficient data density for comparisons with periodic and problematic MODIS overpasses is also difficult, and the authors go into good discussion about it as well. Comparing data to AOD-assimilated models is a painstaking process. That was clear here.

The focus of this paper on the marine environment is useful since it helps assure data quality to a degree. Still, the complexity of aerosol size and composition and how AOD emerges as an observable signal leaves the informed reader a little concerned that the effort would be fruitless. That was not the case here. The authors have clearly spent lots of time examining the data in detail and set the stage for several follow-up efforts. Their results demonstrate some reasonable skill in MODIS AOD and its assimilation into the consensus models. It is no surprise, though, that coarse mode aerosol shows better skill, and also no surprise that the fine mode and fine fraction show so much less.

All this being said, I have two general criticisms:

1) The presentation of results was at times a bit hard, and sometimes very hard, to follow. I think the text just needs some polish. Several results are presented in the text that I was unable to trace back to a figure or table. Please review the language for clarity.

2) There are many small grammatical, typographical, spelling and language errors.

In both cases above, I, as a reviewer, would usually tally them up and present them as corrections. But there were so many that it feels beyond the expectations of a reviewer to correct them. I simply ask the authors to read their text carefully.

In terms of things the paper could use to fill in some gaps, two things stand out, one big, one less so:

1) The authors point out the problematic impact of 'definition' of what accounts for fine and coarse mode aerosol via the 4 component models of the C4C. Surely this had enough of an impact to merit a brief description of these definitions in a way that the reader could discern how these definitions manifest in the model data. But I could not find a description.

2) Establishing 0.08 and 0.28 as key thresholds was presented in a way that left it unclear both what the authors intended to 'threshold' and why it was being used. As a result, in the course of reading the manuscript, places where I expected to see these thresholds used did not use them, and when they were used, I had to regularly revisit the threshold explanation to interpret the analysis.

Figure 6 is difficult to see. Could it use the same size symbols as Figure 5?

The symbol overlap in Figure 7 may be improved with smaller symbols.

All tables: There's no need for the abbreviations, and there appears to be plenty of formatting that could be done for higher quality presentation.

Overall, good important work, and the start of a much needed analysis. I hope you do the follow-ups you point to in the text, otherwise all this work would simply raise a big pile of questions far in excess of the value of the conclusions.

 

Author Response

Response Reviewer 1:

 

 

To both reviewers: Let us begin by thanking the reviewers for their reading and comments.  This was a hard paper to write, and we imagine the many details included can be hard to keep track of.  Indeed, we tried to be very systematic in our analysis because small differences can at times have large interpretive implications.  Nothing in remote sensing and modeling data is straightforward, and we wished to note all assumptions without burdening the reader.  To aid in interpretation, we chose to highlight the conclusions using bullets.  We have done our best to respond and clarify where possible.

 

Reviewer 1

“This paper seeks to discern skill and examine some possible additional degrees of freedom in estimating key aerosol parameters using remotely sensed and assimilated AOD measurements compared against a 'truth' AOD dataset. The complexity embodied in this is vast. Establishing a 'truth' dataset with some fidelity is difficult, and the authors spend some time describing the MAN and AERONET data used for this purpose. Establishing a 'truth' dataset with sufficient data density for comparisons with periodic and problematic MODIS overpasses is also difficult, and the authors go into good discussion about it as well. Comparing data to AOD-assimilated models is a painstaking process. That was clear here.”

 

Thank you for understanding our intent and the complexities as well.  This paper came out of an attempt to close the sea salt budget and improve whitecap/sea salt production estimates.  It became very clear that the available datasets were not up to the job at face value, and we needed to perform a coupled error analysis.  Analyses such as these are absolutely necessary if we are ever to put inverse modeling on a sound footing.

 

 

“The focus of this paper on the marine environment is useful since it helps assure data quality to a degree. Still, the complexity of aerosol size and composition and how AOD emerges as an observable signal leaves the informed reader a little concerned that the effort would be fruitless. That was not the case here. The authors have clearly spent lots of time examining the data in detail and set the stage for several follow-up efforts. Their results demonstrate some reasonable skill in MODIS AOD and its assimilation into the consensus models. It is no surprise, though, that coarse mode aerosol shows better skill, and also no surprise that the fine mode and fine fraction show so much less.”

 

Again, thank you.  We have suspected there were problems since the first MODIS description paper (Levy et al., 2013), as both the AOD and Angstrom exponent were high-biased.  That meant either that: a) fine AOD is overestimated relative to coarse; or b) the modeled fine mode volume median diameter is too small.  What did surprise us (and the developers) was that the fine bias was correlated between model and satellite, and that the MODIS fine mode bias increased with cloud fraction in a way that was inconsistent with 3D radiation effects.  But you are right, this has opened a warren of possibilities on how to extract more information from the retrievals.
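For readers who want to see the spectral logic behind this point, below is a minimal illustrative sketch (hypothetical AOD values and wavelengths, not taken from the paper's dataset) showing that adding excess fine-mode optical depth raises total AOD and the Angstrom exponent together:

```python
import numpy as np

def angstrom_exponent(aod_1, aod_2, wl_1=500.0, wl_2=870.0):
    """Standard two-wavelength Angstrom exponent (wavelengths in nm)."""
    return -np.log(aod_1 / aod_2) / np.log(wl_1 / wl_2)

# Hypothetical clean maritime case: a small, spectrally steep fine mode plus a
# larger, spectrally flat coarse mode (values are illustrative only).
fine_500, fine_870 = 0.03, 0.01
coarse_500, coarse_870 = 0.07, 0.065

alpha_true = angstrom_exponent(fine_500 + coarse_500, fine_870 + coarse_870)

# Now overestimate the fine mode by 0.02 at 500 nm (scaled to 870 nm with the
# same fine-mode spectral ratio). Both total AOD and alpha increase together.
extra = 0.02
alpha_biased = angstrom_exponent(fine_500 + extra + coarse_500,
                                 fine_870 + extra * (fine_870 / fine_500) + coarse_870)

print(round(alpha_true, 2), round(alpha_biased, 2))  # e.g., 0.52 vs 0.69
```

Because the coarse mode contributes almost no spectral slope, a joint high bias in total AOD and Angstrom exponent points toward the fine mode, which is the point made above.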

 

 

“All this being said, I have two general criticisms:

1) The presentation of results was at times a bit hard, and sometimes very hard, to follow. I think the text just needs some polish. Several results are presented in the text that I was unable to trace back to a figure or table. Please review the language for clarity.”

 

We went through the paper again and tried to clarify. If you still have trouble following, please give us a few examples for reference.

 

 

“2) There are many small grammatical, typographical, spelling and language errors.

In both cases above, I, as a reviewer, would usually tally them up and present them as corrections. But there were so many that it feels beyond the expectations of a reviewer to correct them. I simply ask the authors to read their text carefully.”

 

Thanks for the comment.  This paper already went through an editor, but we have sent it to a second editor for additional review.

 

 

 

“In terms of things the paper could use to fill in some gaps, two things stand out, one big, one less so:

1) The authors point out the problematic impact of 'definition' of what accounts for fine and coarse mode aerosol via the 4 component models of the C4C. Surely this had enough of an impact to merit a brief description of these definitions in a way that the reader could discern how these definitions manifest in the model data. But I could not find a description.”

 

We tried to lay this out on Page 7, but following your input we have rewritten the material there.  The trick is that fine versus coarse is the next degree of freedom after total AOD, and spectral AOD error is tightly coupled to this.  Both Norm O'Neill and Jeff Reid have entire papers under construction that deal with these issues from a remote sensing point of view, with Peter Colarco constructing a similar review from a modeling perspective.  If we wish to do apples-to-apples comparisons between the models and MAN, the comparison basis must be consistent with the assumptions of the SDA and MODIS algorithms.  While no two models are alike, fortunately all models are largely 'bulk' in the fine mode; the difficulty is with coarse mode representation.  We therefore had to draw a few lines, which are elaborated on in the text.

 

2) “Establishing 0.08 and 0.28 as key thresholds was presented in a way that left it unclear both what the authors intended to 'threshold' and why it was being used. As a result, in the course of reading the manuscript, places where I expected to see these thresholds used did not use them, and when they were used, I had to regularly revisit the threshold explanation to interpret the analysis.”

 

These selections were made to improve interpretation.  As noted in Section 2.3, 0.08 is the sample median AOD, 0.28 is one geometric standard deviation above the median (the 84th percentile), and 0.04 is the individual sample noise floor.  We have added additional sentences to this effect in Section 2.3, and reiterated this in several places in the paper.
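As a minimal sketch of how such aggregation thresholds could be derived from a sample of AOD observations (illustrative values and variable names only, not the authors' actual processing code):

```python
import numpy as np

# Hypothetical set of collocated 550 nm AOD observations (e.g., MAN samples);
# the values here are illustrative, not the paper's actual data.
aod_550 = np.array([0.02, 0.04, 0.05, 0.06, 0.08, 0.12, 0.20, 0.30, 0.55])

noise_floor = 0.04                  # single-sample noise floor cited in Section 2.3
median_aod = np.median(aod_550)     # sample median (~0.08 for the paper's dataset)

# AOD is roughly lognormally distributed, so one geometric standard deviation
# above the median corresponds to the 84th percentile (~0.28 in the paper's dataset).
aod_84th = np.percentile(aod_550, 84)

print(noise_floor, median_aod, aod_84th)
```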

 

 

“Figure 6 is difficult to see. Could it use the same size symbols as Figure 5?”

 

It is the same symbol size, but lighter in the middle where there is no bias.  To improve readability, we have adjusted the color bar for Figure 6 (and Figure 3, for consistency) to have a green middle.

 

 

“The symbol overlap in Figure 7 may be improved with smaller symbols.”

 

We have reduced the symbol size by 33%.

 

 

“All tables: There's no need for the abbreviations, and there appears to be plenty of formatting that could be done for higher quality presentation.”

 

We agree, but this will be handled by the typesetters.  Our principal concern was to get everything onto one page.  We have spelled out the abbreviations, but if you have suggestions on how to improve the organization, we would be grateful.

 

 

“Overall, good important work, and the start of a much needed analysis. I hope you do the follow-ups you point to in the text, otherwise all this work would simply raise a big pile of questions far in excess of the value of the conclusions.”

 

Thanks for the compliment.  Indeed, we are already working on a series of papers inspired by this analysis.  Right now, perfect-retrieval studies are reproducing the Angstrom exponent and fine AOD bias.  Next will be assimilation studies of fine and coarse AOD.  Lastly, we have started a deep dive against POLDER as well.

 

Reviewer 2 Report

Summary

The authors present a comprehensive comparison of total, fine and coarse Aerosol Optical Depth (AOD), as well as the fine mode fraction (η500), at 550 nm wavelength between the satellite-based dark target aerosol retrieval product from the Moderate Resolution Imaging Spectroradiometer (MODIS) and modelled AOD from the International Cooperative for Aerosol Prediction (ICAP) core four multi-model consensus (C4C). Their study focuses on fair weather maritime conditions over a 4-year period, and they further utilize the global shipboard Maritime Aerosol Network (MAN) and selected island sun photometer stations from the Aerosol Robotic Network (AERONET) as the 'ground truth' for their comparison. The fine/coarse mode AOD model/satellite agreement over the maritime environment is an unexplored area in the community and consequently, the scientific impact of the study is rather high. The research has scientific merit and therefore, it is worth being published under the special issue “Passive Remote Sensing of Oceanic Whitecaps” of the Remote Sensing journal. I would kindly suggest that the authors consider the following recommendations in order to improve the manuscript.

General comments:

Although the study focuses on maritime environments, terrestrial differences are reviewed for completeness. Over land and bright desert/soil surfaces, the Deep Blue retrievals have shown better agreement with ground-based sun photometers. The Deep Blue retrievals are provided alongside Dark Target in the MOD04/MYD04 aerosol products. Have the authors seen any difference in the AOD correlation of MODIS and MAN/C4C near bright surfaces (e.g., dust zones) using the combined product instead of DT-only, accounting for the spatial averaging in their study? Especially near the coastlines where there might be an effect (e.g., Figure 6). It is not clear whether the combined DT-DB has been used for the spatial averaging or data from the DT-only product. Some more clarification is needed.

Furthermore, it is stated in the introduction that the comparison is focused 'on fair weather conditions, as an evaluation of maritime environment under significant weather conditions requires a different methodology altogether'. In this study, fair weather conditions have been implied through the availability of MAN Level 2 QA assured observations, but that only applies to intercomparisons between MAN and MODIS/C4C. Nevertheless, Figures 3 and 4 include all weather conditions, and in fact the most significant biases between MODIS and C4C are attributed to extreme events (e.g., Page 10, Lines 43-52). One option is to limit the study to fair weather conditions in all environments (e.g., over land), or else to clearly state the weather conditions under which the findings apply.

Another point that I am missing in the study is that, although the pairwise comparison includes an extensive dataset of 4 years of satellite observations, there is no discussion regarding the seasonal variation of the agreement/disagreement. Is there such a variation, and why?

Specific comments:

Abstract, Page 1, Line 6: 'For the years 2015-2019, we perform…'. The study covers the years from 2016 to 2019; please correct it.

Introduction, Page 2, Line 9: 'Given the relatively low … significant cloud impacts.' What does 'natural' marine boundary layers mean? Please replace the word 'natural' with one that describes it more precisely or remove it completely.

Methods, Data and Models: Following the enumeration, this section should be numbered as 2. Please go through the rest of the manuscript, as inconsistencies in the numbering exist in other parts too (e.g., 1.1 should be 2.1; after Section 3.4 comes Section 3.3.1, etc.).

Page 4, Line 24: '(+0.04±0.1*AOD550)'. What does the asterisk refer to?

Page 9, Line 12: Do the authors perhaps refer to Figure 1(c)?

Results, Page 10, Line 24: In this paragraph the authors discuss terrestrial differences between MODIS and C4C products. How do the findings compare to previous studies regarding fine mode AOD from the MODIS aerosol product (for example, Yan et al., 2021)? Yan, X., Zang, Z., Liang, C., Luo, N., Ren, R., Cribb, M., and Li, Z.: New global aerosol fine-mode fraction data over land derived from MODIS satellite retrievals, Environ. Pollut., 276, 116707, https://doi.org/10.1016/j.envpol.2021.116707, 2021.

Page 12, Line 2: Do Figures 5a, c, and e (this applies to Figure 6 as well) correspond to common datasets between MODIS, MAN, and C4C? It seems that Figure 5b does not have the same data points, introducing a possible sampling bias which is further translated into the conclusions? The authors comment that there is a yield on corresponding Aqua of ~60%, but why wasn't the dataset limited to common cases from all three involved datasets, namely MAN, MODIS, and C4C? Please clarify this, as Figures 5 and 6 are two of the core figures in the study showing the agreement with the 'ground truth'.

Page 14, Line 2: I suggest plotting 1:1 lines in Figure 7 for clarification and readability.

Page 25, Line 29: '…perhaps by 50% or more'. This statement is a bit speculative and vague. If the authors have grounds to support it, then please elaborate.

Author Response

Response Reviewer 2

 

To both reviewers: Let us begin by thanking the reviewers for their reading and comments.  This was a hard paper to write, and we imagine the many details included can be hard to keep track of.  Indeed, we tried to be very systematic in our analysis because small differences can at times have large interpretive implications.  Nothing in remote sensing and modeling data is straightforward, and we wished to note all assumptions without burdening the reader.  To aid in interpretation, we chose to highlight the conclusions using bullets.  We have done our best to respond and clarify where possible.

 

 

“The authors present a comprehensive comparison of total, fine and coarse Aerosol Optical Depth (AOD), as well as the fine mode fraction (η500), at 550 nm wavelength between the satellite-based dark target aerosol retrieval product from the Moderate Resolution Imaging Spectroradiometer (MODIS) and modelled AOD from the International Cooperative for Aerosol Prediction (ICAP) core four multi-model consensus (C4C). Their study focuses on fair weather maritime conditions over a 4-year period, and they further utilize the global shipboard Maritime Aerosol Network (MAN) and selected island sun photometer stations from the Aerosol Robotic Network (AERONET) as the 'ground truth' for their comparison. The fine/coarse mode AOD model/satellite agreement over the maritime environment is an unexplored area in the community and consequently, the scientific impact of the study is rather high. The research has scientific merit and therefore, it is worth being published under the special issue “Passive Remote Sensing of Oceanic Whitecaps” of the Remote Sensing journal. I would kindly suggest that the authors consider the following recommendations in order to improve the manuscript.”

 

Thank you for the kind words. We will do our best to incorporate your suggestions.

 

 

General comments:

“Although the study focuses on maritime environments, terrestrial differences are reviewed for completeness. Over land and bright desert/soil surfaces, the Deep Blue retrievals have shown better agreement with ground-based sun photometers. The Deep Blue retrievals are provided alongside Dark Target in the MOD04/MYD04 aerosol products. Have the authors seen any difference in the AOD correlation of MODIS and MAN/C4C near bright surfaces (e.g., dust zones) using the combined product instead of DT-only, accounting for the spatial averaging in their study? Especially near the coastlines where there might be an effect (e.g., Figure 6).”

 

Indeed!  Expect a paper to be submitted shortly by Gumber et al. that systematically goes through the Dark Target, Deep Blue, and MAIAC MODIS products, as well as VIIRS.  The focus is on extreme events, but the whole dataset is analyzed.  As you are probably aware, the differences can be quite significant.  This paper is being followed up by comparisons with the C4C.  Because the over-land problem is far more difficult than the over-ocean problem, we had to split this into two papers.

 

 

“It is not clear whether the combined DT-DB has been used for the spatial averaging or data from the DT-only product. Some more clarification is needed.”

 

We have edited the manuscript to emphasize that what is shown over land is the combined DT-DB product.  This is the standard product that should be used by the community, as agreed to by the two developers (Rob Levy for DT and Christina Hsu for DB).

 

 

“Furthermore, it is stated in the introduction that the comparison is focused 'on fair weather conditions, as an evaluation of maritime environment under significant weather conditions requires a different methodology altogether'. In this study, fair weather conditions have been implied through the availability of MAN Level 2 QA assured observations, but that only applies to intercomparisons between MAN and MODIS/C4C. Nevertheless, Figures 3 and 4 include all weather conditions, and in fact the most significant biases between MODIS and C4C are attributed to extreme events (e.g., Page 10, Lines 43-52). One option is to limit the study to fair weather conditions in all environments (e.g., over land), or else to clearly state the weather conditions under which the findings apply.”

 

We agree in concept, but the whole point of this paper is to assess the relative skill of the fine and coarse mode partition of AOD in maritime environments.  As we noted in the introduction (and now further emphasize), evaluation data are inherently sample-biased toward fair weather environments, and this is the best that we can do at this stage without making a long paper even longer.  It is for this reason that we have been so careful with bulk versus pairwise sampling.  The development of a full error model and a combined analysis is a follow-on step, and will almost certainly need to be done in association with a daily reanalysis consensus.  That is an enormous undertaking.  Here, in this analysis, we were able to identify where the issues lie.  Subsequent studies can be, and are currently being, conducted to sort out these issues.  With the information we learned from this analysis, we have a good idea of how to conduct such further studies.  However, please do not misinterpret the inclusion of the over-land components.  The total AOD is presented simply to demonstrate the nature of the terrestrial-to-maritime transition.  There are no operational MODIS products that include fine mode fraction over land.  However, there are many papers that describe the skill of the MODIS and C4C total AOD over land, and hence we did not repeat that analysis here, although we have now referenced these even further in this paper.  While we agree that the Yan et al. work described below is a worthy effort, those products are not operationally generated or ingested into any numerical weather prediction data stream, and thus are outside the scope of this study.  This said, we would encourage the authors to approach remote sensing data providers regarding a potential transition.

 

 

“Another point that I am missing in the study is that, although the pairwise comparison includes an extensive dataset of 4 years of satellite observations, there is no discussion regarding the seasonal variation of the agreement/disagreement. Is there such a variation, and why?”

 

The short answer is that including seasonality would further increase the scope and complicate a long paper with a very targeted message.  Overall, total AOD is good in both the operational MODIS and model products, but both high-bias the fine mode AOD and fine mode fraction.  This basic finding has no seasonal dependence per se, although it is embedded in the sampling season for MAN (there are not many ships in the southern hemisphere ocean during its winter).  We have provided explanations and relationships to explore why some of this bias exists.  For MODIS, cloud cover seems to be the determining factor for high fine mode AOD, and surface wind speed for high wind conditions.  In the models, it is a combination of:  a) too much secondary fine mode production; b) over-transport; and c) under-production of over-land dust (over-ocean results are consistent with over-land studies).  These are all hypotheses that need to be fully investigated, and data users need to be aware of these uncertainties when they use the data.

 

 

“Specific comments: Abstract, Page 1, Line 6: 'For the years 2015-2019, we perform…'. The study covers the years from 2016 to 2019; please correct it.”

 

Good catch.  Thank you.

 

 

“Introduction, Page 2, Line 9: 'Given the relatively low … significant cloud impacts.' What does 'natural' marine boundary layers mean? Please replace the word 'natural' with one that describes it more precisely or remove it completely.”

 

Indeed, this is a fair point, as many argue there are no 'totally natural marine' environments anymore.  We have replaced the term natural with “baseline.”

 

“Methods, Data and Models: Following the enumeration, this section should be numbered as 2. Please go through the rest of the manuscript, as inconsistencies in the numbering exist in other parts too (e.g., 1.1 should be 2.1; after Section 3.4 comes Section 3.3.1, etc.).”

 

Thank you.  In our version we don't see this, except for the Section 3.4 issue (e.g., 3.3.1).  We will keep an eye out for it.

 

“Page 4, Line 24: '(+0.04±0.1*AOD550)'. What does the asterisk refer to?”

It denotes multiplication. We have replaced it with '×'.

 

 

“Page 9, Line 12: Do the authors perhaps refer to Figure 1(c)?”

Hmm, I think the journal office reformatted the paper before they sent it out.  We could figure out all of the other comments, but for this one we need some key words to search on.

 

 

“Results, Page 10, Line 24: In this paragraph the authors discuss terrestrial differences between MODIS and C4C products. How do the findings compare to previous studies regarding fine mode AOD from the MODIS aerosol product (for example, Yan et al., 2021)? Yan, X., Zang, Z., Liang, C., Luo, N., Ren, R., Cribb, M., and Li, Z.: New global aerosol fine-mode fraction data over land derived from MODIS satellite retrievals, Environ. Pollut., 276, 116707, https://doi.org/10.1016/j.envpol.2021.116707, 2021.”

 

Thanks for pointing out this paper.  However, it is not really relevant here.  Our paper investigates the fine and coarse mode representations of a joint remote sensing and model analysis in a maritime environment, and the operational MODIS products only provide total AOD over land.  This said, fine and coarse mode AOD from the C4C are scored against terrestrial AERONET sites in the referenced Sessions et al. and Xian et al. papers.  If you are familiar with the authors of Yan et al., we would encourage them to approach operational centers or developers for inclusion in operational data streams.  Again, please do not misconstrue the point of including the terrestrial data in the early plots; it is just to fill in the global picture.

 

 

“Page 12, Line 2: Do Figures 5a, c, and e (this applies to Figure 6 as well) correspond to common datasets between MODIS, MAN, and C4C? It seems that Figure 5b does not have the same data points, introducing a possible sampling bias which is further translated into the conclusions?”

 

We have added more clarification.  Figures 5(b), (d), and (f) show only those values that are above the single-sample noise floor of AOD550 > 0.04.  This is now better spelled out in the paper.  It is accounted for in all of our analysis, and is why 0.04 is one of the thresholds in our aggregations.

 

 

“The authors comment that there is a yield on corresponding Aqua of ~60%, but why wasn't the dataset limited to common cases from all three involved datasets, namely MAN, MODIS, and C4C? Please clarify this, as Figures 5 and 6 are two of the core figures in the study showing the agreement with the 'ground truth'.”

 

We are not sure what is being referred to here, as we have statistics for both bulk and pairwise samples.  For pairwise in particular, we investigated cases when we had Terra MODIS, Aqua MODIS and the C4C with MAN to specifically demonstrate that we are not sample biased.  We have now clarified this in our manuscript.

 

 

“Page 14, Line 2: I suggest plotting 1:1 lines in Figure 7 for clarification and readability.”

 

We have added the 1:1 lines.

 

 

“Page 25, Line 29: '…perhaps by 50% or more'. This statement is a bit speculative and vague. If the authors have grounds to support it, then please elaborate.”

 

This was an approximation, but you are right that we should apply an error model.  The statement has been struck.

 

 
