Article
Peer-Review Record

MASS-UMAP: Fast and Accurate Analog Ensemble Search in Weather Radar Archives

Remote Sens. 2019, 11(24), 2922; https://doi.org/10.3390/rs11242922
by Gabriele Franch 1,2,*, Giuseppe Jurman 1, Luca Coviello 1, Marta Pendesini 3 and Cesare Furlanello 1
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 30 September 2019 / Revised: 23 November 2019 / Accepted: 3 December 2019 / Published: 6 December 2019
(This article belongs to the Special Issue Radar Meteorology)

Round 1

Reviewer 1 Report

The abstract has to be improved.

UMAP outperforms PCA... How? RMSE? R^2? Which mathematical criteria?

Line 139: UMAP will learn... Is it an intelligent program? Do you mean an artificial neural network? What do you mean by "learn"?

Line 253: The hyper-parameters for this model are the number of components d = 5 and the number of neighbors n = 200... How did you choose these numbers? Trial and error? A mathematical procedure?

In general, just a suggestion:

Figures 5 and 6 could be represented as a scatter plot. As they stand, there are a lot of numbers, which makes them difficult to read.

The same applies to Figures 7 to 11.

Author Response

The abstract has to be improved. UMAP outperforms PCA... How? RMSE? R^2? Which mathematical criteria?

The reference distance metric is the Mean Squared Error (MSE), as introduced in Section 1. We now also clarify the choice of this criterion in the abstract: “We show that UMAP combined with a grid search protocol over relevant hyper-parameters can find analog sequences with lower Mean Square Error (MSE) than Principal Component Analysis (PCA)”.
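
For reference, with N pixels per frame the criterion is the standard definition (in LaTeX):

```latex
\mathrm{MSE}(x, y) = \frac{1}{N} \sum_{i=1}^{N} (x_i - y_i)^2
```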

Line 139: UMAP will learn... Is it an intelligent program? Do you mean an artificial neural network? What do you mean by "learn"?

UMAP is indeed a manifold learning technique for dimensionality reduction, as defined by its authors (McInnes et al. 2018). The UMAP algorithm has both an underlying theory and a practical implementation, based on approximate k-nearest-neighbor computation (Nearest-Neighbor-Descent) and stochastic gradient descent (SGD) for efficient optimization. UMAP thus belongs to the family of k-neighbor-based graph learning algorithms (e.g., Laplacian Eigenmaps, Isomap, and t-SNE). We specialize UMAP to learn a manifold structure and an embedding for the radar image sequences, in our case by minimizing the user-defined distance metric. We agree that this is not an ANN architecture; nonetheless, the term "learning" is commonly used for the optimization of low-dimensional representations via stochastic gradient descent. These aspects are now better described in Section 2.1; further, we reference new resources on the theoretical foundations of UMAP and its interpretation.
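
To illustrate what "learning" amounts to in practice, here is a minimal sketch using the umap-learn package (the array X and the file name are hypothetical, not the paper's code):

```python
# Minimal sketch of the embedding step, assuming the umap-learn package and a
# matrix X of flattened low-resolution radar frames (one row per scan).
import numpy as np
import umap  # pip install umap-learn

X = np.load("radar_frames_64x64.npy").reshape(-1, 64 * 64)  # hypothetical file

# "Learning" here means optimizing a low-dimensional layout via stochastic
# gradient descent so that the k-neighbor graph of X is preserved; no neural
# network is involved.
reducer = umap.UMAP(n_components=5, n_neighbors=200, metric="euclidean")
embedding = reducer.fit_transform(X)  # shape: (n_frames, 5)
```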

Line 253: The hyper-parameters for this model are the number of components d = 5 and the number of neighbors n = 200... How did you choose these numbers? Trial and error? A mathematical procedure?

The (d = 5, n = 200) hyper-parameter pair was chosen from a grid search over d = 2 to 100 and n = 5 to 1000, as described in Section 3.2, by optimizing for both the Jaccard and Canberra metrics (Figures 7 to 11 and Appendix A.1). We have now clarified in Section 3.1 that this choice is detailed in Section 3.2. We also reference a new online resource for UMAP that explicitly addresses the practical interpretation of the algorithm on synthetic and real datasets. Moreover, we show the impact of different hyper-parameter choices on the radar dataset in the new Figures A.8-A.14 (Appendix A.2) for different values of the number of components and neighbors.
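
A sketch of such a grid-search protocol is given below (function names and the query subset are illustrative, not the authors' code; X is the frame matrix from the sketch above, and in practice the evaluation would run on a subsample for tractability):

```python
# For each (d, n) pair, embed the data and score how well the embedding
# preserves the MSE-based top-k neighbor sets of the original image space.
import numpy as np
import umap
from scipy.spatial.distance import cdist

def topk_sets(data, queries, k):
    """Indices of the k nearest rows of data (squared Euclidean = MSE ranking)."""
    return np.argsort(cdist(queries, data, "sqeuclidean"), axis=1)[:, :k]

def mean_jaccard(ref_sets, emb_sets):
    return float(np.mean([len(set(a) & set(b)) / len(set(a) | set(b))
                          for a, b in zip(ref_sets, emb_sets)]))

k = 50
queries = X[:100]                   # illustrative query subset
ref = topk_sets(X, queries, k)      # reference ranking in the image space
scores = {}
for d in (2, 5, 10, 50, 100):
    for n in (5, 50, 200, 1000):
        emb = umap.UMAP(n_components=d, n_neighbors=n).fit_transform(X)
        scores[(d, n)] = mean_jaccard(ref, topk_sets(emb, emb[:100], k))
```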

In general, just a suggestion: Figures 5 and 6 could be represented as a scatter plot. As they stand, there are a lot of numbers, which makes them difficult to read. The same applies to Figures 7 to 11.

Thanks for the suggestion. We have added a new scatter plot (Figure 12) summarising the results for the Jaccard score. Note that we plot the UMAP Jaccard score for d = 2 and d = 5 only, as the curves overlap for d = {5, ..., 100}.

Reviewer 2 Report

Paper "MASS-UMAP: Fast and Accurate Analog Ensemble Search in Weather Radar Archives"

by Gabriele Franch, Giuseppe Jurman, Luca Coviello, Marta Pendesini, Cesare Furlanello

for "Remote Sensing"

The paper is very interesting and reliably written. The analog approach is one of the ways of analysing and predicting meteorological variables. I have only minor comments and questions for the authors.

Detailed remarks

Page 2, lines 65-66 and below: The four parameters should be better explained or justified. They are so important for understanding the paper that references alone are not enough.

Page 3, line 104: Why is the MAX(Z) weather radar product used? MAX is the product most burdened with non-meteorological echoes, which can significantly impact the results. I suggest CAPPI (constant altitude PPI) or SRI (surface rainfall intensity) (if precipitation is to be investigated) or some similar product.

Page 3 in general: Was quality control of the data performed? I assume so; however, it is a crucial task for such applications of weather radar data. It should be at least briefly described.

Page 4, lines 132-133: The data were upscaled to a lower spatial resolution, among other reasons, "to alleviate the noise and scatter present in the MAX(Z) product". See my previous comment: was the data quality not controlled?

Page 4, lines 129-130, "A simple bilinear resize was applied (...) to a resolution of 3.75 x 3.75 km. This was chosen for (...)": Of course. But I wonder whether such a resolution is appropriate in cases of convection. The properties of convective cells differ from those of a stratiform field, so the observation of a particular convective cell is very significant for the evolution of the given event.

Page 5, lines 152-153: I see that the MSE is the main criterion of the ranking. I suppose that other criteria would also be important, e.g., rainfall rate, direction of advection, presence of convection, etc.

General remark

Because one aim of the paper is, among other things, to obtain predictions, the search results should be evaluated against traditional nowcasting techniques such as extrapolation: how does analog-based nowcasting compare with extrapolation, including the evolution of meteorological fields, as a function of lead time?

Author Response

Detailed remarks

Page 2, lines 65-66 and below: The four parameters should be better explained or justified. They are so important for understanding the paper that references alone are not enough.

We thank the reviewer for the suggestion. We now also reference a new online resource for UMAP that explicitly addresses the practical interpretation of the algorithm on synthetic and real datasets. Moreover, we show the impact of different hyper-parameter choices on the radar dataset in the new Figures A.8-A.14 (Appendix A.2) for different values of the number of components and neighbors.

Page 3, line 104: Why is the MAX(Z) weather radar product used? MAX is the product most burdened with non-meteorological echoes, which can significantly impact the results. I suggest CAPPI (constant altitude PPI) or SRI (surface rainfall intensity) (if precipitation is to be investigated) or some similar product.

The MAX(Z) product used in this work is the current choice of the regional Civil Protection Agency for alerting purposes. The use of MAX(Z) is preferred over the more common CAPPI (constant altitude plan position indicator) because of the high operating altitude of the Mt. Macaion receiver (1866 m AMSL) and the presence of a complex Alpine landscape; a CAPPI product would miss all precipitation events below the chosen altitude that can be observed by MAX(Z). SRI (surface rainfall intensity) is likewise excluded because current civil protection assessments are done using reflectivity; however, the presented pipeline can be used with other products as well.

Page 3 in general: Was quality control of the data performed? I assume so; however, it is a crucial task for such applications of weather radar data. It should be at least briefly described.
Page 4, lines 132-133: The data were upscaled to a lower spatial resolution, among other reasons, "to alleviate the noise and scatter present in the MAX(Z) product". See my previous comment: was the data quality not controlled?

The operating agency (the Civil Protection of the Autonomous Province of Bozen-Sudtirol) provides a real-time processing pipeline for the generation of the radar products, including the MAX(Z) product used in this work. The pipeline is specifically designed to cope with problems due to the complex orography of the region, such as beam blockage and backscattering caused by the nearby mountains. We now describe the data generation and quality control in more detail in Subsection 2.3:

“The radar has been in operation since 2003, initially with different operating modes and frequencies. The most important upgrade and calibration of the radar was performed in 2010 with the installation of the digital receiver.

Given the complex orography of the region, the lower elevation angles of the radar scan suffer from beam blockage and backscattering caused by the nearby mountains. The polar volume is low-pass filtered and corrected for backscattering and attenuation using the SURFILUM software suite, originally developed by Météo-France, in which a Digital Terrain Model (DTM) is used to simulate the radar beam and correct for beam blockage and backscattering errors (Delrieu, 1995). All the main products in use by the local civil protection are generated from the corrected polar volume.

The product chosen for this study is the 2D MAX(Z) reflectivity (the maximum over the vertical section) at 500 m horizontal resolution, which is currently in use by the civil protection for alerting and assessment purposes.

The use of MAX(Z) is preferred over the more common constant altitude plan position indicator (CAPPI) because of the high operating altitude of the receiver, which would cause a constant-altitude product to miss all precipitation events below the chosen altitude.”

Page 4, lines 129-130, "A simple bilinear resize was applied (...) to a resolution of 3.75 x 3.75 km. This was chosen for (...)": Of course. But I wonder whether such a resolution is appropriate in cases of convection. The properties of convective cells differ from those of a stratiform field, so the observation of a particular convective cell is very significant for the evolution of the given event.

The main reason for the resize is to replicate the pipeline as closely as possible to the one presented by Foresti et al. (2015), where PCA for analog radar search was introduced, while also satisfying similar computational constraints. Foresti et al. found a lack of discrimination ability at higher rainfall thresholds, probably due to the low predictability of orographic rainfall at small spatial scales, in particular in the presence of high convective activity. Previously, Hardenberg et al. (2003) analyzed near-equatorial rain in regimes dominated by deep convection using radar reflectivity at up to 4 x 4 km resolution, similar to that used in this study. Overall, while we believe that the resolution is sufficient for the presented task, a proper characterization of convective rain cells would also require integration with data from other sources, such as ground stations. We thank the reviewer for the suggestion and have included this consideration in the Discussion section.

Page 5, lines 152-153: I see that the MSE is the main criterion of the ranking. I suppose that other criteria would also be important, e.g., rainfall rate, direction of advection, presence of convection, etc.

We agree with the reviewer that the analog search algorithm would be enriched by adding further criteria, such as precipitation descriptors or synoptic variables, as long as they give rise to valid metrics (positive-definite and symmetric). We have applied the MSE for generic front-line applications, but the architecture of the MASS-UMAP pipeline is generalizable as suggested. Accordingly, we have expanded the existing comment in the Discussion section about the use of more specialized metrics and the integration of other variables in the algorithm pipeline.
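
To illustrate how such a generalization could look, umap-learn accepts a user-defined metric as a callable; the metric below is a hypothetical example, not the authors' implementation:

```python
# Sketch: plugging a custom (symmetric, positive-definite) metric into UMAP.
import numba
import numpy as np
import umap

@numba.njit
def custom_metric(a, b):
    # Hypothetical: Euclidean distance on flattened frames plus a penalty on
    # mean-intensity (rainfall-rate-like) differences.
    return np.sqrt(np.sum((a - b) ** 2)) + np.abs(a.mean() - b.mean())

reducer = umap.UMAP(n_components=5, n_neighbors=200, metric=custom_metric)
```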

General remark

Because one aim of the paper is, among other things, to obtain predictions, the search results should be evaluated against traditional nowcasting techniques such as extrapolation: how does analog-based nowcasting compare with extrapolation, including the evolution of meteorological fields, as a function of lead time?

We agree with the reviewer that precipitation nowcasting is one important application of analog search among others. However, the aim of the present paper is to describe a technique for fast and flexible spatiotemporal radar analog retrieval; this technique can be fine-tuned for any application that benefits from analog search, such as statistical downscaling, postprocessing of numerical weather prediction and, of course, nowcasting. We do plan to explore the use of the presented technique for one or more specific applications in further publications.

Reviewer 3 Report

In this manuscript, an approach based on radar precipitation fields is proposed in order to improve the accuracy of analog ensembles and speed up the search for similar patterns. First, the Uniform Manifold Approximation and Projection (UMAP) algorithm is used to reduce the dimensionality of the problem, and the improvements obtained are compared with the Principal Component Analysis (PCA) method. Then, Mueen's Algorithm for Similarity Search (MASS) is used to speed up the search for similar patterns, compared with a linear MSE search.

The presentation is clear and reasonably concise. The paper is well organized, appears to make a good contribution to the literature, and deserves to be published in the Remote Sensing journal. However, some aspects of the results of this approach require further explanation, as detailed below:

Major Comments:

In line 324, it is said that "it is visually clear that UMAP is able to resemble more closely... with regard to PCA".

It is clear that in this example UMAP resembles the reference more closely than PCA, but in other cases that difference may not be so clear; therefore, in my opinion, it would be necessary to evaluate this difference between both methods quantitatively.

On the other hand, to strengthen the performance of the method, I think it would be appropriate to indicate whether the TOP-2 most similar sequences resulting from the searches with the PCA and UMAP methods (Figures 18 and 19) correspond to the same radar scans as the MSE method (Figure 17). In other words, is it possible that the most similar sequences from the PCA or UMAP methods correspond to a radar scan different from the reference one?

Minor Comments:

In line 109, it is mentioned that radar products were obtained from June 2010; however, Figure 1 mentions July 2010. In line 328, the table is not referenced correctly. In line 374, it should say "... that UMAP can use..." instead of "... that UMAP ca use...".

Author Response

Major Comments:

In line 324, it is said that "it is visually clear that UMAP is able to resemble more closely... with regard to PCA". It is clear that in this example UMAP resembles the reference more closely than PCA, but in other cases that difference may not be so clear; therefore, in my opinion, it would be necessary to evaluate this difference between both methods quantitatively.

In Section 3.2 we describe the quantitative evaluation of PCA search versus UMAP search on single images in terms of the Jaccard and Canberra measures, while in Section 3.3.1 we compare UMAP and PCA on sequences of different lengths using the MSE of the retrieved analog sequences (Figures 12 to 15) and discuss the results. To improve the readability of the results in the single-image case, we have added a new scatter plot (Figure 12) in Section 3.2, summarising the results for the Jaccard score of both UMAP and PCA. Note that we plot the UMAP Jaccard score for d = 2 and d = 5 only, as the curves overlap for d = {5, ..., 100}.

On the other hand, to strengthen the performance of the method, I think it would be appropriate to indicate whether the TOP-2 most similar sequences resulting from the searches with the PCA and UMAP methods (Figures 18 and 19) correspond to the same radar scans as the MSE method (Figure 17). In other words, is it possible that the most similar sequences from the PCA or UMAP methods correspond to a radar scan different from the reference one?

This is correct: in the provided example, the top-2 sequences found by MASS-UMAP and MASS-PCA indeed differ from the reference ones found by MSE. However, the UMAP sequences provide at least a partial match with the reference ones, while PCA fails to provide any correspondence. We have improved the captions of Figures 17 to 20 to include these considerations for the presented example. We thank the reviewer for the suggestion.

Minor Comments:

In line 109, it is mentioned that radar products were obtained from June 2010. However, in Figure 1 is mentioned from July 2010. In line 328, the table is not referenced correctly. In line 374, it should be said “… that UMAP can use….” instead of  “… that UMAP ca use….”.

Thank you, all the issues have been corrected. The correct date is July 2010.

Reviewer 4 Report

This paper presents an interesting and apparently successful new approach for efficiently searching radar archives for analogs for a given situation. It involves two steps: first, a dimension-reduction step (from a ~200,000-element raw image, to a lower-resolution 3,000-element image, to an O(10)-element embedding vector via UMAP); and second, a method (MASS) for searching this reduced vector space for nearby matches to a given case. The methods are tested against a brute-force MSE calculation (which is very slow). The English in the paper is good in the beginning, but more errors start to appear later on; however, these do not impede understanding.

My main comment on the paper is that, for a remote sensing audience not necessarily familiar with machine learning and related techniques, the paper is not always going to be easy to understand and some things are not explained very clearly.  I give some examples below.

Another broad point is that, from Figs. 16-19, it does not look like there is much information coming from the time dimension, since the six images change very little compared to the differences between the best match and the query. This suggests that there may not be any point in using time sequences, or that the sequences should be longer. This probably explains why your results hardly change from t=3 to t=24 (Figs. 12-15), and raises the question of whether you need time sequences at all or would find the same analogs using t=1.

p.28; 48: The authors do not make clear what “long” means at line 48, or indeed, whether their instances consist of single radar scans or sequences of scans (later it is clear that it is the latter).  

Section 2.1: Parameters such as t (the length of the sequence) should be introduced here somewhere, rather than in the “workflow” section line 148.  Indeed it would be very helpful to have a table of parameters, since quite a few are introduced.

62-63: is there a way of explaining what this means?  Most readers of this journal will probably not know what a “simplicial complex” is.

153: I think "MSE" is being used confusingly. Here, it seems to be the brute-force computation of distance (not using MASS). Does this mean MSE is being computed in the original image space, or in the embedding space? If in the image space, is it the low-res (64x64) space (I would assume)? Please be clearer about what the "reference" calculation(s) are.

151-165 I found this text confusing.  What is actually happening (I think) is that MASS is not perfect, so if the authors want the best five matches, what they do is get MASS to return extras (say, its top eight) and then use a more exact distance calculation on these eight to determine which of them are the best five.  Is that right?

173: I don't understand what "performance" means in this context; you are simply reducing the dimension, so in what sense can performance be defined? What we care about is that the embedding is sufficiently rich that two instances that are similar in the embedding will also be similar in the original images, so the test seems to be to compare the MSE ranking of the embedding vectors with that of the original images. But I don't see anywhere that you say you are doing this.

177: Please explain more clearly what "MSE" means here; again, does it mean the MSE in the embedding space or in the original image space? There are two things to test here: the fidelity of the embedding and the ability of MASS to find the closest analogs.

187: following the previous comment, the ability of a given distance function to preserve ranking will depend on the embedding and vice versa, won’t it?  It seems that the first decision is to choose a distance metric (in the original image space) that best represents the user needs (what makes one sequence of radar images a good analog for another for forecasting purposes), then choose an embedding and embedding metric that best emulate the rankings of the original data in the original space according to the original distance metric (the metric need not be the same in the original and embedding spaces).

Fig. 5: The caption of this figure needs to provide more information, in particular, that this is presenting tests of whether the dimension reduction preserves the rankings found using the original images (is that correct?)

Fig. 17: Caption needs to say that these are the top two matches for the query sequence shown in Fig. 16.  Subsequent figure captions also need to say this, or say “As in Fig. 17 but for…”

343: "performant" is not a word.

Author Response

This paper presents an interesting and apparently successful new approach for efficiently searching radar archives for analogs for a given situation. It involves two steps: first, a dimension-reduction step (from a ~200,000-element raw image, to a lower-resolution 3,000-element image, to an O(10)-element embedding vector via UMAP); and second, a method (MASS) for searching this reduced vector space for nearby matches to a given case. The methods are tested against a brute-force MSE calculation (which is very slow). The English in the paper is good in the beginning, but more errors start to appear later on; however, these do not impede understanding.

My main comment on the paper is that, for a remote sensing audience not necessarily familiar with machine learning and related techniques, the paper is not always going to be easy to understand and some things are not explained very clearly.  I give some examples below.

Another broad point is that, from Figs. 16-19, it does not look like there is much information coming from the time dimension, since the six images change very little compared to the differences between the best match and the query. This suggests that there may not be any point in using time sequences, or that the sequences should be longer. This probably explains why your results hardly change from t=3 to t=24 (Figs. 12-15), and raises the question of whether you need time sequences at all or would find the same analogs using t=1.

More than 340,000 images were used in this study, corresponding to very diverse precipitation sequence patterns with durations between 2 h and 24 h. The reference sequence in Fig. 16 (Fig. 17 in the new submission) is a 30-minute segment describing a slowly evolving rain pattern, chosen to demonstrate the precision of the new method on the analog search task.

For the 1226 sequences used in Section 3.3, we computed the average MSE difference between the query and the top-50 results at t = 6 and t = 12, considering either the whole sequence or only the first image for the match. We found that querying with sequences reduces the average MSE by 4.6% and 10.9% for t = 6 and t = 12, respectively. We thank the reviewer for the interesting remark. We have added the new results in a new Appendix A.3, along with an explanatory example that shows the difference between extending single-image results in time and querying with t = 6 directly.

The authors do not make clear what "long" means at line 48, or indeed whether their instances consist of single radar scans or sequences of scans (later it is clear that it is the latter).

Thanks for the remark. We have clarified this point, specifying that the search is applied to time sequences.  

Section 2.1: Parameters such as t (the length of the sequence) should be introduced here somewhere, rather than in the “workflow” section line 148.  Indeed it would be very helpful to have a table of parameters, since quite a few are introduced.

We thank the reviewer for the suggestion. We have added to the revised paper a table of definitions as a quick reference for the relevant parameters (Table 1, at the end of Section 2.1).

62-63: is there a way of explaining what this means?  Most readers of this journal will probably not know what a “simplicial complex” is.

We acknowledge that the topological concepts underlying how UMAP works may lie outside the remote sensing scope of this paper. We have edited the description accordingly, and we now make an explicit reference to a new online documentation resource by the UMAP authors (section "Topological Data Analysis and Simplicial Complexes"), which provides a well-developed and accessible explanation of the algorithm. Moreover, we show the impact of different hyper-parameter choices on the radar dataset in the new Figures A.8-A.14 (Appendix A.2) for different values of the number of components and neighbors.

153: I think “MSE” is being used confusingly.  Here, it seems to be the brute-force computation of distance (not using MASS).  Does this mean MSE is being computed in the original image space, or in the embedding space?  If in the image space, is it the low-res (64x64) space (I would assume)? Please be clearer what the "reference" calculation(s) are.

Correct. MASS is used to find the top-k sequences in the reduced space. Subsequently, the k sequences are reordered in the low-res 64x64 image space with respect to the query sequence. The MSE search and the top-k reordering always refer to the low-res 64x64 image space. We have added this clarification.
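
A minimal sketch of this two-stage scheme follows (the fast candidate stage is written as a plain distance computation standing in for MASS; all names are hypothetical, not the authors' code):

```python
# Stage 1: top-(k + extra) candidates in the reduced (embedded) space, here a
# brute-force distance computation standing in for MASS. Stage 2: exact MSE
# reordering of the candidates in the low-res 64x64 image space.
import numpy as np
from scipy.spatial.distance import cdist

def two_stage_search(query_emb, query_img, embedding, frames, k=5, extra=3):
    cand = np.argsort(cdist(query_emb[None, :], embedding)[0])[: k + extra]
    mse = np.array([((frames[i] - query_img) ** 2).mean() for i in cand])
    return cand[np.argsort(mse)][:k]  # best k after exact reordering
```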

151-165 I found this text confusing.  What is actually happening (I think) is that MASS is not perfect, so if the authors want the best five matches, what they do is get MASS to return extras (say, its top eight) and then use a more exact distance calculation on these eight to determine which of them are the best five. Is that right?

This intuition is correct (we apply a partial reordering in the original space), but as mentioned starting from line 155 of the original submission, our algorithm has the twofold goal of recovering analogs both quickly and with optimal ranking. In our experiments, the impact of spurious matches by MASS is secondary to the effect of the projection function operated by UMAP or PCA. As shown by the Jaccard and Canberra indicators (Section 3.2), the dimensionality reduction is key for fast search but introduces a reordering that we correct by lifting back to the original image space. We have edited the text to clarify this point, also referring to the two locations (Methods and Results) where we apply the indicators. We thank the reviewer for the remark.

173: "In the first part (2.6) we assessed the performance of different UMAP configurations on single images using two different metrics (2.6.1, 2.6.2) and in comparison with PCA."

I don't understand what "performance" means in this context; you are simply reducing the dimension, so in what sense can performance be defined? What we care about is that the embedding is sufficiently rich that two instances that are similar in the embedding will also be similar in the original images, so the test seems to be to compare the MSE ranking of the embedding vectors with that of the original images. But I don't see anywhere that you say you are doing this.

We thank the reviewer for the remark. We now clarify that the MSE ranking is evaluated in terms of the Jaccard distance and the Canberra ranking stability indicator, with the workflow detailed in Figure 3. We have added this observation both at the start of Section 2 and in Subsection 2.6 (lines 194-198 of the original submission), specifying that our aim is to evaluate the impact of the dimensionality reduction methods (UMAP at different parameter configurations, or PCA) on finding analogs in terms of MSE.
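
For reference, the two indicators follow the standard definitions (a sketch in LaTeX; here A and B are the top-k index sets retrieved in the original and embedded spaces, and r, s are the corresponding rank vectors; the paper's indicator may use a normalized variant of the Canberra distance):

```latex
% Jaccard similarity between two top-k result sets
J(A, B) = \frac{|A \cap B|}{|A \cup B|}

% Canberra distance between two rank vectors
d_C(r, s) = \sum_{i=1}^{k} \frac{|r_i - s_i|}{|r_i| + |s_i|}
```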

177: Please explain more clearly what "MSE" means here; again, does it mean the MSE in the embedding space or in the original image space? There are two things to test here: the fidelity of the embedding and the ability of MASS to find the closest analogs.

This is correct. We now specify that the MSE distance is always computed in the original (64x64) image space.

187: following the previous comment, the ability of a given distance function to preserve ranking will depend on the embedding and vice versa, won’t it?  It seems that the first decision is to choose a distance metric (in the original image space) that best represents the user needs (what makes one sequence of radar images a good analog for another for forecasting purposes), then choose an embedding and embedding metric that best emulate the rankings of the original data in the original space according to the original distance metric (the metric need not be the same in the original and embedding spaces).

We agree that it would be extremely interesting to tune the choice of the distance in the radar image space, the embedding, and the distance in the projected space according to different applications. This is now mentioned in the Discussion section as a possible generalization of the MASS-UMAP method. In this paper, the distance metric used to search in the embedded space is always the Euclidean distance, which is also used as the original-space metric for the images when training UMAP; since MSE is a monotone function of the Euclidean distance (MSE(x, y) = ||x - y||^2 / N for N pixels), the two induce identical rankings.

Fig. 5: The caption of this figure needs to provide more information, in particular, that this is presenting tests of whether the dimension reduction preserves the rankings found using the original images (is that correct?)

Correct. We thank the reviewer; we have improved the caption to clarify this.

Fig. 17: Caption needs to say that these are the top two matches for the query sequence shown in Fig. 16.  Subsequent figure captions also need to say this, or say “As in Fig. 17 but for…”
343: "performant" is not a word.

We thank the reviewer; all the captions and the noted issues have been updated accordingly.

Round 2

Reviewer 3 Report

The authors responded satisfactorily to the comments on the original paper. I think the revised paper is now suitable for publication in the Remote Sensing journal.
