Article
Peer-Review Record

Retrieval of Water Quality Parameters Based on Near-Surface Remote Sensing and Machine Learning Algorithm

Remote Sens. 2022, 14(21), 5305; https://doi.org/10.3390/rs14215305
by Yubo Zhao 1,2,3, Tao Yu 1,2,*, Bingliang Hu 1,2, Zhoufeng Zhang 1,2, Yuyang Liu 1,2,4, Xiao Liu 1,2, Hong Liu 1,2,5, Jiacheng Liu 1,2,4, Xueji Wang 1,2 and Shuyao Song 1,2,4
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 11 October 2022 / Revised: 16 October 2022 / Accepted: 21 October 2022 / Published: 23 October 2022
(This article belongs to the Special Issue Hyperspectral Remote Sensing Technology in Water Quality Evaluation)

Round 1

Reviewer 1 Report (Previous Reviewer 1)

I believe this manuscript has been significantly improved and now warrants publication in Remote Sensing.

Author Response

It is a great honor to have this manuscript recognized by the Reviewer. The comments put forward by the Reviewer are very valuable to us and have improved the quality of the article. We thank the Reviewer again for taking time out of his/her busy schedule to review our article.

Reviewer 2 Report (Previous Reviewer 2)

Line 226, Figure 5: the statement reads "The red in the highlighted areas represents the noise"; however, I cannot see any red color in the figure.

Figure 2: It might be better to replace "spectral response" with "radiance".

Author Response

Please see the attachment.

Author Response File: Author Response.docx

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

The paper by Zhao et al. demonstrates the core development of a hyperspectral near-surface sensor, initially coupled with in-water sensors, to determine various non-optically active parameters. The hyperspectral sensor has an automated calibration feature, with a calibration plate rotating into the field of view to take reference measurements. The data acquired from the in-water sensors (COD, DO, turbidity and NH3-N) were quality assessed and then used to determine the correlation between the variables, before being divided into training and validation data. Fourteen machine learning algorithms were then used to build individualised retrieval models to estimate COD, DO, turbidity or NH3-N concentrations (with R²s of 0.91-0.98).

The referencing is inconsistent throughout the manuscript.

For further reading on the impact of sky glint, adjacency effects and solar zenith angle, the authors are referred to:

Ruddick, K.G.; Voss, K.; Boss, E.; Castagna, A.; Frouin, R.; Gilerson, A.; Hieronymi, M.; Johnson, B.C.; Kuusk, J.; Lee, Z.; Ondrusek, M.; Vabson, V.; Vendt, R. A Review of Protocols for Fiducial Reference Measurements of Water-Leaving Radiance for Validation of Satellite Remote-Sensing Data over Water. Remote Sens. 2019, 11, 2198. https://doi.org/10.3390/rs11192198

 

Mueller, J.L., Morel, A., Frouin, R., Davis, C., Arnone, R., Carder, K., Lee, Z.P., Steward, R.G., Hooker, S., Mobley, C.D. and McLean, S., 2003. Ocean Optics Protocols For Satellite Ocean Color Sensor Validation, Revision 4. Volume III: Radiometric Measurements and Data Analysis Protocols. https://repository.oceanbestpractices.org/bitstream/handle/11329/478/protocols_ver4_voliii.pdf?sequence=1

This reviewer could not access the cited references (Li et al 2000 and Tang et al 2004) nor confirm equations 6 and 7, and therefore recommends accessible citation material for this section. The above protocols provide suitable reference material.

Minor issues

Line 2: I am not sure ‘short-distance’ remote sensing is an appropriate usage. A quick google of the literature comes up with ‘close-range’, ‘proximal’ or ‘near-surface’ remote sensing.

Line 19: a high signal-to-noise ratio is NOT a shortcoming

Line 27: indexes = indices

Line 57: ‘monitoring index’ – what is this ?

Line 66: incomplete sentence – key sections of what?

Line 86: Hongwei Guo et al – no date

Line 98: ‘…strongly related to Remote Sensing methods’ – what does this mean?

Lines 99-101: The authors definitely need references to back this statement up.

Line 116: ‘time resolution of seconds’ – is this the temporal resolution or integration time?

Line 118: No mention of the other 3 parameters measured (i.e., Chl-a, conductivity, pH); please include details of how they were measured, with protocols referenced.

 

Line 121-122: What comprehensive index?

Line 130: LVF not yet defined.

Line 147: More details on the calibration are required. What is it made of: Teflon, Spectralon, etc.? What is the orientation? Horizontal orientation is essential…

Line 166: Table 1: no integration time, no field of view (FOV) or viewing geometry (nadir or off-nadir, azimuth angle etc)

Line 168: Figure 2c – how are the off-focus, stability and posture tests done? What do these graphs actually represent? More detail is needed in the text, along with an explanation of the graphics, as it does not make any sense currently.

Line 174: How do we know the calibration plate is within the FOV?

Line 181: I am pleased the lab has done a good job, but can you quantify this in the manuscript?

Line 199: Can you quantify how large the pond is and how deep? Are there any other point or diffuse sources of pollution?

Line 212: can you clarify what is meant by artificial chemical measurement?

Line 216: water samples stored at low temperature in an environment with pH < 2? For which parameter is this required, and can you provide a reference for this?

Lines 223-225: Those of us that try to quantify uncertainty in our field measurements would probably not agree with this statement.

Line 244: In Figure 4, can you explain the radiometric variability (frown/smile) in your calibration plate?

Line 256: change ‘do’ to DO.

Lines 295-300: It is really unclear what this means. What do you mean by "…replaced redundant information of same wavelength of the column band"?

Line 320: I could not find either of these cited references (i.e., Li et al. 2000 and Tang et al. 2004).

Lines 331-332: There is no comparison of the hyperspectral imager with independent calibrated sources (a qualifying statement that "each band having a value of less than 0.051 determines its credibility" is insufficient to determine sensor suitability).

Line 330-357: The lesson in spectral characteristics of water quality parameters given in this section would be better informed by actual spectra from the instrument developed in this study illustrating these characteristics.

Line 336: Figure 6 shows high reflection in the blue part of the spectrum, but the manuscript indicates otherwise (“..the reflection of water body in this wavelength range (490-510nm) relatively low…”)

Line 358: Section 2.4.4 on Feature Dimension Reduction gives an overview of the methods used without reference to the preceding section, which discusses the spectral band regions that are significant for water quality assessment. There are no results presented for the PCA, and the results for the Pearson correlation analysis are, I assume, presented in Figure 7, but it is not labelled as such and there is no legend on the color scale bar. On line 374, the authors state that PCA dimension reduction is performed on several bands with the largest absolute Pearson coefficients – how about a table to show these in the results section? Which bands were used for which parameter?

Line 359: If the right bands, or the appropriate spectral band width, are not included, then errors can also occur. Data reduction is a balance between computational efficiency and model accuracy. Also, can you explain why, in these times of cloud computing, this is necessary?

Line 407: Explain ‘L2 regularization’.
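
To illustrate what "L2 regularization" (line 407) refers to, a minimal Python sketch using scikit-learn's Ridge regressor is given below; the data shapes, feature count and penalty strength are invented for illustration and are not the authors' configuration.

```python
# Illustrative only: L2 (ridge) regularization adds a penalty alpha * ||w||^2
# to the least-squares loss, shrinking the regression weights and limiting
# overfitting. All values below are made up for demonstration.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(88, 10))                       # e.g. 88 samples, 10 spectral features
y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=88)

ols = LinearRegression().fit(X, y)                  # no penalty
ridge = Ridge(alpha=1.0).fit(X, y)                  # alpha sets the L2 penalty strength

# The penalized fit has a smaller coefficient norm than the unpenalized fit.
print(np.linalg.norm(ols.coef_), np.linalg.norm(ridge.coef_))
```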

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

TITLE: Retrieval of water quality parameters based on short-distance remote sensing and machine learning algorithm

AUTHORS: Yubo Zhao, Tao Yu, Bingliang Hu, Zhoufeng Zhang, Yuyang Liu, Xiao Liu, Hong Liu, Jiacheng Liu, Xueji Wang and Shuyao Song.

SUMMARY and COMMENTS

The authors developed an automated short-distance, non-point source system for water quality monitoring using a hyperspectral sensor, weather stations and water quality sensors. The authors evaluated 14 models for 4 quality parameters: COD, DO, NH3-N and turbidity. The authors evaluated the models in a tiny fishpond.

The manuscript has major limitations, with limited prospects for the majority of readers, as shown in the following comments.

Major comments:

1.      The manuscript has a very limited scope, evaluating 4 water quality parameters over a limited dataset (only 88 stations) that represents a specific case study of a fishpond. These criteria give the manuscript low scientific soundness.

2.      The authors developed the hyperspectral sensor used in the proposed system; however, validation of the sensor's Rrs was not provided. The authors mentioned in line 160 that "The system has passed a series of basic tests" without any details or reference for that, except three curves in Fig. 3C. The authors should compare the sensor's Rrs with that of other known commercial sensors to evaluate the Rrs accuracy.

3.      The authors used mostly the default parameters for the 14 models (lines 508-511). The default parameters might not be the ultimate choice to provide the highest accuracy. The authors should explain the reason for using the default parameters and their influence on the performance (an illustrative hyperparameter-tuning sketch follows these major comments).

4.      The introduction section must be improved, as it provides general details without any reference (lines 37-44), excessive general talk (lines 45-51), and an irrelevant literature review on satellites and optically active parameters, which are not in the scope of the manuscript (lines 72-85). The previous studies that cover similar non-optically active parameters are based on satellite or UAV sensors, which are not within the scope of the proposed study.

5.      Although the figures must be self-explanatory with sufficiently clear details, this is not the case in Figs. 1, 2, 5, 7 & 8. Figures generally have low quality (Figs. 2, 7), small text (Figs. 2, 5), text that could be deleted without affecting the details (legend of Fig. 6 & titles of Fig. 8), and inappropriate selection of figure format (x-axis of Fig. 6). The schematic drawing of the proposed system in Fig. 1a should provide more details.

6.      Fig. 5a is taken from a scikit-learn.org example (https://bit.ly/3BuScxQ) without any reference. The authors should provide a plot from their own data.

7.      Additionally, the introduction section did not provide sufficient literature on automatic short-distance observation systems, similar to the proposed system, that monitor similar parameters.

8.      The authors mentioned that the proposed system is a non-point source system. However, it should be a point-source system, as it refers to one location.

9.      In the Abstract, the authors present the limitations of satellites and UAVs (lines 17-20) as the reason for the proposed system. However, this is not true. The proposed system could be a good solution for the continuous measurements needed to validate remote sensing data.

10.   Section 3.2 only shows the potential of the proposed system without showing its limitations.

11.   The authors did not show why ensemble learning models have higher robustness in the prediction of water quality parameters.
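
Regarding comment 3, a hypothetical sketch of tuning one regressor's hyperparameters instead of keeping the library defaults is shown below; the model choice, parameter grid, data and scoring are purely illustrative and are not taken from the manuscript.

```python
# Hypothetical example: grid-search a few hyperparameters of one candidate
# regressor rather than relying on scikit-learn defaults. All values are
# illustrative placeholders, not the authors' models or data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(88, 16))                        # placeholder spectral features
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=88)   # placeholder water-quality values

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 2, 4],
}
search = GridSearchCV(RandomForestRegressor(random_state=0),
                      param_grid, cv=5, scoring="r2")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```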

Other comments

- Lines 17-20: Those shortcomings are general statements and do not apply to all satellites and UAVs. It might be better to replace the statement with one that shows the need for the proposed system.

- Line 18: "remote sensing" could be removed.

- Line 24: replace "including" with "namely".

- Line 15: The best algorithms should be summarized in the abstract, or it should at least be mentioned that "ensemble learning models have higher robustness in the prediction of water quality parameters", along with the reason.

- Lines 37-44: The statement is very long. It should be divided into several statements. Additionally, the reference should be mentioned.

- Lines 45-52: This paragraph does not add any valuable information for the reader. It should be removed.

- Line 55: the statement "and time-consuming" is repetitive and should be deleted.

- Lines 72-84: Those studies relate to optically active parameters (e.g., Chl-a, CDOM) and do not have any direct link with the investigated parameters "COD, DO, NH3-N, ..." nor with the proposed system.

Those statements should be removed.

- Line 127: I believe that the word "non-point" should be removed, as it is a point-source measurement.

- Lines 129-131: One of the main objectives of this manuscript is to develop a non-point source system. However, it is hard to see the system components (visible sphere machines, radar flowmeters, small weather stations, self-developed LVF hyperspectral imagers, and rotary calibration devices) in Fig. 1a. The components could be numbered in Fig. 1a and named in the figure caption.

- Line 130: What does LVF stand for? Does it stand for "Linear Variable Filter"? You have to define an acronym before using it.

- Line 133: Does the visible sphere machine represent a camera or a sensor? More description, such as manufacturer, resolution, etc., is needed.

- Lines 135-136: What is the manufacturer of the radar flowmeters & small weather stations?

- Line 147: Where is the calibration plate in Fig. 1c?

- Line 148: How does the calibration plate shrink?

- Line 166, Table 1: What do MTF & F/# stand for?

- Line 169, Fig. 2: What is the need for the numbers (1-7) in Fig. 2a?

- Line 169, Fig. 2: It is hard for researchers who do not develop sensors to understand Fig. 2c.

- Line 169, Fig. 2: Why does the number of curves vary in Fig. 2c?

- Line 182: Does "self-developed" mean that the authors' lab developed this equipment?

- Lines 204-205: Very broad description, which is not suitable for a methodology section. What is the water depth of the pond? Which parameters are measured? What are the descriptive statistics for those measurements?

- Line 234: "developed by our laboratory" – a reference or details are needed for this self-developed equipment.

- Line 239: Why are there only 88 valid images during one and a half months (2022.4.25 to 2022.6.6)?

- Line 242, Table 3: What is the number of samples?

- Line 247, Fig. 5: Does "I" in Fig. 5b refer to the window length?

- Line 247, Fig. 5: Which curves correspond to which windows (51, 25, 10)?

- Line 247, Fig. 5: The window of the Savitzky-Golay filter should be an odd number, shouldn't it? (A brief illustrative sketch follows these comments.)

- Line 247, Fig. 5: The x-axis of Fig. 5d could be improved by increasing the interval.

- Line 247, Fig. 5: It is hard to read the text of Fig. 5e.

- Lines 271-293: The description in lines 271-293 might be extra detail which is not needed. Providing a reference is sufficient.

- Line 321, eq. 6: According to Curtis Mobley (1999) and many other researchers,

Rrs = (Lsw - ρ*Ls) / Ed

Lsw includes the signal from the sky, which should be removed; how did the authors deal with that?

- Line 330, Fig. 6: "curve" should be "curves".

- Line 330, Fig. 6: Why are only 23 Rrs out of 88 spectra shown? Additionally, the legend could be removed.

- Line 330, Fig. 6: What is the unit of reflectance?

- Line 350: I believe that there is no "red shift" in the Rrs shown in Fig. 6; the red shift occurs when the red peak shifts from 690 nm to 710 nm.

- Line 491: The authors did not show correlation among the accuracy assessment in Section 2.6.

- Line 500, Fig. 8: The text is too small.

- Line 500, Fig. 8: The authors could make a unified legend for the 4 subplots.

- Line 500, Fig. 8: The top model could be bolded in each subplot.

- Line 500, Fig. 8: All titles are the same and do not provide any additional information; thus, the titles could be removed.

- Line 502, Fig. 9: I cannot understand the figure. What is the meaning of "regression box plot"? Does the plot represent the distribution of the test data? If that is the case, the figure should be moved to the methodology section.

- Tables 4-7: I would recommend highlighting the top performance number for each parameter using bold text.

- Figs. 10-13: the statement "of test dataset" should be added after "with the highest R²".

- Line 552: What are the spectral feature bands of NH3-N?

 

- Lines 602-609: Rephrase the statements in lines 602-609 to avoid repetition of "the most accurate and suitable algorithm".
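
For the Savitzky-Golay comments above (Line 247, Fig. 5), a small illustrative sketch is given below; the synthetic spectrum, window length and polynomial order are examples rather than the settings used in the manuscript. Note that the window length must be odd and larger than the polynomial order.

```python
# Illustrative Savitzky-Golay smoothing of a synthetic, noisy reflectance
# spectrum; window_length must be odd and greater than polyorder. The values
# below are examples, not the authors' settings.
import numpy as np
from scipy.signal import savgol_filter

wavelengths = np.linspace(400, 1000, 300)                   # nm, illustrative band grid
spectrum = np.exp(-((wavelengths - 700.0) / 60.0) ** 2)     # synthetic peak near 700 nm
noisy = spectrum + np.random.default_rng(1).normal(scale=0.02, size=wavelengths.size)

smoothed = savgol_filter(noisy, window_length=11, polyorder=3)
print(float(np.abs(smoothed - spectrum).mean()))            # mean residual after smoothing
```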

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

I have found this paper greatly improved in its clarity and readability, and appreciate the authors' attention to this reviewer's comments.

This reviewer thanks the authors for their rationale not to account for sunglint, adjacency effects and solar zenith angle. The authors indicate (but this is not clear in the Figure 6 caption) that 18 sets of data were 'manually' acquired using Mobley's method, but Mobley recommends a sensor view angle of 40 degrees and a relative azimuth angle to the sun of 135 degrees. Could the authors please record the geometry details in the manuscript for clarity? I believe that these 18 sets of measurements were done 'manually' using this method while not attached to the pole. If not, how was the variability over different time periods (8.00 am-6.00 pm) captured if the instrument was fixed to the pole, with specific regard to the sun zenith angle and relative azimuth angle to the sun?

Additionally, the authors have used the Kutser et al. (2013) paper as another rationale for ignoring sunglint, as shown by Kutser's Figure 1. However, in the same paper, Figures 2-4 (which also show environments comparable to the fish pond) demonstrate the impact of glint under suboptimal conditions (cloud, wind, etc.).

I am interested in how the authors would identify suboptimal imagery from their instrument if glint is never to be considered. I am also not sure that the authors could ensure that the small inland pond would remain optimal permanently.

I thank the authors for adding the additional references, but line 103 in the new version should state that the Gholizadeh et al. paper was a review of the literature, so it would be more correct to indicate that "Gholizadeh et al.'s review demonstrated…"

The authors are correct; the reviewer's eyesight had a bit of trouble with the x-axis legend in Figure 7, and the high reflectivity is of course in the lower wavelengths.

Reviewer 2 Report

TITLE: Retrieval of water quality parameters based on near-surface remote sensing and machine learning algorithm

AUTHORS: Yubo Zhao, Tao Yu, Bingliang Hu, Zhoufeng Zhang, Yuyang Liu, Xiao Liu, Hong Liu, Jiacheng Liu, Xueji Wang and Shuyao Song.

SUMMARY and COMMENTS

The authors tried to improve their manuscript considering the reviewer comments. However, the authors did not adequately respond to the comments or improve the manuscript sufficiently for publication in the Remote Sensing journal. The major comments are as follows:

Major comments:

1.    The introduction section mainly covers research unrelated to the scope of the manuscript by introducing research that relied on satellites and UAVs, whereas the manuscript introduces point-source observation. The literature review presents only two similar methods, with shallow detail (lines 117-121) on their strengths and limitations.

The authors did not respond to the reviewer's inquiries relating to some figures and simply deleted the figures without providing the important information in the manuscript, as follows.

For instance, the authors deleted the figures related to the spectral denoising using the Savitzky-Golay filter shown in Figure 5a-7 without mentioning the related information about the filter, such as window length and polynomial order. The authors only kept one figure in the new Figure 5, with a shallow description of the highlighted areas.

Similarly, the authors simply deleted Figure 9 of the old manuscript without answering the reviewer's inquiries.

2.    The authors compared the near-surface and ASD sensors in terms of spectral response, as shown in Fig. 2c. We can conclude that the responses of the near-surface sensor are underestimated compared to the ASD. The authors did not clarify the following points:

Does this underestimated trend affect the measurements or retrieval?

Why did the authors select spectral response to be shown in Figure 2 instead of reflectance, given that both sensors measure hyperspectral data?

3.    The figures are mostly unprofessional and need to be improved. For example:

- Figure 1 has many images (a, b, c & d), and each one has sub-images with many arrows, making it hard to understand. Additionally, the Fig. 1 caption is unclear.

- There is no unified format among the figures. For instance, some figures have bolded axes (e.g., Fig. 2c, Fig. 6) and others have normal text (e.g., Figs. 10-13).

- Some figures have different font sizes, as in the x-axis and y-axis of Fig. 6.

- Some figures do not have units, such as the y-axes of Fig. 6 & Fig. 7.

- Some figures have poorly descriptive captions, such as Fig. 4 & Fig. 5.

- Figure 9 has a tiny, hard-to-read legend, which could be unified and enlarged across the various figures.

4.    This reviewer considers the research to be of low scientific soundness. A major achievement of the current research may be the developed system; however, this system is not a unique one, and the scope remains limited.

 

Other comments

Lines 94-101: The statements in lines 94-101 ("Aerospace and UAV remote …… and other shortcomings.") represent the limitations of satellites and UAVs. Therefore, those statements should be moved after the paragraph in lines 102-122, as that paragraph reveals the importance of satellites and UAVs. As a result, the reader will understand the need for the proposed system.

 

Line 779: Is there any reference for the "MWIS-3000" sensor?

Line 330, Fig. 6: The reviewer asked the authors, "What is the unit of reflectance?" The authors replied that the unit is "%". However, equations 1 and 2 shown in lines 534-535 reveal that the reflectance should be in sr⁻¹.
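
For context on the two points above (the Mobley-style sky-glint correction raised in the first round and the unit of Rrs), a minimal, purely illustrative calculation is sketched below; the reflectance factor rho and all numerical values are examples, not measurements from the manuscript.

```python
# Illustrative sky-glint-corrected remote-sensing reflectance (Mobley 1999 style):
# Rrs = (Lt - rho * Lsky) / Ed. With radiances in W m^-2 sr^-1 nm^-1 and
# downwelling irradiance in W m^-2 nm^-1, Rrs comes out in sr^-1 (not %).
# rho ~ 0.028 is the value commonly quoted for ~40 deg viewing zenith and
# ~135 deg relative azimuth at low wind speed; the numbers below are invented.
def remote_sensing_reflectance(Lt, Lsky, Ed, rho=0.028):
    """Total above-water radiance minus reflected sky radiance, normalized by Ed."""
    return (Lt - rho * Lsky) / Ed

print(remote_sensing_reflectance(Lt=0.012, Lsky=0.10, Ed=1.2))  # ~0.0077 sr^-1
```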

Comments for author File: Comments.pdf
