Article
Peer-Review Record

Predicting Canopy Chlorophyll Content in Sugarcane Crops Using Machine Learning Algorithms and Spectral Vegetation Indices Derived from UAV Multispectral Imagery

Remote Sens. 2022, 14(5), 1140; https://doi.org/10.3390/rs14051140
by Amarasingam Narmilan 1,2,*, Felipe Gonzalez 1, Arachchige Surantha Ashan Salgadoe 3, Unupen Widanelage Lahiru Madhushanka Kumarasiri 4, Hettiarachchige Asiri Sampageeth Weerasinghe 4 and Buddhika Rasanjana Kulasekara 4
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 22 January 2022 / Revised: 17 February 2022 / Accepted: 23 February 2022 / Published: 25 February 2022
(This article belongs to the Special Issue Disruptive Trends of Earth Observation in Precision Farming)

Round 1

Reviewer 1 Report

This manuscript estimates canopy chlorophyll content in sugarcane crops using seven kinds of machine learning algorithms based on spectral vegetation indices derived from UAV multispectral imagery. The study provides a useful method for real-time monitoring of crop nutrition using UAV techniques. The study is interesting and the experimental design is good, but the sample data collection is not very clear. The results are presented too briefly and are not clear, and several of them raise doubts. The major comments are as follows:

Please give the full names when abbreviations are first presented, e.g., ML (L113), BFS and AFS (L180-181), OMF, SRI, Urea, TSP (Table 2).

L237. Some information is missing. How many leaf samples were measured within each sampling site of a subblock? How many samples in total were collected to build the models?

L239. Please give the accuracy of the Magellan Triton 2000 handheld GPS receiver. How did you match the ground samples with the UAV multispectral data?

L258. The flight time (12.30 pm to 02.00 pm) stated in L258 does not match the time given in Fig. 3. Please check it.

L264. The flight altitude above ground, speed, and ground sample distance shown in Table 3 differ from the parameters shown in Fig. 3. Please check them.

L274. Why was a subblock size of 1.5 × 1.5 m selected? It seems too large to match the ground measurements of chlorophyll well with the average VIs inside the subblock.

Section 3.2. Only 15 VIs are shown in Fig. 6, although 24 VIs were calculated in Table 4. Please add the relationships between the remaining VIs (such as EXGR) and CHL to Fig. 6.

L357. How were the 15 VIs selected from the 24 VIs? Please explain the AFS method in detail.

Table 5. How many samples were used for training and validation, respectively? Although this is presented in L427, I still suggest adding the information in Section 2.6.

Table 5, Figure 7, and Lines 23-24. The R² values from the SVR and KNN algorithms were smaller than those from the other algorithms. Fig. 6 shows very high correlations between the VIs and CHL, so the differences in R² and RMSE among the ML models should not be as large as those shown in Table 5. Please discuss clearly why the accuracy of the SVR and KNN algorithms was relatively low.
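
One checkable cause worth discussing here (an editor's sketch with synthetic data, not a claim about the authors' pipeline): SVR and KNN are distance-based and therefore sensitive to feature scaling, whereas tree ensembles such as RF and XGBoost are not. If the VIs were fed to the models unscaled, the scale-sensitive models would underperform for that reason alone. A minimal NumPy illustration with a hypothetical two-feature dataset:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Plain k-nearest-neighbours regression: average the targets of
    the k Euclidean-nearest training points."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

def r2(y, yhat):
    """Coefficient of determination."""
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

rng = np.random.default_rng(1)
n = 400
signal = rng.normal(0.0, 1.0, n)        # informative feature (e.g. one VI)
noise = rng.normal(0.0, 1000.0, n)      # irrelevant feature on a huge scale
X = np.column_stack([signal, noise])
y = signal                               # target depends only on the signal

X_tr, X_te = X[:300], X[300:]
y_tr, y_te = y[:300], y[300:]

# Unscaled: distances are dominated by the large-scale noise column,
# so KNN picks essentially random neighbours and R² collapses.
r2_raw = r2(y_te, knn_predict(X_tr, y_tr, X_te))

# Standardised: both columns contribute equally, neighbours are found
# along the informative axis, and R² recovers.
mu, sd = X_tr.mean(axis=0), X_tr.std(axis=0)
r2_std = r2(y_te, knn_predict((X_tr - mu) / sd, y_tr, (X_te - mu) / sd))
print(round(r2_raw, 2), round(r2_std, 2))
```

The same scale sensitivity applies to kernel distances in SVR, which is one reason a discussion of preprocessing would strengthen the comparison across models.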

Line 435. "Figure 7" should be "Figure 8". There is also a major doubt: the RMSE is 0.41 for BFS and 3.18 for AFS for the KNN algorithm. Why do the RMSEs for BFS and AFS differ so greatly for the KNN algorithm when the estimated CHL values from BFS and AFS are very similar? Please check it.
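
The discrepancy the reviewer flags is easy to check mechanically. A minimal sketch of recomputing RMSE from observed and predicted vectors (the SPAD values below are made up for illustration, not the study's data):

```python
import numpy as np

def rmse(observed, predicted):
    """Root-mean-square error between two equal-length vectors."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((observed - predicted) ** 2)))

# Hypothetical SPAD readings (illustrative only):
observed = [42.1, 38.5, 40.2, 36.9]
pred_bfs = [41.8, 39.0, 39.7, 37.5]
pred_afs = [41.9, 38.8, 39.9, 37.3]

# If the BFS and AFS predictions are this similar, their RMSEs against
# the same observations cannot differ by an order of magnitude.
print(rmse(observed, pred_bfs), rmse(observed, pred_afs))
```
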

L443. "Figure 8" should be "Figure 9".

Author Response

Dear Reviewer,

Thank you for your valuable feedback on my research work. I have addressed almost all the comments based on your feedback. If further updates are needed, please provide additional comments.

Please see the attachment

Thanks and regards 

Author Response File: Author Response.docx

Reviewer 2 Report

Summary

The manuscript 'remotesensing-1587286' by Amarasingam et al. provides a framework for inferring the chlorophyll content in sugarcane crops at the canopy level using UAV imagery and spectral vegetation indices processed with multiple machine learning algorithms.

Broad comments

In my opinion, the manuscript fits the goals of the Remote Sensing journal, may be of interest to a vast community of users, and deserves to be published in this journal.

The introduction is pertinent and based on interesting papers.
The procedure is described in detail and gives sufficient information on the study logic. The manuscript is well written and well documented, and it can be published in Remote Sensing after small improvements (minor revision).

Minor comments

  1. Section 1.2: the equations and parameters relevant to the machine learning algorithms used are missing; for example (L131-133), "Support Vector Regression (SVR) is based on the same premise as SVM, but it is used to solve regression problems." does not seem enough to describe the SVM model;
  2. L140: "K-Nearest Neighbors (KNN) regression is a non-parametric technique..."; the same comment applies here, as well as to SVM, RF, ...;
  3. L 148-150: "Hoeppner et al. (2020) tested the accuracy of various statistical models (narrowband VIs), partial least squares regression (PLSR), and RF in predicting chlorophyll content using airborne hyperspectral data in a temperate mixed forest in Germany's Bavarian Forest National Park." It doesn't seem relevant because the target is forestry;
  4. Section 2.3: there is no mention of the near contemporaneity between chlorophyll measurements and UAV acquisitions;
  5. Figure 3: Flight mission 11:00 am- 12:00 noon, but (L257-258) "A DJI P4 multispectral system was used to conduct a UAV flight mission on a sunny day between 12.30 pm and 02.00 pm";
  6. L 277: please add reference for QGIS (QGIS.org, 2022. QGIS Geographic Information System. QGIS Association. http://www.qgis.org);
  7. Table 4: the references column shows the works in which the indices have been used in studies similar to yours; this should be specified;
  8. L 282-283: a theoretical hint is missing as to why a feature selection is made;
  9. Figure 6: if the goal is to show the selection of the vegetation indices with higher correlation (> 0.5) with the chlorophyll content, maybe just the first row is enough? If the figure is to remain unchanged, shouldn't the correlation between the vegetation indices also be considered, so that only real, non-redundant information is retained? (e.g. NDVI and NDRE have a correlation of 0.99, but NDRE has the higher correlation with CHL, so it is NDRE that I would choose, eliminating NDVI);
  10. Table 5: the terms "training" and "validation" appear here for the first time but are only explained later, in Section 4.2 (L427-428); they should be explained here, also indicating how the starting dataset was split into training and validation sets;
  11. Figure 7: these plots should be greatly improved: the statistics are often not visible, and the panels for the XGB model (AFS and BFS methods) should be swapped to maintain the alignment of the figures (BFS in the left column, AFS in the right).
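
The selection rule proposed in comment 9 (rank VIs by correlation with CHL, then drop any VI that is nearly collinear with one already kept) can be sketched as follows. The data, index names, and thresholds below are illustrative, not taken from the manuscript:

```python
import numpy as np

def corr(a, b):
    """Pearson correlation between two 1-D arrays."""
    return float(np.corrcoef(a, b)[0, 1])

def select_nonredundant(features, target, corr_floor=0.5, redundancy_cap=0.95):
    """Greedy redundancy filter: rank features by |correlation with the
    target|, then keep a feature only if it clears corr_floor and is not
    nearly collinear (|corr| >= redundancy_cap) with a stronger feature
    that was already kept."""
    ranked = sorted(features, key=lambda n: abs(corr(features[n], target)),
                    reverse=True)
    kept = []
    for name in ranked:
        if abs(corr(features[name], target)) <= corr_floor:
            continue
        if all(abs(corr(features[name], features[k])) < redundancy_cap
               for k in kept):
            kept.append(name)
    return kept

# Toy data mimicking the NDVI/NDRE example: NDRE tracks CHL closely,
# NDVI is a noisier near-copy of NDRE, NOISE is unrelated.
rng = np.random.default_rng(0)
chl = rng.normal(40.0, 5.0, 200)
ndre = 0.02 * chl + rng.normal(0.0, 0.01, 200)
ndvi = ndre + rng.normal(0.0, 0.02, 200)
features = {"NDRE": ndre, "NDVI": ndvi, "NOISE": rng.normal(0.0, 1.0, 200)}

print(select_nonredundant(features, chl))  # NDVI dropped as redundant with NDRE
```

Under these synthetic correlations the filter keeps only NDRE, matching the reviewer's NDVI/NDRE reasoning.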

Kind Regards

Author Response

Dear Reviewer,

Thank you for your valuable feedback on my research work. I have addressed almost all the comments based on your feedback. If further updates are needed, please provide additional comments.

Please see the attachment

Thanks and regards 

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

The authors have revised most of the issues, but the RMSE in Figure 8 (AFS and BFS for the KNN algorithm) is still in doubt. Judging from the estimated SPAD from both AFS and BFS for the KNN algorithm shown in Figure 8, the difference between them should not be so great, and the RMSE for AFS appears to be smaller than that for BFS, which is the opposite of the result reported in this study. Please check it again and provide the original dataset (estimated SPAD from both AFS and BFS for the KNN algorithm, and the observed SPAD).

Author Response

Dear Reviewer

I found a mistake in the value: the actual RMSE value is 3.42, but I had reported 0.41. I ran the code again and checked it. I have therefore modified Figure 7, Figure 8 and Table 6. Please check the revised manuscript.

For confirmation, I am sharing the estimated SPAD from both AFS and BFS for the KNN algorithm, along with the observed SPAD. Please check it for your clarification. You can find the KNN code at line number 636.

Thank you for finding the mistake in my data. It occurred during the transfer of the data from the code to the paper.

If you have any further clarification, please let me know.

Please see the attachment

Thank you

Author Response File: Author Response.docx
