Article
Peer-Review Record

Improved Crop Biomass Algorithm with Piecewise Function (iCBA-PF) for Maize Using Multi-Source UAV Data

by Lin Meng 1,2,3,†, Dameng Yin 2,3,†, Minghan Cheng 2, Shuaibing Liu 2, Yi Bai 2, Yuan Liu 2,3, Yadong Liu 2,3, Xiao Jia 2,3, Fei Nan 2,3, Yang Song 2,3, Haiying Liu 4,* and Xiuliang Jin 2,3,*
Reviewer 1: Anonymous
Reviewer 3: Anonymous
Reviewer 4:
Submission received: 18 February 2023 / Revised: 29 March 2023 / Accepted: 5 April 2023 / Published: 8 April 2023

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Although I am not a native English speaker, I detected some errors in the expression of some sentences. I also think you should find a way to present all these abbreviations; they are very difficult to follow. I added some comments in the PDF.

Comments for author File: Comments.pdf

Author Response

Point-by-point responses are provided in the attachment. Your comments are in black font, our responses are in blue font, and the changes made in the manuscript are marked in red.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The authors introduce a method to estimate above-ground biomass (AGB) in maize crops by testing and combining previous methods for AGB estimation and separating (piecewise) the maize growth period into two stages: pre-heading and post-heading.

The introduction is complete and covers the state of the art on the subject.

The new method introduced is simple, but it improves the state of the art in AGB estimation on maize crops.

The methodology of the manuscript has strengths, such as the thoroughness of the evaluation of all possible methods to compute AGB on maize, as well as of many vegetation indexes (VIs) for RGB and multispectral drone imagery, especially in challenging the previous view that RGB imagery is superior to multispectral imagery (MS) for this task.

There are weaknesses in the methodology that are detailed in the specific comments.

Overall, the paper presents novel methods that fill a gap in knowledge (AGB estimation on maize) and may steer further research towards MS and RGB imagery on this and other crops.

Specific comments:

- On page 7 you state "Due to unsolved practical issues, the RGB and MS data collected at 20 m flight altitude (FA) could not be properly mosaicked. Therefore, we used images collected at higher FAs (30 m, 70 m, and 100 m)." However, Table 1 shows FAs of 20 m, 30 m, and 70 m, and does not include 100 m. Also, if there were no RGB and MS orthophotos on July 9, 12, 26, and 31, how could you obtain VIs such as OSAVI2 in Fig. 4, which shows values on July 9, 14, and 27 and August 5, 13, and 21? Did you use specific drone imagery on those dates and in the destructive sampling area? Clarify this confusing statement.

- Page 7, are the three representative maize plants with biomass samples the only maize plants considered? Figure 2b shows 56 plots in total, with many more plants than that. Specify clearly the total number of samples and how they were identified in the MS and RGB images.

- Page 7, you indicate that thresholded VIs were used to remove the soil. Which VIs specifically, and which thresholds?
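For illustration, a minimal sketch of the kind of VI thresholding meant here (the index, the threshold value, and the array names are hypothetical, not taken from the manuscript):

```python
import numpy as np

# Hypothetical normalized R, G, B bands of a plot image (values in [0, 1]).
r, g, b = np.random.rand(3, 100, 100)

# Excess green index (ExG); pixels below a chosen threshold are treated as soil.
exg = 2 * g - r - b
soil_threshold = 0.10          # hypothetical value; the manuscript should state the actual one
vegetation_mask = exg > soil_threshold

# Plot-level VI statistics would then be computed on vegetation pixels only.
print(f"vegetation pixels: {vegetation_mask.sum()}, "
      f"mean ExG over vegetation: {exg[vegetation_mask].mean():.3f}")
```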

- Page 9, why compute the CHM as DSM - DEM? Why not use the remaining non-ground point cloud, obtained after filtering out the ground points, to compute the CHM of the plants? Why bother computing a DEM and a DSM, which leads to zeroes in the CHM?
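For reference, the DSM-minus-DEM approach referred to here amounts to the per-pixel difference below (a minimal sketch with synthetic arrays; in practice the two rasters would be read from the photogrammetric products, e.g. with rasterio):

```python
import numpy as np

# Synthetic digital elevation model (bare ground) and digital surface model (top of canopy).
dem = np.full((50, 50), 100.0)                      # ground elevation in metres
dsm = dem + np.random.uniform(0.0, 2.5, dem.shape)  # canopy adds up to ~2.5 m

# Canopy height model: per-pixel difference, clipped so bare-soil pixels become 0, not negative.
chm = np.clip(dsm - dem, 0.0, None)

print(f"mean canopy height: {chm.mean():.2f} m, zero-height pixels: {int((chm == 0).sum())}")
```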

- Figure 4. I understand there are two linear regressions on AGB vs. GDD: one for b and another for k. However, according to the color codes for each date in Figures 4a, 4b, 4d and 4e, the color used for b would correspond to August 13 and the color used for k would correspond to August 21, which is confusing; please use other colors for the k and b regressions. Also explain: are there 54 x 3 points on each graph, corresponding to each date? Include in the figure caption what a, b, c, d, e, and f are.

- Page 11. First define the whole set: is it 54 x 3 samples on each date? If you use an 8:2 ratio, there would be 130 samples for calibration and the remaining 32 samples for validation, and then you can randomly select 32 samples out of 162. Is this correct? If not, explain this clearly.

- Page 11, explain clearly how the calibration and verification datasets were selected, and how you obtained 20 different calibration and validation sets. If you use an 8:2 ratio for calibration and validation, there are 45 possible combinations of choosing a proportion of 2 as validation out of 10.
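To make the question concrete, repeated random 8:2 splitting of a 162-sample set (the sample counts discussed above; the data here are synthetic) could look like this:

```python
import numpy as np
from sklearn.model_selection import train_test_split

n_samples = 162                            # e.g. 54 plots x 3 sampling dates, as discussed above
X = np.random.rand(n_samples, 4)           # synthetic VI / CVMVI predictors
y = np.random.rand(n_samples)              # synthetic AGB

for run in range(20):
    # Each run draws a new random split: 32 samples (~2/10) for validation, 130 for calibration.
    X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=32, random_state=run)
    print(f"run {run:2d}: calibration = {len(y_cal)}, validation = {len(y_val)}")
```

Note that nothing in such a loop prevents the 20 validation sets from overlapping, which is the point raised again in Round 2 below.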

- I understand Figure 5c shows a box plot of the difference between VI and CVMVI for MS and RGB. However, there are also two outliers (green points) for MS and one outlier (brown point) for RGB. Explain which VIs correspond to those outliers.

- Figures 6 and 7. Page 12 says "According to the results of correlation analysis, MS_NDRE and RGB_bn were used as input indicator of the CBA method, and MS_CVMOSAVI2 and RGB_CVMbn were used as input indicator for both iCBA and iCBA-PF". However, Figures 6 and 7 indicate "all MS_VI", "all MS_CVMVI", "all RGB_VI", and "all RGB_CVMVI". What does that mean? Did you use all VIs for the MSR and RFR methods? That would not make sense, since, as indicated in Fig. 5, some VIs do not correlate well with AGB.

- You should include figures (perhaps in an appendix) of the measured AGB vs. the VIs and CVMVIs, not just the estimated vs. measured plots, which obscure the methods used.

- Table 3. The models for b have coefficients varying from 2*10^-11 to 5.1096 and from 0.005 to 4.5221. The values are very small in some cases (2*10^-11, 0.005); explain this, as it seems suspicious. Show a graph of the k and b coefficients vs. GDD.

- Figure 8. These results show a large variance in the calibration dataset, and even sections of data separated from each other. I am guessing the RMSE and R2 score are worse on the calibration data; it seems the good results on the validation set could be just due to chance.

- Figure 10 shows variations in RMSE and R2 score across each of the 20 runs. What did you report in the previous figures, mean values? If there are 20 runs, the models should be reported with mean and standard deviation, as is done in cross-validation. Did you use cross-validation on the 20 runs? If not, it is a serious methodological error; if you did, report the results (RMSE and R2) with standard deviations.
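As a point of reference, k-fold cross-validation with mean-and-standard-deviation reporting looks like the sketch below (synthetic data and a plain linear model stand in for the actual predictors and the CBA-type models):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.random((162, 4))                        # synthetic predictors
y = 3.0 * X[:, 0] + rng.normal(0.0, 0.1, 162)   # synthetic AGB response

rmses, r2s = [], []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    y_pred = model.predict(X[test_idx])
    rmses.append(np.sqrt(mean_squared_error(y[test_idx], y_pred)))
    r2s.append(r2_score(y[test_idx], y_pred))

# Report each metric as mean +/- standard deviation across the folds.
print(f"RMSE = {np.mean(rmses):.3f} +/- {np.std(rmses):.3f}")
print(f"R2   = {np.mean(r2s):.3f} +/- {np.std(r2s):.3f}")
```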

- Figure 11 shows AGB on the whole 54 plots. Which orthophoto did you use? Report its flight height and resolution.

- Page 21, how did you segment the regions used for ground sampling on the UAV orthophotos, given that the dates of ground sampling and image acquisition did not match each other?

- The English should use the present tense, not the past tense. For instance, the sentence "Therefore, it was necessary to verify the applicability of the method on other crops in the future." on page 21 is confusing: you cannot say it was necessary to verify the applicability, because you did not do that. What you want to say is "Therefore, it is necessary to verify the applicability of the method on other crops in the future." Since nobody has done that yet, it is something to do in the future.

- The manuscript has no line numbers, line numbers must be used to help reviewers cite parts of the manuscript.

Author Response

Point-by-point responses are provided in the attachment. Your comments are in black font, our responses are in blue font, and the changes made in the manuscript are marked in red.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

The article in question is within the scope of the intended journal, since it encompasses areas of interest for agriculture in general, but it has some compaction problems.

This article follows the journal's template, but needs some adaptations, especially with regard to making the study presented easier to understand.

It is a good and interesting work, but the language and the explanation of the subject need improvement, especially with regard to the results and conclusions obtained in view of the initial objectives of the study.

Some suggestions are provided as comments in the body of the work.

The work is of sufficient quality to be approved, but requires some corrections and adjustments.

Comments for author File: Comments.pdf

Author Response

Point-by-point responses are provided in the attachment. Your comments are in black font, our responses are in blue font, and the changes made in the manuscript are marked in red.

Author Response File: Author Response.pdf

Reviewer 4 Report

Comments and Suggestions for Authors

The manuscript entitled “Improved Crop Biomass Algorithm with Piecewise Function (iCBA-PF) for Maize using Multi-source UAV Data” describes the use of UAV multispectral data to estimate crop AGB and improve the estimation methodology. The manuscript is interesting and of broad interest to journal readers in the fields of crop science and UAV remote sensing, and it provides useful information for crop biomass monitoring. However, the manuscript needs some revisions so that the work can be understood better. I summarize my comments below.

1. Although one of the main subjects of the study is maize and its cultivation, it is not very clear how the resulting data can be used in agriculture and how they can be useful. This point needs to be developed.

2. The study specifies the number of samples allocated for the accuracy assessment, but it does not explain how the evaluation was made. This is a deficiency. The researchers should explain these data using kappa statistics and present them in tabular form.
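For reference, kappa is defined for categorical agreement, so applying it here would require first binning the continuous AGB values into classes; a minimal sketch under that assumption (the bin edges and data are hypothetical):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
measured = rng.uniform(0.0, 20.0, 162)              # synthetic measured AGB (t/ha)
estimated = measured + rng.normal(0.0, 2.0, 162)    # synthetic estimated AGB

# Cohen's kappa needs class labels, so bin AGB into hypothetical low/medium/high classes.
bins = [0.0, 7.0, 14.0]
measured_cls = np.digitize(measured, bins)
estimated_cls = np.digitize(estimated, bins)

print(f"Cohen's kappa on the binned AGB classes: {cohen_kappa_score(measured_cls, estimated_cls):.3f}")
```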

3. The conclusion section should state how the obtained results can be useful in the agricultural field and how they can provide convenience and advantages to the people who will use these data.

Author Response

Point-by-point responses are provided in the attachment. Your comments are in black font, our responses are in blue font, and the changes made in the manuscript are marked in red.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

I have only one remaining comment for the authors after reviewing the new manuscript: in cross-validation, the test set is different on each run and, across the runs, the test sets cover the whole dataset. The way you performed the 20 runs does not guarantee that the last 67 ordered random numbers taken as the test set do not overlap from one run to the next.
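To make the distinction concrete: in k-fold cross-validation the test folds are pairwise disjoint and together cover the whole dataset, unlike repeated random splits. A minimal sketch (the total sample count is a placeholder chosen so that each fold holds 67 samples):

```python
import numpy as np
from sklearn.model_selection import KFold

n_samples = 335            # placeholder total; 5 folds of 67 test samples each
kf = KFold(n_splits=5, shuffle=True, random_state=0)
test_folds = [set(test_idx) for _, test_idx in kf.split(np.arange(n_samples))]

# The test folds are pairwise disjoint and their union covers every sample exactly once.
assert all(a.isdisjoint(b) for i, a in enumerate(test_folds) for b in test_folds[i + 1:])
assert set().union(*test_folds) == set(range(n_samples))
print([len(fold) for fold in test_folds])   # [67, 67, 67, 67, 67]
```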

Author Response

We added some experimental content to demonstrate our results. However, we did not include it in the manuscript.

Author Response File: Author Response.pdf

Reviewer 4 Report

Comments and Suggestions for Authors

The researchers have made the required corrections to the article as far as possible.

Author Response

Your suggestions are very important for us to improve our work. Thank you again for your valuable suggestions.
