Article
Peer-Review Record

High-Resolution Flowering Index for Canola Yield Modelling

Remote Sens. 2022, 14(18), 4464; https://doi.org/10.3390/rs14184464
by Hansanee Fernando, Thuan Ha, Anjika Attanayake, Dilshan Benaragama, Kwabena Abrefa Nketia, Olakorede Kanmi-Obembe and Steven J. Shirtliffe *
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 13 June 2022 / Revised: 26 August 2022 / Accepted: 6 September 2022 / Published: 7 September 2022
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

Round 1

Reviewer 1 Report

Dear authors

The subject proposed in this paper is very interesting, as it concerns the automation of canola yield prediction using high-resolution RGB images combined with new vegetation indices (VIs).

The main drawback, in my opinion, is the lack of information given about the creation of the HrFI index, which appears very discriminating and pertinent. You give the equation used to construct it, but no explanation of how it was derived.

Moreover, as the colours of the different classes are very close to one another, why not first apply a colour-space conversion? The Lab (CIELAB) space should be relevant, since canola flowers are yellow.
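A minimal sketch of this colour-space suggestion, assuming an RGB plot tile loaded with scikit-image; the file name and the b* cut-off are hypothetical placeholders, not values from the manuscript:

```python
# Sketch only: convert an RGB tile to CIELAB and threshold the b* (blue-yellow)
# channel, where yellow canola flowers take high positive values.
from skimage import io, color

rgb = io.imread("plot_tile.png")[:, :, :3] / 255.0  # hypothetical tile, scaled to [0, 1]
lab = color.rgb2lab(rgb)                            # L* in [0, 100]; a*, b* roughly [-128, 127]

b_star = lab[:, :, 2]                               # b* > 0 tends toward yellow, < 0 toward blue
flower_mask = b_star > 40                           # illustrative cut-off; would need tuning
print(f"yellow-pixel fraction: {flower_mask.mean():.3f}")
```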

Finally, you need to better explain the relationship between canola yield and the results you obtained, as the agronomic context is not clearly defined.

Other remarks/corrections/questions are directly written in the attached file.

Comments for author File: Comments.pdf

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 2 Report

This study analysed and compared the performance of four spectral indices derived from high-resolution UAV RGB images for recognizing yellow canola flower pixels and developing seed yield prediction models. The results showed that, among the four indices, HrFI and MYI were the better predictors of canola yield. The study falls well within the scope of this journal, and its results and analyses are credible. I have some concerns, listed below, and think the manuscript needs some modifications before publication.

(1) The performance metric Pearson's correlation coefficient (r) should be replaced with the coefficient of determination (R2); R2 is the more common metric for assessing model performance.

(2) In the ‘Model development’ section, Models 1 and 2 were developed as non-linear three-parameter asymptotic regression models, while Models 3 and 4 were simple linear regression models; why? Are there large differences between the two model types across the three flowering periods? It may be better to use the non-linear three-parameter regression method for all three flowering periods (see the regression sketch after these comments).

(3) For Table 7, it is necessary to show the R2 and RMSE results of the models for training and validation data in one table.

(4) For Figures 7, 8, 9, and 10, please add the model evaluation metrics (R2, RMSE) to each figure.

(5) Line 293: the numbering of Table 4 is broken; the placeholder text "Error! No text of specified style in document" should be corrected.
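Regarding comment (2), a minimal sketch of the two model families being compared, fitted with SciPy. The asymptotic parameterisation y = a - (a - b)*exp(-c*x) is one common three-parameter form and is assumed here; the flowering-index and yield values are illustrative, not data from the study:

```python
# Sketch: three-parameter asymptotic regression versus simple linear regression
# of plot yield against a flowering index.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def asymptotic(x, a, b, c):
    # a: upper asymptote (plateau yield), b: value at x = 0, c: rate constant
    return a - (a - b) * np.exp(-c * x)

# hypothetical flowering-index values and plot yields (kg/ha) for illustration
x = np.array([0.05, 0.10, 0.18, 0.25, 0.32, 0.40, 0.48])
y = np.array([900., 1400., 2100., 2500., 2750., 2850., 2900.])

popt, _ = curve_fit(asymptotic, x, y, p0=[3000., 500., 5.])
lin = linregress(x, y)

print("asymptotic parameters (a, b, c):", popt)
print("linear slope, intercept:", lin.slope, lin.intercept)
```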

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 3 Report

154 - were the Phase One data in reflectance or digital numbers?

156 - RBNI and NDYI are the only indices that include a normalization term (i.e., the total brightness of all bands is used as a denominator to reduce the influence of illumination such as shadows).
For HrFI there is no normalization so it is surprising that it neutralizes shadows. Please cite literature that discusses band normalization with respect to illumination issues such as shadows and tie it into your lit review.
This is important for interpreting your results (additional comments below) from Section 3.1, because NDYI is the only index image that does not show shadows cast to the right of the canopy.
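To illustrate the normalization point numerically (a sketch with arbitrary digital numbers, not data from the study): a multiplicative shading factor cancels in a normalized ratio such as NDYI = (green - blue)/(green + blue), but not in a plain band difference:

```python
# Demonstration that a multiplicative shade factor cancels in a normalized index.
green, blue = 120.0, 60.0          # sunlit canopy pixel (hypothetical DNs)
shade = 0.5                        # multiplicative shading factor

def ndyi(g, b):
    return (g - b) / (g + b)

print(ndyi(green, blue))                     # 0.333...
print(ndyi(shade * green, shade * blue))     # identical: shade cancels in the ratio
print(green - blue)                          # 60.0
print(shade * green - shade * blue)          # 30.0: un-normalized difference is halved
```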

Please include a rationale or method, such as a physical basis, for the newly proposed indices

171 - the thresholding process is not reproducible so it is difficult to determine whether the results are due to index suitability or the effectiveness of a chosen threshold for a given index
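One reproducible, data-driven alternative to a hand-picked cut-off would be histogram-based threshold selection such as Otsu's method. The sketch below is an illustration of that idea only, not the authors' procedure, and the index raster is a placeholder:

```python
# Sketch: select a threshold from the index histogram instead of choosing it by hand.
import numpy as np
from skimage.filters import threshold_otsu

index_image = np.random.rand(200, 200)   # placeholder for an HrFI/NDYI/RBNI/MYI raster
t = threshold_otsu(index_image)          # data-driven threshold, reproducible per image
flower_mask = index_image > t
print(f"Otsu threshold: {t:.3f}, flower fraction: {flower_mask.mean():.3f}")
```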

201-204 - Your goal is to estimate the flower pixel area. Flower pixel area does not account for how many layers of flowers there are per pixel (i.e. flower area index) so you effectively measure the fraction of yellow pixels per unit ground area. Did you check for correspondence between the quantity you estimated, the fraction of yellow ground cover, with the fraction of green ground cover? That is, does estimating fraction of yellow cover provide any new information relative to fraction of green cover?
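The correspondence check asked for here could be as simple as correlating per-plot yellow-cover and green-cover fractions; the values below are hypothetical placeholders, not results from the study:

```python
# Sketch: does fractional yellow (flower) cover add information beyond green cover?
import numpy as np

yellow_cover = np.array([0.12, 0.20, 0.31, 0.28, 0.40, 0.35])  # flower pixels / plot pixels
green_cover  = np.array([0.55, 0.60, 0.72, 0.65, 0.80, 0.70])  # green pixels / plot pixels

r = np.corrcoef(yellow_cover, green_cover)[0, 1]
print(f"Pearson r between yellow and green cover: {r:.2f}")
# A high correlation would suggest flower cover adds little beyond canopy cover;
# a low correlation would support flower cover as an independent predictor.
```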

221-229 - Based on the images in the figure, NDYI does not provide as high of contrast between green plant and yellow plant pixels as the other indices but it does appear to normalize shadows, unlike the proposed indices.

Also, NDYI was developed for regression problems that quantify flower density per pixel, not for classification, although it can be used for classification when combined with another index such as VARI in a time-series approach such as

https://doi.org/10.1002/agg2.20125, 

which also describes how to avoid setting a fixed threshold by comparing the difference between two indices.
Another index was disclosed in the above reference, –0.829*blue + 0.557*green + 0.0479*red. This index should discriminate well between yellow and green or brown pixels, but it will not neutralize shadows because there is no normalization/division in the formula. Granted, that could be addressed by dividing each band's brightness by the sum of all three bands, which should also work for HrFI (see the sketch below).
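A sketch of the quoted index and of the suggested per-pixel brightness normalization, assuming an H x W x 3 array in R, G, B band order (the array itself is a random placeholder):

```python
# Sketch: linear yellowness index (-0.829*B + 0.557*G + 0.0479*R) and a
# brightness-normalized variant in which each band is divided by the band sum.
import numpy as np

def yellowness(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return -0.829 * b + 0.557 * g + 0.0479 * r

def yellowness_normalized(rgb, eps=1e-9):
    total = rgb.sum(axis=-1, keepdims=True) + eps   # per-pixel brightness
    chrom = rgb / total                             # chromatic coordinates, sum to 1
    r, g, b = chrom[..., 0], chrom[..., 1], chrom[..., 2]
    return -0.829 * b + 0.557 * g + 0.0479 * r      # same coefficients on normalized bands

rgb = np.random.rand(4, 4, 3).astype(np.float32)    # placeholder image
print(yellowness(rgb).shape, yellowness_normalized(rgb).shape)
```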


225 - it is not clear that the shadow pixels for RBNI and NDYI appear as flower pixels.

242 - I think the reason NDYI and RBNI did not distinguish between flower and shadow classes is that they both neutralize shadows, so there is no shadow class in the transformed features. Thoughts?

244 - I don't see enough data to support the claim that shadows impact NDYI, but green pixels might
252 - Figure 5 - which physical features in the images correspond with the shadow class? I still don't see evidence in the imagery that NDYI has shadows.
257 - shouldn't the optimal threshold have been selected for each index for the previous section where you compared the indices?
262 - it is surprising that the same threshold applied to different dates, especially since it is not clear whether the Phase One data was radiometrically calibrated to reflectance.

347 - it is surprising to hear a claim that flower pixels can never fall under a shadow. Are all flowers the exact same height? This is equivalent to saying that there is no data in a canopy area that is shaded by another part of the canopy.
378 - if you want to use a single image then how does green vegetation fraction, immediately before flowering, compare with flower fraction at peak flowering?
387 - if you are concerned about environmental effects then wouldn't you need to include air temperature during flowering to account for possible ovule abortion?

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

no suggestions

Author Response

Thank you for the constructive suggestions and comments.

Reviewer 3 Report

Thanks for responding to my comments. However, the material around lines 154-156, concerning shadows and band normalization, still needs to be improved. There is plenty of literature that addresses how band ratios deal with noise common to both bands. For example, the book "Remote Sensing: Models and Methods for Image Processing" (3rd edition) by Robert Schowengerdt addresses the issue in Section 5.3.2, and it is mentioned in many introductory texts. You need to address the impact of band division on image noise, because not all of the indices you used share the same sensitivity to shade.
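As a compact illustration of why band division suppresses multiplicative noise such as shade (a standard identity, not the manuscript's derivation): if a shadow scales both bands by a common factor s, then

```latex
\[
\frac{sG - sB}{sG + sB} = \frac{s\,(G - B)}{s\,(G + B)} = \frac{G - B}{G + B},
\qquad \text{whereas} \qquad
sG - sB = s\,(G - B) \neq G - B \quad \text{for } s \neq 1,
\]
```

so a normalized index such as NDYI is unchanged under shading, while an un-normalized difference is not.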

Since the Phase One data were not calibrated, please ensure that the manuscript uses the term reflectance accurately, including in the equations.

Author Response

 "Please see the attachment."

Author Response File: Author Response.pdf
