Article
Peer-Review Record

Hyperspectral Reflectance Proxies to Diagnose In-Field Fusarium Head Blight in Wheat with Machine Learning

Remote Sens. 2022, 14(12), 2784; https://doi.org/10.3390/rs14122784
by Ghulam Mustafa 1, Hengbiao Zheng 1, Imran Haider Khan 1, Long Tian 1, Haiyan Jia 2, Guoqiang Li 2, Tao Cheng 1, Yongchao Tian 1, Weixing Cao 1, Yan Zhu 1,* and Xia Yao 1,*
Reviewer 1: Anonymous
Submission received: 3 May 2022 / Revised: 3 June 2022 / Accepted: 6 June 2022 / Published: 10 June 2022
(This article belongs to the Special Issue Crop Biophysical Parameters Retrieval Using Remote Sensing Data)

Round 1

Reviewer 1 Report

 

The manuscript by Mustafa et al. describes the development and accuracy estimation of new vegetation indices adjusted for the detection of FHB via machine learning methods. This study covers an important topic, which is a valuable input for the scientific community.

The authors use several methods in this study and especially a lot of abbreviations, which is partly confusing for the reader. Please provide a flow chart/ graphical overview of the method part 2.3 to help the reader to get a clear picture of the order of methodological steps, input parameters, output parameters, number of data points etc. Can this method be applied to other diseases? Which of the used methods are really needed, which are redundant? Are the typical VIs still needed or are the new VIs sufficient for DS estimation?

 

Specific comments:

Title: add reflectance after hyperspectral

L18: grain damage instead of grains damage

L18: remove ‚or‘

L19: How was the data acquired (… wheat plots using stationary/ ground-based spectrometer systems (?) …)? Which wavelength range was used (visible- near-infrared, infra-red etc.?)?

L26: short instead of shot

L28: Does this mean all data? The years have not been mentioned before, only 2 years.

L38-40: Which of the mentioned methods provides the final results?

L63: Please name more references for this progress

L66: What about the loss of resolution compared to HSI?

L72: subject missing

L147: Was the visual observation done by the same person or different people?

L162: What is the footprint size of the target?  How big is the observed target part? How many wheat leaves are observed? What about shaded leaf parts?

L170: 6 spectra per what? Per measurement day?

L205: add the before present study

L206: What is reason for this selection? Why do the authors only use a part of the dataset for this?

L217: What is Image Spectrometer data?

L222: remove first comma

L209-256: It seems there is an enumeration of headers (Continuous wavelets transformation, Wavelet power scalog) etc., but it is unclear what it means and how they are connected.

L274: remove that

L283: provide references

L331: subscript Pi an Oi

L335: Please specify whether DS is determined by visual inspection

L371: What is reason for the selection of those DAIs?

L377: VIS and NIR, SWIR have not been introduced so far

L378: 470 nm is not in the SWIR region!

Figure 4: Please add a color bar

L406: What about the 5. Wavelength at 570 nm? Why is it not included?

L740: subject missing

L758: What about the influence of the ground/soil/shaded leaf parts etc.?

Author Response

  1. The manuscript by Mustafa et al. describes the development and accuracy estimation of new vegetation indices adjusted for the detection of FHB via machine learning methods. This study covers an important topic, which is a valuable input for the scientific community.

Response: Thank you very much for your positive comments. The newly proposed indices show great potential as a proxy approach for detecting FHB at an early stage and for understanding the physical state of crops under field conditions, supporting better management and control of plant diseases. A point-by-point response is provided below. Red text is our response; blue text is the material that has been revised or added in the manuscript.

 

  1. The authors use several methods in this study and especially a lot of abbreviations, which is partly confusing for the reader.

Response: We have added the acronyms list after abstract section.

(L44-52, Page # 1 & 2).

| Acronym | Extended meaning |
|---|---|
| ACA | Average classification accuracy |
| ANOVA | Analysis of variance |
| CA | Classification accuracy |
| CSBs | Consistent spectral bands |
| CWT | Continuous wavelet transform |
| DAI | Days after inoculation |
| DS | Disease severity |
| DWT | Discrete wavelet transform |
| FHB | Fusarium head blight |
| HR | Hyperspectral reflectance |
| HSI | Hyperspectral imaging |
| Knn | K nearest neighbors |
| KnnR | K nearest neighbors regression |
| ML | Machine learning |
| MLC | Machine learning classifiers |
| NDCI | Normalized difference canopy indices |
| NN | Neural net |
| Pn | Photosynthesis rate |
| R2 | Coefficient of determination |
| RF | Random forest |
| RFR | Random forest regression |
| RF-RFE | Random forest recursive feature elimination |
| RMSE | Root mean square error |
| SCC | Spike chlorophyll contents |
| SVM | Support vector machine |
| SVMR | Support vector machine regression |
| SWIR | Shortwave infrared |
| VIP | Variable importance score |
| WFCI1 | Wheat fusarium canopy index 1 |
| WFCI2 | Wheat fusarium canopy index 2 |
| Xgboost | Extreme gradient boost |
 

  1. Please provide a flow chart/ graphical overview of the method part 2.3 to help the reader to get a clear picture of the order of methodological steps, input parameters, output parameters, number of data points etc.

Response: A workflow of the methodological steps has been added (L224-225, Page # 06).

 

Figure 2. Workflow of the study

 

  1. Can this method be applied to other diseases? Which of the used methods are really needed, which are redundant?

Response: Yes, the methodology of extracting features, classifying the disease, and constructing the DS model can be applied to other diseases; all of the methodological steps are required, although the specific results depend on the particular disease. In this manuscript, CWT-RF-RFE was the best-performing pipeline.

((L749-753, Page # 16) Conclusively, a detailed examination of all the selected and developed indices concludes that the newly developed indices have significantly better predictive ability and estimation potential than conventional VIs, using the same dataset and statistical approaches. Moreover, these can be used for the proxy determination of specific diseases.)

 

  1. Are the typical VIs still needed or are the new VIs sufficient for DS estimation?

Response: Along with CWT-RF-RFE, the MLC RF performed best, as described in the conclusion section ((L1110-1113, Page # 21) RF manifested 83.33% CA at a DS of 9.73% and improved to 100% CA at a DS of 10.78% at 8 DAI in the year 2020). The newly developed indices have shown the potential to quantify disease much better than conventional indices (Table 2 and Figures 10-12).

 

  1. Title: add reflectance after hyperspectral

Response: Change has been made in the title.

((L1-2, Page # 01) Hyperspectral reflectance proxies to diagnose in-field fusarium head blight in wheat with machine learning)

 

  1. L18: grain damage instead of grains damage

Response: We have modified it.

((L18, Page # 01) Hyperspectral reflectance (HR) technology as proxy approach to diagnose fusarium head blight (FHB) in wheat crop could be a real-time and non-invasive approach for its in-field management to reduce grain damage.)

 

  1. L18: remove ‚or‘

Response: Change has been made. (L18, Page # 01)

 

  1. L19: How was the data acquired (… wheat plots using stationary/ ground-based spectrometer systems (?) …)? Which wavelength range was used (visible- near-infrared, infra-red etc.?)?

Response: The information has been added.

((L18-19, Page # 01) In-field canopy's non-imaging HR (400-2400 nm using a ground-based spectrometer system), photosynthesis rate (Pn) and disease severity (DS) data were simultaneously acquired from artificially inoculated wheat plots over a period of two years (2020 and 2021) in the field.)

 

  1. L28: Does this mean all data? The years have not been mentioned before, only 2 years.

Response: The sentence has been rewritten and the data are now specified: they comprise two years (2020 and 2021).

((L18-21, Page # 01) In-field canopy's non-imaging HR (400-2400 nm using a ground-based spectrometer system), photosynthesis rate (Pn) and disease severity (DS) data were simultaneously acquired from artificially inoculated wheat plots over a period of two years (2020 and 2021) in the field.)

 

  1. L38-40: Which of the mentioned methods provides the final results?

Response: It was KnnR; the sentence has been revised accordingly.

((L36-37, Page # 01) However, Knn regression analysis with both canopy indices (WFCI1 and WFCI2) manifested maximum accuracy for disease estimation with RMSE of 11.61 and R2 = 0.83.)
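As a check on the reported error metrics, the RMSE and R² used to evaluate the regression models can be computed as follows. This is a minimal Python sketch with illustrative numbers, not the study's data; the function name is ours:

```python
import numpy as np

def rmse_r2(observed, predicted):
    """Root mean square error and coefficient of determination
    between observed and predicted disease severity values."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, r2

# Illustrative DS values only (not the study's data)
obs = [10.0, 20.0, 30.0, 40.0]
pred = [12.0, 18.0, 33.0, 38.0]
rmse, r2 = rmse_r2(obs, pred)
```

Lower RMSE and higher R² together indicate a better-fitting DS estimation model, which is how the KnnR result above is ranked against the other regressors.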

 

  1. L63: Please name more references for this progress

Response: The references have been added.

((L77, Page # 03) Significant progress has been made in using HSI to detect wheat ears [7-9]).

  1. Bauriegel, E.; Giebel, A.; Geyer, M.; Schmidt, U.; Herppich, W.B. Early detection of Fusarium infection in wheat using hyper-spectral imaging. Computers and Electronics in Agriculture 2011, 75, 304-312.
  2. Jin, X.; Jie, L.; Wang, S.; Qi, H.J.; Li, S.W. Classifying wheat hyperspectral pixels of healthy heads and Fusarium head blight disease using a deep neural network in the wild field. Remote Sensing 2018, 10, 395.
  3. Mahlein, A.-K.; Alisaac, E.; Al Masri, A.; Behmann, J.; Dehne, H.-W.; Oerke, E.-C. Comparison and combination of thermal, fluorescence, and hyperspectral imaging for monitoring fusarium head blight of wheat on spikelet scale. Sensors 2019, 19, 2281.

 

  1. L66: What about the loss of resolution compared to HSI?

Response: Non-imaging data do not contain spatial information; the text has been revised.

((L78-80, Page # 03) Alternatively, non-imaging spectrometers can detect crop spectral information at a lower cost and faster speed than HSI, reducing data processing time. However, it does not contain the spatial information [11])

 

  1. L72: subject missing.

Response: Change has been made.

((L84-88, Page # 04) Huang, et al. [12] used a spectrometer to measure wheat ears and extracted derivative features, absorption features, and vegetation indices. Afterwards, they used these features to build effective disease severity (DS) identification models combining Fisher's linear discriminant analysis and support vector machine (SVM))

 

  1. L147: Was the visual observation done by the same person or different people?

Response: Yes, the visual observations in both years of the experiment were made by the same person (the first author of the manuscript).

 

  1. L162: What is the footprint size of the target?  How big is the observed target part? How many wheat leaves are observed? What about shaded leaf parts?

Response: The detailed information has been updated. We considered the average canopy reflectance, assuming a uniform wheat canopy stand. Data were measured at noon to reduce noise and control shadow effects (Figure 1B).

((L183-193, Page # 05) In both years of the experiments, canopy reflectance was measured from two points in each plot, which were marked and measured consistently throughout the experiment; the reflectance measurement points and the DS reading points were the same. Data were always acquired from 10:00 h to 14:00 h (Beijing time), around noon, under clear sunshine and low wind. The distance between the pistol grip and the canopy was kept at approximately 1 m with a 25° field of view, uniform for all plots, while a white reference panel (standard) was fixed at canopy height using an adjustable tripod (Figure 1B). The ASD was re-optimized after every three plots. After averaging the 30 spectral signatures acquired from each plot on each measurement day, three spectra were retained and used for subsequent analysis in both years of experiments (2020 and 2021).)

 

 

  1. L170: 6 spectra per what? Per measurement day?

Response: The detailed information has been added according to your suggestion: three spectra per wheat plot per measurement day.

((L192, Page # 05) After averaging the 30 spectral signatures acquired from each plot on each measurement day, three spectra were retained and used for subsequent analysis in both years of experiments (2020 and 2021).)

 

  1. L205: add the before present study

Response: Change has been made.

 

  1. L206: What is reason for this selection? Why do the authors only use a part of the dataset for this?

Response: We have added the detailed information. Our key focus was early detection of the disease, so we considered the datasets with minimum DS. Moreover, weather fluctuations also affected the temporal study, so we selected the best datasets with minimum DS on days with good sunshine and very low wind.

((L236-239, Page # 06) We selected the consistent spectral bands (CSBs) to develop indices. For CSBs, the datasets of 2020 (9 and 16 DAI) and 2021 (10 and 19 DAI) were used because on these days minimum to moderate DS (11 to 30%) was present; for example, [19] used 30% DS for plant disease index development.)

 

  1. L217: What is Image Spectrometer data?

Response: This sentence has been modified; it was non-imaging spectrometer data.

((L249, Page # 06) Therefore, we applied CWT in our study for reflectance band selection from non-imaging spectrometer data.)

 

  1. L222: remove first comma.

Response: Change has been made.

 

  1. L209-256: It seems there is an enumeration of headers (Continuous wavelets transformation, Wavelet power scalog) etc., but it is unclear what it means and how they are connected.

Response: Changes have been made according to your suggestion. CWT was used to select the sensitive bands from the selected datasets of both years. (L240-296, Page # 07)
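For readers unfamiliar with the technique, the general idea of CWT-based sensitive-band selection from a reflectance spectrum can be sketched as follows. This is a numpy-only illustration using a Mexican hat (Ricker) mother wavelet and a toy spectrum; it is not the authors' implementation, and all function names, parameters, and data here are illustrative:

```python
import numpy as np

def mexican_hat(t, scale):
    """Mexican hat (Ricker) mother wavelet evaluated at offsets t for a given scale."""
    x = t / scale
    return (1 - x**2) * np.exp(-x**2 / 2) * (2 / (np.sqrt(3 * scale) * np.pi**0.25))

def cwt_power(spectrum, scales):
    """Continuous wavelet transform of a 1-D reflectance spectrum.
    Returns a (len(scales), n_bands) array of wavelet power (the scalogram)."""
    n = len(spectrum)
    t = np.arange(-n // 2, n // 2)
    coeffs = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        kernel = mexican_hat(t, s)
        coeffs[i] = np.convolve(spectrum, kernel, mode="same")
    return coeffs**2  # wavelet power

# Toy reflectance curve with a red-edge-like step and a chlorophyll-like dip
bands = np.linspace(400, 900, 256)
refl = 0.3 + 0.2 / (1 + np.exp(-(bands - 700) / 15))
refl -= 0.05 * np.exp(-((bands - 670) ** 2) / (2 * 10**2))

power = cwt_power(refl, scales=[2, 4, 8, 16])
# Candidate sensitive bands = wavelengths with highest power at a chosen scale
best = bands[np.argsort(power[2])[-5:]]
```

Bands that keep appearing among the high-power candidates across datasets would then be treated as the consistent spectral bands (CSBs).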

 

  1. L274: remove that. (L332, Page # 08)

Response: Change has been made.

 

  1. L283: provide references.

Response: The reference has been added.

((L345, Page # 08) However, occasionally, their performance fluctuates for reasons that are hard to pin down, such as data issues (e.g., imbalance, missing values, data size and type), the MLC's basic function, sensor type, scale or location, and computing efficiency [30].)

  1. Lorena, A.C.; Jacintho, L.F.O.; Siqueira, M.F.; De Giovanni, R.; Lohmann, L.G.; De Carvalho, A.C.; Yamamoto, M. Comparing machine learning classifiers in potential distribution modelling. Expert Systems with Applications 2011, 38, 5268-5275.

 

  1. L331: subscript Pi an Oi.

Response: Change has been made. (L399, Page # 09)

 

  1. L335: Please specify whether DS is determined by visual inspection.

Response: We have specified that it was visually inspected.

((L403, Page # 09) Figure 3 depicts the development trends of average DS of FHB, visually observed in wheat canopy plots after inoculation during the growing seasons of 2020 and 2021.)

 

  1. L371: What is reason for the selection of those DAIs?

Response: We have added this information. On these days the weather was ideal for spectral measurements (no wind, rain, or cloud, but clear sunshine) and DS was very low.

((L236-239, Page # 06) From 2020, two datasets (up to 9 and 16 DAI), and from 2021 (10 and 19 DAI), were used because on these days minimum to moderate DS (11 to 30%) was present; for example, [19] used 30% DS for plant disease index development.)

 

  1. L377: VIS and NIR, SWIR have not been introduced so far.

Response: These have been introduced. (L473-474, Page # 10)

 

  1. L378: 470 nm is not in the SWIR region!

Response: We have removed this wavelength and revised it. (L475, Page # 10)

 

  1. Figure 4: Please add a color bar.

Response: We have revised the figure. (L487, Page # 11)


 

  1. L406: What about the 5. Wavelength at 570 nm? Why is it not included?

Response: In the all-possible-combinations analysis under RF-RFE, 570 nm did not show a high VIP score.

 

  1. L740: subject missing.

Response: We have modified it.

((L1022-1023, Page # 20) Likewise, in our study, multivariate models achieved better estimation accuracy when both indices were combined (Figure 11A-C).)

 

  1. L758: What about the influence of the ground/soil/shaded leaf parts etc.?

Response: We made canopy-scale measurements; ground/soil/shaded leaf parts were not considered in this study, and these factors will be investigated in future work. The noon reflectance measurements reduce the shadow effect to some extent, and at this growth stage there is almost no visible ground to affect the reflectance pattern.

Author Response File: Author Response.docx

Reviewer 2 Report

The authors have presented a study on using hyperspectral reflectance for early diagnosis of fusarium head blight. Instead of using conventional indices for diagnosis, they were able to find canopy-based indices that were more sensitive and accurate for detecting the aforementioned plant disease. Machine learning models were compared to know which of the feature sets (or indices) were best for diagnosis. Although the authors presented a good discussion on the results obtained, it is advised to make the paper more concise as there was too much redundant information, especially in the results and discussion sections. Generally, the research work made a good contribution, and the extracted indices will be of interest to other researchers. There are just some issues as raised below:

Major comments:

1.      L485-486: “WFCI1 and WFCI2 has performed better…” What was the most likely reason for this?

2.      Several machine learning models were tested in this study. However, how were they optimized to reach the results (i.e., parameter tuning, etc.)? Are the presented results obtained after optimization? If yes, please also provide a table to show the parameters/variables used for each machine learning model used.

3.      Table 2: Perhaps this table can be summarized as its size was too large.

4.      Figure 7: This figure was found to be difficult to understand. Moreover, how did the authors determine the 5, 8, 10, 12, and 15 DAIs as points/days for comparison?

5.      The text was found to be too verbose. There were a lot of facts in the discussion that were already mentioned in the results.

Minor comments:

1.      Abstract: there were too many abbreviations in this part. It might be better to prepare a nomenclature at a different section so that readers can easily understand the terms.

2.      Introduction paragraph 1: It is better to cut the paragraph into two from “Wheat FHB has been identified…”

3.      The text had too many highfalutin words such as “necessitates”

4.      Generally, the text was a bit confusing since there were too many abbreviated terms.

5.      L382: Is this really 130? Or 1300?

6.      Figure 5: Perhaps put this figure before the equations.

7.      Figure 6: It might be better to sort the x axis labels (in increasing or decreasing ACA) to make it easier to understand

8.      Figure 8: These figures were found to be quite confusing. The R2 and RMSE might also be better to use the same color as the data analysed (i.e., green and blue).

9.      Figure 9 captions: The captions seem to be off since there was “A-F”

Author Response

  1. The authors have presented a study on using hyperspectral reflectance for early diagnosis of fusarium head blight. Instead of using conventional indices for diagnosis, they were able to find canopy-based indices that were more sensitive and accurate for detecting the aforementioned plant disease. Machine learning models were compared to know which of the feature sets (or indices) were best for diagnosis. Although the authors presented a good discussion on the results obtained, it is advised to make the paper more concise as there was too much redundant information, especially in the results and discussion sections. Generally, the research work made a good contribution, and the extracted indices will be of interest to other researchers. There are just some issues as raised below:

Response: Thank you very much for your helpful suggestions and valuable input on our research manuscript. We very much appreciate the comments/suggestions made by the referees. According to the suggestions, the whole manuscript has been revised carefully, and we have incorporated most of the suggestions in our revision. A point-by-point response is provided below. Red text is our response; blue text is the material that has been revised or added in the manuscript.

 

Major comments:

  1. L485-486: “WFCI1 and WFCI2 has performed better…” What was the most likely reason for this?

Response: WFCI1 and WFCI2 performed better because these indices consist of the wavelengths most relevant to FHB, selected on the basis of their consistent performance during feature selection with CWT (continuous wavelet transform). The selected bands are mainly from the visible and near-infrared regions, which reflect the pigment and structural damage during disease invasion. This perspective is elaborated in the discussion section.

 

((L926-939, Page # 19) The consistently selected five spectral bands 401, 460, 570, 786 and 846 nm (Figure 5) are FHB-specific wavebands in this study. The selected wavelengths reveal that canopy damage is due to disruption in chlorophyll and carotenoid (401 and 460 nm), anthocyanin and chlorophyll (570 and 786 nm), and internal structure and water (846 nm). [50] designated the wavelengths 430 and 460 nm as chlorophyll a and chlorophyll b, respectively, and other studies asserted that chlorophyll decomposition and senescence occur in the ranges 400-530 nm and 550-740 nm, respectively [51]. Additionally, a decrease in chlorophyll content results in an increase in reflectance at 417 nm [6]. As a result, 401, 460 and 570 nm are excellent candidates for chlorophyll, while 570 nm characterizes the yellow range (550-650 nm), which is indicative of several pigments, most notably anthocyanin and chlorophyll [52]. Meanwhile, 786 nm in the NIR highlights the importance of the red edge for disease detection, and 846 nm characterizes structural variation in the near-infrared region of canopy reflectance. However, the role of water can also be speculated upon, as blighted spikes typically contain less water than healthy spikes [8,9,49,53].)
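Since WFCI1 and WFCI2 are normalized difference canopy indices built from such selected bands, the basic form of the index computation can be sketched as follows. This is a hedged Python illustration with a toy spectrum; the 786/460 nm pairing is only an example drawn from the consistently selected bands, and the actual WFCI1/WFCI2 formulations are those given in the manuscript:

```python
import numpy as np

def normalized_difference(refl, bands, b1, b2):
    """Generic normalized-difference index (R_b1 - R_b2) / (R_b1 + R_b2)
    from a reflectance spectrum `refl` sampled at wavelengths `bands` (nm)."""
    r1 = np.interp(b1, bands, refl)
    r2 = np.interp(b2, bands, refl)
    return (r1 - r2) / (r1 + r2)

# Toy canopy spectrum with a red-edge-like rise (illustrative only)
bands = np.linspace(400, 900, 501)  # 1 nm sampling
refl = 0.1 + 0.4 / (1 + np.exp(-(bands - 720) / 20))

# Example pairing of two of the consistently selected bands (786 and 460 nm)
idx = normalized_difference(refl, bands, 786, 460)
```

By construction the index is bounded in [-1, 1], which makes values comparable across plots and measurement days.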

 

  1. Several machine learning models were tested in this study. However, how were they optimized to reach the results (i.e., parameter tuning, etc.)? Are the presented results obtained after optimization? If yes, please also provide a table to show the parameters/variables used for each machine learning model used.

Response: The information is given in Section 2.3.3; the data were divided into 70% and 30% for calibration and validation, respectively.

((L340-388, Page # 08 & 09) 2.3.3. Machine learning algorithms)

The MLC details (R code) are given below; these have been added to the Supplementary file (Page # 04).

# Required packages
library(caret); library(e1071); library(randomForest)
library(nnet); library(h2o); library(xgboost)

# Data partition (70% calibration / 30% validation)
set.seed(1234)
ind <- sample(2, nrow(data), replace = TRUE, prob = c(0.7, 0.3))
train <- data[ind == 1, ]
test  <- data[ind == 2, ]

# Knn (K nearest neighbors), tuned over 20 values of k
knnmodel <- train(DS ~ ., data = train, method = 'knn',
                  tuneLength = 20, preProc = c("center", "scale"))

# SVM (support vector machine), tuned over epsilon and cost
svmmodel <- tune(svm, DS ~ ., data = train,
                 ranges = list(epsilon = seq(0, 1, 0.2), cost = 2^(2:7)))

# RF (random forest)
rfmodel <- randomForest(DS ~ ., data = train, ntree = 300, mtry = 4,
                        importance = TRUE, proximity = TRUE)
print(rfmodel)

# NN (neural net): nnet and h2o deep-learning variants
nnmodel <- nnet(DS ~ ., data = train, size = 4,
                decay = 1e-3, skip = FALSE, linout = FALSE, maxit = 1000)
nnmodel <- h2o.deeplearning(DS ~ ., training_frame = as.h2o(train),
                            activation = 'Rectifier', hidden = c(5, 5),
                            epochs = 100, train_samples_per_iteration = -2)
summary(nnmodel)

# Xgboost (extreme gradient boosting)
bst_model <- xgb.train(params = xgb_params, data = train_matrix, nrounds = 1000,
                       watchlist = watchlist, eta = 0.001, max.depth = 3,
                       gamma = 0, subsample = 1, colsample_bytree = 1,
                       missing = NA, set.seed = 333)
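For comparison, the 70/30 partition and a minimal k-nearest-neighbours classifier can be sketched in Python using numpy alone. This uses toy two-class data, not the authors' dataset or pipeline, and all variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(1234)

# Toy feature matrix (two index values per sample) and class labels
# (0 = healthy, 1 = diseased), two well-separated clusters
X = np.vstack([rng.normal(0.2, 0.05, (30, 2)), rng.normal(0.5, 0.05, (30, 2))])
y = np.repeat([0, 1], 30)

# Random ~70/30 partition, mirroring the R `sample(..., prob = c(0.7, 0.3))` step
ind = rng.random(len(y)) < 0.7
Xtr, ytr, Xte, yte = X[ind], y[ind], X[~ind], y[~ind]

def knn_predict(Xtr, ytr, Xte, k=5):
    """Majority-vote k-NN on Euclidean distance."""
    preds = []
    for x in Xte:
        d = np.linalg.norm(Xtr - x, axis=1)
        nearest = ytr[np.argsort(d)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Classification accuracy on the held-out 30%
acc = (knn_predict(Xtr, ytr, Xte) == yte).mean()
```

The classification accuracies reported in Table S2 are computed on the held-out test set in the same way, albeit with the tuned models above.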

 

  1. Table 2: Perhaps this table can be summarized as its size was too large.

Response: We have revised the table, and half of the results have been moved to the Supplementary file (Table S2).

(L668-670, Page # 14)

Table S2. Classification accuracy of the newly developed and conventional spectral indices employing different machine learning classifiers.

 

 

Overall classification accuracy (%) on the test data set (train = 70%, test = 30%).

| Index | MLC | 2020: 5 DAI | 8 DAI | 10 DAI | 12 DAI | 15 DAI | 2021: 6 DAI | 8 DAI | 10 DAI | 17 DAI |
|---|---|---|---|---|---|---|---|---|---|---|
| | DS (%) | 9.73 | 10.78 | 18.00 | 24.12 | 30.12 | 8.21 | 12.41 | 19.34 | 28.13 |
| (A) PSNDc | Knn | 50.00 | 50.00 | 66.67 | 83.33 | 83.33 | 50.00 | 75.00 | 66.67 | 88.89 |
| | RF | 66.67 | 83.33 | 83.33 | 83.33 | 83.33 | 66.67 | 77.78 | 77.78 | 88.89 |
| | SVM | 50.00 | 66.67 | 66.67 | 83.33 | 83.33 | 50.00 | 77.78 | 88.89 | 88.89 |
| | NN | 50.00 | 50.00 | 66.67 | 83.33 | 83.33 | 50.00 | 77.78 | 77.78 | 88.89 |
| | Xgboost | 50.00 | 66.67 | 66.67 | 83.33 | 83.33 | 60.00 | 88.89 | 77.78 | 88.89 |
| (B) SIPI | Knn | 50.00 | 50.00 | 83.33 | 66.67 | 88.88 | 73.33 | 50.00 | 77.78 | 88.88 |
| | RF | 66.67 | 80.00 | 83.33 | 83.33 | 88.88 | 53.33 | 88.89 | 77.78 | 100.0 |
| | SVM | 50.00 | 50.00 | 83.33 | 83.33 | 88.88 | 73.33 | 77.78 | 88.89 | 88.88 |
| | NN | 50.00 | 66.67 | 66.67 | 83.33 | 88.88 | 73.33 | 50.00 | 88.89 | 100.0 |
| | Xgboost | 50.00 | 83.33 | 66.67 | 83.33 | 88.88 | 53.33 | 50.00 | 88.89 | 100.0 |
| (C) LIC1 | Knn | 50.00 | 50.00 | 66.67 | 83.33 | 88.88 | 50.00 | 50.00 | 77.78 | 88.88 |
| | RF | 50.00 | 66.67 | 66.67 | 66.67 | 100.0 | 66.67 | 77.78 | 66.67 | 88.88 |
| | SVM | 66.67 | 50.00 | 83.33 | 83.33 | 88.88 | 50.00 | 66.67 | 66.67 | 88.88 |
| | NN | 50.00 | 66.67 | 66.67 | 66.67 | 88.88 | 73.33 | 66.67 | 77.78 | 88.88 |
| | Xgboost | 50.00 | 66.67 | 66.67 | 66.67 | 100.0 | 53.33 | 66.67 | 66.67 | 88.88 |
| (D) RR4 | Knn | 33.33 | 66.67 | 66.67 | 83.33 | 88.88 | 53.33 | 66.66 | 66.67 | 88.88 |
| | RF | 66.67 | 66.67 | 66.67 | 83.33 | 88.88 | 73.33 | 66.67 | 77.78 | 88.88 |
| | SVM | 53.33 | 66.67 | 66.67 | 83.33 | 88.88 | 53.33 | 66.66 | 88.89 | 88.88 |
| | NN | 33.33 | 66.67 | 66.67 | 66.67 | 88.88 | 53.33 | 66.66 | 73.33 | 88.88 |
| | Xgboost | 66.67 | 66.67 | 66.67 | 66.67 | 88.88 | 53.33 | 66.66 | 88.89 | 88.88 |
| (E) NDWI | Knn | 33.33 | 53.33 | 83.33 | 83.33 | 88.88 | 33.33 | 55.56 | 66.67 | 88.88 |
| | RF | 53.33 | 66.66 | 83.33 | 83.33 | 88.88 | 66.67 | 66.67 | 77.78 | 88.88 |
| | SVM | 33.33 | 53.33 | 53.33 | 83.33 | 88.88 | 33.33 | 77.78 | 66.67 | 88.88 |
| | NN | 53.33 | 66.67 | 83.33 | 83.33 | 88.88 | 53.33 | 88.89 | 66.67 | 88.88 |
| | Xgboost | 53.33 | 66.67 | 83.33 | 83.33 | 100.0 | 66.67 | 55.56 | 77.78 | 88.88 |

DAI is days after inoculation, DS is the disease percentage or severity, Knn is the K-nearest neighbor classifier, RF is the random forest classifier, SVM is the support vector machine classifier, NN is the neural net classifier, and Xgboost is the extreme gradient boost classifier.

 

  1. Figure 7: This figure was found to be difficult to understand. Moreover, how did the authors determine the 5, 8, 10, 12, and 15 DAIs as points/days for comparison?

Response: The figure shows the average classification accuracy of all the MLCs in the two years and their average performance. The day information has been updated in the manuscript. On these days the weather was ideal for spectral measurements (no wind, rain, or cloud, but clear sunshine) and DS was very low.

 

((L236-239, Page # 06) From 2020, two datasets (up to 9 and 16 DAI), and from 2021 (10 and 19 DAI), were used because on these days minimum to moderate DS (11 to 30%) was present; for example, [19] used 30% DS for plant disease index development.)

 

  1. The text was found to be too verbose. There were a lot of facts in the discussion that were already mentioned in the results.

Response: We have revised the discussion section and the manuscript as a whole to reduce verbosity.

Minor comments:

 

  1. Abstract: there were too many abbreviations in this part. It might be better to prepare a nomenclature at a different section so that readers can easily understand the terms.

Response: We have reduced the abbreviations, retaining only the repeatedly used ones. Moreover, a list of abbreviations is provided (L44-52, Page # 1 & 2).

 

  1. Introduction paragraph 1: It is better to cut the paragraph into two from “Wheat FHB has been identified…”

Response: We have modified it. (L64, Page # 2).

 

  1. The text had too many highfalutin words such as “necessitates”

Response: In the revised version, some of these words have been replaced and others deleted.

 

  1. Generally, the text was a bit confusing since there were too many abbreviated terms.

Response: We have revised the text and reduced the abbreviations as much as possible. Moreover, the list of abbreviations is provided (L44-52, Page # 1 & 2).


 

  1. L382: Is this really 130? Or 1300?

Response: We have revised it. It was 1300. (L478, Page # 11).

 

  1. Figure 5: Perhaps put this figure before the equations.

Response: Change has been made. (L560-570, Page # 12).

 

  1. Figure 6: It might be better to sort the x axis labels (in increasing or decreasing ACA) to make it easier to understand

Response: We attempted this change, but because the ACA differs between the two years, the figure was left unchanged. In its present form, the figure shows the performance of each VI across the two years more clearly, and a visual inspection is sufficient to understand it. Sorting the x axis of panel A would make the figure more complex and harder to read.

 

  1. Figure 8: These figures were found to be quite confusing. The R2 and RMSE might also be better to use the same color as the data analysed (i.e., green and blue).

Response: Changes have been made. (L800-811, Page # 17).

 

 

  1. Figure 9 captions: The captions seem to be off since there was “A-F”

Response: Figures 9 and 10 have been merged and revised along with the caption. (L800-811, Page # 17)

Author Response File: Author Response.docx

Round 2

Reviewer 2 Report

I believe that the authors were able to properly revise the manuscript according to the issues that were raised. Therefore, I am recommending the acceptance of this manuscript.
