Peer-Review Record

Image-Based Dynamic Quantification of Aboveground Structure of Sugar Beet in Field

Remote Sens. 2020, 12(2), 269; https://doi.org/10.3390/rs12020269
by Shunfu Xiao 1,2,3,†, Honghong Chai 1,†, Ke Shao 2, Mengyuan Shen 1, Qing Wang 1, Ruili Wang 2, Yang Sui 2 and Yuntao Ma 1,2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 15 December 2019 / Revised: 6 January 2020 / Accepted: 10 January 2020 / Published: 14 January 2020
(This article belongs to the Special Issue Advanced Imaging for Plant Phenotyping)

Round 1

Reviewer 1 Report

Overall, this is an interesting study and a well-written paper (although some editing for English grammar is necessary) on a topic of current interest (structure-from-motion applications). The overall objectives, methods, and results are sound, but I do believe some revision and additional details are needed to improve the structure, organization, and content of the paper, as noted below.

-Extensive editing and correction for English grammar is needed. For example, Line 17 - "still remain challenge especially in field" -> "still remain a challenge, especially in the field"
-Line 20 - "plant point cloud" - This is a bit of an odd phrase. What do you mean exactly? A point cloud representing a single plant?
-Line 24 - "were adopted to explore the relationship with biomass" - What about the other traits?
-Line 44 - "development of 3D technologies" - Can you give some examples of what you mean by these 3-D technologies?
-Line 48 - "laser triangulation ... structured light" - I'm familiar with lidar AKA laser scanning, but not these two terms. Are they really different from lidar or laser scanning? If so, how?
-Line 51 - "very expensive" - Can you cite some references providing more detail about the expense?
-Line 52 - "Structure from motion (SfM), featured as lightweight and cheap" - Again, some reference and detail on the relative cost would be good to include.
-Line 58 - "plant point cloud" - You use the term quite a lot. Make sure it is properly defined.
-Line 58 - You mentioned that many current technologies have trouble producing complete plant point clouds, and for the most part I agree, but did you look into any studies that use UAV or drone-based lidar? There have been many recent studies that have shown that UAV or drone laser scanning can produce point clouds with enough detail to capture all sorts of vegetation:
https://www.mdpi.com/2504-446X/3/2/35
-Lines 62 to 67 - Objectives - Overall your objectives are good and I think your research goals are important to study. My only comment is on your use of the phrase "extract phenotypic trait values" from the point cloud data. In most lidar or point cloud studies like these, there is more of a separation between the metrics that are extracted from the point cloud data (such as maximum lidar height, vegetation point density, standard deviation of lidar point height, etc.) and the physical properties of the environment that are measured in the field (such as vegetation height, vegetation cover, biomass, etc.). There is a strong correlation between the two (the point cloud metrics and the physical measures), but I don't usually see studies claim that they can directly extract "phenotypic traits" or physical measures from the point cloud data. Usually some empirical modeling or machine learning (often random forests) makes that link, but I don't see any mention of empirical modeling or machine learning in your objectives. See the following examples of correlating lidar metrics with physical vegetation measures:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0220096
https://www.mdpi.com/2072-4292/11/7/743/htm
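As a minimal sketch of the kind of empirical link the reviewer describes, the snippet below regresses point-cloud metrics against field-measured biomass with a random forest. All metric names, value ranges, and data here are hypothetical, invented purely for illustration; they are not from the manuscript under review.

```python
# Hypothetical sketch: linking point-cloud metrics (max height, point
# density, height SD) to field-measured biomass via a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120  # hypothetical number of plots

# Synthetic per-plot point-cloud metrics (illustrative names and units)
max_height = rng.uniform(0.2, 1.5, n)       # m
point_density = rng.uniform(50, 500, n)     # points / m^2
height_sd = rng.uniform(0.01, 0.3, n)       # m

X = np.column_stack([max_height, point_density, height_sd])
# Synthetic "measured" biomass, correlated with two metrics plus noise
y = 2.0 * max_height + 0.003 * point_density + rng.normal(0.0, 0.1, n)

# Cross-validated random forest regression: the empirical model that
# maps extracted metrics to the physical trait of interest
model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```

With synthetic data this strongly correlated, the cross-validated R² is high; with real field data the fit quality would of course depend on the metrics chosen and the noise in the field measurements.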
-Line 110 - "magic cube" - I'm not sure what you mean here; can you elaborate? I don't see this in the figure you referred to, Figure 2c.
-Lines 196 to 198 - Overall your methodology section is good and detailed, but this last paragraph, where you describe the estimation of biomass, is much too short and lacks important details.
-Section 3.4 - Overall your results section looks good, but again, your section on biomass estimation is noticeably much shorter and less detailed than the rest of the results.
-Figure 14 - This Figure needs a bit of work to better explain it. It's not quite clear what's the difference between (a) and (b) and then between (c) and (d). You're also missing a caption for (d).
-Line 377 - Your mention of errors associated with your data collection and processing approach is important and I think this should be expanded on.
-Discussion - This section is good. But what about scaling this approach up to larger areas? Your study area is rather small, which is understandable. But how would this approach be used on an entire field? Or could it be? I imagine the point density over a larger area would be more difficult to achieve. Would you expect this same approach to work as well? Is the data collection approach, where you take multiple photos from multiple angles of each plant, practical at the field scale? I imagine a drone laser scanning approach would be more practical here, which could possibly achieve the same point densities over a larger area. Some discussion is needed on this topic of scale. See the reference I mentioned earlier about drone laser scanning.

Author Response

Dear Reviewer:

Thank you for reviewing my manuscript; please see the attachment.

Best wishes,

Shunfu Xiao


Author Response File: Author Response.pdf

Reviewer 2 Report

Overall, this is an excellent study that merits rapid publication. One suggestion for future manuscripts is to have appendices with 3D versions of the important figures.


Minor comments:

P 4, L 114-133: There are many systems to denote vectors and scalars with arrows, italics, and bold. Please keep to one system. Also, what about angle phi?

P 4, L 138: “classes” may be better than “kinds”

P 5, L 146-147: “Histogram of mean G-R difference for all …”

P 5, L 148: software used?

P 5, L 159-169: need to define terms before proceeding. Some traits are expressed differently at high plant density (individual leaf area), which changes the meaning of global. Are global traits at a single-plant scale or a plot scale? Because maximum canopy area and total leaf area are presented several lines before the definitions, I had the misconception that maximum canopy area was equivalent to leaf area index (note that the singular leaf is used, not plural leaves). Re-arranging sentences should be sufficient for clarity.


P 5, L 176: Technically, the petiole is part of the leaf organ, but Fig. 2i shows only the leaf blade. Because this study will be used by plant scientists, please use “leaf blade length”. One or two times should be enough.

P 5, L 184: insert either degrees or radians after first alpha.


P 6, L 205: “selected a G-R difference of 7 as”

P 7, L 220: number or area of leaves? (actually both)

P 8, L 232: “without much overlap”

P 9, L 260: The red dotted lines are the "fitted lines"; for differences between estimated and measured values, show the 1:1 (y = x) line.


P 9, L 262: “are shown”

P 11, L 287-288: start sentence simply “Five points”

P 12, Fig. 14 c,d: x-axis “Hull”

P 12, L 329: “found”

P 13, L 359-368: need discussion about relative value of fresh weight versus dry weight (i.e. Fig 14) as an important plant trait.


Author Response

Dear Reviewer:

Thank you for reviewing my manuscript, please see the attachment.

Best wishes,

Shunfu Xiao

Author Response File: Author Response.pdf
