Article
Peer-Review Record

HeLoDL: Hedgerow Localization Based on Deep Learning

Horticulturae 2023, 9(2), 227; https://doi.org/10.3390/horticulturae9020227
by Yanmei Meng 1, Xulei Zhai 1, Jinlai Zhang 1, Jin Wei 1,*, Jihong Zhu 2 and Tingting Zhang 1
Submission received: 13 November 2022 / Revised: 2 February 2023 / Accepted: 3 February 2023 / Published: 8 February 2023
(This article belongs to the Special Issue Application of Smart Technology and Equipment in Horticulture)

Round 1

Reviewer 1 Report

The article deals with the accurate localization of hedges in 3D space and with algorithms to handle the irregularity of their shapes.

It is interesting, but some parts should be improved and some workflows clarified.

Please find my comments below.

S44: The hedgerow’s point cloud is projected onto a single-channel image with a resolution of 900 × 900; each pixel corresponds to an actual distance of 0.5 cm.
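As context for this comment, the projection described at S44 can be sketched roughly as follows. This is a minimal NumPy sketch under assumed conventions; the function name and the centring of the image on the sensor origin are illustrative, not the authors' code:

```python
import numpy as np

def project_to_bev(points, res=900, pixel_m=0.005):
    """Project an (N, 3) point cloud onto a single-channel
    bird's-eye-view image; each pixel spans 0.5 cm. The centring
    convention here is a hypothetical choice, not from the paper."""
    img = np.zeros((res, res), dtype=np.uint8)
    half = res * pixel_m / 2.0                      # image centred on the origin
    cols = ((points[:, 0] + half) / pixel_m).astype(int)
    rows = ((points[:, 1] + half) / pixel_m).astype(int)
    inside = (cols >= 0) & (cols < res) & (rows >= 0) & (rows < res)
    img[rows[inside], cols[inside]] = 255           # mark occupied pixels
    return img
```

At 900 × 900 pixels and 0.5 cm per pixel, such a view covers a 4.5 m × 4.5 m ground patch.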

 

S120: Livox-Horizon LiDAR

Please provide some more details about the LiDAR, such as:

·        Laser wavelength: 905 nm

·        Range precision (1σ random error @ 20 m): < 2 cm

·        Angular random error (1σ): < 0.05°

·        Beam divergence: 0.03° (horizontal) × 0.28° (vertical)

 

S135: In addition, due to the nature of the near-dense and far-sparse hedge point clouds, different acquisition distances are used to enrich the diversity of the samples and thus improve the robustness of the network model.

Please explain how you collected the data, given that the Livox Horizon cannot accurately detect objects closer than 0.5 m.

 

S138: We sample the data at intervals of 10 frames.

What is the acquisition speed? And the overlap between frames?

 

S139: We performed morphological operations on the data.

Please specify which ones and the algorithms applied.

 

S142: …as image rotation, which were used to augment the dataset

Why image rotation? What do you mean by "augment the dataset"? Why should you augment it?

 

Please explain in detail the workflow followed to create the datasets.

 

S148: We invited three skilled and experienced gardeners.

Do you consider the sample statistically significant? What are the mean and standard deviation of the measurements?

 

 

S162: In the operation of the automatic trimmer, the trimmer needs to be placed directly above the hedge.

How do you address the problem of "lack of information" (shadows) at the top of the hedges due to the lower position of the Livox-Horizon LiDAR?

 

S167: We aim to obtain its center axis (X, Y), radius R and height H.

Essentially, you are performing a "sphere best fit" to the hedge to obtain the parametric data using a CNN, etc. Is that correct?

 

S170: Figure 2.

Please improve the figure. Some parts are repeated later at a larger size; please avoid the repetition.

Steps 2, 3 and 4 are difficult to make out; please improve them.

Steps 4-5: The image is not very clear, and the text below each sub-figure cannot be read. I do not understand how you can get information from the bottom to the top of the hedge without increasing the height of the LiDAR.

 

S175: Extract hedge’s height: We use min-heap sorting to extract the hedge’s height according to the Z coordinate of the point cloud.

Please explain the pre-processing, e.g., do you filter the data to remove outliers?

 

S177: Transform point cloud information into image information.

Why "rasterize" the point cloud? Do you not think that keeping the 3D information of the point cloud would improve the results? You could implement dilation and erosion in 3D as well.

 

S184: Morphological operation: The aerial view obtained in step 2 is dilated and eroded.

Why do you not run filtering on the point cloud to remove outliers? The aerial views have gaps in the center due to the LiDAR position, but they should include the trunk. Is that correct?
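For reference, the dilate-then-erode step quoted at S184 is a morphological closing, and can be sketched as follows. This is an illustrative SciPy version, not the authors' implementation; the 3 × 3 structuring element and the iteration count are assumptions:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def close_gaps(bev, iterations=2):
    """Dilate, then erode a binary bird's-eye-view image to fill
    the small holes a sparse LiDAR scan leaves after rasterization."""
    structure = np.ones((3, 3), dtype=bool)   # 8-connected neighbourhood (assumed)
    mask = bev > 0
    mask = binary_dilation(mask, structure=structure, iterations=iterations)
    mask = binary_erosion(mask, structure=structure, iterations=iterations)
    return mask.astype(np.uint8) * 255
```

Closing fills gaps narrower than the effective structuring element while leaving the outer contour of the hedge largely intact.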

 

S187: Rotation operation: All the images obtained from step 3 are rotated around the center of the marked circle and then used as the data set of the experiment.

Please explain this in detail, including the criteria for the rotation, e.g., image matching, point cloud correlation, data from robot odometry, etc.
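The augmentation quoted at S187 amounts to rotating coordinates about the labelled circle centre. A minimal sketch, with illustrative names rather than the authors' code:

```python
import numpy as np

def rotate_about(points_xy, center, angle_deg):
    """Rotate (N, 2) points about an arbitrary centre; applying this
    for a series of angles yields extra training samples from one
    labelled scan."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return (points_xy - center) @ rot.T + center
```

Because the rotation is about the label centre, the annotated circle parameters (X, Y, R) stay valid for every rotated copy, which is presumably why no odometry or matching criterion is involved.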

 

 

S201: We sort the point cloud Z values from large to small, and the average value of the points with the top 5% of Z values represents the actual height of the hedge.

Please provide a reference for this. How did you arrive at this statement? Is it based on your experience? Please clarify.
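For reference, the rule quoted at S201 can be sketched as follows (illustrative code; the 5% fraction is taken from the quoted sentence):

```python
import numpy as np

def hedge_height(points, top_frac=0.05):
    """Estimate hedge height as the mean Z of the highest
    `top_frac` of points (5% in the quoted sentence)."""
    z = np.sort(points[:, 2])[::-1]          # Z values, descending
    k = max(1, int(len(z) * top_frac))       # use at least one point
    return float(z[:k].mean())
```

Averaging a top fraction instead of taking the single maximum makes the estimate robust to the odd stray high point, which is presumably the motivation behind the rule.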

 

S232: Due to the sparsity of the point cloud, the percentage of pixels of the point cloud on the image is small.

Please provide details. What do you consider "very small"? In Figure 4 there seem to be more than a hundred points.

 

S249+: We rotate the point cloud image around the centre point of our label according to a series of angles.

Why not use navigation data or 3D matching criteria to calculate this rotation?

 

S273: Figure 8.

Please follow the same style as the previous figures, if possible.

 

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

With interest, I read the manuscript. It is appreciated that the manuscript is easy to follow and not too long. The message is clear and of interest to the community. The authors propose a paper titled "HeLoDL: Hedgerow localization based on deep learning". The proposed approach seems promising in terms of computational simplicity and classification accuracy. I would accept it in its present form.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

In the paper "HeLoDL: Hedgerow localization based on deep learning" by Yanmei Meng et al., the authors present a method for determining the center and radius of spherical hedgerows. The method gives good results and is fast and accurate.

The material of the article is of interest to readers and is also practically useful. The literature review is presented in detail. The methods and results are clearly described.

I have a few minor remarks:

1) it is necessary to explain the physical meaning of the quantity H in formula (1);

2) line 373 apparently should refer to Table 3;

3) there is an extra dot at the end of line 421;

4) it is necessary to explain in more detail what is presented in each part of Figure 13.

I think that after these corrections, the article can be published in Horticulturae.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 4 Report

In this article, the authors propose a deep learning approach based on the bird’s-eye view to overcome the irregularity of hedge shapes in automatic pruning, which they call HeLoDL. The authors also report that HeLoDL achieves an accuracy of 90.44% within the error tolerance, greatly exceeding the 61.74% of the state-of-the-art algorithm. The authors present a work with a clear methodology for system development and implementation. I think that the work described in the manuscript is interesting. In general, the work is well structured. The figures and tables are descriptive and sufficient. The references are sufficient and belong to the last 10 years. However, I believe that some details in the study need to be edited before the manuscript can be published. I have listed my comments and suggestions below.

 

Comments and Suggestions:

 

1.      Are any faults and warnings indicated by the system? What are the limitations of this study? I think it would be better if an explanation of these issues could be added to the article.

 

2.      In a real application, how will the system achieve high performance in a hedge arrangement? How should hedges be organized in the garden? Naturally, for an autonomous system to work properly in an open area, the environment in which it operates must be arranged accordingly. I think it would be better if you wrote your suggestions on this subject in the discussion or conclusion section. Of course, this is only a suggestion.

 

3.      I think the work is very important. Thank you for contributing to the scientific literature on the subject.

 

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

Dear authors,

The paper has improved significantly since the previous version. Congratulations!

I have left some comments that could enrich it a bit more.

Regards,

 

Point 5: S142: …as image rotation, which were used to augment the dataset

 

Why image rotation? What do you mean by "augment the dataset"? Why should you augment it?

Please explain in detail the workflow followed to create the datasets.

 

Secondly, the expression "augment the dataset" means exactly to augment and expand the dataset. (S156)

 

I believe the word is confusing. I would use “increase the amount of points in the dataset”, “increase the amount of data”, or similar. "Augment the dataset" is ambiguous, since you can “zoom” to augment as well.

 

 

Point 6: S148: We invited three skilled and experienced gardeners.

 

Do you consider the sample statistically significant? What are the mean and standard deviation of the measurements?

 

Response 6: Skilled gardeners were asked because it is more informative to use their experience as the criterion for determining the circles to be trimmed. We consider the sample statistically significant, since averaging was used to combine the labeling results of the three gardeners; the mean value is thus the final annotation result for each sample. We specifically computed the circle-center coordinate X, the circle-center coordinate Y, and the radius R for each sample, converted these to values in the LiDAR coordinate system, and found their standard deviations. The maximum values of the three standard deviations were 0.0208, 0.2212 and 0.0242, respectively.
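The per-parameter statistics described in this response can be computed in one step of the following shape (illustrative sketch; the array layout across annotators is an assumption):

```python
import numpy as np

def annotation_spread(labels):
    """Mean and standard deviation across annotators.
    `labels` is (n_annotators, 3) with columns X, Y, R."""
    return labels.mean(axis=0), labels.std(axis=0, ddof=0)
```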

 

Please mention it in the paper.

 

 

Point 7: S162: In the operation of the automatic trimmer, the trimmer needs to be placed directly above the hedge.

 

How do you address the problem of "lack of information" (shadows) at the top of the hedges due to the lower position of the Livox-Horizon LiDAR?

 

Response 7: For this problem, I think two aspects need to be considered. The first is to perform positioning at different measurement distances, convert the positioning results to the world coordinate system, and average the positioning results within a certain distance range l to use as the final positioning result. The second is to adjust the installation height h of the LiDAR to ensure that the LiDAR can detect the top of the hedge. Both l and h in the above two options need to be calibrated against the actual scene. In our experiments, the installation height of the LiDAR is 0.8 m, and the range of the collected data is between 1 m and 5 m.

 

Please mention it in the paper.

 

Point 10: S175: Extract hedge’s height: We use min-heap sorting to extract the hedge’s height according to the Z coordinate of the point cloud.

 

Please explain the pre-processing, e.g., do you filter the data to remove outliers?

 

Response 10: This paper focuses on how to obtain localization information for a given hedge point cloud. The point cloud preprocessing, however, belongs to our earlier work, which mainly includes joint LiDAR-camera calibration, hedge detection, frustum generation, ground filtering, pass-through filtering, and point cloud clustering. The preprocessing was therefore done at an earlier stage and is not shown in the paper due to the large amount of content involved. The hedge point cloud in this paper is already the point cloud obtained after filtering out outliers.

 

Please mention it in the paper.

 

Point 14: S201: We sort the point cloud Z values from large to small, and the average value of the points with the top 5% of Z values represents the actual height of the hedge.

 

Please provide a reference for this. How did you arrive at this statement? Is it based on your experience? Please clarify.

 

Response 14:

This parameter is not definitive but is selected based on our experience. It needs to be calibrated for different sizes of hedges and for the point cloud density of the LiDAR, depending on the actual situation. The value of 5% was tested on the data we currently use.

 

Please mention it in the paper.

 

 

 

 

Author Response

Please see the attachment.

Author Response File: Author Response.docx
