Article
Peer-Review Record

Nano Aerial Vehicles for Tree Pollination

Appl. Sci. 2023, 13(7), 4265; https://doi.org/10.3390/app13074265
by Isabel Pinheiro 1,2,*, André Aguiar 1, André Figueiredo 1, Tatiana Pinho 1, António Valente 1,2 and Filipe Santos 1
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Reviewer 4:
Submission received: 24 January 2023 / Revised: 24 March 2023 / Accepted: 26 March 2023 / Published: 28 March 2023
(This article belongs to the Special Issue New Development in Smart Farming for Sustainable Agriculture)

Round 1

Reviewer 1 Report

This study proposes a bee-inspired Nano Aerial Vehicle (NAV) robotic solution to assist bees with pollination. I approve the paper for publication subject to minor revisions.

1. Line 60: please give examples illustrating the specific advantages of NAVs over traditional UAVs.
2. This paper is not the first to propose UAV-based pollination. Please point out in the introduction the advantages of this research over previous similar work.
3. Line 82: have the benefits of helping bees pollinate been quantified with data in previous literature? If so, please add the corresponding references.
4. Being able to detect flowers with a YOLO-based algorithm does not mean that pollination can be performed effectively. When the proposed NAV is used for pollination tasks, its pollination efficiency and its robustness in different environments must be considered under actual use conditions. Please add these aspects to the experiments.
5. Sections 2.3 to 2.7 should be listed as sub-sections of Section 2.2, i.e., 2.2.3-2.2.7.
6. Deformable convolution and the corresponding attention mechanisms have been shown to improve detection performance and could be used for future comparative research. Please add the references listed below to the introduction or future work:
 (1) Deng, L., Chu, H. H., Shi, P., Wang, W., & Kong, X. (2020). Region-based CNN method with deformable modules for visually classifying concrete cracks. Applied Sciences, 10(7), 2528.
 (2) Chu, H., Wang, W., & Deng, L. (2022). Tiny-Crack-Net: A multiscale feature fusion network with attention mechanisms for segmentation of tiny cracks. Computer-Aided Civil and Infrastructure Engineering, 37(14), 1914-1931.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

1. This work presents relevant background in the area of flower detection using NAVs. Developing this prototype is a rather complex task given the interactions between 12 different hardware components and the need to achieve autonomous flight capable of pollination. The authors should describe all hardware components used in the proposed prototype.

2. In Table 5, the authors compare three detectors: YOLOv5, YOLOv7, and YOLOR, using recall, precision, mAP, and F1-score. First, they should justify the choice of these metrics; I also suggest adding others, such as inference time and accuracy.
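For reference, the detection metrics named above all derive from true-positive, false-positive, and false-negative counts. A minimal sketch of how they relate (the counts below are illustrative, not taken from the paper):

```python
def precision(tp, fp):
    # Fraction of predicted detections that are correct.
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of ground-truth objects that were detected.
    return tp / (tp + fn)

def f1_score(p, r):
    # Harmonic mean of precision and recall.
    return 2 * p * r / (p + r)

# Illustrative counts only.
tp, fp, fn = 80, 20, 10
p = precision(tp, fp)
r = recall(tp, fn)
f1 = f1_score(p, r)
```

mAP extends this idea by averaging precision over recall levels (and, in the COCO convention, over IoU thresholds), which is why it is the customary headline metric for detectors rather than plain accuracy.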

3. The authors use only YOLO variants to detect and classify flowers. How do they determine that the predictions are good? Why not apply other models, such as SVM, CNN, or LSTM?

4. The proposed model applies YOLOv7 to detect and classify objects. Is the model good at tracking small objects with high overlap?

5. The challenges of NAVs have not been detailed in the paper; it is very important to highlight the limitations of NAV systems.

6. The authors should explain the importance of the system proposed in their work.

7. The authors must use a consistent format for reference citations.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

Please see the attachment.

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 4 Report

The authors propose the use of NAVs to assist bees in pollination.

The work describes separately hardware and software components, i.e., the schematics of the PCB the authors designed, and the YOLO-based flower identification for the software part.

The results of the hardware design are unclear: did the authors produce a prototype or not? Has it been tested? Does the proposed design work in lab settings?

The results from the identification of flowers present no novelty, and the authors fail to review the literature on this subject. Furthermore, it is unclear how the proposed algorithm would fit on the hardware above, and what kind of solution could be tested in this regard.

Overall, the work is poorly written and needs a careful review of the language. Furthermore, as said above, the authors must link the hardware and software components together, describe potential strategies in this regard, and clarify the actual novel outputs of their work.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 2 Report

The authors have improved their work considerably; however, they did not answer all remarks:

1. Why not apply other models, such as SVM, CNN, or LSTM? The authors should explain this in their work.

2. Why did they choose not to add accuracy?

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

The authors have done a good job with the revisions. In addition, I have a few remaining questions/remarks.

 

1. In the abstract, the specific results of the detection models should be described.

2. In the introduction, this manuscript should cite some additional articles that will be of interest to many, such as:

(1)  Automatic Detection of Pothole Distress in Asphalt Pavement Using Improved Convolutional Neural Networks, https://doi.org/10.3390/rs14163892.

(2)  Identification of the operating position and orientation of a robotic kiwifruit pollinator, https://doi.org/10.1016/j.biosystemseng.2022.07.014.

(3)  Automatic recognition of pavement cracks from combined GPR B-scan and C-scan images using multiscale feature fusion deep neural networks, https://doi.org/10.1016/j.autcon.2022.104698.

3. Please check the figure quality again to ensure it meets the publication requirements.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 4 Report

I thank the authors for their efforts in addressing my concerns. 

However, the submitted work has much room for improvement:

* extensive review of the language is necessary: the text is full of typos and hard-to-read statements

* released dataset: indeed of interest; I have a few questions about it:

** does it contain images of a single flower? 
** how many different images and flowers have been used for training and testing? 
** what is the meaning of the contents of the label files? I can't seem to find any metadata or explanation for those

Please add your response in the text of the paper, as well as in the description of the dataset on Zenodo

* the referenced script https://gitlab.inesctec.pt/agrob/dronebee is not accessible; it requires login

 

Overall, I share the same concerns about the submission stated in my previous reviews. The actual contribution is unclear to me, and the dataset used for training and testing seems rather limited and poorly characterised.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 3

Reviewer 4 Report

Again, I thank the authors for addressing my comments. Still, I would greatly appreciate more elaborate answers.

* the dataset is rather small. Please state in the abstract and at the very beginning of the work that the contribution is limited by this factor

* the authors state that SVM and LSTM are better suited to small datasets than YOLO-based approaches; yet the paper considers only YOLO and does not compare results with, e.g., SVM and LSTM networks. I would say such a comparison is strictly needed

* what is the meaning of the label 'flower' used for annotation in the dataset? What kind of label is that? I am referring to data like the following

--
0 0.5541666666666667 0.37343750000000003 0.1 0.078125
0 0.45 0.38593750000000004 0.10833333333333334 0.07187500000000001
0 0.5770833333333333 0.4703125 0.10416666666666667 0.12812500000000002
0 0.41875 0.521875 0.15416666666666667 0.1375
0 0.42291666666666666 0.721875 0.20416666666666666 0.15625
0 0.5416666666666666 0.6140625000000001 0.125 0.140625
--

that can be found in the 'labels' subfolders. Metadata are needed for publicly available datasets to be actually useful.
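For context, lines of the shape quoted above match the standard YOLO annotation format: a class index followed by the box centre and size, all normalized to [0, 1] relative to the image dimensions. A minimal parser illustrating that reading (the image dimensions are an assumed example, not taken from the dataset):

```python
def parse_yolo_labels(lines, img_w, img_h):
    """Parse YOLO-format label lines ('class xc yc w h', normalized to [0, 1])
    into pixel-space boxes (class, x0, y0, width, height)."""
    boxes = []
    for line in lines:
        cls, xc, yc, w, h = line.split()
        xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
        # Convert normalized centre/size to top-left pixel coordinates.
        x0 = (xc - w / 2) * img_w
        y0 = (yc - h / 2) * img_h
        boxes.append((int(cls), x0, y0, w * img_w, h * img_h))
    return boxes

# Example using the first quoted line, assuming a 640x480 image.
boxes = parse_yolo_labels(
    ["0 0.5541666666666667 0.37343750000000003 0.1 0.078125"], 640, 480
)
```

Under this reading, the leading `0` is the class index, which would map to the single 'flower' class; stating that mapping explicitly in the dataset metadata would resolve the question.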

* other datasets with images of flowers can be found in the literature. It is worth testing the trained network on different datasets to better evaluate the proposed approach

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
