Article
Peer-Review Record

Feasibility Research on Fish Pose Estimation Based on Rotating Box Object Detection

by Bin Lin 1, Kailin Jiang 2, Zhiqi Xu 1, Feiyi Li 1, Jiao Li 1, Chaoli Mou 1, Xinyao Gong 1 and Xuliang Duan 1,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 24 October 2021 / Revised: 17 November 2021 / Accepted: 17 November 2021 / Published: 19 November 2021
(This article belongs to the Section Sustainable Aquaculture)

Round 1

Reviewer 1 Report

The manuscript presents a new protocol for pose estimation of golden crucian carp based on rotating box object detection. The authors compare different computer vision techniques and models and build a dataset for pose estimation of crucian carp.
The work is relevant, and the manuscript deserves to be published, but it must be improved before a positive recommendation. 

English should be revised; several passages are unclear or lost, as noted in lines 104, 226, and 255, among others.
"Angle deformation" with no degree symbol on line 111 is one of the errors present throughout the text.

The legend in Figure 1 should indicate what the different colors mean. 

Avoid expressions like "the following equation", as in lines 220 and 386, since they are misleading.

Author Response

Response to Reviewer 1 Comments

Point 1: English should be revised; several passages are unclear or lost, as noted in lines 104, 226, and 255, among others.

Response 1: We have checked the whole manuscript and revised the unclear expressions. Thank you for your comments on our manuscript.

 

Point 2: "Angle deformation" with no degree symbol on line 111 is one of the errors present throughout the text.

Response 2: We have checked the whole manuscript and corrected these errors. Thank you for your comments on our manuscript.

 

Point 3: The legend in Figure 1 should indicate what the different colors mean. 

Response 3: We had explained the colors in the figure note, but for clearer presentation we have added a legend indicating the meaning of each color. Thank you for your comments on our manuscript.

 

Point 4: Avoid expressions like "the following equation", as in lines 220 and 386, since they are misleading.

Response 4: We have modified the expressions in lines 220 and 386 and checked the descriptions of all formulas throughout the text. Thank you for your comments on our manuscript.

Reviewer 2 Report

The study is well written, and the methodology is described clearly with a good presentation of results and discussion.

Author Response

Response to Reviewer 2 Comments

Point 1: The study is well written, and the methodology is described clearly with a good presentation of results and discussion.

Response 1: Thank you for your approval of our manuscript. We have made the dataset publicly available so that it can be downloaded by anyone, in the hope that it will be helpful to fish pose estimation research: https://figshare.com/articles/figure/posedata/17022443.

Point 2: English language and style are fine/minor spell check required.

Response 2: We have used Grammarly to polish the English. Thank you for your comments on our manuscript.

Reviewer 3 Report

Wrong name Yolo 5 (2 words), not YoloV5 (one word).
Line 47: You Only Look Only (YOLO)[12,13,14] - it is wrong. It should be You Only Look Once.

4.3.2. title is wrong. Comparative analysis or comparison with other methods?

Lines 565-567: "we successively compared the test results of R-CenterNet and R-Yolov5s on the grass gold data set. We found that R-Yolov5s takes priority in indicators such as accuracy and recall rate."
You should present some data which you used to make such conclusions.

I have a PhD student working on YOLO 4. Why did you choose Yolo 5? Is this because it is newer, or are there other reasons?

Which program did you use for marking the images used to train the ANN?

Check references style. 

Are you planning to provide permanent dataset address?

Author Response

Response to Reviewer 3 Comments

Point 1: Wrong name Yolo 5 (2 words), not YoloV5 (one word).
Line 47: You Only Look Only (YOLO)[12,13,14] - it is wrong. It should be You Only Look Once.

Response 1: We have checked the whole manuscript and corrected the naming. Thank you for your comments on our manuscript.

 

Point 2: 4.3.2. title is wrong. Comparative analysis or comparison with other methods?

Response 2: Perhaps our original wording was a little ambiguous. We have revised the title of Section 4.3.2 to "Comparison with other methods". The content of this section discusses why we chose Yolo 5 and CenterNet from among the mainstream object detection models, and why we use DeepPose for pose estimation. Thank you for your comments on our manuscript.

 

Point 3: Lines 565-567: "we successively compared the test results of R-CenterNet and R-Yolov5s on the grass gold data set. We found that R-Yolov5s takes priority in indicators such as accuracy and recall rate." You should present some data which you used to make such conclusions.

Response 3: We apologize; due to our mistake, the discussion at the end of the manuscript did not cite its supporting data. The supporting data for this part come from Table 2, and we have revised the wording of this passage accordingly. Thank you for your comments on our manuscript.
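For reference, the precision (accuracy) and recall indicators discussed here are derived from true-positive, false-positive, and false-negative detection counts in the standard way; a minimal sketch, with illustrative counts that are not taken from Table 2:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Compute detection precision and recall from raw counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example: 90 correct detections, 10 false alarms, 30 missed targets.
p, r = precision_recall(tp=90, fp=10, fn=30)
print(p, r)  # 0.9 0.75
```

Comparing two detectors on the same test set with these two numbers is what allows the "takes priority in indicators such as accuracy and recall rate" conclusion to be checked against Table 2.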

 

Point 4: I have a PhD student working on YOLO 4. Why did you choose Yolo 5? Is this because it is newer, or are there other reasons?

Response 4: We did indeed hesitate at the beginning between Yolo 4 and Yolo 5. As you said, Yolo 5 is newer, but in the model comparison published on GitHub the author of Yolo 5 did not directly compare it with Yolo 4, so we carried out an experimental comparison ourselves; Table 1 gives the comparative data for the two models. On our fish dataset, the performance of Yolo 5 was the best among the mainstream models, so we performed further ablation experiments on Yolo 5, and as Table 3 shows, the results are satisfactory. Although it has never been officially claimed that Yolo 5 is better than Yolo 4, you may have heard of the Global Wheat Detection competition (https://www.kaggle.com/c/global-wheat-detection). Yolo 5 had just appeared at that time, and most of the top ten entries on the leaderboard used it; it was later disqualified due to license issues, but its strong performance had already become apparent. Yolo 5 has now been updated through six versions (https://github.com/ultralytics/yolov5), and it also outperforms Yolo 4 on the fish dataset in our manuscript. Considering all of this, we adopted Yolo 5 and adapted it to the rotating box. Thank you for your comments on our manuscript.

 

Point 5: Which program did you use for marking the images used to train the ANN?

Response 5: In line 164 we describe the use of labelme (https://github.com/wkentaro/labelme) to annotate the data, including how the key points were selected and marked. For training the neural network we use PyTorch, together with the integrated code base mmpose (https://github.com/open-mmlab/mmpose), which provides worked examples. Thank you for your comments on our manuscript.
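To illustrate what the labelme output looks like, each annotated image is saved as a JSON file whose "shapes" list holds labeled point coordinates; a minimal sketch of reading the key points back out (the function name and the example label are illustrative, not taken from our dataset):

```python
import json

def load_keypoints(path: str) -> dict:
    """Read a labelme JSON file and return {label: [(x, y), ...]}."""
    with open(path, encoding="utf-8") as f:
        ann = json.load(f)
    keypoints = {}
    for shape in ann.get("shapes", []):
        # A labelme point annotation stores a single (x, y) pair per shape.
        keypoints.setdefault(shape["label"], []).extend(
            tuple(pt) for pt in shape["points"]
        )
    return keypoints
```

Grouping points by label in this way makes it straightforward to convert the annotations into whatever keypoint format the training framework expects.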

 

Point 6: Check references style. 

Response 6: We have revised all references to follow the GB/T 7714 format. Thank you for your comments on our manuscript.

 

Point 7: Are you planning to provide permanent dataset address?

Response 7: We have made the dataset publicly available so that it can be downloaded by anyone, in the hope that it will be helpful to fish pose estimation research: https://figshare.com/articles/figure/posedata/17022443. The download link is also given at the end of the manuscript, in line 638. Thank you for your comments on our manuscript.
