Article
Peer-Review Record

Analysis of Various Machine Learning Algorithms for Using Drone Images in Livestock Farms

Agriculture 2024, 14(4), 522; https://doi.org/10.3390/agriculture14040522
by Jerry Gao 1,2,†, Charanjit Kaur Bambrah 2, Nidhi Parihar 2, Sharvaree Kshirsagar 2, Sruthi Mallarapu 2, Hailong Yu 3, Jane Wu 4 and Yunyun Yang 3,*,†
Reviewer 1: Anonymous
Submission received: 28 November 2023 / Revised: 27 February 2024 / Accepted: 12 March 2024 / Published: 25 March 2024
(This article belongs to the Section Digital Agriculture)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper implements machine learning analysis technology for surveillance of livestock farms using drones.

Various algorithms for applying machine learning to livestock farms were analyzed, and the YOLOv7 and DeepSort models were used according to the analysis results. However, it is difficult to find originality in the research content.

It would be better to propose the topic as “Analysis of various machine learning algorithms for using drone images in livestock farms.”

Author Response

There are two innovative aspects to the research content:

1) We compare various algorithms and apply them in smart agriculture;

2) We use drones to collect data for analysis and research, applied to smart agriculture.

The topic can be changed to “Analysis of various machine learning algorithms for using drone images in livestock farms.”

Reviewer 2 Report

Comments and Suggestions for Authors

[1] The Abstract must be re-written in order to eliminate the use of the term UAV.

[2] “UAV” in the Abstract must be given in its full form at first use.

[3] There are many text lines written in red instead of black.

[4] Sentence between lines 46 and 48 doesn’t make sense. 

[5] Line 52 must be removed. 

[6] Fig. 6 is of very low quality. 

[7] Fig. 7 is of very low quality. 

[8] In Fig. 15, for better understanding, the physical places where the functions take place (e.g., Google Cloud, Colab, local server, etc.) must be added. Maybe Fig. 15 and Fig. 17 should be merged into one figure.

[9] The communication requirements between the UAV and base station must be explained.  

[10] In line 551, it is mentioned that drone communication is based on WiFi. It is mandatory to provide the techno-economic impact of WiFi infrastructure in rural areas. This is very important because the proposed solution targets reducing the cost of monitoring.

[11] UAV requirements in terms of processing and storage must be reported thoroughly.

[12] Authors must explain the mechanism of data model execution inside the UAV.

[13] Authors must explain how multimedia processing and other relevant operations influence the total flight time of the UAV.

[14] Authors must pay attention to the particular requirements of real-life external environments, such as the agricultural domain, in order to support the proposition of a plain object-detection application. The emphasis should be put on the idiosyncrasies of the application domain.

[15] Image and video file formats stored in the UAV and then transmitted and processed in the next layers must be reported. Also, describe the minimum media storage requirements.

Comments on the Quality of English Language

Minor editing of English language required

Author Response

  • The Abstract must be re-written in order to eliminate the use of the term UAV.

Response: We have rewritten the abstract as follows.

Farming is an important industry worldwide. With the development of artificial intelligence, intelligent agriculture has become a trend, and intelligent monitoring of agricultural activities is an important part of it. However, due to the difficulty of balancing quality and cost, the goal of improving the economic benefits of agricultural activities has not reached the expected level. Farm supervision requires intensive human effort and may not produce satisfactory results. In order to achieve intelligent monitoring of agricultural activities and improve economic benefits, this paper proposes a solution that combines Unmanned Aerial Vehicles (UAVs) with deep learning models. The proposed solution aims to detect and classify objects using UAVs in the agricultural industry, thereby achieving independent agriculture without human intervention. To achieve this, a highly reliable target detection and tracking system is developed using UAVs. The use of deep learning methods allows the system to effectively solve the target detection and tracking problem. The model utilizes data collected from DJI Mirage 4 UAVs to detect, track, and classify different types of targets. The performance evaluation of the proposed method shows promising results: the average mAP for target detection and tracking is 90.98%, the average accuracy is 89.76%, and the average recall is 88.78%. Additionally, this paper compares the performance of different deep learning models, such as YOLOv7, Faster R-CNN, SSD, and Mask R-CNN, using key evaluation indicators. Finally, the YOLOv7-DeepSort model is determined to be the most suitable method based on the comparison results. By combining UAV technology and deep learning models, this paper provides a cost-effective solution for intelligent monitoring of agricultural activities. The proposed method offers the potential to improve the economic benefits of farming while reducing the need for intensive human effort.
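For illustration, a minimal sketch of how a YOLOv7 detector can be chained with DeepSort tracking is given below. It assumes the public torch.hub entry point of the WongKinYiu/yolov7 repository and the community deep-sort-realtime package; the weights path and video file name are placeholders, and this is not the authors' exact pipeline.

```python
# Minimal YOLOv7 + DeepSort sketch (illustrative, not the authors' exact code).
# Assumes: pip install deep-sort-realtime, plus the WongKinYiu/yolov7 repo via
# torch.hub; 'yolov7.pt' and 'drone_footage.mp4' are placeholder paths.
import cv2
import torch
from deep_sort_realtime.deepsort_tracker import DeepSort

model = torch.hub.load('WongKinYiu/yolov7', 'custom', 'yolov7.pt')
tracker = DeepSort(max_age=30)  # drop tracks unseen for 30 consecutive frames

cap = cv2.VideoCapture('drone_footage.mp4')
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame[..., ::-1])  # BGR -> RGB for the detector
    detections = []
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        # DeepSort expects ([left, top, width, height], confidence, class)
        detections.append(([x1, y1, x2 - x1, y2 - y1], conf, int(cls)))
    tracks = tracker.update_tracks(detections, frame=frame)
    for t in tracks:
        if t.is_confirmed():
            print(t.track_id, t.to_ltrb())  # persistent ID + box per tracked animal
cap.release()
```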

  • “UAV” in the Abstract must be given in its full form at first use.

Response: We have spelled out “UAV” as Unmanned Aerial Vehicle at its first appearance in the abstract.

  • There are many text lines written in red instead of black.

Response: We have carefully reviewed the entire text and corrected the error.

  • Sentence between lines 46 and 48 doesn’t make sense.

Response: We have deleted this sentence.

  • Line 52 must be removed.

Response: We have deleted this sentence.

  • Fig. 6 is of very low quality.

Response: We have improved the clarity of the image.

  • Fig. 7 is of very low quality.

Response: We have improved the clarity of the image.

  • In Fig. 15, for better understanding, the physical places where the functions take place (e.g., Google Cloud, Colab, local server, etc.) must be added. Maybe Fig. 15 and Fig. 17 should be merged into one figure.

Response: In Fig. 15, the functions take place on Google Cloud. After discussion among all authors, we believe Fig. 15 and Fig. 17 may be expressed more clearly and intuitively.

  • The communication requirements between the UAV and base station must be explained.

Response: We use the UAV for data collection and then use machine learning tools for data analysis and processing. Therefore, the UAV does not communicate with a base station. At present, real-time data collection is not yet possible; our next step is to collect and process data in real time.

  • In line 551, it is mentioned that drone communication is based on WiFi. It is mandatory to provide the techno-economic impact of WiFi infrastructure in rural areas. This is very important because the proposed solution targets reducing the cost of monitoring.

Response: We use the UAV for data collection and then use machine learning tools for data analysis and processing. The next step of our work is to utilize WiFi-based wireless communication devices for real-time data collection and analysis.

  • UAV requirements in terms of processing and storage must be reported thoroughly.

Response: We use the UAV for data collection and then use machine learning tools for data analysis and processing. The system support aspect of this project consists of data storage, data labeling, deep learning model execution, and UI design. The frameworks and platforms used in this project are open-source technologies and tools. Data is stored on the local system and Google Drive. Data labeling and storage are done using Roboflow. Deep learning model execution is performed on Google Colab, a cloud platform with GPUs, using the scikit-learn library and TensorFlow. The paid version, Google Colab Pro, is used for deep learning model training with higher GPU acceleration settings. For the user interface, Streamlit is used. Streamlit is an open-source framework for creating user interfaces for deep learning model integration in computer vision deployments, and it is a compatible platform for integrating deep learning models with web applications.
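As a rough illustration of the Streamlit part of this stack, the sketch below shows a minimal UI for uploading a drone image; the predict() and draw_boxes() helpers are hypothetical placeholders for the model integration, not the actual application code.

```python
# Minimal Streamlit UI sketch (app.py); run with: streamlit run app.py
# The commented-out predict()/draw_boxes() helpers are hypothetical placeholders.
import streamlit as st
from PIL import Image

st.title('Livestock Farm Drone Monitoring')

uploaded = st.file_uploader('Upload a drone image', type=['jpg', 'jpeg', 'png'])
if uploaded is not None:
    image = Image.open(uploaded)
    st.image(image, caption='Input drone image')
    # In the deployed app, the detection model would run here, e.g.:
    # boxes = predict(image)                     # hypothetical model wrapper
    # st.image(draw_boxes(image, boxes), caption='Detections')
```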

  • Authors must explain the mechanism of data model execution inside the UAV.

Response: The parameters of the drone are as follows.

DJI Mini 3 Drone

Camera
Main camera: 48 MP
Real-time image transmission quality: 720p
Picture formats: DNG, JPEG

Aircraft
Battery capacity: 18.1 Wh
Max flight time: 38 minutes
Weight: 249 g
Certified model: MT3PD
Maximum wind resistance: Level 5 wind

Remote control
Control method: dedicated remote controller
Supported interface types: Lightning, Micro USB, USB-C

Transmission technology

Wi-Fi (Wireless Fidelity) is a wireless LAN technology based on the IEEE 802.11 family of standards, typically using 2.4 GHz or 5 GHz UHF radio waves. Similar to Bluetooth, it allows electronic devices to connect to a wireless LAN and transmit signals at high speed over a short range.

The mobile phone communicates with the Wi-Fi module of the ground relay over Wi-Fi, and the ground relay's Wi-Fi module in turn communicates with the Wi-Fi module on the drone. This link not only sends control signals from the mobile phone on the ground, but also transmits the UAV's aerial video data back to the phone over Wi-Fi.

  • Authors must explain how multimedia processing and other relevant operations influence the total flight time of the UAV.

Response: Multimedia processing and other related operations do not affect the total flight time of the drone.

Our dataset includes different image sizes, so we resized our images to 512 × 512; images below this size are padded to 512 × 512 (see the sketch below). Resizing the images reduces processing time, since larger-resolution images take longer to process.
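A minimal sketch of such a resize-and-pad step, assuming OpenCV (the authors' exact preprocessing code is not shown in the paper):

```python
# Resize-and-pad sketch: shrink large images to fit 512 x 512, pad small ones.
import cv2
import numpy as np

def resize_with_padding(img: np.ndarray, size: int = 512) -> np.ndarray:
    h, w = img.shape[:2]
    scale = min(1.0, size / max(h, w))  # shrink large images; never upscale small ones
    resized = cv2.resize(img, (int(w * scale), int(h * scale)))
    pad_h, pad_w = size - resized.shape[0], size - resized.shape[1]
    # Pad bottom/right with neutral gray so the output is exactly size x size
    return cv2.copyMakeBorder(resized, 0, pad_h, 0, pad_w,
                              cv2.BORDER_CONSTANT, value=(114, 114, 114))
```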

The data we collected with the DJI drone does not have annotations, so we labeled some of these images with their respective labels using an image annotation tool called Plainsight. We used RGB images for our model, so grayscale images are excluded from the dataset. Any low-quality images are removed as we encounter and discover them. Figure 5 below shows the samples for which we created bounding boxes; the bounding boxes are created for sheep, trucks, and cows.

We have collected data from the drone as well as from a publicly available dataset. From the two sources, we collected 19,476 images/videos as a raw dataset. Also, transformation procedures such as data augmentation have been used to enhance the quality of our dataset and expand the variety of the data (a sketch of such augmentations is given below).
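A minimal sketch of augmentations of this kind, using TensorFlow (which the system reportedly uses); the specific transforms here are assumptions, and geometric augmentations such as the flip below would additionally require transforming the bounding boxes:

```python
# Simple augmentation sketch; the exact transforms used are assumptions.
import tensorflow as tf

def augment(image: tf.Tensor) -> tf.Tensor:
    image = tf.image.random_flip_left_right(image)   # note: flips also move boxes
    image = tf.image.random_brightness(image, max_delta=0.2)
    image = tf.image.random_contrast(image, lower=0.8, upper=1.2)
    return tf.clip_by_value(image, 0.0, 1.0)         # keep pixels in [0, 1]

# Example: augment a dummy 512 x 512 RGB image
augmented = augment(tf.random.uniform([512, 512, 3]))
```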

  • Authors must pay attention to the particular requirements of real-life external environments, such as the agricultural domain, in order to support the proposition of a plain object-detection application. The emphasis should be put on the idiosyncrasies of the application domain.

Response: The application scenarios of this article are extensive.

  • Image and video file formats stored in the UAV and then transmitted and processed in the next layers must be reported. Also, describe the minimum media storage requirements.

Response: After completing this phase, we built an extensive farm database with 19,476 images/videos containing objects that belong to one of the 6 classes. The statistics of each phase are presented in Table 1. With the prepared dataset, we split the images into training, validation, and test sets in a 60:20:20 ratio (a sketch of such a split is given below).
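A minimal sketch of such a 60:20:20 split, assuming scikit-learn (already part of the stack described above); the file names and random seed are placeholders:

```python
# 60:20:20 train/validation/test split sketch; file names are placeholders.
from sklearn.model_selection import train_test_split

all_items = [f'img_{i:05d}.jpg' for i in range(19476)]  # stand-in for the real dataset
train, rest = train_test_split(all_items, test_size=0.4, random_state=42)  # 60% train
val, test = train_test_split(rest, test_size=0.5, random_state=42)         # 20% / 20%
print(len(train), len(val), len(test))
```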

In response to the above review comments, we have supplemented and revised the original text and highlighted the changes in red.

Thank you again for your valuable feedback on our manuscript. If you have any questions about this paper, please don't hesitate to let us know.
