Article
Peer-Review Record

A Video-Based Real-Time Tracking Method for Multiple UAVs in Foggy Weather

Electronics 2022, 11(21), 3576; https://doi.org/10.3390/electronics11213576
by Jiashuai Dai *, Ling Wu and Pikun Wang
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 20 September 2022 / Revised: 24 October 2022 / Accepted: 28 October 2022 / Published: 1 November 2022

Round 1

Reviewer 1 Report

This paper must be improved and restructured; otherwise it should be rejected.

1. I do not really understand the novelty and importance of this paper. Why did you choose foggy weather for UAV detection?

2. The quality of the figures must be improved, especially Figures 1, 2, 7, and 11. The authors should briefly describe and present each figure in order to highlight its importance.

3. The UAV image shown in the figure was not taken in foggy weather.

4. Table 6, which displays the tracking results of different methods, contains many images; why not present it as a figure with separate labels?

5. The authors use YOLOv5; however, YOLOv7 is now available, and I recommend using this algorithm to detect UAVs.

6. The authors use the DeepSORT algorithm in their video-based method (line 368) for tracking multiple UAVs in foggy weather, but they do not explain why they chose it, how the videos were captured, etc.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

  • A brief summary

Dai et al. do a good job of presenting a new video tracking system that combines dark channel defogging, YOLO, and DeepSORT to detect multiple UAVs in foggy weather. The chain of tools is solid and shows good performance on their dataset.

  • General concept comments
1. Section 2.1, “Determination of Transmissivity by Mean Filtering”, raises a concern about effectiveness. Mean filtering typically blurs the image, yet the authors claim it “avoids losing edge features”, even though averaging reduces pixel-intensity gradient changes (see the mean-filter sketch after this list). Are there any references supporting this claim?
2. In Section 2.2, “Estimation of Global Atmospheric Light Value”, the authors claim to reduce the time complexity from O(n^2) to O(n), which is not quite accurate. First, what is the meaning of n? Assuming n is the number of pixels, the original dark channel defogging selects the top 0.1% of pixels in O(n) time and then takes the maximum gray value among them to obtain the atmospheric light value, which is also O(n). Both steps simply loop over all pixels, for O(2n) ~ O(n) in total, so I do not see the claimed improvement (see the atmospheric-light sketch after this list).
3. In Section 4, the authors claim that image compression is needed and does not hurt performance. This claim needs more references to support it, since higher resolution without compression is typically best for object detection.
4. In Section 5.1, the authors describe a self-prepared dataset that is not detailed enough. For example, how many labelers were involved? Was there any ambiguity in the labeling, and how was it resolved? Also, the dataset appears to have been captured with mobile phones and downloaded from the Internet, so are there discrepancies among the collected videos, e.g., different lighting conditions, resolutions, etc.? In sum, this part seems problematic and needs a more detailed description.
5. Table 2 reports precision and recall. Since a tracking-by-detection method is used, we can expect labeled bounding boxes in the ground truth and predicted bounding boxes in the model outputs, so how is a prediction determined to be correct? Is intersection over union (IoU) used as a cut-off? If so, what is the cut-off IoU? (See the IoU sketch after this list.)
6. In Table 3, the authors show that their improved YOLOv5 outperforms the original YOLOv5 and has a smaller model size. Could this be because their dataset is specific, so that their model is only better on this particular dataset? How about testing on another dataset to assess the robustness of the model?
7. I have a question about UAV tracking: is the model good at tracking small objects? Is it good at tracking dense objects with more frequent overlaps, say in a very crowded scene?
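
Mean-filter sketch: to make the concern in comment 1 concrete, here is a tiny NumPy illustration (my own, not taken from the paper) of what a mean (box) filter does to an ideal step edge: the sharp jump is spread into a ramp, i.e., the pixel-intensity gradient is reduced rather than preserved.

```python
import numpy as np

# A step edge filtered with a 3-tap mean (box) kernel: the sharp 0 -> 1
# jump becomes a gradual ramp, so the per-pixel gradient is reduced,
# which is why mean filtering is normally expected to soften edges
# rather than keep them.
edge = np.array([0., 0., 0., 1., 1., 1.])
kernel = np.ones(3) / 3.0
print(np.convolve(edge, kernel, mode="valid"))  # ≈ [0.  0.333  0.667  1.]
```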
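
Atmospheric-light sketch: for comment 2, the sketch below (my own illustration under the assumption that n is the number of pixels; the function name, the `top_fraction` parameter, and the per-pixel mean used as the gray value are not from the paper) spells out the conventional dark-channel estimate of the atmospheric light. Both steps are single linear passes over the pixels, so the baseline is already O(n).

```python
import numpy as np

def atmospheric_light_baseline(image, dark_channel, top_fraction=0.001):
    """Conventional dark-channel estimate of the global atmospheric light.

    image:        H x W x 3 array, the foggy input frame.
    dark_channel: H x W array, its dark channel.
    Both steps below visit each pixel a constant number of times, so the
    routine is O(n) in the pixel count n, as argued in comment 2.
    """
    n = dark_channel.size
    k = max(1, int(n * top_fraction))

    # Step 1: indices of the top 0.1% brightest dark-channel pixels.
    # np.argpartition is a linear-time selection, i.e. O(n).
    flat = dark_channel.ravel()
    top_idx = np.argpartition(flat, n - k)[n - k:]

    # Step 2: among those pixels, take the maximum gray value of the
    # original image as the atmospheric light, O(k) <= O(n).
    gray = image.reshape(-1, 3).mean(axis=1)
    return gray[top_idx].max()
```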
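
IoU sketch: as an illustration of the cut-off asked about in comment 5, here is a minimal sketch (my own, not taken from the paper) of how a predicted box is typically matched to a ground-truth box; the (x1, y1, x2, y2) box format and the 0.5 default threshold are assumptions.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred_box, gt_box, iou_threshold=0.5):
    # A prediction only counts as correct if it overlaps a ground-truth
    # box by at least the chosen IoU cut-off (0.5 is a common default).
    return iou(pred_box, gt_box) >= iou_threshold
```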


  • Specific comments 

1. In Figure 1, line 61 falls between the figure and its caption, which is wrong.

2. In Figure 8, the point labels such as x, y, Q, and R are blurred and not clear enough.

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The authors have addressed all of the remarks very well; the quality of the paper is now suitable for publication in the Electronics journal.
