Article
Peer-Review Record

A Novel Dual Mixing Attention Network for UAV-Based Vehicle Re-Identification

Appl. Sci. 2023, 13(21), 11651; https://doi.org/10.3390/app132111651
by Wenji Yin, Yueping Peng *, Zecong Ye and Wenchao Liu
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 13 September 2023 / Revised: 6 October 2023 / Accepted: 9 October 2023 / Published: 25 October 2023
(This article belongs to the Special Issue AI Techniques in Intelligent Transport Systems)

Round 1

Reviewer 1 Report

This paper describes a novel Dual Mixing Attention Network (DMANet) that extracts discriminative features robust to variations in viewpoint. Specifically, the authors present a plug-and-play Dual Mixing Attention Module (DMAM) to capture pixel-level pairwise relationships and channel dependencies, where DMAM is composed of Spatial Mixing Attention (SMA) and Channel Mixing Attention (CMA). This modular design fosters comprehensive feature interactions, improving discriminative feature extraction under varying viewpoints. The versatility of DMAM allows its seamless integration into existing backbone networks at varying depths, significantly enhancing vehicle discrimination performance. Through extensive experiments, the approach demonstrates superior performance compared to representative methods in the UAV-based vehicle re-identification task, affirming its efficacy in challenging aerial scenarios.
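For readers unfamiliar with the dual-attention design described above, the following is a minimal, hypothetical numpy sketch of the general idea: a spatial branch that mixes feature-map positions by their pairwise affinities and a channel branch that mixes channels by channel-wise affinities, combined with a residual connection so the module can be dropped after any backbone stage. The function names and the plain softmax-affinity formulation are illustrative assumptions; the paper's actual DMAM uses learned projections and its own mixing scheme, which this sketch does not reproduce.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(feat):
    # feat: (C, H, W). Mix the H*W spatial positions: each position is
    # reweighted by its softmax-normalized similarity to all other positions.
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)           # (C, N) with N = H*W
    attn = softmax(x.T @ x, axis=-1)     # (N, N) pairwise position affinities
    out = x @ attn.T                     # mix positions by attention weights
    return out.reshape(C, H, W)

def channel_attention(feat):
    # feat: (C, H, W). Mix channels via softmax-normalized channel affinities.
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)           # (C, N)
    attn = softmax(x @ x.T, axis=-1)     # (C, C) channel affinities
    out = attn @ x                       # mix channels by attention weights
    return out.reshape(C, H, W)

def dual_mixing_attention(feat):
    # Combine the two branches with a residual sum, mimicking a
    # plug-and-play module insertable after any backbone stage.
    return feat + spatial_attention(feat) + channel_attention(feat)

feat = np.random.rand(8, 4, 4).astype(np.float32)  # toy (C, H, W) feature map
out = dual_mixing_attention(feat)
print(out.shape)  # (8, 4, 4) -- shape-preserving, as a plug-in module must be
```

Because the module preserves the input shape, it can in principle be inserted between convolutional stages without changing the rest of the network, which is the property the summary refers to as "seamless integration into existing backbone networks at varying depths."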

 

Some considerations regarding the content of paper:

- The review of prior research on this topic is not sufficiently detailed. It is not customary to cite so many references at once, as is done in this article with [1–9] and [10–17].

- Some abbreviations are defined more than once.

- Figures 2, 3, and 5 should be enlarged, because the formulas in them are not legible.

- Subsection 3.2 contains formulas without any explanation of them.

- There is no information on where the Rank values in Tables 1 and 2 were taken from.

- Figures 2, 3, 4, and 6 appear before they are first referenced in the text.

- The references should be formatted more carefully according to the journal's requirements.

The authors should carefully examine the text and correct syntactic errors.

Author Response

Thank you for your advice. Our reply is in the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors wrote the article at a very good level: they present a good description of the method and a good comparison with existing approaches.

However, I should note that performance on this problem depends almost exclusively on the dataset. It would therefore be useful for the authors to define the vehicle re-identification task more precisely: in which cases should a vehicle be considered a different one? For example, when the driver turns on the headlights, passengers get in, or a sticker is applied to the rear window.

It would also be interesting to see the presented algorithm applied to video footage of vehicle storage yards at large car factories.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
