Article
Peer-Review Record

Local Multi-Head Channel Self-Attention for Facial Expression Recognition

Information 2022, 13(9), 419; https://doi.org/10.3390/info13090419
by Roberto Pecoraro *, Valerio Basile and Viviana Bono
Submission received: 11 August 2022 / Revised: 29 August 2022 / Accepted: 30 August 2022 / Published: 6 September 2022
(This article belongs to the Section Artificial Intelligence)

Round 1

Reviewer 1 Report

I thank the authors for giving me the opportunity to read this work.

The paper explores the possibility of employing the self-attention paradigm in order to improve the performance of state-of-the-art computer vision algorithms in the area of facial expression recognition.

It introduces a new channel self-attention module, the Local (multi-)Head Channel (LHC), designed as a processing block that can be integrated into an existing convolutional architecture.
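To make the idea of channel self-attention concrete, here is a minimal NumPy sketch of a multi-head attention block that operates over channels rather than spatial positions, with each head restricted to a local group of channels. The function name, the contiguous head grouping, the scaling factor, and the residual connection are illustrative assumptions for this sketch, not the authors' actual LHC design; see the paper for the real module.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_head_channel_attention(x, n_heads=4):
    """Sketch of channel self-attention with local heads.

    x: feature map of shape (C, H, W), with C divisible by n_heads.
    Each head attends within a contiguous group of channels, so the
    attention is 'local' along the channel dimension.
    """
    C, H, W = x.shape
    g = C // n_heads                      # channels per head
    flat = x.reshape(C, H * W)            # one spatial vector per channel
    out = np.empty_like(flat)
    for h in range(n_heads):
        blk = flat[h * g:(h + 1) * g]     # (g, HW) local channel group
        # (g, g) channel-to-channel affinities, scaled like dot-product attention
        scores = blk @ blk.T / np.sqrt(blk.shape[1])
        attn = softmax(scores, axis=-1)
        out[h * g:(h + 1) * g] = attn @ blk
    return out.reshape(C, H, W) + x       # residual connection
```

Because the block preserves the input shape, it can in principle be dropped between convolutional stages of an existing backbone, which matches the integration strategy the paper describes.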

The reported experimental results provide several interesting insights regarding the potential of this approach.

Overall I find the paper very interesting and well written. I recommend only minor revisions: the authors must ensure that all acronyms are defined the first time they are used (e.g., NLP, LSTM, TTA).

Author Response

Thank you for the kind words. 
To improve clarity following your suggestion, we revised all mentions of acronyms and added a summary table in the appendix.

Reviewer 2 Report

The article presents a novel self-attention module that can be integrated with a convolutional neural network, specifically for facial expression recognition. It is a good article, with an extended related-work presentation. The model is well presented and motivated, the results are well explained in the Appendix, and the limitations and future work are described.

Some acronyms are used without explanation (SOTA, NLP, TTA).

The reviewer thinks the last paragraph on page 4 would be easier to grasp if the information were presented in a table or in graphical form.

Page 9, line 284: “Idea” should be written “idea”.

Page 10, line 343: “the” is written twice.

This manner of presenting the process of obtaining the results in the Appendix is interesting. Still, the reviewer thinks that part of that content could be placed in the article’s body.

Author Response

Thank you for the kind words and valuable suggestions.
We revised all typos and first mentions of acronyms, and added an acronym table in the appendix. We replaced the paragraph listing the results on the FER2013 benchmark with a table, keeping only the most salient comments in the text. We also re-organized parts of the text, including relevant sections of the appendix in the main article body.

Reviewer 3 Report

The authors have done fabulous research in proposing a model for Facial Expression Recognition (FER) based on Local multi-Head Channel (LHC) self-attention. The topic of this work is very interesting and has practical applicability. The paper is well written, innovative, and of good scientific impact. However, some aspects need improvement, as follows:

1. The paper organization should be improved.

2. I think it would be better to collect all abbreviations used in the manuscript in a table.

3. Table 1 should not appear before it is referred to in the text. Furthermore, Table A7 should be added to, and discussed in, the main paper.

4. There are a few typos; please check the paper’s grammar carefully.

Author Response

Thank you for the encouraging words and valuable comments.
We re-organized parts of the text, including relevant sections of the appendix in the main article body. We revised the grammar and all first mentions of acronyms, and added an acronym table in the appendix. 
