Article
Peer-Review Record

Brain–Computer Interface Based on PLV-Spatial Filter and LSTM Classification for Intuitive Control of Avatars

Electronics 2024, 13(11), 2088; https://doi.org/10.3390/electronics13112088
by Kevin Martín-Chinea 1,*, José Francisco Gómez-González 2,* and Leopoldo Acosta 2
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 22 April 2024 / Revised: 16 May 2024 / Accepted: 23 May 2024 / Published: 27 May 2024
(This article belongs to the Special Issue EEG Analysis and Brain–Computer Interface (BCI) Technology)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The relevance of the work is beyond doubt. The authors address the topical question of employing BCI technology for practical use, opening the manuscript with an overview of possible implementations and providing examples of BCI-based assistive systems for disabled persons, with a focus on equipment control (wheelchair, robotic arm, etc.) based on the analysis of brain activity signals. The authors also discuss the main techniques used for brain signal processing, focusing on best practices.

In their work, the authors propose a BCI-based system aimed at recognizing a person's intention to move in four directions in a virtual environment (VR). They propose to use the Phase Locking Value (PLV) spatial filter method for pre-processing the acquired signals and an LSTM neural network trained to recognize differences in brain activity signals depending on the person's intention. The proposed system was physically tested on a group of volunteers using a prepared VR environment. Such an approach is of particular interest compared to traditional training and testing approaches, as it makes the testing process more interactive and promises better training results through an improved user experience. The results obtained demonstrate the promise of the proposed solution for making the training process simpler, more attractive, and more efficient for people who intend to use BCI-based technology. However, I would suggest several improvements to the presented work.
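As background to the pre-processing step summarized above, the following is a minimal sketch of the standard Hilbert-transform-based phase-locking value (PLV) computation that a PLV spatial filter builds on. It illustrates only the general technique, not the authors' implementation; the channel count, window length, and synthetic data are assumptions made purely for illustration.

```python
# Minimal PLV sketch (illustrative only, not the authors' code).
import numpy as np
from scipy.signal import hilbert

def plv(x: np.ndarray, y: np.ndarray) -> float:
    """Phase-locking value between two equal-length EEG channel windows."""
    phase_x = np.angle(hilbert(x))  # instantaneous phase of channel x
    phase_y = np.angle(hilbert(y))  # instantaneous phase of channel y
    # Magnitude of the mean phase-difference vector: 1 = perfect locking, 0 = none.
    return float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))

def plv_matrix(window: np.ndarray) -> np.ndarray:
    """Pairwise PLV matrix for one window shaped (n_channels, n_samples)."""
    n = window.shape[0]
    m = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            m[i, j] = m[j, i] = plv(window[i], window[j])
    return m

# Example: 8 assumed channels, a 2 s window at an assumed 128 Hz sampling rate.
rng = np.random.default_rng(0)
print(plv_matrix(rng.standard_normal((8, 256))).round(2))
```

In a PLV-based spatial filter, matrices of this kind would typically be used to weight or combine channels before classification; the exact way the authors do this is described in their cited previous work.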

It would be good to rewrite the abstract more clearly so that it reflects the paper's content more precisely.

The authors begin the Materials and Methods section with an explanation of their equipment, software, volunteers, and other technical details of data pre-processing and processing, but they disclose the main idea of the experiment only in sub-section 2.8, where they describe the developed virtual environment. It would be much easier for readers to follow the paper if the authors at least briefly described the aim of the experiment (e.g., "... to test the possibility of improving avatar movement control in a virtual environment using the PLV spatial filter for signal pre-processing and an LSTM NN for signal classification ...") before providing the technical details, followed by a detailed description of each implemented solution.

It would be better to provide more details on the "Experimental Protocol" (Section 2.3). From the text of the paper it can be concluded that the authors used the experimental protocol to record brain signal behavior while the volunteer's eyes were open and closed, and then used the difference between these signals to confirm the selection and pressing of the buttons (move forward, backward, left, and right) provided by the user interface. In fact, the details of how buttons are chosen and pressed are not clearly described in the paper.

In Table 2 the authors provide the list of questions asked to volunteers about their experience with the experiment. One of the questions (Q3) states: "Was the speed of changes between buttons and selection time adequate for system control?", but the authors never explain the term "speed of changes between buttons" earlier in the text. I can assume that the UI provides a set of buttons and highlights them one by one, each for a certain period of time within which the volunteer should provide a confirmation signal if they choose the currently highlighted button. However, this process is never described in detail in the paper.

It also seems that the system provides the same "speed of changes" for every user, but different people may have different response times to visual stimuli. It is not clear how exactly the authors chose this parameter ("speed of changes").
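To make this concern concrete, the following is a minimal sketch of the kind of scanning-selection scheme this reading of the protocol implies: the interface cycles through the four movement buttons at a fixed rate (the "speed of changes"), and the currently highlighted button is selected when a confirmation signal (e.g., a detected eye closure) arrives. This is purely illustrative of the assumed scheme; the dwell time, button order, and confirmation criterion are not taken from the manuscript.

```python
# Illustrative scanning-selection loop (assumed scheme, not the authors' implementation).
import time

BUTTONS = ["forward", "backward", "left", "right"]
DWELL_S = 2.0  # assumed "speed of changes": how long each button stays highlighted

def scan_and_select(confirmation_detected) -> str:
    """Cycle through the buttons until the user confirms the highlighted one."""
    i = 0
    while True:
        current = BUTTONS[i % len(BUTTONS)]
        print(f"highlighting: {current}")
        deadline = time.monotonic() + DWELL_S
        while time.monotonic() < deadline:
            if confirmation_detected():  # e.g., output of an eye-closure classifier
                return current
            time.sleep(0.05)
        i += 1

# Demo with a dummy confirmation source that fires roughly 3 s after the scan starts.
start = time.monotonic()
print("selected:", scan_and_select(lambda: time.monotonic() - start > 3.0))
```

Whether a value like DWELL_S is fixed for all users or calibrated per user is exactly the open question raised above.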

In the conclusions (lines 321-323) the authors state that "This research demonstrates that the combination of BCI and VR can be used effectively to enable intuitive control of virtual environments ..."; however, in the Results section (lines 271-272) they conclude from the volunteers' feedback "... leaving the field of the VR environment, which, despite being attractive to them, was not paramount". It therefore remains unclear how the use of VR helped to increase learning or training efficiency.

In general, the manuscript is relevant to the scope of the journal and is presented in a structured manner. If the authors manage to address the suggestions provided, the manuscript could be accepted for publication.

Comments on the Quality of English Language

The authors should proofread the English throughout the manuscript to improve the overall impression of their work.

Author Response

We have carefully addressed all the comments and suggestions (see the attached file), and the corresponding changes have been made to the manuscript.

We have submitted a new version of the manuscript with the modifications highlighted, as well as a clean version of the manuscript.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

This study investigates the potential of combining BCI and VR technologies to enhance user experience and facilitate more intuitive avatar control within a secure environment. The topic is interesting; however, there are still some technical issues that need to be addressed.

1. The brain signals used to train the classifier were based on the protocol in Fig. 2. Did you use eye-closed and eye-open signals? In my opinion, eye-closed and eye-open signals show distinct patterns in the EEG; therefore, they can be classified easily by almost any classifier (see the sketch after this list), and you do not need an LSTM or any complex signal-processing method. I think the experimental protocol should be improved, since these two kinds of signals are simple to classify. Motor imagery is an alternative, as are P300 and SSVEP. Here are some references: Y. Yu et al., Self-Paced Operation of a Wheelchair Based on a Hybrid Brain-Computer Interface Combining Motor Imagery and P300 Potential, IEEE Transactions on Neural Systems and Rehabilitation Engineering, DOI: 10.1109/TNSRE.2017.2766365; Wang, H. et al., An asynchronous wheelchair control by hybrid EEG–EOG brain–computer interface, Cogn Neurodyn 8, 399–409 (2014), DOI: 10.1007/s11571-014-9296-y.

2. Certainly, due to the low signal-to-noise ratio of the open BCI device, complex decoding may require better signals. I therefore suggest designing experimental paradigms based on ERPs, as this type of signal is relatively easy to decode.
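The sketch below illustrates the point made in comment 1: eye-closed versus eye-open EEG can often be separated with a very simple pipeline, for example alpha-band (8-12 Hz) power fed to a linear classifier, without an LSTM. The data are synthetic stand-ins, and the sampling rate, band limits, and window length are assumptions made for illustration only.

```python
# Toy eye-closed vs. eye-open classification sketch (synthetic data, illustrative only).
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 128  # assumed sampling rate in Hz

def alpha_power(window: np.ndarray) -> float:
    """Mean 8-12 Hz power of a single-channel EEG window."""
    freqs, psd = welch(window, fs=FS, nperseg=FS)
    band = (freqs >= 8) & (freqs <= 12)
    return float(psd[band].mean())

rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS  # 2 s windows

# Synthetic "eyes closed" windows: noise plus a strong 10 Hz alpha rhythm.
closed = [rng.standard_normal(t.size) + 3 * np.sin(2 * np.pi * 10 * t) for _ in range(50)]
# Synthetic "eyes open" windows: noise only (attenuated alpha).
opened = [rng.standard_normal(t.size) for _ in range(50)]

X = np.array([[alpha_power(w)] for w in closed + opened])
y = np.array([1] * 50 + [0] * 50)

clf = LogisticRegression().fit(X, y)
print("training accuracy on toy data:", clf.score(X, y))
```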

Author Response

We have carefully addressed all the comments and suggestions (see the attached file), and the corresponding changes have been made to the manuscript.

We have submitted a new version of the manuscript with the modifications highlighted, as well as a clean version of the manuscript.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

Please find the attached word file for the comments and suggestions for authors. Thank you.

Comments for author File: Comments.pdf

Author Response

We have carefully addressed all the comments and suggestions (see the attached file), and the corresponding changes have been made to the manuscript.

We have submitted a new version of the manuscript with the modifications highlighted, as well as a clean version of the manuscript.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

1. Although the article utilizes blink signals as input, it is recommended to mention some relevant BCI methods in the manuscript to provide readers with a broader understanding of the current research and related studies, such as Y. Yu et al., Self-Paced Operation of a Wheelchair Based on a Hybrid Brain-Computer Interface Combining Motor Imagery and P300 Potential, IEEE Transactions on Neural Systems and Rehabilitation Engineering, DOI: 10.1109/TNSRE.2017.2766365; Wang, H. et al., An asynchronous wheelchair control by hybrid EEG–EOG brain–computer interface, Cogn Neurodyn, DOI: 10.1007/s11571-014-9296-y.

2. It is recommended to compare the results obtained using the LSTM method with those of existing methods.

Author Response

Thank you very much for your feedback. Attached you will find the updated file, in which we have taken each of your comments and suggestions into account and implemented the corresponding changes in the manuscript.

We have submitted a new version of the manuscript with the modifications highlighted, as well as a clean version of the manuscript.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

1. Since the algorithm is mostly based on the authors' previously published work (citations [22,23]), could this be mentioned more clearly in the paper? This would let readers know that this is a continuation of the previous work.

2. Line 360 and Figure 7: Are the final 3 participants who took part in the final tasks users 1, 5, and 7, or users 1, 7, and 8? And what does the best performance mean: the shortest time plus the highest accuracy? Please clarify.

3. Did the authors compare the results without the VR scenario, i.e., using a normal monitor to display the control buttons? The advantage of applying VR would be demonstrated if the accuracy in the VR scenario were higher than with a normal display, right? Or has any other publication already shown this?

Author Response

Thank you very much for your feedback. Attached you will find the updated file, in which we have taken each of your comments and suggestions into account and implemented the corresponding changes in the manuscript.

We have submitted a new version of the manuscript with the modifications highlighted, as well as a clean version of the manuscript.

Author Response File: Author Response.pdf
