Article
Peer-Review Record

EEG-Based 3D Visual Fatigue Evaluation Using CNN

by Kang Yue 1,2 and Danli Wang 1,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Electronics 2019, 8(11), 1208; https://doi.org/10.3390/electronics8111208
Submission received: 16 September 2019 / Revised: 30 September 2019 / Accepted: 4 October 2019 / Published: 23 October 2019
(This article belongs to the Section Computer Science & Engineering)

Round 1

Reviewer 1 Report

This is a very interesting study that proposes the MorletInceptionNet architecture and verifies its effectiveness. However, the following points are unclear; please address them.

 

Since we cannot tell where to find the references, we cannot evaluate the effectiveness and novelty.

There is no title in Figure 3.1.

The Wilcoxon signed-rank test shows a significant difference. However, the test only shows that a difference exists; it does not show that its magnitude is meaningful. The difference between the accuracy of MorletInceptionNet (0.45 ± 0.06) and CNNInceptionNet (0.44 ± 0.06) is about 0.01, and the variation in accuracy is about the same. Please show that this difference is useful in practice (for example, describe the benefits of an improvement of 0.01).

Author Response

On behalf of my co-authors, I thank you very much for giving us the opportunity to revise our manuscript, and we greatly appreciate your positive and constructive comments and suggestions.
We have studied your comments carefully and revised the manuscript accordingly. The responses are listed as follows:

Point 1: There is no title in Figure 3.1.

Response 1: The caption of Figure 3.1 "Architecture of MorletInceptionNet" has been added.

Point 2: The Wilcoxon signed-rank test shows a significant difference. However, the test only shows that a difference exists; it does not show that its magnitude is meaningful. The difference between the accuracy of MorletInceptionNet (0.45 ± 0.06) and CNNInceptionNet (0.44 ± 0.06) is about 0.01, and the variation in accuracy is about the same. Please show that this difference is useful in practice (for example, describe the benefits of an improvement of 0.01).

Response 2: In this paper, we calculate performance measurements (kappa and accuracy) for both subject-specific and cross-subject classification, because individual differences in EEG signals are large and both individual and overall performance need to be considered. Although CNNInceptionNet achieves performance comparable to MorletInceptionNet in subject-specific classification (kappa: 0.44 vs. 0.45), it does not work as well as MorletInceptionNet in cross-subject classification (kappa: 0.61 for MorletInceptionNet vs. 0.57 for CNNInceptionNet).
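
For illustration, below is a minimal sketch (in Python, using SciPy and scikit-learn) of how per-subject kappa scores from two models can be compared with the Wilcoxon signed-rank test. The subject count and score values are placeholders, not the data reported in the paper.

# Minimal sketch: paired comparison of two models' per-subject kappa scores
# with the Wilcoxon signed-rank test. All values below are placeholders.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import cohen_kappa_score

def per_subject_kappa(true_by_subject, pred_by_subject):
    # Cohen's kappa computed independently for each subject.
    return np.array([cohen_kappa_score(t, p)
                     for t, p in zip(true_by_subject, pred_by_subject)])

# Placeholder per-subject kappa values for two models over the same subjects.
kappa_morlet = np.array([0.46, 0.44, 0.51, 0.39, 0.43])   # e.g. MorletInceptionNet
kappa_cnn    = np.array([0.45, 0.42, 0.50, 0.38, 0.44])   # e.g. CNNInceptionNet

# The paired, non-parametric test indicates whether one model is systematically
# better across subjects, but not whether the size of the gap matters in
# practice, which is why the cross-subject results are reported separately.
stat, p_value = wilcoxon(kappa_morlet, kappa_cnn)
print(f"Wilcoxon statistic = {stat:.3f}, p = {p_value:.4f}")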

Reviewer 2 Report

In the paper “EEG-Based 3D Visual Fatigue Evaluation using CNN”, Yue and Wang introduce a deep learning architecture for EEG-based 3D fatigue assessment. Although the topic is interesting and both the methods and results are appropriate, I think the paper suffers from poor English and is not suited for publication in its current form. I strongly recommend having the English reviewed by an expert.
From a technical point of view the paper is well structured, the methods are clearly described and replicable.
The results are also clear and concise.

My only concern is about the baseline methods used in the paper: they are all deep learning methods. Although comparing their method with state-of-the-art deep learning algorithms is appropriate, it would also be interesting to show that the deep learning methods outperform simple classification algorithms such as LDA or SVM for this specific classification problem. If it does not require a great effort, the authors could also compare LDA and SVM on their data.
Regarding the introduction section, I would suggest the authors also cite more recent papers about CNN and EEG: at lines 122-134 I suggest the authors also add some references to EEG-CNN studies such as:
1. Chiarelli, Antonio Maria, et al. "Deep learning for hybrid EEG-fNIRS brain–computer interface: application to motor imagery classification." Journal of Neural Engineering 15.3 (2018): 036028.
2. Croce, Pierpaolo, et al. "Deep Convolutional Neural Networks for feature-less automatic classification of Independent Components in multi-channel electrophysiological brain recordings." IEEE Transactions on Biomedical Engineering (2018).
3. Emami, Ali, et al. "Seizure detection by convolutional neural network-based analysis of scalp electroencephalography plot images." NeuroImage: Clinical 22 (2019): 101684.

Minor points:
Line 49: maybe the authors wanted to say: “However, these traditional methods have a main drawback: the classification performance relies heavily on the designed features extracted from the original EEG signals”
Line 65: We
Line 89: bands
Line 110: relied
Line 136: Motivation…
Line 152: Achieved…
Line 161: “There are two benefits come with morlet kernel” should be “There are two benefits to using the Morlet kernel”; “Firstly, where are” should be “Firstly, there are”
Please check the capitalization of every word that follows a full stop.

Author Response

On behalf of my co-authors, I thank you very much for giving us the opportunity to revise our manuscript, and we greatly appreciate your positive and constructive comments and suggestions.
We have studied your comments carefully and revised the manuscript accordingly. The responses are listed as follows:

Point 1: They are all deep learning methods. Although comparing their method with state-of-the-art deep learning algorithms is appropriate, it would also be interesting to show that the deep learning methods outperform simple classification algorithms such as LDA or SVM for this specific classification problem. If it does not require a great effort, the authors could also compare LDA and SVM on their data.

Response 1: In this paper, the filter bank common spatial pattern (FBCSP) algorithm is used to extract frequency-domain features from the EEG, and these features are then passed to an SVM for classification. The description of FBCSP-SVM may not have been clear enough, so we have added more details about this method.
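
For readers unfamiliar with this baseline, the following is a minimal sketch (in Python, using SciPy, MNE, and scikit-learn) of an FBCSP-SVM pipeline of the kind referred to above; the filter-bank edges, number of CSP components, and SVM settings are illustrative assumptions, not the exact configuration used in the paper.

# Minimal sketch of an FBCSP + SVM baseline. Assumes epoched EEG of shape
# (n_trials, n_channels, n_samples) at sampling rate fs; all settings below
# are illustrative, not the paper's exact configuration.
import numpy as np
from scipy.signal import butter, filtfilt
from mne.decoding import CSP
from sklearn.svm import SVC

def bandpass(epochs, low, high, fs, order=4):
    # Zero-phase band-pass filter applied along the time axis.
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, epochs, axis=-1)

def fbcsp_features(epochs, labels, bands, fs, n_components=4):
    # For each sub-band: filter, fit CSP, keep log-variance features,
    # then concatenate the features from all sub-bands.
    feats = []
    for low, high in bands:
        csp = CSP(n_components=n_components, log=True)
        feats.append(csp.fit_transform(bandpass(epochs, low, high, fs), labels))
    return np.hstack(feats)

# Illustrative usage with random data standing in for real EEG epochs.
fs = 250
bands = [(4, 8), (8, 12), (12, 16), (16, 24), (24, 32)]
epochs = np.random.randn(60, 22, fs * 2)       # 60 trials, 22 channels, 2 s
labels = np.random.randint(0, 2, size=60)      # placeholder binary fatigue labels

features = fbcsp_features(epochs, labels, bands, fs)
clf = SVC(kernel="rbf", C=1.0).fit(features, labels)
print("training accuracy:", clf.score(features, labels))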

Point 2: Regarding the introduction section, I would suggest the authors also cite more recent papers about CNN and EEG: at lines 122-134 I suggest the authors also add some references to EEG-CNN studies.

Response 2: More references including the recommended papers have been cited.

Point 3: Minor points: ...

Response 3: We thank the reviewer for the careful review. All of the errors have been corrected.

Round 2

Reviewer 1 Report

I have confirmed the corrections to the paper.

Reviewer 2 Report

The authors have addressed all my points; I think the paper is now suitable for publication.
