Article
Peer-Review Record

Enhancing Motor Imagery Electroencephalography Classification with a Correlation-Optimized Weighted Stacking Ensemble Model

Electronics 2024, 13(6), 1033; https://doi.org/10.3390/electronics13061033
by Hossein Ahmadi and Luca Mesin *
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 11 February 2024 / Revised: 4 March 2024 / Accepted: 8 March 2024 / Published: 10 March 2024
(This article belongs to the Special Issue Brain Computer Interface: Theory, Method, and Application)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Dear authors, congratulations on the results presented. To improve the quality of this document,

please include the following in the new version of the article.
1.- Could the authors provide more information about the optimization of the parameter $\alpha$?

In this reviewer's opinion, since it is an important aspect of your proposal, it would be better for the reader if the authors could explain more about the relevance of this optimization and its implications within the proposed procedure.

2.- In Figure 1, could the authors explain which blocks are directly related to the proposal?
It is clear that this figure has been included to help the reader understand the procedure, but, in this reviewer's opinion, it would be better if the authors could include an explanation of the proposed method with respect to different existing methods. This would highlight the contribution of your proposal.

3.- Please ensure that the contribution of your proposal is clearly defined in the paper. It would be better for the reader if the authors could include the phrase "The contribution of this proposal is...", throughout the paper.

4.- In Section 4.5.2, the following has been included.
“To ensure the optimal performance of each classifier within the COWSE model, we meticulously tuned their hyperparameters. Table 9 presents the specific configurations used for the SVM-RBF, MLP, RF, and ET classifiers.” Could the authors explain how this tuning was done? What considerations were made?

5.- If the dataset is different for some reason, could the authors explain the implications of this, with a view to perhaps applying the proposal in this or some other data-generating area?

Again, congratulations on your paper's results.
Regards

Author Response

  1. Could the authors provide more information about the optimization of the parameter $\alpha$? In this reviewer's opinion, since it is an important aspect of your proposal, it would be better for the reader if the authors could explain more about the relevance of this optimization and its implications within the proposed procedure.

Answer:

Thank you for your insightful feedback on the optimization of parameter $\alpha$. In response, we have specifically updated a subsection in the methodology section titled "2.5.2. Weight Assignment Based on Performance Ranking" and a part of the results section where we discuss the determination of the optimal $\alpha$. These updates elaborate on using Bayesian Optimization for the optimization of $\alpha$, detailing the process and its significant impact on our ensemble model's performance. We have clarified the mathematical approach and the resulting enhancement in predictive accuracy, directly addressing your concerns regarding the relevance and implications of this optimization within our proposal.

We appreciate your valuable suggestions, which have guided us to improve the clarity and depth of our manuscript. We hope these revisions adequately address your concerns and contribute to a better understanding of our work.
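For readers who want a concrete picture of what such a weighting parameter does, a minimal sketch follows. Everything in it is invented for illustration: the synthetic classifier outputs, the power-law weighting rule, and the coarse grid sweep, which stands in for the Bayesian Optimization actually used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_clf = 200, 4                      # hypothetical validation trials and base classifiers
y = rng.integers(0, 2, n_trials)
# synthetic class-1 probabilities from classifiers of decreasing quality
quality = np.array([0.9, 0.8, 0.7, 0.6])
probs = np.stack([np.clip(y + rng.normal(0.0, 1.2 - q, n_trials), 0, 1)
                  for q in quality], axis=1)
scores = np.array([((p > 0.5) == y).mean() for p in probs.T])  # per-classifier accuracy

def ensemble_accuracy(alpha):
    w = scores ** alpha
    w = w / w.sum()                  # alpha sharpens the performance-based weights
    fused = probs @ w                # weighted average of base probabilities
    return ((fused > 0.5) == y).mean()

# coarse sweep standing in for the Bayesian search over alpha
alphas = np.linspace(0.0, 10.0, 101)
best_alpha = alphas[int(np.argmax([ensemble_accuracy(a) for a in alphas]))]
```

At alpha = 0 all classifiers are weighted equally; increasing alpha concentrates the ensemble on its strongest members, which is why the value of alpha matters enough to be tuned.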

 

  1. In Figure 1, could the authors explain which blocks are directly related to the proposal?
    It is clear that this figure has been included to help the reader understand the procedure, but, in this reviewer's opinion, it would be better if the authors could include an explanation of the proposed method with respect to different existing methods. This would highlight the contribution of your proposal.

Answer:

Thank you very much for your insightful comment regarding the clarity of Figure 1 and its relevance to our proposal in the context of different existing methods. Your feedback has been invaluable in guiding us to enhance the comprehensibility of our work and better highlight our contribution.

In response to your suggestion, we have thoroughly revised the subsection titled "2.5. Ensemble Model Construction" within the methodology section of our paper. This revision now explicitly outlines the phases depicted in Figure 1, detailing the blocks directly related to our proposal. We emphasize the COWSE construction process, starting with error correlation analysis, which is pivotal in selecting diverse base classifiers. This initial step directly contributes to constructing an ensemble greater than the sum of its parts. We further explain how subsequent weight assignment and the innovative composition of our ensemble differentiate our method from existing approaches.

Additionally, we have updated the caption of Figure 1 to provide a more detailed overview of the COWSE model construction process. The revised caption explicitly highlights the innovative elements introduced in Phases 2 and 3, which are crucial for enhancing the ensemble's robustness and accuracy. This change aims to directly address your concern by elucidating how our proposal diverges from and contributes beyond traditional methods.

 

  1. Please ensure that the contribution of your proposal is clearly defined in the paper. It would be better for the reader if the authors could include the phrase "The contribution of this proposal is...", throughout the paper.

Answer:

We sincerely appreciate your constructive feedback and the opportunity to clarify the contributions of our proposal. In response to your valuable comment, we have added a new paragraph at the end of the Introduction section, explicitly outlining the key contributions of our research. This addition is formatted as a bullet list for clarity. It highlights our novel ensemble approach, innovative weight assignment strategy, integration of a meta-classifier, extensive validation on BCI competition datasets, and a comprehensive analysis of parameter optimization effects. This concise summary addresses your concern and provides a clear overview of our study's advancements in MI EEG classification in BCI.

We believe these modifications enhance the manuscript's clarity regarding our contributions and hope they fully address your concerns. We are grateful for your guidance in improving the quality of our work.

 

  1. In Section 4.5.2, the following has been included.
    “To ensure the optimal performance of each classifier within the COWSE model, we meticulously tuned their hyperparameters. Table 9 presents the specific configurations used for the SVM-RBF, MLP, RF, and ET classifiers.” Could the authors explain how this tuning was done? What considerations were made?

Answer:

Thank you for your insightful feedback regarding the hyperparameter tuning process detailed in Section 4.5.2. We appreciate your request for a clearer explanation of our methodology.

We have revised the mentioned section to provide a comprehensive overview of our approach to hyperparameter tuning for the SVM-RBF, MLP, RF, and ET classifiers within the COWSE model. We now clarify that we employed grid search for the SVM-RBF due to its relatively simpler parameter space and opted for random search for the MLP, RF, and ET classifiers to navigate their more complex parameter spaces efficiently. This revision details our balanced consideration of computational efficiency, the explored parameter ranges and scales, and the characteristics of the data, to ensure the suitability of our hyperparameter choices.
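As a rough illustration of the two search strategies described above (the parameter grids, data, and settings below are invented, not the configurations reported in Table 9 of the paper), a scikit-learn sketch:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

# stand-in data; the actual study tunes on extracted EEG features
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# exhaustive grid search suits the SVM-RBF's small parameter space
svm_search = GridSearchCV(
    SVC(kernel="rbf"),
    {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]},
    cv=3,
).fit(X, y)

# random search samples a larger space (here: a random forest) more cheaply
rf_search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [50, 100, 200], "max_depth": [None, 5, 10, 20]},
    n_iter=5, cv=3, random_state=0,
).fit(X, y)
```

The design trade-off is the one the response describes: the grid search evaluates every combination, which is tractable only for small spaces, while the random search caps the budget (`n_iter`) regardless of how large the space grows.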

 

  1. If the dataset is different for some reason, could the authors explain the implications of this, with a view to perhaps applying the proposal in this or some other data-generating area?

Answer:

Firstly, we extend our sincerest gratitude for your insightful comments and the opportunity to clarify further and enhance our manuscript. Your query regarding the dataset's distinctiveness and implications for applying our proposal in this or other data-generating areas has been pivotal in guiding our revisions.

In response, we have undertaken substantial updates to our discussion section, particularly focusing on the subsections that elucidate the limitations, future research directions, and the adaptability of our proposed COWSE model to diverse datasets (section 4.9.). These revisions aim to address your concern comprehensively.

  • Previously Discussed in Subsection "Implications, Limitations, and Future Directions": Initially, we discussed the model's adaptability and robustness in handling various MI EEG classification tasks, its computational complexity, and the need for real-world applicability testing. We acknowledged the necessity for future research to explore optimization strategies, the impact of preprocessing techniques, and the integration of additional classifiers to refine ensemble models.
  • Updated Subsection "Limitations and Future Research Directions": We have thoroughly revised this section to reiterate the importance of addressing computational complexity and enhancing real-world applicability and emphasize the significance of adaptability to user variability. This update aims to provide a clearer roadmap for future research, highlighting the exploration of optimization strategies, preprocessing and feature extraction methods, expanding classifier ensembles, and focusing on personalization and real-time performance.
  • New Subsection "Adaptability and Implications of the COWSE Model for Diverse Datasets": In response to your inquiry, we have added a new subsection detailing the COWSE model's adaptability (4.8.). This subsection discusses how the model's ensemble approach allows for effective application across datasets with varying signal qualities, task designs, subject variability, and data distributions. We elucidate how the model's flexibility can accommodate different MI tasks, adjust to subject proficiency levels, and manage imbalanced datasets. Moreover, we outline the implications of adapting the COWSE model to significantly different datasets, including the necessity for preprocessing reevaluation, feature space exploration, hyperparameter retuning, validation approach modification, and performance metrics reassessment.

These updates, particularly the addition of the "4.8. Adaptability and Implications of the COWSE Model for Diverse Datasets" subsection, are designed to provide a comprehensive understanding of the model's versatility and potential impact on advancing personalized and accurate BCI systems. We believe these revisions effectively address your concerns and enrich the manuscript's contribution to the field. Thank you once again for your constructive feedback.

Reviewer 2 Report

Comments and Suggestions for Authors

1. The training process of the model should be clearly described, including the optimization algorithms and loss functions used, as well as the validation and evaluation methods of the model, including cross validation.

2. The advantages of the COWSE model were mentioned in the article, but the limitations and challenges that may be encountered during the experimental process were not fully discussed. Suggest a brief discussion on possible limitations and challenges during the experimental process to provide a more comprehensive and balanced perspective.

3. It is necessary to clearly describe the training process of the ensemble model, including how to combine base classifiers, adjust weights, etc., and explain how to evaluate the performance of the ensemble model.

4. The conclusion section summarizes the main findings of the study, but does not clearly state the specific directions for future research. It is recommended to provide a clear overview of the specific research directions in MI EEG signal classification research in the future, in order to lead the development of future research.

5. The language and writing style in the text are clear and professional, but there are still some minor grammar or typographical errors. Suggest correcting any errors in the article to maintain its quality and professionalism.

Comments on the Quality of English Language

Minor editing of English language required.

Author Response

  1. The training process of the model should be clearly described, including the optimization algorithms and loss functions used, as well as the validation and evaluation methods of the model, including cross validation.

Answer:

Thank you very much for your insightful comments and constructive feedback regarding the training process and evaluation methodology of the COWSE model detailed in our manuscript.

In response to your comment, we have thoroughly revised the "4.6. Training and Evaluation of the COWSE Model" subsection within the discussion section of our manuscript. This updated version now provides a detailed description of the training regimen based on TSCV, explicitly chosen to account for the temporal dependencies inherent in EEG data. We have elaborated on the initial training phase of our base classifiers (SVM-rbf, MLP, RF, and ET), detailing the optimization algorithms and hyperparameter tuning process guided by the scikit-learn library's implementations. Specifically, we discussed the Sequential Minimal Optimization (SMO) algorithm for SVM and the use of the Cross-Entropy Loss function and Adam Optimizer for the MLP classifier, highlighting their roles in enhancing the model's learning efficiency and predictive performance.

Moreover, we have expanded on the methodological approach to determining the optimal number of folds for TSCV through a grid search optimization process, ensuring a comprehensive learning and validation phase. The section further delves into the construction of the meta-classifier, explaining how the base classifiers' weighted predictions form a new feature matrix that embodies the ensemble's collective intelligence. The weighting mechanism, proportional to each classifier's performance, and the subsequent training of the MLP meta-classifier on this composite feature matrix are now clearly outlined, emphasizing their significance in refining the ensemble's predictive accuracy. We are grateful for the opportunity to refine our work based on your feedback.
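The eight-fold time-series cross-validation described above can be sketched with scikit-learn's `TimeSeriesSplit`; the data here are a synthetic stand-in for time-ordered EEG trials, not the study's datasets:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# toy feature matrix ordered in time, standing in for EEG trials
X = np.arange(90).reshape(-1, 1)

tscv = TimeSeriesSplit(n_splits=8)   # eight folds, as chosen by the grid search
folds = list(tscv.split(X))
for train_idx, test_idx in folds:
    # every training fold strictly precedes its validation fold in time,
    # so temporal structure in the signal is never leaked into training
    assert train_idx.max() < test_idx.min()
```

This is what distinguishes TSCV from ordinary k-fold: folds are expanding windows rather than random partitions, respecting the temporal dependencies of EEG data.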

 

  1. The advantages of the COWSE model were mentioned in the article, but the limitations and challenges that may be encountered during the experimental process were not fully discussed. Suggest a brief discussion on possible limitations and challenges during the experimental process to provide a more comprehensive and balanced perspective.

Answer:

Thank you for your constructive feedback regarding discussing the COWSE model's limitations and challenges. We agree that a balanced perspective, which includes potential limitations and challenges, is crucial for a comprehensive understanding of the model's capabilities and areas for improvement.

In response to your valuable comments, we have revised the subsection on "4.9. Limitations and Future Research Directions" within the discussion section of our manuscript. Our revisions aim to provide a more detailed exploration of the COWSE model's limitations, including its computational complexity, real-world applicability, and adaptability to user variability. We have also outlined specific future research directions that address these limitations, such as optimization strategies to reduce computational demand, the impact of different data preprocessing techniques, expanding the classifier ensemble for improved performance, and tailoring BCI systems to individual users for enhanced personalization and real-time performance.

These revisions offer a balanced view, acknowledging the challenges and setting a clear roadmap for future research to address these issues. We hope these amendments satisfactorily address your concerns and contribute a more comprehensive and balanced perspective on the COWSE model.

 

  1. It is necessary to clearly describe the training process of the ensemble model, including how to combine base classifiers, adjust weights, etc., and explain how to evaluate the performance of the ensemble model.

Answer:

Thank you for your insightful comments and suggestions. We appreciate your feedback, which prompted us to significantly enhance our paper, particularly concerning the training and evaluation process of the COWSE model.

To address your concerns, we have comprehensively revised the subsection "4.6. Training and Evaluation of the COWSE Model" in the discussion section of our manuscript. This revision includes a detailed account of the training regimen for each base classifier (SVM-rbf, MLP, RF, and ET), emphasizing TSCV to cater to the EEG data's temporal dependencies. We have clarified the optimization algorithms employed for hyperparameter tuning, the rationale behind selecting eight folds through grid search optimization, and the specific methodologies utilized for the MLP classifier, including the Cross-Entropy Loss function and the Adam Optimizer.

Furthermore, we elaborated on the novel approach of constructing a meta-classifier by weighting the base classifiers' predictions, detailing how this process harnesses the ensemble's collective intelligence for superior predictive performance.  
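A compact sketch of this weighted-stacking idea follows. It uses synthetic data, a simplified performance-proportional weighting, and a plain chronological train/test split in place of the paper's full TSCV pipeline, so it illustrates the structure rather than reproducing the COWSE implementation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=12, random_state=0)
# shuffle=False keeps trial order, in the spirit of time-series validation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)

bases = [SVC(kernel="rbf", probability=True, random_state=0),
         RandomForestClassifier(random_state=0),
         ExtraTreesClassifier(random_state=0)]
for clf in bases:
    clf.fit(X_tr, y_tr)

# weights proportional to each base classifier's performance (illustrative only)
acc = np.array([clf.score(X_tr, y_tr) for clf in bases])
w = acc / acc.sum()

def weighted_features(X_):
    # weighted base predictions form the meta-classifier's feature matrix
    return np.column_stack([wi * clf.predict_proba(X_)[:, 1]
                            for wi, clf in zip(w, bases)])

# an MLP meta-classifier learns from the composite feature matrix
meta = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(weighted_features(X_tr), y_tr)
```

The key structural point is that the meta-classifier never sees the raw features: its inputs are the base classifiers' weighted predictions, so it learns how to arbitrate among them.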

 

 

  1. The conclusion section summarizes the main findings of the study, but does not clearly state the specific directions for future research. It is recommended to provide a clear overview of the specific research directions in MI EEG signal classification research in the future, in order to lead the development of future research.

 

Answer:

 

Thank you for your insightful feedback regarding clarifying future research directions in our conclusion. Considering your advice, we have updated the conclusion section to summarize the main findings succinctly and directly outline specific future research directions. This includes addressing computational efficiency, enhancing real-world application robustness, and improving user adaptability of the COWSE model. These revisions aim to guide future MI EEG signal classification research more clearly. We appreciate your guidance in strengthening our paper and hope the updated conclusion aligns well with your expectations.

 

  1. The language and writing style in the text are clear and professional, but there are still some minor grammar or typographical errors. Suggest correcting any errors in the article to maintain its quality and professionalism.

Answer:

Thank you very much for your constructive feedback! We've taken your comment to heart and have thoroughly updated the manuscript by reviewing the entire document for grammar, formatting, and typographical errors. We utilized both automated tools and enlisted the help of a professional proofreading service to ensure our presentation is accurate and clear. We aim to meet the high standards expected for publication, thereby enhancing our manuscript's readability and overall quality. We're grateful for your insights, as they've helped us improve our work significantly.

Reviewer 3 Report

Comments and Suggestions for Authors

As a reviewer, I have carefully assessed the manuscript titled "Enhancing Motor Imagery EEG Classification with a Correlation-Optimised Weighted Stacking Ensemble Model." While the paper presents a novel approach to EEG signal classification and offers valuable insights into ensemble learning strategies, I recommend major revisions before considering it for publication. Below are the key points for revision:

-Please try not to use abbreviations in the title. EEG must be explained.

-The abstract lacks clarity in explaining the specific methods used in the development and evaluation of the Correlation-Optimized Weighted Stacking Ensemble (COWSE) model. It would be beneficial to provide more explicit details on how the ensemble model was constructed, trained, and evaluated.

-The abstract mentions "sixteen diverse machine learning classifiers" without specifying the types of classifiers used or the rationale behind their selection. Providing this information would enhance the clarity and transparency of the study.

-The abstract contains a bold claim of achieving "classification accuracies upwards of 98.16%" on the BNCI2014-002 dataset without providing sufficient context or validation of these results. It is essential to include information on the experimental setup, validation procedures, and statistical significance to support such claims.

-Additionally, the abstract could benefit from more nuanced language when describing the performance of the COWSE model, avoiding absolute terms like "significantly outperforms" and providing a more balanced interpretation of the results.

-While the abstract briefly mentions the potential contributions of the COWSE model to Brain-Computer Interfaces (BCI) research and complex signal classification tasks, it could provide more context on the existing challenges in EEG signal classification and the specific gaps that the proposed model aims to address.

-Providing a brief overview of the current state-of-the-art methods in MI EEG signal classification and highlighting how the COWSE model offers improvements or innovations would enhance the abstract's impact and significance.

-In the ‘Methodology’ section, it would be nice to elaborate on the rationale behind methodological choices, discuss potential limitations or biases associated with these choices, and explore alternative approaches that could have been pursued.

-How did the authors evaluate the precision and accuracy of the classifiers and networks used in this study?

-In the conclusion section, please provide recommendations for future studies; identifying specific research questions or challenges that warrant further exploration and discussing potential real-world applications of the COWSE model would enrich the conclusion section.

-The conclusion could benefit from a more balanced and nuanced discussion of the study's contributions and limitations. Acknowledging the inherent uncertainties and caveats associated with EEG signal classification research would add depth to the conclusion.

-It is essential to provide context for the broader implications of the study's findings within the field of BCI technology and EEG signal processing, considering both theoretical advancements and practical implications for end-users and researchers.

Author Response

As a reviewer, I have carefully assessed the manuscript titled "Enhancing Motor Imagery EEG Classification with a Correlation-Optimized Weighted Stacking Ensemble Model." While the paper presents a novel approach to EEG signal classification and offers valuable insights into ensemble learning strategies, I recommend major revisions before considering it for publication. Below are the key points for revision:

  1. Please try not to use abbreviations in the title. EEG must be explained.

Answer:

Thank you for your valuable feedback regarding the use of abbreviations in the title of our manuscript. We appreciate your attention to detail and agree that clarity is paramount for reaching a broad audience.

Following your suggestion, we have revised the title to fully spell out the abbreviation "EEG". The title now reads: "Enhancing Motor Imagery Electroencephalography Classification with a Correlation-Optimized Weighted Stacking Ensemble Model". Thank you once again for your constructive feedback.

 

 

  1. The abstract lacks clarity in explaining the specific methods used in the development and evaluation of the Correlation-Optimized Weighted Stacking Ensemble (COWSE) model. It would be beneficial to provide more explicit details on how the ensemble model was constructed, trained, and evaluated.

Answer:

Thank you for your valuable feedback regarding the clarity of the abstract in our manuscript. In response to your comments, we have revised the abstract to highlight the specifics of the ensemble model's construction and evaluation process better.

To further address your concerns, we wish to point out that detailed subsections within the methodology and discussion sections of the paper comprehensively describe the model's construction (2.5. Ensemble Model Construction, 4.5. Constructing the COWSE Model, 4.6. Training and Evaluation of the COWSE Model). These include a multi-stage process involving error correlation analysis for selecting base classifiers, dynamic weight assignment based on performance, and the employment of a meta-classifier trained on these weighted predictions. This approach is a departure from conventional ensemble methods, leveraging a deeper analysis of error patterns and classifier performance to enhance the robustness and accuracy of the MI EEG classification.
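To illustrate the error-correlation step on hypothetical data (the error patterns below are invented; the paper applies the analysis to its sixteen candidate classifiers on real EEG trials):

```python
import numpy as np

# binary error indicators for four hypothetical base classifiers
# (1 = the classifier misclassified that validation trial)
errors = np.array([[1, 0, 0, 1, 0, 1, 0, 0],
                   [1, 0, 0, 1, 0, 1, 0, 1],   # fails on much the same trials as clf 0
                   [0, 1, 0, 0, 1, 0, 1, 0],   # near-complementary error pattern
                   [0, 1, 1, 0, 0, 0, 1, 0]], dtype=float)

corr = np.corrcoef(errors)                     # pairwise error correlations
# mask the diagonal, then pick the least-correlated pair: classifiers that
# err on different trials are most likely to correct each other's mistakes
masked = corr + 2.0 * np.eye(len(errors))
i, j = np.unravel_index(np.argmin(masked), masked.shape)
```

Selecting base classifiers with low (ideally negative) error correlation is what makes the ensemble "greater than the sum of its parts": disagreement on errors, not raw individual accuracy, drives the gain from combining them.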

We believe these revisions and clarifications in the abstract and detailed explanations within the manuscript adequately address your points. We hope this response clarifies our methodology and the innovative aspects of the COWSE model.

 

  1. The abstract mentions "sixteen diverse machine learning classifiers" without specifying the types of classifiers used or the rationale behind their selection. Providing this information would enhance the clarity and transparency of the study.

Answer:

First of all, thank you for your insightful comment. While we have thoroughly updated the abstract, we have also elaborated on our choice and description of the classifiers in a comprehensive subsection within the methodology section. This subsection, titled "2.4. Classifier Selection and Description," details our selection of a diverse pool of 16 ML classifiers, each chosen for its unique advantages and historical performance in MI EEG tasks. The classifiers represent various algorithmic families and were selected based on their ability to effectively capture the intricacies of high-dimensional neural data. We provide a detailed table (Table 2) in the paper that outlines each classifier's key characteristics and typical application contexts, underlining our rationale for their inclusion and the expected synergistic effect they bring to our ensemble model. This selection aims to leverage different learning strategies and decision-making processes, thus enhancing the model's generalization and adaptability across varying EEG signal datasets. We hope this updated version of the abstract and the detailed subsection in the methodology fully address your concerns regarding the classifiers used in our study.

 

  1. The abstract contains a bold claim of achieving "classification accuracies upwards of 98.16%" on the BNCI2014-002 dataset without providing sufficient context or validation of these results. It is essential to include information on the experimental setup, validation procedures, and statistical significance to support such claims.

Answer:

Thank you for your comment. We acknowledge the concern raised regarding the bold claim of achieving classification accuracies upwards of 98.16% on the BNCI2014-002 dataset. To address this, we have provided a comprehensive approach to evaluate our model, detailed in Table 8 and Figure 5, offering a robust overview of our results' experimental setup and validation procedures.

Table 8 presents the performance of our COWSE model across multiple datasets. Specifically, for the BNCI2014-002 dataset, the model achieved an accuracy of 98.16%, with precision, recall, and F1-score all closely aligned, underscoring the model's consistency and reliability in classification tasks.

Figure 5 visualizes the comparative performance analysis of the COWSE model against six top-performing classifiers for each dataset, illustrating the performance superiority of the COWSE model. This visual representation demonstrates our model's enhanced capability and the value of our correlation-optimized approach in ensemble construction.

In the discussion section, we further elucidate the training and evaluation of the COWSE model. Subsection “4.6. Training and Evaluation of the COWSE Model” delves into our rigorous training regimen, grounded in TSCV and strategic base classifier training, to ensure comprehensive learning and validation. We meticulously fine-tune each classifier's hyperparameters and construct a meta-classifier to integrate the weighted predictions, further enhancing predictive accuracy.

Furthermore, we compare our findings with related works, emphasizing the novel aspects of our methodology, including the integration of diverse classifiers and methodological innovations in Table 10. This comparison highlights the COWSE model's advancements in MI EEG signal classification, setting new benchmarks for accuracy and robustness.

 

  1. Additionally, the abstract could benefit from more nuanced language when describing the performance of the COWSE model, avoiding absolute terms like "significantly outperforms" and providing a more balanced interpretation of the results.

Answer:

Thank you very much for your constructive feedback regarding the language used in the abstract of our paper. We wholeheartedly agree that a more nuanced and balanced interpretation of our results is essential for accurately conveying the significance and scope of our research findings.

In response to your comment, we have thoroughly revised the abstract to reflect better the complexities and challenges of EEG signal classification for MI tasks within the BCI field. We have carefully adjusted the language to avoid absolute terms and instead have provided a more measured discussion of the performance of our COWSE model.

The revised abstract now emphasizes the innovative ensemble learning framework of the COWSE model, its methodological approach to integrating multiple ML classifiers, and carefully considering strengths and weaknesses among classifiers. We have also clarified the model's achievements by specifying the context of its performance, highlighting its contribution to advancing MI EEG classification without overstating its superiority.

We believe these changes fully address your concerns and enrich the paper by offering a more balanced and precise description of our work. We appreciate your guidance in enhancing the clarity and impact of our manuscript.

 

  1. While the abstract briefly mentions the potential contributions of the COWSE model to Brain-Computer Interfaces (BCI) research and complex signal classification tasks, it could provide more context on the existing challenges in EEG signal classification and the specific gaps that the proposed model aims to address.

Answer:

Thank you for your valuable feedback. Your comments have prompted us to refine our manuscript, ensuring it more accurately reflects the challenges and gaps our research addresses within the domain of EEG signal classification for BCIs.

In response, we have expanded our abstract to elucidate the intricate challenges of EEG signal classification, particularly for MI tasks, and detailed how our COWSE model addresses these specific challenges. We've emphasized the model's innovative integration of multiple ML classifiers, leveraging error correlation analysis and performance metrics evaluation to optimize the ensemble's performance.

Furthermore, we have revised the introduction to better contextualize the existing literature's contributions and more clearly delineate the gaps our research seeks to fill. This includes a comprehensive explanation of how our ensemble model significantly enhances classification accuracy and reliability through strategic classifier selection and error correlation analysis.

To address your point directly, we've added a new paragraph at the end of the introduction highlighting the key advancements our study offers to the MI EEG classification field. This includes our novel ensemble approach, the introduction of an innovative weight assignment strategy for base classifiers, the integration of a meta-classifier trained on weighted predictions, extensive validation across four BCI competition datasets, and a comprehensive analysis of parameter optimization implications. Thank you once again for your constructive comments.

 

  3. Providing a brief overview of the current state-of-the-art methods in MI EEG signal classification and highlighting how the COWSE model offers improvements or innovations would enhance the abstract's impact and significance.

Answer:

Thank you for your constructive feedback and the opportunity to enhance our manuscript further. In response to your comment, we have updated our abstract and introduction to present the COWSE model more comprehensively and contextualize it within the broader landscape of MI EEG signal classification research.

We refined the abstract to emphasize the innovative aspects of the COWSE model, highlighting its unique integration of sixteen machine learning classifiers through a weighted stacking approach. This update includes a clearer explanation of how the model optimizes performance by balancing the strengths and weaknesses of each classifier based on error correlation analysis and performance metrics evaluation across benchmark datasets. We also noted the model's significant advancement in classification accuracy, particularly on the BNCI2014-002 dataset.

In the introduction, we expanded our discussion on the current state-of-the-art MI EEG signal classification methods. We acknowledge the contributions of various studies to the field and identify a gap in the literature regarding a systematic method that combines weighted averaging and stacking, informed by error correlation across multiple datasets. Our updates describe how the COWSE model is designed to fill this gap, offering a detailed explanation of our novel ensemble approach, the strategic selection of base classifiers, and the integration of a meta-classifier trained on weighted predictions.

To address your comment further, we added a new paragraph at the end of the introduction detailing the key advancements our study brings to the field. This addition outlines the novel aspects of our ensemble approach, including the innovative weight assignment strategy and the use of a meta-classifier. It also touches on the extensive validation of our approach across multiple BCI competition datasets and the comprehensive analysis of parameter optimization.

Thank you once again for your insightful feedback. We hope our revisions adequately address your comments and that our manuscript is now better positioned to make a meaningful contribution to the field.

 

  4. In the 'Methodology' section, it would be helpful to elaborate on the rationale behind methodological choices, discuss potential limitations or biases associated with these choices, and explore alternative approaches that could have been pursued.

Answer:

Thank you for your insightful comments and suggestions. Your feedback has been instrumental in enhancing the depth and clarity of our manuscript, particularly concerning the rationale behind our methodological choices, the potential limitations or biases of these choices, and alternative approaches that could have been considered.

To address your comments, we have updated several sections of our manuscript. Specifically, we have revised the introduction to articulate better the literature gap regarding a method that systematically unifies weighted averaging and stacking techniques informed by error correlation across diverse datasets. This revision aims to provide a clearer justification for our methodological choices and highlight the novelty of our approach.

In the methodology section, we have expanded the "2.5. Ensemble Model Construction" subsection to offer a detailed description of the COWSE model's construction process. This includes a comprehensive explanation of the error correlation analysis, the weight assignment based on performance ranking, and the stacking with a meta-classifier. These updates aim to elucidate the strategic considerations behind our ensemble model and its potential to enhance the accuracy and reliability of BCI applications.
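The manuscript contains the full construction details; purely as an illustration of what an error correlation analysis between base classifiers can look like, here is a minimal sketch (the error vectors and classifier names are invented for the example, not taken from the study):

```python
import numpy as np

# Hypothetical 0/1 error vectors (1 = misclassified) for three classifiers
# evaluated on the same validation samples.
errors = {
    "svm": np.array([0, 1, 0, 0, 1, 0, 1, 0]),
    "rf":  np.array([0, 1, 0, 1, 1, 0, 0, 0]),
    "mlp": np.array([1, 0, 0, 0, 0, 1, 1, 0]),
}

names = list(errors)
E = np.array([errors[n] for n in names])

# Pairwise Pearson correlation of the error patterns: low correlation means
# the classifiers fail on different samples and therefore complement each
# other, which is the property an ensemble wants to exploit.
corr = np.corrcoef(E)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} vs {names[j]}: {corr[i, j]:+.2f}")
```

This kind of matrix can then guide which classifiers to combine, favoring pairs with weakly correlated errors.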

Furthermore, we have addressed the limitations and future research directions in the discussion section (Subsection 4.9). This includes acknowledging the computational complexity of our model, its real-world applicability, and its adaptability to user variability. We have also outlined several avenues for future research that could mitigate these limitations and further advance the field of BCI. We are grateful for the opportunity to improve our work and hope you find these changes satisfactory.

 

  5. How did the authors evaluate the precision and accuracy of the classifiers and networks used in this study?

Answer:

Thank you for your insightful comment. We appreciate the opportunity to clarify how we evaluated the precision and accuracy of the classifiers and networks in our study. As detailed in the "2.3. Model Training, Validation, and Evaluation" subsection within the Methodology section, our evaluation process involved a comprehensive approach. We implemented an 8-fold TSCV to ascertain the effectiveness of our classifiers, leveraging hyperparameter tuning with a grid search algorithm to fine-tune the classifiers and the fold number for TSCV.
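As an illustration of the evaluation setup described above (8-fold TSCV with grid-search hyperparameter tuning), the following sketch uses scikit-learn's TimeSeriesSplit and GridSearchCV on synthetic data; the parameter grid and data are placeholders, not the study's actual configuration:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit, GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for MI EEG features: 200 time-ordered samples, 8 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = rng.integers(0, 2, size=200)

# 8-fold time-series cross-validation preserves temporal order:
# each fold trains on past samples and validates on future ones.
tscv = TimeSeriesSplit(n_splits=8)

# Grid search over hyperparameters, scored fold-by-fold under TSCV.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01]},
    cv=tscv,
)
grid.fit(X, y)
print(grid.best_params_)  # best hyperparameters found under TSCV
```

Unlike ordinary k-fold splitting, TimeSeriesSplit never lets validation data precede its training data, which matches the temporal-dependency concern the response raises.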

We employed a variety of performance metrics, including Accuracy, Precision, Recall, F1 Score, AUC-ROC, and Cohen's Kappa, to provide a multifaceted assessment of classifier performance. Additionally, the impact of training data volume on classifier effectiveness was rigorously analyzed across a range of dataset sizes.
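All of the metrics listed above are available in scikit-learn; a small self-contained sketch with made-up predictions (not data from the study):

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, cohen_kappa_score)

# Hypothetical labels, hard predictions, and class-1 scores from a binary
# MI classifier.
y_true  = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred  = [0, 1, 0, 0, 1, 1, 1, 1]
y_score = [0.2, 0.9, 0.4, 0.1, 0.8, 0.6, 0.7, 0.95]

metrics = {
    "accuracy":  accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred),
    "recall":    recall_score(y_true, y_pred),
    "f1":        f1_score(y_true, y_pred),
    "auc_roc":   roc_auc_score(y_true, y_score),  # needs scores, not labels
    "kappa":     cohen_kappa_score(y_true, y_pred),
}
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```

Note that AUC-ROC is computed from continuous scores while the other metrics use hard predictions, which is why both `y_pred` and `y_score` appear.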

For our ensemble model, COWSE, detailed in Subsection "4.6. Training and Evaluation of the COWSE Model" of the Discussion section, we followed a meticulous training regimen rooted in TSCV to account for the temporal dependencies of EEG data. This involved fine-tuning the hyperparameters of the base classifiers (SVM-rbf, MLP, RF, and ET) across multiple folds, using the scikit-learn library's built-in optimization algorithms. A key step was constructing a meta-classifier from the weighted predictions of the base classifiers, with the MLP meta-classifier integrating these to enhance predictive accuracy.
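A minimal sketch of the weighted-stacking idea described above, with synthetic data, an illustrative subset of base classifiers, and invented weights (in COWSE the weights come from the performance ranking and error correlation analysis reported in the manuscript):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X, y = rng.normal(size=(300, 6)), rng.integers(0, 2, size=300)
X_train, y_train = X[:200], y[:200]   # base-classifier training split
X_hold, y_hold = X[200:], y[200:]     # hold-out split for the meta-layer

# Base classifiers with hypothetical weights (placeholders for this sketch).
bases = {
    "svm_rbf": (SVC(kernel="rbf", probability=True, random_state=0), 0.30),
    "rf":      (RandomForestClassifier(n_estimators=50, random_state=0), 0.25),
    "et":      (ExtraTreesClassifier(n_estimators=50, random_state=0), 0.25),
    "mlp":     (MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                              random_state=0), 0.20),
}

# Each base output (class-1 probability) is scaled by its weight; the
# weighted columns form the meta-features for the stacking layer.
meta_features = []
for clf, w in bases.values():
    clf.fit(X_train, y_train)
    meta_features.append(w * clf.predict_proba(X_hold)[:, 1])
Z = np.column_stack(meta_features)

# MLP meta-classifier trained on the weighted base predictions.
meta = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
meta.fit(Z, y_hold)
print(Z.shape)  # one weighted column per base classifier
```

In a faithful implementation the meta-features would be generated out-of-fold under TSCV rather than on a single hold-out split; the sketch only shows the data flow.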

Results and findings are presented in Tables 3, 4, 5, 6, and 8, showcasing the comprehensive performance evaluation of our classifiers and the COWSE model's superior predictive performance.

 

  6. In the conclusion section, please provide recommendations for future studies; identifying specific research questions or challenges that warrant further exploration and discussing potential real-world applications of the COWSE model would enrich the conclusion.

Answer:

Thank you for your insightful comments and constructive suggestions regarding our manuscript. We truly appreciate your feedback, which has guided us to enhance the quality and impact of our work.

In response to your valuable suggestion, we have revised the conclusion section to better reflect recommendations for future studies, identify specific research questions and challenges, and discuss potential real-world applications of the COWSE model. Specifically, we have acknowledged the inherent challenges in EEG signal classification and proposed directions for future research focusing on computational efficiency, real-world robustness, and adaptation to user variability. Furthermore, we have elaborated on the potential of expanding classifier ensembles and customizing systems for real-time, personalized use, highlighting the implications of our model for healthcare monitoring and assistive technologies. This revision aims to provide a clearer roadmap for future research and to emphasize the interdisciplinary and user-centered approach necessary for advancing BCI technologies. Thank you once again for your invaluable feedback.

 

  7. The conclusion could benefit from a more balanced and nuanced discussion of the study's contributions and limitations. Acknowledging the inherent uncertainties and caveats associated with EEG signal classification research would add depth to the conclusion.

Answer:

Thank you very much for your insightful comments. We greatly appreciate your feedback on enhancing the conclusion section to offer a more balanced and nuanced discussion of our study's contributions and limitations.

In response to your valuable input, we have revised the conclusion to highlight the significant leap the COWSE model represents in MI EEG signal classification and acknowledge the inherent challenges and limitations within EEG signal classification research. We have discussed the complexity of decoding brain signals, variability in signal quality across individuals, and the implications these factors have on the model's reliability and applicability. Furthermore, we emphasized the importance of future research focusing on computational demands, enhancing real-world robustness, and adapting to user variability. This revision aims to provide a more comprehensive view of the study's context, implications for future research, and the interdisciplinary collaborations needed to advance BCI technologies.

We hope these modifications adequately address your concerns and contribute to a deeper understanding of the potential and limitations of our work.

 

  8. It is essential to provide context for the broader implications of the study's findings within the field of BCI technology and EEG signal processing, considering both theoretical advancements and practical implications for end-users and researchers.

Answer:

Thank you for your insightful comments and for highlighting the importance of providing context for the broader implications of our study's findings in BCI technology and EEG signal processing. Your feedback has been invaluable in enhancing the depth and relevance of our manuscript.

In response to your suggestion, we have added a new subsection in the discussion section titled "4.8. Adaptability and Implications of the COWSE Model for Diverse Datasets." This addition addresses our study's theoretical advancements and practical implications for end-users and researchers, emphasizing the COWSE model's flexibility and applicability across various BCI applications.

The subsection elaborates on the following key areas:

  • The model's robustness to signal quality variations and its ability to mitigate artifacts through preprocessing and feature extraction.
  • The ensemble approach's capacity to accommodate different MI tasks and reduce susceptibility to inter-subject variability, demonstrating high classification accuracy across subjects with varying BCI proficiency levels.
  • The weighting scheme's effective handling of imbalanced datasets by dynamically adjusting classifier influence, ensuring equitable class representation.
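As a toy sketch of how a performance-based weighting scheme can adjust classifier influence (the scores and classifier names here are hypothetical, not the values used in the study):

```python
# Hypothetical validation F1 scores for four base classifiers.
scores = {"svm_rbf": 0.82, "mlp": 0.79, "rf": 0.76, "et": 0.74}

# Rank-based weight assignment: better-ranked classifiers receive larger
# weights, normalized so that all weights sum to 1.
ranked = sorted(scores, key=scores.get, reverse=True)
n = len(ranked)
raw = {name: n - i for i, name in enumerate(ranked)}  # ranks n, n-1, ..., 1
total = sum(raw.values())
weights = {name: r / total for name, r in raw.items()}
print(weights)  # svm_rbf gets 0.4, mlp 0.3, rf 0.2, et 0.1
```

Recomputing the scores (e.g. per class, or after rebalancing) and re-deriving the weights is one simple way such a scheme can adapt classifier influence to an imbalanced dataset.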

Furthermore, we discuss the implications of adapting the COWSE model to new datasets, including necessary adjustments in preprocessing, feature extraction, hyperparameter tuning, validation approaches, and performance metrics reassessment. We also highlight the potential for integrating the model with other data modalities to enhance its applicability and robustness in BCI systems.

We believe these additions comprehensively address the broader implications of our findings, underscoring the COWSE model's adaptability and its significant potential to advance personalized and accurate BCI technologies.

Again, thank you for your constructive feedback, which has been instrumental in refining our manuscript.

 

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

The authors responded well to the suggestions put forward by the reviewers. The quality of the article has been greatly improved through revisions. Therefore, the reviewer's opinion on the revised version of the manuscript is "accept".

Reviewer 3 Report

Comments and Suggestions for Authors

all the comments have been addressed correctly
