Article
Peer-Review Record

Evaluating Machine Learning Stability in Predicting Depression and Anxiety Amidst Subjective Response Errors

Healthcare 2024, 12(6), 625; https://doi.org/10.3390/healthcare12060625
by Wai Lim Ku 1 and Hua Min 2,*
Reviewer 1:
Reviewer 2:
Reviewer 3:
Submission received: 4 January 2024 / Revised: 25 February 2024 / Accepted: 4 March 2024 / Published: 10 March 2024
(This article belongs to the Special Issue The 10th Anniversary of Healthcare—Health Informatics and Big Data)

Round 1

Reviewer 1 Report (Previous Reviewer 1)

Comments and Suggestions for Authors

Title: Evaluating Machine Learning Stability in Predicting Depression and Anxiety Amidst Subjective Response Errors

 

1. The authors should check for repeated definitions of abbreviations and acronyms. For example,

Major Depressive Disorder (MDD) and Generalized Anxiety Disorder (GAD) are each defined twice.

2. Three keywords are not enough; adding more keywords is suggested.

3. Introduce a related-work review after the introduction section.

4. The proposed methods lack sufficient proofs and mathematical formulation.

5. As inferred from Figure 5, the proposed method produces no significant differences; please also check Figure 8.

 

6. After the results section, introduce a discussion section and describe similar and dissimilar findings. Citations are essential here to compare your results with the performance of existing methods.

 

Author Response

We would like to express our sincere gratitude for the time and effort you have dedicated to reviewing our manuscript. Your insightful comments and constructive criticism are invaluable to us, and we believe they have significantly contributed to the enhancement of our paper. We have carefully considered each point you have raised and have made corresponding revisions to our manuscript. Below, we provide detailed responses to your comments, outlining the changes we have made and clarifying any points of concern.

Author Response File: Author Response.pdf

Reviewer 2 Report (New Reviewer)

Comments and Suggestions for Authors
1. The topic is interesting, and its content fits the scope of the journal.

2. The authors should strengthen the manuscript's introduction so that the problem motivation, research gap, key contributions, and significance of the proposed system are clear.

3. The Related Reviews, Methodology, and Implementation parts should be clearly identified in the technical content.

4. The abstract and conclusion should correspond to each other clearly.

5. The conclusion should include a future research direction. 

6. All of the algorithms used in the work are established methods applied to a standard dataset. How was the performance of the classifier models validated? Were overfitting scenarios and unbalanced-data situations handled during model building?

7. The authors' key contributions are clearly highlighted in the introduction.

 

8. The problem formulation, analysis, and execution as a whole appear sound. However, the manuscript lacks clarity in the exact problem identification and in the gap analysis based on pre-existing research.
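The unbalanced-data question in point 6 can be made concrete with a small sketch of "balanced" class weighting. This is illustrative only, not taken from the manuscript; the weighting rule mirrors the common convention popularized by scikit-learn, implemented here in pure Python:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Compute 'balanced' class weights:
    weight(c) = n_samples / (n_classes * count(c)).
    Rare classes receive proportionally larger weights, so a classifier
    trained with these weights is penalized more for missing them."""
    counts = Counter(labels)
    n_samples = len(labels)
    n_classes = len(counts)
    return {c: n_samples / (n_classes * k) for c, k in counts.items()}

# Hypothetical unbalanced screening labels: 8 negatives, 2 positives.
labels = [0] * 8 + [1] * 2
weights = balanced_class_weights(labels)
# The minority class (1) receives four times the weight of the majority class (0).
```

Passing such weights to a learner (or resampling the training folds) is one standard way to address the imbalance and overfitting concerns raised above.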

 

 

 

Author Response

We would like to express our sincere gratitude for the time and effort you have dedicated to reviewing our manuscript. Your insightful comments and constructive criticism are invaluable to us, and we believe they have significantly contributed to the enhancement of our paper. We have carefully considered each point you have raised and have made corresponding revisions to our manuscript. We provide detailed responses to your comments, outlining the changes we have made and clarifying any points of concern.

Author Response File: Author Response.pdf

Reviewer 3 Report (New Reviewer)

Comments and Suggestions for Authors

Dear authors, congratulations on the great paper! To improve some details please consider implementing the following suggestions:

1) Please provide more details about the ratio between training set and validation set that was used for training and validating models.

2) Please provide more details about system configuration that was used to train the model.

3) Please add some examples to conclusion section where this approach could be used in other areas of healthcare, like prevention of some diseases, predictions of some symptoms etc.

Author Response

We would like to express our sincere gratitude for the time and effort you have dedicated to reviewing our manuscript. Your insightful comments and constructive criticism are invaluable to us, and we believe they have significantly contributed to the enhancement of our paper. We have carefully considered each point you have raised and have made corresponding revisions to our manuscript. We provide detailed responses to your comments, outlining the changes we have made and clarifying any points of concern.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report (Previous Reviewer 1)

Comments and Suggestions for Authors

Title: Evaluating Machine Learning Stability in Predicting Depression and Anxiety Amidst Subjective Response Errors

The authors have tried to address my comments; however, I am not yet convinced.

1. The content of the related-work review is insufficient, and very few citations are referenced. The related review should cite 20 or more works and cover all previous work relevant to this study.

2. Mathematical content is entirely missing from the proposed method.

Author Response

Dear Reviewer,

Thank you for your valuable feedback and the opportunity to further refine our manuscript. Please find below our responses to the comments and the actions we have taken to address the concerns raised.

Wai Lim and Hua

Author Response File: Author Response.pdf

Reviewer 2 Report (New Reviewer)

Comments and Suggestions for Authors

KUDOS!!

The revised draft has successfully incorporated and addressed all the concerns.

Author Response

Dear Reviewer,

Thank you for your valuable feedback.

Wai Lim and Hua

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Title: Evaluating Machine Learning Stability in Predicting Depression and Anxiety Amidst Subjective Response Errors

The concept of this manuscript is good; however, the proposed method appears similar to existing methods.

1. Avoid pronouns such as we, you, etc., throughout the manuscript.

2. The overall contents are not sufficient.

3. The proposed method's working mechanism, mathematical proof, and algorithm details are missing.

4. Neither Figure 1 nor Figure 4 shows any difference among the methods' results. What is the conclusion?

5. In the discussion, only two existing works are compared and contrasted, so the discussion is not well developed.

 

6. XGBoost is nowhere mentioned in the results graphs, and there is no evidence to support the claim that XGBoost is best.

Author Response

We thank the reviewers for their constructive comments. We have prepared a point-by-point response to all comments from the three reviewers and revised our paper based on their comments.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

1. The proposed work is missing from this paper.

 

2. The alignment of Table 1 should be corrected.

 

3. The methods used in this paper are quite dated. Consider adopting newer machine learning models.

 

4. Computational time, error rate, and loss function should be calculated.

 

5. Where are the comparisons for the proposed work?

 

6. Where is the ablation testing?

 

7. What are the potential sources of subjective response errors in self-reported surveys?

 

8. How do subjective response errors impact the performance of machine learning models for predicting depression and anxiety?

 

9. How can we accurately measure the extent to which subjective response errors degrade the performance of machine learning models for predicting depression and anxiety?

 

10. What are the specific types of subjective response errors that have the most significant impact on model performance? 

 

11. What are the practical implications of subjective response errors on the use of machine learning models for predicting depression and anxiety in real-world settings? 

 

12. What techniques can be employed to improve the quality of self-reported data and reduce the prevalence of subjective response errors? 

 

13. Can machine learning algorithms be trained to detect and correct subjective response errors in self-reported data? 

 

14. How can we strike a balance between achieving high accuracy in machine learning models and ensuring fairness across different demographic groups, particularly when subjective response errors may vary across these groups? 

 

15. How can we effectively integrate subjective self-reported data with objective data sources, such as electronic health records or wearable sensor data, to improve the accuracy and reliability of machine learning models for predicting depression and anxiety?

Comments on the Quality of English Language

It is ok

Author Response

We thank the reviewers for their constructive comments. We have prepared a point-by-point response to all comments from the three reviewers and revised our paper based on their comments.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

Review of "Evaluating Machine Learning Stability in Predicting Depression and Anxiety Amidst Subjective Response Errors"

A brief summary 

This research evaluates the performance of four machine learning algorithms—Random Forest, XGBoost, Logistic Regression, and Naïve Bayesian—in predicting Major Depressive Disorder (MDD) and Generalized Anxiety Disorder (GAD) using electronic health records and survey data.

While all algorithms show good accuracy with pristine survey data, their performance varies in the presence of biased or erroneous responses. Notably, XGBoost demonstrates stability and excels in identifying true positive cases even when subjective response inaccuracies are present, emphasizing the importance of algorithmic resilience in mental health prediction based on self-reported data.

 

General concept comments


Article:

- Results: the evaluation uses 30% of the data for held-out testing and 5-fold cross-validation. 10-fold cross-validation, which is generally recommended, should be used for a more reliable evaluation of model performance.

- Discussion: it would be interesting to add a comparison to the method described in the following paper: "Predicting Depression, Anxiety, and Stress Levels from Videos Using the Facial Action Coding System", doi: 10.3390/s19173693.
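The 10-fold cross-validation recommended above can be illustrated with a small self-contained sketch. This is pure Python with a hypothetical `stratified_k_fold` helper and synthetic labels, not the authors' actual data or pipeline:

```python
import random

def stratified_k_fold(labels, k=10, seed=0):
    """Yield (train_idx, test_idx) pairs for stratified k-fold cross-validation.

    Indices of each class are shuffled and dealt round-robin into k folds,
    so every test fold preserves the overall class proportions."""
    rng = random.Random(seed)
    folds = [[] for _ in range(k)]
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    for test in folds:
        train = [i for f in folds if f is not test for i in f]
        yield train, test

# Synthetic screening labels: 70 negatives, 30 positives.
labels = [0] * 70 + [1] * 30
splits = list(stratified_k_fold(labels, k=10))
# Each of the 10 test folds holds 10 samples (7 negatives, 3 positives),
# so every sample is used for testing exactly once.
```

In practice a library implementation (for example, scikit-learn's StratifiedKFold) would be used; the point is that each model is trained and evaluated ten times, and the averaged scores give a more reliable estimate than a single held-out split.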

Review:

- In the Methods section (Participants), it would be much clearer if lines 83-87 were shown in a table.

- In the Methods section (Measures), the questionnaire mentioned in lines 91-92 should be further detailed. For example, did the questionnaire include questions about parental childhood abuse ("Trait self-acceptance mediates parental childhood abuse predicting depression and anxiety symptoms in adulthood", DOI: 10.1016/j.janxdis.2023.102673)?

Specific comments:

- Table 1 on page 5 extends beyond the page frame.


Comments on the Quality of English Language

Minor editing of English language required

Author Response

We thank the reviewers for their constructive comments. We have prepared a point-by-point response to all comments from the three reviewers and revised our paper based on their comments.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

 

Title: Evaluating Machine Learning Stability in Predicting Depression and Anxiety Amidst Subjective Response Errors

This manuscript is not well written.

1. Authors have not avoided pronouns such as we and ours, etc.

2. The authors have not avoided repeated definitions of acronyms and abbreviations. For example, Generalized Anxiety Disorder (GAD) is defined ten times; no standard journal manuscript repeats definitions in this way.

3. Authors used different font sizes throughout this manuscript. 

4. Authors only compared their results with citation 27. Discussion should compare and contrast similar and dissimilar findings with the performance table at least.

Author Response

We appreciate the time and effort you have dedicated to reviewing our manuscript. Your insightful comments have provided us with valuable guidance on how to enhance the quality and clarity of our work. Below, we address each of your comments and outline the revisions we have made to the manuscript accordingly.

 

1. Authors have not avoided pronouns such as we and ours, etc.

 

Reply: We acknowledge the importance of maintaining an objective tone in academic writing. To address this, we have carefully revised the manuscript to minimize the use of first-person pronouns.

 

2. Authors never bother to avoid repetition of acronyms and abbreviations, such as "Generalized Anxiety Disorder (GAD)" repeated ten times.

 

Reply: We apologize for the oversight and appreciate your attention to this detail. We have now revised the manuscript to ensure that each acronym and abbreviation is fully spelled out upon its first occurrence in each section, followed by the acronym in parentheses.

3. Authors used different font sizes throughout this manuscript.

 

Reply: Thank you for bringing this to our attention. We have thoroughly reviewed the manuscript and standardized the font size and style throughout the manuscript.

 

4. Authors only compared their results with citation 27. Discussion should compare and contrast similar and dissimilar findings with the performance table at least.

 

Reply: We agree that a broader comparison with existing literature would strengthen our discussion. To address this, we have expanded our discussion section and added a performance table (Table 7 in the manuscript) providing a comprehensive comparison of our findings with those of other relevant studies.

 

Reviewer 2 Report

Comments and Suggestions for Authors

No more questions

Comments on the Quality of English Language

Can be improved

Author Response

We appreciate the time and effort you have dedicated to reviewing our manuscript. Your insightful comments have provided us with valuable guidance on how to enhance the quality and clarity of our work. 

Round 3

Reviewer 1 Report

Comments and Suggestions for Authors

Title: Evaluating Machine Learning Stability in Predicting Depression and Anxiety Amidst Subjective Response Errors

I checked the quality of this manuscript concerning presentation and contents.

1. The authors still repeat abbreviations and acronyms many times. For example, major depressive disorder (MDD) is repeated 7 times, and generalized anxiety disorder (GAD) is repeated 9 times.

Likewise, many other abbreviations are repeated several times. A well-prepared manuscript does not repeat abbreviation definitions. The authors should consult standard manuscripts to learn how technical articles are written.

2. The observations in Tables 3 and 4 are confusing. I question whether accuracy and precision are directly proportional or inversely proportional to computing time.

3. Provide proper citations for the claim that random forest outperformed the CNN methods.

 

4. After checking Figure 5, I also wonder how random forest performance in your experiment produced a straight slope. Please recheck Figure 5(a and b), and recheck the computation time and other performance metrics for all compared methods.

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
