Review
Peer-Review Record

Ethical Considerations in Artificial Intelligence Interventions for Mental Health and Well-Being: Ensuring Responsible Implementation and Impact

Soc. Sci. 2024, 13(7), 381; https://doi.org/10.3390/socsci13070381
by Hamid Reza Saeidnia 1, Seyed Ghasem Hashemi Fotami 2, Brady Lund 3 and Nasrin Ghiasi 4,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 4 June 2024 / Revised: 12 July 2024 / Accepted: 21 July 2024 / Published: 22 July 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The manuscript reviews ethical concerns arising from the use of AI systems in mental health and well-being. The authors selected 51 publications and identified and proposed a series of ethical considerations, ethical principles to integrate into the design and application of AI systems, and practices for the ethical use of AI systems in mental health interventions. A discussion of the analyzed studies is presented before two short sections on the limitations of the work and the conclusions.

The manuscript fits the journal's scope and might interest the journal's readers. However, the novelty and significance are quite limited. The methodology followed to conduct the study seems to be based on standard references, even though they appear to be specific to medical surveys. The manuscript is well-written and presented. The overall merit is limited, and my recommendation is to publish the manuscript after considering and addressing the following comments and suggestions.

 

1. One of the main shortcomings of the review is the lack of focus on explainability (both as a property of AI systems and as an ethical principle). It is quite surprising that the authors did not include explainability in sections 3.3, 3.4, and 3.5. Explainability is crucial when studying ethical considerations of AI, especially in the context of the domain considered in the manuscript. The authors should indicate this omission in the Introduction. This clarification should also be included in section 5. Additionally, the authors focus only on the limitations of the systematic review in section 5, so they should extend that part to include a discussion of the findings themselves.

2. (minor comment) Page 2, line 52: What do the authors exactly mean by "virtual therapists"?

3. (minor but general comment) At the beginning of the sections and subsections, authors should at least write a very short introduction and not concatenate the titles of the subsections (as, for instance, they do on page 3, subsections 2.2 and 2.2.1).

4. (minor comment) Page 3, lines 106-107: "Including studies that address ethical considerations ensures a comprehensive understanding of the potential implications of AI in mental health interventions." This statement should be nuanced since it would be easy to argue for the need to include technical works to present a truly comprehensive review.

5. (minor comment) Page 3, lines 128-133: Why did the authors exclude Google Scholar from the list?

6. Page 4, lines 147-148: The authors comment on excluded works for conflict reasons. This should be clarified (e.g., providing an example without giving a reference nor a clear identification of the work).

7. The appendices are appreciated, but their presentation can be improved. Furthermore, if the paper is published, the appendices should be made available to everyone (e.g., by providing a link), not kept as documents for the reviewers only.

8. Could the authors clarify "1696 articles other study designs" (page 4, lines 177-178)? There is also a missing parenthesis in that part.

9. Page 3, last line: The authors mention 51 articles, whereas in the abstract, they claim to have considered 52 works.

10. Page 5, figure 1: The figure is too big and a bit blurry. This should be addressed if the manuscript is accepted.

11. Table 2 is claimed to be about ethical principles. However, none of the columns include a list of ethical principles. The authors should clarify this.

Author Response

Comments 1. One of the main shortcomings of the review is the lack of focus on explainability (both as a property of AI systems and as an ethical principle). It is quite surprising that the authors did not include explainability in sections 3.3, 3.4, and 3.5. Explainability is crucial when studying ethical considerations of AI, especially in the context of the domain considered in the manuscript. The authors should indicate this omission in the Introduction. This clarification should also be included in section 5. Additionally, the authors focus only on the limitations of the systematic review in section 5, so they should extend that part to include a discussion of the findings themselves.

Response 1: The reviewer is correct that the term “explainability” was not mentioned directly in the manuscript, and this is an omission that needed to be addressed. However, we believe that much of what we say about “transparency” also applies to “explainability.” Thus, we did not find it necessary to make substantial edits, but rather only updated and clarified the terminology we used, i.e., “transparency and explainability of models.”

 

Comments 2. (minor comment) Page 2, line 52: What do the authors exactly mean by "virtual therapists"?

Response 2: “Virtual therapists” refers to digital, remote mental health support and treatment, whether delivered by AI systems, human therapists, or a combination of both. We added this definition in parentheses in the text of the article before the phrase “virtual therapists.”

Comments 3. (minor but general comment) At the beginning of the sections and subsections, authors should at least write a very short introduction and not concatenate the titles of the subsections (as, for instance, they do on page 3, subsections 2.2 and 2.2.1).

Response 3: Thank you for your valuable comment. We added a very short introduction, “This systematic review applied the following inclusion and exclusion criteria:”, at the beginning of the relevant section.

Comments 4. (minor comment) Page 3, lines 106-107: "Including studies that address ethical considerations ensures a comprehensive understanding of the potential implications of AI in mental health interventions." This statement should be nuanced since it would be easy to argue for the need to include technical works to present a truly comprehensive review.

Response 4: Thank you for your valuable comment. We have revised the statement to make it more nuanced.

Comments 5. (minor comment) Page 3, lines 128-133: Why did the authors exclude Google Scholar from the list?

Response 5: Thank you for your valuable comment. We have added Google Scholar to the list.

Comments 6. Page 4, lines 147-148: The authors comment on excluded works for conflict reasons. This should be clarified (e.g., providing an example without giving a reference nor a clear identification of the work).

Response 6: Thank you for your valuable comment. We have added a clarifying paragraph to the “Study Selection” section, which we hope addresses your question.

Comments 7. The appendices are appreciated, but their presentation can be improved. Furthermore, if the paper is published, the appendices should be made available to everyone (e.g., by providing a link), not kept as documents for the reviewers only.

Response 7: Thank you for your valuable comment. We will provide these files as supplementary materials to the journal; how supplementary files are made available to readers will depend on how the journal displays them.
Comments 8. Could the authors clarify "1696 articles other study designs" (page 4, lines 177-178)? There is also a missing parenthesis in that part.

Response 8: Thank you for your valuable comment. We have revised and rewritten this passage for clarity.

 

Comments 9. Page 3, last line: The authors mention 51 articles, whereas in the abstract, they claim to have considered 52 works.

Response 9: Thank you for your valuable comment and sorry for this typo.

 

Comments 10. Page 5, figure 1: The figure is too big and a bit blurry. This should be addressed if the manuscript is accepted.

Response 10: We have made the image somewhat smaller and improved its quality.

 

Comments 11. Table 2 is claimed to be about ethical principles. However, none of the columns include a list of ethical principles. The authors should clarify this.

Response 11: We have revised the table title, content, and explanatory text to indicate that these are “considerations for integrating ethical principles…” The omission of this language was due to language differences among the authors but has now been corrected upon review by a native English speaker.

Reviewer 2 Report

Comments and Suggestions for Authors

Dear Authors,

Your manuscript "Ethical Considerations in Artificial Intelligence Interventions for Mental Health and Wellbeing: Ensuring Responsible Implementation and Impact" is interesting. However, it can be improved further. Please find comments below:

1. The discussion section sounds repetitive. Please revise the discussion section.

2. It is not clear whether the AI tools, software, or chatbots for mental health and well-being are regulated as medical devices or as Software as a Medical Device (SaMD). Who is responsible for medical approval and regulation?

3. In the introduction section, many sentences contain references that are not relevant. Please revise.

Best wishes.
Comments on the Quality of English Language

Minor editing of English language required

Author Response

Comments 1. The discussion section sounds repetitive. Please revise the discussion section.

Response 1: The discussion section has been revised to address the comments of this reviewer as well as reviewer 3.

Comments 2. It is not clear whether the AI tools, software, or chatbots for mental health and well-being are regulated as medical devices or SaMD. Who is responsible for medical approval and regulation?

Response 2: While we are unsure if this question is entirely germane to the specific purpose of this paper, it is nonetheless an interesting question. Currently, there is very little regulation in any form, a key reason for this review. It is unlikely that they would be regulated as medical devices, since they are not “devices” according to the technical definition. We are happy to explore this further in the paper if you require further edits.

Comments 3. In the introduction section, many sentences contain references that are not relevant. Please revise.

Response 3: Thank you for your attention, we rechecked and rewrote some of the sources and added some.

Reviewer 3 Report

Comments and Suggestions for Authors

I just had the chance to read your work. Some improvement suggestions are listed below:

1) Methodology needs further clarification.

2) The whole paper must be proofread before another submission.

3) Results are largely descriptive and lack confrontation with previous works.

4) Theoretical and practical contributions should be included. Authors must expand this discussion so that the implications of the theory become clearer.

Considering the above points, I suggest a major revision to the article.

Comments on the Quality of English Language

The whole paper must be proofread before another submission.

Author Response

Comments 1: Methodology needs further clarification.

Response 1: Thanks for your valuable comment. We have made changes in the Inclusion and Exclusion Criteria section, the Databases and Search Method section, and the Study Selection section, which we hope will help further clarification.

 

Comments 2: The whole paper must be proofread before another submission.

Response 2: We edited the whole text again with the help of the third author, who is an English native speaker. We hope the text of the article is improved.

 

Comments 3: Results are pretty much descriptive and lack confrontation with previous works.

Response 3: Thank you for your feedback. We have carefully considered your suggestions and made revisions to the discussion section to strengthen its quality and coherence. In the updated discussion, we have focused on synthesizing the key insights from the three most relevant review studies in the context of our own research. By closely aligning the discussion with our study objectives and findings, we have aimed to provide a more focused and impactful analysis of the current state of knowledge in this area.

 

Comments 4: Theoretical and practical contributions should be included. Authors must expand this discussion so that the implications of the theory become clearer.

Response 4: We have expanded the discussion section to address these contributions in greater detail.

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The authors tackled and solved the long list of comments and concerns I posed. I appreciate it very much and hope they also see the improvement in their work. I believe the manuscript is now ready for publication.

Author Response

Comment: The authors tackled and solved the long list of comments and concerns I posed. I appreciate it very much and hope they also see the improvement in their work. I believe the manuscript is now ready for publication.

Response: Thank you.

Reviewer 2 Report

Comments and Suggestions for Authors

Dear Authors,

Thank you for your responses. However, the clarification provided for comment 2 remains unclear. Could you please elaborate further?

In Response 2, you mentioned: "While we are unsure if this question is entirely germane to the specific purpose of this paper, it is nonetheless an interesting question. Currently, there is very little regulation in any form, a key reason for this review. It is unlikely that they would be regulated as medical devices, since they are not “devices” according to the technical definition. We are happy to explore this further in the paper if you require further edits."

This question is pertinent and crucial for understanding the term "Interventions" as referenced in your title. Could you specify what type of intervention you are referring to?

Furthermore, on line 29, you mentioned "AI interventions are developed and deployed." How do you envision AI being developed and deployed? In my view, AI can be deployed through various means such as software, algorithms, AI-enabled devices, or chatbots, and may be subject to FDA regulations.

Please refer to the following links for more information:

  • https://www.fda.gov/medical-devices/digital-health-center-excellence/software-medical-device-samd
  • https://www.fda.gov/media/122535/download?attachment
  • https://doi.org/10.3390/electronics13030498

Best regards,

 

Comments on the Quality of English Language

Extensive editing of English language required.

For example:

Some of the main considerations include: Privacy and confidentiality, Informed consent, Bias and fairness, Transparency, explainability, and accountability, Autonomy and human agency (Table 1).

Author Response

Comment 1: Thank you for your responses. However, the clarification provided for comment 2 remains unclear. Could you please elaborate further?

In Response 2, you mentioned: "While we are unsure if this question is entirely germane to the specific purpose of this paper, it is nonetheless an interesting question. Currently, there is very little regulation in any form, a key reason for this review. It is unlikely that they would be regulated as medical devices, since they are not “devices” according to the technical definition. We are happy to explore this further in the paper if you require further edits."

This question is pertinent and crucial for understanding the term "Interventions" as referenced in your title. Could you specify what type of intervention you are referring to?

Furthermore, on line 29, you mentioned "AI interventions are developed and deployed." How do you envision AI being developed and deployed? In my view, AI can be deployed through various means such as software, algorithms, AI-enabled devices, or chatbots, and may be subject to FDA regulations.

Response: Thank you for your deep insight and careful review. In response to the question referring to line 29, we made some changes in the abstract of the article. We also added the following paragraph to the end of the discussion section:
"Notably, regulating bodies, such as the Food and Drug Administration (FDA) in the United States, may play a role in ensuring that AI interventions are developed and deployed in an ethical manner. The AI interventions discussed in this paper could take many forms, such as specific algorithms, chatbots, or complete AI-enabled devices. AI-enabled medical devices may be subject to FDA approval, as noted in recent publications on the agency’s website. While this can be a promising development for protecting consumers, it will be critical that the FDA retain experts who are able to properly assess the ethical design and development of the AI components of these devices. Research like that discussed in this paper can offer an important source of information to inform the development of responsible regulation of AI devices."

Thank you once again for your valuable comment.

 

Comment 2: Comments on the Quality of English Language

Response 2: We once again asked Dr. Brady Lund to read the article. We assure you that the article has no major problems in terms of language structure. We have also made a couple of edits to address the specific English language issue you pointed out (the capitalization issues in that list).

 

Round 3

Reviewer 2 Report

Comments and Suggestions for Authors

The authors have replied to all the comments.
