Study Protocol
Peer-Review Record

IDTWO: A Protocol for a Randomised Controlled Trial of a Web-Based Mental Health Intervention for Australians with Intellectual Disability

Int. J. Environ. Res. Public Health 2021, 18(5), 2473; https://doi.org/10.3390/ijerph18052473
by Peter A. Baldwin 1,*,†, Victoria Rasmussen 1, Julian N. Trollor 2, Jenna L. Zhao 2, Josephine Anderson 1, Helen Christensen 1 and Katherine Boydell 1
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 12 January 2021 / Revised: 21 February 2021 / Accepted: 22 February 2021 / Published: 3 March 2021

Round 1

Reviewer 1 Report

see attached file

This is an interesting proposal/protocol from a very experienced group but there are key elements of it that I do not understand from the paper.

  1. What actually is the intervention? https://www.healthymind.org.au/Index.aspx# is an open website not requiring any username or password. I do not understand, therefore, how “Participants allocated to the intervention group will have full access to the Healthy Mind program for 8 weeks. Participants randomised to the control group will be placed on a waitlist for 8 weeks”. Surely they could access the site right now – as I can?

 

  2. Anonymity and security: I find the protocol a bit confusing in relation to this. I am not absolutely clear what information participants will have to give to register for the trial. It seems that this recruitment will take place on a secure BDI website. Obviously they will at least need a user name. They also need to give the details of a named carer. I am presuming therefore that the likely scenario is that both participant and carer will be identifiable in this recruitment process. Yet the Healthy Mind website is ‘open access’ (as in the point above, and as stated in the protocol “Pilot testing of Healthy Mind revealed that a sufficiently robust password system was a barrier to access due to the different language and memory abilities of users. Therefore, the Healthy Mind site does not require a login. User privacy is maintained by ensuring each user session is secure and unidentifiable.”) So the authors are expecting participants to be able to log in to this secure BDI website and complete some lengthy questionnaires – yet by their own research they have said that this logging-in process has been a barrier and for that reason the intervention itself is left open access.

 

But then I read in data collection that “At each assessment point, the study website automatically issues a unique link to the study questionnaire via email”. And furthermore participants will be rewarded with a shopping voucher. And they are clearly not anonymous as “Participants and carers who do not respond to the scheduled prompt sent by email and short message service (SMS) will receive a phone call reminder from a trial team member.”

 

So I don’t understand how that happens if the user has not logged in?

 

So points 1 and 2 leave me confused, and I think the protocol needs to address the process of advertising, recruitment, and allocation to intervention or waitlist control in more detail.

 

  3. Pursuing this line further on ‘what is the intervention?’, Elizabeth Murray’s paper on online trials (https://www.jmir.org/2009/2/e9/) discusses well the difficulties of defining the intervention, and this deserves more explanation in this protocol.

 

  4. The outcome measures together make up a fairly formidable battery of tests – if I am adding up correctly: ADAMS (28) + Kessler (10) + WHO-DAS (36) + AQ (50) + CBS (?) ~150 questions or so. That’s a lot for anyone, but maybe more so for someone with ID. Is this not going to lose you loads of potential participants right from the start?

 

  5. The estimation of a 70% follow-up rate (30% attrition) seems wildly optimistic. Has any previous work been carried out to support that? Again referring to Elizabeth Murray’s paper, she cites follow-up rates as low as 10–15%. In this case the sample size of 150 may need to be significantly increased.

 

  6. Blinding: The authors say “Participants will remain blind to study allocation during the intervention and follow-up periods.” I don’t really see how that is possible unless, in the information/consent process, the researchers do not give information about how long to expect to wait before being given access to the program.

 

  7. Recruitment: What is the total estimated eligible population of people with ID and these characteristics? Is 150 possible with these inclusion criteria?

 

 

I guess I must be being pretty dense as “The Healthy Mind study protocol and materials have been approved by Human Research Ethics Committee at UNSW Australia (HC190393)” – so they must have understood it better!

Author Response

This is an interesting proposal/protocol from a very experienced group but there are key elements of it that I do not understand from the paper.

  1. What actually is the intervention? https://www.healthymind.org.au/Index.aspx# is an open website not requiring any username or password. I do not understand, therefore, how “Participants allocated to the intervention group will have full access to the Healthy Mind program for 8 weeks. Participants randomised to the control group will be placed on a waitlist for 8 weeks”. Surely they could access the site right now – as I can?

Response: We thank the reviewer for this important point. To clarify how WLC fidelity will be maintained, we have added the following text on page 5 (new text in bold):

“The control group will be advised that there will be a delay in being able to access Healthy Mind, and that they will receive an email from the research team as soon as the website is available to access (i.e., after the three-month follow-up survey has closed). Control group fidelity will be maintained by collaborating with study helpers to avoid access to Healthy Mind and surveying control participants after the 8-week period to ensure that they were not exposed to Healthy Mind.”

And on page 8 (new text in bold):

“Participants will remain blind to study allocation during the intervention and follow-up periods, however study helpers in the waitlist condition will be unblinded.”

 

  2. Anonymity and security: I find the protocol a bit confusing in relation to this. I am not absolutely clear what information participants will have to give to register for the trial. It seems that this recruitment will take place on a secure BDI website. Obviously they will at least need a user name. They also need to give the details of a named carer. I am presuming therefore that the likely scenario is that both participant and carer will be identifiable in this recruitment process.

 

Yet the Healthy Mind website is ‘open access’ (as in the point above, and as stated in the protocol “Pilot testing of Healthy Mind revealed that a sufficiently robust password system was a barrier to access due to the different language and memory abilities of users. Therefore, the Healthy Mind site does not require a login. User privacy is maintained by ensuring each user session is secure and unidentifiable.”) So the authors are expecting participants to be able to log in to this secure BDI website and complete some lengthy questionnaires – yet by their own research they have said that this logging-in process has been a barrier and for that reason the intervention itself is left open access. But then I read in data collection that “At each assessment point, the study website automatically issues a unique link to the study questionnaire via email”. And furthermore participants will be rewarded with a shopping voucher. And they are clearly not anonymous as “Participants and carers who do not respond to the scheduled prompt sent by email and short message service (SMS) will receive a phone call reminder from a trial team member.” So I don’t understand how that happens if the user has not logged in? So points 1 and 2 leave me confused, and I think the protocol needs to address the process of advertising, recruitment, and allocation to intervention or waitlist control in more detail.

 

Response: We thank the reviewer for these important comments about participant privacy, and for raising important points of clarity.

To clarify, the Healthy Mind website and BDI Research Platform are separate websites accessible by the separate URLs listed in our manuscript. The latter requires a log-in and captures all sensitive data, including identifying information. We described this in the original manuscript as, “…the Healthy Mind site does not require a login. User privacy is maintained by ensuring each user session is secure and unidentifiable. This is achieved by removing all opportunities to enter identifiable information (e.g., users are never asked to provide their name or any other identifiers). Identifiable information is only entered into a separate study website (detailed below).”

To further clarify that the websites are separate, we have added the following text on page 6 (in bold):

“Participant consent, screening and assessment will take place online via a secure study-specific website (https://healthymind.blackdoghealth.org.au), which is separate from the intervention site.”

To further clarify that participants will be asked to enter their contact details, we have added the following text to page 7 (new text in bold):

“…statement. After providing informed consent, eligible participants will be directed to a secure website where they will enter contact details, and complete the baseline assessment.”

We acknowledge that this trial is unique in that it has an intervention without access restrictions. While open, unidentified access was critical for the design of the intervention, the research platform necessitates a password to keep identifying information secure. To clarify that the Healthy Mind site does not use a password, enabling independent use by some consumers, we have added the following text to page 5 (new text in bold):

“Pilot testing of Healthy Mind revealed that a sufficiently robust password system was a barrier to access due to the different language and memory abilities of users, given the site is intended for independent use by some consumers in the future.”

We note in our original manuscript that “Consent, screening and all other data collection will be completed online with carer support”. To further clarify that the secure study site is password protected and may therefore need carer support to access, we have added the following text to page 6 (new text in bold):

“…study-specific website hosted on the Black Dog Institute research platform (https://healthymind.blackdoghealth.org.au), which is separate from the intervention site. The research platform site is password-protected; therefore, participants may require support from their helper during this component of the study. Once on the secure site, potential…”

We hope the answers above have addressed concerns regarding recruitment and waitlist allocation. We are unsure of what specific aspects of advertising the reviewer would like us to add but are happy to consider changes if the reviewer is able to provide more specific details.

 

 

  3. Pursuing this line further on ‘what is the intervention?’, Elizabeth Murray’s paper on online trials (https://www.jmir.org/2009/2/e9/) discusses well the difficulties of defining the intervention, and this deserves more explanation in this protocol.

Response: Our original manuscript contains a detailed description of the Healthy Mind intervention. We are not certain from the comment above, but assume that the reviewer is referring specifically to Murray et al.’s (2009) discussion of ‘intervention fidelity’, often discussed in digital health as ‘treatment engagement’. If so, we thank the reviewer for the important point and have added a paragraph about treatment engagement on page 5:

“The absence of a login system in the intervention poses a challenge for monitoring engagement with the treatment. Although this is not ideal methodologically, our priority has been to create an accessible website. This priority was identified in our co-design process previously described in our team’s publication (Watfern et al, 2019). Treatment engagement will therefore be assessed retrospectively by self-report at the first follow-up measurement.”

  4. The outcome measures together make up a fairly formidable battery of tests – if I am adding up correctly: ADAMS (28) + Kessler (10) + WHO-DAS (36) + AQ (50) + CBS (?) ~150 questions or so. That’s a lot for anyone, but maybe more so for someone with ID. Is this not going to lose you loads of potential participants right from the start?

Response: We acknowledge this as a very important point. Participant burden was considered extensively during our study design. Our investigator team agreed it was necessary to include these variables to capture the most essential aspects of cognitive behavioural health while including a measure of autism, a crucial confound. Unfortunately, there is a limited selection of questionnaires validated in people with ID. We selected a test battery similar to those used in previous research (for reviews, see https://pubmed.ncbi.nlm.nih.gov/24051363/ or https://pubmed.ncbi.nlm.nih.gov/26803286/). The CBS tasks are described on page 6 of our original manuscript. We expect the baseline assessment to take approximately 1 hour and it can be paused at any time.

  5. The estimation of a 70% follow-up rate (30% attrition) seems wildly optimistic. Has any previous work been carried out to support that? Again referring to Elizabeth Murray’s paper, she cites follow-up rates as low as 10–15%. In this case the sample size of 150 may need to be significantly increased.

Response: We thank the reviewer for considering attrition and agree that 70% may be an optimistic projection for retention. We expect that carer support, automated SMS reminders, researcher follow-up and cash reimbursement should boost retention. With respect to previous work to support our thinking, cash reimbursement significantly improved retention rates to 60% in the paper by Murray et al. cited by the reviewer. Notably, the Murray et al. (2009) paper does not reference studies with the automated SMS reminders or third-party support to participants that feature as part of our retention strategy. This 60% retention rate has also been achieved in previous fully online studies conducted by members of our investigation team (e.g., https://www.jmir.org/2019/5/e12246/) that use the same trial management technology and procedures. In line with previous work and estimates made by Murray et al. for studies with cash reimbursement, we have revised our projected retention rate to 60% and have adjusted the manuscript accordingly (page 8, new text in bold):

“To detect a more conservative effect size of ~0.5 we will need a minimum of 50 participants per group, therefore we will recruit a sample of approximately 167 individuals to allow for ~40% attrition.”
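As a quick check of the revised recruitment target, the attrition arithmetic can be sketched as follows; the 50-per-group minimum and 60% retention rate are the figures stated in the response, while the rounding step is an assumption to reach a whole number of participants:

```python
import math

# Figures from the response: a minimum of 50 completers per group
# across two groups, and a revised projected retention rate of 60%.
completers_needed = 50 * 2
retention_rate = 0.60

# Recruit enough participants so that 60% retention still yields 100 completers.
recruitment_target = math.ceil(completers_needed / retention_rate)
print(recruitment_target)  # 167
```

This reproduces the "approximately 167 individuals" quoted above (100 / 0.6 ≈ 166.7, rounded up).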

 

  6. Blinding: The authors say “Participants will remain blind to study allocation during the intervention and follow-up periods.” I don’t really see how that is possible unless, in the information/consent process, the researchers do not give information about how long to expect to wait before being given access to the program.

Response: We acknowledge the difficulties of implementing a waitlist control condition in this trial.

  7. Recruitment: What is the total estimated eligible population of people with ID and these characteristics? Is 150 possible with these inclusion criteria?

Response: As our study includes individuals with borderline ID, 15% of Australians will be eligible. Based on a current population of 25,671,900, we would anticipate approximately 3,850,785 Australians to meet these criteria. Given the high prevalence of mental health symptoms in this group, a conservative estimate would be approximately 50% experiencing mental health symptoms, or 1,925,393 Australians. Assuming only half have access to an internet-connected device, around 962,696 Australians should be eligible and have capacity to participate. Based on this, we believe 150 participants is indeed possible.
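The funnel of estimates in this response can be reproduced with simple arithmetic; each percentage is the authors' stated assumption, and the rounding conventions below are chosen only to match the figures quoted:

```python
population = 25_671_900             # current Australian population cited above

eligible = population * 15 // 100   # 15% borderline-to-mild ID prevalence
symptomatic = (eligible + 1) // 2   # ~50% with mental health symptoms, rounded half up
reachable = symptomatic // 2        # ~50% with an internet-connected device

print(eligible)     # 3850785
print(symptomatic)  # 1925393
print(reachable)    # 962696
```

Each figure matches the response; any of these populations comfortably exceeds the target sample of 150.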

 

I guess I must be being pretty dense as “The Healthy Mind study protocol and materials have been approved by Human Research Ethics Committee at UNSW Australia (HC190393)” – so they must have understood it better!

Response: We thank the reviewer for their comments and feel the manuscript is much strengthened as a result of our response to them.

Author Response File: Author Response.docx

Reviewer 2 Report

Overall this is a well-conceived study proposal for an exciting technology. The case for the gap in mental health support for the intellectually disabled is made very clearly and the technical response appears well thought out, as is the study itself.  The writing is excellent.

Nonetheless, I note a number of small points where more detail could be provided:

  1. It’s always a shame to see certain kinds of participants excluded (e.g. impaired vision, hearing), but good to see about the opportunity to participate in the “community collaboration” part of the dissemination plan. It’d be good to hear a bit more about what that entails and how it could allow the voices of those that were left out (and their support people) to influence future developments.
  2. As a comment, I was trying to tally the baseline questions and got 28+10+36+50+ however many the CBS is (good if that were stated). Quite a few for an ID person (or anybody). Do you have an estimate of how long this would take to complete? Is there any ability to park the baseline assessment session midway? Given the further description of how follow-up is done, I wasn’t sure about that.
  3. Re sample size (section 6) it’d be nice to see more of the math regarding mean and variance of expected effect size and how that leads to the estimate of 50 participants. In Limitations you briefly revisit that the sample is larger than required and that you’ll test preservation of randomization, but I’d like to see this raised in section 6 to support the derivation of the 150 size in terms of attrition and possible greater attrition in wait-list group. Incidentally, what happens if the test shows randomization isn’t preserved? (maybe that could be expanded in limitations). Is the 30% attrition rate realistic? Maybe so given that there’s significant incentive (AUD20) and follow-up – it’d be good to see some justification. Minor typos/formatting in that section: “approximately” runs into “however” oddly – I think a number is missing; also I’d favor “~0.5” rather than “~.5”.
  4. Is any continuity kept within the intervention and, if so, how (with respect to technical security given there’s no password)? I.e. does the application remember from session to session what modules the participant has done in the past or any profile information about them? Is it tied to the specific computer and/or browser? There’s discussion of how the assessment info is securely managed but I don’t see anything about this sort of application data. Are there any expectations about how the participants will use the application over the 8 weeks (e.g. daily, weekly)?  Ah, I see there’s a weekly planner in the app – this further raises my question on how the data is stored and mapped to the user without a login and what issues or limitations this may entail.
  5. Presumably the participants can withdraw from the study at any time (indeed ‘withdrawn’ is shown as an option on figure 4 out through the 3-month follow-up). How is this implemented in terms of how they ask and what is done to erase their data?

Further minor suggestions / edits:

The title doesn’t actually mention mental health.  It does mention ‘mobile’ (which incidentally I don’t see anywhere in the body of the paper).  Perhaps ‘Web-based and Mobile’ should instead be ‘Electronic Mental Health’ in the title.

Last sentence of 3rd paragraph of intro: “Adolescence is also a time of psychiatric vulnerability,  substantial cognitive development.” – maybe an ‘and’ before ‘substantial’

9.2: “(use this language)” – I wonder if that text was meant to be included.

 

Author Response

Open Review (Reviewer 2)

Overall this is a well-conceived study proposal for an exciting technology. The case for the gap in mental health support for the intellectually disabled is made very clearly and the technical response appears well thought out, as is the study itself.  The writing is excellent.

We thank the reviewer for their encouraging and thoughtful review.

Nonetheless, I note a number of small points where more detail could be provided:

Reviewer: It’s always a shame to see certain kinds of participants excluded (e.g. impaired vision, hearing), but good to see about the opportunity to participate in the “community collaboration” part of the dissemination plan. It’d be good to hear a bit more about what that entails and how it could allow the voices of those that were left out (and their support people) to influence future developments.

 

We agree that exclusion is regrettable, but unfortunately necessary to ensure the ability to engage with the online program. We have added the following statement about the lived experience collaboration aspect of the project (page 3, new text in bold):

 

“Ineligible participants will be offered the opportunity to participate in the community collaboration aspect of our dissemination plan, which will involve giving lived experience feedback on the trial results and recommendations about how future trials and services for people with ID should be designed.”

 

Reviewer: As a comment, I was trying to tally the baseline questions and got 28+10+36+50+ however many the CBS is (good if that were stated). Quite a few for an ID person (or anybody). Do you have an estimate of how long this would take to complete? Is there any ability to park the baseline assessment session midway? Given the further description of how follow-up is done, I wasn’t sure about that.

 

We thank the reviewer for these important comments about participant burden. The CBS tasks are listed on page 6 of our original manuscript. We expect the baseline assessment to take approximately 1 hour. We selected a test battery similar to those used in previous research (for reviews, see https://pubmed.ncbi.nlm.nih.gov/24051363/ or https://pubmed.ncbi.nlm.nih.gov/26803286/).

All assessments can be paused and resumed later. We have clarified this in the manuscript on page 7 as follows:

 

“…assessment independently. To allow for participant fatigue, all online assessments can be paused and resumed at a later date. Ineligible participants…”

 

Reviewer: Re sample size (section 6) it’d be nice to see more of the math regarding mean and variance of expected effect size and how that leads to the estimate of 50 participants. In Limitations you briefly revisit that the sample is larger than required and that you’ll test preservation of randomization, but I’d like to see this raised in section 6 to support the derivation of the 150 size in terms of attrition and possible greater attrition in wait-list group. Incidentally, what happens if the test shows randomization isn’t preserved? (maybe that could be expanded in limitations). Is the 30% attrition rate realistic? Maybe so given that there’s significant incentive (AUD20) and follow-up – it’d be good to see some justification. Minor typos/formatting in that section: “approximately” runs into “however” oddly – I think a number is missing; also I’d favor “~0.5” rather than “~.5”.

 

We are unsure what the reviewer means by “mean and variance of expected effect size”. The expected effect size was derived from the reviews and meta-analysis cited in the original manuscript. If the reviewer is referring to power calculations, we have added the following text on page 8:

 

“This calculation was based on the group differences on ADAMS between wait-list control and intervention groups at 8 weeks and 3 months, assuming 80% power and alpha=0.05 (0.025 for each primary group difference), in order to detect a mean difference of 0.5 standard deviation units at 8 weeks and at 3 months.”

We thank the reviewer for considering attrition and agree that 70% may be an optimistic projection for retention. We expect that carer support, automated SMS reminders, researcher follow-up and cash reimbursement should boost retention. Cash reimbursement significantly improved retention rates to 60% in previous studies (https://www.jmir.org/2009/2/e9/) without the automated SMS reminders or third-party support to participants that feature as part of our retention strategy. This 60% retention rate has also been achieved in previous fully online studies conducted by members of our investigation team (e.g., https://www.jmir.org/2019/5/e12246/) that use the same trial management technology and procedures. In line with previous work and estimates made by Murray et al. for studies with cash reimbursement, we have revised our projected retention rate to 60% and have adjusted the manuscript accordingly (page 8, new text in bold):

“To detect a more conservative effect size of ~0.5 we will need a minimum of 50 participants per group, therefore we will recruit a sample of approximately 167 individuals to allow for ~40% attrition.”

 

We note in our original manuscript that, “Missing data will be handled using maximum likelihood estimation within the MMRM procedure, as this appears to be more robust to violations of the assumption of randomness due to attrition.” To further clarify, we have added the following statement to acknowledge the possibility of failure of randomisation and how this might be addressed (page 10, in bold):

 

“…preservation of randomisation. If randomisation fails, variables that demonstrate systematic differences between groups at baseline will be held constant in future analyses, where appropriate. Additionally, all trial…”

 

We have also corrected the typographical errors and formatting in this section.

 

 

Reviewer: Is any continuity kept within the intervention and, if so, how (with respect to technical security given there’s no password)? I.e. does the application remember from session to session what modules the participant has done in the past or any profile information about them? Is it tied to the specific computer and/or browser? There’s discussion of how the assessment info is securely managed but I don’t see anything about this sort of application data. Are there any expectations about how the participants will use the application over the 8 weeks (e.g. daily, weekly)?  Ah, I see there’s a weekly planner in the app – this further raises my question on how the data is stored and mapped to the user without a login and what issues or limitations this may entail.

 

Unfortunately, without a password we could not implement progress tracking in the intervention website. Therefore, it does not remember what the user has done previously, and the trial will not be able to map engagement data by case. Instead, engagement with the intervention will be assessed via self-report retrospectively. We acknowledge that this approach has limitations. Extensive user consultation (see https://mental.jmir.org/2019/3/e12958/) revealed that any password system would be a significant barrier to use, and our priority was an accessible intervention. To clarify this, we have added a new paragraph about treatment engagement on page 5:

 

“The absence of a login system in the intervention poses a challenge for monitoring engagement with the treatment. Although this is not ideal methodologically, our priority has been to create an accessible website. This priority was identified in our co-design process previously described in our team’s publication (Watfern et al, 2019). Treatment engagement will therefore be assessed retrospectively by self-report at the first follow-up measurement.”

 

Reviewer: Presumably the participants can withdraw from the study at any time (indeed ‘withdrawn’ is shown as an option on figure 4 out through the 3-month follow-up). How is this implemented in terms of how they ask and what is done to erase their data?

 

To clarify this, we have added the following text on page 9:

 

“Participants can withdraw from the study at any time for any reason by contacting the investigation team or UNSW HREC in writing. All data collected from a withdrawn participant are securely deleted from the BDI research platform.”

 

Further minor suggestions / edits:

The title doesn’t actually mention mental health.  It does mention ‘mobile’ (which incidentally I don’t see anywhere in the body of the paper).  Perhaps ‘Web-based and Mobile’ should instead be ‘Electronic Mental Health’ in the title.

As the program is web-based, we have removed “Mobile” from the title and added “Mental Health”.

Last sentence of 3rd paragraph of intro: “Adolescence is also a time of psychiatric vulnerability,  substantial cognitive development.” – maybe an ‘and’ before ‘substantial’

We have made this correction.

9.2: “(use this language)” – I wonder if that text was meant to be included.

We have deleted this text.

 

Author Response File: Author Response.docx

Reviewer 3 Report

This paper presents a study protocol to evaluate the efficacy of the Healthy Mind (an Internet-based intervention for mental health) in people with borderline-to-mild intellectual disability and mild-to-moderate symptoms of anxiety and/or depression.

Suggestions and questions (answers can/should be used to improve the paper):
1. Check https://www.mdpi.com/journal/ijerph/instructions - "The abstract should be 'a total of about 200 words maximum'. The abstract should be a single paragraph and should follow the style of structured abstracts, 'but without headings'..."
2. What results are expected in this study? What are the research questions? What is the hypothesis of the authors? They should be explicitly declared in the paper.
3. What is new about this study? Is it the focus on people with intellectual disabilities? The scientific contribution of the results from the study should be clear.
4. Discussion section is poor. It could highlight mainly what the paper adds to the state of the art, comparing the possible results with those of other researchers.

Specific comments:
- All abbreviations should be defined in the first occurrence (e.g., CBT, BDI, etc).
- e.g. -> e.g., (with comma - in all occurrences)
- Why is the goal in bold?
- The manuscript has some formatting problems, please revise it completely.
- Some texts in figure 4 are cut.

Author Response

This paper presents a study protocol to evaluate the efficacy of the Healthy Mind (an Internet-based intervention for mental health) in people with borderline-to-mild intellectual disability and mild-to-moderate symptoms of anxiety and/or depression.

Suggestions and questions (answers can/should be used to improve the paper):


  1. Check https://www.mdpi.com/journal/ijerph/instructions - "The abstract should be 'a total of about 200 words maximum'. The abstract should be a single paragraph and should follow the style of structured abstracts, 'but without headings'..."

We have edited the abstract to adhere to these guidelines.

 

  2. What results are expected in this study? What are the research questions? What is the hypothesis of the authors? They should be explicitly declared in the paper.

We thank the reviewer for pointing out that this is not clear in our original manuscript. We have updated the Objectives section on page two as follows (new text in bold):


“2. Objectives and hypotheses

The primary aim of this study is to answer the research question as to whether using a tailored eMH program (Healthy Mind) can reduce symptoms of anxiety and depression and improve daily functioning in people aged 16 years and over with borderline-to-mild ID. A secondary aim is to examine any effects that specific impairments in intellectual functioning (e.g., poor working memory) may have on engagement with, and the benefits of, an eMH program. We predict that people who use the intervention will report reduced depression and anxiety, relative to the control group. We also predict that lower scores on tests of cognitive function will be related to lower engagement and benefit in the treatment group.”


  3. What is new about this study? Be focused on people with intellectual disabilities? The scientific contribution of the results from the study should be clear.

We thank the reviewer for this comment and have added the following sentences to highlight the pioneering nature of our study:

Page 1: “…depression. To our knowledge, Healthy Mind is the first self-guided and fully automated ID-specific eMH program designed in consultation with ID experts and lived-experience consultants. Healthy Mind is based on myCompass, an established eMH program that helps users alleviate the symptoms of depression while improving their psychosocial functioning. This paper…”

Page 3: “…Active intervention (Healthy Mind): Healthy Mind (www.healthymind.org.au) is a first-of-its-kind, fully automated…”

Please see the next point for additional edits that highlight the expected contribution of the study.

  4. Discussion section is poor. It could highlight mainly what the paper adds to the state of the art, comparing the possible results with those of other researchers.

We agree that the discussion could be expanded. We have added the following paragraphs to the Discussion section on page 10 (new text in bold):

“Mental illness is 2–3 times more common in people with ID than in the general population, yet few access appropriate mental health services [4]. Electronic mental health (eMH) services are well placed to close this treatment gap by adapting cognitive behaviour therapy for people with ID, but very little is known about digital mental health care in ID.

The outcomes of this study will shed new light on how to support the mental health and psychosocial functioning of people with ID. Ours may be the first study to demonstrate the efficacy of a fully automated and self-guided eMH tool in people with ID, similar to those that benefit tens of thousands of other Australians. Importantly, it may be the first time people with ID have a tool to effectively manage mood and anxiety independently. Independence was an important theme in our user consultation and supporting autonomy confers a range of mental health benefits to people with ID.

We expect our study will have practical implications for both treatment and policy. Positive results will provide confidence for ID clinicians to prescribe Healthy Mind as a digital treatment adjunct or psychoeducation tool, which is currently an unmet need in ID-specific face-to-face mental health services. Results will also come as the Australian Royal Commission into Violence, Abuse, Neglect and Exploitation of People with Disability (DRC) concludes and the Australian federal government formulates its National Digital Mental Health Framework. The DRC recently made a substantive finding that “people with cognitive disability have been and continue to be subject to systemic neglect in the Australian health system”. Our trial will provide a clear example of how we might address this neglect with a cost-effective, scalable intervention like others widely available to people with more typical abilities.”


Specific comments:
- All abbreviations should be defined in the first occurrence (e.g., CBT, BDI, etc.).

To our eye, all acronyms are defined, but we are happy to correct any that we have missed or to address this during copyediting.


- e.g. -> e.g., (with comma - in all occurrences)

We have corrected all instances where the comma was omitted.

- Why is the goal in bold?

We are unsure what this comment refers to.


- The manuscript has some formatting problems, please revise it completely.

- Some text in Figure 4 is cut off.

We have reformatted the diagram to correct this.

Round 2

Reviewer 1 Report

The authors have fully addressed my previous points of concern. I have two remaining suggestions/queries (not critical for publication):

  1. Presumably the trusted carers also have to give consent (you will need to collect and store data identifying them) and they have a role in the trial and will need some 'training/guidance'?
  2. "Treatment engagement will therefore be assessed retrospectively by self-report at the first follow-up measurement". I guess that most people with ID may access the website on a preferred device, probably the same device on which they register for the trial? In that case, could the trial registration process not install a cookie on that device that (i) prevents access to the Healthy Mind website before time (for wait-list controls), and (ii) monitors the date/time of logging in/out of the website during the intervention phase?

Author Response

  1. That is correct. Carers also provide consent to be involved in the trial as a support person.
  2. This would be an ideal situation; however, unfortunately, it would not be feasible. Some browsers block cookies by default, and cookies can be cleared by a user at any time. Many people with ID share devices with carers, family members, etc., and many access the web as part of day programs where devices are also shared. Additionally, cookies cannot lawfully block access to another website. We agree that this is a very tricky issue!