Development of a Website for E-Health Use for Children with Chronic Suppurative Lung Diseases: A Delphi Expert Consensus Study
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
Dear Authors,
Thank you for the opportunity to review your manuscript, which reports on the development of an e‑health website for children with chronic suppurative lung diseases (CSLDs) using a two‑stage design (focus groups and Delphi survey). The topic addresses an important unmet need in pediatric respiratory care, and the combination of qualitative and consensus‑building methods is appropriate for capturing stakeholder perspectives. To enhance the scientific rigour and clarity of your work, I offer the following detailed comments.
1. Study Design and Sampling
- Focus groups: Please specify the number of participants in each focus group and the recruitment channels. Clarifying whether parents were recruited from the same clinical sites as the healthcare professionals (HCPs) and whether any screening criteria (e.g., child age, disease severity) were applied will help readers assess selection bias. Additionally, provide demographic data (profession, years of experience, parental demographics) to contextualise the perspectives shared.
- Delphi panel: The manuscript mentions that 49 experts were invited in Round 1, and 44 responded in Round 2. Detail how the list of invitees was generated: Was it based on professional networks, publications, or organisational memberships? Explain how many paediatricians versus physiotherapists were nominated, the overall response rate, and whether non‑respondents differed systematically from respondents. Because expert selection critically affects Delphi outcomes, transparency here is essential.
- Geographic scope and generalizability: All participants appear to be based in Greece. Please acknowledge that cultural and healthcare‑system differences may limit transferability to other settings. Discuss how future studies might broaden the sample to include international experts, or plan to conduct cross‑cultural validations once a prototype is developed.
2. Methodological Detail
- Derivation of Delphi items: You note that themes from the focus groups were converted into a list of 36 items for Round 1. Elaborate on this process: Which themes directly informed item wording? Were multiple researchers involved in coding and item generation to ensure reliability? Presenting a table mapping focus‑group themes to Delphi items (perhaps in the supplementary materials) would improve transparency.
- Consensus thresholds: You adopt thresholds of ≥80 % (consensus), 50–80 % (near consensus) and <50 % (no consensus) without justification. Please cite methodological references supporting these cut‑offs or explain why they were deemed appropriate for this context. Some Delphi studies use Likert scales with more points or set higher consensus thresholds.
- Treatment of partial agreement: The 3‑point scale distinguishes “limited importance”, “important but not critical” and “critical”. Yet Table 2 reports only the percentage of “critical” responses. What was done with the “important but not critical” ratings? Did these influence whether items progressed to Round 2? Clarify whether you calculated weighted scores or considered median values across the entire scale.
- Statistical analyses: At minimum, provide measures of central tendency and variability for demographic variables (age, years of experience) and test whether paediatricians and physiotherapists differed significantly in their ratings. Non‑parametric tests (Mann–Whitney U or Fisher’s exact test) are suitable. Without inferential statistics, it is difficult to judge whether, for instance, physiotherapists prioritised gamification more than paediatricians, or whether gender influenced responses.
3. Results Presentation
- Focus‑group findings: The qualitative themes—information, aesthetics, gamification, evidence, co‑design, technical support, training, remote monitoring, accessibility and multilingual access—are relevant and well supported by participant quotes. To strengthen the narrative:
  - Indicate how many participants expressed each subtheme (e.g., “Seven of eight parents emphasised the importance of colourful design”).
  - Include more representative quotes from parents as well as HCPs, since the former’s voices seem less prominent in the current text. For instance, parents’ concerns about recalling instructions and their desire for an “imaginary friend” avatar are insightful.
- Table 2: This large table is difficult to interpret. Consider reorganising by grouping related items under thematic subheadings (e.g., Content, Aesthetics, Gamification, Support). Provide the number of respondents for each professional group in each round and, if possible, include a simple summary (e.g., how many items achieved consensus, near consensus, no consensus). Adding columns for median ratings or standard deviations would offer more nuance than percentages alone.
- Supplementary materials: Summarise key findings from Tables S4 and S5 in the main text. For example, explain which three new items were suggested by participants and why they were accepted or rejected in Round 2. Highlight any items that failed to reach consensus after two rounds and offer a rationale (e.g., the role of 3D animation may warrant further investigation).
4. Discussion and Interpretation
- Integration with existing literature: Your discussion notes the importance of evidence‑based content, gamification, and co‑design, drawing on general mHealth research. Deepen this by comparing your findings with similar initiatives in cystic fibrosis, asthma or other chronic pediatric conditions. What lessons from those interventions could inform the development of a CSLD website? For example, previous studies have shown that gamified breathing exercises improve adherence in cystic fibrosis—could such evidence support your emphasis on 3D animations and avatars?
- Actionable next steps: The manuscript describes a conceptual framework but stops short of detailing how the website will be built, tested and implemented. Consider outlining planned phases: prototype development, usability testing with children and parents, pilot implementation, and eventual efficacy trials. This demonstrates that the Delphi outcomes will translate into concrete digital health solutions.
- Limitations: While you mention the Greek‑language sample and lack of end‑user testing, please discuss other limitations: (i) potential biases due to self‑selection of experts; (ii) absence of a third Delphi round, which some researchers use to stabilise consensus; (iii) reliance on self‑reported importance without behavioural validation; and (iv) possible overrepresentation of physiotherapists (75 % of respondents), which may skew priorities toward physiotherapy content.
5. Language, Formatting and Ethics
- Language: Edit the manuscript for grammar and orthography. Examples include misspelling “Aesthetics” as “Aestetics,” “doctor of phylosophy” for “doctor of philosophy,” and inconsistent tenses. Ensure consistent capitalization of terms (e.g., focus group vs. Focus Group).
- Ethics statement: Even though the study involved only surveys and interviews, MDPI generally requires an Institutional Review Board (IRB) statement when human participants are involved. Please clarify whether ethical approval was obtained (e.g., from the University of Thessaly Research Ethics Committee, as noted for the focus groups), and describe how informed consent was obtained for both phases.
- References: Most citations are from the last five years, which aligns with MDPI guidelines. However, a few older references (e.g., 2014, 2007) are included. Replace older citations where newer evidence exists, or justify why seminal works are necessary. Ensure DOI links are correctly formatted and avoid duplicating prefixes (e.g., “https://doi.org/https://doi.org/...”).
By addressing these points—particularly by elaborating your methodology, refining the data presentation, and clarifying ethical and linguistic aspects—you will significantly improve the manuscript’s quality and its value to the field. I appreciate your efforts and look forward to seeing a revised version.
Author Response
Summary
We sincerely thank the reviewer for his/her time and effort in reviewing our manuscript. We greatly appreciate his/her encouraging feedback and critique of our work, and we have addressed all the points raised. The page numbers in our response correspond to the revised manuscript with marked-up corrections (additions in the text are marked in red font). Here, in our responses, all additions in the text are marked in italics.
Point-by-point response to Comments and Suggestions for Authors
- Study Design and Sampling
Comments 1: Focus Groups: Please specify the number of participants in each Focus Group and the recruitment channels. Clarifying whether parents were recruited from the same clinical sites as the healthcare professionals (HCPs) and whether any screening criteria (e.g., child age, disease severity) were applied will help readers assess selection bias. Additionally, provide demographic data (profession, years of experience, parental demographics) to contextualise the perspectives shared.
Response 1: We thank the reviewer for his/her comment. In our revised manuscript, we now included:
- In the section of Methods (subsection “Participants”) the following sentences:
Lines 95 to 99: “Recruitment was through an email invitation sent to public hospitals and private clinics across Greece, aiming to include HCPs actively involved in managing children with CSLDs. Parents were approached through the same clinical settings where the participating HCPs practised; in most cases, they were first informed about the study directly by their child’s clinician and were later contacted by the research team.”
- In the section of Results (subsection “Focus Group participants’ characteristics”) the following sentences:
Lines 193 to 197: “A total of thirteen individuals participated in the study. Seven HCPs (median age 51 years; interquartile range (IQR): 36–59) and six parents (median age 43 years; IQR: 39–48). Five out of seven HCPs (71.42%) worked at a public hospital, and four of them (57.14%) held an MSc in pediatrics. All parents were female. Participants’ demographic characteristics are presented in Table 1.”
Table 1. Demographic data collection for participants of Focus Group

| ID/Professions | Work experience | ID/Parents | Diagnosis | Child age |
|---|---|---|---|---|
| HCP1 / MD | >31 | P1 | Primary ciliary dyskinesia | 6-12 |
| HCP2 / MD | >31 | P2 | Bronchiectasis | 6-12 |
| HCP3 / PT | 21-30 | P3 | Bronchiectasis | 6-12 |
| HCP4 / PT | 11-20 | P4 | Cystic Fibrosis | 6-12 |
| HCP5 / PT | 11-20 | P5 | Cystic Fibrosis | 12-18 |
| HCP6 / MD | 5-10 | P6 | Bronchiectasis | 12-18 |
| HCP7 / PT | 5-10 | | | |

ID: identification; HCP: healthcare professional; MD: medical doctor; PT: physical therapist
Comments 2: Delphi panel: The manuscript mentions that 49 experts were invited in Round 1, and 44 responded in Round 2. Detail how the list of invitees was generated: Was it based on professional networks, publications, or organisational memberships? Explain how many paediatricians versus physiotherapists were nominated, the overall response rate, and whether non‑respondents differed systematically from respondents. Because expert selection critically affects Delphi outcomes, transparency here is essential.
Response 2: In the initial nomination phase, 50 experts were invited. From those, only one did not respond in Round 1. Given that, it was not feasible to assess systematic differences between responders and non-responders. In our revised manuscript, we now included:
- In the section of Methods (subsection “Sample and recruitment”) the following sentences:
Lines 128 to 135: “The Focus Group of HCPs was asked to nominate a list of 50 participants from their professional networks, including pediatricians and physiotherapists, who have adequate scientific training and clinical experience to take part in the Delphi process and possibly expand the sample size. Selecting participants was considered crucial for ensuring the validity of the study’s conclusions. Therefore, purposive sampling was used to target individuals with proven expertise and practical experience, relevant backgrounds related to the study topic, the ability to provide valuable insights, and a willingness to revise their initial or previous judgments to achieve the highest possible level of consensus.”
- In the section of Results (subsection “Delphi participants’ characteristics”) the following sentences:
Lines 201 to 208: “Of 50 participants, 49 (98%) responded in R1: 12 (24.5%) pediatricians and 37 (75.5%) physiotherapists. In total, 38 (77.6%) respondents were female, while 11 (22.4%) were male. Detailed demographics of participants for R1 are displayed in Table 2. Given that only one participant did not respond in R1, it was not possible to assess the systematic differences between responders and non-responders.
In R2, 44 of 50 participants (88%) responded; 10 (22.7%) pediatricians and 34 (77.3%) physiotherapists. Detailed demographics of participants for the R2 are displayed in Table 3.”
Comments 3: Geographic scope and generalizability: All participants appear to be based in Greece. Please acknowledge that cultural and healthcare‑system differences may limit transferability to other settings. Discuss how future studies might broaden the sample to include international experts, or plan to conduct cross‑cultural validations once a prototype is developed.
Response 3: Indeed, all participants were Greek, as the main aim was to develop a website for the Greek child population. Following your recommendations, we reported this fact in the Strengths and Limitations section of our revised manuscript:
Lines 526 to 536: “However, this effort was specifically targeted at a Greek-speaking population, which may limit the generalizability of the findings. Cultural and contextual factors, along with differences in healthcare infrastructure, funding models, and attitudes toward technology, could influence both the perceived relevance of digital health features and the feasibility of their implementation. In addition, while expert consensus was achieved, validation through end-user testing with children and caregivers is needed to confirm usability and long-term impact. Future research should broaden the sample to include international experts from diverse healthcare systems, enabling comparison of priorities across cultural and organisational contexts. Once a prototype is developed, cross-cultural adaptation and validation studies will be conducted to ensure that the tool is acceptable, relevant, and effective in varied international settings.”
- Methodological Detail
Comments 1: Derivation of Delphi items: You note that themes from the Focus Groups were converted into a list of 36 items for Round 1. Elaborate on this process: Which themes directly informed item wording? Were multiple researchers involved in coding and item generation to ensure reliability? Presenting a table mapping focus‑group themes to Delphi items (perhaps in the supplementary materials) would improve transparency.
Response 1: We report this in the Methods section (subsection “Formulation of the survey items”) of our revised manuscript:
Lines 139 to 147: “The 36 items of Round 1 were derived directly from the thematic analysis of the Focus Group discussions (Supplementary Tables S2a,b and S3a,b). Each main theme and its sub-themes were converted into specific, concise questionnaire items using wording that closely reflected the participants’ original phrasing. Two researchers (VS and AM) independently coded the transcripts and proposed candidate items. Discrepancies were resolved through discussion with a third senior researcher (EKο) to ensure reliability and content validity. A detailed mapping of Focus Group themes, representative quotes, and corresponding Delphi items is presented in Supplementary Table S4.”
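For illustration only, the following minimal sketch shows how Focus Group themes could be mapped to candidate Round 1 items in code. The item labels are taken from Table 4, but the theme assignments shown here are assumptions for demonstration; the authoritative mapping is the one reported in Supplementary Table S4.

```python
# Hypothetical theme-to-item mapping; the real mapping is in Supplementary Table S4.
theme_to_items = {
    "Information": ["Information (medical, physiotherapy, nutrition, exercise)"],
    "Aesthetics": ["Simple", "User friendly", "Colorful"],
    "Gamification": ["Gamification", "Imaginary friend (avatar)", "3D animation"],
    "Evidence": ["Evidence based content"],
    "Co-design": ["Co-design"],
}

# Flatten the mapping to obtain a candidate Round 1 item list.
round1_items = [item for items in theme_to_items.values() for item in items]
print(f"{len(round1_items)} candidate items derived from {len(theme_to_items)} themes")
```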
Comments 2: Consensus thresholds: You adopt thresholds of ≥80 % (consensus), 50–80 % (near consensus) and <50 % (no consensus) without justification. Please cite methodological references supporting these cut‑offs or explain why they were deemed appropriate for this context. Some Delphi studies use Likert scales with more points or set higher consensus thresholds.
Response 2: We acknowledge that consensus thresholds in Delphi surveys are not standardized and may vary across studies. While many COS development studies have adopted the 70/15 rule (≥70% rating an item as critical and ≤15% rating it as not important) as suggested in the COMET Handbook (Williamson et al., 2017), other thresholds have also been reported in the literature, ranging between 70% and 80% agreement (Sinha et al., 2011; Diamond et al., 2014). In our study, we adopted cut-offs of ≥80% (consensus), 50–80% (near consensus), and <50% (no consensus), in order to apply a more stringent criterion for defining agreement. This was considered appropriate given our relatively homogeneous panel and the importance of ensuring robust consensus in this specific context. In the revised manuscript, we now included:
In the section of the Methods (subsection “Data analysis”) the following sentences and supporting references:
Lines 174 to 180: “Although the COMET Handbook and other studies frequently define consensus as at least 70% of participants rating an outcome as critical and no more than 15% rating it as unimportant, there is no universally accepted standard for establishing consensus in Delphi surveys [18,19]. Thresholds between 70% and 80% have been used in previous studies [20]. We opted for the 80% cut-off to ensure more stringent agreement among participants, given the need for high confidence in prioritizing outcomes in this study.”
[18] Williamson, P.R.; Altman, D.G.; Bagley, H.; Barnes, K.L.; Blazeby, J.M.; Brookes, S.T.; Clarke, M.; Gargon, E.; Gorst, S.; Harman, N.; Kirkham, J.J.; McNair, A.; Prinsen, C.A.C.; Schmitt, J.; Terwee, C.B.; Young, B. The COMET Handbook: Version 1.0. Trials. 2017, 18 (Suppl 3), 280. https://doi.org/10.1186/s13063-017-1978-4
[19] Diamond, I.R.; Grant, R.C.; Feldman, B.M.; Pencharz, P.B.; Ling, S.C.; Moore, A.M.; Wales, P.W. Defining consensus: A systematic review recommends methodologic criteria for reporting of Delphi studies. J Clin Epidemiol. 2014, 67, 401–409. https://doi.org/10.1016/j.jclinepi.2013.12.002
[20] Sinha, I.P.; Smyth, R.L.; Williamson, P.R. Using the Delphi technique to determine which outcomes to measure in clinical trials: Recommendations for the future based on a systematic review of existing studies. PLoS Med. 2011, 8, e1000393. https://doi.org/10.1371/journal.pmed.1000393
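As a purely illustrative aid (a minimal sketch, not the study's analysis script), the adopted cut-offs can be expressed as a simple classification rule over the proportion of panellists rating an item as critical:

```python
# Classify one Delphi item against the thresholds described above:
# >=80% critical ratings = consensus, 50-80% = near consensus, <50% = no consensus.
def classify_consensus(critical_votes: int, total_responses: int) -> str:
    proportion = 100 * critical_votes / total_responses
    if proportion >= 80:
        return "consensus"
    if proportion >= 50:
        return "near consensus"
    return "no consensus"

# Example: 40 of 49 panellists (about 81.6%) rating an item as critical
# falls just above the 80% threshold and therefore counts as consensus.
print(classify_consensus(40, 49))  # -> consensus
```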
Comments 3: Treatment of partial agreement: The 3‑point scale distinguishes “limited importance”, “important but not critical” and “critical”. Yet Table 2 reports only the percentage of “critical” responses. What was done with the “important but not critical” ratings? Did these influence whether items progressed to Round 2? Clarify whether you calculated weighted scores or considered median values across the entire scale.
Response 3: We thank the reviewer for pointing out the lack of clarity in our first manuscript (Table 2). In the initial manuscript, only the proportion of “critical” responses was reported, which did not adequately reflect how ratings of “important but not critical” were treated. To address this, we have revised the Results section and replaced Table 2 with two tables (now Table 4 and Table 5), which were already included in the supplementary material. These tables present the full distribution of responses across all three categories of the Likert scale for Round 1 and Round 2, respectively. By integrating them into the main manuscript, we provide a more transparent view of how outcomes were scored, including intermediate ratings, and clarify that consensus thresholds were based on the proportion of “critical” responses in line with established Delphi methodology [16].
[16] De Meyer, D.; Kottner, J.; Beele, H.; Verhaeghe, S.; Van Hecke, A. Delphi procedure in core outcome set development: Rating scale and consensus criteria determined outcome selection. Journal of Clinical Epidemiology. 2019, 111, 23–31. https://doi.org/10.1016/j.jclinepi.2019.03.011
Table 4. Delphi Survey Round 1

Content of a website
Question 1: Do you believe that the following items related to the content could be incorporated into a website for children with CSLDs?

| # | Content | Disagree | Neutral | Agree |
|---|---|---|---|---|
| 1. | Information (medical, physiotherapy, nutrition, exercise) | 2.1 % | 2 % | 95.9 % |
| 2. | Simple | 2.1 % | 2 % | 95.9 % |
| 3. | User friendly | 2.1 % | 2 % | 95.9 % |
| 4. | Interactive (e.g., diary) | 2.1 % | 2 % | 95.9 % |
| 5. | Colorful | 2 % | 10.2 % | 87.8 % |
| 6. | Gamification | 2 % | 2.1 % | 95.9 % |
| 7. | Imaginary friend (avatar) | 4.1 % | 10.2 % | 85.7 % |
| 8. | 3D animation | 2.1 % | 16.3 % | 81.6 % |
| 9. | Evidence based content | 2.1 % | 2 % | 95.9 % |
| 10. | Co-design | 2 % | 10.2 % | 87.8 % |

Functions of a website
Question 2: Do you believe that the following functions could be incorporated into a website for children with CSLDs?

| # | Function | Disagree | Neutral | Agree |
|---|---|---|---|---|
| 1. | Reminders | 2 % | 8.2 % | 89.8 % |
| 2. | Sound notification | 4.1 % | 10.2 % | 85.7 % |
| 3. | Communication form | 2 % | 18.4 % | 79.6 % |
| 4. | User manual | 2 % | 8.2 % | 89.8 % |
| 5. | Updates | 2 % | 10.2 % | 87.8 % |
| 6. | Modular | 2.1 % | 22.4 % | 75.5 % |
| 7. | Calendar/Weekly questionnaire | 2 % | 8.2 % | 89.8 % |
| 8. | Evaluation form | 8.2 % | 14.2 % | 77.6 % |
| 9. | Telephone support | - | 8.2 % | 91.8 % |
| 10. | | 2 % | 12.3 % | 85.7 % |
| 11. | Chat box | 4.1 % | 16.3 % | 79.6 % |
| 12. | Social media | 10.2 % | 14.3 % | 75.5 % |
| 13. | Diary | 6.2 % | 12.2 % | 81.6 % |
| 14. | Child’s training | - | 10.2 % | 89.8 % |
| 15. | Parent’s training | - | 6.1 % | 93.9 % |
| 16. | Healthcare professionals’ training | - | 6.1 % | 93.9 % |
| 17. | Video | - | 2 % | 98 % |
| 18. | Multilingual | - | 10.2 % | 89.8 % |
| 19. | Compatible with other devices | - | 6.1 % | 93.9 % |
| 20. | Interaction between patients and healthcare professionals | - | 4.1 % | 95.9 % |
| 21. | Image uploading | - | 4.1 % | 95.9 % |
| 22. | Portable forms (pdf) | - | 8.2 % | 91.8 % |
| 23. | Easy navigation | - | 4.1 % | 95.9 % |
| 24. | Language adaptability | - | 4.1 % | 95.9 % |
| 25. | Strict privacy policies | 2 % | 10.2 % | 87.8 % |
| 26. | Free supply | 2.1 % | 2 % | 95.9 % |

CSLDs: chronic suppurative lung diseases
Table 5. Delphi Survey Round 2

Content of a website
Question 1: Do you believe that the following items related to the content could be incorporated into a website for children with CSLDs?

| # | Content | Disagree | Neutral | Agree |
|---|---|---|---|---|
| 1. | Electronic leaflets* | 8.2 % | 42.8 % | 49 % |

Functions of a website
Question 2: Do you believe that the following functions could be incorporated into a website for children with CSLDs?

| # | Function | Disagree | Neutral | Agree |
|---|---|---|---|---|
| 1. | Communication Form | 4.6 % | 13.6 % | 81.8 % |
| 2. | Modular | 4.1 % | 12.2 % | 83.7 % |
| 3. | Evaluation Form | 2.3 % | 15.9 % | 81.8 % |
| 4. | Chat Box | 2.3 % | 13.6 % | 84.1 % |
| 5. | Social media | 4.6 % | 13.6 % | 81.8 % |
| 6. | Live video conference* | 16.3 % | 55.1 % | 28.6 % |
| 7. | Emergency button* | 14.2 % | 42.9 % | 42.9 % |
| 8. | Communication Form | 4.6 % | 13.6 % | 81.8 % |

CSLDs: chronic suppurative lung diseases; * indicates the items suggested from round 1.
Comments 4: Statistical analyses: At minimum, provide measures of central tendency and variability for demographic variables (age, years of experience) and test whether paediatricians and physiotherapists differed significantly in their ratings. Non‑parametric tests (Mann–Whitney U or Fisher’s exact test) are suitable. Without inferential statistics, it is difficult to judge whether, for instance, physiotherapists prioritised gamification more than paediatricians, or whether gender influenced responses.
Response 4: In our revised manuscript, we provide measures of central tendency and variability for demographic variables in Tables 2 and 3. For clarity, we separated Table 1 of our first manuscript into two tables, presenting Round 1 and Round 2 demographics separately. Differences between the demographic characteristics of pediatricians and physiotherapists are also presented. Thus, we modified:
- In the section of Method (subsection “Data analysis”) the following sentences:
Lines 182 to 188: “The Kolmogorov–Smirnov test was used to assess the normality of the distribution. All continuous variables (e.g., demographic characteristics) were expressed as mean ± standard deviation and median (interquartile ranges). Categorical variables (e.g., level of agreement, response rate) were presented as numbers (n) and percentages (%). Depending on the distribution, the independent t-test, Mann–Whitney U test, and Fisher’s exact test were used to explore differences between demographic characteristics.”
- In the section of Results (subsection “Delphi participants’ characteristics”) the following sentences:
Lines 201 to 205: “Of 50 participants, 49 (98%) responded in R1: 12 (24.5%) pediatricians and 37 (75.5%) physiotherapists. In total, 38 (77.6%) respondents were female, while 11 (22.4%) were male. Detailed demographics of participants for R1 are displayed in Table 2. Given that only one participant did not respond in R1, it was not possible to assess the systematic differences between responders and non-responders.”
Lines 206 to 208: “In R2, 44 of 50 participants (88%) responded; 10 (22.7%) pediatricians and 34 (77.3%) physiotherapists. Detailed demographics of participants for the R2 are displayed in Table 3.”
Furthermore, we split Table 1 of our first manuscript into two tables, as follows:
Table 2. Demographic data collection for participants of Round 1 (n=49)

| Characteristics | Pediatrician | Physiotherapist | p-value |
|---|---|---|---|
| Number (n) | 12 (24.5%) | 37 (75.5%) | <0.001* |
| Age (years) | 37.5 (35.5 – 58.25) | 42.76 ± 9.15 | 0.598 |
| Work experience (years) | 12 (10 – 34.5) | 18.59 ± 8.65 | 0.600 |
| Gender (male/female) | 1/11 | 10/27 | 0.252 |
| Level of education | | | |
| MSc | 8 (66.7%) | 26 (70.3%) | |
| PhD | 4 (33.3%) | 7 (18.9%) | |
| Employer | | | |
| Private clinic | 3 (25%) | 26 (70.3%) | |
| Public hospital | 9 (75%) | 11 (29.7%) | |
| MSc in pediatrics | 9 (75%) | 13 (35%) | |
| Publications related to pediatrics | 9 (75%) | 10 (27%) | |

Data are presented as mean ± SD, median (IQR: Q1-Q3), numbers (n), and % percentage; MSc: masters; PhD: doctor of philosophy; SD: standard deviation; IQR: interquartile range; *statistically significant differences (p<0.001)
Table 3. Demographic data collection for participants of Round 2 (n=44)

| Characteristics | Pediatrician | Physiotherapist | p-value |
|---|---|---|---|
| Number (n) | 10 (22.7%) | 34 (77.3%) | <0.001* |
| Age (years) | 46.2 ± 12.53 | 42.97 ± 9.34 | 0.464 |
| Work experience (years) | 13 (9.5 – 35) | 18.77 ± 8.9 | 0.707 |
| Gender (male/female) | 1/9 | 10/24 | 0.408 |
| Level of education | | | |
| MSc | 8 (80%) | 24 (70.6%) | |
| PhD | 2 (20%) | 6 (17.6%) | |
| Employer | | | |
| Private clinic | 2 (20%) | 24 (70.6%) | |
| Public hospital | 8 (80%) | 10 (29.4%) | |
| MSc in pediatrics | 8 (80%) | 11 (32.4%) | |
| Publications related to pediatrics | 8 (80%) | 8 (23.5%) | |

Data are presented as mean ± SD, median (IQR: Q1-Q3), numbers (n), and % percentage; MSc: masters; PhD: doctor of philosophy; SD: standard deviation; IQR: interquartile range; *statistically significant differences (p<0.001)
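For illustration (a minimal sketch under assumed data, not the authors' analysis code), the kinds of comparison described in the Data analysis subsection could be run in Python with scipy roughly as follows; the age values below are placeholders, while the gender-by-profession counts mirror Table 2.

```python
from scipy import stats

# Placeholder ages (years); the real data are summarised in Tables 2 and 3.
pediatrician_age = [36, 37, 38, 55, 60, 62]
physiotherapist_age = [30, 35, 40, 42, 45, 50, 55]

# Normality check: Kolmogorov-Smirnov against a normal distribution
# parameterised by the sample mean and standard deviation (a simple approximation).
ks_stat, ks_p = stats.kstest(
    physiotherapist_age, "norm",
    args=(stats.tmean(physiotherapist_age), stats.tstd(physiotherapist_age)),
)

# Non-parametric comparison of a continuous variable between the two groups.
u_stat, u_p = stats.mannwhitneyu(pediatrician_age, physiotherapist_age)

# Fisher's exact test on the 2x2 gender-by-profession table (counts from Table 2).
#                 male  female
contingency = [[1, 11],   # pediatricians
               [10, 27]]  # physiotherapists
odds_ratio, fisher_p = stats.fisher_exact(contingency)

print(f"KS p={ks_p:.3f}, Mann-Whitney p={u_p:.3f}, Fisher p={fisher_p:.3f}")
```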
- Results Presentation
Comments 1: Focus‑group findings: The qualitative themes—information, aesthetics, gamification, evidence, co‑design, technical support, training, remote monitoring, accessibility and multilingual access—are relevant and well supported by participant quotes. To strengthen the narrative:
- Indicate how many participants expressed each subtheme (e.g., “Seven of eight parents emphasised the importance of colorful design”).
- Include more representative quotes from parents as well as HCPs, since the former’s voices seem less prominent in the current text. For instance, parents’ concerns about recalling instructions and their desire for an “imaginary friend” avatar are insightful.
Response 1: To strengthen the narrative, we incorporated the reviewer’s constructive suggestions in the section of Results (subsection “Focus Group results”). The corresponding changes are marked in red font in the revised manuscript. Lines 258 to 348.
Comments 2: Table 2: This large table is difficult to interpret. Consider reorganising by grouping related items under thematic subheadings (e.g., Content, Aesthetics, Gamification, Support). Provide the number of respondents for each professional group in each round and, if possible, include a simple summary (e.g., how many items achieved consensus, near consensus, no consensus). Adding columns for median ratings or standard deviations would offer more nuance than percentages alone.
Response 2: We appreciate the reviewer’s suggestion regarding the presentation of Delphi results. In the revised manuscript, we have clarified the reporting of Round 1 and Round 2 outcomes by including Tables 4 and 5, which show the complete distribution of responses across the three Likert scale categories. These tables, already available in the supplementary material, have now been integrated into the main text to improve transparency and clarity.
Comments 3: Supplementary materials: Summarise key findings from Tables S4 and S5 in the main text. For example, explain which three new items were suggested by participants and why they were accepted or rejected in Round 2. Highlight any items that failed to reach consensus after two rounds and offer a rationale (e.g., the role of 3D animation may warrant further investigation).
Response 3: Indeed, we moved Tables S4 and S5 from the supplementary material into the main text of the revised manuscript (now Tables 4 and 5), thereby presenting the full distribution of ratings across rounds.
In the revised manuscript, we now included:
In the section of the Results (subsection “Delphi results”) the following sentences:
Lines 373 to 377: “In R2 of the questionnaire, eight items were assessed in total. Of these, five reached the consensus threshold (≥80%) and were retained, while three did not achieve consensus: electronic leaflets, live video conference and emergency button. The lack of consensus for certain innovative digital features suggests that these areas remain uncertain and may warrant further investigation in future research.”
- Discussion and Interpretation
Comments 1: Integration with existing literature: Your discussion notes the importance of evidence‑based content, gamification, and co‑design, drawing on general mHealth research. Deepen this by comparing your findings with similar initiatives in cystic fibrosis, asthma or other chronic pediatric conditions. What lessons from those interventions could inform the development of a CSLD website? For example, previous studies have shown that gamified breathing exercises improve adherence in cystic fibrosis—could such evidence support your emphasis on 3D animations and avatars?
Response 1: In our revised manuscript, we now included:
In the section of Discussion, the following sentences:
Lines 415 to 427: “Similar approaches have been applied in digital health interventions for other chronic pediatric conditions, offering valuable insights for CSLD website development. For instance, gamified breathing exercise programs in cystic fibrosis have been shown to improve treatment adherence and engagement, particularly when incorporating visual feedback and progressive challenges [10,25]. In asthma management, mobile platforms integrating symptom diaries, medication reminders, and interactive educational modules have demonstrated increased self-management skills and reduced exacerbations [11]. These examples support our emphasis on 3D animations, avatars, and interactive features as strategies to sustain children’s interest and improve adherence to daily physiotherapy. Furthermore, co-design processes used in cystic fibrosis and pediatric obesity interventions have resulted in more acceptable and user-friendly tools, suggesting that the inclusion of children and parents in the design of a CSLD-specific website is likely to enhance its long-term adoption and effectiveness.”
Comments 2: Actionable next steps: The manuscript describes a conceptual framework but stops short of detailing how the website will be built, tested and implemented. Consider outlining planned phases: prototype development, usability testing with children and parents, pilot implementation, and eventual efficacy trials. This demonstrates that the Delphi outcomes will translate into concrete digital health solutions.
Response 2: In our revised manuscript, we now included:
In the section of Discussion, the following sentences:
Lines 511 to 518: “To ensure that the consensus findings translate into a practical digital health tool, the next phase will involve developing a prototype website incorporating the agreed-upon content and functions. This will be followed by usability testing with children and parents to refine design and features, a pilot implementation in selected clinical settings to assess engagement and feasibility, and finally, efficacy trials to evaluate the impact on treatment adherence, health outcomes, and quality of life. This staged approach will facilitate the creation of an evidence-informed, user-centered platform ready for broader clinical integration.”
Comments 3: Limitations: While you mention the Greek‑language sample and lack of end‑user testing, please discuss other limitations: (i) potential biases due to self‑selection of experts; (ii) absence of a third Delphi round, which some researchers use to stabilise consensus; (iii) reliance on self‑reported importance without behavioural validation; and (iv) possible overrepresentation of physiotherapists (75 % of respondents), which may skew priorities toward physiotherapy content.
Response 3: We thank the reviewer for their mention of these limitations. However, participants were not self-selected; purposive sampling was applied to ensure inclusion of experts with predefined criteria (see Lines 124 to 135). In our revised manuscript, we added the following sentences in the section of the Strengths and Limitations.
Lines 537 to 546: “The Delphi process was limited to two rounds; although this is a common approach, some researchers recommend a third round to further stabilise consensus, which could have refined the prioritisation of items. The study also relied on self-reported ratings of importance without behavioural validation, meaning that stated preferences may not always translate into actual usage or adherence in practice. Finally, physiotherapists represented 75% of respondents, which may have skewed the prioritised content towards physiotherapy-related elements. However, as the scope of this study was disease management rather than exclusively medical management, the strong representation of physiotherapists also reflects their central role in comprehensive care, while highlighting the need for more balanced multidisciplinary input in future work.”
- Language, Formatting and Ethics
Comments 1: Language: Edit the manuscript for grammar and orthography. Examples include misspelling “Aesthetics” as “Aestetics,” “doctor of phylosophy” for “doctor of philosophy,” and inconsistent tenses. Ensure consistent capitalization of terms (e.g., focus group vs. Focus Group).
Response 1: We apologize for the errors. Thus, the revised manuscript has been carefully edited for grammar, spelling, and consistency in terminology and capitalization.
Comments 2: Ethics statement: Even though the study involved only surveys and interviews, MDPI generally requires an Institutional Review Board (IRB) statement when human participants are involved. Please clarify whether ethical approval was obtained (e.g., from the University of Thessaly Research Ethics Committee, as noted for the Focus Groups), and describe how informed consent was obtained for both phases.
Response 2: This ethics statement is addressed in the Institutional Review Board Statement and Informed Consent Statement sections of the revised manuscript, as follows:
Lines 571 to 575: The study was conducted in accordance with the Declaration of Helsinki and the protocol was approved by the Ethics Committee of Physiotherapy Department of the University of Thessaly (protocol number: 14892/12-07-24). Informed consent was obtained from all subjects involved in the study.
For the Focus Groups, participants provided electronic signed consent forms before taking part in the discussions.
For the online survey phase, a consent form was presented at the start of the online survey, and a tick box was added to confirm consent.
Comments 3: References: Most citations are from the last five years, which aligns with MDPI guidelines. However, a few older references (e.g., 2014, 2007) are included. Replace older citations where newer evidence exists or justify why seminal works are necessary. Ensure DOI links are correctly formatted and avoid duplicating prefixes (e.g., “https://doi.org/https://doi.org/...”).
Response 3: We thank the reviewer for this observation. While the majority of our citations are from the last five years, we have retained two older references because they remain seminal works in their respective methodological areas and continue to be widely cited in recent literature.
Krueger & Casey (2014) is a standard methodological guide for designing, conducting, and analysing Focus Groups. This text is still considered a core reference in qualitative research training and is recommended in current applied research protocols. Its structured, practical approach to data collection and analysis remains directly relevant to the present study.
Hsu & Sandford (2007) provides one of the most concise and widely referenced explanations of the Delphi technique and consensus measurement. Despite its publication date, it is consistently cited in contemporary Delphi studies across healthcare and social sciences because it distills methodological principles that have not changed over time. https://doi.org/10.7275/pdz9-th90
Given their ongoing relevance, authoritative status, and frequent citation in high-quality recent research, we believe that replacing these works would not add methodological value. All DOI links have been checked and reformatted to ensure accuracy.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
Reviewer's comments on the article: “Development of a website for e-health use in children with chronic suppurative lung diseases: A Delphi expert consensus study”
Key remarks and recommendations:
- The Results section provides a great deal of information but could be presented more effectively for clarity. The qualitative data from the focus groups (Section 3.2) is well-structured, with thematic analysis and illustrative quotes. However, it lacks a direct link to the Delphi survey items presented in Table 2. The manuscript states that the focus groups established 10 main themes, but the exact number of items derived from this qualitative phase is not specified. It would be beneficial to explicitly connect the themes (e.g., "Information," "Gamification," "Co-design") with the specific items in the Delphi survey (e.g., "Information (medical, physiotherapy, nutrition, exercise)," "Gamification," "Co-design"). This would make the flow from qualitative discovery to quantitative validation more transparent.
A significant issue lies in the presentation of the Delphi results (Section 3.3 and Table 2). The manuscript states that "31 items that reached a consensus level of ≥80% were included in the final questionnaire," and "Five items with a consensus level between 50–80% were re-evaluated in the second round." This is contradictory to the typical Delphi methodology where items with high consensus are usually removed from subsequent rounds, and only those with moderate or low consensus are re-evaluated. The current presentation makes it seem as though 31 items were kept in the "final questionnaire," but only 8 items were in the second round. This needs to be clarified. It is likely that the 31 items that reached consensus in Round 1 were considered final and thus not re-evaluated, while the 5 items in the 50-80% range (and the 3 new ones) were moved to Round 2. The wording should be corrected to reflect this standard practice.
The data in Table 2 itself also requires more precise labeling. The table is titled "The percentage of participants (Pediatricians and Physiotherapists) that scored an outcome as critical (score: 3)," yet the consensus is defined as "agreement equal to or greater than 80%." It is unclear if a consensus of 80% means 80% of participants rated the item as "3 (critical)," or if it is a combined rating of "2 (important but not critical)" and "3 (critical)." This ambiguity makes the interpretation of the consensus findings difficult. For instance, for item #8 ("3D animation") in Round 1, the total score is 81.6%, which is above the 80% consensus threshold, but the individual scores for pediatricians (66.7%) and physiotherapists (86.5%) are quite different. A more detailed explanation of how these percentages and the consensus score were calculated is needed for a more robust analysis.
- The Discussion section is well-structured and aligns the study's findings with existing literature. However, it could be improved by providing a more critical analysis of the specific results from the Delphi survey. For example, while the discussion highlights the strong consensus on certain items, it does not critically analyze why some items, like "Electronic leaflets," "Live video conference," and "Emergency button" (Table 2), failed to reach consensus. A brief discussion on the potential reasons for this lack of agreement would add significant depth to the paper.
The limitations are mentioned in the Discussion section but could be expanded upon in a dedicated subsection for clarity. The limitations mentioned (e.g., "limited to Greek-speaking healthcare professionals") are valid, but the manuscript could also acknowledge the potential for cultural bias in the qualitative findings from the focus groups, given the specific geographical context.
- Finally, the Conclusion section could be more impactful by summarizing the most critical and unique findings of the study. While it mentions key features like gamification and multilingual access, it could also highlight the most significant takeaway, such as the successful integration of both parent and HCP perspectives, or the specific design elements that were prioritized by this expert panel. This would provide a stronger closing statement on the study's contribution to the field.
Comments for author File: Comments.pdf
Author Response
Summary
We sincerely thank the reviewer for his/her time and effort in reviewing our manuscript. We greatly appreciate his/her encouraging feedback and critique of our work, and we have addressed all the points raised. The page numbers in our response correspond to the revised manuscript with marked-up corrections (additions in the text are marked in red font). Here, in our responses, all additions in the text are marked in italics.
Point-by-point response to Comments and Suggestions for Authors
Comments 1: The Results section provides a great deal of information but could be presented more effectively for clarity. The qualitative data from the Focus Groups (Section 3.2) is well-structured, with thematic analysis and illustrative quotes. However, it lacks a direct link to the Delphi survey items presented in Table 2. The manuscript states that the Focus Groups established 10 main themes, but the exact number of items derived from this qualitative phase is not specified. It would be beneficial to explicitly connect the themes (e.g., "Information," "Gamification," "Co-design") with the specific items in the Delphi survey (e.g., "Information (medical, physiotherapy, nutrition, exercise)," "Gamification," "Co-design"). This would make the flow from qualitative discovery to quantitative validation more transparent.
Response: We thank the reviewer for his/her comment. In our revised manuscript, we now included:
In the section of Results (subsection “Delphi results”) the following sentences:
Lines 357 to 362: “A total of 36 Delphi survey items were derived directly from the thematic analysis of the Focus Group discussions. Each main theme (e.g., Information) was mapped to specific survey items (e.g., medical, physiotherapy, nutrition, exercise), ensuring continuity between the qualitative and quantitative phases. A detailed mapping of themes, sub-themes, representative quotes, and corresponding Delphi items is provided in Supplementary Table S4.”
A significant issue lies in the presentation of the Delphi results (Section 3.3 and Table 2). The manuscript states that "31 items that reached a consensus level of ≥80% were included in the final questionnaire," and "Five items with a consensus level between 50–80% were re-evaluated in the second round." This is contradictory to the typical Delphi methodology where items with high consensus are usually removed from subsequent rounds, and only those with moderate or low consensus are re-evaluated. The current presentation makes it seem as though 31 items were kept in the "final questionnaire," but only 8 items were in the second round. This needs to be clarified. It is likely that the 31 items that reached consensus in Round 1 were considered final and thus not re-evaluated, while the 5 items in the 50-80% range (and the 3 new ones) were moved to Round 2. The wording should be corrected to reflect this standard practice.
Response: We apologise for confusing the reviewer with our statement. In our revised manuscript, we added the following sentences in the section of the Results.
Lines 363 to 368: “Of the total 36 items included in the first-round questionnaire (Table 4), 31 items reached a consensus level of ≥80% and were considered final; these were not re-evaluated in the R2. The remaining five items, which had a consensus level between 50–80%, were carried forward to R2 for re-evaluation. Furthermore, in R1 participants suggested three new items through the open-ended responses, which were also included in the R2 questionnaire (Table 5).”
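To make this progression rule concrete, here is a hedged sketch (not the authors' actual workflow) that classifies Round 1 items by agreement level and assembles the Round 2 questionnaire; the item names and percentages are taken from Tables 4 and 5.

```python
# Items at >=80% agreement in Round 1 are final; items at 50-80% are carried
# into Round 2 together with new items suggested through open-ended responses.
def build_round2(round1_agreement, new_items):
    carried_forward = [item for item, pct in round1_agreement.items() if 50 <= pct < 80]
    return carried_forward + new_items

round1 = {
    "Communication form": 79.6, "Modular": 75.5, "Evaluation form": 77.6,
    "Chat box": 79.6, "Social media": 75.5,
    "Video": 98.0,  # example of an item finalised in Round 1 and not re-rated
}
suggested = ["Electronic leaflets", "Live video conference", "Emergency button"]

round2_items = build_round2(round1, suggested)
print(len(round2_items), round2_items)  # 8 items: five carried forward plus three new
```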
The data in Table 2 itself also requires more precise labeling. The table is titled "The percentage of participants (Pediatricians and Physiotherapists) that scored an outcome as critical (score: 3)," yet the consensus is defined as "agreement equal to or greater than 80%." It is unclear if a consensus of 80% means 80% of participants rated the item as "3 (critical)," or if it is a combined rating of "2 (important but not critical)" and "3 (critical)." This ambiguity makes the interpretation of the consensus findings difficult. For instance, for item #8 ("3D animation") in Round 1, the total score is 81.6%, which is above the 80% consensus threshold, but the individual scores for pediatricians (66.7%) and physiotherapists (86.5%) are quite different. A more detailed explanation of how these percentages and the consensus score were calculated is needed for a more robust analysis.
Response: We thank the reviewer for this helpful observation. To address the ambiguity in the original Table 2 of our first manuscript, we have removed it and replaced it with Tables 4 and 5 in our revised manuscript (reproduced in full in our response to Reviewer 1 above), which report the complete distribution of responses across the three rating categories for each round.
Comments 2: The Discussion section is well-structured and aligns the study's findings with existing literature. However, it could be improved by providing a more critical analysis of the specific results from the Delphi survey. For example, while the discussion highlights the strong consensus on certain items, it does not critically analyze why some items, like "Electronic leaflets," "Live video conference," and "Emergency button" (Table 2), failed to reach consensus. A brief discussion on the potential reasons for this lack of agreement would add significant depth to the paper.
Response: The reviewer is right. In our revised manuscript, we added the following sentences in the section of the Discussion:
Lines 502 to 510: “While the majority of items reached strong consensus, certain proposed features did not meet the ≥80% threshold. For example, “Electronic leaflets” and “Live video conference” received lower agreement, possibly reflecting concerns about their added value and the considerable resources required for implementation and ongoing support. Similarly, the “emergency button” function concerns a technical issue which, although it may be practically useful, is usually not included in websites whose main objective is to create informative material. These lower consensus levels suggest that participants prioritised core self-management and communication functions over supplementary or resource-intensive features.”
The limitations are mentioned in the Discussion section, but it could be expanded upon in a dedicated subsection for clarity. The limitations mentioned (e.g., "limited to Greek-speaking healthcare professionals") are valid, but the manuscript could also acknowledge the potential for cultural bias in the qualitative findings from the Focus Groups, given the specific geographical context.
Response: We thank the reviewer for this valuable suggestion. In response, we created a dedicated Strengths and Limitations subsection for clarity.
We also expanded the limitations to acknowledge that the qualitative findings from the Focus Groups may reflect cultural or contextual factors specific to the Greek healthcare setting, which could limit their transferability to different sociocultural environments. This addition complements the previously mentioned issues of language and health system differences, further clarifying the boundaries of generalisability.
In our revised manuscript, we added the following sentences:
Lines 526 to 530: “However, this effort was specifically targeted at a Greek-speaking population, which may limit the generalizability of the findings. Cultural and contextual factors, along with differences in healthcare infrastructure, funding models, and attitudes toward technology, could influence both the perceived relevance of digital health features and the feasibility of their implementation.”
Comments 3: Finally, the Conclusion section could be more impactful by summarizing the most critical and unique findings of the study. While it mentions key features like gamification and multilingual access, it could also highlight the most significant takeaway, such as the successful integration of both parent and HCP perspectives, or the specific design elements that were prioritized by this expert panel. This would provide a stronger closing statement on the study's contribution to the field.
Response: The reviewer is right. In our revised manuscript, we now included:
In the section of Conclusions, the following sentences:
Lines 548 to 557: “This study identified and prioritised the essential features for a CSLD-focused digital health platform through a rigorous, two-phase Delphi process informed by Focus Group insights from both parents and healthcare professionals. The integration of these complementary perspectives ensured that the proposed design reflects both clinical priorities and family needs. Features receiving the highest consensus included evidence-based content, engaging gamification elements such as 3D animations and avatars, multilingual access, and personalised reminders. By combining user-centred design principles with expert consensus, this work provides a concrete roadmap for developing an accessible, engaging, and clinically relevant tool to support the long-term management of CSLDs in children.”
Author Response File: Author Response.pdf
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
The article looks ok now.