ICT-Enabled Education for Sustainability Justice in South East Asian Universities
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The research paper addresses a timely and significant topic, focusing on the various factors contributing to sustainability education in Asian universities. While the topic is engaging, there are several concerns I believe need addressing before considering it for publication.
The literature review is pertinent and current. However, I noticed some missing citations. Claims are made in the summary of the literature review without corresponding references. Please ensure you add appropriate citations to substantiate your statements.
The paper lacks details regarding the sampling technique employed. Was the sample selected randomly for specific groups, or were all students invited to participate, with only those who volunteered included in the study?
The instruments used in the study were either developed, adopted, or adapted, but there is no information provided about their sources. Additionally, details about the Likert scale used are missing.
I find it challenging to understand the authors' claim that "83% had declared good knowledge of ICTs, 2% poor, and 15% minimal." There is no explanation provided regarding how the authors assessed the participants' ICT knowledge levels.
I am puzzled by the decision to conduct factor analysis separately for each of the four scales. Typically, factor analysis aims to categorize items into distinct variables, and the entire questionnaire (excluding demographics) should be analyzed together. Please clarify why separate factor analyses were performed for each scale.
The data appears to be skewed, particularly with a significant difference between male and female participants. Highlighting gender differences without justification is not appropriate, especially given the unequal distribution of male and female participants. However, you may acknowledge this as a limitation of the study.
I suggest the authors include information about the novelty of their research and, if possible, provide a figure to illustrate the construct.
Furthermore, it would be beneficial to include sections on recommendations for future research and limitations of the study separately.
Author Response
Dear Reviewer,
Thank you very much for your meaningful comments which added value to the quality of the paper. Our answers are as follows:
The literature review is pertinent and current. However, I noticed some missing citations. Claims are made in the summary of the literature review without corresponding references. Please ensure you add appropriate citations to substantiate your statements.
ANSWER: The “summary of the literature” refers back to the literature review in the introduction and does not introduce new citations. To make this clear, the text now begins with: “Our literature review presented in the introduction showed that education for sustainability enabled by ICTs (ICTeEfS) …”
The paper lacks details regarding the sampling technique employed. Was the sample selected randomly for specific groups, or were all students invited to participate, with only those who volunteered included in the study?
ANSWER: It is a purposive sample targeted at students who were going to attend the courses to be revised. An explanation has been added to the text, highlighted in red.
The instruments used in the study were either developed, adopted, or adapted, but there is no information provided about their sources. Additionally, details about the Likert scale used are missing.
ANSWER: Details are inserted in the text highlighted in red.
I find it challenging to understand the authors' claim that "83% had declared good knowledge of ICTs, 2% poor, and 15% minimal." There is no explanation provided regarding how the authors assessed the participants' ICT knowledge levels.
ANSWER: It is natural to find such a high percentage because 1) the target universities have a policy of basic ICT literacy for all students, and 2) students have already acquired basic ICT literacy in upper secondary school before entering university. The assessment of ICT literacy was based on a self-reported question, which is also noted as a limitation at the end of the discussion section.
I am puzzled by the decision to conduct factor analysis separately for each of the four scales. Typically, factor analysis aims to categorize items into distinct variables, and the entire questionnaire (excluding demographics) should be analyzed together. Please clarify why separate factor analyses were performed for each scale.
ANSWER: The reason we carried out a separate factor analysis for each of the four scales is that the items of these constructs had been piloted and validated in another context, so there was a sound theoretical basis for each scale. For the same reason we used Principal Component Analysis, to test whether the validity of each scale replicated in our sample.
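For illustration only, the following minimal Python sketch shows what a separate Principal Component Analysis per previously validated scale could look like. It is not the authors' analysis code: the input file and item column names are hypothetical, and only three of the four scales are shown, using the abbreviations that appear in the paper's tables (the fourth scale would be handled identically).

```python
# Minimal sketch: one PCA per previously validated scale, rather than a
# single factor analysis over the whole questionnaire. Column names and
# the CSV file are hypothetical placeholders.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

scales = {
    "CRIREFLECT": ["cr1", "cr2", "cr3", "cr4"],
    "SUSTJUST":   ["sj1", "sj2", "sj3", "sj4"],
    "TRANSTEB":   ["tt1", "tt2", "tt3", "tt4"],
}

responses = pd.read_csv("survey_responses.csv")  # hypothetical data file

for name, items in scales.items():
    X = StandardScaler().fit_transform(responses[items])
    pca = PCA().fit(X)
    # A dominant first component suggests that the unidimensional structure
    # found in the original validation study replicates in this sample.
    print(name, pca.explained_variance_ratio_.round(2))
```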
The data appears to be skewed, particularly with a significant difference between male and female participants. Highlighting gender differences without justification is not appropriate, especially given the unequal distribution of male and female participants. However, you may acknowledge this as a limitation of the study.
ANSWER: You are right that the gender distribution is skewed, and we have acknowledged this in the limitations, highlighted in red.
I suggest the authors include information about the novelty of their research and, if possible, provide a figure to illustrate the construct.
ANSWER: This has been done both in the introduction and in other parts of the text. Additionally, we have inserted a paragraph in the conclusion, highlighted in red.
Furthermore, it would be beneficial to include sections on recommendations for future research and limitations of the study separately.
ANSWER: We have a substantial paragraph on limitations and suggestions for future research at the end of the discussion section. We believe that turning it into a separate numbered section would not add anything, since it is already quite visible.
Reviewer 2 Report
Comments and Suggestions for Authors
The topic is very interesting and I was engaged in reading it.
However, I found some minor issues that could be addressed.
1. The research question related to ICT for EfS is not well framed in the previous research. Namely, the authors highlight the importance of EfS and the previous research in the field, but do not support the need for using ICT for that purpose. Somehow, the RQ arises from the authors' own need rather than from the literature. My suggestion is to make this point clear. I see two ways: the authors provide references in that direction, or they make the novelty of their study explicit.
2. From a methodological point of view, it is not clear whether the questionnaire used was validated in the literature or designed by the authors for this study. Please make this explicit.
This doubt arises from the questions reported in the following sections, in particular those in Table 1. From those items, I infer that the students self-reported their level of "learning". This is not a problem per se, but I wonder how the authors handle the self-reported data, from both a reliability and an ethical point of view.
3. I struggled a bit to follow the statistical analysis. I strongly suggest expanding the Data Analyses section to report the meaning of all indicators used in the following sections. Moreover, the section could be renamed Methods of Analysis.
Author Response
Dear Reviewer,
Thank you very much for your meaningful comments which added value to the quality of the paper. Our answers are as follows:
- The research question related to ICT for EfS is not well framed in the previous research. Namely, the authors highlight the importance of EfS and the previous research in the field, but do not support the need for using ICT for that purpose. Somehow, the RQ arises from the authors' own need rather than from the literature. My suggestion is to make this point clear. I see two ways: the authors provide references in that direction, or they make the novelty of their study explicit.
ANSWER: Thank you for your suggestion; references were added.
- From a methodological point of view, it is not clear whether the questionnaire used was validated in the literature or designed by the authors for this study. Please make this explicit.
ANSWER: It has been clarified that the instruments were designed for this study.
This doubt arises from the questions reported in the following sections, in particular those in Table 1. From those items, I infer that the students self-reported their level of "learning". This is not a problem per se, but I wonder how the authors handle the self-reported data, from both a reliability and an ethical point of view.
ANSWER: Self-reported questionnaires are commonly used in educational settings.
- I struggled a bit to follow the statistical analysis. I strongly suggest expanding the Data Analyses section to report the meaning of all indicators used in the following sections. Moreover, the section could be renamed Methods of Analysis.
ANSWER: Thank you for the suggestion; the Data Analysis section was expanded to report the meaning of all indicators used in the following sections.
ANSWER: The section was renamed Methods of Analysis.
Reviewer 3 Report
Comments and Suggestions for Authors
Overall, the article leaves a good impression. Interesting research was carried out and high-quality statistical processing was done.
The shortcomings of the article include the following.
The text preceding Table 7 talks about groups by area of study. These groups should be named, since only groups 4, 5 and 7 are designated. Then it will become clear how many groups there are and why Df for groups is equal to 6. It is not clear which of the groups were excluded in Table 8 and why df (probably still Df) equals 4? How many people were in each group?
You should also decipher the designations in the tables: CRIREFLECT, SUSTJUST, TRANSTEB
Author Response
Dear Reviewer,
Thank you very much for your meaningful comments which added value to the quality of the paper. Our answers are as follows:
The text preceding Table 7 talks about groups by area of study. These groups should be named since only groups 4, 5 and 7 are designated. Then it will become clear how many groups there are and why Df for groups is equal to 6. It is not clear which of the groups were excluded in Table 8 and why df (probably still Df) equals 4? How many people were in each group?
ANSWER: Both comments have been taken into consideration and clarifications are inserted in the text highlighted in red.
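As a general note for readers of this exchange (an assumption about the analysis, since the tables themselves are not reproduced here): if Tables 7 and 8 report one-way ANOVAs, as the reviewer's reference to "Df for groups" suggests, the degrees of freedom follow the standard relation

$$ \mathrm{df}_{\text{between}} = k - 1, \qquad \mathrm{df}_{\text{within}} = N - k, $$

where k is the number of groups and N the total sample size, so Df = 6 would correspond to 7 study-area groups in Table 7, and Df = 4 to 5 groups remaining after the exclusions in Table 8.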
You should also decipher the designations in the tables: CRIREFLECT, SUSTJUST, TRANSTEB
ANSWER: Although these abbreviations are explained in Section 2.2, we have also added them as a note to Table 5.