Peer-Review Record

You Can Handle, You Can Teach It: Systematic Review on the Use of Extended Reality and Artificial Intelligence Technologies for Online Higher Education

Sustainability 2023, 15(4), 3507; https://doi.org/10.3390/su15043507
by Gizéh Rangel-de Lázaro * and Josep M. Duart
Submission received: 13 January 2023 / Revised: 6 February 2023 / Accepted: 9 February 2023 / Published: 14 February 2023
(This article belongs to the Special Issue Impact of COVID-19 on Education)

Round 1

Reviewer 1 Report

Dear Authors,

I had a chance to go through your manuscript, and I must admit it was a delight throughout. I really appreciate the detailed information, the clear and cogent explanations, the internal consistency of your work, as well as its well-defined methodology. The focus of this paper is of potential interest to the readership, especially as the world education system is still struggling with the impacts of an ongoing pandemic. I applaud the authors' pursuit of this topic. The editors will soon communicate their decision. Congratulations.

Author Response

Dear reviewer, we receive your comments with great humility. Moreover, we immensely appreciate your time and consideration in reviewing this manuscript and providing a constructive evaluation. It is encouraging for the authors to see that you have valued the organization and methodology of this review.

Reviewer 2 Report

This is an interesting paper. It is well written and contributes to the literature. However, my comments are minor and listed below:

In the summary, provide the full names of XR and AI to help readers understand their meaning. Although you describe them later in the paper, it would be more convenient to provide the full names from the beginning.

Figure 2 should be redesigned (e.g., the journals' titles).

Also, Figure 4 is redundant.

Author Response

Dear reviewer, we thank you for the time and consideration dedicated to this review. Please find below a point-by-point response explaining how we have addressed each of your comments.

1. In the summary, provide the full names of XR and AI to help readers understand their meaning. Although you describe them later in the paper, it would be more convenient to provide the full names from the beginning.

Authors reply: Please find the suggested changes in the manuscript, lines 13 and 14.

2. Figure 2 should be redesigned (e.g., the journals' titles).

Authors reply: In this figure, we have included the top ten journals that published the largest number of articles applying XR and AI since the COVID-19 outbreak. Each row lists one of the ten journals reported.

3. Also, Figure 4 is redundant.

Authors reply: We appreciate your comments on this matter, and we believe Figure 4 offers clear insight into the contributions of the papers with the highest impact on the topic addressed.

Author Response File: Author Response.pdf

Reviewer 3 Report

This review article was aimed at reviewing research on using XR and AI in higher education after the COVID-19 outbreak. Review articles are important both for newbies in the research field and experienced researchers.

As a researcher in AI in education, I have noted several problematic parts of this review:

1. The most notable problem is that while the authors intended to review the usage of AI in higher education (along with XR technology), their search method somehow missed some of the primary publication venues in the field, such as the International Journal of Artificial Intelligence in Education (https://www.springer.com/journal/40593). This can expose a serious flaw in the search strategy and calls the review's consistency into question. I urge the authors to review articles in IJAIED for the relevant time period (there are articles on higher education) and find out why they escaped inclusion in this review. This needs at least serious discussion or, better, an expansion of the review.

1.1. The authors wrote that they accepted book chapters, but did not write whether they excluded conference papers. This is also strange because it excludes full papers from major topical conferences like the conferences on Artificial Intelligence in Education (https://link.springer.com/conference/aied) and Intelligent Tutoring Systems (https://link.springer.com/conference/its), which are published in the prestigious Springer Lecture Notes in Computer Science series and contain significant research. Meanwhile, a paper in the much less prestigious Lecture Notes on Data Engineering and Communications Technologies was included. This needs at least discussion in the article. There was a special issue in MDPI Education Sciences drawn from the ITS 2021 conference (https://www.mdpi.com/journal/education/special_issues/Learner_Computer_Interaction_ITS) with at least one article relevant to higher education, which isn't included either.

2. The premise of this article is somewhat dubious because the authors don't seem to differentiate between research on XR and AI in education (whose main purpose is developing new technologies and assessing their effect) and adoption of these technologies (which was affected by the pandemic). While wide adoption in the learning process results in publications describing that experience and so increases the number of articles in the field, major scientific teams researching AI and XR in education might have continued their planned research regardless of the pandemic. This should be discussed in the article: labeling the reviewed articles as "research" or "adoption", or dropping the COVID requirement, might be a good way to increase its soundness.

3. We must also take into account the time between planning a study and publishing the resulting article in a traditional journal, which is considerable: the authors state "We conducted further exclusion criteria by only including articles submitted after the COVID-19 outbreak in March 2020", but it's highly unlikely that an article submitted in, say, May 2020 could be influenced by the pandemic; the relevant study was likely planned and conducted before the pandemic. If the authors really want to limit their review to studies caused by the pandemic (they say so in the abstract but not in the title), they should include terms like "COVID-19" or "pandemic" in their search list, though it may be better to avoid this limitation and make the article a general review of the field since March 2020.

4. RQ1 includes a question about the languages the research was published in, but the study is heavily biased in that regard. By choosing Scopus, Web of Science Core Collection, and EBSCO, the authors naturally limited their study to mostly English articles, so their findings that "In the case of AI, we found that 66 papers were in English (e.g., 43,44), whereas only three were in Spanish" are not surprising. If the authors, for example, included in their search databases like CSCD (https://clarivate.com/webofsciencegroup/solutions/webofscience-chinese-science-citation-index/), they would have found articles in Chinese. There are similar indexes for Korean and Russian journals, and so on. I suggest dropping the "language of publishing" part of the study or, alternatively, expanding the search to other databases if the authors really seek the answer to that question.

5. Given that adaptive learning systems are a subfield of artificial intelligence in education, I'm surprised that less than half of the articles in Section 3.3.1 were classified as AI. Please explain which kinds of adaptive systems you don't categorize as AI and why.

6. Intelligent tutoring systems imply using Artificial Intelligence, so the authors' statement in Section 3.3.3, "The most frequent technology covered was AI", seems trivial. How could they avoid using AI and still be intelligent tutoring systems?

7. "A total of 3537 initial records were retrieved from Scopus (n = 2984), Web of Science (n = 529), and EBSCO Education (n = 24). Afterward, 1026 duplicated imported results were removed" Normally, a duplicate is the same article in different databases, but I cannot see how 1026 duplicates are possible with non-Scopus articles totalling to 553. Please explain what did you consider duplicates.

8. It would be really nice to include in the review lists of the strengths and weaknesses found in the field, as was done, for example, in this systematic review (https://link.springer.com/article/10.1007/s40593-019-00186-y#Sec27).

The article has English mistakes which make reading it more difficult, e.g. "Thus, the purpose is not to replace traditional techniques with digital tools. Still, consider them for the genuine profit they can offer" or "More experienced in using digital technologies in higher education are the open and online universities that have relied on them to teach and gain insights and learning trends." Please fix the grammar issues in the manuscript.

Author Response

Dear reviewer,

Thank you for the time and consideration dedicated to this review. We highly appreciate your valuable comments, which have helped improve this manuscript. Please find below a point-by-point response explaining how we have addressed each of your comments.

This review article was aimed at reviewing research on using XR and AI in higher education after the COVID-19 outbreak. Review articles are important both for newbies in the research field and experienced researchers.

As a researcher in AI in education, I have noted several problematic parts of this review:

1. The most notable problem is that while the authors intended to review the usage of AI in higher education (along with XR technology), their search method somehow missed some of the primary publication venues in the field, such as the International Journal of Artificial Intelligence in Education (https://www.springer.com/journal/40593). This can expose a serious flaw in the search strategy and calls the review's consistency into question. I urge the authors to review articles in IJAIED for the relevant time period (there are articles on higher education) and find out why they escaped inclusion in this review. This needs at least serious discussion or, better, an expansion of the review.

Authors reply: Thank you for pointing out this journal to us. This review focuses on pedagogy, while the articles found in this journal mostly deal with educational technology. Nevertheless, we located publications aligned with our goals and have included them in this review. The publications included are:

  • Bai, X., & Stede, M. (2022). A Survey of Current Machine Learning Approaches to Student Free-Text Evaluation for Intelligent Tutoring. International Journal of Artificial Intelligence in Education, 1–39. https://doi.org/10.1007/S40593-022-00323-0/TABLES/3
  • Doroudi, S. (2022). The Intertwined Histories of Artificial Intelligence and Education. International Journal of Artificial Intelligence in Education, 1–44. https://doi.org/10.1007/S40593-022-00313-2/FIGURES/1
  • Feng, S., & Law, N. (2021). Mapping Artificial Intelligence in Education Research: a Network‐based Keyword Analysis. International Journal of Artificial Intelligence in Education, 31(2), 277–303. https://doi.org/10.1007/S40593-021-00244-4/TABLES/2
  • Kuzilek, J., Zdrahal, Z., Vaclavek, J., Fuglik, V., Skocilas, J., & Wolff, A. (2022). First-Year Engineering Students' Strategies for Taking Exams. International Journal of Artificial Intelligence in Education, 1–26. https://doi.org/10.1007/S40593-022-00303-4/FIGURES/10

1.1. The authors wrote that they accepted book chapters, but did not write whether they excluded conference papers. This is also strange because it excludes full papers from major topical conferences like the conferences on Artificial Intelligence in Education (https://link.springer.com/conference/aied) and Intelligent Tutoring Systems (https://link.springer.com/conference/its), which are published in the prestigious Springer Lecture Notes in Computer Science series and contain significant research. Meanwhile, a paper in the much less prestigious Lecture Notes on Data Engineering and Communications Technologies was included. This needs at least discussion in the article. There was a special issue in MDPI Education Sciences drawn from the ITS 2021 conference (https://www.mdpi.com/journal/education/special_issues/Learner_Computer_Interaction_ITS) with at least one article relevant to higher education, which isn't included either.

Authors reply: We have included a more detailed explanation in lines 146-148. Moreover, we will consider your comments for future research. Please find more details in lines 579 and 580.

2. The premise of this article is somewhat dubious because the authors don't seem to differentiate between research on XR and AI in education (whose main purpose is developing new technologies and assessing their effect) and adoption of these technologies (which was affected by the pandemic). While wide adoption in the learning process results in publications describing that experience and so increases the number of articles in the field, major scientific teams researching AI and XR in education might have continued their planned research regardless of the pandemic. This should be discussed in the article: labeling the reviewed articles as "research" or "adoption", or dropping the COVID requirement, might be a good way to increase its soundness.

Authors reply: We would like to clarify that by research we understand not only the development of new technologies but, above all, their impact from a pedagogical point of view. That is also why we have not distinguished between "research" and "adoption." Moreover, the COVID-19 outbreak was one of the key topics in our search. Disregarding this essential factor would mean developing a new systematic review, which is not in line with our goals. However, it is something we will consider in the future once we evaluate the increasing development of XR and AI from a pedagogical perspective.

3. We must also take into account the time between planning a study and publishing the resulting article in a traditional journal, which is considerable: the authors state "We conducted further exclusion criteria by only including articles submitted after the COVID-19 outbreak in March 2020", but it's highly unlikely that an article submitted in, say, May 2020 could be influenced by the pandemic; the relevant study was likely planned and conducted before the pandemic. If the authors really want to limit their review to studies caused by the pandemic (they say so in the abstract but not in the title), they should include terms like "COVID-19" or "pandemic" in their search list, though it may be better to avoid this limitation and make the article a general review of the field since March 2020.

Authors reply: We understand the reviewer's concerns and would like to add further clarifications. The COVID-19 outbreak was one of the key inclusion criteria in our systematic review. Moreover, this manuscript has been submitted to the Special Issue "Impact of COVID-19 on Education"; adding this information to our title might result in redundancy. When we included in this review papers submitted right after March 2020, we acknowledged that, in many cases, the data provided would have been collected before this date. After a careful review, we considered that the process of producing the selected articles was affected, directly or indirectly, by the outbreak of the pandemic and the impact it had on students, faculty, staff, and educational institutions in general.

4. RQ1 includes a question about the languages the research was published in, but the study is heavily biased in that regard. By choosing Scopus, Web of Science Core Collection, and EBSCO, the authors naturally limited their study to mostly English articles, so their findings that "In the case of AI, we found that 66 papers were in English (e.g., 43,44), whereas only three were in Spanish" are not surprising. If the authors, for example, included in their search databases like CSCD (https://clarivate.com/webofsciencegroup/solutions/webofscience-chinese-science-citation-index/), they would have found articles in Chinese. There are similar indexes for Korean and Russian journals, and so on. I suggest dropping the "language of publishing" part of the study or, alternatively, expanding the search to other databases if the authors really seek the answer to that question.

Authors reply: We appreciate the reviewer's comments on this matter, and this is a topic we would like to address in the future. More information can be found in lines 577-579. For the purposes of the systematic review presented here, we decided to focus only on two languages to guarantee the manageability of the literature reviewed.

5. Given that adaptive learning systems are a subfield of artificial intelligence in education, I'm surprised that less than half of the articles in Section 3.3.1 were classified as AI. Please explain which kinds of adaptive systems you don't categorize as AI and why.

Authors reply: For this category, we only considered the articles that explicitly stated the use of AI for pedagogical purposes.

6. Intelligent tutoring systems imply using Artificial Intelligence, so the authors' statement in Section 3.3.3, "The most frequent technology covered was AI", seems trivial. How could they avoid using AI and still be intelligent tutoring systems?

Authors reply: We considered the reviewer's observation and have removed this sentence.

  1. "A total of 3537 initial records were retrieved from Scopus (n = 2984), Web of Science (n = 529), and EBSCO Education (n = 24). Afterward, 1026 duplicated imported results were removed" Normally, a duplicate is the same article in different databases, but I cannot see how 1026 duplicates are possible with non-Scopus articles totalling to 553. Please explain what did you consider duplicates.

 

Authors reply: When searching three different databases, it is common to find the same articles in more than one of them. When the results were pooled together, the function remove_duplicates from the R package litsearchr flagged and removed duplicate entries from the data frame, retaining only one copy of each article.

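For illustration, the sketch below shows what this deduplication step can look like with litsearchr; the export directory, the choice of matching on titles, and the matching method are assumptions for the example, not necessarily the authors' exact configuration:

    library(litsearchr)

    # Import the exported search results from all three databases into a single
    # data frame ("search_exports/" is a placeholder directory; import_results
    # reads common bibliographic export formats such as RIS and BibTeX).
    results <- litsearchr::import_results(directory = "search_exports/")

    # Flag and remove records whose titles match across (or within) databases,
    # keeping a single copy of each article; "string_osa" tolerates small
    # formatting differences between database exports, whereas "exact" would
    # require identical titles.
    deduped <- litsearchr::remove_duplicates(results, field = "title",
                                             method = "string_osa")

    nrow(results) - nrow(deduped)  # number of duplicate records removed
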
8. It would be really nice to include in the review lists of the strengths and weaknesses found in the field, as was done, for example, in this systematic review (https://link.springer.com/article/10.1007/s40593-019-00186-y#Sec27).

Authors reply: We appreciate the reviewer's comments. However, doing this would mean expanding the work and conducting a new review beyond the goals pursued. We will take careful note and include this in future studies.

The article has English mistakes which make reading it more difficult, e.g. "Thus, the purpose is not to replace traditional techniques with digital tools. Still, consider them for the genuine profit they can offer" or "More experienced in using digital technologies in higher education are the open and online universities that have relied on them to teach and gain insights and learning trends." Please fix the grammar issues in the manuscript.

Authors reply: The article has been checked by a native English reviewer. Also, we have rewritten the highlighted sentences.

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

Most of the concerns were answered and the article was changed accordingly.

However, I still do not understand the response to point 7.

  1. "A total of 3537 initial records were retrieved from Scopus (n = 2984), Web of Science (n = 529), and EBSCO Education (n = 24). Afterward, 1026 duplicated imported results were removed" Normally, a duplicate is the same article in different databases, but I cannot see how 1026 duplicates are possible with non-Scopus articles totalling to 553. Please explain what did you consider duplicates.

The authors responded: "When searching three different databases, it is common to find the same articles in more than one of them. When the results were pooled together, the function remove_duplicates from the R package litsearchr flagged and removed duplicate entries from the data frame, retaining only one copy of each article."

If the authors consider duplicate entries to be the same article retrieved from different databases, how could they find 1026 duplicates when they had, at most, 553 entries from databases other than Scopus? Even if all the articles from Web of Science and EBSCO Education were also present in Scopus, there are no more than 553 possible duplicates.

Please verify your algorithm for finding duplicates and analyse how you got multiple Scopus entries for the same article, or how you could remove more duplicates than is theoretically possible.

Author Response

Authors reply: Dear reviewer, thank you for your attention to detail. We found a typo when copying the results into our manuscript. The information has been corrected as follows: A total of 3537 initial records were retrieved from Scopus (n = 1984), Web of Science (n = 1529), and EBSCO Education (n = 24). Afterward, 1026 duplicate imported records were removed.
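As a quick arithmetic check (the variable names below are ours, for illustration only), the corrected figures are internally consistent and resolve the objection raised above:

    scopus <- 1984; wos <- 1529; ebsco <- 24

    scopus + wos + ebsco           # 3537, matching the reported total
    wos + ebsco                    # 1553 non-Scopus records: the upper bound
                                   # on cross-database duplicates
    1026 <= wos + ebsco            # TRUE, so removing 1026 duplicates is possible
    scopus + wos + ebsco - 1026    # 2511 unique records remain for screening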
