Article
Peer-Review Record

Research on Equipment and Algorithm of a Multimodal Perception Gameplay Virtual and Real Fusion Intelligent Experiment

by Lurong Yang 1,2, Jie Yuan 1,2 and Zhiquan Feng 1,2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Appl. Sci. 2022, 12(23), 12184; https://doi.org/10.3390/app122312184
Submission received: 10 October 2022 / Revised: 16 November 2022 / Accepted: 17 November 2022 / Published: 28 November 2022
(This article belongs to the Special Issue Progress in Human Computer Interaction)

Round 1

Reviewer 1 Report

The paper presents a multimodal device that captures user intention to improve the efficiency of human-computer interaction and the user experience. The authors also propose a game-based virtual-real fusion experimental mode, and they designed a framework for multimodal perception as well as an intelligent beaker for experimental purposes. However, a few points need to be considered:

- The related work needs to be enhanced by adding some new references.

- The designed experiment seems to be simple. Is there a reason for designing a simple experiment?

- The number of users who conducted the experiment is small.

- The questionnaire seems simplified to be answered by yes or no, which makes the results unreliable.

 

Author Response

Response to Reviewers

 

Manuscript ID: applsci-1990815
Type of manuscript: Article
Title: Research on equipment and algorithm of a multimodal perception gameplay virtual and real fusion intelligent experiment
Authors: Lurong Yang *, Jie Yuan, Zhiquan Feng
Received: 10 October 2022

 

We sincerely thank the Journal Manager for providing us with this opportunity to revise our paper.

We also greatly appreciate the reviewers' comments and suggestions, which are very valuable for improving the quality of this paper.

We have revised the paper according to the reviewers' comments; the details are stated as follows.

 

  • For Reviewer #1: the following are the concerns about the paper and our responses:

 

Question-1: The related work needs to be enhanced by adding some new references.

Response:

To address this issue, we have added some new references to support the article content. The specific changes are in lines 81-89 and lines 177-198 of the text:

 

With the development of technology, the application of artificial intelligence in the field of education has become increasingly extensive, and the virtual laboratory is a typical application. The concept of virtual experiments was first proposed by Professor William Wolf of the University of Virginia in 1989. Due to a lack of laboratories or inadequate laboratory equipment, few hands-on chemistry experiments are conducted in Turkish schools; therefore, Tysz et al. [10] developed a 2D virtual environment for school chemistry education in which students perform experiments virtually. The results showed that the virtual laboratory had a positive impact on students' academic performance and learning attitudes.

 

Wang et al. [31] proposed a multimodal fusion algorithm (MFA) that integrates multi-channel data such as speech, vision and sensors to capture the user's experimental intention and to navigate, guide or warn about the user's operation behavior; the experimental results show that the smart glove, applied to teaching chemistry experiments based on virtual-real fusion, has a good teaching effect. Xie et al. [32] proposed a virtual reality elementary school mathematics teaching system based on GIS data fusion, which applies a virtual experiment teaching system, an intelligent assisted teaching system, and a virtual classroom teaching system to instruction. Experimental data and survey results showed that the system could help students simulate operating experience and understand the principles, while also improving users' interest in learning. Wang et al. [33] proposed a smart glove-based scene perception algorithm to capture users' experimental behaviors more accurately. Students are allowed to conduct exploratory experiments on a virtual experimental platform, and user behavior is guided and monitored in a targeted manner. The experiments show that the smart glove can also infer the operator's experimental intention and provide timely feedback and guidance on the user's experimental behavior. Pan et al. [34] proposed a new demand model for virtual experiments by drawing on human needs theory. An integrated MR system called MagicChem was also designed and developed to support realistic visual interaction, tangible interaction, gesture and touch interaction, voice interaction, temperature interaction, olfactory interaction, and avatar interaction. User studies have shown that MagicChem satisfies the requirements model better than other MR experimental environments that only partially satisfy it.

 

Question-2: The designed experiment seems to be simple. Is there a reason for designing a simple experiment?

Response:

Our system actually integrates many experiments, two of which are detailed in this paper. To address this issue, we have added a number of experimental screenshots in lines 560-575 of the text to illustrate the other experiments. The specific modifications are shown below.

 

5.2.3. Experimental prototype system

Our experimental system currently integrates 10 representative experiments. After entering the system, users can select the name of the experiment they want to perform by voice, and the system automatically jumps to the chosen experiment. The main interface of the prototype system is shown in Figure 14 below.

Figure 14. Prototype System Interface.

Next, we briefly show screenshots from other experimental procedures. As shown in Figure 15, (a) is the ammonia preparation experiment; (b) is the experiment investigating the properties of burning charcoal; (c) is the experiment investigating the combustion of red phosphorus; (d) is the experiment investigating the properties of concentrated sulfuric acid.

Figure 15. Specific experimental process diagram.

 

 

Question-3: The number of users who conducted the experiment is a small number.

Response:

Due to the epidemic situation, we only invited 15 primary and secondary school students to test the system functions. We also invited 15 graduate students who had not participated in the system design to carry out practical operation and complete the SUS questionnaire survey. The results are as follows:

 

Figure 22. SUS questionnaire.

 

Question-4: The questionnaire seems simplified to be answered by yes or no, which makes the results unreliable.

Response:

To address this issue, we conducted a quantitative evaluation of the user experience using the SUS rating scale. The SUS was also modified to make it more relevant to the evaluation of intelligent experimental systems in the context of educational applications. The specific modifications are described in lines 708-737 of the text, as follows:

We use the SUS questionnaire to evaluate GVRFL. Because of the particular application setting of the smart experiment system, we have adapted and extended the assessment content of the SUS scale to make it more relevant to users' assessment of the system's application needs. Therefore, in addition to the usability and ease-of-operation assessments already included in the scale, we have added assessments of fun and experimental immersion. The questionnaire contains 10 questions, and each question is scored on a 5-point scale, with 1 meaning disagree and 5 meaning strongly agree.

The results of the experiment are shown in Figure 22 below. The mean scores for questions 1 and 2 were 4.42 and 4.95, respectively, indicating that students preferred to use GVRFL. The mean score for question 3 was 4.37: students felt that using GVRFL helped them focus on learning chemistry. The average score for question 4 was 4.31: students felt that the combination of virtual and real manipulation of the smart beaker improved their hands-on skills. The average score for question 5 was 4.35: the students felt highly engaged in the experiment, so adding game elements to the virtual experiment can increase students' interest and motivation to learn. The mean score for question 6 was 4.32: students thought that using GVRFL for chemistry experiments was better than operating with a mouse on a computer. The average score for question 7 was 4.39: some students still prefer to conduct experiments in a traditional laboratory, but most believe that using GVRFL effectively avoids the dangers of traditional experimental procedures and makes less obvious phenomena observable. The average score for question 8 was 4.71, indicating that the system's functions are already well integrated. The average score for question 9 was 4.65, indicating that users find the system easy to get started with and use, without too much of a burden. The average score for question 10 was 4.53, indicating that users felt confident when using the system and that the system's intent-understanding algorithm could help students overcome the difficulties encountered during the experiment and complete it successfully.

Figure 22. SUS questionnaire.

The items of the SUS questionnaire are as follows:

  1. I am willing to use this experimental platform.
  2. I am very interested in this game-based virtual-real fusion experiment platform.
  3. Using this platform, I can focus on learning knowledge.
  4. This experimental platform can improve my hands-on ability.
  5. I am very invested in experiments in this game-based virtual-real fusion experiment platform.
  6. I like this experimental platform better than NOBOOK.
  7. Compared with traditional experimental teaching, I prefer this experimental platform.
  8. I found that the functions in the system are well integrated.
  9. I think this system is easy to use.
  10. I feel very confident when using this system.
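
For clarity, here is a minimal sketch (not the authors' code) of how the per-item mean scores reported above could be computed from the 5-point responses, with the conventional SUS conversion added for reference; the response matrix is hypothetical illustration data, not the study's results.

```python
# Hedged sketch: per-item means for the modified 10-item, 5-point questionnaire above.
# The response matrix is hypothetical illustration data, not the study's results.
import numpy as np

# rows = participants, columns = questionnaire items 1-10, each scored 1-5
responses = np.array([
    [5, 5, 4, 4, 5, 4, 4, 5, 5, 5],
    [4, 5, 5, 4, 4, 4, 5, 5, 4, 4],
    [4, 5, 4, 5, 4, 5, 4, 4, 5, 4],
])

item_means = responses.mean(axis=0)  # one mean per item, as reported in Figure 22
for i, m in enumerate(item_means, start=1):
    print(f"Question {i}: mean score {m:.2f}")

# For reference, the conventional SUS conversion (odd items minus 1, 5 minus even
# items, sum multiplied by 2.5) would give a 0-100 score per participant:
odd_part = responses[:, 0::2] - 1      # items 1, 3, 5, 7, 9
even_part = 5 - responses[:, 1::2]     # items 2, 4, 6, 8, 10
sus_scores = (odd_part.sum(axis=1) + even_part.sum(axis=1)) * 2.5
print("Per-participant SUS scores:", sus_scores)
```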

 

 

 

 

Thank you again,

Yours sincerely,

Lurong Yang

 

Author Response File: Author Response.doc

Reviewer 2 Report

The article is well-organized and contains all of the components. The sections are well-developed. The authors did a good job in synthesizing the literature. The methodology used is clearly explained. However, I have some remarks:

When the author starts the "related work" (Section 2), the reading feels a bit forced and the ideas seem unrelated. In line 81 it is written that, thanks to artificial intelligence, the application of virtual labs has become more widespread. I suggest that this sentence needs to be developed further. Then the following sentence explains who carried out the first virtual experiments and then talks about how they are rarely applied in Turkish schools. Although these sentences are related to the subject, the connection is not understood.

 

In line 124, I suggest specifying the acronym HRC.

 

Section 5.3.3 does not have a reference for the SUS questionnaire; in addition, the author handles seven items of the questionnaire while the creators of the questionnaire handle ten items. It does not mention whether this is an adaptation of the SUS questionnaire or give a justification for why only seven items are used.

 

In the abstract it is mentioned that by applying this virtual experiment the efficiency of the HCI is improved, the load is reduced, the use of gameplay stimulates interest and enthusiasm, and the sense of reality is improved. The document explains how most of these attributes were evaluated. However, for the sense-of-reality attribute it is not clear how it was evaluated, what instrument was used, and what the result is.

 

I suggest a minor spell check is required, for example:

in line 138, a missing space: 'comfortable.Isabel'.

Author Response

Response to Reviewers

 

Manuscript ID: applsci-1990815
Type of manuscript: Article
Title: Research on equipment and algorithm of a multimodal perception gameplay virtual and real fusion intelligent experiment
Authors: Lurong Yang *, Jie Yuan, Zhiquan Feng
Received: 10 October 2022

 

We sincerely thank the Journal Manager for providing us with this opportunity to revise our paper.

We also greatly appreciate the reviewers' comments and suggestions, which are very valuable for improving the quality of this paper.

We have revised the paper according to the reviewers' comments; the details are stated as follows.

 

  • For Reviewer #2: the following are the concerns about the paper and our responses:

 

Question-1: When the author starts the "related work" (Section 2), the reading feels a bit forced and the ideas seem unrelated. In line 81 it is written that, thanks to artificial intelligence, the application of virtual labs has become more widespread. I suggest that this sentence needs to be developed further. Then the following sentence explains who carried out the first virtual experiments and then talks about how they are rarely applied in Turkish schools. Although these sentences are related to the subject, the connection is not understood.

Response:

To address this issue, we have added some new references to support the article content. The specific changes are in lines 81-89 and lines 177-198 of the text:

 

With the development of technology, the application of artificial intelligence in the field of education has become increasingly extensive, and the virtual laboratory is a typical application. The concept of virtual experiments was first proposed by Professor William Wolf of the University of Virginia in 1989. Due to a lack of laboratories or inadequate laboratory equipment, few hands-on chemistry experiments are conducted in Turkish schools; therefore, Tysz et al. [10] developed a 2D virtual environment for school chemistry education in which students perform experiments virtually. The results showed that the virtual laboratory had a positive impact on students' academic performance and learning attitudes.

 

Wang et al. [31] proposed a multimodal fusion algorithm (MFA) that integrates multi-channel data such as speech, vision and sensors to capture the user's experimental intention and to navigate, guide or warn about the user's operation behavior; the experimental results show that the smart glove, applied to teaching chemistry experiments based on virtual-real fusion, has a good teaching effect. Xie et al. [32] proposed a virtual reality elementary school mathematics teaching system based on GIS data fusion, which applies a virtual experiment teaching system, an intelligent assisted teaching system, and a virtual classroom teaching system to instruction. Experimental data and survey results showed that the system could help students simulate operating experience and understand the principles, while also improving users' interest in learning. Wang et al. [33] proposed a smart glove-based scene perception algorithm to capture users' experimental behaviors more accurately. Students are allowed to conduct exploratory experiments on a virtual experimental platform, and user behavior is guided and monitored in a targeted manner. The experiments show that the smart glove can also infer the operator's experimental intention and provide timely feedback and guidance on the user's experimental behavior. Pan et al. [34] proposed a new demand model for virtual experiments by drawing on human needs theory. An integrated MR system called MagicChem was also designed and developed to support realistic visual interaction, tangible interaction, gesture and touch interaction, voice interaction, temperature interaction, olfactory interaction, and avatar interaction. User studies have shown that MagicChem satisfies the requirements model better than other MR experimental environments that only partially satisfy it.

 

Question-2: In line 124, I suggest specifying the acronym HRC.

Response:

To address this issue, we have explicitly defined HRC and modified line 126 of the text as follows:

 

The results showed that the proposed approach to human-robot collaboration (HRC) is intuitive, stable, efficient, and compliant; thus, it may have various applications in human-robot collaboration scenarios.

 

Question-3: Section 5.3.3 does not have a reference for the SUS questionnaire; in addition, the author handles seven items of the questionnaire while the creators of the questionnaire handle ten items. It does not mention whether this is an adaptation of the SUS questionnaire or give a justification for why only seven items are used.

 

Response:

To address this issue, we first explained the reasons for changing the scoring criteria of the SUS to make it more relevant to the evaluation of intelligent experimental systems in the context of educational applications. At the same time, additional changes were made to the SUS scale to increase its assessment of system usability. The specific modifications are in lines 708-737 of the text and are described below.

We use the SUS questionnaire to evaluate GVRFL. Because of the particular application setting of the smart experiment system, we have adapted and extended the assessment content of the SUS scale to make it more relevant to users' assessment of the system's application needs. Therefore, in addition to the usability and ease-of-operation assessments already included in the scale, we have added assessments of fun and experimental immersion. The questionnaire contains 10 questions, and each question is scored on a 5-point scale, with 1 meaning disagree and 5 meaning strongly agree.

The results of the experiment are shown in Figure 22 below. The mean scores for questions 1 and 2 were 4.42 and 4.95, respectively, indicating that students preferred to use GVRFL. The mean score for question 3 was 4.37: students felt that using GVRFL helped them focus on learning chemistry. The average score for question 4 was 4.31: students felt that the combination of virtual and real manipulation of the smart beaker improved their hands-on skills. The average score for question 5 was 4.35: the students felt highly engaged in the experiment, so adding game elements to the virtual experiment can increase students' interest and motivation to learn. The mean score for question 6 was 4.32: students thought that using GVRFL for chemistry experiments was better than operating with a mouse on a computer. The average score for question 7 was 4.39: some students still prefer to conduct experiments in a traditional laboratory, but most believe that using GVRFL effectively avoids the dangers of traditional experimental procedures and makes less obvious phenomena observable. The average score for question 8 was 4.71, indicating that the system's functions are already well integrated. The average score for question 9 was 4.65, indicating that users find the system easy to get started with and use, without too much of a burden. The average score for question 10 was 4.53, indicating that users felt confident when using the system and that the system's intent-understanding algorithm could help students overcome the difficulties encountered during the experiment and complete it successfully.

Figure 22. SUS questionnaire.

The items of the SUS questionnaire are as follows:

  1. I am willing to use this experimental platform.
  2. I am very interested in this game-based virtual-real fusion experiment platform.
  3. Using this platform, I can focus on learning knowledge.
  4. This experimental platform can improve my hands-on ability.
  5. I am very invested in experiments in this game-based virtual-real fusion experiment platform.
  6. I like this experimental platform better than NOBOOK.
  7. Compared with traditional experimental teaching, I prefer this experimental platform.
  8. I found that the functions in the system are well integrated.
  9. I think this system is easy to use.
  10. I feel very confident when using this system.

 

 

Question-4: In the abstract it is mentioned that by applying this virtual experiment the efficiency of the HCI is improved, the load is reduced, the use of gameplay stimulates interest and enthusiasm, and the sense of reality is improved. The document explains how most of these attributes were evaluated. However, for the sense-of-reality attribute it is not clear how it was evaluated, what instrument was used, and what the result is.

Response: To address this issue, we would like to explain that we evaluate the efficiency of human-computer interaction using the accuracy rate of intention understanding, the number of attempts users need to successfully complete the experiment, and the accuracy of users' answers to knowledge-point questions after completing the experiment. The NASA and SUS standard scales are used as questionnaires to evaluate the attributes of interest and realism. The final results show that users of our system complete the experiments more efficiently and that intention understanding is more accurate, so the human-computer interaction efficiency is higher; the interest and ease-of-use scores in the SUS and NASA evaluations are also higher than those of other experimental platforms.
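
As a minimal illustration (not the code used in the study), the sketch below shows how the three efficiency indicators mentioned above could be computed from logged trials; all names and values are hypothetical placeholders.

```python
# Hedged sketch: computing the three HCI-efficiency indicators described above.
# The logs below are hypothetical placeholders, not the study's data.

intent_correct = [True, True, False, True, True]   # was the user's intention recognized correctly in each trial?
attempts_to_success = [1, 2, 1, 1, 3]              # attempts each user needed to finish the experiment
quiz_correct = [4, 5, 3, 5, 4]                     # knowledge-point questions answered correctly (out of 5)

intent_accuracy = sum(intent_correct) / len(intent_correct)
mean_attempts = sum(attempts_to_success) / len(attempts_to_success)
quiz_accuracy = sum(quiz_correct) / (5 * len(quiz_correct))

print(f"Intention-understanding accuracy: {intent_accuracy:.2%}")
print(f"Average attempts to complete the experiment: {mean_attempts:.2f}")
print(f"Knowledge-point answer accuracy: {quiz_accuracy:.2%}")
```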

 

Figure 16. Success rate.

Figure 17. Number of successful attempts.

Figure 18. Students answering questions.

Figure 20. Correctness of intention understanding.

Figure 21. User ratings.

Figure 22. SUS questionnaire.

 

Table 3. Single factor analysis of user evaluation (P: p-value; S: statistically significant, Y/N).

                     GVRFL & NOBOOK        GVRFL & MSNVRFL
                     P          S          P          S
Interactivity        <0.001     Y          0.0015     Y
Operability          <0.001     Y          0.1938     N
Effect               0.1796     N          0.2317     N
Intelligence         <0.001     Y          0.2496     N
Inquiry              0.2634     N          0.1863     N
Interestingness      <0.001     Y          0.0013     Y

 

Table 4. Single factor analysis of NASA user evaluation (P column: p-value; S: statistically significant, Y/N).

           GVRFL & NOBOOK        GVRFL & MSNVRFL
           P          S          P          S
MD         <0.001     Y          0.07651    N
PD         <0.001     Y          0.1967     N
P          <0.001     Y          <0.001     Y
E          <0.001     Y          0.0021     Y
F          <0.001     Y          <0.001     Y
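
As a hedged illustration of how the single-factor analyses summarized in Tables 3 and 4 could be reproduced, the sketch below runs a one-way ANOVA on one evaluation dimension for GVRFL versus a baseline platform; the score arrays and the 0.05 significance threshold are assumptions for illustration, not the study's data.

```python
# Hedged sketch: single-factor (one-way) ANOVA for one evaluation dimension,
# e.g. "Interactivity", comparing GVRFL with NOBOOK. Scores are hypothetical.
from scipy import stats

gvrfl_scores  = [4.5, 4.8, 4.3, 4.6, 4.7, 4.4, 4.5, 4.9]   # GVRFL user ratings
nobook_scores = [3.6, 3.9, 3.4, 3.8, 3.5, 3.7, 3.6, 3.8]   # NOBOOK user ratings

f_stat, p_value = stats.f_oneway(gvrfl_scores, nobook_scores)
significant = "Y" if p_value < 0.05 else "N"                # the "S" column, assuming a 0.05 threshold

print(f"F = {f_stat:.3f}, p = {p_value:.4g}, significant: {significant}")
```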

 

 

Question-5: I suggest a minor spell check is required, for example:

in line 138, a missing space: 'comfortable.Isabel'.

Response: To address this issue, we double-checked the manuscript, found several errors, and made corrections, specifically in lines 140, 288, and 290 of the text, as follows:

 

  • comfortable. Isabel et al. [21]
  • the speed v is ,
  • and are an estimate of the amount of liquid poured in one moment t when the pouring angle ( ) matches the range of the four hierarchical classifications in Table 1

 

Thank you again,

Yours sincerely,

Lurong Yang

 
