Article

Research on Equipment and Algorithm of a Multimodal Perception Gameplay Virtual and Real Fusion Intelligent Experiment

1
School of Information Science and Engineering, University of Jinan, Jinan 250022, China
2
Shandong Provincial Key Laboratory of Network Based Intelligent Computing, Jinan 250022, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(23), 12184; https://doi.org/10.3390/app122312184
Submission received: 10 October 2022 / Revised: 16 November 2022 / Accepted: 17 November 2022 / Published: 28 November 2022
(This article belongs to the Special Issue Progress in Human Computer Interaction)


Abstract

Chemistry experiments are an important part of chemistry learning, and the development and application of virtual experiments have greatly enriched experimental teaching. However, existing virtual experiments suffer from problems such as low human–computer interaction efficiency, a poor sense of reality and operation, and a tedious experimental process. Therefore, this paper designs a multimodal perception gameplay virtual and real fusion intelligence laboratory (GVRFL). GVRFL uses virtual and real fusion methods to complete chemical experiments interactively, which greatly improves the user's sense of reality and operation. We propose a multimodal intention active understanding algorithm to improve the efficiency of human–computer interaction and the user experience, and we propose a novel game-based virtual–real fusion intelligent experimental mode that adds gameplay to the process of virtual–real fusion experiments. The experimental results show that this method improves the efficiency of human–computer interaction and reduces the user's operating load. At the same time, the interaction between the real experimental equipment and the virtual experimental scene greatly improves the user's sense of reality and operation. The introduction of game elements into the process of virtual and real fusion experiments stimulates students' interest in and enthusiasm for learning.

1. Introduction

Chemistry is an experiment-based discipline, and chemistry experiments are an important way to learn chemistry knowledge. Experiments help students quickly understand and internalize chemical concepts, giving them knowledge beyond what they read in textbooks. However, teaching chemical experiments in a traditional laboratory has some limitations. For example, many experimental phenomena are difficult to explain or observe, some chemical experiments are dangerous, and some chemicals are expensive [1].
For these reasons, the virtual laboratory has become increasingly important. Virtual laboratories integrate laboratory work with computer simulations [2]. In addition, the virtual lab provides a safe learning environment: students can pause, continue, and repeat experiments many times, waste no time or chemicals, and can observe phenomena that are otherwise hard to see. However, existing virtual laboratories often rely on devices such as a computer mouse [3], fingers [4], and handles [5] for interaction, so users lack the sense of operation and realism gained from handling experimental equipment in a physical laboratory. This is clearly an important missing link for experimental courses meant to exercise students' hands-on skills. Virtual labs that combine software and hardware can therefore better meet these teaching objectives. However, the experimental success rate remains low because existing virtual labs lack timely system guidance on the user's operations [6]. In addition, learning can be a tedious process, and appropriate entertainment helps students maintain their interest in learning [7]. Therefore, it is necessary to design an intelligent experimental system that can guide experimenters and provide appropriate game elements and incentives to improve students' interest in learning. To achieve these functions, the system needs to understand the experimenter's operations and provide timely responses. Unimodal information is one-sided, and experiments that rely on it are less interactive and less accurate [8].
To solve these problems, we designed a multimodal perception gameplay virtual and real fusion intelligence laboratory (GVRFL). GVRFL uses behavioral information from two channels, touch and voice channels, for fusion to obtain the user’s operational intent for natural multimodal human–computer interaction [9]. We proposed a multimodal intention active understanding algorithm to improve the efficiency of human–computer interaction and user experience and designed a set of new experimental equipment to sense the user’s wrong behavior during the experiment. We adopted the method of the fusion of virtual and real to enhance the user’s sense of reality and operation in virtual experiments. At the same time, we proposed a novel game-based intelligent experimental mode combining virtual and real elements. We added gameplay to the process of the virtual and real fusion experiment. In this way, students’ interest in and motivation for learning are stimulated during the experiment, their interest in virtual experiments is increased, and their learning efficiency is improved.
This article makes the following three contributions:
  • Propose a multimodal intention active understanding algorithm suitable for chemical experiments and construct the core engine of virtual and real fusion experiments.
  • Propose a novel intelligent experimental mode of fusion of virtual and real gameplay and introduce gameplay into the experiment.
  • Complete the prototype of a multimodal perception gameplay virtual–real fusion experimental platform, and prove the usability of the platform through experiments.
The structure of this article is as follows: Part 2 briefly discusses related work, Part 3 describes how user intentions are understood in the gameplay virtual and real fusion experimental mode, Part 4 presents the design strategy of the gameplay virtual and real fusion experimental mode, Part 5 describes the experiments and evaluations, and Part 6 presents the conclusions and prospects.

2. Related Work

With the development of technology, the application of artificial intelligence in education has become increasingly extensive, and the virtual laboratory is a typical application. The concept of virtual experiments was first proposed by Professor William Wulf of the University of Virginia in 1989. Due to the lack of laboratories or inadequate laboratory equipment, few hands-on chemistry experiments are conducted in Turkish schools. Therefore, Tüysüz et al. [10] developed a 2D virtual environment for school chemistry education in which students perform experiments virtually, and the results showed that the virtual laboratory had a positive impact on students' academic performance and learning attitudes. Tsovaltzi et al. [11] developed a web-based virtual experiment platform, "Vlab", where students could use menus and dialog boxes to control chemical instruments to complete chemical experiments. Both of the above platforms are 2D experimental environments that lack authenticity, and students interact with them through a keyboard and mouse, thus lacking a sense of operation.
Ali et al. [5] developed a 3D interactive multimodal virtual chemistry laboratory (MMVCL). In MMVCL, the user completes the experiment with a virtual hand controlled by a Wiimote, and information about the experimental process is delivered through audio and text. The experimental results showed that MMVCL improved students' learning skills and grades. Wu et al. [12] developed a virtual reality chemistry laboratory that uses Leap Motion to detect users' gestures for operation. Users wear a head-mounted display and use gestures to interact with virtual objects. The results showed that, at an appropriate learning intensity, the virtual reality chemistry laboratory could improve users' learning confidence. Lam et al. [13] developed an augmented reality chemical experiment platform that uses augmented reality (AR) markers to control chemical experiment instruments and realize movements such as shaking and pouring. All the above platforms are 3D experimental environments that enhance the immersion and sense of reality of virtual experiments. However, they support only a single mode of interaction due to the limitations of their input methods, such as using only handles or gestures.
Human–computer collaboration is essentially a process of communication and understanding between humans and machines. If the platform can understand the user’s intentions and communicate with the user during the virtual experiment, then the user’s human–computer interaction process is natural and comfortable. In the process of human–computer interaction, researchers have been attempting to provide a natural and harmonious way of interaction. Multimodal human–computer interaction (MMHCI) is an important way to solve the problem of human–computer interaction. It captures the user’s interaction intention by integrating accurate and imprecise input from multiple channels and improves the naturalness and efficiency of human–computer interaction. Kaiser et al. [14] proposed a multimodal interactive system that fuses symbolic and statistical information from a set of 3D gestures, spoken language, and referential agents. The experimental results showed that multimodal input could effectively eliminate ambiguity and reduce uncertainty. Hui et al. [15] fused the user’s voice and pen input information. This method enabled the platform to understand the user’s intention and improve robustness. Liang et al. [16] proposed an augmented approach to help a robot understand human motion behaviors based on human kinematics and human postural impedance adaptation. The results showed that the proposed approach to human–robot collaboration (HRC) is intuitive, stable, efficient, and compliant; thus, it may have various applications in human–robot collaboration scenarios. Gavril et al. [17] also proposed a multimodal interface that makes use of hand tracking and speech interactions for ambient intelligent and ambient assisted living environments. The system has a spoken language interaction module and a hand tracking interaction module. The results showed that because of the user’s interaction with the multimodal interface, the ability to understand the user’s intention was improved. Zhao et al. [18] published a prototype of a human–computer interaction system that integrates face, gesture, and voice to better understand the user’s intention. Jung et al. [19] proposed a combination of voice and tactile interaction in a driving environment. The experimental results showed that the interactive mode of voice and touch improved the efficiency of interaction and user experience.
If the system is able to understand the user’s intent and communicate with the user in a virtual experiment, the human–computer interaction process for the user is natural and comfortable. Isabel et al. [20] proposed a multimodal interaction method, adding vision, hearing, and kinesthesia to virtual experiments. The experimental results showed that this method improved students’ interest in learning. Ismail et al. [21] proposed a method combining gesture and voice input for multimodal interaction in AR, with experimenters controlling virtual objects by a combination of gestures and voice. Wolski et al. [22] developed a virtual chemistry laboratory based on a hand movement system, with users using actions and gestures to perform the experiment. The experimental results showed that students’ learning performance was better in the virtual laboratory. Edwards et al. [23] developed a virtual reality multisensory classroom (VRMC), with users using haptic gloves and Leap Motion to complete the experiment in the VR multisensory laboratory (VRML). The abovementioned experimental platform uses multichannel interactions. Multichannel interaction in the virtual experiment can improve the efficiency and naturalness of human–computer interaction.
Many scholars have also explored the field of smart chemistry labs. Song et al. [24] proposed a chemical experiment system, CheMO, which uses a stereo camera to detect users' gestures and the positions of the instruments, mixed-object (MO) beakers and an MO eyedropper as the chemistry instruments, and a digital workbench. When the user experiments with an MO instrument, he or she can obtain the sense of operation of a real experiment. The experimental results showed that MO enhanced the realism of virtual experiments, and users could learn effectively during the experiments. In 2019, Hartmann et al. [25] allowed users to communicate with people and objects in physical space in a virtual reality system, so that users would not lose their sense of presence in the virtual world; the experiments showed that the system enhanced the user experience. Amador et al. [26] designed a physical tactile burette to embed the physical sense of the laboratory in a virtual experiment. Students can control the physical tactile burette to operate the virtual burette in virtual reality. Zeng et al. [27] also proposed an intelligent dropper and built a multimodal intelligent interactive virtual experiment platform (MIIVEP). The system integrates three modalities, speech, touch, and gesture, for interaction. The experimental results showed that this method improved students' learning efficiency. Yuan et al. [28] designed an intelligent beaker equipped with multiple sensors for multimodal sensing. To obtain the user's real intention during the experiment more accurately, a multimodal fusion algorithm was proposed to fuse the user's voice and tactile information; the user uses the intelligent beaker to complete a variety of experiments under the guidance of teaching navigation. Xiao et al. [29] also designed an intelligent beaker and proposed another multimodal fusion algorithm to fuse the user's voice and tactile information. The experimental results showed that users could feel the sense of operation of a traditional laboratory when using the above two intelligent beakers. Wang et al. [30] proposed a multimodal fusion algorithm (MFA) that integrates multichannel data such as speech, vision, and sensor signals to capture the user's experimental intention and also to navigate, guide, or warn about the user's operations; the experimental results showed that the smart glove was applied to teaching chemistry experiments based on virtual–real fusion with a good teaching effect. Xie et al. [31] proposed a virtual reality elementary school mathematics teaching system based on GIS data fusion, which applied a virtual experiment teaching system, an intelligent assisted teaching system, and virtual classroom teaching analysis technologies to teaching. The experimental data and survey results showed that the system could help students simulate operational experience, understand the principles, and, at the same time, improve users' learning interest. Wang et al. [32] proposed a smart glove-based scene perception algorithm to capture users' experimental behaviors more accurately. Students are allowed to conduct exploratory experiments on a virtual experimental platform, and the system guides and monitors their behavior in a targeted manner. The experiments showed that the smart glove can also infer the operator's experimental intention and provide timely feedback and guidance on the user's experimental behavior. Pan et al. [33] proposed a new demand model for virtual experiments by combining human demand theory. An integrated MR system called MagicChem was also designed and developed to support realistic visual interaction, tangible interaction, gesture and touch interaction, voice interaction, temperature interaction, olfactory interaction, and avatar interaction. User studies showed that MagicChem satisfies the demand model better than other MR experimental environments that only partially satisfy it. The above experimental platforms use instruments and devices in real scenes to operate objects in virtual environments, which largely retains the user's sense of operation and realism from traditional experiments.
The combination of virtual experiments and education is still in the trial stage, and many researchers have carried out different explorations of virtual experiments. Ullah et al. [34] added teaching guidance to the MMVCL experiment. The experimental results showed that teaching guidance could improve students' performance in the virtual laboratory. Su et al. [35] designed and developed a virtual reality chemistry laboratory simulation game and proposed a sustainable innovative experiential learning model to verify the learning effect. The experimental results showed that the virtual chemistry lab had a significant effect on academic performance, and students using the sustainable innovative experiential learning model gained a better understanding of chemical concepts. Oberdörfer et al. [36] combined learning, games, and VR technology. The experiment showed that this approach improved the quality and effect of learning and enhanced the presentation of abstract knowledge. Rodrigues et al. [37] conducted an experimental study of artificial intelligence techniques to customize the educational game interface based on the player's profile; the empirical results showed that the game components adjusted in real time for a better player experience and, at the same time, improved the correctness rate of the players participating in the study. Marín et al. [38] proposed a multimodal digital teaching approach based on the VARK (visual, auditory, reading/writing, kinesthetic) model that matches different learning modes to different students' styles. The experimental results showed that the VARK multimodal digital teaching approach is effective, efficient, and beneficial in HCI instruction, with significant improvements in users' learning scores and satisfaction. Hong et al. [39] designed a gamification platform called TipOn that allows students to ask and gamify questions based on different game modes in order to facilitate students' practice of English grammar; the focus was on designing a reward system to gamify students' learning content. The results suggest that teachers can use this gamification system in the flipped classroom to motivate students' cognitive curiosity, increase their content learning, and improve learning outcomes.
In summary, the multimodal interaction method can make it easier for computers to understand the user’s intentions and can effectively improve the efficiency of human–computer interaction and user experience. To improve the understanding of user intention in the virtual–real fusion experiment in this article, we propose a multimodal intention active understanding algorithm. The algorithm integrates the user’s tactile information and voice information during the experiment to improve the efficiency of human–computer interaction and reduce the user’s interaction burden.
Absorbing the above research results and experience, this paper proposes a game-based intelligent experiment based on the fusion of virtuality and reality. Through real experimental equipment and a virtual experimental platform, students can complete chemical experiments that combine virtual and real elements. At the same time, to eliminate the boring nature of virtual experiments, game elements are added to the virtual experiments. By adding gameplay to the experimental mode of fusion of virtual and reality, students’ enthusiasm for learning is stimulated, and their inquiry ability and practice level are improved.

3. Understanding User Intentions in the Experimental Mode of Gameplay

To simulate the chemistry experiment scene and operation in the traditional laboratory, we use the method of virtual and real fusion to perform the chemistry experiment. Real refers to the intelligent beaker, and virtual refers to the virtual experimental platform. At the same time, to improve the ability of the virtual and real fusion experimental platform to understand the user’s intention, we have added a multimodal intention active understanding algorithm. On this basis, we learn from the existing game platforms and add game elements to the virtual and real fusion experimental platform. The above methods can increase the exploratory and interesting nature of the experiment, improve the sense of immersion and participation in the experiment, and stimulate students’ learning enthusiasm. The overall framework of the multimodal perception gameplay virtual and real fusion experimental mode is shown in Figure 1.
The experimental platform obtains tactile information and voice information through the intelligent beaker and microphone and recognizes the obtained tactile information and voice information. The tactile recognition results and voice recognition results are input into multimodal intention active understanding algorithms to obtain the user intention. Finally, the user intention is input into the plot of the virtual and real fusion experimental scene. In the experimental plot, the platform analyzes the user’s intention information and operational information and combines game elements to present the experimental process to the user through a multichannel method.

3.1. Design and Interaction of Intelligent Devices

We use the method of fusion of virtual and real for chemical experiments. Real refers to the new intelligent device that senses tactile information. The intelligent device (Figure 2) is a 3D-printed beaker model equipped with two touch sensors, a posture sensor, and a Bluetooth module. To facilitate users’ use of intelligent devices, we use Bluetooth to output tactile information. The intelligent beaker inputs sensor information into the computer through the Bluetooth module.
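On the computer side, the Bluetooth stream from the beaker can be read as a serial port. The following is a minimal Python sketch of this acquisition step; the port name, baud rate, and comma-separated frame layout are assumptions for illustration and are not specified in the paper.

```python
import serial  # pyserial; assumed to be installed on the host computer

# Hypothetical serial settings; the HC-05 module typically enumerates as a serial port.
PORT, BAUD = "COM5", 9600

def read_beaker_samples(port=PORT, baud=BAUD):
    """Yield parsed sensor samples sent by the intelligent beaker over Bluetooth."""
    with serial.Serial(port, baud, timeout=1) as ser:
        while True:
            raw = ser.readline().decode("ascii", errors="ignore").strip()
            if not raw:
                continue
            try:
                # assumed frame: touch1,touch2,angle,wx,wy,wz
                t1, t2, angle, wx, wy, wz = map(float, raw.split(","))
            except ValueError:
                continue  # skip malformed frames
            yield {"touch1": bool(t1), "touch2": bool(t2),
                   "angle": angle, "angular_velocity": (wx, wy, wz)}
```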
The interaction of intelligent devices occurs through touch sensors and posture sensors. The interactive mode of the intelligent beaker is as follows:
  • The No.1 touch sensor has the function of selecting chemical reagents. Users can choose different chemical substances through the touch sensor on the intelligent beaker.
  • The function of the No.2 touch sensor is to detect whether the way the user holds the intelligent beaker is standard. The No.2 touch sensor is set at the lowest end of the beaker, a position that cannot be touched when the intelligent beaker is held normally. If the user touches the No.2 touch sensor when holding the intelligent beaker, it will prompt the user to hold the beaker properly.
  • The posture sensor on the intelligent beaker is used to sense the beaker’s own information, including the angle, direction, and speed.
  • The posture sensor gives the beaker a rough pour-quantification function, determined by the pouring time, angle, and speed. As shown in Table 1, the angle $\theta$ is the current tipping angle of the intelligent beaker, and the speed is $v = \sqrt{x^2 + y^2 + z^2}$, where $x, y, z$ are the angular velocities about the beaker's coordinate axes. The pouring speed is divided into three levels: slow, normal, and fast. $\eta_1$, $\eta_2$, and $\eta_3$ are estimates of the amount of liquid poured in one moment $t$ (one moment represents 1 s in this paper) when the pouring angle $\theta$ falls into the corresponding ranges of the hierarchical classification in Table 1. Since the poured amounts are rough estimates, the animation also presents them in levels; for example, all values in the interval (30, 50) are rendered with the same visual effect, i.e., the same amount of liquid is shown in the virtual scene. The pouring speed $v$ is reflected through the pouring time: based on experience, when the speed is below 30, the amount poured in 1 s is calculated as 0.8 times that at normal speed, and when the speed is above 60, the amount poured in 1 s is calculated as 1.2 times that at normal speed. Finally, the total poured amount is correlated with the pouring level and the pouring time, and is computed as the product of the per-level pouring amount and the pouring time (see the sketch below).
Intelligent beakers have the common requirements of chemical experiment beakers and can be used to perform different chemical experiments.
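The rough pour quantification described above can be illustrated with a short sketch. The speed thresholds (30, 60) and the 0.8/1.2 scaling factors follow the text; the angle brackets and the $\eta$ values are placeholders, since Table 1 is not reproduced here.

```python
import math

# Placeholder angle brackets and per-second pour estimates (eta values);
# the actual numbers come from Table 1 in the paper.
ANGLE_BRACKETS = [(30, 50, 1.0), (50, 70, 2.0), (70, 90, 3.0)]  # (low, high, eta)

def pour_speed(wx, wy, wz):
    """v = sqrt(x^2 + y^2 + z^2), the magnitude of the angular velocities."""
    return math.sqrt(wx ** 2 + wy ** 2 + wz ** 2)

def poured_amount(angle, v, seconds):
    """Rough total poured amount = per-second estimate x speed factor x time."""
    eta = 0.0
    for low, high, e in ANGLE_BRACKETS:
        if low < angle <= high:
            eta = e
            break
    if v < 30:       # slow pouring
        factor = 0.8
    elif v > 60:     # fast pouring
        factor = 1.2
    else:            # normal pouring
        factor = 1.0
    return eta * factor * seconds
```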

3.2. Multimodal Information Perception

First, we define a set of intentions for each chemical experiment. Since we use a combination of tactile and speech information to obtain user intent, the intention sets corresponding to tactile and speech information are the same. We set up multiple paths and obstacles in the experimental process so that the chemical experiment is no longer a single linear procedure. Since chemical experiments are relatively independent processes, a different intention set is defined for each experiment. We divide the whole experimental process into several experimental steps $P_n = \{P_1, P_2, \ldots, P_i\}$, where $P_n$ denotes all steps in the $n$th experimental process. Completing a step requires multiple operations. According to the rules and knowledge points of the chemical experiment, we set one or more paths $R_n = \{R_1, R_2, \ldots, R_j\}$ in each step, where $R_n$ denotes all the paths in the $n$th experimental step, i.e., the set of all operation behaviors required to complete that step. Finally, a different intention set $I_n = \{I_1, I_2, \ldots, I_m\}$ is defined according to the combination of the paths in each step, where $I_n$ denotes the set of all intentions for the $n$th experiment and $m$ denotes the number of experimental intentions. Behavior and intent form a one-to-many relationship, and the user's operational intent is predicted from behavior. We assign the corresponding intentions to each step according to the steps and paths of the experiment. For example, given an intention set $I_n = \{I_1, I_2, I_3, I_4, I_5\}$ and the three steps $P_i = \{P_1, P_2, P_3\}$ of an experiment, the intentions are assigned by step as $I_n = \{(I_1, I_2 \mid P_1), (I_3, I_4 \mid P_2), (I_5 \mid P_3)\}$. Here, two intentions $I_1, I_2$ are set in step $P_1$ to reflect the concept of multiple paths and obstacles in the experimental process. A concrete example is shown in Figure 3.
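As an illustration of this organisation, the steps, paths, and intentions can be encoded as plain data. The sketch below uses the carbon dioxide experiment as an example; the step and intention labels are hypothetical, not the system's actual identifiers.

```python
# Hypothetical encoding of the steps P_n and of the intention set I_n assigned per step.
CO2_EXPERIMENT = {
    "steps": ["P1_choose_solid", "P2_choose_acid", "P3_purify_gas"],
    "intents_by_step": {
        "P1_choose_solid": ["I1_limestone", "I2_calcium_carbonate"],  # multiple paths
        "P2_choose_acid": ["I3_dilute_HCl", "I4_dilute_H2SO4"],
        "P3_purify_gas": ["I5_remove_impurities"],
    },
}

def intents_for_step(experiment, step):
    """Return (I_n | P_i): the intentions admissible in the current step."""
    return experiment["intents_by_step"].get(step, [])
```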
After entering the experimental scene, the user uses the intelligent beaker to perform experimental operations. The intelligent beaker perceives the user's tactile information $Tac$ under the current experimental step $P_i$. The user's tactile information, the current experimental step, and the intention set are input into the tactile information conversion function $TicF(Tac, P_i, I_n)$. Finally, the tactile information is recognized by this conversion function to obtain the recognition result $I_t$:

$$I_t = TicF(Tac, P_i, I_n) = Tac \cap (I_n \mid P_i),$$

where $Tac$ is the tactile information, $I_n$ is the tactile intention set of the current experiment, and $P_i$ is the current experimental step.
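A minimal reading of this conversion is an intersection of the touch-derived candidates with the current step's intention set; the sketch below is illustrative only, since the paper does not publish the function's internals.

```python
def ticf(tac, step_intents):
    """Tactile information conversion TicF: keep only the sensed candidate
    intentions that belong to the current step's intention set (I_n | P_i)."""
    candidates = set(tac) & set(step_intents)
    # a unique match is taken as the tactile recognition result I_t
    return candidates.pop() if len(candidates) == 1 else None

# e.g. ticf({"I1_limestone"}, {"I1_limestone", "I2_calcium_carbonate"}) -> "I1_limestone"
```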
In the process of the virtual and real fusion experiment, users can express intentions not only with tactile information but also with voice information. First, we build a voice command database $V_n = \{V_1, V_2, \ldots, V_m\}$ based on the different experimental intention sets. The speech database has a one-to-one correspondence with the intention set $I_n$ of the chemical experiments, where $n$ represents the type of experiment and $m$ represents the number of intentions in the experiment. We interviewed 20 students and asked how they would use language to express the intentions in the intention set; in the end, each intention had five expressions. Using the laboratory preparation of carbon dioxide as an example, the intention set is $I_n$ = {react limestone with dilute hydrochloric acid, react limestone with dilute sulfuric acid, react limestone with concentrated sulfuric acid, react calcium carbonate with dilute hydrochloric acid, react calcium carbonate with dilute sulfuric acid, react calcium carbonate with concentrated sulfuric acid}. The voice command database corresponding to the intention $I_1$ (react limestone with dilute hydrochloric acid) is {mix limestone with dilute hydrochloric acid, add dilute hydrochloric acid to limestone, react limestone with dilute hydrochloric acid, pour dilute hydrochloric acid into a conical flask containing limestone, put limestone and dilute hydrochloric acid into a reaction reagent flask}. Second, we found that users tend to issue short voice commands. Therefore, we use word vectors to calculate the similarity between the speech information and the command set in the speech intention database. We use word2vec to train a word vector model and then use it to convert the user's voice information $Aud$ and the voice commands in the voice database $V_n$ of the current step $P_i$ into word vectors. The command with the maximum cosine similarity to the utterance is taken as the speech recognition result:

$$I_a = SaF(Aud, P_i, V_n) = \arg\max\big(\cos(Aud \cdot V_n \mid P_i)\big),$$

where $Aud$ is the voice information, $V_n$ is the speech command database of the current experiment, and $P_i$ is the current experimental step.
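The speech-matching step can be sketched as follows, assuming a gensim word2vec model trained on experiment-domain text and a tokenised voice command database. Sentence vectors are taken here as averaged word vectors, which is one common choice and not necessarily the authors' exact implementation.

```python
import numpy as np
from gensim.models import Word2Vec  # assumed pre-trained word vector model

def sentence_vector(model, tokens):
    """Average the word vectors of the in-vocabulary tokens."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

def saf(model, aud_tokens, command_db):
    """Speech recognition result I_a: the intention whose voice commands in the
    current step's database V_n are most cosine-similar to the utterance Aud."""
    a = sentence_vector(model, aud_tokens)
    best_intent, best_sim = None, -1.0
    for intent, phrasings in command_db.items():   # five phrasings per intention
        for tokens in phrasings:
            c = sentence_vector(model, tokens)
            denom = np.linalg.norm(a) * np.linalg.norm(c)
            sim = float(a @ c) / denom if denom else 0.0
            if sim > best_sim:
                best_intent, best_sim = intent, sim
    return best_intent, best_sim
```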

3.3. Active Understanding of Multimodal Intentions

The proposed algorithm focuses on the basic purpose that the experimental system can understand the user’s operation intention based on the multimodal fusion. After obtaining the user’s tactile results and voice results, the multimodal fusion algorithm fuses multimodal information to obtain the user’s intentions. To improve the accuracy of intention understanding, an active understanding process is added after multimodal fusion. The specific algorithm framework is shown in Figure 4.
As the figure shows, the user inputs voice information and tactile information. We use a multimodal information fusion model to fuse the recognition results of speech and touch. After obtaining the fusion results, we preprocess them and analyze those with lower confidence. The tactile and voice information of a low-confidence fusion result are input into a single-modal evaluation function, which analyzes which channel's input is missing or incomplete. Based on the result of this evaluation, the system asks the user to enhance or supplement the information of the corresponding channel. A fusion result with higher confidence is considered to be the user's current intention. In short, the multimodal intention active understanding algorithm uses the evaluation of the fusion result together with the single-modal evaluation to ask the user to enhance the input of a certain channel, improving the ability to understand intentions.
We choose to fuse the multimodal information in the experimental process at the decision-making level and input the multimodal information into the multimodal information fusion function $Miff(I_t, I_a)$:

$$Miff(I_t, I_a) = \begin{cases} I_t, & I_a = \emptyset,\ I_t \neq \emptyset \\ \alpha I_t + (1-\alpha) I_a, & I_a \neq \emptyset,\ I_t \neq \emptyset \\ I_a, & I_a \neq \emptyset,\ I_t = \emptyset, \end{cases}$$
In terms of multimodal information fusion, we consider three situations: (1) only tactile information, (2) only voice information, and (3) tactile information and voice information at the same time. When tactile information and voice information exist at the same time, the average weighting method is used for fusion: α = 0.5 .
After the fusion result is obtained, the system preprocesses the fusion result.
If $Miff(I_t, I_a) > \beta$, the system inputs the obtained user intention $B$ into the experimental scene. If $Miff(I_t, I_a) < \beta$, the system inputs the tactile and auditory recognition results into the single-channel evaluation function $SEva(I_t, I_a)$:

$$SEva(I_t, I_a) = \begin{cases} 1, & I_t < \lambda_1 \\ 2, & I_a < \lambda_2, \end{cases}$$

where 1 and 2 denote the touch and sound channels, respectively, and $\lambda_1, \lambda_2$ are the thresholds of the tactile and speech channels, respectively.
According to the results of S E v a I t ,   I a , we analyze which channel has low information quality. According to the evaluation results, the system reminds the user to enhance the information of a certain channel and combines the context to obtain new user intentions. Finally, the real intention of the user is input into the current experimental scene.
The multimodal intention active understanding Algorithm 1 is as follows:
Algorithm 1 Multimodal intention active understanding algorithm (MIAUA)
Input: Tactile recognition result $I_t$; speech recognition result $I_a$
Output: User intention $B$
1: while $I_t$, $I_a$ are not empty do
2:   input $I_t$, $I_a$ into the multimodal information fusion function $Miff(I_t, I_a)$;
3:   while $Miff(I_t, I_a)$ is not empty do
4:     if $Miff(I_t, I_a) > \beta$ then
5:       obtain user intention $B$;
6:     end if
7:     if $Miff(I_t, I_a) < \beta$ then
8:       input $I_t$, $I_a$ into the single-channel evaluation function $SEva(I_t, I_a)$;
9:       if $SEva(I_t, I_a) = 1$ then
10:        remind the user to enhance the tactile channel information;
11:      end if
12:      if $SEva(I_t, I_a) = 2$ then
13:        remind the user to enhance the auditory channel information;
14:      end if
15:      if $SEva(I_t, I_a) \neq 1$ and $SEva(I_t, I_a) \neq 2$ then
16:        remind the user that an accurate intention cannot be obtained and continue to predict the user's intention;
17:      end if
18:    end if
19:  end while
20: end while
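A compact Python rendering of Algorithm 1 is given below. It treats each channel's recognition result as an (intention, confidence) pair; $\alpha = 0.5$ follows the text, while $\beta$ and the single-channel thresholds $\lambda_1$, $\lambda_2$ are placeholders, and the handling of conflicting channel intentions is an assumption not detailed in the paper.

```python
ALPHA, BETA = 0.5, 0.7          # alpha from the text; beta is a placeholder threshold
LAMBDA_T, LAMBDA_A = 0.5, 0.5   # placeholder single-channel thresholds

def miff(it, ia):
    """Decision-level fusion Miff(I_t, I_a); it / ia are (intention, confidence)
    pairs or None when the corresponding channel has no input."""
    if it and not ia:
        return it
    if ia and not it:
        return ia
    if it and ia:
        if it[0] == ia[0]:
            return it[0], ALPHA * it[1] + (1 - ALPHA) * ia[1]
        # conflicting intentions: keep the stronger channel but with low confidence
        return max(it, ia, key=lambda p: p[1])[0], min(it[1], ia[1])
    return None

def seva(it, ia):
    """Single-channel evaluation SEva: 1 = weak tactile input, 2 = weak speech input."""
    if it is None or it[1] < LAMBDA_T:
        return 1
    if ia is None or ia[1] < LAMBDA_A:
        return 2
    return 0

def miaua(it, ia, remind):
    """MIAUA: accept confident fusion results, otherwise actively ask the user
    (via the remind callback) to strengthen the weaker channel."""
    fused = miff(it, ia)
    if fused is None:
        return None
    intention, confidence = fused
    if confidence > BETA:
        return intention                      # taken as the user intention B
    channel = seva(it, ia)
    if channel == 1:
        remind("Please repeat or refine the touch input.")
    elif channel == 2:
        remind("Please repeat or rephrase the voice command.")
    else:
        remind("Your intention is unclear; please provide the input again.")
    return None
```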

3.4. Algorithm Analysis

The intention capture result of the fusion layer does not necessarily reflect the real intention of the user, so its credibility needs to be evaluated. Under a specific algorithm framework, one of the key factors affecting credibility is the quality of multichannel information at the perceptual level. For example, if the user’s voice information is unclear, incomplete, or cannot be captured by voice recognition, the credibility of the intention perception based on the voice channel is low. In this case, the system can actively ask the user to input voice information again or change to a voice that can express the same intention. Therefore, this article evaluates the results on two levels: one to evaluate the result of fusion intention, and the other to evaluate the credibility of each channel.
The MFPA and MMNI algorithms reported in the literature are both multimodal fusion algorithms for virtual and real fusion experiments. The MIAUA algorithm in this paper similarly fuses voice and tactile information at the decision-making level. The difference is that MIAUA verifies the fusion result, unlike the MFPA and MMNI algorithms, which directly confirm or deny whether the fusion result is the user's real intention. The MIAUA algorithm evaluates the fusion result, and a fusion result that passes the evaluation is considered the real intention. For fusion results with low credibility, the MIAUA algorithm evaluates the information of each channel, and for channels with low credibility, the system actively asks users to supplement and strengthen the information of that channel. This active understanding of intention allows the user's intention to be identified accurately. The ability of the MIAUA algorithm to recognize user intentions is therefore stronger than that of the MFPA and MMNI algorithms; during the experiment, the user's interaction burden is lower, and the human–computer interaction is more natural and coordinated.

4. Gameplay Design of Virtual and Real Fusion Experimental Mode

People always experience a series of psychological changes when they study or play games. "Flow" is one of the most representative positive psychological experiences: a phenomenon in which participants focus on the goal and task at hand, concentrate so intensely that they forget the passage of time, and filter out all external influences [40]. In terms of guiding teaching activities, Liao et al. [41] created an evaluation model to analyze students' motivation and behavior by applying flow theory to distance learning and concluded that when students enter a state of flow while learning, the learning effect is greatly improved. With the help of this mechanism, we incorporate game elements into the virtual experiment. In the game state, it is easier for students to enter a state of flow, which greatly improves their learning interest and motivation. Therefore, we combine the characteristics of existing game platforms to introduce gameplay into the experimental interactive mode of virtual and real fusion. The introduction of gameplay stimulates students' interest and motivation in the experimental process and improves their learning efficiency.
The addition of game elements to virtual experiments can make learning a means of passing game levels and eliminate students’ negative emotions about learning. Since different chemical experiments have their own operating points and knowledge points, we design game rules in the virtual and real fusion experiment. We take the learning path as the experimental plot and learn chemical experiments by guiding the main line. At the same time, real-time feedback is realized in the form of points, levels, and rewards during the experiment so that users can quickly and intuitively understand the situation of their experiments. We design the experimental mode of the virtual and real fusion experiment as a breakthrough game mode and design the number of experimental levels according to the complexity of the chemical experiments. According to the chemical experiment steps, we set the plot P i at each experimental level. We add a multipath strategy, obstacle setting strategy, and incentive strategy to the experimental mode of gameplay fusion of virtual and reality. The level rules are established on the basis of the multipath strategy and the obstacle setting strategy. The experimental levels are formed through individual experimental steps, so that the entire experimental process is consistent; at the same time, setting obstacles during the experiment increases the difficulty of the experiment and enhances the interest and playability of the experiment. In psychology, reward is a universal state of mind that causes pleasant feelings and leads those who experience it to expect to be recognized and appreciated by others. Therefore, an incentive strategy is added during the experiment. When the learner completes an expected goal or passes the checkpoint, he or she will be rewarded according to the performance of the experiment.
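One way to make this design concrete is to describe each level as data that the experiment engine consumes. The schema and point values below are hypothetical; only the two-level structure, the multipath and obstacle ideas, and the reward principle come from the text.

```python
# Hypothetical level configuration for the carbon dioxide experiment.
CO2_LEVELS = [
    {
        "level": 1,
        "plot": "produce carbon dioxide",
        "paths": ["limestone + dilute HCl", "marble + dilute HCl",
                  "sodium carbonate + dilute HCl"],          # multipath strategy
        "obstacles": ["name the apparatus", "knowledge question before the phenomenon"],
        "reward_points": 100,
    },
    {
        "level": 2,
        "plot": "remove impurities to obtain pure carbon dioxide",
        "paths": ["saturated NaHCO3 solution, then concentrated H2SO4"],
        "obstacles": ["phenomenon hidden by a mosaic until the question is answered"],
        "reward_points": 100,
    },
]
```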

4.1. Multipath Strategy

Chemistry is a natural subject based on experiments. Chemistry experiments are an important part of chemistry teaching and can cultivate students’ scientific inquiry ability and practical ability. Different chemical experiments have corresponding experimental steps, as do certain exploratory experiments. We propose a multipath strategy in the game-based virtual and real fusion experiment and add multiple learning paths to the experimental plot. The multipath approach is defined as the design of multiple paths R j for users to choose based on the knowledge points of the chemical experiment in a certain plot in the game process, increasing the exploratory nature of the virtual and real fusion experiment. For example, the method of producing carbon dioxide in a traditional laboratory is the reaction of limestone with dilute hydrochloric acid. In exploratory experiments, limestone can be replaced with marble or sodium carbonate, dilute hydrochloric acid can be replaced with dilute sulfuric acid, the amount of limestone or the concentration of dilute hydrochloric acid can be changed, etc.

4.2. Set Obstacle Strategy

We use three methods to set obstacles in the experiment. Method 1: When the user wants to obtain nonexploratory experimental results, he or she can observe only the desired experimental phenomenon by selecting the correct path in the multipath format. Method 2: We set up game levels in complex chemistry experiments, and users can enter the next level after passing the first level. For example, in the experiment of preparing carbon dioxide from limestone and dilute hydrochloric acid, the carbon dioxide produced is not pure carbon dioxide (containing water and hydrogen chloride gas). We set up two levels according to the degree of difficulty: the first level is to produce carbon dioxide, and the second level is to remove impurities from carbon dioxide to obtain pure carbon dioxide. Method 3: During the experiment, we add a question-and-answer session related to knowledge points. The user can enter the next step only if the answers are correct. If the user answers incorrectly, the system will feed the user several answer options for the user to choose until the user chooses the correct option. In addition, in the observation of experimental phenomena, part of the experimental phenomenon is obscured by a mosaic. The user can observe the experimental phenomenon after passing the knowledge points of the question-and-answer session.

4.3. Incentive Strategy

The incentive strategy uses real-time feedback in the form of scores and rankings so that users can quickly and intuitively understand their situation. The score consists of three parts: (1) the question-and-answer sessions on knowledge points during the experiment, scored according to the number of attempts the user needs to answer each question; (2) the operational behavior during the experiment, scored according to the user's operations. Users perform the experiments with the intelligent beaker, which is equipped with sensors that sense the state of the beaker and detect its pouring behavior; during the experiment, two kinds of incorrect operations can be detected: pouring from an empty beaker and pouring too quickly. (3) The chosen experimental path: different path combinations correspond to different experimental results and phenomena and are scored according to the path scores set in the game rules.
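A possible scoring rule that combines these three components is sketched below; the point values and deductions are illustrative assumptions, and only the three components themselves come from the text.

```python
def level_score(qa_attempts, wrong_operations, path_points):
    """Combine the three score components of one level:
    (1) knowledge Q&A, (2) operational behavior, (3) the chosen path's points."""
    # fewer attempts per question earn more points (illustrative values)
    qa_score = sum(max(0, 20 - 10 * (attempts - 1)) for attempts in qa_attempts)
    # deduct for detected errors such as pouring an empty beaker or pouring too fast
    operation_score = max(0, 40 - 10 * wrong_operations)
    return qa_score + operation_score + path_points

# Example: three questions answered in 1, 2, and 1 attempts, one wrong operation,
# and a path worth 40 points -> level_score([1, 2, 1], 1, 40)
```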

5. Experiment and Evaluations

5.1. System Settings

Our system is built using Unity 2018.3.8f1 (64-bit) version. Our computer is equipped with an Intel(R)Core(TM)i7-7700HQ CPU with a 2.80 GHz processor and 8 GB of memory. The computer’s operating system is Windows 10. The intelligent beakers are constructed with 3D printing technology. The development board uses the improved version of ATMEGA328P Pro Mini. The posture sensor is JY61 from WitMotion. The touch sensor is the TTP223 touch sensor module. The Bluetooth module is the model HC-05.
The intelligent beaker is shown in Figure 5. The beaker is made with 3D printing technology and is small to facilitate user experimentation. The red mark indicates the posture sensor, the blue mark indicates the touch sensor, and the Bluetooth module is placed inside the beaker. The Bluetooth module, posture sensor, and touch sensor are connected to the development board, and the sensor information of the intelligent beaker is input to the computer through the Bluetooth module.

5.2. System Implementation

5.2.1. Laboratory Preparation of Carbon Dioxide Experiments

In the process of chemical experiments, there are problems such as the danger of the experimental process and unobvious experimental phenomena. A virtual and real fusion experiment can not only avoid the danger in the process of chemical experiments but can also use information enhancement technology to enhance the effect of chemical reactions to facilitate students in observing the phenomena. At the same time, the intelligent beaker (physical object) is used for interaction in the virtual and real fusion experiment to give students a sense of operating in a real environment.
The reaction between limestone and dilute hydrochloric acid is as follows: the limestone gradually dissolves, bubbles are generated, and a colorless gas is generated. Since we want to obtain pure carbon dioxide gas, impurities need to be removed from the gas, so we set up two experimental levels.
Due to the use of multipath strategies to increase the exploratory nature of chemical experiments during the gameplay virtual and real fusion experimental process, we show only the path of using limestone and dilute hydrochloric acid to produce carbon dioxide.
After the user selects the limestone and dilute hydrochloric acid experiment, he or she enters the first level of the experiment (Figure 6a), where the red mark is the user’s current level score, and the green mark is the user’s current level score ranking. The content is displayed after the user completes the experiment of the first level. As shown in Figure 6b, the yellow part is the experimental equipment for the reaction of limestone with dilute hydrochloric acid. The virtual experiment platform first broadcasts the current level task by voice, and then asks a question about the name of the chemical instrument indicated by the blue arrow marked by the red marker. As shown in Figure 6c, the green mark shows the chemical reagents that the current user can choose: limestone, marble, and sodium carbonate, and the blue part is the current score. The virtual reality fusion experiment platform uses the MIAUA algorithm to fuse the user’s touch and voice information to obtain the user’s true intention. After the user selects limestone, the chemical reagent selected in the green mark turns red. Next, the user can select the amount of limestone, as shown in Figure 6d. When the appropriate amount of limestone is selected, the amount selected in the green mark above the blackboard will turn red. At the same time, an appropriate amount of limestone can be seen in the green-marked conical flask. The blue mark in the upper right corner shows the change in the total score during the experiment. Throughout the experiment, the system reminds the user of the chemical experiment steps (the reminders refer only to the experimental process and do not stipulate which chemical reagent the user must choose) and provide feedback regarding whether the user’s answers to the questions are right or wrong. Users can obtain voice feedback during the operation of the experiment.
Next, as shown in Figure 7a, the user selects another chemical reagent, and the green mark is the optional chemical reagent (dilute hydrochloric acid or dilute sulfuric acid). After the user selects 20% dilute hydrochloric acid using the MIAUA algorithm, the red text reminder can be seen from the green mark on the blackboard. As shown in Figure 7b, the user uses the intelligent beaker to pour the dilute hydrochloric acid in the beaker in the virtual platform into the separatory funnel. At the same time, the liquid in the conical flask (marked in green) rises and the amount of dilute hydrochloric acid is added (the yellow mark). The reaction starts after the addition of dilute hydrochloric acid, as shown in Figure 7c, and the green and blue marks are experimental phenomena. At the beginning of the reaction, we use animation effects to zoom in the lens for the users to observe. The bubbles produced by the reaction can be observed on the limestone in the conical flask (marked in green), and bubbles can also be observed when the gas produced is passed into the clarified lime water (marked in blue). Here, information technology is used to enhance the chemical reaction phenomena, allowing users to observe experimental phenomena that are not clear in real experiments. In Figure 7d, users can observe the experimental phenomena after the reaction is complete. From the conical flask (marked in green) we can see that the limestone disappears, many white bubbles are produced, and the clarified lime water (marked in blue) becomes turbid. At the same time, we save pictures (yellow marks) of the experimental phenomena on the blackboard for users to observe the differences in the contrasting experimental phenomena during the exploration of the experiment.
When the user finishes the first-level experiment, as shown in Figure 8a, the red marked part is the user’s current score, the stars are lit according to the score, and the current score ranking can be seen in the green-marked part. The user then enters the second level, informed by voice broadcast that the current experimental task is to obtain pure carbon dioxide gas. Since the carbon dioxide gas produced by the reaction of limestone and dilute hydrochloric acid contains water and hydrogen chloride gas, saturated sodium bicarbonate and concentrated sulfuric acid are used to remove impurities, as shown in the blue mark in Figure 8b. To detect whether impurities have been removed, we use silver nitrate and anhydrous copper sulfate to check whether the solution contains hydrogen chloride and water (yellow mark). As shown in Figure 8c, after the user adds saturated sodium bicarbonate and concentrated sulfuric acid (marked in red) to beakers #1 and #2 in order, the inspection device (marked in blue) is covered by a mosaic. At this time, the system asks a knowledge question. After the user answers correctly, the experimental phenomena can be observed. As shown in Figure 8d, the experimental phenomena can be observed with the blue mark, and the final experimental results are presented on the blackboard.
The following introduces possible user errors in the limestone and dilute hydrochloric acid path. As shown in Figure 9a, incorrect operations may occur during the experiment, and the intelligent beaker can sense the user's pouring behavior. If the user tries to pour from a beaker to which no chemical reagent has been added, the system prompts the user that no chemical reagents have been added; the blue mark shows the empty beaker being poured. During the experiment, the chemical reagents may also be added in the wrong order (Figure 9b). In the second stage of removing impurities, the gas should be introduced into the saturated sodium bicarbonate first and then into the concentrated sulfuric acid. If the selected order is wrong, the anhydrous copper sulfate turns blue (blue mark).
Let us explore the influence of the amount of limestone and dilute hydrochloric acid on the chemical reaction during the reaction of limestone and dilute hydrochloric acid. We use the controlled variable method to present different experimental phenomena. First, we do not change the amount of limestone or the concentration of dilute hydrochloric acid. As shown in Figure 10a, the amount of diluted hydrochloric acid added at this time is estimated to be roughly 55. The reaction phenomenon is shown in Figure 10b. Compared with Figure 7c, we observe a conical flask (marked in green). The amount of dilute hydrochloric acid is small, and there are few bubbles during the reaction (green mark and blue mark). The experimental phenomena of the completion of the reaction (Figure 10c) are compared with those shown in Figure 7d. The liquid in the conical flask (marked in green) is lighter in color and produces fewer white bubbles. Next, we change only the concentration of dilute hydrochloric acid. The experimental phenomena of the completion of the reaction (Figure 10d) are compared with those shown in Figure 7d. The liquid in the conical flask (marked in green) is lighter in color and produces fewer white bubbles. There is more dilute hydrochloric acid in the conical flask in Figure 10d than in Figure 10c. The pictures of the experimental results are saved on the blackboard for comparison and observation. It is apparent that the high-concentration dilute hydrochloric acid reacts violently and there are many white bubbles on the liquid surface.
Next, we change only the amount of limestone, choosing a small amount and a large amount of limestone to compare the experimental phenomena. Figure 11a,b shows the reaction phenomena for a small amount of limestone, and Figure 11c,d shows the reaction phenomena for a large amount of limestone. The experimental results in the yellow-marked area are in the following order: appropriate amount, small amount, and large amount. A large amount of limestone causes the most violent reaction in the conical flask: the most bubbles, the darkest liquid color, the most carbon dioxide, and the most turbid clarified lime water.

5.2.2. Reaction of Copper Sulfate with Sodium Hydroxide Solution

We added an additional chemical experiment to prove that our algorithms and intelligent devices are universal. The phenomenon of the reaction between copper sulfate solution and sodium hydroxide solution is as follows: the solution becomes colorless, blue precipitates are formed, and the blue precipitates turn black when heated. Since the reaction experiment between copper sulfate solution and sodium hydroxide solution is relatively simple, we set up a single experiment level.
In the experiment of the reaction between copper sulfate solution and sodium hydroxide solution, the user needs to mix two chemical reagents. As shown in Figure 12a, the blue markers are user-selectable chemical reagents: copper sulfate and sodium hydroxide. The user can choose to add copper sulfate or sodium hydroxide solution first. As shown in Figure 12b, the user chooses to add copper sulfate solution (blue solution) through the MIAUA algorithm. The green mark shows that the chemical reagent in the beaker is copper sulfate solution. The blue mark in Figure 12c shows that the user is pouring sodium hydroxide solution into the beaker containing copper sulfate solution. As shown by the blue mark in Figure 12d, the reaction beaker is blocked by a mosaic image.
The user needs to answer knowledge questions to observe the experimental phenomena. After the user has answered the question, he or she can see that the solution in the beaker in Figure 13a has become colorless, and a blue precipitate has been produced. Next, we explore what happens when the blue product is heated. As shown in Figure 13b, the user heats the blue precipitate. At the same time, the experimental phenomena are blocked (Figure 13c). After the user answers the question, he or she can observe that the blue precipitate in the solution has turned black (Figure 13d).

5.2.3. Experimental Prototype System

Our experiment system currently integrates 10 representative experiments. Users can enter the system and select the experiment they want to perform by voice, and the system automatically jumps to the chosen experiment. The main interface of the prototype system is shown in Figure 14.
Figure 15 shows screenshots from other experimental procedures. As shown in Figure 15, (a) is the experiment of ammonia preparation; (b) is the experiment of investigating the properties of burning charcoal; (c) is the experiment of investigating the combustion of red phosphorus; and (d) is the experiment of investigating the properties of concentrated sulfuric acid.

5.3. Experimental Results and Analysis

We invited 15 graduate volunteers (computer science majors with limited chemistry knowledge who had never been exposed to any of the three chemistry laboratory platforms) to participate in the experiment. All volunteers were required to have no prior experience conducting virtual chemistry experiments in GVRFL. The volunteers were asked to complete the reaction experiment of limestone and dilute hydrochloric acid. First, we taught the volunteers the procedure of this experiment; we then trained them in the NOBOOK, MSNVRFL, and GVRFL operation methods. The experiment was divided into two parts: (1) the volunteers verified the feasibility of the algorithm and the platform in GVRFL, and (2) comparative experiments were conducted in NOBOOK, MSNVRFL, and GVRFL.

5.3.1. Algorithm Analysis

To verify the usability of the MIAUA algorithm and the intelligent equipment, we first counted the success rates of seven operational intentions during the reaction experiment of limestone and dilute hydrochloric acid. We asked the 15 volunteers to perform 20 trials for each operational intention. The seven experimental operations were as follows: choose limestone (CL), choose a small amount of limestone (CSAL), choose an appropriate amount of limestone (CAAL), choose an excessive amount of limestone (CEL), choose dilute hydrochloric acid (CDHA), choose saturated sodium hydrogen carbonate (CSHCS), and choose concentrated sulfuric acid (CCSA). The volunteers used tactile and voice information to conduct the experiments, and user intentions were obtained through the MIAUA algorithm. The experimental results are shown in Figure 16. The success rates of the seven intentions using the MIAUA algorithm were all above 98.3%, with an average success rate of 99.01%; the success rates of CL, CDHA, and CCSA were 99.7%. Thus, the MIAUA algorithm understands user intentions well and improves the efficiency of human–computer interaction.
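For readers who wish to reproduce this kind of tally, the following sketch shows one way the per-intention success rates and their average could be computed; it is not the code used in our system, and the trial data in the example are placeholders.

```python
# Minimal sketch (not the system's code): tally per-intention success rates
# from repeated trials such as 15 volunteers x 20 trials per intention.
from collections import defaultdict

INTENTIONS = ["CL", "CSAL", "CAAL", "CEL", "CDHA", "CSHCS", "CCSA"]

def success_rates(trials):
    """trials: iterable of (intention, succeeded) pairs, e.g. ("CL", True)."""
    hits, total = defaultdict(int), defaultdict(int)
    for intention, ok in trials:
        total[intention] += 1
        hits[intention] += int(ok)
    rates = {i: hits[i] / total[i] for i in INTENTIONS if total[i]}
    average = sum(rates.values()) / len(rates)
    return rates, average

# Placeholder data: 300 trials per intention with a single failure each.
demo = [(i, k != 0) for i in INTENTIONS for k in range(300)]
rates, avg = success_rates(demo)
print(rates["CL"], avg)
```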
After simple training, the 15 volunteers started the limestone and dilute hydrochloric acid reaction experiment, and we counted the number of attempts each volunteer needed to complete it successfully. The results are shown in Figure 17. Four students succeeded on the first attempt (26.7%), nine students succeeded on the second attempt (60%), and two students succeeded on the third attempt (13.3%). We observed the students who needed two or three attempts: most of them made multiple attempts because they were unfamiliar with the virtual and real fusion experiment platform, which led to errors in the experimental process, while in a small number of cases the system did not detect the user's operation. Of the students who had never been exposed to the virtual and real fusion experimental platform, 86.7% successfully completed the reaction of limestone and dilute hydrochloric acid within two attempts. These results reflect the ease of use of the intelligent beaker and the experimental platform.
After the experiment, the 15 volunteers were asked to answer questions testing the knowledge points covered in the experiment. The test content was drawn from the question-and-answer sessions of the experiment, which contained five knowledge-point questions in total. The results are shown in Figure 18: 12 volunteers answered all the questions correctly, and 3 answered four questions correctly, so 80% of the students answered every question correctly. This suggests that incorporating game elements into GVRFL is effective. While observing the experimental process, we found that the students were highly engaged and very interested in our experimental platform. The results show that introducing gameplay can stimulate students' interest in and motivation for the experimental process and improve their learning efficiency. Figure 19 shows the volunteers doing the experiment.

5.3.2. Comparison Experiment

At present, some mature virtual experiment platforms are being promoted in schools; we chose the NOBOOK experimental platform for comparison. Because GVRFL uses the fusion of virtual and real for chemical experiments, we also chose MSNVRFL for comparison. We compared the three platforms on six aspects. To keep the experimental variables uniform, we selected 15 students who had never been exposed to the GVRFL, NOBOOK, or MSNVRFL experimental platforms to perform the experiments and fill out the forms. Finally, we computed the statistics and judged that a platform achieves a given capability if more than 75% of the students considered that it has that capability; otherwise, it does not. The final results are shown in Table 2, where ✓ means that the basic capability is achieved and × means that it is not. In terms of interactivity, all three experimental platforms can conduct virtual experiments through interaction. In terms of operability, NOBOOK completes the experiment by dragging with a mouse or finger and therefore does not create the sense of operation of a traditional experiment, whereas MSNVRFL and GVRFL use intelligent beakers and thus provide a certain sense of real experimental operation. In terms of effect observation, all three platforms can show the effects of chemical reaction experiments, and the NOBOOK simulation effect is relatively better. In terms of intelligence, NOBOOK is not an intelligent platform. In terms of inquiry, all three platforms can carry out exploratory experiments. In terms of interestingness, GVRFL adds game elements while teaching knowledge, which improves students' interest in learning during the experiment.
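The decision rule behind Table 2 can be written down directly; the snippet below is our own illustration of that 75% threshold, with hypothetical vote counts.

```python
# Illustration of the Table 2 decision rule: a platform is credited with a
# capability only if more than 75% of the 15 students ticked it on the form.
def capability_mark(votes_for, n_students=15, threshold=0.75):
    return "✓" if votes_for / n_students > threshold else "×"

print(capability_mark(13))  # ✓  (13/15 ≈ 0.87 > 0.75)
print(capability_mark(10))  # ×  (10/15 ≈ 0.67 is not above 0.75)
```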
GVRFL uses multichannel interaction to conduct virtual experiments, and the MIAUA algorithm combines tactile and voice information to obtain user intentions. Earlier work also proposed a multimodal fusion algorithm (MFPA) that fused tactile and voice information and tested the success rate of two intentions: choosing concentrated sulfuric acid (CCSA) and choosing water (CW). We used the MIAUA algorithm to test these two intentions. In addition, we used the MFPA algorithm to test the operational intentions of the limestone and dilute hydrochloric acid experiment: CL, CSAL, CAAL, CEL, CDHA, and CSHCS. The volunteers performed each test 20 times. The results, shown in Figure 20, indicate that the intention understanding accuracy of the MIAUA algorithm is higher than that of MFPA. The success rate of CCSA using the MIAUA algorithm is 99.7%, and the success rate of CW is 100%. The average intention understanding accuracy of MFPA is 95.91%, while that of MIAUA is 99.13%, an improvement of 3.22 percentage points, because the MIAUA algorithm has an active understanding process: it interacts according to the user's input information, asks the user to enhance or supplement the information of a particular channel when the user's true intention is unclear, and obtains the user's true intention through multiple interactions.
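The active understanding behaviour can be pictured as a clarification loop: fuse the tactile and voice channels, and when the fused intention is still ambiguous, prompt the user to supplement one channel and fuse again. The sketch below is our own schematic illustration of that loop rather than the published MIAUA implementation; the fusion rule, the 0.8 confidence threshold, and the prompt text are assumptions.

```python
# Schematic clarification loop in the spirit of MIAUA's active understanding.
# fuse_channels, the 0.8 threshold, and the prompt wording are illustrative only.
def fuse_channels(tactile, voice):
    """Return (intention, confidence) from the two channels (placeholder logic)."""
    if tactile and voice and tactile == voice:
        return voice, 0.99            # both channels agree: high confidence
    if tactile or voice:
        return tactile or voice, 0.6  # only one channel carries information
    return None, 0.0

def understand_intention(get_tactile, get_voice, prompt_user, max_rounds=3):
    for _ in range(max_rounds):
        intention, confidence = fuse_channels(get_tactile(), get_voice())
        if intention is not None and confidence >= 0.8:
            return intention          # intention is clear enough to act on
        prompt_user("Please repeat the reagent name or touch the beaker again.")
    return None                       # give up after repeated ambiguity
```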
Next, we invited the volunteers to score the three platforms on six indicators: interactivity, operability, effect observation, intelligence, inquiry, and interestingness. The statistical results are shown in Figure 21. GVRFL scores higher than NOBOOK and MSNVRFL in interactivity; the MIAUA algorithm better understands the real intentions of users and improves the interactivity of the experimental platform. In operability and intelligence, GVRFL scores much higher than NOBOOK and about the same as MSNVRFL. In effect observation, GVRFL scores slightly higher than MSNVRFL and lower than NOBOOK, so the presentation of experimental phenomena still needs to be optimized. In inquiry, the three experimental platforms are basically the same. In interestingness, GVRFL scores much higher than NOBOOK and MSNVRFL because GVRFL adds game elements; students are more engaged in the experimental process, which increases the fun of the virtual experiments.
Meanwhile, we conducted a single-factor (one-way ANOVA) analysis of the 15 students' ratings of the three platforms, analyzing interactivity, operability, effect observation, intelligence, inquiry, and interestingness separately. The results are shown in Table 3, where Y means a significant difference and N means no significant difference. The scores of GVRFL are significantly higher than those of NOBOOK in interactivity, operability, intelligence, and interestingness, and significantly higher than those of MSNVRFL in interactivity and interestingness. This shows that our system has advantages in interactivity and interestingness for the experimenter, which also verifies that our proposed multimodal fusion intention understanding algorithm better guides the user's operations during the experiment and allows the user to interact with the system in a more natural way. In addition, the game mode we designed increases the fun of the experiment and improves users' interest in learning.
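The one-way ANOVA reported in Table 3 follows the standard procedure; a minimal sketch using scipy is shown below, with placeholder rating lists rather than the study data.

```python
# Minimal sketch of the single-factor ANOVA behind Table 3.
# The rating lists are placeholders, not the collected ratings.
from scipy.stats import f_oneway

gvrfl_interactivity  = [5, 5, 4, 5, 4, 5, 5, 4, 5, 5, 4, 5, 5, 4, 5]
nobook_interactivity = [3, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 3]

f_stat, p_value = f_oneway(gvrfl_interactivity, nobook_interactivity)
significance = "Y" if p_value < 0.05 else "N"
print(f"p = {p_value:.4f}, significant: {significance}")
```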

5.3.3. User Study

We used the SUS questionnaire (Appendix A) to evaluate GVRFL. Because of the specific nature of the smart experiment system, we adapted the assessment content of the SUS scale to make it more relevant to users' assessment of the system's application needs. Therefore, in addition to the usability and ease-of-operation assessments included in the original scale, we added assessments of fun and experimental immersion. The questionnaire contains 10 questions, each scored on a 5-point scale, with 1 meaning disagree and 5 meaning strongly agree.
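Since Figure 22 reports a mean score per question, the scoring reduces to averaging each column of the response matrix; the sketch below shows this with placeholder responses (it is not the analysis script used in the study).

```python
# Per-question mean scores for the modified 10-item, 5-point questionnaire.
# The response rows are placeholders; one row per volunteer in the real data.
import statistics

responses = [
    [5, 5, 4, 4, 5, 4, 4, 5, 5, 5],
    [4, 5, 5, 4, 4, 4, 5, 5, 4, 4],
    # ... remaining volunteers
]

question_means = [statistics.mean(col) for col in zip(*responses)]
for q, mean in enumerate(question_means, start=1):
    print(f"Q{q}: {mean:.2f}")
```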
The results of the experiment are shown in Figure 22. The mean scores for questions 1 and 2 were 4.42 and 4.95, respectively, indicating that students preferred to use GVRFL. The mean score for question 3 was 4.37; students felt that using GVRFL helped them focus on learning chemistry. The average score for question 4 was 4.31; students felt that the combination of virtual and real manipulation of the smart beaker improved their hands-on skills. The average score for question 5 was 4.35; the students felt that they were highly engaged in the experiment, which indicates that adding game elements to the virtual experiment can increase students' interest and motivation to learn. The mean score for question 6 was 4.32; students thought that using GVRFL for chemistry experiments was better than dragging a mouse on a computer. The average score for question 7 was 4.39; some students still prefer to conduct experiments in a traditional laboratory, but most believed that GVRFL effectively avoids the dangers of traditional experimental procedures and makes less obvious phenomena observable. The average score for question 8 was 4.71, indicating that the system's functions are well integrated. The average score for question 9 was 4.65, indicating that users found the system easy to get started with and use, without imposing too much burden. The average score for question 10 was 4.53, indicating that users operated the system with confidence, trusting that the intent understanding algorithm could help them overcome the difficulties encountered during the experiment and complete it successfully.
Finally, according to NASA-TLX, all participants rated the system on five dimensions: mental demand (MD), physical demand (PD), performance (P), effort (E), and frustration (F). Mental demand describes the user's operational memory load, physical demand describes how physically effortless the operation is, performance describes the smoothness of the user's operation, effort describes whether the user feels relaxed during the operation, and frustration describes the user's degree of negativity during the operation. The NASA-TLX indicators use a 5-point system divided into five levels: 0–1 points indicates a low cognitive burden, 1–2 points a relatively small cognitive burden, 2–3 points an average cognitive burden, 3–4 points a relatively large cognitive burden, and 4–5 points a large cognitive burden. Figure 23 shows that the scores of GVRFL are generally lower than those of NOBOOK. GVRFL scores slightly higher than MSNVRFL in MD, while for the other four indicators GVRFL scores are generally lower than those of MSNVRFL. The results indicate that the user's sense of frustration and fatigue is very low during the GVRFL experimental process; the experimental process is relatively smooth and relaxed, the user's cognitive load is low, and the user evaluation is high.
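The five-level reading of the scores described above maps each 0–5 dimension score onto a verbal burden level; the sketch below is one way to express that mapping, with placeholder scores rather than the measured values.

```python
# Mapping a 0-5 NASA-TLX dimension score to the verbal levels used above.
# The gvrfl_scores values are placeholders, not the measured results.
LEVELS = [
    (1, "low cognitive burden"),
    (2, "relatively small cognitive burden"),
    (3, "average cognitive burden"),
    (4, "relatively large cognitive burden"),
    (5, "large cognitive burden"),
]

def burden_level(score):
    for upper, label in LEVELS:
        if score <= upper:
            return label
    return LEVELS[-1][1]

gvrfl_scores = {"MD": 1.8, "PD": 1.2, "P": 1.5, "E": 1.4, "F": 0.9}
for dimension, score in gvrfl_scores.items():
    print(dimension, burden_level(score))
```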
At the same time, we further analyzed the NASA-TLX measurements of the 15 students using one-way ANOVA, comparing GVRFL with NOBOOK and GVRFL with MSNVRFL on the five indicators. The results are shown in Table 4, where Y means a significant difference and N means no significant difference. Compared with NOBOOK, GVRFL shows a significant reduction in all five indicators (MD, PD, P, E, and F); compared with MSNVRFL, the changes in MD and PD are not significant, but there is a significant reduction in the other three indicators. This indicates that, through our active intention understanding algorithm, the system's active guidance of the operator effectively reduces the user's mental effort and cognitive load, while the added game elements improve the user's pleasure during the experiment and reduce psychological frustration, thus increasing students' enthusiasm for learning.

6. Conclusions and Prospects

Existing virtual laboratories have low interaction efficiency, poor user experience, and experimental processes that lack interest and realism. We designed a multimodal perception gameplay virtual and real fusion intelligence laboratory (GVRFL). This scheme proposes a multimodal intention active understanding algorithm (MIAUA) to improve the efficiency of human–computer interaction and the user experience, and proposes a novel game-based intelligent experimental mode that introduces gameplay into the virtual and real fusion experimental interaction.
The proposed MIAUA gives GVRFL the ability to understand user intentions and improves the efficiency of human–computer interaction. At the same time, the experimental method of fusing the virtual and the real enhances the user's sense of reality and operation. Introducing game elements into the virtual experiment stimulated students' interest and motivation during the experimental process, increased their interest in the virtual experiment, and improved their learning efficiency. The experimental results show that students' frustration and fatigue are relatively low during the experiment, which reduces the burden of human–computer interaction while increasing students' interest in learning; students like to use GVRFL for chemical experiments. Regarding the cost of traditional experiments, our equipment uses inexpensive sensors, so it does not place too great a burden on schools and can be reused, saving resources and avoiding unnecessary waste. In the future, we will continue to optimize the presentation of the experimental reaction effects and add more chemical experiments.

Author Contributions

Conceptualization, L.Y. and Z.F.; methodology, L.Y. and J.Y.; software, J.Y.; validation, L.Y. and Z.F.; formal analysis, L.Y.; investigation, L.Y.; resources, J.Y.; data curation, J.Y.; writing—original draft preparation, L.Y. and J.Y.; writing—review and editing, L.Y.; visualization, J.Y.; supervision, Z.F.; project administration, J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study because it did not involve ethical issues.

Informed Consent Statement

Informed consent was waived because students were directly invited to the laboratory to perform the experiments.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. SUS Questionnaire

The items of the SUS questionnaire are as follows:
  • I am willing to use this experimental platform.
  • I am very interested in this game-based virtual-real fusion experiment platform.
  • Using this platform, I can focus on learning knowledge.
  • This experimental platform can improve my hands-on ability.
  • I am very invested in experiments in this game-based virtual-real fusion experiment platform.
  • I like this experimental platform better than NOBOOK.
  • Compared with traditional experimental teaching, I prefer this experimental platform.
  • I found that the functions in the system are well integrated.
  • I think this system is easy to use.
  • I feel very confident when using this system.

References

  1. Mutlu, A.; Şeşen, B.A. Comparison of inquiry-based instruction in real and virtual laboratory environments: Prospective science teachers' attitudes. Int. J. Curric. Instr. 2020, 12, 600–617. [Google Scholar]
  2. Turk, M. Multimodal interaction: A review. Pattern Recognit. Lett. 2014, 36, 189–195. [Google Scholar] [CrossRef]
  3. Abubakar, H. Impact of internet technology usage on job performance of senior secondary school teachers in kaduna state Nigeria. Int. J. Curric. Instr. 2018, 10, 152–167. [Google Scholar]
  4. Aldosari, S.S.; Marocco, D. Using haptic technology for education in chemistry. In Proceedings of the 2015 Fifth International Conference on e-Learning, Manama, Bahrain, 18–20 October 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 58–64. [Google Scholar]
  5. Ali, N.; Ullah, S.; Alam, A.; Rabbi, I. The effect of multimodality and 3D interaction in a virtual laboratory on students' learning in chemistry education. Sindh Univ. Res. J.-SURJ (Sci. Ser.) 2015, 47, 703–708. [Google Scholar]
  6. Han, R.; Feng, Z.; Tian, J.; Fan, X.; Yang, X.; Guo, Q. An intelligent navigation experimental system based on multimode fusion. Virtual Real. Intell. Hardw. 2020, 2, 345–353. [Google Scholar] [CrossRef]
  7. Turan-Özpolat, E. A Phenomenographic Study on Views about Entertaining and Boring Situations in Learning Process. Int. Educ. Stud. 2020, 13, 8–34. [Google Scholar] [CrossRef]
  8. Dong, D.; Feng, Z.; Tian, J. Smart Beaker Based on Multimodal Fusion and Intentional Understanding. In Proceedings of the ICCDE 2020: 2020 The 6th International Conference on Computing and Data Engineering, Sanya, China, 4–6 January 2020. [Google Scholar]
  9. Jaimes, A.; Sebe, N. Multimodal human–computer interaction: A survey. Comput. Vis. Image Underst. 2007, 108, 116–134. [Google Scholar] [CrossRef]
  10. Tüysüz, C. The effect of the virtual laboratory on students’ achievement and attitude in chemistry. Int. Online J. Educ. Sci. 2010, 2, 37–53. [Google Scholar]
  11. Tsovaltzi, D.; Rummel, N.; McLaren, B.M.; Pinkwart, N.; Scheuer, O.; Harrer, A.G.; Braun, I. Extending a virtual chemistry laboratory with a collaboration script to promote conceptual learning. Int. J. Technol. Enhanc. Learn. 2010, 2, 91–110. [Google Scholar] [CrossRef]
  12. Wu, B.; Wong, S.; Li, T. Virtual titration laboratory experiment with differentiated instruction. Comput. Animat. Virtual Worlds 2019, 30, e1882. [Google Scholar] [CrossRef]
  13. Lam, M.C.; Tee, H.K.; Nizam, S.S.M.; Hashim, N.C.; Suwadi, N.A.; Tan, S.Y.; Majid, N.A.A.; Arshad, H.; Liew, S.Y. Interactive augmented reality with natural action for chemistry experiment learning. TEM J. 2020, 9, 351. [Google Scholar]
  14. Kaiser, E.; Olwal, A.; McGee, D.; Benko, H.; Corradini, A.; Li, X.; Cohen, P.; Feiner, S. Mutual disambiguation of 3d multimodal interaction in augmented and virtual reality. In Proceedings of the 5th international Conference on Multimodal Interfaces, Vancouver, BC, Canada, 5–7 December 2003; pp. 12–19. [Google Scholar]
  15. Hui, P.-Y.; Meng, H. Latent Semantic Analysis for Multimodal User Input With Speech and Gestures. IEEE/ACM Trans. Audio Speech Lang. Process. 2013, 22, 417–429. [Google Scholar] [CrossRef]
  16. Liang, P.; Ge, L.; Liu, Y.; Zhao, L.; Li, R.; Wang, K. An Augmented Discrete-Time Approach for Human-Robot Collaboration. Discret. Dyn. Nat. Soc. 2016, 2016, 9126056. [Google Scholar] [CrossRef] [Green Version]
  17. Gavril, A.F.; Trascau, M.; Mocanu, I. Multimodal interface for ambient assisted living. In Proceedings of the 2017 21st International Conference on Control Systems and Computer Science (CSCS), Bucharest, Romania, 29–31 May 2017; pp. 223–230. [Google Scholar]
  18. Zhao, R.; Wang, K.; Divekar, R.; Rouhani, R.; Su, H.; Ji, Q. An immersive system with multimodal human-computer interaction. In Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi’an, China, 15–19 May 2018; pp. 517–524. [Google Scholar]
  19. Jung, J.; Lee, S.; Hong, J.; Youn, E.; Lee, G. Voice+tactile: Augmenting in-vehicle voice user interface with tactile touchpad interaction. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–12. [Google Scholar]
  20. Isabwe, G.M.N.; Moxnes, M.; Ristesund, M.; Woodgate, D. Children's interactions within a virtual reality environment for learning chemistry. In International Conference on Applied Human Factors and Ergonomics; Springer: Cham, Switzerland, 2017; pp. 221–233. [Google Scholar]
  21. Ismail, A.W.; Billinghurst, M.; Sunar, M.S.; Yusof, C.S. Designing an augmented reality multimodal interface for 6dof manipulation techniques. In SAI Intelligent Systems Conference; Springer: Cham, Switzerland, 2018; pp. 309–322. [Google Scholar]
  22. Wolski, R.; Jagodziński, P. Virtual laboratory: Using a hand movement recognition system to improve the quality of chemical education. Br. J. Educ. Technol. 2019, 50, 218–231. [Google Scholar] [CrossRef] [Green Version]
  23. Edwards, B.I.; Bielawski, K.S.; Prada, R.; Cheok, A.D. Haptic virtual reality and immersive learning for enhanced organic chemistry instruction. Virtual Real. 2018, 23, 363–373. [Google Scholar] [CrossRef]
  24. Song, K.; Kim, G.; Han, I.; Lee, J.; Park, J.-H.; Ha, S. Chemo: Mixed object instruments and interactions for tangible chemistry experiments. In Proceedings of the CHI’11 Extended Abstracts on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; pp. 2305–2310. [Google Scholar]
  25. Hartmann, J.; Holz, C.; Ofek, E.; Wilson, A.D. Realitycheck: Blending virtual environments with situated physical reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–12. [Google Scholar]
  26. Amador, C.; Liu, F.W.; Johnson-Glenberg, M.C.; LiKamWa, R. Work-in-progress: Titration experiment: Virtual reality chemistry lab with haptic burette. In Proceedings of the 2020 6th International Conference of the Immersive Learning Research Network (iLRN), San Luis Obispo, CA, USA, 21–25 June 2020; pp. 363–365. [Google Scholar]
  27. Zeng, B.; Feng, Z.; Xu, T.; Xiao, M.; Han, R. Research on Intelligent Experimental Equipment and Key Algorithms Based on Multimodal Fusion Perception. IEEE Access 2020, 8, 142507–142520. [Google Scholar] [CrossRef]
  28. Yuan, J.; Feng, Z.; Dong, D.; Meng, X.; Meng, J.; Kong, D. Research on Multimodal Perceptual Navigational Virtual and Real Fusion Intelligent Experiment Equipment and Algorithm. IEEE Access 2020, 8, 43375–43390. [Google Scholar] [CrossRef]
  29. Xiao, M.; Feng, Z.; Fan, X.; Zeng, B.; Li, J. A Structure Design of Virtual and Real Fusion Intelligent Equipment and Multimodal Navigational Interaction Algorithm. IEEE Access 2020, 8, 125982–125997. [Google Scholar] [CrossRef]
  30. Wang, H.; Feng, Z.; Tian, J.; Fan, X. MFA: A Smart Glove with Multimodal Intent Sensing Capability. Comput. Intell. Neurosci. 2022, 2022, 3545850. [Google Scholar] [CrossRef]
  31. Xie, Y.; Hong, Y.; Fang, Y. Virtual Reality Primary School Mathematics Teaching System Based on GIS Data Fusion. Wirel. Commun. Mob. Comput. 2022, 2022, 7766617. [Google Scholar] [CrossRef]
  32. Wang, H.; Meng, X.; Feng, Z. Research on the Structure and Key Algorithms of Smart Gloves Oriented to Middle School Experimental Scene Perception. In CCF Conference on Computer Supported Cooperative Work and Social Computing; Springer: Singapore, 2022; pp. 409–423. [Google Scholar]
  33. Pan, Z.; Luo, T.; Zhang, M.; Cai, N.; Li, Y.; Miao, J.; Li, Z.; Pan, Z.; Shen, Y.; Lu, J. MagicChem: A MR system based on needs theory for chemical experiments. Virtual Real. 2022, 26, 279–294. [Google Scholar] [CrossRef] [PubMed]
  34. Ullah, S.; Ali, N.; Rahman, S.U. The effect of procedural guidance on students' skill enhancement in a virtual chemistry laboratory. J. Chem. Educ. 2016, 93, 2018–2025. [Google Scholar] [CrossRef]
  35. Su, C.-H.; Cheng, T.-W. A Sustainability Innovation Experiential Learning Model for Virtual Reality Chemistry Laboratory: An Empirical Study with PLS-SEM and IPMA. Sustainability 2019, 11, 1027. [Google Scholar] [CrossRef] [Green Version]
  36. Oberdörfer, S.; Heidrich, D.; Latoschik, M.E. Usability of gamified knowledge learning in VR and desktop-3D. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–13. [Google Scholar]
  37. de Castro Rodrigues, D.; de Siqueira, V.S.; da Costa, R.M.; Barbosa, R.M. Artificial Intelligence applied to smart interfaces for children’s educational games. Displays 2022, 74, 102217. [Google Scholar] [CrossRef]
  38. Pérez-Marín, D.; Paredes-Velasco, M.; Pizarro, C. Multi-mode Digital Teaching and Learning of Human-Computer Interaction (HCI) using the VARK Model during COVID-19. Educ. Technol. Soc. 2022, 25, 78–91. [Google Scholar]
  39. Hong, J.C.; Hwang, M.Y.; Liu, Y.H.; Tai, K.H. Effects of gamifying questions on English grammar learning mediated by epistemic curiosity and language anxiety. Comput. Assist. Lang. Learn. 2022, 35, 1458–1482. [Google Scholar] [CrossRef]
  40. Mak, M.T.F.; Wang, M.; Chu, K.W.S. Effects of a gamified learning platform on elementary school students’ flow experiences in leisure reading. Proc. Assoc. Inf. Sci. Technol. 2019, 56, 454–458. [Google Scholar] [CrossRef]
  41. Liao, Y.W.; Huang, Y.M.; Wang, Y.S. Factors Affecting Students’ Continued Usage Intention Toward Business Simulation Games: An Empirical Study. J. Educ. Comput. Res. 2015, 53, 260–283. [Google Scholar] [CrossRef]
Figure 1. Overall framework.
Figure 2. Intelligent beaker.
Figure 3. Behavior–intent mapping relationship.
Figure 4. Multimodal intention active understanding algorithm framework.
Figure 5. Intelligent beaker.
Figure 6. Limestone and dilute hydrochloric acid experiment scene 1. (a). The first test interface. (b). Knowledge Q&A interface. (c). Drug selection interface 1. (d). Drug selection interface 2.
Figure 7. Limestone and dilute hydrochloric acid experiment scene 2. (a). Drug selection interface. (b). Quantitative interface of dumped drugs. (c). Realization of phenomenon amplification interface. (d). Implementation of phenomenon comparison interface.
Figure 8. Limestone and dilute hydrochloric acid experiment scene 3. (a). Score interface after the first round. (b). The second drug selection interface. (c). The second level knowledge question and answer interface. (d). The second test phenomenon interface.
Figure 9. Limestone and dilute hydrochloric acid experiment scene 4. (a). Beaker operation interface. (b). The second test phenomenon interface.
Figure 10. Limestone and dilute hydrochloric acid experiment scene 5. (a). Dumping quantification interface. (b). Experimental phenomenon interface 1. (c). Experimental phenomenon interface 1. (d). Screenshot saving interface of experimental phenomena.
Figure 11. Limestone and dilute hydrochloric acid experiment scene 6. (a). Experimental interface 1 for reaction of a small amount of limestone with dilute hydrochloric acid. (b). Experimental interface 2 for reaction of a small amount of limestone with dilute hydrochloric acid. (c). Experimental interface 1 of reaction between excess limestone and dilute hydrochloric acid. (d). Experimental interface 2 of reaction between excess limestone and dilute hydrochloric acid.
Figure 12. Copper sulfate reacts with sodium hydroxide solution 1. (a). Reagent selection interface 1. (b). Reagent selection interface 2. (c). Drug dumping interface. (d). Experimental phenomenon question and answer interface.
Figure 13. Copper sulfate reacts with sodium hydroxide solution 2. (a). Experimental phenomenon question and answer interface 1. (b). Experimental phenomenon question and answer interface 2. (c). Experimental phenomenon question and answer interface 3. (d). Experimental phenomenon question and answer interface 4.
Figure 14. Prototype system interface.
Figure 15. Specific experimental process diagram. (a). The experiment of ammonia preparation. (b). The experiment of investigating the properties of burning charcoal. (c). The experiment of investigating the combustion of red phosphorus. (d). The experiment of investigating the properties of concentrated sulfuric acid.
Figure 16. Success rate.
Figure 17. Number of successful attempts.
Figure 18. Students answering questions.
Figure 19. Students conducting experiments.
Figure 20. Correctness of intention understanding.
Figure 21. User ratings.
Figure 22. SUS questionnaire.
Figure 23. NASA user evaluation.
Table 1. Dumping quantification.
Grade          | Dumping Angle θ | Dumping Volume (mL/s) | Dumping Speed v | Dumping Time t (s)
Not dumped     | θ > 50          | 0                     | v = 0           | 0
Dump slowly    | 30 < θ ≤ 50     | η1                    | 0 < v ≤ 30      | 0.8t
Normal dumping | 10 < θ ≤ 30     | η2                    | 30 < v ≤ 60     | t
Quick dump     | −10 < θ ≤ 10    | η3                    | v > 60          | 1.2t
Table 2. Performance comparison.
Virtual Laboratory | NOBOOK | MSNVRFL | GVRFL
Interactivity      | ✓      | ✓       | ✓
Operability        | ×      | ✓       | ✓
Effect observation | ✓      | ✓       | ✓
Intelligence       | ×      | ✓       | ✓
Inquiry            | ✓      | ✓       | ✓
Interestingness    | ×      | ×       | ✓
Table 3. Single-factor analysis of user evaluation.
                 | GVRFL and NOBOOK | GVRFL and MSNVRFL
                 | p        | S     | p        | S
Interactivity    | <0.001   | Y     | 0.0011   | Y
Operability      | <0.001   | Y     | 0.2136   | N
Effect           | 0.1546   | N     | 0.1987   | N
Intelligence     | <0.001   | Y     | 0.2045   | N
Inquiry          | 0.2315   | N     | 0.2513   | N
Interestingness  | <0.001   | Y     | <0.001   | Y
Table 4. Single-factor analysis of NASA user evaluation.
    | GVRFL and NOBOOK | GVRFL and MSNVRFL
    | p        | S     | p        | S
MD  | <0.001   | Y     | 0.0871   | N
PD  | <0.001   | Y     | 0.2077   | N
P   | <0.001   | Y     | <0.001   | Y
E   | <0.001   | Y     | 0.0015   | Y
F   | <0.001   | Y     | <0.001   | Y
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
