Article

A Virtual Reality and Online Learning Immersion Experience Evaluation Model Based on SVM and Wearable Recordings

1 School of Artificial Intelligence, Beijing Normal University, Beijing 100875, China
2 Engineering Research Center of Intelligent Technology and Educational Application, Ministry of Education, Beijing 100875, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(9), 1429; https://doi.org/10.3390/electronics11091429
Submission received: 13 April 2022 / Revised: 28 April 2022 / Accepted: 28 April 2022 / Published: 29 April 2022
(This article belongs to the Special Issue Mobile Learning and Technology Enhanced Learning during COVID-19)

Abstract:
Rapid advances in biosensing technologies make it feasible to monitor students’ physiological signals in natural learning scenarios. With the rise of mobile learning, educators are attaching greater importance to students’ learning immersion experience, especially against the global background of COVID-19. However, traditional instruments for evaluating the learning immersion experience, such as questionnaires and scales, are heavily influenced by individuals’ subjective factors. Our research therefore aims to explore the relationship and mechanism between human physiological recordings and learning immersion experiences in order to eliminate subjectivity as far as possible. We collected electroencephalogram (EEG) and photoplethysmographic (PPG) signals, as well as self-reports on the immersive experience, from thirty-seven college students during virtual reality and online learning to form the fundamental feature set. We then proposed an evaluation model based on a support vector machine and achieved a prediction accuracy of 89.72%. Our results provide evidence that students’ learning immersion experience can be predicted from their EEGs and PPGs.

1. Introduction

Educational resources, also known as educational economic conditions, refer to the human, material, and financial resources occupied, used, and consumed by the educational process [1]. The distribution of educational resources in universities has become increasingly balanced with social and economic progress and the practical application of science and technology. However, achieving complete equilibrium in educational resources, especially in learning environments, will still require considerable time and investment. Take immersive virtual reality (IVR) devices as an example: some schools in China have built smart classrooms with virtual reality (VR) devices to help students become more immersed during learning, while others have not, and the resulting difference in learning environments may widen the academic gap between students [2,3]. With the recurrence of the global COVID-19 epidemic in recent years, mobile learning has gradually emerged, but without the face-to-face guidance of teachers, students’ learning immersion experience (LIE) has become a pressing issue awaiting detailed study [4,5].
Flow theory was proposed in 1975 by the psychologist Mihaly Csikszentmihalyi, who defined flow as a positive psychological state that typically occurs when people perceive a balance between the challenges of a situation and their own abilities and skills [6,7,8]. Flow theory has been widely applied to evaluate human engagement in naturalistic scenarios such as working, gaming, painting, and learning. For example, the Privette experience questionnaire (PEQ), proposed in 1987 by Privette, was used to evaluate athletes’ immersion levels during sports activities [9,10], and the flow state scale (FSS), developed by Jackson and Marsh in 1996, served the same purpose [11]. The Japanese company Hitachi studied whether human physiological signals change before and after a flow experience occurs at work: the company’s workers took part in the experiment, and their pulse rates slowed down while their breathing became more regular [8]. This was the first time that the relationship between physiological signals and immersion had been explored.
Flow is accompanied by nine elements: (1) challenge-skill balance, (2) action-awareness merging, (3) clear goals, (4) detailed feedback, (5) concentration on the task at hand, (6) sense of control, (7) loss of self-consciousness, (8) transformation of time, and (9) an autotelic experience [8]. Seligman and Csikszentmihalyi stated that ‘families, schools, religious communities, and corporations, need to develop communities that foster these strengths’ [12] (p. 8). For educators, studying the mechanisms of LIE in education and making full use of them may improve students’ learning outcomes and help them learn more happily. For learners, LIE should share similar elements with a regular flow experience. In this research, we developed a detailed experimental paradigm with reference to these nine elements of flow theory, as discussed in the next section.
Traditional methods of assessing students’ implicit psychological states, including LIE, such as questionnaires and scales like the FSS and PEQ, depend mostly on individuals’ subjective answers, which undermines the objectivity and authenticity of the results. Researchers have therefore started to explore how people learn with the help of physiological recordings collected during learning, together with advanced modern methods, in an attempt to reveal the underlying learning mechanisms.
The fast development of biosensors and biosensing technologies has made it feasible to collect human physiological signals continuously in real-life scenarios. Recordings from the brain and body can indicate intentions and psychological states, enabling a physiological computing system to respond and adapt appropriately [13]. Researchers can now use innovative wearable biosensors, such as headbands [14,15,16], wristbands [17,18,19], and finger-clip detectors [20], to record human physiological signals without interrupting what subjects are doing [21]. Human cognitive ability, emotion, concentration, and engagement have all been studied on the basis of different kinds of physiological signals, and such physiology-based studies are more objective and realistic than the traditional questionnaire methods used to evaluate these functions [22,23]. Physiologists, psychologists, and educational researchers have shown increasing interest in applying wearable biosensing recordings in educational scenarios, giving rise to the emerging cross-disciplinary field of educational neuroscience [24].
State-of-the-art wearable biosensors and biosensing techniques can collect data from both the central nervous system (CNS) and the autonomic nervous system (ANS) [25]. Researchers generally use electroencephalography (EEG) or functional near-infrared spectroscopy (fNIRS) signals to characterize CNS activities. For example, EEG helps better understand the semantics of an artificial language learning task [26] and contributes to understanding students’ mental states, interests, attention, and engagement in real classroom settings [27,28]. Andreas’s study found that stimulus-evoked neural responses, known to be modulated by attention, could be tracked for groups of students with synchronized EEG acquisition [29]. Non-invasive, portable, low-cost headbands now make it possible to record students’ signals and to run experiments in natural learning scenarios. To represent ANS activities, heart rate (HR), pulse rate (PR), galvanic skin response (GSR), photoplethysmography (PPG), skin conductance, and skin temperature are standard signals, and the tools for recording them are more convenient. For instance, Koester and Farley [30] observed 98 children in 3 open and 3 traditional first-grade classrooms and collected their skin conductance levels and mean PR to categorize subgroups as either high or low in physiological arousal. Researchers also apply ANS signals to study students’ attention, concentration, mental state (usually stress), memory ability, and academic performance [31,32].
Using physiological recordings to study learning mechanisms and predict students’ academic performance is meaningful not only to educators but also to families and governments [33,34]. Understanding how to become immersed while learning would help students learn better, leading to better academic performance and a more pleasant learning experience. According to Mihaly Csikszentmihalyi, flow is a positive psychological state and can be detected by biosensing equipment [8]. LIE should likewise be a positive psychological state that is reflected in certain physiological recordings. Our research recorded students’ EEGs and PPGs and explored the underlying mechanisms of LIE evaluation.
With the fast development of technology, extended reality (XR) technologies, such as VR, augmented reality (AR), and mixed reality (MR), can create a strong sense of immersion [35]. IVR simulations for education have increased affective outcomes compared with traditional media [36,37,38,39,40]. With the development of the smart classroom, an increasing number of VR devices are being used to improve the traditional learning environment. Our research applied VR devices to construct a high-immersion learning environment, in contrast to a conventional online learning environment.
Both machine learning and deep learning methods can be applied to analyze physiological signals. The support vector machine (SVM) is a machine learning method that is often applied to classify physiological features. For instance, Tan’s research [41] proposed a new semi-supervised algorithm combining the Mutual-cross Imperial Competition Algorithm (MCICA) with an optimized SVM for motor imagery EEG classification, and Tang’s study [42] built a measurement model for athletes’ heart rates based on PPG recordings and SVM. Among deep learning methods, neural networks, such as convolutional neural networks (CNN) and recurrent neural networks (RNN), are often used to process human signals; in [43], researchers employed a CNN to classify EEG signals.
In the present study, we recorded college students’ EEGs and PPGs in two learning environments: a high-immersive VR learning environment and a low-immersive online-learning environment, using a headband and a finger-clip blood oxygen probe. On the basis of SVM, we selected suitable physiological characteristics to predict LIE levels and then optimized the evaluation model.
The remainder of the paper is organized as follows:
Section 2 introduces materials and methods, including the participants (Section 2.1), experimental details (Section 2.2, Section 2.3 and Section 2.4), and evaluation model (Section 2.5); Section 3 presents the most suitable human physiological features (Section 3.1) to assess LIE and the performance of the proposed model (Section 3.2). Discussions are given in Section 4. Finally, Section 5 concludes the paper.

2. Materials and Methods

2.1. Participants

Thirty-seven college students volunteered to participate in the LIE evaluation experiment. We required the participants to score no more than 500 on the College English Test-Six, an English proficiency examination for college students in China, and to have normal or corrected-to-normal vision. This research was conducted in accordance with Chinese law, and all volunteer students signed a paper-based informed consent form before the start of the experiment. The biosensing recordings collected were anonymized, used only for this scientific study, and will not be used for any other purpose.

2.2. Building Learning Environments

To construct two learning environments that effectively widen the gap between high and low learning immersion experiences, we used Pico Neo 2 smart VR glasses, shown in Figure 1, to build the high-immersion learning environment and an online teaching video about English words to build the low-immersion environment. We invited pilot subjects to make a first-round selection to help us choose appropriate learning materials. After this selection, we finalized a VR video snippet introducing the scenery of Guilin, China, and an online video teaching English words for the postgraduate entrance examination, taught by Wei Zhu, as the participants’ learning materials.
Furthermore, to ensure that online learning provided a lower immersion experience, we required the students’ College English Test-Six scores to be less than 500 so that they would find it difficult to follow the English-word learning video; the VR video on Guilin is interesting, while the online English-word video is comparatively boring and obscure. To collect the participants’ neurophysiological signals, we chose the BrainLink headband, shown in Figure 2, and the KS-CM01 finger-clip blood oxygen probe, shown in Figure 3, to obtain EEG and PPG signals.

2.3. Experimental Paradigm

We designed the experimental paradigm under the guidance of the nine elements of flow theory mentioned in Section 1: (1) challenge-skill balance, (2) action-awareness merging, (3) clear goals, (4) detailed feedback, (5) concentration on the task at hand, (6) sense of control, (7) loss of self-consciousness, (8) transformation of time, and (9) an autotelic experience.
To satisfy elements (1) and (6), we chose learning tasks that balance challenge against students’ skills and limited the qualifications of the subjects so that they would feel a sense of control over the tasks. To satisfy elements (3) and (4), we set up learning tasks before learning and announced that feedback on task completion would be given after learning. To satisfy elements (2), (5), and (9), we asked the participants to get enough sleep the night before and to focus as much as they could during the learning process. To check elements (7) and (8), we designed a feedback questionnaire to analyze participants’ awareness of self-consciousness and transformation of time.
Participants’ EEGs and PPGs were collected in two different learning environments, and each subject entered the laboratory separately. The first round of the experiment, for males, was on 13 January 2021, and the second round, for females, was on 14 January 2021. Each subject spent 30–40 min on the experiment. The participants first signed the paper-based informed consent form under the guidance of the experimenters and were then reminded of the pre-experiment notes on the usage of the VR glasses and handles, as well as the items prohibited while wearing the headband and finger probe. The main part of the experiment then began.
The experimental procedure can be divided into three parts according to the learning stage: the before-learning stage, the during-learning stage, and the after-learning stage, as presented in Figure 4.
Before learning, we set up learning tasks. The participants were required to read the prepared questions on the learning materials and were reminded that those questions needed to be answered after learning. For the VR video, the questions concerned the location, the population, and scenic details shown while watching the VR video on Guilin. For the online English-word video, we selected 15 English words taught in sequence by the online teacher, which were to be translated into their Chinese meanings after learning.
During learning, the students wore our custom-designed headbands on their foreheads and finger probes on their left hands to collect EEGs and PPGs synchronously.
After learning, the participants took off all the devices and answered the questions they had read before learning. The accuracy rate on these questions was used to measure their academic performance. Participants also filled out a short questionnaire to complete their self-reports on the following three aspects:
  • The number of questions that could be remembered while watching the videos (three options: almost none, nearly half, all);
  • The degree of feeling of time passage (two options: could or could not feel the passage of time);
  • The general emotional valence during learning (two options: negative or positive).
The first item was rated according to the elements proposed by Csikszentmihalyi: clear goals and concentration on the task at hand. The second item was rated according to the elements loss of self-consciousness and transformation of time. Considering that LIE is a positive psychological state, the third item concerned the participants’ emotions. All the participants were explicitly informed that their physiological signals and reports were for research purposes only and would never be revealed to anyone other than the experimenters and themselves.

2.4. Physiological Signal Preprocessing

2.4.1. Photoplethysmography

Hertzman first proposed photoplethysmography in 1938. It is a non-invasive, photoelectric method that detects changes in blood volume in living tissue [44]. The light-emitting diode (LED) in the PPG sensor emits green light through the living tissue, including the arteries and veins in the skin, and the light is absorbed and reflected back into a photodiode. If there is no large-scale movement of the body, the light absorption is unchanged, generating a direct current (DC) signal; the absorption that varies naturally with arterial blood flow generates an alternating current (AC) signal. Extracting the AC signal therefore reflects the characteristics of blood flow. After filtering and a Fast Fourier Transform (FFT), a signal with a prominent amplitude near 1 Hz is obtained on the frequency spectrum. If this frequency is f, then the PR is:
PR = f × 60 (bpm).
The calculation formula describing the linear correlation between SpO2 and the relative light intensities at 660 nm and 940 nm on the photodetector is:
SpO2 = 110 − 25 × R,  R = (AC660/DC660) / (AC940/DC940),
where AC660 and DC660 are the alternating and direct currents generated in the wrist tissue bed under red light with a wavelength of 660 nm, and AC940 and DC940 are those generated under near-infrared light with a wavelength of 940 nm.
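As a minimal illustration of the two calculations above (the band-limited FFT peak for PR and the empirical red/near-infrared ratio for SpO2), the following Python sketch processes a synthetic signal; the function names and the test waveform are illustrative assumptions, not our actual processing pipeline.

```python
import numpy as np

def estimate_pr(ppg, fs):
    """Estimate pulse rate (bpm) from a PPG segment via FFT.

    The dominant spectral peak in the physiological pulse band
    (0.5-3 Hz, i.e., 30-180 bpm) is taken as the pulse frequency f,
    and PR = f * 60.
    """
    ppg = np.asarray(ppg, dtype=float)
    ppg = ppg - ppg.mean()                     # remove the DC component
    spectrum = np.abs(np.fft.rfft(ppg))
    freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fs)
    band = (freqs >= 0.5) & (freqs <= 3.0)     # restrict to pulse band
    f = freqs[band][np.argmax(spectrum[band])]
    return f * 60.0

def estimate_spo2(ac660, dc660, ac940, dc940):
    """Empirical SpO2 estimate from the red/near-infrared AC/DC ratio."""
    r = (ac660 / dc660) / (ac940 / dc940)
    return 110.0 - 25.0 * r

# Synthetic 1.2 Hz pulse wave sampled at 100 Hz for 10 s -> 72 bpm
fs = 100.0
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)
print(estimate_pr(ppg, fs))   # -> 72.0
```

With a 10 s window at 100 Hz the FFT bin width is 0.1 Hz, so a 1.2 Hz pulse falls exactly on a bin and the estimate is exact; real PPG segments would first need the filtering step described above.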

2.4.2. Electroencephalogram

EEG describes the potential differences between cell groups of the cerebral cortex; it reflects electrical wave changes in the brain and is an overall reflection of the electrophysiological activity of brain nerve cells [45]. In this research, we applied the BrainLink headband to collect EEG signals from the volunteer students. Attention and relaxation scores, which reflect the degree of concentration and relaxation, were obtained with the help of the ‘Basic Detection’ mobile application, an app that connects to the headbands via Bluetooth and collects EEGs automatically. The collected EEG bands are shown in Table 1.

2.4.3. Groups and Labels

We extracted PR and SpO2 from the PPG recordings and two useful wave bands, the high alpha wave and the low beta wave, from the EEG recordings. The attention and relaxation scores were also downloaded. The data types of the six features are displayed in Table 2.
We marked all the physiological recordings with one of two labels: 1 and −1. Label ‘1’ represents a high-level LIE and corresponds to the recordings collected during VR video learning; label ‘−1’ represents a low-level LIE and corresponds to the recordings collected during online English learning. The six groups in Table 3 were formed by including at least one of the two features extracted from PPG and one of the four EEG features. The dimension of the feature vector should not be too high, or it will cause overfitting [46].
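As a small sketch of how such a labeled feature set could be assembled, the snippet below builds two labeled rows and one candidate feature group; the numeric values and column ordering are illustrative assumptions, not our measured data.

```python
import numpy as np

# Hypothetical per-trial feature rows in the order
# [PR, SpO2, attention, relaxation, high_alpha, low_beta];
# the values are illustrative, not real recordings.
vr_trials = np.array([[72.0, 98.0, 0.80, 0.40, 11.2, 14.5]])
online_trials = np.array([[80.0, 97.0, 0.50, 0.60, 9.1, 16.3]])

X = np.vstack([vr_trials, online_trials])
# Label 1 = high-level LIE (VR learning), -1 = low-level LIE (online learning)
y = np.array([1] * len(vr_trials) + [-1] * len(online_trials))

# One possible group: PR, attention/relaxation ratio, and high alpha wave
group = np.column_stack([X[:, 0], X[:, 2] / X[:, 3], X[:, 4]])
```

Keeping each group to a few columns, as here, reflects the low feature dimension required to avoid overfitting.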

2.5. Support Vector Machine

Both machine learning and deep learning methods can be applied to analyze physiological signals. Since we divided the LIE levels into two classes, high and low, we chose SVM, a data classification model based on supervised machine learning and statistical learning theory [47], as the fundamental model. The basic idea of SVM is a linear classifier that maximizes the margin in the feature space between the different classes. The hyperplane is the SVM decision boundary that we aim to find when training the model; when constructing and optimizing an SVM, we look for the vectors closest to the hyperplane (the support vectors) and maximize the gap between them and the hyperplane. A nonlinear classification problem in the input space can be transformed into a linear classification problem in a high-dimensional feature space through a nonlinear transformation, with a linear SVM learned in that space. In the dual problem of the linear SVM, the objective function and the classification decision function involve only inner products between instances, so the nonlinear transformation need not be specified explicitly; the inner product is simply replaced with a kernel function.
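The kernel trick just described can be sketched with scikit-learn’s SVC; the toy two-class data below stand in for the physiological feature vectors and are purely illustrative.

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-class data standing in for physiological feature vectors
X = np.array([[-1.0, -1.0], [-1.0, -2.0], [-2.0, -1.0],
              [1.0, 1.0], [1.0, 2.0], [2.0, 1.0]])
y = np.array([-1, -1, -1, 1, 1, 1])   # -1 = low LIE, 1 = high LIE

# Replacing the inner product with the RBF kernel lets the SVM
# separate classes that need not be linearly separable in input space.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)

print(clf.predict([[-1.5, -1.5], [1.5, 1.5]]))   # expect [-1  1]
```

The fitted `clf.support_vectors_` attribute exposes exactly the margin-defining vectors discussed above.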
In this study, we added the radial basis function (RBF) kernel [48] to the basic SVM model according to the size of the features and the dataset. When training the LIE evaluation model, we applied the k-fold cross-validation method to adjust and optimize the parameters [48]. In general, k-fold cross-validation is used in model optimization to find the hyperparameter values that give the best generalization performance. The original dataset is first partitioned into k subsets; one subset is used as the test set and the remaining k−1 subsets as the training set. The average classification rate over the k runs is taken as the true classification rate of the model. After the appropriate parameters are found, the model is retrained on the full training set, and an independent test set is used for the final evaluation of model performance. K-fold cross-validation takes advantage of non-repetitive sampling: each sample point is included in either the training set or the test set exactly once in each iteration.
We take k = 10 as an example; the calculation procedure is shown in Figure 5. Increasing k means more data are used for model training in each iteration, which minimizes bias but prolongs the running time; reducing k lowers the computational cost of repeatedly fitting and evaluating on different data blocks. While training the SVM model, the cross-validation parameter (cv) is therefore an important element to be discussed.
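The k-fold procedure above can be sketched with scikit-learn’s cross-validation utilities; the synthetic dataset and the candidate parameter grid are illustrative assumptions, not our experimental configuration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Synthetic two-class data standing in for the six-feature recordings
X, y = make_classification(n_samples=200, n_features=6,
                           n_informative=4, random_state=0)

# k-fold cross-validation: the data are split into cv folds; each fold
# serves as the test set once, and the mean score estimates generalization.
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print(scores.mean())

# Hyperparameter search scored by the same k-fold procedure
search = GridSearchCV(SVC(kernel="rbf"),
                      {"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1]},
                      cv=5)
search.fit(X, y)
print(search.best_params_)
```

After the search, `search.best_estimator_` is refit on all the data passed to `fit`, matching the retraining step described above.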
To appraise the performance of the model, four evaluation indexes, named FN, FP, TN, and TP, are defined in Table 4.
Take the positive label as an example. Precision is the ratio of correct positive predictions to all positive predictions:
precision = TP / (TP + FP),
while recall is the proportion of all actual positive samples that are correctly predicted as positive:
recall = TP / (TP + FN),
and the f1 score is the harmonic mean of precision and recall:
f1 score = 2 × precision × recall / (precision + recall) = 2 × TP / (2 × TP + FP + FN).
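The three metrics can be computed directly from the confusion-matrix counts; the example counts below are arbitrary and illustrative, not results from our experiment.

```python
def precision(tp, fp):
    # correct positive predictions over all positive predictions
    return tp / (tp + fp)

def recall(tp, fn):
    # correct positive predictions over all actual positives
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    # harmonic mean of precision and recall, simplified to counts
    return 2 * tp / (2 * tp + fp + fn)

# Example counts (illustrative only)
tp, fp, fn = 90, 10, 15
p, r = precision(tp, fp), recall(tp, fn)
print(p, r, f1_score(tp, fp, fn))
```

The count form 2·TP/(2·TP + FP + FN) and the harmonic-mean form 2·p·r/(p + r) are algebraically identical, which the test below checks.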

3. Experiment Results

3.1. Suitable Physiological Features to Evaluate LIE

To select the physiological features contributing most to evaluating the LIE levels of college students, we implemented SVM with the RBF kernel function. Based on Section 2.4.3, Table 5, and Figure 6, Group 3 (including PR, the attention/relaxation score ratio, and the high alpha wave) performs best, with the highest prediction accuracy of 89.72%, followed by Group 4 at 87.75%, a difference of 1.97%. From Figure 7, the training time of Group 3 is 9.8189 s, 1.5987 s slower than the 8.22 s of the fastest group, Group 5. When the feature SpO2 was added to Group 5, the training time remained the shortest among all groups, but the prediction accuracy was not the highest. Taking accuracy and training time together, an SVM classifier with the RBF kernel performs best when PR, the ratio of attention score to relaxation score, and the high alpha wave of the EEG are used as the feature vector.

3.2. Model Optimization Results

While training the SVM model, cross-validation is a vital parameter to consider (as mentioned in Section 2.5). Table 6 shows the experimental results for values of cv ranging from 3 to 10.
From Table 6 and Figure 8, we can observe that the proposed model performs the best when the value of cv is 5.
SVM with different kernel functions is widely applied as a classification model for emotion or experience recognition; different tasks and data volumes require different kernel functions. Table 7 lists several studies on human emotion recognition based on SVM.
To classify LIE using deep learning methods and compare them with machine learning methods, we chose an open-source model based on a CNN and long short-term memory (LSTM) for testing. The basic model consists of a 1D convolution layer and three stacked LSTM layers. We obtained 84.72% precision on our dataset, which is lower than that of the proposed SVM model.
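A minimal Keras sketch of the kind of baseline just described (one 1D convolution layer followed by three stacked LSTM layers and a binary output); the layer widths, window length, and channel count are illustrative assumptions, not the exact open-source configuration we tested.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(timesteps, channels):
    """1D conv + three stacked LSTM layers ending in a binary classifier."""
    model = models.Sequential([
        layers.Input(shape=(timesteps, channels)),
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.LSTM(64, return_sequences=True),
        layers.LSTM(64, return_sequences=True),
        layers.LSTM(32),
        layers.Dense(1, activation="sigmoid"),  # high vs. low LIE
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# e.g., 256-sample windows over the 6 physiological features
model = build_cnn_lstm(timesteps=256, channels=6)
```

Unlike the SVM, which consumes hand-selected feature vectors, this architecture ingests raw windowed time series, which is one reason its accuracy can differ on a small dataset.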

4. Discussions

The global outbreak of COVID-19 has dramatically changed people’s daily lives. When face-to-face learning with a teacher in the classroom is no longer possible, ensuring that students learn effectively becomes an important and unavoidable topic. In this paper, we proposed the idea of monitoring students’ learning immersion experience by collecting and analyzing their physiological signals. We took VR learning and online learning as examples, and the results demonstrated that it is possible to predict students’ LIEs from EEGs and PPGs, even when students cannot learn from a teacher face-to-face in a traditional classroom.
Besides English classes, other courses that require students to concentrate for long periods of instruction can also apply our experimental paradigm and method for LIE measurement. In practice, however, despite the small size and portability of the wearable devices, whether students consent to the continuous collection of physiological signals in class, and how to protect the privacy of those signals, remains to be studied.
The contributions of this paper are summarized as follows:
  • We have pioneered the exploration of the relationship between students’ physiological recordings and their LIEs and proposed that some key physiological characteristics can be effectively used to predict LIE levels.
  • We have proposed an assessment model based on an optimized support vector machine method to predict the outcome of the learning immersion experience. The proposed model can also be applied to other predictive tasks based on biosensing recordings.
  • Our experimental results have provided evidence supporting the feasibility of predicting students’ learning immersion experience levels by their physiological recordings.

5. Conclusions

In the present research, we recorded thirty-seven college students’ EEGs and PPGs in two learning environments, a high-immersion VR learning environment and a low-immersion online learning environment, using a headband and a finger-clip blood oxygen probe. With the aid of the machine learning method SVM, we selected suitable physiological characteristics to predict LIE levels, optimized the evaluation model, and compared it with related research.
With the inevitable equalization of educational resources and the rapid development of artificial intelligence and biosensing techniques, it is meaningful to study LIE mechanisms on the basis of modern technologies. Considering the global spread of COVID-19, mobile learning, such as online learning and VR learning, may have to become the norm; we ought to study the mechanisms of immersive learning so that we can find ways to monitor and enhance the learning effect. The purpose of our research is to propose a novel approach to monitoring students’ learning immersion experiences, to arouse curiosity and discussion in the academic community, and, in the future, to realize practical applications. In future work, whether other physiological signals, such as eye movement signals, GSR, and EMG, can be applied to evaluate students’ LIE levels remains to be studied. Moreover, newly proposed algorithms and methods may be applied to monitor similar factors on the basis of our present work.

Author Contributions

Conceptualization, J.G. and H.W.; methodology, J.G., B.W. and H.W.; software, B.W., Z.Z. and W.H.; validation, B.W., Z.Z. and W.H.; formal analysis, J.G., B.W., H.W., Z.Z. and W.H.; investigation, B.W. and H.W.; resources, J.G., B.W. and H.W.; data curation, B.W. and Z.Z.; writing—original draft preparation, J.G., B.W. and H.W.; writing—review and editing, Z.Z., H.W. and W.H.; visualization, W.H.; supervision, J.G. and H.W.; project administration, J.G.; funding acquisition, J.G. and H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China, grant number 61977006.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki. All authors take full responsibility for any ethical issues surrounding the study.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets within this research are available from the corresponding author on reasonable request.

Acknowledgments

We wish to thank all the volunteer participants for providing us with their biosensing recordings.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Handl, S.; Calheiros, C.S.C.; Fiebig, M.; Langergraber, G. Educational Resources for Geoethical Aspects of Water Management. Geosciences 2022, 12, 80. [Google Scholar] [CrossRef]
  2. Liu, R.; Wang, L.; Lei, J.; Wang, Q.; Ren, Y. Effects of an immersive virtual reality-based classroom on students’ learning performance in science lessons. Br. J. Educ. Technol. 2020, 51, 2034–2049. [Google Scholar] [CrossRef]
  3. Jocelyn, P.; Richard, E.M. Learning about history in immersive virtual reality: Does immersion facilitate learning? Educ. Technol. Res. Dev. 2021, 69, 1433–1451. [Google Scholar] [CrossRef]
  4. Boury, N.; Alvarez, K.S.; Costas, A.G.; Knapp, G.S.; Seipeltthiemann, R.L. Teaching in the Time of COVID-19: Creation of a Digital Internship to Develop Scientific Thinking Skills and Create Science Literacy Exercises for Use in Remote Classrooms. J. Microbiol. Biol. Educ. 2021, 22, 251–265. [Google Scholar] [CrossRef]
  5. Power, J.; Conway, P.; Gallchoir, C.O.; Young, A.M.; Hayes, M. Illusions of online readiness: The counter-intuitive impact of rapid immersion in digital learning due to COVID-19. Ir. Educ. Stud. 2022, 1–18. [Google Scholar] [CrossRef]
  6. Csikszentmihalyi, M. Beyond Boredom and Anxiety: The Experience of Play in Work and Games; Jossey-Bass: San Francisco, CA, USA, 1975; p. 75. [Google Scholar]
  7. Mirvis, P.H. Flow: The Psychology of Optimal Experience Flow: The Psychology of Optimal Experience, by Csikszentmihalyi Michael. Acad. Manag. Rev. 1991, 16, 636–640. [Google Scholar] [CrossRef] [Green Version]
  8. Karen, S.B. Theoretically Speaking: An Interview with Mihaly Csikszentmihalyi on Flow Theory Development and Its Usefulness in Addressing Contemporary Challenges in Education. Educ. Psychol. Rev. 2015, 27, 353–364. [Google Scholar] [CrossRef]
  9. Privette, G. Peak experience, peak performance, and flow: A comparative analysis of positive human experiences. J. Pers. Soc. Psychol. 1983, 45, 1361–1368. [Google Scholar] [CrossRef]
  10. Privette, G.; Bundrick, C.M. Measurement of Experience: Construct and Content Validity of the Experience Questionnaire. Percept. Mot. Ski. 1987, 65, 315–332. [Google Scholar] [CrossRef]
  11. Susan, A.J.; Herbert, W.M. Development and Validation of a Scale to Measure Optimal Experience: The Flow State Scale. J. Sport Exerc. Psychol. 1996, 18, 17–35. [Google Scholar] [CrossRef]
  12. Herbert, W.M.; Susan, A.J. Flow experience in sport: Construct validation of multidimensional, hierarchical state and trait responses. Struct. Equ. Model. A Multidiscip. J. 1999, 6, 343–371. [Google Scholar] [CrossRef]
  13. Stephen, H.F.; Chelsea, D. Personal informatics and negative emotions during commuter driving: Effects of data visualization on cardiovascular reactivity & mood. Int. J. Hum.-Comput. Stud. 2020, 144, 102499. [Google Scholar] [CrossRef]
  14. Arnal, P.J.; Thorey, V.; Debellemaniere, E.; Ballard, M.E.; Bou, H.A.; Guillot, A.; Jourde, H.; Harris, M.; Guillard, M.; Van Beers, P.; et al. The Dreem Headband compared to Polysomnography for EEG Signal Acquisition and Sleep Staging. Sleep 2020, 43, zsaa097. [Google Scholar] [CrossRef] [PubMed]
  15. Casciola, A.A.; Carlucci, S.K.; Kent, B.A.; Punch, A.M.; Muszynski, M.A.; Zhou, D.; Kazemi, A.; Mirian, M.S.; Valerio, J.; Mckeown, M.J.; et al. A Deep Learning Strategy for Automatic Sleep Staging Based on Two-Channel EEG Headband Data. Sensors 2021, 21, 3316. [Google Scholar] [CrossRef]
  16. Herman, K.; Ciechanowski, L.; Przegalińska, A. Emotional Well-Being in Urban Wilderness: Assessing States of Calmness and Alertness in Informal Green Spaces (IGSs) with Muse—Portable EEG Headband. Sustainability 2021, 13, 2212. [Google Scholar] [CrossRef]
  17. Nath, R.K.; Thapliyal, H. Machine Learning-Based Anxiety Detection in Older Adults Using Wristband Sensors and Context Feature. SN Comput. Sci. 2021, 2, 1–12. [Google Scholar] [CrossRef]
  18. Xuefei, Z.; Zhang, X.; Li, T.T.; Ren, H.T.; Peng, H.; Jiang, Q.; Wu, L.; Shiu, B.C.; Wang, Y.; Lou, C.W.; et al. Flexible and wearable wristband for harvesting human body heat based on coral-like PEDOT:Tos-coated nanofibrous film. Smart Mater. Struct. 2021, 30, 015003. [Google Scholar] [CrossRef]
  19. Selder, J.; Proesmans, T.; Breukel, L.; Dur, O.; Gielen, W.; van Rossum, A.; Allaart, C. Assessment of a standalone photoplethysmography (PPG) algorithm for detection of atrial fibrillation on wristband-derived data. Comput. Methods Prog. Biomed. 2020, 197, 105753. [Google Scholar] [CrossRef]
  20. Kuncoro, C.B.D.; Luo, W.-J.; Kuan, Y.-D. Wireless Photoplethysmography Sensor for Continuous Blood Pressure Biosignal Shape Acquisition. J. Sens. 2020, 2020, 7192015. [Google Scholar] [CrossRef]
  21. Sharma, A.; Badea, M.; Tiwari, S.; Marty, J.L. Wearable Biosensors: An Alternative and Practical Approach in Healthcare and Disease Monitoring. Molecules 2021, 26, 748. [Google Scholar] [CrossRef]
  22. Kristy, M.; Julien, P.; Ben, R.; David, B.P. Physiological Factors Which Influence Cognitive Performance in Military Personnel. Hum. Factors J. Hum. Factors Ergon. Soc. 2020, 62, 93–123. [Google Scholar] [CrossRef]
  23. Roxana, A.; Adriana, T. Cognitive Performance and Physiological Response Analysis. Int. J. Soc. Robot. 2020, 12, 47–64. [Google Scholar] [CrossRef]
  24. Matthew, D.L. Education and the social brain. Trends Neurosci. Educ. 2012, 1, 3–9. [Google Scholar] [CrossRef]
  25. Yu, Z.; Fei, Q.; Bo, L.; Xuan, Q.; Yingying, Z.; Dan, Z. Wearable Neurophysiological Recordings in Middle-School Classroom Correlate With Students’ Academic Performance. Front. Hum. Neurosci. 2018, 12, 457. [Google Scholar] [CrossRef]
  26. Foster, C.; Williams, C.C.; Krigolson, O.E.; Fyshe, A. Using EEG to decode semantics during an artificial language learning task. Brain Behav. 2021, 11, e2234. [Google Scholar] [CrossRef] [PubMed]
  27. Babiker, A.; Faye, I.; Aricò, P. A Hybrid EMD-Wavelet EEG Feature Extraction Method for the Classification of Students’ Interest in the Mathematics Classroom. Comput. Intell. Neurosci. 2021, 2021, 6617462. [Google Scholar] [CrossRef]
  28. Li-Wei, K.; Komarov, O.; Hairston, W.D.; Chin-Teng, L. Sustained Attention in Real Classroom Settings: An EEG Study. Front. Hum. Neurosci. 2017, 11, 388. [Google Scholar] [CrossRef]
  29. Poulsen, A.T.; Kamronn, S.; Dmochowski, J.; Parra, L.C.; Hansen, L.K. EEG in the classroom: Synchronised neural recordings during video presentation. Sci. Rep. 2017, 7, 43916. [Google Scholar] [CrossRef] [Green Version]
  30. Koester, L.S.; Farley, F.H. Psychophysiological characteristics and school performance of children in open and traditional classrooms. J. Educ. Psychol. 1982, 74, 254–263. [Google Scholar] [CrossRef]
  31. Yang, Y.; Hu, L.; Zhang, R.; Zhu, X.; Wang, M. Investigation of students’ short-term memory performance and thermal sensation with heart rate variability under different environments in summer. Build. Environ. 2021, 195, 107765. [Google Scholar] [CrossRef]
  32. Yoo, H.H.; Yune, S.J.; Im, S.J.; Kam, B.S.; Lee, S.Y. Heart rate variability-measured stress and academic achievement in medical students. Med. Princ. Pract. Int. J. Kuwait Univ. Health Sci. Cent. 2020, 30, 193–200. [Google Scholar] [CrossRef] [PubMed]
  33. Al Balushi, S.M.; Al Harthy, I.S.; Almehrizi, R.S. Attention Drifting Away While Test-Taking: Mind-Wandering in Students with Low- and High-Performance Levels in TIMSS-Like Science Tests. Int. J. Sci. Math. Educ. 2022, 1–22. [Google Scholar] [CrossRef]
  34. Wang, P.; Li, L.; Wang, R.; Xie, Y.; Zhang, J. Complexity-based attentive interactive student performance prediction for personalized course study planning. Educ. Inf. Technol. 2022, 1–23. [Google Scholar] [CrossRef]
  35. Jonathan, S. Defining Virtual Reality: Dimensions Determining Telepresence. J. Commun. 1992, 42, 73–93. [Google Scholar] [CrossRef]
  36. Klingenberg, S.; Jørgensen, M.L.M.; Dandanell, G.; Skriver, K.; Mottelson, A.; Makransky, G. Investigating the effect of teaching as a generative learning strategy when learning through desktop and immersive VR: A media and methods experiment. Br. J. Educ. Technol. 2020, 51, 2115–2138. [Google Scholar] [CrossRef]
  37. Curran, M.F.; Summerfield, K.; Alexander, E.; Lanning, S.G.; Schwyter, A.R.; Torres, M.L.; Schell, S.; Vaughan, K.; Robinson, T.J.; Smith, D.I. Use of 3-Dimensional Videography as a Non-Lethal Way to Improve Visual Insect Sampling. Land 2020, 9, 340. [Google Scholar] [CrossRef]
  38. Vaughan, K.L.; Vaughan, R.E.; Janel, M.S. Experiential Learning in Soil Science: Use of an Augmented Reality Sandbox. Nat. Sci. Educ. 2017, 46, 160031. [Google Scholar] [CrossRef] [Green Version]
  39. Reed, S.; Hsi, S.; Kreylos, O.; Yikilmaz, M.B.; Kellogg, L.H.; Schladow, S.G.; Segale, H.; Chan, L. Augmented Reality Turns a Sandbox into a Geoscience Lesson. Eos 2016, 97, 18–22. [Google Scholar] [CrossRef]
  40. Freeman, S.; Eddy, S.L.; Mcdonough, M.; Smith, M.K.; Okoroafor, N.; Jordt, H.; Wenderoth, M.P. Active learning increases student performance in science, engineering, and mathematics. Proc. Natl. Acad. Sci. USA 2014, 111, 8410–8415. [Google Scholar] [CrossRef] [Green Version]
  41. Tan, X.; Guo, C.; Jiang, T.; Fu, K.; Zhou, N.; Yuan, J.; Zhang, G. A new semi-supervised algorithm combined with MCICA optimizing SVM for motion imagination EEG classification. Intell. Data Anal. 2021, 25, 863–877. [Google Scholar] [CrossRef]
  42. Lei, T.; Cai, Z.; Hua, L.; Paul, A.; Cheung, S.K.S.; Ho, C.C.; Din, S. Training prediction and athlete heart rate measurement based on multi-channel PPG signal and SVM algorithm. J. Intell. Fuzzy Syst. 2021, 40, 7497–7508. [Google Scholar] [CrossRef]
  43. Bird, J.J.; Faria, D.R.; Manso, L.J.; Ayrosa, P.P.S.; Ekárt, A. A study on CNN image classification of EEG signals represented in 2D and 3D. J. Neural Eng. 2021, 18, 026005. [Google Scholar] [CrossRef] [PubMed]
  44. Marek, W.; Bogdan, P. Photoplethysmographic Time-Domain Heart Rate Measurement Algorithm for Resource-Constrained Wearable Devices and Its Implementation. Sensors 2020, 20, 1783. [Google Scholar] [CrossRef] [Green Version]
  45. Tuncer, T.; Dogan, S.; Subasi, A. EEG-based driving fatigue detection using multilevel feature extraction and iterative hybrid feature selection. Biomed. Signal Process. 2021, 68, 102591. [Google Scholar] [CrossRef]
  46. Oscar, D.; Anibal, P.; Noelia, V.; Jesus, S.; Gloria, B. Robustness to adversarial examples can be improved with overfitting. Int. J. Mach. Learn. Cybern. 2020, 11, 935–944. [Google Scholar] [CrossRef] [Green Version]
  47. Amin, H.; Huapeng, W.; Fatemeh, J.; Ming, L.; Heikki, H. A combination of CSP-based method with soft margin SVM classifier and generalized RBF kernel for imagery-based brain computer interface applications. Multimed. Tools Appl. 2020, 79, 17521–17549. [Google Scholar] [CrossRef] [Green Version]
  48. Neffati, S.; Ben, A.K.; Taouali, O.; Bouzrara, K. Enhanced SVM–KPCA Method for Brain MR Image Classification. Comput. J. 2020, 63, 383–394. [Google Scholar] [CrossRef]
  49. Abidin, Z.; Destian, W.; Umer, R. Combining support vector machine with radial basis function kernel and information gain for sentiment analysis of movie reviews. J. Phys. Conf. Ser. 2021, 1918, 042157. [Google Scholar] [CrossRef]
  50. Sun, L.; Huang, Y.; Li, Q.; Li, P. Multi-classification speech emotion recognition based on two-stage bottleneck features selection and MCJD algorithm. Signal Image Video Process. 2022, 2022, 1–9. [Google Scholar] [CrossRef]
  51. Linhui, S.; Bo, Z.; Sheng, F.; Jia, C.; Fu, W. Speech emotion recognition based on DNN-decision tree SVM model. Speech Commun. 2019, 115, 29–37. [Google Scholar] [CrossRef]
  52. Jaya, H.T.H.; Ruldeviyani, Y.; Aditama, A.R.; Madya, G.R.; Nugraha, A.W.; Adisaputra, M.W. Sentiment analysis of twitter data related to Rinca Island development using Doc2Vec and SVM and logistic regression as classifier. Procedia Comput. Sci. 2022, 197, 660–667. [Google Scholar] [CrossRef]
Figure 1. Pico Neo 2 smart VR glasses.

Figure 2. BrainLink headband.

Figure 3. KS-CM01 finger-clip blood oxygen probe.

Figure 4. Experimental procedure for LIE evaluation.

Figure 5. An example procedure of a k-fold cross-validation method where k is 10.

Figure 6. Model Evaluation Index for different feature groups and labels. The solid line with a circle represents the prediction accuracy of the whole classification model.

Figure 7. Training time and precision accuracies. Columns with a heavy border are the best.

Figure 8. Model Evaluation Index for different cv values (from 3 to 10). The solid line with a pure-color circle represents the prediction accuracy of the whole classification model.
Table 1. EEG wavebands and corresponding characterizations.

| Item | Frequency Band (Hz) | Characterization |
| --- | --- | --- |
| delta wave | 0–3 | Occurs only during deep sleep. |
| theta wave | 4–7 | Occurs in a light or semi-waking state. |
| low alpha wave | 7–9 | Occurs in a deep state of relaxation while awake. |
| high alpha wave ¹ | 10–12 | Reflects the extent of relaxation and concentration; describes the best state for thinking and learning. |
| low beta wave ² | 13–17 | Occurs in a more focused state, when the mind begins to concentrate on one thing. |
| high beta wave | 18–30 | Occurs in a state of nervous tension. |
| low gamma wave | 31–39 | Mostly associated with extreme emotions. |
| high gamma wave | 41–49 | Mostly associated with learning disabilities or mental disabilities. |

¹,² These two wavebands correlate with thinking and learning.
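The band boundaries in Table 1 can be applied directly to a raw EEG trace. The sketch below is illustrative only (the function and variable names are our own, and the paper does not specify its spectral pipeline): it estimates per-band spectral energy from a plain FFT periodogram.

```python
import numpy as np

# EEG wavebands from Table 1 (Hz).
BANDS = {
    "delta": (0, 3), "theta": (4, 7), "low_alpha": (7, 9),
    "high_alpha": (10, 12), "low_beta": (13, 17), "high_beta": (18, 30),
    "low_gamma": (31, 39), "high_gamma": (41, 49),
}

def band_energies(signal, fs):
    """Return the spectral energy in each waveband via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: float(power[(freqs >= lo) & (freqs <= hi)].sum())
            for name, (lo, hi) in BANDS.items()}

# Synthetic 2-second trace at 256 Hz dominated by an 11 Hz (high-alpha) rhythm,
# with a weaker 25 Hz (high-beta) component.
fs = 256
t = np.arange(2 * fs) / fs
eeg = np.sin(2 * np.pi * 11 * t) + 0.1 * np.sin(2 * np.pi * 25 * t)
energies = band_energies(eeg, fs)
```

On this synthetic trace the high-alpha band carries the dominant energy, matching the injected 11 Hz rhythm.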
Table 2. Data types of the six features.

| Item | PR | SpO2 | attention score, relaxation score | high alpha wave, low beta wave |
| --- | --- | --- | --- | --- |
| Type | integer | percentage | integer | energy |
Table 3. Six feature groups.

| Group | Features Contained |
| --- | --- |
| 1 | PR, SpO2, high alpha wave |
| 2 | PR, SpO2, low beta wave |
| 3 | PR, attention score/relaxation score, high alpha wave |
| 4 | PR, attention score/relaxation score, low beta wave |
| 5 | PR, SpO2, attention score/relaxation score, high alpha wave |
| 6 | PR, SpO2, attention score/relaxation score, low beta wave |
Table 4. Definitions of FN, FP, TN, and TP.

|  | Negative (N) | Positive (P) |
| --- | --- | --- |
| False (F) | FN (predicted result: N; actual result: P) | FP (predicted result: P; actual result: N) |
| True (T) | TN (predicted result: N; actual result: N) | TP (predicted result: P; actual result: P) |
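From the four counts defined in Table 4, the precision, recall, and f1 score reported in Tables 5 and 6 follow mechanically. A minimal sketch (the function name is our own):

```python
def confusion_metrics(tp, fp, tn, fn):
    """Precision, recall, F1, and accuracy from the counts defined in Table 4."""
    precision = tp / (tp + fp)          # share of positive predictions that are correct
    recall = tp / (tp + fn)             # share of actual positives that are found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, f1, accuracy

# Hypothetical counts for illustration.
p, r, f1, acc = confusion_metrics(tp=90, fp=10, tn=85, fn=15)
```

With these example counts, precision is 90/100 = 0.90 and recall is 90/105 ≈ 0.857.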
Table 5. Model Evaluation Index for different feature groups.

| Group | Overall Precision | Precision (−1) | Recall (−1) | f1 Score (−1) | Precision (1) | Recall (1) | f1 Score (1) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.644269 | 0.65 | 0.64 | 0.65 | 0.64 | 0.65 | 0.64 |
| 2 | 0.830040 | 0.80 | 0.84 | 0.82 | 0.86 | 0.82 | 0.84 |
| 3 | 0.897233 | 0.88 | 0.90 | 0.89 | 0.91 | 0.90 | 0.90 |
| 4 | 0.87747 | 0.87 | 0.89 | 0.88 | 0.89 | 0.87 | 0.88 |
| 5 | 0.798484 | 0.79 | 0.80 | 0.79 | 0.81 | 0.80 | 0.80 |
| 6 | 0.861661 | 0.85 | 0.89 | 0.87 | 0.88 | 0.83 | 0.85 |
Table 6. Model Evaluation Index for different cv values (from 3 to 10).

| Item | Overall Precision | Precision (−1) | Recall (−1) | f1 Score (−1) | Precision (1) | Recall (1) | f1 Score (1) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| cv = 3 | 0.8735178 | 0.88 | 0.87 | 0.87 | 0.87 | 0.88 | 0.88 |
| cv = 4 | 0.841897 | 0.85 | 0.84 | 0.85 | 0.83 | 0.84 | 0.83 |
| cv = 5 | 0.8972332 | 0.88 | 0.90 | 0.89 | 0.91 | 0.90 | 0.90 |
| cv = 6 | 0.8458498 | 0.83 | 0.87 | 0.85 | 0.86 | 0.83 | 0.84 |
| cv = 7 | 0.8142293 | 0.86 | 0.78 | 0.82 | 0.77 | 0.86 | 0.81 |
| cv = 8 | 0.857708 | 0.85 | 0.86 | 0.85 | 0.87 | 0.85 | 0.86 |
| cv = 9 | 0.8537549 | 0.88 | 0.85 | 0.87 | 0.82 | 0.85 | 0.84 |
| cv = 10 | 0.8300395 | 0.87 | 0.79 | 0.83 | 0.80 | 0.87 | 0.83 |
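The cv values above correspond to k-fold cross-validation as illustrated in Figure 5: the data are shuffled, split into k folds, and each fold serves once as the held-out test set. A minimal index-generation sketch (our own illustration, not the paper's implementation; 37 samples are used here only to mirror the participant count):

```python
import numpy as np

def kfold_indices(n_samples, k=10, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation,
    as illustrated in Figure 5 for k = 10."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)          # shuffle once up front
    folds = np.array_split(idx, k)            # k (nearly) equal folds
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

splits = list(kfold_indices(37, k=10))
```

Each sample lands in exactly one test fold, so the k per-fold scores average into the overall precision reported for each cv value.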
Table 7. Research on applying SVM to human emotion or experience analyses.

| Methods | Function | Accuracy |
| --- | --- | --- |
| SVM-RBF (Ours) | To evaluate the learning immersion experience of students | 89.72% |
| SVM [49] | To classify the sentiment of movie reviews | 81.50% |
| SVM-RBF [49] |  | 82.25% |
| SVM-RBF-IG [49] |  | 87.25% |
| SVM-MCJD [50] | Speech emotion recognition | 87.08% |
| DNN-decision tree SVM [51] |  | 75.83% |
| PV-DM SVM [52] | To analyze the public's sentiment on social media | 86.86% |
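Several of the methods compared above, including ours, rely on the Gaussian RBF kernel. Its core is a single expression; the sketch below is an illustration only (the gamma value is arbitrary, not the tuned setting used in our model):

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.1):
    """Gaussian RBF kernel: K(x, y) = exp(-gamma * ||x - y||^2)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

# Identical feature vectors give similarity 1; distant vectors decay toward 0.
same = rbf_kernel([1.0, 2.0], [1.0, 2.0])
far = rbf_kernel([0.0], [10.0])
```

Because the kernel maps samples into an implicit infinite-dimensional space, an RBF-SVM can separate feature groups that are not linearly separable in the raw PR/SpO2/EEG feature space.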
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Guo, J.; Wan, B.; Wu, H.; Zhao, Z.; Huang, W. A Virtual Reality and Online Learning Immersion Experience Evaluation Model Based on SVM and Wearable Recordings. Electronics 2022, 11, 1429. https://doi.org/10.3390/electronics11091429
