Article

Exploring User Engagement in Museum Scenario with EEG—A Case Study in MAV Craftsmanship Museum in Valle d’Aosta Region, Italy

by Ivonne Angelica Castiblanco Jimenez 1,†, Francesca Nonis 1,†, Elena Carlotta Olivetti 1, Luca Ulrich 1,*, Sandro Moos 1, Maria Grazia Monaci 2, Federica Marcolin 1 and Enrico Vezzetti 1

1 Department of Management and Production Engineering, Politecnico di Torino, 10129 Torino, Italy
2 Department of Social Sciences, University of Valle D’Aosta, 11100 Aosta, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Electronics 2023, 12(18), 3810; https://doi.org/10.3390/electronics12183810
Submission received: 30 June 2023 / Revised: 21 August 2023 / Accepted: 4 September 2023 / Published: 8 September 2023

Abstract: In the last decade, museums and exhibitions have benefited from advances in Virtual Reality technologies to create virtual elements that complement the traditional visit. The aim is to make collections more engaging, interactive, comprehensible, and accessible. Moreover, studies on users’ and visitors’ engagement suggest that the real affective state cannot be fully assessed with self-assessment techniques and that other physiological techniques, such as EEG, should be adopted to gain a less biased and more complete understanding of their feelings. With the aim of contributing to bridging this knowledge gap, this work adopts EEG-based indicators from the literature (valence, arousal, engagement) to analyze the affective state of 95 visitors interacting physically or virtually (in a VR environment) with five handicraft objects belonging to the permanent collection of the Museo dell’Artigianato Valdostano di Tradizione, a traditional craftsmanship museum in the Valle d’Aosta region. Extreme Gradient Boosting (XGBoost) was adopted to classify the obtained engagement measures, which were labeled according to questionnaire replies. EEG analysis played a fundamental role in understanding the cognitive and emotional processes underlying immersive experiences, highlighting the potential of VR technologies in enhancing participants’ cognitive engagement. The results indicate that EEG-based indicators share common trends with self-assessment, suggesting that their use as ‘the ground truth of emotion’ is a viable option.

1. Introduction

In museums and heritage sites, technology is gaining momentum. Some innovations, such as projection mapping, binaural audio, digital twins, holographic displays, and visitor flow analytics, have moved from proof of concept to standard practice. Most state-of-the-art museum technologies impact and enhance the visitor experience, while location-based intelligence helps curators understand how people experience their exhibits and organize their collections in a way that better connects with visitors’ needs.
Among emerging technologies, Virtual Reality (VR) offers new opportunities for creating immersive and interactive experiences in the cultural heritage scenario. Generally speaking, the applications in which VR has been involved are numerous: from education [1] to pure entertainment, but also in marketing, as well as human–computer and human–robot interaction [2], psychology and medicine [3,4]. In the last few years, there has been a considerable increase in the use of VR in museum environments, and some of the most important museums in the world have already embraced technological innovations and adapted to the challenges of the digital era to offer different and immersive experiences to visitors. Indeed, museums aim to bring collections to life, and VR is an excellent tool to achieve this aim. Take, for example, the Victoria and Albert Museum in London [5], which, in the summer of 2021, opened ‘Curious Alice’, an exhibition exploring the origins, adaptations, and reinventions of Lewis Carroll’s classic, or the Louvre Museum in Paris, which, in October 2019, launched ‘Mona Lisa: Beyond the Glass’, a VR experience that explores the Renaissance painting as part of its Leonardo da Vinci blockbuster exhibition.
Especially after the COVID-19 pandemic, the advantages of VR for enhancing museum exhibits became evident: creating virtual tours, making exhibits interactive, putting objects in context, and showing their real scale. However, while this immersive technology has some unique traits, such as the ability to create a first-person perspective and the sense of presence, immersion, agency and embodiment [2], it also has some drawbacks. If, on the one hand, VR has been employed in various industries (e.g., aerospace, automotive, and biomechanics) to reduce cost and time by replacing physical mock-ups with virtual ones [6,7,8], on the other hand, the adoption of more advanced VR tools, such as Head-Mounted Displays (HMDs) and the Cave Automatic Virtual Environment (CAVE), can be very expensive. Current commercial VR HMDs span from low-cost (e.g., smaller brands, usually running off smartphones) and average-cost (e.g., Oculus Quest and HP Reverb G2) to high-priced (e.g., HTC Vive Pro 2 and Valve Index), with different effects on the user’s degree of immersion [9]. Nevertheless, even on a small scale, VR is an innovative and versatile tool that museums can adopt to increase users’ engagement and improve visitors’ experience, making information more accessible and enjoyable and providing new possibilities to explore content through the process of “learning by doing” [10].
VR has also been acknowledged for its potential to elicit and study emotions, which are essential ingredients of memorable experiences [11]. In general, VR has been adopted for this purpose in various contexts: for therapeutic uses to induce relaxation and feelings of emotional well-being [12], to treat phobias and post-traumatic stress [13], or to stimulate mood changes [14]. Indeed, thanks to its completeness and flexibility, VR is an ideal environment where semantic and sensory elements, dynamism, and interaction can contribute to arousing a specific emotion [2,15]. Immersion, quantified by the sense of presence, i.e., the subjective experience of being in one place while physically located in another, is linked to the emotional response [16], especially regarding arousal [17].
In this sense, one of the most promising technologies for studying users’ emotions and perceptions from a neural perspective during real and immersive VR experiences is electroencephalography (EEG) [18]. EEG records the electrical signals generated by the brain, providing a direct window into the neural processes associated with perceptions, emotions, and attention. Recent technological advances, including portable and wireless devices, have enabled its integration with diverse settings such as museums and galleries, allowing researchers to explore visitors’ affective states during artwork exhibitions [19,20]. EEG measurements have an excellent time resolution, typically less than a second, and wearable devices allow for continuous recording while people participate in real or virtual experiences. Still, EEG is susceptible to motion artifacts and, thus, requires controlled experimental settings; in this sense, besides EEG applications in real-world environments, VR allows researchers to explore different scenarios and conditions and obtain reliable EEG results while ensuring ecological validity [11]. In addition, studies have shown that EEG-based metrics can provide objective measurements regarding emotion interpretation, such as engagement, valence, and arousal, which are useful for understanding human behavior [21,22].
The effectiveness and applicability of EEG analysis can benefit from the adoption of machine learning (ML) techniques for processing and classification purposes. In particular, the capability of ML to uncover patterns in data makes this approach suitable for the study of brain activity, both in terms of emotions and affective indicators. Support Vector Machines (SVM) [23,24,25,26,27], K-Nearest Neighbor [28], Naïve Bayes [29], Linear Discriminant Analysis (LDA) [30], Quadratic Discriminant Analysis (QDA) [28], and Decision Tree (DT) [31] have been commonly used. Moreover, solutions based on ensemble learning, such as Random Forest (RF) [31], Bagged Tree (BT) [32], AdaBoost [33] and Extreme Gradient Boosting [34], have been proposed in the same context for the classification of processed EEG data. SVM is one of the most common data-driven approaches and is considered a valuable alternative to statistical analysis in affective studies thanks to its capability of handling multi-dimensional data on the basis of multi-variate patterns [35,36]. One drawback is that SVM is prone to overfitting, leading to unsatisfactory results when dealing with small, noisy, and complex datasets [37,38]. From this viewpoint, bagging and boosting ensemble learning offer valid alternatives [32,33,34]. RF has been used to construct predictive models of mental states such as meditation and concentration, with classification accuracies around 75% [39], exceeding 90% when deep learning was used for feature extraction [40]. AdaBoost was successfully applied to classify human emotions in the 2D valence–arousal space, reaching 97% accuracy [42] on the DEAP dataset [41] and exceeding 88% accuracy for binary classification in the dominance dimension on the same dataset [33]. On the same dataset, BT showed high performance, with over 97% accuracy in the 2D valence–arousal space [32]. From the analysis of the literature, it emerged that XGBoost is less commonly used in emotion recognition, even though noteworthy results were obtained for EEG analysis of the DEAP dataset [34], achieving almost 95% accuracy [43]. For complex data such as EEG, Neural Networks (NNs) represent a valid option, with both shallow and deep architectures [44]. The recent literature shows that, rather than serving as an alternative to traditional ML approaches, NNs obtain remarkable results when the two approaches are combined, particularly NNs for feature extraction and ML for the classification stage [45,46]. On the other hand, NNs require a substantial amount of data, which, combined with the complex nature of EEG signals, can lead to high computational costs.
In this work, we explore the adoption of EEG-based metrics of arousal, valence and engagement to quantify visitors’ emotional state while they interact with five physical (experience 1) and virtual (experience 2) handicraft objects belonging to the collection of the Museo dell’Artigianato Valdostano di Tradizione (MAV). The MAV, located in Fénis (Italy), is a museum dedicated to the traditional craftsmanship of the Valle d’Aosta region. While the first experimental session (comprising 33 participants) was held physically at the museum, the second (with 62 participants) was a virtual tour in a VR environment designed ad hoc as a complement to the traditional museum visit. In both experiments, EEG was used as the emotional monitoring technology; the EEG-based engagement indicator was computed relying on previous literature, labeled according to the administered questionnaire, and classified adopting an XGBoost classifier.
The research conducted in this work was possible thanks to the MEDIA (Museo Emozionale DIgitale multimediale Avanzato) project, which was funded by the Valle d’Aosta region.
The paper is structured as follows: Section 2 explains the methodology and experimental setup, providing a description of the experiment and the selected handicraft objects (Section 2.1), of the VR environment designed for the VR experience (Section 2.2), of the participants taking part (Section 2.3), of the questionnaire for collecting emotional feedback (Section 2.4), of the EEG affective monitoring (Section 2.5), and of the machine learning XGBoost classifier (Section 2.6). Section 3 presents the results divided into questionnaire replies (Section 3.1), EEG emotional assessment (Section 3.2), measures of variability of power (Section 3.3), and XGBoost classification (Section 3.4). The paper ends with discussion (Section 4) and conclusions (Section 5), where future works are proposed.

2. Materials and Methods

The MAV, located in Fénis (Italy), is a museum dedicated to the traditional craftsmanship of the Valle d’Aosta region, carrying a wealth of symbols, knowledge, identity values, and extraordinary creative processes of which the objects are the custodians. About 800 objects are exhibited inside the MAV, including everyday artifacts and sculptures, which witness the evolution of the local artisan tradition. The collection of this museum, particularly referring to five specific representative handcrafted objects, has been chosen as a base for carrying out the experiment described in this section.

2.1. Description of the Experiment

The experiment was divided into two experiences united by content and structure but differentiated by the technology used. With the help of the museum curators, five objects of the museum’s collection, shown in Figure 1, were carefully chosen based on their state of preservation, suitability for successful scanning, and the presence of engaging anecdotes that facilitate contextualization. In particular, the following apply.
  • The cockerel: born as a game for children, it was made from a forked tree branch that outlined the body on one side and the tail on the other. Once color was applied to the artifact, this evolved version received widespread public approval, and the cockerel became the symbol of craftsmanship in the Valle d’Aosta region;
  • The crib: handed down from generation to generation, the cradle was given as a gift by the godparents to the unborn child and used on the day of baptism. It was decorated with geometric carvings such as rosettes and religious elements, offering indications about its provenance;
  • The butter press: the various types of butter presses tell how a simple object can differentiate itself between the side valleys and within the municipalities of a small region, undergoing modifications in both its symbolic representation (whether natural or heraldic) and construction techniques (including elements affixed with nails rather than carved);
  • The goat collar: it consists of a sheet of wood folded by immersion in boiling water to create the appropriate curvature to be worn by goats. Once folded, the sheet was tied at the ends and left to dry. Skilled artisans then moved on to the intaglio decoration, incorporating colored elements;
  • The ‘tatà’: it is a carved wooden horse with wheels. Its name derives from a childish onomatopoeic expression indicating the noise the wheels make when dragged along the ground.
The experimental setup is shown in Figure 1 and is composed of two experiences, which are explained here below. While taking part in the experiment, participants’ brain activity was measured using a 14-channel EEG headset: the Emotiv EPOC X.
Real experience (EXP1). A designated space with a table and seat was set up at the museum to let visitors visualize the handicraft works and interact with them physically (Figure 2a).
VR experience (EXP2). Users were invited to navigate a virtual environment developed ad hoc, in which the objects were inserted and contextualized, with the possibility of interacting with them. Participants used an Oculus Quest, a VR headset developed by Oculus that can run games and software wirelessly under an Android-based operating system. The adoption of this HMD device mounted on the head does not preclude brain activity monitoring through the EEG headset [47] (Figure 2b).
For both experiences, the objects were randomly presented to the participants at two different levels of interaction, which are called ‘phases’ and explained here below.
Visualization and contextualization phase (VC). The selected objects, one at a time, are shown for 30 s and then contextualized, i.e., explained in terms of history and significance, to the participant for another 30 s. The description was provided using a combination of written text and recorded explanations for both experiences (Figure 3a).
Interaction phase (INT). After the first phase, participants could interact with the physical objects by touching them (real experience) or with the virtual twins (VR experience) via VR controllers, which ensure natural interaction in a virtual environment (Figure 3b).

2.2. VR Environment

The virtual environment developed for the experimentation is meant to represent a traditional village in the Valle d’Aosta region, where the selected objects are inserted and contextualized in four distinct areas. The original handicraft objects have been scanned with a structured-light 3D scanner, a device for measuring three-dimensional shape using projected light patterns and a camera system. The patterns projected by the light source in the scanner head become distorted when they strike the object’s surface, enabling 3D data acquisition, while the camera enables texture acquisition. The VR environment is available in three languages: Italian, English and French.
Unity (Unity 2021.3.15), i.e., the cross-platform game engine developed by Unity Technologies, has been used to design and develop the VR environment. Starting with adding a Terrain GameObject to the Scene, textures, trees, and details like grass, flowers, and rocks have been created to obtain the desired landscape features (Figure 4). Then, the four macro-areas intended to host the objects have been defined. Following the indications provided by the museum curators, a carpenter’s workshop (Figure 5a), a log house (Figure 5b), a stable with cows (Figure 5c), and a farmhouse with goats (Figure 5d) have been designed to contextualize the cockerel, the crib and the tatà, the butter press, and the goat collar, respectively. In addition, some sounds downloaded from Mixkit, a free gallery of stock video clips with no watermark, music tracks, sound effects and video templates, have been inserted into the virtual environment that recalled the object or its manufacture. In particular, a hand saw tool sound has been selected for the carpenter’s workshop, a lullaby has been chosen for the crib, while for the tatà, the sound of wooden wheels on the floor has been played. Finally, for the two areas with animals, cows and goats, it was decided to reproduce their sounds, together with the noise of pouring milk.
Unity supports the C# programming language natively, and some scripts have been created and attached to GameObjects to control their behavior, such as triggering game events, modifying component properties over time and responding to user input in different ways. Indeed, some scripts have been used mainly to define the 30 s phases, i.e., visualization and contextualization, and interaction, for each object, to activate/deactivate canvas, sounds, lights, and cursor lock, and to print and save timestamps in order to synchronize the experience with the acquired EEG data.
Android, the target platform for Oculus Quest, has been set, all necessary additional settings have been configured, and the Oculus Touch Controllers have been arranged and connected to allow interaction during the INT phase. After all these steps, a build has been generated and run on the headset, which is connected to the computer over a USB-C cable.

2.3. Participants

The study involved a total of 95 participants between the ages of 19 and 72, mostly Italian, who voluntarily took part in the experiment. In particular:
  • EXP1 involved 33 participants (21 women and 12 men);
  • EXP2 involved 62 participants (31 women and 31 men).
In order to maintain each interaction with the objects novel and avoid biases from participants interacting with the same objects twice, we randomly split the participants into two groups (EXP1 and EXP2); this consideration also allowed us to offer the participants a comfortable experience in a reasonable time. Before starting the experiment, each person was asked to read an informative document and sign an informed consent to receive enough information about the data usage and the research scope in accordance with the principles of the Helsinki Declaration.

2.4. Questionnaire

Based on the subdivision of the experience into the two different interaction levels with the selected handicrafts, a questionnaire comprising three questions was developed to collect information about the emotions felt and the degree of involvement of the participants.
In order to evaluate the overall user engagement, some studies drawn from the literature on the representation of emotional dimensions have been considered. In particular, Russell developed the Circumplex Model of Affect [48], which is an emotional space wherein emotions are expressed through two independent dimensions: valence and arousal. The first dimension, valence, represents the positivity or negativity of emotions and varies between the extremes of unpleasantness and pleasantness [49]. The second dimension, arousal, concerns the degree of emotional arousal and ranges from deactivation to activation [50]. Another critical indicator in interpreting emotions is engagement, i.e., the level of attention and involvement during an activity [51,52]. These three indicators, i.e., valence, arousal and engagement, were evaluated at the end of the interaction with each object for the two phases and for the two experiences, using a 3-point scale. The questionnaire is summarized in Table 1.

2.5. EEG Affective Monitoring

Measuring visitors’ perceptions and engagement is crucial for museums seeking to enhance their exhibitions and create appealing experiences. Traditional behavioral and affective analysis methods often rely exclusively on self-reporting, which can be subject to limitations in expressing the complete emotional panorama. To address these challenges, physiological measures have emerged as a promising complement to objectively assess participants’ global experience and affective responses [52]. The use of EEG in the museum context provides an objective measure of the visitors’ experience and a way to characterize their emotions, particularly in terms of valence, arousal, and engagement. This provides valuable insights regarding the emotional influence of the artworks, in this case, handicrafts, which helps assess visitors’ perceptions with the final goal of optimizing their experience.
In this study, the EEG monitoring was carried out using a wireless headset consisting of 14 saline electrodes placed in accordance with the International 10–20 System [53]. The sensors are located at AF3, AF4, F3, F4, F7, F8, FC5, FC6, P7, P8, T7, T8, O1, and O2, with two additional CMS/DRL reference channels at P3 and P4. Different studies have highlighted the usefulness of EEG-based indicators in evaluating and quantifying valence, arousal, and engagement, considering the association between emotions and brain frequencies [18,21,54,55,56]. The human brain’s theta (θ), alpha (α), beta (β), and gamma (γ) frequencies are influenced by several cognitive processes, including emotions. Theta is associated with relaxation, learning, and memory formation [57,58,59]; alpha with calmness and creative processes [57,60]; beta with alertness and concentration [22,61]; and gamma with attention and information integration [62,63].
Cognitive engagement is defined as the level of psychological investment that a subject makes while performing a task and the willingness to accomplish it [22,64]. Considering that an increase in beta power is related to increased cognitive activity during a stimulus, and that an increase in alpha power is related to lower vigilance, the following EEG-based engagement index, derived from the EEG literature on attention, has been widely reviewed and implemented in emotion monitoring studies [65,66,67,68,69]:
$$\mathrm{Engagement} = \frac{\beta}{\alpha} \tag{1}$$
Based on comparable research [22,70], the engagement will be assessed by aggregating the measurements from all the electrodes.
Regarding valence, an indicator of the pleasantness or unpleasantness of the perceived emotion, researchers have found that the most analyzed EEG electrodes for its definition are those at positions F3 and F4 (frontal lobe). Furthermore, frontal asymmetry has been studied as an expression of emotional states by comparing the natural logarithm of the left-hemisphere alpha power (αF3) with that of the right hemisphere (αF4); the magnitude of this difference indicates the positivity of the emotion, with higher values corresponding to more positive emotions [18,54,71]:
$$\mathrm{Valence} = \alpha_{F4} - \alpha_{F3} \tag{2}$$
On the other hand, arousal represents the intensity of emotion and how reactive or proactive someone is to a stimulus. Since relaxation is associated with alpha waves, while alertness is linked to beta waves, researchers have found that the ratio of beta and alpha measured in the frontal lobe (F3 and F4) can be an expression of a person’s arousal [18,72,73]:
$$\mathrm{Arousal} = \frac{\beta_{F3} + \beta_{F4}}{\alpha_{F3} + \alpha_{F4}} \tag{3}$$
Due to the individual-dependent nature of physiological studies, there are no fixed maximum or minimum values for valence, arousal, and engagement. To address the variability caused by individual differences, this research utilized a within-subject experimental design commonly employed in similar studies [22,70,74]. The electrode placement was carried out by trained personnel experienced in EEG data collection and familiar with the 10–20 system [53]. We adjusted the headset and electrodes comfortably for each participant, minimizing discomfort during the experiment. Furthermore, we monitored the EEG signals in real time throughout the data acquisition to ensure that the electrode contacts remained stable; if any issues were detected, we made immediate adjustments to maintain signal quality. For each participant, the EEG data were separated into the baseline, recorded during an eyes-closed condition, and the brain activity during the stimuli, in our case, the visualization and interaction phases. By analyzing the difference in brain activity between the baseline and the activated response during the stimuli, it is possible to identify variations in the emotional indicators.
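For illustration, the following is a minimal sketch of how Equations (1)–(3) and the within-subject baseline correction could be computed; the band-power dictionary keys and function names are illustrative assumptions, not part of the study’s actual pipeline.

```python
def affective_indicators(bp):
    """Compute the EEG-based indicators of Equations (1)-(3).

    `bp` maps band names to powers: 'alpha' and 'beta' are aggregated over
    all 14 electrodes; 'alpha_f3', 'alpha_f4', 'beta_f3', and 'beta_f4'
    refer to the frontal electrodes F3 and F4 (keys are illustrative).
    """
    engagement = bp["beta"] / bp["alpha"]                  # Equation (1)
    valence = bp["alpha_f4"] - bp["alpha_f3"]              # Equation (2)
    arousal = (bp["beta_f3"] + bp["beta_f4"]) / \
              (bp["alpha_f3"] + bp["alpha_f4"])            # Equation (3)
    return {"valence": valence, "arousal": arousal, "engagement": engagement}

def baseline_corrected(stimulus_bp, baseline_bp):
    """Within-subject design: subtract the eyes-closed baseline indicators
    from those computed during each stimulus phase (VC or INT)."""
    stim = affective_indicators(stimulus_bp)
    base = affective_indicators(baseline_bp)
    return {k: stim[k] - base[k] for k in stim}
```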

2.6. XGBoost for Engagement Classification

In consideration of the type of stimuli concerned in this study, a classification was performed on the engagement indicator. In fact, among the affective indicators selected, and considering the two types of experiences the participants underwent, engagement is the most suitable for evaluating differences between the real and VR experiences, given its relationship with immersion [75]. Moreover, craftsmanship implies the involvement of the craftsman with the material on both a physical and a mental level [76], an involvement that seems reasonable to look for in the users as well. To this aim, the EEG-based engagement feature vectors computed for each participant were labeled according to the type of experience, adopting the questionnaire replies as ‘ground truth’. An ML rather than a statistical approach was preferred in order to empirically verify the existence of a difference in the emotional responses to the two experiences. ML does not require creating a mathematical model, as the model is created by the algorithm based on the data of the training set [77]. Indeed, for this type of experiment, conducted with no pre-specified interactions, it seemed reasonable to test the possibility of predicting membership in the first or second experience, relying solely on the engagement feature vector, rather than finding relationships and interactions between variables.
Among the different ML approaches to classification, Gradient Boosting was adopted. It is a widespread supervised approach in multiclass classification tasks. As one of the implementations of ensemble learning, it involves the predictions of more than one model to obtain the final output [78]. Gradient Boosted Decision Trees (GBDT) sequentially add Decision Tree classifiers aimed at correcting the predictions made by the previous classifiers, and output a weighted average of the predictions. To correct the previous predictions, at each training step, correctly classified observations are given a lower weight than misclassified ones. In this approach, the Decision Trees act as weak classifiers; their predictions are combined by voting or averaging, and the final output weights the contribution of each model based on its performance. For each tree, nodes are added to optimize a non-linear objective, in this case, the squared error.
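As a conceptual sketch of this sequential scheme (a toy regression variant with the squared-error objective, not the implementation used in this study), each shallow tree is fit to the residuals, i.e., the negative gradient, left by its predecessors:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gbdt_fit(X, y, n_trees=50, lr=0.1):
    """Toy GBDT: each tree corrects the residuals of its predecessors."""
    pred = np.full(len(y), y.mean())   # initial constant prediction
    trees = []
    for _ in range(n_trees):
        residuals = y - pred           # negative gradient of squared error
        tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
        pred += lr * tree.predict(X)   # shrunken (weighted) contribution
        trees.append(tree)
    return trees, y.mean()

def gbdt_predict(X, trees, base, lr=0.1):
    return base + lr * sum(tree.predict(X) for tree in trees)
```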
Specifically, the Extreme Gradient Boosting (XGBoost, xgboost Python package version 1.7.4) implementation of GBDT [79] was chosen for the present analysis. This choice was made considering that XGBoost offers some extensions to GBDT, such as sparsity awareness, which allows missing values in the data to be handled without prior imputation [79,80]. It is thus particularly suitable for EEG analysis, as some data may be missing after the artifact removal process. The Hessian is used to manage the non-linearity of the objective, since the second-order derivative allows a more precise approximation of the direction of the maximum decrease in the loss function.
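A minimal sketch of this classification stage follows; the file names, feature shapes, split seed, and hyperparameters are illustrative assumptions, not the study’s actual configuration.

```python
import numpy as np
import xgboost as xgb
from sklearn.metrics import confusion_matrix, f1_score
from sklearn.model_selection import train_test_split

# Hypothetical inputs: one engagement feature vector per participant and a
# binary label derived from the questionnaire (0 = EXP1, 1 = EXP2).
X = np.load("engagement_features.npy")   # shape: (n_participants, n_features)
y = np.load("experience_labels.npy")     # shape: (n_participants,)

# Hold out 30% of the dataset for testing, as in Section 3.4.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Sparsity awareness lets XGBoost handle NaN entries (e.g., samples lost
# during artifact removal) without prior imputation.
clf = xgb.XGBClassifier(objective="binary:logistic", n_estimators=200, max_depth=3)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print("per-class f1:", f1_score(y_test, y_pred, average=None))
print(confusion_matrix(y_test, y_pred))
```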

3. Results

The results are described in this section by grouping them into questionnaire replies, EEG-based emotional assessment via indicators and the classifier.

3.1. Questionnaire

The answers to the questionnaires were collected, grouping them by the following:
  • Experience, to analyze differences between the real experience and the VR experience;
  • Phase, to analyze differences between the degrees of involvement with the objects.
Figure 6 shows the responses to the questionnaires divided by experience (Figure 6a) and by phase (Figure 6b). Negative (negative, calm, and not engaged), neutral, and positive (positive, excited, and engaged) feedback is represented in orange, gray, and green, respectively, with the relative percentages, while the median is represented with a yellow line. The mode, i.e., the most frequent response, is always the positive class, except for the arousal indicator in the visualization and contextualization phase, where it is the neutral state. Comparing the responses to the questionnaire grouped by phase, the three indicators show an increasing trend passing from the visualization and contextualization phase to the interaction phase, particularly for arousal. Indeed, ranging from deactivation to activation, arousal is a good indicator for analyzing the difference between phases. The highest percentage of positive responses was registered for the engagement indicator in the interaction phase, with 71% positive and 21% neutral answers, meaning that participants declared being strongly involved while interacting with the real objects and the virtual twins. The valence indicator received fewer negative responses (≤3%) than arousal and engagement. Overall, on a displeasing–pleasant scale, participants did not report negative emotions during the experience.
As a further analysis, the main effects plots (MEPs) for each object, considered individually and collectively, have been reported in Figure 7 for valence, arousal, and engagement, for qualitatively comparing EEG and questionnaire data. When the line is not horizontal, there is a main effect, and the steeper the slope of the line connecting the response means for each factor, the greater the magnitude of the main effect. In this case, the graph highlights that the two phases affect the responses differently, with an increasing trend from the visualization and contextualization phase to the interaction phase.

3.2. EEG Emotional Assessment

After completing the experiences, the EEG data were processed (detailed preprocessing procedures are provided in Appendix A), and detected artifacts were removed. The EEG activity was recorded at a sampling rate of 128 Hz, with a bandwidth between 0.2 and 45 Hz. To isolate the frequency bands, Fast Fourier transforms (FFTs) were calculated for the following spectral ranges: θ (4–8 Hz), α (8–12 Hz), β (12–25 Hz), and γ (25–45 Hz) for each phase of the experiment. Before performing the FFT, a high-pass FIR filter with a cutoff frequency of 0.18 Hz was applied. Then, a Hanning window was applied to the EEG stream to improve the accuracy of the spectral analysis and reduce artifacts. In alignment with comparable studies in the field [18,22], we used a Hanning window of 256 samples and slid this window by 16 samples to create a new window for each analysis epoch. We then applied an FFT to the most recent 2 s epoch of EEG data and averaged the FFT-squared magnitude across the frequencies in each band. Also, given the relatively low number of electrodes (14), we opted for a manual artifact inspection and removal process. We visually inspected the EEG data to identify any remaining artifacts; major artifacts, such as ocular (EOG) and muscular (EMG) activity, were excluded from the analysis, and in cases where significant artifacts were present, we decided to exclude the entire subject’s dataset to avoid potential biases and ensure the reliability of the results. As a result, due to excessive noise or incomplete data, three subjects were excluded from the real experience (EXP1) and 17 from the VR experience (EXP2), leaving a total of 30 (EXP1) and 45 (EXP2) artifact-free datasets for consideration. The raw EEG signals were recorded in two distinct phases: the first phase consisted of visualization and contextualization (VC), each lasting 30 s, and the second phase involved interaction (INT), also lasting 30 s. Upon completion of the preprocessing steps, a varying proportion of the original signal was excluded, ranging from approximately 5% to 30% of the total recording. The percentage of removed signal was influenced by multiple factors, such as the type and extent of artifacts and the applied filters; this corresponds to an average effective duration of the processed signals between 21 and 29 s.
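A minimal sketch of the band-power extraction just described, for a single channel sampled at 128 Hz (the function and constant names are illustrative):

```python
import numpy as np

FS = 128    # sampling rate (Hz)
WIN = 256   # Hanning window length: a 2 s epoch at 128 Hz
STEP = 16   # slide of 16 samples per analysis epoch
BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 25), "gamma": (25, 45)}

def band_powers(channel):
    """Average FFT-squared magnitude per band over sliding Hanning windows.

    `channel` is a 1D array with one EEG channel, already high-pass filtered.
    """
    window = np.hanning(WIN)
    freqs = np.fft.rfftfreq(WIN, d=1.0 / FS)
    acc = {name: [] for name in BANDS}
    for start in range(0, len(channel) - WIN + 1, STEP):
        epoch = channel[start:start + WIN] * window
        mag2 = np.abs(np.fft.rfft(epoch)) ** 2       # FFT-squared magnitude
        for name, (lo, hi) in BANDS.items():
            in_band = (freqs >= lo) & (freqs < hi)
            acc[name].append(mag2[in_band].mean())
    return {name: float(np.mean(vals)) for name, vals in acc.items()}
```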
Considering the conditions of the experiment and that the participants were actual visitors who voluntarily took part in the experience, for the data analysis, we decided to maintain the groups’ heterogeneity without discriminating by age or gender, thereby achieving results consistent with the museum’s real context.
Afterwards, for both the experiences, valence, arousal, and engagement indicators were calculated for each participant during each phase with the handicrafts, and the baseline data were subtracted from the post-stimulus response to assess the impact of the experiences and phases on the emotional indicators. Equations (1)–(3) were employed for emotional metrics assessment.
The overall activation responses in terms of band power, for each experience and phase, are presented in Figure 8.
Concerning the phases of the experience, during the first phase of passive visualization and contextualization (VC), participants were in a receptive state while observing the handicrafts. In fact, the results show that participants’ theta activity increased compared to the baseline; in particular, during the VR experience, participants exhibited higher theta power when passively visualizing the handicrafts without any interaction.
Moreover, during this phase (VC), participants listened to explanations and contextual information about the handicrafts, enhancing their understanding and appreciation. Indeed, as the participants shifted their attention and became involved in active listening and comprehension during the contextualization, the demand for focused attention resulted in lower alpha power compared to the baseline. This decrease in alpha activity indicates a shift from a relaxed state to a more attentive one [69,81].
Likewise, during the interaction phase (INT), participants had the opportunity to interact and manipulate the handicrafts, promoting a sense of agency and involvement. Beta and gamma frequencies are associated with cognitive engagement and active mental processing; the results show that participants presented a higher beta and gamma activity than the baseline during the interaction phase for both real and VR experiences.
Regarding the experiences, the real experience (EXP1) presented a physical and tangible interaction involving multi-sensory activities. When passively visualizing the real handicrafts, participants exhibited higher theta power, indicating a relaxed and reflective state. Also, higher theta power may be correlated with sustained attention and processing of the aesthetic characteristics of the handicrafts, especially during the interaction phase of the real experience [82]. Likewise, increased alpha power indicated focused attention and comprehension of the information provided. Similarly, beta and gamma increased during the interaction phase, reflecting engagement, attention, and active mental processing while manipulating the handicrafts. The real experience (EXP1) showed a steady increment of the frequency bands power during the development of the experience, reaching the highest value during the interaction phase. In particular, the increase in theta power during the interaction phase suggests engagement and cognitive demand during the handicrafts manipulation. This could be due to the increased mental effort required to explore and interact with the physical objects, leading to enhanced theta activity [83,84].
VR offered an immersive experience (EXP2) and a sense of presence inside the virtual environment [85,86]. During the first VR phase (VC), participants showed a slightly higher beta and gamma activity than the real experience; this could be due to the first impact with the VR, which increased cognitive engagement. Afterwards, beta and gamma remained steady.
Additionally, the differing trend in alpha power between the real and VR experiences may be due to the cognitive load experienced in each setting. The real experience required participants to actively manipulate, touch, and feel the handicrafts and to listen to live contextualization. These cognitive demands can contribute to increased alpha and beta power as the brain engages in information processing and task control. In VR experiences, the cognitive load might be slightly reduced, as the virtual environment is more controlled and guided.
Overall, the impact of the first phase (VC) was higher during the VR experience, particularly for alpha, beta, and gamma, while for the second phase (INT), the results were more evident during the real experience, especially for theta and beta. Regarding the orders of magnitude, the VR behavior showed patterns similar to those observed during the real experience. This can be attributed to the cognitive engagement, attention, and active mental processing associated with the immersive nature of the HMD, as well as to the sense of presence and embodiment, which may lead to enhanced brainwave patterns.
Regarding the EEG-based emotional indicators and their relation with the questionnaire answers (Figure 7), the phase analysis revealed a consistent pattern in participant preferences and EEG outcomes.
Examining the questionnaires and comparing the two phases of the experiments, participants’ responses, as well as the EEG emotional indicators, consistently showed a preference for the interaction phase for all the handicrafts. While the visualization phase allowed participants to appreciate the handicrafts and gain knowledge about them, their responses indicated that it might have been a less engaging experience. Conversely, the interaction phase appeared to significantly enhance participants’ perceptions and engagement. The EEG data further corroborated this trend, revealing lower values for valence, arousal, and engagement during the visualization phase and higher values during the interaction. The alignment between self-report preferences and EEG outcomes suggests that active involvement and manipulation played an important role in elevating the overall emotional experience. The act of interacting with the handicrafts allowed the participants to create a deeper connection, developing a greater sense of enjoyment and emotional engagement.
In relation to the real and VR experiences, the EEG-based emotional indicators results revealed distinct patterns in the participants’ cognitive responses (Figure 9).
Notably, the VR experience (EXP2) produced the highest cognitive outcomes, eliciting higher levels of arousal, valence, and engagement. This finding highlights the immersive nature of the HMD VR, which successfully captured participants’ cognitive attention and generated positive emotional responses.
In terms of the real experience (EXP1), it came in second place but always showed positive results with respect to the baseline, indicating that participants generally experienced positive emotions and were engaged.
Considering that cognitive engagement reflects the focused mental effort invested in understanding complex ideas and tasks [64], it is important to note that the cognitive engagement during the VR experience could also respond to the novelty of VR compared to the real visit. Participants might have been less familiar with this technology, making it a fresher and more intriguing experience.
Overall, the results indicate that the VR experience provided the most evident cognitive outcomes. These findings highlight the potential of HMD VR as a promising tool for providing immersive and emotional experiences. On the other hand, the real museum setting is able to offer a unique in-person experience that holds the importance and significance of the physical interaction with the handicrafts; the sensations produced by the physical senses, such as visual stimulation, olfactory, and tactile stimuli, are elements that can only be fully experienced in the real context. By taking advantage of the strengths of both worlds and combining the interactive capabilities of VR, museums can enhance their traditional visits, offering additional virtual content that enriches the understanding and emotional engagement with the artworks.

3.3. Measures of Variability of Power

To illustrate the variability of power across subjects, we calculated, for the Tatà, one of the most representative handicrafts, the mean, standard deviation, variance, and coefficient of variation (CV), defined as the ratio of the standard deviation to the mean and expressed as a percentage, for each power band across participants, for both Phase 1 and Phase 2 and for both the real museum experience (EXP1) and the VR experience (EXP2).
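For reference, a minimal sketch of how these measures can be obtained per band, phase, and experience (the use of the sample standard deviation is an assumption):

```python
import numpy as np

def variability_measures(values):
    """Across-participant variability for one band, phase, and experience.

    `values`: 1D array with one band-power value per participant.
    """
    mean = values.mean()
    std = values.std(ddof=1)      # sample standard deviation (assumption)
    cv = 100.0 * std / mean       # coefficient of variation, in percent
    return {"mean": mean, "std": std, "variance": std ** 2, "cv_percent": cv}
```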
Table 2 and Table 3 show the measures of variability for the Museum Experience regarding the Tatà handicraft for Phase 1 (VC: Visualization and Contextualization) and Phase 2 (INT: Interaction).
In Figure 10, there are the box plot representations of theta, alpha, beta, and gamma variability measures for the Museum Experience. Plus signs represent outliers based on the Interquartile Range (IQR).
The observed changes across the power bands between the two phases align with the expected increase in cognitive and motor activity during Phase 2, and they support the idea that Phase 2, involving direct interaction with objects, is associated with greater cognitive engagement and complexity. The high CV values in both phases (VC and INT) may indicate a significant variation in how visitors individually respond to the stimuli. This could be interpreted as the intrinsic diversity in visitors’ cognitive engagement and emotional responses to the handicrafts.
Similarly, we calculated the measures of variability for the VR experience (EXP2), for both Phase 1 and Phase 2, for the Tatà; the results are reported in Table 4 and Table 5.
In Figure 11, there are the box plot representations of theta, alpha, beta, and gamma variability measures for the VR experience.
In the VR experience (EXP2) and Phase 1, the relatively high mean values across the power bands may indicate an engaging initial experience in the VR. The CV percentages (especially for theta and gamma) indicate a variation in response, possibly reflecting different levels of engagement with the virtual content. In Phase 2, the slight decrease in theta mean and the increase in the CV may indicate less consistent cognitive engagement during the interaction with the virtual handicrafts. The other bands’ values remain relatively stable, but the increased CV in alpha and the decrease in beta suggest more individual variation in experiences during this manipulation phase.
The results from both environments highlight some essential differences between the physical (EXP1) and virtual (EXP2) experiences with the museum’s handicrafts. The VR experience seems to offer a more consistent and immersive introduction (Phase 1) but shows a different pattern of cognitive engagement during the interaction (Phase 2), with less pronounced theta activity.
Both sets of results offer insights into how visitors respond to these two different modalities, providing a basis for further investigation into how to use virtual and physical spaces for museum contexts.

3.4. XGBoost Classification

A binary classification was conducted to empirically assess the difference between the engagement experienced by participants in EXP1 and that experienced by participants in EXP2. The f1-score metric was adopted to evaluate the classification results; in this way, the imbalance in the number of observations in the two classes was taken into account. In detail, the f1-scores on the test set (30% of the complete dataset) were 0.65 (precision = 0.81, recall = 0.54) for the classification of EXP1 and 0.81 (precision = 0.74, recall = 0.91) for the classification of EXP2. The confusion matrix is reported in Figure 12. A higher prediction accuracy is reported for EXP2; however, for EXP1 as well, the majority of predictions lie on the diagonal of the confusion matrix, and the f1-score is higher than 0.5.
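These values are consistent with the definition of the f1-score as the harmonic mean of precision (P) and recall (R):

$$F_1 = \frac{2PR}{P + R}, \qquad \frac{2 \cdot 0.81 \cdot 0.54}{0.81 + 0.54} \approx 0.65, \qquad \frac{2 \cdot 0.74 \cdot 0.91}{0.74 + 0.91} \approx 0.81$$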

4. Discussion

Our study used an EEG-based engagement index to measure how engaged people are while interacting with a handicraft in a museum context. Such metrics have been used successfully before for distinguishing attentive brain states and determining cognitive workload, helping researchers understand how focused and interested people are in different situations [68,74,87,88]. In the specific naturalistic situation addressed here, this measure indicates how much attention and interest a person devotes to a specific piece of handicraft. Other studies have shown that this approach can even predict which pieces of art people might be most interested in [21,89]. In this regard, these metrics are a good fit for understanding how people experience a museum and connect with the art they see.
The real experience provided a multi-sensory real environment involving visual perception, spatial awareness, and tactile interactions. The initial theta power during the passive visualization of the real handicrafts can be attributed to the sensory stimulation and cognitive load associated with processing the onsite location. The immersive nature of the VR experience, on the other hand, may require less cognitive effort during passive visualization, leading to lower initial theta power. Previous studies have linked theta waves to various cognitive processes, including working memory, mental effort, and the maintenance and updating of information [58,59,90]. The power of theta activity tends to increase with a higher cognitive load, when a task requires sustained attention and selective focus, reflecting the involvement of cognitive resources and information integration. In this sense, considering the design of the proposed experiences, theta tended to increase when the cognitive load increased; this is clearly confirmed in the real experience, where participants needed to remember and process multiple pieces of information. Indeed, both phases of the real setting, i.e., visualization and contextualization (VC) and interaction (INT), were conducted by a guide; in particular, VC took place with the guide personally sharing stories and explanations about the actual objects. By contrast, in the VR context, the VC was conducted using audio recordings without any physical intervention of the guide. This distinction, together with the opportunity to touch the authentic handicrafts, produced a more evident multi-sensory elicitation in the real experience, which is reflected in a stronger activation of theta waves compared to the VR counterpart. On the other hand, the HMD experience produced brainwave patterns similar to those of the real visit. The HMD’s ability to create a sense of being present and part of the virtual environment evoked outcomes similar to those experienced during a physical museum visit. The analysis of the EEG emotional indicators and the questionnaires showed a clear preference among participants for the interaction phase, revealing higher levels of engagement and positive emotional experiences. The classification of engagement showed differences in the patterns of the two experiences. For both classes (EXP1 and EXP2), f1-scores higher than 0.5 were found, showing that the classifier was able to find differences between the engagement feature vectors of participants in the two experiences. The highest f1-score resulted for the VR experience (EXP2), suggesting that EXP2 produced a more distinctive engagement pattern among the participants. These findings suggest that the virtual environment designed ad hoc to show and contextualize the selected handicraft objects is a valuable technology for creating immersive and interactive experiences in the considered cultural heritage scenario. Compared to the real experience, participants in the VR experience were able to observe the four macro-areas hosting the objects, listen to sounds that recall their manufacture or intended use, and move around the mountain village. The head-mounted device displaying the VR environment and the VR controllers allowing interaction with the virtual twins increased the sense of presence, i.e., the feeling of “being there”, boosting cognitive processing and providing a highly emotionally and cognitively stimulating environment.
On the other hand, the analysis of participants’ self-reported assessments revealed positive perceptions toward the real visit. This suggests that while VR may elicit higher cognitive engagement, the tangible and concrete aspects of the physical museum visit hold a unique appeal in creating enjoyable experiences for participants. The empirical results of the classifier support the considerations drawn from the aggregated analysis of the affective indicators, which highlighted significant differences in cognitive attention between the real visit and the VR experience. These results should be considered along with the overall self-report outcomes, where a slight preference for the real experience emerged. From this viewpoint, the results support the suggestion that an integration of the two types of experience can enhance the quality of the museum visit, where the multi-sensory elicitation due to physical presence in the museum space can be complemented by virtual interaction with the objects. In fact, in a real scenario, objects may not be physically available to museum visitors, but a virtual solution based on an HMD could be successfully adopted. Moreover, individual differences can also impact the emotional characterization within each modality. Factors such as personal preferences, prior experience with VR, previous knowledge of traditional handicrafts, and the specific design and presentation of the experiences can all contribute to the emotional responses observed. Background factors may also contribute to the observed differences: for example, participants’ familiarity with VR technology and their prior experiences with these handicrafts could influence their emotional responses. Additionally, specific characteristics of the objects, such as their uses, crafting, traditions, or personal relevance, might impact the cognitive and emotional responses. While the analysis of these aspects was not the primary focus of this research, it presents an exciting opportunity for future investigations in this field.

5. Conclusions

In summary, the findings of this study highlight the significance of active interaction and hands-on engagement with the handicrafts in evoking positive emotional responses and fostering high levels of cognitive engagement. The initial impact experienced during the visualization phase, coupled with the contextual understanding, contributes to the participants’ overall emotional journey and enhances the interaction phase. By integrating these findings into the design of museum experiences, curators and designers can create immersive and engaging environments that leverage the power of interaction, contextualization, and emotional connection to enhance visitors’ overall experience and appreciation of the handicrafts. These findings have exciting implications for future developments in virtual museum experiences, in creating engaging virtual experiences that closely reflect the real setting and provide visitors with immersive and authentic encounters. They also highlight the potential of HMD-based VR as a valuable tool for enhancing museum accessibility and reaching broader audiences. Additional research avenues in the field of museum experiences could employ advanced immersive VR technologies, such as the CAVE system, to deepen the museum’s affective exploration. Additionally, extending our research to museums with distinct themes or cultural backgrounds could enrich the understanding of how different settings and styles may influence visitors’ engagement and emotional responses. Furthermore, integrating a multi-modal approach with additional physiological metrics, such as Galvanic Skin Conductance (GSC), heart rate (HR), and facial expressions (FER), could provide a more comprehensive assessment of visitors’ engagement, perceptions, and overall satisfaction.

Author Contributions

Conceptualization, F.M. and M.G.M.; data curation, I.A.C.J., F.M. and E.C.O.; formal analysis, I.A.C.J., F.N., L.U. and E.C.O.; funding acquisition, M.G.M., F.M. and E.V.; investigation, I.A.C.J., F.N., E.C.O., S.M., L.U. and F.M.; methodology, I.A.C.J., S.M. and L.U.; supervision, F.M., E.V. and M.G.M.; validation, E.C.O., S.M. and L.U.; visualization, I.A.C.J., F.N., S.M., L.U., F.M. and E.V.; writing—original draft, I.A.C.J., F.N. and E.C.O.; writing—review and editing, L.U., M.G.M. and F.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was possible thanks to the MEDIA (Museo Emozionale DIgitale multimediale Avanzato) project, funded by the Valle d’Aosta region.

Institutional Review Board Statement

All participants provided written informed consent prior to enrollment in the study, ensuring correctness, transparency, and confidentiality protection in accordance with Art. 13 of European Regulation No. 679/2016 and Legislative Decree 196/2003, as amended by Legislative Decree 101/2018.

Informed Consent Statement

All participants were informed about the experiment and the protection and use of their data. Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data supporting this study are not publicly available because they contain sensitive information that could compromise participants’ privacy.

Acknowledgments

The authors thank the Museo dell’Artigianato Valdostano di Tradizione (MAV), all the project partners for their contributions throughout the execution of this research, and the subjects who voluntarily took part in the experiment.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Detailed Preprocessing Procedures

This appendix is intended to provide a detailed overview of the techniques and software employed in our research for the preprocessing of EEG data.
In this research, in addition to Matlab, we used the EmotivPro software v.3.5.6.488. This software is equipped with substantial filtering capabilities and real-time EEG quality verification, which allow the signal to be validated during the acquisition stage. In this context, our preprocessing phase included specific procedures to isolate the relevant frequency bands, reduce artifacts, and apply suitable windows to enhance the spectral analysis, as follows:
The first step of the procedure (Figure A1) is the EEG data importation. These data are obtained directly from the EmotivPro software, recorded at a sampling rate of 128 Hz. The preprocessing phase then includes several steps to enhance the quality of the EEG data. First, the DC offset is removed by centering the signal around zero, eliminating the constant shift that could bias subsequent analyses. A Finite Impulse Response (FIR) high-pass filter with a 0.18 Hz cutoff is then applied to isolate the bandwidth of interest; this step attenuates slow drifts in the signal, which may be due to temperature changes, respiration, or other low-frequency factors, retaining only the relevant signal components. Additionally, notch filters at 50 Hz and 60 Hz are applied to exclude power line interference. An important part of the preprocessing stage is the application of a Hanning window to the EEG stream, implemented with a 256-sample size and slid by 16 samples to create a new window for each analysis epoch. This technique minimizes spectral leakage and reduces artifacts, enhancing the accuracy of the spectral analysis. A minimal code sketch of this chain is reported after Figure A1.
Figure A1. Data processing flowchart.
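To make the chain above concrete, the following is a minimal Python sketch of the described preprocessing steps. The original pipeline was implemented in Matlab; scipy/numpy are used here only for a self-contained illustration, and the FIR filter order, the notch quality factor, and the function names are our illustrative assumptions, not the exact parameters of the study.

```python
import numpy as np
from scipy.signal import firwin, filtfilt, iirnotch, get_window

FS = 128  # EmotivPro sampling rate (Hz)

def preprocess(eeg, fs=FS):
    """DC-offset removal, 0.18 Hz FIR high-pass, and 50/60 Hz notch filters."""
    x = np.asarray(eeg, dtype=float)
    x = x - x.mean()                                # center the signal around zero (DC offset)
    hp = firwin(513, 0.18, fs=fs, pass_zero=False)  # FIR high-pass; the order is an assumption
    x = filtfilt(hp, [1.0], x)                      # zero-phase filtering of the continuous stream
    for f0 in (50.0, 60.0):                         # exclude power line interference
        b, a = iirnotch(f0, Q=30.0, fs=fs)
        x = filtfilt(b, a, x)
    return x

def hann_epochs(x, size=256, hop=16):
    """Yield Hanning-tapered 256-sample windows, slid by 16 samples (2 s epochs at 128 Hz)."""
    w = get_window("hann", size)
    for start in range(0, len(x) - size + 1, hop):
        yield x[start:start + size] * w
```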
After windowing, the FFT was computed on 2 s epochs of the EEG data, and the squared FFT magnitude was averaged across the frequencies in each band. The FFT was used to compute the Power Spectral Density, allowing for visual inspection of the data and the identification of transient phenomena and artifacts. Major artifacts can be identified and excluded by leveraging filtering, windowing, and visualization; a conservative approach was adopted, and in cases where significant artifacts were identified, the subject's entire dataset was excluded. The EEG data were then segmented into different sections, such as baseline, artwork/handicraft Phase 1, and artwork/handicraft Phase 2, based on the event markers. The goal was to isolate the various experimental conditions and phases for independent analysis: if an artifact is known to occur at a specific time or under specific conditions, this segmentation facilitates the exclusion or separate handling of that segment, and it is essential for comparing different experiment stages and understanding the EEG response to the stimuli. Finally, the post-processing phase included calculating the EEG-based indicators, in particular the EEG-based engagement indicator. This metric was computed, labeled according to the administered questionnaire, and subsequently classified using an Extreme Gradient Boosting (XGBoost) classifier, as illustrated in the sketch below.
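As an illustration of the band-power computation and the classification step, the sketch below averages the squared FFT magnitude over each band for 2 s epochs, derives an engagement value, and fits an XGBoost classifier. The band limits, the beta/(alpha + theta) engagement formula (a classic index from the engagement literature cited above [65,66], not necessarily the exact formula of this study), the synthetic placeholder data, and the classifier hyperparameters are all illustrative assumptions.

```python
import numpy as np
from xgboost import XGBClassifier

FS = 128
BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 30), "gamma": (30, 45)}  # assumed limits

def band_powers(epoch, fs=FS):
    """Average of the squared FFT magnitude over the frequencies of each band."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    power = np.abs(np.fft.rfft(epoch)) ** 2
    return {band: power[(freqs >= lo) & (freqs < hi)].mean()
            for band, (lo, hi) in BANDS.items()}

def engagement(bp):
    """Beta / (alpha + theta): a classic EEG engagement index (assumed formula)."""
    return bp["beta"] / (bp["alpha"] + bp["theta"])

# Placeholder data standing in for marker-segmented 2 s epochs and questionnaire-derived labels
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 2 * FS))   # 40 synthetic 2 s epochs
labels = rng.integers(0, 2, size=40)         # e.g., 0 = EXP1, 1 = EXP2
X = np.array([[engagement(band_powers(e))] for e in epochs])
clf = XGBClassifier(n_estimators=100, max_depth=3).fit(X, labels)
```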
In addition, we include spectral power/variability graphs of both the raw (original) and preprocessed (filtered) signals for four distinct participants under different experimental conditions, specifically:
  • A participant during the real museum experience (EXP1) and the first phase with a handicraft (VC) (Figure A2);
  • A participant during the real museum experience (EXP1) and the second phase with a handicraft (INT) (Figure A3);
  • A participant during the Virtual Reality experience (EXP2) and the first phase with a handicraft (VC) (Figure A4);
  • A participant during the Virtual Reality experience (EXP2) and the second phase with a handicraft (INT) (Figure A5).
Moreover, an illustrative example of a rejected subject's data is shown in Figure A6.
Figure A2. Subject 1 during the real museum experience (EXP1) and the first phase with a handicraft (VC).
Figure A3. Subject 2 during the real museum experience (EXP1) and the second phase with a handicraft (INT).
Figure A4. Subject 3 during the Virtual Reality experience (EXP2) and the first phase with a handicraft (VC).
Figure A5. Subject 4 during the Virtual Reality experience (EXP2) and the second phase with a handicraft (INT).
Figure A6. Subject 5. Example of discarded data.

Headset Positioning

The placement of the Emotiv headset on the participants' scalps was carried out with meticulous attention to detail to ensure accurate recording of the EEG signals. We followed the standard procedure recommended by the manufacturer; the specific steps are as follows:
  • Before placing the headset, the participants’ scalp and hair were prepared to ensure proper contact between the electrodes and the skin. This included brushing the hair away from the electrode sites and, if necessary, cleaning the scalp with an alcohol wipe to remove any oils or residues;
  • The headset was positioned on the participant’s head according to the international 10–20 system. The headset was carefully adjusted to align with the pre-defined landmarks on the scalp;
  • The EmotivPro software was used to verify the contact quality of each electrode (more details on this are provided below);
  • In cases where the contact quality was not optimal, additional adjustments were made, including repositioning the electrodes or further cleaning the contact site;
  • With the headset properly positioned, participants were guided through the various phases of the experiment, ensuring consistent and high-quality data collection.
Regarding signal quality before processing, it is important to note that this study's EEG data processing pipeline began with the importation of raw EEG data recorded directly from the headset. The Emotiv system has significant built-in signal processing and filtering capabilities, such as the removal of mains noise and its harmonic frequencies, which contributes to the signals appearing clean when the contact quality is good [91]. Throughout the recording phase and the whole duration of the experiment, the EmotivPro software provided contact quality (CQ) and EEG quality (EQ) indicators for each channel, visually represented through the color of the sensor circle, as shown in Figure A7. We accepted only those signals categorized as excellent (green); to attain a high-quality standard, all the sensors had to be shown as green. The other sensor colors indicate different quality levels: yellow, good; orange, moderate; red, poor; and black, very poor. The real-time evaluation of contact and EEG quality was maintained throughout the measurement, ensuring consistent quality standards. Note that we checked two types of measurements: the contact quality, i.e., the measured impedance of each channel, and the EEG signal quality computed by the headset provider's software. This acceptance rule can be summarized as in the sketch below.
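The toy sketch below keeps a recording only when every sensor reports the 'green' (excellent) category. The data structure and channel names are hypothetical illustrations, not part of the Emotiv API.

```python
def recording_accepted(channel_quality: dict) -> bool:
    """Accept a recording only if all channels report 'green' (excellent) CQ/EQ."""
    return all(level == "green" for level in channel_quality.values())

# Hypothetical per-channel quality labels read off the EmotivPro display
print(recording_accepted({"AF3": "green", "F7": "green", "O1": "green"}))   # True
print(recording_accepted({"AF3": "green", "F7": "yellow", "O1": "green"}))  # False
```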
In the case of the VR experience (EXP2), the Emotiv headset and the Oculus Rift were carefully positioned on the participants' heads, with an additional verification of CQ and EQ conducted after the Oculus Rift was placed, to ensure accurate and consistent data recording (Figure A8).
Figure A7. Excellent EEG quality signal vs. poor EEG quality signal indicators.
Figure A8. Headset and Oculus positioning.

References

  1. Nonis, F.; Olivetti, E.C.; Marcolin, F.; Violante, M.G.; Vezzetti, E.; Moos, S. Questionnaires or Inner Feelings: Who Measures the Engagement Better? Appl. Sci. 2020, 10, 609.
  2. Dozio, N.; Marcolin, F.; Scurati, G.W.; Ulrich, L.; Nonis, F.; Vezzetti, E.; Marsocci, G.; La Rosa, A.; Ferrise, F. A design methodology for affective Virtual Reality. Int. J. Hum. Comput. Stud. 2022, 162, 102791.
  3. Vianez, A.; Marques, A.; Simões de Almeida, R. Virtual reality exposure therapy for armed forces veterans with post-traumatic stress disorder: A systematic review and focus group. Int. J. Environ. Res. Public Health 2022, 19, 464.
  4. Emmelkamp, P.M.; Meyerbröker, K.; Morina, N. Virtual reality therapy in social anxiety disorder. Curr. Psychiatry Rep. 2020, 22, 32.
  5. Wang, S. A Bodies-On Museum: The Transformation of Museum Embodiment through Virtual Technology. Curator Mus. J. 2023, 66, 107–128.
  6. Lawson, G.; Salanitri, D.; Waterfield, B. Future directions for the development of virtual reality within an automotive manufacturer. Appl. Ergon. 2016, 53, 323–330.
  7. Shao, F.; Robotham, A.J.; Hon, K. Development of a 1:1 Scale True Perception Virtual Reality System for Design Review in Automotive Industry; Auckland University of Technology Library: Auckland, New Zealand, 2012.
  8. Violante, M.G.; Marcolin, F.; Vezzetti, E.; Nonis, F.; Moos, S. Emotional design and virtual reality in product lifecycle management (PLM). In Sustainable Design and Manufacturing 2019: Proceedings of the 6th International Conference on Sustainable Design and Manufacturing (KES-SDM 19), Budapest, Hungary, 4–5 July 2019; Springer: Singapore, 2019; pp. 177–187.
  9. Vergara, D.; Rubio, M.P.; Lorenzo, M. On the design of virtual reality learning environments in engineering. Multimodal Technol. Interact. 2017, 1, 11.
  10. Kolb, D.A. Experiential Learning: Experience as the Source of Learning and Development; FT Press: Upper Saddle River, NJ, USA, 2014.
  11. Bastiaansen, M.; Oosterholt, M.; Mitas, O.; Han, D.; Lub, X. An emotional roller coaster: Electrophysiological evidence of emotional engagement during a roller-coaster ride with virtual reality add-on. J. Hosp. Tour. Res. 2022, 46, 29–54.
  12. Browning, M.H.; Mimnaugh, K.J.; Van Riper, C.J.; Laurent, H.K.; LaValle, S.M. Can simulated nature support mental health? Comparing short, single-doses of 360-degree nature videos in virtual reality with the outdoors. Front. Psychol. 2020, 10, 2667.
  13. Maples-Keller, J.L.; Yasinski, C.; Manjin, N.; Rothbaum, B.O. Virtual reality-enhanced extinction of phobias and post-traumatic stress. Neurotherapeutics 2017, 14, 554–563.
  14. Baños, R.M.; Botella, C.; Alcañiz, M.; Liaño, V.; Guerrero, B.; Rey, B. Immersion and emotion: Their impact on the sense of presence. Cyberpsychol. Behav. 2004, 7, 734–741.
  15. Dozio, N.; Marcolin, F.; Scurati, G.W.; Nonis, F.; Ulrich, L.; Vezzetti, E.; Ferrise, F. Development of an affective database made of interactive virtual environments. Sci. Rep. 2021, 11, 24108.
  16. Riva, G.; Mantovani, F.; Capideville, C.S.; Preziosa, A.; Morganti, F.; Villani, D.; Gaggioli, A.; Botella, C.; Alcañiz, M. Affective interactions using virtual reality: The link between presence and emotions. Cyberpsychol. Behav. 2007, 10, 45–56.
  17. Diemer, J.; Alpers, G.W.; Peperkorn, H.M.; Shiban, Y.; Mühlberger, A. The impact of perception and presence on emotional reactions: A review of research in virtual reality. Front. Psychol. 2015, 6, 26.
  18. Castiblanco Jimenez, I.A.; Marcolin, F.; Ulrich, L.; Moos, S.; Vezzetti, E.; Tornincasa, S. Interpreting Emotions with EEG: An Experimental Study with Chromatic Variation in VR. In Proceedings of the Advances on Mechanics, Design Engineering and Manufacturing IV, Ischia, Italy, 1–3 June 2022; Gerbino, S., Lanzotti, A., Martorelli, M., Mirálbes Buil, R., Rizzi, C., Roucoules, L., Eds.; Springer: Berlin/Heidelberg, Germany, 2023.
  19. Cruz-Garza, J.G.; Brantley, J.A.; Nakagome, S.; Kontson, K.; Megjhani, M.; Robleto, D.; Contreras-Vidal, J.L. Deployment of Mobile EEG Technology in an Art Museum Setting: Evaluation of Signal Quality and Usability. Front. Hum. Neurosci. 2017, 11, 527.
  20. Kontson, K.; Megjhani, M.; Brantley, J.; Cruz-Garza, J.; Nakagome, S.; Robleto, D.; White, M.; Civillico, E.; Contreras-Vidal, J. ‘Your Brain on Art’: Emergent cortical dynamics during aesthetic experiences. Front. Hum. Neurosci. 2015, 9, 626.
  21. Abbattista, F.; Carofiglio, V.; De Carolis, B. BrainArt: A BCI-based Assessment of User’s Interests in a Museum Visit. In Proceedings of the AVI-CH 2018 Workshop on Advanced Visual Interfaces for Cultural Heritage Co-Located with 2018 International Conference on Advanced Visual Interfaces, Castiglione della Pescaia, Italy, 29 May 2018.
  22. Castiblanco Jimenez, I.A.; Gomez Acevedo, J.S.; Olivetti, E.C.; Marcolin, F.; Ulrich, L.; Moos, S.; Vezzetti, E. User Engagement Comparison between Advergames and Traditional Advertising Using EEG: Does the User’s Engagement Influence Purchase Intention? Electronics 2023, 12, 122.
  23. Becker, H.; Fleureau, J.; Guillotel, P.; Wendling, F.; Merlet, I.; Albera, L. Emotion Recognition Based on High-Resolution EEG Recordings and Reconstructed Brain Sources. IEEE Trans. Affect. Comput. 2020, 11, 244–257.
  24. Ding, X.W.; Liu, Z.T.; Li, D.Y.; He, Y.; Wu, M. Electroencephalogram Emotion Recognition Based on Dispersion Entropy Feature Extraction Using Random Oversampling Imbalanced Data Processing. IEEE Trans. Cogn. Dev. Syst. 2021, 14, 882–891.
  25. Gunes, C.; Ozdemir, M.A.; Akan, A. Emotion recognition with multi-channel EEG signals using auditory stimulus. In Proceedings of the 2019 Medical Technologies Congress (TIPTEKNO), Izmir, Turkey, 3–5 October 2019; pp. 1–4.
  26. Kannadasan, K.; Veerasingam, S.; Shameedha Begum, B.; Ramasubramanian, N. An EEG-based subject-independent emotion recognition model using a differential-evolution-based feature selection algorithm. Knowl. Inf. Syst. 2023, 65, 341–377.
  27. Zhuang, N.; Zeng, Y.; Tong, L.; Zhang, C.; Zhang, H.; Yan, B. Emotion recognition from EEG signals using multidimensional information in EMD domain. BioMed Res. Int. 2017, 2017, 8317357.
  28. Mohammadi, Z.; Frounchi, J.; Amiri, M. Wavelet-based emotion recognition system using EEG signal. Neural Comput. Appl. 2017, 28, 1985–1990.
  29. Dabas, H.; Sethi, C.; Dua, C.; Dalawat, M.; Sethia, D. Emotion classification using EEG signals. In Proceedings of the 2018 2nd International Conference on Computer Science and Artificial Intelligence, Shenzhen, China, 8–10 December 2018; pp. 380–384.
  30. Sulthan, N.; Mohan, N.; Khan, K.A.; Sofiya, S.; Muhammed Shanir, P.P. Emotion recognition using brain signals. In Proceedings of the 2018 International Conference on Intelligent Circuits and Systems (ICICS), Phagwara, India, 20–21 April 2018; pp. 315–319.
  31. Ramzan, M.; Dawn, S. Learning-based classification of valence emotion from electroencephalography. Int. J. Neurosci. 2019, 129, 1085–1093.
  32. Huynh, V.Q.; Van Huynh, T. An investigation of ensemble methods to classify electroencephalogram signaling modes. In Proceedings of the 2020 7th NAFOSTED Conference on Information and Computer Science (NICS), Ho Chi Minh City, Vietnam, 26–27 November 2020; pp. 203–208.
  33. Chen, Y.; Chang, R.; Guo, J. Emotion recognition of EEG signals based on the ensemble learning method: AdaBoost. Math. Probl. Eng. 2021, 2021, 8896062.
  34. Parui, S.; Bajiya, A.K.R.; Samanta, D.; Chakravorty, N. Emotion recognition from EEG signal using XGBoost algorithm. In Proceedings of the 2019 IEEE 16th India Council International Conference (INDICON), Rajkot, India, 13–15 December 2019; pp. 1–4.
  35. Miller, C.H.; Sacchet, M.D.; Gotlib, I.H. Support vector machines and affective science. Emot. Rev. 2020, 12, 297–308.
  36. Alarcao, S.M.; Fonseca, M. Emotions Recognition Using EEG Signals: A Survey. IEEE Trans. Affect. Comput. 2017, 10, 374–393.
  37. Lameski, P.; Zdravevski, E.; Mingov, R.; Kulakov, A. SVM parameter tuning with grid search and its impact on model over-fitting. In Proceedings of the Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing: 15th International Conference (RSFDGrC 2015), Tianjin, China, 20–23 November 2015; pp. 464–474.
  38. Li, J.; Zhang, Z.; He, H. Hierarchical convolutional neural networks for EEG-based emotion recognition. Cogn. Comput. 2018, 10, 368–380.
  39. Edla, D.R.; Mangalorekar, K.; Dhavalikar, G.; Dodia, S. Classification of EEG data for human mental state analysis using Random Forest Classifier. Procedia Comput. Sci. 2018, 132, 1523–1532.
  40. Kumar, J.L.M.; Rashid, M.; Musa, R.M.; Razman, M.A.M.; Sulaiman, N.; Jailani, R.; Majeed, A.P.A. The classification of EEG-based winking signals: A transfer learning and random forest pipeline. PeerJ 2021, 9, e11182.
  41. Koelstra, S.; Mühl, C.; Soleymani, M.; Lee, J.-S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A Database for Emotion Analysis Using Physiological Signals. IEEE Trans. Affect. Comput. 2012, 3, 18–31.
  42. Veena, S.; Sumaiya, M. Human emotion classification using EEG signals by multivariate synchrosqueezing transform. In Human Behaviour Analysis Using Intelligent Systems; Springer: Berlin/Heidelberg, Germany, 2020; pp. 179–204.
  43. Liu, Z.T.; Hu, S.J.; She, J.; Yang, Z.; Xu, X. Electroencephalogram Emotion Recognition Using Combined Features in Variational Mode Decomposition Domain. IEEE Trans. Cogn. Dev. Syst. 2023.
  44. Wang, J.; Wang, M. Review of the emotional feature extraction and classification using EEG signals. Cogn. Robot. 2021, 1, 29–40.
  45. Gao, Q.; Yang, Y.; Kang, Q.; Tian, Z.; Song, Y. EEG-based emotion recognition with feature fusion networks. Int. J. Mach. Learn. Cybern. 2022, 13, 421–429.
  46. Demir, F.; Sobahi, N.; Siuly, S.; Sengur, A. Exploring deep learning features for automatic classification of human emotion using EEG rhythms. IEEE Sens. J. 2021, 21, 14923–14930.
  47. Hertweck, S.; Weber, D.; Alwanni, H.; Unruh, F.; Fischbach, M.; Latoschik, M.E.; Ball, T. Brain activity in virtual reality: Assessing signal quality of high-resolution EEG while using head-mounted displays. In Proceedings of the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 23–27 March 2019; pp. 970–971.
  48. Russell, J.A. A circumplex model of affect. J. Personal. Soc. Psychol. 1980, 39, 1161.
  49. Warriner, A.B.; Kuperman, V. Affective biases in English are bi-dimensional. Cogn. Emot. 2015, 29, 1147–1167.
  50. Adelman, J.S.; Estes, Z. Emotion and memory: A recognition advantage for positive and negative words independent of arousal. Cognition 2013, 129, 530–535.
  51. O’Brien, H.L.; Toms, E.G. The development and evaluation of a survey to measure user engagement. J. Am. Soc. Inf. Sci. Technol. 2010, 61, 50–69.
  52. Castiblanco Jimenez, I.A.; Gomez Acevedo, J.S.; Marcolin, F.; Vezzetti, E.; Moos, S. Towards an integrated framework to measure user engagement with interactive or physical products. Int. J. Interact. Des. Manuf. 2022, 17, 45–67.
  53. Homan, R.W.; Herman, J.; Purdy, P. Cerebral location of international 10–20 system electrode placement. Electroencephalogr. Clin. Neurophysiol. 1987, 66, 376–382.
  54. Ramirez, R.; Planas, J.; Escude, N.; Mercade, J.; Farriols, C. EEG-Based Analysis of the Emotional Effect of Music Therapy on Palliative Care Cancer Patients. Front. Psychol. 2018, 9, 254.
  55. Lee, Y.Y.; Hsieh, S. Classifying Different Emotional States by Means of EEG-Based Functional Connectivity Patterns. PLoS ONE 2014, 9, e95415.
  56. Hwang, S.; Jebelli, H.; Choi, B.; Choi, M.; Lee, S. Measuring Workers’ Emotional State during Construction Tasks Using Wearable EEG. J. Constr. Eng. Manag. 2018, 144, 4018050.
  57. Aftanas, L.; Golocheikine, S. Human anterior and frontal midline theta and lower alpha reflect emotionally positive state and internalized attention: High-resolution EEG investigation of meditation. Neurosci. Lett. 2001, 310, 57–60.
  58. Jirakittayakorn, N.; Wongsawat, Y. Brain Responses to a 6-Hz Binaural Beat: Effects on General Theta Rhythm and Frontal Midline Theta Activity. Front. Neurosci. 2017, 11, 365.
  59. Chrastil, E.R.; Rice, C.; Goncalves, M.; Moore, K.N.; Wynn, S.C.; Stern, C.E.; Nyhus, E. Theta oscillations support active exploration in human spatial navigation. NeuroImage 2022, 262, 119581.
  60. Balasubramanian, G.; Kanagasabai, A.; Mohan, J.; Seshadri, N.G. Music induced emotion using wavelet packet decomposition—An EEG study. Biomed. Signal Process. Control 2018, 42, 115–128.
  61. Ramirez, R.; Vamvakousis, Z. Detecting Emotion from EEG Signals Using the Emotive Epoc Device. In Proceedings of the Brain Informatics, Macau, China, 4–7 December 2012; Zanzotto, F.M., Tsumoto, S., Taatgen, N., Yao, Y., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 175–184.
  62. Li, M.; Lu, B.L. Emotion classification based on gamma-band EEG. In Proceedings of the 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Minneapolis, MN, USA, 3–6 September 2009; pp. 1223–1226.
  63. He, Y.; Nagels, A.; Schlesewsky, M.; Straube, B. The Role of Gamma Oscillations During Integration of Metaphoric Gestures and Abstract Speech. Front. Psychol. 2018, 9, 1348.
  64. Ladino Nocua, A.C.; Cruz Gonzalez, J.P.; Castiblanco Jimenez, I.A.; Gomez Acevedo, J.S.; Marcolin, F.; Vezzetti, E. Assessment of Cognitive Student Engagement Using Heart Rate Data in Distance Learning during COVID-19. Educ. Sci. 2021, 11, 540.
  65. Freeman, F.G.; Mikulka, P.J.; Prinzel, L.J.; Scerbo, M.W. Evaluation of an adaptive automation system using three EEG indices with a visual tracking task. Biol. Psychol. 1999, 50, 61–76.
  66. Freeman, F.; Mikulka, P.; Scerbo, M.; Prinzel, L.; Clouatre, K. Evaluation of a Psychophysiologically Controlled Adaptive Automation System, Using Performance on a Tracking Task. Appl. Psychophysiol. Biofeedback 2000, 25, 103–115.
  67. Molteni, E.; Bianchi, A.M.; Butti, M.; Reni, G.; Zucca, C. Analysis of the dynamical behaviour of the EEG rhythms during a test of sustained attention. In Proceedings of the 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, 22–26 August 2007; pp. 1298–1301.
  68. Coelli, S.; Sclocco, R.; Barbieri, R.; Reni, G.; Zucca, C.; Bianchi, A.M. EEG-based index for engagement level monitoring during sustained attention. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 1512–1515.
  69. Eoh, H.J.; Chung, M.K.; Kim, S.H. Electroencephalographic study of drowsiness in simulated driving with sleep deprivation. Int. J. Ind. Ergon. 2005, 35, 307–320.
  70. McMahan, T.; Parberry, I.; Parsons, T.D. Evaluating Player Task Engagement and Arousal Using Electroencephalography. Procedia Manuf. 2015, 3, 2303–2310.
  71. Bekkedal, M.Y.; Rossi, J.; Panksepp, J. Human brain EEG indices of emotions: Delineating responses to affective vocalizations by measuring frontal theta event-related synchronization. Neurosci. Biobehav. Rev. 2011, 35, 1959–1970.
  72. Plass-Oude Bos, D. EEG-based Emotion Recognition: The Influence of Visual and Auditory Stimuli; University of Twente: Enschede, The Netherlands, 2006; pp. 1–17.
  73. Giraldo, S.I.; Ramirez, R. Brain-Activity-Driven Real-Time Music Emotive Control. In Proceedings of the Fifth International Brain-Computer Interface Meeting, Pacific Grove, CA, USA, 3–7 June 2013; Graz University of Technology Publishing House: Graz, Austria, 2013.
  74. Chaouachi, M.; Frasson, C. Exploring the Relationship between Learner EEG Mental Engagement and Affect. In Proceedings of the Intelligent Tutoring Systems, Pittsburgh, PA, USA, 14–18 June 2010; Aleven, V., Kay, J., Mostow, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 291–293.
  75. Huang, W.; Roscoe, R.D.; Johnson-Glenberg, M.C.; Craig, S.D. Motivation, engagement, and performance across multiple virtual reality sessions and levels of immersion. J. Comput. Assist. Learn. 2021, 37, 745–758.
  76. Brinck, I.; Reddy, V. Dialogue in the making: Emotional engagement with materials. Phenomenol. Cogn. Sci. 2020, 19, 23–45.
  77. Ley, C.; Martin, R.K.; Pareek, A.; Groll, A.; Seil, R.; Tischer, T. Machine learning and conventional statistics: Making sense of the differences. Knee Surg. Sport. Traumatol. Arthrosc. 2022, 30, 753–757.
  78. Zhou, Z.H. Ensemble Learning; Springer: Berlin/Heidelberg, Germany, 2021.
  79. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794.
  80. Graham, J.; Olchowski, A.; Gilreath, T. Review: A gentle introduction to imputation of missing values. Prev. Sci. 2007, 8, 206–213.
  81. Lundqvist, M.; Herman, P.; Lansner, A. Theta and Gamma Power Increases and Alpha/Beta Power Decreases with Memory Load in an Attractor Network Model. J. Cogn. Neurosci. 2011, 23, 3008–3020.
  82. Strijbosch, W.; Vessel, E.A.; Welke, D.; Mitas, O.; Gelissen, J.; Bastiaansen, M. On the Neuronal Dynamics of Aesthetic Experience: Evidence from Electroencephalographic Oscillatory Dynamics. J. Cogn. Neurosci. 2022, 34, 461–479.
  83. Prochnow, A.; Eggert, E.; Münchau, A.; Mückschel, M.; Beste, C. Alpha and Theta Bands Dynamics Serve Distinct Functions during Perception—Action Integration in Response Inhibition. J. Cogn. Neurosci. 2022, 34, 1053–1069.
  84. Sauseng, P.; Griesmayr, B.; Freunberger, R.; Klimesch, W. Control mechanisms in working memory: A possible function of EEG theta oscillations. Neurosci. Biobehav. Rev. 2010, 34, 1015–1022.
  85. Li, G.; Anguera, J.A.; Javed, S.V.; Khan, M.A.; Wang, G.; Gazzaley, A. Enhanced Attention Using Head-mounted Virtual Reality. J. Cogn. Neurosci. 2020, 32, 1438–1454.
  86. Juliano, J.M.; Spicer, R.P.; Vourvopoulos, A.; Lefebvre, S.; Jann, K.; Ard, T.; Santarnecchi, E.; Krum, D.M.; Liew, S.L. Embodiment Is Related to Better Performance on a Brain-Computer Interface in Immersive Virtual Reality: A Pilot Study. Sensors 2020, 20, 1204.
  87. Berka, C.; Levendowski, D.; Lumicao, M.; Yau, A.; Davis, G.; Zivkovic, V.; Olmstead, R.; Tremoulet, P.; Craven, P. EEG correlates of task engagement and mental workload in vigilance, learning, and memory tasks. Aviat. Space Environ. Med. 2007, 78, B231–B244.
  88. Chaouachi, M.; Frasson, C. Mental Workload, Engagement and Emotions: An Exploratory Study for Intelligent Tutoring Systems. In Proceedings of the Intelligent Tutoring Systems, Chania, Greece, 14–18 June 2012; Cerri, S.A., Clancey, W.J., Papadourakis, G., Panourgia, K., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 65–71.
  89. Abdelrahman, Y.; Hassib, M.; Marquez, M.G.; Funk, M.; Schmidt, A. Implicit Engagement Detection for Interactive Museums Using Brain-Computer Interfaces. In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct (MobileHCI’15), Copenhagen, Denmark, 24–27 August 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 838–845.
  90. Safaryan, K.; Mehta, M. Enhanced hippocampal theta rhythmicity and emergence of eta oscillation in virtual reality. Nat. Neurosci. 2021, 24, 1065–1070.
  91. EMOTIV. Contact Quality (CQ) vs. EEG Quality (EQ). EmotivPro v3.0. 2023. Available online: https://emotiv.gitbook.io/emotivpro-v3/emotivpro-menu/contact-quality-map/contact-quality-cq-vs.-eeg-quality-eq (accessed on 21 August 2023).
Figure 1. Experimental setup.
Figure 2. Real and Virtual Reality experiences with the EEG headset.
Figure 3. Visualization and contextualization (a) and interaction (b) phases for the cockerel, one of the selected objects, in the VR experience.
Figure 4. Unity scene of the Valle d’Aosta village for the VR experience.
Figure 5. The carpenter’s workshop (a), the log house (b), the stable with cows (c), and the farmhouse with goats (d) designed to contextualize the cockerel, the crib and the tatà, the butter press, and the goat collar, respectively.
Figure 6. Questionnaire answers analyzed by experience (a) and by phase (b). Negative, neutral, and positive feedback is represented in orange, gray, and green, respectively, with the relative percentages, while the median is represented with a yellow line.
Figure 7. Main effects plots (MEPs) for valence, arousal, and engagement grouped by phases. The five selected objects have been considered both individually and collectively.
Figure 8. EEG activated responses.
Figure 9. EEG emotional indicators grouped by experiences.
Figure 10. Measures of variability for the Museum Experience (EXP1) from Table 2 and Table 3.
Figure 11. Measures of variability for the VR experience (EXP2) from Table 4 and Table 5.
Figure 12. Confusion matrix computed on the test set for the engagement. Label 0 stands for EXP1, label 1 stands for EXP2.
Table 1. Questionnaire for collecting emotional feedback for every object, phase and experience.

Questioned Indicator | Question                                               | Answers
Valence              | How do you rate the emotion evoked by the handicraft? | negative / neutral / positive
Arousal              | How did the handicraft make you feel?                 | calm / neutral / excited
Engagement           | How do you rate your level of engagement?             | not engaged / neutral / engaged
Table 2. Measures of variability for the Museum Experience (EXP1) Tatà—Phase 1.

Band  | Mean_Phase1 | StdDev_Phase1 | Variance_Phase1 | CV_Phase1 | Median_Phase1
Theta | 8.38        | 7.60          | 57.79           | 90.73     | 5.50
Alpha | 2.5         | 1.61          | 2.60            | 64.09     | 1.93
Beta  | 2.39        | 1.13          | 1.28            | 47.50     | 2.02
Gamma | 0.68        | 0.38          | 0.14            | 55.69     | 0.62
Table 3. Measures of variability for the Museum Experience (EXP1) Tatà—Phase 2.

Band  | Mean_Phase2 | StdDev_Phase2 | Variance_Phase2 | CV_Phase2 | Median_Phase2
Theta | 12.71       | 13.69         | 187.51          | 107.74    | 7.33
Alpha | 4.41        | 4.68          | 21.88           | 106.16    | 2.57
Beta  | 4.71        | 3.47          | 12.05           | 73.73     | 3.87
Gamma | 2.26        | 2.92          | 8.52            | 129.02    | 1.49
Table 4. Measures of variability for the VR Experience (EXP2) Tatà—Phase 1.

Band  | Mean_Phase1 | StdDev_Phase1 | Variance_Phase1 | CV_Phase1 | Median_Phase1
Theta | 9.09        | 6.80          | 46.29           | 74.87     | 7.08
Alpha | 3.25        | 2.29          | 5.24            | 70.48     | 2.68
Beta  | 3.38        | 2.02          | 4.07            | 59.66     | 3.01
Gamma | 1.23        | 0.95          | 0.91            | 77.86     | 0.97
Table 5. Measures of variability for the VR Experience (EXP2) Tatà—Phase 2.

Band  | Mean_Phase2 | StdDev_Phase2 | Variance_Phase2 | CV_Phase2 | Median_Phase2
Theta | 8.90        | 11.77         | 138.44          | 132.13    | 4.13
Alpha | 3.32        | 3.48          | 12.10           | 104.86    | 2.25
Beta  | 3.50        | 2.52          | 6.37            | 72.05     | 2.69
Gamma | 1.29        | 1.06          | 1.13            | 82.52     | 0.99
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
