Article

A Comprehensive Study of Emotional Responses in AI-Enhanced Interactive Installation Art

1 College of Creative Arts, Universiti Teknologi MARA, Shah Alam 40450, Malaysia
2 Academic Affairs Office, Zhejiang Shuren University, Hangzhou 310000, China
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(22), 15830; https://doi.org/10.3390/su152215830
Submission received: 24 July 2023 / Revised: 29 September 2023 / Accepted: 16 October 2023 / Published: 10 November 2023

Abstract

This study presents a comprehensive literature review on the convergence of affective computing, interactive installation art, multi-dimensional sensory stimulation, and artificial intelligence (AI) in measuring emotional responses, demonstrating the potential of AI-based emotion recognition as a tool for sustainable development. It addresses the problem of understanding emotional response and measurement in the context of interactive installation art under AI, emphasizing sustainability as a key factor. The study aims to fill the existing research gaps by examining three key aspects: sensory stimulation, multi-dimensional interactions, and engagement, which have been identified as significant contributors to profound emotional responses in interactive installation art. The proposed approach involves conducting a process analysis of emotional responses to interactive installation art, aiming to develop a conceptual framework that explores the variables influencing emotional responses. This study formulates hypotheses that make specific predictions about the relationships between sensory stimulation, multi-dimensional interactions, engagement, and emotional responses. By employing the ASSURE model combined with experimental design, the research methodology ensures a systematic and comprehensive study implementation. The implications of this project lie in advancing the understanding of emotional experiences in interactive installation art under AI, providing insights into the underlying mechanisms that drive these experiences and their influence on individual well-being from a sustainable perspective. The contributions of this research include bridging the identified research gaps, refining theoretical frameworks, and guiding the design of more impactful and emotionally resonant interactive artworks with sustainability in mind. This research seeks not only to fill the existing gaps in understanding emotional experiences in interactive installation art, but also to guide the development of immersive and emotionally engaging installations, ultimately advancing the broader field of human–computer interaction, promoting individual well-being, and contributing to sustainable development.

1. Introduction

Interactive installation art has blossomed into an enthralling artistic expression, blending technological innovation with creative prowess to bestow upon viewers an immersive and participatory encounter [1]. Infusing artificial intelligence (AI) into these installations has engendered a paradigm shift and created a new art form that explores intricate emotional engagement. Viewed through an ontological lens, the amalgamation of interactive installation art with artificial intelligence emerges as a dualistic existence encompassing entity and environment [2]. As palpable entities, interactive installations possess unique attributes and characteristics that distinguish their intrinsic nature. Simultaneously, as elements of existence, these installations intricately engage with their surrounding milieu, evoking an array of emotional responses. At the core of interactive installation art lies its inherent prowess to produce emotional resonance within participants, effacing the boundaries between the observer and the artificial construct [3].
Propelled by artificial intelligence algorithms, the dynamic interaction between the observer and the installation fabricates an ambiance wherein the distinction between self and artificial entity becomes indistinct. The AI-infused interactive installations adopt qualities and attributes that appear to bridge the divide between inanimate constructs and sentient beings. This amalgamation challenges conventional constructs of agency, intentionality, and consciousness, as interactive installations possess distinct attributes and characteristics, engaging with their environment and eliciting an array of emotional responses. Eminent scholars such as Dominic M. McIver Lopes, Sherri Irvin, Graham Harman, Erkki Huhtamo, and Claudia Giannetti, among others, have concentrated their research on the intersection of installation art and emotional responses [1,2,3,4]. Their work enhances our ontological understanding of interactive installation art, shedding light on integrating emotions with the essence of existence and artistic innovation. Moreover, it sparks profound inquiries into the fundamental nature of existence, reality, and consciousness within interactive art.
In the era of burgeoning AI capabilities, the capacity of algorithms to swiftly assimilate and construe extensive datasets in real time heralds a transformative juncture. This evolution empowers installation art to dynamically interface with their surroundings, adeptly catering to individual participants’ intricate emotional and cognitive spectra. This amalgamation of artificial intelligence and interactive art engenders a paradigm shift, propelling the notion of responsiveness to unprecedented pinnacles. Consequently, the installation transcends its erstwhile static essence, metamorphosing into a dynamic milieu that profoundly resonates with each participant.
The formidable aptitude of artificial intelligence technology lies at the nucleus of this metamorphic amalgamation. This aptitude rests in its proficiency in anatomizing and scrutinizing the intricate tapestry of relationships binding critical elements of the installation with the ensuing emotional responses they incite. The intricate dance commences as participants engage with the installation, wherein artificial intelligence algorithms intricately capture and construe sensory inputs. These span a gamut from gestures and facial expressions to tonal inflections and physiological cues [5,6,7,8,9]. This intricate web of data points promptly transmutes into real-time rejoinders through artificial intelligence algorithms, unfurling a chronicle of evolving emotional states and cognitive reactions.
This dynamic process burgeons as AI algorithms, operating in real time, meticulously process and dissect copious volumes of data. This prowess endows installation art with the agility to react dynamically to the behavioral and emotional tapestry woven by participants. In tandem, this symbiotic relationship between AI algorithms and interactive installations bequeaths artists and designers a fresh canvas with the potential to craft experiences that are emotionally opulent and compelling. Installation art, augmented by artificial intelligence, becomes adept at capturing and deciphering the nuanced emotional cues from participants. By leveraging these algorithms, installation art attunes itself to the participants’ emotional states and responses, culminating in an elevated level of engagement.
Therefore, our research objective is to facilitate the measurement of emotional responses by artificial intelligence algorithms, which involves an in-depth study of the intricate interplay between various stages, from multi-sensory input to measurement and subsequent recognition. We adopt an experimental design based on the ASSURE model to achieve this. This approach enables us to investigate the intricate, dynamic nexus between sensory stimulation, multi-dimensional interaction, participant engagement, and emotional responses. Our study aims to foster a deep understanding of emotional responses within interactive installation art under artificial intelligence. This endeavor holds immense significance as it equips artists and designers with invaluable insights, pivotal in advancing the development of emotionally engaging interactive installation art, ultimately enriching the landscape of artistic creation.

2. Literature Review

Rapid advances in computing power, big data, and machine learning enable artists to incorporate artificial intelligence techniques into their installation art, providing audiences an immersive and engaging experience. These installations utilize artificial intelligence algorithms to create dynamic and responsive environments that interact with the viewer’s movements, gestures, or input [5,6]. They can also incorporate computer vision to track audience movement and generate music or visual effects based on audience behavior [10,11]. This research uses artificial intelligence algorithms to measure emotional responses to interactive installation art. To achieve this goal, we conduct a relevant literature review to examine the current state of installation art and identify research gaps.

2.1. Artificial Intelligence in Interactive Installation Art

The impact of artificial intelligence on art design is profound and massive; it is changing the development pattern of art at an unprecedented speed, and artists use artificial intelligence in various ways [12], creating dynamic, responsive, and personalized experiences for their audiences. One of the essential applications of artificial intelligence in interactive installation art is to make the artwork respond to the audience’s behavior in real time through machine-learning algorithms. For example, an intelligent mirror can recognize the viewer’s face, track their movements, and change their appearance or behavior, creating a more personalized experience [5].
Another way to use artificial intelligence in interactive installation art is to employ natural language processing (NLP) technology, enabling the artwork to understand and respond to written or spoken language. For example, the “Molten Memories” installation uses NLP to create a responsive and immersive visitor experience: the building walls are transformed into a dynamic display of light and sound [13]. “You Are What You See” uses NLP and machine-learning algorithms to generate appropriate responses to audience communication [14]. This opens new possibilities for creating interactive installations that respond to the viewer’s questions or commands, creating a more conversational and personal experience.
Artificial intelligence algorithms have created a responsive and immersive experience for the audience, eliciting better emotional responses. A large body of literature is dedicated to exploring the impact of emotion on interactive installation art and how it affects the user experience, emphasizing its role in creating a sense of presence and immersion in interactive installation art. Interactive devices can respond to the user’s emotional state, creating a more personalized and engaging experience by evoking surprise, joy, excitement, curiosity, or awe to create a deeper interaction with the device [15]. Emotions can be elicited by various elements in interactive installation art, such as visual and auditory stimuli, physical interaction with artwork, and narrative or conceptual content. They can range from joy and excitement to more complex emotions such as fear, anxiety, or confusion [16].
Interactive installation art can evoke a range of positive and negative emotions, and designers and curators must understand the role of emotion in the aesthetic experience of interactive installation art to create impactful experiences for viewers [17]. Emotional experience is an integral part of the user experience, and emotional responses are influenced by multiple factors, including the physical environment and audience experience [18].
Designers should prioritize emotions when creating interactive experiences, understanding the importance of the environment, the causes of emotional factors, and developing strategies for emotional experiences [19]. The elements in designing interactive systems include storytelling, personalization, interactivity, the consideration of users’ prior knowledge and expectations, and cultural and social factors [17]. The presence of other people in the physical environment of interactive installation art can also affect the audience’s emotional experience [18]. The emotion of interactive installation art is influenced by the design of the installation and the sensory experience it provides [18,20,21], and mostly depends on the design of the installation, the presentation of the environment, and the characteristics of the relevant actors [22]. For example, the “Rain Room” installation at the Museum of Modern Art in New York City allows visitors to walk in heavy rain without getting wet. It creates a sense of wonder and awe among visitors and evokes a strong emotional response [23]. Movement is an essential element of the installation, as visitors can move around and interact with the rain, experiencing the thrill of being surrounded by rain without getting wet. The “Ripple” interactive installation uses light and sound to create a calming and meditative experience for the user. Movement is integral to the installation, as users can move and interact with light and sound waves, creating ripples and patterns [21]. The installation uses sensory cues to evoke an emotional response in the user. It allows the user to reflect on their emotional experience and connect with others who have had a similar experience.
In conclusion, emotions play a vital role in creating successful interactive installation art, which can create meaningful experiences for viewers. As AI increasingly integrates into our lives, the literature reviewed above highlights the importance of creating engaging and evocative experiences in interactive installation art through emotional design.

2.2. Emotion and Well-Being

Emotion is a powerful and influential factor in human mental life, resulting from internal and external stimuli. In the field of art aesthetics, Scherer defines “aesthetic emotion” as “emotions elicited by an appreciation of the intrinsic qualities of natural beauty or the qualities of a work of art or artistic performance” [24].
Making art is a complex and transformative activity that fosters a deep connection to human emotion. Castro argues that participatory environments in installation art can generate emotional ties and contribute to social reconnection among individuals [25]. As Reason emphasized, art has the power to express thoughts and emotions and evoke powerful experiences in those who encounter it [26]. This transformative experience, the aesthetic experience, has been the subject of much philosophical inquiry. For Kant, aesthetic experience is associated with pleasure derived from beautiful things [27]. Goodman believes art allows us to perceive the world through positive or negative emotions, pleasant or unpleasant [28]. Based on Scheler’s concept, aesthetic emotion can be divided into five categories: “moved”, “surprised”, “fascinated”, “beautiful”, and “awe” [29]. Artists apply artificial intelligence to interactive installation art to create a more personalized and engaging experience, evoking emotions such as surprise, joy, excitement, curiosity, or awe, which fosters a deeper interaction with the installation.
Previous research has shown that positive emotional output from aesthetic experiences can influence mood and indirectly contribute to health and well-being [16]. The integration of artificial intelligence technologies amplifies the potential of interactive installations, enabling them to elicit a diverse spectrum of emotions in viewers. This culminates in the creation of immersive and captivating environments. These emotional responses are catalysts for forging profound connections and eliciting meaningful reactions from participants, and they in turn contribute to health and well-being.
Emotional expression: Interactive installations allow individuals to express and explore their emotions in a creative and immersive environment. These installations may employ various sensory elements, such as visual displays, sounds, and tactile experiences, to evoke and amplify emotional responses. The “I’m Sensing in the Rain” exhibit creates the illusion of raindrops falling through tactile stimuli in the air, enhancing realism and immersion in virtual environments [30]. The Tate Sensorium exhibit engages visitors’ senses through a multi-sensory design that combines taste, touch, smell, sound, and sight [31].
Catharsis: Interactive installation art can facilitate cathartic experiences, allowing individuals to release pent-up emotions and find emotional release or relief. The interactive nature of these installations can create a safe space for individuals to express and process their feelings. Canbeyli discusses the relationship between emotion and depression, where sensory stimulation via the visual, auditory, olfactory, and gustatory systems can modulate depression [32]. Gilroy proposed a framework for analyzing user experience in interactive art installations, which could help to identify when and how cathartic experiences occur [33].
Self-Reflection: Interactive installations often encourage self-reflection, prompting individuals to delve into their inner thoughts and emotions. Through interactive engagement, individuals can gain insights into their emotional states, triggers, and personal narratives, fostering greater self-awareness and emotional understanding. Gilbert proposed a model for the personal awareness of science and technology (PAST) to design interactive exhibits that promote meaning-making, which could encourage self-reflection. “Learning to See” uses machine-learning algorithms to analyze live video footage and create an abstract image representation in real time [34]. The installation challenges our perception of art, blurs the lines between human and machine-made, and evokes an emotional response by showing us something new and unexpected [14].
Emotional regulation: Interactive installations can aid emotional regulation by offering tools and strategies for managing and modulating emotions. These installations may incorporate elements like mindfulness exercises, relaxation techniques, or guided emotional experiences, providing individuals with resources to regulate their emotional states effectively. Jiang found that an intelligent interactive shawl that reacts to changes in emotional arousal can help users visualize their emotions and reduce their stress levels by interacting with it [35]. Sadka identified four opportunity themes where interactive technologies can provide unique benefits for emotion regulation training [36].
Therapeutic engagement: Specifically in therapeutic settings, interactive installations can be designed to facilitate emotional healing and growth. They may address specific emotional challenges, promote emotional well-being, or support therapeutic interventions by providing a creative and interactive medium for individuals to explore their emotions and experiences. Waller presents a theoretical model that integrates the change-enhancing factors of both group psychotherapy and art therapy and shows how this model works in practice through a series of illustrated case examples of various client and training groups from different societies and cultures [37]. A project called RHYME aimed to develop a musical and multi-sensorial internet-of-things to improve the health and well-being of children with special needs [38].

2.3. Research Dimensions: Sensory Stimulation, Multi-Dimensional Interaction, and Engagement

Interactive installation art has been extensively studied in four countries: the United States, Germany, the United Kingdom, and China. Table 1 compares interactive installation art in these countries, highlighting the design factors that elicit emotional responses. Sensory stimulation, multi-dimensional interaction, and participation are crucial elements that significantly contribute to the emotional experiences associated with interactive installation art.

2.3.1. Sensory Stimulation

Sensory stimulation is a crucial aspect of interactive installation art, as it refers to any input that activates our senses, such as visual, auditory, olfactory, gustatory, or tactile stimuli. Sensory stimulation can elicit different emotional responses, and multi-sensory stimuli can evoke different emotions [39]; sensory stimuli, including music, sound effects, lighting, color, and CG animation, can create immersive environments and enhance emotional responses [7,9]. These studies emphasize the importance of carefully designing and integrating sensory elements within interactive installation art to elicit specific emotional states. Visually appealing and engaging content tends to elicit positive emotional responses, while repetitive or annoying sounds elicit negative emotional responses [40]. Music can significantly impact mood, and certain types elicit positive emotions and reduce symptoms of depression [38]. In the “Molten Memories” installation, the walls of the building are transformed into a dynamic and changing display of light and sound [13].
With the rapid development of science and technology, various sensory technologies have emerged to stimulate vision, hearing, touch, smell, and taste as the times require [41]. Recent research explores touch, taste, and smell to enhance engagement in multi-sensory experiences. An interdisciplinary SCHI lab team led by Marianna Obrist conducted extensive research on multi-sensory experiences, including interactive touch, taste, and smell experiences [42]. They used air-haptic technology to create an immersive tourist experience [43]. For example, the “I’m Sensing in the Rain” exhibit creates the illusion of raindrops falling through tactile stimuli in the air, enhancing realism and immersion in virtual environments [30]. The Tate Sensorium exhibit engages visitors’ senses through a multi-sensory design that combines taste, touch, smell, sound, and sight [31]. Advances in immersive environments such as virtual reality (VR) and wearable devices have led to kinship interfaces involving scent, temperature, and smell that reshape multi-sensory experiences. Odor stimuli can influence the perception of body lightness or heaviness [44], while temperature perception can promote social proximity concepts and influence human behavior [45]. Likewise, the sense of taste plays a crucial role in our understanding of the consumption process and reflects emotional states.
According to Velasco and Obrist, a multi-sensory experience is an impression formed by a specific event in which sensory elements such as color, texture, and smell are coordinated to create an impression of an object [41]. These impressions can alter cognitive processes such as attention and memory, and designers draw on the existing multi-sensory perception research and concepts to design them [46]. Multi-sensory experiences can be physical, digital, or a combination of the two, from completely real to completely virtual, and can enhance participation in multi-sensory experiences [10].

2.3.2. Multi-Dimensional Interaction

By incorporating sensory engagement, spatial design, human–machine interaction, data-driven approaches, and narrative elements, interactive installation art offers a multi-dimensional experience beyond traditional visual art forms [6]. Interactive installation art, data visualizations, and spatial narratives can enable humans to engage with and navigate this multi-dimensional space, blurring the boundaries between the physical and digital realms [47]. Interactive installation art at the Mendelssohn Memorial Hall provides a multi-dimensional experience by offering immersive interaction, sensorial engagement, technological integration, and spatial transformation. It merges technology and art, allowing visitors to actively participate in the virtual orchestra performance and creating a dynamic and captivating experience [48]. George’s interactive installation art provides a multi-dimensional experience [10]. The visitor visually explores the art exhibit, trying to discover interesting areas, while the audio guide provides brief information about the exhibit and the gallery theme. The system uses eye tracking and voice commands to provide visitors with real-time information. The evaluation study sheds light on the dimensions of evoking natural interactions within cultural heritage environments, using micro-narratives for self-exploration and the understanding of cultural content, and the intersection between the human–computer interaction and artificial intelligence within cultural heritage. Reefs and Edge’s interactive installation art embodies a multi-dimensional experience, merging tangible user interface objects and combining environmental science and multiple art forms to explore coral reef ecosystems threatened by climate change [49]. They argue that using a tangible user interface in an installation-art setting can help engage and inform the public about crucial environmental issues. The use of reacTIVision allows the computer simulation to pinpoint the location, identity, and orientation of the objects on the table’s surface in real time as the users move them.
Interface design is also a critical factor in structuring shared experiences. Fortin and Hennessy describe how electronic artists used cross-modal interfaces based on intuitive modes of interaction such as gesture, touch, and speech to design interactive installation art that engages people beyond the ubiquitous single-user “social cocooning” interaction scenario [50]. They explain that the multi-dimensional experience of interactive installation art is achieved through thoughtful interface design that encourages collaboration, play, and meaning. Diversifying interfaces in influential interactive installation art can enhance users’ emotional interaction and create a more engaging atmosphere [51]. By incorporating facial affect detection technology and implementing a wide range of input and output interfaces, installation art can offer a diverse and immersive experience.
Body movement is essential in multi-dimensional interaction; the Microsoft Kinect and the Leap Motion Controller are examples of 3D vision sensors that can be used for body motion interaction, such as gestures, speech, touch, and vision, to create more natural and powerful interactive experiences. On-body tangible interaction involves augmented and mixed reality devices using the body as a physical support to constrain the movement of multiple-degrees-of-freedom devices (3D Mouse). The 3D Mouse offers enough degrees of freedom and accuracy to support the interaction. Using the body as a support for the interaction allows the user to move in their environment and avoid the fatigue of mid-air interactions [52].

2.3.3. Engagement

The user experience should be designed to create engagement and active participation opportunities to enhance the emotional response and overall user satisfaction. Meaningful engagement with art involves a fusion of cognitive and affective elements that go beyond passive observation and involve active involvement, interaction, and participation on the viewer’s part. Interactive installation art, which responds to the actions and inputs of the viewer, creates a dynamic and reciprocal relationship between the artwork and the participant. This interactivity enables individuals to engage with the artwork on their terms, influencing and shaping the experience [53]. In interactive installation art, artificial intelligence technology is increasingly employed to create immersive experiences by analyzing participants’ biometric data, such as heart rate and skin conductance.
Generally, engagement is combined with artificial intelligence to reflect people’s participation. Artificial intelligence allows installation art to respond to biometric data dynamically. Analyzing participants’ biometric data improves engagement, promoting stronger emotional and sensory connections and enriching the interactive experience. For example, Galvanic Skin Response sensors can measure the level of engagement of the audience and the presenter. The gathered data are then visualized in real time through a visualization projected onto a screen and a physical electro-mechanical installation, which changes the height of helium-filled balloons depending on the atmosphere in the auditorium [54], creating a tangible way of making the invisible visible. Artificial intelligence algorithms can adjust the experience in real time. AI algorithms analyze incoming sounds, identify patterns and characteristics, and then classify the sounds, distinguishing between musical instruments, voices, and other auditory elements, which link to corresponding faces or visual representations. AI algorithms dynamically adjust and generate visuals based on real-time audio input. “Learning to See” [14] uses machine-learning algorithms to analyze live video footage and create an abstract image representation in real time. Artificial intelligence algorithms enhance participant engagement and foster emotional resonance in interactive installation art. Personalized content and feedback can create a sense of ownership of the experience, enhancing the emotional response and user engagement [10]. Research shows that engagement is vital in eliciting an emotional response and creating meaningful connections between installation art and viewers.
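As a concrete illustration of the engagement loop described above, the following minimal Python sketch maps a window of Galvanic Skin Response (GSR) samples to an engagement level that could drive a physical output such as balloon height. The sampling window, baseline, and mapping are illustrative assumptions, not details taken from the cited installation [54].

```python
# Hedged sketch: mapping GSR readings to an engagement estimate that drives
# a visual/physical parameter (e.g., balloon height). Thresholds, units, and
# the simulated data are illustrative assumptions.
import numpy as np

def engagement_level(gsr_window, baseline):
    """Estimate engagement as the normalized skin-conductance rise over baseline."""
    delta = np.mean(gsr_window) - baseline
    return float(np.clip(delta / (abs(baseline) + 1e-6), 0.0, 1.0))

def balloon_height(level, min_h=0.5, max_h=3.0):
    """Map an engagement level in [0, 1] to a physical height in metres."""
    return min_h + level * (max_h - min_h)

baseline = 2.0                                   # microsiemens, resting level (assumed)
window = np.random.normal(2.6, 0.1, size=256)    # one window of simulated GSR samples
print(balloon_height(engagement_level(window, baseline)))
```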

2.4. Emotion Measurement in Interactive Installation Art

Emotion measurement refers to the process of quantitatively assessing and evaluating emotional experiences. It involves capturing and recording various indicators or signals that reflect an individual’s emotional state, such as physiological responses (e.g., heart rate and skin conductance), facial expressions, vocal cues, self-reported ratings, or behavioral observations.

2.4.1. Emotion Recognition Technology

Different emotional responses can be elicited using different stimuli, emphasizing the potential of multimodal emotion recognition technology in interactive installation art [55]. Multiple sensing modalities give us a wealth of information to support interaction with the world and one another [56]. The multimodal fusion of AR with speech and hand gesture input enables users to interact with computers through various input modalities like speech, gesture, and eye gaze [57]. Multimodal design in Cangjie’s Poetry creates an interactive art experience that combines different modes of communication, such as language, symbols, and visual elements. This approach allows the artwork to engage with audiences in multiple ways and create a more immersive and dynamic experience [58]. Combining multiple modalities through a multimodal approach is more effective than relying on a single modality (unimodal) for emotion recognition in artistic settings [10,58]. Thus, multimodal sentiment analysis harnesses the synergy and complementarity of diverse modal information to enrich emotional understanding and expression, using real-time vocal emotion recognition techniques to enhance audience engagement and creative expression in artistic installations [55]. The research involved collecting and analyzing the participants’ vocal and physiological data, using machine-learning algorithms for emotion recognition, and implementing the system in two interactive installation art exhibits. The findings demonstrated that the system effectively enhanced audience engagement and creative expression.
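To make the multimodal idea concrete, the sketch below shows feature-level (early) fusion, in which vocal and physiological feature vectors are concatenated before a single classifier rather than training separate unimodal models. The feature dimensions, classifier choice, and simulated data are assumptions for illustration only and do not reproduce the cited studies.

```python
# Hedged sketch of feature-level (early) multimodal fusion for emotion recognition.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
voice_feats = rng.normal(size=(n, 12))   # e.g., pitch/MFCC statistics (placeholder)
physio_feats = rng.normal(size=(n, 6))   # e.g., heart rate, skin conductance (placeholder)
labels = rng.integers(0, 4, size=n)      # four discrete emotion classes (assumed)

fused = np.hstack([voice_feats, physio_feats])   # early fusion by concatenation
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(fused, labels)
print(clf.predict(fused[:5]))
```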
In affective computing, each modality conveys a different amount of information about human emotions and captures different emotional dimensions; affective computing should therefore draw on multiple modalities wherever possible to compensate for any single, imperfect emotional channel. The overall emotional tendency is then judged from the combined results, so the effectiveness of such fusion needs to be studied. Currently, most innovations are based on multimodal emotion features and fusion algorithms to improve the accuracy of emotion classification, as shown in Table 2 [59].
From Table 2, most current research on emotion recognition starts from the perspective of digital signal analysis to explore the relationship between emotion and signal characteristics. With the help of deep-learning algorithms to extract emotional features, features learned from massive data can better reflect the inherent nature of the data than artificially constructed features, thus significantly improving the effect of emotion recognition [60]. Commonly used AI techniques for emotion classification, such as Support Vector Machines and the Naïve Bayes Classifier, can achieve more than 90% accuracy [61]. Using SVM and LDA classifiers, Bhardwaj (2015) classified emotions from EEG signals with average overall accuracies of 74.13% and 66.50%, respectively [62]. Mano proposed an ensemble model for emotion classification based on motor facial expressions that achieved greater accuracy than a single classifier [63].
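The following hedged sketch illustrates how the classical classifiers named above (SVM, Naïve Bayes, and LDA) can be compared on pre-extracted emotion features with cross-validation; the random placeholder features stand in for real EEG data, so the printed accuracies will not reproduce the figures reported in [61,62].

```python
# Hedged sketch comparing classical emotion classifiers on placeholder features.
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))      # placeholder EEG feature vectors (assumed)
y = rng.integers(0, 3, size=300)    # three emotion classes (assumed)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("Naive Bayes", GaussianNB()),
                  ("LDA", LinearDiscriminantAnalysis())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```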
Therefore, emotion recognition technology can be used to identify and analyze people’s emotions, helping us better understand how people feel and what they need. This information can inform more effective policies and programs that meet people’s needs and improve well-being around the world, making emotion recognition a powerful tool for sustainable development.

2.4.2. Emotion Model

Multimodal fusion helps explore validity and develop effective models that enable more accurate assessment of emotional states by applying various techniques, including artificial intelligence and machine-learning algorithms, to extract meaningful information from the collected data. The most widely used emotion models are discrete and dimensional [64]; researchers try to quantify emotion and convert it into objectively representable data to promote the development of research on the human–computer interaction and emotional experience [65].
Cooney focused on selecting four representative discrete emotions, with one emotion chosen from each quadrant in the valence-arousal space, aiming to capture a comprehensive range of emotional experiences and provide a foundation for further investigation [66]. Gilroy et al. described a method for developing a multimodal affective interface to analyze user experience in real time as part of an augmented reality art installation. The system uses a PAD dimensional model of emotion, which allows the fusion of affective modalities, and each input modality is represented as a PAD vector and supports the representation of emotional responses associated with aesthetic impressions [67]. Işik and Güven utilized various signal-processing methods, feature extraction techniques, and artificial intelligence methods to classify emotions from physiological signals [68]. The analysis is based on the DEAP dataset and involves calculating statistical properties of physiological signals; the obtained data are used for emotion classification. Nasoz focused on implementing artificial intelligence algorithms to analyze the physiological signals associated with emotions by eliciting six emotions (sadness, anger, surprise, fear, frustration, and amusement) from participants via multimodal input and measuring their physiological signals [69]. The algorithms map physiological signals to specific emotions for increased accuracy, including facial expression recognition, vocal intonation recognition, and natural language understanding. We list the emotion recognition technology used in physiological signals in interactive installation art, as shown in Table 3, and classify it from the research dimensions, research methods, research questions, and research findings.
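As a simplified illustration of the dimensional approach discussed above, the sketch below computes basic statistical descriptors from a physiological signal and maps a predicted valence-arousal pair to one representative emotion per quadrant, in the spirit of Cooney’s quadrant-based selection [66]. The feature set and emotion labels are illustrative assumptions rather than details from the cited work.

```python
# Hedged sketch: statistical features from a physiological signal and a
# valence-arousal quadrant mapping with one assumed emotion per quadrant.
import numpy as np

def stat_features(signal):
    """Simple statistical descriptors of a 1-D physiological signal."""
    return np.array([signal.mean(), signal.std(),
                     signal.min(), signal.max(), np.median(signal)])

def quadrant_emotion(valence, arousal):
    """Map a point in valence-arousal space to one representative emotion per quadrant."""
    if valence >= 0 and arousal >= 0:
        return "joy"        # high valence, high arousal
    if valence < 0 and arousal >= 0:
        return "anger"      # low valence, high arousal
    if valence < 0 and arousal < 0:
        return "sadness"    # low valence, low arousal
    return "calm"           # high valence, low arousal

signal = np.sin(np.linspace(0, 10, 512)) + np.random.normal(0, 0.1, 512)
print(stat_features(signal))
print(quadrant_emotion(valence=0.4, arousal=-0.2))
```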
Most research has focused on developing emotion recognition systems that use sensors such as EEG, EKG, facial expressions, and speech recognition to recognize user emotions in real time. The research methods in these articles include signal-processing techniques, machine-learning algorithms, and data analysis tools to develop and validate emotion recognition models and data sources from different combinations, such as wearable devices and virtual reality environments with interactive installation art. These studies also explored different dimensions: user experience, emotional responses, and social behavior. The findings suggest that emotion recognition technology has the potential to enhance user engagement, improve the human–computer interaction, and provide valuable insights into users’ emotional responses in interactive installation art.

2.4.3. Emotion Recognition Models

Emotion recognition models are machine-learning techniques that recognize emotions from various sources, such as text, facial expressions, and physiological signals. The current classification of machine-learning technology for emotion recognition can be divided into two categories: classical machine-learning methods and deep-learning methods. The classifier can classify input signals and output the corresponding emotion category [81].
(1) Machine-Learning Approaches:
Machine-learning algorithms have been widely used in emotion recognition tasks. These approaches involve training models on labeled datasets to learn patterns and features that distinguish different emotional states. Researchers have explored the performance and accuracy of machine-learning models for emotion recognition in different contexts, including installation art. They have examined the effectiveness of these models in capturing and analyzing viewers’ emotional responses.
Dominguez-Jimenez proposes a machine-learning model for emotion recognition from physiological signals using the Bagged Trees algorithm to recognize facial emotions from RGB data collected using an RGB HD camera [82]. Ratliff discusses a framework for recognizing emotions using facial expressions through machine-learning techniques based on still images of the face using active appearance models [83]. The AAM is trained on face images from a publicly available database to capture important facial structures for expression identification. The classification scheme can successfully identify faces related to the six universal emotions. Teng proposes a scheme that applies the emotion classification technique for emotion recognition using Support Vector Machines (SVMs), a popular tool for machine-learning tasks involving classification, regression, or novelty detection [84]. The proposed emotion recognition system is designed to recognize emotions from the sentence inputted from the keyboard. The training set and testing set are constructed to verify the effectiveness of this model.
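A minimal sketch of the keyboard-sentence scenario described for Teng’s system [84] follows: a linear SVM over TF-IDF features classifies typed sentences into emotion labels. The toy training set, labels, and pipeline are assumptions for illustration and are not the original system.

```python
# Hedged sketch: sentence-level emotion classification with TF-IDF + linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train_sentences = ["I love this installation", "This makes me so happy",
                   "I feel scared in the dark room", "This is terrifying",
                   "I am bored and tired", "Nothing here interests me"]
train_labels = ["joy", "joy", "fear", "fear", "boredom", "boredom"]  # assumed labels

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(train_sentences, train_labels)
print(model.predict(["The lights make me feel wonderful"]))
```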
(2) Deep-Learning Models:
Deep learning is a subfield of machine learning. Deep-learning models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), can automatically extract complex features from raw data and learn hierarchical representations of emotions. These models have improved performance in various domains, including image and speech recognition. Thus, deep-learning models can be applied to analyze emotional responses based on different modalities. For example, CNNs can analyze visual data, and RNNs can process physiological signals to detect patterns and variations in emotional responses. By leveraging deep-learning models, researchers can gain insights into the emotional experiences evoked by installation art and uncover the nuanced relationships between different design elements and emotional responses.
Hossain and Muhammad propose a system that can recognize emotions from speech and video signals [85]. The system uses a type of artificial intelligence called deep learning to analyze the signals and extract features that are related to emotions. The system then combines the features from the speech and video signals to decide on the emotion being expressed. The proposed system has been tested using two large databases of emotional speech and video, and the results show that it is effective at recognizing emotions. Teo, Chia, and Lee discuss using deep-learning models to recognize emotions in music. The authors conducted experiments to improve the accuracy of these models by adjusting various parameters, such as the number of hidden layers, the number of neurons in each layer, and the regularization techniques used [86]. They found that tuning these parameters achieved a prediction accuracy of 61.7%, an improvement of more than 15% compared to previous methods. This research could help build better music emotion recommendation systems to make song recommendations based on user emotions. Tashu et al. propose a deep-learning architecture for multimodal emotion recognition from art. The proposed architecture uses feature-level and modality attention to classify emotions in art [87]. The model is trained on the WikiArt emotion dataset, and the experimental results show the efficiency of the proposed approach. Liu proposes a multimodal deep-learning approach to construct affective models from multiple physiological signals. The aim is to enhance the performance of affective models and reduce the cost of acquiring physiological signals for real-world applications [48].
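To ground the discussion of CNN-based recognition, the following sketch defines a small convolutional network that classifies 48x48 grayscale face crops into six emotion classes. The architecture, input size, and placeholder training data are assumptions and do not reproduce the cited systems [85,86,87].

```python
# Hedged sketch: a small CNN for facial-expression emotion classification.
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),          # assumed grayscale face crops
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(6, activation="softmax"),    # six universal emotions
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on placeholder data; a real system would use labeled face images.
x = np.random.rand(64, 48, 48, 1).astype("float32")
y = np.random.randint(0, 6, size=64)
model.fit(x, y, epochs=1, verbose=0)
```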
(3) Hybrid Models:
Hybrid models combine multiple modalities or utilize machine-learning and deep-learning techniques to enhance emotion recognition performance. These models aim to leverage the complementary information provided by different modalities to improve accuracy and robustness. In the context of installation art, hybrid models can be applied to analyze emotional responses using multiple data sources, such as facial expressions, physiological signals, and textual data from viewer feedback. By integrating information from different modalities, researchers can better understand viewers’ emotional experiences and capture a broader range of emotional states.
Verma proposes using a hybrid deep-learning model for emotion recognition using facial expressions [88]. The method uses two stages of a convolutional neural network (CNN) to detect primary (happy and sad) and secondary (surprised and angry) emotions in images containing human faces. The proposed model is trained on two datasets of facial expressions and achieved high accuracies. The model can be extended to classify primary and secondary emotions in real-time video data and images. Atanassov considers a hybrid multimodal model for improving human emotion recognition based on facial expression and body gesture recognition [89]. The study extends previous investigations on using pre-trained models of deep-learning neural networks (DNNs) for facial emotion recognition (FER) by adding the emotions extracted from body language. A second DNN model is developed and trained with specific datasets to extract emotions from upper-body gestures. Combining both models’ information about recognized emotions yields greater accuracy, and the result can be used in education, medicine, psychology, product advertisement, marketing, and human–machine interfaces. Yaddaden proposes a hybrid-based approach for emotion recognition through facial expressions, utilizing both geometric and appearance-based features to provide specific information about the six basic emotions [90]. The proposed approach uses a multi-class Support Vector Machine architecture for classification and utilizes the randomized Trees feature selection technique. Experimental results demonstrate the effectiveness of the proposed approach, yielding high accuracy rates with three benchmark facial expression datasets. Padhy proposes a model that uses machine-learning and deep-learning techniques for emotion recognition [91]. The model uses image processing and feature extraction methods for machine learning and a neural-network-based solution for deep learning. The neural network is used to classify universal emotions such as sadness, anger, happiness, disgust, fear, and surprise.
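The late-fusion principle behind these hybrid models can be sketched very simply: two unimodal models (for example, face and body gesture, as in Atanassov’s study [89]) each output class probabilities, and a weighted average yields the fused decision. The weights and class names below are illustrative assumptions.

```python
# Hedged sketch of decision-level (late) fusion of two unimodal emotion models.
import numpy as np

EMOTIONS = ["happy", "sad", "surprised", "angry"]   # assumed class set

def fuse(face_probs, gesture_probs, w_face=0.6, w_gesture=0.4):
    """Weighted average of per-modality probability vectors."""
    fused = w_face * np.asarray(face_probs) + w_gesture * np.asarray(gesture_probs)
    return EMOTIONS[int(np.argmax(fused))], fused

face = [0.50, 0.10, 0.30, 0.10]      # facial-expression model output (placeholder)
gesture = [0.20, 0.05, 0.60, 0.15]   # body-gesture model output (placeholder)
print(fuse(face, gesture))
```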
The use of artificial intelligence (AI) in emotion recognition systems has witnessed significant advancements, leveraging technologies such as machine learning, deep learning, and computer vision. These systems analyze and classify human emotions based on facial features and other patterns, utilizing emotion detection methods and models that encompass various modalities, including language, sound, images, videos, and physiological signals. The studies discussed in this review showcase the effectiveness of AI-assisted emotion recognition models in providing reliable and objective solutions for understanding and predicting human emotional states. These findings highlight the potential of AI as a sustainable development tool for emotion recognition, offering promising applications in diverse fields.

2.5. Discussion

The rapid progress in science and technology has led to various sensory technologies that stimulate multiple senses, encompassing vision, hearing, touch, smell, and taste [41]. While previous research has delved into sensory stimuli and their corresponding emotional responses, there has been a predominant focus on individual sensory modalities or specific stimuli.
Consequently, there is an escalating recognition of the imperative shift towards a more comprehensive approach in the design of multi-sensory experiences within the realm of human–computer interaction. This necessitates the development of a holistic understanding of how diverse sensory inputs collectively influence emotional responses in interactive installation art. Despite acknowledging the significance of multi-dimensional interactions within interactive installation art, a notable research gap persists in discerning their precise relationship with emotional responses. There is a critical need to unravel how distinct combinations and variations of these interactions elicit specific emotional responses, marking an imperative area for future investigation.
While prior studies have made significant strides in exploring participant engagement within interactive environments, a broader and more nuanced inquiry is warranted. This inquiry should explicitly examine the intricate interplay between participant engagement and the ensuing emotional responses. Further scrutiny is essential to comprehend how participant engagement influences emotional experiences, encompassing elements of immersion, interactivity, and the establishment of meaningful connections with the stimuli and experimental environment.
Multimodal research emerges as a valuable instrument for scrutinizing the impacts of interactive installation art under the influence of multi-sensory stimuli on audience engagement and creative expression. The amalgamation of multimodal data sources holds the potential to heighten audience engagement and amplify creative expression.
Collectively, the literature underscores the integration of emotion recognition models in interactive installation art to enhance the viewer’s experience and engagement with the artwork. Furthermore, it emphasizes the pivotal role of multimodal fusion in validating and refining emotional models, ultimately enhancing the precision of assessments about emotional states. In contrast, the conventional measures of emotional response towards interactive devices may benefit from enhancements in effectively gauging their efficacy. Consequently, multimodal fusion is indispensable in comprehending and dissecting emotional responses within this context.
The insights derived from these studies posit that emotion recognition technology harbors the potential to elevate user engagement, refine human–computer interaction, and provide invaluable insights into users’ emotional responses within interactive installation art. Additionally, research findings unveil the advantages of deploying emotion recognition technology in capturing users’ emotional states with greater accuracy and in real time, surpassing traditional methods such as self-reporting. This technological advancement further paves the way for personalized experiences, adaptive interfaces, and interactive systems adept at real-time responsive modulation based on user emotions. Notably, these models offer valuable insights into the emotional resonance of installation art, shedding light on how distinct design elements and stimuli influence viewers’ emotional responses.

3. Framework and Hypothesis

This research aims to investigate the emotional responses evoked during interactive installation art. It focuses on capturing explicit and implicit data between participants and the artwork, utilizing artificial intelligence algorithms for emotional measurement. We propose a theoretical framework to analyze the emotional response process in interactive installation art, as shown in Figure 1. The framework considers measuring the emotional responses experienced by participants across different stimulation modes offered by the installation art. This process involves a complex interplay between attention, behavior, cognition, and emotion [92]. While attention and behavior are considered explicit data, cognition and emotion are categorized as implicit data. In interactive interactions, emotional responses can only be fully understood by considering the combination of explicit and implicit data. Each dimension of attention, behavior, cognition, and emotion can be measured through physiological signals, including brain activity, heart rate variability, skin conductance, facial expressions, and eye tracking. Artificial intelligence algorithms can be employed to analyze and interpret these physiological signals, providing valuable insights into participants’ emotional experiences during their interactions with the installation.
Based on the analysis of the interaction process in the above figure, we propose a conceptual framework to better explore the factors that affect emotional responses, as shown in Figure 2. The independent variable is installation art and the dependent variable is the breadth, intensity, and diversity of emotional reactions exhibited by participants during their interactions with AI-integrated interactive installation art.
Several mediator variables are considered in this study. Firstly, the sensory stimulation encompasses various sensory inputs, including visual, auditory, and tactile elements, which play a pivotal role in mediating participants’ emotional responses. Secondly, the “interaction dimensions” variable examines the influence of one-dimensional (traditional) interactions versus multi-dimensional interactions involving physical touch, movement, and gestures. These dimensions significantly shape the emotional experiences of participants. Additionally, engagement within contextual factors such as virtual reality (VR), augmented reality (AR), or mixed reality (MR) serves as another mediating factor. The depth of immersion experienced by participants within these virtual realms directly affects their emotional engagement and responses. Artificial intelligence (AI) algorithms are considered as moderator variables. These algorithms play a crucial role in determining how the AI technology operates within the interactive installation art, thus potentially influencing the overall emotional impact on participants. This research seeks to provide a comprehensive understanding of the intricate interplay between these variables, shedding light on the emotional responses within the context of AI-integrated interactive installation art.
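For illustration, the conceptual framework’s mediator and moderator relationships could be examined with standard regression models; the sketch below runs a simple mediation check (sensory stimulation to engagement to emotional response) with an AI-condition interaction term on simulated data. All variable names and values are hypothetical and carry no empirical weight.

```python
# Hedged sketch: regression-based mediation and moderation check on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "stimulation": rng.normal(size=n),           # sensory stimulation intensity (assumed)
    "ai_condition": rng.integers(0, 2, size=n),  # 0 = static, 1 = AI-driven (assumed)
})
df["engagement"] = 0.5 * df["stimulation"] + rng.normal(scale=0.5, size=n)
df["emotion"] = (0.4 * df["engagement"]
                 + 0.3 * df["stimulation"] * df["ai_condition"]
                 + rng.normal(scale=0.5, size=n))

# Mediator model, then outcome model with a moderation (interaction) term
med = smf.ols("engagement ~ stimulation", data=df).fit()
out = smf.ols("emotion ~ engagement + stimulation * ai_condition", data=df).fit()
print(med.params["stimulation"], out.params["engagement"])
```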
From an ontological perspective, integrating interactivity in artistic creation brings forth novel dimensions and complexities. This research delves into the emotional responses of interactive installation art under artificial intelligence (AI). The objective is to examine the nature of emotional experiences triggered by multi-sensory stimuli and multi-dimensional interactions during meaningful engagement. This study investigates diverse interactive stimulation modes to capture participants’ emotional responses while leveraging emotion recognition techniques to optimize the rationality and efficacy of emotional stimulation settings. Consequently, the research hypotheses are as follows:
Hypothesis 1 (H1):
Emotional responses triggered by multi-sensory stimuli will exhibit a high degree of consistency across diverse participants, indicating the reliability of specific sensory inputs in evoking standardized emotional reactions within the context of interactive installation art.
Hypothesis 2 (H2):
Multi-dimensional interactions with AI-integrated installation art will elicit a more diverse range of emotional experiences compared to traditional one-dimensional interactions, highlighting the transformative impact of multi-faceted engagement on the depth and variety of emotional responses.
Hypothesis 3 (H3):
There is a significant positive correlation between the level of engagement experienced within virtual reality (VR) environments and the intensity of emotional responses elicited during interactions. Specifically, we predict that higher levels of engagement in VR environments will lead to more intense emotional reactions among participants.
Hypothesis 4 (H4):
The utilization of artificial intelligence algorithms will demonstrate a high level of efficacy in analyzing physiological signals, such as EEG and facial expressions, thereby enabling the quantification of emotional responses with a significant degree of accuracy within the context of interactive installation art.
To test the research hypotheses and achieve the outlined research objectives, we employ the research methodology of the ASSURE model combined with experimental design [93]. The ASSURE model is a widely recognized instructional design model that stands for analyze learners, state objectives, select media and materials, utilize media and materials, require learner participation, and evaluate and revise. In the research context, the ASSURE model provides a structured approach to ensure the effective implementation and evaluation of the experimental design.

4. Conclusions

This paper comprehensively explores emotional response and measurement within interactive installation art, primarily focusing on integrating artificial intelligence. While the review underscores the pivotal role played by sensory stimulation, multi-dimensional interactions, and engagement in eliciting profound emotional responses, it is imperative to further elaborate on the practical significance of these findings and their potential impact on art and design.
We introduce the process analysis of the emotional response to interactive installation art to address identified research gaps. This framework allows for a deeper examination of the variables influencing emotional responses, providing valuable insights into the underlying mechanisms driving emotional experiences in this context. By formulating hypotheses, we aim to establish predictive relationships between sensory stimulation, multi-dimensional interactions, engagement, and emotional responses. This step is crucial in bridging the research gaps we have identified.
To effectively test our hypotheses and accomplish our research objectives, we have selected the ASSURE model in conjunction with experimental design as our methodology. The ASSURE model’s structured approach to instructional design aligns seamlessly with our research context, ensuring our experimental study’s systematic and comprehensive implementation. This approach significantly facilitates the exploration of the emotional impact of sensory stimuli, multi-dimensional interactions, and engagement in interactive installation art. Nevertheless, it is essential to acknowledge inherent limitations:
Situational limitations: This research focuses on a specific interactive installation in the laboratory and may not cover all possible design, technical, and environmental variations. As such, the findings may be limited to the devices and settings investigated and may not fully capture real-world scenarios and natural behaviors.
Interpretation and subjectivity: While AI can assist in measuring emotional responses, interpreting those emotions and their artistic significance remains subjective. Emotional experiences are deeply personal, and different viewers may have varying interpretations and reactions to the same artwork. AI can provide quantitative data but may not capture the full richness and complexity of emotional experiences in art.
Technical limitations: The effectiveness of AI in measuring and enhancing emotional responses depends on the available technology. The accuracy and reliability of emotion recognition algorithms, the quality of data input (e.g., sensor accuracy), and the computational resources required can pose technical limitations. Researchers must be aware of these limitations and consider their impact on this study’s findings.
Despite these limitations, this study represents a significant stride towards comprehending emotional engagement within interactive installation art and offers valuable insights into the convergence of interactive art and artificial intelligence. By emphasizing the pivotal role emotions play in AI-enhanced interactive installation art, this research contributes to the well-being and mental health of individuals who engage with these installations. Emotionally resonant artworks can enhance participants’ overall quality of life and mental well-being, in harmony with sustainable development goals focused on societal welfare.
In summary, this paper establishes a foundational platform for future research on emotional response and measurement in interactive installation art under artificial intelligence, and it holds significant implications for the practical application of these findings in art and design. By integrating theoretical frameworks, hypotheses, and the ASSURE model, we aim to advance the understanding of the emotional experiences evoked by interactive artworks and to contribute to the broader field of human–computer interaction. This research could shape the creation of emotionally resonant interactive installations in ways that meaningfully influence the art and design landscape. The careful fusion of artistry and technology points to a promising future in which human emotion and artificial intelligence converge, opening new horizons in interactive art.

Author Contributions

Writing—original draft, X.C.; Supervision, Z.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the following two projects: Research and practice on smart party building platforms in higher vocational colleges under the background of digital transformation (2023L03), and Research and practice on building a smart tutoring system based on affective computing technology (KT2023475).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Stallabrass, J. Digital Commons: Art and Utopia in the Internet Age. Art J. 2010, 69, 40–55. [Google Scholar]
  2. Huhtamo, E. Trouble at the Interface, or the Identity Crisis of Interactive Art. Estet. J. Ital. Aesthet. 2004. Available online: https://www.mediaarthistory.org/refresh/Programmatic%20key%20texts/pdfs/Huhtamo.pdf (accessed on 12 October 2022).
  3. Irvin, S. Interactive Art: Action and Participation. In Aesthetics of Interaction in Digital Art; MIT Press: Cambridge, MA, USA, 2013; pp. 85–103. [Google Scholar]
  4. Giannetti, C. Aesthetics of Digital Art; University of Minnesota Press: Minneapolis, MN, USA, 2015. [Google Scholar]
  5. Patel, S.V.; Tchakerian, R.; Morais, R.L.; Zhang, J.; Cropper, S. The Emoting City Designing feeling and artificial empathy in mediated environments. In Proceedings of the ECAADE 2020: Anthropologic—Architecture and Fabrication in the Cognitive Age, Berlin, Germany, 16–17 September 2020; Volume 2, pp. 261–270. [Google Scholar]
  6. Cao, Y.; Han, Z.; Kong, R.; Zhang, C.; Xie, Q. Technical Composition and Creation of Interactive Installation Art Works under the Background of Artificial Intelligence. Math. Probl. Eng. 2021, 2021, 7227416. [Google Scholar] [CrossRef]
  7. Tidemann, A. [Self.]: An Interactive Art Installation that Embodies Artificial Intelligence and Creativity. In Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition, Glasgow, UK, 22–25 June 2015; pp. 181–184. [Google Scholar]
  8. Gao, Z.; Lin, L. The intelligent integration of interactive installation art based on artificial intelligence and wireless network communication. Wirel. Commun. Mob. Comput. 2021, 2021, 3123317. [Google Scholar] [CrossRef]
  9. Ronchi, G.; Benghi, C. Interactive light and sound installation using artificial intelligence. Int. J. Arts Technol. 2014, 7, 377–379. [Google Scholar] [CrossRef]
  10. Raptis, G.E.; Kavvetsos, G.; Katsini, C. Mumia: Multimodal interactions to better understand art contexts. Appl. Sci. 2021, 11, 2695. [Google Scholar] [CrossRef]
  11. Pelowski, M.; Leder, H.; Mitschke, V.; Specker, E.; Gerger, G.; Tinio, P.P.; Husslein-Arco, A. Capturing aesthetic experiences with installation art: An empirical assessment of emotion, evaluations, and mobile eye tracking in Olafur Eliasson’s “Baroque, Baroque!”. Front. Psychol. 2018, 9, 1255. [Google Scholar] [CrossRef]
  12. Manovich, L. Defining AI Arts: Three Proposals. Catalog. Saint-Petersburg: Hermitage Museum, June 2019. pp. 1–9. Available online: https://www.academia.edu/download/60633037/Manovich.Defining_AI_arts.201920190918-80396-1vdznon.pdf (accessed on 12 October 2022).
  13. Rajapakse, R.P.C.J.; Tokuyama, Y. Thoughtmix: Interactive watercolor generation and mixing based on EEG data. In Proceedings of the International Conference on Artificial Life and Robotics, Online, 21–24 January 2021; pp. 728–731. Available online: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85108839509&partnerID=40&md5=05392d3ad25a40e51753f7bb8fa37cde (accessed on 12 October 2022).
  14. Akten, M.; Fiebrink, R.; Grierson, M. Learning to see: You are what you see. In Proceedings of the ACM SIGGRAPH 2019 Art Gallery, SIGGRAPH 2019, Los Angeles, CA, USA, 28 June 2019; pp. 1–6. [Google Scholar] [CrossRef]
  15. Xu, S.; Wang, Z. DIFFUSION: Emotional Visualization Based on Biofeedback Control by EEG Feeling, listening, and touching real things through human brainwave activity. Artnodes 2021, 28. [Google Scholar] [CrossRef]
  16. Savaş, E.B.; Verwijmeren, T.; van Lier, R. Aesthetic experience and creativity in interactive art. Art Percept. 2021, 9, 167–198. [Google Scholar] [CrossRef]
  17. Duarte, E.F.; Baranauskas, M.C.C. An Experience with Deep Time Interactive Installations within a Museum Scenario; Institute of Computing, University of Campinas: Campinas, Brazil, 2020. [Google Scholar]
  18. Szubielska, M.; Imbir, K.; Szymańska, A. The influence of the physical context and knowledge of artworks on the aesthetic experience of interactive installations. Curr. Psychol. 2021, 40, 3702–3715. [Google Scholar] [CrossRef]
  19. Lim, Y.; Donaldson, J.; Jung, H.; Kunz, B.; Royer, D.; Ramalingam, S.; Thirumaran, S.; Stolterman, E. Emotional Experience and Interaction Design. In Affect and Emotion in Human-Computer Interaction; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  20. Capece, S.; Chivăran, C. The sensorial dimension of the contemporary museum between design and emerging technologies*. IOP Conf. Ser. Mater. Sci. Eng. 2020, 949, 012067. [Google Scholar] [CrossRef]
  21. Rajcic, N.; McCormack, J. Mirror ritual: Human-machine co-construction of emotion. In Proceedings of the TEI 2020—Proceedings of the 14th International Conference on Tangible, Embedded, and Embodied Interaction, Sydney, Australia, 9–12 February 2020; pp. 697–702. [Google Scholar] [CrossRef]
  22. Her, J.J. An analytical framework for facilitating interactivity between participants and interactive artwork: Case studies in MRT stations. Digit. Creat. 2014, 25, 113–125. [Google Scholar] [CrossRef]
  23. Random International. Rain Room. 2013. Available online: https://www.moma.org/calendar/exhibitions/1352 (accessed on 23 September 2022).
  24. Scherer, K.R. What are emotions? And how can they be measured? Soc. Sci. Inf. 2005, 44, 695–729. [Google Scholar] [CrossRef]
  25. Fragoso Castro, J.; Bernardino Bastos, P.; Alvelos, H. Emotional resonance at art interactive installations: Social reconnection among individuals through identity legacy elements uncover. In Proceedings of the 10th International Conference on Digital and Interactive Arts, Aveiro, Portugal, 13–15 October 2021; pp. 1–6. [Google Scholar]
  26. Reason, D.T. Deeper than Reason; Clarendon Press: Oxford, UK, 2008. [Google Scholar]
  27. Guyer, P. Autonomy and Integrity in Kant’s Aesthetics. Monist 1983, 66, 167–188. [Google Scholar] [CrossRef]
  28. Carrier, D. Perspective as a convention: On the views of Nelson Goodman and Ernst Gombrich. Leonardo 1980, 13, 283–287. [Google Scholar] [CrossRef]
  29. Schindler, I.; Hosoya, G.; Menninghaus, W.; Beermann, U.; Wagner, V.; Eid, M.; Scherer, K.R. Measuring aesthetic emotions: A review of the literature and a new assessment tool. PLoS ONE 2017, 12, e0178899. [Google Scholar] [CrossRef]
  30. Pittera, D.; Gatti, E.; Obrist, M. I’m sensing in the rain: Spatial incongruity in visual-tactile mid-air stimulation can elicit ownership in VR users. In Proceedings of the Conference on Human Factors in Computing Systems—Proceedings 2019, Glasgow, UK, 4–9 May 2019; pp. 1–15. [Google Scholar] [CrossRef]
  31. Ablart, D.; Vi, C.T.; Gatti, E.; Obrist, M. The how and why behind a multi-sensory art display. Interactions 2017, 24, 38–43. [Google Scholar] [CrossRef]
  32. Canbeyli, R. Sensory stimulation via the visual, auditory, olfactory, and gustatory systems can modulate mood and depression. Eur. J. Neurosci. 2022, 55, 244–263. [Google Scholar] [CrossRef]
  33. Gilroy, S.P.; Leader, G.; McCleery, J.P. A pilot community-based randomized comparison of speech generating devices and the picture exchange communication system for children diagnosed with autism spectrum disorder. Autism Res. 2018, 11, 1701–1711. [Google Scholar] [CrossRef]
  34. Gilbert, J.K.; Stocklmayer, S. The design of interactive exhibits to promote the making of meaning. Mus. Manag. Curatorship 2001, 19, 41–50. [Google Scholar] [CrossRef]
  35. Jiang, M.; Bhömer, M.T.; Liang, H.N. Exploring the design of interactive smart textiles for emotion regulation. In HCI International 2020–Late Breaking Papers: Digital Human Modeling and Ergonomics, Mobility and Intelligent Environments: 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, 19–24 July 2020; Proceedings 22; Springer International Publishing: Cham, Switzerland, 2020; pp. 298–315. [Google Scholar]
  36. Sadka, O.; Antle, A. Interactive technologies for emotion-regulation training: Opportunities and challenges. In Proceedings of the CHI EA’20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–12. [Google Scholar]
  37. Waller, C.L.; Oprea, T.I.; Giolitti, A.; Marshall, G.R. Three-dimensional QSAR of human immunodeficiency virus (I) protease inhibitors. 1. A CoMFA study employing experimentally-determined alignment rules. J. Med. Chem. 1993, 36, 4152–4160. [Google Scholar] [CrossRef]
  38. Cappelen, B.; Andersson, A.P. Cultural Artefacts with Virtual Capabilities Enhance Self-Expression Possibilities for Children with Special Needs. In Transforming our World Through Design, Diversity, and Education; IOS Press: Amsterdam, The Netherlands, 2018; pp. 634–642. [Google Scholar]
  39. Schreuder, E.; van Erp, J.; Toet, A.; Kallen, V.L. Emotional Responses to Multi-sensory Environmental Stimuli: A Conceptual Framework and Literature Review. SAGE Open 2016, 6, 2158244016630591. [Google Scholar] [CrossRef]
  40. De Alencar, T.S.; Rodrigues, K.R.; Barbosa, M.; Bianchi, R.G.; de Almeida Neris, V.P. Emotional response evaluation of users in ubiquitous environments: An observational case study. In Proceedings of the 13th International Conference on Advances in Computer Entertainment Technology, Osaka, Japan, 9–12 November 2016; pp. 1–12. [Google Scholar]
  41. Velasco, C.; Obrist, M. Multi-sensory Experiences: A Primer. Front. Comput. Sci. 2021, 3, 614524. [Google Scholar] [CrossRef]
  42. Obrist, M.; Van Brakel, M.; Duerinck, F.; Boyle, G. Multi-sensory experiences and spaces. In Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces, ISS 2017, Brighton, UK, 17–20 October 2017; pp. 469–472. [Google Scholar] [CrossRef]
  43. Vi, C.T.; Ablart, D.; Gatti, E.; Velasco, C.; Obrist, M. Not just seeing, but also feeling art: Mid-air haptic experiences integrated into a multi-sensory art exhibition. Int. J. Hum.-Comput. Stud. 2017, 108, 1–14. [Google Scholar] [CrossRef]
  44. Brianza, G.; Tajadura-Jiménez, A.; Maggioni, E.; Pittera, D.; Bianchi-Berthouze, N.; Obrist, M. As Light as Your Scent: Effects of Smell and Sound on Body Image Perception. In Proceedings of the IFIP Conference on Human-Computer Interaction, Paphos, Cyprus, 2–6 September 2019; Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics. Springer: Cham, Switzerland, 2019; Volume 11749, pp. 179–202. [Google Scholar] [CrossRef]
  45. Brooks, J.; Lopes, P.; Amores, J.; Maggioni, E.; Matsukura, H.; Obrist, M.; Lalintha Peiris, R.; Ranasinghe, N. Smell, Taste, and Temperature Interfaces. In Proceedings of the Conference on Human Factors in Computing Systems—Proceedings, Yokohama, Japan, 8–13 May 2021. [Google Scholar] [CrossRef]
  46. Zald, D.H. The human amygdala and the emotional evaluation of sensory stimuli. Brain Res. Rev. 2003, 41, 88–123. [Google Scholar] [CrossRef]
  47. Anadol, R. Space in the Mind of a Machine: Immersive Narratives. Archit. Des. 2022, 92, 28–37. [Google Scholar] [CrossRef]
  48. Liu, J. Science popularization-oriented art design of interactive installation based on the protection of endangered marine life-the blue whales. J. Phys. Conf. Ser. 2021, 1827, 012116–012118. [Google Scholar] [CrossRef]
  49. De Bérigny, C.; Gough, P.; Faleh, M.; Woolsey, E. Tangible User Interface Design for Climate Change Education in Interactive Installation Art. Leonardo 2014, 47, 451–456. [Google Scholar] [CrossRef]
  50. Fortin, C.; Hennessy, K. Designing Interfaces to Experience Interactive Installations Together. In Proceedings of the International Symposium on Electronic Art, Vancouver, BC, Canada, 14–19 August 2015. [Google Scholar]
  51. Gu, S.; Lu, Y.; Kong, Y.; Huang, J.; Xu, W. Diversifying Emotional Experience by Layered Interfaces in Affective Interactive Installations. In Proceedings of the 2021 DigitalFUTURES: The 3rd International Conference on Computational Design and Robotic Fabrication (CDRF 2021), Shanghai, China, 3–4 July 2021; Springer: Singapore, 2022; Volume 3, pp. 221–230. [Google Scholar]
  52. Saidi, H.; Serrano, M.; Irani, P.; Hurter, C.; Dubois, E. On-body tangible interaction: Using the body to support tangible manipulations for immersive environments. In Proceedings of the Human-Computer Interaction–INTERACT 2019: 17th IFIP TC 13 International Conference, Paphos, Cyprus, 2–6 September 2019; Springer International Publishing: Cham, Switzerland, 2019; pp. 471–492. [Google Scholar]
  53. Edmonds, E. Art, interaction, and engagement. In Proceedings of the International Conference on Information Visualisation, London, UK, 13–15 July 2011; pp. 451–456. [Google Scholar] [CrossRef]
  54. Röggla, T.; Wang, C.; Perez Romero, L.; Jansen, J.; Cesar, P. Tangible air: An interactive installation for visualising audience engagement. In Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition, Singapore, 27–30 June 2017; pp. 263–265. [Google Scholar]
  55. Vogt, T.; Andr, E.; Wagner, J.; Gilroy, S.; Charles, F.; Cavazza, M. Real-time vocal emotion recognition in art installations and interactive storytelling: Experiences and lessons learned from CALLAS and IRIS. In Proceedings of the 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, Amsterdam, The Netherlands, 10–12 September 2009. [Google Scholar]
  56. Turk, M. Multimodal interaction: A review. Pattern Recognit. Lett. 2014, 36, 189–195. [Google Scholar] [CrossRef]
  57. Ismail, A.W.; Sunar, M.S. Multimodal fusion: Gesture and speech input in augmented reality environment. In Computational Intelligence in Information Systems: Proceedings of the Fourth INNS Symposia Series on Computational Intelligence in Information Systems (INNS-CIIS 2014), Brunei, Brunei, 7–9 November 2014; Springer International Publishing: Cham, Switzerland, 2015; pp. 245–254. [Google Scholar]
  58. Zhang, W.; Ren, D.; Legrady, G. Cangjie’s Poetry: An Interactive Art Experience of a Semantic Human-Machine Reality. Proc. ACM Comput. Graph. Interact. Tech. 2021, 4, 19. [Google Scholar] [CrossRef]
  59. Pan, J.; He, Z.; Li, Z.; Liang, Y.; Qiu, L. A review of multimodal emotion recognition. CAAI Trans. Intell. Syst. 2020, 7. [Google Scholar] [CrossRef]
  60. Picard, R.W. Affective Computing; MIT Press: Cambridge, MA, USA, 2000. [Google Scholar]
  61. Avetisyan, H.; Bruna, O.; Holub, J. Overview of existing algorithms for emotion classification. Uncertainties in evaluations of accuracies. J. Phys. Conf. Ser. 2016, 772, 012039. [Google Scholar] [CrossRef]
  62. Bhardwaj, A.; Gupta, A.; Jain, P.; Rani, A.; Yadav, J. Classification of human emotions from EEG signals using SVM and LDA Classifiers. In Proceedings of the 2015 2nd International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 19–20 February 2015; pp. 180–185. [Google Scholar]
  63. Mano, L.Y.; Giancristofaro, G.T.; Faiçal, B.S.; Libralon, G.L.; Pessin, G.; Gomes, P.H.; Ueyama, J. Exploiting the use of ensemble classifiers to enhance the precision of user’s emotion classification. In Proceedings of the 16th International Conference on Engineering Applications of Neural Networks (INNS), Rhodes Island, Greece, 25–28 September 2015; pp. 1–7. [Google Scholar]
  64. Tu, G.; Wen, J.; Liu, H.; Chen, S.; Zheng, L.; Jiang, D. Exploration meets exploitation: Multitask learning for emotion recognition based on discrete and dimensional models. Knowl.-Based Syst. 2022, 235, 107598. [Google Scholar] [CrossRef]
  65. Seo, Y.-S.; Huh, J.-H. Automatic Emotion-Based Music Classification for Supporting Intelligent IoT Applications. Sensors 2019, 21, 164. [Google Scholar] [CrossRef]
  66. Cooney, M. Robot Art, in the Eye of the Beholder?: Personalized Metaphors Facilitate Communication of Emotions and Creativity. Front. Robot. AI 2021, 8, 668986. [Google Scholar] [CrossRef]
  67. Gilroy, S.W.; Cavazza, M.; Chaignon, R.; Mäkelä, S.M.; Niranen, M.; André, E.; Vogt, T.; Urbain, J.; Seichter, H.; Billinghurst, M.; et al. An effective model of user experience for interactive art. In Proceedings of the 2008 International Conference on Advances in Computer Entertainment Technology, ACE 2008, Yokohama, Japan, 3–5 December 2008; pp. 107–110. [Google Scholar] [CrossRef]
  68. Işik, Ü.; Güven, A. Classification of emotion from physiological signals via artificial intelligence techniques. In Proceedings of the 2019 Medical Technologies Congress (TIPTEKNO), Izmir, Turkey, 3–5 October 2019; pp. 1–4. [Google Scholar]
  69. Nasoz, F.; Alvarez, K.; Lisetti, C.L.; Finkelstein, N. Emotion recognition from physiological signals using wireless sensors for presence technologies. Cogn. Technol. Work. 2004, 6, 4–14. [Google Scholar] [CrossRef]
  70. Suhaimi, N.S.; Mountstephens, J.; Teo, J. A Dataset for Emotion Recognition Using Virtual Reality and EEG (DER-VREEG): Emotional State Classification Using Low-Cost Wearable VR-EEG Headsets. Big Data Cogn. Comput. 2022, 6, 16. [Google Scholar] [CrossRef]
  71. Yu, M.; Xiao, S.; Hua, M.; Wang, H.; Chen, X.; Tian, F.; Li, Y. EEG-based emotion recognition in an immersive virtual reality environment: From local activity to brain network features. Biomed. Signal Process. Control 2022, 72, 103349. [Google Scholar] [CrossRef]
  72. Suhaimi, N.S.; Mountstephens, J.; Teo, J. Explorations of A Real-Time VR Emotion Prediction System Using Wearable Brain-Computer Interfacing. J. Phys. Conf. Ser. 2021, 2129, 012064. [Google Scholar] [CrossRef]
  73. Marín-Morales, J.; Higuera-Trujillo, J.L.; Greco, A.; Guixeres, J.; Llinares, C.; Scilingo, E.P.; Valenza, G. Affective computing in virtual reality: Emotion recognition from brain and heartbeat dynamics using wearable sensors. Sci. Rep. 2018, 8, 13657. [Google Scholar] [CrossRef]
  74. Zheng, W.L.; Zhu, J.Y.; Lu, B.L. Identifying stable patterns over time for emotion recognition from EEG. IEEE Trans. Affect. Comput. 2017, 10, 417–429. [Google Scholar] [CrossRef]
  75. Wang, X.; Ren, Y.; Luo, Z.; He, W.; Hong, J.; Huang, Y. Deep learning-based EEG emotion recognition: Current trends and future perspectives. Front. Psychol. 2023, 14, 1126994. [Google Scholar] [CrossRef]
  76. Marín-Morales, J.; Llinares, C.; Guixeres, J.; Alcañiz, M. Emotion recognition in immersive virtual reality: From statistics to affective computing. Sensors 2020, 20, 5163. [Google Scholar] [CrossRef]
  77. Ji, Y.; Dong, S.Y. Deep learning-based self-induced emotion recognition using EEG. Front. Neurosci. 2022, 16, 985709. [Google Scholar] [CrossRef]
  78. Cai, J.; Xiao, R.; Cui, W.; Zhang, S.; Liu, G. Application of electroencephalography-based machine learning in emotion recognition: A review. Front. Syst. Neurosci. 2021, 15, 729707. [Google Scholar] [CrossRef]
  79. Khan, A.R. Facial emotion recognition using conventional machine learning and deep learning methods: Current achievements, analysis and remaining challenges. Information 2022, 13, 268. [Google Scholar] [CrossRef]
  80. Lozano-Hemmer, R. 2006. Pulse Room [Installation]. Available online: https://www.lozano-hemmer.com/pulse_room.php (accessed on 12 October 2022).
  81. Cai, Y.; Li, X.; Li, J. Emotion Recognition Using Different Sensors, Emotion Models, Methods and Datasets: A Comprehensive Review. Sensors 2023, 23, 2455. [Google Scholar] [CrossRef]
  82. Domingues, D.; Miosso, C.J.; Rodrigues, S.F.; Silva Rocha Aguiar, C.; Lucena, T.F.; Miranda, M.; Rocha, A.F.; Raskar, R. Embodiments, visualizations, and immersion with enactive affective systems. Eng. Real. Virtual Real. 2014, 9012, 90120J. [Google Scholar] [CrossRef]
  83. Ratliff, M.S.; Patterson, E. Emotion recognition using facial expressions with active appearance models. In Proceedings of the HCI’08: 3rd IASTED International Conference on Human Computer Interaction, Innsbruck, Austria, 17–19 March 2008. [Google Scholar]
  84. Teng, Z.; Ren, F.; Kuroiwa, S. The emotion recognition through classification with the support vector machines. WSEAS Trans. Comput. 2006, 5, 2008–2013. [Google Scholar]
  85. Hossain, M.S.; Muhammad, G. Emotion recognition using deep learning approach from audio–visual emotional big data. Inf. Fusion 2019, 49, 69–78. [Google Scholar] [CrossRef]
  86. Teo, J.; Chia, J.T.; Lee, J.Y. Deep learning for emotion recognition in affective virtual reality and music applications. Int. J. Recent Technol. Eng. 2019, 8, 219–224. [Google Scholar] [CrossRef]
  87. Tashu, T.M.; Hajiyeva, S.; Horvath, T. Multimodal emotion recognition from art using sequential co-attention. J. Imaging 2021, 7, 157. [Google Scholar] [CrossRef]
  88. Verma, G.; Verma, H. Hybrid-deep learning model for emotion recognition using facial expressions. Rev. Socionetwork Strateg. 2020, 14, 171–180. [Google Scholar] [CrossRef]
  89. Atanassov, A.V.; Pilev, D.; Tomova, F.; Kuzmanova, V.D. Hybrid System for Emotion Recognition Based on Facial Expressions and Body Gesture Recognition. In Proceedings of the International Conference on Applied Informatics, Jakarta, Indonesia, 17–19 December 2021. [Google Scholar] [CrossRef]
  90. Yaddaden, Y.; Adda, M.; Bouzouane, A.; Gouin-Vallerand, C. Hybrid-Based Facial Expression Recognition Approach for Human-Computer Interaction. In Proceedings of the 2018 2nd International Conference on Computer Science and Artificial Intelligence, Shenzhen, China, 8–10 December 2018; pp. 1–6. [Google Scholar]
  91. Padhy, N.; Singh, S.K.; Kumari, A.; Kumar, A. A Literature Review on Image and Emotion Recognition: Proposed Model. In Smart Intelligent Computing and Applications: Proceedings of the Third International Conference on Smart Computing and Informatics, 2019, Shimla, India, 15–16 June 2019; Springer: Singapore, 2019; Volume 2, pp. 341–354. [Google Scholar]
  92. Ma, X. Data-Driven Approach to Human-Engaged Computing: Definition of Engagement. International Series on Information Systems and Management in Creative eMedia (CreMedia) 2018, 2017/2, 43–47. Available online: https://core.ac.uk/download/pdf/228470682.pdf (accessed on 12 October 2022).
  93. Richey, R.C.; Klein, J.D. Design and Development Research; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 2007. [Google Scholar]
Figure 1. Process analysis of emotional response to interactive installation art.
Figure 2. The conceptual framework of measuring emotional responses in the interactive exhibit.
Table 1. Comparative Analysis of Interactive Installation Art in the United States, United Kingdom, Germany, and China.
Country | Famous Artist | Theme and Concept | Expressed Emotion | Significant Exhibition and Installation
United States | Bill Viola | Human emotions, spirituality, existentialism | Contemplation, introspection | “Bill Viola: The Moving Portrait” at the National Portrait Gallery, 2017–2018; “Bill Viola: A Retrospective” at the Guggenheim Museum, 2017
United States | Janet Cardiff and George Bures Miller | Perception, memory, spatial experience | Eerie, introspective | “Janet Cardiff and George Bures Miller: The Murder of Crows” at the Art Gallery of Ontario, 2019–2020; “Janet Cardiff and George Bures Miller: Lost in the Memory Palace” at the Museum of Contemporary Art, Chicago, 2013
United States | Rafael Lozano-Hemmer | Perception, deception, and surveillance | Curiosity and intrigue | “Rafael Lozano-Hemmer: Common Measures” at the Crystal Bridges Museum of American Art, 2022
United Kingdom | Anish Kapoor | Identity, space, spirituality | Transcendent, awe | “Anish Kapoor: Symphony for a Beloved Sun” at the Hayward Gallery, 2003
United Kingdom | Rachel Whiteread | Memory, absence, domestic spaces | Haunting, melancholic | “Rachel Whiteread” at Tate Britain, 2017–2018; “Rachel Whiteread: Internal Objects” at the Gagosian Gallery, 2019
United Kingdom | Antony Gormley | Human existence, body, space | Contemplative, introspective | “Antony Gormley” at the Royal Academy of Arts, 2019; “Antony Gormley: Field for the British Isles” at the Tate Modern, 2019–2020
United Kingdom | Tracey Emin | Personal narratives, sexuality, vulnerability | Raw, intimate | “Tracey Emin: Love Is What You Want” at the Hayward Gallery, 2011
Germany | Olafur Eliasson | Perception, nature, environmental issues | Sensational, immersive | “Olafur Eliasson: In Real Life” at Tate Modern, 2019
Germany | Rebecca Horn | Transformation, body as a metaphor, time | Poetic, evocative | “Rebecca Horn: Body Fantasies” at the Martin-Gropius-Bau, 2019; “Rebecca Horn: Performances” at Tate Modern, 2019
Germany | Wolfgang Laib | Transience, spirituality, simplicity | Serene, meditative | “Wolfgang Laib: A Retrospective” at MoMA, 2013–2014
Germany | Anselm Kiefer | History, memory, identity | Evocative, contemplative | “Anselm Kiefer: Für Andrea Emo” at the Gagosian Gallery, 2019; “Anselm Kiefer: For Louis-Ferdinand Céline” at the Royal Academy of Arts, 2020–2021
China | Ai Weiwei | Politics, social issues, cultural heritage | Provocative, defiant | “Ai Weiwei: According to What?” at the Hirshhorn Museum and Sculpture Garden, 2012
China | Cai Guo-Qiang | Nature, Chinese culture, cosmology | Explosive, sublime | “Cai Guo-Qiang: Odyssey and Homecoming” at the Palace Museum, Beijing, 2019; “Cai Guo-Qiang: Falling Back to Earth” at the Queensland Art Gallery, 2013–2014
China | Xu Bing | Language, identity, globalization | Thought-provoking, questioning | “Xu Bing: Tobacco Project” at the Virginia Museum of Fine Arts, 2011
China | Yin Xiuzhen | Urbanization, memory, personal narratives | Reflective, nostalgic | “Yin Xiuzhen: Sky Patch” at the Ullens Center for Contemporary Art, 2014
Table 2. Schemes and Effects of Multimodal Recognition with Different Mixed Modes.
Modal Mixture Mode | Public Dataset | Author and Literature | Modality and Feature Extraction | Classification Algorithm | Accuracy/% (Category/#)
A mix of behavioral modes | eNTERFACE’05 | Nguyen et al. | Voice + face: three-dimensional series convolutional neural network | DBN | 90.85 (Avg./6)
A mix of behavioral modes | eNTERFACE’05 | Dobrisek et al. | Voice: openSMILE; face: construction and matching of subspaces | Support Vector Machines | 77.5 (Avg./6)
A mix of behavioral modes | eNTERFACE’05 | Zhang et al. | Voice: CNN; face: 3D-CNN | Support Vector Machines | 85.97 (Avg./6)
Hybrid of multiple neurophysiological modalities | DEAP | Koelstra et al. | EEG + PPS: Welch’s method | Naive Bayes | 58.6 (Valence/2)
Hybrid of multiple neurophysiological modalities | DEAP | Tang et al. | EEG: frequency-domain differential entropy; PPS: time-domain statistical features | Support Vector Machines | 83.8 (Valence/2); 83.2 (Arousal/2)
Hybrid of multiple neurophysiological modalities | DEAP | Yin et al. | EEG + PPS: stacked autoencoders | Ensemble SAE | 83 (Valence/2); 84.1 (Arousal/2)
Behavioral performance mixed with neurophysiological modality | MAHNOB-HCI | Soleymani et al. | EEG: frequency-domain power spectral density; eye movements: pupillary power spectrum, blinking, and gaze statistical features | Support Vector Machines | 76.4 (Valence/2); 68.5 (Arousal/2)
Behavioral performance mixed with neurophysiological modality | MAHNOB-HCI | Huang et al. | EEG: wavelet transform to extract the power spectrum; face: CNN deep feature extraction | SVM (EEG); CNN (expression) | 75.2 (Valence/2); 74.1 (Arousal/2)
Behavioral performance mixed with neurophysiological modality | MAHNOB-HCI | Koelstra et al. | EEG: support vector machine recursive feature elimination; face: action unit mapping | Naive Bayes | 73 (Valence/2); 68.5 (Arousal/2)
#: number of classification categories; Avg.: average accuracy; Valence/Arousal: the valence and arousal dimensions of the valence–arousal emotion model; PPS: peripheral physiological signals; DBN: deep belief network; BLSTM: bidirectional long short-term memory; Ensemble-SAE: ensemble stacked autoencoder.
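To illustrate the “behavioral performance mixed with neurophysiological modality” schemes summarized in Table 2, the sketch below shows a simple decision-level (late) fusion of an EEG classifier and a facial-expression classifier. The feature arrays, classifier choices, and fusion weight are assumptions for demonstration, not the configurations used in the cited studies.

```python
# Minimal decision-level (late) fusion sketch: one classifier per modality,
# with class probabilities combined by a weighted average.
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def train_late_fusion(X_eeg, X_face, y):
    """Fit separate classifiers on EEG features and facial-expression features."""
    eeg_clf = SVC(kernel="rbf", probability=True).fit(X_eeg, y)
    face_clf = LogisticRegression(max_iter=1000).fit(X_face, y)
    return eeg_clf, face_clf

def predict_late_fusion(eeg_clf, face_clf, X_eeg, X_face, w_eeg=0.6):
    """Weighted average of per-modality class probabilities (w_eeg is an assumed weight)."""
    p = w_eeg * eeg_clf.predict_proba(X_eeg) + (1 - w_eeg) * face_clf.predict_proba(X_face)
    return eeg_clf.classes_[p.argmax(axis=1)]  # fused emotion label per trial
```

Late fusion keeps each modality’s feature pipeline independent, which is convenient when EEG and facial data are sampled at different rates; feature-level (early) fusion is the main alternative.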
Table 3. Application of Emotion Recognition Technology in Interactive Installation Art.
No. | Paper | Research Questions | Research Methodology | Research Dimension | Research Findings
1 | Suhaimi et al. (2022) [70] | Can a low-cost wearable VR-EEG headset be used for emotion recognition? How accurately can the system predict emotional states in real time? | Machine-learning algorithm | Emotion recognition using VR-EEG signals | A system was developed that predicts a person’s emotional state in real time with high accuracy (83.6% on average) and can adjust a virtual reality experience, changing the colors and textures of the environment, according to the predicted emotional state.
2 | Yu et al. (2022) [71] | What are the most effective EEG features for emotion recognition in an immersive VR environment? How do different machine learning classifiers perform in classifying emotional states based on these features? | Machine-learning algorithm | Emotion recognition using VR-EEG signals | Both local brain activity and network features can effectively capture emotional information in EEG signals, but network features perform slightly better in classifying emotions; the proposed approach showed comparable or better results than existing studies on EEG-based emotion recognition in virtual reality.
3 | Suhaimi et al. (2021) [72] | Can a real-time VR emotion prediction system using wearable BCI technology be developed? How accurately can the system predict the emotional state of the user based on their EEG signals? | Machine-learning algorithm | Emotion recognition using VR-BCI signals | A system was created that accurately predicts a user’s emotional state in real time during virtual reality use, with an average accuracy of 83.6%; the system can adjust the virtual environment’s colors and textures to match the user’s emotional state.
4 | Marín-Morales et al. (2018) [73] | How can wearable sensors be used to recognize emotions from brain and heartbeat dynamics in virtual reality environments? | Machine learning approach | Brain and heartbeat dynamics | The model’s accuracy was 75.00% along the arousal dimension and 71.21% along the valence dimension.
5 | Zheng, Zhu, and Lu (2017) [74] | How can stable patterns be identified over time for emotion recognition from EEG signals? | Empirical study | EEG signals, emotion recognition | Stable patterns can be identified over time for emotion recognition from EEG signals.
6 | Wang et al. (2023) [75] | How can deep learning techniques be applied to EEG emotion recognition, and what are the challenges and opportunities in this field? | Literature survey | EEG emotion recognition | A review of EEG emotion recognition benchmark datasets and an analysis of deep learning techniques.
7 | Marín-Morales et al. (2020) [76] | How can emotions be recognized from brain and heartbeat dynamics using wearable sensors in immersive virtual reality? | Systematic review of emotion recognition research using physiological and behavioral measures | Wearable sensors in immersive virtual reality | The use of wearable sensors in immersive virtual reality is a promising approach for emotion recognition, and machine learning techniques can classify emotions with high accuracy.
8 | Ji and Dong (2022) [77] | How can self-induced emotions be classified from EEG signals, particularly those recorded while recalling specific memories or imagining emotional situations? | Deep learning technology | Classification of emotions from EEG signals | Selecting key channels based on signal statistics can reduce computational complexity by 89% without decreasing classification accuracy.
9 | Cai et al. (2021) [78] | How can machine learning be used to extract feature vectors related to emotional states from EEG signals and to construct a classifier that separates emotions into discrete states? | Electroencephalography-based machine learning | Electroencephalography-based machine learning | Using machine learning to extract emotion-related feature vectors from EEG signals and constructing a classifier that separates emotions into discrete states has broad development prospects.
10 | Khan (2022) [79] | How can emotions be recognized from facial expressions using machine learning or deep learning techniques? | Review of the literature | Facial emotion recognition using machine learning or deep learning techniques | Deep learning techniques have achieved better performance than conventional machine learning techniques in facial emotion recognition.
11 | Lozano-Hemmer (2016) [80] | How can emotion recognition technology be used to create a collective emotional portrait of the audience? | Computer vision and custom software | Social engagement | Reading Emotions used computer vision to analyze the emotional responses of visitors and create a collective emotional portrait of the audience.
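The VR-EEG systems in Table 3 that adapt colors and textures to a predicted emotional state suggest a simple control loop for affect-adaptive installations. The sketch below is purely illustrative: read_eeg_window, classifier, and renderer are hypothetical stand-ins for an acquisition routine, a trained model, and the installation’s display layer.

```python
# Illustrative control loop for an affect-adaptive installation: the classifier's
# predicted emotional state selects the installation's color palette.
import time

PALETTES = {
    "calm":    (70, 130, 180),   # muted blue
    "excited": (255, 140, 0),    # warm orange
    "tense":   (128, 0, 64),     # dark magenta
}

def run_adaptive_loop(read_eeg_window, classifier, renderer, period_s=2.0):
    """Every `period_s` seconds, classify the latest EEG window and update the visuals."""
    while True:
        features = read_eeg_window()               # e.g., band-power vector for the last 2 s
        state = classifier.predict([features])[0]  # one of "calm", "excited", "tense"
        renderer.set_color(PALETTES.get(state, (200, 200, 200)))  # neutral gray as fallback
        time.sleep(period_s)
```

Polling at a fixed period keeps visual changes gradual; an event-driven variant could instead update the environment only when the predicted state changes.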
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
