Article

Sensing the Inside Out: An Embodied Perspective on Digital Animation Through Motion Capture and Wearables

by Katerina El-Raheb *,†, Lori Kougioumtzian, Vilelmini Kalampratsidou, Anastasios Theodoropoulos, Panagiotis Kyriakoulakos and Spyros Vosinakis
Department of Performing and Digital Arts, University of the Peloponnese 1, 211 00 Nafplio, Greece
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2025, 25(7), 2314; https://doi.org/10.3390/s25072314
Submission received: 14 February 2025 / Revised: 21 March 2025 / Accepted: 30 March 2025 / Published: 5 April 2025
(This article belongs to the Special Issue Sensing Technology and Wearables for Physical Activity)

Abstract

Over the last few decades, digital technology has played an important role in innovating the pipeline, techniques, and approaches for creating animation. Sensors for motion capture not only enabled the incorporation of physical human movement in all its precision and expressivity but also created a field of collaboration between the digital and performing arts. Moreover, they have challenged the boundaries of cinematography, animation, and live action. In addition, wearable technology can capture biosignals such as heart rate and galvanic skin response that act as indicators of the emotional state of the performer. Such metrics can be used as metaphors to visualise (or sonify) the internal reactions and bodily sensations of the designed animated character. In this work, we propose a framework for incorporating the role of the performer in digital character animation as a real-time designer of the character’s affect, expression, and personality. Within this embodied perspective, sensors that capture the performer’s movement and biosignals are viewed as the means to build the nonverbal personality traits, cues, and signals of the animated character and their narrative. To do so, following a review of the state of the art and relevant literature, we provide a detailed description of what constitutes nonverbal personality traits and expression in animation, social psychology, and the performing arts, and we propose a workflow of methodological and technological tools towards an embodied perspective for digital animation.

1. Introduction

Over the past few decades, digital animation has evolved significantly, expanding into previously uncharted territory with the integration of motion capture and biosignal-driven technologies. While traditional animation methods rely on predefined emotion classification frameworks, recent developments in artificial intelligence and sensor-based technologies offer new possibilities for creating dynamic, real-time character expressions based on movement and physiological responses. However, a significant gap remains in how embodied interaction and sensor-based data can be effectively operationalised to drive animation workflows, owing to the lack of standardised methodologies and integration frameworks.
Embodiment in computing and human–computer interaction refers mainly to interaction with machines using the whole body and communicating through nonverbal expressions and gestures. In eXtended Reality and games research, the Sense of Embodiment [1] refers not only to controlling a digital body directly with one’s own body (hands, locomotion, etc.) but also to creating the illusion that this body feels like one’s own. Studies have shown that the way this digital body (avatar) looks in such experiences can even alter the movement behavior of the user (the Proteus effect) [2,3]. Embodiment in philosophy has a long history, ever since Merleau-Ponty [4] questioned the Cartesian dichotomy between the mind and the body. Since then, embodiment has been studied and developed in many fields ranging from philosophy [5,6] to linguistics [7], psychology [8], cognitive science [9], and human–computer interaction [10,11].
Embodiment concerns the phenomenon/process of not just having a body but of someone (or an agent) being aware of its own body, its shape, appearance, morphology, articulation, function, and ability to sense the environment through it, perceive external stimuli, and translate them into meaningful information for action [12,13]. Through this lens, the body is not a machine or hardware operated by the mind; rather, the mind is embodied and the body is thinking through its kinesthetic ability and simultaneous processing of multimodal stimuli received through its sensorial system. Additionally, embodied perception extends to the way we sense and understand internal and external sensations through our bodies, e.g., temperature, sweating, feeling cold or warm, or pain. Theories of embodied cognition support the idea that meaning-making in language happens through conceptual metaphors [14], that is, through using embodied experiences, feelings, and actions as analogies for understanding and communicating abstract concepts; e.g., the discussion was cold; the atmosphere was warm. We often also use analogies from one modality to another, for example, warm colors or strong taste. The concept of embodiment extends into digital animation and interactive media, where motion capture and movement sensors allow for a direct translation of human bodily expression into animated characters. This technological evolution marks a significant shift in animation history, reinforcing the idea that cognition and meaning-making are deeply tied to bodily experience. Contemporary animation can be categorised based on how motion and action are generated, including (a) keyframing, (b) real-time game actions, (c) puppeteering, (d) computer generation, or (e) hybrid approaches. In keyframing, animators meticulously craft motion sequences, reminiscent of traditional animation techniques, while real-time game animation introduces player-controlled movement within predefined constraints. Puppeteering [15] further bridges the gap between human movement and digital embodiment as performers physically manipulate a character in real time, simultaneously acting, dancing, and improvising within a narrative framework. In such workflows, the character’s identity emerges both through its designed morphology and the performative embodiment of the actor [16,17].
In computer-generated animation, character movement is dictated entirely by programmed rules or artificial intelligence, constructing a form of digital embodied cognition. Across all these approaches, the movement of an animated character, like that of a living being, serves multiple functions: (a) expressing internal thoughts, emotions, and moods, (b) performing everyday actions aligned with its environment, (c) reacting to other beings through social interactions, and (d) adapting behavior in response to perceived changes in its surroundings. These embodied movements highlight how digital bodies, like physical ones, engage with their world through perception, action, and meaningful interaction, reinforcing the intertwined nature of body and cognition.
In this work, we investigate an embodied perspective on character design in digital animation that emphasises nonverbal behavior. Building on the idea that embodiment is not only about ’having and controlling’ a body but also about perceiving the external world, internal sensations, and interactions with both human and non-human entities through the body, we highlight the significant role sensors play in embodied character design. This study also explores how biosignal-driven animation can enhance nonverbal personality traits in digital characters. Specifically, we propose a dynamic, sensor-based framework for character design that extends beyond rigid emotion classification, allowing for real-time personality-driven animation. By integrating movement analysis techniques with biosignal data, we establish a workflow that translates physiological and motion cues into expressive character behaviors. The proposed methodology aims to bridge the gap between cognitive science, human–computer interaction, and animation technology, offering a structured approach to achieving real-time personality-driven animation. To structure this investigation, we examine this concept through multiple lenses within the contexts of different fields, providing a comprehensive literature review towards creating a unified embodied animation workflow.
The structure of the paper is outlined as follows: Section 2 examines the embodied perspective in animation as it has been perceived and utilised in previous and current works. Section 3 provides an introduction to embodied character design through examining frameworks and definitions on embodied behavior and nonverbal communication. Section 4 examines character personality as a response mechanism and how it can inform character animation. Regarding more technical aspects, Section 5 presents an overview of the state of the art of sensor-based animation, while Section 6 explores methods of integrating sensors in animation workflow. Finally, Section 7 presents the proposed workflow derived from this work and discusses key challenges and limitations, while Section 8 concludes the paper.

2. The Embodied Perspective in Animation

Before examining the embodied perspective in animation, it is useful to outline the core framework of this study. The following diagram presents a simplified model of how physical and digital environments interact to shape the behavior of a digital character (Figure 1). The subsequent sections will analyze each of these elements in detail, leading to a refined workflow.
Cognitive philosopher Andy Clark (2007), writing before the rapid technological advancements in AI, wearables, and motion capture, described three grades of embodiment: mere, modest, and profound embodiment [5]. He explains, “A ‘merely embodied’ creature or robot would be one equipped with a body and sensors, able to engage in closed-loop interactions with its world, but for whom the body was nothing but a means to implement solutions arrived at by pure reason… Profoundly embodied agents, on the other hand, have boundaries and components that are forever negotiable, where body, thinking, and sensing are interwoven flexibly (and repeatedly) within the fabric of situated, intentional action” [5]. In the field of animation, the concept of embodiment is defined as the perception of bodily movement that goes beyond its physiological nature, viewing it as intrinsic to cognition and emotional experience. According to Sheets-Johnstone [18], scientific frameworks often reduce this aspect of movement, isolating emotions and thoughts from bodily actions instead of viewing them as necessary elements for meaningful interactions with the world. This framework essentially suggests that the body is not just the channel the mind uses to communicate but that it also possesses an intrinsic intelligence revealed through the ways it interacts within its environment. According to Thalmann [19], behavioral animation involves simulating character behavior, from movement to emotional interactions, making each scene unique. Even basic actions like walking vary based on mood, fatigue, or circumstances, making precise modeling difficult. The challenge for future computer animation lies in accurately replicating human behavior while considering social and individual differences. In educational contexts, integrating embodiment into dynamic visualizations, like animations, enhances comprehension. De Koning and Tabbers [20] suggest that visualizations are more effective when learners physically engage with the depicted movement. Strategies include mimicking gestures, using body metaphors, and physically interacting with animations. By linking cognition to sensory and motor experiences, learners form mental representations that connect abstract concepts to bodily experiences. This approach boosts engagement while reducing cognitive load, making learning more accessible and memorable. Similarly, in animation, a character’s believability improves when its movements are not merely predefined or externally controlled but are instead rooted in a logic of embodiment, where perception and action reinforce one another.
A believable character in animation, regardless of whether it is controlled through keyframing, player input, puppeteering, or computer-generated behavior, must exhibit actions that feel naturally triggered by internal thoughts, bodily sensations, and external stimuli. The character’s perception of the world—what it sees, hears, and physically feels—must align with its morphology and movement dynamics [10]. For example, a character rubbing their hands due to cold is an embodiment of both cognition (awareness of the cold) and physical response (friction for warmth). Even when no real-world sensors are involved, the animation must imply how the character perceives and processes sensory information. For example, a large, heavy character should move accordingly—but breaking this expectation (e.g., giving a massive creature delicate, swift movements) can also be an intentional design choice to create contrast. By considering embodied cognition, animation techniques across all types of controls can create characters that do more than move; they experience, react, and exist within their digital worlds in meaningful ways. Ultimately, embodiment in animation is not just about movement accuracy but about creating characters that think, feel, and react in ways that make their actions meaningful within their world. Whether through predefined actions, live performance, or computer-generated behavior, the challenge lies in integrating bodily perception, cognition, and environment to create truly believable animated characters.

3. Embodied Character Design

Having defined the concept of embodiment within the context of animating characters, in this section, we proceed by examining the core definitions and types of nonverbal behavior and communication, towards understanding the different ways of creating character personalities through embodied character design.
While words are important, nonverbal cues—glances, posture, embodied interaction, and more—form the essential building blocks and communication channels of nonverbal behavior, serving as a critical framework for understanding personality and emotional expression.

3.1. Nonverbal Behavior and Communication Definitions and Types

Nonverbal communication includes the wide range of signals and actions that occur beyond spoken words. Glances, posture changes, and subtle gestures form the core of nonverbal behavior. Studies show that these cues can be just as important, or even more so, than verbal communication for expressing emotions and intentions. For instance, Mehrabian [21,22] estimated that facial expressions, gestures, and other nonverbal cues convey 93% of people’s feelings and attitudes. Likewise, Birdwhistell [23] argued that verbal communication accounts for no more than 30% of the meaning exchanged in social interactions. Understanding these behaviors can help people to interpret unspoken messages and communicate more effectively. This section defines nonverbal communication and outlines its main types.

3.1.1. Cues vs. Signals

Before diving into the different types of nonverbal communication and its channels, it would be useful to define the difference between cues and signals. According to evolutionary biologists, cues essentially refer to incidental indicators, which have not, however, evolved for communication. On the other hand, signals have evolved specifically for communication (Figure 2). For instance, the act of chewing is an indication that someone is eating, but it has not evolved to communicate that specifically, while peacock plumage evolved to signal mate quality [24].

3.1.2. Nonverbal Communication Types

As outlined above, nonverbal communication spans the wide range of signals and actions that occur beyond spoken words, with glances, posture changes, and subtle gestures at its core. This subsection outlines its main types.
Nonverbal communication, the process of sharing information, signals, and messages without using words [25,26], often subtly and unintentionally, takes various forms. Facial expressions involve movements of facial muscles to convey emotional states [27]. Gestures are movements of the hands, arms, or head that express ideas or feelings [28]. Paralanguage refers to vocal elements such as tone, pitch, loudness, and even silence, which provide meaning beyond spoken words [29]. Proxemics focuses on the use of personal space and physical distance in social settings, where personal space acts as the physical area individuals maintain around themselves [30]. Eye gaze, including actions such as looking, staring, or blinking, can reveal emotions and thoughts [31]. Haptics, or communication through touch, conveys various emotions and meanings depending on the context [31]. Body language encompasses physical actions such as posture, gestures, facial expressions, and eye movements, all of which transmit nonverbal messages. Appearance, including clothing, hairstyle, and other personal choices, communicates aspects of mood, personality, and social status [31,32]. Lastly, artifacts—objects, images, or tools such as avatars or icons—represent parts of a person’s identity or personality [31].
Understanding these elements is crucial for effective character design and animation as they allow creators to convey depth, emotion, and intention in a way that resonates with audiences. Characters that exhibit believable nonverbal behaviors can evoke empathy, communicate emotions without dialogue, and create memorable interactions [33]. Thus, the use of nonverbal communication cues in virtual agents can create more persuasive and empathic characters.
For instance, facial expressions and gestures can instantly reveal a character’s emotional state, while body language and proxemics can establish relationships and social dynamics within a scene. Moreover, subtle cues like eye gaze and paralinguistic elements add layers of realism, helping characters to feel alive and relatable. Additionally, choices in appearance and the use of artifacts can enhance storytelling by visually representing a character’s background, personality, or motivations. By incorporating these principles into design and animation, creators can produce more engaging and immersive narratives, leveraging the full spectrum of human communication to connect with their audience. Of course, just like with every creative process, the design of authentic characters requires the use of established theoretical backgrounds and models such as Ekman’s set of basic emotions [34], the Circumplex model of affect [35], or the Five Factor model [36], combined with iterative design and focus groups [37].

4. Character Personality as a Response Mechanism

Personality traits can contribute to how characters interpret and react to environmental stimuli, shaping their nonverbal behaviors in distinctive ways. These traits manifest as mechanisms triggered by events occurring in the character’s environmental surroundings, enabling them to sense and respond through the aforementioned nonverbal channels. Thus, personality-driven reactions to stimuli embody a unique combination of communication, interaction, and expression.
One of the most interesting aspects of nonverbal communication research is the relationship between personality-driven expression and the observer’s ability to interpret and embody the sensed data, even when visual appearance is neutralised but morphology (e.g., structure and movement) is preserved. In this section, we examine the different ways personality traits can be interpreted, communicated, or expressed through nonverbal reactions. We also explore what can be communicated through a character’s posture, gestures, and embodied interactions with their environment and others when visual characteristics are minimised. The role of nonverbal behavior as a communicative tool that bridges individual expression and collective interpretation can potentially advance our understanding of embodied interaction beyond surface appearances.

4.1. Definitions

Before diving into the ways personality and personality traits act as a stimuli filter and response mechanism, it would be useful to provide some definitions of relevant terms in different scientific fields and disciplines.
In psychology, personality is defined as the enduring characteristics that account for consistent patterns of feeling, thinking, and behaving [38]. Bergner [39] builds upon and refines this definition, describing personality as an enduring set of traits and styles that encompass an individual’s unique natural inclinations and distinctive characteristics within a societal context. When it comes to media, particularly animation, television, and film, personality is conveyed through consistent character traits and behaviors. Thomas and Johnston [6] emphasise how body movements, dialogue, and interactions help to build personality in animated characters, while Hoffner and Cantor [40] and Field [41] similarly argue that physical appearance, speech patterns, and actions allow audiences to infer personality in TV and movie characters. Moving on to the performing arts, Laurel [42] draws a connection to classical dramatic theory, citing Aristotle’s view of characters as “bundles of traits, predispositions, and choices” that come together to form a cohesive entity.
Despite the different fields, all the aforementioned definitions highlight the critical role of portraying characters in a manner that enables viewers to clearly understand their personality. Successfully communicating a character’s personality relies heavily on the audience’s ability to predict their behavior, actions, movements, moods, and attitudes. Ultimately, it is safe to assume that a character’s personality can be effectively conveyed and interpreted when it demonstrates consistent patterns [43].

4.2. Personality Traits Through Nonverbal Reactions

In the previous sections, we analyzed patterns from psychology, biology, movement, and the performing arts to explore connections between personality traits, emotions, and nonverbal behavior. These findings were categorised under nonverbal communication types and examined across scientific contexts. In this section, we expand on each nonverbal communication type, identifying and revealing specific patterns and the meanings they potentially convey through a literature review. Regarding nonverbal expressions and connections to personality traits, Mehrabian’s communication model [44,45] highlights that body language and vocal tone often carry more weight than verbal content, particularly during emotionally charged interactions. His formula quantifies this dynamic as follows:
Total Emotion/Attitude Communicated = 7% Verbal + 38% Vocal + 55% Facial
This framework underscores the pivotal role of nonverbal cues in effective communication.
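As a purely illustrative aid, the sketch below (in Python) applies the 7–38–55 weighting to hypothetical per-channel attitude scores; the score scale and the blending itself are assumptions for demonstration and are not part of Mehrabian's model.

```python
# Illustrative only: Mehrabian's 7-38-55 split concerns feelings and attitudes,
# not all communication; the per-channel scores here are hypothetical inputs in [0, 1].
MEHRABIAN_WEIGHTS = {"verbal": 0.07, "vocal": 0.38, "facial": 0.55}

def combined_attitude(scores: dict) -> float:
    """Weighted blend of per-channel attitude scores."""
    return sum(MEHRABIAN_WEIGHTS[channel] * scores.get(channel, 0.0)
               for channel in MEHRABIAN_WEIGHTS)

# A warm face and voice dominate a neutral verbal message.
print(combined_attitude({"verbal": 0.5, "vocal": 0.8, "facial": 0.9}))  # -> approx. 0.834
```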

4.2.1. Facial Expressions

Facial expressions are a primary form of nonverbal communication, evolving from survival mechanisms to tools for social interaction. Darwin [46] proposed that expressions serve two functions: preparing organisms for environmental changes and conveying social information. This idea, later expanded into the Two-Stage Model, suggests that facial movements evolved from survival tasks, like rejecting harmful food, to signaling internal states and predicting others’ actions [46,47,48,49]. Ekman [34] identified nine universal emotions, including anger, fear, and happiness, which influence interpersonal trait perception [47]. For example, smiles signal friendliness, while frowns or furrowed brows convey dominance or distress. These facial expressions shape how we communicate emotions and interpret others’ intentions [50,51,52,53]. Consequently, different combinations of facial feature movements can produce visualizations of different emotional states (Figure 3).
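The sketch below illustrates how such combinations could be encoded for an animated face as mixable presets; the blendshape names, weights, and emotion presets are hypothetical assumptions and are not taken from Ekman's coding scheme or from any production rig.

```python
# Hypothetical blendshape weights (0-1) per emotion; names and values are illustrative.
EXPRESSION_PRESETS = {
    "happiness": {"mouth_smile": 0.9, "cheek_raise": 0.6, "brow_raise": 0.2},
    "anger":     {"brow_lower": 0.8, "lid_tighten": 0.5, "jaw_clench": 0.6},
    "fear":      {"brow_raise": 0.7, "eye_widen": 0.8, "mouth_stretch": 0.4},
}

def blend_expressions(weights: dict) -> dict:
    """Mix emotion presets (e.g., 70% fear + 30% anger) into one blendshape set."""
    mixed = {}
    for emotion, amount in weights.items():
        for shape, value in EXPRESSION_PRESETS[emotion].items():
            mixed[shape] = min(1.0, mixed.get(shape, 0.0) + amount * value)
    return mixed

print(blend_expressions({"fear": 0.7, "anger": 0.3}))
```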

4.2.2. Eye Gaze

Eye gaze plays a dual role in social interaction, allowing individuals to both perceive information from others and transmit it through gaze direction and duration. These factors can convey various messages, including dominance or threat [54,55], attraction [56,57], a need for approval [58,59], or a desire to communicate [60]. The behavior of the transmitter is influenced by their environment, with theories highlighting how social context shapes gaze dynamics and self-presentation.

4.2.3. Body Language → Movement → Position-Posture-Gestures

Body language is a medium that can communicate a great deal of information. However, contrary to earlier beliefs, body language can be quite subtle and ambiguous in communicating feelings, moods, and attitudes [31]. In this section, we explore movement, position, posture, and gestures as nonverbal communication channels since they all belong to the realm of body language.

Movement Analysis and LMA

Movement analysis involves observing, annotating, and analysing movement, often by certified experts. Among the various frameworks, Laban Movement Analysis (LMA) is one of the most widely used. Developed by Rudolf Laban [61], LMA examines movement through four components: body, effort, space, and shape, collectively forming the BESS system [62,63]. LMA has evolved through contributions from researchers and practitioners [64,65] and is applied across fields, including dance education [66,67], archetypal character and personality creation in dance [68,69], and animation [70]. It remains a key tool for understanding movement’s function, expressivity, and qualities.

Movement Qualities

Blom et al. [71] define movement qualities as “distinctly observable attributes or characteristics produced by dynamics and made manifest in movement”, describing how bodies move in terms of energy, space, and time. These qualities combine to create unique movement dynamics involving specific values for space, time, forms, or shapes [71,72]. LMA focuses on these qualities within its effort category, analysing motion factors like weight, space/direction, time, and flow—each with polar opposites—resulting in Laban’s eight Basic Effort Actions (Figure 4), linked to both environment and personality traits [62,73]. These efforts have been widely applied in dance [69,74], character design and animation [70], as well as AI and robotics [68,75]. Recent research has demonstrated the effectiveness of the LMA framework in identifying expressed and/or perceived emotion in humans through computer vision [76,77] and AI-driven movement classification [78].
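To make the Effort category concrete, the following sketch enumerates the eight Basic Effort Actions as the classic combinations of the Weight, Time, and Space factors (Flow is omitted, as the eight actions are conventionally defined over these three) and classifies a movement from normalised feature values; the feature extraction and thresholds are assumptions for illustration only.

```python
# The eight Basic Effort Actions as combinations of three Effort factors.
# (Flow is the fourth factor but does not enter this classic 2x2x2 table.)
BASIC_EFFORT_ACTIONS = {
    ("strong", "sudden",    "direct"):   "punch",
    ("strong", "sudden",    "indirect"): "slash",
    ("strong", "sustained", "direct"):   "press",
    ("strong", "sustained", "indirect"): "wring",
    ("light",  "sudden",    "direct"):   "dab",
    ("light",  "sudden",    "indirect"): "flick",
    ("light",  "sustained", "direct"):   "glide",
    ("light",  "sustained", "indirect"): "float",
}

def classify_effort(weight: float, suddenness: float, directness: float) -> str:
    """Map normalised movement features (0-1, hypothetical extraction) to an action."""
    key = (
        "strong" if weight > 0.5 else "light",
        "sudden" if suddenness > 0.5 else "sustained",
        "direct" if directness > 0.5 else "indirect",
    )
    return BASIC_EFFORT_ACTIONS[key]

print(classify_effort(weight=0.8, suddenness=0.9, directness=0.7))  # -> "punch"
```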

Position and Posture

Position and posture are essential elements of nonverbal communication, reflecting personal characteristics and social dynamics. Spatial orientation, or position, illustrates how individuals relate to their environment and others, often categorised by proxemics into zones of intimacy and engagement, as demonstrated in Figure 5: intimate (0–0.45 m), friend (0.45–1.2 m), social (1.2–3.6 m), and audience (beyond 3.6 m) [79]. Frameworks like LMA and Bayesian models help to interpret these spatial behaviors and their psychological implications [80]. Posture also conveys emotions and traits, with upright stances signaling confidence and slouched or bowed positions indicating submissiveness. Friendly traits are often expressed through open body language, forward lean, eye contact, and smiles, sometimes softened by submissive gestures like shoulder shrugs [80]. Together, position and posture play a pivotal role in decoding personality and interactional cues.
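A minimal sketch of the zone classification described above, with boundaries taken directly from the text; how the distance between two characters is measured (e.g., between root joints) is left open and is an assumption of the example.

```python
# Zone boundaries in metres, as listed in the text (Figure 5).
PROXEMIC_ZONES = [(0.45, "intimate"), (1.2, "friend"), (3.6, "social")]

def proxemic_zone(distance_m: float) -> str:
    """Classify interpersonal distance into one of the four zones."""
    for upper_bound, zone in PROXEMIC_ZONES:
        if distance_m <= upper_bound:
            return zone
    return "audience"          # beyond 3.6 m

print(proxemic_zone(0.3))   # -> intimate
print(proxemic_zone(2.0))   # -> social
```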

4.2.4. Gestures

Gestures, involving movements of the hands, arms, and body, enhance communication by emphasising verbal messages and improving clarity [79]. Rather than focusing on their physical forms, research examines gestures in terms of their movement qualities, which can convey emotional states [81,82]. In 3D character design and animation, recent studies have applied Laban Movement Analysis to develop body movements capable of expressing up to 15 different emotions, demonstrating gestures’ role in nonverbal emotional communication [83].

4.2.5. Paralinguistics

The term paralinguistics encompasses the use of vocal elements like volume, tone, pitch, and speech rate, as well as silence, to communicate without words [29]. Occasionally, paralanguage may include universal verbal signals understood across languages [79]. The Paralinguistics Model of Rapport, introduced by Novick and Gris [84], highlights how nonverbal vocal elements help to establish rapport in conversations. By adjusting pitch, volume, and speech rate, and mirroring others’ vocal traits, the model creates a sense of synchronicity, signals attention, and incorporates social language and humor to build positive connections [84,85].

4.2.6. Haptics

Haptics, or communication through touch, conveys messages of proximity, intimacy, and power dynamics. The type of touch—intimate, formal, or informal—can signal dominance or status, with touch initiation often perceived as asserting power [86,87,88]. Sekerdej et al. [89] identified factors influencing haptic interactions: the initiator, the recipient’s perception, reciprocity, and context. For instance, returning a formal touch, like a handshake, reduces perceived hierarchy, while reciprocating an informal gesture fosters equality in casual settings.

4.2.7. Appearance

Appearance in nonverbal communication reflects self-presentation and can signal mood or personality traits. Clothing, styling, and aesthetic choices serve as tools for expressing or evoking emotional states. According to color psychology, colors and patterns can influence emotions or inspire desired moods, serving as both reflective and motivational tools [31,32,90].

5. Sensor-Based Animation

Animation has always been the art of observing, mastering, and recreating human and animal movement as characters’ personality and behavior, whether by drawing or by digital keyframing. The term sensor-based animation was proposed by Thalmann in 1996 [19], suggesting not only the need for physical and virtual sensors but also stressing the importance of predicting the sensorial behavior of the digital character. Over the last few decades, the use of various sensors for capturing movement and human activity has revolutionised the way that digital animation is produced.
In the concept of embodied character design, we suggest that the sensorial system and perception of the character are essential in manifesting this embodied behavior and therefore unraveling its personality. As mentioned in Section 1, there are different types of movement generation and control used for animating characters, including predefined and dynamic approaches. These approaches differ in when and how the character’s bodily behavior is controlled or programmed, and by whom.
Giummarra et al. [91] identify embodiment as a summary of perceptual aspects of the bodily self, including the following:
  • Multisensory integration and egocentric frames of reference;
  • Proprioception, position sense, and the perception of limb movement;
  • Visual capture and visual processing of the human body;
  • Motor systems: planning, preparation, and execution of motor schemas.
In the following section, we categorise digital animation based on the sensors that are used not only to create the animation but to simulate the sensorial system of the digital character.
A digital character, depending on the manner of animation, should respond according to their implied sensorial activity as follows (a minimal routing sketch follows this list):
  • Motion Capture and Motion Sensing: Acted sensing by the performer (the actor acts as if they feel or react and their behavior is captured);
  • Biosensors: Captured sensing of the performer (the actor’s biosignals are captured as indices of what the performer, and therefore the animated character, feels);
  • Virtual sensors in Agents: Movement-generation-programmed sensing (agents are programmed to act as they feel a stimulus through AI);
  • Physical sensors (morphological computation).
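The routing sketch below mirrors these four categories, dispatching incoming sensing events to different aspects of a character's behaviour; the handler names, data fields, and character representation are illustrative assumptions rather than a prescribed implementation.

```python
from typing import Callable, Dict

# Hypothetical dispatch mirroring the four sensing categories above;
# the character is a plain dict and the handler logic is deliberately minimal.
def apply_mocap(character: dict, frame: dict) -> None:
    character["pose"] = frame.get("joint_rotations", character.get("pose"))

def apply_biosignals(character: dict, sample: dict) -> None:
    character["internal_state"] = {"arousal": sample.get("eda", 0.0),
                                   "tension": sample.get("emg", 0.0)}

def apply_virtual_sensing(character: dict, percept: dict) -> None:
    character["reaction"] = f"orient towards {percept.get('stimulus', 'nothing')}"

def apply_morphology(character: dict, _unused: dict) -> None:
    # Morphological computation: behaviour carried by the body itself (placeholder).
    character["passive_motion"] = "settle under gravity"

HANDLERS: Dict[str, Callable[[dict, dict], None]] = {
    "motion_capture": apply_mocap,
    "biosensor": apply_biosignals,
    "virtual_sensor": apply_virtual_sensing,
    "physical_morphology": apply_morphology,
}

def route(character: dict, channel: str, data: dict) -> None:
    """Route an incoming sensing event to the matching character update."""
    HANDLERS[channel](character, data)

character: dict = {}
route(character, "biosensor", {"eda": 0.7, "emg": 0.2})
print(character)
```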

5.1. Motion Capture and Motion Sensing

Motion capture and different techniques of motion sensing have been widely used in recent decades in the computer animation, game, and film industries to enhance realism, speed up the animation process, and reduce manual keyframing efforts. These technologies include passive and active optical sensors (e.g., Vicon [92], OptiTrack [93], and Qualisys [94]), inertial sensors (IMU—inertial measurement unit) that provide lower-cost, portable solutions for real-time applications (Xsens [95] and Rokoko [96]), magnetic sensors, as well as pressure and force sensors (e.g., used in foot tracking, facial mocap (for lip syncing), and virtual puppeteering [15]). These technologies are used to capture realistic human motion for films and games, facilitate facial animation, and enhance virtual reality (VR) and augmented reality (AR) experiences. They also support real-time motion capture for immersive applications and enable the animation of non-human entities, such as creatures and fantasy characters, based on human movement. Recent developments also explore how we can use collaborative mixed reality environments to enhance motion-capture workflows, allowing performers to interact with digital environments in a more intuitive way [97].
The incorporation of performers in the animation process opens a wide field of possibilities where nonverbal personality traits, communication, and expression are not only designed on screen or paper but embodied through acting, dancing, and performing [98]. On the other hand, acting for the stage or the camera is different from acting for animating a character in many ways.
Subjective perspective and feeling, as well as closed-loop continuous feedback (if in real time), affect how the performer “becomes” the character, that is, moves as the character and nonverbally makes the character express themselves [99].
In this case, the sensorial system of the digital character becomes a hybrid of the performer and the digital avatar as the performer is asked to puppeteer the character but also act as they sense and feel the environment and stimuli that the 3D character senses in this particular part of the scenario [100,101].

5.2. Biosensors: Visualising the Inside Out

Traditional animation offers many examples in which, by applying some of the 12 principles of animation [102], e.g., exaggeration, an internal function or feeling of the character is visualised or sonified to convey that feeling. Think of the heart popping out of the body while beating in classic Tom and Jerry [103] to convey the character’s fear, love, or shock, or exaggerated sweating to show exhaustion, stress, or anxiety. Figure 6 includes some examples of shock, anxiety, confusion, fear, and surprise visualised in animated clips that are currently part of the public domain.
Researchers have shown that expressive biosignals, or biosignals displayed as a social cue, have the potential to facilitate communication as a means to recognise and express our emotions and physical being (Liu et al. [104]). The same work suggests that the incorporation of biosignals in animation has positive effects on easier sharing and social connection. Furthermore, wearable technology is being increasingly integrated into the performing arts to create interactive, responsive environments where physical movements drive digital outputs in real time [105]. Previous work suggests that the incorporation of wearables in psychological research [106] not only helps us to deepen the understanding of felt emotions during different contexts [107,108] but also provides data that can be used either pre-recorded or online to create metaphoric or more literal digital narratives [109,110]. In such cases, the human who provides these internal sensations (heart rate, electroencephalography (EEG), electrodermal activity (EDA), temperature, and electrocardiogram (ECG)) through wearable sensors indirectly provides the internal feelings and states of the digital character, which can be visualised or sonified in different ways. The use of such sensors in the workflow allows transferring and making visible (or audible) internal sensations of the digital character that are otherwise hidden.
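In the spirit of these exaggeration examples, the sketch below maps wearable readings to the intensity of such visual metaphors; the resting baselines, gains, and clamping values are purely illustrative assumptions.

```python
# Hypothetical mapping from wearable readings to exaggerated visual metaphors
# (heart popping out, visible sweat); baselines and gains are illustrative.
def heart_pop_scale(heart_rate_bpm: float, resting_bpm: float = 65.0) -> float:
    """Scale of the 'heart popping out' effect: 0 at rest, grows with elevation."""
    elevation = max(0.0, heart_rate_bpm - resting_bpm) / resting_bpm
    return min(2.0, elevation * 3.0)   # clamp the exaggeration

def sweat_particle_rate(eda_microsiemens: float, baseline: float = 2.0) -> int:
    """Particles per second for a sweat effect, driven by EDA above baseline."""
    return int(max(0.0, eda_microsiemens - baseline) * 10)

print(heart_pop_scale(120))          # elevated heart rate -> 2.0 (clamped)
print(sweat_particle_rate(5.5))      # raised EDA -> 35 particles per second
```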

5.3. Virtual Sensors in Agents (Autonomous Digital Characters)

The term virtual sensor in animation refers to a software-based technique that simulates a real-world sensor by estimating the motion, physics, or environmental aspects of the digital character using algorithms, AI, and mathematical models, without requiring physical hardware. Such sensors can be used as a replacement for motion capture hardware (e.g., OpenPose or MediaPipe), as an enhancement of motion capture, in procedural animation, or in AI-driven motion estimation. From a technical standpoint, Bayesian Networks have been used to simulate virtual humans, identifying five challenges: uncertainty, controllability for the animator, misinterpretation, interaction (we as humans do not react directly to the stimuli sensed by our sensory organs in an objective manner but, based on perception, interpretation, and intention, we react according to our personality traits and our internal state, i.e., how we feel at this moment or towards the situation or another person), and extensibility.
Other researchers propose the Neural State Machine as a novel data-driven framework to guide characters to achieve goal-driven actions with precise scene interaction [111]. In such cases, the software is programmed to mimic the sensing of the external scene of the virtual character and predict their response. Unlike simulation of the motion itself by systems such as OpenSim (https://simtk.org/projects/opensim, accessed on 18 January 2025) or inverse kinematics models [112], virtual sensors [113] simulate virtual human behavior, incorporating a behavior model framework that takes into account the sensorial input of the characters (usually vision and its properties such as the field of view), perception, and interpretation based on a personality model. In several examples of simulating the behavior of autonomous actors in a virtual environment or a computer-generated imagery (CGI) scene, one should take into account (a) the particular situation or scenario, e.g., an emergency situation, or meeting a friend with whom we had a quarrel before; (b) the sensorial input of the surrounding environment for each actor, typically relying on calculating the average human vision; and (c) the internal state that is relevant to the emotional and cognitive state of the person/autonomous actor. While in most models the personality is treated as a rule-based model for handling sensorial inputs and reacting accordingly, physical or virtual sensors can support a more realistic simulation of autonomous bodily behavior, as they can capture more stimuli from the surroundings and also simulate the internal state, bodily sensations, and emotions.
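A minimal sketch of such a virtual sensor follows: the character perceives a stimulus only if it falls within a field of view, and its reaction is filtered through a simple personality parameter; the geometry, the trait used, and the reaction rule are illustrative assumptions and do not reproduce any of the cited models.

```python
import math

# A minimal virtual "vision sensor" plus a rule-based, trait-modulated reaction.
def in_field_of_view(char_pos, char_heading_deg, stimulus_pos, fov_deg=120.0):
    """True if the stimulus lies within the character's horizontal field of view."""
    dx, dy = stimulus_pos[0] - char_pos[0], stimulus_pos[1] - char_pos[1]
    angle_to_stimulus = math.degrees(math.atan2(dy, dx))
    delta = (angle_to_stimulus - char_heading_deg + 180) % 360 - 180
    return abs(delta) <= fov_deg / 2

def react(perceived: bool, threat_level: float, neuroticism: float) -> str:
    """The same stimulus yields different behaviour depending on a trait value."""
    if not perceived:
        return "no reaction"
    alarm = threat_level * (0.5 + neuroticism)   # anxious characters over-react
    return "flee" if alarm > 0.6 else "approach"

print(in_field_of_view((0, 0), 0.0, (3, 1)))            # stimulus ahead -> True
print(react(True, threat_level=0.5, neuroticism=0.9))   # -> "flee"
print(react(True, threat_level=0.5, neuroticism=0.1))   # -> "approach"
```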

5.4. Morphological Computation

One of the most recent trends in digital animation is the notion of morphological computation. The term morphological computation, as defined by Müller [13], describes the principle of computations required for motion execution that are not implemented in a dedicated (electronic) controller but executed by the kinematics and morphology of the body itself. In fact, morphological computation is a field that intersects computer animation, robotics, and sensors [114]. As a field, it explores how the physical body of an agent can influence behavior, learning, and problem solving. Analogous to embodiment in humans, where the embodied mind and the mindful body act as a whole for effective and efficient interaction with the environment, the field of morphological computation suggests that the behavior of the agent is incorporated not only in the software of the agent but in the physical (in the case of a robot) or virtual (in the case of the graphic 3D model) aspects of the character/agent. By “morphological computation”, we mean that certain processes are performed by the body that otherwise would have to be performed by the brain.

6. Integrating Sensors in Animation Workflow

While animation is typically about creating or moving an entity that resembles or acts as a living person (or creature in a more general sense) as if it were real or alive, it usually puts the creator in a position of designing through mimicking or imagination. The integration of sensors into digital animation workflows represents a significant shift from traditional keyframing and motion capture techniques towards a more embodied and affective animation paradigm. This approach serves as an opportunity for creators to expand their pool of options outside the realm of their imagination and work with real live data. Wearable technologies and biosensors, including electrocardiogram (ECG), electrodermal activity (EDA), electromyography (EMG), respiration, and skin temperature monitoring, can provide real-time physiological data that can inform character behavior and emotional expression [115,116]. Unlike conventional facial expression tracking or skeletal motion capture, these sensors allow animators to infuse characters with dynamic, context-sensitive responses based on a performer’s internal state. In this section, we explore the different ways sensors and wearables can be integrated in animation workflow, as well as some of the challenges that might arise while following these techniques.

6.1. Sensor-Based Emotion Capture in Animation

Traditionally, animation has relied on predefined expressions and keyframe interpolation to depict character emotions. However, the use of biosignals as input for animation systems enables a more organic and adaptive approach. Heart rate variability (HRV) has been identified as a strong indicator of arousal and stress, with reduced HRV often linked to heightened emotional states such as anxiety or excitement [117]. This makes HRV a potential driver for subtle adjustments in a character’s breathing rate, posture, or micro-movements to reflect internal tension or relaxation. Similarly, EDA, which measures sweat gland activity, has been widely used as a marker of emotional arousal and cognitive engagement [118,119]. For instance, Giomi et al. [120] discuss a phenomenological approach to using wearable technology, suggesting that biosignal sonification can provide real-time feedback while also actively reshaping the performer’s bodily awareness and interactions within digital environments. Moreover, recent research in the field of health has demonstrated the potential for motion-tracking technologies to advance healthcare by providing real-time data-driven insights into human movement, offering valuable techniques that could extend to sensor-based animation [121].
In an animation context, such signals and data could be used to modulate real-time shader effects, environmental changes, or secondary animations, such as subtle twitches, muscle tension, or dilation of pupils, to simulate affective responses. Electromyography (EMG), which captures facial and body muscle activity, is another powerful tool for refining expressive animation. Previous studies have demonstrated that facial EMG signals correlate with emotional valence, making them particularly useful for enhancing facial animation beyond predefined morph targets or blend shapes [122]. By incorporating EMG data, digital characters can express more nuanced micro-expressions, such as tension around the mouth during stress or involuntary eye twitches during excitement. This aligns with Paul Ekman’s [123] work on facial action coding, which categorises muscle activations that correspond to universal emotional expressions. By integrating EMG-based muscle activity tracking, animators can create more lifelike facial animations that respond dynamically to the performer’s physiological state.
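A minimal sketch of this modulation idea, assuming biosignal features have already been normalised to a 0–1 range; the parameter names, baselines, and gains are illustrative and would need per-rig and per-performer tuning.

```python
# Hypothetical modulation layer: normalised biosignal features (0-1) nudge
# secondary animation parameters rather than selecting discrete emotions.
def modulate_animation(hrv_norm: float, eda_norm: float, facial_emg_norm: float) -> dict:
    tension = 1.0 - hrv_norm                  # lower HRV ~ higher tension (per the text)
    return {
        "breathing_rate_mult": 1.0 + 0.6 * tension,      # faster, shallower breathing
        "shoulder_tension":    0.8 * tension,
        "blink_rate_mult":     1.0 + 0.5 * eda_norm,     # arousal -> faster blinking
        "pupil_dilation":      0.3 + 0.5 * eda_norm,
        "mouth_corner_tight":  0.7 * facial_emg_norm,    # micro-tension around the mouth
    }

print(modulate_animation(hrv_norm=0.2, eda_norm=0.7, facial_emg_norm=0.4))
```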

6.2. Contextual and Narrative Integration of Biosignals

Rather than directly mapping biosignals to predefined emotional states, an alternative approach is to use them as narrative modifiers that influence the animation workflow contextually. Damasio’s Somatic Marker Hypothesis [124] highlights how emotions serve as embodied decision-making signals, meaning that characters could be animated in ways that reflect their physiological state rather than simply displaying surface-level expressions. For example, in a suspenseful scene, increased heart rate and shallow respiration [125] could trigger subtle tension in the character’s shoulders, faster blinking, or changes in ambient lighting to create an immersive experience. Lisa Feldman Barrett’s Theory of Constructed Emotion [126] challenges the notion of universal emotional expressions, instead proposing that emotions are constructed based on past experiences and situational interpretation. In an animation pipeline, this perspective suggests that, rather than using rigid emotion classification, biosignal data can be used as input for procedural animation systems that adjust a character’s behavior dynamically based on scene context, character history, and environmental cues. For instance, if a character’s biosignal data suggest increased arousal but neutral valence, the system might infer that they are excited rather than scared, leading to more animated gestures and energetic movement rather than defensive body language.
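The sketch below illustrates this contextual interpretation: the same arousal level yields different behaviour profiles depending on valence and scene context; the thresholds, labels, and profiles are illustrative assumptions, not a validated emotion model.

```python
# Contextual interpretation sketch: arousal + valence + scene context -> behaviour profile.
def interpret_state(arousal: float, valence: float, scene_context: str) -> str:
    if arousal < 0.4:
        return "calm"
    if valence >= 0.0:
        return "excited"                     # high arousal, non-negative valence
    return "fearful" if scene_context == "threatening" else "frustrated"

def behaviour_profile(state: str) -> dict:
    profiles = {
        "calm":       {"gesture_amplitude": 0.3, "posture": "relaxed"},
        "excited":    {"gesture_amplitude": 0.9, "posture": "open"},
        "fearful":    {"gesture_amplitude": 0.4, "posture": "defensive"},
        "frustrated": {"gesture_amplitude": 0.6, "posture": "closed"},
    }
    return profiles[state]

print(behaviour_profile(interpret_state(arousal=0.8, valence=0.2, scene_context="party")))
```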

6.3. Challenges and Considerations in Sensor-Driven Animation

While sensor-driven animation offers new possibilities for enhanced expressivity and interactivity, it also presents several challenges:
  • Noise and Variability: Biosignals are inherently noisy and sensitive to movement artifacts, making real-time animation challenging [115]. Filtering techniques and machine learning models must be implemented to distinguish genuine emotional signals from environmental interference (a minimal smoothing and calibration sketch follows this list).
  • Individual Differences: Emotional responses and baseline biosignals vary across individuals, meaning that one-size-fits-all emotion models are ineffective [126]. Instead, personalised calibration may be required to adapt the animation system to each performer’s unique physiological patterns.
  • Interpretation Complexity: Unlike direct motion capture, biosignals do not correspond to explicit movements or expressions, making their integration into animation workflows less straightforward. Instead of focusing on emotion detection, the emphasis should be on using biosignals as modulation parameters that influence motion curves, shaders, and secondary animations dynamically.
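As referenced in the first bullet above, a minimal smoothing and per-performer calibration sketch is given below; the exponential moving average, the resting-range normalisation, and the constants are illustrative assumptions, and a production pipeline would add proper artefact rejection.

```python
# Minimal smoothing and per-performer normalisation for one biosignal channel.
class BiosignalStream:
    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha                    # EMA smoothing factor (illustrative)
        self.smoothed = None
        self.baseline_min = float("inf")
        self.baseline_max = float("-inf")

    def calibrate(self, sample: float) -> None:
        """Feed resting-state samples to learn this performer's baseline range."""
        self.baseline_min = min(self.baseline_min, sample)
        self.baseline_max = max(self.baseline_max, sample)

    def update(self, sample: float) -> float:
        """Smooth a raw sample and normalise it to the performer's own range."""
        self.smoothed = sample if self.smoothed is None else (
            self.alpha * sample + (1 - self.alpha) * self.smoothed)
        span = max(1e-6, self.baseline_max - self.baseline_min)
        return min(1.0, max(0.0, (self.smoothed - self.baseline_min) / span))

stream = BiosignalStream()
for resting in (2.0, 2.2, 2.1, 2.4):
    stream.calibrate(resting)
print(stream.update(3.1))   # -> 1.0 (well above this performer's resting range)
```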
Despite these challenges, the potential for sensor-enhanced animation workflows is vast. By integrating wearable biosensors with motion capture, digital characters can be animated not only with physical realism but also with affective depth, paving the way for more immersive interactive storytelling, virtual performances, and emotionally responsive avatars in gaming, VR, and cinematic animation.

7. Proposed Workflow and Discussion

In the previous sections, we provided an extensive literature review, exploring the concept of embodiment through the lenses of multiple different fields. Driven by the insights gathered, in this section, we present our proposed workflow for integrating sensors within the process of digital animation.

7.1. Methodology: A Sensor-Based Approach to Animation

Our proposed workflow (Figure 7) expands on the initial schema pictured in Figure 1, integrating multiple sensor types to capture both internal bodily states and external motion dynamics from human performers. This approach leverages biosignal sensors (such as EMG, ECG, and EDA) to track physiological changes and motion capture (MoCap) sensors to record body movements, posture, and facial expressions, as discussed in Section 6. These collected data streams undergo processing and are mapped to digital characters, enabling them to express emotions and respond dynamically to virtual interactions. The ultimate goal is to create a feedback-driven animation pipeline that enhances realism, emotional depth, and user engagement.
As pictured in Figure 7, the proposed pipeline integrates multiple types of sensors into the digital animation process, structured as follows (a minimal end-to-end sketch follows this list):
  • Physical Environment: A human performer is equipped with biosignal sensors (e.g., EMG, HRV, and GSR) and motion capture sensors through wearables to track both internal bodily states and external movements. This combination allows for the real-time extraction of physiological and kinematic data, which are processed and used to create the digital character’s behavior.
  • Digital Character Integration: The collected data inform digital character behavior:
    Characters express internal states such as emotions and thoughts based on biosignal data.
    Characters react to external conditions, dynamically adjusting responses within the space of the digital environment.
  • Digital Environment Interaction: Virtual sensors and actuators allow the character to perceive and modify its digital surroundings. The digital body morphology adapts based on sensor inputs, ensuring a cohesive interaction between the character and its environment.
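The end-to-end sketch below reduces each stage of the pipeline to a stub to show the data flow from capture to character; all function names, data shapes, and thresholds are illustrative assumptions.

```python
# End-to-end sketch of the pipeline stages in Figure 7, reduced to stubs.
def capture_frame() -> dict:
    """Physical environment: one synchronised mocap + biosignal sample."""
    return {"pose": {"spine": 0.1, "head": -0.05}, "hrv_norm": 0.3, "eda_norm": 0.6}

def process(frame: dict) -> dict:
    """Map raw streams to character-level parameters (internal + external)."""
    return {
        "skeleton_targets": frame["pose"],
        "internal_state": {"tension": 1.0 - frame["hrv_norm"], "arousal": frame["eda_norm"]},
    }

def drive_character(params: dict, digital_env: dict) -> dict:
    """Digital environment: apply parameters and let virtual sensors react back."""
    threatened = digital_env.get("nearby_threat", False)
    aroused = params["internal_state"]["arousal"] > 0.5
    return {
        "pose": params["skeleton_targets"],
        "expression_tension": params["internal_state"]["tension"],
        "reaction": "withdraw" if threatened and aroused else "idle",
    }

frame = capture_frame()
print(drive_character(process(frame), digital_env={"nearby_threat": True}))
```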
This workflow also considers the practical implications for animators, offering tools for integrating sensor data into character animation pipelines with minimal manual intervention. By automating aspects of movement and expression generation, animators can focus on refining high-level stylistic and narrative elements rather than manually keyframing each motion.

7.2. Discussion: Implications and Challenges

The proposed workflow showcases the potential to advance sensor-based animation through multimodality. It is important, however, to acknowledge possible challenges affecting its implementation and scalability. Firstly, biosignals are inherently noisy and sensitive to movement, making real-time animation a challenging task [115]. Especially when combining multimodal sensor data, possible synchronization issues between biosignal and motion capture streams require advanced filtering and machine learning techniques to ensure accurate mapping to digital characters. In such cases, robust signal processing pipelines must be implemented to separate genuine emotional signals from environmental interference and avoid the risk of generating false animations.
As discussed in Section 6.3, another key challenge lies in the inability of generic models to capture the unique physiological patterns of each performer since emotional responses and baseline biosignals can vary across individuals [126]. Additionally, since biosignals do not directly translate into clear movements, their use in animation can be quite complex, hence making their use more appropriate for subtly adjusting motion and animations rather than relying solely on them for production. Furthermore, biosignal-driven systems can increase cognitive load and affect natural movement patterns. While feedback enhances immersion, too much reliance on it may reduce spontaneity and creativity. Therefore, finding the right balance between user control and system automation is essential for effectively integrating biosignals into animation.

8. Conclusions

In this paper, we introduce the concept of embodied character design for digital characters and agents, emphasising how various sensors can enhance their responsiveness. These sensors may capture human movement and biosignals or be programmed to help agents perceive their internal and external environments more effectively. We differentiate types of computer animation based on their use of virtual and physical sensors and explore how animations are controlled—whether by human input, coding, or AI-driven agents.
The idea of embodied character design and sensor-based animation necessitates a highly interdisciplinary approach, spanning computer graphics, artificial intelligence, and robotics, as well as the performing arts, traditional character design, and the study of embodiment through psychology, neuroscience, and cognition. A significant portion of this paper is dedicated to examining embodied interaction, nonverbal personality traits, and communication cues—frameworks that enable scientists and artists to create digital characters or agents with enriched embodied personalities.
By framing personality as a mechanism, we argue that it emerges as a response to internal and external stimuli. Therefore, expanding a character’s sensory capabilities enhances both its personality and believability. A digital character—whether human or non-human—is animated through virtual, physical, or implied interactions with these stimuli.
The contributions of this work can be summarised in the following points: (a) driven by the recent advancements in the literature, we proposed an animation model workflow that incorporates both physical and virtual sensors to capture the nonverbal cues of digital characters. While human actors’ and performers’ movements, facial expressions, and biosignals can be captured through motion capture technologies and wearables, virtual sensors enabled by AI and procedural animation can be used to activate the characters’ changes in action and morphology when sensing objects or cues in their virtual environment; (b) we presented the recent advancements in the technological and analytical tools that can potentially be applied in combination to create empathic and believable characters; (c) last but not least, we highlighted the importance of the simulated or acted sensorial reactions of the character to unravel their personality through nonverbal behavior. The framework that is presented and discussed in this work aims to serve as a theoretical tool for future designers. Summarising recent advancements and applications of the framework, in which the nonverbal behavior of the character is enabled or controlled through biosignals, is part of our future work.
As part of the IMAGINE MOCAP project, from which this research emerges, we aim to further explore how motion capture sensors, wearables, and live coding contribute to the development of digital narratives in animation, games, interactive media, and transmedia networks. By integrating engineering and human–computer interaction with the practical and theoretical frameworks of the performing arts—such as puppeteering, acting, and dance—we seek to advance the concept of embodied character design.

Author Contributions

Conceptualization, K.E.-R. and L.K.; methodology, K.E.-R. and L.K.; investigation, K.E.-R., L.K. and V.K.; resources, K.E.-R., L.K., V.K. and A.T.; writing—original draft preparation, K.E.-R. and L.K.; writing—review and editing, K.E.-R., L.K., V.K. and A.T.; visualization, L.K.; supervision, K.E.-R. and S.V.; project administration, K.E.-R. and S.V.; funding acquisition, K.E.-R., S.V. and P.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research is carried out in the framework of the funded project IMAGINE MOCAP (ID: 015668) and is supported by the National Recovery and Resilience Plan, Greece 2.0, and the European Union NextGenerationEU; implementing body: HFRI (Hellenic Foundation for Research and Innovation), Greece, 2024–2026. More information can be found at: https://www.elidek.gr/en/homepage/ (accessed on 29 March 2025). The project IMAGINE MOCAP (ID: 015668) aims to extend the character animation workflow with emphasis on shaping the character’s personality through motion capture, wearable, and live coding technologies. The IMAGINE MOCAP project is a partnership between the University of the Aegean (coordinator), the University of the Peloponnese, and the Ionian University. The team has an interdisciplinary background in computing, game design, animation, and the performing arts, investigating state-of-the-art research outcomes and market technologies to produce a theoretical framework, methodologies, tools, and pipelines for expressing and conveying digital characters’ personality through four media: interactive media, animation production, games, and interactive networked environments. More information can be found at: https://imagine-mocap.aegean.gr/ (accessed on 29 March 2025).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Correction Statement

This article has been republished with a minor correction to an author’s ORCID. This change does not affect the scientific content of the article.

Abbreviations

The following abbreviations are used in this manuscript:
AR	Augmented Reality
BESS	Body Effort Shape System
CGI	Computer-Generated Imagery
ECG	Electrocardiogram
LMA	Laban Movement Analysis
VR	Virtual Reality

References

  1. Kilteni, K.; Groten, R.; Slater, M. The sense of embodiment in virtual reality. Presence Teleoperators Virtual Environ. 2012, 21, 373–387. [Google Scholar] [CrossRef]
  2. Otono, R.; Shikanai, Y.; Nakano, K.; Isoyama, N.; Uchiyama, H.; Kiyokawa, K. The Proteus Effect in Augmented Reality: Impact of Avatar Age and User Perspective on Walking Behaviors; The Virtual Reality Society of Japan: Tokyo, Japan, 2021. [Google Scholar]
  3. Oberdörfer, S.; Birnstiel, S.; Latoschik, M.E. Proteus effect or bodily affordance? The influence of virtual high-heels on gait behavior. Virtual Real. 2024, 28, 81. [Google Scholar]
  4. Merleau-Ponty, M.; Landes, D.; Carman, T.; Lefort, C. Phenomenology of Perception; Routledge: Abingdon, UK, 2013. [Google Scholar]
  5. Clark, A. Re-inventing ourselves: The plasticity of embodiment, sensing, and mind. J. Med. Philos. 2007, 32, 263–282. [Google Scholar] [CrossRef] [PubMed]
  6. Johnston, O.; Thomas, F. The Illusion of Life: Disney Animation; Disney Editions New York: New York, NY, USA, 1981. [Google Scholar]
  7. Lakoff, G.; Johnson, M.; Sowa, J.F. Review of Philosophy in the Flesh: The embodied mind and its challenge to Western thought. Comput. Linguist. 1999, 25, 631–634. [Google Scholar]
  8. Glenberg, A.M. Embodiment as a unifying perspective for psychology. Wiley Interdiscip. Rev. Cogn. Sci. 2010, 1, 586–596. [Google Scholar]
  9. Smith, L.B. Cognition as a dynamic system: Principles from embodiment. Dev. Rev. 2005, 25, 278–298. [Google Scholar]
  10. Pustejovsky, J.; Krishnaswamy, N. Embodied human computer interaction. KI-Künstliche Intell. 2021, 35, 307–327. [Google Scholar]
  11. Serim, B.; Spapé, M.; Jacucci, G. Revisiting embodiment for brain–computer interfaces. Hum.–Comput. Interact. 2024, 39, 417–443. [Google Scholar] [CrossRef]
  12. Kiverstein, J. The meaning of embodiment. Top. Cogn. Sci. 2012, 4, 740–758. [Google Scholar]
  13. Müller, U.; Newman, J.L. The body in action: Perspectives on embodiment and development. In Developmental Perspectives on Embodiment and Consciousness; Lawrence Erlbaum Associates: London, UK, 2008; pp. 313–342. [Google Scholar]
  14. Lakoff, G.; Johnson, M. Metaphors We Live by; University of Chicago Press: Chicago, IL, USA, 2008. [Google Scholar]
  15. Shiratori, T.; Mahler, M.; Trezevant, W.; Hodgins, J.K. Expressing animated performances through puppeteering. In Proceedings of the 2013 IEEE Symposium on 3D User Interfaces (3DUI), Orlando, FL, USA, 16–17 March 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 59–66. [Google Scholar] [CrossRef]
  16. Mou, T.Y. Keyframe or motion capture? Reflections on education of character animation. EURASIA J. Math. Sci. Technol. Educ. 2018, 14, em1649. [Google Scholar] [CrossRef]
  17. Sultana, N.; Peng, L.Y.; Meissner, N. Exploring believable character animation based on principles of animation and acting. In Proceedings of the 2013 International Conference on Informatics and Creative Multimedia, Kuala Lumpur, Malaysia, 4–6 September 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 321–324. [Google Scholar] [CrossRef]
  18. Sheets-Johnstone, M. Embodied minds or mindful bodies? A question of fundamental, inherently inter-related aspects of animation. Subjectivity 2011, 4, 451–466. [Google Scholar] [CrossRef]
  19. Thalmann, D. Physical, behavioral, and sensor-based animation. In Proceedings of the Graphicon 96, St. Petersburg, Russia, 1–5 July 1996; pp. 214–221. [Google Scholar]
  20. De Koning, B.B.; Tabbers, H.K. Facilitating understanding of movements in dynamic visualizations: An embodied perspective. Educ. Psychol. Rev. 2011, 23, 501–521. [Google Scholar] [CrossRef]
  21. Mehrabian, A. Silent Messages; Wadsworth: Belmont, CA, USA, 1971; Volume 8. [Google Scholar]
  22. Mehrabian, A. Nonverbal Communication, 3rd ed.; Aldine Transaction: New Brunswick, NJ, USA, 2009. [Google Scholar]
  23. Birdwhistell, R.L. Introduction to Kinesics: (An Annotation System for Analysis of Body Motion and Gesture); Department of State, Foreign Service Institute: Washington, DC, USA, 1952. [Google Scholar]
  24. Hasson, O. Towards a general theory of biological signaling. J. Theor. Biol. 1997, 185, 139–156. [Google Scholar] [CrossRef] [PubMed]
  25. American Psychological Association. APA Dictionary of Psychology; American Psychological Association: Washington, DC, USA, 2007. [Google Scholar]
  26. Wikimedia Foundation. Nonverbal Communication. 2024. Available online: https://en.wikipedia.org/wiki/Nonverbal_communication (accessed on 14 May 2024).
  27. Frank, M.G. Facial Expressions. Int. Encycl. Soc. Behav. Sci. 2001, 5230–5234. [Google Scholar] [CrossRef]
  28. Gesture|English Meaning-Cambridge Dictionary. Available online: https://dictionary.cambridge.org/dictionary/english/gesture (accessed on 14 August 2024).
  29. Nordquist, R. What Is Paralinguistics? (Paralanguage). 2019. Available online: https://www.thoughtco.com/paralinguistics-paralanguage-term-1691568 (accessed on 6 June 2024).
  30. Battle, D.E. Communication Disorders in a Multicultural and Global Society. In Communication Disorders in Multicultural and International Populations; Mosby: St. Louis, MO, USA, 2012; pp. 1–19. [Google Scholar]
  31. Cherry, K. What Are the 9 Types of Nonverbal Communication? 2023. Available online: https://www.verywellmind.com/types-of-nonverbal-communication-2795397 (accessed on 14 August 2024).
  32. Hinde, R.A.; Royal Society (Great Britain) (Eds.) Non-Verbal Communication; Cambridge University Press: Cambridge, UK, 1972. [Google Scholar]
  33. Parmar, D.; Olafsson, S.; Utami, D.; Murali, P.; Bickmore, T. Designing empathic virtual agents: Manipulating animation, voice, rendering, and empathy to create persuasive agents. Auton. Agents Multi-Agent Syst. 2022, 36, 17. [Google Scholar] [CrossRef]
  34. Ekman, P. Facial Expression and Emotion. Am. Psychol. 1993, 48, 384. [Google Scholar] [CrossRef]
  35. Wiggins, J.S.; Trapnell, P.; Phillips, N. Psychometric and Geometric Characteristics of the Revised Interpersonal Adjective Scales (IAS-R). Multivar. Behav. Res. 1988, 23, 517–530. [Google Scholar] [CrossRef] [PubMed]
  36. Waude, A. Five-Factor Model of Personality. 2017. Available online: https://www.psychologistworld.com/personality/five-factor-model-big-five-personality (accessed on 6 August 2024).
  37. Korn, O.; Stamm, L.; Moeckl, G. Designing authentic emotions for non-human characters: A study evaluating virtual affective behavior. In Proceedings of the 2017 Conference on Designing Interactive Systems, Edinburgh, UK, 10–14 June 2017; pp. 477–487. [Google Scholar]
  38. Pervin, L.A.; John, O.P. Handbook of Personality: Theory and Research, 2nd ed.; Guilford Press: New York, NY, USA, 1999. [Google Scholar]
  39. Bergner, R.M. What Is Personality? Two Myths and a Definition. New Ideas Psychol. 2020, 57, 100759. [Google Scholar] [CrossRef]
  40. Hoffner, C.; Cantor, J. Perceiving and responding to mass media characters. In Responding to the Screen; Routledge: Abingdon, UK, 2013; pp. 63–101. [Google Scholar]
  41. Field, S. Screenplay: The Foundations of Screenwriting; Delta: London, UK, 2005. [Google Scholar]
  42. Laurel, B. Computers as Theater; Addison-Wesley: Reading, MA, USA, 1993. [Google Scholar]
  43. Higgins, T.E.; Scholer, A.A. When Is Personality Revealed? A Motivated Cognition Approach. In Advances in Experimental Social Psychology; The Guilford Press: New York, NY, USA, 2008. [Google Scholar]
  44. Mehrabian, A.; Ferris, S.R. Inference of Attitudes from Nonverbal Communication in Two Channels. J. Consult. Psychol. 1967, 31, 248. [Google Scholar] [CrossRef]
  45. Allan, P. Body Language: How to Read Others’ Thoughts by Their Gestures; Sheldon Press: London, UK, 1995. [Google Scholar]
  46. Darwin, C. The Expression of Emotions in Animals and Man; Murray: London, UK, 1872. [Google Scholar]
  47. Knutson, B. Facial Expressions of Emotion Influence Interpersonal Trait Inferences. J. Nonverbal Behav. 1996, 20, 165–182. [Google Scholar] [CrossRef]
  48. Shariff, A.F.; Tracy, J.L. What Are Emotion Expressions For? Curr. Dir. Psychol. Sci. 2011, 20, 395–399. [Google Scholar]
  49. Tracy, J.L.; Randles, D.; Steckler, C.M. The Nonverbal Communication of Emotions. Curr. Opin. Behav. Sci. 2015, 3, 25–30. [Google Scholar]
  50. Keating, C.F.; Mazur, A.; Segall, M.H. Facial Gestures Which Influence the Perception of Status. Sociometry 1977, 40, 374–378. [Google Scholar]
  51. Keating, C.F.; Mazur, A.; Segall, M.H.; Cysneiros, P.G.; Kilbride, J.E.; Leahy, P.; Wirsing, R. Culture and the Perception of Social Dominance from Facial Expression. J. Personal. Soc. Psychol. 1981, 40, 615. [Google Scholar]
  52. Matsumoto, D.; Kudoh, T. American-Japanese Cultural Differences in Attributions of Personality Based on Smiles. J. Nonverbal Behav. 1993, 17, 231–243. [Google Scholar]
  53. Poggi, I.; Pelachaud, C. Emotional Meaning and Expression in Animated Faces. In Affective Interactions; Springer: Berlin/Heidelberg, Germany, 1999; pp. 182–195. [Google Scholar]
  54. Ellyson, S.L.; Dovidio, J.F.; Fehr, B.J. Visual Behavior and Dominance in Women and Men. In Gender and Nonverbal Behavior; Mayo, C., Henley, N.M., Eds.; Springer: New York, NY, USA, 1981; pp. 63–81. [Google Scholar] [CrossRef]
  55. Emery, N.J. The Eyes Have It: The Neuroethology, Function and Evolution of Social Gaze. Neurosci. Biobehav. Rev. 2000, 24, 581–604. [Google Scholar] [CrossRef]
  56. Argyle, M.; Dean, J. Eye-Contact, Distance and Affiliation. Sociometry 1965, 28, 289–304. [Google Scholar] [CrossRef] [PubMed]
  57. Georgescu, A.L.; Kuzmanovic, B.; Schilbach, L.; Tepest, R.; Kulbida, R.; Bente, G.; Vogeley, K. Neural Correlates of “Social Gaze” Processing in High-Functioning Autism under Systematic Variation of Gaze Duration. NeuroImage Clin. 2013, 3, 340–351. [Google Scholar] [CrossRef]
  58. Efran, J.S.; Broughton, A. Effect of Expectancies for Social Approval on Visual Behavior. J. Personal. Soc. Psychol. 1966, 4, 103–107. [Google Scholar] [CrossRef]
  59. Efran, J.S. Looking for Approval: Effects on Visual Behavior of Approbation from Persons Differing in Importance. J. Personal. Soc. Psychol. 1968, 10, 21–25. [Google Scholar] [CrossRef]
  60. Ho, S.; Foulsham, T.; Kingstone, A. Speaking and Listening with the Eyes: Gaze Signaling During Dyadic Interactions. PLoS ONE 2015, 10, e0136905. [Google Scholar] [CrossRef] [PubMed]
  61. Laban, R.; Lawrence, F. Effort; MacDonald and Evans: London, UK, 1947. [Google Scholar]
  62. Bartenieff, I.; Lewis, D. Body Movement: Coping with the Environment; Routledge: London, UK, 2013. [Google Scholar]
  63. Alaoui, S.F.; Carlson, K.; Cuykendall, S.; Bradley, K.; Studd, K.; Schiphorst, T. How Do Experts Observe Movement? In Proceedings of the 2nd International Workshop on Movement and Computing, MOCO’15, New York, NY, USA, 14–15 August 2015; pp. 84–91. [Google Scholar]
  64. Hackney, P. Making Connections: Total Body Integration Through Bartenieff Fundamentals; Routledge: London, UK, 2003. [Google Scholar]
  65. Wahl, C. Laban/Bartenieff Movement Studies: Contemporary Applications; Human Kinetics: Champaign, IL, USA, 2019. [Google Scholar]
  66. Davis, J. Laban Movement Analysis: A Key to Individualizing Children’s Dance. J. Phys. Educ. Recreat. Danc. 1995, 66, 31–33. [Google Scholar] [CrossRef]
  67. Hankin, T. Laban Movement Analysis: In Dance Education. J. Phys. Educ. Recreat. Danc. 1984, 55, 65–67. [Google Scholar]
  68. Bacula, A.; LaViers, A. Character synthesis of ballet archetypes on robots using laban movement analysis: Comparison between a humanoid and an aerial robot platform with lay and expert observation. Int. J. Soc. Robot. 2021, 13, 1047–1062. [Google Scholar] [CrossRef]
  69. Kougioumtzian, L.; El Raheb, K.; Katifori, A.; Roussou, M. Blazing fire or breezy wind? A story-driven playful experience for annotating dance movement. Front. Comput. Sci. 2022, 4, 957274. [Google Scholar]
  70. Bishko, L. Animation Principles and Laban Movement Analysis: Movement Frameworks for Creating Empathic Character Performances. In Nonverbal Communication in Virtual Worlds; Tanenbaum, J.T., El-Nasr, M.S., Nixon, M., Eds.; ETC Press: Pittsburgh, PA, USA, 2014; pp. 177–203. [Google Scholar]
  71. Blom, L.A.; Chaplin, L.T. The Intimate Act of Choreography; University of Pittsburgh Press: Pittsburgh, PA, USA, 1982. [Google Scholar]
  72. Alaoui, S.F.; Caramiaux, B.; Serrano, M.; Bevilacqua, F. Movement Qualities as Interaction Modality. In Proceedings of the Designing Interactive Systems Conference, DIS’12, Newcastle Upon Tyne, UK, 11–15 June 2012. [Google Scholar] [CrossRef]
  73. Newlove, J.; Dalby, J. Laban for All; Taylor & Francis: New York, NY, USA, 2004. [Google Scholar]
  74. Camurri, A.; Volpe, G.; Piana, S.; Mancini, M.; Niewiadomski, R.; Ferrari, N.; Canepa, C. The Dancer in the Eye: Towards a Multi-Layered Computational Framework of Qualities in Movement. In Proceedings of the MOCO’16: 3rd International Symposium on Movement and Computing, Thessaloniki, Greece, 5–6 July 2016. [Google Scholar] [CrossRef]
  75. Barakova, E.I.; van Berkel, R.; Hiah, L.; Teh, Y.F.; Werts, C. Observation Scheme for Interaction with Embodied Intelligent Agents Based on Laban Notation. In Proceedings of the 2015 IEEE International Conference on Robotics and Biomimetics, Zhuhai, China, 6–9 December 2015. [Google Scholar]
  76. Melzer, A.; Shafir, T.; Tsachor, R.P. How do we recognise emotion from movement? Specific motor components contribute to the recognition of each emotion. Front. Psychol. 2019, 10, 392097. [Google Scholar]
  77. Shafir, T. Modeling Emotion Perception from Body Movements for Human-Machine Interactions Using Laban Movement Analysis. In Modeling Visual Aesthetics, Emotion, and Artistic Style; Springer: Berlin/Heidelberg, Germany, 2023; pp. 313–330. [Google Scholar]
  78. Guo, W.; Craig, O.; Difato, T.; Oliverio, J.; Santoso, M.; Sonke, J.; Barmpoutis, A. AI-Driven human motion classification and analysis using laban movement system. In Proceedings of the International Conference on Human-Computer Interaction, Virtual, 26 June–1 July 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 201–210. [Google Scholar]
  79. Dash, B.; Davis, K. Significance of Nonverbal Communication and Paralinguistic Features in Communication: A Critical Analysis. Int. J. Innov. Res. Multidiscip. Field 2022, 8, 172–179. [Google Scholar]
  80. Ball, G.; Breese, J. Relating Personality and Behavior: Posture and Gestures. In Affective Interactions; Paiva, A., Ed.; Springer: Berlin/Heidelberg, Germany, 2000; pp. 196–203. [Google Scholar] [CrossRef]
  81. Pollick, F.E.; Paterson, H.M.; Bruderlin, A.; Sanford, A.J. Perceiving Affect from Arm Movement. Cognition 2001, 82, B51–B61. [Google Scholar]
  82. Castellano, G.; Villalba, S.D.; Camurri, A. Recognising Human Emotions from Body Movement and Gesture Dynamics. In Proceedings of the International Conference on Affective Computing and Intelligent Interaction, Lisbon, Portugal, 12–14 September 2007; pp. 71–82. [Google Scholar]
  83. Ziegelmaier, R.S.; Correia, W.; Teixeira, J.M.; Simões, F.P. Components of the LMA as a Design Tool for Expressive Movement and Gesture Construction. In Proceedings of the 2020 22nd Symposium on Virtual and Augmented Reality (SVR), Porto de Galinhas, Brazil, 7–10 November 2020. [Google Scholar]
  84. Novick, D.; Gris, I. Building Rapport Between Human and ECA: A Pilot Study. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2014; Volume 8511 LNCS (PART 2), pp. 472–480. [Google Scholar] [CrossRef]
  85. Brixey, J.; Novick, D. Building Rapport with Extraverted and Introverted Agents: 8th International Workshop on Spoken Dialog Systems; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar] [CrossRef]
  86. Hall, J.A.; Coats, E.J.; LeBeau, L.S. Nonverbal Behavior and the Vertical Dimension of Social Relations: A Meta-Analysis. Psychol. Bull. 2005, 131, 898–924. [Google Scholar] [CrossRef]
  87. Chopik, W.J.; Edelstein, R.S.; van Anders, S.M.; Wardecker, B.M.; Shipman, E.L.; Samples-Steele, C.R. Too Close for Comfort? Adult Attachment and Cuddling in Romantic and Parent–Child Relationships. Personal. Individ. Differ. 2014, 69, 212–216. [Google Scholar] [CrossRef]
  88. Simão, C.; Seibt, B. Friendly Touch Increases Gratitude by Inducing Communal Feelings. Front. Psychol. 2015, 6, 815. [Google Scholar] [CrossRef] [PubMed]
  89. Sekerdej, M.; Simão, C.; Waldzus, S.; Brito, R. Keeping in Touch with Context: Non-Verbal Behavior as a Manifestation of Communality and Dominance. J. Nonverbal Behav. 2018, 42, 311–326. [Google Scholar] [CrossRef] [PubMed]
  90. Cherry, K. Color Psychology: Does It Affect How You Feel? 2024. Available online: https://www.verywellmind.com/color-psychology-2795824 (accessed on 14 May 2024).
  91. Giummarra, M.J.; Gibson, S.J.; Georgiou-Karistianis, N.; Bradshaw, J.L. Mechanisms underlying embodiment, disembodiment and loss of embodiment. Neurosci. Biobehav. Rev. 2008, 32, 143–160. [Google Scholar] [CrossRef] [PubMed]
  92. Vicon Motion Systems. Vicon Motion Capture System. 2024. Available online: https://www.vicon.com (accessed on 14 February 2024).
  93. OptiTrack. OptiTrack Motion Capture System. 2024. Available online: https://www.optitrack.com (accessed on 14 February 2024).
  94. Qualisys Motion Capture. Qualisys Motion Capture System. 2024. Available online: https://www.qualisys.com (accessed on 14 February 2024).
  95. Xsens Technologies B.V. Xsens Motion Capture System. 2024. Available online: https://www.xsens.com (accessed on 14 February 2024).
  96. Rokoko Electronics. Rokoko Motion Capture System. 2024. Available online: https://www.rokoko.com (accessed on 14 February 2024).
  97. Cannavò, A.; Bottino, F.; Lamberti, F. Supporting motion-capture acting with collaborative Mixed Reality. Comput. Graph. 2024, 124, 104090. [Google Scholar] [CrossRef]
  98. Theodoropoulos, A.; El Raheb, K.; Kyriakoulakos, P.; Kougioumtzian, L.; Kalampratsidou, V.; Nikopoulos, G.; Stergiou, M.; Baltas, D.; Kolokotroni, A.; Malisova, K.; et al. Performing Personality in Game Characters and Digital Narrative. 2024. Available online: https://www.researchgate.net/publication/388631249_Editorial_Performing_Personality_in_Game_Characters_and_Digital_Narrative (accessed on 10 February 2025).
  99. Sharma, S.; Verma, S.; Kumar, M.; Sharma, L. Use of motion capture in 3D animation: Motion capture systems, challenges, and recent trends. In Proceedings of the 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon), Faridabad, India, 14–16 February 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 289–294. [Google Scholar]
  100. Wibowo, M.C.; Nugroho, S.; Wibowo, A. The use of motion capture technology in 3D animation. Int. J. Comput. Digit. Syst. 2024, 15, 975–987. [Google Scholar] [CrossRef]
  101. Menache, A. Understanding Motion Capture for Computer Animation, 2nd ed.; Morgan Kaufmann: San Francisco, CA, USA, 2010. [Google Scholar]
  102. Thomas, F.; Johnston, O. The Illusion of Life: Disney Animation; Hyperion: New York, NY, USA, 1995. [Google Scholar]
  103. Hanna-Barbera. Jerry’s Heartbeat. 1940. Available online: https://www.youtube.com/watch?v=QHuBdrCOpv8 (accessed on 14 February 2024).
  104. Liu, F.; Park, C.; Tham, Y.J.; Tsai, T.Y.; Dabbish, L.; Kaufman, G.; Monroy-Hernández, A. Significant otter: Understanding the role of biosignals in communication. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–15. [Google Scholar] [CrossRef]
  105. Birringer, J.; Danjoux, M. Wearable technology for the performing arts. In Smart Clothes and Wearable Technology; Elsevier: Amsterdam, The Netherlands, 2023; pp. 529–571. [Google Scholar]
  106. Nelson, E.C.; Verhagen, T.; Vollenbroek-Hutten, M.; Noordzij, M.L. Is wearable technology becoming part of us? Developing and validating a measurement scale for wearable technology embodiment. JMIR mHealth uHealth 2019, 7, e12771. [Google Scholar] [CrossRef]
  107. El-Raheb, K.; Kalampratsidou, V.; Issari, P.; Georgaca, E.; Koliouli, F.; Karydi, E.; Ioannidis, Y. Wearables in sociodrama: An embodied mixed-methods study of expressiveness in social interactions. Wearable Technol. 2022, 3, e10. [Google Scholar] [CrossRef]
  108. Ugur, S.; Bordegoni, M.; Wensveen, S.G.A.; Mangiarotti, R.; Carulli, M. Embodiment of emotions through wearable technology. In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Washington, DC, USA, 28–31 August 2011; Volume 54792, pp. 839–847. [Google Scholar]
  109. El Raheb, K.; Stergiou, M.; Koutiva, G.; Kalampratsidou, V.; Diamantides, P.; Katifori, A.; Giokas, P. Data and Artistic Creation: Challenges and opportunities of online mediation. In Proceedings of the 2nd International Conference of the ACM Greek SIGCHI Chapter, Athens, Greece, 27–28 September 2023; pp. 1–8. [Google Scholar]
  110. Kalampratsidou, V.; El Raheb, K. Body signals as digital narratives: From social issues to digital characters. In Proceedings of the Performing Personality in Game Characters and Digital Narratives Workshop @ CEEGS 2024, Nafplio, Greece, 10–12 October 2024. [Google Scholar]
  111. Starke, S.; Zhang, H.; Komura, T.; Saito, J. Neural state machine for character-scene interactions. ACM Trans. Graph. 2019, 38, 178. [Google Scholar] [CrossRef]
  112. Yoshida, N.; Yonemura, S.; Emoto, M.; Kawai, K.; Numaguchi, N.; Nakazato, H.; Hayashi, K. Production of character animation in a home robot: A case study of Lovot. Int. J. Soc. Robot. 2022, 14, 39–54. [Google Scholar] [CrossRef]
  113. Rosas, J.; Palma, L.B.; Antunes, R.A. An Approach for Modeling and Simulation of Virtual Sensors in Automatic Control Systems Using Game Engines and Machine Learning. Sensors 2024, 24, 7610. [Google Scholar] [CrossRef]
  114. Feldotto, B.; Morin, F.O.; Knoll, A. The neurorobotics platform robot designer: Modeling morphologies for embodied learning experiments. Front. Neurorobotics 2022, 16, 856727. [Google Scholar] [CrossRef]
  115. Egger, M.; Ley, M.; Hanke, S. Emotion recognition from physiological signal analysis: A review. Electron. Notes Theor. Comput. Sci. 2019, 343, 35–55. [Google Scholar]
  116. Kim, J.; André, E. Multi-channel biosignal analysis for automatic emotion recognition. In Proceedings of the BIOSIGNALS 2008-International Conference on Bio-inspired Systems and Signal Processing, INSTICC, Funchal, Portugal, 28–31 January 2008; pp. 241–247. [Google Scholar]
  117. Guo, H.W.; Huang, Y.S.; Lin, C.H.; Chien, J.C.; Haraikawa, K.; Shieh, J.S. Heart rate variability signal features for emotion recognition by using principal component analysis and support vectors machine. In Proceedings of the 2016 IEEE 16th International Conference on Bioinformatics and Bioengineering (BIBE), Taichung, Taiwan, 31 October–2 November 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 274–277. [Google Scholar]
  118. Valenza, G.; Lanatà, A.; Scilingo, E.P.; De Rossi, D. Towards a smart glove: Arousal recognition based on textile electrodermal response. In Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, 31 August–4 September 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 3598–3601. [Google Scholar]
  119. Villarejo, M.V.; Zapirain, B.G.; Zorrilla, A.M. A stress sensor based on Galvanic Skin Response (GSR) controlled by ZigBee. Sensors 2012, 12, 6075–6101. [Google Scholar] [CrossRef] [PubMed]
  120. Giomi, A. A Phenomenological Approach to Wearable Technologies and Viscerality: From embodied interaction to biophysical music performance. Organised Sound 2024, 29, 64–78. [Google Scholar]
  121. Wei, L.; Wang, S.J. Motion tracking of daily living and physical activities in health care: Systematic review from designers’ perspective. JMIR mHealth uHealth 2024, 12, e46282. [Google Scholar] [CrossRef]
  122. Kulic, D.; Croft, E.A. Affective state estimation for human–robot interaction. IEEE Trans. Robot. 2007, 23, 991–1000. [Google Scholar]
  123. Ekman, P. The directed facial action task. In Handbook of Emotion Elicitation and Assessment; Oxford University Press: Oxford, UK, 2007; Volume 47, p. 53. [Google Scholar]
  124. Damasio, A.R. The somatic marker hypothesis and the possible functions of the prefrontal cortex. Philos. Trans. R. Soc. Lond. Ser. B Biol. Sci. 1996, 351, 1413–1420. [Google Scholar]
  125. Zhang, Q.; Chen, X.; Zhan, Q.; Yang, T.; Xia, S. Respiration-based emotion recognition with deep learning. Comput. Ind. 2017, 92, 84–90. [Google Scholar]
  126. Barrett, L.F. Solving the emotion paradox: Categorization and the experience of emotion. Personal. Soc. Psychol. Rev. 2006, 10, 20–46. [Google Scholar] [CrossRef]
Figure 1. Simplified model of how physical and digital environments interact to shape the behavior of a digital character.
Figure 2. A conceptual schema describing nonverbal communication and behavior.
Figure 3. Visualising Ekman’s universal emotions [34] by combining different facial elements to build facial expressions.
Figure 4. Laban’s Basic Effort Actions. Panel (A) visualises Laban’s effort dynamosphere and panel (B) lists Laban’s Eight Basic Effort Actions. Source: [69].
Figure 5. Types of spatial zones for different levels of intimacy, as defined by Dash and Davis [79].
Figure 6. Visualising the inside-out exaggerations of internal functions or feelings in character animations. All examples are from animations in the public domain.
Figure 7. Model of our proposed workflow for an embodied perspective in digital animation.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
