Article

Designing for a Wearable Affective Interface for the NAO Robot: A Study of Emotion Conveyance by Touch

1 Department of Applied IT, University of Gothenburg, Box 100, SE-405 30 Gothenburg, Sweden
2 Department of Information Technology, Uppsala University, Box 256, 751 05 Uppsala, Sweden
3 School of Informatics, University of Skövde, Box 408, 541 28 Skövde, Sweden
4 Department of Chemistry and Chemical Engineering, Chalmers University of Technology, SE-412 96 Gothenburg, Sweden
5 The Swedish School of Textiles, University of Borås, S-501 90 Borås, Sweden
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2018, 2(1), 2; https://doi.org/10.3390/mti2010002
Submission received: 9 November 2017 / Revised: 5 January 2018 / Accepted: 7 January 2018 / Published: 20 January 2018

Abstract
We here present results and analysis from a study of affective tactile communication between human and humanoid robot (the NAO robot). In the present work, participants conveyed eight emotions to the NAO via touch. In this study, we sought to understand the potential for using a wearable affective (tactile) interface, or WAffI. The aims of our study were to address the following: (i) how emotions and affective states can be conveyed (encoded) to such a humanoid robot, (ii) what are the effects of dressing the NAO in the WAffI on emotion conveyance and (iii) what is the potential for decoding emotion and affective states. We found that subjects conveyed touch for longer duration and over more locations on the robot when the NAO was dressed with WAffI than when it was not. Our analysis illuminates ways by which affective valence, and separate emotions, might be decoded by a humanoid robot according to the different features of touch: intensity, duration, location, type. Finally, we discuss the types of sensors and their distribution as they may be embedded within the WAffI and that would likely benefit Human-NAO (and Human-Humanoid) interaction along the affective tactile dimension.

1. Introduction

Robotic technology is quickly advancing, with robots entering both professional and domestic settings. A shift towards socially interactive robots can be seen in their increased application in a variety of roles as social and behavioural facilitators, for example, in the area of elderly care (e.g., [1]) and in human interaction therapy (e.g., [2,3]). As interaction between humans and robots is becoming more variegated, an increasing interest is emerging in designing robots with human-like features and qualities that enable interaction with humans in more intuitive and meaningful ways [4,5]. Touch, one of the most fundamental aspects of human social interaction [6], has started to receive interest in human-robot interaction (HRI) research (for an overview see, e.g., [7,8]), and it has been argued that enabling robots to “feel”, “understand”, and respond to touch in accordance with human expectations would enable a more intuitive interaction between humans and robots [8].
In the present work, we investigate how humans convey emotions via touch to a socially interactive humanoid robot (NAO, [9]). (The authors acknowledge that the NAO robot may not be the most human-like robot, but it is nevertheless designed for, and often used in, human-interaction contexts. In this sense we refer to NAO as ‘humanoid’. An alternative label might be ‘anthropomorphic’ robot.) The purpose is to inform future development efforts of tactile sensors, concerning where they need to be located, what they should be able to sense, and how human touch should be interpreted, so as to promote affective and personalized human-robot interaction. Of specific interest is the long-term research goal of utilizing ‘smart’ textiles (minimally, pressure-sensitive textiles) on robots for promoting tactile sensing in robots. We contend that affective tactile interaction benefits from robots having visually and tactilely affording properties. In the research presented in this article, we investigate human touch tendencies using a wearable affective interface (WAffI) covering much of the body of a humanoid (NAO) robot. The prototype here is not embedded with sensors but is rather used to gauge whether the robot’s surface, as a physical interface, has an impact on human affective touch interaction. We evaluate how and where humans touch the robot for different emotions and also to what extent the information provided may be accurately decoded by the robot. Human interaction with the robot wearing the WAffI is compared to that with the standard NAO (no WAffI), allowing us to study whether the WAffI has an effect on how humans behave towards the robot. We adapt the human-human tactile affective interaction study of Hertenstein et al. [10] to our human-robot interaction study to provide a benchmark of natural interactive performance. This work constitutes a necessary endeavour into understanding to what extent, and how, smart textiles can provide an affective tactile interface between human and robot.

2. Background

The sense of touch is a channel of communication that, compared to the facial and vocal channels (see e.g., [11,12]), has received little attention in HRI [8] or in other scientific disciplines, such as the field of Psychology [13]. Touch is fundamental in social interaction and serves as a foundation for creating and maintaining trust and interpersonal relationships [14]. It has also been shown that touch can provide a nonverbal means for the communication of emotions and that discrete emotions can be successfully conveyed, and accurately decoded, through the use of physical touch alone [10,15].
Research has revealed that people are inclined to interact with robots through touch [16,17], and spontaneous exhibitions of affective touch, such as elderly persons or children hugging robots like Telenoid [18] or Kismet [19], suggest that physical touch plays an important role in HRI. Several attempts have been made to furnish robots with touch, or tactile, interaction capabilities. Most notable has been the development of small, animal-shaped robotic companions with full-body sensing, designed to detect the affective content of social interaction. Examples of such non-humanoid robots include the Huggable [20] and the Haptic Creature [21,22]. Empowering these robots with affective touch has been argued to offer a valuable contribution to rehabilitation and therapy, especially in places where natural forms of touch therapy, such as interaction with animals, are unavailable [22]. There are in fact a few non-humanoid social robots that are touch-sensitive and can recognize touch patterns and a certain number of touch gestures (e.g., [20,22]). However, none of these robotic projects have yet solved the problem of accurate emotion recognition [22].
With regard to humanoid robots, it has been argued that the humanoid form might elicit expectations of a higher degree of social intelligence which makes the capability of recognizing affective touch even more important [16]. In a study of multimodal interaction with a humanoid robot, touch was considered significantly more important for conveying affection to the robot than distancing, body posture, and arm gestures [16], which suggests that the fundamental role of tactile interaction in interpersonal relationships goes beyond human-human interaction and extends to human-robot interaction.
Much focus of tactile sensing and haptics has been placed on improving robotic dexterity (e.g., gripping ability, as for [23]), while robot whole-body wearables have typically been deployed to increase both robotic safety–reducing the risk of damage on hard surfaces–and human safety–reducing the risk of contact with the robot’s hard surface. Yogeswaran et al. [24], in their survey, also refer to the use of ‘e-skin’ (electronic skin) for sensors detecting potentially harmful chemical, biological and thermal change. Furthermore, they highlight the need for flexible, stretchable skin so as to promote safe and natural handling by human interactors.
In order to facilitate interaction via touch, the robot needs tactile sensors that are able to perceive and interpret human touch behaviours. Different types of transduction methods have been used such as optical transduction and piezoelectric for dynamic tactile sensing on deformable surfaces (e.g., [25]) as well as capacitive, piezoresistive and magnetic, among others (cf. [24]).
Many humanoid robots have a plastic or metal exterior that humans may experience as hard or cold to touch, and therefore unpleasant. Textiles, on the other hand, provide a promising means to promote human-robot interaction while permitting the use of tactile sensors. Humans tend to experience textiles positively, depending on the qualities of the fabric such as softness, warmth, chicness and colour [26]. As another example, when clothes were used to modify the appearance of the humanoid robot ARMAR-IIIb and participants were studied hugging the robot, the clothed robot was shown to have a positive effect, with less discomfort and higher acceptance of the robot [27].
Textiles have been employed for providing the iCub robot with a wearable interface. The development of a dielectric layer covering a mesh of capacitance sensor patches for the iCub [28,29] has served to provide a safe layer for robots (and for humans against the robot’s surface) that doubles as a compliant ‘skin’. However, this has not been exploited for affective interaction. There are several levels of affective textile-handling experience: physiological and psychological levels, which are personal and immediate, and social and ideological levels, for example, the “message” the wearer of a certain textile “sends” to others, such as status [30]. The characteristics of textiles make them useful in a wide range of areas whilst having the potential to evoke positive experiences in humans. Thus, if such textile robot interfaces can be embedded with sensors, then new possibilities for tactile and haptic interaction between the human and the robot may arise. On this basis, smart textiles provide interesting opportunities [31,32]. Smart textiles are “conductive fabrics and yarns which incorporate or allow microcontroller logic (along with sensors and actuators) to be integrated into garments” (p. 10, [31]). Such fabric could be used on a robot’s body as a wearable interface, giving the user control of the interaction and providing the possibility to adapt the interface in accordance with the user’s current need.
Wearable sensors are increasingly important in ubiquitous computing/intelligent systems aiming to gather information on the state and performance of the wearer, e.g., to track posture, movements, or even physical and emotional state. Research into ‘smart’ wearables has taken many forms, while a trend is emerging towards platforms that use multiple sensor types in synchrony, e.g., an accelerometer, GSR (galvanic skin response) sensor and temperature sensor [33,34]. Smart wearables can be classified according to the type of processing they permit: (i) passive smart textiles—able only to sense the environment and user; (ii) active smart textiles—able not only to sense but also to react to environmental stimuli; (iii) very smart textiles—able not only to sense and react but also to adapt [35]. Actuators, which are required for the reactive capability of active smart textiles, typically entail the use of control units or specific human interface functions integrated into the textiles, e.g., push buttons (capacitive patches). Smartwatches have provided another area of interest in wearables (cf. [33]). These devices are general-purpose computers, much like smartphones, but their comparatively miniaturized, wearable form, together with an array of built-in sensors, potentially allows additional functionality, e.g., efficiently picking up the location of their users, monitoring aspects of health and fitness, and providing haptic feedback to users without the requirement for constant monitoring. Such functionality is, however, presently constrained by size, which limits the hardware that can be accommodated. Smart sensing may also have application in relation to the so-called Internet of Things (IoT; the perspective that sensors connect non-technical, real-world objects to the Internet). It has been suggested [34] that the increasingly low cost of sensors allows for the tracking of much information on hitherto non-digital aspects of life, including human bodily variables such as temperature, muscle activity, blood flow and brain activity. As a result, large stores of data can be utilized to improve human ‘ecosystems’. Ecologically relevant (IoT) sensing devices take the form of wearable watches, wrist bands, eye wear (glasses), as well as textiles. The use of self-tracking gadgets, clinical remote monitoring, wearable sensor patches, Wi-Fi scales etc., involves multiple processes such as data generation, information creation, meaning-making and action-taking. This relates to what [34] describes as several layers: the Hardware Sensor Platform layer, the Software Processing layer, the human-readable Information Visualization layer, and the human-usable Action-Taking layer.
At present, wearable sensors in textile garments and accessories are mainly represented by embedded conventional electronic devices, such as metal wires, strain gauges, MEMS, LEDs and batteries [36]. The textile material does not in itself take part in or constitute any functionality but is merely a substrate or vehicle. Due to recent advances in textile technology and materials research, textile sensors have emerged as a new alternative to established electronic components in wearable applications. Textile sensors are particularly useful in wearable applications where wearing comfort and user acceptance are essential. Textile wearables have the advantage over many other types of tactile interfaces in the practicality of their use: they are easily removable, inexpensive and, importantly, they afford touch through a pleasant appearance. An improved integration of smart functions into textile structures is especially relevant for the healthcare sector; a number of studies have examined textile sensors for monitoring, e.g., respiration, heart activity and body movement [37,38]. The sports industry is also in focus, where textile pressure sensors and sensor arrays are studied as a means to register precise details, for example, about the stance of a person on a snowboard. We propose that, similarly, textile sensors may be worn by a robot to act as an interface and register the pressure of touch, as well as properties such as its location.
Tactile sensing also requires processing algorithms so that the touch information can garner a meaningful response in the robot. The relevant information concerns not just pressure sensing and contextual interpretation but also knowledge about where, how often, for how long and how fast the robot is touched. Studies with humans and human-like robotic ‘mannequins’ (cf. [39]), investigating where and how humans would like to touch humanoid robots, have been carried out. There have also been studies of social touch on localized areas of a mannequin arm as well as of human-animal affective robot touch [40,41]. In human-human interaction [10,15] it was found that blindfolded humans can reliably decode (discriminate between) at least eight different emotions: five of the six primary emotions, namely fear, anger, disgust, happiness and sadness (surprise was not evaluated), and three further pro-social emotions: gratitude, sympathy and love. Participants in the studies were evaluated as encoder-decoder dyads, where the role of the encoder was to express the eight emotions through touch without verbalizing which emotion was being communicated. The experimenters used annotation techniques to then evaluate touch according to intensity, duration, location and type, so as to assess patterns of tactile emotion conveyance.
With inspiration from the human-human interaction research performed by Hertenstein et al. [10], the work reported here elaborates on the communication of emotions via touch and, more specifically, investigates how emotions are conveyed to a small humanoid robot (NAO) and whether textiles on the robot’s body have an impact on human touch behaviours. For socially interactive robots to be accepted as part of our everyday life, they require the capability to recognize people’s social behaviours and respond appropriately. Affective touch, being fundamental to human communication and crucial for human bonding, thereby provides a potentially natural form of interaction between humans and social robots (see e.g., [16,19]). It is therefore an important consideration when designing embodied agents (robots) that are able to engage in meaningful and intuitive interaction with human beings.
The remainder of the article is organised as follows. Section 3 describes the methodology of the experiment. In Section 4, the analysis and results are reported. We break down the result section into: (i) encoder analysis, according to the four properties (intensity, duration, location, type) assessed by [10]; (ii) decoder analysis, by evaluating whether classification algorithms can decode emotions according to a subset of the aforementioned encoding properties. This provides us with clues as to how robots could potentially use touch information and what types of sensors should be used on the robot and, furthermore, how they should be distributed. Section 5 provides a discussion of the research results and concludes by outlining future work in relation to imbuing the NAO robot with smart sensors based on smart textiles technology.

3. Method

3.1. A Wearable Affective Interface (WAffI)

We have developed a Wearable Affective Interface (WAffI), which, in its current prototypic state, consists of removable garments, fitted using Velcro, that cover different parts of the NAO body. At present it does not contain sensors (that could be used to decode the tactile information being conveyed). The purpose of the current investigation, rather, has been to provide information as to how such ‘smart’ sensors could be fitted, e.g., the types and locations for such sensors, according to human interaction both with and without the WAffI. The WAffI is designed to be a textile that is: (i) practical—its parts do not overlap with joints and can easily be removed; (ii) skin-tight—smart sensors can potentially be embedded and encode touch-sensitive information, e.g., intensity, movement, pressure; (iii) touch affording—the WAffI should afford touch, i.e., be soft and not unpleasant to look at; (iv) neutral to specific emotions—it was considered important not to bias specific human interactions for the purpose of our investigations. On the basis of (iii) we chose a relatively soft (woolen) fabric, while regarding (iv), we chose an emotion-‘neutral’ colour, i.e., grey. NAO clad in the WAffI according to the above-listed properties is visible in Figure 1. Note, NAO’s existing sensors, on-off button and charge socket (back) remain visible/uncovered. Moreover, we placed less emphasis on below-torso tactile interaction; thus, NAO’s trousers were not ‘skin tight’.
The purpose of the investigation was not to evaluate different types of garments and their interactive potential. Rather, we sought to evaluate whether a given garment, conducive to the fitting of touch sensors, would afford interaction at least to the same level as when the robot was without the garment. Such an investigation allows us to establish a proof of principle; evaluating the ideal properties of such a garment was beyond the scope of the current work. We also sought to evaluate whether the information picked up by the robot wearing a given garment, i.e., the one we chose, could potentially allow the affective tactile information being transmitted to be decoded. Thus, our research question addresses whether sensor-suitable garments can, in principle, be utilized on the NAO robot. In order to evaluate this, we carried out a study comparing our human-robot tactile interaction performance to that of a human-human interaction study [10].

3.2. Participants

Sixty-four volunteers (32 men and 32 women) participated in the experiment, the majority of whom ranged in age from 20 to 30 years. They were recruited at the University of Skövde in Sweden and received a movie ticket for their participation. The participants were randomly assigned to one of two conditions: WAffI-On (clothed robot) or WAffI-Off (‘naked’ robot) (see Figure 1). Gender was balanced across the two conditions (16 males and 16 females per condition). Potential gender differences identified in the study were investigated elsewhere [42] and are not the focus of the current paper; however, we include gender as an independent variable within our analysis of touch duration so as to discount interaction effects between the gender and clothing variables. Different subjects were used for the WAffI-On and WAffI-Off conditions. A within-subjects design was considered but, for practical reasons, i.e., the amount of time needed per subject, we opted for a between-subjects design.

3.3. Procedure and Materials

The study took place in the Usability Lab at the University of Skövde, which consists of a medium-sized testing room, furnished as a small apartment, and an adjacent control room. The testing room is outfitted with three video cameras and a one-way observation glass. The control room allows researchers to unobtrusively observe participants during studies and is outfitted with video recording and editing equipment. The participants entered the testing room to find the NAO robot standing on a high table (see Figure 2). Following [10], eight different emotions were displayed serially on individual slips of paper in a randomized order. These emotions consisted of: (i) five basic emotions: Anger, Disgust, Fear, Happiness, Sadness; and (ii) three pro-social emotions: Gratitude, Sympathy, Love. The emotions selected were those evaluated by [10] in a human-human tactile interaction study.
The participants were instructed to read each emotion from the paper slips, think about how they wanted to communicate that specific emotion, and then make contact with the robot’s body using any form of touch they found appropriate to convey the emotion to the robot. To preclude the possibility of providing non-tactile cues to the emotion being communicated, the participants were advised not to talk or make any sounds. Participants were not time-limited, as a time limit was considered to potentially impose a constraint on the naturalness or creativity of the emotional interaction.
One of the experimenters was present in the room with the participant at all times and another experimenter observed from the control room. All tactile contact between the participant and the robot was video recorded. Subjects were informed of both observation types. At the end of the experimental run, the participant answered questions regarding his or her subjective experience of interacting with the robot via touch.

Coding Procedure

The video recordings of tactile displays were analyzed and coded on a second-by-second basis using the ELAN (4.5.0) annotation software (https://tla.mpi.nl/tools/tla-tools/elan/). During the coding procedure, the experimenters were naïve to the emotion being communicated but retroactively labeled annotation sets according to each of the eight emotions. Following [10], there were four main touch components that were coded for during individual subject interactions: touch intensity, touch duration, touch type, and touch location. Duration of touch was calculated for each emotion and each touch episode was assigned a level of intensity, i.e., an estimation of the level of human-applied pressure, which was defined as light, moderate, or strong in relation to the movement of the robot’s surface/body. Twenty-three specific types of touches were coded: squeezing, stroking, rubbing, pushing, pulling, pressing, patting, tapping, shaking, pinching, trembling, poking, hitting, scratching, massaging, tickling, slapping, lifting, picking, hugging, finger interlocking, swinging and tossing. The contact points were selected appropriately from the diagrams of the NAO robot shown in Figure 3.
In pilot studies and over initial subject recordings, for any given subject, two investigators compared annotations for the emotion interactions. This comparison was based on 5 recordings from the pilot study and 4 subject recordings from the experimental run, annotated by both investigators and used as a means of establishing inter-rater agreement for the coding practice. Once this was done, all video recordings were divided between three investigators and annotated on the basis of this agreed-upon coding practice; the initial annotations, used mainly as practice material, were replaced by final annotations, which are the ones reported here. There were a few cases of equivocal touch behaviours that required the attention (inter-rater agreement) of all investigators to ensure appropriate coding. However, these instances were treated as consultations, and separate annotations were therefore not part of the work procedure (unlike in the pilot study). This approach was applied in annotations for touch intensity, type and location, while touch duration could be accurately assessed using the above-mentioned ELAN annotation software. ELAN was used to evaluate all aspects of touch; nevertheless, the annotators (three of the authors), based on the inter-rater agreement of the pilot studies and procedure, utilized a fairly coarse measure for encoding touch type. Only one instance of touch for a given type was recorded for each emotion—this was used to guard against high inter-subject variance. The initial touch was the one recorded, whilst duration was registered as that of the touch interaction over the conveyed emotion (which could terminate with a different touch from the initial one). In practice, it was observed that subjects produced either a single touch or several in parallel. Furthermore, for certain touch types there was a degree of ambiguity as to when one touch instance had finished; for example, patting, tapping and tickling all involve multiple touch-removal instances in order to be classified as that type—where one instance of patting ends and another starts was considered ambiguous, and we therefore sought to keep annotation as simple as possible. For similar reasons, touch duration and intensity were only given a single valuation over an interaction. Duration was thus recorded from the initial touch onset until the final touch offset; again, this coarse measure was used to avoid interpretive ambiguity. Intensity was also given a single rating (averaged across the interaction) over the interaction duration. In practice, although these measures might be considered coarse, it was found that subjects most typically produced interactions that were fairly unambiguous, i.e., single touch-type interactions over relatively short durations.
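To make the coding scheme concrete, the sketch below shows how per-episode touch features of the kind described above (duration from initial onset to final offset, a single intensity rating, unique locations counted once, one instance per touch type) could be aggregated from annotation rows exported from ELAN. This is an illustrative Python sketch with hypothetical column and file names, not the scripts used in the study.

```python
import pandas as pd

# Hypothetical export of ELAN annotations: one row per annotated touch, with
# columns: subject, condition, emotion, onset_s, offset_s,
# intensity ('light'/'moderate'/'strong'), location, touch_type.
annotations = pd.read_csv("annotations.csv")

def episode_features(rows: pd.DataFrame) -> pd.Series:
    """Collapse all touches of one subject/emotion episode into coarse features."""
    return pd.Series({
        # duration: initial touch onset to final touch offset
        "duration_s": rows["offset_s"].max() - rows["onset_s"].min(),
        # a single (modal) intensity rating per episode
        "intensity": rows["intensity"].mode().iloc[0],
        # each touched region counted once per emotion
        "n_locations": rows["location"].nunique(),
        # one instance per touch type per emotion
        "touch_types": sorted(rows["touch_type"].unique()),
    })

features = (annotations
            .groupby(["subject", "condition", "emotion"])
            .apply(episode_features)
            .reset_index())
print(features.head())
```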

4. Results

This section seeks to achieve the following: (1) evaluate human-NAO robot tactile interaction performance based on the methodology of [10] and, where appropriate, provide a comparison; (2) evaluate differences in tactile interaction where the NAO was (a) clothed—‘dressed’ in the WAffI detachable garments—or (b) non-clothed. We break down evaluations (1) and (2) according to an analysis of human ‘encoding’ of emotions, and of machine-learning-based ‘decoding’ of emotions. The former was carried out to assess human-typical interactions with the robot; the latter was carried out to assess the potential for smart sensor garments to detect particular emotions or affective properties of the interaction.
We break down the results section into: (i) Encoder Results, as evaluated by the experimenters using the annotation methods described in the previous section; (ii) Decoder Results. While in (ii) the NAO was not programmed to sense and decode the affective interactions, we provide a classification of affective valence and separate emotions using the different annotated properties of touch in (i). This was done in order to assess the potential for NAO to decode emotions based on smart textiles, i.e., a WAffI imbued with smart sensors. The dataset was previously utilized in [42], but in the current investigation the analysis, both for encoding and decoding of emotions, focuses on whether the NAO was clothed or not (WAffI-On vs. WAffI-Off).

4.1. Encoder Results

We focus our analysis on two aspects: (i) affective tactile interaction with the NAO robot; (ii) affective tactile interaction as it is influenced by the Wearable Affective Interface (WAffI). More specifically, we look at four different aspects of affective tactile interaction that the subjects engaged in:
  • touch intensity—light intensity, moderate intensity, strong intensity;
  • touch duration—length of time that the subjects interacted with the NAO;
  • touch type—the quality of the tactile interaction, e.g., press or stroke;
  • touch location—the place on the NAO that is touched.
These four properties of touch were selected as they were studied by [10] on human-human interaction, whose approach serves as a benchmark for evaluating the naturalized human-robot interaction in our study.

4.1.1. Intensity

Tactile interaction was evaluated according to the following four interval scales:
  • No Interaction (subjects refused or were not able to contemplate an appropriate touch),
  • Low Intensity (subjects gave light touches to the NAO robot with no apparent or barely perceptible movement of NAO),
  • Medium Intensity (subjects gave moderate intensity touches with some, but not extensive, movement of the NAO robot),
  • High Intensity (subjects gave strong intensity touches with a substantial movement of the NAO robot as a result of pressure to the touch).
The number of each of the four intervals for the emotions is displayed in Figure 4 for WAffI-On and WAffI-Off conditions. Plots concern total number of ratings over all the participants. Mean number of ratings per emotion could not be analyzed as only one touch intensity per emotion was recorded by the experimenters.
It can be observed from Figure 4 that in the WAffI-Off condition there were more instances of non-touch where subjects showed a disinclination to produce any type of tactile interaction with the NAO. We suggest that this difference between WAffI-On and -Off conditions may indicate that the WAffI increases inclination for subjects to engage in tactile interaction with the NAO.
We carried out a chi-squared test comparing the frequencies of the four intensity categories over the two conditions. The result, χ²(3, N = 64) = 11.51, p < 0.01, showed a significant difference between the WAffI-On and WAffI-Off conditions regarding the recorded intensity of touch. This can be understood from the consistent tendency of subjects to touch the robot more when it was wearing the WAffI (other than for Gratitude), i.e., fewer instances of No Interaction. One constraint on our analysis of this tactile domain is that subjects often appeared disinclined to touch with too strong an intensity for fear of damaging the robot. However, it could also be argued that such reticence would exist if the NAO were replaced by a comparably small human infant.
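For readers wishing to reproduce this kind of test, the comparison amounts to a chi-squared test of independence on a 2 × 4 contingency table (condition × intensity category), which gives the 3 degrees of freedom reported above. The Python sketch below uses scipy with placeholder counts; the actual frequencies are those plotted in Figure 4.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: WAffI-On, WAffI-Off; columns: No Interaction, Low, Medium, High.
# These counts are placeholders, not the study's data (see Figure 4).
counts = np.array([
    [ 4, 88, 122, 42],   # WAffI-On
    [17, 94, 108, 37],   # WAffI-Off
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```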

4.1.2. Duration

The duration of tactile interaction for a given emotion was recorded from the initial touch to the final touch before the participant turned to the next slip (signalling the next emotion conveyance episode). Figure 5 plots the means and standard errors of these durations (emotion conveyance episodes) for each emotion, for both the WAffI-On and WAffI-Off conditions.
It can be observed from Figure 5 that subjects in the WAffI-On condition interacted with the NAO robot for longer durations on average than those in the WAffI-Off condition for 7 of the 8 emotions, the exception being Anger. While gender effects were not a critical part of this investigation and were studied elsewhere [42]—with the finding that duration of touch was significantly influenced by gender—we were interested to see whether gender and clothing would produce interaction effects. Using a three-way (mixed design) ANOVA with independent variables of clothing, i.e., WAffI-On/Off (between subjects), gender (between subjects) and emotion type (within subjects), we found a significant main effect of clothing: F(1,64) = 12.08, p < 0.01; a significant main effect of gender: F(1,64) = 12.63, p < 0.01; and a significant main effect of emotion: F(7,64) = 2.63, p < 0.01. There were no significant interaction effects between the three independent variables (see Appendix B for details). Post hoc Bonferroni tests found a significant pairwise difference in duration only for Sadness > Disgust, p < 0.05.
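As an aside, one way to run this kind of three-way mixed design in Python is to approximate it with a linear mixed model that has a random intercept per subject (accounting for the repeated emotion factor), as sketched below with statsmodels. Column and file names are hypothetical; this is not necessarily how the authors' analysis was performed.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject x emotion episode, with
# columns: subject, clothing ('WAffI-On'/'WAffI-Off'), gender ('F'/'M'),
# emotion, duration_s.
df = pd.read_csv("durations_long.csv")

# Random intercept per subject accounts for the repeated (within-subject)
# emotion factor; clothing and gender are between-subject factors.
model = smf.mixedlm("duration_s ~ C(clothing) * C(gender) * C(emotion)",
                    data=df, groups="subject")
result = model.fit()
print(result.summary())
```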
Interestingly, these results contrasted somewhat with Hertenstein et al.’s [10] human-human experiment, in which Fear was the emotion conveyed for the longest duration. Sadness and Sympathy were joint second highest in duration in that study and so, as in our human-robot investigation, were relatively enduring. Anger was conveyed for the shortest duration in their study, as it was in the WAffI-On condition of our investigation.

4.1.3. Location

Figure 6 displays the mean number of locations touched during interaction, separated by emotion and condition, where individual touched regions per emotion were only recorded once.
As is visible in the figure, Disgust yielded the most limited interaction for both WAffI-On and WAffI-Off conditions, with a mean of fewer than two locations touched. Love resulted in the most plentiful interaction overall with a mean of greater than 4 regions involved in each interaction for the two conditions.
Using a two-way (mixed design) ANOVA with independent variables of clothing (between subjects) and emotion type (within subjects), we found a significant main effect of emotion type: F(7,64) = 11.12, p < 0.01, but no main effect of clothing (WAffI-On versus WAffI-Off): F(1,64) = 3.58, p = 0.059. There was no significant interaction effect between the two independent variables: F(7,64) = 0.19, p = 0.987. Bonferroni-corrected tests found: Love > Fear, Love > Anger, Love > Happiness, Love > Gratitude, Love > Sympathy, Happiness > Disgust, Sadness > Disgust (all at p < 0.01).
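This two-way mixed design (one between-subjects factor, one within-subjects factor) maps directly onto pingouin's mixed ANOVA function. The sketch below, with hypothetical column and file names and assuming pingouin ≥ 0.5.2 for `pairwise_tests`, illustrates an equivalent analysis in Python rather than the authors' actual scripts.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: subject, clothing (between-subjects),
# emotion (within-subjects), n_locations (distinct regions touched).
df = pd.read_csv("locations_long.csv")

aov = pg.mixed_anova(data=df, dv="n_locations", within="emotion",
                     subject="subject", between="clothing")
print(aov[["Source", "DF1", "DF2", "F", "p-unc"]])

# Bonferroni-corrected pairwise comparisons between emotions.
posthoc = pg.pairwise_tests(data=df, dv="n_locations", within="emotion",
                            subject="subject", padjust="bonf")
print(posthoc[["A", "B", "p-corr"]])
```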
Figure 7 shows a heat map of touch location for the WAffI-On and WAffI-Off conditions, with intensity reflecting the number of touches and the percentage of all touches given in brackets. The exact number of touches for each location is found in Appendix A (Figure A1 and Figure A2). Striking differences in the locations touched between the WAffI-On and WAffI-Off conditions were not found. Nevertheless, an example of a difference (for Gratitude) is shown in Figure 8. Here it can be seen that subjects had a high tendency to touch the right hand of the NAO—not observed for other emotions. This was observed to involve a hand-shaking gesture by subjects. In the WAffI-Off condition there was also a higher percentage of touch on the shoulders compared to the WAffI-On condition. This is also backed up by the absolute number of touches on the shoulders over all subjects: 2 (WAffI-On) vs. 13 (WAffI-Off). One explanation is that in the WAffI-On condition the shoulders remain exposed; subjects may have been more inclined to touch the clothed areas as a result of this clothed versus non-clothed body part contrast. No such contrast exists in the WAffI-Off condition, where the visual affordance of the shoulders may instead have invited more tactile interaction.

4.1.4. Type

The 20 touch types used (of the 23 checked for during annotation)—taken from the original Hertenstein et al. [10] study—are presented in Figure 9. Pulling, trembling and tossing are never observed during any interaction. Participants use squeezing (29%), stroking (16%) and pressing (14%) most frequently. Happiness stands out by involving a relatively large percentage (12%) of swinging the robot’s arms, not observed during other emotions. Only in the case of Disgust is another touch type dominant (Push). See Figure A3 in Appendix A for details. The WAffI-On condition observably produces more touch-type interactions, particularly for Squeezing and Pressing. However, in general, the WAffI-On condition does not differ markedly from the WAffI-Off condition in terms of touch type. This suggests that the WAffI may not qualitatively affect the tactile interaction between human and NAO robot in terms of the touch types used. Using a chi-squared analysis, we found no significant difference between the two conditions over all emotions: χ²(22, N = 64) = 21.5, p = 0.999. This was also true when each individual emotion was compared (results not shown).

4.2. Decoder Results

Unlike the Hertenstein et al. [10] experiment upon which our HRI study was methodologically based, the NAO robot was a passive recipient of touch, i.e., it lacked the sensory apparatus to decode the emotions conveyed. Nevertheless, the patterns of affective tactile interaction observed during experimentation provide clues as to the critical dimensions of touch requisite to disambiguating the emotional or affective state being conveyed by the encoder. This in turn can inform robotics engineers as to which types of sensors, and which locations for them, are most suitable for a NAO robot seeking to interpret human affective states. Similarly, insights into the types of decoder algorithms required for interpreting sensor signals may be gleaned. As in the previous sub-section, we focus our analysis here on: (i) affective tactile interaction with the NAO robot and, more specifically, (ii) affective tactile interaction as it is influenced by the WAffI.
Figure 10 visualizes a support vector machine (SVM) classification of emotional valence—specifically, the valence of emotion conveyance. We used Matlab for the 2-dimensional SVM classification. We analyzed mean values over 8 instances (one mean value for each emotion) for the two dimensions: (i) number of different locations touched and (ii) duration of touch, in order to classify the emotions, and compared the WAffI-On and WAffI-Off conditions. We viewed Sadness as a positive-affect touch emotion since subjects were observed, and described themselves, conveying emotion to the NAO as if it were sad and thereby needed consoling.
As can be seen, qualitatively, the results are similar for the WAffI-On and WAffI-Off conditions, allowing a linear separation between positively and negatively conveyed emotions. Emotions are, however, distributed over a broader 2-dimensional range in the WAffI-On condition, suggesting that it may be easier to decode separate emotions in this condition. In general, negatively valenced emotions can be decoded as being of shorter duration and lower distribution of touch compared to positively valenced emotions, and this is particularly true of Disgust, as compared to Love and Sadness.
Figure 10 provides a useful visualization of the data, but the analysis is based only on the means for each emotion in each condition. Figure 11 provides the confusion matrices for WAffI-On (left) and WAffI-Off (right) for all data points (32 entries for each of the 8 emotions in each condition), using leave-one-out cross-validation for a one-vs.-one multi-class support vector machine analysis (28 binary classifiers for the binary class combinations). We chose leave-one-out validation due to the relatively small data set (256 models in total). One-vs.-one classification was found to be more accurate than one-vs.-all classification (results not shown). Classification is comparable in the two conditions: for WAffI-On the mean classification accuracy of 21.5% is somewhat above the 12.5% chance level, and the WAffI-Off condition (22.3%) actually provides a slightly higher classification accuracy than the WAffI-On condition. Overall, while Anger, Love, Disgust and Sympathy classify observably above chance in both conditions, a marked difference can be observed with respect to Sadness, Gratitude, Fear and Happiness, which are classified relatively poorly overall. Over both conditions, Love (column 8 of both plots) and Anger (column 2 of both plots) are the only emotions that are classified correctly more often than they are confused with any other specific emotion. On the other hand, Gratitude is confused with Anger to the same extent as Anger is correctly classified in the WAffI-Off condition (row 2, right plot). Gratitude is the most ‘confused’ emotion overall.
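The pipeline described here (one-vs.-one multi-class SVM with leave-one-out cross-validation and a confusion matrix) was run in Matlab; the sketch below shows an equivalent setup in Python with scikit-learn, where SVC uses a one-vs.-one scheme internally for multi-class problems. The feature and label files are hypothetical placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical arrays: X has one row per emotion conveyance episode with the
# two features (number of locations touched, touch duration in seconds);
# y holds the corresponding emotion labels (8 classes).
X = np.load("touch_features.npy")     # shape (n_episodes, 2)
y = np.load("emotion_labels.npy")     # shape (n_episodes,)

# SVC handles multi-class with a one-vs.-one scheme (28 binary classifiers
# for 8 classes); standardizing inside the pipeline means the scaler is
# refit on every leave-one-out training split.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
y_pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())

labels = sorted(set(y))
print("LOO accuracy:", accuracy_score(y, y_pred))
print(confusion_matrix(y, y_pred, labels=labels))
```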
The confusion matrix results can be compared to the original data from Hertenstein et al. [10]—see Figure 12. Consistent with Hertenstein et al., Anger is the most accurately classified emotion in the WAffI-On condition and when averaged over both conditions, while it is the third most accurately classified emotion in the WAffI-Off condition (after Disgust and Love). Love (the third most correctly classified emotion in Hertenstein et al.) is also relatively well classified in the WAffI-On and WAffI-Off conditions (third overall, after Anger and Disgust, and observably above the 12.5% chance level). What is clear is that classification performance in the two conditions under investigation is considerably worse than in the Hertenstein et al. study. This may in part be due to the absence of the touch-type dimension in our decoding analysis—for example, Gratitude often appeared to be signalled by a hand-shaking gesture, which was not amenable to decoding analysis in our investigation (based on limited touch ‘type’ data). Consistent with our results, we also found that subjects typically verbalized a degree of confusion as to how to convey certain (similar) emotions, in particular Fear. This appears not to have been a problem to the same extent in the Hertenstein et al. investigation. Exploiting the fourth dimension of touch type might have facilitated classification in the Hertenstein et al. human-human decoding relative to our own decoding analysis in the human-robot experiment.
In order to evaluate further whether emotions are good candidates for decoding, we conducted a k-means cluster analysis over all the data points. Figure 13 shows the result of this analysis (left) over all the data points (outliers not displayed) and the corresponding silhouette plot (right), depicting how close each point of one cluster is to neighbouring clusters (the higher the value, the more distant). We further evaluated how many clusters the data most naturally fit into according to average silhouette values (see Figure 13, right). With two clusters (k = 2) over the two dimensions (duration, location number), the mean distance (silhouette value) is 0.6611. With three clusters (k = 3) the value drops to 0.3922, and with four clusters the mean is 0.3088. Therefore, the data most naturally separate into two clusters. The centroids (white crosses) in Figure 13, left, indicate that the clusters can be roughly differentiated into (i) low touch duration and low number of locations touched (region 2) and (ii) relatively high touch duration and high number of locations touched (region 1).
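Choosing the number of clusters by mean silhouette value, as done here, can be sketched as follows in Python with scikit-learn (the authors' analysis was run elsewhere, and the feature file name is a placeholder).

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical feature matrix: (touch duration, number of locations touched).
X = np.load("touch_features.npy")
X_std = StandardScaler().fit_transform(X)

# Compare candidate cluster counts by mean silhouette value;
# a higher value indicates better-separated clusters.
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_std)
    print(f"k = {k}: mean silhouette = {silhouette_score(X_std, labels):.4f}")
```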
Based on the above, we evaluated emotion conveyance valence, i.e., a two-class analysis (as for Figure 10, Sadness was considered a positively conveyed emotion). As in the cluster analysis, the positively valenced emotions were typically of high duration and high number of locations touched, while the opposite was true of the negatively conveyed emotions (Fear, Anger, Disgust).
Figure 14 shows the confusion matrix for a tree ensemble (200 learners, gentle boost) classification analysis, using leave-one-out validation (256 folds), over duration and location number of touch, as well as intensity. This analysis compares the pooled data for Fear, Anger and Disgust with that for Love, Sadness, Sympathy, Gratitude and Happiness. In the WAffI-On condition, the former (negative emotion conveyance group) is correctly classified in 59.4% of cases, while the latter (positive emotion conveyance group) is correctly classified in 72.5% of cases, where 50% classification accuracy represents chance. In the WAffI-Off condition, these figures drop to 57.3% and 62.5%, respectively. Furthermore, in both conditions negatively valenced emotions are predominantly not confused with positively valenced emotions, and vice versa. Note that classification over only the two dimensions of touch duration and location number was slightly less accurate than when touch intensity was also included in the analysis (results not shown).
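The tree-ensemble step was run as a 200-learner gentle-boost model in Matlab; scikit-learn has no GentleBoost, so the Python sketch below uses gradient boosting as an approximate stand-in for the same pooled-valence, leave-one-out setup. Feature and label files are hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import confusion_matrix

# Hypothetical arrays: three features per episode (touch duration, number of
# locations touched, intensity rating) and the conveyed emotion label.
X = np.load("touch_features_3d.npy")   # shape (n_episodes, 3)
emotions = np.load("emotion_labels.npy")

# Pool emotions into conveyance-valence classes as in the text.
negative = {"Fear", "Anger", "Disgust"}
y = np.array(["negative" if e in negative else "positive" for e in emotions])

# 200 boosted decision trees as a stand-in for Matlab's GentleBoost ensemble.
clf = GradientBoostingClassifier(n_estimators=200, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())

print(confusion_matrix(y, y_pred, labels=["negative", "positive"]))
```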
To summarize this sub-section, our decoder classification analyses evaluated (i) individual emotions and (ii) affective states based on valence-combined emotions—which we refer to as conveyance-valence based emotions. The dimensions/properties of location (distribution/number of parts touched) and duration of touch produced results that hint at the most promising means for a NAO robot to decode a human’s conveyance of affective state. Intensity was also important, particularly in relation to Anger. This seems to be particularly the case when the NAO is suited in the WAffI. In general, classification accuracy for individual emotions was lower than for the Hertenstein et al. [10] human-human experiment against which this human-robot experiment was compared. Some qualitative similarities exist: as for Hertenstein et al., Anger was the emotion that (averaged over the two conditions) was most accurately classified, and Love was the third most accurately classified emotion (also third in Hertenstein et al.), well above the chance rate of classification and predominantly not confused with other emotions. Note that while Disgust was, averaged over both conditions, the second-best classified emotion, it was not so well classified in the WAffI-On condition and was typically confused with Anger. These results may indicate that Anger and Love are most easily classifiable and that the use of a smart textile might affect individual emotion classification (above all in relation to Disgust). Furthermore, WAffI-Off generally provided better classification accuracy when looking at individual emotions. However, when pooling emotions according to conveyance valence, we found that classification accuracy in the WAffI-On condition was actually higher overall than in the WAffI-Off condition, particularly with respect to the positively conveyed emotions. This might suggest that subjects had a more positive feeling when interacting with the WAffI-clad NAO robot and may also hint at why Disgust was less well classified in this condition. On the basis of the above summary, it is debatable whether the presence of the WAffI garments detracts from classification accuracy, though they may contribute to increased interaction. However, at the level at which affective touch information is most reliably interpretable (when emotions are pooled according to extreme interaction-type differences), the WAffI, if anything, appears to facilitate classification accuracy.
For NAO robots to be able to decode specific emotions and affective states, training/calibration to specific individuals may be required. Decoding according to the properties mentioned, given individual calibration, nevertheless appears feasible to above-chance levels of accuracy, at least for some emotions (particularly Anger and Love) and in relation to particular affective states (positively versus negatively conveyed emotions). Additional touch-type information might facilitate classification for some emotions; for example, Gratitude, of all emotions, was frequently conveyed by a handshake, which could provide a means to decode that particular emotion. In the final section, we discuss how such properties might be imbued in ‘smart’ textiles (textiles with sensors) such that the NAO (or another humanoid robot) could decode human affect and emotion based on sensed tactile information. We also discuss how tactile decoding in combination with other sensed modalities of emotion (e.g., visual) can potentially be achieved more accurately and used particularly in relation to joint, goal-directed tasks.

5. Discussion

In this article, we have reported and analyzed the findings of an experiment detailing how humans convey emotional touch to a humanoid (NAO) robot and the potential for decoding emotions conveyed to the robot by touch. The main aims of this work were to: (i) evaluate how emotion is conveyed (encoded) through tactile interaction with a humanoid robot, with reference to an existing human-human tactile interaction study; (ii) evaluate how such encoding is affected by an exemplar prototype wearable interface that has the potential to be imbued with tactile sensors; and (iii) evaluate the potential for tactile conveyance of emotion to be decoded in a way that does not place strong demands on tactile sensing (i.e., only requires registering a few dimensions/properties of touch). The experiment closely followed the methodological procedure of [10] and compared touch behaviour when subjects interacted with a NAO robot dressed in a Wearable Affective Interface (WAffI-On condition) and when different subjects interacted with a ‘naked’ NAO (WAffI-Off condition). Our main findings are as follows:
  • Participants interacted with different intensities of touch (stronger overall) when the NAO was wearing the WAffI than when it was not.
  • Participants interacted for a longer duration when NAO was in the WAffI than when NAO was not.
  • Emotions were most simply and reliably classified when pooled into negative conveyance-valence (Fear, Anger and Disgust) versus positive conveyance-valence (Love, Sadness, Sympathy, Gratitude and Happiness) affective states.
  • Emotions, when pooled according to valence conveyance, were more accurately classified when the NAO was in the WAffI than when not.
  • Individual emotions were not as accurately classified as compared to the human-human study of Hertenstein et al. [10].
  • Individual emotions were marginally less accurately classified when the robot was wearing the WAffI garments than when without.
  • Anger and Love were the first and third most easily decodable emotions overall in this human-robot study as was the case in the Hertenstein et al. [10] human-human interaction study.
Furthermore, we found particular differences with respect to the conveyance of emotions on the robot regardless of clothing condition:
  • Participants touched the NAO robot for longer duration when conveying Sadness (typically as an attachment-based consoling gesture) than when conveying Disgust (a rejection-based emotion).
  • Participants touched more locations on the NAO robot when conveying Love than when conveying all emotions other than Sadness; Sadness was conveyed over more locations than for Disgust.
  • Squeezing was the most frequently occurring touch type over emotions.
  • Left arm and right arm were the most frequently touched locations on the NAO.
  • Gratitude was typically conveyed by a handshake gesture.
In summary, similarities were found in this human-robot interaction study compared to Hertenstein et al.’s [10] human-human interaction study but our decoder analysis suggests that classifying individual emotions might be more challenging in human-robot contexts. In general, the amount and duration of touch with the NAO was not detrimentally affected by the robot wearing the WAffI (if anything subjects interacted more) and while individual emotions proved harder to decode in the WAffI-On condition, this was not the case when emotions were pooled into high versus low touch duration/amount of contact. As a proof of principle we thereby consider that garments of the type that could be fitted with sensors should not unduly distract humans from interacting with the NAO.
Previous work [43] has shown the potential for sensors embedded in textiles to produce decodable signals for the NAO robot. Textile sensors have the potential to provide humanoid robots, such as the NAO, with touch-affording and touch-sensitive interfaces appropriate to affective tactile interaction. It is notable that participants touched NAO for longer durations and over more locations in the WAffI-On condition than in the WAffI-Off condition. The present findings can also be viewed in relation to the existing positioning of tactile sensors on the NAO robot. The NAO has seven tactile sensors: three on the scalp, two on the back of the hands and two bump sensors on the feet. While the hands are frequently involved in tactile interaction, the scalp accounted for less than two percent of all tactile interaction in the present study. No tactile sensors are placed on the arms, which were the most frequently touched locations.
From the perspective of Systems Design and Human-Robot Interaction, it is worth noting that three touch types, squeezing, stroking and pressing, constituted more than half (59%) of all tactile interaction in the study. While a detailed analysis of the information content in each touch component is beyond the scope of the present work, the findings suggest that encoding and decoding of these three touch types may be important for successful human-robot tactile interaction with respect to the conveyance and decoding of at least some emotions. Furthermore, the number of different locations touched, the duration of touch and also touch intensity proved to be particularly informationally critical in the decoding of emotions from tactile interaction. Textile pressure sensors can be realized as resistive, capacitive or suitable piezoelectric structures. Both resistive and capacitive sensors respond to deformation (pressure or strain) with a measurable change in impedance, while piezoelectric structures generate a voltage when deformed. The type of sensor selected will affect decoding potential, e.g., in relation to sensitivity and noise, and it also affects the appearance of the textile. Such sensors, particularly when utilized in the textiles in arrays, therefore have the potential to encode different intensities of touch for different dynamic types (e.g., stroke, massage), providing a key sensory means for decoding affective tactile information.
Notwithstanding this potential, textile sensors pose a challenge on the computing side, as they tend to be less predictable than conventional electronic components. Textile sensing materials often produce a significant amount of noise; their material properties, in combination with the inherent compliance of textile structures, cause them to mechanically wear out over time. Similarly, their performance can depend on environmental influences such as temperature and humidity, and their physical structure is much more difficult to model and predict before actual production, compared to standard sensors. The use of textile sensors in a pervasive computing system therefore relies on an appropriate recognition algorithm that is able to process the sensor data in a meaningful way. The number of locations and the duration of touch would thus need to be decoded by computational processes operating on textile sensor inputs. The fact that our study demonstrated the potential for decoding emotions based on the three touch dimensions of number of locations, duration and intensity suggests that the burden on smart sensor capability might be reduced by the use of suitable processing algorithms. It may be rather more important that the wearable interface (or e-skin) affords—in terms of visual and tactile properties—interactions of an affective nature. This is a key selling point of adopting textile wearables for use in affective-based human-robot tactile interaction. Studies indicate that emotion decoding is more accurate, and more typical, when multi-modal sensors that pick up specific emotional and gestural information are added ([39], also see [44]). Such multi-modal encoding and decoding allows for contextual nuance in the affective interaction. We propose that, given suitable calibration to the interacting individual, a three-dimensional tactile decoding space can be sufficient for the successful communication of at least the affective valence state key to particular forms of tactile interaction.
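As a minimal illustration of the kind of recognition step described above, the Python sketch below smooths a noisy (time × taxel) pressure array, thresholds it to detect contact, and extracts the three dimensions used in our decoding analysis (duration, number of locations, mean intensity). The array layout, threshold and sampling rate are illustrative assumptions, not a specification of an actual WAffI sensor layer.

```python
import numpy as np

def touch_features(pressure: np.ndarray, fs: float,
                   contact_thresh: float = 0.1, win: int = 5) -> dict:
    """Extract coarse touch features from a (time x taxel) pressure array.

    pressure: raw readings, one column per textile pressure taxel; fs is the
    sampling rate in Hz. The threshold and window are illustrative and would
    need calibration to the actual textile sensor characteristics.
    """
    # Moving-average filter to suppress high-frequency sensor noise.
    kernel = np.ones(win) / win
    smoothed = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, pressure)

    contact = smoothed > contact_thresh          # boolean (time x taxel)
    any_contact = contact.any(axis=1)
    if not any_contact.any():
        return {"duration_s": 0.0, "n_locations": 0, "mean_intensity": 0.0}

    onset = int(np.argmax(any_contact))
    offset = len(any_contact) - int(np.argmax(any_contact[::-1]))
    return {
        # initial contact onset to final contact offset, as in the coding scheme
        "duration_s": (offset - onset) / fs,
        # number of taxels (locations) registering contact at least once
        "n_locations": int(contact.any(axis=0).sum()),
        # mean pressure over contact samples as a proxy for intensity
        "mean_intensity": float(smoothed[contact].mean()),
    }

# Example with synthetic data: 10 s at 50 Hz over 32 taxels.
rng = np.random.default_rng(0)
fake = rng.normal(0.0, 0.02, size=(500, 32))
fake[100:250, 3] += 0.5                          # a simulated press on taxel 3
print(touch_features(fake, fs=50.0))
```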
Of further relevance is how the NAO (or a given robot) should perceive and respond to affective touch. Our results indicate that affective valence (positively versus negatively conveyed emotions) may be detectable from the duration of touch and the distribution of locations touched. Such perception naturally requires calibration to individual humans and thus the appropriate deployment of processing (learning) algorithms that ease the burden on the sensors themselves. A given categorization of an affective state could inform a robot as to how to respond in a given HRI task. For example, if affective interaction is perceived as attachment oriented (long-duration, location-distributed tactile interaction), this might signal overall task success or failure. Positive, feedback-based interaction (e.g., conveying gratitude, or perhaps happiness) may indicate that the task is going well and should be continued in a similar manner. The authors of [45] (see also [46] for a general review of emotions in communication) have suggested that communication and perception of different primary and social emotions may be critically informative of how a task (interactive or individual) is going and how to adapt one's behaviour accordingly.
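For illustration, a two-feature valence classifier of the kind implied here might be sketched as follows; the training samples are invented, and the sketch uses scikit-learn rather than the Matlab pipeline reported for Figure 10.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Sketch of a valence classifier over the two dimensions highlighted above
# (duration of touch, number of locations touched). It mirrors the spirit,
# not the detail, of the SVM analysis reported in the paper; the toy samples
# below are invented for illustration only.

# Each sample: [duration_s, n_locations]; label: 1 = positively conveyed, 0 = negative.
X_train = [[6.2, 4], [5.1, 3], [7.8, 5],   # e.g., gratitude/love-like interactions
           [1.4, 1], [2.0, 2], [1.1, 1]]   # e.g., anger/disgust-like interactions
y_train = [1, 1, 1, 0, 0, 0]

valence_clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
valence_clf.fit(X_train, y_train)

# A long, spatially distributed touch episode is classified as positive valence.
print(valence_clf.predict([[5.5, 4]]))   # -> [1]
```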
The emotions that participants were most reluctant to express were the negative ones. This behaviour has been observed in prior research: “we found that participants did not expect to show hate or disgust toward our robot and expected their own actions to mostly express affection, which bears similarity to a result observed by [21]. We think this suggests a generally positive bent to how people expect to interact with robots” [16]. Nevertheless, tactile interaction might be particularly relevant to human-robot interaction in the domain of Joint Action (cf. [47,48,49]), where interaction on a goal-directed task may benefit from human actors conveying both negative (rejection-based) and positive (e.g., gratitude) feedback. Again, with reference to [45], communicating emotions such as anger/frustration may be critically important for informing interactors as to how the task is perceived to be going and how to respond accordingly (e.g., approach the task in a different way, or try harder).
With respect to the limitations of our human-robot interaction study, results may be dependent on the interaction context and the placement of the NAO. For example, placing the NAO on the floor is likely to change the interaction pattern, at least in terms of where the robot is touched. Another limitation lies in the age of the participants. It would, for example, be interesting to compare these results to those of children interacting with the robot. Children constitute one important target group and may be less constrained by social conventions. This may be particularly relevant to furthering the understanding of tactile conveyance of intimate emotions such as love, where adults may feel comparatively inhibited. Other properties of touch were not explored in this study; for example, speed of interaction was not evaluated but has previously been found to be an important factor in emotional interaction [50]. Going beyond the properties of touch investigated in the human-human empirical reference study is an endeavour of clear importance for understanding the possibilities of human-robot tactile emotional interaction. Providing greater controls, or more restrictive instructions to subjects regarding emotional interaction, might also have reduced variance in the results and improved decoding classification accuracy. For example, we noted that Fear and Sadness were somewhat contentious in relation to how humans interpreted their conveyance. In both cases, subjects demonstrated ambiguity as to whether they should convey their own emotion or provide a compensatory gesture to the robot (e.g., consolation in the case of Sadness). Subjects were instructed to touch according to their own emotion but nevertheless often appeared confused as to how to convey these emotions. More emphasis in the instructions might have been useful, and such interpretative confusion might have contributed to the relatively poor classification of these emotions; for example, Sadness appears to have been interpreted as a consoling response to the robot’s emotion. This confusion may stem from the difficulty of communicating one’s own Fear and Sadness by touch. For empirical reasons, we sought to remain faithful to the methodology reported by Hertenstein et al. [10] (and clarified in personal communication). For reasons of time and space constraints we did not venture far beyond this empirical comparison. However, further studies would benefit from identifying critical dimensions of touch that could yield a better degree of machine-learning-based decoding accuracy. Personality differences were also not studied here.
Finally, follow-up research will evaluate in detail how well affective states and specific emotions can be classified given multiple sensory modalities, e.g., touch and visual expression (facial, postural). Studies have typically shown that multiple dimensions of expression facilitate classification analyses [39,44]. We have demonstrated the potential for tactile interaction to provide much information regarding emotional conveyance; nevertheless, emotional interaction in naturalistic contexts is almost always context-specific (e.g., dependent on the type of task) and multi-sensory in nature. Exploring how touch may be utilized within this multi-sensory and contextual framework would provide a greater understanding of the nature of affective tactile interaction in both human-human and human-robot interaction. This would also require a more detailed study of the relationship between touch location and touch type. Of further critical importance is to carry out studies that compare different visual and tactile affordances of the WAffI, e.g., different colours and textiles. We chose the colour grey in this study so as to provide a neutral emotional affordance to the WAffI, reducing bias or preference towards any given emotion. Ultimately, we seek to embed within the WAffI a relatively small number of textile sensors that will furnish the NAO with the information necessary to decode human emotions and respond appropriately. We are also investigating algorithms that can potentially decode such affective information (e.g., [51]).
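As a sketch of how such feature-level fusion might be set up, the example below concatenates tactile features with hypothetical visual-expression features before classification; the features, data and classifier choice are assumptions for illustration and do not correspond to any analysis carried out in this study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative sketch of feature-level fusion for the multi-modal follow-up
# discussed above: tactile features (location count, duration, intensity)
# concatenated with hypothetical visual-expression features before
# classification. All feature names and data are invented assumptions.

def fuse_features(tactile, visual):
    """Concatenate per-trial tactile and visual feature vectors."""
    return np.concatenate([np.asarray(tactile, float), np.asarray(visual, float)])

# Toy data: 4 trials, 3 tactile + 2 visual features each; labels use the
# emotion coding of Figure 11 (e.g., 2 = Anger, 3 = Disgust, 6 = Gratitude, 8 = Love).
tactile_feats = [[3, 5.2, 0.4], [1, 1.3, 0.9], [4, 6.0, 0.3], [1, 0.8, 0.8]]
visual_feats = [[0.7, 0.1], [0.2, 0.8], [0.9, 0.1], [0.1, 0.9]]
labels = [6, 2, 8, 3]

X = np.array([fuse_features(t, v) for t, v in zip(tactile_feats, visual_feats)])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(clf.predict(X[:1]))   # predicts the emotion label of the first toy trial
```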

Acknowledgments

The authors would like to thank Matthew Hertenstein for the important discussions regarding this work. This work has been carried out as part of the project “Design, Textiles and Sustainable Development” funded by the Region Västra Götaland (VGR) funding agency.

Author Contributions

For the present work, R.L. carried out most of the writing, analysis and data collation; he also contributed to data collection. R.A. and B.A. carried out most of the data collection and collation and also contributed to the writing. E.B. contributed to the analysis, and A.L. contributed to the writing.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Tables of Touch Location and Touch Type Data

Figure A1 and Figure A2 provide raw data on touch location per emotion (over all participants), and Figure A3 provides raw data on touch type per emotion (over all participants). Key for the location abbreviations (Figure A1 and Figure A2): Sc = Scalp, Fa = Face, RS = Right Shoulder, LS = Left Shoulder, RA = Right Arm, LA = Left Arm, RH = Right Hand, LH = Left Hand, BW = Below Waist, Ch = Chest, Oc = Occiput, LE = Left Ear, RE = Right Ear, Ba = Back, LW = Left Waist, RW = Right Waist.
Figure A1. Instances/percentage of locations touched over all emotions and all subjects in the WAffI-On condition. 
Figure A2. Instances/percentage of locations touched over all emotions and all subjects in the WAffI-Off condition. 
Figure A3. Instances/means of types of touch over all emotions and all subjects in the WAffI-On (green shaded rows) and WAffI-Off (blue shaded rows) conditions. Key: Squ = squeeze, str = stroke, rub = rub, pus = push, pul = pull, pre = press, pat = pat, tap = tap, sha = shake, pin = pinch, tre = tremble, pok = poke, hit = hit, scr = scratch, mas = massage, tic = tickle, sla = slap, lif = lift, pic = pick, hug = hug, fin = finger interlocking, swi = swing, tos = toss.

Appendix B. Encoder Statistical Comparisons

Figure A4 shows the output of the three-way ANOVA measuring differences in duration of touch over emotions for the different conditions (note that here we also evaluated gender as an independent variable).
Figure A4. Three-way ANOVA for duration of touch. Key: g1 = clothing variable, g2 = gender, g3 = emotions.
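For readers who wish to reproduce this type of analysis outside Matlab, the sketch below runs a three-way ANOVA over the same factors (clothing condition, gender, emotion) with synthetic touch durations; it illustrates the model structure only and is not the analysis pipeline used for Figure A4.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Illustrative three-way ANOVA with the factors used in Appendix B
# (clothing condition, gender, emotion) and duration of touch as the
# dependent variable. The durations below are synthetic placeholders.

rng = np.random.default_rng(0)
rows = []
for clothing, gender, emotion in itertools.product(
        ["WAffI-On", "WAffI-Off"], ["F", "M"],
        ["Anger", "Disgust", "Fear", "Love",
         "Gratitude", "Happiness", "Sadness", "Sympathy"]):
    for _ in range(3):  # three synthetic participants per cell
        base = 4.0 if clothing == "WAffI-On" else 3.0   # assumed longer touch when clothed
        rows.append({"clothing": clothing, "gender": gender, "emotion": emotion,
                     "duration": base + rng.normal(scale=1.0)})
df = pd.DataFrame(rows)

# Main effects of the three grouping variables (type-II sums of squares).
model = ols("duration ~ C(clothing) + C(gender) + C(emotion)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```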

References

  1. Broekens, J.; Heerink, M.; Rosendal, H. Assistive social robots in elderly care: A review. Gerontechnology 2009, 8, 94–103. [Google Scholar] [CrossRef]
  2. Shamsuddin, S.; Yussof, H.; Ismail, L.; Hanapiah, F.A.; Mohamed, S.; Piah, H.A.; Zahari, N.I. Initial response of autistic children in human-robot interaction therapy with humanoid robot NAO. In Proceedings of the 2012 IEEE 8th International Colloquium on Signal Processing and its Applications (CSPA), Malacca, Malaysia, 23–25 March 2012; pp. 188–193. [Google Scholar]
  3. Soler, M.V.; Aguera-Ortiz, L.; Rodriguez, J.O.; Rebolledo, C.M.; Munoz, A.P.; Perez, I.R.; Ruiz, S.F. Social robots in advanced dementia. Front. Aging Neurosci. 2015, 7. [Google Scholar] [CrossRef]
  4. Dautenhahn, K. Socially intelligent robots: Dimensions of human-robot interaction. Philos. Trans. R. Soc. B Biol. Sci. 2007, 362, 679–704. [Google Scholar] [CrossRef] [PubMed]
  5. Silvera-Tawil, D.; Rye, D.; Velonaki, M. Interpretation of social touch on an artificial arm covered with an EIT-based sensitive skin. Int. J. Soc. Robot. 2014, 6, 489–505. [Google Scholar] [CrossRef]
  6. Montagu, A. Touching: The Human Significance of the Skin, 3rd ed.; Harper & Row: New York, NY, USA, 1986. [Google Scholar]
  7. Dahiya, R.S.; Metta, G.; Sandini, G.; Valle, M. Tactile sensing-from humans to humanoids. IEEE Trans. Robot. 2010, 26, 1–20. [Google Scholar] [CrossRef]
  8. Silvera-Tawil, D.; Rye, D.; Velonaki, M. Artificial skin and tactile sensing for socially interactive robots: A review. Robot. Auton. Syst. 2015, 63, 230–243. [Google Scholar] [CrossRef]
  9. Aldebaran. Aldebaran by SoftBank Group. 43, rue du Colonel Pierre Avia 75015 Paris. Available online: https://www.aldebaran.com (accessed on 9 November 2017).
  10. Hertenstein, M.J.; Holmes, R.; McCullough, M.; Keltner, D. The communication of emotion via touch. Emotion 2009, 9, 566–573. [Google Scholar] [CrossRef] [PubMed]
  11. Ekman, P. Facial expression and emotion. Am. Psychol. 1993, 48, 384. [Google Scholar] [CrossRef] [PubMed]
  12. Scherer, K.R. Vocal communication of emotion: A review of research paradigms. Speech Commun. 2003, 40, 227–256. [Google Scholar]
  13. Field, T. Touch; MIT Press: Cambridge, MA, USA, 2014. [Google Scholar]
  14. Kosfeld, M.; Heinrichs, M.; Zak, P.J.; Fischbacher, U.; Fehr, E. Oxytocin increases trust in humans. Nature 2005, 435, 673–676. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Hertenstein, M.J.; Keltner, D.; App, B.; Bulleit, B.A.; Jaskolka, A.R. Touch communicates distinct emotions. Emotion 2006, 6, 528. [Google Scholar] [CrossRef] [PubMed]
  16. Cooney, M.D.; Nishio, S.; Ishiguro, H. Importance of Touch for Conveying Affection in a Multimodal Interaction with a Small Humanoid Robot. Int. J. Hum. Robot. 2015, 12, 1550002. [Google Scholar] [CrossRef]
  17. Lee, K.M.; Jung, Y.; Kim, J.; Kim, S.R. Are physically embodied social agents better than disembodied social agents?: The effects of physical embodiment, tactile interaction, and people’s loneliness in human–robot interaction. Int. J. Hum.-Comput. Stud. 2006, 64, 962–973. [Google Scholar] [CrossRef]
  18. Ogawa, K.; Nishio, S.; Koda, K.; Balistreri, G.; Watanabe, T.; Ishiguro, H. Exploring the natural reaction of young and aged person with Telenoid in a real world. J. Adv. Comput. Intell. Intell. Inform. 2011, 15, 592–597. [Google Scholar] [CrossRef]
  19. Turkle, S.; Breazeal, C.; Dasté, O.; Scassellati, B. Encounters with Kismet and Cog: Children respond to relational artifacts. Digit. Media Transform. Hum. Commun. 2006, 15, 1–20. [Google Scholar]
  20. Stiehl, W.D.; Lieberman, J.; Breazeal, C.; Basel, L.; Lalla, L.; Wolf, M. Design of a therapeutic robotic companion for relational, affective touch. In Proceedings of the IEEE International Workshop Robot and Human Interactive Communication, Nashville, TN, USA, 13–15 August 2005. [Google Scholar]
  21. Yohanan, S.; MacLean, K.E. The haptic creature project: Social human-robot interaction through affective touch. In Proceedings of the AISB 2008 Symposium on the Reign of Catz & Dogs: The Second AISB Symposium on the Role of Virtual Creatures in a Computerised Society, Aberdeen, UK, 1–4 April 2008; pp. 7–11. [Google Scholar]
  22. Flagg, A.; MacLean, K. Affective touch gesture recognition for a furry zoomorphic machine. In Proceedings of the 7th International Conference on Tangible, Embedded and Embodied Interaction, Barcelona, Spain, 10–13 February 2013; pp. 25–32. [Google Scholar]
  23. Kaboli, M.; Walker, R.; Cheng, G. Re-using Prior Tactile Experience by Robotic Hands to Discriminate In-Hand Objects via Texture Properties, Technical Report. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016. [Google Scholar]
  24. Yogeswaran, N.; Dang, W.; Navaraj, W.T.; Shakthivel, D.; Khan, S.; Polat, E.O.; Gupta, S.; Heidari, H.; Kaboli, M.; Lorenzelli, L.; et al. New materials and advances in making electronic skin for interactive robots. Adv. Robot. 2015, 29, 1359–1373. [Google Scholar] [CrossRef]
  25. Kadowaki, A.; Yoshikai, T.; Hayashi, M.; Inaba, M. Development of soft sensor exterior embedded with multi-axis deformable tactile sensor system. In Proceedings of the RO-MAN 2009—The 18th IEEE International Symposium on Robot and Human Interactive Communication, Toyama, Japan, 27 September–2 October 2009; pp. 1093–1098. [Google Scholar]
  26. Ziefle, M.; Brauner, P.; Heidrich, F.; Möllering, C.; Lee, K.; Armbrüster, C. Understanding requirements for textile input devices individually tailored interfaces within home environments. In UAHCI/HCII 2014; Stephanidis, C., Antona, M., Eds.; Part II.; LNCS 8515; Springer: Cham, Switzerland, 2014; pp. 587–598. [Google Scholar]
  27. Trovato, G.; Do, M.; Terlemez, O.; Mandery, C.; Ishii, H.; Bianchi-Berthouze, N.; Asfour, T.; Takanishi, A. Is hugging a robot weird? Investigating the influence of robot appearance on users’ perception of hugging. In Proceedings of the 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun, Mexico, 15–17 November 2016; pp. 318–323. [Google Scholar]
  28. Schmitz, A.; Maggiali, M.; Randazzo, M.; Natale, L.; Metta, G. A prototype fingertip with high spatial resolution pressure sensing for the robot iCub. In Proceedings of the Humanoids 2008-8th IEEE-RAS International Conference on Humanoid Robots, Daejeon, South Korea, 1–3 December 2008; pp. 423–428. [Google Scholar]
  29. Maiolino, P.; Maggiali, M.; Cannata, G.; Metta, G.; Natale, L. A flexible and robust large scale capacitive tactile system for robots. IEEE Sens. J. 2013, 13, 3910–3917. [Google Scholar] [CrossRef]
  30. Petreca, B.; Bianchi-Berthouze, N.; Baurley, S.; Watkins, P.; Atkinson, D. An Embodiment Perspective of Affective Touch Behaviour in Experiencing Digital Textiles. In Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII), Geneva, Switzerland, 2–5 September 2013; pp. 770–775. [Google Scholar]
  31. Katterfeldt, E.-S.; Dittert, N.; Schelhowe, H. EduWear: Smart textiles as ways of relating computing technology to everyday life. In Proceedings of the 8th International Conference on Interaction Design and Children, Como, Italy, 3–5 June 2009; pp. 9–17. [Google Scholar]
  32. Tao, X. Smart technology for textiles and clothing—Introduction and overview. In Smart Fibres, Fabrics and Clothing; Tao, X., Ed.; Woodhead Publishing: Cambridge, UK, 2001; pp. 1–6. [Google Scholar]
  33. Rawassizadeh, R.; Price, B.A.; Petre, M. Wearables: Has the age of smartwatches finally arrived? Commun. ACM 2015, 58, 45–47. [Google Scholar] [CrossRef]
  34. Swan, M. Sensor mania! the internet of things, wearable computing, objective metric and the quantified self 2.0. J. Sens. Actuator Netw. 2012, 1, 217–253. [Google Scholar]
  35. Stoppa, M.; Chiolerio, A. Wearable electronics and smart textiles: A critical review. Sensors 2014, 14, 11957–11992. [Google Scholar]
  36. Pantelopoulos, A.; Bourbakis, N.G. A survey on wearable sensor-based systems for health monitoring and prognosis. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2010, 40, 1–12. [Google Scholar] [CrossRef]
  37. Cherenack, K.; van Pieterson, L. Smart textiles: Challenges and opportunities. J. Appl. Phys. 2012, 112, 091301. [Google Scholar] [CrossRef]
  38. Carvalho, H.; Catarino, A.P.; Rocha, A.; Postolache, O. Health Monitoring using Textile Sensors and Electrodes: An Overview and Integration of Technologies. In Proceedings of the IEEE MeMeA 2014—IEEE International Symposium on Medical Measurements and Applications, Lisboa, Portugal, 1–12 June 2014. [Google Scholar]
  39. Cooney, M.D.; Nishio, S.; Ishiguro, H. Recognizing affection for a touch-based interaction with a humanoid robot. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, 7–12 October 2012. [Google Scholar]
  40. Jung, M.M.; Poppe, R.; Poel, M.; Heylen, D.K. Touching the Void-Introducing CoST: Corpus of Social Touch. In Proceedings of the 16th International Conference on Multimodal Interaction, Istanbul, Turkey, 12–16 November 2014; pp. 120–127. [Google Scholar]
  41. Jung, M.M.; Cang, X.L.; Poel, M.; MacLean, K.E. Touch Challenge’15: Recognizing Social Touch Gestures. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, Seattle, WA, USA, 9–13 November 2015; pp. 387–390. [Google Scholar]
  42. Andreasson, R.; Alenljung, B.; Billing, E.; Lowe, R. Affective Touch in Human–Robot Interaction: Conveying Emotion to the Nao Robot. Int. J. Soc. Robot. 2017, 23, 1–19. [Google Scholar] [CrossRef]
  43. Li, C.; Bredies, K.; Lund, A.; Nierstrasz, V.; Hemeren, P.; Högberg, D. kNN based Numerical Hand Posture Recognition using a Smart Textile Glove. In Proceedings of the Fifth International Conference on Ambient Computing, Applications, Services and Technologies, Nice, France, 19–24 July 2015. [Google Scholar]
  44. Barros, P.; Wermter, S. Developing Crossmodal Expression Recognition Based on a Deep Neural Model. In Special Issue on Grounding Emotions in Robots: Embodiment, Adaptation, Social Interaction. Adapt. Behav. 2016, 24, 373–396. [Google Scholar] [CrossRef] [PubMed]
  45. Oatley, K.; Johnson-Laird, P.N. Towards a cognitive theory of emotions. Cogn. Emot. 1987, 1, 29–50. [Google Scholar] [CrossRef]
  46. Oatley, K.; Johnson-Laird, P.N. Cognitive approaches to emotions. Trends Cogn. Sci. 2014, 18, 134–140. [Google Scholar] [CrossRef] [PubMed]
  47. Bicho, E.; Erlhagen, W.; Louro, L.; e Silva, E.C. Neuro-cognitive mechanisms of decision making in joint action: A human–robot interaction study. Hum. Mov. Sci. 2011, 30, 846–868. [Google Scholar] [CrossRef] [PubMed]
  48. Michael, J. Shared emotions and joint action. Rev. Philos. Psychol. 2011, 2, 355–373. [Google Scholar] [CrossRef]
  49. Silva, R.; Louro, L.; Malheiro, T.; Erlhagen, W.; Bicho, E. Combining intention and emotional state inference in a dynamic neural field architecture for human-robot joint action, in Special Issue on Grounding Emotions in Robots: Embodiment, Adaptation, Social Interaction. Adapt. Behav. 2016, 24, 350–372. [Google Scholar] [CrossRef]
  50. Gao, Y.; Bianchi-Berthouze, N.; Meng, H. What does touch tell us about emotions in touchscreen-based gameplay? ACM Trans. Comput.-Hum. Interact. (TOCHI) 2012, 19, 31. [Google Scholar] [CrossRef]
  51. Lowe, R.; Sandamirskaya, Y.; Billing, E. A neural dynamic model of associative two-process theory: The differential outcomes effect and infant development. In Proceedings of the 4th International Conference on Development and Learning and on Epigenetic Robotics, Genoa, Italy, 13–16 October 2014; pp. 440–447. [Google Scholar]
Figure 1. NAO robot with and without WAffI. Left. NAO without WAffI. Right. NAO clothed in the WAffI. The WAffI consists of a number of detachable tight-fitting parts and serves as a prototype for testing subjects’ tactile interactions. Future work aims at incorporating a number of smart textile sensors into the fabric for facilitating robot decoding of emotions.
Figure 2. Experimental set-up where the participant interacts with the NAO in the Usability Lab. The participant interacts with the NAO by touching left and right arms to convey a particular emotion. Camera shots are configured using the ELAN annotation tool: https://tla.mpi.nl/tools/tla-tools/elan/.
Figure 3. Robot body regions considered in the coding process for location of touch. Colors indicate unique touch locations.
Figure 4. Intensity ratings over emotions and for WAffI-On and -Off conditions. The stacked bar plots show pairs (On Off) of ratings over the different intensity intervals per emotion. The y-axis shows total number of ratings per emotion. The x-axis shows total ratings per emotion as well as mean ratings over all emotions (right-most plot) for comparison. It can be seen that, with the exception of Anger (WAffI-On and -Off) and Disgust (WAffI-On), Medium intensity ratings were highest.
Figure 5. Mean durations of tactile interaction from initial to final touch over each emotion. Subjects in the WAffI-On condition interact with the NAO for longer durations over all emotions (means) and differences are greatest (non-overlapping standard error bars) for Sadness, Gratitude, Happiness, Disgust and Fear emotions.
Figure 6. Mean number of touched locations during interaction. The mean values (y-axis) represent the number of touches per participant for the WAffI-On and WAffI-Off conditions.
Figure 7. Heat maps depicting touch distribution for WAffI-On and WAffI-Off conditions, averaged over all emotions. The different locations on NAO are visualized according to amount of red in relation to numbers of touches. Darker red indicates a higher number of touches over all the participants. The percentage of all touches is given in brackets for each touch location. Key: Sc = Scalp, Fa = Face, RS = Right Shoulder, LS = Left Shoulder, RA = Right Arm, LA = Left Arm, RH = Right Hand, LH = Left Hand, BW = Below Waist, Ch = Chest, Oc = Occiput, LE = Left Ear, RE = Right Ear, Ba = Back, LW = Left Waist, RW = Right Waist.
Figure 8. Heat maps depicting touch distribution for WAffI-On and WAffI-Off conditions for the Gratitude conveyed emotion. The different locations on NAO are visualized according to amount of red in relation to numbers of touches. Darker red indicates a higher number of touches over all the participants. A noticeable difference in the two conditions is the higher tendency to touch the shoulders when the NAO is unclothed.
Figure 9. Average touch type frequency over all emotions. Out of the 23 annotated touch types, Pulling, Trembling and Tossing were never observed and are excluded from the diagram.
Figure 10. A two-dimensional support vector machine classification of emotional conveyance valence by number of locations touched and duration of touch. Emotion mean values are classified according to their positive or negative meaning (either side of the hyperplane). Note, Sadness here is classified as an emotion that is conveyed positively (for consoling the robot). Circled are the support vectors. Left. WAffI-On classification. Right. WAffI-Off classification. We used the Matlab cvpartition() function with a 50-50 train-test partition of the data sets and the Holdout method for one-versus-all comparison.
Figure 11. Confusion matrices for the 3 dimensional (Location, Duration, Intensity) SVM classifications for all of the 8 emotions using one vs. one leave-one-out cross validation. Left. WAffI-On classification. Right. WAffI-Off classification. Key: 1 = Fear, 2 = Anger, 3 = Disgust, 4 = Happiness, 5 = Sadness, 6 = Gratitude, 7 = Sympathy, 8 = Love.
Figure 12. Classification accuracy of WAffI-On and -Off (Human-Robot) conditions compared with Hertenstein et al.’s Human-Human tactile interaction decoder classification results.
Figure 13. k-Means cluster analysis over all data for encoded emotions. Left. k-Means clusters. Right. Corresponding silhouette plot with two identified cluster regions.
Figure 14. Confusion matrices for the 3 dimensional (Location, Duration, Intensity) tree ensemble learner classifications of emotion conveyance valence. Left. WAffI-On classification. Right. WAffI-Off classification. Key: 1. Fear, Anger, Disgust (Negative emotion conveyance), 2. Love, Sadness, Sympathy, Gratitude and Happiness (Positive emotion conveyance). Classification accuracy was slightly superior overall in the WAffI-On condition.
