Article

Design of a Huggable Social Robot with Affective Expressions Using Projected Images

1 Graduate School of Systems and Information Engineering, University of Tsukuba, Tsukuba 305-8573, Japan
2 Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba 305-8573, Japan
* Author to whom correspondence should be addressed.
Current address: University of Tsukuba, Tennodai 1-1-1, Tsukuba 305-8573, Japan.
Appl. Sci. 2018, 8(11), 2298; https://doi.org/10.3390/app8112298
Submission received: 30 September 2018 / Revised: 9 November 2018 / Accepted: 13 November 2018 / Published: 19 November 2018
(This article belongs to the Special Issue Social Robotics)

Abstract:
We introduce Pepita, a caricatured huggable robot capable of sensing and conveying affective expressions by means of tangible gesture recognition and projected avatars. This study covers the design criteria, implementation, and performance evaluation of the different characteristics of the form and function of this robot. The evaluation involves: (1) the exploratory study of the different features of the device, (2) design and performance evaluation of sensors for affective interaction employing touch, and (3) design and implementation of affective feedback using projected avatars. Results showed that the hug detection worked well for the intended application and that the affective expressions made with projected avatars were appropriate for this robot. The questionnaires analyzing users’ perception provide insights to guide future designs of similar interfaces.

1. Introduction

Social robots are designed to communicate and to engage in social interaction with humans. They can recognize, follow, and react to human actions in social contexts. Unlike purely functional robots, social robots combine functional and social characteristics. One of the challenges involves the design of social robots for home environments that can communicate with and assist humans in everyday life situations. These types of social robots are commonly known as companion robots. Companion robots assist users in everyday tasks through an intuitive, expressive and affective interaction [1]. These robots are designed mainly to interact with users and, in doing so, to create a feeling of companionship.
Companion robots need to be carefully designed to meet users’ expectations if they are to be adopted into everyday life. Like any other social robot, when designing a companion robot, it is essential to understand its form, function, and context in order to emphasize its social attribution [2]. “Form” refers to the aesthetics, the physical features that contribute to the communication between the robot and the user. “Function” relates to the actions that the robot can perform. “Context” is the application for the robot, the scenario where the robot is placed. These three aspects are interrelated and influence each other. For example, the aesthetic form of the robot conveys social cues, and its physical form defines the behaviors the robot can perform.
The different robot companions found in the literature can be classified using these aspects. In terms of form, companion robots such as iPal [3], FLASH [4] and Pepper [5] were designed with a humanoid appearance. Paro [6], Aibo [7] and Joy for All [8] are examples of companion robots and animatronics with zoomorphic features. The huggable [9] and Jerry the Bear [10] were designed with a friendly teddy bear-like body, falling in the category of caricatured appearance. Buddy [11], Zenbo [12], Furo-i [13] and Jibo [14] have a functional appearance with machine-like features and a body without limbs.
In terms of function, companion robots support the recognition of tangible gestures ranging from simple touch [7,8,14] to more complex ones such as hugs or strokes [6,9], as well as user recognition [11,14], navigation [11,12,13], and conversation [4,14]. A common function among companion robots is the support of affective expressions, either through mechanical facial expressions [4], facial expressions shown on displays [11,12,13], or more abstract expressions using colors [14] or body movements [6,7,8].
Finally, when the robots are analyzed in terms of context, it is observed that robots with an animal appearance and soft skins [6,8] were designed for therapeutic applications and to provide comfort. Robots with a caricatured appearance [9,10] were designed as characters that need attention and care from the users. Robots with a more functional appearance [11,12,13,14] were designed as home assistants supporting different tasks. Humanoid robots are commonly used as social peers for educational or service purposes. Robots with screens are designed as assistants and for telepresence applications [11,13,14].
Based on the importance of endowing social robots with affective expressions, we explore a way to enhance the robot’s expressiveness using avatars projected from the robot body. This approach is beneficial because the images can be projected on different surfaces, their size can be increased, and they can be shared with multiple users in the same physical space. Following this, we propose the companion robot Pepita, designed to sense affective touch-based gestures such as hugs and to provide visual feedback using projected avatars. We selected different features that define the form and function of Pepita as a companion robot. The combination of two types of embodiment (physical and virtual) to enhance the capabilities of a physically constrained robot is underrepresented in the literature. For this reason, we emphasize the various benefits of embedding a projector into a robotic device designed as a companion robot. This study covered the design criteria, implementation, and performance evaluation of the different characteristics of the form and function of this robot. The design criteria for the sensors, feedback, and robot’s appearance were explored and selected based on an extensive literature review. Then, a design that combined a robot body with projected avatars was proposed. The study was divided into three main parts: (1) the exploratory study of the different features of the device, (2) design and performance evaluation of sensors for affective interaction employing touch, and (3) design and implementation of affective feedback using projected avatars. Based on the results, we introduce the expected social context for the proposed companion robot. We expect that this work will contribute to the future design of robotic devices for similar applications.

2. Designing Pepita

This section explores different related works that contributed to defining Pepita’s form and function.

2.1. The Form of Pepita

The appearance of social robots can be divided into four categories: anthropomorphic, zoomorphic, caricatured, or functional [15]. It was found that the appearance factor has a strong effect on the perception of a robot’s abilities, and it might also be linked to the expectation of its capabilities [16,17]. Each of the four categories for a social robot’s appearance has been perceived to be more suitable for specific tasks [18].
Anthropomorphic robots are designed with a human-like appearance, and they are meant to interact with humans in a human-like way, using different gestures, facial expressions, and body postures. They are appropriate for service applications and public spaces [19]. Because these robots can manage non-verbal communication, they can be found in robot-assisted therapies for individuals with developmental disorders [20]. Robots with a high degree of anthropomorphism can potentially elicit human-like communication, making them the most appropriate to use when investigating human behavior [21,22]. Because these robots have a human-like appearance, designers work hard to develop behaviors and interactions that can meet the user’s expectations, resulting in engaging and meaningful interaction.
Living creatures inspire the design of zoomorphic robots. Their social application is related to the potential for creating a feeling of pet-ownership and companionship [23]. There are many cases in which the benefits of animal-robots have been exploited, including applications in hospitals and nursing homes [6,24]. Concerning appearance, zoomorphic robots face challenges similar to those for anthropomorphic robots. Those with a more realistic appearance need to display a behavior that matches the user’s expectations; otherwise, it might lead to a reduced level of engagement [25].
A functional robot is designed to allow a user to understand the robot’s functions just by its appearance. These robots are not meant to imitate a living creature. Instead, they are usually designed with a machine-like appearance. They interact with people in a very task-oriented manner, making the interaction with humans less complex concerning social aspects. An example of this category is robots with parallel arms or bars to support nurses lifting patients [26]. Robots that help to carry luggage [27] and robot suits can also be considered functional robots [28]. These robots are designed to complete specific tasks safely, and the display of emotional behavior is not as relevant in this context.
The proposed companion robot Pepita was designed with a caricatured appearance with simplified anthropomorphic features. Caricatured robots have a simpler appearance and are capable of expression in their own unique way. Giving a robot a caricatured appearance reduces the user’s expectations and makes it possible to design robot behaviors with believable results [15,29]. This type of robot is considered to be especially suitable for home environments [30]. We consider that, among the different appearances for robots, a caricatured one is the most suitable for the design of Pepita.

2.2. The Function of Pepita

Pepita was designed with two main functions: (1) Use projected images to enhance the robot’s expressions and (2) Sense tangible affective expressions.

2.2.1. Affective Expressions Using Projected Images

Different studies in human–robot interaction (HRI) have explored alternatives to endow robots with the ability to convey emotional expressions. Among these, it is not uncommon to find robots with highly expressive mechanical faces [31,32]. Other robots used animated faces or avatars [33]. These robots were designed to convey the expressions based on their specific purposes, and each approach has specific advantages and disadvantages. For example, a robot with a caricatured appearance was found to influence user’s comfort positively. However, at the same time, it was harder to identify emotions compared to a realistic representation of a human face using an avatar [34]. A robot with a simpler appearance can use body movements or colored lights to convey emotional expressions [35,36]. Dynamic colored lights have been used to convey a robot’s states and actions [37], and simple expressions using colors were used to express the life duration of a companion robot [38]. Colored lights have been used in combination with sounds and vibrations as a simple and low-cost alternative to express a robot’s emotional expressions [39]. Several models relate colors to different emotions [40]. One study proposed a methodology to express a robot’s emotions by changing the color of its body [36], and it showed that different emotions were perceived in the agent when it displayed a certain color luminosity at a particular frequency. While it is possible to design expressions for physically constrained robots using colored lights patterns, the amount of information that can be transmitted is more abstract and limited.
A possible solution could be combining robots with displays. Although both screens and projectors can be used as visual feedback for a multipurpose robotic device, projectors make it possible to design small and portable robots that can provide visual information using large projected displays [41]. Another study explained how projectors offer an additional visual information channel that complements the abilities of the robot [42]. In the past, the potential of projectors as emotional expression tools for robots has been explored by projecting an animated face over a mask [43,44,45]. Several benefits of using projected images were pointed out, as well as comparing this method with other technologies used to express artificial emotions, such as mechanical faces. Most of the studies using projectors to enhance the robot’s expressions worked with retro-projections, making the images part of the robot body [43,44,45]. Projected images are a powerful visual representation, as they can not only show different kinds of information, but they can also be used to modify user’s environment and to share the information with more than one user sharing the same physical space. We consider that projected images, like avatars with expressive and animated faces, can be a valid solution to increase the expressiveness of a physically constrained robot, especially for a robot companion designed for home environments. It is necessary to explore first the user’s impressions and perception of a robot interacting through projected images.

2.2.2. Sensing Tangible Affective Expressions

Besides the expressions supported by the robot, another aspect of the design of a companion robot is the way it senses the user’s actions. To take advantage of the physical embodiment of the robot, we designed the robot to sense tangible expressions. The role of body contact is considered to be important for expressing affection. It is possible to convey positive affection through touches, hugs, and strokes, as well as negative affection through hitting or pushing [46,47,48]. Among the various touch gestures, hugs are a very important part of human communication, and they can transfer comfort and give an emotional lift [49]. Moreover, design strategies for technologies to mediate intimacy and relatedness described the importance of supporting meaningful gestures that convey affection [50]. A previous study explored the benefits of computer-mediated communication using huggable interfaces and found that physical contact with the mediator had positive effects on people, such as providing mental stress relief [51]. The participants used the huggable interface to complement verbal communication, but the interface was not able to sense hugs. Robots with big embodiments were designed with an array of pressure sensors around the body to detect the user’s hugs [29,52]. When the user embraces the big body of the robot, the pressure applied at different points is used to detect the hugs. On the other hand, for smaller robots, the task of distinguishing hugs from other tangible gestures becomes more challenging. A small teddy bear-like robot was designed with a body covered with soft material and embedded pressure sensors [53]. However, that robot was not designed to distinguish hugs from other gestures that involve pressure. A robot with a similar shape was designed to distinguish hugging using a sensor with higher resolution [9]. Sensors for detecting hugs can also be designed considering the robot’s body characteristics, function, and design requirements. An inflatable robot was designed to detect children’s hugs by variations of pressure in different air-filled modules [54]. Based on the importance of touch-based interaction, we designed a sensor that detects hugs and pulling using the sensor’s structure. From the different solutions that were explored, we proposed sensors that keep the hardware design simple.

3. System Overview

This section includes the hardware components followed by the description of the proposed approach for hug detection.

3.1. Robot’s Components

Pepita is a caricatured robotic device (Figure 1) designed as a social companion. As shown in Figure 2, the system consists of three main components: the robot circuit, a smartphone connected to a projector, and an external computer for remote control, each of them managed by a different algorithm.
The circuit is composed of a microcontroller board (Arduino UNO) connected to the smartphone and the external computer through a Wi-Fi module (Seeed Studio, V1.0). A printed circuit board was attached on top of the Arduino and the Wi-Fi shield. Alongside the sensors, the circuit contains three full-color LEDs and one vibration motor. The LEDs were placed on the sides to look like cheeks and in the tail tip to emphasize the robot’s caricatured features. The vibration motor was attached to the bottom of the plastic case. The robotic device has two types of sensors to detect tangible gestures: pressure sensors covering the robot’s body, and a stretch sensor in the tail. The pressure sensor was made to endow the robotic device with hug detection. It is made of 5 mm thick conductive foam (Seiren Electronics Co., Tokyo, Japan) divided into eight petal-shaped pieces covering an 18 cm diameter plastic sphere. Handmade electrodes were made using copper sheets with conductive tape (Seiren Electronics Co., Tokyo, Japan) and attached to the foam pieces. The wires were attached to the copper sheets, and each piece of the sensor was connected in parallel with a 15 kΩ resistor. The stretch sensor was made following the approach from [55], which uses the properties and structure of the material for sensing gestures. A silicone tail containing a magnet, together with a linear Hall effect sensor (Figure 3), was used to detect when the tail was stretched. Silicone was poured into a cast of half of the tail (longitudinal section), and the wires were inserted while the silicone was still soft. After it cured, the other half was poured, covering the wires. To prevent the wires from snapping, they were coiled and placed inside the cast before pouring the silicone. When the tail is pulled, the distance between the magnet and the sensor changes, which is detected as a change in the magnetic field.
The second part comprises a smartphone (Galaxy Nexus, Samsung Electronics Co., Suwon, South Korea) connected to a pico-projector (EAD-R10, Samsung Electronics Co., Suwon, South Korea), making it possible to display the screen content on any surface. The projector was placed in one eye, and the smartphone’s camera in the other. An Android application manages the content of the screen, which changes according to the interaction with the device. The smartphone works as a server, receiving the sensor data and sending it to the external computer.
The last component of the system is an external computer, which is used to visualize and store the sensor data. All of the elements are connected using TCP/IP sockets. The smartphone works as the server, mediating the communication between the microcontroller and external computer. By doing this, it is possible to send commands from the computer to the device remotely (for example, to start collecting data from the sensors and visualize it on the computer screen).
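To illustrate this arrangement, the following minimal Python sketch shows how an external computer might receive sensor samples from the smartphone server over a TCP socket. The host address, port, and the line-based, comma-separated message format are assumptions made for illustration; they are not the actual protocol of the implemented system.

```python
import socket

# Assumed parameters for illustration: the smartphone acts as the TCP server
# and streams one comma-separated line per sample, e.g. "512,498,503,470,455,490,478,612".
SERVER_HOST = "192.168.0.10"   # hypothetical address of the smartphone on the local Wi-Fi
SERVER_PORT = 5000             # hypothetical port used by the Android server application

def receive_sensor_lines(host: str = SERVER_HOST, port: int = SERVER_PORT):
    """Connect to the smartphone server and yield one list of ADC values per received line."""
    with socket.create_connection((host, port)) as sock:
        buffer = b""
        while True:
            chunk = sock.recv(1024)
            if not chunk:            # server closed the connection
                break
            buffer += chunk
            while b"\n" in buffer:
                line, buffer = buffer.split(b"\n", 1)
                yield [int(v) for v in line.decode().strip().split(",") if v]

if __name__ == "__main__":
    # Print incoming samples; in the real system these would be visualized and stored.
    for sample in receive_sensor_lines():
        print(sample)
```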

3.2. Hug Detection Method

We used the combined readings from all the sensor pieces to implement hug detection on the robot. Similar to the stretch sensor in the tail, this sensor relies on its structure to detect hugging. This approach is advantageous because it makes it possible to cover the robot body with sensors using a minimal amount of wiring.
The sensor for hug detection is divided into eight different pieces, and each piece is fixed in a particular position (Figure 4 Top). Each of these pieces is represented in the code as a point, and the eight points are used to generate a polygon. The generated polygon’s area is used to determine when the robot is being hugged. When any of the sensor pieces is pressed, its value drops, and if this value goes below a set threshold, a point for the polygon is generated. For the threshold, we chose 500 (ADC value). This value will differ according to the deformation and size of the pieces, which makes it important to tune it. Each of the generated points is placed at a fixed position in the Cartesian plane, at a distance of one from the origin (Figure 4 Bottom), and the polygon’s area is calculated using the following equation:
S = \frac{1}{2} \sum_{k=1}^{n} \begin{vmatrix} X_k & X_{k+1} \\ Y_k & Y_{k+1} \end{vmatrix} = \frac{1}{2} \sum_{k=1}^{n} \left( X_k Y_{k+1} - X_{k+1} Y_k \right),
where n = 8, X and Y refer to the coordinates of each point, and the indices wrap around so that point n + 1 coincides with point 1. With this approach, larger polygons result when more points are generated. Considering that the action of hugging involves embracing the device with both arms, most of the sensor pieces are expected to be pressed when the robot is hugged. For this reason, hugs will generate a larger polygon by activating more points compared, for example, with the action of pressing the robot with both hands (Figure 4 Bottom). Hug detection is made possible by reading the size of the polygon generated when the user manipulates the device. When the area reaches the selected threshold, a hug is detected. This approach is expected to increase the sensing accuracy in distinguishing hugs from other types of manipulation.
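The following Python sketch illustrates the polygon-area approach described above: eight sensor pieces mapped to fixed points at a distance of one from the origin, an ADC threshold of 500 for activating a point, and an area threshold for declaring a hug (1.4 is the value selected later in the evaluation). The example readings and function names are illustrative only, not part of the implemented firmware.

```python
import math

NUM_PIECES = 8
PRESS_THRESHOLD = 500   # ADC value below which a sensor piece is considered pressed
AREA_THRESHOLD = 1.4    # polygon area above which a hug is declared (value used in the evaluation)

# Fixed positions of the eight pieces: points at distance 1 from the origin,
# evenly spaced on the unit circle.
PIECE_POSITIONS = [
    (math.cos(2 * math.pi * k / NUM_PIECES), math.sin(2 * math.pi * k / NUM_PIECES))
    for k in range(NUM_PIECES)
]

def shoelace_area(points):
    """Area of the polygon defined by the given ordered points (shoelace formula)."""
    if len(points) < 3:
        return 0.0
    area = 0.0
    for k in range(len(points)):
        x1, y1 = points[k]
        x2, y2 = points[(k + 1) % len(points)]   # indices wrap around: point n+1 is point 1
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def detect_hug(adc_readings):
    """Return True if the pressed pieces generate a polygon larger than the area threshold."""
    active_points = [
        PIECE_POSITIONS[i] for i, value in enumerate(adc_readings) if value < PRESS_THRESHOLD
    ]
    return shoelace_area(active_points) > AREA_THRESHOLD

if __name__ == "__main__":
    # Illustrative readings: a hug presses most pieces, a two-handed squeeze presses only a few.
    hug_like = [320, 410, 300, 450, 390, 330, 480, 360]
    squeeze_like = [320, 700, 680, 720, 350, 690, 710, 730]
    print(detect_hug(hug_like))      # True: all eight points active -> large polygon
    print(detect_hug(squeeze_like))  # False: only two active points -> zero-area polygon
```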

4. Performance Evaluation

The evaluation of Pepita was separated into three parts. The first explored its design by comparing different parameters with those of other similar robots. The second part involved experiments for testing the sensors’ performance. The third part included the evaluation of the projected avatars as the robot’s affective expressions. These studies were reviewed by ethical advisory members and conducted at the University of Tsukuba and via online questionnaires. We obtained written informed consent from all the participants.

4.1. Exploring the Design of Pepita

This section introduces the methods used to investigate three features that we considered relevant in the design of Pepita: (1) the huggable aspect related to the body shape and appearance, (2) the expressiveness to convey affective states and (3) the general impression of the robot’s appearance as a character. An online questionnaire using the service provided by [56] was used as a tool to collect information, which is a conventional method for comparing different types of robots [18,19]. The complete questionnaire can be found in Appendix A. Participants were initially contacted by social network service, and after they had agreed to participate, the link for the questionnaire was sent via email. Informed consent was obtained from the participant before starting the questionnaire. Once the participants finished answering the questions, the link was disabled to avoid double responses. The selection criteria were simple; the participants had to be adults who were familiar with technology but not involved in this or any of the projects that were introduced in the questionnaire. A total of 52 participants (age 28.0 ± 4.1 on average, 29 males and 23 females) took part in this study. Nationalities were diverse, separated in the following groups: 4 from North America, 17 from Central America, 16 from South America, 8 from Europe and 7 from Asia.

4.1.1. Questionnaire Overview

Different robots and devices have been designed to be hugged, but one common feature is the presence of arms. Pepita has a simple appearance, and we wanted to understand how it was perceived based on its appearance. From the reviewed huggable robots, those with the appearance of a popular cartoon character (e.g., Disney characters) and ones with a repeated type of appearance (e.g., teddy bear) were excluded from the comparison with Pepita. Therefore, we chose three huggable robots to compare to Pepita (Figure 5), including one huggable robot with an expressive face [29], one huggable robot with a simple appearance and no face [57], and one robot with the familiar appearance of a teddy bear [9]. In this way, we attempted to determine whether aspects like the presence of a face and a familiar body shape affected the participant’s selections. The participants were presented with a photo of each robot and asked to rate the following statements using a 5-point scale: (1) It looks huggable, (2) It looks easy to hug, and (3) It looks appealing to hug. The order of the pictures was balanced among the number of participants to avoid an order effect.
The second part of the questionnaire explored the modalities used by robots to convey affective expressions visually. We compared two categories: facial expressions using mechanical faces (Zeno [32] and Probo [31]) and facial expressions using a display (Buddy [11] and Pepita [58]). Considering we need video stimulus for these items, robots that fall in this category and that had video available to the public were selected. Participants were asked to watch four videos in succession showing a robot displaying happy and sad expressions. The order of these videos was balanced to avoid an order effect. They were presented one at a time, but the participants were freely allowed to replay them. After watching the videos, the participants were presented with a reference photo of each robot that appeared in the videos (Figure 5). Using a 5-point scale, the participants rated the following statement:
Based on your first impression, express using the following scale how acceptable you find the robot’s expressions of emotions?
The third part was related to exploring the first impression people had of the device. Along with an online questionnaire, we showed the participants a video of a person interacting with Pepita. In this way, we introduced the concept of Pepita to each participant. In the video, Pepita is on a sofa showing an avatar with a sad expression. Then, the person in the video takes the device and hugs it. After being hugged, Pepita changes the avatar to one with a happy expression. To minimize the context effect, we did not show the facial expressions of the person in the video but focused on the robot displaying its functions. After watching the video, the participants were presented with a photo of Pepita (Figure 1) and were asked to express their impressions using a semantic differential structure with a 7-point scale. Since a single question evaluates this item, a 7-point scale was chosen to obtain more information. The paired words selected to describe Pepita are commonly used to evaluate social robots using this structure, and it was applied following an already existing methodology [59]. Additionally, we wanted to collect some qualitative data about the participant’s impressions of the robot, for which we asked an open-ended question to determine which features of Pepita positively or negatively impacted their answers. It is important to point out the limitation of this methodology: the user’s perception will be different when just looking at a picture of the robot compared to directly interacting with the robot. However, this study had the goal of determining the characteristics of the robot’s appearance that affected people’s perception of it, and, for this, we solely used visual stimuli.

4.1.2. Results

Table 1 refers to the huggable aspect, comparing Pepita with the other three huggable robots. One-way analysis of variance (ANOVA) was applied for each aspect, and significant differences were found in each (p < 0.001, F(3, 204) = 8.94, η² = 0.13 for “Looks huggable”; p < 0.001, F(3, 204) = 5.92, η² = 0.087 for “Looks easy to hug”; p < 0.001, F(3, 204) = 17.0, η² = 0.25 for “Looks appealing”, where p and η² denote the significance probability and effect size, respectively). Afterwards, to investigate the differences between Pepita and the other robots, Tukey–Kramer’s multiple comparisons test was used as a post hoc test. As a result, the following combinations showed significant differences: Pepita and The Huggable (M = −0.673, p < 0.01, 95% CI [−1.22, −0.122]) in the “Looks huggable” category, Pepita and The Hug (M = 0.596, p < 0.05, 95% CI [0.0414, 1.15]) in the “Looks easy to hug” category, and Pepita and Probo (M = 0.808, p < 0.01, 95% CI [0.225, 1.39]) and Pepita and The Huggable (M = −0.750, p < 0.01, 95% CI [−1.33, −0.167]) in the “Looks appealing to hug” category, where M denotes the mean difference and 95% CI represents the 95% confidence interval.
Regarding the question about the emotional expressions, each robot scored as follows: Pepita = 4.00 ± 0.929, Probo = 2.98 ± 1.15, Zeno = 3.44 ± 1.04, and Buddy = 4.52 ± 0.754. One-way ANOVA revealed a significant difference among the means of the participants’ answers for the different robots (p < 0.001, F(3, 204) = 25.4, η² = 0.27). Using Tukey–Kramer’s multiple comparison test, significant differences were found between Pepita and Probo (M = 1.06, p < 0.001, 95% CI [0.562, 1.55]), Pepita and Zeno (M = 0.558, p < 0.05, 95% CI [0.0615, 1.05]), and Pepita and Buddy (M = −0.519, p < 0.05, 95% CI [−1.02, −0.0230]).
The box plot presented in Figure 6 shows the results of the evaluation of Pepita’s appearance using semantic differential. Each item refers to a pair of adjectives. The median values of the pairs Unkind/Kind, Unfriendly/Friendly, Unpleasant/Pleasant, and Awful/Nice were found to be positive (5.5, 5, 4.5, and 5, respectively). On the other hand, the pairs Fake/Natural, Artificial/Lifelike, and Machinelike/Humanlike were found to be negative (3, 3, and 3, respectively), and the pair Unconscious/Conscious had a neutral value of 4.
After evaluating the appearance by a scale, the participants gave open-ended responses regarding those aspects that positively or negatively impacted their perception of the robot. To summarize the different answers, these were simplified using single words and grouped in categories:
  • Positive aspects (mentions): Shape (8), Projector (7), Color (7), Cute (5), Size (5), Flowers (5), Tail (4), Kind (3), Huggable (3), Interactive (2)
  • Negative aspects (mentions): Scary eyes (9), Shape (7), Face (6), Texture (6), Artificial (4), Appearance (4), Tail (3), Hard (2), Not huggable (2), Quality (2)
Among the positive aspects, the shape, color, and the projector had the strongest impact. Comments like the “projected avatars are great”, “robots using avatars are interesting”, “expressive avatars”, and “avatars that display emotions” were collected from the positive aspects. About the shape, some participants expressed that it would be “easy to carry and put in a bag” or it was “round and easy to manipulate”. Concerning the color, we found a positive acceptance of bright colors. The majority of the negative aspects were oriented toward the eyes as a considerable number of participants found them to be strange. Some participants expressed that “the robot seems to have one dead eye”, “there is only one eye working,” or “the eyes are scary”. Regarding the appearance, some participants said that “it is not fluffy enough” or “the face looks weird”. These comments provide us with some insights regarding the design that will be further analyzed in the discussions.

4.2. Hug Detection Performance

The hug sensor had to maintain the simplicity that this design requires. This approach involved working with the sensor’s structure and the shape of the robot body. The purpose of this experiment was to evaluate the performance of the pressure sensor designed to detect hugs. Because the device is spherical, there are many possible ways to manipulate it, and the sensors should be able to distinguish hugs from other kinds of touch-based interaction that involve pressing it. To evaluate this, we first observed which tactile gestures led to the majority of detection mistakes. Gestures like petting, slapping, or rotating were too different from hugging, and easily differentiated. However, those gestures that involved pressing using both hands had a higher probability of being incorrectly detected as hugs.

4.2.1. Experiment Setup

Ten participants joined this experiment voluntarily, and they provided informed consent before starting the session. The sessions started by asking the participants to follow a set of instructions displayed on a screen. The instructions were the following:
  • Hug,
  • Press with both hands on the right and left sides,
  • Press with both hands on the upper and lower areas.
Each instruction was displayed for 10 s before changing to the next one. The hug instruction was alternated with the other two instructions, which resulted in five repetitions of the hug instruction, two repetitions of pressing on the left and right sides, and two repetitions of pressing on the upper and lower areas. Participants were asked to sit down in front of a computer and hold the device in their hands (Figure 7). They were facing the wall, and the experimenter was standing behind them. During this experiment, participants were not given instructions on how to hug the device (i.e., to apply more pressure, press in specific places, or hold it in a particular way), and the device did not provide feedback when a hug was detected.

4.2.2. Results

Figure 8 shows the results obtained for the hug detection. The system had a hug detection accuracy of 81.8% when the robot was hugged. Regarding false positives, pressing on the upper and lower areas was detected as a hug in 23.3% of the trials, and pressing on the sides was detected as a hug in 12.5% of the trials. Figure 9 shows the data from one session and the data collected during each instruction. The instruction for hugging was displayed five times during each session, and, in the figure, the intervals for the hug instruction are represented by the peaks of the dotted line. After collecting the data, we chose the polygon-area threshold that best separated hugs, which in this case was 1.4. The hug detection performance resulted in precision = 0.84, recall = 0.82, and F-measure = 0.83.
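For reference, these measures follow the standard definitions, where TP denotes hugs correctly detected as hugs, FP denotes other gestures incorrectly detected as hugs, and FN denotes hugs that were missed:

\mathrm{precision} = \frac{TP}{TP + FP}, \qquad \mathrm{recall} = \frac{TP}{TP + FN}, \qquad F = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}.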

4.3. Force Test for the Hug Sensor

The conductive material used to make the eight electrodes of the hug sensor has specific properties described by the manufacturer. However, several aspects alter the force–voltage relationship: a combination of materials was used to make the electrodes, and these were cut and shaped to cover the spherical robot body. With this experiment, we explored the relationship between the applied force and the output voltage of the proposed hug sensor.
To do this, we tested the conductive foam using a force gauge. We used plastic circular pieces with a fixed area to calculate the applied pressure. Different forces were applied, and we collected ten samples for each. From the data obtained, we converted the voltage values collected from the participants during the hug detection test into pressure values, in order to understand how much force is necessary for detecting hugs with the system.

Results

Figure 10 shows the sensor values for each applied pressure. The bars represent the standard deviation, and the dotted line is the approximated curve, whose parameters were obtained by the least squares method. We used this curve to convert the voltage values obtained during the hug detection test into applied pressure values. Following this, the data in Figure 9 were converted into the data observed in Figure 11, showing the average pressure applied during the hug detection test of one participant. Based on this test, we understood that, to generate a point for the polygon based on the selected threshold (500 as an ADC value), the user needs to apply about 1.3 N/cm² to the sensor piece.
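As an illustration of this calibration step, the following Python sketch fits a curve to (pressure, ADC) calibration samples by least squares and inverts it to convert sensor readings into pressure values. The sample data and the logarithmic model form are assumptions made for illustration; the actual measurements and fitted parameters are those shown in Figure 10.

```python
import numpy as np

# Illustrative calibration samples (pressure in N/cm^2, mean ADC reading).
# These are placeholder values, not the measurements reported in the paper.
pressure = np.array([0.2, 0.5, 1.0, 1.5, 2.0, 3.0])
adc      = np.array([780, 690, 560, 470, 410, 330])

# Least-squares fit of adc = a * ln(pressure) + b (an assumed model form).
a, b = np.polyfit(np.log(pressure), adc, deg=1)

def adc_to_pressure(adc_value: float) -> float:
    """Invert the fitted curve to estimate the applied pressure for a given ADC reading."""
    return float(np.exp((adc_value - b) / a))

if __name__ == "__main__":
    # With this illustrative calibration, the ADC threshold used for hug detection (500)
    # maps to a pressure in the order of 1 N/cm^2.
    print(f"a = {a:.1f}, b = {b:.1f}")
    print(f"ADC 500 -> {adc_to_pressure(500):.2f} N/cm^2")
```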

4.4. Tail Pulling Detection Performance

We designed a stretch sensor in the tail of Pepita. The tail is made of silicone, and a linear Hall effect sensor measures the variation in distance from a magnet when the tail is stretched. The wiring inside the silicone was coiled to avoid ruptures. With this approach, it was possible to work with the structure and material of the tail to design a simple interaction. The purpose of this experiment was to test the performance of the sensor in distinguishing pulling from other gestures.

4.4.1. Experiment Setup

Fourteen participants took part in this test voluntarily. The instructions were presented automatically on a screen. Participants familiarized themselves with the silicone tail before starting the test. Each gesture instruction was presented for 3 s, followed by a release instruction, also for 3 s. The instructions were:
  • Pull,
  • Shake,
  • Grasp.
In total, each instruction was presented five times, and feedback from the system was disabled.

4.4.2. Results

After collecting the data from the participants, we selected a threshold that resulted in high detection performance: 595 (ADC value, about 2.9 V). A precision and recall analysis was performed using the selected threshold and the data collected from the 14 participants. The performance of the gesture detection resulted in precision = 0.84, recall = 0.93, and F-measure = 0.88.
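A minimal sketch of the resulting detection rule is shown below. A 10-bit ADC with a 5 V reference is assumed in order to relate the reported threshold (595) to the quoted voltage (about 2.9 V), and the sketch assumes readings rise when the tail is pulled; the actual direction depends on the orientation of the magnet and sensor.

```python
ADC_RESOLUTION = 1023     # assumed 10-bit ADC
V_REF = 5.0               # assumed analog reference voltage
PULL_THRESHOLD_ADC = 595  # threshold selected from the collected data (about 2.9 V)

def adc_to_volts(adc_value: int) -> float:
    """Convert a raw ADC reading from the Hall effect sensor into volts."""
    return adc_value * V_REF / ADC_RESOLUTION

def is_pull(adc_value: int) -> bool:
    """Classify a reading as a tail pull when it crosses the selected threshold."""
    return adc_value >= PULL_THRESHOLD_ADC

if __name__ == "__main__":
    for reading in (540, 580, 610, 650):
        print(reading, f"{adc_to_volts(reading):.2f} V", "pull" if is_pull(reading) else "no pull")
```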

4.5. Affective Feedback Using Projected Avatars

With this experiment, we explored an alternative method for the robotic device to represent affective expressions visually. We evaluated the affective expressiveness of projected images in comparison with a more conventional visual feedback method, namely colored light patterns. The expressions for a physically constrained robot developed by [39] represented four emotions: happy, sad, angry, and relaxed. Their proposed methodology suggested that it is possible to represent all the emotions that a user can perceive by implementing only these four emotional expressions, as each falls in one quadrant of the valence–arousal space. With this design, any emotion in the same quadrant is considered to be similar to the representative one (e.g., happy and delight), and it is easier to distinguish it from those in different quadrants (e.g., angry and calm). This also implies that increasing the variety of emotions to be expressed does not always benefit the quality of interaction; on the contrary, it could confuse the user, especially in the case of physically constrained robotic devices. To avoid this issue, we adopted a simplified set of robot expressions consisting of one positive and one negative expression. For this reason, we designed visual representations of a positive affective expression (happy-like) and a negative affective expression (sad-like) using both avatars and colored lights.
The avatars were animated in a sequence that displayed a character that shared features with the physical robot. The avatars featured facial expressions as well as images such as blooming flowers for happy, dry flowers for sad, or changes in the color of leaves (Figure 12).
To design the expressions using only colored lights, we followed the methodology proposed by [36], and used the following function:
f(t) = \begin{cases} \dfrac{1}{2} - \dfrac{1}{2}\cos\dfrac{2\pi t}{xT}, & 0 < t \le \dfrac{xT}{2}, \\ 1, & \dfrac{xT}{2} < t \le \dfrac{T}{2}, \\ \dfrac{1}{2}\cos\left(\dfrac{2\pi}{xT}\left(t - \dfrac{T}{2}\right)\right) + \dfrac{1}{2}, & \dfrac{T}{2} < t \le \dfrac{T}{2}(1 + x), \\ 0, & \dfrac{T}{2}(1 + x) < t \le T. \end{cases}
Positive expressions like happiness are related to yellow light with a high frequency and square waveform. Negative expressions like sadness are related to blue light with a low frequency and sinusoidal waveform. In this experiment, we set T = 900 ms and x = 0.25 for the happy state, and T = 3350 ms and x = 0.75 for the sad state. These selected values were similar to the ones proposed by [36], and they have already been proven to be effective at conveying these affective expressions.
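The following Python sketch shows how such a luminosity pattern could be generated from the piecewise function above, using the reported parameters for the happy (T = 900 ms, x = 0.25) and sad (T = 3350 ms, x = 0.75) states. The sampling step and the mapping to 8-bit LED values are illustrative assumptions rather than details of the implemented firmware.

```python
import math

def luminosity(t_ms: float, period_ms: float, x: float) -> float:
    """Piecewise luminosity f(t) in [0, 1]: raised-cosine ramp up, hold at 1, ramp down, off."""
    t = t_ms % period_ms
    rise_end = x * period_ms / 2.0
    fall_start = period_ms / 2.0
    fall_end = period_ms * (1.0 + x) / 2.0
    if t <= rise_end:
        return 0.5 - 0.5 * math.cos(2.0 * math.pi * t / (x * period_ms))
    if t <= fall_start:
        return 1.0
    if t <= fall_end:
        return 0.5 * math.cos(2.0 * math.pi * (t - fall_start) / (x * period_ms)) + 0.5
    return 0.0

# Parameters reported for the two affective states.
HAPPY = {"period_ms": 900.0, "x": 0.25, "color": (255, 255, 0)}   # yellow, fast, square-like
SAD   = {"period_ms": 3350.0, "x": 0.75, "color": (0, 0, 255)}    # blue, slow, sinusoidal-like

def led_value(t_ms: float, state: dict) -> tuple:
    """Scale the state color by the luminosity at time t (8-bit PWM-style values)."""
    f = luminosity(t_ms, state["period_ms"], state["x"])
    return tuple(int(round(c * f)) for c in state["color"])

if __name__ == "__main__":
    # Sample the happy pattern every 100 ms over one period.
    for t in range(0, 901, 100):
        print(t, led_value(float(t), HAPPY))
```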

4.5.1. Questionnaire Overview

In this study, we had the goal of answering the following questions:
  • Can each visual element displayed by the robotic device represent the intended affective expression?
  • Comparing the LED and projected avatars, which is more efficient at representing the selected affective expression?
  • When the robot is projecting avatars, is it perceived as one entity (the robot and its avatar) or two separate entities (a robot and an avatar)?
In the experiment described next, we compared the effect of colored lights with the effect of projected avatars when they were used to convey the robot’s affective expressions. To evaluate the use of avatars in this application, we developed an online questionnaire using the service provided by [56]. The complete questionnaire can be found in Appendix B. Twenty-six participants who were not familiar with the robot (18 males and 8 females, age 26.9 ± 3.7 on average) took this questionnaire. The nationalities of the participants were grouped as: North America = 1, Central America = 3, South America = 15, Europe = 1, and Asia = 1. The participants’ cultural background is a factor that can potentially impact the perception of emotions represented by color [60,61]. In the literature, colors are commonly combined with other parameters to represent a robot’s emotional expressions [36,39,62]; for this reason, the cultural aspect is not necessarily controlled, as the perception of the feedback is far from being related only to colors. Since the purpose of this experiment is not to evaluate the effect of the cultural background on the perception of the robot, there was no restriction regarding nationality for joining this experiment.
Participants were contacted via a social networking service and received the access link via email. The questionnaire was not public; it could only be accessed after receiving an invitation. The questionnaire consisted of two sets of two videos each, followed by questions. The order of the videos was counterbalanced to reduce the order effect. Two of the videos showed the robot projecting an avatar with emotional facial expressions (Figure 12a,b), and the other two showed the robot displaying LED color patterns (Figure 12c,d). Each participant received one of the combinations, and the combinations were proportionally balanced among the group of participants.
The videos presented the robot displaying one type of visual feedback without being contextualized by the environment or interaction. A previous study showed evidence that the context could affect the participant’s recognition of the robot’s expressions [63]. For this reason, we avoided influencing the participant’s choices by adding elements related to interaction, such as by showing the happy state after hugging or the sad one after hitting. In this questionnaire, we attempted to evaluate only the perception of the visual elements. After watching the videos, the participants were asked to rate the following statements using a 5-point rating scale:
  • In my general impression, I consider that the perceived behavior of the robot makes reference to a happy-like behavior.
  • In my general impression, I consider that the perceived behavior of the robot makes reference to a sad-like behavior.
The second part of the questionnaire was related to the perception of the robot’s embodiment. Because we were using two types of embodiment (the projected robot and physical robot), we intended to clarify the entity for which the participants perceived the affective expressions.
The concept of using multimodal interfaces to benefit from different types of embodiment has been explored in the past [64], where an avatar was used as a complement of a physical robot, and they were combined into one entity. The avatar was designed with an appearance similar to the physical robot, and it was implemented using a migration system, which involved having either the avatar or robot active at any given time. Following a similar approach, a different study explored user’s perception when interacting with an artificial pet with two types of embodiment (virtual and physical robot), that transferred from one embodiment to the other, leaving only one of them active at a time [65,66]. These studies pointed out the importance of making the user perceive that they were interacting with the same entity that migrated from one embodiment to the other.
In our approach, both types of embodiment, the robot and the avatar, were active simultaneously instead of one at a time. For this reason, we included one last question at the end of the questionnaire to try to understand the perception of the robot embodiment:
From the following statements, choose the one that most closely reflects your perception of the robot body interface:
  • I perceive the robot body interface as two entities: an avatar and a robot,
  • I perceive the robot body interface as one entity: the robot and its avatar.

4.5.2. Results

Based on the answers obtained from the questionnaire, we attempted to answer three questions related to this robotic device. The first question was whether both the LED patterns and the projected avatars could convey the intended affective expressions. The results presented in Table 2 show a clear difference for both types of stimuli. The stimulus for the happy state made with projected avatars obtained a score of 4.73 for the happy state compared to 1.46 for the sad state. In the case of the stimulus made with colored lights, the happy state obtained 3.62 compared to 2.19 for the sad state. The stimulus for the sad state made with projected avatars obtained a score of 4.35 for the sad state compared to 1.27 for the happy state. Colored lights obtained 3.69 for the sad state and 2.23 for the happy state. For all the combinations, the participants could perceive the affective state represented in the videos using both LEDs and avatars. Note that, to confirm whether the order of the stimuli affected the participants’ scores, we applied a Kruskal–Wallis test for each stimulus, and no significant difference due to the order was found for any stimulus (Avatar (Happy): p > 0.5, χ² = 1.65, η² = 0.066; Avatar (Sad): p > 0.5, χ² = 2.30, η² = 0.092; LED (Happy): p > 0.5, χ² = 1.63, η² = 0.065; LED (Sad): p > 0.5, χ² = 1.91, η² = 0.076).
To answer the second question, we compared the stimuli made with the projected avatars and with the LEDs when displaying the affective states. When presenting the stimuli for the happy state, the participants’ perception of a happy state scored 4.73 with projected avatars compared to 3.62 with LEDs. In the case of the stimuli for the sad state, the participants’ perception of a sad state scored 4.35 with projected avatars compared to 3.69 with LEDs.
The third question was related to embodiment, and the results showed that 20 participants perceived the projected avatar as part of the physical robot, five participants perceived the avatar and the robot as different entities, and one participant did not answer this question.

5. Discussion

This section includes insights obtained from the evaluation of the design of the companion robot Pepita, and how these will be used to improve it for future studies. Then, results of the hug and tail pulling detection experiments and accuracy tests are discussed together with explanations of the implications of errors in detection. Based on the results of the affective feedback comparing projected avatars with colored lights, we discuss the potential of the proposed combination of robots with projected avatars as a method to convey affective expressions.

5.1. Exploring the Design of Pepita

The first part of the evaluation consisted of a general evaluation of different characteristics of the design of Pepita. Regarding the huggable aspect, we compared Pepita with three other huggable robots (Table 1). Among them, Probo [29] was designed with a caricatured appearance, The Huggable [9] looks like a teddy bear, and The Hug [57] has a simple appearance with big arms and no facial features. We could see that, even though Pepita has no arms, it had scores similar to the other three interfaces. However, the teddy bear appearance effectively evoked the feeling of being huggable, which was reflected in higher scores. Evaluating the huggable aspect using only photos can lead to limited results, since the impression can change when directly touching the robot. However, because the role of the appearance was being investigated, we found it appropriate to use photos to compare the proposed design with other huggable robots. In this study, we concluded that common features of huggable interfaces (e.g., open arms) are not essential, but making the robot look familiar, based on an already existing idea of huggability, could make the robot more appealing to hug.
Then, methods for conveying the robot’s affective expressions were evaluated. We compared robots that used mechanical facial expressions (Probo [31] and Zeno [32]) and two robots that used a display to manage the robot’s facial expressions (Pepita [58] and Buddy [11]). The results showed that there was a significant difference between Pepita and the other three robots. Participants favored both of the robots that presented expressions using a display over the two that used mechanical facial expressions. However, Buddy’s expressions were significantly more acceptable than Pepita’s expressions. Embedded displays (Buddy) can be used to present facial expressions as a part of the robot’s body, while projected displays open the possibility of designing new ways of interacting with robots. Moreover, projected images can be shared by different users and have a strong visual impact. On the other hand, robots with mechanical facial expressions have specific applications in which it is necessary to make robots that imitate a human’s behavior to a higher degree. In this study, we found that, regardless of the application, using avatars projected from the robot’s body was considered to be as acceptable as other more traditional ways of conveying a robot’s affective expressions.
Regarding the perception of the robot’s appearance, the results in Figure 6 show that, in general, the participants gave Pepita positive ratings in the following four aspects: “kind”, “friendly”, “pleasant”, and “nice”. On the other hand, a tendency toward the attributes “artificial”, “machine-like”, and “fake” reflected that participants perceived Pepita as an artificial agent. These results encourage us to use a similar design in a future study given that rating Pepita as an artificial character is not necessarily undesirable. Based on these results, we concluded that it is important to design this type of character to have a robot-like means of expression that matches the expectations of the user. Moreover, to obtain more insights related to which factors positively and negatively impacted the perception of Pepita, we asked the participants an open-ended question. Their positive comments expressed a general acceptance of the images projected from the robot as an alternative means of representing the robot’s expressions. We observed that the huggable aspect was mentioned by some participants to be positive, but not as strong as other features. This outcome becomes more evident after reading some of the negative comments that pointed out that the robot did not look huggable enough. For this reason, we plan to work on the robot’s softness to try to reduce this negative aspect. Another critical factor that needs to be improved in this design is the negative influence of the eyes. The majority of negative aspects included the appearance of the eyes. Because Pepita has a projector placed in one of the eyes, one eye lights up, and one is off. This characteristic was perceived to be unnatural because it seemed that one eye was “not working” or “dead.” Thus, we plan to find a different location for the projector that does not negatively affect the perception of the robot’s appearance.

5.2. Hug and Tail Pulling Detection Performance

We tested the detection of hugs among other kinds of touch-based interactions. We chose to test pressing on the sides, top, and bottom because these are common ways to handle a spherical robot, and they are more similar to hugging. Even though the sensor could not provide the level of detailed tactile information that would be possible with sensors of higher resolution, the results showed that the sensors worked well for the intended interaction (Figure 8). The highest number of false positives was obtained when pressing on the upper and lower sides. We expected this result because all eight sensor pieces converge at the poles of the sphere, within an area that can be covered by a hand.
To better evaluate the accuracy of the system, the precision, recall, and F-measure were calculated. The results showed that the system is accurate and sensitive in detecting hugs. However, it is important to understand the potential implications of detection failures. One type of error occurs when the system fails to detect an actual hug (false negative). Of the two types of errors, this is the easier to address, because users can realize when it occurs from the lack of response from the system (e.g., not receiving feedback right after hugging) and then correct it by hugging again. The other type of error occurs when a different type of gesture is incorrectly detected as a hug (false positive). Unintended hug detection should be avoided, as it might undermine the user’s trust in the system. This sensor was designed especially for this interface, with the criteria of using soft materials and reducing the sensor complexity as much as possible. We are satisfied with the results so far, as we were able to detect hugging using only eight electrodes while maintaining high performance. These values could be improved by adding different kinds of sensors (such as sound or temperature sensors) to complement the pressure sensor readings. A limitation of the current approach is related to the way the sensors are mapped onto a two-dimensional polygon. This makes it impossible, for example, to detect hugs if the robot is tilted 90 degrees. Pepita was designed with features such as eyes and decoration on the top to give the user an idea of the orientation in which to hold the robot, but future solutions should tackle this issue. Moreover, for the current implementation, we only used the spatial information at each time frame (the area of the polygon generated when the sensors are pressed), and the temporal information was neglected. Therefore, using a model that involves temporal features or past events, such as a Hidden Markov Model or a Recurrent Neural Network, would improve the accuracy of detection, and it should be considered for future implementations.
Pulling the tail was used as a gesture that conveys a negative meaning. The sensor was designed to work with the structure and material of the tail. The magnet and the hall effect sensor were placed at a distance that resulted in clear signal changes when the tail was being pulled. To test the detection performance, we chose three other gestures that could be wrongly detected as pulling. The results of the sensor performance show that the detection was accurate enough for the proposed application (higher than 0.8). This approach offers a feasible solution with a sensor of simple structure that supports detection of pulling.

5.3. Affective Feedback Using Projected Avatars

To keep the robot body limited to simple features, we combined it with projected avatars, which enhanced its communication capabilities without increasing the complexity of the hardware. For this reason, we evaluated the use of a small projector embedded in the robot body to convey the affective states related to two emotions. In this study, we explored the benefits of using a projector compared to abstract visual elements made with colored lights. The results related to the perception of the colored light patterns were in line with those of a previous study [36]. The participants perceived a happy state when the robot was blinking yellow lights and a sad state with blue lights (Table 2). These results also showed that avatars were more effective at conveying affective expressions than the feedback made with colored lights. During the study, the robot was not moving or showing any behavior; only visual elements were being displayed. For a robot with limited features and an inexpressive face, there are few options to convey affective expressions visually. Colored lights are commonly used and explored, as they are a simple and effective solution, but the perception of these abstract representations of emotions can differ from one person to another. Projected avatars with affective expressions have the potential to reduce the user’s misinterpretation by providing a clear message.
Regarding the robot embodiment, the results showed that most of the participants could perceive that the avatar was reflecting the expressions of the physical robot as only one entity. This exploratory study showed that, even though the avatar was displaying the affective expressions, since they were projected from the robot, it was possible for the participants to perceive them as part of the robot’s behavior instead of an external agent. Based on these results, we are encouraged to keep exploring the possibility of using images projected from a robot to convey its expressions in a robot-like manner.

6. Application Scenarios

6.1. Description of the Social Context of Pepita

Based on the features selected for the design of Pepita, one possible application is as a mediator of remote communication. This involves a paired-robot configuration, in which each user communicates using one of two identical robots. The current implementation is driven by the user's tactile gestures (hugging and pulling the tail) and communicates with the user through projected avatars. These avatars are displayed by an application running on the robot's smartphone, which makes it easy to assign different avatars to different users as a kind of ID. The robot displays its avatar with a happy expression every time the user hugs it. At the same time, every hug is translated into a message and displayed on the paired robot. As time passes without a hug, the avatar changes to a sad expression. In the same way, the robot displays the partner's avatar every time it receives a hug from the paired robot, and, as time passes without further messages, the partner's avatar also changes to a sad expression. We expect users to be aware of these changes and feel motivated to interact with the robot to improve its affective state, which is itself a message for the partner. Since this type of communication does not carry any detailed content, the purpose of the message is open to interpretation.
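The hug-driven exchange and the time-decaying avatar states described above can be summarized with a small state sketch. The class and method names, the decay interval, and the display placeholder below are illustrative assumptions, not the actual smartphone application code.

```python
import time

SAD_AFTER_SECONDS = 6 * 60 * 60   # assumed time without hugs before an avatar turns sad

class PepitaNode:
    def __init__(self, own_avatar, partner_avatar):
        self.own_avatar = own_avatar          # avatar shown when this robot is hugged
        self.partner_avatar = partner_avatar  # avatar shown when a hug arrives from the pair
        self.last_own_hug = None
        self.last_received_hug = None
        self.peer = None                      # the paired PepitaNode

    def on_hug(self):
        """Local hug: show the owner's happy avatar and notify the paired robot."""
        self.last_own_hug = time.time()
        self.display(self.own_avatar, "happy")
        if self.peer is not None:
            self.peer.on_message()

    def on_message(self):
        """A hug arrived from the paired robot: show the partner's happy avatar."""
        self.last_received_hug = time.time()
        self.display(self.partner_avatar, "happy")

    def update(self):
        """Called periodically: both avatars decay to a sad expression over time."""
        now = time.time()
        if self.last_own_hug and now - self.last_own_hug > SAD_AFTER_SECONDS:
            self.display(self.own_avatar, "sad")
        if self.last_received_hug and now - self.last_received_hug > SAD_AFTER_SECONDS:
            self.display(self.partner_avatar, "sad")

    def display(self, avatar, expression):
        # Placeholder for the projected output driven by the smartphone application.
        print(f"Projecting {avatar} with a {expression} expression")

# Example: pair two robots and simulate one user hugging hers.
jane, parents = PepitaNode("jane", "parents"), PepitaNode("parents", "jane")
jane.peer, parents.peer = parents, jane
jane.on_hug()   # Jane's robot shows her happy avatar; the parents' robot shows Jane's avatar
```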
Pepita is envisioned for two scenarios. In the first, the robot works as a standalone communication device, and each message acts as a kind of presence indicator (Figure 13A). In the second, Pepita complements verbal communication and works as a tangible emoticon (Figure 13B). In this way, it is expected to enhance the transmission of affective messages and foster a sense of co-presence.
Two user scenarios are proposed to illustrate how Pepita mediates expressions of affection in remote communication. The first reflects the use of Pepita as a standalone communication device; in the second, Pepita complements verbal communication:
  • Jane is a college student living away from her family. Every morning before going to the university, she leaves a message for her parents by hugging her robot. Jane watches her avatar display a happy expression and then sets the robot on the sofa. When she comes back home in the evening, she sees her parents' avatar displaying a sad expression, which she understands as "I received a hug from my parents a long time ago". She takes her Pepita and hugs it, watching her avatar appear with a happy expression and conveying to her parents that she is now home.
  • On the weekend, Jane is talking with her mother by phone, sharing stories from the recent days. She takes her robot and hugs it, sending a message. Her father, who is in the living room watching TV, sees their robot react by showing Jane's "happy" avatar. He then asks to speak with her to say hello and, while talking to her, hugs the robot back.

6.2. Combining Robots with Projectors

Designing robots with a screen for a face is becoming popular because a screen can display a broad range of expressions that are easy to identify. Screens are readily associated with computers, and computers, like robots, are machines; for this reason, screens can be perceived as appropriate parts of robots. Projectors work in a way similar to screens in that they can display many kinds of information, but they differ in how the user interacts with the interface. Embedded screens are constrained by the size of the robot's body and are therefore usually small, which leads users to become absorbed in the contents of the screen. Projectors, on the other hand, can display large images that can be shared by multiple people in the same space. It is therefore interesting to explore the social aspect of robots with projectors and their effect on human interaction.
Projected images have a substantial visual impact while allowing the robot structure to remain simple. Moreover, projected images create opportunities to share the experience with more than one user in the same space while keeping a small robot body. By combining robots and projectors, it is also possible, for example, to share photos and memories with people in the same place and in different locations, opening possibilities for future designs (Figure 14B). Enhancing a robot's capabilities with a projector is not limited to facial expressions. Robots without limbs, like Pepita, have limited bodily expressiveness, and projectors can be used to display animations of body gestures. For example, avatars with pointing gestures projected from the robot could allow users to perform actions together regardless of whether they share the same space (Figure 14A).
However, projected images are hard to see in a bright environment, and they can be occluded when the robot is hugged. One possible way to overcome these limitations is to notify the user when a message arrives by combining LEDs and vibration, so that the user can move the robot to a darker place if necessary or uncover the projector lens. Another limitation is the heat generated by the lamp, which could be a problem if the projector is used continuously for extended periods. Future projects of this kind need to consider carefully the problem that heat can pose for the application.

7. Conclusions

This article introduced Pepita, a companion robot designed to sense and convey affective information. The current implementation senses two types of tactile gesture: hugs as a positive message and pulling the tail as a negative message. The system translates these actions into visual feedback made with projected avatars, designed to convey a positive affective expression (happy) and a negative affective expression (sad). This report covered the design criteria, the system overview, and the tactile gesture recognition methods.
The results offer some guidelines to improve the current design. The hug detection method performed well for this application, but comments from the participants pointed to the need to improve the huggable aspect of the robotic device. We compared the proposed robot with other huggable robots and concluded that, while a simple appearance with no arms is acceptable, making the robot resemble other familiar huggable objects (such as stuffed animals or cushions) can make it more appealing to hug. For this reason, future work involves improving the huggable aspect to elicit and support natural hugging behavior. The proposed approach using an array of foam sensors could benefit from an extra layer of cushioning between the case and the sensor to increase its softness.
Regarding the appearance, the caricatured features were generally found funny and acceptable, but the projector's lamp in one eye had a strong adverse effect. We therefore plan to relocate the projector to a different part of the robot. The projected avatars used to express the robot's affective states were clear and effective at conveying information. Moreover, participants perceived the avatars as part of the robot (as one entity). In the current design, we worked with a hug-driven change of states, and thus only minimal expressions were implemented. Considering that the main benefit of projected avatars is the variety of information that can be easily displayed, richer interaction rules leading to more expressions are a good starting point for future work.
This study represents the first step in the process of developing a companion robot for remote communication. Before evaluating its effect in remote communication contexts, it was necessary to implement a robot based on robust design criteria and to explore which characteristics are appropriate for a social mediator of affective messages. Future work will involve using two identical robots to mediate communication, with touch-based gestures as the input and projected images as the output, in order to explore their role in enhancing the sense of co-presence compared to traditional telecommunication devices.

Author Contributions

Conceived and designed the experiments: E.N. and K.S. Performed the experiments: E.N. and M.H. Analyzed the data: E.N. and M.H. Wrote the paper: E.N.

Funding

This work was partly supported by JST CREST project, Grant Number JPMJCR14E2, Japan.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Appendix A. Questionnaire for Exploring the Design of Pepita

  • Page 1: The questionnaire consists of different photos and videos, and you will be asked to give your impressions. Before starting, please be aware of the following: (1) In case you do not understand an English word, please refer to a dictionary to be sure of the meaning before answering; (2) Look at the picture and answer based on your first impressions; (3) Read all the sentences and instructions; (4) The entire questionnaire will take approx. 15 min.
  • Page 2: Consent to participate in the research.
  • Page 3: Nationality, age, gender.
  • Page 4: Part 1: This section will introduce different huggable robots. In other words, a robot that can sense hugging actions. These robots differ in size, appearance, and shape. You will be asked to rate them using a scale.
  • Page 5: Please rate the following statements for each presented picture. (Photo of 1 of the 4 huggable robots, counterbalanced order) The robot looks huggable (5 points scale, from “strongly disagree” to “strongly agree”). The robot looks easy to hug (5 points scale, from “strongly disagree” to “strongly agree”). The robot looks appealing to hug (5 points scale, from “strongly disagree” to “strongly agree”).
  • Page 6: (Photo of 1 of the 4 huggable robots, counterbalanced order) The robot looks huggable (5 points scale, from “strongly disagree” to “strongly agree”). The robot looks easy to hug (5 points scale, from “strongly disagree” to “strongly agree”). The robot looks appealing to hug (5 points scale, from “strongly disagree” to “strongly agree”).
  • Page 7: (Photo of 1 of the 4 huggable robots, counterbalanced order) The robot looks huggable (5 points scale, from “strongly disagree” to “strongly agree”). The robot looks easy to hug (5 points scale, from “strongly disagree” to “strongly agree”). The robot looks appealing to hug (5 points scale, from “strongly disagree” to “strongly agree”).
  • Page 8: (Photo of 1 of the 4 huggable robots, counterbalanced order) The robot looks huggable (5 points scale, from “strongly disagree” to “strongly agree”). The robot looks easy to hug (5 points scale, from “strongly disagree” to “strongly agree”). The robot looks appealing to hug (5 points scale, from “strongly disagree” to “strongly agree”).
  • Page 9: When you rated the different huggable robots, how relevant were these features for your answers? (5 points scale, from unimportant to extremely important) (1) The shape of the robot body, (2) Size of the robot body, (3) Weight of the robot body, (4) Texture of the skin, (5) Softness of the robot body, (6) Appearance of the robot.
  • Page 10: Part 2: When interacting with people, robots need to understand and convey a representation of emotions. In this section, you will watch four different videos in succession, each displaying a different robot’s expressions. Then, using some photos as reference, you will be asked to give your general impressions about them.
  • Page 11: (video of 1 of the 4 robots displaying facial expression by a display or with a mechanical face, counterbalanced order).
  • Page 12: (Photo showing the previous robot’s expressions) Based on your first impression, please express, using the following scale, how acceptable the robot’s expressions of emotions are for you (5 points scale using emoticons from sad to happy).
  • Page 13: (video of 1 of the 4 robots displaying facial expression by a display or with a mechanical face, counterbalanced order).
  • Page 14: (Photo showing the previous robot’s expressions) Based on your first impression, express, using the following scale, how acceptable the robot’s expressions of emotions are for you (5 points scale using emoticons from sad to happy).
  • Page 15: (video of 1 of the 4 robots displaying facial expression by a display or with a mechanical face, counterbalanced order).
  • Page 16: (Photo showing the previous robot’s expressions) Based on your first impression, express, using the following scale, how acceptable the robot’s expressions of emotions are for you (5 points scale using emoticons from sad to happy).
  • Page 17: (video of 1 of the 4 robots displaying facial expression by a display or with a mechanical face, counterbalanced order).
  • Page 18: (Photo showing the previous robot’s expressions) Based on your first impression, express, using the following scale, how acceptable the robot’s expressions of emotions are for you (5 points scale using emoticons from sad to happy).
  • Page 19: Part 3: In this section, you will be asked to give your general impression about the social robot companion Pepita. This robotic device was designed to be placed at home and interact with people in everyday life. (Video of a person interacting with Pepita)
  • Page 20: (Photo of Pepita) Please express your impressions of Pepita using the following scale: (7 points scale with 8 items, from Awful to Nice, from Machinelike to Humanlike, from Artificial to Lifelike, from Unpleasant to Pleasant, from Fake to Natural, from Unfriendly to Friendly, from Unconscious to Conscious, from Unkind to Kind). This question was followed by two blank spaces to collect the features of Pepita that positively and negatively impacted the answers.

Appendix B. Questionnaire for Exploring the Affective Feedback Using Projected Avatars

  • Page 1: The questionnaire consists of two sets of two videos followed by some questions: (1) The videos display Pepita, a robotic device, showing different visual feedback; (2) You will then be asked about your perceptions and impressions; (3) The entire questionnaire will take approx. 10 min.
  • Page 2: Consent to participate in the research.
  • Page 3: Nationality, age, gender.
  • Page 4: Task overview.
  • Page 5: Case 1: In the following video, the robot is displaying light color patterns. (You can play this video multiple times) (Player showing Projector or LED condition, happy or sad. All the options are counterbalanced)
  • Page 6: Case 2: In the following video, the robot is displaying light color patterns. (You can play this video multiple times) (Player showing Projector or LED condition, happy or sad. All the options are counterbalanced)
  • Page 7: Please select the option that reflects your immediate response to each statement. Do not think too long about each statement. Make sure you answer every question. (Photo of case 1) As a total impression, I consider that the perceived behavior of the robot makes reference to the following statements: Happy-like behavior (5 points scale, from “strongly disagree” to “strongly agree”). (Photo of case 2) As a total impression, I consider that the perceived behavior of the robot makes reference to the following statements: Happy-like behavior (5 points scale, from “strongly disagree” to “strongly agree”).
  • Page 8: Case 3: In the following video, the robot is displaying light color patterns. (You can play this video multiple times) (Player showing Projector or LED condition, happy or sad. All the options are counterbalanced)
  • Page 9: Case 4: In the following video, the robot is displaying light color patterns. (You can play this video multiple times) (Player showing Projector or LED condition, happy or sad. All the options are counterbalanced)
  • Page 11: Please select the option that reflects your immediate response to each statement. Do not think too long about each statement. Make sure you answer every question. (Photo of case 3) As a total impression, I consider that the perceived behavior of the robot makes reference to the following statements: Happy-like behavior (5 points scale, from “strongly disagree” to “strongly agree”). (Photo of case 4) As a total impression, I consider that the perceived behavior of the robot makes reference to the following statements: Happy-like behavior (5 points scale, from “strongly disagree” to “strongly agree”).
  • Page 12: (Photo of Pepita projecting avatars) From the following statements, choose the one that most closely reflects your perception about the robot body interface: (1) I perceive the robot body interface as two entities: an avatar and a robot. (2) I perceive the robot body interface as one entity: the robot and its avatar.

References

  1. Steunebrink, B.R.; Vergunst, N.L.; Mol, C.P.; Dignum, F.; Dastani, M.; Meyer, J.J.C. A Generic Architecture for a Companion Robot. In Proceedings of the ICINCO-RA (2), Funchal, Portugal, 11–15 May 2008; pp. 315–321. [Google Scholar]
  2. Hegel, F.; Muhl, C.; Wrede, B.; Hielscher-Fastabend, M.; Sagerer, G. Understanding social robots. In Proceedings of the Second International Conferences on Advances in Computer-Human Interactions (ACHI’09), Cancun, Mexico, 1–7 February 2009; pp. 169–174. [Google Scholar]
  3. AvatarMind’s iPal Robot. 2016. Available online: https://www.ipalrobot.com (accessed on 1 September 2018).
  4. Kędzierski, J.; Kaczmarek, P.; Dziergwa, M.; Tchoń, K. Design for a robotic companion. Int. J. Hum. Robot. 2015, 12, 1550007. [Google Scholar] [CrossRef]
  5. SoftBank Robotics’s Pepper Robot. 2014. Available online: https://www.softbank.jp/en/robot/ (accessed on 1 September 2018).
  6. Robinson, H.; MacDonald, B.; Kerse, N.; Broadbent, E. The psychosocial effects of a companion robot: A randomized controlled trial. J. Am. Med. Directors Assoc. 2013, 14, 661–667. [Google Scholar] [CrossRef] [PubMed]
  7. Friedman, B.; Kahn, P.H., Jr.; Hagman, J. Hardware companions?: What online AIBO discussion forums reveal about the human–robotic relationship. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Fort Lauderdale, FL, USA, 27 April–2 May 2003; pp. 273–280. [Google Scholar]
  8. Hasbro’s Joy for All. 2015. Available online: https://joyforall.com/products/companion-cats (accessed on 1 September 2018).
  9. Jeong, S.; Santos, K.D.; Graca, S.; O’Connell, B.; Anderson, L.; Stenquist, N.; Fitzpatrick, K.; Goodenough, H.; Logan, D.; Weinstock, P.; et al. Designing a socially assistive robot for pediatric care. In Proceedings of the 14th International Conference on Interaction Design and Children, Boston, MA, USA, 21–24 June 2015; pp. 387–390. [Google Scholar]
  10. Sproutel’s Jerry the Bear. 2017. Available online: https://www.jerrythebear.com (accessed on 1 September 2018).
  11. Milliez, G. Buddy: A Companion Robot for the Whole Family. In Proceedings of the Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; p. 40. [Google Scholar]
  12. Asus’s Zenbo. 2016. Available online: https://zenbo.asus.com (accessed on 1 September 2018).
  13. Future Robot’s Furo-i. 2017. Available online: http://www.myfuro.com/furo-i/service-feature/ (accessed on 1 September 2018).
  14. Jibo. 2017. Available online: https://www.jibo.com (accessed on 1 September 2018).
  15. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A survey of socially interactive robots. Robot. Auton. Syst. 2003, 42, 143–166. [Google Scholar] [CrossRef] [Green Version]
  16. Goetz, J.; Kiesler, S.; Powers, A. Matching robot appearance and behavior to tasks to improve human–robot cooperation. In Proceedings of the 12th IEEE International Workshop on Robot and Human Interactive Communication, Millbrae, CA, USA, 31 October–2 November 2003; pp. 55–60. [Google Scholar]
  17. Harbers, M.; Peeters, M.M.; Neerincx, M.A. Perceived autonomy of robots: Effects of appearance and context. In A World with Robots; Springer: Berlin, Germany, 2017; pp. 19–33. [Google Scholar]
  18. Lohse, M.; Hegel, F.; Wrede, B. Domestic applications for social robots: An online survey on the influence of appearance and capabilities. J. Phys. Agents 2008, 2. [Google Scholar] [CrossRef]
  19. Hegel, F.; Lohse, M.; Swadzba, A.; Wachsmuth, S.; Rohlfing, K.; Wrede, B. Classes of applications for social robots: A user study. In Proceedings of the 16th IEEE International Symposium on Robot and Human Interactive Communication, Jeju, Korea, 26–29 August 2007; pp. 938–943. [Google Scholar]
  20. Scassellati, B.; Admoni, H.; Matarić, M. Robots for use in autism research. Annu. Rev. Biomed. Eng. 2012, 14, 275–294. [Google Scholar] [CrossRef] [PubMed]
  21. Minato, T.; Shimada, M.; Ishiguro, H.; Itakura, S. Development of an android robot for studying human–robot interaction. In Proceedings of the 17th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, Ottawa, ON, Canada, 17–20 May 2004; pp. 424–434. [Google Scholar]
  22. MacDorman, K.F.; Ishiguro, H. The uncanny advantage of using androids in cognitive and social science research. Interact. Stud. 2006, 7, 297–337. [Google Scholar] [CrossRef]
  23. Kaplan, F. Free creatures: The role of uselessness in the design of artificial pets. In Proceedings of the 1st Edutainment Robotics Workshop, Sankt Augustin, Germany, 27–28 September 2000; pp. 45–47. [Google Scholar]
  24. Heerink, M.; Albo-Canals, J.; Valenti-Soler, M.; Martinez-Martin, P.; Zondag, J.; Smits, C.; Anisuzzaman, S. Exploring requirements and alternative pet robots for robot assisted therapy with older adults with dementia. In International Conference on Social Robotics; Springer: Berlin, Germany, 2013; pp. 104–115. [Google Scholar]
  25. Melson, G.F.; Kahn, P.H., Jr.; Beck, A.; Friedman, B. Robotic pets in human lives: Implications for the human—Animal bond and for human relationships with personified technologies. J. Soc. Issues 2009, 65, 545–567. [Google Scholar] [CrossRef]
  26. Chen, T.L.; Kemp, C.C. Lead me by the hand: Evaluation of a direct physical interface for nursing assistant robots. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Osaka, Japan, 2–5 March 2010; pp. 367–374. [Google Scholar]
  27. Weiss, A.; Bader, M.; Vincze, M.; Hasenhütl, G.; Moritsch, S. Designing a service robot for public space: An action and experiences-approach. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, 3–6 March 2014; pp. 318–319. [Google Scholar]
  28. Sankai, Y. HAL: Hybrid assistive limb based on cybernics. In Robotics Research; Springer: Berlin, Germany, 2010; pp. 25–34. [Google Scholar]
  29. Saldien, J.; Goris, K.; Yilmazyildiz, S.; Verhelst, W.; Lefeber, D. On the design of the huggable robot Probo. J. Phys. Agents 2008, 2, 3–11. [Google Scholar] [CrossRef] [Green Version]
  30. Sebastian, J.; Tai, C.Y.; Lindholm, K.; Hsu, Y.L. Development of caricature robots for interaction with older adults. In International Conference on Human Aspects of IT for the Aged Population; Springer: Cham, Switzerland, 2015; pp. 324–332. [Google Scholar]
  31. Saldien, J.; Goris, K.; Vanderborght, B.; Vanderfaeillie, J.; Lefeber, D. Expressing emotions with the social robot Probo. Int. J. Soc. Robot. 2010, 2, 377–389. [Google Scholar] [CrossRef]
  32. Cameron, D.; Fernando, S.; Collins, E.; Millings, A.; Moore, R.; Sharkey, A.; Evers, V.; Prescott, T. Presence of life-like robot expressions influences children’s enjoyment of human–robot interactions in the field. In Proceedings of the 4th International Symposium on New Frontiers in Human-Robot, Canterbury, UK, 21–22 April 2015. [Google Scholar]
  33. Gockley, R.; Simmons, R.; Wang, J.; Busquets, D.; DiSalvo, C.; Caffrey, K.; Rosenthal, S.; Mink, J.; Thomas, S.; Adams, W.; et al. Grace and George: Social Robots at AAAI. In Proceedings of the Nineteenth National Conference on Artificial Intelligence (AAAI), San Jose, CA, USA, 25–29 July 2004; Volume 4, pp. 15–20. [Google Scholar]
  34. Marcos, S.; Gómez-García-Bermejo, J.; Zalama, E. A realistic, virtual head for human—Computer interaction. Interact. Comput. 2010, 22, 176–192. [Google Scholar] [CrossRef]
  35. Michaud, F.; Laplante, J.F.; Larouche, H.; Duquette, A.; Caron, S.; Létourneau, D.; Masson, P. Autonomous spherical mobile robot for child-development studies. IEEE Trans. Syst. Man Cybern. A Syst. Hum. 2005, 35, 471–480. [Google Scholar] [CrossRef]
  36. Terada, K.; Yamauchi, A.; Ito, A. Artificial emotion expression for a robot by dynamic color change. In Proceedings of the RO-MAN, Paris, France, 9–13 September 2012; pp. 314–321. [Google Scholar]
  37. Baraka, K.; Rosenthal, S.; Veloso, M. Enhancing human understanding of a mobile robot’s state and actions using expressive lights. In Proceedings of the 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 652–657. [Google Scholar]
  38. Yonezawa, T.; Yoshida, N.; Kuboshima, K. Design of Pet Robots with Limitations of Lives and Inherited Characteristics. In Proceedings of the 9th EAI International Conference on Bio-Inspired Information and Communications Technologies (Formerly BIONETICS), New York, NY, USA, 3–5 December 2016; pp. 69–72. [Google Scholar]
  39. Song, S.; Yamada, S. Expressing Emotions through Color, Sound, and Vibration with an Appearance-Constrained Social Robot. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; pp. 2–11. [Google Scholar]
  40. Hemphill, M. A note on adults’ color—Emotion associations. J. Gen. Psychol. 1996, 157, 275–280. [Google Scholar] [CrossRef] [PubMed]
  41. Mayer, P.; Beck, C.; Panek, P. Examples of multimodal user interfaces for socially assistive robots in Ambient Assisted Living environments. In Proceedings of the IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom), Kosice, Slovakia, 2–5 December 2012; pp. 401–406. [Google Scholar]
  42. Panek, P.; Edelmayer, G.; Mayer, P.; Beck, C.; Rauhala, M. User acceptance of a mobile LED projector on a socially assistive robot. In Ambient Assist. Living; Springer: Berlin/Heidelberg, Germany, 2012; pp. 77–91. [Google Scholar]
  43. Delaunay, F.; De Greeff, J.; Belpaeme, T. Towards retro-projected robot faces: An alternative to mechatronic and android faces. In Proceedings of the 18th IEEE International Symposium on Robot and Human Interactive Communication, Toyama, Japan, 27 September–2 October 2009; pp. 306–311. [Google Scholar]
  44. Pierce, B.; Kuratate, T.; Vogl, C.; Cheng, G. “Mask-Bot 2i”: An active customisable robotic head with interchangeable face. In Proceedings of the12th IEEE-RAS International Conference on Humanoid Robots (Humanoids), Osaka, Japan, 29 November–1 December 2012; pp. 520–525. [Google Scholar]
  45. Mollahosseini, A.; Graitzer, G.; Borts, E.; Conyers, S.; Voyles, R.M.; Cole, R.; Mahoor, M.H. Expressionbot: An emotive lifelike robotic face for face-to-face communication. In Proceedings of the 14th IEEE-RAS International Conference on Humanoid Robots (Humanoids), Madrid, Spain, 18–20 November 2014; pp. 1098–1103. [Google Scholar]
  46. Smith, J.; MacLean, K. Communicating emotion through a haptic link: Design space and methodology. Int. J. Hum.-Comput. Stud. 2007, 65, 376–387. [Google Scholar] [CrossRef]
  47. Hertenstein, M.J.; Holmes, R.; McCullough, M.; Keltner, D. The communication of emotion via touch. Emotion 2009, 9, 566. [Google Scholar] [CrossRef] [PubMed]
  48. App, B.; McIntosh, D.N.; Reed, C.L.; Hertenstein, M.J. Nonverbal channel use in communication of emotion: How may depend on why. Emotion 2011, 11, 603. [Google Scholar] [CrossRef] [PubMed]
  49. Forsell, L.M.; Åström, J.A. Meanings of hugging: From greeting behavior to touching implications. Compr. Psychol. 2012, 1, 13. [Google Scholar] [CrossRef]
  50. Hassenzahl, M.; Heidecker, S.; Eckoldt, K.; Diefenbach, S.; Hillmann, U. All you need is love: Current strategies of mediating intimate relationships through technology. ACM Trans. Comput.-Hum. Interact. 2012, 19, 30. [Google Scholar] [CrossRef]
  51. Sumioka, H.; Nakae, A.; Kanai, R.; Ishiguro, H. Huggable communication medium decreases cortisol levels. Sci. Rep. 2013, 3, 3034. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Bonarini, A.; Garzotto, F.; Gelsomini, M.; Romero, M.; Clasadonte, F.; Yilmaz, A.N.Ç. A huggable, mobile robot for developmental disorder interventions in a multi-modal interaction space. In Proceedings of the 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 823–830. [Google Scholar]
  53. Fong, A.; Ashktorab, Z.; Froehlich, J. Bear-with-me: An embodied prototype to explore tangible two-way exchanges of emotional language. In Proceedings of the CHI’13 Extended Abstracts on Human Factors in Computing Systems, Paris, France, 27 April–2 May 2013; pp. 1011–1016. [Google Scholar]
  54. Kim, J.; Alspach, A.; Leite, I.; Yamane, K. Study of children’s hugging for interactive robot design. In Proceedings of the 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 557–561. [Google Scholar]
  55. Slyper, R.; Poupyrev, I.; Hodgins, J. Sensing through structure: Designing soft silicone sensors. In Proceedings of the Fifth International Conference on Tangible, Embedded, and Embodied Interaction, Funchal, Portugal, 22–26 January 2011; pp. 213–220. [Google Scholar]
  56. SoSci Survey (Version 2.6.00) [Computer Software]. Available online: https://www.soscisurvey.de (accessed on 1 September 2018).
  57. DiSalvo, C.; Gemperle, F.; Forlizzi, J.; Montgomery, E. The hug: An exploration of robotic form for intimate communication. In Proceedings of the 12th IEEE International Workshop on Robot and Human Interactive Communication, Millbrae, CA, USA, 2 November 2003; pp. 403–408. [Google Scholar]
  58. Nuñez, E.; Uchida, K.; Suzuki, K. PEPITA: A Design of Robot Pet Interface for Promoting Interaction. In International Conference on Social Robotics; Springer: Berlin, Germany, 2013; pp. 552–561. [Google Scholar]
  59. Bartneck, C. Who like androids more: Japanese or US Americans? In Proceedings of the 17th IEEE International Symposium on Robot and Human Interactive Communication, Munich, Germany, 1–3 August 2008; pp. 553–557. [Google Scholar]
  60. Gao, X.P.; Xin, J.H.; Sato, T.; Hansuebsai, A.; Scalzo, M.; Kajiwara, K.; Guan, S.S.; Valldeperas, J.; Lis, M.J.; Billger, M. Analysis of cross-cultural color emotion. Color Res. Appl. 2007, 32, 223–229. [Google Scholar] [CrossRef]
  61. Sokolova, M.V.; Fernández-Caballero, A. A review on the role of color and light in affective computing. Appl. Sci. 2015, 5, 275–293. [Google Scholar] [CrossRef]
  62. Feldmaier, J.; Marmat, T.; Kuhn, J.; Diepold, K. Evaluation of a RGB-LED-based Emotion Display for Affective Agents. arXiv, 2016; arXiv:1612.07303. [Google Scholar]
  63. Zhang, J.; Sharkey, A. Contextual recognition of robot emotions. In Towards Autonomous Robotic Systems; Springer: Berlin, Germany, 2011; pp. 78–89. [Google Scholar]
  64. Grigore, E.C.; Pereira, A.; Yang, J.J.; Zhou, I.; Wang, D.; Scassellati, B. Comparing Ways to Trigger Migration Between a Robot and a Virtually Embodied Character. In International Conference on Social Robotics; Springer: Berlin, Germany, 2016; pp. 839–849. [Google Scholar]
  65. Gomes, P.F.; Segura, E.M.; Cramer, H.; Paiva, T.; Paiva, A.; Holmquist, L.E. ViPleo and PhyPleo: Artificial pet with two embodiments. In Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology, Lisbon, Portugal, 8–11 November 2011; p. 3. [Google Scholar]
  66. Segura, E.M.; Cramer, H.; Gomes, P.F.; Nylander, S.; Paiva, A. Revive!: Reactions to migration between different embodiments when playing with robotic pets. In Proceedings of the 11th International Conference on Interaction Design and Children, Bremen, Germany, 12–15 June 2012; pp. 88–97. [Google Scholar]
  67. Stiehl, W.D.; Lieberman, J.; Breazeal, C.; Basel, L.; Lalla, L.; Wolf, M. Design of a therapeutic robotic companion for relational, affective touch. In Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication, Nashville, TN, USA, 13–15 August 2005; pp. 408–415. [Google Scholar]
Figure 1. Pepita: a huggable robot companion with caricatured appearance.
Figure 2. Device components: composed of a smartphone that communicates with the robot’s circuit and an external computer.
Figure 3. Stretch sensor structure: variations in the distance between the hall effect sensor and the magnet are used to detect stretching.
Figure 4. (Top) Pressure sensor arrangement: each petal-shaped sensor was made of soft conductive foam and attached around the spherical body of the robot. (Bottom) Hug detection method: when a piece of the sensor is pressed with a certain applied pressure, it generates a point placed in a fixed position on the Cartesian plane. When the area of the polygon formed by the generated points exceeds a threshold, the device recognizes the action as a hug.
Figure 5. Photos used in the questionnaire to compare huggable robots according to their appearance: (a) The Hug; (b) Probo; (c) The Huggable and (d) Pepita. Photos used in the questionnaire to compare different robots’ emotional expressions. Expressions made by display: (e) Pepita and (g) Buddy. Expressions made by mechanical face: (f) Zeno and (h) Probo.
Figure 6. Results using semantic differential to evaluate the impressions of Pepita’s appearance.
Figure 7. (A) participant during the “press left/right” instruction; (B) participant during the “press upper/lower” instruction and (C) participant during the “Hug” instruction.
Figure 8. Hug detection performance.
Figure 9. Data from one participant’s session. The peaks of the dotted line represent the intervals when the hug instruction was displayed.
Figure 10. Sensor values for each applied pressure.
Figure 11. Data from one participant’s session showing average pressure used to detect a hug with the device.
Figure 12. Representation of the robot’s affective expressions: (a) happy with Avatar; (b) sad with Avatar; (c) happy with LED; (d) sad with LED.
Figure 13. Social context for the proposed companion robot Pepita: (Top) as a tangible emoticon; (Bottom) as a presence indicator.
Figure 14. Enhancing the robot’s expressiveness with projected images: (A) avatars to make pointing gestures using the robot in remote communication; (B) scenario of a robot sharing information with multiple users in the same place and remotely.
Table 1. Results of the comparison among huggable robots based on their appearance (SD: standard deviation).

n = 52          Looks Huggable    Looks Easy to Hug    Looks Appealing
                (mean ± SD)       (mean ± SD)          (mean ± SD)
Pepita          3.21 ± 1.04       3.56 ± 1.05          2.77 ± 1.19
Probo           3.10 ± 1.23       3.31 ± 1.12          1.96 ± 0.96
The Huggable    3.88 ± 0.95       3.83 ± 1.01          3.52 ± 1.18
The Hug         2.83 ± 1.05       2.96 ± 1.14          2.42 ± 1.20
Table 2. Mean and standard deviation of participants’ answers rating each stimulus as “Happy” or “Sad”.

n = 26                          Projector                      LED
                                Happy          Sad             Happy          Sad
Score as “Happy” (mean ± SD)    4.73 ± 0.59    1.46 ± 0.80     3.62 ± 0.88    2.19 ± 0.83
Score as “Sad” (mean ± SD)      1.27 ± 0.52    4.35 ± 0.92     2.23 ± 0.85    3.69 ± 0.72
