Review

Familiar and Strange: Gender, Sex, and Love in the Uncanny Valley

Department of Anthropology, University of Montana, Missoula, MT 59812, USA
Multimodal Technol. Interact. 2017, 1(1), 2; https://doi.org/10.3390/mti1010002
Submission received: 30 September 2016 / Revised: 19 December 2016 / Accepted: 21 December 2016 / Published: 4 January 2017
(This article belongs to the Special Issue Love and Sex with Robots)

Abstract
Early robotics research held that increased realism should result in increased positivity of the interactions between people and humanoid robots. However, this turned out to be true only to a certain point, and researchers now recognize that human interactions with highly realistic humanoid robots are often marked by feelings of disgust, fear, anxiety, and distrust. This phenomenon is called the Uncanny Valley. In a world in which Artificial Companions are increasingly likely, and even desired, engineering humanoid robots that avoid the Uncanny Valley is of critical importance. This paper examines theories of the uncanny, and focuses on one in particular—that humans subconsciously appraise robots as potential sexual partners. Drawing from work on love, sexuality, and gender from a variety of fields, this paper speculates on possible futures in a world of intimate companionships between humans and machines.

1. Introduction

In early robotics research, it was commonly assumed that increases in visual and functional realism in anthropomorphic technologies would prompt overtly positive interactions and responses between humans and machines. However, Japanese roboticist Masahiro Mori [1] proposed that this was true only to a certain point, after which increasing realism would result in decidedly negative reactions. He theorized that this negativity was rooted in small imperfections of appearance or movement that became increasingly unsettling as a robot’s overall similarity to the human observer grew. It was troubling, Mori decided, when a machine, acting as an artificial agent, came too close to passing itself off as human and was subsequently “caught” by the perceptual skills of the human involved. Mori named this phenomenon “bukimi no tani,” or the “uncanny valley,” a term that has become synonymous with the constellation of negative emotions and behavioral responses demonstrated by humans in the presence of lifelike machines. Mori plotted the potential interactions between humans and machines on a now-classic graph in which negative responses to humanoid robots emerge as a large valley between two peaks (see Figure 1). By mapping human likeness against familiarity, Mori surmised that the negative reactions to robots in this valley were founded on the disruption of emotional resonance when human subjects discovered that their social partner was an artificial replication. It is worth noting that recent translations have shifted the rendering of the original Japanese term from “familiarity” to “affinity,” and that differences between the two constructs may have implications for the significance of past work [2].
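Mori drew his curve freehand, and no canonical equation for it exists. For readers who want to visualize the shape he described, the following minimal Python sketch plots one hypothetical affinity curve; the functional form, and the placement of the dip near 85% human likeness, are assumptions made purely for illustration and are not taken from Mori's paper.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical uncanny valley curve: affinity rises with human likeness,
# then dips sharply into negative territory near full realism before
# recovering at the fully human end. Shape and parameters are illustrative
# only; Mori drew his original curve freehand.
def affinity(likeness):
    rise = likeness                                            # baseline trend
    valley = 1.3 * np.exp(-((likeness - 0.85) ** 2) / 0.003)   # narrow dip at ~85%
    return rise - valley

x = np.linspace(0.0, 1.0, 500)
plt.plot(x, affinity(x))
plt.axhline(0.0, color="gray", linewidth=0.5)
plt.xlabel("Human likeness")
plt.ylabel("Affinity (shinwakan)")
plt.title("Illustrative uncanny valley curve (not Mori's original data)")
plt.show()
```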
Mori’s proposal of an uncanny valley was strikingly similar to earlier psychoanalytic theories of “uncanniness,” such as the work of Jentsch [3], who held that the uncanny represents intellectual uncertainty, and of Freud [4], who called the uncanny Das Unheimliche—the opposite of the familiar. More recent psychoanalysts, such as Julia Kristeva [5], have continued to refine these ideas; Kristeva notes that the uncanny is much more than the apprehension of ambiguity between self and other. Rather, she claims, the uncanny represents the establishment of abjection—the forceful casting out of the strange in a system of ethics and aesthetics centered on facing familiarity and turning away from difference.

2. Theories of the Uncanny

Decades after Mori, researchers in robotics, animation, and virtual reality still struggle to avoid entering, or “falling into” [6], the uncanny valley with their lifelike machines. Indeed, although the uncanny valley is still a largely hypothetical construct [7], it forms a critical problem in the development of animated movies, video games, websites, and robots. In all of these fields, researchers labor to engineer or design around the uncanny valley, as well as to understand exactly what it is and when and why it emerges. While still resonant with the psychoanalytics of similarity and difference, contemporary research draws from evolutionary theory, neuroscience, cognitive science, and psychology, and suggests a number of theories regarding the perception of uncanniness in realistic robots. I cover these in greater detail below, grouped broadly as they relate to the ideas of familiarity and strangeness.

2.1. Strangeness

The fascination many people feel with humanoid robots speaks to a complex origin for experiences of uncanniness. Recent research indicates that uncanniness may rely on subconscious reactions to stimuli that are “categorically ambiguous” in nature [8,9,10]. Experiments using digitally morphed images suggest that the perception of an unclear distinction between the category of “human” and that of “non-human” causes feelings of fear and anxiety, and might lie at the heart of the uncanny valley [11,12]. These studies draw from the literature on categorical perception, which suggests that the ability to place objects or people into meaningful, discrete groupings [13,14] is critical both to psychological functioning and to smooth social interactions between humans. In short, such research proposes that the presence of categorical boundaries that do “not allow rapid and effortless discrimination of an object on the basis of the observer’s category representations of non-human and human” [9] (p. 126) prompts the negative feelings that Mori first theorized. According to Tondu and Bardou [15], uncanniness may arise as a form of cognitive dissonance between beings that are judged to lack souls (e.g., zombies and robots) and human beings, while for Gray and Wegner [16] it is the inability to clearly distinguish entities with “mind,” and therefore subjective experience, that brings the uncanny valley to life. Ramey [17] similarly proposes that uncanniness results when a quantitative metric attempts to combine two previously distinct categories, such as ‘human’ and ‘machine’.
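Although the cited morphing studies use their own stimulus pipelines, the basic idea of a graded human/non-human continuum is easy to sketch. The Python snippet below cross-dissolves two same-sized face images into a series of intermediate frames; the file names are placeholders, and real studies typically warp facial geometry in addition to blending pixels.

```python
import numpy as np
from PIL import Image

# Sketch of a morph continuum between a robot face and a human face via
# pixel-wise cross-dissolve. Mid-range frames are the "categorically
# ambiguous" stimuli. File names are placeholders, and the two images
# must share the same dimensions.
robot = np.asarray(Image.open("robot_face.png").convert("RGB"), dtype=float)
human = np.asarray(Image.open("human_face.png").convert("RGB"), dtype=float)

for i, alpha in enumerate(np.linspace(0.0, 1.0, 11)):   # 0%, 10%, ..., 100% human
    blend = (1.0 - alpha) * robot + alpha * human
    Image.fromarray(blend.astype(np.uint8)).save(f"morph_{i:02d}.png")
```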
Categorical ambiguity and cognitive dissonance hint at the degree to which behavioral expectations impact human-robot interactions. Indeed, researchers have long known that the ability to predict patterns of behavior in social interactions may be critical to smooth social encounters [18,19]. This idea has been extended to interactions between humans and robots, and some research shows that when robots appear human, they set in motion a series of predictions regarding how they will behave [20]. However, a robot may violate these expectations with even minor behavioral inconsistencies or flaws in the realism of its movements or speech, thereby sounding a subconscious alarm in the brain of the human observer that, although the robot appears human, it is only pretending [10]. For example, Tinwell et al. [21] and Mitchell et al. [22] demonstrate that a mismatch between facial and vocal realism may prompt perceptions of uncanniness. However, it is not only mismatches between behavior and appearance that may trigger this response. Discrepancies between different aspects of a robot’s appearance may also raise the suspicion or distrust of human observers. For example, Blow et al. [23] note that a robot with a highly realistic face and mechanical limbs may precipitate feelings of uncanniness, while MacDorman [24] and Seyama and Nagayama [25] show that when realistic and unrealistic facial features are paired, the response by human observers is negative. In particular, robotic heads and faces may be critical to avoiding the uncanny valley, as facial characteristics and expressions are key to the portrayal of everything from personality to friendliness [26], and some research has shown that uncanny facial features can potentially be eliminated through a process of careful fine-tuning [27].
The lack of predictability often displayed by realistic humanoid robots has other implications, however; most notably, that humanoid robots may be perceived as uncanny because they are somehow corpse-like or reminiscent of someone who is dead or diseased. As noted by Mitchell et al. [22], there is often a “cross-modal mismatch” between aspects of a robot’s appearance and the expectations of the observer. For example, “the visual appearance of [a robotic] hand elicits the tactile expectation that it will feel as warm and soft” (p. 10). The fact that the robot is not warm and soft, but rather cold and stiff, is alarming, and human subjects may experience sensations of uncanniness as feelings of disgust are replaced by the subconscious desire for self-preservation [25]. According to some researchers, this may result from strategies of pathogen avoidance, if the robot is judged to be ill or contagious [28,29], or from danger avoidance [30], if the robot is thought to be a corpse. In either case, human psychological responses are thought to be innately tied to evolutionarily ingrained reactions meant to facilitate survival. According to other researchers, robots, with their often corpse-like texture, temperature, or movements, force human subjects to confront the inevitability of their own deaths. This perspective—called the mortality salience hypothesis—proposes that these subconscious confrontations cause subjects to respond with culturally specific defense mechanisms that enforce their worldviews and protect them from the psychological trauma of facing death [30,31,32,33].

2.2. Familiarity

The theories above highlight the ways in which robots may fail to live up to our expectations or standards of ‘humanness,’ and the degree to which they remain strange. However, the fact that we use standards of humanness at all in order to evaluate our mechanical creations suggests that within this strangeness and difference lies a fundamental recognition of familiarity as well. For example, robots may provoke the response of the “behavioral immune system” [34], which reacts to superficial cues indicating sources of danger such as contagious disease. Ho et al. [33] and Moosa and Ud-Dean [30] note that both danger and pathogen avoidance in interactions with humanoid robots may be based on a subconscious assessment of similarity. In other words, these robots are sufficiently human to demonstrate the superficial cues of illness or disease that trigger an avoidance response. Similarly, according to psychoanalytic theory, the uncanny object represents a category of difference that still contains hints of the self; this object is an attempt at a copy that is realistic enough to imply the erasure of borders and to ultimately threaten the firm boundary that defines the ego [5].
This suggests an underlying perception of relatedness as the basis for uncanniness, a source of affinity that is also demonstrated in applications of evolutionary mate selection theory to this phenomenon. From this perspective, humans engage in an involuntary process of evaluating realistic robots based on the range of aesthetic features used in the selection of potential mates. According to Green et al. [35], whose research demonstrated a complex relationship among attributes of gender, human likeness, aliveness, attractiveness, and sexiness, there are universal or cross-cultural characteristics that are sought out in possible mates. According to MacDorman et al. [36] (p. 696), “human beings have evolved to perceive as attractive potential mates who possess discernable indicators of fertility, hormonal and immune system health, social desirability, and other signs of reproductive fitness”. Some research suggests that particular physical characteristics, such as facial symmetry and proportionality, may reflect such reproductive fitness and may contribute to the process of mate selection. Furthermore, we may have evolved to subconsciously sort individuals as candidates for reproductive partners based on these characteristics. Uncanniness then arises when human subjects are presented with a humanoid robot realistic enough to prompt appraisal as a potential mate, and when the artificial agent is subsequently rejected due to features such as poor skin texture or asymmetrical physiology suggestive of poor health or low fertility [35,36,37,38].
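As a toy illustration of how one such appraisal cue might be quantified, the sketch below scores the left-right symmetry of a roughly centered face image. This is an assumed, heavily simplified stand-in for the landmark-based symmetry measures used in the attractiveness literature, not a method drawn from the cited studies, and the file name is a placeholder.

```python
import numpy as np
from PIL import Image

# Toy bilateral-symmetry score: compare a grayscale face image with its
# left-right mirror. Returns 1.0 for a perfectly symmetric image. A crude
# stand-in for landmark-based symmetry measures used in the literature.
def symmetry_score(path: str) -> float:
    img = np.asarray(Image.open(path).convert("L"), dtype=float) / 255.0
    mirrored = img[:, ::-1]                   # flip along the vertical axis
    return 1.0 - float(np.abs(img - mirrored).mean())

print(f"symmetry: {symmetry_score('candidate_face.png'):.3f}")
```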
Most interesting, however, is research indicating that robots with lifelike features and social tendencies prompt empathetic interactions with human subjects [39]. Indeed, current research lends support to the idea that the process of empathizing with another entity is cognitively bound to perceiving both its appearance and its intentions [40]. MacDorman et al. [33] refer to this as a “shared circuitry for empathy” and note that the degree of humanness exhibited by a robot may influence the activation of the neural pathways that allow human empathizing [41]. Other authors note that ‘human-like’ characteristics beyond appearance may also influence this process; for example, Coeckelbergh [42] suggests that the ability of a robot to mirror human vulnerability may provoke empathy, or “fellow feeling,” in some cases. However, research by Suzuki et al. [43], which compared empathetic responses to perceived pain inflicted on a robotic hand with those to pain inflicted on a human hand, found both similarities and differences in the physiological reactions of human subjects. All of this indicates a potentially complex connection between familiarity and strangeness, and between the ways in which these aspects of robotic appearance and behavior may come together in the context of human perception to shape interactions with humanoid machines. Indeed, based on theories of mate selection, anthropomorphization, and empathy, we might well ask to what extent human subjects are willing to shrug off the differences evoked by realistic robots and engage in intimate, rather than fearful, interactions with them. Is there a place for love, intimacy, or sex in the uncanny valley?

3. Anthropomorphization and Sociality

To begin to answer these questions, we turn to the growing body of literature documenting the degree to which robots are increasingly used in social settings. Indeed, a number of robotic models have been shown to engage in successful social interactions with both children and adults [44,45,46,47,48,49]. While many of these robots are neither realistic nor humanoid (as in the case of Paro, a robotic harp seal that often prompts demonstrations of affection, loving behavior, and admiration from human subjects [50]), they do confirm the degree to which humans find artificial agents to be acceptable social partners. For example, Sung et al. [51] proposed that human subjects developed what (by certain definitions) may be thought of as socially intimate relationships with even their robotic vacuums, and that this simple form of intimacy, which, according to the authors, resulted in a sense of shared responsibility or partnership, increased satisfaction among subjects participating in housekeeping tasks.
This may relate to the inclination humans show for using the mental models developed in interactions with other humans as a basis for sociality with robots [52,53,54,55,56]. In other words, there may be an innate willingness to interact with robots as if they were people rather than machines, and research has repeatedly shown that subjects use models drawn from human-human interactions as a basis for judging the likeability, trustworthiness, and friendliness of robots in social encounters [57,58]. This is potentially founded on the tendency to extend human characteristics, emotions, and intentions to non-human agents, a phenomenon referred to as anthropomorphization, which has been widely documented in the literature [59]. Like the perception of aesthetics and the avoidance of pathogenic entities, the anthropomorphization of artificial agents may be grounded in the evolution of human cognition. According to Cynthia Breazeal [60], the anthropomorphization of robots and other technologies indicates that humans “evolved in a world in which only humans exhibited rich social behavior” (p. 16), and as such we are hard-wired to relate to anything technological that is capable of expressing social tendencies as if it were human. For others, like anthropologist Pat Shipman [61], these interactions are bound to an evolutionary past that included formative relationships with animals—non-human, interactive beings that were the original “living tools.” Kirsten Dautenhahn [62] similarly notes that the future of human-robot interactions may involve socialization in which robots are “personalized” through a training process similar to that undergone by our canine companions. Anthropologists and archaeologists, however, draw attention to the fact that animistic traditions were most likely pervasive in the human evolutionary past, and included belief systems that attributed personality, power, and motivation to inanimate objects of all types [63,64].
For Sherry Turkle [65], on the other hand, robots are unique “relational artifacts” that present themselves as having ‘states of mind’ only because they are designed to do so. Turkle’s perspective hints at the ethics involved in engineering humanoid robots, where “optimal anthropomorphization” [66] is part of a design space that aims to avoid the uncanny valley while still “exploiting” our willingness as a species to interact socially with inanimate objects. Indeed, some research has shown that while robots that are engineered to be realistic are more likely to fall into the uncanny valley, they are also more likely to be perceived as competent social partners than robots that score lower on an anthropomorphic scale [57]. Again, this may relate to a predictable congruity between behavior and appearance, as Goetz et al. [56] showed that both the exterior and demeanor of the robot helped human subjects categorize the types of potential interactions they might have with it; for example, mechanical-looking robots were more likely to be slated for chores or menial tasks than they were for social interaction. It is clear then that aesthetic perception and social perception are not easily separated [67]. This is particularly true as aesthetic appearance has been shown to combine with subtle behavioral cues such as eye contact and attentiveness to facilitate social interactions [68]; in other words, robots must appear as capable of both giving and receiving interaction in order to be judged as acceptable social partners [69] but it is these same qualities that may cause them to be judged as uncanny as well.

4. Gender, Sex, and Love

A critical aspect of human-robot interactions that combines aesthetics and behavior is the attribution of gender to robots by human subjects. Indeed, this may be a pivotal area in the future of robotic engineering for the design of intimate companions. In some studies, gender has been shown to play a role in the interactions between humans, and these human-human gender norms and roles have also been shown to structure expectations in interactions between humans and robots [69,70,71]. For example, research shows that people tend to apply prevailing gender stereotypes to even the simple computer programs with which they work [72]. Interestingly, subjects both attributed gender to the program and made judgments regarding its “friendliness” and “competence” based on these attributions. Woods et al. [73] showed that, in the case of robots more specifically, gender ascription depended on particular physical characteristics of the robot’s model—for example, color and shape—while Robertson [74] demonstrated that the conflation of gender and robotic design began with the engineers themselves, who expressed that “an attribution of female gender requires an interiorized, slender body, and male gender an exteriorized, stocky body” (p. 19). Siegel et al. [70] showed that perceptions of a robot’s gender played into the degree to which participants judged a robot as “credible, trustworthy, or engaging,” a set of qualities that the authors found to be critical to “robotic persuasion”. However, it is not just the gender ascribed to the robot in an interaction that matters, but the gender of the human participant as well. Siegel et al. [70] found that male subjects were more likely than female subjects to donate money to a female robot, while Schermerhorn, Scheutz, and Crowell [75] showed that men were more likely than women to describe robots as human-like and to facilitate social interactions with them. According to Carpenter et al. [71], men were also more comfortable with the presence of a robot in their homes, and more likely than women to be accepting of home service robots in general.
Some recent literature examines the possibility that, in a not-too-distant future, these tendencies for gendering, anthropomorphization, and socialization may come together in the form of robots designed specifically for sexually intimate relationships with humans [76]. David Levy [77], whose work inspired the theme of the current issue, is a formative voice on the topic, urging consideration of the many positive impacts that might result from taking robots as intimate companions. Among these are possibilities of fulfilling intimate or sexual relationships for people with limitations or disabilities that prevent them from having successful interactions with other humans. While Levy’s futuristic scenario of high-tech robotic lovers is still far off, simple versions of his vision already exist. For example, the roboticization of sex dolls and the growth of a brothel and rental service industry that supplies these dolls in place of actual prostitutes (for example, Doll no Mori, a doll escort service in Tokyo) indicate that the practice of ‘sex with robots’ has already begun [78]. Levy is an advocate for such robo-brothels, which he claims will benefit society by reducing social ills such as disease and human trafficking [78]. More importantly, however, Levy speaks to the idea that robots are poised to become loving companions in a more intimate, genuine sense, and that they may provide access to a more perfect form of love than that experienced between many humans [78,79]. Given that we are already having sex with robots, then, the real question is whether “sex with robots is a route to love with robots” [80] (p. 239).
Affective computing is a branch of engineering and robotics research that aims to facilitate the emotional expressivity of advanced technologies [81] and to understand human emotional expression through interaction with artificial systems. In some cases, affective computing research seeks to “evoke loving or amorous reactions from their human users” [79] (p. 398), and researchers in this specific branch work to deepen comprehension of what makes loving relationships between humans and machines possible. That is assuming, of course, that we have some idea of what love actually is, and of how it combines with sexual intimacy to form the basis for human relationships. As noted by Sullins [79], the reasons for engaging in sexual intimacy are themselves complex, rooted not only in the evolution of human sociality and mate selection, but also in motivations ranging from goal attainment to insecurity. Defining amorous love, then, is a difficult problem, related not only to physical intimacy, personal history, and social psychology, but also to the deeply individualized motivations and expectations each person has for engaging in interpersonal relationships of various kinds. According to Levy [77], this complexity is precisely the reason that robots will make highly successful romantic partners—they will be fully programmable and capable of being personalized to display the precise array of emotional, social, and sexual behaviors and feelings that resonate with their individual human companions.
The fact that we not only anthropomorphize humanoid robots, but also gender them and evaluate them as possible mates, suggests that we are well on the way to Levy’s future. However, as noted by Walters et al. [6], while research continues to demonstrate similarities between human-robot encounters and those occurring between humans, there is still clear indication that human-robot interactions and human-human interactions are not qualitatively the same. As the authors state, “as long as robots can still be distinguished from biological organisms, which may be the case for a long time to come, it is unlikely that people will react socially to robots in exactly the same ways that they might react to other humans or other living creatures in comparable contexts” [6] (p. 161). Thus, while both social and sexual encounters between humans and robots are a contemporary reality, we should expect the intimacy underlying these interactions to be fundamentally different from that in human relationships with other biological organisms.
However, we might also question whether this is likely to remain the case. The growing body of research on engineering realistic, humanoid robots designed for social interactions indicates that it is not. Given this, the question becomes whether realistic humanoid robots can be designed around the uncanny valley in order to facilitate true intimacy—can robotic companions become more than just expensive novelties or very expensive sex toys and instead fulfill all of the roles of a true companion? Answering this question in the affirmative rests largely on the evolving expectations of humans as to what constitutes intimacy, sociality, love, and sex and the types of partners we expect to encounter in these various socially embedded scenarios. As noted above, the evolutionary past of our species indicates an innate willingness to form close, social relationships with non-humans, including both artifacts and animals, and an acknowledgement that these non-human companions often have motivations and intentions of their own.
Indeed, the fact that many research participants simultaneously confirm that robots are neither real nor alive but react as if they are both [65] indicates that a propensity for animism may be as much a part of the human evolutionary future as it is a part of our past [61]. Based on this, we might expect an ongoing ‘cognitive engineering’ of sociality, even in situations, such as human-robot interactions, where truly reciprocal interactions do not actually exist. As noted by Gilbert and Jones [82], however, this is not an anomalous process even in interactions between humans: research subjects were shown to construct feelings of love in other people even when provided with clear evidence that the love they were constructing did not exist. Thus, while humanoid robots in the future might certainly be realistic enough to pass as human, the question “when your robotic lover tells you that it loves you should you believe it?” [79] (p. 398) still resonates. Rather than simply assuming that we will accept the robot’s love as genuine based on how we react to similar statements made by other humans, a more pressing question might be whether we will find the very expression of robotic love uncanny, given the underlying knowledge that the love is artificial.
From another perspective, however, the uncanny valley is “simply an emotional reaction which may be subject to change over time” [27] (p. 3). Thus, just as we humans have grown accustomed to hearing our own voices on radios and through telephones, and to seeing ourselves portrayed on televisions and movie screens, so too may we grow increasingly comfortable with robots that look, act, and talk like us. Indeed, as the discrepancies between the human and the humanoid shrink, and as our encounters with social robots increase, we might well expect intimacy to emerge simply as a natural result of growing contact and familiarity, rather than as the result of successfully engineering away difference. In short, we may come to accept aspects of robots’ strangeness as a part of who or what these robots are, rather than perceiving dissimilarity as a flaw or problem of design. Given the difficulties we already have in drawing boundaries around concepts such as love, intimacy, and companionship, the ongoing breakdown of the distinction between human and machine may result in relationships of all types between humans and artificial companions. After all, it may not matter much whether a robot actually possesses traits such as agency, intelligence, or the neuro-biological capacities for “real” love. What matters most is that we often perceive them to [83], and based on this perception we are almost certain to begin to love them in return.

5. Conclusions

The research presented above demonstrates that human interactions with robots are complex and often counterintuitive. We, as a species, tend to anthropomorphize even the simplest technologies and to show empathy toward the very machines that we perceive as the most uncanny and unnerving. Despite admitting that robots are neither alive nor real, we still tend to attribute human characteristics, such as gender, to them, and in the case of humanoid robots, we subconsciously evaluate their potential as reproductive partners. Based on the juxtaposition of similarities and differences between us and them, our confrontations with interactive, realistic machines may often be strange, or even uncanny. However, this paper suggests that the relationships we form with robots are equally likely to be familiar, grounded in a co-evolutionary past in which humans formed close relationships with a number of non-human things. It remains to be seen whether the future of human intimacy includes robotic partners and lovers on a large scale. However, this paper proposes that such intimate relationships may emerge inside the uncanny valley, inasmuch as uncanniness represents a space where familiarity and strangeness rub together, both frightening and fascinating us and pulling us forward into a future where relationships are not bounded by biology.

Acknowledgments

Cheyenne Laue would like to thank the University of Montana Department of Anthropology and the International Chapter of the P.E.O. Sisterhood for financial support during the writing of this manuscript.

Conflicts of Interest

The author declares no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Mori, M. The Uncanny Valley. Energy 1970, 7, 33–35. [Google Scholar]
  2. Mori, M.; MacDorman, K.F.; Kageki, N. The uncanny valley [from the field]. IEEE Robot. Autom. Mag. 2012, 19, 98–100. [Google Scholar] [CrossRef]
  3. Jentsch, E. On the Psychology of the Uncanny. Angelaki J. Theor. Humanit. 1996, 2, 7–16. [Google Scholar] [CrossRef]
  4. Freud, S. The Uncanny; The Standard Edition of the Complete Psychological Works of Sigmund Freud 17; Imago: London, UK, 1919; pp. 219–252. [Google Scholar]
  5. Kristeva, J. Powers of Horror: An Essay on Abjection; Columbia University Press: New York, NY, USA, 1982; p. 71. [Google Scholar]
  6. Walters, M.L.; Syrdal, D.S.; Dautenhahn, K.; Te Boekhorst, R.; Koay, K.L. Avoiding the uncanny valley: Robot appearance, personality and consistency of behavior in an attention-seeking home scenario for a robot companion. Auton. Robot. 2008, 24, 159–178. [Google Scholar] [CrossRef] [Green Version]
  7. Brenton, H.; Gillies, M.; Ballin, D.; Chatting, D. The uncanny valley: Does it exist? In Proceedings of the Conference of Human Computer Interaction, Workshop on Human Animated Character Interaction, Las Vegas, NV, USA, 22–27 July 2005.
  8. Burleigh, T.J.; Schoenherr, J.R.; Lacroix, G.L. Does the Uncanny Valley Exist? An Empirical Test of the Relationship between Eeriness and the Human Likeness of Digitally Created Faces. Comput. Hum. Behav. 2013, 29, 759–771. [Google Scholar] [CrossRef]
  9. Cheetham, M.; Pascal, S.; Lutz, J. The Human Likeness Dimension of the “Uncanny Valley Hypothesis”: Behavioral and Functional MRI Findings. Front. Hum. Neurosci. 2011, 5, 126. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Saygin, A.P.; Thierry, C.; Ishiguro, H.; Driver, J.; Frith, C. The Thing That Should Not Be: Predictive Coding and the Uncanny Valley in Perceiving Human and Humanoid Robot Actions. Soc. Cogn. Affect. Neurosci. 2013, 7, 413–422. [Google Scholar] [CrossRef] [PubMed]
  11. Cheetham, M.; Pavlovic, I.; Jordan, N.; Suter, P.; Jancke, L. Category Processing and the Human Likeness Dimension of the Uncanny Valley Hypothesis: Eye-tracking Data. Front. Psychol. 2013, 4, 108. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Yamada, Y.; Kawabe, T.; Ihaya, K. Categorization Difficulty is Associated with Negative Evaluation in the “Uncanny Valley” Phenomenon. Jpn. Psychol. Res. 2013, 55, 20–32. [Google Scholar] [CrossRef]
  13. Calder, A.J. Categorical Perception of Morphed Facial Expressions. Vis. Cogn. 1996, 3, 81–118. [Google Scholar] [CrossRef]
  14. Harnad, S.R. (Ed.) Categorical Perception: The Groundwork of Cognition; Cambridge University Press: Cambridge, UK, 1990.
  15. Tondu, B.; Bardou, N. A new interpretation of Mori’s uncanny valley for future humanoid robots. Int. J. Robot. Autom. 2011, 26, 337. [Google Scholar] [CrossRef]
  16. Gray, K.; Wegner, D.M. Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition 2012, 125, 125–130. [Google Scholar] [CrossRef] [PubMed]
  17. Ramey, C.H. The uncanny valley of similarities concerning abortion, baldness, heaps of sand, and humanlike robots. In Proceedings of the Views of the Uncanny Valley Workshop: IEEE-RAS International Conference on Humanoid Robots, Tsukuba, Japan, 5–7 December 2005; pp. 8–13.
  18. Lamb, M.E. The development of social expectations in the first year of life. In Infant Social Cognition: Empirical and Theoretical Considerations; Psychology Press: East Sussex, UK, 1981; pp. 155–175. [Google Scholar]
  19. Landis, D.; Triandis, H.C.; Adamopoulos, J. Habit and behavioral intentions as predictors of social behavior. J. Soc. Psychol. 1978, 106, 227–237. [Google Scholar] [CrossRef]
  20. Eyssel, F.; Kuchenbrandt, D.; Bobinger, S. Effects of anticipated human-robot interaction and predictability of robot behavior on perceptions of anthropomorphism. In Proceedings of the 6th International Conference on Human-Robot Interaction ACM 2011, Lausanne, Switzerland, 6–9 March 2011; pp. 61–68.
  21. Tinwell, A.; Grimshaw, M.; Nabi, D.A.; Williams, A. Facial expression of emotion and perception of the Uncanny Valley in virtual characters. Comput. Hum. Behav. 2011, 27, 741–749. [Google Scholar] [CrossRef]
  22. Mitchell, W.J.; Szerszen, K.A.; Lu, A.S.; Schermerhorn, P.W.; Scheutz, M.; MacDorman, K.F. A mismatch in the human realism of face and voice produces an uncanny valley. I-Percept. 2011, 2, 10–12. [Google Scholar] [CrossRef] [PubMed]
  23. Blow, M.; Dautenhahn, K.; Appleby, A.; Nehaniv, C.L.; Lee, D. The art of designing robot faces: Dimensions for human-robot interaction. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction 2006, Salt Lake City, UT, USA, 2–3 March 2006; pp. 331–332.
  24. MacDorman, K.F. Subjective ratings of robot video clips for human likeness, familiarity, and eeriness: An exploration of the uncanny valley. In Proceedings of the ICCS/CogSci-2006 Long Symposium: Toward Social Mechanisms of Android Science, Vancouver, BC, Canada, 26 July 2006; pp. 26–29.
  25. Seyama, J.; Nagayama, R.S. The uncanny valley: Effect of realism on the impression of artificial human faces. Presence 2007, 16, 337–351. [Google Scholar] [CrossRef]
  26. DiSalvo, C.F.; Gemperle, F.; Forlizzi, J.; Kiesler, S. All robots are not created equal: The design and perception of humanoid robot heads. In Proceedings of the 4th Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques ACM, London, UK, 25–28 June 2002; pp. 321–326.
  27. Hanson, D.; Olney, A.; Prilliman, S.; Mathews, E.; Zielke, M.; Hammons, D.; Stephanou, H. Upending the uncanny valley. In Proceedings of the National Conference on Artificial Intelligence, Pittsburgh, PA, USA, 9–13 July 2005.
  28. Clasen, M. The Anatomy of the Zombie: A Bio-Psychological Look at the Undead Other. Otherness Essays Stud. 2010, 1, 1–23. [Google Scholar]
  29. Rozin, P.; Fallon, A.P. A perspective on disgust. Psychol. Rev. 1987, 94, 23–41. [Google Scholar] [CrossRef] [PubMed]
  30. Moosa, M.M.; Minhaz Ud-Dean, S.M. Danger avoidance: An evolutionary explanation of the uncanny valley. Biol. Theory 2010, 5, 12–14. [Google Scholar] [CrossRef]
  31. MacDorman, K.F. Mortality Salience and the Uncanny Valley. In Proceedings of the 2005 5th IEEE-RAS International Conference on Humanoid Robots, Tsukuba, Japan, 5–7 December 2005.
  32. MacDorman, K.F.; Ishiguro, H. Subjective Ratings of Robot Video Clips for Human Likeness, Familiarity, and Eeriness: An exploration of the uncanny valley. In Proceedings of the ICCS/CogSci-2006 Long Symposium: Toward Social Mechanisms of Android Science, Vancouver, BC, Canada, 26 July 2006.
  33. Ho, C.C.; MacDorman, K.F.; Pramono, Z.D. Human emotion and the uncanny valley: A GLM, MDS, and Isomap analysis of robot video ratings. In Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction, Amsterdam, The Netherlands, 12–15 March 2008; pp. 169–176.
  34. Schaller, M.; Park, J.H. The behavioral immune system (and why it matters). Curr. Dir. Psychol. Sci. 2011, 20, 99–103. [Google Scholar] [CrossRef]
  35. Green, R.D.; MacDorman, K.F.; Ho, C.-C.; Vasudevan, S. Sensitivity to the proportions of faces that vary in human likeness. Comput. Hum. Behav. 2008, 24, 2456–2474. [Google Scholar] [CrossRef]
  36. MacDorman, K.F.; Green, R.D.; Ho, C.-C.; Koch, C.T. Too Real for Comfort? Uncanny Responses to Computer Generated Faces. Comput. Hum. Behav. 2009, 25, 695–710. [Google Scholar] [CrossRef]
  37. Donovan, J.M. Facial Attractiveness: Evolutionary, Cognitive, and Social Perspectives; Ablex Publishing: Westport, CT, USA, 2002. [Google Scholar]
  38. Rhodes, G. The Evolutionary Psychology of Facial Beauty. Annu. Rev. Psychol. 2006, 57, 199–226. [Google Scholar] [CrossRef] [PubMed]
  39. Breazeal, C. Emotion and sociable humanoid robots. Int. J. Hum. Comput. Stud. 2003, 59, 119–155. [Google Scholar] [CrossRef]
  40. Krach, S.; Hegel, F.; Wrede, B.; Sagerer, G.; Binkofski, F.; Kircher, T. Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLoS ONE 2008, 3, e2597. [Google Scholar] [CrossRef] [PubMed]
  41. Chaminade, T.; Hodgins, J.; Kawato, M. Anthropomorphism influences perception of computer-animated characters’ actions. Soc. Cogn. Affect. Neurosci. 2007, 2, 206–216. [Google Scholar] [CrossRef] [PubMed]
  42. Coeckelbergh, M. Artificial companions: Empathy and vulnerability mirroring in human-robot relations. In Studies in Ethics, Law, and Technology; Berkeley Electronic Press: Berkeley, CA, USA, 2010; Volume 4. [Google Scholar]
  43. Suzuki, Y.; Galli, L.; Ikeda, A.; Itakura, S.; Kitazaki, M. Measuring empathy for human and robot hand pain using electroencephalography. Sci. Rep. 2015, 5, 15924. [Google Scholar] [CrossRef] [PubMed]
  44. Sabanovic, S.; Michalowski, M.P.; Simmons, R. Robots in the wild: Observing human-robot social interaction outside the lab. In Proceedings of the 9th IEEE International Workshop on Advanced Motion Control, Istanbul, Turkey, 27–29 March 2006; pp. 596–601.
  45. Broekens, J.; Heerink, M.; Rosendal, H. Assistive social robots in elderly care: A review. Gerontechnology 2009, 8, 94–103. [Google Scholar] [CrossRef]
  46. Tapus, A.; Peca, A.; Aly, A.; Pop, C.; Jisa, L.; Pintea, S.; Rusu, A.S.; David, D.O. Children with autism social engagement in interaction with Nao, an imitative robot—A series of single case experiments. Interact. Stud. 2012, 13, 315–347. [Google Scholar] [CrossRef]
  47. Breazeal, C. Toward sociable robots. Robot. Auton. Syst. 2003, 42, 167–175. [Google Scholar] [CrossRef]
  48. Breazeal, C.; Takanishi, A.; Kobayashi, T. Social robots that interact with people. In Springer Handbook of Robotics; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1349–1369. [Google Scholar]
  49. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A survey of socially interactive robots. Robot. Auton. Syst. 2003, 42, 143–166. [Google Scholar] [CrossRef]
  50. Kidd, C.D.; Taggart, W.; Turkle, S. A sociable robot to encourage social interaction among the elderly. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, FL, USA, 15–19 May 2006; pp. 3972–3976.
  51. Sung, J.-Y.; Guo, L.; Grinter, R.E.; Christensen, H.I. “My Roomba is Rambo”: Intimate home appliances. In International Conference on Ubiquitous Computing; Springer: Berlin/Heidelberg, Germany, 2007; pp. 145–162. [Google Scholar]
  52. Lee, S.-L.; Lau, I.Y.-M.; Kiesler, S.; Chiu, C.-Y. Human mental models of humanoid robots. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Shatin, China, 5–9 July 2005; pp. 2767–2772.
  53. Powers, A.; Kiesler, S. The advisor robot: Tracing people’s mental model from a robot’s physical attributes. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, Salt Lake City, UT, USA, 2–3 March 2006; pp. 218–225.
  54. Phillips, E.; Ososky, S.; Grove, J.; Jentsch, F. From tools to teammates toward the development of appropriate mental models for intelligent robots. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Las Vegas, NV, USA, 19–23 September 2011; Volume 55, pp. 1491–1495.
  55. Dautenhahn, K.; Woods, S.; Kaouri, C.; Walters, M.L.; Koay, K.L.; Werry, I. What is a robot companion-friend, assistant or butler? In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 1192–1197.
  56. Goetz, J.; Kiesler, S.; Powers, A. Matching robot appearance and behavior to tasks to improve human-robot cooperation. In Proceedings of the 12th IEEE International Workshop on Robot and Human Interactive Communication (ROMAN 2003), Millbrae, CA, USA, 31 October–2 November 2003; pp. 55–60.
  57. Schaefer, K.E.; Sanders, T.L.; Yordon, R.E.; Billings, D.R.; Hancock, P.A. Classification of robot form: Factors predicting perceived trustworthiness. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Boston, MA, USA, 22–26 October 2012; Volume 56, pp. 1548–1552.
  58. Bartneck, C.; Kulić, D.; Croft, E.; Zoghbi, S. Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int. J. Soc. Robot. 2009, 1, 71–81. [Google Scholar] [CrossRef]
  59. Reeves, B.; Nass, C. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places; Center for the Studies of Language and Information Publications: Stanford, CA, USA, 1996. [Google Scholar]
  60. Breazeal, C. Designing Sociable Robots; MIT Press: Cambridge, MA, USA, 2004. [Google Scholar]
  61. Shipman, P. The Animal Connection; Norton: New York, NY, USA, 2011. [Google Scholar]
  62. Dautenhahn, K. Robots we like to live with?!—A developmental perspective on a personalized, life-long robot companion. In Proceedings of the 13th IEEE International Workshop on Robot and Human Interactive Communication (ROMAN 2004), Okayama, Japan, 20–22 September 2004; pp. 17–22.
  63. Brown, L.A.; Walker, W.H. Prologue: Archaeology, animism and non-human agents. J. Archaeol. Method Theory 2008, 15, 297–299. [Google Scholar] [CrossRef]
  64. Alberti, B.; Bray, T.L. Introduction. Camb. Archaeol. J. 2009, 19, 337–343. [Google Scholar] [CrossRef]
  65. Turkle, S. Authenticity in the Age of Computers. In Machine Ethics; Anderson, M., Anderson, S.L., Eds.; Cambridge University Press: New York, NY, USA, 2011. [Google Scholar]
  66. Duffy, B.R. Anthropomorphism and the social robot. Robot. Auton. Syst. 2003, 42, 177–190. [Google Scholar] [CrossRef]
  67. Tondu, B.; Bardou, N. Aesthetics and robotics: Which form to give to the human-like robot? World Acad. Sci. Eng. Technol. 2009, 58, 650–657. [Google Scholar]
  68. Miyauchi, D.; Sakurai, A.; Nakamura, A.; Kuno, Y. Active eye contact for human-robot communication. In Proceedings of the CHI’04 Extended Abstracts on Human Factors in Computing Systems, Vienna, Austria, 24–29 April 2004; pp. 1099–1102.
  69. Dautenhahn, K. Socially intelligent robots: Dimensions of human–robot interaction. Philos. Trans. R. Soc. B Biol. Sci. 2007, 362, 679–704. [Google Scholar] [CrossRef] [PubMed]
  70. Siegel, M.; Breazeal, C.; Norton, M.I. Persuasive robotics: The influence of robot gender on human behavior. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS’09), St. Louis, MO, USA, 10–15 October 2009; pp. 2563–2568.
  71. Carpenter, J.; Davis, J.M.; Erwin-Stuart, N.; Lee, T.R.; Bransford, J.D.; Vye, N. Gender representation and humanoid robots designed for domestic use. Int. J. Soc. Robot. 2009, 1, 261–265. [Google Scholar] [CrossRef]
  72. Nass, C.; Moon, Y. Machines and mindlessness: Social responses to computers. J. Soc. Issues 2000, 56, 81–103. [Google Scholar] [CrossRef]
  73. Woods, S.; Dautenhahn, K.; Schulz, J. Exploring the design space of robots: Children’s perspectives. Interact. Comput. 2006, 18, 1390–1418. [Google Scholar] [CrossRef]
  74. Robertson, J. Gendering humanoid robots: Robo-sexism in Japan. Body Soc. 2010, 16, 1–36. [Google Scholar] [CrossRef]
  75. Schermerhorn, P.; Scheutz, M.; Crowell, C.R. Robot social presence and gender: Do females view robots differently than males? In Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction, Amsterdam, The Netherlands, 12–15 March 2008; pp. 263–270.
  76. Yeoman, I.; Mars, M. Robots, men and sex tourism. Futures 2012, 44, 365–371. [Google Scholar] [CrossRef]
  77. Levy, D. Love and Sex with Robots; Harper Collins: New York, NY, USA, 2009. [Google Scholar]
  78. Levy, D. The ethics of robot prostitutes. In Robot Ethics: The Ethical and Social Implications of Robotics; MIT Press: Cambridge, MA, USA, 2012; pp. 223–231. [Google Scholar]
  79. Sullins, J.P. Robots, love, and sex: The ethics of building a love machine. IEEE Trans. Affect. Comput. 2012, 3, 398–409. [Google Scholar] [CrossRef]
  80. Whitby, B. Do you want a robot lover? The ethics of caring technologies. In Robot Ethics: The Ethical and Social Implications of Robotics; Lin, P., Abney, K., Bekey, G.A., Eds.; MIT Press: Cambridge, MA, USA, 2011; p. 233. [Google Scholar]
  81. Tao, J.; Tan, T. Affective computing: A review. In International Conference on Affective Computing and Intelligent Interaction; Springer: Berlin/Heidelberg, Germany, 2005; pp. 981–995. [Google Scholar]
  82. Gilbert, D.T.; Jones, E.E. Perceiver-Induced Constraint: Interpretations of Self-Generated Reality. J. Personal. Soc. Psychol. 1986, 50, 269–280. [Google Scholar] [CrossRef]
  83. Shaw-Garlock, G. Loving machines: Theorizing human and sociable-technology interaction. In International Conference on Human-Robot Personal Relationship; Springer: Berlin/Heidelberg, Germany, 2010; pp. 1–10. [Google Scholar]
Figure 1. The Uncanny Valley.
