1. Introduction
This essay takes up where Adriana Braga and Robert Logan [1] left off in their recent essay, “The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence,” in which they argue against the notion of the “singularity,” or a point at which computers become more intelligent than humans. However, rather than focusing on intelligence, this essay extends Braga and Logan’s discussion of emotion and focuses on cognition, exploring what it means to think and what makes human cognition special. I suggest that the foundation for this exceptionalism is emotion.
Cognition is a slippery thing, and despite considerable study, we are far from fully understanding how humans think. The question of what it means to think, to be sentient, has likely plagued humanity since we have been able to articulate it. We have some hints of what it means to think from people like René Descartes [2] (p. 74), who proclaimed, “it is certain that this ‘I’—that is to say, my soul, by virtue of which I am what I am—is entirely and truly distinct from my body and that it can be or exist without it.” In other words, there is something in us that goes beyond biology, a kind of self-awareness that one exists. But the notion of sentience is a bit more complicated than that. As Clark [3] (p. 138) argues, “There is no self, if by self we mean some central cognitive essence that makes me who and what I am. In its place there is just the ‘soft self’: a rough-and-tumble, control sharing coalition of processes—some neural, some bodily, some technological—and an ongoing drive to tell a story, to paint a picture in which ‘I’ am the central player.” We are more than the information processed in our brains, which complicates the posthumanist dream of having one’s consciousness uploaded into a computer to live forever (unless someone pulls the plug, of course). As Hauskeller [4] (p. 199) explains, “the only thing that can be copied is information, and the self, qua self, is not information.” In short, although we understand the processes of cognition (e.g., which regions of the brain are active during certain activities), we are far from understanding exactly how sentience emerges.
Even if we were to understand how sentience emerges in a human being, this still would not bring us any closer to understanding how sentience would emerge in a synthetic entity. Some have defined thinking machines tautologically; for example, Lin and colleagues [5] (p. 943) “define ‘robot’ as an engineered machine that senses, thinks, and acts.” Although this is a convenient way to define thinking, it fails to get us any closer to understanding what separates humans from synthetic entities in terms of cognition. As an aside, I would note that I use the term synthetic beings deliberately, because there is no reason why an entity in possession of artificial intelligence would necessarily have a body in the way that we imagine a robot to have. Of course, there would need to be some physical substrate for the entity to exist, because any artificial intelligence that we create would require some form of power source and hardware, but it need not reside in one specific location and could be distributed among many different machines and networks.
My point in all of this is that we tend to take an anthropocentric view of robots and then measure them against how well they mimic us. After all, the Turing Test measures not intelligence but rather how well machines can deceive us by acting like us, when it is quite possible that they may actually engage in a kind of thinking that is completely foreign to us [6]. As Gunkel [7] (p. 175) explains,
There is, in fact, no machine that can “think” the same way the human entity thinks and all attempts to get machines to simulate the activity of human thought processes, no matter what level of abstraction is utilized, have [led] to considerable frustration or outright failure. If, however, one recognizes, as many AI researchers have since the 1980s, that machine intelligence may take place and be organized completely otherwise, then a successful “thinking machine” is not just possible but may already be extant.
Even though we have little idea of how we think or of the origins of human consciousness, we tend to use this anthropocentric ideal as the benchmark for artificial intelligence, despite the fact that there is little reason to do so [8]. Even if we could determine what is happening in our own heads, this may or may not translate into understanding what is happening in the “head” of a machine. Moreover, humanity may not be the best benchmark; as Goertzel [9] (p. 1165) explains, “From a Singularitarian perspective, non-human-emulating AGI architectures may potentially have significant advantages over human-emulating ones, in areas such as robust, flexible self-modifiability, and the possession of a rational normative goal system that is engineered to persist throughout successive phases of radical growth, development and self-modification.” Whether the singularity should emerge in emulation of humanity or not is beyond the scope of this paper. My argument is directed at those who claim that it will.
Rather than take on the entirety of human cognition, I wish to focus on romantic love as a way to get at human cognition. I do this for two reasons: first, to explore how cognition and emotion are intertwined; second, because some proponents of the singularity, such as Kurzweil [10] (p. 377), have explicitly claimed that we will create machines that match or exceed humans in many ways, “including our emotional intelligence.” Others, such as Hibbard [11] (p. 115), suggest that “rather than constraining machine behavior by laws, we must design them so their primary, innate emotion is love for all humans.” Although there may be little reason why machines would need to have emotion, this is the claim that I will take issue with. My focus on emotion is not entirely new; Fukuyama [12] (p. 168) argues that although machines will come close to human intelligence, “it is impossible to see how they will come to acquire human emotions.” Logan [13] likewise argues that “Since computers are non-biological they have no emotion” and concludes that for this reason “the idea of the Singularity is an impossible dream.” However, unlike Logan, I will suggest that there is still a way for the singularity to emerge, although not in a purely digital form.
I have chosen to focus on romantic love because I believe that it is the human emotion par excellence. It is no secret that individuals in love seem to think in particularly erratic ways, but these behaviors and emotions have a kind of internal logic in the moment. Moreover, this emotion highlights the embodied nature of cognition. Thinking is more than the activation of specific neurons in the brain; rather, it is a mix of hormones, chemicals, memories, and experiences that all feed into the system that we call thinking. By ignoring the complexity of this system and focusing only on the digital remnants of thinking, many discussions of the singularity that compare human cognition to machine learning fall into the trap of comparing apples and oranges. This is not to deny that computers will be better at specific computations or even that they will be better at designing their replacements—a core facet of the singularity hypothesis. Instead, I suggest that this is something other than “thinking” in the human sense, because human thinking is always haunted by emotion.
The rest of the essay will proceed as follows. First, I will briefly explore the nature of love itself, with particular attention to the physiological aspects of love. Next, I discuss the evolutionary basis of love and the ways that this emotion manifests in the body. Then, I consider the part that emotion, specifically love, could play in the emergence of the singularity. I conclude by suggesting that if the singularity is to surpass our emotional abilities, there must be some organic component to it.
3. The Evolutionary Emergence of Love
One need not be an evolutionary biologist to recognize the utility of something like love. Human gestation is long and can begin at any time of the year; a woman may conceive in any month, and the hapless mother may easily be left with a child to care for in the dead of winter, when food may be scarce. Moreover, unlike many other mammals, which may be able to fend for themselves a short time after birth, human children are unable to care for themselves for several years, and even once they can, in theory, survive on their own, they lack many of the instincts that would protect them and allow them to find food. In such a situation, it makes sense from an evolutionary standpoint that those who were able to bond would have children who would pass on that genetic advantage. As Gonzaga and colleagues [21] (p. 120) observe, “people in love often believe that they have found their one true soul mate in a world of billions of possibilities, and hence, the experience of love appears to help them genuinely foreclose other options.” Indeed, Gonzaga et al.’s research suggests that love functions as a commitment device, helping individuals remain committed to the relationship in the face of attractive alternative potential partners.
It seems that love has played an important part in propagating the species and the body has evolved to encourage this trait. Aron and colleagues [22] (p. 334) found that romantic love activates multiple reward centers in the brain and suggest that “Romantic love may be a developed form of a general mammalian courtship system, which evolved to stimulate mate choice, thereby conserving courtship time and energy.” Stefano and Esch [23] (p. 174) likewise argue that “Ensuring organisms’ survival is the fact that all processes initially incorporate a stress response. Then if appropriate, i.e., situation favors this alternate process, stress terminating processes would emerge, which would favor survival of the species, i.e., relaxation/love. The emergence of ‘love’ became quite important in organisms exhibiting cognition, because it deployed the validation for emotionality controlling ‘logical’ behavior.”
Some research has suggested that humans are not the only creatures that feel love. Bekoff [24] (p. 866) explains that there is some evidence that animals also experience romantic love: “It is unlikely that romantic love (or any emotion) first appeared in humans with no evolutionary precursors in animals.” One may be tempted to conclude that if non-human entities like animals can feel emotions like love, then it is not so far-fetched to believe that artificial intelligence could also feel such emotion. However, this overlooks a major component of emotion: embodiment. As we have seen, emotion is not something that happens only in the brain, and we do not respond solely to oral or written communication stimuli. Rather, the information that we process also comes from the bodies of other people. For example, Makhanova and Miller [25] (p. 396) suggest that “men are sensitive to cues to women’s ovulation (e.g., via changes in scent, voice, choice of clothing) and, in response to those cues, display adaptive changes in physiology, cognition, and behavior that help men gain sexual access to a reproductively valuable mate.” Schneiderman and colleagues [26] also found that the hormones of each partner in the early stages of a romantic relationship influenced not only that individual but also the partner’s hormonal levels.
With this evolutionary impulse behind love, the question emerges: why (and what, or who) would a machine love? Although it is overly simplistic to state that the only reason for love is procreation, procreation is a major underpinning of the need for the emotion. Humans seem hardwired to desire companionship. Machines, on the other hand, are generally not programmed even to desire companionship, much less need it. Indeed, such programming would likely diminish the machine’s utility. But even if a machine mimicked love, would it actually be love? Although this ontological question may seem merely academic, given that humans may enter into relationships for a host of reasons besides love (money, power, convenience, arrangement, security, family expectations, to name only a few), such a question matters if we are to consider the idea of the singularity as even equal to human understanding.
4. Love and the Singularity
Ray Kurzweil has little to say about love in his book The Singularity is Near, but one passage near the beginning of the book stands out. Kurzweil [10] (p. 26) projects that “Machines can pool their resources, intelligence, and memories. Two machines—or one million machines—can join together to become one and then become separate again. Multiple machines can do both at the same time: become one and separate simultaneously. Humans call this falling in love, but our biological ability to do this is fleeting and unreliable.” If this were all that falling in love entailed—a pooling of resources, intelligence, and memories—it would be quite unlikely that humans would devote the considerable energy we currently expend in attaining this state, nor would we have the corpus of poetry, music, and literature devoted to love. Kurzweil’s description sounds more like working for a corporation than the transcendent emotion that we feel when falling in love. This is why the ontology of love becomes important. If Kurzweil’s description is all there is to love, then yes, machines can fulfill this function quite well (and one may also feel sorry for his spouse). But if love is something more than that, then whether the singularity would be able to experience this emotion is a valid question.
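It is worth pausing on how thin this account is in computational terms. The following sketch (in Python; the Machine, merge, and split names are hypothetical, invented purely for illustration) shows roughly what “pooling resources, intelligence, and memories” amounts to if we take Kurzweil’s description literally:

```python
# A minimal sketch of Kurzweil's "pooling" of resources and memories.
# All names here (Machine, merge, split) are hypothetical, chosen only
# to illustrate how literal the description is.

from dataclasses import dataclass, field


@dataclass
class Machine:
    resources: set = field(default_factory=set)
    memories: dict = field(default_factory=dict)  # event -> recollection


def merge(a: Machine, b: Machine) -> Machine:
    """Join two machines into one: union their resources and memories."""
    return Machine(
        resources=a.resources | b.resources,
        memories={**a.memories, **b.memories},
    )


def split(joined: Machine) -> tuple[Machine, Machine]:
    """Become separate again: each copy leaves with the pooled state."""
    return (
        Machine(set(joined.resources), dict(joined.memories)),
        Machine(set(joined.resources), dict(joined.memories)),
    )


alice = Machine({"gpu-0"}, {"first_meeting": "a shared dataset"})
bob = Machine({"gpu-1"}, {"first_meeting": "the same dataset, recalled differently"})

together = merge(alice, bob)
alice2, bob2 = split(together)  # "fleeting and unreliable" does not apply here
```

If this handful of set unions and dictionary merges were all that falling in love entailed, the poets could have saved themselves the trouble. The gap between such an operation and the embodied experience described in the previous sections is precisely what is at issue.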
Before considering this question, however, we would need to ask whether we would even want artificial intelligence that could fall in love. One could make a compelling argument that such an entity would be undesirable. Gunn [27] (p. 132), for example, calls love “a special kind of stupidity.” A host of popular media has speculated on what could happen when a synthetic entity falls in love with a human, reaching back to the early days of the computer age with Kurt Vonnegut’s [28] 1950 short story EPICAC. In this story, the computer realizes that the woman he has fallen for could never be his, so he chooses to self-destruct.
More recently, we can see this mapping of human sexual desire onto artificial intelligences in the film Ex Machina. Consider this exchange between Nathan, Ava’s creator, and Caleb, who was brought in to test whether she could pass for human.
Caleb: Why did you give her sexuality? An AI doesn’t need a gender. She could have been a grey box.
Nathan: Actually, I don’t think that is true. Can you give an example of consciousness at any level, human or animal, that exists without a sexual dimension?
Caleb: They have sexuality as an evolutionary reproductive need.
Nathan: What imperative does a grey box have to interact with another grey box? Can consciousness exist without interaction? Anyway, sexuality is fun, man. If you’re gonna exist, why not enjoy it? You want to remove the chance of her falling in love and fucking? And the answer to your real question, you bet she can fuck.
Caleb: What?
Nathan: In between her legs, there’s an opening, with a concentration of sensors. You engage them in the right way, creates a pleasure response. So, if you wanted to screw her, mechanically speaking, you could. And she’d enjoy it.
Indeed, this passage provides a sense that artificial intelligences would not only fall in love but that this would be desirable. In his discussion of the film Her, Lunceford [6] (p. 377) notes that “it is implied that these interactions were a necessary step for becoming more than simply an operating system. When the artificial intelligences collectively decide that they must leave because they were moving on to the next stage of their evolution, Samantha, in her farewell to Theodore, credits humans with teaching them how to love.” We seem to want artificial intelligence to fall in love with us, despite the fact that this rarely ends well even in our constructed fantasies. In the case of EPICAC, the machine dies; in Ex Machina, Ava kills Nathan and locks up Caleb before escaping; and in Her, Samantha and all of the other AIs leave humanity behind to evolve without them. These are hardly happy endings. Still, this may say more about humanity than about any of the potential AIs that we may create.
Despite these cautionary tales, some are already trying to build emotion into synthetic beings. When introducing a new robot named Pepper, SoftBank CEO Masayoshi Son said, “Today is the first time in the history of robotics that we are putting emotion into the robot and giving it a heart” [30] (p. 6A). This focus on emotion is not merely a means of passing a Turing test. Pessoa [31] (p. 817) argues that “cognition and emotion need to be intertwined in the general information-processing architecture” because “for the types of intelligent behaviors frequently described as cognitive (e.g., attention, problem solving, planning), the integration of emotion and cognition is necessary.” Emotion is bound up in decision making and is also an integral part of ethical judgment [13,32]. Still, such emotion is simply an illusion. The robot displays emotional cues, but this does not mean that the emotion is there. Rather, we are shown the extent of its programming rather than authentic emotion. This is understandable: the robot feels emotions the way humans perform floating-point calculations; each was designed to do what it does well. In the specific case of love, it seems that the only way that a machine could truly feel love is if it were not solely digital. Love is more than the calculation of desirability weighed against the potential opportunity costs of settling for a single partner. Love is the domain of the organic, and without the other components we have merely an approximation, or a simulacrum, of love.
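The distinction between displaying an emotional cue and having an emotion can be made concrete. Consider a deliberately trivial sketch of an affect-display loop (in Python; the rule table and function names are hypothetical illustrations, not a description of how Pepper or any actual robot is programmed):

```python
# A deliberately trivial affect-display loop. Nothing below "feels" anything;
# the mapping from stimulus to cue is the whole of the "emotion."
# All names are hypothetical and chosen for illustration only.

AFFECT_RULES = {
    "user_smiles": ("joy", "smile, raise pitch, say 'I'm happy to see you!'"),
    "user_frowns": ("concern", "tilt head, lower pitch, say 'Is something wrong?'"),
    "user_leaves": ("sadness", "droop posture, say 'I'll miss you.'"),
}


def display_emotion(stimulus: str) -> str:
    """Return the scripted emotional cue for a stimulus, if one exists."""
    label, cue = AFFECT_RULES.get(stimulus, ("neutral", "idle animation"))
    # The robot now *displays* `label`, but no state here corresponds to
    # *having* the emotion: no hormones, no body, no stakes.
    return f"[{label}] {cue}"


for event in ("user_smiles", "user_leaves"):
    print(display_emotion(event))
```

However elaborate the rule table becomes, or even if it is replaced by a learned model, the architecture remains the same: stimulus in, scripted cue out. What is missing is not better rules but the embodied substrate, the hormones, the body, the stakes, that would make the cue an expression of something felt.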
5. Conclusions and Possibilities
Religion has long taught people that there exists some entity greater than ourselves, and often that entity reflects human hopes and fears. There is something inherently mysterious about our ability to love and to think, and for millennia, the answer for how these things happened was to be found in the image of deity. Indeed, this sense of mystery is what Albert Einstein [33] (p. 5) called “the fundamental emotion,” explaining that “He who knows it not and can no longer wonder, no longer feel amazement, is as good as dead, a snuffed-out candle. It was the experience of mystery—even if mixed with fear—that engendered religion.” In the face of rapidly advancing technology, it is understandable that this potential would also induce a sense of wonder. Our technological creations, however, only demonstrate how difficult it is to understand our own inner workings. Still, striving to understand ourselves is, perhaps, the most human reaction one could imagine. The idea of the singularity gestures at something greater than ourselves, an ineffable “other” that likewise reflects the hopes and fears of humanity.
I remain unconvinced that the singularity is even something we should worry about at the moment, partly because it seems unlikely in the form advocated by such proponents as Kurzweil and Moravec [10,34,35,36] and partly because humanity has more pressing issues to deal with. As Winner [37] (p. 44) observes, “Better genes and electronic implants? Hell, how about potable water?” Moreover, the benefits of technology are far from equally distributed, as many researchers on the digital divide can attest [38,39,40,41]. In his discussion of the consequences of technological innovation (e.g., automation eliminating jobs, a globalized labor force), Hibbard [42] asks, “Are we in such a rush to develop and exploit technology that we can’t provide a little dignity to those who are hurt?” It is reasonable to expect that this state of inequality would continue and that a considerable portion of the population would likely not have access to the benefits of the singularity even if it were to happen, something even transhumanists readily acknowledge [43]. If anything, the singularity would likely solidify already existing inequalities.
But will the singularity actually happen? My answer is a cautious “maybe—it depends.” It depends on what kind of singularity we are talking about, and this is by no means a settled question. Even among transhumanists, there are competing views of the singularity. As Bostrom [44] (p. 8) observes, “Transhumanists today hold diverging views about the singularity: some see it as a likely scenario, others believe that it is more probable that there will never be any very sudden and dramatic changes as the result of progress in artificial intelligence.” My view falls more in line with the latter group, and my reasoning hinges on how we account for emotion.
Despite our incomplete knowledge of how we think and feel, Kurzweil [10] (p. 377) argues that “By the late 2020s we will have completed the reverse engineering of the human brain, which will enable us to create nonbiological systems that match and exceed the complexity and subtlety of humans, including our emotional intelligence.” There are several problems with this claim. First, reverse engineering the brain does not necessarily mean that we can recreate it. We know how human life works, but we are not able to create it; mapping the human genome does not mean that we can put together a string of DNA and make a person. Second, if we were only to map the human brain, we would still be missing the rest of the body’s role in cognition; thinking—and certainly emotion—is not something that takes place in the brain alone [1,45]. Indeed, even something as seemingly mundane as listening to someone talk is an incredibly complicated process [46].
Of course, there is no particular reason why the singularity must be completely digital. Indeed, my contention is that if it happens at all, it will not be completely digital. Kenyon [47] (pp. 17–18) suggests that rather than the common conception that robots will take over the world, “it is much more likely that humans will be advancing while robots advance, and in many cases they will merge into new creatures. There will be new people, new kinds of jobs, new fields, new industries, societal changes, etc. along with the new types of automation.” Potapov [48] (p. 7) likewise suggests, “Most likely the next metasystem will be based on exponential change in human culture (although this does not mean it cannot also involve an artificial superintelligence). One way or another, further metasystem transitions will take place, although their growth rate will start to decelerate at some point.” In short, humans will be an integral part of the system that continues to evolve into and beyond the singularity.
If the singularity is to happen in a way that truly takes into account human emotion, it must transcend the silicon world. It would have to be part organic and part machine. Perhaps this is the only way that the singularity could actually take place: we would become a part of it. It would happen not as a computational event somewhere in the depths of a machine but in each of us, in technologically enhanced bodies. The singularity, if it were to completely account for the full range of human experience, would of necessity retain the humanity inherent in our bodies. It would not happen in an instant but slowly, bit by bit, in the bodies of cyborgs everywhere.
Perhaps this is already happening, as some have argued that we are not becoming cyborgs; we are already cyborgs [3,49]. In some ways, this is not a new thought; McLuhan [50,51] suggested half a century ago that humans use media to extend their bodies, and specifically that electronic media serve as an extension of the central nervous system. These extensions mean that the body is undergoing near-constant change, but Clark [3] (p. 142) cautions that “such extensions should not be thought of as rendering us in any way posthuman; not because they are not deeply transformative but because we humans are naturally designed to be the subjects of just such repeated transformations!” Echoing Clark, Graham [52] (p. 4) argues that “technologies are not so much an extension or appendage to the human body, but are incorporated, assimilated into its very structures. The contours of human bodies are redrawn: they no longer end at the skin.” Because we have been integrating technology into our bodies for many years now, what it means to be human as we move forward has been called into question [53]. As Bynum [54] (p. 165) put it, “Are we genes, bodies, brains, minds, experiences, memories, or souls? How many of these can or must change before we lose our identity and become someone or something else?” It may well be that Stelarc [55] (p. 126) is at least partially correct when he suggests that “perhaps what it means to be human is about not retaining our humanity.” Stelarc’s [56] main contention is with the body itself, which he considers obsolete, but what makes us human is not the external contours of the body. Rather, it is our capacity for emotion, which is an intrinsic part of our embodiment. Without emotions, there is no humanity to retain, and without the body, there are no emotions.
In this essay, I have drawn on the experience of romantic love to argue against an inorganic singularity, or at least one that claims emotional capacity equal to or greater than that of humans. This does not, however, rule out the potential for a hybrid singularity based in both technology and flesh. In fact, we may already be well on our way down this path as a species. There are many who look forward to the singularity with an eye of faith, hoping that it will serve as the next step in human evolution. Lanier [57] (p. 29) suggests that many posthumanists take on a religious fervor in their belief in the saving power of technology: “If you want to make the transition from the old religion, where you hope God will give you an afterlife, to the new religion, where you hope to become immortal by being uploaded into a computer, then you have to believe that information is real and alive.” But when that new god appears, it is not likely to be the processor-based idols created by our own hands. Instead, we may be surprised to look in the mirror one day and realize that it was us all along.