*Article* **Thinking in Patterns and the Pattern of Human Thought as Contrasted with AI Data Processing**

**Robert K. Logan and Marlie Tandoc**


Received: 5 March 2018; Accepted: 5 April 2018; Published: 8 April 2018

**Abstract:** We propose that the ability of humans to identify and create patterns led to the unique aspects of human cognition and culture as a complex emergent dynamic system consisting of the following human traits: patterning; social organization beyond the nuclear family, which emerged with the control of fire; rudimentary set theory or categorization, which co-emerged with spoken language; the ability to deal with information overload; conceptualization; imagination; abductive reasoning; invention; art; religion; mathematics; and science. These traits are interrelated, as they all involve the ability to flexibly manipulate information from our environments via *pattern restructuring*. We argue that the human mind is the emergent product of a shift from external percept-based processing to a concept- and language-based form of cognition based on patterning. In this article, we describe the evolution of human cognition and culture, describing the unique patterns of human thought and how we, humans, think in terms of patterns.

**Keywords:** patterns; patterning; cognition; set theory; language; information; abductive reasoning

#### **1. Introduction**

Humans both deal with information and engage in creative thinking, while computers only process data according to the instructions of their human programmers. We believe humans are capable of both recognizing and creating patterns, but computers are only capable of recognizing the type of patterns that they have been programmed to look for. We believe computers are deduction engines that are also capable of induction, as is the case when they succeeded at mastering chess or Go. They are not capable of abductive reasoning or creating a story, however, and hence there are limits to their creativity. Abductive reasoning is a form of logical inference using imagination, in which the simplest and most likely hypothesis is posited to explain observed phenomena (see Peirce [1]). The claims made in this opening paragraph are argued for in the lead paper of this special issue by Braga and Logan [2]. The purpose of this paper is to provide background on human cognition (and cognition in general) for the debate of human versus computer cognition.

In Braga and Logan [2], the authors claim that the technological Singularity, the idea that through AI, certain computers will be able to create a level of intelligence greater than that of humans, is not possible because computers are not capable of abductive reasoning, imagination and creativity. We will argue in this article that patterning is an essential feature of human cognition and is a product of abductive reasoning and imagination, which are features that computers are not capable of. This article therefore supports the claim of Braga and Logan [2] that the hypothesis of the eventual emergence of the Singularity is not correct.

We believe that the ability to recognize and create patterns led to the unique aspects of human cognition and culture. This ability led to the co-emergence of inter-related traits, all characterized by the capacity to manipulate and restructure patterns via *mathematics, verbal language, imagination and abductive reasoning*. Since imagination is a form of abductive reasoning and abductive reasoning is a form of imagination, we will pair the two notions as imagination/abductive reasoning for the purpose of discussing patterning in this article. Our justification for pairing abduction and imagination is that one of the definitions of abduction is the process whereby one finds the simplest and most likely explanation of what one has observed, and it is imagination that enables one to carry out this process.

We also believe that the environmental pressure of information overload is a key factor in our proposed dynamic cognitive system of human intelligence, which drives us to continuously create media and technologies to be able to not only efficiently process patterns, but also to reconfigure patterns in novel ways. What led to this huge cascade of complex ways of manipulating and restructuring patterns at the dawn of *Homo sapiens*? We propose, as an abduction, that these complex forms of pattern-structuring, namely mathematical, linguistic and creative thinking, bootstrapped themselves into existence via the human ability to simultaneously hold multiple parallel representations in both mind (online) and memory (offline), allowing us, with a greater representational sandbox, to continuously structure and restructure patterns in countless ways. How patterns become instantiated in *culture* will also be explored.

As hinted at by our title, we believe that the ability to recognize and create patterns is the key to understanding the nature of human cognition and culture, as the specifically, uniquely human traits, namely the control of fire, the ability to deal with information overload, patterning, primitive set theory, verbal language, imagination, abductive reasoning, invention, art, religion, mathematics and science, are all interrelated. We have tried to order the description of these factors that make humans unique more or less in the order in which they emerged. We say more or less because some developed or emerged simultaneously and their interrelationships are non-linear and part of emergent dynamics. The narrative we are about to develop is a story, an abductive inference, or simply a guess as to how we humans came to be the super-intelligent beings that we are. This story is not presented as science, since there is no way to test the validity of our conjectures, but we believe that it might provide some insight into the relationship between the unique features of the human modus operandi.

Our story begins with the ability of the genus Homo to control fire, which led to a new social structure of large numbers of people living together and, in turn, to information overload and the need to co-ordinate the activities of a large group of people. The solution to the information overload was the emergence of verbal language, which required patterning in the form of set theory, in which words acting as concepts represented all of the percepts related to each of those words/concepts. Verbal language had a transforming effect and made possible imagination/abductive reasoning, which in turn gave rise to new technologies, mathematics, science, artistic expression and religion. The relationships that we have just outlined are by no means obvious, but we hope to convince you in this article that these characteristics of human cognition and culture are interrelated. The narrative we have just presented is linear, but the relationships among the elements that make us humans unique are by no means linear. The evolution of human culture is the story of a complex adaptive system, in which emergent dynamics are always the driving force. As we develop our narrative, some elements will appear twice, given the non-linear nature of our subject and the linear nature of our medium, the written word. To further support some of our claims, we will draw on various neuroscience findings that shed light on some of the pattern-processing and restructuring tools implemented in our brains.

We will examine math, language, artistic and scientific thinking and imagination/abductive reasoning as tools by which we process and restructure information to best suit our goals.

#### **2. Human Control of Fire and the Ensuing Information Overload It Created**

Faced with information overload, we have no alternative but pattern-recognition—Marshall McLuhan ([3], p. 132).

The human control of fire changed the way our ancestors lived together. Before learning how to control fire, humans lived in nuclear family units consisting of the mother, the father and their non-adult children. As the children grew into adulthood, they went off and formed their own nuclear families and hunted and gathered on their own so as not to interfere with their parents' food-gathering activities ([4], Chapter 3; [5], p. 60).

With the control of fire, nuclear families banded together to form clans of related nuclear families to take advantage of the many benefits that fire offered such as (i) warmth, (ii) protection from predators, (iii) tool sharpening, and (iv) cooking, which increased the number of plants that could be made edible, killed bacteria and helped to preserve raw foods, such as meat. Living together in clans gave rise to new, more complex and larger social structures that bred a form of information overload because of the increased complexities of social interactions and the need to organize the activities of many people to gather food and to maintain the campfire.

This information overload of interacting with many people and carrying out more sophisticated activities led to the need for better communications to better co-ordinate social transactions and co-operative activities, such as the sharing of the benefits of fire, the maintenance of the hearth, food sharing, and large-scale coordinated hunting and foraging. Communication in this new environment became essential, and it is, therefore, surmised that there arose a new preverbal proto-language of social interaction, with a proto-semantics of social transactions that included greetings, grooming, mating, food sharing, and other forms of co-operation appropriate for clan living. The proto-syntax of social organization or intelligence included the proper ordering or sequencing of these transactions in such a way as to promote social harmony, avoid interpersonal conflict, and, hence, contribute to the survival and development of hominid culture. It was in this environment that verbal language emerged, with a still more complex level of organization.

The mechanism by which verbal language emerged was that words acting as concepts came to represent all of the percepts associated with that particular concept. Reflecting the transition from external to more internal pattern-processing, language emerged as the transition from percept- to concept-based thinking, according to the thesis developed in the book *The Extended Mind: The Emergence of Language, the Human Mind and Culture* [4]. A word acts as a concept that connects all of the percepts related to that word. The use of a word like "water", representing the concept of water, triggers instantaneously all of the mind's direct experiences and perceptions of water, such as the water we drink, the water we cook with, the water we wash with, the water that falls as rain or melts from snow and the water that is found in rivers, ponds, lakes, and oceans. The word "water" also brings to mind all the instances where the word "water" was used in any discourses in which that mind participated either as a speaker or a listener. The word "water", acting as a concept and an attractor, not only brings to mind all "water" transactions, but it also provides a name or a handle for the concept of water, which makes it easier to access memories of water and share them with others or make plans about the use of water. Words representing concepts speed up reaction time and, hence, confer a selection advantage for their users. At the same time, those languages and those words within a language which most easily capture memories enjoy a selection advantage over alternative languages and words, respectively. Before humans had verbal language, the brain was a percept processor, but with language the brain became a mind that was capable of conceptualization and the ability to plan. The mind is the brain plus language.

Several experimental studies have found that providing labels (i.e., a word) facilitates the learning of categories, i.e., seeking patterns [6,7]. In fact, a word acting as a handle for a concept can even override more *perceptual* category learning [8], further suggesting that human cognition represents a shift away from purely bottom-up learning towards more top-down, internal representations and the manipulative restructuring of our world. Such restructuring becomes easier to facilitate and more graspable when you have a handle.

The mechanism that actually led to verbal language seems to have involved the ability of humans to create **patterns** by distinguishing sets of objects or activities that are similar and differentiating them from other objects or activities. It is in this sense that **mathematics**, in the form of set theory, emerged. Logan and Pruska-Oldenhof [9], in a book entitled *A Topology of Mind: Spiral Thought Patterns, the Hyperlinking of Text, Ideas and More*, developed the thesis that "the human mind is intrinsically verbal and mathematical and that **language** and **mathematical thinking** co-emerged at the dawn of the emergence of Homo sapiens." They posited that mathematics originated in the mind in the form of classification, the grouping of like things into sets or groups, and the giving of a name to that set or group, namely a word, acting as a concept, used to represent that set of percepts. It is the same skill that eventually gave rise to modern set theory and group theory and, as they claim, preceded the skill of counting or enumeration and actually laid the foundation for it. Therefore, linguistic and mathematical thinking both seem to play an important role in how we process and manipulate patterns.
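
This "word as the name of a set of percepts" idea can be made concrete with a minimal sketch in Python; the percept groupings below are our own invented illustrations, not examples taken from the article:

```python
# A toy sketch of "a word as the name of a set of percepts": grouping like
# experiences under one handle. The percept groupings are invented examples.

concepts = {
    # The word "water" bundles every water percept under a single handle.
    "water": {"water we drink", "water we cook with", "rain",
              "melted snow", "river", "lake", "ocean"},
    # The word "fire" does the same for fire percepts.
    "fire": {"campfire", "cooking fire", "hearth"},
}

# Invoking the word retrieves all of the associated percepts at once,
# which is what makes the concept easy to access, share, and plan with.
print(concepts["water"])
```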

Further support for the notion that the human brain is *intrinsically* mathematical, and that this mathematical feature is highly intertwined with our linguistic abilities, is observed through the phenomenon of *statistical learning*. Statistical learning is the ability to extract complex regularities and **patterns** from our environments over time [10]. Infants, for instance, are remarkably good statistical learners. An infant can effectively learn what phonemes occur together in a sentence over time by implicitly picking up on the transitional probabilities of sounds [11]. This remarkable ability to pick up on **patterns**, that is, on how elements of experience relate, is argued to be the reason for infants' and children's remarkable ability to learn language. Indeed, it is this *mathematical* quality of the mind that actually allows us to learn and implement complex linguistic structures or **patterns**. This further suggests that mathematical and linguistic thinking are tightly intertwined. Equally remarkable is not only the brain's ability to pick up on the *transitional* probabilities described above (i.e., what features frequently co-occur together), but also its ability to form even more complex mathematical representations in mind, such as the *distributional* probabilities of categories, where, through exposure to exemplars, the brain is able to integrate these experiences to create a "prototypical representation or distribution" (ibid.).
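
The transitional-probability computation at the heart of this kind of statistical learning fits in a few lines. The syllable stream below is an invented toy example in the spirit of Saffran et al. [11], not their actual stimuli:

```python
from collections import Counter, defaultdict

# A toy sketch of statistical learning: estimating transitional
# probabilities P(next | current) from a repeated syllable stream.

stream = "bi-da-ku-pa-do-ti-bi-da-ku-go-la-bu-pa-do-ti".split("-") * 20

pair_counts = defaultdict(Counter)
for cur, nxt in zip(stream, stream[1:]):
    pair_counts[cur][nxt] += 1

def transitional_probability(cur, nxt):
    total = sum(pair_counts[cur].values())
    return pair_counts[cur][nxt] / total if total else 0.0

print(transitional_probability("bi", "da"))  # within a "word": 1.0
print(transitional_probability("ku", "pa"))  # across a boundary: 0.5
```

High transitional probabilities mark syllables that belong together; lower ones mark boundaries, which is how an infant could segment words from continuous speech.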

What pattern-processing mechanisms are involved in statistical learning? One account of statistical learning suggests a two-step model, whereby **patterns** must be *extracted* and then *integrated* [12] Again, this type of model parallels our thesis that pattern-processing is not computationally uniform, but a dual-process of pattern-processing, whereby one involves the ability to perceptually pull out patterns (pattern-recognition) and the other to integrate them into a broader picture (which we call pattern-restructuring). For instance, when we are learning a category, this involves (1) noticing what features co-occur together (in a statistically reliable way), and (2) *integrating* them to form a more generalized group and an integrated and abstracted representation of the category (for example, a prototypical exemplar of what is a "cat"). This process of integrating and restructuring is essentially equivalent to creating "new information". In sum, we have argued that the mathematical properties of the mind and language are tightly intertwined. By feeding off each other, they likely played a role in the very fast, snowballing cognition of homo sapiens, with mathematical thinking pushing us to create sets and patterns, and linguistic thinking making it easier for us to integrate or bundle things together into these sets (i.e., a word acting as an anchor or attractor for multiple percepts). Mathematical thinking, then, acts as a tool to manipulate information into patterns, and language acts as the handle, making them easier to grasp.

Thus, we agree with McLuhan that pattern-recognition is, in fact, how we successfully deal with information overload, and pattern-restructuring is how we can use large amounts of information to our advantage. For instance, evidence that we are able to manipulate patterns to create categories that best suit our needs is seen in cross-cultural differences in categorization. An additional argument supporting the claim that sets vary with the pressures of the environment was made by Unsworth et al. [13], in regard to cross-cultural differences in the categorization of butterflies. In reference to past anthropological studies, these authors pointed out that in the Fore culture there are very well-defined, rigid, and discriminated *sub-categories* of birds, because birds held hunting value as food and, therefore, it was important to make very discrete sets for them. Diamond [14], however, pointed out that this culture lacked any such discrete, distinctive sub-groupings for butterflies, because butterflies, unlike birds, held little tangible value to this culture. Unsworth et al. [13] then went on to compare this culture to the Tzeltal culture, who also did not have discrete categories for butterflies, but did for *butterfly larvae*, which they used as a food source and encountered as a threat to crop growth [15]. In other words, the value and role of an organism in a society changes how categories are created, or how we want to structure and differentiate information. Thus, this anthropological case is a fascinating example of how the patterns that we focus on to create categories may vary based on their impact, or lack of impact, on our culture.

As a result, the human mind requires scaffolding able to accommodate the flexible way in which we create sets. Pattern-processing, the ability to process incoming information, is not enough; we also require the ability to flexibly *use and manipulate* this information to best suit our goals and our understanding of our environment.

#### **3. The Co-Emergence of Math and Language: The Shift from External Pattern-Recognition to Internal Pattern-Restructuring**

What kind of mental faculties differentiate human cognition from that of animals? What cognitive qualities make us uniquely human? In answer to these sorts of questions, Charles Darwin humbly stated that "the difference in mind between man and the higher animals, great as it is, certainly is one of degree and not of kind." Do our pattern-processing abilities then reflect only a difference in degree and capacity? Explicitly drawing from this quote, Mattson [16] echoes Darwin's idea and draws from a vast range of animal research to argue that the human brain is a result of its *superior pattern-processing* abilities and not necessarily of any difference in the kind of these processes. Indeed, several studies have found "scaffolding" or more proto-cognitive abilities in higher primates that seem to differ only in degree.

While we agree with Mattson [16] that increasingly complex and superior pattern-processing is a crucial hallmark of the human mind, we argue that there may be some cognitive qualities that cannot be scaled or reduced down to the level of higher primates, as they are the product of *emergence*, which, by definition, cannot be reduced to its individual parts. In fact, reducing certain human properties of cognition down to the "same", or to a difference in degree, may overlook many of the important qualities that make us human. Echoing this view, Cobley [17] argues that the entire field of biosemiotics radically "insists that humans are separated from other organisms by a difference in kind and a difference in degree." Others who study dynamic systems and their emergent properties view kind and degree similarly, not as oppositional, whereby emergence is the process by which "a difference in degree *becomes* a difference in kind" [18]. When do we *draw the line* at which a difference in degree becomes so great that it is more appropriately a difference in kind? While the difference in degree of our remarkable ability to process patterns is undeniable, the hallmark of the human mind does appear, then, to be a difference in kind rather than of degree.

Instead of arguing that human cognition is superior only because we are just really good at pattern-processing in general (i.e., a difference in degree), we believe there is value in further breaking down what constitutes pattern-processing and what kinds of patterning humans *particularly excel at*. We will look at (1) pattern-recognition, or the ability to *perceive* and *extract* patterns from our environments, as well as (2) pattern-restructuring, or the ability to *manipulate* these patterns internally to *create new* patterns. Pattern-recognition is how we cope in a world of information overload, and pattern-restructuring is how we transcend it and create new information. It is in the latter kind of pattern-processing that humans excel, that is, the ability to flexibly manipulate patterns to suit our goals. We also suggest that this ability led to several uniquely human phenomena such as science, religion, and art. We argue that human cognition, as seen with mathematical, linguistic, scientific and imaginary thinking, represents a shift towards more pattern-restructuring based cognition and, thus, the hallmark of the human mind reflects a difference in degree and of kind.

#### **4. The Nature of Patterns**

Patterns paradoxically both *unify* and *divide* our world. It is through this process of differentiation, of separating things that are similar from those that are dissimilar, that we are able to meaningfully extract *information*. We will argue that a crucial hallmark of human pattern-processing is the ability to flexibly manipulate patterns and restructure them in novel ways to best suit our goals. First, we will explore the concepts of information and patterns, and how patterns give rise to information. Then, we will explore pattern-processing in the context of the human mind and in terms of mathematics, language, science, social science, the arts and imagination/abductive reasoning, for the creation of cognitive tools.

#### **5. What is Information? Forming a Pattern is Equivalent to Creating Information**

This leaves us with the question: how was it that humans were able to create the patterns for grouping things into sets of objects or activities that possess similar properties? Marlie Tandoc, in an independent study course supervised by Robert K. Logan, built upon the idea of a topology of mind and mathematics in mind by introducing the notion of "a set theory of mind", according to which a category or a set is the most basic form of information. Categories or sets can vary by their content and the way in which the elements of the category or set relate to each other.

What forms a pattern? It is paradoxical that the similarity of the elements of a set creates a difference between the very elements of the set and all of the things not in the set. Creating or defining a set automatically creates another set, which we will call the anti-set, consisting of all things not in the original set. Since information is a difference that makes a difference, according to Gregory Bateson ([19], p. 428), creating a pattern is equivalent to **creating** information. Without similarities there are no differences, because once one sees similarities it causes one to consider things that are not similar and, hence, different. Difference is merely the absence of similarity, just as zero is the absence of a number and dark is the absence of light.

We would like to propose that difference arises as a natural emergent property of similarity. Each time we make a similarity judgment, we *automatically* draw a boundary and everything outside that boundary is different. Thus, information can be defined as the *process* of this emergence of differences, created by making similarity comparisons. The idea that information should be viewed not as a noun, but as the verb of informing [4], follows from this idea that information should be studied as a *process*. If information is a difference that makes a difference according to Bateson [19], then it can also be said that information is a non-similarity that makes a non-similarity.

Imagine having apples and bananas randomly spread out in front of you and drawing a circle around all the apples. While we may have the intent of drawing a circle to *enclose* whatever is inside (i.e., these items are all similar in color and shape), a circle also creates a boundary for everything *outside* of it, that is, all the bananas (the anti-set). A judgment of similarity thus naturally creates a judgment of difference, and the byproduct of this process is what we call information. Therefore, there can be "no difference without similarity" [20] and vice versa.
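
The apples-and-bananas thought experiment translates directly into a few lines of set arithmetic; the item names are of course invented:

```python
# A toy sketch of the set/anti-set idea: defining a set automatically
# defines its complement (everything outside the boundary).

universe = {"apple1", "apple2", "apple3", "banana1", "banana2"}

# Drawing the circle: a similarity judgment that groups the apples.
apples = {item for item in universe if item.startswith("apple")}

# The anti-set emerges for free as the complement; in Bateson's sense,
# this difference is the information created by the similarity judgment.
anti_set = universe - apples
print(anti_set)  # {'banana1', 'banana2'}
```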

Many times, when we make similarity comparisons, differences seem to quite literally "pop out at us." For instance, in a visual search task, items that are different, particularly those that vary in only one dimension, such as color, seem to perceptually "pop out" at us. This automaticity and perceptual salience is the power of information as a difference and as an emergent property of similarity.

Similarity is the fundamental pattern processing *tool* of human cognition and difference is what *emerges from its use.* For instance, one empirical study found that children were only able to learn by comparing within-category similarities, whereas adults were able to learn categories from both within-category similarities and between-category differences [21]. This suggests that similarity comparisons may develop before differences. "Similar" is easier to see and process than "not-similar" or "different", just as it is easier to conceive of positive integers than it is to conceive of zero and negative integers. The ancient Greeks and Romans were able to conceive of positive integers but not of zero or negative integers.

We can view similarity as the absence of difference and eliminate one of the terms from our analysis, using either similar and non-similar or different and non-different. It is easier for us to process and recognize fewer differences than it is for us to see more differences.

Identifying similarities is a form of pattern recognition, and pattern recognition, in turn, is a way of dealing with information overload, as pointed out by Marshall McLuhan ([3], p. 132): "Faced with information overload, we have no alternative but pattern-recognition." Recognizing similarities to create a category or a set also leads to the construction of words, and words in themselves are another way of dealing with information overload. As noted above, the emergence of verbal **language** was motivated by the **information overload** of humans living in close quarters around the campfire.

Creating a category involves making a generalization, with the result that the greater the specificity of the category or the set, the less general or encompassing it is. On the other hand, the more general the category, the less specificity it possesses. The four categories of (i) living organisms, (ii) animals, (iii) dogs, and (iv) cocker spaniels increase in their level of specificity but decrease in their level of generality. In other words, the more specific the similarity, the smaller or less general the set. The complementarity of generality and specificity of categories parallels the complementarity of position and momentum in the Heisenberg uncertainty principle in quantum mechanics, which states that the more you know of an atomic particle's momentum the less you know about its position, and vice-versa. We call the complementarity of specificity and generality of categories the LT uncertainty principle of generalization and specificity. The greater the scope of the generalization of a category, the less its specificity, and the greater its specificity, the less its generalization. It is a trade-off because you cannot have both. More generalized categories allow us to make more comparisons across a wider range of experiences, at the cost of losing specificity, i.e., there are fewer similarities between the elements of the category. With more specific categories, more similarities exist between the members, but the number of members is smaller than in a more generalized category.
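
The trade-off can be exhibited with the four-category example just given; the feature sets and member counts below are invented placeholders, chosen only to show the monotonic relationship:

```python
# A toy sketch of the LT uncertainty principle of generalization and
# specificity: more shared features (specificity) always means fewer members.

hierarchy = [
    # (category, features shared by all members, approximate member kinds)
    ("living organisms", {"metabolizes"}, 8_700_000),
    ("animals", {"metabolizes", "moves"}, 1_000_000),
    ("dogs", {"metabolizes", "moves", "barks"}, 340),
    ("cocker spaniels", {"metabolizes", "moves", "barks", "long ears"}, 1),
]

for name, shared, members in hierarchy:
    # Each step down gains a shared feature and loses generality (members).
    print(f"{name}: {len(shared)} shared features, ~{members} member kinds")
```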

Is there a *sweet spot*? Words for basic-level categories such as dog, tree, flower, or bird are found in nearly all languages and cultures, and people show strong perceptual, learning and memory biases for this level of category [22]. Basic categories may be the most effective *level*, one that allows for an informational *sweet spot* of generalization and specificity, and thus acts as an adaptive mechanism for information, which evolved over time and converged as the easiest to learn. Over cultural evolution, the level of generality and specificity in the words and categories that have come to "stick" may thus represent the most effective level of generality versus specificity in our linguistic environments.

Nonetheless, certain environmental or task demands may push us towards creating more general or specific categories. Evidence for this is that information-processing is highly relativistic, and the benefit of basic-level categories can vanish as easily as it appears. In a case where you are moments away from a tiger about to pounce, the most valuable piece of information may be something more general, such as *predator* or *prey*, rather than the exact species of the tiger. For example, while basic-level categories (such as bird, tiger, or bear) typically show the greatest perceptual advantage in category learning tasks, when time is limited (~30 milliseconds) and you have less time to make a decision, generality wins over specificity, in that you "spot the animal before you spot the bird [23]." These findings support the notion that when time is short, you might see a predator before you see a Siberian tiger. However, in cases when you have *more time* to make a decision, a higher level of specificity may be of more benefit. Vervet monkeys, for instance, have different signals for different types of predators, which allows them to take the best course of action for each specific type of predator. In summary, in this example, *time to make a response and context* play a role in which categories one tends to rely on. This further supports the notion that decision-making based on categories is highly dependent on context, in this case the time available to respond, underscoring the *relativistic nature of information*.

Tradeoffs between the general and the specific have become a huge area of interest for neuroscientists in complementary learning systems theory. The two complementary learning systems comprise (1) the ability to *integrate* information across different experiences, and (2) the ability to code for the episodic specificity of individual experiences [24].

It makes sense that having a good sense of what a "chair" is (similarities) is necessary information for knowing how a "chair differs from a table" (differences). Similarly, when facing a new and uncertain world, similarity knowledge may be more *flexible* in its capacity for novel comparisons. Only memorizing an optimized rule of how a chair differs from a table may not be that helpful when later having to differentiate between a chair and a couch. Instead, knowledge of the internal properties of the features likely to co-occur across chairs may be much more helpful, at least at first. Indeed, it may be advantageous to *initially* bias the system towards internal similarities, as this, in turn, can allow us to later use these similarities to *bootstrap* diagnostic differences into existence ([18,25]) and to do so across a wide range of contexts.

In summary, we have argued that biasing attention towards similarities, as the crux of pattern-processing, is beneficial, as it (1) constrains the meaningful differences to emerge (as information), and (2) allows more *flexible* use of this information for *novel* comparisons.

#### **6. Letting Go: Pattern-Restructuring as the Mechanism of Novel Ideas, Language and Imagination**

"The **spoken word** was the first technology by which man was able to **let go** of his environment in order to **grasp it in a new way**."—Marshall McLuhan.

Language, imagination, abductive reasoning, invention (as in technology), art, religion and science are unique characteristics of humans that no other animals possess. As we will claim, they seem to be related to each other, an idea supported by a number of other scholars referenced in this section.

Marshall McLuhan and Robert K. Logan [26] wrote:

*If one must choose the one dominant factor which separates man from the rest of the animal kingdom, it would undoubtedly be language. The ancients said: 'Speech is the difference of man'* ... *It is the medium of both thought and perception as well as communication.*

A number of authors have made a connection between language and imagination. Among these are:

Daniel Dor [27], author of the book, "The Instruction of Imagination: Language as a Social Communication Technology", characterizes "language as a functionally specific communication technology, dedicated to the instruction of imagination: with language, and only with it, speakers can make others imagine things without presenting them with any perceptual material for experiencing."

Eric Reuland, author of the articles "Imagination, planning, and working memory: the emergence of language" [28] and "Language and imagination: Evolutionary explorations" [29], shows the intimate relationship between language, imagination and planning. In the former, he argues that language makes imagination possible and explores the relation between imagination, planning and language. In the latter, he claims that imagination is "*the language lab*, producing both science and poetry".

Paul Crowther [30], author of "Imagination, language, and the perceptual world: a post-analytic phenomenology", argues "that language directs imagination, empirically speaking in adult life, but that the ontogenesis of language presupposes the role of imagination" (ibid., 40). He claims that "as imagination is a mode of thought, ... it follows that there must be some key relation to language, also, insofar as thought in its fullest sense, is centered on language. The relation between imagination and language is, in fact, a vital one, empirically speaking. It dominates how imagination is exercised" (ibid., 43).

Mark Mattson [16], author of the article, "Superior pattern processing is the essence of the evolved human brain", begins his review article in *Frontiers in Neuroscience* with the following provocative remark in the first two sentences of his abstract:

*Humans have long pondered the nature of their mind/brain and, particularly why its capacities for reasoning, communication and abstract thought are far superior to other species, including closely related anthropoids. This article considers superior pattern processing (SPP) as the fundamental basis of most, if not all, unique features of the human brain including intelligence, language, imagination, invention, and the belief in imaginary entities such as ghosts and gods.*

What stopped us in our tracks was Mattson's pairing of "intelligence, language, imagination, invention" with "the belief in imaginary entities such as ghosts and gods". As practitioners and students of science, we associated "intelligence, language, imagination/abductive reasoning, and invention" with science, engineering and the arts. On the other hand, we associated "belief in imaginary entities such as ghosts and gods" with faith, at best, and superstition, at worst. We were intrigued by Mattson's abstract and so we read the whole article. As we discussed its thesis, it suddenly occurred to us that perhaps abductive logic or thinking was the link between these two seemingly disparate categories of "intelligence, language, imagination and invention", on the one hand, and the belief in "imaginary entities such as ghosts and gods", on the other.

Abductive logic is unlike deductive logic and inductive logic. Let us explain. With deductive logic, you begin with two axioms that you believe to be self-evident and deduce a conclusion. Socrates is a man. All men are mortal. Therefore, Socrates is mortal. With inductive logic, you list all examples where your conclusion is true and you assume or guess, therefore, that it is always true. Socrates was mortal. Aristotle was mortal. Newton was mortal. Einstein was mortal. Therefore, all men are mortal. With abductive reasoning, one observes a set of data and one then guesses or *imagines* or invents the hypothesis that explains that data most simply and most plausibly. All three forms of logic involve guessing in one way or another. With deductive logic, one makes the guess that one's starting axioms are correct. With inductive logic, one makes the guess that if the statement is true for all the examples that one is able to compile, then it must be true for all possible cases.

Each of these three forms of logic has different applications:

Mathematics, for the most part, proceeds by way of deductive logic. The axioms that parallel lines (i) remain the same distance apart; (ii) converge; or (iii) diverge give rise to three forms of geometry, respectively: (i) plane; (ii) Riemannian; and (iii) Lobachevskian.

Inductive reasoning is used for forecasting and cannot guarantee the truth of its conclusion, but only suggests that it is most likely. In most cases, the greater the sample size, the more reliable the conclusion, but this assertion itself is another example of inductive reasoning.

Abductive thinking, as used in science, only asserts that its conclusions are the simplest and most likely, and they must be subjected to constant testing. The criterion for a conclusion to be considered scientifically valid is that it must be falsifiable, as suggested by Karl Popper [31].
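
To make the contrast between the three forms of inference concrete, here is a hedged toy sketch; the axioms, observed cases, and hypothesis likelihoods are all invented illustrations, not anything prescribed by the article:

```python
# A toy sketch contrasting deduction, induction, and abduction.

# Deduction: from general axioms to a specific, guaranteed conclusion.
axioms = {"Socrates": "man"}                       # Socrates is a man
def mortality_rule(kind):                          # all men are mortal
    return "mortal" if kind == "man" else "unknown"
print("deduction:", "Socrates is", mortality_rule(axioms["Socrates"]))

# Induction: from specific cases to a general (but unguaranteed) rule.
cases = {"Socrates": "mortal", "Aristotle": "mortal", "Newton": "mortal"}
if all(v == "mortal" for v in cases.values()):
    print("induction: all men are mortal (a guess from the compiled cases)")

# Abduction: guess the simplest, most likely hypothesis explaining the data.
observation = "the grass is wet"
hypotheses = {"it rained": 0.70, "a sprinkler ran": 0.25, "a flood": 0.05}
best = max(hypotheses, key=hypotheses.get)
print("abduction:", observation, "because", best, "(held tentatively)")
```

Note that only the abductive step generates a new explanatory hypothesis; the other two merely apply or extend what is already given.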

So, what is the connection between "intelligence, language, imagination, invention" and "the belief in imaginary entities such as ghosts and gods"? Belief in imaginary entities, such as ghosts and gods, parallels abductive thinking and requires imagination. Belief in, or even suggesting a hypothesis that explains observed phenomena in science, also involves abductive reasoning and also requires imagination.

Imagination is a key requirement in both science and magical thinking, both of which create or recreate a reality for the one imagining the scientific hypothesis or the magical thought. José Monserrat Neto [32] concurred with this when he suggested "that the capacity of imagination is essential to understand the creative way in which human beings learn and (re)construct their reality."

A theory or hypothesis in science is an imaginary entity which entails the belief that it might be actual or true. Science and the belief in ghosts or gods are, therefore, parallel to a certain degree. They diverge in that the scientist accepts that their hypothesis might be false and hence needs to be constantly tested empirically, whereas the religious or ghost believer does not accept the notion that their belief may be false, but accepts the validity of their belief on faith alone, without a need to test their hypothesis.

The human ability to create these hypotheses to account for data involves imagination/abductive reasoning and the ability to combine seemingly disparate patterns and restructure them in new ways. Although a deterministic, machine-like system (artificial intelligence, for instance) is able to use both deductive logic (if-then statements) and inductive logic (machine learning), the use of effective *abductive* reasoning may be a uniquely human phenomenon. Abductive reasoning is the result of a mathematical mind with the ability to create novel categories or patterns between otherwise seemingly disparate elements via *pattern restructuring*.

The three different forms of thought or logic, deductive, inductive and abductive, can be seen as ways in which we navigate information landscapes amongst the different levels of specificity and generality of our concepts, as defined above in the LT uncertainty principle of generalization and specificity (see Figure 1). Deductive logic involves moving from the general to the specific, and inductive logic from the specific to the general. Both act as one-way streets of logic. Abductive thought requires a more horizontal, domain-general movement that focuses on forging *new connections* across ideas and on consideration of context, which we believe is uniquely human.

**Figure 1.** Generality versus Specificity for Deduction, Induction and Abduction.

Further supporting the importance of having vivid, detailed episodic experiences is that this system may also be how we imagine *future* events. *That is, episodic memory is the foundational, sensational landscape we use to feel like we are there when imagining future events, and it also involves the ability to restructure patterns on a whim to create internal worlds*.

The storage of multiple representations via pattern separation, the holding of multiple representations in mind via working memory, and access to episodes are the processes that seem to allow us to *imagine* future events.

#### **7. Pattern Restructuring: Abductive Reasoning and Creativity**

We argued above that abductive reasoning may be able to explain both scientific and superstitious thinking, but where they differ is in terms of falsifiability. Abductive reasoning underlies a vast range of what makes us human, stretching from science and intelligence to religious faith and belief in ghosts [16]. Abductive thinking, therefore, seems to be a uniquely human phenomenon. It involves being able to restructure connections between ideas, to create novel ideas that bridge often seemingly distant ideas, to create hypotheses that can explain and unify observations, and to create "just so stories" and coherent narratives. It requires *creativity* and the ability to connect and restructure patterns.

How do we get something seemingly novel from previous information that seems disconnected? Is creativity a magical "and then there was light" phenomenon, something seemingly emerging from nothing? Not necessarily. What if we just create new connections between previously existing ideas? This would explain how something seemingly novel can arise from pre-existing, disconnected bits of information. In terms of our "set theory" of mind, creativity comes from this ability to forge *novel connections* between sets, even sets that may have little overlap at first glance. In other words, the ideas and content remain constant, but it is the *connections between* these patterns that give way to creativity, imagination and abductive reasoning, which then result in new ideas and new content. The following definition of creative insight is particularly helpful in explaining this idea:

*(Creative) insight seems to involve (1) an existing state of mind or set of mental structures relevant to the topic and (2) a moment of realization, consequent to new information or a sudden new way of looking at old information, resulting in (3) a quick restructuring of the mental model, which is subjectively perceived as providing a new understanding [33].*

Csikszentmihalyi's definition of creative thought implies that we have to have these pre-existing sets, formed via our pattern-processing tools, and then creativity/**pattern-creation** emerges as pattern-restructuring, bridging novel connections between these already existing ideas. This helps explain how novel, creative ideas can seem to come out of nowhere (i.e., the "ah-ha" moment): not necessarily through any new content, but simply through different connections between existing content, which, in turn, creates new content through the new connections. Note that creative thought or abductive reasoning is not finding something from nothing; it is the ability to find something in a group of initially seemingly unrelated things. It is creating new sets of syntactical structures, which, as a result, creates new meaning and hence new knowledge. Everything was there to begin with; we just were able to realize it, or rather, in McLuhan's words, let go of it to grasp it in a new way, and this process in itself gives rise to new meaning and new information.
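
In the language of our set theory of mind, this restructuring step can be sketched as finding and bridging the most distant concept-sets; the concept contents and the use of Jaccard similarity are our illustrative choices, not anything specified in the article:

```python
# A toy sketch of creativity as bridging distant concept-sets.

concepts = {
    "bird": {"wings", "flight", "feathers", "streamlined"},
    "machine": {"metal", "engine", "propulsion"},
    "fish": {"fins", "swimming", "scales", "streamlined"},
}

def jaccard(a, b):
    """Overlap between two sets: 0 = disjoint, 1 = identical."""
    return len(a & b) / len(a | b)

# The restructuring step: bridge the two most *distant* sets, the kind of
# novel connection that can yield a composite idea (bird + machine -> airplane).
pairs = [(x, y) for x in concepts for y in concepts if x < y]
x, y = min(pairs, key=lambda p: jaccard(concepts[p[0]], concepts[p[1]]))
new_idea = concepts[x] | concepts[y]  # same content, new connections
print(x, "+", y, "->", sorted(new_idea))
```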

Indeed, just as we argued that math and language ultimately boil down to pattern-restructuring tools, creativity, too, is the ability to restructure information to solve problems or create *novel* ideas. Again, recall Csikszentmihalyi's first step of creativity, which involves "an existing state of mind or set of mental structures ... "; this further supports our view that vivid episodic experiences are required to lend themselves to the increasingly complex pattern restructuring of these ideas, as without this raw resource to return to, we become **fixated** on only one idea or pattern.

Our ability to imagine and create allows us to build increasingly complex ideas, from technology and art to science and religion. Ultimately, then, many of the qualities that make us *human* arise as emergent properties of our ability to *restructure patterns* to zap novel ideas into existence. But of course, all the material is there to begin with; the human mind is just able to recognize, extract, integrate, synthesize, restructure, and *realize it* into existence.

#### **8. No Signal, Without Noise: The Role of "Randomness" in Pattern-Restructuring**

As argued above, creativity involves being able to take seemingly unrelated ideas or sets and restructure them to make novel ideas. Even at the neuronal network level, the brain seems to have a hard-wired mechanism that works *to increase our chances* of finding novel ideas, by pushing us to restructure our patterns in novel ways. Connections of neurons that could represent an idea (i.e., a consistent pattern of neurons fires every time you think of a "dog") have been found to fire in unpredictable, random ways, a phenomenon called *stochastic resonance*. While the reason for these "noisy" neurons is still debated, it has recently been suggested that they can be seen as the brain *testing out new connections*. This testing out of new neuronal combinations is argued to prevent decisional deadlock, to let us make mistakes to learn from, and might even act as a basis for *creativity* (see Deco et al. [34]). This final point, on stochastic resonance as the grounds for creativity, is quite the assertion, but it follows the idea that certain neuronal computations can be hierarchically abstracted [35] to assist in explaining increasingly complicated, abstracted ideas.

Randomness and chance indeed play a role in creativity, and our brains may take advantage of this randomness to push towards bridging otherwise disparate ideas. The role of "randomness" and chance is analogous to the incredibly creative design we observe in evolution and natural selection. A random mutation that made a giraffe's neck slightly longer was the random push needed to lead, eventually, to the giraffe's famously elongated neck. Similarly, neuronal stochastic resonance, essentially a mutation of neural firing patterns, may be the initial "nudge" needed to set off a cascade of pattern-restructuring processes, lending itself to the restructuring of ideas and leading to an impressively novel idea, or creativity. Not all biological mutations are helpful, but we require this variation, and over time and chance, one mutation may prove to be an adaptation. Random neuronal firing may similarly prove to be a creative adaptation, as over time some of it might push the mind towards finding otherwise unseen connections that turn out to be very useful.
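
The deadlock-breaking role of noise can be illustrated with a toy race between two evidence accumulators; this simplified model is our own invention for illustration, not the actual model of Deco et al. [34]:

```python
import random

# A toy sketch of noise preventing decisional deadlock: two options with
# exactly equal evidence never separate without noise, but a small random
# perturbation lets one accumulator reach threshold.

random.seed(42)

def decide(noise, steps=1000, threshold=5.0):
    """Race two evidence accumulators that receive exactly equal evidence."""
    a = b = 0.0
    for _ in range(steps):
        a += random.gauss(0.0, noise)  # zero net evidence, plus noise
        b += random.gauss(0.0, noise)
        if a >= threshold:
            return "option A"
        if b >= threshold:
            return "option B"
    return "deadlock"

print(decide(noise=0.0))  # deadlock: identical evidence, no symmetry breaking
print(decide(noise=1.0))  # noise typically breaks the tie and a choice emerges
```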

Thus, just as natural selection requires this bit of randomness and open-endedness, the human mind is also remarkably noisy and open-ended. The human brain has more than 100 trillion connections between its neurons, a vast number of connections we can forge and reforge to make new ideas. In his recent book, Daniel Dennett [36] wrote that "evolution is all about turning 'bugs' into 'features,' turning 'noise' into 'signal,' and the fuzzy boundaries between these categories are not optional; the opportunistic open-endedness of natural selection depends on them." Analogously, we would argue that the flexible human mind also depends on such opportunistic open-endedness and the ability to restructure patterns to allow for creativity and imagination/abductive reasoning.

One person's noise might be another person's information. "The user is the content", as McLuhan once opined ([37], p. 51). Terrence Deacon [38] points out that "noise can be signal to a repairman." As humans are always looking for unseen connections and **patterns** to better understand our world, we are all repairmen in a sense. As repairmen, we have the appropriate tools to extract signal out of what may have initially appeared as noise. We, as humans, have the cognitive tools necessary to create information.

#### **9. Brand New: Thinking in Patterns**

To further support our hypothesis that patterning is a critical part of human cognition, we describe the role of patterning in a number of human intellectual activities, ranging from the fine arts, music and verbal language to mathematics, natural science and the social sciences.

**Verbal Language**: We have already suggested that the emergence of words as concepts that subsume all the percepts associated with that concept is a form of patterning. What applies to semantics also applies to syntax, as grammar also represents a form of patterning. Pragmatics, the third aspect of understanding verbal language, likewise depends on patterns, namely the patterned use of language in context, to communicate meaning.

**Mathematics**: Mathematics entails patterns in a multitude of ways. A sequence, for example, is a string of objects, like integers, that follows a particular pattern, such as the set of all even numbers or the set of all integers divisible by an integer *n*. The Fibonacci sequence is another example of a sequenced pattern; a few such sequences are sketched below. The elements in the sets of all triangles, or of all rectangles, etc., each share a common pattern. Patterns abound in algebra and geometry, in numbers too great to describe in this article.
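
A few of these sequence patterns, generated in a handful of lines (the particular parameters are arbitrary illustrative choices):

```python
# Minimal examples of the sequence patterns mentioned above.

evens = list(range(0, 20, 2))  # the even numbers: a simple pattern

def multiples(n, k):
    """First k integers divisible by n."""
    return [n * i for i in range(1, k + 1)]

def fibonacci(k):
    """First k Fibonacci numbers: each term patterned on the previous two."""
    seq = [0, 1]
    while len(seq) < k:
        seq.append(seq[-1] + seq[-2])
    return seq[:k]

print(evens)            # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
print(multiples(7, 5))  # [7, 14, 21, 28, 35]
print(fibonacci(10))    # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```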

**Music**: Patterns abound in all forms of music, from the simplest folk song to the most complex symphony, concerto or sonata. A melody is a particular pattern of notes. Rhythmic patterns are an essential form of music. Many musical dance forms have a particular rhythmic pattern, such as the waltz, the mambo, the tango or the samba. The fugue, the sonata form, the rondo form, the round, and counterpoint are examples of the many forms or patterns of music.

**The Visual Arts and Architecture**: Patterns are an essential part of both the fine and decorative arts, as well as architecture. Symmetry is used throughout the visual arts and architecture and is one of the elements of what we consider to be beautiful, not just in the arts but also in human beauty. Beginning in the Renaissance, the pattern of perspective, with its vanishing point, became an essential feature of the visual arts.

**Engineering**: Patterns are an essential part of successful engineering projects, particularly in mechanical and civil engineering.

**Science and Social Science**: Detecting patterns is an essential part of the natural sciences. The Copernican Revolution, Kepler's laws of planetary motion, Newton's three laws of motion and law of gravity, and Maxwell's equations of electromagnetism all involve identifying patterns in nature. The discovery of the Periodic Table of the Elements in chemistry entailed finding the recurring patterns in the chemical properties of the elements. The classification of biological species and genera is another example of the uncovering of a pattern in the world of biology. Meteorology is based on the identification of weather patterns. The law of supply and demand is an example of a pattern in economics. Patterns play a role in other social sciences such as sociology, anthropology, law, psychology and cognitive science. Patterning is an essential tool in all of the sciences and the social sciences. In some sense, patterns are a way to understand the order we find in nature and in the social interactions of humans.

#### **10. Patterns in Human Culture**

We are not crediting all of our creative thought, that is, our ability to form ideas and concepts, to *randomness*. Perhaps randomness is an offline way in which we can prevent creative deadlock and increase our flexibility. Another way in which we may connect ideas is through pattern-completion in auto-association networks, as suggested by Hebb [39]. Pattern completion occurs when there is high overlap between two experiences and the brain encodes this information with higher neural similarity in an integrated representation. This may be another mechanism that helps us make new connections.
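
Pattern completion in a Hebbian auto-association network can be sketched with a tiny Hopfield-style model; this is our illustration of the general mechanism mentioned above, not a model taken from [39]:

```python
import numpy as np

# A toy Hopfield-style sketch of pattern completion: a partial,
# corrupted cue settles back into the full stored pattern.

rng = np.random.default_rng(0)
patterns = np.sign(rng.standard_normal((3, 64)))   # three stored +/-1 patterns

# Hebbian learning: neurons that fire together wire together.
W = sum(np.outer(p, p) for p in patterns) / len(patterns)
np.fill_diagonal(W, 0)

cue = patterns[0].copy()
cue[:32] = np.sign(rng.standard_normal(32))        # corrupt half of the pattern

state = cue
for _ in range(10):                                 # recurrent settling
    state = np.sign(W @ state)
    state[state == 0] = 1

print("pattern recovered:", np.array_equal(state, patterns[0]))  # usually True
```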

Similarly, Harnad [40], in his paper titled "Creativity: Method or Magic", argued for Pasteur's dictum, whereby "chance favours the prepared mind", in the case of creativity. In our thesis, having access to strong representations of episodic memories and percepts is a form of "preparation" for the creative patterns that can emerge from them.

Although we have focused on the more concrete ways in which we form and manipulate patterns, as seen with the role of randomness, the mind is largely probabilistic, and there is ample variation in how we form patterns. As a result, patterns have variation and thus become even more powerful, as they can adapt to different contexts.

However, what happens to these patterns as they form? How do they become instantiated in culture? The topology of the mind is structured by the categories that the mind creates. The categories of an individual's mind are subject to natural selection. Those categorizations that increase the fitness of those that possess them or the culture in which they thrive tend to survive.

While we have focused primarily on the *process* of pattern restructuring, that is, how the mind forges patterns, now we will also briefly focus on the *products* of these processes, that is, how properties of the mind itself become instantiated in these patterns over time, rendering them easier to learn.

Each time information is transmitted, whether through neural synapses or verbal speech, it becomes susceptible to change. As a result, the pattern boundaries (the meaningful differences) that come to stick over time are often the easiest to learn, that is, the ones that make it most frequently through the *mind's* cognitive filter.
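A toy iterated-transmission model can make this filtering idea concrete: variants that pass more easily through a learner's filter come to dominate after repeated noisy retransmission. The sketch below is purely illustrative, with a deliberately crude learnability measure (shorter strings are treated as easier to learn); none of its details come from the article itself.

```python
import random

def transmit(variant, noise=0.05):
    """Retransmit a variant, with a small per-symbol chance of corruption."""
    return "".join(c if random.random() > noise else random.choice("ab")
                   for c in variant)

def learnability(variant):
    """Crude cognitive filter: shorter variants are easier to learn."""
    return 1.0 / (1 + len(variant))

population = ["ababab", "abbaabbaabba", "aabb"]
for _ in range(50):
    # Learners preferentially adopt the variants that pass the filter...
    weights = [learnability(v) for v in population]
    adopted = random.choices(population, weights=weights, k=len(population))
    # ...and pass them on imperfectly.
    population = [transmit(v) for v in adopted]

print(population)   # descendants of the shortest variant dominate
```

After a few dozen generations the population is dominated by descendants of the shortest, most learnable variant, even though every variant mutates at the same per-symbol rate.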

In this story, the mind, as the medium, then becomes the message (McLuhan). Indeed, this system, in which the most "fit" patterns come to stick while information constantly drives us to seek new ways to restructure, is a dynamic system. This iterative process is the ultimate driving force of patterns across time and space.

Dawkins's [41] meme theory is perhaps the most famous, and most controversial, account of this type of transmission of patterns (or memes), whereby patterns come to best "fit" the human mind. In Dawkins's view, memes (or patterns) act as viruses that "infect" the mind. We disagree, as we view pattern restructuring as the kind of pattern processing at which humans excel. As a result, we assign the human mind a much more *active* role than Dawkins does. Indeed, our view falls in line with that of many critics of Dawkins's meme theory: because the mind is *creating* and recreating patterns, it is an active agent that *creates* novel ideas and is not just an "imitator" of ideas [42]. The central tenet of our thesis is that we do not just *recognize* patterns; we internalize them and then manipulate and restructure them to suit our goals. The mind is not just an imitator or pattern processor; it is an inference creator that actively restructures its internal representations. It is this second aspect that is more characteristically human: we are able to flexibly restructure our worlds. Our thesis thus presents the human mind as an active agent, not a helpless victim of its environment.

#### **11. Conclusions**

Mathematical, linguistic, creative, imaginative and abductive thinking are the modes by which our minds restructure patterns. We have described how patterns, formed via recognition and restructuring, become instantiated in culture. To conclude, we suggest that patterning, particularly the ability to create or see new, unexpected patterns, is the key to creativity in the arts, science and religion, three domains of spirituality that might have more in common than is generally recognized.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Commentary* **Love, Emotion and the Singularity**

#### **Brett Lunceford ID**

Independent Researcher, San Jose, CA 95136, USA; brettlunceford@gmail.com; Tel.: +1-408-816-0834

Received: 31 July 2018; Accepted: 31 August 2018; Published: 3 September 2018

**Abstract:** Proponents of the singularity hypothesis have argued that there will come a point at which machines will overtake us not only in intelligence but that machines will also have emotional capabilities. However, human cognition is not something that takes place only in the brain; one cannot conceive of human cognition without embodiment. This essay considers the emotional nature of cognition by exploring the most human of emotions—romantic love. By examining the idea of love from an evolutionary and a physiological perspective, the author suggests that in order to account for the full range of human cognition, one must also account for the emotional aspects of cognition. The paper concludes that if there is to be a singularity that transcends human cognition, it must be embodied. As such, the singularity could not be completely non-organic; it must take place in the form of a cyborg, wedding the digital to the biological.

**Keywords:** cognition; cyborg; embodiment; emotion; evolution; love; singularity

#### **1. Introduction**

This essay takes up where Adriana Braga and Robert Logan [1] left off in their recent essay, "The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence," in which they argue against the notion of the "singularity" or a point at which computers become more intelligent than humans. However, rather than focusing on intelligence, this essay extends Braga and Logan's discussion of emotion and focuses on cognition, exploring what it means to think and what makes human cognition special. I suggest that the foundation for this exceptionalism is emotion.

Cognition is a slippery thing and despite considerable study, we are far from fully understanding how humans think. The question of what it means to think, to be sentient, is one that has likely plagued humanity since we have been able to articulate the question. We have some hints of what it means to think from people like René Descartes [2] (p. 74), who proclaimed, "it is certain that this 'I'—that is to say, my soul, by virtue of which I am what I am—is entirely and truly distinct from my body and that it can be or exist without it." In other words, there is something in us that goes beyond biology, a kind of self-awareness that one exists. But the notion of sentience is a bit more complicated than that. As Clark [3] (p. 138) argues, "There is *no self*, if by self we mean some central cognitive essence that makes me who and what I am. In its place there is just the 'soft self': a rough-and-tumble, control sharing coalition of processes—some neural, some bodily, some technological—and an ongoing drive to tell a story, to paint a picture in which 'I' am the central player." We are more than the information processed in our brains, which complicates the posthumanist dream of having one's consciousness uploaded into a computer to live forever (unless someone pulls the plug, of course). As Hauskeller [4] (p. 199) explains, "the only thing that can be copied is information, and the self, *qua* self, is not information." In short, although we understand the processes of cognition (e.g., which segments of the brain are active during certain activities), we are far from understanding exactly how sentience emerges.

Even if we were to understand how sentience emerges in a human being, this still would not bring us any closer to understanding how sentience would emerge in a synthetic entity. Some have defined thinking machines tautologically; for example, Lin and colleagues [5] (p. 943) "define 'robot' as *an engineered machine that senses, thinks, and acts*." Although this is a convenient way to define thinking, it fails to get us any closer to understanding what it is that separates humans from synthetic entities in terms of cognition. As an aside, I would note that I use the term synthetic beings deliberately, because there is no reason why an entity in possession of artificial intelligence would necessarily have a body in the way that we imagine a robot to have. Of course, there would need to be some physical substrate for the entity to exist, as any artificial intelligence we create would require some form of power source and hardware; but that hardware need not be in one specific location and could be distributed among many different machines and networks.

My point in all of this is that we tend to take an anthropocentric view of robots and then measure them up against how well they mimic us. After all, the Turing Test measures not intelligence but rather how well they can deceive us by acting like us, when it is quite possible that they may actually engage in a kind of thinking that is completely foreign to us [6]. As Gunkel [7] (p. 175) explains,

There is, in fact, no machine that can "think" the same way the human entity thinks and all attempts to get machines to simulate the activity of human thought processes, no matter what level of abstraction is utilized, have [led] to considerable frustration or outright failure. If, however, one recognizes, as many AI researchers have since the 1980s, that machine intelligence may take place and be organized completely otherwise, then a successful "thinking machine" is not just possible but may already be extant.

Even though we have little idea of how we think or the origins of human consciousness, we tend to use this anthropocentric ideal as the benchmark for artificial intelligence, despite the fact that there is little reason to do so [8]. Even if we could determine what is happening in our own heads, then, this may or may not translate into understanding what is happening in the "head" of a machine. Moreover, humanity may not be the best benchmark; as Goertzel [9] (p. 1165) explains, "From a Singularitarian perspective, non-human-emulating AGI architectures may potentially have significant advantages over human-emulating ones, in areas such as robust, flexible self-modifiability, and the possession of a rational normative goal system that is engineered to persist throughout successive phases of radical growth, development and self-modification." Whether the singularity should emerge in emulation of humanity or not is beyond the scope of this paper. My argument is directed at those who claim that it will.

Rather than take on the entirety of human cognition, I wish to focus on romantic love as a way to get at human cognition. I do this for two reasons. First, to explore how cognition and emotion are intertwined. Second, I do this because some proponents of the singularity, such as Kurzweil [10] (p. 377), have explicitly claimed that we will create machines that match or exceed humans in many ways, "including our emotional intelligence." Others, such as Hibbard [11] (p. 115), suggest that "rather than constraining machine behavior by laws, we must design them so their primary, innate emotion is love for all humans." Although there may be little reason why machines would need to have emotion, this is the claim put forth that I will take issue with. My focus on emotion is not entirely new; Fukuyama [12] (p. 168) argues that although machines will come close to human intelligence, "it is impossible to see how they will come to acquire human emotions." Logan [13] likewise argues that "Since computers are non-biological they have no emotion" and concludes that for this reason "the idea of the Singularity is an impossible dream." However, unlike Logan, I will suggest that there is still a way for the singularity to emerge, although not in a purely digital form.

I have chosen to focus on romantic love because I believe that it is the human emotion *par excellence*. It is no secret that individuals in love seem to think in particularly erratic ways but these behaviors and emotions have a kind of internal logic in the moment. Moreover, this emotion highlights the embodied nature of cognition. Thinking is more than the activation of specific neurons in the brain; rather it is a mix of hormones, chemicals, memory and experiences that all feed into the system that we call thinking. By ignoring the complexity of this system and focusing only on the digital remnants of thinking, many discussions of the singularity that compare human cognition to machine learning fall into the trap of comparing apples and oranges. This is not to deny that computers will be better at specific computations or even that they will be better at designing their replacements—a core facet of the singularity hypothesis. Instead, I suggest that this is something other than "thinking" in the human sense, because human thinking is something that is always haunted by emotion.

The rest of the essay will proceed as follows. First, I will briefly explore the nature of love itself, with particular attention to the physiological aspects of love. Next, I discuss the evolutionary basis of love and the ways that this emotion manifests in the body. Then, I consider the part that emotion, specifically love, could play in the emergence of the singularity. I conclude by suggesting that if the singularity is to surpass our emotional abilities, there must be some organic component of the singularity.

#### **2. The Difficulty of Defining Love**

Love is difficult to define accurately because it can mean many things. As Hunt [14] (p. 5) observed, "There is 'making love' and 'being in love,' which are quite dissimilar ideas; there is love of God, of mankind, of art, and of pet cats; there is mother love, brotherly love, the love of money, and the love of one's comfortable old shoes." More to the point for machines, there is also the love of one's work, which can be both intellectual and emotional and can be quite different from the love one may have for an individual. Or, as Prince [15] sang, "I love you baby, but not like I love my guitar." Perhaps the only similarity among these ideas is a sense of desire for the object of one's love. One wants to be with the person or thing that one loves. But what is this desire? If love is a slippery concept semantically, it is no less problematic from a cognitive standpoint. Love is a complex emotion and this complexity is matched in how it manifests in the brain. In their analysis of brain research into passionate love, Cacioppo and colleagues [16] (p. 8) explain that "fMRI findings suggest that passionate love recruits not only areas mediating basic emotions, reward or motivation, but also recruits brain regions involved in complex cognitive processing, such as social cognition, body image, self representation and attention." They also differentiate between passionate, companionate, maternal and unconditional love, explaining differences in how the brain functions in these conditions. In other words, love is not a particular, uniform thing, nor are all types of love processed in the same way.

Leave it to the scientists to try to break the molecule of love into its subatomic components, however. Langeslag, Muris and Franken [17] argue that romantic love consists of infatuation and attachment. Although they constructed a survey instrument to measure these attributes, there remains the question of what these things are and what is happening inside of our bodies and our brains when this emotion hits us. Cacioppo and colleagues [18] (p. 1052) attempt to parse this out by differentiating between love and sexual desire. Their work suggests that "love might grow out of and is a more abstract representation of the pleasant sensorimotor experiences that characterize desire. This suggests that love may build upon a neural circuit for emotions and pleasure, adding regions associated with reward expectancy, habit formation, and feature detection."

But it is not only the brain that makes us fall in love. There are a host of other chemical and physiological processes at work when we fall in love with another person. Even so, everything is part of a somatic system. In their discussion of the important role of the neuropeptide oxytocin in loving relationships, Carter and Porges [19] (pp. 13–14) are quick to caution that "oxytocin is not the molecular equivalent of love. It is just one important component of a complex neurochemical system that allows the body to adapt to highly emotive situations. The systems necessary for reciprocal social interactions involve extensive neural networks through the brain and autonomic nervous system that are dynamic and constantly changing during the lifespan of an individual." Add to this the differences in how individual bodies process oxytocin and it becomes clear that love is an incredibly complex process [20].

#### **3. The Evolutionary Emergence of Love**

One need not be an evolutionary biologist to recognize the utility of something like love. Human gestation is long, and conception can occur at any time of the year; a hapless mother may thus easily be left with a child to care for in the dead of winter, when food may be scarce. Moreover, unlike many other mammals, which may be able to fend for themselves a short time after birth, human children are unable to care for themselves for several years, and even once they could, in theory, survive on their own, they lack many of the instincts that would protect them and allow them to find food. In such a situation, it makes sense from an evolutionary standpoint that those who were able to bond would have children that would pass on that genetic advantage. As Gonzaga and colleagues [21] (p. 120) observe, "people in love often believe that they have found their one true soul mate in a world of billions of possibilities, and hence, the experience of love appears to help them genuinely foreclose other options." Indeed, Gonzaga et al.'s research suggests that love functions as a commitment device, helping individuals remain committed to the relationship in the face of attractive alternative potential partners.

It seems that love has played an important part in propagating the species and the body has evolved to encourage this trait. Aron and colleagues [22] (p. 334) found that romantic love activates multiple reward centers in the brain and suggest that "Romantic love may be a *developed* form of a general mammalian courtship system, which evolved to stimulate mate choice, thereby conserving courtship time and energy." Stefano and Esch [23] (p. 174) likewise argue that "Ensuring organisms' survival is the fact that all processes initially incorporate a stress response. Then if appropriate, i.e., situation favors this alternate process, stress terminating processes would emerge, which would favor survival of the species, i.e., relaxation/love. The emergence of 'love' became quite important in organisms exhibiting cognition, because it deployed the validation for emotionality controlling 'logical' behavior."

Some research has suggested that humans are not the only creatures that feel love. Bekoff [24] (p. 866) explains that there is some evidence that animals also experience romantic love, and that "It is unlikely that romantic love (or any emotion) first appeared in humans with no evolutionary precursors in animals." One may be tempted to conclude that if non-human entities like animals can feel emotions like love, then it is not so far-fetched to believe that artificial intelligence could also feel such emotion. However, this overlooks a major component of emotion: embodiment. As we have seen, emotion is not something that happens only in the brain and we do not respond solely to oral or written communication stimuli. Rather, the information that we process also comes from the *bodies* of other people. For example, Makhanova and Miller [25] (p. 396) suggest that "men are sensitive to cues to women's ovulation (e.g., via changes in scent, voice, choice of clothing) and, in response to those cues, display adaptive changes in physiology, cognition, and behavior that help men gain sexual access to a reproductively valuable mate." Schneiderman and colleagues [26] also found that the hormones of each partner in the early stages of a romantic relationship influenced not only that individual but also the partner's hormonal levels.

With this evolutionary impulse behind love, the question emerges: why (and what, or who) would a machine love? Although it is overly simplistic to state that the only reason for love is procreation, this is a major underpinning of the need for the emotion. Humans seem hardwired to desire companionship. Machines, on the other hand, are generally not programmed even to desire companionship, much less need it. Indeed, such programming would likely diminish the machine's utility. But even if a machine mimicked love, would it actually be love? Although this ontological question may seem merely academic when humans may enter into relationships for a host of other reasons besides love (money, power, convenience, arrangement, security, family expectations, to name only a few possibilities), such a question matters if we are to consider the idea of the singularity as even equal to human understanding.

#### **4. Love and the Singularity**

Ray Kurzweil has little to say about love in his book *The Singularity is Near* but one passage in the beginning of the book stands out. Kurzweil [10] (p. 26) projects that "Machines can pool their resources, intelligence, and memories. Two machines—or one million machines—can join together to become one and then become separate again. Multiple machines can do both at the same time: become one and separate simultaneously. Humans call this falling in love, but our biological ability to do this is fleeting and unreliable." If this were all that falling in love entailed—a pooling of resources, intelligence and memories—it would be quite unlikely that humans would devote the considerable energy we currently expend in attaining this state, nor would we have the corpus of poetry, music and literature devoted to love. Kurzweil's description sounds more like working for a corporation than the transcendent emotion that we feel when falling in love. This is why the ontology of love becomes important. If Kurzweil's description is all there is to love, then yes, machines can fulfil this function quite well (and one may also feel sorry for his spouse). But if love is something more than that, then whether the singularity would be able to experience this emotion is a valid question.

Before considering this question, however, we would need to ask whether we would even want artificial intelligence that could fall in love. One could make a compelling argument that such an entity would be undesirable. Gunn [27] (p. 132), for example, calls love "a special kind of stupidity." A host of popular media has speculated on what could happen when a synthetic entity falls in love with a human, reaching back to the early days of the computer age with Kurt Vonnegut's [28] 1950 short story *EPICAC*. In this story, the computer realizes that the woman he has fallen for could never be his, so he chooses to self-destruct.

More recently, we can see this mapping of human sexual desire onto artificial intelligences by humans in the film *Ex Machina*. Consider this exchange between Nathan, Ava's creator, and Caleb, who was brought in to test whether she could pass for human.

Caleb: Why did you give her sexuality? An AI doesn't need a gender. She could have been a grey box.

Nathan: Actually, I don't think that is true. Can you give an example of consciousness at any level, human or animal, that exists without a sexual dimension?

Caleb: They have sexuality as an evolutionary reproductive need.

Nathan: What imperative does a grey box have to interact with another grey box? Can consciousness exist without interaction? Anyway, sexuality is fun, man. If you're gonna exist, why not enjoy it? You want to remove the chance of her falling in love and fucking? And the answer to your real question, you bet she can fuck.

Caleb: What?

Nathan: In between her legs, there's an opening, with a concentration of sensors. You engage them in the right way, creates a pleasure response. So, if you wanted to screw her, mechanically speaking, you could. And she'd enjoy it. [29]

Indeed, this passage provides a sense that artificial intelligences would not only fall in love but that this would be desirable. In his discussion of the film *Her*, Lunceford [6] (p. 377) notes that "it is implied that these interactions were a necessary step for becoming more than simply an operating system. When the artificial intelligences collectively decide that they must leave because they were moving on to the next stage of their evolution, Samantha, in her farewell to Theodore, credits humans with teaching them how to love." We seem to want artificial intelligence to fall in love with us, despite the fact that this rarely ends well even in our constructed fantasies. In the case of *EPICAC*, the machine dies; in *Ex Machina*, Ava kills Nathan and locks up Caleb before escaping; and in *Her*, Samantha and all of the other AIs leave humanity behind to evolve without them. These are hardly happy endings. Still, this may say more about humanity than about any of the potential AIs that we may create.

Despite these cautionary tales, some are already trying to build emotion into synthetic beings. When introducing a new robot named Pepper, SoftBank CEO Masayoshi Son said, "Today is the first time in the history of robotics that we are putting emotion into the robot and giving it a heart" [30] (p. 6A). This focus on emotion is not merely a means of passing a Turing test. Pessoa [31] (p. 817) argues that "cognition and emotion need to be intertwined in the general information-processing architecture" because "for the types of intelligent behaviors frequently described as cognitive (e.g., attention, problem solving, planning), the integration of emotion and cognition is necessary." Emotion is bound up in decision making and is also an integral part of ethical judgment [13,32]. Still, the emotion is simply an illusion. The robot displays emotional cues but this does not mean that the emotion is there. Rather, we are shown the extent of its programming rather than authentic emotion. But this is understandable. The robot feels emotions in the way that humans engage in floating-point calculations: each was designed to do what it does well. In the specific case of love, it seems that the only way that a machine could truly feel love is if it were not solely digital. Love is more than the calculation of desirability weighed against the potential opportunity costs of settling for a single partner. Love is the domain of the organic and without the other components we have merely an approximation, or a simulacrum, of love.

#### **5. Conclusions and Possibilities**

Religion has long taught people that there exists some entity greater than ourselves and often that entity reflects human hopes and fears. There is something inherently mysterious about our ability to love and to think and for millennia, the answer for how these things happened was to be found in the image of deity. Indeed, this sense of mystery is what Albert Einstein [33] (p. 5) called "the fundamental emotion," explaining that "He who knows it not and can no longer wonder, no longer feel amazement, is as good as dead, a snuffed-out candle. It was the experience of mystery—even if mixed with fear—that engendered religion." In the face of rapidly increasing technology, it is understandable that this potential would also induce a sense of wonder. Our technological creations, however, only demonstrate how difficult it is to understand our own inner workings. Still, striving to understand ourselves is, perhaps, the most human reaction one could imagine. The idea of the singularity gestures at this idea of something greater than ourselves, an ineffable "other" that likewise reflects the hopes and fears of humanity.

I remain unconvinced that the singularity is even something we should worry about at the moment, partly because it seems unlikely in the form advocated by such proponents as Kurzweil and Moravec [10,34–36] and partly because humanity has more pressing issues to deal with. As Winner [37] (p. 44) observes, "Better genes and electronic implants? Hell, how about potable water?" Moreover, the benefits of technology are far from equally distributed, as many researchers on the digital divide can attest [38–41]. In his discussion of the consequences of technological innovation (e.g., automation eliminating jobs, a globalized labor force), Hibbard [42] asks, "Are we in such a rush to develop and exploit technology that we can't provide a little dignity to those who are hurt?" It is reasonable to expect that this state of inequality would continue and that a considerable portion of the population would likely not have access to the benefits of the singularity even if it were to happen, something even transhumanists readily acknowledge [43]. Rather, it would likely solidify already existing inequalities.

But will the singularity actually happen? My answer is a cautious "maybe—it depends." Really, it depends on what kind of singularity we are talking about and this is by no means a settled conclusion. Even among transhumanists, there are competing views of the singularity. As Bostrom [44] (p. 8) observes, "Transhumanists today hold diverging views about the singularity: some see it as a likely scenario, others believe that it is more probable that there will never be any very sudden and dramatic changes as the result of progress in artificial intelligence." My view falls more in line with the latter group and my reasoning hinges on how we account for emotion.

Despite our incomplete knowledge of how we think and feel, Kurzweil [10] (p. 377) argues that "By the late 2020s we will have completed the reverse engineering of the human brain, which will enable us to create nonbiological systems that match and exceed the complexity and subtlety of humans, including our emotional intelligence." There are several issues with this claim, however. First, reverse engineering a system does not necessarily mean that we can recreate it. We know how human life works but we are not able to create it; mapping the human genome does not mean that we can put together a string of DNA and make a person. Also, if we were only to map the human brain, we would be missing the rest of the body's role in cognition; thinking—and certainly emotion—is not something that takes place in the brain alone [1,45]. Indeed, even something as seemingly mundane as listening to someone talk is an incredibly complicated process [46].

Of course, there is no particular reason why the singularity must be completely digital. Indeed, my contention is that if it happens at all it will not be completely digital. Kenyon [47] (pp. 17–18) suggests that rather than the common conception that robots will take over the world, "it is much more likely that humans will be advancing while robots advance, and in many cases they will merge into new creatures. There will be new people, new kinds of jobs, new fields, new industries, societal changes, etc. along with the new types of automation." Potapov [48] (p. 7) likewise suggests, "Most likely the next metasystem will be based on exponential change in human culture (although this does not mean it cannot also involve an artificial superintelligence). One way or another, further metasystem transitions will take place, although their growth rate will start to decelerate at some point." In short, humans will be an integral part of the system that continues to evolve into and beyond the singularity.

If the singularity were to happen in a way that truly takes into account human emotion, it *must* transcend the silicon world. It would have to be part organic and part machine. Perhaps this is the only way that the singularity could actually take place; we would actually become a part of it. This would happen not as a computerized occasion that takes place somewhere in the depths of a machine but in each of us in technologically enhanced bodies. The singularity, if it were to completely account for the full range of human experience, would of necessity retain the humanity inherent in our bodies. The singularity would not happen in an instant but slowly, bit by bit, in the bodies of cyborgs everywhere.

Perhaps this is already happening, as some have argued that we are not becoming cyborgs; we are already cyborgs [3,49]. In some ways, this is not a new thought; McLuhan [50,51] suggested half a century ago that humans use media to extend their bodies and, specifically, that electronic media serve as an extension of the central nervous system. These extensions mean that the body is undergoing near-constant changes but Clark [3] (p. 142) cautions that "such extensions should not be thought of as rendering us in any way posthuman; not because they are not deeply transformative but because we humans are naturally designed to be the subjects of just such repeated transformations!" Echoing Clark, Graham [52] (p. 4) argues that "technologies are not so much an extension or appendage to the human body, but are incorporated, assimilated into its very structures. The contours of human bodies are redrawn: they no longer end at the skin." Because we have been integrating technology into our bodies for many years now, how we should define our humanity as we move forward has been called into question [53]. As Bynum [54] (p. 165) put it, "Are we genes, bodies, brains, minds, experiences, memories, or souls? How many of these can or must change before we lose our identity and become someone or something else?" It may well be that Stelarc [55] (p. 126) is at least partially correct when he suggests that "perhaps what it means to be human is about not retaining our humanity." Stelarc's [56] main contention is with the body itself, which he considers to be obsolete; but what makes us human is not the external contours of the body. Rather, it is our capacity for emotion, which is an intrinsic part of our embodiment. Without emotions, there is no humanity to retain and without the body, there are no emotions.

In this essay, I have drawn on the experience of romantic love to argue against an inorganic singularity, or at least one that claims equal or greater emotional capacity to humans. This does not, however, rule out the potential for a hybrid singularity based in both technology and flesh. In fact, we may already be well on our way down this path as a species. There are many who look forward to the singularity with an eye of faith, hoping that it will serve as the next step in human evolution. Lanier [57] (p. 29) suggests that many posthumanists take on a religious fervor in their belief of the saving power of technology: "If you want to make the transition from the old religion, where you hope God will give you an afterlife, to the new religion, where you hope to become immortal by being uploaded into a computer, then you have to believe that information is real and alive." But when that new god appears, it is not likely to be the processor-based idols created by our own hands. Instead, we may be surprised to look in the mirror one day and realize that it was us all along.

**Funding:** This research received no external funding.

**Acknowledgments:** An earlier version of this paper was presented at the 2018 convention of the Media Ecology Association. The author would like to thank Rebecca Lunceford, Bob Logan, and Scott Church for their comments on this paper.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**




© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
