*Article* **The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence**

#### **Adriana Braga <sup>1</sup> and Robert K. Logan <sup>2,\*</sup>**


Received: 31 October 2017; Accepted: 22 November 2017; Published: 27 November 2017

**Abstract:** Making use of the techniques of media ecology, we argue that the premise of the technological Singularity, based on the notion that computers will one day be smarter than their human creators, is false. We also analyze the comments of other critics of the Singularity, as well as those of supporters of this notion. The notion of intelligence that advocates of the technological singularity promote does not take into account the full dimension of human intelligence. They treat artificial intelligence as a figure without a ground. Human intelligence, as we will show, is not based solely on logical operations and computation, but also includes a long list of other characteristics that are unique to humans, which is the ground that supporters of the Singularity ignore. The list includes curiosity, imagination, intuition, emotions, passion, desires, pleasure, aesthetics, joy, purpose, objectives, goals, telos, values, morality, experience, wisdom, judgment, and even humor.

**Keywords:** technological Singularity; intelligence; emotion; artificial general intelligence; artificial intelligence; computer; logic; figure/ground

*The true sign of intelligence is not knowledge but imagination.* —Albert Einstein

*Levels of consciousness. Knowing what one knows. Language is necessary for knowing what one knows as one talks to oneself. But computers have no need or desire to communicate with others and hence never created language, and without language one cannot talk to oneself, and hence computers will never be conscious* [1].

#### **1. Introduction**

The notion of the technological singularity or the idea that computers will one day be more intelligent than their human creators has received a lot of attention in recent years. A number of scholars have argued both for and against the idea of a technological singularity using a variety of different arguments. A sample of these opinions for and against the idea of the technological singularity can be found in two collections of short essays, entitled *Special Report: The Singularity* [2] and *What to Think About Machines that Think* [3], as well as the critical writings of Hubert Dreyfus [4–7]. We will analyze their different positions and make use of their arguments, which we will integrate into our own critiques of both the idea that computers can think, and the idea of the Singularity, or the idea that machines through the use of Artificial General Intelligence (AGI, sometimes referred to simply as AI) can become more intelligent than their human creators. We intend to show that, despite the usefulness of artificial intelligence, the Singularity is an overextension of AI and that no computer can ever duplicate the intelligence of a human being.

We will argue that the emperor of AI is quite naked by exploring the many dimensions of human intelligence that involve characteristics that we believe cannot be duplicated by silicon-based forms of intelligence, because machines lack a number of essential properties that only a flesh and blood living organism, especially a human, can possess. In short, we believe that artificial intelligence (AI) or its stronger version artificial general intelligence (AGI) can never rise to the level of human intelligence because computers are not capable of many of the essential characteristics of human intelligence, despite their ability to outperform us as far as logic and computation are concerned. As Einstein once remarked, "Logic will get you from A to B. Imagination will take you everywhere".

What motivated us to write this essay is our fear that some who argue for the technological singularity might in fact convince many others to lower the threshold as to what constitutes human intelligence so that it meets the level of machine intelligence, and thus devalue those aspects of human intelligence that we (the authors) hold dear such as imagination, aesthetics, altruism, creativity, and wisdom.

To be a fully realized intelligent human being it is necessary, in our opinion, to have these characteristics. We will suggest that these many aspects of the human experience that are associated uniquely with our species Homo sapiens (wise humans) do not have analogues in the world of machine intelligence, and that, as a result, an artificially intelligent machine-based system that is more intelligent than a human is not possible, and the notion of the technological singularity is basically science fiction. We recognize that the attributes that we listed above that constitute what we consider to be intelligence are arrived at subjectively. Perhaps we are defining what we believe is a humane form of intelligence, as has been suggested kindly by one of the reviewers of an earlier version of this essay. But that is one of the objectives of this essay: to make sure that, in the rush to gain the benefits of AI, we as a society do not degrade the humaneness of what is considered intelligence. Human intelligence and machine intelligence are of a completely different nature, so to claim that one is greater than the other is like comparing the proverbial apples and oranges. They are different, they are both valuable, and one should not be mistaken for the other.

There is a subjective, non-rational (or perhaps extra-rational) aspect of human intelligence, which a computer can never duplicate. We do not want intelligence defined by Singularitarians, who are primarily AI specialists and, as a result, are motivated to exaggerate their field of research and their accomplishments, as is the case with all specialists. Engineers should not be defining intelligence. Consider the confusion engineers created by defining Shannon's measure of signal transmission as information (see Braga and Logan [8]).

To critique the idea of the Singularity we will make use of the ideas of Terrence Deacon [9], as developed in his study *Incomplete Nature: How Mind Emerged from Matter*. Deacon's basic idea is that for an entity to have sentience or intelligence it must also have a sense of self [9] pp. 463–484. In his study, Deacon [9] p. 524 defines information "as about something for something toward some end". As a computer or an AI device has no sense of self (i.e., no one is home), it has no information as defined by Deacon. The AI device only has Shannon information, which has no meaning for itself, i.e., the computer is not aware of what it knows as it deals with one bit of data at a time. We will discover that many of the other critiques of the singularity that we will reference parallel our notion that a machine has no sense of self, no objectives or ends for which it strives, and no values.
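For reference, it is worth recalling what Shannon's measure actually quantifies (the formula below is our addition, not the authors'): the entropy of a source depends only on the probabilities of its symbols, never on what, if anything, those symbols mean.

```latex
% Shannon's measure: entropy of a discrete source X, in bits per symbol.
% It depends only on the symbol probabilities p(x), never on meaning.
H(X) = -\sum_{x \in \mathcal{X}} p(x) \log_2 p(x)
```

Nothing in the formula refers to aboutness, purpose, or ends; that is precisely the gap that Deacon's definition of information is meant to mark.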

We will also make use of media ecology and the insights of Marshall McLuhan [10].


Given that the medium is the message, as McLuhan [10] proclaimed, we will examine the medium of the computer and its special use as an artificial intelligence (AI) device, with particular attention to strong AI or AGI. Our basic thesis is that computers, together with AI, are a form of technology and a medium that extends human intelligence, not a form of intelligence itself.

Our critique of AGI will make use of McLuhan's [10] technique of figure/ground analysis, which is at the heart of his iconic one-liner "the medium is the message" that first appeared in his book *Understanding Media*. The medium, independent of its content, has its own message. The meaning of the content of a medium, the figure, is affected by the ground in which it operates, the medium itself. The mistake that the advocates of AGI and the Singularity make is that they regard the computer as a figure without a ground. As McLuhan once pointed out, "logic is figure without ground" [11]. A computer is nothing more than a logic device and hence it is a figure without a ground. A human and the human's intelligence are each a figure with a ground, the ground of experience, emotions, imagination, purpose, and all of the other human characteristics that computers cannot possibly duplicate because they have no sense of self.

While we are critical of the notion of the Singularity, we are quite positive regarding the value of AI. We also believe, like Rushkoff [12] pp. 354–355, that networked computers will increase human intelligence by allowing humans to network and share their insights. We also concur with Benjamin Bratton [13]:

"AI can have an enormously positive role to play at an infrastructural level, not just the augmentation of an individual's intelligence, but the augmentation of systemic intelligence and the ability of infrastructural systems to automate what we call political decision or economic decision."

The pattern recognition capabilities of big data will assist humans to make new discoveries, but it will require human intelligence to guide the AI devices as to what patterns to look for. In short, AI guided by human intelligence will always be more productive than AGI working on its own. As pointed out by one of the reviewers of our essay, other forms of a technological singularity that do not try to duplicate human intelligence are altogether possible but they are not the subject of this essay.

#### **2. Origin of the Singularity Idea**

The following excerpt from the Wikipedia article *Technological Singularity*, accessed 15 September 2017, summarizes the rise of the concept in the early days of the computer age, beginning with a conversation between John von Neumann and Stan Ulam.

The first use of the term "singularity" in this context was made by Stanislaw Ulam in his 1958 obituary for John von Neumann, in which he mentioned a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue". The term was popularized by mathematician, computer scientist and science fiction author Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil cited von Neumann's use of the term in a foreword to von Neumann's classic *The Computer and the Brain* (https://en.wikipedia.org/wiki/Technological\_singularity).

#### *2.1. The Belief in the Singularity*

Let us examine the roots of the fantasy that humans will one day be obsolesced by computers. A number of advocates of strong AI or AGI have suggested that in the not too distant future (2045 according to Ray Kurzweil [14]) programmers will design a computer with an AI or AGI capability that will allow it to design a computer even more intelligent than itself. That computer will be able to do the same, and by a process of iteration a technological singularity point will be arrived at, where post-Singularity computers will be far more intelligent than us poor humans, who only have an intelligence designed by nature through natural selection and evolution. At this point, according to those that embrace this idea, the super-intelligent computers will take over and we humans will become their docile servants.

This scenario is not the product of science fiction writers. Rather, these are scenarios painted by information and computer scientists who are advocates of strong AI or AGI and the idea of the Technological Singularity.

The idea of the perfectibility of human intelligence can be traced back to the Enlightenment and the encyclopaedist Condorcet, who wrote:

Nature has set no term to the perfection of human faculties; that the perfectibility of man is truly indefinite; and that the progress of this perfectibility, from now onwards independent of any power that might wish to halt it, has no other limit than the duration of the globe upon which nature has cast us (www.historyguide.org/intellect/sketch.html).

It is worth noting that the idea of the Singularity is the product of techno-optimists who have made other predictions in the past that did not pan out. For example, the techno-optimists once predicted that, with the efficiency of computers, there would be a dramatic decrease in the number of hours we would need to work, which we have yet to see (http://paleofuture.gizmodo.com/the-late-great-american-promise-of-less-work-1561753129). "For instance, computer workstations have revolutionized office and retail ... Yet the dramatic increase in productivity has not led to a shorter work week or to a more relaxed work environment" [15] p. 74. Instead, it has led to companies using the efficiency of computers to reduce the number of workers needed to perform certain tasks.

Techno-optimists also predicted that offices would be practically paper-free, when in fact the amount of paper in offices has actually increased.

Over the past thirty years, many people have proclaimed the imminent arrival of the paperless office. Yet even the World Wide Web, which allows almost any computer to read and display another computer's documents, has increased the amount of printing done. The use of e-mail in an organization causes an average 40% increase in paper consumption [16] p. 1.

#### *2.2. Singularity Advocates with a Spiritual Dimension to Their Belief*

When we first began to think about this project and work on the research that led to this essay, the biggest mystery for us was what motivates a tough-minded scientific thinker to believe that human intelligence can be programmed into or instantiated on a computer, a mechanical machine. Then we read Steven Pinker's [17] pp. 5–8 short little essay "Thinking Does Not Imply Subjugating", and in the following few sentences he succinctly described his romance with the belief that human intelligence could be captured in a machine:

The cognitive feats of the brain can be explained in physical terms ... This is a great idea for two reasons. First, it completes a naturalistic understanding of the universe, exorcising occult souls, spirits and ghosts in the machine. Just as Darwin made it possible for a thoughtful observer of the natural world to do without creationism, Turing and others made it possible for a thoughtful observer of the cognitive world to do so without spiritualism.

However, some proponents of the Singularity have a religious zeal to them, not in the theist sense but somewhat similar to the beliefs of the deists. Here is a collection of positions by Singularity zealots that, in our opinion, have to varying degrees a religious tone.

Frank Tipler [18] has an amusing solution for the inevitable fact that once our sun runs out of nuclear fuel and can no longer provide the conditions that make life on Earth sustainable, our only hope for the survival of human culture will be AI computers (AIs) that do not require the conditions that make carbon-based life sustainable. He suggests that the AIs will take to outer space. And

any human who wants to join the AIs in their expansion can become a human upload, a technology that should be developed about the same time as AI technology ... If you can't beat 'em, join 'em ... When this doom is at hand, any human who remains alive and doesn't want to die will have no choice but to become a human upload. The AIs will save us all.

The parallels of Tipler's proposal with Christianity are striking. God is dead, but AI has been born; it is our Savior and, like Jesus's self-described appellation, it is "the son of man". AI, not Jesus, "will save us all", and eternal life can be found in an AI computer somewhere in space, like the "kingdom of heaven" (Matthew 3:2), and not here on Earth.

Anthony Garrett Lisi [19], in an article entitled "I, for One, Welcome our Machine Overlords", claimed: "Computers share knowledge much more easily than humans do, and they keep that knowledge longer, becoming wiser than humans". Lisi, in his attempt to find a higher power, makes the mistake of assuming that wisdom comes from knowledge. Knowledge is about using information to achieve one's objectives, and wisdom is the ability to choose objectives consistent with one's values. How can a computer have values? The values of a computer are those of its programmers.

Pamela McCorduck [20] p. 53 in an article entitled "An Epochal Human Event" opined, "We long to preserve ourselves as a species. For all of the imaginary deities that we have petitioned throughout history who have failed to protect us from nature, from one another, from ourselves—we're finally ready to call on our own enhanced, augmented minds instead". Her god is "our own enhanced, augmented minds".

Sam Harris [21] suggests that a super-intelligent AGI could achieve 20,000 years of intellectual work in a week. Scientific work, however, requires making observations and designing and building observational tools, activities that cannot be compressed into a week of pure computation. His closing comments reveal what we have identified as the quasi-religious fervor of the AGI advocates: "We seem to be in the process of building a god. Now would be a good time to wonder whether it will (or can be) a good one".

Gregory Paul [22] writes, "The way for human minds to avoid becoming obsolete is to join in the cyber civilization; out of growth-limited bio brains into rapidly improving cyber brains". He suggests that we can then give up our physical bodies, which would benefit the Earth's biosystem. This is a variation on the Christian idea that we can have everlasting life as pure spirits. For Gregory Paul, heaven will be in the clouds, computer clouds.

James Croak [23] suggests that, "Fear of AI is the latest incarnation of our primal unconscious fear of an all-knowing, all-powerful angry God dominating us–but in a new ethereal form".

Douglas Hofstadter [24] provides us with an apocalyptic scenario of the impact of the Singularity, which he believes is a "couple of centuries" away. He suggests that the ramifications "will be enormous, since the highest form of sentient beings on the planet will no longer be human. Perhaps these machines—our 'children'—will be vaguely like us and will have culture similar to ours, but most likely not. In that case, we humans may well go the way of the dinosaurs".

Perhaps the most explicit example of the religious devotion to the idea of the Singularity comes from AI programmer Anthony Levandowski, who is famous for his work on self-driving cars, first for Waymo (a subsidiary of Alphabet, Google's holding company) and later for Uber. Levandowski has founded a non-profit religious organization, the Way of the Future, that plans to "develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society" (https://www.wired.com/story/god-is-a-bot-and-anthony-levandowski-is-his-messenger).

We end this section with John Horgan's [25] humorous and skeptical take on the eternal-life belief of Singularity advocates, who hold that humans can be uploaded onto a computer:

I would love to believe that we are rapidly approaching "the singularity". Like paradise, technological singularity comes in many versions, but most involve bionic brain boosting. At first, we'll become cyborgs, as stupendously powerful brain chips soup up our perception, memory, and intelligence and maybe even eliminate the need for annoying TV remotes. Eventually, we will abandon our flesh-and-blood selves entirely and upload our digitized psyches into computers. We will then dwell happily forever in cyberspace ... Kurzweil says he has adopted an antiaging regimen so that he'll live long enough to live forever.

Horgan remains skeptical of the uploading of human brains onto computers, mainly because neuroscientists have such a sketchy understanding of how the brain operates, how it stores memories, and what roles the various chemicals found in the brain play.

Neurotransmitters, which carry signals across the synapse between two neurons, also come in many different varieties. In addition to neurotransmitters, neural-growth factors, hormones, and other chemicals ebb and flow through the brain, modulating cognition in ways that are both profound and subtle [25].

#### *2.3. Have We Become the Servomechanisms of Our Computers?*

Although we firmly believe that machine intelligence can never exceed human intelligence, there is still a very real danger that we can lose some of our autonomy to AI or AGI through a decline in what we regard as human intelligence, and hence in how we view the nature of the human spirit. There are other instances where we have partially lost our autonomy to other technologies. One example is our total dependence on the automobile and the burning of fossil fuels that now threatens our very existence due to global warming and climate change.

In Chapter 4 of *Understanding Media*, "The Gadget Lover: Narcissus as Narcosis", Marshall McLuhan [10] describes how we allow our technologies to take over and control us so that we become their servomechanisms. We believe this is an apt description of the AGI computer 'gadget lovers' who support the notion of the inevitability of the technological singularity. McLuhan [10] p. 51 wrote:

The Greek myth of Narcissus is directly concerned with a fact of human experience, as the word Narcissus indicates. It is from the Greek word narcosis, or numbness. The youth Narcissus mistook his own reflection in the water for another person. This extension (or amplification) of himself by mirror numbed his perceptions until he became the servomechanism of his own extended or repeated image ... Now the point of this myth is the fact that men at once become fascinated by any extension of themselves (i.e., their technological extensions) ... Such amplification is bearable by the nervous system only through numbness or blocking of perception ... To behold, use or perceive any extension of ourselves in form is necessarily to embrace it ... By continuously embracing technologies, we relate ourselves to them as servo-mechanisms. That is why we must, to use them at all, serve these objects, these extensions of ourselves, as gods or minor religions ... Physiologically, man in the normal use of technology (or his variously extended body) is perpetually modified by it and in turn finds ever new ways of modifying his technology. Man becomes, as it were, the sex organs of the machine world, as the bee of the plant world, enabling it to fecundate and to evolve ever new forms.

We have quoted this rather long excerpt from McLuhan because it seems to describe some advocates of AGI who, in our opinion, have become the servomechanisms of their computer technology and even suggest that we will become more than their servomechanisms: we will become their slaves. Like Narcissus, who fell in love with his own image reflected in the water, the advocates of the technological singularity see a reflection of themselves in the computers that they program. Riffing on the one-liner "garbage in, garbage out", we would suggest "computer worship in, narcissism out".

Quentin Hardy [26], a strong advocate of strong AI, wrote, "we have met the AI and it is us". AI is like the pool into which Narcissus looked and fell in love with his own image. AI is just a reflection of one aspect of our own intelligence (the logical and rational aspect), and we, or some of us, have fallen in love with it. Just as Narcissus suffered from narcosis, so mesmerized by his own reflection that he could not get beyond himself, many of the advocates of AGI or strong AI are mesmerized by the beauty of logic and rationality to such an extent that they dismiss the non-rational, the poetry of poiesis, and the emotional side of intelligence. Human intelligence is bicameral: not a neat division of the analytic/rational left brain and the artistic/intuitive right brain, but a synthesis of these two aspects of the human mind. The AI computer brain is unicameral with a left-brain bias. It lacks the neurochemistry, such as dopamine, serotonin, and other agents, that is triggered by or is part of human emotional life.

Timothy Taylor [27] introduces the idea of *Denkraumverlust*, whereby one confuses the representation of something with the thing itself. He cites the example of the Pygmalion myth, where a sculptor falls in love with the figure of a woman that he sculpted. In a similar way, creators of AGI have fallen in love with their creations, attributing properties to them that they do not possess, like Pygmalion with his statue or the male lead in the movie *Her*.

#### **3. The Ground of Intelligence—What Is Missing in Computers**

At the core of our critique of the technological singularity is our belief that human intelligence cannot be exceeded by machine intelligence because the following set of human attributes are essential ingredients of human intelligence, and they cannot, in our opinion, be duplicated by a machine. The most important of these is that humans have a sense of self and hence have purpose, objectives, goals, and telos, as has been described by Terrence Deacon [9] pp. 463–484 in his book *Incomplete Nature*. As a result of this sense of self, humans also have curiosity, imagination, intuition, emotions, passion, desires, pleasure, aesthetics, joy, values, morality, experience, wisdom, and judgement. All of these attributes are essential elements of or conditions for human intelligence, in our opinion. In a certain sense, they are the ground in which human intelligence operates. Stripped of these qualities, as is the case with AI, all that is left of intelligence is logic, a figure without a ground, according to McLuhan, as we have already mentioned. Those that desire to create a human level of intelligence in a machine will have to find a way to duplicate the above list of characteristics that we believe define human intelligence.

To the long list above of human characteristics that we have suggested contribute to human intelligence, we would also add humor, based on the following report of the social interaction of Stan Ulam and John von Neumann, the very first scholars to entertain the notion of the singularity.

Von Neumann's closest friend in the United States was mathematician Stanislaw Ulam. A later friend of Ulam's, Gian-Carlo Rota, writes: "They would spend hours on end gossiping and giggling, swapping Jewish jokes, and drifting in and out of mathematical talk". When von Neumann was dying in hospital, every time Ulam would visit he would come prepared with a new collection of jokes to cheer up his friend (https://en.wikipedia.org/wiki/John\_von\_Neumann).

Humor entails thinking out of the box, a key ingredient of human intelligence. Humor specifically works by connecting elements that are not usually connected, as is also the case with creative thinking. All of the super-intelligent people we have known invariably have a great sense of humor. Who can doubt the intelligence of the comics Robin Williams and Woody Allen, or the sense of humor of the physicists Albert Einstein and Richard Feynman?

There are computers that can calculate better than us and, in the case of IBM's Deep Blue, play chess better than us, but Deep Blue is a one-trick pony that is incapable of many of the facets of thinking that we regard as essential for considering someone intelligent. Other examples of computers that exceeded humans in game playing are Google's AlphaGo beating the human Go champion and IBM's Watson beating the TV Jeopardy champion. In the case of Watson, it won the contest, but it had no idea of what the correct answers it gave meant, and it did not realize that it won the contest, nor did it celebrate its victory. What kind of intelligence is that? A very specialized and narrow kind for sure.

Perhaps the biggest challenge to our skepticism vis-à-vis the Singularity is a recent feat by DeepMind, the Alphabet subsidiary whose AlphaGo program beat the human Go champion. DeepMind developed a successor, AlphaGo Zero, that can play games against itself and thereby find an optimal strategy for winning the game. It played Go against itself for three days and, when it was finished, it was able to beat the original AlphaGo. In fact, it played 100 matches against AlphaGo and it won them all. AI devices that can beat humans at rule-based games parallel the fact that computers can calculate far faster and far better than any human. The other aspect of computers beating humans at games is that a game is a closed system, whereas life and reality are an open system [28].
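To make the self-play idea concrete, here is a minimal, hypothetical sketch. It is not DeepMind's actual method (AlphaGo Zero combines deep neural networks with Monte Carlo tree search); it is a toy tabular learner for the far simpler game of Nim, and all names and parameter values are our own illustrative choices.

```python
# A toy illustration of learning a game purely by self-play (our sketch,
# not DeepMind's algorithm). A tabular learner teaches itself Nim:
# players alternately take 1-3 stones; whoever takes the last stone wins.
import random
from collections import defaultdict

PILE, MAX_TAKE = 15, 3      # starting pile size and maximum take per turn
ALPHA, EPSILON = 0.5, 0.1   # learning rate and exploration rate (illustrative)

Q = defaultdict(float)      # Q[(stones_left, stones_taken)] -> estimated value

def legal_moves(stones):
    return list(range(1, min(MAX_TAKE, stones) + 1))

def choose_move(stones):
    if random.random() < EPSILON:                   # occasionally explore
        return random.choice(legal_moves(stones))
    return max(legal_moves(stones), key=lambda a: Q[(stones, a)])

def self_play_one_game():
    history, stones = [], PILE
    while stones > 0:                               # the program is both players
        move = choose_move(stones)
        history.append((stones, move))
        stones -= move
    # The player who made the final move wins (+1); the opponent loses (-1).
    value = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (value - Q[(state, action)])
        value = -value                              # zero-sum: flip perspective

for _ in range(50_000):
    self_play_one_game()

# Optimal Nim play is to leave the opponent a multiple of 4 stones.
for stones in range(2, PILE + 1):
    best = max(legal_moves(stones), key=lambda a: Q[(stones, a)])
    print(f"{stones} stones left -> take {best}")
```

Even this toy program typically discovers the winning strategy (leave the opponent a multiple of four stones), yet, in the spirit of the Watson example above, it has no idea what Nim is, no desire to win, and no awareness that it has won; it only adjusts numbers in a table.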

Intelligence, at least the kind that we value, involves more than rule-based activities and is not limited to closed systems, but operates in open systems. All of the breakthroughs in science and the humanities involve breaking the rules of the previous paradigms in those fields. Einstein's theory of relativity and quantum theory did not follow the rules of classical physics. As for the fine arts, there are no rules. Both the arts and the sciences are open systems.

The idea that a computer can have a level of imagination, wisdom, or intuition greater than that of humans can only be imagined, in our opinion, by someone who is unable to understand the nature of human intelligence. It is not our intention to insult those that have embraced the notion of the technological singularity, but we believe that this fantasy is dangerous and has the potential to mislead the developers of computer technology by setting up a goal that can never be reached, as well as to devalue what is truly unique about the human spirit and human intelligence.

It is only if we lower our standards as to what constitutes human intelligence that computers will overtake their human creators, as advocates of AGI and the technological singularity suggest. Haim Harari [29] p. 434 put it very succinctly when he wrote that he was not worried about the world being controlled by thinking machines but rather he was "more concerned about a world led by people who think like machines, a major emerging trend of our digital society". In a similar vein, Devlin [30] p. 76 claims that computers cannot think, they can only make decisions, and that, he further claims, is the danger of AGI, namely, decisions that are made without thought.

#### **4. The 3.5 Billion Year Evolution of Human Intelligence**

Many of the shortcomings of AGI as compared to human intelligence are due to the fact that human beings are not just logic machines, but flesh-and-blood organisms that perceive their environment, have emotions and goals, and have the will to live. These capabilities took 3.5 billion years of evolution to create.

Kate Jeffrey [31] pp. 366–369 suggests that it would be "an immense act of hubris" to achieve the same level of human intelligence in a machine. She asks, "Can we do better than 3.5 billion years of evolution did with us?"

Anthony Aguirre [32] pp. 212–214 remarks that, "human minds are incredibly complex but have been battle tested into (relative) stability over eons of evolution in a variety of extremely challenging environments. AGI computers, on the other hand, cannot be built in the millions or billions, and how many generations of them need to be developed before they achieve the stability of the human brain".

As S. Abbas Raza [33] pp. 257–259 asks, can any process other than Darwinian evolution produce "teleological autonomy akin to our own"?

Gordon E. Moore [34], cofounder of Fairchild Semiconductor and of Intel Corp., and the Moore of Moore's law, is also a skeptic:

I don't believe this kind of thing is likely to happen, at least for a long time. And I don't know why I feel that way. The development of humans, what evolution has come up with, involves a lot more than just the intellectual capability. You can manipulate your fingers and other parts of your body. I don't see how machines are going to overcome that overall gap, to reach that level of complexity, even if we get them so they're intellectually more capable than humans.

#### *4.1. Human Intelligence and the Figure/Ground Relationship*

In the Introduction, we indicated that human intelligence, for us, is not just a matter of logic and rationality, but that it also entails explicitly the following characteristics that we will now show are essential to human thought: purpose, objectives, goals, telos, caring, intuition, imagination, humor, emotions, passion, desires, pleasure, aesthetics, joy, curiosity, values, morality, experience, wisdom, and judgement. We will now proceed through this list of human characteristics and show how each is an essential component of human intelligence that would be difficult, if not impossible, to duplicate with a computer. These characteristics arise directly or indirectly from the fact that humans have a sense of self that motivates them. Without a sense of self, who is it that has purpose, objectives, goals, telos, caring, intuition, imagination, humor, emotions, passion, desires, pleasure, aesthetics, joy, curiosity, values, morality, experience, wisdom, and judgement? How could a machine have any of these characteristics?

#### *4.2. Human Thinking Is Not Just Logical and Rational, but It Is Also Intuitive and Imaginative*

Just as some advocates of the Singularity look at figures without considering the ground in which they operate, they also do not take into account that human thought is not just logical and rational, but is also intuitive, imaginative, and even sometimes irrational.

The earliest and harshest critic of AGI was Hubert Dreyfus, who wrote the following series of books beginning in 1965: *Alchemy and AI* [4], *What Computers Can't Do* [5], *Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer* [6], and *What Computers Still Can't Do* [7].

Dreyfus made a distinction between knowing-that, which is symbolic, and knowing-how, which is intuitive, like facial recognition. Knowing-how depends on context, which he claimed is not stored symbolically. He contended that AI would never be able to capture the human ability to understand context, situation, or purpose in the form of rules. The reason is that human intelligence and expertise depend primarily on unconscious instincts rather than conscious symbolic manipulation. He argued that these unconscious skills would never be captured in formal rules that a computer could duplicate. Dreyfus's critique parallels McLuhan's notion of figure/ground analysis. Just as McLuhan claimed that the ground was subliminal, Dreyfus claims that the ground of human thought is unconscious instincts that allow us to instantly and directly arrive at a thought without going through a conscious series of logical steps or symbolic manipulations. Another way of formulating Dreyfus's insight is in terms of emergence theory. The human mind and its thought processes are emergent, non-reductionist phenomena. Computers, on the other hand, operate making use of a reductionist program of symbolic manipulations. AGI is linear and sequential, whereas human thinking processes are simultaneous and non-sequential. In McLuhan's terminology, computers operate in one-thing-at-a-time visual space, and the human mind operates in the simultaneity of acoustic space. Computers operate as closed systems, no matter how large their databases. Biotic systems, such as human beings, are created by and operate within an open system.

Atran [35] pp. 220–222 reminds us that computers "process information in ways opposite to humans' in domains associated with human creativity". He points out that Newton and Einstein imagined "ideal worlds without precedent in any past or plausible future experience ... Such thoughts require levels of abstraction and idealization that disregard, rather than assimilate" what is known. Dyson [36] pp. 255–256 strikes a similar note: "genuinely creative intuitive thinking requires non-deterministic machines that can make mistakes, abandon logic from one moment to the next and learn. Thinking is not as logical as we think".

Imagination and curiosity are uniquely human and are not mechanical, one-step-at-a-time processes. Mechanically trying to program them as a series of logical steps is doomed to fail, since imagination and curiosity defy logic and a computer is bound by logic. Logic can inhibit creativity, imagination, and curiosity. An example of how deviation from strictly logical thinking leads to new ideas is the story of the invention of zero. Parmenides argued that nothing changes, because 'non-being' cannot be, as it is a contradiction: if A changes into B, then A will not-be, but since non-being cannot be, nothing changes.

His use of logic led to a non-intuitive result, but it impacted much of Ancient Greek thinking, with all subsequent Greek philosophers adding something to their model of the world that was unchanging, like Plato's forms and Aristotle's domain of the heavens. We believe that Parmenides' argument that non-being could not be also explains why the Greeks, who were great at geometry, never came up with the idea of zero. Zero was an invention of the Hindu mathematicians, who were not always very logical but very practical. When they recorded their abacus calculations, they used a symbol they called sunya (leave a space) to record 302 as 3 sunya 2, i.e., 3 in the hundreds column, nothing in the tens column, and 2 in the ones column. They denoted sunya with a dot and later with a circle. Sunya allowed them to invent the place number system, which we call Arabic numbers. The Arabs adopted sunya and translated 'leave a space' into sifr. When the Italians borrowed sifr from the Arabs, they called it zefiro and later shortened it to zero. Zero, place numbers, negative numbers, and algebra all emerged from what Parmenides and his Greek compatriots would have called a logical error.
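In modern notation (our addition), the scribes' "3 sunya 2" is simply the place-value expansion that the placeholder makes possible:

```latex
% '3 sunya 2' written as a place-value expansion:
302 = 3 \times 10^{2} + 0 \times 10^{1} + 2 \times 10^{0}
```

The 0 in the tens place does no work as a quantity; it only holds a position open, which is exactly what sunya ("leave a space") recorded.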

Lawrence Krauss, a physicist, expressed a wish for AGI machines: "I'm interested in what machines will focus on when they get to choose the questions as well as the answers" [37]. We are quite skeptical of this hope of Krauss's, given the following remark of Einstein and Infeld [38]: "The mere formulation of a problem is far more often essential than its solution, which may be merely a matter of mathematical or experimental skill. To raise new questions, new possibilities, to regard old problems from a new angle requires creative imagination and marks real advances in science". It is hard to imagine how a system of logic could develop curiosity, as all logic can do is equate equivalent statements.

Kevin Kelly [39] also sees AGI machines tackling scientific questions: "To solve the current grand mysteries of quantum gravity, dark energy, dark matter, we'll probably need intelligences other than human". Kelly is not taking into account the imagination that is required to create a new paradigm. A new paradigm does not arise from making a calculation, but from using one's imagination. If he had written his piece in 1904, he might have suggested that we needed AI to understand the result of the Michelson-Morley experiment, which showed that there is no aether and that the speed of light is the same in all frames of reference. Einstein did not use computation or logic to come up with the theory of relativity, only imagination. Einstein [40] remarked, "Imagination is more important than knowledge. For knowledge is limited to all we now know and understand, while imagination embraces the entire world, and all there ever will be to know and understand".

Gordon Kane [41] p. 219 reminds us that AI might be able to analyze data but scientists will still have to build technological devices to gather data, without which there is no science.

Rafaeli [42] pp. 342–344 also sees AGI computers someday making scientific progress: "Thinking needs data, information and knowledge but also requires communication and interaction. Thinking is about asking questions, not just answering them ... For a machine to think it will need to be curious, creative and communicative". Rafaeli believes that machines will one day be communicative, but we wonder what will motivate them to do so or to be creative or curious for that matter.

#### *4.3. Purpose, Objectives, Goals, Telos and Caring*

*Everything—a horse, a vine—is created for some duty... For what task, then, were you yourself created? A man's true delight is to do the things he was made for* —Marcus Aurelius

The set of five interrelated characteristics of purpose, objectives, goals, telos, and caring define what it is to be a living organism. All living organisms, and only this class of objects, have these properties because they are the only entities that act in their own self-interest. One can define a living organism as an entity that has purpose, objectives, goals, and a telos, a will to live and to reproduce, an end. Telos, associated with Aristotle's fourth cause, is the purpose, goal, or objective that gives rise to an efficient cause that makes something happen. Computers do not have a will to live, a purpose, a goal, or objectives, nor do they care about anything. They just function as they were designed and programmed to by their human manufacturers and users. They do not reproduce like all forms of life, including bacteria and viruses. Because of their will to live, all organisms are caring; they care about finding nourishment and whether they live or die. Even single-cell organisms like bacteria and more complex eukaryote microbes will communicate with each other when there is not an ample supply of nourishment to sustain them, and form a slime medium that allows them to migrate to where there is more food. Bacteria also form slime to protect themselves from ingestion or desiccation. Slime molds, which are eukaryotes, also form cooperative slime colonies for finding nourishment and for reproduction. As more complex multi-cell organisms emerged, more complex forms of caring emerged. Computers, on the other hand, have no capacity for caring. Caring, an emotional state, as we will discover below, is key to creative thinking.

#### *4.4. Intuition*

#### *Intuition is the clear conception of the whole at once* —Johann Kaspar Lavater

Living organisms basically make decisions based on their intuition and not on a linear deductive use of logic or any other process for that matter. One exception is humans, who, because of their capacity for symbolic language, sometimes make decisions through a process of reasoning, but for the most part their day-to-day decisions regarding their safety, their nourishment, their movement, and their breathing are all made intuitively. It is only when they are planning, building something, or solving a problem that they make use of logical reasoning. But intuition kicks in again when they are engaged in the following activities: solving a wicked problem; creating or composing music; making an art object like a painting or a piece of sculpture; performing in a play; dancing; engaging in sports; driving a car; flying a plane; or sailing a boat. In most of these cases there would not be enough time to proceed through a chain of logical steps upon which to base decisions. In the case of wicked problem solving, the solution lies in making assumptions that have never been made before.

Logic has nothing to do with making the assumptions upon which a chain of logical thinking is executed. Logic only helps one develop a solution based on the assumptions one has made. Imagining new assumptions is an intuitive act, not an act of reason or rationality. Thomas Kuhn made a study of how new scientific breakthroughs are made. They are always made by someone new to the field, usually a young scientist who intuits the new paradigm by making an assumption that contradicts the logic of the old paradigm. This is why an AI device cannot solve a wicked problem: it operates as a closed logical system, and thus cannot intuit a new paradigm or a new set of assumptions. That requires a creative, artistic-like, or improvisational approach to science or problem solving. Improvisation cannot be achieved using logic. Logic constrains one's thinking, making improvisation and imagination impossible. Creative thinking is not rational and most times defies logic. Improvisation is about breaking the rules. Computers, with their logical step-by-step processes, cannot leap directly to a solution as can an intuitive thinker.

The moment we, the authors, learned about the idea of a Singularity, we immediately sensed that it was wrong. For those that hold a belief in the Singularity, that belief was also a result of their intuitive thinking. The explanation as to why the intuition of human agents can be so different is that they have different emotional needs. It is no accident that many Singularitarians are computer scientists, especially AI specialists.

#### *4.5. Imagination*

#### *Imagination is an important component of intelligence* —Albert Einstein

Imagination entails the creation of new images, concepts, experiences, or sensations in the mind's eye that have never been seen, perceived, experienced, or sensed through the senses of sight, hearing, and/or touch. Computers do not see, perceive, experience, or sense as humans do, and therefore cannot have imagination. They are constrained by logic, and logic is without images. Another way of describing imagination is to say it represents thinking outside the box. Well, is the box not all that we know and the equivalent ways of representing that knowledge using logic? Logic is merely a set of rules that allows one to show that one set of statements is equivalent to another. One cannot generate new knowledge using logic; one can only find new ways of representing it. Creativity requires imagination and imagination requires creativity, and both creativity and imagination are intuitive, so once again we run up against another barrier that prevents computers from generating general intelligence.

Imagination is essential in science for creating a hypothesis to explain observed phenomena, and this part of the process of scientific thinking is quite independent of logic. Logic comes into play when one uses it to determine the consequences of one's hypotheses that can be tested empirically. Devising ways to test one's hypotheses requires another, but quite different, kind of imagination.

Imagination is also a key element of artistic creation. The artist creates sensations for his or her audience that they (the audience) would not ordinarily experience.

#### *4.6. Humor*

#### *He laughed to free himself from his mind's bondage* —James Joyce

Humor is not so much a prerequisite for intelligence as it is an indication of intelligence. To create or appreciate humor, one requires an imagination to see alternatives to one's expectations. The incongruity theory of humor "suggests that humor arises when logic and familiarity are replaced by things that don't normally go together" [43]. Given that a computer would not recognize or even create this kind of incongruity, it would not only lack a sense of humor, but would also be unable to assemble such incongruities, which are an essential part of imagination and, hence, intelligence.

#### *4.7. Emotions*

#### *The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science* —Albert Einstein

Humans experience a wide variety of emotions, some of which, as Einstein suggests, motivate art and science. Emotions, which are a psychophysical phenomenon, are closely associated with pleasure (or displeasure), passion, desires, motivation, aesthetics, and joy. Every human experience is actually emotional: it is a response of the body and the brain, and every experience is about what action to take, acting to do it again or to not do it again (private communication, Terry Deacon).

Emotions play an essential part in human thinking as neuroscientist Antonio Damasio has shown:

Damasio's studies showed that emotions take (or play) an important part in the human rational thinking mechanism [44] p. 326.

For decades, biologists spurned emotion and feeling as uninteresting. But Antonio Damasio demonstrated that they were central to the life-regulating processes of almost all living creatures. Damasio's essential insight is that feelings are "mental experiences of body states", which arise as the brain interprets emotions, themselves physical states arising from the body's responses to external stimuli. (The order of such events is: I am threatened, experience fear, and feel horror.) He has suggested that consciousness, whether the primitive "core consciousness" of animals or the "extended" self-conception of humans, requiring autobiographical memory, emerges from emotions and feelings [45].

Terrence Deacon [9] pp. 512, 533 in *Incomplete Nature* also claims that emotions are essential for mental activities:

Emotion ... is not merely confined to such highly excited states as fear, rage, sexual arousal, love, craving, and so forth. It is present in every experience, even if often highly attenuated, because it is the expression of the necessary dynamic infrastructure of all mental activity ... Emotion ... is not some special feature of brain function that is opposed to cognition.

Computers are incapable of emotions, which in humans are inextricably linked to pleasure and pain; computers have no pain or pleasure, and hence there is nothing for them to get emotional about. In addition, they have none of the chemical neurotransmitters, which is another reason why computers are incapable of emotions and the drives that are associated with them. Without emotions, computers lack the drives that are an essential part of intelligence and of the striving to achieve a purpose, an objective, or a goal. Emotions play a key role in curiosity, creativity, and aesthetics, three other factors that are essential for human intelligence.

Singularitarians are essentially dualists that embrace the dualisms between body and mind and between reason and emotion. They are the last of the behaviorists, having replaced the Skinner box with a silicon box (today's computers). The mind is not just the brain, and the brain is not just a network of neurons operating as logic gates. The human mind extends into the body, is extended into our language according to Logan [46], and is extended into our tools according to Clark and Chalmers [47].

#### *4.8. Curiosity*

*Curiosity is one of the permanent and certain characteristics of a vigorous intellect* —Samuel Johnson

#### *I have no special talent. I am only passionately curious*. —Albert Einstein

Curiosity is both an emotion and a behavior. Without the emotion of curiosity, the behavior of curiosity is not possible, and given that computers are not capable of emotions, they cannot be curious, and hence lack an essential ingredient of intelligence. Curiosity entails the anticipation of reward, which in the brain comes in the form of neurotransmitters like dopamine and serotonin. No such mechanism exists in computers, and hence they totally lack native curiosity. Curiosity, if it exists at all, would have to be programmed into them. In fact, that is exactly what NASA did when it sent its Mars rover, aptly named Curiosity, to explore the surface of Mars.

Curiosity and intelligence are highly correlated. Advances in knowledge have always been the result of someone's curiosity. Curiosity is a characteristic that only a living organism can possess, and no living organism is more curious than humans. How could a computer create new forms of knowledge without being curious? Any such curiosity would have to be the curiosity of the programmers who create the AGI creature. Since the curiosity programmed into the AGI device cannot exceed native human curiosity, this represents a real barrier to the achievement of the Singularity.
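For a sense of what "programmed-in curiosity" looks like in practice, consider one standard device from reinforcement learning (our example, not one the authors cite): a count-based exploration bonus added to the reward an agent maximizes,

```latex
% Count-based exploration bonus: the external reward r is augmented by a
% novelty term, where N(s) counts visits to state s and \beta is a constant
% chosen by the programmer.
r'(s, a) = r(s, a) + \frac{\beta}{\sqrt{N(s)}}
```

The machine seeks out novel states only because a programmer chose the constant β; the "curiosity" is the programmer's, exactly as argued above.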

#### *4.9. Creativity and Aesthetics*

The role of creativity and aesthetics in the fine arts is rather obvious, but they also play a key role in science, engineering, product design, and general problem solving. Humans solve problems and make discoveries using both out-of-the-box and elegant thinking that defies the logical, one-thing-at-a-time line of thought characteristic of uncreative thinkers and computers. Creativity is a passionate, emotion-filled pursuit in which creators care about their creation, whether it be a practical well-designed product, a scientific theory, or an objet d'art.

Terrence Deacon [9], pp. 91–92 also weighs in on the question of computers and creativity. He shows that if a series of logical steps were all that is required to be creative, then we would live in a pre-determined world in which there is no free will, which is not how we find this world:

Consider, however, that to the extent that we map physical processes onto logic, mathematics, and machine operation, the world is being modeled as though it is preformed, with every outcome implied in the initial state. But as we just noted, even Turing recognized that this mapping between computing and the world was not symmetrical. Gregory Bateson [48] p. 58 explains this well:

In a computer, which works by cause and effect, with one transistor triggering another, the sequences of cause and effect are used to simulate logic. Thirty years ago, we used to ask: Can a computer simulate all the processes of logic? The answer was "yes", but the question was surely wrong. We should have asked: Can logic simulate all sequences of cause and effect? The answer would have been: "no".

When extrapolated to the physical world in general, this abstract parallelism has some unsettling implications. It suggests notions of predestination and fate: the vision of a timeless, crystalline, four-dimensional world that includes no surprises. This figures into problems of explaining intentional relationships such as purposiveness, aboutness, and consciousness, because as theologians and philosophers have pointed out for centuries, it denies all spontaneity, all agency, all creativity, and makes every event a passive necessity already prefigured in prior conditions. It leads inexorably to a sort of universal preformationism.

Aesthetics also plays a role, not just in the fine arts and design, but also in science and engineering. Einstein once remarked, "the only physical theories that we are willing to accept are the beautiful ones". Hermann Bondi [49] confirmed this attitude of Einstein's when he wrote:

What I remember most clearly was that when I put down a suggestion that seemed to me cogent and reasonable, Einstein did not in the least contest this, but he only said, 'Oh, how ugly.' As soon as an equation seemed to him to be ugly, he really rather lost interest in it and could not understand why somebody else was willing to spend much time on it. He was quite convinced that beauty was a guiding principle in the search for important results in theoretical physics.

The idea of a computer having a sense of aesthetics is preposterous, given that the feeling of beauty is an emotion and computers cannot have emotions. Emotions involve the nervous system and the psyche, and since computers have neither a nervous system nor a psyche, they cannot feel emotions such as the feeling of beauty. So, once again, it is hard to imagine AGI competing with or developing anything like human intelligence.

#### *4.10. Values and Morality*

Because a computer has no purpose, objectives, or goals, it cannot have any values, as values are related to one's purpose, objectives, and goals. As is the case with curiosity, values will have to be programmed into a computer; hence the morality of the AGI device will be determined by the values that are programmed into it, which is to say that the morality of the AGI device will be that of its programmers. This gives rise to a conundrum: whose values will be inputted, and who will make this decision? This is a critical issue in a democratic society. Not only that, but there is a potential danger: what if a terrorist group or a rogue state were to create or gain control of a super-intelligent computer or robot that could be weaponized? Those doing AGI research cannot take comfort in the notion that they will not divulge their secrets to irresponsible parties. Those that built the first atomic bomb thought that they could keep their technology secret, but the proliferation of nuclear weapons of mass destruction became a reality. Considering how many hackers are operating today, is not the threat of super-intelligent AGI agents a real concern?

Intelligence, artificial or natural, entails making decisions, and making decisions requires having a set of values. So, once again, as was the case with curiosity, the decision-making ability of an AGI device cannot exceed that of human decision making as it will be the values that are programmed into the machine that will ultimately decide which course of action to take and which decisions are made.

#### **5. Artificial Intelligence and the Figure/Ground Relationship**

Another insight of McLuhan's, namely the relationship of figure and ground, can help to explain why so many intelligent scientists can go so far astray in advocating the idea that a machine can think. The meaning of any "figure" according to McLuhan, whether it is a technology, an institution, a communication event, a text, or a body of ideas, cannot be determined if that figure is considered in isolation from the ground or environment in which it operates. The ground provides the context from which the full meaning or significance of a figure emerges. The following examples illustrate the way in which the context can transform the meaning of a figure: a smokestack belching smoke, once a symbol of industrial progress, is today a symbol of pollution; the suntan, once a sign of hard work in the fields, is now a symbol of affluence and holidaying and will probably evolve into a symbol of reckless disregard for health and the risk of skin cancer.

We believe that a computer operates strictly on the figure of the problem that it is asked to solve. It has no idea of the ground or the context in which the problem arose. It is therefore not thinking, but merely manipulating symbols for concepts that it has never experienced and of which it has had no perceptual experience; hence no emotions, no caring. Basically, the machine does not give a damn. Thinking entails making use of a bit of wisdom, which can only be acquired with experience that has both an intellectual and an emotional component, the latter of which is impossible for a machine. In other words, the machine cannot have emotions, feel love, pain, or regret, or have a sense of what is just or beautiful, and hence can never become wise. The computer can only manipulate the figure of a problem; it has no clue about the ground or the context of the problem.

Can machines think? Just because a machine can calculate or compute faster than a human does not mean that it is thinking. It is just carrying out computations that the human who programs the computer has asked it to do. Before computers, humans used abacuses and slide rules to facilitate their calculations. It never occurred to anyone to suggest that abacuses or slide rules could think. No—they only carried out operations that their human operators made them perform. According to Andy Clark [50], these devices became extensions of the human mind. Computers, like abacuses and slide rules, only carry out operations their human operators/programmers ask them to do, and as such, they are extensions of the minds of their operators/programmers. They cannot think, as they have no free will; in fact, they have no will at all.

We would imagine that proponents of AGI or strong AI would claim that free will is an illusion and that our argument is simply a category error. Well, if there is no such thing as free will, then there is no difference between a human and a computer, as both are subject to the laws of physics. If that is the case, why do we value human life more than computer life? Should a person who destroys a computer, as we have all done when abandoning our out-of-date computers to recycling, be charged with murder? The answer is, of course, no, but it does raise the question: would an AGI computer in the post-Singularity days be protected against murder like the humans it will supposedly replace? Which raises another question: if post-Singularity computers control society, how will they enforce the law?

Our position that computers or machines are mindless is supported by many of our confreres who are also Singularity skeptics. Here is a sample of a few thinkers who believe that AI computers are mindless:

Much of the power of artificial intelligence stems from its mindlessness... Unable to question their own actions or appreciate the consequences of their programming—unable to understand the context in which they operate—they can wreak havoc either as a consequence of flaws in their programming or through the deliberate aims of their programmers [51] p. 59.

Silicon-based information processing requires interpretation by humans to become meaningful and will for the foreseeable future. We have little to fear from thinking machines and more to fear from the increasingly unthinking humans who use them [52] p. 89.

Machines (at least so far, and I don't think this will change with a Singularity) lack vision [53] p. 93.

Machines (humanly constructed artifacts) cannot think, because no machine has a point of view—that is, a unique perspective on the worldly referents of its internal symbolic logic [54] p. 7.

Machines don't think. They approximate functions. They turn inputs into outputs ... much of machine thinking is just machine hill climbing ... Tomorrow's machines will look a lot like today's—old algorithms running on faster machines [55] pp. 423–426.

Basically, as each of the five critics above has pointed out, AGI is a figure without a ground, and a figure without a ground, as Marshall McLuhan observed, is dangerous because it lacks the context that gives it meaning.

In each of these quotes, the ground that is missing to support the machine's mechanical processing is: for Carr [51], the ability to "appreciate the consequences of their programming"; for Fitch [52], "interpretation"; for Pepperberg, "vision"; for Trehub, "a point of view"; and for Kosko, thinking replaced by "machine hill climbing".
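
To make Kosko's phrase concrete, the following minimal sketch (ours, in Python; the objective function and parameters are illustrative assumptions, not drawn from any of the cited authors) shows what "machine hill climbing" amounts to: blindly keeping whichever nearby input scores better, with no ground, context, or purpose behind the climb.

```python
import random

def hill_climb(f, x, step=0.1, iterations=1000):
    """Greedy hill climbing: keep any nearby point that scores higher.
    The 'thinking' here is nothing more than turning inputs into outputs."""
    best = f(x)
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)  # try a random nearby input
        score = f(candidate)
        if score > best:  # keep only uphill moves
            x, best = candidate, score
    return x, best

# Example: climb toward the peak of a simple downward parabola.
peak_x, peak_y = hill_climb(lambda x: -(x - 3.0) ** 2, x=0.0)
print(round(peak_x, 2), round(peak_y, 4))  # x approaches 3, the maximum
```

The machine reliably finds the peak, but, in the terms used above, it has no idea that there is a peak, a landscape, or a reason to climb.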

#### *5.1. The Turing Test*

Alan Turing, in 1950, developed a criterion to determine whether a machine could exhibit intelligent behavior. The Turing test, as it came to be known, asks whether a human dialoging with a computer through a text-only channel can determine whether they are in conversation with a machine or a human. If the human cannot tell, the computer passes the Turing test. The Turing test might be a necessary condition for a computer possessing human-like intelligence, but it certainly is not a sufficient condition. A smart interlocutor, however, could easily determine whether they were conversing with a machine or a human by asking the following personal questions: Have you ever been in love? Who are you closest to in your family? What is your gender? What gives you joy and why? What are your goals in life? What is the ethnic origin of your family? What sports do you enjoy participating in and why? What was the happiest moment in your life and the saddest? The Turing test is not really a test of intelligence; it is a test of whether a programmer can fool a human into believing that a machine is also a human. In other words, it is a magician's trick (private communication with Terry Deacon).
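
A minimal sketch of the test protocol follows (our illustration in Python; the probe questions are drawn from the list above, while the canned respondent and the naive judge are hypothetical stand-ins), showing how such personal questions could unmask a machine that has no experiences to report:

```python
# Probing questions from the list above: they ask about lived experience.
PROBES = [
    "Have you ever been in love?",
    "What gives you joy, and why?",
    "What was the happiest moment in your life, and the saddest?",
]

def run_test(respond, judge):
    """Interrogate a hidden party over a text-only channel and let the
    judge decide, from the transcript alone, whether it seems human."""
    transcript = [(q, respond(q)) for q in PROBES]
    return judge(transcript)  # True = respondent passes as human

# A canned respondent with no experiences to draw on.
machine = lambda q: "I do not have personal experiences."

# A naive judge applying the authors' heuristic: humans give varied,
# experience-laden answers; evasive repetition betrays the machine.
judge = lambda t: len({answer for _, answer in t}) == len(t)

print(run_test(machine, judge))  # False: the machine is unmasked
```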

#### *5.2. Machines Do Not Network and They Have No Theory of Mind*

Stanislas Dehaene [56] pp. 223–225 points out that machines lack two essential functions for intelligent thinking: a global workspace and a theory of mind. The human mind knows what it knows. Although it has specialized modules, the information in them is accessible by all of the brain, similar to Pribram's holographic brain with its holographic storage network. Computer modules do not have holographic access to information, and a computer module, unlike a human brain module, is not aware of the information in the other modules.

The human mind is capable of responding and attending to other humans, but AI does not have this capability to respond and attend to its users. Computers operate as figure without ground. Ridley [57] pp. 226–227 points out that "human intelligence is not individual thinking at all but collective, collaborative and distributive intelligence". Networking with humans is possible because humans have language, which animals and computers do not have. Control of fire and living in communal groups led to verbal language and networking. Ridley claims the only hope for AGI will be individual AGI-configured computers interlinked by the Internet.

Shafir [58] pp. 300–301 points out that since an AGI computer cannot have a Theory of Mind, it will never be able to achieve the level of human intelligence. A theory of mind emerges when a human realizes other humans think the way they do. Since a computer cannot think like a human, it will never be able to develop a theory of mind.

#### *5.3. AIs Have No Goals, Feelings, or Emotions and Hence Cannot Act and Do Not Care*

According to Enfield [59] pp. 3917–3998, a computer cannot experience pain, pleasure, or joy, and therefore has no motivation, no goals, and no desire to communicate: "Machines comply, but they don't cooperate" because they have no goals.

David Gelernter [60] pp. 80–83 writes: "Philosophers and scientists wish that the mind was nothing but thinking and that *feeling* or *being* played no part. They wished so hard for it to be true that they finally decided it was. Philosophers are only human".

Edward Slingerland [61] pp. 345–346 regards AGI computers as "directionless intelligences because AI systems are tools not organisms ... No matter how good they become [doing things] they don't actually want to do any of these things ... AI systems in and of themselves are entirely devoid of intentions or goals ... Motivational direction is the product of natural selection working on biological organisms".

Since computers operate using an either/or, 0 or 1, true or false logic, they are not capable of metaphor, which Stuart Kauffman [62] pp. 507–509 claims is the basis of human creative thought, whether mathematical, artistic or scientific.

Abigail Marsh [63] pp. 415–417 cites a patient who is incapable of any emotions because of damage to his ventromedial prefrontal cortex. The patient was unable to make use of the intelligence and knowledge residing in other, unaffected areas of his brain because, without his emotional capacity, he was "unable to make decisions or take action". No emotions—no actions. One, therefore, cannot expect any thought or initiative from an AGI device. It is apathetic, has no capacity for emotions, and has no initiative unless instructed by a human. It all comes down to the fact that there is no reward for a computer, and hence no motivation. Therefore, the notion expressed by some advocates of the Singularity that AGI computers could take over the human race is without basis. Roy Baumeister [64] p. 73 comes to a similar conclusion.

Jonathan Gottschall [65] pp. 179–180 points out that attempts to have AGI computers generate compelling stories have been a dismal failure. The creators of great stories and other forms of artistic expression are intelligent, but they have another quality, which we will call soulfulness. By soul in this context we mean something that is not supernatural but that has a strong emotional component, in addition to the analytic intelligence that an artist must also possess. Musicologists have formulated the rules that Mozart used in composing his music and then fed those rules into a computer along with a simple Mozart melody. The result was music that sounded a bit like Mozart but had none of the emotional and aesthetic appeal of the music that Mozart actually composed. Mozart had soul and passion; the computer has a melody, a set of rules, and the ability to combine them, but not the ability to create beauty. An artist knows when to break the rules, whereas the computer can only stick to the rules. There is a parallel with creative science: the scientist who breaks new ground breaks the rules of the former Kuhnian paradigm by combining intelligence with intuition and creative imagination.
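
To illustrate the kind of rule-following involved (a toy sketch of our own in Python, not the musicologists' actual system; the seed melody is invented), consider a program that extracts note-to-note "rules" from a melody and recombines them. It obeys the rules perfectly and yet has nothing to say:

```python
import random
from collections import defaultdict

# Toy seed melody (a stand-in for the "simple Mozart melody" above).
seed = ["C", "E", "G", "E", "C", "D", "E", "F", "E", "D", "C"]

# Learn the "rules": which note may follow which.
transitions = defaultdict(list)
for a, b in zip(seed, seed[1:]):
    transitions[a].append(b)  # rule: note a may be followed by note b

def compose(start="C", length=12):
    """Generate a melody by mechanically applying the learned rules."""
    note, melody = start, [start]
    for _ in range(length - 1):
        note = random.choice(transitions[note])  # apply a learned rule
        melody.append(note)
    return melody

print(" ".join(compose()))  # rule-abiding, but soulless, output
```

The program can only recombine what it was given; it cannot know when breaking a rule would be beautiful.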

Saffo [66] pp. 206–208 suggests that one of the motivations to create AGI is that "we want someone to talk to". This suggestion raises the question: if we have the possibility to talk to each other, why would we need an AGI computer to fill this need? Saffo also suggests that "we of course will attribute feelings and rights to AIs—and eventually they will demand it". How can machines without a sense of self or a will to live demand anything? And how will they arrive at a notion of rights unless it is programmed into them? Finally, the last straw for us is Saffo's contention that sentience is universal and not limited to living things: "It's just a small step to speculate about what trees, rocks—or AIs—think". We guess that means trees and rocks have rights and we can talk to them. Maybe the tree huggers know something that we do not.

Kosslyn [67] pp. 228–230 posits that AGI computers will want to have purpose and, as a consequence, will want to support and elevate the human condition. This is another example of a Singularity advocate assuming that a machine without any capacity for emotions has the capacity to desire something that would give it pleasure. We contend that without emotions and without the ability to feel pleasure, there can be no desire and no purpose.

#### **6. Decision Making, Experience, Judgement and Wisdom**

*The only source of knowledge is experience.* —Albert Einstein

Even if the challenge of programming an AGI device with a set of values and a moral compass that represents the will of the democratic majority of society is met, there remains the question of whether the AGI device has the judgment and wisdom to make the correct decision. In other words, is it possible to program wisdom into a logical device that has no emotions and no experiences upon which to base a decision? Wisdom is not a question of having the analytic skills to deal with a new situation but rather of having a body of experiences to draw upon to guide one's decisions. How does one program experience into a logic machine?

Intelligence requires the ability to calculate or to compute, but the ability to calculate or compute does not necessarily provide the capability to make judgments and decisions unless values are available, which for an AGI device, requires input from a human programmer.

#### **7. Conclusions: How Computers Will Make Us Humans Smarter**

Douglas Rushkoff [12] pp. 354–355 invites us to consider computers not as figure but as ground. He suggests that the leap forward in intelligence will not come from AGI-configured computers that have the potential to be smarter than us humans, but from the environment that computers create. Human intelligence will increase by allowing human minds to network and create something greater than what a single human mind, or a small group of co-located minds, can create. The medieval university made us humans smarter by bringing scholars in contact with each other. The city is another example of a medium that allowed thinkers and innovators to network, and hence increase human intelligence. The printing press had a similar impact. With networked computer technology, a mind on a global scale is emerging.

In the past, schools of thought emerged that represented the thinking of a group or team of scholars, and they were named after cities. What is emerging now are schools of thought and teams of scholars that are not city-based but exist on a global scale. For example, we once talked about the Toronto school of communication and media studies, consisting of scholars such as Harold Innis, Marshall McLuhan, Ted Carpenter, Eric Havelock, and Northrop Frye, who lived in Toronto and communicated with each other about media and communications. A similar New York school of communication emerged with Chris Nystrom, Jim Carey, John Culkin, Neil Postman, and his students at NYU. Today, that tradition lives on not as the Toronto School or the New York School, but as the Media Ecology School, with participants in every part of the world. This is what Rushkoff [12] was talking about in his article "The Figure or Ground", where he pointed out that it is the ground or environment that computers create, and not the figure of the computer by itself, that will give rise to intelligence greater than a single human's. He expressed this idea as follows: "Rather than towards machines that think, I believe we are migrating toward a networked environment in which thinking is no longer an individual activity nor bound by time and space".

Marcelo Gleiser [68] pp. 54–55 strikes a similar chord to that of Doug Rushkoff when he points out that many of our technologies act as extensions of who we are. He asks: "What if the future of intelligence is not outside but inside the human brain? I imagine a very different set of issues emerging from the prospect that we might become super-intelligent through the extension of our brainpower by digital technology and beyond—artificially enhanced human intelligence that amplifies the meaning of being human".

**Author Contributions:** A.B. and R.K.L. wrote the paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

*Article* **Countering Superintelligence Misinformation**

#### **Seth D. Baum**

Global Catastrophic Risk Institute, PO Box 40364, Washington, DC 20016, USA; seth@gcrinstitute.org

Received: 9 September 2018; Accepted: 26 September 2018; Published: 30 September 2018

**Abstract:** Superintelligence is a potential type of future artificial intelligence (AI) that is significantly more intelligent than humans in all major respects. If built, superintelligence could be a transformative event, with potential consequences that are massively beneficial or catastrophic. Meanwhile, the prospect of superintelligence is the subject of major ongoing debate, which includes a significant amount of misinformation. Superintelligence misinformation is potentially dangerous, ultimately leading to bad decisions by the would-be developers of superintelligence and those who influence them. This paper surveys strategies to counter superintelligence misinformation. Two types of strategies are examined: strategies to prevent the spread of superintelligence misinformation and strategies to correct it after it has spread. In general, misinformation can be difficult to correct, suggesting a high value of strategies to prevent it. This paper is the first extended study of superintelligence misinformation. It draws heavily on the study of misinformation in psychology, political science, and related fields, especially misinformation about global warming. The strategies proposed can be applied to lay public attention to superintelligence, to AI education programs, and to efforts to build expert consensus.

**Keywords:** artificial intelligence; superintelligence; misinformation

#### **1. Introduction**

At present, there is an active scholarly and public debate regarding the future prospect of artificial superintelligence (henceforth just *superintelligence*), which is artificial intelligence (AI) that is significantly more intelligent than humans in all major respects. While much of the issue remains unsettled, some specific arguments are clearly incorrect, and as such can qualify as *misinformation.* (As is elaborated below, arguments can qualify as misinformation even when the issues are unsettled.) More generally, misinformation can be defined as "false or inaccurate information" [1], or as "information that is initially presented as true but later found to be false" [2] (p. 1). This paper addresses the question of what can be done to reduce the spread of and belief in superintelligence misinformation.

While any misinformation is problematic, superintelligence misinformation is especially worrisome due to the high stakes involved. If built, superintelligence could have transformative consequences, which could be either massively beneficial or catastrophic. Catastrophe is more likely to come from a superintelligence built based on the wrong ideas—and it could also come from *not* building a superintelligence that would have been based on the *right* ideas, because a well-designed superintelligence could prevent other types of catastrophe, such that abstaining from building such a superintelligence could result in catastrophe. Thus, the very survival of the human species could depend on avoiding or rejecting superintelligence misinformation. Furthermore, the high stakes of superintelligence have the potential to motivate major efforts to attempt to build it or to prevent others from doing so. Such efforts could include massive investments or restrictive regulations on research and development (R&D), or plausibly even international conflict. It is important for these sorts of efforts to be based on the best available understanding of superintelligence.

Superintelligence is also an issue that attracts a substantial amount of misinformation. The abundance of misinformation may be due to the many high-profile portrayals of superintelligence in science fiction, the tendency for popular media to circulate casual comments about superintelligence made by various celebrities, and the relatively low profile of more careful scholarly analyses. Whatever the cause, experts and others often find themselves responding to some common misunderstandings [3–9].

There is also potential for superintelligence *disinformation*: misinformation with the intent to deceive. There is a decades-long history of private industry and anti-regulation ideologues promulgating falsehoods about socio-technological issues in order to avoid government regulations. This practice was pioneered by the tobacco industry in the 1950s and has since been adopted by other industries including fossil fuels and industrial chemicals [10,11]. AI is increasingly important for corporate profits and thus could be a new area of anti-regulatory disinformation [12]. The history of corporate disinformation and the massive amounts of profit potentially at stake suggest that superintelligence disinformation campaigns could be funded at a large scale and could be a major factor in the overall issue. Superintelligence disinformation could potentially come from other sources as well, such as governments or even concerned citizens seeking to steer superintelligence debates and practices in particular directions.

Finally, there is the subtler matter of the information that has not yet been established as misinformation, but is nonetheless incorrect. This misinformation is the subject of ongoing scholarly debates. Active superintelligence debates consider whether superintelligence will or will not be built, whether it will or will not be dangerous, and a number of other conflicting possibilities. Clearly, some of these positions are false and thus can qualify as misinformation. For example, claims that superintelligence will be built and that it will not be built cannot both be correct. However, it is not presently known which positions are false, and there is often no expert consensus on which positions are likely to be false. While the concept of misinformation is typically associated with information that is more obviously false, it nonetheless applies to these subtler cases, which can indeed be "information that is initially presented as true but later found to be false". Likewise, countering misinformation presents a similar challenge regardless of whether the misinformation is spread before or after expert consensus is reached (though, as discussed below, expert consensus can be an important factor).

In practical terms, the question then is what to do about it. There have been a number of attempts to reply to superintelligence misinformation in order to set the record straight [3–9]. However, to the best of the present author's knowledge, aside from a brief discussion in [12], there have been no efforts to examine the most effective ways of countering superintelligence misinformation. Given the potential importance of the matter, a more careful examination is warranted. That is the purpose of this paper. The paper's discussion is relevant to public debates about superintelligence, to AI education programs (e.g., in university computer science departments), and to efforts to build expert consensus about superintelligence.

In the absence of dedicated literature on superintelligence misinformation, this paper draws heavily on the more extensive research literature studying misinformation about other topics, especially global warming (e.g., [10,13,14]), as well as the general literature on misinformation in psychology, cognitive science, political science, sociology, and related fields (for reviews, see [2,15]). This paper synthesizes insights from these literatures and applies them to the particular circumstances of superintelligence. The paper is part of a broader effort to develop the social science of superintelligence by leveraging insights from other issues [12,16].

The paper is organized as follows. Section 2 presents some examples of superintelligence misinformation, in order to further motivate the overall discussion. Section 3 surveys the major actors and audiences (i.e., the senders and receivers) of superintelligence misinformation, in order to provide some strategic guidance. Section 4 presents several approaches for preventing the spread of superintelligence misinformation. Section 5 presents approaches for countering superintelligence misinformation that has already spread. Section 6 concludes.

#### **2. Examples of Superintelligence Misinformation**

It is often difficult to evaluate which information about superintelligence is false. This is because superintelligence is a possible future technology that may be substantially different from anything that currently exists, and because it is the subject of a relatively small amount of study. For comparison, other studies of misinformation have looked at such matters as whether Barack Obama was born in the United States, whether childhood vaccines cause autism, and whether Listerine prevents colds and sore throats [17]. In each of these cases, there is clear and compelling evidence pointing in one direction or the other (the evidence clearly indicates that Obama was born in the US, that vaccines do not cause autism, and that Listerine does not prevent colds or sore throats, despite many claims to the contrary in all three cases). Therefore, an extra degree of caution is warranted when considering whether a particular claim about superintelligence qualifies as misinformation.

That said, some statements about superintelligence are clearly false. For example, consider this statement from Steven Pinker: "As far as I know, there are no projects to build an AGI, not just because it would be commercially dubious, but also because the concept is barely coherent" [18]. The acronym AGI stands for artificial general intelligence, which is a form of AI closely associated with superintelligence. Essentially, AGI is AI that is capable of reasoning across a wide range of domains. AGI may be difficult to build, but the concept is very much coherent. Indeed, it has a substantial intellectual history and ongoing study [19], including a dedicated research journal (*Journal of Artificial General Intelligence*) and professional society (the Artificial General Intelligence Society). Furthermore, there are indeed projects to build AGI—one recent survey identifies 45, spread across many countries and institutions, including many for-profit corporations; among the largest are DeepMind, acquired by Google in 2014 for £400 million; the Human Brain Project, an international project with \$1 billion in funding from the European Commission; and OpenAI, a nonprofit with \$1 billion in pledged funding [20]. (DeepMind and OpenAI explicitly identify as working on AGI. The Human Brain Project does not, but it is working on simulating the human brain, which is considered to be a subfield of AGI [19].) There is even an AGI project at Pinker's own university. (Pinker and the AGI project MicroPsi [21] are both at Harvard University.) Therefore, in the quoted statement, the "as far as I know" part may well be true, but the rest is clearly false. This particular point of misinformation is significant because it conveys the false impression that AGI (and superintelligence) is a nonissue, when in fact it is a very real and ongoing subject of R&D.

A more controversial matter is the debate on the importance of consciousness to superintelligence. Searle [22] argues that computers cannot be conscious and therefore, at least in a sense, cannot be intelligent, and likewise cannot have motivation to destroy humanity. Similar arguments have been made by Logan [23], for example. A counterargument is that the important part is not the consciousness of a computer but its capacity to affect the world [4,24,25]. It has also been argued that AI could be harmful to humanity even if it is not specifically motivated to do so, because the AI could assess humanity as being in the way of it achieving some other goal [25,26]. The fact that AI has already shown the capacity to outperform humans in some domains is suggestive of the possibility for it to outperform humans in a wider range of domains, regardless of whether the AI is conscious. However, this is an ongoing area of debate, and indeed Chalmers [24] (p. 16) writes "I do not think the matter can be regarded as entirely settled". Regardless, there must be misinformation on one side or the other: computers either can be conscious or they cannot, and consciousness either matters for superintelligence or it does not. Additionally, many parties to the debate maintain that those who believe that consciousness or conscious motivation matter are misinformed [4,5,7–9], though it is not the purpose of this paper to referee this debate.

There are even subtler debates among experts who believe in the prospect of superintelligence. For example, Bostrom [25] worries that it would be difficult to test the safety of a superintelligence because it could trick its human safety testers into believing it is safe (the "treacherous turn"), while Goertzel [27] proposes that the safety testing for a superintelligence would not be so difficult because the AI could be tested before it becomes superintelligent (the "sordid stumble"; the term is from [28]). Essentially, Bostrom argues that an AI would become capable of deceiving humans before humans realize it is unsafe, whereas Goertzel argues the opposite. Only one of these views can be correct; the other would qualify as misinformation. More precisely, only one of these views can be correct for a given AI system—it is possible that some AI systems could execute a treacherous turn while others would make a sordid stumble. Which view is more plausible is a matter of ongoing study [28,29]. This debate is important because it factors significantly into the riskiness of attempting to build a superintelligence.

Many more additional examples could be presented, such as on the dimensionality of intelligence [3], the rate of progress in AI [7,8], the structure of AI goals [6–8], and the relationship between human and AI styles of thinking [6,8]. However, this is not the space for a detailed survey. Instead, the focus of this paper is on what to do about the misinformation. Likewise, this paper does not wish to take positions on open debates about superintelligence. Some positions may be more compelling, but arguments for or against them are tangential to this paper's aim of reducing the preponderance of misinformation. In other words, this paper strives to be largely neutral on which information about superintelligence happens to be true or false. The above remarks by Pinker will occasionally be used as an example of superintelligence misinformation because they are so clearly false, whereas the falsity of other claims is more ambiguous.

The above examples suggest two types of superintelligence misinformation: information that is already clearly false and information that may later be found to be false. In practice, there may be more of a continuum of how clearly true or false a piece of information is. Nonetheless, this distinction can be a useful construct for efforts to address superintelligence misinformation. The clearly false information can be addressed with the same techniques that are used for standard cases of misinformation, such as Obama's place of birth. The not-yet-resolved information requires more careful analysis, including basic research about superintelligence, but it can nonetheless leverage some insights from the misinformation literature.

The fact that superintelligence is full of not-yet-resolved information is important in its own right, and it has broader implications for superintelligence misinformation. Specifically, the extent of expert consensus is an important factor in the wider salience of misinformation. This matter is discussed in more detail below. Therefore, while this paper is mainly concerned with the type of misinformation that is clearly false, it will consider both types. With that in mind, the paper now starts to examine strategies for countering superintelligence misinformation.

#### **3. Actors and Audiences**

Some purveyors of superintelligence misinformation can be more consequential than others. Ditto for the audiences for superintelligence misinformation. This is important to bear in mind because it provides strategic direction to any efforts to counter the misinformation. Therefore, this section reviews who the important actors and audiences may be.

Among the most important are the R&D groups that may be building superintelligence. While they can be influential sources of ideas about superintelligence, they may be especially important as audiences. For example, if they are misinformed regarding the treacherous turn vs. the sordid stumble, then they could fail to correctly assess the riskiness of their AI system.

Also important are the institutions that support the R&D. At present, most AGI R&D groups are based in either for-profit corporations or universities, and some also receive government funding [20]. Regulatory bodies within these institutions could ensure that R&D projects are proceeding safely, such as via university research review boards [30,31]. Successful regulation depends on being well-informed about the nature of AGI and superintelligence and its prospects and risks. The same applies to R&D funding decisions by institutional funders, private donors, and others. Additionally, while governments are not presently major developers of AGI, except indirectly as funders, they could become important developers should they later decide to do so, and they meanwhile can play important roles in regulation and in facilitating discussion across R&D groups.

Corporations are of particular note due to their long history of spreading misinformation about their own technologies, in particular to convey the impression that the technologies are safer than they actually are [10]. These corporate actors often wield enormous resources and have a correspondingly large effect on the overall issue, either directly or by sponsoring industry-aligned think tanks, writers, and other intermediaries. At this time, there are only hints of such behavior by AI corporations, but the profitability of AI and other factors suggest the potential for much more [12].

Thought leaders on superintelligence are another significant group. In addition to the aforementioned groups, this also includes people working on other aspects of superintelligence, such as safety and policy issues, as well as people working on other (non-superintelligence) forms of AI, and public intellectuals and celebrities. These are all people who can have outsized influence when they comment on superintelligence. That influence can be on the broader public, as well as in quieter conversations with AGI/superintelligence R&D groups, would-be regulators, and other major decision-makers.

Finally, there is the lay public. The role of the public in superintelligence may be reduced due to the issue being driven by technology R&D that (for now at least) occurs primarily in the private sector. However, the public can play roles as citizens of governments that might regulate the R&D and as consumers of products of the corporations that host the R&D. The significance of the public for superintelligence is not well established at this time.

While the above groups are presented in approximate order of importance, it would not be appropriate to formally rank them. What matters is not the importance of the group but the quality of the opportunity that one has to reduce misinformation. This will tend to vary heavily by the circumstances of whoever is seeking to reduce the extent of superintelligence misinformation.

With that in mind, the paper now turns to strategies for reducing superintelligence misinformation.

#### **4. Preventing Superintelligence Misinformation**

The cliché "an ounce of prevention is worth a pound of cure" may well be an understatement for misinformation. An extensive empirical literature finds that once misinformation enters into someone's mind, it can be very difficult to remove.

Early experiments showed that people can even make use of information that they acknowledge to be false. In these experiments, people were told a story and then informed that some of the information in it was false. When asked, subjects would correctly acknowledge the information to be false, but they would also use it in retelling the story as if it were true. For example, the story could describe a fire caused by volatile chemicals, with a later explanation that no volatile chemicals were present. Subjects would acknowledge that the volatile chemicals were absent but then cite them as the cause of the fire. This is logically incoherent. The fact that people do this speaks to the cognitive durability that misinformation can have [32,33].

The root of the matter appears to be that human memory does not simply write and overwrite like computer memory. Corrected misinformation does not vanish. Ecker et al. [15] trace this to the conflicting needs for memory stability and flexibility:

Human memory is faced with the conundrum of maintaining stable memory representations (which is the whole point of having a memory in the first place) while also allowing for flexible modulation of memory representations to keep up-to-date with reality. Memory has evolved to achieve both of these aims, and hence it does not work like a blackboard: Outdated things are rarely actually wiped out and over-written; instead, they tend to linger in the background, and access to them is only gradually lost. [15] (p. 15)
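
As a loose caricature of this contrast (our own toy sketch in Python; the decay model is an invented assumption for illustration, not Ecker et al.'s actual account), computer memory overwrites in place, while a human-like store keeps outdated traces around with only gradually fading access:

```python
# Computer memory: assignment simply overwrites the old value.
computer_memory = {"fire_cause": "volatile chemicals"}
computer_memory["fire_cause"] = "unknown"  # the old value is gone for good

# Human-like memory (toy model): nothing is erased; traces just fade.
human_memory = [
    "fire_cause: volatile chemicals",  # the original misinformation
    "fire_cause: unknown",             # the correction, stored alongside it
]

def recall(memory, decay=0.9):
    """Return each trace's accessibility: newer traces are stronger,
    but older ones linger instead of being wiped out."""
    newest_first = list(reversed(memory))
    return {trace: decay ** age for age, trace in enumerate(newest_first)}

print(computer_memory["fire_cause"])  # only 'unknown' remains
print(recall(human_memory))           # the correction leads, yet the
                                      # misinformation is still accessible
```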

There are some techniques for reducing the cognitive salience of misinformation; these are discussed in detail below. However, in many cases, it would be highly desirable to simply avoid the misinformation in the first place. Therefore, this section presents some strategies for preventing superintelligence misinformation.

The ideas for preventing superintelligence misinformation are inevitably more speculative than those for correcting it. There are two reasons for this. One is that the correction of misinformation has been the subject of a relatively extensive literature, while the prevention of misinformation has received fairly little scholarly attention. (Rare examples of studies on preventing misinformation are [34,35].) The other reason is that the correction of misinformation is largely cognitive and thus conducive to simple laboratory experiments, whereas the prevention of misinformation is largely sociological and thus requires a more complex and case-specific analysis. Nonetheless, given the importance of preventing superintelligence misinformation, it is important to consider potential strategies for doing so.

#### *4.1. Educate Prominent Voices about Superintelligence*

Perhaps the most straightforward approach to preventing superintelligence misinformation is to educate people who have prominent voices in discussions about superintelligence. The aim here is to give them a more accurate understanding of superintelligence so that they can pass that along to their respective audiences. Prominent voices about superintelligence can include select scholars, celebrities, or journalists, among others.

Educating the prominent may be easier said than done. For starters, they can be difficult to access, due to busy schedules and multitudes of other voices competing for their attention. Additionally, some of them may already believe superintelligence misinformation, especially those who are already spreading it. Misinformation is difficult to correct in general, and may be even more difficult to correct for busy people who lack the mental attention to revise their thinking. (See Section 5.4 for further discussion of this point.) People already spreading misinformation may seem to be ideal candidates for educational efforts, in order to persuade them to change their tune, but it may actually be more productive to engage with people who have not yet made up their minds. Regardless, there is no universal formula for this sort of engagement, and the best opportunities may often be a matter of particular circumstance.

One model that may be of some value is the effort to improve the understanding of global warming among broadcast meteorologists. Broadcast meteorologists are for many people the primary messenger of environmental science. Furthermore, as a group, meteorologists (broadcast and non-broadcast) have traditionally been more skeptical about global warming than most of their peers in other Earth sciences [36,37]. In light of this, several efforts have been made to provide broadcast meteorologists with a better understanding of climate science, in hopes that they would pass this on to their audiences (e.g., [38,39]).

The case of broadcast meteorologists has important parallels to the many AI computer scientists who do not specialize in AGI or superintelligence. Both groups have expertise on a topic that is closely related to, but not quite the same as, the topic at hand. Broadcast meteorologists' expertise is weather, whereas global warming is about climate. (Weather concerns the day-to-day fluctuations in meteorological conditions, whereas climate concerns the long-term trends. An important distinction is that while weather can only be forecast a few days in advance, climate can be forecast years or decades in advance.) Similarly, most AI computer scientists focus on AI that has "narrow" intelligence (intelligence in a limited range of domains), not AGI. Additionally, broadcast meteorologists and narrow AI computer scientists are often asked to voice their views on climate change and AGI, respectively.

#### *4.2. Create Reputational Costs for Misinformers*

When prominent voices cannot be persuaded to change their minds, they can at least be punished for getting it wrong. Legal punishment is possible in select cases (Section 4.5). However, reputational punishment is almost always possible and has potential to be quite effective, especially for public intellectuals whose brands depend on a good intellectual reputation.

In an analysis of US healthcare policy debates, Nyhan [40] concludes that correcting misinformation is extremely difficult and that increasing reputational costs may be more effective. Nyhan [40] identifies misinformation that was critical to two healthcare debates: in the 1990s, the false claim that the policy proposed by President Bill Clinton would prevent people from keeping their current doctors, and in the 2000s, the false claim that the policy proposed by President Barack Obama would have established government "death panels" to deny life-sustaining coverage to the elderly. Nyhan [40] traces this misinformation to Betsy McCaughey, a scholar and politician generally allied with US conservative politics and opposed to these healthcare policy proposals:

"Until the media stops giving so much attention to misinformers, elites on both sides will often succeed in creating misperceptions, especially among sympathetic partisans. And once such beliefs take hold, few good options exist to counter them—correcting misperceptions is simply too difficult. The most effective approach may therefore be for concerned scholars, citizens, and journalists to (a) create negative publicity for the elites who are promoting misinformation, increasing the costs of making false claims in the public sphere, and (b) pressure the media to stop providing coverage to serial dissemblers". [40] (p. l6)

Nyhan [40] further notes that while McCaughey's false claims were widely praised in the 1990s, including with a National Magazine Award, they were heavily criticized in the 2000s, damaging her reputation and likely reducing the spread of the misinformation.

There is some evidence indicating that reputational threats can succeed at reducing misinformation. Nyhan and Reifler [34] sent a randomized group of US state legislators a series of letters warning them about the reputational and electoral harms that the legislators could face if an independent fact checker (specifically, PolitiFact) found them to make false statements. The study found that the legislators receiving the warnings were significantly less likely to make false statements. This finding is especially applicable to superintelligence misinformation spread by politicians, whose statements are more likely to be evaluated by a fact checker like PolitiFact. Conceivably, similar fact-checking systems could be developed for other types of public figures, or even for more low-profile professional discourse such as occurs among scientists and other technical experts. Similarly, Tsipursky and Morford [41] and Tsipursky et al. [35] describe a Pro-Truth Pledge aimed at committing people to refrain from spreading misinformation and to ask other people to retract misinformation, which can serve as a reputational punishment for misinformers, as well as a reputational benefit for those who present accurate information. Initial evaluations provide at least anecdotal support for the pledge having a positive effect on the information landscape.

For superintelligence misinformation, creating reputational costs has potential to be highly effective. A significant portion of influential voices in the debate have scholarly backgrounds and reputations that they likely wish to protect. For example, many of Steven Pinker's remarks about superintelligence are clearly misinformed, including the one discussed in Section 2 and several in his recent book *Enlightenment Now* [42]. (For detailed analysis of *Enlightenment Now*, see Torres [9].) Given Pinker's scholarly reputation, it may be productive to spread a message such as 'Steven Pinker is unenlightened about AI'.

At the same time, it is important to recognize the potential downsides of imposing reputational costs. Criticizing a person can damage one's relationship with them, reducing other sorts of opportunities. For example, criticizing people who may be building superintelligence could make them less receptive to other efforts to make their work safer. (Or, it could make them more receptive—this can be highly specific to individual personalities and contexts.) Additionally, it can impose reputational costs on the critic, such as a reputation for negativity or for seeking to restrict free speech. Caution is especially warranted for cases in which the misinformation comes from a professional contrarian, who may actually benefit from and relish the criticism. For example, Marshall [43] (pp. 72–73) warns climate scientists against debating professional climate deniers, since the latter tend to be more skilled at debate, especially televised debate, even though the arguments of the former are more sound. The same could apply for superintelligence, if it ever has a similar class of professional debaters. Thus, the imposition of reputational costs is a strategy to pursue selectively in certain instances of superintelligence misinformation.

#### *4.3. Mobilize against Institutional Misinformation*

The most likely institutional sources of superintelligence misinformation are the corporations involved in AI R&D, especially R&D for AGI and superintelligence. These companies have a vested interest in cultivating the impression that their technologies are safe and good for the world.

For these companies, reputational costs can also be significant. Corporate reputation can be important for consumer interest in the companies' products, citizen and government interest in imposing regulations on the companies, investor expectations of future profits, and employee interest in working for the companies. Therefore, one potential strategy is to incentivize companies so as to align their reputation with accurate information about superintelligence.

A helpful point of comparison is to corporate messaging about environmental issues, in particular the distinction between "greenwashing" and "brownwashing" [44]. Greenwashing is when a company portrays itself as protecting the environment when it is actually causing much environmental harm. For example, a fossil fuel company may publicize the greenhouse gas emissions reductions from solar panels it installs on its headquarters building while downplaying the fact that its core business model is a major driver of greenhouse gas emissions. In contrast, brownwashing is when a company declines to publicize its efforts towards environmental protection, perhaps because they have customers who oppose environmental protection or investors who worry it reduces profitability. In short, greenwashing is aimed at audiences that value environmental protection, while brownwashing is aimed at audiences that disvalue it.

Greenwashing is often criticized for giving companies a better environmental reputation than they deserve. In many cases that criticism may be fair. However, from an environmental communication standpoint, greenwashing does have the benefit of promoting a pro-environmental message. At a minimum, audiences of greenwashing are told that environmental protection is important. Audiences may also be given accurate information about environmental issues—for example, an advertisement that touts a fossil fuel company's greenhouse gas emissions reductions may also correctly explain that global warming is real and is caused by human action.

Similarly, there may be value in motivating AI companies to present accurate messages about superintelligence. This could be accomplished by cultivating demand for accurate messages among the companies' audiences. For example, if the public wants to hear accurate messages about superintelligence, then corporate advertising may be designed accordingly. The advertising might overstate the company's positive role, which would be analogous to greenwashing and could likewise be harmful for reducing accountability for bad corporate actors, but even then it would at least be spreading an accurate message about superintelligence.

Another strategy is for the employees of AI companies to mobilize against the companies supporting superintelligence misinformation, or against misinformation in general. At present, this may be a particularly promising strategy. There is a notable recent precedent for this in the successful employee action against Google's participation in Project Maven, a defense application of AI [45]. While not specifically focused on misinformation, this incident demonstrates the potential for employee action to change the practices of AI companies, including when those practices would otherwise be profitable for the company.

#### *4.4. Focus Media Attention on Constructive Debates*

Public media can inadvertently spread misinformation via the journalistic norm of balance. For the sake of objectivity, journalists often aim to cover "both sides" of an issue. While this can be constructive for some issues, it can also spread misinformation. For example, media coverage has often presented "both sides" of the "debate" over whether tobacco causes cancer or whether human activity causes global warming, even when one side is clearly correct and the other side has a clear conflict of interest [10,13].

One potential response for this is to attempt to focus media attention on legitimate open questions about a given issue, questions for which there are two meaningful sides to cover. For global warming, this could be a debate over the appropriate role of nuclear power or the merits of carbon taxes. For superintelligence, it could be a debate over the appropriate role of government regulations, or over the values that superintelligence (or AI in general) should be designed to promote. These sorts of debates satisfy the journalistic interest in covering two sides of an issue and provide a dramatic tension that can make for a better story, all while drawing attention to important open questions and affirming basic information about the topic.

#### *4.5. Establish Legal Requirements*

Finally, there may be some potential to legally require certain actors, especially corporations, to refrain from spreading misinformation. A notable precedent is the court decision in United States v. Philip Morris, in which nine tobacco companies and two tobacco trade organizations were found guilty of conspiring to deceive the public about the link between tobacco and cancer. Such legal decisions can have a powerful effect.

However, legal requirements may be poorly suited to superintelligence misinformation. First, legal requirements can be slow to develop. The court case United States v. Philip Morris began in 1999, an initial ruling was reached in 2006, and that ruling was upheld in 2009. Furthermore, United States v. Philip Morris came only after several decades of tobacco industry misinformation. Given the evolving nature of AI technology, it could be difficult to pin down which information is correct over such long time periods. Second, superintelligence is a future technology for which much of the correct information cannot be established with the same degree of rigor. Furthermore, if and when superintelligence is built, it could be so transformative as to render current legal systems irrelevant. (For more general discussion of the applicability of legal mechanisms to superintelligence, see [46–48].) For these reasons, legal requirements are less likely to play a significant role in preventing superintelligence misinformation.

#### **5. Correcting Superintelligence Misinformation**

Correcting misinformation is sufficiently difficult that it will often be better to prevent it from spreading in the first place. However, when superintelligence misinformation cannot be prevented, there are strategies available for correcting it in the minds of those who are exposed to it. Correcting misinformation is the subject of a fairly extensive literature in psychology, political science, and related fields [2,15,33,49]. For readers unfamiliar with this literature, Cook et al. [2] provide an introductory overview accessible to an interdisciplinary readership, while Ecker et al. [15] provide a more detailed and technical survey. This section applies this literature to the correction of superintelligence misinformation.

#### *5.1. Build Expert Consensus and the Perception Thereof*

At present, there exists substantial expert disagreement about a wide range of aspects of superintelligence, from basic matters such as whether superintelligence is possible [50–52] and when it might occur if it does [53–55], to subtler matters such as the treacherous turn vs. the sordid stumble. The situation stands in contrast to the extensive expert consensus on other issues such as global warming [56]. (Experts lack consensus on some important details about global warming, such as how severe the damage is likely to be, but they have a high degree of consensus on the basic contours of the issue.)

The case of global warming shows that expert consensus on its own does not counteract misinformation. On the contrary, misinformation about global warming continues to thrive despite the existence of consensus. However, there is reason to believe that the consensus helps. For starters, much of the misinformation is specifically oriented towards creating the false perception that there is no consensus [10]. The scientific consensus is a target of misinformation because it is believed to be an important factor in people's overall beliefs. Indeed, several studies have documented a strong correlation among the lay public between rejection of the science of global warming and belief that there is no consensus [57,58]. Further studies find that presenting messages describing the consensus increases belief in climate science and support for policy to reduce greenhouse gas emissions [14,59]. Notably, this effect is observed for people across the political spectrum, including those who would have political motivation to doubt the science. (Such motivations are discussed further in Section 5.2.) All of this indicates an important role for expert consensus in broader beliefs about global warming.

For superintelligence, at present there is no need to spread misinformation about the existence of consensus because there is rather little consensus. Therefore, a first step is to work towards consensus. (This of course should be consensus grounded on the best possible analysis, not consensus for the sake of consensus.) This may be difficult for superintelligence because of the inherent challenge of understanding future technologies and the complexity of advanced AI. Global warming has its own complexities, but the core science is relatively simple: increased atmospheric greenhouse gas concentrations trap heat and raise temperatures. However, at least some aspects of superintelligence should be easy enough to get consensus on, starting with the fact that there are a number of R&D groups attempting to build AGI. Other aspects may be more difficult to build consensus on, but this consensus is at least something that can be pursued via normal channels of expert communication: research articles, conference symposia, private correspondence, and so on.

Given the existence of consensus, it is also important to raise awareness about it. The consensus cannot counteract misinformation if nobody knows about it. The global warming literature provides good models for documenting expert consensus [56], and such findings of consensus can likewise be publicized.
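To make the documentation step concrete, the following is a minimal sketch, not drawn from the cited literature, of how an expert consensus estimate might be computed from survey responses. The survey question, the responses, and the use of a simple normal-approximation confidence interval are all illustrative assumptions; actual consensus studies such as [56] use more careful sampling and coding methodology.

```python
# Illustrative sketch: estimating an expert consensus share from
# hypothetical survey responses (1 = agrees, 0 = disagrees).
import math

responses = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1]

n = len(responses)
p = sum(responses) / n                 # estimated consensus share
se = math.sqrt(p * (1 - p) / n)        # standard error of the proportion
lo, hi = p - 1.96 * se, p + 1.96 * se  # normal-approximation 95% CI

print(f"Consensus: {p:.0%} (95% CI: {lo:.0%} to {hi:.0%}, n={n})")
```

Publicizing a single headline figure with its uncertainty, rather than raw tallies, mirrors how the global warming consensus has been communicated.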

#### *5.2. Address Pre-Existing Motivations for Believing Misinformation*

The human mind tends to not process new information in isolation, but instead processes it in relation to wider beliefs and understandings of the world. This can be very valuable, enabling us to understand the context behind new information and relate it to existing knowledge. For example, people would typically react with surprise and confusion upon seeing an object rise up to the ceiling instead of fall down to the floor. This new information is related to a wider understanding of the fact that objects fall downwards. People may even struggle to believe their own eyes unless there is a compelling explanation. (For example, perhaps the object and the ceiling are both magnetized.) Additionally, if people did not see it with their own eyes, but instead heard it reported by someone else, they may be even less likely to believe it. In other words, they are motivated to believe that the story is false, even if it is true. This phenomenon is known as *motivated reasoning*.

While generally useful, motivated reasoning can be counterproductive in the context of misinformation, prompting people to selectively believe misinformation over correct information. This occurs in particular when the misinformation accords better with preexisting beliefs than the correct information. In the above example, misinformation could be that the object fell down to the floor instead of rising to the ceiling.

Motivated reasoning is a major factor in the belief of misinformation about politically contentious issues such as climate change. The climate science consensus is rejected mainly by people who believe that government regulation of industry is generally a bad thing [14,59]. In principle, belief that humans are warming the planet should have nothing to do with belief that government regulations are harmful. It is logically coherent to believe in global warming yet argue that carbon emissions should not be regulated. However, in practice, the science of global warming often threatens people's wider beliefs about regulations, and so they find themselves motivated to reject the science.

Motivated reasoning can also be a powerful factor for beliefs about superintelligence. A basic worldview is that humans are in control. Per this worldview, human technology is a tool; the idea that it could rise up against humanity is a trope for science fiction, not something to be taken seriously. The prospect of superintelligence threatens this worldview, predisposing people to not take superintelligence seriously. In this context, it may not help that media portrayals of the scholarly debate about superintelligence commonly include reference to science fiction, such as by using pictures of the Terminator. As one expert who is concerned about superintelligence states, "I think that at this point all of us on all sides of this issue are annoyed with the journalists who insist on putting a picture of the Terminator on every single article they publish of this topic" [60].

Motivated reasoning has been found to be linked to people's sense of self-worth. As one study puts it, "the need for self-integrity—to see oneself as good, virtuous, and efficacious—is a basic human motivation" [61] (p. 415). When correct information threatens people's self-worth, they are more motivated to instead believe misinformation, so as to preserve their self-worth. Furthermore, motivated reasoning can be reduced by having people consciously reaffirm their own self-worth, such as by recalling to themselves ways in which they successfully live up to their personal values [61]. Essentially, with their sense of self-worth firmed up, they become more receptive to information that would otherwise threaten their self-worth.

As a technology that could outperform humans, superintelligence could pose an especially pronounced threat to people's sense of self-worth. It may be difficult for people to feel good and efficacious if they would soon be superseded by computers. For at least some people, this could be a significant reason to reject information about the prospect of superintelligence, even if that information is true. For this reason, it may be valuable for messages about superintelligence to be paired with messages of affirmation.

Another important set of motivations comes from the people active in superintelligence debates. Many people in the broader computer science field of AI have been skeptical of claims about superintelligence. These people may be motivated by a desire to protect the reputation and funding of the field of AI, and in turn protect their self-worth as AI researchers. AI has a long history of boom-bust cycles in which hype about superintelligence and related advanced AI falls flat and contributes to an "AI winter". Peter Bentley, an AI computer scientist who has spoken out against contemporary claims about superintelligence, is explicit about this:

"Large claims lead to big publicity, which leads to big investment, and new regulations. And then the inevitable reality hits home. AI does not live up to the hype. The investment dries up. The regulation stifles innovation. And AI becomes a dirty phrase that no-one dares speak. Another AI Winter destroys progress" [62] (p. 10). "Do not be fearful of AI—marvel at the persistence and skill of those human specialists who are dedicating their lives to help create it. And appreciate that AI is helping to improve our lives every day" (p. 11).

While someone's internal motivations can only be inferred from such text, the text is at least suggestive of motivations to protect self-worth and livelihood as an AI researcher, as well as a worldview in which AI is a positive force for society.

To take another example, Torres [9] proposes that Pinker's dismissal of AGI and superintelligence is motivated by Pinker's interest in promoting a narrative in which science and technology bring progress—a narrative that could be threatened by the potential catastrophic risk from superintelligence.

Conversely, some people involved in superintelligence debates may be motivated to believe in the prospect of superintelligence. For example, researcher Jürgen Schmidhuber writes on his website that "since age 15 or so, the main goal of professor Jürgen Schmidhuber has been to build a self-improving Artificial Intelligence (AI) smarter than himself, then retire." [63] Superintelligence is also sometimes considered the "grand dream" of AI [64]. Other common motivations include a deep interest in transformative future outcomes [65] and a deep concern about extreme catastrophic risks [4,66,67]. People with these worldviews may be predisposed to believe certain types of claims about superintelligence. If it turns out that superintelligence will not be built, or would not have transformative or catastrophic effects, then this can undercut people's deeply held beliefs in the importance of superintelligence, transformative futures, and/or catastrophic risks.

For each of these motivations for interest in superintelligence, there can be information that is rejected because it cuts against the motivations and misinformation that is accepted because it supports the motivations. Therefore, in order to advance superintelligence debates, it can be valuable to affirm people's motivations when presenting conflicting information. For example, one could affirm that AI computer scientists are making impressive and important contributions to the world, and then explain reasons why superintelligence may nonetheless be a possibility worth considering. One could affirm that science and technology are bringing a great deal of progress, and then explain reasons why some technologies could nonetheless be dangerous. One could affirm that superintelligence is indeed a worthy dream, or that transformative futures are indeed important to pay attention to, and then explain reasons why superintelligence might not be built. Finally, one could affirm that extreme catastrophic risks are indeed an important priority for human society, and then explain reasons why superintelligence may not be such a large risk after all. These affirming messaging strategies could predispose participants in superintelligence debates to consider a wider range of possibilities and make more progress on the issue, including progress towards expert consensus.

Another strategy is to align motivations with accurate beliefs about superintelligence. For example, some AI computer scientists may worry that belief in the possibility of superintelligence could damage reputation and funding. However, if belief in the possibility of superintelligence would bring reputational and funding benefits, then the same people may be more comfortable expressing such belief. Reputational benefits could be created, for example, via slots in high-profile conferences and journals, or by association with a critical mass of reputable computer scientists who also believe in the possibility of superintelligence. Funding could likewise be made available. Noting that funding and space in conferences and journals are often scarce resources, it could be advantageous to target these resources at least in part toward shifting motivations of important actors in superintelligence debates. This example of course assumes that it is correct to believe in the possibility of superintelligence. The same general strategy of aligning motivations may likewise be feasible for other beliefs about superintelligence.

The above examples—concerning the reputations and funding of AI computer scientists, the possibility of building superintelligence, and the importance of transformative futures and catastrophic risks—all involve experts or other communities that are relatively attentive to the prospect of superintelligence. Other motivations could be significant for the lay public, policy makers, and other important actors. Research on the public understanding of science finds that cultural factors, such as political ideology, can factor significantly in the interpretation of scientific information [68,69]. Kahan et al. [69] (p. 79) propose to "shield" scientific evidence and related information "from antagonistic cultural information". For superintelligence, this could mean attempting to frame superintelligence (or, more generally, AI) as a nonpartisan social issue. At least in the US, if an issue becomes politically partisan, legislation typically becomes substantially more difficult to pass. Likewise, discussions of AI and superintelligence should, where reasonably feasible, attempt to avoid close association with polarizing ideologies and cultural divisions.

The fact that early US legislation on AI has been bipartisan is encouraging. For example, H.R.4625, the FUTURE of Artificial Intelligence Act of 2017, was sponsored by John Delaney (Democrat) and co-sponsored by Pete Olson (Republican), and H.R.5356, the National Security Commission Artificial Intelligence Act of 2018, was sponsored by Elise Stefanik (Republican) and co-sponsored by James Langevin (Democrat). This is a trend that should be praised and encouraged to continue.

#### *5.3. Inoculate with Advance Warnings*

The misinformation literature has developed the concept of *inoculation*, in which people are preemptively educated about a piece of misinformation so that they will not believe it if and when they later hear it. For example, someone might be told that there is a false rumor that vaccines cause autism, such that when they later hear the rumor, they know to recognize it as false. The aim is to get people to correctly understand the truth about a piece of misinformation from the beginning, so that their minds never falsely encode it. Inoculation has been found to work better than simply telling people the correct information [70].

Inoculation messages can include why a piece of misinformation is incorrect as well as why it is being spread [71]. For example, misinformation casting doubt on the idea that global temperatures are rising could be inoculated with an explanation of how scientists have established that global temperatures are rising. The inoculation could also explain that industries are intentionally casting doubt about global temperature increases in order to avoid regulations and increase profits. Likewise, for superintelligence, misinformation claiming that there are no projects seeking to build AGI could be inoculated by explanations of the existence of AGI R&D projects, and perhaps also explanations of the motivations of people who claim that there are no such projects, such as the progress narrative that Torres [9] attributes to Pinker (see Section 5.2).

#### *5.4. Explain Misinformation and Corrections*

When people are exposed to misinformation, it can be difficult to correct, as first explained in Section 4. This phenomenon has been studied in great depth, with the terms "continued influence" and "belief perseverance" used for cases in which debunked information continues to influence people's thinking [72,73]. There is also an "illusion of truth", in which information explained to be false is later misremembered as true—essentially, the mind remembers the information but forgets its falsity [74]. The difficulty of correcting misinformation is why this paper has emphasized strategies to prevent misinformation from spreading in the first place.

Adding to the challenge is the fact that attempts to debunk misinformation can inadvertently reinforce it. This phenomenon is known as the "backfire effect" [74]. Essentially, when someone hears "X is false", it can strengthen their mental representation of X, thereby reinforcing the misinformation. This effect has been found to be especially pronounced among the elderly [74]. One explanation is that correcting the misinformation (i.e., successfully processing "X is false") requires the use of strategic memory, but strategic memory requires dedicated mental effort and is less efficient among the elderly [15]. Unless enough strategic memory is allocated to processing "X is false", the statement can end up reinforcing belief in X.

These findings about the backfire effect have important consequences for superintelligence misinformation. Fortunately, many important audiences for superintelligence misinformation are likely to have strong strategic memories. Among the prominent actors in superintelligence debates, relatively few are elderly, and many of them have intellectual pedigrees that may endow them with strong strategic memories. On the other hand, many of the prominent actors are busy people with limited mental energy available for processing corrections about superintelligence information. As a practical matter, people attempting to debunk superintelligence misinformation should generally avoid "X is false" messages, especially when their audience may be paying limited attention.

One technique that has been particularly successful at correcting misinformation is the use of refutational text, which provides detailed explanations of why the misinformation is incorrect, what the correct information is, and why it is correct. Refutational text has been used mainly as a classroom tool for helping students overcome false preexisting beliefs about course topics [75,76]. Refutational text has even been used to turn misinformation into a valuable teaching tool [77]. A meta-analysis found refutational text to be the most effective technique for correcting misinformation in the context of science education—that is, for enabling students to overcome preexisting misconceptions about science topics [78].

A drawback of refutational text is that it can require more effort and attention than simpler techniques. Refutational text may be a valuable option in classrooms or other settings in which one has an audience's extended attention. Such settings include many venues of scholarly communication, which can be important for superintelligence debates. However, refutational texts may be less viable in other settings, such as social media and television news program interviews, in which one can often only get in a short sound bite. Therefore, refutational text may be relatively well-suited for interactions with experts and other highly engaged participants in superintelligence debates, and relatively poorly suited for much of the lay public and others who may only hear occasional passing comments about superintelligence. That said, it may still be worth producing and disseminating extended refutations for lay public audiences, such as in long-format videos and articles for television, magazines, and online. These may tend to only reach the most motivated segments of the lay public, but they can nonetheless be worthwhile.

#### **6. Conclusions**

Superintelligence is a high-stakes potential future technology as well as a highly contested socio-technological issue. It is also fertile terrain for misinformation. Making progress on the issue requires identifying and rejecting misinformation and accepting accurate information. Some progress will require technical research to clarify the nature of superintelligence. However, a lot of progress will likely also require the sorts of sociological and psychological strategies outlined in this paper. The most progress may come from interdisciplinary projects connecting computer science, social science, and other relevant fields. Computer science is a highly technical field, but as with all fields, it is ultimately composed of human beings. By appreciating the nuances of the human dimensions of the field, it may be possible to make better progress towards understanding superintelligence and acting responsibly about it.

As the first dedicated study of strategies for countering superintelligence misinformation, this paper has taken a broad view, surveying a range of options. Despite this breadth, there may still be additional options worth further attention. Indeed, this paper has only mined a portion of the insights contained within the existing literature on misinformation. There may also be compelling options that go beyond the literature. Likewise, because of this paper's breadth, it has given relatively shallow treatment to each of the options. More detailed attention to the various options would be another worthy focus of future research.

An especially valuable focus would be the proposed strategies for preventing superintelligence misinformation. Because misinformation can be so difficult to correct, preventing it may be the more effective strategy. There is also less prior research on the prevention of misinformation. For these reasons, there is likely to be an abundance of important research opportunities on the prevention of misinformation, certainly for superintelligence misinformation and perhaps also for misinformation in general.

For the prevention of superintelligence misinformation, a strategy that may be particularly important to study further is dissuading AI corporations from using their substantial resources to spread superintelligence misinformation. The long history of corporations engaging in such tactics, with a major impact on the surrounding debates, suggests that this could be a highly important factor for superintelligence [12]. It may be especially valuable to study this at an early stage, before such tactics are adopted.

For the correction of superintelligence misinformation, a particularly promising research direction concerns the motivations and worldviews of prominent actors and audiences in superintelligence debates. Essentially, what are people's motivations with respect to superintelligence? Are AI experts indeed motivated to protect their field? Are superintelligence developers motivated by the "grand dream"? Are others who believe in the prospect of superintelligence motivated by beliefs about transformative futures or catastrophic risks? Can attention to these sorts of motivations help these actors overcome their divergent worldviews and make progress towards consensus on the topic? Finally, are people in general motivated to retain their sense of self-worth in the face of a technology that could render them inferior?

Most important, however, is not the research on superintelligence misinformation, but the efforts to prevent and correct it. It can often be stressful and thankless work, especially amidst the heated debates, but it is essential to ensuring positive outcomes. This paper is one effort towards helping this work succeed. Given the exceptionally high potential stakes, it is vital that decisions about superintelligence be well-informed.

**Funding:** This research received no external funding.

**Acknowledgments:** Olle Häggström, Tony Barrett, Brendan Nyhan, Maurizio Tinnirello, Stephan Lewandowsky, Michael Laakasuo, Phil Torres, and three anonymous reviewers provided helpful feedback on earlier versions of this paper. All remaining errors are the author's alone. The views expressed in this paper are the author's and not necessarily the views of the Global Catastrophic Risk Institute.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

*Article* **Superintelligence Skepticism as a Political Tool**

#### **Seth D. Baum**

Global Catastrophic Risk Institute, P.O. Box 40364, Washington, DC 20016, USA; seth@gcrinstitute.org

Received: 27 June 2018; Accepted: 17 August 2018; Published: 22 August 2018

**Abstract:** This paper explores the potential for skepticism about artificial superintelligence to be used as a tool for political ends. Superintelligence is AI that is much smarter than humans. Superintelligence does not currently exist, but it has been proposed that it could someday be built, with massive and potentially catastrophic consequences. There is substantial skepticism about superintelligence, including whether it will be built, whether it would be catastrophic, and whether it is worth current attention. To date, superintelligence skepticism appears to be mostly honest intellectual debate, though some of it may be politicized. This paper finds substantial potential for superintelligence skepticism to be (further) politicized, due mainly to the potential for major corporations to have a strong profit motive to downplay concerns about superintelligence and avoid government regulation. Furthermore, politicized superintelligence skepticism is likely to be quite successful, due to several factors including the inherent uncertainty of the topic and the abundance of skeptics. The paper's analysis is based on characteristics of superintelligence and the broader AI sector, as well as the history and ongoing practice of politicized skepticism on other science and technology issues, including tobacco, global warming, and industrial chemicals. The paper contributes to literatures on politicized skepticism and superintelligence governance.

**Keywords:** artificial intelligence; superintelligence; skepticism

#### **1. Introduction**

The purpose of this paper is to explore the potential for skepticism about artificial superintelligence to be used for political ends. Artificial superintelligence (for brevity, henceforth just *superintelligence*) refers to AI that is much smarter than humans. Current AI is not superintelligent, but the prospect of superintelligence is a topic of much discussion in scholarly and public spheres. Some believe that superintelligence could someday be built, and that, if it is built, it would have massive and potentially catastrophic consequences. Others are skeptical of these beliefs. While much of the existing skepticism appears to be honest intellectual debate, there is potential for it to be politicized for other purposes.

In simple terms (to be refined below), *politicized skepticism* can be defined as public articulation of skepticism that is intended to achieve some outcome other than an improved understanding of the topic at hand. Politicized skepticism can be contrasted with *intellectual skepticism*, which seeks an improved understanding. Intellectual skepticism is essential to scholarly inquiry; politicized skepticism is not. The distinction between the two is not always clear; statements of skepticism may have both intellectual and political motivations. The two concepts can nonetheless be useful for understanding debates over issues such as superintelligence.

There is substantial precedent for politicized skepticism. Of particular relevance for superintelligence is politicized skepticism about technologies and products that are risky but profitable, henceforth *risk–profit politicized skepticism*. This practice dates to 1950s debates over the link between tobacco and cancer and has since been dubbed the *tobacco strategy* [1]. More recently, the strategy has been applied to other issues including the link between fossil fuels and acid rain, the link between fossil fuels and global warming, and the link between industrial chemicals and neurological disease [1,2]. The essence of the strategy is to promote the idea that the science underlying certain risks is unresolved, and therefore the implicated technologies should not be regulated. The strategy is typically employed by an interconnected mix of industry interests and ideological opponents of regulation. The target audience is typically a mix of government officials and the general public, and not the scientific community.

As is discussed in more detail below, certain factors suggest the potential for superintelligence to be a focus of risk–profit politicized skepticism. First and foremost, superintelligence could be developed by major corporations with a strong financial incentive to avoid regulation. Second, there already exists a lot of skepticism about superintelligence, which could be exploited for political purposes. Third, as an unprecedented class of technology, superintelligence is inherently uncertain, which suggests that superintelligence skepticism may be especially durable, even within apolitical scholarly communities. These and other factors do not guarantee that superintelligence skepticism will be politicized, or that its politicization would follow the same risk–profit patterns as the tobacco strategy. However, these factors are at least suggestive of the possibility.

Superintelligence skepticism may also be politicized in a different way: to protect the reputations and funding of the broader AI field. This form of politicized skepticism is less well-documented than the tobacco strategy, and appears to be less common. However, there are at least hints of it for fields of technology involving both grandiose future predictions and more mundane near-term work. AI is one such field of technology, in which grandiose predictions of superintelligence and other future AI breakthroughs contrast with more modest forms of near-term AI. Another example is nanotechnology, in which grandiose predictions of molecular machines contrast with near-term nanoscale science and technology [3].

The basis of the paper's analysis is twofold. First, the paper draws on the long history of risk–profit politicized skepticism. This history suggests certain general themes that may also apply to superintelligence. Second, the paper examines characteristics of superintelligence development to assess the prospect of skepticism being used politically in this context. To that end, the paper draws on the current state of affairs in the AI sector, especially for artificial general intelligence, which is a type of AI closely related to superintelligence. The paper further seeks to inform efforts to avoid any potential harmful effects from politicized superintelligence skepticism. The effects would not necessarily be harmful, but the history of risk–profit politicized skepticism suggests that they could be.

This paper contributes to literatures on politicized skepticism and superintelligence governance. Whereas most literature on politicized skepticism (and similar concepts such as denial) is backward-looking, consisting of historical analysis of skepticisms that have already occurred [1,2,4–7], this paper is largely (but not exclusively) forward-looking, consisting of prospective analysis of skepticisms that could occur at some point in the future. Meanwhile, the superintelligence governance literature has looked mainly at institutional regulations to prevent research groups from building dangerous superintelligence and support for research on safety measures [8–11]. This paper contributes to a smaller literature on the role of corporations in superintelligence development [12] and on social and psychological aspects of superintelligence governance [13].

This paper does not intend to take sides on which beliefs about superintelligence are most likely to be correct. Its interest is in the potential political implications of superintelligence skepticism, not in the underlying merits of the skepticism. The sole claim here is that the possibility of politicized superintelligence skepticism is a worthy topic of study. It is worth studying due to: (1) the potential for large consequences if superintelligence is built; and (2) the potential for superintelligence to be an important political phenomenon regardless of whether it is built. Finally, the topic is also of inherent intellectual interest as an exercise in prospective socio-political analysis on a possible future technology.

The paper is organized as follows. Section 2 presents a brief overview of superintelligence concerns and skepticisms. Section 3 further develops the concept of politicized skepticism and surveys the history of risk–profit politicized skepticism, from its roots in tobacco to the present day. Section 4 discusses prospects for politicized superintelligence skepticism. Section 5 discusses opportunities for constructive action. Section 6 concludes.

#### **2. Superintelligence and Its Skeptics**

The idea of humans being supplanted by their machines dates to at least the 1863 work of Butler [14]. In 1965, Good presented an early exposition on the topic within the modern field of computer science [15]. Good specifically proposed an "intelligence explosion" in which intelligent machines make successively more intelligent machines until they are much smarter than humans, which would be "the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control" [15] (p. 33). This intelligence explosion is one use of the term *technological singularity*, though the term can also refer to wider forms of radical technological change [16]. The term *superintelligence* refers specifically to AI that is much more intelligent than humans and dates to at least the 1998 work of Bostrom [17]. A related term is *artificial general intelligence*, which is AI capable of reasoning across many intellectual domains. A superintelligent AI is likely to have general intelligence, and the development of artificial general intelligence could be a major precursor to superintelligence. Artificial general intelligence is also an active subfield of AI [18,19].

Superintelligence is notable as a potential technological accomplishment with massive societal implications. The effects of superintelligence could include anything from solving a significant portion of the world's problems (if superintelligence is designed well) to causing the extinction of humans and other species (if it is designed poorly). Much of the interest in superintelligence derives from these high stakes. Superintelligence is also of intellectual interest as perhaps the ultimate accomplishment within the field of AI, sometimes referred to as the "grand dream" of AI [20] (p. 125).

Currently, most AI research is on *narrow AI* that is not oriented towards this grand dream. The focus on narrow AI dates to early struggles in the field to make progress towards general AI or superintelligence. After an initial period of hype fell short, the field went through an "AI winter" marked by diminished interest and more modest expectations [21,22]. This prompted a focus on smaller, incremental progress on narrow AI. It should be noted that the term AI winter most commonly refers to a lull in AI in the mid-to-late 1980s and early 1990s. A similar lull occurred in the 1970s, and concerns about a new winter can be found as recently as 2008 [23].

With most of the field focused on narrow AI, artificial general intelligence has persisted only as a small subfield of AI [18]. The AI winter also caused many AI computer scientists to be skeptical of superintelligence, on grounds that superintelligence has turned out to be much more difficult than initially expected, and likewise to be averse to attention to superintelligence, on grounds that such hype could again fall short and induce another AI winter. This is an important historical note because it indicates that superintelligence skepticism has wide salience across the AI computer science community and may already be politicized towards the goal of protecting the reputation of and funding for AI. (More on this below.)

Traces of superintelligence skepticism predate the AI winter. Early AI skepticism dates to the 1965 work of Dreyfus [24], which critiqued the overall field of AI, with some attention to human-level AI though not to superintelligence. Dreyfus traced this skepticism of machines matching human intelligence to a passage in Descartes' 1637 *Discourse on Method* [25]: "it must be morally impossible that there should exist in any machine a diversity of organs sufficient to enable it to act in all the occurrences of life, in the way in which our reason enables us to act."

In recent years, superintelligence has attracted considerable attention. This has likely been prompted by several factors, including a growing scholarly literature (e.g., [9,19,26–29]), highly publicized remarks by several major science and technology celebrities (e.g., Bill Gates [30], Stephen Hawking [31], and Elon Musk [32]), and breakthroughs in the broader field of AI, which draw attention to AI and may make the prospect of superintelligence seem more plausible (e.g., [33,34]). This attention to superintelligence has likewise prompted some more outspoken skepticism. The following is a brief overview of the debate, including both the arguments of the debate and some biographical information about the debaters. (Biographical details are taken from personal and institutional webpages and are accurate as of the time of this writing, May 2018; they are not necessarily accurate as of the time of the publication of the cited literature.) The biographies can be politically significant because, in public debates, some people's words carry more weight than others'. The examples presented below are intended to be illustrative and at least moderately representative of the arguments made in existing superintelligence skepticism (some additional examples are presented in Section 4). A comprehensive survey of superintelligence skepticism is beyond the scope of this paper.

#### *2.1. Superintelligence Cannot Be Built*

Bringsjord [35] argued that superintelligence cannot be built based on reasoning from computational theory. Essentially, the argument is that superintelligence requires a more advanced class of computing, which cannot be produced by humans or existing AI. Bringsjord is Professor of Cognitive Science at Rensselaer Polytechnic Institute and Director of the Rensselaer AI and Reasoning Lab. Chalmers [36] countered that superintelligence does not necessarily require a more advanced class of computing. Chalmers is University Professor of Philosophy and Neural Science at New York University and co-director of the NYU Center for Mind, Brain, and Consciousness.

McDermott [37] argued that advances in hardware and algorithms may be sufficient to exceed human intelligence, but not to massively exceed it. McDermott is Professor of Computer Science at Yale University. Chalmers [36] countered that, while there may be limits to the potential advances in hardware and software, these limits may not be so restrictive as to preclude superintelligence.

#### *2.2. Superintelligence Is Not Imminent Enough to Merit Attention*

Crawford [38] argued that superintelligence is a distraction from issues with existing AI, especially AI that worsens inequalities. Crawford is co-founder and co-director of the AI Now Research Institute at New York University, a Senior Fellow at the NYU Information Law Institute, and a Principal Researcher at Microsoft Research.

Ng argued that superintelligence may be possible, but it is premature to worry about, in particular because it is too different from existing AI systems. Ng memorably likened worrying about superintelligence to worrying about "overpopulation on Mars" [39]. Ng is Vice President and Chief Scientist of Baidu, Co-Chairman and Co-Founder of Coursera, and an Adjunct Professor of Computer Science at Stanford University.

Etzioni [40] argued that superintelligence is unlikely to be built within the next 25 years and is thus not worth current attention. Etzioni is Chief Executive Officer of the Allen Institute for Artificial Intelligence and Professor of Computer Science at University of Washington. Dafoe and Russell [41] countered that superintelligence is worth current attention even if it would take more than 25 years to build. Dafoe is Assistant Professor of Political Science at Yale University and Co-Director of the Governance of AI Program at the University of Oxford. Russell is Professor of Computer Science at University of California, Berkeley. (An alternative counter is that some measures to improve AI outcomes apply to both near-term AI and superintelligence, and thus it is not essential to debate which of the two types of AI should be prioritized [42].)

#### *2.3. Superintelligence Would (Probably) Not Be Catastrophic*

Goertzel [43] argued that superintelligence could be built and is worth paying attention to, but also that superintelligence is less likely to result in catastrophe than is sometimes suggested. Specifically, Goertzel argued that it may be somewhat difficult, but not very difficult, to build superintelligence with values that are considered desirable, and that the human builders of superintelligence would have good opportunities to check that the superintelligence has the right values. Goertzel is the lead for the OpenCog and SingularityNET projects for developing artificial general intelligence. Goertzel [43] wrote in response to Bostrom [28], who suggested that, if built, superintelligence is likely to result in catastrophe. Bostrom is Professor of Applied Ethics at University of Oxford and Director of the Oxford Future of Humanity Institute. (For a more detailed analysis of this debate, see [44].)

Views similar to Goertzel [43] were also presented by Bieger et al. [45], in particular that the AI that is the precursor to superintelligence could be trained by its human developers to have safe and desirable values. Co-authors Bieger and Thórisson are Ph.D. student and Professor of Computer Science at Reykjavik University; co-author Wang is Associate Professor of Computer and Information Sciences at Temple University.

Searle [46] argued that superintelligence is unlikely to be catastrophic, because it would be an unconscious machine incapable of deciding for itself to attack humanity, and thus humans would need to explicitly program it to cause harm. Searle is Professor Emeritus of the Philosophy of Mind and Language at the University of California, Berkeley. Searle [46] wrote in response to Bostrom [28], who argued that superintelligence could be dangerous to humans regardless of whether it is conscious.

#### **3. Skepticism as a Political Tool**

#### *3.1. The Concept of Politicized Skepticism*

There is a sense in which any stated skepticism can be political, insofar as it seeks to achieve certain desired changes within a group. Even the most honest intellectual skepticism can be said to achieve the political aim of advancing a certain form of intellectual inquiry. However, this paper uses the term "politicized skepticism" more narrowly to refer to skepticism with other, non-intellectual aims.

Even with this narrower conception, the distinction between intellectual and politicized skepticism can in practice be blurry. The same skeptical remark can serve both intellectual and (non-intellectual) political aims. People can also have intellectual skepticism that is shaped, perhaps subconsciously, by political factors, as well as politicized skepticism that is rooted in honest intellectual beliefs. For example, intellectuals (academics and the like) commonly have both intellectual and non-intellectual aims, the latter including advancing their careers or making the world a better place per whatever notion of "better" they subscribe to. This can be significant for superintelligence skepticism aimed at protecting the reputations and funding of AI researchers.

It should be stressed that the entanglement of intellectual inquiry and (non-intellectual) political aims does not destroy the merits of intellectual inquiry. This is important to bear in mind at a time when trust in science and other forms of expertise is dangerously low [47,48]. Scholarship can be a social and political process, but, when performed well, it can nonetheless deliver important insights about the world. For all people, scholars included, improving one's understanding of the world takes mental effort, especially when one is predisposed to believe otherwise. Unfortunately, many people are not inclined to make the effort, and other people are making efforts to manipulate ideas for their own aims. An understanding of politicized skepticism is essential for addressing major issues in this rather less-than-ideal epistemic era.

Much of this paper is focused on risk–profit politicized skepticism, i.e., skepticism about concerns about risky and profitable technologies and products. Risk–profit politicized skepticism is a major social force, as discussed throughout this paper, although it is not the only form of politicized skepticism. Other forms include politicized skepticism by concerned citizens, such as skepticism about scientific claims that vaccines or nuclear power plants are safe; by religious activists and institutions, expressing skepticism about claims that humans evolved from other species; by politicians and governments, expressing skepticism about events that cast them in an unfavorable light; and by intellectuals as discussed above. Thus, while this paper largely focuses on skepticism aimed at casting doubt about concerns about risky and profitable technologies and products, it should be understood that this is not the only type of politicized skepticism.

#### *3.2. Tobacco Roots*

As mentioned above, risk–profit politicized skepticism traces to 1950s debates on the link between tobacco and cancer. Specifically, in 1954, the tobacco industry formed the Tobacco Industry Research Committee, an "effort to foster the impression of debate, primarily by promoting the work of scientists whose views might be useful to the industry" [1] (p. 17). The committee was led by C. C. Little, who was a decorated genetics researcher and past president of the University of Michigan, as well as a eugenics advocate who believed cancer was due to genetic weakness and not to smoking.

In the 1950s, there was substantial evidence linking tobacco to cancer, but the link was not as conclusively established as it is now. The tobacco industry exploited this uncertainty in public discussions of the issue. It succeeded in getting major media to often present the issue as a debate between scientists who accepted and scientists who rejected the tobacco–cancer link. Among the media figures to do this was the acclaimed journalist Edward Murrow, himself a smoker who, in tragic irony, later died from lung cancer. Oreskes and Conway speculated that, "Perhaps, being a smoker, he was reluctant to admit that his daily habit was deadly and reassured to hear that the allegations were unproven" [1] (pp. 19–20).

Over subsequent decades, the tobacco industry continued to fund work that questioned the tobacco–cancer link, enabling it to dodge lawsuits and regulations. Then, in 1999, the United States Department of Justice filed a lawsuit against nine tobacco companies and two tobacco trade organizations (United States v. Philip Morris). The US argued that the tobacco industry conspired over several decades to deceive the public, in violation of the Racketeer Influenced and Corrupt Organizations (RICO) Act, which covers organized crime. In 2006, the US District Court for the District of Columbia found the tobacco industry guilty, a ruling upheld unanimously in 2009 by the US Court of Appeals. This ruling and other measures have helped to protect people from lung cancer, but many more people could have avoided lung cancer were it not for the tobacco industry's politicized skepticism.

#### *3.3. The Character and Methods of Risk–Profit Politicized Skepticism*

The tobacco case provided a blueprint for risk–profit politicized skepticism that has since been used for other issues. Writing in the context of politicized environmental skepticism, Jacques et al. [4] (pp. 353–354) listed four overarching themes: (1) rejection of scientific findings of environmental problems; (2) de-prioritization of environmental problems relative to other issues; (3) rejection of government regulation of corporations and corporate liability; and (4) portrayal of environmentalism as a threat to progress and development. The net effect is to reduce interest in government regulation of corporate activities that may pose harms to society.

The two primary motivations of risk–profit politicized skepticism are the protection of corporate profits and the advancement of anti-regulatory political ideology. The protection of profits is straightforward: from the corporation's financial perspective, the investment in politicized skepticism can bring a substantial return. The anti-regulatory ideology is only slightly subtler. Risk–profit politicized skepticism is often associated with pro-capitalist, anti-socialist, and anti-communist politics. For example, some political skeptics liken environmentalists to watermelons: "green on the outside, red on the inside" [1] (p. 248), while one feared that the Earth Summit was a socialist plot to establish a "World Government with central planning by the United Nations" [1] (p. 252). For these people, politicized skepticism is a way to counter discourses that could harm their political agenda.

Notably, both the financial and the ideological motivations are not inherently about science. Instead, the science is manipulated towards other ends. This indicates that the skepticism is primarily political and not intellectual. It may still be intellectually honest in the sense that the people stating the skepticism are actually skeptical. That would be consistent with author Upton Sinclair's saying that "It is difficult to get a man to understand something when his salary depends upon his not understanding it." The skepticism may nonetheless violate that essential intellectual virtue of letting conclusions follow from analysis, and not the other way around. For risk–profit politicized skepticism, the desired conclusion is typically the avoidance of government regulation of corporate activity, and the skepticism is crafted accordingly.

To achieve this end, the skeptics will often engage in tactics that clearly go beyond honest intellectual skepticism and ordinary intellectual exchange. For example, ExxonMobil has been found to express extensive skepticism about climate change in its public communications (such as newspaper advertisements), but much less skepticism in its internal communications and peer-reviewed publications [7]. This finding suggests that ExxonMobil was aware of the risks of climate change and misled the public about the risks. ExxonMobil reportedly used its peer-reviewed publications for "the credentials required to speak with authority in this area", including in its conversations with government officials [7] (p. 15), even though these communications may have presented climate change risk differently than the peer-reviewed publications did. (As an aside, it may be noted that the ExxonMobil study [7], published in 2017, has already attracted a skeptic critique by Stirling [49]. Stirling is Communications Manager of the Canadian nonprofit Friends of Science. Both Stirling and Friends of Science are frequent climate change skeptics [50].)

While the skeptics do not publicly confess dishonesty, there are reports that some of them have privately done so. For example, Marshall [51] (p. 180) described five energy corporation presidents who believed that climate change was a problem and "admitted, off the record, that the competitive environment forced them to suppress the truth about climate change" to avoid government regulations. Similarly, US Senator Sheldon Whitehouse, an advocate of climate policy to reduce greenhouse gas emissions, reported that some of his colleagues publicly oppose climate policy but privately support it, with one even saying "Let's keep talking—but don't tell my staff. Nobody else can know" [52] (p. 176). Needless to say, any instance in which skepticism is professed by someone who is not actually skeptical is a clear break from the intellectual skepticism of ordinary scholarly inquiry.

One particularly distasteful tactic is to target individual scientists, seeking to discredit their work or even intimidate them. For example, Philippe Grandjean, a distinguished environmental health researcher, reported that the tuna industry once waged a \$25 million advertising campaign criticizing work by himself and others who have documented links between tuna, mercury, and neurological disease. Grandjean noted that \$25 million is a small sum for the tuna industry but more than the entire sum of grant funding he received for mercury research over his career, indicating a highly uneven financial playing field [2] (pp. 119–120). In another example, climate scientists accused a climate skeptic of bullying and intimidation and reported receiving "a torrent of abusive and threatening e-mails after being featured on" the skeptic's blog, which calls for climate scientists "to be publicly flogged" [51] (p. 151).

Much of the work, however, is far subtler than this. Often, it involves placing select individuals in conferences, committees, or hearings, where they can ensure that the skeptical message is heard in the right places. For example, Grandjean [2] (p. 129) recounted a conference sponsored by the Electric Power Research Institute, which gave disproportionate floor time to research questioning the health effects of mercury. In another episode, the tobacco industry hired a recently retired World Health Organization committee chair to "volunteer" as an advisor to the same committee, which then concluded to not restrict use of a tobacco pesticide [2] (p. 125).

Another common tactic is to use outside organizations as the public face of the messaging. This tactic can convey the impression that the skepticism is done in the interest of the public and not of private industry. Grandjean [2] (p. 121) wrote that "organizations, such as the Center for Science and Public Policy, the Center for Indoor Air Research, or the Citizens for Fire Safety Institute, may sound like neutral and honest establishments, but they turned out to be 'front groups' for financial interests." Often, the work is done by think tanks. Jacques et al. [4] found that over 90% of books exhibiting environmental skepticism are linked to conservative think tanks, and 90% of conservative think tanks are active in environmental skepticism. This finding is consistent with recent emphasis in US conservatism on unregulated markets. (Earlier strands of US conservatism were more supportive of environmental protection, such as the pioneering American conservative Russell Kirk, who wrote that "There is nothing more conservative than conservation" [53].)

#### *3.4. The Effectiveness of Politicized Skepticism*

Several broader phenomena help make politicized skepticism so potent, especially for risk–profit politicized skepticism. One is the enormous amounts of corporate money at stake with certain government regulations. When corporations use even a tiny fraction of this for politicized skepticism, it can easily dwarf other efforts. Similarly, US campaign finance laws are highly permissive. Whitehouse [52] traced the decline in bipartisan Congressional support for climate change policy to the Supreme Court's 2010 *Citizens United* ruling, which allows unlimited corporate spending in elections. However, even without election spending, corporate assets tilt the playing field substantially in the skeptics' favor.

Another important factor is the common journalistic norm of balance, in which journalists seek to present "both sides" of an issue. This can put partisan voices on equal footing with independent science, as seen in early media coverage of tobacco. It can also amplify a small minority of dissenting voices, seen more recently in media coverage of climate change. Whereas the scientific community has overwhelming consensus that climate change is happening, that it is caused primarily by human activity, and that the effects will be mainly harmful, public media features climate change skepticism much more than its scientific salience would suggest [54]. (For an overview of the scientific issues related to climate change skepticism, see [55]; for documentation of the scientific consensus, see [56].)
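As a rough illustrative calculation, suppose (using the widely cited figure for climate science) that 97% of experts accept a finding, and that coverage follows a strict one-skeptic-per-advocate balance format. Then:

$$\text{apparent skeptic share} = \frac{1}{2} = 0.50, \qquad \text{actual share} \approx 0.03, \qquad \frac{0.50}{0.03} \approx 17.$$

That is, one-for-one balance can present a roughly 3% minority as half of expert opinion, an amplification of more than an order of magnitude.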

A third factor is the tendency of scientists to be cautious with respect to uncertainty. Scientists often aspire to avoid stating anything incorrect and to focus on what can be rigorously established instead of discussing more speculative possibilities. Scientists will also often highlight remaining uncertainties even when basic trends are clear. "More research is needed" is likely the most ubiquitous conclusion of any scientific research. This tendency makes it easier for other parties to make the state of the science appear less certain than it actually is. Speaking to this point in a report on climate change and national security, former US Army Chief of Staff Gordon Sullivan states "We seem to be standing by and, frankly, asking for perfectness in science ... We never have 100 percent certainty. We never have it. If you wait until you have 100 percent certainty, something bad is going to happen on the battlefield" [57] (p. 10).

A fourth factor is the standard, found in some (but not all) policy contexts, of requiring robust evidence of harm before pursuing regulation. In other words, the burden of proof is on those who wish to regulate, and the potentially harmful product is presumed innocent until proven guilty. Grandjean [2] cited this as the most important factor preventing the regulation of toxic chemicals in the US. Such a protocol makes regulation very difficult, especially for complex risks that resist precise characterization. In these policy contexts, the amplification of uncertainty can be particularly impactful.

To sum up, risk–profit politicized skepticism is a longstanding and significant tool used to promote certain political goals. It has been used heavily by corporations seeking to protect profits and people with anti-regulatory ideologies, and it has proven to be a powerful tool. In at least one case, the skeptics were found guilty in a court of law of conspiracy to deceive the public. The skeptics use a range of tactics that deviate from standard intellectual practice, and they exploit several broader societal phenomena that make the skepticism more potent.

#### **4. Politicized Superintelligence Skepticism**

#### *4.1. Is Superintelligence Skepticism Already Politicized?*

At this time, there does not appear to be any superintelligence skepticism that has been politicized to the extent that has occurred for other issues such as tobacco–cancer and fossil fuels–global warming. Superintelligence skeptics are not running ad campaigns or other major dollar operations. For the most part, they are not attacking the scholars who express concern about superintelligence. Much of the discussion appears in peer-reviewed journals, and has the tone of constructive intellectual discourse. An exception that proves the rule is Etzioni [40], who included a quotation comparing Nick Bostrom (who is concerned about superintelligence) to Donald Trump. In a postscript on the matter, Etzioni [40] wrote that "we should refrain from ad hominem attacks. Here, I have to offer an apology". In contrast, the character attacks of the most heated politicized skepticism are made without apology.

However, there are already at least some hints of politicized superintelligence skepticism. Perhaps the most significant comes from AI academics downplaying hype to protect their field's reputation and funding. The early field of AI made some rather grandiose predictions, which soon fell flat, fueling criticisms as early as 1965 [24]. Some of these criticisms prompted major funding cuts; the 1973 Lighthill report [58], for example, led the British Science Research Council to slash its support for AI. Similarly, Menzies [59] described AI as going through a "peak of inflated expectations" in the 1980s followed by a "trough of disillusionment" in the late 1980s and early 1990s. Most recently, writing in 2018, Bentley [60] (p. 11) derided beliefs about superintelligence and instead urged: "Do not be fearful of AI—marvel at the persistence and skill of those human specialists who are dedicating their lives to help create it. And appreciate that AI is helping to improve our lives every day." (For criticism of Bentley [60], see [61].) This suggests that some superintelligence skepticism may serve the political goal of protecting the broader field of AI.

Superintelligence skepticism that is aimed at protecting the field of AI may be less of a factor during the current period of intense interest in AI. At least for now, the field of AI does not need to defend its value—its value is rather obvious, and AI researchers are not lacking for job security. Importantly, the current AI boom is largely based on actual accomplishments, not hype. Therefore, while today's AI researchers may view superintelligence as a distraction, they are less likely to view it as a threat to their livelihood. However, some may nonetheless view superintelligence in this way, especially those who have been in the field long enough to witness previous boom-and-bust cycles. Likewise, the present situation could change if the current AI boom eventually cycles into another bust—another winter. Despite the success of current AI, there are arguments that it is fundamentally limited [62]. The prospect of a new AI winter could be a significant factor in politicized superintelligence skepticism.

A different type of example comes from public intellectuals who profess superintelligence skepticism based on questionable reasoning. A notable case is the psychologist and public intellectual Steven Pinker, who recently articulated a superintelligence skepticism that some observers have likened to politicized climate skepticism [63,64]. Pinker does resemble some notable political skeptics: a senior scholar with an academic background in an unrelated topic who is able to use his (and it is typically a *he*) platform to advance his skeptical views. Additionally, a close analysis of Pinker's comments on superintelligence finds them to be flawed and poorly informed by existing research [65]. Pinker's superintelligence skepticism appears to serve a broader narrative of human progress, and he may be committing the intellectual sin of putting this conclusion before the analysis of superintelligence. However, his particular motivations are, to the present author's knowledge, not documented. (It would be especially ironic for Pinker to politicize skepticism based on flawed intellectual reasoning, since he otherwise preaches a message of intellectual virtue.)

A third type of example of potential politicized superintelligence skepticism comes from the corporate sector. Several people in leadership positions at technology corporations have expressed superintelligence skepticism, including Eric Schmidt (Executive Chairman of Alphabet, the parent company of Google) [66] and Mark Zuckerberg (CEO of Facebook) [67]. Since this skepticism comes from the corporate sector, it has some resemblance to risk–profit politicized skepticism and may likewise have the most potential to shape public discourse and policy. One observer postulated that Zuckerberg professes superintelligence skepticism to project the idea that "software is always friendly and tame" and to avoid the idea "that computers are intrinsically risky", the latter of which "has potentially dire consequences for Zuckerberg's business and personal future" [67]. While this may be conjecture, it comes at a time in which Facebook is under considerable public pressure for its role in propagating fake news and influencing elections, which, although unrelated to superintelligence, nonetheless provides an antiregulatory motivation to downplay risks associated with computers.

To summarize, there may already be some politicized superintelligence skepticism, coming from AI academics seeking to protect their field, public intellectuals seeking to advance a certain narrative about the world, and corporate leaders seeking to avoid regulation. However, it is not clear how much superintelligence skepticism is already politicized, and there are indications that it may be limited, especially compared to what has occurred for other issues. On the other hand, superintelligence is a relatively new public issue (with a longer history in academia), so perhaps its politicization is just beginning.

Finally, it is worth noting that, while superintelligence has not been politicized to the extent that climate change has, there is at least one instance of superintelligence being cited in the context of climate skepticism. Cass [68,69] cited the prospect of superintelligence as a reason not to be concerned about climate change. A counter to this argument is that, even if superintelligence is a larger risk, addressing climate change can still reduce the overall risk faced by humanity. Superintelligence could also be a solution to climate change, and thus may be worth building despite the risks it poses. At the same time, if climate change is addressed independently, then this reduces the need to take risks in building superintelligence [70].

#### *4.2. Prospects for Politicized Superintelligence Skepticism*

Will superintelligence skepticism be (further) politicized? Given the close historical association between politicized skepticism and corporate profits (at least for risk–profit politicized skepticism), an important question is whether superintelligence could prompt profit-threatening regulations. AI is now being developed by some of the largest corporations in the world. Furthermore, a recent survey found artificial general intelligence projects at several large corporations, including Baidu, Facebook, Google, Microsoft, Tencent, and Uber [19]. These corporations have the assets to conduct politicized skepticism on a scale every bit as large as that of the tobacco, fossil fuel, and industrial chemicals industries.

It should be noted that the artificial general intelligence projects at these corporations were not found to exhibit substantial skepticism. Indeed, some of them are outspoken in their concern about superintelligence. Moreover, of the 45 artificial general intelligence projects surveyed, only two were found to be dismissive of concerns about the risks posed by the technology [19]. However, even if the AI projects themselves do not exhibit skepticism, the corporations that host them still could. Such a scenario would be comparable to that of ExxonMobil, whose scientists confirmed the science of climate change even while corporate publicity campaigns professed skepticism [7].

History shows that risk–profit politicized skepticism is not inherent to corporate activity; it generally arises only when profits are at stake. The prevalence of corporate research on artificial general intelligence suggests at least an expectation of profitability, but at this time it is unclear how profitable the technology will be. If it proves profitable, then corporations are likely to become highly motivated to protect it against outside restrictions. This is an important factor to monitor as the technology progresses.

In public corporations, the pressure to maximize shareholder returns can motivate risk–profit politicized skepticism. However, this may be less of a factor for some corporations in the AI sector. In particular, voting shares constituting a majority of voting power at both Facebook and Alphabet (the parent company of Google) are controlled by the companies' founders: Mark Zuckerberg at Facebook [71], and Larry Page and Sergey Brin at Alphabet [72]. Given their majority stakes, the founders may be able to resist shareholder pressure for politicized skepticism, although it is not certain that they would, especially since the leadership of both companies already displays superintelligence skepticism.

Another factor is the political ideologies of those involved in superintelligence. As discussed above, risk–profit politicized skepticism of other issues is commonly driven by people with pro-capitalist, anti-socialist, and anti-communist political ideologies. Superintelligence skepticism may be more likely to be politicized by people with similar ideologies. Some insight into this matter can be obtained from a recent survey of 600 technology entrepreneurs, a highly relevant demographic [73]. The study finds that, contrary to some conventional wisdom, this demographic tends not to hold libertarian ideologies. Instead, technology entrepreneurs tend to hold views consistent with American liberalism, with one important exception: they tend to oppose government regulation. This finding suggests some prospect for politicizing superintelligence skepticism, although perhaps not as much as exists in other industries.

Further insight can be found from the current political activities of AI corporations. In the US, the corporations' employees donate mainly to the Democratic Party, which is the predominant party of American liberalism and is more pro-regulation. However, the corporations themselves have recently shifted donations to the Republican Party, which is the predominant party of American conservatism and is more anti-regulation. Edsall [74] proposed that this divergence between employees and employers is rooted in corporations' pursuit of financial self-interest. A potential implication of this is that, even if the individuals who develop AI oppose risk–profit politicized skepticism, the corporations that they work for may support it. Additionally, the corporations have recently been accused of using their assets to influence academic and think tank research on regulations that the corporations could face [75,76], although at least some of the accusations have been disputed [77]. While the veracity of these accusations is beyond the scope of this paper, they are at least suggestive of the potential for these corporations to politicize superintelligence skepticism.

AI corporations would not necessarily politicize superintelligence skepticism, even if profits may be at stake. Alternatively, they could express concern about superintelligence to portray themselves as responsible actors and likewise avoid regulation. This would be analogous to the strategy of "greenwashing" employed by companies seeking to bolster their reputation for environmental stewardship [78]. Indeed, there have already been some expressions of concern about superintelligence by AI technologists, and likewise some suspicion that the stated concern has this sort of ulterior motive [79].

To the extent that corporations do politicize superintelligence skepticism, they are likely to mainly emphasize doubt about the risks of superintelligence. Insofar as superintelligence could be beneficial, corporations may promote this, just as they promote the benefits of fossil fuels (for transportation, heating, etc.) and other risky products. Or, AI corporations may promote the benefits of their own safety design and sow doubt about the safety of their rivals' designs, analogous to the marketing of products whose riskiness can vary from company to company, such as automobiles. Alternatively, AI corporations may seek to sow doubt about the possibility of superintelligence, calculating that this would be their best play for avoiding regulation. As with politicized skepticism about other technologies and products, there is no one standard formula that every company always adopts.

For their part, academic superintelligence skeptics may, for reputational reasons, be more likely to emphasize doubt about the mere possibility of superintelligence, regardless of whether it would be beneficial or harmful. Or, they could focus their skepticism on the risks, for similar reasons as corporations: academic research can also be regulated, and researchers do not always welcome this. Of course, there are also academics who do not exhibit superintelligence skepticism. Again, there is no one standard formula.

#### *4.3. Potential Effectiveness of Politicized Superintelligence Skepticism*

If superintelligence skepticism is politicized, several factors suggest that it would be highly effective, perhaps even more so than for the other issues in which skepticism has been politicized.

First, some of the experts best positioned to resolve the debate are also deeply implicated in it. To the extent that superintelligence is a risk, the risk is driven by the computer scientists who would build superintelligence. These individuals have intimate knowledge of the technology and thus have an essential voice in the public debate (though not the only essential voice). This is distinct from issues such as tobacco or climate change, in which the risk is mainly assessed by outside experts. It would be as if the effect of tobacco on cancer were studied by the agronomists who cultivate tobacco crops, or the science of climate change by the geologists who map deposits of fossil fuels. With superintelligence, a substantial portion of the relevant experts have a direct incentive to avoid any restrictions on the technology, as do their employers. This could create a deep and enduring pool of highly persuasive skeptics.

Second, superintelligence skepticism has deep roots in the mainstream AI computer science community. As noted above, this dates back to the era of the AI winters. Thus, skeptics may be abundant even where they are not funded by industry. Indeed, most of the skeptics described above do not appear to be speaking out of any industry ties, and thus would not have an industry conflict of interest. They could still have a conflict of interest stemming from their desire to protect the reputation of their field, but this is a subtler matter. Insofar as they are perceived to not have a conflict of interest, they could be especially persuasive. Furthermore, even if their skepticism is honest and not intended for any political purpose, it could be used by others in dishonest and political ways.

Third, superintelligence is a topic for which the uncertainty is inherently difficult to resolve. It is a hypothetical future technology that is qualitatively different from anything that currently exists. Furthermore, there is concern that its mere existence could be catastrophic, which could preclude certain forms of safety testing. It is thus a risk that defies normal scientific study. In this regard, it is similar to climate change: moderate climate change can already be observed, as can moderate forms of AI, but the potentially catastrophic forms have not yet materialized and possibly never will. However, climate projections can rely on some relatively simple physics: at its core, climate change largely reduces to basic physical chemistry and thermodynamics, as the sketch below illustrates. (The physical chemistry covers the nature of greenhouse gases, which are more transparent to some wavelengths of electromagnetic radiation than to others. The thermodynamics covers the heat transfer expected from greenhouse gas buildup. Both effects can be demonstrated in simple laboratory experiments. Climate change also involves indirect feedback effects on much of the Earth system, including clouds, ice, oceans, and ecosystems, which are more complex, more difficult to resolve, and responsible for much of the ongoing scientific uncertainty.) In contrast, AI projections must rely on notions of intelligence, which are not simple at all. For this reason, it is less likely that scholarly communities will converge on a consensus position on superintelligence in the way that they have on other risks such as climate change.
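As a rough illustration of this relative simplicity, consider a textbook zero-dimensional sketch (added here purely for illustration; it is not drawn from the skepticism literature or the climate references cited above). A standard simplified expression gives the direct radiative forcing from a change in CO2 concentration, and the resulting equilibrium warming, as:

$$
\Delta F \approx 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}, \qquad \Delta T_{\mathrm{eq}} = \lambda\, \Delta F,
$$

where $C$ is the atmospheric CO2 concentration, $C_0$ is a preindustrial baseline, and $\lambda$ is the climate sensitivity parameter. Doubling CO2 gives $\Delta F \approx 3.7\ \mathrm{W\,m^{-2}}$; nearly all of the remaining scientific uncertainty is concentrated in $\lambda$, which bundles the feedback effects noted above. No comparably compact formula exists for projecting machine intelligence, which is precisely the contrast drawn here.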

Fourth, some corporations that could develop superintelligence may be uniquely well positioned to influence public opinion. The corporations currently involved in artificial general intelligence research include some corporations that also play major roles in public media. As a leading social media platform, Facebook in particular has been found to be especially consequential for public opinion [80]. Corporations that serve as information gateways, such as Baidu, Google, and Microsoft, also have unusual potential for influence. These corporations have opportunities to shape public opinion in ways that the tobacco, fossil fuel, and industrial chemicals industries cannot. While the AI corporations would not necessarily exploit these opportunities, it is an important factor to track.

In summary, while it remains to be seen whether superintelligence skepticism will be politicized, there are some reasons for believing it will be, and that superintelligence would be an especially potent case of politicized skepticism.

#### **5. Opportunities for Constructive Action**

Politicized superintelligence skepticism would not necessarily be harmful. Nothing in this paper rules out the possibility that, for superintelligence, skepticism is the correct view, meaning that superintelligence may not be built, may not be dangerous, or may not merit certain forms of imminent attention. (This paper of course assumes that superintelligence is worth some imminent attention, or else it would not have been written.) It is also possible that, even if superintelligence is a major risk, government regulations could nonetheless be counterproductive, and politicized skepticism could help avoid that. That said, the history of politicized skepticism (especially risk–profit politicized skepticism) shows a tendency toward harm, which suggests that politicized superintelligence skepticism could be harmful as well.

With this in mind, one basic opportunity is to raise awareness about politicized skepticism within communities that discuss superintelligence. Superintelligence skeptics who are motivated by honest intellectual norms may not wish for their skepticism to be used politically. They can likewise be cautious about how to engage with potential political skeptics, such as by avoiding certain speaking opportunities in which their remarks would be used as a political tool instead of as a constructive intellectual contribution. Additionally, all people involved in superintelligence debates can insist on basic intellectual standards, above all by putting analysis before conclusions and not the other way around. These are the sorts of things that an awareness of politicized skepticism can help with.

Another opportunity is to redouble efforts to build scientific consensus on superintelligence, and then to draw attention to it. Currently, there is no consensus. As noted above, superintelligence is an inherently uncertain topic and difficult to build consensus on. However, with some effort, it should be possible to at least make progress towards consensus. Of course, scientific consensus does not preclude politicized skepticism—ongoing climate skepticism attests to this. However, it can at least dampen the politicized skepticism. Indeed, recent research has found that the perception of scientific consensus increases acceptance of the underlying science [81].

A third opportunity is to engage with AI corporations to encourage them to avoid politicizing skepticism about superintelligence or other forms of AI. Politicized skepticism is not inevitable, and while corporate leaders may sometimes feel as though they have no choice, there may nonetheless be options. Furthermore, engagement may be especially effective at this early stage of superintelligence research, in which corporations may not yet have established internal policies or practices.

A fourth opportunity is to follow best practices in debunking misinformation in the event that superintelligence skepticism is politicized. There is a substantial literature on the psychology of debunking [81–83]. A debunking handbook written for a general readership [82] recommends: (1) focusing on the correct information, to avoid cognitively reinforcing the false information; (2) preceding any discussion of the false information with a warning that it is false; and (3) accompanying any debunking with the correct information, so that people are not left with a gap in their understanding of the topic. The handbook further cautions against using the *information deficit model* of human cognition, which proposes that mistaken beliefs can be corrected simply by providing the correct information. The information deficit model is widely used in science communication, but it has repeatedly been found to work poorly, especially in situations of contested science. This sort of advice could be helpful to efforts to counter superintelligence misinformation; a minimal sketch of the recommended message structure follows below.
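To make the three recommendations concrete, here is a minimal sketch in Python (written for this paper's illustration only; the function name, parameters, and sample text are hypothetical and do not come from the handbook) that assembles a correction in the recommended order: fact first, an explicit falsehood warning before the myth, and an alternative explanation to close the gap.

```python
def build_debunking_message(fact: str, myth: str, explanation: str) -> str:
    """Assemble a correction per the handbook's three recommendations:
    (1) lead with the correct information, (2) warn before repeating the
    false claim, and (3) close with an explanation that fills the gap."""
    return "\n\n".join([
        fact,                                 # (1) the correct information comes first
        f"A common but FALSE claim: {myth}",  # (2) explicit warning precedes the myth
        explanation,                          # (3) alternative explanation fills the gap
    ])


# Hypothetical usage with an invented superintelligence-related myth:
print(build_debunking_message(
    fact="Expert surveys show a wide range of views on if and when "
         "superintelligence might be built.",
    myth="all serious AI researchers agree that superintelligence is impossible.",
    explanation="Skeptical soundbites are often quoted in isolation; the "
                "underlying surveys show substantial disagreement among experts.",
))
```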

Finally, the entire AI community should insist that policy be made based on an honest and balanced reading of the current state of knowledge. Burden-of-proof requirements should not be abused for private gain. As with climate change and other global risks, the world cannot afford to wait for proof that superintelligence would be catastrophic. By the time the uncertainty is eliminated, it could be too late.

#### **6. Conclusions**

Some people believe that superintelligence could be a highly consequential technology, potentially even a transformative event in the course of human history, with either profoundly beneficial or extremely catastrophic effects. Insofar as this belief is plausible, superintelligence may be worth careful advance consideration, to ensure that the technology is handled successfully. Importantly, this advance attention should include social science and policy analysis, and not just computer science. Furthermore, even if belief in superintelligence is mistaken, it can nonetheless be significant as a social and political phenomenon. This is another reason for social science and policy analysis. This paper is a contribution to the social science and policy analysis of superintelligence. Furthermore, despite the unprecedented nature of superintelligence, this paper shows that there are important historical and contemporary analogs that can shed light on the issue. Much of what could occur for the development of superintelligence has already occurred for other technologies. Politicized skepticism is one example of this.

One topic not covered in this paper is the prospect that beliefs that superintelligence will occur and/or will be harmful could themselves be politicized. Such a phenomenon would be analogous to, for example, belief in large medical harms from nuclear power, or, phrased differently, skepticism about claims that nuclear power plants are medically safe. The scientific literature on nuclear power finds medical harms to be substantially lower than is commonly believed [84]. Overstated concern (or "alarmism") about nuclear power can likewise be harmful, for example by increasing the use of fossil fuels, and the fossil fuel industry could politicize this belief for its own benefit. By the same logic, belief in superintelligence could also be politicized. This prospect is left for future research, although much of this paper's analysis may be applicable.

Perhaps the most important lesson of this paper is that the development of superintelligence could be a contentious political process. It could involve aggressive efforts by powerful actors—efforts that not only are inconsistent with basic intellectual ideals, but that also actively subvert those ideals for narrow, self-interested gain. This poses a fundamental challenge to those who seek to advance a constructive study of superintelligence.

**Funding:** This research received no external funding.

**Acknowledgments:** Tony Barrett, Phil Torres, Olle Häggström, Maurizio Tinnirello, Matthijs Maas, Roman Yampolskiy, and participants in a seminar hosted by the Center for Human-Compatible AI at UC Berkeley provided helpful feedback on an earlier version of this paper. All remaining errors are the author's alone. The views expressed in this paper are the author's and not necessarily the views of the Global Catastrophic Risk Institute.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
