**The Singularity Isn't Simple! (However We Look at It) A Random Walk between Science Fiction and Science Fact**

#### **Vic Grout**

Department of Computing, Wrexham Glyndwr University, Wrexham LL11 2AW, UK; v.grout@glyndwr.ac.uk; Tel.: +44-1978-293-203

Received: 11 April 2018; Accepted: 18 April 2018; Published: 19 April 2018

**Abstract:** It seems to be accepted that *intelligence*—*artificial* or otherwise—and 'the singularity' are inseparable concepts: 'The singularity' will apparently arise from AI reaching a, supposedly particular, but actually poorly-defined, level of sophistication; and an empowered combination of hardware and software will take it from there (and take over from us). However, such wisdom and debate are simplistic in a number of ways: firstly, this is a poor definition of the singularity; secondly, it muddles various notions of intelligence; thirdly, competing arguments are rarely based on shared axioms, so are frequently pointless; fourthly, our models for trying to discuss these concepts at all are often inconsistent; and finally, our attempts at describing any 'post-singularity' world are almost always limited by anthropomorphism. In all of these respects, professional 'futurists' often appear as confused as storytellers who, through freer licence, may conceivably have the clearer view: perhaps then, that becomes a reasonable place to start. There is no attempt in this paper to propose, or evaluate, any research hypothesis; rather simply to challenge conventions. Using examples from science fiction to illustrate various assumptions behind the AI/singularity debate, this essay seeks to encourage discussion on a number of possible futures based on different underlying metaphysical philosophies. Although properly grounded in science, it eventually looks beyond the technology for answers and, ultimately, beyond the Earth itself.

**Keywords:** futurism and futurology; hard science fiction; artificial intelligence; models of consciousness; intelligent machines; machine replication; machine evolution and optimization; technological singularity

#### **1. Introduction: Problems with 'Futurology'**

This paper will irritate some from the outset by treating 'serious' academic researchers, professional 'futurists' and science fiction writers as largely inhabiting the same techno-creative space. Perhaps, for these purposes, our notion of science fiction might be better narrowed to 'hard' sci-fi (i.e., that with a supposedly realistic edge) loosely set somewhere in humanity's (rather than anyone else's) future; but, that aside, the alignment is intentional, and no apology is offered for it.

Moreover, it is justified to a considerable extent: purveyors of hard future-based sci-fi and professional futurism have much in common. They both have their work (their credibility) judged in the here-and-now by known theory and, retrospectively, by empirical evidence. In fact, there may be a greater gulf between (for example) the academic and commercial futurist than between either of them and the sci-fi writer. Whereas the professional futurists may have their objectives set by technological or economic imperatives, the storyteller is free to use a scientific premise of their choosing as a blank canvas for *any* wider social, ethical, moral, political, legal, environmental or demographic discussion. This 360° view is becoming increasingly essential: asking technologists their view on the future of technology makes sense; asking their opinions regarding its wider impact may not.

#### *1.1. Futurology 'Success'*

Therefore, for our purposes, we define an alternative role, that of *'futurologist'*, to be any of these, whatever their technical or creative background or motivation may be. A futurologist's predictive success (or *accuracy*) may be loosely assessed by their performance across three broad categories: *positives*, *false positives* and *negatives*, defined (loosely) as follows:

- *Positives*: predictions that subsequently come to pass;
- *False positives*: predictions that do not;
- *Negatives*: significant developments that were not predicted at all.
Obviously, these terms are vague and subject to overlap, but they are for discussion only: there will be no attempt to quantify them here as metrics. (However, see [1] for an informal attempt at doing just this!) Related to these is the concept of justification: the presence (or otherwise) of a coherent argument to support predictions. Justification may be described not merely by its presence, partial (perhaps unconvincing) presence or absence but also by its form or nature; purely scientific, for example, or based on wider social, economic, legal, etc. grounds. The ideal would be a set of predictions with high accuracy (perhaps loosely the ratio of positives to false positives and negatives) together with strong justification. The real world, however, is never that simple. Although unjustified predictions might be considered merely guesses, some guesses are lucky: another reason why the outcomes of factual and fictional writing may not be so diverse. Finally, assessment of prediction accuracy and justification rarely happen together. Predictions made *now* can be considered for their justification but not their accuracy: that can only be known later. Predictions made in the past can have their positives and negatives scrutinised in detail but their justification has been made, by the passage of time and the comfort of certainty, less germane.
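To make the loose accuracy notion concrete, the ratio suggested above can be caricatured in a few lines of code (the function name and the example counts are purely illustrative assumptions, not a metric proposed in this paper):

```python
def prediction_accuracy(positives, false_positives, negatives):
    """Loose 'futurology accuracy': correct predictions as a fraction of
    predictions made plus significant developments overlooked entirely.
    Purely illustrative; no attempt is made to weight the categories."""
    total = positives + false_positives + negatives
    return positives / total if total else 0.0

# e.g. a futurologist with 6 hits, 3 misses and 4 overlooked developments:
print(round(prediction_accuracy(6, 3, 4), 2))  # 0.46
```

Even this toy form makes the later point visible: a writer who predicts nothing scores zero, but so, effectively, does one who predicts everything.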

This flexible assessment of accuracy (positives, false positives and negatives) and justification can be applied to any attempt at futurology, whatever its intent: to inform, entertain or both. We start then with one of the best known examples of all.

Star Trek [2,3] made its TV debut in 1966. Although five decades have now passed, we are still over two centuries from its generally assumed setting in time. This makes most aspects of assessment of its futurology troublesome but, as a snapshot exercise/example in the here-and-now, we can try:


A similar exercise can be (indeed, informally has been) attempted with other well-known sci-fi favourites such as Back to the Future [8] and Star Wars [9] or, with the added complexity of insincerity, Red Dwarf [10].

#### *1.2. Futurology 'Failure'*

In Star Trek's case, the interesting, and contentious, category is the negatives. Did it really fail to predict modern, and still evolving, networking technology? Is there no Internet in Star Trek? There are certainly those who would defend it against such claims [11] but such arguments generally take the form of noting secondary technology that could *imply* a pervasive global (or universal) communications network: there is no first-hand reference point anywhere. The ship's computer, for example, clearly has access to a vast knowledge base but this always appears to be held locally. Communication is almost always point-to-point (or a combination of such connections) and any notion of distributed processing or resources is missing. Moreover, in many scenes, there is a plot-centred (clearly intentional) sense of isolation experienced by its characters, which is incompatible with today's understanding and acceptance of a ubiquitous Internet: on that basis alone, it is unlikely that its writers intended to suggest one.

This point regarding (overlooking) 'negatives' may be more powerfully made by considering another sci-fi classic. Michael Moorcock's *Dancers at the End of Time* trilogy [12] was first published in full in 1976 and has been described as *"one of the great postwar English fantasies"* [13]. It describes an Earth millions of years into the future in which virtually anything is possible. Through personal 'power rings', its cast of decadent eccentrics have access to almost limitless capability from technology established in the distant past by long-lost generations of scientists. As the hidden technology consumes vast, remote stellar regions for its energy, the inhabitants of this new world can create life and cheat death. It is an excellent example of the use of a simple sci-fi premise as a starting point for a wider social and moral discussion. Setting the scene takes just a few pages but then it gets interesting: *how do people behave* in a world where *anything* is possible?

Yet, there is no Internet; or even anything much by way of mobile technology. If 'The Dancers' want to observe what might be happening on the other side of the planet, they use their power rings to create a plane (or a flying train) and go there at high speed: fun, perhaps, but hardly efficient! A mere two decades before the Internet became a reality for most people in the developed world, Moorcock's vision of an ultimately advanced technological civilization did not include such a concept.

Of course, many writers from E.M. Forster (in 1928) [14], through Will Jenkins (1946) [15], to Isaac Asimov (1957) [16], even Mark Twain (much earlier in 1898) [17], have described technology with Internet-like characteristics, so there can be no concluding that imagining any particular scientific innovation is necessarily impossible. However, each of these, whilst pointing the way in this particular respect to varying degrees, is significantly adrift in other areas. In some there is an archaic reliance on existing equipment; in others, a network of sorts but no computers. Douglas Adams's *Hitchhiker's Guide to the Galaxy* (1978) [18] is a universal knowledge base without a network. There is no piece of futurology from more than two or three decades ago which portrays the current world particularly accurately in even most, let alone all, respects. Not only does technological advancement have too many individual threads, the interaction between them is hugely complex.

This is not a failing of fiction writers only. To give just one example, in 1977, Ken Olsen, founder and CEO of DEC [19], made an oft-misapplied statement: *"there is no reason for any individual to have a computer in his home"*. A favourite of introductory college computing modules, supposedly highlighting the difficulty of keeping pace with rapid change in computing technology, it appears foolish at a time when personal computers were already under development, including in his own laboratories. The quote is out of context, of course, and applies to Olsen's scepticism regarding fully-automated assistive home technology systems (climate control, security, cooking food, etc.) [20]. However, as precisely these technologies gain traction, there may be little doubt that, if he stands unfairly accused of being wrong in one respect, time will inevitably prove him so in another. This form of 'it won't happen' prediction (which then does happen) can be considered simply as an extension of the *negatives* category for futurology success or, if necessary, as an additional *false negatives* set; but that would sidetrack the discussion unnecessarily at this point.

Therefore, the headline conclusion of this introduction is, of course, the somewhat obvious 'futurology is difficult' [21]. However, there are three essential, albeit intersecting, components of this observation, which we can take forward as the discussion shifts towards the eponymous technological singularity:


These elements have to remain in permanent focus as we move on to the main business of this paper.

#### **2. Problems with Axioms, Definitions and Models**

Having irritated some traditional academics at the start of the previous section, we begin this one in a similar vein: by quoting Wikipedia [22].

*"The technological singularity (also, simply, the singularity) [23] is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization [24]. According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a "runaway reaction" of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence. Stanislaw Ulam reports a discussion with John von Neumann "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue" [25]. Subsequent authors have echoed this viewpoint [22,26]. I. J. Good's "intelligence explosion" model predicts that a future superintelligence will trigger a singularity [27]. Emeritus professor of computer science at San Diego State University and science fiction author Vernor Vinge said in his 1993 essay The Coming Technological Singularity that this would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate [27]*.

*At the 2012 Singularity Summit, Stuart Armstrong did a study of artificial general intelligence (AGI) predictions by experts and found a wide range of predicted dates, with a median value of 2040 [28]*.

*Many notable personalities, including Stephen Hawking and Elon Musk, consider the uncontrolled rise of artificial intelligence as a matter of alarm and concern for humanity's future [29,30]. The consequences of the singularity and its potential benefit or harm to the human race have been hotly debated by various intellectual circles."*

Informal as much of this is, it provides a useful platform for discussion and, although there may be more credible research bases, no single source better represents the consensus of such a large body of opinion: in this sense, it can be argued to represent at least the common view of the Technological Singularity (TS). Once again, we see sci-fi suggestions of the TS (Vinge) pre-dating academic discussions (Kurzweil [31]), and consideration of the wider social, economic, political and ethical impact also follows close behind: these various dimensions cannot be considered in isolation; or, at least, should not be.

#### *2.1. What Defines the 'Singularity'?*

However, before embarking on a discussion of what the TS actually is, if it might happen, how it might take place and what the implications could be, a more fundamental question merits at least passing consideration: is the TS really a 'singularity' at all?

This is not an entirely simple question to answer. Different academic subjects, from various fields in mathematics, through the natural sciences, to emerging technologies, have their own concept of a 'singularity', with most dictionary definitions imprecisely straddling several of them. Sci-fi [27] continues to contribute in its own fashion. However, a common theme is the notion of the rules, formulae, behaviour or understanding of any system breaking down, or not being applicable, at the point of singularity. ('We understand how the system works everywhere except here.') A black hole is a singularity in space-time: the conventional laws of physics fail as gravitational forces become infinite (albeit infinitely slowly as we approach it). The function *y* = 1/*x* has a singularity at *x* = 0: it has no (conventional) value there. On this basis, *is* the point in our future, loosely understood to be that at which machines start to evolve by themselves, really a singularity? To put it another way, does it suggest the sort of discontinuity most definitions of a singularity imply?
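The discontinuity in *y* = 1/*x* can be shown numerically; a trivial sketch of the point being made (the values diverge in opposite directions as *x* approaches 0, so the two branches never meet):

```python
# The curve can be followed smoothly towards x = 0 from either side,
# but the two approaches diverge towards +infinity and -infinity:
# there is no continuous path through the singularity at x = 0.
for x in (0.1, 0.001, 0.00001, -0.00001, -0.001, -0.1):
    print(f"x = {x:>8}, 1/x = {1 / x:>10.1f}")
```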

It may not. Whilst there may be little argument with there being a period of great uncertainty following on from the TS, including huge questions regarding what the machines might do or how long (or just how) humans, or the planet, might survive [31], it is not clear that the TS marks an impassable break in our timeline. It is absolutely unknown what happens to material that enters a black hole; it may appear in some other form elsewhere in the universe but that is entirely speculative: there is no other side, of which we know. The *y* = 1/*x* curve can be followed smoothly towards *x* = 0 from either (positive or negative) direction but there is no passing continuously through to the other. However, at the risk of facetiousness, if we go to bed one night and the TS occurs while we sleep, we will still most likely wake up the following morning. The world may indeed have become a very different place but it will probably continue, for a while at least. It is possible that we sometimes confuse 'uncertainty' with 'discontinuity' so perhaps 'singularity' is not an entirely appropriate term?

Returning to less abstract matters, we attempt to consider what the TS might actually be. Immediately, this draws us into difficulty as we begin to note that most definitions, including the Wikipedia consensus, conflate different concepts:


Figure 1 attempts to distil these overlapping principles in relation to the TS. Towards the left of the diagram, more is understood (with clearer, concrete definitions) and more likely to achieve consensus among futurologists. Towards the right, concepts become more abstract, with looser definitions and less agreement. Time moves downwards.

**Figure 1.** Combined (but simplified) models of the technological singularity. TS, Technological Singularity.

#### *2.2. How Might the 'Singularity' Happen?*

Any definition of the TS by reference to AI, itself not well defined, or worse, to *AGI* (*Artificial General Intelligence*: loosely the evolutionary end-point of AI to something 'human-like' [32]) is, at best, a heuristic one. Not only does this not provide a deterministic characterization of the TS, it is not *axiomatically* dependent upon it. Suppose there was an *agreed* formulaic measurement for AI (or AGI), which there is not, what *figure* would we need to reach for the TS to occur? We have no idea and it is unlikely that the question is a functional one. In addition, *is* it unambiguously clear that this is the *only* way it could happen? Not if we base our definitions on precise *expectations* rather than *assumptions*.

A process-based model (white box), built on what we already have, is at least easier to understand. *Evolution* is the result of iterated replication within a 'shaping' or 'guiding' environment. In theory, at least, (automatic) replication is the automated combining of *design* and *production*. We already see examples of both automated production and automated design at work. Production lines, including robotic hardware, have been with us for decades [33]: in this respect, accelerated manufacturing began long ago. Similarly, software has become an increasingly common feature of engineering processes over the years [34] with today's algorithms being very sophisticated indeed [35]. Many problems in electronic component design, placement, configuration and layout [36,37], for example, would be entirely impossible now without computers taking a role in optimizing their successors. In principle, it is merely a matter of connecting all of this seamlessly together!

Fully integrating automated design and automated production into automated replication is far from trivial, of course, but it does give a better theoretical definition of the TS than vague notions of AI or AGI. It is, however, also awash with practical problems:


and *sustainable* then either the supply of materials has to be similarly seamless or the production hardware must somehow be able to source its own. Neither option is in sight presently.


Although consideration of 4 does indeed appear to lead us back to some notion of intelligence, which is why the heuristic (grey box) definition of the TS associated with AI or AGI is not outright wrong, it is simply unhelpful in any practical sense. Exactly *what* level of *what* measurement of intelligence would be necessary for a self-replicating system to effect improvement across generations? It remains unclear. Having broken a process-based TS down into its constituents, however, it does suggest an alternative definition.

Because a results-based (black box) definition of the TS could be something much easier to come to terms with. For example, we might understand it to be the point at which a 3D printer autonomously creates an identical version of itself. Then, presumably, with some extra lines of code, it creates a marginally better version. This superior offspring has perhaps slightly better hardware (precision, control, power or capabilities) and improved software (possibly viewed in terms of its 'operating system'). As the child becomes a parent, further hardware and software enhancements follow and the evolutionary process is underway. Even here we could dispute the *exact* point of the TS, but it is clearly framed somewhere in the journey from the initial parent, through its extra code, to the first child. 'Intelligence' has not been mentioned.
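That results-based picture (a parent device producing a marginally improved child, which then becomes a parent in turn) can be caricatured as a loop. Everything here, including the class, the scalar 'capability' and the 5% per-generation improvement, is a hypothetical illustration, not a design:

```python
from dataclasses import dataclass

@dataclass
class Printer:
    generation: int
    capability: float  # crude stand-in for precision, power, software quality...

    def replicate(self, improvement=0.05):
        """Autonomously produce a marginally better successor. On the
        results-based (black box) definition, the TS sits somewhere in
        this step: parent -> extra code -> improved child."""
        return Printer(self.generation + 1, self.capability * (1 + improvement))

machine = Printer(generation=0, capability=1.0)
for _ in range(3):
    machine = machine.replicate()
print(machine.generation, round(machine.capability, 3))  # 3 1.158
```

The black box framing is exactly this: we only inspect the outcomes across generations, not how the replication was achieved.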

The practical objections above still remain, of course. We can perhaps ignore 1 [Design] by reminding ourselves that this is a black box definition: we can simply claim to recognize when the outcomes have satisfied our requirements. Similarly 2 [Production] might be overlooked but it does suggest that the 3D printer might have to have an independent/autonomous supply network of (not just present but all possible future) materials required: *very* challenging, arguably *non-deterministic*; the alternative, though, might be that the device has to be *mobile*, allowing it to source or forage for its needs! Finally, although we can use the black box defence against being able to describe 3 [Replication] in detail, the difficult question of 4 [Evolution] remains: *Why would the printer want to do this?* Therefore, this might bring us back to intelligence after all; or does it?

Because, if we are forced to deal with intelligence as a driver towards the TS (even if only to attempt to answer the question of whether it is even required), we encounter a similar problem in what may be a recurring pattern: we have very little idea (understanding or agreement) of what *'intelligence'* is either, which takes us to the next section.

#### **3. Further Problems with 'Thinking', 'Intelligence', 'Consciousness', General Metaphysics and 'The Singularity'**

The notion of an intelligent machine is found in literature long before academic non-fiction. Although it may stretch definitions of 'sci-fi', *Talos* [39] appears in stories dating back to the third century BC. A giant bronze automaton, patrolling the shores of Europa (Crete), he appears to have many of the attributes we would expect of an intelligent machine. Narratives vary somewhat but most describe him as of essentially non-biological origin. Many other examples follow over the centuries, with Mary Shelley's monster [40] being one of the best known: Dr. Frankenstein's creation, however, is the result of (albeit corrupted) natural science. These different versions of ('natural' or 'unnatural') 'life' or 'intelligence' continue apace into the 21st century, of course [41–44], but, by the time Alan Turing [45] asked, in 1950, whether such things were possible in practice, sci-fi had already given copious judgement in theory.

How important is this difference between 'natural' and 'unnatural' life? Is it the same as the distinction between 'artificial' and 'real' intelligence? Who decides what is natural and unnatural anyway? (The concepts of 'self-awareness' and 'consciousness' are discussed later.) As Turing pointed out tangentially in his response to 'The Theological Objection' [45], this may not be a question we have the right, or ability, to make abstract judgements on. (To paraphrase his entirely atheist argument: 'even if there were a God, who are *we* to tell Him what he can and cannot do?') However, even in the title of this seminal work ('Computing Machinery and Intelligence'), followed by his opening sentence (*'I propose to consider the question, "Can machines think?"'*), he has apparently already decided that 'thinking' and 'intelligence' are the same thing. Although he immediately follows this with an admission that precise definitions are problematic, whether this equivalence should be regarded as axiomatic is debatable.

#### *3.1. 'Thinking' Machines*

However, something that Turing [45] forecasts with unquestionable accuracy is that, *'I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted'*. It cannot be denied that today we do just this (and have done for some time): at the very least, a computer is 'thinking' while we await its response. It might be argued that the use of the word in this context describes merely the delay, or even that it has some associated irony attached, but we still do it, even if we do not give any thought to what it means. Leaving aside his almost universally misunderstood 'Imitation Game' ('Turing Test': see later) discussion, in this the simplest of predictions, Turing was right.

(His next sentence is then often overlooked: *'I believe further that no useful purpose is served by concealing these beliefs. The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any improved conjecture, is quite mistaken. Provided it is made clear which are proved facts and which are conjectures, no harm can result. Conjectures are of great importance since they suggest useful lines of research.'* This very much supports the guiding principles of this paper: so long as we can recognize the difference between science-fact and anything short of this, conjecture, and even fiction, have value in promoting discussion. Perhaps the only real difference between the sci-fi writer and the professional futurist is that the latter expects us to believe them?)

#### *3.2. 'Intelligent' Machines*

Therefore, if 'thinking' is a word used too loosely to have much value, what of the perhaps stronger 'intelligence'? Once again, the dictionary is of little help, confusing several clearly different concepts. 'Intelligence', depending on the context, implies a capacity for: *learning*, *logic*, *problem-solving*, *reasoning*, *creativity*, *emotion*, *consciousness* or *self-awareness*. That these are hardly comparable is more than abstract semantics: does 'intelligent' simply suggest 'good at something' (number-crunching, for example) or is it meant to imply 'human-like'? By some of these definitions, a pocket-calculator is intelligent, a chess program more so. However, by others, a fully-automated humanoid robot, faster, stronger and superior to its creators in every way, would not be if it lacked consciousness and/or self-awareness [46]. What is meant (in Star Trek, for example) by frequent references to 'intelligent life'? Is this tautological or are there forms of life lacking intelligence? (After all, we often say this of people we lack respect for.) Are the algorithms that forecast the weather or run our economies intelligent? (They do things with a level of precision that we cannot.) If intelligence can be anything from simple learning to full self-awareness, at which point does AI become AGI, or perhaps 'artificial' intelligence become 'real' intelligence? There sometimes seem to be as many opinions as there are academics/researchers. And, once again, we return to what increasingly looks to be a poorly framed question: *how much 'intelligence' is necessary for the TS?*

Perhaps tellingly, Turing himself [45] made no attempt to define intelligence, either at human or machine level. In fact, with respect to the latter, accepting his conflation of intelligence with thought, he effectively dismisses the question: he precedes his *'end of the century'* prediction above by, *'The original question, "Can machines think?" I believe to be too meaningless to deserve discussion.'* Instead, he proposes his, widely quoted but almost as widely misunderstood, *'Imitation Game'*, eventually to become known as the 'Turing Test'. A human 'interrogator' is in separate, remote communication with both a machine and another human and, through posing questions and considering their responses, has to decide which is which. It probably makes little difference whether either is allowed to lie. (Clearly then something akin to 'intellectually human-like' is thus being suggested as intelligence here.) In principle, if the interrogator simply cannot decide (or makes the wrong decision as often as the right one) then the machine has 'won' the game.

Whatever the thoughts of sci-fi over the decades [47,48] might be, this form of absolute victory was not considered by Turing in 1950. Instead he made a simple (and quite possibly off-the-cuff) prediction that *'I believe that in about fifty years' time it will be possible, to programme computers,* ... *, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.'* Today's reports [49] of software 'passing the Turing Test' because at least four people out of ten chose wrongly in a one-off experiment are a poor reflection on this original discussion. We should also consider that the sophistication of the interrogators and their questioning may well increase over time: both pocket calculators and chess machines seemed unthinkable shortly before they successfully appeared but are now mainstream.
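Turing's actual criterion, loosely restated, concerns the *average* interrogator's success rate, not a single trial. A toy sketch of that distinction (the function name and the handling of the threshold are illustrative assumptions, not anything Turing specified formally):

```python
def machine_wins(correct_identifications, trials, threshold=0.70):
    """Turing's 1950 framing, loosely: the machine does well if the
    interrogator has 'not more than 70 per cent chance of making the
    right identification after five minutes of questioning'. A one-off
    experiment with a handful of judges says almost nothing about
    that underlying average."""
    return correct_identifications / trials <= threshold

print(machine_wins(6, 10))   # True: 60% right identifications, within Turing's bound
print(machine_wins(9, 10))   # False: interrogators reliably spot the machine
```

On this reading, the widely-reported 'four out of ten' results clear the numerical bar trivially, which is exactly why they reflect the original discussion so poorly.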

However, the key point regarding this formulation of an 'intelligence test' is that it is only loosely based on knowledge. No human knows nothing, but some know more than others. Similarly, programming a computer with knowledge bases of various sizes is trivial. The issue for the interrogator is to attempt to discriminate on the basis of how that knowledge is manipulated and presented. 'Q. What did Henry VIII die from?'; 'A. Syphilis', requires a simple look-up, but 'Q. Name a British monarch who died from a sexually transmitted disease' is more complex since neither the king nor the illness is mentioned in the original question/answer. As a slightly different illustration, asking for interpretations of the signs 'Safety hats must be worn' and 'Dogs must be carried' is challenging: although syntactically similar, their regulatory expectations are very different.

#### *3.3. 'Adaptable' Machines*

This, in turn, leads us to the concept of adaptability, often mooted in relation to intelligence, by both human-centric and AI researchers. Some even take this further to the point of using it to define 'genius' [50]. The argument is that *what* an entity *knows* is less important than how it *uses* it, particularly in dealing with situations where that knowledge has to be applied in a different context to that in which it was learned. This relates well to the preceding paragraph and perhaps has a ring of truth in assessing the intelligence of humans or even other animals; but what of AI? What does adaptability look like in a machine?

In fact, this is very difficult and rests squarely on what we think might happen as machines approach, then evolve through, the TS. If our current models of software running on hardware (as separate entities) survive, then adaptability may simply be the modification of database queries to analyse existing knowledge in new ways or an extension to existing algorithms to perform other tasks. Thus, in a strictly evolutionary sense, it may, for example, be sufficient for one generation to autonomously acquire better software in order to effect hardware improvements in the next. This is essentially a question of extra code being written, which is becoming simple enough, even automatically [51], if we (again) put aside the question of what the machine's motivation would be to do this.

All these approaches to machine 'improvement', however, remain essentially under human control. If, in contrast, adaptability in AI or machine terms implies fundamental changes to how that machine actually operates (perhaps, but not necessarily, within the same generation), or we expect the required 'motivation for improvement' to be somehow embedded in this evolutionary process, then this is entirely different, and may well have to appear independently. The same conceptual difficulty arises if we look to machines in which hardware and software are merged into an inseparable operational unit, as they appear to be in our human brains [52]. If the future has machines such as this: machines that have somehow passed over a critical evolutionary boundary, beyond anything we have either built or programmed into them, then something very significant indeed will have happened. What might cause this? To an extent, our accepted framework within which machines/computers operate may have been replaced by something conceptually beyond our control (possibly our understanding), one of the deeper concerns regarding the TS. However, where would this come from? Well, there may be numerous possibilities, so this is by no means an irrefutable logical progression we follow now but, having postponed the discussion to this point, it seems appropriate to deal with *'consciousness'* or *'self-awareness'*.

#### *3.4. 'Conscious' Machines*

Not all writers and academics, from neuroscientists to philosophers [53], consider these terms entirely synonymous, but the distinction is outside the scope of this paper. However, whether consciousness necessarily implies intelligence, and particularly how that might play out in practice, is certainly contentious, and returning to sci-fi illustrates the point well. If, only temporarily, we follow the (largely unproven) line of reasoning that consciousness is related to 'brain' size (that is, that sentience emerges naturally once we create sufficiently complex hardware or software), then, rather than look to stand-alone machines or even arrays of supercomputers, our first sight of such a phenomenon would presumably come from the single largest (and most complex) thing that humanity has ever created: the *Internet*. Whilst, in this context, Terminator's 'Skynet' [54] is well known, the considerable variation on this theme is best illustrated by two lesser-known novels.

Robert J. Sawyer's *WWW* Trilogy [41–43] describes the emergence of *'Webmind'*, an AI apparently made from discarded network packets endlessly circling the Internet topology. Leaving aside the question of whether the 'time-to-live' field [55] typically to be found in such packets might be expected to clear such debris, the story suggests an interesting framework for emergent consciousness based purely on software. Sawyer loosely suggests that a large enough number of such packets, interacting in the nature of cellular automata [42], would eventually achieve sentience. For all the evidence we have to the contrary, perhaps they could; there is nothing fundamentally objectionable in Sawyer's pseudo-science.
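The 'time-to-live' objection can be made concrete. In IPv4, each router decrements the TTL field and discards the packet when it reaches zero, so a packet cannot circle a topology indefinitely. A minimal sketch (the ring topology, hop count and initial TTL value are arbitrary choices here):

```python
# Simulate a packet circling a ring of routers: each hop decrements the TTL,
# and the packet is dropped when the TTL reaches zero (RFC 791 behaviour).
def hops_before_drop(ttl, ring_size):
    hops = 0
    position = 0
    while ttl > 0:
        position = (position + 1) % ring_size  # forward to the next router
        ttl -= 1                               # router decrements TTL
        hops += 1
    return hops  # the packet never survives longer than its initial TTL

print(hops_before_drop(ttl=64, ring_size=5))  # → 64
```

Since the IPv4 TTL is an 8-bit field, no conforming packet survives more than 255 hops, whatever the topology; Sawyer's premise implicitly requires this mechanism to fail or be bypassed.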

However, *Webmind* exhibits an equally serious problem we often observe in trying to discuss AI (probably AGI, in fact) or any post-TS autonomous computer or machine: the tendency to be unrealistically anthropocentric, or worse. In *WWW*, not only is *Webmind* intelligent by any reasonable definition, it is also almost limitlessly powerful. Once it has mastered its abilities, it is quickly able to 'do good' as it sees it, partially guided by a (young) human companion. It begins by eliminating electronic spam, then finds a cure for cancer and finally overthrows the Chinese government. It would remain to be seen, of course, quite how 'good' any all-powerful AI would really regard each of these objectives. At best, the first is merely a high-level feature of human society; the second would presumably improve human lives, but its effect on the overall balance of nature is unknown (even if the AI would really care much about *that*); and the last is the opinion of a financially-comfortable, liberally-minded western author: not *all humans* would necessarily share this view, let alone an emergent intelligence! Frankly, whatever an A(G)I may perceive its raison d'être to be, it is unlikely to be this!

In marked contrast, in the novel, *Conscious*, Vic Grout's *'It'* [44], far from being software running on the Internet, is simply the Internet *itself*. The global combination of the Internet and its supporting power networks has reached such a level of scale and connection density that it automatically acquires an internal control imperative and begins behaving as a (nascent form of) independent 'brain'. Hardware and software appear to be working inseparably: although its signals can be observed, they emanate from nowhere other than the hardware devices and connections themselves. Humans have supplied all this hardware, of course, and although, initially, *It* works primitively with human software protocols, these are slowly adapted to its own (generally unfathomable) purposes. *It* begins its life as the physical (wired) network infrastructure before subsuming wireless and cellular communications as well. Eventually, *It* acquires total mastery of the *'Internet of Everything'* (the *'Internet of Things'* projected a small number of years into the future) and causes huge, albeit random, damage.

Crucially, though, Grout quite deliberately offers no insight into the workings of *Its* 'mind'. Although *It* is almost immeasurably powerful (its peripherals are effectively everything on Earth: human transport, communications, climate control, life-support, weaponry, etc.), it remains throughout no more than embryonic in its sophistication, even its intelligence. If it *can* be considered brain-like, it is a brain in the very earliest stages of development. *It* is undoubtedly *conscious* in the sense that it has acquired an independent control imperative, and, as the story unfolds, there is clear evidence of learning: at least, becoming familiar with its own structure and (considerable) capabilities, but in no sense could this be confused with being 'clever'. There is no human attempt to communicate with *It* and not the slightest suggestion that this could be possible. Although *It* eventually wipes out a considerable fraction of life on Earth, it is unlikely that it understands any of this in any conventional sense. *It* is as much simply a 'powered nervous system' as it is a brain. *Conscious* [44] is only a story, of course, but, as a model, if actual consciousness could indeed be achieved without real intelligence (perhaps it requires merely a simple numerical threshold, neural complexity, for example, to be reached) then it would be difficult to make any confident assertions regarding AI or AGI.

#### *3.5. Models of 'Consciousness'*

Grout's framework for a conscious Internet is essentially a variant of the philosophical principle of panpsychism [56]. (Although 'true' panpsychism suggests some level of universal consciousness in both living and non-living things, Grout's model suggests a need for external 'power' or 'fuel' and an essential 'tipping point', neural complexity here, at which consciousness either happens or becomes observable.) The focus of any panpsychic model is essentially based on hardware. However, as with Sawyer's model of a large software-based AI, for all we really know of any of this, it is as credible as any other.
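The 'tipping point' in Grout's model, at which connection density suddenly yields global behaviour, has a well-studied mathematical analogue: the abrupt emergence of a giant connected component in an Erdős–Rényi random graph once the average degree passes 1. The sketch below is an analogy only, not a claim about consciousness; the graph size and degree values are arbitrary.

```python
import random

def largest_component(n, avg_degree, seed=0):
    """Fraction of nodes in the largest connected component of an
    Erdos-Renyi random graph with the given expected average degree."""
    rng = random.Random(seed)
    p = avg_degree / (n - 1)          # edge probability giving that degree
    parent = list(range(n))           # union-find forest over the nodes
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:      # edge present: merge components
                parent[find(i)] = find(j)
    sizes = {}
    for i in range(n):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / n

# Below average degree 1, components stay tiny; just above it, a single
# component abruptly spans most of the graph: a sharp 'tipping point'.
for degree in (0.5, 1.5, 3.0):
    print(degree, round(largest_component(1000, degree), 2))
```

The transition is discontinuous in character even though every individual edge is added by the same dumb local rule, which is what makes it a tempting (if strictly metaphorical) model for an 'observable threshold' of the kind Grout's panpsychic variant requires.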

As with 'intelligence', there are almost as many models of 'consciousness' [53] as there are researchers in the field. However, loosely grouping the main themes together [57] allows us to make a final, hopefully interesting, point. To this end, the following could perhaps be considered the main 'types' of explanation of where consciousness comes from:


Firstly, before dismissing any of these models out of hand, we should perhaps reflect that, if pressed, a non-negligible fraction of the world's population would adopt each of them (which is relevant if we consider *attitudes* towards AI). Secondly, each is credible in two respects: (1) they are logically and separately non-contradictory and (2) the human brain, which we assume to deliver consciousness, could (for all we know) be modelled by any of them. In considering each of these loose explanations, we need to rise above preconceptions based on belief (or the lack of it). In fact, it may be more instructive to look at *patterns*. With this in mind, these different broad models of consciousness can be viewed as being in sequence in two senses:


However, the simplicity of the first 'progression' is spoilt by adding 'pure' panpsychism and the second by Turing himself.

Firstly, if we add the pure panpsychism model as a new item zero in the list:

*Everything* in the universe has consciousness to some extent, even the apparently inanimate: humans are simply poor when it comes to recognizing this, then this fits nicely before (as a simpler version of) 1 whilst, simultaneously, the panpsychic notion of a 'universal consciousness' looks very much like an extension of 9. (It closely matches some established religions.) The list becomes cyclic and notions of 'scientific' vs. 'spiritual' are irreparably muddled.

Secondly, whatever an individual's initial position on all of this, Turing's original 1950 paper [45] could perhaps challenge it . . .

#### *3.6. A Word of Caution* ... *for Everyone*

In suggesting (loosely) that some form of A(G)I might be possible, Turing anticipated 'objections' in a variety of forms. The first, he represented thus:

*"The Theological Objection: Thinking is a function of man's immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think."*

He then dealt with this robustly as follows:

*"I am unable to accept any part of this, but will attempt to reply in theological terms. I should find the argument more convincing if animals were classed with men, for there is a greater difference, to my mind, between the typical animate and the inanimate than there is between man and the other animals.* ... *It appears to me that the argument quoted above implies a serious restriction of the omnipotence of the Almighty. It is admitted that there are certain things that He cannot do such as making one equal to two, but should we not believe that He has freedom to confer a soul on an elephant if He sees fit? We might expect that He would only exercise this power in conjunction with a mutation which provided the elephant with an appropriately improved brain to minister to the needs of this sort. An argument of exactly similar form may be made for the case of machines. It may seem different because it is more difficult to 'swallow'. But this really only means that we think it would be less likely that He would consider the circumstances suitable for conferring a soul.* ... *In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates."*

Turing is effectively saying (to 'believers'), 'Who are you to tell God how His universe works?' This is a significant observation in a wider context, however. Turing's caution, although clearly derived from a position of unequivocal atheism, can perhaps be considered a warning to any of us who have already decided what is possible or impossible in regard to A(G)I, then expect emerging scientific (or other) awareness to support us. That is not the way anything works; neither science nor God is an extension of ourselves: they will not rally to support our intellectual cause because we ask them to. It makes no difference what domain we work in; whether it is (any field of) science or God, or both, that we believe in, it is *that* that will decide what can and cannot be done, not *us*.

In fact, we could at least tentatively propose the notion that, ultimately, this is something we *cannot* understand. There is nothing irrational about this: most scientific disciplines from pure mathematics [58], through computer science [59] to applied physics [60] have known results akin to 'incompleteness theorems'. We are entirely reconciled to the acceptance that, starting from known science, there are things that cannot be reached by human means: propositions that cannot be proved, problems that cannot be solved, measurements that cannot be taken, etc. However, with a median predicted date little more than 20 years away [22], can the reality of the TS really be one of these undecidables? How? Well, perhaps we may simply never get to find out . . .

#### **4. The Wider View: Ethics, Economics, Politics, the Fermi Paradox and Some Conclusions**

The *'Fermi Paradox'* (*FP*) [61] can be paraphrased as asking, on the cascaded assumptions that there is a quasi-infinite number of planets in the universe, with a non-negligible (and therefore still extremely large) number of them supporting life, why none of them has succeeded in producing civilizations sufficiently technologically advanced to make contact with us. Many answers have been proposed [62], from the extremes of *'there isn't any: we're alone in the universe'* to suggestions that *'there* has *been contact: they're already here, we just haven't noticed'*, etc.

It could be argued or implied, however, that the considerable majority of possible solutions to the FP suggest that *the TS has not happened on any other planet in the universe*. Whatever the technological limitations of its 'natural' inhabitants may have been, whatever may or may not have happened to them (survival or otherwise) following their particular TS, and whatever physical constraints and limitations they were unable to overcome, their new race(s) of intelligent machines, evolving at a rate approaching the infinite, would surely be visible by now. As they do not appear to be, we are forced into a limited number of possible conclusions. Perhaps the machines have no desire to make contact (not even with the other machines they presumably know must exist elsewhere) but, again, this (machine motivation) is simply unknown territory for us. Perhaps these TSs are genuinely impossible for all the reasons discussed here and elsewhere [63]. Or perhaps their respective natural civilizations never get quite that far.

#### *4.1. Will We Get to See the TS?*

This, indeed, is an FP explanation gaining traction as our political times look more uncertain and our technological ability to destroy ourselves increases on all fronts. It has been suggested [64] that it may be a natural (and unavoidable) fate of advanced civilizations that they self-destruct before they are capable of long-distance space travel or communication. Perhaps this inevitable doom also prevents any TS? We should probably not discount this theory lightly. Aside from the obvious weaponry and the environmental damage being done, we have other, on the surface more mundane, factors at work. Social media's ability, for example, to threaten privacy, spread hatred, sow false information, cause division, etc., or the spectre of mass unemployment through automation, could all destabilize society to crisis point.

Technology has the ability to deliver a utopian future for humanity (with or without the TS occurring) but it almost certainly will not, essentially because it will always be put to work within a framework in which the primary driving force is profit [65]. A future in which intelligent machines do all the work for us could be good for everyone; but for it to be, fundamental political-economic structures would have to change, and there is no evidence that they are going to. (Whether 'not working' is a good thing for a human has nothing to do with the technology that made their work unnecessary, and everything to do with the economic framework everything operates under. If nothing changes, 'not working' will be socially unpleasant, as it is now: technology cannot change that, but there could soon be more people out of work than in.) Under the current political-economic arrangement, nothing really happens for the general good, whatever public-facing veneer may be applied to the profit-centric system. Such inequality and division may eventually (possibly even quickly) lead to conflict. The framework itself, however, appears to be very stable, so it may be that it preserves itself to the bitter end. All in all, the outlook is a bleak one. Perhaps the practical existence of the TS is genuinely unknowable because it lies just beyond any civilization's self-destruct point, and so can never be reached?

#### *4.2. Can We Really Understand the TS?*

Coming back to more Earthly and scientific matters, we should also, before ending, note something essential regarding evolution itself: it is imperfect [66]. The evolutionary process is, in effect, a sophisticated local-search optimization algorithm (variants are applied to current solutions in the hope of finding better ones in terms of a defined objective). The objective (for biological species, at least) is survival and the variable parameters are the species' characteristics. The evolutionary algorithm can never guarantee 'perfection' since it can only consider (for each successive generation) relatively small changes in its variables (mutations). As Richard Dawkins puts it [67] in attempting to deal with the question of *'why don't animals have wheels?'*:

*"Sudden, precipitous change is an option for engineers, but in wild nature the summit of Mount Improbable can be reached only if a gradual ramp upwards from a given starting point can be found. The wheel may be one of those cases where the engineering solution can be seen in plain view, yet be unattainable in evolution because it lies the other side of a deep valley, cutting unbridgeably across the massif of Mount Improbable."*

Therefore, advanced machines, seeking evolutionary improvements, may not achieve perfection. Does this mean they will remain constrained by (and 'content' with) imperfection, or will/can they find a better optimization process? There are certainly optimization algorithms superior to local search (often, somewhat cyclically, employing techniques we might loosely refer to as AI) and this/these new race(s) of superior machines might realistically have both the processing power and logic to find them. In the sense we generally apply the term, machines may not *evolve* at all: they may do something *better*. There may or may not be the same objective. Could millions of years of evolution, perhaps, be bypassed by a simple calculation? However, whatever happens, the chances of us being able to understand it are minimal.
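The local-search limitation described above, and Dawkins' valley across Mount Improbable, can be made concrete in a few lines. The two-peaked fitness landscape and the mutation step size below are purely illustrative assumptions: the point is only that small-step hill climbing stalls on whichever peak it starts near, even with a higher peak in plain view.

```python
def fitness(x):
    # Two peaks separated by a valley: a local peak at x = 2 (height 4)
    # and the global peak at x = 8 (height 9).
    return max(4 - (x - 2) ** 2, 9 - (x - 8) ** 2)

def hill_climb(x, step=0.25, generations=1000):
    # Evolution as local search: accept a small mutation only if it improves.
    for _ in range(generations):
        for candidate in (x - step, x + step):
            if fitness(candidate) > fitness(x):
                x = candidate
    return x

print(hill_climb(0.0))  # stalls at the local peak near x = 2
print(hill_climb(6.0))  # reaches the global peak near x = 8
```

A 'better' optimizer in the sense the paragraph above imagines would be one that can cross the valley: anything from random restarts and simulated annealing to, in the limit, simply solving for the global optimum analytically, which is the 'simple calculation' that bypasses evolution altogether.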

#### *4.3. Therefore, Can the TS Really Happen?*

Returning finally to the main discussion: if we persist in insisting that the TS can only arise from the evolution of AI to AGI, or worse, that machines have to achieve self-awareness for this to happen, then we are beset by difficulties: we have little idea what this really means, so a judgement on whether it can happen is next to impossible. There are credible arguments [68] that machines can never achieve the same form of intelligence as humans and that Turing's models [45] are simplistic in this respect. If both the AGI requirement for the TS and the impossibility of achieving AGI are simultaneously correct then, trivially, the TS cannot happen. However, if the TS can result simply from the conjunction of increasingly complex, but fully-understood, technological processes, then perhaps it can: and a model of what this might look like in practical terms is not only possible but independent of definitions of intelligence. Finally, other predictions of various crises in humanity's relationship with technology and its wider impact [65], within the framework of existing social, economic and political practice, could yet render the debate academic.

However, in many ways, the real take-home message is that many of us are not on the same page here. We use terms and discuss concepts freely with no standardization as to what they mean; we make assumptions in our self-contained logic based on axioms we do not share. Whether across different academic disciplines, wider fields of interest, or simply as individuals, we have to face up to an uncomfortable fact in a very immediate sense: many of us are not discussing the same questions. On that basis it is hardly surprising that we seem to be coming to different conclusions. If, as appears to be the case, a majority of (say) neuroscientists think the TS cannot happen and a majority of computer scientists think it can then, assuming an equivalent distribution of intelligence and abilities over those disciplines, they are clearly not envisioning the same event. *We need to talk.*

One final thought though, an admission really, as this paper has been written, probably in common with many others in a similar vein, by a middle-aged man who has worked through a few decades of technological development: sometimes with appetite, but at other times with horror . . . could this possibly be just a 'generational thing'? Just as Bertrand Russell [69] noted that philosophers themselves are products of, and therefore guided by, their position and period, futurologists are unlikely to be (at best) any better. This may yet all make more sense to generations to come. There is no better way to conclude a paper that considers uncertain futures and looks to extend thought boundaries through sci-fi than to leave the final word to Douglas Adams [70].

*"I've come up with a set of rules that describe our reactions to technologies:*


**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Pareidolic and Uncomplex Technological Singularity**

#### **Viorel Guliciuc \***

Department of Human, Social and Political Sciences, Faculty of History and Geography, "Ștefan cel Mare" University, 13 University Street, 720229 Suceava, Romania

Received: 25 October 2018; Accepted: 3 December 2018; Published: 6 December 2018

**Abstract:** "Technological Singularity" (TS), "Accelerated Change" (AC), and Artificial General Intelligence (AGI) are frequent themes in future/foresight studies. Rejecting the reductionist perspective on the evolution of science and technology, and based on *patternicity* ("the tendency to find patterns in meaningless noise"), a discussion about the perverse power of *apophenia* ("the tendency to perceive a connection or meaningful pattern between unrelated or random things (such as objects or ideas)") and *pareidolia* ("the tendency to perceive a specific, often meaningful image in a random or ambiguous visual pattern") in those studies is the starting point for two claims: *the "accelerated change" is a future-related* apophenia *case*, whereas *AGI (and TS) are future-related* pareidolia *cases*. A short presentation of research-focused social networks working to solve complex problems reveals the superiority of human networked minds over hardware-software systems and suggests the opportunity for a network-based study of TS (and AGI) from a complexity perspective. It could compensate for the weaknesses of approaches deployed from a linear and predictable perspective, in order to try to redesign our intelligent artifacts.

**Keywords:** Technological Singularity; Accelerated Change; Artificial (General) Intelligence; apophenia; pareidolia; complexity; research focused social network; networked minds; complexity break; complexity fallacy

"Any fact becomes important when it's connected to another"

(Umberto Eco, *Foucault's Pendulum*)

#### **1. A Pretext**

The popular understanding of the Future(s) and, especially, of AGI and/or of the TS seems to be related *to patternicity* [1], *apophenia* [2] and *pareidolia* [3].

We care about the future because, as living beings, we are "programmed" for the conservation of our lives despite threats, challenges, and changes.

When the future became an essential part of their lives *and* their language(s), pre-human beings became human beings.

Change, the possibility of change, and the power-of-realization/of-becoming-real of the possibility/virtuality-of-change are essential parts of our relationship with reality, with what-is.

Because nothing can be without the power-to-be, the power to erupt from what-is-virtual into what-is-real [4] (pp. 91–92), this power-of-being/power-to-be is fueling both our future(s) and our studies of the future(s).

#### **2. The Fascination with the Future: Foresight Studies Require an Appropriate Methodology**

Confronted with the challenge of correctly understanding, representing, and managing the future(s) of humankind and, especially, future discoveries in science and breakthroughs in technology, we have, primarily, to correctly deal with our perplexities and expectations related to them, and to find the most probable behavior of such complex systems as the human brain/mind, human society, science and/or technology for our best future(s).

Let us remember that, when "the simple yet infinitely complex question of where is technology taking us?" [5] was asked, our times were described as the "Age of Surprise."

It is a "concept originally described by the U.S. Air Force Center for Strategy and Technology at The Air University" [5], a part of the Blue Horizons project [6], writes Reuven Cohen.

In fact, the Age of Surprise is a situation in which "the exponential advancement of technology" has just reached "a critical point where not even governments can project the direction humanity is headed" [5].

What is interesting for this paper's theme is the forecast made by some researchers of "an eventual Singularity where the lines between humans and machines are blurred" [5].

There is increasing awareness related to technological changes and breakthroughs.

Under this metaphor, but also considering the necessity of finding the best solutions in order not to engage humankind in catastrophic, global, and existential risks [7] and, especially and more specifically, in *existential technological risks*, it is essential, from a scientific standpoint, *to not accept that it is natural to find what we expect to find*, *nor to project our expected finds as sound scientific results*. Yet, it is also important *to not reduce the complexity of the analyzed systems—TS or AGI—to the linearity of a predictable use of data*, when dealing with the future(s).

In fact, *to date, there is no such scientific field as future studies* [8]/*foresight studies*, even though there are such claims [9].

Because future-related reasoning is probabilistic, a hard science of the future is problematic, as we cannot decide with any accuracy about the truth or falsehood of our judgments on future actions, or on the existence of future beings and future artifacts. As one of the reviewers of this paper rightly observed, in science we make predictions that can be "evaluated according to available experience" and this is a sign that "nevertheless, the science is possible." Indeed, a probabilistic truth engages a re-evaluation of the classic two-dimensional models of reasoning in favor of a discussion related to a unified model of reasoning [10].

The sense of the above revised statement is related to some of the observations made by Samuel Arbesman in his book *The Half-life of Facts: Why Everything We Know Has an Expiration Date* [11]. Even though I could not access the contents of the book, I have understood, from the review by Roger M. Stein, that *in science, acquired knowledge has a pace of overturn* [12], and, from the paper of James A. Evans on "Future Science," that *innovation has been decreasing, instead of accelerating, in the last several decades* [13].

This is why it is difficult and problematic to claim the status of a "hard" science for foresight studies.

Perhaps the explanation for such a problematic status could be related not only to the complexity of the future-related models of the evolution of science and technology (as proven in "The Age of Surprise" report), but, also, to Aristotle's legacy [14] regarding the problem of future contingents [15] and/or to Charles Sanders Peirce's observation that, in order to reach, through inquiry, an objective scientific understanding we have to eliminate such human factors as expectations and bias ([16] pp. 111–112).

A major issue in the study of future(s) is reaching the best *probably* correct understanding. For such an accomplishment, *we have to accept both the complexity of the world and the complexity of the human being(s)*.

This is why, first of all, we need to reject any reductionist perspective.

There is a powerful reductionist tendency in scientific studies, as "the world is still dominated by a mechanistic, cause and effect mindset with origins in the Industrial Revolution and the Newtonian scientific philosophy" [17].

Beyond the radical claim in the sentence quoted above, the existence of such reductionist tendencies is of importance for TS research, as such tendencies are detectable in various perspectives on AC, AGI, and TS as well. This is why, as one of the reviewers concluded, this paper is focused on "criticism of the TS/AC/AGI claims based on reductionism and extrapolation."

In the following pages, I will briefly consider some examples of the perverse power of our expectations related to the future(s) of technology and, especially related to TS, I will engage in a short discussion based on the observation that there are few studies on TS deployed from a complexity perspective.

#### **3. The Apophenic Face(s) of Our Technology- and Science-Related Expectations: The "Law of Accelerating Returns" (LAR)**

Contemplating the timeline of humankind, when change is perceived as affecting both societies and individuals, debates and controversies related to the future flourish until a new *equilibrium* and a new *status quo* are reached regarding the perception(s) and understanding(s) of the possible future(s), not necessarily predictable or predicted in those discussions.

*a. Is the Change in Science and Technology Accelerating?*

One of the most appealing ideas in the debates of our times is so-called "accelerating change" (AC)/the "law of accelerating returns" (LAR).

When considering the idea of AC, one should go back to the idea of "progress itself" [18].

This is not about AC in technology only—e.g., as claimed by Ray Kurzweil [19]. It is about AC in science, too—e.g., as claimed by John M. Smart, the Foresight U, and the FERN teams [18].

For some researchers, AC is an ontological feature, almost a law of nature as it "is one of the most future-important, pervasive, and puzzling features of our universe," everywhere observable, including "the twenty-thousand year history of human civilization" [18].

Meanwhile, for other researchers it emerges universally [20].

In general terms, AC is just a perceived change of rhythm in the regularity of the advances of technology and science through the ages.

The challenge it brings with it, observes Richard Paul, is "to trace the general implications of what are identified as the two central characteristics of the future: accelerating change and intensifying complexity" [21]. The major concerns are related to the pace of AC and to the perception of an increasing complexity: "how are we to understand how this change and complexity will play itself out? How are we to prepare for it?" [21].

The fascination with AC is an expression of our interest in the study of the future, in so-called "foresight studies" [22].

In fact, it is such a rapidly growing topic that, using the simplest Google search, on 4 October 2018, I found not only about 16,800,000 results for "foresight," about 11,700,000 results for "foresight studies," and about 14,900,000 results for "foresight study," but also 20,400 results for "foresight science."

At the same time, *the perceived AC seems to be more likely just a subjective projection of our assumptions/presuppositions*, even when discovering the gap(s) between our linear and predictable expectations about the pace(s) of changes in technology and science and our own minds' power to process the complexity of the information related to the new developments in science and new breakthroughs in technology.

Some critics of AC—Theodore Modis [23]; Jonathan Huebner [24]; Andrey Korotayev, Artemy Malkov, and Daria Khaltourina [25]; James A. Evans [13]; Julia Lane [26] and bloggers such as Richard Jones [27] and David Moschella [28], among others—argued that "the rate of technological innovation has not only ceased to rise, but is actually now declining" [29] and/or argued that there are other possible projections of the LAR (Law of Accelerating Returns) besides the one proposed by Kurzweil.

To date, there is neither general acceptance of AC's existence, nor a final rejection of it.

Under these circumstances, a statement such as "one can criticize 'proofs' of AC for a subjective selection of technologies, but no one can claim that within the selective set of technologies there is no AC" (made by one of the reviewers of this paper) is most likely a subjective one.

My main concerns are the following: How is it, could it be, or should it be established that a particular "selective set of technologies" deserves general acceptance? And is it legitimate to extrapolate those particular findings to a universal set of technologies, or even to the status of a universal phenomenon?

In the meantime, it is well known that there are several models of Singularity and of TS—such as those presented in the taxonomies of John Smart [30] and Anders Sandberg [31], for example.

In John Smart's classification, all three types of Singularity (computational, developmental, and technological) "assume the proposition of the universe-as-a-computing-system" [30]. In fact, this is an assumption, also known as the "infopomorphic" paradigm, which "proposes that information processing is both a process and purpose intrinsic to all physical structures," where "the interrelationships between certain information processing structures can occasionally undergo irreversible, accelerating discontinuities" [30].

I think this is why Amnon H. Eden, James H. Moor, Johnny H. Søraker, and Eric Steinhart chose as the title of the book they co-edited on TS: *Singularity Hypotheses: A Scientific and Philosophical Assessment* [32].

Because, for now, the "infopomorphic" paradigm/assumption is still debated, and because it is the ultimate ground of the TS hypothesis, the results of the debates on TS cannot be boldly and clearly related to truth or falsehood.

As TS is deeply related to AC—it is not possible without it!—a good method for an appropriate TS study is to remember the claims of one of the most well-known defenders of TS, Kurzweil.

John Smart is considered among the "prominent explorers and advocates of the technological Singularity idea," along with brilliant researchers such as John von Neumann, I.J. Good, Hans Moravec, Vernor Vinge, Danny Hillis, Eliezer Yudkowsky, Damien Broderick, Ben Goertzel and "a small number of other futurists, and most eloquently to date, Ray Kurzweil" [30].

For that "eloquence," I choose to refer here, especially, to Kurzweil.

(In the meantime, the "infopomorphic" assumption remains active for TS's defenders.)

One of Kurzweil's main ideas is that change is exponential and not "intuitively linear"—as is the case, he thinks, even with "sophisticated commentators" who "extrapolate the current pace of change over the next ten years or one hundred years to determine their expectations." Meanwhile, "a serious assessment of the history of technology" will reveal that "technological change is exponential" for "a wide variety of technologies, ranging from electronic to biological ... the acceleration of progress and growth applies to each of them" [33].

However, despite Kurzweil's belief(s)—a "natural" result of his own expectations—it has been observed that our world is changing, but not as fast as we would be tempted to think based on our own perceptions: "the world is changing. But change is not accelerating." So, the very idea of an exponential growth of change is disputable [34].

#### *b. Does the Road toward TS Have a Single Shape/Route?*

A second main idea of Kurzweil's is related to the *inevitability of TS*. It is based on a subjective extension, following Moravec, of the so-called "Moore's law" of exponential growth in computing power—graphically represented as an asymptote-like curve of AC—toward the existence and accelerated growth of a human-like AGI.

Even in popular science it has been observed that the so-called LAR has "altered public perception of Moore's law," because, contrary to the common belief promoted by Kurzweil and Moravec, Moore was *only* making predictions related to the performance of "semiconductor circuits" and not "predictions regarding all forms of technology" [29].

In fact, there have been numerous and varied debates on the very existence of Kurzweil's extension of Moore's law; in the references we cite just two of them [35,36].

Most likely, exponential growth cannot be claimed in the form of a vertical-like asymptote, where the curve approaches +∞ [37] (Figure 1).

**Figure 1.** Vertical asymptote as in the following source: [37].

(One of this paper's reviewers noted that "the exponential growth doesn't have a vertical asymptote." Indeed, but *this asymptote-like shape seems to be used by Kurzweil* to represent the growth of computing power, as in Figure 2, as endless vertical growth. More likely the cause is quite simple: it better fits *his expectations* and shows the patterns *he* was looking for. As Korotayev observes, Kurzweil uses *three graphs* to illustrate the countdown to Singularity [38] (p. 75). Yet Korotayev argues that Kurzweil did not know about the mathematical Singularity, nor was he aware of the differences between exponential and hyperbolic growth, etc.)

**Figure 2.** The exponential scale of technological growth (as in [19]).

But, instead, it could take the form of a horizontal one (Figure 3),

**Figure 3.** Horizontal asymptote as in the source: [39].

Or even, more naturally, the form of successive horizontal asymptotes, vertically arranged (Figure 4).

**Figure 4.** Horizontal asymptotes [39].

I think Figure 4 fits the history of breakthroughs in science and technology more naturally, as it evokes a succession of jumps in the evolution towards TS.
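The intuition behind Figure 4 can be sketched numerically. The toy model below is only an illustration (all parameters are invented for the example): each technology follows a logistic (S-shaped) curve that levels off at a horizontal asymptote, and each successive technology raises the ceiling, producing a staircase of jumps rather than one smooth exponential.

```python
import math

def logistic(t, ceiling, midpoint, rate):
    """S-shaped curve rising toward a horizontal asymptote at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def capability(t):
    """Toy model: three successive technologies, each saturating at a
    higher level (the vertically arranged asymptotes of Figure 4).
    All parameters are invented for illustration."""
    generations = [
        (1.0, 10.0, 0.8),   # (ceiling, midpoint, steepness)
        (3.0, 30.0, 0.8),
        (9.0, 50.0, 0.8),
    ]
    # Each generation contributes the gap between its ceiling and the previous one.
    total, prev_ceiling = 0.0, 0.0
    for ceiling, midpoint, rate in generations:
        total += logistic(t, ceiling - prev_ceiling, midpoint, rate)
        prev_ceiling = ceiling
    return total

# Between generations, progress plateaus near each ceiling:
print(round(capability(20), 2))  # ≈ 1.0 (first plateau)
print(round(capability(40), 2))  # ≈ 3.0 (second plateau)
print(round(capability(70), 2))  # ≈ 9.0 (third plateau)
```

Sampled between the midpoints, such a curve shows exactly the pattern Figure 4 evokes: long plateaus separated by discontinuous-looking jumps, with no single overall growth rate.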

Indeed, two asymptotes will constitute a hyperbola under certain conditions.

One of the reviewers observed, "The asymptote in TS appears, because each next type of cybernetic systems develops with a higher exponential growth rate, and the time between metasystem transactions to novel cybernetic systems becomes smaller."

However, Kurzweil's TS is related to the growth of computing power, which is related to his extrapolation of Moore's Law. Under these circumstances, let us remember Tom Simonite's paper from the *MIT Technology Review*, "Moore's Law Is Dead. Now What?", where he quotes Horst Simon, deputy director of the Lawrence Berkeley National Laboratory: "the world's most powerful calculating machines appear to be already feeling the effects of Moore's Law's end times. The world's top supercomputers aren't getting better at the rate they used to" [40].

The reviewer continues: "consequently, it doesn't really matter if the exponential growth of individual technologies continues infinitely or becomes S-shaped curve with a horizontal asymptote if new technologies outperform older technologies with higher growth rate."

I have some difficulties in understanding the extrapolation made above.

A first one is related to the observation made by Andrey Korotayev, who states: "let us stress again that the mathematical analysis demonstrates rather rigorously that the development acceleration pattern within Kurzweil's series is NOT exponential (as is claimed by Kurzweil), but hyperexponential, or, to be more exact, hyperbolic" [38] (p. 84).
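Korotayev's distinction between exponential and hyperbolic growth can be stated precisely. As a minimal illustration (with generic symbols, not Kurzweil's actual data): an exponential curve stays finite at every finite time, whereas a hyperbolic one has a vertical asymptote at a finite "singularity date" $t_0$:

```latex
% Exponential growth: solves dx/dt = r x, finite at every finite t
x_{\mathrm{exp}}(t) = x_0 e^{rt} < \infty \quad \text{for all finite } t
% Hyperbolic growth: solves dx/dt = k x^2, diverges in finite time
x_{\mathrm{hyp}}(t) = \frac{C}{t_0 - t} \longrightarrow \infty
\quad \text{as } t \to t_0^{-}
```

Only the hyperbolic form, in which the growth rate increases with the level already attained, produces a finite-time singularity; an exponential curve, however steep, never does.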

This is why some researchers, studying the macro-regularities in the evolution of humankind, have come to different conclusions about the possible evolution of AC. A second perplexity is related to the observation that it really matters whether the exponential growth of individual technologies is an S-shaped curve [23].

Andrey Korotayev's conclusions, in his re-analysis of the 21st-century Singularity, are that "the existence of sufficiently rigorous global macroevolutionary regularities (describing the evolution of complexity on our planet for a few billions of years)" is "surprisingly accurately described by extremely simple mathematical functions." Moreover, he thinks, there is no reason "to expect an unprecedented (many orders of magnitude) acceleration of the rates of technological development" near the so-called "Singularity point." Instead, "there are more grounds for interpreting this point as an indication of an inflection point, after which the pace of global evolution will begin to slow down systematically in the long term" [38].

This is why I used the representation within Figure 4 (above). The main idea is not to correctly represent the succession of the exponential growth, but to highlight the discontinuities in the evolution of technology. I agree with Richard Jones, who observes: "the key mistake here is to think that 'Technology' is a single thing, that by itself can have a rate of change, whether that's fast or slow." Indeed, "there are many technologies, and at any given time some will be advancing fast, some will be in a state of stasis, and some may even be regressing." Moreover, "it's very common for technologies to have a period of rapid development, with a roughly constant fractional rate of improvement, until physical or economic constraints cause progress to level off" [27].

So, *the mathematical representation of the AC, as leading to TS, could have different perspectives* beyond the graphic defended by Kurzweil, and *his* expectations related to the road toward TS are problematic.

*c. Is There an Explanation for Kurzweil's "Discoveries"?*

One could ask: Why have Kurzweil and the defenders of AC reached the idea of exponential growth?

An appealing but unexpected answer is quite simple: they have searched for data to confirm their expectations.

As Michael Shermer observed, human beings are "pattern-seeking story-telling animals" [41].

This is why "we are quite adept at telling stories about patterns, whether they exist or not" [41]. He named this tendency "patternicity" [42].

Observing and evaluating, we are looking for and finding "patterns in our world and in our lives"; then, we "weave narratives around those patterns to bring them to life and give them meaning." Michael Shermer concludes: "such is the stuff of which myth, religion, history, and science are made" [43].

In Kurzweil's "discoveries," we have both *a subjective extension of an expectation* (AC) and *an alteration of the data in order to fit the model "discovered"* (the exponential growth of the returns toward the Singularity point).

Kurzweil's LAR is an example of a connection created by his own expectations.

Yet, as one of the reviewers underlined, Kurzweil is neither the one who discovered TS nor the only one promoting it. However, again, due to his "eloquence," he is exemplary for the "infopomorphic" paradigm. While his assumption remains controversial, some clever works—such as Valentin F. Turchin's [44], mentioned by one of the reviewers of this paper—have found a cybernetic approach in human evolution, others have promoted an entire cybernetic philosophy—as was the case with Mihai Draganescu's works [45–47]—and some have even questioned whether we are living in a computer simulation [48]. Examples of this flourishing debate and controversy are Ken Wharton's paper dismissing the computer simulation theory [49], and Zohar Ringel and Dmitry L. Kovrizhin's [50] or Andrew Masterson's conclusions on the same subject [51].

Those "infopomorphic" paradigm-related theories of everything are, very probably, just examples of the (false) perceptions (and beliefs) created by our expectations.

As already underlined above, the status of AC as an objective tendency is still under debate. So is the status of LAR.

These are perceptions (and beliefs) created by our expectations—examples of patternicity. *AC (and LAR) are just examples of a specific patternicity case:* apophenia.

*Apophenia* is defined by the *Merriam-Webster Dictionary* as "the tendency to perceive a connection or meaningful pattern between unrelated or random things (such as objects or ideas)" [2], by the RationalWiki as "the experience of seeing meaningful patterns or connections in random or meaningless data" [52] and by *The Skeptic's Dictionary* as "the spontaneous perception of connections and meaningfulness of unrelated phenomena" [53].

Until now, several types of *apophenia* have been studied: *clustering illusion* ("the cognitive bias of seeing a pattern in what is actually a random sequence of numbers or events" [54]); *confirmation bias* ("the tendency for people to (consciously or unconsciously) seek out information that conforms to their pre-existing view points, and subsequently ignore information that goes against them, both positive and negative" [55]); *gambler's fallacy* ("the logical fallacy that a random process becomes less random, and more predictable, as it is repeated" [56]); and *pareidolia* ("the phenomenon of recognizing patterns, shapes, and familiar objects in a vague and sometimes random stimulus" [57]).
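The clustering illusion, in particular, is easy to demonstrate: genuinely random sequences routinely contain runs that look like meaningful patterns. The sketch below is a simple simulation of my own, not drawn from any of the cited studies: it counts, over many random coin-flip sequences, how often a "streak" of at least five identical outcomes appears. Such streaks are the norm, not the exception, yet observers tend to read them as significant.

```python
import random

def longest_run(flips):
    """Length of the longest run of identical outcomes in a sequence."""
    best = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

random.seed(42)  # fixed seed for a reproducible illustration
trials = 1000
flips_per_trial = 100
with_long_run = sum(
    longest_run([random.choice("HT") for _ in range(flips_per_trial)]) >= 5
    for _ in range(trials)
)
# The overwhelming majority of purely random sequences contain a
# streak of at least five identical outcomes:
print(with_long_run / trials)  # typically > 0.9
```

In other words, "patterns" of this kind carry no information about the process that produced them, which is precisely the point of the clustering illusion.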

AC and LAR seem to be just cognitive biases related to the representation of future-related expectations.

#### **4. There Is More than** *Apophenia* **in Kurzweil's TS; It Is** *Pareidolia*

My two hypotheses about Kurzweil's famous "law of accelerating returns" (LAR) as undoubtedly leading to TS are the following.

1. *LAR is more likely just a new case of apophenia* [58,59]—as it shows "the spontaneous perception of connections and meaningfulness of unrelated phenomena" [53] and for centuries people have been perceiving the changes in science and technology as accelerating [23,58,60].

One of the reviewers of this paper wrote, "one absolutely cannot agree that exponential growth is a false pattern observed in random data as supposed by the notion of apophenia."

This is Kurzweil's opinion, too.

However, it is a false pattern not only because he manipulated data [23], but also, and more importantly, because exponential growth is just one of the possible models of growth [61]; it cannot continue indefinitely, but sometimes makes an inflexion and becomes an exponential decay [61,62] or just a slowdown [38].

The models of growth could have several types of representation, not only the exponential one.

"There is a whole hierarchy of conceivable growth rates that are slower than exponential and faster than linear (in the long run)" [61], and "growth rates may also be faster than exponential." In extreme cases, "when growth increases without bound in finite time, it is called hyperbolic growth. In between exponential and hyperbolic growth lie more classes of growth behavior" [38,61]. Sometimes exponential growth simply slows down [38,63].
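The hierarchy quoted above can be made concrete with a small numerical sketch (the constants are arbitrary, chosen only for illustration): linear, polynomial, and exponential growth remain finite at every finite time and merely overtake one another in the long run, while hyperbolic growth blows up as t approaches a finite singularity time.

```python
import math

def linear(t):
    return 1.0 + t

def polynomial(t):
    return (1.0 + t) ** 3

def exponential(t):
    return math.exp(0.5 * t)

def hyperbolic(t, t0=20.0):
    """Diverges as t -> t0: a finite-time ('Singularity-like') blow-up."""
    assert t < t0, "hyperbolic growth is undefined at/after t0"
    return 1.0 / (t0 - t)

# In the long run each class dominates the previous one...
t = 30.0
print(linear(t) < polynomial(t) < exponential(t))  # True

# ...but only the hyperbolic curve explodes at a finite time:
for t in (19.0, 19.9, 19.99):
    print(round(hyperbolic(t), 1))  # 1.0, 10.0, 100.0
```

The sketch makes the disputed point tangible: which of these classes (if any) actually describes technological change is an empirical question, not something the shape of a chosen curve can settle.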

When rejecting exponential growth as a most likely false pattern, we come up against the following problem: patternicity itself was acquired through evolution as a specific human adaptive behavior, in order to ensure or facilitate individual or collective survival.

Indeed, "the search for pertinent patterns in the world is ubiquitous among animals, is one of the main brain tasks and is crucial for survival and reproduction." In the meantime, "it leads to the occurrence of false positives, known as patternicity: the general tendency to find meaningful/familiar patterns in meaningless noise or suggestive cluster" [64].

When claiming that AC, and consequently LAR and TS, are objective tendencies, we are assuming everything is eventually explainable in a Mendeleevian-like table—a solid, monolithic, and somehow mechanical explanation—so every other possibility should be rejected. Yet the world, human society, human beings, and the very evolution of science and technology are complex and hardly predictable, as demonstrated in "The Age of Surprise" report [6].

Claiming AC/LAR/TS are objective tendencies leads, necessarily, to a *subjective* selection of data *considered trustworthy because it fits our expectations*. This, in turn, is a new form of apophenia.

So, how can we trust the claims related to TS's possibility or even inevitability? Under these circumstances, as one of the reviewers correctly observed, when, maybe, "we can easily trust the claims related to TS's possibility," "we cannot so easily trust the claims related to TS's inevitability." I would add here: our trust is also a patternicity result.

2. TS could be considered more likely as a new case of pareidolia [41], because TS is AGI-based, and AGI is commonly and uncritically understood as a human-like intelligence.

The specificity of LAR's apophenia and TS's pareidolia (through AGI) is related to the direction of our perceptions and expectations—they are both future-related.

The arguments for such claims will be deployed in the following pages.

*a. The Perfect New World of TS*

As we saw, Kurzweil's expectations (and beliefs) are the following: "technological change is exponential, contrary to the common-sense 'intuitive linear' view"; the "returns" are increasing exponentially; there is "exponential growth in the rate of exponential growth"; machine intelligence will surpass, within just a few decades, human intelligence, "leading to The Singularity—technological change so rapid and profound it represents a rupture in the fabric of human history" based on "the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light" [19].

For Kurzweil and the Singularitarians [65]—the adherents, defenders, and promoters of Singularitarianism [66], in an almost religious way [67,68]—these expectations (and beliefs) seem to be confirmed by the pace of progress in science and technology.

Or, rather, there is no one pace of progress, but paces of progress.

Some critics of LAR observed that there are not only different rates of AC in technical innovation and scientific discovery, but also very different phenomena and processes appropriate to be mathematically included in graphical representation(s) of the accelerating change [38].

Meanwhile, for other researchers, TS is just intellectual fraud [69].

From such a perspective, Kurzweil's LAR and Richard W. Paul's plea for AC are just unnecessary reductions of the complexity of the tendencies in the evolution of science and technology to the linearity and predictability of our expectations.

Unifying and reducing all the rhythms and paces of progress to one perspective and equation is an exercise of the imagination, under the question: How will the perfect future world be?

#### *b. (Again) "Accelerating Change" (AC) as Apophenia*

Considering *apophenia* and *pareidolia*, let us remember, once more, some of their characteristics.

They can occur simultaneously, observes Robert Todd Carroll, as in the case of "seeing a birthmark pattern on a goat as the Arabic word for Allah and thinking you've received a message from God" or, as when seeing not only "the Virgin Mary in tree bark but believing the appearance is a divine sign" [58]. Here he discovers both *apophenia* and *pareidolia*.

Yet, "seeing an alien spaceship in a pattern of lights in the sky is an example of pareidolia," but it becomes apophenia if you believe the aliens have picked you as their special envoy [62].

Moreover, continues Carroll, commonly, "apophenia provides a psychological explanation for many delusions based on sense perception"—"UFO sightings"; "hearing of sinister messages on records played backwards"—whereas "pareidolia explains Elvis, Bigfoot, and Loch Ness Monster sightings" [58].

Seeing the pattern of exponential acceleration in the pace of technological change, and representing the exponential growth as an asymptote, is apophenia: there is no general consensus about the objective existence of AC (and LAR), and yet, in an attempt to break the ultimate unpredictability of the complexity of the future evolution of technology and science, unrelated and/or unclearly related phenomena and/or processes are connected and considered meaningful, based on our profound need for order [70].

It is a semiotic situation, as "any fact becomes important when it's connected to another" (Umberto Eco).

*Even though it is a model that seems to work for particular cases*, it still proves the ultimate weakness of a reductionist approach, as was true of the Ptolemaic model, too. Let us remember that from a false statement we can reach either a true or a false conclusion—in this case, from a reductionist subjective model we could obtain both falsehood and truth in the idea of change in the evolution of technology. Because of this truth-falsehood status of the observation-based idea—there is sometimes accelerating change in some technologies' evolution—we cannot extrapolate to AC (as a general objective tendency) and consider the model we use, based on a positivist assumption of technology's progress, to be necessarily true.

It is a special case of *patternicity*, an apophenia, when, based on subjective selection and *arbitrary inferences* [71], various evolutionary tendencies, from various fields, are merged into a single evolutionary pattern.

The graphic representation Kurzweil used to illustrate the growth of computing power is Figure 2 (above).

The data related to technological change and advancement have been altered, manipulated, and adjusted in order to fit his expectations related to AC, LAR, and TS.

Observing Kurzweil's "methodology," Nathan Pensky concluded, "Thus, 'evolution' can mean whatever Kurzweil wants it to mean." It requires joining "disparate types of 'evolution'" [72].

Under these conditions, "the graph takes an exponential curve not because humans have moved inexorably along a track of 'accelerating returns,'" but because Kurzweil has "ordered" data points in order to "reflect the narrative he likes" [72].

The future-related TS's *apophenia* [73] is a good example of the *intentionality fallacy* described by David Christopher Lane [74] and explained by Sandra L. Hubscher in her article on *apophenia* from *The Skeptic's Dictionary* [59].

#### *c. TS (through AGI) as Pareidolia*

As a new type of *pareidolia,* TS could be described as a wrong, subjective visual representation of the expectation related to human future(s), under a human-like AGI presupposition/assumption, also using *arbitrary inferences* [71].

The best example is this common and uncritical expectation: AGI, leading, inexorably, to TS, will be human-like—at least in its first stages.

As in this *Information* special issue, serious arguments have been deployed against such an idea [75].

Let us illustrate the enthusiastic defense of human-like AGI with a short survey of bombastic headlines in the media: "One step closer to having human emotions: Scientists teach computers to feel 'regret' to improve their performance" [76], "Daydreaming simulated by computer model" [77], "Kimera Systems Delivers Nigel—World's First Artificial General Intelligence" [78], "Meet the world's 'first psychopath AI'" [79], "Human-like A.I. will emerge in 5 to 10 years, say experts" [80], etc.

The expectation that AI/AGI is/will be human-like or will be congruent with human intelligence(s) and emotion(s) is the ground for a narcissistic and future-related TS pareidolia. In fact, AI/AGI could have different characteristics than human intelligence, even though we human beings have created the so-called "intelligent software" [81].

One cause of the anthropocentric claims of the Singularitarians could be this: we often forget that, from a false assumption—in this case, that there is AC/LAR and that AGI will necessarily be a human-like intelligence—based on an apophenic/pareidolic future-related hope, one can deduce anything.

An example is Roman Yampolskiy's rejection of Toby Walsh's idea that the Singularity may not be near [82], in this special issue of *Information*.

TS's pareidolia cannot be validated as true or false, even by the most brilliant minds.

#### **5. The TS-Related "Complexity Fallacy"**

Another cause of such anthropocentric claims could be related to a wrong understanding and management of complexity, not only in the research on TS, but in the very way we are creating our hardware and software.

All these discussions, debates, and controversies related to AC, AI/AGI, or TS seem to be like the birds' uncoordinated songs in a wood and not like a symphony. There is a rise in different perspectives, definitions, and claims related to them.

Under these circumstances, systematizations and classifications have been proposed in papers and/or books/collected papers signed/edited by various researchers: Anders Sandberg [31], Nick Bostrom [83], Nikita Danaylov (Socrates) [84], Amnon H. Eden, James H. Moor, Johnny H. Søraker and Eric Steinhart [32], Eliezer S. Yudkowsky [85], IEEE's *Spectrum. Special Report on Singularity* [86], John Brockman [87], Adriana Braga and Robert K. Logan [75], etc.

In fact, the history of science and technology is full of attempts to reduce the richness of the facts, phenomena, entities, and beings to a Mendeleevian-like table. "This is the Faustian knowledge management philosophy assumed by the Wizard Apprentice" [88].

This is "a sign of a deep belief in the power of the taxonomy." It is also "an effect of the so-called presupposition of the 'generic (=linear and fully predictable) universality'—one of the best expressions of a mechanistic perspective on the world." It is about "claiming that we could fully reverse a deduction," usually "through strait abduction, in an attempt to rebuild the so called unity of the unbroken original mirror of the human knowledge using its fragments" [88].

This is about dismissing complexity for the profit of reductionist simplicity.

*I would name this reduction of what is complex—and so, nonlinear and unpredictable, but also partially predictable—to what is linear and predictable, when naturally predictable (just complicated), the complexity fallacy.* There are many AC, LAR, AI/AGI, and TS approaches deploying such a fallacy as an effect of an unconscious *patternicity*.

This situation suggests that AC, LAR, AI/AGI, and TS have to be studied from a perspective really based on complexity, as has already been suggested by Paul Allen—with his "complexity brake" argument against TS [89]—and Viorel Guliciuc—with his examples of differences between the functioning of computers and of human networked minds [88].

Paul Allen observes that "the amazing intricacy of human cognition should serve as caution for those who claim that the Singularity is close," as "without having a scientifically deep understanding of the cognition, we cannot create the software that could spark the Singularity." So, "rather than the ever-accelerating advancement predicted by Kurzweil," it is more likely "that progress towards this understanding is fundamentally slowed by the complexity brake" [89].

The *complexity brake* is described in these words: "as we go deeper and deeper in our understanding of natural systems," we find we need "more and more specialized knowledge to characterize them, and we are forced to continuously expand our scientific theories in more and more complex ways." So, "understanding the detailed mechanisms of human cognition is a task that is subject to this complexity brake" [89].

As quoted in this special issue of *Information*, "human minds are incredibly complex" [76] and the way humans think in patterns is very different from AI/AGI data processing [90]. So, an AGI leading to TS should necessarily embody human-like emotions in cognition [91].

Moreover, AI researchers—and let us assume that the same is applicable to AGI and TS researchers—"are only just beginning to theorize about how to effectively model the complex phenomena that give human cognition its unique flexibility: uncertainty, contextual sensitivity, rules of thumb, self-reflection, and the flashes of insight that are essential to higher level thought" [89].

As Robert K. Logan and Adriana Braga argued in their essay on the weakness of the AGI hypothesis, there is real danger in devaluing "aspects of human intelligence" as one cannot ignore or consider in a reductionist way "imagination, aesthetics, altruism, creativity, and wisdom" [75].

There is no need here to consider again the discussion about the strong AGI's hypothesis and the dangers AGI's (human-like) misunderstanding could bring with it, already detailed in their paper and/or in other papers from this special issue of *Information* [90–92].

Instead, what is important for this paper is the conclusion of one of the papers from this special issue: even if "it is possible to build a computer system that follows the same laws of thought and shows similar properties as the human mind," "such an AGI will have neither a human body nor human experience, it will not behave exactly like a human, nor will it be 'smarter than a human' on all tasks" [93]. A similar conclusion, related to something other than a fully human-like evolution of AGI, is underlined by other researchers, too [81].

Accepting these observations, let us add the findings of Thomas W. Meeks and Dilip V. Jeste in the neurobiology of wisdom when dealing with uncertainty: "prosocial attitudes/behaviors, social decision making/pragmatic knowledge of life, emotional homeostasis, reflection/self-understanding, value relativism/tolerance" [94].

The AGI strong hypothesis is not just "very complicated," as noted by one of the reviewers of this paper, but complex. That is, complexity cannot be reduced to complicatedness.

However, the next observation of the reviewer can be fully accepted: "The author may want to revise the conclusion 'AGI is impossible' to 'The possibility of AGI cannot be established by the arguments provided via TS and AC'."

Indeed, we do not have, for now, enough evidence to decide if (human-like) AGI is possible or impossible, nor enough arguments to sustain the truth of the claim that AC, through AGI, will necessarily lead to TS.

This is why most of the current perspectives on AGI and TS seem to be unprepared to really deal with their complexities, and this is "why they are facing so many difficulties, uncertainties and so much haziness in the full and appropriate understanding of the TS" [88].

#### **6. Conclusions**

a. The appropriate study of AC, AI/AGI, and/or TS requires the complexity of networked minds in order to manage the complexity brake and avoid the complexity fallacy and different forms of wrong patternicity.

The argument for such a claim is, somehow unexpectedly, offered by the very functioning of the social networks specialized and focused on research.

CrowdForge [95], EteRNA [96], and other experiments [97], for example, proved the power of networked minds given a research task, *when dealing with missing data*, to obtain, each time, "impossible" correct results where the most powerful computers and software were not able to reach any correct result.

EteRNA players, for example, were "extremely good at designing RNA's." Their results were most surprising "because the top algorithms published by scientists are not nearly so good. The gap is pretty dramatic." This chasm was attributed to the fact that "humans are much better than machines at thinking through the scientific process and making intelligent decisions based on past results." This conclusion is of the greatest importance for AGI and/or TS studies as "a computer is completely flummoxed by not knowing the rules"; when human "players are unfazed: they look at the data, they make their designs, and they do phenomenally" [96].

What could explain the clear gap between human networked minds and computers' results?

I think the answer is this: our intelligent artifacts are built on linear and predictable reasoning, and not on complex, nonlinear, partially predictable, and unpredictable reasoning.

"Linear and predictable" in the above claim means without "imagination, aesthetics, altruism, creativity, and wisdom" [75]. Our intelligent artifacts execute sets of logical steps—algorithms. They cannot imagine, create, feel, or be wise. Everything they do is measurable and predictable.

Human reasoning is so complex that it cannot be reduced to a single logical rule/type of reasoning, or to any set of logical rules/types of reasoning covering all possibilities. Human reasoning is more than "complicated": it is complex and thus *irreducible* to a machine-like model. In most cases of human reasoning there is no unique logical rule that must be followed to obtain a result, if only because, from a false claim/sentence/proposition, we can always derive both truth and falsehood. Human logical "machines" have holes in their functioning. There is some predictability in human reasoning, but it remains *not fully predictable*.
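The logical point here—that a false premise licenses any conclusion at all—is the classical principle of explosion (*ex falso quodlibet*). A minimal sketch of that principle, formalized in Lean (the theorem name `explosion` is illustrative, not from the cited sources):

```lean
-- Ex falso quodlibet: from a false premise, any proposition follows,
-- so both P and its negation become derivable at once.
theorem explosion (P : Prop) (h : False) : P ∧ ¬P :=
  ⟨h.elim, h.elim⟩
```

In a formal system, then, a single contradiction trivializes every inference; human reasoners, by contrast, routinely work around inconsistent premises without concluding everything.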

Yet, when it comes to the future, and especially to TS, considered as a "rupture in the fabric of human history" [19], we cannot have enough information to predict it: we do not know what, why, and how exactly TS will be or, even more to the point, *if* TS will be at all. It is, in that sense, unpredictable.

For example, even the merging of humans with machines is complicated, as there are many meanings, types, and grades of merging [98].

So, any number of networked computers will retain this weakness: they cannot find a result if some data is missing, whereas socially networked minds can, as in the examples above.

We have to keep in mind "the power of the human mind to collectively surpass the power of computation of our 'smartest' machines just because the machine (=AI/AGI), being created using a linear reasoning, cannot deal with the complexity" [88].

b. TS would require redesigning AGI on the basis of complexity, and we are not sure this is possible.

Reaching TS (through AGI) seems not to be possible without reaching real complexity (not mere complicatedness!) in the design of our "intelligent" artifacts. Redesigning hardware–software systems on the basis of nonlinearity and unpredictability is not yet possible without fully understanding the complexity of our human, non-machine minds. Maybe it will never be possible. Until then, TS is more likely a creation of our best expectations, an example of pareidolia, based on reductionism, subjective extrapolation, and imagination.

So, let us think (digitally) wisely and wait for the surprises the future(s) is/are already preparing for us!

**Funding:** This research received no external funding.

**Acknowledgments:** The author expresses his deep gratitude to the reviewers for their comments, to the editors of this special issue, to his friend Dr. Iulian Rusu, the Editor in Chief of the *European Journal of Science and Theology*, to the assistant editor assigned to this paper submitted to *Information*, to the copy-editors, proofreaders, and typesetting specialists of MDPI, and to his wife, MA Gabriela Guliciuc. The author kindly asks that no reference to the ideas deployed in this paper be used in any way without his permission until the publication of this paper in *Information* or another journal.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
