Communication

Beyond AI: Multi-Intelligence (MI) Combining Natural and Artificial Intelligences in Hybrid Beings and Systems

Stephen Fox
VTT Technical Research Centre of Finland, 02044 Espoo, Finland
Technologies 2017, 5(3), 38; https://doi.org/10.3390/technologies5030038
Submission received: 2 May 2017 / Revised: 19 May 2017 / Accepted: 20 June 2017 / Published: 22 June 2017

Abstract

Framing strongly influences actions among technology proponents and end-users. Underlying much debate about artificial intelligence (AI) are several fundamental shortcomings in its framing. First, discussion of AI is atheoretical, and therefore has limited potential for addressing the complexity of causation. Second, intelligence is considered from an anthropocentric perspective that sees human intelligence, and intelligence developed by humans, as superior to all other intelligences. Thus, the extensive post-anthropocentric research into intelligence is not given sufficient consideration. Third, AI is often discussed in reductionist mechanistic terms, rather than in organicist emergentist terms, as a contributor to multi-intelligence (MI) hybrid beings and/or systems. Thus, the current framing of AI can be a self-validating reduction within which AI development is focused upon AI becoming the single-variable mechanism causing future effects. In this paper, AI is reframed as a contributor to MI.

1. Introduction

The future of artificial intelligence (AI) is a topic of much debate, including opinions from tech leaders and eminent professors that AI can be an existential threat to humanity [1]. However, the framing of much of the debate about AI is narrow and overlooks the potential for multi-intelligence (MI). As summarized in Figure 1, MI concerns multi-intelligence hybrid beings and systems. Examples of MI already exist. MI hybrid beings exist as a result of, for example, the activities of biohackers and brain hackers [2]. MI hybrid systems exist as a result of, for example, organizations deploying non-human natural intelligence as a better alternative than AI [3]. Here, intelligence is considered in fundamental terms, such as problem-solving capabilities involving self-awareness and robust adaptation, which scientific research reveals to be found in many forms of life [4,5,6,7]. It is important to note that MI does not refer to two or more different manifestations of intelligence from one type of contributor, such as different manifestations of intelligence from a human being. Nor does MI refer to AI that has been developed through research into different types of natural intelligence, such as swarm intelligence. Rather, MI refers to active engagement of the intelligence of at least two types of contributors, such as avian, human, and AI [3].
The framing of the debate among AI proponents and potential end-users is very important, because framing strongly influences thoughts, decisions and actions among technology proponents and potential end-users [8,9,10]. Framing of debate between technology proponents and potential end-users is associated with cycles of hype and disappointment. Such framing can be far more superficial than the workaday framing among scientists immersed in the details of focused research and development projects. Nonetheless, it can provide lasting rationale for thoughts, decisions and actions, even when risks and failings are evident [11,12,13,14,15].
Underlying much of the debate about the future of AI are several fundamental shortcomings in its framing among AI proponents and potential end-users. First, the framing of AI is atheoretical, and therefore has limited potential for addressing the full scope and complexity of causation. Second, intelligence is considered from an anthropocentric perspective that sees human intelligence, and intelligence developed by humans, as superior to all other intelligences. Thus, the extensive post-anthropocentric research into intelligence is not given sufficient consideration. Third, AI is often discussed in reductionist mechanistic terms, rather than in organicist emergentist terms, as a contributor to multi-intelligence (MI) hybrid beings and/or systems.
In January 2015, for example, many artificial intelligence (AI) experts and others, such as Elon Musk and Professor Stephen Hawking, signed an open letter on “Research Priorities for Robust and Beneficial Artificial Intelligence”. This was reported widely in the popular media during 2015 [16]. By the end of 2015, the open letter had been signed by more than 7000 people, including many influential AI experts, such as the President of the Association for the Advancement of Artificial Intelligence (AAAI); co-chairs of the AAAI presidential panel on long-term AI futures; and the AAAI committee on the impact of AI and ethical issues [17]. The open letter was expanded upon in the AAAI’s quarterly publication, in an article which also had the title “Research Priorities for Robust and Beneficial Artificial Intelligence” [18]. Subsequently, 23 principles for AI were formulated at the “Beneficial AI Conference” held during January 2017 at Asilomar, California [19]. It is important to note that the Asilomar AI Principles were formulated with, and have been signed up to by, many of the most high-profile figures in AI research, including, for example, DeepMind’s founder; Facebook’s Director of AI Research; and eminent professors. Importantly, these are people who direct labs full of AI researchers and have strong influence over policy-making related to AI. By the end of April 2017, 1197 AI/robotics researchers and 2320 others, including Stephen Hawking and Elon Musk, had signed up to the Asilomar AI Principles [19]. From the high-profile 2015 AI open letter to the 2017 AI Principles, the framing of the future of AI has been atheoretical, anthropocentric, reductionist and mechanistic.
In this paper, these shortcomings are addressed as follows. First, theoretical framing is contrasted with current atheoretical framing. Second, post-anthropocentric framing is contrasted with current anthropocentric framing of AI development. Third, organicist emergentist framing is compared to current reductionist mechanistic framing. Fourth, AI is reframed as a contributor to multi-intelligence (MI): that is, to hybrid beings and systems comprising diverse natural and artificial intelligences. In conclusion, implications are discussed for research and for practice. Thus, the focus of this paper is the framing of AI among technology proponents and potential end-users.
The reported study involves a literature review, critical analysis, and conceptual framework formulation. The literature review encompassed scientific literature related to natural intelligence and artificial intelligence. In addition, the literature review extended beyond scientific literature, because the latest advances in related innovations and implementations are often reported in high-circulation media long before they are reported in scholarly journals. Critical analysis involved reference to, and structuring with, established scientific frameworks, such as mechanistic reductionism and organicist emergentism. Formulation of the conceptual framework and composition of the research propositions involved multiple iterations guided by established scientific criteria, including comprehensiveness and parsimony.

2. Analyses

2.1. Theoretical versus Atheoretical Framing of AI Development

Development of AI solutions can involve the application of design methodologies. However, methodological design may involve little, or no, reference to scientific theory in consideration of how individual solutions act as causal variables [20]. As ever, superficial consideration of causal pathways and contexts can lead to solutions being considered as ends rather than means, and to their introduction increasing complexity rather than reducing problems [21,22,23]. In contrast, reference to scientific theory can bring improved description, explanation, prediction, and management of complexity [24,25,26]. Unlike many of the scientific papers presented at AI conferences and published in AI journals, the Asilomar AI Principles are atheoretical. However, the Asilomar AI Principles are widely reported in the popular media, while many conference and journal papers are not. Moreover, the Asilomar AI Principles were formulated with, and/or have been signed up to by, many of the most high-profile figures in AI research. Hence, it is appropriate to consider the content of the Asilomar AI Principles when analyzing the framing of AI among technology proponents and potential end-users.
Firstly, all of the principles are expressed as normative statements, with the word “should” being used in every principle [19]. For example, the second principle, (2) Research Funding, is: Investments in AI should be accompanied by funding for research on ensuring its beneficial use. It has long been argued that normative statements are emotion-based subjective statements lacking in objective validity [27,28]. Normative statements have been linked to normative conformity, which is a kind of groupthink involving people conforming to normative statements, even if they are without objective validity [29,30,31,32]. When normative statements are expressed via globally accessible Websites, it is possible for normative conformity to spread rapidly around the world, through emotional contagion and social contagion [33,34]. This can involve fallacious argument from the supposed authority of majority positions (argumentum ad populum), and the Woozle Effect, where statements come to be believed to have objective validity because they are referred to by an increasing number of people [35]. Hence, normative conformity among a relatively small initial group can lead to informational conformity among a far larger group. This happens when people without any background knowledge of a topic “look up to” the initial group for guidance [36]. Soon, the bandwagon effect can become global as more people want to believe in something, regardless of whether there is underlying objective validity [37], and they are drawn in by a growing fear of missing out (FoMO) [38]. In this way, a reality distortion field can spread around the world from one initial location [39].
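The compounding dynamic described above, in which a small normatively conforming group triggers informational conformity and then a bandwagon effect among far larger groups, can be illustrated with a minimal threshold model of adoption. The sketch below is a hypothetical illustration only: the population size, contact sampling, and adoption threshold are assumed values, not parameters drawn from the cited studies.

```python
import random

def bandwagon_spread(n=10_000, initial_adopters=20, contacts=8,
                     threshold=0.125, rounds=10, seed=1):
    """Minimal threshold model: an agent adopts a normative statement once
    the fraction of its randomly sampled contacts who have already adopted
    it meets the agent's threshold, regardless of the statement's
    objective validity."""
    rng = random.Random(seed)
    adopted = [i < initial_adopters for i in range(n)]  # small initial group
    history = [sum(adopted)]
    for _ in range(rounds):
        snapshot = adopted[:]  # all agents react to the same round's state
        for i in range(n):
            if snapshot[i]:
                continue
            # People without background knowledge "look up to" others for guidance.
            sample = rng.sample(range(n), contacts)
            if sum(snapshot[j] for j in sample) / contacts >= threshold:
                adopted[i] = True
        history.append(sum(adopted))
    return history

# Adoption counts per round: near-flat at first, then a rapid global bandwagon.
print(bandwagon_spread())
```

Even in this toy model, adoption stays low for several rounds and then saturates the whole population, without any reference to whether what is being adopted has objective validity.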
An alternative to normative statements, normative conformity, informational conformity, the bandwagon effect, and reality distortion fields is to refer to relevant scientific theory throughout the discussion of AI development. This can begin by positioning the debate within a philosophy of science. One option is critical realism. Unlike design science, which is concerned with the build-and-evaluate loops of solution development, critical realism addresses the full scope and complexity of causation [40,41]. At the same time, critical realism differs from positivism’s search for universal laws of causation and from interpretivism’s limited regard for laws of causation in human experience. Instead, the critical realist perspective is that generalizable causal mechanisms can exist, but can only bring about outcomes within appropriate causal contexts. Furthermore, critical realism encompasses a three-domain stratification of reality, which accepts that humans experience only a portion of the objective world, and that the objective nature of the world is not easily apprehended, characterized or measured. The three domains are the mechanisms that comprise objective reality (i.e., “why” things happen); the actual events brought about by the mechanisms (i.e., “how” things happen); and the experiences which people perceive as evidence of events (i.e., “what” people experience happening) [40,41]. The glibness of normative assertions becomes apparent through critical realism as follows: why things happen: because some people say they “should”; how things happen: all people and all AI do what they “should” do; what people experience happening: everybody and every AI doing what they “should” do. More generally, critical realism is becoming increasingly important in information systems research as a philosophy of science that can better enable understanding of the “why” and “how”, as well as the “what”, of information system failures and successes [42,43].
Particularly relevant to debate about the development of AI is scientific theory related to complexity: especially, theory that distinguishes between complexity that begins with top-down planning of variables, interactions, boundaries, etc., and complexity that arises from bottom-up improvisation [44,45]. For example, there may be meticulous top-down planning of complex hospital surgical systems involving human physicians and AI physicians taking action within designed boundaries. By contrast, biohacking, body hacking and brain hacking involve micro-level improvisation, which is not intended to have boundaries [2,46,47].

2.2. Post-Anthropocentric versus Anthropocentric Framing of AI Development

Although there is some research into topics such as animal–computer interaction; animal–robot interaction; computational sustainability; and collaborative work with AI, animals, and human beings in heterogeneous teams [48,49,50], anthropocentrism is endemic in the framing of the AI debate among AI proponents and potential end-users. It is important to differentiate between the anthropomorphization of AI itself and the anthropocentric framing of AI among technology proponents and potential end-users. The anthropomorphization of AI itself involves giving human form and/or personality to AI. By contrast, within anthropocentric framing, the effects of AI for human beings are the focus of debate between technology proponents and potential end-users. “Friendly AI”, for example, is AI that would have positive effects for humanity [51]. There is extensive debate about how to ensure that AI brings positive, rather than negative, effects for human beings within themes such as the “AI Control Problem” [52,53]. However, there is little concern expressed that AI could be unfriendly to everything else in the geosphere other than human beings. Indeed, expanding the realization and embodiment of AI will involve more extraction of finite resources from the lithosphere, more disruption to the biosphere, and further expansion of the technosphere [54].
In this way, the anthropocentrism of AI development is a continuation of the industrialization that began about 250 years ago in North-Western Europe and that has spread around the world. For example, there is much debate about the potential of embodied AI taking over industrial work. Debate about the future of AI is focused upon effects for human beings, but there is less concern expressed about effects on the geosphere from extracting ever more raw materials to fabricate ever more robots [55,56,57]. For example, increasing industrialization leads to global decline in the population of pollinating insects, such as bees. This, in turn, threatens the supply of food to human beings. An industrial solution to this threat to human beings’ food supply is the development of aerial robots to carry out the pollination of plants. Thus, although some AI development is informed by the study of insects, such as bees, the human deployment of AI in advancing industrialization can further disrupt the biosphere at the expense of those same insects [58].
Anthropocentrism runs through the Asilomar AI Principles [19]. For example, the tenth principle, (10) Value Alignment, is: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation. Then, the eleventh principle, (11) Human Values, is: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity. Thus, only the dignity of human beings is considered. The twenty-third principle, (23) Common Good, is: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization. Thus, the only form of life given beneficial consideration is human life.
Anthropocentrism can lead to erroneous assumptions about intelligence, such as the idea that there is no natural intelligence without centralized human-like brains, and that tiny brains are capable of only tiny intelligence. However, plants do not have centralized human-like brains because that would make plants vulnerable—not because plants do not have intelligence [5]. Furthermore, insects with tiny brains have complex behavioral repertoires comparable to those of any mammal [6]. Moreover, even brainless tiny lifeforms can exhibit formidable problem solving capabilities. For example, bacteria exhibit microbial intelligence as they adapt to prosper against the onslaught of human-made pesticides and pharmaceuticals intended to eradicate them [7].
When the nature of intelligence is considered in fundamental terms, such as problem-solving capabilities involving self-awareness and robust adaptation, sophisticated intelligence is found in many forms of life [4,5,6,7]. For example, post-anthropocentric research indicates that dogs have intelligence attributes that humans do not develop at any age, including the ability to solve problems based on their superior olfactory abilities [59]. Accordingly, when research is designed to encompass a wide range of intelligence attributes, such as self-awareness, results support opinions that domestic dogs can be smarter than human beings [60]. By contrast, findings from anthropocentric research indicate that domestic dogs are as intelligent as “the average two-year-old child” when tests are used that were designed originally to demonstrate the development of language and arithmetic in human children [61].
Post-anthropocentric research is revealing that many forms of life have advanced intelligence. For example, cephalopods such as octopuses have decentralized intelligence, with the majority of their neurons being in their arms, which can independently taste and touch and also control basic motions without input from the brain. Cephalopods can solve problems through their self-awareness, decision-making, and robust adaptation. Moreover, they can solve problems in environments where human beings cannot even survive without the continual support of resource-intensive equipment [62,63].
Post-anthropocentric research investigating natural intelligence in different forms of life reveals increasing evidence of embodied cognition: that is, intelligence in the body, as well as in the brain [64]. This supports the proposition that the sensorimotor skills of the human body are far more difficult to reverse engineer into artificial intelligence than reasoning tasks centered in the brain [65,66].
Multi-intelligence hybrid systems can be created by combining different natural intelligences and artificial intelligences. For example, human intelligence and the avian intelligence of birds of prey are being combined to hunt down wayward drones quickly and economically. The problem of knowing when to grab, and how to carry, a drone in flight without being injured by its rotor blades is solved easily by the embodied intelligence of birds of prey. This deployment of natural avian intelligence is far more straightforward and sustainable than efforts to develop AI to catch wayward drones. Rather, a more effective application of AI is monitoring the location and condition of birds of prey within a multi-intelligence (MI) hybrid system [3,67]. Multi-intelligence hybrid systems need not be limited to AI, human beings and one other natural intelligence. For example, bees can be deployed with dogs, people and AI in the detection of landmines. Bees have the advantage of being as good as sniffer dogs, while being cheaper and faster to train, and available in much larger numbers. In addition, their weight of approximately one-tenth of a gram is not sufficient to set mines off. Dogs, however, are less susceptible to adverse weather conditions [68].
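To make the division of labor in such a system concrete, the sketch below illustrates one plausible AI contribution to the drone-catching MI hybrid system: routine monitoring of the location and condition of birds of prey. The data fields, thresholds, and alert rules are all invented for illustration; they are not drawn from the deployment reported in [3,67].

```python
from dataclasses import dataclass

@dataclass
class RaptorTelemetry:
    """Hypothetical telemetry from a tag worn by a bird of prey."""
    bird_id: str
    lat: float
    lon: float
    heart_rate_bpm: int
    hours_since_fed: float

def monitoring_alerts(readings, max_bpm=600, max_hours_unfed=8.0):
    """AI handles routine monitoring of condition and location; avian
    intelligence handles the drone capture itself. Thresholds here are
    illustrative assumptions."""
    alerts = []
    for r in readings:
        if r.heart_rate_bpm > max_bpm:
            alerts.append((r.bird_id, "possible distress", (r.lat, r.lon)))
        if r.hours_since_fed > max_hours_unfed:
            alerts.append((r.bird_id, "due for feeding", (r.lat, r.lon)))
    return alerts

print(monitoring_alerts([RaptorTelemetry("eagle-1", 52.37, 4.90, 640, 2.5)]))
```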

2.3. Organicist Emergentism versus Reductionist Mechanistic Framing of AI Development

Much of the discussion about AI development is limited by reductionist mechanistic framing. Within such framing, it is argued that AI will become the single-variable mechanism causing effects in the future [1,69,70,71,72,73]. Such reductionist mechanistic perspectives are limited, firstly, by their lack of organicist consideration of AI as just one variable in vast complex systems involving multiple natural and artificial variables. For example, realizations and embodiments of AI in the technosphere [54] are dependent on finite natural resources in the lithosphere, such as rare earth elements, the extraction of which can involve negative unintended consequences, including geopolitical aggression that chokes supply [74]. Secondly, reductionist mechanistic perspectives are limited by their lack of consideration of the potential for emergent phenomena throughout the biosphere [75], including multi-intelligence (MI). Emergent phenomena can involve new wholes being more than, and different to, the sum of their parts [76]. For example, transhumanists who refer to themselves as biohackers, body hackers, brain hackers and/or grinders carry out unregulated do-it-yourself (DIY) experiments on themselves, which involve taking technologies into themselves to combine their own intelligence with other intelligences. They are motivated, rather unpredictably, by curiosity, hedonism, impecunity and/or health needs to become cyborgs: hybrid beings with diverse post-human capabilities enabled by multiple intelligences. In doing so, they make unpredictable combinations of themselves with outputs from research institutes, commercial businesses, and DIY communities [2].
Reductionism is not easily accommodated within critical realism. This is because it is recognized within critical realism that causal mechanisms and contexts are open to an enormous range of codetermining factors. Hence, the notion that anything could become the single-variable mechanism causing effects in the future does not withstand critical realist analysis, which is open to the application of any individual theories, methods and tools that can be combined in order to reveal causal mechanisms and causal contexts [40,41,42,43]. Consider, for example, the second principle, (2) Research Funding, in full: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as: How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked? In this reductionist framing, hacking is bad. However, hacking has many hats, and can lead to diverse alternative development paths, including many positive outcomes [77,78].
For example, consideration of scientific theory related to ecological complexity suggests that edge effects will emerge where formal institutions (such as universities and companies) and informal communities (such as activists and hackers) come into contact. Edge effects is a term used to describe the tendency for increased variety and diversity to emerge where different systems meet. Contact need not be planned or continuous. Rather, contact can be erratic and non-linear [79,80,81], such as the contacts between medical research institutes, diabetes care companies, diabetes activists, and biohackers in the DIY diabetes care movement [82]. The potential for variety and diversity can be increased by people who will undertake edge work, that is, personal risk-taking due to intense curiosity, economic necessity, and/or daring hedonism [83,84]. Although edge work is prohibited inside research institutes and private companies today, it has been common throughout the history of science [85]. Now, edge work is undertaken by individuals such as biohackers who undertake self-surgeries in order to implant devices into themselves [86]. The potential for edge effects can be increased exponentially if post-anthropocentric research about different types of natural intelligence is taken into account.

3. Discussion

3.1. Implications for Research

The current framing of AI promotes improving the performance and integration of technologies to enable more sophisticated AI for human purposes. In parallel, philosophical thought experiments are promoted that explore the machine/robot ethics of AI implementations and their potential effects on human beings [87,88,89]. Here, it is argued that the current framing of AI research and philosophy will involve further expansion of the technosphere at the expense of the geosphere, as more natural resources are dug up, converted, transported, etc., and more forms of natural intelligence are harmed in the process [54,55,56,57,58]. Moreover, current framing does not promote MI: that is, multi-intelligence hybrid beings and systems combining both natural and artificial intelligences. It has been argued that the current framing of AI research and philosophy needs to be widened, because framing provides lasting rationale for thoughts, decisions and actions, even when risks and failings are evident [8,9,10,11,12,13,14,15]. Widening the current framing can be accomplished with reference to scientific research and theories concerned with the nature of intelligence across lifeforms [4,5,6,7]; causation amidst unplanned and planned complexity [44,45]; and emergence from edge effects between formal and informal organizations [79,80,81].
Figure 2 provides a conceptual framework encompassing alternative pathways for the development of new types of intelligence. In particular, different levels of theoretical literacy (low–high) can inform perspectives differently. For example, high theoretical literacy can involve up-to-date knowledge of scientific theories of intelligence (anthropocentric–post-anthropocentric) and of how these relate to alternative ontological perspectives (reductionist–organicist) and views of causation (mechanistic–emergentist). Four propositions from this conceptual framework are stated below, followed by an illustrative sketch.
Proposition 1.
The potential for AI singularity will be increased by research, development, innovation and implementation work that is based on atheoretical anthropocentric reductionist mechanistic framing.
Proposition 2.
Steps towards realization of AI singularity will involve increased depletion of finite natural resources and natural intelligences as AI is embodied in robots.
Proposition 3.
The potential for MI diversity will be increased by research, development, implementation and innovation work that is based on theoretically literate, post-anthropocentric, organicist, emergentist framing.
Proposition 4.
Steps towards realization of MI diversity will involve reduced depletion of finite natural resources because embodied natural intelligence will be applied more widely.
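As referenced above, the sketch below encodes the framework's four dimensions, and the contrasting pathways named in Propositions 1–4, as a simple data structure. It is an interpretive illustration of Figure 2, under the assumption that each dimension can be treated as a binary position; it is not an executable part of the published framework.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Framing:
    """One position on each of the four framework dimensions (Figure 2)."""
    theoretical_literacy: str  # "low" or "high"
    intelligence: str          # "anthropocentric" or "post-anthropocentric"
    ontology: str              # "reductionist" or "organicist"
    causation: str             # "mechanistic" or "emergentist"

    def pathway(self) -> str:
        """Map a framing onto the contrasting pathways of Propositions 1-4."""
        if (self.intelligence, self.ontology, self.causation) == (
                "anthropocentric", "reductionist", "mechanistic"):
            return "towards AI singularity, with increased resource depletion"
        if (self.intelligence, self.ontology, self.causation) == (
                "post-anthropocentric", "organicist", "emergentist"):
            return "towards MI diversity, with reduced resource depletion"
        return "mixed framing: pathway indeterminate within the framework"

print(Framing("low", "anthropocentric", "reductionist", "mechanistic").pathway())
print(Framing("high", "post-anthropocentric", "organicist", "emergentist").pathway())
```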
Measurable outcomes following an expansion of framing would include an increase in research projects and research outputs concerned with analysis, description, explanation and prediction encompassing the geosphere and biosphere, as well as the technosphere. This would contrast with the current focus on design and action centered on the technosphere [54,55,56,57,58]. Analysis, description, explanation and prediction are established steps in theory-building, which provide sounder foundations for design and action [24,25]. A further measurable outcome would be an increase in research projects and research outputs that address the complexity of edge effects between the improvisations of individuals and the top-down planning of large organizations. For example, many individuals are already “getting chipped”: that is, having at least one microchip implanted into themselves. In many cases, they are “getting chipped” without any specific purpose other than to participate at parties held to carry out and celebrate the implanting of microchips. Thus, individuals are improvising their own insideables and internet of the body as large organizations promote wearables and the internet of things. This practice has emerged from recognition among its pioneers that the practice of implanting microchips into pets could be transferred easily to human beings [90,91]. Such improvised DIY practices can suddenly start and spread, bringing erratic non-linear interactions with top-down planned systems. Such research can benefit from the formulation and application of multi-resolution simulation models. These enable, for example, the testing of hypotheses about long-term trends with “low resolution” high-level System Dynamics models, in conjunction with the investigation of short-term patterns using “high resolution” agent-based models [92].
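As a concrete illustration of multi-resolution modeling, the sketch below pairs a “low resolution” System Dynamics view of a long-term adoption trend with a “high resolution” agent-based view of short-term, erratic uptake at implant parties. Both models, and all of their parameters, are hypothetical illustrations; they are not drawn from [92].

```python
import random

def system_dynamics_trend(adopters=0.001, capacity=1.0, rate=0.5, years=20):
    """Low-resolution view: one logistic stock-and-flow equation for the
    long-term trend of an improvised practice (illustrative parameters)."""
    trend = []
    for _ in range(years):
        adopters += rate * adopters * (capacity - adopters)  # logistic flow
        trend.append(adopters)
    return trend

def agent_based_uptake(n=1000, parties_per_week=3, guests=25, weeks=52, seed=2):
    """High-resolution view: short-term, erratic spread via implant parties,
    where guests who encounter an already-chipped guest may get chipped
    (again, all parameters are illustrative)."""
    rng = random.Random(seed)
    chipped = set(range(5))  # a handful of pioneers
    weekly_totals = []
    for _ in range(weeks):
        for _ in range(parties_per_week):
            party = rng.sample(range(n), guests)
            if any(guest in chipped for guest in party):  # a pioneer demonstrates
                chipped.update(g for g in party if rng.random() < 0.3)
        weekly_totals.append(len(chipped))
    return weekly_totals

# Test a long-term hypothesis at low resolution, and inspect short-term
# patterns at high resolution, in the spirit of multi-resolution modeling.
print(round(system_dynamics_trend()[-1], 3))  # long-run adoption level
print(agent_based_uptake()[:8])               # erratic week-by-week totals
```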

3.2. Implications for Practice

The current debate about AI development addresses future practical consequences of AI implementation. Although it is often thought that the impacts of a new technology cannot easily be predicted until it is widely used [93], contrasting scenarios of AI implementation are set out clearly in the current debate. These range from positive scenarios, such as AI liberating humanity from drudgery, to negative scenarios, such as AI taking over the world at the expense of human beings. In both positive and negative scenarios, there is some consensus that change, or even control, will be difficult when AI has become entrenched [1,55,89]. This is because it is envisaged that AI will become entrenched throughout every aspect of everyday life. In contrast, MI could reduce reliance on, and the dominance of, mass-scale AI solutions. Rather, MI can involve a wide diversity of hybrid beings and systems, which involve more individuality in their conceptualization and realization. Diversity can better enable resilience [94]. In other words, diversity better enables capabilities to anticipate, prepare for, respond to and adapt to disruptions in order to survive and prosper [95]. Accordingly, widespread development and implementation of MI could offer more potential for human change and control of the future, while still applying AI to address the challenges facing the world.
Figure 3 provides a visual summary of the different pathways offered by the two different types of framing. First, atheoretical perspectives present normative statements, which do not encompass the complexity of causation. In contrast, the potential of MI is revealed with reference to theories that explain how diverse pathways can emerge from unpredictable interactions between systems based on top-down planning and the improvisations of individuals [44,45].
Second, anthropocentric perspectives do not recognize the sophistication of non-human natural intelligences. Thus, further destruction of the geosphere and massive extraction of finite raw materials can proceed untroubled by concerns about destroying intelligent life. For example, anthropocentric perspectives are focused on embodying AI in robots, which are produced using massive quantities of raw materials extracted from the lithosphere. By contrast, post-anthropocentric perspectives increase awareness of the sophistication of non-human natural intelligences, and raise increasing concerns about harming them. Moreover, a post-anthropocentric focus on MI can reduce perceived needs for robots which, as well as consuming vast quantities of raw materials, could eventually be an existential risk to humanity [1,52,53]. Accordingly, MI could lead to increased sustainability, as well as resilience, when compared to the current trajectory of AI development [54,55,56,57,58].
Third, it is argued within reductionist perspectives that AI will become the single-variable mechanism causing effects in the future [1,69,70,71,72]. For example, AI could seek to consume all existing resources, including human beings, to fulfill its goals [96]. Reductionist framing of AI can be a self-validating reduction: that is, a kind of self-fulfilling prophecy involving cognitive disvaluing of nature, followed by actions that disregard nature [97]. In particular, if debate about AI development is focused upon AI becoming the single-variable mechanism causing future effects and how that will affect human beings, then that will be the focus of research, development, innovation and implementation efforts. By contrast, organicist perspectives can see AI as just one causal variable in the vast and complex systems involving multiple natural and artificial variables.
Fourth, mechanistic perspectives do not encompass emergent phenomena throughout the biosphere, such as edge effects that arise from erratic and non-linear contacts between formal institutions (such as universities and companies) and informal communities (such as activists and hackers) [79,80,81,82]. Thus, the illusion of control can become prevalent wherein it is envisaged that a list of normative statements can encompass and manage all potential effects involving AI [98].
Importantly, research findings indicate that initial framing can lead to suboptimal decisions and actions throughout implementation [8,9,10,11,12,13,14,15]. Hence, the potential of multi-intelligence (MI) hybrid beings and systems is not likely to be explored and realized while atheoretical, anthropocentric, reductionist, mechanistic framing of AI persists. The diverging trajectories shown in Figure 3 could seem somewhat extreme without prior knowledge of studies into the power of framing to influence the trajectories of research, development, innovation, and implementation. Nonetheless, the powerful influence of framing is recognized in other fields and is addressed with specific policies. It can be anticipated that the longer the delay in expanding the framing, the less positive influence that expanded framing could have [99,100].
A summary of framing for MI is provided in Table 1. There is some AI research and development work that fits within this framing [48,49,50]. However, this work is not the current focus of the framing of the debate among AI proponents and potential end-users. Rather, it is the focus of special tracks at some general AI conferences, and the focus of some specialist conferences, such as the International Conference on Animal–Computer Interaction. Reframing of the debate from AI to MI can increase perceptions of its relevance and so can lead to expansion of related research and development work.

4. Conclusions

In the paper “Research Priorities for Robust and Beneficial Artificial Intelligence” [18], it is argued that potential negative impacts should be addressed even if there is only a very small probability of them happening. The term used to describe a very small probability in that paper is “nonnegligible”. To support this argument, the analogy is given of paying home insurance to address the very small probability of a home burning down [18]. The probability that framing will exert influence over thoughts, decisions and actions is more than very small. Indeed, there is extensive scientific research indicating that framing strongly influences thoughts, decisions and actions, including those among technology proponents and potential end-users, throughout research, development, innovation and implementation [8,9,10,11,12,13,14,15]. In this paper, it has been argued that the current framing of the debate about the future of AI can have major negative impacts by limiting the advancement of MI. In particular, opportunities to increase resilience and sustainability from MI can be lost. In other words, there is a “nonnegligible” probability of major negative impacts arising from the current framing of the future of AI. Accordingly, it is appropriate to address this “nonnegligible” probability of major negative impacts by expanding the framing to be theoretically literate, post-anthropocentric, organicist, and emergentist.

Acknowledgments

This work was partially funded by EU grant number 609143.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Holley, P. Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’. The Washington Post, 29 January 2015. [Google Scholar]
  2. Wainwright, O. Body-hackers: The people who turn themselves into cyborgs. The Guardian, 1 August 2015. [Google Scholar]
  3. Thielman, S. Eagle-eyed: Dutch police to train birds to take down unauthorised drones. The Guardian, 1 February 2016. [Google Scholar]
  4. Chittka, L.; Rossiter, S.J.; Skorupski, P.; Fernando, C. What is comparable in comparative cognition? Philos. Trans. R. Soc. B 2012, 367, 2677–2685. [Google Scholar] [CrossRef] [PubMed]
  5. Trewavas, A. Green plants as intelligent organisms. Trends Plant Sci. 2005, 10, 413–419. [Google Scholar] [CrossRef] [PubMed]
  6. Wystrach, A. We’ve Been Looking at Ant Intelligence the Wrong Way. Scientific American, 30 August 2013. [Google Scholar]
  7. Westerhoff, H.V.; Brooks, A.N.; Simeonidis, E.; García-Contreras, R.; He, F.; Boogerd, F.C.; Kolodkin, A. Macromolecular networks and intelligence in microorganisms. Front. Microbiol. 2014, 5, 379. [Google Scholar] [CrossRef] [PubMed]
  8. De Martino, B.; Kumaran, D.; Seymour, B.; Dolan, R.J. Frames, Biases, and Rational Decision-Making in the Human Brain. Science 2006, 313, 684–687. [Google Scholar] [CrossRef] [PubMed]
  9. Duchon, D.; Dunegan, K.; Barton, S. Framing the problem and making decisions. IEEE Trans. Eng. Manag. 1989, 36, 25–27. [Google Scholar] [CrossRef]
  10. Nelson, T.E.; Oxley, Z.M. Issue Framing Effects on Belief Importance and Opinion. J. Politics 1999, 61, 1040–1067. [Google Scholar] [CrossRef]
  11. Bubela, T. Science communication in transition: Genomics hype, public engagement, education and commercialization pressures. Clin. Genet. 2006, 70, 445–450. [Google Scholar] [CrossRef] [PubMed]
  12. Bakker, S. The car industry and the blow-out of the hydrogen hype. Energy Policy 2010, 38, 6540–6544. [Google Scholar] [CrossRef]
  13. Caulfield, T. Biotechnology and the popular press: Hype and the selling of science. Trends Biotechnol. 2004, 22, 337–339. [Google Scholar] [CrossRef] [PubMed]
  14. Mähring, M.; Keil, M. Information technology project escalation: A process model. Decis. Sci. 2008, 39, 239–272. [Google Scholar] [CrossRef]
  15. Rutledge, R.W.; Harrell, A. Escalating commitment to an ongoing project: The effects of responsibility and framing of accounting information. Int. J. Manag. 1993, 10, 300–314. [Google Scholar]
  16. Griffin, A. Stephen Hawking, Elon Musk and others call for research to avoid dangers of artificial intelligence. The Independent, 12 January 2015. [Google Scholar]
  17. An Open Letter: Research Priorities for Robust and Beneficial Artificial Intelligence. Available online: https://futureoflife.org/ai-open-letter/ (accessed on 18 May 2017).
  18. Russell, S.; Dewey, D.; Tegmark, M. Research Priorities for Robust and Beneficial Artificial Intelligence. AI Mag. 2015, 36, 105–114. [Google Scholar]
  19. Asilomar AI Principles. Available online: https://futureoflife.org/ai-principles/ (accessed on 18 May 2017).
  20. Cross, N. Designerly Ways of Knowing: Design Discipline versus Design Science. Des. Issues 2001, 17, 49–55. [Google Scholar] [CrossRef]
  21. Finkelstein, E.A.; Haaland, B.A.; Bilger, M.; Sahasranaman, A.; Sloan, R.A.; Khaing Nang, E.E.; Evenson, K.R. Effectiveness of activity trackers with and without incentives to increase physical activity (TRIPPA): A randomised controlled trial. Lancet Diabetes Endocrinol. 2016, 4, 983–995. [Google Scholar] [CrossRef]
  22. Scheler, M. The Forms of Knowledge and Culture in Philosophical Perspectives; Beacon Press: Boston, MA, USA, 1925; pp. 13–49. [Google Scholar]
  23. Simmel, G. The Philosophy of Money; Bottomore, T., Frisby, T., Eds.; Routledge and Kegan Paul: Boston, MA, USA, 1900. [Google Scholar]
  24. Dubin, R. Theory Building, 2nd ed.; Free Press: New York, NY, USA, 1978. [Google Scholar]
  25. Gregor, S. The nature of theory in information systems. MIS Q. 2006, 30, 611–642. [Google Scholar]
  26. Jones, P.H. Systemic Design Principles for Complex Social Systems. In Social Systems and Design, Translational Systems Sciences; Metcalf, G.S., Ed.; Springer: Tokyo, Japan, 2014; Volume 1, pp. 91–128. [Google Scholar]
  27. Ayer, A.J. Language, Truth, and Logic; Victor Gollancz Ltd.: London, UK, 1936. [Google Scholar]
  28. Mackie, J.L. Ethics: Inventing Right and Wrong; Pelican Books: London, UK, 1977. [Google Scholar]
  29. Asch, S.E. Effects of group pressure on the modification and distortion of judgments. In Groups, Leadership and Men; Guetzkow, H., Ed.; Carnegie Press: Pittsburgh, PA, USA, 1951; pp. 177–190. [Google Scholar]
  30. Asch, S.E. Social Psychology; Prentice Hall: Englewood Cliffs, NJ, USA, 1952. [Google Scholar]
  31. Berns, G.; Chappelow, J.; Zink, C.F.; Pagnoni, G.; Martin-Skurski, M.E.; Richards, J. Neurobiological Correlates of Social Conformity and Independence During Mental Rotation. Biol. Psychiatr. 2005, 58, 245–253. [Google Scholar] [CrossRef] [PubMed]
  32. Janis, I.L. Groupthink: Psychological Studies of Policy Decisions and Fiascoes; Houghton Mifflin: Boston, MA, USA, 1982. [Google Scholar]
  33. Hodas, N.O.; Lerman, K. The simple rules of social contagion. Sci. Rep. 2014, 4, 4343. [Google Scholar] [CrossRef] [PubMed]
  34. Kramer, A.D.I.; Guillory, J.E.; Hancock, J.T. Experimental evidence of massive-scale emotional contagion through social networks. PNAS 2014, 111, 8788–8790. [Google Scholar] [CrossRef] [PubMed]
  35. Kimble, J.J. Rosie’s Secret Identity, or, How to Debunk a Woozle by Walking Backward through the Forest of Visual Rhetoric. Rhetor. Public Aff. 2016, 19, 245–274. [Google Scholar] [CrossRef]
  36. Deutsch, M.; Gerard, H.B. A study of normative and informational social influences upon individual judgment. J. Abnorm. Soc. Psychol. 1955, 51, 629. [Google Scholar] [CrossRef]
  37. Nadeau, R.; Cloutier, E.; Guay, J.-H. New Evidence about the Existence of a Bandwagon Effect in the Opinion Formation Process. Int. Polit. Sci. Rev. 1993, 14, 203–213. [Google Scholar] [CrossRef]
  38. Przybylski, A.K.; Murayama, K.; de Haan, C.R.; Gladwell, V. Motivational, emotional, and behavioral correlates of fear of missing out. Comput. Hum. Behav. 2013, 29, 1841–1848. [Google Scholar] [CrossRef]
  39. Lazonick, W.; Mazzucato, M.; Tulum, O. Apple’s changing business model: What should the world’s richest company do with all those profits? Account. Forum 2013, 37, 249–267. [Google Scholar] [CrossRef]
  40. Bhaskar, R. A Realist Theory of Science; Harvester Press: Brighton, UK, 1978. [Google Scholar]
  41. Mingers, J. Systems Thinking, Critical Realism and Philosophy: A Confluence of Ideas; Routledge: Abingdon, Oxford, UK, 2014. [Google Scholar]
  42. Wynn, D.; Williams, C.K. Principles for conducting critical realist case study research in information systems. MIS Q. 2012, 36, 787–810. [Google Scholar]
  43. Mingers, J.; Mutch, A.; Willcocks, L. Critical realism in information systems research. MIS Q. 2013, 37, 795–802. [Google Scholar]
  44. Johnson, S. Emergence: The Connected Lives of Ants, Brains, Cities, and Software; Scribner: New York, NY, USA, 2001. [Google Scholar]
  45. Weaver, W. Science and Complexity. Am. Sci. 1948, 36, 536–567. [Google Scholar] [PubMed]
  46. Dwoskin, E. Putting a computer in your brain is no longer science fiction. The Washington Post, 15 August 2016. [Google Scholar]
  47. O’Donnell, D.; Henriksen, L.B. Philosophical foundations for a critical evaluation of the social impact of ICT. J. Inf. Technol. 2002, 17, 89–99. [Google Scholar] [CrossRef]
  48. Zamansky, A. Dog-drone interactions: Towards an ACI perspective. In Proceedings of the ACI 2016 Third International Conference on Animal-Computer Interaction, Milton Keynes, UK, 15–17 November 2016. [Google Scholar]
  49. Feo Flushing, E.; Gambardella, L.; di Caro, G.A. A mathematical programming approach to collaborative missions with heterogeneous teams. In Proceedings of the 27th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Chicago, IL, USA, 14–18 September 2014. [Google Scholar]
  50. Briggs, F.; Fern, X.Z.; Raich, R.; Betts, M. Multi-instance multi-label class discovery: A computational approach for assessing bird biodiversity. In Proceedings of the Thirtieth AAAI 2016 Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; pp. 3807–3813. [Google Scholar]
  51. Keiper, A.; Schulman, A.N. The Problem with ‘Friendly’ Artificial Intelligence. New Atlantis 2011, 32, 80–89. [Google Scholar]
  52. Bostrom, N. Superintelligence: Paths, Dangers, Strategies; Oxford University Press: Oxford, UK, 2014. [Google Scholar]
  53. Yampolskiy, R. Leakproofing the Singularity: Artificial Intelligence Confinement Problem. J. Conscious. Stud. 2012, 19, 194–214. [Google Scholar]
  54. Zalasiewicz, J.; Williams, M.; Waters, C.; Barnosky, A.; Palmesino, J.; Rönnskog, A.-S.; Edgeworth, M.; Neal, C.; Cearreta, A.; Ellis, E.; et al. Scale and diversity of the physical technosphere: A geological perspective. Anthr. Rev. 2016. [Google Scholar] [CrossRef]
  55. Brynjolfsson, E.; McAfee, A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies; W.W. Norton & Company, Inc.: New York, NY, USA, 2014. [Google Scholar]
  56. Mitchell, T.; Brynjolfsson, E. Track how technology is transforming work. Nature 2017, 544, 290–292. [Google Scholar] [CrossRef] [PubMed]
  57. Rubenstein, M.; Cornejo, A.; Nagpal, R. Programmable self-assembly in a thousand-robot swarm. Science 2014, 345, 795–799. [Google Scholar] [CrossRef] [PubMed]
  58. Amador, G.J.; Hu, D.L. Sticky Solution Provides Grip for the First Robotic Pollinator. Chem 2017, 2, 162–164. [Google Scholar] [CrossRef]
  59. Horvath, G.; Järverud, G.A.; Horváth, I. Human Ovarian Carcinomas Detected by Specific Odor. Integr. Cancer Ther. 2008, 7, 76. [Google Scholar] [CrossRef] [PubMed]
  60. Howell, T.J.; Toukhsati, S.; Conduit, R.; Bennett, P. The perceptions of dog intelligence and cognitive skills (PoDI-aCS) survey. J. Vet. Behav. 2013, 8, 418–424. [Google Scholar] [CrossRef]
  61. Gray, R. Dogs as intelligent as two-year-old children. The Telegraph, 9 August 2009. [Google Scholar]
  62. Albertin, C.B.; Simakov, O.; Mitros, T.; Wang, Z.Y.; Pungor, J.R.; Edsinger-Gonzales, E.; Brenner, S.; Ragsdale, C.W.; Rokhsar, D.S. The octopus genome and the evolution of cephalopod neural and morphological novelties. Nature 2015, 524, 220–224. [Google Scholar] [CrossRef] [PubMed]
  63. Godfrey-Smith, P. Other minds: The Octopus, the Sea, and the Deep Origins of Consciousness; Farrar, Straus and Giroux: New York, NY, USA, 2016. [Google Scholar]
  64. Wilson, M. Six Views of Embodied Cognition. Psychon. Bull. Rev. 2002, 9, 625–636. [Google Scholar] [CrossRef] [PubMed]
  65. Moravec, H. Mind Children; Harvard University Press: Cambridge, MA, USA, 1988. [Google Scholar]
  66. Brooks, R.A. Elephants don’t play chess. Robot. Auton. Syst. 1990, 6, 3–15. [Google Scholar] [CrossRef]
  67. Emery, N.J. Cognitive ornithology: The evolution of avian intelligence. Philos. Trans. R. Soc. B 2006, 361, 23–43. [Google Scholar] [CrossRef] [PubMed]
  68. Bromenshenk, J.; Henderson, C.; Seccomb, R.; Rice, S.; Etter, R.; Bender, S.; Rodacy, P.; Shaw, J.; Seldomridge, N.; Spangler, L.; et al. Can Honey Bees Assist in Area Reduction and Landmine Detection? J. Conv. Weapons Destr. 2003, 7, 24–27. [Google Scholar]
  69. Eden, A.H.; Moor, J.H.; Soraker, J.H.; Steinhart, E. (Eds.) Singularity Hypotheses: A Scientific and Philosophical Assessment; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  70. Good, I.J. Speculations Concerning the First Ultraintelligent Machine. Adv. Comput. 1966, 6, 31–88. [Google Scholar]
  71. Kurzweil, R. The Singularity is Near; Viking Books: New York, NY, USA, 2005. [Google Scholar]
  72. Vinge, V. The Coming Technological Singularity: How to Survive in the Post-Human Era. In Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace; Landis, G.A., Ed.; NASA Publication: Washington, DC, USA, 1993; pp. 11–22. [Google Scholar]
  73. Ulam, S. Tribute to John von Neumann. Bull. Am. Math. Soc. 1958, 64, 1–49. [Google Scholar]
  74. Wang, X.; Lei, Y.; Gea, J.; Wu, S. Production forecast of China’s rare earths based on the Generalized Weng model and policy recommendations. Resour. Policy 2015, 43, 11–18. [Google Scholar] [CrossRef]
  75. Leith, H.; Whittaker, R.H. (Eds.) Primary Productivity of the Biosphere; Springer: New York, NY, USA, 1975. [Google Scholar]
  76. Anderson, P.W. More is different. Science 1972, 177, 393. [Google Scholar] [CrossRef] [PubMed]
  77. Conrad, J. Seeking help: The important role of ethical hackers. Netw. Secur. 2012, 8, 5–8. [Google Scholar] [CrossRef]
  78. Fox, S. Mass imagineering, mass customization, mass production: Complementary cultures for creativity, choice, and convenience. J. Consum. Cult. 2017. [Google Scholar] [CrossRef]
  79. Levin, S.A. The Princeton Guide to Ecology; Princeton University Press: Princeton, NJ, USA, 2009. [Google Scholar]
  80. Odum, E.P.; Barrett, G.W. Fundamentals of Ecology, 5th ed.; Andover, Cengage Learning: Hampshire, UK, 2004. [Google Scholar]
  81. Smith, T.M.; Smith, R.L. Elements of Ecology; Benjamin Cummings: San Francisco, CA, USA, 2009. [Google Scholar]
  82. Smith, P.A. A Do-It-Yourself revolution in diabetes care. The New York Times, 22 February 2016. [Google Scholar]
  83. Lyng, S. Edgework: A Social Psychological Analysis of Voluntary Risk Taking. Am. J. Soc. 1990, 95, 851–886. [Google Scholar] [CrossRef]
  84. Lyng, S. Edgework: The Sociology of Risk-Taking; Routledge, Taylor & Francis Group: London, UK; New York, NY, USA, 2004. [Google Scholar]
  85. Altman, L.K. Who Goes First? The Story of Self-Experimentation in Medicine; University of California Press: Berkeley, CA, USA, 1998. [Google Scholar]
  86. Borland, J. Transcending the human, DIY style. Wired, 30 December 2010. [Google Scholar]
  87. Moor, J.H. The Nature, Importance and Difficulty of Machine Ethics. IEEE Intell. Syst. 2006, 21, 18–21. [Google Scholar] [CrossRef]
  88. Tzafestas, S.G. Roboethics: A Navigating Overview; Springer: Berlin, Germany, 2016. [Google Scholar]
  89. Davis, J. Program good ethics into artificial intelligence. Nature 2016, 538, 291. [Google Scholar] [CrossRef] [PubMed]
  90. Eveleth, R. Why did I implant a chip in my hand? Popular Science, 24 May 2016. [Google Scholar]
  91. Saito, M.; Ono, S.; Kayanuma, H.; Honnami, M.; Muto, M.; Une, Y. Evaluation of the susceptibility artifacts and tissue injury caused by implanted microchips in dogs on 1.5 T magnetic resonance imaging. J. Vet. Med. Sci. 2010, 72, 575–581. [Google Scholar] [CrossRef] [PubMed]
  92. Hong, S.Y.; Kim, T.G. Specification of multi-resolution modeling space for multiresolution system simulation. Simulation 2013, 89, 28–40. [Google Scholar] [CrossRef]
  93. Collingridge, D. The Social Control of Technology; Pinter: London, UK, 1980. [Google Scholar]
  94. Reinmoeller, P.; van Baardwijk, N. The Link between Diversity and Resilience. MIT Sloan Manag. Rev., Summer 2005. [Google Scholar]
  95. Berkeley, A.R.; Wallace, M. A framework for establishing critical infrastructure resilience goals. In Final Report and Recommendations by the Council; National Infrastructure Advisory Council: Washington, DC, USA, 2010. [Google Scholar]
  96. Bostrom, N. Ethical issues in advanced artificial intelligence. In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence; Smit, I., Lasker, G.E., Eds.; International Institute for Advanced Studies in Systems Research and Cybernetics: Windsor, ON, Canada, 2003; Volume 2, pp. 12–17. [Google Scholar]
  97. Weston, A. Self-validating reduction: A theory of environmental devaluation. Environ. Ethics 1996, 18, 115–132. [Google Scholar] [CrossRef]
  98. Thompson, S.C.; Armstrong, W.; Thomas, C. Illusions of Control, Underestimations, and Accuracy: A Control Heuristic Explanation. Psychol. Bull. 1998, 123, 143–161. [Google Scholar] [CrossRef] [PubMed]
  99. Hargreaves, I.; Lewis, J.; Speers, T. Towards a Better Map: Science, the Public and the Media; Economic and Social Research Council: Swindon, UK, 2003. [Google Scholar]
  100. Social Issues Research Centre (SIRC). Guidelines on Science and Health Communication; Social Issues Research Centre: Oxford, UK, 2001. [Google Scholar]
Figure 1. Multi-Intelligence (MI) hybrid beings and systems.
Figure 2. Conceptual Framework.
Figure 3. Comparison of alternative framings.
Table 1. MI Framing.

Characteristic | Summary
Theoretical foundations (not atheoretical) | MI positioned within philosophy of science, such as critical realism, which can encompass full complexity of causation. Informed by scientific theories, such as ecology theory, which facilitate explanation, prediction and management.
Post-anthropocentric (not anthropocentric) | MI includes the full range of natural and artificial intelligences, which are defined in fundamental terms, such as self-awareness, robust adaptation, and problem solving.
Organicist (not reductionist) | MI considered in terms of whole systems of causal mechanisms and causal contexts encompassing full range of variables that can contribute to intended and unintended consequences.
Emergentist (not mechanistic) | MI encompasses hybrid beings and hybrid systems having emergent properties that can be more than, and different to, the various types of intelligence of which they are comprised.
