*Article* **Sparking Religious Conversion through AI?**

**Moira McQueen**

University of St. Michael's College, University of Toronto, Toronto, ON M5S 1J4, Canada; moira.mcqueen@utoronto.ca

**Abstract:** This paper will take the stance that the cognitive enhancement promised by the use of AI could be, for some, a first step in bringing about moral enhancement. It will take a further step in questioning whether moral enhancement using AI could lead to moral and/or religious conversion, i.e., a change in direction or behaviour reflecting changed thinking about moral or religious convictions and purpose in life. One challenge is that improved cognition leading to better moral thinking is not always sufficient to motivate a person towards the change in behaviour demanded. While some think moral bioenhancement should be imposed if necessary in urgent situations, most religions today see volition in conversion as essential. Moral and religious conversion should be voluntary and not imposed; recent studies showing possible dangers of the use of AI in this area will be discussed, along with a recommendation that regulatory requirements be put in place to counteract manipulation. It is, however, recognized that a change in moral thinking is usually a necessary step in the process of conversion, and this paper concludes that the voluntary, safe use of AI to help bring that about would be ethically acceptable.

**Keywords:** cognitive and moral enhancement; artificial intelligence (AI); volition; conversion

#### **1. Introduction**

Moral bioenhancement through AI and other technologies has the aim of improving people, making us 'better', perhaps more able to solve society's problems. This is, for example, the background to the well-known Persson and Savulescu approach to moral enhancement: we should use every means we have to solve current problems, especially climate change, which threatens destruction of our planet and of future generations. Their theory suggests that not enough of us have the moral capacity to react to this and other serious situations with the urgency required, and that moral bioenhancement, even if involuntary, is needed.

The hope of moral bioenhancement is that people will be able to reason better morally, not just as a good end in itself but also to spark the realization that concrete action and the will to change situations are necessary. These steps are needed for traditional moral and religious conversion, and it is proposed here that cognitive enhancement promised by the use of AI or other means could be a first step in bringing about moral enhancement. The use of AI would therefore be important in sparking or short-circuiting conversion, depending on whether or not it is able to help bring about better moral thinking as a precursor to the changed behaviour that conversion entails. Some of the challenges to moral bioenhancement as it relates to moral or religious conversion will be discussed.

#### **2. AI and Cognitive Enhancement**

Using AI for human enhancement has proved a great aid in restoring physical capacity. Methods used so far include deep brain stimulation (DBS), brain–computer interfaces, and brain implants designed to achieve superior learning. Several studies show that some of these methods help people whose capacity has been reduced by illness or accident, while others show the possibility of learning to operate mechanisms through neural activity, perhaps via chips implanted in the brain, thereby opening or developing neural pathways with greater capacity for cognition. Important here is the possibility of not simply being able to receive more knowledge, but also the capacity for understanding. In other words, " ... cognitive abilities relate to mechanisms of how we learn, remember, problem-solve and pay attention rather than with actual knowledge" (Kaimara et al. 2020).

**Citation:** McQueen, Moira. 2022. Sparking Religious Conversion through AI? *Religions* 13: 413. https://doi.org/10.3390/rel13050413

Academic Editors: Tracy J. Trothen and Calvin Mercer

Received: 17 March 2022; Accepted: 29 April 2022; Published: 4 May 2022

**Copyright:** © 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

In a recent pilot study, researchers from the National University of Singapore (NUS) showed that an artificial intelligence (AI) platform, CURATE.AI, produces training programs personalized to the individual's learning capacity, enhancing training for maximum benefit. Results of the study showed that the CURATE.AI platform has the potential to enhance learning capacity and could lead to successful use in digital therapy. Digital therapeutics using personalized applications already exist on many platforms and would be accessible to anyone with a smartphone, tablet, etc., with the potential of replacing some drug therapies and perhaps even preventing cognitive decline.

Participants' scores varied, leading one of the authors to tell a science news outlet: "We need a strategy that adjusts the training—which can involve many tasks that interfere with each other—according to the participant's changing responses" (*Science Daily* 2019). It is recognized that it is difficult to standardize anything in educational theory, and this remains problematic; at the same time, personalized programs could add moral and religious content to applications (apps) tailored to the individual and helpful for cognitive and moral thinking and reasoning.
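The adaptive strategy described above, in which training is adjusted according to the participant's changing responses, can be illustrated with a toy sketch. This is not CURATE.AI's actual algorithm; the function names, the simple step-up/step-down rule, and all values are assumptions made purely for illustration.

```python
# Toy sketch of an adaptive training loop: difficulty rises after a
# correct response and falls after an incorrect one, roughly tracking
# the learner's current capacity. Illustrative only.

def adjust_difficulty(difficulty, correct, step=0.1, lo=0.0, hi=1.0):
    """Nudge difficulty up on a correct answer, down otherwise,
    clamped to the range [lo, hi]."""
    difficulty += step if correct else -step
    return max(lo, min(hi, difficulty))

def run_session(responses, start=0.5):
    """Replay a sequence of correct/incorrect responses and return
    the difficulty level after each trial."""
    levels = []
    d = start
    for correct in responses:
        d = adjust_difficulty(d, correct)
        levels.append(round(d, 2))
    return levels

if __name__ == "__main__":
    # A learner who succeeds, then struggles, then recovers.
    print(run_session([True, True, False, False, True]))
```

The point of the sketch is simply that the program, not the learner, carries the burden of calibration: each response feeds back into the next trial's difficulty, which is the sense in which such training is 'personalized'.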

In an extensive survey of methods of cognitive enhancement conducted by the US National Institutes of Health, several different methods are discussed, noting the challenges facing new methods of enhancing cognition, including the possibility of brain hacking. Dresler et al. write: "Just like the hacking culture in the realm of computer software and hardware, an increasing number of individuals experiment with strategies to creatively overcome the natural limitations of human cognitive capacity—in other words, to hack brain function" (Dresler et al. 2019). The authors note that those in the field are concerned about the usefulness of enhancement techniques when they could be employed and exploited nefariously (Dresler et al. 2019). These differing viewpoints and warnings cause hesitancy about openly developing techniques using technology, while a lack of solid evidence of successful results leaves observers with questions.

The article points out that another set of disagreements arises when it is not accepted that cognitive enhancement is true enhancement, and the authors themselves demand higher standards for the use of enhancing equipment, including equipment employing AI, saying: " ... only on the basis of a clear picture on how a particular enhancement strategy might affect specific cognitive processes in specific populations, along with side effects and costs to be expected, can an informed theoretical debate evolve and a promising empirical research designs to test the strategy can be proposed" (Dresler et al. 2019).

At the same time, work on the mode of action of AI in cognitive enhancement recognizes that there has been solid progress in pharmacological means of enhancement and in behavioural intervention treatments. The NIH survey refers to a cluster of physical strategies for cognitive enhancement, including brain stimulation technologies. Quite apart from the treatment of subjects with pathological conditions, it states, " ... several forms of allegedly non-invasive stimulation strategies are increasingly used on healthy subjects, among them electrical stimulation methods such as transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS), transcranial random noise stimulation (tRNS), transcranial pulsed current stimulation (tPCS), transcutaneous vagus nerve stimulation (tVNS), or median nerve stimulation (MNS)" (Dresler et al. 2019). While the authors raise doubts about the effectiveness of many of these procedures, they add a more positive note in listing 'transcranial magnetic stimulation (TMS), optical stimulation with lasers, and several forms of acoustic stimulation, such as transcranial focused ultrasound stimulation, binaural beats, or auditory stimulation of the EEG theta rhythm or sleep EEG slow oscillations' as having potential for cognitive enhancement (Dresler et al. 2019). More recently, fMRI neurofeedback has also shown potential to increase sustained attention (i.e., helpful for those with attention deficit disorders) or visuospatial memory (helpful for those with dementia) (Dresler et al. 2019).

Many of these methods function with AI assistance, and a step further in the use of AI is found in developments that the authors say 'converge minds and machines where machines are closely integrated with the person through the use of wearable electronic memory aids, AI related reality gadgets or, more permanently, bodily implants' (Dresler et al. 2019). Neural implants that could aid memory are being tested and some are in use, while brain–computer interfaces, such as those developed by Kevin Warwick, connect the central nervous system with computers through wearable or implanted electrodes to bring about enhanced cognitive function.

Indications of further use of AI in cognitive enhancement are found in commercial video games and in customized computer training programs designed to enhance specific cognitive capacities and skills. Unfortunately, recent controlled studies and meta-analyses have cast some doubt on the success of computerized brain training programs, since no single cognitive enhancer augments every cognitive function (Dresler et al. 2019). In fact, the authors found that some cognitive training programs do enhance memory, processing speed and visuospatial skills, but work against other functions, such as attention (Dresler et al. 2019). If an enhancement program promotes some aspects of cognition but damages others, then it will not be worth using.

For example, the studies show that electrical stimulation of posterior brain regions was found to facilitate numerical learning, whereas automaticity for the learned material was impaired. In contrast, stimulation on frontal brain regions impaired the learning process, whereas automaticity for the learned material was enhanced. Brain stimulation has thus been suggested to be a zero-sum game, with costs in some cognitive functions always being paid for gains in others (Dresler et al. 2019). This implies that enhancement may have to be tuned to the most pressing current cognitive function for the person, and certainly shows that conclusions about efficacy are rather distant, limiting not only cognitive capacity but also the capacity for moral and/or religious development through enhancement.

A major ethical and anthropological question is raised by Clowes, who notes that, "Electronic-Memory (E-Memory), powerful, portable and wearable digital gadgetry and 'the cloud' of ever-present data services allow us to record, store and access an ever-expanding range of information both about and of relevance to our lives" (Clowes 2015). The cloud is the ' ... wireless internet of data and processing services ... which while providing local information is connected to a wireless internet that provides data-warehousing and, increasingly, processing capacities that moreover track and collect information on the minutiae of our lives' (Clowes 2015). As these technologies become more pervasive, and as we grow ever more dependent on them, the author asks, " ... but what, if anything, might be happening to our minds and sense of self as we adapt to an environment and culture increasingly populated by pervasive smart technology ... ?" (Clowes 2015). The question is important for the possibility of cognitive and moral bioenhancement if there are negative as well as positive effects, and if, as has been shown in cognitive enhancement, there are possibilities of impairment that in some ways cancel the enhanced capacities.

He is concerned about negative effects on users, asking, for example, whether answers that are fed to us at the touch of a screen might dilute the human capacity for thinking. We constantly use e-memory for providing information rapidly, or as an electronic diary, or, variously, as GPS/calculator/camera/video recorder of events, etc. The question is important for the human capacity for learning and memory over the long haul as our dependency on machines and AI grows. On the other hand, the possibility of cognitive enhancement, for example, in supporting the failing memory of those in the early stages of dementia, is a desirable outcome. E-memory could also have ramifications for moral bioenhancement and even for conversion, if there were good information to help people with their moral decisions. As these technologies and our habitual use of them increasingly become a part of everyday life, the tendency is for them to become invisible, fading into the background of cognition and skilled action. Clowes notes that whereas drugs that may produce cognitive enhancements, or more direct brain–machine interfaces, have a more public, academic and popular audience, use of the Cloud and AI is so widespread in everyday work and tasks that we scarcely even notice our dependency (Clowes 2015).

In terms of improving cognition, he suggests e-memory provides " ... a scaffolding upon which we build for recall and accuracy", and this seems less threatening than the suggestion that we may be damaging human thinking, especially when he discusses how e-memory adds material we did not know before, even when we thought we 'knew' someone. E-memory adds to our store of information, and most of us are happy about that expanded knowledge and see it as positive in shaping our picture of reality, always assuming the information is accurate and verifiable, the very matters that can be problematic in this age of disinformation. Could there be cognitive diminishment in this easy access to information, even as we think our horizons are being expanded through memory aids or prompters? Will we 'learn to forget' as we become more reliant on the external supports of AI for our poor memory, or will we simply use e-memory as an aid until we become familiar with the facts provided? Clowes uses the example that GPS devices guide us through areas we do not know, yet once we have navigated routes for a time, our brain takes over and we function on our own (Clowes 2015). If there is concern that our problem-solving functions and capacity for analysis could be affected, it should be remembered that e-memory is already proving valuable in helping people in cognitive decline to remember people, places and bygone times. The usual yin/yang of advantage/disadvantage applies to technology as to everything else, and time will tell whether human memory will be affected by our 'not needing' to remember, e.g., telephone numbers, driving directions, historical dates, lists of capital cities, poetry and the like, now that we can turn to portable, ever-present smartphones, tablets, wristwatches, etc., for answers.

In his article on AI as a means to moral enhancement, Klincewicz identifies a major ethical problem in noting that "There are reasons to think that leading a moral life is even more difficult today than in Aristotle's time. Many contemporary societies face rapid technological advance and moral practice is not catching up" (Klincewicz 2016). His thesis, not unlike Persson and Savulescu's, is that we are neither cognitively nor morally prepared for the advent of computers, biotechnology, and new forms of medicine. We tend to focus on more immediate concerns, such as family, or local politics over geopolitics, and to resist action in spheres that are distant from us. Klincewicz calls this the 'Moral Lag Problem', describing all the things that cause us to be not as moral as we could or should be, and this fits with Persson and Savulescu's view that this gap threatens our planet, resulting in their urging people to take steps to remedy the problem.

He notes that Savulescu and Maslen appeal to advances in computing technology and artificial intelligence as a way of moral enhancement (Klincewicz 2016). In their view, "the moral AI would monitor physical and environmental factors that affect moral decision-making, would identify and make agents aware of their biases, and would advise agents on the right course of action, based on the agent's moral values" (Klincewicz 2016). Noting that there are concrete examples of the way in which this could be achieved, Klincewicz concludes that the approach with most promise would be to use discoveries from machine ethics along with engineering solutions featuring AI to formulate such programs for moral bioenhancement (Klincewicz 2016). These ideas involve developing moral environment monitors that would prompt information about environmental issues and so 'nudge' a person towards moral conclusions, assisting the person without attempting a take-over of the person's moral agency. Klincewicz foresees machines that would give answers to normative questions, but there could be challenges: What if I do not agree with a suggested course of action, e.g., to stop driving my car or to buy only local produce? Since machines rely on algorithms, would they not then produce a type of utilitarian ethic, for example, suggesting the answer that the greatest number of people have so far expressed? There is the possibility that the person would listen to AI suggestions over their own beliefs, since there is evidence that people can be persuaded to change their behaviour by appropriately designed technologies. Human trust in computers can be high when it comes to automation, but the problem is that humans may end up trusting an automated system when it is not really appropriate to do so, since the machine may contain skewed information or have systemic problems of which the users are unaware. Klincewicz points to research showing that a machine that can advise and give reasons would be more successful in changing behaviour than the kind of training programs proposed by others, suggesting that there may be potential for creating an artificial moral advisor with AI playing a normative role (Klincewicz 2016). He notes that, "The key problem is that all of the component parts of moral AI are tied up with the agent's own moral values and those values might be based on morally compromising biases and beliefs" (Klincewicz 2016).
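The 'moral AI' proposal summarized above, monitoring relevant factors and advising on the agent's own values while leaving the decision to the person, can be caricatured in a few lines of code. This is a deliberately crude sketch: the value weights, the options, and the additive scoring rule are all invented for illustration and bear no resemblance to any real system Savulescu, Maslen or Klincewicz describe.

```python
# Toy "moral advisor": ranks options by how well they match the
# agent's own declared values, then merely reports the ranking.
# It nudges; it does not decide. All data here are invented.

def advise(options, agent_values):
    """Score each option by summing the agent's weights for the
    values it serves; return options ranked best-first."""
    scored = []
    for name, values_served in options.items():
        score = sum(agent_values.get(v, 0) for v in values_served)
        scored.append((score, name))
    scored.sort(reverse=True)
    return [(name, score) for score, name in scored]

if __name__ == "__main__":
    # The agent's own (self-reported) value weights.
    agent_values = {"environment": 3, "local_economy": 2, "convenience": 1}
    options = {
        "drive_to_work": ["convenience"],
        "cycle_to_work": ["environment"],
        "buy_local_produce": ["environment", "local_economy"],
    }
    # The advisor ranks; the agent remains free to disagree.
    print(advise(options, agent_values))
```

Even this caricature exposes Klincewicz's "key problem": the output is only as good as the `agent_values` fed in, so biased or morally compromised inputs simply produce confidently ranked biased advice.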

He suggests that a response to these challenges could lie in the authors of such programs employing a morally pluralistic approach (not relativism), and points out that 'common human morality', while not always in agreement on finer points, does require some objective standards (Klincewicz 2016). I see this as an interesting referral to the possibility of some norms being seen as necessary, in contrast to today's tendency towards individual relativism or other theories that challenge the existence of any universal, objective norms. Perhaps some actions that benefit the common good, or other universals such as 'You shall not kill an innocent party', 'You shall not steal', and 'You shall not commit adultery', carry more weight than is often realized. Some also argue that any 'interference' or 'nudge' by any form of AI should allow for what Harris calls 'the freedom to fall', meaning one must decide for oneself, rightly or wrongly, and not by others' standards (Harris 2011). While against any form of compulsion in the use of AI, Klincewicz makes a good point in saying that perhaps the best point in AI's favour, if used as a moral enhancer/advisor, is that it " ... invites its user to engage in rational deliberation that he or she may not have gone through otherwise" (Klincewicz 2016). After all, in any ethical theory, full information is essential for good moral decision making, and it is not always easy to find.

#### **3. Moral Bioenhancement**

Since studies show that cognitive improvement through the use of AI is sometimes possible, the next step is to ask whether moral thinking can be enhanced by it. One view emphasizes the need for the exercise of personal moral agency by an individual, free of compulsion or manipulation. The person needs cognitive and moral capacity to sift through information and possibilities and to reflect on outcomes in order to make a freely willed moral decision. Schaefer suggests, following Jotterand, that moral neuroenhancement is impossible because, " ... we can only become better through careful, reflective exercise of our moral agency, not through neural implementation" (Schaefer 2011). He notes a deeper problem alluded to by Jotterand: disagreement about the goal of moral enhancement threatens to make such projects untenable. I think his view is more accurately about the means used, since he asks, " ... if part of being virtuous is to adequately process relevant factors in moral decision making, why couldn't we (at least in theory) use neural manipulation to enhance cognitive capacities and thereby make people more likely to be virtuous?" (Schaefer 2011). Jotterand, however, believes that when the word 'manipulation' is used, there is already an ethical objection: a threat to human agency and free will, an imposition of someone else's thinking on the individual concerned (Jotterand 2014).

Schaefer suggests that certain forms of cognitive manipulation would not pose the same risks to agency that, for example, emotional manipulation does. The latter could diminish agency by promoting the manipulator's values, whereas this does not necessarily happen in cognitive enhancement. He thinks the manipulator could be 'content-neutral' about values, only trying to improve the other person's ability to reason. I do not think such neutrality is possible, as so many have attested. Machine learning, in particular, has shown how algorithmic results can be skewed by bias of various kinds, often depending on the participants featured in studies. Bias is omnipresent, and it is almost impossible for humans to be value-free, a complication being that we are often unaware of our own biases. It is the hallmark of human agency that the person be free from manipulation (at least obvious manipulation!) and, therefore, free from other people's biases, in forming beliefs and deciding on actions. Allowing for these challenges, Schaefer thinks that cognitive manipulation could make the decision-making process, including moral decision making, easier and would allow moral bioenhancement.
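The point that algorithmic results can be skewed by who happens to be in the training sample can be shown with a deliberately minimal example. Nothing here models a real machine learning system; the 'learning' is just a mean-based threshold, and all numbers are invented to make the mechanism visible.

```python
# Minimal illustration of sampling bias: a rule "learned" from an
# unrepresentative sample (here, just a mean threshold) treats the
# wider population differently than a rule learned from a
# representative one. All data are invented for illustration.

def learn_threshold(samples):
    """'Learn' a pass/fail threshold as the mean of the sample."""
    return sum(samples) / len(samples)

def passes(score, threshold):
    return score >= threshold

if __name__ == "__main__":
    # Sample drawn only from a high-scoring subgroup...
    biased_sample = [80, 85, 90, 95]
    # ...versus a sample reflecting the whole population.
    fair_sample = [40, 55, 60, 80, 85, 90]

    t_biased = learn_threshold(biased_sample)   # 87.5
    t_fair = learn_threshold(fair_sample)       # ~68.3

    # The same score of 75 fails under the biased rule but
    # passes under the representative one.
    print(passes(75, t_biased), passes(75, t_fair))
```

No one chose a 'value' here; the skew enters silently through the sample, which is why content-neutrality is so hard to guarantee in practice.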

Jotterand acknowledges that there is strong evidence of the possibility of altering, manipulating, and regulating moral emotions using neurotechnologies or psychopharmacology. For example, increased levels of oxytocin make people more trusting, and selective serotonin reuptake inhibitors (SSRIs) reduce aggression and enable cooperation (Jotterand 2014, p. 2). Similarly, the use of neurostimulation techniques seems able to produce changes in mood, affect, and moral behaviour. He accepts that these technologies can alter how people react to situations that implicate a particular moral stance. His critique is that manipulative control of behaviour is not enough to show genuine moral enhancement, whereby the individual's moral thinking would change and develop across the spectrum. Rather, people develop morally " ... through the development of a vision of the good life and an understanding of the meaning of human flourishing" (Jotterand 2011, p. 8). This is essentially Aristotelian, and the accepted teaching of Thomas Aquinas, and constitutes my own leaning in this field; in this case, however, it does not, to me, preclude safe, voluntary methods that may aid cognition and possibly moral decision making.

Regarding the question he asks about the goal of moral bioenhancement, Schaefer accepts that Alasdair MacIntyre's critique of tradition-free approaches to ethics such as consequentialism and deontology, namely that they have failed to produce uncontroversial or unproblematic results, applies equally to the tradition-infused approach of virtue ethics (Schaefer 2011). If the disagreement about what it is to be good or moral remains unresolved, what is the real point of moral enhancement at all? I believe this is a valid point. Without agreement on at least some moral values and their implications, the responses, even if moral bioenhancement were effective, would still leave divisions and hesitance about how to rank ethical problems in terms of priority, not to mention solutions to them. To me, this is a fundamental problem about ethics and an ongoing dilemma in moral philosophy and theology. While we may be able to reach a degree of overlapping consensus in a few cases in the field of neuroethics (e.g., that mitigation of psychopathic tendencies counts as a cognitive and moral enhancement), much of the time there may be deep moral disagreement. What might seem to be moral improvement to some could well seem moral deterioration to others, and we would be divided about the content of treatments and programs. Jotterand is sceptical about consensus on these issues, writing that, "The motivation to develop biotechnologies to enhance human capacities does not occur in a vacuum, and a particular moral stance about human nature and notions of embodiment, enhancement, and morality are at play in shaping the discourse" (Jotterand 2014). Current notions of relativism, consequentialism, utilitarianism, transhumanism, libertarianism, and distrust of legitimate authority and religions are therefore at play, as well as a deepening individual relativism in which even the social notion of the common good takes second place to 'my rights' as a basic justifying factor.

This disagreement is both symptomatic of and a reason for deep uncertainty about what it is to be good or moral. Various competing theories are all compelling in their own way, making adjudication of what counts as enhancement even more difficult than adjudicating the morality of actions. This makes an inclusive sort of pluralism about value attractive, but then the problem manifests itself in a different way: if, for example, we agree that being virtuous, doing the right thing and seeking the best consequences are all important in being good, how do we weigh our *different* conclusions about what is right or wrong in given situations? Should majority decisions win the day, thus turning to utilitarian or pragmatic approaches? MacIntyre's critique still stands to be answered, and an answer seems unlikely in today's exceedingly pluralistic yet individually relativistic moral world; but answers still have to be looked for in the field of moral bioenhancement as in any other, and 'agreement to disagree' is already an answer of sorts, at least in democratic societies.

Concerns about volition and privacy of thought and feeling are raised by neuroethicists such as Lavazza, who recognizes the possible danger to personal freedom in applied technology aided by AI. He notes that there are already neural prostheses " ... depriving individuals of full control of their thoughts" (Lavazza 2018). Those who insist on the importance of the capacity for free will as a basic marker of human identity will want to safeguard those areas out of respect for human dignity, and to make sure others do not acquire the right to invade private territory or to use any information obtained from patients who are treated by these means. It is easy to see how the technologies could be used nefariously, but as long as most applications are used in health care treatments, such as for neurodegenerative diseases, he sees no reason to think about forbidding their use, since safety and cure of disease are ethical duties of the first rank.

Nonetheless, Lavazza proposes strict internal controls on what can be 'sparked' in a person's thoughts, and on what can be used thereafter. He reminds us that neuroscientific techniques can be invasive, threatening a patient's cognitive freedom and privacy, and that protective human rights have therefore become necessary (Lavazza 2018). Access to a person's thoughts should be strictly regulated and dependent on the person's full consent to any use of material obtained. He notes that this approach is necessary not only for AI devices but should become a general 'technical' operating principle to be observed by any system involved in decoding a patient's brain activity (Lavazza 2018). I agree with this approach, and would point out a further concern: there is generally a lack of enforceable regulation in many technological and health care-related fields, e.g., gene editing, because of a lack of agreement on fundamental principles. In some cases, suggested principles give way to pragmatism, which could lead to severe risks to human dignity, free will and personal privacy in some instances of moral bioenhancement.

Rakic agrees with moral bioenhancement as long as it is safe and voluntary (Rakic 2017). I agree with both conditions, and with his stating, like Jotterand, that, "To make it obligatory deprives us of our freedom" (Rakic 2017). He sees compulsory moral enhancement as a contradiction in terms, violating free will, and he asks "MacIntyre-type" questions such as "Whose means? Who creates the input for the 'software'? Where does the moral authority to enhance come from? Under what terms?". He is concerned that the use of any mechanism might actually reduce our will power, thus also reducing freedom of thought. He takes a hard look at the future in this perceptive statement: " ... if such form of ultimate harm changes our species beyond recognition, compulsory moral enhancement itself obliterates humans and is, therefore, not even consonant with biological morality as an ethics of survival of the species ... " (Rakic 2017). He finds that resorting to majority decisions about these matters (which I term pragmatic rather than ethical) makes matters more political than moral, because then it seems that only numbers count in ethical decision making, which for him and others is an insurmountable difficulty.

Parker Crutchfield suggests that the manipulation of a person's moral traits, i.e., the core of a person's identity, amounts to 'killing' the person (Crutchfield 2018). While the widespread use of AI or other means to perform such manipulation is still rare, and such means are mainly used as therapy, he, like Jotterand, Rakic and Lavazza, warns that we should anticipate future problems in such use and be prepared. His thesis is that change brought about by technical interventions may result in a person acting like a different person and, further, that the change is not due to the person's own agency, even if voluntary. What comes to mind is Jack Nicholson's portrayal of a person changed by a lobotomy performed for suspect reasons in the movie *One Flew Over the Cuckoo's Nest*, which, while fictional, depicted a complete change in the character's identity, personality and behaviour. This is not to suggest that current brain manipulation could or would effect that level of traumatic damage, where the cure is worse than the disease, but it is a stark reminder that human manipulation could go wrong or be performed for the wrong reasons, perhaps at great cost for some.

#### **4. Moral or Religious Conversion and Bioenhancement**

At another level, the 'change' in moral thinking hoped for by those who advocate the use of AI in moral bioenhancement is something I shall compare with moral 'change' or conversion, partly from a secular and partly from a Christian viewpoint. To use a popular literary example, Scrooge in Dickens' *A Christmas Carol* reveals what is meant by a spiritual or religious conversion, brought about seemingly by the 'Spirits of Christmas' in the tale. The story uses his own memories, imaginings, dreams and fear of untimely death, resulting in a late-life recognition of his own earlier woundedness and loneliness, complicated by the development of his antisocial, 'closed-in' and miserly character. After Scrooge's change of heart (conversion), only Tiny Tim is allowed to invoke God expressly, but it is clear that a spiritual concept of 'love of neighbour' prevails, and Scrooge becomes a different person towards his fellow creatures. He acknowledges his earlier suffering and mistakes, he expresses repentance towards those whom he had wronged, and he manifests a truly Christian spirit in making amends. These reference points are generally reckoned to be necessary for religious conversion, meaning that change in one's moral thinking leads to an actual change in behaviour. No matter what causes moral change, it is necessary for true religious conversion, and if moral bioenhancement cannot effect such change, it is more or less pointless. Crutchfield confirms this in writing, " ... people undergo changes to their moral traits all the time, but usually these trait changes don't result in different identities because only very few traits change or because the changes occur within the person's narrative in a way that allows the narrative to continue to unify the self, preserving the person's identity through the change" (Crutchfield 2018). He is doubtful about moral bioenhancement's capacity for actual change in the person.

*A Christmas Carol* is only a morality tale and may not stand up to Crutchfield's charge about real change, but it does seem to have had a great deal of influence on how people think and act, and could be considered a 'universal' in capturing certain aspects of human nature, almost in the same way as a parable.

Crutchfield is concerned that if and when a change in identity occurs through bioenhancement, the person 'dies'. His concern is perhaps justified if the change is for the worse, but Saint Paul's example points to the possibility of another type of 'dying', where the person is then 'reborn' and is thankful to God for that rebirth. Of course, if a person's identity is changed through external means and his or her free will is taken over by human design with the intention that he or she 'die' through the bringing about of radical change in personality, as happened to Jack Nicholson's character in the movie, few people would find that ethical.

Yet the possibility that the change might be positive should also be recognized. Although Saul was clearly not bioenhanced technologically, the biblical story tells us that his 'sight' was affected for some time before being restored when he underwent a drastic change in identity in becoming Paul, the follower of Christ. His conversion seems to have come from a more internal mechanism of insight and openness to grace: being 'knocked off his horse' is variously interpreted as a Scriptural way of saying that a great insight dawned on him, and he acted accordingly. He 'died' but was reborn. It can be hard for those telling their conversion stories to explain their subjective moments of insight and 'dawning' realizations, many giving witness to dramatic stories and others, such as Elijah, experiencing conversion 'in the gentle breeze'. The saying, "The bigger they come, the harder they fall", may have had some significance in the account of Saul's conversion, given his forceful and zealous nature. The main point is that many people point to the reality of conversion, and if cognitive and moral bioenhancement can set people on that path, safely and voluntarily, such enhancement could also serve a religious purpose.

#### **5. Challenges to Moral or Religious Bioenhancement**

Many philosophers, ethicists and scientists, however, say that evidence for effective cognitive and, in turn, moral enhancement by any means is not yet strong enough. I agree with those who say that the cognitive capacity for change in moral thinking needs to be high—especially for 'new' questions. Thinking through values and moral stances can be a difficult and ongoing task even for those in the field, often taking considerable time to arrive at conclusions or workable solutions. At the same time, another challenge exists in that new ethical questions will always run ahead of us as technology develops at great speed, and we will always be running to catch up, often *post factum*. I believe that partly explains Persson and Savulescu's frustration, as well as Klincewicz' 'Moral Lag': if society cannot think fast and well enough to grasp the impact of a given moral dilemma, then why not enhance society to do so? Even were that to be a reality, we would still have the challenge of 'running to catch up', since it is impossible to anticipate the many questions science, medicine or technology throw our way.

Another challenge, already mentioned, is that ethicists disagree about many matters, not only on account of religion, but through disagreement about facts, sources of facts, values and norms. Assuming that is the case, we will never be sure what morally bioenhanced people will value after treatment. Short of piping the 'manipulator's' values and information into them, people are still going to think for themselves. Unless somehow 'enslaved' through a brain–computer interface or chip (to date more science fiction than reality), the voluntary and free-will aspects of morality will be maintained. Interestingly, these are perhaps the aspects of the moral life about which most people agree. Prospects of having moral information 'piped' into a person raise the usual questions: whose information and whose morality? A lack of agreement on universal norms may still render the process of moral bioenhancement problematic, at least from the standpoint of those with a specific moral agenda.

Another factor to be taken into account is that conversion is an ongoing process in spiritual life, involving cognitive, moral and religious change. It is difficult to see that compliance with, for example, Christian principles could be a *direct* result of moral bioenhancement through deep brain stimulation, e-memory or other uses of AI when human circumstances are so variable. The biblical parable of 'the sower' makes sense here: as Jesus tells it, some seed fell on rocky or stony ground, but some fell on fertile soil. Even that which fell on fertile soil was sometimes choked by weeds or was eaten by birds. Some seed grew and gave a hundredfold of itself: the same seed, but different soil and circumstances. At the religious and spiritual level, there is a need to leave space for the seed to work in us as unique individuals, and we discover that spiritual matters cannot be forced or compelled. Followers of Christianity learn that Jesus simply invites us to follow him, knowing that the harvest will not always be a hundredfold. Whether moral or religious conversion occurs as a result of existing teaching methods or of bioenhancement, I would venture to say that, even under compulsion, the results are likely to be the same.

#### **6. Conclusions**

Still, cognitive enhancement is already a reality, and it looks as though it will be further developed and become more effective, at least for individuals with cognitive impairment. More people will then have improved cognition, which may in turn improve their moral thinking. We will still face disagreements about ethical theories and still run into different views on resolving moral questions and problems. Although this is a somewhat pessimistic view, it is difficult to see that, even were it successful, moral enhancement would be able to change enough people, soon enough, to respond to more immediate global challenges such as climate change or other societal problems.

That does not mean, however, that cognitive enhancements or moral bioenhancements that are voluntary and safe are useless. If they lead to better moral thinking in individuals, they deserve a place. Better or clearer moral thinking could lead to moral and possibly religious conversion, where a person desires to change his or her behaviour, whether towards people, in choice of career, in life decisions, and so on, taking into account the values which now resonate as primary (cf. St Paul and Scrooge—seemingly vastly different, but actually similar in experience). Moral conversion can lead us to 'see' matters in a different light and to act differently. In Canada, for example, facts have recently been revealed about the treatment of Indigenous peoples, facts that may have been deliberately concealed and obscured, while society continued in biased behaviour against these peoples, based on misinformation. When society eventually had its blinkers removed, Canada came to fully acknowledge its wrongdoing and moved to change its behaviour, a necessary corrective in achieving justice and in allowing the process of reconciliation and healing to begin, with the intention of working towards a more egalitarian and just society.

The same process occurs in moral and religious conversion. A change in evaluative knowledge (which builds on cognitive knowledge) is needed first, whether sparked by spontaneous or enhanced means, leading to changed behaviour. A change in action is then needed to right the wrong that occurred, as a matter of justice. The healing of the harm caused (sin, in religious terms) can then begin, with the person responsible for any harm resolving never to cause such harm again. Christians in Canada experienced the same moral conversion as the rest of society regarding societal treatment of Indigenous peoples over the years, but their religious convictions should have made them realize afresh how so many had abandoned or ignored their own 'Great Commandment': to love God and love one's neighbour as oneself. This does not imply love only in the affective sense, but socially and politically, in accord with the important principles of social justice and the maintenance of the common good.

When one realizes how long it has taken for this wrongdoing to be addressed, Klincewicz' point about 'moral lag' in these matters comes home to roost. Given this lag, which to future generations will appear ethically unacceptable, there is all the more reason to look for help from any source in trying to resolve such major issues. Although there are clearly some challenges to the effectiveness of bioenhancement, advances through AI and other technologies are growing rapidly and already show potential for moral influence. With the same caveat as before as to the need for them to be safe and voluntary, their influence could be turned to great good in fostering better moral thinking and action towards achieving higher standards of individual and societal relationships.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**

Harris, John. 2011. Moral Enhancement and Freedom. *Bioethics* 25: 102–11. [CrossRef] [PubMed]

Schaefer, Owen. 2011. What is the Goal of Moral Engineering? *AJOB Neuroscience* 2: 10–17. [CrossRef]

*Science Daily*. 2019. AI to Enhance Cognitive Performance? Available online: https://www.sciencedaily.com/releases/2019/05/190522120507.htm (accessed on 26 April 2022).

*Article* **Will Superintelligence Lead to Spiritual Enhancement?**

**Ted Peters**

Graduate Theological Union, Berkeley, CA 94709, USA; tedfpeters@gmail.com

**Abstract:** If we human beings are successful at enhancing our intelligence through technology, will this count as spiritual advance? No. Intelligence alone—whether what we are born with or what is superseded by artificial intelligence or intelligence amplification—has no built-in moral compass. Christian spirituality values love more highly than intelligence, because love orients us toward God, toward the welfare of the neighbor, and toward the common good. Spiritual advance would require orienting our enhanced intelligence toward loving God and neighbor with heart, mind (or intelligence), and soul.

**Keywords:** intelligence; superintelligence; machine intelligence; artificial intelligence; intelligence amplification; reason; love; transhumanism; public theology; AI ethics; Knud Løgstrup

#### **1. Introduction**

We already enhance our vision by wearing glasses. With CRISPR gene editing, or with artificially intelligent memory chips implanted in the brain, could we enhance our spirituality? With the term "spirituality," I ask about motivated behavior such as moral resolve, compassion, faith in God, love of neighbor, sanctification, and deification.

Theologians routinely emphasize that healthy spirituality conforms human free will with God's will. Would technological enhancement<sup>1</sup> override our free will? Or, would it enhance our free will? Or, would it ignore our free will? Should we expect a morally advanced humanity in the future? (Herzfeld 2017, Introduction: Religion and the New Technologies).

No. Not if we restrict our criterion for measuring human progress to superintelligence. Why? Because the *summum bonum* of Silicon Valley is not sanctification or love of neighbor. Rather, the highest good sought here is intelligence. That is it. Intelligence. As desirable as superintelligence in either artificial or human form might be, it would have no necessary effect on moral responsibility or spiritual enhancement.

If human intelligence could be enhanced artificially by means of ML (machine learning), AI (artificial intelligence in robots), or IA (intelligence amplification through brain implants), the Christian theologian would celebrate (Herzfeld 2018, The Enchantment of Artificial Intelligence). This would count as an advance in human health and wellbeing. But, make no mistake. Enhanced intelligence in itself does not constitute an achievement of spiritual goals such as virtue, sanctification, or neighbor love. No matter how valued and respected intelligence is, moral or spiritual enhancement is something else.

#### **2. What Is the Highest Good: Intelligence or Love?**

How should we formulate the issue? Here is the problem. The goals set by ML, AI, and IA researchers, as articulated especially by transhumanists, are set by a vision of superintelligence. The technological destination, according to Max More and our other transhumanist colleagues, is a posthuman species augmented by superintelligence. This may be a laudable vision. However, it is not the goal of Christian spirituality, let alone the spiritual end of most religious traditions.<sup>2</sup>

**Citation:** Peters, Ted. 2022. Will Superintelligence Lead to Spiritual Enhancement? *Religions* 13: 399. https://doi.org/10.3390/rel13050399

Academic Editors: Tracy J. Trothen, Calvin Mercer and Jeffery D. Long

Received: 11 January 2022; Accepted: 25 April 2022; Published: 26 April 2022

The *trans* in transhumanism refers to the present phase of propelling both AI and IA toward the Singularity, toward the threshold where superintelligence grabs the reins of evolution, steers humanity toward posthumanity, and discards current *Homo sapiens*. We will become fossils of an extinct species (More 1996).<sup>3</sup> The "Singularity...is a point where our old models must be discarded and a new reality rules," wrote the prescient Vernor Vinge (Vinge 1992).

The engineer shining the headlight on the transhumanist train is Oxford's Nick Bostrom. He describes the end station.

Let us make a leap into an imaginary future posthuman world, in which technology has reached its logical limits. The superintelligent inhabitants of this world are autopotent, meaning that they have complete power over and operational understanding of themselves, so that they are able to remold themselves at will and assume any internal state they choose. In any technological utopia we have a realistic chance of creating, a large portion of the constraints we currently face have been lifted and both our internal states and the world around us have become much more malleable to our wishes and desires (Bostrom 2008, pp. 202–3).

Futurist Bostrom projects a utopia replete with humanistic values such as dignity and freedom. Yet, the path to utopia is the one-way track toward increased intelligence.<sup>4</sup> There is nothing in this posthuman Eden that causally links enhanced intelligence with enhanced spiritual integrity.

Despite this lacuna, technological enhancement should still attract the theologian. Why? Because both H+ and theology look forward to human transformation. Reformed theologian Ronald Cole-Turner, for example, is attracted to H+ because "human transformation is central to Christian thought" (Cole-Turner 2011, p. 5, Introduction: The Transhumanist Challenge).<sup>5</sup> For critical as well as complementary reasons, the church theologian should participate in the wider public discourse. Specifically, the positive contributions of intelligence technology could benefit the common good.

Perceptive religious insights belong in this public discussion. "Religion can play an important role in assessing these technologies and shaping a beneficial outcome. Playing that role requires religion to be responsive, relevant, and prophetic in the public square," declare Tracy Trothen and Calvin Mercer (Mercer and Trothen 2021, p. 210).

It becomes the task of the public theologian, then, to board this train and ride as far as conscience will allow (Peters 2018, Public Theology: Its Pastoral, Apologetic, Scientific, Political, and Prophetic Tasks). The public theologian dare not ride to the end station because the Christian vision of human flourishing depends on love, not intelligence (Peters 2019c, Boarding the Transhumanist Train: How Far Should the Christian Ride?).

So, we must ask: what role should the religiously informed AI ethicist play? To date, AI ethics has been pretty much restricted to professional ethics. AI and Faith, an organization made up of techies and theologians, is pushing the frontier further down the track toward religious engagement. What might be the long-term implications for human wellbeing or flourishing?

#### **3. Five Concerns of the Public Theologian**

Here are the implications. The public theologian has an opportunity, if not a responsibility, to engage in discourse clarification that lifts up five concerns (Peters 2019d, The Ebullient Transhumanist and the Sober Theologian).

First, contest the view that the defining feature of humanity is rationality and propose an account of spirituality that dissociates it from reason alone (Peters 2021, Enhanced Intelligence and Sanctification November).

Second, search for a way to invalidate the growing faith in a posthuman future shaped by the enhancements of ML, AI, and IA. What the public theologian sees that the transhumanist is blind to is the ambiguity of technology. Technology can be pressed into the service of evil as well as good. So can intelligence.

Third, assert strongly that it is love understood as *agape*, not rational intelligence, which tells us how to live a godly life. Love tells us how to be truly virtuous, authentically human, even holy.

Fourth, demonstrate how the transhumanist vision of a posthuman superintelligence is not only unrealistic but also portends the kind of tragedy we expect from a false messiah<sup>6</sup> (Peters 2019b, Artificial Intelligence, Transhumanism, and Rival Salvations).

Fifth and finally, proclaim that if, as a byproduct of AI and IA research combined with H+ zeal, the wellbeing of the human species and the common good of our planet is enhanced, then we should be grateful.<sup>7</sup>

#### **4. What Is Love?**

What is Love? Briefly, love comes to us in the form of divine grace and human compassion (Peters 2019a, Artificial Intelligence versus Agape Love: Spirituality in a Posthuman Age). This kind of love can be shared by smart people and not so smart people alike. It can also be shared with the animal kingdom.

And this is his commandment, that we should believe in the name of his Son Jesus Christ and love one another, just as he has commanded us. All who obey his commandments abide in him, and he abides in them. And by this we know that he abides in us, by the Spirit that he has given us (1 John 3: 23–24).

This kind of love is known by its Greek name, *agape*. If superintelligence were a scoop of ice cream, love of God and love of neighbor would be the hot fudge topping. Intelligence without love risks being only something cold.

Again, intelligence—whether in AI form or enhanced human form—is something to value. But intelligence alone lacks a moral compass. *Agape* love provides the moral compass that makes us godly. For enhanced spirituality, we need more than enhanced intelligence.

#### **5. Human Intelligence and Human Reason**

Yes, intelligence is a most valuable commodity. General intelligence, among other traits, marks us as distinctively human. The common good could benefit from super-general intelligence.

We *Homo sapiens*—sometimes *Homo sapiens sapiens*, to stress the point—are rational animals. According to Aristotle, we human beings are "thought-bearers" (ζῷον λόγον ἔχον, *animal rationale*). We think. We reason. Intelligence makes thinking and reasoning possible.

Today's scientists, following Aristotle, call us wise animals, *Homo sapiens sapiens*, a subspecies that evolved between 160,000 and 90,000 years ago from the more inclusive *Homo sapiens*.

Should we human beings think of ourselves as the pinnacle of creation? We are a unique species, right? No. Not exactly. We human beings share our intelligence with other animals. What distinguishes *Homo sapiens* is not our particular mental capacity but rather our linguistic capacity. At least, according to Terrence Deacon at the University of California, Berkeley (Deacon 2012). And now that computers are gaining linguistic capacity, how long can we maintain the delusion that we wet-brains are the kings of the beasts? Or, even the kings over our electronic progeny?

Despite what I have said about love, the capacity and exercise of reason that intelligence affords is something to be cherished. Whether computers or robots will surpass us in the future, we need to make clear that Christians and Jews in Western culture have lauded the human capacity to reason.

When the Vatican takes up the question of AI ethics, Pope Francis reaffirms that rational intelligence contributes to making us human. "All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of fellowship" (Francis 2020, Rome Call for AI Ethics).

It is reasoning power that connects the human to both God and the nonhuman machine (Peterson 2010).

Yet we must re-ask: is rationality a trait unique to our species? Do we share intelligence with animals or plants or extraterrestrial civilizations? If so, must we humans surrender our claim to uniqueness in the biosphere?

In our present era of heightened eco-consciousness, the time is ripe for emphasizing continuity rather than discontinuity between our species and other living creatures. We definitely share intelligence with animals and even single-celled organisms, although to date we do not share intelligence with computers or other machines.

Looking toward the future, it is quite possible that we *Homo sapiens* may encounter other beings of equal or superior intelligence and reasoning capacity. Our future superiors might come in the form of either robots on Earth or disembodied machine intelligence living in cyberspace on exoplanets. Let us ask: does it matter for Christian spirituality whether we humans alone bear the level of intelligence common to our species? My answer: no, it does not matter.

#### *5.1. What Is Intelligence? Do We Humans Share Intelligence with Other Life Forms?*

Intelligence is by no means the private property of our friends and neighbors within the human race. Our highly developed human reasoning exists in continuity—not discontinuity—with other living and embodied creatures.

Let us saunter down intelligence lane for a few paragraphs. The street sign reads: "Intelligence This Way." Well, what will we find when we get there?

The literature on intelligence generally avoids defining intelligence. I find this curious. Scientists dealing with this subject matter prefer to sort through degrees or levels of intelligence rather than telling us what intelligence is. They prefer to distinguish between smarter and dumber. This results in scales of intelligence, in ranks ranging from simple to complex. In short, our scientists do not typically draw lines between the total absence of intelligence and the presence of intelligence. At least in living creatures.

Shane Legg and Marcus Hutter, however, have done us a favor. They have collected definitions of intelligence. They prescribe a minimum of three components essential to any definition of intelligence: (1) agency when interacting with the environment; (2) goal setting leading to success or failure; and (3) adaptation to the environment by altering goals. In sum, "Intelligence measures an agent's ability to achieve goals in a wide range of environments" (Legg and Hutter 2006).

With this background in mind, let us adumbrate seven criteria that reveal the presence of intelligence. These criteria are roughly ranked from simple to complex. The lines between stages are blurry rather than sharp, to be sure; but levels are discernible.

Let us consider a seven-mark description of intelligence. With this list, I hope to demonstrate that very simple life forms exhibit some, though not all, marks of intelligent creatures. An organism is intelligent when it exhibits one or more of the following seven traits (Peters 2017, Where There's Life There's Intelligence):


Our intelligence makes judgment possible. But intelligence does not in itself dictate which judgments to make. A moral compass must be added to make sound moral or spiritual judgments.

We human beings exhibit all seven marks. Many mammals exhibit similar traits of intelligence. Brainless microbes and simple organisms exhibit the first four marks. This spectrum of traits suggests that all life, from the simplest to the most complex, can be dubbed intelligent. There may be differences in levels of complexity, to be sure; yet, we *Homo sapiens* share intelligence with amoebas.

This should come as no surprise to the theologian. In Augustine's *City of God* (16.8) we find such continuity affirmed.

But whoever is anywhere born a person, that is, a rational, mortal animal, no matter what unusual appearance he presents in color, movement, sound, nor how peculiar he is in some power, part, or quality of his nature, no Christian can doubt that he springs from that one protoplast. We can distinguish the common human nature from that which is peculiar, and, therefore, wonderful.

Augustine was including within the *imago Dei* monstrosities, unusual races, persons with mental disabilities, persons with birth defects and such. We dare not underestimate the value of rational capacity in the Christian tradition.

As said above, Aristotle and the Christian tradition were on target when describing us humans as "thought-bearers." Here is the point: we *Homo sapiens* are not alone in this. We humans may bear more abstract thoughts than amoebas, to be sure. But there is no solid line dividing human reasoning from simple cell interiority or intentionality.

If this seven-point spectrum is relevant and illuminating, then we should explore questions about its implications for the future of machine intelligence. For artificial intelligence.<sup>8</sup> For intelligence amplification. For meeting extraterrestrial intelligence.

#### *5.2. Will Disembodied Intelligence Really Be Intelligent?*

If we apply the above seven criteria, what is commonly called artificial intelligence or AI is not intelligent at all. AI is only a bucket of code, some of my techie friends tell me. AI is only a laundry basket of processes with rules of operation, say other Microsoft colleagues. AI may perform jaw-dropping feats of calculation, but this does not imply that intelligence is present. There is no doubt that we should applaud uproariously the computer engineers who have designed machines that learn how to provide us with answers to complex questions. Yet, *intelligence* is not the word to describe information processing, no matter how dramatic.

When we look at a computer or robot we might ask: "who is allegedly doing the thinking here?" Answer: "nobody is at home". There is no self or agent who deliberates, renders judgments, makes decisions, and takes actions.

The stated goal of the strong AI movement is to create artificial general intelligence (AGI). AGI is defined as "interactive, autonomous, self-learning agency, which enables computational artifacts to perform tasks that otherwise would require human intelligence to be executed successfully" (Taddeo and Floridi 2018). But, alas, this goal may prove elusive.

What is missing in machine intelligence? Item seven on our list of intelligence traits. ML lacks knowledge produced by sound judgment. Data classification and calculation alone do not constitute the level of judgment required for actual knowledge, let alone moral resolve.

AI in the form of DNNs (deep neural networks), for example, relies on pattern-recognition technology. And reliance on pattern recognition to classify inputs sets the limit of what DNNs can accomplish. Without the capacity for judgment, DNNs can be easily fooled.

Douglas Heaven, writing in *Nature*, points out that a change in just a few pixels can change a DNN's perception from seeing a lion to seeing a library. It is easy "to make DNNs see things that were not there, such as a penguin in a pattern of wavy lines" (Heaven 2019, p. 164). No number of rules can overcome AI's lack of judgment. "Even if rules can be embedded into DNNs, they are still only as good as the data they learn from" (Heaven 2019, p. 164). Data without judgment means that AI has not yet reached stage seven.

If today's human intelligence provides the model for future AI, we are not yet close. "Robots that can develop humanlike intelligence are far from becoming a reality... [AI] still belongs in the realm of science fiction", is the observation of Diana Kwon, writing for *Scientific American* (Kwon 2018, p. 31). After six to seven decades of attempting to construct a machine with intelligence, Noreen Herzfeld concludes, the accomplishment rate is zero. "We are unlikely to have intelligent computers that think in ways we humans think, ways as versatile as the human brain or even better, for many, many years, if ever" (Herzfeld 2018, p. 3, The Enchantment of Artificial Intelligence).

So, we must ask about the H+ vision of superintelligence. Is it possible for moderately intelligent *Homo sapiens* to procreate superintelligent children? The answer depends on your philosophical assumptions. Scholastic theologians thought that the creator would necessarily be more complex and more intelligent than what gets created. "No effect exceeds its cause," said Thomas Aquinas (Aquinas 1485, II-II, 32, 4, obj. 1). This implies that God is more complex and more intelligent than us creatures. Might this classic theological principle of causation apply to today's human AI progenitors?

What should we conclude here? If the criterion by which we measure AI or machine intelligence is stage-seven judgment, then genuine intelligence would have to be embodied intelligence. I am not likely to invite my Dell computer to determine what I should buy my spouse for Christmas or to formulate public policy.

#### *5.3. Wet versus Dry Intelligence*

AI techies and H+ visionaries have so overemphasized intelligence and autopotency that relationality has faded into the background. Therefore, the public theologian must remind us how relationality remains important on two fronts: (1) the relationship of intelligence to the body, and (2) the relationship of the person to other persons. Disembodied souls of the Cartesian type are out of fashion with Jewish and Christian theologians of our era. Rather, authentic humanity as well as eschatological humanity are now thought of holistically, body included. Even in the resurrection, according to St. Paul (1 Corinthians 15:42–44), the eschatological human person lives in a spiritualized body.

In addition, every intelligence we have known to date has been wet. To be wet is to be embodied. Robotic AI and the cybernetic immortality envisioned by H+ are dry, disembodied. Is this a problem theologically? (Herzfeld 2002, Cybernetic Immortality versus Christian Resurrection; Tirosh-Samuelson 2018). Yes, indeed, according to Tracy Trothen and Calvin Mercer: "Jewish and Christian theologians, who affirm the importance of embodiment, are concerned about what they perceive (sometimes rightly) to be transhumanism's denigration of the body biological, therefore making some transhumanist projects like mind uploading theologically problematic" (Mercer and Trothen 2021, pp. 165–66).

#### *5.4. Technologically Advanced Intelligence Would Still Be Morally Ambiguous*

We have seen in history that technological advances lead to both plows and swords, firecrackers and guns, medicine and poison, communication and miscommunication. Does this apply to envisioned superintelligence as well? Yes, indeed. Intelligence has no built-in moral compass, let alone commitment. Like a teeter-totter, greater intelligence could tip either way: for good or evil.

Hybrid geneticist and theologian Arvin Gouw raises this challenge. He places on H+ advocates the burden of demonstrating that technological advance will have any positive influence on human moral advance.

Technologies are neutral tools by default; thus, the assumption that technology will make humanity better is a questionable hypothesis given the fact that, over the years, technologies have given birth to atomic bombs and biological weapons precisely because human nature is not neutral, unlike technology (Gouw 2018, p. 230).

ML, AI, IA, just like other technologies in our past, are morally ambiguous. They can edify. They can destroy. This applies to the concept of intelligence as well. Intelligence all by itself feels no compassion, no love, no responsibility for the welfare of either humanity or the planet.9

Methodist theologian Alan Weissenbacher cautions us that technological advance could backfire. "Instead of salvation...technological advances represent a new set of benefits as well as challenges to overcome, particularly the tendency of human technical creations to reflect the sins of their creators even under the best of intentions" (Weissenbacher 2018, p. 69).

Perhaps Carmen Fowler LaBerge provides the most fitting recognition of moral ambiguity. "From a Christian worldview, technology is not inherently good nor evil. Technology is morally benign but we are not. Human beings who develop and use technology are moral agents who stand responsible before God who defines the boundaries of good and evil. So, part of what Christians bring to the transhumanist conversation is the question of should" (LaBerge 2019, p. 774).

Might we sidestep this ambiguity by pressing technology into the service of educating the person or community who is already committed to a life of virtue? After all, making practical decisions in pursuit of the common good will require knowledge and informed judgment. A bioethicist at Santa Clara University, Brian Patrick Green, invites AI technologies into the service of virtue education. Virtual reality, on his account, would contribute to virtue itself. AI as a pedagogical tool could accelerate educational programs, "raising up future generations to be prepared for the difficult situations they will face, by experiencing, through a VR education personalized by AI, vastly more moral situations and their best solutions, than contemporary people could hope to experience, even with immense effort" (Green 2018, p. 227). What Green has done here is turn AI into a means toward virtue as the end.

#### *5.5. Confusing the Penultimate with the Ultimate*

Let us turn now to distinguishing means from ends, or what is penultimate from what is ultimate.

The snarling nemesis of H+, Francis Fukuyama, complains that H+ visionaries confuse the penultimate with the ultimate. He asks rhetorically: do transhumanists "really comprehend ultimate human goods?" (Fukuyama 2004)

No, adds Adam Willows at Notre Dame. What do the transhumanists value? "Health, wellbeing, longevity (even immortality), mental activity, reliable memory and social benefits such as increased equality and liberty—these are all important things offered by the transhumanist project," Willows observes. "All of them are valued by theologians and bioconservatives. However, none of these goods are ultimate goods" (Willows 2017, p. 179).

In the neo-orthodox tradition of Reinhold Niebuhr, Paul Tillich, and Langdon Gilkey, substituting the penultimate for the ultimate risks inviting the demonic. We will take this in two steps. First, the technological reason that imbues H+ cannot on its own apprehend what is ultimate. "Technical reason," observes Tillich, "provides means for ends, but offers no guidance in the determination of ends" (Tillich 1989, 2:168).

Second, when a person or a culture confuses means with ends or penultimate values with what is ultimate, beware! The demonic is lurking in ambush. Gilkey sounds the alarm. "Perhaps the unique insight of a Christian interpretation of the human predicament is, first, that only God is God, and, second, as a consequence, all else even the most creative aspects of our human existence, are not absolutely good, good in themselves, but possess the possibility of the demonic if they are made self-sufficient and central" (Gilkey 1980, p. 34).

This is what Willows picks up. "Life and power by these [H+ technological] means is not desirable because it cultivates the vice of pride and causes us to forget that our good is to be found in God, not our own endeavors" (Willows 2017, p. 179).

Health and wellbeing and enhanced capacity to reason are all good things. Every theologian and moralist would agree. So, the question raised by H+ and its technologies is this: what is ultimate? When it comes to matters of faith or virtue or holiness, it is love that matters most. Love as a moral end seems to get lost in the transhumanist fog that confuses the penultimate with the ultimate.

#### **6. What Is Our Spiritual End? Intelligence or Love?**

It is love, not enhanced intelligence, that is the spiritual end for the Christian.

To demonstrate, let us turn to the interpersonal dimension of the human reality. To be a human person is to be a person-in-relationship with other persons. The very relationality of relationship includes within it a moral demand to love. Realization of our being a person-in-relationship produces an inescapable ethical demand, according to Danish philosopher Knud Løgstrup (1905–1981). This ethical demand belongs to our very ontology as human beings.10

To be is to be a person-in-relationship. Løgstrup observes that this relationship entails the demand that we serve the wellbeing and even the flourishing of the other party with whom we share a relationship. When we wake up to a consciousness of our own being-in-the-world, we find that we are not individuals first who then add relationships. Instead, we find that whatever individuality and responsibility we have derive from a prior nexus of concrete relationships. We are interdependent. This interdependence, contends Løgstrup, entails a silent yet potent commandment. What is that commandment? Love your neighbor! Our responsibility is inescapable.

By our very attitude to one another we help to shape one another's world. By our attitude to the other person we help to determine the scope and hue of his or her world, we make it large or small, bright or drab, rich or dull, threatening or secure (Løgstrup 1997, p. 18).

Before this becomes a commandment delivered by God to Moses on Mount Sinai, neighbor love has already belonged inherently to our human condition, even if the existence of the commandment acknowledges that sinful humans sometimes fail to shoulder their moral responsibility.

Løgstrup, following Martin Luther before him, believes each of us can serve as "daily bread" for those around us. Our impact on another person may be a very small matter, involving only a passing mood, a dampening or quickening of spirit, a deepening or removal of some dislike. But it may also be a matter of tremendous scope, such as can determine if the life of the other flourishes or not (Løgstrup 1997, pp. 15–16).

Jesus' double commandment is to love God and neighbor. "Love for neighbor is the concrete way in which we love God," observes Karl Rahner (Rahner 1978, p. 447). Could any technology—whether ML, AI, IA, or genetic engineering—enhance the human capacity for loving? For virtuous living? For sanctification or deification?

#### *6.1. Genetically Engineered Spiritual Enhancement?*

The possibility of moral bioenhancement is widely discussed among today's bioethicists. "We argue, moral bioenhancement should be sought and applied," say Ingmar Persson and Julian Savulescu (Persson and Savulescu 2013, p. 124). But, Veljko Dubljević and Eric Racine fear that "moral enhancement is not feasible in the near future as it rests on the use of neurointerventions, which have no moral enhancement effects or, worse, negative effects" (Dubljevic and Racine 2017, p. 338). We will not wait for this debate to come to a resolution before proceeding.

We should thank neuroscientists who search for means of motivating behavior. We have learned that pharmaceuticals can influence moral dispositions and spiritual receptivity. Yet, the matter of following a lifelong path of virtuous behavior or service to God dare not avoid one central question: what is the role of human free will, sound judgment, and moral resolve?

Because spiritual or moral enhancement requires the willful participation of an embodied self, genetic or other technological enhancements will most likely fall short. Asking AI to guide CRISPR gene editing into making us or our babies more intelligent simply will not lead to enhanced virtue, holiness, sanctification, *theosis*, or deification. Really? Let us look into this.

Mark Walker's Genetic Virtue Project assumes that technological alteration can contribute to spiritual enhancement. Genetic technology becomes equipment in soul building. Walker is following in the tradition of Irenaeus in which the process of *theosis* or deification conforms the virtuous person to the "likeness of God" (Genesis 1:26–29).

Soul building can benefit from technologically enhancing the biological superstructure of our humanity. In particular, genetic engineering can enhance human virtue. The biological basis of our moral natures can be improved using genetic technologies, including (possibly) somatic and germline engineering (Walker 2018, p. 251).

Not so fast! Bioethicists seem to operate with an anthropology that precludes any technology, including genetic engineering, from doing our moral work for us. "It is fruitless to attempt to genetically engineer virtuous living," trumpets virtue ethicist Lisa Fullam. Why? "Traits given at birth are not the same thing as a virtuous character that can be acquired only by self-discipline" (Fullam 2018, p. 319). In other words, a virtuous character cannot be pre-programmed. It can be gained only through willful self-discipline over time. Virtue could be attained through self-discipline regardless of one's genome.

What about sanctification, *theosis,* and deification? Not likely, according to Ukrainian Orthodox biologist Gayle Woloschak. Even so, she attempts to make as balanced an assessment of the technology as possible. Even if genetic technology provides us with a moral jump, so to speak, two contingencies make the outcome unpredictable: our human will and God's action. "Our ability to find genes associated with virtuous behavior is very limited...Deification, which is a gift from God freely chosen by the individual and God working together in synergy, is open to every human person" (Woloschak 2018, p. 306).

The problem with technological enhancements of any sort, observes Ronald Cole-Turner, is that they aim at enhancing the self. This is a problem. Why? Because genuinely Christian virtue is aimed at loving others even at the expense of oneself. In principle, this would preclude *theosis* or deification. "Human enhancement technologies tend to feed off the desire to expand the self," observes Cole-Turner; "while *theosis* is grounded in the idea that true divinization means we become like God in God's own kenosis of self-giving love." With this in mind, Cole-Turner can conclude that "the use of human enhancement technology is largely a matter of indifference" (Cole-Turner 2018, p. 330).

Notice how Cole-Turner introduces kenosis, self-emptying. In Hellenistic virtue, the self empties itself on behalf of an ideal such as truth, beauty, or justice. This produces a person of integrity—that is, a personality integrated around the ideal.

In Christian virtue, the self empties itself in loving God and in loving the neighbor. The resulting integrity becomes the virtuous life. Can such personal integrity be enhanced by enhanced intelligence? Not likely. Level of intelligence becomes a matter of indifference.

#### *6.2. Again: What Is Love? The Common Good*

Integrity gained through self-emptying love leads to a vision of the common good. When society loves God by loving the neighbor, it strives to serve the common good. Universal health care stands out. Could ML, AI, and perhaps IA enhance health care? They already have, and they promise more. Moira McQueen lifts up the promise.

Artificial intelligence has vast potential, and its responsible implementation is up to us. One way to do that would be to ask and implement the wise principles of Catholic Social Teachings: do these ways of developing health care respect the individual dignity of the patient and patient carers? As systems, do they enhance the common good and benefit human flourishing? (McQueen 2018, p. 4.)

This promise requires the human will to press ML, AI, and IA into the service of the common good. If superintelligence, either in robots or persons, enhances the common good, Christians should do cartwheels in applause.

#### *6.3. Again: What Is Love? Compassion Is Essential*

Compassion is indispensable for motivating us to neighbor love and the common good. Compassion is the capacity to feel the passion or pain of the beloved one. Our inherited term for compassion, *compassio*, connoted in medieval times an emotion, the feeling of sorrow for the misfortune of others. Compassion includes mercy, love arising out of emotion rather than reason. "The horizontal charity was understood in terms of a compassionate attitude, which in some way imitated God's mercy" (Knuuttila 2019, p. 266).

Compassionate love could require kenosis, self-sacrifice. Mathematical cosmologist George Ellis together with philosophical theologian Nancey Murphy describe spirituality in terms of kenotic love. As agape love, kenotic love is willing to sacrifice on behalf of the welfare of the neighbor.

This kenotic ethic—an ethic of self-emptying for the sake of the other—is in turn explained and justified by a correlative theology: the kenotic way of life is objectively the right way of life for all of humankind because it reflects the moral character of God. (Murphy and Ellis 1996, p. 17.)

Would enhanced intelligence enhance our compassion? No, not likely. At least according to Roman Catholic theologian Ilia Delio.

Simply put, technology cannot fulfill our deepest capacity for love. From a Christian perspective, the crucified Christ stands as symbol of the world's openness to its completion in God. God suffers in and with creation so that we do not suffer alone. Suffering is a door through which God can enter and love us in our human weakness, misery and loneliness. As we suffer loss, so too God experiences our loss, remaining ever faithful in love. This compassionate, loving presence of God is our hope that suffering and death are not final but are a breakthrough into the fullness of life up ahead. (Delio 2020)

Despite her demurral here, Delio greets with glee the transhumanist promise of a future superintelligence. But she would prefer putting this in Teilhardian terms. Pierre Teilhard de Chardin (1881–1955) was intrigued by computer technology and its potential to link humankind on a new level of a global mind, she tells us. For this reason, Teilhard can be viewed as a forerunner of transhumanism (de Chardin 1959).

Nevertheless, Delio avers, Teilhard's theological vision is not about enhancement. Rather, it is about transformation. Delio points out that Teilhard recognized how suffering and death are invaluable to the emergence of unitive love. This unitive love is exemplified in the death and resurrection of Jesus Christ. Teilhard's vision helps us realize that suffering in nature may appear to be erratic and absurd. Nevertheless, in light of God's kenotic love, suffering is oriented toward freedom and the fullness of love. (Delio 2020)

#### **7. Conclusions**

In sum, Christian spirituality places all its marbles in a single bag labeled "love". Compassionate or even self-sacrificial love could appear among the smart and the not so smart among us. Superintelligence could not, all by itself, generate superlove. In *Laudato Si'*, Pope Francis waxes eloquent.

Love, overflowing with small gestures of mutual care, is also civic and political, and it makes itself felt in every action that seeks to build a better world. Love for society and commitment to the common good are outstanding expressions of a charity which affects not only relationships between individuals but also macrorelationships, social, economic and political ones. (Francis 2015)

Will technological progress toward superintelligence lead to spiritual enhancement? Not automatically. What must be added to intelligence at any level is the willful decision to act morally, show compassion, pursue holiness, and live the life of virtue.

For the Christian, the love of the heart takes precedence over the genius of the mind. This is the case even if the genius of the mind is to be treasured when we love God with heart, mind, and soul (Matthew 22:37).

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Data drawn from public sources including TedsTimelyTake.com and https://www.patheos.com/blogs/publictheology (accessed on 24 April 2022).

**Acknowledgments:** Thanks to Tracy Trothen and Calvin Mercer for the invitation to write.

**Conflicts of Interest:** The author declares no conflict of interest.

MDPI St. Alban-Anlage 66 4052 Basel Switzerland Tel. +41 61 683 77 34 Fax +41 61 302 89 18 www.mdpi.com

*Religions* Editorial Office E-mail: religions@mdpi.com www.mdpi.com/journal/religions


ISBN 978-3-0365-5718-2