Article

Transhumanism, Human Moral Enhancement, and Virtues

by Vojko Strahovnik * and Mateja Centa Strahovnik
Faculty of Theology, University of Ljubljana, Poljanska 4, SI 1000 Ljubljana, Slovenia
* Author to whom correspondence should be addressed.
Religions 2024, 15(11), 1345; https://doi.org/10.3390/rel15111345
Submission received: 30 August 2024 / Revised: 15 October 2024 / Accepted: 28 October 2024 / Published: 4 November 2024
(This article belongs to the Section Religions and Humanities/Philosophies)

Abstract
Moral transhumanism is a project that aims to enhance human beings by using modern technology in a morally beneficial way. This article discusses the problems and ethical dilemmas raised by transhumanism, notably whether (moral) virtues may be enhanced in the proposed manner. To achieve this end, moral transhumanism primarily proposes the use of genetic technology and pharmacology. Regarding moral virtues, the article suggests that sensitivity to reasons is a fundamental feature of human moral cognition. Enhancing moral virtues would diminish this sensitivity or assign it a superfluous role while jeopardizing the person’s autonomy and self-determination.

1. Introduction

This paper addresses transhumanism as a project for the enhancement or augmentation of human beings. It focuses explicitly on moral transhumanism, a project that seeks to morally enhance humans through the use of modern technologies, a crucial tenet that raises several ethical concerns. This paper discusses select ethical issues and challenges posed by moral transhumanism, especially the question of whether moral virtues can or should be enhanced. Section 2 briefly outlines the main ethical challenges of transhumanism and introduces moral transhumanism. It then examines transhumanist proposals that aim to enhance moral virtues and thus attain moral enhancement. Our primary focus is on the so-called genetically enhanced virtue or genetic virtue program. Section 3 presents several arguments against such an enhancement project’s feasibility, plausibility, and moral sensibility. This paper indicates that one of the fundamental characteristics of human moral thought is sensitivity to reasons. We understand sensitivity to (practical and epistemic) reasons as follows: when an individual acts or forms a belief, they do so in response to reasons they see as supporting that action or the adoption of that belief, and they are able to put these forward as reasons when prompted. A reason is here understood as a “reason to” and not merely a “reason why”, which is why an individual experiences the action or the adoption of a belief as related to their agency and their self, and not as the result of some prior state–causal process followed by a passive experience of acting or believing. Enhancing moral virtues would diminish this sensitivity, thereby compromising the autonomy and freedom of the individual. The concluding Section 4 illuminates some of the issues raised in light of a broader theological perspective on human beings and their nature.

2. Ethical Challenges of Moral Transhumanism and Enhancement of Moral Virtues

Transhumanism is an intellectual movement that advocates for the use of science and technology to enhance the physical, cognitive, and emotional capacities of humans, thereby transcending their inherent biological limitations. It proposes the use of technological means related to fields such as artificial intelligence, biotechnology, nanotechnology, and neuroscience to enable the radical augmentation of human capabilities in order to achieve greater well-being (Hopkins 2008). It raises a number of general ethical issues that can be categorized into two main sets. The first set concerns the moral status of the technologies advocated, in particular, artificial intelligence (AI) and the systems that are integrated in it, along with the implications of their operations. This issue is vital because transhumanism advocates for the fusion or tight intertwining of these technologies with human beings. Also belonging to this set of concerns are the risks and possible adverse effects of misapplications of enhancement technologies.
To illustrate the possible scope of such concerns, let us look at AI and AI systems. One concern pertains to the question of whether such systems can be accorded moral status and how they can be embedded into a web of moral obligations. In a way, this question relates to the possible cyborgization of human beings by way of the integration of AI systems and other technologies inside the human body (Schneider 2019), which might well be possible in the not-so-distant future. But we do not even need to consider the future; we can turn our attention to current state-of-the-art technology and its use. Take, for example, AI systems that decide on issues such as assessing risks in insurance, assessing recidivism risks (e.g., the COMPAS system), analyzing patient data, assisting in diagnostic decisions and recommending treatment, or approving bank loans. It is reasonable to ask what level of fairness, transparency, explainability, absence of bias, etc., we can expect from such systems in these processes. These questions relate to the relational and social dimensions of such systems that could feature in human augmentation. Bostrom and Yudkowsky (2014, p. 317) underscore this point by noting that, while the design of a robotic arm must of course take into account whether it could easily injure a human being, this does not raise any fundamentally new ethical issues that would not also apply to all other products, i.e., issues of user safety. However, the AI algorithms that will augment or eventually replace humans as decision-makers and actors within the social structure raise utterly novel ethical challenges.
The second set involves questions about the limits (possibilities, reasonableness, acceptability) of the transhumanist project of human enhancement via genetic technology, nanotechnology, AI, and other technologies. One question here concerns not only the technological limitations of enhancement but also the limits of the sensible use of the term human being. Transhumanism thus often speaks about the trans-human, super-human, or post-human as a being who, by virtue of technological augmentation, possesses at least one of the trans-, super-, or post-human characteristics (Malanowski and Baima 2022, p. 658). Central to this discourse is the distinction between therapy and enhancement. Therapy refers to interventions that restore an individual to a level of functioning considered normal or typical for humans. A case of such an intervention in the area of human morality would be a technological intervention to raise one’s level of empathy to a normal level, e.g., enabling psychopaths to make normal moral judgments, have moral motivations, and perform morally acceptable actions. Enhancement, by contrast, is represented by interventions that raise this level of functioning beyond normal human levels by a factor of a few, a few dozen, or even more. Such a radical enhancement of human mental abilities is predicted by Ray Kurzweil in the context of transhumanism (Agar 2010b). In the field of morality, an example would be the elevation of empathy to an extraordinary level that would have a vigorous and robust impact on human motivation and action. This strand thus raises ethical questions about the permissibility of altering or transcending human nature, including concerns related to autonomy, the limitation of freedom, and the potential loss of authenticity.
In the remainder of this section, in relation to moral transhumanism, we will mostly focus on concerns belonging to this latter set of ethical issues.
We now turn to the topic of moral virtues in the context of the transhumanist project. This is a compelling research avenue that has not yet been explored extensively in the context of transhumanism, since the focus has so far been on human nature as a whole or on particular aspects of human life, with human cognitive capacities and longevity being the main foci. One must also take into account that contemporary virtue theory is not simply a resurgence of the ancient tradition of virtue ethics and virtue epistemology but a holistic approach that also incorporates empirical insights from cognitive science. Any exploration of transhumanism and the question of the augmentation of human capacities requires reflection within such a broad framework (Strahovnik 2019).
Virtues are praiseworthy or valuable character traits in the form of relatively stable and enduring dispositions to act, perceive, expect, evaluate, feel, choose, respond, etc., in ways that help a person achieve valuable goals (e.g., happiness, a good life, true beliefs, understanding). Virtues have at least four distinct aspects or building blocks, namely, mental, emotional, motivational, and action-oriented. Looking at examples of particular virtues such as courage, truthfulness, justice, gratitude, and humility, we can thus see that they involve an individual’s core beliefs about the value of these virtues, emotional responses or attitudes in relation to virtues, motivation to act in accordance with virtues, and actual action in accordance with virtues (Hursthouse and Pettigrove 2018). One prominent source of virtue theory is Aristotle (2009) and his distinction between epistemic or intellectual virtues and moral virtues: “Virtue too is distinguished into kinds in accordance with this difference; for we say that some of the virtues are intellectual and others moral, philosophic wisdom and understanding and practical wisdom being intellectual, liberality and temperance moral. For in speaking about a man’s character we do not say that he is wise or has understanding but that he is good-tempered or temperate; yet we praise the wise man also with respect to his state of mind; and of states of mind we call those which merit praise virtues.” (NE I,13, 1103b) According to Aristotle, intellectual virtues are partly innate and partly acquired through learning, while moral virtues are acquired primarily through habituation or the cultivation of these virtues within educational or practical contexts. Both are formed on the basis of human nature, but neither is determined by that nature (NE II,1, 1103b–1104a). This is also the point on which transhumanism relies, since it means that virtues are open to enhancement.
Since virtues are an important building block of the human personality and the basis of human action, they have not been bypassed by moral transhumanism in its project to upgrade or enhance the human person. Transhumanism thus aims, among other things, to enhance virtues as part of the goal of improving human beings in terms of cognitive abilities and morality (Persson and Savulescu 2008). Moral transhumanism aims to purposefully intervene to address the basic problems of the human condition and, furthermore, sees these problems as not completely determined by circumstances but as a matter of biological, psychological, spiritual, and other constraints that are remediable via enhancement and modification through the use of scientific and technological means (Hopkins 2008). Persson and Savulescu even suggest that such moral enhancement is necessary and morally required given our current situation and the moral challenges that humanity is facing (Persson and Savulescu 2012, pp. 9, 133). Within transhumanist thought, we encounter several diverse proposals for the moral enhancement of human beings (moral transhumanism) that aim to make human beings more moral. These proposals differ in terms of the means proposed for upgrading and the target phenomena of this upgrading (Daniels 2009, p. 40; Caplan 2009, p. 206; Savulescu 2009, pp. 213–14; Douglas 2008; Willows 2017, p. 178). The upgrading of human beings in terms of moral virtues is also part of the transhumanist project (Walker 2009). While intellectual capacities, and thus also intellectual virtues, are mainly associated with enhancement or augmentation by means of AI and similar technologies, the most common suggestions in relation to the enhancement of moral virtues include genetic engineering and advanced pharmacology. However, the particular technological means of improving moral virtues are not central to the issues discussed here.
One consideration that may be relevant is that moral virtue or virtuousness would then be attained through these means of human augmentation, not through education and practice. But this alone is not necessarily a reason to oppose it, especially if such means are not otherwise morally objectionable and if the ultimate goal is indeed the good of both the individual and society as a whole. In this sense, Willows even argues that theology should, in principle, support moral transhumanism, and, in particular, that “the initial theological response to moral transhumanism should be one of approval. In fact, given the commitment to the necessary desirability of virtue and goodness, I think that the pursuit of moral transhumanism must be for the theological virtue ethicist a moral imperative” (Willows 2017, p. 181; cf. Malanowski and Baima (2022) for a similar thesis about the relation between moral transhumanism and ancient virtue theory).
As a representative example of the transhumanist defense of human moral enhancement, we will consider Mark Walker (2009) and his proposal for the enhancement of moral virtues (as a complement to and extension of the progress that education and social change can bring about). Walker situates his proposal within the more general genetic virtue program, which is a proposal to influence our morality (the ethical nature of human beings) through the genetic basis of our actions and to do so in a way that complements and supports our other efforts to act (more) ethically. Since genes or the genetic basis influence human behavior, changing the genes of individuals can translate into influencing behavior. Such “engineered genetic virtue” can be achieved, for example, through pre-implantation genetic diagnosis for embryo selection (a technique already available today, utilizing the selection of embryos on the basis of the best possible genetic background) or through genetically modifying zygotes (a technique still under development) (Walker 2009, pp. 26–27).
This proposal is founded on several key assumptions. The first is that human nature is morally flawed and can be improved. The second assumption is that we can speak of virtues as relatively stable dispositions at all and, consequently, that it is the action or behavioral aspect (as opposed to the mental, emotional, or motivational aspect) that is crucial for virtues and morality. These are joined by an additional assumption that at least some moral virtues (or vices) have a hereditary or genetic component and that we can identify, control, and augment the genes responsible for this component (Walker 2009, pp. 28–31).
Having argued that these assumptions are all met, Walker proposes his main argument for the moral acceptability of a genetic enhancement program. Given that biotechnology and genetic technology are deemed morally acceptable for other forms of human enhancement, and that non-genetic methods—such as education and socialization—are widely accepted as means of promoting moral improvement, moral progress, and the common good, there is no fundamental reason to reject the use of a genetic virtue program for similar purposes. Any reason we might, in principle, raise as an objection against such enhancement would equally count as a reason against moral enhancement via education and socialization (Walker 2009, p. 34). Walker proposes the virtues of truthfulness, justice, and care as the prime candidates for enhancement, since empirical research on humans as well as on more highly evolved non-human animals suggests that all three of these virtues have a strong genetic component and are, therefore, the easiest to enhance with such means.
Bostrom and Sandberg (2009, p. 396) identify an even broader set of characteristics of individuals that are supposed to contribute to the common good and are thus also within the legitimate scope of the transhumanist project of improvement. These are altruism, conscientiousness and honesty, modesty and humility, originality, resourcefulness and independent thinking, civic courage, knowledge and good judgment in public affairs, empathy and compassion, the cultivation of emotionality and caring, apt admiration and respect, self-control and the ability to control violent impulses, a strong sense of justice, a lack of racial prejudice, a lack of propensity toward drug abuse, the enjoyment of the success and flourishing of others, an aptitude for usefulness, forms of economic productivity, health, resilience, and longevity. For each of these characteristics to be a candidate for enhancement, it must pass the evolutionary optimization test: one must be able to explain why the particular virtue or feature remained underdeveloped in the evolutionary process and ensure that there are no obstacles to its being an advantage in the present. If a particular feature passes this test, then this “would be a case where we have reason to think that the wisdom of nature has not achieved what would be best for society and that we could feasibly do better” (Bostrom and Sandberg 2009, p. 397). However, their proposed program is not limited to genetic technology as a means of moral enhancement; rather, it encompasses any technology that could morally improve human beings or influence their virtues.

3. Challenges for the Enhancement of Moral Virtues

Before proceeding with the challenges for moral transhumanist proposals that aim at enhancing moral virtues, let us return to Walker’s proposal of a genetic virtue program. Walker himself acknowledges two central concerns with such a project. The first has to do with the goal of the moral enhancement of human beings, which amounts to making the world better. One might ask how acceptable the goal of bettering the world itself is, and how such a goal is to be framed in the first place. Is there a unique understanding of what counts as a “better” world? Upon reflection, the same point applies to education itself since, there too, the aim is to build a morally mature person who will contribute to a better world, even if the conception of a good and a better world is ambiguous. The second concern is whether such a genetic virtue program actually leads to virtue or merely to the imitation of virtue, given that the focus is primarily on the agentive aspects of moral virtue. Walker replies that this is not just a feature of the genetic virtue program, but that a similar understanding of virtue is also advocated by the consequentialist or utilitarian view of morality, where the primary focus is on the disposition to act towards the best possible consequences and then on acting this way. All that is morally relevant or that morally counts is the value achieved through the consequences of our actions. Moreover, scientific and technological advances in genetic technology might be able to bypass the restriction to agentive aspects, potentially enabling the enhancement of virtues in a broader sense that includes the modification or enhancement of beliefs, intentions, motivations, and emotions (Walker 2009, pp. 36–38).
As part of the examination of the enhancement of moral virtues in the context of the transhumanist project, we first draw a parallel between human capacities and the development of the capacities of AI and other technologies for human enhancement, including genetic engineering and pharmacology. In particular, we highlight the aspects of human judgment and action as based on reasons, and the associated aspect of individual autonomy as the core of moral behavior. From the point of view of moral and intellectual virtues, the fundamental question is as follows: what are the characteristic human forms of thought and action? One central characteristic is that human cognition is highly holistic and abductive (Fodor 2000), which means that humans can take into account an extensive range of background information when forming their beliefs (which are then further utilized in making decisions, creating plans, solving problems, constructing theories, sharing knowledge, assigning credibility to others, etc.), forming (moral) judgments, or making decisions to act. Among this information, they identify what is particularly relevant and relate the relevant considerations to each other (Horgan et al. 2018; Strahovnik 2022). The human cognitive system manages, in real time, to accommodate all the pertinent information it possesses in such a way as to aptly form judgments and intentions, including all the complexity that goes on outside explicit conscious awareness but nonetheless figures in the process. One way to characterize this is that humans as (moral) agents exercise a competence in appreciating rational evidence or considerations, together with an associated capacity to be gripped by pertinent considerations.
In the context of traditional philosophical thought, this can be described as sensitivity and responsiveness to reasons. These ideas are closely associated with Wilfrid Sellars’ concept of the “space of reasons” as something distinct from the “space of causes” consisting of a chain of causal relations between events, with no consideration of reasons as reasons. Sellars used this aspect in his characterization of knowledge, i.e., “in characterizing an episode or a state as that of knowing, we are not giving an empirical description of that episode or state; we are placing it in the logical space of reasons, of justifying and being able to justify what one says” (Sellars 1956, sct. 36, p. 169). A similar point can be made for the formation of (moral) judgments and acting; one forms a judgment and genuinely acts (as opposed to, e.g., reflexive movement) if one is responsive to reasons in this way, i.e., being capable of appreciating reasons, being in the grip of the authority of reasons, and recognizing and experiencing oneself as judging and acting in light of these reasons. The “space of reasons” thus applies to both epistemic reasons (reasons on the basis of which we form our beliefs and judgments), as well as to practical reasons (reasons on the basis of which we act or form our decisions), and consequently also to moral reasons or considerations. This places the domain of belief formation, judgment, and action within the space of reasons, since both actions and beliefs can be justified with reference to reasons. To be an agent in the space of reasons (to recognize a reason, to act on it, and to be able to state it as a crucial part of justification) is thus a distinctively human form of thought and action.
In the context of moral virtuousness, virtues are thus dispositions or character traits that guide an individual toward morally good actions. Suppose genuine judgments and actions are those situated in the space of reasons. In that case, this means that to be considered virtuous, a person must act not (merely) out of instinct, habit, reflex, or external coercion, but with sensitivity and responsiveness to the reasons that justify the action. Being morally virtuous thus means being attuned to moral reasons and allowing these reasons to guide one’s actions. The cultivation of virtue, then, can be seen as an agent’s development of sensitivity to moral reasons and of the ability to justify their actions in these terms. Virtuousness indicates not only external behavior but also internal engagement with the moral space of reasons.
The critical consideration in the context of the transhumanist enhancement of moral virtues is whether such augmented moral virtues would still maintain sensitivity to moral reasons. To reiterate, the existing proposals by moral transhumanists (e.g., Walker 2009; Persson and Savulescu 2012) predominantly focus on bioethical means of moral enhancement and on some particular dispositions and virtues (e.g., empathy, moral motivation, a diminished level of self-regard, altruism, a reduction in bias towards the near future, etc.). One consideration pertains to the above-outlined sensitivity to moral reasons and our responsiveness to them, including those reasons we have not (fully) pursued in our moral deliberation and action. Sensitivity to reasons involves recognizing reasons as morally relevant features of the situation or object of our judgment, judging how these reasons affect the judgment and what weight they carry in the matter, and judging what would be morally right given everything in the situation, or what would be the most appropriate moral status for the object of our judgment. Merely enhancing dispositions to act in accordance with particular virtues would assign such recognition of moral importance a secondary role, since the enhanced-virtue program would now dictate the primary direction of attention or balancing within such sensitivity. Nicholas Agar points out a similar problematic aspect of such impoverished sensitivity to reasons: “Our sensitivity to moral reasons that we may intellectually reject is a valuable part of being. […] Someone who has been subjected to moral enhancement is likely to have a reduced sensitivity to moral reasons rejected by his or her enhancer. Moral enhancement therefore has the potential to undermine morally diverse liberal democracies whose success depends on insight into the different and diverse moral motivations of fellow citizens” (Agar 2010a, p. 75).
The main point here is that the moral enhancement of particular virtues or dispositions would narrow the scope of reasons that moral agents would regard as morally relevant. The issue is therefore not just about reasons we follow in action (e.g., to respond in a way that is caring, just, and honest), but also about reasons that we recognize and can attach meaningful importance to, but then ultimately reject and do not follow in action. Such recognition must also be understood as part of overall ethical virtue (e.g., we may feel remorse on the basis of such reasons, or understand the (different) actions of others better than we would otherwise). The generic augmentation of moral virtues would lead to an impoverishment of sensitivity across such a broad space of reasons. Ultimately, a contrast can be drawn between a democratic and pluralistic society on the one hand and an anti-utopian society that is totalitarian, unitary, and controlled on the other, in the sense that no member would stand out or deviate in terms of moral reasoning. Such a model of the “(super)ethical” person is found, above all, in anti-utopias, with Aldous Huxley’s novel Brave New World providing one of the most significant illustrations of the dangers of such an erosion of moral complexity in the name of the common good. What lurks in the background is the problem or possibility of some sort of moral automatism alongside moral monoculture.
A related consideration that arises from moral transhumanism’s focus on selected specific virtues is that some dispositions, abilities, and virtues might be valued and used differently in different moral contexts. Consider empathy, which is paradigmatically highly praiseworthy in the context of close personal relationships or responding to the needs of the disadvantaged, but not necessarily advantageous in specific professional settings, like judicial work, law enforcement, or evaluation. This is not to say that empathy is valueless or completely out of place in these contexts, but that scaling up one dimension of moral sensibility or virtue via biotechnological means might not have a desirable effect across the whole range of moral contexts. Transhumanists might object to this. Persson and Savulescu (2012) go as far as stipulating that the most common moral dispositions that transhumanism chooses for enhancement are such that, “by themselves, they always issue in a morally correct treatment of the individuals to whom they are directed” (p. 108). But this is something that should be established and not merely stipulated. Virtuousness encompasses individual virtues together with some higher-order virtues or regulative meta-virtues that function as linking points and unite other virtues, both moral and intellectual, into a relatively stable and enduring whole as a point of departure for judgment and action. Loder (2008) assigns this role to integrity, while Aristotle (NE VI,8; III), famously, saw practical wisdom or phronesis as the central point of moral discernment.
Moral virtuousness must be considered as a whole consisting of beliefs, commitments, attitudes, motivations, desires, goals, and values and guided by phronesis, and thus the “conception of character in virtue ethics is holistic and inclusive of how we reason: it is a person’s character as a whole (rather than isolated character traits), that explains her actions, and this character is a more-or-less consistent, more-or-less integrated, set of motivations, including the person’s desires, beliefs about the world, and ultimate goals and values. The virtuous character that virtue ethics holds up as an ideal is one in which these motivations are organized so that they do not conflict, but support one another” (Kamtekar 2004, p. 460). But moral transhumanists have not thus far put forward such a comprehensive proposal.
Daniels (2009, p. 40) makes a similar but somewhat broader point, noting that virtues are highly complex human characteristics that are closely intertwined and interdependent, sensitive to circumstance or context, and involve specific ways of perceiving, understanding, weighing, etc. Although virtues are closely linked to the various capacities or dispositions that underlie them (e.g., compassion or sensitivity to the feelings of others, intelligence, the calibration of emotional responses) and which could be influenced by genetic technology, this does not mean that these enhanced capacities would contribute to moral virtue, since they can quickly become the building blocks of moral vice. Additionally, there is no assurance that the entire system of moral virtue would adapt to such enhancements of certain aspects or capacities. Willows (2017) points out that the technologies proposed by transhumanism for moral enhancement mainly operate on dispositions to act in accordance with particular virtues, and that it is questionable whether they work towards holistic ethical virtue. At the same time, none of the proposed technologies work toward enabling individuals to answer the question of how they should act in a particular or concrete situation, i.e., they do not improve or enhance prudence, judgment, or practical wisdom. It is one thing for a specific genetic modification or pharmacologically active substance to affect our level of compassion or concern, but whether we are able to respond morally correctly in a given situation on such a basis is another. Without practical wisdom, we cannot understand how and why to act in a given way, and it is such understanding that makes acting virtuous. At the same time, practical wisdom is something that eludes transhumanist projects for improving intellectual virtues and other cognitive prowess because it requires practice and a genuine sensitivity to particular circumstances.
Perhaps a way out of these conundrums for moral transhumanism would be to focus on technologies other than genetic engineering and pharmacology. Perhaps the suitable utilization of AI and AI-related technologies could lead to moral enhancement that would include enhanced sensitivity to reasons and an augmented capacity for apt moral judgment. Here, such attempts would most likely run up against what has come to be known as the frame problem (Fodor 1987, 2000; Dennett 1984). A more detailed investigation of this type of moral transhumanism is beyond the scope of this paper, but we can remark on the following. The frame problem surfaced in the early days of AI research and revealed important aspects of human cognition and rationality that have repeatedly been overlooked, including what we have described above as abductive character, holism, and sensitivity to background information. The frame problem pertains both to our theoretical rationality concerning matters like belief fixation, as well as to practical rationality concerning matters like planning and action. The frame problem, in short, is the problem of understanding how the human cognitive system manages to bring to the forefront, in real time, the pertinent information it possesses so as to handle these kinds of theoretical-cum-practical tasks effectively. The depth and intricacy of this problem are easy to overlook, precisely because the common sense rationality that has proved so dauntingly difficult to engineer into robots or AI systems is so easy and so natural for us humans (this, of course, does not mean that we are flawless in such tasks). This includes judgment and, in particular, moral judgment, which must operate in a way that accommodates a significant amount of background information automatically and implicitly, and not by explicitly fetching and manipulating relevant representations.
Such capabilities therefore cannot be modular, i.e., the product of informationally encapsulated faculties or processes. One problem is that there is too much that may need to be considered, and that there is no known manageable way to bring what is needed into consideration representationally. For Fodor (2000, p. 42), what is problematic is the idea that human cognition is a computation, i.e., the manipulation of explicit mental representations in accordance with rules of the kind that could constitute a computer program. Tractable computational processes require modularity—an informationally encapsulated database—whereas human capabilities like moral judgment are highly holistic, abductive, and non-modular. There is also the question of the extent to which AI-enhanced human dispositions and capabilities would retain the ability to respond to reasons and, connected with this, whether the formation of cognitive and moral virtues could still be part of such augmentation. The central question in the context of epistemic and moral virtues is whether they can be modeled by AI, including moral judgment and sensitivity to reasons. Unlike AI systems, which process vast amounts of data at high speeds through numerous operations, human cognition often relies on pre-existing patterns and the strategic use of attention and relevance, thus functioning in a fundamentally different manner.
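The computational point behind the frame problem can be made vivid with a toy sketch. The following minimal illustration is our own construction, not drawn from Fodor or Dennett: it contrasts a “holistic” updater, for which any pair of items in the belief set might be relevant to a new fact and must in principle be checked, with an informationally encapsulated module that consults only its own small, fixed database. The function names and numbers are purely illustrative.

```python
from itertools import combinations


def holistic_update(beliefs, new_fact):
    """Count the relevance checks a holistic updater faces: every pair of
    items in the (belief set + new fact) might interact, so the number of
    checks grows quadratically with the size of the belief set."""
    return sum(1 for _ in combinations(beliefs | {new_fact}, 2))


def modular_update(module_db, new_fact):
    """An informationally encapsulated module consults only its own fixed
    database, regardless of what else the agent believes; the cost is
    bounded by the module's size."""
    return len(module_db)


beliefs = set(range(1000))            # a modest belief set of 1000 items
print(holistic_update(beliefs, "fire is hot"))   # 500500 pairwise checks
print(modular_update(set(range(12)), "fire"))    # 12 checks, whatever we believe
```

Even this quadratic cost understates the problem: in principle any subset of beliefs, not just pairs, could bear on the new fact, so the holistic cost is exponential. Encapsulation buys tractability precisely by ignoring potentially relevant background information, which, as argued above, is exactly what holistic moral judgment cannot afford to do.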

4. Conclusions

In reflecting on the transhumanist project of the moral enhancement of the human person, it is necessary to highlight human plurality, freedom, and autonomy. In this regard, Žalec points out that “the reverence for God’s commandments, together with the awareness that faith cannot be reduced to any Sittlichkeit, empowers the individual to oppose established norms and views, which means that it strengthens their autonomy and subjectivity. Such individuals and such opposition are crucial for social and democratic development. Without them, democracy can only be abstract, at best, not real. An abstract democracy ossifies, ceases to develop, and its members fall more and more into a more or less disguised naked allegiance that can easily develop into fundamentalism, quietist individualism or anarchism” (Žalec 2017, pp. 257–58). The transhumanist project of augmenting moral virtues could thus undermine the democratic foundations of human society. In a theological context, Petkovšek (2018) also underscores the importance of symbolic discourse in relation to freedom and autonomy, with a particular focus on the relationship between free agency and ethics: “Freedom is not an end in itself; it is given to man as a framework, as a form, as a shape, in order to realise his humanity in it. It enables him to seek truth, to do good and to live love. Truth, goodness and love have their origin in freedom; they can only be what they are if they come from a free choice. A computer can solve complex problems much faster than human reason, but it cannot be said to have ‘learned the truth’; nor can a medicine that cures a serious illness be said to have ‘done a good job’. In the true sense of the word, only a person who freely chooses to do so knows the truth or does good. We can say that freedom is the humus necessary for the growth of man in his humanity” (Petkovšek 2018, p. 35).
The transhumanist project of enhancing moral virtues would preclude such freedom of reflection or action, because it would pre-program our responsiveness. At the same time, we cannot focus on a single dimension (like virtues), but must embrace the totality of human life, as highlighted, for example, by understandings of ethics as the art of living. Morality is only one of these aspects, because it is also essential to “understand the relationship between emotions, feeling, valuing and good life. In the context of the art of living, what is important is that the individual becomes aware of and reflects on the processes of feeling as well as valuing, and that these are embedded in morality. The beginning of this process means reflecting on one’s own emotions, which reveal themselves to one’s significant goals and in significant relationships” (Centa 2018, p. 61). Moral transhumanism projects have largely overlooked this holistic aspect of morality and the good life. The project thus threatens to impoverish the space of reasons and our sensitivity to reasons, as well as human freedom and autonomy.

Author Contributions

Both authors contributed equally to this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by University of Ljubljana (research program The Intersection of Virtue, Experience, and Digital Culture: Ethical and Theological Insights), Slovenian Research and Innovation Agency (research project J6-4626 Theology, Digital Culture and the Challenges of Human-Centered Artificial Intelligence and research program P6-0269 Religion, Ethics, Education, and Challenges of Modern Society), and John Templeton Foundation and the Ian Ramsey Centre for Science and Religion at the University of Oxford (research project Epistemic Identity and Epistemic Virtue: Human Mind and Artificial Intelligence).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Agar, Nicholas. 2010a. Enhancing genetic virtue? Politics and the Life Sciences 29: 73–75.
2. Agar, Nicholas. 2010b. Humanity’s End: Why We Should Reject Radical Enhancement. Cambridge, MA: Bradford—MIT Press.
3. Aristotle. 2009. The Nicomachean Ethics. Translated by William David Ross. New York: Oxford University Press.
4. Bostrom, Nick, and Anders Sandberg. 2009. The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement. In Human Enhancement. Edited by Julian Savulescu and Nick Bostrom. New York: Oxford University Press, pp. 375–416.
5. Bostrom, Nick, and Eliezer Yudkowsky. 2014. The Ethics of Artificial Intelligence. In Cambridge Handbook of Artificial Intelligence. Edited by Keith Frankish and William Ramsey. New York: Cambridge University Press, pp. 316–34.
6. Caplan, Arthur L. 2009. Good, Better, or Best? In Human Enhancement. Edited by Julian Savulescu and Nick Bostrom. New York: Oxford University Press, pp. 199–209.
7. Centa, Mateja. 2018. Kognitivna teorija čustev, vrednostne sodbe in moralnost (Cognitive Theory of Emotions, Value Judgments, and Morality). Bogoslovni Vestnik—Theological Quarterly 78: 53–65.
8. Daniels, Norman. 2009. Can Anyone Really Be Talking About Ethically Modifying Human Nature? In Human Enhancement. Edited by Julian Savulescu and Nick Bostrom. New York: Oxford University Press, pp. 25–42.
9. Dennett, Daniel. 1984. Cognitive Wheels: The Frame Problem of AI. In Minds, Machines and Evolution. Edited by Christopher Hookway. Cambridge: Cambridge University Press, pp. 129–50.
10. Douglas, Thomas. 2008. Moral Enhancement. Journal of Applied Philosophy 25: 228–45.
11. Fodor, Jerry. 1987. Modules, Frames, Fridgeons, Sleeping Dogs, and the Music of the Spheres. In The Robot’s Dilemma: The Frame Problem in Artificial Intelligence. Edited by Zenon W. Pylyshyn. New York: Ablex, pp. 139–49.
12. Fodor, Jerry. 2000. The Mind Doesn’t Work That Way. Cambridge, MA: MIT Press.
13. Hopkins, Patrick D. 2008. A moral vision for transhumanism. Journal of Evolution and Technology 19: 3–7.
14. Horgan, Terence, Matjaž Potrč, and Vojko Strahovnik. 2018. Core and Ancillary Epistemic Virtues. Acta Analytica 33: 295–309.
15. Hursthouse, Rosalind, and Glen Pettigrove. 2018. Virtue Ethics. In The Stanford Encyclopedia of Philosophy. Edited by Edward N. Zalta. Available online: https://plato.stanford.edu/archives/win2018/entries/ethics-virtue/ (accessed on 2 August 2024).
16. Kamtekar, Rachana. 2004. Situationism and Virtue Ethics on the Content of Our Character. Ethics 114: 458–91.
17. Loder, Reed Elizabeth. 2008. Epistemic Integrity and the Environmental Future. Environs Environmental Law and Policy Journal 32: 1–36.
18. Malanowski, Sarah, and Nicholas R. Baima. 2022. Human Nature and Aspiring the Divine: On Antiquity and Transhumanism. Journal of Medicine and Philosophy 47: 653–66.
19. Persson, Ingmar, and Julian Savulescu. 2008. The Perils of Cognitive Enhancement and the Urgent Imperative to Enhance the Moral Character of Humanity. Journal of Applied Philosophy 25: 162–77.
20. Persson, Ingmar, and Julian Savulescu. 2012. Unfit for the Future: The Need for Moral Enhancement. Oxford: Oxford University Press.
21. Petkovšek, Robert. 2018. Svoboda med žrtvovanjem in darovanjem (The Freedom Between Sacrifice and Self-giving). Bogoslovni Vestnik—Theological Quarterly 78: 33–51.
22. Savulescu, Julian. 2009. The Human Prejudice and the Moral Status of Enhanced Beings: What Do We Owe the Gods? In Human Enhancement. Edited by Julian Savulescu and Nick Bostrom. New York: Oxford University Press, pp. 211–47.
23. Schneider, Susan. 2019. Artificial You: AI and the Future of Your Mind. Princeton and Oxford: Princeton University Press.
24. Sellars, Wilfrid. 1956. Science, Perception, and Reality. London: Routledge and Kegan Paul.
25. Strahovnik, Vojko. 2019. Vrline in transhumanistična nadgradnja človeka (Virtues and Transhumanist Human Augmentation). Bogoslovni Vestnik—Theological Quarterly 79: 601–10.
26. Strahovnik, Vojko. 2022. Holism of religious beliefs as a facet of intercultural theology and a challenge for interreligious dialogue. Religions 13: 633.
27. Walker, Mark. 2009. Enhancing Genetic Virtue: A Project for Twenty-first Century Humanity? Politics and the Life Sciences 28: 27–47.
28. Willows, Adam M. 2017. Supplementing Virtue: The Case for a Limited Theological Humanism. Theology and Science 15: 177–87.
29. Žalec, Bojan. 2017. Kierkegaard in politično: Vera kot premagovanje nasilja in vir demokracije (Kierkegaard and Political: Faith as Overcoming of Violence and as an Origin of Democracy). Bogoslovni Vestnik—Theological Quarterly 77: 247–59.