**4. Alienation**

There is a dialectic involved here: even if human enhancement technologies have a positive impact when viewed through a liberal understanding of human nature, these same technologies carry within themselves other possibilities, often of quite contrary quality. It is therefore misleading to take for granted the positive possibilities that we have recorded through the legal clarification. Technology is a constitutive element of the practices and institutions of the human condition. It constitutes an area of ethical investigation and at the same time presents challenges to moral theory, because it transforms reality and demands that moral theory adapt to a changed situation, such as the one that accompanies the gradual process of human merging with the machine. I will critically discuss the conceptualization of technology as a mere instrument. My intention is to show the partiality of this way of understanding technology. In my view, this initial conceptualization needs to be complemented by a second perspective, which regards technology as a process of mediation that tends to transform what is at stake.

When one sets out to assess the transformations whose source is technology, controversies may arise about how the relationships between technology and society can be conceptualized and interpreted. The interpretation of the social and ethical implications also depends on how we characterize technology. The conceptualization of the relationship between technology and the human raises various questions about nature, human nature, human action, autonomy, freedom, and much more. It is not the intention of this contribution to explore all these dimensions. I will focus only on the conceptions of technology, even if some of the results of my argument will have an impact on the other dimensions as well.

Reflecting on the way technology has been conceptualized can help to clarify its transformative effects, of which the process of our merging with machines is certainly one of the most remarkable. A first form of conceptualization was developed within philosophical anthropology. According to this philosophical movement, technology was conceived as compensation for the biological deficits of the human being. Cassirer observed that at the basis of technical action there is a teleological relationship that presupposes a rational actor or a multitude of actors acting cooperatively [33]. This first part of the characterization is the most conservative and does not question the current ontology. To a certain extent, it matches the "introduction stage" described by Moor [1]. However, it is partial and fails to formulate an adequate representation of the phenomenon it wants to capture. There is, therefore, a second element, and that is culture. If we analyze technology from this point of view, then we have to admit that it is part of the creativity and freedom of spirit and is therefore able to change the structures with which we are familiar. It is capable of having transformative effects, such as the transformation of the ontology. To a certain extent, Cassirer's cultural dimension matches the "permeation stage" described by Moor [1]. Drawing on this double characterization, further considerations can be derived. The first is that technology can be conceived as a tool that sets us free from natural constraints and thus exonerates us, for example, from dull, dirty, and dangerous tasks.

However, technology can also be conceived as a risk of alienating humans from themselves. Under this interpretation fall the profiles of dehumanization that carry so much weight in current debates. In the philosophy of technology, there is the idea of turning to the objective forms of human activity and thereby attributing to them a function of mediating self-understanding. By considering technology, it would be possible to know what becomes of us. Technology highlights the fact that the actions it mediates always include a concrete intervention in the material environment and therefore always represent a form of relationship with nature. Technology, then, is a relationship with nature and, since the human being himself belongs to nature, it also includes a certain relationship with oneself, and therefore also with the immaterial, symbolic, and normative dimension. Human beings, then, not only find themselves in their artifacts; above all, they recognize themselves in them. This thesis allows Kapp to explain the human body in terms of an organism by using the objectification of the tools we create.

Kapp tries to explain the human body, its being an organism, through its self-made tools. Since artifacts are regarded as unconscious alienations of the human, human organs now appear to be extensions of artifacts, and the artifacts themselves can thus become models for the exploration and interpretation of those organs. Since artifacts and their functional context are subject to natural laws, it is clear that they provide a model for research on the human organism, provided that it too is understood as subject to natural laws; such a model is therefore doomed to lose track of both the phenomenal character of consciousness and normative processes.

Kapp focused on the interplay of reflections that exists between technology and the human. If the human is essentially technical, then technology is the ground of human culture. Technology, then, will not be limited to putting mechanical pieces side by side with human ones. In fact, the process of becoming technical will bring the use of biotechnological options to its maximum development, eroding the dominion of the natural, which had long remained beyond the possibilities of human influence. It cannot be denied that such a process of conquering territories previously untouched by moral judgments has a potential impact on the theoretical and symbolic dimension. This is the typical shift that the philosophy of technology allows us to implement with epistemic benefits. Thinking about human organs starting from artifacts, conceived as unconscious alienations of the human, organs appear from this overturned perspective as extensions of artifacts. The objectification allowed by this epistemic deviation lets them be considered models for their own exploration and interpretation. There is, however, a caveat: once the model for their understanding is that of artifacts, they will lose their inner face. Since artifacts and their functional context are constitutively subject to laws, it is clear that they can serve as a model for research on the human organism, provided, however, that the organism is also understood only in those configurations that are governed by natural laws. Nevertheless, this epistemological move has an undoubted value. Following Ernst Kapp, one might appeal to his notion of alienation within his theory of technology as a device of self-knowledge.

The central idea here is that, through technology, we may come across human features. Technology serves as a tool for knowing our constitution; it represents a reliable way to conceptualize ourselves. My thesis is that while we can use such hybrid forms to explore the boundaries of our self-objectification, the direction of human–machine interaction in difficult ethical situations must nonetheless remain that of adapting the machine to the human. This mirroring dynamic exposes the transformations of human vulnerability. The alienation nexus includes vulnerability and direction. The normative indication is that the adaptation should be determined according to human, not machinic, standards. Overall, the risk of self-objectification, which becomes alienation, is very well identified by Kapp's theory. The recent attitude put forward by engineers, computer scientists, and philosophers, which contends that "building robots is philosophy making" [34,35], somehow echoes Kapp's philosophy of technology. It means that the process of building robots forces us to reflect on what human capabilities are, and is therefore comparable to a process of self-knowledge. As the ancient Greeks relied on the Oracle of Delphi, so we can count on the construction of robots to obtain knowledge of ourselves. I will discuss this claim, and in doing so I will borrow Ernst Kapp's terminology. In Kapp's account, the human being is to be found in her artifacts; in her technical culture, she recognizes herself [14]. A traditional view says that there is no increment of self-understanding without the mirroring of a mind-external world. So far there is no disagreement between Kapp [14] and Pfeiffer and Bongard [34] or Wallach and Allen [35], except that Kapp's argument has two aspects, while the recent account restricts itself to the self-knowledge claim without further caveat.

On the one hand, Kapp argues for the epistemic potential of technology. On the other hand, he warns against the risks of sorting humans and things as items of similar shape and features. On Kapp's account, these are the ethical boundaries of self-objectification.

This analysis may accommodate a number of worries connected to the image of the human being as a well-functioning machine. We can therefore address this tendency as the process of *mechanization of the human being*. For such reasons, interventions in the brain are research topics and cannot serve as a normative tool: it would be wrong to use the insights gained from the process of self-objectification to formulate ethical recommendations. Ethical recommendations concerning what happens when humans share the same environment with highly automated systems can set the stage for a more integrated perspective on the human mind. As they identify the direction of the adjustment, they can also suggest that the project of understanding the mind be rooted in self-reports and in the capacity to engage in normative practices and discourses. Experience and morality are to be moved out of the lab and onto the street, into the office, kitchen, bar, or wherever people happen to be when they feel, think, and act.

From a normative perspective, we can state that highly automated systems must be designed in such a way that they adapt more closely to the communicative behavior of humans and do not, conversely, demand from humans an increased adaptation performance that they may not be willing to deliver. If the adaptation goes in the opposite direction, we consider this, from an ethical perspective, an undesirable development that may result in processes of dehumanization. If the aim is to serve society, then innovation has to comply with human aims and desires. Otherwise, innovation without an ethical dimension risks being a blind kind of innovation [36].

While the legal example shows how puzzling the hybrid constellation can be, the ethical analysis of human enhancement technologies is a source of hard moral questions. Intervening in the brain involves engineering issues and ethical problems, which can lead to a conceptual and philosophical re-examination of our familiar insights.
