Article

Digital Objects, Digital Subjects and Digital Societies: Deontology in the Age of Digitalization

Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, PO Box 513, 5600 MB Eindhoven, The Netherlands
Information 2020, 11(4), 228; https://doi.org/10.3390/info11040228
Submission received: 9 March 2020 / Revised: 7 April 2020 / Accepted: 10 April 2020 / Published: 20 April 2020
(This article belongs to the Special Issue The Future of Human Digitization)

Abstract

Digitalization affects the relation between human agents and technological objects. This paper looks at digital behavior change technologies (BCTs) from a deontological perspective. It identifies three moral requirements that are relevant for ethical approaches in the tradition of Kantian deontology: epistemic rationalism, motivational rationalism and deliberational rationalism. It argues that traditional Kantian ethics assumes human ‘subjects’ to be autonomous agents, whereas ‘objects’ are mere passive tools. Digitalization, however, challenges this Cartesian subject-object dualism: digital technologies become increasingly autonomous and take on agency. Similarly, human subjects can outsource agency and will-power to technologies. In addition, our intersubjective relations are increasingly shaped by digital technologies. The paper therefore re-examines the three categories ‘subject’, ‘object’ and ‘intersubjectivity’ in light of digital BCTs and suggests deontological guidelines for digital objects, digital subjects and a digitally mediated intersubjectivity, based on a re-examination of the requirements of epistemic, motivational and deliberational rationalism.

1. Introduction

Digitalization is affecting almost all domains of our lives [1]. One aspect of our lives that is particularly affected by digitalization is human decision making. Emerging digital technologies have an impact on our behavior and the choices we make. This is especially true for so-called persuasive technology that explicitly aims at changing the behavior of users, often with the help of ICT: car dashboards give feedback on energy consumption; smart watches suggest that we exercise more; e-health coaches give feedback on our eating habits. Philosophers have worried that these nudging technologies undermine human autonomy [2,3,4]. This worry is particularly prominent with regard to deontological approaches to the ethics of technology [5].
Deontological ethics focuses on the rightness or wrongness of moral actions, independent of the consequences that these actions might have (as opposed to consequentialism), and independent of the character of the person who is acting (as opposed to virtue ethics). Kantian deontology focuses on autonomy and human dignity and demands that our societies be set up in such a way that autonomy and human rights are respected. Deontological ethics is, finally, rooted in a particular understanding of what it means to be human, since it usually takes its starting point from an analysis of human autonomy and human (moral) agency. In this essay, I will point out three elements of a deontological approach that relate to how humans, technologies and societies are conceptualized within traditional deontology. I will argue that digitalization encompasses various trends that seem to undermine these traditional conceptualizations and therefore call for a reconsideration of these traditional notions.
More specifically, digitalization leads to a change in the understanding of (human) ‘subjects’ and (technological) ‘objects’. Technological objects are no longer mere passive things, but begin to have agency of their own. Self-driving cars make decisions on their own and start to display a form of agency. Behavior change technologies aim at influencing human actions. On the other hand, human subjects can use digital behavior support systems to outsource part of their agency to technology. Digitalization thus affects both human and technological agency. Finally, our societal communication, too, is increasingly shaped and influenced by digital information technologies. In my paper, I therefore follow the distinction between ‘objects’, ‘subjects’ and ‘intersubjectivity’ as three major philosophical categories, as elaborated by Apel [6], Hösle [7], and Habermas [8], to analyze these trends of digitalization.
In the main part of the paper I will therefore attempt to show how a deontological ethics could deal with these challenges in each of these three areas. The focus of the paper will be on very specific ways in which digitalization affects human decision making or human autonomy, in a way that will in turn impact human behavior. The paper will therefore use digital behavior change technologies (BCTs) as a case study. The main aim of the paper is to suggest guidelines for the design and usage of digital BCTs from a deontological perspective and to identify relevant topics for future research. The paper starts by identifying the relevant aspects of a deontological ethics and the underlying interpretation of human agents, technical objects and societal interactions (Section 2), before going on to show how these traditional conceptualizations are being challenged by digitalization—and how a deontological framework can respond to these challenges (Section 3).

2. Deontological Ethics and Three Forms of Deontological Rationalism

In this section I will highlight key features of a deontological framework in the tradition of Kantian ethics, insofar as they are relevant for a discussion of digitalization. My approach to deontology is inspired both by Kant’s traditional framework and by recent attempts to modernize it, particularly in the tradition of German discourse ethics in Apel [9] and Habermas [10]. The aim of this section is obviously neither to give a complete theoretical foundation, nor to justify the assumptions of the framework. Rather, I intend to highlight key elements insofar as they will be relevant for our discussion of ethical issues of digitalization in the next section. The aim of the paper is conditional: if we approach the phenomena of digitalization from a deontological perspective, how can we evaluate important trends within digitalization? What guidelines can deontology suggest for the design and usage of digital behavior change technologies?
Deontological approaches in the tradition of Kant put rational agency at the center of ethical theory. The assumption is that the moral law is a rational framework for the moral evaluation of actions that are performed by (potentially) rational agents that have a free will. In fact, Kant starts from the challenge of freedom as a quest for humans: we are often faced with the decision of which of the many possible actions we should choose to realize at any given time. Kant accordingly interprets the moral law as an answer to the question of freedom. The aim is to justify which ends we should strive for and which actions we have most reason to choose. Kant argues that reason does not merely allow us to answer strategic questions about means and ends, such as: if I want to achieve great luxury, how could I do this (so-called hypothetical imperatives)? Rather, reason also allows us to answer questions about which ends I should choose in the first place. Should I strive to be rich or strive for other things in life (categorical imperatives)? The fact that morality is seen as strongly rooted in the theory of practical rationality has certain consequences for a deontological approach. I will mention three main implications.
(1) The first implication is that Kantian deontology is a cognitivist ethical theory, which argues that the only way to discover and justify moral norms is by appeal to reason. Other faculties—such as emotions, intuitions or tradition—play no role in the foundational framework. This implies a certain optimism that reason can answer moral questions with logical categoricity and necessity. This focus on reason has often been cited as a key feature of Kantian ethics that makes deontology an interesting case for artificial intelligence, as future AI agents might not have access to intuitions or emotions, but could—at least in principle—be programmed to act rationally. I will call this first implication the epistemic rationalism of deontology.
(2) The second implication is related to (individual) human behavior and concerns what I want to call motivational rationalism. Acting in line with ‘what you have most reason to do’ constitutes a foundational part of moral agency. Therefore, individual reflection on what you have most reason to do constitutes for Kant a necessary element of moral agency. An individual action is moral insofar as the reason to do the right thing features at the same time as the motive for the action. This rigorous motivational rationalism has often been criticized as too strict, as it is very burdensome to live up to this high standard. (It might indeed be easier for future AI to live up to this standard than for mere mortal humans.) We will see in the next section that digital behavior change technology has precisely to do with the relation between our better selves, who know what we should do—and our inability to live up to our own goals (see Section 3.2). This outsourcing of moral agency to technology seems to be in conflict with the requirement of motivational rationalism.
(3) The third implication relates to (collective) human decision making within society and could be called deliberational rationalism. It requires, in short, that binding societal decisions (such as those made in parliament or in court) should be based on rational deliberation. This means that political issues should be tackled by an exchange of arguments from both sides, in which the “force of the better argument” (and not emotions, power relations, traditions or religions) should carry the day. These Enlightenment ideas of Kant have been further elaborated for our time by Habermas and Apel, whose ethical project is to transform Kant’s deontology of the subject into an intersubjective (neo-)Kantian ethics of discourse [9].
Discourse ethics tries to identify rules for deliberation, such as the requirement that discourses should be power-free and transparent, and that everyone affected should be able to agree to a consensus, including those who carry the highest burden [10]. Apel refers to these principles as the ideal community of communication, as they are often difficult to implement in everyday societal interactions. On the level of social reality, Apel and Habermas point to the existence of a public sphere in which individuals exchange arguments and form their opinions—this is the real community of communication in Apel’s terminology [9]. It is this aspect of the real community of communication that is currently being shaped more and more by digital technologies, such as digital social media. The worry is that these technologies have a negative impact on the rationality of the public sphere, rather than helping us to orient ourselves towards truth and reason (see Section 3.3).
These three implications are rooted in more fundamental ideas, such as assumptions about what constitutes human agents (‘subjects’), what constitutes mere ‘objects’ (such as technological artifacts) and what should constitute the foundational principles of society (‘intersubjectivity’). In more detail, Kantian deontology assumes that human agents are proper ‘subjects’. This implies that they are free and autonomous in their actions and that they can shoulder responsibility, since they can (and should) act on the basis of what they have most reason to do. This capacity for rational agency constitutes the moral value of human beings and lies at the core of human dignity. ‘Objects’, on the other hand, are passive in themselves; they possess neither freedom nor agency, and therefore they can neither shoulder responsibility, nor do we owe them respect. We can own ‘objects’ and use them as property, whereas it is immoral to treat other humans as property (or as means only). Society, finally, is tied to a deontological interpretation of ‘intersubjectivity’: since the aim of society is to guarantee human agency, dignity and freedom, it should be rooted in rational principles that individuals can give their rational consent to. Deliberation therefore becomes an important element of the public sphere: it is the place where rational agents exchange their viewpoints.
We can thus see that Kantian deontology is originally embedded in an underlying interpretation of what constitutes an ‘object’, a ‘subject’ or ‘intersubjectivity’ [6]. The subject-object distinction in Kant dates back, of course, to the Cartesian dualism of free, thinking subjects (res cogitans) on the one side, and merely extended objects (res extensa) on the other. Are these ontological conceptualizations still plausible in the age of digitalization? Further, if we need to modify these concepts, what would that mean for a re-interpretation of Kantian deontology for digital technologies?

3. Towards a Digital Deontology

I would like to identify three trends within digitalization that seem to challenge the traditional (neo-)Kantian deontological framework. They relate to the underlying understanding of humans as ‘rational agents’, technological objects and nature as mere ‘passive entities’, and societies as the place of ‘rational deliberation’.
It has often been argued in general philosophy and in the philosophy of technology that the Cartesian dualism can and should be challenged. Philosophers of technology have, e.g., pointed out that technology is less passive than the notion of a mere object suggests [11]. This seems to be particularly true with regard to the (slow) advent of artificial intelligence. In short, our once passive tools start to make decisions by themselves; they start to ‘act’. This has implications for the discussion of the relation between the agency of technology and human autonomy. What does it mean to be an ‘object’ in the age of digitalization (Section 3.1)?
One can further argue that what it means to be a ‘rational subject’ also changes in the world of digitalization. ICT-driven technologies allow us to ‘expand’ our minds into the world with the help of digital artifacts. I can outsource my agency and will-power by using technologies to, e.g., help me memorize the birthdays of my friends or to overcome weakness of the will with the help of digital e-coaches. In the distant future I might even be able to merge with AI. This raises questions about the ‘flow of agency’ between humans and technology and what it means to be a ‘subject’ in the age of digitalization (Section 3.2).
Finally, social deliberation changes as well. On the one hand, social media and behavior change technologies can help foster democratic deliberation and provide platforms for exchange and discussion; on the other hand, they can be used to influence our opinions and manipulate elections. This calls for a new evaluation of the public sphere as a place of political deliberation (Section 3.3).
In the next section, I will look at these three challenges in turn. Since they cover a broad range of domains, I want to limit myself to the case of a specific type of digital technology, namely ICT-based behavior change technologies, as mentioned above. BCTs are technologies that are intentionally designed to change the behavior of users. Digital BCTs seem to perfectly illustrate the changes in digital objects, subjects and intersubjectivity, as they affect all three domains. Furthermore, these technologies already exist and are in fact becoming more and more widespread. Other, more advanced technologies that would alter the three domains even more radically—such as fully autonomous robots or full-fledged AI—are still more distant and futuristic. They will therefore be left out of the scope of this essay. Digital BCTs can serve as ‘transition’ technologies that illustrate the trend towards fully autonomous AI. The real effects of BCTs can already be observed and can therefore inform ethical analysis.

3.1. Deontology and Digital Objects

One can argue that it was fair enough in the time of the Enlightenment to focus on human agency only, and to regard objects as passive things that do not have agency of their own. Recently, however, with the rise of digitalization, the distinction between subjects and objects seems to become blurred for technological artefacts. Algorithms take over human decisions: they help to fly planes, invest in the stock market and will soon let cars drive autonomously. The potential end-point of this development might be robots that pass the Turing test [12], are equipped with full-fledged artificial intelligence and can for all intents and purposes be regarded as real actors. This will raise the question of whether these robots should be regarded as ‘persons’ and which—if any—of the human rights should be applied to them [13,14,15].
The observation that technologies are more than mere neutral tools is, however, older and pre-dates the focus on digitalization. Winner already famously claimed that artifacts—such as bridges—can have politics [16]. Actor-network theory goes even further and argues that we should ascribe agency and intentionality to all artifacts and even to entities in nature, such as plants [17,18,19]. In a similar vein, post-phenomenology has been developed in part as a strict opposition to the Cartesian subject-object dualism and maintains that all technologies affect human agency, since they shape human perception and alter human action [11,20]. One can of course still argue that it is meaningful to distinguish between full-fledged human agency and intentionality on the one hand, and whatever ‘objects’ are currently doing on the other hand [21,22]. However, the phenomenon of the increasing agency of objects through digitalization deserves attention, especially for an ethical approach such as deontology that starts from notions of (human) agency and autonomy.
For the purpose of this paper, I therefore want to suggest distinguishing between three types of objects: “good old objects”, “digital objects” and “autonomous objects”. The intuitive distinction behind these three categories is the amount of agency we are willing to ascribe to each of these objects: no agency (good old objects), some limited form of agency (digital objects), and full-fledged agency (autonomous robots). (This distinction is meant to be conceptual and therefore independent from any concrete framework about the agency of artifacts. Depending on your preferred philosophy of technology, you can judge which concrete objects belong in each category. E.g., mediation theory and actor-network theory might claim that “good old objects” never really existed; this class would thus be empty under such a framework. At the other extreme, if you embrace a framework that requires mind and consciousness as necessary pre-conditions for full-fledged agency, you might doubt whether there will ever be (fully) autonomous objects (see, e.g., Searle’s criticism of the extended mind). For a conceptual analysis of the relation between agency and artifacts see [23,24].)
Traditional tools (without any agency) are what I want to refer to for now as “good old” objects. A screwdriver that is used by a mechanic might have affordances [23], but lacks agency of its own. It does nothing in the absence of a human being, other than just lying there in a toolbox. At the other end of the spectrum we have “fully autonomous robots”, which for all intents and purposes “act by themselves” and whose actions might at some point be indistinguishable from what a human would do. These are the robots that will pass the Turing test and whose actions can no longer be sharply distinguished from those of a human being. In between, we have a third category consisting of all technologies that encompass some form of agency. There are currently many artifacts to which we would ascribe some form of agency. Self-driving cars, e.g., can be seen to decide autonomously how to drive on a highway, but of course they lack many other aspects of agency. This in-between category, however, does not seem to fit into the traditional subject–object dualism. It therefore requires special consideration from a deontological standpoint. Let us look at all three categories from a deontological perspective.
How would Kant treat ‘autonomous objects’? As said above, traditional Kantian ethics merely distinguishes between subjects and objects. Subjects are agents that are capable of acting autonomously based on what they have most reason to do (and who can reflect on this capacity and give reasons for their actions). Mere objects do not have this capacity. In this Cartesian spirit, Kant also famously assumes that animals belong in the category of objects. They are not moral agents, and they have no intrinsic moral status [25,26].
However, the first thing to note is that there is nothing in the Kantian enterprise that restricts moral agency to humans only. Kant himself speculates about potential rational agents that might exist on other planets and that might be sufficiently similar to humans: they could possess a free will, in which case—according to Kant—their actions would also be subject to the same moral law. According to Kant, the moral law even binds the agency of God. Kant is thus not a ‘speciesist’ in the terminology adopted by Singer [27]. It is not our biology that makes us special, but our capacity to act morally. We can therefore speculate that once artificial agents encompass autonomous agency that is sufficiently similar to human agency, they should be seen as bound by the same moral law as humans. At least that would be a natural application of Kant’s theory to autonomous artificial agents. In short, if artificial agents ever become ‘subjects’, they are bound by the same moral law that all rational and free agents are subjected to, according to a Kantian framework. Fully autonomous AI agents would therefore need to be treated like ‘subjects’. Or in other words: if artifacts (technological objects) ever possess the necessary and sufficient conditions for free and autonomous moral agency, then they should be treated as ‘subjects’, i.e., as persons. (This question is independent from the issue of whether the Kantian framework is the best framework to implement in artificial moral agents [28,29,30], or whether it might even be immoral to try to create Kantian artificial moral agents in the first place. The latter point has been argued by Tonkens, based on the assumption that artificial moral agents could not have free will [31]. For a general analysis of the ‘moral agency’ of artificial agents see [32,33].)
Kant also has no problem dealing with mere good old objects. Objects can be used as tools and—in a Kantian framework—there are no duties that we owe to objects, except in cases where our actions would violate the rights of other humans. We can destroy our property and, e.g., disassemble our old cars and sell the pieces we no longer need. We do not owe anything to mere objects, at least not in the Kantian framework. It is, therefore, precisely the “in-between category” that raises interesting questions. I will thus focus on the case of distributed agency, and I will illustrate a deontological perspective by analyzing the case of behavior change technologies.
Digital behavior change technologies affect human agency, but also start to interact with humans, even if currently only in limited forms. Conceptually, I therefore want to distinguish between two cases of the ‘flow of agency’ in digital BCTs (see Figure 1). (1) On the one hand, BCTs can be used to affect the behavior of users. They are designed to change the attitude and/or behavior of users. In this case the traditional human subject is not the source, but the target of the influence, and the digital BCT acts with the intent to influence human agency. Users might or might not be aware of these influences. I will focus on this category first.
(2) On the other hand, BCTs can be used by humans to enhance or extend their agency. For example, I can use a health-coaching app to help me reach my goals and support my desire to exercise more. In this case I am delegating or expanding my agency; the human subject is, so to speak, the source of the agency. I will look at this category in the next section (on ‘digital subjects’), since these are cases of agency that are initiated by the subject.
It must be noted that this distinction is a conceptual one: the same technology can exercise both forms of agency. An e-coaching health app is in part an extension of human agency (as it is installed and initiated by the user), which—once installed—goes on to act upon the user (e.g., by pushing against weakness of the will). It thus encompasses both flows of agency: from user to technology and from technology to user. Since both cases raise different ethical issues, it is nevertheless helpful to distinguish these two dimensions analytically and treat them separately.
Let us look more closely at the way in which digital BCTs affect human agency. Fogg already observed that computers and ICT technologies can be used to steer human behavior. He defined persuasive technologies as those technologies that are intentionally designed to change human attitudes and/or behavior [34]. Persuasive technologies were originally studied under the header of ‘captology’, referring to ‘computers as persuasive technologies’. The advent of digitalization allowed first computers and later smart technologies to monitor user behavior via sensors and to try to actively influence user behavior. Designers of BCTs started to use psychological research to steer users towards desired behavior [35,36,37].
Recently, Hung [38] has distinguished two classes of behavior change technologies: material BCTs (‘nudges’) and informational BCTs (‘persuasive technologies’). Material behavior change technologies change the physical, material environment in which users make decisions. One example would be a speed-bump that makes car drivers slow down. Informational BCTs use feedback and information to guide or influence user behavior. A car dashboard can, e.g., display a red color if the driver is wasting energy, or reward the driver with symbolic digital flowers that grow on the dashboard if they keep driving in an environmentally friendly way. Informational BCTs are the most interesting type from a digitalization perspective, as they use ICT to monitor behavior and digital user interfaces to give evaluative feedback.
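To make the structure of an informational BCT more concrete, the following is a minimal sketch of how monitored behavior could be mapped to evaluative dashboard feedback. The consumption thresholds, messages and function name are purely illustrative assumptions, not taken from any existing product.

```python
# Minimal sketch of an informational BCT: monitored driving data is translated
# into evaluative feedback on a dashboard. Thresholds and messages are
# illustrative assumptions only.

def eco_feedback(litres_per_100km: float) -> str:
    """Return an evaluative dashboard signal for the current fuel consumption."""
    if litres_per_100km <= 5.0:      # assumed threshold for efficient driving
        return "green: efficient driving"
    if litres_per_100km <= 8.0:      # assumed threshold for average driving
        return "neutral: average consumption"
    return "red: wasting energy"     # evaluative feedback meant to change behavior

print(eco_feedback(4.6))  # -> "green: efficient driving"
```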
If one looks at informational BCTs from a Kantian perspective, one can develop ethical guidelines for the design of these technologies. A first deontological principle for BCTs can be derived from the importance of autonomy and rationality within Kantian ethics. First of all, informational BCTs are digital objects whose agency is aimed at influencing human agency. Since autonomy is a key value in the Kantian framework, we can argue that informational BCTs should be compatible with user autonomy. This means, more specifically, that they should allow for voluntary behavior change that is compatible with acting in accordance with what you have most reason to do [39,40].
This means that, other things being equal, a non-coercive intervention should be preferred in the design of BCTs. Smids [40] has elaborated in more detail what the requirement of compatibility with free and rational behavior change would entail for the design of these so-called ‘persuasive’ technologies. He defines as coercive those BCTs that do not allow for a reflection on the reasons for behavior change, such as mandatory speed-limiting technologies. A BCT that gives a warning if one exceeds the speed limit is compatible with rational behavior change: in principle the user can override such persuasive technologies. Thaler and Sunstein [41] also try to accommodate this requirement in their advocacy for ‘nudges’, since these should be compatible with the free will of the users. They define nudges as occupying a middle ground between paternalism and libertarianism: nudges push users in a desired direction, but do not coerce them to change their behavior. (Whether ‘nudges’ are really compatible with autonomy is, however, debated extensively in the literature [3,42]).
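The contrast between a coercive and a persuasive speed intervention can be illustrated with a small sketch. The limit value and function names below are invented for illustration: the coercive variant simply overrides the driver, whereas the persuasive variant warns and leaves the final decision to the driver.

```python
# Sketch contrasting a coercive and a persuasive speed intervention.
# The speed limit and the interface are illustrative assumptions.

SPEED_LIMIT = 100  # km/h, assumed posted limit

def coercive_limiter(requested_speed: float) -> float:
    """Mandatory limiter: the system overrides the driver, leaving no room for reflection."""
    return min(requested_speed, SPEED_LIMIT)

def persuasive_warning(requested_speed: float, driver_overrides: bool) -> float:
    """Warning system: feedback invites reflection, but the driver can override it."""
    if requested_speed > SPEED_LIMIT:
        print(f"Warning: {requested_speed} km/h exceeds the limit of {SPEED_LIMIT} km/h.")
        if not driver_overrides:
            return float(SPEED_LIMIT)  # the driver accepts the suggestion
    return requested_speed             # the driver stays in control
```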
A second guideline can be related to the observation that digital persuasive technologies are going beyond being mere objects. Informational BCTs establish a (proto-)communicative relation with the user: they give feedback on behavior, warn or instruct users to behave in a certain way, and give praise for desired behavior. I have argued earlier that this establishes a primitive type of basic communication [43,44]. Therefore, we cannot treat these BCTs as mere ‘objects’, but can apply basic ethical rules that have previously applied only to relations between humans. The validity claims of communication that have been analyzed by Habermas [10] and discourse ethics scholars can be applied to the relation between persuasive technologies and humans. As in the human–human case, the feedback that persuasive technologies give should be comprehensible, true, authentic and appropriate.
Since informational BCTs often use feedback that should not require much cognitive load from the user, there is always a risk that the feedback is misinterpreted. Designers should therefore use easy-to-understand feedback, such as a red light for a warning and a green light for positive feedback. The feedback should obviously be true, which might be more difficult in the case of evaluative feedback. Toyota hybrid cars, e.g., display the feedback ‘excellent’ on the dashboard if the user drives in a fuel-efficient way. However, only the factual feedback on fuel consumption is accurate and properly truthful. The challenge of evaluative feedback is: who gets to decide what counts as ‘excellent’, and is the evaluation transparent to the user? Authenticity refers to the obligation of designers not to mislead users and to give ‘honest’ feedback. Appropriateness refers to finding the sweet spot between too much insistence in attempting to change behavior and giving up too early (see [5] for a more detailed analysis of these four validity claims; for a critical view see [45]). It is plausible to assume that future informational BCTs will be even closer in their behavior to human feedback; it is therefore important to reflect on the implications of this trend for their ethical design [37].
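One way to keep these four validity claims visible during design is to treat them as an explicit checklist for every feedback message. The sketch below is only a schematic illustration of that idea; the boolean criteria are simplifications that would need substantive interpretation in an actual design process.

```python
# Schematic sketch: the four discourse-ethical validity claims as a design
# checklist for feedback messages. The boolean criteria are simplifications.

from dataclasses import dataclass

@dataclass
class FeedbackMessage:
    text: str
    comprehensible: bool  # low cognitive load, e.g., simple color codes
    true: bool            # factual content matches the monitored data
    authentic: bool       # no misleading framing; evaluation criteria are transparent
    appropriate: bool     # neither too insistent nor giving up too early

def passes_validity_claims(msg: FeedbackMessage) -> bool:
    """A message should satisfy all four claims before being shown to the user."""
    return msg.comprehensible and msg.true and msg.authentic and msg.appropriate
```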
To summarize the analysis of BCTs as digital objects, one can formulate the main principle of deontological ethics as follows. The design of digital technologies should be compatible with the autonomy and the exercise of rational choice of the user. The preferred method of behavior change of informational BCTs should be in line with the basic truth and validity claims of human–human interaction. This means that persuasion should be preferred over coercion or other methods of behavior steering. Digital BCTs should be in line with the ethical behavior we would expect other humans (subjects) to display. The latter is particularly true the more digital BCTs move towards increasing (autonomous) agency. From a deontological perspective, the main guiding principle for digital objects is therefore that the usage and design of such technologies should be compatible with the conditions for human moral agency and the human capacity to act based upon what they have most reason to do. In short, digital objects should not undermine what makes Kantian ‘subjects’ rational agents in the first place. Digital BCTs should thus respect the requirement of epistemic rationalism: human agents should be able to base their actions as much as possible on a reflection on what they have most reason to do.

3.2. Deontology and Digital Subjects

We have seen above that digitalization adds agency to our technological objects. In this section I want to look at the changes in the age of digitalization from the perspective of the acting subjects. As argued above, the focus of this section will thus be on the flow of agency from human subjects to digital objects. As before, we can make a similar typology to distinguish different types of (the understanding of) ‘subjects’. In the age of Kant, human subjects were seen as the only known examples of free and autonomous agents. Insofar as this category still makes sense today, we can call these the “good old subjects”. Whereas the envisioned end point of digital objects is fully autonomous, possibly conscious acting robots, the vision we find with regard to the future of human ‘subjects’ is the idea of a merging of humans and AI to create transhuman agents that have incorporated AI as part of their biology [46,47]. Transhuman cyborgs are thus the second category. In between we find theories of extended agency, which I would like to call ‘digital subjects’. We have observed above that objects acquire degrees of agency of their own. We can similarly observe the extension of the human mind beyond the borders of the biological body with the help of digital technologies. Whereas digital objects are designed to affect human agency, digital subjects are cases of distributed agency, starting from the intentions and choices of the human subject.
Within the philosophy of technology, theories of the extended mind [48,49] and the extended will [50] have been developed to account for the fact that humans can outsource elements of their cognitive functions or their volition with the help of technological artifacts. (The idea that tools are an extension of human agency or the ‘mind’ is older than the rise of digital technologies (cf. [51]). Already a pen and paper, or a notebook, can be seen as extensions of the human mind. For an application of theories of extended cognition to digital technologies see [52].) Again, it is this middle category that is most interesting from a Kantian perspective. BCTs can not only be used to affect the agency of others, but also as an outsourcing of will-power. If we apply a deontological perspective to these technologies, we can develop prima facie guidelines for their design from a Kantian perspective.
In the previous section, we formulated negative guidelines about what BCTs should not do. We started from the Kantian worry to protect human autonomy and agency from improper interference by digital objects. Can we complement these guidelines with some positive accounts starting from the agency of digital subjects? We might regard these negative requirements (not to undermine or interfere with human autonomy) as perfect duties in the Kantian sense. Are there also weaker principles, or “imperfect duties”, i.e., guidelines that might point towards BCTs that could be regarded as morally praiseworthy?
I indeed suggest considering two additional guidelines, which are weaker than the ones suggested above. As a positive principle one could add that BCTs should, if possible, encourage reflection and the exercise of autonomous agency in the user. Designers should at least consider that sometimes the best way to change behavior in a moral way is to simply prompt the user to actively reflect and make a conscious and autonomous choice. A smart watch for health coaching, for example, might prompt the user to reflect on past performances and ask them to actively set new goals (e.g., the number of calories to be burnt or minutes of exercise for the next week). Health apps can display supporting reasons for eating choices, to try to convince—rather than persuade—the user to change their diet. Bozdag and van den Hoven [53] have analyzed many examples of digital technologies that help users to overcome one-sided information on the internet, and that can help to identify and overcome filter bubbles. One example they discuss is the browser tool ‘Balancer’ that tracks the user’s reading behavior to raise awareness of possible biases and to try to nudge the user to make her reading behavior more balanced.
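A minimal sketch of such a reflection-prompting intervention is given below: instead of silently steering the user, the coach shows past performance and asks for an explicitly chosen goal. The function and its interface are hypothetical and only illustrate the principle.

```python
# Sketch of a 'deliberative' intervention: the app prompts active reflection and
# an autonomous choice instead of silently nudging. Hypothetical interface.

def prompt_weekly_goal(last_week_minutes: int) -> int:
    """Show last week's performance and let the user set next week's goal."""
    print(f"Last week you exercised for {last_week_minutes} minutes.")
    answer = input("How many minutes do you want to aim for next week? ")
    return int(answer)  # the goal is chosen by the user, not imposed by the app
```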
If we take the observations of the prior section and this section together, we can use the epistemic requirement of deontology to distinguish three different types of digital behavior interventions in BCTs, based on their relation towards human autonomy and the human capacity to base choices on rational deliberation: (i) some BCTs might be incompatible with the exercise of autonomous deliberation (e.g., coercive technologies); (ii) others might be compatible with it (persuasive technologies); and (iii) some BCTs might even actively encourage or foster reflection (deliberative persuasive technologies).
There is a second deontological principle that could be regarded as an imperfect duty in the design of digital BCTs. Behavior change technologies can be designed to support users in cases of weakness of the will. They can remind us that we wanted to exercise, watch out for filter bubbles, or take our medication as planned. This outsourcing of will-power to digital technologies is not problematic as such, and can even be seen as empowering, or as a “boosting” of self-control [54]. The worry one might have with these technologies, however, is the problem of the deskilling of moral decision making through technology [55]. Rather than training will-power or discipline, we might become dependent on technologies to reach our goals, while at the same time losing capacities of will-power and relying on the fact that BCTs will and should tell us what to do.
In [5] I have, therefore, contrasted ‘manipulation’ with ‘education’ as paradigmatic strategies of behavior change. Both are asymmetrical relations that intend to change behavior, but they use opposite methods. The aim of manipulation is to keep the asymmetrical relation alive and keep the user dependent. Manipulation is therefore often capacity-destructive. Education, on the other hand, aims at overcoming the initial asymmetrical relation between educator and user; it aims at capacity building. This strategy might therefore better be referred to as ‘empowerment’. Designers of BCTs can thus try to use the paradigm of educational intervention in the design of BCTs and reflect on the question of whether their technologies build up (rather than destroy) individual capacities, such as, e.g., digital e-coaches that aim at training and establishing new habits. One could thus, with some oversimplification, formulate as a deontological guiding principle that ideally the aim of the persuasion in BCTs should be the end of the persuasion.
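The ‘empowerment’ paradigm could, for instance, be reflected in a coaching app that gradually withdraws its prompts as the user succeeds without help. The following sketch assumes a hypothetical measure of unaided success and merely illustrates the principle that the persuasion should work towards its own end.

```python
# Sketch of capacity building: the coach intervenes less as the user's unaided
# success rate grows, so that the persuasion eventually ends itself.
# The adherence measure is an illustrative assumption.

import random

def reminder_probability(unaided_success_rate: float) -> float:
    """The better the user does without support, the less the coach intervenes."""
    return max(0.0, 1.0 - unaided_success_rate)

def should_send_reminder(unaided_success_rate: float) -> bool:
    return random.random() < reminder_probability(unaided_success_rate)
```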
These positive guidelines bring us, however, to a controversial point of a Kantian deontological approach. We have identified motivational rationalism above as a key feature of Kant’s deontology: the requirement that moral actions should not only be in line with the action that an agent has most reason to pursue, but should also (at least in part) be motivated by these reasons. I would argue, in line with many (early) criticisms of Kant, that this requirement is too strict. (Kant himself already seems to take back some of the rigor of the motivational requirements of his Groundwork by including an elaborated virtue ethics in his Metaphysics of Morals.) A convincing re-actualization of Kant should let go of the strict motivational requirement and replace it with a weaker version: rather than always being motivated by reason, it is enough that a user is in principle able to give reasons that support his choice of action, even though these reasons need not play a motivational role at all times of the action.
A weaker version of the motivational requirement would allow for two things with regard to digital BCTs. It would encourage the development of BCTs that are meant to overcome weakness of the will by supporting users in their tasks, as discussed above. The weak requirement would, however, still demand that the lack of “autonomy” within the behavior change intervention could (or maybe even should) in principle be complemented by an act of rational agency, motivated by reason, at some other point in time. The best way to guarantee this is to call for an actual act of decision that is based on reasoning. This could, e.g., be a free and autonomous choice to use a given BCT in the first place, with the intention of overcoming temptations or weakness of the will. It would not need to imply that the BCT itself only appeals to reflection and deliberation in its attempts to change the user’s behavior.

3.3. Deontology and Digital Societies

So far, we have focused on the domains of digital subjects and digital objects, and suggested re-interpreting the epistemic and motivational requirements of Kantian deontology to develop guidelines for the design and usage of digital BCTs. For the sake of completeness, I want to conclude with a few remarks on the remaining, third aspect: digital intersubjectivity and the requirement of rational social deliberation. This topic deserves a more detailed analysis than can be given here in the context of this paper. There is a rich, growing literature on the impact of social media on societal debates and opinion forming [56,57,58], though not many of these analyses take an explicitly deontological perspective. For the remainder of the paper I will restrict myself to trying to identify the three most pressing challenges from a deontological perspective.
Initially, social media were greeted as a pro-democratic technology (e.g., due to their role in the Arab Spring [59,60,61,62] or due to their potential to let more people participate in the public debate [63]). However, worries have recently emerged about the impact of fake news on Facebook and Twitter and about attempts to use these technologies to influence public debates and to interfere with elections [64,65]. These technologies are again aiming at behavior change: they can be used to change voter behavior and can target the attitudes and beliefs that people hold.
The first and most fundamental worry from a deontological perspective is linked to the requirement of societal deliberational rationalism and its importance for the public sphere. Any deontological theory of social institutions will stress the importance of communicative rationality [66] for public decision making, including debates in the public sphere. The spread of social media technologies can then be seen, pessimistically, as counter-enlightenment technologies that threaten to replace communicative rationality with strategic rationality and place humans again under a self-imposed tutelage (to use Kant’s language). Whereas deliberation is a conscious and transparent process to debate public issues, fake news, misleading ads and attempts to polarize the debate can be regarded as attempts to use strategic rationality: a ‘silent might’ (Christian Illies) that threatens to distort rational debates. This is particularly true with regard to two recent trends. The first is the blurring of the distinction between “truth” and fake news. Some researchers worry that we are moving towards a post-truth age [67], in which it will be more and more difficult to distinguish fact from fiction, as traditional news media (with editorial authority) are declining and social media—fueled by a click-bait attention economy—take over. Twitter, e.g., is not a medium that lends itself to a carefully considered debate, due to its character restrictions [64], but it is a great medium for posting short oversimplifications.
The second is the fact that humans are willing to engage more if they disagree with each other, which leads to a polarization of the debate, where the loud voices are heard and the moderate voices seem to be less visible. This is helped by filter bubbles or echo chambers, in which users are only confronted with their own views and not challenged to engage with viewpoint diversity [68]. The change in current political trends towards a rise of populism on the left and the right side of the political spectrum, together with a decline of traditionally more moderate parties, has many different reasons. The change in the shape of the public sphere due to social media may very well be one of the contributing factors [64].
What should we make of these trends from a deontological perspective? I would argue that traditional deontological theories about the importance of rational deliberation for a healthy society can give guidelines that are, however, abstract unless they are spelled out in more detail in careful future analysis. For now, I would suggest keeping the following three guidelines in mind in the development and usage of social media.
The first guideline would be to design social media technologies in line with the requirements of communicative rationality, and to limit the aspects of strategic rationality [5,53,66]. One example would be the debate about hate speech on Twitter. In an interesting podcast debate, Jack Dorsey (CEO of Twitter) discusses various attempts to deal with hate speech on the platform [69]. The debate covers the two ends of the spectrum. On the one hand, Twitter needs to establish guidelines about which speech acts should be forbidden and lead to a ban from the platform. On the other hand, Twitter could consider also formulating a positive ideal of the type of communication that it would like to encourage on its platform. Some of these aspects can be implemented differently in the technology design: hate speech could be filtered out by humans, by algorithms, or brought to a deliberative panel of voluntary users that decides on possible sanctions. But Twitter could also seek technological solutions. Twitter could, e.g., implement an ‘anger-detection’ algorithm that holds back a harsh tweet before publishing it and asks the user to reconsider the usage of aggressive language before posting it. In a similar vein, Instagram has recently tried to improve the focus on content and remove incentives for strategic behavior by hiding the number of likes a picture gets. In the wake of the coronavirus pandemic, Twitter in the Netherlands displayed a prominent link to official information by the Dutch Ministry to counter false information and rumors. These can be seen as attempts to (re-)design social media in light of the requirements of communicative rationality.
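The ‘anger-detection’ idea can be illustrated with a deliberately simple sketch: before a post is published, the platform checks for aggressive language and asks the user to reconsider. The word list and detection logic below are placeholder assumptions; an actual implementation would rely on trained classifiers rather than a keyword match.

```python
# Sketch of a pre-posting prompt that asks the user to reconsider harsh language.
# The word list is a placeholder; real systems would use machine-learned classifiers.

AGGRESSIVE_TERMS = {"idiot", "stupid", "hate"}  # illustrative only

def pre_post_check(draft: str) -> str:
    """Return the draft to be posted, or an empty string if the user withholds it."""
    words = {w.strip(".,!?") for w in draft.lower().split()}
    if words & AGGRESSIVE_TERMS:
        answer = input("Your draft contains harsh language. Post it anyway? (y/n) ")
        if answer.lower() != "y":
            return ""  # the user chose to withhold or rewrite the post
    return draft
```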
Future research should spell out in more detail what the application of communicative rationality would mean for the design of social media and BCTs. Since the aim of deliberation is the joint search for truth, technologies could try to overcome echo chambers by occasionally presenting users with popular viewpoints from an opposing position, rather than only adding suggestions that confirm existing beliefs. The debate on whether tech companies like Facebook or Twitter should also be regarded as media outlets that have a responsibility not to promote fake news is currently very fierce in light of the upcoming US election. From the perspective of societal deliberational rationalism, it would seem that these companies indeed have a greater editorial responsibility than they are currently willing to take on.
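As a sketch of what such an echo-chamber countermeasure might look like, a feed could occasionally inject a widely shared item from an opposing position instead of yet another confirming recommendation. The data structures and the injection rate below are illustrative assumptions, not a description of any existing recommender system.

```python
# Sketch of viewpoint-diversity injection in a recommender: mostly congenial
# content, occasionally an item from an opposing position. Rates are assumptions.

import random

def recommend(confirming_items: list[str], opposing_items: list[str],
              diversity_rate: float = 0.1) -> str:
    """Recommend mostly congenial items, but sometimes show the other side."""
    if opposing_items and random.random() < diversity_rate:
        return random.choice(opposing_items)
    return random.choice(confirming_items)
```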
These debates are complicated by the fact that—on the other hand—freedom of speech is an important value for communicative rationality and social deliberation as well. It is, therefore, important to develop a theory of communicative rationality in the age of social media that investigates these questions more carefully than can be done in this short essay. This is arguably the most urgent field of research for the ethics of digitalization from a Kantian perspective.

4. Conclusions

The aim of this paper was to approach the ethics of digitalization from a deontological perspective. We identified three important elements of a deontological ethics: epistemic rationalism, which requires us to base our moral choices on autonomous rational considerations; motivational rationalism, which requires that the insight into the morality of an action needs to figure as (part of) the motive of the action; and deliberational (enlightenment) rationalism, which favors deliberation as an important part of societal debates for the settling of ethical issues in the public sphere. We confronted these requirements with a theoretical challenge and a practical case study (see Table 1).
The theoretical challenge consists in the fact that the underlying Kantian conceptualizations of ‘subjects’, ‘objects’ and ‘intersubjectivity’ are called into question by recent trends in digitalization and AI. Digital technologies are no longer mere passive tools; they start to take up various forms of agency, thus moving closer to classical ‘subjects’. Similarly, modern human subjects can outsource part of their agency to digital technologies. Finally, the debate in the public sphere is more and more mediated by digital technologies, such as social media, which increasingly shape our societal interactions.
We illustrated this theoretical challenge with the case study of behavior change technologies. Informational BCTs use digitalization to monitor human behavior and give targeted evaluative feedback to influence user behavior. We argued for using epistemic rationalism as a negative criterion for the design of ‘digital objects’: BCTs should be compatible with the exercise of free agency, particularly with voluntary behavior change based on rational individual reflection. They should also adhere to the standard validity claims that we expect in human–human communication.
For technologies that support digital ‘subjects’, we developed positive design guidelines. BCTs that support human agency should try to foster reason-based behavior change or be complemented with exercises of rational agency prior to the usage of these technologies. It turns out, however, that there are good reasons to give up motivational rationalism, as it is too demanding a criterion.
Finally, the paper tried to sketch initial guidelines for the digital technologies that affect public societal deliberation, such as social media. ‘Intersubjectivity’, too, is mediated by digital technologies. An orientation towards the original Enlightenment ideals of deliberative rationality should inform the design and implementation of social media, including a commitment to truth, viewpoint diversity and the ideals of communicative rationality.
All three aspects of digitalization (digital subjects, digital objects and digital intersubjectivity) require further research to develop more fine-grained ethical guidelines in future work. The most interesting challenges lie, in my opinion, in the field of digital intersubjectivity. In this paper, I could only give three abstract ideals that should inform the design of social media as behavior change technologies. The increasing effect that social media have on opinion forming in the public sphere deserves close attention. A new exercise in deontological analysis that re-examines the ‘structural change of the public sphere’ [70]—but this time in light of digitalization—is an important desideratum for the development of a digital deontology.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Royakkers, L.; Timmer, J.; Kool, L.; van Est, R. Societal and ethical issues of digitization. Ethics Inf. Technol. 2018, 20, 127–142.
2. Hausman, D.M.; Welch, B. Debate: To nudge or not to nudge. J. Polit. Philos. 2010, 18, 123–136.
3. Cohen, S. Nudging and informed consent. Am. J. Bioeth. 2013, 13, 3–11.
4. Schubert, C. Green nudges: Do they work? Are they ethical? Ecol. Econ. 2017, 132, 329–342.
5. Spahn, A. And lead us (not) into persuasion…? Persuasive technology and the ethics of communication. Sci. Eng. Ethics 2012, 18, 633–650.
6. Apel, K.O. Paradigmen der Ersten Philosophie: Zur Reflexiven-Transzendentalpragmatischen-Rekonstruktion der Philosophiegeschichte; Suhrkamp: Berlin, Germany, 2011.
7. Hösle, V. Hegels System: Der Idealismus der Subjektivität und das Problem der Intersubjektivität; Meiner: Frankfurt, Germany, 1988.
8. Habermas, J. Auch eine Geschichte der Philosophie; Suhrkamp: Berlin, Germany, 2019.
9. Apel, K.O. Transformation der Philosophie; Suhrkamp: Berlin, Germany, 1973.
10. Habermas, J. Justification and Application: Remarks on Discourse Ethics; MIT Press: Cambridge, MA, USA, 1993.
11. Verbeek, P.-P. What Things Do: Philosophical Reflections on Technology, Agency, and Design; Pennsylvania State University Press: University Park, PA, USA, 2005.
12. Turing, A.M. Computing Machinery and Intelligence. Mind 1950, 59, 434–460.
13. McNally, P.; Inayatullah, S. The rights of robots: Technology, culture and law in the 21st century. Futures 1988, 20, 119–136.
14. Coeckelbergh, M. Robot rights? Towards a social-relational justification of moral consideration. Ethics Inf. Technol. 2010, 12, 209–221.
15. Gunkel, D.J. The other question: Can and should robots have rights? Ethics Inf. Technol. 2018, 20, 87–99.
16. Winner, L. Do artifacts have politics? Daedalus 1980, 109, 121–123.
17. Latour, B.; Woolgar, S. The Social Construction of Scientific Facts; Princeton University Press: Princeton, NJ, USA, 1979.
18. Latour, B. Reassembling the Social: An Introduction to Actor-Network-Theory; Oxford University Press: Oxford, UK, 2005.
19. Law, J. Actor Network Theory and After; Blackwell/Sociological Review: Oxford, UK, 1999.
20. Verbeek, P.-P. Acting artifacts. In User Behavior and Technology Development: Shaping Sustainable Relations between Consumers and Technologies; Springer: Berlin/Heidelberg, Germany, 2006; pp. 53–60.
21. Illies, C.; Meijers, A. Artefacts without agency. Monist 2009, 36, 420.
22. Peterson, M.; Spahn, A. Can technological artefacts be moral agents? Sci. Eng. Ethics 2011, 17, 411–424.
23. Pols, A.J.K. How artefacts influence our actions. Ethical Theory Moral Pract. 2013, 16, 575–587.
24. Nyholm, S. Attributing agency to automated systems: Reflections on human-robot collaborations and responsibility-loci. Sci. Eng. Ethics 2018, 24, 1201–1219.
25. Denis, L. Kant’s conception of duties regarding animals: Reconstruction and reconsideration. Hist. Philos. Q. 2000, 17, 405–423.
26. Spahn, A. “The First Generation to End Poverty and the Last to Save the Planet?”—Western Individualism, Human Rights and the Value of Nature in the Ethics of Global Sustainable Development. Sustainability 2018, 10, 1853.
27. Singer, P. Speciesism and moral status. Metaphilosophy 2009, 40, 567–581.
28. Powers, T.M. Prospects for a Kantian machine. IEEE Intell. Syst. 2006, 21, 46–51.
29. Wallach, W.; Allen, C.; Smit, I. Machine morality: Bottom-up and top-down approaches for modelling human moral faculties. AI Soc. 2008, 22, 565–582.
30. Anderson, M.; Anderson, S.L. Machine ethics: Creating an ethical intelligent agent. AI Mag. 2007, 28, 15.
31. Tonkens, R. A challenge for machine ethics. Minds Mach. 2009, 19, 421–438.
32. Moor, J.H. The nature, importance, and difficulty of machine ethics. IEEE Intell. Syst. 2006, 21, 18–21.
33. Floridi, L.C.; Sanders, J.W. On the morality of artificial agents. Minds Mach. 2004, 14, 349–379.
34. Fogg, B.J. Persuasive Technology: Using Computers to Change What We Think and Do; Morgan Kaufmann Publishers: Burlington, MA, USA, 2003.
35. Oinas-Kukkonen, H. Behavior change support systems: A research model and agenda. In Proceedings of the International Conference on Persuasive Technology, PERSUASIVE 2010, Copenhagen, Denmark, 7–10 June 2010; pp. 4–14.
36. Midden, C.; Ham, J. Persuasive technology to promote pro-environmental behaviour. Environ. Psychol. Introd. 2018, 283–294.
37. Ham, J.; Spahn, A. Shall I show you some other shirts too? The psychology and ethics of persuasive robots. In A Construction Manual for Robots’ Ethical Systems; Springer: Cham, Switzerland, 2015; pp. 63–81.
38. Hung, C. Design for Green: Ethics and Politics of Behaviour-Steering Technologies; Simon Stevin Series in the Ethics of Technology; Twente: Enschede, The Netherlands, 2019.
39. Smids, J. The voluntariness of persuasive technology. In Proceedings of the International Conference on Persuasive Technology, PERSUASIVE 2012, Linköping, Sweden, 6–8 June 2012; pp. 123–132.
40. Smids, J. Persuasive Technology, Allocation of Control, and Mobility: An Ethical Analysis; Technische Universiteit Eindhoven: Eindhoven, The Netherlands, 2018.
41. Thaler, R.H.; Sunstein, C.R. Nudge: Improving Decisions about Health, Wealth, and Happiness; Penguin: London, UK, 2009.
42. Anderson, J. Nudge: Improving Decisions about Health, Wealth, and Happiness. Econ. Philos. 2010, 26, 369–376.
43. Spahn, A. Persuasive technology and the inherent normativity of communication. In Proceedings of the 5th International Conference on Persuasive Technology, PERSUASIVE 2010, Copenhagen, Denmark, 7–10 June 2010; pp. 21–24.
44. Nickel, P.; Spahn, A. Trust, Discourse Ethics, and Persuasive Technology. In Proceedings of the 7th International Conference on Persuasive Technology, PERSUASIVE 2012, Linköping, Sweden, 6–8 June 2012; pp. 37–40.
45. Linder, C. Are persuasive technologies really able to communicate? Some remarks to the application of discourse ethics. Int. J. Technoethics (IJT) 2014, 5, 44–58.
46. Warwick, K. Cyborg morals, cyborg values, cyborg ethics. Ethics Inf. Technol. 2003, 5, 131–137.
47. Kurzweil, R. The Singularity is Near: When Humans Transcend Biology; Penguin: London, UK, 2005.
48. Clark, A.; Chalmers, D. The extended mind. Analysis 1998, 58, 7–19.
49. Clark, A. Natural-born cyborgs? In International Conference on Cognitive Technology 2001; Springer: Berlin/Heidelberg, Germany, 2001; pp. 17–24.
50. Anderson, J.; Kamphorst, B.A. Should uplifting music and smart phone apps count as willpower doping? The extended will and the ethics of enhanced motivation. AJOB Neurosci. 2015, 6, 35–37.
51. Ryle, G. The Concept of Mind; Routledge: London, UK, 2009.
52. Smart, P. Emerging digital technologies. In Extended Epistemology; Carter, J.A., Clark, A., Kallestrup, J., Palermos, S.O., Pritchard, D., Eds.; Oxford University Press: Oxford, UK, 2018; p. 266ff.
53. Bozdag, E.; van den Hoven, J. Breaking the filter bubble: Democracy and design. Ethics Inf. Technol. 2015, 17, 249–265.
  54. Hertwig, R.; Grüne-Yanoff, T. Nudging and boosting: Steering or empowering good decisions. Perspect. Psychol. Sci. 2017, 12, 973–986. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  55. Vallor, S. Moral deskilling and upskilling in a new machine age: Reflections on the ambiguous future of character. Philos. Technol. 2015, 28, 107–124. [Google Scholar] [CrossRef]
  56. Baynes, K. Communicative ethics, the public sphere and communication media. Crit. Stud. Media Commun. 1994, 11, 315–326. [Google Scholar] [CrossRef]
  57. Gerhards, J.; Schäfer, M.S. Is the internet a better public sphere? Comparing old and new media in the USA and Germany. New Media Soc. 2010, 12, 143–160. [Google Scholar] [CrossRef]
  58. Gabriels, K. Onlife: Hoe de Digitale Wereld je Leven Bepaalt; Lannoo: Tielt, Belgium, 2016. [Google Scholar]
  59. Khondker, H.H. Role of the new media in the Arab Spring. Globalizations 2011, 8, 675–679. [Google Scholar] [CrossRef]
  60. Allagui, I.; Kuebler, J. The Arab spring and the role of ICTs: Editorial introduction. Int. J. Commun. 2011, 5, 1435–1442. [Google Scholar]
  61. Lim, M. Clicks, cabs, and coffee houses: Social media and oppositional movements in Egypt, 2004–2011. J. Commun. 2012, 62, 231–248. [Google Scholar] [CrossRef]
  62. Pols, A.J.K.; Spahn, A. Design for the values of democracy and justice. In Handbook of Ethics, Values and Technology Design; Springer: Berlin/Heidelberg, Germany, 2015; pp. 335–363. [Google Scholar]
  63. Noveck, B.S. Wiki Government: How Technology Can Make Government Better, Democracy Stronger and Citizens More Powerful; Brookings Institution Press: Washington, DC, USA, 2009. [Google Scholar]
  64. Hösle, V. Globale Fliehkräfte: Eine geschichtsphilosophische Kartierung der Gegenwart; Verlag Herder GmbH: New York, NY, USA, 2020. [Google Scholar]
  65. Farkas, J.; Schou, J. Post-Truth, Fake News and Democracy: Mapping the Politics of Falsehood; Routledge: New York, NY, USA, 2019. [Google Scholar]
  66. Habermas, J. The Theory of Communicative Action: Lifeworld and Systems, Critique of Functionalist Reason; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  67. Chugrov, S.V. Post-Truth: Transformation of political reality or self-destruction of liberal democracy? Polis. Polit. Stud. 2017, 2, 42–59. [Google Scholar]
  68. Sunstein, C.R. Republic. Com; Princeton University Press: Princeton, NJ, USA, 2001. [Google Scholar]
  69. Rogan, J. Poscast Jack Dorsey (Twitter), Vijaya Gadde (Twitter) & Tim Pool with: Joe Rogan Experience #1258 5.3.2019. Available online: https://www.youtube.com/watch?v=DZCBRHOg3PQ (accessed on 7 March 2020).
  70. Habermas, J. The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society; MIT Press: Cambridge, MA, USA, 1991. [Google Scholar]
Figure 1. Agency and digital objects.
Table 1. Deontological Requirements and Recommendations.

Deontological requirement: Epistemic Rationality (moral actions must be justified with universally acceptable reasons).
Challenge of digitalization: 'Digital Objects' (technologies that aim to change human behavior, often bypassing rational deliberation).
Recommendations (case: BCT): Perfect duties. R1. BCTs should not undermine (but be compatible with) user autonomy. R2. BCTs should adhere to the validity claims of communication.

Deontological requirement: Motivational Rationality (having a reason to act should feature as the motivation of moral actions).
Challenge of digitalization: 'Digital Subjects' (outsourcing of will-power to overcome weakness of the will).
Recommendations (case: BCT): Imperfect duties. R4. BCTs should, where possible, foster and encourage autonomous decision making. R5. BCTs should be capacity-building rather than de-skilling. (Note: the motivational requirement might be too strict.)

Deontological requirement: Deliberative (Social) Rationality (societal decision making should follow, as much as possible, the Enlightenment model of rational deliberation).
Challenge of digitalization: Digital Intersubjectivity (debates in the public sphere are increasingly mediated by digital technologies such as social media).
Recommendations (case: BCT): R6. BCTs should aim at fostering communicative rationality and limiting strategic rationality. R7. BCTs should counter the polarization of public debate. R8. Providers of social media platforms should be seen as having editorial responsibilities, including a commitment to truth.
