### **4. Informed Consent**

The idea of autonomy in a medical context is frequently operationalized in terms of informed consent. Similar to the elements of autonomy mentioned above, we are primarily interested in whether a person has freely consented to a given enhancement: could they have done otherwise, and was there some external agent or factor that interfered with their decision-making? This draws from the third aspect of autonomy described by Christman above, that decisions made by the person are not the product of an external agent. The concept of informed consent developed out of the Nuremberg Doctors' Trial of 1947, which led to the creation of the Nuremberg Code, a set of ten principles that constitute basic legal and ethical rules for research involving human subjects. The first principle is: "The voluntary consent of the human subject is absolutely essential" [30]. This principle is, for the most part, concerned with the individual's ability to exercise free power of choice and to be free of any intervention of force, deceit, duress, coercion, or constraint. Under this principle, the individual should also be given access to sufficient knowledge of the decision to be made and be able to understand the elements involved in the decision-making. The ability to refuse or say no is also an essential element of informed consent, or, more importantly, of *voluntary* informed consent. The right to say no (or to withdraw or refuse) is a direct indicator of the individual's ability to "exercise free power of choice" without any coercion, duress, or intervention [30]. Examining this concept in a military context, it is important to identify situations in which soldiers have the ability to refuse an order or directive to accept enhancement technologies.

### **5. Vulnerability and Saying No**

Proper informed consent practices recognize that people may be especially vulnerable to diminutions in their autonomy and capacity to give informed consent. Human research ethics addresses the concept of vulnerability in depth, some aspects of which are applicable here. For example, prisoners are treated differently with respect to informed consent compared to other adults in medical contexts [31–33]. This vulnerability stems from factors such as prisoners being placed in physical isolation and the power dynamics in their relationships with authority figures. Prisoners are at greater risk of being manipulated or coerced into accepting interventions that they might otherwise refuse. This special vulnerability raises questions such as: Do prisoners have the capacity to understand what is being asked of them and what they are consenting to? Do they have the capacity to say no? If they do consent to an intervention, how can we be sure that they are saying yes freely and not as a product of coercion by institutional authorities? Given these concerns, prisoners require special safeguards when it comes to obtaining informed consent for medical interventions.

Soldiers are not the same as prisoners in their roles and treatment within their relevant institutions; however, the concept of unseen pressures and the possibility of coercion and duress can be used to draw some parallels between these two scenarios. The directive to obey the chain of command, and the prospect of reprimand for disobedience, create an environment in which soldiers could feel unduly pressured into accepting enhancement technologies. The power imbalance of authority relationships formalised in the military's hierarchical systems directly impacts an individual's right to say no to enhancement technologies.

For instance, decisions involving the use of human enhancement technologies would, at a minimum, involve an authority figure (a commander or responsible officer), a research or technical specialist, and a physician if the enhancement involves an alteration to human physiology (as with cognitive enhancements). If the enhancement is used for a specific operation (on-the-ground testing), one can presume that the unit or team members would need to be privy to the decision-making process. Privacy and confidentiality are normally available in a medical setting under doctor–patient confidentiality and individual privacy laws, or in a research setting under the ethics approval for the specific study. In prisons, such privacy and confidentiality are limited by the need for prison officers, medical specialists, and others to share information about a given prisoner. Moreover, given the close confines of incarceration, information is hard to suppress and can travel quickly and easily among inmates. Similar practical limits on privacy and confidentiality apply in a military context. As with prisoners, we recognize a form of differential vulnerability arising from informal authority relationships, such as those with one's team members and other outranking officers.

In addition, the "mission first" values promoted in the military add to the constraints on an individual's ability to freely consent. Where the priorities of commanding officers and those of individuals do not align, commanding officers may prioritise mission success and the safety of the unit as a whole over one individual's safety or privacy. This may well be the case for an individual being asked to accept a brain-stimulating technology that could leave them with adverse side effects, whether short or long term. History has shown as much: military personnel have been coerced or pressured into accepting experimental vaccines that were later identified as having less-than-ideal efficacy and several long-lasting side effects [34]. Whilst some enhancement technologies are supported by scientific research conducted to investigate their functions prior to use, the testing protocols are not the same as those applied to a product tested prior to market release, thereby raising concerns regarding safety and efficacy.

Vulnerability, as discussed here, is a set of contextual elements: Institutional and differential vulnerability [35]. Institutional vulnerability arises when individuals are subject to authority relationships in which the power imbalance is *formalised* in hierarchical systems, and differential vulnerability arises when individuals are subject to the *informal* power dynamics of others' authority. The above example involving prisoners is used here to highlight the parallels that can be drawn with regard to obtaining informed consent from soldiers. In the following sections of this paper, we suggest that, because of the elements of contextual vulnerability arising in the military context, soldiers fit the conditions of an especially vulnerable population, even when the specific technology could potentially enhance their autonomy through improved decision-making.

### **6. Can a Soldier Say No? The Special Case of Soldiers**

In this section, we look at three situations where the recipient is compelled to say yes, and we ask if they could say no. First, can a soldier autonomously say no to interventions that will enhance their own autonomy? Second, given the moral significance of some of their future actions, does morality itself compel a person to enhance their morality? That is, can a soldier say no to making better moral decisions? Finally, in a military context, soldiers are expected to follow commands. Therefore, can a soldier say no to following a command given the special conditions of being in the military and the ethical implications of doing so?

### *6.1. Can a Soldier Say No to Themselves?*

The first issue where a soldier's capacity to say no is limited derives from the potential for an intervention to change them. It is a question of continuity or "numeric identity".<sup>3</sup> Essentially, does the Soldier at Time 1 (T1) owe it to themselves for Soldier at Time 2 (T2) to be enhanced? The basic idea of this question works on two related aspects of numeric identity: First, that the enhancement causes some significant rupture between Soldier at T1 and Soldier at T2, such that there is no relevant

<sup>3</sup> We note here that in the philosophical literature, these issues are typically covered under discussions of "personal identity" rather than "numeric identity". However, as "personal identity" is also used in non-philosophical disciplines to refer to psychological aspects of a person's identity, we have chosen to refer to this as "numeric identity". For more on this particular nomenclature, see Henschke [26].

continuity between them; second, that Soldier at T2 not only has significantly enhanced autonomy as a result of the enhancement, but also that this matters morally. Combining these two points: as Soldier at T1 and Soldier at T2 are different enough people (as a result of the rupture), and Soldier at T2 will be so significantly improved by the enhancement, Soldier at T1 owes it to Soldier at T2 to undergo the enhancement.

The first premise of this argument draws on notions of numeric identity. Essentially, are Soldier at T1 and Soldier at T2 the same person or different people? "Discussions of Numeric Identity ... are often concerned with what is needed for a thing to be the same as itself. If time passes, how do we judge that" the Soldier at T1 and Soldier at T2 are the same person? [26]. Consider Sally who, since the age of 10, has wanted to join the army. By the time she is 30, Sally has spent a number of years as a soldier fighting in conflict zones; Sally at 30 years old (at T2) has a set of physical, psychological, and experiential attributes that significantly differentiate her from who she was as a ten-year-old at T1. We can obviously see that Sally is different at the two times. "But despite these changes, most of us would say that Sally is the same person she was at ten years old, now and until she dies... That is, Sally's identity persists through time, despite the obvious fact that she has changed" [26]. So, on the one hand, Sally at T1 and Sally at T2 are different, but on the other hand, Sally at T1 and Sally at T2 are the same.

The way that people have sought to explain this persistence or continuity, despite the differences, draws on different aspects of Sally. One is what Derek Parfit called overlapping chains of psychological connectedness [36–38]. The person Sally is today is very similar to the person she was yesterday. The person she was yesterday is very similar to the person she was two days before, and so on. So, though she may not be exactly the same person now as she was at ten years old, as long as there is a continuity of states linking the person she was then to the person she is now, an identity claim holds [26]. Others suggest an alternative explanation: instead of these overlapping chains of psychological connectedness, it is the facts about Sally's physical persistence that make her the same person at T1 and T2.<sup>4</sup> On this bodily criterion of numeric identity, it is the facts of her ongoing physical existence that make Sally the same person.

We suggest here that, whichever account one favours (psychological connectedness or the bodily criterion), Soldier at T1 and Soldier at T2 are the same person. Though they are different, they are not different people. Soldier at T1 and Soldier at T2 are still likely to be psychologically connected, and their body is ongoing. That they have received a technological intervention that enhances them is not sufficient cause to say that they are different people. On both accounts, they are still the same.

This is all relevant to whether the soldier can say no to an enhancement, as one potential argument against saying no is that the soldier owes it to their future self to say yes. On this argument, if Soldier at T1 said no, they would be unfairly denying Soldier at T2 the options or capacities offered by the enhancement. We encounter a similar form of argument in discussions about environmental stewardship and what present people owe future people [40,41]. The issues of what we owe future people rely at least in part on generational injustice, which in turn relies on the people at T1 or Generation 1 being different people from the people at T2 or Generation 2. Likewise, the "owe it to their future self" argument relies on the two selves being different people; it requires some significant difference between the T1 and T2 selves. However, this does not work as a compelling argument if the T1 and T2 selves are the same. Insofar as the soldier makes a free and autonomous decision about themselves, they are not denying options or capacities to any different future self.

Another way that the "owe it to themselves" argument can run is like this: Soldier at T2 is not simply improved or enhanced by the intervention; rather, their rational capacities, and the resulting autonomy from those rational capacities, are so far in advance of Soldier at T1's that Soldier at T2 essentially has "authority" over Soldier at T1. Here, we can look at the arguments around advance

<sup>4</sup> For instance, see [39].

directives, where a previous self has authority over the present self, but only when the previous self's autonomy is so far above the present self's extremely low autonomy [42]. In our situation, while the temporal logic is reversed,<sup>5</sup> the core of the argument is the same: one's self is so significantly advanced in terms of its autonomy that the enhanced self has authority over the less autonomous self. As such, Soldier at T1 owes it to themselves to do what they can to bring Soldier at T2 about. However, we think that, given the current state of the particular technologies, it is unlikely that Soldier at T2 would be so significantly enhanced that their autonomy must take precedence over that of Soldier at T1. Thus, we think that the authority of Soldier at T2 is not sufficient to prevent Soldier at T1 from saying no.

### *6.2. Can a Soldier Say No to Making Better Moral Decisions? Moral Decision-Making in a Military Context*

The next argument is more compelling. The basic claim here is that the soldier cannot say no to an enhancement if that enhancement improves their moral decision-making. For example, an NIBS technology that could enhance a soldier's situational awareness or vigilance to the extent that they are able to process a considerably greater information load could allow a soldier to improve their moral decision-making compared to that of a non-enhanced soldier. This might be a sacrifice they are compelled to make. Consider this argument by analogy: a soldier in a conflict zone is offered the option of using weapon 1 or weapon 2. Weapon 1 is a weapon that they have been using for years; they feel comfortable with it and like to use it. They are familiar with weapon 2, but they do not feel as comfortable with it. However, in this particular conflict zone, there is a reasonable risk that particular forms of combat will kill innocent civilians, and the soldier knows this. Now, weapon 2 is much more likely to avoid civilian casualties or harm, but otherwise it will impact the enemy the same as weapon 1. Again, the soldier knows that weapon 2 will be far better in terms of its discrimination. In this scenario, as per the ethics and laws of armed conflict, the soldier needs to choose weapon 2 over weapon 1.

The underpinning logic of this is that soldiers have a duty not just to adhere to relevant moral principles, but, if there are two options and one meets the moral principles better than the other, to choose that better option. Here, they are compelled to follow what morality demands. The same reasoning would likely hold with regard to particular enhancements: if the soldier is presented with an option that would improve their capacity to adhere to and meet specific military ethics principles, then that option ought to be chosen. On the face of it, the soldier's general moral responsibility overrides any personal disagreement they might have with a particular technological intervention. The idea that people should be morally enhanced is currently being explored in the literature [44–47]. These authors have advanced the argument that we ought to morally enhance ourselves *if such enhancements exist*. Some of these authors take quite a strong line on this: if safe moral enhancements are ever developed, there are strong reasons to believe that their use should be obligatory [46].

Their reasoning turns on access to destructive technologies like weapons, and is similar to what we have offered here:

> Around the middle of last century, a small number of states acquired the power to destroy the world through detonation of nuclear weapons. This century, many more people, perhaps millions, will acquire the power to destroy life on Earth through use of biological weapons, nanotechnology, deployment of artificial intelligence, or cyberterrorism ... To reduce these

<sup>5</sup> We also recognise here that there is perhaps an additional step required to make the claim that the T2 self has authority over the T1 self—that the future self can direct or dictate things to the present self. However, this line of argumen<sup>t</sup> may rely on some form of backwards causation, where the future causes present events to occur. We note here that backwards causation is a somewhat contentious concept. For more on backwards causation, see [43].

> risks, it is imperative to pursue moral enhancement not merely by traditional means, such as education, but by genetic or other biological means. We will call this *moral bioenhancement*.

> [47]

We suggest that in the context of military decision-making, particularly when considering decisions of significant moral weight, such as deciding when to shoot, who to shoot, and so on, there seems to be a convincing argument that soldiers ought to be morally enhanced. However, this is a contingent claim. First, it is not a blanket claim that the soldier must assent to all enhancements; it applies only to enhancements that improve their *moral* decision-making. We note here that there is an important discussion about the assumptions and feasibility of *moral* enhancement. One general assumption is that there is some agreement on what constitutes "good" moral decision-making. Much of ethics, from one's metaethical position to one's preferred normative theories, is a series of open questions. However, we point out that in the military ethics context, there are some generally accepted principles, like discrimination, proportionality, and necessity, that must be met. We do not claim that these principles are *true*, but instead agree with the just war tradition that things are better, all things considered, when soldiers adhere to these principles.

In terms of feasibility, as Harris points out, if moral enhancement involves the reduction of morally problematic emotions like racism, then he is "sceptical that we would ever have available an intervention capable of targeting aversions to the wicked rather than the good" [46].<sup>6</sup> Similarly, Dubljevic and Racine argue that "an analysis of current interventions leads to the conclusion that they are 'blunt instruments': Any enhancement effect is unspecific to the moral domain" [47].<sup>7</sup> The worry here is that the technologies that might aid in moral enhancement are so imprecise as to be discounted as serious ways to improve moral behaviour. In Harris' view, we should instead focus on current methods of moral enhancement like education [48]. We consider these points reasonably compelling; there is good reason to be sceptical about the likelihood that these technologies will have the precision to reliably and predictably improve moral decision-making. However, for the purposes of this paper, in order to explore the ethical implications of these technologies, we are assuming some potential for these technologies to work as promised [49]. That said, this is an *in principle* argument. Without certainty that these interventions do enhance moral decision-making, the argument against saying no becomes significantly weaker.

For instance, we need to question which interventions actually constitute *enhancements* to moral decision-making. Given the relation between enhanced memory, vigilance, and attention span and decision-making, as discussed in earlier sections of this paper, and the relation between improved decision-making and moral decision-making [29], one could argue that interventions that improve the quality of a soldier's cognitive functions do in fact enhance their chances of making better moral decisions.<sup>8</sup> It is important to note that, whilst the enhancements examined in this paper have the capability to enhance cognitive functions, research investigating the efficacy of some types of commercially available NIBS products has shown that these enhancements may not be as effective as expected [51].<sup>9</sup> Our thought here is that, as we are entertaining a claim that the soldier ought to be *compelled* to accept the intervention, there would need to be more than a mere likelihood that such an intervention will reliably enhance their moral decision-making. We suggest that this is reliant

<sup>6</sup> See p. 105.

<sup>7</sup> See p. 348.

<sup>8</sup> As noted earlier, we take a somewhat Kantian approach to reason and decision-making, as well as to their connection to moral decision-making. For this, we draw from the work of people like Michael Smith, or Jeanette Kennett and Cordelia Fine [28,29]. This contrasts with a more Humean account, like the social intuitionist model of moral decision-making advocated by Jonathan Haidt [50].

<sup>9</sup> In some cases, one type of cognitive function could be enhanced at the cost of another. For example, increased learning memory could come at a cost of decreased levels of automated processing.

on a combination of assumptions about moral psychology and empirical claims about particular interventions.<sup>10</sup>

We also need to take into account other factors: does the intervention have any side effects? In particular, does it have any side effects on moral decision-making? For instance, amphetamines are a range of pharmaceutical interventions that have been used to reduce the need for sleep in the military [53]. However, they have a range of side effects, such as causing aggression and long-term psychological effects, that would argue against a soldier being compelled to take them on moral grounds. Investigation into potential side effects of NIBS has identified short-term side effects, such as reactions at the electrode sites, as well as a few cases of blackouts and seizures [17,52]. These investigations were conducted on healthy volunteers in a medical/laboratory setting. As the technology is still relatively new, further investigation will be required to identify long-term side effects of its use. Any side effects would need to be taken into account and weighed against the likelihood that the intervention does indeed improve moral decision-making. For instance, if there were only an inference that the intervention improved moral decision-making and there were known deleterious side effects of the intervention, the case that the soldier can say no would be much stronger.

However, even if the interventions were likely to improve moral decision-making without significant side effects, some might still balk at the idea that soldiers *must* consent to such enhancements, because such interventions seem to override the principle of autonomy. Nevertheless, perhaps the soldier has to assent to enhancements that would improve their moral decision-making capacity. This is because of the nature of military actions, or at least military actions that involve killing people. These actions are of significant moral weight, and so need to be treated differently from non-moral decisions or actions.<sup>11</sup>

There is a counter-argument to this: that the position that one must assent to moral enhancements is absurd and extreme. If it is true that we must accept interventions that improve our moral decision-making, then everyone on earth is morally required to assent to these moral enhancements. If the particular case holds in the military context—that a soldier must consent to being morally enhanced—this would surely hold for everyone, and this seems absurd: it would be such a significant infringement on personal autonomy, bordering on authoritarianism, that we ought to reject it. While there is perhaps substance to this counter-argument at a general level, we can reject this reductio ad absurdum, as we are only looking at the military context. Moreover, we are only considering those military members whose roles and duties would force them to make life and death decisions, and to make those decisions in a way that would benefit from the enhancements described in Section 2. The average person does not face such life and death decisions in such high-pressure contexts. Finally, even if we constrain potential recipients to the military, arguably, no military has or will have the capacity to roll this out for every serving member. Maybe they should [46,55,56], but that is a different point from what we are concerned with here. What we are concerned with is the capacity to say no. As we can reasonably constrain the claim to particular members of the military, the reductio ad absurdum fails.

<sup>10</sup> Counter-arguments [52] indicate that concerns regarding explicit coercion and potential impacts on individual autonomy and informed consent in the military are perhaps misplaced, given the low prevalence of use and social acceptance of tDCS and its still largely unexplored efficacy. However, we propose that, though these interventions are not yet widely used, this does not negate exploration of the potential ethical concerns should their use become more widely accepted.

<sup>11</sup> We recognise that this position, that "moral reasons" can override personal beliefs, is contentious and contested. While we do not have space to cover the topic here, we suggest that one of the features of moral reasons that makes them different from non-moral reasons is that they ought to count significantly in one's decision-making [54]. What we will say is that, given the specifics of the technologies that seem likely to be used for such enhancements, as they are currently non-invasive and potentially reversible, the argument that a soldier has a right to conscientiously object to such enhancements is weak. As with "weapon 1" versus "weapon 2" above, if the technologies *do* enhance moral decision-making, and using them is not so different from using two different weapon types, the right to say no is limited at best. However, as we have taken care to note throughout the paper, there is perhaps a stronger conscientious objection argument that says "I say no to this technology, because it does *not* actually enhance moral decision-making."

### *6.3. Saying No to an Order: Ethics of Following Commands and Being in the Military*

The third element of this discussion arises because of the special conditions of being in the military. First, soldiers are trained to follow orders, thus diminishing their capacity to say no. Second, soldiers decide to enter the military knowing not only that they will potentially be asked to engage in risky activity, but also that they will be expected to follow orders. These points are complex and nuanced, as we will discuss, but when combined with the previous argument about saying no to making better moral decisions, we suggest there might be situations where a soldier cannot say no to particular moral enhancements.

As an essential part of their training, soldiers are trained to follow orders, something that may conflict with their existing moral identity [57]. Of course, this does not mean that they are automatons; many professional militaries now include training on the laws of armed conflict, military ethics, and the just war tradition. Any such training will explicitly or implicitly include recognition that a soldier should not follow an order that they know to breach the laws of armed conflict or a relevant ethical principle. For instance, many soldiers are taught that they can refuse an order to kill a prisoner of war or an unarmed, unthreatening civilian. However, as history shows [58], commanders still give orders that are illegal or immoral, and many soldiers still follow those commands. Moreover, as was infamously demonstrated by Stanley Milgram, many people will follow the commands of someone perceived to be in a position of authority even if what they are being asked to do is objectively morally objectionable [59,60]. The point here is that even when significant moral principles may be transgressed, many soldiers will still follow commands; their capacity to say no is diminished.

The relevance here is that if it is generally psychologically difficult for a soldier to say no to a command, particularly commands that do not obviously contravene the laws or ethics of war, it may be equally psychologically difficult to say no to commanders ordering them to accept an enhancement. We can consider here a general claim that military command structures, training, and socialisation to follow orders undermine our confidence that soldiers can say no to enhancements.

Adding to this explanation, again arising in part from military training, is that soldiers feel a significant responsibility to their comrades and/or to the nation for which they fight. "The role of this socialisation process is to separate soldiers from their previous social milieus and inculcate a new way of understanding the world. Central to this process is loyalty, obedience, and the subsuming of one's individual desires to the needs of the greater cause" [61]. Not only are soldiers trained to seriously consider sacrificing themselves for some greater good, but as training progresses, they are taught that they ought to. "The officer in training builds up a professional identity on the basis of his personal immersion in the ongoing, collective narrative of his corps. This narrative identity is imparted not by instruction in international law but by stories about the great deeds of honourable soldiers" [62]. Some see this loyalty to one's closest comrades as fundamental to military practice: "The strongest bonds are not to large organizations or abstract causes like the nation; rather, they are to the immediate group of soldiers in one's platoon or squad, before whom one would be ashamed to be a coward"—Francis Fukuyama, quoted in [61]. Here, the issue is whether a soldier feels able to say no, given the concern that, if they do, they will be letting their comrades down.

Similarly, a number of people sign up to be soldiers due to a sense of loyalty to their nation, to protect their nation, and to fight for what is right. Serving in the military "is a higher calling to serve a greater good, and there is nothing incoherent or irrational about sacrificing oneself for a greater good" [63]. On this view, soldiering is not a mere job, but something higher. For a group like "the clergy, the 'larger' and 'grander' thing in question is the divine. [In contrast, for] soldiers, it is the closest thing on earth to divinity: The state" [63].<sup>12</sup> Here, rather than the loyalty being to their comrades, it is to the larger nation for whom they fight and/or the values for which they fight. In both

<sup>12</sup> We note here that this author is not endorsing this view; rather, they are describing the notion of military service as distinct from a normal job [63].

aspects, though, we can recognise a strong weight in favour of doing what is asked; were this another job, this responsibility would play far less of a role, since very few jobs reliably expect a person to sacrifice their life. "If it is not permissible for civilian employers to enforce compliance with such 'imminently dangerous' directives, then why is it permissible for the military?" The obvious answer is that soldiers make a commitment of obedience unto death; they agree to an "unlimited liability contract" upon enlistment, so called because there is no limit to the personal sacrifice that they can legitimately be ordered to make under that contract [63]. Given the importance of soldiering, the soldier forgoes many basic rights. In line with this, they may also forfeit a right to say no to enhancements.

This brings us back to the argument that we need to think of informed consent in a military context as different from informed consent in a medical context. We might instead think of entry into the military as a form of broad consent, in which one consents to giving up certain later consents.<sup>13</sup> This is not a "freedom to be a slave" argument, since military service will typically end; where enhancements differ, however, is that their effects are ongoing.<sup>14</sup>

However, this all brings us to a second vital aspect of the capacity to say no: soldiers sign up to join the military, and unlike many other jobs, they are expected to follow orders. We would hope that people signing up to join the military have some foreknowledge that what they are committing to is different from a normal job, with a set of important expectations, including following orders. For instance, it would be absurd to join the military and then complain that one's commander is bossy, or that one does not like guns or state-sanctioned violence. It is essentially a form of caveat emptor: the person knowingly gives up certain freedoms as part of joining the military, and their freedom to say no is significantly curtailed. Just as they have significantly diminished freedom to say no to commands, their freedom to say no to an enhancement is diminished. We return to the issue of exploitation in enlistment below.

The above argument becomes even more compelling when considering the narrowed focus of this paper: whether a soldier can say no to technologies that enhance their military decision-making capacity. As we discussed in the technology summary and in the section on whether a soldier can say no to themselves, the technologies we are concerned with are non-permanent and non-invasive. While their intended use is explicitly to enhance a person's decision-making capacity in a military context, thus qualifying them as enhancement technologies, they are as much a military tool as they are a biotechnological intervention. While the interventions we are concerned with here are not equivalent to asking a soldier to carry a weapon or put on body armour, neither are they equivalent to an invasive clinical intervention that irreversibly alters their physiology.

The relevance of this is that the arguments about saying no to clinical interventions that one finds in the biomedical literature have less purchase in the military context than they do in a clinical biomedical context. That informed consent in a military context is different from that in a medical or clinical context is a well-founded view [67–69]. This does not mean that we jettison the notion of informed consent in a military context, but rather that it needs to be adapted to that context.

We also need to consider whether the purposes of particular enhancements add further moral weight to the argument that soldiers cannot say no. For instance, if an enhancement were shown to increase a particular military team's success or survival rate, then there is increased weight for that enhancement to be accepted; for example, if cognitive enhancements advanced to the stage where they could significantly enhance a soldier's memory processing or reduce or eliminate cognitive fatigue on the battlefield, these are factors that directly impact survival rate on the ground. It is like a soldier saying no to a better weapon; insofar as that better weapon hits its targets but is more proportionate and discriminate than existing weapons, we find a prima facie case that the soldier should use the better

<sup>13</sup> For more on broad consent, see [64,65].

<sup>14</sup> For more on enhancements and the duty of care to veterans, see [66].

weapon. So too, we have a prima facie case that the enhancement be accepted.<sup>15</sup> Moreover, as was discussed above, if a particular enhancement is likely to increase the capacity to adhere to ethical principles like *jus in bello* proportionality or discrimination, then the soldier should not say no to making better moral decisions. The relevance to this section is that both of these arguments that a soldier has a responsibility to fight better are strengthened when considering that the soldier signed up for entry into the military.

All that said, this argument has a significant caveat of its own: it assumes that all people join the military freely and with the relevant advance knowledge of what the role entails. However, as Bradley Strawser and Michael Robillard show, there is a risk of exploitation in the military [71]. Exploiting people's economic, social, and educational vulnerabilities to get them to enlist would significantly undermine any notion of broad consent, or the claim that soldiers must accept enhancements because "they knew what they were getting into when they signed up." Moreover, the arguments developed here are considerably weaker when considering soldiers who were conscripted to fight. Thus, not only does the military context change how we would assess whether soldiers can say no to an enhancement, but the conditions under which they are enlisted are also essential to any relevant analysis.
