**4. Results**

In the updated sample, we were able to locate mentions of all eight categories, with some ethical issues, such as Autonomy, mentioned much more frequently (71.4%, n = 5 of the seven sampled articles) than others. While most articles discussed benefits in terms of the increased autonomy and independence gained from using a BCI [11–14], the potential for autonomy to be compromised was also discussed. For example, Hildt [15] mentions the possibility of taking the information gained from a BCI, or in this case a Brain-to-Brain Interface (BBI), from the individual and using it without their consent or knowledge:

"Participants in BBI networks depend heavily on other network members and the input they provide. The role of recipients is to rely on the inputs received, to find out who are the most reliable senders, and to make decisions based on the inputs and past experiences. In this, a lot of uncertainty and guessing will be involved, especially as it will often be unclear where the input or information originally came from. For recipients in brain networks, individual or autonomous decision-making seems very difficult if not almost impossible" [15] (p. 3).

Another frequently discussed (57.1%, n = 4) ethical issue was Humanity and Personhood, since BCIs could affect one's sense of self. In one study of BCI technology in patients with epilepsy, participants reported a variety of perspectives on their sense of self: some said the device made them feel more confident and independent, while others felt they were no longer themselves, and one patient described the BCI as an " ... extension of herself and fused with part of her body ... " [11] (p. 90). Other articles discussed more generally the possibility of one's sense of self changing and the ways BCI technology could contribute to this. Sample and colleagues categorize three ways in which one's sense of self and identity could change: by altering the user's interpersonal and communicative life, by altering their connection to legal capacity, and through language associated with societal expectations of disability [14] (p. 2). Müller and Rotter argue that BCI technology constitutes a fusion of human and machine, stating that "the direct implantation of silicon into the brain constitutes an entirely new form of mechanization of the self ... [T]he new union of man and machine is bound to confront us with entirely new challenges as well" [13] (p. 4).

Research Ethics and Informed Consent was also a frequently mentioned issue (57.1%, n = 4). The consensus among the ethicists who discussed it was that obtaining informed consent is essential and that subjects must be made aware of all possible implications of BCI technology before consenting to use it. Additionally, some ethicists warned against the possibility of exploiting potentially vulnerable BCI research subjects. As Klein and Higger note: "[t]he inability to communicate a desire to participate or decline participation in a research trial—when the capacity to form and maintain that desire is otherwise intact—undermines the practice of informed consent. Individuals cannot give an informed consent for research if their autonomous choices cannot be understood by others" [12] (p. 661).

User Safety was discussed as often as Research Ethics in the sample (57.1%, n = 4), with both psychological and physical harm presented as serious possibilities that need to be considered [13–15]. One article discussed the impact of harm on the results of a BCI study, stressing the importance of stopping a clinical trial if the risks to individual participants begin to outweigh the potential benefits to science [12].

Issues of User Safety led to discussions of Responsibility and Regulation in the use of BCI technology. While the term "regulation" was mentioned in several articles (57.1%, n = 4), only one went into significant detail about regulation of BCIs specifically, discussing a "right to brain privacy" that could be understood in terms similar to existing privacy legislation, such as the General Data Protection Regulation (GDPR) in the European Union or the Health Insurance Portability and Accountability Act (HIPAA) in the United States, to regulate the information gathered in BCI use [15] (p. 2). This was also the only instance in which regulation was discussed in a legal sense for the technology itself, as opposed to regulating who it should be used for [11]. Responsibility was likewise mentioned multiple times (71.4%, n = 5), but again, only one article went into detail about who would be responsible for potentially dangerous or illegal uses of BCI technology [16]. Another discussed the "responsibility" for the research being divided among members of the research team [12], which was not Burwell et al.'s original meaning of the category Responsibility.

Privacy and Security concerns were mentioned somewhat less frequently (42.9%, n = 3) but were still discussed in depth. Three articles [13,15,17] addressed the risks of extracting private information from people's brains and using it without their knowledge or consent, a significant concern for BCI technologies. Müller and Rotter connected this issue to User Safety, arguing that the higher fidelity of BCI recordings makes the data inherently more sensitive, and that the "impact of an unintended manipulation of such brain data, or of the control policy applied to them, could be potentially harmful to the patient or his/her environment" [13] (p. 4).

Justice was also mentioned infrequently (28.6%, n = 2), with discussion centering on inequality and injustice within the research itself. These discussions often related back to the aforementioned questions of when trials would end and whether participants would subsequently get to keep the BCI technologies [12].

The final two social factors mentioned were Stigma and Normality (28.6%, n = 2) and Societal Implications (28.6%, n = 2). Stigma was mainly discussed from the perspective of the device itself carrying a negative stigma and being what marks the individual as stigmatized [12]. However, it was also suggested that universalizing the technology, rather than targeting it only toward a group considered "disabled", could reduce or eliminate this stigma [14]. Societal Implications were discussed from several standpoints. One was the possibility of BCIs being used as a kind of social network rather than solely as therapy for disabled individuals [15]. Another was that, because society tends to universalize technologies until they are used by nearly everyone and are hard to function without (e.g., cellphones), the use of BCI technology may become a " ... precondition to the realization of personhood" [14]. The same article also discussed the potential for BCI technology to reshape how society perceives disability.

Similar to Burwell et al.'s findings, Military Use and Enhancement were each mentioned rarely (14.3%, n = 1), and neither category was discussed beyond serving as an example of a potential BCI use [15]. The distribution of the more prominent categories can be seen in Table 1.

**Table 1.** The distribution of overarching themes of BCI ethics.
