Article

Causal Deviance in Brain–Computer Interfaces (BCIs): A Challenge for the Philosophy of Action

by Artem S. Yashin 1,2
1 MEG Center, Moscow State University of Psychology and Education, 127051 Moscow, Russia
2 Laboratory for Neurocognitive Technology, NRC “Kurchatov Institute”, 123182 Moscow, Russia
Philosophies 2025, 10(2), 37; https://doi.org/10.3390/philosophies10020037
Submission received: 10 November 2024 / Revised: 11 March 2025 / Accepted: 13 March 2025 / Published: 25 March 2025

Abstract
The problem of deviant causal chains is a classic challenge in the philosophy of action. According to the causal theory of action (CTA), an event qualifies as an action if it is caused by the agent’s intention. In cases of deviant causal chains, this condition is met, but the agent loses control of the situation. To address this, theorists suggest that the intention must cause the action “in the right way”. However, defining what constitutes the “right way” is difficult, as the distinction between having and not having control can be subtle. In this paper, I demonstrate that brain-computer interfaces (BCIs) provide important insights into basic causal deviance. I examine how existing strategies might account for deviant causation in BCI use and highlight their challenges. I advocate for reliability strategies—approaches that focus on identifying which causal pathways reliably connect an agent’s intentions to their outcomes. Additionally, I compare two BCIs that differ in their sources of occasional malfunction. I argue that the presence of causal deviance in a given case depends on the boundaries of the system that enables action. Such boundary analysis is unnecessary for bodily movements; however, for basic actions performed through a machine, it becomes essential.

1. Introduction

1.1. The Problem of Deviant Causal Chains

The primary objective of the philosophy of action is defining what an action is [1,2]. A theory of action needs to draw a line between events controlled by the agent and their mere behavior [3]. As of today, the most widely accepted [1,2,4] philosophical theory of action is the causal theory of action (CTA), which originated in the works of Davidson [5] and Goldman [6]. According to modern versions of CTA, an action is an event caused by a mental state of a certain kind. Usually, this kind of state is associated with the concept of intention, although historically CTA involved such mental states as desires and beliefs [4].
In Davidson’s original version of the CTA [5], an action must be driven by a primary reason, which also has to be the cause of the action. This primary reason is a combination of a desire and a belief that this desire can be fulfilled. According to Davidson, the primary reason might not always manifest as a single mental episode: a belief and desire which form it just have to be involved in the causal explanation of action. However, since Goldman’s work [6], the version of the theory in which an action is caused by a single mental event has become the mainstream view.
One challenge for the CTA is dealing with deviant causal chains. This issue in the philosophy of action is usually illustrated with a case involving a rock climber, introduced by Davidson [7]. In this scenario, a climber is holding another man on a rope. He experiences fear and distress, and desires to free himself from the weight and danger. He believes that loosening his grip will achieve this. However, this desire and belief unnerve him so much that he loses his grip. Davidson emphasizes that the climber may never have chosen to let go and did not do so intentionally. Although the loosening of the grip is driven by a proper combination of a desire and belief, the climber does not control the situation: the mental states of the climber lead to the effect in the wrong way. A similar example by Morton [8] describes someone named Leo who is about to shoot an offender. While aiming, his excitement at the prospect of revenge causes a tremor, and his finger accidentally pulls the trigger, resulting in the shot. Leo wanted to shoot the offender and the offender was shot, but Leo did not control the shot.
In cases of causal deviance, the agent’s mental states—which, according to CTA, should function as the cause of the action—lead to the intended outcome by accident, making the result appear involuntary or uncontrollable. To rule out such anomalous cases, CTA typically requires that the mental state cause its intended effect “in the right way”. Why do situations with causal deviance pose a problem for CTA? We could bite the bullet and accept that what happens to the rock climber is an action, or we could abandon CTA in the face of causal deviance, as has been proposed by Di Nucci [9]. However, it can be argued that such a move would contradict our basic intuitions about agency. Mainstream action theorists follow these intuitions, thereby singling out a set of unusual situations that a general theory of action should not (by design) recognize as actions. To address this, CTA must explain what it means for actions to be caused “in the right way”.
It is important to distinguish between two kinds of causal deviance to define the purview of this study, as the problem in its purest form only arises in a limited set of situations. Consider a different example from Davidson [7]: a man attempts to shoot another man but misses. Instead, his shot startles a herd of wild pigs, which then trample the intended victim, causing his death. While there is a deviation in how the shot leads to the victim’s death, the shooter’s agency is well defined at some level: he decides to fire the gun and shoots—this event is an action. According to a distinction made by Brand [10], the stampede involves consequential waywardness, while the rock climber’s situation involves antecedential waywardness, also referred to as basic causal deviance [11]. In the first case, causal deviance arises downstream of the movement, in its consequences, while in the second case, it arises in the transition between the mental antecedent and the movement. My focus will be on basic deviance, which raises questions about whether an action occurs at all. From this point forward, when I refer to causal deviance, I will specifically mean basic deviance.
In this paper, I examine the implications for the philosophy of action that arise from cases of deviant causal chains in brain–computer interfaces (BCIs). In Section 1.2, I begin with an overview of existing strategies for addressing basic causal deviance, followed by an explanation in Section 1.3 of how agents can act through BCI use. In Section 2, I outline the methodological framework on which I rely. In Section 3, I introduce a thought experiment involving basic deviance in BCI-mediated action and assess how existing strategies for addressing causal deviance might account for this case. In Section 4, I analyze two similar cases in which causal deviance either occurs or does not, depending on the source of the malfunction affecting the system’s performance. In both cases, the source of the malfunction lies entirely outside the agent. After comparing these thought experiments, I highlight key distinctions in Section 5 that suggest new requirements for CTA in addressing the problem of deviant causal chains. I argue that reliability-based strategies provide the most effective framework for accounting for these cases of deviant causation in BCI-mediated actions. In Section 6, I conclude with a summary of these arguments.

1.2. The Existing Strategies

Various strategies for dealing with deviant causal chains have been proposed in the literature. These strategies aim to clarify the “right way” from intention to movement, ensuring the agent maintains sufficient control over the action. One of the earliest approaches was proposed by Goldman [6], who argued that solving the problem of deviant chains should be left to neurophysiologists rather than philosophers. However, the examples provided by various authors demonstrate that the efforts of neurophysiologists alone will not be sufficient. In his review, Mayr [12] reconstructs four types of strategies for addressing deviant causal chains in action: the immediate causation strategy, the sensitivity strategy, the sustaining causation strategy, and the ‘manifestation’ or ‘well-functioning’ analysis of action. I will follow his analysis, adding one further type of strategy.
The immediate causation strategy seeks to limit the number of steps in the causal chain from intention to action. Mayr identifies Brand [10] and Searle [13] as proponents of this approach. Both authors distinguish between two types of intentions: Brand separates distal and proximal intentions, while Searle differentiates between prior intentions and intentions-in-action. This kind of distinction allows them to impose a condition on CTA: the causal antecedent of an action should always be an intention from the more immediate class—either a proximal intention or intention-in-action. Since this type of intention occurs temporally close to the action, the gap between them seems to be insufficient for the emergence of a deviant causal chain. The sensitivity strategy highlights that in cases of deviant causal chains, the connection between the agent’s intention and the outcome lacks sufficient precision. This strategy requires that even a small change in the content of the intention should result in a change in the movement. Mayr attributes this strategy to Peacocke [14]; a similar account has also been defended by Schlosser [15]. The sustaining causation strategy requires that the intention continuously causes the movement throughout its execution. In this approach, the bodily action must be guided by the corresponding intention-in-action, so that causation is ongoing. Proponents of this strategy are, for example, Bach [16] and Bishop [17].
Finally, Mayr distinguishes the ‘manifestation’ or ‘well-functioning’ analysis of action, which shifts focus away from explaining the causal chain between intention and movement. Instead, it begins with an assumption that action is a manifestation of the agent’s abilities, or a realization of a certain function. A variation of this strategy within the mainstream branch of causal theory is developed by Enç [18]. He argues that the concept of causal deviance can be clarified by identifying the evolutionary function of specific actions. According to Enç, intentions should lead to outcomes that align with their evolutionary purpose, so causal deviance represents “broken” actions or an episode of bodily dysfunction. Mayr himself advocates for a different kind of manifestation analysis, moving away from traditional event-causal CTA and leaning toward an Aristotelian view of action. His approach to agency differs methodologically from the one used in this study, as it does not align with event-causal analysis of action.
A strategy similar to the biological ‘well-functioning’ strategy defended by Enç has been recently proposed by Roloff [19]. Building on Millikan’s teleofunctionalism [20,21], Roloff adds an extra condition for the causal connection that should account for cases of causal deviance. He argues that actions must be caused in accordance with an explanation of how they have been caused historically. If an intention triggers the movement in a manner consistent with how that intention has historically functioned within the agent’s cognitive system, then the movement qualifies as an action.
In addition to Mayr’s categorization, I will identify ‘reliability’ strategies, which focus on determining the causal reliability of an action. These strategies are similar to ‘well-functioning’ approaches, as both examine actions within the context of the system’s causal structure. However, unlike ‘well-functioning’ strategies, ‘reliability’ strategies do not rely on the biological foundations of agency.
An approach proposed by Aguilar [11] involves identifying an agent’s repertoire of actions by analyzing the mechanisms that enable the agent to perform action types. While this solution is not fully developed, as it does not detail how to precisely identify the mechanisms supporting these action types, it does offer a guiding principle for addressing the problem. Shepherd’s strategy [22] examines a counterfactual space that is constructed by considering a spectrum of possible circumstance sets surrounding action. To avoid causal deviance, two conditions must be met. First, an intention must be general enough that the agent can consistently perform similar actions in comparable situations, as intentions with very specific content may lead to causal deviance. This connects Shepherd’s approach to sensitivity-based strategies. Second, the causal pathway between intention and outcome must produce similar results in similar circumstances with enough regularity. Essentially, Shepherd emphasizes the need for stability in the causal pathway across different circumstances.
In summary, strategies for dealing with deviant causal chains are either based on clarifying the causal connection between intention and movement, specifying the content of the intention, or considering the systematic context in which intention leads to movement. Alternatively, a researcher may choose to move away from the event-causal theory of action altogether. A common feature of the available strategies is their concern with bodily actions, even though agency does not necessarily have to be expressed through voluntary movements. It is worth noting that Shepherd mentions [22] (p. 32) that his solution applies not only to bodily actions but also to mental actions. Mental actions, for example, include mental arithmetic, reciting poems in one’s head, or creation of mental images (see [23]). However, mental actions do not result in any intended external outcome. Meanwhile, there is another category of actions that enable an agent to have an effect on the external world.

1.3. Acting Through a BCI

Consider a situation where an agent’s intentions are carried out by a machine instead of their own body. One example of such a device is a brain–computer interface (BCI) [24,25,26]. A BCI translates the agent’s brain activity into commands for the computer. I will focus on active BCIs, where a user actively controls a computer by means of generating the appropriate brain activity [27]. One of the ways to classify BCIs is based on how they register the user’s neural activity: the registration methods can be either invasive, requiring surgical intervention, or non-invasive [28]. Additionally, BCIs can be categorized as synchronous or asynchronous depending on when the user can issue commands to the system. In a synchronous BCI, commands are given at specific points in time, while in an asynchronous BCI, commands can be issued at any time.
For the purposes of this discussion, whether a BCI is invasive or non-invasive is not crucial, but I will be interested in asynchronous systems. I will presuppose that it is technologically possible to develop an asynchronous BCI with exceptionally high classification accuracy and information transfer rate, making it almost as reliable for performing actions as the human body. Invasive interfaces hold great promise, for example, in enabling real-time decoding of speech [29]. They can also provide control of body parts to the user by decoding brain activity [30]. Non-invasive interfaces, however, have achieved more modest results so far.
As a promising technology for the enhancement of human agency, BCIs have been of interest to philosophers of action for some time: BCIs point to a disembodied way of affecting the world [31,32] and raise difficult questions about moral responsibility [33,34]. In theory, they can operate not only on the basis of intentionally induced brain activity, but also on the basis of changes in activity that occur before an intention is fully formed, or of parallel unconscious processes. BCI activation can follow various principles, making it a valuable tool for studying conscious control (e.g., [35]).
BCIs can be used to control robotic limbs [36] or exoskeletons [37], or assist in bodily movements, but these applications are not the focus here. When an agent uses a BCI as a means of computer input, their actions are not bodily actions. However, acting through a BCI can still be understood as a combination of mental actions and their effects—specifically, the operation of classification algorithms and other processes in the computer. Should issuing a command to the interface be considered a distinct mental action within BCI use? In most cases, it seems that it should, primarily due to technological limitations: users typically need to perform specific mental operations and maintain focus to interact with the interface. Given this typical state of affairs, Steinert et al. [32] simply classify BCI-mediated actions as non-basic. If BCI use always involves mental actions, then basic deviance can arise only within mental actions. If deviance occurs later, within the computational processes of the system, it constitutes consequential waywardness—where the agent acts mentally, but the action fails to produce the intended outcome in the correct way. This study focuses on basic deviance, which can arise in BCI-mediated actions if the technology allows for the possibility of basic actions.
Basic actions, a notion that Goldman refined in his version of CTA [6] from the term introduced by Danto [38], are actions that can be performed “at will”, without requiring practical knowledge. An agent can achieve something else through a basic action—such as closing a tab by tapping a finger on the screen—but the basic action itself (the tapping) is not performed through any other action. For Goldman, basic actions consist of various voluntary movements, but his theory was developed before the advent of BCIs. If a user performs basic BCI-mediated actions, they do not form distinct intentions to issue commands to the BCI. For a user to act “at will” using a BCI, the system must be asynchronous, meaning it must continuously translate the agent’s state into action throughout the control session.
So, if users perform basic actions with BCIs, separate mental actions are not involved. BCI-mediated actions would resemble something like raising a hand—where, as Wittgenstein famously noted, one cannot find a volition, or mental command to the body [39]. In the literature, there is a position—for example, defended by McCann [40]—that basic actions are volitions. However, I will assume that simple bodily actions do not involve them. If we accept that both bodily actions and BCI-mediated actions can be basic, then we should not reduce acting through a BCI to a combination of mental actions and external events.
There is evidence that suggests some users can learn to act through a BCI without intermediate actions. The literature describes such scenarios for both invasive interfaces and non-invasive EEG-based interfaces. For example, a tetraplegic patient with 96 microelectrodes implanted in her motor cortex [36] reported that she could control a robotic arm “automatically”, without needing intermediate mental commands [41] (see a video recording at [42]). Similarly, a patient trained in EEG-based BCI control with kinesthetic motor imagery shared after extensive practice that the process felt seamless [43]. The first user of Neuralink also remarked that, during a control session, he simply thought “about where I want the cursor to go” [44]. However, all this evidence is based on participant reports, which limits its reliability. As BCI technology advances, more direct control is expected to become common. Although there is not yet enough data to fully support the feasibility of users automating the control of BCI, especially in non-invasive cases, I believe it is up to the critics to challenge this view.
At the same time, research on user experience with BCIs, including their sense of agency [45], does not provide sufficient data on the phenomenology of BCI-mediated actions. Emergence of the sense of agency relies on multiple cues, including intentions, feedback, and the beliefs of the agent [46]. Users are capable of making explicit judgments about agency in relation to BCI-mediated actions. Evans et al. [47] studied users’ explicit judgments of agency when controlling a standard motor imagery-based BCI and found that these judgments were highly dependent on visual feedback, while being less sensitive to temporal shifts compared to bodily actions. Serino et al. [48] manipulated the sense of agency in a user of an invasive BCI but were primarily interested in its neural mechanisms rather than the content of agentive awareness. Notably, in confidence ratings for action authorship, users relied more on somatosensory feedback (through stimulation) than on visual feedback, somewhat mirroring the feedback hierarchy observed in bodily actions. Whether a more immediate form of the sense of agency (or a “feeling of agency” [49]) arises in BCI interaction remains an open question that requires further investigation.
The problem of deviant causal chains originally emerged in discussions about bodily actions, and the development of strategies for solving it took place in that context as well. I aim to show that facing the challenge of deviant causal chains requires understanding how the architecture enabling the agent to act is organized in a particular case. My reasoning will be based on examples of actions performed by means of a BCI. I will demonstrate that, even for two BCIs of equal efficiency, whether a causal chain results in an action depends on how the system is designed. Building on this, I will explore the implications that BCIs as a technology have in addressing the problem of deviant causal chains.

2. Methodology

My further reasoning will be based on a set of assumptions and the general methodological approach commonly adopted by researchers studying the problem of deviant causal chains.
Deviant causal chains challenge CTA, indicating the need for further refinement. The significance of this challenge is drawn from the intuitions of researchers. Philosophy of action aims to define what actions are by combining our basic concepts with scientific evidence. In turn, our understanding of action is influenced by the phenomenology of agency—the subjective experience of the acting agent. Addressing the problem of causal deviance requires an ongoing engagement with these intuitions to refine the concept of control. While intuitions are fallible, abandoning them altogether would be misguided. Replacing this inquiry with a rigid, dogmatic definition of action or focusing exclusively on the physiological mechanisms of control in a narrow sense would overlook the core philosophical problem.
It is also worth noting that general theories of behavior and cognition can be valid without competing with CTA. For example, Friston’s free-energy principle [50,51] does not conflict with CTA, as it operates on a different level of analysis of an agent’s states. An algorithmic approach to studying human and animal behavior [52,53] can provide valuable insights into the mechanisms underlying cognition and action. Mathematical models of behavior represent an intriguing area of research; however, their relationship to causal theory of action has yet to be fully explored, as these two fields use different methodologies and approach the workings of the human mind from distinct perspectives. The current paper focuses specifically on issues of proximate control within CTA, without aiming to offer a general explanation of decision-making and behavior.
The other part of my assumptions concerns BCI technology. I will formulate thought experiments involving devices with certain properties. A common way to assess the effectiveness of a BCI is by measuring its classification accuracy—the likelihood that the interface correctly identifies a command—and information transfer rate [28]. While the level of BCI development I am interested in is not routine, it is also not too far from current realities (for state-of-the-art designs in BCI studies, see [54]). The cases presented below will not require imagining anything that seems to approach the limits of physical possibility in our world. Using examples based on real technologies allows us to explore situations that closely resemble those occurring around us.
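To make these two measures concrete, here is a minimal sketch in Python of the standard bit-rate formula reported in [28]: with N equiprobable commands and classification accuracy P, each selection carries log2(N) + P·log2(P) + (1 − P)·log2((1 − P)/(N − 1)) bits, and the information transfer rate follows by dividing by the time per selection. The numerical values below are invented purely for illustration.

```python
import math

def bits_per_selection(n_commands: int, accuracy: float) -> float:
    """Bits per selection for N equiprobable commands and accuracy P (formula reported in [28])."""
    n, p = n_commands, accuracy
    if p >= 1.0:                      # perfect classification yields log2(N) bits
        return math.log2(n)
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_min(n_commands: int, accuracy: float, seconds_per_selection: float) -> float:
    """Information transfer rate in bits per minute."""
    return bits_per_selection(n_commands, accuracy) * (60.0 / seconds_per_selection)

# Illustrative numbers only: a two-command hybrid BCI at 95% accuracy,
# issuing one selection every 2 s, yields roughly 21 bits/min.
print(round(itr_bits_per_min(2, 0.95, 2.0), 1))
```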

3. Causal Deviance in BCI-Mediated Actions

Let us explore the complexities of actions performed with BCIs. In my examples, I will appeal to eye gaze hybrid brain–computer interfaces, or eye–brain–computer interfaces (EBCIs) (e.g., [55,56,57]). Using this kind of hybrid BCI, an agent can choose an object on the screen by using gaze-based control and interact with it through specific changes in brain activity. For example, a user can select an icon they are looking at, combining gaze-based and BCI-mediated actions. This is close to the way an EBCI works: the system recognizes brain activity associated with the user’s intention to select the item they are fixating for interaction.
Imagine a hybrid BCI that allows the user to perform two types of operations with the object they are looking at: selecting it for moving, or deleting it. These operations are triggered by two distinct mental states, aside from the resting state between the operations. Let us assume the user selects an image icon on the screen and drags it with their gaze toward a folder icon in another part of the screen. This combination of selecting and dragging the image can be viewed as a single action: the user simply intends to move the image into the folder. At the same time, this action contains several basic actions: a gaze dwell on an icon, a BCI-mediated selection of the icon, and its gaze-based relocation.
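For readers who want a more concrete picture of the control loop just described, the following Python sketch shows one way such a hybrid interface might dispatch its two discrete operations. The dwell threshold, score threshold, and function names are my own illustrative assumptions, not a description of any actual EBCI implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Command(Enum):
    REST = 0
    SELECT = 1   # select the fixated icon for moving
    DELETE = 2   # delete the fixated icon

@dataclass
class GazeSample:
    icon_id: str | None   # icon currently fixated, if any
    dwell_ms: float       # how long the gaze has dwelt on it

DWELL_THRESHOLD_MS = 500.0   # hypothetical dwell time for an "active" fixation

def classify_eeg(select_score: float, delete_score: float,
                 threshold: float = 0.8) -> Command:
    """Stand-in for the real classifier: in practice these scores would come
    from a trained decoder applied to a short window of EEG data."""
    if select_score >= threshold and select_score > delete_score:
        return Command.SELECT
    if delete_score >= threshold:
        return Command.DELETE
    return Command.REST

def step(gaze: GazeSample, select_score: float, delete_score: float):
    """One cycle of the asynchronous loop: a discrete operation fires only when
    a long enough gaze dwell coincides with a non-rest EEG command."""
    if gaze.icon_id is None or gaze.dwell_ms < DWELL_THRESHOLD_MS:
        return None
    command = classify_eeg(select_score, delete_score)
    if command is Command.REST:
        return None
    return (command.name, gaze.icon_id)

# Example: the user dwells on the image icon while producing the "select" state.
print(step(GazeSample("image_icon", 650.0), select_score=0.93, delete_score=0.10))
# -> ('SELECT', 'image_icon')
```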
Case 1—An Interface with Artifacts: Imagine a hybrid BCI that is highly reliable, but not flawless, as it occasionally misinterprets commands (see Figure 1 for a scheme).
As established earlier, the user intends to move an image into a folder. Now, suppose the user accidentally blinks, leading to corrupted EEG data (blinks are known to induce artifacts in EEG signals [58]). The BCI correctly translates the user’s intention, but only because of these artifacts in the neural data recording. Had the artifacts not occurred, the BCI would not have translated this intention. Such situations are possible because the BCI does not translate intentions with 100% accuracy.
Now, suppose the intention caused the operation in a counterfactual sense: had the user not intended to select the icon, it would not have been selected. This suggests that we are dealing with a BCI-mediated action, as the user’s intention caused an effect that aligns with its content. However, the user did not exercise control over selecting the icon, as the operation was executed because of an accidental blink. Blinking does not normally enhance the neural signal for BCI activation; as a means of exercising agency, the described BCI cannot function based on blinking. This indicates the presence of a deviant causal chain. Since the user does not perform a distinct mental action to send a command to the BCI, the candidate action here (the selection of the icon) would have to be basic, and its very presence is in doubt; we are therefore dealing with basic deviance, which occurs partly outside the agent’s body.
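The causal structure of Case 1 can also be rendered as a toy numerical model (all numbers invented for illustration): the intention-related signal alone falls below the decision threshold on this trial, the blink artifact alone also falls below it, but their sum crosses it, so the selection fires only via the accidental blink while still depending counterfactually on the intention.

```python
# Toy model of Case 1: a linear score with a decision threshold.
# Without the blink artifact the intention-related signal alone falls short,
# so the command would not fire; the artifact pushes the score over by accident.
THRESHOLD = 1.0

def select_fires(intention_signal: float, blink_artifact: float) -> bool:
    return intention_signal + blink_artifact >= THRESHOLD

weak_trial_signal = 0.8   # the intention is present but decoded weakly on this trial
artifact_boost    = 0.4   # spurious contribution of the eye blink to the EEG features

print(select_fires(weak_trial_signal, 0.0))             # False: no blink, no selection
print(select_fires(weak_trial_signal, artifact_boost))  # True: selection fires only via the blink
print(select_fires(0.0, artifact_boost))                # False: the blink alone does not suffice,
                                                        # so the intention remains a counterfactual cause
```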
Let us examine how strategies of dealing with causal deviance, which we listed previously, can engage with this case. Some accounts lack the instruments to accommodate BCI-mediated actions with discrete operation. First of all, Goldman’s suggestion to pass the issue to neuroscientists does not apply here, since we are no longer dealing with motor control of bodily actions. Even if neuroscientists can identify instances of causal deviance in bodily actions, they will not necessarily account for BCI-mediated actions.
The sensitivity strategy seems to be ineffective because, in a BCI with a limited set of discrete operations, minor alterations in intention do not change the outcome. The sustaining causation strategy faces similar challenges: the discrete operations performed by the BCI do not inherently support continuous guidance. The system’s output relies on the classification of neural data within a short temporal window. While the intention is sustained during this period, it does not influence the outcome in a way that ensures proper guidance. The ‘well-functioning’ analysis by Enç [18] and the teleofunctionalist strategy by Roloff [19] cannot accommodate BCI-mediated actions either. Unlike natural bodily functions shaped by evolution, the functions of BCIs are the result of purposeful design. In the context of BCI, a researcher is working with an external device made to perform certain tasks.
Proponents of the three strategies mentioned above would have to argue that BCI-mediated events are not actions, but this approach risks throwing the baby out with the bathwater. If BCIs are treated merely as novelty devices, excluding BCI-mediated actions might seem inconsequential. However, for paralyzed patients, BCIs may provide their only means of meaningful interaction with the outside world. In typical cases, BCIs serve as a way to exercise agency, even if they do not fully support continuity in action types or action guidance.
Unlike ‘well-functioning’ strategies, ‘reliability’ strategies face no obvious obstacles in accounting for Case 1. These approaches can treat the human and the BCI as a unified system in which actions are performed and clarify which sets of circumstances and mechanisms within this system support agency. In Case 1, the way the BCI maps intention into its effect represents abnormal functioning of its mechanisms.
Now, let us turn to the immediate causation strategy, which does not immediately fail to account for BCI-mediated agency. As previously stated, the class of BCIs under consideration is sufficiently advanced to support basic actions, meaning it is activated by the user’s proximal intention. If we adopt the version of the immediate causation strategy defended by Brand, we can see an analogy with bodily movements here. Brand argues that a proximal intention must be the proximal cause of the activation of the neurophysiological chain leading to movement—there should be no intermediate event caused by the intention that independently triggers the chain. In Case 1, we might extend this idea to say that the proximal intention must activate the external causal chain implemented through the computer and EEG recording setup. This condition appears to be met in Case 1, yet it does not eliminate causal deviance.
If we examine Searle’s account, the conditions he sets for non-deviant causation appear to be satisfied in Case 1. First, Searle’s condition of plannable regularity is met: the user can reasonably expect that their intention will activate the interface. Second, Searle requires that the operation of intention occurs under its intentional aspects—meaning that an intention should lead to an action through the execution of its content. In this case, the agent simply intends to move the image into the folder, but their intention does not contain explicit content specifying how the BCI should function. According to Searle, “specification of the content is already a specification of the conditions of satisfaction” [13] (p. 13). The user expects the correct discrete operation to be executed by intending to perform it. There appears to be no failure of intentional content in Case 1. Requiring the agent’s intention to include internal knowledge of how the BCI operates would be, as Mele puts it in his critique of this general approach, “psychologically unrealistic” [59] (p. 204).
Skeptics might argue that Case 1 does not, in fact, involve a lack of control. They could claim that blinking during BCI interaction functions as a background condition for successful BCI activation. However, consider the following: if the user blinked as a result of an intention to activate the BCI, we would instead be dealing with a more conventional case of causal deviance—one that could, for example, be addressed by the immediate causation strategy. In that scenario, blinking would function as an intermediate link in the causal chain, even though it would not be a sufficient cause for BCI activation.
Brand’s account of proximal causation [10] (p. 20) requires that for event e to be the proximal cause of event f, there must be no intermediary events g₁… gₙ such that e causes g₁… gₙ, which in turn cause f. If blinking were caused by the intention, it would violate this condition, thereby creating a deviant causal chain. If we accept that such intention-driven blinking results in causal deviance, then why would an accidental blink—one unrelated to the user’s intention—be any different in terms of the agent’s control? One could insist that the agent still has control in both scenarios, but this position risks misclassifying cases of causal deviance in bodily actions.

4. Deviance and the System’s Border

Now, let us consider a case where putative deviance arises after the system has already begun processing the user’s neural activity. In the blinking example, we dealt with an event that a person can, at least in some cases, control as part of their bodily agency. However, in this new case, deviance would occur entirely outside the agent.
Case 2—An Interface with Deviant Computation: Let us consider a scenario similar to Case 1. This time, the user does not blink, yet their intention still leads to the execution of the operation in the wrong way. Suppose the BCI should not have recognized the intention because of a classifier error, but this error was overridden by another computational error within the system’s internal processing. I will not delve into the specifics of the computational error that could produce such an effect, but such errors are certainly possible, given that BCI functionality relies on processing neural data according to a non-trivial pipeline.
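Since the precise computational fault is deliberately left unspecified, the following Python toy is entirely hypothetical; it merely illustrates the structure of Case 2: a classifier error on one trial is cancelled by a bug in a later processing stage, so the right command comes out through the wrong route.

```python
# Hypothetical rendering of Case 2: a two-stage pipeline in which a
# classification error and a downstream post-processing bug accidentally cancel.
def classify(intended: str) -> str:
    # Hypothetical misclassification on this trial: "select" is decoded as "rest".
    return "rest" if intended == "select" else intended

def postprocess(decoded: str) -> str:
    # Hypothetical downstream bug: a faulty lookup maps "rest" back to "select".
    buggy_table = {"rest": "select", "select": "select", "delete": "delete"}
    return buggy_table[decoded]

print(postprocess(classify("select")))   # "select" -- right output, wrong route
```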
Are we dealing with a deviant causal chain here as well? The key difference between Case 2 and Case 1 is that Case 2 more closely resembles a malfunction within the neurophysiological chain after it has been triggered by the user’s intention. Mayr [12] suggests that this kind of scenario challenges Brand’s version of the immediate causation strategy, as it shows that even when a proximal intention directly initiates the process, deviations in the causal chain can still occur. Technically, Case 2 is similar to Case 1 in that it involves a BCI malfunction. In Case 1, blinking was relevant only as a factor influencing the classification process. However, in Case 2, the deviant part of the chain occurs entirely outside the agent, which raises a potentially troubling issue. If we accept that Case 2 constitutes basic causal deviance, then the question of whether the agent is truly acting becomes dependent on processes within an external technical system. Despite this, the reliability strategy can still account for this scenario by treating the agent and the BCI as part of a unified system, allowing it to identify and explain the malfunction that led to the deviant causal chain.
In Cases 1 and 2, a deviant causal chain occurred because of technological imperfections: sometimes the user’s intention would not have been correctly translated, yet it was executed anyway because the malfunction was accidentally counterbalanced by another event. Now imagine that errors arise because of a separate disruptive module connected to the system. This module acts as an external factor that interferes with the proper operation of the BCI. The key point is that, under ‘normal conditions’ (with the module disabled), the user has access to a better working BCI.
Brand [10] (p. 21) discusses a situation involving a modular modification of bodily movements. For example, suppose Richard, because of a severe injury, can no longer clap his hands. A technical device implanted in his body is meant to restore this ability to him. Sometimes the device allows him to clap, but because of an implanted randomizer, it does not always transmit signals from his brain to his muscles. When Richard does manage to clap, is it his action? Brand believes so, though he approaches this conclusion cautiously.
Case 3—A Perfect Interface with a Randomizing Module: Imagine an agent who has access to a hybrid BCI with extremely high accuracy. Yet, the interface designer adds a special randomizing module that introduces the possibility of command switching, so intentions can be carried out incorrectly with a certain probability. This randomizing module, unlike the interface itself, is imperfect: sometimes, because of internal errors, it fails to switch one command to the other when it should (see Figure 2 for a scheme). Suppose the resulting effectiveness of the hybrid BCI from Case 3 is similar to previous cases.
In Case 3, if the user intends to perform an operation using the BCI and the module does not switch commands, then the user acts. However, if the module does switch commands, the user does not act—instead, they accidentally delete the image rather than selecting it. Additionally, there is a possibility of putative causal deviance. Suppose the user attempts to select an icon, and their intention is correctly translated, but the randomizing module attempts to switch the command to icon deletion. Now, assume that the randomizing module malfunctions (which is rare), and the correct command is still executed. Does this situation involve causal deviance? It seems not, because the agent’s intention still produces the desired outcome in a way that allows the agent to control the BCI. Anyone arguing otherwise would have to normalize what was intentionally introduced as a fault, which seems counterintuitive. However, Case 3 closely resembles Case 2, as both describe functioning technical systems where a potential malfunction occurs in a part of the causal chain that extends beyond the agent.
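To see why the two designs can be observationally equivalent, here is a toy calculation (the probabilities are invented) comparing the imperfect BCI of Cases 1 and 2 with the perfect BCI plus faulty randomizer of Case 3.

```python
# Toy comparison of externally observable accuracy in the two designs:
#   Cases 1-2: a single imperfect BCI that misclassifies with probability q.
#   Case 3:    a perfect BCI followed by a randomizing module that tries to
#              switch the command with probability r but itself fails
#              (i.e., leaves the command untouched) with probability f.
def accuracy_imperfect_bci(q: float) -> float:
    return 1.0 - q

def accuracy_perfect_bci_with_randomizer(r: float, f: float) -> float:
    # A command comes out wrong only if the module intervenes AND does not fail.
    return 1.0 - r * (1.0 - f)

print(accuracy_imperfect_bci(q=0.05))                          # 0.95
print(accuracy_perfect_bci_with_randomizer(r=0.0625, f=0.2))   # 0.95
# From the user's perspective the two systems are indistinguishable; they differ
# only in whether the source of error lies inside or outside the action-enabling system.
```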
The key difference between Case 2 and Case 3 lies in the relationship between the source of malfunction and the BCI. If we accept that Case 2 involves basic causal deviance, we must establish a way to accurately delineate the system whose mechanisms are involved in acting. Here, deviance does not depend on how the agent’s intention initiates the causal chain, nor on the structure of the agent’s body, but rather on the design of the machine that enables certain basic actions. Determining which parts of the machine should be considered components of the system supporting the agent’s actions is thus an additional task for the philosophy of action.

5. Implications for Philosophy of Action

Above, I introduced three examples of BCI-mediated actions. In each case, the agent attempts to perform the same action using a BCI, which consists of two basic actions: activating the BCI and shifting their eye gaze. I focused specifically on the BCI component. In every case, the BCI had similar effectiveness. From a first-person perspective, an agent unaware of the system’s internal events would not be able to distinguish among the three cases.
In Case 1, I identified a deviant causal chain in BCI functioning. This occurs when the BCI correctly executes the user’s intention, but only because of a lucky coincidence—the interface malfunctions but still produces the correct outcome because of the user’s blinking. In Case 2, the malfunction occurs within the computational processes of the BCI, affecting the transition from neural data to operation execution. This case serves as an analogy for how causal deviance might arise in a physiological pathway activated by an agent’s intention. Based on Case 2, I introduced Case 3, involving a randomizing module. This module disrupts the function of an otherwise perfectly working BCI, causing it to behave like the BCI in Case 2. However, if the randomizing module fails to interfere, the agent acts normally using the interface, as the BCI functions properly.
Based on these cases, certain requirements emerge that a causal theory of action (CTA) must fulfill. Specifically, a theory must accommodate discrete actions—those that are initiated by a single, overarching intention and carried out as a unified whole—without requiring high-precision intentions or continuous guidance. A theory must also be able to incorporate empirical data about which causal pathways are normal for a given action, whether the data pertain to biological systems or machines. Cases of basic deviance in BCI-mediated actions can be addressed by ‘reliability’ strategies, such as those proposed by Aguilar [11] or Shepherd [22]. Shepherd’s approach is particularly relevant, as it is more detailed and not limited to bodily actions in its present form. However, a general theory still requires additional analysis to determine the boundaries of the system enabling action.
We can formulate a separate challenge to CTA by comparing Case 2 and Case 3. A viable CTA must be able to differentiate between these cases, classifying Case 2 as an instance of a deviant causal chain and Case 3 as a successful case of acting through a BCI. This distinction must be made by analyzing the means of action in each case—namely, the internal workings of the BCI. In Case 2, basic causal deviance is introduced by a malfunction inside the BCI. However, comparing it to Case 3 reveals that the localization of the malfunction—whether it occurs inside or outside the system that enables action—is crucial. To address this, we must examine the boundaries of the architecture supporting action: which components belong to it, and which do not. Ultimately, this system will include both the agent and certain technological components.
For bodily movements, this action-supporting architecture is simply the human body. The boundaries of the body are well-defined, as we are biological organisms. Thus, in the case of bodily movements, the scenarios presented do not introduce new demands on CTA, as long as the analysis of bodily functions aligns with how BCI functioning is evaluated within clearly defined system boundaries. In reliability-based strategies, putative causal deviance is assessed by analyzing how the human body normally functions. Cases 2 and 3 highlight the need for additional analysis in BCI-mediated actions, but not for all types of actions.
I do not propose a complete solution to the problem of deviant causal chains, as it requires further investigation. However, the reasoning presented so far points in a promising direction. The quest for a criterion to define the boundaries and design principles of systems not directly tied to the human body could lead to an extension of ‘reliability’ strategies. This means, of course, that the general solution will neither focus on voluntary movements nor emphasize the specific details of the physiological pathway between intention and movement.

6. Conclusions

This paper examines the problem of deviant causal chains in the philosophy of action within the context of actions mediated by a brain–computer interface (BCI). The distinctive feature of such cases is that the deviant part of the causal chain is partially or entirely external to the agent. I have demonstrated that the very existence of BCI-mediated actions renders certain strategies for addressing causal deviance invalid. Sustaining causation strategies and sensitivity strategies struggle to account for the discrete nature of operations in certain types of BCIs. Meanwhile, well-functioning strategies, which rely on the evolutionary origin of the agent, do not adequately support a designed device like a BCI. I have also shown that the immediacy strategy faces significant challenges, while the reliability strategy appears to be a promising approach.
In the BCI cases I analyzed, malfunctions occurred at different stages. I examined two closely related cases, where dysfunction arose within the causal chain that begins after the agent’s neural activity has been registered. In one case, the error stemmed from an imperfection in the BCI itself, while in the other, the fault lay with an external device that interfered with the BCI’s performance. Using these cases, I have demonstrated that evaluating causal deviance must include a distinct step: analyzing where the boundaries of the system that mediates the agent’s actions should be drawn. This type of analysis is uncommon for bodily actions, but when acting through a machine, it requires a detailed examination of causal elements, which can lead to non-obvious conclusions.
While most research in the philosophy of action focuses on bodily actions, it is crucial to also consider non-bodily actions when generalizing the causal theory of action. BCIs provide real-world examples of non-bodily actions with external outcomes. Developing a thorough analysis of these examples is essential for addressing the problem of deviant causal chains.

Funding

This work was supported by the Russian Science Foundation, grant 22-19-00528.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflicts of interest.

References

1. Glasscock, J.P.; Tenenbaum, S. Action. In Stanford Encyclopedia of Philosophy; Stanford University: Stanford, CA, USA, 2023.
2. Paul, S.K. Philosophy of Action: A Contemporary Introduction; Routledge: New York, NY, USA, 2021.
3. Frankfurt, H. The Problem of Action. Am. Philos. Q. 1978, 15, 157–162.
4. Aguilar, J.H.; Buckareff, A.A. Causing Human Actions: New Perspectives on the Causal Theory of Action; The MIT Press: Cambridge, MA, USA, 2010; ISBN 978-0-262-51476-7.
5. Davidson, D. Actions, Reasons, and Causes. J. Philos. 1963, 60, 685–700.
6. Goldman, A.I. A Theory of Human Action; Prentice-Hall: Englewood Cliffs, NJ, USA, 1970; ISBN 978-1-4008-6897-1.
7. Davidson, D. Freedom to Act. In Essays on Freedom of Action; Honderich, T., Ed.; Routledge: London, UK, 1973; pp. 137–156.
8. Morton, A. Because He Thought He Had Insulted Him. J. Philos. 1975, 72, 5–15.
9. Di Nucci, E. Action, Deviance, and Guidance. Abstracta 2013, 7, 41–59.
10. Brand, M. Intending and Acting: Toward a Naturalized Action Theory; The MIT Press: Cambridge, MA, USA, 1984.
11. Aguilar, J.H. Basic Causal Deviance, Action Repertoires, and Reliability. Philos. Issues 2012, 22, 1–19.
12. Mayr, E. Understanding Human Agency; Oxford University Press: Oxford, UK, 2012; ISBN 978-0-19-173163-1.
13. Searle, J. Intentionality—An Essay in the Philosophy of Mind; Cambridge University Press: New York, NY, USA, 1983.
14. Peacocke, C. Deviant Causal Chains. Midwest Stud. Philos. 1979, 4, 123–155.
15. Schlosser, M.E. Basic Deviance Reconsidered. Analysis 2007, 67, 186–194.
16. Bach, K. A Representational Theory of Action. Philos. Stud. 1978, 34, 361–379.
17. Bishop, C.J. Natural Agency: An Essay on the Causal Theory of Action; Cambridge University Press: Cambridge, MA, USA, 1989.
18. Enç, B. Causal Theories of Intentional Behavior and Wayward Causal Chains. Behav. Philos. 2004, 32, 149–166.
19. Roloff, J. A Teleofunctionalist Solution to the Problem of Deviant Causal Chains of Actions. KRITERION 2022, 36, 247–261.
20. Millikan, R.G. Language, Thought, and Other Biological Categories: New Foundations for Realism; MIT Press: Cambridge, MA, USA, 1984.
21. Millikan, R.G. Beyond Concepts: Unicepts, Language, and Natural Information; Oxford University Press: Oxford, UK, 2017.
22. Shepherd, J. The Shape of Agency: Control, Action, Skill, Knowledge; Oxford University Press: Oxford, UK, 2021; ISBN 978-0-19-886641-1.
23. Peacocke, A. Mental Action. Philos. Compass 2021, 16, e12741.
24. Drew, L. The Rise of Brain-Reading Technology: What You Need to Know. Nature 2023, 623, 241–243.
25. Gao, X.; Wang, Y.; Chen, X.; Gao, S. Interface, Interaction, and Intelligence in Generalized Brain–Computer Interfaces. Trends Cogn. Sci. 2021, 25, 671–684.
26. Vidal, J.J. Toward Direct Brain-Computer Communication. Annu. Rev. Biophys. Bioeng. 1973, 2, 157–180.
27. Zander, T.O.; Kothe, C. Towards Passive Brain–Computer Interfaces: Applying Brain–Computer Interface Technology to Human–Machine Systems in General. J. Neural Eng. 2011, 8, 025005.
28. Wolpaw, J.R.; Birbaumer, N.; McFarland, D.J.; Pfurtscheller, G.; Vaughan, T.M. Brain–Computer Interfaces for Communication and Control. Clin. Neurophysiol. 2002, 113, 767–791.
29. Card, N.S.; Wairagkar, M.; Iacobacci, C.; Hou, X.; Singer-Clark, T.; Willett, F.R.; Kunz, E.M.; Fan, C.; Nia, M.V.; Deo, D.R.; et al. An Accurate and Rapidly Calibrating Speech Neuroprosthesis. N. Engl. J. Med. 2024, 391, 609–618.
30. Lorach, H.; Galvez, A.; Spagnolo, V.; Martel, F.; Karakas, S.; Intering, N.; Vat, M.; Faivre, O.; Harte, C.; Komi, S.; et al. Walking Naturally after Spinal Cord Injury Using a Brain–Spine Interface. Nature 2023, 618, 126–133.
31. Haselager, P.; Mecacci, G.; Wolkenstein, A. Can BCIs Enlighten the Concept of Agency? A Plea for an Experimental Philosophy of Neurotechnology. In Clinical Neurotechnology Meets Artificial Intelligence; Springer: Cham, Switzerland, 2021; pp. 55–68.
32. Steinert, S.; Bublitz, C.; Jox, R.; Friedrich, O. Doing Things with Thoughts: Brain-Computer Interfaces and Disembodied Agency. Philos. Technol. 2019, 32, 457–482.
33. Haselager, P. Did I Do That? Brain-Computer Interfacing and the Sense of Agency. Minds Mach. 2013, 23, 405–418.
34. Rainey, S.; Maslen, H.; Savulescu, J. When Thinking Is Doing: Responsibility for BCI-Mediated Action. AJOB Neurosci. 2020, 11, 46–58.
35. Aflalo, T.N.; Zhang, C.; Revechkis, B.; Rosario, E.; Pourtain, N.; Andersen, R.A. Implicit Mechanisms of Intention. Curr. Biol. 2022, 32, 2051–2060.
36. Collinger, J.L.; Wodlinger, B.; Downey, J.E.; Wang, W.; Tyler-Kabara, E.C.; Weber, D.J.; McMorland, A.J.C.; Velliste, M.; Boninger, M.L.; Schwartz, A.B. High-Performance Neuroprosthetic Control by an Individual with Tetraplegia. Lancet 2013, 381, 557–564.
37. Frolov, A.A.; Mokienko, O.; Lyukmanov, R.; Biryukova, E.; Kotov, S.; Turbina, L.; Nadareyshvily, G.; Bushkova, Y. Post-Stroke Rehabilitation Training with a Motor-Imagery-Based Brain-Computer Interface (BCI)-Controlled Hand Exoskeleton: A Randomized Controlled Multicenter Trial. Front. Neurosci. 2017, 11, 400.
38. Danto, A.C. Basic Actions. Am. Philos. Q. 1965, 2, 141–148.
39. Wittgenstein, L. Philosophical Investigations; Anscombe, G.E.M., Ed.; Blackwell: Oxford, UK, 1986.
40. McCann, H. Volition and Basic Action. Philos. Rev. 1974, 83, 451–473.
41. Pelley, S.; Cetta, D.S. Breakthrough: Robotic Limbs Moved by the Mind. 60 Minutes 2012, 14–15.
42. CBS News. Breakthrough: Robotic Limbs Moved by the Mind. Available online: https://youtu.be/Z3a5u6djGnE (accessed on 11 March 2025).
43. McFarland, D.J.; Sarnacki, W.A.; Wolpaw, J.R. Electroencephalographic (EEG) Control of Three-Dimensional Movement. J. Neural Eng. 2010, 7, 036007.
44. Mullin, E. Neuralink’s First User Is ‘Constantly Multitasking’ with His Brain Implant. WIRED. 2024. Available online: https://www.wired.com/story/neuralink-first-patient-interview-noland-arbaugh-elon-musk/ (accessed on 11 March 2025).
45. Gallagher, S. Philosophical Conceptions of the Self: Implications for Cognitive Science. Trends Cogn. Sci. 2000, 4, 14–21.
46. Moore, J.W.; Fletcher, P.C. Sense of Agency in Health and Disease: A Review of Cue Integration Approaches. Conscious. Cogn. 2012, 21, 59–68.
47. Evans, N.; Gale, S.; Schurger, A.; Blanke, O. Visual Feedback Dominates the Sense of Agency for Brain-Machine Actions. PLoS ONE 2015, 10, e0130019.
48. Serino, A.; Bockbrader, M.; Bertoni, T.; Colachis, S., IV; Solcà, M.; Dunlap, C.; Eipel, K.; Ganzer, P.; Annetta, N.; Sharma, G.; et al. Sense of Agency for Intracortical Brain–Machine Interfaces. Nat. Hum. Behav. 2022, 6, 565–578.
49. Synofzik, M.; Vosgerau, G.; Newen, A. Beyond the Comparator Model: A Multifactorial Two-Step Account of Agency. Conscious. Cogn. 2008, 17, 219–239.
50. Friston, K. The Free-Energy Principle: A Unified Brain Theory? Nat. Rev. Neurosci. 2010, 11, 127–138.
51. Parr, T.; Pezzulo, G.; Friston, K.J. Active Inference: The Free Energy Principle in Mind, Brain, and Behavior; The MIT Press: Cambridge, MA, USA, 2022.
52. Gauvrit, N.; Zenil, H.; Soler-Toscano, F.; Delahaye, J.-P.; Brugger, P. Human Behavioral Complexity Peaks at Age 25. PLoS Comput. Biol. 2017, 13, e1005408.
53. Zenil, H.; Marshall, J.A.R.; Tegnér, J. Approximations of Algorithmic and Structural Complexity Validate Cognitive-Behavioral Experimental Results. Front. Comput. Neurosci. 2023, 16, 956074.
54. Guger, C.; Allison, B.; Rutkowski, T.; Korostenskaja, M. (Eds.) Brain-Computer Interface Research: A State-of-the-Art Summary 11; Springer: Cham, Switzerland, 2024.
55. Zander, T.O.; Gaertner, M.; Kothe, C.; Vilimek, R. Combining Eye Gaze Input with a Brain-Computer Interface for Touchless Human-Computer Interaction. Int. J. Hum.-Comput. Interact. 2010, 27, 38–51.
56. Protzak, J.; Ihme, K.; Zander, T.O. A Passive Brain-Computer Interface for Supporting Gaze-Based Human-Machine Interaction. In Proceedings of the Universal Access in Human-Computer Interaction. Design Methods, Tools, and Interaction Techniques for eInclusion; Stephanidis, C., Antona, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 662–671.
57. Shishkin, S.L.; Nuzhdin, Y.O.; Svirin, E.P.; Trofimov, A.G.; Fedorova, A.A.; Kozyrskiy, B.L.; Velichkovsky, B.M. EEG Negativity in Fixations Used for Gaze-Based Control: Toward Converting Intentions into Actions with an Eye-Brain-Computer Interface. Front. Neurosci. 2016, 10, 528.
58. Hari, R.; Puce, A. MEG-EEG Primer, 2nd ed.; Oxford University Press: Oxford, UK, 2023.
59. Mele, A. Springs of Action: Understanding Intentional Behavior; Oxford University Press: Oxford, UK, 1992.
Figure 1. Schematic representation of the system in Case 1. The warning sign indicates an imperfection in the system. In this scenario, the BCI has a small but non-zero probability of incorrectly translating the user’s intention.
Figure 2. Schematic representation of the system in Case 3. The warning sign indicates an imperfection in the system. In this scenario, although the core system functions perfectly, the performance of the BCI component is affected by an external randomizing module. However, this mischievous component has a non-zero probability of failure.
