#### *2.3. BCI Operation*

The operation of a typical BCI system is based on the sequential execution of a number of procedures, namely signal acquisition, preprocessing, feature extraction, classification, translation, and feedback to the operator [10,11], as shown in Figure 1.

**Figure 1.** Block diagram representing the processes performed in a typical Brain–Computer Interface.

In EEG-BCIs, *signal acquisition* is performed using electrodes positioned along the scalp of the user. Electrode placement on the scalp normally complies with the International 10–20 system, according to which electrodes are located at 10% and 20% of measured distances from reference points, namely the nasion, the inion, and the left and right preauricular points [10].

The pattern of this system is depicted in Figure 2, where odd numbers refer to the left side of the head, even numbers refer to the right side, A1 and A2 refer to the earlobes, and 'Fp', 'F', 'T', 'C', 'P', and 'O' stand for the prefrontal, frontal, temporal, central, parietal, and occipital areas of the brain, respectively.

**Figure 2.** Top view of the international 10–20 electrode placement system on a human scalp.

*Preprocessing* is carried out to reduce the noise in the signal and to remove artifacts caused by endogenous sources, such as eye, muscle, and cardiac activity, and exogenous sources, such as power-line coupling and impedance mismatch [12]. It is usually performed using low-pass, high-pass, band-pass, or notch filters. However, such filters may also eliminate useful components of the EEG signal that occupy the same frequency band as the artifacts [13].
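As a minimal illustration of this filtering step, the sketch below applies a crude FFT-based band-pass to a synthetic one-channel signal in which a 10 Hz alpha-like component is mixed with 50 Hz power-line interference. The function name, sampling rate, and band edges are illustrative assumptions, not part of any specific BCI system described above; practical systems typically use proper IIR/FIR filter designs rather than spectral zeroing.

```python
import numpy as np

def bandpass_fft(signal, fs, low, high):
    """Crude band-pass sketch: zero out FFT bins outside [low, high] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.fft.rfft(signal)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# Toy signal: 10 Hz "alpha" component plus 50 Hz power-line interference.
fs = 250                                   # assumed sampling rate (Hz)
t = np.arange(fs) / fs                     # 1 second of samples
x = np.sin(2 * np.pi * 10 * t) + 0.8 * np.sin(2 * np.pi * 50 * t)

# Keep only the 8-13 Hz alpha band, suppressing the 50 Hz artifact.
clean = bandpass_fft(x, fs, 8, 13)
```

Note the trade-off mentioned above: any genuine EEG activity outside the retained band is discarded along with the artifact.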

In *feature extraction*, specific features of the signals in the time and/or frequency domain that can effectively differentiate between classes are extracted and assembled into a feature vector, enabling the classification phase that follows. Autoregressive (AR) coefficients, Hjorth parameters, and EEG signal power are commonly used features [14].
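Two of the feature families named above can be sketched compactly. The snippet below computes the three Hjorth parameters (activity, mobility, complexity) and a simple FFT-based band power for a one-channel signal; the variance-ratio formulas are the standard Hjorth definitions, while the signal, sampling rate, and band limits are illustrative assumptions.

```python
import numpy as np

def hjorth_parameters(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx = np.diff(x)                        # first derivative (discrete)
    ddx = np.diff(dx)                      # second derivative (discrete)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def band_power(x, fs, low, high):
    """Mean spectral power of x within [low, high] Hz."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= low) & (freqs <= high)
    return psd[band].mean()

# Toy 10 Hz signal; the four values form one feature vector.
fs = 250
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 10 * t)
features = np.array([*hjorth_parameters(x), band_power(x, fs, 8, 13)])
```

In a real system, such vectors would be computed per channel and per time window before being passed to the classifier.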

During the *classification* phase, a suitably built algorithm distinguishes between classes corresponding to different brain activity patterns by deciding which of these classes each feature vector fits best. Neural networks (NNs) are widely used as classifiers in BCIs because of their ability to approximate nonlinear decision boundaries [15,16]. Alternatively, linear discriminant analysis (LDA), support vector machines (SVMs), and statistical classifiers may be used [17]. LDA has the advantage of being a simple probabilistic approach based on Bayes' rule, SVMs are a very good choice when only a small amount of training data is available, and statistical classifiers can represent the uncertainty inherent in brain signals.
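To make the LDA option concrete, here is a minimal two-class Fisher LDA implemented from scratch on synthetic feature vectors. The cluster centers, spreads, and sample counts are invented for the example; a deployed BCI would train on labeled feature vectors extracted from real EEG trials.

```python
import numpy as np

def fit_lda(X0, X1):
    """Two-class Fisher LDA: projection vector w and decision threshold."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu0)     # direction maximizing class separation
    threshold = w @ (mu0 + mu1) / 2        # midpoint between projected means
    return w, threshold

def predict_lda(X, w, threshold):
    """Label each row 0 or 1 by which side of the threshold it projects to."""
    return (X @ w > threshold).astype(int)

# Synthetic 2-D feature vectors for two well-separated brain-activity classes.
rng = np.random.default_rng(0)
X0 = rng.normal([0.0, 0.0], 0.5, size=(100, 2))
X1 = rng.normal([2.0, 2.0], 0.5, size=(100, 2))
w, th = fit_lda(X0, X1)
```

The linear decision boundary illustrates why LDA is simple and fast, and also why NNs or SVMs are preferred when the class boundary is nonlinear.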

During the *translation* phase, the extracted signal features are converted into specific commands for the device(s) under control through dedicated translation algorithms. These algorithms not only adapt to the ongoing variations of the signal features, but also ensure that the user's signal features cover the complete control range of the device.
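A toy version of such a translation step is sketched below: classifier outputs are mapped to device commands, and a short majority-vote window smooths over noisy, fluctuating predictions. The command set, class indices, and window length are all hypothetical; real translation algorithms are considerably more elaborate (e.g., with continuous adaptation of gains and offsets).

```python
from collections import deque

# Hypothetical mapping from classifier output index to a device command.
COMMANDS = {0: "forward", 1: "left", 2: "right", 3: "stop"}

class Translator:
    """Emit a command from the majority of the last few classifier outputs."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def translate(self, class_id):
        self.history.append(class_id)
        majority = max(set(self.history), key=list(self.history).count)
        return COMMANDS[majority]
```

The smoothing window is one simple way a translation algorithm can tolerate momentary misclassifications without issuing spurious commands.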

Finally, in the *feedback to operator* phase, the final outcome of the overall operation of the BCI system is fed back to the system operator, so that the performance of the system can be evaluated.

#### *2.4. BCI-Based Robot Control*

An EEG-based brain-controlled robot is a robot that uses an EEG-based BCI to receive control commands from its human operator. EEG-based brain-controlled mobile robots can support the movement of both elderly people and people who are severely disabled by destructive neuromuscular disorders, such as amyotrophic lateral sclerosis (ALS), multiple sclerosis (MS), or stroke.

There are two main classes of EEG-based brain-controlled assistive robots, namely *brain-controlled manipulators* and *brain-controlled mobile robots*. In turn, assistive mobile robots are classified into two categories according to their mode of operation [11].

The first category consists of assistive mobile robots which operate under *direct BCI control*. Robots of this kind are controlled exclusively through the commands that their users issue via the BCI modules, without any additional assistance from robot intelligence. For this reason, they are less expensive and less complex to develop, and their users retain absolute motion control.

On the other hand, the overall performance of these brain-controlled mobile robots depends mainly on the performance of the BCIs, which in many cases may offer inadequate response speed and accuracy. Furthermore, the demand on users to continuously produce motor control commands may be extremely tiring for them.

The first example of a robot of this kind was presented in [18], where the left and right turning movements of a robotic wheelchair were directly controlled by corresponding motion commands translated from the user's brain signals.

Similarly, in [19] a brain-controlled mobile robot was able to perform forward, left, and right motions by using a BCI based on motor imagery.

Moreover, in [20] the motion control of a wheelchair is performed via a BCI which captures alpha brainwaves. Specifically, a set of icons corresponding to predefined commands is sequentially displayed on a screen, and the user selects the desired command by closing his/her eyes as soon as its corresponding icon appears on the display unit.

The second category consists of assistive mobile robots which operate under *shared control*. In the robots of this category, control is performed by combining a BCI system with an intelligent controller, such as an autonomous navigation system. Due to their enhanced intelligence, robots of this type are safer, less tiring for their users, and more accurate in interpreting and executing their commands. On the other hand, their development involves higher cost and computational complexity.
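The essence of shared control can be sketched as an arbitration rule in which the intelligent controller may veto or amend an unsafe user command. The function below is a deliberately minimal, hypothetical illustration of this idea, assuming a simple obstacle flag supplied by the navigation layer; actual shared-control schemes blend user intent and autonomy far more gradually.

```python
def shared_control(user_cmd, obstacle_ahead):
    """Hypothetical arbitration: the navigation layer vetoes unsafe commands.

    user_cmd       -- command decoded from the BCI ("forward", "left", ...)
    obstacle_ahead -- boolean safety signal from the autonomous navigation system
    """
    if user_cmd == "forward" and obstacle_ahead:
        return "stop"          # autonomy overrides the user for safety
    return user_cmd            # otherwise the user's command passes through
```

Even this trivial rule captures why shared control reduces user fatigue: the BCI need not issue corrective commands for every hazard the robot can detect on its own.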

A typical example of shared control in assistive mobile robots is proposed in [21]. In this system, the operator uses an SSVEP BCI to send commands moving a robotic wheelchair in four directions (forwards, backwards, left, and right), while an autonomous navigation system executes the delivered commands.

Similarly, in [22], using a P300 BCI, the operator selects the desired location from a list of predefined locations and sends this selection to an autonomous navigation system, which guides a robotic wheelchair to the selected location. The limitation of this system is that it can operate only in a known environment.

Likewise, shared control is used in [23]. Specifically, the combined use of a P300 BCI and an autonomous navigation system is proposed for the motion control of a robotic wheelchair in an unknown environment. Moreover, the user can make the wheelchair turn either left or right by focusing on the corresponding one of two icons on a predefined visual display.

In [24], three mental tasks, namely the imagination of right or left hand movements and the generation of words beginning with the same random letter, were used in a BCI system applied to a robotic wheelchair. The developed system, which interacts with the user through a PDA screen and speakers, can guide the robotic wheelchair in both known and unknown environments.

#### **3. Materials and Methods**

The research work carried out made use of the experimental equipment described in Section 3.1 and followed the procedure explained in Section 3.2.
