Article

Intelligent Control System for Brain-Controlled Mobile Robot Using Self-Learning Neuro-Fuzzy Approach

by Zahid Razzaq 1,2,*, Nihad Brahimi 3, Hafiz Zia Ur Rehman 4,* and Zeashan Hameed Khan 5

1 Faculty of Engineering, Free University of Bozen-Bolzano, 39100 Bozen-Bolzano, Italy
2 Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), University of Genoa, 16126 Genova, Italy
3 School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China
4 Department of Mechatronics Engineering, Air University, Islamabad 44000, Pakistan
5 Interdisciplinary Research Center for Intelligent Manufacturing and Robotics (IRC-IMR), King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
* Authors to whom correspondence should be addressed.
Sensors 2024, 24(18), 5875; https://doi.org/10.3390/s24185875
Submission received: 17 July 2024 / Revised: 31 August 2024 / Accepted: 5 September 2024 / Published: 10 September 2024

Abstract
A brain-computer interface (BCI) provides direct communication and control between the human brain and physical devices by converting EEG signals into control commands. Such interfaces have significantly improved the lives of individuals with neurological disorders—such as stroke, amyotrophic lateral sclerosis (ALS), and spinal cord injury—by extending their movement range and thereby promoting independence. Brain-controlled mobile robots, however, often face challenges in safety and control performance due to the inherent limitations of BCIs. This paper proposes a shared control scheme for brain-controlled mobile robots that utilizes fuzzy logic to enhance safety, control performance, and robustness. The proposed scheme combines a self-learning neuro-fuzzy (SLNF) controller with an obstacle avoidance controller (OAC). The SLNF controller robustly tracks the user's intentions, and the OAC ensures the safety of the mobile robot following the BCI commands. Furthermore, the SLNF is a model-free controller that can learn and update its parameters online, diminishing the effect of disturbances. The experimental results demonstrate the efficacy and robustness of the proposed SLNF controller, including a higher task completion rate of 94.29% (compared to 79.29% and 92.86% for Direct BCI and Fuzzy-PID, respectively), a shorter average task completion time of 85.31 s (compared to 92.01 s and 86.16 s for Direct BCI and Fuzzy-PID, respectively), and reduced settling time and overshoot.

1. Introduction

Robots are becoming popular in healthcare assistance, especially for patients with special needs or with restricted movement due to disabilities [1]. The number of people with such disabilities is increasing due to rising cases of stroke, war casualties, road accidents, and aging [2,3]. Assistive robots can ameliorate the lives of disabled people; hence, they are growing in demand. Healthy individuals can usually control robots using common input devices—e.g., a joystick, keyboard, or mouse. However, these devices are difficult to operate for individuals with conditions such as stroke, amyotrophic lateral sclerosis (ALS), and multiple sclerosis (MS). In most cases, these patients lose the ability to walk, use their hands and arms, or even talk. Thus, disabled individuals cannot easily communicate their intentions to robots using traditional devices [4].
The development of BCIs has helped patients with neurological disorders and disabilities to improve their quality of life [5]. BCI systems measure brain activity and decode the user's thoughts or intentions into control commands, thus bypassing the peripheral nervous system and muscles [6]. They also play an important part in ensuring independence for people with severe disabilities [6,7]. Electroencephalography (EEG) is the most frequently used signal for the development of BCI systems due to its moderate price, ease of use, and high temporal resolution [8]. EEG-based BCIs have been used for a variety of tasks, such as 2-D cursor control [9], playing games [10], browsing the Internet [11], brain-controlled vehicles [12], and brain-controlled robots and drones [13,14]. An EEG-based brain-controlled robot follows the commands sent by a human brain and is meant to improve the freedom of movement and quality of life of disabled people [1]. Disabled users can therefore take advantage of a brain-controlled mobile robot, such as a wheelchair or electric vehicle (EV), to reach their desired locations with ease and safety.
The concept of a BCI mobile robot based on non-invasive EEG signals was first proposed in 2004 by Millán [15]. Since then, researchers have developed numerous brain-controlled mobile robots [1,16]. Tanaka [17] presented a BCI robotic wheelchair and tested it in a real-world scenario; in this work, the left and right movements were controlled by motor imagery (MI) generated by the subjects. Rebsamen [18] developed a robotic wheelchair that used a P300-based BCI to travel from one destination to another.
These studies focused primarily on direct BCI control, meaning that users issue control commands to steer robots directly. Because the efficiency of such robots is determined by the efficiency of the BCI, its limitations—such as the small number of available commands, their limited accuracy, and their long execution time—can degrade performance [19,20,21]. Consequently, these robots are typically less safe, their performance is slow and uncertain, and users feel fatigued when operating them for longer periods. Improved BCI techniques can enhance the outcome of BCI systems. However, it is challenging to achieve the desired performance with current BCI techniques because of the non-stationary nature of EEG signals and the differences in BCI performance between individuals [1,22,23].
Shared control is a popular approach in human-robot interaction in which humans and machines work together to complete a task by enhancing each other's capabilities. Given the performance constraints of BCIs, shared control techniques can help improve the performance of brain-controlled systems [24,25,26]. For example, Li [27] proposed a shared-control approach for the navigation and control of a wheelchair, using a brain-machine control (BMC) mode to produce a polar polynomial trajectory from steady-state visual evoked potentials (SSVEPs) and an autonomous control mode to navigate the robot through obstacles. Shared control techniques were also proposed in [28,29] using model predictive control (MPC) and sliding mode control (SMC). Abu-Alqumsan presented a study on self-adaptation in EEG-based BCIs, in which a Bayesian update rule tracks user goals and benefits the shared-control driving scheme by reducing user effort [30]. However, these methods require accurate mathematical models of the system.
A promising model-free approach for developing a shared control system is the fuzzy logic control (FLC) strategy. Fuzzy inference systems (FIS) have been widely used as adaptive controllers for robots [31,32]. A shared control method based on fuzzy logic was proposed by Liu [33]. In her work, the FLC method was used to implement two behaviors (wall following and obstacle avoidance) to navigate and keep the brain-actuated robot safe. Fuzzy-based shared control was also proposed in our previous work [34] for a brain-controlled mobile robot.
The FLC approach has several advantages over traditional control schemes. For instance, it is a model-free method that is robust to disturbances and uncertainties. The fuzzy model is built from data samples using expert experience and simple logic [35,36]; however, designing its membership functions and rule base and tuning its parameters is challenging. Combinations of FLC and artificial neural networks (ANNs) have been successfully employed to model and solve various control problems because they merge the benefits of both systems—human-like reasoning and learning capability. Such neuro-fuzzy systems have received considerable attention in the literature and have become a rapidly emerging field. The Adaptive-Network-Based Fuzzy Inference System (ANFIS) proposed by Jang [37] is one of the most popular neuro-fuzzy methods in use. However, it requires offline I/O training data and cannot update its parameters online to cope with changes in plant dynamics or handle disturbances during operation. The self-learning neuro-fuzzy (SLNF) controller [38] used in our study addresses this problem. The feedback error learning (FEL) scheme, first proposed by Kawato [39], is used in this study to train the SLNF controller. The FEL control technique provides excellent tracking performance without substantial modeling, which is desirable in our work. The uniqueness of FEL is that it employs the feedback error as a learning signal, which is fundamentally distinctive in the control literature.
This study describes a shared control scheme based on a fuzzy logic approach for a brain-controlled mobile robotic system to improve its performance, safety, and robustness. In contrast to state-of-the-art control methods such as SMC and MPC [28,29], the SLNF controller does not need an accurate mathematical model, which can be challenging to obtain in real-world robotics applications. Furthermore, unlike traditional fuzzy controllers, the SLNF controller requires no offline training or prior knowledge: it learns and updates its parameters online, which makes it well suited to dynamic systems and helps minimize the impact of disturbances.
The contributions of this work are twofold. First, this paper discusses the implications of applying the SLNF control technique to a brain-controlled robotic system—an approach previously unexplored in this field. Second, it highlights the advantages of the SLNF controller: model-free control, the ability to learn and adapt online, and the capability to minimize the impact of disturbances. These insights enrich our understanding of how the SLNF control technique enhances the performance, adaptability, and robustness of a brain-controlled robotic system.
The remainder of this paper is organized as follows: The architecture of the brain-controlled mobile robotic system is illustrated in Section 2. Section 3 demonstrates the controller design of the proposed shared control system. Section 4 describes some key experimental results, while Section 5 concludes the discussion with some potential future directions.

2. System Structure

The structure of the proposed control system is illustrated in Figure 1. It contains three subsystems: the BCI system, the robot, and its shared control system, which includes the SLNF controllers and an obstacle avoidance controller (OAC). Whenever the user intends to drive the mobile robot—based on the surrounding information and robot states—the EEG signals are communicated to the BCI system, whose purpose is to translate human intentions into steering commands. The BCI system has three parts. In the first part, EEG signals are captured from users and preprocessed (e.g., noise removal and filtering). The second part extracts the important features (e.g., frequency-domain features) and classifies them into three mental states. Finally, the interface module converts these mental states into steering commands (i.e., moving forward and left/right turns). This paper considers only the angular velocity as a steering command. There are two simultaneous commands: one from the BCI system (subject) and the other from the OAC. The shared control system compares these commands and navigates the robot following Rule X, taking the environmental situation into account. More details about shared control are reported in Section 2.3. The proposed SLNF controllers ultimately produce the actuator torques (i.e., $\tau_u$) required to drive the mobile robot. Note that the human is continuously in the loop.
Rule X: If the robot is in a safe state (i.e., not colliding with obstacles or walls), then the output of the BCI system is taken as the reference angular velocity ($\omega_r$) for the SLNF controller. Conversely, if the robot is not in a safe state, the output of the OAC is taken as the reference angular velocity.
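In code, Rule X reduces to a simple switch. A minimal sketch (the function and variable names are our illustration, not the authors' implementation):

```python
def rule_x(omega_bci: float, omega_oac: float, is_safe: bool) -> float:
    """Rule X: pass the BCI command through while the robot is in a safe
    state; otherwise hand authority to the obstacle avoidance controller."""
    return omega_bci if is_safe else omega_oac
```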

2.1. SSVEP BCI Module

In our work, we used the steady-state visually evoked potential (SSVEP) BCI interface to produce EEG signals from visual stimuli [40]. The SSVEP visual stimuli were displayed on a screen and comprised two flashing rectangular checkerboards at 12 Hz and 13 Hz, respectively. To steer the mobile robot in a given direction (right or left), the user must concentrate on the corresponding checkerboard. Conversely, when the user intends to keep the robot in its current heading direction (going forward), he or she does not need to attend to any stimulus. EEG signals were captured with an EMOTIV EPOC+ 14-channel wireless EEG headset (at a sampling rate of 2048 Hz) and preprocessed using a high-pass IIR filter (0.16 Hz) to remove the DC offset. The EEG sensors are located at the AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4 positions of the 10-20 system, while the P3/P4 locations (left/right mastoid of the temporal bone) serve as the electrical reference point and the noise cancellation electrode. To extract EEG epochs as samples, we set the window length to 4 s with a step size of 0.5 s. The frequency-domain features were extracted using the discrete wavelet transform (DWT) [41]. Each EEG signal was decomposed into five levels using Daubechies wavelets (db8), yielding five features per channel and thus a total of 14 × 5 = 70 features. A support vector machine (SVM) with a one-vs-all classification technique was used to build the recognition model for the three mental commands: turning right, turning left, and moving forward.
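For illustration, the feature extraction and classification pipeline could be sketched as follows using PyWavelets and scikit-learn; the exact per-channel feature (here, the log-energy of the five detail levels) and all names are our assumptions:

```python
import numpy as np
import pywt
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

def epoch_features(epoch: np.ndarray) -> np.ndarray:
    """epoch: (14, n_samples) EEG window. A 5-level db8 decomposition gives
    coefficients [cA5, cD5, ..., cD1]; we keep 5 features per channel."""
    feats = []
    for ch in epoch:                                  # 14 channels
        coeffs = pywt.wavedec(ch, "db8", level=5)
        feats += [np.log(np.sum(c ** 2) + 1e-12) for c in coeffs[1:]]
    return np.asarray(feats)                          # 14 x 5 = 70 features

# X: (n_epochs, 70) feature matrix, y: labels in {left, right, forward}
# clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)
```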
The interface module translates the mental commands $B(n)$ into the angular velocity $\omega(n)$, which serves as the reference steering command ($\omega_r$):
$$\omega(n) = \begin{cases} \min\{\omega(n-1) + \Delta\omega \times B(n),\ \omega_{\max}\}, & B(n) = 1 \\ \omega(n-1), & B(n) = 0 \\ \max\{\omega(n-1) + \Delta\omega \times B(n),\ \omega_{\min}\}, & B(n) = -1 \end{cases} \tag{1}$$
where $\omega(n)$ indicates the angular velocity at the $n$-th update, $\omega_{\min}$ and $\omega_{\max}$ are the minimum and maximum values of the angular velocity, respectively, and
$B(n) = 1$: turning left,
$B(n) = 0$: moving forward,
$B(n) = -1$: turning right.
$\Delta\omega$ was set to 0.35 rad/s, which can be further adjusted. Further details about the BCI module can be found in [40].
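A small sketch of the update rule in Eq. (1); the symmetric saturation limit $\omega_{\max} = -\omega_{\min}$ used here is our assumption:

```python
def update_omega(omega_prev: float, b: int, d_omega: float = 0.35,
                 omega_max: float = 1.0) -> float:
    """Update the reference angular velocity from the mental command
    b in {-1, 0, +1}, saturating at [-omega_max, omega_max]."""
    if b == 0:                  # moving forward: keep the current heading rate
        return omega_prev
    omega = omega_prev + d_omega * b
    return min(omega, omega_max) if b > 0 else max(omega, -omega_max)
```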

2.2. Mobile Robot Model

Although a mathematical model of the plant to be controlled is not required for designing fuzzy logic controllers, one is used in our work to test the control performance of the proposed control system on a simulated robot. We consider a wheeled mobile robot (WMR) with two caster wheels and two driving wheels, as shown in Figure 2 [28]. The local and global coordinate systems are denoted $x_c B y_c$ and $xoy$, respectively (see Figure 2).
The kinematic model of a WMR is represented as (2):
$$\dot{q} = S(q)\,\zeta \tag{2}$$
In matrix form,
$$\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\phi} \end{bmatrix} = \begin{bmatrix} \cos\phi & 0 \\ \sin\phi & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} u \\ \omega \end{bmatrix} \tag{3}$$
where
$q = [x\ y\ \phi]^T$ represents the real-time position and orientation of the robot's local frame $x_c B y_c$, and $\zeta = [u\ \omega]^T$,
$u$ and $\omega$: the linear and angular velocities of the mass center point G,
$S(q)$: the Jacobian matrix of the system, used for translating generalized coordinates to workspace variables. The dynamics of the robot in matrix form are given as follows:
$$M(q)\,\dot{\zeta} + C(q, \dot{q})\,\zeta = B(q)\,\tau + \tau_d \tag{4}$$
where
$\tau = [\tau_r - \tau_l \quad \tau_r + \tau_l]^T$: the corresponding actuator torques,
$\tau_d$: the unknown disturbance, including parameter uncertainty,
$M(q)$: the positive definite inertia matrix of the system,
$C(q, \dot{q})$: the Coriolis matrix,
$B(q)$: the input transformation matrix.
These matrices are defined as follows:
$$M(q) = \begin{bmatrix} \dfrac{2I_m}{r} + mr & 0 \\ 0 & \dfrac{2R}{r}I_m + \dfrac{r}{R}I \end{bmatrix}, \quad C(q, \dot{q}) = \begin{bmatrix} \dfrac{2B_m}{r} & 0 \\ 0 & \dfrac{2R B_m}{r} \end{bmatrix}, \quad B(q) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \tag{5}$$
where $m$, $r$, and $2R$ denote the robot mass, the wheel radius, and the lateral distance between the driving wheels, respectively. $I_m$ and $B_m$ represent the effective moment of inertia (MoI) and the viscous friction coefficient of the motor rotor, gearbox, and wheel assembly, and $I$ is the MoI of the robot. For further details and derivations of the mobile robot model, see [28,42]. The dynamic model of the mobile robot, simplified following [28], can be expressed by (6):
$$\begin{bmatrix} \dot{u} \\ \dot{\omega} \end{bmatrix} = \begin{bmatrix} -\dfrac{2B_m}{\lambda_u} & 0 \\ 0 & -\dfrac{d^2 B_m}{\lambda_\omega} \end{bmatrix} \begin{bmatrix} u \\ \omega \end{bmatrix} + \begin{bmatrix} 0 & \dfrac{r}{\lambda_u} \\ \dfrac{dr}{\lambda_\omega} & 0 \end{bmatrix} \begin{bmatrix} \tau_r - \tau_l \\ \tau_r + \tau_l \end{bmatrix} + \tau_d \tag{6}$$
where $\lambda_u = (m + \Delta m)r^2 + 2I_e$, $\lambda_\omega = 2(I_z + \Delta I_z)r^2 + d^2 I_e$, $d = 2R$, $I_e$ is the motor inertia, and $I_z$ is the inertia of the mobile robot.
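For illustration, the simplified dynamics (6) can be integrated numerically with a forward-Euler step. This is a sketch under our assumptions (nominal parameters from Section 4.1, $\Delta m = \Delta I_z = 0$, and an arbitrary step size):

```python
# Nominal physical parameters from Section 4.1
m, r, d, Bm, Iz, Ie = 16.5, 0.0625, 0.340, 0.1, 0.68, 0.0015
lam_u = m * r**2 + 2 * Ie            # lambda_u with delta_m = 0
lam_w = 2 * Iz * r**2 + d**2 * Ie    # lambda_omega with delta_Iz = 0

def euler_step(u, w, tau_r, tau_l, tau_du=0.0, tau_dw=0.0, dt=0.01):
    """One forward-Euler step of the simplified dynamics (6)."""
    du = -(2 * Bm / lam_u) * u + (r / lam_u) * (tau_r + tau_l) + tau_du
    dw = -(d**2 * Bm / lam_w) * w + (d * r / lam_w) * (tau_r - tau_l) + tau_dw
    return u + du * dt, w + dw * dt
```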

2.3. Shared Control

The concept of shared control applies when a system is supervised by both a human operator and an embedded intelligent system, such as a robot; the goal is to assist users in navigating the robot when they are incapable of performing specific maneuvers independently and safely. In the default mode, the robot moves forward at a constant speed and turns left or right after receiving mental commands from the user. Users are given complete control over the robot if they do not need any navigational assistance to achieve their goals. Otherwise, the system interrupts the user's mental commands and drives the robot autonomously.
There are two main reasons why shared control is useful when controlling the robot via BCI. First, mental commands from the user (i.e., the BCI output) are not always correct due to user error or fatigue, so the robot needs extra navigational safety. Second, there are only three possible steering commands (forward, left turn, and right turn) in our study; consequently, the system needs to provide some assistance for fine maneuvering. The shared control system has three controllers: two SLNFs and one OAC. Details about these controllers can be found in Section 3.

3. Controller Design

In the case of brain-controlled mobile robots, direction control is considered more often, and with greater importance, than speed control. In our study, we employed a kinematic model to predict the position of the mobile robot while considering its velocity. Accordingly, two SLNF controllers are developed for tracking the velocities: one for the angular velocity ($\omega_r$), reflecting human intentions, and another for the linear velocity ($u_r$), which is maintained at a constant value. In addition, an obstacle avoidance controller is developed to guarantee the safety of the robot. The SLNF controller learns and updates its parameters online without prior knowledge or training data. Another key role of the SLNF controller is to deal with disturbances and lessen their effect over time, as demonstrated in Section 4.3.

3.1. Self-Learning Neuro-Fuzzy Controller

The general framework of the SLNF controller is represented in Figure 3 [38,43]. It comprises a feed-forward controller, an online learning mechanism, a reference model, and a proportional controller. A neuro-fuzzy model—a fuzzy system designed to follow the configuration of a neural network—is used to build the controller in the feed-forward path. It combines the learning capacity of neural networks with the linguistic reasoning of fuzzy models.
The feed-forward controller (neuro-fuzzy model) is expected to approximate the inverse model of the non-linear plant, $y(t) = f\{y(t-1), \ldots, u(t-L), u(t-L-1), \ldots\}$, once correctly trained, where $L$ denotes the transportation delay expressed as an integer multiple of the sampling time. Since the desired control action is not known at the outset, online learning is made possible through the feedback error learning law [39], as shown in the following relation:
$$\tilde{u}_f = u_f(t - t_d) + \gamma\, e(t)$$
where
$u_f(t - t_d)$: the inaccurate control action produced $t_d$ samples ago,
$t_d = L + 1$: the delay resulting from the dead-time of the plant model,
$e(t)$: the system error, and
$\gamma$: the feedback error learning rate.
The online learning mechanism includes two approaches: feedback error learning and fuzzy identification.

3.1.1. Feedback Error Learning

The function of the feedback error learning module is to estimate the correct control signal ($\tilde{u}_f$). The fuzzy identification scheme subsequently uses this signal to update the controller parameters ($\hat{w}$).

3.1.2. Fuzzy Identification Scheme

The fuzzy identification scheme employs the Fuzzy Least Mean Square (FLMS) algorithm [38] in conjunction with the Normalized Least Mean Square (NLMS) update rule, which is computationally inexpensive [44]. This scheme provides the updated controller parameters $\hat{w}$.

3.1.3. Fuzzy Feedforward Controller

Consider a neuro-fuzzy model containing $n$ inputs $(x_1, x_2, \ldots, x_n)$ and a single output $(u_f)$, where the $i$-th input space is divided into $p_i$ triangular fuzzy sets with 50% overlap. The Takagi-Sugeno-Kang (TSK) fuzzy inference system has the following $p = \prod_{j=1}^{n} p_j$ rules:
Rule 1: if $x_1$ is $A_{11}$, $x_2$ is $A_{21}$, …, and $x_n$ is $A_{n1}$, then $u_f = \hat{w}_1$;
Rule 2: if $x_1$ is $A_{11}$, $x_2$ is $A_{21}$, …, and $x_n$ is $A_{n2}$, then $u_f = \hat{w}_2$;
⋮
Rule $p$: if $x_1$ is $A_{1p_1}$, $x_2$ is $A_{2p_2}$, …, and $x_n$ is $A_{np_n}$, then $u_f = \hat{w}_p$.
Using the multiplication operator and algebraic addition to implement logical ANDs and ORs, together with height defuzzification, the output of the neuro-fuzzy model [45] can be written as:
$$u_f(t) = \sum_{i=1}^{p} a_i(x(t))\,\hat{w}_i = a^T(t)\,\hat{w}(t)$$
where
$a_i(x(t))$: the product of the membership grades of the fuzzy sets in the antecedent part of the $i$-th rule,
$x(t) = [x_1(t)\ x_2(t)\ \cdots\ x_n(t)]^T$: the input vector,
$a(t) = [a_1\ a_2\ \cdots\ a_p]^T$: its transformed vector, obtained as the Kronecker tensor product of the per-input membership grade vectors,
$A_{ij}$: the $j$-th fuzzy set of the $i$-th input,
$\hat{w}(t) = [\hat{w}_1(t)\ \hat{w}_2(t)\ \cdots\ \hat{w}_p(t)]^T$: the weight vector of the controller's parameters.
The main objective is to determine the elements of the weight vector ($\hat{w}$) so that the feedforward controller maps the input vector to the correct control signal. The parameters $\hat{w}$ of the feedforward neuro-fuzzy controller are estimated at each time interval by sending the data pair $\{x(t - t_d), \tilde{u}_f\}$ to the fuzzy identification scheme, according to the following expressions:
$$\hat{w}(t) = \hat{w}(t-1) + \delta\, \frac{S(t-1)\, a(t - t_d)}{a^T(t - t_d)\, S(t-1)\, a(t - t_d)}\, \varepsilon(t)$$
where
$\delta$: the user-selected update rate, and
$\varepsilon(t)$: the modeling error; all elements of $\hat{w}(t-1)$ are initialized at zero.
$$S(t) = \operatorname{diag}\{s_1, s_2, s_3, \ldots, s_i, \ldots, s_p\}$$
$$s_i = \min_{j = 1,\, j \neq i}^{p} F_j(t)$$
$$F_i(t) = F_i(t-1) + a_i(t)$$
where $S(t)$ is a diagonal matrix that represents the cumulative strength at which each rule has been fired; all elements of $S(t-1)$ are initialized at unity. $F_i(t)$ determines the $i$-th rule's firing rate and strength. It is initialized at unity and has an upper bound of 1000. It also indicates the accuracy of the parameters in the weight vector $\hat{w}(t)$.
$$\varepsilon(t) = \tilde{u}_f(t) - a^T(t - t_d)\, \hat{w}(t-1)$$
The total control action is given as:
$$u(t) = u_f(t) + k_p\, e(t)$$
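Putting the above together, one learning step of the SLNF controller might be sketched as follows (a minimal illustration under our assumptions; the buffering of delayed quantities and all names are ours):

```python
import numpy as np

def update_rule_strengths(F: np.ndarray, a: np.ndarray):
    """Accumulate rule firing strengths (upper-bounded at 1000) and
    rebuild the diagonal matrix S with s_i = min over j != i of F_j."""
    F = np.minimum(F + a, 1000.0)
    s = np.array([np.min(np.delete(F, i)) for i in range(len(F))])
    return F, np.diag(s)

def slnf_step(w_hat, S, a_past, u_f_past, e, gamma, delta=1.0):
    """Feedback error learning + FLMS/NLMS-style parameter update.

    a_past, u_f_past: firing-strength vector and feedforward action
    produced t_d samples ago; e: current system error."""
    u_f_tilde = u_f_past + gamma * e        # corrected control action (FEL)
    eps = u_f_tilde - a_past @ w_hat        # modeling error
    g = S @ a_past
    return w_hat + delta * eps * g / (a_past @ g + 1e-12)  # guard zero firing

# At every sample, the total control action is u = a(x) @ w_hat + kp * e.
```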

3.1.4. Proportional Controller

The purpose of incorporating a proportional feedback controller (with gain $k_p$) is to lower the impact of unmeasured disturbances while ensuring satisfactory control performance. Initially, the feed-forward controller has no prior knowledge of the plant to be controlled and thus cannot generate the correct control signal, because its parameters have not yet been learned; the proportional controller provides the control action during this phase.

3.1.5. Reference Model

The main idea of this block is to shape the required changes in the plant output ($w$): the plant follows the set-point trajectory ($r$) filtered through the reference model. Theoretically, a plant with a well-trained feed-forward controller should imitate the behavior of the reference model.

3.2. Obstacle Avoidance Controller (OAC)

We used three LIDAR sensors—a number that can be extended to five or seven—to design the membership functions (MFs) of the OAC. These MFs are based on the Interval Type-2 Fuzzy Logic System (IT2FLS) scheme and were designed using LIDAR sensor data collected from the mobile robot. The framework of the IT2FL system is presented in Figure 4. The primary function of the OAC is to steer the mobile robot away from the boundary walls and obstacles, avoiding potential collisions; it is responsible for both environmental and operational safety.
Compared to the Type-1 Fuzzy Logic System (T1FLS), the IT2FLS has an extra output processing block, which includes type reduction followed by a defuzzifier block [46]. The type reduction block maps the IT2FLS output onto a T1FLS, and the defuzzifier then maps that result into a crisp angular velocity. The structure of the rules in IT2FLS remains the same as in T1FLS. There are two major architectures for an IT2FLS: Mamdani and TSK. We used the Mamdani type in the OAC because it is well suited to human cognition [47,48,49]. In the IT2 Mamdani type, all antecedent and consequent MFs are interval type-2 sets; their additional adjustable parameters offer greater flexibility in handling uncertainties and an easy representation of uncertainty [46,50,51].
The IT2FLS receives three crisp inputs from the LIDAR sensors—representing distances (adjustable within a range of up to 5 m) to the nearest obstacle (i.e., LOD, FOD, and ROD)—and provides the angular velocity as an output to the robot. The LIDAR sensors were configured as follows: one was mounted at an angle of $\pi/2$ with respect to the vertical axis, and the other two were mounted at angles of $\pi/4$ and $-\pi/4$ with respect to the vertical axis. Each input is represented by two IT2 MFs, Near and Far, whereas five output MFs are used for the OAC, as shown in Figure 5. The rules for the OAC were determined using the operator's expert knowledge and are listed in Table 1. The linguistic terms used in the OAC have the following meanings: LOD, FOD, and ROD are the left, forward, and right obstacle distances, respectively; P, PB, Z, N, and NB denote Positive, Positive Big, Zero (No Turn), Negative, and Negative Big, respectively.
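To make the rule base concrete, the sketch below evaluates a simplified type-1 version of the Table 1 rules with min-AND and weighted-average defuzzification; the real OAC uses IT2 MFs with type reduction, and the membership shapes and output centers here are our assumptions:

```python
import numpy as np

def near(dist: float, d_max: float = 5.0) -> float:
    """Membership grade of 'Near' for a LIDAR distance in [0, d_max] m."""
    return float(np.clip(1.0 - dist / d_max, 0.0, 1.0))

def far(dist: float, d_max: float = 5.0) -> float:
    return 1.0 - near(dist, d_max)

OMEGA = {"NB": -0.7, "N": -0.35, "Z": 0.0, "P": 0.35, "PB": 0.7}  # rad/s, assumed

RULES = [  # (LOD, FOD, ROD) -> output, per Table 1
    ("Near", "Near", "Near", "NB"), ("Near", "Near", "Far", "N"),
    ("Near", "Far", "Near", "Z"),   ("Near", "Far", "Far", "N"),
    ("Far", "Near", "Near", "P"),   ("Far", "Near", "Far", "NB"),
    ("Far", "Far", "Near", "P"),    ("Far", "Far", "Far", "Z"),
]

def oac_omega(lod: float, fod: float, rod: float) -> float:
    """Weighted-average defuzzification over the eight Mamdani rules."""
    mf = {"Near": near, "Far": far}
    num = den = 0.0
    for a, b, c, out in RULES:
        w = min(mf[a](lod), mf[b](fod), mf[c](rod))  # min implements AND
        num += w * OMEGA[out]
        den += w
    return num / den if den else 0.0
```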

4. Results

To evaluate the control performance of the SLNF controller developed in Section 3, we conducted human-in-the-loop (HIL) simulation experiments in which human subjects controlled a simulated mobile robot. Eight subjects participated in the experiment without monetary compensation. None of the subjects had a history of neurological or mental illness, and none had prior experience navigating a brain-controlled robot. This study adhered to the guidelines of the 2013 Declaration of Helsinki, and informed consent forms were signed by all participants.

4.1. Simulation Setup and Test Scenario

Only subjects who demonstrated a high level of accuracy in the SSVEP BCI—defined as a performance threshold greater than 90%—were included in the study. Accordingly, Subject Eight was excluded from the analysis (see Table 2).
As detailed in Section 2, the architecture of the brain-controlled robotic system comprises three principal components: a shared control system, a BCI system, and a robotic subsystem. Transmission Control Protocol/Internet Protocol (TCP/IP) communication is used to transmit data between the BCI system and the mobile robot.
The robotic control system was implemented in MATLAB R2020b. The physical parameters of the robot dynamics used in our simulation were as follows: $m = 16.5$ kg, $r = 0.0625$ m, $d = 0.340$ m, $B_m = 0.1$, $I_z = 0.68$ kg·m², and $I_e = 0.0015$ kg·m².
Nine triangular MFs with 50% overlap, equally spaced over the range (0, 1), were used for the reference input of the SLNF controllers. The SLNF controller's user-selected parameters can be determined from a discrete PI controller, as explained in [38]. With the update rate set to unity ($\delta = 1$), the learning rates and feedback proportional gains of the SLNF controllers for the linear and angular velocities were adjusted to $\gamma = 0.3$, $k_p = 1.5$ and $\gamma = 0.85$, $k_p = 2.8$, respectively. Details of the Fuzzy-PID controller can be found in our previous work [34].
Figure 6 shows an indoor environment of a simulated mobile robot, in which four black boundary stripes form a rectangular area that represents the safe navigation region for a mobile robot. In Figure 6, the blue hollow circle indicates a simulated mobile robot, and the red line points to the current heading direction of the mobile robot to help the subject navigate it through a cluttered environment. Moreover, the operating region contained many randomly scattered black rectangular boxes that represented detectable static obstacles.
The subjects were given the task of steering the mobile robot from the start position to one of two target positions (A or B) by attending to the related SSVEP stimuli. The online brain-controlled HIL simulation required the subjects to perform ten runs per controller type. The robot was expected to take the minimum amount of time possible while avoiding collisions with obstacles. At the beginning of each simulation, the robot was placed at the default starting location. In our test scenario, we assumed that the robot could pass through obstacles; however, once it crossed the boundary region, the trial was considered terminated. We evaluated the performance of the robot control system using three control methods: Direct BCI control, Fuzzy-PID control, and the proposed SLNF controller. Direct BCI control refers to controlling the robot without the proposed control system.
Before the experiment, the participants were familiarized with the experimental procedure. Offline training was used to identify the BCI module parameters. EEG data were collected for the three control commands (turning right, turning left, and going forward) using the EMOTIV EPOC+ wireless headset. Figure 7 illustrates the experimental setup for the brain-controlled mobile robot experiments. Subjects were asked to complete four sessions, attending to the respective SSVEP stimulus for 12 seconds (s) for the turning-left and turning-right commands and not attending to any stimulus for the going-forward command. Each session comprised four trials, and the subjects took a 20 s break between two consecutive trials. The SSVEP BCI accuracy values for the different subjects are reported in Table 2. The accuracy metric for each control command is calculated as:
$$\text{Accuracy} = \frac{\text{No. of correctly classified commands}}{\text{Total number of commands}} \times 100\%$$

4.2. System Performance Evaluation

We used three metrics [28]—task completion time, task completion rate, and total collisions—to assess the control system performance of the proposed control method against direct BCI control and Fuzzy-PID control methods.
By definition, the task completion rate is the number of successful trials divided by the total number of trials. The nominal task time ($T_{nm}$) is the task completion time along the shortest path between the starting point and the target point at the desired linear velocity of the robot [40]. In our experiments, a trial is considered successful if the subject guides the robot to the desired location with a task completion time of less than three times the nominal task time ($3 \times T_{nm}$). Finally, the total number of collisions is defined as the number of collisions encountered during each trial.
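As a small illustration of this success criterion (a hypothetical helper; the constant linear velocity of 0.12 m/s is taken from Section 4.3):

```python
def trial_successful(reached_target: bool, t_completion: float,
                     shortest_dist: float, u_desired: float = 0.12) -> bool:
    """A trial succeeds if the robot reaches the target in less than
    three times the nominal task time T_nm = shortest_dist / u_desired."""
    t_nominal = shortest_dist / u_desired
    return reached_target and t_completion < 3 * t_nominal
```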
Results were analyzed statistically using a one-factorial ANOVA with Bonferroni-adjusted post-hoc tests. Figure 8 illustrates the system performance for the different control methods (Direct BCI, Fuzzy-PID, and SLNF), including means and standard errors, and highlights significant findings. Figure 8a depicts the mean task completion rate along with standard errors. We observed a significant difference in task completion rate among the controllers ($F(1, 12) = 17.779$, $p < 0.001$, partial $\eta^2 = 0.748$). The SLNF controller significantly outperformed the Direct BCI control method ($p = 0.008$) by 15% (94.29% ± 3.4% vs. 79.29% ± 4.6%), whereas no significant difference was observed relative to the Fuzzy-PID controller (94.29% ± 3.4% vs. 92.86% ± 4.1%). Figure 8b presents the average task completion times along with the respective standard errors. We observed a significant difference in task completion time among the controllers ($F(1, 12) = 71.867$, $p < 0.001$, partial $\eta^2 = 0.923$). The SLNF controller exhibited a significantly shorter average task completion time ($p < 0.001$) of 85.309 ± 2.926 s than the Direct BCI control method (92.011 ± 3.167 s), whereas descriptively similar performance was observed for the Fuzzy-PID control (86.157 ± 2.997 s). Overall, the SLNF controller reduced the mean task completion time by 6.70 s compared to Direct BCI and was 0.85 s faster than the Fuzzy-PID controller.
Both the proposed SLNF and Fuzzy-PID controllers navigated the robot without any collisions, whereas the Direct BCI control method produced a mean of 6.650 collisions (standard error ±1.304). Figure 9 further shows the recorded trajectories of the mobile robot for Subject Three during tasks A and B. The results indicate much smoother trajectories with fewer collisions for the proposed SLNF controller compared to Direct BCI control.

4.3. Robustness Evaluation

Another important feature of the SLNF controller is its ability to overcome the effects of disturbances over time. In our work, two pulse signals were introduced as input torque disturbances to check the robustness of our proposed control system. These simulated signals have amplitudes of 0.25 Nm and 0.4 Nm and periods of 10 s and 20 s for linear and angular velocities, respectively, as shown in Figure 10.
The linear velocity ($u_r$) is kept constant at 0.12 m/s in our control system, whereas the angular velocity ($\omega_r$) keeps changing to control the direction of the mobile robot. We provided step inputs to our system to better understand the characteristics and disturbance-handling ability of the proposed control method, as shown in Figure 10. Good tracking was observed for all control schemes (Direct BCI, Fuzzy-PID, and SLNF) in the absence of simulated disturbances. In the presence of disturbances (spikes in Figure 11), however, the Direct BCI and Fuzzy-PID control methods did not perform well, as they could not overcome the disturbance effect, as shown in the zoomed areas of Figure 11. In contrast, the proposed SLNF controller successfully minimized the effect of disturbances over time—despite initially higher overshoots (as shown in the zoom box of Figure 11a)—since it can update its parameters online. Thus, once trained, the proposed SLNF controller outperforms the other control methods (Direct BCI and Fuzzy-PID) in terms of overshoot and settling time; see Figure 11b. Moreover, the SLNF controller mitigated the effect of the disturbance over time, whereas Direct BCI control failed to do so.
To summarize, the Direct BCI control method could not handle the effect of disturbances; while the Fuzzy-PID controller partially lessened the effect. Conversely, the proposed SLNF controller successfully dealt with disturbances as time progressed due to its online learning capability. This suggests that the proposed control framework has improved the robustness of the system.

5. Conclusions

This paper presents a shared control scheme based on a fuzzy-logic approach to improve the performance of a brain-controlled mobile robot. Specifically, we investigated the applicability of our proposed method to a brain-controlled robotic system and discussed the advantages of the SLNF controller over traditional methods. We developed two SLNF controllers for tracking the linear and angular velocities of the robot, and one OAC for the safety of the robot. The SLNF controllers—owing to their online learning capability—successfully tracked the desired trajectories of the BCI mobile robot and effectively diminished the effects of external disturbances, which demonstrates the robustness of the system. The shared control technique compared the steering commands from the BCI system with the surrounding information to ensure safety. If these commands violated the safety constraints (i.e., risked collision with walls or obstacles), the shared control system passed full control to the robot by overriding the direct BCI commands; consequently, the mobile robot moved autonomously while avoiding obstacles.
The human-in-the-loop simulation demonstrates the efficacy of the proposed control strategy: it achieved a higher task completion rate, a shorter task completion time, and no collisions compared to Direct BCI control, and it also outperformed the Fuzzy-PID controller in terms of disturbance rejection. Consequently, users can perform a given task safely and robustly. It is important to note, however, that no notable difference was observed in the task completion rate or time between our proposed controller and the Fuzzy-PID controller. One possible explanation is that the SLNF controller is trained from a PI controller, whereas the Fuzzy-PID is based on a PID controller, which might result in comparable control performance. Nonetheless, the proposed controller effectively diminished the impact of disturbances over time—highlighting the robustness of our proposed control method. Hence, we conclude that the proposed control approach improves the safety, performance, and robustness of brain-controlled mobile robots.
Although the proposed control method has improved the robotic control system, it has certain limitations. First, we performed only online simulation experiments, and the subject's experience would be somewhat different when controlling a real mobile robot in a cluttered environment. Second, we considered only the angular velocity as the steering control command (turn left/right, keep moving ahead) of the BCI output while maintaining a constant linear velocity.
In future work, we will consider dynamic objects and more complex scenarios to further validate the proposed control scheme. To this end, another controller may be designed for linear motion control, including the starting and stopping of the mobile robot, although this is a challenging task.

Author Contributions

Conceptualization, Z.R. and H.Z.U.R.; methodology, Z.R.; software, Z.R. and N.B.; validation, Z.R., H.Z.U.R. and Z.H.K.; formal analysis, Z.R.; investigation, Z.R. and H.Z.U.R.; resources, H.Z.U.R.; data curation, Z.R. and N.B.; writing—original draft preparation, Z.R.; writing—review and editing, Z.R., H.Z.U.R., N.B. and Z.H.K.; visualization, Z.R.; supervision, H.Z.U.R.; project administration, H.Z.U.R.; funding acquisition, Z.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Open Access Publishing Fund of the Free University of Bozen-Bolzano.

Institutional Review Board Statement

This study adhered to the guidelines of the 2013 Declaration of Helsinki.

Informed Consent Statement

Informed consent forms were signed by all the participants.

Data Availability Statement

The data used to support the findings of this study is available from the corresponding author upon request.

Acknowledgments

The authors gratefully acknowledge the Free University of Bolzano for their support and all participants in our experiments. Part of this research is conducted within the National PhD Program DIBRIS, University of Genoa.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

References

  1. Bi, L.; Fan, X.A.; Liu, Y. EEG-based brain-controlled mobile robots: A survey. IEEE Trans. Hum.-Mach. Syst. 2013, 43, 161–176. [Google Scholar] [CrossRef]
  2. Katan, M.; Luft, A. Global Burden of Stroke. Semin. Neurol. 2018, 38, 208–211. [Google Scholar] [CrossRef]
  3. Khazaie, H.; Zakiei, A.; Rezaei, M.; Brand, S.; Komasi, S. The Role of Traffic and Road Accidents in Causing Disabilities in Iran. Iran. J. Public Health 2020, 49, 1804. [Google Scholar] [CrossRef]
  4. Khan, Z.H.; Siddique, A.; Lee, C.W. Robotics utilization for healthcare digitization in global COVID-19 management. Int. J. Environ. Res. Public Health 2020, 17, 3819. [Google Scholar] [CrossRef]
  5. Chaudhary, U.; Birbaumer, N.; Ramos-Murguialday, A. Brain–computer interfaces for communication and rehabilitation. Nat. Rev. Neurol. 2016, 12, 513–525. [Google Scholar] [CrossRef]
  6. Daly, J.J.; Huggins, J.E. Brain-computer interface: Current and emerging rehabilitation applications. Arch. Phys. Med. Rehabil. 2015, 96, S1–S7. [Google Scholar] [CrossRef] [PubMed]
  7. Nijboer, F. Technology transfer of brain-computer interfaces as assistive technology: Barriers and opportunities. Ann. Phys. Rehabil. Med. 2015, 58, 35–38. [Google Scholar] [CrossRef]
  8. Robinson, N.; Vinod, A.P. Bi-Directional Imagined Hand Movement Classification Using Low Cost EEG-Based BCI. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Hong Kong, China, 9–12 October 2015. [Google Scholar] [CrossRef]
  9. Abiri, R.; Borhani, S.; Kilmarx, J.; Esterwood, C.; Jiang, Y.; Zhao, X. A Usability Study of Low-Cost Wireless Brain-Computer Interface for Cursor Control Using Online Linear Model. IEEE Trans. Hum.-Mach. Syst. 2020, 50, 287–297. [Google Scholar] [CrossRef] [PubMed]
  10. McMahon, M.; Schukat, M. A low-Cost, Open-Source, BCI-VR Game Control Development Environment Prototype for Game Based Neurorehabilitation. In Proceedings of the 2018 IEEE Games, Entertainment, Media Conference (GEM), Galway, Ireland, 15–17 August 2018; pp. 1–9. [Google Scholar] [CrossRef]
  11. He, S.; Zhou, Y.; Yu, T.; Zhang, R.; Huang, Q.; Chuai, L.; Mustafa, M.U.; Gu, Z.; Yu, Z.L.; Tan, H.; et al. EEG- and EOG-Based Asynchronous Hybrid BCI: A System Integrating a Speller, a Web Browser, an E-Mail Client, and a File Explorer. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 519–530. [Google Scholar] [CrossRef]
  12. Kim, K.T.; Suk, H.I.; Lee, S.W. Commanding a brain-controlled wheelchair using steady-state somatosensory evoked potentials. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 654–665. [Google Scholar] [CrossRef]
  13. Gandhi, V.; Prasad, G.; Coyle, D.; Behera, L.; McGinnity, T.M. EEG-Based mobile robot control through an adaptive brain-robot interface. IEEE Trans. Syst. Man Cybern. Syst. 2014, 44, 1278–1285. [Google Scholar] [CrossRef]
  14. Lafleur, K.; Cassady, K.; Doud, A.; Shades, K.; Rogin, E.; He, B. Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain-computer interface. J. Neural Eng. 2013, 10, 046003. [Google Scholar] [CrossRef]
  15. Millán, J.D.R.; Renkens, F.; Mouriño, J.; Gerstner, W. Noninvasive brain-actuated control of a mobile robot by human EEG. IEEE Trans. Biomed. Eng. 2004, 51, 1026–1033. [Google Scholar] [CrossRef]
  16. Fernández-Rodríguez, A.; Velasco-Álvarez, F.; Ron-Angevin, R. Review of real brain-controlled wheelchairs. J. Neural Eng. 2016, 13, 061001. [Google Scholar] [CrossRef] [PubMed]
  17. Tanaka, K.; Matsunaga, K.; Wang, H.O. Electroencephalogram-based control of an electric wheelchair. IEEE Trans. Robot. 2005, 21, 762–766. [Google Scholar] [CrossRef]
  18. Rebsamen, B.; Burdet, E.; Guan, C.; Teo, C.L.; Zeng, Q.; Ang, M.; Laugier, C. Controlling a wheelchair using a BCI with low information transfer rate. In Proceedings of the 2007 IEEE 10th International Conference on Rehabilitation Robotics, ICORR’07, Noordwijk, The Netherlands, 13–15 June 2007. [Google Scholar] [CrossRef]
  19. Carlson, T.; Millán, J.d.R. Brain-controlled wheelchairs: A robotic architecture. IEEE Robot. Autom. Mag. 2013, 20, 65–73. [Google Scholar] [CrossRef]
  20. Liu, Y.; Li, Z.; Zhang, T.; Zhao, S. Brain-Robot Interface-Based Navigation Control of a Mobile Robot in Corridor Environments. IEEE Trans. Syst. Man Cybern. Syst. 2018, 50, 3047–3058. [Google Scholar] [CrossRef]
  21. Zhang, R.; Li, Y.; Yan, Y.; Zhang, H.; Wu, S.; Yu, T.; Gu, Z. Control of a wheelchair in an indoor environment based on a brain-computer interface and automated navigation. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 24, 128–139. [Google Scholar] [CrossRef]
  22. Brunner, C.; Allison, B.Z.; Krusienski, D.J.; Kaiser, V.; Müller-Putz, G.R.; Pfurtscheller, G.; Neuper, C. Improved signal processing approaches in an offline simulation of a hybrid brain-computer interface. J. Neurosci. Methods 2010, 188, 165–173. [Google Scholar] [CrossRef]
  23. Yao, L.; Sheng, X.; Zhang, D.; Jiang, N.; Mrachacz-Kersting, N.; Zhu, X.; Farina, D. A Stimulus-Independent Hybrid BCI Based on Motor Imagery and Somatosensory Attentional Orientation. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1674–1682. [Google Scholar] [CrossRef]
  24. Choi, J.W.; Park, J.; Huh, S.; Jo, S. Asynchronous Motor Imagery BCI and LiDAR-Based Shared Control System for Intuitive Wheelchair Navigation. IEEE Sens. J. 2023, 23, 16252–16263. [Google Scholar] [CrossRef]
  25. Xu, B.; Liu, D.; Xue, M.; Miao, M.; Hu, C.; Song, A. Continuous shared control of a mobile robot with brain–computer interface and autonomous navigation for daily assistance. Comput. Struct. Biotechnol. J. 2023, 22, 3–16. [Google Scholar] [CrossRef] [PubMed]
  26. Deng, X.; Yu, Z.L.; Lin, C.; Gu, Z.; Li, Y. A Bayesian Shared Control Approach for Wheelchair Robot with Brain Machine Interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 328–338. [Google Scholar] [CrossRef] [PubMed]
  27. Li, Z.; Zhao, S.; Duan, J.; Su, C.Y.; Yang, C.; Zhao, X. Human Cooperative Wheelchair with Brain-Machine Interaction Based on Shared Control Strategy. IEEE/ASME Trans. Mechatron. 2017, 22, 185–195. [Google Scholar] [CrossRef]
  28. Li, H.; Bi, L.; Yi, J. Sliding-Mode Nonlinear Predictive Control of Brain-Controlled Mobile Robots. IEEE Trans. Cybern. 2020, 52, 5419–5431. [Google Scholar] [CrossRef]
  29. Lu, Y.; Bi, L.; Li, H. Model Predictive-Based Shared Control for Brain-Controlled Driving. IEEE Trans. Intell. Transp. Syst. 2020, 21, 630–640. [Google Scholar] [CrossRef]
  30. Abu-Alqumsan, M.; Ebert, F.; Peer, A. Goal-recognition-based adaptive brain-computer interface for navigating immersive robotic systems. J. Neural Eng. 2017, 14, 036024. [Google Scholar] [CrossRef]
  31. Hwang, C.L.; Fang, W.L. Global Fuzzy Adaptive Hierarchical Path Tracking Control of a Mobile Robot with Experimental Validation. IEEE Trans. Fuzzy Syst. 2016, 24, 724–740. [Google Scholar] [CrossRef]
  32. Chwa, D. Fuzzy adaptive tracking control of wheeled mobile robots with state-dependent kinematic and dynamic disturbances. IEEE Trans. Fuzzy Syst. 2012, 20, 587–593. [Google Scholar] [CrossRef]
  33. Liu, R.; Wang, Y.X.; Zhang, L. An FDES-Based shared control method for asynchronous brain-actuated robot. IEEE Trans. Cybern. 2016, 46, 1452–1462. [Google Scholar] [CrossRef]
  34. Zahid, R.; Bi, L. Fuzzy-Based Shared Control for Brain-controlled Mobile Robot. In Proceedings of the 39th Chinese Control Conference (CCC), Shenyang, China, 27–29 July 2020; pp. 3759–3764. [Google Scholar] [CrossRef]
  35. Tavana, M.; Hajipour, V. A practical review and taxonomy of fuzzy expert systems: Methods and applications. Benchmarking Int. J. 2020, 27, 81–136. [Google Scholar] [CrossRef]
  36. Faisal, M.; Algabri, M.; Abdelkader, B.M.; Dhahri, H.; Al Rahhal, M.M. Human Expertise in Mobile Robot Navigation. IEEE Access 2018, 6, 1694–1705. [Google Scholar] [CrossRef]
  37. Jang, J.S.R. ANFIS: Adaptive-Network-Based Fuzzy Inference System. IEEE Trans. Syst. Man Cybern. 1993, 23, 665–685. [Google Scholar] [CrossRef]
  38. Tan, W.W.; Dexter, A.L. Self-learning neurofuzzy control of a liquid helium cryostat. Control Eng. Pract. 1999, 7, 1209–1220. [Google Scholar] [CrossRef]
  39. Kawato, M.; Furukawa, K.; Suzuki, R. A hierarchical neural-network model for control and learning of voluntary movement. Biol. Cybern. 1987, 57, 169–185. [Google Scholar] [CrossRef]
  40. Bi, L.; Fan, X.A.; Jie, K.; Teng, T.; Ding, H.; Liu, Y. Using a head-up display-based steady-state visually evoked potential brain-computer interface to control a simulated vehicle. IEEE Trans. Intell. Transp. Syst. 2014, 15, 959–966. [Google Scholar] [CrossRef]
  41. Al-Qerem, A.; Kharbat, F.; Nashwan, S.; Ashraf, S.; Blaou, K. General model for best feature extraction of EEG using discrete wavelet transform wavelet family and differential evolution. Int. J. Distrib. Sens. Netw. 2020, 16, 1550147720911009. [Google Scholar] [CrossRef]
  42. Li, H.; Bi, L. Discrete-Time Integral Sliding Mode Control for Brain-Controlled Mobile Robots. In Proceedings of the 2020 IEEE International Conference on Real-time Computing and Robotics (RCAR), RCAR 2020, Asahikawa, Japan, 28–29 September 2020; IEEE: New York, NY, USA, 2020; pp. 239–244. [Google Scholar] [CrossRef]
  43. Tan, W.W. An on-line modified least-mean-square algorithm for training neurofuzzy controllers. ISA Trans. 2007, 46, 181–188. [Google Scholar] [CrossRef]
  44. Postlethwaite, B.E. Building a model-based fuzzy controller. Fuzzy Sets Syst. 1996, 79, 3–13. [Google Scholar] [CrossRef]
  45. Brown, M.; Harris, C. Neurofuzzy Adaptive Modelling and Control; Prentice Hall International (UK) Ltd.: London, UK, 1994. [Google Scholar]
  46. Castillo, O.; Amador-Angulo, L.; Castro, J.R.; Garcia-Valdez, M. A comparative study of type-1 fuzzy logic systems, interval type-2 fuzzy logic systems and generalized type-2 fuzzy logic systems in control problems. Inf. Sci. 2016, 354, 257–274. [Google Scholar] [CrossRef]
  47. Mamdani, E.H. Application of Fuzzy Logic to Approximate Reasoning Using Linguistic Synthesis. IEEE Trans. Comput. 1977, C-26, 1182–1191. [Google Scholar] [CrossRef]
  48. Eshragh, F.; Mamdani, E.H. A general approach to linguistic approximation. Int. J. Man. Mach. Stud. 1979, 11, 501–519. [Google Scholar] [CrossRef]
  49. Mamdani, E.H.; Assilian, S. An experiment in linguistic synthesis with a fuzzy logic controller. Int. J. Man. Mach. Stud. 1975, 7, 1–13. [Google Scholar] [CrossRef]
  50. Ontiveros-Robles, E.; Melin, P.; Castillo, O. Comparative analysis of noise robustness of type 2 fuzzy logic controllers. Kybernetika 2018, 54, 175–201. [Google Scholar] [CrossRef]
  51. Mendel, J.M.; John, R.I.; Liu, F. Interval type-2 fuzzy logic systems made simple. IEEE Trans. Fuzzy Syst. 2006, 14, 808–821. [Google Scholar] [CrossRef]
Figure 1. Schematic of the proposed methodology for an EEG controlled mobile robot. The BCI system translates human intentions into steering commands. The shared control system manages automatic switching between user input and autonomous navigation, following Rule X to ensure safety. The robotics system communicates the robot’s states and surrounding information among different controller nodes.
Figure 2. Schematic of the wheeled mobile robot showing a two-dimensional coordinate system with the $xoy$ global frame and the $x_c B y_c$ local frame of the robot. G represents the center of gravity of the robot. The distance between the wheels is $2R$, and the diameter of each wheel is $2r$. $\omega$ and $u$ represent the angular and linear velocities of the robot, respectively. The angle $\phi$ indicates the rotation between the global and local frames.
Figure 3. Framework of the self-learning neuro-fuzzy control scheme. The reference model filters the desired changes in the plant's output (w), guiding the plant to follow the set-point trajectory (r). The proportional feedback controller (with gain $k_p$) minimizes the impact of unmeasured disturbances. The feedback error learning module estimates the correct control signal ($\tilde{u}_f$), while the fuzzy identification scheme updates the controller parameters ($\hat{w}$). The feedforward controller (neuro-fuzzy model) approximates the inverse model of a nonlinear plant when properly trained.
Figure 4. Structure of the Obstacle Avoidance Controller (OAC). The IT2FLS processes three LIDAR distance inputs, fuzzifies them into IT2 fuzzy sets, and applies the inference rules. The type reducer converts the result to type-1 fuzzy sets, and the defuzzifier computes the angular velocity for robot control.
Figure 5. The membership functions of the OAC for (a) input variables and (b) output variables. Note: the MFs for LOD, FOD, and ROD are identical.
Figure 6. Online simulation setup of a robotic system where a user maneuvers the mobile robot using EEG signals to targets A or B, avoiding obstacles.
Figure 7. Experimental scenario. The subject focuses on the SSVEP visual stimuli (left screen) to maneuver the robot through obstacles (right screen) and reach the target safely.
Figure 8. Comparison of system performance among the Direct BCI, Fuzzy-PID, and SLNF controllers for (a) Task completion rate (%) and (b) Task completion time (seconds). The SLNF controller achieved a higher average task completion rate of 94.29% (vs. 79.29% for Direct BCI and 92.86% for Fuzzy-PID) and a shorter average task completion time of 85.31 s (vs. 92.01 s for Direct BCI and 86.16 s for Fuzzy-PID). Statistically significant differences are indicated with (*) for p < 0.001.
Figure 9. Robot trajectories produced by Subject Three using (a) the proposed SLNF controller and (b) Direct BCI control. The trajectories with the proposed SLNF controller show no collisions, demonstrating the efficacy of OAC, while Direct BCI control method was unable to handle obstacle avoidance.
Figure 10. Step input disturbance torque signals. These test the disturbance handling capability of our proposed control system.
Figure 11. Disturbance rejection comparison among Direct BCI control, fuzzy-PID, and the proposed controller for (a) linear velocity and (b) angular velocity. Initially, the SLNF controller exhibits the highest overshoot in linear velocity but reduces it over time due to its online learning capability, as shown in the zoomed area. For angular velocity, the SLNF controller shows minimal overshoot and settling time, with disturbance effects decreasing over time. Direct BCI and Fuzzy-PID controllers do not show a significant reduction in disturbance.
Table 1. Rules for the obstacle avoidance (OA) controller.
Rule | LOD | FOD | ROD | ω
1 | Near | Near | Near | Negative Big
2 | Near | Near | Far | Negative
3 | Near | Far | Near | Zero
4 | Near | Far | Far | Negative
5 | Far | Near | Near | Positive
6 | Far | Near | Far | Negative Big
7 | Far | Far | Near | Positive
8 | Far | Far | Far | Zero
Table 2. Accuracy of the SSVEP BCI system (%).
Subject | Turning Left | Turning Right | Going Forward | Mean
One | 97.7% | 100% | 100% | 99.2%
Two | 97.7% | 100% | 98.9% | 98.9%
Three | 97.7% | 98.9% | 97.7% | 98.1%
Four | 96.6% | 98.9% | 97.7% | 97.7%
Five | 96.6% | 94.3% | 98.9% | 96.6%
Six | 93.2% | 97.7% | 92.0% | 94.3%
Seven | 98.9% | 86.4% | 88.6% | 91.3%
Eight | 81.8% | 100% | 77.3% | 86.4%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

