Article

Identification of Brain Electrical Activity Related to Head Yaw Rotations

Department of Informatics, Bioengineering, Robotics, and Systems Engineering (DIBRIS), University of Genova, 16145 Genoa, Italy
*
Author to whom correspondence should be addressed.
Sensors 2021, 21(10), 3345; https://doi.org/10.3390/s21103345
Submission received: 14 April 2021 / Revised: 4 May 2021 / Accepted: 7 May 2021 / Published: 11 May 2021
(This article belongs to the Special Issue Application for Assistive Technologies and Wearable Sensors)

Abstract
Automatizing the identification of human brain stimuli during head movements could lead to a significant step forward for human computer interaction (HCI), with important applications for severely impaired people and for robotics. In this paper, a neural network-based identification technique is presented to recognize, from EEG signals, the participant’s head yaw rotations when subjected to a visual stimulus. The goal is to identify an input-output function between the brain electrical activity and the head movement triggered by switching on/off a light on the participant’s left/right hand side. The identification process is based on the Levenberg–Marquardt backpropagation algorithm. The results obtained on ten participants, spanning more than two hours of experiments, show the ability of the proposed approach to identify the brain electrical stimulus associated with head turning. A first analysis is performed on the EEG signals associated with each experiment for each participant. The accuracy of prediction is demonstrated by a significant correlation between training and test trials of the same file, which, in the best case, reaches r = 0.98 with MSE = 0.02. In a second analysis, the input-output function trained on the EEG signals of one participant is tested on the EEG signals of the other participants. In this case, the low correlation coefficient values show that the classifier performance decreases when it is trained and tested on different subjects.

1. Introduction

In human computer interaction (HCI), the design and application of brain–computer interfaces (BCIs) are among the most challenging research activities. BCI technologies aim at converting human mental activities into electrical brain signals, producing control command feedback to external devices such as robot systems [1]. Recently, the scientific literature has shown specific interest in the identification of human cognitive reactions caused by the perception of a specific environment or by an adaptive HCI [2]. Reviews on BCI and HCI can be found in Mühl et al. [3] and Tan and Nijholt [4].
The essential stages of a BCI application consist of the acquisition of brain activity signals, preprocessing and feature extraction, classification, and feedback.
Brain signal acquisition may be realized by different devices, such as electroencephalography (EEG), magnetoencephalography (MEG), electrocorticography (ECoG), or functional near-infrared spectroscopy (fNIRS) [5]. The preprocessing consists of cleaning the input data from noise (the so-called artifacts), while the feature extraction phase deals with selecting, from the input signals, the most relevant features required to discriminate the data according to the specific classification task [6]. The classification is the central element of the BCI: it refers to the identification of the correct translation algorithm, which converts the extracted signal features into control commands for the devices according to the user’s intention.
From the signal acquisition viewpoint, EEG is the most widely used technique; it is non-invasive, cheap, and portable, while still assuring good spatial and temporal resolution [7]. As stated in the literature [8], however, the acquisition of EEG signals through hair remains a critical issue.
Electroencephalography (EEG)-based biometric recognition systems have been used in a large range of clinical and research applications [9] such as interpreting humans’ emotional states [10], monitoring participants’ alertness or fatigue [11], checking memory workload [12], evaluating participants’ fear when subjected to unpredictable acoustic or visual external stimuli [13], or diagnosing generic brain disorders [14].
Significant literature concerning EEG signal analysis versus visual-motor tasks is available. Perspectives are particularly relevant in rehabilitation engineering. A pertinent recent example of EEG analysis application in robotics and rehabilitation engineering is provided in Randazzo et al. [15], where the authors tested on nine participants how an exoskeleton, coupled with a BCI, can elicit EEG brain patterns typical of natural hand motions.
Apart from this, cognitive activities related to motor movements have been observed in EEG recordings following both actually executed and imagined actions [16,17]. Comparing the neural signals produced by actual and imagined movements, most papers have concluded that the brain activities are similar [18]. In the literature, a significant correlation between head movements and visual stimuli has also been shown [19].
In order to realize an EEG-based BCI, it is necessary to adopt a classifier to interpret the EEG signals and implement a control system. In fact, according to the recorded EEG pattern and the classification phase, EEG may be used as input for the control interface in order to command external devices. As demonstrated in the literature, the quality of the classifier, which has to extract the meaningful information from the brain signals, represents the crucial point in obtaining a robust BCI [20].
Well-known techniques for EEG signal classification in motor-imagery BCI applications are support vector machine (SVM), linear discriminant analysis (LDA), multi-layer perceptron (MLP), random forest (RF), and convolutional neural network (CNN) classifiers. In Narayan [21], SVM obtained better performance, with 98.8% classification accuracy, than LDA and MLP for the recognition of left-hand and right-hand movements. In Bressan et al. [22], the authors demonstrated the superiority of CNN over LDA and RF for the classification of different fine hand movements. In Antoniou et al. [23], the RF algorithm outperformed k-NN, MLP, and SVM in the classification of eye movements used in an EEG-based control system for driving an electromechanical wheelchair. In Zero et al. [24], a time delay neural network (TDNN) classification model was implemented to classify the driver’s EEG signals when rotating the steering wheel to perform a right or a left turn during a driving task in a simulated environment.
More generally, almost 25% of recent classification algorithms for neural cortical recordings were based on artificial neural networks (ANNs) [25], which have been intensively applied to EEG classification [26]. An interesting application is presented in Craig and Nguyen [27], where the authors proposed an ANN classifier for mental commands with the purpose of enhancing the control of a head-movement-controlled power wheelchair for patients with chronic spinal cord injury. The authors obtained an average accuracy rate of 82.4%, but they also noticed that the classifier applied to a new subject performed worse than expected and that customizing the classifier with an adaptive training technique increased the quality of prediction. In addition, researchers have used ANNs for motor imagery classification of hand [28] or foot movements [29], as well as for eye blinking detection [30]. A review of classification algorithms for EEG-based BCIs can be found in Lotte et al. [31].
This paper focuses on an objective that is original in the context of EEG signal classifiers with respect to the literature on body movements. Even if this work adopts a traditional ANN classifier, the scope of the application represents the main novelty: we explore the recognition, from EEG brain activity, of yaw head rotations directed toward a light target, with the aim of supporting control tasks in different applications, such as autonomous vehicles, wheelchairs, or robots in general.
In detail, this work is about using brain electrical activity to recognize head movements in human subjects. The input data are EEG signals collected from a set of 10 participants. Left and right head positions, taken as responses to an external visual stimulus, represent the output data for the experiments. The main purpose of the proposed approach is to define and verify the effectiveness of the BCI system in identifying an input-output function between the EEG signals and the different head positions. Section 2 introduces the BCI architecture used for the experiments, while Section 3 shows the results coming from different training and testing scenarios. Section 4 briefly reports the conclusions.

2. Materials and Methods

2.1. System Architecture

The architecture of the system used for the experiments consists of two interacting sub-systems: (1) a basic lamp system in charge of generating the visual stimuli, and (2) an Enobio® EEG cap by Neuroelectrics (Cambridge, MA, USA) for EEG signal acquisition. The two subsystems communicate with a PC server through a serial port and a Bluetooth connection, respectively.

2.1.1. Lamp System

The lamp system’s main components are a Raspberry Pi 3 control unit (Cambridge, UK) and two LED lamps. The PC server hosts a Python application, which randomly sends an input to the Raspberry Pi unit through the serial cable. The Raspberry Pi unit hosts another Python application, which receives the commands and switches the lamps on/off. Figure 1 shows the system architecture.
The two lamps are positioned at the extreme sides of a table (size: 1.3 × 0.6 m), allowing a typical head rotation (yaw angle) over a −45°/45° range. Figure 2 shows a top view of the experimental environment.

2.1.2. EEG Enobio Cap

The sensors connected to this cap can monitor EEG signals at a 500 Hz sampling frequency. The Enobio cap works on eight different channels. In order to decrease the artifacts due to muscular activity, the EEG system is equipped with two additional electrodes used to apply differential filtering to the EEG signals. These two electrodes are positioned in a hairless area of the head (usually behind the ears, near the neck). In the proposed experiments, we focus on three channels, labeled O1, O2, and CZ according to the International 10/20 Standard System. The first two are positioned over the occipital lobe, the third over the parietal one (Figure 3). The reason for this choice is that the signals coming from the occipital lobe are commonly associated with visual processing [32], while the signals coming from the parietal lobe are related to body movement activities. In addition, a good correlation between occipital and centroparietal areas improves visual-motor performance identification [33].
Positioning the electrodes on the head plays a fundamental role in the quality of the data acquisition. For example, using conductive gel may improve the quality of the EEG signal. However, the main target of this work is to verify the feasibility of EEG monitoring in working conditions, avoiding any additional, although limited, intervention on the workers. For this reason, no gel was used in the sensor positioning phase.

2.2. Simulation Description

During data acquisition, the participant sits in front of the table and wears the EEG Enobio cap, assisted by the operator who checks the electrode positions. Each participant is expected to move his/her head left or right towards the lamp, which is randomly switched on by the Raspberry Pi unit. The lamp stays on for a variable period of time (between six and nine seconds). After turning off, the lamp stays inactive for five seconds. The participant is expected to move his or her head back to the starting position after the lamp turns off.
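The two Python control applications are not included in the paper. Purely as an illustration, a minimal PC-side sketch of this stimulus protocol could look as follows; the serial port name, baud rate, single-byte command codes ('L', 'R', '0'), and the number of stimuli per session are assumptions, not the authors’ implementation.

```python
# Hypothetical sketch of the stimulus protocol: the PC application picks a lamp
# at random, asks the Raspberry Pi over the serial cable to switch it on, keeps
# it on for 6-9 s, switches it off, and waits 5 s before the next stimulus.
import random
import time

import serial  # pyserial

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as link:  # assumed port and baud rate
    for _ in range(20):                         # assumed number of stimuli per session
        link.write(random.choice([b"L", b"R"])) # switch on the left or right lamp
        time.sleep(random.uniform(6.0, 9.0))    # lamp stays on for 6 to 9 s
        link.write(b"0")                        # switch the lamp off
        time.sleep(5.0)                         # 5 s of inactivity
```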

2.3. Data Processing and Analysis

2.3.1. Pre-Processing Data

During EEG monitoring, the presence of artifacts and noise in the acquired data was one of the main problems we had to face. Exogenous and endogenous noise can significantly affect the reliability of the acquired data. Concerning artifacts, several types have been described in the literature [34], such as ocular, muscle, cardiac, and extrinsic artifacts, among others.
In order to limit artifacts, we worked as follows:
  • Muscle artifacts were intrinsically limited in the EEG signal acquisition system thanks to the two differential electrodes embedded in the Enobio cap.
  • Extrinsic artifacts were limited by properly filtering and normalizing the EEG signals. Specifically, we applied a band-stop (notch) filter between 49 and 51 Hz in order to eliminate the noise introduced at the power-line frequency [35].
  • In addition, in order to remove linear trends, the overall signal was filtered with a high-pass filter cutting frequencies lower than 1 Hz.
The resulting signals, whose unit of measure is µV, were amplified by a factor of 10⁵ and limited between −1 and 1 in order to enhance the precision of the following signal analysis. The head positions were coded as follows: −1 for the left position, 1 for the right, and 0 for forward. The participants were asked to move the head at a normal speed, avoiding sudden movements. Thus, the transition from one position to another (e.g., left to forward) was linearly smoothed using a moving average computed on a window of 300 samples (i.e., a duration of 0.6 s).
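The paper does not specify how the filters were implemented. The following sketch reproduces the pre-processing steps described above with SciPy Butterworth filters; the filter orders, the use of zero-phase filtering, and the assumption that the raw samples are expressed in volts are illustrative choices, not the authors’ settings.

```python
# Sketch of the described pre-processing, under assumed filter choices:
# a 49-51 Hz band-stop filter for power-line noise, a 1 Hz high-pass filter to
# remove linear trends, amplitude scaling and clipping to [-1, 1], and a
# 300-sample moving average (0.6 s at 500 Hz) to smooth the coded head position.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500.0  # sampling frequency (Hz)

def preprocess_eeg(eeg):
    """eeg: array (n_samples, n_channels), raw values assumed to be in volts."""
    b, a = butter(4, [49.0, 51.0], btype="bandstop", fs=FS)  # remove 50 Hz noise
    x = filtfilt(b, a, eeg, axis=0)
    b, a = butter(4, 1.0, btype="highpass", fs=FS)           # remove linear trends
    x = filtfilt(b, a, x, axis=0)
    return np.clip(x * 1e5, -1.0, 1.0)                       # amplify and limit to [-1, 1]

def smooth_labels(head_codes, window=300):
    """head_codes: -1 (left), 0 (forward), 1 (right), one value per EEG sample."""
    kernel = np.ones(window) / window
    return np.convolve(head_codes, kernel, mode="same")      # smooth position transitions
```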

2.3.2. Input Output Data Analysis

The goal of the testing is to find a direct input-output function able to relate a certain number of EEG samples to the corresponding value of the head position. This is challenging since, as stated in the literature, time variance [36] and sensitivity to different participants’ reactions [37] are well-known obstacles.
Specifically, the goal is to identify a non-linear input-output function that takes 10 consecutive EEG samples extracted from O1, O2, and Cz (hereinafter denoted as $\underline{x}(t)$, a 3-component vector sampled at instant $t$) and returns the value of the head position at the sample immediately following the EEG samples (hereinafter denoted as $y(t)$).
A non-linear function $f$ between input $\underline{x}(t)$ and output $y(t)$ must be identified so that the values $\tilde{y}(t)$ resulting from Equation (1):

$$\tilde{y}(t) = f\big(\underline{x}(t-1), \underline{x}(t-2), \ldots, \underline{x}(t-10)\big) \qquad (1)$$
minimize the mean squared error (MSE) between the $y(t)$ and $\tilde{y}(t)$ values, where the MSE computed on one prediction is given by:

$$\mathrm{MSE} = \frac{1}{n}\sum_{t=1}^{n}\big(y(t)-\tilde{y}(t)\big)^{2} \qquad (2)$$
To make the predictions less sensitive to the input noise, the predicted values $\tilde{y}(t)$ are averaged over a moving mean of the 300 preceding samples, that is:

$$\bar{y}(t) = \frac{1}{300}\sum_{\hat{t}=0}^{299}\tilde{y}(t-\hat{t}) \qquad (3)$$
Results related to the identification reliability of the function $f$ are evaluated against two key performance indexes, the MSE and the Pearson correlation coefficient $r$, as reported below:

$$\mathrm{MSE}(y,\bar{y}) = \frac{1}{n}\sum_{t=1}^{n}\big(y(t)-\bar{y}(t)\big)^{2} \qquad (4)$$

$$r(y,\bar{y}) = \frac{\mathrm{cov}(y,\bar{y})}{\sigma_{y}\,\sigma_{\bar{y}}} \qquad (5)$$

where:
  • $\sigma_{y}$ and $\sigma_{\bar{y}}$ are the standard deviations of $y$ and $\bar{y}$;
  • $\mathrm{cov}(y,\bar{y})$ is the covariance of $y$ and $\bar{y}$.
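For reference, the smoothing of the predictions and the two performance indexes defined above can be transcribed directly into NumPy; the following minimal sketch uses illustrative variable names.

```python
# Minimal NumPy transcription of the moving-average smoothing of the predictions
# and of the two performance indexes (MSE and Pearson's r) defined above.
import numpy as np

def smooth_predictions(y_tilde, window=300):
    """Average each prediction over the window of preceding samples."""
    y_tilde = np.asarray(y_tilde, dtype=float)
    y_bar = np.empty_like(y_tilde)
    for t in range(len(y_tilde)):
        y_bar[t] = y_tilde[max(0, t - window + 1):t + 1].mean()
    return y_bar

def mse(y, y_bar):
    return float(np.mean((np.asarray(y) - np.asarray(y_bar)) ** 2))

def pearson_r(y, y_bar):
    return float(np.corrcoef(y, y_bar)[0, 1])  # cov(y, y_bar) / (sigma_y * sigma_y_bar)
```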
An ANN with 10 neurons in the hidden layer was used to identify the non-linear input-output function. The identification process is based on the Levenberg–Marquardt backpropagation algorithm [38,39], implemented in Matlab® software version R2020b (Natick, MA, USA). The training processes converged after an average of 87 steps (about 45 s) on a common Dell laptop with an Intel i5-3360M CPU, 2.8 GHz, and 8 GB of RAM (Austin, TX, USA).
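The identification itself was carried out in MATLAB with the Levenberg–Marquardt backpropagation algorithm. As a rough Python analogue for readers without MATLAB, the sketch below builds the 30-dimensional input vectors (10 past samples of the three channels) and fits a single-hidden-layer network with 10 neurons using scikit-learn; the L-BFGS solver and the remaining hyperparameters are stand-ins, since scikit-learn does not provide Levenberg–Marquardt training.

```python
# Sketch of the identification step under assumed settings: each input is the
# concatenation of the 10 previous samples of O1, O2, and Cz (30 values), and
# the target is the coded head position of the next sample.
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_dataset(eeg, head, n_lags=10):
    """eeg: (n_samples, 3) pre-processed channels; head: (n_samples,) coded position."""
    X, y = [], []
    for t in range(n_lags, len(head)):
        X.append(eeg[t - n_lags:t].ravel())  # x(t-1) ... x(t-10), flattened to 30 values
        y.append(head[t])
    return np.array(X), np.array(y)

# Example usage (eeg and head come from the pre-processing step):
# X, y = build_dataset(eeg, head)
# net = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=1000)
# net.fit(X, y)
# y_tilde = net.predict(X)
```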

3. Results

3.1. Data Set

The trials involved 10 participants: one woman (P1) and nine men (P2–P10), aged 25 to 60, with no known history of neurological abnormalities.
All participants but P5 are right-handed. P2 and P4 are hairless. For two participants, namely P1 and P2, 10 different experiments were recorded; for P10, 2 experiments were recorded, while for the others, namely P3–P9, only one experiment was recorded. All tests were about 5 min long. Table 1 shows the main characteristics of the files.
From left to right, the columns show: participant ID; file ID; the duration of each file (s); the time elapsed from the participant’s first trial; and the occurrence percentages of the three coded head positions (1 R (right), 0 F (forward), and −1 L (left)).
As an example, Figure 4 shows the trend of the three EEG channels versus the head movement output signal for file P4F1, filtered and normalized as described in Section 2.3.1.

3.2. First Analysis: Identification of the Function f on the First Half of Each File and Verification on the Second Half

Each file was divided into two equal parts; we named the first the “training set” and the second the “test set.” The training sets always include samples related to the three possible positions (R, F, L). The results on the test set can be further classified according to the r value ranges reported in Table 2 [40].
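As an illustration of this procedure, and reusing the hypothetical helper functions sketched in Section 2.3, the first analysis for a single file can be outlined as follows; the r thresholds mirror Table 2.

```python
# Sketch of the first analysis on one file: identify f on the first half,
# verify it on the second half, and label the resulting r value (Table 2 ranges).
def correlation_class(r):
    if r >= 0.50:
        return "strong"
    if r >= 0.30:
        return "moderate"
    return "weak"

# X, y = build_dataset(preprocess_eeg(raw_eeg), smooth_labels(head_codes))
# half = len(y) // 2
# net.fit(X[:half], y[:half])                        # training set: first half
# y_bar = smooth_predictions(net.predict(X[half:]))  # smoothed test predictions
# print(mse(y[half:], y_bar), correlation_class(pearson_r(y[half:], y_bar)))
```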
Table 3 shows the performance indexes on the test sets. Out of the 29 files, only two (P1F8 and P1F10) show a moderate correlation; all the others show a strong one.
Table 4 and Table 5 report the r and MSE values obtained by extracting the functions from the 10 different tests on P1 (rows) and applying them to each test of the same subject (columns). Table 6 and Table 7 report the same data obtained from the P2 tests.
The cells in the tables are grayed according to the classification given in Table 2 (white = strong correlation; gray = moderate correlation; dark gray = weak correlation).
As examples, Figure 5, Figure 6 and Figure 7 show the trends of three different cases of predictions against the actual head positions.
Figure 5 presents the best case (P4F1, r = 0.98), Figure 7 the worst one (P1F10, r = 0.38), while Figure 6 shows a case with intermediate performance (P3F1, r = 0.82 and MSE = 0.31).

3.3. Second Analysis: Identification of the Function f on One Participant’s Overall Data and Verification on All Participants’ Overall Data

In this analysis, all the files recorded for each participant were used to train the ANN, and each resulting function was then tested on the overall data of every participant. Since, on the diagonal, the function is identified and verified on the same data, the diagonal values (see Table 8 and Table 9) show a strong correlation in this analysis too. On the other hand, as expected, testing one subject’s function f on another subject’s data returns very low correlation coefficient values, almost close to zero. There is just one case that contradicts this statement: the function coming from P1 returns results with a good performance (r = 0.52, MSE = 0.38) on the P3 data. This exception is most likely fortuitous, although it is quite curious to note that P1 is P2’s mother.

4. Conclusions

The main contribution of this paper is to address an issue to which the literature concerning BCI has paid little attention: the identification of human head movements (yaw rotations) from EEG signals.
Systems of this kind are starting to appear in commercial products at a prototype level. For example, they will be used more and more in the automotive context, with proprietary systems that will, however, be mostly based on ANN applications. Thus, it is hard for the scientific community to be completely aware of the current state-of-the-art prototypes. In our opinion, it is therefore important to share experimental results on these subjects.
Concerning the head yaw rotation studied in this work, from the trials performed on ten different participants, spanning more than two hours of experiments, it seems clear that, under some specific limitations, this goal is achievable.
Specifically, after identifying a proper function over a short period of time (a couple of minutes for each participant), this function can predict head positions for the remaining minutes with relevant accuracy (MSE < 0.35 and r > 0.5, p < 0.01), obtained in 26 out of 28 tested files. Once the function is identified for a single file, it generally shows good results on files involving the same participant on the same day.
However, the results obtained in the different analyses showed that EEG signals are time variant and that only files recorded within a short time interval may be useful to generate a classifier for human head movements following visual stimuli. As a matter of fact, the correlation appears to be time dependent or, more likely, quite susceptible to the sensors’ positioning. Besides, a further result of the study, which may represent a drawback but also an important finding of the approach, is that the correlation is strongly dependent on the specific participant: it is not possible to predict one subject’s movements when the classifier is trained on another one. This may be a disadvantage in the implementation of the EEG classifier, because the classifier seems to be significantly different for each subject, and this precludes the ability to achieve an acceptable level of generalization. However, further studies should verify this when the classifier is identified on a group of several different subjects.
Other important remarks concern the reliability of the EEG data acquisition, which seems to be extremely dependent on the adherence of the electrodes to the scalp. In the proposed study cases, the two hairless participants achieved better performance in the tests, confirming that the quality of the data collection is closely related to the quality of the predictions.
Future developments will address several issues. Since, in the trials reported in this paper, the EEG is affected both by electrical and illumination stimuli, further efforts should be devoted to separating these two aspects. Secondly, further EEG signal analysis should be performed to outline input-output relations for specific frequency bands.

Author Contributions

Conceptualization, R.S., E.Z.; methodology, E.Z., C.B., R.S.; software, E.Z.; validation, E.Z.; writing—original draft preparation, C.B.; writing—review and editing, C.B.; supervision, R.S. However, the efforts in this work by all authors can be considered equal. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all individual participants involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to restrictions present in the informed consent.

Acknowledgments

This research was partially supported by Eni S.p.A.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nourmohammadi, A.; Jafari, M.; Zander, T.O. A survey on unmanned aerial vehicle remote control using brain–computer interface. IEEE Trans. Hum.-Mach. Syst. 2018, 48, 337–348. [Google Scholar] [CrossRef]
  2. Zhang, J.; Yin, Z.; Wang, R. Recognition of Mental Workload Levels Under Complex Human–Machine Collaboration by Using Physiological Features and Adaptive Support Vector Machines. IEEE Trans. Hum.-Mach. Syst. 2015, 45, 200–214. [Google Scholar] [CrossRef]
  3. Mühl, C.; Allison, B.; Nijholt, A.; Chanel, G. A survey of affective brain computer interfaces: Principles, state-of-the-art, and challenges. Brain-Comput. Interfaces 2014, 1, 66–84. [Google Scholar] [CrossRef] [Green Version]
  4. Tan, D.; Nijholt, A. Brain-Computer Interfaces and Human-Computer Interaction. In Brain-Computer Interfaces; Tan, D., Nijholt, A., Eds.; Human-Computer Interaction Series; Springer: London, UK, 2010. [Google Scholar]
  5. Saha, S.; Mamun, K.A.; Ahmed, K.I.U.; Mostafa, R.; Naik, G.R.; Darvishi, S.; Khandoker, A.H.; Baumert, M. Progress in Brain Computer Interface: Challenges and Potentials. Front. Syst. Neurosci. 2021, 15, 4. [Google Scholar] [CrossRef] [PubMed]
  6. Di Flumeri, G.; Aricò, P.; Borghini, G.; Sciaraffa, N.; Di Florio, A.; Babiloni, F. The dry revolution: Evaluation of three different EEG dry electrode types in terms of signal spectral features, mental states classification and usability. Sensors 2019, 19, 1365. [Google Scholar] [CrossRef] [Green Version]
  7. Srinivasu, P.N.; SivaSai, J.G.; Ijaz, M.F.; Bhoi, A.K.; Kim, W.; Kang, J.J. Classification of Skin Disease Using Deep Learning Neural Networks with MobileNet V2 and LSTM. Sensors 2021, 21, 2852. [Google Scholar] [CrossRef] [PubMed]
  8. Chi, Y.M.; Jung, T.; Cauwenberghs, G. Dry-Contact and Noncontact Biopotential Electrodes: Methodological Review. IEEE Rev. Biomed. Eng. 2010, 3, 106–119. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Ali, F.; El-Sappagh, S.; Islam, S.R.; Kwak, D.; Ali, A.; Imran, M.; Kwak, K.S. A smart healthcare monitoring system for heart disease prediction based on ensemble deep learning and feature fusion. Inf. Fusion 2020, 63, 208–222. [Google Scholar] [CrossRef]
  10. Stikic, M.; Johnson, R.R.; Tan, V.; Berka, C. EEG-based classification of positive and negative affective states. Brain-Comput. Interfaces 2014, 1, 99–112. [Google Scholar] [CrossRef]
  11. Monteiro, T.G.; Skourup, C.; Zhang, H. Using EEG for Mental Fatigue Assessment: A Comprehensive Look into the Current State of the Art. IEEE Trans. Hum.-Mach. Syst. 2019, 49, 599–610. [Google Scholar] [CrossRef] [Green Version]
  12. Wang, S.; Gwizdka, J.; Chaovalitwongse, W.A. Using wireless EEG signals to assess memory workload in the n-back task. IEEE Trans. Hum.-Mach. Syst. 2015, 46, 424–435. [Google Scholar] [CrossRef]
  13. Zero, E.; Bersani, C.; Zero, L.; Sacile, R. Towards real-time monitoring of fear in driving sessions. IFAC-PapersOnLine 2019, 52, 299–304. [Google Scholar] [CrossRef]
  14. Borhani, S.; Abiri, R.; Jiang, Y.; Berger, T.; Zhao, X. Brain connectivity evaluation during selective attention using EEG-based brain-computer interface. Brain-Comput. Interfaces 2019, 6, 25–35. [Google Scholar] [CrossRef]
  15. Randazzo, L.; Iturrate, I.; Perdikis, S.; Millán, J.D.R. mano: A wearable hand exoskeleton for activities of daily living and neurorehabilitation. IEEE Robot. Autom. Lett. 2017, 3, 500–507. [Google Scholar] [CrossRef] [Green Version]
  16. Sleight, J.; Pillai, P.; Mohan, S. Classification of executed and imagined motor movement EEG signals. Ann. Arbor Univ. Mich. 2009, 110, 2009. [Google Scholar]
  17. Liao, J.J.; Luo, J.J.; Yang, T.; So, R.Q.; Chua, M.C. Effects of local and global spatial patterns in EEG motor-imagery classification using convolutional neural network. Brain-Comput. Interfaces 2020, 7, 47–56. [Google Scholar] [CrossRef]
  18. Athanasiou, A.; Chatzitheodorou, E.; Kalogianni, K.; Lithari, C.; Moulos, I.; Bamidis, P.D. Comparing sensorimotor cortex activation during actual and imaginary movement. In XII Mediterranean Conference on Medical and Biological Engineering and Computing; Springer: Berlin/Heidelberg, Germany, 2010; pp. 111–114. [Google Scholar]
  19. Li, B.J.; Bailenson, J.N.; Pines, A.; Greenleaf, W.J.; Williams, L.M. A public database of immersive VR videos with corresponding ratings of arousal, valence, and correlations between head movements and self report measures. Front. Psychol. 2017, 8, 2116. [Google Scholar] [CrossRef]
  20. Ilyas, M.Z.; Saad, P.; Ahmad, M.I.; Ghani, A.R. Classification of EEG signals for brain-computer interface applications: Performance comparison. In Proceedings of the 2016 International Conference on Robotics, Automation and Sciences (ICORAS), Melaka, Malaysia, 5–6 November 2016; pp. 1–4. [Google Scholar] [CrossRef]
  21. Narayan, Y. Motor-Imagery EEG Signals Classification using SVM, MLP and LDA Classifiers. Turk. J. Comput. Math. Educ. 2021, 12, 3339–3344. [Google Scholar]
  22. Bressan, G.; Cisotto, G.; Müller-Putz, G.R.; Wriessnegger, S.C. Deep learning-based classification of fine hand movements from low frequency EEG. Future Internet 2021, 13, 103. [Google Scholar] [CrossRef]
  23. Antoniou, E.; Bozios, P.; Christou, V.; Tzimourta, K.D.; Kalafatakis, K.; G Tsipouras, M.; Giannakeas, N.; Tzallas, A.T. EEG-Based Eye Movement Recognition Using the Brain–Computer Interface and Random Forests. Sensors 2021, 21, 2339. [Google Scholar] [CrossRef]
  24. Zero, E.; Bersani, C.; Sacile, R. EEG Based BCI System for Driver’s Arm Movements Identification. In Proceedings of the Automation, Robotics & Communications for Industry 4.0, Chamonix-Mont-Blanc, France, 3–5 February 2021; Volume 77. [Google Scholar]
  25. Bashashati, A.; Fatourechi, M.; Ward, R.K.; Birch, G.E. A survey of signal processing algorithms in brain–computer interfaces based on electrical brain signals. J. Neural Eng. 2007, 4, R32. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Kawala-Sterniuk, A.; Browarska, N.; Al-Bakri, A.; Pelc, M.; Zygarlicki, J.; Sidikova, M.; Martinek, R.; Gorzelanczyk, E.J. Summary of over Fifty Years with Brain-Computer Interfaces—A Review. Brain Sci. 2021, 11, 43. [Google Scholar] [CrossRef] [PubMed]
  27. Craig, D.A.; Nguyen, H.T. Adaptive EEG Thought Pattern Classifier for Advanced Wheelchair Control. In Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, 22–26 August 2007; pp. 2544–2547. [Google Scholar] [CrossRef]
  28. Alazrai, R.; Abuhijleh, M.; Alwanni, H.; Daoud, M.I. A Deep Learning Framework for Decoding Motor Imagery Tasks of the Same Hand Using EEG Signals. IEEE Access 2019, 7, 109612–109627. [Google Scholar] [CrossRef]
  29. Tariq, M.; Trivailo, P.M.; Simic, M. Mu-Beta event-related (de) synchronization and EEG classification of left-right foot dorsiflexion kinaesthetic motor imagery for BCI. PLoS ONE 2020, 15, e0230184. [Google Scholar] [CrossRef]
  30. Chambayil, B.; Singla, R.; Jha, R. EEG eye blink classification using neural network. In Proceedings of the World Congress on Engineering, London, UK, 30 June–2 July 2010; Volume 1, pp. 2–5. [Google Scholar]
  31. Lotte, F.; Congedo, M.; Lécuyer, A.; Lamarche, F.; Arnaldi, B. A review of classification algorithms for EEG-based brain–computer interfaces. J. Neural Eng. 2007, 4, R1. [Google Scholar] [CrossRef]
  32. Müller, V.; Lutzenberger, W.; Preißl, H.; Pulvermüller, F.; Birbaumer, N. Complexity of visual stimuli and non-linear EEG dynamics in humans. Cogn. Brain Res. 2003, 16, 104–110. [Google Scholar] [CrossRef]
  33. Rilk, A.J.; Soekadar, S.R.; Sauseng, P.; Plewnia, C. Alpha coherence predicts accuracy during a visuomotor tracking task. Neuropsychologia 2011, 49, 3704–3709. [Google Scholar] [CrossRef]
  34. Jiang, X.; Bian, G.B.; Tian, Z. Removal of artifacts from EEG signals: A review. Sensors 2019, 19, 987. [Google Scholar] [CrossRef] [Green Version]
  35. Singh, V.; Veer, K.; Sharma, R.; Kumar, S. Comparative study of FIR and IIR filters for the removal of 50 Hz noise from EEG signal. Int. J. Biomed. Eng. Technol. 2016, 22, 250–257. [Google Scholar] [CrossRef]
  36. Sakkalis, V. Review of advanced techniques for the estimation of brain connectivity measured with EEG/MEG. Comput. Biol. Med. 2011, 41, 1110–1117. [Google Scholar] [CrossRef]
  37. Abiri, R.; Borhani, S.; Sellers, E.W.; Jiang, Y.; Zhao, X. A comprehensive review of EEG-based brain–computer interface paradigms. J. Neural Eng. 2019, 16, 011001. [Google Scholar] [CrossRef] [PubMed]
  38. Sapna, S.; Tamilarasi, A.; Kumar, M.P. Backpropagation learning algorithm based on Levenberg Marquardt Algorithm. Comp. Sci. Inform. Technol. 2012, 2, 393–398. [Google Scholar]
  39. Lv, C.; Xing, Y.; Zhang, J.; Na, X.; Li, Y.; Liu, T.; Cao, D.; Wang, F.Y. Levenberg–Marquardt backpropagation training of multilayer neural networks for state estimation of a safety-critical cyber-physical system. IEEE Trans. Ind. Inform. 2017, 14, 3436–3446. [Google Scholar] [CrossRef] [Green Version]
  40. Cohen, J. Statistical Power Analysis for the Behavioral Sciences; Academic Press: Cambridge, MA, USA, 2013. [Google Scholar]
Figure 1. System architecture.
Figure 2. Top view of the layout of the experimental environment.
Figure 3. EEG Enobio cap.
Figure 4. Trend of channels O1, O2, and CZ vs. the output signal y in file P4F1.
Figure 5. Predicted vs. actual values in P4F1 testing (r = 0.98 and MSE = 0.02).
Figure 6. Predicted vs. actual values in the second half of file P3F1 (r = 0.82 and MSE = 0.31).
Figure 7. Predicted vs. actual values in P1F10 testing (r = 0.38 and MSE = 0.38).
Table 1. The files used to identify the function f.

Part. ID | File ID | Duration Time (s) | Start Time | Head Position Occurrence (Left L, Forward F, Right R)
P1 | F1 | 328 | 0 | L 14.2%, F 59.5%, R 26.3%
P1 | F2 | 310 | 51 d | L 28.9%, F 60.7%, R 10.5%
P1 | F3 | 319 | 51 d | L 18.0%, F 59.9%, R 22.1%
P1 | F4 | 336 | 54 d | L 12.0%, F 60.3%, R 27.7%
P1 | F5 | 335 | 54 d | L 11.9%, F 60.3%, R 27.8%
P1 | F6 | 328 | 54 d | L 18.5%, F 60.9%, R 20.6%
P1 | F7 | 306 | 100 d | L 17.9%, F 61.5%, R 20.6%
P1 | F8 | 307 | 100 d | L 20.2%, F 59.6%, R 20.2%
P1 | F9 | 328 | 100 d | L 16.7%, F 59.9%, R 23.4%
P1 | F10 | 305 | 100 d | L 22.8%, F 61.1%, R 16.1%
P2 | F1 | 341 | 0 | L 21.6%, F 60.7%, R 17.7%
P2 | F2 | 321 | 68 d | L 19.1%, F 59.8%, R 21.2%
P2 | F3 | 325 | 68 d | L 12.7%, F 60.3%, R 27.1%
P2 | F4 | 354 | 68 d | L 22.9%, F 61.7%, R 15.4%
P2 | F5 | 384 | 68 d | L 17.5%, F 61.3%, R 21.2%
P2 | F6 | 304 | 85 d | L 11.0%, F 60.4%, R 28.6%
P2 | F7 | 314 | 85 d | L 17.0%, F 59.5%, R 23.5%
P2 | F8 | 312 | 85 d | L 30.5%, F 60.9%, R 8.7%
P2 | F9 | 316 | 85 d | L 23.3%, F 62.0%, R 14.7%
P2 | F10 | 316 | 85 d | L 17.0%, F 60.1%, R 22.9%
P3 | F1 | 314 | 0 | L 19.1%, F 59.6%, R 21.3%
P4 | F1 | 300 | 0 | L 25.9%, F 59.3%, R 14.8%
P5 | F1 | 399 | 0 | L 16.8%, F 60.0%, R 23.2%
P6 | F1 | 308 | 0 | L 11.0%, F 60.7%, R 28.4%
P7 | F1 | 356 | 0 | L 23.0%, F 61.7%, R 15.8%
P8 | F1 | 304 | 0 | L 25.5%, F 61.7%, R 12.8%
P9 | F1 | 366 | 0 | L 19.7%, F 60.4%, R 19.9%
P10 | F1 | 377 | 0 | L 24.8%, F 59.6%, R 15.6%
P10 | F2 | 339 | 1 h | L 22.1%, F 59.9%, R 18.0%
Table 2. Threshold values to evaluate correlation performance.

r | Correlation Performance
0.50 ≤ r ≤ 1 | strong
0.30 ≤ r < 0.50 | moderate
r < 0.30 | weak
Table 3. Prediction performances by the first analysis.

Participant ID | File ID | MSE | r
P1 | F1 | 0.12 | 0.86
P1 | F2 | 0.20 | 0.80
P1 | F3 | 0.32 | 0.78
P1 | F4 | 0.14 | 0.84
P1 | F5 | 0.26 | 0.78
P1 | F6 | 0.21 | 0.71
P1 | F7 | 0.30 | 0.61
P1 | F8 | 0.37 | 0.48
P1 | F9 | 0.42 | 0.79
P1 | F10 | 0.38 | 0.38
P2 | F1 | 0.31 | 0.71
P2 | F2 | 0.19 | 0.86
P2 | F3 | 0.12 | 0.88
P2 | F4 | 0.16 | 0.86
P2 | F5 | 0.16 | 0.82
P2 | F6 | 0.29 | 0.82
P2 | F7 | 0.30 | 0.78
P2 | F8 | 0.28 | 0.57
P2 | F9 | 0.27 | 0.76
P2 | F10 | 0.35 | 0.87
P3 | F1 | 0.31 | 0.82
P4 | F1 | 0.02 | 0.98
P5 | F1 | 0.13 | 0.91
P6 | F1 | 0.37 | 0.59
P7 | F1 | 0.35 | 0.66
P8 | F1 | 0.33 | 0.76
P9 | F1 | 0.33 | 0.78
P10 | F1 | 0.18 | 0.89
P10 | F2 | 0.32 | 0.93
Table 4. r values (P1).

r | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10
F1 | 0.90 | −0.18 | −0.21 | −0.73 | −0.39 | −0.74 | 0.40 | −0.17 | −0.35 | −0.15
F2 | −0.23 | 0.78 | 0.59 | −0.79 | −0.70 | −0.78 | −0.32 | −0.21 | −0.42 | −0.47
F3 | −0.33 | 0.52 | 0.71 | −0.80 | −0.65 | −0.81 | −0.19 | −0.35 | −0.10 | −0.14
F4 | 0.01 | −0.59 | −0.56 | 0.84 | 0.73 | 0.82 | 0.04 | 0.57 | 0.15 | 0.19
F5 | 0.06 | −0.53 | −0.50 | 0.84 | 0.82 | 0.84 | 0.49 | −0.09 | −0.01 | 0.08
F6 | 0.17 | −0.54 | −0.47 | 0.84 | 0.76 | 0.85 | 0.37 | −0.15 | 0.13 | 0.13
F7 | 0.70 | −0.14 | −0.14 | 0.70 | 0.60 | 0.70 | 0.67 | 0.27 | −0.07 | −0.02
F8 | −0.11 | −0.48 | −0.40 | 0.80 | 0.68 | 0.81 | 0.35 | 0.55 | 0.31 | 0.08
F9 | −0.69 | −0.06 | 0.22 | 0.41 | 0.33 | 0.45 | 0.39 | 0.07 | 0.80 | 0.68
F10 | −0.66 | −0.32 | 0.00 | 0.78 | 0.52 | 0.77 | 0.20 | 0.31 | 0.83 | 0.79
Table 5. MSE values (P1).

MSE | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10
F1 | 0.09 | 0.51 | 0.41 | 0.52 | 0.49 | 0.47 | 0.35 | 0.41 | 0.40 | 0.40
F2 | 0.54 | 0.17 | 0.46 | 0.93 | 1.23 | 1.33 | 0.86 | 0.78 | 0.78 | 0.68
F3 | 0.48 | 0.41 | 0.31 | 1.10 | 1.72 | 1.73 | 0.50 | 0.41 | 0.40 | 0.38
F4 | 1.25 | 2.64 | 2.18 | 0.11 | 0.19 | 0.13 | 2.39 | 2.46 | 2.26 | 2.79
F5 | 0.79 | 1.88 | 1.41 | 0.14 | 0.13 | 0.13 | 1.95 | 1.84 | 1.50 | 1.93
F6 | 1.05 | 2.13 | 1.64 | 0.12 | 0.17 | 0.11 | 2.25 | 2.11 | 1.70 | 2.16
F7 | 0.28 | 0.43 | 0.40 | 0.58 | 0.47 | 0.45 | 0.25 | 0.38 | 0.40 | 0.38
F8 | 0.48 | 0.47 | 0.42 | 1.01 | 1.08 | 1.07 | 0.33 | 0.30 | 0.37 | 0.40
F9 | 0.85 | 0.45 | 0.37 | 0.34 | 0.35 | 0.35 | 0.34 | 0.41 | 0.32 | 0.35
F10 | 0.88 | 0.42 | 0.43 | 0.55 | 0.71 | 0.51 | 0.36 | 0.38 | 0.35 | 0.34
Table 6. r values (P2).

r | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10
F1 | 0.78 | 0.62 | 0.77 | 0.62 | 0.77 | 0.41 | −0.01 | 0.01 | 0.02 | 0.00
F2 | 0.09 | 0.86 | 0.86 | 0.86 | 0.85 | 0.29 | 0.51 | 0.61 | 0.70 | 0.52
F3 | 0.01 | 0.77 | 0.87 | 0.77 | 0.84 | 0.21 | 0.31 | 0.11 | 0.26 | 0.33
F4 | 0.00 | 0.78 | 0.86 | 0.76 | 0.86 | 0.27 | 0.37 | 0.38 | 0.42 | 0.43
F5 | −0.14 | 0.83 | 0.86 | 0.83 | 0.87 | 0.30 | 0.60 | 0.58 | 0.72 | 0.60
F6 | 0.15 | 0.42 | 0.60 | 0.42 | 0.83 | 0.75 | 0.70 | 0.69 | 0.77 | 0.80
F7 | −0.00 | −0.12 | 0.44 | −0.12 | 0.39 | 0.42 | 0.80 | 0.72 | 0.84 | 0.86
F8 | 0.15 | −0.22 | −0.24 | −0.22 | −0.11 | 0.37 | 0.71 | 0.72 | 0.80 | 0.65
F9 | 0.16 | −0.20 | −0.15 | −0.20 | −0.21 | 0.53 | 0.62 | 0.66 | 0.81 | 0.72
F10 | −0.11 | −0.36 | 0.05 | −0.36 | 0.04 | 0.41 | 0.78 | 0.71 | 0.83 | 0.87
Table 7. MSE values (P2).

MSE | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10
F1 | 0.29 | 0.26 | 0.29 | 0.43 | 0.31 | 0.31 | 0.63 | 1.13 | 0.93 | 0.68
F2 | 0.40 | 0.16 | 0.15 | 0.17 | 0.15 | 0.51 | 0.47 | 0.30 | 0.36 | 0.46
F3 | 0.43 | 0.20 | 0.13 | 0.18 | 0.16 | 0.40 | 0.44 | 0.34 | 0.35 | 0.45
F4 | 0.44 | 0.17 | 0.15 | 0.15 | 0.54 | 0.58 | 0.33 | 0.43 | 0.59 | 0.21
F5 | 0.42 | 0.20 | 0.15 | 0.16 | 0.15 | 0.48 | 0.51 | 0.31 | 0.40 | 0.51
F6 | 0.40 | 0.34 | 0.28 | 0.32 | 0.29 | 0.25 | 0.36 | 0.52 | 0.41 | 0.35
F7 | 0.38 | 0.42 | 0.38 | 0.33 | 0.36 | 0.36 | 0.29 | 0.33 | 0.28 | 0.31
F8 | 0.39 | 0.55 | 0.57 | 0.41 | 0.47 | 0.47 | 0.39 | 0.26 | 0.30 | 0.39
F9 | 0.37 | 0.55 | 0.50 | 0.39 | 0.44 | 0.37 | 0.35 | 0.27 | 0.26 | 0.34
F10 | 0.39 | 0.44 | 0.40 | 0.36 | 0.38 | 0.32 | 0.30 | 0.34 | 0.28 | 0.30
Table 8. r values in the second analysis.

r | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10
P1 | 0.41 | −0.09 | 0.08 | −0.07 | −0.02 | 0.21 | 0.01 | 0.03 | −0.07 | 0.01
P2 | −0.36 | 0.64 | −0.17 | 0.06 | 0.19 | −0.30 | −0.09 | −0.02 | 0.14 | 0.23
P3 | 0.52 | −0.53 | 0.81 | 0.07 | −0.26 | 0.60 | −0.22 | 0.41 | −0.05 | 0.08
P4 | −0.07 | −0.17 | −0.42 | 0.93 | 0.08 | −0.48 | −0.72 | −0.24 | −0.04 | 0.19
P5 | −0.31 | −0.02 | −0.50 | −0.62 | 0.90 | −0.27 | 0.80 | −0.64 | −0.15 | 0.02
P6 | −0.04 | −0.21 | 0.45 | 0.10 | 0.11 | 0.53 | −0.56 | −0.48 | −0.04 | 0.03
P7 | 0.31 | −0.08 | −0.52 | −0.16 | 0.33 | −0.57 | 0.67 | −0.11 | −0.27 | 0.23
P8 | 0.51 | −0.32 | 0.67 | 0.43 | −0.62 | −0.57 | 0.41 | 0.72 | −0.09 | 0.10
P9 | −0.06 | 0.78 | 0.19 | 0.42 | 0.40 | 0.11 | −0.26 | 0.01 | 0.80 | −0.59
P10 | −0.63 | −0.50 | 0.29 | −0.33 | −0.58 | −0.19 | 0.62 | −0.23 | −0.74 | 0.84
Table 9. MSE values in the second analysis.

MSE | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10
P1 | 0.33 | 0.41 | 0.46 | 0.40 | 0.59 | 0.41 | 0.45 | 0.41 | 0.46 | 0.58
P2 | 0.55 | 0.27 | 0.50 | 0.39 | 0.71 | 0.43 | 0.42 | 0.49 | 0.44 | 0.44
P3 | 0.38 | 0.41 | 0.25 | 0.40 | 0.96 | 0.36 | 0.43 | 0.42 | 0.47 | 0.63
P4 | 0.74 | 2.26 | 2.40 | 0.05 | 6.95 | 0.62 | 0.67 | 0.70 | 1.01 | 0.61
P5 | 0.40 | 0.40 | 0.65 | 0.42 | 0.12 | 0.45 | 0.27 | 0.89 | 0.47 | 0.62
P6 | 0.37 | 0.40 | 0.36 | 0.40 | 1.18 | 0.33 | 0.42 | 0.49 | 0.38 | 0.78
P7 | 0.37 | 0.37 | 0.42 | 0.37 | 0.69 | 0.42 | 0.32 | 0.39 | 0.43 | 0.46
P8 | 0.33 | 0.37 | 0.43 | 0.35 | 0.49 | 0.62 | 0.32 | 0.28 | 0.45 | 0.44
P9 | 0.75 | 0.64 | 0.72 | 0.72 | 0.68 | 0.85 | 0.81 | 0.96 | 1.39 | 0.34
P10 | 0.63 | 0.63 | 0.52 | 0.60 | 0.85 | 0.60 | 0.51 | 0.61 | 0.20 | 0.96
