1. Introduction
In human–computer interaction (HCI), the design and application of brain–computer interfaces (BCIs) is among the most challenging research activities. BCI technologies aim to convert human mental activities, via the electrical brain signals they produce, into control commands for external devices such as robotic systems [1]. Recently, the scientific literature has shown particular interest in identifying cognitive reactions elicited by the perception of a specific environment or by an adaptive HCI [2]. Reviews on BCI and HCI can be found in Mühl et al. [3] and Tan and Nijholt [4].
The essential stages of a BCI application are the acquisition of brain activity signals, preprocessing, feature extraction, classification, and feedback.
Brain signals may be acquired by different devices, such as electroencephalography (EEG), magnetoencephalography (MEG), electrocorticography (ECoG), or functional near-infrared spectroscopy (fNIRS) [5]. Preprocessing consists of cleaning the input data of noise (so-called artifacts), while the feature extraction phase selects, from the input signals, the most relevant features required to discriminate the data for the specific classification task [6]. Classification is the central element of the BCI: it refers to the identification of the correct translation algorithm, which converts the extracted signal features into control commands for the devices according to the user's intention.
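The acquisition-to-command chain described above can be sketched in code. The following is a minimal illustrative example, not the actual processing pipeline of any cited study: it generates synthetic two-channel epochs, uses alpha-band (8–12 Hz) power as the extracted feature, and employs a toy nearest-mean classifier as a stand-in for the translation algorithm. The sampling rate and all signal parameters are arbitrary assumptions.

```python
import numpy as np

FS = 250  # assumed sampling rate in Hz (illustrative, not from the paper)

def bandpower(signal, fs, lo, hi):
    """Mean spectral power of `signal` in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def extract_features(epoch, fs=FS):
    """Feature extraction stage: one alpha-band (8-12 Hz) power value per channel."""
    return np.array([bandpower(ch, fs, 8.0, 12.0) for ch in epoch])

class NearestMeanClassifier:
    """Toy translation algorithm: assign each epoch to the nearest class mean."""
    def fit(self, X, y):
        self.means_ = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
        return self
    def predict(self, X):
        classes = list(self.means_)
        dists = np.stack([np.linalg.norm(X - self.means_[c], axis=1) for c in classes])
        return np.array(classes)[dists.argmin(axis=0)]

# Synthetic "acquisition": class-1 epochs carry a strong 10 Hz component.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
def make_epoch(label):
    epoch = rng.normal(0.0, 1.0, (2, FS))  # two channels of white noise
    if label == 1:
        epoch += 3.0 * np.sin(2 * np.pi * 10.0 * t)  # alpha-band activity
    return epoch

labels = np.array([0, 1] * 20)
X = np.array([extract_features(make_epoch(l)) for l in labels])
clf = NearestMeanClassifier().fit(X[:20], labels[:20])   # training phase
accuracy = (clf.predict(X[20:]) == labels[20:]).mean()   # feedback/evaluation
```

In a real BCI, the nearest-mean stand-in would be replaced by one of the classifiers discussed below (SVM, LDA, MLP, RF, or CNN), and the features would come from actual electrode recordings rather than synthetic sinusoids.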
From the signal acquisition viewpoint, EEG is the most widely used technique: it is non-invasive, cheap, and portable, while still assuring good spatial and temporal resolution [7]. As stated in the literature [8], however, the acquisition of EEG signals through hair remains a critical issue.
EEG-based biometric recognition systems have been used in a wide range of clinical and research applications [9], such as interpreting humans' emotional states [10], monitoring participants' alertness or fatigue [11], checking memory workload [12], evaluating participants' fear when subjected to unpredictable acoustic or visual external stimuli [13], or diagnosing generic brain disorders [14].
A significant body of literature on EEG signal analysis versus visual-motor tasks is available, with particularly relevant perspectives in rehabilitation engineering. A pertinent recent example of EEG analysis applied to robotics and rehabilitation engineering is provided by Randazzo et al. [15], who tested on nine participants how an exoskeleton coupled with a BCI can elicit the EEG brain patterns typical of natural hand motions.
Moreover, cognitive activities related to motor movements have been observed in EEG recordings of both actually executed and imagined actions [16,17]. Comparing the neural signals produced by actual and imagined movements, most papers conclude that the brain activities are similar [18]. A significant correlation between head movements and visual stimuli has also been proven in the literature [19].
To realize an EEG-based BCI, it is necessary to adopt a classifier that interprets the EEG signals and to implement a control system on top of it. In fact, depending on the recorded EEG pattern and the classification phase, the EEG may be used as input to the control interface in order to command external devices. As demonstrated in the literature, the quality of the classifier, which has to extract the meaningful information from the brain signals, is the crucial factor in obtaining a robust BCI [20].
The best-known techniques for EEG signal classification in motor-imagery BCI applications are support vector machine (SVM), linear discriminant analysis (LDA), multi-layer perceptron (MLP), random forest (RF), and convolutional neural network (CNN) classifiers. In Narayan [21], SVM achieved the best performance, with 98.8% classification accuracy, compared with LDA and MLP for left-hand and right-hand movement recognition. In [22], the authors demonstrated the superiority of CNN over LDA and RF for the classification of different fine hand movements. In Antoniou et al. [23], the RF algorithm outperformed K-NN, MLP, and SVM in classifying the eye movements used in an EEG-based control system for driving an electromechanical wheelchair. In Zero et al. [24], a time delay neural network (TDNN) classification model was implemented to classify a driver's EEG signals when rotating the steering wheel to perform a right or a left turn during a driving task in a simulated environment.
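For illustration, the classifier families cited above can be compared side by side with scikit-learn. The sketch below runs SVM, LDA, MLP, and RF on synthetic feature vectors; the dataset, feature dimension, class separation, and hyperparameters are arbitrary assumptions chosen only to make the comparison runnable, and do not reproduce the data or results of the cited studies.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_epochs, n_features = 200, 8          # e.g. band-power features per channel (assumed)
y = rng.integers(0, 2, n_epochs)       # binary labels, e.g. 0 = left, 1 = right
X = rng.normal(0.0, 1.0, (n_epochs, n_features)) + y[:, None] * 1.5  # class shift

models = {
    "SVM": SVC(kernel="rbf"),
    "LDA": LinearDiscriminantAnalysis(),
    "MLP": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
}

# 5-fold cross-validated accuracy for each classifier family.
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
```

On such an easily separable synthetic set all four classifiers score highly; the ranking differences reported in [21–23] only emerge on real EEG data, where feature noise and subject variability dominate.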
More generally, almost 25% of recent classification algorithms for neural cortical recordings are based on artificial neural networks (ANNs) [25], which have been intensively applied to EEG classification [26]. An interesting application is presented in Craig and Nguyen [27], where the authors proposed an ANN classifier for mental commands with the purpose of enhancing the control of a head-movement-controlled power wheelchair for patients with chronic spinal cord injury. The authors obtained an average accuracy of 82.4%, but they also noticed that the classifier performed worse than expected when applied to a new subject, and that customizing the classifier through an adaptive training technique increased the quality of the predictions. ANNs have also been used for motor-imagery classification of hand [28] or foot movements [29], as well as for eye-blink detection [30]. A review of classification algorithms for EEG-based BCIs is given in Lotte et al. [31].
This paper pursues an objective that is original, in the context of EEG signal classifiers, with respect to the literature on body movements. Although this work adopts a traditional ANN classifier, the main novelty lies in the scope of the application: recognizing, from EEG brain activity, yaw head rotations directed toward a light target, in order to support driving tasks in different applications, such as controlling an autonomous vehicle, a wheelchair, or a robot in general.
In detail, this work concerns using brain electrical activity to recognize head movements in human subjects. The input data are EEG signals collected from a set of 10 participants; the left or right head positions assumed in response to an external visual stimulus represent the output data of the experiments. The main purpose of the proposed approach is to define and verify the effectiveness of the BCI system in identifying an input-output function between the EEG signals and the different head positions.
Section 2 introduces the BCI architecture used for the experiments, while Section 3 shows the results of different training and testing scenarios. Section 4 briefly reports the conclusions.
4. Conclusions
The main contribution of this paper is to address an issue to which the BCI literature has paid little attention: the identification of human head movements (yaw rotations) from EEG signals.
Systems of this kind are starting to appear in commercial products at the prototype level. For example, they will be used more and more in the automotive context, through proprietary systems that will, however, mostly be based on ANN applications. It is thus hard for the scientific community to be completely aware of the current state of the art of these prototypes. In our opinion, it is important to share experimental results on these subjects.
Concerning the head yaw rotation studied in this work, the trials performed on ten different participants, spanning more than two hours of experiments, make it clear that, under some specific limitations, this goal is achievable.
Specifically, a function identified over a short period of time (a couple of minutes for each participant) can predict head positions for the remaining minutes with considerable accuracy (MSE < 0.35 and r > 0.5, p < 0.01), a result obtained in 26 out of 28 tested files. Once the function is identified for a single file, it generally shows good results on other files involving the same participant on the same day.
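The evaluation criteria used above (MSE, Pearson's r, and its p-value) can be computed with a few lines of scipy/numpy. The sketch below reproduces the form of the computation only: the data are synthetic, with a hypothetical noise level chosen for illustration, and do not come from the experiments reported in this paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 600  # e.g. one prediction per sampled instant over a few minutes (assumed)

# Encoded head position: 0 = left, 1 = right (illustrative labels).
target = rng.integers(0, 2, n).astype(float)

# Hypothetical classifier output: the true position plus Gaussian noise.
predicted = target + rng.normal(0.0, 0.5, n)

mse = np.mean((predicted - target) ** 2)   # mean squared error
r, p = stats.pearsonr(predicted, target)   # Pearson correlation and p-value
```

With this noise level the synthetic run lands inside the acceptance region used in the paper (MSE < 0.35, r > 0.5, p < 0.01); on real data these thresholds are what separates the 26 successful files from the 2 unsuccessful ones.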
However, the results obtained in the different analyses proved that EEG signals are time variant, and that only files recorded within a short time interval are useful for generating a classifier of human head movements following visual stimuli. As a matter of fact, the correlation appears to be time dependent or, more likely, quite susceptible to the positioning of the sensors. A further result of the study, which may represent a drawback but is also an important finding, is that the correlation depends strongly on the specific participant: a classifier trained on one subject cannot predict the head movements of another. This may be a disadvantage for the implementation of the EEG classifier, since the classifier seems to differ significantly from subject to subject, precluding an acceptable level of generalization. Further studies should verify whether this still holds when the classifier is identified on a group of several different subjects.
Other important remarks concern the reliability of EEG data acquisition, which seems to depend strongly on the adherence of the electrodes to the scalp. In the proposed study cases, the two hairless participants achieved better performance in the tests, proving that the quality of the data collection is closely related to the quality of the predictions.
Future developments will address several topics. Since, in the trials reported in this paper, the EEG is affected by both electrical and illumination stimuli, further efforts should be devoted to separating these two aspects. Secondly, further EEG signal analysis should be performed to outline input-output relations for specific frequency bands.