1. Introduction
Advances in cognitive engineering support the design, analysis, and development of complex human–machine–environment systems [1]. Recently, special interest has been devoted to the identification of human cognitive reactions to adaptive human–computer interaction (HCI) [2] or to the human perception of the environment. In HCI research, brain–computer interface (BCI) systems represent one of the most challenging research areas. Reviews of BCI and HMI can be found in [3,4]. BCI technologies monitoring brain activity may be classified as either invasive or noninvasive. Invasive BCI technologies generally consist of electrodes implanted in the human cortex. Noninvasive BCI technologies usually rely on electrodes positioned on the scalp to perform electroencephalography (EEG) or magnetoencephalography (MEG). EEG and MEG monitor, respectively, the electrical and magnetic fields generated by neuronal activity.
This work concerns the use of EEG for the recognition of head movement in a human participant. A survey of EEG in this context can be found in [5]. The EEG is one of the most promising tools to monitor cognitive brain function due to its higher temporal precision with respect to other tools such as functional magnetic resonance imaging (fMRI), which offers higher spatial resolution [6]. EEG-based biometric recognition systems have been used in a wide range of clinical and research applications, among others: interpreting human emotional states [7]; monitoring participants' alertness or mental fatigue [8]; sleep stage identification [9]; assessing memory workload [10]; locating damaged brain areas in case of injury or disease [11]; automated diagnosis of patients with learning difficulties [12]; and, in general, diagnosing brain disorders [13]. EEG monitoring, analysis, and processing are useful techniques to identify intentional cognitive activities from brain electrical signals. Some studies investigated the event-related potential when the decisions of the monitored participants were affected by external stimuli during workload sessions. In fact, workload performance depends not only on the operator but also on the surrounding environment and on their interactions. A cognitive stimulus represents an event that involves the subject while performing different planned tasks during EEG data collection. Several recent examples may be considered. The evaluation of a driver's behavior [14,15] or drowsiness [16,17] by analyzing EEG signals has been conducted. Specifically, a Multitask Deep Neural Network (MTDNN) to distinguish between alert and drowsy states of mind was set up in [18]. To prevent road accidents, driver fear detected from EEG signals was evaluated when the driver was subjected to unpredictable acoustic or visual external stimuli [19]. In the automotive context, the interaction between the driver and an onboard Human Machine Interface (HMI) may support safer driving, especially in critical situations [20]. Emergency braking, a rapid lane change, or the avoidance of an obstacle on the route can also be detected in the driver's brain in advance of the vehicle dynamics, and this contribution may improve the quality of driver intent prediction [21] and overall system safety [22].
BCI technologies aim at converting the mental activities of a human participant into electrical brain signals, producing control commands for external devices such as robotic systems [23]. In addition, BCIs may include paradigms that identify patterns of brain activity for a reliable interpretation of the observed signals [24]. In a control-oriented approach, the BCI takes the user's EEG features as input and translates the user's intention into output commands that drive peripheral devices. In this context, most BCI applications deal with neuro-aid devices [25]. A quadratic discriminant analysis (QDA) classifier was used to generate the control signals to start, stop, and reverse the rotation of a DC motor [26]. Other intelligent classification systems have recently been used: a feedforward multilayer neural network [27] and two linear discriminant analysis (LDA) [28] classifiers were realized to control a robotic chair. Other applications relate to the control of drones [29], robot arms [30,31], virtual objects [32], or speech communication [33].
One of the most challenging tasks in an EEG BCI application is the implementation of pattern recognition to identify the classes of data arising from brain activity. The pattern recognition task depends on classification algorithms, a survey of which is presented in [34]. Nonlinear classifiers, such as artificial neural networks (ANNs) and support vector machines (SVMs), have been demonstrated to produce slightly better classification results than linear techniques (such as linear discriminant analysis) in the context of mental tasks [35]. Further research has been carried out to classify neural activities when cognitive stimuli are related to olfactory [36] or visual [37] perception. The correlation between the complexity of visual stimuli and the EEG alpha power variations in the parieto–occipital, right parieto–temporal, and central–frontal regions was demonstrated in [38]; this is in agreement with the findings in [19]. In the context of mental imagery and eye state recognition, a novel learning algorithm based on the Hamilton–Jacobi–Bellman equation was proposed to train neural network classifiers, obtaining better accuracy than other traditional approaches [39]. A neuro-fuzzy classifier was used to decode EEG signals during the driver's visual alertness, motor planning, and motor execution phases [40]. Other studies demonstrated the feasibility of interpreting EEG signals correlated with drivers' movements in both simulated and real car environments [21]. Cognitive activities related to motor movements have been observed in the EEG both for executed and for imagined actions [41]. A comparison between actual and imaginary movements by neural signals concluded that the brain activities are similar [42]. The classification accuracy in predicting the actions of standing and sitting by motor imagery (MI) and motor execution (ME) was evaluated in [43]; the results showed that the classification of MI provides the highest mean accuracy, 82.73 ± 2.54%, in the stand-to-sit transition. Another study demonstrated that ANNs perform better than SVMs in motor imagery classification [44]. Different classification algorithms, including LDA, QDA, the k-nearest neighbor (KNN) algorithm, linear SVM, radial basis function (RBF) SVM, and naïve Bayes, were compared in classifying left/right hand movement, with the RBF SVM reaching an accuracy of about 82% in [45]. A 2D study investigated the movements of the dominant hand when a participant had to track a moving cursor on a screen with an imaginary mouse on the table [46]; in this study, the prediction accuracy of horizontal movement intention was found to be higher than that of vertical movement. An EEG-based classifier has been implemented to identify the driver's arm movements when he/she must rotate the steering wheel to perform a right or left turn in a virtual driving environment [47].
The classification of head movements appears more complex since, with respect to other monitored actions, the EEG signals are affected by artifacts to a greater extent [48]. In this case, the EEG signals may also include noncerebral activities coming from sources such as hair, eye activity, or muscle movements [49]. Many techniques have been used to analyze such artifacts [50]. EEG artifacts were identified using the signal from a gyroscope located on the EEG device in [48]. An SVM-based automatic detection system, in which the artifacts were treated as a distinct class, was developed in [51]. Artifacts have also been classified using autoregressive models [52] and hidden Markov models [53,54], or by introducing independent component analysis (ICA)-based heuristic approaches [55]. ICA and an adaptive filter (AF) were used to remove ocular artifacts from EEG signals in [56]. An automatic technique to remove artifacts caused by head movements was proposed in [57], where the signals from an accelerometer located on the participant's head were acquired together with the EEG signals; the EEG components correlated with the accelerometer signals were then removed from the EEG.
A significant relationship between head movements and the emotions generated by visual stimuli was found in [58]. EEG spectral analysis in the context of visual-motor tasks was carried out, and human head movements following visual stimuli were predicted with high accuracy by an ANN classifier in [59]. A significant correlation of r = 0.98 was obtained between training and test trials of the same person, but relevant difficulties emerged in predicting the head movement of a participant when the classifier had been trained on another subject. A positive correlation has been found between the task of drawing an image boundary on a screen by moving a mouse by hand and the brain activity in the upper alpha band from a single frontal electrode [60]. In a similar experiment, involving the fingers of one hand, it was found that alpha band variations are correlated with visual-motor performance, while the coherence in alpha and beta oscillations implies an integration among visual-motor brain areas [61]. A great amount of work has been devoted to eye or hand movements, while comparatively little work has discussed the correlation of EEG signals with head movement in visual-motor tasks.
4. Conclusions and Future Directions
This work relates to the implementation of an HMI system realizing a binary controller; in the case study, this is applied to switching a lamp system on and off according to the electrical brain activity due to head turning movements. The main purpose of the approach relates to two applications. In the first place, it may support workers' safety in the workplace, with special reference to the driving context. Specifically, head position and rotation are strongly associated with a driver's attention during a driving task [70]. The recognition of head movements when the participant is subjected to visual stimuli supports the development of the automatic safety devices with which the next generation of vehicles will be equipped to avoid accidents. In general, the detection of a driver's anomalous movements may prevent accidents. Additionally, the HMI system described here may represent a preliminary application to support the development of assistive devices to improve the mobility of physically disabled people who are not able to use their arms or legs, for example, to turn or control motorized wheelchairs or vehicles or to use a PC. The intention is to implement an HMI system that allows interaction with devices directly driven by the patient's head movements. Most work carried out so far adopts the EEG to monitor brain activity in cases of a participant's movement intention [71], for tracking eye movement [72], or uses a gyroscope to drive external components [73]. The approach adopted here represents a novel contribution in this field. Its main purpose is to demonstrate the feasibility of using human signals as input to the HMI to switch external devices on and off. The results show the feasibility of identifying left or right human head movements by a nonlinear input–output function recording brain signals from three channels of an EEG cap. The correlation is quite relevant and, for our set of 22 participants, it is greater than 0.75 in 68% of the experiments.
The proposed approach consists of applying two consecutive NNs: firstly, a TDNN; subsequently, a PRNN, providing promising outcomes in the prediction of movements acquired from participants' EEG signals. The accuracy of the classifier, computed as the ratio of correct predictions to total examples, has a mean value of 88%. In addition, the low computational time required by the TDNN and PRNN allows for their application in a real-time context.
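The two-stage idea, a time-delay stage feeding a pattern recognition stage, together with the accuracy metric, can be illustrated schematically. This is a minimal sketch in which a time-delay embedding plus a small scikit-learn network stands in for the TDNN–PRNN pair; the window length, layer sizes, and synthetic data are assumptions and not the architecture identified in this work.

```python
# Schematic sketch: a time-delay embedding turns each EEG sample into a window
# of past samples (the role of the TDNN input), and a small neural network
# classifies the windows (the role of the PRNN). Accuracy is the ratio of
# correct predictions to total examples.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def time_delay_embed(x, delays):
    """Stack each sample with its `delays` predecessors:
    (n, c) -> (n - delays, c * (delays + 1))."""
    return np.hstack([x[d:len(x) - delays + d] for d in range(delays + 1)])

rng = np.random.default_rng(2)
n, channels, delays = 1200, 3, 5         # 3 EEG channels, 5-sample window
labels = rng.integers(0, 2, n)           # 0 = left, 1 = right (synthetic)
x = rng.normal(size=(n, channels)) + 0.7 * labels[:, None]
X = time_delay_embed(x, delays)
y = labels[delays:]                      # label of the window's last sample

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=800, random_state=0)
net.fit(X_tr, y_tr)
accuracy = (net.predict(X_te) == y_te).mean()   # correct / total
```

The sketch only conveys the data flow; the networks used in this work were identified on real EEG recordings, and their reported 88% mean accuracy refers to that setting.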
The main achievement of this work is the reliability of the controller that has been realized. The tests, in which the classifier was applied to classify the user's movements and, consequently, to control the light system by monitoring the EEG signals, show a relevant capability of the binary controller to identify the subject's intention to move the head to the right or left side. On the other hand, a drawback may be recognized in the customization of the classifier: when the TDNN–PRNN system is identified on one participant, it cannot be applied to another participant. Moreover, several issues remain open. The first relates to the presence of artifacts in the EEG signal, which may affect the overall identification process. In fact, although classic filtering has been adopted, electrical stimuli coming from different sources (for example, the effect of the lamp light on the participant's eyes) may still be present. However, as far as the effects of the light on the participant's eyes are concerned, it is encouraging to observe that the controller works quite well with no need for a stimulus, since the lamps are switched on by the controller. In addition, reasonable results are also obtained with closed eyes.
The control results achieved in this work can also be considered an important milestone in the development of humanoid neck systems. Focusing on the neck element, several humanoid neck mechanisms have been developed by different researchers, specifically soft robotic necks. Soft robotics is emerging as a solution to many problems in robotics, such as weight, cost, and human interaction. In this respect, a novel tuning method for a fractional-order proportional derivative controller has been proposed with applications to the control of a soft robotic neck [74]. A soft robotic cable-driven mechanism, with the purpose of later creating softer humanoid robots that meet the characteristics of simplicity, accessibility, and safety, has been proposed [75]. The complexity of the models required for control has increased dramatically, and geometrical model approaches, widely used to model rigid dynamics, are not sufficient to model these new hardware types [76].
Admittedly, better results could have been obtained with a simple, cheaper gyroscope positioned on the head. It is therefore important to underline that the goal of our study is understanding the association between electrical brain activity and a specific action. In fact, the electrical activity of the muscles in the neck area is 10–100 times stronger in magnitude than the electrical activity of the brain; thus, despite the techniques used to reduce artifacts, there is a high probability that the developed system takes muscle activity as a cue for the classification task.
However, as noted before, we believe that such a cue could be useful in the first phases of machine learning in degenerative diseases that reduce human mobility, while mobility is still possible. A further step, which could not be achieved with an accelerometer or a gyroscope, is to verify such an association when the actual movement is not performed and the participant is "just thinking" about making a movement. Further work could address the contribution of the EEG versus the electromyogram (EMG), with parallel monitoring of the two systems to verify the effective information contribution of the EEG. Other studies [77] have shown that this further step is possible. A study related to foot motor imagery has been performed [78]; this case study focused on the post-imagery beta rebound to realize a brain switch using just the EEG channel. An analogous approach, involving the creation of an asynchronous BCI switch using imaginary movement, was designed in [79]; the focus of that work was to demonstrate that the 1–4 Hz feature basis has the minimal power to discriminate the voluntary movement-related potentials. An issue of EEG-based brain switches is the asynchronous detection of control and idle states. A P300-based threshold-free brain switch to control an intelligent wheelchair was proposed in [80]; experiments on this new approach demonstrated the effectiveness of the proposed methods, also reporting possible applications in participants with spinal cord injuries.
Future developments of this work will follow three directions. First, further efforts should be devoted to improving the reliability of the data acquired by the EEG, enhancing filtering while retaining a rather cheap and noninvasive architecture, so that a practical future implementation is feasible. While ICA is one of the most widely used artifact removal techniques in this context, recent work based on "ad hoc" heuristics that detect influential independent components shows better performance [55]. Secondly, the generalization of the identification of the controller to several participants rather than one will be investigated, carrying out an additional cluster analysis on participants. Finally, and probably most important as a future goal, is the application of the method in a real application context. Two applications are envisaged. The first relates to the automotive context, as recently investigated [21]. The second relates to identification based on the idea of the movement rather than on the movement itself, as this final function could have important applications for severely disabled people.