 
 
Article

Development of an EEG Headband for Stress Measurement on Driving Simulators †

by
Antonio Affanni
1,*,
Taraneh Aminosharieh Najafi
1 and
Sonia Guerci
2
1
Polytechnic Department of Engineering and Architecture, University of Udine, 33100 Udine, UD, Italy
2
Eurisoft S. P., 33010 Tavagnacco, UD, Italy
*
Author to whom correspondence should be addressed.
This paper is an extended version of our paper “Design of a low cost EEG sensor for the measurement of stress-related brain activity during driving”, published in the Proceedings of the IEEE International Workshop on Metrology for Automotive (MetroAutomotive), Bologna, Italy, 1–2 July 2021.
Sensors 2022, 22(5), 1785; https://doi.org/10.3390/s22051785
Submission received: 21 December 2021 / Revised: 16 February 2022 / Accepted: 18 February 2022 / Published: 24 February 2022

Abstract:
In this paper, we designed from scratch, realized, and characterized a six-channel wearable EEG headband for the measurement of stress-related brain activity during driving. The headband transmits data over WiFi to a laptop, and the rechargeable battery lasts 10 h of continuous transmission. The characterization showed a measurement error of 6 μV in reading the EEG channels, a bandwidth in the range [0.8, 44] Hz, and a resolution of 50 nV obtained by exploiting an oversampling technique. Thanks to the full metrological characterization presented in this paper, we provide important information regarding the accuracy of the sensor because, in the literature, commercial EEG sensors are often used even though their accuracy is not provided in the manuals. We set up an experiment using the driving simulator available in our laboratory at the University of Udine; the experiment involved ten volunteers who had to drive in three scenarios: manual, autonomous vehicle with a “gentle” approach, and autonomous vehicle with an “aggressive” approach. The aim of the experiment was to assess how autonomous driving algorithms impact EEG brain activity. To our knowledge, this is the first study to compare different autonomous driving algorithms in terms of drivers’ acceptability by means of EEG signals. The obtained results demonstrated that the estimated power of the beta waves (related to stress) is higher in manual driving than with either autonomous driving algorithm, whether “gentle” or “aggressive”.

1. Introduction

Over the last few decades, with the rapid growth of technology and the evolution of Artificial Intelligence (AI), computers have become more pervasive, not only in industry, but also in offices and homes. The words “smart” and “intelligent” are commonly used today to label digital devices. Smartphones, once considered luxury items, have become a necessity for many. Similarly, smart and autonomous cars will lead the automobile markets in the near future. They can have different levels of Advanced Driver Assistance Systems (ADASs), from no control (Level 0) to fully autonomous vehicles (Level 5) [1]. They are expected to be safer and to reduce the traffic on the roads thanks to Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communications [2]. In order to have a smooth transition toward autonomous vehicles and, in general, to all AI devices and services, human–machine communication should be studied and developed as well. Recognizing human emotions, experiences, and levels of comfort and stress while using AI devices or services can help calibrate the functions to be performed and the decisions to be made by the AI accordingly. For example, if autonomous vehicles consistently make automated choices that match passengers’ expectations, they can create a higher level of trust, which is fundamental to promoting their acceptance [3]; failing to do so would instead lead to mistrust and stress.
Stress is a physiological and psychological reaction to frightening situations. Normally, it occurs when the perceived events are different from the expected ones [4]. Brain neurons release many chemical messengers called neuropeptides in response to stress [5]. As a result of this action, the secretion of a series of hormones, including adrenaline and cortisol, is enhanced to prepare the individual for the fight or flight response by giving them extra strength and speed to either fight or run away from danger. When the fight or flight response begins, the Autonomic Nervous System (ANS) is activated, causing instantaneous and strong changes in the body. The ANS controls various automatic routine activities of the body including digestion, heartbeat, blood pressure, and body temperature. The ANS manages the fight or flight response of the body through two branches: the sympathetic and parasympathetic nervous systems. The sympathetic nervous system initiates the fight or flight response as an automatic reaction to prepare the body to face the possible threat or to withstand possible pain. The parasympathetic nervous system is responsible for moving the body back to its balanced state after the termination of stressful situations. During the fight or flight response, the sympathetic nervous system increases several blood factors such as flow, pressure, sugar, and fats; moreover, it tenses muscles, dilates pupils, and increases the breathing rate, perspiration, and brain neural activities [6].
In order to measure stress, the mentioned physiological changes can be evaluated through wearable biosensors. Various methods and biological sensors have been proposed and presented in the literature. In a previous study [7], we presented our designed Skin Potential Response (SPR) sensor, which is able to measure the electrical activity of the sympathetic nervous system on the palms, where sweat glands are situated, based on the Electro-Dermal Activity (EDA) principle. In another study [8], we presented an adaptive filter algorithm to remove hand movement artifacts from SPR signals. In [9], we monitored the stress level of drivers during manual and autonomous driving scenarios on a professional simulator using our designed SPR sensor. In [10], the authors measured EDA signals to identify stress and anger in 20 subjects while driving on a simulator. In [11], pupil diameter signals were employed for stress measurement of drivers during a simulated driving experiment. In [12], several bio-signals such as the Electrocardiogram (ECG), Electromyogram (EMG), Galvanic Skin Response (GSR), Heart Rate (HR), and Respiration Rate (RESP) were evaluated with Machine Learning (ML) algorithms to classify stress in drivers. The study concluded that the ECG achieved the highest stress classification accuracy.
The brain is the core of the nervous system; therefore, complex patterns of neural activities packed with valuable physiological information can be recorded from the surface of the head by Electroencephalogram (EEG) sensors. EEG measures, on the scalp, the currents that flow within brain neurons. Neural oscillations, or brain waves, are the result of the synchronized electrical activity of large groups of neurons and are generally categorized by their frequency, as shown in Table 1 [13,14,15].
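As a minimal illustration of the band categorization, the conventional frequency boundaries can be encoded as a small classifier. The edges used below (4, 8, 13, and 30 Hz) are common textbook conventions and may differ slightly from the exact values of Table 1:

```python
# Sketch: map a frequency (Hz) to the conventional EEG band name.
# Band edges are common conventions, not necessarily those of Table 1.

def eeg_band(freq_hz: float) -> str:
    """Return the conventional EEG band for a frequency in Hz."""
    if freq_hz < 4:
        return "delta"
    elif freq_hz < 8:
        return "theta"
    elif freq_hz < 13:
        return "alpha"
    elif freq_hz < 30:
        return "beta"
    else:
        return "gamma"

print(eeg_band(10))   # alpha
print(eeg_band(20))   # beta (the stress-related band analyzed in this work)
```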
The electrical signals detected by the scalp electrodes are very weak, usually from 10 μV to 100 μV, because between the electrodes and the neuronal layers there are the skin, the skull, and several other layers that significantly attenuate the signals. Electrode placement on the scalp is defined by the International 10/20 Standard [16]. Electrodes are labeled according to the different brain areas: Frontal (F), Central (C), Temporal (T), Parietal (P), and Occipital (O); A and M denote the ear and mastoid electrodes, respectively, which are usually used as references. The left side of the head is indicated by odd numbers and the right side by even numbers [13]. Electrodes and their placement are shown in Figure 1.
The EEG signal can be distorted, in both amplitude and shape, by unwanted physiological signals such as cardiac and muscle activities, eye movements and blinks, or by external sources such as AC power line noise (50/60 Hz), impedance fluctuations, cable movements, and broken wires. These distortions are called artifacts [13]. Artifacts from external sources can be minimized with more precise experimental setups and strict recording procedures, whereas physiological artifacts are usually removed or corrected from the acquired signal during the pre-processing steps. The motion artifact that arises on low-level bio-signals is a well-known problem [17,18,19] when the subject is performing physical activity. Various artifact-removal techniques have been presented in the literature, and different methods are ideal for different applications, but Independent Component Analysis (ICA)-based algorithms are the most popular because they can deal with almost all types of EEG signal artifacts [20].
EEG sensors have multiple advantages: they are non-invasive, low cost, and easy to set up; they acquire real-time data with high temporal resolution; and they are known to be able to detect different brain activities, mental states, and emotions of an individual, such as stress [15,21,22,23], attention [24], drowsiness [25,26,27,28], confusion [29], focused, unfocused, and drowsy states [30], joy, anger, sadness, and pleasure [31], and happiness, sadness, and relaxation [32]. The use of EEG to analyze drivers’ stress has attracted the attention of researchers in recent years, along with the development of autonomous cars. In [33], the study proposed a combined fuzzy and case-based reasoning (Fuzzy-CBR) classification approach to identify the stressed or relaxed states of drivers using EEG signals. The proposed method scored a classification accuracy of 79%. In the research undertaken in [34], EEG and ECG signals, in addition to electric vehicle data, were acquired from 40 drivers during real driving tasks to classify the drivers’ stress level. It was shown that the stress level in the drivers was affected not only by environmental conditions such as the road, traffic, and driving duration, but also by individual patterns. In [35], the researchers used three Machine Learning (ML) algorithms, Support Vector Machine (SVM), Neural Network (NN), and Random Forest (RF), to classify EEG signals acquired from 50 subjects while driving in stressful and calm simulated driving setups. An overall accuracy of 97.9% was achieved by a decision fusion method that combined the three models at the decision level. In [36], the authors used the real road driving information of 28 subjects to train an SVM model. The training data were labeled by EEG recordings of the drivers, which were divided into “normal” and “overload” categories. The study achieved a 74.3% classification accuracy by combining various driving and vehicle data.
When measuring stress in individuals, it is of paramount importance to collect data through different methodologies of bio-signal acquisition. In the past, for instance [37], we developed a sensor system that integrated the ECG and EDA signals with a unique timebase, thanks to the fact that we designed all the sensors from scratch: in this way, we had full control of the raw data, and we could align them with an accuracy of up to 50 ms. Using the same approach, here we designed from scratch an EEG headband that allows full control of the raw data and the integration of EEG, ECG, and EDA with a unique timebase. With commercial sensors, instead, users cannot access the raw data, and the time alignment has poor accuracy, since every sensor sends data to its proprietary software, which does not interact with other proprietary control panels; thus, the user must manually insert markers in every graphical user interface, losing repeatability in the time alignment.
Beyond the great advantage of full control of the raw data and time alignment, the headband that we present has the advantage of a full characterization, described in Section 3.1. For most commercial EEG sensors, in fact, the specifications do not report the linearity of the sensor; sometimes, the resolution is reported in terms of the number of bits of the Analog to Digital Converter (ADC), but the quantization step size is not provided, so the resolution of the voltage readout is unknown. This means that the user cannot know how accurate the readout (and therefore, the subsequent signal processing) is. As an example, we report in Table 2 a comparison obtained from commercial devices’ technical specifications and from [38].
Table 2 shows that the presented sensor achieves high performance in terms of battery life, very high resolution, and high linearity. Moreover, the cost of the developed sensor is far lower than that of the commercial devices.
Even if linearity and/or measurement error are not provided for commercial devices, most studies in the scientific literature acquire EEG signals using commercial sensors, and the data are then processed with ML algorithms in order to classify the stress in drivers, especially using driving simulators. Some studies used other bio-sensors to categorize and label the acquired EEG signals [33,34] (again, we point out the importance of a precise time alignment between different sensors); some other studies relied on the drivers’ self-reports [35]; others classified the mental activity using an arbitrary threshold on the EEG signal level and labeled the categories on the basis of threshold crossing [36]. In this study, we adopted the analysis of the beta activities to identify the stress in drivers from the acquired EEG signals; this method is well known in the literature [39,40,41,42]. A few studies have designed and developed EEG sensors with various architectures. For example, in [43], the authors developed an EEG sensor using three prefrontal electrodes, evaluated the signal quality, removed ocular artifacts, and identified chronic stress using EEG dynamic features; however, no characterization of the sensor’s measurement error, as defined in [44], was provided.
In this study, we designed an EEG sensor from scratch, which provided us with more flexibility in the choice of electrode locations, montage, and references, which normally are not modifiable in commercial sensors. Then, we set up the measurement procedure to experimentally characterize the developed sensor, as described in Section 3.1, in order to quantify the measurement error on the acquired signals according to [45]; this aspect is very important in order to understand the performance of the sensor, and, as previously said, this quantity is often missing in the manuals of commercial devices. Finally, we employed our designed sensor to detect stress in subjects while driving on an advanced simulator setup, which included a movable platform with three degrees of freedom, a steering wheel, and pedals. Furthermore, in order to provide a more realistic simulation, our participants were equipped with a Virtual Reality (VR) headset.
This work is the extended version of the paper presented in [46]; in that paper, we showed a preliminary and partial characterization of the developed sensor, and we provided the results obtained from a single subject under test, in order to verify whether the headband was able to discriminate brain waves’ properties under different driving conditions. In the present paper, we provide the complete metrological characterization of the sensor, and we set up an extended experiment, which involved ten volunteers on a driving simulator. The volunteers had to cope with three different vehicles (one with manual driving, two with different autonomous algorithms) while the headband was acquiring the EEG signals during the tests.
The paper is organized as follows. Section 2 provides the design procedure of the six-channel EEG circuit and the description of the experimental setup to acquire signals from drivers on the simulator. Section 3 shows the experimental results obtained from the characterization and from the post-processed data captured during driving. In Section 4, we discuss the obtained results, and in Section 5, we draw the conclusions.

2. Materials and Methods

This section is organized into two parts: in the first part, we describe the design of the developed EEG headband with the considerations of the required technical specifications; in the second part, we describe the experimental procedure followed to acquire the EEG signals of ten volunteers during driving on a simulator.

2.1. EEG Headband Design

The developed headband (whose scheme is shown in Figure 2) has six dry comb electrodes located at the Fp1, Fp2, C3, C4, O1, and O2 positions according to the International 10/20 system, with a common reference montage. The quantity V_IN indicated in Figure 2 is the low-level differential voltage between each electrode and the reference electrodes M1 and M2. V_IN is properly conditioned by amplification and filtering according to the specifications described in Section 2.1.1; the high-level signal V_AD is acquired by the Analog to Digital Converter (A/D) on board a Digital Signal Processor (DSP) and is sent, via a Universal Asynchronous Receiver Transmitter (UART), to a WiFi module, which transmits the data to a laptop, where a custom Graphical User Interface (GUI), described in Section 2.1.3, was developed for data saving and real-time plotting of the signals.
The EEG headband is supplied by a single 850 mAh Lithium Polymer (LiPo) cell battery; since the headband current consumption is 85 mA (the main consumption is due to the WiFi module during data transmission), the chosen LiPo cell allows 10 h of continuous transmission: thus, the battery life is by far longer than typical EEG acquisitions, which are roughly 1 h long. A buck DC–DC converter (not shown in Figure 2) provides the +3.3 V supply voltage for the entire circuit; the reference voltage V_REF = 1.65 V is generated using a linear voltage reference and is applied to the M1 and M2 reference electrodes.
Figure 3 shows the printed circuit board of the sensor; the outer dimensions of the circuit are 75 mm × 65 mm; red rectangles and labels indicate the layout of the blocks in the circuit. In particular, the six-channel analog front-ends are located on the top layer, while the power supply, V_REF generation, and WiFi module are on the bottom layer, in order to separate the low-level signals from the digital signals.

2.1.1. Analog Section

Referring to Figure 2, the analog front-end of each channel measures low-level differential voltages and, through proper signal conditioning, provides a high-level voltage, which can be acquired by the A/D converter on board the DSP. As a first step, the common-mode DC voltage that may be present on the scalp must be removed, so a pair of passive first-order high-pass filters was placed at the circuit input stage. The input impedance of the filters was set to 100 MΩ in the passband; in this way, the loading uncertainty is lower than 1%, considering that the skin impedance is on the order of 1 MΩ. Connected to the filters’ output, there is the instrumentation amplifier (IA, MCP6N11 from Microchip), which was designed with a gain G_1 = 680; the IA output is connected to an active second-order low-pass filter (Sallen–Key topology) with gain G_2 = 6.6, followed by a passive first-order low-pass filter. The overall gain is G = G_1 · G_2 = 4488, so the EEG signals, in the range ±350 μV, are amplified to 3.3 V_PP.
The block named “DC Compensation” in Figure 2 compensates the DC non-idealities of the IA (such as the offset current and offset voltage); it performs the integration of the voltage difference V_IA − V_REF; thus, V_IA at steady state follows V_REF. This block in the feedback loop results in a high-pass filter with a single pole at the inverse of the integrator time constant τ_I [37,47]; this quantity was set by design equal to the time constant of the passive high-pass filters, τ_HP. The anti-alias, third-order, low-pass filter was designed to have three coincident poles with time constant τ_LP. The transfer function of the analog section for each channel results, in the Laplace domain, as:
V_{AD}(s) = V_{REF} + G_1 G_2 \, \frac{\tau_{HP}^2 \, s^2}{(1 + \tau_{HP}\, s)^2 \, (1 + \tau_{LP}\, s)^3} \, V_{IN}(s)
Designing τ_HP = τ_I = 0.33 s and τ_LP = 2.7 ms, the analog section behaves as a bandpass amplifier having a lower cutoff frequency of 0.8 Hz (lower slope +40 dB/dec), an upper cutoff frequency of 44 Hz (upper slope −60 dB/dec), and a center-band gain of 72 dB.
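The frequency response of this transfer function can be checked numerically; the sketch below builds it from the stated component values (G_1 = 680, G_2 = 6.6, τ_HP = 0.33 s, τ_LP = 2.7 ms) with SciPy. Note that the exact −3 dB frequencies found this way depend on how the cutoff is defined, so they may differ somewhat from the quoted 0.8 Hz and 44 Hz:

```python
import numpy as np
from scipy import signal

G1, G2 = 680, 6.6
tau_hp, tau_lp = 0.33, 2.7e-3

# H(s) = G1*G2 * tau_hp^2 s^2 / [(1 + tau_hp s)^2 (1 + tau_lp s)^3]
num = G1 * G2 * tau_hp**2 * np.array([1.0, 0.0, 0.0])
den = np.polymul(np.polymul([tau_hp, 1], [tau_hp, 1]),
                 np.polymul(np.polymul([tau_lp, 1], [tau_lp, 1]), [tau_lp, 1]))
sys = signal.TransferFunction(num, den)

f = np.logspace(-2, 4, 20000)             # frequency grid, Hz
_, mag, _ = sys.bode(w=2 * np.pi * f)     # magnitude in dB

peak = mag.max()                          # center-band gain
band = f[mag >= peak - 3.0]               # -3 dB passband
print(f"center-band gain ≈ {peak:.1f} dB")
print(f"-3 dB passband ≈ [{band[0]:.2f}, {band[-1]:.1f}] Hz")
```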

2.1.2. A/D Conversion, DSP, and Data Transmission

The signals V_AD are connected to the analog inputs of the DSP (DSPIC30F3013 from Microchip), which operates at 8 MIPS and has an on-board 12 bit A/D converter. Considering the designed gain discussed in the previous subsection, we would obtain a resolution on V_IN of 3.3/(G · 2^12) ≈ 200 nV, approximately. In order to obtain a higher resolution, we oversampled the signal by a factor of 16. The desired sampling rate for transmitting signals was 200 Sa/s; we oversampled at a rate of 3200 Sa/s and built each output datum as the sum of 16 samples; in this way, the signal range was increased by a factor of 16, and the noise was increased by a factor of 4. Naming S the signal and N the noise, we have in fact:
S = \sum_{i=1}^{16} S_i \approx 16\, S_i, \qquad N = \sqrt{\sum_{i=1}^{16} N_i^2} \approx 4\, N_i \quad\Rightarrow\quad \frac{S}{N} \approx 4\, \frac{S_i}{N_i}
In this way, we increased the signal-to-noise ratio by a factor of four, corresponding to an increase of the resolution of 2 bit, thus obtaining an effective A/D resolution of 14 bit corresponding to 50 nV, approximately.
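This behavior is easy to verify numerically. The sketch below, which assumes white Gaussian noise and a signal that is slowly varying relative to the 3200 Sa/s oversampling rate (here modeled as a constant over each block of 16 samples), sums blocks of 16 samples and measures the resulting SNR gain of ≈4 (2 extra bits):

```python
import numpy as np

rng = np.random.default_rng(0)
OSR = 16                       # oversampling ratio used by the headband
n_blocks = 100_000

# Slowly varying signal (modeled as a constant) plus white noise,
# sampled OSR times faster than the 200 Sa/s output rate.
signal_level, noise_std = 1.0, 1.0
raw = signal_level + rng.normal(0.0, noise_std, size=n_blocks * OSR)

# Each output datum is the sum of 16 consecutive samples.
summed = raw.reshape(n_blocks, OSR).sum(axis=1)

snr_raw = signal_level / noise_std       # per-sample SNR
snr_sum = summed.mean() / summed.std()   # SNR after summing
print(f"SNR improvement ≈ {snr_sum / snr_raw:.2f}x, "
      f"i.e. +{np.log2(snr_sum / snr_raw):.2f} bits")
```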
The 14 bit data were then sent at 200 Sa/s with a baud rate of 115,200 bps to the low-power WiFi module (USR IOT C216), which sends TCP packets to the laptop.

2.1.3. Software Description

We developed, using the .NET framework (a tool that allows fast and easy design of Graphical User Interfaces), a GUI (shown in Figure 4) responsible for the acquisition, real-time visualization, and saving of the data transmitted by the EEG headband. The GUI also implements an optional, second-order, IIR notch filter for the rejection of the power line frequency.
The GUI communicates with the EEG headband via TCP, extracts the data from the received packets, plots the six signals in real time, and allows the insertion of graphical markers with optional comments if the user needs to annotate the kind of stimuli received by the subject under test. On the left of the GUI, there are several controls where the user sets the SSID of the wireless network and the folder path where the data are saved. The button “acquire” starts the connection to the laptop and the real-time plot of the six traces.
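A second-order IIR notch of this kind can be sketched with SciPy's `iirnotch` design; the quality factor Q below is an assumed value (the paper does not specify it), chosen so that the notch is narrow enough to leave the EEG band essentially untouched:

```python
import numpy as np
from scipy import signal

fs = 200.0    # headband output sampling rate, Sa/s
f0 = 50.0     # power line frequency to reject, Hz
Q = 30.0      # quality factor (assumed; not specified in the paper)

# Second-order IIR notch filter, as used optionally in the GUI.
b, a = signal.iirnotch(f0, Q, fs=fs)

# Check the response: deep rejection at 50 Hz, near-unity gain in-band.
f, h = signal.freqz(b, a, worN=[10.0, 50.0], fs=fs)
print(f"|H(10 Hz)| = {abs(h[0]):.3f}")   # EEG content preserved
print(f"|H(50 Hz)| = {abs(h[1]):.2e}")   # power line removed
```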

2.2. Acquisitions of EEG Signals on a Driving Simulator

We tested the developed headband by acquiring EEG signals from ten volunteers while driving on the driving simulator available in the BioSensLab at the University of Udine [48].
The driving simulator (Figure 5) is composed of a desktop PC, a moving platform with three degrees of freedom (P3 from DOFReality), a steering wheel with pedals (Logitech G29), and a Virtual Reality (VR) (Oculus Rift) headset [49,50,51]. As the simulation software, we used DriveSim20 by VI-Grade [52]; this software allows the insertion of obstacles on the road, the change of vehicle dynamics, and the replay of saved simulations.

2.2.1. Data Acquisition

Ten volunteers (seven males, three females, aged 29 ± 5) from the University of Udine participated in the experiment; their average driving experience was 11 ± 5 years with manual-transmission cars. We chose participants among students and employees of our institution who had no previous experience with driving simulators; in this way, we avoided previous VR exposure possibly affecting the response to the virtual driving experience. After the introduction to the EEG headband, VR headset, simulator platform, and the procedure of the experiment, they were invited to sit on the simulator platform wearing the developed EEG headband together with the VR headset and to perform the experiment as described below.
We prepared in advance the simulation of a 20 km-long free highway path in which we placed, every 2 km, a total number of 10 “challenges”. In each challenge, we placed a different sequence of Jersey barriers on the roadway, in order to simulate road works: the resulting challenges were different combinations of multiple lane changes, lane narrowing, multiple lane changes with narrow lanes, and so on; one example of the challenge is shown in Figure 6.
The experiment consisted of driving in three scenes, each with a different sequence of challenges, in order to mimic three different vehicle setups: manual driving (“Manual”), an autonomous vehicle with a “gentle” driving algorithm (“ADAS1”), and an autonomous vehicle with an “aggressive” driving algorithm (“ADAS2”). In the Manual session, the participants had to drive along the roadway respecting speed limits and keeping safely away from the Jersey barriers. In the ADAS1 session, the participants sat on the simulator while the autonomous vehicle performed the maneuvers with a “gentle” approach; in this condition, we limited the longitudinal acceleration to 8 m/s² and the lateral acceleration to 3 m/s²: in this way, the autonomous vehicle faces the challenges with a prudent driving style. In the ADAS2 session, the participants coped with an autonomous vehicle that faced the challenges with an “aggressive” approach; in this condition, the accelerations were limited only by the vehicle dynamics. Moreover, the ADAS2 setup was programmed to keep a very narrow distance between the vehicle and the Jersey barriers, with the aim of giving the subjects less confidence during the test. The sequences of challenges were different in the Manual, ADAS1, and ADAS2 sessions because, otherwise, the participants would have been able to predict the road ahead. Each participant encountered the Manual, ADAS1, and ADAS2 sessions in a randomized order. Since the path of each test was 20 km long and the average speed was approximately 120 km/h, the duration of each simulation was roughly 7–10 min.
The vehicle model simulated during the experiments was an off-road car, in order to provide the subjects with a vehicle with a low power-to-weight ratio; in this way, we discouraged users, in manual driving, from driving as in a race instead of as in normal highway driving.
During each session, the data were acquired by the EEG headband from each participant and transmitted to the GUI, on which the data were monitored in real time on the front panel, as shown in Figure 4, and stored in a text file with a 200 Sa/s sampling rate, as described in Section 2.1.3.

2.2.2. Data Pre-Processing

The acquired EEG data were contaminated with different types of artifacts during the experiment: artifacts due to the movement of the simulator platform, which may cause cable sway, and artifacts from subject movements, deriving from the electrical activity of facial and neck muscles or from eye movements, which are common during physical activities [53]. Other physiological artifacts such as eye blinks and the ECG can also contribute to the contamination of the brain signal during the experiment. All the acquired data were pre-processed with the EEGLAB toolbox for MATLAB [54] in order to detect and remove artifacts and reconstruct the original signal.
The pre-processing pipeline consisted of three steps: (1) filtering the data with a basic Finite Impulse Response (FIR) filter with passband edges of [1, 45] Hz; (2) removing artifacts with the Artifact Subspace Reconstruction (ASR) algorithm [55]; in this step, we reconstructed the portions of the signals affected by motion artifacts; (3) applying the Independent Component Analysis (ICA) method [56] using the RUNICA algorithm to decompose the data into independent components and consequently remove the non-brain components (such as eye blinks) from the data. Figure 7 shows an example of the raw (blue) signal and the pre-processed (red) signal after the mentioned steps on the FP1 channel of a participant during the ADAS1 session of the experiment; in particular, in Figure 7, the blue line shows an artifact due to the motion of the simulator platform during a lane change.
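The paper's pipeline runs in EEGLAB; a simplified sketch of steps (1) and (3) can be written with SciPy and scikit-learn on synthetic two-channel data (ASR, step (2), is omitted, FastICA stands in for RUNICA, and the artifact component is identified here by its high kurtosis, a common heuristic that is our assumption, not the paper's criterion):

```python
import numpy as np
from scipy import signal, stats
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
fs, dur = 200, 20                        # Sa/s, seconds
t = np.arange(fs * dur) / fs

# Synthetic sources: a 10 Hz "brain" rhythm and sparse blink-like spikes.
brain = np.sin(2 * np.pi * 10 * t)
blink = np.zeros_like(t)
blink[rng.choice(t.size, 20, replace=False)] = 8.0
blink = signal.lfilter(*signal.butter(2, 5, fs=fs), blink)  # smooth spikes

# Two channels, each a mixture of the sources (samples x channels).
X = np.column_stack([brain + 0.9 * blink, 0.5 * brain + 1.2 * blink])

# Step (1): zero-phase FIR bandpass with [1, 45] Hz passband edges.
taps = signal.firwin(401, [1.0, 45.0], pass_zero=False, fs=fs)
Xf = signal.filtfilt(taps, 1.0, X, axis=0)

# Step (3): ICA decomposition; drop the blink-like component and rebuild.
ica = FastICA(n_components=2, random_state=0, max_iter=1000)
S = ica.fit_transform(Xf)
S[:, np.argmax(stats.kurtosis(S, axis=0))] = 0.0  # highest-kurtosis comp.
X_clean = ica.inverse_transform(S)

corr = np.corrcoef(X_clean[:, 0], brain)[0, 1]
print(f"corr(cleaned channel 1, brain source) = {abs(corr):.2f}")
```

With clean synthetic sources the cleaned channel correlates strongly with the brain rhythm; real EEG requires the more robust EEGLAB tooling the paper actually uses.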
After artifact removal, we analyzed the signals, evaluating the spectral powers in the frequency bands delta, theta, alpha, beta, and gamma. The results are presented in Section 3.2.

3. Experimental Results

In this section, we present the experimental results obtained from two different experiments. The first is the metrological characterization of the performance of the sensor in terms of gain accuracy, linearity, and bandwidth, using sinusoidal signals as the reference input and measuring the sensor response at the output. The second presents the results obtained on real EEG traces acquired from the ten volunteers while driving on the simulator.

3.1. Metrological Characterization

In this subsection, we characterize the bandwidth and the linearity of the EEG headband experimentally. We connected a waveform generator (Agilent 33220A) to a characterized attenuator, and we acquired the quantity V A D and the output of the generator with an oscilloscope (Keysight DSOX2022A).
Varying the amplitude of the signal, we characterized the gain and linearity (Section 3.1.1); varying the frequency, we characterized the analog front-end bandwidth (Section 3.1.2) and the overall system bandwidth (Section 3.1.3). In particular, the developed GUI allows the insertion of a second-order IIR digital notch filter centered on the power line frequency (50 Hz), so in Section 3.1.3, we quantify the power line rejection and show the overall bandwidth of the system composed of hardware and software.
After acquisition, the data were post-processed according to the “Guide to the expression of Uncertainty in Measurements” (GUM) [45] in order to evaluate the accuracy of the EEG headband.

3.1.1. Linearity Characterization and Resolution

For the linearity and gain accuracy characterization, the output of a waveform generator provided a sinusoidal voltage V_G with 47 linearly spaced amplitudes from 0.1 to 2.4 V_PP, with a step size of 50 mV and a frequency near the center of the bandwidth of the EEG sensor, i.e., 5 Hz.
We connected a resistive attenuator α (0.1% tolerance resistors) to the generator output; the attenuator output impedance was set to 1 MΩ to simulate skin behavior, with α = 0.303 × 10⁻³ (u(α) = 3 × 10⁻⁶). The output of the attenuator provides the input voltage (referring to Figure 2) V_IN = α·V_G, variable from 30 μV_PP to 730 μV_PP with a 15 μV_PP step size; the input of each channel of the EEG sensor was connected to the attenuator. The uncertainty on the input voltage was then calculated according to the GUM:
u(V_{IN}) = \sqrt{\left[\alpha \cdot u(V_G)\right]^2 + \left[V_G \cdot u(\alpha)\right]^2}
In order to estimate the headband accuracy, we evaluated the RMS values of the traces acquired by the oscilloscope on a variable time interval corresponding to 25 periods of the input signal V_G; for each input amplitude, we acquired 20 RMS values of V_G and V_AD to perform the type A estimation of uncertainty. Hence, with this acquisition setup, we processed a 47 × 20 matrix for each amplified EEG signal V_AD.
Then, using the least-squares method, we characterized the overall gain G = G_1 · G_2, using as the input the voltage vector (V_IN) and as the output the mean of the 20 RMS readings (V̄_AD) calculated for each amplitude of the input signal [37,47]:
G = \frac{\langle V_{IN} \, \overline{V_{AD}} \rangle - \langle V_{IN} \rangle \langle \overline{V_{AD}} \rangle}{\langle V_{IN}^2 \rangle - \langle V_{IN} \rangle^2} \quad (4)

where \langle \cdot \rangle denotes the average over the 47 test amplitudes.
We evaluated the combined uncertainty of the output voltage u ( V A D ¯ ) for each channel taking into account the type A estimation u A (as the standard deviation of the sample means over the 20 readings of the RMS) and the type B estimation u B extracted from the manual of the oscilloscope:
u(\overline{V_{AD}}) = \sqrt{\left[ u_A(\overline{V_{AD}}) \right]^2 + \left[ u_B(\overline{V_{AD}}) \right]^2} \quad (5)
We recall that the vector V_IN was composed of 47 test amplitudes and that the vector \overline{V_{AD}} was obtained by calculating the mean of 20 readings for each output amplitude. Denoting with subscript i the input and output voltages at the i-th amplitude and using (3), (4), and (5), we estimated the gain uncertainty as:
u(G) = \sqrt{\sum_{i=1}^{47} \left[ \frac{\partial G}{\partial V_{IN,i}} \, u(V_{IN,i}) \right]^2 + \sum_{i=1}^{47} \left[ \frac{\partial G}{\partial \overline{V_{AD,i}}} \, u(\overline{V_{AD,i}}) \right]^2} \quad (6)
After characterization, the gains of the six channels resulted in G = 4210 ± 35 . Using the gain calculated in (4), we evaluated the deviation from the linear regression. In Figure 8, we show the linearity error normalized to the Full Scale (FS) of each input.
Referring to Figure 8, the non-linearity for the six channels resulted in roughly 0.8 % FS, corresponding to 6 μ V ; error bars represent the uncertainty estimated in (3) and (5).
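The least-squares estimate of (4) can be sketched on synthetic data with a known slope (the noise level and seed below are illustrative, not the measured values):

```python
import numpy as np

def ls_gain(v_in, v_ad):
    """Least-squares slope of v_ad vs. v_in, as in (4):
    covariance of the data over the variance of the input amplitudes."""
    x, y = np.asarray(v_in, float), np.asarray(v_ad, float)
    return (np.mean(x * y) - x.mean() * y.mean()) / (np.mean(x ** 2) - x.mean() ** 2)

rng = np.random.default_rng(0)
v_in = np.linspace(30e-6, 730e-6, 47)             # 47 test amplitudes
v_ad = 4210.0 * v_in + rng.normal(0.0, 1e-4, 47)  # synthetic mean RMS readings
g_hat = ls_gain(v_in, v_ad)                       # close to the true slope 4210
```

With noiseless data the estimator returns the slope exactly; with noise it converges as the number of amplitudes grows.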
As reported in Section 2, the A/D resolution was increased to 14 bit using the oversampling technique; since the full scale of the sensor was ± 350 μ V , this corresponds to 43 n V .
The noise voltage density of the chosen instrumentation amplifier was, by the datasheet, 35 nV/\sqrt{Hz}, so integrating over the sensor bandwidth, the analog circuitry introduces the noise σ_A = 230 nV (comparable to the resolution without oversampling); combining this quantity with the quantization noise (i.e., σ_Q = 3.3 / (G \cdot 2^{12} \cdot \sqrt{12}) ≈ 60 nV), we obtained the overall noise introduced by the sensor as \sqrt{σ_A^2 + σ_Q^2} ≈ 240 nV without oversampling. Taking into account the improvement shown in (2), we can quantify the noise after oversampling as 60 nV, and the Effective Number of Bits (ENOB) of the sensor results in ENOB = 13.51 bits.
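This noise budget can be checked numerically; the factor-4 reduction below assumes 16× oversampling for the two extra bits, which is our reading of the improvement in (2):

```python
import math

G = 4210                                        # overall channel gain
sigma_a = 35e-9 * math.sqrt(44.0 - 0.8)         # amplifier noise over the [0.8, 44] Hz band
sigma_q = 3.3 / (G * 2 ** 12 * math.sqrt(12))   # 12-bit quantization noise, 3.3 V range
sigma_tot = math.hypot(sigma_a, sigma_q)        # combined noise before oversampling (~240 nV)
sigma_os = sigma_tot / 4                        # after assumed 16x oversampling (~60 nV)
lsb_14 = 700e-6 / 2 ** 14                       # 14-bit LSB over the +/-350 uV full scale (~43 nV)
```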

3.1.2. Analog Bandwidth Characterization

For the characterization of the analog front-end bandwidth, the waveform generator provided a sinusoidal voltage with amplitude 2.4 V_PP (corresponding to V_IN = 730 μV_PP) and frequencies spanning the range [0.1, 1000] Hz in 81 logarithmically spaced steps; the output of the generator was connected to the same attenuator described in the previous subsection.
With the same procedure shown in Section 3.1.1, we acquired the outputs for each frequency step in a time window corresponding to 10 periods of the input signal; in this way, we computed the gain G ( j ω ) on an integer number of periods (coherent sampling).
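The coherent-sampling gain estimate can be sketched as follows (an ideal channel and an illustrative acquisition rate are assumed):

```python
import numpy as np

def rms(v):
    return np.sqrt(np.mean(v ** 2))

def gain_estimate(freq, gain, fs=100_000.0, periods=10):
    """|G| at one frequency as the RMS ratio output/input, computed over
    an integer number of periods (coherent sampling, no leakage)."""
    n = int(round(periods * fs / freq))   # samples covering exactly `periods` cycles
    t = np.arange(n) / fs
    v_in = 365e-6 * np.sin(2 * np.pi * freq * t)
    v_out = gain * v_in                   # ideal channel: pure scaling
    return rms(v_out) / rms(v_in)
```

Truncating the record to a whole number of periods is what keeps the RMS ratio unbiased at every test frequency.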
Figure 9 shows the gain modulus of all six channels; the upper cutoff frequency was 44 Hz, while the lower cutoff was 0.8 Hz.

3.1.3. Overall Bandwidth Characterization

As described in the previous section, the GUI allows the optional insertion of a second-order IIR notch filter to remove the power line disturbance. In order to characterize its effects, we logged with the GUI the data obtained during the characterization described in Section 3.1.2 and estimated the bandwidth of the acquired data with the notch filter enabled. Figure 10 shows the resulting frequency response; from the figure, it is evident that the power line component (at 50 Hz) was attenuated by 30 dB thanks to the digital notch filter.
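A second-order IIR notch of this kind can be sketched with the standard biquad recipe at the headband's 200 Hz sample rate (the quality factor below is an assumption; the paper does not report the filter coefficients):

```python
import numpy as np

def notch_coeffs(f0, fs, q=30.0):
    """Second-order IIR notch (biquad) centered on f0 Hz at sample rate fs."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1.0, -2 * np.cos(w0), 1.0])
    a = np.array([1.0 + alpha, -2 * np.cos(w0), 1.0 - alpha])
    return b / a[0], a / a[0]

def mag_db(b, a, f, fs):
    """Magnitude response in dB at frequency f."""
    z1 = np.exp(-2j * np.pi * f / fs)       # z^-1 evaluated on the unit circle
    num = b[0] + b[1] * z1 + b[2] * z1 ** 2
    den = a[0] + a[1] * z1 + a[2] * z1 ** 2
    return 20 * np.log10(np.abs(num / den))

b, a = notch_coeffs(50.0, 200.0)            # 50 Hz notch at the 200 Hz sample rate
```

The ideal biquad rejects the center frequency almost completely while leaving the EEG band essentially untouched; the 30 dB figure reported above is the attenuation observed on the real, quantized implementation.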

3.2. Results of EEG Signals on a Driving Simulator

The distance covered in each experiment session was 20 km; this length was designed to obtain an approximate duration of 10 min per session, considering an average speed of 120 km/h. Depending on the speed of each driver, the duration of the acquired data was in the range of 7–10 min. Figure 11 shows the time–frequency analysis on the six channels during the three sessions for one of the drivers.
From a qualitative point of view, the ADAS1 session was characterized by low spectral components in every frequency band for the entire duration of the session; this may represent low stress during smooth autonomous driving. In the ADAS2 session, instead, there were some higher components in the theta and alpha bands, as well as some significant components in the beta band; this may be indicative of a less relaxed state of the driver during ADAS2. As a preliminary analysis, we observed that the ADAS2 session manifested slightly higher EEG activity with respect to ADAS1. During the Manual session, instead, it was possible to see strong components in the beta band, which were almost absent in ADAS1 and weaker in ADAS2.
The EEG beta power is known to be highly correlated with the cortisol hormone level in the body, which increases in response to stress [57]. Moreover, beta power is known to be an indicator of attention, concentration, and anxiety [42,58]. Numerous studies selected the power increase of the beta waves to identify stress [59,60,61,62] and the power decrease of the beta waves to identify mental fatigue and drowsiness [63,64,65]. For these reasons, in the present paper, we focused our attention on the power of beta waves to identify stress. To quantify the beta power, we evaluated the Power Spectral Density (PSD) of the signals over a time window of duration 4 s, using as a window function the von Hann window with 50% overlap between windows.
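A minimal sketch of this beta-power estimate, assuming the headband's 200 Hz sample rate (the authors' exact processing code is not given):

```python
import numpy as np

def beta_power(x, fs=200):
    """Beta-band (12-30 Hz) power from an averaged periodogram:
    4 s von Hann windows with 50% overlap (Welch's method)."""
    n = int(4 * fs)                                # 4 s window
    w = np.hanning(n)
    segs = [x[i:i + n] for i in range(0, len(x) - n + 1, n // 2)]
    psd = np.mean([np.abs(np.fft.rfft(w * s)) ** 2 for s in segs], axis=0)
    psd /= fs * np.sum(w ** 2)                     # one-sided PSD scaling
    psd[1:-1] *= 2
    f = np.fft.rfftfreq(n, 1 / fs)
    band = (f >= 12) & (f <= 30)
    return np.sum(psd[band]) * (f[1] - f[0])       # integrate PSD over the band

# Sanity check: a pure 20 Hz tone of amplitude sqrt(2) has power A^2/2 = 1
t = np.arange(20 * 200) / 200
p = beta_power(np.sqrt(2) * np.sin(2 * np.pi * 20 * t))
```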
Figure 12 shows the powers in the beta band during the three sessions relative to the spectrograms in Figure 11; as can be seen, the beta power was low during ADAS1, slightly higher during ADAS2, and significantly higher during Manual. This suggests that the stress on the driver was by far higher in the Manual session. This result is consistent with our previous findings [9], where the EDA activity and mean heart rate (stress indicators) were shown to be higher in manual driving with respect to autonomous driving. In Table 3, we report the mean values of the beta power on the six channels for all the drivers during the three sessions of the experiment.
Figure 13 shows the box plot of the data reported in Table 3. Markers represent the medians, boxes the 25th to 75th percentiles, and whiskers the 5th to 95th percentiles. In the figure, a quite evident tendency is visible in almost all the channels: low beta power during ADAS1, medium power in ADAS2, and high power in Manual. The tendency of increasing beta power from ADAS1 to ADAS2 or from ADAS1 to Manual was expected by design, since ADAS1 was configured to provide confidence to the users, putting them in a relaxed state thanks to a prudent autonomous vehicle; on the other hand, we expected that ADAS2 would be the most stressful session for the users, since we configured ADAS2 to perform sudden maneuvers with high lateral acceleration and closer distances from the Jersey barriers. However, in Figure 13, we observe a slight trend of increasing beta power from ADAS2 to Manual; this would suggest that the users during Manual driving had to bear a higher mental load with respect to ADAS2, because they had to be attentive to the barriers and follow the road cautiously to avoid accidents. This result is in accordance with other studies, where a higher mental load provoked by harder driving tasks was observed by means of EEG activity [66]. Since several box plots in Figure 13 overlap, we must assess whether these differences are statistically significant.
In order to quantify the statistical significance of the differences between ADAS1, ADAS2, and Manual, we performed the non-parametric Wilcoxon signed-rank test; we report the probability of the null hypothesis in Table 4. If the probability is lower than 0.05, there is a statistically significant difference between the two experimental setups.
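For illustration, a normal-approximation sketch of the test (a real analysis would use an exact implementation such as SciPy's; tie correction is omitted):

```python
import numpy as np
from math import erfc, sqrt

def wilcoxon_p(x, y):
    """Two-sided Wilcoxon signed-rank p-value via the normal
    approximation (no tie correction; for illustration only)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    d = d[d != 0]                                   # drop zero differences
    n = len(d)
    ranks = np.argsort(np.argsort(np.abs(d))) + 1   # ranks of |d|
    w = ranks[d > 0].sum()                          # sum of positive-difference ranks
    mu = n * (n + 1) / 4
    sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return erfc(abs(w - mu) / sigma / sqrt(2))
```

A consistent shift between the paired sessions drives the rank-sum statistic away from its null mean and the p-value toward zero; symmetric differences leave it near the mean.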
From Table 4, it emerges that the beta power in ADAS1 was significantly lower than in ADAS2 on channels Fp1 and O2; on channel O1, p = 0.06, very close to statistical significance. The beta power of ADAS1 was also significantly lower than in Manual on channels C3, O1, and O2; the ADAS2 beta power was lower than the Manual one, but with poor statistical significance. A qualitative insight into this can also be found in Figure 13: comparing the boxes relative to ADAS2 and Manual, it is possible to observe that in all channels the two sessions manifest strongly overlapping boxes, i.e., the statistical data among the users belong to similar distributions.

4. Discussion

Regarding the experimental characterization of the sensor, we can assert that the performance of the developed headband met expectations in terms of accuracy and bandwidth; in particular, we recall that the non-linearity of all channels was 0.8% FS, corresponding to 6 μV, and the bandwidth was in the range [0.8, 44] Hz with a power line rejection of 30 dB thanks to the IIR digital filter implemented in the developed GUI.
Regarding the test drive on the simulator performed by the ten volunteers, the aim of the experiment was to demonstrate that the EEG headband was able to discriminate the sensation of confidence perceived by the drivers during manual driving with respect to autonomous vehicles with gentle or aggressive algorithms. To do this, we focused our study on the power of the beta waves because the scientific literature evaluates it as an indicator of discomfort. In the design phase of the experiment, we expected that ADAS2 would be perceived as the most stressful session, while ADAS1 was designed to be the least stressful; however, as shown in Figure 13, ADAS1 indeed manifested the lowest beta wave power, as expected, but the Manual session was perceived, on average, as the most demanding in terms of mental load.
As reported in Table 4, we see that the O1 and O2 channels provided a statistically significant difference between ADAS1 vs. ADAS2 and ADAS1 vs. Manual; even if ADAS2's beta power was lower than the Manual beta power, the difference turned out not to be statistically significant, since the p-values in the last column of Table 4 are higher than 0.05. For this reason, we can only assert that Manual was slightly more stressful than ADAS2, but with quite low statistical significance.

5. Conclusions

We presented the design of a wearable EEG headband for the measurement of stress-related brain activity during driving. The EEG data are sent via WiFi to a laptop and the sensor is battery operated, so the headband is completely wireless in order to have high wearability. The characterization showed good performance in terms of linearity, resolution, and battery life. The metrological characterization of the sensor is of paramount importance in order to understand the accuracy of the sensor in use, especially if the data are then processed through ML algorithms. In fact, most works in the literature used commercial devices whose measurement error is not provided in the sensors' manuals.
We tested the proposed sensor by setting up an experiment that involved ten volunteers; the aim was to compare three different scenarios: manual driving, an autonomous vehicle with a "gentle" algorithm (ADAS1), and an autonomous vehicle with an "aggressive" algorithm (ADAS2). Processing the EEG data and evaluating the beta wave power, we observed that Manual driving produced the highest stress on drivers; ADAS2 manifested higher stress than ADAS1 and slightly lower than Manual; ADAS1 presented the lowest stress on drivers, with good statistical significance especially on channels O1 and O2.
As future work, we will integrate the EEG data with the data coming from the ECG and EDA sensors that we developed in previous works. Since we have full control of the transmission protocol, the sensor allows the simultaneous acquisition of different bio-signals on a common timebase, with high-accuracy time alignment.

Author Contributions

Conceptualization, A.A.; methodology, A.A. and T.A.N.; software, S.G.; formal analysis, A.A. and T.A.N.; data curation, A.A. and T.A.N.; writing—original draft preparation, A.A., T.A.N. and S.G.; writing—review and editing, A.A., T.A.N. and S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Galvani, M. History and future of driver assistance. IEEE Instrum. Meas. Mag. 2019, 22, 11–16. [Google Scholar] [CrossRef]
  2. Sini, J.; Marceddu, A.C.; Violante, M. Automatic Emotion Recognition for the Calibration of Autonomous Driving Functions. Electronics 2020, 9, 518. [Google Scholar] [CrossRef] [Green Version]
  3. Azevedo-Sa, H.; Jayaraman, S.K.; Esterwood, C.T.; Yang, X.J.; Robert, L.P.; Tilbury, D.M. Real-Time Estimation of Drivers’ Trust in Automated Driving Systems. Int. J. Soc. Robot. 2020, 13, 1911–1927. [Google Scholar] [CrossRef]
  4. Ursin, H. Expectancy and activation: An attempt to systematize stress theory. In Neurobiological Approaches to Human Disease; Hellhammer, D.H., Florin, I., Weiner, H., Eds.; Hans Huber Publishers: Gottingen, Germany, 1988; Volume 2, pp. 313–334. [Google Scholar]
  5. Subhani, A.R.; Xia, L.; Malik, A.S. EEG signals to measure mental stress. In Proceedings of the 2nd International Conference on Behavioral, Cognitive and Psychological Sciences, Bandos, Maldives, 25–27 November 2011; pp. 84–88. [Google Scholar]
  6. Olpin, M. The Science of Stress. Available online: https://web.archive.org/web/20171120215838/http://faculty.weber.edu/molpin/healthclasses/1110/bookchapters/stressphysiologychapter.htm (accessed on 21 November 2021).
  7. Affanni, A.; Chiorboli, G. Wearable instrument for skin potential response analysis in AAL applications. In Proceedings of the 20th IMEKO TC4 Symposium on Measurements of Electrical Quantities: Research on Electrical and Electronic Measurement for the Economic Upturn, Benevento, Italy, 15–17 September 2014. [Google Scholar]
  8. Affanni, A.; Bernardini, R.; Piras, A.; Rinaldo, R.; Zontone, P. Driver’s stress detection using skin potential response signals. Measurement 2018, 122, 264–274. [Google Scholar] [CrossRef]
  9. Zontone, P.; Affanni, A.; Bernardini, R.; Del Linz, L.; Piras, A.; Rinaldo, R. Stress Evaluation in Simulated Autonomous and Manual Driving through the Analysis of Skin Potential Response and Electrocardiogram Signals. Sensors 2020, 20, 2494. [Google Scholar] [CrossRef]
  10. Ooi, J.S.K.; Ahmad, S.A.; Chong, Y.Z.; Ali, S.H.M.; Ai, G.; Wagatsuma, H. Driver emotion recognition framework based on electrodermal activity measurements during simulated driving conditions. In Proceedings of the 2016 IEEE EMBS Conference on Biomedical Engineering and Sciences (IECBES), Kuala Lumpur, Malaysia, 4–8 December 2016; pp. 365–369. [Google Scholar]
  11. Pedrotti, M.; Mirzaei, M.A.; Tedesco, A.; Chardonnet, J.R.; Mérienne, F.; Benedetto, S.; Baccino, T. Automatic stress classification with pupil diameter analysis. Int. J. Hum.-Comput. Interact. 2014, 30, 220–236. [Google Scholar] [CrossRef] [Green Version]
  12. Elgendi, M.; Menon, C. Machine learning ranks ECG as an optimal wearable biosignal for assessing driving stress. IEEE Access 2020, 8, 34362–34374. [Google Scholar] [CrossRef]
  13. Teplan, M. Fundamentals of EEG measurement. Meas. Sci. Rev. 2002, 2, 1–11. [Google Scholar]
  14. Roshdy, A.; Karar, A.S.; Al-Sabi, A.; Al Barakeh, Z.; El-Sayed, F.; Beyrouthy, T.; Nait-ali, A. Towards Human Brain Image Mapping for Emotion Digitization in Robotics. In Proceedings of the 2019 3rd International Conference on Bio-Engineering for Smart Technologies (BioSMART), Paris, France, 24–26 April 2019; pp. 1–5. [Google Scholar]
  15. Kamińska, D.; Smółka, K.; Zwoliński, G. Detection of Mental Stress through EEG Signal in Virtual Reality Environment. Electronics 2021, 10, 2840. [Google Scholar] [CrossRef]
  16. Jasper, H.H. The ten-twenty electrode system of the International Federation. Electroencephalogr. Clin. Neurophysiol. 1958, 10, 371–375. [Google Scholar]
  17. Affanni, A.; Chiorboli, G.; Minen, D. Motion artifact removal in stress sensors used in driver in motion simulators. In Proceedings of the 2016 IEEE International Symposium on Medical Measurements and Applications, MeMeA 2016, Benevento, Italy, 15–18 May 2016. [Google Scholar]
  18. Huang, H.H.; Condor, A.; Huang, H.J. Classification of EEG Motion Artifact Signals Using Spatial ICA. In Statistical Modeling in Biomedical Research: Contemporary Topics and Voices in the Field; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 23–35. [Google Scholar] [CrossRef]
  19. Affanni, A.; Piras, A.; Rinaldo, R.; Zontone, P. Dual channel Electrodermal activity sensor for motion artifact removal in car drivers’ stress detection. In Proceedings of the 2019 IEEE Sensors Applications Symposium (SAS), Sophia Antipolis, France, 11–13 March 2019. [Google Scholar]
  20. Jiang, X.; Bian, G.B.; Tian, Z. Removal of artifacts from EEG signals: A review. Sensors 2019, 19, 987. [Google Scholar] [CrossRef] [Green Version]
  21. Jun, G.; Smitha, K.G. EEG based stress level identification. In Proceedings of the 2016 IEEE international conference on systems, man, and cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 003270–003274. [Google Scholar]
  22. Jebelli, H.; Hwang, S.; Lee, S. EEG-based workers’ stress recognition at construction sites. Autom. Constr. 2018, 93, 315–324. [Google Scholar] [CrossRef]
  23. Lotfan, S.; Shahyad, S.; Khosrowabadi, R.; Mohammadi, A.; Hatef, B. Support vector machine classification of brain states exposed to social stress test using EEG-based brain network measures. Biocybern. Biomed. Eng. 2019, 39, 199–213. [Google Scholar] [CrossRef]
  24. Hamadicharef, B.; Zhang, H.; Guan, C.; Wang, C.; Phua, K.S.; Tee, K.P.; Ang, K.K. Learning EEG-based spectral-spatial patterns for attention level measurement. In Proceedings of the 2009 IEEE International Symposium on Circuits and Systems, Taipei, Taiwan, 24–27 May 2009; pp. 1465–1468. [Google Scholar]
  25. Lin, C.T.; Wu, R.C.; Liang, S.F.; Chao, W.H.; Chen, Y.J.; Jung, T.P. EEG-based drowsiness estimation for safety driving using independent component analysis. IEEE Trans. Circuits Syst. I Regul. Pap. 2005, 52, 2726–2738. [Google Scholar]
  26. Budak, U.; Bajaj, V.; Akbulut, Y.; Atila, O.; Sengur, A. An effective hybrid model for EEG-based drowsiness detection. IEEE Sens. J. 2019, 19, 7624–7631. [Google Scholar] [CrossRef]
  27. Dkhil, M.B.; Wali, A.; Alimi, A.M. Drowsy driver detection by EEG analysis using Fast Fourier Transform. In Proceedings of the 2015 15th International Conference on Intelligent Systems Design and Applications (ISDA), Marrakech, Morocco, 14–16 December 2015; pp. 313–318. [Google Scholar]
  28. Li, G.; Chung, W.Y. A context-aware EEG headset system for early detection of driver drowsiness. Sensors 2015, 15, 20873–20893. [Google Scholar] [CrossRef]
  29. Zhou, Y.; Xu, T.; Li, S.; Li, S. Confusion state induction and EEG-based detection in learning. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 17–21 July 2018; pp. 3290–3293. [Google Scholar]
  30. Acı, Ç.İ.; Kaya, M.; Mishchenko, Y. Distinguishing mental attention states of humans via an EEG-based passive BCI using machine learning methods. Expert Syst. Appl. 2019, 134, 153–166. [Google Scholar] [CrossRef]
  31. Lin, Y.P.; Wang, C.H.; Jung, T.P.; Wu, T.L.; Jeng, S.K.; Duann, J.R.; Chen, J.H. EEG-based emotion recognition in music listening. IEEE Trans. Biomed. Eng. 2010, 57, 1798–1806. [Google Scholar]
  32. Jadhav, N.; Manthalkar, R.; Joshi, Y. Effect of meditation on emotional response: An EEG-based study. Biomed. Signal Process. Control 2017, 34, 101–113. [Google Scholar] [CrossRef]
  33. Begum, S.; Barua, S.; Ahmed, M.U. In-vehicle stress monitoring based on EEG signal. Int. J. Eng. Res. Appl. 2017, 7, 55–71. [Google Scholar] [CrossRef]
  34. Noh, Y.; Kim, S.; Yoon, Y. Evaluation on Diversity of Drivers’ Cognitive Stress Response using EEG and ECG Signals during Real-Traffic Experiment with an Electric Vehicle. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), IEEE, Auckland, New Zealand, 27–30 October 2019; pp. 3987–3992. [Google Scholar]
  35. Halim, Z.; Rehan, M. On identification of driving-induced stress using electroencephalogram signals: A framework based on wearable safety-critical scheme and machine learning. Inf. Fusion 2020, 53, 66–79. [Google Scholar] [CrossRef]
  36. Kim, H.S.; Yoon, D.; Shin, H.S.; Park, C.H. Predicting the EEG level of a driver based on driving information. IEEE Trans. Intell. Transp. Syst. 2018, 20, 1215–1225. [Google Scholar] [CrossRef]
  37. Affanni, A. Wireless sensors system for stress detection by means of ECG and EDA acquisition. Sensors 2020, 20, 2026. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  38. LaRocco, J.; Le, M.D.; Paeng, D.G. A Systemic Review of Available Low-Cost EEG Headsets Used for Drowsiness Detection. Front. Neuroinformatics 2020, 14, 553352. [Google Scholar] [CrossRef]
  39. Hayashi, T.; Okamoto, E.; Nishimura, H.; Mizuno-Matsumoto, Y.; Ishii, R.; Ukai, S. Beta activities in EEG associated with emotional stress. Int. J. Intell. Comput. Med. Sci. Image Process. 2009, 3, 57–68. [Google Scholar] [CrossRef]
  40. Sulaiman, N.; Hamid, N.H.A.; Murat, Z.H.; Taib, M.N. Initial investigation of human physical stress level using brainwaves. In Proceedings of the 2009 IEEE Student Conference on Research and Development (SCOReD), Serdang, Malaysia, 16–18 November 2009; pp. 230–233. [Google Scholar]
  41. Díaz, H.; Cid, F.M.; Otárola, J.; Rojas, R.; Alarcón, O.; Cañete, L. EEG Beta band frequency domain evaluation for assessing stress and anxiety in resting, eyes closed, basal conditions. Procedia Comput. Sci. 2019, 162, 974–981. [Google Scholar] [CrossRef]
  42. Palacios-García, I.; Silva, J.; Villena-González, M.; Campos-Arteaga, G.; Artigas-Vergara, C.; Luarte, N.; Rodríguez, E.; Bosman, C.A. Increase in beta power reflects attentional top-down modulation after psychosocial stress induction. Front. Hum. Neurosci. 2021, 15, 630813. [Google Scholar] [CrossRef]
  43. Hu, B.; Peng, H.; Zhao, Q.; Hu, B.; Majoe, D.; Zheng, F.; Moore, P. Signal quality assessment model for wearable EEG sensor on prediction of mental stress. IEEE Trans. Nanobiosci. 2015, 14, 553–561. [Google Scholar]
  44. Joint Committee for Guide in Metrology, JCGM. JCGM 200:2012, International Vocabulary of Metrology—Basic and General Concepts and Associated Terms (VIM), 3rd ed. 2012. Available online: https://jcgm.bipm.org/vim/en/2.16.html (accessed on 10 January 2022).
  45. Joint Committee for Guide in Metrology, JCGM. JCGM 100:2008, Evaluation of Measurement Data—Guide to the Expression of Uncertainty in Measurement. 2008. Available online: https://www.bipm.org/utils/common/documents/jcgm/JCGM_100_2008_E.pdf (accessed on 10 December 2021).
  46. Affanni, A.; Najafi, T.A.; Guerci, S. Design of a low cost EEG sensor for the measurement of stress-related brain activity during driving. In Proceedings of the 2021 IEEE International Workshop on Metrology for Automotive (MetroAutomotive), Bologna, Italy, 1–2 July 2021; pp. 152–156. [Google Scholar] [CrossRef]
  47. Affanni, A. Dual channel electrodermal activity and an ECG wearable sensor to measure mental stress from the hands. Acta Imeko 2019, 8, 56–63. [Google Scholar] [CrossRef]
  48. University of Udine-Laboratory of Sensors and Biosignals-BioSens Lab. Available online: https://www.uniud.it/it/territorio-e-societa/uniud-lab-village/laboratorio-iot/bio-sens-lab (accessed on 10 January 2022).
  49. DOF Reality Motion Simulators. Available online: https://dofreality.com/ (accessed on 10 January 2022).
  50. Facebook Technologies, LLC. Available online: https://www.oculus.com/rift/ (accessed on 10 January 2022).
  51. Logitech Inc. Available online: https://www.logitech.com/en-us/products/driving.html (accessed on 10 January 2022).
  52. VI-Grade Driving Simulators. Available online: https://www.vi-grade.com/ (accessed on 10 January 2022).
  53. Klug, M.; Gramann, K. Identifying key factors for improving ICA-based decomposition of EEG data in mobile and stationary experiments. Eur. J. Neurosci. 2020, 54, 8406–8420. [Google Scholar] [CrossRef]
  54. Delorme, A.; Makeig, S. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 2004, 134, 9–21. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  55. Mullen, T.R.; Kothe, C.A.; Chi, Y.M.; Ojeda, A.; Kerth, T.; Makeig, S.; Jung, T.P.; Cauwenberghs, G. Real-time neuroimaging and cognitive monitoring using wearable dry EEG. IEEE Trans. Biomed. Eng. 2015, 62, 2553–2567. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  56. Sejnowski, T.J. Independent component analysis of electroencephalographic data. Advances in Neural Information Processing Systems 8: Proceedings of the 1995 Conference; MIT Press: Cambridge, MA, USA, 1996; Volume 8, p. 145. [Google Scholar]
  57. Seo, S.; Lee, J. Stress and EEG. In Convergence and Hybrid Information Technologies; IntechOpen: London, UK, 2010; Available online: https://www.intechopen.com/chapters/10986 (accessed on 10 January 2022).
  58. Jatoi, M.A.; Kamel, N. Brain Source Localization Using EEG Signal Analysis; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  59. Waili, T.; Alshebly, Y.S.; Sidek, K.A.; Johar, M.G.M. Stress recognition using Electroencephalogram (EEG) signal. J. Phys. Conf. Ser. 2020, 1502, 012052. [Google Scholar] [CrossRef]
  60. Ahn, S.; Nguyen, T.; Jang, H.; Kim, J.G.; Jun, S.C. Exploring Neuro-Physiological Correlates of Drivers’ Mental Fatigue Caused by Sleep Deprivation Using Simultaneous EEG, ECG, and fNIRS Data. Front. Hum. Neurosci. 2016, 10, 219. [Google Scholar] [CrossRef]
  61. Choi, Y.; Kim, M.; Chun, C. Measurement of occupants’ stress based on electroencephalograms (EEG) in twelve combined environments. Build. Environ. 2015, 88, 65–72. [Google Scholar] [CrossRef]
  62. Blanco, J.A.; Vanleer, A.C.; Calibo, T.K.; Firebaugh, S.L. Single-trial cognitive stress classification using portable wireless electroencephalography. Sensors 2019, 19, 499. [Google Scholar] [CrossRef] [Green Version]
  63. Jap, B.T.; Lal, S.; Fischer, P.; Bekiaris, E. Using EEG spectral components to assess algorithms for detecting fatigue. Expert Syst. Appl. 2009, 36, 2352–2359. [Google Scholar] [CrossRef]
  64. Zhao, C.; Zhao, M.; Liu, J.; Zheng, C. Electroencephalogram and electrocardiograph assessment of mental fatigue in a driving simulator. Accid. Anal. Prev. 2012, 45, 83–90. [Google Scholar] [CrossRef]
  65. Tanaka, M.; Shigihara, Y.; Ishii, A.; Funakura, M.; Kanai, E.; Watanabe, Y. Effect of mental fatigue on the central nervous system: An electroencephalography study. Behav. Brain Funct. 2012, 8, 1–8. [Google Scholar] [CrossRef] [Green Version]
  66. Wascher, E.; Arnau, S.; Reiser, J.E.; Rudinger, G.; Karthaus, M.; Rinkenauer, G.; Dreger, F.; Getzmann, S. Evaluating mental load during realistic driving simulations by means of round the ear electrodes. Front. Neurosci. 2019, 13, 940. [Google Scholar] [CrossRef]
Figure 1. Electrode placement according to the 10/20 system. The electrodes used in the developed headband are highlighted in blue.
Figure 2. Block diagram of the EEG headband circuit; the dashed box represents the analog front-end of each channel.
Figure 3. EEG circuit realization on PCB: (a) top layer, (b) bottom layer. Red rectangles show the blocks of the circuit.
Figure 4. Control panel developed in the .NET environment for data acquisition and real-time plot of the six EEG channels.
Figure 5. The driving simulator available in our laboratory at the University of Udine.
Figure 6. Example of Jersey barriers’ placement to mimic road works; in this figure, a narrow lane followed by a multiple lane change is depicted.
Figure 7. Example of the raw and pre-processed signals.
Figure 8. Linearity of the headband channels; error bars represent the uncertainty on the linearity. (af) linearity of the channels F p 1 , F p 2 , C 3 , C 4 , O 1 , and O 2 , respectively.
Figure 9. Bandwidth characterization of the analog front-end of the headband; the horizontal line represents the cutoff limit. (af) Bandwidth of the channels F p 1 , F p 2 , C 3 , C 4 , O 1 , and O 2 , respectively.
Figure 10. Bandwidth characterization of the entire system with the digital notch IIR filter. (af) Bandwidth of the channels F p 1 , F p 2 , C 3 , C 4 , O 1 , and O 2 , respectively.
Figure 11. Spectrograms of the six channels extracted after the experiment sessions. (ac) F p 1 channel; (df) F p 2 channel; (gi) C 3 channel; (jl) C 4 channel; (mo) O 1 channel; (pr) O 2 channel. Pictures on the left column refer to the ADAS1 session, in the middle column refer to ADAS2 session, and in the right column refer to the Manual session.
Figure 12. Beta wave powers relative to the six channels calculated after processing. (ac) F p 1 channel; (df) F p 2 channel; (gi) C 3 channel; (jl) C 4 channel; (mo) O 1 channel; (pr) O 2 channel. Pictures on the left column refer to the ADAS1 session, in the middle column refer to the ADAS2 session, and on the right column refer to the Manual session.
Figure 13. Box plot of the beta powers relative to the ten volunteers. Markers represent the medians, boxes the 25th–75th percentiles, and whiskers the 5th–95th percentiles. The tendency of beta power to increase from ADAS1 to ADAS2 and from ADAS2 to Manual is noticeable.
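The box-plot statistics shown in Figure 13 are straightforward to reproduce from the per-subject beta powers. The following is a minimal pure-Python sketch; the percentile convention (linear interpolation between order statistics) is our assumption, since the paper does not state which convention the plotting tool used:

```python
import math

def percentile(values, p):
    """p-th percentile (0-100), linearly interpolating between order statistics."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    lo, hi = math.floor(k), math.ceil(k)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def box_stats(values):
    """Marker, box, and whisker positions as plotted in Figure 13."""
    return {
        "median": percentile(values, 50),
        "box": (percentile(values, 25), percentile(values, 75)),
        "whiskers": (percentile(values, 5), percentile(values, 95)),
    }
```

For example, `box_stats` applied to one channel's ten per-subject beta powers yields the five numbers needed to draw that channel's box in the figure.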
Table 1. Brain waves and their description.

| Wave Type | Frequency Range (Hz) | Description |
|---|---|---|
| Delta | 0.5–4 | Deep sleep |
| Theta | 4–8 | Sleep, meditation, concentration |
| Alpha | 8–12 | Relaxation, reflection; amplitude decreases with anxiety |
| Beta | 12–30 | Alert, focused; amplitude increases with stress, excitement, and high mental activity |
| Gamma | >30 | Focus, sensory processing; amplitude increases with anxiety |
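The band boundaries of Table 1 are easy to encode for later processing steps. A minimal sketch follows; the function name and the half-open-interval convention at band edges are our own choices, not taken from the paper:

```python
# Band edges from Table 1; the half-open-interval convention is our assumption.
EEG_BANDS = [
    ("delta", 0.5, 4.0),
    ("theta", 4.0, 8.0),
    ("alpha", 8.0, 12.0),
    ("beta", 12.0, 30.0),
]

def band_of(freq_hz):
    """Return the name of the Table 1 band containing freq_hz."""
    if freq_hz >= 30.0:
        return "gamma"
    for name, lo, hi in EEG_BANDS:
        if lo <= freq_hz < hi:
            return name
    raise ValueError("frequency below the EEG bands of Table 1")

print(band_of(20.0))  # beta: the stress-related band analyzed in this paper
```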
Table 2. Commercial EEG headbands’ specifications and comparison with the present paper.

| Model | Channel # | Sample Rate (Hz) | Resolution (nV) | Linearity | Bit # | Battery (h) |
|---|---|---|---|---|---|---|
| Emotiv Epoc | 14 | 128 | 510 | N/A | 14 | 9 |
| Neurosky Mindwave | 1 | 512 | N/A | N/A | 12 | 8 |
| Interaxon Muse 2 | 4 | 220–500 | 488 | N/A | 12 | 5 |
| Present paper | 6 | 200 | 43 | 0.8% | 14 | 10 |
Table 3. Mean beta powers for every channel and scenario relative to each volunteer.

Mean beta power for ADAS1 session (μV²)

| Subject # | Fp1 | Fp2 | C3 | C4 | O1 | O2 |
|---|---|---|---|---|---|---|
| 1 | 2.33 | 1.99 | 3.27 | 3.97 | 3.34 | 4.68 |
| 2 | 4.63 | 5.99 | 6.69 | 7.12 | 7.66 | 6.90 |
| 3 | 2.39 | 1.68 | 3.54 | 2.90 | 4.37 | 5.05 |
| 4 | 2.91 | 3.60 | 4.83 | 3.58 | 3.53 | 4.25 |
| 5 | 2.39 | 1.95 | 5.34 | 5.35 | 7.40 | 6.46 |
| 6 | 1.66 | 1.51 | 4.18 | 2.72 | 3.10 | 4.63 |
| 7 | 2.12 | 1.53 | 6.42 | 8.48 | 3.98 | 5.24 |
| 8 | 2.33 | 2.08 | 3.92 | 3.26 | 3.14 | 4.10 |
| 9 | 2.04 | 1.27 | 3.39 | 3.17 | 7.11 | 9.03 |
| 10 | 1.20 | 4.37 | 5.59 | 3.92 | 3.99 | 4.08 |

Mean beta power for ADAS2 session (μV²)

| Subject # | Fp1 | Fp2 | C3 | C4 | O1 | O2 |
|---|---|---|---|---|---|---|
| 1 | 2.56 | 2.73 | 4.67 | 3.91 | 9.06 | 6.63 |
| 2 | 4.86 | 5.81 | 5.40 | 6.22 | 7.67 | 9.77 |
| 3 | 3.21 | 3.05 | 5.71 | 4.85 | 3.91 | 4.86 |
| 4 | 4.90 | 4.24 | 8.55 | 4.73 | 5.19 | 5.55 |
| 5 | 2.45 | 2.05 | 6.30 | 5.42 | 8.70 | 6.31 |
| 6 | 1.93 | 1.61 | 4.17 | 3.67 | 4.37 | 7.10 |
| 7 | 7.14 | 5.65 | 10.68 | 15.81 | 6.45 | 6.05 |
| 8 | 3.75 | 3.86 | 6.24 | 3.62 | 3.64 | 17.19 |
| 9 | 2.54 | 1.58 | 4.17 | 3.28 | 10.83 | 12.94 |
| 10 | 1.93 | 5.94 | 5.34 | 5.95 | 7.07 | 6.37 |

Mean beta power for Manual session (μV²)

| Subject # | Fp1 | Fp2 | C3 | C4 | O1 | O2 |
|---|---|---|---|---|---|---|
| 1 | 2.26 | 3.13 | 10.30 | 4.42 | 7.41 | 8.29 |
| 2 | 2.53 | 2.58 | 6.80 | 5.72 | 5.14 | 8.71 |
| 3 | 9.64 | 14.93 | 22.75 | 31.55 | 10.73 | 13.89 |
| 4 | 2.38 | 1.41 | 9.11 | 7.21 | 6.45 | 6.07 |
| 5 | 3.93 | 3.27 | 8.55 | 7.16 | 11.28 | 10.41 |
| 6 | 1.33 | 1.41 | 4.25 | 2.98 | 4.04 | 7.09 |
| 7 | 2.84 | 2.38 | 5.99 | 6.92 | 4.61 | 8.38 |
| 8 | 3.98 | 3.46 | 4.44 | 4.34 | 4.45 | 7.10 |
| 9 | 1.81 | 1.59 | 4.12 | 2.88 | 11.24 | 11.07 |
| 10 | 1.88 | 7.55 | 6.51 | 3.98 | 4.52 | 5.83 |
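The mean beta powers in Table 3 are obtained by integrating each channel’s power spectrum over the 12–30 Hz beta band of Table 1. A minimal, dependency-free sketch of such a band-power computation follows; the paper’s actual pipeline (windowing, averaging, and artifact handling) is not reproduced here, so this is illustrative only:

```python
import math

def dft_power(x, fs):
    """Naive DFT power spectrum of a real signal x sampled at fs Hz.

    Returns (freqs, power); O(n^2), fine for short illustrative windows.
    """
    n = len(x)
    freqs, power = [], []
    for k in range(n // 2 + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        freqs.append(k * fs / n)
        power.append((re * re + im * im) / n)
    return freqs, power

def band_power(x, fs, lo, hi):
    """Total spectral power in [lo, hi) Hz, e.g. 12-30 Hz for beta (Table 1)."""
    freqs, power = dft_power(x, fs)
    return sum(p for f, p in zip(freqs, power) if lo <= f < hi)

# Synthetic check: a pure 20 Hz tone sampled at the headband's 200 Hz rate
# should put essentially all of its power in the beta band.
fs = 200
x = [math.sin(2 * math.pi * 20 * t / fs) for t in range(fs)]
beta = band_power(x, fs, 12, 30)   # concentrated at the 20 Hz bin
alpha = band_power(x, fs, 8, 12)   # essentially zero
```

With the signal expressed in μV, the resulting band powers carry the μV² units used in Table 3.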
Table 4. Wilcoxon test probabilities. The values with p ≤ 0.05 are in bold.

| Channel | ADAS1 vs. ADAS2 | ADAS1 vs. Manual | ADAS2 vs. Manual |
|---|---|---|---|
| Fp1 | **0.05** | 0.52 | 0.31 |
| Fp2 | 0.16 | 0.47 | 0.57 |
| C3 | 0.12 | **0.02** | 0.38 |
| C4 | 0.21 | 0.18 | 0.68 |
| O1 | 0.06 | **0.02** | 0.79 |
| O2 | **0.02** | **0.003** | 0.34 |
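The probabilities in Table 4 come from paired Wilcoxon signed-rank tests on the ten per-subject means of Table 3. A minimal pure-Python sketch using the normal approximation follows; the published values were presumably computed with an exact-distribution method, so small numerical differences from Table 4 are expected:

```python
import math

def wilcoxon_signed_rank(a, b):
    """Paired Wilcoxon signed-rank test, two-sided, normal approximation.

    Returns (W+, p-value): W+ is the sum of ranks of positive differences
    a[i] - b[i]; zero differences are dropped, tied |d| get average ranks.
    """
    diffs = [x - y for x, y in zip(a, b) if x != y]
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                       # assign average ranks to tied |d|
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1          # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p

# ADAS1 vs. Manual for channel O2 (columns taken from Table 3)
adas1_o2 = [4.68, 6.90, 5.05, 4.25, 6.46, 4.63, 5.24, 4.10, 9.03, 4.08]
manual_o2 = [8.29, 8.71, 13.89, 6.07, 10.41, 7.09, 8.38, 7.10, 11.07, 5.83]
w, p = wilcoxon_signed_rank(manual_o2, adas1_o2)
# Every subject's beta power rose from ADAS1 to Manual, so W+ takes its
# maximum value 55; the approximate p is around 0.005, consistent in
# significance with the 0.003 reported for O2 in Table 4.
```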
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Affanni, A.; Aminosharieh Najafi, T.; Guerci, S. Development of an EEG Headband for Stress Measurement on Driving Simulators. Sensors 2022, 22, 1785. https://doi.org/10.3390/s22051785