Article

Communications and Driver Monitoring Aids for Fostering SAE Level-4 Road Vehicles Automation

by Felipe Jiménez 1,*, José Eugenio Naranjo 1, Sofía Sánchez 1, Francisco Serradilla 1, Elisa Pérez 2, Maria José Hernández 2 and Trinidad Ruiz 2
1 University Institute for Automobile Research (INSIA), Universidad Politécnica de Madrid (UPM), 28031 Madrid, Spain
2 Psychology Faculty, Universidad Complutense de Madrid, Campus de Somosaguas, Ctra. de Húmera, s/n, 28223 Pozuelo de Alarcón, Madrid, Spain
* Author to whom correspondence should be addressed.
Electronics 2018, 7(10), 228; https://doi.org/10.3390/electronics7100228
Submission received: 3 August 2018 / Revised: 20 September 2018 / Accepted: 27 September 2018 / Published: 2 October 2018

Abstract

Road vehicles include more and more assistance systems that perform tasks to facilitate driving and make it safer and more efficient. However, the automated vehicles currently on the market do not exceed SAE level 2 and only in some cases reach level 3. Nevertheless, the qualitative and technological leap needed to reach level 4 is significant and numerous uncertainties remain. In this sense, a greater knowledge of the environment is needed for better decision making and the role of the driver changes substantially. This paper proposes the combination of cooperative systems with automated driving to offer a wider range of information to the vehicle than on-board sensors currently provide. This includes the actual deployment of a cooperative corridor on a highway. It also takes into account that in some circumstances or scenarios, pre-set or detected by on-board sensors or previous communications, the vehicle must hand back control to the driver, who may have been performing other tasks completely unrelated to supervising the driving. It is thus necessary to assess the driver’s condition as regards retaking control and to provide assistance for a safe transition.

1. Introduction

The solution to the major problems associated with road traffic seems to lie, at least to a significant extent, in automated and connected driving. In particular, the automated vehicle aims to reduce the high percentage of accidents in which the human factor is the main cause. According to statistical studies, two thirds of accidents are due exclusively to human failures, while this factor is present in 90% of accidents [1]. The case for the automated vehicle therefore starts with eliminating this factor as far as possible. Additionally, congestion problems could be reduced if better information were available, which is the basis of the connected vehicle. Hence, breakthroughs in automated driving technologies herald a much safer and highly efficient future for transportation [2,3,4].
The idea of an automated vehicle is almost as old as the vehicle itself. Between the late 1980s and early 1990s, a European project was developed that was crucial in establishing the foundations of intelligent vehicles: the PROMETHEUS project (Programme for a European Traffic of Highest Efficiency and Unprecedented Safety) [5]. In that framework, several automated prototypes were shown in October 1994 on Highway 1 near Charles-de-Gaulle Airport in Paris. In 1995, a 1600 km trip was completed from Munich (Germany) to Copenhagen (Denmark), reaching a speed of 175 km/h, with 95% of the trip taking place in automated driving mode under real conditions. A group from the University of Parma travelled 2000 km in 1996 on Italian highways at an average speed of 90 km/h, with 94% of the trip in automatic mode. The most noteworthy modern projects were CyberCars and CyberMove [6] in the 2000s.
Meanwhile, in 1991 the United States Congress urged the Department of Transportation to develop an automated vehicle and an infrastructure suitable for automated driving. The Carnegie Mellon University laboratory developed 11 automated vehicles and in 1995 travelled 3000 miles with a vehicle that drove in automatic mode 98% of the time; in 1997, a demonstration was conducted with 20 vehicles on I-15 in San Diego. A major milestone was the organization by the Defense Advanced Research Projects Agency (DARPA) of three competitions for driverless automated vehicles [7]. In the first race, held in 2004 in the Mojave Desert, no vehicle was able to finish. In 2005, the winning vehicle travelled 212 km and was not the only one to reach the finish line. In 2007, the competition moved to an urban environment, including street crossings.
However, these vehicles became widely known mainly when Google announced its development programme. Since then, numerous initiatives and projects have emerged, and manufacturers are incorporating more and more automation functions in their vehicles. This has also highlighted the challenges that must be addressed, not only from a technological point of view.
The most widely used classification system for automated driving was developed by SAE [8] and defines six levels, from 0 (no automation) to 5 (full automation). Up to level 2, monitoring of the environment is left to the driver, although the driving task is already shared with the vehicle at level 2. At level 3, the vehicle takes on the task of monitoring the environment, and at level 4 it also oversees the driving itself so that the driver can perform other tasks. The main difference between levels 4 and 5 is that, in the former, automation is only operative in some scenarios, so that, at some point, automatic-to-manual transitions take place with sufficient warning, whereas in level 5 this automation extends to all scenarios.
Despite the technological progress made in recent years, fully automated driving still faces several hurdles.
One of the main limitations is that many automated vehicles today use only on-board sensors to perceive the environment, and these may not be robust enough for some driving conditions. As on-board sensors can only obtain current information about the immediate surroundings, these automated vehicles often have difficulty reliably anticipating the motion of nearby vehicles [9]. Therefore, many disengagement incidents happen when an automated vehicle has to interact with nearby human-driven vehicles [10]. In order to facilitate the implementation of automated vehicles in real traffic, it is desirable to introduce beyond-line-of-sight information through cooperative systems [11,12,13], which also increase data redundancy for highly automated vehicles [14].
Cooperative Intelligent Transport Systems (C-ITS) are based on the generation of safety and efficiency information in road transport and its diffusion through Vehicle-to-X (V2X) communication networks. Several C-ITS experiments have been conducted, such as those in [15,16], which focused mainly on sending traffic information to vehicles using Vehicle-to-Infrastructure (V2I) communications, so that the same information shown on variable message panels is presented to the driver inside the vehicle through a Human Machine Interface (HMI). Other experiments, such as those of [17,18], exchange information between vehicles using vehicle-to-vehicle (V2V) communication services. In this way, the limit of the vehicles’ visual horizon is overcome and more comprehensive information is obtained about the driving environment, including variables that may affect safety.
At present, the European ITS Platform has published the first two sets of cooperative services [19] that are currently available for deployment, named Day 1 and Day 1.5 in reference to their implementation timeframes. The C-ITS Day 1 services include Emergency vehicle approaching (V2V), Hazardous location notification (V2I), Road works warning (V2I) and Weather conditions (V2I). C-ITS Day 1.5 includes Cooperative collision risk warning (V2V) and Zone access control for urban areas (V2I). These services have been extended and analysed in depth in Reference [20]. This set of services is currently being deployed in projects such as C-ROADS (https://www.c-roads.eu), where C-ITS corridors are enabled, such as the highway corridor extending from Rotterdam (Netherlands) via Frankfurt (Germany) to Vienna (Austria).
However, it is clear that automated driving in the strict sense, at SAE level 3 or higher [8], does not make sense if cooperative and connectivity components are not incorporated within the ego-vehicle, since the horizon limitations of on-board visual sensors mean the vehicle has the same limitations as a human being. Thus, a series of projects is currently being developed in the field of automated and connected driving, such as the European projects MAVEN (http://www.maven-its.eu/) and AUTOPILOT (http://autopilot-project.eu/).
Connected vehicle technology, such as vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication, may be able to resolve some of the existing problems resulting from heavy reliance on a priori information. V2I communication could provide a network for intersections, road signs and construction signs to transfer important infrastructure information, such as road layout changes, speed limits and traffic light information to autonomous vehicles [21,22]. Similarly, V2V communication allows vehicles to share data [23,24]. By integrating V2V and V2I communication with autonomous vehicle technology, an effective “cooperative driving” network can be established [21,25].
Some scenarios and applications in which cooperative systems can improve autonomous vehicle operation are cooperative adaptive cruise control (CACC) [26], intersections [27] and platooning [28,29]. Furthermore, they can be used for positioning and for updating digital map information with dynamic data [30]. An example of early work on their impact is a study on flow control and congestion management in Automated Highway Systems [31]. More recently, real test analyses and simulations have shown that safety and energy efficiency can be significantly improved for the connected automated vehicle as well as for neighbouring human-driven vehicles (e.g., [32,33,34]).
Another relevant issue is that automated driving changes the role of the driver, which should be considered whenever a new SAE level is reached [35]. In some of the worst accidents involving autonomous vehicles, the driver was not properly supervising the driving environment or the driving task. In addition, manufacturers and researchers have not sufficiently addressed how to handle scenarios that automated systems fail to manage once drivers have become dependent on automation. Some studies consider that the attentional state of a driver who has full confidence in a self-driving system is similar to that of a passenger riding in a vehicle driven by another person [36]. Driver monitoring is becoming crucial, as it has been shown that automation aids can lead to distraction and long reaction times in an emergency.
Previous research showed that drivers transitioned to dependence on automation after only 15 min of autonomous driving, as evidenced by eye movements, braking preparation behaviour and arousal level [37,38]. In recent years, some studies have analysed whether drivers can successfully control vehicles after a system failure based on a range of biometric data [39]. Drivers’ mental workload is relatively low during the autonomous-driving scenario because they do not experience any stress from driving. However, drivers’ systolic blood pressure increases because of the transition from autonomous driving to manual control and because of the mental workload involved in controlling the vehicle on their own just after using an autonomous-driving system. From the viewpoint of brain activity in the left frontal lobe, the data indicate that drivers’ cognition level during autonomous driving is lower than during manual driving. The eye-gaze data indicate that “mind distraction” occurs in participants while resuming control after a system failure because their brain activity is relatively low at that moment. The final results indicate that drivers who depend on autonomous control systems experience stress upon switching to manual control after a system failure.
The study [40] provides a taxonomy of different forms of autonomous vehicle handover situations. Scheduled, emergency and non-emergency handovers are covered, and system- and driver-initiated handovers are differentiated. Attention is drawn to the fact that the driver is often expected to maintain situational awareness and is hence legally responsible, whereas the handover situation may make this difficult or impossible and the driver may not respond appropriately. In this field, academic examinations of transitions are becoming more frequent (e.g., [41,42]); the authors of [43] focus on driver skills and have developed an approach for safe transitions based on characterizing them.
As cited above, eye tracking is one of the techniques commonly used to monitor driver behaviour, providing indicators for mental load assessment and gaze direction [44,45], among other things. Indexes such as saccadic eye movements and duration of eye blinks are useful for evaluating the attention of drivers riding in Level 3 autonomous vehicles [36].
One of the most established advanced driver assistance systems (ADAS) that compiles data on the driver’s condition analyses somnolence using the percentage of eye closure (PERCLOS) [46], based on studies suggesting that around 20% of all traffic accidents are related to fatigue [47]. The authors of [48,49] evaluate the driver’s condition by recording the face and head in a simulator and in a real vehicle. Another approach is to capture biomedical signals instead of images, such as brain, muscle and cardiovascular activity [50,51,52].
Other authors argue that brain activity is a good means of estimating mental workload. Some ocular indexes can show variations in mental activity related to the task being performed, the eye being an extension of the brain [45]. Cognitive load (also referred to as mental workload) is commonly defined as the relationship between the cognitive demands placed on a user by a task and the user’s cognitive resources [53] and can be estimated using performance, physiological and subjective measures. Pupil dilation [54], heart-rate variability and galvanic skin response are examples of physiological measures whose changes are related to variations in cognitive load levels [55]. The most popular eye-movement indices for mapping mental workload are pupil diameter and blink rate, as shown in the literature [56,57,58].
Finally, other studies focus on the design of vehicle driver interfaces [59]. Both automated driving phases as well as transition phases where the driver has to reengage with the driving task should be considered [60]. Concerning interfaces designed to transmit the road situation to the driver, haptic signals and vibrotactile stimuli have been studied in a variety of driving applications such as lane-keeping assistance [61], blind spot warning [62] and rear collision warning systems [63].
In summary, although much progress has been made in introducing automated driving on real roads, several challenges remain on research agendas before the highest automation levels can be achieved in the shortest period of time.
In this paper, two main aids are presented to foster higher levels of vehicle automation. On the one hand, the architecture of vehicular communications is considered essential for high levels of automation in order to increase the vehicle’s knowledge of the environment, thereby extending the range and capabilities of on-board perception sensors. A cooperative corridor along a real road is presented in order to obtain synergies between C-ITS and automated and connected vehicles, thereby enabling Automated Cooperative Driving. The full architecture designed to allow Traffic Management Centres (TMC) to generate C-ITS messages and transmit them to the road via V2X communications is explained and tested. Likewise, automated vehicles that circulate along this route have been adapted so that they are able to receive and process the C-ITS information, acting in the most appropriate manner possible. In this way, automated vehicles are able to take advantage of the cooperative environment, deployed and standardized at present, by including in their guidance systems the information received through this channel, which is managed and generated from the TMCs. This cooperative architecture follows current standards but uses some specifically developed communication modules. The main difference from other deployments is the interaction with autonomous vehicles in order to remotely control driving lane or speed in each scenario and road stretch.
Additionally, a study is presented on how driver awareness can be assessed when the automated vehicle has to hand back control to the driver, and on the design of interfaces that facilitate the driver’s task of taking back vehicle control. This transition is critical for safety because long periods of inattention to driving tasks could make it difficult to retake control in a short time, especially when complex manoeuvres are required immediately after the changeover. As previously stated, some research has been done and the difficulties in achieving correct driver responses have been discussed. Furthermore, many studies propose variables to assess driver awareness but little work has been done on methods to foster adequate driver psychophysical conditions in control transitions. Although there are some simple solutions for performing this control changeover, their efficiency should be tested with real drivers in order to implement the most reliable ones and thus ensure that drivers are prepared to take control in a short period of time. Finally, driver behaviour is analysed in specific scenarios after regaining vehicle control. Specifically, the merging manoeuvre is analysed because of its complexity and because it is a common scenario that could appear after a control changeover.
Both analyses are necessary for achieving SAE level 4 and are perhaps among the most disruptive topics in comparison with previous levels.

2. C-ITS Development

2.1. C-ITS Architecture

C-ITS architecture has been designed to guarantee the flow of information from the TMCs to the vehicles and vice versa, using mainly V2I communications. In this way, the TMCs can send the information provided by the cooperative services to the vehicles, which will inform the driver or carry out automated actions, depending on whether the vehicles are connected or automated and connected.
The complete value chain of the C-ITS has been developed, with the implementation of all the elements necessary to connect the TMCs with the vehicles, including the interfaces for the collection of information from various sources, intelligent processing of the latter and the transmission of C-ITS messages to the road through different means of communication.
Figure 1 shows the complete architecture, where the different functional modules are represented. The C-ITS module is integrated within the TMC of the Spanish Traffic Authority (DGT) and its actions can be configured manually as an additional module. Internally, it has an Information Collection Module that obtains internal data from the TMC (through the DATEX II protocol), as well as from other public sources, such as municipalities and enforcement services. This collected information serves as input to the Information Analysis Module, which processes it so that it can feed the decision-making module; the latter is responsible for deciding whether it is appropriate to generate some type of C-ITS message for the services enabled at that moment and for a geographical area with Road Side Unit (RSU) deployment. This module generates standardized C-ITS Decentralized Environmental Notification Messages (DENM) that are sent via an ad hoc protocol to the corresponding Sending-type RSUs, which connect with the TMC via Infrastructure-to-Infrastructure (I2I) communications. These RSUs decode the message from the control centre and transmit the corresponding DENM packet to the specific geographical area, either directly or through another RSU operating in Relay mode, using the ETSI ITS G5 protocol, standardized by the European Telecommunications Standards Institute (ETSI) for Intelligent Transport Systems (ITS). This message is received by the connected vehicles, with or without a driver, through their On Board Unit (OBU), which is also connected to the ETSI ITS G5 network.
In this way, the architecture covers all the elements involved, both in the generation of C-ITS services and in V2X communications, all of which are necessary to enable this type of capability on the road.
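To make this flow concrete, the following minimal Python sketch shows the kind of simplified DENM-style payload the decision-making module could hand to a Sending RSU over the I2I link. The field names, the cause-code value and the `rsu_endpoint.transmit` hook are illustrative assumptions; a real ETSI DENM is a far richer ASN.1-encoded structure.

```python
# Illustrative sketch only: a simplified DENM-style payload handed from
# the TMC's decision-making module to a Sending RSU over the I2I link.
# Real ETSI DENMs are ASN.1-encoded with a far richer schema.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class SimpleDenm:
    station_id: int          # originating station (TMC) identifier
    cause_code: int          # event type, e.g. 3 = roadworks (assumed value)
    latitude: float          # event position, WGS84 degrees
    longitude: float
    relevance_radius_m: int  # geo-broadcast area around the event
    detection_time: float    # Unix timestamp of the incident

def build_roadworks_denm(lat: float, lon: float) -> SimpleDenm:
    """Output of the decision-making module for a roadworks incident."""
    return SimpleDenm(station_id=1001, cause_code=3, latitude=lat,
                      longitude=lon, relevance_radius_m=2000,
                      detection_time=time.time())

def send_to_rsu(denm: SimpleDenm, rsu_endpoint) -> None:
    """I2I leg: serialize and hand the message to a Sending RSU, which
    re-encodes it for geo-broadcast on the ETSI ITS G5 radio network."""
    rsu_endpoint.transmit(json.dumps(asdict(denm)).encode("utf-8"))
```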

2.2. Deployment at the Test Site

Once the architecture was designed, the ETSI ITS G5 communications systems were deployed on one of the motorways entering the city of Madrid, the A6 motorway, which is part of the urban access hub of the TEN-T Atlantic Pan-European corridor. This test site also has a reversible lane in the centre of the road, which can be opened or closed on demand directly from the TMC. This lane allows testing without the need to share the road with other vehicles, so that developments can be validated more safely, but it can also be used with open traffic.
Eighteen RSU communication modules were installed along a 16-km stretch of this road, with a separation of between 500 and 1000 m between them, distributed according to their capabilities: 3 Sending RSUs with an I2I connection to the C-ITS control centre and 15 Relay RSUs that act as signal repeaters. This deployment provides ITS G5 connectivity throughout the section, guaranteeing reception of the information by vehicles at any point along the road. The Sending RSU modules are from the manufacturer Cohda and the Relay RSU modules are ITS-INSIA units [64,65], developed specifically for this purpose. They are based on an AR9220 chipset that uses a modified ath9k driver to allow 802.11p bands; the card is configured to work in the 5855–5925 MHz band. The modules run the Debian “wheezy” operating system with kernel version 3.19.0, configured to allow OCB (Outside the Context of a BSS) mode with the ath9k driver. Each module includes the NV08C-CSM Global Navigation Satellite System chipset, an integrated GLONASS + GPS + GALILEO + SBAS satellite navigation receiver for applications demanding low cost, low power consumption and uncompromised performance. Figure 2 shows a Relay RSU and the deployment of a Sending RSU on a variable message panel on the road.
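As a rough illustration of the Relay behaviour just described, the sketch below re-broadcasts each message once and bounds its propagation along the corridor. The duplicate cache and hop limit are simplifying assumptions standing in for the forwarding rules of ETSI GeoNetworking, not the modules' actual implementation.

```python
# Sketch of a Relay RSU's forwarding loop: re-broadcast each DENM once
# along the corridor. The duplicate cache and hop limit are simplifying
# assumptions, not the ETSI GeoNetworking forwarding mechanism.
from typing import Callable

_seen: set = set()           # message IDs already forwarded
MAX_HOPS = 18                # roughly one hop per RSU on the 16-km stretch

def relay(packet: dict, broadcast: Callable[[dict], None]) -> None:
    msg_id = (packet["station_id"], packet["detection_time"])
    if msg_id in _seen or packet.get("hops", 0) >= MAX_HOPS:
        return               # duplicate, or already travelled far enough
    _seen.add(msg_id)
    packet["hops"] = packet.get("hops", 0) + 1
    broadcast(packet)        # hand to the ITS G5 (802.11p OCB) radio driver
```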
A complete C-ITS use case was designed to demonstrate the successful performance of the architecture, covering the integration of all subsystems that form part of the value chain of cooperative services. An incident can thus be activated in the TMC, generating a C-ITS message that is received by the test-site RSUs and geo-broadcast to all vehicles in the area equipped with an OBU. Specifically, this message was received by an automated vehicle, which performs the appropriate actions upon receipt of the new information or hands back control to the driver.

2.3. Automated and Connected Vehicles

To validate the architecture, two connected vehicles were used, one manually driven and the other driverless. The connected vehicle is a Peugeot 307, equipped with a Cohda OBU connected to a tablet with a user interface that generates audible and visual warnings based on the information and alerts generated by the C-ITS service (Figure 3).
Meanwhile, to validate the architecture in driverless vehicles, an automated vehicle provided by INSIA was used. All its actuators (accelerator, brake and steering) are automated and are controlled by a low-level layer capable of receiving orders from a high-level guidance system. The car is based on the Mitsubishi i-MiEV platform (Figure 4) [66].
This system receives the information provided by the on-board sensors and processes it, together with the information from an HD digital map, in order to generate driving behaviour similar to that of humans. The vehicle is equipped with a high-precision GPS/Galileo receiver and an inertial navigation system (INS) for positioning, a Mobileye perception system for detecting obstacles, road lines and traffic signals, and a Velodyne lidar for obstacle detection. Additionally, an ITS-INSIA V2X OBU was added for the reception of C-ITS messages.
Likewise, the guidance system was expanded with a C-ITS information processing module, capable of both interpreting the information received through V2X and processing it to adapt the behaviour of the vehicle during automated guidance. Two main scenarios were tested: (1) automated vehicle speed adaptation upon receiving a new target via C-ITS communications; (2) automated lane change when a C-ITS message informs the vehicle that a lane is closed some distance ahead. In both cases, the automated vehicle supervised the environment using the on-board sensors and performed the manoeuvre safely.
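The following sketch illustrates how such a processing module could map incoming messages to guidance actions in the two scenarios tested. The cause codes and the guidance API (`set_target_speed`, `request_lane_change`, and so on) are hypothetical names for illustration, not the vehicle's actual software interface.

```python
# Sketch of a C-ITS processing module for the two scenarios tested.
# Cause codes and the guidance API are illustrative assumptions.
ROADWORKS, SPEED_ADVISORY = 3, 4   # illustrative cause codes

def on_denm(denm: dict, guidance, distance_to_event_m: float) -> None:
    if denm["cause_code"] == SPEED_ADVISORY:
        # Scenario 1: adopt the new target speed received via C-ITS.
        guidance.set_target_speed(denm["advised_speed_kmh"])
    elif denm["cause_code"] == ROADWORKS and distance_to_event_m < 2000:
        # Scenario 2: a lane is closed ahead; change lane in advance,
        # subject to on-board sensor confirmation that the gap is safe.
        if guidance.adjacent_lane_clear():
            guidance.request_lane_change()
        else:
            guidance.hand_back_control_to_driver()
```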

3. Transition between Automated and Manual Driving Modes

Although up to automation level 2 the driver is responsible for the driving task, and at level 3 retains responsibility for supervising this task or the environment, at level 4 the vehicle is responsible for these tasks so that the driver can perform other activities. However, unlike level 5, at level 4 the vehicle may return control to the driver. It is therefore necessary to establish methods to facilitate a safe transition as quickly as possible. To do this, the driver’s activation level must be analysed and the driver must be shown to be able to take back the driving task. Wakefulness is usually verified through a motor response, for example, pressing a button in the vehicle. However, this action may not be reliable enough to indicate that the driver is ready to drive safely, so additional measures are proposed and tested.
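As a minimal sketch of the kind of two-stage check discussed here, the function below combines a motor response (button press) with a fast cognitive response (reading a word aloud). The hooks and all timeouts are illustrative assumptions, not the system evaluated in this study.

```python
# Sketch of a two-stage driver-readiness check before handback.
# press_button / show_word / hear_answer are stand-ins for the HMI
# and microphone hooks; all timeout values are illustrative assumptions.
import time

def driver_ready(press_button, show_word, hear_answer) -> bool:
    t0 = time.monotonic()
    if not press_button(timeout_s=2.0):    # stage 1: motor task
        return False                       # no motor response: keep control
    word = show_word()                     # stage 2: cognitive task
    answer = hear_answer(timeout_s=3.0)    # vocal-key response
    return answer == word and time.monotonic() - t0 < 5.0
```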

3.1. Study Approach

The aim is to evaluate drivers’ alertness in a semi-automated driving simulation. The tests were carried out in a driving simulator, with one large screen onto which the driving scenario is projected and a driving position with steering wheel and pedals, as shown in Reference [67]. Different measures were used to check the efficiency and suitability of the handback of the driving task to the driver, that is, when driving goes from automated to manual. The experiment was carried out by 21 participants (8 men and 13 women), with an average age of 23.95 years (SD = 5.72) and an average of five years of driving experience (SD = 5.11). All were drivers with normal or corrected-to-normal vision. In order to evaluate participants’ activation level, an ocular registration system was used. The Model 504 (ASL, Billerica, MA, USA) is a remote, unobtrusive eye-tracking system designed to measure pupil diameter and the coordinates of the user’s gaze. The eye-movement camera emits an infrared beam and determines where the subject is looking from the pupil and corneal reflections. The vertical and horizontal position of the eye, pupil diameter and 16-bit external data were recorded at a frequency of 50 Hz. Along with these eye-movement data, the system returns an image of the subject’s field of vision with their gaze superimposed as a cursor in real time. The system has an accuracy of 0.5° of visual angle. In addition, a steering wheel and pedals connected to the system were used to study the driver’s behaviour. A unidirectional microphone was used as a vocal key when recording the response time to the cognitive task. Figure 5 shows the laboratory with the simulator and the equipment used.
Three tasks were performed by the participant to verify their activation level: (1) a motor task: pressing a button; (2) a fast and simple cognitive task, either arithmetical or verbal (performing an addition/subtraction or reading a word); (3) an execution task in driving (braking before a STOP signal that appears after the participant starts the simulated driving).
The different variables used to measure participants’ activation levels were: (1) Time it takes to fix the eye on the road when listening to an audible warning (in ms); (2) Reaction time for the motor task (in ms); (3) Reaction time for the cognitive task (in ms); (4) Size of the pupil (in pixels) and (5) Time it takes to brake when the stop sign appears on the road (in ms).
The size of the pupil was recorded from the beginning of the cognitive task to ensure sufficient time for the pupil to accommodate to the light (until the sound signal ends, 2000 ms) and until the appearance of the stop sign, measured as the moment at which the screen illumination changes.
Two driver activation situations (initial conditions), into which the participants were induced before each trial, were established to achieve their distraction: cognitive activation (reading or being distracted with their smartphone) or relaxation (eyes closed in a comfortable position). Each type of prior activation situation was combined with the two types of cognitive task performed prior to taking control of driving, verbal (reading a word projected on the screen) and arithmetical (performing a simple addition or subtraction), leading to four experimental conditions in a repeated-measures design. Each participant performed four trials, one per experimental condition. The order of appearance of the conditions was balanced across participants.
In the experimental process, after 5 min on average (randomized between 1 and 9 min), a warning is presented indicating that the participant must begin the driving task. After this stimulus, each participant performs two tasks to check their activation level (pressing the button and solving the cognitive task aloud). The word, addition or subtraction that appears in each trial is randomly selected from a set of 72 different stimuli for each type of task. After a period of driving in the simulator, a stop sign appears, to which the driver must react. Table 1 shows the sequence of the driver’s tasks.
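The timing of a single trial can be summarized in the compact sketch below; the callables stand in for simulator hooks and the stimulus strings are placeholders, since the actual stimulus sets are not reproduced here.

```python
# Compact sketch of one trial's timeline; the callables stand in for
# simulator hooks and the stimulus strings are placeholders.
import random
import time

STIMULI = [f"stimulus_{i}" for i in range(72)]     # 72 items per task type

def run_trial(sound_warning, log_event):
    time.sleep(random.uniform(60, 540))            # 1-9 min wait, ~5 min mean
    sound_warning()                                # handback warning
    log_event("warning", time.monotonic())
    log_event("stimulus", random.choice(STIMULI))  # word or sum, at random
    # ...participant presses the button, answers aloud, then drives and
    # brakes at the stop sign; each response is timestamped the same way.
```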

3.2. Results

The test results were as follows:
  • For the variable Time taken to fix the gaze on the area of interest for driving, a paired means contrast was performed (a sketch of this kind of contrast is given after this list). There is an effect of the activation situation, in the sense that it takes longer to look when the participant is relaxed with eyes closed (1071 ms) than when distracted by the mobile phone (881 ms), t(16) = 2.602, p = 0.019, d = 0.48.
  • For the variable Pupil size, a 2 × 2 × 2 repeated-measures ANOVA was performed (Activation situation × Type of cognitive task × Experimental moment). An effect of the experimental moment was found, F(1, 8) = 27.09, p = 0.001, partial η2 = 0.77. While solving the cognitive task, participants’ pupil diameter was greater than during driving (average values of 35.3 and 31.1 pixels, respectively). No differences were found either by activation situation or by type of task. These results seem to indicate that both cognitive tasks (verbal vs. arithmetical) entailed the same cognitive load. The results can be seen in Table 2.
  • For the Reaction time to press the button, a comparison of related means was also made and statistically significant differences were found for the Activation situation factor, t(20) = −2.066, p = 0.05, d = 0.4. When a participant is in a situation of activation, reading or distracted with a smartphone, the reaction time for this task is greater than in a relaxed situation (1492 ms vs. 1306 ms), that is, responding on average almost 200 ms later. One possible explanation is that the reaction time is longer because the participant is holding the mobile phone and must put it down before responding.
  • For the variables Reaction time for the cognitive task and Reaction time to the stop signal, a 2 × 2 repeated-measures ANOVA was performed. Regarding the Reaction time for the cognitive task, no effects of the activation situation or the type of cognitive task on reaction time were found. However, when the situation is reading, there are indications that the response to the arithmetical task is slower than to the verbal task (2197 vs. 1599 ms, respectively, d = 0.7).
  • There are also no statistically significant differences for the two effects studied on the reaction time to the stop sign; however, participants who had been in an activation situation and had performed an arithmetical task took on average 198 ms longer to brake before the stop sign than those who had also been in an activation situation but had read a word (d = 0.38).
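For reference, a paired contrast of the kind used above can be computed as sketched below; the reaction-time arrays are hypothetical stand-ins, as the study's raw data are not reproduced here.

```python
# Sketch of a paired-samples contrast with Cohen's d. The per-participant
# reaction times (ms) are hypothetical stand-ins for the study's data.
import numpy as np
from scipy import stats

distracted = np.array([1450, 1520, 1390, 1600, 1480, 1510, 1430, 1560])
relaxed    = np.array([1290, 1340, 1250, 1400, 1310, 1330, 1280, 1370])

t, p = stats.ttest_rel(distracted, relaxed)   # paired-samples t-test
diff = distracted - relaxed
cohens_d = diff.mean() / diff.std(ddof=1)     # within-subject effect size
print(f"t({len(diff) - 1}) = {t:.3f}, p = {p:.3f}, d = {cohens_d:.2f}")
```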

3.3. Discussion

According to the results obtained, the time it takes for a participant to press a button after an audible warning to take control of the vehicle is greater when distracted reading a smartphone than when relaxed with eyes closed. However, the time taken to attend to the driving screen is greater in the relaxed situation than in the situation of prior activation or cognitive distraction. Both situations are equally plausible in a semi-automated real driving scenario. A likely interference is that the driver must first put down the cell phone to perform the motor task; this result could easily be extrapolated to reading a book, talking on the mobile phone and so forth. However, drivers seem to respond faster to the road when previously in an activation situation. This result highlights the need to complement a motor measurement (e.g., pressing a button) with an ocular measurement.
In addition, pupil dilation proved sensitive to changes in cognitive load, as reflected in the results when the participant performs a cognitive task before a driving simulation. An ocular recording system must therefore detect both that the driver looks at the road in time and the change in pupil dilation while a cognitive task is performed.
Two types of cognitive task were tested, with the intention of keeping their load equal and low. The pupil-diameter results do not allow the conclusion that one task reflects more workload than the other. However, being distracted or in a situation of prior activation can interfere with the arithmetical task in two ways: (1) it takes longer to respond to the activation-assessment task itself if it is arithmetical (longer reaction time for the cognitive arithmetical task than for a verbal one) and (2) it seems that in these circumstances driving performance is also affected (greater average reaction time to the stop signal when the driver was in a situation of prior activation and performed an arithmetical task). For these reasons, the verbal task of reading a word is selected over the task of performing an arithmetical calculation to assess drivers’ level of alertness.

4. Driver Behaviour during a Safety-Critical Manoeuvre When Regaining Vehicle Control

Finally, after regaining control of the vehicle, the driver may have to manage a situation that requires a higher than usual attentional load, such as merging into another lane or changing speed. In this regard, we evaluate whether such situations in fact cause greater cognitive load, in which case some assistance system might be advisable.
More specifically, the manoeuvre of merging onto a different road or lane becomes a conflict situation, even more so in high-density traffic, where the driver must constantly recalculate and adapt their position and speed, taking into account not only the features of the road but also the position and speed of the surrounding vehicles.
A detailed study of ocular behaviour in several subjects in driving situations has been conducted with the aim of proposing an interface for a driver assistance system, thus facilitating merging into traffic using intervehicular communications. The driver’s ocular responses to stimuli have been recorded and subsequently analysed through an eye-tracking system to design an adaptable interface to assist them with the merge.
The final purpose of the research is to analyse human-factor variables that correlate with driver workload.

4.1. Study Approach

The test group comprised 8 subjects of similar age and driving experience, drawn from the previous sample. Subjects drove on routes located in the south-eastern district of Madrid, where they performed merging manoeuvres on both sides of the vehicle. Data on 10–20 merging manoeuvres on both sides of the road were collected for each driver.
For the tests, a vehicle was instrumented to capture the data from the subjects. Participants used their own cars, to make driving manoeuvres more natural. The on-board computer used for logging data was an i7-6700U with 8 GB of RAM running GNU/Linux.
The subjects were fitted with Tobii Pro Glasses 2 (Figure 6), a state-of-the-art eye-tracking device consisting of a pair of glasses and a software controller installed on a CPU unit. It is designed to capture natural viewing behaviour in any real-world environment while ensuring outstanding eye-tracking robustness and accuracy. The glasses are provided with 2 IR sensors and 2 cameras for each eye, which enable gaze and pupil data to be obtained; in addition, a forward-facing camera placed in the centre records a video of the scene. The gaze sampling frequency is between 50 and 100 Hz and the device has automatic parallax compensation. The system comprises 4 eye cameras, a gyroscope and an accelerometer. Its main advantages are that it is unrestrained and unobtrusive, capable of robust tracking across a wide cross-section of the population and easy to use without extensive training.
The dark pupil detection method is used to detect the pupil, extracting the diameter from the shaded area. At the beginning of the recording, a prior calibration of the subject’s gaze is needed through a target provided by the software manufacturers.
A ROS-based package connected to the Tobii glasses runs during the whole experiment, gathering, transforming and publishing data; a logging process then stores the data for further analysis. The data stored, along with their descriptions and units, are summarized in Table 3.
The glasses are connected to the embedded computer via an Ethernet cable. The transmission protocol is the one provided by Tobii [68], which works over UDP.
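A minimal sketch of the receiving side is shown below. The port number and the JSON field names ("ts", "pd" for pupil diameter, "gp" for gaze point, "eye") are assumptions about the vendor's live-data stream, not a verified description of the Tobii API.

```python
# Sketch of a UDP listener for the glasses' live-data stream. Port and
# JSON field names are assumptions about the vendor format, for
# illustration only; the real setup wraps this in a ROS publisher.
import json
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 49152))          # assumed live-data port

while True:
    payload, _addr = sock.recvfrom(4096)
    try:
        sample = json.loads(payload.decode("utf-8"))
    except ValueError:
        continue                       # skip malformed packets
    if "pd" in sample:                 # pupil-diameter sample
        print(sample.get("ts"), sample.get("eye"), sample["pd"])
```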
The image is segmented into Areas of Interest (AOIs) to analyse three variables: the total number of fixations per area, the duration and area of the first fixation, and the total fixation duration in each area. The areas are centre lane, right lane, left lane, right rear-view mirror, left rear-view mirror and centre mirror.
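This AOI bookkeeping reduces to a point-in-rectangle classification, as sketched below; the AOI rectangles are illustrative placeholders rather than the calibrated regions used in the tests.

```python
# Sketch of the AOI analysis: classify fixations by area and accumulate
# the three measures listed above. The AOI rectangles (scene-camera
# pixels) are illustrative placeholders, not the calibrated regions.
from collections import defaultdict

AOIS = {  # (x_min, y_min, x_max, y_max)
    "centre_lane": (600, 400, 1320, 800),
    "left_mirror": (0, 300, 250, 500),
    "right_mirror": (1670, 300, 1920, 500),
    # ...left_lane, right_lane and centre_mirror defined likewise
}

def classify(x: float, y: float) -> str:
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "other"

def aoi_stats(fixations):
    """fixations: iterable of (x, y, duration_ms) in gaze order."""
    count = defaultdict(int)
    total = defaultdict(float)
    first = None                      # (area, duration) of first fixation
    for x, y, dur in fixations:
        area = classify(x, y)
        count[area] += 1
        total[area] += dur
        if first is None:
            first = (area, dur)
    return count, total, first
```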

4.2. Results

Pupil diameter and fixation duration were the variables chosen to study mental load in merging situations, with merging time defined as the interval from the moment the driver first fixes his gaze on the rear-view mirror until he has fully entered the main road. Figure 7 shows an example of the evolution of eye fixation during a merging manoeuvre and a heat map of the gaze direction. The subject’s baseline is extracted from a resting situation in which the eye perceives no external stimulus.
The following variables are statistically compared between areas of interest in relation to the baseline during normal driving conditions: (1) total duration of fixations; (2) total number of fixations; (3) duration of the first fixation. Table 4 shows the results obtained.
In general, the total duration and number of fixations on the lane are greater in the baseline, since the driver focuses attention on the lane and does not have to shift attention in order to change lanes. On the other hand, the total duration and number of fixations on the left mirror are greater during the merging manoeuvre, because the driver has to pay attention to it in order to carry the manoeuvre out successfully.
Similarly, during the merging manoeuvre, the total number of fixations and the total duration are compared between the different areas of interest. Table 5 shows the results. The only significant finding between areas of interest is that the driver looks quantitatively more (in both duration and number of fixations) at the left mirror than at the other areas when merging, because it is necessary to attend to the information provided by the mirror to perform the manoeuvre successfully.
In addition, the pupil diameter of each eye is presented in Table 6.
Drivers’ pupil diameters showed sensitivity in merging situations, the value increasing in relation to baseline. Drivers had to examine a changing situation where a decision had to be made quickly. The results also indicate that the cognitive load in this situation rises due to the risk of the manoeuvre and the large volume of information, as is shown in other studies [69].
A Wilcoxon signed-rank test for paired samples was carried out to verify whether there are statistically significant differences in pupil dilation for each eye between the two conditions. This test was chosen because the small sample size makes it impossible to determine whether the sample comes from a normally distributed population. The test can be considered the equivalent of the paired-samples t-test but operates with ranks instead of means.
A significant difference is found for the dilation of the right pupil between the baseline and merging situations, with Wilcoxon test values of Z = −2.38 and p = 0.016. Significant differences are also found for the dilation of the left pupil (Z = −2.1, p = 0.036). These results show that there are significant differences between the two conditions, in the sense that the pupil is more dilated during merging than during baseline conditions.
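Such a contrast can be computed as sketched below; the pupil-diameter arrays are hypothetical stand-ins for the study's data, with n = 8 as in the sample.

```python
# Sketch of the Wilcoxon signed-rank contrast for paired samples.
# The per-driver pupil diameters (mm) are hypothetical stand-ins.
import numpy as np
from scipy import stats

baseline = np.array([3.1, 2.9, 3.4, 3.0, 3.2, 2.8, 3.3, 3.1])
merging  = np.array([3.6, 3.3, 3.9, 3.4, 3.5, 3.2, 3.8, 3.4])

stat, p = stats.wilcoxon(baseline, merging)   # paired, rank-based test
print(f"W = {stat}, p = {p:.3f}")  # small p: pupils dilate when merging
```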

4.3. Discussion

Pupil diameter is considered sensitive to highway merging situations. Pupils dilate more in a merging environment than in a resting situation: the baseline diameter is smaller than the merging diameter in both eyes, this being related to driver cognitive load.
Regarding fixation duration, although in a merging situation one might expect faster processing to be reflected in a decrease in fixation times, this was not observed. When looking in the mirrors during the merging manoeuvre, the eye absorbs all the stimuli that can provide the information necessary for decision making in the shortest time possible; however, no evidence was found for this hypothesis in the results obtained.
Finally, after analysing the heat maps of each of the merging situations of the different tests, a remarkably hot zone in both the left and right sides of the rear-view mirror was observed. When the subjects performed the merging manoeuvre, a large percentage of their fixations were located on the upper-inner part of the mirror in more than half of the tests conducted, as shown in Figure 7.

5. Conclusions

In this paper, two major issues regarding increasing road vehicle automation levels are tackled. First, cooperative systems are used to provide the vehicle with a wider information horizon so that it can take decisions in advance. The definition, implementation and deployment of a C-ITS architecture for automated and/or connected vehicles on a real road have been presented, which allows automated driving to be compatible with the generation of C-ITS messages from traffic management centres. All the components that form part of the C-ITS value chain have been implemented and applied to the automated and connected vehicle scope. To validate this architecture, modifications were made to the guidance system of one of INSIA’s automated vehicles, providing it with a module for interpreting C-ITS DENM messages received from an ITS G5 OBU installed in the vehicle itself. This setup was validated in a real deployment on the A6 motorway in Madrid by means of a notification of public works on the road issued from the DGT TMC, and the automated vehicle’s response to this incident was analysed. The satisfactory results of this validation show that convergence between C-ITS and connected and automated driving is possible, extending the capabilities of automated and cooperative driving, since the vehicle is able to respond to the incidents reported by the TMCs in a similar way to human drivers, in many cases more efficiently. From the point of view of the TMCs, the result of the project is a great step forward, since it allows the capacities of their C-ITS to be extended, addressing not only manually driven connected vehicles but also automated ones.
Secondly, when the vehicle is in an area in which automated driving is not allowed, or the vehicle expects that a situation it cannot manage is approaching, vehicle control should be transferred back to the driver. In this scenario, the driver should demonstrate alertness and a good condition for taking back control. Although some solutions consider only a minor input (pressing a button, for example), they might be ineffective because they do not guarantee the proper attention level. For this reason, some alternatives are proposed and, finally, our research concludes that the verbal task of reading a word is better than performing an arithmetical calculation for assessing driver alertness. In any event, the best solution involves both a motor action and a verbal answer to ensure the driver is in a mentally activated state. It should be noted that the driver samples are quite small due to practical limitations, which limits extrapolation of the numerical results. However, the statistical analysis was designed to assess whether or not the differences are significant, so the tendencies and general decisions presented in our final results can be accepted.
Finally, after taking back vehicle control, some complex scenarios (mainly changing lanes or merging onto another road) may need additional assistance. In fact, such situations cause greater cognitive load and drivers encounter difficulties in capturing all the relevant information, as some results show, making an assistance system advisable. These systems would facilitate decision making and manoeuvres, making manual driving safer and more comfortable. Their importance increases when this safety-critical manoeuvre must be performed just after regaining vehicle control, when the driver may not be completely involved in the task (even when his attention has been supervised). As future research, an assistance system for safe merging based on V2V communications is being proposed and tested, together with the design of a Human Machine Interface (HMI), based on the results presented in this paper (HMI location, additional workload, etc.). Furthermore, the cooperative architecture described allows information exchange between vehicles in order to advise drivers on the best way to perform the merging manoeuvre.

Author Contributions

F.J. proposed the complete research, supervised tests and results. J.E.N. was responsible for the C-ITS corridor deployment and tests with connected and automated vehicles. S.S. was in charge of tests with drivers and results analysis. F.S. developed interfaces. E.P., M.J.H. and T.R. developed the tests in the driving simulator and provided statistical support for the analysis of results.

Funding

This research has been partially funded by the Spanish Direccion General de Trafico—DGT (SICOTRAM project SPIP2017-02324), the Spanish Ministerio de Economia y Competitividad (CAV project-TRA2016-78886-C3-3-R) and the Connected Europe Facility (CEF) Program under grant agreement no. 2015-EU-TM-0243-S (AUTOCITS project).

Acknowledgments

We cordially thank all our colleagues from the AUTOCITS project for their collaboration in the C-ITS corridor deployment.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hobbs, F.D. Traffic Planning and Engineering; Pergamon Press: Oxford, UK, 1989. [Google Scholar]
  2. Stern, R.; Cui, S.; Delle Monache, M.L.; Bhadani, R.; Bunting, M.; Churchill, M.; Hamilton, N.; Haulcy, R.; Pohlmann, H.; Wu, F.; et al. Dissipation of stop-and-go waves via control of autonomous vehicles: Field experiments. Transp. Res. Part C 2018, 7, 42–57. [Google Scholar] [CrossRef]
  3. Aeberhard, M.; Rauch, S.; Bahram, M.; Tanzmeister, G.; Thomas, J.; Pilat, Y.; Homm, F.; Huber, W.; Kaempchen, N. Experience, results and lessons learned from automated driving on Germany’s highways. IEEE Intell. Transp. Syst. Mag. 2015, 7, 42–57. [Google Scholar] [CrossRef]
  4. Mersky, A.C.; Samaras, C. Fuel economy testing of autonomous vehicles. Transp. Res. Part C 2016, 65, 31–48. [Google Scholar] [CrossRef] [Green Version]
  5. Hofflinger, B.; Conte, G.; Esteve, D.; Weisglas, P. Integrated Electronics for Automotive Applications in the EUREKA Program PROMETHEUS. In Proceedings of the Sixteenth European Solid-State Circuits Conference. ESSCIRC’90, Grenoble, France, 19–21 September 1990; pp. 13–17. [Google Scholar]
  6. Naranjo, J.E.; Bouraoui, L.; Garcia, R.; Parent, M.; Sotelo, M.A. Interoperable Control Architecture for Cybercars and Dual-Mode Cars. IEEE Trans. Intell. Transp. Syst. 2009, 10, 146–154. [Google Scholar] [CrossRef]
  7. Thrun, S. Stanley: The Robot that Won the DARPA Grand Challenge. J. Field Robot. 2006, 23, 661–692. [Google Scholar] [CrossRef]
  8. SAE Automotive. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles; Report J3016_201609; SAE Automotive: Warrendale, PA, USA, 2016. [Google Scholar]
  9. Harding, J.; Powell, G.; Yoon, R.; Fikentscher, J.; Doyle, C.; Sade, D.; Lukuc, M.; Simons, J.; Wang, J. Vehicle-to-Vehicle Communications: Readiness of V2V Technology for Application; Tech. Rep. DOT HS 812 014; National Highway Traffic Safety Administration: Washington, DC, USA, 2014.
  10. California DMV. Autonomous Vehicle Disengagement Reports; California DMV: Sacramento, CA, USA, 2016.
  11. Montemerlo, M.S.; Murveit, H.J.; Urmson, C.P.; Dolgov, D.A.; Nemec, P. Determining When to Drive Autonomously. U.S. Patent 8718861, 6 May 2014. [Google Scholar]
  12. Urmson, C.P.; Dolgov, D.A.; Chatham, A.H.; Nemec, P. System and Method of Providing Recommendations to Users of Vehicles. U.S. Patent 9658620, 23 May 2017. [Google Scholar]
  13. Krasniqi, X.; Hajrizi, E. Use of IoT Technology to Drive the Automotive Industry from Connected to Full Autonomous Vehicles. IFAC-PapersOnLine 2016, 49, 269–274. [Google Scholar] [CrossRef]
  14. Vanholme, B.; Gruyer, D.; Lusetti, B.; Glaser, S.; Mammar, S. Highly automated driving on highways based on legal safety. IEEE Trans. Intell. Transp. Syst. 2013, 14, 333–347. [Google Scholar] [CrossRef]
  15. Castiñeira, R.; Gil, M.; Naranjo, J.E.; Jimenez, F.; Premebida, C.; Serra, P.; Vadejo, A.; Nashashibi, F.; Abualhoul, M.Y.; Asvadi, A. AUTOCITS—Regulation study for interoperability in the adoption of autonomous driving in European urban nodes. In Proceedings of the European Transportation Research Arena 2018, Viena, Austria, 16–18 April 2018. [Google Scholar]
  16. Nikolaou, S.; Gragapoulos, I. IoT—Based interaction of automated vehicles with Vulnerable Road Users in controlled environments. In Proceedings of the 8th International Congress on Transportation Research—ICTR 2017, Thessaloniki, Greece, 27–29 September 2017. [Google Scholar]
  17. Sakr, A.H.; Bansal, G.; Vladimerou, V.; Kusano, K.; Johnson, M. V2V and onboardonboard sensor fusion for road geometry estimation. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 1–8. [Google Scholar]
  18. Rios-Torres, J.; Malikopoulos, A.A. A Survey on the Coordination of Connected and Automated Vehicles at Intersections and Merging at Highway On-Ramps. IEEE Trans. Intell. Transp. Syst. 2017, 18, 1066–1077. [Google Scholar] [CrossRef]
  19. C-ITS Platform. C-ITS Platform Report Phase I; European Commission: Brussels, Belgium, January 2016. [Google Scholar]
  20. C-ITS Platform. C-ITS Platform Report Phase II; European Commission: Brussels, Belgium, September 2017. [Google Scholar]
  21. Barrachina, J.; Sanguesa, J.A.; Fogue, M.; Garrido, P.; Martinez, F.J.; Cano, J.-C.; Calafate, C.T.; Manzoni, P. V2X-d: A Vehicular Density Estimation System That Combines V2V and V2I Communications. In Proceedings of the 2013 IFIP Wireless Days, Valencia, Spain, 13–15 November 2013. [Google Scholar]
  22. Ilgin Guler, S.; Menendez, M.; Meier, L. Using connected vehicle technology to improve the efficiency of intersections. Transp. Res. Part C. 2014, 46, 121–131. [Google Scholar] [CrossRef]
  23. Dang, R.; Ding, J.; Su, B.; Yao, Q.; Tian, Y.; Li, K. A lane change warning system based on v2v communication. In Proceedings of the 17th International IEEE Conference Intelligent Transportation Systems (ITSC), Qingdao, China, 8–11 October 2014; pp. 1923–1928. [Google Scholar]
  24. Dey, K.C.; Rayamajhi, A.; Chowdhury, M.; Bhavsar, P.; Martin, J. Vehicle-to-vehicle (v2v) and vehicle-to-infrastructure (v2i) communication in a heterogeneous wireless network performance evaluation. Transp. Res. Part C 2016, 68, 168–184. [Google Scholar] [CrossRef]
  25. Kaviani, S.; O’Brien, M.; Van Brummelen, J.; Michelson, D.; Najjaran, H. INS/GPS localization for reliable cooperative driving. In Proceedings of the 2016 IEEE Canadian Conference Electrical and Computer Engineering (CCECE), Vancouver, Canada, 15–18 May 2016. [Google Scholar]
  26. Jin, I.; Ge, J.I.; Avedisov, S.S.; He, C.R.; Qin, W.B.; Sadeghpour, M.; Orosz, G. Experimental validation of connected automated vehicle design among human-driven vehicles. Transp. Res. Part C 2018, 91, 335–352. [Google Scholar]
  27. Xu, B.; Li, S.E.; Bian, Y.; Li, S.; Ban, X.J.; Wang, J.; Li, K. Distributed conflict-free cooperation for multiple connected vehicles at unsignalized intersections. Transp. Res. Part C 2018, 93, 322–334. [Google Scholar] [CrossRef]
  28. Rahman, M.S.; Abdel-Aty, M. Longitudinal safety evaluation of connected vehicles’ platooning on Expressways. Accid. Anal. Prev. 2018, 117, 381–391. [Google Scholar] [CrossRef] [PubMed]
  29. van Nunen, E.; Koch, R.; Elshof, L.; Krosse, B. Sensor safety for the european truck platooning challenge. In Proceedings of the 23rd World Congress on Intelligent Transport Systems, Melbourne, Australia, 10–14 October 2016. [Google Scholar]
  30. O’Brien, M.; Kaviani, S.; Van Brummelen, J.; Michelson, D.; Najjaran, H. Localization Estimation Filtering Techniques for Reliable Cooperative Driving. In Proceedings of the 2016 Canadian Society Mechanical Engineering International Congress, Kelowna, Canada, 2–5 June 2016. [Google Scholar]
  31. Varaiya, P.; Shladover, S.E. Sketch of an IVHS systems architecture. In Proceedings of the Vehicle Navigation and Information Systems Conference, Dearborn, MI, USA, 20–23 October 1991; pp. 909–922. [Google Scholar]
  32. Talebpour, A.; Mahmassani, H.S. Influence of connected and autonomous vehicles on traffic flow stability and throughput. Transp. Res. Part C 2016, 71, 143–163. [Google Scholar] [CrossRef]
  33. Lioris, J.; Pedarsani, R.; Tascikaraoglu, F.Y.; Varaiya, P. Platoons of connected vehicles can double throughput in urban Roads. Transp. Res. Part C 2017, 77, 292–305. [Google Scholar] [CrossRef]
  34. Talavera, E.; Díaz, A.; Jiménez, F.; Naranjo, J.E. Impact on Congestion and Fuel Consumption of a Cooperative Adaptive Cruise Control System with Lane-Level Position Estimation. Energies 2018, 11, 194. [Google Scholar] [CrossRef]
  35. Kyriakidis, M.; de Winter, J.C.; Stanton, N.; Bellet, T.; van Arem, B.; Brookhuis, K.; Martens, M.H.; Bengler, K.; Andersson, J.; Merat, N.; et al. A human factors perspective on automated driving. Theor. Issues Ergon. Sci. 2017, 1–27. [Google Scholar] [CrossRef]
  36. Takeda, Y.; Sato, T.; Kimura, K.; Komine, H.; Akamatsu, M.; Sato, J. Electrophysiological evaluation of attention in drivers and passengers: Toward an understanding of drivers’ attentional state in autonomous vehicles. Transp. Res. Part F 2016, 42, 140–150. [Google Scholar] [CrossRef]
  37. Arakawa, T.; Oi, K. Verification of autonomous vehicle over-reliance. In Proceedings of the Measuring Behavior 2016, Dublin, Ireland, 25–27 May 2016; pp. 177–182. [Google Scholar]
  38. Arakawa, T. Trial verification of human reliance on autonomous vehicles from the viewpoint of human factors. Int. J. Innov. Comput. Inf. Control 2018, 14, 491–501. [Google Scholar]
  39. Arakawa, T.; Hibi, R.; Fujishiro, T. Psychophysical assessment of a driver’s mental state in autonomous vehicles. Transp. Res. Part A 2018, in press. [Google Scholar] [CrossRef]
  40. McCall, R.; McGee, F.; Mirnig, A.; Meschtscherjakov, A.; Louveton, N.; Engel, T.; Tscheligi, M. A taxonomy of autonomous vehicle handover situations. Transp. Res. Part A 2018, in press. [Google Scholar] [CrossRef]
  41. Politis, I.; Langdon, P.; Bradley, M.; Skrypchuk, L.; Mouzakitis, A.; Clarkson, P.J. Designing autonomy in cars: A survey and two focus groups on driving habits of an inclusive user group, and group attitudes towards autonomous cars. In Proceedings of the International Conference on Applied Human Factors and Ergonomics, Los Angeles, CA, USA, 17–21 July 2017. [Google Scholar]
  42. Wintersberger, P.; Green, P.; Riener, A. Am I driving or are you or are we both? A taxonomy for handover and handback in automated driving. In Proceedings of the 9th International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, Manchester Village, VT, USA, 26–29 June 2017. [Google Scholar]
  43. Nilsson, J.; Falcone, P.; Vinter, J. Safe transitions from automated to manual driving using driver controllability estimation. IEEE Trans. Intell. Transp. Syst. 2014, 16, 1806–1816. [Google Scholar] [CrossRef]
  44. Recarte, M.A.; Nunes, L.M. Mental workload while driving: Effects on visual search, discrimination, and decision making. J. Exp. Psychol. Appl. 2003, 9, 119–137. [Google Scholar] [CrossRef] [PubMed]
  45. Recarte, M.A.; Nunes, L.M. Effects of verbal and spatial-imagery tasks on eye fixations while driving. J. Exp. Psychol. Appl. 2000, 6, 31. [Google Scholar] [CrossRef] [PubMed]
  46. Schleicher, R.; Galley, N.; Briest, S.; Galley, L.A. Blinks and saccades as indicators of fatigue in sleepiness warnings: Looking tired? Ergonomics 2008, 51, 982–1010. [Google Scholar] [CrossRef] [PubMed]
  47. Dinges, D. PERCLOS: A Valid Psychophysiological Measure of Alertness as Assessed by Psychomotor Vigilance; Technical Report; Federal Highway Administration: Washington, DC, USA, 1998. [Google Scholar]
  48. The Royal Society for the Prevention of Accidents. Road Accidents: A Literature Review and Position Paper; The Royal Society for the Prevention of Accidents: Birmingham, UK, February 2001. [Google Scholar]
  49. Daza, I.G.; Hernandez, N.; Bergasa, L.M.; Parra, I.; Yebes, J.J.; Gavilan, M.; Quintero, R.; Llorca, D.F.; Sotelo, M.A. Drowsiness monitoring based on driver and driving data fusion. In Proceedings of the 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC), Washington, DC, USA, 5–7 October 2011; pp. 1199–1204. [Google Scholar]
  50. Flores, M.J.; Armingol, J.M.; de la Escalera, A.A. Driver drowsiness warning system using visual information for both diurnal and nocturnal illumination conditions. EURASIP J. Adv. Signal Process. 2010, 2010, 438205. [Google Scholar] [CrossRef]
  51. Oron-Gilad, T.; Ronen, A.; Shinar, D. Alertness maintaining tasks (AMTs) while driving. Accid. Anal. Prev. 2008, 40, 851–860. [Google Scholar] [CrossRef] [PubMed]
  52. Papadelis, C.; Chen, Z.; Kourtidou-Papadeli, C.; Bamidis, P.; Chouvarda, L.; Bekiaris, E.; Maglaveras, N. Monitoring sleepiness with onboard electrophysiological recordings for preventing sleep-deprived traffic accidents. Clin. Neurophysiol. 2007, 118, 1906–1922. [Google Scholar] [CrossRef] [PubMed]
  53. Wilson, G.F.; O’Donnell, R.D. Measurement of operator workload with the neuropsychological workload test battery. Adv. Psychol. 1988, 52, 63–100. [Google Scholar]
  54. Wickens, C.D. Multiple resources and performance prediction. Theor. Issues Ergon. Sci. 2002, 3, 159–177. [Google Scholar] [CrossRef] [Green Version]
  55. Bailey, B.P.; Iqbal, S.T. Understanding changes in mental workload during execution of goal-directed tasks and its application for interruption management. ACM Trans. Comput.-Hum. Interact. 2008, 14, 1–28. [Google Scholar] [CrossRef] [Green Version]
  56. Reimer, B.; Mehler, B.; Coughlin, J.F.; Godfrey, K.M.; Tan, C. An on-road assessment of the impact of cognitive workload on physiological arousal in young adult drivers. In Proceedings of the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Essen, Germany, 21–22 September 2009; pp. 115–118. [Google Scholar]
  57. Wickens, C.D.; Hollands, J. Engineering Psychology and Human Performance, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2000. [Google Scholar]
  58. Ahlstrom, U.; Friedman-Berg, F. Using eye movement activity as a correlate of cognitive workload. Int. J. Ind. Ergon. 2006, 36, 623–636. [Google Scholar] [CrossRef]
  59. Flemisch, F.; Heesen, M.; Hesse, T.; Kelsch, J.; Schieben, A.; Beller, J. Towards a dynamic balance between humans and automation: Authority, ability, responsibility and control in shared and cooperative control situations. Cogn. Technol. Work 2012, 14, 3–18. [Google Scholar] [CrossRef]
  60. Debernard, S.; Chauvin, C.; Pokam, R.; Langlois, S. Designing Human-Machine Interface for Autonomous Vehicles. IFAC-PapersOnLine 2016, 49, 609–614. [Google Scholar] [CrossRef]
  61. Rosario, H.D.; Solaz, J.S.; Rodriguez, N.; Bergasa, L.M. Controlled inducement and measurement of drowsiness in a driving simulator. IET Intell. Transp. Syst. 2010, 4, 280–288. [Google Scholar] [CrossRef]
  62. Beruscha, F.; Wang, L.; Augsburg, K.; Wandke, H.; Bosch, R. Do drivers steer toward or away from lateral directional vibrations at the steering wheel? In Proceedings of the 2nd European Conference on Human Centred Design for Intelligent Transport Systems, Berlin, Germany, 29–30 April 2010; pp. 227–236. [Google Scholar]
  63. Morrell, J.; Wasilewski, K. Design and evaluation of a vibrotactile seat to improve spatial awareness while driving. In Proceedings of the 2010 IEEE Haptics Symposium, Waltham, MA, USA, 25–26 March 2010; pp. 281–288. [Google Scholar]
  64. Anaya, J.J.; Talavera, E.; Jiménez, F.; Gómez, N.; Naranjo, J.E. A novel GeoBroadcast algorithm for V2V Communications over WSN. Electronics 2014, 3, 521–537. [Google Scholar] [CrossRef]
  65. Talavera, E.; Anaya, J.J.; Gómez, O.; Jiménez, F.; Naranjo, J.E. Performance comparison of Geobroadcast strategies for winding roads. Electronics 2018, 7, 32. [Google Scholar] [CrossRef]
  66. Naranjo, J.E.; Jiménez, F.; Gómez, O.; Zato, J.G. Low level control layer definition for autonomous vehicles based on fuzzy logic. Intell. Autom. Soft Comput. 2012, 18, 333–348. [Google Scholar] [CrossRef]
  67. Jiménez, F.; Naranjo, J.E.; Serradilla, F.; Pérez, E.; Hernández, M.J.; Ruiz, T.; Anaya, J.J.; Díaz, A. Intravehicular, Short- and Long-Range Communication Information Fusion for Providing Safe Speed Warnings. Sensors 2016, 16, 131. [Google Scholar] [CrossRef] [PubMed]
  68. Tobii AB. Tobii Pro Glasses 2 API Developer’s Guide v.1.26.0; Tobii AB: Stockholm, Sweden, 2016; p. 40. [Google Scholar]
  69. Poock, G.K. Information processing vs pupil diameter. Percept. Mot. Skills 1973, 37, 1000–1002. [Google Scholar] [CrossRef]
Figure 1. C-ITS functional architecture.
Figure 2. Relay RSU (left) and installation of the Sending RSU on a variable message panel (right).
Figure 3. Human Machine Interface for the C-ITS driver warning messages.
Figure 4. Detail of the INSIA automated vehicle used in the validation of the C-ITS architecture.
Figure 5. Driving simulator (left) and equipment for assessing the driver’s activation level (right).
Figure 6. Eye-tracking software of the Tobii Pro Glasses 2.
Figure 7. Eye fixations in the areas of interest. Left: fixations numbered in order. Right: heat map.
Table 1. Driver tasks.

Task                              | Time Interval (min)
Reading or relaxing (eyes closed) | 1–9
Beep sound                        |
Press button                      |
Cognitive task                    |
Driving task                      | 0.1–0.5
Stop signal                       |
Table 2. Results of pupil size (pixels).

Initial Driver Situation | Cognitive Task: Verbal | Cognitive Task: Arithmetic
Reading                  | 34.8                   | 37.1
Relaxing                 | 34.1                   | 35.3

Driving baseline: 31.1 pixels.
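
For reference, averages such as those in Table 2 can be reproduced directly from a labelled sample log. The following is a minimal Python sketch, assuming a hypothetical CSV export with columns condition (reading/relaxing/driving), task (verbal/arithmetic) and pupil_px; the file name and column layout are illustrative assumptions, not the study’s actual log format.

```python
import pandas as pd

# Hypothetical sample log: one row per eye-tracker sample, labelled with the
# condition and cognitive task it was recorded under. File name and column
# names are assumptions for illustration only.
samples = pd.read_csv("pupil_samples.csv")  # columns: condition, task, pupil_px

# Mean pupil size (pixels) per initial situation and cognitive task,
# i.e., the aggregation behind Table 2.
table2 = (samples[samples["condition"].isin(["reading", "relaxing"])]
          .pivot_table(index="condition", columns="task",
                       values="pupil_px", aggfunc="mean"))
print(table2.round(1))

# Driving baseline for comparison (31.1 pixels in Table 2).
baseline = samples.loc[samples["condition"] == "driving", "pupil_px"].mean()
print(f"Driving baseline: {baseline:.1f} pixels")
```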
Table 3. Captured variables with the eye-tracking glasses.

Variable                        | Description                                                         | Units
Gaze direction                  | Gazing direction for both eyes.                                     | millimetre
Gaze position                   | Gaze position within the boundaries of the recording frame.         | Up-left [0, 0] to bottom-right [1, 1]
Gaze position 3D                | Where the gaze point is located in 3D.                              | millimetre
Pupil diameter (left and right) | Two variables with the diameter of the left and right pupil.        | millimetre
Pupil centre (left and right)   | Two variables with the centre position of the left and right pupil. | millimetre
Gyroscope                       | The angular speed on each of the three axes.                        | degrees/second
Video                           | The video recorded.                                                 | fps

Sampling rate: 50 Hz.
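
The variables in Table 3 are produced by the glasses as a stream of timestamped JSON packets [68]. As a minimal sketch of how the pupil-diameter variable might be extracted from a recording, the code below assumes a newline-delimited livedata.json.gz file and the packet keys pd (pupil diameter), eye and s (validity flag); these names reflect our reading of the API guide and should be checked against the API version in use.

```python
import gzip
import json

# Minimal sketch: extract per-eye pupil diameters from a Tobii Pro Glasses 2
# recording. The file name and the packet keys ("pd", "eye", "s") are
# assumptions based on our reading of the API guide [68]; verify them
# against the API version actually in use.
diameters = {"left": [], "right": []}

with gzip.open("livedata.json.gz", "rt") as f:
    for line in f:
        packet = json.loads(line)
        # Pupil-diameter packets carry "pd" (millimetres) and "eye";
        # a non-zero "s" flags an invalid sample.
        if "pd" in packet and packet.get("s", 0) == 0:
            diameters[packet["eye"]].append(packet["pd"])

for eye, values in diameters.items():
    if values:
        print(f"{eye} eye: mean pupil diameter {sum(values) / len(values):.4f} mm")
```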
Table 4. Gaze direction comparison during merging manoeuvre with baseline.

Are differences significant?

Area of Interest | Total Duration        | Fixations Number     | Duration of First Fixation
Lane             | Yes (baseline higher) | No                   | Yes (baseline lower)
Left lane        | No                    | No                   | Yes (baseline lower)
Left rear screen | Yes (baseline lower)  | Yes (baseline lower) | Yes (baseline lower)
Vehicle          | No                    | No                   | No
Table 5. Gaze direction comparison between areas of interest during merging manoeuvre.

Are differences significant?

Areas of Interest Compared   | Total Duration                   | Fixations Number
Lane – Left lane             | Yes (Lane > Left lane)           | No
Lane – Left rear screen      | No                               | No
Lane – Vehicle               | Yes (Lane > Vehicle)             | Yes (Lane > Vehicle)
Left lane – Left rear screen | No                               | Yes (Left lane < Left rear screen)
Left lane – Vehicle          | No                               | Yes (Left lane > Vehicle)
Left rear screen – Vehicle   | Yes (Left rear screen > Vehicle) | Yes (Left rear screen > Vehicle)
Table 6. Pupil average diameters in merge situations and baseline.

Average pupil diameter (mm)

Driver | Merging: Left Eye | Merging: Right Eye | Baseline: Left Eye | Baseline: Right Eye
A      | 2.7116            | 2.7000             | 2.6261             | 2.5709
B      | 1.8747            | 1.8254             | 1.8866             | 1.8759
C      | 2.4157            | 2.3367             | 2.3192             | 2.2646
D      | 2.4898            | 2.5596             | 2.4603             | 2.4523
E      | 3.4948            | 3.4692             | 2.4089             | 2.4337
F      | 2.4801            | 2.4552             | 2.2878             | 2.2364
G      | 2.0994            | 2.0931             | 1.9586             | 1.9615
H      | 1.8746            | 1.8499             | 1.8318             | 1.7699
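
To check whether the per-driver differences in Table 6 are systematic rather than noise, a paired test across the eight drivers is a natural choice. The sketch below runs a paired t-test on the left-eye values copied from Table 6; the test choice is ours for illustration, as the paper does not state which statistic was applied.

```python
from scipy import stats

# Left-eye average pupil diameters (mm) for drivers A-H, copied from Table 6.
merging  = [2.7116, 1.8747, 2.4157, 2.4898, 3.4948, 2.4801, 2.0994, 1.8746]
baseline = [2.6261, 1.8866, 2.3192, 2.4603, 2.4089, 2.2878, 1.9586, 1.8318]

# Paired t-test across drivers (the test choice is an assumption; the paper
# does not state which statistic was used).
t, p = stats.ttest_rel(merging, baseline)
print(f"t = {t:.3f}, p = {p:.3f}")
```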