Article

Information Difference of Transfer Entropies between Head Motion and Eye Movement Indicates a Proxy of Driving

Runlin Zhang, Qing Xu, Shunbo Wang, Simon Parkinson and Klaus Schoeffmann
1 College of Intelligence and Computing, Tianjin University, Tianjin 300072, China
2 Department of Computer Science, University of Huddersfield, Huddersfield HD1 3DH, UK
3 Institute of Information Technology, Klagenfurt University, 9020 Klagenfurt, Austria
* Author to whom correspondence should be addressed.
Entropy 2024, 26(1), 3; https://doi.org/10.3390/e26010003
Submission received: 30 October 2023 / Revised: 7 December 2023 / Accepted: 13 December 2023 / Published: 19 December 2023
(This article belongs to the Special Issue Information-Theoretic Methods in Data Analytics)

Abstract

Visual scanning, achieved via head motion and gaze movement, supports visual information acquisition and cognitive processing and plays a critical role in common sensorimotor tasks such as driving. The coordination of the head and eyes is an important human behavior that makes a key contribution to goal-directed visual scanning and sensorimotor driving. In this paper, we investigate the two most common patterns of eye–head coordination: “head motion earlier than eye movement” and “eye movement earlier than head motion”. We utilize bidirectional transfer entropies between head motion and eye movement to determine whether these two coordination patterns exist. Furthermore, we propose a unidirectional information difference to assess which pattern predominates in head–eye coordination. Additionally, we find a significant correlation between the normalized unidirectional information difference and driving performance. This result not only indicates the influence of eye–head coordination on driving behavior from a computational perspective but also validates the practical significance of our transfer entropy approach for quantifying eye–head coordination.

1. Introduction

Visual scanning, performed through the combined effort of the eyes, head and torso, is important for general human–environment interactions [1,2]. The investigation of visual scanning provides a fundamental window into the nature of visual-cognitive processing during naturalistic sensorimotor tasks such as walking and driving [3,4]. The underpinning mechanism of visual scanning and visual-cognitive processing essentially involves the coordination of head and eyes in the performance of sensorimotor tasks [5,6,7,8,9]. Therefore, head–eye coordination can serve as a valid means to study the internal mechanisms of visual scanning and visual-cognitive processing.
Head–eye coordination primarily exhibits two patterns: “head motion earlier than the eye movement” and “eye movement earlier than the head motion” [7]. The “head motion earlier than the eye movement” is frequently observed in goal-directed, top-down and prepared tasks [5]. Conversely, the “eye movement earlier than the head motion” often occurs in stimulus-driven, bottom-up and spontaneous tasks [10]. Thus, when head and eye movements occur concurrently, the pattern of head–eye coordination reflects the level of preparedness for the gaze shift, which subsequently affects the performance of sensorimotor tasks. Therefore, this paper quantitatively measures the state of head–eye coordination from the perspective of its patterns, aiming to explore the relationship between head–eye coordination and driving performance.
We design a virtual reality driving task to obtain the head motion and eye movement data that we need to investigate. Firstly, driving is one of the most common sensorimotor tasks and a popular topic, and head–eye coordination is abundantly present in driving. Secondly, the driving task, as a whole, is executed as a “top-down” goal-directed activity [1,2]. During driving, “head motion earlier than eye movement” should dominate, which aids in achieving more significant results.
Eye movement and head motion data are observed as time series of eye rotation $X_t$ and head rotation $Y_t$, respectively, indexed by a sequential time index $t = 1, 2, \ldots$. In this paper, stochastic processes, which are natural representations for complex, real-world data [11], are used to model the time series of eye movement and head motion, denoted by variables $X$ and $Y$, respectively.
Therefore, the behavior of head–eye coordination is reflected in the inter-relationship between $X$ and $Y$ [12]. For example, if the coordination “head motion earlier than eye movement” exists, the past of head motion, $Y_{t-1}$, helps predict the current observation of eye movement, $X_t$; that is, $Y_{t-1}$ adds probabilistic predictivity about $X_t$. The transfer entropy from head motion to eye movement ($TE_{Y \to X}$) precisely measures this contribution [13], as does the transfer entropy from eye movement to head motion ($TE_{X \to Y}$) for the reverse direction.
Based on this, we use the transfer entropies between head motion and eye movement to measure head–eye coordination during driving [13]. Firstly, a significant $TE_{Y \to X}$ provides evidence for the presence of the “head motion earlier than eye movement” coordination, while a significant $TE_{X \to Y}$ demonstrates the existence of the “eye movement earlier than head motion” coordination. Secondly, according to Wiener–Granger causality [14], the unidirectional information difference ($UID$) between $TE_{Y \to X}$ and $TE_{X \to Y}$ can determine whether the coordination between the head and eyes runs from the head to the eyes or from the eyes to the head.
Notice that, although the dynamics of head–eye coordination in visual scanning has attracted many studies recently [5,6,7,9], there has been no quantitative measure of this coordination. The tight connection between head–eye coordination and the information flow between head motion and eye movement leads us to believe that quantifying head–eye coordination based on transfer entropy is feasible. In this paper, we introduce the normalized unidirectional information difference ($NUID$), which preserves the relationship between the unidirectional information difference and head–eye coordination, puts $TE_{Y \to X}$ and $TE_{X \to Y}$ on the same scale and thus improves the unidirectional information difference. We have found a significant correlation between driving performance and the normalized unidirectional information difference from head motion to eye movement. Our finding indicates that head–eye coordination during driving, quantified via transfer entropy, is related to driving performance.
This paper is organized as follows. Firstly, related works are presented in Section 2. We then describe the proposed methodology for the new measures in Section 3. The experiment conducted is detailed in Section 4, followed by the results and discussion in Section 5. Finally, we present the conclusion and future works in Section 6.

2. Related Works

2.1. Transfer Entropy

Transfer entropy, fundamentally a measure of complexity, is a well-known means of quantifying the directional information flow between time series [13]. It is considered a non-parametric, model-free version of Wiener–Granger causality [14], capable of handling complex and non-linear time series [11]. Given random variables $P$ and $Q$, the transfer entropy from source $Q$ to target $P$ is defined as follows [11]:
$$TE_{Q \to P}(l, k) = I\left(P_t : Q_{t-1}^{(l)} \mid P_{t-1}^{(k)}\right) = H\left(P_t \mid P_{t-1}^{(k)}\right) - H\left(P_t \mid P_{t-1}^{(k)}, Q_{t-1}^{(l)}\right), \tag{1}$$
where $P_t$ and $Q_t$ are the observations of variables $P$ and $Q$ at time $t$, respectively, $P_{t-1}^{(k)} = (P_{t-k}, \ldots, P_{t-1})$ and $Q_{t-1}^{(l)} = (Q_{t-l}, \ldots, Q_{t-1})$ are the temporally ordered histories of the target and source variables, respectively, and $H(\cdot \mid \cdot)$ and $I(\cdot : \cdot)$ represent conditional entropy and mutual information, respectively. Here, $l$ and $k$ are the so-called history lengths of $Q_{t-1}^{(l)}$ and $P_{t-1}^{(k)}$, respectively. Notice that the information flow from $Q$ to $P$ obtained via $TE_{Q \to P}(l, k)$ factors out the influence of the past of $P$.
Transfer entropy is asymmetric. Because $H(P_t \mid P_{t-1}^{(k)})$ is no smaller than $H(P_t \mid P_{t-1}^{(k)}, Q_{t-1}^{(l)})$, transfer entropy is non-negative. Considering that both conditional entropies are non-negative, $TE_{Q \to P}(l, k)$ has $H(P_t \mid P_{t-1}^{(k)})$ as its maximum.
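For concreteness, the following Python sketch (an illustration added to this review, not code from the paper; the function name, the 8-bin equal-width discretization and the history lengths $l = k = 1$ are assumptions) computes a plug-in histogram estimate of the transfer entropy between two discretized time series:

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target, bins=8):
    """Plug-in (histogram) estimate of TE(source -> target) with history
    lengths l = k = 1, following the conditional-entropy form of Eq. (1)."""
    # Discretize the continuous rotation signals into equal-width bins.
    src = np.digitize(source, np.histogram_bin_edges(source, bins=bins))
    tgt = np.digitize(target, np.histogram_bin_edges(target, bins=bins))

    n = len(tgt) - 1
    triples = Counter(zip(tgt[1:], tgt[:-1], src[:-1]))  # (P_t, P_{t-1}, Q_{t-1})
    pairs_pp = Counter(zip(tgt[1:], tgt[:-1]))           # (P_t, P_{t-1})
    pairs_pq = Counter(zip(tgt[:-1], src[:-1]))          # (P_{t-1}, Q_{t-1})
    singles = Counter(tgt[:-1])                          # P_{t-1}

    te = 0.0
    for (cur, prev, q_prev), c in triples.items():
        p_joint = c / n                                      # p(P_t, P_{t-1}, Q_{t-1})
        p_cond_full = c / pairs_pq[(prev, q_prev)]           # p(P_t | P_{t-1}, Q_{t-1})
        p_cond_self = pairs_pp[(cur, prev)] / singles[prev]  # p(P_t | P_{t-1})
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te
```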

2.2. Coordination of Head and Eyes

Recently, many studies have demonstrated how prevalent the coordination of head and eyes is in human activities, for example, in motor control [8,9,15,16].
The coordination of head and eyes is ever-present in our behavioral activities, particularly when a relatively large attentional shift is about to occur [1,2,17,18,19]; as a matter of fact, this coordination emerges whenever the gaze shift is larger than 15° [9,17]. Specifically, the coordination of head and eyes is necessary because eye movements selectively allocate the available attentional resources to task-relevant information, while head motions accommodate the limited field of view of the eyes [18,19]. That is, head motions and eye movements are synergistic, especially temporally, for visual scanning and visual-cognitive processing [6]. Basically, head motions are followed by eye movements (namely, the preparatory head motion occurs earlier than the eye movement) during sensorimotor tasks, because the observer usually has prior, “top-down” knowledge and attains the attentional shift through goal-directed modulation [6,7,8,9,20].
Note that head–eye coordination involving head motions temporally preceding eye movements (rather than coordination with eye movements temporally preceding head motions) has been definitively accepted as the main coordination of head and eyes in goal-directed human activities [6,7,8,9] and principally contributes to goal-directed modulation during sensorimotor tasks [1,2,17,18,19]. In addition, the point here is that the directional coordination of head and eye movements itself does possess information about the performer’s attentional and cognitive state, affecting task performance [7,8,9,21,22].

2.3. Complexity Measures for Visual Scanning

This subsection reviews the complexity measures based on information entropy that have been used to assess visual scanning efficiency.
The entropy rate can be obtained by multiplying the sum of inverse transition durations by the normalized entropy of the fixation sequence [23]. The entropy of fixation sequence (EoFS) is the Shannon entropy of the probability distribution of fixation sequences [24]. Gaze transition entropy (GTE) [25] is defined as a conditional entropy based on the transition probabilities between Markov states (namely, the areas of interest (AOIs)). Stationary gaze entropy (SGE) [25] gives the Shannon entropy of the equilibrium distribution of the Markov states. A recent technique, time-based gaze transition entropy (TGTE) [9], applies the idea of GTE within time bins to handle dynamically changing visual stimuli.

3. The Proposed Methodology for New Measures

3.1. A Unidirectional Information Difference (UID)

As discussed in Section 1, head–eye coordination can be quantified via a unidirectional information difference. Following (1), the transfer entropy from head motion $Y$ to eye movement $X$, $TE_{Y \to X}$, is defined as:
$$TE_{Y \to X} = H(X_t \mid X_{t-1}) - H(X_t \mid X_{t-1}, Y_{t-1}) = \sum_{x_t, x_{t-1}, y_{t-1}} p(x_t, x_{t-1}, y_{t-1}) \log_2 \frac{p(x_t \mid x_{t-1}, y_{t-1})}{p(x_t \mid x_{t-1})}. \tag{2}$$
Note that in this paper, the history lengths of $X$ and $Y$ are both taken as 1, as is usual in the literature [11]. Other possible choices of history length are outside the scope of this paper but will be considered in the near future. Here, $p(\cdot)$ and $p(\cdot \mid \cdot)$ denote the (conditional) probability distributions of gaze ($x_t$) and head ($y_t$) data. Similarly, $TE_{X \to Y}$, the transfer entropy from eye movement to head motion, is given as follows:
$$TE_{X \to Y} = H(Y_t \mid Y_{t-1}) - H(Y_t \mid Y_{t-1}, X_{t-1}) = \sum_{y_t, y_{t-1}, x_{t-1}} p(y_t, y_{t-1}, x_{t-1}) \log_2 \frac{p(y_t \mid y_{t-1}, x_{t-1})}{p(y_t \mid y_{t-1})}. \tag{3}$$
Notice that the more predictivity the past of the head ($Y$) adds to the current eye state ($X$), the larger $TE_{Y \to X}$ is. Analogously, the more predictivity the past of the eyes ($X$) adds to the current head state ($Y$), the larger $TE_{X \to Y}$ is. On this basis, the unidirectional information difference from head motion $Y$ to eye movement $X$ can be defined as $TE_{Y \to X}$ minus $TE_{X \to Y}$:
$$UID_{Y \to X} = TE_{Y \to X} - TE_{X \to Y}. \tag{4}$$
It is easy to see that $UID_{Y \to X}$ provides a means of identifying Wiener–Granger causality [14]. When $UID_{Y \to X} > 0$, the causal relationship runs from the head to the eyes, and the head–eye coordination presents as “head motion earlier than eye movement”. When $UID_{Y \to X} < 0$, the causal relationship runs from the eyes to the head, and the head–eye coordination presents as “eye movement earlier than head motion”. $UID_{Y \to X} = 0$ (in practice, $UID_{Y \to X}$ approaching zero) means that the causality between the eyes and the head is unclear and the head–eye coordination behaves ambiguously. In addition, the reason for using $TE_{Y \to X}$ minus $TE_{X \to Y}$ instead of the reverse is that we found $TE_{Y \to X}$ to be statistically significant but $TE_{X \to Y}$ not, with the value of $TE_{Y \to X}$ larger than that of $TE_{X \to Y}$ (see details in Section 5.2).
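As a usage sketch under the same illustrative assumptions (hypothetical variable names; `transfer_entropy` is the estimator sketched in Section 2.1):

```python
# eye_yaw and head_yaw: hypothetical yaw time series (X and Y) of one trial.
te_head_to_eye = transfer_entropy(source=head_yaw, target=eye_yaw)  # TE_{Y->X}, Eq. (2)
te_eye_to_head = transfer_entropy(source=eye_yaw, target=head_yaw)  # TE_{X->Y}, Eq. (3)
uid_head_to_eye = te_head_to_eye - te_eye_to_head                   # UID_{Y->X}, Eq. (4)
```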

3.1.1. Significance Test

Measurement variance and estimation bias commonly arise when estimating transfer entropy [11]. Here, we take a hypothesis testing approach [26] to combat this problem.
The standard statistical technique of hypothesis testing [11,26,27], widely used for handling time series data, is performed to determine whether a valid $UID_{Y \to X}$ exists with a high confidence level. The null hypothesis $H_0$ is that $UID_{Y \to X}$ is small enough, meaning that $X$ and $Y$ do not influence each other; $H_1$ supports a unidirectional cause–effect relationship between $X$ and $Y$. To verify or reject $H_0$, surrogate time series $X_i^S$ and $Y_i^S$ ($i = 1, \ldots, N_S$) of the original $X$ and $Y$, respectively, are used. For surrogate generation, random shuffling, which is simple yet effective, is utilized, because in this paper, the history lengths of $X$ and $Y$ are both taken as 1, as is usual in the practical definition and computation of transfer entropy [11]. The unidirectional information difference from $Y_i^S$ to $X_i^S$, following (4), is obtained as follows:
$$UID_{Y_i^S \to X_i^S} = TE_{Y_i^S \to X_i^S} - TE_{X_i^S \to Y_i^S}. \tag{5}$$
The significance level of $UID_{Y \to X}$ is defined as:
$$\lambda_{Y \to X} = \frac{UID_{Y \to X} - \mu_{Y_i^S \to X_i^S}}{\sigma_{Y_i^S \to X_i^S}}, \tag{6}$$
where $\mu_{Y_i^S \to X_i^S}$ and $\sigma_{Y_i^S \to X_i^S}$ are the mean and standard deviation of the $UID_{Y_i^S \to X_i^S}$ values, respectively. The probability of rejecting $H_0$ can be bounded using Chebyshev's inequality, as follows:
$$P\left(\left|UID_{Y \to X} - \mu_{Y_i^S \to X_i^S}\right| \geq k \, \sigma_{Y_i^S \to X_i^S}\right) \leq \frac{1}{k^2} = \alpha, \tag{7}$$
where $1 - \alpha$ is the confidence level of rejecting $H_0$ (and of accepting $H_1$) and the parameter $k$ is any positive real number. The number of surrogates, which is related to the confidence level, is obtained as:
$$N_S = \frac{2}{\alpha} - 1 \tag{8}$$
for a two-sided test.
In this paper, the parameter $k$ used in (7) is taken as 6, resulting in a confidence level of 97.3%, which is a high requirement satisfied in practice [27]. That is, if the significance level is larger than 6 ($\lambda_{Y \to X} > 6$), then, equivalently, with a confidence level of more than 97.3%, there exists a unidirectional head–eye information flow from head motion to eye movement (note this technique is called 6-Sigma [26]; other techniques based on a p-value approach to statistical significance testing [28] could be attempted in the future). In fact, according to statistical test theory [27], the minimum confidence level acceptable in practice is 95.0% (the corresponding significance level is 4.47). Notice that the significance and confidence levels play the same role in hypothesis testing.
The $UID_{Y \to X}$ and $UID_{Y_i^S \to X_i^S}$ computations, highlighted in red and blue boxes, respectively, are illustrated in Figure 1. The example values of $UID_{Y \to X}$ and $UID_{Y_i^S \to X_i^S}$ based on the gaze and head data of participant 5 in Trial 3 of our psychophysical studies are also presented (see all the results relevant to the unidirectional information difference in Section 5.3). Clearly, there is a large difference between $UID_{Y \to X} = 0.068$ (with a very high confidence level of 99.3% and a very large significance level $\lambda$ of 12.53) and the surrogate values $UID_{Y_i^S \to X_i^S}$ ($\mu_{Y_i^S \to X_i^S} = 0.001$, $\sigma_{Y_i^S \to X_i^S} = 0.005$, $i = 1, \ldots, N_S$). For the driving activity of participant 5 in Trial 3, there appears a significant unidirectional head–eye information difference from head motion to eye movement in a goal-directed sensorimotor task.
It is noticed that the significance test for the computation of the unidirectional information difference described here is standard and general enough to be employed as well for checking the statistical significance of the transfer entropy, as shown in Section 5.2.
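The surrogate-based test can be sketched as follows (illustrative Python, not the authors' code; `transfer_entropy` is the estimator sketched in Section 2.1, and the seeded random generator is an assumption):

```python
import numpy as np

def uid_significance(eye_yaw, head_yaw, k=6, rng=None):
    """Surrogate test for UID_{Y->X} (Section 3.1.1). Random shuffling
    destroys any directed head-eye coupling while preserving the value
    distribution; lambda measures how far the observed UID lies from the
    surrogate mean, in units of the surrogate standard deviation."""
    rng = rng or np.random.default_rng(0)
    uid = (transfer_entropy(head_yaw, eye_yaw)
           - transfer_entropy(eye_yaw, head_yaw))

    alpha = 1.0 / k ** 2                 # Chebyshev bound of Eq. (7)
    n_s = int(np.ceil(2.0 / alpha - 1))  # number of surrogates, Eq. (8)
    surrogate_uids = []
    for _ in range(n_s):
        x_s = rng.permutation(eye_yaw)   # shuffled surrogate of X
        y_s = rng.permutation(head_yaw)  # shuffled surrogate of Y
        surrogate_uids.append(transfer_entropy(y_s, x_s)
                              - transfer_entropy(x_s, y_s))

    lam = (uid - np.mean(surrogate_uids)) / np.std(surrogate_uids)
    return uid, lam  # reject H0 when lam exceeds the chosen k
```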

3.2. A Normalized Unidirectional Information Difference (NUID)

In a goal-directed driving scenario, head–eye coordination corresponds to the state of visual scanning and visual-cognitive processing (correspondingly, the attentional states of drivers) [6,7,8,9], and meanwhile, this state signifies the performance of sensorimotor tasks [21,22]. As discussed in Section 3.1, the unidirectional information difference from head motion to eye movement in effect gives a quantitative estimation of the head–eye coordination. Therefore, we hypothesize that the unidirectional information difference should work well as a proxy of the driving performance. This hypothesis will be verified by using the correlation analysis technique, which is a classic and popular tool for investigating the relationship between variables [29].
Because a proxy indicator of driving performance should yield an objective, quantitative score that allows performances to be compared, we propose a normalized unidirectional information difference from head motion to eye movement, $NUID_{Y \to X}$, that is quantitatively compatible with driving performance, as follows:
$$NUID_{Y \to X} = NTE_{Y \to X} - NTE_{X \to Y}, \tag{9}$$
where
$$NTE_{Y \to X} = \frac{TE_{Y \to X} - \mu_{Y^S \to X}}{H(X_t \mid X_{t-1})} \tag{10}$$
is a normalized transfer entropy, whose definition is effective and popularly used [11]. Here, $\mu_{Y^S \to X}$ is the mean of the transfer entropies $TE_{Y_i^S \to X}$ ($i = 1, \ldots, N_S$) from surrogate head motion to original eye movement, and the conditional entropy $H(X_t \mid X_{t-1})$ is the maximum of $TE_{Y \to X}$. $NTE_{X \to Y}$ is obtained similarly:
$$NTE_{X \to Y} = \frac{TE_{X \to Y} - \mu_{X^S \to Y}}{H(Y_t \mid Y_{t-1})}. \tag{11}$$
Note other normalization methods for transfer entropy and for the unidirectional information difference could be performed in future work [30,31].
Through normalization, both $NTE_{Y \to X}$ and $NTE_{X \to Y}$ are constrained to the range between −0.5 and 0.5. Consequently, the range of $NUID_{Y \to X}$ is from −1 to 1. $NUID_{Y \to X}$ differs from $UID_{Y \to X}$ in the meaning of its zero value: for $NUID_{Y \to X}$, the zero value no longer marks the boundary for assessing the direction of causality or the specific type of head–eye coordination. In the meantime, $NUID_{Y \to X}$ retains an important property: the larger $NUID_{Y \to X}$ is, the stronger the tendency toward “head motion earlier than eye movement”, while a smaller $NUID_{Y \to X}$ suggests a tendency toward “eye movement earlier than head motion”. It is this property that leads us to choose $NUID_{Y \to X}$ for calculating the correlation with driving performance.
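A sketch of the normalization, under the same illustrative assumptions as the earlier snippets (the helper names, the hypothetical `eye_yaw`/`head_yaw` series and the 71 surrogates, corresponding to $k = 6$ in Eq. (8), are ours):

```python
import numpy as np
from collections import Counter

def conditional_entropy(series, bins=8):
    """Plug-in estimate of H(S_t | S_{t-1}) for a discretized series,
    the maximum attainable transfer entropy into that series."""
    s = np.digitize(series, np.histogram_bin_edges(series, bins=bins))
    pairs = Counter(zip(s[1:], s[:-1]))
    singles = Counter(s[:-1])
    n = len(s) - 1
    return -sum((c / n) * np.log2(c / singles[prev])
                for (_, prev), c in pairs.items())

def normalized_te(source, target, n_surrogates=71, rng=None):
    """NTE = (TE - mean TE from shuffled source) / H(target_t | target_{t-1}),
    as in Eqs. (10) and (11)."""
    rng = rng or np.random.default_rng(0)
    te = transfer_entropy(source, target)
    mu = np.mean([transfer_entropy(rng.permutation(source), target)
                  for _ in range(n_surrogates)])
    return (te - mu) / conditional_entropy(target)

# NUID_{Y->X}, Eq. (9): difference of the two normalized transfer entropies.
nuid = normalized_te(head_yaw, eye_yaw) - normalized_te(eye_yaw, head_yaw)
```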

4. Experiment

4.1. Virtual Reality Environment and Task

Driving, which is commonly considered a goal-directed activity [17,18], is taken as the sensorimotor task in our psychophysical experiments. Due to its repeatability, high safety and good performance, the (head-worn) virtual reality technique has become a popular paradigm for studying gaze shifts in sensorimotor tasks [19,32,33,34]. Therefore, our study is performed using head-worn virtual reality.
In this paper, the virtual environment for the psychophysical studies utilizes a four-lane, two-way suburban road consisting of straight sections, curves (4 left bends and 4 right bends with mean radii of curvature of 30 m) and 4 intersections, with common trees and buildings. In order to focus on goal-directed activity in sensorimotor driving and on quantitatively investigating the specific head–eye coordination (with head motions temporally preceding eye movements), irrelevant visual distractors, such as the sudden appearance of a running animal, are not included; such distractors are considered to be ignored in the performance of goal-directed tasks [35], and this topic has been well understood in the research area [36].
In our study, a single driving task, which is to smoothly maintain the driving speed at 40 km/h, is used. The inverse of the average acceleration during driving is taken as the indicator of driving performance, as popularly performed in the literature [37]. That is, the larger the average acceleration is, the worse the driving performance becomes, and vice versa.
Example illustrations of the virtual environment and of performing a driving task are presented in Figure 2.

4.2. Apparatus

The psychophysical experiments in this paper are conducted in a virtual reality environment displayed via an HTC Vive headset [38], with a 7INVENSUN aGlass DKII eye tracker [39] embedded in the headset. An illustration of the headset with the embedded eye tracker is given in Figure 2. Eye rotation and head motion (head rotation) data are recorded at a frequency of 90 Hz via the eye-tracking equipment (gaze position accuracy of 0.5°) and via the headset, respectively, both captured as pitch and yaw (as usually conducted in the relevant field [7]). Virtual driving is performed using a Logitech G29 steering wheel [40]. A desktop monitor displays the captured data and driving activities of participants during the experiment.

4.3. Participants

Twelve people participated in the psychophysical study. Each participant took part in four independent test sessions to provide a large enough sample size for our study (see details in Section 4.4). These participants, with normal color vision and normal or corrected-to-normal visual acuity, were recruited from students at one of the authors' universities (7 male, 5 female; ages 22.9 ± 1.95 years). All of the participants had held their driver licenses for no less than one and a half years. None of the participants had any adverse reactions to the virtual environment utilized in this study. All participants provided written consent and were compensated with payment. This study was approved by the Ethics Committee of one of the authors' universities under the title “Eye tracking based Quantitative Behavior Analysis in Virtual Driving”.

4.4. Procedure

Each participant finished four test sessions, with an interval of one week between every two consecutive tests, based on the same task requirements and driving routes. In this study, a test session is referred to as a trial. In total, 12 × 4 = 48 valid trials were accomplished in the psychophysical experiments. Although this number of trials satisfies the large-sample condition in classical statistics [41], a larger sample size could be utilized in the near future to make our proposed measures more applicable to practical behaviometrics applications.
Before each test, the purpose and procedure of the psychophysical studies were introduced to the participants. For the sake of high-quality data recordings, (a) all participants completed a 9-point calibration procedure prior to the experiments; (b) the headset was adjusted and fastened to the participant's head; (c) the sight and eye cameras were adjusted to prevent hair and eyelashes from obscuring them and (d) the seat was adjusted to a comfortable position in front of the steering wheel.
Each test session began with a 3-min familiarization period. Then, for a 3-min driving session, participants were instructed to comply with the driving rules: driving smoothly at a speed of 40 km/h and following the formulated routes (trying to stay close to the center line).

5. Results and Discussion

5.1. Temporal Sequences of Head Motion and Eye Movement Data

Example data for head motion and eye movement are plotted as a function of time in Figure 3. As is usual in the study of the coordination of head motion and eye movement [7,9], the eye and head rotation data in yaw are utilized in this paper. It is obvious that head motion and eye movement are both always present during driving. Furthermore, the synchronized registration of the local extreme values of the head motion and eye movement data indicates, to a certain extent, an overall correspondence between the two kinds of data, clearly showing that the coordination of head and eyes does exist. We introduce an evaluation measure for the amount of coordination of head and eyes ($CoordAmount$), inspired by the widely used measure $PSNR$ in the field of signal processing [42], as follows:
$$CoordAmount = 10 \times \log_{10} \frac{ScaleFactor^2}{Diff}, \tag{12}$$
where
$$Diff = \frac{1}{num} \sum_{t=1}^{num} (y_t - x_t)^2 \tag{13}$$
is the mean square difference between the gaze and head rotation data ($num$ is the number of time units $t$ considered). $ScaleFactor = 360$ is the maximal absolute difference between any pair of gaze and head data. $CoordAmount$ quantifies the quality of the match between the two rotation data streams according to their values and shows the synergy of both streams, providing a normalized measurement of the amount of synergistic coordination of head and eyes. The greater the amount of coordination, the higher $CoordAmount$ becomes, and vice versa. The $CoordAmount$ values, which are 31.45 dB and 33.09 dB for participants 1 (fourth trial) and 5 (third trial), respectively (Figure 3), are relatively high, and this verifies the existence of the coordination of head and eyes. Note that these two amounts of coordination are close.
However, because its definition ignores temporal ordering, $CoordAmount$ can only determine the presence of head–eye coordination; it cannot ascertain whether the coordination occurs with head motion preceding eye movement or vice versa. For example, the $CoordAmount$ results indicate that participants 1 (fourth trial) and 5 (third trial) exhibited coordination between head and eye movements during driving; however, only through a detailed analysis can we determine whether the head moves first or the eyes move first. In Figure 3, we have highlighted two specific instances of “head motion earlier than eye movement” behavior using boxes. In the box of the upper row, the head yaw starts to consistently increase from its local minimum earlier than the eye yaw, and in the box of the lower row, the head yaw starts to consistently decrease from its local maximum earlier than the eye yaw. In addition, the corresponding performance values are relatively diverse, 0.34 and 0.42 for the two participants, respectively; the latter is 1.24 times as large as the former. Therefore, relying solely on $CoordAmount$ to characterize head–eye coordination is insufficient. We need to ascertain whether the head moves before the eyes or vice versa, which pattern dominates the entire driving process and how these relate to driving performance. These are all questions worthy of our attention.
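The measure itself is one line of arithmetic; a minimal sketch (our illustration, with hypothetical argument names):

```python
import numpy as np

def coord_amount(head_yaw, eye_yaw, scale_factor=360.0):
    """PSNR-style amount of head-eye coordination, Eqs. (12)-(13):
    10 * log10(ScaleFactor^2 / mean squared yaw difference), in dB."""
    diff = np.mean((np.asarray(head_yaw) - np.asarray(eye_yaw)) ** 2)
    return 10.0 * np.log10(scale_factor ** 2 / diff)
```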

5.2. Transfer Entropies between Head Motion and Eye Movement

We first determine whether “head motion earlier than eye movement” and “eye movement earlier than head motion” exist. Detecting either pattern amounts to analyzing whether the previous moment's head motion (or eye movement) has a notable influence on the current moment's eye movement (or head motion). Therefore, we opt to use transfer entropy to characterize this process, as discussed in Section 3.1. The larger $TE_{Y \to X}$ is, the more predictivity the past of $Y$ adds to the current $X$. Therefore, if $TE_{Y \to X}$ is statistically significant, it indicates that during driving, head motion has conveyed a substantial amount of information to eye movement, providing evidence for the existence of the “head motion earlier than eye movement” coordination. The same applies to $TE_{X \to Y}$.
All the values of the transfer entropies are listed in Table 1. A significant difference between the two transfer entropies $TE_{Y \to X}$ and $TE_{X \to Y}$ is revealed via a one-way analysis of variance (ANOVA) ($F(1, 94) = 80.25$, $p < 0.05$), as illustrated in Figure 4. The transfer entropy in the direction from head motion to eye movement, $TE_{Y \to X}$, is much larger than that in the reverse direction, $TE_{X \to Y}$, with averages of $3.8 \times 10^{-2}$ and $1.9 \times 10^{-2}$, respectively. That is, $TE_{Y \to X}$ is twice as large as $TE_{X \to Y}$ for the experimental data in this paper. Further, statistical significance testing, entirely analogous to that described in Section 3.1.1, is used to check the statistical confidence levels of $TE_{Y \to X}$ and $TE_{X \to Y}$ separately. The significance and confidence levels for $TE_{Y \to X}$ are 4.49 and 95.0%, respectively. In contrast, the two corresponding values for $TE_{X \to Y}$ are 1.46 and 53.4%, respectively. This means that $TE_{Y \to X}$ is statistically acceptable at the 5% significance level while $TE_{X \to Y}$ is not. As previously mentioned, in goal-directed tasks, “head motion earlier than eye movement” is the primary pattern of head–eye coordination [5]. The result, where $TE_{Y \to X}$ is significant and $TE_{X \to Y}$ is not, validates our idea of using transfer entropy to measure the existence of head–eye coordination patterns.
Furthermore, the lack of statistical significance in $TE_{X \to Y}$ does not necessarily imply the absence of eye movement followed by head motion throughout the entire driving process. Rather, it signifies that the influence of eye movements on head motions during driving is minimal. In such cases, we conclude that there is no significant head–eye coordination of the “eye movement earlier than head motion” type during the driving process.
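The reported ANOVA can be reproduced along the following lines (an illustrative SciPy call, not the authors' code; the array names are hypothetical):

```python
from scipy import stats

# te_y_to_x and te_x_to_y: the 48 per-trial transfer entropies of Table 1;
# two groups of 48 observations give the reported degrees of freedom F(1, 94).
f_stat, p_value = stats.f_oneway(te_y_to_x, te_x_to_y)
```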

5.3. The Unidirectional Information Difference between Head Motion and Eye Movement

This section investigates the primary pattern of head–eye coordination during driving: whether it is “head motion earlier than eye movement” or “eye movement earlier than head motion”. We employ an approach commonly used in Wiener–Granger causality analysis, which involves calculating the difference in information transfer between the two directions. We observe that $TE_{Y \to X}$ is greater than $TE_{X \to Y}$ and statistically significant at the 5% significance level, whereas $TE_{X \to Y}$ is not statistically significant. Therefore, we conclude that “head motion earlier than eye movement” predominates during driving, while the apparent “eye movement earlier than head motion” is attributable to data variability. Accordingly, we choose $TE_{Y \to X}$ as the minuend and $TE_{X \to Y}$ as the subtrahend when calculating the information difference. This choice ensures a positive information difference and enhances the interpretability of its underlying meaning.
The unidirectional head–eye information difference $UID_{Y \to X}$ results (provided in Table 2) are obtained with high significance levels ($\lambda_{Y \to X}$), which are presented in Table 3. Almost all the $\lambda_{Y \to X}$ values are larger than 6; that is, the corresponding confidence levels are more than 97.3%. There are only two exceptions, 5.68 and 5.48, marked with asterisks in Table 3, which are slightly lower than 6. Even for these, the corresponding confidence levels are 96.9% and 96.7%, respectively, which is acceptable in statistics for practical use [43]. The strictly positive $UID_{Y \to X}$ values (Table 2) reveal that there indeed exists a unidirectional information difference from head motion to eye movement (with high confidence) in the procedure of performing goal-directed sensorimotor tasks.

5.4. The Normalized Unidirectional Information Difference between Head Motion and Eye Movement

Now, we aim to quantitatively characterize the relationship between head–eye coordination and driving performance. We utilize the inverse of the average acceleration (denoted by $1/AvgAcc$) as the measure of driving performance. However, the correlation between $UID_{Y \to X}$ and driving performance was not statistically significant. Therefore, we improved $UID_{Y \to X}$ through normalization to obtain the normalized unidirectional information difference ($NUID_{Y \to X}$). Although $NUID_{Y \to X}$ alters the value range and the meaning of the zero value, it retains an important practical property: a higher $NUID_{Y \to X}$ indicates a stronger tendency toward “head motion earlier than eye movement” and vice versa.
The results of the normalized head–eye unidirectional information difference ($NUID_{Y \to X}$) and the corresponding driving performance (the inverse of the average acceleration, $1/AvgAcc$) are listed in Table 4. As a concrete instance, depicted in Figure 3 together with the corresponding descriptions in Section 5.1, the two very different $NUID_{Y \to X}$ values obtained by participants 1 and 5 (in the fourth and third trials) are $-0.18 \times 10^{-2}$ and $8.07 \times 10^{-2}$, respectively. The large difference between these two values of $NUID_{Y \to X}$ corresponds closely to the large difference between the head–eye coordination patterns of these two participants and, meanwhile, contrasts sharply with the closeness of the two corresponding $CoordAmount$ values. Importantly, this clearly reveals that the proposed $NUID_{Y \to X}$, which represents the degree of the head–eye coordination pattern, is strongly related to driving activity and performance. More importantly, $NUID_{Y \to X}$ even discriminates between the distinct driving activities of the two participants under consideration (correspondingly, the two relatively diverse driving performance values are 0.34 and 0.42, respectively, with the latter 1.24 times as large as the former). In fact, a significant correlation ($p < 0.05$) between the new normalized information difference and driving performance, based on all the head and gaze data in the 48 trials, is obtained via three correlation analyses [29], with a Pearson linear correlation coefficient (PLCC), Kendall rank order correlation coefficient (KROCC) and Spearman rank order correlation coefficient (SROCC) of 0.32, 0.27 and 0.41, respectively (Table 5). These correlation coefficient values indicate a statistically significant relationship between our proposed normalized information difference and driving performance, as popularly recognized in the literature [44]. By contrast, the measurements using the compared techniques (Table 5) cannot show an acceptable association with the performance of virtual driving ($p > 0.05$).
The statistically significant positive correlation between $NUID_{Y \to X}$ and driving performance may be due to the fact that, as the degree of “head motion earlier than eye movement” increases, the driver's preparation for the gaze shift becomes more adequate, leading to better driving performance. This indicates that our experimental design is effective: $NUID_{Y \to X}$, by measuring the pattern of head–eye coordination, establishes a correlation with driving performance. The mathematical essence of all the transfer entropy-related formulas in this paper is well suited for assessing and quantifying head–eye coordination. Prior to our research, no work had demonstrated a significant correlation between driving performance and a transfer entropy-based measure of head–eye coordination. For comparison, we also calculated the other eye movement indicators mentioned in Section 2.3, as well as PSNR and SSIM [45], methods commonly used in signal processing to analyze the similarity between two signals. We analyzed their relationships with driving performance, as shown in Table 5. Among all methods, only $NUID_{Y \to X}$ showed a significant association with driving performance. Our studies have effectively translated the abstract concept of head–eye coordination into an objective quantity and provided meaningful insight into its influence on driving. Furthermore, we believe our methodology offers a new perspective for digitizing similar abstract concepts.
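The three correlation analyses correspond to standard SciPy routines; a sketch with hypothetical array names:

```python
from scipy import stats

# nuid_values and performance: the 48 paired NUID_{Y->X} and 1/AvgAcc
# values of Table 4.
plcc, p_plcc = stats.pearsonr(nuid_values, performance)      # PLCC
krocc, p_krocc = stats.kendalltau(nuid_values, performance)  # KROCC
srocc, p_srocc = stats.spearmanr(nuid_values, performance)   # SROCC
```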

6. Conclusions and Future Works

In this paper, we designed a “top-down” goal-directed driving experiment based on virtual reality to collect head and eye movement data from drivers. We treated the head motion and eye movement data as two stochastic processes and calculated the transfer entropy from head to eye and from eye to head to determine the presence of the head–eye coordination patterns “head motion earlier than eye movement” and “eye movement earlier than head motion”. We discovered a significant presence of the “head motion earlier than eye movement” coordination among drivers during driving, while there was no clear evidence of the “eye movement earlier than head motion” coordination. By calculating the unidirectional information difference, we established that the “head motion earlier than eye movement” coordination predominates during driving. Without compromising the ability to measure head–eye coordination patterns, we improved the unidirectional information difference through normalization, yielding the normalized unidirectional information difference $NUID_{Y \to X}$. Notably, we found a significant correlation between the normalized unidirectional information difference and driving performance. This discovery validates two key points: firstly, head–eye coordination during driving does impact a driver's performance, and secondly, our approach of quantifying the abstract concept of head–eye coordination using transfer entropy is both feasible and meaningful in practice.
In the future, transfer entropy, the unidirectional information difference and its normalized version can be applied to a broader range of abstract concepts, quantifying them and validating their practical significance. During the resampling process, particularly in the resampling of multivariate time series, surrogate methods that maintain the auto-correlation of the time series [46] could be utilized to further analyze and measure head–eye coordination. Furthermore, as mentioned in this paper, head–eye coordination is not the sole factor influencing driving performance; beyond head–eye coordination, it is essential to identify additional elements that impact driving, allowing for more precise modeling of driver behavior.

Author Contributions

Conceptualization, R.Z. and Q.X.; validation, R.Z.; formal analysis, R.Z.; investigation, R.Z. and S.W.; resources, Q.X.; data curation, R.Z.; writing—original draft preparation, R.Z.; writing—review and editing, Q.X., S.W., S.P. and K.S.; supervision, Q.X.; project administration, Q.X.; funding acquisition, Q.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of China under Grant No. 61471261 and No. 61771335 and was funded by The National Key Research and Development Program of China (Grant No. 2020YFC1807904 and No. 2020YFC1807905).

Data Availability Statement

The data presented in this study are openly available at https://github.com/zhangrlll/unidirectional-causality (accessed on 15 December 2023).

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript or in the decision to publish the results.

References

  1. Hayhoe, M.; Ballard, D. Eye movements in natural behavior. Trends Cogn. Sci. 2005, 9, 188–194. [Google Scholar] [CrossRef] [PubMed]
  2. Henderson, J.M. Gaze control as prediction. Trends Cogn. Sci. 2017, 21, 15–23. [Google Scholar] [CrossRef] [PubMed]
  3. Kapitaniak, B.; Walczak, M.; Kosobudzki, M.; Jóźwiak, Z.; Bortkiewicz, A. Application of eye-tracking in drivers testing: A review of research. Int. J. Occup. Med. Environ. Health 2015, 28, 941–954. [Google Scholar] [CrossRef] [PubMed]
  4. Amini, R.E.; Al Haddad, C.; Batabyal, D.; Gkena, I.; De Vos, B.; Cuenen, A.; Brijs, T.; Antoniou, C. Driver distraction and in-vehicle interventions: A driving simulator study on visual attention and driving performance. Accid. Anal. Prev. 2023, 191, 107195. [Google Scholar] [CrossRef] [PubMed]
  5. Pelz, J.; Hayhoe, M.; Loeber, R. The Coordination of eye, head, and hand movements in a natural task. Exp. Brain Res. 2001, 139, 266–277. [Google Scholar] [CrossRef] [PubMed]
  6. Freedman, E.G. Coordination of the eyes and head during visual orienting. Exp. Brain Res. 2008, 190, 369. [Google Scholar] [CrossRef] [PubMed]
  7. Doshi, A.; Trivedi, M.M. Head and eye gaze dynamics during visual attention shifts in complex environments. J. Vis. 2012, 12, 189–190. [Google Scholar] [CrossRef]
  8. Fang, Y.; Nakashima, R.; Matsumiya, K.; Kuriki, I.; Shioiri, S. Eye-head coordination for visual cognitive processing. PLoS ONE 2015, 10, e0121035. [Google Scholar] [CrossRef]
  9. Mikula, L.; Mejia-Romero, S.; Chaumillon, R.; Patoine, A.; Lugo, E.; Bernardin, D.; Faubert, J. Eye-head coordination and dynamic visual scanning as indicators of visuo-cognitive demands in driving simulator. PLoS ONE 2020, 15, e0240201. [Google Scholar] [CrossRef]
  10. Morasso, P.; Sandini, G.; Tagliasco, V.; Zaccaria, R. Control strategies in the eye-head coordination system. IEEE Trans. Syst. Man Cybern. 1977, 7, 639–651. [Google Scholar] [CrossRef]
  11. Bossomaier, T.; Barnett, L.; Lizier, J.T. An Introduction to Transfer Entropy: Information Flow in Complex Systems; Springer International Publishing: Cham, Switzerland, 2016. [Google Scholar]
  12. Weiss, R.S.; Remington, R.; Ellis, S.R. Sampling distributions of the entropy in visual scanning. Behav. Res. Methods Instrum. Comput. 1989, 21, 348–352. [Google Scholar] [CrossRef]
  13. Schreiber, T. Measuring information transfer. Phys. Rev. Lett. 2000, 85, 461–464. [Google Scholar]
  14. Granger, C.W. Some recent development in a concept of causality. J. Econom. 1988, 39, 199–211. [Google Scholar] [CrossRef]
  15. Pfeil, K.; Taranta, E.M.; Kulshreshth, A.; Wisniewski, P.; LaViola, J.J. A Comparison of Eye-Head Coordination between Virtual and Physical Realities. In Proceedings of the 15th ACM Symposium on Applied Perception, SAP ’18, New York, NY, USA, 10–11 August 2018. [Google Scholar] [CrossRef]
  16. Nakashima, R.; Shioiri, S. Why Do We Move Our Head to Look at an Object in Our Peripheral Region? Lateral Viewing Interferes with Attentive Search. PLoS ONE 2014, 9, e92284. [Google Scholar] [CrossRef] [PubMed]
  17. Land, M.F. Predictable eye-head coordination during driving. Nature 1992, 359, 318–320. [Google Scholar] [CrossRef] [PubMed]
  18. Lappi, O. Eye movements in the wild: Oculomotor control, gaze behavior & frames of reference. Neurosci. Biobehav. Rev. 2016, 69, 49–68. [Google Scholar] [PubMed]
  19. Sidenmark, L.; Gellersen, H. Eye, head and torso coordination during gaze shifts in virtual reality. ACM Trans. Comput. Hum. Interact. 2019, 27, 1–40. [Google Scholar] [CrossRef]
  20. Land, M.F. Eye movements and the control of actions in everyday life. Prog. Retin. Eye Res. 2006, 25, 296–324. [Google Scholar] [CrossRef]
  21. Tong, M.H.; Zohar, O.; Hayhoe, M.M. Control of gaze while walking: Task structure, reward, and uncertainty. J. Vis. 2017, 17, 28. [Google Scholar] [CrossRef]
  22. Hansen, J.H.L.; Busso, C.; Zheng, Y.; Sathyanarayana, A. Driver modeling for detection and assessment of driver distraction: Examples from the UTDrive test bed. IEEE Signal Process. Mag. 2017, 34, 130–142. [Google Scholar] [CrossRef]
  23. Itoh, Y.; Hayashi, Y.; Tsukui, I.; Saito, S. The ergonomic evaluation of eye movement and mental workload in aircraft pilots. Ergonomics 1990, 33, 719–732. [Google Scholar] [CrossRef] [PubMed]
  24. Chanijani, S.S.M.; Klein, P.; Bukhari, S.S.; Kuhn, J.; Dengel, A. Entropy based transition analysis of eye movement on physics representational competence. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, Heidelberg, Germany, 12–16 September 2016; pp. 1027–1034. [Google Scholar]
  25. Shiferaw, B.; Downey, L.; Crewther, D. A review of gaze entropy as a measure of visual scanning efficiency. Neurosci. Biobehav. Rev. 2019, 96, 353–366. [Google Scholar] [CrossRef] [PubMed]
  26. Mendenhall, W.; Beaver, R.J.; Beaver, B.M. Introduction to Probability and Statistics; Cengage Learning: Boston, MA, USA, 2012. [Google Scholar]
  27. Schreiber, T.; Schmitz, A. Surrogate time series. Phys. D Nonlinear Phenom. 1999, 142, 346–382. [Google Scholar] [CrossRef]
  28. Knijnenburg, T.A.; Wessels, L.F.A.; Reinders, M.J.T.; Shmulevich, I. Fewer permutations, more accurate p-values. Bioinformatics 2009, 25, 161–168. [Google Scholar] [CrossRef] [PubMed]
  29. Wikipedia. Correlation and Dependence. 2021. Available online: https://en.wikipedia.org/wiki/Correlation_and_dependence (accessed on 15 December 2023).
  30. Marschinski, R.; Kantz, H. Analysing the information flow between financial time series. Phys. Condens. Matter 2002, 30, 275–281. [Google Scholar] [CrossRef]
  31. Mao, X.; Shang, P. Transfer entropy between multivariate time series. Commun. Nonlinear Sci. Numer. Simul. 2016, 47, 338–347. [Google Scholar] [CrossRef]
  32. Borojeni, S.S.; Chuang, L.; Heuten, W.; Boll, S. Assisting Drivers with Ambient Take-Over Requests in Highly Automated Driving. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Automotive’UI 16, New York, NY, USA, 24–26 October 2016; pp. 237–244. [Google Scholar] [CrossRef]
  33. Lv, Z.; Xu, Q.; Schoeffmann, K.; Parkinson, S. A Jensen-Shannon Divergence Driven Metric of Visual Scanning Efficiency Indicates Performance of Virtual Driving. In Proceedings of the 2021 IEEE International Conference on Multimedia and Expo (ICME), Shenzhen, China, 5–9 July 2021; pp. 1–6. [Google Scholar] [CrossRef]
  34. Plopski, A.; Hirzle, T.; Norouzi, N.; Qian, L.; Bruder, G.; Langlotz, T. The Eye in Extended Reality: A Survey on Gaze Interaction and Eye Tracking in Head-Worn Extended Reality. ACM Comput. Surv. 2022, 55, 1–39. [Google Scholar] [CrossRef]
  35. Brams, S.; Ziv, G.; Levin, O.; Spitz, J.; Wagemans, J.; Williams, A.; Helsen, W. The relationship between gaze behavior, expertise, and performance: A systematic review. Psychol. Bull. 2019, 145, 980–1027. [Google Scholar] [CrossRef]
  36. Green, M. “How long does it take to stop?” Methodological analysis of driver perception-brake times. Trans. Hum. Factors 2000, 2, 195–216. [Google Scholar] [CrossRef]
  37. Yadav, A.K.; Velaga, N.R. Effect of alcohol use on accelerating and braking behaviors of drivers. Traffic Inj. Prev. 2019, 20, 353–358. [Google Scholar] [CrossRef]
  38. HTC. HTC Vive. 2021. Available online: https://www.htcvive.com (accessed on 15 December 2023).
  39. 7INVENSUN. 7INVENSUN Instrument aGlass. 2021. Available online: https://www.7invensun.com (accessed on 15 December 2023).
  40. Logitech. Logitech G29. 2021. Available online: https://www.logitechg.com/en-us/products/driving/driving-force-racing-wheel.html (accessed on 15 December 2023).
  41. Lehmann, E.L. Elements of Large-Sample Theory; Springer: Berlin/Heidelberg, Germany, 1999. [Google Scholar]
  42. Gonzalez, R.C. Digital Image Processing; Pearson Education India: Chennai, India, 2009. [Google Scholar]
  43. Theiler, J.; Eubank, S.; Longtin, A.; Galdrikian, B.; Farmer, J.D. Testing for nonlinearity in time series: The method of surrogate data. Phys. D Nonlinear Phenom. 1992, 58, 77–94. [Google Scholar] [CrossRef]
  44. Cohen, J. Statistical Power Analysis for the Behavioral Sciences; Routledge: London, UK, 2013. [Google Scholar]
  45. Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar]
  46. Jentsch, C.; Politis, D.N. Covariance matrix estimation and linear process bootstrap for multivariate time series of possibly increasing dimension. Ann. Stat. 2015, 43, 1117–1140. [Google Scholar] [CrossRef]
Figure 1. An illustration scheme for the computation of the unidirectional information difference $UID_{Y \to X}$.
Figure 2. In the virtual environment (bottom), with an HTC Vive headset and a 7INVENSUN Instrument aGlass DKII eye tracker (top left), a participant performs the driving task (top right).
Figure 3. Examples of head and eye rotation data from the fourth and third trials of participants 1 and 5 (upper and lower rows, respectively).
Figure 4. Transfer entropies between head motion and eye movement.
Table 1. Values of transfer entropies $TE_{X \to Y}$ and $TE_{Y \to X}$ ($\times 10^{-2}$).

| Participant | Trial 1 X→Y | Trial 1 Y→X | Trial 2 X→Y | Trial 2 Y→X | Trial 3 X→Y | Trial 3 Y→X | Trial 4 X→Y | Trial 4 Y→X |
|---|---|---|---|---|---|---|---|---|
| 1 | 2.19 | 2.98 | 1.99 | 2.77 | 1.68 | 1.85 | 1.75 | 2.99 |
| 2 | 1.82 | 3.03 | 2.60 | 6.93 | 1.64 | 3.52 | 2.00 | 4.76 |
| 3 | 2.25 | 3.68 | 1.97 | 2.58 | 2.42 | 2.72 | 2.66 | 4.48 |
| 4 | 0.79 | 1.97 | 1.25 | 2.23 | 1.31 | 3.16 | 1.17 | 3.86 |
| 5 | 1.82 | 4.96 | 1.13 | 5.87 | 0.86 | 7.66 | 1.59 | 3.84 |
| 6 | 1.83 | 5.38 | 4.28 | 8.71 | 2.06 | 5.09 | 1.99 | 3.96 |
| 7 | 1.45 | 4.36 | 2.99 | 6.41 | 2.26 | 6.60 | 1.09 | 4.74 |
| 8 | 1.39 | 2.31 | 1.49 | 3.51 | 3.00 | 6.86 | 2.31 | 3.88 |
| 9 | 1.76 | 5.09 | 2.17 | 6.50 | 1.85 | 5.60 | 2.16 | 3.79 |
| 10 | 1.12 | 1.80 | 1.23 | 4.30 | 1.08 | 2.99 | 1.86 | 4.95 |
| 11 | 2.27 | 5.02 | 1.22 | 2.43 | 2.09 | 3.98 | 1.10 | 2.80 |
| 12 | 1.32 | 3.58 | 2.34 | 7.87 | 2.81 | 8.99 | 2.36 | 9.37 |
Table 2. Values of the unidirectional information difference $UID_{Y \to X}$ ($\times 10^{-2}$).

| Participant | Trial 1 | Trial 2 | Trial 3 | Trial 4 |
|---|---|---|---|---|
| 1 | 0.79 | 0.78 | 0.17 | 1.24 |
| 2 | 1.21 | 4.33 | 1.88 | 2.76 |
| 3 | 1.43 | 0.61 | 0.30 | 1.82 |
| 4 | 1.18 | 0.98 | 1.85 | 2.70 |
| 5 | 3.14 | 4.73 | 6.80 | 2.25 |
| 6 | 3.55 | 4.43 | 3.02 | 1.97 |
| 7 | 2.91 | 3.42 | 4.33 | 3.65 |
| 8 | 0.91 | 2.02 | 3.86 | 1.57 |
| 9 | 3.33 | 4.33 | 3.75 | 1.62 |
| 10 | 0.68 | 3.06 | 1.91 | 3.09 |
| 11 | 2.76 | 1.21 | 1.89 | 1.70 |
| 12 | 2.26 | 5.53 | 6.18 | 7.01 |
Table 3. Significance levels $\lambda_{Y \to X}$ for $UID_{Y \to X}$.

| Participant | Trial 1 | Trial 2 | Trial 3 | Trial 4 |
|---|---|---|---|---|
| 1 | 8.89 | 9.18 | 15.25 | 10.23 |
| 2 | 17.33 | 18.08 | 7.83 | 5.68 * |
| 3 | 17.01 | 5.48 * | 11.20 | 11.64 |
| 4 | 11.94 | 15.03 | 12.25 | 16.67 |
| 5 | 16.02 | 12.18 | 12.53 | 8.02 |
| 6 | 15.86 | 20.18 | 21.13 | 17.15 |
| 7 | 17.28 | 19.36 | 26.34 | 22.89 |
| 8 | 7.86 | 13.92 | 13.58 | 10.74 |
| 9 | 16.40 | 17.84 | 17.37 | 13.94 |
| 10 | 14.32 | 16.86 | 13.14 | 14.47 |
| 11 | 19.35 | 12.71 | 18.87 | 19.43 |
| 12 | 14.67 | 12.52 | 16.69 | 15.06 |

* Values slightly below 6 (see Section 5.3).
Table 4. Values of the normalized unidirectional information difference $NUID_{Y \to X}$ ($\times 10^{-2}$) and driving performance ($1/AvgAcc$, s²/m).

| Participant | Trial 1 NUID | Trial 1 1/AvgAcc | Trial 2 NUID | Trial 2 1/AvgAcc | Trial 3 NUID | Trial 3 1/AvgAcc | Trial 4 NUID | Trial 4 1/AvgAcc |
|---|---|---|---|---|---|---|---|---|
| 1 | −1.75 | 0.37 | −1.13 | 0.43 | −0.35 | 0.42 | −0.18 | 0.34 |
| 2 | 1.23 | 0.71 | 2.77 | 0.64 | 3.11 | 0.69 | 5.05 | 0.89 |
| 3 | 1.66 | 0.45 | 1.65 | 0.41 | −0.80 | 0.42 | −0.28 | 0.40 |
| 4 | 4.85 | 0.72 | 1.50 | 0.56 | 4.45 | 0.56 | 4.84 | 0.52 |
| 5 | 5.95 | 0.44 | 6.02 | 0.48 | 8.07 | 0.42 | 4.47 | 0.55 |
| 6 | 4.18 | 0.25 | −1.42 | 0.29 | 3.00 | 0.30 | 0.88 | 0.30 |
| 7 | 6.28 | 0.56 | 0.60 | 0.52 | 5.17 | 0.64 | 8.20 | 0.57 |
| 8 | −2.54 | 0.35 | 2.15 | 0.46 | 2.58 | 0.40 | 0.91 | 0.40 |
| 9 | 3.40 | 0.62 | 1.25 | 0.44 | −0.99 | 0.40 | −1.04 | 0.47 |
| 10 | 3.21 | 0.44 | 7.97 | 0.51 | 8.45 | 0.36 | 5.93 | 0.50 |
| 11 | 4.76 | 0.45 | 2.41 | 0.42 | 4.12 | 0.44 | 3.15 | 0.38 |
| 12 | 2.39 | 0.64 | 2.69 | 0.59 | 4.35 | 0.55 | 2.93 | 0.71 |
Table 5. Correlation analysis between measures and driving performance. $NUID_{Y \to X}$ (first row) is the indicator proposed in this paper.

| Method | PLCC, p-value | KROCC, p-value | SROCC, p-value |
|---|---|---|---|
| $NUID_{Y \to X}$ | 0.32, p < 0.05 | 0.27, p < 0.05 | 0.41, p < 0.05 |
| TGTE | 0.19, p > 0.05 | 0.19, p > 0.05 | 0.26, p > 0.05 |
| GTE | 0.07, p > 0.05 | 0, p > 0.05 | −0.01, p > 0.05 |
| SGE | −0.07, p > 0.05 | −0.09, p > 0.05 | −0.15, p > 0.05 |
| EoFS | 0.01, p > 0.05 | 0, p > 0.05 | −0.03, p > 0.05 |
| Entropy rate | −0.06, p > 0.05 | −0.01, p > 0.05 | −0.02, p > 0.05 |
| Fixation rate | −0.24, p > 0.05 | −0.17, p > 0.05 | −0.24, p > 0.05 |
| Saccade amplitude | 0.25, p > 0.05 | 0.11, p > 0.05 | 0.09, p > 0.05 |
| PSNR | 0.08, p > 0.05 | 0.11, p > 0.05 | 0.14, p > 0.05 |
| SSIM | −0.17, p > 0.05 | −0.13, p > 0.05 | −0.21, p > 0.05 |

