*Article* **Empowering Advanced Driver-Assistance Systems from Topological Data Analysis**

**Tarek Frahi 1,\*, Francisco Chinesta 1, Antonio Falcó 2, Alberto Badias 3, Elias Cueto 3, Hyung Yun Choi 4, Manyong Han 5 and Jean-Louis Duval 6**


**Abstract:** We are interested in evaluating the state of drivers to determine whether they are attentive to the road or not by using motion sensor data collected from car driving experiments. That is, our goal is to design a predictive model that can estimate the state of drivers given the data collected from motion sensors. For that purpose, we leverage recent developments in topological data analysis (TDA) to analyze and transform the data coming from sensor time series and build a machine learning model based on the topological features extracted with the TDA. We provide some experiments showing that our model proves to be accurate in the identification of the state of the user, predicting whether they are relaxed or tense.

**Keywords:** Morse theory; topological data analysis; machine learning; time series; smart driving

#### **1. Introduction**

While there have recently been considerable advances in self-driving car technology, driving still relies mainly on human factors. Even in self-driving mode, human drivers must often make decisions in a fraction of a second to avoid accidents. Therefore, it is still of utmost importance to develop systems capable of discerning whether the human driver is attentive to the road conditions. In general, the so-called advanced driver-assistance systems (ADAS) [1,2] are systems able to improve the driver's performance; adaptive speed limiters, pedestrian detectors [3], and cruise controllers are among the most popular. Fatigue-alerting systems are among the most useful ADAS, and the aim of this work is to contribute to the development of such a system based on a systematic analysis of drivers in actual driving conditions.

The estimation of the driver's condition (degree of attention to the road, fatigue, etc.) is a very important factor in ensuring safety while driving [4,5]. A recent review on the topic can be found in [6]. The goal of this work is to extract behavior patterns from car user data so as to accurately estimate the driver's state. We used data obtained by the laboratory of Prof. Hyung Yun Choi at Hongik University in Seoul, whose experiment involved applying mechanical stimulation to people seated in an automobile.

**Citation:** Frahi, T.; Chinesta, F.; Falcó, A.; Badias, A.; Cueto, E.; Choi, H.Y.; Han, M.; Duval, J.-L. Empowering Advanced Driver-Assistance Systems from Topological Data Analysis. *Mathematics* **2021**, *9*, 634. https://doi.org/10.3390/math9060634

Academic Editors: Duarte Valério and Mauro Malvè

Received: 31 January 2021; Accepted: 11 March 2021; Published: 16 March 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Our main goal is to extract patterns of behavior from experimental data so as to learn the most relevant factors affecting the driver's attention to the situation of the road. In the present work, we combine tools from Morse theory [7] and topological data analysis (TDA), with all of the associated concepts and methods (e.g., Betti numbers, homology persistence, barcodes, persistence images, etc.) [8], most of which are introduced and employed later in order to analyze and classify the experimental data. This allows us to introduce concepts such as barcodes, that is, persistence and life-time diagrams, in a way similar to how they are used in persistent homology. Our main goal is to predict car user behavior following a supervised approach [9]. Instead of considering the original sensor signal as the quantity of interest, we focus on its topological features. In this sense, the framework proposed in this paper allows us to unveil the true dimensionality of the data or, in other words, the actual number of factors affecting the driver's performance. Thus, we model a sensor signal as a dynamical system, and our approach therefore seems better at describing its properties, or rather its variations, such as extrema, patterns, and self-similarity, than other approaches. We note that our approach is, in some sense, similar to that followed by Milnor and Thurston [10] in the study of the combinatorial properties of dynamical systems by combining tools from automata theory.

The structure of the paper is as follows: In Section 2, we describe the material and methods employed in this work. Particular attention is paid to the process of data acquisition and the description of time series and data curation. In Section 3, we present the main results of this work, and we discuss the main consequences in Section 4. As a complement, in Appendix A, we thoroughly illustrate the process of computing persistence images for the data of interest.

#### **2. Material and Methods**

In this section, we describe the collection and preprocessing of the experimental data. In Section 2.1, we describe the data acquisition, and in Section 2.2, we provide a description of the time series. Section 2.3 is devoted to data preprocessing. The mathematical tools used to describe the time series at a topological level are explained in Section 2.4. Finally, the image classification methodology is given in Section 2.5.

#### *2.1. Data Acquisition*

Our proposed predictor directly uses the data collected from the experiments. The data acquisition process involves measuring the response of human behavior when an excitation is applied to the seat. Figure 1 shows the location of the sensors in the experiments.

**Figure 1.** Scheme of the data acquisition process showing the location of the sensors.

The excitation signal is an angular acceleration imposed on the seat of the user. This acceleration is an oscillating chirp function sweeping a frequency range of 1 to 7.5 Hz in rotation about the X axis. The linear acceleration $\mathbf{a} = (a\_x, a\_y, a\_z)$ and the angular velocity $\boldsymbol{\omega} = (\omega\_x, \omega\_y, \omega\_z)$ were measured on both the head and the seat by two inertial measurement units (Shimmer IMU sensors) at 256 Hz. By observing the floor excitation signals, we noted that the excitation is purely rotational around the X axis—see Figure 2.

**Figure 2.** Floor excitation: X-axis angular velocity time series.

The experiments were conducted with nine people, taking into account three fixed binary conditions: driver or passenger, tense or relaxed, and rigid seat or SAV (sport activity vehicle) seat. In particular, for each individual, eight experiments were performed, one for each combination of these conditions.

As a consequence, we worked with a sample of 72 experiments, each of them encoded in a time series (as we explain below). Our goal is to classify the behavior of a generic driver, assigning one of the two states (tense or relaxed) by using the sensor data.

#### *2.2. Time Series Description*

The data acquired from the sensors (see Figures 3 and 4) were stored as six-dimensional time series containing both the linear acceleration and the angular velocity of the head movement. The sampling frequency of the data was 256 Hz, and the duration of the experiment was 34 s; hence, each time series has 256 × 34 = 8704 samples. From each time series, indexed by 1 ≤ *t* ≤ 8704, we constructed three new time series by applying a sliding window of length 5800. The first window covers *t* = 1 to *t* = 5800, the second *t* = 1451 to *t* = 7250, and the third *t* = 2905 to *t* = 8704 (offsets of 0, 1450, and 2904 samples). Each element in the sample (1 ≤ *i* ≤ 72) was thus encoded by means of three six-dimensional time series, one per sliding window, which we represent in matrix form as follows:

$$\operatorname{TS}\_{3(i-1)+1} = \begin{bmatrix} a\_x^\ell(1) & a\_x^\ell(2) & \cdots & a\_x^\ell(5800) \\ a\_y^\ell(1) & a\_y^\ell(2) & \cdots & a\_y^\ell(5800) \\ a\_z^\ell(1) & a\_z^\ell(2) & \cdots & a\_z^\ell(5800) \\ \omega\_x^\ell(1) & \omega\_x^\ell(2) & \cdots & \omega\_x^\ell(5800) \\ \omega\_y^\ell(1) & \omega\_y^\ell(2) & \cdots & \omega\_y^\ell(5800) \\ \omega\_z^\ell(1) & \omega\_z^\ell(2) & \cdots & \omega\_z^\ell(5800) \end{bmatrix}, \quad \operatorname{TS}\_{3(i-1)+2} = \begin{bmatrix} a\_x^\ell(1451) & a\_x^\ell(1452) & \cdots & a\_x^\ell(7250) \\ a\_y^\ell(1451) & a\_y^\ell(1452) & \cdots & a\_y^\ell(7250) \\ a\_z^\ell(1451) & a\_z^\ell(1452) & \cdots & a\_z^\ell(7250) \\ \omega\_x^\ell(1451) & \omega\_x^\ell(1452) & \cdots & \omega\_x^\ell(7250) \\ \omega\_y^\ell(1451) & \omega\_y^\ell(1452) & \cdots & \omega\_y^\ell(7250) \\ \omega\_z^\ell(1451) & \omega\_z^\ell(1452) & \cdots & \omega\_z^\ell(7250) \end{bmatrix}$$

and

$$\operatorname{TS}\_{3i} = \begin{bmatrix} a\_x^\ell(2905) & a\_x^\ell(2906) & \cdots & a\_x^\ell(8704) \\ a\_y^\ell(2905) & a\_y^\ell(2906) & \cdots & a\_y^\ell(8704) \\ a\_z^\ell(2905) & a\_z^\ell(2906) & \cdots & a\_z^\ell(8704) \\ \omega\_x^\ell(2905) & \omega\_x^\ell(2906) & \cdots & \omega\_x^\ell(8704) \\ \omega\_y^\ell(2905) & \omega\_y^\ell(2906) & \cdots & \omega\_y^\ell(8704) \\ \omega\_z^\ell(2905) & \omega\_z^\ell(2906) & \cdots & \omega\_z^\ell(8704) \end{bmatrix}.$$

Here, the matrices have a size of 6 × 5800 and 1 ≤ *i* ≤ 72. This allows us to represent the information by using a third-order tensor, namely, $\mathcal{Z} \in \mathbb{R}^{216 \times 6 \times 5800}$, defined by

$$\mathcal{Z}\_{i,j,k} := (TS\_i)\_{j,k}$$

for $1 \le i \le 216$, $1 \le j \le 6$, and $1 \le k \le 5800$. We can identify $\mathcal{Z}\_{i,:,:} = \operatorname{TS}\_i$ for $1 \le i \le 216$.
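The window construction above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code; the array names are ours, and the zero array stands in for the actual sensor recordings:

```python
import numpy as np

# 72 experiments, 6 channels (a_x, a_y, a_z, w_x, w_y, w_z), 8704 samples each
n_exp, n_channels, n_samples, win = 72, 6, 8704, 5800
offsets = [0, 1450, 2904]                   # start offsets of the three windows

recordings = np.zeros((n_exp, n_channels, n_samples))   # placeholder sensor data

# stack the three 6 x 5800 windows of each experiment: TS_{3(i-1)+1}, ..., TS_{3i}
Z = np.stack([recordings[i, :, o:o + win]
              for i in range(n_exp) for o in offsets])
print(Z.shape)   # (216, 6, 5800)
```

Each group of three consecutive slices of `Z` corresponds to the three sliding windows of one experiment.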

**Figure 3.** Sensor data: linear acceleration time series.

**Figure 4.** Sensor data: angular velocity time series.

#### *2.3. Data Preprocessing*

In order to obtain a single series for each observation, we concatenated the six time series (linear accelerations and angular velocities) of each observation horizontally and then created a data frame by stacking the 216 in-sample observations.

The concatenation collapses the last two dimensions of the tensor into one-dimensional arrays of length 6 × 5800 = 34,800. The result is the two-dimensional table of concatenated time series

$$D = \begin{bmatrix} \operatorname{vec}(\mathcal{Z}\_{1,:,:}) \\ \vdots \\ \operatorname{vec}(\mathcal{Z}\_{216,:,:}) \end{bmatrix} \in \mathbb{R}^{216 \times 34800}.$$

We chose not to filter the signals because the topological sub-level set method naturally filters out high-frequency features. We also chose to keep working with acceleration signals in order to avoid the drift that two time integrations (to obtain positions) would introduce, since the sensors do not always keep a zero-mean height. Thus, the approach is completely (topologically) data-based.

The six time series of each observation $\mathcal{Z}\_i$ were thus collapsed into a single concatenated time series of size 34,800—see Figure 5. The concatenated time series of the 216 observations were then stacked to create the dataset *D* of size 216 × 34,800. We also attached binary labels to the concatenated time series $\mathcal{Z}\_i$ according to the two target classes of interest. In particular, we write $\mathcal{Z}\_i^{(\alpha)}$, where *α* is "0" for a relaxed driver and "1" for a tense one.
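The collapse-and-stack operation above amounts to a single reshape of the tensor; a minimal sketch with a placeholder tensor:

```python
import numpy as np

Z = np.zeros((216, 6, 5800))      # placeholder for the tensor of Section 2.2
D = Z.reshape(216, 6 * 5800)      # each row is vec(Z_i), the concatenated series
print(D.shape)   # (216, 34800)
```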

**Figure 5.** Tensor reduction of a sensor time series.

#### *2.4. Extracting Topological Features from a Time Series*

The idea for extracting the topological information from the time series is to consider each sample observation as a piecewise-linear continuous map from a closed interval to the real line. We therefore use a construction closely related to the Reeb graph [11] used in Morse theory to describe the time series at the topological level.

To this end, we consider the time series $x\_t$ for $0 \le t \le N-1$ ($N \ge 3$), given by the vector

$$\mathbf{X} = (x\_0, x\_1, \dots, x\_{N-1}) \in \mathbb{R}^N.$$

We can view $\mathbf{X}$ as a function, also denoted by $\mathbf{X} : \{0, 1, \dots, N-1\} \to \mathbb{R}$, defined by $\mathbf{X}(i) = x\_i$ for $0 \le i \le N-1$. To study the topological features of $\mathbf{X}$, we use the sub-level sets of a piecewise-linear function $f\_{\mathbf{X}} : \mathbb{R} \to \mathbb{R}$ associated with $\mathbf{X}$ and satisfying $f\_{\mathbf{X}}(i) = \mathbf{X}(i) = x\_i$ for $0 \le i \le N-1$.

To construct this function, we consider the basis $\{\varphi\_0, \dots, \varphi\_{N-1}\}$ of continuous functions $\varphi\_i : \mathbb{R} \to \mathbb{R}$ defined by

$$\varphi\_i(s) := \begin{cases} s - i + 1 & \text{if } i - 1 \le s \le i \\ i + 1 - s & \text{if } i \le s \le i + 1 \\ 0 & \text{if } s \notin [i - 1, i + 1] \end{cases}$$

where $i = 1, \dots, N-2$, and

$$\varphi\_0(s) := \begin{cases} 1 - s & \text{if } 0 \le s \le 1 \\ 0 & \text{if } s \notin [0, 1] \end{cases} \qquad \varphi\_{N-1}(s) := \begin{cases} s - N + 2 & \text{if } N - 2 \le s \le N - 1 \\ 0 & \text{if } s \notin [N - 2, N - 1] \end{cases}$$

This allows us to construct the piecewise-linear continuous map $f\_{\mathbf{X}} : \mathbb{R} \to \mathbb{R}$ given by

$$f\_{\mathbf{X}}(s) = \sum\_{j=0}^{N-1} x\_j\, \varphi\_j(s),$$

and also to endow R*<sup>N</sup>* with a norm given by

$$\|\mathbf{X}\| := \|f\_{\mathbf{X}}\|\_{L^2(\mathbb{R})} = \left(\int\_{-\infty}^{\infty} |f\_{\mathbf{X}}(s)|^2 ds\right)^{1/2}.$$
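Since $f\_{\mathbf{X}}$ is the piecewise-linear interpolant of the samples at the integer nodes, it can be evaluated on $[0, N-1]$ with `np.interp`, and the norm can be integrated exactly segment by segment (on a unit segment with endpoint values $a$ and $b$, $\int |f|^2 = (a^2 + ab + b^2)/3$). A minimal sketch, with function names of our own choosing:

```python
import numpy as np

def f_X(X, s):
    """Evaluate f_X(s) = sum_j x_j * phi_j(s) for s in [0, N-1]."""
    return np.interp(s, np.arange(len(X)), X)

def l2_norm(X):
    """||X|| = ||f_X||_{L2}: exact integral of the squared linear segments."""
    a, b = np.asarray(X[:-1], float), np.asarray(X[1:], float)
    return np.sqrt(np.sum((a * a + a * b + b * b) / 3.0))

X = [11, 14, 9, 7, 9, 7, 8, 10, 9]   # the time series used in Example 1 below
print(f_X(X, 1.5))   # 11.5, the midpoint value between x_1 = 14 and x_2 = 9
```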

In particular, we prove the following result, which allows us to identify the time series given by the vector $\mathbf{X} \in \mathbb{R}^N$ with the function $f\_{\mathbf{X}} \in L^2(\mathbb{R})$.

**Proposition 1.** *The linear map* $\Phi : (\mathbb{R}^N, \|\cdot\|) \to (L^2(\mathbb{R}), \|\cdot\|\_{L^2(\mathbb{R})})$ *given by* $\Phi(\mathbf{X}) = f\_{\mathbf{X}}$ *is an injective isometry between Hilbert spaces. Furthermore,* $\Phi(\mathbb{R}^N)$ *is a closed subspace of* $L^2(\mathbb{R})$.

**Proof.** The map is clearly isometric, and it is injective because $\{\varphi\_0, \dots, \varphi\_{N-1}\}$ is a set of linearly independent functions in $L^2(\mathbb{R})$. Moreover, $\Phi(\mathbb{R}^N)$ is a finite-dimensional subspace of $L^2(\mathbb{R})$ and hence closed.

Here, we describe the maps $f\_{\mathbf{X}} \in \Phi(\mathbb{R}^N)$ at the combinatorial level by using the connected components (intervals) associated with their $\lambda$ sub-level sets

$$\mathcal{LS}\_{\lambda}(f\_{\mathbf{X}}) := \{ s \in [0, N-1] : f\_{\mathbf{X}}(s) \le \lambda \}$$

for $\lambda \in \mathbb{R}$. For this purpose, we introduce the following distinguished objects related to the support $\operatorname{supp}(f\_{\mathbf{X}}) = [0, N-1] \subset \mathbb{R}$ of $f\_{\mathbf{X}}$:

• The nodes or vertices denoted by

$$\mathcal{V} := \{ [0], [1], \dots, [N-1] \}$$

that represent the components of the vector $\mathbf{X}$;

• The faces denoted by

$$\mathcal{F} := \{ [0,1], [1,2], \dots, [N-2, N-1] \}$$

that represent the intervals used to construct the connected components of the sublevel sets of the map *f***X**. Recall that we consider

$$[i, i+1] := \{ z \in \mathbb{R} : z = \mu\, x\_{i+1} + (1 - \mu)\, x\_i,\; 0 \le \mu \le 1 \} \subset \mathbb{R}.$$

Let

$$\lambda\_{\max} = \max\_{s \in [0, N-1]} f\_{\mathbf{X}}(s) = \max\_{0 \le i \le N-1} \mathbf{X}(i),$$

and

$$\lambda\_{\min} = \min\_{s \in [0, N-1]} f\_{\mathbf{X}}(s) = \min\_{0 \le i \le N-1} \mathbf{X}(i).$$

For each *λ*min ≤ *λ* ≤ *λ*max, we introduce the following symbolic *λ* sub-level set for the map *f***<sup>X</sup>** :

$$LS\_{\lambda}(f\_{\mathbf{X}}) := \{ \sigma \in \mathcal{F} : f\_{\mathbf{X}}(\sigma) \le \lambda \}.$$

For $\lambda\_{\min} \le \lambda \le \lambda' \le \lambda\_{\max}$, it holds that

$$LS\_{\lambda}(f\_{\mathbf{X}}) \subset LS\_{\lambda'}(f\_{\mathbf{X}}).$$

Our next goal is to quantify the evolution of the above symbolic $\lambda$ sub-level sets with $\lambda$. To this end, we introduce the notion of a feature associated with the $\lambda$ sub-level set $LS\_{\lambda}(f\_{\mathbf{X}})$. We define the set of features for functions in $\Phi(\mathbb{R}^N)$ as

$$\mathfrak{F}(\Phi(\mathbb{R}^N)) := \{ [i, j] \subset \mathbb{R} : 0 \le i < j \le N - 1 \}.$$

We note that $LS\_{\lambda}(f\_{\mathbf{X}}) \subset \mathcal{F} \subset \mathfrak{F}(\Phi(\mathbb{R}^N))$. The next definition introduces the notion of a feature of a symbolic $\lambda$ sub-level set as an interval of $\mathfrak{F}(\Phi(\mathbb{R}^N))$ constructed as a maximal union of faces of $LS\_{\lambda}(f\_{\mathbf{X}})$.

**Definition 1.** *We say that $\mathbb{I} \in \mathfrak{F}(\Phi(\mathbb{R}^N))$ is a feature for the symbolic $\lambda$ sub-level set $LS\_{\lambda}(f\_{\mathbf{X}})$ if there exist $\mathbb{I}\_1, \dots, \mathbb{I}\_k \in LS\_{\lambda}(f\_{\mathbf{X}})$ such that $\mathbb{I} = \bigcup\_{j=1}^{k} \mathbb{I}\_j$, and for every $\mathbb{J} \in LS\_{\lambda}(f\_{\mathbf{X}})$ with $\mathbb{J} \neq \mathbb{I}\_i$ for $1 \le i \le k$, it holds that $\mathbb{I} \cap \mathbb{J} = \emptyset$. We denote by $\mathfrak{F}(LS\_{\lambda}(f\_{\mathbf{X}}))$ the set of features for the $\lambda$ sub-level set $LS\_{\lambda}(f\_{\mathbf{X}})$.*

A feature of a $\lambda$ sub-level set $LS\_{\lambda}(f\_{\mathbf{X}})$ is thus a maximal interval of $\mathfrak{F}(\Phi(\mathbb{R}^N))$ that can be constructed as a union of intervals in $LS\_{\lambda}(f\_{\mathbf{X}})$. To illustrate this definition, we give the following example:

**Example 1.** *Let us consider the time series*

$$\mathbf{X} = (11, 14, 9, 7, 9, 7, 8, 10, 9).$$

*This allows us to construct the map f***<sup>X</sup>** *as shown in Figure 6. Then, λmin* = 7 *and λ*max = 14*, and we have the following symbolic λ sub-level sets.*

$$\begin{split} & LS\_{\lambda=7}(f\_{\mathbf{X}}) = \emptyset \\ & LS\_{\lambda=8}(f\_{\mathbf{X}}) = LS\_{\lambda=7}(f\_{\mathbf{X}}) \cup \{ [5,6] \} \\ & LS\_{\lambda=9}(f\_{\mathbf{X}}) = LS\_{\lambda=8}(f\_{\mathbf{X}}) \cup \{ [2,3], [3,4], [4,5] \} \\ & LS\_{\lambda=10}(f\_{\mathbf{X}}) = LS\_{\lambda=9}(f\_{\mathbf{X}}) \cup \{ [6,7], [7,8] \} \\ & LS\_{\lambda=11}(f\_{\mathbf{X}}) = LS\_{\lambda=10}(f\_{\mathbf{X}}) \\ & LS\_{\lambda=12}(f\_{\mathbf{X}}) = LS\_{\lambda=11}(f\_{\mathbf{X}}) \\ & LS\_{\lambda=13}(f\_{\mathbf{X}}) = LS\_{\lambda=12}(f\_{\mathbf{X}}) \\ & LS\_{\lambda=14}(f\_{\mathbf{X}}) = LS\_{\lambda=13}(f\_{\mathbf{X}}) \cup \{ [0,1] \}. \end{split}$$

*This allows us to compute the available features for each λ-value:*

- *λ* = 8: *the feature* [5, 6];
- *λ* = 9: *the feature* [2, 6] = [2, 3] ∪ [3, 4] ∪ [4, 5] ∪ [5, 6];
- *λ* = 10, 11, 12, 13: *the feature* [2, 8];
- *λ* = 14: *the feature* [0, 8].
Let F(*f***X**) be the whole set of features for *f***X**, that is,

$$\mathfrak{F}(f\_{\mathbf{X}}) = \{ \mathbb{I} : \mathbb{I} \in \mathfrak{F}(LS\_{\lambda}(f\_{\mathbf{X}})) \text{ for some } \lambda\_{\min} \le \lambda \le \lambda\_{\max} \}.$$

**Figure 6.** The map *f***<sup>X</sup>** for **X** = (11, 14, 9, 7, 9, 7, 8, 10, 9).

**Example 2.** *From Example 1, we obtain*

$$\mathfrak{F}(f\_{\mathbf{X}}) = \{ [5, 6], [2, 6], [2, 8], [0, 8] \}.$$

*We can represent the map λ* → *LSλ*(*f***X**) *from* [*λ*min, *λ*max] *to* F(*f***X**) *as shown in Figure 7.*

Let $\mathbb{I} \in \mathfrak{F}(f\_{\mathbf{X}})$. In order to quantify the persistence of this particular feature of the map $f\_{\mathbf{X}}$, we use the map $\lambda \mapsto LS\_{\lambda}(f\_{\mathbf{X}})$ from $[\lambda\_{\min}, \lambda\_{\max}]$ to $\mathfrak{F}(f\_{\mathbf{X}})$. To this end, we introduce the following definitions: the birth point of the feature $\mathbb{I}$ is defined by

$$a(\mathbb{I}) = \inf \{ \lambda : \mathbb{I} \in \mathfrak{F}(LS\_{\lambda}(f\_{\mathbf{X}})) \}$$

and the corresponding death point by

$$b(\mathbb{I}) = \sup \{ \lambda : \mathbb{I} \in \mathfrak{F}(LS\_{\lambda}(f\_{\mathbf{X}})) \}.$$

In particular, we note that $a([0, N-1]) = \lambda\_{\max}$ (see Figure 7). Since $a(\mathbb{I}) \le b(\mathbb{I}) < \infty$ holds for all $\mathbb{I} \in \mathfrak{F}(f\_{\mathbf{X}})$ with $\mathbb{I} \neq [0, N-1]$, we call the finite interval $[a(\mathbb{I}), b(\mathbb{I})]$ *the barcode of the feature* $\mathbb{I} \in \mathfrak{F}(f\_{\mathbf{X}}) \setminus \{[0, N-1]\}$.

**Example 3.** *From Example 1 we consider the features* [5, 6] ∈ *LSλ*=8(*f***X**), [2, 6] ∈ *LSλ*=9(*f***X**), *and* [2, 8] ∈ *LSλ*=10(*f***X**). *Then, the feature* [5, 6] *has its birth point at a*([5, 6]) = 8 *and its death point at b*([5, 6]) = 9; *the feature* [2, 6] *has its birth point at a*([2, 6]) = 9 *and its death point at b*([2, 6]) = 10. *Finally, the feature* [2, 8] *has its birth point at a*([2, 8]) = 10 *and its death point at b*([2, 8]) = 14. *As a consequence, the set*

$$\mathcal{B}(f\_{\mathbf{X}}) := \{ ([5,6];8,9), ([2,6];9,10), ([2,8];10,14) \}$$

*of features and its corresponding barcodes contain the relevant information of the shape of f***<sup>X</sup>** *(see Figure 7).*

Thus, we define the set of barcodes for *f***<sup>X</sup>** by

$$\mathcal{B}(f\_{\mathbf{X}}) = \{ (\mathbb{I}; a(\mathbb{I}), b(\mathbb{I})) : \mathbb{I} \in \mathfrak{F}(f\_{\mathbf{X}}) \setminus \{ [0, N-1] \} \}$$

and its persistence diagram as

$$\mathcal{PD}(f\_{\mathbf{X}}) := \{ (a(\mathbb{I}), b(\mathbb{I})) \in \mathbb{R}^2 : \mathbb{I} \in \mathfrak{F}(f\_{\mathbf{X}}) \setminus \{ [0, N-1] \} \}$$

(see Figure 8). An equivalent representation of the persistence diagram is the life-time diagram for *f***X**, which is constructed by means of a bijective transformation *T*(*a*, *b*) = (*a*, *b* − *a*), acting over PD(*f***X**), that is,

$$\mathcal{LT}(f\_{\mathbf{X}}) := \{ (a(\mathbb{I}), b(\mathbb{I}) - a(\mathbb{I})) \in \mathbb{R}^2 : \mathbb{I} \in \mathfrak{F}(f\_{\mathbf{X}}) \setminus \{ [0, N-1] \} \};$$

see Figure 9.

**Figure 8.** Persistence diagram for the map *f***<sup>X</sup>** when **X** = (11, 14, 9, 7, 9, 7, 8, 10, 9).

**Figure 9.** Life-time diagram for the map *f***<sup>X</sup>** when **X** = (11, 14, 9, 7, 9, 7, 8, 10, 9).
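The constructions of this section (symbolic sub-level sets, features, and barcodes) can be sketched in a few lines; the code below is a pedagogical illustration, not the authors' implementation, and reproduces Examples 1–3:

```python
def features(x, lam):
    """Features of LS_lambda(f_X): maximal unions of adjacent faces [i, i+1]
    on which f_X does not exceed lam, i.e. max(x[i], x[i+1]) <= lam."""
    faces = [i for i in range(len(x) - 1) if max(x[i], x[i + 1]) <= lam]
    out, start, end = [], None, None
    for i in faces:
        if start is None:
            start, end = i, i + 1
        elif i == end:               # face adjacent to the current feature
            end = i + 1
        else:                        # gap: close the feature, open a new one
            out.append((start, end))
            start, end = i, i + 1
    if start is not None:
        out.append((start, end))
    return out

def barcodes(x):
    """Barcodes [a(I), b(I)] of all features except the full interval."""
    levels = sorted(set(x))
    birth = {}                       # feature -> birth point a(I)
    for lam in levels:
        for f in features(x, lam):
            birth.setdefault(f, lam)
    full = (0, len(x) - 1)
    bars = []
    for f, a in birth.items():
        if f == full:
            continue
        # death point: first level at which f is no longer a (maximal) feature
        b = min(lam for lam in levels if lam > a and f not in features(x, lam))
        bars.append((f, a, b))
    return bars

x = [11, 14, 9, 7, 9, 7, 8, 10, 9]   # the time series of Example 1
print(features(x, 9))                # [(2, 6)]
print(barcodes(x))                   # [((5, 6), 8, 9), ((2, 6), 9, 10), ((2, 8), 10, 14)]
```

The printed barcodes match the set $\mathcal{B}(f\_{\mathbf{X}})$ of Example 3.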

In order to determine the degree of similarity between two barcodes from two different time series, we need to set a similarity metric. To this end, we construct the persistence image for $f\_{\mathbf{X}}$ as follows: we observe that $\mathcal{LT}(f\_{\mathbf{X}})$ is a finite set of points, namely,

$$\mathcal{LT}(f\_{\mathbf{X}}) = \{ (a\_1, b\_1 - a\_1), \dots, (a\_k, b\_k - a\_k) \},$$

for some natural number $k \ge 1$, ordered such that $b\_1 - a\_1 \le b\_2 - a\_2 \le \dots \le b\_k - a\_k$. Then, we consider a non-negative weighting function $w : \mathcal{LT}(f\_{\mathbf{X}}) \to [0, 1]$ given by

$$w(a\_i, b\_i - a\_i) = \frac{b\_i - a\_i}{b\_k - a\_k} \quad \text{for } 1 \le i \le k.$$

Finally, we fix a natural number $M$ and take a bivariate normal distribution $g\_u(x, y)$ centered at each point $u \in \mathcal{LT}(f\_{\mathbf{X}})$ with covariance matrix $\sigma\, \mathrm{Id}$, where $\sigma = \frac{1}{M} \max\_{1 \le i \le k}(b\_i - a\_i)$ and $\mathrm{Id}$ is the 2 × 2 identity matrix. A persistence kernel is then defined by means of a function $\rho\_{\mathbf{X}} : \mathbb{R}^2 \to \mathbb{R}$, where

$$\rho\_{\mathbf{X}}(x, y) = \sum\_{u \in \mathcal{LT}(f\_{\mathbf{X}})} w(u)\, g\_u(x, y). \tag{1}$$

We associate with each $\mathbf{X} \in \mathbb{R}^N$ a matrix in $\mathbb{R}^{M \times M}$ as follows: let $\varepsilon > 0$ be a sufficiently small positive real number, and consider a square region $\Omega\_{\mathbf{X},\varepsilon} = [\alpha, \beta] \times [\alpha^\*, \beta^\*] \subset \mathbb{R}^2$ covering the support of $\rho\_{\mathbf{X}}(x, y)$ (up to a certain precision), such that

$$\iint\_{\Omega\_{\mathbf{X}\_{\varepsilon}}} \rho\_{\mathbf{X}}(x, y) \, dx \, dy \ge 1 - \varepsilon$$

holds. Next, we consider two equispaced partitions of the intervals

$$\alpha = p\_0 \le p\_1 \dots \le p\_M = \beta \text{ and } \alpha^\* = p\_0^\* \le p\_1^\* \dots \le p\_M^\* = \beta^\*.$$

Now, we put

$$\Omega\_{\mathbf{X},\varepsilon} = \bigcup\_{i=0}^{M-1} \bigcup\_{j=0}^{M-1} [p\_i, p\_{i+1}] \times [p\_j^\*, p\_{j+1}^\*] = \bigcup\_{i=0}^{M-1} \bigcup\_{j=0}^{M-1} P\_{i,j}.$$

The persistence image of **X** associated with the partition P = {*Pi*,*j*} is then described by the matrix given by the following equation:

$$PI(\mathbf{X}, M, \mathcal{P}, \varepsilon) = \left( \iint\_{P\_{i,j}} \rho\_{\mathbf{X}}(x, y)\, dx\, dy \right)\_{i,j=0}^{M-1} \in \mathbb{R}^{M \times M}. \tag{2}$$
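A sketch of Equations (1) and (2): the cell integrals of the separable Gaussian kernel factor into 1D Gaussian CDF differences, which `math.erf` evaluates exactly. Two implementation assumptions of ours: the window is chosen as a 3σ padding of the life-time points rather than via an explicit ε, and σ is treated as a standard deviation.

```python
import math

def persistence_image(lt_points, M=8):
    """M x M persistence image of a life-time diagram {(a_i, b_i - a_i)}."""
    pts = sorted(lt_points, key=lambda p: p[1])      # order by life-time b - a
    max_life = float(pts[-1][1])
    sigma = max_life / M                             # spread of each Gaussian
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    x0, x1 = min(xs) - 3 * sigma, max(xs) + 3 * sigma
    y0, y1 = min(ys) - 3 * sigma, max(ys) + 3 * sigma

    def cdf(z, mu):                                  # 1D Gaussian CDF
        return 0.5 * (1.0 + math.erf((z - mu) / (sigma * math.sqrt(2.0))))

    img = [[0.0] * M for _ in range(M)]
    for i in range(M):
        px, qx = x0 + i * (x1 - x0) / M, x0 + (i + 1) * (x1 - x0) / M
        for j in range(M):
            py, qy = y0 + j * (y1 - y0) / M, y0 + (j + 1) * (y1 - y0) / M
            for (a, life) in pts:                    # Eq. (1): weighted kernel
                w = life / max_life                  # weight w(a, b - a)
                img[i][j] += w * (cdf(qx, a) - cdf(px, a)) * (cdf(qy, life) - cdf(py, life))
    return img

# life-time diagram of the running example: {(8, 1), (9, 1), (10, 4)}
img = persistence_image([(8, 1), (9, 1), (10, 4)])
mass = sum(sum(row) for row in img)   # ~ sum of weights = 0.25 + 0.25 + 1.0
```

Summing all cells recovers (up to the ε-mass outside the window) the total weight of the diagram, which is a quick sanity check on the discretization.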

#### *2.5. Classification*

Image classification automatically categorizes images into classes by assigning to each image a label representative of its class. A supervised classification algorithm requires a training sample for each class, that is, a collection of data points whose class is known. Classifying a new observation is then based on how close it is to each training sample; the Euclidean distance is the most common metric for low-dimensional datasets. Any state-of-the-art technique can be used to classify the persistence images; in our case, we considered random forest classification.

Recall that we collected data from nine individuals, with 24 samples each, corresponding to 3 sliding windows for each of the eight experimental conditions: relaxed rigid driver, relaxed rigid passenger, relaxed SAV driver, relaxed SAV passenger, tense rigid driver, tense rigid passenger, tense SAV driver, and tense SAV passenger, with respective labels {0, 0, 0, 0, 1, 1, 1, 1}. We therefore designed the following training-validation process: the model is trained on 144 samples and evaluated on the remaining 72 unseen samples (a two-to-one training-to-testing ratio). The split between training and testing is performed with random shuffling and stratification to ensure balance between the classes. To better evaluate the generalizability of the model, we also performed a cross-validation procedure following a *leave-one-out* strategy, consisting of iteratively training on the full dataset except one sample, which is left out and used to test and score the model. We used the *accuracy* metric to evaluate the classification model. The performance of the model can be represented by the so-called confusion matrix: a two-dimensional table whose entries count the number of samples in each category, with the first axis representing the true labels and the second the predicted labels. We also computed the other standard classification metrics to obtain a more detailed report of the model performance.
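The protocol above can be sketched with scikit-learn (an assumed implementation: the paper does not name its software stack, and the arrays below are random placeholders for the 216 flattened persistence images and their labels):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score, train_test_split

rng = np.random.RandomState(0)
X = rng.rand(216, 64)                 # placeholder: 216 flattened 8x8 images
y = np.repeat([0, 1], 108)            # placeholder labels: 0 relaxed, 1 tense

# 144/72 stratified split with random shuffling (two-to-one ratio)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=72, shuffle=True, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=20, random_state=0).fit(X_tr, y_tr)
test_accuracy = clf.score(X_te, y_te)

# leave-one-out cross-validation over the full dataset
loo = cross_val_score(
    RandomForestClassifier(n_estimators=20, random_state=0), X, y, cv=LeaveOneOut())
```

With real persistence images, `test_accuracy` and `loo.mean()` correspond to the testing and leave-one-out scores reported in Section 3.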

#### **3. Results**

The random forest classifier trained on the persistence images achieves a notably high accuracy score on the training dataset (144 samples) and a high accuracy on the testing dataset (72 samples). This suggests a strong differentiation of the images with respect to their generating signals; see Figure 10. The training and testing scores are 93% and 83%, respectively. The leave-one-out cross-validation achieved a score of 81%, indicating a good variance–bias trade-off and good generalization potential of the model.

**Figure 10.** Model performance for predicting the attention state.

#### **4. Discussion**

The combination of Morse theory and topological data analysis allows us to extract information from real data without any smoothness or regularity assumptions on the time series. In our case, the input data for each experiment were reduced from six sensor time series to one single image containing the persistent pattern associated with attention to the road. Using the obtained persistence images as new inputs, supervised learning successfully predicted the attention state of the driver or passenger.

The procedure used and described in this paper does not involve any additional pre-processing of the sensor data; is robust to noise and degraded signals; and supports large quantities of data, which makes it efficient and scalable.

It is important to highlight that while the proposed TDA-based methodology (successfully applied to large datasets [9]) seems general and powerful, and was able to extract the main data features, the validity of the driver behaviors observed in the analyzed dataset should be checked carefully: the small dataset employed (limited to nine individuals) does not allow a full validation of the robustness of the predictions.

**Author Contributions:** Conceptualization, F.C., H.Y.C. and M.H.; data curation, T.F., A.B. and M.H.; formal analysis, F.C., A.F., A.B., E.C., H.Y.C. and J.-L.D.; funding acquisition, F.C.; investigation, T.F., A.F. and M.H.; methodology, F.C., A.F., A.B. and H.Y.C.; software, A.B.; supervision, F.C., E.C. and H.Y.C.; writing—original draft, T.F.; writing—review editing, A.B. and E.C. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by ESI Group, Contract 2019-0060 with the University of Zaragoza.

**Institutional Review Board Statement:** Ethical review and approval were waived for this study because the research presents no risk of harm to subjects and collects only anonymized, non-personalized data.

**Informed Consent Statement:** Informed consent was obtained from all subjects involved in the study.

**Data Availability Statement:** Data are available upon request.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

#### **Appendix A**

We can illustrate the process of computing the persistence diagrams, the lifetime diagrams, and the persistence images for the driver time series for each experimental setup:


**Figure A1.** Relaxed driver with SAV seat.

**Figure A2.** Relaxed driver with rigid seat.

**Figure A3.** Relaxed passenger with SAV seat.

**Figure A4.** Relaxed passenger with rigid seat.

**Figure A5.** Tense driver with SAV seat.

**Figure A6.** Tense driver with rigid seat.

**Figure A7.** Tense passenger with SAV seat.

**Figure A8.** Tense passenger with rigid seat.

#### **Appendix B**

To better evaluate a classification model, we are interested in quantities that express how often a sample is correctly or wrongly labelled into a particular class, over all samples and all classes:

• *TP* (true positives): the number of positive samples correctly predicted as positive;

• *TN* (true negatives): the number of negative samples correctly predicted as negative;

• *FP* (false positives): the number of negative samples wrongly predicted as positive;

• *FN* (false negatives): the number of positive samples wrongly predicted as negative.
Therefore, we can examine in more detail the classification model performance using the following metrics:

• The *precision* P is the number of correct positive results divided by the number of all positive results.

$$P = \frac{TP}{TP + FP} \tag{A1}$$

• The *recall* R is the number of correct positive results divided by the number of all relevant samples.

$$R = \frac{TP}{TP + FN} \tag{A2}$$

• The F-1 score is the harmonic mean of precision and recall.

$$F1 = 2 \times \frac{P \times R}{P + R} \tag{A3}$$

• The *accuracy* A is the number of correct predictions over the number of all samples.

$$A = \frac{TP + TN}{TP + TN + FP + FN} \tag{A4}$$
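For concreteness, Equations (A1)–(A4) in code, applied to hypothetical confusion-matrix counts of our own choosing (not the paper's results):

```python
def classification_metrics(tp, tn, fp, fn):
    """Precision, recall, F1, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)                          # Eq. (A1)
    recall = tp / (tp + fn)                             # Eq. (A2)
    f1 = 2 * precision * recall / (precision + recall)  # Eq. (A3)
    accuracy = (tp + tn) / (tp + tn + fp + fn)          # Eq. (A4)
    return precision, recall, f1, accuracy

# hypothetical counts for a 72-sample test set (illustration only)
p, r, f1, a = classification_metrics(tp=30, tn=30, fp=6, fn=6)
print(round(a, 4))   # 0.8333
```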

We can summarize the presented metrics for our model in the following two reports:


(**a**) Training set. (**b**) Testing set.

**Figure A9.** Classification report.

#### **References**

