*Article* **A Novel V2V Cooperative Collision Warning System Using UWB/DR for Intelligent Vehicles**

**Mingyang Wang, Xinbo Chen, Baobao Jin, Pengyuan Lv, Wei Wang \* and Yong Shen \***

Institute of Intelligent Vehicles, School of Automotive Studies, Tongji University, No. 4800 Cao'an Highway, Jiading District, Shanghai 201804, China; wangmingyang@sina.cn (M.W.); austin\_1@163.com (X.C.); jin\_baobao2019@163.com (B.J.); lv\_Pengyuan@tongji.edu.cn (P.L.)

**\*** Correspondence: lazguronwang@gmail.com (W.W.); shenyong@tongji.edu.cn (Y.S.)

**Abstract:** The collision warning system (CWS) plays an essential role in vehicle active safety. However, traditional distance-measuring solutions, e.g., millimeter-wave radars, ultrasonic radars, and lidars, fail to reflect vehicles' relative attitudes and motion trends. In this paper, we propose a vehicle-to-vehicle (V2V) cooperative collision warning system (CCWS) consisting of an ultra-wideband (UWB) relative positioning/directing module and a dead reckoning (DR) module with wheel-speed sensors. In the presented configuration, each vehicle has four UWB modules on the body corners and two wheel-speed sensors on the rear wheels. An over-constrained localization method is proposed to calculate the relative position and orientation from the UWB data more accurately. Vehicle velocities and yaw rates are measured by the wheel-speed sensors. An extended Kalman filter (EKF) based on the relative kinematic model is applied to fuse the UWB and DR data. Finally, the time to collision (TTC) is estimated from the predicted vehicle collision position. Furthermore, through UWB signals, vehicles can simultaneously communicate with each other and share information, e.g., velocity and yaw rate, which brings the potential for enhanced real-time performance. Simulation and experimental results show that the proposed method significantly improves the positioning, directing, and velocity estimation accuracy, and that the proposed system can efficiently provide collision warnings.

**Keywords:** collision warning system; ultra-wideband; dead reckoning; time to collision

### **1. Introduction**

The global status report on road safety 2018, launched by the WHO in December 2018, highlighted that the number of annual road traffic deaths had reached 1.35 million [1]. Two-vehicle and multi-vehicle collisions were the most severe types of accidents. Studies showed that more than 80% of road traffic accidents resulted from drivers' belated responses, and more than 65% resulted in rear-end collisions [2]. Research indicates that more than 80% of accidents could have been averted if drivers had focused and driven correctly in the three seconds before the accident [3].

In recent years, more and more researchers have focused on advanced driving assistance systems (ADAS) to raise consumers' awareness of safety devices and to reduce the risk of accidents caused by careless driving. As an essential component of the collision warning system, the forward collision warning system (FCWS) can measure the distance to the leading vehicle on its own and warn the driver when the distance between vehicles is less than the safe distance. At present, FCWSs using active sensors, such as laser [4,5], radar [6], vision sensors [7–9], and infrared [10], have been widely studied. Sanberg et al. [11] presented a stereo-vision-based CWS suited for real-time execution in a car. Hernandez et al. designed an object collision warning system for high-conflict vehicle-pedestrian zones using a laser [12]. Coelingh et al. [13,14] proposed a collision avoidance and automatic braking system using a car equipped with radar and a camera. Srinivasa et al. [15] proposed an improved CWS combining data from a forward-looking camera and a radar. Although these sensors have high accuracy, they cannot work robustly in bad weather such as rain, snow,

**Citation:** Wang, M.; Chen, X.; Jin, B.; Lv, P.; Wang, W.; Shen, Y. A Novel V2V Cooperative Collision Warning System Using UWB/DR for Intelligent Vehicles. *Sensors* **2021**, *21*, 3485. https://doi.org/10.3390/ s21103485

Academic Editor: Chao Huang

Received: 11 April 2021 Accepted: 14 May 2021 Published: 17 May 2021



and fog, nor can they effectively identify dangerous vehicles in visual blind spots. Many advanced algorithms have been proposed to overcome these sensor defects [16,17]. However, these algorithms are usually limited to particular scenarios, e.g., lane changing [18] and turning [19].

CCWS is an effective solution to this issue, combining traditional CWS with vehicle-to-infrastructure (V2I) and V2V communication [20]. In CCWS, the sensor defects of a single vehicle are compensated by acquiring information from other vehicles or from infrastructure. A V2V-based system shares information among the on-board units (OBU) of vehicles. In V2I systems, accidents and hazardous events are detected by roadside units (RSU) and sent to the OBUs of vehicles [21]. Since vehicles can communicate directly through V2V without depending on infrastructure, V2V is more suitable for CWS than V2I. Yang et al. [22] proposed a novel FCWS, which used license plate recognition and V2V communication to warn the drivers of both vehicles. Xiang et al. [23] proposed an FCWS based on dedicated short-range communication (DSRC) and the global positioning system (GPS). Yang et al. [24] proposed an FCWS combining the differential global positioning system (DGPS) and DSRC. Patra et al. [25] proposed a novel FCWS in which GPS provides the relative positioning information and vehicles communicate through a vehicular network integrated with smartphones. In general, CCWS can overcome the limitations of in-vehicle sensor-based CWS by sharing information such as vehicle speed, location, and heading with surrounding vehicles. However, current V2V-based CWSs implement relative positioning and communication separately using different technologies, e.g., predicting collisions with radars but communicating through Wi-Fi, which may degrade real-time performance.

To address this issue, a UWB-based CCWS seamlessly combines CWS and V2V without extra delay. UWB is a communication technology that uses nanosecond narrow pulse signals to transmit data and to measure distances, and it has become an effective transmission technology in location-aware sensor networks [26]. Inherently, UWB-based ranging has the advantage of high time resolution and can achieve centimeter-level ranging accuracy [27]. UWB is also more adaptable to different environments than the traditional sensors used in CWS [28]. There has been some research on UWB-based CCWS. Sun et al. [29] proposed a UWB/INS (inertial navigation system)-based automatic guided vehicle (AGV) collision avoidance system. Liu et al. [30] designed a vehicle collision-avoidance system based on UWB wireless sensor networks. Marianna et al. used UWB to obtain distance information and calculated the collision time to provide collision warnings for workers [31]. Kianfar et al. presented a CWS for underground mines, which predicted collisions using distances between workers and the mining vehicle measured by UWB [32]. In summary, existing UWB-based CCWSs follow two main technical routes, based on absolute positioning and relative positioning, respectively. The former is hard to popularize due to the small coverage area and high cost of base stations. For the latter, most existing research only considers the relative distance between targets rather than their positions and ignores information such as relative velocity and orientation.

To deal with the above problems, a CCWS based on UWB and DR is proposed in this paper. In the proposed system, relative positioning and communication are implemented simultaneously by UWB, which contributes to better real-time performance. Four UWB modules are installed on each vehicle, which makes it possible to calculate not only two-dimensional (2D) relative positions but also relative orientations. An over-constrained method is proposed to improve the positioning/directing accuracy. With the integration of DR, the accuracy and stability of the system are further improved, and the TTC can be estimated.

This paper is organized as follows: In Section 2, the three subsystems of the CCWS are introduced. Section 3 presents a simulation to evaluate the performance of the system. In Section 4, we conduct experiments and analyze the results. Finally, we summarize the conclusions in Section 5.

#### **2. Algorithm and Modeling**

The CCWS consists of three parts: the UWB-based relative positioning and directing system, the DR system based on wheel-speed sensors, and the TTC estimation system. In the following sections, the UWB-based relative positioning/directing system is shortened to the UWB system. In this section, the UWB and DR subsystems are established. Then, an EKF-based fusion algorithm is proposed to integrate UWB with DR, which significantly improves the accuracy of the relative position, orientation, and velocity. Finally, the TTC estimation method for several different collision scenarios is put forward.

#### *2.1. The Relative Positioning and Directing System*

According to the vehicle axis system specified by ISO 8855:2011 [33] and shown in Figure 1, the origin is located at the center of the automotive rear axle. The X-axis points forward and the Y-axis points to the left. In this paper, all proposed systems are established in this axis system.

**Figure 1.** Vehicle axis system.

Figure 2 shows the UWB system model. XOY represents the coordinate system of vehicle 1, and X'O'Y' represents the coordinate system of vehicle 2. Points 1, 2, 3, and 4 represent the UWB modules on vehicle 1, and points M, N, P, and Q represent the UWB modules on vehicle 2. The coordinate of each UWB module in its own vehicle axis system is known once installed. As in Figure 2, $X\_K = [x\_K, y\_K]^T$ is defined as the position of module K in the axis system of vehicle 1, and $X'\_K = [x'\_K, y'\_K]^T$ as its position in the axis system of vehicle 2, where $K \in \{1, 2, 3, 4, M, N, P, Q, C, O, O'\}$.

**Figure 2.** The UWB based relative positioning system model.

With the distances measured by UWB and the coordinates of the UWB modules, the relative position and orientation $[x, y, \beta]^T$ can be calculated. $[x, y]^T$ is the position of vehicle 2 in the axis system of vehicle 1, and $\beta$ is the relative orientation, i.e., the intersection angle between the two vehicles' driving directions.

As the ranging precision of UWB is very sensitive to non-line-of-sight (NLOS) conditions, not all UWB modules are usable at the same time. Therefore, only four modules, two on each vehicle, in line-of-sight (LOS) are selected at a time. The other modules are used to help distinguish between multiple solutions. Owing to the high time resolution and low multipath effect of UWB signals, it is not difficult to distinguish NLOS from LOS signals.

Figure 2 shows a typical driving scenario: vehicle 2 is changing lanes in front of vehicle 1. Apparently, a rear-end collision risk exists if vehicle 1 drives faster than vehicle 2 and does not brake. Since the CWS is especially necessary in this condition, we take it as an example to explain our algorithm. In this case, points 1, 2, M, and N are in LOS. Define $d\_1$, $d\_2$, $d\_3$, and $d\_4$ as the real distances shown in Figure 2, and $\hat{d}\_1$, $\hat{d}\_2$, $\hat{d}\_3$, and $\hat{d}\_4$ as the corresponding measurements ranged by UWB. Other known parameters include $X\_1 = [x\_1, y\_1]^T$, $X\_2 = [x\_2, y\_2]^T$, $X'\_M = [x'\_M, y'\_M]^T$, and $X'\_N = [x'\_N, y'\_N]^T$. Then, we have

$$\begin{cases} \begin{aligned} d\_1 &= \sqrt{\left(x\_M - x\_1\right)^2 + \left(y\_M - y\_1\right)^2} \\ d\_2 &= \sqrt{\left(x\_M - x\_2\right)^2 + \left(y\_M - y\_2\right)^2} \\ d\_3 &= \sqrt{\left(x\_N - x\_1\right)^2 + \left(y\_N - y\_1\right)^2} \\ d\_4 &= \sqrt{\left(x\_N - x\_2\right)^2 + \left(y\_N - y\_2\right)^2} \end{aligned} \end{cases} \tag{1}$$

As $d\_1$, $d\_2$, $d\_3$, and $d\_4$ are unknown, $\hat{d}\_1$, $\hat{d}\_2$, $\hat{d}\_3$, and $\hat{d}\_4$ are substituted into Equation (1) to obtain the estimated positions of M and N, $\hat{X}\_M = [\hat{x}\_M, \hat{y}\_M]^T$ and $\hat{X}\_N = [\hat{x}\_N, \hat{y}\_N]^T$. Then, the estimated distance between M and N can be calculated by Equation (2).

$$\hat{d}\_5 = \sqrt{\left(\hat{x}\_M - \hat{x}\_N\right)^2 + \left(\hat{y}\_M - \hat{y}\_N\right)^2} \tag{2}$$

However, when UWB modules are installed, the real distance between M and N is a determined constant, which can be calculated by Equation (3).

$$d\_5 = \sqrt{\left(\mathbf{x}\_M' - \mathbf{x}\_N'\right)^2 + \left(y\_M' - y\_N'\right)^2} \tag{3}$$

When ranging error exists, $\hat{d}\_5 \neq d\_5$. In order to obtain the least squares (LS) solution that best fits all the measured distances, we rewrite Equation (1) as Equation (4).

$$\begin{cases} \begin{aligned} d\_1 &= \sqrt{\left(x\_M - x\_1\right)^2 + \left(y\_M - y\_1\right)^2} \\ d\_2 &= \sqrt{\left(x\_M - x\_2\right)^2 + \left(y\_M - y\_2\right)^2} \\ d\_3 &= \sqrt{\left(x\_N - x\_1\right)^2 + \left(y\_N - y\_1\right)^2} \\ d\_4 &= \sqrt{\left(x\_N - x\_2\right)^2 + \left(y\_N - y\_2\right)^2} \\ d\_5 &= \sqrt{\left(x\_M - x\_N\right)^2 + \left(y\_M - y\_N\right)^2} \end{aligned} \end{cases} \tag{4}$$

Notably, this is an overdetermined nonlinear equation set with five equations and four unknowns. When ranging error exists, the equation set has no exact solution. We define the function $g$ as shown in Equation (5).

$$\begin{aligned} g(\mathbf{x}\_{M}, \mathbf{y}\_{M}, \mathbf{x}\_{N}, \mathbf{y}\_{N}) &= \left(\hat{d}\_{1} - \sqrt{\left(\mathbf{x}\_{M} - \mathbf{x}\_{1}\right)^{2} + \left(\mathbf{y}\_{M} - \mathbf{y}\_{1}\right)^{2}}\right)^{2} \\ &+ \left(\hat{d}\_{2} - \sqrt{\left(\mathbf{x}\_{M} - \mathbf{x}\_{2}\right)^{2} + \left(\mathbf{y}\_{M} - \mathbf{y}\_{2}\right)^{2}}\right)^{2} \\ &+ \left(\hat{d}\_{3} - \sqrt{\left(\mathbf{x}\_{N} - \mathbf{x}\_{1}\right)^{2} + \left(\mathbf{y}\_{N} - \mathbf{y}\_{1}\right)^{2}}\right)^{2} \\ &+ \left(\hat{d}\_{4} - \sqrt{\left(\mathbf{x}\_{N} - \mathbf{x}\_{2}\right)^{2} + \left(\mathbf{y}\_{N} - \mathbf{y}\_{2}\right)^{2}}\right)^{2} \\ &+ \left(\hat{d}\_{5} - \sqrt{\left(\mathbf{x}\_{M} - \mathbf{x}\_{N}\right)^{2} + \left(\mathbf{y}\_{M} - \mathbf{y}\_{N}\right)^{2}}\right)^{2} \end{aligned} \tag{5}$$

Then, the positioning algorithm is converted to an optimization problem with the optimized objective function *g*. According to the first-order necessary condition of optimization problems, the partial derivative of the function *g* should be zero, which is

$$\frac{\partial \mathbf{g}}{\partial \mathbf{x}\_M} = \frac{\partial \mathbf{g}}{\partial y\_M} = \frac{\partial \mathbf{g}}{\partial \mathbf{x}\_N} = \frac{\partial \mathbf{g}}{\partial y\_N} = 0. \tag{6}$$

Several sets of local optimal solutions may be derived from Equation (6). Define $\left[x\_M^\*, y\_M^\*, x\_N^\*, y\_N^\*\right]^T$ as the global LS solution that minimizes the objective function $g$. Then, we have

$$\left[x\_M^\*, y\_M^\*, x\_N^\*, y\_N^\*\right]^T = \operatorname\*{argmin}\_{x\_M, y\_M, x\_N, y\_N} \, g\left(x\_M, y\_M, x\_N, y\_N\right).\tag{7}$$

The solutions of Equation (7) are much more accurate than those of Equation (1), as the simulation in Section 3 will show. When no real solution can be obtained from Equation (7), we can fall back to Equation (1) instead.
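As a concrete sketch, the minimization of Equations (5)–(7) can be carried out with a Gauss–Newton iteration over the four unknowns. The function and variable names below are illustrative, not from the paper; the initial guess stands in for the branch-selection logic of Figure 3, and the stacked residuals correspond to Equation (4).

```python
import numpy as np

def locate_modules(anchors, d_hat, d5, p0, iters=100):
    """Gauss-Newton solution of the over-constrained set, Equation (7).

    anchors : (2, 2) array with the positions of modules 1 and 2 on vehicle 1.
    d_hat   : measured ranges [d1 (1-M), d2 (2-M), d3 (1-N), d4 (2-N)].
    d5      : known constant distance between modules M and N.
    p0      : initial guess [xM, yM, xN, yN] (selects the solution branch).
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        M, N = p[:2], p[2:]
        norms = [np.linalg.norm(M - anchors[0]), np.linalg.norm(M - anchors[1]),
                 np.linalg.norm(N - anchors[0]), np.linalg.norm(N - anchors[1]),
                 np.linalg.norm(M - N)]
        # Residuals: measured minus modeled distances (rows of Equation (4))
        r = np.array([d_hat[0] - norms[0], d_hat[1] - norms[1],
                      d_hat[2] - norms[2], d_hat[3] - norms[3], d5 - norms[4]])
        # Jacobian of the residuals with respect to [xM, yM, xN, yN]
        J = np.zeros((5, 4))
        J[0, :2] = -(M - anchors[0]) / norms[0]
        J[1, :2] = -(M - anchors[1]) / norms[1]
        J[2, 2:] = -(N - anchors[0]) / norms[2]
        J[3, 2:] = -(N - anchors[1]) / norms[3]
        J[4, :2] = -(M - N) / norms[4]
        J[4, 2:] = (M - N) / norms[4]
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # least-squares step
        p = p + step
        if np.linalg.norm(step) < 1e-12:
            break
    return p
```

With noiseless ranges, the iteration recovers the true module positions exactly; with noisy ranges it returns the LS fit of Equation (7).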

In the example scenario, we obtain two sets of solutions that are symmetric about the line through points 1 and 2, as shown in Figure 3. To resolve this ambiguity, ranging information between other UWB modules can be drawn upon. For example, in Figure 3, the distances M4 and Q2 can be used to distinguish between the two sets of solutions.

**Figure 3.** Two sets of solutions.

After $\left[x\_M^\*, y\_M^\*, x\_N^\*, y\_N^\*\right]^T$ is solved, the relative orientation $\beta$ and position $[x, y]^T$ can be derived as

$$\begin{aligned} \beta &= \operatorname{atan2}\left(y\_M^\* - y\_N^\*, \, x\_M^\* - x\_N^\*\right) - \frac{\pi}{2} \\ \begin{bmatrix} x \\ y \end{bmatrix} &= \begin{bmatrix} \overline{x}^\* \\ \overline{y}^\* \end{bmatrix} - \begin{bmatrix} \cos(\beta) & -\sin(\beta) \\ \sin(\beta) & \cos(\beta) \end{bmatrix} \begin{bmatrix} \overline{x}' \\ \overline{y}' \end{bmatrix} \end{aligned} \tag{8}$$

$$\text{where } \begin{bmatrix} \overline{x}^\* \\ \overline{y}^\* \end{bmatrix} = \frac{1}{2} \begin{bmatrix} x\_M^\* + x\_N^\* \\ y\_M^\* + y\_N^\* \end{bmatrix}, \quad \begin{bmatrix} \overline{x}' \\ \overline{y}' \end{bmatrix} = \frac{1}{2} \begin{bmatrix} x'\_M + x'\_N \\ y'\_M + y'\_N \end{bmatrix},$$

and $\operatorname{atan2}(y, x)$ denotes the four-quadrant inverse tangent.
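For illustration, Equation (8) can be evaluated directly once the module positions are solved. The function below is a sketch with our own naming; it assumes, as in the Figure 2 layout, that the segment from N to M points along vehicle 2's +y' axis, which is what makes the −π/2 offset recover β.

```python
import math

def relative_pose(M_star, N_star, M_prime, N_prime):
    """Relative orientation and position of vehicle 2, Equation (8).

    M_star, N_star   : solved positions of modules M, N in vehicle 1's frame.
    M_prime, N_prime : known mounting positions of M, N in vehicle 2's frame.
    """
    # Orientation: segment N->M is rotated by beta from vehicle 2's +y' axis
    beta = math.atan2(M_star[1] - N_star[1], M_star[0] - N_star[0]) - math.pi / 2
    # Midpoints of segment MN in both frames
    xb = (M_star[0] + N_star[0]) / 2
    yb = (M_star[1] + N_star[1]) / 2
    xbp = (M_prime[0] + N_prime[0]) / 2
    ybp = (M_prime[1] + N_prime[1]) / 2
    # Subtract the rotated body-frame midpoint to get vehicle 2's origin
    x = xb - (math.cos(beta) * xbp - math.sin(beta) * ybp)
    y = yb - (math.sin(beta) * xbp + math.cos(beta) * ybp)
    return x, y, beta
```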

#### *2.2. The DR System Based on Wheel Speed Sensors*

The proposed DR system consists of four wheel-speed sensors, installed on the rear wheels of the two vehicles. According to the Ackermann steering model shown in Figure 4, the instantaneous center of rotation of a vehicle is located on the line of the rear axle. The velocity $v$, yaw rate $\omega$, and turning radius $r$ can be derived as shown in Equation (9).

$$\begin{cases} \begin{array}{c} v = \frac{v\_r + v\_l}{2} \\ \omega = \frac{v\_r - v\_l}{L} \\ r = \frac{v}{\omega} \end{array} \end{cases} \tag{9}$$

where $v\_r$ denotes the speed of the right rear wheel, $v\_l$ the speed of the left rear wheel, and $L$ the rear track width, i.e., the distance between the two rear wheels.

**Figure 4.** Ackermann steering model.

Then, the position $[x\_{t+\Delta t}, y\_{t+\Delta t}]^T$ and yaw angle $yaw\_{t+\Delta t}$ of the vehicle in the global axis system at time $t + \Delta t$ can be reckoned from $[x\_t, y\_t]^T$, $v\_t$, $\omega\_t$, and $yaw\_t$ at time $t$, as shown in Equation (10).

$$
\begin{bmatrix} x\_{t + \Delta t} \\ y\_{t + \Delta t} \\ yaw\_{t + \Delta t} \end{bmatrix} = \begin{bmatrix} x\_t + v\_t \Delta t \cos(yaw\_t) \\ y\_t + v\_t \Delta t \sin(yaw\_t) \\ yaw\_t + \omega \Delta t \end{bmatrix} \tag{10}
$$
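Equations (9) and (10) amount to one short dead-reckoning update. The sketch below combines them (the naming is ours, not the paper's), with `track` denoting the rear track width $L$:

```python
import math

def dr_step(x, y, yaw, v_r, v_l, track, dt):
    """One dead-reckoning update combining Equations (9) and (10)."""
    v = (v_r + v_l) / 2          # Equation (9): vehicle velocity
    omega = (v_r - v_l) / track  # Equation (9): yaw rate
    return (x + v * dt * math.cos(yaw),   # Equation (10): position update
            y + v * dt * math.sin(yaw),
            yaw + omega * dt)             # Equation (10): yaw update
```

Integrating this step alone accumulates drift, which is why the next subsection fuses it with the UWB measurements.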

#### *2.3. The EKF Based UWB/DR Fusion Model*

We define $X\_k$ as the state vector at time $k$. It contains the relative position/orientation $P\_k = [x\_k, y\_k, \beta\_k]^T$, as well as the yaw rates and velocities of the two vehicles, $S\_k = \left[\omega\_{1\_k}, \omega\_{2\_k}, v\_{1\_k}, v\_{2\_k}\right]^T$, which can be expressed as Equation (11).

$$X\_k = \left[x\_k, y\_k, \beta\_k, \omega\_{1\_k}, \omega\_{2\_k}, v\_{1\_k}, v\_{2\_k}\right]^T \tag{11}$$

We define $\Delta t$ as the time period from time $k-1$ to time $k$. $X\_k$ can be predicted from $X\_{k-1}$ based on the relative kinematic model shown in Figure 5. The state equation can be expressed on the basis of Equation (10) as Equation (12).

**Figure 5.** The relative kinematic model.

$$X\_{k} = \begin{bmatrix} x\_{k} \\ y\_{k} \\ \beta\_{k} \\ \omega\_{1\_{k}} \\ \omega\_{2\_{k}} \\ v\_{1\_{k}} \\ v\_{2\_{k}} \end{bmatrix} = f(X\_{k-1}, W) = \begin{bmatrix} C\cos(\theta) + D\sin(\theta) \\ -C\sin(\theta) + D\cos(\theta) \\ \beta\_{k-1} - \theta + \omega\_{2\_{k-1}}\Delta t + \frac{1}{2}W\_{\omega\_{2}}\Delta t^{2} \\ \omega\_{1\_{k-1}} + W\_{\omega\_{1}}\Delta t \\ \omega\_{2\_{k-1}} + W\_{\omega\_{2}}\Delta t \\ v\_{1\_{k-1}} + W\_{v\_{1}}\Delta t \\ v\_{2\_{k-1}} + W\_{v\_{2}}\Delta t \end{bmatrix},\tag{12}$$

where

$$\begin{aligned} C &= x\_{k-1} - v\_{1\_{k-1}}\Delta t + v\_{2\_{k-1}}\cos(\beta\_{k-1})\Delta t - W\_{v\_1}\Delta t^2/2 + W\_{v\_2}\cos(\beta\_{k-1})\Delta t^2/2, \\ D &= y\_{k-1} + v\_{2\_{k-1}}\sin(\beta\_{k-1})\Delta t + W\_{v\_2}\sin(\beta\_{k-1})\Delta t^2/2, \\ \theta &= \omega\_{1\_{k-1}}\Delta t + W\_{\omega\_1}\Delta t^2/2. \end{aligned}$$

Then, the transition matrix of the state vector *A* can be derived as Equation (13).

$$A = \frac{\partial f}{\partial X} = \begin{bmatrix} \cos(\theta) & \sin(\theta) & A\_{1,3} & A\_{1,4} & 0 & -\cos(\theta)\Delta t & A\_{1,7} \\ -\sin(\theta) & \cos(\theta) & A\_{2,3} & A\_{2,4} & 0 & \sin(\theta)\Delta t & A\_{2,7} \\ 0 & 0 & 1 & -\Delta t & \Delta t & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix},\tag{13}$$

where

$$\begin{split} &A\_{1,3} = -\cos(\theta)\sin(\beta\_{k-1})v\_{2\_{k-1}}\Delta t + \sin(\theta)\cos(\beta\_{k-1})v\_{2\_{k-1}}\Delta t, \\ &A\_{1,4} = -C\sin(\theta)\Delta t + D\cos(\theta)\Delta t, \\ &A\_{1,7} = \cos(\theta)\cos(\beta\_{k-1})\Delta t + \sin(\theta)\sin(\beta\_{k-1})\Delta t, \\ &A\_{2,3} = \sin(\theta)\sin(\beta\_{k-1})v\_{2\_{k-1}}\Delta t + \cos(\theta)\cos(\beta\_{k-1})v\_{2\_{k-1}}\Delta t, \\ &A\_{2,4} = -C\cos(\theta)\Delta t - D\sin(\theta)\Delta t, \\ &A\_{2,7} = -\sin(\theta)\cos(\beta\_{k-1})\Delta t + \cos(\theta)\sin(\beta\_{k-1})\Delta t. \end{split}$$

Similarly, the transition matrix of process noise is:

$$G = \frac{\partial f}{\partial W} = \begin{bmatrix} G\_{1,1} & 0 & G\_{1,3} & G\_{1,4} \\ G\_{2,1} & 0 & G\_{2,3} & G\_{2,4} \\ -\Delta t^2 / 2 & \Delta t^2 / 2 & 0 & 0 \\ \Delta t & 0 & 0 & 0 \\ 0 & \Delta t & 0 & 0 \\ 0 & 0 & \Delta t & 0 \\ 0 & 0 & 0 & \Delta t \end{bmatrix} \tag{14}$$

where

$$\begin{aligned} G\_{1,1} &= [-C\sin(\theta) + D\cos(\theta)]\Delta t^2 / 2, \\ G\_{1,3} &= -\cos(\theta)\Delta t^2 / 2, \\ G\_{1,4} &= [\cos(\theta)\cos(\beta\_{k-1}) + \sin(\theta)\sin(\beta\_{k-1})]\Delta t^2 / 2, \\ G\_{2,1} &= [-C\cos(\theta) - D\sin(\theta)]\Delta t^2 / 2, \\ G\_{2,3} &= \sin(\theta)\Delta t^2 / 2, \\ G\_{2,4} &= [-\sin(\theta)\cos(\beta\_{k-1}) + \cos(\theta)\sin(\beta\_{k-1})]\Delta t^2 / 2. \end{aligned}$$

The error covariance matrix *Q* of process noise consists of error covariances of speeds and yaw rates, that is:

$$Q = \text{cov}(\mathcal{W}) = \begin{bmatrix} \sigma\_{\omega\_1}^2 & 0 & 0 & 0 \\ 0 & \sigma\_{\omega\_2}^2 & 0 & 0 \\ 0 & 0 & \sigma\_{v\_1}^2 & 0 \\ 0 & 0 & 0 & \sigma\_{v\_2}^2 \end{bmatrix}. \tag{15}$$

Thus, the predicting process of the model is:

$$\begin{aligned} \hat{X}\_{k}^{-} &= f\left(\hat{X}\_{k-1}\right) \\ P\_{k}^{-} &= A P\_{k-1} A^{T} + G Q G^{T}. \end{aligned} \tag{16}$$
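The prediction of Equation (16) has the standard EKF form. The sketch below is generic (our function names): `f` stands for the nonlinear state transition of Equation (12), and `A`, `G`, `Q` for the matrices of Equations (13)–(15).

```python
import numpy as np

def ekf_predict(x, P, f, A, G, Q):
    """EKF prediction step, Equation (16)."""
    x_pred = f(x)                       # propagate the state nonlinearly
    P_pred = A @ P @ A.T + G @ Q @ G.T  # propagate the covariance linearly
    return x_pred, P_pred
```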

We define $Z\_k$ as the observation vector, containing the relative position and orientation of vehicle 2 measured by the UWB system and the four wheel speeds measured by the DR system, corrupted by the observation noise $V\_k$. Then, the observation equation can be expressed as Equation (17).

$$Z\_k = \begin{bmatrix} x\_{UWB,k} \\ y\_{UWB,k} \\ \beta\_{UWB,k} \\ v\_{r1,k} \\ v\_{l1,k} \\ v\_{r2,k} \\ v\_{l2,k} \end{bmatrix} = H X\_k + V\_k \tag{17}$$

Referring to Equation (9), the velocities and yaw rates of the two vehicles can be expressed by the velocities measured by wheel-speed sensors as Equation (18).

$$
\begin{bmatrix} v\_{1r} \\ v\_{1l} \\ v\_{2r} \\ v\_{2l} \end{bmatrix} = \begin{bmatrix} L\_1/2 & 0 & 1 & 0 \\ -L\_1/2 & 0 & 1 & 0 \\ 0 & L\_2/2 & 0 & 1 \\ 0 & -L\_2/2 & 0 & 1 \end{bmatrix} \begin{bmatrix} \omega\_1 \\ \omega\_2 \\ v\_1 \\ v\_2 \end{bmatrix} \tag{18}
$$

Then, the Jacobian matrix *H* is obtained as Equation (19).

$$H = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & L\_1/2 & 0 & 1 & 0 \\ 0 & 0 & 0 & -L\_1/2 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & L\_2/2 & 0 & 1 \\ 0 & 0 & 0 & 0 & -L\_2/2 & 0 & 1 \end{bmatrix} \tag{19}$$

The estimating process is:

$$\begin{aligned} K\_k &= P\_k^- H^T \left( H P\_k^- H^T + R \right)^{-1} \\ \hat{X}\_k &= \hat{X}\_k^- + K\_k \left( Z\_k - H \hat{X}\_k^- \right) \\ P\_k &= P\_k^- - K\_k H P\_k^- \,. \end{aligned} \tag{20}$$

In Equation (20), *R* represents the error covariance matrix of *Zk*. It can be divided into the error covariance matrix of the UWB system *RUWB* and the error covariance matrix of the DR system *RDR*. That is:

$$R = \text{cov}(V\_k) = \begin{bmatrix} R\_{UWB} & 0 \\ 0 & R\_{DR} \end{bmatrix} \tag{21}$$
 

$$\text{where } R\_{UWB} = \begin{bmatrix} \sigma\_x^2 & 0 & 0 \\ 0 & \sigma\_y^2 & 0 \\ 0 & 0 & \sigma\_\beta^2 \end{bmatrix}, \quad R\_{DR} = \begin{bmatrix} \sigma\_{v\_{r1}}^2 & 0 & 0 & 0 \\ 0 & \sigma\_{v\_{l1}}^2 & 0 & 0 \\ 0 & 0 & \sigma\_{v\_{r2}}^2 & 0 \\ 0 & 0 & 0 & \sigma\_{v\_{l2}}^2 \end{bmatrix}.$$
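The estimating process of Equation (20) is likewise a standard Kalman correction. A minimal sketch with illustrative naming, taking the block-diagonal $R$ of Equation (21) as given:

```python
import numpy as np

def ekf_update(x_pred, P_pred, z, H, R):
    """EKF measurement update, Equation (20)."""
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)    # corrected state estimate
    P = P_pred - K @ H @ P_pred          # corrected covariance
    return x, P
```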

$R\_{DR}$ is determined directly by the measurement errors of the wheel-speed sensors, whereas $R\_{UWB}$ is determined by the positioning and directing errors, which are in turn determined by the ranging errors of the UWB modules. Define $D = [d\_1, d\_2, d\_3, d\_4]^T$. On the basis of Equation (5), we can derive the relationship between the deviation of $D$ and the deviation of the UWB modules' positions $X\_M$ and $X\_N$ as Equation (22). $d\_5$ is excluded because it is not a measurement but a constant, which means $\mathrm{d}d\_5 = 0$.

$$\mathrm{d}D = \begin{bmatrix} \mathrm{d}d\_1 \\ \mathrm{d}d\_2 \\ \mathrm{d}d\_3 \\ \mathrm{d}d\_4 \end{bmatrix} = \frac{\partial D}{\partial (x\_M, y\_M, x\_N, y\_N)} \begin{bmatrix} \mathrm{d}x\_M \\ \mathrm{d}y\_M \\ \mathrm{d}x\_N \\ \mathrm{d}y\_N \end{bmatrix} = F\_D \begin{bmatrix} \mathrm{d}x\_M \\ \mathrm{d}y\_M \\ \mathrm{d}x\_N \\ \mathrm{d}y\_N \end{bmatrix},\tag{22}$$

$F\_D$ can be derived as Equation (23).

$$F\_{D} = \begin{bmatrix} \frac{x\_{M} - x\_{1}}{d\_{1}} & \frac{y\_{M} - y\_{1}}{d\_{1}} & 0 & 0\\ \frac{x\_{M} - x\_{2}}{d\_{2}} & \frac{y\_{M} - y\_{2}}{d\_{2}} & 0 & 0\\ 0 & 0 & \frac{x\_{N} - x\_{1}}{d\_{3}} & \frac{y\_{N} - y\_{1}}{d\_{3}}\\ 0 & 0 & \frac{x\_{N} - x\_{2}}{d\_{4}} & \frac{y\_{N} - y\_{2}}{d\_{4}} \end{bmatrix},\tag{23}$$

where $d\_1 = \hat{d}\_1$, $d\_2 = \hat{d}\_2$, $d\_3 = \hat{d}\_3$, $d\_4 = \hat{d}\_4$, $x\_M = x\_M^\*$, $y\_M = y\_M^\*$, $x\_N = x\_N^\*$, $y\_N = y\_N^\*$.

From Equation (8), we can obtain the relationship between the deviation of the vehicle position and orientation $X\_{UWB} = [x, y, \beta]^T$ and the deviation of the UWB modules' positions $X\_M$ and $X\_N$ as Equation (24).

$$\mathrm{d}X\_{UWB} = \begin{bmatrix} \mathrm{d}x \\ \mathrm{d}y \\ \mathrm{d}\beta \end{bmatrix} = \frac{\partial X\_{UWB}}{\partial (x\_M, y\_M, x\_N, y\_N)} \begin{bmatrix} \mathrm{d}x\_M \\ \mathrm{d}y\_M \\ \mathrm{d}x\_N \\ \mathrm{d}y\_N \end{bmatrix} = F\_{X\_{UWB}} \begin{bmatrix} \mathrm{d}x\_M \\ \mathrm{d}y\_M \\ \mathrm{d}x\_N \\ \mathrm{d}y\_N \end{bmatrix}. \tag{24}$$

$F\_{X\_{UWB}}$ can be derived as Equation (25).

$$F\_{X\_{UWB}} = \begin{bmatrix} F\_{1,1} & F\_{1,2} & F\_{1,3} & F\_{1,4} \\ F\_{2,1} & F\_{2,2} & F\_{2,3} & F\_{2,4} \\ -\frac{y\_M - y\_N}{d\_5^2} & \frac{x\_M - x\_N}{d\_5^2} & \frac{y\_M - y\_N}{d\_5^2} & -\frac{x\_M - x\_N}{d\_5^2} \end{bmatrix},\tag{25}$$

where

$$\begin{split} &F\_{1,1} = 1/2 + \left[x'\_M\cos(\beta) - y'\_M\sin(\beta)\right](y\_M - y\_N)/d\_5^2, \\ &F\_{1,2} = -\left[x'\_M\cos(\beta) - y'\_M\sin(\beta)\right](x\_M - x\_N)/d\_5^2, \\ &F\_{1,3} = 1/2 - \left[x'\_M\cos(\beta) - y'\_M\sin(\beta)\right](y\_M - y\_N)/d\_5^2, \\ &F\_{1,4} = \left[x'\_M\cos(\beta) - y'\_M\sin(\beta)\right](x\_M - x\_N)/d\_5^2, \\ &F\_{2,1} = \left[y'\_M\cos(\beta) + x'\_M\sin(\beta)\right](y\_M - y\_N)/d\_5^2, \\ &F\_{2,2} = 1/2 - \left[y'\_M\cos(\beta) + x'\_M\sin(\beta)\right](x\_M - x\_N)/d\_5^2, \\ &F\_{2,3} = -\left[y'\_M\cos(\beta) + x'\_M\sin(\beta)\right](y\_M - y\_N)/d\_5^2, \\ &F\_{2,4} = 1/2 + \left[y'\_M\cos(\beta) + x'\_M\sin(\beta)\right](x\_M - x\_N)/d\_5^2, \\ &x'\_{MN} = \left(x'\_M + x'\_N\right)/2, \; y'\_{MN} = \left(y'\_M + y'\_N\right)/2, \\ &x\_M = x\_M^\*, \; y\_M = y\_M^\*, \; x\_N = x\_N^\*, \; y\_N = y\_N^\*. \end{split}$$

Then, *RUWB* can be expressed as Equation (26).

$$R\_{UWB} = F\_{X\_{UWB}} \left( F\_D^T F\_D \right)^{-1} F\_D^T R\_D F\_D \left( F\_D^T F\_D \right)^{-1} F\_{X\_{UWB}}^T \tag{26}$$

where $R\_D = \mathrm{diag}\left(\sigma\_{d\_1}^2, \sigma\_{d\_2}^2, \sigma\_{d\_3}^2, \sigma\_{d\_4}^2\right)$ is determined directly by the UWB ranging error covariances.

#### *2.4. The Collision Warning Model*

CWS mainly works in two ways: headway measurement warning (HMW) and TTC-based warning [34]. Both need to measure the distance to the front vehicle, but they estimate the collision time with different speeds, as shown in Equation (27).

$$\begin{aligned} \textit{CollisionTime}\_{HMW} &= \frac{\textit{Headway}}{v\_{RearVehicle}} \\ \textit{TTC} &= \frac{\textit{Headway}}{v\_{RearVehicle} - v\_{FrontVehicle}} \end{aligned} \tag{27}$$

The TTC-based method takes the relative velocity into account, so it provides a more accurate collision warning. Since the proposed system allows vehicles to share information such as velocities through UWB, the TTC method is clearly the better choice.
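A minimal sketch of the two warning criteria in Equation (27) (the function names are ours; the TTC branch also guards the non-closing case, which Equation (27) leaves undefined):

```python
def headway_collision_time(headway, v_rear):
    """HMW criterion of Equation (27): gap divided by the follower's speed."""
    return headway / v_rear

def time_to_collision(headway, v_rear, v_front):
    """TTC criterion of Equation (27): gap divided by the closing speed."""
    closing = v_rear - v_front
    if closing <= 0:
        return float("inf")  # not closing in: no collision predicted
    return headway / closing
```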

Two vehicles driving on the road may collide in various ways, such as head-on collisions, rear-end collisions, and side collisions. Different kinds of collisions may happen at different times, which means all cases must be considered to obtain the exact TTC. Before establishing the collision warning model, we simplify the shape of a vehicle to a rectangle. With this assumption, all kinds of collisions can be described as point-to-edge collisions; edge-to-edge and point-to-point collisions are special cases of point-to-edge collisions, as shown in Figure 6.

**Figure 6.** Collision types. (**a**) Point-to-edge collision; (**b**) Edge-to-edge collision; (**c**) Point-to-point collision.

After unifying the different collision types, TTC can be calculated in the same way. We take the collision type shown in Figure 7 as an example. In this case, the front left corner of vehicle 2 collides with the right edge of vehicle 1. As defined in Section 2.1, the coordinates of a point in the axis system of vehicle 1 are expressed as $X_k = [x_k, y_k]^T$, and as $X'_k = [x'_k, y'_k]^T$ in the axis system of vehicle 2. $R_i$ ($i = 1,2,3,4$) represents the four corners of vehicle 1, and $F_i$ ($i = 1,2,3,4$) represents the four corners of vehicle 2. Therefore, the coordinate of $R_i$ is $X_{R_i} = [x_{R_i}, y_{R_i}]^T$, which is known by measuring the size of vehicle 1. Similarly, $X'_{F_i} = [x'_{F_i}, y'_{F_i}]^T$ is known by measuring the size of vehicle 2. The relative position $X = [x, y]^T$ and the relative orientation *β* are estimated by the UWB/DR system. Then, the coordinates of vehicle 2's corners in the axis system of vehicle 1 can be derived as Equation (28).

$$\left[X_{F_1}, X_{F_2}, X_{F_3}, X_{F_4}\right] = R\left[X'_{F_1}, X'_{F_2}, X'_{F_3}, X'_{F_4}\right] + X\left[1, 1, 1, 1\right] \tag{28}$$

where $R = \begin{bmatrix} \cos(\beta) & -\sin(\beta) \\ \sin(\beta) & \cos(\beta) \end{bmatrix}$.

**Figure 7.** The collision warning model.
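The corner transformation of Equation (28) can be sketched as follows (the function name is ours; this is an illustrative implementation only):

```python
import numpy as np

def corners_in_vehicle1_frame(corners_v2, x, y, beta):
    """Transform vehicle 2's corners from its own frame into
    vehicle 1's frame, as in Eq. (28): X_F = R X'_F + X.

    corners_v2 : (2, 4) array of corner coordinates [x'; y']
    x, y       : relative position of vehicle 2 in vehicle 1's frame
    beta       : relative orientation (rad)
    """
    R = np.array([[np.cos(beta), -np.sin(beta)],
                  [np.sin(beta),  np.cos(beta)]])
    # Broadcasting adds the offset X = [x, y]^T to every column.
    return R @ corners_v2 + np.array([[x], [y]])
```

With *β* = 0 the transformation reduces to a pure translation by [*x*, *y*], which is a convenient consistency check.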

We define all the points at the collision time as $R_{C_i}$ and $F_{C_i}$, with coordinates $X_{R_{C_i}} = [x_{R_{C_i}}, y_{R_{C_i}}]^T$ and $X_{F_{C_i}} = [x_{F_{C_i}}, y_{F_{C_i}}]^T$. The velocity vectors of the two vehicles are known from the UWB/DR system: $V_R = [v_R \cos(\beta_R), v_R \sin(\beta_R)]^T$ ($\beta_R = 0$) and $V_F = [v_F \cos(\beta_F), v_F \sin(\beta_F)]^T$ ($\beta_F = \beta$). Assume that point $F_i$ collides with the edge between $R_j$ and $R_k$ at time $t_{F_i,R_{jk}}$. Then $X_{F_{C_i}}$, $X_{R_{C_j}}$, and $X_{R_{C_k}}$ can be expressed as Equation (29).

$$\begin{aligned} X_{F_{C_i}} &= X_{F_i} + V_F\, t_{F_i,R_{jk}} \\ X_{R_{C_j}} &= X_{R_j} + V_R\, t_{F_i,R_{jk}} \\ X_{R_{C_k}} &= X_{R_k} + V_R\, t_{F_i,R_{jk}} \end{aligned} \tag{29}$$

That point $F_i$ collides with the edge between $R_j$ and $R_k$ means that $F_{C_i}$ lies on the segment $R_{C_j} R_{C_k}$, which can be expressed as Equation (30).

$$\overrightarrow{F_{C_i} R_{C_j}} \cdot \overrightarrow{F_{C_i} R_{C_k}} = -\left\|\overrightarrow{F_{C_i} R_{C_j}}\right\| \left\|\overrightarrow{F_{C_i} R_{C_k}}\right\| \tag{30}$$

The solution *t* of Equation (30) is the collision time under the condition that a corner of vehicle 2 collides with an edge of vehicle 1, which covers 16 different conditions altogether. In the other 16 cases, in which corners of vehicle 1 collide with edges of vehicle 2, the collision times can be calculated similarly. Thus, 32 collision times can be calculated in total. Ignoring negative values, the minimum of the remaining values is the TTC. That is:

$$\begin{split} TTC = \min\left(t_{R_i,F_{jk}},\ t_{F_i,R_{jk}}\right),\ (i = 1, 2, 3, 4;\ jk = 12, 23, 34, 41), \\ t_{R_i,F_{jk}} \ge 0,\ t_{F_i,R_{jk}} \ge 0. \end{split} \tag{31}$$

When *TTC* → ∞ or *TTC* < 0, there is no risk of collision.
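The search over the 32 candidate collision times in Equations (29)–(31) can be sketched as follows. Instead of solving Equation (30) symbolically, this illustrative version works in the co-moving frame of the edge, where the segment is stationary, and checks that the point lands inside the segment; the function names and the numerical tolerance are ours, not the paper's:

```python
import numpy as np

def _cross2(u, v):
    """z-component of the 2D cross product."""
    return u[0] * v[1] - u[1] * v[0]

def point_edge_collision_time(p, v_p, a, b, v_e):
    """Time at which moving point p (velocity v_p) lands on segment a-b,
    which itself translates with velocity v_e. None if it never does
    for t >= 0 (equivalent to the segment condition of Eq. (30))."""
    rel_p = np.asarray(p, float)
    rel_v = np.asarray(v_p, float) - np.asarray(v_e, float)
    a, b = np.asarray(a, float), np.asarray(b, float)
    e = b - a
    # Collinearity cross(p + v t - a, e) = 0 is linear in t.
    c0 = _cross2(rel_p - a, e)
    c1 = _cross2(rel_v, e)
    if abs(c1) < 1e-12:
        return None                      # moving parallel to the edge
    t = -c0 / c1
    if t < 0:
        return None                      # collision lies in the past
    hit = rel_p + rel_v * t
    s = np.dot(hit - a, e) / np.dot(e, e)
    return t if 0.0 <= s <= 1.0 else None  # must land inside the segment

def estimate_ttc(corners1, v1, corners2, v2):
    """Minimum non-negative time over all 32 corner/edge pairings."""
    times = []
    for pts, vp, poly, ve in ((corners2, v2, corners1, v1),
                              (corners1, v1, corners2, v2)):
        for p in pts:
            for j in range(4):
                t = point_edge_collision_time(p, vp, poly[j],
                                              poly[(j + 1) % 4], ve)
                if t is not None:
                    times.append(t)
    return min(times) if times else float("inf")
```

For example, a stationary front rectangle with its rear face 10 m ahead of a rear rectangle closing at 2 m/s yields a TTC of 5 s, matching the point-to-edge reasoning above.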

#### **3. Simulation**

In this section, simulations are conducted to evaluate our algorithm. Firstly, the accuracy of the UWB positioning and directing system is validated by comparing the algorithm with and without the constraint of *d*5. Secondly, the accuracy of the UWB/DR fusion model based on EKF is compared to that of UWB and DR separately. Finally, a large number of driving scenarios are generated to evaluate the success rate of the CWS.

#### *3.1. Simulation of the Overconstrained UWB Positioning and Directing System*

In Section 2.1, a relative positioning/directing algorithm with the constraint of *d*<sup>5</sup> is proposed. Its performance is simulated in this section. Firstly, a driving scenario is established in the Driving Scenario Designer of MATLAB, as shown in Figure 8. The blue cube represents vehicle 1, and the red cube represents vehicle 2. The lines in blue and red denote their driving tracks. The kinematic parameters of the vehicles and the positions of the UWB modules and wheel-speed sensors in their own vehicle axis systems are defined in the model. The UWB ranging error is set to *σ<sup>d</sup>* = 0.05 m, and the wheel-speed error is set to *σ<sup>v</sup>* = 0.2 m/s, referring to the sensors we will use in the experiments. The calculated results of our algorithm are compared to the real values exported by the model.

**Figure 8.** The virtual scenario in the driving scenario designer.

Solutions of the algorithm with and without the constraint of *d*<sup>5</sup> are compared in Figure 9 and Table 1. The improvement in accuracy brought by the constraint of *d*<sup>5</sup> is clearly visible, especially for *x* and *β*. In Table 1, the root mean square error (RMSE) is used to compare their accuracy quantitatively.

**Figure 9.** Comparison of relative positioning and directing algorithm with and without the constraint of *d*5: (**a**) The relative longitudinal position *x*; (**b**) The relative lateral position *y*; (**c**) The relative orientation *β*.



#### *3.2. Simulation of the UWB/DR Fusion Algorithm*

We also take the scenario in Section 3.1 as an example to validate the performance of the UWB/DR fusion algorithm. The comparison results are shown in Figure 10 and Table 2. The proposed UWB/DR fusion method based on EKF significantly improves the accuracy and stability of positioning and directing.

**Figure 10.** Comparison of positioning and directing performance using UWB and fusion of UWB/DR: (**a**) The relative longitudinal position *x*; (**b**) The relative lateral position *y*; (**c**) The relative orientation *β*.


Figures 11 and 12 and Table 3 compare the accuracy of the yaw rates and velocities estimated by UWB/DR to those measured by DR alone. They are also improved significantly, which contributes to the better TTC prediction accuracy in the next section.

**Figure 11.** Comparison of yaw rates measured by DR and estimated by UWB/DR: (**a**) Yaw rate of vehicle 1; (**b**) Yaw rate of vehicle 2.


**Figure 12.** Comparison of velocities measured by DR and estimated by UWB/DR: (**a**) Velocity of vehicle 1; (**b**) Velocity of vehicle 2.

**Table 3.** RMSE of velocities and yaw rates estimated by DR and UWB/DR.


#### *3.3. Simulation of CWS based on TTC Estimation*

In this section, we generate a large number of driving scenarios with different velocities, relative positions, and relative orientations, as shown in Figure 13. The ranges of the parameters are set as outlined in Table 4.

**Figure 13.** TTC simulating scenarios.

**Table 4.** Ranges of parameters in TTC simulation.


*TTCreal* is fixed once a scenario is established, and *TTCest*, estimated by the CWS, is calculated every 10 ms. The collision warning threshold is set to 3.0 s, meaning that when *TTCest* ≤ 3.0 s, the CWS will send an alert. *TTCerr* = *TTCest* − *TTCreal* denotes the TTC error at the warning time, as shown in Figure 14.

**Figure 14.** TTC estimation error.

In order to guarantee driving safety, we set 2.7 s as the latest warning time: if the system has not issued a warning when the vehicles are within 2.7 s of collision, the warning is evaluated as failed. In addition, in order not to disturb the driver too much, if the system sends an alert when the vehicles have no risk of collision within 4 s, we regard the warning as false. Then, *TTCerr* can be divided into three ranges corresponding to the three evaluations of collision warning, as in Equation (32).

$$TTC\_{err} \begin{cases} > 0.3 & \text{Failed} \\ \in [-1, 0.3] & \text{Correct} \\ < -1 & \text{False} \end{cases} \tag{32}$$
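Equation (32) maps directly to a small classifier (an illustrative sketch; the function name is ours):

```python
def evaluate_warning(ttc_err):
    """Classify a warning by its TTC error at warning time, per Eq. (32).

    ttc_err = TTC_est - TTC_real, with a 3.0 s warning threshold,
    a 2.7 s latest safe warning, and a 4.0 s earliest useful warning.
    """
    if ttc_err > 0.3:
        return "Failed"    # warned after TTC_real had dropped below 2.7 s
    if ttc_err >= -1.0:
        return "Correct"   # TTC_err within [-1, 0.3]
    return "False"         # warned while TTC_real was still above 4.0 s
```

Note that the 0.3 s and −1 s bounds are exactly the warning threshold (3.0 s) minus the latest (2.7 s) and earliest (4.0 s) acceptable warning times.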


The scenario marked with a gray background is the typical rear-end collision scenario, which is the most critical function of a collision warning system. One hundred and ninety-six rear-end collision scenarios are generated, and Table 5 shows the results. Of the 196 simulation scenarios, two are evaluated as false, meaning that the collision warning is triggered too early; all of the others perform correctly. This shows the reliability of the proposed CWS in the most common rear-end collision scenarios.

**Table 5.** Collision warning evaluation in rear-end scenarios.


Then, we simulate other scenarios in which the two vehicles drive in any lanes, from any positions, and in any directions defined in Figure 13 and Table 4. The results are shown in Table 6. Scenarios with an initial *TTCreal* of less than 3 s are not considered. Of the remaining 10,823 scenarios, 10,593 perform correctly, giving a collision warning success rate of 97.9%.


**Table 6.** Collision warning evaluation in random scenarios.

#### **4. Experiments**

In this section, the experiments are divided into two parts: straight driving experiments and curved driving experiments. The straight driving experiments are conducted according to JT/T883-2014, the standard for forward collision warning system (FCWS) experiments published by the Ministry of Transport of the People's Republic of China (MOT). As JT/T883-2014 only regulates straight driving experiments, curved driving experiments are conducted in addition to further validate the performance of our system. Since the CWS is implemented based on the UWB/DR relative positioning/directing system, the positioning/directing accuracy can reflect the performance of the CWS. Therefore, in the curved experiments, we drive through complex routes and compare the positioning/directing accuracy to the parameters of a commercial millimeter-wave radar (MMWR) used for collision warning.

#### *4.1. Experimental Equipment and Environment*

Figure 15 shows the equipment used in the experiments. Two vehicles are required in the experiments for relative positioning and directing. UWB modules are installed on the corners of the vehicles. Four wheel-speed sensors designed by our team are installed on the centers of the wheels. The wheel-speed measurements are transmitted to a receiver inside the vehicle wirelessly, which receives the velocity information from the four wheels and then sends it to the controller area network (CAN) bus. In the proposed system, only the speeds of the rear wheels are used. UWB modules are also developed by our team based on DW1000. Two vehicles share data through UWB. All data are transferred to the CAN bus and recorded by the computer using a USB-CAN adapter. A computing terminal receives sensor data from the CAN bus and calculates the relative position, direction, velocity, and TTC. Results from the computing terminal are compared to the measurement of a high-precision integrated positioning system, which combines dual-antenna real-time kinematic (RTK)-GPS and INS. The long-range radio (LoRa) antenna is used to receive differential signals from the RTK-GPS base station, which is installed in the testing ground. A total station is used to measure the relative coordinates of the UWB modules to the main RTK-GPS antenna. It should be noted that the main GPS antenna is not right above the center of the rear wheels. The deviation needs to be derived from measurements of the total station and compensated in the algorithm.

**Figure 15.** Experimental equipment.

Figure 16 shows the testing ground in which we conduct experiments. The driving routes of the two types of experiments are also marked in Figure 16.

**Figure 16.** The testing ground and vehicle driving routes.

#### *4.2. Straight Driving Experiments*

According to JT/T883-2014 [35], the experiments for FCWS consist of three tests, each repeated seven times. A test is evaluated as passed only if at least five of the seven runs pass and no two consecutive runs fail. In the standard experiments, the headway distances, velocities, and accelerations of the vehicles need to be controlled around specific values, so we designed software, shown in Figure 17, that displays the necessary parameters to help the drivers control the vehicles and records the necessary data. The TTC derived from the data of the RTK-GPS/INS is taken as the real TTC.


**Figure 17.** Vehicle state display software.

#### 4.2.1. Test 1

Test 1 is designed as shown in Figure 18. The rear vehicle drives at a speed of 72 km/h toward the parked front vehicle from an initial headway distance of 150 m. If the collision warning is triggered before the real TTC reaches 2.7 s, the test is passed; otherwise, it is failed.

**Figure 18.** Test 1 in the straight driving experiments.

#### 4.2.2. Test 2

Test 2 is designed as shown in Figure 19. The rear vehicle drives at a speed of 72 km/h toward the front vehicle, which drives at a speed of 32 km/h, from an initial headway distance of 150 m. If the collision warning is triggered before the real TTC reaches 2.1 s, the test is passed; otherwise, it is failed.

**Figure 19.** Test 2 in the straight driving experiments.

#### 4.2.3. Test 3

Test 3 is designed as shown in Figure 20. The rear vehicle drives at a speed of 72 km/h toward the front vehicle, which drives at a speed of 32 km/h and then decelerates at −0.3 g. If the collision warning is triggered before the real TTC reaches 2.4 s, the test is passed; otherwise, it is failed.

**Figure 20.** Test 3 in the straight driving experiments.

4.2.4. Results Analysis of the Straight Driving Experiments

During each test, two TTC values are calculated: (1) *TTCreal*, derived from the RTK-GPS/INS information, and (2) *TTCCWS*, estimated using the UWB/DR measurements. Since the terminating conditions of the three tests are different, to satisfy all three tests and reserve some margin, we set the warning threshold of *TTCCWS* to 3.0 s. The software in Figure 17 sends a warning when either *TTCreal* or *TTCCWS* reaches its marginal value. If *TTCreal* reaches the regulated marginal value while *TTCCWS* is still greater than 3.0 s, the test is terminated and evaluated as failed. The standard only regulates the minimum threshold of the collision warning time, not the maximum; in other words, it only cares about "how safe" the warning is, with no consideration of "how accurate" it is. However, as we explained in Section 3, warnings that come too early are annoying, so we set 4.0 s as the upper limit: if the CWS is triggered when *TTCreal* > 4.0 s, we also regard the test as failed. According to JT/T883-2014, each test is repeated seven times. Tables 7–9 show the results of the three tests, respectively.

**Table 7.** Results of Test 1.


**Table 8.** Results of Test 2.


**Table 9.** Results of Test 3.


According to Tables 7–9, all the tests were passed, which proves that the proposed system satisfies the requirements of the MOT and is able to provide collision warnings in time.

#### *4.3. Curved Driving Experiments*

JT/T883-2014 only regulates straight driving experiments and does not specify curved driving experiments. However, to further validate the superiority of our system, we conduct curved driving experiments and compare its accuracy to a commercial MMWR used in CWS, the Aptiv ESR 2.5 (Electronically Scanning Radar). The MMWR measures the relative distance, relative azimuth, and relative velocity. Table 10 shows the accuracy of the Aptiv ESR 2.5 according to its datasheet, where *ρ*, *θ*, and *v* represent the relative distance, azimuth angle, and velocity, respectively. The MMWR has two working modes, a middle-distance mode and a long-distance mode, with different accuracies.

**Table 10.** The accuracy of the MMWR.


In order to facilitate comparison, the curved experiments are also divided into a middle-distance experiment with vehicle distances within 50 m and a long-distance experiment with vehicle distances within 100 m. The estimated relative position [*x*, *y*] is converted to the polar coordinate [*ρ*, *θ*], and the velocities of the two vehicles [*v*1, *v*2] are converted to the relative velocity *v*, as shown in Figure 21. In addition, the relative orientation *β* cannot be measured by the MMWR directly.
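These conversions can be sketched as follows. We assume here that the relative velocity is the magnitude of the difference of the two velocity vectors, which the text does not spell out; the function names are ours:

```python
import math

def to_polar(x, y):
    """Convert the relative position [x, y] to polar [rho, theta]
    for comparison with the MMWR outputs (theta in degrees)."""
    return math.hypot(x, y), math.degrees(math.atan2(y, x))

def relative_speed(v1, v2):
    """Scalar relative velocity from the two velocity vectors
    (assumed convention: magnitude of the vector difference)."""
    return math.hypot(v2[0] - v1[0], v2[1] - v1[1])
```

For example, a relative position of [3, 4] m maps to *ρ* = 5 m at *θ* ≈ 53.13°.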

#### 4.3.1. Middle-Distance Experiments

The proposed CWS and the MMWR are both dynamic systems, so the vehicle distance is not kept constant but changes during the experiment. During the middle-distance experiment, the vehicle distance varies between 10 and 50 m. In our system, the vehicle distance is the distance between the rear axle centers of the two vehicles, so it cannot be zero.

**Figure 21.** The transformation from Cartesian coordinates to polar coordinates.

#### 4.3.2. Long-Distance Experiments

During the long-distance experiment, the vehicle distance varies between 10 and 100 m.

#### 4.3.3. Results Analysis of the Curved Experiments

According to Figures 22 and 23, the accuracy of the proposed system improves significantly after fusion, which agrees with the simulation results shown in Figure 12. The relative position is described in Cartesian coordinates [*x*, *y*] in the simulation but in polar coordinates [*ρ*, *θ*] in the experiments. Both *x* and *y* improve after fusion, as shown in Table 2, whereas only *θ*, not *ρ*, improves after fusion according to Tables 11 and 12. This is because an accuracy improvement in *θ* contributes to better accuracy in both *x* and *y*, as shown in Figure 21. Therefore, the experimental results are consistent with the simulation. The comparison of the proposed system and the MMWR is shown in Table 13, which combines Tables 11 and 12 with Table 10.



**Table 12.** The accuracy of the long-distance experiments.


**Table 13.** Accuracy comparison of the proposed system and the MMWR.


**Figure 22.** The results of the middle-distance experiments. (**a**) Relative distance; (**b**) Relative azimuth angle; (**c**) Relative velocity; (**d**) Relative orientation.

**Figure 23.** The results of the long-distance experiments. (**a**) Relative distance; (**b**) Relative azimuth angle; (**c**) Relative velocity; (**d**) Relative orientation.

The distance accuracy of the proposed system is always much better than that of the MMWR, with or without fusion. The azimuth accuracy without fusion is about 0.76° in both experiments, which is better than the MMWR in middle-distance mode but inferior to it in long-distance mode. However, the velocity accuracy without fusion is worse than the MMWR in both modes. As for the fusion system, the accuracy of relative distance and azimuth is significantly better than the MMWR, and the relative velocity accuracy also improves to a level similar to the MMWR in both middle- and long-distance modes. Table 14 shows the accuracy enhancement rates of the proposed system over the MMWR.


**Table 14.** Enhancement rates of the proposed system over the MMWR.

In addition, the proposed system can provide the relative orientation, which is not available directly in the MMWR system.

#### **5. Conclusions**

In this paper, we proposed a CWS combining UWB and DR. An improved relative positioning/directing algorithm based on UWB is presented, and a DR model based on the speeds of the rear wheels is established. Then, a fusion algorithm using an EKF is proposed to improve the accuracy of the relative position, orientation, and velocity. Afterwards, the advantages of the proposed system are preliminarily verified by simulation. Finally, experiments are conducted to further validate the performance of our system, and the experimental results are compared to a commercial MMWR used in CWS. The main conclusions are summarized as follows:


The main inadequacy of the proposed system is its velocity accuracy. Although it performs at the same level as the MMWR in this respect, it can be further improved. To facilitate the comparison between the proposed system and the MMWR, the velocity data of the experiments are shown as relative velocities. We also analyzed the accuracy of the absolute velocities of the two vehicles: in the middle-distance experiment, *RMSEv*<sup>1</sup> = 0.16 m/s and *RMSEv*<sup>2</sup> = 0.13 m/s, and in the long-distance experiment, *RMSEv*<sup>1</sup> = 0.16 m/s and *RMSEv*<sup>2</sup> = 0.16 m/s. Both are inferior to the simulation results. This is because the DR system is established on a theoretical Ackermann steering model, which ignores the stiffness of the suspensions and tires. In the actual situation, vehicle dynamic parameters such as side-slip angles also affect the precision of the algorithm. Therefore, our future research direction is a system with a more accurate vehicle dynamic model and with more sensors integrated, such as an IMU and GPS.

**Author Contributions:** Conceptualization, M.W. and Y.S.; Funding acquisition, X.C. and Y.S.; Investigation, M.W.; Methodology, M.W. and P.L.; Software, M.W., B.J. and P.L.; Validation, M.W. and B.J.; Writing—original draft, M.W.; Writing—review and editing, W.W. and Y.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by National Key R&D Program of China (Grant No. 2018YFB0104802) and Industry University Research Project of Shanghai Automotive Industry Science and Technology Development Foundation (Grant No. 1705).

**Informed Consent Statement:** Informed consent was obtained from all subjects involved in the study.

**Acknowledgments:** The authors are grateful to the subjects in the experiment and appreciate the reviewers for their helpful comments and suggestions in this study.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

