Article

Localization of Stereovision for Measuring In-Crash Toeboard Deformation

1 Department of Mechanical Engineering, University of Virginia, Charlottesville, VA 22903, USA
2 Monozukuri Center, Automobile Operations, Honda Motor Co., Ltd., Haga-gun, Tochigi 321-3322, Japan
* Author to whom correspondence should be addressed.
Sensors 2022, 22(8), 2962; https://doi.org/10.3390/s22082962
Submission received: 8 March 2022 / Revised: 4 April 2022 / Accepted: 8 April 2022 / Published: 12 April 2022

Abstract

This paper presents a technique to localize a stereo camera for in-crash toeboard deformation measurement. The proposed technique uses a sensor suite that carries not only the stereo camera but also inertial measurement units (IMUs) and an additional camera for localization. The pose of the stereo camera is recursively estimated from the measurements of the IMUs and the localization camera through an extended Kalman filter. The performance of the proposed approach was first investigated in a stepwise manner and then tested in controlled environments, including an actual vehicle crash test, in which the toeboard deformation during the crash was successfully measured. With the oscillatory motion occurring during the crash captured, the toeboard deformation measured by the stereo camera can be described in a fixed coordinate system.

1. Introduction

The incidence of vehicle crashes keeps increasing with the production of vehicles: more than six million crashes are now reported every year in the United States, about thirty percent of which involve fatalities or injuries [1]. A vehicle crash is a severe and complicated dynamic phenomenon owing to the complexity of the physical interaction between the structures resulting from the impact. It is indispensable that vehicles be crash-tested in indoor facilities and that the test results be used to design vehicles with improved crashworthiness [2,3,4,5,6]. In crash tests, one focus is the toeboard deformation [7,8], owing to the strong correlation between toeboard intrusion and lower-extremity injuries, as seen both in collision statistics [9,10] and in crash simulation [11]. A detailed analysis of the toeboard deformation during a crash test can disclose useful intricacies about the otherwise unknown effects of the crash phenomenon.
In general, past work on vehicle deformation measurement can be classified into two approaches. In the first approach, finite element analysis (FEA) and other computational mechanics analyses have been used to predict the deformation instead of measuring it. The advantage of numerical analysis is its ability to simulate all types of crash tests [12,13,14,15]. Hickey and Xiao [16] used a three-dimensional (3D) model of a commercial vehicle and performed FEA to examine the effects of the deformation during a car crash test. Employing a full-scale 3D model directly for crash analysis and subsequent computational design requires large computational loads; Yang et al. [17] therefore utilized the response surface method for a complex and dynamic large-deformation problem to accelerate the analysis and design process. Pre- and post-crash measurements from actual vehicle crash tests were used by Cheng et al. [18] and McClenathan et al. [19] to provide boundary conditions for FEA. Zhang et al. [20] reported the reconstruction of the deformation of a vehicle in a crash accident using high-performance parallel computing to incorporate an advanced elastic–plastic model. While FEA has grown ever more important, its ability to estimate detailed deformations such as the toeboard deformation is significantly limited because of various computational errors and the lack of actual measurements.
The second approach is based on direct measurement. Within this approach, vision-based measurement has become the most popular because of the ability of a camera to capture information over a continuous field [21,22,23,24,25]. Digital image correlation (DIC), originally proposed by Sutton et al. [26], projects an electronic speckle pattern onto the specimen and derives the deformation of the specimen from the displacement of the imaged speckle pattern; it has been widely employed in static environments for measuring small deformations such as structural strain [27], crack identification [28,29], and vibration of structures [30,31]. Schmidt et al. [32] employed a stereo-vision-based technique to measure the time-varying deformation of specimens in gas gun impact tests. As an alternative to DIC, Iliopoulos and Michopoulos [33] developed the mesh-free random grid technique to measure deformation using marked dots, later referred to as Dot Centroid Tracking (DCT) in comparison with DIC [34]. These techniques observe the deformation of a surface fixed to a static base from a static viewpoint. In general, while deformation measurement has been well studied, the existing techniques cannot be applied directly to toeboard deformation measurement because of the oscillatory motion of the cameras. Since the line of sight to the toeboard is significantly limited, the cameras must be fixed near the toeboard to some part of the vehicle body, which deforms and oscillates during the crash. The measurement of the toeboard through a stereo camera is therefore subject to the relative motion of the camera itself.
This paper presents a technique to localize a stereo camera for in-crash toeboard deformation measurement, together with a design to implement the technique in vehicle crash tests using recursive state estimation [35,36,37]. This state estimation technique further completes our previous work, which localized the sensor suite using only the camera pose obtained through a computer vision technique [7]; there, the excessive oscillation of the localization camera due to the non-rigidity of the sensor suite introduced extra errors into the measured toeboard deformation. In addition to the cameras for deformation measurement and localization, the proposed technique uses inertial measurement units (IMUs) for the localization. With all the sensors on a rigid structure, the proposed technique implements an extended Kalman filter (EKF) and estimates the pose of the stereo camera in the global coordinate frame using the observations of the downward camera and the IMUs. The in-crash toeboard deformation can thus be measured by subtracting the estimated stereo camera pose from the observed toeboard deformation. A sensor suite comprising the stereo camera, the downward camera, and the IMUs is designed and placed on the seat mount such that the rigid body assumption is valid, and the downward camera views the ground through a hole made in the vehicle floor.
This paper is organized as follows. The next section defines the state estimation problem of concern. Section 3 presents the proposed EKF-based technique to localize the stereo camera for the in-crash toeboard deformation measurement. Section 4 introduces the hardware design for our approach. The experimental results are then presented in Section 5. Conclusions and ongoing work are summarized in Section 6.

2. Sensor Suite and Localization Problem of Stereovision

2.1. Problem Formulation

Figure 1 illustrates the fundamental design of the sensor suite adopted in this paper and the major components that address the problem of localizing the stereo camera. The stereo camera is fixed to the camera fixture, which is assumed to be rigid, and located such that it views the toeboard. Additionally fixed to the camera fixture, adjacent to the cameras, are the downward camera and the IMUs, and a checkerboard is taped to the ground to localize the downward camera. Each IMU consists of a triaxial gyroscope and a triaxial accelerometer. While the stereo camera measures the toeboard deformation in its own camera coordinate frame, the downward camera and the IMUs are the sensors available for localizing the pose of the stereo camera. The stereovision localization problem is thus defined as identifying the stereo camera pose given the images of the downward camera and the readings of the IMUs.

2.2. Linear Acceleration of Sensor Suite

In accordance with rigid body dynamics, the acceleration $\mathbf{a}$ at the position of an IMU is related to that of the center of the sensor suite by

$$\mathbf{a} = \mathbf{a}_c + \dot{\boldsymbol{\omega}} \times \mathbf{r} + \boldsymbol{\omega} \times (\boldsymbol{\omega} \times \mathbf{r}), \tag{1}$$

where $\mathbf{a}_c$ is the acceleration of the center of the sensor suite body, $\boldsymbol{\omega}$ is the angular velocity of the sensor suite body frame, and $\mathbf{r}$ is the position of the IMU relative to the center.
Note that it is more convenient to describe the acceleration relation (1) in the body frame, and that the cross-product identity $U(\mathbf{v}_1 \times \mathbf{v}_2) = (U\mathbf{v}_1) \times (U\mathbf{v}_2)$ holds for a unitary matrix $U \in \mathbb{R}^{3\times 3}$. By left-multiplying the transformation matrix from the inertial frame to the body frame, the acceleration relation (1) can be rewritten as

$$\mathbf{a}^B = \mathbf{a}_c^B + \dot{\boldsymbol{\omega}}^B \times \mathbf{r}^B + \boldsymbol{\omega}^B \times (\boldsymbol{\omega}^B \times \mathbf{r}^B), \tag{2}$$

where $\mathbf{a}^B$, $\dot{\boldsymbol{\omega}}^B$, $\boldsymbol{\omega}^B$, and $\mathbf{r}^B$ are the linear acceleration, angular acceleration, angular velocity, and position in the body frame $(B)$.
Relating the linear accelerations at different positions of a rigid body requires the angular acceleration as input. This input can be derived from the measurements of the triaxial accelerometers and gyroscopes in the IMUs. Because the derivation of the angular acceleration at a point requires two IMU measurements, consider two IMUs in addition to the reference point as indicated in Figure 2. From Equation (1), the difference between the accelerations at IMU 2 and IMU 1 is given by

$$\mathbf{a}_2 - \mathbf{a}_1 = \dot{\boldsymbol{\omega}} \times (\mathbf{r}_2 - \mathbf{r}_1) + \boldsymbol{\omega}_2 \times (\boldsymbol{\omega}_2 \times \mathbf{r}_2) - \boldsymbol{\omega}_1 \times (\boldsymbol{\omega}_1 \times \mathbf{r}_1), \tag{3}$$

where $\mathbf{a}_1$, $\mathbf{a}_2$, $\boldsymbol{\omega}_1$, and $\boldsymbol{\omega}_2$ are the accelerations and angular velocities measured at IMUs 1 and 2, respectively, and $\mathbf{r}_1$ and $\mathbf{r}_2$ are the vectors from the reference point to them. The reference point for each pair of IMUs is chosen to be the midpoint of the IMU positions, $\mathbf{r}_1 = -\mathbf{r}_2 = -\frac{1}{2}\mathbf{r}_{12} \equiv -\frac{1}{2}\mathbf{r}$, to minimize the complexity and the error of the transformation. The substitution of the midpoint into Equation (3) yields

$$C(\mathbf{r})\,\dot{\boldsymbol{\omega}} = \mathbf{b} \equiv \mathbf{a}_1 - \mathbf{a}_2 + \tfrac{1}{2}\boldsymbol{\omega}_2 \times (\boldsymbol{\omega}_2 \times \mathbf{r}) + \tfrac{1}{2}\boldsymbol{\omega}_1 \times (\boldsymbol{\omega}_1 \times \mathbf{r}), \tag{4}$$

where the skew-symmetric cross-product matrix $C(\mathbf{r}): \mathbb{R}^3 \to \mathbb{R}^{3\times 3}$, satisfying $C(\mathbf{r})\mathbf{v} = \mathbf{r} \times \mathbf{v}$, is defined as

$$C(\mathbf{r}) = \begin{bmatrix} 0 & -r_3 & r_2 \\ r_3 & 0 & -r_1 \\ -r_2 & r_1 & 0 \end{bmatrix}, \tag{5}$$

with $\mathbf{r} = (r_1, r_2, r_3)^T$. The angular acceleration $\dot{\boldsymbol{\omega}}$ can thus be computed from the IMU measurements of $\mathbf{a}_1$, $\mathbf{a}_2$, $\boldsymbol{\omega}_1$, and $\boldsymbol{\omega}_2$ by solving the linear problem (4).
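To make this computation concrete, a minimal NumPy sketch of the solution of the linear problem (4) for one IMU pair is given below; the function names are ours, not from the paper. Note that $C(\mathbf{r})$ has rank 2, so the component of $\dot{\boldsymbol{\omega}}$ along $\mathbf{r}$ is unobservable from a single pair; the sketch therefore solves in the least-squares sense, and in practice measurements from several IMU pairs with non-parallel baselines would presumably be stacked.

```python
import numpy as np

def skew(r):
    """Cross-product matrix C(r) of Eq. (5): skew(r) @ v == np.cross(r, v)."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def angular_acceleration(a1, a2, w1, w2, r):
    """Solve C(r) @ w_dot = b, Eq. (4), for one IMU pair.

    a1, a2: accelerometer readings of IMUs 1 and 2 (body frame)
    w1, w2: gyroscope readings of IMUs 1 and 2 (body frame)
    r:      vector from IMU 1 to IMU 2
    """
    b = (a1 - a2
         + 0.5 * np.cross(w2, np.cross(w2, r))
         + 0.5 * np.cross(w1, np.cross(w1, r)))
    # skew(r) is rank 2, so solve in the least-squares sense; several IMU
    # pairs with non-parallel baselines would be stacked in practice.
    w_dot, *_ = np.linalg.lstsq(skew(r), b, rcond=None)
    return w_dot
```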

2.3. Localization of Downward Camera

The downward camera and the checkerboard taped to the ground are used because the ground is the closest static surrounding structure to the sensor suite and thus can be measured most accurately. As illustrated in Figure 3, the global frame can be set at one end of the checkerboard and used to localize the downward camera. The downward camera observes the checkerboard pattern and any other extractable features. By associating the pattern and the features between the global frame and the pixel frame, the pose of the downward camera with respect to the global frame can be identified after calibrating the camera parameters following the procedures proposed by Zhang [38] or Heikkilä and Silvén [39]. In particular, for the checkerboard attached to the ground, the checkerboard corner points are readily identified [40] and serve as the planar pattern for the camera pose measurement, as illustrated in Figure 3.
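For illustration, the corner detection and pose recovery could be sketched with OpenCV as follows. This is only a sketch under assumed parameters: the pattern size, tile width, and function names are our assumptions, not the paper's actual implementation, though the OpenCV calls themselves (findChessboardCorners, solvePnP, Rodrigues) are standard.

```python
import cv2
import numpy as np

def downward_camera_pose(image, K, dist, pattern=(9, 6), tile_w=0.02):
    """Estimate the camera pose from a checkerboard (sketch; names ours).

    K, dist: intrinsics and distortion from a prior calibration [38,39]
    pattern: number of inner corners per row/column (assumed layout)
    tile_w:  checkerboard tile width in meters (assumed value)
    """
    found, corners = cv2.findChessboardCorners(image, pattern)
    if not found:
        return None
    # Global-frame coordinates of the corners: a planar grid of pitch tile_w.
    obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * tile_w
    ok, rvec, tvec = cv2.solvePnP(obj, corners, K, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)          # rotation: global -> camera
    cam_pos = -R.T @ tvec               # camera position in the global frame
    return R, cam_pos
```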
For the global localization of the downward camera, let the pixel coordinates of the corner point $(i, j)$ at the $k$th time step be $\mathbf{p}_{k,ij}$. The pixel coordinates can be related to the corner point in the global frame by

$$\mathbf{p}_{k,ij} \;\leftrightarrow\; \left( \begin{bmatrix} \mathbf{N}_k \\ 0 \end{bmatrix} + \begin{bmatrix} i \\ j \\ 0 \end{bmatrix} \right) w, \tag{6}$$

where $w$ is the width of the tiles of the checkerboard, and $\mathbf{N}_k$ is the offset, in number of tiles, of the origin of the detected checkerboard at the current step relative to the origin at the first step, which can be accumulated by

$$\mathbf{N}_k = \mathbf{N}_{k-1} + \mathrm{Integer}\!\left( \frac{\boldsymbol{\Delta}_k + \mathbf{p}_{k,0} - \mathbf{p}_{k-1,0}}{\bar{\Delta}_{tile}} \right), \tag{7}$$

where $\boldsymbol{\Delta}_k$ is the motion between the two images in pixel coordinates, $\mathbf{p}_{k,0}$ is the origin of the detected checkerboard in the pixel coordinates of image $k$, and $\bar{\Delta}_{tile}$ is the average tile width in pixels. The initial offset is $\mathbf{N}_0 = (0, 0)^T$.
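A minimal sketch of the accumulation in Equation (7) is given below; the variable names and the rounding via np.rint are our assumptions for the "Integer" operation.

```python
import numpy as np

def update_tile_offset(N_prev, delta_k, p_k0, p_prev0, tile_px):
    """Accumulate the tile offset N_k per Eq. (7) (sketch; names ours).

    N_prev:        integer tile offset N_{k-1} (2-vector)
    delta_k:       inter-image motion in pixel coordinates
    p_k0, p_prev0: detected checkerboard origin in images k and k-1 (pixels)
    tile_px:       average tile width in pixels
    """
    return N_prev + np.rint((delta_k + p_k0 - p_prev0) / tile_px).astype(int)
```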
Figure 3. Scheme for downward camera localization.
The design and sensor settings adopted in this paper allow the motion of the sensor suite to be predicted with respect to the body frame and the pose of the downward camera to be measured in the global frame. While the downward camera is fixed to the sensor suite and can thus measure the sensor suite pose through the kinematic transformation, the pose predicted by the motion model with the angular acceleration measurement differs from the pose measured by the downward camera through the kinematic transformation. The primary reason is the error stemming from the motion prediction with angular acceleration measurements: the acceleration measurement is in general noisy, and its integration creates notable errors, which accumulate over time as dead-reckoning errors. In addition, the pose of the downward camera alone contains excessive oscillations because of the non-rigidity of the sensor frame in the crash test. Since the pose of the downward camera can be measured accurately owing to its short distance to the ground, it is essential to estimate the pose of the sensor suite, and thereby of the stereo camera, by integrating the motion prediction and this observation. The next section presents the proposed localization of the stereo camera, formulated in the framework of the EKF.

3. EKF-Based Localization of Stereo Camera for Toeboard Measurement

3.1. Overview

Figure 4 shows the EKF-based localization of the stereo camera proposed in this paper for toeboard deformation measurement. Since the camera fixture is assumed to be rigid and thus provides a fixed transformation from the body frame to the stereo camera, the quantities to be identified are the position and orientation of the body frame, $\{\mathbf{p}, \boldsymbol{\theta}\}$, where $\mathbf{p} \in \mathbb{R}^3$ is the position of the origin of the body frame and $\boldsymbol{\theta} \in \mathbb{R}^3$ is its Euler angle. To recursively estimate the motion of the sensor suite in the EKF framework, the state $\mathbf{x} \triangleq \{\mathbf{p}, \dot{\mathbf{p}}, \ddot{\mathbf{p}}, \boldsymbol{\theta}, \boldsymbol{\omega}^B, \dot{\boldsymbol{\omega}}^B\}$ includes not only the pose but also the velocity $\dot{\mathbf{p}}$ and acceleration $\ddot{\mathbf{p}}$, as well as the angular velocity $\boldsymbol{\omega}^B$ and angular acceleration $\dot{\boldsymbol{\omega}}^B$ in the body frame.
In accordance with the EKF, the primary processes of the estimation are the prediction and the correction. The prediction of the state at discrete time $k$, i.e., the derivation of the mean $\mathbf{x}_{k|k-1}$ and the covariance $P_{k|k-1}$, is performed using the motion model of the sensor suite and the state estimated at $k-1$, i.e., $\mathbf{x}_{k-1|k-1}$ and $P_{k-1|k-1}$. In the correction step, the predicted state is corrected to $\mathbf{x}_{k|k}$ and $P_{k|k}$ by fusing the measurements of the IMUs ($\mathbf{a}_k$ and $\boldsymbol{\omega}_k$), the downward camera pose ($\mathbf{p}^C_k$ and $\boldsymbol{\theta}^C_k$), and the angular acceleration ($\dot{\boldsymbol{\omega}}_k$) described in Sections 2.2 and 2.3. Although different sensors observe the same state variables, the Kalman gain is adjusted through the statistical properties of the motion and sensor noise via their covariances $P_{k|k-1}$ and $\Sigma_{k,v}$, providing the optimal estimate of each state variable. Once the pose of the sensor suite is globally estimated, the deformation of the toeboard can be measured with respect to the global frame or any other coordinate frame.
The operation of the EKF needs motion and sensor models. The discrete motion model of the sensor suite and the observation model subject to uncertainty are generically given by

$$\mathbf{x}_k = f(\mathbf{x}_{k-1}, \mathbf{w}_k), \quad \mathbf{z}_k = h(\mathbf{x}_k, \mathbf{v}_k), \tag{8}$$

where $f(\cdot)$ and $h(\cdot)$ are the respective motion and sensor models, and $\mathbf{w}_k$ and $\mathbf{v}_k$ represent the motion model noise and sensor model noise, respectively. The next two sections present the motion and sensor models developed in the proposed approach.

3.2. Motion Model of Sensor Suite

The motion of the sensor suite pose is determined by the motion and the deformation of the entire vehicle, which cannot be modeled and identified easily. Meanwhile, the sensor suite motion is constrained by the motion and the deformation of the vehicle, which means that the range of the sensor suite motion is bounded. With a short time step $\Delta t$, the proposed approach accordingly predicts the pose of the sensor suite by the random walk motion model as

$$\begin{aligned} \mathbf{p}_k &= \mathbf{p}_{k-1} + \Delta t\, \dot{\mathbf{p}}_{k-1} + \tfrac{1}{2}\Delta t^2 \left( \ddot{\mathbf{p}}_{k-1} + \boldsymbol{\epsilon}_{k,\ddot{p}} \right), \\ \dot{\mathbf{p}}_k &= \dot{\mathbf{p}}_{k-1} + \Delta t \left( \ddot{\mathbf{p}}_{k-1} + \boldsymbol{\epsilon}_{k,\ddot{p}} \right), \\ \ddot{\mathbf{p}}_k &= \ddot{\mathbf{p}}_{k-1} + \boldsymbol{\epsilon}_{k,\ddot{p}}, \\ \boldsymbol{\theta}_k &= \boldsymbol{\theta}_{k-1} + \Delta t\, E(\boldsymbol{\theta}_k) \left[ \boldsymbol{\omega}^B_{k-1} + \Delta t \left( \dot{\boldsymbol{\omega}}^B_{k-1} + \boldsymbol{\epsilon}_{k,\dot{\omega}} \right) \right], \\ \boldsymbol{\omega}^B_k &= \boldsymbol{\omega}^B_{k-1} + \Delta t \left( \dot{\boldsymbol{\omega}}^B_{k-1} + \boldsymbol{\epsilon}_{k,\dot{\omega}} \right), \\ \dot{\boldsymbol{\omega}}^B_k &= \dot{\boldsymbol{\omega}}^B_{k-1} + \boldsymbol{\epsilon}_{k,\dot{\omega}}, \end{aligned} \tag{9}$$

where $\boldsymbol{\epsilon}_{k,\ddot{p}}$ and $\boldsymbol{\epsilon}_{k,\dot{\omega}}$ are motion noises due to unresolved linear and angular acceleration, and $\boldsymbol{\theta} = (\phi, \theta, \varphi)^T$ are the Euler angles corresponding to the roll, pitch, and yaw motion of the sensor suite. In the equation,

$$E(\boldsymbol{\theta}) = \begin{bmatrix} 1 & \sin\phi \tan\theta & \cos\phi \tan\theta \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi / \cos\theta & \cos\phi / \cos\theta \end{bmatrix} \tag{10}$$

is the Euler angle rates matrix [41], whose multiplication with the body-fixed angular velocity yields the Euler angle rates. Through linearization, the motion model can be encapsulated into the canonical form

$$\mathbf{x}_k = F_k \mathbf{x}_{k-1} + V_k \mathbf{w}_k, \tag{11}$$

where $F_k$ and $V_k$ are the Jacobian matrices of the motion and of the motion noise term, respectively, given by

$$F_k = \begin{bmatrix} F_1 & 0_{9\times 9} \\ 0_{9\times 9} & F_{2,k} \end{bmatrix}, \quad V_k = \begin{bmatrix} V_1 & 0_{9\times 3} \\ 0_{9\times 3} & V_{2,k} \end{bmatrix}, \tag{12}$$

where

$$F_1 = \begin{bmatrix} I_3 & \Delta t\, I_3 & \tfrac{\Delta t^2}{2} I_3 \\ 0_3 & I_3 & \Delta t\, I_3 \\ 0_3 & 0_3 & I_3 \end{bmatrix}, \quad F_{2,k} = \begin{bmatrix} I_3 & \Delta t\, E(\boldsymbol{\theta}_k) & \Delta t^2 E(\boldsymbol{\theta}_k) \\ 0_3 & I_3 & \Delta t\, I_3 \\ 0_3 & 0_3 & I_3 \end{bmatrix}, \quad V_1 = \begin{bmatrix} \tfrac{\Delta t^2}{2} I_3 \\ \Delta t\, I_3 \\ I_3 \end{bmatrix}, \quad V_{2,k} = \begin{bmatrix} \Delta t^2 E(\boldsymbol{\theta}_k) \\ \Delta t\, I_3 \\ I_3 \end{bmatrix}.$$

Here, $\mathbf{w}_k \sim \mathcal{N}(\mathbf{0}, \Sigma_{k,w})$ is the motion noise, which is validly assumed to be Gaussian since $\Delta t$ is small.
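The assembly of $F_k$ and $V_k$ translates directly into code. The following NumPy sketch follows Equations (10)–(12); the helper names are ours.

```python
import numpy as np

def euler_rates_matrix(th):
    """E(theta) of Eq. (10); th = (roll phi, pitch theta, yaw psi)."""
    ph, t = th[0], th[1]
    return np.array([
        [1.0, np.sin(ph) * np.tan(t),  np.cos(ph) * np.tan(t)],
        [0.0, np.cos(ph),             -np.sin(ph)],
        [0.0, np.sin(ph) / np.cos(t),  np.cos(ph) / np.cos(t)]])

def motion_jacobians(th_k, dt):
    """Assemble F_k (18x18) and V_k (18x6) of Eq. (12) (sketch)."""
    I3, Z3 = np.eye(3), np.zeros((3, 3))
    E = euler_rates_matrix(th_k)
    F1 = np.block([[I3, dt * I3, 0.5 * dt**2 * I3],
                   [Z3, I3, dt * I3],
                   [Z3, Z3, I3]])
    F2 = np.block([[I3, dt * E, dt**2 * E],
                   [Z3, I3, dt * I3],
                   [Z3, Z3, I3]])
    V1 = np.vstack([0.5 * dt**2 * I3, dt * I3, I3])
    V2 = np.vstack([dt**2 * E, dt * I3, I3])
    F = np.block([[F1, np.zeros((9, 9))], [np.zeros((9, 9)), F2]])
    V = np.block([[V1, np.zeros((9, 3))], [np.zeros((9, 3)), V2]])
    return F, V
```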

3.3. Sensor Models for Localization of Sensor Suite

3.3.1. Sensor Models of Accelerometer

According to Equation (2), the sensor model outputting the linear acceleration with respect to the body frame can be approximated with Gaussian noise as

$$\mathbf{z}_{k,a} = h^{\ddot{p}}_{k,a}(\mathbf{x}_k, \mathbf{r}^B) + \mathbf{v}_a \triangleq R(\boldsymbol{\theta}_k)^T \ddot{\mathbf{p}}_k + \dot{\boldsymbol{\omega}}^B_k \times \mathbf{r}^B + \boldsymbol{\omega}^B_k \times (\boldsymbol{\omega}^B_k \times \mathbf{r}^B) + \mathbf{v}_a, \tag{13}$$

where $\mathbf{z}_{k,a}$ is the measurement of the linear acceleration, $\mathbf{r}^B$ is the position of the accelerometer in the body frame, and $\mathbf{v}_a \sim \mathcal{N}(\mathbf{0}, \Sigma_a)$ is the measurement noise of the accelerometer. $R(\boldsymbol{\theta})$ is the rotation matrix [41] that transforms the body frame to the inertial frame.

Because the acceleration is associated with the velocity and the position through integration, the measurement $\mathbf{z}_{k,a}$ can also be represented with those quantities. This introduces two additional sensor models:

$$\begin{aligned} \mathbf{z}_{k,a} &= h^{p}_{k,a}(\mathbf{x}_k, \mathbf{r}^B) + \mathbf{v}_a \triangleq R(\boldsymbol{\theta}_k)^T \frac{\mathbf{p}_k - 2\mathbf{p}_{k-1} + \mathbf{p}_{k-2}}{\Delta t^2} + \dot{\boldsymbol{\omega}}^B_k \times \mathbf{r}^B + \boldsymbol{\omega}^B_k \times (\boldsymbol{\omega}^B_k \times \mathbf{r}^B) + \mathbf{v}_a, \\ \mathbf{z}_{k,a} &= h^{\dot{p}}_{k,a}(\mathbf{x}_k, \mathbf{r}^B) + \mathbf{v}_a \triangleq R(\boldsymbol{\theta}_k)^T \frac{\dot{\mathbf{p}}_k - \dot{\mathbf{p}}_{k-1}}{\Delta t} + \dot{\boldsymbol{\omega}}^B_k \times \mathbf{r}^B + \boldsymbol{\omega}^B_k \times (\boldsymbol{\omega}^B_k \times \mathbf{r}^B) + \mathbf{v}_a. \end{aligned}$$

Let the measurements and the sensor models be described collectively as

$$\mathbf{z}^a_k = \begin{bmatrix} \mathbf{z}_{k,a} \\ \mathbf{z}_{k,a} \\ \mathbf{z}_{k,a} \end{bmatrix}, \quad h^a_k(\mathbf{x}_k) = \begin{bmatrix} h^{p}_{k,a} \\ h^{\dot{p}}_{k,a} \\ h^{\ddot{p}}_{k,a} \end{bmatrix}.$$

The corresponding Jacobian matrix is written as

$$H_{k,a} \triangleq \frac{\partial h^a_k}{\partial \mathbf{x}} = \begin{bmatrix} \frac{R(\boldsymbol{\theta})^T}{\Delta t^2} & 0_3 & 0_3 & \frac{\partial h^{p}_{k,a}}{\partial \boldsymbol{\theta}_k} & \frac{\partial h^{\ddot{p}}_{k,a}}{\partial \boldsymbol{\omega}^B_k} & -C(\mathbf{r}^B) \\ 0_3 & \frac{R(\boldsymbol{\theta})^T}{\Delta t} & 0_3 & \frac{\partial h^{\dot{p}}_{k,a}}{\partial \boldsymbol{\theta}_k} & \frac{\partial h^{\ddot{p}}_{k,a}}{\partial \boldsymbol{\omega}^B_k} & -C(\mathbf{r}^B) \\ 0_3 & 0_3 & R(\boldsymbol{\theta})^T & \frac{\partial h^{\ddot{p}}_{k,a}}{\partial \boldsymbol{\theta}_k} & \frac{\partial h^{\ddot{p}}_{k,a}}{\partial \boldsymbol{\omega}^B_k} & -C(\mathbf{r}^B) \end{bmatrix},$$

where $C(\cdot)$ is the skew-symmetric matrix in Equation (5) and

$$\frac{\partial h^{p}_{k,a}}{\partial \boldsymbol{\theta}_k} = \frac{\partial}{\partial \boldsymbol{\theta}} \left( R(\boldsymbol{\theta})^T \frac{\delta^2 \mathbf{p}_k}{\Delta t^2} \right), \quad \frac{\partial h^{\dot{p}}_{k,a}}{\partial \boldsymbol{\theta}_k} = \frac{\partial}{\partial \boldsymbol{\theta}} \left( R(\boldsymbol{\theta})^T \frac{\delta \dot{\mathbf{p}}_k}{\Delta t} \right), \quad \frac{\partial h^{\ddot{p}}_{k,a}}{\partial \boldsymbol{\theta}_k} = \frac{\partial}{\partial \boldsymbol{\theta}} \left( R(\boldsymbol{\theta})^T \ddot{\mathbf{p}}_k \right),$$

$$\frac{\partial h^{\ddot{p}}_{k,a}}{\partial \boldsymbol{\omega}^B_k} = -\left[ C(\boldsymbol{\omega}^B_k \times \mathbf{r}^B) + C(\boldsymbol{\omega}^B_k)\, C(\mathbf{r}^B) \right],$$

where $\delta$ represents the finite difference operation, with $\delta^2 \mathbf{p}_k = \mathbf{p}_k - 2\mathbf{p}_{k-1} + \mathbf{p}_{k-2}$ and $\delta \dot{\mathbf{p}}_k = \dot{\mathbf{p}}_k - \dot{\mathbf{p}}_{k-1}$.
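As a reference, the body-frame accelerometer model of Equation (13) transcribes into a few lines of code (a sketch; the argument names are ours):

```python
import numpy as np

def h_accel(R_wb, p_ddot, w_B, w_dot_B, r_B):
    """Accelerometer model of Eq. (13) (sketch).

    R_wb:         rotation matrix R(theta) from body to inertial frame
    p_ddot:       suite-center acceleration in the inertial frame
    w_B, w_dot_B: body-frame angular velocity and acceleration
    r_B:          accelerometer position in the body frame
    """
    return (R_wb.T @ p_ddot
            + np.cross(w_dot_B, r_B)
            + np.cross(w_B, np.cross(w_B, r_B)))
```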

3.3.2. Sensor Models of Gyroscope

Because the gyroscope measures the angular velocity, the proposed approach constructs two sensor models, outputting the angular velocity and the sensor suite orientation, respectively:

$$\mathbf{z}_{k,g} = h^{\omega}_{k,g}(\mathbf{x}_k) + \mathbf{v}_g \triangleq \boldsymbol{\omega}^B_k + \mathbf{v}_g, \quad \mathbf{z}_{k,g} = h^{\theta}_{k,g}(\mathbf{x}_k) + \mathbf{v}_g \triangleq E(\boldsymbol{\theta}_k)^{-1} \frac{\boldsymbol{\theta}_k - \boldsymbol{\theta}_{k-1}}{\Delta t} + \mathbf{v}_g,$$

where $\mathbf{v}_g$ is the gyroscope measurement noise, and the inverse of the Euler angle rates matrix is given by

$$E(\boldsymbol{\theta})^{-1} = \begin{bmatrix} 1 & 0 & -\sin\theta \\ 0 & \cos\phi & \sin\phi \cos\theta \\ 0 & -\sin\phi & \cos\phi \cos\theta \end{bmatrix}.$$

Let the measurements and the sensor models again be described collectively:

$$\mathbf{z}^g_k = \begin{bmatrix} \mathbf{z}_{k,g} \\ \mathbf{z}_{k,g} \end{bmatrix}, \quad h^g_k(\mathbf{x}_k) = \begin{bmatrix} h^{\theta}_{k,g} \\ h^{\omega}_{k,g} \end{bmatrix}.$$

The Jacobian matrix is given by

$$H_{k,g} \triangleq \frac{\partial h^g_k}{\partial \mathbf{x}} = \begin{bmatrix} 0_3 & 0_3 & 0_3 & \frac{E'(\delta\boldsymbol{\theta}_k) + E(\boldsymbol{\theta}_k)^{-1}}{\Delta t} & 0_3 & 0_3 \\ 0_3 & 0_3 & 0_3 & 0_3 & I_3 & 0_3 \end{bmatrix},$$

with

$$E'(\mathbf{v}) = \begin{bmatrix} 0 & -v_3 \cos\theta & 0 \\ -v_2 \sin\phi + v_3 \cos\phi \cos\theta & -v_3 \sin\phi \sin\theta & 0 \\ -v_2 \cos\phi - v_3 \sin\phi \cos\theta & -v_3 \cos\phi \sin\theta & 0 \end{bmatrix}, \tag{20}$$

and $\delta\boldsymbol{\theta}_k = \boldsymbol{\theta}_k - \boldsymbol{\theta}_{k-1}$ being the finite difference.
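For reference, the inverse Euler angle rates matrix transcribes directly into code (a sketch; the function name is ours):

```python
import numpy as np

def euler_rates_matrix_inv(th):
    """Inverse Euler angle rates matrix E(theta)^{-1}: maps Euler-angle
    rates to body-frame angular rates. th = (phi, theta, psi)."""
    ph, t = th[0], th[1]
    return np.array([
        [1.0, 0.0, -np.sin(t)],
        [0.0, np.cos(ph), np.sin(ph) * np.cos(t)],
        [0.0, -np.sin(ph), np.cos(ph) * np.cos(t)]])
```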

3.3.3. Sensor Models for Angular Acceleration

The derivation of the angular acceleration $\dot{\boldsymbol{\omega}}^B_k$ in Equation (13) from the accelerometers through Equations (3)–(5) additionally introduces sensor models for the angular acceleration. They are associated with the angular acceleration, the angular velocity, and the orientation as

$$\begin{aligned} \mathbf{z}_{k,\dot{\omega}} &= h^{\dot{\omega}}_{k,\dot{\omega}}(\mathbf{x}) + \mathbf{v}_{k,\dot{\omega}} \triangleq \dot{\boldsymbol{\omega}}^B_k + \mathbf{v}_{k,\dot{\omega}}, \\ \mathbf{z}_{k,\dot{\omega}} &= h^{\omega}_{k,\dot{\omega}}(\mathbf{x}) + \mathbf{v}_{k,\dot{\omega}} \triangleq \frac{\boldsymbol{\omega}^B_k - \boldsymbol{\omega}^B_{k-1}}{\Delta t} + \mathbf{v}_{k,\dot{\omega}}, \\ \mathbf{z}_{k,\dot{\omega}} &= h^{\theta}_{k,\dot{\omega}}(\mathbf{x}) + \mathbf{v}_{k,\dot{\omega}} \triangleq E(\boldsymbol{\theta})^{-1} \frac{\boldsymbol{\theta}_k - 2\boldsymbol{\theta}_{k-1} + \boldsymbol{\theta}_{k-2}}{\Delta t^2} + \mathbf{v}_{k,\dot{\omega}}, \end{aligned}$$

where $\mathbf{v}_{k,\dot{\omega}} \sim \mathcal{N}(\mathbf{0}, \Sigma_{\dot{\omega}})$ is the measurement noise of the angular acceleration. Let the measurements and the sensor models be described collectively:

$$\mathbf{z}^{\dot{\omega}}_k = \begin{bmatrix} \mathbf{z}_{k,\dot{\omega}} \\ \mathbf{z}_{k,\dot{\omega}} \\ \mathbf{z}_{k,\dot{\omega}} \end{bmatrix}, \quad h^{\dot{\omega}}_k(\mathbf{x}_k) = \begin{bmatrix} h^{\theta}_{k,\dot{\omega}} \\ h^{\omega}_{k,\dot{\omega}} \\ h^{\dot{\omega}}_{k,\dot{\omega}} \end{bmatrix}.$$

The Jacobian matrix is obtained as

$$H_{k,\dot{\omega}} \triangleq \frac{\partial h^{\dot{\omega}}_k(\mathbf{x}_k)}{\partial \mathbf{x}} = \begin{bmatrix} 0_3 & 0_3 & 0_3 & \frac{E'(\delta^2\boldsymbol{\theta}_k) + E(\boldsymbol{\theta}_k)^{-1}}{\Delta t^2} & 0_3 & 0_3 \\ 0_3 & 0_3 & 0_3 & 0_3 & \frac{I_3}{\Delta t} & 0_3 \\ 0_3 & 0_3 & 0_3 & 0_3 & 0_3 & I_3 \end{bmatrix},$$

with $E'(\cdot)$ defined in Equation (20) and $\delta^2\boldsymbol{\theta}_k = \boldsymbol{\theta}_k - 2\boldsymbol{\theta}_{k-1} + \boldsymbol{\theta}_{k-2}$.
While the sensor models have been constructed, the remaining modeling step for the angular acceleration is the derivation of its covariance $\Sigma_{\dot{\omega}}$ from a pair of IMUs. We expand the right-hand side of Equation (4) by considering noise on top of the measurements of the acceleration and angular velocity:

$$\begin{aligned} \mathbf{b} + \mathbf{v}_b = {} & (\mathbf{a}_1 - \mathbf{a}_2) + \tfrac{1}{2}\boldsymbol{\omega}_2 \times (\boldsymbol{\omega}_2 \times \mathbf{r}) + \tfrac{1}{2}\boldsymbol{\omega}_1 \times (\boldsymbol{\omega}_1 \times \mathbf{r}) \\ & + \mathbf{v}_{a_1} - \mathbf{v}_{a_2} + \tfrac{1}{2}\boldsymbol{\omega}_2 \times (\mathbf{v}_{\omega_2} \times \mathbf{r}) + \tfrac{1}{2}\mathbf{v}_{\omega_2} \times (\boldsymbol{\omega}_2 \times \mathbf{r}) + \tfrac{1}{2}\boldsymbol{\omega}_1 \times (\mathbf{v}_{\omega_1} \times \mathbf{r}) + \tfrac{1}{2}\mathbf{v}_{\omega_1} \times (\boldsymbol{\omega}_1 \times \mathbf{r}) \\ & + \tfrac{1}{2}\mathbf{v}_{\omega_2} \times (\mathbf{v}_{\omega_2} \times \mathbf{r}) + \tfrac{1}{2}\mathbf{v}_{\omega_1} \times (\mathbf{v}_{\omega_1} \times \mathbf{r}), \end{aligned}$$

where $\mathbf{v}_b$ is the noise on the vector $\mathbf{b}$, $\mathbf{v}_{a_1}$ and $\mathbf{v}_{a_2}$ are the measurement noises of the accelerometers, and $\mathbf{v}_{\omega_1}$ and $\mathbf{v}_{\omega_2}$ are those of the gyroscopes. By neglecting the smaller quadratic noise terms above, the covariance can be computed as

$$\Sigma_b \approx \Sigma_{a_1} + \Sigma_{a_2} + \tfrac{1}{2}\tilde{C}_1 \Sigma_{\omega_1} \tilde{C}_1^T + \tfrac{1}{2}\tilde{C}_2 \Sigma_{\omega_2} \tilde{C}_2^T,$$

where

$$\tilde{C}_1 = C(\boldsymbol{\omega}_1 \times \mathbf{r}) + C(\boldsymbol{\omega}_1)\, C(\mathbf{r}), \quad \tilde{C}_2 = C(\boldsymbol{\omega}_2 \times \mathbf{r}) + C(\boldsymbol{\omega}_2)\, C(\mathbf{r}),$$

and $C(\cdot)$ is the cross-product matrix defined in Equation (5).
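The covariance computation above can be sketched as follows (a sketch; the argument names are ours):

```python
import numpy as np

def skew(r):
    """Cross-product matrix C(r) of Eq. (5): skew(r) @ v == np.cross(r, v)."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def angular_acc_cov(w1, w2, r, S_a1, S_a2, S_w1, S_w2):
    """First-order covariance of b from the accelerometer covariances
    S_a1, S_a2 and the gyroscope covariances S_w1, S_w2 (sketch)."""
    C1 = skew(np.cross(w1, r)) + skew(w1) @ skew(r)
    C2 = skew(np.cross(w2, r)) + skew(w2) @ skew(r)
    return S_a1 + S_a2 + 0.5 * C1 @ S_w1 @ C1.T + 0.5 * C2 @ S_w2 @ C2.T
```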

3.3.4. Sensor Model of Downward Camera

Since the downward camera ultimately yields its pose with respect to the global frame, the sensor model associates the observations with the pose of the sensor suite:

$$\mathbf{z}_{k,p} = h_{k,p}(\mathbf{x}_k) + \mathbf{v}_{k,p} \triangleq \mathbf{p}_k + R(\boldsymbol{\theta}_k)\, \mathbf{r}^B_c + \mathbf{v}_{k,p}, \quad \mathbf{z}_{k,\theta} = h_{k,\theta}(\mathbf{x}_k) + \mathbf{v}_{k,\theta} \triangleq \boldsymbol{\theta}_k + \mathbf{v}_{k,\theta},$$

where $\mathbf{r}^B_c$ is the position of the downward camera in the body frame. Let the measurements and the sensor models be described collectively:

$$\mathbf{z}_{k,c} = \begin{bmatrix} \mathbf{p}_{k,c} \\ \boldsymbol{\theta}_{k,c} \end{bmatrix}, \quad h_{k,c}(\mathbf{x}_k) \triangleq \begin{bmatrix} \mathbf{p}_k + R(\boldsymbol{\theta}_k)\, \mathbf{r}^B_c \\ \boldsymbol{\theta}_k \end{bmatrix} + \mathbf{v}_{k,c}.$$

The Jacobian of the downward camera model is given by

$$H_{k,c} \triangleq \frac{\partial h_{k,c}}{\partial \mathbf{x}} = \begin{bmatrix} I_3 & 0_3 & 0_3 & \frac{\partial \left( R(\boldsymbol{\theta})\, \mathbf{r}^B_c \right)}{\partial \boldsymbol{\theta}} & 0_3 & 0_3 \\ 0_3 & 0_3 & 0_3 & I_3 & 0_3 & 0_3 \end{bmatrix}.$$

3.4. EKF

The implementation of the EKF for the localization of the sensor suite is relatively straightforward once the motion model and all the sensor models have been developed. Given the motion model (11), the proposed approach predicts the mean and the covariance, $\mathbf{x}_{k|k-1}$ and $P_{k|k-1}$, from the prior belief $\mathbf{x}_{k-1|k-1}$ and $P_{k-1|k-1}$ as

$$\mathbf{x}_{k|k-1} = F_k \mathbf{x}_{k-1|k-1}, \quad P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + V_k \Sigma_{k,w} V_k^T.$$

The sensor models are each given as a function of some of the state variables. Accordingly, the proposed approach corrects the estimate through the parallel Kalman filter, in which the corrections from the individual sensors are fused through summation [42]:

$$\mathbf{x}_{k|k} = \mathbf{x}_{k|k-1} + \sum_{s \in S} K_{k,s} \left( \mathbf{z}_{k,s} - h_{k,s}(\mathbf{x}_{k|k-1}) \right), \quad P_{k|k}^{-1} = P_{k|k-1}^{-1} + \sum_{s \in S} H_{k,s}^T \Sigma_{k,s}^{-1} H_{k,s},$$

where $s \in S$ denotes each sensor, and $\Sigma_{k,s}$ and $H_{k,s}$ are the covariance of the measurement noise of sensor $s$ and the Jacobian matrix of the sensor model $h_{k,s}(\mathbf{x})$, respectively. The Kalman gain for sensor $s$ is given by

$$K_{k,s} = P_{k|k} H_{k,s}^T \Sigma_{k,s}^{-1}.$$
The great advantage of the proposed formulation is its simplicity. While the dimension of each correction differs, the Kalman gain acts not only as a scaling factor but also as a transformation matrix, so all the correction terms can be summed without loss of generality.
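Putting the prediction, the information-form covariance update, and the summed corrections together, one EKF cycle might be sketched as follows; the data layout is our own, with the per-sensor models passed in as callables together with their Jacobians evaluated at the predicted state.

```python
import numpy as np

def ekf_step(x, P, F, V, Q, sensors):
    """One predict/correct cycle of the parallel EKF (sketch).

    sensors: list of tuples (z_s, h_s, H_s, R_s) per sensor s, where z_s
    is the measurement, h_s a callable evaluating the sensor model, H_s
    its Jacobian at the predicted state, and R_s the noise covariance.
    """
    # Prediction with the linearized motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + V @ Q @ V.T
    # Information-form covariance update: sum the information of all sensors.
    info = np.linalg.inv(P_pred)
    for _, _, H_s, R_s in sensors:
        info += H_s.T @ np.linalg.inv(R_s) @ H_s
    P_upd = np.linalg.inv(info)
    # Summed corrections, one Kalman gain per sensor.
    x_upd = x_pred.copy()
    for z_s, h_s, H_s, R_s in sensors:
        K_s = P_upd @ H_s.T @ np.linalg.inv(R_s)
        x_upd = x_upd + K_s @ (z_s - h_s(x_pred))
    return x_upd, P_upd
```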

4. Sensor Suite Design and Installation

Figure 5a shows the design of the sensor suite with all sensors installed. The stereo cameras are installed on the sensor suite using an orientation-adjustable base to obtain a better view of the toeboard, and the downward camera is installed on a leg welded to the sensor suite. A three-axis gyroscope and a three-axis accelerometer are mounted with each camera to estimate its position and orientation. The frame size is 560 mm × 450 mm × 130 mm, which enables the sensor suite to be installed on the mounting bolt holes of the front seat without modifications to the test vehicle structure, as shown in Figure 5b.
The sensor specifications are listed in Table 1. Since a vehicle crash test typically lasts 100–200 ms, high-speed sensors with a wide measurement range were selected.
The sensor suite was designed and tested in a laboratory environment and later tested at the crash test center of Honda Research & Development Americas, Inc. (Ohio, USA), on a full-size passenger car. Figure 6a,b show the side and top views of the sensor suite installed on the test vehicle. The stereo camera was placed 400 mm away from the toeboard so that both cameras could have a full view of it. A hole of size 150 mm × 150 mm was opened in the vehicle floor to install the downward camera, as shown in Figure 6b,c. The downward camera was about 95 mm above the ground and had a clear view of the checkerboard pattern. The IMU installations are shown in Figure 6c,d.

5. Experimental Results

This section first demonstrates the localization of the sensor suite using the proposed EKF framework in a simulated environment. Results of a real car crash test then follow.

5.1. Sensor Suite Localization in Simulated Environment

Consider the following case, in which the center of the sensor suite undergoes simple linear and angular motion in the global frame with

$$\mathbf{a} = \left[ -(2\pi f)^2 \sin(2\pi f t),\; 0,\; 0 \right]^T, \quad \boldsymbol{\omega} = \left[ 2\pi f_w \left( \cos(2\pi f_w t) + 1 \right),\; 0,\; 0 \right]^T,$$

with angular acceleration

$$\dot{\boldsymbol{\omega}} = \left[ -(2\pi f_w)^2 \sin(2\pi f_w t),\; 0,\; 0 \right]^T.$$

The measurements of the accelerometer and gyroscope in the body frame are then

$$\mathbf{a}^B_{IMU} = \mathbf{a}^B_c + \dot{\boldsymbol{\omega}}^B \times \mathbf{r}^B + \boldsymbol{\omega}^B \times (\boldsymbol{\omega}^B \times \mathbf{r}^B) + \mathbf{v}_a, \quad \boldsymbol{\omega}^B = \boldsymbol{\omega} + \mathbf{v}_g,$$

where $\mathbf{a}^B_c$ and $\dot{\boldsymbol{\omega}}^B$ are the values at the sensor suite center,

$$\mathbf{a}^B_c = \mathbf{a}, \quad \dot{\boldsymbol{\omega}}^B = \dot{\boldsymbol{\omega}},$$

and $\mathbf{v}_g \sim \mathcal{N}(\mathbf{0}, \sigma_g)$ and $\mathbf{v}_a \sim \mathcal{N}(\mathbf{0}, \sigma_a)$ model the sensor noise of the accelerometer and gyroscope. The pose of the downward camera is provided by

$$\mathbf{p}_{CD} = \mathbf{p} + R(\boldsymbol{\theta})\, \mathbf{r}^B_{CD} + \mathbf{v}_{c,p}, \quad \boldsymbol{\theta}_{CD} = \boldsymbol{\theta} + \mathbf{v}_{c,\theta},$$

where the position and orientation of the sensor suite are

$$\mathbf{p} = \left[ \sin(2\pi f t) - 2\pi f t + 2\pi,\; 0,\; 0 \right]^T, \quad \boldsymbol{\theta} = \left[ \sin(2\pi f_w t) + 2\pi f_w t + 2\pi,\; 0,\; 0 \right]^T.$$

Here, $R(\boldsymbol{\theta})$ is the rotation matrix given the Euler angle $\boldsymbol{\theta}$, and $\mathbf{r}^B_{CD}$ is the position of the downward camera in the body frame. The sensor noise of the camera, $\mathbf{v}_c \sim \mathcal{N}(\mathbf{0}, \sigma_c)$, is Gaussian. IMU data are available between 0.1 s and 0.3 s, while camera pose information is available between 0.01 s and 0.2 s, to be consistent with the real car crash test.
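A sketch of how such synthetic measurements could be generated at a single time step, following the measurement models above, is given below; the noise levels and argument names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_measurements(a_c_B, w_B, w_dot_B, p, th, R, r_imu, r_cam,
                          s_a=0.01, s_g=0.01, s_c=0.001):
    """Synthesize one step of noisy sensor data (sketch).

    a_c_B, w_B, w_dot_B: body-frame kinematics of the suite center
    p, th, R:            global pose and body-to-global rotation R(theta)
    r_imu, r_cam:        IMU and downward-camera positions, body frame
    """
    a_meas = (a_c_B + np.cross(w_dot_B, r_imu)
              + np.cross(w_B, np.cross(w_B, r_imu))
              + rng.normal(0.0, s_a, 3))              # accelerometer
    w_meas = w_B + rng.normal(0.0, s_g, 3)            # gyroscope
    p_cam = p + R @ r_cam + rng.normal(0.0, s_c, 3)   # camera position
    th_cam = th + rng.normal(0.0, s_c, 3)             # camera orientation
    return a_meas, w_meas, p_cam, th_cam
```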
By letting $f = f_w = 10$ Hz, the angular acceleration can be recovered from the measurements of the accelerometers and gyroscopes, as illustrated in Figure 7a. The recovered angular acceleration follows the prescribed $\dot{\boldsymbol{\omega}}^B$: the amplitude in the x-direction is close to the given amplitude $(2\pi f_w)^2$, while the other two components ($\dot{\omega}_y$ and $\dot{\omega}_z$) are zero-mean. The estimated velocity is shown in Figure 7b. The linear and angular velocities in the x-direction follow the cosine wave, while the motion in the other directions is negligible. The displacement and roll angle are shown in Figure 7c,d. Figure 7e illustrates the motion of the sensor suite in space, where rectangular panels indicate the position, with a normal vector for the orientation.
The differential entropy of the estimate, which for the multivariate Gaussian distribution considered here is given by

$$h(\mathbf{x}) = \frac{1}{2} \log \left( (2\pi e)^n \left| \Sigma_{k|k} \right| \right),$$

is plotted in Figure 8. The entropy shows that the downward camera pose significantly improves the certainty of the estimation.
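For reference, this entropy can be evaluated numerically as follows (a sketch using a log-determinant for numerical stability):

```python
import numpy as np

def differential_entropy(P):
    """h(x) = 0.5 * log((2*pi*e)^n |P|) of an n-dimensional Gaussian,
    evaluated via slogdet to avoid under/overflow of the determinant."""
    n = P.shape[0]
    _, logdet = np.linalg.slogdet(P)
    return 0.5 * (n * np.log(2.0 * np.pi * np.e) + logdet)
```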
To test the limits of the proposed EKF framework, we vary the noise level through $\sigma_a$, $\sigma_g$, or $\sigma_c$ individually while keeping the other two at a low level, specifically 0.1% of the maximum value. For each combination, 100 tests were performed, and the statistics of the maximum position error $\max_k \| \mathbf{p}_{k|k} - \mathbf{p}_k \|$ are plotted in Figure 9, where $\mathbf{p}_{k|k}$ is the estimate and $\mathbf{p}_k$ is the ground truth at step $k$. The figure shows that the proposed EKF-based estimation is accurate when the noise is small, say, less than 1%, whereas the error can grow significantly when the noise of the accelerometers and gyroscopes increases further. Note that the dead-reckoning error comes from the accelerometer and gyroscope measurements; therefore, noise on the downward camera pose does not significantly influence the estimation as long as the IMU measurements are not excessively noisy.
Figure 7. Results of sensor suite localization using the EKF framework in a simulated environment. (a) Angular acceleration; (b) linear and angular velocity of the camera fixture; (c) estimated displacement of the camera fixture; (d) estimated roll angle of the camera fixture; (e) trajectory of the camera fixture in space.
Figure 8. Entropy of sensor suite localization in simulated environment.

5.2. Sensor Suite Localization in 56 km/h Frontal Barrier Car Crash Test

5.2.1. Pose Estimation of Downward Camera

Figure 10 shows the results of the pose detection of the downward camera using the computer vision technique. Figure 10a shows the detected checkerboard pattern, while Figure 10b shows the features matched between two consecutive images for localizing the downward camera. With the pixel coordinates in Figure 10a and their global-frame coordinates obtained through Equation (7), the camera pose is computed; the position is presented in Figure 10c and the orientation in Figure 10d.

5.2.2. Localization of Sensor Suite

For localization, the angular acceleration is first computed using the IMU measurements and is plotted in Figure 11a. The results show that the pitch motion (rotation about the y-axis), related to $\dot{\omega}_y$, is significantly larger than that in the other two directions, which is consistent with the onsite observation. The state of the sensor suite is then estimated through the proposed EKF framework using the computed pose of the downward camera, the angular acceleration, and the IMU measurements. The results are shown in the rest of Figure 11.
Figure 11b shows the estimated velocity. The frontal crash test has an initial speed of 15.6 m/s (56 km/h). The results show that, after the crash, the vehicle bounced back and moved without power at a speed of around 3.8 m/s. The motion in the y- and z-directions is much smaller than that in the x-direction, which is consistent with the onsite measurement.
Figure 11c shows the estimated position of the sensor suite. Instead of the center, the position of the downward camera is plotted so that the results can be compared with those of Section 5.2.1. The structural compression, i.e., the distance moved in the x-direction after the crash, is about 700 mm according to the figure, which conforms to the onsite measurement. The figure also shows that the vehicle bounced up about 50 mm after the crash (from 33 ms to 77 ms) and that the motion in the z-direction was relatively small compared to the other two directions.
Figure 11d shows the pitch motion during the crash. The estimation shows that the maximum pitch angle is around 8° during the crash. For comparison, we also include the pitch angles measured by the downward camera and by the gyroscope installed at the center of the designed sensor suite. The camera measurement is subject to the oscillation of the frame and shows strong oscillations throughout the in-crash measurement, while the integration of the gyroscope measurement is subject to dead-reckoning errors and can hardly capture the precise pitch angle. A measurement based on a single sensor is therefore less accurate than the state estimation technique that fuses multiple sensor measurements. The motion of the sensor suite during the crash is illustrated in Figure 11e, where rectangular panels represent its position and orientation every 5 ms. On each panel, the normal vector is plotted and scaled by the velocity $v_x$ to illustrate the orientation and velocity of the sensor suite during the crash test.
Figure 11. Frontal barrier crash localization results. (a) Angular acceleration; (b) linear and angular velocity of the camera fixture; (c) position of the camera fixture in the global frame; (d) pitch angle of the camera fixture; (e) motion of the camera fixture in space.

6. Conclusions and Ongoing Work

This paper presented a technique to localize a stereo camera for in-crash toeboard deformation measurement. For localization, we designed a sensor suite carrying IMU sensors and cameras, and the technique recursively estimates the pose of the sensor suite to which the stereo camera is attached. Using an EKF-based framework, the state mean and covariance are predicted using a motion model and the current state, and the prediction is then corrected through the proposed sensor models and the measurements of the IMUs and the camera. The state estimation removes the dead-reckoning errors of the IMU sensors and, compared to a classical single-sensor measurement, removes the measurement error incurred by the excessive oscillation of the sensor suite during a crash test. The proposed technique was first verified in a simulated environment, in which the estimate matched the ground truth well and remained reliable under moderate sensor noise. The real crash data were then analyzed and provided localization results for the sensor suite as well as for the stereo camera.
This paper has focused on the localization of the sensor suite for toeboard deformation measurement; the deformation measurement itself is not treated here and will be reported in future work. We noticed that the cameras underwent relative motion with respect to the sensor suite in the real crash test. This may be due to structural flexibility, and further investigation of the influence of the structural flexibility may provide more accurate localization results.

Author Contributions

Conceptualization, W.Z., T.F., A.N., and T.H.; methodology, W.Z. and T.F.; software, W.Z.; validation, W.Z., T.F., A.N., and T.H.; formal analysis, W.Z.; investigation, W.Z.; resources, W.Z., T.F., A.N., and T.H.; data curation, A.N. and T.H.; writing—original draft preparation, W.Z.; writing—review and editing, T.F.; visualization, W.Z.; supervision, T.F.; project administration, T.F.; funding acquisition, T.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Honda Motor Co., Ltd.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank Shinsuke Shibata and Kaitaro Nambu at Honda Motor Co., Ltd. and Kazunobu Seimiya at Honda Development & Manufacturing of America for their assistance in the crash tests. We would also like to thank Mengyu Song for his assistance in image processing.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. NHTSA. Motor vehicle crashes: Overview. Traffic Saf. Facts Res. Note 2016, 2016, 1–9.
2. Kim, H.; Hong, S.; Hong, S.; Huh, H.; Motors, K.; Kwangmyung Shi, K. The evaluation of crashworthiness of vehicles with forming effect. In Proceedings of the 4th European LS-DYNA Users Conference, Ulm, Germany, 22–23 May 2003; pp. 25–34.
3. Mehdizadeh, A.; Cai, M.; Hu, Q.; Alamdar Yazdi, M.A.; Mohabbati-Kalejahi, N.; Vinel, A.; Rigdon, S.E.; Davis, K.C.; Megahed, F.M. A review of data analytic applications in road traffic safety. Part 1: Descriptive and predictive modeling. Sensors 2020, 20, 1107.
4. Wei, Z.; Karimi, H.R.; Robbersmyr, K.G. Analysis of the Relationship between Energy Absorbing Components and Vehicle Crash Response; SAE Technical Paper Series; SAE International: Warrendale, PA, USA, 2016; Volume 4.
5. Górniak, A.; Matla, J.; Górniak, W.; Magdziak-Tokłowicz, M.; Krakowian, K.; Zawiślak, M.; Włostowski, R.; Cebula, J. Influence of a passenger position seating on recline seat on a head injury during a frontal crash. Sensors 2022, 22, 2003.
6. Varat, M.S.; Husher, S.E. Vehicle crash severity assessment in lateral pole impacts. SAE Trans. 1999, 108, 302–324.
7. Song, M.; Chen, C.; Furukawa, T.; Nakata, A.; Shibata, S. A sensor suite for toeboard three-dimensional deformation measurement during crash. Stapp Car Crash J. 2019, 63, 331–342.
8. Nakata, A.; Furukawa, T.; Shibata, S.; Hashimoto, T. Development of Chronological Measurement Method of Three-Dimensional Toe Board Deformation During Frontal Crash. Trans. Soc. Automot. Eng. Jpn. 2021, 52, 1131–1136.
9. Austin, R.A. Lower extremity injuries and intrusion in frontal crashes. Accid. Reconstr. J. 2013, 23, 1–23.
10. Patalak, J.P.; Stitzel, J.D. Evaluation of the effectiveness of toe board energy-absorbing material for foot, ankle, and lower leg injury reduction. Traffic Inj. Prev. 2018, 19, 195–200.
11. Hu, Y.; Liu, X.; Neal-Sturgess, C.E.; Jiang, C.Y. Lower leg injury simulation for EuroNCAP compliance. Int. J. Crashworthiness 2011, 16, 275–284.
12. Lin, C.S.; Chou, K.D.; Yu, C.C. Numerical simulation of vehicle crashes. Appl. Mech. Mater. 2014, 590, 135–143.
13. Saha, N.K.; Wang, H.C.; El-Achkar, R. Frontal offset pole impact simulation of automotive vehicles. In Proceedings of the International Computers in Engineering Conference and Exposition, San Francisco, CA, USA, 2–6 August 1992; pp. 203–207.
14. Bathe, K.J. Crash simulation of cars with finite element analysis. Mech. Eng. Mag. Sel. Artic. 1998, 120, 82–83.
15. Omar, T.; Eskandarian, A.; Bedewi, N. Vehicle crash modelling using recurrent neural networks. Math. Comput. Model. 1998, 28, 31–42.
16. Hickey, A.; Xiao, S. Finite Element Modeling and Simulation of Car Crash. Int. J. Mod. Stud. Mech. Eng. 2017, 3, 1–5.
17. Yang, R.J.; Wang, N.; Tho, C.H.; Bobineau, J.P.; Wang, B.P. Metamodeling development for vehicle frontal impact simulation. J. Mech. Des. 2005, 127, 1014–1020.
18. Cheng, Z.Q.; Thacker, J.G.; Pilkey, W.D.; Hollowell, W.T.; Reagan, S.W.; Sieveka, E.M. Experiences in reverse-engineering of a finite element automobile crash model. Finite Elem. Anal. Des. 2001, 37, 843–860.
19. McClenathan, R.V.; Nakhla, S.S.; McCoy, R.W.; Chou, C.C. Use of photogrammetry in extracting 3D structural deformation/dummy occupant movement time history during vehicle crashes. SAE Trans. 2005, 1, 736–742.
20. Zhang, X.Y.; Jin, X.L.; Qi, W.; Sun, Y. Virtual reconstruction of vehicle crash accident based on elastic-plastic deformation of auto-body. In Key Engineering Materials; Trans Tech Publications Ltd.: Bäch SZ, Switzerland, 2004; Volume 274, pp. 1017–1022.
21. Pan, B. Digital image correlation for surface deformation measurement: Historical developments, recent advances and future goals. Meas. Sci. Technol. 2018, 29, 1–33.
22. Ghorbani, R.; Matta, F.; Sutton, M.A. Full-field deformation measurement and crack mapping on confined masonry walls using digital image correlation. Exp. Mech. 2015, 55, 227–243.
23. Scaioni, M.; Feng, T.; Barazzetti, L.; Previtali, M.; Roncella, R. Image-based deformation measurement. Appl. Geomat. 2015, 7, 75–90.
24. Chen, F.; Chen, X.; Xie, X.; Feng, X.; Yang, L. Full-field 3D dimensional measurement using multi-camera digital image correlation system. Opt. Lasers Eng. 2013, 51, 1044–1052.
25. Lichtenberger, R.; Schreier, H.; Ziegahn, K. Non-contacting measurement technology for component safety assessment. In Proceedings of the 6th International Symposium and Exhibition on Sophisticated Car Occupant Safety Systems (AIRBAG’02), Karlsruhe, Germany, 6–8 December 2002.
26. Sutton, M.A.; Mingqi, C.; Peters, W.H.; Chao, Y.J.; McNeill, S.R. Application of an optimized digital correlation method to planar deformation analysis. Image Vis. Comput. 1986, 4, 143–150.
27. Chu, T.; Ranson, W.; Sutton, M.A. Applications of digital-image-correlation techniques to experimental mechanics. Exp. Mech. 1985, 25, 232–244.
28. De Domenico, D.; Quattrocchi, A.; Alizzio, D.; Montanini, R.; Urso, S.; Ricciardi, G.; Recupero, A. Experimental characterization of the FRCM-concrete interface bond behavior assisted by digital image correlation. Sensors 2021, 21, 1154.
29. Bardakov, V.V.; Marchenkov, A.Y.; Poroykov, A.Y.; Machikhin, A.S.; Sharikova, M.O.; Meleshko, N.V. Feasibility of digital image correlation for fatigue cracks detection under dynamic loading. Sensors 2021, 21, 6457.
30. Chou, J.Y.; Chang, C.M. Image motion extraction of structures using computer vision techniques: A comparative study. Sensors 2021, 21, 6248.
31. Xiong, B.; Zhang, Q.; Baltazart, V. On quadratic interpolation of image cross-correlation for subpixel motion extraction. Sensors 2022, 22, 1274.
32. Schmidt, T.E.; Tyson, J.; Galanulis, K.; Revilock, D.M.; Melis, M.E. Full-field dynamic deformation and strain measurements using high-speed digital cameras. In Proceedings of the 26th International Congress on High-Speed Photography and Photonics, Alexandria, VA, USA, 20–24 September 2004; pp. 174–185.
33. Iliopoulos, A.P.; Michopoulos, J.; Andrianopoulos, N.P. Performance sensitivity analysis of the mesh-free random grid method for whole field strain measurements. In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Brooklyn, NY, USA, 3–6 August 2008; pp. 545–555.
34. Furukawa, T.; Pan, J.W. Stochastic identification of elastic constants for anisotropic materials. Int. J. Numer. Methods Eng. 2010, 81, 429–452.
35. Schweppe, F. Recursive state estimation: Unknown but bounded errors and system inputs. IEEE Trans. Autom. Control 1968, 13, 22–28.
36. Vargas-Melendez, L.; Boada, B.L.; Boada, M.J.L.; Gauchia, A.; Diaz, V. Sensor fusion based on an integrated neural network and probability density function (PDF) dual Kalman filter for on-line estimation of vehicle parameters and states. Sensors 2017, 17, 987.
37. Qu, L.; Dailey, M.N. Vehicle trajectory estimation based on fusion of visual motion features and deep learning. Sensors 2021, 21, 7969.
38. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
39. Heikkilä, J.; Silvén, O. A four-step camera calibration procedure with implicit image correction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, PR, USA, 17–19 June 1997; pp. 1106–1112.
40. Bouguet, J.Y. Camera Calibration Toolbox for Matlab. Available online: http://robots.stanford.edu/cs223b04/JeanYvesCalib/htmls/links.html (accessed on 7 April 2022).
41. Diebel, J. Representing attitude: Euler angles, unit quaternions, and rotation vectors. Matrix 2006, 58, 1–35.
42. Willner, D.; Chang, C.B.; Dunn, K.P. Kalman Filter Configurations for Multiple Radar Systems; Technical Report; MIT Lincoln Laboratory: Lexington, MA, USA, 1976.
Figure 1. Sensor suite and localization problem formulation.
Figure 2. Illustration for angular acceleration using two IMUs.
Figure 4. Proposed recursive localization of stereovision for in-crash toeboard measurement.
Figure 5. Sensor suite design (a) and installation (b).
Figure 6. Sensor suite for real car test: (a) side view; (b) top view; (c) bottom view; (d) close-up view of camera and IMU sensors.
Figure 9. Error of EKF estimation under different noise levels on accelerometer, gyroscope, and camera measurements.
Figure 10. Results of camera pose estimation using computer vision technique.
Table 1. Parameters of sensors.

Inertial sensor sampling rate: 20,000 Hz
Accelerometer noise: 0.01 m/s² (1σ)
Accelerometer range: 2000 g
Gyroscope noise: 0.01°/s (1σ)
Gyroscope range: 18,000°/s
Camera sampling rate: 1000 Hz
Camera resolution: 1024 × 1024 pixels
Downward camera height above the ground: 95 mm
Target toeboard size: 500 mm × 350 mm
