Article

A New Model for Human Running Micro-Doppler FMCW Radar Features

1 Shijiazhuang Campus, Army Engineering University of PLA, Shijiazhuang 050003, China
2 School of Information Science and Engineering, Hebei University of Science and Technology, Shijiazhuang 050018, China
3 Hebei Technology Innovation Centre of Intelligent IoT, Shijiazhuang 050018, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(12), 7190; https://doi.org/10.3390/app13127190
Submission received: 27 March 2023 / Revised: 1 June 2023 / Accepted: 14 June 2023 / Published: 15 June 2023
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Human body detection is very important in research on automotive safety technology. The extraction and analysis of human micro-motion based on frequency-modulated continuous wave (FMCW) radar is gradually receiving attention. To characterize the modulation effect of human micro-motion on FMCW radar signals, a human running model is proposed for studying human radar characteristics. According to the scattering characteristics of rigid bodies, an analytical expression for the radar echo of a running human is established. Using time–frequency analysis, the micro-Doppler features in the radar echoes are extracted over the running period, and the micro-Doppler characteristics of key body components are studied under running conditions. The model is applied to a real FMCW radar verification platform, with runners measured at a distance of 10 m. The fit rate of individual body parts reaches up to 97%, and the overall fit rate of the human model reaches 90.6%. The proposed model is a realistic and simple human kinematic model. It can realistically simulate a running human body, provides strong support for the analysis of human target radar echoes, and fills a gap in FMCW radar modeling of complex human motion.

1. Introduction

Human detection is a key area of autonomous vehicle safety research [1,2,3,4]. Sensors such as cameras, LiDAR, and FMCW radar are often used to sense human targets in order to avoid collisions and ensure safe driving behavior. The signal can be interfered with by other signal sources during acquisition [5,6], and visual sensors are also affected by lighting conditions [7]. Various noises, spurs, and interferences arise when a radar sensor is operating [8,9], which can seriously degrade the human target information and reduce detection accuracy and algorithm performance [10,11]. Furthermore, the hardware system and real measurements are costly [12], and large-scale field measurement is expensive [13]. Therefore, this work adopts simulation to detect human targets, which also supports subsequent interference simulation and anti-interference work.
Human detection simulation relies on the human body model, and the most advanced human body model is based on vision [14,15,16,17,18,19]. In the field of radar, there are also a variety of mannequins to simulate human walking movement [20,21,22,23,24]. The advanced mannequins are mostly used for 3D reconstruction, movie special effects, and action recognition. The visual human body model uses three-dimensional grids or voxels to describe the human body’s surface and structure. The radar human body model uses electromagnetics and scattering theory to establish a mathematical physical model. The two models’ representations are different. As a result, the visual solution is insufficient for the human target detection and echo analysis problem in the radar field. Therefore, a human body model based on radar must be created. However, the existing radar human body model is only suitable for a specific type of walking motion and cannot be generalized to any desired motion. In short, the Doppler signals of complex human motion models, such as the running model, have not been simulated yet. A more realistic and simpler FMCW radar human motion model is lacking. Therefore, a human running model based on FMCW radar was proposed. The detection of human targets was completed by FMCW radar. The Doppler characteristics in radar echoes were analyzed.
Human radar echoes have unique micro-Doppler signatures caused by the motion of different body parts [20,21,25]. The radar echo signal is modulated by the movements of the hands and legs, and the resulting micro-Doppler frequency is approximately sinusoidal. This unique modulation can be described by two parameters: amplitude and frequency. The regular pattern of body motion can be described finely by the micro-Doppler features of the radar echo signal. The micro-Doppler features are observed with a joint time–frequency representation, such as the short-time Fourier transform or the smoothed pseudo Wigner–Ville distribution [20,21,26]. The generated Doppler spectrograms can be used to identify and classify different types of motion [27,28,29].
The Thalmann model [22] was simplified to describe the electromagnetic wave modulation of the human running movement. A human running model which is suitable for human micro-motion radar was proposed. Based on the human running model, the analytical expressions of the FMCW radar echoes were derived. The micro-Doppler characteristics of human running were analyzed. Finally, an FMCW radar measurement platform was built to validate the human running model. The main contributions of the proposed approach are summarized as follows.
  • A human running simulation model based on FMCW radar is proposed, which can achieve a real simulation of a running human body. In the field of FMCW radar, this model fills the gap in the complex human motion model and provides a theoretical model for studying the dynamic characteristics and radar response mechanisms.
  • The running Doppler signature of each body part is comprehensively analyzed in the verification platform. The Doppler feature data of human running can be used for algorithm training and testing and provide support for the development of human running target detection and tracking algorithms.
The rest of this paper is organized as follows. Section 2 presents the human running model based on Thalmann’s simplified form. Section 3 presents the FMCW echo model of the human running model. Section 4 presents the experimental results. Section 5 presents the comparative analysis. Finally, the conclusions are presented in Section 6.

2. Design of the Human Running Model

The human running model is based on the Thalmann model, a model of human walking motion proposed, after extensive experiments, by Ronan Boulic, Nadia Magnenat Thalmann, and Daniel Thalmann. The Thalmann human model, with 62 degrees of freedom and 32 joints, is simplified to a model consisting of 12 regular rigid bodies and 16 reference points. The 12 rigid body parts of the human body are the head, the torso, the left upper arm, the right upper arm, the left lower arm, the right lower arm, the left thigh, the right thigh, the left lower leg, the right lower leg, the left foot, and the right foot. The 16 reference points are the head, the base point of the spine, the left shoulder, the right shoulder, the left elbow, the right elbow, the left hand, the right hand, the left hip, the right hip, the left knee, the right knee, the left ankle joint, the right ankle joint, the left toe, and the right toe. The distribution of these 16 reference points is shown on the left side of Figure 1, and the reference points are all located in the x-y-z coordinate system. This human motion model uses three simple geometries: sphere, ellipsoid, and cylinder. These three geometric shapes construct the human head, hands, limbs, and neck. The specific construction process is shown in Figure 1. The positive x direction is forward, the positive y direction is to the right, the positive z direction is upward, and the base point of the spine is the origin.
The continuous changes in the spatial coordinates of various body parts are recorded and analyzed during running. By tracking these spatial coordinate change trajectories, the model extracts 12 key degrees of freedom. These degrees of freedom include three translation degrees of freedom, three rotation axis angular velocity degrees of freedom, and six main joint rotation degrees of freedom, which are closely related to the characteristic information of running kinematics. The coordinated motion between these 12 degrees of freedom describes the motion state and control strategy in a running gait cycle. The range of motion and the rate of change are determined by speed and body proportions, such as height, leg length and arm length ratio. Different running speeds and human body structures will lead to different motion patterns and gait characteristics in these 12 degrees of freedom.
The 12 motion trajectories of the human body refer to the motion paths or posture changes of various parts, which can be described by 12 kinematic degrees of freedom. The changes of 12 kinematic degrees of freedom can be precisely defined and described in the form of functions. The form is divided into three types. The first one is described by sine function. When the trajectory exhibits periodic oscillation characteristics, it can be described by a sine function. The second one is described by the piecewise sine function. If different segments of the motion trajectory show different motion characteristics, it needs to be described by a piecewise function. The third is described by the interpolation function, which uses the positions of several key points on the trajectory as the endpoints of the interpolation function. These endpoints are connected through functions to express the motion trajectory. These three forms of description methods are universal. For brevity, typical examples of each of the three motion trajectories are given for explanation.
(1) Vertical translation: the up-and-down translation of the spine center along the vertical direction, which can be described by a sine function:

$T_v(t^*) = a_v + a_v \sin\left[2\pi\left(2t_R - 0.35\right)\right],$

where $V_R$ is the relative speed and $a_v = 0.020 V_R$.
(2) Left and right rotation: the flexing motion of the pelvis used to swing the legs, which can be described by a piecewise function:

$R_{L/R}(t_R) = \begin{cases} a_{L/R} + a_{L/R}\cos\left(2\pi\frac{10 t_R}{3}\right), & 0 \le t_R \le 0.15 \\ a_{L/R} - a_{L/R}\cos\left(2\pi\frac{10 (t_R - 0.15)}{7}\right), & 0.15 \le t_R \le 0.5 \\ a_{L/R} - a_{L/R}\cos\left(2\pi\frac{10 (t_R - 0.5)}{3}\right), & 0.5 \le t_R \le 0.65 \\ a_{L/R} - a_{L/R}\cos\left(2\pi\frac{10 (t_R - 0.65)}{7}\right), & 0.65 \le t_R \le 1 \end{cases},$

where $V_R$ is the relative speed and $a_{L/R} = 1.70 V_R$.
(3) Hip flexion: the hip flexes in the front–back direction. The hip flexion trajectory contains three control points and can be described by an interpolation function:

$S(t^*) = \begin{cases} \frac{y_2 - y_1}{1.2}\left(t_R - 0.6\right) + y_1, & 0 \le t_R \le 0.4 \\ \frac{y_3 - y_2}{1.2}\left(t_R - 0.8\right) + y_1, & 0.4 \le t_R \le 0.6 \end{cases},$

where $y_1$, $y_2$, and $y_3$ are the vertical coordinates of the three control points.
Figure 2 shows the curves of the three kinds of motion trajectories, so that the overall motion characteristics and laws are clear at a glance.
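The three trajectory forms above can be sketched in code. This is an illustrative sketch, not the authors' implementation: the coefficients follow the equations in this section, while `hip_flexion` substitutes a simple piecewise-linear interpolation through the three control points, with the control-point times (0, 0.4, 0.6) assumed from the interval bounds above.

```python
import numpy as np

def vertical_translation(t_r, v_r):
    """Sinusoidal vertical translation of the spine center."""
    a_v = 0.020 * v_r                     # amplitude scales with relative speed
    return a_v + a_v * np.sin(2 * np.pi * (2 * t_r - 0.35))

def left_right_rotation(t_r, v_r):
    """Piecewise-sinusoidal pelvis rotation over one gait cycle, t_r in [0, 1]."""
    a_r = 1.70 * v_r
    t = t_r % 1.0
    if t <= 0.15:
        return a_r + a_r * np.cos(2 * np.pi * 10 * t / 3)
    if t <= 0.5:
        return a_r - a_r * np.cos(2 * np.pi * 10 * (t - 0.15) / 7)
    if t <= 0.65:
        return a_r - a_r * np.cos(2 * np.pi * 10 * (t - 0.5) / 3)
    return a_r - a_r * np.cos(2 * np.pi * 10 * (t - 0.65) / 7)

def hip_flexion(t_r, y1, y2, y3):
    """Interpolation through three control points (simplified here to linear)."""
    return np.interp(t_r, [0.0, 0.4, 0.6], [y1, y2, y3])
```

Note that the first two rotation branches agree at the breakpoint: both evaluate to zero at t_R = 0.15, so the trajectory is continuous there.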
The movement trajectories and deflection angles of the key parts are thus obtained. The continuous running motion of the human body can be simulated from the position reference points of each part over the gait cycle. In the experimental section, a depth camera was used to track the running movement of the human body; the three-dimensional spatial trajectory of each part was obtained and used as position reference points.

3. Radar Echo Model of Human Running

Most existing research methods use simple single-carrier frequency continuous waves in studying human motion [24]. However, these waves can only be used to acquire the velocity of the subject and not its spatial location. FMCW radar is used for human target detection and motion information extraction. Firstly, an FMCW radar point target echo model is constructed. Then, the FMCW radar echo model of human targets is established according to the rigid body scattering characteristic.
The complex form of the FMCW radar transmit signal is:

$m(t) = e^{j2\pi\left(f_0 t + 0.5 K t^2\right)},$
where K is the slope, determined by the bandwidth and period, and f0 is the starting frequency of the chirp signal. The echo signal can be expressed as:
$r(t) = \sigma m(t - \tau) = \sigma e^{j2\pi\left[f_0 (t - \tau) + 0.5 K (t - \tau)^2\right]},$
where τ is the two-way delay of electromagnetic wave, and σ is the amplitude attenuation factor. The received signal is mixed with the local oscillating signal to obtain the intermediate frequency (IF) signal:
$s(t) = e^{j2\pi\left(f_0 \tau + K \tau t - 0.5 K \tau^2\right)}.$
For FMCW radar, the frequency of the IF signal can be used to estimate the target distance. A one-dimensional fast Fourier transform (FFT) is applied to the IF signal, and the range profile is obtained after coherent accumulation. The phase at each range bin carries the target's Doppler-induced phase changes. Therefore, without loss of generality, when modeling the radar echo of the human body, the radar signal can be reduced, based on its phase, to a single-frequency signal. The human body model is constructed from geometric bodies such as spheres and ellipsoids, so the radar cross section of ellipsoids and spheres must first be discussed. The radar cross section (RCS) of each part is determined by the relative positions of the target and the radar, as well as the radii of the geometry. The relationship between the ellipsoid and the radar is shown in Figure 3. The x-y-z coordinate system is the global coordinate system; it is the same coordinate system as in Figure 1, with the same definitions.
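As a concrete illustration of this range estimation step, the sketch below simulates the IF signal of a single point target and recovers its range from the beat frequency via a one-dimensional FFT. The waveform parameters are assumptions loosely based on the experiment in Section 4 (77 GHz start frequency, 4 GHz bandwidth, 50 μs chirp); the 20 MHz sampling rate is chosen here only to give a clean spectrum.

```python
import numpy as np

c = 3e8          # speed of light (m/s)
f0 = 77e9        # chirp start frequency (Hz)
B = 4e9          # sweep bandwidth (Hz)
Tc = 50e-6       # chirp duration (s)
K = B / Tc       # chirp slope (Hz/s)
fs = 20e6        # IF sampling rate (Hz), assumed for this sketch

R_true = 5.0             # point-target range (m)
tau = 2 * R_true / c     # two-way delay (s)

n = int(fs * Tc)
t = np.arange(n) / fs
# IF signal s(t) = exp{j 2*pi*(f0*tau + K*tau*t - 0.5*K*tau^2)}
s = np.exp(1j * 2 * np.pi * (f0 * tau + K * tau * t - 0.5 * K * tau**2))

# Beat frequency f_b = K*tau, so the range is R = f_b * c / (2K)
nfft = 4 * n
spectrum = np.abs(np.fft.fft(s, nfft))
f_beat = np.argmax(spectrum[: nfft // 2]) * fs / nfft
R_est = f_beat * c / (2 * K)
```

With these numbers the recovered range agrees with the true 5 m target to within a fraction of a range bin.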
The radar cross section (RCS) of an ellipsoid is given by:

$RCS_{ellip} = \frac{\pi a^2 b^2 c^2}{\left(a^2 \sin^2\theta \cos^2\varphi + b^2 \sin^2\theta \sin^2\varphi + c^2 \cos^2\theta\right)^2},$
where the ellipsoid’s three semi-axes lengths in the x, y, and z directions are denoted by a, b, and c, respectively. θ and φ denote the pitch and azimuth angles; when a = b = c, the above equation simplifies to the sphere RCS formula.
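The RCS expression can be checked numerically: in the sphere special case a = b = c = r, the denominator collapses to r⁴ and the formula gives the familiar πr². A minimal sketch:

```python
import numpy as np

def ellipsoid_rcs(a, b, c, theta, phi):
    """Ellipsoid RCS with semi-axes a, b, c; theta = pitch, phi = azimuth."""
    num = np.pi * a**2 * b**2 * c**2
    den = (a**2 * np.sin(theta)**2 * np.cos(phi)**2
           + b**2 * np.sin(theta)**2 * np.sin(phi)**2
           + c**2 * np.cos(theta)**2) ** 2
    return num / den
```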
To calculate the FMCW radar echo of the human body, the distance of each body component from the radar is computed from the spatial coordinates of the corresponding ellipsoid. Using the RCS values of the different body parts, the echo attenuation coefficients are calculated. Finally, the overall echo of the human body model is obtained by weighted summation. Let $R_i$ denote the distance from the radar to the center of the ith part. Since the duration of each chirp is short relative to the speed of human motion, the radial distance $R_i$ can be approximated as constant within a single chirp. The radar echo from one body part is then:
$S_i(t) = e^{j2\pi f_0\left(t - \frac{2R_i}{c}\right)}.$
If the attenuation coefficient of the ith part is $\eta_i$, the overall echo is:

$S_{all}(t) = \sum_{i=1}^{16} \eta_i S_i(t).$
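The weighted summation above can be sketched as follows, with placeholder trajectories standing in for the model's actual kinematics (the torso drift and leg swing below are illustrative assumptions). After carrier removal, each part contributes a phase term exp(−j4πf₀Rᵢ/c) scaled by its attenuation coefficient ηᵢ.

```python
import numpy as np

c = 3e8
f0 = 77e9
t = np.linspace(0.0, 1.0, 2000)    # slow time over one gait cycle (s)

def part_echo(R_i, eta_i):
    """Baseband echo contribution of one body part at radial distance R_i(t)."""
    return eta_i * np.exp(-1j * 4 * np.pi * f0 * R_i / c)

# Illustrative trajectories: torso receding at 1.2 m/s; a leg with a
# superimposed sinusoidal micro-motion of 0.3 m amplitude at 3 Hz.
R_torso = 5.0 + 1.2 * t
R_leg = R_torso + 0.3 * np.sin(2 * np.pi * 3 * t)

s_all = part_echo(R_torso, 1.0) + part_echo(R_leg, 0.3)
```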

4. Experimental Result

The sensors are the AWR1642 radar and the Azure Kinect somatosensory camera. The AWR1642 radar system emits FMCW signals and consists of three parts: an RF front-end, a data acquisition module, and a signal processing terminal. Continuous data sampling is achieved through circular calls from memory. Microsoft's Azure Kinect motion capture system is a widely used device for markerless acquisition; it provides an excellent human joint tracking SDK and electronic motion sensors. In the actual measurement, the camera captures the trajectories of the body's joint points, and the coordinate changes of these joint points are imported into the human model as functions of time, so that the model and the actual human body achieve a high degree of consistency in running movements. This adds reliability to the subsequent validation.
Azure Kinect is used to collect 3D data of human bones and joints while running in an unobstructed environment; it can capture the distinguishing characteristics of human bones and joints without markers. By contrast, an optical motion acquisition system with multiple cameras records the motion of optical markers placed on the body [30]. Each camera acquires data in a two-dimensional coordinate system, and each optical marker must be seen by at least two cameras; from these two-dimensional coordinates, motion data in a three-dimensional coordinate system can be calculated. Multiple infrared cameras are used to capture and integrate 3D data from multiple angles [30], mainly to solve the instability problem of single-camera data collection; for this application, however, the difference in effectiveness compared with a single camera is small [31]. Moreover, the markerless method is easy to prepare, has low equipment costs, and requires less space than the active marker method. The FMCW radar experimental platform is shown in Figure 4. The experimental site should be spacious and bright. In order to better detect the human target, a specific set of radar parameters was selected for this experiment: the center frequency was 77 GHz, the sampling frequency was 2.5 MHz, the frequency modulation period was 50 μs, the number of pulses per frame was 128, and the bandwidth was 4000 MHz.
In accordance with the parameters described above, human target detection was performed to obtain echo and micro-motion information. Figure 5 simulates the moving distance information of the human running model. Figure 5 has a color band on the side, whose colors are mapped to the image matrix data, indicating the dynamic range and value changes of the data. Through a continuous color change from blue to red, the change from low to high values of the range profile data is visually represented. The data show that the human running model moved from 6 m to 2 m from the FMCW radar.
Figure 6 shows the micro-Doppler characteristics obtained from the simulation and those of human running in the actual detection scenario, using the short-time Fourier transform (STFT). As can be seen from Figure 6, the modulation of the echoes by the legs, feet, and arms is reflected in the outer envelope of the time–frequency spectrum. The strongest component is the modulation information of the torso; the modulation information of the other body parts fluctuates periodically around the torso and forms the inner envelope of the time–frequency spectrum.
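The spectrogram extraction step can be sketched as follows, using `scipy.signal.stft` on a simulated slow-time signal containing a bulk (torso) Doppler line with a superimposed sinusoidal micro-Doppler component. The frame rate, frequencies, and modulation index are illustrative assumptions, not the experiment's values.

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0                                 # slow-time (frame) rate, assumed
t = np.arange(0, 2, 1 / fs)

f_torso = 150.0                             # bulk Doppler shift (Hz)
f_mod = 2.0                                 # limb swing rate (Hz)
beta = 10.0                                 # micro-Doppler modulation index
phase = 2 * np.pi * f_torso * t + beta * np.sin(2 * np.pi * f_mod * t)
sig = np.exp(1j * phase)

# Two-sided STFT for the complex signal; |Z| is the micro-Doppler spectrogram
f, tt, Z = stft(sig, fs=fs, nperseg=128, return_onesided=False)
spectrogram = np.abs(Z)
```

The torso line appears as the strongest ridge near f_torso, with the sinusoidal micro-Doppler sidebands oscillating around it, mirroring the inner/outer envelope structure described above.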

5. Comparative Analysis

According to the research and experimental results in Sections 3 and 4, the swing of each limb causes an obvious Doppler frequency shift during running. This means that when the human body moves, the vibration frequency of the limbs changes with the speed of movement. In order to better verify the human body model and analyze the micro-Doppler features, this section shows the waveforms of each component in the human running time–frequency image. These waveforms show that the swings of different limbs exhibit different characteristics.
Figure 7 shows the waveforms of the various components in the running micro-Doppler signature; the left side shows the simulated data and the right side the measured data. From the Doppler waveform of the human torso (a), it can be seen that the echo energy of the torso is the strongest. Its waveform is closest to the average Doppler frequency shift produced by the mean speed of the human body and is approximately sinusoidal. The Doppler frequency shifts of the upper arm (b) and the thigh (d) oscillate around that of the trunk and are also approximately sinusoidal. The Doppler waveforms of the lower leg (e) and foot (f) are similar to each other; the main difference is that the Doppler frequency shift of the feet is larger. Both are more complex than in standard walking, mainly because the stride length is larger and the stride frequency is faster when running, and there are phases in which both feet are off the ground, which makes the dynamic gait of the lower extremities more complex. The calf and foot movements produce the largest Doppler frequency shift of the whole human body in the running state, while the minimum Doppler frequency shift is close to zero.
From the above analysis, the Doppler frequency shift of the lower leg and foot is the maximum Doppler frequency in the whole human running process; the outermost envelope of the human running Doppler signature is caused by the lower leg and foot. Comparing the two, the Doppler frequency of the lower leg is somewhat clearer, because the radar echo energy of the foot is very weak and almost submerged in the clutter. Therefore, the maximum Doppler shift extracted from the time–frequency map is closer to the Doppler shift of the lower leg. The Doppler frequency waveforms of the torso, upper arm, lower arm, and thigh are approximately sinusoidal, oscillating around the Doppler frequency induced by the mean velocity. The calf and foot have more complex waveforms with higher Doppler frequencies, reflecting the special movement posture of the lower leg during the gait cycle. In short, the Doppler frequency curve produced by the movement of each limb changes periodically. An envelope extraction algorithm was proposed for heart sound signal segmentation [32], in which the envelope is the amplitude of the analytic signal computed by the Hilbert transform; experiments show that the envelope extracted by the Hilbert transform has good time and frequency resolution. Dynamic time warping (DTW) [33] is an algorithm based on a dynamic programming strategy; it adjusts the temporal alignment of two time series to correctly calculate the similarity between them.
In order to confirm the accuracy and efficiency of the model, six parameters in the obtained time–frequency spectrum were selected as reference quantities: torso, upper arm, lower arm, thigh, lower leg, and foot. First, Hilbert envelope extraction [32] is performed on each reference quantity. After the envelope curves are obtained, the similarity between the simulated and measured envelope curves is computed using the DTW algorithm [33]. The columns in Table 1 show the four types of data used for model validation. The peak is the highest value of the envelope curve, divided into a simulated peak and a measured peak. The simulation rate is the waveform similarity of each part calculated by the DTW algorithm. The last column is the human model fit rate, obtained as the weighted average of the six covariates: torso, upper arm, lower arm, thigh, lower leg, and foot. Since the model's overall echo equation is derived by weighted summation over the different body parts, the fit rate of the human model indicates whether the model is effective.
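The validation pipeline described above can be sketched as follows: Hilbert envelope extraction, then a dynamic-programming DTW distance between the simulated and measured envelope curves. This is a minimal textbook DTW, not any specific library implementation, and the test signals are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def envelope(x):
    """Envelope as the magnitude of the analytic signal (Hilbert transform)."""
    return np.abs(hilbert(x))

def dtw_distance(a, b):
    """Classic O(n*m) dynamic-programming DTW distance between two sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A DTW distance of zero indicates identical envelope curves; a per-part similarity (fit rate) can then be derived by normalizing this distance.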
Two other models were evaluated and compared on the basis of the above. The results show that the outer envelope of the Thalmann model [22] has a large deviation between the measured and simulated peaks, which may be due to the instability of the model: for complex human actions it cannot closely simulate the real human body and is only suitable for simple human walking. The Doppler model [20] is a point-scattering model; for the human RCS, the scattering intensities of different parts of the human target are weighted and superimposed in fixed proportions. The RCS of the model in this paper is instead calculated from the modeled shapes of the different body parts. Therefore, the Doppler model [20] has a lower resolution for more detailed body-part transformations and a lower average fit rate. Considering the proposed model as a whole, for a human target running at 1.2 m/s, the fitting accuracies for the torso, upper arm, lower arm, thigh, lower leg, and foot are 97%, 91%, 89%, 93%, 87%, and 84%, respectively. The fitting accuracy of the torso, upper arm, thigh, lower leg, and foot is higher than that of the other two models. The torso fits best because its average Doppler is most affected by the moving speed of the target subject, its echo energy is the largest, and it is least affected by ground clutter. The outer envelope of the human body, by contrast, has small echo energy and is strongly affected by ground clutter, so there is some error relative to the theoretical value. Nevertheless, the overall fit of the model is good, reaching 90.6%, which is highly consistent with the measured data.
To further demonstrate the effectiveness of the proposed method, we report the position errors of the main joint points. All joints of the body are listed in Figure 8. Table 2 shows that the whole human body is divided into six parts, and each part corresponds to the joint points in Figure 8. Since we take the main parts of the human body as parameters in Table 1, the nodes covered by the main parts are taken as the main nodes. Through the average error of the joint points, the overall accuracy of the human skeleton model can be evaluated. The position errors of the main joint points are shown in Table 3. It can be seen from Table 3 that the errors of the elbow, hand, knee, and foot are larger than the other joints. There may be two reasons for this. First, the reflective area of the human arm or foot is small, resulting in weaker reflected signals than other parts of the human body. Second, the subjects’ arms had a greater range of motion in the measured experiments, making it harder to track those joints in comparison to others.
JGLNet-T [16], mPose [17], KCL [18], and mmPose [19] are advanced human skeleton models in the field of deep learning. These models are mainly used for human pose reconstruction. High-precision pose reconstruction is a computationally intensive task that requires a large number of parameters, an iterative optimization process, and other complex operations, which naturally increases the need for computing resources and dedicated hardware. In contrast, the joint points of the proposed model are concentrated at the important joints, which reduces the number of parameters and calculations. The purpose of the human running model is to judge the state of human motion rather than to achieve high-precision reconstruction, so iterative optimization and precise registration are not required, reducing the dependence on computing resources. The data in Table 3 show that the proposed model can achieve results similar to the advanced human skeleton models on a more general computing platform. It has high real-time performance and application potential in autonomous driving.

6. Conclusions

In this paper, a new human running model has been proposed. According to the point-target model of FMCW radar, the FMCW radar echo of a running human target has been derived. The correctness of the human running model has been verified on a real experimental platform. This model can provide the basis for subsequent extraction of human target micro-motion information and for micro-Doppler feature classification and recognition of human motion. Because of the deviation caused by clutter in the actual measurements, anti-disturbance design will be incorporated to improve the motion model in follow-up research.
The proposed human running model based on the FMCW radar achieves a real simulation of a running human body, which fills the gap of a complex action human model in the field of millimeter wave radar. The human running Doppler features extracted by this model can provide data support for the later interference simulation and anti-disturbance work. High-resolution Doppler characterization of the human body can also be widely used in security and biometric applications. The model is now mostly utilized for a qualitative investigation of the human running motion mechanism, and the practical applications of other actions are still to be developed and explored.

Author Contributions

Conceptualization, Y.Z., X.L. and S.L.; methodology, Y.Z., X.L. and S.L.; software, Y.Z. and X.L.; validation, Y.Z. and J.M.; formal analysis, Y.Z., S.L. and M.M.; investigation, Y.Z., X.L. and M.M.; resources, Y.Z., X.L. and G.M.; data curation, Y.Z., X.L. and J.M.; writing—original draft preparation, Y.Z. and X.L.; writing—review and editing, Y.Z., X.L. and J.M.; visualization, Y.Z. and X.L.; supervision, G.M. and Y.Z.; project administration, Y.Z. and M.M.; funding acquisition, Y.Z. and G.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Defense Basic Research Plan (grant number. JCKYS2020DC202), the Natural Science Foundation of Hebei Province (grant number. F2022208002), the Science and Technology Project of Hebei Education Department (Key program) (grant number. ZD2021048).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mokhtari, K.; Wagner, A. Pedestrian Collision Avoidance for Autonomous Vehicles at Unsignalized Intersection Using Deep Q-Network. arXiv 2021, arXiv:2105.00153.
  2. Chen, L.; Lin, S.; Lu, X.; Cao, D.; Wu, H.; Guo, C.; Liu, C.; Wang, F.-Y. Deep Neural Network Based Vehicle and Pedestrian Detection for Autonomous Driving: A Survey. IEEE Trans. Intell. Transp. Syst. 2021, 22, 3234–3246.
  3. Roszyk, K.; Nowicki, M.R.; Skrzypczyński, P. Adopting the YOLOv4 Architecture for Low-Latency Multispectral Pedestrian Detection in Autonomous Driving. Sensors 2022, 22, 1082.
  4. Camara, F.; Bellotto, N.; Cosar, S.; Nathanael, D.; Althoff, M.; Wu, J.; Ruenz, J.; Dietrich, A.; Fox, C.W. Pedestrian Models for Autonomous Driving Part I: Low-Level Models, From Sensing to Tracking. IEEE Trans. Intell. Transp. Syst. 2020, 22, 6131–6151.
  5. Zhang, Y.; Shen, Y.; Ma, G.; Man, M.; Liu, S. Analysis of Electrostatic Discharge Interference Effects on Small Unmanned Vehicle Handling Systems. Electronics 2023, 12, 1640.
  6. Zhang, Y.; Peng, L.; Ma, G.; Man, M.; Liu, S. Dynamic Gesture Recognition Model Based on Millimeter-Wave Radar with ResNet-18 and LSTM. Front. Neurorobotics 2022, 16, 903197.
  7. Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review. Sensors 2021, 21, 2140.
  8. Fuchs, J.; Dubey, A.; Lübke, M.; Weigel, R.; Lurz, F. Automotive Radar Interference Mitigation using a Convolutional Autoencoder. In Proceedings of the 2020 IEEE International Radar Conference (RADAR), Washington, DC, USA, 28–30 April 2020; pp. 315–320.
  9. Addabbo, P.; Besson, O.; Orlando, D.; Ricci, G. Adaptive Detection of Coherent Radar Targets in the Presence of Noise Jamming. IEEE Trans. Signal Process. 2019, 67, 6498–6510.
  10. Chen, S.; Shangguan, W.; Taghia, J.; Kühnau, U.; Martin, R. Automotive Radar Interference Mitigation Based on a Generative Adversarial Network. In Proceedings of the 2020 IEEE Asia-Pacific Microwave Conference (APMC), Hong Kong, 8–11 December 2020; pp. 728–730.
  11. Brooker, G.M. Mutual Interference of Millimeter-Wave Radar Systems. IEEE Trans. Electromagn. Compat. 2007, 49, 170–181.
  12. Liu, W.; Yu, C.; Wang, X.; Zhang, Y.; Yu, Y. The Altitude Hold Algorithm of UAV Based on Millimeter Wave Radar Sensors. In Proceedings of the 2017 International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), Hangzhou, China, 26–27 August 2017; pp. 436–439.
  13. Song, H.; Jung, J. Towards an unsupervised large-scale 2D and 3D building mapping with LiDAR. arXiv 2022, arXiv:2205.14585.
  14. Pavlakos, G.; Choutas, V.; Ghorbani, N.; Bolkart, T. Expressive Body Capture: 3D Hands, Face, and Body from a Single Image. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 10967–10977.
  15. Yu, T.; Zhao, J.; Zheng, Z.; Guo, K.; Dai, Q.; Li, H.; Pons-Moll, G.; Liu, Y. DoubleFusion: Real-Time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 2523–2539.
  16. Cao, Z.; Ding, W.; Chen, R.; Zhang, J.; Guo, X.; Wang, G. A Joint Global–Local Network for Human Pose Estimation with Millimeter Wave Radar. IEEE Internet Things J. 2022, 10, 434–446.
  17. Shi, C.; Lu, L.; Liu, J.; Wang, Y.; Chen, Y.; Yu, J. mPose: Environment- and subject-agnostic 3D skeleton posture reconstruction leveraging a single mmWave device. Smart Health 2022, 23, 100228.
  18. Ding, W.; Cao, Z.; Zhang, J.; Chen, R.; Guo, X.; Wang, G. Radar-based 3D human skeleton estimation by kinematic constrained learning. IEEE Sens. J. 2021, 21, 23174–23184.
  19. Sengupta, A.; Jin, F.; Zhang, R.; Cao, S. mm-Pose: Real-time human skeletal posture estimation using mmWave radars and CNNs. IEEE Sens. J. 2020, 20, 10032–10044.
  20. Geisheimer, J.L.; Greneker, E.F.; Marshall, W.S. High-resolution Doppler model of the human gait. In Proceedings of SPIE AeroSense 2002, Radar Sensor Technology and Data Visualization, Orlando, FL, USA, 2002. [Google Scholar] [CrossRef]
  21. van Dorp, P.; Groen, F. Human walking estimation with radar. IEE Proc. Radar Sonar Navig. 2003, 150, 356–365. [Google Scholar] [CrossRef]
  22. Boulic, R.; Thalmann, N.M.; Thalmann, D. A global human walking model with real-time kinematic personification. Vis. Comput. 1990, 6, 344–358. [Google Scholar] [CrossRef]
  23. van Dorp, P.; Groen, F.C. Feature-based human motion parameter estimation with radar. IET Radar Sonar Navig. 2008, 2, 135–145. [Google Scholar] [CrossRef]
  24. Trommel, R.P.; Harmanny, R.; Cifola, L.; Driessen, J.N. Multi-target human gait classification using deep convolutional neural networks on micro-doppler spectrograms. In Proceedings of the 2016 European Radar Conference (EuRAD), London, UK, 5–7 October 2016; pp. 81–84. [Google Scholar]
  25. Chen, V.C.; Ling, H. Time-Frequency Transforms for Radar Imaging and Signal Analysis; Artech House: Boston, MA, USA, 2002. [Google Scholar]
  26. Zhang, P.; Xie, W.; Ou, J.; Zhang, J.; Liu, K.; Wang, G. Research on Human Micro-motion Feature Extraction Technology. In Proceedings of the 2019 IEEE 3rd Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Chongqing, China, 11–13 October 2019; pp. 762–767. [Google Scholar] [CrossRef]
  27. Hernangómez, R.; Santra, A.; Stańczak, S. Human Activity Classification with Frequency Modulated Continuous Wave Radar Using Deep Convolutional Neural Networks. In Proceedings of the 2019 International Radar Conference (RADAR), Toulon, France, 23–27 September 2019; pp. 1–6. [Google Scholar] [CrossRef]
  28. Sharifi, A.; Amini, J. Forest biomass estimation using synthetic aperture radar polarimetric features. J. Appl. Remote. Sens. 2015, 9, 097695. [Google Scholar] [CrossRef]
  29. Sharifi, A.; Amini, J.; Tateishi, R. Estimation of Forest Biomass Using Multivariate Relevance Vector Regression. Photogramm. Eng. Remote. Sens. 2016, 82, 41–49. [Google Scholar] [CrossRef] [Green Version]
  30. Chen, V.C. The Micro-Doppler Effect in Radar; Artech House: London, UK, 2011. [Google Scholar]
  31. Eichler, N.; Hel-Or, H.; Shimshoni, I. Spatio-Temporal Calibration of Multiple Kinect Cameras Using 3D Human Pose. Sensors 2022, 22, 8900. [Google Scholar] [CrossRef] [PubMed]
  32. Choi, S.; Jiang, Z. Comparison of envelope extraction algorithms for cardiac sound signal segmentation. Expert Syst. Appl. 2008, 34, 1056–1069. [Google Scholar] [CrossRef]
  33. Müller, M. Dynamic Time Warping. In Information Retrieval for Music and Motion; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar] [CrossRef]
Figure 1. Human running model construction.
Figure 2. Space trajectory. (a) Positional changes in vertical translation and (b) angle changes in hip flexion and left and right rotation.
Figure 3. Geometric relationship between the radar and one part of the person.
Figure 4. Experimental verification platform for FMCW radar.
Figure 5. Simulation distance information of the human running model.
Figure 6. FMCW radar platform echo micro-motion information. (a) Simulated echoes of the running model and (b) echoes of the actual data.
Figure 7. Simulation data analysis and actual measurement data analysis of the main parts of human running. (a) Torso; (b) upper arm; (c) lower arm; (d) thigh; (e) lower leg; (f) foot.
Figure 8. Illustration of the 16-joint human skeleton tree.
Table 1. The human model validates the experimental data.

| Model | Parameter | Peak Simulation (Hz) | Peak Measurement (Hz) | Simulation Rate | Model Fit Rate |
|---|---|---:|---:|---:|---:|
| Thalmann [22] | Torso | 355 | 490 | 88% | 83.2% |
| | Upper arm | 390 | 558 | 85% | |
| | Lower arm | 420 | 650 | 89% | |
| | Thigh | 460 | 675 | 85% | |
| | Lower leg | 537 | 703 | 79% | |
| | Foot | 590 | 705 | 78% | |
| Doppler [20] | Torso | 330 | 500 | 90% | 86.5% |
| | Upper arm | 352 | 510 | 87% | |
| | Lower arm | 385 | 603 | 87% | |
| | Thigh | 365 | 653 | 88% | |
| | Lower leg | 372 | 730 | 84% | |
| | Foot | 410 | 760 | 83% | |
| Ours | Torso | 480 | 536 | 95% | 90.6% |
| | Upper arm | 530 | 560 | 91% | |
| | Lower arm | 610 | 645 | 89% | |
| | Thigh | 706 | 743 | 93% | |
| | Lower leg | 785 | 806 | 87% | |
| | Foot | 1017 | 1068 | 84% | |
Table 2. The whole human body is divided into six parts.

| Body Part | Corresponding Joint List |
|---|---|
| Torso | 0—hip center |
| Head | 1—head |
| Left upper limb | 2—left shoulder, 3—left elbow, and 4—left hand |
| Right upper limb | 5—right shoulder, 6—right elbow, and 7—right hand |
| Left lower limb | 12—left hip, 13—left knee, 14—left ankle, and 15—left foot |
| Right lower limb | 8—right hip, 9—right knee, 10—right ankle, and 11—right foot |
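The six-part grouping above can be expressed as a small lookup structure. The sketch below (Python; the dictionary name and keys are illustrative, not from the paper) records the joint indices exactly as listed in Table 2 and checks that together they cover the 16-joint skeleton tree of Figure 8 exactly once.

```python
# Six-part grouping of the 16-joint human skeleton (indices per Table 2).
BODY_PARTS = {
    "torso": [0],                          # 0 - hip center
    "head": [1],                           # 1 - head
    "left_upper_limb": [2, 3, 4],          # shoulder, elbow, hand
    "right_upper_limb": [5, 6, 7],         # shoulder, elbow, hand
    "left_lower_limb": [12, 13, 14, 15],   # hip, knee, ankle, foot
    "right_lower_limb": [8, 9, 10, 11],    # hip, knee, ankle, foot
}

# Flatten all groups and verify each of the 16 joints appears exactly once.
all_joints = sorted(j for joints in BODY_PARTS.values() for j in joints)
assert all_joints == list(range(16))
```

With this structure, per-part quantities (such as the per-part micro-Doppler peaks in Table 1) can be aggregated by simple dictionary lookups.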
Table 3. Comparison of the proposed method with the existing methods in the estimation errors per joint.

| Model | 0 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 11 | 12 | 13 | 15 | Error |
|---|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| mmPose [19] | 0.5 | 15.3 | 57.7 | 118.0 | 16.5 | 59.7 | 120.8 | 7.1 | 42.0 | 77.6 | 7.1 | 42.6 | 78.4 | 49.5 |
| KCL [18] | 1.8 | 14.5 | 50.5 | 92.9 | 14.1 | 48.1 | 88.1 | 7.3 | 28.9 | 52.8 | 6.8 | 29.2 | 52.7 | 37.5 |
| mPose [17] | 0.3 | 17.9 | 56.9 | 108.7 | 18.3 | 59.1 | 109.3 | 7.7 | 38.1 | 66.5 | 7.6 | 37.4 | 68.5 | 45.9 |
| JGLNet-T [16] | 1.1 | 12.0 | 37.5 | 62.8 | 11.9 | 36.3 | 64.6 | 7.4 | 22.8 | 37.1 | 7.4 | 22.9 | 36.2 | 27.7 |
| Ours | 1.3 | 14.5 | 42.8 | 78.1 | 14.7 | 48.0 | 86.4 | 7.2 | 29.0 | 50.1 | 7.2 | 29.9 | 49.5 | 35.3 |
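The Error column in Table 3 is consistent with the plain mean of the per-joint errors over the 13 listed joints. The Python check below (values copied from the table; the assumption that Error is an unweighted mean is ours, not stated in the paper) verifies this for two of the rows.

```python
# Per-joint estimation errors from Table 3 (joints 0, 2-9, 11-13, 15),
# paired with the reported overall Error for that row.
rows = {
    "JGLNet-T [16]": ([1.1, 12.0, 37.5, 62.8, 11.9, 36.3, 64.6,
                       7.4, 22.8, 37.1, 7.4, 22.9, 36.2], 27.7),
    "Ours":          ([1.3, 14.5, 42.8, 78.1, 14.7, 48.0, 86.4,
                       7.2, 29.0, 50.1, 7.2, 29.9, 49.5], 35.3),
}

for name, (errors, reported) in rows.items():
    mean_err = sum(errors) / len(errors)
    # The reported Error matches the per-joint mean to one decimal place.
    assert round(mean_err, 1) == reported, (name, mean_err)
```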

Zhang, Y.; Li, X.; Ma, G.; Ma, J.; Man, M.; Liu, S. A New Model for Human Running Micro-Doppler FMCW Radar Features. Appl. Sci. 2023, 13, 7190. https://doi.org/10.3390/app13127190
