#### *3.3. Pre-Processing Results*

Figure 12 shows the received raw signals over a set time of eight seconds for all cases, where the x-axis and y-axis indicate the time (s) and amplitude (voltage level), respectively.



**Figure 12.** Received signals in the time domain; (**a**–**h**) indicate cases #1 to #8.

In Figure 12a,f, only noise is found, since there are no moving objects (cases #1 and #6). While a breathing signal appears as a sine wave for the still human in Figure 12b (case #2), the breathing and motion signals are mixed in Figure 12c due to the slight movement of the neck (case #3).

In Figure 12d,e, because more human motion exists, aperiodic signals appear with higher frequency content than human respiration.

Finally, Figure 12g,h show periodic vibration signals caused by the shaking of the box and the vehicle (cases #7 and #8).

Figure 13 shows the micro-Doppler images generated by the micro-Doppler-based motion feature extraction part described in Figure 4. Figure 13a–h are the signal processing results of the signals in Figure 12, i.e., the results for the eight aforementioned cases. Here, the x-axis is the time (s) and the y-axis is the frequency (Hz).
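The exact processing chain is defined in Figure 4; purely as an illustration, a micro-Doppler image of this kind can be produced with a short-time Fourier transform of the received time-domain signal. The Python sketch below (the authors' implementation is in Matlab) uses placeholder values for the sampling rate, window length, and overlap, which are not the parameters used in the experiments.

```python
import numpy as np
from scipy.signal import stft

def micro_doppler_image(raw_signal, fs=1000, win_len=256, overlap=192):
    """Time-frequency (micro-Doppler) image of the raw radar signal.

    fs, win_len, and overlap are illustrative values only, not the
    settings reported in the paper.
    """
    f, t, Z = stft(raw_signal, fs=fs, nperseg=win_len, noverlap=overlap)
    power_db = 20.0 * np.log10(np.abs(Z) + 1e-12)  # magnitude in dB
    return f, t, power_db

# Example: an 8 s record sampled at 1 kHz (placeholder values)
fs = 1000
raw = np.random.randn(8 * fs)  # stand-in for a measured CW radar signal
freqs, times, md_image = micro_doppler_image(raw, fs=fs)
```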

In Figure 13a,f, no scattering points above the background noise are seen because there is no moving object.

In Figure 13b,c, because the human is still or makes only slight motions, mostly breathing signals are extracted. In this case, narrow Doppler spectra appear almost continuously over time.

Interestingly, in Figure 13g,h, sharp patterns appear, similar to those in Figure 13b,c. This occurs because the inanimate objects do not have multiple scattering points.

On the other hand, in Figure 13d,e, a wide distribution of scattering points is found due to the various components of human movement. In addition, it can be seen that the distribution of the scattering points varies over time.

Based on these micro-Doppler images, we can extract two feature vectors, *x*<sup>1</sup> as the 'extended degree of scattering points' and *x*<sup>2</sup> as the 'different degree of scattering points', through the processing shown in Figure 8.
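Figure 8 defines the actual processing of these two features; the sketch below only restates the idea under assumptions: scattering points are counted as Doppler bins above a pre-calibrated noise threshold, *x*<sup>1</sup> is the mean count over sub-frames, and *x*<sup>2</sup> is the mean change of that count between successive sub-frames. The threshold value and sub-frame layout are placeholders, not the paper's settings.

```python
import numpy as np

def motion_features(md_image_db, noise_threshold_db):
    """Sketch of the two motion features described in the text.

    md_image_db        : micro-Doppler image, shape (n_freq_bins, n_subframes)
    noise_threshold_db : assumed background-noise level used to detect
                         scattering points (a placeholder value)

    x1 ('extended degree of scattering points'): mean number of Doppler bins
        above the noise level per sub-frame.
    x2 ('different degree of scattering points'): mean absolute change of that
        number between successive sub-frames.
    """
    points_per_subframe = np.sum(md_image_db > noise_threshold_db, axis=0)
    x1 = np.mean(points_per_subframe)
    x2 = np.mean(np.abs(np.diff(points_per_subframe)))
    return x1, x2
```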

Figure 14 shows the Doppler spectra obtained from the Doppler frequency-based vital sign feature extraction part shown in Figure 6. These results are also obtained from the signals in Figure 12 for all eight scenarios. Here, the x-axis is the frequency (Hz) and the y-axis denotes the magnitude. In addition, the red-dotted boxes indicate the frequency region below 1 Hz, which is occupied by the breathing signal.
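As a rough illustration of this step (the actual chain is the one defined in Figure 6), the Doppler spectrum of an analysis frame can be obtained with a plain FFT of the received signal, after which the region below 1 Hz, i.e., the red-dotted boxes in Figure 14, is inspected for a breathing component. The frame length and sampling rate below are assumed placeholder values.

```python
import numpy as np

def doppler_spectrum(frame, fs):
    """Magnitude spectrum of one analysis frame of the received radar signal."""
    windowed = frame * np.hanning(len(frame))        # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)  # frequency axis in Hz
    return freqs, spectrum

# Example with placeholder values: an 8 s frame sampled at 1 kHz
fs = 1000
frame = np.random.randn(8 * fs)        # stand-in for a measured frame
freqs, spec = doppler_spectrum(frame, fs)
breathing_region = spec[freqs < 1.0]   # region inspected for the breathing signal
```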


In Figure 14a,f, no dominant spectrum is found due to the absence of motion or vital signs. In Figure 14b,c, the Doppler spectrum is sharp within the red box due to the breathing signal. However, the spectrum in Figure 14c is slightly widened compared to that in Figure 14b due to the motion of the neck.

In Figure 14g,h, sharp Doppler spectra are also found, but most of them are located above 1 Hz.

On the other hand, in Figure 14d,e, we find that the Doppler spectra are spread across the red box. Occasionally, maximum peaks can be found at less than 1 Hz according to the Doppler volume of human movement.

Based on these Doppler spectra, we can determine whether or not vital signs exist and obtain the third feature vector *x*<sup>3</sup>. In this paper, a simple algorithm that extracts the breathing signal is employed; a more sophisticated algorithm would extract vital signs with a higher detection probability.
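The paper only states that a simple breathing-extraction algorithm is used; one plausible minimal realization, sketched below, declares a vital sign present when the dominant Doppler peak lies inside an assumed sub-1 Hz breathing band and exceeds an assumed magnitude threshold. Both the band edges and the threshold are illustrative assumptions.

```python
import numpy as np

def vital_sign_feature(spectrum, freqs, peak_threshold, band=(0.1, 1.0)):
    """Simple presence/absence test for a breathing signal (feature x3).

    spectrum       : magnitude spectrum of one analysis frame
    freqs          : frequency axis in Hz
    peak_threshold : minimum peak magnitude to accept (assumed value)
    band           : breathing band below 1 Hz, as in the text (assumed edges)

    Returns True if a vital sign is judged present, otherwise False.
    """
    peak_idx = int(np.argmax(spectrum))
    in_band = band[0] <= freqs[peak_idx] <= band[1]
    strong_enough = spectrum[peak_idx] > peak_threshold
    return bool(in_band and strong_enough)
```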

Figure 15 shows the two motion features extracted from the micro-Doppler images of Figure 13 for the eight cases. Figure 15a,b indicate the first feature *x*<sup>1</sup>, the 'extended degree of scattering points', and the second feature *x*<sup>2</sup>, the 'different degree of scattering points'. Here, the x-axis is the time (s) and the y-axis is the number of detected scattering points. The results for cases #1 to #8 are represented by the green-solid, red-solid, red-dotted, blue-solid, blue-dotted, green-dotted, black-solid, and black-dotted lines, respectively.


**Figure 14.** Doppler spectrum extracted from the Doppler frequency-based vital sign feature extraction part; (**a**–**h**) indicate cases #1 to #8.

**Figure 15.** Motion feature extraction results for eight cases: (**a**) results of the extended degree of scattering points and (**b**) results of the different degree of scattering points.

Figure 15a shows that the number of scattering points in cases #2 and #3 is mostly lower than in cases #4 and #5 due to the different levels of human movement. That is, for a still or slightly moving human, only a few scattering points are reflected.

In cases #7 and #8, we find patterns similar to those in cases #2 and #3. For the inanimate objects, the Doppler spectra are narrow because multiple movement components do not exist.

Finally, when there is no motion component, the number of detected scattering points is zero, as in cases #1 and #6.

In Figure 15b, when the passenger is moving on a seat for cases #4 and #5, we find that the distribution of the Doppler spectrum varies more than those of a still human (case #2), a slightly moving human (case #3), and other objects (cases #7 and #8).

Figure 16 shows the third feature, the vital sign feature, extracted from the Doppler frequency spectra of Figure 14. That is, Figure 16a–h indicate the presence or absence of vital signs. For all cases, the x-axis and the y-axis denote the time (s) and the logic value (true or false), respectively.

As shown in Figure 16a,f, no signal is detected in the cases without motion (cases #1 and #6).

In Figure 16b,c, vital signs are recognized at all times, even though the passenger in case #3 slightly moves his neck. However, from the results in Figure 16d,e, we find that breathing signals may or may not be extracted depending on the movement level of the human.

Finally, for inanimate objects, vital signs are mostly not detected, as shown in Figure 16g,h. However, in these results, false detections occasionally occur due to noise. This problem can be resolved by employing a more refined breathing detection algorithm in the future.


**Figure 16.** Vital sign feature extraction results; (**a**–**h**) indicate cases #1 to #8.

#### *3.4. Proposed Feature-Based Human Recognition Results*

Figure 17 presents the three-dimensional distributions of the features extracted from the micro-Doppler image and the vital sign frequency spectrum.

**Figure 17.** Three-dimensional distribution of three features extracted from the proposed algorithm scheme for a human and other objects.

In cases #1 and #6, without any moving object, the three features are positioned at zero, as shown by the purple boxes.

We labeled the features of the actual human as '1' and those of the other cases as '0'. We can distinguish between a human with no or little motion (cases #2 and #3) and an inanimate object with movement (cases #7 and #8) using only the vital sign feature *x*<sup>3</sup>. Here, cases #2 and #3 are displayed with red-star marks, and blue-cross marks are used to present cases #7 and #8. However, as mentioned above, even in cases #7 and #8, a few incorrect results appear, as if vital signs were detected, due to noise.

As shown by the green circle marks for cases #4 and #5, because the *x*<sup>3</sup> values for the passenger with movement are distributed between 1 and 0, it is impossible to determine whether the detected object is human when using only *x*<sup>3</sup>. However, in these two cases, the first and second features extracted from the micro-Doppler image are positioned in an area far from the origin, while the results for the inanimate objects appear around the origin. Thus, when using *x*<sup>1</sup> and *x*<sup>2</sup>, we can distinguish between a human with motion and other objects with movement.

In this paper, we use the three feature vectors to train and test the classifier using machine learning with the BDT, as shown in Figure 18. The procedure used for the machine learning and verification programming is described below; all procedures were coded in Matlab.

**Figure 18.** Structure of features comprising the training data set and the test set for machine learning.
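The authors coded the training and verification procedure in Matlab; the following scikit-learn sketch is only a stand-in that mirrors the described workflow: frames of the three features labeled '1' for a human and '0' otherwise, a decision-tree classifier in place of the BDT, and randomly repeated train/test splits. The data, split ratio, and number of trials are placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: one row of [x1, x2, x3] per measurement frame; y: 1 = human, 0 = other.
# Random data stands in here for the measured feature set.
rng = np.random.default_rng(0)
X = rng.random((400, 3))
y = rng.integers(0, 2, size=400)

accuracies = []
for trial in range(10):  # randomly repeated verification trials
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=trial)
    clf = DecisionTreeClassifier(random_state=trial).fit(X_tr, y_tr)  # stand-in for the BDT
    accuracies.append(accuracy_score(y_te, clf.predict(X_te)))

print(f"mean accuracy over trials: {np.mean(accuracies):.3f}")
```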


The typical methods [7,12] use vital sign monitoring of breathing for in-vehicle applications. That is, the sampled radar echo signal is analyzed for the presence of periodic breathing while separating vital signs from background noise. Thus, previous works considered only the scenario of a human asleep in a vehicle. In this paper, we refer to approaches that use only vital sign signals as the typical method.

In Table 2, the human recognition performance outcomes are presented for the typical method and the proposed algorithm. In the typical algorithm, only the vital signals are used, covering the cases of a human with no motion and with slight motion. However, the proposed algorithm uses not only the characteristics of the Doppler scattering points but also the breathing signals.

We present two performance metrics: the classification accuracy (%) and the classification error rate (%).

The classification accuracy indicates whether or not an actual human has been accurately classified as a human. On the other hand, the classification error rate represents the rate at which an inanimate object is mistakenly determined to be a human.
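Under the same labeling convention (1 for a human, 0 for any other object), these two metrics can be computed as sketched below; the function name and percentage scaling are illustrative.

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Metrics as defined in the text (labels: 1 = human, 0 = other object).

    accuracy   : percentage of actual humans correctly classified as human
    error_rate : percentage of inanimate objects mistakenly classified as human
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    human, other = (y_true == 1), (y_true == 0)

    accuracy = 100.0 * np.mean(y_pred[human] == 1)
    error_rate = 100.0 * np.mean(y_pred[other] == 1)
    return accuracy, error_rate
```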

While the classification accuracy of the typical algorithm using only vital signs is approximately 70%, the classification accuracy of the proposed method is improved to 98.6%, as shown in Table 2. That is, the performance is enhanced by nearly 28%.



**Table 2.** Classification performance for human recognition.

Regarding the classification error rate, the proposed method performs 0.5% worse than the typical method. Because we employ a very simple algorithm to detect vital signs, the noise of an inanimate object can occasionally be incorrectly recognized as vital signs. This problem can be resolved by employing a more advanced vital sign detection algorithm in the future.

In this paper, we conducted the experiment with three people, and similar performance outcomes were obtained. This is because the measurement distance is very short, about 1 m, and the body shapes of the participants are similar.

#### **4. Conclusions**

In this paper, we defined three new features and proposed a human recognition scheme based on machine learning using a CW radar sensor. To do this, we first extracted the 'extended degree of scattering points' from the micro-Doppler images by calculating the mean of the Doppler reflection points over sub-frames. Second, to extract the 'different degree of scattering points', we calculated the mean of the difference in the Doppler reflection points between two successive sub-frames.

While the two feature vectors described above capture a human's motion, the last feature vector is for human vital sign recognition. Hence, we defined the 'presence of vital signs', extracted from the Doppler frequency spectrum, as the breathing signal of a human.

To verify the performance of the proposed algorithm, we built a test-bed similar to the interior of an actual vehicle and defined eight cases consisting of no-moving-object, still or moving human, and moving inanimate object scenarios. For these cases, we used a commercial real-time DAQ module and a 2.45 GHz CW radar front-end module with antennas developed by Yeungnam University. We then acquired the raw data using this test-bed in order to extract the three features from the received radar signals.

The extracted features were used as input data for a BDT as the machine learning engine, and we verified the proposed algorithm through randomly repeated verification trials.

The results with the typical method using only vital signs show that the classification accuracy for a human was 70.7%. However, with the proposed human recognition scheme using motion and vital sign features, the classification accuracy was found to be 98.6%. That is, compared to the typical method, the performance of the proposed method is improved by approximately 28%.

Moreover, because the proposed algorithm has very low complexity, we can implement a passenger detection radar system with a simple structure.

In the future, using radar sensors with multiple receiving antennas, we will conduct research to determine the presence and status of occupants for each seat. In addition, we plan to employ a new vital sign detection algorithm to improve the classification error rate for humans. We will also verify the proposed algorithm with various human body types. Moreover, we will install the test-bed in an actual vehicle in order to verify the proposed recognition scheme more practically.

**Author Contributions:** E.H. handled the design and development of the proposed algorithm, contributed to the creation of the main ideas in the paper, and handled the writing of the manuscript. Y.-S.J. focused on the experimental task (i.e., setting up the system, the design and realization of the experiments). J.-R.Y. designed the radar front-end module and antenna, and J.-H.P. fabricated and tested the hardware of the radar sensor. All authors conducted data analysis and interpretation tasks together. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Acknowledgments:** This research was supported by the DGIST R&D Program funded by the Ministry of Science and ICT, Korea (No. 20-IT-02; Title: Development of core technologies for environmental sensing and software platform of future automotive).

**Conflicts of Interest:** The authors declare no conflict of interest.
