Article

Enhanced Classification of Human Fall and Sit Motions Using Ultra-Wideband Radar and Hidden Markov Models

by Thottempudi Pardhu 1, Vijay Kumar 2, Andreas Kanavos 3,*, Vassilis C. Gerogiannis 4 and Biswaranjan Acharya 5

1 Department of Electronics and Communications Engineering, BVRIT HYDERABAD College of Engineering for Women, Hyderabad 500090, Telangana, India
2 School of Electronics Engineering, Vellore Institute of Technology, Vellore 632014, Tamil Nadu, India
3 Department of Informatics, Ionian University, 49100 Corfu, Greece
4 Department of Digital Systems, University of Thessaly, 41500 Larissa, Greece
5 Department of Computer Engineering-AI and BDA, Marwadi University, Rajkot 360003, Gujarat, India
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(15), 2314; https://doi.org/10.3390/math12152314
Submission received: 11 June 2024 / Revised: 10 July 2024 / Accepted: 18 July 2024 / Published: 24 July 2024
(This article belongs to the Special Issue Advanced Research in Image Processing and Optimization Methods)

Abstract

In this study, we address the challenge of accurately classifying human movements in complex environments using sensor data. We analyze both video and radar data to tackle this problem. From video sequences, we extract temporal characteristics using techniques such as motion history images (MHI) and Hu moments, which capture the dynamic aspects of movement. Radar data are processed through principal component analysis (PCA) to identify unique detection signatures. We refine these features using k-means clustering and employ them to train hidden Markov models (HMMs). These models are tailored to distinguish between distinct movements, specifically focusing on differentiating sitting from falling motions. Our experimental findings reveal that integrating video-derived and radar-derived features significantly improves the accuracy of motion classification. Specifically, the combined approach enhanced the precision of detecting sitting motions by over 10% compared to using single-modality data. This integrated method not only boosts classification accuracy but also extends the practical applicability of motion detection systems in diverse real-world scenarios, such as healthcare monitoring and emergency response systems.

1. Introduction

Providing adequate healthcare for aging populations is a significant challenge facing societies worldwide. As life expectancies increase, so does the prevalence of age-related ailments, making falls a major public health concern. Falls are not only the leading cause of injury and hospitalization among the elderly but also contribute significantly to mortality rates, particularly for those suffering from chronic conditions such as cardiovascular disease, high blood pressure, diabetes, or stroke.
Ultra-wideband (UWB) radar has evolved into a powerful tool for non-intrusive health monitoring, surpassing traditional methods with its ability to penetrate obstacles and maintain privacy. Recent technological advances have greatly enhanced its precision in detecting and classifying nuanced human movements, particularly in distinguishing various types of falls—an essential feature for elderly care. Integration of machine learning techniques, such as hidden Markov models (HMMs), with UWB radar data has been shown to significantly enhance detection accuracy in complex environments. This study aims to delve deeper into this integration, refining the classification of human motions with a focus on critical distinctions between sitting and falling motions. By harnessing the improved sensitivity and specificity of UWB radar, this research seeks to elevate the reliability of health monitoring systems, potentially transforming preventive care practices for vulnerable populations.
Traditional monitoring methods for fall detection primarily rely on visual imaging technologies such as CCTV and webcam systems. While effective under certain conditions, these methods face significant challenges including occlusions caused by furniture and other obstructions, and variable lighting conditions that can drastically reduce their reliability. Furthermore, the use of cameras in private environments such as homes and healthcare facilities raises serious privacy concerns.
In response to these challenges, ultra-wideband (UWB) radar emerges as a compelling alternative [1]. Leveraging electromagnetic waves capable of penetrating various materials, UWB radar excels in detecting human movements without the limitations imposed by visual line-of-sight requirements [2]. Its ability to detect subtle movements through obstacles offers a significant advantage over traditional methods. However, despite these technological advancements, the extent to which UWB radar can improve the real-time monitoring of human activities, particularly in comparison to conventional vision-based systems, remains underexplored.
These technological advancements have not only improved the accuracy of motion detection but have also expanded the applicability of UWB radar systems in real-world healthcare settings, offering more reliable solutions for preventive monitoring and emergency response systems. The integration of these sophisticated technologies into everyday healthcare practices promises not only to enhance patient outcomes but also to reduce long-term healthcare costs by preventing falls and enabling early intervention.
This study aims to fill this gap by investigating the effectiveness of UWB radar for the classification and detection of human motions, with a specific focus on differentiating between sitting and falling motions. By integrating hidden Markov models (HMMs) with radar-generated data, we develop a sophisticated framework for enhancing motion classification. This integration not only improves accuracy but also enhances the robustness of real-time monitoring systems.
Our research contributes to the field of human motion detection through several key advancements:
  • Demonstrating that UWB radar technology can effectively detect and classify human motions with high accuracy, even through common obstructions found in residential settings.
  • Introducing a novel application of hidden Markov models (HMMs) that harnesses radar data to provide more accurate and reliable motion classification than traditional visual-based methods.
  • Providing a comparative analysis that illustrates the superiority of UWB radar in various environmental conditions, offering insights into its potential for wider adoption.
  • Highlighting the potential applications of this technology in developing non-intrusive, privacy-preserving health monitoring systems that are adaptable to a range of living environments without the need for invasive structural modifications.
The remainder of this paper is organized as follows: Section 2 reviews related work, providing a background on existing technologies and methods in the fields of motion detection and human activity recognition. Section 3 discusses the analysis of UWB radar signal characteristics for human motion detection, detailing the specific techniques and methodologies employed. Section 4 describes the process of extracting human features based on image processing, emphasizing the use of motion history images (MHI) and Hu moments. Section 5 delves into a detailed analysis of UWB radar data and its comparison with Hu moments, providing insights into the nature and effectiveness of the extracted features for human motion classification. Section 6 presents the experimental evaluation, where the effectiveness of the proposed methods is tested through a series of controlled experiments. Finally, Section 7 concludes the paper, summarizing the findings and discussing future work in the context of advancing UWB radar technology and machine learning models for improved motion classification.

2. Related Work

The measurement and analysis of human motion are becoming increasingly vital across a diverse range of applications, from healthcare and sports to security and robotics. These applications require sophisticated technologies that can integrate seamlessly into daily life and operate under various environmental conditions. Sensors that measure human motion, gait, and posture are essential tools in these fields, offering critical data that can guide technology development and implementation [3].
The limitations of traditional electro-optical techniques and video analysis are well-documented, primarily their confinement to controlled environments and significant privacy concerns [3]. Despite the widespread use of electrical sensors such as gyroscopes, accelerometers, and flexible angular sensors, which attempt to address these issues, they often fall short in uncontrolled environments. Moreover, electromagnetic tracking systems, while useful, still focus heavily on signal processing and feature extraction, which do not fully solve the challenges posed by environmental variability [4,5].
The advent of ultra-wideband (UWB) radar and radio technologies has been a game changer, especially noted in references [4,5,6,7,8,9,10,11,12]. UWB radar’s capability to penetrate through obstacles like walls and furniture has not only improved the feasibility of deploying these technologies in critical environments but also expanded their reliability and accuracy in real-time applications.
UWB radar has gained trust as a sensing modality for security surveillance, useful in applications such as motion detection [13,14] and people counting [7,8,11,12,13,15], where through-wall sensing is crucial [4,6,7,8,9,10,11,12]. The medical applications of UWB radars are extensively discussed in [16], highlighting their use in monitoring arterial pulsation, assessing cardiac motion, and analyzing medical images. UWB radar's high range resolution and penetration capabilities make it ideal for biomedical sensing applications, as further explored in [14,17,18,19,20,21,22,23,24].
Acknowledging the cost-effectiveness and clinical utility of these technologies in human activity recognition and quantification, the industry faces ongoing challenges related to the time inefficiency and high costs associated with their deployment and data interpretation [22,25]. However, advancements in radar signal feature extraction, human activity characteristics extraction, and machine learning have substantially improved the analysis of complex human motions [9,18,23,26,27,28,29].
Furthermore, effective UWB synthetic aperture radar (SAR) techniques are essential for accurately imaging and analyzing moving targets, as detailed in [4,10,11,28,30,31]. Additional medical applications of UWB radars include estimating filter transfer functions in the vocal tract [32], measuring arterial stiffness [33], and characterizing human arm muscles [34].
The central theme of this work focuses on the classification of human motion using ultra-wideband radar, addressing the crucial need for rapid fall detection—a leading cause of hospitalization among the elderly [35,36,37]. Various sensors, including inertial measurement units and video cameras [38,39,40], have been proposed to tackle this challenge, yet they often present limitations such as intrusiveness, fragility, and high user involvement.
Video-based methods have seen widespread use in human motion analysis but are limited by obstacles such as walls and furniture, alongside persistent privacy concerns. Kinect sensors, despite their cost-effectiveness, are vulnerable to interference from external infrared sources, which can drastically degrade the quality of recorded videos. Additionally, visual data are particularly susceptible to noisy environments [41]. In contrast, radar technology, with its non-intrusive illumination and insensitivity to lighting conditions, offers a robust alternative that ensures privacy and security [38,42]. Ultra-wideband (UWB) radars excel by providing detailed spatial distribution data essential for classifying human motion and detecting falls [43]. Radar also captures indirect but valuable information about the dynamics of object movement [44].
The integration of ultra-wideband (UWB) radar with advanced machine learning techniques for human activity recognition is gaining traction, as evidenced by recent research that underscores the potential of this technology in sensitive applications. The authors in [45] demonstrated the use of UWB radar mounted on mobile robots for monitoring elderly activities, utilizing long short-term memory (LSTM) networks to achieve high accuracy in detecting subtle human movements. This approach aligns with our work, where we leverage UWB radar coupled with hidden Markov models to enhance the precision and reliability of detecting specific human motions like sitting and falling. Similarly, in [46], a combination of UWB radar and random multimodal deep learning to classify human motions is applied; an approach that complements our methodology by illustrating the effectiveness of sophisticated machine learning models in interpreting radar data amidst various environmental conditions.
Furthermore, the expansion of UWB applications into areas such as image retrieval illustrates the versatility of UWB technologies when combined with deep learning techniques [47]. This study’s method of enhancing image retrieval accuracy by fusing handcrafted features with deep neural outputs provides a promising direction for our future research, particularly in refining feature extraction processes for UWB radar data. Ref. [48] provides a comprehensive survey of UWB methodologies for motion detection, identifying challenges and performance metrics that are directly applicable to our study. By addressing issues such as motion state overlaps and noise interference, we can further advance our understanding and implementation of UWB radar in real-world settings, potentially extending our research to more complex applications such as intelligent surveillance and advanced eldercare systems. These references collectively highlight the broad applicability and future potential of UWB radar technology in conjunction with machine learning, setting a foundation for our continued exploration and development in the field.
This section has examined the strengths and weaknesses of the aforementioned sensing modalities and their potential applications in fall detection. An RGB image captured by the Kinect sensor is employed to track human movement, while radar echo signals collected by the UWB radar offer a comparable dataset. A thorough comparative analysis of video and radar data reveals the efficacy of these technologies in motion classification and fall detection [49].

3. Analyzing UWB Radar Signal Characteristics for Human Motion Detection

The detection and classification of human motion through sensor data is a critical method in the field of human activity recognition. Predominantly, existing research has utilized radar signal micro-Doppler signatures to extract distinct human motion features, employing these signatures within the time–frequency domain for effective characterization [50]. In this work, we delve into a classification approach using UWB radar to discern between different human motions, assessing its effectiveness relative to image-based techniques. Figure 1 presents a process diagram illustrating the methodology for human motion detection based on both radar and imaging technologies.

3.1. Motions Corresponding to Single Person

The ability to differentiate between a fall and sitting or other motions is crucial in environments where fall detection is necessary. This section analyzes the unique signal characteristics that UWB radar captures for different motions. By examining the time–frequency representation of these signals, we can observe distinct patterns that differentiate falling motion, which typically presents a sudden change in signal strength and speed, from sitting, which shows a slower, more gradual adjustment. Subjects perform motions within the detection range of the radar sensor, allowing for the capture and analysis of specific echo patterns. Figure 2 illustrates the radar signals associated with falling and sitting motions, with fall motion characterized by a wider range extension up to approximately 3 m, while sitting motion typically shows a range of about 1 m.

3.2. Motions Corresponding to Multiple Persons

In scenarios with multiple individuals, distinguishing between human and non-human entities, such as pets, becomes important. Our experiments also explore how well UWB radar can differentiate between motions such as standing, sitting, and walking performed simultaneously by different subjects. We provide a deeper analysis of the spatial and temporal resolution advantages of UWB radar, which facilitates the distinction of overlapping signals from multiple sources. Figure 3 showcases the experimental setup for multi-person motion detection, while Figure 4 displays the radar data collected as subjects performed these activities.
Further investigations focus on differentiating between falling and walking motions under dynamic conditions where one subject falls while another walks past. This study highlights the challenge and the radar’s ability to separate these signals effectively, which is critical for reliable fall detection in real-world settings. Figure 5 and Figure 6 illustrate the experimental layout and the radar signal characteristics for these motions, highlighting the distinct trajectories and range differences between falling and walking.

3.3. Human Motion in a Multi-Radar Environment

To achieve comprehensive coverage and enhance detection accuracy, we employ multiple radar sensors strategically placed around the environment. This setup captures a 360-degree view of human motion, significantly improving the reliability of motion detection. We analyze how the integration of data from multiple radars can reduce ambiguity and improve the overall classification accuracy of different motion types. The first experiment uses two radars placed at opposite ends of the room, with their boresights perpendicular to each other, as shown in Figure 7. This arrangement captures the range variations in radar images for the same motion observed from different angles, offering insights into the spatial dynamics of detected motions.
Figure 8 and Figure 9 depict the radar signal images from two different experiments. These images demonstrate how changes in motion direction relative to radar positioning can affect the data collected, affirming that multiple radars can yield consistent and comparative data on human motion.

3.4. Proposed Methodology for Noise Analysis in UWB Radar Data

To address the critical challenge of noise in UWB radar data and its impact on motion classification accuracy, we propose a systematic approach to simulate, filter, and analyze noise:
  • Noise Simulation: We will introduce Gaussian noise into the radar signal data at various signal-to-noise ratios (SNRs) to realistically simulate different environmental noise conditions.
  • Noise Filtering: Wavelet-based denoising techniques will be applied to enhance the quality of the noisy radar data, aiming to improve the signal clarity essential for accurate motion detection.
  • Classification Performance Assessment: The performance of hidden Markov models (HMMs) will be evaluated on both the denoised and original noisy data to assess the effectiveness of the noise reduction techniques.
  • Statistical Analysis: Comprehensive statistical analyses will be conducted to quantify the impact of various noise levels on the classification metrics such as precision, recall, and F1-score, providing insights into the robustness of the radar system under different noise conditions.
This structured investigation will not only delineate the limitations of current UWB radar technology in noisy environments but also pave the way for developing more robust radar-based human motion detection systems applicable in diverse real-world settings.
Incorporating these methodological enhancements will significantly deepen the analytical rigor of our study and broaden our understanding of how noise influences the reliability of UWB radar-based motion classification systems.
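To make the first two steps of this plan concrete, the following minimal Python sketch injects Gaussian noise at a target SNR and applies wavelet soft-threshold denoising. It assumes one-dimensional radar scans and the PyWavelets package; the function names and the db4/universal-threshold choices are illustrative assumptions, not the exact configuration used in our experiments.

```python
import numpy as np
import pywt  # PyWavelets; an assumed dependency for the wavelet step

def add_gaussian_noise(signal, snr_db, rng=None):
    """Add white Gaussian noise so the result has the requested SNR in dB."""
    if rng is None:
        rng = np.random.default_rng(0)
    sig_power = np.mean(signal ** 2)
    noise_power = sig_power / (10.0 ** (snr_db / 10.0))
    return signal + rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)

def wavelet_denoise(noisy, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising with the universal threshold."""
    coeffs = pywt.wavedec(noisy, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
    thresh = sigma * np.sqrt(2.0 * np.log(noisy.size))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: noisy.size]
```

Sweeping `snr_db` over a range of values and comparing classifier metrics on the raw versus denoised output would then implement the assessment and statistical-analysis steps listed above.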

4. Extraction of Human Features Based on Image Processing

Analysis of motion in video sequences can be performed with great efficacy using the motion history image (MHI) method. This technique efficiently compresses the history of motion into a single image template, which is represented as an intensity map. In this intensity map, more recent motions are depicted with brighter pixel values. This distinct representation simplifies the task of spotting ongoing motion and predicting its trajectory, making it particularly valuable for applications that require the analysis of temporal aspects of motion.
The MHI technique is advantageous because it transforms complex, dynamic motion data into a single, easily interpretable image. This transformation allows for the rapid assessment of movement patterns over time, which is crucial in contexts such as surveillance, where quick detection of unusual activities is necessary, or in sports analytics, where movements are analyzed to enhance athletic performance.
Further explanation and mathematical representation of how MHI is constructed and used will follow, elucidating its implementation and practical applications in various fields.

4.1. MHI Based Human Motion Feature Extraction

Among the many applications of the motion history image (MHI) method are recognizing human actions, analyzing gaits, and tracking objects in video sequences. Motion sequences are effectively described in terms of their shape and spatial distribution, which proves invaluable for applications such as video surveillance and monitoring systems. One of the main advantages of MHI is its ability to compress video data into a single image frame. This compression allows for a significant reduction in the storage and computational requirements needed for motion analysis, as a smaller number of MHIs can represent the timescale of human motions effectively.
Overall, the motion history image technique is a powerful tool for understanding motion in video sequences. Its applications range from enhancing security in surveillance systems to creating engaging interactions in entertainment technologies.
To illustrate the impact of varying the parameter $\tau$, which determines the temporal depth of motion captured in an MHI, Figure 10 shows a series of MHIs computed with different $\tau$ values. These images highlight how increasing $\tau$ extends the duration during which motion influences the resulting MHI, thereby capturing more prolonged motion activities.
The mathematical formulation of the MHI, $X_\tau(u,v,t)$, is defined as

$$X_\tau(u,v,t) = \begin{cases} \tau, & \text{if } A(u,v,t) = 1,\\ \max\bigl(0,\; X_\tau(u,v,t-1) - \delta\bigr), & \text{otherwise,} \end{cases}$$

where the position is described by coordinates $(u,v)$ and time by $t$. The function $A(u,v,t)$ is a binary update indicating the presence of motion at that pixel in the current video frame. The parameter $\tau$ determines the temporal extent of the captured movement, while $\delta$ is the decay factor that reduces values over time to fade older movements. This method allows for an efficient and scalable approach to motion analysis, adaptable to various application requirements.
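As an illustration, the following minimal numpy sketch performs one step of this recursion. The frame-differencing rule used to obtain the binary update $A(u,v,t)$ and the threshold value are assumptions for demonstration; in practice, any foreground-detection method can supply the binary mask.

```python
import numpy as np

def update_mhi(mhi, mask, tau=30.0, delta=1.0):
    """One step of the MHI recursion: stamp fresh motion with tau, fade the rest."""
    decayed = np.maximum(0.0, mhi - delta)   # max(0, X_tau(u,v,t-1) - delta)
    return np.where(mask, tau, decayed)      # tau wherever A(u,v,t) = 1

def frame_diff_mask(frame, prev_frame, xi=15):
    """Illustrative binary update A(u,v,t) from simple frame differencing."""
    return np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > xi
```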

4.2. Human Motion Features Extraction Based on Hu Moments

Image pattern descriptions have made extensive use of moments, as cited in the literature [51]. We compute eight statistical descriptors from the Hu moments for each MHI frame $X_\tau(u,v,k)$, where $k$ is the index of the MHI frame. This computation yields scale-, translation-, and rotation-invariant features from the segmented MHIs.
For a two-dimensional (2D) image function $f(u,v)$, the $(i+j)$-th order moments are defined as

$$m_{ij} = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} u^i v^j f(u,v)\, du\, dv, \qquad i,j = 0,1,2,\ldots$$

Moments of all orders exist if the image function $f(u,v)$ is piecewise continuous and bounded, and the sequence of moments $m_{ij}$ then uniquely determines $f(u,v)$. Note, however, that these raw moments change when $f(u,v)$ undergoes rotation or scaling.
Consequently, features that are robust against changes in position are derived from the central moments:

$$\mu_{ij} = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} (u-\bar{u})^i (v-\bar{v})^j f(u,v)\, du\, dv, \qquad i,j = 0,1,2,\ldots$$

where $\bar{u} = m_{10}/m_{00}$ and $\bar{v} = m_{01}/m_{00}$; invariance to orientation and size is then obtained through the normalized combinations below.
We focus on a set of eight invariant moments up to the third order, with $i,j = 0,\ldots,3$:

$$h_1 = \beta_{20} + \beta_{02}$$

$$h_2 = (\beta_{20} - \beta_{02})^2 + 4\beta_{11}^2$$

$$h_3 = (\beta_{30} - 3\beta_{12})^2 + (3\beta_{21} - \beta_{03})^2$$

$$h_4 = (\beta_{30} + \beta_{12})^2 + (\beta_{21} + \beta_{03})^2$$

$$h_5 = (\beta_{30} - 3\beta_{12})(\beta_{30} + \beta_{12})\bigl[(\beta_{30} + \beta_{12})^2 - 3(\beta_{21} + \beta_{03})^2\bigr] + (3\beta_{21} - \beta_{03})(\beta_{21} + \beta_{03})\bigl[3(\beta_{30} + \beta_{12})^2 - (\beta_{21} + \beta_{03})^2\bigr]$$

$$h_6 = (\beta_{20} - \beta_{02})\bigl[(\beta_{30} + \beta_{12})^2 - (\beta_{21} + \beta_{03})^2\bigr] + 4\beta_{11}(\beta_{30} + \beta_{12})(\beta_{21} + \beta_{03})$$

$$h_7 = (3\beta_{21} - \beta_{03})(\beta_{30} + \beta_{12})\bigl[(\beta_{30} + \beta_{12})^2 - 3(\beta_{21} + \beta_{03})^2\bigr] + (3\beta_{12} - \beta_{30})(\beta_{21} + \beta_{03})\bigl[3(\beta_{30} + \beta_{12})^2 - (\beta_{21} + \beta_{03})^2\bigr]$$

$$h_8 = \beta_{11}\bigl[(\beta_{30} + \beta_{12})^2 - (\beta_{03} + \beta_{21})^2\bigr] - (\beta_{20} - \beta_{02})(\beta_{30} + \beta_{12})(\beta_{21} + \beta_{03})$$

where $\beta_{ij} = \mu_{ij}/\mu_{00}^{\,1+(i+j)/2}$ are the scale-normalized central moments.
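A direct numpy implementation of these descriptors is sketched below for a single MHI frame; it mirrors the formulas above with discrete sums replacing the integrals and is meant as an illustration rather than a tuned implementation. In practice, a log transform such as $-\operatorname{sign}(h)\log|h|$ is often applied afterwards to compress the dynamic range of the invariants.

```python
import numpy as np

def hu_moments(img):
    """Eight invariants h1..h8 from a 2-D intensity image (e.g., an MHI frame)."""
    img = img.astype(np.float64)
    u, v = np.meshgrid(np.arange(img.shape[1]), np.arange(img.shape[0]))
    m = lambda i, j: np.sum((u ** i) * (v ** j) * img)       # raw moments m_ij
    ub, vb = m(1, 0) / m(0, 0), m(0, 1) / m(0, 0)            # centroid
    mu = lambda i, j: np.sum(((u - ub) ** i) * ((v - vb) ** j) * img)
    b = lambda i, j: mu(i, j) / (mu(0, 0) ** (1 + (i + j) / 2.0))  # beta_ij
    b20, b02, b11 = b(2, 0), b(0, 2), b(1, 1)
    b30, b03, b21, b12 = b(3, 0), b(0, 3), b(2, 1), b(1, 2)
    h1 = b20 + b02
    h2 = (b20 - b02) ** 2 + 4 * b11 ** 2
    h3 = (b30 - 3 * b12) ** 2 + (3 * b21 - b03) ** 2
    h4 = (b30 + b12) ** 2 + (b21 + b03) ** 2
    h5 = ((b30 - 3 * b12) * (b30 + b12) * ((b30 + b12) ** 2 - 3 * (b21 + b03) ** 2)
          + (3 * b21 - b03) * (b21 + b03) * (3 * (b30 + b12) ** 2 - (b21 + b03) ** 2))
    h6 = ((b20 - b02) * ((b30 + b12) ** 2 - (b21 + b03) ** 2)
          + 4 * b11 * (b30 + b12) * (b21 + b03))
    h7 = ((3 * b21 - b03) * (b30 + b12) * ((b30 + b12) ** 2 - 3 * (b21 + b03) ** 2)
          + (3 * b12 - b30) * (b21 + b03) * (3 * (b30 + b12) ** 2 - (b21 + b03) ** 2))
    h8 = (b11 * ((b30 + b12) ** 2 - (b03 + b21) ** 2)
          - (b20 - b02) * (b30 + b12) * (b21 + b03))
    return np.array([h1, h2, h3, h4, h5, h6, h7, h8])
```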

4.3. Extraction of Human Motion Features Based on UWB Radar Data

The analysis of human motion using UWB radar data begins with a crucial preprocessing step: filtering the radar signal. This step enhances the signal quality and isolates the motion-related information from noise. The filtered radar signal is mathematically represented as
$$\mathbf{r}_k = [\, s_{k,2} - s_{k,1},\; \ldots,\; s_{k,L} - s_{k,L-1} \,]^{T}$$
where $s_{k,l}$ denotes the $l$-th range sample of the $k$-th radar scan. This vector represents the differential signal, which accentuates dynamic changes in the radar echoes, facilitating better motion recognition.
To facilitate comprehensive analysis, these individual vectors are assembled into a matrix:
$$\mathbf{R} = [\, \mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_K \,]$$
This matrix R forms the basis for further processing and analysis steps.
Identifying different types of motion, such as falls or sitting actions, can be challenging due to the subtlety of differences in radar signatures. To enhance the distinction between these motions, a thresholding method is applied to the radar data, as visualized in Figure 11. This method suppresses less significant variations in the signal, focusing analysis on substantial changes likely indicative of true motion events.
The radar matrix is transformed into a 2D logical matrix using the threshold defined as
$$R_{k,l} = \begin{cases} 0, & \text{if } R_{k,l} \le Th_k,\\ 1, & \text{otherwise,} \end{cases}$$

for $k = 1,2,\ldots,K$ and $l = 1,2,\ldots,L-1$.
The threshold T h k is determined by the root mean square of the signal intensities to filter out low-amplitude noise:
$$Th_k = \sqrt{\frac{1}{L-1}\sum_{l=1}^{L-1} R_{k,l}^2}$$
This thresholding effectively reduces noise, allowing clearer identification of motion patterns.
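The differencing and thresholding steps can be expressed compactly; the sketch below assumes the raw scans are stacked in a $K \times L$ array and follows the formulas above. Whether the comparison uses the signed difference, as written, or its magnitude is an implementation choice.

```python
import numpy as np

def binarize_scans(S):
    """S: (K, L) array of raw scans -> (K, L-1) logical matrix per the rule above."""
    R = np.diff(S, axis=1)                     # differential signal r_k per scan
    Th = np.sqrt(np.mean(R ** 2, axis=1))      # per-scan RMS threshold Th_k
    return (R > Th[:, None]).astype(np.uint8)  # 1 where the change exceeds Th_k
```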
For advanced analysis and feature extraction, principal component analysis (PCA) is employed to compress the radar data matrix R without losing critical motion information. PCA achieves this by transforming the data into a new coordinate system where the greatest variances lie on the first few principal axes:
$$\mathbf{R}_{(n\times K)} = \mathbf{A}_{(n\times n)}\, \boldsymbol{\Lambda}_{(n\times K)}\, \mathbf{B}_{(K\times K)}^{H}$$

where $\mathbf{A}$ and $\mathbf{B}$ are matrices containing the eigenvectors of the covariance matrices $\mathbf{R}\mathbf{R}^{H}$ and $\mathbf{R}^{H}\mathbf{R}$, respectively. This transformation reduces the dimensionality of the radar data, focusing on the most informative aspects of the motion signals.
The selection of principal components is based on their ability to explain a significant portion of the variance in the data:
$$\frac{\sum_{k=1}^{i} \lambda_k}{\sum_{k=1}^{K} \lambda_k} \ge 85\%, \qquad i = 1,2,\ldots,K,$$

where $\lambda_k$ is the variance explained by the $k$-th principal component and the smallest such $i$ is selected.
This criterion ensures that the selected components retain at least 85% of the total motion information.
K-means clustering is then applied to these principal components to categorize motion types into clusters, enhancing the classification process. Figure 12 shows scatter plots generated from PCA-processed radar data, demonstrating clear separations between different motion types.
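A minimal sketch of this stage, assuming scikit-learn and an arbitrary codebook size of eight clusters (the actual cluster count is a design parameter): `PCA(n_components=0.85)` retains the smallest number of components whose cumulative explained variance meets the 85% criterion, and k-means then maps each projected scan to a discrete symbol for the HMMs.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# X: (n_scans, n_features) matrix derived from the logical radar matrix R.
rng = np.random.default_rng(0)
X = rng.random((200, 64))  # placeholder data; substitute the real radar features

# Retain the smallest number of principal components whose cumulative
# explained variance reaches the 85% criterion above.
pca = PCA(n_components=0.85, svd_solver="full")
Z = pca.fit_transform(X)

# Vector quantization: map each projected scan to one of 8 discrete symbols,
# forming the observation alphabet consumed by the HMMs.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
symbols = kmeans.fit_predict(Z)
```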
Finally, hidden Markov models (HMMs) are utilized to recognize and classify temporal patterns in motion data. HMMs provide a probabilistic framework that models the sequences of observed events, linked through underlying hidden states:
$$\lambda = \{X, Y, \pi\}$$
Given the current state $p$ at time $t$, the probability of transitioning to state $q$ at time $t+1$ is encoded by $X$, the probability of observing a symbol in state $q$ is encoded by $Y$, and $\pi$ is the initial state distribution. Two HMMs, designated the "falling" and "sitting" models $\lambda_i$, classify a test sequence $\theta$ into model $\lambda_{\hat{i}}$, with $\hat{i} = 1$ for falling and $\hat{i} = 2$ for sitting, where

$$\hat{i} = \arg\max_{i \in \{1,2\}} P(\theta \mid \lambda_i)$$

In this maximum-likelihood classification, $P(\theta \mid \lambda_i)$ is the likelihood of the observation sequence $\theta$ under each motion model.
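A sketch of this training and decision rule, assuming the hmmlearn package (whose `CategoricalHMM` models discrete observation symbols) and an assumed number of hidden states:

```python
import numpy as np
from hmmlearn import hmm  # assumed dependency for discrete-observation HMMs

def train_hmm(symbol_seqs, n_states=4):
    """Fit one HMM lambda = {X, Y, pi} on the k-means symbol sequences of one class."""
    X = np.concatenate(symbol_seqs).reshape(-1, 1)   # stacked observations
    lengths = [len(s) for s in symbol_seqs]          # per-sequence boundaries
    model = hmm.CategoricalHMM(n_components=n_states, n_iter=100, random_state=0)
    model.fit(X, lengths)
    return model

def classify(theta, models):
    """Maximum-likelihood decision: argmax_i log P(theta | lambda_i)."""
    scores = [m.score(np.asarray(theta).reshape(-1, 1)) for m in models]
    return int(np.argmax(scores))  # index 0 = "falling" model, 1 = "sitting" model
```

Here `models = [train_hmm(falling_seqs), train_hmm(sitting_seqs)]`; the decision is taken on log-likelihoods, which is equivalent to the arg-max rule above since the logarithm is monotonic.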
While deep learning offers considerable advantages for large-scale and complex datasets, our decision to employ hidden Markov models was driven by several factors. Primarily, the limited availability of extensive labeled radar data necessitated a model that could operate effectively with smaller datasets. Furthermore, HMMs provide crucial interpretability for understanding motion patterns in our target scenarios—essential in healthcare applications where decisions need to be transparent and justifiable. Nonetheless, the potential of deep learning to enhance initial feature extraction remains promising, and as such, is considered a valuable direction for future research to expand upon the current findings.
During the preprocessing of radar data, basic noise reduction techniques were employed to mitigate the impact of environmental and electronic noise inherent to UWB systems. This preliminary noise handling was crucial for maintaining the integrity of motion data used for training our models.
In this study, principal component analysis (PCA) was employed to reduce the dimensionality of radar data before classification. PCA is particularly advantageous for its ability to transform the original variables into a new set of variables. These are linear combinations of the original ones, orthogonal to each other, and ordered such that the first few retain most of the variation present in all of the original variables.
The decision to use standard PCA over Kernel PCA was influenced by the primarily linear nature of our radar data and the specific requirements of our analysis. While Kernel PCA is typically favored for datasets with nonlinear relationships because it maps data into a higher-dimensional space where linear separation becomes feasible, our preliminary evaluations indicated that the linear assumptions of standard PCA were adequate for the initial abstraction levels necessary for our data. This primarily involved identifying linearly separable features within the motion patterns, suitable for our classification tasks.
Additionally, the computational efficiency of standard PCA is considerably higher than that of Kernel PCA. Considering the extensive volume of data and the necessity for real-time processing in our application, standard PCA provided an optimal balance between computational simplicity and the capability to extract meaningful features from the radar signals. Future work could explore the application of Kernel PCA to determine if the nonlinear mapping offers substantial improvements in classification accuracy in scenarios involving more complex motion dynamics.

5. Detailed Analysis of UWB Radar Data and Feature Comparison

The following section provides a comprehensive examination of the ultra-wideband (UWB) radar data utilized in this study and contrasts them with traditional image processing metrics such as Hu moments. This analysis aims to underscore the distinctive advantages and characteristics of UWB radar for human motion detection and highlight the integration of these data with image-based features to improve motion classification accuracy. We delve into the technical specifics of UWB radar operation, the nature of the data it captures, and the rationale behind choosing UWB radar over other sensing technologies. Additionally, we discuss the comparative benefits of UWB radar features and Hu moments in various application scenarios, offering insights into their combined utility in complex environments.

5.1. Ultra-Wideband (UWB) Radar Data Analysis

Ultra-wideband (UWB) radar systems are pivotal in capturing high-resolution data, which is crucial for accurately distinguishing complex human motions such as falls and sitting actions. By emitting short-duration pulses, UWB radar provides detailed insights into target range, velocity, and movement dynamics, essential for precise motion detection. This section discusses the technical specifics of UWB radar systems, including their operational principles, the nature of the emitted pulses, and the data acquisition process that underpins the subsequent analysis.

5.2. Nature of Extracted Features

The features extracted from UWB radar data are numerical and encompass various aspects including signal amplitude, range profiles, Doppler shifts, and micro-Doppler signatures. These features are derived from raw time-domain signals and are transformed into quantitative data points that effectively represent the dynamics of the detected motions:
  • Signal Amplitude and Range Profiles: These features indicate the strength and distance of reflected signals, providing information about the proximity and orientation of the target relative to the radar sensor. They are critical for establishing the initial detection and localization of subjects within the monitored environment.
  • Doppler Shifts: These measure changes in frequency due to the object’s movement towards or away from the radar. They are crucial for detecting and classifying motion, allowing for the differentiation between types of movements based on velocity and direction.
  • Micro-Doppler Signatures: These are variations in the Doppler frequency resulting from micro-motions of human limbs and other body parts. They provide detailed information on the nature of the movement, such as the oscillatory patterns of arms and legs, which are especially useful in distinguishing between different types of human activities.
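For illustration, micro-Doppler content is typically exposed with a short-time Fourier transform over slow time. The sketch below assumes scipy and uses a synthetic sinusoidally modulated return in place of real radar data; the pulse repetition frequency and window sizes are illustrative choices.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 500.0                          # assumed pulse repetition frequency, Hz
t = np.arange(0, 4, 1 / fs)
# Toy return: a 20 Hz bulk Doppler line with an oscillatory limb-like modulation.
x = np.cos(2 * np.pi * (20 * t + 10 * np.sin(2 * np.pi * 0.8 * t)))

# Short-time Fourier analysis reveals the time-varying Doppler content;
# limb micro-motions appear as oscillatory sidebands around the torso return.
f, tt, Sxx = spectrogram(x, fs=fs, nperseg=128, noverlap=96)
micro_doppler_db = 10 * np.log10(Sxx + 1e-12)
```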

5.3. Comparison with Hu Moments

Hu moments are numerical descriptors used in image analysis, optimized to provide a compact representation of an image’s shape. They are particularly useful for static image analysis and object recognition. This section contrasts the utility and applicability of UWB radar features with Hu moments, focusing on their distinct advantages in motion detection and activity recognition:
  • Range of Values: While Hu moments are typically normalized and vary within a confined numerical range, UWB radar features can span a wide range of values depending on the strength of the radar returns and the dynamics of the observed scene. This broader spectrum allows for a richer representation of motion dynamics.
  • Applicability: Whereas Hu moments are ideal for analyzing static shapes in images, UWB radar features excel at capturing dynamic and temporal changes. This makes them more suitable for applications requiring real-time motion detection and activity recognition, particularly in environments with occlusions and variable conditions.

5.4. Integrating Radar and Image Features

The integration of UWB radar data with Hu moments represents a hybrid approach that combines spatial information from images with dynamic and temporal data from radar. This section explores the synergies achieved by this integration, discussing how it enhances the accuracy and reliability of motion classification systems. Particularly in complex and occluded environments, this method offers a comprehensive solution that leverages the strengths of both modalities to provide enhanced detection capabilities.

6. Experimental Evaluation

To validate the effectiveness of the proposed methods, a comprehensive experimental setup was implemented. Data collection involved recording RGB video footage using a Kinect sensor, complemented by radar reflections captured using a UWB radar. To ensure the accuracy and comparability of the data, simultaneous and synchronized recordings were made from both the video cameras and the radar systems.
The experiments were designed to mimic real-world scenarios that could benefit from enhanced motion detection. Subjects were instructed to perform a series of motions directed towards both the Kinect camera and the radar. This setup was chosen to ensure that the sensors captured a wide range of motion dynamics, enhancing the robustness and applicability of the study.
A total of 13 individuals participated in the experiments, with 7 people performing falling motions and 6 executing sitting motions. This distribution was intended to balance the dataset across the two types of motions studied. From these activities, the training dataset was compiled, consisting of 49 instances of falling and 47 instances of sitting. These instances were used to construct two distinct motion models.
The data collected were then utilized to train and refine the motion detection models. The models were evaluated based on their ability to accurately classify and predict the two types of motions under controlled experimental conditions. This approach not only tested the efficacy of the sensor technology but also the effectiveness of the analytical methods employed in processing and interpreting the sensor data.
This experimental evaluation serves not only as a proof of concept for the proposed methods but also sets the foundation for further research into sensor-based motion detection and analysis. By systematically analyzing the performance of the models in a controlled environment, the study provides valuable insights into their potential real-world applications and limitations.

6.1. Classification of Motions without Cross-Validation

To assess the performance of the trained hidden Markov models (HMMs) and evaluate the quality of these models, additional test data were utilized. These data included a selection of 27 falling motions and 33 sitting motions, from a total of 60 recorded instances. The breakdown of data used for training and testing the models is detailed in Table 1.
The classification outcomes based on the video-based approach are detailed in Table 2. This approach demonstrated high effectiveness in recognizing fall motions, correctly identifying 26 out of 27 cases, which translates to an accuracy rate of approximately 96%. This high level of accuracy underscores the system’s capability to detect falls—a critical feature for applications such as elderly care where timely detection can prevent serious injuries.
However, the system’s performance in classifying sitting motions was less robust, with 25 out of 33 motions correctly identified, resulting in a lower accuracy rate of about 75.76%. This variance in classification success rates between different motion types highlights several challenges:
  • Occlusions: Unlike falls, sitting motions can often be subtler and may occur behind obstacles like furniture, leading to potential occlusions that impede accurate detection.
  • Sensor Sensitivity: The sensors deployed may exhibit lower sensitivity to the less dynamic and more gradual nature of sitting compared to the abrupt dynamics typical of falls.
  • Algorithmic Limitations: The current algorithms might be optimized for detecting sudden changes characteristic of falls, at the expense of capturing slower, more nuanced movements like sitting.
Given these findings, there is a clear need for further algorithmic refinement to enhance the detection and classification accuracy of sitting motions. Strategies could include advancing the sensitivity of motion detection algorithms and incorporating more sophisticated machine learning techniques that can differentiate between diverse motion types more effectively.
Additionally, the integration of multi-modal sensor data may mitigate the limitations posed by environmental occlusions, thereby improving the system’s overall robustness and reliability. These improvements will be crucial for expanding the technology’s application in real-world settings, ensuring it can provide reliable safety assurances across a variety of environments.
Table 3 presents the classification outcomes using the UWB radar approach. This method not only correctly classified all 27 falling motions but also improved the recognition of sitting motions, correctly identifying 29 out of 33 instances.
The higher accuracy observed with the radar-based method, particularly in identifying sitting motions (with an 87.88% detection rate compared to 75.76% for the video-based approach), suggests that radar data may be less susceptible to the types of visual occlusions or background variability that can affect video sensors. This robustness against environmental factors could make UWB radar a more reliable choice in environments where visual cues are unreliable or obscured.
These results underscore the potential of UWB radar technology in enhancing motion detection systems. By providing more precise recognition capabilities, UWB radar may offer substantial improvements over traditional video-based detection systems, especially in complex environments where accuracy and reliability are critical.

6.2. Results with 10-Fold Cross Validation

Cross-validation has been implemented in the training and testing processes of the classification models to verify their reliability. A randomized 10-fold data partitioning scheme was used to conduct ten independent learning experiments to assess recognition accuracy. Each experiment consisted of nine training folds and one test fold, ensuring a robust evaluation by minimizing biases related to the order of data or specific data subsets.
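A sketch of this protocol using scikit-learn's `KFold`; the `fit_fn` and `score_fn` arguments are placeholders standing in for the HMM training and maximum-likelihood classification routines described in Section 4.3.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(sequences, labels, fit_fn, score_fn, n_splits=10, seed=0):
    """Randomized 10-fold evaluation: nine folds train, one fold tests, repeated."""
    idx = np.arange(len(sequences))
    accs = []
    for tr, te in KFold(n_splits=n_splits, shuffle=True, random_state=seed).split(idx):
        models = fit_fn([sequences[i] for i in tr], [labels[i] for i in tr])
        preds = [score_fn(models, sequences[i]) for i in te]
        accs.append(np.mean([p == labels[i] for p, i in zip(preds, te)]))
    return float(np.mean(accs)), float(np.std(accs))
```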

6.2.1. Video-Based Approach

The classification results of the video-based approach using 10-fold cross-validation are summarized in Table 4. This approach shows a variation in recognition rates for sitting and falling motions, highlighting the challenges in distinguishing complex human activities consistently across different test scenarios.
The fluctuation in recognition rates may be attributed to factors such as the variability in the background of the video, lighting conditions, or the movement dynamics of different subjects. These factors can significantly affect the visual clarity and hence the performance of video-based recognition systems.

6.2.2. Radar-Based Approach

The radar-based approach, on the other hand, demonstrates more consistent and higher accuracy in the classification results, as shown in Table 5. Radar technology's insensitivity to poor lighting and its ability to operate in visually cluttered environments offer a significant advantage in detecting motions reliably.
The average accuracy from Table 5 demonstrates an improvement, with falling motion detection reaching up to 95.48% and sitting motion classification improving from 87.88% to 88.80%. These results underline the efficacy of radar technology in accurately detecting human motions, potentially offering substantial improvements over traditional video-based systems.
The higher success rate of radar over video in detecting fall motions suggests that integrating radar data could substantially enhance the reliability of systems designed for critical applications such as elderly care or security surveillance. Furthermore, the consistent performance across different test folds indicates that radar-based models are not only accurate but also robust against variations in test conditions, making them suitable for deployment in diverse environments.
These findings advocate for a more integrated approach where radar could be used to complement video surveillance systems, potentially leading to systems that leverage the strengths of both technologies to provide more accurate and reliable motion detection.

6.3. Discussion

The experimental evaluations presented in the preceding sections provide a robust foundation for assessing the efficacy of the proposed motion detection models using both video and UWB radar technologies. This discussion synthesizes the findings from the classification of motions without cross-validation and the results obtained through the rigorous 10-fold cross-validation process.

6.3.1. Comparative Analysis of Video and Radar Technologies

The initial classification results without cross-validation demonstrated the potential of both video and radar-based approaches in accurately detecting human motions. The radar-based method showed particularly promising results, achieving higher accuracy rates than the video-based approach. This trend was consistently observed across various experimental setups, suggesting that radar technology’s ability to penetrate through occlusive environments provides a significant advantage in motion detection tasks.
In the non-cross-validated experiments, the radar-based approach demonstrated an exceptional ability to recognize falling motions with perfect accuracy (100%), and a notably high recognition rate for sitting motions (87.88%). Conversely, the video-based method, while effective, exhibited lower accuracy in detecting sitting motions (75.76%), a result possibly influenced by factors such as lighting, obstructions, and camera angle.

6.3.2. Insights from 10-Fold Cross-Validation

The implementation of 10-fold cross-validation introduced a more rigorous testing environment, which helped confirm the reliability and generalizability of the findings. Cross-validation results further underscored the superior performance of radar technology in motion classification tasks. Notably, the radar-based approach maintained high accuracy levels across all groups, with an impressive average recognition rate for falling motions at 95.48% and for sitting at 88.80%.
The video-based approach, while showing some variability across different folds, still provided valuable insights into the conditions under which video surveillance might struggle or excel. The average accuracy rates of 84.39% for sitting and 90.66% for falling suggest that while video technology is capable, it may require more controlled conditions or additional processing to achieve the levels of accuracy presented by radar technology.

6.4. Innovations of the Study

This study introduces several groundbreaking approaches to enhancing human motion classification using ultra-wideband (UWB) radar integrated with hidden Markov models (HMMs), contributing significantly to the field of human activity recognition (HAR). The principal innovations of this article include the following:
  • Integration of UWB Radar and HMMs: Our research uniquely combines UWB radar data with hidden Markov models to establish a robust framework for accurately classifying complex human motions such as sitting and falling. This integration exploits UWB radar’s capability to penetrate obstacles and HMMs’ dynamic modeling features to markedly enhance detection accuracy.
  • Advanced Feature Extraction Techniques: We implement innovative feature extraction by applying principal component analysis (PCA) and k-means clustering to radar data. This approach refines features critical for detecting subtle human movements, vital for applications in elderly care and emergency response systems.
  • Comparative Analysis for Real-World Application: Our research includes an extensive comparative analysis with conventional video-based systems under varying environmental conditions. The results affirm the superior performance of our methodology in terms of both accuracy and reliability, showcasing its practical application in environments where privacy and interference are major concerns.
  • Non-Intrusive and Privacy-Preserving Technology: Addressing the significant privacy concerns associated with monitoring and surveillance systems, our use of UWB radar ensures a completely non-intrusive monitoring process that safeguards individual privacy. This innovation is crucial in areas where ethical considerations are paramount.
  • Practical Implications and Scalability: The study also highlights the scalability of our proposed method across diverse settings, demonstrating its adaptability without extensive modifications to existing infrastructure. This versatility extends its applicability beyond healthcare to fields such as security, sports, and robotics.
These innovations not only push the boundaries of existing technology but also synergize methodologies in novel ways, marking a significant advancement in the accurate and ethical monitoring of human activities. The contributions of this research are poised to redefine industry standards, paving the way for safer, more efficient, and respectful uses of technology in sensitive environments.

7. Conclusions and Future Work

This study investigated the efficacy of camera and UWB radar-based sensing modalities for categorizing and recognizing human motion. The research employed motion history image (MHI) and Hu moment techniques to extract information from RGB images, while features from radar data were extracted using principal component analysis (PCA) and motion filtering. The dimensionality of the radar data was reduced to enhance the efficiency of the subsequent analysis. Additionally, vector quantization was performed using the k-means clustering algorithm. Two hidden Markov models (HMMs) were trained using both vision and radar data to recognize two distinct types of motions: sitting and falling.
The classification results clearly demonstrated the superiority of the radar-based method, which achieved a 95.48% recognition rate for fall scenarios and an 88.80% rate for sitting scenarios. This significant contrast in performance convincingly underscores the advantages of using UWB radar for human motion classification over traditional camera-based systems. The robustness of the radar-based approach is particularly evident in its ability to accurately detect motions even in challenging environmental conditions where visual systems might be compromised by poor lighting or occlusions.
Further validation of the methodology was provided through a rigorous 10-fold cross-validation process, which confirmed the reliability and generalizability of the models across multiple subsets of data. This validation process not only enhanced the credibility of the classification results but also demonstrated the robustness of the feature extraction and classification techniques employed.
Looking forward, the promising results of this study suggest several avenues for future research. One potential area is the exploration of hybrid systems that integrate both radar and video data to leverage the strengths of each modality. Such systems could potentially provide even higher accuracy and reliability, particularly in complex environments where single-modality systems might falter. Additionally, the application of advanced machine learning algorithms could further refine the classification processes, enhancing the ability to distinguish between more nuanced variations of human motion.
Moreover, expanding the scope of motion types and environmental conditions tested could provide a more comprehensive understanding of the practical limitations and capabilities of the proposed methodologies. Future studies could also focus on real-time processing challenges, aiming to reduce latency and increase the operational efficiency of motion detection systems for applications such as interactive systems, healthcare monitoring, and automated surveillance.
One critical area for future research is the systematic investigation of the impact of radar configuration on detection performance. While our current study provides a robust foundation for using UWB radar in human motion detection, the theoretical considerations suggest that the strategic placement and number of radars could significantly influence the system’s effectiveness. Future experiments should aim to empirically validate these theories by testing various configurations in controlled environments. This research will help optimize radar setups for enhanced accuracy and reliability in complex monitoring scenarios, ensuring the technology can be effectively adapted to diverse operational conditions.
Recognizing the significant impact of noise on radar data accuracy, future research will focus on systematically evaluating how different noise levels influence the classification of human motions. We plan to introduce synthetic noise into our radar datasets using Gaussian noise models to simulate real-world interference conditions. This approach will allow us to test the resilience of our hidden Markov models under various noisy scenarios. Additionally, we will explore advanced noise filtering techniques, such as wavelet transforms, to improve the signal-to-noise ratio while preserving crucial features for motion detection. By conducting these targeted experiments, our goal is to enhance the robustness and reliability of UWB radar-based systems in noisy environments, ultimately contributing to the development of more effective and dependable motion detection solutions for healthcare and other sensitive applications.
Ultimately, the integration of these technologies into practical applications will require addressing both technical challenges and considerations related to privacy and ethical use, particularly when deploying these systems in sensitive environments. Continuing to refine and adapt these technologies in alignment with ethical standards will be crucial as they become more integrated into everyday life.

Author Contributions

Conceptualization, T.P., V.K., A.K., V.C.G. and B.A.; Methodology, T.P., V.K., A.K., V.C.G. and B.A.; Writing—original draft, T.P., V.K., A.K., V.C.G. and B.A.; Writing—review & editing, T.P., V.K., A.K., V.C.G. and B.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are available in a publicly accessible repository.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Sahinoglu, Z.; Gezici, S.; Güvenc, I. Ultra-Wideband Positioning Systems: Theoretical Limits, Ranging Algorithms, and Protocols; Cambridge University Press: Cambridge, UK, 2008.
2. Yarovoy, A.G.; Ligthart, L.P.; Matuzas, J.; Levitas, B. UWB Radar for Human Being Detection. IEEE Aerosp. Electron. Syst. Mag. 2006, 21, 10–14.
3. Wong, W.Y.; Wong, M.S.; Lo, K.H. Clinical Applications of Sensors for Human Posture and Movement Analysis: A Review. Prosthetics Orthot. Int. 2007, 31, 62–75.
4. Liu, Q.; Wang, Y.; Fathy, A.E. A Compact Integrated 100 GS/s Sampling Module for UWB See Through Wall Radar with Fast Refresh Rate for Dynamic Real Time Imaging. In Proceedings of the Radio and Wireless Symposium (RWS), Santa Clara, CA, USA, 15–18 January 2012; pp. 59–62.
5. Park, P.; Kim, S.; Woo, S.; Kim, C. A High-Resolution Short-Range CMOS Impulse Radar for Human Walk Tracking. In Proceedings of the Radio Frequency Integrated Circuits Symposium (RFIC), Seattle, WA, USA, 2–4 June 2013; pp. 9–12.
6. Ivashchuk, V.E.; Prokhorenko, V.P.; Pitertsev, A.A.; Yanovsky, F.J. Evaluation of Combined Ground Penetrating and Through-the-Wall Surveillance UWB Technology. In Proceedings of the European Microwave Conference, Nuremberg, Germany, 6–10 October 2013; pp. 384–387.
7. Levitas, B.; Matuzas, J.; Drozdov, M. Detection and Separation of Several Human Beings behind the Wall with UWB Radar. In Proceedings of the International Radar Symposium, Wroclaw, Poland, 21–23 May 2008; pp. 1–4.
8. Li, J.; Zeng, Z.; Sun, J.; Liu, F. Through-Wall Detection of Human Being’s Movement by UWB Radar. IEEE Geosci. Remote Sens. Lett. 2012, 9, 1079–1083.
9. Ram, S.S.; Christianson, C.; Ling, H. Simulation of High Range-Resolution Profiles of Humans Behind Walls. In Proceedings of the Radar Conference, Pasadena, CA, USA, 4–8 May 2009; pp. 1–4.
10. Safarik, M.; Mrkvica, J.; Protiva, P.; Sikl, R. Three-Dimensional Image Fusion of Moving Human Target Data Measured by a Portable Through-wall Radar. In Proceedings of the 23rd International Conference Radioelektronika, Pardubice, Czech Republic, 16–17 April 2013; pp. 308–311.
11. Wu, S.; Tan, K.; Xu, Y.; Chen, J.; Meng, S.; Fang, G. A Simple Strategy for Moving Target Imaging via an Experimental UWB Through-wall Radar. In Proceedings of the 14th International Conference on Ground Penetrating Radar (GPR), Shanghai, China, 4–8 June 2012; pp. 961–965.
12. Zhu, G.; Hu, J.; Yuan, Z.; Huang, X. Automatic Human Target Detection of Ultra-wideband Through-wall Radar. In Proceedings of the Radar Conference (RadarCon13), Ottawa, ON, Canada, 29 April–3 May 2013; pp. 1–4.
13. Schleicher, B.; Dederer, J.; Leib, M.; Nasr, I.; Trasser, A.; Menzel, W.; Schumacher, H. Highly Compact Impulse UWB Transmitter for High-resolution Movement Detection. In Proceedings of the International Conference on Ultra-Wideband, Hannover, Germany, 10–12 September 2008; Volume 1, pp. 89–92.
14. Schleicher, B.; Nasr, I.; Trasser, A.; Schumacher, H. IR-UWB Radar Demonstrator for Ultra-Fine Movement Detection and Vital-Sign Monitoring. IEEE Trans. Microw. Theory Tech. 2013, 61, 2076–2085.
15. Maaref, N.; Millot, P.; Pichot, C.; Picon, O. A Study of UWB FM-CW Radar for the Detection of Human Beings in Motion Inside a Building. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1297–1300.
16. Staderini, E.M. UWB Radars in Medicine. IEEE Aerosp. Electron. Syst. Mag. 2002, 17, 13–18.
17. Boryssenko, A.; Boryssenko, E. UWB Radar Sensor to Monitor Heart Physiology. In Proceedings of the Loughborough Antennas & Propagation Conference (LAPC), Loughborough, UK, 14–15 November 2011; pp. 1–4.
18. Chu, T.; Roderick, J.; Chang, S.; Mercer, T.; Du, C.; Hashemi, H. A Short-range UWB Impulse-radio CMOS Sensor for Human Feature Detection. In Proceedings of the International Solid-State Circuits Conference (ISSCC), San Francisco, CA, USA, 20–24 February 2011; pp. 294–296.
19. Gentile, C.; Kik, A. A Comprehensive Evaluation of Indoor Ranging Using Ultra-Wideband Technology. EURASIP J. Wirel. Commun. Netw. 2007, 2007, 86031.
20. Immoreev, I.; Ivashov, S. Remote Monitoring of Human Cardiorespiratory System Parameters by Radar and its Applications. In Proceedings of the 4th International Conference on Ultrawideband and Ultrashort Impulse Signals, Sevastopol, Ukraine, 15–19 September 2008; pp. 34–38.
21. Nijsure, Y.; Tay, W.P.; Gunawan, E.; Wen, F.; Yang, Z.; Guan, Y.L.; Chua, A.P. An Impulse Radio Ultrawideband System for Contactless Noninvasive Respiratory Monitoring. IEEE Trans. Biomed. Eng. 2013, 60, 1509–1517.
22. Simon, S.R. Quantification of Human Motion: Gait Analysis Benefits and Limitations to its Application to Clinical Problems. J. Biomech. 2004, 37, 1869–1880.
23. Thiel, F.; Seifert, F. Noninvasive Probing of the Human Body with Electromagnetic Pulses: Modeling of the Signal Path. J. Appl. Phys. 2009, 105, 044904.
24. Wang, Y.; Liu, Q.; Fathy, A.E. Simultaneous Localization and Respiration Detection of Multiple People using Low Cost UWB Biometric Pulse Doppler Radar Sensor. In Proceedings of the IEEE/MTT-S International Microwave Symposium Digest, Montreal, QC, Canada, 17–22 June 2012; pp. 1–3.
25. Wang, Y.; Fathy, A.E. Micro-Doppler Signatures for Intelligent Human Gait Recognition using a UWB Impulse Radar. In Proceedings of the International Symposium on Antennas and Propagation (APSURSI), Spokane, WA, USA, 3–8 July 2011; pp. 2103–2106.
26. Bryan, J.; Kim, Y. Classification of Human Activities on UWB Radar using a Support Vector Machine. In Proceedings of the Antennas and Propagation Society International Symposium (APSURSI), Toronto, ON, Canada, 11–17 July 2010; pp. 1–4.
27. Bryan, J.D.; Kwon, J.; Lee, N.; Kim, Y. Application of Ultra-wide Band Radar for Classification of Human Activities. IET Radar Sonar Navig. 2012, 6, 172–179.
28. Saho, K.; Sakamoto, T.; Sato, T.; Inoue, K.; Fukuda, T. Pedestrian Classification Based on Radial Velocity Features of UWB Doppler Radar Images. In Proceedings of the International Symposium on Antennas and Propagation (ISAP), Nagoya, Japan, 29 October–2 November 2012; pp. 90–93.
29. Sakamoto, T.; Sato, T.; He, Y.; Aubry, P.J.; Yarovoy, A.G. Texture-based Technique for Separating Echoes from People Walking in UWB Radar Signals. In Proceedings of the International Symposium on Electromagnetic Theory, Hiroshima, Japan, 20–24 May 2013; pp. 119–122.
30. Saho, K.; Sakamoto, T.; Sato, T. Imaging of Pedestrians with UWB Doppler Radar Interferometry. In Proceedings of the 2013 International Symposium on Electromagnetic Theory, Hiroshima, Japan, 20–24 May 2013; pp. 29–32.
31. Wang, Y.; Fathy, A.E. Three-dimensional Through Wall Imaging using an UWB SAR. In Proceedings of the Antennas and Propagation Society International Symposium, Toronto, ON, Canada, 11–17 July 2010; pp. 1–4.
32. Gupta, A.; Saxena, V.; Joshi, S. Development of a High Resolution UWB Sensor for Estimation of Transfer Function of Vocal Tract Filter. In Proceedings of the 3rd International Conference on Wireless Communication and Sensor Networks, Allahabad, India, 13–15 December 2007; pp. 131–134.
33. Tao, T.H.; Hu, S.J.; Peng, J.H.; Kuo, S.C. An Ultrawideband Radar Based Pulse Sensor for Arterial Stiffness Measurement. In Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, 22–26 August 2007; pp. 1679–1682.
34. Eldosoky, M.A.A. The Applications of the Ultra Wide Band Radar in Detecting the Characteristics of the Human Arm Muscles. In Proceedings of the National Radio Science Conference, Cairo, Egypt, 17–19 March 2009; pp. 1–7.
35. Gürbüz, S.Z. Radar Detection and Identification of Human Signatures Using Moving Platforms. Ph.D. Thesis, Georgia Institute of Technology, Atlanta, GA, USA, 2009.
36. Liu, L.; Popescu, M.; Ho, K.C.; Skubic, M.; Rantz, M. Doppler Radar Sensor Positioning in a Fall Detection System. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), San Diego, CA, USA, 28 August–1 September 2012; pp. 256–259.
37. Wu, M.; Dai, X.; Zhang, Y.D.; Davidson, B.; Amin, M.G.; Zhang, J. Fall Detection Based on Sequential Modeling of Radar Signal Time-Frequency Features. In Proceedings of the International Conference on Healthcare Informatics (ICHI), Philadelphia, PA, USA, 9–11 September 2013; pp. 169–174.
38. Amin, M.G.; Zhang, Y.D.; Ahmad, F.; Ho, D.K.C. Radar Signal Processing for Elderly Fall Detection: The Future for in-home Monitoring. IEEE Signal Process. Mag. 2016, 33, 71–80.
39. Bennett, T.R.; Wu, J.; Kehtarnavaz, N.; Jafari, R. Inertial Measurement Unit-Based Wearable Computers for Assisted Living Applications: A Signal Processing Perspective. IEEE Signal Process. Mag. 2016, 33, 28–35.
40. Wang, X.; Li, M.; Ji, H.; Gong, Z. A Novel Modeling Approach to Fall Detection and Experimental Validation using Motion Capture System. In Proceedings of the International Conference on Robotics and Biomimetics (ROBIO), Shenzhen, China, 12–14 December 2013; pp. 234–239.
41. Li, Y.; Ho, K.C.; Popescu, M. Efficient Source Separation Algorithms for Acoustic Fall Detection Using a Microsoft Kinect. IEEE Trans. Biomed. Eng. 2014, 61, 745–755.
42. Kianoush, S.; Savazzi, S.; Vicentini, F.; Rampa, V.; Giussani, M. Leveraging RF Signals for Human Sensing: Fall Detection and Localization in Human-machine Shared Workspaces. In Proceedings of the 13th International Conference on Industrial Informatics (INDIN), Cambridge, UK, 22–24 July 2015; pp. 1456–1462.
43. Martone, A.; Ranney, K.; Innocenti, R. Automatic through the Wall Detection of Moving Targets using Low-frequency Ultra-wideband Radar. In Proceedings of the Radar Conference, Arlington, VA, USA, 10–14 May 2010; pp. 39–43.
44. Hao, J.; Dai, X.; Stroder, A.; Zhang, J.J.; Davidson, B.; Mahoor, M.; McClure, N. Prediction of a Bed-exit Motion: Multimodal Sensing Approach and Incorporation of Biomechanical Knowledge. In Proceedings of the 48th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 2–5 November 2014; pp. 1747–1751.
45. Noori, F.M.; Uddin, M.Z.; Tørresen, J. Ultra-Wideband Radar-Based Activity Recognition Using Deep Learning. IEEE Access 2021, 9, 138132–138143.
46. Pardhu, T.; Kumar, V. Human Motion Classification Using Impulse Radio Ultra Wide Band through-wall RADAR Model. Multimed. Tools Appl. 2023, 82, 36769–36791.
47. Pardhu, T.; Kumar, V.; Kumar, P.; Deevi, N. Advancements in UWB Based Human Motion Detection Through Wall: A Comprehensive Analysis. IEEE Access 2024, 12, 89818–89835.
48. Shamsipour, G.; Ershad, S.F.; Sharifi, M.; Alaei, A. Improve the Efficiency of Handcrafted Features in Image Retrieval by Adding Selected Feature Generating Layers of Deep Convolutional Neural Networks. Signal Image Video Process. 2024, 18, 2607–2620.
49. Meikle, H. Modern Radar Systems; Artech House: Norwood, MA, USA, 2008.
50. Tivive, F.H.C.; Bouzerdoum, A.; Amin, M.G. Automatic Human Motion Classification from Doppler Spectrograms. In Proceedings of the 2nd International Workshop on Cognitive Information Processing (CIP), Elba, Italy, 14–16 June 2010; pp. 237–242.
51. Huang, Z.; Leng, J. Analysis of Hu’s Moment Invariants on Images Scaling and Rotation. In Proceedings of the 2nd International Conference on Computer Engineering and Technology, Chengdu, China, 16–18 April 2010; Volume 7, pp. 476–480.
Figure 1. Process flow diagram for human motion classification using radar and imaging methods.
Figure 2. Radar signal comparisons for single-person motions: (a) fall detection, radar signal associated with fall motion for a single person, demonstrating an extended range and (b) sitting posture, radar signal corresponding to a person sitting, showing a limited range extension.
Figure 3. Experimental setup for multi-person motion detection using UWB radar.
Figure 4. Radar signal output for dual-subject walking and sitting motions.
Figure 5. Experimental layout for distinguishing between fall and walk motions via UWB radar.
Figure 6. Radar trajectories for concurrent falling and walking motions.
Figure 7. Setup for multi-radar experiments to capture 360-degree motion data.
Figure 8. Comparative radar signal images from the first multi-radar experiment, highlighting the differences in data captured from two radar positions. (a) Signal image from Radar-1 showing consistent data collection from the first experiment. (b) Signal image from Radar-2 providing a different perspective from the first experiment.
Figure 9. Results from the second experiment using multiple radars, demonstrating how radar position affects motion detection accuracy. (a) Signal image from Radar-1 corresponding to the second experiment with a 45-degree motion change. (b) Signal image from Radar-2 corresponding to the same second experiment, illustrating data consistency.
Figure 10. Development of MHI images using different τ values to illustrate the temporal depth of motion capture. (a) τ = 5. (b) τ = 10. (c) τ = 20.
Figure 11. Filtered radar images demonstrating the enhanced distinction between falling and sitting motions. (a) Radar image corresponding to falling. (b) Radar image corresponding to sitting.
Figure 12. PCA-based scatter plots highlighting the distinct motion patterns of falling and sitting. (a) Scatter plot using PCA for falling. (b) Scatter plot using PCA for sitting.
Table 1. Data selection for use in training and testing.

Phase | Training | Testing
Sit | 47 | 33
Fall | 49 | 27
Total | 96 | 60
Table 2. Classification outcomes based on video-based approach (rows: actual activity; columns: predicted class).

Activities | Sit | Fall
Sit | 25 | 9
Fall | 1 | 26
Table 3. Classification outcomes based on UWB radar approach (rows: actual activity; columns: predicted class).

Activities | Sit | Fall
Sit | 29 | 4
Fall | 0 | 27
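As a quick consistency check, per-class recognition rates follow directly from the rows of these confusion matrices; for the radar-based approach in Table 3,

\[
\text{Rate}_{\text{Sit}} = \frac{29}{29+4} \approx 87.9\%, \qquad
\text{Rate}_{\text{Fall}} = \frac{27}{27+0} = 100\%.
\]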
Table 4. Classification results of video-based approach using 10-fold cross-validation.

Group | Activity | Sit | Fall | Recognition Rate of Sitting (%) | Recognition Rate of Falling (%)
1 | Sit | 6 | 1 | 85.71 | -
  | Fall | 1 | 9 | - | 90.0
2 | Sit | 7 | 1 | 87.50 | -
  | Fall | 1 | 6 | - | 85.71
3 | Sit | 8 | 1 | 88.89 | -
  | Fall | 0 | 6 | - | 100
4 | Sit | 7 | 1 | 87.50 | -
  | Fall | 0 | 7 | - | 100
5 | Sit | 9 | 2 | 81.82 | -
  | Fall | 0 | 5 | - | 100
6 | Sit | 6 | 2 | 75.0 | -
  | Fall | 1 | 6 | - | 85.71
7 | Sit | 7 | 3 | 70.0 | -
  | Fall | 1 | 4 | - | 80.0
8 | Sit | 7 | 1 | 87.5 | -
  | Fall | 0 | 8 | - | 100
9 | Sit | 4 | 1 | 80.0 | -
  | Fall | 2 | 10 | - | 83.3
10 | Sit | 6 | 0 | 100 | -
   | Fall | 2 | 9 | - | 81.81
Average Accuracy (%) | | | | 84.39 | 90.66
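Each per-fold rate above is the row-wise recognition rate (correct classifications divided by the row total), and the Average Accuracy row is the unweighted mean over the ten folds; for example, for group 1 of Table 4,

\[
\text{Rate}_{\text{Sit}} = \frac{6}{6+1} = 85.71\%, \qquad
\text{Average Accuracy} = \frac{1}{10}\sum_{g=1}^{10}\text{Rate}_{\text{Sit}}^{(g)} = 84.39\%.
\]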
Table 5. Classification results of radar-based approach using 10-fold cross-validation.

Group | Activity | Sit | Fall | Recognition Rate of Sitting (%) | Recognition Rate of Falling (%)
1 | Sit | 6 | 1 | 85.71 | -
  | Fall | 0 | 10 | - | 100
2 | Sit | 8 | 0 | 100 | -
  | Fall | 1 | 6 | - | 85.71
3 | Sit | 8 | 1 | 88.89 | -
  | Fall | 0 | 6 | - | 100
4 | Sit | 6 | 2 | 75.0 | -
  | Fall | 0 | 7 | - | 100
5 | Sit | 10 | 1 | 90.91 | -
  | Fall | 0 | 5 | - | 100
6 | Sit | 7 | 1 | 87.5 | -
  | Fall | 1 | 6 | - | 85.71
7 | Sit | 8 | 2 | 80.0 | -
  | Fall | 0 | 5 | - | 100
8 | Sit | 8 | 0 | 100 | -
  | Fall | 0 | 8 | - | 100
9 | Sit | 4 | 1 | 80.0 | -
  | Fall | 2 | 10 | - | 83.3
10 | Sit | 6 | 0 | 100 | -
   | Fall | 0 | 11 | - | 100
Average Accuracy (%) | | | | 88.80 | 95.48