Article

Critical Analysis of Data Leakage in WiFi CSI-Based Human Action Recognition Using CNNs

Nokia Bell Labs, 1082 Budapest, Hungary
Sensors 2024, 24(10), 3159; https://doi.org/10.3390/s24103159
Submission received: 13 April 2024 / Revised: 12 May 2024 / Accepted: 13 May 2024 / Published: 16 May 2024
(This article belongs to the Section Internet of Things)

Abstract

WiFi Channel State Information (CSI)-based human action recognition using convolutional neural networks (CNNs) has emerged as a promising approach for non-intrusive activity monitoring. However, the integrity and reliability of the reported performance metrics are susceptible to data leakage, wherein information from the test set inadvertently influences the training process, leading to inflated accuracy rates. In this paper, we conduct a critical analysis of a notable IEEE Sensors Journal study on WiFi CSI-based human action recognition, uncovering instances of data leakage resulting from the absence of subject-based data partitioning. Empirical investigation corroborates the lack of exclusivity of individuals across dataset partitions, underscoring the importance of rigorous data management practices. Furthermore, we demonstrate that partitioning the data with respect to humans yields precision rates significantly lower than the reported 99.9%, highlighting the exaggerated nature of the original findings. Such inflated results could discourage other researchers and impede progress in the field by fostering a sense of complacency.

1. Introduction

In recent years, the utilization of WiFi channel state information (CSI) for human action recognition has garnered significant attention due to its non-intrusive nature and potential for application in various fields such as healthcare monitoring [1], security surveillance [2], and human–computer interaction [3]. Leveraging advanced machine learning techniques, particularly convolutional neural networks (CNNs), researchers have reported remarkable accuracy rates in identifying human actions solely based on WiFi CSI data.
Among the plethora of studies in this domain, a notable IEEE Sensors Journal paper [4] claimed an exceptional accuracy rate of 99% for WiFi CSI-based human action recognition using CNNs. Such a high accuracy rate holds immense promise for practical applications, potentially revolutionizing how human actions are monitored and analyzed in various contexts. However, beneath the surface of these seemingly groundbreaking results lies a critical concern: data leakage. Data leakage, often overlooked or underestimated, poses a significant threat to the integrity and reliability of machine learning models. In the context of WiFi CSI-based human action recognition, data leakage can occur when information from the test set inadvertently leaks into the training process, leading to inflated performance metrics and misleading conclusions.
This study makes several significant contributions to the field of WiFi CSI-based human action recognition using machine/deep learning, particularly in identifying and addressing data leakage.
  • Detection of data leakage: Our study successfully detects and analyzes instances of data leakage within the experimental methodology of a prominent IEEE Sensors Journal publication [4]. By meticulously examining the data partitioning methods, performance metrics, and model behavior reported in the original study, we identify inconsistencies and anomalies indicative of data leakage.
  • Empirical validation: Through empirical validation and meticulous scrutiny of the dataset and experimental procedures, we provide concrete evidence to support our assertion of data leakage in the original study.
  • Recommendations for mitigation: Building upon our findings, we propose practical recommendations for mitigating data leakage and enhancing the integrity of WiFi CSI-based human action recognition using machine/deep learning.
This paper is structured as follows: In Section 2, we provide a brief overview of the background literature related to WiFi CSI-based human action recognition and the significance of addressing data leakage. Section 3 outlines the methodology employed in the original study, followed by a detailed examination of the experimental setup and results in Section 4. In Section 5, we present our critical analysis of the findings, highlighting instances of data leakage and their impact on the reported accuracy rates. Finally, we conclude the paper in Section 6 by summarizing our key findings, discussing the implications of data leakage in WiFi CSI-based human action recognition, and suggesting avenues for future research.

2. Related Work

2.1. Human Action Recognition Based on WiFi Channel State Information

Human action recognition (HAR) algorithms may utilize different data modalities [5], such as RGB images [6], skeletons [7], depth [8], infrared [9], point clouds [10], event streams [11], audio [12], acceleration [13], radar [14], and WiFi [15]. Further, a significant number of studies focus on the fusion of different modalities for HAR [16,17,18,19]. In this section, we review methods utilizing WiFi CSI for HAR. WiFi CSI refers to the data obtained by monitoring the changes in the wireless channel characteristics between a transmitter (such as a WiFi access point) and a receiver (such as a WiFi-enabled device). This information includes various parameters such as signal strength, signal phase, signal-to-noise ratio (SNR), and other channel properties. WiFi CSI is collected by specialized hardware such as software-defined radios (SDRs) or WiFi chipsets that support CSI reporting. It provides detailed insights into the wireless channel's behavior, allowing for advanced signal processing techniques and analysis. Researchers and engineers use WiFi CSI for various purposes, such as channel estimation [20], localization [21], gesture recognition [22], activity recognition [23], and wireless networking research [24]. WiFi CSI can be used for human action recognition for the following reasons:
  • Effect of human actions on wireless signals: Human actions, such as gestures or movements, can cause changes in the wireless channel characteristics due to blockage, reflection, or absorption of the WiFi signals. These changes are reflected in the WiFi CSI measurements.
  • Distinctive patterns in CSI: Different human actions result in characteristic patterns in the WiFi CSI data. For example, a specific gesture may cause a sudden drop or fluctuation in signal strength or phase, which can be detected and recognized through signal processing techniques.
  • Machine learning algorithms: Advanced machine learning algorithms can be trained to recognize specific human actions based on patterns observed in WiFi CSI data. By collecting the labeled data of WiFi CSI corresponding to different human actions, classifiers can be trained to accurately recognize and classify these actions in real time.
Due to the wide availability of WiFi signals, many WiFi CSI-based systems for HAR have been proposed in the literature, introducing various methods to improve accuracy and address different challenges. For instance, Wang et al. [25] discuss a device-free fall detection system called WiFall, which leverages wireless signal propagation models and CSI to detect falls without the need for wearable devices. Specifically, it exploits the time variability and spatial diversity of CSI. The system consists of a two-phase detection architecture: a local outlier factor-based algorithm to identify abnormal CSI series and activity classification using a one-class support vector machine (SVM) [26] to distinguish falls from other human activities. In contrast, the detection of large-scale human movements was the goal of the WiSee [27], WiTrack [28], Wi-Vi [29], and E-eyes [15] projects. Specifically, Pu et al. [27] extracted human gesture and motion information from wireless signals using the Doppler shift property, which results in a pattern of frequency shifts at the wireless receiver when a user performs a gesture or moves. WiSee maps these Doppler shifts to gestures by leveraging the continuous nature of human gestures and classifying them using a binary pattern-matching algorithm. Additionally, the system works effectively in the presence of multiple users by utilizing multiple-input multiple-output capabilities to focus on gestures and motion from a specific user. In the WiTrack [28] project, 3D human motion tracking was carried out based on radio frequency reflections from a human body. Further, WiTrack can also provide coarse tracking of larger body parts, such as legs or arms. In the Wi-Vi [29] project, Adib et al. demonstrated that the detection of human movements is also possible behind walls and doors in a closed room. In the E-eyes [15] project, researchers introduced a low-cost system for identifying activities in home environments using WiFi access points and devices. The system uses the cumulative moving variance of CSI samples to determine the presence of walking or in-place activities. For activity identification, it employs dynamic time warping [30] for walking activities and the earth mover's distance [31] for in-place activities, comparing the testing CSI measurements to known activity profiles. In [32], Halperin et al. released a publicly available tool for WiFi CSI collection and processing according to the 802.11n standard [33] for a specific Intel chipset. Alternatives for Atheros chipsets were provided by Xie et al. [34] and Tsakalaki and Schäfer [35].
Recent studies in the field of WiFi CSI-based human action recognition have heavily utilized different deep-learning architectures and techniques. For instance, Chen et al. [36] applied a Bi-LSTM with an attention mechanism to learn from CSI amplitude and phase characteristics. Similarly, Guo et al. [37] applied an LSTM network, but combined it with a CNN. In contrast, Zhang et al. [38] proposed adversarial auto-encoder networks for CSI signal security. Jiang et al.'s framework [39] consists of three main components: the feature extractor, the activity recognizer, and the domain discriminator. The feature extractor, a CNN, collaborates with the activity recognizer to recognize human activities and simultaneously aims to fool the domain discriminator in order to learn environment/subject-independent representations. The domain discriminator is designed to identify the environment where activities are recorded, forcing the feature extractor to produce environment-independent activity features. Zhu et al. [40] combined causal and dilated convolutions to implement a temporal convolutional network.

2.2. Data Leakage in Machine Learning Models

Research on data leakage in machine learning models spans a variety of contexts and methodologies [41]. Data leakage occurs when information from the test set unintentionally influences the training process, leading to inflated performance metrics and misleading conclusions. This can happen due to various reasons, including improper data partitioning, feature engineering, or preprocessing techniques. Data leakage undermines the generalizability of machine learning models, as they may learn spurious correlations rather than genuine patterns in the data. The consequences of data leakage may extend beyond the realm of machine learning algorithms. In fields such as healthcare, finance, and security, relying on models affected by data leakage can have dire consequences [42,43,44]. Misguided decisions based on inaccurate predictions can result in financial losses, compromised patient care, or breaches in security protocols [45,46,47]. In [48], Poldrack et al. pointed out that a number of papers in the field of neuroimaging may have suffered from data leakage by performing dimensionality reduction across the whole dataset before the train/test split. Kapoor and Narayanan [49] identified eight types of leakage, i.e., not having a separate test set, preprocessing on the training and test sets, feature selection jointly on the training and test sets, duplicate data points, illegitimate features, temporal leakage, non-independence between the training and test sets, and sampling bias. Further, the authors identified 329 studies across 17 fields containing data leakage.
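To make the preprocessing variant of this pitfall concrete, the following minimal sketch (our own illustration, not code from any of the cited studies) contrasts a leaky pipeline, in which a scaler is fitted on the full dataset before splitting, with the correct order of operations:

```python
# Minimal illustration of preprocessing leakage, one of the leakage types
# catalogued by Kapoor and Narayanan [49]: statistics computed on the full
# dataset before the train/test split let test-set information reach training.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = np.random.randn(1000, 16), np.random.randint(0, 2, 1000)

# Leaky: the scaler sees the test rows before the split.
X_scaled = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, test_size=0.2)

# Correct: split first, fit the scaler on the training partition only,
# and apply the fitted transform to the held-out data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)
```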

3. Methodology

The HAR framework, ImgFi [4], addresses the challenges of recognizing human activities using WiFi CSI data by converting the information into images and applying a CNN as an image classifier for improved performance, as illustrated in Figure 1. By introducing five CSI imaging approaches, i.e., recurrence plot (RP) transformation [50], Gramian angular summation field (GASF) transformation [51], Gramian angular difference field (GADF) transformation [51], Markov transition field (MTF) transformation [52], and short-time Fourier transformation (STFT) [53], the framework demonstrates the advantages and limitations of each method for CSI imaging. Since the authors of [4] found that RP slightly outperforms GASF, GADF, and STFT and significantly outperforms MTF, they applied RP as the CSI imaging step in ImgFi. It follows that RP was also applied in our analysis and reimplementation of ImgFi. RP transformation is a method used in time series analysis and nonlinear dynamics to visualize the recurrence behavior of a dynamical system [54]. It is particularly useful for detecting hidden patterns, periodicities, and other nonlinear structures within time series data.
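Since RP is central to ImgFi's pipeline, we include a minimal sketch of the transformation below (our own illustration; the threshold value and the binary thresholding are assumptions, as unthresholded distance matrices are also used in practice):

```python
import numpy as np

def recurrence_plot(x: np.ndarray, eps: float = 0.1) -> np.ndarray:
    """Binary recurrence matrix: R[i, j] = 1 if |x_i - x_j| < eps."""
    d = np.abs(x[:, None] - x[None, :])  # pairwise distances within the series
    return (d < eps).astype(np.uint8)

# Example: one CSI amplitude series of 224 samples -> a 224 x 224 RP image.
csi_amplitude = np.random.rand(224)
rp_image = recurrence_plot(csi_amplitude)
```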

3.1. Structure of ImgFi

The structure of the proposed ImgFi CNN model is depicted in Figure 2. This model was reimplemented in PyTorch [55], strictly following the authors' description. As can be seen in Figure 2, ImgFi consists of four convolutional layers and a fully connected layer, which serves as the final classifier in this structure. Furthermore, the authors placed batch normalization [56], a rectified linear unit (ReLU) activation, and a max pooling layer between every two convolutional layers.
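For orientation, a minimal PyTorch sketch of an ImgFi-like network is given below. The channel widths, kernel sizes, and input resolution are our illustrative assumptions and should not be read as the exact hyperparameters of [4]:

```python
import torch
import torch.nn as nn

class ImgFiLike(nn.Module):
    """Four conv blocks (conv -> batch norm -> ReLU -> max pool) + linear head."""
    def __init__(self, num_classes: int, in_channels: int = 1):
        super().__init__()
        widths = [16, 32, 64, 128]  # assumed channel progression
        blocks, prev = [], in_channels
        for w in widths:
            blocks += [
                nn.Conv2d(prev, w, kernel_size=3, padding=1),
                nn.BatchNorm2d(w),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            ]
            prev = w
        self.features = nn.Sequential(*blocks)
        self.pool = nn.AdaptiveAvgPool2d(1)  # makes the head input-size agnostic
        self.classifier = nn.Linear(prev, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.pool(self.features(x)).flatten(1))

model = ImgFiLike(num_classes=16)            # e.g., the 16 WiAR action labels
logits = model(torch.randn(8, 1, 224, 224))  # a batch of RP images
```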
Besides the implementation of ImgFi, the authors of [4] also reported the performance of several CNNs pretrained on the ImageNet database, such as ShuffleNet [57], VGG19 [58], ResNet18 [59], and ResNet50 [59]. ShuffleNet [57] is a CNN architecture designed for efficient computation and memory usage, particularly suited for mobile and embedded devices. It employs group convolutions and channel shuffling to significantly reduce computational complexity while maintaining high accuracy in image classification tasks. VGG19 [58] is composed of 19 layers, including convolutional layers followed by max-pooling layers, topped with fully connected layers. The network architecture consists of alternating convolutional layers with small 3 × 3 filters and max-pooling layers with 2 × 2 filters. This structure allows VGG19 to capture complex patterns at different scales in the input images. The last few layers of VGG19 are fully connected layers responsible for high-level reasoning and classification. ResNet18 and ResNet50 [59] are both CNN architectures developed by Microsoft Research as part of the ResNet (residual network) family. They are designed to address the vanishing gradient problem encountered in very deep neural networks by introducing skip connections, or shortcuts. The main characteristics of the examined CNNs are summarized in Table 1.
Since a detailed description of the finetuning process of these architectures is not given in the original publication [4], we briefly summarize our procedure here. First, the fully connected layers were removed from the pretrained CNN models because they are specific to the original task (ImageNet classification). Second, we added new fully connected layers on top of the convolutional base; these layers are specific to our new task. In particular, the number of nodes in the final fully connected layer must match the number of classes in our dataset. A minimal sketch of this head replacement is given below.
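The sketch assumes torchvision's pretrained models; the class count corresponds to the WiAR labels, and only the new head is task-specific:

```python
import torch.nn as nn
from torchvision import models

num_classes = 16  # e.g., the WiAR action labels

# Load ImageNet weights and swap the ImageNet-specific classifier head for a
# new fully connected layer sized to the target action classes.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
```

During finetuning, all layers can be updated, or the convolutional base can be frozen initially and only the new head trained; the original publication [4] does not specify which variant was used.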

3.2. Detected Data Leakage

The authors [4] did not dedicate a separate section or paragraph to dataset partitioning in their paper, yet they claimed an exceptional accuracy rate of 99%. In the experimental results, we empirically corroborate that the authors divided the dataset without respect to individual subjects, a crucial oversight that undermines the integrity of the study. By neglecting to divide the dataset with respect to individuals, the authors inadvertently introduced a significant source of data leakage. The absence of subject-based data partitioning in the training, validation, and test sets raises concerns regarding the potential overlap of CSI images originating from the same individual across these sets. This oversight introduces a significant risk of data leakage, as CSI samples from a given individual may inadvertently influence the training process and subsequently inflate the reported accuracy metrics. Without a systematic approach to ensure the exclusivity of subjects across the dataset partitions, the study's findings are susceptible to biases and inaccuracies, undermining the credibility of the proposed methodology.

For clarity, Figure 3 and Figure 4 illustrate dataset partitioning with respect to and without respect to humans, respectively. As depicted in Figure 3, partitioning with respect to humans ensures the exclusivity of subjects across the training, validation, and test sets, avoiding the risk of data leakage. In contrast, Figure 4 demonstrates dataset division without respect to humans. In this incorrect strategy, CSI images are generated first (the exact number of CSI channels available in a WiFi system may vary depending on the specific implementation and hardware capabilities [61]). Subsequently, these CSI images are randomly divided into training, validation, and test sets. As a consequence, samples from one human can very easily occur in both the training and test sets, leading to data leakage.

Why does dataset partitioning without respect to humans cause data leakage? Consider a scenario where we develop a product for WiFi-based human action recognition and intend to sell it on a market in another country. In this new market, the demographic composition, cultural norms, and individual identities will undoubtedly differ from those in our local setting. If our model is trained without proper consideration for subject-based partitioning, it may inadvertently learn patterns specific to the individuals in our local dataset. Consequently, when deployed in a different country where the population characteristics vary, the model's performance may degrade significantly. By partitioning the data with respect to humans, we ensure that the model learns generalizable patterns of human actions rather than relying on idiosyncratic features of specific individuals. This approach not only enhances the model's adaptability to diverse populations but also promotes its robustness and reliability across different cultural contexts.
Empirical analysis confirms that the authors generated CSI images first, and subsequently partitioned the dataset without respect to human identities. Through meticulous examination of the dataset, it became evident that CSI images originating from the same individual were distributed across the training, validation, and test sets without proper isolation. This critical oversight in the data partitioning process introduces a significant risk of data leakage, as features specific to individual subjects may inadvertently influence model training and evaluation, leading to inflated performance metrics. By empirically corroborating the lack of subject-based data partitioning, we underscore the necessity of adhering to rigorous data management protocols to ensure the integrity and reliability of machine learning studies.
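A minimal sketch of a correct subject-based split is given below, assuming scikit-learn's group-aware splitters; the helper function and the toy data are our own illustration:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def split_by_subject(X, y, subjects, seed=0):
    """0.6/0.2/0.2 split in which no subject spans two partitions."""
    outer = GroupShuffleSplit(n_splits=1, test_size=0.4, random_state=seed)
    train_idx, rest_idx = next(outer.split(X, y, groups=subjects))
    inner = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=seed)
    val_rel, test_rel = next(inner.split(X[rest_idx], y[rest_idx],
                                         groups=subjects[rest_idx]))
    return train_idx, rest_idx[val_rel], rest_idx[test_rel]

# Toy data: 1000 CSI images recorded from 10 subjects.
X = np.random.rand(1000, 224, 224)
y = np.random.randint(0, 16, size=1000)
subjects = np.repeat(np.arange(10), 100)

train_idx, val_idx, test_idx = split_by_subject(X, y, subjects)
assert not set(subjects[train_idx]) & set(subjects[test_idx])  # no overlap
```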

4. Results

4.1. Data Details and Training

In the ImgFi study [4], the authors used three publicly available datasets, i.e., WiAR [62], SAR [63], and Widar3.0 [64], as well as a dataset of their own, to test the proposed CNN-based solutions. Since WiAR [62] and Widar3.0 [64] are the largest among the publicly available datasets used, we opted to use WiAR and Widar3.0 to demonstrate the detected data leakage issue. Table 2 gives information on the action labels and the dataset sizes.
Table 3 lists the parameter settings used for training ImgFi and for finetuning the pretrained CNN models. Unlike [4], we divided the WiFi CSI-based HAR datasets with respect to the human subjects into training, validation, and test sets. As a consequence, CSI data originating from the same individual cannot be distributed across the training, validation, and test sets. As already mentioned, we empirically corroborate that the data split in [4] was carried out without respect to the human subjects, which results in exceptionally high classification accuracy. A sketch of the training configuration is given below.
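The sketch assumes a PyTorch training setup; interpreting the decay rate of 0.8 as an exponential learning-rate decay applied once per epoch is our assumption, since the schedule type is not stated in [4]:

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(num_classes=16)  # any of the examined CNNs would do
criterion = nn.CrossEntropyLoss()        # cross-entropy loss (Table 3)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001,
                             betas=(0.9, 0.99), eps=1e-9)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.8)
# Per epoch: run the batch loop with optimizer.step(), then scheduler.step().
```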

4.2. Evaluation Metrics

To ensure correctness, our analysis used exactly the same evaluation metrics as proposed in [4]. Since the applied datasets were balanced, accuracy, precision, recall, and F1 were determined for each human action label, and subsequently their arithmetic mean was taken. In the examined classification problem, accuracy for each category can be expressed using the terms true positive (TP), true negative (TN), false positive (FP), and false negative (FN) as
$$\mathrm{Acc} = \frac{TP + TN}{TP + TN + FP + FN}.$$
Precision and recall for each category can be given as
$$\mathrm{Prec} = \frac{TP}{TP + FP},$$
$$\mathrm{Recall} = \frac{TP}{TP + FN}.$$
Similarly, F1 for each category can be expressed as
$$\mathrm{F1} = \frac{2 \cdot \mathrm{Prec} \cdot \mathrm{Recall}}{\mathrm{Prec} + \mathrm{Recall}}.$$
If the number of categories is denoted by N, the macro-averaged accuracy, precision, recall, and F1 over all categories can be determined as follows:
$$\mathrm{Macro\_Acc} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{Acc}_i,$$
$$\mathrm{Macro\_Prec} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{Prec}_i,$$
$$\mathrm{Macro\_Recall} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{Recall}_i,$$
$$\mathrm{Macro\_F1} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{F1}_i.$$
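In practice, these macro-averaged scores can be computed with scikit-learn; the following minimal sketch (our own illustration with toy labels) averages the per-class precision, recall, and F1 with equal weight per class, which is appropriate for the balanced datasets used here:

```python
from sklearn.metrics import precision_recall_fscore_support

y_true = [0, 0, 1, 1, 2, 2]  # toy ground-truth action labels
y_pred = [0, 1, 1, 1, 2, 0]  # toy predictions

prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"Macro_Prec={prec:.3f}  Macro_Recall={rec:.3f}  Macro_F1={f1:.3f}")
```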

4.3. Numerical Results

The numerical results are presented in Table 4 and Table 5. Specifically, the results reported in [4] and the results of the two dataset partition protocols (without respect to and with respect to humans) are compared. Our findings revealed that while the results of ImgFi's retraining without respect to humans are slightly lower than the reported 99.9% precision, there are potential factors that may contribute to this discrepancy. One possible explanation could be the application of a data augmentation technique in [4] that was not reported in the paper. Additionally, it is important to note that our retraining process aimed to replicate the methodology of the original study to the best of our ability, but minor variations in the implementation may have influenced the final performance metrics. Despite the slight disparity in results, our analysis underscores the significance of implementing rigorous data management practices, such as subject-based partitioning, to ensure the integrity and reliability of machine learning studies in the domain of WiFi CSI-based human action recognition.

Upon retraining the model with data partitioning carried out with respect to humans, we observed a notable decrease in the performance metrics compared to the results reported in the IEEE Sensors Journal paper [4]. Specifically, our retraining yielded a precision of 23.4%, a recall of 22.8%, and an F1 score of 22.0% on WiAR, and a precision of 47.4%, a recall of 45.6%, and an F1 score of 43.9% on Widar3.0. These findings highlight the critical role of proper data partitioning in ensuring the integrity and reliability of model evaluation. While the decrease in performance may be discouraging, it underscores the necessity of adhering to rigorous data management practices to obtain more accurate and generalizable results.
The training curves of ResNet18's retraining without respect to humans and with respect to humans are depicted in Figure 5 and Figure 6, respectively. These figures allow us to draw interesting conclusions. When the data split is conducted without respect to humans, we observe a strong correlation between training and validation accuracy, with validation accuracy closely tracking the trends of training accuracy, albeit with a small difference. This consistency suggests that the model is effectively learning from the training data and generalizing well to unseen validation data. Conversely, when the data split is performed with respect to humans, a notable disparity emerges between training and validation accuracy. Despite a consistent increase in training accuracy, validation accuracy saturates, indicating that the model's performance fails to generalize effectively to unseen data. This discrepancy suggests that the model overfits to the training data when subjected to subject-based data partitioning, emphasizing the importance of proper data management practices to ensure model robustness and generalizability. The stark contrast in the behavior of training and validation accuracy highlights the critical role of the data partitioning methodology in evaluating model performance accurately. These findings underscore the necessity of subject-based data partitioning to obtain reliable estimates of model generalization and to mitigate the risks of overfitting.

5. Discussion

The empirical corroboration of data leakage in the ImgFi [4] WiFi CSI-based human action recognition study underscores the importance of rigorous data management practices in machine learning research [66,67]. The presence of data leakage compromises the validity and generalizability of the study’s findings. By allowing CSI images from the same individual to influence both the training and evaluation processes, the reported accuracy metrics are likely inflated, leading to an overestimation of the model’s performance. Consequently, the proposed approach may not accurately generalize to unseen data or real-world scenarios, undermining its practical utility.
Our recommendations for avoiding data leakage in WiFi CSI-based HAR are the following:
  • Subject-based data partitioning: Future studies should prioritize subject-based data partitioning to ensure the exclusivity of individuals across training, validation, and test sets. By maintaining strict isolation of subjects, researchers can mitigate the risk of data leakage and obtain more reliable performance estimates.
  • Transparent reporting: Researchers should provide detailed documentation of data partitioning procedures to facilitate reproducibility and scrutiny of the study’s methodology. Transparent reporting enables reviewers and readers to identify potential methodological flaws, such as data leakage, and assess the reliability of the reported results.
  • Transparent training curves: Publishing training curves enables other researchers to replicate and validate the presented results more effectively. By providing detailed insights into the model's training process, researchers can facilitate transparency and reproducibility in the field, contributing to the advancement of knowledge and best practices.
  • Careful peer review: Reviewers play a crucial role in ensuring the integrity and reliability of published research, including identifying and addressing potential data leakage issues. Reviewers should carefully scrutinize the methodology section to ascertain how the data were partitioned for training, validation, and testing. Specifically, reviewers should look for explicit descriptions of how subjects or samples were allocated to each partition and assess whether the partitioning strategy adequately prevents information leakage between sets.
  • Guidance from database publishers: Publishers of publicly available databases for machine learning should consider providing clear and comprehensive guidance on appropriate data partitioning methodologies to assist researchers in conducting robust experiments and accurately evaluating model performance. By offering recommendations for correct train/validation/test split procedures, publishers can empower researchers to adopt best practices in data management and mitigate the risk of common pitfalls, such as data leakage. This guidance should include detailed instructions on subject-based partitioning, cross-validation techniques, and transparent reporting of data preprocessing steps to foster transparency and reproducibility in machine learning research.
Addressing data leakage is crucial for ensuring the integrity and reliability of machine learning studies. By implementing rigorous data management practices, such as subject-based partitioning and transparent reporting, researchers can enhance the validity and generalizability of their findings.

6. Conclusions

In this study, we conducted a critical analysis of WiFi Channel State Information (CSI)-based human action recognition using convolutional neural networks (CNNs), with a specific focus on addressing the issue of data leakage. Through empirical investigation and meticulous scrutiny of the methodology, experimental setup, and results presented in a notable IEEE Sensors Journal paper, we identified instances of data leakage that undermine the integrity and reliability of the reported findings. Our analysis revealed that the authors failed to implement subject-based data partitioning, leading to the inadvertent inclusion of CSI images from the same individual across the training, validation, and test sets. This critical oversight introduced a significant risk of data leakage, whereby information from the test set leaked into the training process, resulting in inflated accuracy metrics and misleading conclusions.
In conclusion, addressing data leakage is essential for advancing the field of machine learning and ensuring the reliability and generalizability of research findings. By identifying and rectifying methodological pitfalls, we can strengthen the foundations of machine learning research and pave the way for more robust and impactful applications in diverse domains. The overly optimistic performance metrics reported in studies affected by data leakage may inadvertently create a false sense of accomplishment, discouraging other researchers from critically examining the underlying methodologies and contributing to a stagnation in the advancement of the field.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this study are available for download at https://github.com/linteresa/WiAR (WiAR database) (accessed on 15 March 2024) and http://tns.thss.tsinghua.edu.cn/widar3.0/ (Widar3.0) (accessed on 15 March 2024).

Acknowledgments

We would like to express our sincere gratitude to our colleagues Gábor Sörös, Ferenc Kovács, and Chung Shue Chen for their invaluable feedback and constructive comments on the manuscript. Their insights and suggestions have greatly contributed to the clarity and rigor of this work. We are also deeply grateful to our manager, Lóránt Farkas, for his unwavering support and encouragement throughout the research project. We extend our heartfelt appreciation to Krisztián Varga for his invaluable assistance and expertise in GPU computing. His guidance and support have been instrumental in optimizing our computational workflows and accelerating the progress of this research project. We would like to express our heartfelt gratitude to the entire team of Nokia Bell Labs, Budapest for fostering an environment of collaboration, support, and positivity throughout the duration of this project. Finally, we thank the anonymous reviewers and the academic editor for their careful reading of our manuscript and their many insightful comments and suggestions.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CNN: convolutional neural network
CSI: channel state information
GADF: Gramian angular difference field
GASF: Gramian angular summation field
GPU: graphics processing unit
HAR: human action recognition
IEEE: Institute of Electrical and Electronics Engineers
MTF: Markov transition field
ReLU: rectified linear unit
ResNet: residual network
RP: recurrence plot
SDR: software-defined radio
SNR: signal-to-noise ratio
SVM: support vector machine
VGG: visual geometry group

References

  1. Khan, U.M.; Kabir, Z.; Hassan, S.A. Wireless health monitoring using passive WiFi sensing. In Proceedings of the 2017 13th International Wireless Communications and Mobile Computing Conference (IWCMC), Valencia, Spain, 26–30 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1771–1776. [Google Scholar]
  2. Sruthy, S.; George, S.N. WiFi enabled home security surveillance system using Raspberry Pi and IoT module. In Proceedings of the 2017 IEEE International Conference on Signal Processing, Informatics, Communication and Energy Systems (SPICES), Kollam, India, 8–10 August 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6. [Google Scholar]
  3. Zhang, R.; Jiang, C.; Wu, S.; Zhou, Q.; Jing, X.; Mu, J. Wi-Fi sensing for joint gesture recognition and human identification from few samples in human–computer interaction. IEEE J. Sel. Areas Commun. 2022, 40, 2193–2205. [Google Scholar] [CrossRef]
  4. Zhang, C.; Jiao, W. Imgfi: A high accuracy and lightweight human activity recognition framework using csi image. IEEE Sens. J. 2023, 23, 21966–21977. [Google Scholar] [CrossRef]
  5. Sun, Z.; Ke, Q.; Rahmani, H.; Bennamoun, M.; Wang, G.; Liu, J. Human action recognition from various data modalities: A review. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 3200–3225. [Google Scholar] [CrossRef]
  6. Hao, Z.; Zhang, Q.; Ezquierdo, E.; Sang, N. Human action recognition by fast dense trajectories. In Proceedings of the 21st ACM International Conference on Multimedia, Barcelona, Spain, 21–25 October 2013; pp. 377–380. [Google Scholar]
  7. Du, Y.; Wang, W.; Wang, L. Hierarchical recurrent neural network for skeleton based action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1110–1118. [Google Scholar]
  8. Sanchez-Caballero, A.; de López-Diz, S.; Fuentes-Jimenez, D.; Losada-Gutiérrez, C.; Marrón-Romera, M.; Casillas-Perez, D.; Sarker, M.I. 3dfcnn: Real-time action recognition using 3d deep neural networks with raw depth information. Multimed. Tools Appl. 2022, 81, 24119–24143. [Google Scholar] [CrossRef]
  9. Akula, A.; Shah, A.K.; Ghosh, R. Deep learning approach for human action recognition in infrared images. Cogn. Syst. Res. 2018, 50, 146–154. [Google Scholar] [CrossRef]
  10. Munaro, M.; Ballin, G.; Michieletto, S.; Menegatti, E. 3D flow estimation for human action recognition from colored point clouds. Biol. Inspired Cogn. Archit. 2013, 5, 42–51. [Google Scholar] [CrossRef]
  11. Huang, C. Event-based action recognition using timestamp image encoding network. arXiv 2020, arXiv:2009.13049. [Google Scholar]
  12. Gao, R.; Oh, T.H.; Grauman, K.; Torresani, L. Listen to look: Action recognition by previewing audio. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 10457–10467. [Google Scholar]
  13. Micucci, D.; Mobilio, M.; Napoletano, P. Unimib shar: A dataset for human activity recognition using acceleration data from smartphones. Appl. Sci. 2017, 7, 1101. [Google Scholar] [CrossRef]
  14. Hernangómez, R.; Santra, A.; Stańczak, S. Human activity classification with frequency modulated continuous wave radar using deep convolutional neural networks. In Proceedings of the 2019 International Radar Conference (RADAR), Toulon, France, 23–27 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–6. [Google Scholar]
  15. Wang, Y.; Liu, J.; Chen, Y.; Gruteser, M.; Yang, J.; Liu, H. E-eyes: Device-free location-oriented activity identification using fine-grained wifi signatures. In Proceedings of the 20th Annual International Conference on Mobile Computing and Networking, Maui, HI, USA, 7–11 September 2014; pp. 617–628. [Google Scholar]
  16. Dawar, N.; Kehtarnavaz, N. A convolutional neural network-based sensor fusion system for monitoring transition movements in healthcare applications. In Proceedings of the 2018 IEEE 14th International Conference on Control and Automation (ICCA), Anchorage, AK, USA, 12–15 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 482–485. [Google Scholar]
  17. Khaire, P.; Imran, J.; Kumar, P. Human activity recognition by fusion of rgb, depth, and skeletal data. In Proceedings of the 2nd International Conference on Computer Vision & Image Processing: CVIP 2017, Roorkee, India, 9–12 September 2017; Springer: Berlin/Heidelberg, Germany, 2018; Volume 1, pp. 409–421. [Google Scholar]
  18. Ardianto, S.; Hang, H.M. Multi-view and multi-modal action recognition with learned fusion. In Proceedings of the 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Honolulu, HI, USA, 12–15 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1601–1604. [Google Scholar]
  19. Yu, J.; Cheng, Y.; Zhao, R.W.; Feng, R.; Zhang, Y. Mm-pyramid: Multimodal pyramid attentional network for audio-visual event localization and video parsing. In Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, Portugal, 10–14 October 2022; pp. 6241–6249. [Google Scholar]
  20. Xie, H.; Gao, F.; Jin, S. An overview of low-rank channel estimation for massive MIMO systems. IEEE Access 2016, 4, 7313–7321. [Google Scholar] [CrossRef]
  21. Wu, K.; Xiao, J.; Yi, Y.; Chen, D.; Luo, X.; Ni, L.M. CSI-based indoor localization. IEEE Trans. Parallel Distrib. Syst. 2012, 24, 1300–1309. [Google Scholar] [CrossRef]
  22. Ahmed, H.F.T.; Ahmad, H.; Aravind, C. Device free human gesture recognition using Wi-Fi CSI: A survey. Eng. Appl. Artif. Intell. 2020, 87, 103281. [Google Scholar] [CrossRef]
  23. Gao, Q.; Wang, J.; Ma, X.; Feng, X.; Wang, H. CSI-based device-free wireless localization and activity recognition using radio image features. IEEE Trans. Veh. Technol. 2017, 66, 10346–10356. [Google Scholar] [CrossRef]
  24. De Kerret, P.; Gesbert, D. CSI sharing strategies for transmitter cooperation in wireless networks. IEEE Wirel. Commun. 2013, 20, 43–49. [Google Scholar] [CrossRef]
  25. Wang, Y.; Wu, K.; Ni, L.M. Wifall: Device-free fall detection by wireless networks. IEEE Trans. Mob. Comput. 2016, 16, 581–594. [Google Scholar] [CrossRef]
  26. Kecman, V. Support vector machines—An introduction. In Support Vector Machines: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2005; pp. 1–47. [Google Scholar]
  27. Pu, Q.; Gupta, S.; Gollakota, S.; Patel, S. Whole-home gesture recognition using wireless signals. In Proceedings of the 19th Annual International Conference on Mobile Computing & Networking, Miami, FL, USA, 30 September–4 October 2013; pp. 27–38. [Google Scholar]
  28. Adib, F.; Kabelac, Z.; Katabi, D.; Miller, R.C. 3D tracking via body radio reflections. In Proceedings of the 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI 14), Seattle, WA, USA, 2–4 April 2014; pp. 317–329. [Google Scholar]
  29. Adib, F.; Katabi, D. See through walls with WiFi! In Proceedings of the ACM SIGCOMM 2013 Conference on SIGCOMM, Hong Kong, China, 12–16 August 2013; pp. 75–86. [Google Scholar]
  30. Müller, M. Dynamic time warping. In Information Retrieval for Music and Motion; Springer: Heidelberg, Germany, 2007; pp. 69–84. [Google Scholar]
  31. Ling, H.; Okada, K. An efficient earth mover’s distance algorithm for robust histogram comparison. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 840–853. [Google Scholar] [CrossRef] [PubMed]
  32. Halperin, D.; Hu, W.; Sheth, A.; Wetherall, D. Tool release: Gathering 802.11n traces with channel state information. ACM SIGCOMM Comput. Commun. Rev. 2011, 41, 53. [Google Scholar] [CrossRef]
  33. Van Nee, R.; Jones, V.; Awater, G.; Van Zelst, A.; Gardner, J.; Steele, G. The 802.11n MIMO-OFDM standard for wireless LAN and beyond. Wirel. Pers. Commun. 2006, 37, 445–453. [Google Scholar] [CrossRef]
  34. Xie, Y.; Li, Z.; Li, M. Precise power delay profiling with commodity WiFi. In Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, Paris, France, 7–11 September 2015; pp. 53–64. [Google Scholar]
  35. Tsakalaki, E.; Schäfer, J. On application of the correlation vectors subspace method for 2-dimensional angle-delay estimation in multipath ofdm channels. In Proceedings of the 2018 14th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), Limassol, Cyprus, 15–17 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–8. [Google Scholar]
  36. Chen, Z.; Zhang, L.; Jiang, C.; Cao, Z.; Cui, W. WiFi CSI based passive human activity recognition using attention based BLSTM. IEEE Trans. Mob. Comput. 2018, 18, 2714–2724. [Google Scholar] [CrossRef]
  37. Guo, L.; Zhang, H.; Wang, C.; Guo, W.; Diao, G.; Lu, B.; Lin, C.; Wang, L. Towards CSI-based diversity activity recognition via LSTM-CNN encoder-decoder neural network. Neurocomputing 2021, 444, 260–273. [Google Scholar] [CrossRef]
  38. Zhang, W.; Zhou, S.; Peng, D.; Yang, L.; Li, F.; Yin, H. Understanding and modeling of WiFi signal-based indoor privacy protection. IEEE Internet Things J. 2020, 8, 2000–2010. [Google Scholar] [CrossRef]
  39. Jiang, W.; Miao, C.; Ma, F.; Yao, S.; Wang, Y.; Yuan, Y.; Xue, H.; Song, C.; Ma, X.; Koutsonikolas, D.; et al. Towards environment independent device free human activity recognition. In Proceedings of the 24th Annual International Conference on Mobile Computing and Networking, New Delhi, India, 29 October–2 November 2018; pp. 289–304. [Google Scholar]
  40. Zhu, A.; Tang, Z.; Wang, Z.; Zhou, Y.; Chen, S.; Hu, F.; Li, Y. Wi-ATCN: Attentional temporal convolutional network for human action prediction using WiFi channel state information. IEEE J. Sel. Top. Signal Process. 2022, 16, 804–816. [Google Scholar] [CrossRef]
  41. Domnik, J.; Holland, A. On data leakage prevention and machine learning. In Proceedings of the 35th Bled eConference Digital Restructuring and Human (Re) Action, Bled, Slovenia, 26–29 June 2022; p. 695. [Google Scholar]
  42. Samala, R.K.; Chan, H.P.; Hadjiiski, L.; Koneru, S. Hazards of data leakage in machine learning: A study on classification of breast cancer using deep neural networks. In Medical Imaging 2020: Computer-Aided Diagnosis; SPIE: Bellingham, WA, USA, 2020; Volume 11314, pp. 279–284. [Google Scholar]
  43. Chiavegatto Filho, A.; Batista, A.F.D.M.; Dos Santos, H.G. Data leakage in health outcomes prediction with machine learning. comment on “prediction of incident hypertension within the next year: Prospective study using statewide electronic health records and machine learning”. J. Med Internet Res. 2021, 23, e10969. [Google Scholar] [CrossRef]
  44. Rosenblatt, M.; Tejavibulya, L.; Jiang, R.; Noble, S.; Scheinost, D. Data leakage inflates prediction performance in connectome-based machine learning models. Nat. Commun. 2024, 15, 1829. [Google Scholar] [CrossRef] [PubMed]
  45. Hannun, A.; Guo, C.; van der Maaten, L. Measuring data leakage in machine-learning models with fisher information. In Uncertainty in Artificial Intelligence; PMLR: Cambridge MA, USA, 2021; pp. 760–770. [Google Scholar]
  46. Stock, A.; Gregr, E.J.; Chan, K.M. Data leakage jeopardizes ecological applications of machine learning. Nat. Ecol. Evol. 2023, 7, 1743–1745. [Google Scholar] [CrossRef]
  47. Yang, M.; Zhu, J.J.; McGaughey, A.; Zheng, S.; Priestley, R.D.; Ren, Z.J. Predicting extraction selectivity of acetic acid in pervaporation by machine learning models with data leakage management. Environ. Sci. Technol. 2023, 57, 5934–5946. [Google Scholar] [CrossRef]
  48. Poldrack, R.A.; Huckins, G.; Varoquaux, G. Establishment of best practices for evidence for prediction: A review. JAMA Psychiatry 2020, 77, 534–540. [Google Scholar] [CrossRef] [PubMed]
  49. Kapoor, S.; Narayanan, A. Leakage and the reproducibility crisis in machine-learning-based science. Patterns 2023, 4, 100804. [Google Scholar] [CrossRef] [PubMed]
  50. Eckmann, J.P.; Kamphorst, S.O.; Ruelle, D. Recurrence plots of dynamical systems. World Sci. Ser. Nonlinear Sci. Ser. A 1995, 16, 441–446. [Google Scholar]
  51. Wang, Z.; Oates, T. Imaging time-series to improve classification and imputation. arXiv 2015, arXiv:1506.00327. [Google Scholar]
  52. Jiang, J.R.; Yen, C.T. Markov transition field and convolutional long short-term memory neural network for manufacturing quality prediction. In Proceedings of the 2020 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-Taiwan), Taoyuan, Taiwan, 28–30 September 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–2. [Google Scholar]
  53. Sejdić, E.; Djurović, I.; Jiang, J. Time–frequency feature representation using energy concentration: An overview of recent advances. Digit. Signal Process. 2009, 19, 153–183. [Google Scholar] [CrossRef]
  54. Marwan, N.; Romano, M.C.; Thiel, M.; Kurths, J. Recurrence plots for the analysis of complex systems. Phys. Rep. 2007, 438, 237–329. [Google Scholar] [CrossRef]
  55. Ketkar, N.; Moolayil, J. Introduction to PyTorch. In Deep Learning with Python: Learn Best Practices of Deep Learning Models with PyTorch; Springer: Berlin/Heidelberg, Germany, 2021; pp. 27–91. [Google Scholar]
  56. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; PMLR: Cambridge MA, USA, 2015; pp. 448–456. [Google Scholar]
  57. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 6848–6856. [Google Scholar]
  58. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  59. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  60. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.-F. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 248–255. [Google Scholar]
  61. Li, M.; Meng, Y.; Liu, J.; Zhu, H.; Liang, X.; Liu, Y.; Ruan, N. When CSI meets public WiFi: Inferring your mobile phone password via WiFi signals. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016; pp. 1068–1079. [Google Scholar]
  62. Guo, L.; Wang, L.; Lin, C.; Liu, J.; Lu, B.; Fang, J.; Liu, Z.; Shan, Z.; Yang, J.; Guo, S. Wiar: A public dataset for wifi-based activity recognition. IEEE Access 2019, 7, 154935–154945. [Google Scholar] [CrossRef]
  63. Brinke, J.K.; Meratnia, N. Scaling activity recognition using channel state information through convolutional neural networks and transfer learning. In Proceedings of the First International Workshop on Challenges in Artificial Intelligence and Machine Learning for Internet of Things, New York, NY, USA, 10–13 November 2019; pp. 56–62. [Google Scholar]
  64. Zhang, Y.; Zheng, Y.; Qian, K.; Zhang, G.; Liu, Y.; Wu, C.; Yang, Z. Widar3.0: Zero-effort cross-domain gesture recognition with wi-fi. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 8671–8688. [Google Scholar] [CrossRef] [PubMed]
  65. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  66. Götz-Hahn, F.; Hosu, V.; Lin, H.; Saupe, D. KonVid-150k: A Dataset for No-Reference Video Quality Assessment of Videos in-the-Wild. IEEE Access 2021, 9, 72139–72160. [Google Scholar] [CrossRef]
  67. Saupe, D.; Hahn, F.; Hosu, V.; Zingman, I.; Rana, M.; Li, S. Crowd workers proven useful: A comparative study of subjective video quality assessment. In Proceedings of the QoMEX 2016: 8th International Conference on Quality of Multimedia Experience, Lisbon, Portugal, 6–8 June 2016. [Google Scholar]
Figure 1. Framework for WiFi CSI-based HAR.
Figure 2. Structure of the ImgFi model [4], which consists of four convolutional layers and a fully connected layer serving as the final classifier. Further, batch normalization and max pooling were applied between every two convolutional layers.
Figure 3. Illustration of dataset split with respect to humans. Humans are exclusively allocated to either training, validation, or test sets, ensuring independence and preventing data leakage between partitions.
Figure 4. Illustration of dataset split without respect to humans. Humans are not allocated exclusively to training, validation, or test sets, but CSI images are generated first and then these CSI images are randomly divided into training, validation, and test sets. As a consequence, samples from one human can very easily occur both in the training and test sets leading to data leakage.
Figure 5. Retraining of ResNet18 without respect to humans. In the upper figure, the training accuracy is depicted in blue, while the validation accuracy is shown in black. In the bottom figure, training loss is shown in red and validation loss is illustrated in black.
Figure 6. Retraining of ResNet18 with respect to humans. In the upper figure, the training accuracy is depicted in blue, while the validation accuracy is shown in black. In the bottom figure, training loss is shown in red and validation loss is illustrated in black.
Table 1. CNNs pretrained on ImageNet [60] and some of their properties. The depth is defined as the largest number of sequential convolutional or fully connected layers on a path from the network input to the network output. All models accept RGB images as input.
CNN             | Depth | Size   | Parameters (Millions)
ShuffleNet [57] | 50    | 5.4 MB | 1.4
VGG19 [58]      | 19    | 535 MB | 144
ResNet18 [59]   | 18    | 44 MB  | 11.7
ResNet50 [59]   | 50    | 96 MB  | 25.6
Table 2. Details of the datasets.
Dataset Name  | Action Labels | Dataset Size
WiAR [62]     | two hands wave, high throw, horizontal arm wave, draw tick, toss paper, walk, side kick, bend, forward kick, drink water, sit down, draw X, phone call, hand clap, high arm wave, squat | 62,415 images
Widar3.0 [64] | push, sweep, clap, slide, draw-Z, draw-N | 80,000 images
Table 3. Parameter settings.
Parameter            | Value
Dataset partitioning | Training/validation/test (0.6/0.2/0.2); split carried out w.r.t. humans
Loss function        | Cross-entropy
Optimizer            | Adam [65] ($\beta_1 = 0.9$, $\beta_2 = 0.99$, $\epsilon = 1 \times 10^{-9}$)
Learning rate        | 0.001
Decay rate           | 0.8
Batch size           | 128
Epochs               | 20
Table 4. Comparison of percentages on WiAR.
Architecture | Reported in [4] (Prec. / Rec. / F1) | Retrained w/o r.t. Humans (Prec. / Rec. / F1) | Retrained w.r.t. Humans (Prec. / Rec. / F1)
ShuffleNet   | 99.4 / 99.4 / 99.4 | 94.5 / 94.5 / 94.3 | 20.4 / 20.0 / 19.9
VGG19        | 99.8 / 99.7 / 99.7 | 94.6 / 94.4 / 94.4 | 20.5 / 19.9 / 19.9
ResNet18     | 99.8 / 99.8 / 99.7 | 88.1 / 88.0 / 88.0 | 15.3 / 14.7 / 14.6
ResNet50     | 99.8 / 99.8 / 99.8 | 94.5 / 94.5 / 94.0 | 20.7 / 19.8 / 19.8
ImgFi        | 99.9 / 99.8 / 99.8 | 99.0 / 99.0 / 98.9 | 23.4 / 22.8 / 22.0
Table 5. Comparison of percentages on Widar3.0.
Architecture | Reported in [4] (Prec. / Rec. / F1) | Retrained w/o r.t. Humans (Prec. / Rec. / F1) | Retrained w.r.t. Humans (Prec. / Rec. / F1)
ShuffleNet   | 99.3 / 99.3 / 99.3 | 99.1 / 99.1 / 99.1 | 40.7 / 39.6 / 39.5
VGG19        | 99.8 / 99.7 / 99.6 | 99.7 / 99.7 / 99.6 | 41.0 / 39.5 / 39.8
ResNet18     | 99.8 / 99.8 / 99.7 | 97.9 / 97.9 / 97.9 | 30.3 / 29.3 / 29.2
ResNet50     | 99.8 / 99.8 / 99.8 | 99.3 / 99.2 / 99.2 | 41.4 / 39.6 / 39.4
ImgFi        | 99.8 / 99.8 / 99.8 | 99.5 / 99.5 / 99.5 | 47.4 / 45.6 / 43.9
