Critical Analysis of Data Leakage in WiFi CSI-Based Human Action Recognition Using CNNs
Abstract
1. Introduction
- Detection of data leakage: Our study successfully detects and analyzes instances of data leakage within the experimental methodology of a prominent IEEE Sensors Journal publication [4]. By meticulously examining the data partitioning methods, performance metrics, and model behavior reported in the original study, we identify inconsistencies and anomalies indicative of data leakage.
- Empirical validation: Through empirical validation and close scrutiny of the dataset and experimental procedures, we provide concrete evidence supporting our assertion of data leakage in the original study.
- Recommendations for mitigation: Building upon our findings, we propose practical recommendations for mitigating data leakage and enhancing the integrity of WiFi CSI-based human action recognition using machine/deep learning.
2. Related Work
2.1. Human Action Recognition Based on WiFi Channel State Information
- Effect of human actions on wireless signals: Human actions, such as gestures or movements, can cause changes in the wireless channel characteristics due to blockage, reflection, or absorption of the WiFi signals. These changes are reflected in the WiFi CSI measurements.
- Distinctive patterns in CSI: Different human actions result in characteristic patterns in the WiFi CSI data. For example, a specific gesture may cause a sudden drop or fluctuation in signal strength or phase, which can be detected and recognized through signal processing techniques.
- Machine learning algorithms: Advanced machine learning algorithms can be trained to recognize specific human actions from the patterns observed in WiFi CSI data. By collecting labeled WiFi CSI data corresponding to different human actions, classifiers can be trained to accurately recognize and classify these actions in real time; a common preprocessing step, sketched after this list, is to encode the CSI time series as an image before passing it to a CNN.
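To illustrate how such classifiers consume CSI, image-based pipelines such as ImgFi first encode a CSI time series as an image, for example via a Gramian angular summation field (GASF), one of the encodings referenced in this paper. The following minimal NumPy sketch shows one way such an encoding can be computed; the series length, the synthetic input, and the variable names are illustrative and not taken from [4].

```python
import numpy as np

def gasf_image(csi_amplitude: np.ndarray) -> np.ndarray:
    """Encode a 1-D CSI amplitude series as a Gramian angular summation field.

    The series is min-max rescaled to [-1, 1], mapped to polar angles
    phi = arccos(x), and the image is G[i, j] = cos(phi_i + phi_j).
    """
    x = np.asarray(csi_amplitude, dtype=float)
    span = x.max() - x.min()
    # Guard against a constant series (zero span).
    x = 2.0 * (x - x.min()) / span - 1.0 if span > 0 else np.zeros_like(x)
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

# Example: one subcarrier's amplitude over 224 packets -> a 224x224 image.
rng = np.random.default_rng(0)
series = np.abs(np.sin(np.linspace(0, 6 * np.pi, 224)) + 0.1 * rng.standard_normal(224))
image = gasf_image(series)
print(image.shape)  # (224, 224)
```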
2.2. Data Leakage in Machine Learning Models
3. Methodology
3.1. Structure of ImgFi
3.2. Detected Data Leakage
4. Results
4.1. Data Details and Training
4.2. Evaluation Metrics
4.3. Numerical Results
5. Discussion
- Subject-based data partitioning: Future studies should prioritize subject-based data partitioning so that no individual appears in more than one of the training, validation, and test sets. By maintaining strict isolation of subjects, researchers can mitigate the risk of data leakage and obtain more reliable performance estimates (a minimal splitting sketch follows this list).
- Transparent reporting: Researchers should provide detailed documentation of data partitioning procedures to facilitate reproducibility and scrutiny of the study’s methodology. Transparent reporting enables reviewers and readers to identify potential methodological flaws, such as data leakage, and assess the reliability of the reported results.
- Publishing training curves enables other researchers to replicate and validate the presented results more effectively. Detailed insight into a model's training process fosters transparency and reproducibility in the field, contributing to the advancement of knowledge and best practices.
- Reviewers play a crucial role in ensuring the integrity and reliability of published research, including identifying and addressing potential data leakage issues. Reviewers should carefully scrutinize the methodology section to ascertain how the data were partitioned for training, validation, and testing. Specifically, reviewers should look for explicit descriptions of how subjects or samples were allocated to each partition and assess whether the partitioning strategy adequately prevents information leakage between sets.
- Publishers of publicly available databases for machine learning should consider providing clear and comprehensive guidance on appropriate data partitioning methodologies to assist researchers in conducting robust experiments and accurately evaluating model performance. By recommending correct train/validation/test split procedures, publishers can empower researchers to adopt best practices in data management and mitigate the risk of common pitfalls such as data leakage. This guidance should include detailed instructions on subject-based partitioning, cross-validation techniques, and documentation of data preprocessing steps to foster transparency and reproducibility in machine learning research.
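As a concrete illustration of the subject-based partitioning recommended above, the sketch below uses scikit-learn's GroupShuffleSplit to produce a 0.6/0.2/0.2 train/validation/test split in which no subject contributes samples to more than one set. The ratios match the setup described in Section 4; the subject IDs, labels, and dataset size are illustrative.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def subject_based_split(labels, subject_ids, test_size=0.2, val_size=0.2, seed=0):
    """Split sample indices into train/val/test so that no subject spans two sets.

    Note: with GroupShuffleSplit the fractions apply to the number of subjects
    (groups), not the number of samples.
    """
    idx = np.arange(len(labels))
    # First carve out the test subjects.
    outer = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
    train_val_idx, test_idx = next(outer.split(idx, labels, groups=subject_ids))
    # Then carve out the validation subjects from the remainder.
    rel_val = val_size / (1.0 - test_size)
    inner = GroupShuffleSplit(n_splits=1, test_size=rel_val, random_state=seed)
    sub_groups = np.asarray(subject_ids)[train_val_idx]
    tr, va = next(inner.split(train_val_idx, groups=sub_groups))
    return train_val_idx[tr], train_val_idx[va], test_idx

# Example: 10 subjects, 100 samples each; the resulting sets share no subjects.
subjects = np.repeat(np.arange(10), 100)
y = np.tile(np.arange(5), 200)
train_idx, val_idx, test_idx = subject_based_split(y, subjects)
assert not (set(subjects[train_idx]) & set(subjects[test_idx]))
assert not (set(subjects[train_idx]) & set(subjects[val_idx]))
```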
6. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
CNN | convolutional neural network |
CSI | channel state information |
GADF | Gramian angular difference field |
GASF | Gramian angular summation field |
GPU | graphics processing unit |
HAR | human action recognition |
IEEE | Institute of Electrical and Electronics Engineers |
MTF | Markov transition field |
ReLU | rectified linear unit |
ResNet | residual network |
RP | recurrence plot |
SDR | software-defined radio |
SNR | signal-to-noise ratio |
SVM | support vector machine |
VGG | visual geometry group |
References
- Khan, U.M.; Kabir, Z.; Hassan, S.A. Wireless health monitoring using passive WiFi sensing. In Proceedings of the 2017 13th International Wireless Communications and Mobile Computing Conference (IWCMC), Valencia, Spain, 26–30 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1771–1776. [Google Scholar]
- Sruthy, S.; George, S.N. WiFi enabled home security surveillance system using Raspberry Pi and IoT module. In Proceedings of the 2017 IEEE International Conference on Signal Processing, Informatics, Communication and Energy Systems (SPICES), Kollam, India, 8–10 August 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6. [Google Scholar]
- Zhang, R.; Jiang, C.; Wu, S.; Zhou, Q.; Jing, X.; Mu, J. Wi-Fi sensing for joint gesture recognition and human identification from few samples in human–computer interaction. IEEE J. Sel. Areas Commun. 2022, 40, 2193–2205. [Google Scholar] [CrossRef]
- Zhang, C.; Jiao, W. ImgFi: A high accuracy and lightweight human activity recognition framework using CSI image. IEEE Sens. J. 2023, 23, 21966–21977. [Google Scholar] [CrossRef]
- Sun, Z.; Ke, Q.; Rahmani, H.; Bennamoun, M.; Wang, G.; Liu, J. Human action recognition from various data modalities: A review. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 3200–3225. [Google Scholar] [CrossRef]
- Hao, Z.; Zhang, Q.; Ezquierdo, E.; Sang, N. Human action recognition by fast dense trajectories. In Proceedings of the 21st ACM International Conference on Multimedia, Barcelona, Spain, 21–25 October 2013; pp. 377–380. [Google Scholar]
- Du, Y.; Wang, W.; Wang, L. Hierarchical recurrent neural network for skeleton based action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1110–1118. [Google Scholar]
- Sanchez-Caballero, A.; de López-Diz, S.; Fuentes-Jimenez, D.; Losada-Gutiérrez, C.; Marrón-Romera, M.; Casillas-Perez, D.; Sarker, M.I. 3DFCNN: Real-time action recognition using 3D deep neural networks with raw depth information. Multimed. Tools Appl. 2022, 81, 24119–24143. [Google Scholar] [CrossRef]
- Akula, A.; Shah, A.K.; Ghosh, R. Deep learning approach for human action recognition in infrared images. Cogn. Syst. Res. 2018, 50, 146–154. [Google Scholar] [CrossRef]
- Munaro, M.; Ballin, G.; Michieletto, S.; Menegatti, E. 3D flow estimation for human action recognition from colored point clouds. Biol. Inspired Cogn. Archit. 2013, 5, 42–51. [Google Scholar] [CrossRef]
- Huang, C. Event-based action recognition using timestamp image encoding network. arXiv 2020, arXiv:2009.13049. [Google Scholar]
- Gao, R.; Oh, T.H.; Grauman, K.; Torresani, L. Listen to look: Action recognition by previewing audio. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 10457–10467. [Google Scholar]
- Micucci, D.; Mobilio, M.; Napoletano, P. UniMiB SHAR: A dataset for human activity recognition using acceleration data from smartphones. Appl. Sci. 2017, 7, 1101. [Google Scholar] [CrossRef]
- Hernangómez, R.; Santra, A.; Stańczak, S. Human activity classification with frequency modulated continuous wave radar using deep convolutional neural networks. In Proceedings of the 2019 International Radar Conference (RADAR), Toulon, France, 23–27 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–6. [Google Scholar]
- Wang, Y.; Liu, J.; Chen, Y.; Gruteser, M.; Yang, J.; Liu, H. E-eyes: Device-free location-oriented activity identification using fine-grained wifi signatures. In Proceedings of the 20th Annual International Conference on Mobile Computing and Networking, Maui, HI, USA, 7–11 September 2014; pp. 617–628. [Google Scholar]
- Dawar, N.; Kehtarnavaz, N. A convolutional neural network-based sensor fusion system for monitoring transition movements in healthcare applications. In Proceedings of the 2018 IEEE 14th International Conference on Control and Automation (ICCA), Anchorage, AK, USA, 12–15 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 482–485. [Google Scholar]
- Khaire, P.; Imran, J.; Kumar, P. Human activity recognition by fusion of rgb, depth, and skeletal data. In Proceedings of the 2nd International Conference on Computer Vision & Image Processing: CVIP 2017, Roorkee, India, 9–12 September 2017; Springer: Berlin/Heidelberg, Germany, 2018; Volume 1, pp. 409–421. [Google Scholar]
- Ardianto, S.; Hang, H.M. Multi-view and multi-modal action recognition with learned fusion. In Proceedings of the 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Honolulu, HI, USA, 12–15 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1601–1604. [Google Scholar]
- Yu, J.; Cheng, Y.; Zhao, R.W.; Feng, R.; Zhang, Y. MM-Pyramid: Multimodal pyramid attentional network for audio-visual event localization and video parsing. In Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, Portugal, 10–14 October 2022; pp. 6241–6249. [Google Scholar]
- Xie, H.; Gao, F.; Jin, S. An overview of low-rank channel estimation for massive MIMO systems. IEEE Access 2016, 4, 7313–7321. [Google Scholar] [CrossRef]
- Wu, K.; Xiao, J.; Yi, Y.; Chen, D.; Luo, X.; Ni, L.M. CSI-based indoor localization. IEEE Trans. Parallel Distrib. Syst. 2012, 24, 1300–1309. [Google Scholar] [CrossRef]
- Ahmed, H.F.T.; Ahmad, H.; Aravind, C. Device free human gesture recognition using Wi-Fi CSI: A survey. Eng. Appl. Artif. Intell. 2020, 87, 103281. [Google Scholar] [CrossRef]
- Gao, Q.; Wang, J.; Ma, X.; Feng, X.; Wang, H. CSI-based device-free wireless localization and activity recognition using radio image features. IEEE Trans. Veh. Technol. 2017, 66, 10346–10356. [Google Scholar] [CrossRef]
- De Kerret, P.; Gesbert, D. CSI sharing strategies for transmitter cooperation in wireless networks. IEEE Wirel. Commun. 2013, 20, 43–49. [Google Scholar] [CrossRef]
- Wang, Y.; Wu, K.; Ni, L.M. WiFall: Device-free fall detection by wireless networks. IEEE Trans. Mob. Comput. 2016, 16, 581–594. [Google Scholar] [CrossRef]
- Kecman, V. Support vector machines—An introduction. In Support Vector Machines: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2005; pp. 1–47. [Google Scholar]
- Pu, Q.; Gupta, S.; Gollakota, S.; Patel, S. Whole-home gesture recognition using wireless signals. In Proceedings of the 19th Annual International Conference on Mobile Computing & Networking, Miami, FL, USA, 30 September–4 October 2013; pp. 27–38. [Google Scholar]
- Adib, F.; Kabelac, Z.; Katabi, D.; Miller, R.C. 3D tracking via body radio reflections. In Proceedings of the 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI 14), Seattle, WA, USA, 2–4 April 2014; pp. 317–329. [Google Scholar]
- Adib, F.; Katabi, D. See through walls with WiFi! In Proceedings of the ACM SIGCOMM 2013 Conference on SIGCOMM, Hong Kong, China, 12–16 August 2013; pp. 75–86. [Google Scholar]
- Müller, M. Dynamic time warping. In Information Retrieval for Music and Motion; Springer: Heidelberg, Germany, 2007; pp. 69–84. [Google Scholar]
- Ling, H.; Okada, K. An efficient earth mover’s distance algorithm for robust histogram comparison. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 840–853. [Google Scholar] [CrossRef] [PubMed]
- Halperin, D.; Hu, W.; Sheth, A.; Wetherall, D. Tool release: Gathering 802.11n traces with channel state information. ACM SIGCOMM Comput. Commun. Rev. 2011, 41, 53. [Google Scholar] [CrossRef]
- Van Nee, R.; Jones, V.; Awater, G.; Van Zelst, A.; Gardner, J.; Steele, G. The 802.11n MIMO-OFDM standard for wireless LAN and beyond. Wirel. Pers. Commun. 2006, 37, 445–453. [Google Scholar] [CrossRef]
- Xie, Y.; Li, Z.; Li, M. Precise power delay profiling with commodity WiFi. In Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, Paris, France, 7–11 September 2015; pp. 53–64. [Google Scholar]
- Tsakalaki, E.; Schäfer, J. On application of the correlation vectors subspace method for 2-dimensional angle-delay estimation in multipath OFDM channels. In Proceedings of the 2018 14th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), Limassol, Cyprus, 15–17 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–8. [Google Scholar]
- Chen, Z.; Zhang, L.; Jiang, C.; Cao, Z.; Cui, W. WiFi CSI based passive human activity recognition using attention based BLSTM. IEEE Trans. Mob. Comput. 2018, 18, 2714–2724. [Google Scholar] [CrossRef]
- Guo, L.; Zhang, H.; Wang, C.; Guo, W.; Diao, G.; Lu, B.; Lin, C.; Wang, L. Towards CSI-based diversity activity recognition via LSTM-CNN encoder-decoder neural network. Neurocomputing 2021, 444, 260–273. [Google Scholar] [CrossRef]
- Zhang, W.; Zhou, S.; Peng, D.; Yang, L.; Li, F.; Yin, H. Understanding and modeling of WiFi signal-based indoor privacy protection. IEEE Internet Things J. 2020, 8, 2000–2010. [Google Scholar] [CrossRef]
- Jiang, W.; Miao, C.; Ma, F.; Yao, S.; Wang, Y.; Yuan, Y.; Xue, H.; Song, C.; Ma, X.; Koutsonikolas, D.; et al. Towards environment independent device free human activity recognition. In Proceedings of the 24th Annual International Conference on Mobile Computing and Networking, New Delhi, India, 29 October–2 November 2018; pp. 289–304. [Google Scholar]
- Zhu, A.; Tang, Z.; Wang, Z.; Zhou, Y.; Chen, S.; Hu, F.; Li, Y. Wi-ATCN: Attentional temporal convolutional network for human action prediction using WiFi channel state information. IEEE J. Sel. Top. Signal Process. 2022, 16, 804–816. [Google Scholar] [CrossRef]
- Domnik, J.; Holland, A. On data leakage prevention and machine learning. In Proceedings of the 35th Bled eConference Digital Restructuring and Human (Re) Action, Bled, Slovenia, 26–29 June 2022; p. 695. [Google Scholar]
- Samala, R.K.; Chan, H.P.; Hadjiiski, L.; Koneru, S. Hazards of data leakage in machine learning: A study on classification of breast cancer using deep neural networks. In Medical Imaging 2020: Computer-Aided Diagnosis; SPIE: Bellingham, WA, USA, 2020; Volume 11314, pp. 279–284. [Google Scholar]
- Chiavegatto Filho, A.; Batista, A.F.D.M.; Dos Santos, H.G. Data leakage in health outcomes prediction with machine learning. Comment on "Prediction of Incident Hypertension Within the Next Year: Prospective Study Using Statewide Electronic Health Records and Machine Learning". J. Med. Internet Res. 2021, 23, e10969. [Google Scholar] [CrossRef]
- Rosenblatt, M.; Tejavibulya, L.; Jiang, R.; Noble, S.; Scheinost, D. Data leakage inflates prediction performance in connectome-based machine learning models. Nat. Commun. 2024, 15, 1829. [Google Scholar] [CrossRef] [PubMed]
- Hannun, A.; Guo, C.; van der Maaten, L. Measuring data leakage in machine-learning models with fisher information. In Uncertainty in Artificial Intelligence; PMLR: Cambridge MA, USA, 2021; pp. 760–770. [Google Scholar]
- Stock, A.; Gregr, E.J.; Chan, K.M. Data leakage jeopardizes ecological applications of machine learning. Nat. Ecol. Evol. 2023, 7, 1743–1745. [Google Scholar] [CrossRef]
- Yang, M.; Zhu, J.J.; McGaughey, A.; Zheng, S.; Priestley, R.D.; Ren, Z.J. Predicting extraction selectivity of acetic acid in pervaporation by machine learning models with data leakage management. Environ. Sci. Technol. 2023, 57, 5934–5946. [Google Scholar] [CrossRef]
- Poldrack, R.A.; Huckins, G.; Varoquaux, G. Establishment of best practices for evidence for prediction: A review. JAMA Psychiatry 2020, 77, 534–540. [Google Scholar] [CrossRef] [PubMed]
- Kapoor, S.; Narayanan, A. Leakage and the reproducibility crisis in machine-learning-based science. Patterns 2023, 4, 100804. [Google Scholar] [CrossRef] [PubMed]
- Eckmann, J.P.; Kamphorst, S.O.; Ruelle, D. Recurrence plots of dynamical systems. World Sci. Ser. Nonlinear Sci. Ser. A 1995, 16, 441–446. [Google Scholar]
- Wang, Z.; Oates, T. Imaging time-series to improve classification and imputation. arXiv 2015, arXiv:1506.00327. [Google Scholar]
- Jiang, J.R.; Yen, C.T. Markov transition field and convolutional long short-term memory neural network for manufacturing quality prediction. In Proceedings of the 2020 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-Taiwan), Taoyuan, Taiwan, 28–30 September 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–2. [Google Scholar]
- Sejdić, E.; Djurović, I.; Jiang, J. Time–frequency feature representation using energy concentration: An overview of recent advances. Digit. Signal Process. 2009, 19, 153–183. [Google Scholar] [CrossRef]
- Marwan, N.; Romano, M.C.; Thiel, M.; Kurths, J. Recurrence plots for the analysis of complex systems. Phys. Rep. 2007, 438, 237–329. [Google Scholar] [CrossRef]
- Ketkar, N.; Moolayil, J. Introduction to PyTorch. In Deep Learning with Python: Learn Best Practices of Deep Learning Models with PyTorch; Springer: Berlin/Heidelberg, Germany, 2021; pp. 27–91. [Google Scholar]
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; PMLR: Cambridge MA, USA, 2015; pp. 448–456. [Google Scholar]
- Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 6848–6856. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.-F. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 248–255. [Google Scholar]
- Li, M.; Meng, Y.; Liu, J.; Zhu, H.; Liang, X.; Liu, Y.; Ruan, N. When CSI meets public WiFi: Inferring your mobile phone password via WiFi signals. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016; pp. 1068–1079. [Google Scholar]
- Guo, L.; Wang, L.; Lin, C.; Liu, J.; Lu, B.; Fang, J.; Liu, Z.; Shan, Z.; Yang, J.; Guo, S. WiAR: A public dataset for WiFi-based activity recognition. IEEE Access 2019, 7, 154935–154945. [Google Scholar] [CrossRef]
- Brinke, J.K.; Meratnia, N. Scaling activity recognition using channel state information through convolutional neural networks and transfer learning. In Proceedings of the First International Workshop on Challenges in Artificial Intelligence and Machine Learning for Internet of Things, New York, NY, USA, 10–13 November 2019; pp. 56–62. [Google Scholar]
- Zhang, Y.; Zheng, Y.; Qian, K.; Zhang, G.; Liu, Y.; Wu, C.; Yang, Z. Widar3.0: Zero-effort cross-domain gesture recognition with Wi-Fi. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 8671–8688. [Google Scholar] [CrossRef] [PubMed]
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Götz-Hahn, F.; Hosu, V.; Lin, H.; Saupe, D. KonVid-150k: A Dataset for No-Reference Video Quality Assessment of Videos in-the-Wild. IEEE Access 2021, 9, 72139–72160. [Google Scholar] [CrossRef]
- Saupe, D.; Hahn, F.; Hosu, V.; Zingman, I.; Rana, M.; Li, S. Crowd workers proven useful: A comparative study of subjective video quality assessment. In Proceedings of the QoMEX 2016: 8th International Conference on Quality of Multimedia Experience, Lisbon, Portugal, 6–8 June 2016. [Google Scholar]
CNN | Depth | Size | Parameters (Millions) |
---|---|---|---|
ShuffleNet [57] | 50 | 5.4 MB | 1.4 |
VGG19 [58] | 19 | 535 MB | 144 |
ResNet18 [59] | 18 | 44 MB | 11.7 |
ResNet50 [59] | 50 | 96 MB | 25.6 |
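The parameter counts of the off-the-shelf baselines in the table above can be sanity-checked against their torchvision implementations with the short sketch below. Note that ImgFi itself is not a torchvision model, and the exact ShuffleNet variant used in [4] is not restated here; the choice of shufflenet_v2_x0_5 (roughly 1.4 M parameters) is only an assumption for illustration.

```python
import torchvision.models as models

# Baseline CNNs from the table; the ShuffleNet variant is an assumption
# (torchvision ships ShuffleNet V2, not the original ShuffleNet).
baselines = {
    "ShuffleNet": models.shufflenet_v2_x0_5(),
    "VGG19": models.vgg19(),
    "ResNet18": models.resnet18(),
    "ResNet50": models.resnet50(),
}
for name, net in baselines.items():
    n_params = sum(p.numel() for p in net.parameters())
    print(f"{name}: {n_params / 1e6:.1f} M parameters")
```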
Dataset Name | Action Labels | Dataset Size |
---|---|---|
WiAR [62] | two hands wave, high throw, horizontal arm wave, draw tick, toss paper, walk, side kick, bend, forward kick, drink water, sit down, draw X, phone call, hand clap, high arm wave, squat | 62,415 images |
Widar3.0 [64] | push, sweep, clap, slide, draw-Z, draw-N | 80,000 images |
Parameter | Value |
---|---|
Dataset partitioning | Training/validation/test (0.6/0.2/0.2). Split is carried out w.r.t. humans. |
Loss function | Cross-entropy |
Optimizer | Adam [65] |
Learning rate | 0.001 |
Decay rate | 0.8 |
Batch size | 128 |
Epochs | 20 |
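A minimal PyTorch training sketch matching the hyperparameters in the table above is given below. The random stand-in tensors, the choice of ResNet18 as the network, the 16-class label space (the WiAR actions), and the interpretation of the decay rate as a per-epoch ExponentialLR factor are assumptions for illustration, not details confirmed by [4].

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-ins for the CSI images; in practice, load the GASF/GADF/MTF/RP images.
images = torch.randn(1024, 3, 224, 224)
labels = torch.randint(0, 16, (1024,))  # 16 WiAR action classes (illustrative)
loader = DataLoader(TensorDataset(images, labels), batch_size=128, shuffle=True)

model = models.resnet18(num_classes=16).to(device)
criterion = nn.CrossEntropyLoss()                                   # cross-entropy loss
optimizer = optim.Adam(model.parameters(), lr=0.001)                # Adam, lr = 0.001
scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.8)  # decay rate 0.8 (assumed per epoch)

for epoch in range(20):  # 20 epochs
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()
```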
Precision, recall, and F1 score (%): as reported in [4], after retraining without a subject-based split (i.e., not split w.r.t. humans), and after retraining with a subject-based split (w.r.t. humans).

Architecture | Prec. [4] | Rec. [4] | F1 [4] | Prec. (no subject split) | Rec. (no subject split) | F1 (no subject split) | Prec. (subject split) | Rec. (subject split) | F1 (subject split) |
---|---|---|---|---|---|---|---|---|---|
ShuffleNet | 99.4 | 99.4 | 99.4 | 94.5 | 94.5 | 94.3 | 20.4 | 20.0 | 19.9 |
VGG19 | 99.8 | 99.7 | 99.7 | 94.6 | 94.4 | 94.4 | 20.5 | 19.9 | 19.9 |
ResNet18 | 99.8 | 99.8 | 99.7 | 88.1 | 88.0 | 88.0 | 15.3 | 14.7 | 14.6 |
ResNet50 | 99.8 | 99.8 | 99.8 | 94.5 | 94.5 | 94.0 | 20.7 | 19.8 | 19.8 |
ImgFi | 99.9 | 99.8 | 99.8 | 99.0 | 99.0 | 98.9 | 23.4 | 22.8 | 22.0 |
Precision, recall, and F1 score (%) for the second set of experiments, with the same column grouping as above.

Architecture | Prec. [4] | Rec. [4] | F1 [4] | Prec. (no subject split) | Rec. (no subject split) | F1 (no subject split) | Prec. (subject split) | Rec. (subject split) | F1 (subject split) |
---|---|---|---|---|---|---|---|---|---|
ShuffleNet | 99.3 | 99.3 | 99.3 | 99.1 | 99.1 | 99.1 | 40.7 | 39.6 | 39.5 |
VGG19 | 99.8 | 99.7 | 99.6 | 99.7 | 99.7 | 99.6 | 41.0 | 39.5 | 39.8 |
ResNet18 | 99.8 | 99.8 | 99.7 | 97.9 | 97.9 | 97.9 | 30.3 | 29.3 | 29.2 |
ResNet50 | 99.8 | 99.8 | 99.8 | 99.3 | 99.2 | 99.2 | 41.4 | 39.6 | 39.4 |
ImgFi | 99.8 | 99.8 | 99.8 | 99.5 | 99.5 | 99.5 | 47.4 | 45.6 | 43.9 |