Optical Sensor-Based Approaches in Obesity Detection: A Literature Review of Gait Analysis, Pose Estimation, and Human Voxel Modeling
Abstract
Highlights
- This review examines optical and vision-based sensors, including pose estimation frameworks (OpenPose, MediaPipe), infrared depth sensing, and 3D body modeling, for non-contact obesity detection through gait and posture analysis.
- It highlights AI-driven, real-time capabilities and addresses challenges such as measurement accuracy, environmental factors, scalability, and ethical concerns (privacy, consent, algorithmic bias). Hybrid sensor approaches are proposed to improve robustness.
- The findings show the strong potential of AI-driven, contactless sensors to improve obesity screening and personalized monitoring beyond traditional static methods.
- Successful translation into practice requires addressing technical and ethical issues to ensure equitable, reliable, and scalable adoption in healthcare and public health.
1. Introduction
1.1. Background and Motivation
1.2. Gait as a Diagnostic Tool
1.3. Shift in Technology: Toward Optical and Computational Sensing
- Non-invasiveness: No physical contact or markers required, increasing user comfort and compliance.
- Scalability: Portable and low-cost systems enable deployment in diverse settings, from clinics to homes and schools.
- Automation: AI-driven pipelines facilitate rapid, objective assessment, reducing operator dependency and human error.
- Personalization: Continuous monitoring allows for individualized feedback and early intervention.
1.4. Scope and Objectives of the Review
- Optical gait analysis systems that derive spatiotemporal and kinematic metrics from video or depth data.
- Vision-based pose estimation frameworks that infer body mechanics from 2D/3D skeletal reconstructions.
- Three-dimensional voxel modeling techniques that provide volumetric insights into posture and body shape relevant to obesity diagnosis.
- Provide an accessible overview of state-of-the-art methodologies and comparative system performance.
- Discuss validation, accessibility, and ethical considerations in deploying these technologies.
- Highlight both the current potential and limitations of optical sensor-based systems.
- Identify opportunities for future research and clinical translation.
2. Review Methodology
- How has the landscape of optical sensor technology for obesity detection evolved since 2000?
- What are the comparative advantages of different optical sensing modalities for obesity assessment?
- What methodological challenges exist in validating these technologies across diverse populations?
- How do optical sensor approaches compare with traditional obesity assessment methods?
2.1. Search Strategy and Information Sources
- Initial Screening: Two independent reviewers screened titles and abstracts against the inclusion/exclusion criteria using Rayyan software to manage the screening process. Disagreements were resolved through discussion or by consulting a third reviewer when necessary.
- Full-Text Assessment: Full texts of potentially eligible studies were retrieved and independently assessed by two reviewers. A standardized form was used to document reasons for exclusion.
- Final Selection: The final set of included studies was determined after resolving all disagreements through consensus meetings.
2.2. Inclusion and Exclusion Criteria
2.3. Study Selection Process
2.4. Quality Assessment and Risk of Bias
- The Joanna Briggs Institute (JBI) Critical Appraisal Tools.
- The Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2).
- Additional technical criteria specific to optical sensing technologies.
- High quality: 20–24 points.
- Moderate quality: 14–19 points.
- Low quality: <14 points.
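As an illustration, the banding above maps directly to a small scoring function. This is a hypothetical sketch; the function name is illustrative, and only the 24-point scale and cut-offs come from the appraisal scheme described here.

```python
def quality_band(score, max_score=24):
    """Map a 0-24 critical-appraisal score to the review's quality bands."""
    if not 0 <= score <= max_score:
        raise ValueError("score outside the 0-24 appraisal scale")
    if score >= 20:        # High quality: 20-24 points
        return "high"
    if score >= 14:        # Moderate quality: 14-19 points
        return "moderate"
    return "low"           # Low quality: <14 points
```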
2.5. Characteristics of Included Studies
2.6. Chronological Evolution Analysis
2.7. Review Structure
2.7.1. Primary Organization by Technology
- Light barrier technologies (e.g., OptoGait);
- Pressure-sensitive walkways (e.g., GAITRite);
- Video-based markerless systems;
- Multi-camera setups.
- 2D pose estimation approaches (e.g., OpenPose, MediaPipe);
- 3D pose reconstruction methods;
- Deep learning architectures (e.g., CNNs, transformers);
- Multi-person tracking systems.
- Structured light systems (e.g., first-generation Kinect);
- Time-of-flight sensors (e.g., Azure Kinect, RealSense);
- 3D body composition analysis;
- Dynamic modeling approaches.
- Sensor fusion architectures;
- Combined optical-inertial systems;
- Multi-view integration approaches;
- Ensemble methods.
2.7.2. Secondary Organization by Application Focus
- Spatiotemporal gait parameters;
- Joint angles and ranges of motion;
- Center of mass trajectories;
- Dynamic stability metrics.
- Body volume estimation;
- Circumference measurements;
- Body shape analysis;
- Segmental proportions.
- Algorithm development and validation;
- Feature extraction methodologies;
- Classification performance metrics;
- Threshold determination.
- Clinical integration pathways;
- Edge computing implementations;
- Privacy-preserving architectures;
- Real-world deployment considerations.
3. Optical Sensors Technologies for Gait Analysis in Obesity Detection
3.1. Overview of Optical Gait Sensing Technologies for Obesity Detection
3.1.1. Optical Timing Systems
3.1.2. Video-Based Capture (Image Processing)
- Marker-Based Systems: These optical motion capture systems track targeted joints and orientations using reflective markers placed on the body [15]. They use multi-camera stereophotogrammetric video systems to compute the 3D localization of these markers, determining joint positions and body segment orientations [15].
- Markerless Systems: These systems use a human body model and image features to determine shape, pose, and joint orientations without the need for markers [15]. Recent work utilizes computer vision techniques and deep neural networks to extract 2D skeletons from images for gait analysis, even exploring privacy-preserving methods by processing encrypted images [9]. Examples include systems based on single cameras, time-of-flight sensors, stereoscopic vision, structured light, and IR thermography [7].
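To make the markerless pipeline concrete, the NumPy sketch below shows how basic spatiotemporal parameters (step time, cadence) could be derived from a single tracked keypoint, assuming the heel's vertical trajectory has already been extracted from video at a known frame rate. Function names and the minima-based event rule are illustrative, not taken from any cited system.

```python
import numpy as np

def detect_heel_strikes(heel_y, fps):
    """Detect heel-strike frames as local minima of the heel's vertical
    trajectory (assumes the signal was oriented so ground contact is a
    minimum). Returns the frame indices of detected strikes."""
    strikes = [i for i in range(1, len(heel_y) - 1)
               if heel_y[i] < heel_y[i - 1] and heel_y[i] <= heel_y[i + 1]]
    return np.array(strikes)

def spatiotemporal_params(heel_y, fps):
    """Step times (s) between consecutive strikes and cadence (steps/min)."""
    strikes = detect_heel_strikes(heel_y, fps)
    step_times = np.diff(strikes) / fps
    cadence = 60.0 / step_times.mean()
    return step_times, cadence
```

In practice the keypoint stream would come from a pose estimator such as OpenPose or MediaPipe, with filtering before event detection.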
3.2. Applications in Obesity Context: Identified Biomarkers
3.2.1. Spatiotemporal Parameters
3.2.2. Kinematics
3.2.3. Kinetics
3.3. Technical Advantages and Limitations
3.3.1. Precision vs. Portability Trade-Offs
3.3.2. Environmental Dependencies, Calibration Needs, and Other Factors
- Controlled Environment: Optical non-wearable systems (NWS) require controlled research facilities. Subjects must walk on a clearly marked walkway [7].
- Calibration: Both optical sensors and camera systems require calibration. For instance, stereoscopic vision systems involve complex calibration, and structured light systems also require calibration [7]. While the sources do not detail the specific calibration requirements for obese subjects, increased body size or altered gait patterns could potentially influence calibration procedures or accuracy.
- Subject-Specific Variance: While not unique to optical systems, individual variations in gait patterns are inherent. In the context of obesity, larger body mass significantly affects biomechanics and gait patterns [5,7,11]. Accurately capturing these subject-specific variations requires robust measurement techniques. Image processing systems that track body segments or skeletons may need to account for differences in body shape and soft tissue movement in obese individuals [9].
3.3.3. Limitations Specific to Optical Sensor-Based Gait Analysis Systems
3.4. Analytical Models for Human Motion Capture, Gait Analysis, and Obesity Detection
3.4.1. Time-Series Analytical Frameworks
3.4.2. Deep Learning with Convolutional Neural Networks (CNNs)
- Markerless Motion Capture
- CNNs are central to markerless motion capture, which estimates human pose from images or videos without physical markers [12,19,31]. Advanced frameworks such as DeepLabCut employ deep residual networks (e.g., ResNet-50) for precise localization of anatomical landmarks in video frames [12]. Similarly, DeeperCut, leveraging fully convolutional ResNet architectures, enhances multi-part detection robustness through expanded receptive fields [32,33]. OpenPose is another deep learning-based method for 2D pose estimation using part affinity fields [19,34,35].
- 3D Human Reconstruction
- Thermal Imaging
- Both custom and pre-trained CNNs (e.g., VGG16, ResNet, DenseNet) are deployed to classify thermal images of anatomical regions such as the abdomen, forearm, and shank, discriminating between obese and non-obese phenotypes by identifying patterns associated with brown adipose tissue activity [31].
- Gait Analysis with Smartphone Sensors
- Although outside the scope of this review, it is worth noting that one-dimensional CNNs (1D CNNs), designed to handle 1D signals such as smartphone accelerometer and gyroscope streams, have been used to classify individuals as normal or overweight/obese based on distinctive gait signatures [6].
- Abnormal Gait Detection
- CNNs are also utilized to distinguish between normal and pathological gait by analyzing 2D skeletal representations extracted from video sequences [9].
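CNN-based pose estimators of the kind listed above typically output one confidence heatmap per joint; a common decoding step is a soft-argmax that turns the map into an (x, y) coordinate. Below is a minimal NumPy sketch of that decoding step only, not the implementation of any specific framework named here.

```python
import numpy as np

def soft_argmax_2d(heatmap):
    """Decode a joint coordinate from a per-joint confidence heatmap:
    softmax-normalize the map, then take the expected (x, y) position.
    Unlike a hard argmax, this is differentiable and sub-pixel accurate."""
    h, w = heatmap.shape
    probs = np.exp(heatmap - heatmap.max())  # numerically stable softmax
    probs /= probs.sum()
    ys, xs = np.mgrid[0:h, 0:w]              # row and column index grids
    return float((probs * xs).sum()), float((probs * ys).sum())
```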
3.4.3. Autoencoders and Generative Modeling
- 3D Shape Reconstruction:
- Abnormal Gait Analysis:
- Long Short-Term Memory (LSTM) autoencoders are implemented for anomaly detection, identifying deviations from normative gait patterns in daily activities [9].
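At inference time, the autoencoder approach reduces to thresholding reconstruction error against normative data. The sketch below shows only that final step, assuming per-cycle reconstruction errors have already been produced by a trained model; the function names and the mean-plus-k-sigma rule are illustrative assumptions.

```python
import numpy as np

def fit_threshold(normal_errors, k=3.0):
    """Learn an anomaly threshold from reconstruction errors measured on
    normative gait cycles: mean + k standard deviations."""
    return normal_errors.mean() + k * normal_errors.std()

def flag_anomalies(errors, threshold):
    """A gait cycle is flagged as deviating from normative patterns when
    its reconstruction error exceeds the learned threshold."""
    return errors > threshold
```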
3.4.4. Traditional and Hybrid Analytical Approaches
- CNN–LSTM Architectures:
- These combine the spatial feature extraction capabilities of CNNs with the temporal modeling strengths of LSTMs, offering enhanced performance for sequential gait data in obesity identification [6].
- RNN–CNN Networks:
- Hybrid architectures that integrate recurrent and convolutional layers are utilized for abnormal gait detection, leveraging multimodal data such as 3D skeletal trajectories and plantar pressure distributions [9].
3.4.5. Statistical Modeling and Validation Techniques
4. Markerless Video-Based Pose Estimation Technologies
4.1. Key Algorithms and Platforms
4.1.1. OpenPose
4.1.2. MediaPipe
4.1.3. DeepLabCut
4.2. Validation and Accuracy
4.2.1. Comparison with Gold Standard Systems
4.2.2. Comparison with IMU Systems
4.2.3. Body Morphology Effects on Detection
4.3. Obesity-Related Gait Signatures
4.3.1. Technical Challenges
4.3.2. Biomechanical Alterations
4.3.3. Clinical Applications
4.4. Depth and Hybrid Systems
4.4.1. RGB-D Framework
4.4.2. Accuracy Improvements
4.4.3. Real-World Applications
5. Human Voxel Modeling and Anthropometric Estimation
5.1. Three-Dimensional Body Reconstruction Using Depth Sensors
5.1.1. Depth Sensing Technologies
5.1.2. Voxel-Based Representation: Principles, Algorithms, and Metrics
5.1.3. Single-View vs. Multi-View Reconstruction
5.1.4. Statistical Parametric Body Models (SCAPE and SMPL)
5.2. Applications in Body Composition Analysis
5.2.1. Anthropometric Measurement Extraction
5.2.2. Waist-to-Hip Ratio and Volumetric Indices
5.2.3. Shape Descriptors and Curvature Analysis
5.2.4. Comparison with Traditional Methods
5.3. Gait Integration Possibilities
5.3.1. Morphology-Locomotion Relationships
5.3.2. Biomechanical Analysis and Clinical Applications
5.3.3. Longitudinal Monitoring and Intervention Assessment
5.4. Practical Limitations and Deployment Constraints
5.4.1. Segmentation Errors and Depth Artifacts
5.4.2. Resolution and Surface Quality Limitations
5.4.3. Posture Variability and Subject Positioning
5.4.4. Clothing and Surface Appearance Effects
5.4.5. Accuracy Compared to Gold Standards
- Improving voxel reconstruction fidelity via better sensors and algorithms;
- Adapting models for robust real-world deployment (clothing, motion, lighting);
- Validating outcomes against reference techniques in diverse populations.
6. Hybrid Systems and Sensor Fusion Strategies for Obesity Detection
6.1. Multimodal Sensor Fusion System Architectures
6.1.1. Integration of Optical and Depth Sensing Technologies
6.1.2. Fusion of Inertial and Optical Sensors
6.1.3. Thermal Imaging Integration for Multimodal Assessment
- Early fusion: Feature-level integration that combines raw or low-level features from multiple sensors before processing;
- Late fusion: Decision-level integration that combines independently processed data from each sensor at the decision stage;
- Hybrid fusion: Combinations of early and late fusion approaches that leverage the strengths of each method.
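The first two fusion levels can be sketched in a few lines, assuming each sensor stream has already been reduced to per-sample feature vectors or class probabilities. This is a hypothetical NumPy illustration of the general pattern, not a specific published architecture; hybrid fusion would combine both paths.

```python
import numpy as np

def early_fusion(optical_feats, inertial_feats):
    """Feature-level (early) fusion: concatenate per-sample feature
    vectors from both sensors before a single classifier sees them."""
    return np.concatenate([optical_feats, inertial_feats], axis=-1)

def late_fusion(prob_optical, prob_inertial, w=0.5):
    """Decision-level (late) fusion: weighted average of class
    probabilities from two independently trained per-sensor models."""
    return w * prob_optical + (1 - w) * prob_inertial
```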
6.1.4. Advanced Data Integration Frameworks
6.2. Federated Learning and Data Privacy
- Personal Health Information Protection: Gait patterns constitute protected health information under regulations like HIPAA and GDPR, necessitating stringent data handling protocols.
- Identification Risk: Gait is a behavioral biometric that can uniquely identify individuals, creating potential for unauthorized tracking or identification if data is compromised.
- Stigmatization Concerns: Data relating to obesity carries social stigma risks, making privacy preservation particularly important for patient dignity and acceptance of monitoring technologies.
- Longitudinal Data Vulnerabilities: Continuous monitoring of gait for obesity management generates extensive personal datasets that, if centralized, create attractive targets for data breaches.
6.2.1. Comparative Analysis of FL Algorithms for Obesity Detection
- Federated Averaging (FedAvg): The most fundamental FL algorithm works by averaging model updates received from multiple clients before updating the global model. FedAvg performs adequately in homogeneous environments where gait data distributions are similar across users. It offers the advantage of minimizing communication overhead (8.5 MB), making it suitable for resource-constrained devices. However, FedAvg struggles with convergence in heterogeneous settings where gait patterns vary significantly across users with different degrees of obesity [52,53].
- Federated Proximal (FedProx): This extension of FedAvg addresses statistical heterogeneity in federated learning by introducing a proximal term that restricts local model updates, preventing destabilizing changes. We believe that FedProx is particularly valuable for gait-based obesity detection, where individual users may have unique walking patterns influenced by varying fat distribution, compensatory mechanisms, and comorbidities. By reducing client drift, FedProx ensures more stable learning across diverse populations [52,54].
- SCAFFOLD (Stochastic Controlled Averaging for Federated Learning): This advanced algorithm improves upon both FedAvg and FedProx by incorporating variance reduction techniques. SCAFFOLD corrects for client drift by maintaining control variates that align local model updates with the global model’s direction. Comparative studies show SCAFFOLD achieves the highest accuracy (89.1%) and fastest convergence (70 rounds) among FL algorithms for gait analysis. It also demonstrates superior privacy preservation (0.9 privacy score) and explainability (79.4), making it particularly suitable for obesity detection systems that must balance performance with interpretability for clinical use [52].
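The core of FedAvg described above is a sample-count-weighted average of client model parameters. A minimal single-round sketch, assuming each client's parameters arrive as a flat NumPy array (illustrative only; no communication layer, client selection, or local training loop):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation round: the global model is the average of
    client models, weighted by each client's local sample count."""
    total = sum(client_sizes)
    global_w = np.zeros_like(client_weights[0], dtype=float)
    for w, n in zip(client_weights, client_sizes):
        global_w += (n / total) * w
    return global_w
```

FedProx modifies each client's local objective with a proximal penalty before this step; the server-side aggregation itself is unchanged, while SCAFFOLD additionally exchanges control variates to correct client drift.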
6.2.2. On-Device Learning for Mobile Obesity Screening
- Maximum Privacy Protection: Raw gait data never leave the device, addressing concerns about collection and storage of sensitive biometric information.
- Real-Time Assessment: Models can provide immediate feedback on obesity-related gait parameters without requiring cloud connectivity, enabling point-of-care applications.
- Personalization with Privacy: Models can adapt to individual walking patterns while still benefiting from population-level insights through federated updates.
- Reduced Infrastructure Requirements: By distributing computational load across user devices, on-device learning reduces the need for centralized server infrastructure.
6.3. Scalable Deployment and Real-Time Systems
6.3.1. Edge Computing Architectures for Real-Time Analysis
6.3.2. School-Based Implementation Strategies
- Non-invasive and respectful of privacy concerns;
- Capable of efficiently screening large numbers of students;
- Simple enough to be operated by school health personnel;
- Affordable within typical school health program budgets.
6.3.3. Clinical Integration Frameworks
- Interoperability with existing electronic health record (EHR) systems;
- Compliance with medical device regulations;
- Integration with established clinical assessment protocols;
- Support for longitudinal patient monitoring.
6.3.4. Telemedicine and Remote Monitoring Solutions
- Standardized capture protocols with real-time guidance;
- Automated quality control to reject unsuitable images;
- Calibration procedures to account for varying camera characteristics;
- Confidence metrics that indicate measurement reliability.
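The automated quality-control step listed above can be as simple as gating frames on pose-detector confidence. A hypothetical sketch (function name and thresholds are illustrative assumptions, not from any cited system):

```python
def frame_is_usable(keypoint_confidences, min_mean=0.6, min_each=0.3):
    """Quality gate for remote capture: reject a frame when the average
    keypoint confidence is low, or when any single keypoint is nearly
    invisible (e.g., occluded limb), so downstream metrics stay reliable."""
    mean_conf = sum(keypoint_confidences) / len(keypoint_confidences)
    return mean_conf >= min_mean and min(keypoint_confidences) >= min_each
```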
6.4. Ethical Considerations in Deploying Gait and Body Modeling Technologies for Obesity Detection
7. Future Directions and Research Opportunities for Obesity Detection Based on Gait Analysis
7.1. Toward Portable, AI-Enabled Obesity Detection
7.2. Standardized Protocols and Open Datasets
- Unified spatiotemporal parameter definitions;
- Standardized BMI classification thresholds;
- Age- and sex-specific normative ranges.
7.3. Wearable and Optical Sensor Integration
7.4. Personalization with Digital Twins
- Biomechanical body composition profiles;
- Muscle activation patterns;
- Joint loading characteristics.
8. Conclusions
- Expanding validation across diverse and pediatric populations.
- Developing and adopting standardized benchmarking protocols.
- Ensuring transparency, explainability, and fairness in AI-driven analytics.
- Integrating optical sensors with wearable and mobile health technologies for holistic, continuous monitoring.
- Addressing ethical, privacy, and data governance challenges through robust frameworks.
Author Contributions
Funding
Conflicts of Interest
References
- One in Eight People Are Now Living with Obesity. Available online: https://www.who.int/news/item/01-03-2024-one-in-eight-people-are-now-living-with-obesity (accessed on 18 April 2025).
- Obesity and Overweight. Available online: https://www.who.int/news-room/fact-sheets/detail/obesity-and-overweight (accessed on 18 April 2025).
- World Obesity Federation. World Obesity Atlas 2023; Lobstein, T., Jackson-Leach, R., Powis, J., Brinsden, H., Gray, M., Eds.; World Obesity Federation: London, UK, 2023; Available online: https://data.worldobesity.org/publications/?cat=19 (accessed on 18 April 2025).
- World Heart Federation. Obesity What We Do; World Heart Federation: Geneva, Switzerland, 2025.
- Koinis, L.; Maharaj, M.; Natarajan, P.; Fonseka, R.D.; Fernando, V.; Mobbs, R.J. Exploring the Influence of BMI on Gait Metrics: A Comprehensive Analysis of Spatiotemporal Parameters and Stability Indicators. Sensors 2024, 24, 6484.
- Degbey, G.-S.; Hwang, E.; Park, J.; Lee, S. Deep Learning-Based Obesity Identification System for Young Adults Using Smartphone Inertial Measurements. Int. J. Environ. Res. Public Health 2024, 21, 1178.
- Muro-de-la-Herran, A.; Garcia-Zapirain, B.; Mendez-Zorrilla, A. Gait Analysis Methods: An Overview of Wearable and Non-Wearable Systems, Highlighting Clinical Applications. Sensors 2014, 14, 3362–3394.
- Carbajales-Lopez, J.; Becerro-de-Bengoa-Vallejo, R.; Losa-Iglesias, M.E.; Casado-Hernández, I.; Benito-De Pedro, M.; Rodríguez-Sanz, D.; Calvo-Lobo, C.; San Antolín, M. The OptoGait Motion Analysis System for Clinical Assessment of 2D Spatio-Temporal Gait Parameters in Young Adults: A Reliability and Repeatability Observational Study. Appl. Sci. 2020, 10, 3726.
- Naz, A.; Prasad, P.S.; McCall, S.; Leung, C.C.; Ochi, I.; Gong, L.; Yu, M. Privacy-Preserving Abnormal Gait Detection Using Computer Vision and Machine Learning. EAI Endorsed Trans. Pervasive Health Technol. 2025, 11, 9094. Available online: https://publications.eai.eu/index.php/phat/article/view/9094 (accessed on 18 July 2025).
- Desrochers, P.C.; Kim, D.; Keegan, L.; Gill, S.V. Association between the Functional Gait Assessment and Spatiotemporal Gait Parameters in Individuals with Obesity Compared to Normal Weight Controls: A Proof-of-Concept Study. J. Musculoskelet. Neuronal. Interact. 2021, 21, 335–342.
- Popescu, C.; Matei, D.; Amzolini, A.M.; Trăistaru, M.R. Comprehensive Gait Analysis and Kinetic Intervention for Overweight and Obese Children. Children 2025, 12, 122.
- From Marker to Markerless: Validating DeepLabCut for 2D Sagittal Plane Gait Analysis in Adults and Newly Walking Toddlers. Available online: https://www.sciencedirect.com/science/article/pii/S0021929025002209 (accessed on 1 May 2025).
- Albert, J.A.; Owolabi, V.; Gebel, A.; Brahms, C.M.; Granacher, U.; Arnrich, B. Evaluation of the Pose Tracking Performance of the Azure Kinect and Kinect v2 for Gait Analysis in Comparison with a Gold Standard: A Pilot Study. Sensors 2020, 20, 5104.
- Nakano, N.; Sakura, T.; Ueda, K.; Omura, L.; Kimura, A.; Iino, Y.; Fukashiro, S.; Yoshioka, S. Evaluation of 3D Markerless Motion Capture Accuracy Using OpenPose With Multiple Video Cameras. Front. Sports Act. Living 2020, 2, 50.
- Das, R.; Paul, S.; Mourya, G.K.; Kumar, N.; Hussain, M. Recent Trends and Practices Toward Assessment and Rehabilitation of Neurodegenerative Disorders: Insights From Human Gait. Front. Neurosci. 2022, 16, 859298.
- Tao, W.; Liu, T.; Zheng, R.; Feng, H. Gait Analysis Using Wearable Sensors. Sensors 2012, 12, 2255–2283.
- Prisco, G.; Pirozzi, M.A.; Santone, A.; Esposito, F.; Cesarelli, M.; Amato, F.; Donisi, L. Validity of Wearable Inertial Sensors for Gait Analysis: A Systematic Review. Diagnostics 2024, 15, 36.
- Apovian, C.M. Obesity: Definition, Comorbidities, Causes, and Burden. Am. J. Manag. Care 2016, 22, s176–s185.
- Needham, L.; Evans, M.; Wade, L.; Cosker, D.P.; McGuigan, M.P.; Bilzon, J.L.; Colyer, S.L. The Development and Evaluation of a Fully Automated Markerless Motion Capture Workflow. J. Biomech. 2022, 144, 111338.
- Monfrini, R.; Rossetto, G.; Scalona, E.; Galli, M.; Cimolin, V.; Lopomo, N.F. Technological Solutions for Human Movement Analysis in Obese Subjects: A Systematic Review. Sensors 2023, 23, 3175.
- Bersamira, J.N.; De Chavez, R.J.A.; Salgado, D.D.S.; Sumilang, M.M.C.; Valles, E.R.; Roxas, E.A.; dela Cruz, A.R. Human Gait Kinematic Estimation Based on Joint Data Acquisition and Analysis from IMU and Depth-Sensing Camera. In Proceedings of the 2019 IEEE 11th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM), Laoag, Philippines, 29 November–1 December 2019; pp. 1–6.
- Li, Z.; Oskarsson, M.; Heyden, A. Detailed 3D Human Body Reconstruction from Multi-View Images Combining Voxel Super-Resolution and Learned Implicit Representation. Appl. Intell. 2022, 52, 6739–6759.
- Zhang, C.; Greve, C.; Verkerke, G.J.; Roossien, C.C.; Houdijk, H.; Hijmans, J.M. Pilot Validation Study of Inertial Measurement Units and Markerless Methods for 3D Neck and Trunk Kinematics during a Simulated Surgery Task. Sensors 2022, 22, 8342.
- Lee, J.-Y.; Kwon, K.; Kim, C.; Youm, S. Development of a Non-Contact Sensor System for Converting 2D Images into 3D Body Data: A Deep Learning Approach to Monitor Obesity and Body Shape in Individuals in Their 20s and 30s. Sensors 2024, 24, 270.
- Zafra-Palma, J.; Marín-Jiménez, N.; Castro-Piñero, J.; Cuenca-García, M.; Muñoz-Salinas, R.; Marín-Jiménez, M.J. Health & Gait: A Dataset for Gait-Based Analysis. Sci. Data 2025, 12, 44.
- MediaPipe Pose Estimation Models. Dataloop. Available online: https://dataloop.ai/library/model/qualcomm_mediapipe-pose-estimation/ (accessed on 1 May 2025).
- Ahmed, U.; Ali, M.F.; Javed, K.; Babri, H.A. Predicting Physiological Developments from Human Gait Using Smartphone Sensor Data. arXiv 2017, arXiv:1712.07958.
- Alexa, A. Assessment of Kinect-Based Gait Analysis for Healthcare Applications. Master's Thesis, Radboud University Nijmegen, Nijmegen, The Netherlands, 2016.
- Chiu, C.Y.; Thelwell, M.; Senior, T.; Choppin, S.; Hart, J.; Wheat, J. Comparison of Depth Cameras for Three-Dimensional Reconstruction in Medicine. Proc. Inst. Mech. Eng. Part H J. Eng. Med. 2019, 233, 938–947.
- Siena, F.L.; Byrom, B.; Watts, P.; Breedon, P. Utilising the Intel RealSense Camera for Measuring Health Outcomes in Clinical Research. J. Med. Syst. 2018, 42, 53.
- Computer Aided Diagnosis of Obesity Based on Thermal Imaging Using Various Convolutional Neural Networks. Available online: https://www.sciencedirect.com/science/article/abs/pii/S1746809420303633 (accessed on 4 May 2025).
- Insafutdinov, E.; Pishchulin, L.; Andres, B.; Andriluka, M.; Schiele, B. DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model. In Proceedings of the Computer Vision–ECCV 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 34–50.
- Mathis, A.; Mamidanna, P.; Cury, K.M.; Abe, T.; Murthy, V.N.; Mathis, M.W.; Bethge, M. DeepLabCut: Markerless Pose Estimation of User-Defined Body Parts with Deep Learning. Nat. Neurosci. 2018, 21, 1281–1289.
- Liang, S.; Zhang, Y.; Diao, Y.; Li, G.; Zhao, G. The Reliability and Validity of Gait Analysis System Using 3D Markerless Pose Estimation Algorithms. Front. Bioeng. Biotechnol. 2022, 10, 857975.
- Jiang, H.; Cai, J.; Zheng, J. Skeleton-Aware 3D Human Shape Reconstruction From Point Clouds. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 5430–5440.
- Tsoli, A.; Loper, M.; Black, M.J. Model-Based Anthropometry: Predicting Measurements from 3D Human Scans in Multiple Poses. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Steamboat Springs, CO, USA, 24–26 March 2014; pp. 83–90.
- MediaPipe Pose—DroneVis 1.3.0 Documentation. Available online: https://drone-vis.readthedocs.io/en/latest/pose/mediapipe.html (accessed on 1 May 2025).
- Lauer, J.; Zhou, M.; Ye, S.; Menegas, W.; Nath, T.; Rahman, M.M.; Santo, V.D.; Soberanes, D.; Feng, G.; Murthy, V.N.; et al. Multi-Animal Pose Estimation and Tracking with DeepLabCut. BioRxiv 2022, 19, 496–504.
- Panconi, G.; Grasso, S.; Guarducci, S.; Mucchi, L.; Minciacchi, D.; Bravi, R. DeepLabCut Custom-Trained Model and the Refinement Function for Gait Analysis. Sci. Rep. 2025, 15, 2364.
- Weiss, A.; Hirshberg, D.; Black, M.J. Home 3D Body Scans from Noisy Image and Range Data. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 1951–1958.
- Su, H.; Jampani, V.; Sun, D.; Maji, S.; Kalogerakis, E.; Yang, M.-H.; Kautz, J. SPLATNet: Sparse Lattice Networks for Point Cloud Processing. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2530–2539.
- A Multidomain Approach to Assessing the Convergent and Concurrent Validity of a Mobile Application When Compared to Conventional Methods of Determining Body Composition. Available online: https://www.mdpi.com/1424-8220/20/21/6165 (accessed on 4 May 2025).
- Ruget, A.; Tyler, M.; Mora Martín, G.; Scholes, S.; Zhu, F.; Gyongy, I.; Hearn, B.; McLaughlin, S.; Halimi, A.; Leach, J. Pixels2Pose: Super-Resolution Time-of-Flight Imaging for 3D Pose Estimation. Sci. Adv. 2022, 8, eade0123.
- Laws, J.; Bauernfeind, N.; Cai, Y. Feature Hiding in 3D Human Body Scans. Inf. Vis. 2006, 5, 271–278.
- Cerfoglio, S.; Lopomo, N.F.; Capodaglio, P.; Scalona, E.; Monfrini, R.; Verme, F.; Galli, M.; Cimolin, V. Assessment of an IMU-Based Experimental Set-Up for Upper Limb Motion in Obese Subjects. Sensors 2023, 23, 9264.
- Ergün, U.; Aktepe, E.; Koca, Y.B. Detection of Body Shape Changes in Obesity Monitoring Using Image Processing Techniques. Sci. Rep. 2024, 14, 24178.
- Wong, C.; McKeague, S.; Correa, J.; Liu, J.; Yang, G.-Z. Enhanced Classification of Abnormal Gait Using BSN and Depth. In Proceedings of the 2012 Ninth International Conference on Wearable and Implantable Body Sensor Networks, London, UK, 9–12 May 2012; pp. 166–171.
- Agostini, V.; Gastaldi, L.; Rosso, V.; Knaflitz, M.; Tadano, S. A Wearable Magneto-Inertial System for Gait Analysis (H-Gait): Validation on Normal Weight and Overweight/Obese Young Healthy Adults. Sensors 2017, 17, 2406.
- Monfrini, R.; Cimolin, V.; Galli, M.; Lopomo, N.F. Use of Inertial Sensor System for Upper Limb Motion Analysis in Obese Subjects: Preliminary Setting and Analysis. Gait Posture 2022, 97, S124–S125.
- Bhatnagar, B.L.; Sminchisescu, C.; Theobalt, C.; Pons-Moll, G. Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction. In Proceedings of the Computer Vision–ECCV 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 311–329.
- Zhou, B.; Franco, J.-S.; Bogo, F.; Tekin, B.; Boyer, E. Reconstructing Human Body Mesh from Point Clouds by Adversarial GP Network. In Proceedings of the Computer Vision–ACCV 2020; Ishikawa, H., Liu, C.-L., Pajdla, T., Shi, J., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 123–139.
- Prathusha, D.P.; Aparna, K. A Comparative Analysis of FedAvg, FedProx, and Scaffold in Gait-Based Activity Recognition by Evaluating Accuracy, Privacy, and Explainability. Glob. J. Eng. Innov. Interdiscip. Res. 2025, 5, 16.
- Wang, H.; Yurochkin, M.; Sun, Y.; Papailiopoulos, D.; Khazaeni, Y. Federated Learning with Matched Averaging. arXiv 2020, arXiv:2002.06440.
- Li, T.; Sahu, A.K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; Smith, V. Federated Optimization in Heterogeneous Networks. Proc. Mach. Learn. Syst. 2020, 2, 429–450.
- Wang, T.; Du, Y.; Gong, Y.; Choo, K.-K.R.; Guo, Y. Applications of Federated Learning in Mobile Health: Scoping Review. J. Med. Internet Res. 2023, 25, e43006.
- A Multi-Sensor Wearable System for the Assessment of Diseased Gait in Real-World Conditions. Available online: https://www.frontiersin.org/journals/bioengineering-and-biotechnology/articles/10.3389/fbioe.2023.1143248/full (accessed on 4 May 2025).
- Scataglini, S.; Dellaert, L.; Meeuwssen, L.; Staeljanssens, E.; Truijen, S. The Difference in Gait Pattern between Adults with Obesity and Adults with a Normal Weight, Assessed with 3D-4D Gait Analysis Devices: A Systematic Review and Meta-Analysis. Int. J. Obes. 2025, 49, 541–553.
- Towards a Low Power Wireless Smartshoe System for Gait Analysis in People with Disabilities. Available online: https://www.resna.org/sites/default/files/conference/2015/cac/zerin.html (accessed on 4 May 2025).
Database | Rationale for Inclusion | Field Coverage |
---|---|---|
PubMed/MEDLINE | Core biomedical literature | Medicine, biomechanics, clinical validation |
Scopus | Broad multidisciplinary coverage | Engineering, computer science, healthcare |
IEEE Xplore | Engineering and computing focus | Signal processing, sensor design, algorithms |
ACM Digital Library | Computing research | Computer vision, machine learning |
ScienceDirect | Multidisciplinary science platform | Optical engineering, biomechanics |
Web of Science | Citation tracking capability | Cross-disciplinary research |
Google Scholar | Grey literature and technical reports | Emerging technologies, pre-prints |
Concept | Search Terms |
---|---|
Population | (“obesity”[MeSH Terms] OR obes*[TIAB] OR overweight[TIAB] OR “body composition”[MeSH Terms] OR “body volume”[TIAB] OR “body fat”[TIAB])
Intervention-Technology | (“OptoGait”[TIAB] OR “OpenPose”[TIAB] OR “MediaPipe”[TIAB] OR “DeepLabCut”[TIAB] OR “Azure Kinect”[TIAB] OR “Kinect v2”[TIAB] OR “RGB-D camera”[TIAB] OR “voxel modeling”[TIAB] OR “3D reconstruction”[TIAB] OR “Depth Camera”[MeSH Terms] OR “Motion Capture”[TIAB] OR “markerless motion capture”[TIAB] OR “pose estimation”[TIAB] OR “3D scanning”[TIAB])
Outcomes | (“Gait”[MeSH Terms] OR gait[TIAB] OR “Gait Analysis”[TIAB] OR “spatiotemporal gait”[TIAB] OR “stride length”[TIAB] OR “joint angles”[TIAB] OR “body pose”[TIAB] OR posture[MeSH Terms] OR “body reconstruction”[TIAB] OR anthropometr*[TIAB] OR “body scan”[TIAB])
Study Design | (“Validation Studies as Topic”[MeSH Terms] OR validation[TIAB] OR performance[TIAB] OR accuracy[TIAB] OR “technical evaluation”[TIAB] OR experimental[TIAB] OR observational[TIAB])
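The concept blocks above are combined with AND into a single Boolean query, with synonyms within each block joined by OR. A minimal sketch of that assembly is shown below; the term lists are abbreviated illustrations and the helper names (`or_block`, `build_query`) are ours, not part of the published search strategy.

```python
# Hypothetical helpers that join the review's concept blocks into one
# PubMed-style Boolean query. Term lists below are abbreviated examples.

def or_block(terms):
    """Join synonyms for one concept with OR and wrap in parentheses."""
    return "(" + " OR ".join(terms) + ")"

def build_query(concepts):
    """AND the concept blocks together, as in the search strategy table."""
    return " AND ".join(or_block(t) for t in concepts.values())

concepts = {
    "population": ['"obesity"[MeSH Terms]', "obes*[TIAB]", "overweight[TIAB]"],
    "technology": ['"OpenPose"[TIAB]', '"MediaPipe"[TIAB]', '"pose estimation"[TIAB]'],
    "outcomes": ['"Gait"[MeSH Terms]', '"stride length"[TIAB]'],
}

query = build_query(concepts)
print(query)
```

Adding or refining a concept only requires editing its synonym list, which keeps the strategy auditable and reproducible across databases.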
PICOS Element | Inclusion Criteria | Exclusion Criteria | Rationale |
---|---|---|---|
Population | — | — | To ensure clinical relevance and applicability to human obesity detection while allowing for comparative analyses |
Intervention/Technology | — | — | To focus specifically on non-contact optical sensing methodologies while excluding technologies that do not utilize optical principles |
Comparison | — | — | To ensure methodological rigor and establish validity of the optical approaches being evaluated |
Outcomes | — | — | To focus on clinically relevant parameters that relate to obesity detection and assessment |
Study Design | — | — | To ensure inclusion of high-quality, empirical research with sufficient methodological detail |
Domain | Assessment Criteria | Scoring |
---|---|---|
Study Design | — | 0–3 points |
Participant Selection | — | 0–3 points |
Technical Methodology | — | 0–4 points |
Reference Standard | — | 0–3 points |
Data Analysis | — | 0–4 points |
Results Reporting | — | 0–4 points |
Applicability | — | 0–3 points |
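The per-domain maxima above sum to a 24-point scale. A small helper (names ours) sketches how a study's total quality score could be computed, assuming domain scores are simply summed with equal weighting, which the table implies but does not state explicitly.

```python
# Illustrative quality-scoring helper based on the per-domain maxima
# in the table above. Equal weighting across domains is an assumption.

DOMAIN_MAX = {
    "study_design": 3,
    "participant_selection": 3,
    "technical_methodology": 4,
    "reference_standard": 3,
    "data_analysis": 4,
    "results_reporting": 4,
    "applicability": 3,
}

MAX_TOTAL = sum(DOMAIN_MAX.values())  # 24 points overall

def total_quality(scores):
    """Sum domain scores after checking each stays within its 0..max range."""
    for domain, value in scores.items():
        if not 0 <= value <= DOMAIN_MAX[domain]:
            raise ValueError(f"{domain} score {value} out of range")
    return sum(scores.values())
```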
Study Type | Count |
---|---|
Validation | 15 |
Experimental | 7 |
Modeling | 7 |
Pilot | 6 |
Technical | 4 |
Review | 3 |
System Design | 3 |
Anthropometric | 3 |
Comparative | 2 |
Systematic Review | 2 |
Technical Validation | 1 |
Observational | 1 |
Cross-sectional | 1 |
Dataset | 1 |
Title | Year | Technology Used | Validation Method | Outcome Measures | Obesity-Specific? |
---|---|---|---|---|---|
Exploring the Influence of BMI on Gait Metrics [5] | 2025 | Optical (Depth Camera) | Gait comparison across BMI | Stride, step time, stability | Yes |
Deep Learning-Based Obesity Identification System [6] | 2024 | Smartphone Inertial Sensors | ML-based classification | Obesity detection | Yes |
Gait Analysis Methods Overview [7] | 2014 | Wearable + non-wearable | Narrative methodology comparison | General gait metrics | No |
OptoGait Motion Analysis in Young Adults [8] | 2020 | OptoGait | Intra-rater repeatability | Spatiotemporal gait parameters | No |
Privacy-Preserving Abnormal Gait Detection [9] | 2021 | Computer Vision | Proof-of-concept validation | Anomaly detection | No |
Functional Gait and Obesity Correlation [10] | 2021 | Motion capture system | Obesity vs. control group | Spatiotemporal parameters | Yes |
Comprehensive Gait Analysis in Obese Children [11] | 2025 | Marker-based + camera | Cross-sectional study | Kinetics, gait phases | Yes |
DeepLabCut Markerless Pose for Gait [12] | 2025 | DeepLabCut | Pose estimation accuracy | 2D joint trajectories | Yes |
Pose Tracking with Azure Kinect [13] | 2020 | Azure Kinect | Comparison with gold standard | Joint angles, stride length | Yes |
3D Markerless Motion with OpenPose [14] | 2020 | OpenPose | Accuracy benchmark | Joint position error | No |
Gait Feature | Obesity-Related Alteration |
---|---|
Step Width | Increased; wider base enhances mediolateral stability |
Step Length | Often increased in adults; variable in children; compensatory for stride control |
Walking Velocity | Decreased; reduced speed reflects cautious gait pattern |
Stance Phase Duration | Prolonged; greater time spent in stable double-limb support |
Double-Limb Support Time | Increased; enhances static balance |
Single-Limb Support Time | Decreased; minimizes demand on each limb |
Swing Phase Duration | Reduced; contributes to shorter single-limb support |
Hip Flexion | Excessive across gait cycle; compensatory for lower limb inertia |
Hip Movement (Frontal Plane) | Increased lateral sway; linked to trunk mass and pelvic instability |
Knee Position (Stance) | Slightly more extended; reduces joint torque and energy demand |
Ankle Position (Initial Contact) | More plantarflexed; altered loading strategy |
Ankle Range of Motion | Increased in adults; decreased dorsiflexion in obese children |
Midfoot Loading | Elevated plantar pressure and contact area; especially midfoot |
Plantar Pressure | Increased peak pressure and force-time integral |
Forefoot Contact Phase | Prolonged, particularly in right foot; altered rollover mechanics |
Heel-Off and Step Duration | Increased; contributes to overall gait cycle elongation |
Hip/Knee/Ankle Moments | Higher joint moments; increased mechanical demand across lower limbs |
Gait Stability | Decreased; reflects dynamic instability and fall risk |
System Type/Feature | Principles and Hardware Configuration | Accuracy | Sensitivity | Applications for Obesity Research | Technical Advantages | Technical Limitations |
---|---|---|---|---|---|---|
Marker-Based Optical Motion Capture (OMC) Systems | — | — | — | — | — | — |
Markerless Motion Capture Systems (General) | — | — | — | — | — | — |
Markerless Motion Capture Systems: Depth Cameras (e.g., Microsoft Kinect, Intel RealSense) | — | — | — | — | — | — |
Photoelectric Cell Systems (e.g., OptoGait) | — | — | — | — | — | — |
Architecture | Input Type | Primary Application |
---|---|---|
1D Convolutional Neural Networks (1D CNNs) | Time-series data (e.g., inertial signals) | Classification and analysis of gait patterns from sensor signals (e.g., accelerometers) |
2D Convolutional Neural Networks (2D CNNs) | Image data | Human pose estimation, body part segmentation, thermal image classification |
3D Convolutional Neural Networks (3D CNNs) | Volumetric data (e.g., voxels, point clouds) | 3D human body reconstruction, voxel super-resolution, volumetric feature learning |
Graph Convolutional Networks (GCNs) | Graph-structured data (e.g., skeletal graphs, point clouds) | Joint dependency modeling, shape estimation, and advanced pose recognition |
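The 1D CNNs in the table rest on a simple operation: a kernel slides along a gait time series (e.g., vertical acceleration) and produces a feature map. The sketch below illustrates that mechanism with one hand-set smoothing kernel and a synthetic stride signal; real systems stack many learned kernels and nonlinearities.

```python
# Minimal sketch of the 1D convolution underlying 1D CNNs for gait
# signals. The kernel and signal are illustrative, not learned.

def conv1d(signal, kernel):
    """Valid-mode 1D convolution (no padding, stride 1)."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# A smoothing kernel applied to a short synthetic stride signal.
signal = [0.0, 1.0, 2.0, 1.0, 0.0, 1.0, 2.0, 1.0]
kernel = [0.25, 0.5, 0.25]
features = conv1d(signal, kernel)
```

Stacking such filtered feature maps, followed by pooling and a classifier head, is what lets 1D CNNs recognise gait patterns directly from raw inertial streams.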
Model | Description | Application Context |
---|---|---|
Decision Tree [9,27] | Rule-based classifier that recursively partitions data using feature thresholds. | Classification of gait characteristics and weight-related categories. |
Multilayer Perceptron (MLP) [24,25] | Feedforward neural network with one or more hidden layers. | Sensor-based gait classification and physiological prediction tasks. |
Support Vector Machine (SVM) [9,16,27] | Separates data classes using optimal hyperplanes in feature space. | Used for BMI estimation and obesity classification from gait features. |
Random Forest [5,9,27] | Ensemble learning technique that aggregates multiple decision trees for improved accuracy. | Recognition of movement patterns and spatiotemporal gait features. |
k-Nearest Neighbor (k-NN) [9,27] | Instance-based learner that classifies based on feature proximity to labeled examples. | Pattern matching in gait signals from wearable sensors. |
Logistic Regression [27] | Statistical model used for binary or multi-class classification tasks. | Body type and gait pattern classification using extracted features. |
Bayesian Regularization Artificial Neural Network (BRANN) [21] | Neural network enhanced with regularization to prevent overfitting. | Sensor fusion and classification in multi-modal obesity detection tasks. |
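To make the instance-based approach in the table concrete, the toy sketch below classifies a gait sample by majority vote among its k nearest neighbours in feature space. The feature vectors (walking speed in m/s, step width in m) and labels are invented for illustration and loosely echo the obesity-related alterations reported earlier (slower speed, wider base).

```python
# Toy k-nearest-neighbour classifier over hand-crafted gait features.
# Training data are synthetic, for illustration only.
import math

def knn_predict(train, query, k=3):
    """Majority vote among the k training samples closest to `query`."""
    dists = sorted(
        (math.dist(features, query), label) for features, label in train
    )
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

train = [
    ([1.40, 0.10], "normal"), ([1.35, 0.11], "normal"), ([1.45, 0.09], "normal"),
    ([1.10, 0.16], "obese"), ([1.05, 0.18], "obese"), ([1.15, 0.17], "obese"),
]
print(knn_predict(train, [1.08, 0.17]))
```

In practice such classifiers operate on many more features (stance durations, joint angles, plantar pressures) and require feature scaling so that no single unit dominates the distance metric.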
Metric/Feature | MediaPipe [26] | OpenPose [14,19,32,34] | DeepLabCut [12,19,33,39] |
---|---|---|---|
Keypoints | 33 real-time 3D keypoints (incl. face, hands, feet) [37] | 25 2D keypoints (Body25 model) | User-defined body parts |
Speed/Efficiency | Inference time: 0.774 ms on Samsung S23 Ultra; model size: 3.14 MB + 12.9 MB | Real-time multi-person 2D pose estimation using a 10-layer VGG19 network | Requires ~200 labeled images to train |
Accuracy | Described as high; no specific RMSE/PCK reported | MAE: — | RMSE: — |
Limitations | Input optimized for 256 × 256 px; performance varies on non-Snapdragon devices | 2D tracking errors: object misidentification, segment confusion | Suboptimal for distal foot keypoints; error propagation in 3D estimation |
PCK | Not reported | Not directly provided | Median test error: 2.69–5.62 px; DeeperCut (from which DeepLabCut is derived) achieved 58.7% Average Precision (MPII Multi-Person dataset) |
mAP (mean Average Precision)/mPCP (mean Percentage of Correct Parts)/AOP (Average Over Parts) | Not reported | Not reported | DeeperCut showed significant improvements in mPCP and AOP on the WAF dataset compared to DeepCut |
MPJPE (Mean Per Joint Position Error) | Not reported | Not reported | Not directly reported, but RMSE and joint errors used as equivalents |
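The PCK metric discussed above counts a predicted keypoint as correct when it lies within a threshold distance of the ground-truth location. The sketch below computes it for a handful of synthetic 2D keypoints; coordinates are in pixels and the 5 px threshold is an arbitrary example (published variants normalise the threshold by head or torso size).

```python
# Sketch of PCK (Percentage of Correct Keypoints) on synthetic data.
import math

def pck(pred, truth, threshold):
    """Fraction of keypoints whose prediction falls within `threshold`."""
    correct = sum(
        1 for p, t in zip(pred, truth) if math.dist(p, t) <= threshold
    )
    return correct / len(truth)

truth = [(100, 200), (150, 210), (120, 300)]
pred = [(102, 201), (158, 214), (121, 303)]
print(pck(pred, truth, threshold=5.0))
```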
Aspect | Findings |
---|---|
Accuracy (MAE < 20 mm) | ~47% of all calculated mean absolute errors |
Accuracy (MAE < 30 mm) | ~80% of errors fall below this threshold |
High Error Rate (MAE > 40 mm) | ~10% of errors |
Primary Cause of High Errors | Failures in OpenPose’s 2D tracking |
Examples of Tracking Failures | — |
Implication for Markerless Systems | Reasonable accuracy for many applications, but limited robustness in diverse conditions |
Comparison to Marker-Based Systems | Can approach similar accuracy, but with notable tracking limitations |
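The error-distribution summary above (share of mean absolute errors under 20 mm, 30 mm, and 40 mm) is straightforward to reproduce from per-joint MAEs. The sketch below uses synthetic MAE values chosen to roughly echo the reported distribution; they are not the study's data.

```python
# Illustrative summary of a per-joint MAE distribution (values in mm
# are synthetic, chosen to approximate the proportions reported above).

def fraction_below(errors, threshold):
    """Share of MAE values strictly below `threshold` (same units)."""
    return sum(e < threshold for e in errors) / len(errors)

maes_mm = [10, 12, 14, 16, 18, 22, 25, 28, 33, 45]
for t in (20, 30, 40):
    print(f"MAE < {t} mm: {fraction_below(maes_mm, t):.0%}")
```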
Methodology | Principles | Key Algorithms/Mechanisms | Volumetric Error/ WHR Estimation Accuracy |
---|---|---|---|
Traditional Voxel-Based Methods | Converts irregular point clouds into 3D volumetric grids; early methods used fixed voxel grids; recent advances use hierarchical and sparse voxel grids [41]. | 3D CNNs over voxel grids; adaptive subdivision for higher surface resolution [41]. | High memory/computation cost limits resolution. No specific error or WHR values; resolution insufficient for fine anatomical detail [41]. |
SPLATNet (SParse LATtice Networks) | Processes point clouds directly without voxelization by projecting onto a sparse lattice [41]. | Bilateral Convolution Layers (BCLs) on a permutohedral lattice; integrates 3D (SPLATNet3D) and 2D (SPLATNet2D-3D) features [41]. | Avoids voxel discretization artifacts and preserves detail [41]. Specific numerical volumetric errors or WHR accuracies are not reported, but its design principles suggest better preservation of surface detail than traditional voxelization [24,41]. |
Voxel Super-Resolution (VSR) + MF-PIFu | VSR is part of a coarse-to-fine methodology for reconstructing detailed 3D human body models from multi-view images. A coarse 3D model is initially estimated (using MF-PIFu), then voxelized into a low-resolution voxel grid. VSR then refines this low-resolution grid by learning an implicit function [22]. | Coarse stage (MF-PIFu): learns a pixel-aligned implicit function based on multi-scale features. Refinement stage (VSR): takes the low-resolution voxel grids as input and refines them using a multi-stage 3D convolutional neural network to extract multi-scale features [22]. | VSR is quantitatively evaluated using metrics such as point-to-surface error (P2S), Chamfer-L2, and intersection over union (IoU) [22]. |
KinectFusion (Voxel-Configurable) | Generates 3D point clouds from consumer depth cameras by converting captured depth data into a 3D volumetric representation where a voxel resolution can be set [29]. | KinectFusion techniques with inputs from depth cameras. The process involves random sample consensus algorithms and density filters for selecting regions of interest, and the iterative closest point (ICP) algorithm for aligning generated point clouds to a reference model [29]. | The resolution of KinectFusion was set to 256 voxels/m, with tests also conducted at 128, 384, and 512 voxels/m to evaluate its effect. Best case 3D error: 2.0 mm (ToF vs. stereo). This error can lead to ~2.0 cm girth deviation. No direct WHR data, but the potential for 2.0 cm variation in girth measurement suggests an influence on body circumference estimations [29]. |
PointGAN and 3D-R2N2 [24] | Generate 3D bodies from 2D inputs using deep generative models. | Point-based GAN and recurrent CNNs over voxel grids. | Severe limitations beyond 64³ voxels; 10× increase in training time; cannot derive circumferences or WHR and were excluded from some studies, implying they were practically unusable or highly inaccurate for detailed body measurements such as WHR [24]. |
LS3D (Photonic Scanning) [42] | Uses 3D photonic scanner to generate mesh. Circumferential measurements are then computed by defining 2D planes that intersect this 3D model. While not a voxel method, it deals with 3D shape reconstruction for body measurements. | Polygonal mesh from scan → calculate linear distances → circumference calculated by summing distances along slices. Proprietary algorithms then use these measurements to output body circumferences, WHR, and body fat percentage (BF%). | WHR: r = 0.81 with Gulick tape, LoA = ±0.06, 87.1% within RCI = 0.04. Poor for absolute waist/hip values, good for ratio. Volumetric error not specified. |
Model-Based Anthropometry [36] | Fits deformable 3D body model to scan, predicts (using regularized linear regression) measurements using shape features. While not strictly voxel-based, it directly relates to 3D shape reconstruction and measurement. | Registers scan to parametric shape model → computes local/global features → regularized regression. | The method’s accuracy is evaluated using the Mean Absolute Difference (MAD) and Average Mean Absolute Difference (AMAD): AMAD ≈ 1 cm (1.2–1.3 × ANSUR error). 10–15% lower error than commercial tools. WHR not isolated but circumferences included in overall error metrics. |
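The traditional voxel-based methods in the table all begin with the same step: binning an irregular point cloud into a regular occupancy grid. The sketch below shows that step in its simplest form; `voxel_size` is the cubic cell edge in metres, and the points are synthetic. Real pipelines then run 3D CNNs over such grids, which is exactly where the memory cost noted above comes from (doubling resolution multiplies cell count by eight).

```python
# Minimal occupancy-grid voxelization of a point cloud: each point is
# binned into a cubic cell, and a cell is "occupied" if it contains at
# least one point. Points and voxel size are illustrative.

def voxelize(points, voxel_size):
    """Map 3D points to the set of occupied voxel indices."""
    return {
        tuple(int(c // voxel_size) for c in p)
        for p in points
    }

points = [(0.01, 0.02, 0.03), (0.02, 0.01, 0.04), (0.25, 0.25, 0.25)]
occupied = voxelize(points, voxel_size=0.1)
print(len(occupied))  # two occupied cells for these three points
```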
Attribute | Gait Analysis | Pose Estimation | Human Voxel Modeling |
---|---|---|---|
Typical setup | Marker-based optical motion capture (OMC) and IMUs. | Markerless (RGB, RGB-D, depth cameras). | 3D reconstruction from depth cameras/images. |
Measurable Markers | Reflective markers (e.g., 39–65) [13,19] or IMUs placed on body segments [17,45]. Force plates, pressure insoles [6]. | Keypoints (e.g., joints) extracted via deep learning; 2D/3D skeletons [19]. | 3D point clouds [29], voxel models, anthropometric landmarks [36]. |
Accuracy (Typical Error) | OMC: high (0.15–2 mm position error) [13]. IMU: good agreement for kinematic measures (ICC 0.80–0.97), but spatiotemporal measures are less consistent [17]. | Mean differences for lower limb joint angles in [19]: 0.1–10.5° for hip (3 DoF), 0.7–3.9° for knee (1 DoF) and ankle (2 DoF). RMSE for neck and trunk kinematics in [23]: 5.5–8.7°. MAE: ~20–30 mm (OpenPose) [14]. DeepLabCut ICC: 0.60–0.75+ [12]. Kinect: excellent for all joints in the anteroposterior (AP) direction (Pearson r ~ 0.98–0.99) but poor in the vertical (V) direction, especially for foot markers with Kinect v2 (r = 0.11–0.01) [13]. | 3D point cloud error: ~2 mm; the nominal accuracy (point-to-point difference) of 3D scanning is around 0.2 mm [29]. Anthropometric AMAD: ~10 mm [36]. Waist-to-hip ratio LoA: ±0.06; body-fat %: high agreement with BIA [42]. |
Cost | OMC: very high (specialized cameras and force plates). IMU: moderate. | Generally low (consumer-grade cameras). | Low (depth cameras, mobile phones). |
Portability | OMC: Lab-bound. IMU: Portable. | Highly portable. | Moderate to high portability. |
Set-up Complexity | Specialized laboratory setup. | Medium to high. Requires multi-camera calibration for 3D. Custom training may improve performance. | Medium. May require manual annotation, sufficient space for clear view volume, or assistance |
Validation Status | OMC: Gold standard. IMU: Validated vs. OMC but inconsistencies exist. | Extensively validated vs. OMC. Promising agreement but needs further testing. | Validated vs. 3D scanners, BIA, DXA. Anthropometric validity still under refinement. |
Key Applications | Clinical gait assessment, Rehabilitation, Obesity-related movement studies. | Movement analysis, Joint kinematics, Gait parameters, Home-based monitoring. | Anthropometric measurement, BF% estimation, Body shape modeling. |
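Several of the agreement figures in the table (e.g., the waist-to-hip ratio limits of agreement of ±0.06) come from Bland-Altman analysis of paired measurements. The sketch below computes such limits for hypothetical scan-derived versus tape-measured WHR values; the paired numbers are synthetic and the function name is ours.

```python
# Bland-Altman limits of agreement (mean difference ± 1.96 SD of the
# paired differences) for synthetic scan-vs-tape WHR measurements.
import statistics

def limits_of_agreement(a, b):
    """Return (lower, upper) Bland-Altman limits for paired samples."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    spread = 1.96 * statistics.stdev(diffs)
    return bias - spread, bias + spread

scan_whr = [0.85, 0.92, 0.78, 0.88, 0.95]
tape_whr = [0.84, 0.93, 0.80, 0.87, 0.96]
low, high = limits_of_agreement(scan_whr, tape_whr)
```

Narrow limits centred near zero indicate that the optical method can substitute for the manual one; wide limits flag clinically meaningful disagreement even when correlation is high.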
| Strengths | Weaknesses | Opportunities | Threats |
---|---|---|---|---|
Optical Marker-Based Systems | — | — | — | — |
Markerless Pose Estimation Systems | — | — | — | — |
3D Body Scanners (Camera-Based) | — | — | — | — |
Hybrid Optical Approaches | — | — | — | — |
Study/Source | Technical Approach | Privacy Impact | Bias Mitigation/Validation |
---|---|---|---|
Markerless System Development [19] | Multi-camera, open-source, DeepLabCut. | Not explicitly detailed as an impact but raises privacy questions, especially when considering wide deployment | Customizable models, open-source transparency. |
Deep Learning and Thermal Imaging [31] | Near-Infrared (NIR) Spectroscopy and Infrared Thermal Imaging (IRT); deep learning (CNNs, transfer-learning models such as VGG-16/19). | Data not public; informed consent obtained. | Stratified sample; automation for efficiency. |
Depth Cameras for 3D Reconstruction [29] | Kinect/RealSense with KinectFusion. | Captures detailed body shape data. | Controlled objects for validation; repeatability. |
Body Shape Change Detection [46]. | 2D photos, image processing | Reluctance to upload images | Prototype models, call for more data. |
Privacy-Preserving Gait Detection [9] | Encrypted optical system, ML | Focus on identity protection, encrypted skeletons | Privacy-preserving mechanisms. |
Body Fat from 3D Kinect Scans [28]. | Kinect v2, depth-maps, regression | Home use, self-scan; less manual processing | Acknowledges info loss, error awareness |
Gait Assessment in Individuals with Obesity [10]. | GAITRite, Matlab scripts | Not addressed | Confirms group differences, small sample size |
Influence of BMI on Gait Metrics (Systematic Review) [5]. | IMUs, AI algorithms (Random Forest, SVM) | Informed consent; Data contained within the article. | Small obese group, confounders, call for diversity |
Reliability of OptoGait Photoelectric Cell System [8]. | Photoelectric cell system | Informed consent; low privacy impact | Focus on reliability, power analysis |
Depth Camera and IMU Integration [21]. | Depth camera, IMUs, Vicon, BRANN | Not detailed | Alternative to Vicon, acknowledges IMU limits |
Kinetic Program in Obese Children [11]. | BTS G-WALK system (G-SENSOR inertial system, G-Studio software) for gait parameters and pelvic kinematics | High priority on safety and well-being of child participants (vulnerable population), confidentiality, parental consent | Notes single device limitations, reporting bias |
Reliability and Validity of Gait Analysis using 3D Markerless Pose Estimation [34] | 3D markerless pose estimation algorithms (OpenPose, 3DPoseNet); single-camera video. | Informed consent from participants; raw data planned to be made available without undue reservation. | Lower accuracy than marker-based; suggests training “networks that are specific to each population”. |
Multi-sensor Wearable System [56] | Multi-sensor wearable system (INDIP) with IMUs and force-resistive sensors; stereophotogrammetric system as reference. | Informed consent; public datasets. | Sensor redundancy limits wearability; validated across cohorts. |
Feature Hiding in 3D Scans [44] | 3D scan data processing using Analogia Graph and feature shape templates; surface rendering methods: blurring and transparency. | Focus on privacy of body parts; demonstrates how user preferences for privacy vary with security context. | Tested on CAESAR; quantifies privacy preferences. |
DeepLabCut for Gait in Children [12]. | DeepLabCut, 2D video | Informed consent, ethical compliance | Underexplored validity, arm swing occlusion |
Detailed 3D Human Body Reconstruction from Multi-View Image [22]. | Coarse-to-fine method combining 3D reconstruction from multi-view images (MF-PIFu) and Voxel Super Resolution (VSR) to infer detailed 3D human body models | Not directly detailed as a privacy concern but involves the creation of detailed 3D human body models which capture comprehensive body shape information. | Evaluates input views, compares to prior methods |
Intel RealSense Camera for Measuring Health Outcomes [30]. | Intel RealSense 3D depth sensing camera; Comparisons with Vicon 3D motion analysis system and GAITRite pressure pad system | As a review paper, it does not involve human participants. Notes that some traditional systems are expensive and confined to specialist centers, limiting widespread use | Supports the use of technology to develop robust, objective endpoints |
Health&Gait Dataset [25]. | Video sequences of participants walking; Optical flow computation; Machine learning for BMI, age, sex estimation | Informed consent for data sharing in an anonymized form that does not allow for identification | Stratified sampling by age, sex, and BMI |
Kinect Cameras for Spatio-temporal Gait Parameters [13]. | Kinect v2/Azure, Vicon as reference system | Not discussed | Validated vs. Vicon, algorithm error noted |
Mobile Application (LeanScreen) for Body Composition [42]. | 2D digital photography (LS2D) and 3D photonic scanning (LS3D); Compared to conventional methods (Gulick tape, BIA, skinfolds, DXA) | No explicit concerns beyond image capture | Validity/comparison with conventional methods |
Systematic Review of Technologies for Human Movement Analysis in Obese Subjects [20]. | Marker-based optoelectronic stereophotogrammetric systems; Wearable MIMUs; Medical imaging for validation | Not discussed | Notes STA, gender/obesity stratification needed |
Deep Learning-Based 3D Body Modeling from 2D Images [24] | 3D generative model creating 3D body data from 2D images | No external data sharing; informed consent | Compared to 3D scanner, average error reported |
Category | Sample Size | Accuracy Metrics | Deployment Context |
---|---|---|---|
Single-Smartphone Video Pose Estimation/Inertial Sensors | One study collected gait samples from 63 subjects using smartphone sensors [27]. Another trained deep learning models on gait data from 138 participants (92 normal, 46 overweight/obese) and tested them on a further 35 participants (23 normal, 12 overweight/obese) [6]. The Health&Gait dataset for video-based gait analysis includes 398 participants and 1564 videos [25]. For 2D-image-to-3D body data generation, training data from 400 subjects (200 male, 200 female) in their 20s and 30s were used, with validation on 214 people (103 men, 111 women) in the same age group [24]. | — | — |
Compact RGB-D Scanners | — | — | — |
Method/Technology | Deployment Status | Key Notes |
---|---|---|
Marker-based motion capture | Clinical/research standard | High accuracy, costly, lab-bound |
Markerless video-based (OpenPose, MediaPipe) | Pilots, research studies | Accessible, ongoing validation, limited robustness |
RGB-D cameras (Kinect, RealSense) | Clinics, research, some home use | Portable, moderate cost, expanding clinical adoption |
Smartphone-based gait analysis | Pilot studies, emerging commercial apps | High accessibility, validation ongoing |
AI/deep learning models | Research, early clinical pilots | Rapid evolution, needs explainability and clinical validation |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Dhaouadi, S.; Khelifa, M.M.B.; Balti, A.; Duché, P. Optical Sensor-Based Approaches in Obesity Detection: A Literature Review of Gait Analysis, Pose Estimation, and Human Voxel Modeling. Sensors 2025, 25, 4612. https://doi.org/10.3390/s25154612