Protocol

CARRT—Motion Capture Data for Robotic Human Upper Body Model

Department of Mechanical Engineering, University of South Florida, Tampa, FL 33620, USA
* Author to whom correspondence should be addressed.
Sensors 2023, 23(20), 8354; https://doi.org/10.3390/s23208354
Submission received: 16 September 2023 / Revised: 7 October 2023 / Accepted: 9 October 2023 / Published: 10 October 2023

Abstract

In recent years, researchers have focused on analyzing humans’ daily living activities to study the various performance metrics that humans subconsciously optimize while performing a particular task. To recreate these motions in robotic structures based on the human model, researchers have developed frameworks for robot motion planning that use various optimization methods to replicate the motions demonstrated by humans. As part of this process, the motion data of the human body and of the objects involved must be recorded to provide all of the information essential for motion planning. This paper provides a dataset of humans performing activities of daily living that consists of detailed and accurate whole-body motion data collected using a Vicon motion capture system. The data were used to generate a subject-specific full-body model within OpenSim and to compute joint angles within the OpenSim framework, which can subsequently be applied to the subject-specific robotic model developed in the MATLAB framework. The dataset comprises nine daily living activities and eight Range of Motion activities performed by ten healthy participants, with two repetitions of each variation of every action, resulting in 340 demonstrations in total. The whole-body human motion database is made publicly available as the Center for Assistive, Rehabilitation, and Robotics Technologies (CARRT)—Motion Capture Data for Robotic Human Upper Body Model, which consists of raw motion data in .c3d format, motion data in .trc format for the OpenSim model, and post-processed motion data for the MATLAB-based model.

1. Introduction

In recent years, the field of robotics has undergone notable advancements and emerged as a forefront technology driven by artificial intelligence and machine learning. To meet the diverse needs of these applications, robotic systems have evolved in terms of structural design, dexterity, manipulability, adaptability, and intelligence. The robotics community has shown substantial interest in utilizing robots in personal and social environments [1,2,3]. Studies have emphasized the significance of predictability and motion velocity in robotic manipulators, as these factors significantly impact the performance of human collaborators [4,5,6]. One aspect of this development is endowing robotic manipulators with human-like motion characteristics. By achieving this, robots become more than machines; they become intuitive collaborators that can anticipate and synchronize their actions with those of human co-workers. This advancement holds immense promise in collaborative settings, mitigating anxiety and enhancing situational awareness among human collaborators. As a result, collaborative tasks within shared workspaces become more efficient, safer, and more conducive to productive interaction [5,7]. It is therefore important for researchers to formulate robust motion planning algorithms capable of accurately mimicking human movements in order to foster this sense of security. To gain a deeper understanding of human behavior during various daily living activities and to evaluate the effectiveness of such algorithms, researchers need access to comprehensive datasets of human motion. Creating datasets that encompass the information needed to learn various performance metrics from human demonstrations requires substantial effort; the data collection process involves capturing systematic recordings of human movement. In this work, comprehensive robotic models are constructed from anthropomorphic data for OpenSim Version 3.3 and MATLAB R2019b, along with tools for processing and interpreting the datasets within the MATLAB environment. The availability of such datasets, including anthropomorphic models and post-processed data for multiple environments, represents a unique contribution that can significantly advance the field of Human Motor Control through biomimetic approaches.
In this paper, we present a dataset encompassing the following components:
  • Motion Capture Dataset: Recordings captured using a Vicon motion capture system, comprising nine activities of daily living (eight unimanual and one bimanual) and eight Range of Motion activities, performed by a group of ten individuals.
  • OpenSim Motion Files: For each of the tasks in the dataset, motion files compatible with OpenSim are provided. These files enable the study and analysis of the captured motions using OpenSim software.
  • An Upper Body Model for MATLAB: An approximate upper body model is included, specifically designed for MATLAB using the Peter Corke Robotics Toolbox. This model facilitates the investigation of various performance metrics within MATLAB.
  • Post-processed Dataset: The dataset has undergone further processing to enable the exploration of a wide range of performance metrics within the MATLAB environment. This processed dataset allows researchers to conduct in-depth analyses and investigations.
The motion capture data have been carefully processed and presented in .xls format, allowing them to be used on different software platforms. In addition, the authors have provided all the components required for the MATLAB framework, making it easier to use the data within this specific environment. This dataset can be used to aid future research and advancements in the field of Human Motor Control using biomimetic approaches. To facilitate accessibility and utilization, the dataset is made publicly available as the CARRT—Motion Capture Data for Robotic Human Upper Body Model [8].
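As a rough illustration of how the post-processed data could be pulled into MATLAB, the sketch below reads one of the .xls joint-angle files with readtable and plots the trajectories. The file name and the assumption that the first column holds time are illustrative only; the actual naming convention and column layout are documented in the text file that accompanies the dataset.

```matlab
% Minimal sketch (assumptions noted): load one post-processed joint-angle file.
file = 'Subject01_BH_Trial1.xls';        % hypothetical file name, for illustration only
T = readtable(file);                     % one row per frame

time   = T{:, 1};                        % assumed: first column is time (s)
angles = T{:, 2:end};                    % assumed: remaining columns are joint angles (deg)

plot(time, angles);                      % quick visual check of the trajectories
xlabel('Time (s)');
ylabel('Joint angle (deg)');
legend(T.Properties.VariableNames(2:end), 'Interpreter', 'none');
```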

2. Related Work

Human Motion Datasets

In the field of biomimetic motion, many methods use human motion recordings as a basis and benchmark for assessment. This dependence on motion capture remains consistent regardless of the scope, field, or application being considered. However, the limited accessibility of large, adaptable, high-quality datasets negatively impacts the overall extent of research in this area [9]. An approximate human kinematic model is crucial for conducting a comprehensive kinematic analysis. This model encompasses the joint angles and movements exhibited by the human body during different activities. Although multiple motion capture databases exist, most have been tailored to suit the specific requirements of the research groups that created them. As a result, their limited coverage of daily living activities makes them less useful for task-specific kinematic analysis.
The KIT Whole-Body Human Motion Database [10] is a comprehensive repository of large-scale whole-body human motion data. It provides a range of methods and tools that facilitate a unified representation of captured human motion and enable efficient searching within the database. Moreover, it allows for the transfer of subject-specific motions to robots with varying physical characteristics. In addition to detailing the reference model, the authors of the database outline systematic procedures and techniques for recording, labeling, and organizing human motion capture data. They also address the recording of object motions and the establishment of subject–object relations. The AndyData-lab-onePerson Dataset [11] comprises a collection of datasets that include motion and force measurements captured during various manual tasks. The dataset also includes detailed annotations of the actions and postures exhibited by the participants during these tasks. In this dataset, a total of 13 participants were involved, each engaging in a series of activities that simulate industrial tasks commonly encountered in real-world settings. These include tasks such as setting screws at different heights and manipulating loads of varying weights.
The CMU Graphics Lab Motion Capture Database [12] is a prominent repository of human motion data. It comprises 6 major task categories, further divided into 23 subcategories, resulting in a total of 2605 trials. This database offers a diverse range of captured motions, encompassing various activities and scenarios such as walking, washing clothes, and playing games.
The SFU Motion Capture Database [13] has significantly contributed to the field by offering a comprehensive dataset encompassing a wide range of human motions. The dataset includes recordings of eight subjects performing tasks across five major categories, providing researchers with valuable insights into human movement patterns and behavior. One notable advantage of the SFU Motion Capture Database is its provision of motion data in multiple file formats. This versatility lets users import motion files into various platforms, including Maya, MotionBuilder, and OpenSim.
The Archive of Motion Capture As Surface Shapes (AMASS) [14] is a comprehensive and diverse database of human motion. It serves as a unifying platform for integrating data from 15 distinct optical marker-based motion capture (mocap) datasets, employing a shared framework and parameterization approach. The authors of the study utilized a software tool called MoSh++ to transform the mocap data into realistic three-dimensional (3D) human meshes, represented by a rigged body model.
The GRAB (GRasping Actions with Bodies) dataset [15] offers an extensive repository of whole-body grasping actions, encompassing complete three-dimensional (3D) shape and pose sequences of 10 subjects interacting with 51 everyday objects of diverse shapes and sizes. However, it is important to emphasize that the dataset’s primary focus is on object grasping rather than capturing the intricacies of actual object interactions. Furthermore, the dataset represents the body using the SMPL-X model, which lacks the detailed human body parameters needed for thorough kinematic analyses.
In contrast to previous research, our study introduces a task-specific motion capture dataset that captures whole-body motions during daily living tasks. This dataset is adaptable across different platforms and seamlessly integrates subject-specific kinematic models into the MATLAB workspace. This dataset and these models play a crucial role in conducting diverse analyses to understand the various performance criteria that humans intuitively optimize during the execution of specific activities of daily living tasks. This dataset serves as a valuable complement to the previously established KIT Whole-Body Human Motion Database [10] and the GRAB (GRasping Actions with Bodies) dataset [15].

3. The Dataset

The primary objective of this section is to provide a comprehensive description of the marker placement and camera setup employed in the study, as well as detailed participant demographics and the experimental setup. Moreover, it encompasses a comprehensive description of the objects utilized during the performance of various activities of daily living. The CARRT-Motion Capture Data for Robotic Human Upper Body Model Database offers an extensive assortment of anthropomorphic data, .c3d motion files, and a subject-specific Robotic-Toolbox-based MATLAB model.

3.1. Marker Placement and Camera Setup

In this dataset, the entire kinematics of the participant’s whole body was captured using a Vicon motion capture system based on reflective markers [16]. Eight cameras were strategically positioned around the workspace, as shown in Figure 1. For marker placement, 43 spherical reflective markers with a diameter of 12.5 mm were affixed to the participant’s skin using double-sided adhesive tapes. A comprehensive description of the markers, including their labels and positions, can be found in Table 1, Figure 2 and Figure 3.
To ensure accurate data collection, the Vicon system underwent calibration for each participant at the start of the recording session. The participants maintained fixed, static T-positions at the beginning and end of each trial. The data from each trial were recorded at a frequency of 120 Hz. To mitigate noise and marker flickering during the trials, the recorded data were subjected to post-processing using Nexus 1.8.5 software by Vicon (Denver, CO, USA) [16]. This post-processing was performed on a computer running Windows 7, equipped with an Intel Core i5 processor, a 250 GB hard disk, and 32 GB of RAM.
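All filtering for this dataset was carried out in Nexus; purely as an illustration, a user who wants additional smoothing of an exported marker trajectory could apply a zero-lag low-pass filter offline. The sketch below assumes MATLAB's Signal Processing Toolbox, and the 6 Hz cutoff and fourth-order design are illustrative choices rather than parameters used by the authors.

```matlab
% Illustrative extra smoothing step (not part of the authors' Nexus pipeline).
fs = 120;                                    % capture rate used in this dataset (Hz)
fc = 6;                                      % illustrative low-pass cutoff (Hz)
markerXYZ = randn(600, 3);                   % placeholder for an N-by-3 marker trajectory

[b, a] = butter(4, fc / (fs/2));             % 4th-order Butterworth design (Signal Processing Toolbox)
markerFiltered = filtfilt(b, a, markerXYZ);  % zero-phase filtering, no added lag
```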
The data collection process also involved a video camera, with videos recorded at a frame rate of 25 frames per second. To ensure anonymity, each video underwent post-processing using video editing software. It is important to note that the videos are not included in this dataset; their purpose was solely to verify the accuracy of the motion capture.

3.2. Participants

A total of 10 healthy adults participated in the data collection process, consisting of 4 men and 6 women. Among the participants, nine had dominant right hands, while one had a dominant left hand. Detailed demographic information about the participants is presented in the accompanying Table 2.
The average age of the participants was 28 years (SD = 8.56 years), with an average height of 164.32 cm (SD = 5.69 cm) and an average body mass of 66.10 kg (SD = 10.48 kg). Individuals with limited or no experience working with motion capture systems, including students and researchers, were recruited to ensure diverse participation in the study.
Ethical considerations were addressed by obtaining approval from the Institutional Review Board of the University of South Florida. Prior to data collection, all participants provided informed consent after receiving comprehensive information about the study and its procedures (IRB number: 004898).

3.3. Experimental Setup

In this section, we will explore a dataset comprising nine carefully chosen daily living activities, along with the associated objects employed to perform these activities.

3.3.1. Activities of Daily Living Tasks and Procedure

The dataset includes a total of nine activities of daily living (ADL) and eight range of motion (ROM) tasks [17]. As this data collection primarily centers on upper body movements, the selection of ADL tasks is based on prior work by the authors of references [18,19,20]. These activities focus on the movement of joints and how humans interact with objects while sitting or standing.
Participants were instructed to perform each task either in a standing position or while seated on a chair positioned behind a table with a height of approximately 88 cm. Prior to recording, the Vicon system underwent calibration for each participant to ensure accurate measurements. Two repetitions of each action were recorded, while participants maintained a fixed, static position at the start and end of each trial.
Overall, 18 ADL demonstrations (nine tasks × two repetitions) and 16 ROM demonstrations (eight tasks × two repetitions) were collected from each participant, resulting in a combined total of 340 demonstrations across the ten participants. The duration of each recording ranged from five to fifteen seconds.

3.3.2. Objects

To capture human motion during the execution of activities of daily living (ADL) tasks, a set of seven natural household objects was employed. These objects were carefully selected to resemble the items commonly encountered in real-life scenarios closely. For comprehensive information regarding each object, including its weight and dimensions, please refer to Table 3. The dataset does not include information regarding object markers.

4. Data Post Processing

4.1. File Formats and Organization

This section presents an overview of the key software components involved in the study, including OpenSim, the MATLAB Robotic Human Upper Body Model (RHUBM), and the dataset organization. The dataset comprises 426 .c3d motion files and 426 .trc files specifically tailored for OpenSim. The Demographic Data .xls file contains essential participant information, such as ID, age, gender, dominant hand, weight, and body dimensions, which can be used to create the scaled model in OpenSim. Additionally, the dataset includes seven MATLAB files, which comprise the RHUBM model of each subject, and 215 .xls joint angle files specifically tailored for MATLAB. Instructions on running the MATLAB code can be found in the accompanying text file. Please refer to Table 4 and Figure 4 for more detailed insights into the data content and format.
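A hedged sketch of accessing these files from MATLAB is shown below: the demographic spreadsheet is read with readtable, and one .trc trial is read as a numeric matrix by skipping its text header. The file names are hypothetical, and the assumption that the .trc header occupies five lines should be checked against the actual files.

```matlab
% Minimal sketch for browsing the dataset from MATLAB (file names are hypothetical).
demo = readtable('Demographic Data.xls');          % participant ID, age, gender, dimensions, ...
disp(demo.Properties.VariableNames);

% .trc files are tab-delimited text with a short header block before the marker data.
trc = readmatrix('Subject01_PB_Trial1.trc', 'FileType', 'text', ...
                 'Delimiter', '\t', 'NumHeaderLines', 5);  % header length is an assumption
frame = trc(:, 1);                                 % frame index
time  = trc(:, 2);                                 % time stamp (s)
xyz   = trc(:, 3:end);                             % X/Y/Z coordinates, three columns per marker
```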

4.2. OpenSim Musculoskeletal Model

Musculoskeletal models offer a non-invasive approach to investigating human movement. In this dataset, we employed the full-body musculoskeletal model developed by Rajagopal et al. [21] to generate subject-specific joint angles for each task. By utilizing the OpenSim scaled model option, we created a customized model that accurately represents each individual’s anatomy. The skeletal structure of the model consisted of 22 articulating rigid bodies, including the pelvis, femurs, patellae, tibiae/fibulae, tali, calcanei, and toes to depict the lower body. The upper body was represented by the combined head and torso, together with the humerus, ulna, radius, and hand on each side. In total, the model encompassed 20 degrees of freedom in the lower body, accounting for the pelvis and each leg, and 17 degrees of freedom in the torso and upper body, incorporating the lumbar joint and each arm. The dataset offers comprehensive joint angles for the entire body; however, since our primary research focus is upper body movement during activities of daily living, the analysis and documentation concentrate on the joint coordinates related to the upper body. For a detailed description of each joint coordinate system, please refer to the work by Rajagopal et al. [21].
The head and torso were represented as a single rigid segment connected to the pelvis, with the orientation of the torso relative to the pelvis described using torso-fixed ZXY rotations (representing lumbar extension, lateral bending, and rotation, respectively). The humerus was connected to the torso by a ball-and-socket joint, and the orientation of the right humerus relative to the torso was described by humerus-fixed ZXY rotations (representing shoulder flexion, adduction, and rotation, respectively). The ulna was linked to the humerus via a pin joint at the elbow, while forearm pronation was modeled by a pin joint connecting the radius and ulna. The hand was connected to the radius through a two-degree-of-freedom universal joint, with the orientation of the hand with respect to the radius described by hand-fixed ZX rotations (representing wrist flexion and ulnar deviation, respectively). It should be noted that this model, driven by OpenSim’s built-in inverse kinematics tool, focused primarily on capturing the overall motion of the torso and upper extremities and did not account for the complex kinematics of scapular motion or spinal bending.
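The joint angles in this dataset were generated with OpenSim's graphical tools; for completeness, the same scaling and inverse kinematics steps can also be driven from MATLAB through OpenSim's scripting interface. The sketch below is only a sketch: it assumes the OpenSim MATLAB interface is installed, that setup XML files for the Scale and Inverse Kinematics tools were previously exported from the GUI, and every file name shown is a hypothetical placeholder.

```matlab
% Illustrative use of the OpenSim MATLAB scripting interface (assumes it is
% installed and configured; all file names are hypothetical placeholders).
import org.opensim.modeling.*

% Scale the generic full-body model to one subject using a pre-built setup file.
scaleTool = ScaleTool('Subject01_scale_setup.xml');
scaleTool.run();

% Run inverse kinematics on one trial using a pre-built IK setup file.
ikTool = InverseKinematicsTool('Subject01_PB_ik_setup.xml');
ikTool.setModel(Model('Subject01_scaled.osim'));
ikTool.run();                       % writes a .mot file of joint angle trajectories
```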

4.3. MATLAB Robotic Toolbox Model

In this dataset, the authors provide only the upper body model, designed explicitly for the MATLAB environment and based on the OpenSim model discussed in the previous section. The upper body robot model was constructed as a rigid kinematic chain based on Denavit-Hartenberg (D-H) parameters [22]. The upper body model consists of 17 Degrees of Freedom (DoFs) for each subject. These DoFs include three for the torso (representing lumbar lateral bending, extension, and rotation, respectively) [23], three for each shoulder joint (representing shoulder flexion, adduction, and rotation, respectively), two for each elbow joint (representing elbow flexion and forearm pronation, respectively), and two for each wrist joint (representing wrist flexion and ulnar deviation, respectively). The complete set of parameters used to create the links of the RHUBM is presented in Table 5, Table 6 and Table 7 and Figure 5.
In the RHUBM, the vertical length of the torso is denoted as D1 and the horizontal length of the torso as A1. The upper arm length, D2, is measured from the shoulder center to the elbow center, and the forearm length, D3, is measured from the elbow center to the wrist center. Additionally, the length of the hand, denoted as D4, is measured from the center of the wrist to the center of the palm. The mass of each body segment is expressed as a percentage of the total body mass [24]. Graphic descriptions of the parameters used in the RHUBM are provided in Figure 5.
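To make the mapping from Table 5 to a Robotics Toolbox object concrete, the sketch below builds the first four links of the chain with the classic Peter Corke Robotics Toolbox Link/SerialLink API. Because the table lists αi−1 and ai−1, the modified (Craig) D-H convention is assumed here; that choice, and the illustrative D1 and A1 values, should be checked against the per-subject MATLAB files included in the dataset.

```matlab
% Minimal sketch: first four RHUBM links from the D-H parameters in Table 5
% (Peter Corke Robotics Toolbox; 'modified' D-H convention is an assumption).
D1 = 0.435;  A1 = 0.200;   % illustrative torso dimensions (m), cf. Table 7

L(1) = Link('alpha',  0,    'a', 0,  'd', 0,  'offset',  pi/2, 'modified');  % torso lateral flexion
L(2) = Link('alpha',  pi/2, 'a', 0,  'd', 0,  'offset',  pi/2, 'modified');  % torso flexion/extension
L(3) = Link('alpha', -pi/2, 'a', 0,  'd', 0,  'offset', -pi/2, 'modified');  % torso rotation
L(4) = Link('alpha',  0,    'a', A1, 'd', D1, 'offset',  0,    'modified');  % shoulder flexion/extension

torso = SerialLink(L, 'name', 'RHUBM (partial)');
T = torso.fkine(zeros(1, 4));        % forward kinematics at the zero configuration
disp(T);
```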

5. Conclusions

In this study, we present a dataset of human demonstrations capturing activities of daily living and range of motion tasks. The dataset was recorded using a Vicon motion capture system and is specifically designed to provide comprehensive information for researchers studying the various performance metrics that humans subconsciously optimize during task execution. It comprises nine different ADL actions and eight range of motion actions performed by ten subjects, with two repetitions of each action variation, yielding a total of 340 human demonstrations.
The dataset aims to facilitate the learning and generalization of object manipulation actions, which are crucial for analyzing the various performance metrics optimized by humans during task performance. Using the Vicon motion capture data, a subject-specific full-body model was constructed within the OpenSim framework, enabling the computation of joint angles. The Robotics Toolbox developed by Peter Corke [25] played a crucial role in building a subject-specific MATLAB-based upper body model. Additionally, a MATLAB-based library was created, leveraging the OpenSim joint angles as a foundation, with the potential to support future research. These tools and resources are intended to support research in learning performance criteria from human observation and can contribute to various research directions, such as motion primitive learning and trajectory planning.
In our future work, we plan to expand the dataset by including more actions and their variations from additional subjects. We also aim to incorporate bimanual tasks and a full-body MATLAB-based kinematic model to analyze the dynamics of the entire body. This will enable better trajectory planning for humanoid robots.

Author Contributions

Conceptualization, U.T.; methodology, U.T.; formal analysis, U.T.; investigation, U.T.; resources, R.A. and R.D.; data curation, U.T.; writing—original draft preparation, U.T.; writing—review and editing, R.A.; visualization, U.T.; supervision, R.A.; project administration, R.D.; funding acquisition, R.A. and R.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the University of South Florida (IRB number: 004898; approved on 21 November 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are openly available in the CARRT—Motion Capture Data for Robotic Human Upper Body Model repository on Zenodo [8].

Acknowledgments

The authors would like to thank the Florida Department of Education—Division of Vocational Rehabilitation for their support.

Conflicts of Interest

The authors have disclosed the absence of any potential conflict of interest pertaining to the research, authorship, and/or publication of this article.

References

  1. Ray, C.; Mondada, F.; Siegwart, R. What do people expect from robots? In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008. [Google Scholar]
  2. Welfare, K.S.; Hallowell, M.R.; Shah, J.A.; Riek, L.D. Consider the human work experience when integrating robotics in the workplace. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Republic of Korea, 11–14 March 2019. [Google Scholar]
  3. Yang, J.; Chew, E. A systematic review for service humanoid robotics model in hospitality. Int. J. Soc. Robot. 2021, 13, 1397–1410. [Google Scholar] [CrossRef]
  4. Koppenborg, M.; Nickel, P.; Naber, B.; Lungfiel, A.; Huelke, M. Effects of movement speed and predictability in human–robot collaboration. Hum. Factors Ergon. Manuf. Serv. Ind. 2017, 27, 197–209. [Google Scholar] [CrossRef]
  5. Tanizaki, Y.; Jimenez, F.; Yoshikawa, T.; Furuhashi, T. Impression Investigation of Educational Support Robots using Sympathy Expression Method by Body Movement and Facial Expression. In Proceedings of the 2018 Joint 10th International Conference on Soft Computing and Intelligent Systems (SCIS) and 19th International Symposium on Advanced Intelligent Systems (ISIS), Toyama, Japan, 5–8 December 2018. [Google Scholar]
  6. Tanie, K. Humanoid robot and its application possibility. In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, MFI2003, Tokyo, Japan, 1 August 2003. [Google Scholar]
  7. Sim, J.; Kim, S.; Park, S.; Kim, S.; Kim, M.; Park, J. Design of JET humanoid robot with compliant modular actuators for industrial and service applications. Appl. Sci. 2021, 11, 6152. [Google Scholar] [CrossRef]
  8. Trivedi, U. CARRT—Motion Capture Data for Robotic Human Upper Body Model (Version 1). Zenodo 2023. [Google Scholar] [CrossRef]
  9. Trivedi, U.; Menychtas, D.; Alqasemi, R.; Dubey, R. Biomimetic Approaches for Human Arm Motion Generation: Literature Review and Future Directions. Sensors 2023, 23, 3912. [Google Scholar] [CrossRef] [PubMed]
  10. Krebs, F.; Meixner, A.; Patzer, I.; Asfour, T. The KIT Bimanual Manipulation Dataset. In Proceedings of the 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids), Munich, Germany, 19–21 July 2021. [Google Scholar]
  11. Maurice, P.; Malaisé, A.; Ivaldi, S.; Rochel, O.; Amiot, C.; Paris, N.; Richard, G.-J.; Fritzsche, L. AndyData-lab-onePerson [Data set]. In The International Journal of Robotics Research. Zenodo 2019. [Google Scholar] [CrossRef]
  12. De la Torre, F.; Hodgins, J.; Bargteil, A.; Martin, X.; Macey, J.; Collado, A.; Beltran, P. Guide to the Carnegie Mellon University Multimodal Activity (Cmu-Mmac) Database; CMU-RI-TR-08-22. 2009. Available online: https://www.ri.cmu.edu/publications/guide-to-the-carnegie-mellon-university-multimodal-activity-cmu-mmac-database/ (accessed on 8 October 2023).
  13. Jing, G.; Ying, K.Y. SFU Motion Capture Database. 2023. Available online: http://mocap.cs.sfu.ca (accessed on 8 October 2023).
  14. Mahmood, N.; Ghorbani, N.; Troje, N.F.; Pons-Moll, G.; Black, M.J. AMASS: Archive of motion capture as surface shapes. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
  15. Taheri, O.; Ghorbani, N.; Black, M.J.; Tzionas, D. GRAB: A Dataset of Whole-Body Human Grasping of Objects. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  16. Vicon Motion Capture System. 2023. Available online: https://www.vicon.com/ (accessed on 8 October 2023).
  17. Lura, D.J. The Creation of a Robotics Based Human Upper Body Model for Predictive Simulation of Prostheses Performance; University of South Florida: Tampa, FL, USA, 2012. [Google Scholar]
  18. Magermans, D.; Chadwick, E.; Veeger, H.; Van Der Helm, F.C.T. Requirements for upper extremity motions during activities of daily living. Clin. Biomech. 2005, 20, 591–599. [Google Scholar] [CrossRef]
  19. Bucks, R.S.; Ashworth, D.L.; Wilcock, G.K.; Siegfried, K. Assessment of activities of daily living in dementia: Development of the Bristol Activities of Daily Living Scale. Age Ageing 1996, 25, 113–120. [Google Scholar] [CrossRef]
  20. Edemekong, P.F.; Bomgaars, D.; Sukumaran, S.; Levy, S.B. Activities of Daily Living; StatPearls. 2019. Available online: https://digitalcollections.dordt.edu/faculty_work/1222 (accessed on 8 October 2023).
  21. Rajagopal, A.; Dembia, C.L.; DeMers, M.S.; Delp, D.D.; Hicks, J.L.; Delp, S.L. Full-body musculoskeletal model for muscle-driven simulation of human gait. IEEE Trans. Biomed. Eng. 2016, 63, 2068–2079. [Google Scholar] [CrossRef]
  22. Denavit, J.; Hartenberg, R. A Kinematic Notation for Lower-Pair Mechanisms Based on Matrices. ASME J. Appl. Mech. 1955, 11, 337–359. [Google Scholar]
  23. Holzbaur, K.R.S.; Murray, W.M.; Delp, S.L. A model of the upper extremity for simulating musculoskeletal surgery and analyzing neuromuscular control. Ann. Biomed. Eng. 2005, 33, 829–840. [Google Scholar] [CrossRef]
  24. Weight of Human Body Parts as Percentages of Total Body Weight. 2023. Available online: https://robslink.com/SAS/democd79/body_part_weights.htm (accessed on 8 October 2023).
  25. Corke, P.I.; Khatib, O. Robotics, Vision and Control: Fundamental Algorithms in MATLAB; Springer: Berlin/Heidelberg, Germany, 2011; Volume 73. [Google Scholar]
Figure 1. Camera positions relative to subject and motion capture system origin.
Figure 2. Upper body marker placement.
Figure 3. Lower body marker placement.
Figure 4. Dataset file schematic.
Figure 5. RHUBM Model.
Table 1. Marker description.

Name | Marker Placement
T1 | Spinous Process; 1st Thoracic Vertebrae
T10 | Spinous Process; 10th Thoracic Vertebrae
CLAV | Jugular Notch
STRN | Xiphoid Process
LBAK | Middle of Left Scapula (Asymmetrical)
R/LASI | Right/Left Anterior Superior Iliac Spine
R/LPSI | Right/Left Posterior Superior Iliac Spine
R/LGT | Right/Left Greater Trochanters
R/LSHOA | Anterior Portion of Right/Left Acromion
R/LSHOP | Posterior Portion of Right/Left Acromion
R/LUPA | Right/Left Lateral Upper Arm
R/LELB | Right/Left Lateral Epicondyle
R/LELBM | Right/Left Medial Epicondyle
R/LFRA | Right/Left Lateral Forearm
R/LWRA | Right/Left Wrist Radial Styloid
R/LWRB | Right/Left Wrist Ulnar Styloid
R/LFIN | Dorsum of Right Hand Just Proximal to 3rd Metacarpal Head
R/LTHI | Right/Left Thigh
R/LLFC | Right/Left Lateral Epicondyle of Femur
R/LMFC | Right/Left Medial Epicondyle of Femur
R/LTBI | Right/Left Tibia Interior
R/LLMAL | Right/Left Lateral Malleolus
R/LCAL | Right/Left Calcaneus
R/LTOE | Right/Left Toe
Table 2. Subject demographic information.

Subject | Gender | Dominant Hand | Height (cm) | Body Weight (Kg)
Subject 1 | Male | R | 162.5 | 60
Subject 2 | Male | R | 165.09 | 65
Subject 3 | Female | R | 160.02 | 58
Subject 4 | Male | R | 172.72 | 86
Subject 5 | Female | R | 152.4 | 58
Subject 6 | Female | R | 162.5 | 68
Subject 7 | Female | R | 165.09 | 51
Subject 8 | Female | R | 162.5 | 79
Subject 9 | Male | L | 172.72 | 60
Subject 10 | Female | R | 167.64 | 76
Table 3. Activities of daily living tasks. (* NA = Not Applicable).

ADL Task Name | Abbreviation | Object Used | Object Weight (Kg) | Object Size (m)
Brushing Hair | BH | Hairbrush | 0.02 | 0.243 × 0.081 × 0.0381
Drinking From a Cup | FPC | Plastic Cup | 0.02 | 0.053 × 0.053 × 0.109
Opening a Lower-Level Cabinet | OCL | * NA | NA | NA
Opening a Higher-Level Cabinet | OCH | NA | NA | NA
Picking Up the Box | PB | Cardboard Box | 0.52 | 0.457 × 0.356 × 0.305
Picking Up the Duster and Cleaning | PDC | Cleaning Duster | 0.18 | 0.356 × 0.051 × 0.076
Picking Up an Empty Water Jug | PEWJ | 1 Gallon Water Jug | 0.90 | 0.15 × 0.15 × 0.269
Picking Up a Full Water Jug | PFWJ | 1 Gallon Water Jug | 3.79 | 0.15 × 0.15 × 0.269
Picking Up a Water Jug and Pouring | PWJ | 1/2 Gallon Water Jug and Plastic Cup | 2.55 | 0.191 × 0.105 × 0.289
Table 4. File formats.

Dataset | Format | Number of Files
Participant Data | .trc | 426
Vicon Data | .c3d | 426
Matlab Raw Data | .xls | 215
Matlab Code | .M | 7
Demographic Data and Task Name | .xls | 2
Table 5. D-H parameters of RHUBM model.

i | αi−1 (deg) | ai−1 (m) | di (m) | Ɵi (deg) | Joint
1 | 0 | 0 | 0 | 90 + Ɵ1 | Torso Lateral Flexion
2 | 90 | 0 | 0 | 90 + Ɵ2 | Torso Flexion/Extension
3 | −90 | 0 | 0 | −90 + Ɵ3 | Torso Rotation
4 | 0 | A1 | D1 | Ɵ4 | Shoulder Flexion/Extension
5 | −90 | 0 | 0 | −90 + Ɵ5 | Shoulder Abduction/Adduction
6 | −90 | 0 | D2 | Ɵ6 | Shoulder Rotation
7 | −90 | 0 | 0 | 180 + Ɵ7 | Elbow Flexion
8 | −90 | 0 | D3 | 90 + Ɵ8 | Elbow Pronation/Supination
9 | −90 | 0 | 0 | 90 + Ɵ9 | Wrist Flexion/Extension
10 | −90 | 0 | 0 | 180 + Ɵ10 | Wrist Abduction/Adduction
Table 6. Segment weight.

Weight (Kg) | Segment
W1 = 0.551 × Body Weight | Torso
W2 = 0.0325 × Body Weight | Upper Arm
W3 = 0.0187 × Body Weight | Lower Arm
W4 = 0.0065 × Body Weight | Hand
Table 7. Subject segment dimensions.

Subject | W1 (Kg) | D1 (m) | W2 (Kg) | A1 (m) | W3 (Kg) | D2 (m) | W4 (Kg) | D3 (m)
Subject 1 | 29.820 | 0.435 | 1.680 | 0.200 | 0.960 | 0.270 | 0.360 | 0.265
Subject 2 | 32.305 | 0.432 | 1.820 | 0.280 | 1.040 | 0.280 | 0.390 | 0.254
Subject 3 | 28.826 | 0.368 | 1.624 | 0.140 | 0.928 | 0.300 | 0.348 | 0.250
Subject 4 | 42.742 | 0.457 | 2.408 | 0.200 | 1.376 | 0.318 | 0.516 | 0.265
Subject 5 | 28.826 | 0.356 | 1.624 | 0.190 | 0.928 | 0.254 | 0.348 | 0.228
Subject 6 | 33.796 | 0.356 | 1.904 | 0.203 | 1.088 | 0.267 | 0.408 | 0.254
Subject 7 | 25.347 | 0.432 | 1.428 | 0.200 | 0.816 | 0.254 | 0.306 | 0.229
Subject 8 | 39.263 | 0.457 | 2.212 | 0.200 | 1.264 | 0.254 | 0.474 | 0.228
Subject 9 | 29.820 | 0.435 | 1.680 | 0.200 | 0.960 | 0.270 | 0.360 | 0.265
Subject 10 | 37.772 | 0.432 | 2.128 | 0.200 | 1.216 | 0.254 | 0.456 | 0.228