Article

A Human Body Simulation Using Semantic Segmentation and Image-Based Reconstruction Techniques for Personalized Healthcare

Department of Industrial and Systems Engineering, Dongguk University, Seoul 04620, Republic of Korea
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(16), 7107; https://doi.org/10.3390/app14167107
Submission received: 16 July 2024 / Revised: 12 August 2024 / Accepted: 12 August 2024 / Published: 13 August 2024
(This article belongs to the Special Issue State-of-the-Art of Computer Vision and Pattern Recognition)

Abstract
The global healthcare market is expanding, with a particular focus on personalized care for individuals who are unable to leave their homes due to the COVID-19 pandemic. However, implementing personalized care is challenging because it requires additional devices, such as smartwatches and wearable trackers. This study aims to develop a human body simulation that predicts and visualizes an individual’s 3D body changes based on 2D images taken by a portable device. The simulation proposed in this study uses semantic segmentation and image-based reconstruction techniques to preprocess the 2D images and construct 3D body models, and it takes the user’s exercise plan into account to visualize 3D body changes. The proposed simulation was developed based on human-in-the-loop experimental results and literature data. The experiment shows no statistically significant difference between the simulated body and actual anthropometric measurements, with a p-value of 0.3483 in the paired t-test. The proposed simulation provides an accurate and efficient estimation of the human body in a 3D environment without the expensive equipment, such as a 3D scanner or scanning uniform, required by existing anthropometry approaches. This can promote preventive treatment for individuals who lack access to healthcare.

1. Introduction

Personalized healthcare has received worldwide attention due to technical improvements [1]. Both specialized groups (e.g., companies and research centers) and ordinary individuals can now access wearable devices equipped with real-time monitoring sensors [2,3,4]. To be more specific, a smart watch (e.g., the Apple Watch) can track a person’s irregular pulse so that potential heart anomalies are detected in real time [5]. In addition, a wearable tracker, such as a Fitbit, can track an individual’s physical activity [3]. Given that most OECD countries have become aging societies and their lifestyles are rapidly shifting toward single-person households [6,7], the growth of the personalized healthcare market is not in question.
There are two types of personalized healthcare in terms of the point in time of treatment. Wearable devices used to monitor anomalies in health can be considered proactive treatment, because real-time information can shorten the reaction time of a first responder [8]. According to the personal health devices (PHDs) standard [9], proactive treatment comprises activities that allow people to self-monitor their health conditions and to share that information with healthcare professionals and other caregivers. Proactive treatment involves two major types of devices: those that detect anomalies using users’ health data, and those that relieve discomfort. An example of the former is the Advanced care and alert portable telemedical MONitor (AMON) [10], which continuously collects vital signs and alerts the medical center to anomalies; an example of the latter is the Quell, by NeuroMetrix Inc., which relieves chronic leg pain via transcutaneous electrical nerve stimulation [11].
On the other hand, preventive treatment is associated with maintaining the individual’s health [12]. The most popular approaches in this category are home training and dietary therapy [13]. In fact, under the COVID-19 pandemic, home training and dietary therapy have become more common and popular in many countries, including the U.S., Germany, France, Japan, China, and South Korea [14]. According to Fortune, the market size of home fitness equipment in the U.S. was about USD 3.55 billion in 2020 and is expected to grow to USD 14.74 billion by 2028 [15]. Similarly, the market size of dietary supplements was about USD 61.20 billion in 2020 and is expected to reach USD 128.64 billion by 2028. Generally speaking, preventive treatment is known to be more effective than proactive treatment in terms of cost and fatality [16].
In preventive treatment, it is critical to understand the correlation between anthropometric variables and diseases such as diabetes, hypertension, dyslipidemia, and coronary artery disease [17]. Once the correlation is analyzed, it can be used to build a prediction model of human diseases. This implies that diseases can be prevented by changing anthropometric variables, e.g., by decreasing waist circumference, waist-hip ratio, and body mass index (BMI). A number of studies on disease prediction have also been conducted. Sung et al. [18] proposed cardiovascular disease (CVD) prediction models based on a deep learning method (RNN-LSTM), which is widely used to analyze time-series datasets; this technique improved prediction accuracy compared to conventional Cox proportional hazards regression models [19]. In addition, various companies have participated in developing and commercializing services that diagnose disease and health abnormalities by analyzing users’ health record data; e.g., Apple received FDA approval for a deep learning algorithm that detects atrial fibrillation using its smartwatch [4]. Even though recent preventive treatment technologies combined with artificial intelligence (AI) have been developing rapidly, additional research is needed to produce results that represent a cross-section of the population, and the privacy and security of the data used, which can reveal details of users’ medical histories, should be strengthened [20].
This study aims to propose a human body simulation that predicts and visualizes an individual’s 3D body change based on 2D images taken by a portable device (e.g., a smart phone). Notice that 3D body models provide more accurate volume information and body shape than conventional 2D body images, so they can be used to precisely calculate health indicators such as BMI. The main part of the proposed simulation consists of two major subprocesses in the anthropometric measurement module: (1) a semantic segmentation process [21,22,23], which segments the human image from 2D imagery taken by a smart phone, and (2) an image-based reconstruction process [24,25], which constructs the 3D body model. Experiments were conducted to obtain optimal conditions for the aforementioned subprocesses, deriving reliable anthropometric variables while avoiding limitations such as the expensive equipment required by 3D scanner-based anthropometry and direct anthropometry. According to the experiments, the p-value of the paired t-test conducted between the measurements given by the proposed simulation and the actual measurements is 0.3483, so the proposed simulation can accurately estimate the human body in the 3D environment (see Section 4 for more detail). As a result, the proposed simulation enables not only the reconstruction of an accurate 3D body model from 2D images, but also the visualization of bodily changes in accordance with a prescribed exercise plan through the 3D body model. This will contribute to eliminating inequalities for people who are not provided with expensive healthcare services.
The contributions of this study are as follows. This study presents a specific method for preprocessing anthropometric measurements to better utilize the existing 3D body modeling techniques, and designs a selection module for precise 3D body part implementation from 2D images. In addition, it is significant that a module for predicting body changes according to a given exercise plan is developed and implemented as a single simulation program.
The organization of the paper is as follows: Section 2 briefly introduces existing studies associated with 3D modeling of the human body and image segmentation, while Section 3 introduces a three-phase simulation-based modeling approach for the visualization and prediction of a 3D human body. Section 4 demonstrates the proposed simulation with experimental analysis and test results, while Section 5 concludes the paper and indicates future work associated with this study.

2. Three-Dimensional Modeling of the Human Body

As the demand for 3D human body modeling increases in multiple industries (e.g., clothing, healthcare, gaming, and industrial design), researchers are intensively conducting research in the field of human body shape modeling (HBSM). According to Baek and Lee [26], HBSM techniques can be categorized as follows: (1) direct model creation/acquisition, which models the 3D human body from depth-related information or a map of per-pixel data (e.g., RGB-D data); (2) template-based model scaling, which modifies template models to elicit the desired 3D human body model; (3) statistical-based model synthesis, which deforms pose and shape by learning from datasets; and (4) image-based reconstruction, which extracts parameters that elicit the 3D human body model from 2D images.
The direct model creation/acquisition technique is the modeling of a 3D model via surface spatial data obtained by a 3D scanner, which is based on photogrammetry, structured light, and laser scanning techniques. In photogrammetry, multiple lenses are used to obtain the spatial data of an object. Once two charge-coupled devices (CCDs) located at a parallax angle take two different object images, the technique calculates the distance between a base plane and points on the object by using their geometrical relationship [27]. The laser scanning technique with triangulation [28] and the structured light-based technique using contour mapping (e.g., moiré topography) are also widely used techniques due to their modeling accuracy and cost efficiency [29].
Template-based model scaling generates a 3D model by scaling body parts categorized by the type of deformation applied: (1) vertical deformation, such as waist height and inside leg length; (2) joint girth (circumference) deformation, such as knee girth and wrist girth; and (3) girth deformation between two joints or regions, such as thigh girth and waist girth [30]. The major advantage of template-based model scaling compared to the other techniques is its effectiveness, because it generates a body model from a template without requiring expensive equipment (e.g., a 3D scanner).
Statistical-based model synthesis was devised to overcome the inefficiency of the direct model creation/acquisition technique in terms of cost and labor. Instead of observing and measuring all body parts every time, this technique builds a homogeneous model database and analyzes shape variation with principal component analysis (PCA) [31]. Once the database is constructed, the dominant variables that determine the 3D model are analyzed and extracted. By adjusting these variables, the target 3D body is modeled. Within this technique, the Shape Completion and Animation of PEople (SCAPE) and Skinned Multi-Person Linear (SMPL) models are widely used to understand shape variations across different body shapes [32,33].
Image-based reconstruction focuses on modeling a 3D body model from an existing 2D image. Since it does not require the extensive image collection efforts of the other three techniques, it is known as the most cost-effective technique. One popular model is DeepCut [34], which uses a convolutional neural network (CNN) to reconstruct the posture of a human body via partitioning and detection of body parts. Bogo et al. [35] proposed an advanced framework called SMPLify, based on CNN and SMPL. SMPLify identifies 2D body joint locations using DeepCut, and the derived 2D body joints are used to calibrate the pose ($\theta$) and shape ($\beta$) parameters of the SMPL model. To fit the pose and shape to the 2D body joints, SMPLify minimizes an objective function that is the sum of a joint-based data term $E_J(\beta, \theta; K, J_{est})$, pose priors $E_a(\theta)$ and $E_\theta(\theta)$, a shape prior $E_\beta(\beta)$, and a penalty term $E_{sp}(\theta; \beta)$. Equation (1) gives the objective function of SMPLify, where $K$ denotes the camera parameters, $\lambda_\theta$, $\lambda_a$, $\lambda_{sp}$, and $\lambda_\beta$ are scalar weights, and $J_{est}$ denotes the estimated 2D joints:

$E(\beta, \theta) = E_J(\beta, \theta; K, J_{est}) + \lambda_\theta E_\theta(\theta) + \lambda_a E_a(\theta) + \lambda_{sp} E_{sp}(\theta; \beta) + \lambda_\beta E_\beta(\beta)$  (1)
However, in terms of estimating the human body shape, the SMPLify framework uses only the connection length between two joints to fit the 3D model, so the derived result can have insufficient accuracy. Thus, Lassner et al. [36] added an image silhouette ($S$) and a model silhouette ($\hat{S}$) to the SMPLify framework, extending Equation (1) with the bi-directional distance between $S$ and $\hat{S}$, as shown in Equation (2):

$E_S(\theta, \beta, \gamma; S, K) = \sum_{x \in \hat{S}(\theta, \beta, \gamma)} \mathrm{dist}(x, S)^2 + \sum_{x \in S} \mathrm{dist}(x, \hat{S}(\theta, \beta, \gamma))^2$  (2)
In contrast to the other techniques, the HMR framework [37] has advantages in terms of its training process and output results. First, the HMR framework directly derives 3D mesh parameters from image pixels, while the aforementioned frameworks only estimate 2D joint locations and 3D model parameters. Next, compared to outputting only a 3D skeleton, outputting 3D meshes is more appropriate for representing shapes as well as poses. To make this possible, the HMR framework consists of two steps: (1) the image is encoded through a CNN; then, (2) the encoded latent vector $\Theta = \{\theta, \beta, R, t, s\}$, where $\theta$ is the pose, $\beta$ is the shape, and $R$, $t$, $s$ are the camera parameters, is sent to a 3D regression module that learns to represent the 3D human shape. For the 3D shape represented by this vector, the latent vector $\Theta$ is inferred by minimizing the re-projection error.
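To make the two-step flow concrete, the following is a conceptual sketch, assuming PyTorch, of HMR’s iterative 3D regression: a CNN encoder produces image features, and a regressor repeatedly refines the latent vector $\Theta$ (85 parameters in the original paper: 72 pose, 10 shape, and 3 camera). The flattening encoder and layer sizes here are illustrative stand-ins, not the authors’ network.

```python
import torch
import torch.nn as nn

class HMRSketch(nn.Module):
    """Conceptual sketch of HMR's encoder + iterative 3D regressor."""

    def __init__(self, feat_dim: int = 2048, theta_dim: int = 85, n_iter: int = 3):
        super().__init__()
        # Stand-in for the ResNet image encoder used in the HMR paper.
        self.encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim))
        # Regressor takes [image features, current Theta] and outputs a correction.
        self.regressor = nn.Linear(feat_dim + theta_dim, theta_dim)
        self.theta0 = nn.Parameter(torch.zeros(theta_dim))  # learned initial mean
        self.n_iter = n_iter

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(image)
        theta = self.theta0.unsqueeze(0).expand(feat.size(0), -1)
        for _ in range(self.n_iter):  # iterative error feedback
            theta = theta + self.regressor(torch.cat([feat, theta], dim=1))
        return theta  # 72 pose + 10 shape + 3 camera parameters
```

In training, the re-projection error between the joints of the SMPL mesh posed by $\Theta$ and the 2D joint annotations supplies the loss that this regression minimizes.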

3. Human Body Simulation Using Semantic Segmentation and Image-Based Reconstruction Techniques

This study introduces a human body simulation using the HBSM techniques mentioned in Section 2. In particular, the image-based reconstruction technique is utilized because of its temporal efficiency in constructing the 3D body model, as well as its smoothness in a simulator-driven environment. The image-based reconstruction technique, known as the most cost-effective technique, builds a 3D body model from an existing 2D image without requiring extensive image collection efforts. Figure 1 shows an overview of the proposed simulation, which consists of three modules: (1) anthropometric measurement, (2) model selection, and (3) prediction adjustment.
Once the user inputs a front picture and a background picture via their smart phone camera, image-based reconstruction is conducted through the framework shown in Figure 1. From the given input image data, the framework measures anthropometric variables and compares relevant body models by calculating their differences, in order to demonstrate the closest and most realistic human body change under different exercise plans.
In Figure 1, the anthropometric measurement module extracts anthropometric variables via two processes: image segmentation and 3D body shaping. In particular, to reduce noise in the input image for 3D body shaping, a pretrained DeepLabv3 model is used to segment only the subject’s body from the original input image. Next, the HMR framework (i.e., the selected image-based reconstruction method) is used to produce the 3D human body shape model. The measured anthropometric variables for ten body parts—i.e., circumferences of the waist (natural indentation and omphalion), bust, wrist, neck, thigh, and hip; arm length; and biacromial breadth—are delivered to the model selection module, which identifies the most similar 3D model by calculating the sum of absolute differences (SAD) between the extracted anthropometric variables and the records of the anthropometry database (ADB), which stores historical anthropometric variables and their 3D model files. The ADB is based on the fifth and sixth Size Korea Surveys, which were conducted on a nationally representative sample for each city and county in the country: 29,592 individuals were measured directly, and 6016 individuals were measured as three-dimensional shapes. Direct measurements covered individuals aged 7 to 90 years old, stratified by sex. The direct measurements, taken with anthropometric instruments, comprised 139 human dimensions, and the 3D shape measurements, taken with a full-body scanner, comprised 177 human dimensions (body: 73, foot: 19, head: 45, hand: 19, direct measurement: 21). The prediction adjustment module estimates the variation in anthropometrics over time as affected by the user’s exercise plan. The changed anthropometric variables are then used to select the adjusted 3D model.

3.1. Anthropometric Measurement

This module is devised to enhance time efficiency by resolving the limitations of traditional direct measurement approaches, such as Martin’s anthropometric technique [38] and the measurement techniques using the 3D scanners mentioned in Section 2, while maintaining measurement accuracy. Figure 2 represents the process of the anthropometric measurement module.
Initially, two input images (i.e., a front body image and a background image) are obtained from a smart phone camera in order to generate a 3D model that closely matches the subject’s actual body in the 3D body shaping process and to measure the anthropometric variables accurately. To make this possible, an image segmentation process is conducted. Given the module’s objective of measuring the subject’s anthropometric variables from images, a semantic segmentation method is selected. However, in conventional semantic segmentation networks, only the most salient features are retained when the spatial resolution is reduced by pooling operations, so detailed information about the original image is discarded. This can cause difficulty in classifying the outer pixels of an object (e.g., when classifying human pixels in natural images, the outer pixels of a human can be misclassified as the background class). To resolve this problem, DeepLabv3 [39] is adopted in the proposed module. The design of DeepLabv3 involves several critical components: (1) Atrous convolution, which allows the model to capture multi-scale contextual information without losing resolution, crucial for accurately segmenting the human body in images; (2) the Xception architecture [40] as the backbone network, providing a deep and efficient feature extraction mechanism that leverages depthwise separable convolutions to improve performance and speed; and (3) down-sampling strategies that address the challenge of resolution loss, helping to retain important spatial information and ensuring that the segmented output preserves the details necessary for accurate anthropometric measurements.
The processing flow of the model involves several steps. Initially, the input images are preprocessed to normalize and resize them to a consistent resolution. Next, feature extraction is performed using the Xception backbone to capture detailed representations of the input images. Atrous Spatial Pyramid Pooling (ASPP) is then applied, which uses Atrous convolutions at multiple rates to capture context at different scales; and finally, the decoder refines the segmentation map, producing a high-resolution output. There are specific constraints on the input images to ensure accurate segmentation: for the front image, the subject should be clearly visible, with minimal occlusion and appropriate lighting; for the background image, a plain or simple background is preferable to reduce the complexity of the segmentation task. However, DeepLabv3 is robust enough to handle varying background complexities to a reasonable extent. Background complexity can affect segmentation, but the robustness of DeepLabv3, trained on diverse datasets like MS-COCO [41] and PASCAL VOC [42], helps mitigate these effects. The model’s ability to generalize across different backgrounds ensures reliable performance in varied real-world scenarios.
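As a concrete illustration of this segmentation step, the following is a minimal sketch assuming PyTorch and torchvision. Torchvision’s DeepLabv3 ships with ResNet and MobileNetV3 backbones rather than the Xception and Mobilenetv2 backbones used in this study, so the MobileNetV3 variant is used here as a stand-in; the 513-pixel input size follows the experiments in Section 4.

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_mobilenet_v3_large

PERSON_CLASS = 15  # 'person' in the 21-class PASCAL VOC label set

model = deeplabv3_mobilenet_v3_large(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.Resize(513),  # input resolution used in the experiments
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def segment_person(image_path: str) -> torch.Tensor:
    """Return a boolean mask that is True on pixels classified as 'person'."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)      # (1, 3, H, W)
    with torch.no_grad():
        logits = model(x)["out"]          # (1, 21, H, W) per-class scores
    return logits.argmax(dim=1)[0] == PERSON_CLASS
```

The resulting mask can be used to suppress background pixels before the image is passed to the 3D body-shaping process.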
After the human segmentation process, the 3D body-shaping process, which uses the segmented subject image, is conducted. This process uses the aforementioned HMR framework to construct 3D models based on SMPL model fitting to the segmented subject image.
Lastly, using the vertices of the constructed 3D model in the area to be measured (e.g., the length between the vertices at the crown of the head and the sole is used to measure stature, and the perimeter of the vertices on the waist is used to measure waist circumference), the stature is set as the reference value, and the other anthropometric variables (i.e., circumferences of the waist (natural indentation and omphalion), bust, wrist, neck, thigh, and hip; arm length; and biacromial breadth) are inferred from their proportional relationship to stature.
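This proportional inference can be sketched as follows, assuming NumPy: the stature measured on the mesh is mapped to the user’s known stature, and the resulting scale factor converts the other mesh-space measurements into centimetres. The function and dictionary layout are illustrative, not the authors’ implementation.

```python
import numpy as np

def scale_measurements(vertices: np.ndarray,
                       known_stature_cm: float,
                       mesh_measurements: dict[str, float]) -> dict[str, float]:
    """vertices: (N, 3) mesh vertices; mesh_measurements: values in mesh units."""
    # Mesh-space stature: vertical extent from sole to crown (y-up assumed).
    mesh_stature = vertices[:, 1].max() - vertices[:, 1].min()
    scale = known_stature_cm / mesh_stature  # cm per mesh unit
    return {name: value * scale for name, value in mesh_measurements.items()}
```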

3.2. Model Selection

This module visualizes the 3D model that is most similar to the subject’s body, using the ADB and the anthropometric variables given by the anthropometric measurement module.
As described in Figure 3, eleven anthropometric variables (i.e., stature; circumferences of the waist (natural indentation and omphalion), bust, wrist, neck, thigh, hip, and upper arm; arm length; and biacromial breadth) derived from the anthropometric measurement module, together with the gender and age of the subject, are used as inputs of this module.
A body type for the subject’s age and gender group (e.g., a male in his 40s) is provided (e.g., male in his 40s, reverse triangular (40rtm) or average (40avgm)). The comparison stage identifies the best 3D representation with the minimum sum of absolute differences (MINSAD) between the anthropometric variables of the subject’s body parts and the stored anthropometric variables in the ADB (see Table 1 for more detail). Equation (3) is used in this process, where $i$ is a row index in the ADB and $j$ is a column index in the ADB. In the equation, $X_{i,j}$ is the $j$th body part in the $i$th row of the ADB, and $Y_j$ is the $j$th body part of the subject’s anthropometric data. This module identifies the appropriate 3D model by computing the MINSAD over all relevant body types in the ADB.
$\underset{i}{\operatorname{argmin}}\; SAD_i = \sum_{j=0}^{10} \left| X_{i,j} - Y_j \right|$  (3)
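A minimal sketch of this selection step, assuming NumPy and an array layout in which each ADB row holds the eleven stored variables, could look as follows; the function name and layout are illustrative.

```python
import numpy as np

def select_model(adb: np.ndarray, subject: np.ndarray) -> int:
    """adb: (n_models, 11) stored variables X; subject: (11,) measured variables Y.

    Returns the row index of the most similar stored 3D model.
    """
    sad = np.abs(adb - subject).sum(axis=1)  # SAD_i = sum_j |X_ij - Y_j|
    return int(np.argmin(sad))               # argmin_i SAD_i
```

The returned index identifies the .obj model file (first row of Table 1) to visualize.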

3.3. Prediction Adjustment

Figure 4 shows the pseudo code of the prediction adjustment module. Once the anthropometric variable set (A) from the model selection module is loaded, the exercise effect is chosen according to the selected exercise plan (e.g., walking and leg cycling, 60 min, 6 days a week, for 12 weeks) from Table 2. The module calculates the body measurement values described in Section 3.1 using the Exercise Effect Database (EEDB) values listed in Table 2. Equation (4) is used for this calculation:
$A_{predict,i} = A_{original,i} \cdot E_{n,i}$  (4)
where $A_{original,i}$ is the $i$th element of the original anthropometric variables, $A_{predict,i}$ is the changed body measurement computed from $A_{original,i}$, and $E_{n,i}$ is the effect on the $i$th anthropometric variable given by the $n$th exercise plan of the EEDB (see Table 2 and Table 3). In the EEDB, each exercise plan gives the change in weight and in waist, bust, upper arm, thigh, and hip circumference according to the type of exercise, gender, and how the exercise is performed (i.e., weeks, exercises per week, iterations per set, number of sets, and minutes per set). Through this procedure, the predicted anthropometric variables are passed through the model selection module again, finally resulting in the visualization of the 3D model and its body type.
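The following is a sketch of this adjustment, assuming Python and interpreting the EEDB entries as percentage changes (the values in Table 2 are given in %), applied multiplicatively as the factor $(1 + E_{n,i}/100)$; the dictionary layout and names are illustrative.

```python
def predict_measurements(original: dict[str, float],
                         eedb_plan: dict[str, float]) -> dict[str, float]:
    """original: body part -> value (cm or kg); eedb_plan: body part -> % change."""
    predicted = dict(original)  # parts not affected by the plan stay unchanged
    for part, pct_change in eedb_plan.items():
        predicted[part] = original[part] * (1 + pct_change / 100.0)
    return predicted

# Example: Plan A reduces waist circumference by 2.32% (Table 2).
adjusted = predict_measurements({"waist_omphalion": 84.0},
                                {"waist_omphalion": -2.32})
```

The adjusted measurements are then fed back into the model selection module of Section 3.2.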

3.4. Data Collection

There are two main simulator factors in this study: (1) anthropometry and (2) exercise variables. In this section, we describe each factor and how it is used in the simulator.

3.4.1. Anthropometry

Anthropometry has long been studied in various fields, such as clothing, ergonomics, and physiology. In this section, we introduce anthropometric measurement techniques, from direct measurement to recently studied techniques using 3D body scanners. In the past, anthropometry focused on studying systemic correlations among human body parts, and one representative measure, body surface area (BSA), received particular attention. Accordingly, a number of techniques for measuring BSA were developed: coating, surface integration, and triangulation [49]. Each technique was widely used for some time; however, they remain very laborious and time consuming. To overcome these limitations, 3D scanners are now widely used in anthropometric surveys, such as the Civilian American and European Surface Anthropometry Resource (CAESAR) [50]. Of course, 3D scanner techniques have advantages over manual measurements, such as Martin’s anthropometric technique, in terms of labor and time effectiveness. Nevertheless, according to Daanen et al. [51], scan-derived body circumference measures are slightly larger than manually obtained values, so accuracy is assessed by comparing manual and scan-derived measurements. Therefore, we collected anthropometric data based on both Martin’s anthropometric technique and the 3D scanner technique.
To collect anthropometric data, we referred to the fifth and sixth Size Korea Surveys, which were conducted under ISO 15535 [52]; 29,592 people were measured directly, and 6016 people were measured as 3D shapes, for each city and county nationwide. Direct measurements were taken from males and females aged 7 to 90 years old, and the 3D scans covered males and females aged 8 to 75 years old. The direct measurements, taken with anthropometric instruments, covered 139 human dimensions, and the 3D shape measurements, taken with a full-body 3D scanner, covered 177 human dimensions (body: 73, foot: 19, head: 45, hand: 19, direct measurement: 21). Exercise plans and the affected body areas are also considered using these anthropometrics.
Various body type classification techniques exist, such as those of Sheldon et al. [53] and Rasband and Liechty [54]. However, since Sheldon’s classification uses skinfold variables, which are hard to measure, we classified the 3D models according to Rasband’s scheme, which is widely used in the clothing industry and is based on length and circumference variables. According to Rasband and Liechty [54], body types can be classified into eight types (ideal, triangular, inverted triangular, rectangular, hourglass, diamond, tubular, and rounded). To classify these body types with cluster analysis, each body type was considered in terms of a pattern cluster and a size cluster. The pattern cluster represents the body type of the group and indicates which parts of the human body are the same as or different from the standard body type. The size cluster, in turn, provides information on the size distribution of the human characteristics of the subjects belonging to the pattern cluster. After the clusters were formed, they were interpreted according to the characteristics of the center position (average) of each cluster; e.g., if the first factor in a cluster represents the ‘torso’ and its value is very large, the subjects in that cluster can be said to have a very thick torso. For this interpretation, each factor score is assumed to follow the standard normal distribution, and the area under the probability density function (pdf) is divided into 10 equal sections, so the first section covers the smallest 10% of factor scores. That is, the first section corresponds to factor scores below −1.28, and the sections are labeled P1 to P10 according to which section a factor score falls into. Table 4 lists the clusters formed by size characteristics, and Table 5 lists the resulting body types for people in their 40s. In this way, we can classify the subject’s body type and interpret what it means.
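The P1-P10 coding described above can be sketched as follows, assuming SciPy: factor scores are assumed to follow a standard normal distribution whose area is divided into ten equal-probability sections, so the section boundaries are the normal deciles (the first boundary is −1.28).

```python
import numpy as np
from scipy.stats import norm

# Decile boundaries of the standard normal: [-inf, -1.28, -0.84, ..., 1.28, inf].
edges = norm.ppf(np.linspace(0.0, 1.0, 11))

def decile_label(factor_score: float) -> str:
    """Return the P1-P10 section that a factor score falls into."""
    section = int(np.searchsorted(edges, factor_score))
    return f"P{min(max(section, 1), 10)}"

print(decile_label(-1.5))  # 'P1': factor score below -1.28
print(decile_label(1.16))  # 'P9': e.g., the height factor of cluster 1 in Table 4
```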

3.4.2. Exercise Plan

We wished to determine body changes through the anthropometrics described in Section 3.4.1. To predict relevant body changes, it is necessary to set appropriate independent variables that can cause such changes. Various independent variables can be used to predict body change, such as dietary habits and exercise. We chose exercise as the independent variable because it is closely related to body change and produces different changes depending on which body areas are exercised. To establish the exercise plan data, we used the results of previous experiments on body shape change according to exercise effects. The exercise plans in each experiment were first classified into aerobic and anaerobic exercise according to the type of exercise (e.g., bench press or walking). The classified experimental data were then further divided according to exercise volume, i.e., how much exercise is performed over a set period: for aerobic exercise, how many minutes per week the exercise is performed, and for anaerobic exercise, how many sets are performed over the period. For anaerobic exercise, the conditions were set in consideration of existing studies [44,45,46,47,48] that revealed how the exercise effect differs with the number of sets and the number of iterations per set. By selecting these variables for the exercise plan, each plan can produce different body shape changes; we therefore selected the anthropometrics related to body change: weight, waist circumference (omphalion), bust circumference, thigh circumference, and hip circumference.

4. Experiments

4.1. Scenario

Experiments were conducted to determine the optimal conditions of the human segmentation process for the image-based reconstruction method in the anthropometric measurement process. Specifically, the experiments verified the necessity of the human segmentation process and, where segmentation was required, determined the architecture and training datasets of the pretrained segmentation models. In addition, 174 subjects with various demographic characteristics were selected to prevent the inaccuracies that could occur if subjects were confined to a specific age and gender, considering that the experiment used 2D images together with directly measured anthropometric variables of the subjects. Table 6 summarizes the demographic characteristics of the subjects.

4.2. Results

First, to determine the necessity of the human segmentation process, a paired t-test was performed on the actual measured body size values and the body size values inferred through the HMR-based technique without the human segmentation process. Table 7 describes the difference between the actual body measurements (direct measurement) and the body measurements based on the 3D model created from the original 2D image without removing the background under the HMR-based technique (i.e., no segmentation). For a body dimension that can be confirmed relatively accurately even in 2D images (i.e., stature), accurate values were estimated, but for other body parts that include volume information, differences of as little as 0.90% (arm length) and as much as 4.30% (thigh circumference) were observed. The reason for this result is that, during the creation of the 3D model, the size parameters of body parts that require volume information, such as thighs or arms, are predicted relatively inaccurately because they are inferred together with the surrounding background. When a paired t-test is conducted between the two sets of values in Table 7, the p-value at α = 0.05 is 0.0268, which means that there is a significant difference between the two measurement methods. This result confirms that the HMR-based technique without the human segmentation process is an inadequate methodology for predicting body dimensions when reconstructing 3D models from 2D images.
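The validation statistic used throughout this section can be reproduced as follows, assuming SciPy; the arrays below are illustrative placeholders, not the study’s data.

```python
from scipy import stats

direct    = [169.09, 83.41, 95.2, 17.8]  # illustrative direct measurements (cm)
estimated = [169.10, 84.90, 95.6, 17.9]  # illustrative model-derived values (cm)

t_stat, p_value = stats.ttest_rel(direct, estimated)
# The two methods are treated as statistically indistinguishable when p > 0.05.
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```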
For the semantic segmentation combined with the HMR framework, we considered which pretrained human segmentation model should be selected for a rapid measurement process. The considered pretrained models were (1) the Xception architecture [40] trained on MS-COCO [41] and the PASCAL VOC training-validation dataset, (2) the Xception architecture trained on MS-COCO and the PASCAL VOC training-augmented dataset, (3) the Mobilenetv2 [55] architecture trained on MS-COCO and the PASCAL VOC training-validation dataset, and (4) the Mobilenetv2 architecture trained on MS-COCO and the PASCAL VOC training-augmented dataset. As in Table 7, body dimensions that can be identified even in 2D images, such as stature, were predicted accurately by all methodologies. However, for body parts requiring volume information, differences were observed depending on the methodology, with the highest average difference of 3.02% observed in neck circumference (excluding stature) and the lowest difference of 0.52% in arm length.
According to the measured results described in Table 8, the p-values of the paired t-tests at α = 0.05 are 0.0682, 0.09, 0.3483, and 0.3434, respectively, which means that the measurement techniques using pretrained models (1), (2), (3), and (4) exhibit no significant difference from the direct measurement approach. However, considering that the shape of the body can vary according to each anthropometric variable, the human segmentation process based on conditions (3) and (4) is optimal for modeling the body through the image-based reconstruction process. Figure 5 shows examples of segmented images for each condition. In Figure 5, there is a minor difference between the actual body dimensions and the dimension values from the inferred 3D model because the background around the armpits and between the legs, where the gap between the body and the clothes is relatively narrow, was not accurately removed. Although these aspects need improvement, we can still confirm that more accurate body dimension prediction is possible through the proposed segmentation process than with the existing HMR approach.
In addition, we conducted computational complexity experiments on the Xception architecture and the Mobilenetv2 architecture, which are the main methodologies used in the semantic segmentation. In the process of converting 513 × 513 image inputs into 3D models, the Xception architecture was applied to conditions (1) and (2), and the Mobilenetv2 architecture was applied to conditions (3) and (4). The results of measuring the FLOPs (floating point operations), number of parameters (#param.), and GPU utilization (%) for the two architectures are described in Table 9. The experiment was conducted on a Dell Precision workstation (Intel Xeon Silver 4210 processor, Nvidia Quadro RTX 4000, Ubuntu 20.04; Dell, Austin, TX, USA).
In Table 9, conditions (3) and (4) using Mobilenetv2 show 9.8 GFLOPs, 2.1 M parameters, and 22.3% GPU usage, all more efficient than the Xception architecture in terms of computational complexity. This is because the Mobilenetv2 architecture has fewer filters and a shallower network structure than the Xception architecture, and it imposes less computational demand and resource usage by applying the linear bottleneck structure.
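Parameter counts such as those in Table 9 can be reproduced along the following lines, assuming torchvision. Torchvision does not ship Xception or Mobilenetv2 segmentation backbones, so the ResNet-50 and MobileNetV3 variants are used below purely to illustrate the counting procedure; FLOPs would additionally require a profiler.

```python
from torchvision.models.segmentation import (
    deeplabv3_mobilenet_v3_large,
    deeplabv3_resnet50,
)

for name, ctor in [("mobilenet_v3 backbone", deeplabv3_mobilenet_v3_large),
                   ("resnet50 backbone", deeplabv3_resnet50)]:
    model = ctor(weights=None)  # architecture only; no pretrained weights needed
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.1f} M parameters")
```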
The results in Table 8 and Table 9 and Figure 5 show that the HMR framework is suitable for creating 3D models using Mobilenetv2. Accordingly, additional experiments were conducted to secure the general applicability of the DeepLabv3 model with the Mobilenetv2 architecture. That is, the experiments were conducted on combinations of the Atrous rates of the Atrous convolutions in the ASPP (Atrous Spatial Pyramid Pooling) module, which is used to detect information at various scales and integrate the global context of the image in the DeepLabv3 model, in order to derive the optimal structure of the ASPP module. An example of each Atrous rate of the Atrous convolution is shown in Figure 6. The experiment was conducted under the same hardware conditions as the experiment in Table 9, with three conditions for the Atrous rates of the Atrous convolution in the ASPP module: (1) a single Atrous rate of 12, (2) two Atrous rates of 6 and 12, and (3) three Atrous rates of 6, 12, and 18. Semantic segmentation was performed and the IoU (Intersection over Union; %), FLOPs, and #param. under each condition were measured. The experimental results are shown in Table 10.
The experimental results show that the ASPP module consisting of three-layer Atrous convolution with Atrous rates of 6, 12, and 18 (ASPP condition (3)) has the highest IoU, at 73.1%. However, in terms of computational complexity, ASPP conditions (1) and (2) showed superior performance, with 8.8 GFLOPs and 1.9 M parameters, and 9.1 GFLOPs and 2.0 M parameters, respectively, compared to 9.8 GFLOPs and 2.1 M parameters for ASPP condition (3). Considering these results, it was confirmed that setting the conditions of the ASPP module according to the specifications of the computational hardware used to segment the user’s 2D image, rather than fixing a single methodology, allows for a wide range of applications on various computational hardware (such as low-cost PCs and mobile devices).
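To make the three conditions concrete, the following is a simplified ASPP head, assuming PyTorch; it is not the paper’s implementation, and image pooling and batch normalization are omitted for brevity.

```python
import torch
import torch.nn as nn

class SimpleASPP(nn.Module):
    """Simplified ASPP: one dilated 3x3 branch per Atrous rate, then a 1x1 projection."""

    def __init__(self, in_ch: int, out_ch: int, rates: tuple[int, ...]):
        super().__init__()
        # padding = rate keeps the spatial size unchanged for a 3x3 kernel.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

# The three experimental conditions from Table 10:
aspp_1 = SimpleASPP(256, 256, rates=(12,))
aspp_2 = SimpleASPP(256, 256, rates=(6, 12))
aspp_3 = SimpleASPP(256, 256, rates=(6, 12, 18))
```

Larger rate sets add branches (and hence FLOPs and parameters), which matches the trade-off between IoU and complexity observed in Table 10.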
As a result, the selected measurement method, which performs human segmentation based on the Mobilenetv2 architecture before modeling the 3D human body with the HMR framework, can produce reasonable results while overcoming the limitations of existing measurement methods, such as expensive 3D scanners, long measurement times, and restrictive clothing requirements.

5. Conclusions

This study proposes a human body simulation that can predict body changes induced by selected exercise plans, for disease prevention and even aesthetic purposes. The proposed simulation is based on a body measurement technique using a 3D body model built from 2D images, which has the advantage of being time- and cost-effective compared to conventional expensive 3D scanners and time-consuming direct measurement approaches. Previous studies have attempted to reconstruct three-dimensional images from two-dimensional images. However, this study is notable for measuring each part of the body through a three-dimensional model created for the purpose of visualizing three-dimensional changes in the body according to an exercise plan, which is achieved by aggregating existing studies into an exercise effect database. Experiments conducted on subjects with various demographic characteristics to verify the validity of the proposed simulation demonstrate that it is sufficiently accurate and versatile: the paired t-test between measurements from the constructed models and actual anthropometric measurements yielded a p-value of 0.3483. In addition, considering that the proposed simulator can predict body changes according to various exercise plans, it is expected to be widely used in the recently growing fields of preventive treatment and personalized healthcare.
This study can be extended through future work on body change prediction. The prediction of body shape change in the proposed simulator is based on existing body change experiments that consider only exercise plans. However, since body shape changes are also significantly affected by eating habits, incorporating dietary factors in future research is expected to yield more accurate and realistic predictions of body shape changes.

Author Contributions

Conceptualization, J.S., S.Y. and S.K.; methodology, J.S. and S.K.; software, J.S. and S.K.; validation, J.S. and S.K.; formal analysis, J.S. and S.K.; investigation, J.S., S.Y. and S.K.; resources, J.S., S.Y. and S.K.; writing—original draft, J.S. and S.K.; writing—review and editing, J.S., S.Y. and S.K.; visualization, J.S. and S.K.; funding acquisition, S.Y.; supervision, S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a National Research Foundation of Korea (NRF) grant (No. 2023R1A2C2004252).

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the privacy of the participants.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chawla, N.V.; Davis, D.A. Bringing big data to personalized healthcare: A patient-centered framework. J. Gen. Intern. Med. 2013, 28, 660–665. [Google Scholar] [CrossRef] [PubMed]
  2. Andreu-Perez, J.; Leff, D.R.; Ip, H.M.D.; Yang, G.-Z. From wearable sensors to smart implants—Toward pervasive and personalized healthcare. IEEE Trans. Biomed. Eng. 2015, 62, 2750–2762. [Google Scholar] [CrossRef] [PubMed]
  3. Rosenberg, D.; Kadokura, E.A.; Bouldin, E.D.; Miyawaki, C.E.; Higano, C.S.; Hartzler, A.L. Acceptability of Fitbit for physical activity tracking within clinical care among men with prostate cancer. AMIA Annu. Symp. Proc. 2016, 2016, 1050. [Google Scholar] [PubMed]
  4. Turakhia, M.P.; Desai, M.; Hedlin, H.; Rajmane, A.; Talati, N.; Ferris, T.; Desai, S.; Nag, D.; Patel, M.; Kowey, P. Rationale and design of a large-scale, app-based study to identify cardiac arrhythmias using a smartwatch: The Apple Heart Study. Am. Heart J. 2019, 207, 66–75. [Google Scholar] [CrossRef] [PubMed]
  5. Wyatt, K.D.; Poole, L.R.; Mullan, A.F.; Kopecky, S.L.; Heaton, H.A. Clinical evaluation and diagnostic yield following evaluation of abnormal pulse detected using Apple Watch. J. Am. Med. Inform. Assoc. 2020, 27, 1359–1363. [Google Scholar] [CrossRef] [PubMed]
  6. Graham, H.; White, P.C.L. Social determinants and lifestyles: Integrating environmental and public health perspectives. Public Health 2016, 141, 270–278. [Google Scholar] [CrossRef] [PubMed]
  7. Shaw, J.W.; Horrace, W.C.; Vogel, R.J. The determinants of life expectancy: An analysis of the OECD health data. South. Econ. J. 2005, 71, 768–783. [Google Scholar]
  8. Atiq, M.K.; Mehmood, K.; Niaz, M.T.; Kim, H.S. Energy-aware optimal slot allocation scheme for wearable sensors in first responder monitoring system. Int. J. Ad Hoc Ubiquitous Comput. 2019, 31, 103–111. [Google Scholar] [CrossRef]
  9. ISO/IEEE 11073-20601:2016; Health Informatics—Personal Health Device Communication—Part 20601: Application Profile—Optimized Exchange Protocol. ISO: Geneva, Switzerland, 2016.
  10. Anliker, U.; Ward, J.A.; Lukowicz, P.; Troster, G.; Dolveck, F.; Baer, M.; Keita, F.; Schenker, E.B.; Catarsi, F.; Coluccini, L.; et al. AMON: A wearable multiparameter medical monitoring and alert system. IEEE Trans. Inf. Technol. Biomed. 2004, 8, 415–427. [Google Scholar] [CrossRef] [PubMed]
  11. Gozani, S.N. Science Behind QuellTM Wearable Pain Relief Technology for Treatment of Chronic Pain. NeuroMetrix, Inc. 2015. Available online: https://pdfs.semanticscholar.org/1be4/fe3ce9f3d0bb81a55f67bee1859c71fb36f9.pdf (accessed on 5 March 2024).
  12. Diener, H.C.; Ashina, M.; Durand-Zaleski, I.; Kurth, T.; Lantéri-Minet, M.; Lipton, R.B.; Ollendorf, D.A.; Pozo-Rosich, P.; Tassorelli, C.; Terwindt, G. Health technology assessment for the acute and preventive treatment of migraine: A position statement of the International Headache Society. Cephalalgia 2021, 41, 279–293. [Google Scholar] [CrossRef] [PubMed]
  13. Walters, M.E.; Dijkstra, A.; de Winter, A.F.; Reijneveld, S.A. Development of a training programme for home health care workers to promote preventive activities focused on a healthy lifestyle: An intervention mapping approach. BMC Health Serv. Res. 2015, 15, 1–12. [Google Scholar] [CrossRef] [PubMed]
  14. Panyod, S.; Ho, C.-T.; Sheen, L.-Y. Dietary therapy and herbal medicine for COVID-19 prevention: A review and perspective. J. Tradit. Complement. Med. 2020, 10, 420–427. [Google Scholar] [CrossRef] [PubMed]
  15. Fortune Business Insights. Home Fitness Equipment Market Size, Share & COVID-19 Impact Analysis, by Type (Cardiovascular Training Equipment and Strength Training Equipment), and Sales Channel (Online and Offline), and Regional Forecast, 2021–2028. Fortune Business Insight, Inc. 2021. Available online: https://www.fortunebusinessinsights.com/home-fitness-equipment-market-105118 (accessed on 13 April 2024).
  16. Diel, R.; Lampenius, N.; Nienhaus, A. Cost effectiveness of preventive treatment for tuberculosis in special high-risk populations. Pharmacoeconomics 2015, 33, 783–809. [Google Scholar] [CrossRef] [PubMed]
  17. Patil, V.C.; Parale, G.P.; Kulkarni, P.M.; Patil, H.V. Relation of anthropometric variables to coronary artery disease risk factors. Indian J. Endocrinol. Metab. 2011, 15, 31. [Google Scholar] [CrossRef] [PubMed]
  18. Sung, J.M.; Cho, I.-J.; Sung, D.; Kim, S.; Kim, H.C.; Chae, M.-H.; Kavousi, M.; Rueda-Ochoa, O.L.; Ikram, M.A.; Franco, O.H. Development and verification of prediction models for preventing cardiovascular diseases. PLoS ONE 2019, 14, e0222809. [Google Scholar] [CrossRef] [PubMed]
  19. Hippisley-Cox, J.; Coupland, C.; Vinogradova, Y.; Robson, J.; May, M.; Brindle, P. Derivation and validation of QRISK, a new cardiovascular disease risk score for the United Kingdom: Prospective open cohort study. BMJ 2007, 335, 136. [Google Scholar] [CrossRef] [PubMed]
  20. Brundage, M.; Avin, S.; Clark, J.; Toner, H.; Eckersley, P.; Garfinkel, B.; Dafoe, A.; Scharre, P.; Zeitzoff, T.; Filar, B. The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv 2018, arXiv:1802.07228. [Google Scholar]
  21. Chang, X.; Yi, W.; Lin, X.; Sun, Y. 3D hand reconstruction with both shape and appearance from an RGB image. Image Vis. Comput. 2023, 135, 104690. [Google Scholar] [CrossRef]
  22. Tan, J.; Wang, K.; Chen, L.; Zhang, G.; Li, J.; Zhang, X. HCFS3D: Hierarchical coupled feature selection network for 3D semantic and instance segmentation. Image Vis. Comput. 2021, 109, 104129. [Google Scholar] [CrossRef]
  23. Leibe, B.; Ettlin, A.; Schiele, B. Learning semantic object parts for object categorization. Image Vis. Comput. 2008, 26, 15–26. [Google Scholar] [CrossRef]
  24. Ohashi, T.; Ikegami, Y.; Nakamura, Y. Synergetic reconstruction from 2D pose and 3D motion for wide-space multi-person video motion capture in the wild. Image Vis. Comput. 2020, 104, 104028. [Google Scholar] [CrossRef]
  25. Bowden, R.; Mitchell, T.A.; Sarhadi, M. Non-linear statistical models for the 3D reconstruction of human pose and motion from monocular image sequences. Image Vis. Comput. 2000, 18, 729–737. [Google Scholar] [CrossRef]
  26. Baek, S.-Y.; Lee, K. Parametric human body shape modeling framework for human-centered product design. Comput.-Aided Des. 2012, 44, 56–67. [Google Scholar] [CrossRef]
  27. Kang, T.J.; Kim, S.M. Optimized garment pattern generation based on three-dimensional anthropometric measurement. Int. J. Cloth. Sci. Technol. 2000, 12, 240–254. [Google Scholar] [CrossRef]
  28. Paquette, S. 3D scanning in apparel design and human engineering. IEEE Comput. Graph. Appl. 1996, 16, 11–15. [Google Scholar] [CrossRef]
  29. Mada, S.K.; Smith, M.L.; Smith, L.N.; Midha, P.S. Overview of passive and active vision techniques for hand-held 3D data acquisition. Proc. SPIE 2003, 4877, 16–27 (Opto-Ireland 2002: Optical Metrology, Imaging and Machine Vision). [Google Scholar]
  30. Kasap, M.; Magnenat-Thalmann, N. Parameterized human body model for real-time applications. In Proceedings of the 2007 International Conference on Cyberworlds (CW’07), Hannover, Germany, 24–26 October 2007; pp. 160–167. [Google Scholar]
  31. Malagon-Borja, L.; Fuentes, O. Object detection using image reconstruction with PCA. Image Vis. Comput. 2009, 27, 2–9. [Google Scholar] [CrossRef]
  32. Anguelov, D.; Srinivasan, P.; Koller, D.; Thrun, S.; Rodgers, J.; Davis, J. Scape: Shape completion and animation of people. In ACM SIGGRAPH 2005 Papers; Stanford University: Stanford, CA, USA, 2005; pp. 408–416. [Google Scholar]
  33. Loper, M.; Mahmood, N.; Romero, J.; Pons-Moll, G.; Black, M.J. SMPL: A skinned multi-person linear model. ACM Trans. Graph. (TOG) 2015, 34, 1–16. [Google Scholar] [CrossRef]
  34. Pishchulin, L.; Insafutdinov, E.; Tang, S.; Andres, B.; Andriluka, M.; Gehler, P.V.; Schiele, B. Deepcut: Joint subset partition and labeling for multi person pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4929–4937. [Google Scholar]
  35. Bogo, F.; Kanazawa, A.; Lassner, C.; Gehler, P.; Romero, J.; Black, M.J. Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 561–578. [Google Scholar]
  36. Lassner, C.; Romero, J.; Kiefel, M.; Bogo, F.; Black, M.J.; Gehler, P.V. Unite the people: Closing the loop between 3d and 2d human representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 June 2017; pp. 6050–6059. [Google Scholar]
  37. Kanazawa, A.; Black, M.J.; Jacobs, D.W.; Malik, J. End-to-end recovery of human shape and pose. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7122–7131. [Google Scholar]
  38. Martin, R. Lehrbuch der Anthropologie—In systematischer Darstellung. Rev. Port. De Filos. 1968, 24, 253–254. [Google Scholar]
  39. Yurtkulu, S.C.; Şahin, Y.H.; Unal, G. Semantic segmentation with extended DeepLabv3 architecture. In Proceedings of the 2019 27th Signal Processing and Communications Applications Conference (SIU), Sivas, Turkey, 24–26 April 2019; pp. 1–4. [Google Scholar]
  40. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
  41. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft coco: Common objects in context. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 740–755. [Google Scholar]
  42. Everingham, M.; van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The pascal visual object classes (voc) challenge. Int. J. Comput. Vis. 2010, 88, 303–338. [Google Scholar] [CrossRef]
  43. Bompa, T.O.; di Pasquale, M.; Cornacchia, L. Serious Strength Training; Human Kinetics: Champaign, IL, USA, 2012. [Google Scholar]
  44. Rhea, M.R.; Alvar, B.A.; Ball, S.D.; Burkett, L.N. Three sets of weight training superior to 1 set with equal intensity for eliciting strength. J. Strength Cond. Res. 2002, 16, 525–529. [Google Scholar] [PubMed]
  45. Ostrowski, K.J.; Wilson, G.J.; Weatherby, R.; Murphy, P.W.; Lyttle, A.D. The effect of weight training volume on hormonal output and muscular size and function. J. Strength Cond. Res. 1997, 11, 148–154. [Google Scholar]
  46. Munn, J.; Herbert, R.D.; Hancock, M.J.; Gandevia, S.C. Resistance training for strength: Effect of number of sets and contraction speed. Med. Sci. Sports Exerc. 2005, 37, 1622. [Google Scholar] [CrossRef] [PubMed]
  47. Sarsan, A.; Ardiç, F.; Özgen, M.; Topuz, O.; Sermez, Y. The effects of aerobic and resistance exercises in obese women. Clin. Rehabil. 2006, 20, 773–782. [Google Scholar] [CrossRef] [PubMed]
  48. McTiernan, A.; Sorensen, B.; Irwin, M.L.; Morgan, A.; Yasui, Y.; Rudolph, R.E.; Surawicz, C.; Lampe, J.W.; Lampe, P.D.; Ayub, K.; et al. Exercise effect on weight and body fat in men and women. Obesity 2007, 15, 1496–1512. [Google Scholar] [CrossRef] [PubMed]
  49. Boyd, E. The Growth of the Surface Area of the Human Body. 1935. Available online: https://www.cabidigitallibrary.org/doi/full/10.5555/19361401350 (accessed on 16 April 2024).
  50. Robinette, K.M.; Daanen, H.; Paquet, E. The CAESAR project: A 3-D surface anthropometry survey. In Proceedings of the Second International Conference on 3-D Digital Imaging and Modeling (Cat. No. PR00062), Ottawa, ON, Canada, 8 October 1999; pp. 380–386. [Google Scholar]
  51. Daanen, H.A.M. Circumference estimation using 3D-whole body scanners and shadow scanner. In Proceedings of the Workshop on 3D Anthropometry and Industrial Products Design, Paris, France, 25–26 June 1998; Volume 5, p. 1. [Google Scholar]
  52. ISO 15535:2012; General Requirements for Establishing Anthropometric Databases. ISO: Geneva, Switzerland, 2012.
  53. Sheldon, W.H.; Stevens, S.S.; Tucker, W.B. The Varieties of Human Physique; Harper: New York, NY, USA, 1940. [Google Scholar]
  54. Rasband, J.; Liechty, E.G. Fabulous Fit: Speed Fitting and Alteration; Fairchild Publications, Incorporated: New York, NY, USA, 2006. [Google Scholar]
  55. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
Figure 1. Components of the proposed simulation.
Figure 2. Pseudo code of the anthropometric measurement module.
Figure 3. Pseudo code of the model selection module.
Figure 4. Pseudo code of the prediction adjustment module.
Figure 5. Examples of human segmentation images based on the conditions of the pretrained model: (a) original image; (b) segmented image with Xception architecture trained with coco_voctrainval dataset; (c) segmented image with Xception architecture trained with coco_voctrainaug dataset; (d) segmented image with mobilenetv2 architecture trained with coco_voctrainval dataset; (e) segmented image with mobilenetv2 architecture trained with coco_voctrainaug dataset.
Figure 6. Examples of Atrous Convolution with kernel size 3 × 3 under three different rates of 1, 6, and 12.
Table 1. Sample anthropometric data of subjects.

3D Model File (.obj) | 30e_3dm | 54e_3dm | 57e_3dm | 107e_3dm | 62e_3dm
Stature (cm) | 170 | 172 | 170 | 168 | 182
Waist circumference (natural indentation; cm) | 84 | 85 | 85 | 89 | 93
Waist circumference (omphalion; cm) | 82 | 86 | 84 | 90 | 93
Bust circumference (cm) | 101 | 93 | 95 | 100 | 106
Wrist circumference (cm) | 18 | 17 | 17 | 18 | 18
Neck circumference (cm) | 37 | 38 | 41 | 39 | 42
Arm length (cm) | 56 | 57 | 56 | 54 | 61
Thigh circumference (cm) | 55 | 55 | 53 | 62 | 59
Biacromial breadth (cm) | 45 | 45 | 47 | 43 | 45
Hip circumference (cm) | 95 | 92 | 91 | 97 | 97
Upper arm circumference (cm) | 38 | 42 | 35 | 29 | 31
Body type | 40avgm | 40avgm | 40avgm | 40avgm | 40avgm
Table 2. Exercise Effect Database (EEDB).
Exercise
Plan
Plan APlan BPlan CPlan DPlan E
Exercise code *a, b, c,
d, e, f
a, ba, b, e,
f, g, h,
i, j, k
ca, c, h,
o, p, q
r, sr, s
GenderMMMFFMF
Week121061212481248
Exercise
per week
343346
Iteration
per set
8~1286~88~12
Set number1336 2~3
Minutes
per exercise
3060
Weight (%) 2.022.632.19 −3.37−4.02−1.30−1.90−0.50−1.80
Waist
circumference (%)
−2.32−5.86−1.90−3.20−1.90−1.60
Bust
circumference (%)
1.532.20
Upper arm
circumference (%)
2.274.654.761.891.761.891.76
Thigh
circumference (%)
−0.981.302.941.506.35
Hip
circumference (%)
−2.33−2.33
* See Table 3 [43]; data obtained from (A) Rhea et al. [44], (B) Ostrowski et al. [45], (C) Munn et al. [46], (D) Sarsan et al. [47], and (E) McTiernan et al. [48].
Table 3. Exercise code and primary worked muscle [43].

| Exercise Code | Exercise Name | Primary Worked Muscle |
|---|---|---|
| a | Bench press | Pectoralis major, anterior deltoids, triceps brachii |
| b | Leg press | Rectus femoris, vastus intermedius, vastus medialis, vastus lateralis |
| c | Biceps curl | Biceps brachii, brachialis |
| d | Pull-down | Latissimus dorsi, brachialis, brachioradialis |
| e | Seated row | Latissimus dorsi, trapezius, rhomboids, erector spinae |
| f | Back extension | Erector spinae |
| g | Squat | Rectus femoris, vastus intermedius, vastus medialis, glutes, vastus lateralis |
| h | Leg extension | Vastus medialis, vastus lateralis, rectus femoris, vastus intermedius |
| i | Stiff-leg deadlift | Biceps femoris, semimembranosus, semitendinosus, gluteus maximus |
| j | Leg curl | Biceps femoris, semimembranosus, semitendinosus |
| k | Calf raise | Gastrocnemius |
| l | Shoulder press | Anterior deltoids |
| m | Upright row | Trapezius, anterior deltoids, medial deltoids |
| n | Close grip bench press | Triceps, middle pectoralis major |
| o | Arm extension | Outer triceps, medial triceps, anconeus |
| p | Twisting oblique | Upper rectus abdominis, lower rectus abdominis, serratus muscles, external oblique |
| q | Abdominal crunch | Upper rectus abdominis, middle rectus abdominis |
| r | Walking | - |
| s | Leg cycle | - |
Table 4. Subject groups clustered by size.

| Cluster | Height | Circumference (Thickness) | Torso Length | Shoulder Width | Ratio (%) | Description |
|---|---|---|---|---|---|---|
| 1 | P9 (1.16) | P5 (−0.44) | P6 (0.14) | P6 (0.23) | 21 | Tall height; the rest are normal. |
| 2 | P5 (−0.11) | P6 (0.17) | P7 (0.47) | P2 (−1.24) | 21 | Short torso with very narrow shoulders. |
| 3 | P5 (−0.22) | P7 (0.33) | P2 (−1.24) | P6 (0.123) | 28 | Thick and very short torso. |
| 4 | P3 (−0.55) | P5 (−0.11) | P8 (0.74) | P8 (0.59) | 30 | Short height and long torso with wide shoulders. |
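Table 4 reports per-cluster centroids of standardized size factors. As one hedged way to derive groups of this kind, the sketch below standardizes four size factors and clusters them with k-means; the algorithm choice and the random stand-in data are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical size factors per subject: height, circumference (thickness),
# torso length, shoulder width. Random data stands in for real measurements.
rng = np.random.default_rng(0)
factors = rng.normal(size=(174, 4))  # 174 subjects, as in Table 6

z = StandardScaler().fit_transform(factors)  # z-scores, as reported in Table 4
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(z)

for k in range(4):
    members = z[labels == k]
    ratio = 100.0 * len(members) / len(z)
    print(f"cluster {k + 1}: centroid {members.mean(axis=0).round(2)}, ratio {ratio:.0f}%")
```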
Table 5. Example of body types of subjects.

| Body Type | Ratio (%) | Detailed Characteristic (Compared to Standard Body Type) |
|---|---|---|
| Inverted triangular | 27.7 | Slightly thick torso, wide shoulder width, small head, short hip length |
| Small rectangular | 28.5 | Thin torso, narrow shoulder width, long arm length, big head, short hip length, long limbs |
| Triangular | 20.8 | Normal torso, narrow shoulder width, very short arm length, very small head, long hip length, slightly short limbs |
| Large triangular | 23.0 | Very thick torso, wide shoulder width, very big head, long hip length, very short limbs |
| Total | 100.0 | |
Table 6. Summary of subject demographic characteristics.

| Gender | Age | Number of Subjects | Ratio (%) |
|---|---|---|---|
| Male | 20–29 | 42 | 24 |
| | 30–39 | 5 | 3 |
| | 40–49 | 10 | 6 |
| | 50–59 | 15 | 9 |
| | 60–69 | 13 | 7 |
| | 70– | 8 | 4 |
| Female | 20–29 | 31 | 18 |
| | 30–39 | 3 | 2 |
| | 40–49 | 13 | 7 |
| | 50–59 | 19 | 11 |
| | 60–69 | 12 | 7 |
| | 70– | 3 | 2 |
| Total | | 174 | 100 |
Table 7. Anthropometric variables of the direct measurement approach and the HMR-based technique without segmentation.

| Anthropometric Variable | Direct Measurement | HMR without Segmentation |
|---|---|---|
| Stature (cm) | 169.09 | 169.09 |
| Waist circumference (natural indentation; cm) | 83.41 | 84.90 |
| Waist circumference (omphalion; cm) | 76.90 | 74.23 |
| Bust circumference (cm) | 96.94 | 93.95 |
| Wrist circumference (cm) | 15.27 | 15.11 |
| Neck circumference (cm) | 43.98 | 42.12 |
| Arm length (cm) | 51.20 | 50.74 |
| Thigh circumference (cm) | 57.01 | 54.56 |
| Biacromial breadth (cm) | 48.52 | 46.95 |
| Hip circumference (cm) | 96.94 | 95.86 |
Table 8. Anthropometric variables of the direct measurement approach and each condition.

| Anthropometric Variable | Direct Measurement | Condition (1) | Condition (2) | Condition (3) | Condition (4) |
|---|---|---|---|---|---|
| Stature (cm) | 169.09 | 169.09 | 169.09 | 169.09 | 169.09 |
| Waist circumference (natural indentation; cm) | 83.41 | 82.81 | 82.9 | 82.04 | 82.01 |
| Waist circumference (omphalion; cm) | 76.90 | 78.38 | 78.23 | 77.77 | 77.77 |
| Bust circumference (cm) | 96.94 | 94.89 | 93.95 | 98.06 | 97.14 |
| Wrist circumference (cm) | 15.27 | 15.07 | 15.11 | 15.12 | 15.21 |
| Neck circumference (cm) | 43.98 | 42.03 | 42.12 | 43.14 | 43.32 |
| Arm length (cm) | 51.20 | 50.84 | 50.74 | 51.1 | 51.05 |
| Thigh circumference (cm) | 57.01 | 54.88 | 54.56 | 55.78 | 56.69 |
| Biacromial breadth (cm) | 48.52 | 47.98 | 47.95 | 48.75 | 48.57 |
| Hip circumference (cm) | 96.94 | 96.01 | 96.86 | 95.62 | 95.5 |
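The agreement between a condition and the direct measurements can be checked with a paired t-test. The sketch below applies scipy.stats.ttest_rel to the ten variables of Table 8 for condition (4); it illustrates the procedure on this table only and is not a reproduction of the study's reported result, which covers the full experiment.

```python
from scipy import stats

# Direct measurements vs. condition (4) estimates from Table 8 (cm).
direct = [169.09, 83.41, 76.90, 96.94, 15.27, 43.98, 51.20, 57.01, 48.52, 96.94]
cond_4 = [169.09, 82.01, 77.77, 97.14, 15.21, 43.32, 51.05, 56.69, 48.57, 95.50]

t_stat, p_value = stats.ttest_rel(direct, cond_4)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p > 0.05: no significant difference
```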
Table 9. Computational complexity of semantic segmentation with 513 × 513 image inputs under the Xception architecture and the MobileNetV2 architecture.

| Computational Metric | Xception Architecture (Conditions (1) and (2)) | MobileNetV2 Architecture (Conditions (3) and (4)) |
|---|---|---|
| FLOPs | 35.5 G | 9.8 G |
| # param. | 22.8 M | 2.1 M |
| GPU usage (%) | 65.6 | 22.3 |
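Parameter counts like those in Table 9 can be reproduced directly in PyTorch, and FLOPs with a profiler. The sketch below assumes fvcore is installed and uses torchvision's DeepLabV3-MobileNetV3 network as a stand-in, since the paper's exact MobileNetV2 and Xception variants are not part of torchvision.

```python
import torch
from fvcore.nn import FlopCountAnalysis
from torchvision.models.segmentation import deeplabv3_mobilenet_v3_large

# Stand-in segmentation network; the paper's MobileNetV2- and
# Xception-based DeepLab variants would be profiled the same way.
model = deeplabv3_mobilenet_v3_large(weights=None).eval()

n_params = sum(p.numel() for p in model.parameters())
flops = FlopCountAnalysis(model, torch.randn(1, 3, 513, 513)).total()
print(f"params: {n_params / 1e6:.1f} M, FLOPs: {flops / 1e9:.1f} G")
```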
Table 10. Experimental results of semantic segmentation with ASPP module conditions: (1) ASPP module with an atrous rate of 12; (2) ASPP module with atrous rates of 6 and 12; and (3) ASPP module with atrous rates of 6, 12, and 18.

| Metric | ASPP Condition (1) | ASPP Condition (2) | ASPP Condition (3) |
|---|---|---|---|
| FLOPs | 8.8 G | 9.1 G | 9.8 G |
| # param. | 1.9 M | 2.0 M | 2.1 M |
| IoU (%) | 72.4 | 72.7 | 73.1 |
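The IoU metric in Table 10 is the ratio of the overlap to the union of the predicted and ground-truth masks. A minimal sketch for binary person masks follows; the multi-class mean IoU used in segmentation benchmarks averages this quantity per class.

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union for two boolean masks of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection / union) if union else 1.0

a = np.zeros((4, 4), bool); a[1:3, 1:3] = True   # 2 x 2 predicted mask
b = np.zeros((4, 4), bool); b[1:4, 1:4] = True   # 3 x 3 ground-truth mask
print(iou(a, b))  # 4 / 9 = 0.444...
```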