Article

Developing a Generalized Regression Forecasting Network for the Prediction of Human Body Dimensions

1 College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
2 School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China
3 School of Information Science and Technology, Zhejiang Shuren University, Hangzhou 310015, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(18), 10317; https://doi.org/10.3390/app131810317
Submission received: 12 August 2023 / Revised: 12 September 2023 / Accepted: 13 September 2023 / Published: 14 September 2023
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract: With the increasing demand for intelligent custom clothing, the development of highly accurate human body dimension prediction tools using artificial neural network technology has become essential to ensuring high-quality, fashionable, and personalized clothing. Although support vector regression (SVR) networks have demonstrated state-of-the-art (SOTA) performances, they still fall short on prediction accuracy and computation efficiency. We propose a novel generalized regression forecasting network (GRFN) that incorporates kernel ridge regression (KRR) within a multi-strategy multi-subswarm particle swarm optimizer (MMPSO)-SVR nonlinear regression model that applies a residual correction prediction mechanism to enhance prediction accuracy for body dimensions. Importantly, the predictions are generated using only a few basic body size parameters from small-batch samples. The KRR regression model is employed for preliminary residual sequence prediction, and the MMPSO component optimizes the SVR parameters to ensure superior correction of nonlinear relations and noise data, thereby yielding more accurate residual correction value predictions. The GRFN hybrid model is superior to SOTA SVR models and increases the root mean square performance by 91.73–97.12% with a remarkably low mean square error of 0.0054 ± 0.07. This outstanding advancement sets the stage for marketable intelligent apparel design tools for the fast fashion industry.

1. Introduction

With the many recent improvements in global living standards, human clothing requirements now prioritize diversity and personalization with small-batch, made-to-measure fashion solutions. As such, body dimension and posture measurement prediction supported by artificial neural networks (ANNs) has become a crucial human–machine engineering field. From these tools, new computer-aided ergonomic garment design tools can be paired with three-dimensional (3D) non-contact body scanning and virtual fitting techniques that avoid the use of various calipers, tape measures, and Martin-style anthropometric measuring tools [1]. These anthropometric methods utilize stature percentiles for body segments, which can lead to large errors in practice. Hence, the fashion industry demands more cost-effective, accurate, and efficient non-percentile anthropometric methods that cater to clothing and human-centered product designs [2].
In recent years, research on predicting human body dimensions and shapes using artificial intelligence technologies has been widely conducted, mainly covering prediction methods such as multiple linear regression (MLR) [3,4,5], back-propagation (BP) ANNs [6,7,8,9], radial basis function (RBF) ANNs [2,10,11], and support vector regression (SVR) methods [12,13,14]. Su et al. [3] used an MLR model that incorporates easily measurable features, including the thickness/width parameters of cross-sections, with the objective of constructing additional lower-body characteristics. Galada and Baytar [4] used the U.S. size database to train a new lasso regression method that establishes relationships between key predictor variables related to crotch length to improve the fit of bifurcated garments. Chan et al. [5] utilized 3D anthropometric data to forecast the relevant parameters in shirt pattern design by deploying an ANN with a linear regression model to reveal the relationship between patterns and body measurements. However, as the nonlinear relationships between different body parts become more pronounced, MLR models struggle to capture them accurately. Liu et al. [6] developed a BP-ANN that uses anthropometric data to predict human body dimensions, offering robust support for custom clothing. Furthermore, Liu et al. [7] introduced digital clothing pressure from virtual try-ons as an input parameter and employed a BPNN model to predict the fit of clothing (i.e., tight, well-fitted, or loose), expanding the potential applications of BPNN methods in the field of clothing design. However, because the initial weights and thresholds of a BPNN are selected randomly, the network may become stuck in local minima, diminishing its fitting performance. To overcome this challenge, Cheng et al. [8] employed a genetic algorithm (GA) to implement a GA-BP-K-means model that clusters data and enhances body shape prediction; the GA is used to optimize the initial parameters of the BPNN. Similarly, Cheng et al. [9] addressed the issue of predicting underwear pressure using an improved GA combined with a BPNN model. The improved GA accelerated the BP neural network's convergence speed and enhanced underwear pressure prediction accuracy. In addition to the BPNN, other ANN models, such as the radial basis function neural network (RBF-NN), are also often used to estimate body sizes. For example, Liu et al. [10] trained a model on clothing knowledge to extract key human body feature parameters using factor analysis with a combined RBF-NN and linear regression method. This enabled the prediction of detailed human body dimensions with a small number of feature parameters. Wang et al. [11] employed an RBF-NN to estimate complex parameters to improve the comfort and adaptability needed for sportswear. The generalized regression neural network of Wang et al. [2] utilizes a particular RBF-NN and demonstrated a high degree of accuracy in predicting 76 specific human body parameters without the need for manual measurements or 3D body scanning. This model has shown remarkable predictive capabilities, in line with the growing trend of predicting human body dimensions rather than relying on direct measurements.
Support vector regression (SVR) networks are among the state-of-the-art (SOTA) ANN models, but they still fall short in prediction accuracy and computation efficiency. Nevertheless, SVR models are the most suitable for nonlinear regression problems in which only small-batch samples are available, and they are robust to outliers. Li and Jing [12] used an SVR network to construct a regression model linking two-dimensional width and depth features to the corresponding circumference sizes of three important measurement parts (i.e., bust, waist, and hip) of young female samples. However, this method relied on cross-sectional data of the human body and cannot avoid the need for 3D scanning. Rativa et al. [13] demonstrated the possibility of superior performance over traditional linear regression methods by employing an SVR model with a Gaussian kernel to estimate height and weight from anthropometric data alone. While the results of this method were reported to be insensitive to race and gender, in practical applications, other unaccounted factors may challenge the model's robustness. Li et al. [14] introduced a data-driven model based on particle swarm optimization (PSO) to optimize the least-squares support vector machine (LSSVM) algorithm. This model was applied to solve the problems of garment style recognition and size prediction for pattern making, with tailoring experience used as the training basis to improve prediction accuracy. However, the ambiguity of the relationships between the variables, together with the diversity of body shapes and the need for precise body dimensions, makes such single direct prediction models unsuitable for the fashion industry. Although their prediction results exhibit a certain level of accuracy, there is ample room for improvement.
Furthermore, anthropometry has found extensive applications in the field of intelligent clothing design. For instance, Wang et al. [15] employed fuzzy logic and genetic algorithms to generate initial clothing patterns. They utilized SVR to learn the quantitative relationships between clothing structural lines, control points, and pattern parameters; these relationships were then used to predict and adjust the pattern parameters, achieving pattern adaptability. Liu et al. [16] proposed a machine learning framework that combines hybrid feature selection and a Bayesian search to estimate missing 3D body measurements, addressing the challenge of incomplete data in 3D body scanning. The study found that this approach enhances the performance of random forest (RF) and XGBoost 0.72, particularly in filling in missing data, where RF outperforms XGBoost. Wang et al. [17] introduced an approach that utilizes multiple machine learning frameworks, including RBF-NN, GA, a probabilistic neural network (PNN), and SVR, for interactive personalized clothing design. This method enhanced the capability of personalized clothing design by estimating body dimensions, generating customized design solutions, quantifying consumer preferences, predicting clothing fit, and self-adjusting design parameters.
The global optimization of parameters can improve prediction accuracy and generalizability, and modernized meta-heuristic swarm intelligence optimization algorithms can simulate individual and group behaviors that are suitable for complex optimization problems. They also rapidly converge with fewer parameters, and are easily implemented [18]. However, multimodal swarm intelligence algorithms cannot conquer optimization problems caused by global exploration and local exploitation imbalances that deprive late iterations of their required data diversity [19]. The literature provides a wealth of PSO improvements related to this via parameter weight adjustments (e.g., linear decay [20], chaotic dynamism [21], and S-shaped decay [22]), population topologies (e.g., static neighborhood [23], dynamic neighborhood [24], and hierarchical structure particle subswarms [25]), and evolutionary learning strategies (e.g., comprehensive learning strategy [26], generalized opposition-based learning strategy [27], orthogonal learning strategy [28], and dimensional learning strategy [29]). As stated by the no-free-lunch theorem [30], the identification of a singular method that can be universally regarded as superior to all others for every task is not feasible. However, owing to discrepancies between the two aspects of global exploration and local exploitation, multi-swarm techniques can be used to maintain the population diversity needed to facilitate information flows within subpopulations. Hence, heterogeneous multi-subpopulation techniques have become effective in enhancing PSO algorithms [31].
In this study, we propose a novel generalized regression forecasting network (GRFN) that combines kernel ridge regression (KRR) prediction with a new multi-strategy, multi-subswarm PSO (MMPSO)-SVR model to considerably reduce large errors in human body dimension prediction. The resulting highly accurate small-batch, data-driven human body dimension prediction scheme requires neither stature-percentile anthropometric measurements nor 3D body reconstruction; our hybrid model needs only a few basic body size parameters to obtain detailed body dimensions. For our experiment, we used the processes of Liu et al. [6] to collect lower-body data from 106 women and applied principal factor analysis so that the KRR-based regression model could establish a multivariate nonlinear correlation map of parameters. This generates preliminary prediction results as a residual sequence that can be used to fit the data linearly. To deal with nonlinear and noisy data, our model takes the residual of the predicted KRR output as input. A residual correction prediction mechanism is employed to improve the fit and predictive performance of our hybrid model, addressing the following aspects: improving prediction accuracy, accounting for unmodeled factors and making adjustments, correcting model bias, and enhancing model robustness. For further optimization, we apply teaching and co-optimization to the MMPSO algorithm, which divides the population into three role-based subgroups: teachers, students, and independent learners. The ability of the MMPSO model to search for the global optimum primarily depends on the diversity of the population. The SVR then balances exploration and exploitation using a diversity-focused multi-strategy search to avoid local optima. Finally, the estimated values of the predicted body parameters are obtained by combining the KRR results with the corrected MMPSO-SVR residuals.
Thus, our hybrid model provides verifiable human body dimension predictions that can work with small-batch samples.
In summary, this study makes the following contributions:
(1)
Our GRFN utilizes a novel KRR-based MMPSO-SVR nonlinear regression network to achieve residual correction prediction. The proposed hybrid model applies a direct approach that utilizes a few basic body measurements to predict other, more detailed human body dimensions from small-batch samples. The results clearly validate the proposed model, as it outperforms state-of-the-art (SOTA) SVR models in terms of both prediction accuracy and reliability.
(2)
Our MMPSO model adopts a teaching–learning co-optimization scheme in which a teacher subgroup performs enhanced self-learning searching and a student subgroup constructs learning exemplars under the guidance of the teacher subswarm. The independent subgroup performs self-perceptive search behaviors, and the MMPSO algorithm enhances population diversity while preventing subswarms from becoming trapped in local optima. Competitive results are achieved in optimization convergence and stability on most classical benchmarks.
(3)
Our GRFN network utilizes just a few basic body size parameters as input, obtains detailed human body dimensions, and establishes correlations between key body size parameters and clothing pattern sample sizes. This model eliminates the need for anthropometric measurements and 3D body reconstructions, making it more accurate and more easily implemented than existing regression models. Consequently, it offers a novel and efficient solution for clothing and human-centered product design.
The rest of this paper is arranged as follows: In Section 2, the research methods are elaborated. Section 3 further describes the GRFN model design alongside the MMPSO algorithm. The effective performance of the proposed hybrid KRR-based MMPSO-SVR model is then reported in Section 4, and a discussion and conclusions for the paper are presented in Section 5, along with directions for future research.

2. Research Methods

2.1. KRR

The KRR regression model employs a kernel function to nonlinearly transform the original sample data and map them to a high-dimensional space [32,33]. By weighting the regularization term with the squared norm of the function, the model effectively prevents overfitting and enhances its generalizability. For the training set $\{(x_i, y_i)\}_{i=1}^{n}$, with $x_i \in \mathbb{R}^{p}$, $x_i$ and $y_i$ are the $i$-th observation and response variables, respectively. The count of observations is denoted as $n$, whereas the count of independent variables is denoted as $p$. The loss function is expressed as follows:

$$f^{*} = \arg\min_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} \left\| f(x_i) - y_i \right\|_2^2 + \lambda \left\| f \right\|_{\mathcal{H}}^2, \quad \lambda > 0, \tag{1}$$

$$f(x_i) = \sum_{j=1}^{n} \hat{\alpha}_j \, \kappa(x_j, x_i), \tag{2}$$

where $\|\cdot\|_{\mathcal{H}}^2$ is the squared norm in the function space $\mathcal{H}$, $\lambda$ is the regularization parameter, and $\kappa(\cdot,\cdot)$ is the kernel function. Given an $n \times n$ kernel matrix, $K_N$, the KRR estimate is given as follows:

$$\hat{\alpha} = \left( K_N + \lambda n I \right)^{-1} y, \tag{3}$$

$$\hat{y} = \sum_{i=1}^{n} \hat{\alpha}_i \, \kappa(x_i, x), \tag{4}$$
where $x$ represents the provided sample, and $\hat{y}$ stands for the predicted value. We use cross-validation to find the minimum value of the Bayesian information criterion (BIC) on the training set and thereby select the model's best $\lambda$ value.
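The closed-form KRR estimate above can be sketched in a few lines of NumPy. This is a minimal illustration assuming a Gaussian kernel; the function names, data, and hyperparameter values are ours, not the paper's.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian kernel matrix: K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=0.1, gamma=1.0):
    # alpha_hat = (K_N + lambda * n * I)^{-1} y, as in Eq. (3)
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def krr_predict(X_train, alpha, X_new, gamma=1.0):
    # y_hat = sum_i alpha_i * kappa(x_i, x), as in Eq. (4)
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Synthetic small-batch data standing in for anthropometric features
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
alpha = krr_fit(X, y, lam=0.01, gamma=0.5)
y_hat = krr_predict(X, alpha, X, gamma=0.5)
print(np.mean((y - y_hat) ** 2))
```

In practice, λ would be chosen by cross-validating the BIC as described above rather than fixed by hand.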

2.2. PSO

PSO variants can be divided according to whether particles learn from the whole swarm or only from their nearest neighbors, giving global and local versions (i.e., GPSO and LPSO, respectively). The GPSO [20] algorithm maintains particle positions $X_i = (x_{i1}, x_{i2}, \dots, x_{iD})$ and velocities $V_i = (v_{i1}, v_{i2}, \dots, v_{iD})$ for $i = 1, \dots, N$. In this context, $D$ denotes the number of dimensions, $d \in \{1, 2, \dots, D\}$, and the search zone holds a population of size $N$. The formulas for updating velocity and position are implemented as follows:

$$v_{id}(t+1) = w\,v_{id}(t) + c_1 r_1 \left( pb_{id}(t) - x_{id}(t) \right) + c_2 r_2 \left( pb_{gb,d}(t) - x_{id}(t) \right), \tag{5}$$

$$x_{id}(t+1) = x_{id}(t) + v_{id}(t+1), \tag{6}$$

where $v_{id}$ represents the velocity of the $i$-th particle in the $d$-th dimension, and $t$ and $t+1$ index the previous and current iterations, respectively. Symbol $w$ denotes the inertia weight associated with the memory part of the particle, $c_1$ represents the cognitive acceleration factor, and $c_2$ corresponds to the social acceleration factor. Given random matrices $r_1, r_2 \in [0, 1]^D$, $pb_i = [pb_{i1}, pb_{i2}, \dots, pb_{iD}]$ signifies the historical best position of particle $p_i$, and $pb_{gb} = [pb_{gb,1}, pb_{gb,2}, \dots, pb_{gb,D}]$ represents the optimal position of the entire population. If $|v_{id}|$ exceeds the maximum velocity, $V_{\max}$, then $v_{id} = \mathrm{sgn}(v_{id})\,V_{\max}$.
For LPSO, the velocity $v_{id}$ is updated as shown in Equation (7), where the historical best position of the nearest-neighbor particles is $psb = [psb_1, psb_2, \dots, psb_D]$:

$$v_{id}(t+1) = w\,v_{id}(t) + c_1 r_1 \left( pb_{id}(t) - x_{id}(t) \right) + c_2 r_2 \left( psb_d(t) - x_{id}(t) \right). \tag{7}$$
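As a concrete illustration, one GPSO iteration (velocity update, velocity clamping, and position update) can be sketched in NumPy. The sphere objective and all parameter values below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pso_step(X, V, pb, pb_gb, w=0.7, c1=1.5, c2=1.5, v_max=0.5, rng=None):
    # One GPSO iteration: Eq. (5) velocity update and Eq. (6) position update
    rng = rng or np.random.default_rng()
    N, D = X.shape
    r1, r2 = rng.random((N, D)), rng.random((N, D))
    V = w * V + c1 * r1 * (pb - X) + c2 * r2 * (pb_gb - X)
    V = np.clip(V, -v_max, v_max)        # velocity clamping to V_max
    return X + V, V

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(20, 2))
V = np.zeros((20, 2))
pb = X.copy()                            # personal best positions
pb_gb = X[np.argmin((X ** 2).sum(1))]    # global best on f(x) = ||x||^2
for _ in range(50):
    X, V = pso_step(X, V, pb, pb_gb, rng=rng)
    better = (X ** 2).sum(1) < (pb ** 2).sum(1)
    pb[better] = X[better]
    pb_gb = pb[np.argmin((pb ** 2).sum(1))]
print((pb_gb ** 2).sum())   # best sphere value found by the swarm
```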

2.3. SVR

An SVM follows the structural risk minimization principle and performs well in few-shot settings, solving both classification and regression problems [34]. SVR, the regression form of the SVM, is widely used to handle few-shot and nonlinear regression problems. Within a provided training set $S = \{(x_i, y_i)\}_{i=1}^{N}$, $x_i$ denotes the input vector, and $y_i$ is the corresponding output. The SVR builds an optimally separating hyperplane in a high-dimensional (sometimes infinite-dimensional) feature space, maximizing the margin to the nearest training data points. Symbol $\varphi(x_i)$ represents the feature vector of $x_i$ within the decision function, and the empirical risk is calculated using the $\varepsilon$-insensitive loss function. A regularization term is used to optimize the structural risk and achieve comprehensive regression:

$$L_{\varepsilon} = \max\left( 0, \left| y_i - w^{T}\varphi(x_i) - b \right| - \varepsilon \right). \tag{8}$$

When a sample lies within the $\varepsilon$-insensitive interval band, its loss is zero. Variables $\xi_i$ and $\xi_i^{*}$ are slack variables: $\xi_i$ represents a training error exceeding $+\varepsilon$, whereas $\xi_i^{*}$ represents a training error below $-\varepsilon$. Finding the hyperplane parameters $w$ and $b$ is a convex quadratic programming problem:

$$\min_{w, b, \xi, \xi^{*}} \ \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{m} \left( \xi_i + \xi_i^{*} \right), \tag{9}$$

$$\text{s.t.}\quad w^{T}\varphi(x_i) + b - y_i \le \varepsilon + \xi_i, \quad y_i - w^{T}\varphi(x_i) - b \le \varepsilon + \xi_i^{*}, \quad \xi_i, \xi_i^{*} \ge 0, \ i = 1, 2, \dots, m, \tag{10}$$

where $\varepsilon$ denotes the maximum allowed deviation, and $C$ is the trade-off factor between model complexity and error. By incorporating Lagrange multipliers $\alpha_i$ and $\alpha_i^{*}$, the optimization problem of Equations (9) and (10) is transformed into a dual optimization problem, and an optimal hyperplane regression decision function is obtained using the KKT conditions:

$$f(x) = \sum_{i=1}^{m} \left( \alpha_i - \alpha_i^{*} \right) \kappa(x_i, x) + b, \qquad \kappa(x_i, x_j) = \exp\left( -\frac{\|x_i - x_j\|^2}{2\sigma^2} \right) = \exp\left( -g\,\|x_i - x_j\|^2 \right), \tag{11}$$

where $\kappa(\cdot,\cdot)$ is the Gaussian kernel function, and $g$ stands for the kernel coefficient.
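Under these definitions, an ε-insensitive SVR with a Gaussian kernel can be fitted with scikit-learn, which exposes C, the kernel coefficient g (as `gamma`), and ε directly. The synthetic data and hyperparameter values below are illustrative assumptions only.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(80, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=80)

# epsilon-insensitive SVR with a Gaussian (RBF) kernel: C trades off model
# complexity against errors larger than epsilon, and gamma plays the role of g
model = SVR(kernel="rbf", C=10.0, gamma=0.5, epsilon=0.05)
model.fit(X, y)
pred = model.predict([[np.pi / 2]])
print(pred)  # should be close to sin(pi/2) = 1
```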

3. Methodology of the GRFN

3.1. General Scheme

The architecture of the generalized regression forecasting network (GRFN) is visualized in Figure 1. To improve accuracy and efficiency, we establish a non-linear correlation between easily measured body parameters and other detailed dimensions that are normally difficult to integrate for customized clothing design. First, we extract the principal factors by preprocessing Liu et al.’s human body dimension dataset consisting of the data of 106 young women [6]. To prepare the model, the anthropometric dataset is preprocessed, and normalized for input to the KRR regression model. An appropriate λ value is selected by 10-fold cross-validation, multivariate regression is constructed, and the residual sequence between predicted and ground-truth data is calculated. These residual and measured sequences are input to the SVR model, where rough C and g hyperparameters are obtained using the grid search method, and the best pair ( C b e s t , g b e s t ) are obtained using the MMPSO-SVR model. The KRR-based regression estimations are combined with predicted residual correction values to obtain the dimensional parameters of the human body. Finally, detailed body dimensions are obtained by using a few basic body size parameters input to our GRFN model. An interactive intelligent pattern-making system is employed to generate 2D patterns, and garments are simulated on a virtual 3D avatar for clothing display.

3.2. Hybrid Prediction Process

The hybrid prediction process is illustrated in Figure 2, and its steps are elaborated in the following subsections.

3.2.1. Step 1: Data Preprocessing

The data are subjected to min-max normalization, resulting in a range of $[0, 1]$:

$$x_i' = \frac{x_i - x_{\min}}{x_{\max} - x_{\min}}, \tag{12}$$

where $x_i$ denotes the original data, $x_i'$ represents the normalized data, and $x_{\max}$ and $x_{\min}$ are the bounds of the sequence. Then, the length and perimeter factors with large contribution rates (i.e., height, waist circumference, hip circumference, waist circumference height, and hip circumference height) are used as key feature inputs and are randomly assigned to the training and testing sets.
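For example, the column-wise scaling of Eq. (12) can be applied as follows (the measurement values in centimeters are hypothetical):

```python
import numpy as np

def min_max_normalize(X):
    # Column-wise min-max scaling to [0, 1], as in Eq. (12)
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / (x_max - x_min)

# Hypothetical feature columns: height, waist circumference, hip circumference (cm)
X = np.array([[158.0, 66.0, 90.0],
              [165.0, 70.0, 94.0],
              [172.0, 78.0, 102.0]])
print(min_max_normalize(X))
```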

3.2.2. Step 2: KRR Regression Modeling

The minimum BIC value is calculated from the training data to select the best $\lambda$ value [35]. The ridge regression model is then used to obtain the predicted value $\hat{y}_i$ for the training sequence data, and the residual value is given by

$$e_i = y_i - \hat{y}_i, \tag{13}$$

where the $i$-th residual value is represented by $e_i$, and $y_i$ corresponds to the measured value. The residual sequence data pairs $\{(y_i, e_i)\}$ are then constructed as input to the MMPSO-SVR residual prediction model.

3.2.3. Step 3: Calculation of the SVR Kernel Function

The parameters of the SVR kernel function are estimated to obtain approximate values. The fitness function is then defined as follows:

$$fit(e_i) = \frac{1}{n} \sum_{i=1}^{n} \left( \hat{e}_i - e_i \right)^2, \tag{14}$$

where $n$ is the number of samples, and $\hat{e}_i$ is the predicted residual value for the $i$-th sample after applying the grid search method. The grid search covers a large range that is enlarged and reduced twice to obtain an optimized pair of the penalty factor and the RBF kernel coefficient, $(C, g)$.
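A coarse grid search over $(C, g)$ of this kind can be sketched with scikit-learn's `GridSearchCV`. The stand-in residual data are synthetic, and the subsequent refinement passes around the best cell (the "enlarged and reduced" ranges) are omitted here.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
e = np.sin(X[:, 0]) + 0.1 * rng.normal(size=60)   # stand-in residual sequence

# Coarse logarithmic grid over (C, g); MSE is the selection criterion,
# matching the fitness function of Eq. (14)
grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1, 10]}
search = GridSearchCV(SVR(kernel="rbf"), grid, cv=5,
                      scoring="neg_mean_squared_error")
search.fit(X, e)
C_best, g_best = search.best_params_["C"], search.best_params_["gamma"]
print(C_best, g_best)
```

The resulting pair is only a rough starting point; the MMPSO stage in Step 4 refines it further.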

3.2.4. Step 4: MMPSO-SVR Residual Correction

The population positions and velocities are initialized, and the population is divided into three subgroups, with each particle representing a feasible hyperparameter pair $(C, g)$. In the search zone, the fitness function of each particle is computed to ascertain its individual extreme value, $pb$, and the group extreme value, $pb_{gb}$. The MMPSO co-optimization algorithm uses teaching and learning to find $pb_{gb}$ and locate the global optimum. The termination condition is checked iteratively, and 10-fold cross-validation is utilized to determine the optimal combination of SVR kernel function parameters, with $\{(y_i, e_i)\}$ as input. Variable $e_i$ represents the supervisory information used for training, which yields the residual prediction value $\hat{e}_i$.

3.2.5. Step 5: Residual Correction Combination Prediction

By combining $\hat{y}_i$ and $\hat{e}_i$, the estimated body size parameter is obtained:

$$\hat{y}_i' = \hat{y}_i + \hat{e}_i. \tag{15}$$
Thus, body dimensions for pattern making can be predicted with an accuracy sufficient for garment production.
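Numerically, Steps 2 and 5 amount to simple residual arithmetic. The values below are hypothetical hip-girth figures chosen purely to illustrate how a well-predicted residual correction shrinks the KRR error.

```python
import numpy as np

# Hypothetical measured values and KRR preliminary predictions (cm)
y_true = np.array([92.7, 95.3, 101.5])
y_krr = np.array([92.1, 95.7, 101.3])
e = y_true - y_krr                 # residual sequence, Eq. (13)

# Suppose the MMPSO-SVR stage predicts these residual corrections
e_hat = np.array([0.5, -0.3, 0.1])
y_final = y_krr + e_hat            # corrected estimate, Eq. (15)
print(np.abs(y_final - y_true))    # smaller errors than |y_krr - y_true|
```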

3.3. Construction of MMPSO Algorithm for Body Dimension Prediction

3.3.1. Sine Chaotic Opposite-Based Learning Population Initializations

Initial population diversity is crucial to improving search-zone coverage, optimization precision, and convergence speed. Chaotic sequences are characterized by randomness, ergodicity, and regularity. Currently, the most common chaotic mappings [36] include tent, logistic, and sine types. Sine chaos is a self-mapping mode characterized by infinite folding. During optimization, opposition-based learning (OBL) can quickly find an approximate reverse solution. Accordingly, an elitist greedy strategy is adopted to select the optimal population, effectively improving the quality of solutions in the search zone. Therefore, sine chaos is employed to generate an initial population with the greatest diversity. The one-dimensional sine chaotic map is expressed as follows:

$$Z_{i+1} = \sin(2 / Z_i), \quad i = 0, 1, \dots, N-1, \quad -1 \le Z_i \le 1, \ Z_i \ne 0. \tag{16}$$

To avoid the generation of fixed and zero points in the closed interval $[-1, 1]$, the initial value should not be set to zero. The sine chaotic sequence is converted into a set of $D$-dimensional pseudo-random numbers, $Z_{i,j}$, and is inversely mapped into the variables $x_{i,j}$:

$$x_{i,j} = lb_j + (ub_j - lb_j)\,|z_{i,j}|, \tag{17}$$

where $[lb_j, ub_j]$ is the dynamic search zone boundary along dimension $j$, and $z_{i,j}$ represents the $j$-th dimensional component of the $i$-th pseudo-random number. Next, the OBL strategy [27] is utilized to generate an opposite-based population, $X' = \{X_i', i = 1, 2, \dots, N\}$, whose individual components $x_{i,j}'$ are defined as

$$x_{i,j}' = k\,(lb_j + ub_j) - x_{i,j}, \tag{18}$$

where the generalized coefficient $k \sim U(0, 1)$ is sampled from a uniform distribution. Finally, employing the elitist greedy strategy, the opposite population, $X'$, and the initial population after sine mapping, $X$, are merged into a new population, $X_{new} = \{X \cup X'\}$, from which the $N$ particles with the best fitness scores are chosen as $X$. In this initialization learning environment, the heterogeneous search behavior of multi-subpopulation diversity is utilized. According to an ascending sort of the fitness value difference, $D_f$, the population particles are grouped into elite (i.e., teacher subgroup $X_{ST}$), ordinary (i.e., independent learner subgroup $X_{SI}$), and poor (i.e., student subgroup $X_{SS}$) categories. The classification method is as follows:

$$X_i \in \begin{cases} X_{ST}, & \text{if } D_f \le \lambda_r \\ X_{SI}, & \text{if } \lambda_r < D_f \le (1 - \lambda_r) \\ X_{SS}, & \text{if } D_f > (1 - \lambda_r), \end{cases} \tag{19}$$

where the swarm ratio $\lambda_r = 30\%$, and $1 - \lambda_r = 70\%$.
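The sine-chaos/OBL initialization described above can be sketched as follows. The helper name, the sphere fitness function, and the bound-clipping of the opposite population are our own illustrative assumptions.

```python
import numpy as np

def sine_obl_init(N, D, lb, ub, fitness, rng=None):
    # Sine-chaos sequence (Eq. 16), inverse-mapped into [lb, ub] (Eq. 17)
    rng = rng or np.random.default_rng()
    Z = np.empty((N, D))
    Z[0] = rng.uniform(-1, 1, D)
    Z[0][Z[0] == 0] = 0.5                 # avoid the fixed point at zero
    for i in range(1, N):
        Z[i] = np.sin(2.0 / Z[i - 1])
        Z[i][Z[i] == 0] = 0.5             # guard against the fixed point
    X = lb + (ub - lb) * np.abs(Z)
    # Opposition-based learning population (Eq. 18), with k ~ U(0, 1)
    k = rng.random((N, 1))
    X_opp = np.clip(k * (lb + ub) - X, lb, ub)
    # Elitist greedy selection: keep the N fittest of the merged population
    merged = np.vstack([X, X_opp])
    return merged[np.argsort([fitness(x) for x in merged])[:N]]

sphere = lambda x: float((x ** 2).sum())
pop = sine_obl_init(30, 5, lb=-10.0, ub=10.0, fitness=sphere,
                    rng=np.random.default_rng(2))
print(pop.shape)
```

In the full algorithm, the returned population would then be split into the teacher, independent-learner, and student subgroups by the sorted fitness differences of Eq. (19).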

3.3.2. Teaching and Learning Co-Optimization Strategy

Inspired by the literature [31], the teaching and learning co-optimization strategy for the teacher and student subpopulations includes two stages: global and local tuning. With global tuning, the teacher subpopulation uses a ring-topology comprehensive learning strategy (CLS) to search based on self-cognition and social learning behaviors, obtaining information from neighbors, enhancing search efficiency and stability, and accelerating convergence. Social learning consists of group induction guidance and ring-neighborhood particle learning:

$$v_{id}(t+1) = w_1 v_{id}(t) + c_1 r_1 \left( pb_{f_i(d)}(t) - x_{id}(t) \right) + c_2 r_2 \beta_{ST} \left( pb_{gb,d}(t) - x_{id}(t) \right) + c_3 r_3 \left( rb_{id}(t) - x_{id}(t) \right), \tag{20}$$

where the inertia weight $w_1$ decreases linearly from 0.9 to 0.4, and the first acceleration factor $c_1$ decreases linearly from 2.5 to 2.0. The second acceleration factor $c_2$ increases linearly from 0.5 to 2.0, and the third acceleration factor $c_3$ decreases linearly from 2.0 to 0.0. Function $f_i(d) = [f_i(1), f_i(2), \dots, f_i(D)]$ designates whose $pb$ the $i$-th particle should follow in the $d$-th dimension, $rb_i = [rb_{i1}, rb_{i2}, \dots, rb_{iD}]$ is the $i$-th particle's closest neighbor in the ring topology, and $\beta_{ST} = 0.5$ is the social learning tendency coefficient. These terms improve particle performance and enhance self-search. To strike a balance between exploration and exploitation across different dimensions, the CLS defines an individual learning probability, $p_c$, for each particle and generates a $[0, 1]$-interval random number for each dimension; a random number performs better than a fixed learning probability on multimodal functions. The CLS adopts an aging strategy that regenerates the historical optimal solution, $pb_{f_i(d)}$, of a particle's nearest neighbors when the stagnation count exceeds a threshold of seven. During global tuning, a subgroup stagnation state-checking mechanism detects whether the teacher and student subpopulations are stagnant: the fitness deviation of the optimal solution between generations $t$ and $t+1$ is calculated to judge whether any subgroup is stagnant:

$$\delta_{sbest}(t+1) = fit\left( pb_{sb}(t+1) \right) - fit\left( pb_{sb}(t) \right) \le 0, \tag{21}$$

where $fit(\cdot)$ is the fitness function, and $pb_{sb}$ represents the locally optimal solution within the subgroup. When $\delta_{sbest}(t+1) < 0$, the global search capacity of the subgroup is normal, and the stagnation counter, $Stag_s$, is reset to zero. When $\delta_{sbest}(t+1) = 0$, the global search capacity of the subpopulation weakens, and $Stag_s = Stag_s + 1$. If $Stag_s \ge C_{t1}$, where $C_{t1}$ is the stagnation threshold [37], selected as $T_{\max} / 10$ with $T_{\max}$ denoting the fixed number of iterations, the subgroup is identified as having entered stagnation. The local tuning operation is then employed to reinvigorate the search capability and counteract stagnation.
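The stagnation check is a small bookkeeping rule and can be made concrete as follows. The helper name and the simulated fitness trace are illustrative assumptions, not the paper's code.

```python
def update_stagnation(fit_prev, fit_new, stag, T_max):
    # Eq. (21): delta = fit(pb_sb(t+1)) - fit(pb_sb(t)); pb only improves,
    # so delta < 0 means progress and delta == 0 means the subgroup's best
    # did not improve this generation
    delta = fit_new - fit_prev
    stag = 0 if delta < 0 else stag + 1
    stagnant = stag >= max(1, T_max // 10)   # threshold C_t1 = T_max / 10
    return stag, stagnant

# An improving step resets the counter; repeated flat steps trip the threshold
stag = 0
stag, flag = update_stagnation(1.00, 0.95, stag, T_max=100)
print(stag, flag)   # 0 False
for _ in range(10):
    stag, flag = update_stagnation(0.95, 0.95, stag, T_max=100)
print(stag, flag)   # 10 True
```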
During local tuning, particle disturbance update and refresh compensation operations are used. Inspired by the literature [38], during the particle disturbance update, a search operator guided by the historical extremum of the induction group is used to remove bias from the search and improve population diversity as follows:
new X i = X i + ζ 1 ( X i X k ) + ζ 2 ( p b g b X i ) ,
where ζ 1 , ζ 2 are random numbers in the range of [ 1 , 1 ] and [ 0 , 1.5 ] , respectively. k is a random index in [ 1 , N ] , and k i . The new search operator guides particle-balanced exploration and exploitation by inducing historical extrema in the group. During the particle disturbance update operation, an individual particle adopts an optimal greedy strategy. Thus, if the new position’s fitness value, X i ( t + 1 ) , surpasses the initial fitness value, X i ( t ) is activated with X i ( t + 1 ) , and the state counter is activated as S t a g i = 0 ; otherwise, the particles will not be activated, and S t a g i = S t a g i + 1 is performed. If S t a g i > C t 2 , where C t 2 is 0.8 ( D × N ) / 2 after many iterations, the position of the new particles may still not meet the requirements of optimization. Hence, the subgroup is vulnerable to the potential of converging towards a local optimum. Therefore, a refresh compensation operation is used to reverse the subgroup stagnation and improve quality. Two particle historical extrema are used for an arithmetic crossover operation to generate the new position:
\mathrm{new}X_i = X_i + \vartheta_1 (pb_{k_1} - pb_{k_2}),
where ϑ_1 is a random number in [−1, 1], k_1 and k_2 are random indices in [1, N_1], and pb_{k1} and pb_{k2} are the individual historical optima of the randomly selected particles k_1 and k_2, respectively. The refresh compensation operation performs a random search and shares individual historical extrema. The advantage of a candidate solution depends on the selected particle vector, X_i(t), and the difference vector, pb_{k1} − pb_{k2}, which act as chemotaxis operations. Early in the algorithm, the step size of the difference vector is usually larger than in later iterations, which is beneficial for initially expanding the search zone; the vector then shrinks over time to improve local exploitation. In general, the step size of the difference vector is continuously adjusted to accommodate the various stages and attain superior optimization.
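The two local tuning operators above can be sketched as follows. This is a minimal NumPy illustration only (the paper's implementation is in MATLAB); the array layout and random generator are assumptions, not part of the original code:

```python
import numpy as np

rng = np.random.default_rng(0)

def disturbance_update(X, i, pb_gb):
    """Particle disturbance update (Eq. (22)): move particle i using a
    randomly chosen peer X_k (k != i) and the group's historical best pb_gb."""
    N = X.shape[0]
    k = int(rng.integers(N))
    while k == i:                        # k must differ from i
        k = int(rng.integers(N))
    zeta1 = rng.uniform(-1.0, 1.0)       # zeta_1 in [-1, 1]
    zeta2 = rng.uniform(0.0, 1.5)        # zeta_2 in [0, 1.5]
    return X[i] + zeta1 * (X[i] - X[k]) + zeta2 * (pb_gb - X[i])

def refresh_compensation(X, i, pb):
    """Refresh compensation (Eq. (23)): arithmetic crossover of two
    randomly selected personal-best positions pb_k1 and pb_k2."""
    k1, k2 = rng.choice(pb.shape[0], size=2, replace=False)
    theta1 = rng.uniform(-1.0, 1.0)      # vartheta_1 in [-1, 1]
    return X[i] + theta1 * (pb[k1] - pb[k2])
```

Both operators return a candidate position; per the greedy strategy described above, it replaces X_i(t) only if its fitness improves.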
The particles of the student group are generally far from the optimal area at initialization and must learn under the guidance of the teacher subgroup to enhance their search efficiency. During global tuning, the student subgroup employs a search behavior based on social self-cognition; its velocity update equations are as follows:
v_i^d(t+1) = \omega_1 v_i^d(t) + c_1 r_1 \beta_i^{CT} \big(pb_i^d(t) - x_i^d(t)\big) + c_2 r_2 \beta_i^{ST} \big(pb_{gb,d}(t) - x_i^d(t)\big),
\beta_i^{CT} = \begin{cases} 1, & \text{if } \alpha_i > 0.5 \\ 0, & \text{otherwise,} \end{cases}
\beta_i^{ST} = \begin{cases} 1, & \text{if } \beta_i^{CT} > 0.5 \\ 0, & \text{otherwise,} \end{cases}
where β_i^{CT} is the self-cognition tendency coefficient, β_i^{ST} is the social learning tendency coefficient, and α_i ~ U(0, 1) is a random number drawn from a uniform distribution over the closed interval [0, 1]. The student subgroup adopts two-stage tuning to enhance its local optimization ability, with which both the teacher and student subgroups can fully exploit their advantages and jointly promote the exploitation capability and convergence accuracy of the overall algorithm. Simultaneously, to hasten student subgroup convergence, the students generate new learning examples under the guidance of the teacher and are sorted based on the fitness function [39]. If the fitness value of a student, y_j, surpasses that of a teacher-group particle, x_i, the velocity of y_j is updated and distributed around its original position. The velocity calculation formula for the student subgroup particles following instruction is as follows:
v_j^d(t+1) = w_c v_j^d(t) + \gamma_2 (x_i^d - y_j^d) \exp\!\big(-\mu \, dis(x_i, y_j)\big),
dis(x_i, y_j) = \sqrt{\sum_{d=1}^{D} (x_i^d - y_j^d)^{\mathrm{T}} (x_i^d - y_j^d)},
where w_c = 0.8 is the adjustment coefficient, x_i denotes the spatial position of the i-th individual within the teacher subgroup, y_j denotes the spatial position of the j-th individual within the student subgroup, dis(x_i, y_j) is the Euclidean distance between particles x_i and y_j, γ_2 = 1.2, and μ = 2.0.
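The teacher-guided velocity update in Eqs. (27) and (28) can be sketched as follows; a minimal NumPy illustration under the constants stated above, not the paper's MATLAB code:

```python
import numpy as np

def guided_velocity(v_j, x_i, y_j, w_c=0.8, gamma2=1.2, mu=2.0):
    """Teacher-guided student velocity (Eqs. (27)-(28)): the pull of student
    particle y_j toward teacher particle x_i decays exponentially with the
    Euclidean distance dis(x_i, y_j)."""
    dist = np.linalg.norm(x_i - y_j)             # Eq. (28)
    return w_c * v_j + gamma2 * (x_i - y_j) * np.exp(-mu * dist)
```

The exponential damping means a distant teacher exerts only a weak pull, which keeps the student distributed around its original position as described above.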

3.3.3. Multi-Swarm Dynamic Regroup Strategy

The independent learner group employs a dynamic regrouping strategy across multiple subswarms to generate learning exemplars and maintain population diversity. In each iteration, the velocity of particle i within each dynamic subgroup is updated using Equations (29) and (30) to enhance the global exploration performance:
v_i^d(t+1) = w_2 v_i^d(t) + c_4 r_4 \big(pb_i^d(t) - x_i^d(t)\big) + c_5 r_5 \beta_i^{ST} \big(pb_{sb,d}(t) - x_i^d(t)\big),
\beta_i^{ST} = \begin{cases} 1, & \text{if } \alpha_i > 0.5 \\ 0, & \text{otherwise,} \end{cases}
where w_2 is a self-adjusting inertial weight that controls the search range and improves the global search capability, c_4 and c_5 are constants set to 1.49445, and β_i^{ST} is the social learning tendency coefficient of the independent learner subpopulation as perceived by particle p_i. When α_i > 0.5, β_i^{ST} = 1 represents the social cognitive ability perceived by p_i; otherwise, there is no social learning. Simultaneously, a nonuniform Gaussian mutation operator is applied as an update perturbation to augment the global search performance of the independent learner subpopulation during early iterations and improve its local exploitation ability in later iterations.
\tilde{X}_i^d(t) = \begin{cases} X_i^d(t) + \Delta\big(t, N(ub_d - X_i^d(t), \sigma^2(t))\big), & \text{rand} < 0.5 \\ X_i^d(t) - \Delta\big(t, N(X_i^d(t) - lb_d, \sigma^2(t))\big), & \text{rand} \ge 0.5, \end{cases}
\Delta(t, y) = y \left(1 - \mathrm{rand}^{(1 - t/T_{\max})^2}\right),
\sigma^2(t) = \sigma_0^2 \exp(-t/T_{\max}),
where Δ(t, y) adaptively adjusts the step length, σ_0^2 is the initial variance, and σ^2(t) denotes the particle's mutation amplitude in the t-th iteration.
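The nonuniform Gaussian mutation of Eqs. (31)–(33) can be sketched as below. This assumes N(m, σ²) denotes a Gaussian sample with mean m and variance σ²; it is an illustrative NumPy sketch, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def nonuniform_gaussian_mutation(x, t, T_max, lb, ub, sigma0_sq=1.0):
    """Nonuniform Gaussian mutation (Eqs. (31)-(33)): broad perturbations
    early in the run that shrink to zero as t approaches T_max."""
    sigma = np.sqrt(sigma0_sq * np.exp(-t / T_max))      # Eq. (33)

    def delta(y):                                        # Eq. (32)
        return y * (1.0 - rng.random() ** ((1.0 - t / T_max) ** 2))

    out = x.astype(float).copy()
    for d in range(x.size):
        if rng.random() < 0.5:                           # Eq. (31), upper branch
            out[d] = x[d] + delta(rng.normal(ub[d] - x[d], sigma))
        else:                                            # Eq. (31), lower branch
            out[d] = x[d] - delta(rng.normal(x[d] - lb[d], sigma))
    return out
```

At t = T_max the exponent in Eq. (32) becomes zero, so the step collapses and the operator leaves the particle untouched, matching the described shift from exploration to exploitation.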

3.3.4. Refined Search Strategy for the Global Optimal Solution

The teacher, independent learner, and student subpopulations exchange information based on the global optimal solution, pb_gb, to improve performance and convergence. When the number of stagnant updates of pb_gb surpasses the predefined threshold, m = 5, a fine search is implemented based on t-distribution mutation and reverse learning using a beetle antennae search (BAS) [40]. In the refined search phase, if the globally optimal particle becomes trapped in a local optimum, we employ the opposition-based learning (OBL) method to obtain a reverse solution, thus broadening the search range of the globally optimal particle. The OBL strategy is integrated into the MMPSO algorithm; its mathematical description is as follows:
pb'_{gb}(t) = ub + r_6 \big(lb - pb_{gb}(t)\big),
pb_{gb}(t+1) = pb_{gb}(t) + b_1 \big(pb'_{gb}(t) - pb_{gb}(t)\big),
where pb′_gb(t) represents the reverse of the global optimal solution at iteration t, and the global optimal solution of the t + 1 iteration is denoted as pb_gb(t + 1). Items ub and lb represent the spatial boundaries of the search region, r_6 denotes a uniformly distributed random number within the interval [0, 1], and b_1 denotes the coefficient controlling information exchange [41], which is expressed by the following formula:
b_1 = \left(\frac{T_{\max} - t}{T_{\max}}\right)^{t}.
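The OBL update can be sketched as follows. Note that the printed form of the exchange coefficient b_1 is garbled in the source; the decaying form ((T_max − t)/T_max)^t used below is an assumed reconstruction, flagged as such in the code:

```python
import numpy as np

rng = np.random.default_rng(2)

def obl_perturbation(pb_gb, lb, ub, t, T_max):
    """Opposition-based learning update of the global best (Eqs. (34)-(36)).
    The reverse solution widens the search range; b1 shrinks the exchange
    step as iterations progress (b1 form is an assumed reconstruction)."""
    r6 = rng.random()                                # r_6 ~ U(0, 1)
    reverse = ub + r6 * (lb - pb_gb)                 # Eq. (34): reverse solution
    b1 = ((T_max - t) / T_max) ** t                  # Eq. (36), assumed form
    return pb_gb + b1 * (reverse - pb_gb)            # Eq. (35)
```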
The t-distribution is a continuous probability distribution that amalgamates features of both the Gaussian and Cauchy distributions. A degree-of-freedom parameter governs its behavior, enabling a balance between global and local searches: larger values yield higher peaks close to a Gaussian shape, which improves local search capability, whereas smaller values yield flatter, Cauchy-like shapes, which improve global exploration capability. During the fine search, the global optimal solution is further optimized by introducing a t-distribution mutation operator. The mathematical expression is as follows:
pb_{gb,d}(t+1) = pb_{gb,d}(t) + \eta \cdot \mathrm{dir} \cdot (x_{\max,d} - x_{\min,d}) \cdot TD(t),
\mathrm{dir} = \mathrm{rands}(1, D), \quad \mathrm{dir} = \mathrm{dir} / \mathrm{norm}(\mathrm{dir}),
where pb_{gb,d} represents the global optimal solution in the d-th dimension, [x_min,d, x_max,d] is the d-dimensional search zone, η ∈ {+1, −1} represents the particle rotation direction, dir is the normalized direction vector, and TD(t) is the degree of freedom of the t-distribution mutation operator. Like the BAS method, which simulates a beetle using its two antennae to sense the direction of the strongest food smell and thereby decide whether to move left or right, the current globally optimal particle determines its search direction based on its left or right orientation. The particle rotation direction, η, is calculated as follows:
pb_{gb}^{left}(t) = pb_{gb}(t) + d_r(t) \, \mathrm{dir}/2, \qquad pb_{gb}^{right}(t) = pb_{gb}(t) - d_r(t) \, \mathrm{dir}/2,
d_r(0) = step, \quad d_r(t+1) = d_r(t) \times 0.95,
\eta = \mathrm{sgn}\big(fit(pb_{gb}^{right}) - fit(pb_{gb}^{left})\big),
where pb_gb^left(t) denotes the t-th iteration's global optimum after searching in the left direction, pb_gb^right(t) denotes the same after searching in the right direction, d_r(t) represents the sensing-range diameter of the t-th iteration's global optimal particle, the initial value of step is 10, and sgn(·) is the sign function used to determine the direction of the individual's next search.
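The antennae probe of Eqs. (39)–(41) can be sketched as follows; a minimal NumPy illustration (the fitness function passed in is a placeholder, not the paper's SVR fitness):

```python
import numpy as np

def bas_direction(pb_gb, fit, d_r, dir_vec):
    """BAS probe (Eqs. (39) and (41)): evaluate the fitness to the left and
    right of the global best and take eta = sgn(fit(right) - fit(left))."""
    left = pb_gb + d_r * dir_vec / 2.0
    right = pb_gb - d_r * dir_vec / 2.0
    return np.sign(fit(right) - fit(left))

def shrink_range(d_r):
    """Sensing-range decay (Eq. (40)), starting from step = 10."""
    return d_r * 0.95
```

For a minimization fitness, η = −1 when the left probe is worse, steering the t-distribution mutation away from the poorer antenna.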
By integrating the t-distribution mutation strategy with the BAS algorithm and the OBL strategy, the perturbation applied when updating the global optimal solution alternates between the two approaches according to a calculated selection probability, Pr:
\Pr = \frac{\exp(1 - t/T_{\max})}{20} + 0.05.
If rand < Pr, the OBL strategy is employed to introduce disturbances into the update of the global optimal solution; otherwise, the t-distribution mutation strategy incorporating the BAS is employed to update the target position. Additionally, an elitist greedy approach is employed to determine whether the target position should be updated.
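The strategy switch can be sketched as follows; the Pr formula used is the reconstruction given above (the printed equation is garbled in the source), so treat it as an assumption:

```python
import math

def select_strategy(t, T_max, rand):
    """Strategy selection: Pr = exp(1 - t/T_max)/20 + 0.05 (reconstructed),
    so the OBL disturbance is slightly favoured early in the run and the
    probability settles at 0.1 by the final iteration."""
    Pr = math.exp(1.0 - t / T_max) / 20.0 + 0.05
    return "OBL" if rand < Pr else "t-distribution + BAS"
```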

3.3.5. The Full Process of the MMPSO

Based on the improvements discussed in Section 3.3.1, Section 3.3.2, Section 3.3.3 and Section 3.3.4, the full MMPSO process is described with Algorithm 1.
Algorithm 1: The full MMPSO process
Input:  Swarm   size   N ,   inertial   weights   w 1 ,   w 2 ,   w c ,   Swarm   ratio   λ r ,   acceleration   factor   c 1 , c 2 , c 3 ,   maximum   iterations   T max ,   C o n s t = 0.15 ,   m = 5 ,   location   boundary   [ x min d , x max d ] ,   velocity   boundary   [ v min d , v max d ] ,   and   d = 1 , 2 , , D ;
Output:  Global   optimal   solution   p b g b ;
  • Initialize the position  x i d  and velocity  v i d  of the particles;
  • Compute the fitness value of each particle  p i ;
  • while  t = 1 , 2 , , T max  do
  •   if  S t a g s < C t 1  then
  •     for   i 1 : N 1  do  /* teacher subgroup global tuning */
  •       Update  X S T  subgroup  V i  and  X i using Equations (6) and (20);
  •     end for
  •     Check  X S T subgroup stagnation using Equation (21);
  •   else  /* teacher subgroup local tuning */
  •       X i perform particle disturbance update operation using Equation (22);
  •      if  S t a g i > C t 2  then
  •         X i perform refresh compensation operation using Equation (23);
  •      end if
  •   end if
  •   if  f i t ( X i ) > f i t _ a v g  then
  •      w 2 = w 2 ( t ) + C o n s t ;  if  w 2 > 0.99 w 2 = 0.99  end;
  •   else
  •      w 2 = w 2 ( t ) C o n s t ;  if  w 2 < 0.2 w 2 = 0.2  end;
  •   end if
  •   for  i 1 : N 2  do  /* independent learner subgroup */
  •     Update  the   velocity   of   X S I subgroup using Equations (29) and (30);
  •      X i  perform nonuniform Gaussian mutation using Equations (31)–(33);
  •   end for
  •   if  S t a g s < C t 1  then
  •     for i 1 : N 3  do   /* student subgroup global tuning */
  •       Update  X S S  subgroup  V i  and  X i  using Equations (6) and (24);
  •     end for
  •     Check for  X S S  subgroup stagnation using Equation (21);
  •   else   /* student subgroup local tuning */
  •      X i  perform particle disturbance update operation using Equation (22);
  •     if  S t a g i > C t 2  then
  •        X i  perform refresh compensation operation using Equation (23);
  •     end if
  •   end if
  •   Update particle of student subgroup using Equations (27) and (28);
  •   Calculate all particles  p b i  and  p b g b  of the population;
  •   if  rand < Pr  then
  •     Update  p b g b  with OBL strategy using Equations (34) and (35);
  •   else
  •     Update  p b g b  with t-distribution mutation using Equations (37)–(41);
  •   end if
  •   Update  X S T  subgroup with the aging strategy of the CLPSO-like algorithm;
  •   Update  X S I   subgroup with multi-swarm dynamic regrouping strategy;
  • end while
  • return  p b g b ,  g b e s t v a l ;

3.3.6. Time Computational Complexity Analysis

MMPSO’s time complexity is analyzed according to Algorithm 1. We assume the parameters are initialized, with input swarm size N, spatial dimension D, and a fixed number of iterations T. The time complexity of the basic PSO is O(TND); in MMPSO, initializing the population phase requires O(ND). During global tuning, we check whether the stagnant state counter, Stag_s, exceeds threshold Ct_1 and activate the state detector Stag_i > Ct_2; the particle disturbance update and refresh compensation operations of the local tuning phase then give a worst-case time complexity of O(2N_1). The independent learner subgroups alternately execute the dynamic multi-swarm reorganization learning strategy with time complexity O(N_2 D). The student subgroup learns under the supervision of the teacher subgroup; its global tuning phase performs a self-cognitive or social-cognitive search, and its local tuning phase performs the particle disturbance update and refresh compensation operations, giving a worst-case time complexity of O(N_3 D + 2N_3). During the fine-search phase of the global optimal solution, selecting the dimensional vector for updating the global optimal solution requires O(D). Consequently, when the stopping criterion of the MMPSO algorithm is a fixed number of iterations, T, the total time complexity of the MMPSO algorithm is O(TND).

4. Experimental Results and Discussion

In this section, two experiments were conducted to evaluate the model’s human body dimension prediction performance. Experiment 1 entailed the classical CEC2005 test suite for MMPSO task validation, and Experiment 2 validated the model’s human body size prediction on real-world small-batch samples. All experiments were implemented on a 64-bit operating system with a Core i7-6500U CPU (main frequency of 2.5 GHz), 16 GB of memory, and the MATLAB 2018b programming/runtime environment.

4.1. Experimental Setup

4.1.1. Description of Experimental Data

Following Ref. [6], Liu et al. utilized the Vitus Smart 3D body scanning device to create a dataset comprising measurements from 106 female undergraduate students, never pregnant, aged 20 to 25 years, in the northeastern region of China. The participants’ heights ranged from 151.5 to 173.2 cm, and their weights ranged from 40 to 71 kg, based on intermediate sizes provided by the Chinese National Standard, GB/T 1335.2-2008 [42]: Standard Sizing Systems for Garments–Women. Prior to the commencement of this study, we addressed ethical compliance issues and made relevant disclosures. Notably, all participants provided informed consent, allowing their measurement data to be obtained using a 3D body scanner in an automated manner. The final measurement dataset exclusively provided our measurement testing data, and we completely anonymized the data items; hence, no personally identifiable participant information was retained. Furthermore, our reference few-shot anthropometric database is publicly available, as explained in Appendix 1 of Ref. [43]. To this end, we utilized this open-source anthropometric database to validate the efficacy of our hybrid model. Table 1 displays the descriptive statistics of the body dimensions, including measures of central tendency such as the mean and median.

4.1.2. Factor Analysis

We applied factor analysis with varimax (maximum variance) orthogonal rotation to extract two principal factors from 13 observation items and eliminate multicollinearity among variables. The rotated component matrix is presented in Table 2, which lists the scores of each sample and factor. Based on the total variance explanation provided in Table 3, the height factor and perimeter (circumference) factor were found to represent most of the information on lower body size.
To avoid large prediction errors caused by large differences in input data, both easy- and difficult-to-measure key human body parameters were normalized. To mitigate interference resulting from varying body shapes, the dataset was divided into distinct training and testing sets, with divisions based on variables, such as stature and waist girth. The training set underwent cross-verification k times to evaluate the results.
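The preprocessing described above can be sketched as follows. The paper states that the key parameters were normalized and the training set was cross-verified k times, but gives no formulas, so the min-max scaling and fold-splitting below are assumptions for illustration:

```python
import numpy as np

def minmax_normalize(X):
    """Column-wise min-max scaling of body-dimension data to [0, 1]
    (assumed normalization; the paper does not give the exact formula)."""
    lb, ub = X.min(axis=0), X.max(axis=0)
    return (X - lb) / (ub - lb), lb, ub

def kfold_indices(n, k):
    """Simple index split for k-fold cross-verification of the training set."""
    idx = np.arange(n)
    return [(np.setdiff1d(idx, idx[f::k]), idx[f::k]) for f in range(k)]
```

Returning lb and ub alongside the scaled data lets the same transform be applied to the test set, avoiding information leakage between the splits.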

4.1.3. Performance Metrics

To quantitatively compare the predictive accuracies of the models, this study employed the root mean square error ( e R M S E ), mean absolute error ( e M A E ), and coefficient of determination ( R 2 ). The corresponding equations are presented as follows:
e_{RMSE}(Y_{obs}, Y_{pred}) = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2},
e_{MAE}(Y_{obs}, Y_{pred}) = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|,
R^2(Y_{obs}, Y_{pred}) = 1 - \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 \Big/ \sum_{i=1}^{n} (y_i - \bar{\hat{y}})^2,
where y i and y ^ i represent observed and predicted values, respectively, and the variable y ^ ¯ denotes the average of the predicted values derived from a collection of n samples.
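The three metrics can be computed directly from their definitions, as in the sketch below. Note that the R² denominator follows the paper's formula, which centers on the mean of the predicted values rather than the more common mean of the observed values:

```python
import numpy as np

def regression_metrics(y_obs, y_pred):
    """e_RMSE, e_MAE, and R^2 as defined in Section 4.1.3."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_obs - y_pred) ** 2))
    mae = np.mean(np.abs(y_obs - y_pred))
    # Denominator uses the mean of the *predicted* values, per the paper.
    r2 = 1.0 - np.sum((y_obs - y_pred) ** 2) / np.sum((y_obs - y_pred.mean()) ** 2)
    return rmse, mae, r2
```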

4.2. Comparison of PSO Variant Algorithms for Benchmark Functions

4.2.1. Benchmark Functions and Parameter Settings for PSO Variants

To evaluate the optimization performance of the proposed MMPSO, the classical CEC 2005 test suite [44] was selected as the experimental benchmark. The test set consists of four types of functions: F01–F05 (unimodal, UN), F06–F12 (multimodal, MN; F10 and F11 are rotated), F13 and F14 (expanded, EF), and F15–F25 (hybrid composition, CF). The test suite can comprehensively and objectively reflect the optimization performance of an algorithm. For a more comprehensive evaluation and analysis, we compared the MMPSO algorithm with seven well-known PSO variants: HIDMSPSO, HCLPSO, HCLDMSPSO, CLPSO, DMSPSO, EPSO, and FDRPSO. The parameter settings of the PSO variants were established as indicated in Table 4.

4.2.2. Numerical Experimental Results

The performance of the MMPSO and several well-known PSO variants was evaluated using a benchmark test suite. Following the acquisition of the results from 30 independent runs of each algorithm, the evaluation metrics were determined based on the average value (mean) and standard deviation (Std) of each benchmark function. The mean reflects the overall trend of convergence and optimization ability, whereas the Std indicates the stability of the algorithm and its capacity to evade local optima.
The problem dimension was set to 30, the upper limit on the total number of fitness evaluations was 300,000, the population size was 40, and the sizes of the three heterogeneous subgroups (teacher, independent learner, and student) were 12, 16, and 12, respectively. On the F04, F06, F08, F10, F11, F14, F17–F20, and F22–F25 benchmark functions, the MMPSO algorithm clearly improved solution accuracy compared with the alternative PSO variants, as demonstrated in Table 5. Figure 3A illustrates the excellent performance of the MMPSO algorithm in finding the global optimum of unimodal functions, and the algorithm also performed well on multimodal and expanded functions, as illustrated in Figure 3B. In terms of algorithm stability, the standard deviations on the F04, F05, F06–F08, F10, F17–F18, and F23–F25 benchmark functions were superior to those of the alternative PSO variants. Ranked by overall averages, the final order of the algorithms is MMPSO, HCLDMSPSO, EPSO, HCLPSO, HIDMSPSO, DMSPSO, FDRPSO, and CLPSO. These experimental findings provide evidence that the proposed algorithm exhibits effective global convergence capability, particularly when dealing with multimodal and compound test functions. In summary, the MMPSO algorithm shows a superior optimization effect, and the experimental results validate that the presence of diverse subpopulations aids in avoiding local optima and locating the global optimum.

4.2.3. Analysis using Friedman Statistical Test

To verify the statistical differences among the algorithms, a nonparametric Friedman test and a Nemenyi test were used [50,51]. Table 6 presents the Friedman test results for the eight algorithms at a significance level of α = 5%. Prior to the analysis, the algorithms were ranked so that r_n^m represents the ranking of the m-th algorithm on the n-th benchmark function; the average rank is expressed as follows:
R_m = \frac{1}{N_K} \sum_{n=1}^{N_K} r_n^m,
where N_K = 25 represents the number of benchmark functions. The results demonstrate that the MMPSO algorithm ranks highest on the benchmark test set and significantly outperforms the other algorithms on unimodal, multimodal, and hybrid composition functions. In this experiment, there were 25 benchmark functions, eight algorithms, and seven degrees of freedom. The chi-square value of the 30-dimensional Friedman test was 47.160, and the p-value was less than 0.05. According to the critical table of the chi-square distribution, the critical value was 14.07; because the actual chi-square value exceeded the critical value, there were obvious differences in the overall performance of the algorithms.
The Nemenyi rank sum test was used to achieve pairwise comparisons among multiple samples. Figure 4 presents the final rank in 30 dimensions, visually indicating the Friedman statistical test’s critical difference (CD). If the discrepancy between the mean rank of the two algorithms exceeds the value domain’s CD, the null hypothesis is rejected at the designated level of confidence. The CD calculation formula is built as follows:
CD = q_\alpha \sqrt{\frac{K(K+1)}{6 N_K}},
where q_α = 3.031 and K = 8 is the number of algorithms, giving a critical difference of CD = 1.863687.
There were obvious differences among the performances of MMPSO, EPSO, HCLPSO, HIDMSPSO, DMSPSO, FDRPSO, and CLPSO, as shown in Figure 4. In our qualitative analysis, there were no significant differences between the proposed algorithm and the HCLDMSPSO algorithm; however, the proposed algorithm showed superior performance according to the overall average rank.

4.3. Parameters Selection of SVR

In the context of SVR, the estimation and generalization performances are impacted by the hyperparameter pair (C, g), where C denotes the penalty factor and g denotes the RBF kernel parameter. The higher the trade-off factor C, the more prone the model becomes to overfitting. The larger the value of g, the fewer the support vectors, and vice versa; this number significantly influences the speed of the training and prediction processes. The fitness function is an effective measure for optimizing the SVR parameter settings to achieve the best prediction accuracy and generalization ability. First, we employed 10-fold cross-validation to train the model parameters and comprehensively assess the model’s performance. Then, the grid search method was used over a twice enlarged/reduced range to obtain the optimized kernel function parameter range (C, g). Finally, the MMPSO algorithm was used to optimize (C, g), where the actual measured values were obtained for each body dimension, and the residual sequence was constructed based on the KRR of the RBF kernel using the input information. To evaluate the effectiveness of the MMPSO method for enhancing the hyperparameters of the SVR model, we set the maximum number of MMPSO-SVR iterations to 1000, the population size to 50, the dimensionality to two, and the initial ranges of the variables C and g to [10, 1000] and [1 × 10^−4, 1], respectively.
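The grid-search-plus-cross-validation stage can be sketched as follows. This is a self-contained NumPy illustration that substitutes an RBF kernel ridge regressor (ridge strength 1/C) for the SVR, uses synthetic data in place of the anthropometric dataset, and omits the MMPSO refinement step; all of these substitutions are assumptions for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf_kernel(A, B, g):
    """RBF kernel K(a, b) = exp(-g * ||a - b||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-g * d2)

def krr_cv_rmse(X, y, C, g, k=10):
    """k-fold CV RMSE of RBF kernel ridge regression; the ridge strength
    1/C stands in for the SVR penalty trade-off (an assumption)."""
    idx = np.arange(len(y))
    errs = []
    for f in range(k):
        te = idx[f::k]
        tr = np.setdiff1d(idx, te)
        K = rbf_kernel(X[tr], X[tr], g)
        alpha = np.linalg.solve(K + np.eye(len(tr)) / C, y[tr])
        pred = rbf_kernel(X[te], X[tr], g) @ alpha
        errs.append(np.sqrt(np.mean((pred - y[te]) ** 2)))
    return float(np.mean(errs))

# Coarse grid over the paper's initial ranges: C in [10, 1000], g in [1e-4, 1]
X = rng.normal(size=(60, 5))          # 5 basic body parameters (synthetic)
y = X @ rng.normal(size=5)            # synthetic target dimension
grid = [(C, g) for C in (10, 100, 1000) for g in (1e-4, 1e-2, 1.0)]
C_best, g_best = min(grid, key=lambda p: krr_cv_rmse(X, y, *p))
```

In the paper's pipeline, the grid winner (C_best, g_best) would then seed the MMPSO search, which refines the pair within the same bounds.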
To analyze the influences of different training proportions, we performed another comparative test taking crotch height and knee circumference parameters as examples, as depicted in Figure 5A,B. When the size of the training sample exceeded 40, the e R M S E and e M A E values of the GRFN hybrid model were stable, demonstrating that it had satisfactory generalizability and performed better than SOTA models on small-batch sample prediction tasks.

4.4. Model Prediction Performance and Garment Pattern Making

Figure 6A,B depict the results of our qualitative analysis verifying the effectiveness of our method on the prediction of crotch height and knee circumference with GRFN training. By correcting the residual errors in the training data, the best SVR hyperparameter pairs (C_best, g_best) were (12.1940, 0.3538) and (12.5767, 0.1852), respectively. Notably, the GRFN model showed a good fit between the measured and estimated values on both the training and testing datasets. Table 7 displays the error between the estimated and ground-truth values. Among these results, the maximum error in the testing set was 0.3720 cm, and the average error was 0.1182 cm, aligning with the GB/T 23698-2009 standard [52]. Compared with the results of Liu et al. [6], who used a BPNN-based model with a mean square error (MSE) and standard error (SE) of 2.06 ± 0.2, and Li et al. [14], whose PSO-LSSVM model could only predict sleeve sizes with MSE and SE measures of 1.057 ± 0.06, our proposed hybrid model predicts eight difficult-to-measure lower body sizes with a total MSE and SE of 0.0054 ± 0.07, as shown in Table 7. Moreover, compared with the research conducted by Wang et al. [2], which utilized a generalized RBF-NN regression model and yielded an average R² value of 0.971 and an average e_RMSE of 5.823 mm, ours achieved significantly improved values of 0.9997 and 0.6142 mm. The prediction effects of incrementally increasing the number of components/parts are displayed in Figure 7A–D. In summary, the hybrid model clearly improved prediction accuracy and generalizability.
A subject’s stature, waist circumference, hip circumference, waist height, and hip height vectors were 161.7, 68.9, 93.7, 100.3, and 83.5, respectively, as inputs, and the ground-truth outputs were 69.55, 43.7, 53.85, 77.4, 79.2, 37.4, 20.4, and 96.6 for crotch height, knee height, thigh circumference, total crotch length, abdomen circumference, knee circumference, crotch width, and abdomen height, respectively. The outputs predicted by the GRFN hybrid model were 69.559, 43.718, 53.804, 77.501, 79.101, 37.357, 20.397, and 96.310. These predictions are exceptionally accurate and can be used to design patterns for women’s sports trousers and other lower-body garments.
As shown in Figure 8A, a customer participant can input a few basic body parameters and accurately predict the relevant dimensions for pattern making via the GRFN. We also provide an interactive automatic pattern generation prototype, built using Microsoft Visual Studio 2013 and the Qt 5 integrated C++ development environment. Figure 8B depicts a screenshot of this system. During the pattern making process, a point-numbering method [53] is used to draw the structural lines of clothing patterns. The pattern for three-quarter, seven-quarter, and long pants are shown in Figure 8C. Utilizing the parameter settings of CLO Standalone virtual design software, a customer can effectively identify the precise measurement landmarks required based on the Chinese National Standard, namely GB 3975-1983 [54] and GB 16160-2017 [55], to construct an accurate human body model. Figure 8D illustrates the effect of this 3D simulation using an avatar.

4.5. Performance Comparison with Other Single Optimization SVR Methods

To verify the MMPSO model’s performance in optimizing the SVR hyperparameters, we selected the HCLDMSPSO-SVR, HCLPSO-SVR, PSO-LSSVM, and BPNN models for comparison. The MMPSO-SVR, HCLDMSPSO-SVR, HCLPSO-SVR, and PSO-LSSVM models were set to a maximum iteration limit of 1000 with a population size of 50. We selected the HCLDMSPSO [48] and HCLPSO [46] models as representative SOTA swarm-intelligence algorithms. The BPNN comprised an input layer of five nodes, a hidden layer of 13 nodes, and an output layer of eight nodes. The objective function was set to the minimum fitness value, and the best optimization result was preserved during training. To analyze the influence of the hyperparameters on SVR-based small-sample human body size estimation, we compared these models against the single MMPSO-SVR optimization model, which enhances population diversity to avoid local optima in subsequent iterations and thus improves overall accuracy.
Based on our quantitative analysis, we compared each model’s prediction results, as shown in Table 8. By assessing the R 2 statistics, we concluded that the degree of fit for a single MMPSO-SVR model was the highest in all parts, and the e R M S E and e M A E measures were the smallest of all prediction methods. This again clearly demonstrates the effectiveness of our proposed single unmodified MMPSO-SVR global optimization model. Compared with other single optimization SVR hyperparameter methods, our method demonstrates superior efficacy by evading local optima and attaining the global optimum.

4.6. Ablation Experiments

To validate the accuracy and reliability of our proposed model, we conducted an ablation study using two simplified models: the KRR regression version and the single direct MMPSO-SVR version. Figure 9A–H present the performance results on the few-shot human body dimension dataset. The comparative analysis assessed the predictive accuracy of the GRFN model against the KRR regression and single direct MMPSO-SVR models. Compared with the e_RMSE values of the single direct MMPSO-SVR model, those of the hybrid model improved by between 91.73% (abdomen circumference) and 97.12% (crotch width). The e_RMSE and e_MAE values indicate that the hybrid model demonstrated superior predictive performance compared with every other model. The KRR regression model performed poorly in terms of R² across all measurement items, whereas the single unmodified MMPSO-SVR achieved R² values above 0.9 for KH, AC, and AH predictions. However, it struggled with CW and TCL, which are particularly challenging to measure accurately, resulting in low R² values. In contrast, the regression values of our hybrid model were above 0.999 for all prediction items, indicating that our method predicts body sizes relevant to garment pattern making more accurately and efficiently, effectively avoiding the shortcomings of single body dimension prediction methods.

5. Discussion and Conclusions

With the rapid development of artificial intelligence technology, efficiently estimating the human body dimensions required for clothing patterns using ANN technology, rather than anthropometry measurement methods, has emerged as a new trend. The KRR regression model does not require feature selection and exhibits strong robustness to outliers, making it particularly suitable for handling datasets with noise or outliers. However, due to the ambiguity in the variable relationships among body dimensions, the KRR-based model may not fully capture their nonlinear features. Therefore, by enhancing the improved SVR model to compensate for the nonlinear features in the body dimension data, we have developed a nonlinear hybrid prediction approach with a high tolerance for input errors. This study employed an innovative data-driven GRFN hybrid model, based on a small-batch sample, for predicting highly accurate human body dimensions. We use the KRR regression model to fit and construct the predictive residual sequence. The MMPSO-SVR model is used to deal with nonlinear relationships and noisy data and to obtain more accurate predictive residual correction values. We then combine the estimated values from the KRR regression and the predictive residual correction values to obtain high-precision estimates of human body size parameters. Using the proposed teaching and learning co-optimization MMPSO algorithm enhances global search capabilities and population diversity [56], effectively preventing the algorithm from falling into local optima, thus leading to optimal SVR hyper-parameters. The experimental results validate the suitability of the proposed hybrid residual correction model, and compared with the performance of the single direct MMPSO-SVR model, the GRFN hybrid model exhibits astonishing improvements in RMSE of between 91.73% and 97.12%. 
Compared with other SOTA ANNs, our model also improves prediction accuracy and reliability, achieving a mean square error of 0.0054 ± 0.07 (MSE ± SE), and its absolute error fluctuates within a narrower range than those of the other models. This advancement provides the fashion industry with intelligent garment design tools that deliver the accuracy and efficiency required for mass customization.
To further improve prediction accuracy and ease of implementation, we plan to overcome current data limitations by extending the anthropometric dataset to cover full-body information, especially data on atypical body shapes. Beyond sample size, we will also consider the influence of gender, age, ethnicity, geographical region, and body mass index on the data sources. We further plan to increase particle diversity in the MMPSO algorithm by exploiting particle topology and communication mechanisms to improve the stability and convergence of the global search.

Author Contributions

Conceptualization, C.B. and Y.M.; methodology, C.B., Y.M. and X.Z.; software, C.B. and X.Z.; validation, C.B., J.C. and X.Z.; data curation, C.B.; writing—original draft preparation, C.B. and Y.M.; writing—review and editing, C.B., Y.M. and X.Z.; visualization, C.B.; supervision, Y.M.; project administration, Y.M. and X.Z.; funding acquisition, Y.M., J.C. and X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (grant No. 61972458, 62172367), the Zhejiang Provincial Natural Science Foundation of China (grant No. LZ23F020002), and the Zhejiang Province Public Welfare Technology Application Research Project (grant No. LGF22F020006).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets analyzed in the current study are available at: https://doi.org/10.1080/00405000.2017.1315794 (accessed on 1 August 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wu, G.; Liu, S.; Wu, X.; Ding, X. Research on lower body shape of late pregnant women in Shanghai area of China. Int. J. Ind. Ergon. 2015, 46, 69–75. [Google Scholar] [CrossRef]
  2. Wang, L.; Lee, T.J.; Bavendiek, J.; Eckstein, L. A data-driven approach towards the full anthropometric measurements prediction via generalized regression neural networks. Appl. Soft Comput. 2021, 109, 107551. [Google Scholar] [CrossRef]
  3. Su, J.; Ke, Y.; Kuang, C.; Gu, B.; Xu, B. Converting lower-body features from three-dimensional body images into rules for individualized pant patterns. Text. Res. J. 2019, 89, 2199–2208. [Google Scholar] [CrossRef]
  4. Galada, A.; Baytar, F. Developing a prediction model for improving bifurcated garment fit for mass customization. Int. J. Cloth. Sci. Technol. 2023, 35, 397–418. [Google Scholar] [CrossRef]
  5. Chan, A.P.; Fan, J.; Yu, W. Men’s shirt pattern design part II: Prediction of pattern parameters from 3D body measurements. J. Fiber Sci. Technol. 2003, 59, 328–333. [Google Scholar] [CrossRef]
  6. Liu, K.; Wang, J.; Kamalha, E.; Li, V.; Zeng, X. Construction of a prediction model for body dimensions used in garment pattern making based on anthropometric data learning. J. Text. Inst. 2017, 108, 2107–2114. [Google Scholar] [CrossRef]
  7. Liu, K.; Wu, H.; Zhu, C.; Wang, J.; Zeng, X.; Tao, X.; Bruniaux, P. An evaluation of garment fit to improve customer body fit of fashion design clothing. Int. J. Adv. Manuf. Technol. 2022, 120, 2685–2699. [Google Scholar] [CrossRef]
  8. Cheng, P.; Chen, D.; Wang, J. Fast clustering of male lower body based on GA-BP neural network. Int. J. Cloth. Sci. Technol. 2020, 32, 163–176. [Google Scholar] [CrossRef]
  9. Cheng, P.; Chen, D.; Wang, J. Research on underwear pressure prediction based on improved GA-BP algorithm. Int. J. Cloth. Sci. Technol. 2021, 33, 619–642. [Google Scholar]
  10. Liu, Z.; Li, J.; Chen, G.; Lu, G. Predicting detailed body sizes by feature parameters. Int. J. Cloth. Sci. Technol. 2014, 26, 118–130. [Google Scholar] [CrossRef]
  11. Wang, Z.; Wang, J.; Xing, Y.; Yang, Y.; Liu, K. Estimating human body dimensions using RBF artificial neural networks technology and its application in activewear pattern making. Appl. Sci. 2019, 9, 1140. [Google Scholar] [CrossRef]
  12. Li, X.; Jing, X. The establishment of human body three vital measurements regression relationship based on SVR method. Int. J. Cloth. Sci. Technol. 2015, 27, 148–158. [Google Scholar] [CrossRef]
  13. Rativa, D.; Fernandes, B.J.T.; Roque, A. Height and weight estimation from anthropometric measurements using machine learning regressions. IEEE J. Transl. Eng. Health Med. 2018, 6, 4400209. [Google Scholar] [CrossRef] [PubMed]
  14. Li, T.; Lyu, Y.; Guo, Z.; Du, L.; Zou, F. Construction of the PSO-LSSVM prediction model for sleeve pattern dimensions based on garment flat recognition. Int. J. Cloth. Sci. Technol. 2023, 35, 67–87. [Google Scholar] [CrossRef]
  15. Wang, Z.; Tao, X.; Zeng, X.; Xing, Y.; Xu, Y.; Xu, Z.; Bruniaux, P.; Wang, J. An interactive personalized garment design recommendation system using intelligent techniques. Appl. Sci. 2022, 12, 4654. [Google Scholar] [CrossRef]
  16. Liu, X.; Wu, Y.; Wu, H. Machine learning enabled 3D body measurement estimation using hybrid feature selection and Bayesian search. Appl. Sci. 2022, 12, 7253. [Google Scholar] [CrossRef]
  17. Wang, Z.; Tao, X.; Zeng, X.; Xing, Y.; Xu, Z.; Bruniaux, P. Design of customized garments towards sustainable fashion using 3D digital simulation and machine learning-supported human-product interactions. Int. J. Comput. Intell. Syst. 2023, 16, 16. [Google Scholar] [CrossRef]
  18. Poli, R. Analysis of the publications on the applications of particle swarm optimisation. J. Artif. Evol. Appl. 2008, 2008, 685175. [Google Scholar] [CrossRef]
  19. Liang, B.; Zhao, Y.; Li, Y. A hybrid particle swarm optimization with crisscross learning strategy. Eng. Appl. Artif. Intell. 2021, 105, 104418. [Google Scholar] [CrossRef]
  20. Shi, Y.; Eberhart, R.C. Empirical study of particle swarm optimization. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99, Washington, DC, USA, 6–9 July 1999; pp. 1945–1950. [Google Scholar]
  21. Chen, K.; Zhou, F.; Liu, A. Chaotic dynamic weight particle swarm optimization for numerical function optimization. Knowl.-Based Syst. 2018, 139, 23–40. [Google Scholar] [CrossRef]
  22. Tian, D.; Shi, Z. MPSO: Modified particle swarm optimization and its applications. Swarm Evol. Comput. 2018, 41, 49–68. [Google Scholar] [CrossRef]
  23. Mendes, R.; Kennedy, J.; Neves, J. The fully informed particle swarm: Simpler, maybe better. IEEE Trans. Evol. Comput. 2004, 8, 204–210. [Google Scholar] [CrossRef]
  24. Liang, J.J.; Suganthan, P.N. Dynamic multi-swarm particle swarm optimizer. In Proceedings of the 2005 IEEE Swarm Intelligence Symposium, Pasadena, CA, USA, 8–10 June 2005; pp. 124–129. [Google Scholar]
  25. Lim, W.H.; Mat Isa, N.A. An adaptive two-layer particle swarm optimization with elitist learning strategy. Inf. Sci. 2014, 273, 49–72. [Google Scholar] [CrossRef]
  26. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  27. Wang, H.; Wu, Z.; Rahnamayan, S.; Liu, Y.; Ventresca, M. Enhancing particle swarm optimization using generalized opposition-based learning. Inf. Sci. 2011, 181, 4699–4714. [Google Scholar] [CrossRef]
  28. Zhan, Z.; Zhang, J.; Li, Y.; Shi, Y. Orthogonal learning particle swarm optimization. IEEE Trans. Evol. Comput. 2011, 15, 832–847. [Google Scholar] [CrossRef]
  29. Xu, G.; Cui, Q.; Shi, X.; Ge, H.; Zhan, Z.-H.; Lee, H.P.; Liang, Y.; Tai, R.; Wu, C. Particle swarm optimization based on dimensional learning strategy. Swarm Evol. Comput. 2019, 45, 33–51. [Google Scholar] [CrossRef]
  30. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  31. Lu, J.; Zhang, J.; Sheng, J. Enhanced multi-swarm cooperative particle swarm optimizer. Swarm Evol. Comput. 2022, 69, 100989. [Google Scholar] [CrossRef]
  32. Yin, R.; Liu, Y.; Wang, W.; Meng, D. Sketch kernel ridge regression using circulant matrix: Algorithm and theory. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 3512–3524. [Google Scholar] [CrossRef]
  33. Liu, Y.; Liao, S.; Lin, Y.; Wang, W. Infinite kernel learning: Generalization bounds and algorithms. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 2280–2286. [Google Scholar]
  34. Li, S.; Fang, H.; Liu, X. Parameter optimization of support vector regression based on sine cosine algorithm. Expert Syst. Appl. 2018, 91, 63–77. [Google Scholar] [CrossRef]
  35. Gu, X.; Xiao, Z. Research on the application of ARIMA-SVR combination model in satellite telemetry parameter prediction. Chin. J. Space Sci. 2022, 42, 306–312. [Google Scholar]
  36. Yu, Y.; Gao, S.; Cheng, S.; Wang, Y.; Song, S.; Yuan, F. CBSO: A memetic brain storm optimization with chaotic local search. Memetic Comput. 2018, 10, 353–367. [Google Scholar] [CrossRef]
  37. Tong, X.; Choi, K.; Lai, T.; Wong, W. Stability bounds and almost sure convergence of improved particle swarm optimization methods. Res. Math. Sci. 2021, 8, 30. [Google Scholar] [CrossRef]
  38. Li, Z.; Wang, W.; Yan, Y.; Li, Z. PS-ABC: A hybrid algorithm based on particle swarm and artificial bee colony for high-dimensional optimization problems. Expert Syst. Appl. 2015, 42, 8881–8895. [Google Scholar] [CrossRef]
  39. Zervoudakis, K.; Tsafarakis, S. A mayfly optimization algorithm. Comput. Ind. Eng. 2020, 145, 106559. [Google Scholar] [CrossRef]
  40. Wu, Q.; Lin, H.; Jin, Y.; Chen, Z.; Li, S.; Chen, D. A new fallback beetle antennae search algorithm for path planning of mobile robots with collision-free capability. Soft Comput. 2020, 24, 2369–2380. [Google Scholar] [CrossRef]
  41. He, Q.; Lin, J.; Xu, H. Hybrid cauchy mutation and uniform distribution of grasshopper optimization algorithm. Control Decis. 2021, 36, 1558–1568. [Google Scholar]
  42. GB/T 1335.2-2008; Standard Sizing Systems for Garment-Women. GB/T: Beijing, China, 2008.
  43. Liu, K.; Zhu, C. Clothing Intelligent Design: Structural Design and Fit Evaluation; China Textile Press: Beijing, China, 2021; pp. 138–141. [Google Scholar]
  44. Suganthan, P.N.; Hansen, N.; Liang, J.J.; Deb, K.; Chen, Y.; Auger, A.; Tiwari, S. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; pp. 1–50. [Google Scholar]
  45. Peram, K.; Veeramachaneni, K.; Mohan, C.K. Fitness-distance-ratio based particle swarm optimization. In Proceedings of the IEEE Swarm Intelligence Symposium, Indianapolis, IN, USA, 24–26 April 2003; pp. 174–181. [Google Scholar]
  46. Lynn, N.; Suganthan, P.N. Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm Evol. Comput. 2015, 24, 11–24. [Google Scholar] [CrossRef]
  47. Lynn, N.; Suganthan, P.N. Ensemble particle swarm optimizer. Appl. Soft Comput. 2017, 55, 533–548. [Google Scholar] [CrossRef]
  48. Wang, S.; Liu, G.; Gao, M.; Cao, S.; Guo, A.; Wang, J. Heterogeneous comprehensive learning and dynamic multi- swarm particle swarm optimizer with two mutation operators. Inf. Sci. 2020, 540, 175–201. [Google Scholar] [CrossRef]
  49. Varna, F.T.; Husbands, P. HIDMS-PSO: A new heterogeneous improved dynamic multi-swarm PSO algorithm. In Proceedings of the IEEE Symposium Series on Computational Intelligence (IEEE SSCI), Canberra, Australia (virtual), 1–4 December 2020; pp. 473–480. [Google Scholar]
  50. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  51. Zhou, X.; Zhou, S.; Han, Y.; Zhu, S. Levy flight-based inverse adaptive comprehensive learning particle swarm optimization. Math. Biosci. Eng. 2022, 19, 5241–5268. [Google Scholar] [CrossRef] [PubMed]
  52. GB/T 23698-2009; General Requirements for 3-D Scanning Anthropometric Methodologies. GB/T: Beijing, China, 2009.
  53. Liu, D. Clothing Pattern Design, 4th ed.; China Textile Press: Beijing, China, 2019; pp. 166–170. [Google Scholar]
  54. GB 3975-1983; Terminology for Human Body Measurements. GB: Beijing, China, 1983.
  55. GB/T 16160-2017; Anthropometric Definitions and Methods for Garment. GB/T: Beijing, China, 2017.
  56. Kadirkamanathan, V.; Selvarajah, K.; Fleming, P.J. Stability analysis of the particle dynamics in particle swarm optimizer. IEEE Trans. Evol. Comput. 2006, 10, 245–255. [Google Scholar] [CrossRef]
Figure 1. Architecture of the GRFN model for human body dimension prediction. (In the 2D pattern-making subfigure, the blue curve represents the back pattern piece of the pant; the red curve represents the frontal pattern piece of the pant; the black curve indicates an auxiliary line. Additionally, the blue dots are used to denote auxiliary frames.)
Figure 2. Full hybrid prediction process.
Figure 3. The average ranking of PSO variants: (A) average ranking in unimodal functions of PSO variants; (B) average rank in multimodal and expanded functions of PSO variants.
Figure 4. Critical difference under Friedman test, statistical rank.
Figure 5. Comparison of MAE and RMSE with different input training samples; (A) crotch-height, (B) knee circumference.
Figure 6. Predicted results of GRFN hybrid model; (A) crotch-height, (B) knee circumference.
Figure 7. More results for body part prediction; (A) total crotch length, (B) crotch width, (C) thigh circumference, and (D) knee height.
Figure 8. Pattern making and virtual simulation using body dimensions predicted by the GRFN model: (A) GRFN model, (B) 2D pattern making, (C) pant pattern, and (D) 3D simulation try-on. (Notes: In subfigure (B), the blue curve represents the back pattern piece of the pant; the red curve represents the frontal pattern piece of the pant; the black dot curve indicates an auxiliary line.)
Figure 9. Performance comparison of regression model on small-sample body dimensions datasets: (A) crotch height; (B) total crotch length; (C) knee height; (D) abdomen circumference; (E) thigh circumference; (F) knee circumference; (G) crotch width; and (H) abdomen height.
Table 1. Descriptive statistics on dimension measurements.

| Measurements | Abbr. | Min | Max | Mean | Median | SM | SD | CV (%) |
|---|---|---|---|---|---|---|---|---|
| Stature | ST | 151.5 | 173.2 | 161.9 | 160.9 | 161.4 | 5.21 | 3.22 |
| Waist Height | WH | 93.4 | 111.0 | 101.2 | 100.5 | 100.8 | 3.97 | 3.92 |
| Abdomen Height | AH | 86.2 | 103.3 | 94.5 | 94.1 | 94.2 | 3.80 | 4.02 |
| Hip Height | HH | 75.2 | 91.1 | 82.9 | 82.4 | 82.7 | 3.76 | 4.54 |
| Crotch Height | CH | 64.8 | 80.7 | 71.7 | 71.2 | 71.5 | 3.29 | 4.59 |
| Knee Height | KH | 39.0 | 48.1 | 43.5 | 43.3 | 43.3 | 2.04 | 4.70 |
| Waist Circumference | WC | 58.3 | 81.3 | 69.1 | 68.8 | 69.0 | 4.94 | 7.14 |
| Hip Circumference | HC | 78.0 | 101.8 | 92.2 | 92.5 | 92.4 | 4.75 | 5.15 |
| Thigh Circumference | TC | 42.4 | 60.8 | 52.9 | 52.6 | 52.7 | 3.75 | 7.09 |
| Knee Circumference | KC | 31.6 | 43.9 | 37.4 | 37.3 | 37.3 | 2.26 | 6.05 |
| Abdomen Circumference | AC | 65.0 | 91.2 | 79.2 | 79.2 | 79.2 | 5.35 | 6.75 |
| Total Crotch Length | TCL | 64.5 | 93.9 | 73.2 | 72.8 | 73.1 | 4.10 | 5.61 |
| Crotch Width | CW | 17.2 | 22.2 | 19.7 | 19.8 | 19.7 | 1.04 | 5.28 |

Notes: Measurements are in centimeters (cm). Stature (ST), waist circumference (WC), hip circumference (HC), crotch height (CH), waist height (WH), abdominal height (AH), hip height (HH), knee height (KH), total crotch length (TCL), thigh circumference (TC), abdomen circumference (AC), knee circumference (KC), and crotch width (CW). The sum mean (SM) is the average of the upper quartile, lower quartile, and overall mean. Standard deviation (SD) and the coefficient of variation (CV) are also listed.
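The statistics in Table 1 can be reproduced from raw measurement vectors. A small sketch, interpreting the sum mean (SM) as the average of the lower quartile, upper quartile, and overall mean per the table note (the sample values below are illustrative, not the study's data):

```python
import numpy as np

def dimension_stats(x):
    """Descriptive statistics as in Table 1. SM is interpreted here as the
    mean of the lower quartile, upper quartile, and overall mean."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    mean, sd = float(x.mean()), float(x.std(ddof=1))
    return {
        "min": float(x.min()), "max": float(x.max()),
        "mean": mean, "median": float(np.median(x)),
        "SM": (q1 + q3 + mean) / 3.0,
        "SD": sd,
        "CV%": 100.0 * sd / mean,  # coefficient of variation in percent
    }

# Illustrative stature sample in cm (not the paper's dataset).
stats = dimension_stats([151.5, 158.2, 160.9, 161.7, 164.0, 173.2])
```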
Table 2. Rotated component matrix.

| No. | Measurements | Abbr. | Component 1 | Component 2 |
|---|---|---|---|---|
| 1 | Stature | ST | 0.970 | 0.209 |
| 2 | Waist Height | WH | 0.964 | 0.215 |
| 3 | Abdomen Height | AH | 0.958 | 0.223 |
| 4 | Hip Height | HH | 0.960 | 0.198 |
| 5 | Crotch Height | CH | 0.890 | −0.021 |
| 6 | Knee Height | KH | 0.909 | 0.273 |
| 7 | Waist Circumference | WC | 0.047 | 0.915 |
| 8 | Hip Circumference | HC | 0.258 | 0.918 |
| 9 | Thigh Circumference | TC | 0.163 | 0.927 |
| 10 | Knee Circumference | KC | 0.232 | 0.794 |
| 11 | Abdomen Circumference | AC | 0.220 | 0.920 |
| 12 | Total Crotch Length | TCL | 0.445 | 0.553 |
| 13 | Crotch Width | CW | 0.045 | 0.852 |
Table 3. Sum of squares of rotated loadings.

| Component | Factor | Eigenvalues | % of Variance | Cumulative % |
|---|---|---|---|---|
| Factor 1 | Height Factor | 5.725 | 44.040 | 44.040 |
| Factor 2 | Circumference Factor | 5.302 | 40.787 | 84.827 |
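Tables 2 and 3 correspond to a standard factor analysis with varimax rotation. A minimal sketch on synthetic data with an analogous two-factor structure (six height-driven and seven girth-driven measurements; the data are made up for illustration):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 200
height = rng.normal(size=n)  # latent "height" factor
girth = rng.normal(size=n)   # latent "circumference" factor

# Toy 13-column measurement matrix: 6 height-driven, 7 girth-driven variables.
X = np.column_stack(
    [height + 0.2 * rng.normal(size=n) for _ in range(6)]
    + [girth + 0.2 * rng.normal(size=n) for _ in range(7)]
)

fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(StandardScaler().fit_transform(X))
loadings = fa.components_.T  # shape (13, 2): rows = measurements, cols = rotated factors
```

Each row of `loadings` plays the role of one row of Table 2: height-driven variables load heavily on one rotated factor and girth-driven variables on the other.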
Table 4. Parameter settings for PSO variants.

| Algorithm | Year | Parameter Settings | Reference |
|---|---|---|---|
| FDRPSO | 2003 | w: 0.9→0.4, c1 = c2 = 1.0, c = 2.0 | Peram et al. [45] |
| DMSPSO | 2005 | w = 0.729, c1 = c2 = 1.49445 | Liang et al. [24] |
| CLPSO | 2006 | w: 0.9→0.4, c1 = c2 = 1.49445, m = 7 | Liang et al. [26] |
| HCLPSO | 2015 | w: 0.99→0.2, c1: 2.5→0.5, c2: 0.5→2.5 | Lynn et al. [46] |
| EPSO | 2017 | w: 0.9→0.4, c1 = c2 = 2.0 | Lynn et al. [47] |
| HCLDMSPSO | 2020 | w1: 0.99→0.29, w2, c1: 2.5→0.5, c2: 0.5→2.5, Pm = 0.1 | Wang et al. [48] |
| HIDMSPSO | 2020 | w1: 0.99→0.2, w2, c1: 2.5→0.5, c2: 0.5→2.5 | Varna et al. [49] |
| MMPSO | — | w1: 0.9→0.4, w2: 0.99→0.2, wc = 0.8, c1: 2.5→0.5, c2: 0.5→2.5, c3: 2.0→0, c4 = c5 = 1.49445 | This work |
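The linearly varying control parameters listed in Table 4 (e.g., w: 0.9→0.4, c1: 2.5→0.5, c2: 0.5→2.5) can be sketched in a basic PSO loop. This is an illustrative single-swarm skeleton on the sphere function, not the full multi-strategy multi-subswarm MMPSO:

```python
import numpy as np

def linear_schedule(start, end, t, t_max):
    # Linearly interpolate a PSO control parameter, e.g. w: 0.9 -> 0.4.
    return start + (end - start) * t / t_max

def pso(f, dim=10, n_particles=30, iters=200, seed=0):
    """Basic global-best PSO with linearly varying w, c1, c2 (a sketch only)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for t in range(iters):
        w = linear_schedule(0.9, 0.4, t, iters)    # inertia weight
        c1 = linear_schedule(2.5, 0.5, t, iters)   # cognitive coefficient
        c2 = linear_schedule(0.5, 2.5, t, iters)   # social coefficient
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

best_x, best_val = pso(lambda z: float(np.sum(z ** 2)))  # sphere benchmark
```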
Table 5. Solution accuracy of benchmark functions (30-dim). Values are reported as mean (std).

| Function | MMPSO (Ours) | HIDMSPSO | HCLPSO | CLPSO |
|---|---|---|---|---|
| F01 | 5.11×10^−14 (1.79×10^−14) | 7.96×10^−14 (3.11×10^−14) | 8.43×10^−14 (2.87×10^−14) | 5.49×10^−14 (1.13×10^−14) |
| F02 | 2.96×10^−12 (4.59×10^−12) | 1.79×10^−10 (1.31×10^−10) | 8.23×10^−5 (1.10×10^−4) | 1.09×10^3 (2.81×10^2) |
| F03 | 3.97×10^5 (2.24×10^5) | 4.77×10^5 (2.38×10^5) | 9.41×10^5 (5.40×10^5) | 1.28×10^7 (4.05×10^6) |
| F04 | 21.98 (24.53) | 67.49 (54.41) | 6.65×10^2 (3.40×10^2) | 8.59×10^3 (1.85×10^3) |
| F05 | 2.12×10^3 (4.98×10^2) | 5.89×10^2 (4.69×10^2) | 3.28×10^3 (6.14×10^2) | 4.41×10^3 (3.99×10^2) |
| F06 | 0.42 (1.25) | 64.56 (51.18) | 2.48 (2.06) | 1.04 (1.64) |
| F07 | 9.61×10^−3 (8.87×10^−3) | 1.33×10^−2 (1.50×10^−2) | 1.91×10^−2 (1.51×10^−2) | 0.75 (0.12) |
| F08 | 20.52 (6.99×10^−2) | 20.61 (0.12) | 20.80 (6.70×10^−2) | 20.89 (7.08×10^−2) |
| F09 | 18.39 (7.43) | 40.62 (9.34) | 1.10×10^−13 (3.30×10^−14) | 5.68×10^−14 (0) |
| F10 | 27.95 (5.65) | 45.79 (14.15) | 56.19 (14.22) | 1.13×10^2 (16.75) |
| F11 | 8.48 (2.65) | 15.89 (5.18) | 20.69 (2.94) | 24.57 (1.64) |
| F12 | 8.16×10^3 (4.38×10^3) | 6.61×10^3 (6.99×10^3) | 4.78×10^3 (4.57×10^3) | 1.35×10^4 (4.09×10^3) |
| F13 | 1.95 (0.27) | 3.68 (0.98) | 1.48 (0.22) | 1.82 (0.19) |
| F14 | 11.09 (0.50) | 11.74 (0.49) | 12.36 (0.88) | 12.64 (0.21) |
| F15 | 3.44×10^2 (1.38×10^2) | 3.66×10^2 (84.25) | 89.39 (87.76) | 37.57 (20.42) |
| F16 | 96.69 (40.84) | 1.51×10^2 (1.66×10^2) | 1.08×10^2 (37.83) | 1.57×10^2 (22.33) |
| F17 | 1.13×10^2 (1.06×10^2) | 1.16×10^2 (1.02×10^2) | 1.28×10^2 (49.22) | 2.31×10^2 (33.91) |
| F18 | 8.88×10^2 (16.82) | 9.05×10^2 (3.32) | 8.96×10^2 (43.79) | 9.10×10^2 (20.59) |
| F19 | 8.81×10^2 (12.52) | 9.04×10^2 (0.49) | 9.10×10^2 (20.95) | 9.15×10^2 (1.32) |
| F20 | 8.66×10^2 (30.63) | 9.05×10^2 (0.67) | 8.98×10^2 (39.44) | 9.13×10^2 (20.61) |
| F21 | 5.00×10^2 (2.27×10^−12) | 5.00×10^2 (2.88×10^−13) | 4.99×10^2 (3.62) | 5.00×10^2 (3.51×10^−13) |
| F22 | 7.51×10^2 (48.62) | 8.38×10^2 (20.67) | 9.06×10^2 (16.39) | 9.75×10^2 (15.37) |
| F23 | 5.34×10^2 (1.77×10^−4) | 5.34×10^2 (4.28×10^−4) | 5.34×10^2 (4.03×10^−4) | 5.34×10^2 (3.16×10^−4) |
| F24 | 2.00×10^2 (1.12×10^−13) | 2.00×10^2 (1.54×10^−12) | 2.00×10^2 (1.62×10^−12) | 2.00×10^2 (3.09×10^−12) |
| F25 | 2.00×10^2 (2.13×10^−13) | 2.00×10^2 (1.36×10^−12) | 2.00×10^2 (1.56×10^−12) | 2.00×10^2 (4.46×10^−10) |

| Function | DMSPSO | EPSO | FDRPSO | HCLDMSPSO |
|---|---|---|---|---|
| F01 | 1.89×10^−15 (1.13×10^−14) | 5.49×10^−14 (1.03×10^−14) | 1.15×10^−13 (3.49×10^−14) | 5.87×10^−14 (1.03×10^−14) |
| F02 | 21.54 (13.82) | 8.90×10^−13 (2.88×10^−13) | 2.24×10^−11 (6.33×10^−11) | 3.00×10^−2 (3.56×10^−2) |
| F03 | 5.89×10^6 (2.59×10^6) | 3.23×10^5 (1.41×10^5) | 3.75×10^5 (1.75×10^5) | 1.13×10^6 (3.95×10^5) |
| F04 | 1.33×10^3 (5.42×10^2) | 1.11×10^2 (1.13×10^2) | 3.10×10^2 (1.71×10^2) | 43.79 (45.38) |
| F05 | 2.43×10^3 (5.03×10^2) | 4.26×10^3 (8.31×10^2) | 3.45×10^3 (6.82×10^2) | 2.29×10^3 (3.70×10^2) |
| F06 | 52.31 (38.36) | 1.33 (1.56) | 9.29 (22.72) | 26.64 (21.56) |
| F07 | 1.74×10^−2 (1.11×10^−2) | 2.20×10^−2 (2.11×10^−2) | 1.51×10^−2 (1.25×10^−2) | 6.66×10^−3 (9.80×10^−3) |
| F08 | 20.53 (4.64×10^−2) | 20.89 (8.25×10^−2) | 20.87 (7.81×10^−2) | 20.68 (0.12) |
| F09 | 13.77 (10.22) | 0.33 (0.65) | 30.89 (7.94) | 29.71 (5.28) |
| F10 | 55.15 (6.94) | 57.71 (14.98) | 59.22 (14.85) | 28.55 (5.99) |
| F11 | 28.84 (1.45) | 23.06 (3.39) | 19.23 (4.71) | 10.34 (3.54) |
| F12 | 6.80×10^3 (5.47×10^3) | 4.87×10^3 (4.45×10^3) | 7.05×10^3 (8.83×10^3) | 3.97×10^3 (3.70×10^3) |
| F13 | 3.93 (0.51) | 2.20 (0.40) | 3.00 (0.68) | 2.59 (0.52) |
| F14 | 12.48 (0.27) | 11.83 (0.62) | 11.89 (0.67) | 11.09 (0.64) |
| F15 | 3.55×10^2 (83.72) | 1.09×10^2 (97.06) | 3.13×10^2 (1.13×10^2) | 3.23×10^2 (78.42) |
| F16 | 2.62×10^2 (1.95×10^2) | 1.14×10^2 (51.86) | 2.46×10^2 (1.87×10^2) | 63.18 (36.36) |
| F17 | 1.78×10^2 (1.26×10^2) | 1.34×10^2 (76.35) | 2.95×10^2 (1.99×10^2) | 1.23×10^2 (1.38×10^2) |
| F18 | 9.09×10^2 (1.74) | 9.01×10^2 (46.79) | 9.16×10^2 (4.10) | 8.91×10^2 (41.82) |
| F19 | 9.08×10^2 (2.43) | 9.00×10^2 (45.62) | 9.16×10^2 (4.33) | 8.88×10^2 (44.98) |
| F20 | 9.08×10^2 (2.41) | 9.11×10^2 (30.62) | 9.16×10^2 (4.09) | 9.06×10^2 (20.21) |
| F21 | 5.00×10^2 (2.48×10^−13) | 4.97×10^2 (14.30) | 5.80×10^2 (1.63×10^2) | 5.00×10^2 (1.85×10^−13) |
| F22 | 9.09×10^2 (12.10) | 9.23×10^2 (20.78) | 9.26×10^2 (16.80) | 9.05×10^2 (10.41) |
| F23 | 5.34×10^2 (2.38×10^−4) | 5.34×10^2 (4.15×10^−4) | 6.70×10^2 (2.19×10^2) | 5.34×10^2 (1.89×10^−4) |
| F24 | 2.00×10^2 (7.97×10^−13) | 2.00×10^2 (1.39×10^−12) | 2.00×10^2 (1.46×10^−12) | 2.00×10^2 (1.32×10^−13) |
| F25 | 2.00×10^2 (5.77×10^−13) | 2.00×10^2 (8.09×10^−13) | 2.00×10^2 (1.10×10^−12) | 2.00×10^2 (2.95×10^−13) |

Notes: In the original typesetting, bold font indicates the optimal result for each function.
Table 6. Mean values of the benchmark under the Friedman test (30-dim).

| Final Rank | Algorithm (All) | Avg Rank | Algorithm (UN) | Rank | Algorithm (MN, EF) | Rank | Algorithm (CF) | Rank |
|---|---|---|---|---|---|---|---|---|
| 1 | MMPSO | 2.2 | MMPSO | 2.0 | MMPSO | 2.4 | MMPSO | 2.1 |
| 2 | HCLDMSPSO | 3.2 | EPSO | 3.2 | HCLDMSPSO | 3.2 | HCLDMSPSO | 2.6 |
| 3 | EPSO | 4.2 | HIDMSPSO | 3.6 | HCLPSO | 4.0 | HCLPSO | 4.1 |
| 4 | HCLPSO | 4.4 | HCLDMSPSO | 4.4 | HIDMSPSO | 4.7 | EPSO | 4.2 |
| 5 | HIDMSPSO | 4.5 | FDRPSO | 4.8 | EPSO | 4.9 | HIDMSPSO | 4.8 |
| 6 | DMSPSO | 5.2 | DMSPSO | 5.2 | DMSPSO | 5.6 | DMSPSO | 5.0 |
| 7 | FDRPSO | 6.0 | HCLPSO | 5.6 | FDRPSO | 5.6 | CLPSO | 6.3 |
| 8 | CLPSO | 6.2 | CLPSO | 7.2 | CLPSO | 5.7 | FDRPSO | 6.9 |
| chi-value | | 47.160 | | 14.250 | | 14.872 | | 30.964 |
| p-value | | 0.000 | | 0.046 | | 0.037 | | 0.000 |
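The Friedman test statistics in Table 6 can be computed with scipy. A minimal sketch on made-up per-function scores for three hypothetical algorithms (lower is better; rows correspond to benchmark functions):

```python
from scipy.stats import friedmanchisquare

# Illustrative per-function error scores for three hypothetical algorithms.
alg_a = [0.01, 0.03, 0.02, 0.05, 0.01, 0.02]
alg_b = [0.04, 0.06, 0.05, 0.09, 0.03, 0.05]
alg_c = [0.08, 0.09, 0.07, 0.12, 0.06, 0.08]

stat, p = friedmanchisquare(alg_a, alg_b, alg_c)
print(stat, p)  # a small p-value (< 0.05) means the rankings differ significantly
```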
Table 7. Error between calculated and measured values.

| Criteria | CH | KH | TC | TCL | AC | KC | CW | AH |
|---|---|---|---|---|---|---|---|---|
| e_RMSE (cm) | 0.1225 | 0.0254 | 0.0409 | 0.1289 | 0.0776 | 0.0395 | 0.0161 | 0.0405 |
| e_MAE (cm) | 0.0641 | 0.0228 | 0.0317 | 0.1182 | 0.0706 | 0.0383 | 0.0119 | 0.0319 |
| R² | 0.9995 | 0.9999 | 0.9999 | 0.9998 | 0.9999 | 0.9997 | 0.9996 | 0.9999 |
| Mean Error (cm) | 0.0641 | 0.0228 | 0.0017 | 0.1182 | 0.0706 | 0.0383 | 0.0118 | 0.0319 |
| Maximum Error (cm) | 0.3720 | 0.0381 | 0.0054 | 0.2037 | 0.1018 | 0.0598 | 0.0419 | 0.0910 |
Table 8. Comparison of estimation results of single optimization SVR models (e_RMSE and e_MAE in cm).

| Algorithm | Item | e_RMSE ↓ | e_MAE ↓ | R² ↑ | Item | e_RMSE ↓ | e_MAE ↓ | R² ↑ |
|---|---|---|---|---|---|---|---|---|
| MMPSO-SVR | CH | 1.5385 | 1.1225 | 0.8686 | TCL | 1.7991 | 1.2943 | 0.7794 |
| HCLDMSPSO-SVR | CH | 1.5741 | 1.1335 | 0.8607 | TCL | 2.0060 | 1.5564 | 0.7243 |
| HCLPSO-SVR | CH | 1.5709 | 1.1326 | 0.8609 | TCL | 2.0124 | 1.5222 | 0.7301 |
| PSO-LSSVM | CH | 1.5915 | 1.1492 | 0.8568 | TCL | 2.0593 | 1.5325 | 0.7237 |
| BPNN | CH | 2.1858 | 1.4342 | 0.7219 | TCL | 3.1221 | 2.2151 | 0.5894 |
| MMPSO-SVR | KH | 0.5804 | 0.4429 | 0.9774 | AC | 1.3579 | 1.0440 | 0.9721 |
| HCLDMSPSO-SVR | KH | 0.6145 | 0.4360 | 0.9768 | AC | 1.3936 | 1.0637 | 0.9708 |
| HCLPSO-SVR | KH | 0.6105 | 0.4283 | 0.9765 | AC | 1.3933 | 1.0621 | 0.9709 |
| PSO-LSSVM | KH | 0.6109 | 0.4287 | 0.9765 | AC | 1.4113 | 1.0769 | 0.9698 |
| BPNN | KH | 0.6475 | 0.5337 | 0.9404 | AC | 1.6073 | 1.2638 | 0.9661 |
| MMPSO-SVR | TC | 1.5479 | 1.3291 | 0.8602 | KC | 1.0930 | 0.8490 | 0.8212 |
| HCLDMSPSO-SVR | TC | 1.5734 | 1.3450 | 0.8542 | KC | 1.0932 | 0.8491 | 0.8210 |
| HCLPSO-SVR | TC | 1.5666 | 1.3373 | 0.8547 | KC | 1.0941 | 0.8635 | 0.8171 |
| PSO-LSSVM | TC | 1.5732 | 1.3475 | 0.8539 | KC | 1.0942 | 0.8512 | 0.8203 |
| BPNN | TC | 1.8710 | 1.4267 | 0.7507 | KC | 1.5102 | 1.1552 | 0.6559 |
| MMPSO-SVR | CW | 0.5604 | 0.3884 | 0.7428 | AH | 0.4899 | 0.3629 | 0.9923 |
| HCLDMSPSO-SVR | CW | 0.5676 | 0.3972 | 0.7332 | AH | 0.5016 | 0.4321 | 0.9883 |
| HCLPSO-SVR | CW | 0.5655 | 0.3951 | 0.7356 | AH | 0.5119 | 0.4232 | 0.9882 |
| PSO-LSSVM | CW | 0.5671 | 0.4020 | 0.7309 | AH | 0.5369 | 0.5603 | 0.9799 |
| BPNN | CW | 0.5755 | 0.4521 | 0.7241 | AH | 0.9344 | 0.6180 | 0.9709 |

Notes: e_RMSE and e_MAE are in centimeters. The symbol ↓ indicates that smaller values are better; ↑ indicates that larger values are better. In the original typesetting, bold font indicates the optimal results.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Bao, C.; Miao, Y.; Chen, J.; Zhang, X. Developing a Generalized Regression Forecasting Network for the Prediction of Human Body Dimensions. Appl. Sci. 2023, 13, 10317. https://doi.org/10.3390/app131810317

