
The Comfort and Measurement Precision-Based Multi-Objective Optimization Method for Gesture Interaction

by Wenjie Wang, Yongai Hou, Shuangwen Tian, Xiansheng Qin, Chen Zheng, Liting Wang, Hepeng Shang and Yuangeng Wang
1 School of Mechanical Engineering, Inner Mongolia University of Science and Technology, Baotou 014010, China
2 Inner Mongolia Firmaco HongYuan Electric Co., Ltd., Baotou 014010, China
3 Inner Mongolia North Heavy Industries Group Co., Ltd., Baotou 014010, China
4 School of Mechanical and Electrical Engineering, Northwestern Polytechnical University, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Bioengineering 2023, 10(10), 1191; https://doi.org/10.3390/bioengineering10101191
Submission received: 28 September 2023 / Accepted: 11 October 2023 / Published: 13 October 2023
(This article belongs to the Special Issue Human Movement and Ergonomics)

Abstract
As an advanced interaction mode, gestures have been widely used for human–computer interaction (HCI). This paper proposes a multi-objective optimization method based on the objective function $J_{CP}$ to resolve the inconsistency between the gesture comfort $J_{CS}$ and the measurement precision $J_{PH}$ in gesture interaction. The proposed comfort model $CS$ takes seventeen muscles and six degrees of freedom into consideration based on data from the muscles and joints, and is capable of simulating the energy expenditure of the gesture motion. The $CS$ provides an intuitive indicator to predict which act has the higher risk of fatigue or injury for the joints and muscles. The measurement precision model $P_H$ is calculated from the measurement errors $(\Delta X_H, \Delta Y_H, \Delta Z_H)$ caused by calibration, which provides a means to evaluate the efficiency of the gesture interaction. Modeling and simulation are implemented to analyze the effectiveness of the proposed multi-objective optimization method. Comparing the objective function $J_{CS}$, based on the comfort model $CS$, with the objective function $J_{PH}$, based on the measurement precision model $P_H$, both consistency and differences can be found as the radius $r_{RH}$ and the center coordinates $P_{B\_RHO}(x_{B\_RHO}, y_{B\_RHO}, z_{B\_RHO})$ vary. The proposed objective function $J_{CP}$ compromises the inconsistency between the objective functions $J_{CS}$ and $J_{PH}$. Therefore, the multi-objective optimization method proposed in this paper can be applied to gesture design to improve the ergonomics and operation efficiency of the gesture, and its effectiveness is verified through usability testing.

1. Introduction

Human–computer interaction (HCI) examines how people interact with computer systems, using interactive means to realize the information flow between computers and people. Traditional HCI is machine-centric, requiring users to become accustomed to the workings of the computer through command languages, graphical user interfaces, and physical interaction equipment. To overcome these limitations, it is urgently necessary to create natural, comfortable, and effective interaction technologies. With natural interaction technology, instead of mechanically converting their operating intentions into specific instructions that the machine can understand, operators can express their willingness to interact naturally, as if they were speaking to another person [1]. Natural interaction modes that have emerged in the field of human–computer interaction in recent years include gesture interaction, voice interaction, brain–computer interaction, and emotional interaction [2,3]. In addition to having powerful ideographic capabilities, gestures, a kind of interactive communication frequently used in social interactions, also exhibit the qualities of intuition, simplicity, and vividness. State-of-the-art techniques for gesture and speech recognition in human–computer interaction systems, such as STF and 2DCNN + BiLSTM [4], SAM-SLR [5], Ensemble-NTIS [6], MViT-SLR [7], and FE + LSTM [8], are evaluated on the widely used LRW [9] and AUTSL [10] datasets. Ryumin et al. [4] proposed a benchmark methodology on these two well-known datasets: LRW for audio-visual speech recognition and AUTSL for gesture recognition; the gesture recognition accuracy is achieved through a unique set of spatio-temporal features, including features that take lip articulation information into account. Jiang et al. [5] proposed a novel Skeleton Aware Multimodal SLR framework (SAM-SLR) to take advantage of multi-modal information towards a higher sign language recognition rate. Furthermore, regarding emotion recognition in human–computer interaction, the EEG dataset [11] is trained with differential entropy features extracted from multichannel EEG data, and deep belief networks (DBNs) are introduced to construct EEG-based emotion recognition models for three emotions: positive, neutral, and negative. AffectNet [12] is by far the largest database of facial expressions, enabling further progress in the automatic understanding of facial behavior in both categorical and continuous dimensional space. On AffectNet, Mao et al. [13] proposed POSTER++, which achieves state-of-the-art FER performance while greatly reducing the parameters and floating-point operations of POSTER. She et al. [14] proposed the DMUE method to address the problem of annotation ambiguity from two perspectives: latent distribution mining and pairwise uncertainty estimation. All in all, academics are paying more and more attention to natural interaction technologies.
The study of gesture interaction technology covers a wide range of research topics, such as theories and methods of gesture recognition, gesture comfort, gesture design, and usability evaluation [15], as well as application research and development in mobile computing, virtual reality [16], and other areas. Gestures can be translated into control commands for interactive activities in many fields that benefit from their use, such as teleoperation, robotics, virtual reality, education, and entertainment [17]. However, long-term human–computer interaction results in muscle fatigue, low operational efficiency, and operator frustration [18]. Therefore, to improve the ergonomics and usability of gesture interaction, it is crucial to study and analyze gesture comfort, gesture measurement accuracy, and multi-objective gesture optimization. According to ergonomics research, different gestures have different comfort levels for the operator [18]. In addition, owing to the effect of measurement accuracy, different gestures yield different gesture recognition rates [19], which affects the efficiency and experience of gesture interaction. This research has general applicability to gesture interaction; analyzing the relevance of different gestures to assistive technologies would be a valuable further study.
Gesture comfort is a crucial indicator for assessing the ergonomics of human–computer interaction applications and refers to the level of comfort that workers experience when engaging with their job and environment. Comfort is challenging to identify and quantify, since it is a subjective experience that varies with the duration and mood of the human–computer interaction process. Comfortable gesture interaction significantly lessens operator fatigue, increases working time, and enhances the effectiveness and experience of the interaction. In general, four kinds of criteria are used to evaluate human comfort: (1) comfort models based on the joint angle range of motion (ROM), such as RULA [18], LUBA [20], REBA [21], OCRA [22], OCRA-CL [22], and others; these models evaluate gesture comfort from the ROM and only consider comfort evaluation in a static posture; (2) comfort models based on ROM and motion data, such as those proposed by Andreoni [23] and Nowara [24], which can evaluate the comfort of dynamic gestures but contain less information on human biomechanics; (3) comfort models based on simulation software, used, for instance, by Keyvani [25], Qing [26], and others to evaluate human comfort; however, standardized software tools have limitations, such as the internal principles of the model not being accessible and the model not being modifiable to meet specific requirements; (4) comfort models based on sensor measurements, such as pressure, temperature, and cardiopulmonary function sensors [27,28,29], which measure pressure, temperature, energy consumption, and other indicators to evaluate how comfortable and acceptable a posture is for the human body.
The comfort of the human body can be enhanced, and the operator’s workload decreased, in accordance with an established comfort model, which has practical implications for improving ergonomics and HCI efficiency. McDevitt et al. [30] summarize the state of the art of research on biomechanical optimization and ergonomic risk assessment using wearable sensors for industrial and sports applications, which offers a wealth of information for our study. A multi-objective optimization strategy for gestures based on human intuitiveness, comfort, and gesture recognition rate was proposed by Stern [31,32]; the three indicators are coordinated through an optimization calculation approach, although the method is only employed for hand gestures, and experimental analysis and verification were not carried out. Herman [33] optimized gesture comfort during surgical procedures to enhance the ergonomics and operational stability of surgery, but the comfort model utilized does not include biomechanical information. A pilot helmet design strategy that maximizes comfort [34] through pressure distribution and eye position has also been reported. Battini et al. [35] predicted the energy expenditure of the human body and assessed its degree of comfort using gender, height, load, arm position, speed, and duration of action; however, the energy expenditure computed with this method contains little biomechanical information. In order to prevent the overload of nursing staff, Zhang [36] employed a metabolic energy expenditure module to compute the work energy consumption, which directly reflects the energy consumption, physical condition, and fatigue recovery time of nursing staff in each sub-task. Human energy expenditure models based on heat dissipation [37] and muscular mechanical energy expenditure [38] have been proposed, but they have not been used in human comfort studies. Additionally, there is extensive research on making the human body more comfortable, including studies on aircraft cabins [39], agricultural machines [40], military vehicles [41], pHRI activities [2], construction [42], and wheelchairs [43]. These comfort models are either deficient in human biomechanical information or inappropriate for optimizing gestures. In this study, a comfort model based on energy expenditure is developed with the goal of improving the comfort of the human upper limb in gesture interaction applications [44]. The model, which can determine the comfort of the human upper limb in static or dynamic gestures, incorporates rich biomechanical information. It can be used to predict potential risks of muscle discomfort or injury, as well as for gesture design optimization.
Additionally, excessive measurement inaccuracy negatively impacts the gesture recognition rate and affects both the usability and operational effectiveness of gesture interaction. When measuring the human skeleton using depth stereoscopic vision, Zago et al. [45] did not take into account the measurement error resulting from the structural parameters and measurement positions of the stereoscopic depth model; instead, they used a motion capture system to compensate for the measurement error of the human skeleton. To increase the measurement accuracy of ToF depth cameras, a noise filtering method [46] was applied to account for the impact of multipath error and ambient light error on the depth map. A theoretical error calculation equation based on an error propagation model [47] was proposed to rapidly establish accurate measurement systems capable of ensuring the accuracy of tube measurement systems based on multi-stereo vision. Through an analysis of the factors affecting the depth measurement error, Wang et al. [48] found that the depth measurement error of binocular stereo vision is significantly influenced by rotation angle errors and image feature extraction errors; however, that research only gives guidelines and does not give a specific parameter optimization method. Aiming at the problem of the loss of gesture features when the KLT tracker is occluded or undergoes large-scale rotation, Ma et al. [49] used a Kalman filter to predict the position of the gesture and improve its measurement precision, but the study did not consider the depth information of the gesture. Furthermore, some researchers examine measurement accuracy from the perspectives of image distortion correction [50] and technique comparison [51], but not from the structural features of the stereoscopic depth vision system. In this paper, to reduce the measurement inaccuracy of binocular stereo vision, the depth measurement error is derived by considering the structural parameters of the binocular stereo vision system and the position of the measured object, and we aim to optimize the overall performance of the depth stereo measurement system.
The main topics of this article are the comfort and measurement accuracy concerns in gesture interaction applications. On the one hand, poor gesture comfort can result in a number of problems, including muscular fatigue, poor working efficiency, and a bad interaction experience. On the other hand, the measurement precision of motions impacts the usability of gesture interaction applications, resulting in low gesture recognition rates. However, there is frequently a contradiction between measurement accuracy and comfort: pursuing comfort alone can degrade the other performance targets. Therefore, in light of this coordination problem, our research establishes a comfort model and a measurement precision model of gestures, and uses multi-objective optimization methods to calculate the optimal design variables for improving the comfort, operating efficiency, experience, and usability of gesture interactive applications.

2. Gesture Comfort Modeling

An essential metric for assessing the ergonomics of the human upper limb is comfort. Studying gesture design theory and techniques is very important for lowering operator risk and fatigue and for enhancing human–computer interaction. In order to optimize gestures, a model of gesture comfort based on muscle mechanical energy expenditure and efficiency is developed in this article.

2.1. Muscle Mechanical Energy Expenditure of Gesture

Human muscular energy expenditure is a crucial biomechanical parameter in the study of human biomechanics and has significant scientific implications in the areas of ergonomics, upper limb rehabilitation, muscle fatigue analysis, and human comfort assessment.
In general, the human body expends two types of energy: muscle mechanical energy and heat. Muscle contraction converts chemical energy into thermal and mechanical energy. However, because the primary role of muscle is to produce force and the associated heat is comparatively small, the amount of heat generated during muscle contraction is treated as negligible. Figure 1 illustrates the human upper limb musculoskeletal model. The proposed comfort model $CS$, based on data from the muscles and joints, can simulate the energy expenditure of a gesture by taking six degrees of freedom and seventeen muscles into account.
However, for nonliving mechanical systems, there is no energy expenditure when the system is at rest. For the human body, in contrast, the muscles produce mechanical energy expenditure whether the upper limb is performing a static or a dynamic gesture. Therefore, according to the human upper limb musculoskeletal model shown in Figure 1, the muscle mechanical energy expenditure model of the human upper limb is established. The muscle energy expenditure of a dynamic gesture is equal to the time integral of the sum of the absolute values of the power at each joint, and the muscle energy expenditure of a static gesture is equal to the time integral of the sum of the absolute values of the muscle forces. The muscle energy expenditure of the gesture can then be expressed as follows:
$$MEE_M=\begin{cases}\displaystyle\int_{t_1}^{t_2}\left(\sum_{i=1}^{6}\left(\left|T_i^{+}\dot{\theta}_i\right|+\left|T_i^{-}\dot{\theta}_i\right|\right)+\sum_{i=1}^{17}\left|F_i^{muscle}\dot{L}_i\right|\right)dt, & \dot{\theta}\neq 0\\[8pt]\displaystyle\int_{t_1}^{t_2}\sum_{i=1}^{17}\left|F_i^{muscle}\right|dt, & \dot{\theta}=0\end{cases}\tag{1}$$
where $MEE_M$ is the muscle energy expenditure of the gesture; $T_i^{+}$ and $T_i^{-}$ are the positive and negative joint torques caused by inertia, gravity, and ligaments at the $i$th joint; $\dot{\theta}_i$ is the joint angular velocity at the $i$th joint; $F_i^{muscle}$ and $\dot{L}_i$ are the muscle force and the rate of change of the muscle length of the human upper limb at the $i$th joint.
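For readers who want to reproduce Equation (1) numerically, the following sketch integrates sampled joint powers and muscle powers with the trapezoidal rule. It is a minimal illustration rather than the authors' implementation; the function name, array names, and shapes are assumptions.

```python
import numpy as np

def muscle_energy_expenditure(t, joint_power_pos, joint_power_neg,
                              muscle_force, muscle_vel, dynamic=True):
    """Numerical sketch of Eq. (1).

    t               : (N,) time samples [s]
    joint_power_pos : (N, 6) T_i^+ * theta_dot_i for the six joints [W]
    joint_power_neg : (N, 6) T_i^- * theta_dot_i for the six joints [W]
    muscle_force    : (N, 17) muscle forces F_i [N]
    muscle_vel      : (N, 17) muscle length rates L_i_dot [m/s]
    dynamic         : True -> moving-gesture branch, False -> static branch
    """
    if dynamic:
        integrand = (np.abs(joint_power_pos).sum(axis=1)
                     + np.abs(joint_power_neg).sum(axis=1)
                     + np.abs(muscle_force * muscle_vel).sum(axis=1))
    else:
        integrand = np.abs(muscle_force).sum(axis=1)
    # trapezoidal integration over the gesture duration [t1, t2]
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))
```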
Generally, muscle efficiency is not constant, but changes with the state of muscle contraction. Its value depends on the load and contraction velocity of the muscle. An appropriate muscle load and contraction velocity can maximize mechanical efficiency, but the muscle load and contraction rate are not fixed; they depend on the nerve stimulation state of the muscle. Therefore, the mechanical efficiency of muscles is also an important indicator to evaluate the ergonomics of human muscles. Then, the muscle mechanical efficiency of the gesture can be calculated from the ratio of muscle mechanical work to muscle mechanical energy consumption. The calculation formula is as follows:
$$ME=\frac{W_M}{MEE_M}\tag{2}$$
$$W_M=\int_{t_1}^{t_2}\sum_{i=1}^{6}\tau_i\dot{\theta}_i\,dt\tag{3}$$
where $ME$ is the muscle mechanical efficiency of the human upper limb, $W_M$ is the muscle mechanical work of the human upper limb, and $\tau_i$ is the net joint torque of the human upper limb at the $i$th joint.

2.2. Comfort Model of Gesture

The muscle mechanical energy expenditure of the gesture reflects the load level of the active and passive muscles, and the muscle mechanical efficiency reflects the efficiency level of the muscles. This paper establishes a gesture comfort model based on the muscle mechanical energy expenditure and mechanical efficiency through a linear weighted combination. Following the convention of comfort evaluation in ergonomics, the comfort score of the gesture is set between 0 and 10; the smaller the score, the better the comfort. The calculation formula of the gesture comfort model is as follows:
$$CS=k\left(w_1\frac{MEE}{MEE_{max}}+w_2\frac{W_M}{MEE}\right)\tag{4}$$
where $CS$ is the comfort model of the gesture; $k$ is a scaling constant, $k=10$; $w_1$ and $w_2$ are the weight coefficients of the $MEE$ and $ME$ terms; $MEE_{max}$ is the maximum energy expenditure at which the human upper limb feels fatigued.
In summary, the comfort model can score the comfort of both static and dynamic gestures. In addition to $MEE$, the model also considers the impact of $ME$ on comfort. $MEE$ carries rich biomechanical information about the upper limb: movement posture, muscle strength, inertial force, ligament restraint force, and muscle mechanical energy expenditure and efficiency. It reflects the muscle energy expenditure of the human upper limb during movement: the greater the energy expenditure, the greater the fatigue; the smaller the energy expenditure, the greater the comfort. $ME$ is the ratio of the work $W_M$ to $MEE$, and it reflects the efficiency of the upper limb muscles: the higher the efficiency, the higher the utilization rate of the muscles, and the lower the efficiency, the lower the utilization rate.
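As a small illustration of Equations (2)–(4), the sketch below turns $MEE$, $W_M$, and $MEE_{max}$ into a 0–10 comfort score. The equal weights $w_1=w_2=0.5$ and the numeric arguments in the usage line are assumptions for the example; the paper does not fix them here.

```python
def comfort_score(mee, w_m, mee_max, w1=0.5, w2=0.5, k=10.0):
    """Sketch of Eq. (4): comfort score CS in [0, 10]; lower = more comfortable.

    mee     : muscle mechanical energy expenditure MEE of the gesture [J]
    w_m     : muscle mechanical work W_M of the gesture [J]
    mee_max : energy expenditure at which the upper limb feels fatigued [J]
    """
    me = w_m / mee                      # muscle mechanical efficiency, Eq. (2)
    return k * (w1 * mee / mee_max + w2 * me)

print(comfort_score(mee=120.0, w_m=40.0, mee_max=600.0))  # hypothetical values
```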

3. Measurement Precision Modeling

3.1. Depth Stereo Measurement Model

Depth stereo measurement is based on the principle of parallax, and obtains the depth value by comparing the same feature points in two projection planes. The depth stereo measurement precision will directly affect the recognition precision and efficiency of the gesture, and affect the level of ergonomics and interactive experience in gesture interaction applications. According to the principle of depth stereo measurement, the depth stereo measurement model is composed of the relative position relationship of the left/right view of the depth stereo measurement system, the projection angle, the angle relative to the optical axis, the focal length, and the position of the measurement object. The depth stereo measurement model of gesture features is shown in Figure 2.
In order to build the depth stereo measurement model, one first needs to establish the perspective relationship between the projection plane coordinates and the view coordinates of the gesture features. The formulas are as follows:
$$z_{L\_RH}\begin{bmatrix}x_{L\_RH}\\ y_{L\_RH}\\ 1\end{bmatrix}=\begin{bmatrix}f_L&0&0\\ 0&f_L&0\\ 0&0&1\end{bmatrix}\begin{bmatrix}X_{L\_RH}\\ Y_{L\_RH}\\ Z_{L\_RH}\end{bmatrix}\tag{5}$$
$$z_{R\_RH}\begin{bmatrix}x_{R\_RH}\\ y_{R\_RH}\\ 1\end{bmatrix}=\begin{bmatrix}f_R&0&0\\ 0&f_R&0\\ 0&0&1\end{bmatrix}\begin{bmatrix}X_{R\_RH}\\ Y_{R\_RH}\\ Z_{R\_RH}\end{bmatrix}\tag{6}$$
where $(x_{L\_RH}, y_{L\_RH}, z_{L\_RH})$ are the coordinates of the gesture features in the coordinate system $O_{xy\_L}$; $(x_{R\_RH}, y_{R\_RH}, z_{R\_RH})$ are the coordinates of the gesture features in the coordinate system $O_{xy\_R}$; $(X_{L\_RH}, Y_{L\_RH}, Z_{L\_RH})$ are the coordinates of the gesture features in the coordinate system $O_L$; $(X_{R\_RH}, Y_{R\_RH}, Z_{R\_RH})$ are the coordinates of the gesture features in the coordinate system $O_R$; $f_L$ and $f_R$ are the focal lengths of the left and right view.
The relationship between $(X_{L\_RH}, Y_{L\_RH}, Z_{L\_RH})$ and $(X_{R\_RH}, Y_{R\_RH}, Z_{R\_RH})$ can be expressed by the spatial homogeneous transformation matrix $M_{LR}$:
$$\begin{bmatrix}X_{R\_RH}\\ Y_{R\_RH}\\ Z_{R\_RH}\end{bmatrix}=\begin{bmatrix}r_1&r_2&r_3&t_x\\ r_4&r_5&r_6&t_y\\ r_7&r_8&r_9&t_z\end{bmatrix}\begin{bmatrix}X_{L\_RH}\\ Y_{L\_RH}\\ Z_{L\_RH}\\ 1\end{bmatrix}=M_{LR}\begin{bmatrix}X_{L\_RH}\\ Y_{L\_RH}\\ Z_{L\_RH}\\ 1\end{bmatrix}\tag{7}$$
where $M_{LR}$ is the spatial homogeneous transformation matrix between the coordinate systems $O_L$ and $O_R$; $r_i\ (i=1,\ldots,9)$ are the elements of the rotation matrix; $t_x, t_y, t_z$ are the elements of the translation vector.
Then, the relationship between the coordinate systems $O_{xy\_L}$ and $O_{xy\_R}$ can be expressed as follows:
$$Z_{R\_RH}\begin{bmatrix}x_{R\_RH}\\ y_{R\_RH}\\ 1\end{bmatrix}=\begin{bmatrix}f_Rr_1&f_Rr_2&f_Rr_3&f_Rt_x\\ f_Rr_4&f_Rr_5&f_Rr_6&f_Rt_y\\ r_7&r_8&r_9&t_z\end{bmatrix}\begin{bmatrix}Z_{L\_RH}\,x_{L\_RH}/f_L\\ Z_{L\_RH}\,y_{L\_RH}/f_L\\ Z_{L\_RH}\\ 1\end{bmatrix}\tag{8}$$
Therefore, the three-dimensional coordinates of the gesture features in the left view coordinate system can be expressed as follows:
$$\begin{cases}X_{L\_RH}=Z_{L\_RH}\,x_{L\_RH}/f_L\\ Y_{L\_RH}=Z_{L\_RH}\,y_{L\_RH}/f_L\\[4pt] Z_{L\_RH}=\dfrac{f_L\left(f_Rt_x-x_{R\_RH}t_z\right)}{x_{R\_RH}\left(r_7x_{L\_RH}+r_8y_{L\_RH}+f_Lr_9\right)-f_R\left(r_1x_{L\_RH}+r_2y_{L\_RH}+f_Lr_3\right)}\end{cases}\tag{9}$$
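As an illustration of Equation (9), the following sketch triangulates a matched gesture feature from its left/right projections. It is a plausible reading of the reconstructed equation, not the authors' code; the function name and argument layout are assumptions.

```python
import numpy as np

def triangulate_feature(xL, yL, xR, fL, fR, R, t):
    """Sketch of Eq. (9): gesture-feature coordinates in the left view frame.

    xL, yL : projection of the feature in the left view
    xR     : x-projection of the matched feature in the right view
    fL, fR : focal lengths of the left/right view
    R      : 3x3 rotation block of M_LR; t = (tx, ty, tz) translation of M_LR
    """
    r1, r2, r3 = R[0]
    r7, r8, r9 = R[2]
    tx, tz = t[0], t[2]
    ZL = fL * (fR * tx - xR * tz) / (
        xR * (r7 * xL + r8 * yL + fL * r9)
        - fR * (r1 * xL + r2 * yL + fL * r3))
    return np.array([ZL * xL / fL, ZL * yL / fL, ZL])
```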

3.2. Measurement Precision Model

The efficiency of the gesture interaction is impacted by the depth stereo measurement precision, which also influences the measurement precision of the gesture features. In order to improve the measurement accuracy of the gesture features, this paper establishes a depth stereo measurement error model. To reduce the complexity of the depth stereo measurement model, it is assumed that the left and right views are placed horizontally at the same height and that the coordinate origin of the depth stereo measurement model is the center of the left view. Then, the simplified three-dimensional coordinates of the gesture features in the left view coordinate system can be expressed as follows:
$$\begin{cases}X_{L\_RH}=\dfrac{B\cos(\rho_L+\phi_L)}{\cos(\rho_L+\phi_L)+\cos(\rho_R+\phi_R)}\\[8pt] Y_{L\_RH}=\dfrac{y_{L\_RH}Z_H\sin\rho_L}{f_L\sin(\rho_L+\phi_L)}=\dfrac{y_{R\_RH}Z_H\sin\rho_R}{f_R\sin(\rho_R+\phi_R)}\\[8pt] Z_{L\_RH}=\dfrac{B}{\cos(\rho_L+\phi_L)+\cos(\rho_R+\phi_R)}\end{cases}\tag{10}$$
To analyze the influence of the parameters of the depth stereo measurement model on the precision, the partial derivatives of Equation (10) can be obtained:
$$\frac{\partial X_{L\_RH}}{\partial x_{L\_RH}}=\frac{Z_{L\_RH}^2}{Bf_L}\cdot\frac{\cos(\rho_R+\phi_R)\cos^2\rho_L}{\sin^2(\rho_L+\phi_L)},\qquad \frac{\partial X_{L\_RH}}{\partial x_{R\_RH}}=\frac{Z_{L\_RH}^2}{Bf_R}\cdot\frac{\cos(\rho_L+\phi_L)\cos^2\rho_R}{\sin^2(\rho_R+\phi_R)}\tag{11}$$
$$\frac{\partial Z_{L\_RH}}{\partial x_{L\_RH}}=\frac{Z_{L\_RH}^2}{Bf_L}\cdot\frac{\cos^2\rho_L}{\sin^2(\rho_L+\phi_L)},\qquad \frac{\partial Z_{L\_RH}}{\partial x_{R\_RH}}=\frac{Z_{L\_RH}^2}{Bf_R}\cdot\frac{\cos^2\rho_R}{\sin^2(\rho_R+\phi_R)}\tag{12}$$
$$\frac{\partial Y_{L\_RH}}{\partial x_{L\_RH}}=\frac{Y_{L\_RH}Z_{L\_RH}}{Bf_L}\cdot\frac{\cos^2\rho_L}{\sin^2(\rho_L+\phi_L)},\qquad \frac{\partial Y_{L\_RH}}{\partial x_{R\_RH}}=\frac{Y_{L\_RH}Z_{L\_RH}}{Bf_R}\cdot\frac{\cos^2\rho_R}{\sin^2(\rho_R+\phi_R)}\tag{13}$$
$$\frac{\partial Y_{L\_RH}}{\partial y_{L\_RH}}=\frac{Z_{L\_RH}}{f_L}\cdot\frac{\sin\rho_L}{\sin(\rho_L+\phi_L)},\qquad \frac{\partial Y_{L\_RH}}{\partial y_{R\_RH}}=\frac{Z_{L\_RH}}{f_R}\cdot\frac{\sin\rho_R}{\sin(\rho_R+\phi_R)}\tag{14}$$
Generally, the average projection error $\Delta_{xy}$ of the depth stereo measurement model can be obtained by calibration; the average errors of the depth stereo measurement model in the left/right view coordinate systems can then be calculated as follows:
$$\Delta X_L=\Delta_{xy}d_x,\quad \Delta Y_L=\Delta_{xy}d_y,\qquad \Delta X_R=\Delta_{xy}d_x,\quad \Delta Y_R=\Delta_{xy}d_y\tag{15}$$
where $d_x$ and $d_y$ denote the pixel dimensions in the $x$ and $y$ directions.
According to Figure 2, the projection angles $\rho_L$ and $\rho_R$ on the left/right view can be calculated as follows:
$$\rho_L=\operatorname{atan2}\!\left(Z_{L\_H},\sqrt{X_{L\_H}^2+Y_{L\_H}^2}\right),\qquad \rho_R=\rho_L\tag{16}$$
Then, the depth stereo measurement errors in the X/Y/Z directions can be expressed as follows:
$$\begin{aligned}\Delta X_H&=\sqrt{\left(\frac{\partial X_{L\_RH}}{\partial x_{L\_RH}}\Delta X_L\right)^2+\left(\frac{\partial X_{L\_RH}}{\partial x_{R\_RH}}\Delta X_R\right)^2}\\ \Delta Y_H&=\sqrt{\left(\frac{\partial Y_{L\_RH}}{\partial x_{L\_RH}}\Delta X_L\right)^2+\left(\frac{\partial Y_{L\_RH}}{\partial x_{R\_RH}}\Delta X_R\right)^2+\left(\frac{\partial Y_{L\_RH}}{\partial y_{L\_RH}}\Delta Y_L\right)^2+\left(\frac{\partial Y_{L\_RH}}{\partial y_{R\_RH}}\Delta Y_R\right)^2}\\ \Delta Z_H&=\sqrt{\left(\frac{\partial Z_{L\_RH}}{\partial x_{L\_RH}}\Delta X_L\right)^2+\left(\frac{\partial Z_{L\_RH}}{\partial x_{R\_RH}}\Delta X_R\right)^2}\end{aligned}\tag{17}$$
Therefore, the depth stereo measurement precision model can be expressed as follows:
$$P_H=\sqrt{\Delta X_H^2+\Delta Y_H^2+\Delta Z_H^2}\tag{18}$$
with $\Delta X_H$, $\Delta Y_H$, and $\Delta Z_H$ given by Equation (17).
where $(\Delta X_H, \Delta Y_H, \Delta Z_H)$ are the depth stereo measurement errors in the X/Y/Z directions; $\Delta X_L, \Delta Y_L, \Delta X_R, \Delta Y_R$ are the average projection errors in the projection planes of the left/right view; $P_H$ is the depth stereo measurement precision model.
In accordance with the depth stereo measurement precision model, the precision analysis can guide the selection of the binocular stereo vision parameters. It indicates how to choose the lenses ($f_L$ and $f_R$) of the cameras so as to obtain a higher measurement accuracy. In addition, an appropriate baseline distance $B$ and an appropriate distance between the camera and the measured object can be determined. Even the position with the lowest measurement accuracy can be identified, so that this position can be avoided as much as possible in practical operation.
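The sketch below approximates the error propagation of Equations (15)–(18) by finite differences: each projection coordinate is perturbed by its calibrated average error and the induced 3-D deviations are accumulated in quadrature, which approximates $P_H$ without hand-coding the partial derivatives of Equations (11)–(14). It reuses triangulate_feature from the sketch in Section 3.1 and is an assumption-laden illustration rather than the authors' implementation; in this simplified form the right-view vertical error does not enter, because Equation (9) recovers $Y$ from the left view.

```python
import numpy as np

def measurement_precision(xL, yL, xR, fL, fR, R, t,
                          dXL, dYL, dXR, eps=1e-6):
    """Finite-difference sketch of P_H, Eq. (18): propagate the average
    projection errors (dXL, dYL, dXR) through the triangulation of Eq. (9)."""
    base = triangulate_feature(xL, yL, xR, fL, fR, R, t)
    var = np.zeros(3)                     # per-axis squared error budget
    for i, err in enumerate((dXL, dYL, dXR)):
        p = [xL, yL, xR]
        p[i] += eps
        jac_col = (triangulate_feature(*p, fL, fR, R, t) - base) / eps
        var += (jac_col * err) ** 2       # quadrature, as in Eq. (17)
    return float(np.sqrt(var.sum()))      # P_H = sqrt(dX^2 + dY^2 + dZ^2)
```

Sweeping such a function over candidate baseline distances $B$ (encoded in $t$) or focal lengths would reproduce the parameter-selection guidance described above.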

4. Multi-Objective Optimization Method for Gestures

4.1. Multi-Objective Optimization Model

The usability of gesture interaction applications is influenced by several significant factors, including user comfort and measurement accuracy. The comfort and measurement accuracy of the gestures determine how quickly the human upper limb muscles fatigue and how efficiently the operator works. However, there is frequently a conflict between comfort and measurement accuracy: pursuing comfort alone can degrade the other performance indicators. To achieve co-optimization, it is necessary to coordinate and compromise between comfort and measurement accuracy. In order to reduce muscle fatigue, increase operation efficiency, and enhance the interactive experience of gestures in interactive applications, this paper proposes a multi-objective optimization method for gestures based on comfort and measurement precision, and uses this method to calculate the optimal design variables that allow the gesture to achieve co-optimization.
A schematic diagram of the multi-objective optimization method for gestures is shown in Figure 3. The biomechanics theory of the human upper limb, depth stereo measurement theory, multi-objective optimization theory and algorithms, and other related theories and research were all taken into consideration when developing the multi-objective optimization method proposed in this work. Research on multi-objective gesture optimization is useful for improving the ergonomics of gesture interaction applications, including comfort, operational effectiveness, and interactive experience.
According to the proposed gesture comfort model $CS$ and measurement precision model $P_H$, the two single objective functions of the multi-objective optimization, based on the gesture comfort and the measurement error, are expressed as follows:
$$\min J_{CS}=k\left(w_{MEE}\frac{MEE}{MEE_{max}}+w_{ME}\frac{W_M}{MEE}\right)\tag{19}$$
$$\min J_{PH}=P_H=\sqrt{\Delta X_H^2+\Delta Y_H^2+\Delta Z_H^2}\tag{20}$$
In order to simplify the multi-objective optimization of the gesture and reduce the computational cost, the two objective functions are transformed into a single objective function through the linear weighting method:
$$\min J=\mathrm{fmincon}\left(w_1J_{CS}+w_2J_{PH}\right)\tag{21}$$
$$\text{s.t.}\quad\begin{cases}x_{B\_RH\_Min}\le x_{B\_RH}\le x_{B\_RH\_Max}\\ y_{B\_RH\_Min}\le y_{B\_RH}\le y_{B\_RH\_Max}\\ z_{B\_RH\_Min}\le z_{B\_RH}\le z_{B\_RH\_Max}\end{cases}\tag{22}$$
where $J$ is the multi-objective optimization objective function based on the gesture comfort and measurement precision; $w_1$ and $w_2$ are the weight coefficients of $J_{CS}$ and $J_{PH}$; $\mathrm{fmincon}$ is the constrained nonlinear optimization solver used for the multi-objective optimization calculation; $(x_{B\_RH}, y_{B\_RH}, z_{B\_RH})$ are the coordinates of the gesture features used as optimization variables.
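A minimal Python sketch of Equations (21) and (22), using scipy.optimize.minimize as a stand-in for MATLAB's fmincon. The quadratic surrogates for $J_{CS}$ and $J_{PH}$ are placeholders for illustration only; in the paper both objectives are evaluated from the $CS$ and $P_H$ models along the planned trajectory.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder surrogates for the comfort and precision objectives; the real
# J_CS and J_PH come from the CS and P_H models evaluated on the trajectory.
def j_cs(p):
    return float(np.sum((p - np.array([0.15, -0.18, 0.28])) ** 2))

def j_ph(p):
    return float(np.sum((p - np.array([0.0, 0.0, 0.30])) ** 2))

def J(p, w1=0.5, w2=0.5):
    return w1 * j_cs(p) + w2 * j_ph(p)       # linear weighting, Eq. (21)

bounds = [(-0.6, 0.6), (-0.6, 0.6), (0.0, 0.6)]   # box constraints, Eq. (22)
res = minimize(J, x0=np.zeros(3), bounds=bounds)  # SciPy analogue of fmincon
print(res.x, res.fun)
```

The optimum lands at the weighted compromise between the two surrogate minima, which is exactly the coordination behavior the linear weighting is meant to produce.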

4.2. Multi-Objective Optimization Calculation

The coordinates of the gesture features $(x_{B\_RH}, y_{B\_RH}, z_{B\_RH})$ were used as optimization variables, and gesture-based circular trajectories were optimized in accordance with the established multi-objective optimization model based on the comfort and measurement error, so that the gesture achieved optimal performance in terms of both comfort and measurement error. Figure 4 depicts the flow chart of the multi-objective optimization computation for a gesture based on the measurement accuracy and comfort.
The nonlinear programming solver is used to perform the multi-objective optimization calculation for the gesture based on the comfort and measurement errors. First, the initial values $P_0$ and the variable constraints $P_{min}\le P_0\le P_{max}$ are given; second, the trajectory planning algorithm is used to calculate the movement trajectory of the human upper limb; third, the objective function $J$ of the multi-objective optimization model is calculated; fourth, it is determined whether the objective function meets the iteration stop condition $J\le\delta$: if it is satisfied, the current optimal solution $P^{*}$ is output, and if not, the procedure continues to the next step; fifth, the initial value of the variable is modified to $P_0=P_0+\alpha$ with a given step size $\alpha$ along the search direction, and the procedure returns to the third step for iterative calculation; finally, when the judgment condition is met, the iteration terminates.
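For concreteness, the loop described above (and in Figure 4) might look like the following sketch, which steps the design variables along a finite-difference descent direction until $J\le\delta$. The step rule and gradient estimate are assumptions; the paper's solver (fmincon) chooses its own search direction internally.

```python
import numpy as np

def iterative_search(J, p0, alpha=0.01, delta=1e-3, max_iter=500, h=1e-6):
    """Sketch of the Figure 4 loop: evaluate J, test J <= delta, otherwise
    step the variables along the negative finite-difference gradient."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        if J(p) <= delta:                          # iteration stop condition
            break
        grad = np.array([(J(p + h * e) - J(p)) / h for e in np.eye(p.size)])
        p = p - alpha * grad / (np.linalg.norm(grad) + 1e-12)  # search step
    return p                                       # current optimal solution P*
```

Called as iterative_search(J, p0=np.zeros(3)) with the objective from the previous sketch, it converges toward the same weighted compromise.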

4.3. Case Analysis and Results of Multi-Objective Optimization of Gestures

In order to verify the effectiveness of the proposed method, a gesture following a circular trajectory is taken as an example for the multi-objective optimization. From the position of the circle center $P_{B\_RHO}(x_{B\_RHO}, y_{B\_RHO}, z_{B\_RHO})$ and the radius $r_{RH}$, a continuous circular trajectory can be calculated using the trajectory planning algorithm, so that the gesture comfort and measurement precision can be evaluated. Therefore, the multi-objective optimization model of the gesture based on the circular trajectory can be expressed as follows:
$$\min J=\mathrm{fmincon}\left(w_1J_{CS}+w_2J_{PH}\right)\tag{23}$$
$$\text{s.t.}\quad\begin{cases}-0.6\le x_{B\_RHO}\le 0.6\\ -0.6\le y_{B\_RHO}\le 0.6\\ 0\le z_{B\_RHO}\le 0.6\\ 0.1\le r_{RH}\le 0.6\end{cases}\tag{24}$$
According to the movement habits of most people, this paper makes the following assumptions: the gesture traces the circular trajectory clockwise, and the initial position $P_{B\_RH1}(x_{B\_RH1}, y_{B\_RH1}, z_{B\_RH1})$ and target position $P_{B\_RH2}(x_{B\_RH2}, y_{B\_RH2}, z_{B\_RH2})$ of the circular trajectory coincide and lie directly above the center of the circle. The position of the center of the circle is then $P_{B\_RHO}(x_{B\_RHO}, y_{B\_RHO}, z_{B\_RHO})=(x_{B\_RH1}, y_{B\_RH1}-r_{RH}, z_{B\_RH1})$. The nonlinear optimization function fmincon was then used to solve for the optimal solution until the convergence condition was met and the iteration stopped.
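Under these assumptions, the circular trajectory can be generated as in the sketch below: the start/target point sits at the top of the circle and the sweep runs clockwise. The function name, the frontal-plane parameterization, and the sample count are illustrative choices, and the numeric arguments echo the optimal solution reported in Table 1.

```python
import numpy as np

def circular_trajectory(p1, r, n=100):
    """Clockwise circle whose start/target point p1 lies directly above the
    center, so the center is (x1, y1 - r, z1) as assumed in Section 4.3."""
    x1, y1, z1 = p1
    cx, cy, cz = x1, y1 - r, z1
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    # theta = 0 starts at p1; increasing theta sweeps clockwise (y axis up)
    return np.stack([cx + r * np.sin(theta),
                     cy + r * np.cos(theta),
                     np.full(n, cz)], axis=1)

waypoints = circular_trajectory(p1=(0.1469, -0.0468, 0.2809), r=0.1355)
```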
As shown in Table 1, the initial values of the position of the center of the circle were $P_{B\_RHO}=(0, 0, 0.2)$ with $r_{RH}=0.1$; the optimal solution for the position of the center of the circle was $P_{B\_RHO}=(0.1469, -0.1823, 0.2809)$ with radius $r_{RH}=0.1355$. The optimal initial and target position of the circular trajectory is therefore $P_{B\_RH1/2}(x_{B\_RH1/2}, y_{B\_RH1/2}, z_{B\_RH1/2})=(0.1469, -0.0468, 0.2809)$. The result of the gesture optimization based on the circular trajectory is shown in Figure 5.

5. Discussion

To illustrate the effectiveness of the multi-objective optimization method for gestures based on the comfort and measurement precision, a comparison and analysis are performed among the objective functions $J_{CS}$, $J_{PH}$, and $J_{CP}$ based on the case analysis of the multi-objective optimization of gestures. The center coordinates $(x_{B\_RHO}, y_{B\_RHO}, z_{B\_RHO})$ and the radius $r_{RH}$ are varied to analyze how the objective functions change.
Firstly, in order to eliminate the differences in scale and dimension among $J_{CS}$, $J_{PH}$, and $J_{CP}$, the objective functions are transformed into dimensionless values between 0 and 1 through standardization. All data indicators are then on the same order of magnitude, which makes the objective functions comparable, facilitates the weighting processing and comparative analysis, and makes the change law of each objective function with respect to each variable intuitive. Finally, the results of the comparison and analyses based on the multi-objective optimization of the gesture are shown in Figure 6, Figure 7, Figure 8 and Figure 9. In the figures, the green, blue, and magenta curves represent the change law of the objective functions $J_{CS}$, $J_{PH}$, and $J_{CP}$ corresponding to the different characteristic variables. Through this comparison and analysis, the effectiveness of the proposed method for optimizing the comprehensive performance of the gesture's comfort and measurement precision is illustrated.
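The standardization step can be read as ordinary min-max scaling, as in the following sketch (an assumption about the exact normalization used):

```python
import numpy as np

def normalize(values):
    """Min-max scaling of an objective-function curve onto [0, 1], so J_CS,
    J_PH, and J_CP become dimensionless and directly comparable."""
    v = np.asarray(values, dtype=float)
    rng = v.max() - v.min()
    return np.zeros_like(v) if rng == 0 else (v - v.min()) / rng
```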
In Figure 6, according to the case analysis of the multi-objective optimization based on gestures with circular trajectories, the center coordinates are set to $P_{B\_RHO}=(0.1469, -0.1823, 0.2809)$, and the radius $r_{RH}$ of the circular trajectory varies from 0 to 0.5 m. The figure shows that the curve of the objective function $J_{CS}$ increases as the radius $r_{RH}$ increases, whereas the curve of the objective function $J_{PH}$ is inversely parabolic as the radius $r_{RH}$ increases. The trends of the objective functions $J_{CS}$ and $J_{PH}$ are partially conflicting and partially identical. The objective function $J_{CP}$ is the weighted combination of $J_{CS}$ and $J_{PH}$, and the optimal radius of the circular trajectory obtained through the multi-objective optimization calculation is $r_{RH}=0.1355$ m. The comparison illustrates that for gestures with circular trajectories, the radius $r_{RH}$ affects the comfort and measurement precision of the gesture, especially when the radius is too large. The objective function $J_{CP}$ combines the comprehensive performance of the gesture comfort and measurement precision.
In Figure 7, the center coordinates of the circular trajectory are set to $P_{B\_RHO}=(x_{B\_RHO}, -0.1823, 0.2809)$; the coordinate $x_{B\_RHO}$ determines the position of the gesture on the left and right sides of the body and varies from −0.5 to 0.5 m, and the radius $r_{RH}$ is equal to 0.1355 m. Figure 7 shows the data for the objective functions $J_{CS}$, $J_{PH}$, and $J_{CP}$ as the variable $x_{B\_RHO}$ changes. The objective functions $J_{CS}$ and $J_{PH}$ show the same trend, but there are still obvious differences: the $x_{B\_RHO}$ position corresponding to the optimal measurement precision is in the middle of the body, while the optimal comfort is at $x_{B\_RHO}=0.2143$ m. The objective function $J_{CP}$ compromises between the performance of $J_{CS}$ and $J_{PH}$ and yields the optimal center coordinate $x_{B\_RHO}=0.1469$ m. The comparison shows that there is no significant difference in the trends of $J_{CS}$ and $J_{PH}$, but the center coordinate $x_{B\_RHO}$ has a significant impact on the comfort and measurement precision.
In Figure 8, the center coordinates of the circular trajectory are set to $P_{B\_RHO}=(0.1355, y_{B\_RHO}, 0.2809)$; the coordinate $y_{B\_RHO}$ determines the height of the gesture position and varies from −0.5 to 0.5 m, and the radius $r_{RH}$ is equal to 0.1355 m. The curve of the objective function $J_{CS}$ increases with $y_{B\_RHO}$, whereas the curve of the objective function $J_{PH}$ is parabolic in $y_{B\_RHO}$. Considerable differences can be observed between $J_{CS}$ and $J_{PH}$, especially when $y_{B\_RHO}$ is in the range of −0.5 to 0 m. The objective function $J_{CP}$ reconciles the conflict between $J_{CS}$ and $J_{PH}$ and yields the optimal center coordinate $y_{B\_RHO}=-0.1823$ m. The comparison shows that the center coordinate $y_{B\_RHO}$ affects $J_{CS}$ and $J_{PH}$ differently, but the objective function $J_{CP}$ makes a compromise and achieves a common optimum between the comfort and measurement precision of the gesture.
In Figure 9, the center coordinates of the circular trajectory are set to $P_{B\_RHO}=(0.1355, -0.1823, z_{B\_RHO})$; the coordinate $z_{B\_RHO}$, which determines the distance of the gesture from the body, varies from 0 to 0.5 m, and the radius $r_{RH}$ is equal to 0.1355 m. It can be observed that the objective functions $J_{CS}$ and $J_{PH}$ show high consistency, but there are still slight differences. Further improvement can still be made by the multi-objective optimization, and the optimal center coordinate $z_{B\_RHO}$ is obtained as 0.1403 m. The comparison shows that the trends of $J_{CS}$ and $J_{PH}$ as the center coordinate $z_{B\_RHO}$ changes are relatively consistent, but the impact on the comfort and measurement precision cannot be ignored.
In summary, these results show that the objective functions $J_{CS}$ and $J_{PH}$ for circular-trajectory gestures depend on the radius $r_{RH}$ and the center coordinates $P_{B\_RHO}(x_{B\_RHO}, y_{B\_RHO}, z_{B\_RHO})$. Through the proposed multi-objective optimization method based on the gesture comfort and measurement precision models, $r_{RH}$ and $P_{B\_RHO}(x_{B\_RHO}, y_{B\_RHO}, z_{B\_RHO})$ are optimized to an appropriate position. Compared with the parameters $x_{B\_RHO}$ and $z_{B\_RHO}$, the difference between the objective functions $J_{CS}$ and $J_{PH}$ caused by the parameters $r_{RH}$ and $y_{B\_RHO}$ is more significant. The objective function $J_{CP}$ integrates the differences between $J_{CS}$ and $J_{PH}$ by weighting, and improves the comprehensive performance of gestures in terms of comfort and measurement precision.

6. Conclusions

The current study addresses the inconsistency between the measurement accuracy and the gesture comfort in gesture interaction. This paper proposes a multi-objective optimization method based on the gesture comfort and measurement precision. Firstly, the gesture comfort $CS$ is modeled from the muscle energy expenditure of the human upper limb. The $CS$ provides an intuitive indicator $J_{CS}$ to predict which act carries the higher risk of fatigue or injury for the joints and muscles, so as to reduce operators' fatigue and extend their working hours. Secondly, the depth stereo measurement precision $P_H$ is modeled from the measurement error. The $P_H$ provides an indicator $J_{PH}$ to evaluate the operation efficiency of the gesture interaction. Then, we propose a multi-objective optimization model $J_{CP}$ based on $J_{CS}$ and $J_{PH}$, which provides a method to achieve optimal performance between the gesture comfort and measurement precision. Finally, a case analysis based on a circular-trajectory gesture is implemented to verify the effectiveness of the multi-objective optimization method proposed in this paper. The comparison shows both the consistency and the difference between the objective functions $J_{CS}$ and $J_{PH}$ under different parameters. The multi-objective optimization method proposed in this paper effectively resolves the inconsistency between the gesture comfort and measurement precision in gesture interaction. In general, the research in this paper is of great significance for improving the ergonomics and interaction efficiency of gesture interaction.
In the future, for robot teleoperation based on gesture interaction, the authors will carry out gesture design and use the multi-objective optimization method proposed in this paper to improve the ergonomics and operation efficiency of the gesture. Furthermore, the research will focus on the usability problem of gesture interaction, and comprehensively evaluate the satisfaction, comfort, effectiveness, operation efficiency, consistency, and interactive experience in gesture interaction through usability testing to verify the effectiveness of our work.

Author Contributions

Conceptualization, W.W. and X.Q.; methodology, W.W. and C.Z.; software, W.W. and C.Z.; validation, H.S. and Y.W.; formal analysis, W.W.; investigation, W.W.; resources, S.T. and Y.H.; data curation, S.T. and Y.H.; writing—original draft preparation, W.W.; writing—review and editing, W.W.; visualization, H.S.; supervision, L.W. and W.W.; project administration, Y.H.; funding acquisition, Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Youth Fund of the Natural Science Foundation of Inner Mongolia (grant numbers 2022QN06002 and 2022QN05010), the Inner Mongolia Autonomous Region Education Science Research “14th Five-Year” Planning project (grant number NGJGH2021158), the Innovation Fund of Inner Mongolia University of Science and Technology (grant number 0303052203), the Basic Research Funds for Universities Directly Under the Inner Mongolia Autonomous Region in 2022 (grant number 0406082207), and the School of Mechanical Engineering intelligent lift project of Inner Mongolia University of Science and Technology.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data are available upon request from the authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gupta, S.; Maple, C.; Crispo, B.; Raja, K.; Yautsiukhin, A.; Martinelli, F. A Survey of Human-Computer Interaction (HCI) & Natural Habits-Based Behavioural Biometric Modalities for User Recognition Schemes. Pattern Recognit. 2023, 139, 109453.
  2. Zakia, U.; Menon, C. Detecting Safety Anomalies in PHRI Activities via Force Myography. Bioengineering 2023, 10, 326.
  3. Ezzameli, K.; Mahersia, H. Emotion Recognition from Unimodal to Multimodal Analysis: A Review. Inf. Fusion 2023, 99, 101847.
  4. Ryumin, D.; Ivanko, D.; Ryumina, E. Audio-Visual Speech and Gesture Recognition by Sensors of Mobile Devices. Sensors 2023, 23, 2284.
  5. Jiang, S.; Sun, B.; Wang, L.; Bai, Y.; Li, K.; Fu, Y. Skeleton Aware Multi-Modal Sign Language Recognition. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 19–25 June 2021; pp. 3408–3418.
  6. Hrúz, M.; Gruber, I.; Kanis, J.; Boháček, M.; Hlaváč, M.; Krňoul, Z. Ensemble Is What We Need: Isolated Sign Recognition Edition. Sensors 2022, 22, 5043.
  7. Novopoltsev, M.; Verkhovtsev, L.; Murtazin, R.; Milevich, D.; Zemtsova, I. Fine-Tuning of Sign Language Recognition Models: A Technical Report. arXiv 2023.
  8. Ryumin, D.; Ivanko, D.; Axyonov, A. Cross-Language Transfer Learning Using Visual Information for Automatic Sign Gesture Recognition. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 209–216.
  9. Chung, J.; Zisserman, A. Lip Reading in the Wild (LRW) Dataset. Asian Conf. Comput. Vis. 2016, 2016, 10112.
  10. Sincan, O.M.; Keles, H.Y. AUTSL: A Large Scale Multi-Modal Turkish Sign Language Dataset and Baseline Methods. IEEE Access 2020, 8, 181340–181355.
  11. Zheng, W.L.; Lu, B.L. Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175.
  12. Mollahosseini, A.; Hasani, B.; Mahoor, M.H. AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild. IEEE Trans. Affect. Comput. 2019, 10, 18–31.
  13. Mao, J.; Xu, R.; Yin, X.; Chang, Y.; Nie, B.; Huang, A. POSTER++: A Simpler and Stronger Facial Expression Recognition Network. arXiv 2023, arXiv:2301.12149.
  14. She, J.; Hu, Y.; Shi, H.; Wang, J.; Shen, Q.; Mei, T. Dive into Ambiguity: Latent Distribution Mining and Pairwise Uncertainty Estimation for Facial Expression Recognition. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 19–25 June 2021; pp. 6244–6253.
  15. Chunhui, J.; Zhang, J. Research on Car Gesture Interaction Design Based on the Line Design. In Lecture Notes in Computer Science; Rau, P.L., Ed.; Springer: Cham, Germany, 2017; Volume 10281.
  16. Johnson-Glenberg, M. Immersive VR and Education: Embodied Design Principles That Include Gesture and Hand Controls. Front. Robot. AI 2018, 5, 81.
  17. Rautaray, S.S.; Agrawal, A. Vision Based Hand Gesture Recognition for Human Computer Interaction: A Survey. Artif. Intell. Rev. 2015, 43, 1–54.
  18. Brambilla, C.; Nicora, M.L.; Storm, F.; Reni, G.; Malosio, M.; Scano, A. Biomechanical Assessments of the Upper Limb for Determining Fatigue, Strain and Effort from the Laboratory to the Industrial Working Place: A Systematic Review. Bioengineering 2023, 10, 445.
  19. Wu, C.; Chen, W.; Lin, C.H. Depth-Based Hand Gesture Recognition. Multimed. Tools Appl. 2016, 75, 7065–7086.
  20. Kee, D.; Karwowski, W. LUBA: An Assessment Technique for Postural Loading on the Upper Body Based on Joint Motion Discomfort and Maximum Holding Time. Appl. Ergon. 2001, 32, 357–366.
  21. Escalona, E.; Hernández, M.; Yanes, E.L.; Yanes, L.; Yanes, L. Ergonomic Evaluation in a Values Transportation Company in Venezuela. Work 2012, 41, 710–713.
  22. Sala, E.; Cipriani, L.; Bisioli, A.; Paraggio, E.; Tomasi, C.; Apostoli, P.; De Palma, G. A Twenty-Year Retrospective Analysis of Risk Assessment of Biomechanical Overload of the Upper Limbs in Multiple Occupational Settings: Comparison of Different Ergonomic Methods. Bioengineering 2023, 10, 580.
  23. Andreoni, G.; Mazzola, M.; Ciani, O.; Zambetti, M.; Romero, M.; Costa, F.; Preatoni, E. Method for Movement and Gesture Assessment (MMGA) in Ergonomics. In Digital Human Modeling. ICDHM 2009. Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5620, pp. 591–598.
  24. Nowara, R.; Holzgreve, F.; Golbach, R.; Wanke, E.M.; Maurer-Grubinger, C.; Erbe, C.; Brueggmann, D.; Nienhaus, A.; Groneberg, D.A.; Ohlendorf, D. Testing the Level of Agreement between Two Methodological Approaches of the Rapid Upper Limb Assessment (RULA) for Occupational Health Practice—An Exemplary Application in the Field of Dentistry. Bioengineering 2023, 10, 477.
  25. Keyvani, A.; Hanson, L. Ergonomic Risk Assessment of a Manikin’s Wrist Movements-a Test Study in Manual Assembly. In Proceedings of the Second International Digital Human Modeling Symposium, San Diego, CA, USA, 11–13 June 2013.
  26. Qing, T.; Jinsheng, K.; Wen-Lei, S.; Shou-Dong, W.; Zhao-Bo, L. Analysis of the Sitting Posture Comfort Based on Motion Capture System and JACK Software. In Proceedings of the 23rd International Conference on Automation & Computing, Huddersfield, UK, 7–8 September 2017; pp. 7–8.
  27. Smulders, M.; Berghman, K.; Koenraads, M.; Kane, J.A.; Krishna, K.; Carter, T.K.; Schultheis, U. Comfort and Pressure Distribution in a Human Contour Shaped Aircraft Seat (Developed with 3D Scans of the Human Body). Work 2016, 54, 925–940.
  28. Liu, S.; Schiavon, S.; Das, H.P.; Jin, M.; Spanos, C.J. Personal Thermal Comfort Models with Wearable Sensors. Build. Environ. 2019, 162, 106281.
  29. Wyss, T.; Roos, L.; Beeler, N.; Veenstra, B.; Delves, S.; Buller, M.; Friedl, K. The Comfort, Acceptability and Accuracy of Energy Expenditure Estimation from Wearable Ambulatory Physical Activity Monitoring Systems in Soldiers. J. Sci. Med. Sport 2017, 20, S133–S134.
  30. McDevitt, S.; Hernandez, H.; Hicks, J.; Lowell, R.; Bentahaikt, H.; Burch, R.; Ball, J.; Chander, H.; Freeman, C.; Taylor, C.; et al. Wearables for Biomechanical Performance Optimization and Risk Assessment in Industrial and Sports Applications. Bioengineering 2022, 9, 33.
  31. Stern, H.I.; Wachs, J.P.; Edan, Y. Human Factors for Design of Hand Gesture Human–Machine Interaction. IEEE Int. Conf. Syst. Man Cybern. 2006, 5, 4052–4056.
  32. Stern, H.I.; Wachs, J.P.; Edan, Y. Hand Gesture Vocabulary Design: A Multicriteria Optimization. IEEE Int. Conf. Syst. Man Cybern. 2004, 1, 19–23.
  33. Herman, B.; Zahraee, A.H.; Szewczyk, J.; Morel, G.; Bourdin, C.; Vercher, J.L.; Gayet, B. Ergonomic and Gesture Performance of Robotized Instruments for Laparoscopic Surgery. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 1333–1338.
  34. Xu, Y.; Chen, L.Y.; Zhang, H.B.; Zhao, X.; Tian, Y.S.; Ding, L. Optimal Pressure Comfort Design for Pilot Helmets. Comput. Methods Biomech. Biomed. Engin. 2018, 21, 437–443.
  35. Battini, D.; Delorme, X.; Dolgui, A.; Persona, A.; Sgarbossa, F. Ergonomics in Assembly Line Balancing Based on Energy Expenditure: A Multi-Objective Model. Int. J. Prod. Res. 2016, 54, 824–845.
  36. Zhang, Q.; Tao, J.; Yang, Q.; Fan, S. The Simulation and Evaluation of Fatigue/Load System in Basic Nursing Based on JACK. J. Phys. Conf. Ser. 2021, 1983, 12096.
  37. Miller, R.H. A Comparison of Muscle Energy Models for Simulating Human Walking in Three Dimensions. J. Biomech. 2014, 47, 1373–1381.
  38. Kistemaker, D.A.; Wong, J.D.; Gribble, P.L. The Central Nervous System Does Not Minimize Energy Cost in Arm Movements. J. Neurophysiol. 2010, 104, 2985–2994.
  39. Pang, L.; Li, P.; Bai, L.; Liu, D.; Zhou, Y.; Yao, J. Optimization of Air Distribution Mode Coupled Interior Design for Civil Aircraft Cabin. Build. Environ. 2018, 134, 131–145.
  40. De Temmerman, J.; Deprez, K.; Hostens, I.; Anthonis, J.; Ramon, H. Conceptual Cab Suspension System for a Self-Propelled Agricultural Machine–Part 2: Operator Comfort Optimisation. Biosyst. Eng. 2005, 90, 271–278.
  41. Ramamurthy, N.V.; Vinayagam, B.K.; Roopchand, J. Comfort Level Refinement of Military Tracked Vehicle Crew through Optimal Control Study. Def. Sci. J. 2018, 68, 265–272.
  42. Pinzon, J.A.; Vergara, P.P.; Da Silva, L.C.P.; Rider, M.J. Optimal Management of Energy Consumption and Comfort for Smart Buildings Operating in a Microgrid. IEEE Trans. Smart Grid 2019, 10, 3236–3247.
  43. Wang, S.; Zhao, L.; Hu, Y.; Yang, F. Vibration Characteristics Analysis of Convalescent-Wheelchair Robots Equipped with Dynamic Absorbers. Shock Vib. 2018, 2018, 5393051.
  44. Wang, W.; Qin, X.; Zheng, C.; Wang, H.; Li, J.; Niu, J. Mechanical Energy Expenditure-Based Comfort Evaluation Model for Gesture Interaction. Comput. Intell. Neurosci. 2018, 2018, 9861697.
  45. Zago, M.; Luzzago, M.; Marangoni, T.; De Cecco, M.; Tarabini, M.; Galli, M. 3D Tracking of Human Motion Using Visual Skeletonization and Stereoscopic Vision. Front. Bioeng. Biotechnol. 2020, 8, 181.
  46. Hussmann, S.; Knoll, F.; Edeler, T. Modulation Method Including Noise Model for Minimizing the Wiggling Error of TOF Cameras. IEEE Trans. Instrum. Meas. 2014, 63, 1127–1136.
  47. Huang, H.; Liu, J.; Liu, S.; Jin, P.; Wu, T.; Zhang, T. Error Analysis of a Stereo-Vision-Based Tube Measurement System. Measurement 2020, 157, 107659.
  48. Wang, Q.; Yin, Y.; Zou, W.; Xu, D. Measurement Error Analysis of Binocular Stereo Vision: Effective Guidelines for Bionic Eyes. IET Sci. Meas. Technol. 2017, 11, 829–838.
  49. Ma, C.; Wang, A.; Chen, G.; Xu, C. Hand Joints-Based Gesture Recognition for Noisy Dataset Using Nested Interval Unscented Kalman Filter with LSTM Network. Vis. Comput. 2018, 34, 1053–1063.
  50. Gorevoĭ, A.V.; Kolyuchkin, V.Y.; Machikhin, A.S. Estimating the Measurement Error of the Coordinates of Markers on Images Recorded with a Stereoscopic System. J. Opt. Technol. 2020, 87, 266.
  51. Sankowski, W.; Włodarczyk, M.; Kacperski, D.; Grabowski, K. Estimation of Measurement Uncertainty in Stereo Vision System. Image Vis. Comput. 2017, 61, 70–81.
Figure 1. The schematic diagram of the human upper limb musculoskeletal model. Note: the spring-damping system represents the human muscle model.
Figure 2. Depth stereo measurement model of gesture features. Note: $O_L(X_L, Y_L, Z_L)$ and $O_R(X_R, Y_R, Z_R)$ are the left and right view coordinate systems of the depth stereo measurement model; $O_B(X_B, Y_B, Z_B)$ is the human body coordinate system; $O_{xy\_L}(x_L, y_L)$ and $O_{xy\_R}(x_R, y_R)$ are the projection plane coordinate systems of the left and right view; the blue overlapping area is the measurement field of view; $f_L$ and $f_R$ are the focal lengths of the left and right view; $\phi_L$ and $\phi_R$ are the angles between the central axes and the baseline; $\rho_L$ and $\rho_R$ are the projection angles of the left and right view; $B$ is the baseline distance.
Figure 3. Schematic diagram of the multi-objective optimization method of gestures.
Figure 4. The flow chart of the multi-objective optimization calculation for the gesture.
Figure 5. The multi-objective optimization results of gestures based on circular trajectories.
Figure 6. Comparison and analysis of the objective functions with respect to the radius $r_{RH}$ of the circular trajectory.
Figure 7. Comparison and analysis of the objective functions with respect to the variable $x_{B\_RHO}$.
Figure 8. Comparison and analysis of the objective functions with respect to the variable $y_{B\_RHO}$.
Figure 9. Comparison and analysis of the objective functions with respect to the variable $z_{B\_RHO}$.
Table 1. Initial values and optimal solutions of the gesture optimization variables.

Values | $x_{B\_RHO}$ (m) | $y_{B\_RHO}$ (m) | $z_{B\_RHO}$ (m) | $r_{RH}$ (m)
Initial values $P_0$ | 0 | 0 | 0.2 | 0.1
Optimal solutions $P^{*}$ | 0.1469 | −0.1823 | 0.2809 | 0.1355