Article

An Ergonomic Risk Assessment System Based on 3D Human Pose Estimation and Collaborative Robot

Marialuisa Menanno, Carlo Riccio, Vincenzo Benedetto, Francesco Gissi, Matteo Mario Savino and Luigi Troiano
1 Department of Engineering, University of Sannio, 82100 Benevento, Italy
2 Department of Technique and Management of Industrial Systems, University of Padua, 35122 Padua, Italy
3 Department of Management Innovation Systems, University of Salerno, 84084 Fisciano, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(11), 4823; https://doi.org/10.3390/app14114823
Submission received: 1 February 2024 / Revised: 17 May 2024 / Accepted: 27 May 2024 / Published: 2 June 2024

Abstract
Human pose estimation provides methods for assessing ergonomic risk in the workplace, with the aim of preventing work-related musculoskeletal disorders (WMSDs). The recent increase in the use of Industry 4.0 technologies has allowed advances to be made in machine learning (ML) techniques for image processing to enable automated ergonomic risk assessment. In this context, this study aimed to develop a method of calculating joint angles from digital snapshots or videos using computer vision and ML techniques to achieve a more accurate evaluation of ergonomic risk. Starting with an ergonomic analysis, this study explored the use of a semi-supervised training method to detect the skeletons of workers and to estimate the positions and angles of their joints. A criticality index, based on RULA scores and fuzzy rules, is then calculated to evaluate possible corrective actions aimed at reducing WMSDs and improving production capacity using a collaborative robot that supports workers in carrying out critical operations. This method is tested in a real industrial case involving the manual assembly of electrical components, achieving a reduction in overall ergonomic stress of 13% and an increase in production capacity of 33% during a work shift. The proposed approach can overcome the limitations of recent developments based on computer vision or wearable sensors by offering an objective and flexible approach to postural analysis.

1. Introduction and Literature Review

A wide range of workplace health problems may occur due to work-related musculoskeletal disorders (WMSDs). The possible causes of WMSDs include factors such as (i) the work environment, (ii) types of activities and (iii) occupational posture [1]. Such disorders may cause inflammatory or degenerative conditions of the body’s functional structures, such as nerves, tendons, ligaments and muscles [2]. Several authors have shown that WMSDs are a major cause of injury in modern industries, leading to an overall loss of productivity in developed countries [3,4]. Workplace ergonomics and occupational health and safety (OHS) programs identify these sources of injury as ergonomic risk factors [5]. In many industrialized countries, mechanical overload, repetitive work and prolonged non-ergonomic postures are widely recognized as risk factors for upper limb and lumbar spine injuries [6,7]. Therefore, a robust tool for estimating and monitoring workers’ posture may be critical for the prevention of musculoskeletal disorders. Such a tool could be used to assess imbalances between workplace requirements and the capabilities of workers to prevent WMSDs. This goal can be achieved using observational techniques. Due to its simplicity and efficiency, Rapid Upper Limb Assessment (RULA) is one of the observational procedures most used by safety/ergonomic professionals in industrial settings [6,8,9]. The main reason for this lies in the possibility of performing quick and reliable assessments of the upper body [10,11]. Based on the angles of the worker’s joints and the relative locations of the parts of the body, a grand score, or overall numerical score [1], reflects the degree of postural load on the musculoskeletal system. This final score is determined using specific algorithms and may be used to appraise the potential dangers associated with an activity. However, RULA and other observational techniques have two significant drawbacks. First, experienced assessors are required, so these methods may not be the most cost-effective option. Second, the final score may be subject to inconsistencies arising from the subjectivity of the evaluators [12].
Therefore, since these methods rely on observational techniques that require an ergonomic analyst to observe the work in real time or from a recorded video [13], they can be affected by human error, producing results with low consistency and repeatability. These limits can be reduced or eliminated using advanced technologies [14,15]. Specifically, automated data collection and analysis may be possible using a new class of data-driven applications. In this context, technological advancements in hardware sensors and machine learning (ML) offer new opportunities for ergonomics. From this perspective, refs. [16,17] used inclinometers and accelerometers in their studies, while [18] used simple RGB color cameras. The authors of [19] used sensors and video capture methods to observe human operators and gather information during a task and a process and for use in error analysis. Yet, intrusive direct-measurement and wearable technologies may limit or interfere with the performance of work activities [18,20].
Despite these potential limits, modern technologies based on computer vision (CV) and ML enable the accurate recognition and analysis of human posture. This type of analysis can help in identifying observation-based ergonomic risk and choosing the relevant assessment methods. In this research area, a number of authors [15,21,22,23,24] have used CV systems such as color and depth devices (RGB-D) to analyze ergonomics. However, current CV-based approaches do not yet meet the requirements for the correct management of complexities associated with real-world environments, such as uneven lighting and occlusion [25,26]. Several recent studies confined workers to controlled situations, for example by limiting changes in ambient light and constraining the workers to a limited range of movement; their authors reported that changes in the camera’s viewing angle affected the accuracy and precision of the results [12,15,25,27]. These studies, regarding the automation of the posture assessment task, required additional equipment and were difficult to adapt to general industry [15,28,29,30].
A number of approaches use computer vision algorithms to assess postural risk and forecast RULA scores. In this context, Convolutional Neural Networks (CNNs) were used in [28] to predict kinematic data from images; in this approach, the network also classified the output in terms of RULA scores. In a study on WMSD risk prediction, ref. [14] compared the most widely used supervised machine learning classifiers, such as the Random Forest algorithm, the Naive Bayes classifier, the Decision Tree algorithm, the K-Nearest Neighbors algorithm and Neural Networks (NNs).
As previously stated, in addressing posture assessment problems, CV approaches may grant better measurement accuracy and have smaller impacts on workers and working environments [16,17,18]. The CV approach allows posture to be appraised and body-joint measurements to be obtained more accurately than with wearable sensors. This is due to the possibility of accurately identifying the sizes and proportions of the body’s parts and locating the body in the environment. With wearable sensors, the distances between the joints would have to be estimated, leading to greater error. The CV approach also has a smaller impact on workers in cases of technology upgrades or environmental changes.
In CV, the goal of human posture assessment is to identify the joints of the human body (knees, elbows, shoulders, etc.) in a digital image, and then to search a selection of possible joint postures for a specific pose that matches the observed joints. The use of artificial intelligence tools such as CNNs has increased the robustness of these methods. In this research area, there have been several advances in the estimation of human poses, especially from 2D images, based on large-scale data collection and deep learning techniques. Yet, the performance of 3D human pose estimation remains unsatisfactory due to the lack of in situ 3D datasets. The authors of [31,32] proposed algorithms for fusing multi-viewpoint video (MVV) with inertial measurement unit (IMU) sensor data to accurately estimate 3D human poses.
Modern open-source software tools, such as OpenPose [9], allow for real-time joint and limb detection from digital images and videos. OpenPose is a bottom-up approach to estimating the poses of multiple individuals; it takes an entire image as the input to a two-branch CNN that jointly produces confidence maps for body part detection and part affinity fields encoding the associations between body parts. Given an image as the input, the network returns a list of detected bodies, each with its own skeleton of previously defined joints. Several works, such as [33], in which the authors present VideoPose3D, have enriched posture assessment research with 3D estimation models that achieve a more realistic and complete skeleton keypoint representation, enabling the application of such models in many domains, including industrial production.
In recent years, collaborative robots have become increasingly popular in the manufacturing industry and represent a solution for reducing risks [34].
Within a production system implementing collaborative robots, human operators can be flexibly supported in their physical workloads and different tasks. The combination of the agility and cognitive abilities of human operators and the repeatability and load capacity of robots can have a positive impact on productivity, flexibility, safety and costs [35].
The rest of this paper is organized as follows. Section 2 describes the methodology used; in Section 3, details of the case study are given, the criticality indices are evaluated, and possible solutions are defined. In Section 4, the results are analyzed, and in Section 5, the discussion is presented, outlining the main limitations. Section 6 concludes the work with future research directions.
This paper aims to explore the use of artificial intelligence for 3D human pose estimation using individual snapshots or video sequences. A new body postural assessment system is developed that automatically calculates joint angles based on keypoint detection, using an Artificial Intelligence (AI) model for pose estimation fed with images of operator activity. The output consists of a 3D representation of the human skeleton with 17 keypoints; the joint angles are then calculated from the keypoints and the image processed via image-based motion capture technology. Based on the joint angles and the RULA matrix, the main inferential rules are created to build a fuzzy inference engine for the evaluation of each body region. The system calculates a global “criticality index”, which provides a measure of the ergonomic stress for each task performed by the operator and defines the impact of each upper-body quadrant on the final result. A collaborative robot is then implemented to reduce ergonomic stress and increase production capacity.
The methodology developed is based on a temporal convolutional model that takes 2D keypoint sequences as the input and produces 3D human pose estimates as the output, allowing the parallel processing of multiple frames, unlike recurrent networks. The measurement of the joint angles is therefore more accurate, and since wearable measurement sensors are not used, the ergonomic risk assessment can take place without hindering the operator’s activities.

2. Methods

This study attempts to answer some basic questions about postural assessment based on AI in conjunction with a collaborative robot (cobot).
According to [34], the ever-increasing implementation of collaborative robotics in manufacturing companies has led to changes in some of the operator activities they support, with effects on the outcomes of ergonomic assessments.
Our first research question aims to assess the impact of cobots as a possible solution to reducing ergonomic stress in workers.
RQ1: What are the comparative advantages and limitations of using a collaborative robot (cobot) to mitigate ergonomic stress in high-risk elementary operations (EOi)?
The second research question aims to investigate the overall potential impact of cobots on the performance of production lines. This was carried out using a collaborative robot in a real case study.
RQ2: How does the implementation of cobots impact production capacity, worker productivity and ergonomic stress levels in manufacturing environments?
Starting with a 3D reproduction of the human skeleton with 17 keypoints, the joint angles were computed. Then, a criticality index (Ic) was defined to (i) measure the ergonomic stress of each operation conducted by the worker and (ii) define the impact on the ergonomic assessment of each upper-body quadrant. The second part of this study explores the use of a collaborative robot to reduce and/or eliminate those elementary operations (EOis) with the highest criticality indices.
The research methodology was divided into four phases according to the structure shown in Figure 1. The first phase identifies all the EOis and defines the domains of ergonomic analysis. In the second phase, a novel method of assessing ergonomic risk is explored in which the 3D human pose is analyzed through computer vision and machine learning techniques.
In the third step, we perform a criticality analysis for the EOis through RULA combined with a fuzzy inference engine (FIE). This tool computes the total criticality index (IcTOT) for each EOi.
In the fourth step, the criticality classes are calculated. In this step, we explore the use of a cobot for those EOis whose indices fall into the highest criticality classes. The last portion of this study provides a new risk assessment method that assesses the ability of the collaborative robot to reduce the criticality classes of those EOis.
The main steps are as follows:
  • Worker activities are divided into EOis [36]. These EOis define the input of the ergonomic analysis.
  • A system based on artificial vision is introduced for the estimation of human poses in 3D; this outperforms some methods in the literature in which posture scores in the workplace are estimated from posture images by detecting the 2D coordinates of the body joints, including the wrist joints [1,37].
  • The total criticality indices for the EOis are computed through a fuzzy inference engine. These indices summarize the workers’ ergonomic stress during manufacturing operations.
  • A cobot is implemented for elementary operations with higher criticality classes.
Step 1. EOi Identification
In this phase, elementary operations [38] are identified through a video recording of production activities during a work shift [36].
The main goal is to analyze the EOis to determine the joint angles, while also considering the cyclic and non-cyclic operations conducted within the cycle time. In this phase, repetitive operations are also considered, as they can cause ergonomic problems [39].
Step 2. Human Pose Assessment
A 3D CV approach is explored to perform an ergonomic risk assessment for each EOi. Using the approach presented in VideoPose3D [33], the keypoints of the human body are detected from digital images or videos. VideoPose3D uses a system based on the dilated temporal convolution of 2D keypoint trajectories to estimate 3D keypoint coordinates. Given an input image or a video, the network provides a list of detected keypoints, as shown in Table 1 and Figure 2.
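To make this step concrete, the following is a minimal PyTorch sketch in the spirit of VideoPose3D’s dilated temporal convolutions; the layer sizes, dilation schedule and 27-frame receptive field are illustrative assumptions of ours, not the authors’ configuration.

```python
import torch
import torch.nn as nn

class TemporalPoseNet(nn.Module):
    """Maps a window of 2D keypoints (17 joints x 2 coords per frame)
    to a 3D pose estimate for the centre frame (illustrative sketch)."""

    def __init__(self, n_joints=17, channels=256):
        super().__init__()
        self.expand = nn.Conv1d(n_joints * 2, channels, kernel_size=3)
        # Dilated convolutions grow the temporal receptive field exponentially,
        # so many frames are processed in parallel (unlike a recurrent network).
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(channels, channels, kernel_size=3, dilation=d),
                nn.BatchNorm1d(channels),
                nn.ReLU(),
            )
            for d in (3, 9)  # assumed dilation schedule
        )
        self.head = nn.Conv1d(channels, n_joints * 3, kernel_size=1)

    def forward(self, kp2d):
        b, t, j, _ = kp2d.shape                        # (batch, frames, joints, 2)
        x = kp2d.reshape(b, t, j * 2).transpose(1, 2)  # -> (batch, channels, frames)
        x = torch.relu(self.expand(x))
        for block in self.blocks:
            x = block(x)
        out = self.head(x)                             # (batch, joints * 3, frames')
        return out[..., out.shape[-1] // 2].reshape(b, j, 3)

model = TemporalPoseNet().eval()
with torch.no_grad():
    pose3d = model(torch.randn(1, 27, 17, 2))  # 27 frames -> one (17, 3) skeleton
```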
The model used in this study is a 17-joint skeleton; the skeletal keypoints are listed with the corresponding joint angles in Table 1 and shown in skeleton form in Figure 2. For each joint, VideoPose3D provides (i) a vector with its relative position in the image and (ii) the confidence of the estimation, ranging from 0 (null) to 1 (complete). From this information, we calculate the overall confidence of skeletal detection as the average of the confidences of the joint estimates, which is used to filter out noisy or spurious detections.
An example of this step is given in Table 1. The left elbow angle (EL) is calculated from the positions of the left shoulder, elbow and wrist, which correspond to skeleton joints #11, #12 and #13.
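As an illustration of this calculation (our own sketch, not the authors’ code), the angle at a joint can be obtained from the two vectors formed by three keypoints, and the overall skeleton confidence can be used to filter detections; the 0.5 threshold below is an assumption.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at vertex b formed by 3D points a-b-c."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

keypoints = np.random.rand(17, 3)    # stand-in for a VideoPose3D skeleton
confidences = np.random.rand(17)     # per-joint confidence in [0, 1]

# Overall confidence = mean of the joint confidences; skeletons below an
# assumed threshold of 0.5 are treated as noisy and discarded.
if confidences.mean() >= 0.5:
    # Left elbow angle EL from left shoulder (#11), elbow (#12), wrist (#13).
    el = joint_angle(keypoints[11], keypoints[12], keypoints[13])
    print(f"EL = {el:.1f} degrees")
```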
To compute the ergonomic risk value, the threshold values of the joint angles of the skeleton must be defined. These thresholds are explicit for some joint angles, considering the RULA method (e.g., elbows and neck), but not for others. Thus, to define these threshold values, the approach of [14,40] is used. The results are shown in Table 2.
Step 3. Total Criticality Indices
This portion of the study explores the use of an FIE to calculate the total criticality indices, considering several ergonomic indicators.
The literature shows that fuzzy logic allows us to simulate complicated processes and solve problems with qualitative, vague or uncertain information [41]. Recently, there have been several applications of this methodology in the field of safety and risk analysis, such as system reliability and risk assessment [42,43,44] and the analysis of human reliability [45,46,47,48]. The authors of [49] use this methodology to assess the risk of human error, and those of [50] propose a framework based on fuzzy logic to address the inaccuracy of input data in ergonomic evaluation due to human subjectivity in field observation. A further ergonomic assessment based on fuzzy logic is proposed in [51], with the aim of assessing and defining the level of risk in manual load-handling and the severity of the impact on workers’ health. Therefore, in this study, we use this methodology to determine the total criticality index.
A fuzzy system consists of four basic units: a knowledge base and three computational units (fuzzification, inference and defuzzification).
  • A knowledge base contains all information about a system and allows other entities to process input data and obtain outputs. This information can be divided into (i) a database and (ii) a rule base. The former contains the descriptions of all variables, including membership functions, while the latter contains the inference rules.
  • Since the input data are almost always crisp and the fuzzy system works on “fuzzy” sets, a conversion is required to translate standard numeric data into fuzzy data. The operation that implements this transformation is called fuzzification. It is conducted using the membership functions of the variables being processed. To fuzzify an input value, a membership degree is assigned for each linguistic term of the variable.
  • The phase in which the returned fuzzy values are converted into usable numbers is called defuzzification. In this phase, we start with a particular fuzzy set obtained through inference. This fuzzy set is often irregularly shaped due to the combination of the results of various rules, and a single sharp value must be found that best represents it [52]. The resulting values represent the final output of the system and are used as control actions or decision parameters. A minimal sketch of these computational units is given after this list.
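The following Python sketch (the paper’s implementation uses the Matlab Fuzzy Logic Toolbox, described below) illustrates fuzzification, Mamdani-style inference and centroid defuzzification end to end; the three-label membership parameters and the tiny rule base are assumptions of ours, not the paper’s five-label configuration.

```python
import numpy as np

def trapmf(x, a, b, c, d):
    """Trapezoidal membership degree (triangular when b == c)."""
    x = np.asarray(x, dtype=float)
    rise = (x - a) / (b - a) if b > a else np.ones_like(x)
    fall = (d - x) / (d - c) if d > c else np.ones_like(x)
    return np.clip(np.minimum(rise, fall), 0.0, 1.0)

# Fuzzification: trapezoidal sets at the domain boundaries, a triangular set
# in the middle (assumed breakpoints; the paper uses five labels per variable).
labels = {"L": (0.0, 0.0, 0.2, 0.5),
          "M": (0.2, 0.5, 0.5, 0.8),
          "H": (0.5, 0.8, 1.0, 1.0)}
x_in = 0.7  # a normalized crisp ergonomic indicator
degrees = {name: float(trapmf(x_in, *p)) for name, p in labels.items()}

# Inference: each rule clips its consequent set at the activation degree of
# its antecedent; the clipped sets are then unioned with max.
u = np.linspace(0.0, 1.0, 201)  # output universe of discourse for IcTOT
aggregate = np.maximum(np.minimum(degrees["M"], trapmf(u, *labels["M"])),
                       np.minimum(degrees["H"], trapmf(u, *labels["H"])))

# Defuzzification: centroid of the aggregated, often irregular, fuzzy set.
ic_tot = float(np.sum(u * aggregate) / np.sum(aggregate))
print(f"crisp output = {ic_tot:.2f}")
```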
In this study, the fuzzy engine (FE) is implemented using the Fuzzy Logic Toolbox in Matlab R2023b. The FE processes five variables, which are outlined in [40] and Table 2. These variables measure the postural stress of the upper quadrant, in particular of the elbow, shoulder, neck and trunk, while a further indicator refers to the high repetition of the same movements.
After the identification of the EOis, the human pose for each EOi is analyzed and the joint angles are automatically determined considering the RULA method and the data in Table 2. These joint angles are the input values for the criticality index calculation. The ergonomic indicators in this study are evaluated through VideoPose3D (CVPR 2019), which uses a system based on temporal convolution over the trajectories of 2D keypoints to estimate the coordinates of 3D keypoints. The main advantage of this phase is that, unlike the RULA and REBA methods, no training is required to obtain the final result because all calculation steps are performed by the FIE.
The fuzzy interface translates numerical values into linguistic values that are associated with fuzzy sets.
For each ergonomic input parameter, a membership function with five labels is defined [40]. Thus, the domains, specified as the ranges of each variable in Table 2, are regions where the input crisp values can be associated with a fuzzy number using the membership function.
This fuzzy number is expressed using three triangular central membership functions and two trapezoidal functions at the domain’s boundaries. The membership functions are symmetrical with respect to the value 0.5 [53]. The membership degree indicates the extent to which each variable belongs to the various fuzzy sets.
Since the input values have different ranges, they are normalized to the maximum value in the [0,1] range before the fuzzy conversion. The value of the ergonomic index is defined in the range [low, medium, high]. Hence, if the value of the ergonomic index is in the high range, the normalization routine always returns a value of 1. The fuzzy interface uses the knowledge base to interpret the input variables.
The input values are interpreted in terms of fuzzy sets through an input membership function in which the labels at the ends of the domain are trapezoidal and the central ones are triangular (Figure 3).
For example, consider the ergonomic indicator for the right shoulder variable. Since the measured angle is 63.151°, the normalized value (with respect to the maximum value in the range of this variable, 90°) is 63.151/90 ≈ 0.702. Through the membership function in Figure 3, the fuzzy interface associates the ergonomic indicator with the VH set, with a degree of 0.23, and with the H set, with a degree of 0.8.
In the second step, all the rules in which our variable is associated with VH or H are considered, as shown in the following example:
  • Rule #1: if (neck is VL) and (right_elbow is VL) and (right_Shoulder is VH) and (Spine is VL) and (Repetition of the same movements is VL), then (IcTOT is L)
  • Rule #2: if (neck is VH) and (right_elbow is VH) and (right_Shoulder is H) and (Spine is VH) and (Repetition of the same movements is VH), then (IcTOT is VH)
Consider rule #2 with the following input parameters:
  • Ic neck = 1;
  • Ic right_elbow = 1;
  • Ic spine = 1;
  • Ic Repetition of the same movements = 1;
  • Ic right_Shoulder = 0.702.
Based on the membership function output, a numerical value is obtained for each index by reading the value on the x-axis corresponding to the assigned membership degree and label.
To obtain a single IcTOT output value, the weighted average is determined with respect to the membership grades:
IcTOT = (1 × 1 + 1 × 1 + 1 × 1 + 1 × 1 + 0.702 × 0.8)/(1 + 1 + 1 + 1 + 0.8) ≈ 0.95
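A quick check of this computation (each index value weighted by its membership degree):

```python
values  = [1.0, 1.0, 1.0, 1.0, 0.702]  # Ic neck, right_elbow, spine, repetition, right_Shoulder
weights = [1.0, 1.0, 1.0, 1.0, 0.8]    # corresponding membership degrees
ic_tot = sum(v * w for v, w in zip(values, weights)) / sum(weights)
print(round(ic_tot, 2))  # 0.95
```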
These rules are built to express, in linguistic terms, the requirement that a high output is obtained if even a single domain is critical. The defuzzification interface translates the result of the inference process, expressed in terms of the degree of membership of a fuzzy set, into numeric form.
The operative fuzzy implementation consists of the following steps:
  • Loading the .fis fuzzy inference file (which contains all the system settings saved through the Fuzzy Logic Toolbox) into the Matlab R2023b workspace;
  • Reading and normalizing the input array from the workspace (collecting the ergonomic parameters measured for each elementary operation, i.e., neck bending angle and shoulder angle);
  • Computing IcTOT through the fuzzy inference engine for each analyzed variable.
Step 4. Criticality Classes and Cobot Implementation
The outputs of this step are the EOis with the highest criticality classes; the cobot is implemented for these operations.
Each total criticality index is associated with a criticality class, identifying the EOis in which the workers assume a critical posture. Three criticality classes are defined through triangle-shaped and/or trapezoidal-shaped membership functions consisting of three labels: low criticality index, medium criticality index and high criticality index.
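A simple way to realize this mapping (the breakpoints below are illustrative assumptions; the paper does not list them) is to evaluate the three membership functions and take the label of maximum membership:

```python
def tri(x, a, b, c):
    """Triangular membership degree (a shoulder set when a == b or b == c)."""
    rise = (x - a) / (b - a) if b > a else 1.0
    fall = (c - x) / (c - b) if c > b else 1.0
    return max(min(rise, fall), 0.0)

# Assumed breakpoints for the three criticality classes on the [0, 1] domain.
classes = {"Low": (0.0, 0.0, 0.45),
           "Medium": (0.3, 0.55, 0.8),
           "High": (0.6, 1.0, 1.0)}

ic_tot = 0.833  # EO14's IcTOT from Table 5
print(max(classes, key=lambda name: tri(ic_tot, *classes[name])))  # "High"
```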
For the improvement of working conditions, refs. [54,55,56] proposed human–robot collaboration, yielding promising results in terms of reducing workloads and the risks associated with WMSDs. The authors of [55] highlight the importance of industrial collaborative robots, known as cobots, for reducing ergonomic problems in the workplace caused by physical and cognitive stress and for improving safety, productivity and work quality. Thanks to the close interaction between the machine and the operator, these tools allow for high precision, speed and repeatability, which have a positive impact on productivity and flexibility.
Therefore, after the implementation of the cobot, the values of the criticality indices are calculated and we perform an assessment of certain production parameters, including the impact on production capacity and ergonomic stress. The methodology is iterated if there are other EOis in which the cobot can be implemented. Automatic ergonomic assessments in various working environments will aid in the prevention of occupational injuries. Furthermore, rather than requiring observation of the entire operation period, high-risk elementary operations can be identified immediately and the information provided to the inspector for further evaluation.

3. Industrial Application

The test bed of our research methodology is a shop floor producing low-voltage breakers. These products are composed of an electrical set (a coil and an electronic switch) and a mechanical drive enclosed in an iron case (Figure 4). The experiment is conducted on the assembly activity for trip coil production (Figure 4).
The ergonomic assessment is conducted considering the following ergonomic parameters as inputs to the criticality indices:
  • Trunk bending angle;
  • Neck bending/rotation angle;
  • Left or right elbow angle;
  • Spine;
  • Number of repetitions of the same movements.
The entire assembly process is described in Figure 5, where the frames of the EOis of the worker are shown. In the first step, the analysis focuses on the set-up activity. This activity is divided into fourteen EOis. This section of the study provides an accurate ergonomic assessment method by introducing posture estimation algorithms based on bi-dimensional (2D) video. To monitor the postures of the employees, we developed an ergonomic evaluation algorithm able to provide the values of the joint angles as an output.
Starting with the video of the whole working cycle, each EOi is isolated by identifying the portion of the video where a single repetition of the operation occurs. This process generates a set of frames (images) for each elementary operation of the work cycle. Each frame set is used as an input for the detection of the 3D human pose for each EOi. Figure 6 shows an example of human pose computation, including the predicted 2D keypoints, for EO12. In the same figure, the key frames of the acquired image are shown. From these frames, the joint angles can be derived from the human skeleton reconstructed through keypoint prediction. These predictions are the coordinates (see Figure 6) of 17 keypoints. Starting with these coordinates, an algorithm processes the predictions for each frame of each EOi. The outputs of this phase are the joint angles computed in the worst case for each EOi. Table 3 reports the values of the indicators arising from the ergonomic assessment for each EOi.
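Schematically, this phase can be read as the following loop (our reconstruction; estimate_pose and joint_angles are hypothetical stand-ins for the Step 2 pose model and angle computation, and taking the per-joint maximum is our interpretation of “worst case”):

```python
import numpy as np

def worst_case_angles(frame_sets, estimate_pose, joint_angles):
    """frame_sets: {eo_id: [frame, ...]} with one frame set per EOi.
    Returns the worst-case (largest) angle per joint for each EOi."""
    worst = {}
    for eo_id, frames in frame_sets.items():
        # One vector of joint angles per frame of the elementary operation.
        per_frame = np.array([joint_angles(estimate_pose(f)) for f in frames])
        worst[eo_id] = per_frame.max(axis=0)  # most severe posture per joint
    return worst
```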
Before running the fuzzy inference engine, it is necessary to normalize the inputs with respect to the maximum values of their respective ranges. Table 4 shows the normalized values of the five ergonomic parameters.
As an example, let us consider the elementary operation EO1, for which the input parameters for the fuzzy inference engine are as follows:
  • Right elbow = 0.571
  • Right shoulder = 0.761
  • Neck bending = 0.655
  • Spine = 0.137
  • Repetition of the same movements = 0.630
According to the membership functions, the fuzzy system assigns a label and a membership degree to each input. All of the rules in which the variables have these labels are considered by the decision logic.
The system infers a numeric value for each index based on the output membership function by reading the value on the x-axis corresponding to the assigned membership degree and label.
In the example given in Figure 7, only rule #21 is completely satisfied. Furthermore, some of the rules are satisfied at least partially; this occurs when the input’s red line intersects the trapezoidal function at one of its slopes. The individual outputs must then be aggregated to produce a single value.
The centroid calculation method is used for the defuzzification step.
This methodology allows us to determine the highest ergonomic risk value for each EOi. Table 5 shows the IcTOT postural criticality indices calculated through the integration of computer vision and FIE for all the EOis, with the respective criticality classes. With the results obtained using this system, the ergonomic manager can prioritize the corrective actions required for EO12 and EO14.
In recent efforts to reduce ergonomic risk and move towards intelligent production and Industry 4.0, cobots have been increasingly utilized, particularly for manual operations in production systems. In such systems, human operators and robots collaborate safely on different work activities. Hence, the dexterity and cognitive abilities of human operators can be effectively integrated with the repeatability of operations and payload capacity of robots to achieve high productivity and flexibility, reduce ergonomic risk, improve safety and lower costs.
Regarding EO12, we can replace the machine with an automatically operated press to reduce the criticality index and improve the ergonomic quality. Moreover, EO14 is improved by implementing a collaborative robot that helps the operator to position the assembled component in the box, as shown in Table 6.
To verify the benefits of implementing the cobot and to answer RQ1, the new values of the total criticality index are calculated. Table 7 summarizes the results obtained in terms of criticality indices and classes, demonstrating that the cobot reduces the criticality class of the EO14 operation to low; thus, immediate action is not necessary.
Therefore, through the application of our methodology, it is possible to address RQ1, and we propose that the introduction of cobots to replace certain EOis is an effective solution to reduce ergonomic stress.

4. Results Analysis

After the determination of the criticality class for each elementary operation, certain production parameters were considered within a work shift, allowing us to measure the impact of implementing the cobot on the stress resulting from the EOis conducted by the operator. The overall ergonomic score (OES) considering all the EOis was determined through the approach of [40]:
OES = (Σi PESi) / (3 × NUM_DOM × NUM_ELEM_OPS)
The elements of the above equation are defined as follows:
PESi is the sum of the scores obtained for a single elementary operation i;
NUM_DOM is the number of selected ergonomic domains;
NUM_ELEM_OPS is the number of EOis.
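The formula transcribes directly into code; the scores below are hypothetical placeholders (the paper reports only the OES normalized to its maximum, in Figure 8):

```python
def overall_ergonomic_score(pes, num_dom, num_elem_ops):
    """OES = sum_i(PES_i) / (3 * NUM_DOM * NUM_ELEM_OPS)."""
    return sum(pes) / (3 * num_dom * num_elem_ops)

# Hypothetical per-operation scores for the 14 EOis over 5 ergonomic domains.
pes_scores = [2.1, 2.0, 2.2, 1.9, 1.4, 1.5, 2.1, 1.9, 1.8, 1.1, 1.3, 2.4, 1.5, 2.5]
oes = overall_ergonomic_score(pes_scores, num_dom=5, num_elem_ops=14)
print(f"OES = {oes:.3f}")
```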
Figure 8 reports the OESs, normalized with respect to the maximum value, before and after the implementation of the cobot.
Another interesting result pertains to the variation in production capacity for each work shift. The implementation of the cobot allows the operator to start assembling a new component while the cobot positions the component in the “finished products” box. Therefore, in Figure 9, we compare the production capacity trend before and after cobot implementation.
From Figure 8 and Figure 9, we can see that the cobot affects both the total ergonomic score and the production capacity. Therefore, to answer RQ2, the two figures were analyzed in detail: the OES is reduced by 13%, while the production capacity increases from 673 to 900 pieces produced during a work shift. In addition, Figure 9 shows the difference in production for each slot of work. At the beginning of the work shift, the help provided by the cobot has a strong effect, but as the operator’s stress increases, the difference in pieces produced drops considerably. The production difference in the first slot is 34.3%, while in the last work slot it falls to 20%. This difference occurs because EO12 has not yet been modified and the operator is subjected to stress, which decreases the production capacity during these time slots.
The analyzed case study aimed to test our methodology by considering a real video of a worker during the manual assembly of an electromechanical component. To study the accuracy of our method in evaluating upper quadrant angles, frames were periodically captured.
Subsequently, to ensure effective validation of the methodology, an evaluation was carried out considering the frames for a mold-positioning operation performed on a hydraulic press.
The same ergonomic parameters were used as the input, and the frames of the chosen EOis were processed following our research methodology. As the output, the system returned the human pose in 3D, as shown in Figure 10a,b.
In particular, the EOi in Figure 10a concerns the positioning and centering of the mold in the hydraulic press, and the EOi in Figure 10b concerns positioning the metal strip on the tape unwinder and inserting the end of the strip into the tape guide.
In addition, the algorithm provides the joint angles calculated considering the worst case based on the human skeleton with the 17 keypoints. Table 8 shows the normalized values of the two EOis considered.
At this point, the fuzzy inference engine was implemented. This system infers a numeric value for each index based on the output membership function by reading the value on the x-axis corresponding to the assigned membership degree and label.
The final result provides an IcTOT of 0.53 for the EOi in Figure 10a and an IcTOT of 0.46 for the EOi in Figure 10b; therefore, the criticality class of both analyzed EOis is medium.

5. Discussion

Unlike previous vision-based methods [22], our proposed method enables ergonomic assessments at the joint level, which can be more accurate and beneficial for specific corrective actions. The reason for this lies in the new method developed for the analysis of postural data. This study used a temporal convolutional model that takes 2D keypoint sequences as the input and produces 3D human pose estimates as the output. This convolutional model allows for the parallel processing of multiple frames, unlike recurrent networks. The benefits of this work include (i) the non-intrusive collection of posture data and performance of ergonomic risk assessment without interfering with the normal activities of workers; (ii) the provision of 3D posture data instead of 2D posture data, so that joint angles can be measured accurately; and (iii) the adaptability of the model to different 2D keypoint detectors and effective management of large-scale scenarios through dilated convolutions.
This research resulted from industrial activities aimed at developing a framework for the automated estimation of human poses in workplaces. The proposed model was developed through computer vision and machine learning techniques and combined with cobots to support production and safety managers in preventing workplace health problems due to WMSDs, with the goal of improving production capacity and decreasing ergonomic risk.
In the case study, we chose to analyze a task in which the assembly of the semi-finished product is mainly manual, consisting of numerous elementary operations, and whose analysis allowed us to identify which tasks should be assigned to the collaborative robot.
This task is generally performed by an operator with over 10 years of experience; a comparison in ergonomic and productivity terms was therefore carried out considering the use of the cobot alongside an expert worker. The benefits that this solution brings in terms of reducing stress and increasing the number of pieces completed must also be weighed against the following:
  • The need to modify the layout of the workstation and the entire line;
  • The need to adopt specific semi-finished containers for handling by the cobot;
  • The costs of purchasing and programming the cobot and training the operator to interact with it.
From a managerial perspective, this study and its activities addressed the effect of a collaborative robot on process flow, supporting the company in evaluating the impact of ergonomic risk and productivity within a single framework. The developed research methodology was also validated for another workstation.
Although this research has shown significant results, it has certain limitations.
The first limitation pertains to workplace analysis when there are interactions between several human operators, or when objects partially occlude the area to be analyzed.
The second limitation pertains to the analysis of workplaces where there are more people, as the model developed in this study only determines the pose of a single operator. Future work may include a method of 3D pose estimation for multiple people.
The third limitation stems from the fact that it is necessary to first record a video of all operations, and then, to input it into our model. Future research could process video sequences in real-time systems with an automatic alerting system for the most critical operations.

6. Conclusions

The topic of human–robot collaboration and its possibilities in industrial scenarios is of great interest and at the center of debate within the scientific community. This is mainly because collaborative robotics is unanimously considered one of the most promising technologies for manufacturing companies that aspire to develop more flexible, adaptive and efficient production systems. In this research, the use of a neural model integrated with a cobot was explored to assess and optimize 3D human poses. The architecture created is based on temporal information with dilated convolutions over trajectories of 2D keypoints for the automatic calculation of joint angles. Our ergonomic analysis produced a dataset that provides objective values for postures, with the acquisition and processing of data completely independent of the evaluation phase. To alleviate human stress and improve several production parameters, a collaborative robot was implemented, made possible by adaptive system technologies based on flexibility, reconfigurability and production efficiency. The main contribution of this work lies in the calculation of joint angles based on 3D human pose estimations in workplaces. A second contribution of this work is the IcTOT calculation, which combines various ergonomic indicators through a fuzzy inference system. IcTOT identifies the elementary operations for which implementing the collaborative robot is most critical, allowing ergonomic stress and production capacity to be evaluated. These methods outperform those in [1,37,57], in which posture scores are appraised by detecting the 2D coordinates of body joints. The automation of observation-based techniques for posture assessment using computer vision is able to eliminate errors due to human subjectivity and to reduce the time needed for posture evaluation in workplaces.
Our research results were validated by comparing them with risk classes computed using the classical method based on tabular values and nominal angle values for each elementary operation.
Future research will address the following:
  • Testing our methodology on different personal protective equipment worn by operators, or considering different environmental conditions, to evaluate whether they can influence the obtained results.
  • The processing of video sequences in real-time systems with an automatic alerting system for identifying the most critical operations.
  • The evolution of 3D pose estimations for multiple people in the same area.
  • Comparing ergonomic assessments between workers and investigating the impact of cobot implementation on different workstations while changing the operators.

Author Contributions

Conceptualization, M.M., C.R. and V.B.; methodology, M.M., C.R., F.G. and V.B.; software, F.G. and V.B.; validation, M.M., F.G. and V.B.; formal analysis, M.M., F.G. and V.B.; investigation, M.M. and C.R.; resources, M.M., F.G. and V.B.; data curation, M.M., C.R., F.G. and V.B.; writing—original draft preparation, M.M., F.G. and V.B.; writing—review and editing, M.M.S. and L.T.; visualization, M.M.S., L.T., M.M., F.G. and V.B.; supervision, M.M.S. and L.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Massiris Fernández, M.; Fernández, J.Á.; Bajo, J.M.; Delrieux, C.A. Ergonomic risk assessment based on computer vision and machine learning. Comput. Ind. Eng. 2020, 149, 106816. [Google Scholar] [CrossRef]
  2. Middlesworth, M. How to Prevent Sprains and Strains in the Workplace. 2015. Available online: https://ergo-plus.com/prevent-sprains-strains-workplace (accessed on 31 January 2024).
  3. Van Der Beek, A.J.; Dennerlein, J.T.; Huysmans, M.A.; Mathiassen, S.E.; Burdorf, A.; Van Mechelen, W.; Coenen, P. A research framework for the development and implementation of interventions preventing work-related musculoskeletal disorders. Scand. J. Work Environ. Health 2017, 526–539. [Google Scholar] [CrossRef] [PubMed]
  4. Ng, A.; Hayes, M.J.; Polster, A. Musculoskeletal disorders and working posture among dental and oral health students. Healthcare 2016, 4, 13. [Google Scholar] [CrossRef] [PubMed]
  5. Luttmann, A.; Jager, M.; Griefahn, B.; Caffier, G.; Liebers, F. Preventing Musculoskeletal Disorders in the Workplace; World Health Organization: Geneva, Switzerland, 2003. [Google Scholar]
  6. Roman-Liu, D. Comparison of concepts in easy-to-use methods for MSD risk assessment. Appl. Ergon. 2014, 45, 420–427. [Google Scholar] [CrossRef]
  7. Ha, C.; Roquelaure, Y.; Leclerc, A.; Touranchet, A.; Goldberg, M.; Imbernon, E. The French musculoskeletal disorders surveillance program: Pays de la Loire network. Occup. Environ. Med. 2009, 66, 471–479. [Google Scholar] [CrossRef] [PubMed]
  8. Jeong, S.O.; Kook, J. CREBAS: Computer-Based REBA Evaluation System for Wood Manufacturers Using MediaPipe. Appl. Sci. 2023, 13, 938. [Google Scholar] [CrossRef]
  9. Cao, Z.; Simon, T.; Wei, S.E.; Sheikh, Y. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7291–7299. [Google Scholar]
  10. Kee, D. An empirical comparison of OWAS, RULA and REBA based on self-reported discomfort. Int. J. Occup. Saf. Ergon. 2020, 26, 285–295. [Google Scholar] [CrossRef] [PubMed]
  11. Kong, Y.K.; Lee, S.Y.; Lee, K.S.; Kim, D.M. Comparisons of ergonomic evaluation tools (ALLA, RULA, REBA and OWAS) for farm work. Int. J. Occup. Saf. Ergon. 2018, 24, 218–223. [Google Scholar] [CrossRef] [PubMed]
  12. Li, X.; Han, S.; Gül, M.; Al-Hussein, M.; El-Rich, M. 3D visualization-based ergonomic risk assessment and work modification framework and its validation for a lifting task. J. Constr. Eng. Manag. 2018, 144, 04017093. [Google Scholar] [CrossRef]
  13. Andrews, D.M.; Fiedler, K.M.; Weir, P.L.; Callaghan, J.P. The effect of posture category salience on decision times and errors when using observation-based posture assessment methods. Ergonomics 2012, 55, 1548–1558. [Google Scholar] [CrossRef]
  14. Sasikumar, V. A model for predicting the risk of musculoskeletal disorders among computer professionals. Int. J. Occup. Saf. Ergon. 2018, 26, 384–396. [Google Scholar] [CrossRef] [PubMed]
  15. Plantard, P.; Shum, H.P.; Le Pierres, A.S.; Multon, F. Validation of an ergonomic assessment method using Kinect data in real workplace conditions. Appl. Ergon. 2017, 65, 562–569. [Google Scholar] [CrossRef] [PubMed]
  16. Nath, N.D.; Akhavian, R.; Behzadan, A.H. Ergonomic analysis of construction worker’s body postures using wearable mobile sensors. Appl. Ergon. 2017, 62, 107–117. [Google Scholar] [CrossRef] [PubMed]
  17. Jayaram, U.; Jayaram, S.; Shaikh, I.; Kim, Y.; Palmer, C. Introducing quantitative analysis methods into virtual environments for real-time and continuous ergonomic evaluations. Comput. Ind. 2006, 57, 283–296. [Google Scholar] [CrossRef]
  18. Zhang, H.; Yan, X.; Li, H. Ergonomic posture recognition using 3D view-invariant features from single ordinary camera. Autom. Constr. 2018, 94, 1–10. [Google Scholar] [CrossRef]
  19. Papoutsakis, K.; Papadopoulos, G.; Maniadakis, M.; Papadopoulos, T.; Lourakis, M.; Pateraki, M.; Varlamis, I. Detection of physical strain and fatigue in industrial environments using visual and non-visual low-cost sensors. Technologies 2022, 10, 42. [Google Scholar] [CrossRef]
  20. Yu, Y.; Yang, X.; Li, H.; Luo, X.; Guo, H.; Fang, Q. Joint-level vision-based ergonomic assessment tool for construction workers. J. Constr. Eng. Manag. 2019, 145, 04019025. [Google Scholar] [CrossRef]
  21. Vignais, N.; Bernard, F.; Touvenot, G.; Sagot, J.C. Physical risk factors identification based on body sensor network combined to videotaping. Appl. Ergon. 2017, 65, 410–417. [Google Scholar] [CrossRef] [PubMed]
  22. Yan, X.; Li, H.; Wang, C.; Seo, J.; Zhang, H.; Wang, H. Development of ergonomic posture recognition technique based on 2D ordinary camera for construction hazard prevention through view-invariant features in 2D skeleton motion. Adv. Eng. Inform. 2017, 34, 152–163. [Google Scholar] [CrossRef]
  23. Battini, D.; Persona, A.; Sgarbossa, F. Innovative real-time system to integrate ergonomic evaluations into warehouse design and management. Comput. Ind. Eng. 2014, 77, 1–10. [Google Scholar] [CrossRef]
  24. Xu, X.; Robertson, M.; Chen, K.B.; Lin, J.H.; McGorry, R.W. Using the Microsoft Kinect™ to assess 3-D shoulder kinematics during computer use. Appl. Ergon. 2017, 65, 418–423. [Google Scholar] [CrossRef] [PubMed]
  25. Fang, W.; Love, P.E.; Luo, H.; Ding, L. Computer vision for behaviour-based safety in construction: A review and future directions. Adv. Eng. Inform. 2020, 43, 100980. [Google Scholar] [CrossRef]
  26. Liu, M.; Han, S.; Lee, S. Tracking-based 3D human skeleton extraction from stereo video camera toward an on-site safety and ergonomic analysis. Constr. Innov. 2016, 16, 348–367. [Google Scholar] [CrossRef]
  27. Seo, J.; Alwasel, A.; Lee, S.; Abdel-Rahman, E.M.; Haas, C. A comparative study of in-field motion capture approaches for body kinematics measurement in construction. Robotica 2019, 37, 928–946. [Google Scholar] [CrossRef]
  28. Li, L.; Martin, T.; Xu, X. A novel vision-based real-time method for evaluating postural risk factors associated with musculoskeletal disorders. Appl. Ergon. 2020, 87, 103138. [Google Scholar] [CrossRef]
  29. Peppoloni, L.; Filippeschi, A.; Ruffaldi, E.; Avizzano, C.A. A novel wearable system for the online assessment of risk for biomechanical load in repetitive efforts. Int. J. Ind. Ergon. 2016, 52, 1–11. [Google Scholar] [CrossRef]
  30. Clark, R.A.; Pua, Y.H.; Fortin, K.; Ritchie, C.; Webster, K.E.; Denehy, L.; Bryant, A.L. Validity of the Microsoft Kinect for assessment of postural control. Gait Posture 2012, 36, 372–377. [Google Scholar] [CrossRef] [PubMed]
  31. Trumble, M.; Gilbert, A.; Hilton, A.; Collomosse, J. Deep autoencoder for combined human pose estimation and body model upscaling. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 784–800. [Google Scholar]
  32. Von Marcard, T.; Henschel, R.; Black, M.J.; Rosenhahn, B.; Pons-Moll, G. Recovering accurate 3d human pose in the wild using imus and a moving camera. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 601–617. [Google Scholar]
  33. Pavllo, D.; Feichtenhofer, C.; Grangier, D.; Auli, M. 3d human pose estimation in video with temporal convolutions and semi-supervised training. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 7753–7762. [Google Scholar]
  34. El Makrini, I.; Merckaert, K.; De Winter, J.; Lefeber, D.; Vanderborght, B. Task allocation for improved ergonomics in Human-Robot Collaborative Assembly. Interact. Stud. 2019, 20, 102–133. [Google Scholar] [CrossRef]
  35. Parra, P.S.; Calleros, O.L.; Ramirez-Serrano, A. Human-robot collaboration systems: Components and applications. In Proceedings of the International Conference of Control, Dynamic Systems, and Robotics, Virtual, 9–11 November 2020; Volume 150, pp. 1–9. [Google Scholar]
  36. Battini, D.; Faccio, M.; Persona, A.; Sgarbossa, F. New methodological framework to improve productivity and ergonomics in assembly system design. Int. J. Ind. Ergon. 2011, 41, 30–42. [Google Scholar] [CrossRef]
  37. Nayak, G.K.; Kim, E. Development of a fully automated RULA assessment system based on computer vision. Int. J. Ind. Ergon. 2021, 86, 103218. [Google Scholar] [CrossRef]
  38. Shuval, K.; Donchin, M. Prevalence of upper extremity musculoskeletal symptoms and ergonomic risk factors at a Hi-Tech company in Israel. Int. J. Ind. Ergon. 2005, 35, 569–581. [Google Scholar] [CrossRef]
  39. Das, B.; Shikdar, A.A.; Winters, T. Workstation redesign for a repetitive drill press operation: A combined work design and ergonomics approach. Hum. Factors Ergon. Manuf. Serv. Ind. 2007, 17, 395–410. [Google Scholar] [CrossRef]
  40. Savino, M.M.; Battini, D.; Riccio, C. Visual management and artificial intelligence integrated in a new fuzzy-based full body postural assessment. Comput. Ind. Eng. 2017, 111, 596–608. [Google Scholar] [CrossRef]
  41. Klir, J.; Yuan, B. Fuzzy Sets and Fuzzy Logic: Theory and Applications; Prentice Hall: Upper Saddle River, NJ, USA, 1995. [Google Scholar]
  42. Li, H.X.; Al-Hussein, M.; Lei, Z.; Ajweh, Z. Risk identification and assessment of modular construction utilizing fuzzy analytic hierarchy process (AHP) and simulation. Can. J. Civ. Eng. 2013, 40, 1184–1195. [Google Scholar] [CrossRef]
  43. Markowski, A.S.; Mannan, M.S.; Bigoszewska, A. Fuzzy logic for process safety analysis. J. Loss Prev. Process Ind. 2009, 22, 695–702. [Google Scholar] [CrossRef]
  44. Nasirzadeh, F.; Afshar, A.; Khanzadi, M.; Howick, S. Integrating system dynamics and fuzzy logic modelling for construction risk management. Constr. Manag. Econ. 2008, 26, 1197–1212. [Google Scholar] [CrossRef]
  45. Zio, E.; Baraldi, P.; Librizzi, M. A fuzzy set-based approach for modeling dependence among human errors. Fuzzy Sets Syst. 2008, 160, 1947–1964. [Google Scholar] [CrossRef]
  46. Marseguerra, M.; Zio, E.; Librizzi, M. Human reliability analysis by fuzzy “CREAM”. Risk Anal. 2007, 27, 137–154. [Google Scholar] [CrossRef] [PubMed]
  47. Kim, B.J.; Bishu, R.R. Uncertainty of human error and fuzzy approach to human reliability analysis. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2006, 14, 111–129. [Google Scholar] [CrossRef]
  48. Konstandinidou, M.; Nivolianitou, Z.; Kiranoudis, C.; Markatos, N. A fuzzy modeling application of CREAM methodology for human reliability analysis. Reliab. Eng. Syst. Saf. 2006, 91, 706–716. [Google Scholar] [CrossRef]
  49. Li, P.C.; Chen, G.H.; Dai, L.C.; Li, Z. Fuzzy logic-based approach for identifying the risk importance of human error. Saf. Sci. 2010, 48, 902–913. [Google Scholar] [CrossRef]
  50. Golabchi, A.; Han, S.; Fayek, A.R. A fuzzy logic approach to posture-based ergonomic analysis for field observation and assessment of construction manual operations. Can. J. Civ. Eng. 2016, 43, 294–303. [Google Scholar] [CrossRef]
  51. Contreras-Valenzuela, M.R.; Seuret-Jiménez, D.; Hdz-Jasso, A.M.; León Hernández, V.A.; Abundes-Recilla, A.N.; Trutié-Carrero, E. Design of a Fuzzy Logic Evaluation to Determine the Ergonomic Risk Level of Manual Material Handling Tasks. Int. J. Environ. Res. Public Health 2022, 19, 6511. [Google Scholar] [CrossRef]
  52. Campanella, P. Neuro-Fuzzy Learning in Context Educative. In Proceedings of the 2021 19th International Conference on Emerging eLearning Technologies and Applications (ICETA), Košice, Slovakia, 11–12 November 2021; pp. 58–69. [Google Scholar]
  53. Bukowski, L.; Feliks, J. Application of fuzzy sets in evaluation of failure likelihood. In Proceedings of the 18th International Conference on Systems Engineering (ICSEng’05), Las Vegas, NV, USA, 16–18 August 2005; pp. 170–175. [Google Scholar]
  54. Villani, V.; Sabattini, L.; Czerniaki, J.N.; Mertens, A.; Vogel-Heuser, B.; Fantuzzi, C. Towards modern inclusive factories: A methodology for the development of smart adaptive human-machine interfaces. In Proceedings of the 2017 22nd IEEE international conference on emerging technologies and factory automation (ETFA), Limassol, Cyprus, 12–15 September 2017; pp. 1–7. [Google Scholar]
  55. Cherubini, A.; Passama, R.; Crosnier, A.; Lasnier, A.; Fraisse, P. Collaborative manufacturing with physical human–robot interaction. Robot. Comput.-Integr. Manuf. 2016, 40, 1–13. [Google Scholar] [CrossRef]
  56. Tsarouchi, P.; Makris, S.; Chryssolouris, G. Human–robot interaction review and challenges on task planning and programming. Int. J. Comput. Integr. Manuf. 2016, 29, 916–931. [Google Scholar] [CrossRef]
  57. Li, L.; Xu, X. A deep learning-based RULA method for working posture assessment. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2019, 63, 1090–1094. [Google Scholar] [CrossRef]
Figure 1. Research methodology.
Figure 2. Kinematic representation of the human body by 17 keypoints.
Figure 3. Input membership function of right shoulder.
Figure 4. Assembly activity for Trip coil.
Figure 5. Frames of elementary operations.
Figure 6. Reconstruction of 3D Human Pose from 2D Image.
Figure 7. Rule viewer of the MATLAB Fuzzy Logic Toolbox for EO14.
Figure 8. OESs before and after cobot implementation.
Figure 9. Production capacity for one work shift before and after cobot implementation.
Figure 10. (a) Reconstruction of 3D Human Pose from 2D Image. (b) Reconstruction of 3D Human Pose from 2D Image.
Table 1. RULA joint angles from 17 skeleton data. Points are numbered according to Figure 2.

Angle Name | Acronym | Points of the Skeleton Joints Involved
Left elbow | EL | <13, 12, 11
Right elbow | ER | <14, 15, 16
Left shoulder | SL | <12, 11, 08
Right shoulder | SR | <15, 16, 08
Left clavicle | CL | <11, 08, 07
Right clavicle | RC | <14, 08, 07
Left knee | KL | <04, 05, 06
Right knee | KR | <01, 02, 03
Neck twisting | NT | <10, 09, 08
Neck bending left | NB | <09, 08, 11
Neck bending right | NBR | <09, 08, 14
Neck flexion | NF | <09, 08, 00
Trunk twisting right | TT | <11, 00, 04
Trunk twisting left | TTL | <14, 00, 04
Trunk bending | TB | <04, 00, 07
Table 2. The ergonomic domains and criticality index. Score levels (Low/Medium/High) are given according to RULA.

Domain Group | Ergonomic Indicator | Low | Medium | High
Upper Quadrant (UQ) | Trunk bending angle (degrees) | 0°–15° | 15°–30° | >30°
 | Left or right elbow angle (degrees) | 0°–15° | 15°–40° | >45°
 | Left or right shoulder (degrees) | 20°–45° | 45°–90° | >90°
 | Neck bending or rotation angle (degrees) | 0°–10° | 10°–20° | >20°
 | Forearm rotation angle (degrees) | 0°–90° | >90° | >90° and crossed
 | Spine (degrees) | 0°–20° | 20°–60° | >60°
 | Wrist bending angle (degrees); the value is calculated considering the ulnar or radial deviation (inward or outward rotation) according to the RULA and Health Safety Executive guidelines (2014) | 0° (the wrist is not subject to rotation) | +15°; −15° | >15°
Stereotypy, loads, typical actions (TA) | Arm position for material withdrawal | Without extending an arm | Extending an arm | Two hands needed
 | Trunk rotation (degrees) | 0°–45° | 45°–90° | >90°
Repetition of the same movements (RM) | High repetition of the same movements | From 25% to 50% of the cycle time | From 51% to 80% of the cycle time | >80% of the cycle time
Table 3. Input values of step 3.

Elementary Operation | Right Elbow [angles] | Left Elbow [angles] | Right Shoulder [angles] | Left Shoulder [angles] | Neck Bending [angles] | Spine [angles] | Repetition of the Same Movements [time]
EO1 | 102.695 | 97.214 | 68.531 | 69.910 | 26.200 | 98.224 | 239.786
EO2 | 107.364 | 90.895 | 68.257 | 69.259 | 29.533 | 97.857 | 235.964
EO3 | 112.442 | 106.222 | 68.574 | 69.125 | 30.267 | 96.867 | 254.074
EO4 | 110.278 | 102.848 | 64.148 | 58.482 | 27.388 | 93.154 | 297.116
EO5 | 111.060 | 97.371 | 63.036 | 60.891 | 24.281 | 92.560 | 198.744
EO6 | 111.554 | 90.522 | 63.151 | 59.894 | 25.081 | 94.731 | 298.116
EO7 | 104.628 | 96.774 | 63.587 | 61.163 | 27.969 | 95.033 | 245.700
EO8 | 104.122 | 99.810 | 63.661 | 61.779 | 27.365 | 96.354 | 267.540
EO9 | 109.308 | 113.187 | 64.225 | 60.825 | 22.259 | 92.860 | 236.054
EO10 | 84.007 | 95.473 | 67.993 | 59.761 | 15.216 | 95.207 | 253.508
EO11 | 90.000 | 100.099 | 69.199 | 14.468 | 14.468 | 94.822 | 305.760
EO12 | 125.132 | 92.035 | 83.918 | 64.064 | 15.285 | 99.599 | 259.820
EO13 | 108.646 | 92.460 | 68.662 | 61.729 | 13.165 | 96.608 | 278.650
EO14 | 38.817 | 77.647 | 63.000 | 69.242 | 21.789 | 96.694 | 229.170
Table 4. Results of ergonomic assessment. All values are normalized to the maximum value of the corresponding range.

Elementary Operation | Right Elbow | Right Shoulder | Neck Bending | Spine | Repetition of the Same Movements
Range According to RULA | (0°–45°) | (20°–90°) | (0°–20°) | (0°–60°) | (From 25% to 80%)
EO1 | 0.571 | 0.761 | 0.655 | 0.137 | 0.630
EO2 | 0.596 | 0.758 | 0.738 | 0.131 | 0.620
EO3 | 0.625 | 0.762 | 0.757 | 0.114 | 0.670
EO4 | 0.613 | 0.713 | 0.685 | 0.052 | 0.780
EO5 | 0.617 | 0.700 | 0.607 | 0.0427 | 0.520
EO6 | 0.620 | 0.702 | 0.627 | 0.0789 | 0.580
EO7 | 0.581 | 0.707 | 0.699 | 0.0839 | 0.780
EO8 | 0.578 | 0.707 | 0.684 | 0.1059 | 0.660
EO9 | 0.607 | 0.714 | 0.556 | 0.0477 | 0.700
EO10 | 0.467 | 0.755 | 0.380 | 0.0868 | 0.618
EO11 | 0.500 | 0.769 | 0.362 | 0.0804 | 0.664
EO12 | 0.695 | 0.932 | 0.382 | 0.1600 | 0.800
EO13 | 0.604 | 0.763 | 0.329 | 0.1101 | 0.680
EO14 | 0.882 | 0.700 | 0.595 | 0.4449 | 0.750
Table 5. IcTOT values.

Elementary Operation | IcTOT | Criticality Class
EO1 | 0.69 | Medium
EO2 | 0.69 | Medium
EO3 | 0.7 | Medium
EO4 | 0.65 | Medium
EO5 | 0.41 | Medium
EO6 | 0.44 | Medium
EO7 | 0.7 | Medium
EO8 | 0.63 | Medium
EO9 | 0.60 | Medium
EO10 | 0.33 | Low
EO11 | 0.39 | Medium
EO12 | 0.77 | High
EO13 | 0.44 | Medium
EO14 | 0.833 | High
Table 6. Cobot implementation.

Elementary Operation | Corrective Action Proposed | Corrective Action
EO14 | Implementation of a collaborative robot | [image]
Table 7. New criticality indices after corrective action.

Elementary Operation | IcTOT | Criticality Class
EO1 | 0.69 | Medium
EO2 | 0.69 | Medium
EO3 | 0.7 | Medium
EO4 | 0.65 | Medium
EO5 | 0.41 | Medium
EO6 | 0.44 | Medium
EO7 | 0.7 | Medium
EO8 | 0.63 | Medium
EO9 | 0.60 | Medium
EO10 | 0.33 | Low
EO11 | 0.39 | Medium
EO12 | 0.77 | High
EO13 | 0.44 | Medium
EO14 | 0 | Low
Table 8. Ergonomic assessment. All values are normalized to the maximum value of the corresponding range.

Elementary Operation | Right Elbow | Right Shoulder | Neck Bending | Spine | Repetition of the Same Movements
Range According to RULA | (0°–45°) | (20°–90°) | (0°–20°) | (0°–60°) | (From 25% to 80%)
EOi in Figure 10a | 0.438 | 0.977 | 0.596 | 0.272 | 0.158
EOi in Figure 10b | 0.497 | 0.791 | 0.475 | 0.103 | 0.354