Article

Deep Learning Methods to Analyze the Forces and Torques in Joints Motion

1 Department of Statistics and Data Sciences, Washington University in St. Louis, St. Louis, MO 63105, USA
2 Department of Physics, Tianjin University, Tianjin 300354, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(15), 6846; https://doi.org/10.3390/app14156846
Submission received: 11 June 2024 / Revised: 27 July 2024 / Accepted: 30 July 2024 / Published: 5 August 2024

Abstract

This paper proposes a composite model that combines convolutional neural network models and mechanical analysis to determine the forces acting on an object. First, we establish a model using Newtonian mechanics to analyze the forces experienced by the human body during movement, particularly the forces on the joints. The model calculates the mapping relationship between the object’s movement and the forces on the joints. Then, by analyzing a large number of fencing competition videos with a deep learning model, we extract video features to study the torques and forces on human joints. Our analysis of numerous images reveals that, in certain movement patterns, the peak pressure on the knee joint can be two to three times higher than in a normal state, while the driving knee can withstand peak torques of 400–600 N·m. This straightforward model can effectively capture the forces and torques on the human body during movement using a deep neural network. Furthermore, the model can also be applied to problems involving non-rigid body motion.

1. Introduction

Saber fencing, known for its quick-paced and dynamic nature, has a rich history, with roots tracing back to the Mongol Empire (1206–1368), long before the modern sport was formalized. After centuries of development [1,2], fencing remains a highly competitive combat sport, even though physical contact is now registered by electric blades and lamé, which send signals to the display when a fencer is hit. The saber category demands extremely fast reflexes, as evidenced by the fastest recorded fencing response of about 13.5 ms [3]. As a result, competitive saber fencers are often trained to perform movements such as extended lunges, “flunges” (fly + lunge), and speed footwork, which place high demands on the athletes’ ankles and knees [4].
With the development of computer vision and machine learning, data-oriented analysis algorithms offer a new dimension in understanding and enhancing fencing performance. The collection and analysis of large datasets from fencing matches—including motion capture data, sensor readings, and video footage—enable the identification of patterns and trends that are not readily apparent through traditional observation.
Previous machine learning applications in fencing fall mainly into two categories. First, machine learning is applied to the interactions between fencers. Motion capture algorithms have been trained for fencing scoring [5], reaching an accuracy of 89.1%. Prediction algorithms for fencers in competitions have also been reported to help fencers master games [6]. Additionally, sword-tip tracking has been modeled with an algorithm that predicts the tip’s next movement from its current position [7]. Second, algorithms for motion analysis are another critical concern in fencing. Video-based classification and correction models exist for fencing style analysis and improvement [8]. A neural network-based gait tracking algorithm has also been designed to identify factors influencing peak horizontal speed and differences across skill levels [9].
However, the risk of injuries, especially to the ankles and joints of these highly competitive fencers, has not been extensively investigated. The common injuries in fencing, including saber, often involve strained muscles and ligaments, as well as twisted knees and ankles [10]. Ankle sprains are another common injury in fencing, accounting for a significant portion of all fencing injuries. They result from the ballistic movements, quick stop–starts, direction changes, and strong lunging involved in the sport. The repeated high mechanical demands placed on the musculoskeletal structures during fencing contribute to these injuries [11,12].
To understand these injuries, the forces involved must first be understood. Classical mechanics, with its foundational principles of motion and force, serves as a robust framework for modeling the dynamics of fencing. By applying Newton’s laws of motion, we can develop equations that describe the kinematics and kinetics of a fencer’s movements. Before the introduction of modern vision algorithms, sensors were the major means of capturing data for fencing analysis [11,13]. Modern image processing algorithms, combined with classical mechanics, have since simplified data collection for a comprehensive understanding of fencing dynamics. Such a dual approach not only aids the development of more effective training regimens and injury prevention strategies, but also enhances the competitive edge of athletes by leveraging data-driven insights. As fencing continues to evolve, the synergy between classical mechanics and modern data science will play a crucial role in pushing the boundaries of athletic achievement, ensuring that the sport remains both scientifically grounded and dynamically innovative.
In this work, a new machine learning model is therefore proposed and applied from the perspective of force and torque analysis. The model combines a convolutional neural network-based algorithm that extracts the coordinates of critical joints in the body with a mechanical module that solves for the forces and torques between these points. Based on the detected joint positions, the proposed model can solve for the forces and torques experienced by the ankles, knees, and waist joints without any extra equipment, thereby revealing hidden quantities that reflect the impacts in real fencing competitions. In Section 2, data and modeling details are presented. The motions of different body points extracted from different images are evaluated in Section 3. The model and results are briefly summarized in Section 4.

2. Data and Modeling Details

The model consists of a joint identification module and a mechanical module. When an image is passed in, the joint identification module processes the image and extracts the joint positions. The joints are then passed to the mechanical module, which solves for the forces and torques as its output. The whole pipeline is summarized in Figure 1a.

2.1. Data Collection

In the fine-tuning stage (model training), 400 fencing images are prepared with joints marked by hand. The joints include the knees, ankles, and hips. The center of mass of the body is also marked for verification. The training/validation images are obtained by decomposing a video and picking out players’ movements at random moments (described in more detail in Appendix A) to avoid bias. The 400 images are split into a training set and a validation set at a 4:1 ratio.
To analyze and compare the fencing styles and injury risks of athletes from the United States, Eastern Europe, and Korea, three players are selected to represent each style, considering data availability and representativeness. The fencers chosen as representatives of their respective regions have been identified based on their elevated rankings within the global fencing community. This selection criterion ensures that the fencers exemplify developed and distinct regional styles, offering a reliable basis for comparative analysis in the study of international fencing techniques. The full list of players and their labels in this work can be found in Table 1. In application, for a particular game between players, the attack/defense movements usually happen within 2 s. Thus, each video of attack/defense is usually decomposed into 30–60 images for joint detection and mechanical modeling.
Each player’s single series of movements is obtained from their international competitions in the form of online videos (cached and provided by https://www.youtube.com). Details of the videos are described in Appendix A. The videos are downloaded at the highest available resolution at 29.97 frames per second (FPS) to capture detailed movement and technique. The following data preparation steps are applied to the raw video:
  • Each trimmed video containing the players’ movement is decomposed into a series of time–image pairs.
  • Within each time–image pair, if the image duplicates one of the previous images, we discard the image but preserve the time, to be filled later.
  • A tuned model (explained in the next subsection) is used to extract the critical joint points.
  • Since the initial separation between players is 4 m, the pixel–meter correspondence can be established from the first image (Figure 1b): before the players move, the distance between the two lines is 1279 pixels, so 319.75 pixels correspond to 1 m.
  • We convert players’ joint data to actual values.
  • We fill the missing entries by interpolation (“cubic” interpolation is used in this work).
  • We apply a noise-filtering algorithm to high-order terms, including acceleration (“L2 denoising” is used in this work).
With the cleaned and calibrated joint data, the force/torque analysis with classical mechanics can then be carried out.
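The calibration and gap-filling steps above can be sketched as follows. The 4 m reference distance, the 1279-pixel example, and the cubic interpolation come from the text; the array layout and function names are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def pixels_to_meters(coords_px, lane_px, lane_m=4.0):
    """Convert coordinates from pixels to meters using the known
    4 m initial separation between the two lines."""
    return np.asarray(coords_px) * (lane_m / lane_px)

def fill_missing(times, values):
    """Cubic interpolation over frames whose images were dropped as
    duplicates (their timestamps were preserved, values are NaN)."""
    values = np.asarray(values, dtype=float)
    valid = ~np.isnan(values)
    spline = CubicSpline(np.asarray(times)[valid], values[valid])
    return spline(times)

# Example: 1279 px between the lines -> 319.75 px per meter
x_m = pixels_to_meters([0.0, 319.75, 639.5], lane_px=1279)

# Example: frame 2 was a duplicate, so its value is missing
t = [0, 1, 2, 3, 4]
y = [0.0, 1.0, np.nan, 9.0, 16.0]  # samples of y = t**2 with a gap
y_filled = fill_missing(t, y)
```

In a real pipeline each joint's x and y trajectories would be filled independently before the noise filtering step.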

2.2. A Machine Learning Approach for Critical Points Detection

The body posture identification model is fine-tuned from Detectron2 [14], a high-quality collection of state-of-the-art object detection algorithm implementations. As a Region-based Convolutional Neural Network (RCNN) algorithm [15], it segments each image into boxed regions that are scored by the attached classifiers. The boundary of each box is represented by a pair of “anchor points”. With RCNN, each image can be marked by multiple boxes corresponding to regions of interest (ROIs) [16].
The critical component of RCNN and Detectron2 is the Region Proposal Network (RPN), which processes features from each image and suggests segmented regions represented by pairs of “anchor points”. Given a trained Convolutional Neural Network (CNN) classifier, the idea of the RPN is to score each candidate boxed region with the classifier’s classification score. This research uses MASK-RCNN [17], a framework built on FASTER-RCNN [18], which is an extension of RCNN; FASTER-RCNN accelerates the RPN procedure of scoring ROIs. Detectron2, a library developed by Facebook AI Research (FAIR), is an open-source implementation of MASK-RCNN.
Though Detectron2 is designed for image segmentation, the algorithm is further customized for joint detection. The idea is to find the common “anchor point” of adjacent limbs; for instance, the knee is the common “anchor point” of the ROIs of a thigh and a shank. Starting from a customized model (from the Detectron2 model zoo) that identifies 17 key spatial points of the human body, further fine-tuning is performed. The loss function of the fine-tuning is the sum of three components, as shown in Equation (1):
$\mathrm{TotalLoss} = \mathrm{clsLoss} + \mathrm{boxLoss} + \mathrm{dflLoss}$
where “clsLoss” is the cross-entropy loss of the classifier, “boxLoss” is the loss between the real box segmentation and the predicted one, and “dflLoss” is the Distribution Focal Loss, which improves the precision of the “anchor points”. As shown in Figure 1c, the total loss decreases as the fine-tuning progresses. The final loss by component is presented in Figure 1d, indicating good behavior across the training and validation sets. The precision after tuning (Figure 1e) confirms that the tuning is of sufficient quality for the mechanical modeling.
With fine-tuning, the model is adapted to analyze the movements and techniques of fencers in videos. The precise identification of joint points enables the subsequent implementation of classical mechanics. More details of the model tuning are given in Appendix B.
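The mapping from the 17 detected key points to the six mechanical joints can be sketched as below. The index layout follows the standard COCO keypoint order used by Detectron2's keypoint models; which physical leg maps to the front rods depends on the fencer's stance, so the left/right assignment here is an illustrative assumption.

```python
import numpy as np

# Standard COCO keypoint indices (as used by Detectron2 keypoint models)
COCO = {"left_hip": 11, "right_hip": 12, "left_knee": 13,
        "right_knee": 14, "left_ankle": 15, "right_ankle": 16}

def keypoints_to_rod_joints(kpts):
    """Map a (17, 2) keypoint array to the six joints A..F of the
    five-rod model: A and F are ankles, B and E knees, C and D hips.
    The left leg is arbitrarily taken as the front leg here."""
    kpts = np.asarray(kpts)
    return {
        "A": kpts[COCO["left_ankle"]],  "B": kpts[COCO["left_knee"]],
        "C": kpts[COCO["left_hip"]],    "D": kpts[COCO["right_hip"]],
        "E": kpts[COCO["right_knee"]],  "F": kpts[COCO["right_ankle"]],
    }

# Example with dummy pixel coordinates
kpts = np.zeros((17, 2))
kpts[COCO["left_ankle"]] = (100.0, 500.0)
joints = keypoints_to_rod_joints(kpts)
```

The remaining 11 key points (head, arms, shoulders) are absorbed into the single upper-body segment of the mechanical model.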

2.3. A Mechanical-Based Force and Torque Analysis

In the context of mechanical modeling, the human body (Figure 2a) in motion is represented as a simplified mechanism comprising interconnected rods and pivots. As shown in Figure 2b, the two shins (feet included) are represented by rods AB and EF, while the two thighs are represented by rods BC and DE. In the simplification, the pelvis, torso, head, and arms are collectively represented as a single component, denoted as CD. This simplification allows for a more streamlined analysis of the mechanical interactions between these five segments, which are interconnected at their respective joints, denoted as A, B, C, D, E and F. The mass and inertia of each component are considered to be proportional to a player’s height. As shown in Figure 2b, the simplified model for a lunging player is based on two approximations. First, we consider the player’s motion to be two-dimensional in a vertical plane. Second, since we focus on the significant mechanical impacts of lunging (or abrupt stopping), we consider that the friction between the propelling foot and the ground is large enough to prevent relative motion.
For the lunge in Figure 2c, the five rods can be isolated for force analysis. In the mechanical derivation, each rod is stationary in its own reference frame, with the origin attached to the left end of the rod. Thus, rod AB is stationary in its reference frame attached to point A. Four forces are exerted on rod AB: three real forces, f_A, f_B, and G_AB, and a virtual force F_AB. Thus, the net force is 0:
$\mathbf{f}_A + \mathbf{f}_B + \mathbf{G}_{AB} + \mathbf{F}_{AB} = 0$
Bold letters represent vectors. f_A and f_B are the forces at joints A and B. G_AB is the gravity acting on rod AB. F_AB is the inertia force on rod AB due to the selection of the non-inertial reference frame. For the calculation of the moments on rod AB, point A is taken as the reference point, so f_A contributes nothing to the total torque. Besides the three torques from f_B, G_AB, and F_AB, there are two driving torques, T_A and T_B, for the contracting/stretching of the ankle and the knee, and a virtual torque M_AB due to the selection of the reference frame with respect to point A. The six torques form a rotational equilibrium:
$T_{\mathbf{f}_B} + T_{\mathbf{G}_{AB}} + T_{\mathbf{F}_{AB}} + T_A + T_B + M_{AB} = 0.$
The inertia force and the inertia torque can be calculated as
$\mathbf{F}_{AB} = -m_{AB}\left(\mathbf{a}_A + \frac{1}{2}\alpha_{AB}\,Q\,\mathbf{r}_{AB} - \frac{1}{2}\omega_{AB}^2\,\mathbf{r}_{AB}\right),$
$M_{AB} = -I_{A,AB}\,\alpha_{AB} - \frac{1}{2} m_{AB}\,\mathbf{a}_A^{T} Q\,\mathbf{r}_{AB},$
where a_A is the linear acceleration of point A with respect to the ground. α_AB and ω_AB are the angular acceleration and angular velocity of rod AB with respect to point A. m_AB is the mass of rod AB, while I_A,AB is the moment of inertia of rod AB with respect to point A. The inertias and masses are calculated by assuming that the player’s body matches the average body defined in the China National Standard [19] (details in Appendix C). Additionally, to make the forces and torques comparable across players, all players are assumed to weigh 80 kg and stand 1.8 m tall. $Q = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ is a constant matrix [20] introduced for mathematical convenience and used extensively in robotics. The superscript T denotes the matrix transpose. r_AB is the position vector from point A to point B. a_A and r_AB are written as column vectors.
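A minimal sketch of the inertia force and inertia torque of Equations (4) and (5) follows, assuming the standard convention that inertia terms enter with a negative sign and that each rod's center of mass sits at its midpoint (hence the 1/2 factors):

```python
import numpy as np

Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # 90-degree rotation matrix

def inertia_force(m, a_A, alpha, omega, r_AB):
    """Inertia (fictitious) force on a uniform rod AB in the
    non-inertial frame attached to point A."""
    return -m * (a_A + 0.5 * alpha * Q @ r_AB - 0.5 * omega**2 * r_AB)

def inertia_torque(I_A, m, a_A, alpha, r_AB):
    """Inertia torque about point A (scalar in the 2D model)."""
    return -I_A * alpha - 0.5 * m * (a_A @ Q @ r_AB)

# Sanity check: a rod at rest in an inertial frame has no inertia terms
r = np.array([0.0, 0.5])
F = inertia_force(4.0, np.zeros(2), 0.0, 0.0, r)
M = inertia_torque(0.1, 4.0, np.zeros(2), 0.0, r)
```

The term `a_A @ Q @ r_AB` is the 2D scalar cross product written with the Q matrix, matching the transpose notation in the text.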
Similarly, the other four parts, BC, CD, DE and EF obey similar mechanical equations.
$-\mathbf{f}_B + \mathbf{f}_C + \mathbf{G}_{BC} + \mathbf{F}_{BC} = 0,$
$-\mathbf{f}_C + \mathbf{f}_D + \mathbf{G}_{CD} + \mathbf{F}_{CD} = 0,$
$-\mathbf{f}_D + \mathbf{f}_E + \mathbf{G}_{DE} + \mathbf{F}_{DE} = 0,$
$-\mathbf{f}_E + \mathbf{f}_F + \mathbf{G}_{EF} + \mathbf{F}_{EF} = 0.$
$T_{\mathbf{f}_C} + T_{\mathbf{G}_{BC}} + T_{\mathbf{F}_{BC}} - T_B + T_C + M_{BC} = 0,$
$T_{\mathbf{f}_D} + T_{\mathbf{G}_{CD}} + T_{\mathbf{F}_{CD}} - T_C + T_D + M_{CD} = 0,$
$T_{\mathbf{f}_E} + T_{\mathbf{G}_{DE}} + T_{\mathbf{F}_{DE}} - T_D + T_E + M_{DE} = 0,$
$T_{\mathbf{f}_F} + T_{\mathbf{G}_{EF}} + T_{\mathbf{F}_{EF}} - T_E + T_F + M_{EF} = 0.$
During a lunge, the force and torque at point F are both negligible. Therefore, the five vector equations and the five scalar equations form a system of linear equations that can be solved for the forces and torques. The virtual forces and torques are calculated with equations similar to Equations (4) and (5).
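Because the force at the rear point is negligible during a lunge, the chain of equilibrium equations can equivalently be solved by back-substitution from F to A rather than as one large matrix. A static-case sketch (no inertia terms), under the assumed sign convention that each joint force carries everything further along the chain, is:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def joint_forces_static(masses, f_F=None):
    """Propagate static joint forces along the rod chain
    F -> E -> D -> C -> B -> A.  `masses` lists the segment masses
    in order [EF, DE, CD, BC, AB]; gravity points in -y, so each
    joint must carry the weight of all segments already traversed."""
    f = np.zeros(2) if f_F is None else np.asarray(f_F, dtype=float)
    forces = []
    for m in masses:
        f = f + np.array([0.0, m * G])
        forces.append(f.copy())
    return forces  # forces at joints E, D, C, B, A

# An 80 kg body split into the five segments (illustrative split)
masses = [4.0, 10.0, 52.0, 10.0, 4.0]
f_E, f_D, f_C, f_B, f_A = joint_forces_static(masses)
# In the static single-support limit, the supporting ankle A
# carries the full body weight.
```

The dynamic case replaces each segment's weight with weight plus inertia terms and adds the torque equations, but the back-substitution order is the same.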
To dodge or to prevent further injuries, players usually stop abruptly after they lunge. To capture the forces during stopping, the force and torque at point A can be taken as 0 when the body relies on point F to stop. In this work, the lunging approximation is mainly considered, since the movements in games are aggressive. In practice, to improve the quality of identification, the identified joints are filtered by the L2 normalization algorithm and then calibrated again against the reference body height.

2.4. Solving Kinetic Quantities with Critical Points

As demonstrated in Figure 2a, the MASK-RCNN algorithm identifies the positions of the joints in real time, and the positions must be converted to kinetic quantities, including velocity, acceleration, angular velocity, and angular acceleration. Assume that the positions of points A and B at time t are p_A(t) and p_B(t). The velocities and accelerations at time t are the first- and second-order time derivatives of the positions, which can be calculated numerically with the five-point central finite difference method (FDM) as
$\mathbf{v}_A = \frac{1}{\delta t}\left[\frac{1}{12}\mathbf{p}_A(t-2) - \frac{2}{3}\mathbf{p}_A(t-1) + \frac{2}{3}\mathbf{p}_A(t+1) - \frac{1}{12}\mathbf{p}_A(t+2)\right],$
$\mathbf{a}_A = \frac{1}{\delta t^2}\left[-\frac{1}{12}\mathbf{p}_A(t-2) + \frac{4}{3}\mathbf{p}_A(t-1) - \frac{5}{2}\mathbf{p}_A(t) + \frac{4}{3}\mathbf{p}_A(t+1) - \frac{1}{12}\mathbf{p}_A(t+2)\right].$
δt is the inverse of the FPS, i.e., the time elapsed between two consecutive images. p(t) is the position vector at time t. With the five-point central FDM approach, the error is controlled at the level of O(δt⁴).
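The five-point central stencils above can be implemented directly; for interior samples they are exact on low-order polynomials, which gives a convenient check. The helper below is a sketch assuming uniformly spaced frames:

```python
import numpy as np

def five_point_derivatives(p, dt):
    """First and second time derivatives of a sampled trajectory p(t)
    using the five-point central stencils.  The result covers the
    interior samples (indices 2 .. n-3)."""
    p = np.asarray(p, dtype=float)
    v = (p[:-4] / 12 - 2 * p[1:-3] / 3
         + 2 * p[3:-1] / 3 - p[4:] / 12) / dt
    a = (-p[:-4] / 12 + 4 * p[1:-3] / 3 - 5 * p[2:-2] / 2
         + 4 * p[3:-1] / 3 - p[4:] / 12) / dt**2
    return v, a

# Check against p(t) = t**2: v = 2t and a = 2, which the stencils
# reproduce exactly (up to rounding) for a quadratic.
dt = 1.0 / 29.97          # one frame at ~29.97 FPS
t = np.arange(10) * dt
v, a = five_point_derivatives(t**2, dt)
```

For 2D joint trajectories the same function is applied to the x and y coordinates separately.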
The rotation-related quantities can be evaluated similarly. Assume that rod AB is represented by the coordinates of both its ends; the angle of the vector AB(t) = p_B(t) − p_A(t) is evaluated as
$\theta(t) = \arctan\frac{p_{By}(t) - p_{Ay}(t)}{p_{Bx}(t) - p_{Ax}(t)}.$
With the FDM, the angular velocity and angular acceleration can be expressed as
$\omega_{A,AB} = \frac{1}{\delta t}\left[\frac{1}{12}\theta_{A,AB}(t-2) - \frac{2}{3}\theta_{A,AB}(t-1) + \frac{2}{3}\theta_{A,AB}(t+1) - \frac{1}{12}\theta_{A,AB}(t+2)\right],$
$\alpha_{A,AB} = \frac{1}{\delta t^2}\left[-\frac{1}{12}\theta_{A,AB}(t-2) + \frac{4}{3}\theta_{A,AB}(t-1) - \frac{5}{2}\theta_{A,AB}(t) + \frac{4}{3}\theta_{A,AB}(t+1) - \frac{1}{12}\theta_{A,AB}(t+2)\right].$
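In practice the rod angle is computed more robustly with atan2, which avoids the divide-by-zero and quadrant ambiguity of a plain arctangent, and the angle series is unwrapped so that the finite differences above stay meaningful across the ±π boundary. A sketch:

```python
import numpy as np

def rod_angles(p_A, p_B):
    """Angle of the vector AB(t) = p_B(t) - p_A(t) for each frame,
    using atan2 and unwrapping to keep theta(t) continuous."""
    d = np.asarray(p_B) - np.asarray(p_A)
    return np.unwrap(np.arctan2(d[:, 1], d[:, 0]))

# A rod of length 1 rotating from 0 to 90 degrees over 4 frames
p_A = np.zeros((4, 2))
true_angles = np.linspace(0.0, np.pi / 2, 4)
p_B = np.column_stack([np.cos(true_angles), np.sin(true_angles)])
theta = rod_angles(p_A, p_B)
```

The resulting theta series feeds directly into the same five-point stencils used for the linear quantities.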

3. Results

3.1. Movements and Attacks

Based on the kinematic parameters resolved from the joint positions, the player’s movement and attacking attempts can be identified. As shown in Figure 3, the data are taken from the video of a game of p6 versus p9 as a representative clip of p6’s style. As shown in Figure 3a, the front (with respect to his opponent) thigh changes its angle rapidly between frames 17–35, while the front calf shows significant changes in angle between frames 20–40. As a result, extremes in the angular velocity of the front thigh can be observed at frames 20 and 27 (Figure 3b). The front calf shows its extreme angular velocities at frames 27 and 34. The alignment of the negative extremes of the front thigh and front calf indicates a rapid swing of the whole leg, which corresponds to the first stage of the attack: a fast approach toward the opponent. To drive the angular velocity to a certain level, an angular acceleration must be applied in advance (Figure 3c). Since the time interval between two consecutive frames is small, the accelerations during rapid movement become significant. Additionally, the accelerations of each joint are displayed in Figure 3d.

3.2. Forces and Torques

The forces and torques can be further analyzed based on the kinetic quantities. According to the video of p6, his movement can be decomposed into four periods: a charge period (frames 0–14), a left foot lunge period (frames 15–26), a right foot lunge period (frames 27–31), and a second left foot lunge period (frames 32–38), as shown in Figure 4a. Only 1.27 s elapse from frame 0 to 38 before the winner emerges. Since the right foot lunge period is short, the clip is treated as dominated by the left foot lunge. There are three local maxima in the motion of the player, located at frames 17, 25, and 35. The maximum force (f_A, on the ankle) is 1223.7 N (Figure 4b). If the player stands still, the force exerted on each ankle is only half of the body weight; thus, for an 80 kg player with a static load of 392 N on each ankle, the maximum force during the motion reaches 312% of the static value. Such a situation is not rare during the player’s attack, and the force on ankle A peaks three times in a row within a 0.43 s interval.
The time–torque plot in Figure 4c offers more insight into the force exerted by a player’s supporting leg during a real game, specifically focusing on the torque at the five critical joints. As the game progresses, the torque values exhibit an increasing magnitude from frame 15 to the end; rather than behaving monotonically, the torque fluctuates. These fluctuations in torque, together with the changes in the forces, define the player’s lunge, which consists of three steps with force peaks and torque troughs. Focusing on the largest changes, the troughs in the curve show that the torque T_B reaches local minima at frames 25, 30, and 35. Only 0.33 s elapse from frame 25 to 35, during which the player’s supporting knee finishes a push-adapt-push-adapt-push process. Such results give a clear picture of the player’s movement. Additionally, the largest torque in magnitude is 529.3 N·m, which is close to the peak torque of a typical sports car or a typical farm tractor.

3.3. Statistics by Styles and Forces

The force-torque analysis provides an advanced lens through which to view the fencing game. The nine players in Table 1 are reviewed from the force-torque perspective before common trends are summarized.
The USA fencing style features collective movement that emphasizes a balance between offense and defense, with a focus on strategic planning and execution. These fencers showcase an ability to adapt to different situations and opponents, leveraging their strengths to overcome weaknesses and exploit opportunities.
P1 (Eli Dershwitz) has a reputation for being a well-rounded fencer, capable of executing complex and precise maneuvers with ease. His footwork is particularly noteworthy, allowing him to move quickly and efficiently on the strip. As shown in Figure 5a, the knee of his supporting leg experiences significant and smooth changes in force, which reflect the full stretch in his movements. The torques in Figure 5b represent his powerful lunge and leap as his muscles completely release their stored potential energy. P2 (Daryl Homer), on the other hand, is known for his speed and agility, which he combines with strong attacks to great effect. He tends to be more aggressive in his approach, often pushing his opponents back and capitalizing on their mistakes. The earlier fluctuations in the forces of Figure 5c correspond to his several attempts before the attack. The whole attack appears as a more compact pattern (later part of Figure 5c,d). P3 (Colin Heathcock), meanwhile, is a tactically minded fencer who excels at reading opponents’ movements and responding with counterattacks. He is adept at finding openings in his opponents’ defenses and exploiting them to gain the upper hand. He maintains a vigorous movement throughout the attack process (peaks/troughs of forces and torques in Figure 5e,f).
From the perspective of mechanics, one can see frequently changing forces for strategic movement. Adequately tightened/stretched joints correspond to the smoothly changing forces and torques.
The European style is famous for its collective movement, which emphasizes a strategic and defensive approach characterized by precise footwork, intelligent positioning, and calculated responses to opponents’ actions. By analyzing their opponents’ movements and exploiting weaknesses, European players execute well-planned attacks.
P4 (Luca Curatoli) is known for his strong defensive capabilities and strategic approach to fencing. He excels at reading his opponents’ movements and effectively countering with well-timed attacks. His footwork is precise and calculated, allowing him to maintain a solid defensive stance while also setting up opportunities for offensive maneuvers. As shown in Figure 6a, player p4’s movement also shows various attempts before he attacks. He performs a significant lunge at the last stage of the attack (the highest peak in Figure 6a,b). P5 (Aron Szilagyi), a three-time Olympic champion, is a master of tactics and mental games. He combines his exceptional defense with an ability to lure opponents into making mistakes, which he capitalizes on with quick and efficient attacks. His footwork is smooth and controlled, enabling him to swiftly change direction and adapt to different situations. His movement is characterized by simple patterns in his forces and torques (Figure 6c,d), with few probing attempts and only decisive movements. P6 (Vincent Anstett) showcases a well-rounded skill set. He possesses a high level of technical proficiency in both offense and defense. His footwork is fluid and dynamic, allowing him to move effortlessly on the strip and launch accurate attacks. Accordingly, the evolution of his forces also appears significant and smooth (Figure 6e). He can also perform large-torque movements, like player p4, in the last stage of his attack (Figure 6f).
The Korean fencing style places a strong emphasis on footwork and speed. Korean fencers excel at quickly moving in and out of range to execute attacks. They also possess a high attacking tempo and good power, which allows them to deliver significant impact when they strike. However, their defense and tactical abilities are somewhat weaker, which can leave them vulnerable if an opponent is able to anticipate their movements.
P7 (Oh Sanguk) is known for his lightning-fast footwork and ability to quickly move in and out of range during engagements. He relies heavily on mobility to create openings for his attacks, which he executes with precision and speed. As shown in Figure 7a,b, spikes and non-smooth patterns can be observed in his force and torque plots. Though the non-smooth patterns are partly caused by the limited frame rate, they also reflect the fast movement of Korean fencers. Since player P7 uses heelwork that involves rolling his front heel, the model must be replaced by a set of similar equations in which the front foot supports the player’s whole body between frames 33 and 36. Thus, the ankle force peaks at frame 34 at 1742.6 N, which represents the load in his ankle/heel movement; the torques at the same moment also show significant peaks. P8 (Kim Junghwan), on the other hand, is a more aggressive fencer who utilizes powerful attacks and lunges to dominate his opponents. P8’s forces and torques show larger deviations, reflecting asynchronous joint movement (Figure 7c,d), which may be attributed to his counter-attacking style, using quick reflexes to exploit openings in his opponent’s defenses. P9 (Gu Bongil) is a well-rounded fencer who combines strong footwork and tactical abilities with solid offensive and defensive skills. Figure 7e,f represent his effectiveness at anticipating his opponents’ movements and countering with well-timed attacks.

4. Summary

In summary, this work builds a model to analyze the forces acting on multiple points of the body. We construct a mechanical model based on the MASK-RCNN joint recognition algorithm and analyze events from various backgrounds (USA, European, and Korean players). With the model, we generate a large dataset that connects point motion with the forces acting on those points. Additionally, we construct a deep neural network trained with this dataset. With the well-trained network, this approach enables the extraction of the forces and torques acting on specific points of the human body from fencing images. The theoretical framework allows direct estimation of the forces on the joints of the human body during fencing analysis. Furthermore, the model can be extended to other scenarios involving multi-body motions.

Author Contributions

Conceptualization, R.G., B.C. and Y.L.; methodology, R.G., B.C. and Y.L.; formal analysis, R.G. and Y.L.; investigation, R.G., B.C. and Y.L.; data curation, R.G. and Y.L.; writing—original draft preparation, R.G.; writing—review and editing, B.C. and Y.L.; visualization, R.G.; supervision, B.C. and Y.L.; project administration, B.C. and Y.L.; funding acquisition, B.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 12175165.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Sources of Videos

Since most fencing games have similar backgrounds and player outfits, we focus on random gestures in attack/defense. A video (https://youtu.be/kyCmWyAfBIQ?si=-ZChlT62b8myMxPU, accessed on 5 July 2024) is selected and decomposed, and 400 images of different moments are randomly selected. The joints in these 400 images are annotated by hand. The images with joint information are then loaded into PyTorch for model fine-tuning.
In the application stage, the videos of the following games are used: Milan 2023 World Championships (28 July 2023), 2016 Rio Olympics (10 August 2016), Men’s Sabre Fencing Individual 2017 Moscow Grand Prix (2 June 2017), and 2018 Sabre Grand Prix Men’s Individual (Moscow) (30 March 2018). A clip of less than 2 min of attack-defense is selected from each video and then decomposed into images for joint detection and mechanical modeling.

Appendix B. Fine Tuning in Joint Detection

The Detectron2 package is used for joint detection, but its off-the-shelf configurations are not precise enough for mechanical modeling. For instance, with the parameters of the “R_50_FPN_1x” configuration, the typical standard deviation of joint detection is as large as 0.882 m. The model chosen from the Detectron2 model zoo is therefore “R_101_FPN_3x”. The backbone of this model is ResNet101, which is widely used for image feature extraction in many fields. Then, with the help of the classifier, the RPN can provide precise regional suggestions for segmentation.

Appendix C. Human Body Modeling

The human body is modeled following the “Inertial parameters of the adult human body” [19] from the China National Standard, chosen for its user-friendly documentation; the average male body is described by components formulated with regression equations. Such a model is adopted for its availability, since the players’ publicly available data are not sufficient to calculate all the important parameters.
For example, the mass of a male’s thigh cannot be obtained directly, but according to the reference, it is calculated as
$m_{\mathrm{thigh}} = 0.093 + 0.152\,\mathrm{weight} + 0.0004 \times 1000\,\mathrm{height}.$
The inertia of the thigh can be evaluated as
$I_{\mathrm{thigh}} = \left(-366488.9 + 554.9\,\mathrm{weight} + 280.78 \times 1000\,\mathrm{height}\right)/1000^2.$
The units of weight and height are kilograms and meters, respectively. The upper body is merged into rod CD in Figure 2b. Though moving arms may change the center of mass of rod CD, the moment of inertia of CD is fixed at 2/3 of the body inertia. To make the fencing data comparable, all nine fencers are assumed to be 1.8 m tall and 80 kg in weight.
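The regression formulas above can be evaluated directly for the standardized 80 kg, 1.8 m player. The sketch below uses the coefficients as printed; the leading sign of the inertia intercept is reconstructed from the printed magnitudes, so the exact values should be treated as indicative.

```python
def thigh_parameters(weight_kg, height_m):
    """Thigh mass (kg) and moment of inertia (kg*m^2) from the
    regression coefficients quoted in Appendix C.  The sign of the
    inertia intercept is a reconstruction from the printed values;
    height enters the regressions in millimeters (1000 * height)."""
    m = 0.093 + 0.152 * weight_kg + 0.0004 * (1000.0 * height_m)
    I = (-366488.9 + 554.9 * weight_kg
         + 280.78 * (1000.0 * height_m)) / 1000.0**2
    return m, I

m_thigh, I_thigh = thigh_parameters(80.0, 1.8)
# Both values should land in a physically plausible range
# for an adult male (roughly 10-13 kg and ~0.2 kg*m^2).
```

The other segments (shank, trunk) follow the same pattern with their own coefficients from the standard.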

References

  1. Morehouse, T. Are There 100,000 Fencers? 2010. Available online: https://timmorehouse.wordpress.com/2010/01/14/are-there-100000-fencers (accessed on 1 July 2024).
  2. SR112024A2547. Fencing Equipment Market Report by Product (Protective Clothing, Weapons, Masks, and Others), End User (Men, Women, Children), Distribution Channel (Online, Offline), and Region 2024–2032. 2023. Available online: https://www.imarcgroup.com/fencing-equipment-market (accessed on 1 July 2024).
  3. Hosseini, A.H.; Lifshitz, J. Brain Injury Forces of Moderate Magnitude Elicit the Fencing Response. Med. Sci. Sport. Exerc. 2009, 41, 1687–1697. [Google Scholar] [CrossRef] [PubMed]
  4. Harmer, P.A. Incidence and Characteristics of Time-Loss Injuries in Competitive Fencing: A Prospective, 5-Year Study of National Competitions. Clin. J. Sport Med. 2008, 18, 137–142. [Google Scholar] [CrossRef] [PubMed]
  5. Mo, J. Allez Go: Computer Vision and Audio Analysis for AI Fencing Referees. J. Stud. Res. 2022, 11. [Google Scholar] [CrossRef]
  6. Honda, Y.; Kawakami, R.; Naemura, T. RNN-based Motion Prediction in Competitive Fencing Considering Interaction between Players. In Proceedings of the BMVC, Manchester, UK, 7–11 September 2020. [Google Scholar]
  7. Takahashi, M.; Yokozawa, S.; Mitsumine, H.; Itsuki, T.; Naoe, M.; Funaki, S. Real-time visualization of sword trajectories in fencing matches. Multimed. Tools Appl. 2020, 79, 26411–26425. [Google Scholar] [CrossRef]
  8. Emmenegger, S.; Egli, M.; Pouly, M. Mastering Fencing Techniques with Machine Learning: A Video-Based Classification and Correction System. In Proceedings of the 2023 10th IEEE Swiss Conference on Data Science (SDS), Zurich, Switzerland, 22–23 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 120–127. [Google Scholar]
  9. Fei, Z.; Zhao, C. Evaluation Algorithm of Fencing Athletes’ Strength Distribution Characteristics Based on Gait Tracking. Mob. Inf. Syst. 2022, 2022, 3602776. [Google Scholar] [CrossRef]
  10. Murgu, A.I. Fencing. Phys. Med. Rehabil. Clin. N. Am. 2006, 17, 725–736. [Google Scholar] [CrossRef] [PubMed]
  11. Chen, T.L.W.; Wong, D.W.C.; Wang, Y.; Ren, S.; Yan, F.; Zhang, M. Biomechanics of Fencing Sport: A Scoping Review. PLoS ONE 2017, 12, e0171578. [Google Scholar] [CrossRef] [PubMed]
  12. Park, K.J.; Brian Byung, S. Injuries in Elite Korean Fencers: An Epidemiological Study. Br. J. Sport. Med. 2017, 51, 220–225. [Google Scholar] [CrossRef]
  13. Anderson, D.M. Virtual Fencing: Past, Present and Future. Rangel. J. 2007, 29, 65–78. [Google Scholar] [CrossRef]
  14. Wu, Y.; Kirillov, A.; Massa, F.; Lo, W.Y.; Girshick, R. Detectron2. 2019. Available online: https://github.com/facebookresearch/detectron2 (accessed on 1 July 2024).
  15. He, K.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar] [CrossRef]
  16. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar] [CrossRef]
  17. Zhang, W.; Fu, C.; Zhu, M.; Cao, L.; Tie, M.; Sham, C.W. Joint Object Contour Points and Semantics for Instance Segmentation. Expert Syst. 2024, 41, e13504. [Google Scholar] [CrossRef]
  18. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
  19. GB-T 17245-2004; Inertial Parameters of Adult Human Body. Standardization Administration of China: Beijing, China, 2004. Available online: https://openstd.samr.gov.cn/bzgk/gb/newGbInfo?hcno=19796465400C84FA0B3B164D95F71460 (accessed on 1 July 2024).
  20. Aslanov, V.; Kruglov, G.; Yudintsev, V. Newton–Euler equations of multibody systems with changing structures for space applications. Acta Astronaut. 2011, 68, 2080–2087. [Google Scholar] [CrossRef]
Figure 1. (a) A flowchart of the machine learning-based model. (b) A snapshot of the initial status of the players before a game begins. The 4 m distance between the lines is used for calibrating the players’ heights and body parameters. (c) The learning curve of the fine-tuning: the total loss decreases as the fine-tuning progresses. (d) The final loss after fine-tuning, decomposed into its components. (e) The performance in terms of recall and mean average precision at 50% Intersection over Union and at 50–95% Intersection over Union.
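The 4 m calibration in Figure 1b amounts to a pixel-to-meter conversion. A minimal sketch, under the assumption that the two en-garde lines are detected as pixel coordinates (the function names and coordinates are illustrative, not the paper’s implementation):

```python
import math


def scale_from_lines(line1_px, line2_px, real_distance_m=4.0):
    """Meters per pixel, from two reference points a known 4 m apart."""
    pixel_dist = math.dist(line1_px, line2_px)
    return real_distance_m / pixel_dist


def to_meters(point_px, origin_px, scale):
    """Convert an image keypoint to metric coordinates relative to an origin."""
    return ((point_px[0] - origin_px[0]) * scale,
            (point_px[1] - origin_px[1]) * scale)


# Example: the two lines are 800 px apart in the image, so 4 m spans 800 px.
scale = scale_from_lines((100.0, 400.0), (900.0, 400.0))  # 0.005 m per pixel
```

With this scale, every detected joint keypoint can be mapped to metric coordinates before the mechanical model is applied.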
Figure 2. (a) A snapshot of players attacking each other, with critical points identified by the Regional Convolutional Neural Network algorithm. (b) The lower body of each player is simplified as a jointed stick model for mechanical analysis. Letters A, B, C, D, E and F mark the joint points. (c) Associated force diagrams of each stick, with the forces denoted. The virtual forces are drawn as dashed arrows. The force directions shown are schematic, not the actual ones, and are solved dynamically for each snapshot.
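For a single stick in Figure 2c, the Newton–Euler balance (cf. [20]) relates the two joint forces to the measured accelerations. A minimal planar sketch under assumed notation; the symbols and numbers are illustrative and do not reproduce the paper’s exact formulation:

```python
# Planar Newton-Euler balance for one rod: given the force at one joint and
# the accelerations of the center of mass, solve for the force at the other
# joint and the net joint torque.
G = 9.81  # gravitational acceleration, m/s^2


def other_joint_force(f_known, mass, a_com):
    """F_other = m*a_com - F_known + m*g (2D; y is vertical, gravity acts down)."""
    fx = mass * a_com[0] - f_known[0]
    fy = mass * a_com[1] - f_known[1] + mass * G
    return (fx, fy)


def net_torque(inertia, alpha, torque_from_forces):
    """tau_joint = I*alpha - torque already produced by the joint forces."""
    return inertia * alpha - torque_from_forces


# Example: an 11.5 kg rod held static (zero acceleration); the unknown joint
# must carry the full weight minus a known upward force of 50 N.
fx, fy = other_joint_force((0.0, 50.0), 11.5, (0.0, 0.0))  # fy about 62.8 N
```

Solving these two balances for every stick in every frame yields the joint forces and torques reported in Figures 4–7.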
Figure 3. Kinematic quantities of each joint or rod in the mechanical model, including (a) angles, (b) angular velocities ω, (c) angular accelerations α and (d) magnitudes of accelerations (a). The velocities and accelerations are solved with the 5-point central finite difference equation. Although the quality is limited by the FPS, a mild L2 denoising algorithm is applied to prevent numerical divergences.
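The 5-point central difference used for Figure 3 can be sketched as follows (interior points only; h is the frame interval, and the stencil coefficients are the standard ones):

```python
def central_diff_5pt(samples, h):
    """First derivative by the 5-point central stencil:
    f'(t) ~ (-f[t+2h] + 8 f[t+h] - 8 f[t-h] + f[t-2h]) / (12 h).
    Returns derivatives for the interior samples (two lost at each end).
    """
    return [(-samples[i + 2] + 8 * samples[i + 1]
             - 8 * samples[i - 1] + samples[i - 2]) / (12 * h)
            for i in range(2, len(samples) - 2)]


# At 30 FPS the frame interval is h = 1/30 s; for an angle growing linearly
# at 2 rad/s the recovered angular velocity is 2 rad/s at every interior frame.
h = 1.0 / 30.0
angles = [2.0 * h * i for i in range(7)]
omega = central_diff_5pt(angles, h)  # three interior values, each ~2.0
```

The same stencil applied twice (or its second-derivative counterpart) gives the angular accelerations; the L2 denoising step is applied afterwards to tame frame-rate noise.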
Figure 4. (a) Decomposition of a player’s movement into left-foot and right-foot lunges in a game of p6 (shown in the snapshots) vs. p9. The two patterns switch rapidly. The left-foot lunge dominates the movements, where the mechanical approach with the lunging approximation is valid. (b) Solved forces at different joints as time evolves. (c) Solved torques at different joints as time evolves.
Figure 5. Force and torque analysis of USA players represented by 3 top players. (a) Forces and (b) torques of p1. (c) Forces and (d) torques of p2. (e) Forces and (f) torques of p3.
Figure 6. Force and torque analysis of European players represented by 3 top players. (a) Forces and (b) torques of p4. (c) Forces and (d) torques of p5. (e) Forces and (f) torques of p6.
Figure 7. Force and torque analysis of Korean players represented by 3 top players. (a) Forces and (b) torques of p7 (p7’s data between frames 33 and 36 correspond to his front-leg movement). (c) Forces and (d) torques of p8. (e) Forces and (f) torques of p9.
Table 1. Selected players as representatives for regional fencing styles.

Region | Players (videos are listed in Appendix A)
USA    | Eli Dershwitz (p1), Daryl Homer (p2), Colin Heathcock (p3)
Europe | Luca Curatoli (p4), Aron Szilagyi (p5), Vincent Anstett (p6)
Korea  | Oh Sanguk (p7), Kim Junghwan (p8), Gu Bongil (p9)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Guo, R.; Chen, B.; Li, Y. Deep Learning Methods to Analyze the Forces and Torques in Joints Motion. Appl. Sci. 2024, 14, 6846. https://doi.org/10.3390/app14156846
