Review

Accessibility of Motion Capture as a Tool for Sports Performance Enhancement for Beginner and Intermediate Cricket Players

by
Kaveendra Maduwantha 1, Ishan Jayaweerage 2, Chamara Kumarasinghe 1, Nimesh Lakpriya 1, Thilina Madushan 1, Dasun Tharanga 1, Mahela Wijethunga 1, Ashan Induranga 1, Niroshan Gunawardana 1, Pathum Weerakkody 3 and Kaveenga Koswattage 1,*

1 Faculty of Technology, Sabaragamuwa University of Sri Lanka, Belihuloya 70140, Sri Lanka
2 Faculty of Computing, Sabaragamuwa University of Sri Lanka, Belihuloya 70140, Sri Lanka
3 Faculty of Applied Sciences, Sabaragamuwa University of Sri Lanka, Belihuloya 70140, Sri Lanka
* Author to whom correspondence should be addressed.
Sensors 2024, 24(11), 3386; https://doi.org/10.3390/s24113386
Submission received: 9 April 2024 / Revised: 11 May 2024 / Accepted: 16 May 2024 / Published: 24 May 2024
(This article belongs to the Section Biomedical Sensors)

Abstract:
Motion Capture (MoCap) has become an integral tool in fields such as sports, medicine, and the entertainment industry. The cost of deploying high-end equipment and a lack of expertise and knowledge prevent MoCap from being used to its full potential, especially at the beginner and intermediate levels of sports coaching. This review discusses the challenges faced while developing affordable MoCap systems for such levels, in order to initiate an easily accessible system with minimal resources.

1. Introduction

Motion capture, often abbreviated as “MoCap”, is a technology that uses sensors or markers placed on the body to track and digitally replicate human movements. Incorporating motion capture has transformed how movements are studied, analyzed, and reproduced across different fields. MoCap has been extensively used in various fields of research and industrial application, such as animation, filmmaking, video games, sports, and medical research [1,2]. In MoCap, the movement of humans and objects is recorded. High-end instrumentation and sophisticated computer algorithms, particularly those utilizing deep learning techniques, have significantly propelled the advancement of MoCap technology [3,4]. Motion capture is usually performed using one of two approaches, non-optical or optical (as illustrated in Figure 1). Optical systems, despite their higher processing cost, provide excellent precision and total freedom of movement, and additionally offer the potential for interaction between diverse subjects [5]. Camera-based optical systems fall into two subcategories: marker-based systems and marker-less systems. A comprehensive review of these aspects and of the uses of MoCap in industrial applications has been given elsewhere [6]. Moreover, the use of MoCap in sports is steadily increasing with time, opening up many research opportunities at all levels of study. To drive this forward, new and efficient algorithms and libraries are required.
High-speed cameras and advanced position-sensing sensors are important for motion capture systems; making these systems more affordable and accurate could widen their use across many different areas. Our group has recently initiated the development of such a system specifically focused on cricket. A comprehensive review of the existing technology landscape is essential to properly understand the accessibility and availability of current systems. This review serves as a crucial step in exploring how MoCap technology can revolutionize the analysis of cricket batters and bowlers, offering an in-depth exploration of its potential impact.

2. MoCap in Medicine

Gait analysis in medical treatments has made substantial use of motion capture technology, and it is common practice to utilize inertial sensors in this area. Cloete compared the “Moven” inertial motion capture system against the “Vicon” motion capture system (Vicon Motion Systems Ltd., Oxford, UK) in gait analysis, revealing that both systems provided accurate and consistent results [7]. Pfister et al. studied the comparative uses of “Kinect” (Microsoft Corporation, Redmond, WA, USA) and Vicon 3D in gait analysis. Three-dimensional systems like Vicon are considered much more accurate and reliable in clinical settings. However, low-cost alternatives for measuring gait timing and alignment are considered vital; these usually include two-dimensional (2D) video cameras with analytic software, electrogoniometers, pressure-sensitive mats, or accelerometers. The Kinect sensor released by Microsoft used an infrared (IR) light projector, an IR camera, and an RGB video camera. The authors concluded that the Kinect has basic MoCap capabilities, and that using it in clinical settings may require thorough and careful modifications in both software and hardware [8].
Recent studies show that sound feedback, called “motion sonification”, can help people in stroke rehabilitation. This method uses sounds to tell patients how they are moving, for example when they sip water during their rehab exercises [9,10]. Bruckner et al. implemented an inertial sensor system consisting of a wireless sensor network comprising up to 10 inertial sensors. These sensors are securely attached to the patient’s body to accurately capture complex upper body movements, alongside force finger sensors that can recognize gripping motion. Orientation estimation allows the use of forward kinematics and a model of rigid links connected by joints to compute the positions, angles, and speeds of the upper limbs, based on information from inertial measurement units (IMUs) installed on the upper arm and forearm. The wireless IMU stands out for its low-power RF module and 8-bit Main Control Unit (MCU), allowing full access through platform-independent interfaces. Additionally, it offers real-time onboard orientation estimation, making it a valuable tool [11].
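The forward-kinematics step described above can be sketched for a simplified planar two-link arm, where each segment's absolute orientation stands in for the kind of angle an IMU-based orientation estimate supplies. The function name, segment lengths, and the planar simplification are illustrative assumptions, not the cited system's implementation:

```python
import math

def forearm_positions(shoulder, upper_arm_len, forearm_len,
                      upper_arm_angle, forearm_angle):
    """Planar forward kinematics for a two-link arm.

    Angles are absolute segment orientations in radians, as an IMU-based
    orientation estimate might provide: one for the upper arm, one for
    the forearm. Returns the (x, y) positions of the elbow and wrist.
    """
    sx, sy = shoulder
    ex = sx + upper_arm_len * math.cos(upper_arm_angle)
    ey = sy + upper_arm_len * math.sin(upper_arm_angle)
    wx = ex + forearm_len * math.cos(forearm_angle)
    wy = ey + forearm_len * math.sin(forearm_angle)
    return (ex, ey), (wx, wy)

# Upper arm hanging straight down (-90 deg), forearm raised to horizontal:
elbow, wrist = forearm_positions((0.0, 0.0), 0.30, 0.25,
                                 -math.pi / 2, 0.0)
# elbow is approx (0.0, -0.30); wrist is approx (0.25, -0.30)
```

Differentiating these positions over successive frames then yields the limb speeds mentioned above.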
Miniaturization of surgical tools and the creation of better equipment for manipulating specialized tissues, in particular, are the main driving forces behind the reduction in surgical trauma and the development of minimally invasive surgery (MIS) [12]. Through the use of a trocar port (a cylindrical cannula that houses a trocar, a medical device with a pointed tip, used for introducing ports into the abdominal cavity), optical fibers or light-emitting diodes, and either a body-mounted complementary metal oxide semiconductor sensor or a digital video feed, MIS allows the surgeon to reach the operative site and illuminate it. A laparoscope with lenses and an external camera is attached [13]. On a similar topic, Laribi and coworkers proposed a slave robotic architecture for compact MIS, highlighting the emulation of the complex movements required while creating a surgical connection between two structures [14].
Robot-assisted MIS can reduce hand tremors by using scaling factors and extra joints in the robotic tools that match the surgeon’s hand movements. This can improve surgical skills and patient outcomes and enable more complicated surgeries. However, most existing robotic surgical systems are too big, complex, and expensive for wide use, and they take significant time to set up, maintain, and sterilize. A simple, small, and portable robotic surgery system with similar performance, easy usage, high adaptability, and quick setup time could increase the use of mechanical assistance in regular and other types of surgeries. The da Vinci Surgical System [15] and the ZEUS System [16] are some of the current well-known automated systems that assist surgeons in MIS operations.
In physiotherapy and diagnostics, there is much interest in using motion capture devices that can sense human movement and gestures in real time without touching the body. In a study conducted by Yuminaka et al., the feasibility of MoCap applications in medicine and healthcare was assessed using a Kinect v2. The real-time motion capture and depth-measuring capabilities of the Kinect sensor were assessed for non-contact visual data observation while calculating distances between objects with spatial resolution on the order of millimeters. They suggested this may help therapists and patients with diagnosis and rehabilitation [17].
Apart from rehabilitation and biomechanical analysis, MoCap is currently used in surgical planning [18], virtual surgical training [19], patient education [20], and telemedicine [21].

3. MoCap in Sports

Motion capture analysis for sports uses specialized technology to capture and analyze athletes’ movements. MoCap analysis is a powerful tool that can be used to improve the performance of athletes and reduce the risk of injuries [22]. For example, coaches can use motion capture data to see how an athlete’s running gait compares to that of elite athletes [23]. In kinematic analysis, MoCap data can be used to measure the athlete’s joint angles, velocities, and accelerations, identifying areas where the athlete’s technique can be improved [24]. Kinetic analysis can be used to measure the forces and torques that the athlete is producing, which helps assess the athlete’s strength and power, identify areas where output can be improved, and measure the athlete’s energy expenditure [25]. Together, these measures help assess the athlete’s fitness level and identify areas where efficiency and performance can be improved.
Regarding sports injury prevention, the MoCap analysis can be used to identify athletes at risk of injury by identifying movement patterns associated with an increased risk of injury. If athletes have been identified as being at risk of injury, motion capture analysis can be used to develop personalized training and injury prevention programs. This technology can also track the athlete’s progress during injury prevention programs [26]. In addition to the above, MoCap can be used to (i) develop new and innovative training methods that are designed to reduce the risk of injury, (ii) improve the design of sports equipment to make it safer for athletes, and (iii) develop more effective rehabilitation programs for athletes who have been injured [27].
Motion capture technology has been widely used in baseball for various purposes. MoCap helps understand pitching mechanics, including the release point, arm angle, and body posture. This information is valuable for pitchers and coaches, allowing them to adjust for better performance and reduced stress on the player’s arm [28]. Improving hitting abilities requires careful analysis of baseball players’ swing mechanics. MoCap offers information on a swing’s body motions, bat speed, and angle. This information assists in honing methods and streamlining the striking procedure [29]. It may also be used to spot behaviors or trends that could result in harm. Players may concentrate on improving their technique by identifying dangerous actions, and coaches can create specialized conditioning regimens to stop injuries.
In swimming, there are two main types of MoCap systems: optical and inertial. Optical systems use cameras to track reflective markers placed on the swimmer’s body, while inertial systems use sensors attached to the swimmer’s body to measure movement. Wearable sensors are used more widely in swimming than other MoCap technologies because they are small, inexpensive, and free from site restrictions. Researchers have used MoCap systems to analyze the biomechanics of swimming strokes (freestyle, breaststroke, backstroke, and butterfly), identify areas where swimmers can improve their technique, develop new training methods that are more effective and less likely to cause injury, and create realistic swimming animations for video games and movies [30].
Human movement and action analysis are significant in karate. Athletes can improve karate techniques, such as punches, kicks, and blocks, using MoCap analysis systems [31]. Karate requires physical, technical, and tactical skills for success, and these techniques depend on how the foot, knee, or elbow is moved. Currently, athletes and coaches are working to improve the sport of karate using MoCap systems [32].

4. Other Fields

Various fields beyond the ones mentioned utilize motion capture technology, including the construction industry [6], entertainment industry [33], virtual reality and augmented reality [34], automotive industry [35], aerospace and defense [36], forensics and law enforcement [37], education [38], etc.
Han et al. carried out an empirical assessment of an RGB-D sensor motion capture and action recognition for monitoring construction workers [39]. Using stereo cameras, Richmond and his team captured the motion of a construction worker, with kinematic modeling to evaluate productivity and safety [40]. Seo et al. studied motion capture approaches for body kinematics measurement in construction to reduce significant unsafe environmental and health risks [41].
In 1915, Max Fleischer developed a technique to give his animated characters realistically fluid motion, which can be argued to be the first attempt at motion capture [42]. Zeeshan et al. carried out a study on real-time motion capture for the entertainment industry, developing the “eMoveChat” application, which can imitate video, voice, and trigger events using the “Animazoo” motion capture system (Animazoo UK Ltd., East Sussex, UK) [43]. Robin et al. introduced collaborative Virtual Reality (VR) with MoCap to support the entertainment industry under poor acting conditions; this combination made it easier for film and video game directors to digitize characters [44]. Knyaz studied a photogrammetric motion capture system, “MOSCA”, for collecting accurate 3D motion object information, which provided highly accurate and reliable results [45]. Chan and his team used MoCap with virtual reality to implement a system for practicing the Chinese martial art Tai Chi [46].
Kim et al. used MoCap to study the autonomous formation flight of multiple flapping-wing flying vehicles for the first time, a turning point in aviation [47]. Wu et al. provided a technique for evaluating the maintainability of a civil airplane using an optical motion capture system; a test case, primarily addressing the aircraft’s accessibility, was then created to show that the approach was workable [48].
Phunsa et al. performed a study introducing a Thai boxing defense system, using nine optical motion capture cameras with 42 markers on two actors [49]. Parks et al. developed a portable mobile motion capture system, “MO2CA”, for military use in rural environments and clinical situations [50].
Joint angles of the human body are important identification features in forensic science; accordingly, Yang et al. investigated the use of marker-less motion capture systems for person tracking in forensic biomechanics applications [37]. To investigate dubious forensic cases, Aquila et al. explored the reconstruction of the dynamics of a murder using 3D motion capture and 3D-modeled buildings [51].
In the education field, the collaboration of virtual reality and motion capture can fill the gap between virtual classrooms and real classrooms, according to research by Alonso et al. considering secondary school teacher training [52]. Yokokohji et al. studied a method to teach a robot using motion capture from the demonstrator’s viewpoint [53].

5. Cricket-Related Literature

As our focus is on cricket, it is important to look at the efforts that have been made by others to enhance cricket skills using MoCap. We discuss these efforts in two subsections: bowling and batting.

5.1. Cricket Bowling

MoCap has gained popularity in cricket bowling analysis over the last several years. Real-time tracking of the ball and of a bowler’s joint movements is possible with MoCap systems. These data can then be used to analyze the bowler’s technique, identify areas for improvement, and reduce the risk of injury. Cricket bowlers are limited to 15 degrees of elbow extension during the bowling action [54]. This complex movement requires 3D motion analysis to be assessed accurately. Traditionally, this has been achieved using marker-based motion capture systems in a laboratory setting. However, this raises concerns about ecological validity, as cricket bowlers typically train and compete outdoors. Researchers are now developing new methods for 3D motion analysis that can be used in outdoor settings, allowing more accurate and realistic assessments of cricket bowlers’ technique, which could lead to improved performance and reduced risk of injury [55]. Researchers have also developed a wearable arm sensor that can assess a bowler’s action in real time. This technology could help umpires make more accurate decisions about whether or not a bowler is using an illegal throw-like bowling action. The sensor uses inertial sensors on the upper and lower arms to track the bowler’s movements, and suspicious deliveries reveal distinctive inertial signatures. The technology is still under development, but it has the potential to revolutionize the way bowling actions are assessed in cricket, and it could also be used as a training tool for developing bowlers [56].
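As a rough illustration of the kind of quantity such 3D analysis produces, the angle at the elbow can be computed from the 3D positions of the shoulder, elbow, and wrist joint centres. This is a generic vector-geometry sketch, not the protocol of any cited study; in practice the 15-degree rule concerns the change in elbow extension between the arm reaching horizontal and ball release, so a function like this would be evaluated at both instants and the results differenced:

```python
import math

def elbow_angle_deg(shoulder, elbow, wrist):
    """Angle at the elbow (degrees) from 3D joint-centre positions.
    180 corresponds to a fully straightened arm.
    """
    u = [s - e for s, e in zip(shoulder, elbow)]   # elbow -> shoulder
    v = [w - e for w, e in zip(wrist, elbow)]      # elbow -> wrist
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u))
            * math.sqrt(sum(b * b for b in v)))
    cos_t = max(-1.0, min(1.0, dot / norm))        # clamp rounding error
    return math.degrees(math.acos(cos_t))

# A perfectly straight arm laid along the x-axis:
print(elbow_angle_deg((0, 0, 0), (1, 0, 0), (2, 0, 0)))  # 180.0
```

The same dot-product construction applies to any three-marker joint angle, which is why it recurs throughout marker-based biomechanics pipelines.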
Kumar and his group created a cricket ball with magnetometers that calculated the spin rates at high speeds and identified the spin types (off-spin and leg-spin). The spin type is validated by IMUs on the bowler’s wrist and elbow. The researchers proposed that the magnetometer and the IMU sensors can help customize training and track performance. This technology has the potential to revolutionize the way that spin bowling is analyzed and trained. It could help bowlers improve their technique and consistency, and it could also help coaches identify and address deficiencies in bowlers’ techniques [57]. Coaches can use the findings of the study to identify key performance indicators (KPIs) for spin bowlers. By tracking these KPIs, coaches can assess the progress of their bowlers and develop training programs that help them to improve their performance. Players can also use the findings of the study to identify areas where they can improve their bowling techniques using MoCap systems [58].
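A minimal sketch of how a spin rate could be recovered from one magnetometer axis of a spinning ball: as the ball rotates in the roughly constant Earth field, the axis reading oscillates once per revolution, so counting zero crossings gives the spin frequency. This is an illustrative signal-processing toy under those assumptions, not the cited instrumented ball's actual algorithm:

```python
import math

def spin_rate_rps(samples, sample_rate_hz):
    """Estimate spin rate (revolutions per second) from one magnetometer
    axis of a spinning ball: the reading oscillates once per revolution,
    giving two zero crossings per revolution.
    """
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    duration = (len(samples) - 1) / sample_rate_hz
    return crossings / 2.0 / duration

# Synthetic 30 rev/s spin sampled at 1 kHz for one second
# (small phase offset keeps zeros off the sample instants):
sig = [math.sin(2 * math.pi * 30 * t / 1000.0 + 0.1) for t in range(1001)]
print(round(spin_rate_rps(sig, 1000.0), 1))  # 30.0
```

A real implementation would first band-pass filter the signal and remove its mean, since sensor bias shifts the zero level.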
Wells et al. also developed a predictive model to predict ball deviation based on the four most important kinematic variables: average velocity, elbow joint angle, angle of release, and ankle joint angle. The regression equation was reliable and explained 97.4 percent of the total variability in ball deviation [55].
Harnett et al. tested an array-based IMU to measure cricket fast bowling movements as a first step towards use in tele-sport and exercise medicine. They found that the IMU-based system can accurately measure specific fast bowling movements, such as the angle between the shoulder girdle and the pelvis, the bending of the trunk, and the bending of the knee. Thus, they speculated that the IMU-based system can help identify injury risk in tele-sport and exercise medicine [59].
In another study carried out by Ferdinands et al., they examined how rear leg movements and wrist speed influenced fast bowling. It revealed that bowling wrist speed was associated with several factors, such as the average speed of thigh extension, the speed of thigh adduction at the moment of back foot contact, and the maximum change in speed of knee extension. The study also showed that rear leg drive was not a deliberate action, but rather a passive outcome of the hip and knee movements in different planes, which had minimal and regulated effects on torque motion. The study used a Cortex 2.0 motion analysis system (200 Hz) to measure these variables [60].
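Speeds such as the wrist speed measured above are typically recovered from marker trajectories by numerical differentiation of position samples. A minimal sketch using central differences at a fixed sampling rate (200 Hz matches the Cortex system mentioned above; the function and data are otherwise illustrative):

```python
def speeds(positions, rate_hz):
    """Marker speed (m/s) from sampled 3D positions via central
    differences, as one would post-process optical marker trajectories
    recorded at a fixed rate such as 200 Hz.
    """
    dt = 1.0 / rate_hz
    out = []
    for i in range(1, len(positions) - 1):
        vel = [(b - a) / (2 * dt)
               for a, b in zip(positions[i - 1], positions[i + 1])]
        out.append(sum(c * c for c in vel) ** 0.5)  # Euclidean norm
    return out

# A wrist marker moving 2 m/s along x, sampled at 200 Hz:
traj = [(0.01 * i, 0.0, 1.2) for i in range(5)]
print(speeds(traj, 200.0))  # each entry is approx 2.0
```

Real marker data are noisy, so a low-pass filter (e.g. a Butterworth filter) is normally applied before differentiation; that step is omitted here.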

5.2. Cricket Batting

Using motion capture and computer vision techniques, it is possible to extract precise information regarding player movements and the paths of the ball. This information has the potential to contribute to the advancement of knowledge in the field of cricket performance. For instance, computer vision techniques can be employed to identify the crucial technical factors that contribute to a batsman’s success [61].
Moodley et al. tested the potential of deep learning architectures (AlexNet, Inception V3, Inception ResNet V2, and Xception) to identify cricket batting backlift techniques in video footage. The study showed that deep learning architectures could reliably detect cricket batting backlift techniques in video footage, with the “Xception” architecture achieving the highest accuracy of 98.25% among the four architectures evaluated [62].
In 2016, Peploe’s study explored how technique and bat speed affected post-impact ball speed and carry distance in a cricket range hitting task. The study used an 18-camera Vicon motion analysis system and three high-speed video cameras to measure these variables. The study discovered that a large carry distance required a launch angle close to 42 degrees and a high launch speed. It also revealed that a high ball launch speed depended on an impact location near the bat’s sweet spot in both the horizontal and vertical directions, as well as a high bat speed [63]. In a previous study conducted in 2012, researchers found that bat speed plays a crucial role in achieving the necessary distance for hitting a six in cricket. They also discovered that hitting the sweet spot of the bat maximizes the power transferred to the ball [64].
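The relationship between launch speed, launch angle, and carry distance can be illustrated with a deliberately simplified drag-free projectile model. In a vacuum the optimum launch angle is exactly 45 degrees; aerodynamic drag and lift on a real cricket ball shift the optimum a few degrees lower, which is consistent with the roughly 42 degrees reported above. All values here are illustrative:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def carry_m(speed_mps, launch_deg):
    """Drag-free carry distance (m) of a ball launched from ground
    level: R = v^2 * sin(2*theta) / g. A deliberately simplified model;
    real ball flight with drag and lift carries less and peaks below
    45 degrees.
    """
    theta = math.radians(launch_deg)
    return speed_mps ** 2 * math.sin(2 * theta) / G

# Sweep integer launch angles for a 35 m/s launch speed:
best = max(range(91), key=lambda a: carry_m(35.0, a))
print(best, round(carry_m(35.0, best), 1))  # 45 124.9
```

The quadratic dependence on launch speed in this formula mirrors the study's finding that bat speed, which drives launch speed, dominates carry distance.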
In a 2019 study, researchers utilized motion capture data and machine learning algorithms, which demonstrated a remarkable capability for accurately predicting the trajectory of the ball. The neural network algorithm was the most accurate, with a mean absolute error of 1.5 degrees. This suggests that machine learning algorithms could be used to develop new tools and techniques that might help cricket batsmen and bowlers improve their performance [65]. In another study using the Vicon motion analysis system, researchers assessed the influence of both anthropometric characteristics (height, weight, and body composition) and technical variables (bat speed, launch angle, and impact location) on the batting performance of elite cricketers. Bat speed was the most important variable in predicting batting performance, followed by launch angle and impact location; anthropometric characteristics were not found to be significant predictors.
Efforts made by Callaghan et al. with motion capture analysis have revealed that the rolling start is the most effective way to accelerate during a quick single, and that cricketers should be aware of the kinematic alterations that occur when carrying the bat during a quick single [66].
Another study revealed that a higher back-lift was detrimental to bat alignment and that the batter with the highest back-lift showed the least bat control. It also found that on-side defensive strokes could be discriminated from off-side defensive strokes by a significantly lower magnitude of acceleration at ball contact [67].
In 2023, Siddiqui et al. used a novel approach, employing “MediaPipe”-based motion capture to extract features from cricket stroke videos and machine learning models to classify the strokes, achieving an accuracy of 99.77 percent with a random forest model [68]. Stuelcken et al. employed an innovative method using two synchronized high-speed video camera motion capture systems to show that batsmen used a unitary upper limb movement, that is, their arms and shoulders moved together as a single unit, which helped them generate more power and control. The batter stepped forward with the front foot very late, just before hitting the ball, with the front ankle pointed towards the inside of the field, which helped them hit the ball with more power [69].
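To make the feature-classification idea concrete, here is a deliberately minimal nearest-centroid stand-in for the random-forest classifier the study used; the stroke labels and two-dimensional feature vectors (e.g. a pair of joint angles extracted per stroke) are invented for illustration:

```python
def nearest_centroid(train, sample):
    """Classify a stroke feature vector by the nearest class centroid.
    `train` maps stroke label -> list of feature vectors. A toy
    stand-in for a trained classifier such as a random forest.
    """
    best, best_d = None, float("inf")
    for label, vecs in train.items():
        n = len(vecs)
        centroid = [sum(v[i] for v in vecs) / n
                    for i in range(len(vecs[0]))]
        d = sum((a - b) ** 2 for a, b in zip(centroid, sample))
        if d < best_d:
            best, best_d = label, d
    return best

# Hypothetical per-stroke features, e.g. (elbow angle, bat angle) in degrees:
train = {
    "cover_drive": [[120.0, 35.0], [118.0, 33.0]],
    "pull_shot":   [[95.0, 70.0], [97.0, 68.0]],
}
print(nearest_centroid(train, [119.0, 34.0]))  # cover_drive
```

A random forest additionally captures non-linear feature interactions, which is why it outperforms such a linear-boundary baseline on real stroke data.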
In 2016, Dhawan et al. conducted a pioneering study, capturing the movements of elite cricket bowlers using advanced motion capture technology. These data were then utilized to create virtual animations, meticulously replicating the trajectories of bowled balls. The researchers further developed an interactive virtual environment using Unity software, enabling users to engage with the simulation via a head-mounted display and a motion-tracked cricket bat [70]. Curtis et al. used a system that captured the motion of cricket batters while playing strokes and compared it to reference values set by coaches. Fuzzy sets were used to classify the cover drive and forward defensive strokes based on ranges of motion for the head, feet, and bat. The system provided feedback on how well the strokes were executed compared to expert batters [71].
Kelly et al. developed a motion capture system to analyze cricket batting strokes. The system records the movements of players performing strokes and compares them to benchmarks set by coaches. Using fuzzy logic, the system classifies strokes like cover drives and forward defenses based on movements of the head, feet, and bat. The system then provides feedback by evaluating the player’s technique against the standards of expert batters [72]. In their study, they explored both vision-based and depth-based motion capture techniques intending to integrate these technologies into a comprehensive system. They aimed to develop a framework for motion and gesture recognition that works well under different conditions. Minimizing latency and improving accuracy were also important design objectives. The proposed system’s innovative aspect was its ability to generate a real-time motion-captured avatar, which could potentially enhance user immersion and sense of presence.
Callaghan et al. conducted a detailed study that utilized the Vicon motion capture system to compare the effects of two different starting techniques, namely rolling start and static start, on cricket sprint performance metrics like sprint velocity and acceleration kinematics during a quick single maneuver by experienced cricket players. The research provides valuable insights to optimize cricket sprint strategies [73].

6. Cost-Effective MoCap Systems

There are plenty of researchers trying to introduce low-cost MoCap methods for motion analysis, owing to the significance discussed above. Thewlis et al. used the “OptiTrack” system (NaturalPoint, Oregon, USA) as the low-cost system and the Vicon MX-f20 system (Vicon Motion Systems Ltd., Oxford, UK) as a high-cost system and compared the lower limb gait kinematics obtained from each [74]. Regazzoni et al. measured the performance and limitations of Sony PS Eye cameras against Microsoft Xbox Kinect RGB-D sensors to find a cost-effective method for MoCap [75]. Chatzitofis et al. introduced “DeMoCap”, the first data-driven method for marker-based MoCap using consumer-grade infrared-depth cameras; they compared their low-cost system, built on four Intel RealSense D415 depth sensors, against a reference setup of 24 Vicon MXT40S cameras [76]. Following the low-cost MoCap approach, Sgro et al. carried out a study to assess vertical jump development levels in childhood using a Microsoft Kinect [77]. Robert et al. carried out three comparisons to validate a low-cost Inertial Motion Capture (IMC) system for whole-body motion analysis: the first compared Optical Motion Capture (OMC) and IMC, both with anatomical models, to depict technological errors; the second compared the anatomical model against the Neuron model, both using IMC, to render the kinematic model differences; and the third compared OMC using the anatomical model against IMC using the Neuron model to illustrate the total difference [78]. Usually, 3D motion capture needs multiple cameras to detect 3D behavior, which is a high-cost approach; however, Lee et al. used a single camera and passive optical markers to capture 3D motion at low cost, performing 3D localization of the markers using monocular vision [79]. Choe et al. presented a MoCap mechanism using Magneto-Inertial Measurement Unit (MIMU) sensors, using the MIMU’s 3-axis accelerometer, 3-axis magnetometer, and a compass to produce a novel calibration model [80]. Patrizi et al. carried out a comparison of kinematic measures computed by the Microsoft Kinect, as a marker-less system, and the BTS SMART optical marker system [81]. To introduce a low-power and low-cost hand motion capture device, Sama et al. built a glove as a Tri-dimensional Interaction Device (3dID), which served as the gateway to identifying gestures [82]. Raghavendra et al. created a human character in Blender and controlled it at low cost using wireless sensor nodes located on each joint of the human body; each sensor node included a Wi-Fi SoC (ESP8266-12E), which communicates with an accelerometer (ADXL345) and magnetometer (HMC5883L) using the I2C protocol, and a gyroscope (L3G4200D) [83]. Despite the many attempts to create low-cost MoCap systems, with the current study we want to highlight that the term “low-cost” has been used comparatively: several thousand dollars is sometimes considered low-cost against several hundred thousand dollars. We would like to emphasize the need to define what “low-cost” means and what minimum features can be made accessible to bottom-layer users such as college students.
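As an illustration of the lightweight sensor fusion such low-cost IMU nodes typically run, a complementary filter blends the drift-free but noisy accelerometer tilt estimate with the smooth but drifting integrated gyroscope rate. This is a generic textbook filter under illustrative parameters, not the calibration model of any cited study:

```python
def complementary_tilt(accel_tilt_deg, gyro_rate_dps, dt, alpha=0.98):
    """Fuse an accelerometer-derived tilt stream (deg) with a gyroscope
    rate stream (deg/s) using a complementary filter: trust the
    integrated gyro over short timescales, the accelerometer over long
    ones. Returns the filtered tilt trace.
    """
    angle = accel_tilt_deg[0]
    out = [angle]
    for acc_deg, rate in zip(accel_tilt_deg[1:], gyro_rate_dps[1:]):
        gyro_pred = angle + rate * dt                      # integrate gyro
        angle = alpha * gyro_pred + (1 - alpha) * acc_deg  # slow accel pull
        out.append(angle)
    return out

# Constant true tilt of 10 deg: noisy accelerometer, near-zero gyro rate.
acc = [10.0, 12.0, 8.0, 11.0, 9.0]
gyr = [0.0] * 5
trace = complementary_tilt(acc, gyr, dt=0.01)  # stays close to 10 deg
```

The blend weight alpha trades gyro drift suppression against accelerometer noise rejection; values near 0.98 at a 100 Hz update rate are a common starting point.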

7. Summary of the Literature

It can be seen that marker-based optical MoCap is the most common method used in many instances. This is largely due to the accuracy that can be achieved. However, a hefty cost is unavoidable: (i) accuracy depends on the number of cameras used, (ii) the system requires a complex setup, and (iii) placing markers requires careful attention. Inertial sensor-based motion capture is the other popular option. These systems usually consist of accelerometers and gyroscopes. The main advantage of IMUs is that they do not require expensive camera systems. However, using them in harsh environments, such as sports, may pose some difficulties; if these can be eliminated, IMUs can be a good alternative to marker-based optical systems. From another point of view, marker-less systems can be a good alternative to expensive commercial systems. Currently, Microsoft Kinect, Apple ARKit, and Google ARCore are some upcoming technologies for marker-less systems. These are ideal where the measuring times are long and the outcome requires continuous acquisition of many players without having to worry about placing foreign objects such as markers and wired sensors on the player. Industrial giants like Vicon, Qualisys, and OptiTrack also have marker-less solutions. The quality of marker-less systems depends on the algorithms used. There are also libraries like OpenPose and MediaPipe that are popular for custom systems. OpenPose gives the user (i) real-time performance, (ii) multi-person capability, and (iii) versatile keypoint detection. Similarly, MediaPipe includes a variety of pre-trained machine learning models for tasks like face detection, hand tracking, and pose estimation, and can also perform efficiently in real time (for a quick overview, refer to Table 1). Other than those listed, PoseNet [84], DeepMotion [85], AlphaPose [86], ZED SDK by Stereolabs [87], and SimplePose [88] have also been used in MoCap quite effectively.
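Pose-estimation libraries such as OpenPose and MediaPipe emit per-frame keypoint coordinates, from which joint angles can be derived directly. Below is a sketch of that post-processing step, assuming 2D keypoints in normalized image coordinates; the dictionary layout and joint names are illustrative, not either library's actual API:

```python
import math

def knee_flexion_deg(kp):
    """Knee angle (degrees) from 2D pose keypoints given in normalized
    image coordinates, the kind of per-frame output pose-estimation
    libraries provide. 180 corresponds to a fully straight leg.
    """
    hx, hy = kp["hip"]
    kx, ky = kp["knee"]
    ax, ay = kp["ankle"]
    u = (hx - kx, hy - ky)            # knee -> hip
    v = (ax - kx, ay - ky)            # knee -> ankle
    dot = u[0] * v[0] + u[1] * v[1]
    norm = math.hypot(*u) * math.hypot(*v)
    cos_t = max(-1.0, min(1.0, dot / norm))  # clamp rounding error
    return math.degrees(math.acos(cos_t))

# One hypothetical frame during a running stride:
frame = {"hip": (0.50, 0.40), "knee": (0.52, 0.60), "ankle": (0.52, 0.80)}
angle = knee_flexion_deg(frame)  # slightly under 180: nearly straight leg
```

Note that 2D angles are only projections of the true 3D joint angles; multi-view or depth-assisted setups are needed when out-of-plane motion matters.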
In sports like tennis, baseball, and golf, marker-based systems are widely used in controlled environments such as training centers for precise biomechanical analysis, while athletes in fast-moving sports like basketball and football benefit from marker-less systems. Outdoor and extreme sports such as skiing and cycling use inertial and magnetic sensors to measure movement dynamics without cameras. Mechanical methods, because they restrict natural movement, are less common, but they are used to measure force and mechanical efficiency in cycling. In cricket, marker-based systems are used in training scenarios to analyze batting and bowling techniques, where high precision is required for detailed biomechanical feedback. Marker-less systems can be useful during live games or practice sessions to track player movements without intrusive markers, particularly for fielding analysis and running between the wickets. Inertial sensors are employed mainly in training to gather data on player movements and biomechanics without a full optical setup, and mechanical methods could be used experimentally to measure the force of impact in batting and the mechanics of bowling (refer to Table 2).

8. Conclusions

Based on our thorough review of the existing literature, we aim to draw insights that will shape the upcoming work outlined in the grant mentioned in the Funding section, in which we expect to analyze and enhance the batting skills of beginner and intermediate-level players. System cost is therefore a central consideration. Computer vision and machine learning techniques are highly useful for creating affordable motion-capture systems: software-based processing can significantly decrease the expense of costly hardware, but it inherits challenging tasks such as calibration of camera networks and validation of the resulting data against commercial setups [96,97]. Ensuring that the footage remains clear and stable may also prove challenging, and the chosen algorithms must handle both indoor and outdoor settings efficiently, which tips the balance towards high frame-rate cameras as input devices. Another decision in the following steps of the project is to define the scope of interest, such as the kinematics of a specific joint angle or the study of angular velocities; in the literature, most works constrain the domain of analysis to a few strokes or a few poses. We will compare the angle dynamics of professional players against those of novice/intermediate players, as captured by a homemade system and two commercial setups, to elucidate the potential of a low-cost system to assist coaches with broader analysis.
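Once per-frame joint angles are available, angular velocities follow by numerical differentiation at the camera frame rate. The sketch below uses central differences with one-sided differences at the endpoints; the angle samples and the 120 fps rate are hypothetical values chosen only to illustrate the arithmetic:

```python
def angular_velocity(angles_deg, fps):
    """Angular velocity (deg/s) from per-frame joint angles via central differences."""
    n = len(angles_deg)
    vel = [0.0] * n
    # Interior frames: central difference over two frame intervals
    for i in range(1, n - 1):
        vel[i] = (angles_deg[i + 1] - angles_deg[i - 1]) * fps / 2.0
    # Endpoints: one-sided (forward/backward) differences
    if n >= 2:
        vel[0] = (angles_deg[1] - angles_deg[0]) * fps
        vel[-1] = (angles_deg[-1] - angles_deg[-2]) * fps
    return vel

# Hypothetical elbow angles over five consecutive frames of a bat swing
angles = [150.0, 148.0, 143.0, 135.0, 124.0]
print(angular_velocity(angles, fps=120))
```

This also makes the case for high frame-rate input concrete: halving the frame interval halves the motion between samples, so fast events such as bat-ball impact are resolved with proportionally less differentiation error.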

Author Contributions

Conceptualization, K.M., A.I. and K.K.; methodology, K.M., N.G. and P.W.; software, K.M. and I.J.; validation, K.M. and I.J.; formal analysis, K.M. and I.J.; investigation, K.M., I.J., C.K., N.L., T.M., D.T., M.W. and A.I.; writing—original draft preparation, I.J., C.K., N.L., T.M., D.T. and M.W.; writing—review and editing, K.M., I.J., N.G., P.W. and A.I.; visualization, K.M. and I.J.; supervision, K.M., P.W. and K.K.; project administration, K.M.; funding acquisition, K.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Science and Technology Human Resource Development Project, Ministry of Education, Sri Lanka, funded by the Asian Development Bank (Grant No. CRG-R3-SB-5).

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study.

References

  1. Noiumkar, S.; Tirakoat, S. Use of optical motion capture in sports science: A case study of golf swing. In Proceedings of the 2013 International Conference on Informatics and Creative Multimedia, Kuala Lumpur, Malaysia, 4–6 September 2013; pp. 310–313. [Google Scholar]
  2. Daria, B.; Martina, C.; Alessandro, P.; Fabio, S.; Valentina, V.; Zennaro, I. Integrating mocap system and immersive reality for efficient human-centred workstation design. IFAC-PapersOnLine 2018, 51, 188–193. [Google Scholar] [CrossRef]
  3. Moeslund, T.B.; Hilton, A.; Krüger, V. A survey of advances in vision-based human motion capture and analysis. Comput. Vis. Image Underst. 2006, 104, 90–126. [Google Scholar] [CrossRef]
  4. Mathis, A.; Schneider, S.; Lauer, J.; Mathis, M.W. A primer on motion capture with deep learning: Principles, pitfalls, and perspectives. Neuron 2020, 108, 44–65. [Google Scholar] [CrossRef] [PubMed]
  5. Guerra-Filho, G. Optical Motion Capture: Theory and Implementation. Res. Ital. Netw. Approx. 2005, 12, 61–90. [Google Scholar]
  6. Menolotto, M.; Komaris, D.S.; Tedesco, S.; O’Flynn, B.; Walsh, M. Motion capture technology in industrial applications: A systematic review. Sensors 2020, 20, 5687. [Google Scholar] [CrossRef] [PubMed]
  7. Cloete, T.; Scheffer, C. Benchmarking of a full-body inertial motion capture system for clinical gait analysis. In Proceedings of the 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 4579–4582. [Google Scholar]
  8. Pfister, A.; West, A.M.; Bronner, S.; Noah, J.A. Comparative abilities of Microsoft Kinect and Vicon 3D motion capture for gait analysis. J. Med. Eng. Technol. 2014, 38, 274–280. [Google Scholar] [CrossRef] [PubMed]
  9. Raglio, A.; Panigazzi, M.; Colombo, R.; Tramontano, M.; Iosa, M.; Mastrogiacomo, S.; Baiardi, P.; Molteni, D.; Baldissarro, E.; Imbriani, C.; et al. Hand rehabilitation with sonification techniques in the subacute stage of stroke. Sci. Rep. 2021, 11, 7237. [Google Scholar] [CrossRef] [PubMed]
  10. Nikmaram, N.; Scholz, D.S.; Großbach, M.; Schmidt, S.B.; Spogis, J.; Belardinelli, P.; Müller-Dahlhaus, F.; Remy, J.; Ziemann, U.; Rollnik, J.D.; et al. Musical sonification of arm movements in stroke rehabilitation yields limited benefits. Front. Neurosci. 2019, 13, 1378. [Google Scholar] [CrossRef] [PubMed]
  11. Brückner, H.P.; Krüger, B.; Blume, H. Reliable orientation estimation for mobile motion capturing in medical rehabilitation sessions based on inertial measurement units. Microelectron. J. 2014, 45, 1603–1611. [Google Scholar] [CrossRef]
  12. Khandalavala, K.; Shimon, T.; Flores, L.; Armijo, P.R.; Oleynikov, D. Emerging surgical robotic technology: A progression toward microbots. Ann. Laparosc. Endosc. Surg. 2019, 5. [Google Scholar] [CrossRef]
  13. Bouget, D.; Allan, M.; Stoyanov, D.; Jannin, P. Vision-based and marker-less surgical tool detection and tracking: A review of the literature. Med. Image Anal. 2017, 35, 633–654. [Google Scholar] [CrossRef]
  14. Laribi, M.A.; Riviere, T.; Arsicault, M.; Zeghloul, S. A design of slave surgical robot based on motion capture. In Proceedings of the 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO), Guangzhou, China, 11–14 December 2012; pp. 600–605. [Google Scholar]
  15. DiMaio, S.; Hanuschik, M.; Kreaden, U. The da Vinci Surgical System. In Surgical Robotics: Systems Applications and Visions; Rosen, J., Hannaford, B., Satava, R.M., Eds.; Springer: New York, NY, USA, 2011; pp. 199–217. [Google Scholar]
  16. Marescaux, J.; Rubino, F. The ZEUS robotic system: Experimental and clinical applications. Surg. Clin. 2003, 83, 1305–1315. [Google Scholar] [CrossRef] [PubMed]
  17. Yuminaka, Y.; Mori, T.; Watanabe, K.; Hasegawa, M.; Shirakura, K. Non-contact vital sensing systems using a motion capture device: Medical and healthcare applications. Key Eng. Mater. 2016, 698, 171–176. [Google Scholar] [CrossRef]
  18. Charbonnier, C.; Chagué, S.; Kevelham, B.; Preissmann, D.; Kolo, F.C.; Rime, O.; Lädermann, A. ArthroPlanner: A surgical planning solution for acromioplasty. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 2009–2019. [Google Scholar] [CrossRef] [PubMed]
  19. Karaliotas, C. When simulation in surgical training meets virtual reality. Hell. J. Surg. 2011, 83, 303–316. [Google Scholar] [CrossRef]
  20. Gavrilova, M.L.; Ahmed, F.; Bari, A.H.; Liu, R.; Liu, T.; Maret, Y.; Sieu, B.K.; Sudhakar, T. Multi-modal motion-capture-based biometric systems for emergency response and patient rehabilitation. In Research Anthology on Rehabilitation Practices and Therapy; IGI Global: Hershey, PA, USA, 2021; pp. 653–678. [Google Scholar]
  21. Khoury, A.R. Motion capture for telemedicine: A review of nintendo wii, microsoft kinect, and playstation move. J. Int. Soc. Telemed. eHealth 2018, 6, e14-1. [Google Scholar]
  22. Armitano-Lago, C.; Willoughby, D.; Kiefer, A.W. A SWOT analysis of portable and low-cost markerless motion capture systems to assess lower-limb musculoskeletal kinematics in sport. Front. Sport. Act. Living 2022, 3, 809898. [Google Scholar] [CrossRef] [PubMed]
  23. Ortega, B.P.; Olmedo, J.M.J. Application of motion capture technology for sport performance analysis. Retos Nuevas Tendencias Educ. Física Deporte Recreación 2017, 32, 241–247. [Google Scholar]
  24. Rana, M.; Mittal, V. Wearable sensors for real-time kinematics analysis in sports: A review. IEEE Sens. J. 2020, 21, 1187–1207. [Google Scholar] [CrossRef]
  25. Lee, C.J.; Lee, J.K. Inertial motion capture-based wearable systems for estimation of joint kinetics: A systematic review. Sensors 2022, 22, 2507. [Google Scholar] [CrossRef]
  26. Johnson, W.R.; Mian, A.; Donnelly, C.J.; Lloyd, D.; Alderson, J. Predicting athlete ground reaction forces and moments from motion capture. Med. Biol. Eng. Comput. 2018, 56, 1781–1792. [Google Scholar] [CrossRef] [PubMed]
  27. Godfrey, A.; Stuart, S.; Kenny, I.C.; Comyns, T.M. Methodological Considerations in Sports Science, Technology and Engineering. Front. Sport. Act. Living 2023, 5, 1294412. [Google Scholar] [CrossRef] [PubMed]
  28. Boddy, K.J.; Marsh, J.A.; Caravan, A.; Lindley, K.E.; Scheffey, J.O.; O’Connell, M.E. Exploring wearable sensors as an alternative to marker-based motion capture in the pitching delivery. PeerJ 2019, 7, e6365. [Google Scholar] [CrossRef] [PubMed]
  29. Punchihewa, N.G.; Yamako, G.; Fukao, Y.; Chosa, E. Identification of key events in baseball hitting using inertial measurement units. J. Biomech. 2019, 87, 157–160. [Google Scholar] [CrossRef]
  30. Wang, J.; Wang, Z.; Gao, F.; Zhao, H.; Qiu, S.; Li, J. Swimming stroke phase segmentation based on wearable motion capture technique. IEEE Trans. Instrum. Meas. 2020, 69, 8526–8538. [Google Scholar] [CrossRef]
  31. Hachaj, T.; Piekarczyk, M.; Ogiela, M.R. Human actions analysis: Templates generation, matching and visualization applied to motion capture of highly-skilled karate athletes. Sensors 2017, 17, 2590. [Google Scholar] [CrossRef] [PubMed]
  32. Szczęsna, A.; Błaszczyszyn, M.; Pawlyta, M. Optical motion capture dataset of selected techniques in beginner and advanced Kyokushin karate athletes. Sci. Data 2021, 8, 13. [Google Scholar] [CrossRef]
  33. Bregler, C. Motion capture technology for entertainment [in the spotlight]. IEEE Signal Process. Mag. 2007, 24, 158–160. [Google Scholar] [CrossRef]
  34. Pilati, F.; Faccio, M.; Gamberi, M.; Regattieri, A. Learning manual assembly through real-time motion capture for operator training with augmented reality. Procedia Manuf. 2020, 45, 189–195. [Google Scholar] [CrossRef]
  35. Bortolini, M.; Gamberi, M.; Pilati, F.; Regattieri, A. Automatic assessment of the ergonomic risk for manual manufacturing and assembly activities through optical motion capture technology. Procedia CIRP 2018, 72, 81–86. [Google Scholar] [CrossRef]
  36. Guo, W.J.; Chen, Y.R.; Chen, S.Q.; Yang, X.L.; Qin, L.J.; Liu, J.G. Design of Human Motion Capture System Based on Computer Vision. In Proceedings of the 5th International Conference on Advanced Design and Manufacturing Engineering, Shenzhen, China, 19–20 September 2015; pp. 1891–1894. [Google Scholar]
  37. Yang, S.X.; Christiansen, M.S.; Larsen, P.K.; Alkjær, T.; Moeslund, T.B.; Simonsen, E.B.; Lynnerup, N. Markerless motion capture systems for tracking of persons in forensic biomechanics: An overview. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2014, 2, 46–65. [Google Scholar] [CrossRef]
  38. Tirakoat, S. Optimized motion capture system for full body human motion capturing case study of educational institution and small animation production. In Proceedings of the 2011 Workshop on Digital Media and Digital Content Management, Hangzhou, China, 15–16 May 2011; pp. 117–120. [Google Scholar]
  39. Han, S.; Achar, M.; Lee, S.; Peña-Mora, F. Empirical assessment of a RGB-D sensor on motion capture and action recognition for construction worker monitoring. Vis. Eng. 2013, 1, 1–13. [Google Scholar] [CrossRef]
  40. Starbuck, R.; Seo, J.; Han, S.; Lee, S. A stereo vision-based approach to marker-less motion capture for on-site kinematic modeling of construction worker tasks. In Proceedings of the Computing in Civil and Building Engineering 2014, Orlando, FL, USA, 23–25 June 2014; pp. 1094–1101. [Google Scholar]
  41. Seo, J.; Alwasel, A.; Lee, S.; Abdel-Rahman, E.M.; Haas, C. A comparative study of in-field motion capture approaches for body kinematics measurement in construction. Robotica 2019, 37, 928–946. [Google Scholar] [CrossRef]
  42. Baker, T. The History of Motion Capture within the Entertainment Industry. Ph.D. Thesis, Metropolia Ammattikorkeakoulu, Helsinki, Finland, 2020. [Google Scholar]
  43. Patoli, M.Z.; Gkion, M.; Newbury, P.; White, M. Real time online motion capture for entertainment applications. In Proceedings of the 2010 Third IEEE International Conference on Digital Game and Intelligent Toy Enhanced Learning, Kaohsiung, Taiwan, 12–16 April 2010; pp. 139–145. [Google Scholar]
  44. Kammerlander, R.K.; Pereira, A.; Alexanderson, S. Using virtual reality to support acting in motion capture with differently scaled characters. In Proceedings of the 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Lisbon, Portugal, 27 March–1 April 2021; pp. 402–410. [Google Scholar]
  45. Knyaz, V. Scalable photogrammetric motion capture system “mosca”: Development and application. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 43–49. [Google Scholar] [CrossRef]
  46. Chan, J.C.; Leung, H.; Tang, J.K.; Komura, T. A virtual reality dance training system using motion capture technology. IEEE Trans. Learn. Technol. 2010, 4, 187–195. [Google Scholar] [CrossRef]
  47. Kim, H.Y.; Lee, J.S.; Choi, H.L.; Han, J.H. Autonomous formation flight of multiple flapping-wing flying vehicles using motion capture system. Aerosp. Sci. Technol. 2014, 39, 596–604. [Google Scholar] [CrossRef]
  48. Wu, X.D.; Liu, H.; Mo, S.; Wu, Z.; Li, Y. Research of Maintainability Evaluation for Civil Aircrafts Based on the Motion Capture System. Appl. Mech. Mater. 2012, 198, 1062–1066. [Google Scholar] [CrossRef]
  49. Phunsa, S.; Potisarn, N.; Tirakoat, S. Edutainment–Thai Art of Self-Defense and Boxing by Motion Capture Technique. In Proceedings of the 2009 International Conference on Computer Modeling and Simulation, Cambridge, UK, 25–27 March 2009; pp. 152–155. [Google Scholar]
  50. Parks, M.; Chien, J.H.; Siu, K.C. Development of a mobile motion capture (MO2CA) system for future military application. Mil. Med. 2019, 184, 65–71. [Google Scholar] [CrossRef] [PubMed]
  51. Aquila, I.; Sacco, M.A.; Aquila, G.; Raffaele, R.; Manca, A.; Capoccia, G.; Cordasco, F.; Ricci, P. The reconstruction of the dynamic of a murder using 3D motion capture and 3D model buildings: The investigation of a dubious forensic case. J. Forensic Sci. 2019, 64, 1540–1543. [Google Scholar] [CrossRef]
  52. Alonso, S.; López, D.; Puente, A.; Romero, A.; Álvarez, I.M.; Manero, B. Evaluation of a motion capture and virtual reality classroom for secondary school teacher training. In Proceedings of the 29th International Conference on Computers in Education, Virtual, 22–26 November 2021; pp. 327–332. [Google Scholar]
  53. Yokokohji, Y.; Kitaoka, Y.; Yoshikawa, T. Motion capture from demonstrator’s viewpoint and its application to robot teaching. J. Robot. Syst. 2005, 22, 87–97. [Google Scholar] [CrossRef]
  54. Marshall, R.N.; Ferdinands, R. The biomechanics of the elbow in cricket bowling. Int. Sport. J. 2005, 6, 1–6. [Google Scholar]
  55. Wells, D.; Alderson, J.; Camomilla, V.; Donnelly, C.; Elliott, B.; Cereatti, A. Elbow joint kinematics during cricket bowling using magneto-inertial sensors a feasibility study. J. Sport. Sci. 2019, 37, 515–524. [Google Scholar] [CrossRef] [PubMed]
  56. Wixted, A.; Portus, M.; Spratford, W.; James, D. Detection of throwing in cricket using wearable sensors. Sport. Technol. 2011, 4, 134–140. [Google Scholar] [CrossRef]
  57. Kumar, A.; Espinosa, H.G.; Worsey, M.; Thiel, D.V. Spin rate measurements in cricket bowling using magnetometers. Proceedings 2020, 49, 11. [Google Scholar] [CrossRef]
  58. Spratford, W.; Whiteside, D.; Elliott, B.; Portus, M.; Brown, N.; Alderson, J. Does performance level affect initial ball flight kinematics in finger and wrist-spin cricket bowlers? J. Sport. Sci. 2018, 36, 651–659. [Google Scholar] [CrossRef] [PubMed]
  59. Harnett, K.; Plint, B.; Chan, K.Y.; Clark, B.; Netto, K.; Davey, P.; Müller, S.; Rosalie, S. Validating an inertial measurement unit for cricket fast bowling: A first step in assessing the feasibility of diagnosing back injury risk in cricket fast bowlers during a tele-sport-and-exercise medicine consultation. PeerJ 2022, 10, e13228. [Google Scholar] [CrossRef] [PubMed]
  60. Ferdinands, R.E.; Sinclair, P.J.; Stuelcken, M.C.; Greene, A. Rear leg kinematics and kinetics in cricket fast bowling. Sport. Technol. 2014, 7, 52–61. [Google Scholar] [CrossRef]
  61. Peploe, C.; King, M.; Harland, A. The effects of different delivery methods on the movement kinematics of elite cricket batsmen in repeated front foot drives. Procedia Eng. 2014, 72, 220–225. [Google Scholar] [CrossRef]
  62. Moodley, T.; van der Haar, D.; Noorbhai, H. Automated recognition of the cricket batting backlift technique in video footage using deep learning architectures. Sci. Rep. 2022, 12, 1895. [Google Scholar] [CrossRef]
  63. Peploe, C. The Kinematics of Batting against Fast Bowling in Cricket. Ph.D. Thesis, Loughborough University, Reading, UK, 2016. [Google Scholar]
  64. McErlain-Naylor, S.; Peploe, C.; Felton, P.; King, M. Hitting for Six: Cricket Power Hitting Biomechanics. 2022. Available online: https://www.stuartmcnaylor.com/publication/cricket_BASES/McErlain-Naylor_et_al_2022.pdf (accessed on 9 November 2023).
  65. Peploe, C.; McErlain-Naylor, S.A.; Harland, A.; King, M. Relationships between technique and bat speed, post-impact ball speed, and carry distance during a range hitting task in cricket. Hum. Mov. Sci. 2019, 63, 34–44. [Google Scholar] [CrossRef]
  66. Callaghan, S.J.; Lockie, R.G.; Jeffriess, M.D. The acceleration kinematics of cricket-specific starts when completing a quick single. Sport. Technol. 2014, 7, 39–51. [Google Scholar] [CrossRef]
  67. Sarkar, A.K. Bat Swing Analysis in Cricket. Ph.D. Thesis, Griffith School of Engineering, Nathan, QLD, Australia, 2013. [Google Scholar]
  68. Siddiqui, H.U.R.; Younas, F.; Rustam, F.; Flores, E.S.; Ballester, J.B.; Diez, I.d.l.T.; Dudley, S.; Ashraf, I. Enhancing Cricket Performance Analysis with Human Pose Estimation and Machine Learning. Sensors 2023, 23, 6839. [Google Scholar] [CrossRef]
  69. Stuelcken, M.; Portus, M.; Mason, B. Cricket: Off-side front foot drives in men’s high performance Cricket. Sport. Biomech. 2005, 4, 17–35. [Google Scholar] [CrossRef] [PubMed]
  70. Dhawan, A.; Cummins, A.; Spratford, W.; Dessing, J.C.; Craig, C. Development of a novel immersive interactive virtual reality cricket simulator for cricket batting. In Proceedings of the 10th International Symposium on Computer Science in Sports (ISCSS), Loughborough, UK, 9–11 September 2015; pp. 203–210. [Google Scholar]
  71. Curtis, K.M.; Kelly, M.; Craven, M.P. Cricket batting technique analyser/trainer using fuzzy logic. In Proceedings of the 2009 16th International Conference on Digital Signal Processing, Santorini, Greece, 5–7 July 2009; pp. 1–6. [Google Scholar]
  72. Kelly, M.; Curtis, K.; Craven, M. Fuzzy recognition of cricket batting strokes based on sequences of body and bat postures. In Proceedings of the IEEE SoutheastCon, Ocho Rios, Jamaica, 4–6 April 2003; pp. 140–147. [Google Scholar]
  73. Callaghan, J.S.; Jeffriess, D.M.; Mackie, L.S.; Jalilvand, F.; Lockie, G.R. The impact of a rolling start on the sprint velocity and acceleration kinematics of a quick single in regional first grade cricketers. Int. J. Perform. Anal. Sport 2015, 15, 794–808. [Google Scholar] [CrossRef]
  74. Thewlis, D.; Bishop, C.; Daniell, N.; Paul, G. Next-generation low-cost motion capture systems can provide comparable spatial accuracy to high-end systems. J. Appl. Biomech. 2013, 29, 112–117. [Google Scholar] [CrossRef]
  75. Regazzoni, D.; De Vecchi, G.; Rizzi, C. RGB cams vs RGB-D sensors: Low cost motion capture technologies performances and limitations. J. Manuf. Syst. 2014, 33, 719–728. [Google Scholar] [CrossRef]
  76. Chatzitofis, A.; Zarpalas, D.; Daras, P.; Kollias, S. DeMoCap: Low-cost marker-based motion capture. Int. J. Comput. Vis. 2021, 129, 3338–3366. [Google Scholar] [CrossRef]
  77. Sgrò, F.; Nicolosi, S.; Schembri, R.; Pavone, M.; Lipoma, M. Assessing vertical jump developmental levels in childhood using a low-cost motion capture approach. Percept. Mot. Skills 2015, 120, 642–658. [Google Scholar] [CrossRef] [PubMed]
  78. Robert-Lachaine, X.; Mecheri, H.; Muller, A.; Larue, C.; Plamondon, A. Validation of a low-cost inertial motion capture system for whole-body motion analysis. J. Biomech. 2020, 99, 109520. [Google Scholar] [CrossRef]
  79. Lee, Y.; Yoo, H. Low-cost 3D motion capture system using passive optical markers and monocular vision. Optik 2017, 130, 1397–1407. [Google Scholar] [CrossRef]
  80. Choe, N.; Zhao, H.; Qiu, S.; So, Y. A sensor-to-segment calibration method for motion capture system based on low cost MIMU. Measurement 2019, 131, 490–500. [Google Scholar] [CrossRef]
  81. Patrizi, A.; Pennestrì, E.; Valentini, P.P. Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics. Ergonomics 2016, 59, 155–162. [Google Scholar] [CrossRef] [PubMed]
  82. Sama, M.; Pacella, V.; Farella, E.; Benini, L.; Riccó, B. 3dID: A low-power, low-cost hand motion capture device. In Proceedings of the Design Automation & Test in Europe Conference, Munich, Germany, 6–10 March 2006; Volume 2, p. 6. [Google Scholar]
  83. Raghavendra, P.; Sachin, M.; Srinivas, P.; Talasila, V. Design and development of a real-time, low-cost IMU based human motion capture system. In Proceedings of the Computing and Network Sustainability: Proceedings of IRSCNS 2016, Goa, India, 1–2 July 2016; pp. 155–165. [Google Scholar]
  84. Kendall, A.; Grimes, M.; Cipolla, R. Posenet: A convolutional network for real-time 6-dof camera relocalization. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 2938–2946. [Google Scholar]
  85. Gladh, S.; Danelljan, M.; Khan, F.S.; Felsberg, M. Deep motion features for visual tracking. In Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016; pp. 1243–1248. [Google Scholar]
  86. Fang, H.S.; Li, J.; Tang, H.; Xu, C.; Zhu, H.; Xiu, Y.; Li, Y.L.; Lu, C. Alphapose: Whole-body regional multi-person pose estimation and tracking in real-time. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 7157–7173. [Google Scholar] [CrossRef] [PubMed]
  87. Mosna, P. Integrated Approaches Supported by Novel Technologies in Functional Assessment and Rehabilitation. Ph.D. Thesis, Università Degli Studi di Brescia, Brescia, Italy, 2021. [Google Scholar]
  88. Li, J.; Su, W.; Wang, Z. Simple pose: Rethinking and improving a bottom-up approach for multi-person pose estimation. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 11354–11361. [Google Scholar]
  89. Achilles, F.; Ichim, A.E.; Coskun, H.; Tombari, F.; Noachtar, S.; Navab, N. Patient MoCap: Human pose estimation under blanket occlusion for hospital monitoring applications. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2016: 19th International Conference, Athens, Greece, 17–21 October 2016; pp. 491–499. [Google Scholar]
  90. Kiely, N.; Pickering Rodriguez, L.; Watsford, M.; Reddin, T.; Hardy, S.; Duffield, R. The influence of technique and physical capacity on ball release speed in cricket fast-bowling. J. Sport. Sci. 2021, 39, 2361–2369. [Google Scholar] [CrossRef] [PubMed]
  91. Kishita, Y.; Ueda, H.; Kashino, M. Temporally coupled coordination of eye and body movements in baseball batting for a wide range of ball speeds. Front. Sport. Act. Living 2020, 2, 64. [Google Scholar] [CrossRef]
  92. Jayaraj, L.; Wood, J.; Gibson, M. Improving the immersion in virtual reality with real-time avatar and haptic feedback in a cricket simulation. In Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), Nantes, France, 9–13 October 2017; pp. 310–314. [Google Scholar]
  93. Gao, L.; Zhang, G.; Yu, B.; Qiao, Z.; Wang, J. Wearable human motion posture capture and medical health monitoring based on wireless sensor networks. Measurement 2020, 166, 108252. [Google Scholar] [CrossRef]
  94. Haque, M.R.; Imtiaz, M.H.; Kwak, S.T.; Sazonov, E.; Chang, Y.H.; Shen, X. A Lightweight Exoskeleton-Based Portable Gait Data Collection System. Sensors 2021, 21, 781. [Google Scholar] [CrossRef]
  95. Kim, H.G.; Lee, J.W.; Jang, J.; Han, C.; Park, S. Mechanical design of an exoskeleton for load-carrying augmentation. In Proceedings of the IEEE ISR 2013, Seoul, Republic of Korea, 24–26 October 2013; pp. 1–5. [Google Scholar]
  96. Lafayette, T.B.d.G.; Kunst, V.H.d.L.; Melo, P.V.d.S.; Guedes, P.d.O.; Teixeira, J.M.X.N.; Vasconcelos, C.R.d.; Teichrieb, V.; Da Gama, A.E.F. Validation of angle estimation based on body tracking data from RGB-D and RGB cameras for biomechanical assessment. Sensors 2022, 23, 3. [Google Scholar] [CrossRef]
  97. Palani, P.; Panigrahi, S.; Jammi, S.A.; Thondiyath, A. Real-time joint angle estimation using mediapipe framework and inertial sensors. In Proceedings of the 2022 IEEE 22nd International Conference on Bioinformatics and Bioengineering (BIBE), Taichung, Taiwan, 7–9 November 2022; pp. 128–133. [Google Scholar]
Figure 1. Classification of motion capture technology.
Table 1. Quick overlook of existing MoCap systems.
Method | Accuracy | Setup Complexity | Cost | Environmental Needs | Prior Usage
Optical (Marker-based) | Very accurate; captures detailed movements. | Complex setup; needs many cameras and careful placement of markers. | Expensive; requires lots of cameras and computer power. | Needs controlled lighting and clear view of markers. | [5,18,26,44,45,46,51,60,66,70,73,76,89,90,91]
Optical (Marker-less) | Very accurate; quality depends on software. | Medium-to-complex setup; depends on software. | Medium to expensive; needs advanced software. | Needs clear view of the person; sensitive to surroundings. | [1,20,22,35,76,92]
Inertial | Quite accurate; might lose accuracy over time. | Easy-to-medium setup; no external cameras needed. | Medium cost; sensors and processors are needed. | Very flexible; works anywhere, but needs initial setup. | [2,7,11,28,31,59,76,80,93]
Magnetic | Fairly accurate; can be disrupted by metals. | Medium setup; involves placing magnetic sensors. | Medium to expensive; specialized equipment needed. | Must avoid metal in the area. | [71,80]
Mechanical | Quite accurate; measures movement at joints directly. | Medium setup; involves wearing a suit with sensors. | Medium cost; suits and sensors can be pricey. | Suit needs to fit well; can limit movement. | [94,95]
Table 2. Usage of different motion capture methods in sports.
Sport | Marker-Based Optical | Marker-Less Optical | Inertial & Magnetic | Mechanical
Golf
Baseball
Gymnastics
Diving
Basketball
Football (Soccer/American)
Athletics
Cycling
Skiing
Rowing
Canoeing
Tennis
Mixed Martial Arts
Boxing
Cricket
⊗ represents the abundance of usage of each method in a particular sport.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
