1. Introduction
Monitoring athletes’ well-being is a common practice in high-performance sport to optimize the training process and competitive performance [1]. Neuromuscular fatigue (NMF) can be defined as a reduction in the maximal voluntary force that a muscle group can generate. Fatigue generated by training combined with insufficient recovery can impair team sports performance, especially during an intense competitive period [2]. If fatigue is excessive and recovery is neglected, the risk of injury or illness rises [3]. It has been suggested that 48 h is the minimum post-competition recovery period needed to restore the testosterone–cortisol ratio [4], and that even 4–5 days may be needed to recover fully after an intense competition [4,5]. Thus, monitoring fatigue may be crucial, as it can increase player availability and subsequently influence a team’s chance of success [6,7]. This is especially important in team sports that involve frequent competition and travel, which may cause fatigue to accumulate over time. Apart from physiological factors, psychological factors should also be taken into consideration. Mental fatigue may impair physical performance, and observing fluctuations in mood, emotions and perceived stress can provide coaches with valuable information about an athlete’s individual response to training [8,9].
Monitoring fatigue during the competitive season is a common practice in team sports. The amount of athlete monitoring data collected in team sports tends to increase, and practitioners are recommended to use simple, valid and reliable tools to monitor athletes’ well-being and prescribe training [10]. Practitioners usually introduce various methods to monitor training load and fatigue in athletes [11,12]. Subjective monitoring tools (e.g., wellness questionnaires, session rating of perceived exertion) can be used separately [1,13,14] or combined with objective assessments (e.g., strength and power testing, GPS monitoring) in an integrated approach [8,15]. Strength and conditioning coaches have been surveyed several times about their monitoring procedures [16,17,18]: 84% of practitioners use subjective questionnaires, 61% introduce neuromuscular performance testing, and more than half (54%) use vertical jump testing [16]. Moreover, the latest survey by Asimakidis et al. indicated that the countermovement jump (CMJ) is the most frequently used test for jump performance assessment (used by 92% of practitioners) [18]. The CMJ is reliable [19] and sensitive for monitoring neuromuscular status [20], and may be more suitable than other vertical jump tests such as the squat jump or drop jump [21]. It has proved useful in team sports involving powerful actions such as sprinting or jumping [17,22,23], and tracking the average CMJ height may be more appropriate than the highest CMJ height for monitoring NMF [20]. However, this marker of NMF appears to lack sensitivity in endurance runners and should not be applied in that population [24]. As these measurements are frequently implemented, the question of injury risk during testing procedures could be raised. So far, researchers have not extensively examined the potential risk of injury during CMJ testing, and no specific relationship with knee or low back injury has been found for squat jump or depth jump testing [25].
Despite its popularity as a tool to monitor NMF [16,17,18], the CMJ lacks method standardization, as testing procedures appear to be carried out with different technical execution [26]. It is important to select appropriate tests for a given population of athletes, and testing within strength and conditioning should be simple and serve a purpose. Practitioners need to know why they introduce a given test and how it should be conducted to maximize its utility [27,28]. Moreover, the selected test needs to be valid and reliable to effectively track changes in an athlete’s performance [29]. The same principles apply to the technology used to assess vertical jump characteristics. Force platforms, which measure the ground reaction force exerted by the subject, are classified as the ‘gold standard’ [30]. They allow the analysis of kinematic variables of the jump (e.g., countermovement depth, time to peak force) in addition to kinetic variables (e.g., jump height, mean force) [31]. Despite providing a lot of potentially useful data, both laboratory-based and portable force platforms are expensive [32], and practitioners are often forced to use alternatives. Fortunately, a variety of technologies can serve as an effective replacement for a force platform: infrared platforms [33], contact mats [34], inertial devices [35] or even a smartphone video application [36] may be used as more affordable options to track vertical jump height. However, it should be noted that the results from different devices are not interchangeable when evaluating changes in jump height over time [37]. Thus, measurements should always be performed with the same device to avoid misinterpretation of the results.
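Contact mats and infrared platforms typically estimate jump height from flight time rather than measuring force directly. The minimal sketch below (illustrative, not taken from the study’s methods) shows the standard flight-time calculation; it also makes clear why flight-phase execution matters, since any change in body configuration between take-off and landing alters the measured flight time and thus the estimated height:

```python
# Estimating vertical jump height from flight time, as contact mats do.
# The formula h = g * t_f**2 / 8 assumes identical body configuration
# at take-off and landing; bending the hips mid-flight violates this.

G = 9.81  # gravitational acceleration (m/s^2)

def jump_height_from_flight_time(flight_time_s: float) -> float:
    """Return estimated jump height in metres from flight time in seconds."""
    return G * flight_time_s ** 2 / 8

# A flight time of 0.60 s corresponds to roughly 0.44 m.
print(round(jump_height_from_flight_time(0.60), 3))
```

Because height grows with the square of flight time, even a small artificial extension of the flight phase (e.g., tucking the legs before landing) inflates the estimated height, which is one reason standardized execution is emphasized.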
In this pilot observational study, we analyzed changes in CMJ height during two competitive seasons. Previous investigations have included several testing sessions across a competitive season [38,39] as well as weekly CMJ testing [40]. In our study, CMJ testing was performed weekly on a contact mat to assess NMF, monitor the athletes’ well-being, and support the decision-making process when prescribing training microcycles. In accordance with published recommendations [26,27,28], the general instructions for the jump’s execution were unchanged between the two competitive seasons. However, we introduced a differentiating factor in jump execution between the seasons and observed changes in jump height (JH). Thus, the main goal of our study was to evaluate changes in jump performance in professional volleyball players during two competitive seasons. A secondary aim was to determine whether the execution method of the jump during the same test can change the JH results in the CMJ.
4. Discussion
The most important finding of this study is that a small change in the technical execution of the CMJ test was associated with a change in subsequent JH in professional volleyball players when the measurements were performed using the same device. Introducing more rigorous testing procedures in S2 and changing the execution of the CMJ during the flight phase led to a statistically significant decrease in JH. Moreover, individual analysis of the players’ jumps indicated that the standard deviation of the JH data decreased markedly, by approximately 30%. This highlights the importance of carrying out the measurements in the same rigorous manner, with special attention to the execution of the test.
To our knowledge, this kind of investigation has not been carried out before. Monitoring fluctuations in weekly CMJ measurements during the competitive season has been described in Division I basketball players [40] but not in professional athletes. A large amount of data was included in the statistical analysis (540 jumps in S1 and 468 in S2; 1008 jumps in total), so the results are not based on the three or four measurements that typically take place during a competitive season [38,39]. The results demonstrate that testing procedures need to be carried out with particular caution to assess CMJ height appropriately. As practitioners are often unable to afford ‘gold standard’ devices and are forced to use cheaper alternatives for in-field testing, they should be aware of the limitations of the device used for the measurements [50]. Weakley et al. [28] highlighted key recommendations for improving the reliability of testing procedures in sports science, emphasizing considerations such as equipment, environmental conditions, and the tester’s and athlete’s familiarization with the test and equipment. Researchers underline the importance of executing the test in the same manner [27,28], yet despite instructions on how the test should be performed, discrepancies in test execution by different investigators can be observed [26]. Our work confirmed that proper execution of the test is critical for obtaining reliable results, especially when no ‘gold standard’ device is used for the measurements.
Our measurements in S1 could be considered suboptimal, as we allowed more freedom in the movement to imitate an athlete’s technical action (as in a volleyball block), which runs counter to current recommendations [26]. These measurements were carried out consistently in the same manner, but the addition of one technical cue in S2 changed the dynamics of the measurements. The approximately 4% decrease in JH and the approximately 30% decrease in standard deviation represent a discrepancy that would probably not be expected while using the same testing device. Contact mats are considered an appropriate tool to test vertical JH for slow stretch-shortening cycle movements such as the CMJ [34,51], and the particular mat used in this study has also been suggested to be appropriate [41,42,43]. However, some authors indicate that while contact mats are appropriate for testing JH in college-aged athletes, their utility in professional athletes is questionable, as their accuracy seems to decrease at higher JH values [52]. Our results provide an important insight into testing with this type of equipment. We suggest that, to maximize the accuracy of the measurements and their utility for subsequent NMF monitoring, practitioners need to implement more rigorous testing procedures, as in S2. The 4% decrease in JH was statistically significant, which is important in the context of CMJ testing, as its approximate between-day coefficient of variation has been reported as 3.5% [28]. Therefore, if we select an execution of the test that does not maximize the probability of obtaining accurate results, we potentially increase the risk of unreliable testing procedures. The CMJ is used not only to track NMF [20] but also to help identify potential talent in youth athletes [53] and to rank players within a group [28]. The results of our investigation promote high-quality, standardized procedures, as these often influence subsequent decision-making within sport institutions.
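The reasoning above, that an observed change must be judged against the test’s typical error, can be sketched numerically. The jump heights below are hypothetical and are not the study’s data:

```python
# Hedged sketch (values illustrative, not the study's raw data): comparing an
# observed change in jump height against the test's typical variation.
import statistics

def coefficient_of_variation(values: list[float]) -> float:
    """CV as a percentage: sample SD divided by the mean."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# Hypothetical between-day jump heights (cm) for one athlete.
baseline = [42.0, 43.5, 41.8, 43.0, 42.4]

cv = coefficient_of_variation(baseline)
observed_change_pct = 4.0  # the ~4% decrease reported between S1 and S2

# A change is only interpretable as 'real' if it exceeds the test's noise.
print(f"CV = {cv:.1f}%, change of {observed_change_pct}% exceeds noise: "
      f"{observed_change_pct > cv}")
```

In this framing, an execution protocol that lowers between-day variation effectively widens the gap between signal and noise, which is exactly what the stricter S2 procedure achieved.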
The approximately 4% decrease in JH was statistically significant, but the approximately 30% decrease in standard deviation may be more relevant for strength and conditioning coaches. To track NMF, it is common practice to apply basic statistical analyses such as the z-score, coefficient of variation or smallest worthwhile change [28]. In all of these calculations, one of the variables is the standard deviation. The results presented in Table 4 and Table 5 demonstrate a large decrease in standard deviation for all the players after the introduction of more rigorous testing procedures in S2. Analyzing the players’ individual changes in standard deviation between S1 and S2, some results deserve special attention: the changes demonstrated by athletes n1 (45 and 37.5%), n3 (58.2 and 54.8%) and n8 (47.1 and 38.6%) should persuade practitioners to implement the CMJ test execution used in S2. A large decrease in the standard deviation means that the measurement results cluster much closer to the mean, which directly affects the statistical tools highlighted above, as they become much more sensitive to changes in JH. If a strength and conditioning coach uses the z-score to compare the results of a given session with data collected over an extended period, such changes in standard deviation may alter the decision-making process within the team. Usually, an integrated approach of subjective and objective monitoring is implemented [8,15], but if only CMJ measurements are used to assess NMF, practitioners could decide incorrectly about a player’s participation in technical sessions. To put it into perspective, a JH result 3 cm below an athlete’s mean could lead to a completely different intervention for players n1, n3 or n8 depending on the season: with the standard deviations from S1, a coach could keep the athlete within the normal training regimen, whereas with those from S2, the coach might modify the athlete’s participation in a technical session or even withdraw them from it. In our opinion, this example illustrates how the decrease in standard deviation in S2 can be even more important than the decrease in JH itself.
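The effect of a smaller standard deviation on z-score-based flagging can be made concrete with a short, hypothetical example (the numbers below are illustrative, not the athletes’ actual data):

```python
# Hedged sketch (numbers hypothetical): how a ~30% smaller standard deviation
# changes the z-score of the same 3 cm jump-height deficit.

def z_score(value: float, mean: float, sd: float) -> float:
    return (value - mean) / sd

mean_jh = 45.0       # hypothetical athlete's mean jump height (cm)
deficit = 3.0        # today's jump is 3 cm below the mean

sd_s1 = 3.5          # hypothetical SD under the looser S1 execution
sd_s2 = 3.5 * 0.7    # ~30% smaller SD under the stricter S2 execution

z_s1 = z_score(mean_jh - deficit, mean_jh, sd_s1)
z_s2 = z_score(mean_jh - deficit, mean_jh, sd_s2)

# With a common flagging threshold of z < -1, only the S2 procedure
# flags the same 3 cm drop as a meaningful deviation.
print(f"S1: z = {z_s1:.2f}, S2: z = {z_s2:.2f}")
```

Under these assumed values, the same absolute deficit crosses a z < -1 threshold only with the S2 standard deviation, illustrating how the stricter protocol makes the monitoring tool more sensitive without any change to the underlying statistic.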
An argument that could be raised against the results of our study is that, apart from the execution of the jump, a number of other factors can influence performance in the test. The players could simply have been exposed to more physically demanding sessions, or have participated in a higher number of games with insufficient recovery. However, the roles of the players remained unchanged between the seasons: the starters in S1 were also the starters in S2, and the bench players in S1 were also the bench players in S2. Naturally, their playing time could differ slightly, but they generally practiced a similar amount of time (as practice hours are restricted by the club environment) and participated in nearly the same number of official games (30 in S1 and 33 in S2). Thus, it cannot be said that the smaller mean JH negatively influenced the players’ volleyball performance, as they advanced to the play-off series in S2 but failed to do so in S1. Also, according to Table 7, the players’ subjective readiness to train did not significantly differ between S1 and S2 (it was even slightly higher in S2), so we can assume that their subjective readiness for the measurements was similar in both seasons and did not affect the results. Another argument may concern the lack of one-repetition maximum (1RM) measurements in multi-joint lower-body movements such as deadlifts, squats or power cleans. The latest research indicates that measurements of this type are not commonly introduced in the high-performance environment during the competitive season [18]. The isometric mid-thigh pull (IMTP) test is the most popular choice among practitioners (35%), but it was not performed in our study population due to the lack of a measurement device. Despite not performing frequent IMTP tests or 1RM/3RM measurements of multi-joint lower-body movements, the athletes regularly participated in resistance training involving exercises such as squats, trap bar deadlifts and power cleans to promote their maximal strength and rate of force development. Therefore, we can assume that their level of maximal strength most likely remained unchanged across the competitive seasons and was not a limiting factor for the CMJ measurements. However, as the athletes’ training prescriptions changed weekly to optimize their performance in competition, it should be taken into consideration that their physical condition could also vary slightly during the measurements because of these frequent changes.
Apart from characteristics describing the athletes’ physical condition during the measurements, a variety of other external factors could also be associated with their performance. Although the same recovery modalities were available throughout both competitive seasons (ice baths, sauna, pneumatic compression, and access to a qualified physiotherapist), the athletes were free to choose their recovery modalities on a daily basis. As this was not controlled during the measurements, it could also have played a small role in the players’ results. Additionally, the athletes were acquainted with general sports nutrition guidelines, but their caloric intake was not strictly controlled during the measurements, which could have influenced their energy levels and recovery. In addition to diet, hydration may play a role in athletes’ performance and can affect individual fluctuations in body mass [54,55]. Another factor that influences performance is sleep [56], whose restriction has been found to impair maximal jump performance [57]. As the competitive season is long and exhausting, mental fatigue could also arise and influence an athlete’s mental health and motivation [8,9]. Thus, psychological factors could also have played a role in the athletes’ results during the measurements. Despite the inability to control all the external variables that could be associated with CMJ performance, it should be highlighted that, apart from the proper execution of the jump, a number of other factors may correlate with the results of the test.
Our study has practical applications that can be transferred to high-performance environments. Although force platforms are the ‘gold standard’ for CMJ measurements, practitioners are often forced to use cheaper alternatives for in-field testing, and contact mats are often chosen for this purpose. Our results provide additional context to the existing guidelines on testing procedures [26,27,28]. Apart from executing the jump properly in the same environment, coaches also need to implement the most reliable technique, one that can guarantee the most repeatable results. Our approach of giving the athletes more freedom during the flight phase in S1, letting them bend the hips as in a volleyball block, seems to be suboptimal for weekly CMJ measurements to monitor NMF in team sports. Introducing more rigorous testing in S2 allowed us to significantly decrease the standard deviation of the JH, which improved the sensitivity of the test with this specific measurement device. Our investigation involved professional athletes and a large dataset collected during two competitive seasons, so strength and conditioning coaches using this type of equipment may apply the results of this study in their own training environments. Therefore, during CMJ measurements, not only the device but also the execution of the jump should remain unchanged, as different executions with the same device could influence the vertical JH. However, as the sample size was small, extrapolating the results of our study to other athletes should be done with caution. Researchers are encouraged to verify these findings through further investigation with a larger sample size and a more controlled design of external factors (e.g., sleep, nutrition).