1. Introduction
National Collegiate Athletic Association (NCAA) Cross-Country (CC) races range from 5 km to 10 km, with large variations in elevation profile between courses. Based on distance (>3 km) and duration of the race (>12 min and <2 h), CC is classified as distance running [1]. The determinants of performance during distance running have been extensively investigated and found to have significant associations with maximal oxygen uptake (VO2max), critical velocity (CV), running economy, and the gas exchange threshold (GET) [1,2]. To enhance physiological outcomes and improve performance, the training of distance runners is organized and periodized in a manner that allows for the accumulation of training time at specific intensities [3,4]. Researchers and sport scientists have utilized training intensity distribution (TID) analysis in order to quantify the amount of time spent in each of the exercise intensity zones and to estimate the overall physiological stress accumulated during training [5,6,7].
The establishment of training intensity zones is required to perform a TID analysis and should be anchored by physiological markers that represent differences in acute responses to exercise. The GET, the convergence of VCO2 and VO2 toward a zero-difference value, and CV, the highest sustainable running velocity at which a metabolic steady state can be achieved, have previously been used to establish a three-zone model for quantifying TID in middle- to long-distance runners, with the transition between moderate (Zone 1) and heavy exercise (Zone 2) demarcated by the GET and the transition between heavy and severe exercise (Zone 3) marked by CV [8,9,10]. The TID of both middle- and long-distance runners has recently been investigated, with a systematic review indicating that a pyramidal TID is most commonly observed, followed by a polarized TID [4,11]. Although TID during training has been investigated extensively, the intensity distribution of an actual collegiate cross-country competition has yet to be examined. The basic principles of periodization and exercise training state that the stimuli applied during training sessions should progress from non-specific to specific, with the specific-stimulus phase closely mimicking the physiological demands of competition. However, it is unclear what the physiological demands of a collegiate cross-country competition are, as this has yet to be fully investigated. Though previous investigations have reported associations between performance metrics and race times, this does not provide a clear understanding of the physiological demands of collegiate cross-country racing.
The modeling or prediction of race performance time has grown in popularity amongst coaches and sport scientists. The CV concept originated from the graphing of world-record times from various forms of human locomotion and evolved into a method of modeling total work capacity relative to the time to volitional exhaustion [12,13]. CV represents the highest velocity at which a metabolic steady state can be achieved, while D′ (pronounced "D prime") is the finite capacity to maintain velocities exceeding CV. An equation has been formulated using CV, D′, and the desired running distance (D) to predict the time required to cover D [13]. However, it is unclear whether this performance-modeling equation is applicable to a collegiate CC race, as it only predicts all-out running performance without taking into consideration oscillations in running velocity or alternative race strategies [14]. Thus, the primary purpose of the current investigation was to analyze the intensity distribution of an NCAA Division 1 CC competition using both running velocity and heart rate. A secondary purpose was to compare actual race times with modeled performance times using CV and D′.
4. Results
A total of 24 race files were analyzed from the 10 participating athletes: 16 from males and 8 from females. Participant demographic information and performance parameters are reported in Table 1. In total, four courses were run during the season, with course profile information reported in Table 2.
The percentage of competition time spent in each zone based on heart rate and running velocity, separated by sex, is illustrated in Figure 1. Both the running velocity and heart rate data for the distribution of time between zones were observed not to violate the assumption of sphericity. Two-way ANOVA revealed no statistically significant interaction between sex and percent of race time spent in zones. A main effect was present for percent of race time in zones for both running velocity (F = 82.8, p < 0.01, η² = 0.734) and heart rate (F = 8.823, p < 0.001, η² = 0.227). Post hoc analysis revealed a statistically significant difference between the percentage of time spent in running velocity Zone 2 compared to Zone 1 (75.0 ± 20.7% vs. 8.4 ± 14.0%, p < 0.01, d = 3.7) and Zone 3 (75.0 ± 20.7% vs. 16.7 ± 19.1%, p < 0.01, d = 2.9). Post hoc analysis also revealed a statistically significant difference between the percent of race time spent in heart rate Zone 1 compared to Zone 2 (12.1 ± 13.7% vs. 37.5 ± 30.2%, p = 0.01, d = 1.0) and Zone 3 (12.1 ± 13.7% vs. 50.3 ± 33.7%, p < 0.01, d = 1.4). Groups were then collapsed across sex, and the percentage of race time spent in zones was compared between heart rate and running velocity; time in zone for both methods is illustrated in Figure 2. Two-way ANOVA revealed no statistically significant interaction between method of calculation and percent of race time spent in zones. However, a main effect for zone was present (F = 43.1, p < 0.01, η² = 0.407). Post hoc analysis revealed a statistically significant difference in percent time spent in Zone 1 compared to Zone 2 (10.2 ± 13.8% vs. 56.2 ± 31.8%, p < 0.01, d = 1.8) and Zone 3 (10.2 ± 13.8% vs. 33.3 ± 32.9%, p < 0.01, d = 0.9) and between Zone 2 and Zone 3 (56.2 ± 31.8% vs. 33.3 ± 32.9%, p < 0.01, d = 0.7).
Athletes’ average velocities over the duration of competitions were 96.2 ± 11.3% of their CV. Paired-samples t-tests revealed a statistically significant difference with a moderate effect size between modeled performance time and actual race time (1359.6 ± 192.7 s vs. 1499.5 ± 248.5 s, p < 0.01, d = 0.62). However, partial correlation analysis revealed a large, statistically significant association between modeled performance time and actual race time (r = 0.683, p < 0.01) when controlling for VO2max.
Figure 3 illustrates the mean difference of −139.9 ± 139.3 s (95% LOA: −403.5 to 134.5 s). A regression (r² = 0.688, p = 0.168) with an SEE of 142.1 s was also observed.
5. Discussion
The primary purpose of the current investigation was to analyze the intensity distribution of an NCAA Division 1 CC competition by quantifying the percentage of time spent in each of the three exercise intensity zones using both running velocity and HR. To the knowledge of the current authors, this has yet to be reported. It was observed that athletes spent a statistically higher percentage of race time in Zone 2 compared to Zones 1 and 3 based on running velocity, and a statistically higher percentage of time in Zones 2 and 3 compared to Zone 1 based on HR. No statistically significant differences were observed between sexes for time spent in each of the zones, nor were there any differences based on the method of quantification. Athletes maintained an average running velocity of ~96% of their CV across competitions. Additionally, a secondary purpose of the current investigation was to compare actual race times with modeled performance times using CV and D′. A mean difference of 139 s was observed between modeled performance time and actual race time. Understanding the intensity distribution of a collegiate CC competition and the ability to model and predict performance will help inform coaches and athletes in their race preparation and strategy.
Collegiate CC running places a significant amount of physiological and metabolic stress on athletes, requiring them to race for an extended duration across multiple exercise intensity zones. An understanding of the physiological mechanisms at play within each zone is the foundation of understanding the physiological demands of CC racing. Zones 1, 2, and 3 correspond to the moderate, heavy, and severe exercise intensity domains, respectively. The boundary between Zone 1 and Zone 2 is demarcated by the gas exchange threshold, while the transition from Zone 2 to Zone 3 is demarcated by CV [2]. Exercise within each zone induces unique acute physiological responses [8]. These responses are primarily associated with the magnitude of perturbation from homeostasis and the ability or inability to achieve a metabolic steady state, which is defined as a leveling out or plateauing of parameters such as blood lactate and VO2 during constant-work-rate exercise [19,20,21,22,23]. Zone 1 is characterized by sustained work performed below the gas exchange, ventilatory, or lactate threshold, with VO2 rising to a metabolic steady state within ~3 min of the onset of exercise. Exercise within Zone 2 is characterized by an elevated VO2 and blood lactate response, coinciding with the development of the VO2 slow component, though a metabolic steady state is still achievable within 10–20 min of the onset of exercise [2,24]. Increases in both central and peripheral nervous system fatigue, with reductions in peak power output, maximal voluntary contraction, and VO2max, have all been observed following prolonged exercise within Zone 2 [25]. The transition from Zone 2 to Zone 3 is demarcated by CV, representing the highest exercise intensity at which a metabolic steady state can be achieved [8,23,26,27]. Thus, exercise within Zone 3 is primarily characterized by the failure to achieve a metabolic steady state, with both VO2 and blood lactate increasing significantly from the onset of exercise and continuously rising at a rate proportional to the distance above CV until VO2max and task failure are reached [22,28,29].
The current investigation observed statistically significant differences in the time spent in each of the zones based on velocity, with ~8%, ~75%, and ~16% of race time spent in Zones 1, 2, and 3, respectively. This distribution reflects athletes self-selecting a high velocity that is near, but still below, their CV, allowing them to reach a metabolic steady state and avoid the rapid accumulation of fatigue associated with continuous exercise in Zone 3 [21,30]. However, ~16% of race time was spent within Zone 3, indicating that the athletes spent time running above their CV and thus utilized D′. Based on the time spent within each zone, it appears that athletes were strategic in their decision to push their velocity into Zone 3 and use the finite D′ in instances such as passing an opponent, matching increases in opponents’ velocities, or adjusting to alterations in course elevation. Previous investigations of self-selected pacing during time-trials in experienced runners have observed a well-defined U-shaped pattern, beginning at intensities within Zone 3, decreasing to a sustainable intensity within Zone 2 for much of the exercise duration, and then increasing back into Zone 3 towards the end of the time-trial [9,31]. The decrease in intensity from Zone 3 to Zone 2 not only maintains a sustainable intensity but also allows for the regeneration of D′ in a manner proportional to the distance below CV. The replenishment of D′ may allow athletes to perform the “kick” at the end of the time trial or race. In the context of racing, rather than solo time-trial performance, an athlete may accelerate and decelerate above and below CV multiple times depending on the tactics of the race [14].
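The depletion and regeneration of D′ described above can be sketched as a running balance. The simple linear scheme below (depletion by v − CV when above CV, recovery by CV − v when below, capped at the initial D′) is one of several published approaches to modeling D′ balance, and the CV, D′, and velocity values are illustrative, not data from this study.

```python
def d_prime_balance(velocities, cv, d_prime, dt=1.0):
    """Track remaining D' (m) over a race sampled at dt-second intervals:
    deplete by (v - CV) * dt above CV, recover by (CV - v) * dt below CV,
    never exceeding the starting capacity."""
    balance, history = d_prime, []
    for v in velocities:
        balance += (cv - v) * dt         # negative above CV, positive below
        balance = min(balance, d_prime)  # cannot exceed initial capacity
        history.append(balance)
    return history

# Illustrative surge above CV (5.2 m/s vs. CV = 4.7 m/s), then recovery below it
hist = d_prime_balance([5.2, 5.2, 4.4, 4.4], cv=4.7, d_prime=200)
```

A trace like this makes the tactical trade-off explicit: each surge above CV draws down a finite reserve that is only partially rebuilt during the sub-CV portions of the race.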
For HR, a statistically greater amount of time was spent in Zones 2 (~38%) and 3 (~50%) compared to Zone 1 (~12%). Although no statistically significant differences were observed based on the method of quantification, this contrasts with the distribution of intensity based on velocity, for which the majority of time was spent in Zone 2 (~75%), whereas the majority of time based on HR was spent in Zone 3. It is speculated that cardiovascular drift is likely the reason for the substantial variance between corresponding heart rate and velocity. Cardiovascular drift is characterized as the slow and steady increase in heart rate seen during extended bouts of endurance exercise despite a constant workload and intensity. It is hypothesized to occur due to fluid loss through perspiration: the total volume of circulating blood decreases, leading to decreases in stroke volume. To maintain the same cardiac output despite decreases in stroke volume, the heart increases its rate of contraction to maintain circulation and delivery of oxygen to working muscles [32]. An additional hypothesis suggests that, due to increases in skin temperature, blood begins to pool at the surface of the skin to promote the dissipation of heat through convection [32]. This diversion of blood flow also decreases the volume of blood returning to the heart, again requiring an increased rate of contraction to counteract decreases in stroke volume and maintain cardiac output. Although the development of cardiovascular drift assists individuals in maintaining a constant work rate despite increases in core and skin temperature, perceived exertion has been observed to increase when cardiovascular drift manifests [31]. Dehydration has been reported to contribute to the development of cardiovascular drift, but hydration and nutritional status were not assessed prior to races in the current investigation.
An interesting observation of the current investigation is the lack of statistically significant differences observed between the distributions of race time when quantified using running velocity and HR. The authors speculate that this is likely due to the large variances observed within zones using the two different methods. These large variances may be attributed to differences in environmental conditions on competition days, racecourse profiles, and differences in the kinetics of internal and external metrics of intensity. This is likely due in part to the physiological responses to hot and humid atmospheric conditions that can influence HR, but also to inherent differences between internal and external measures of intensity. External measures represent the physical work performed during training sessions and competitions (i.e., running velocity), while internal measures encompass the corresponding physiological response (i.e., heart rate) to the performed work [33]. A previous investigation of the TID of middle-distance runners over an 8-week period observed differences in TID when quantified based on running velocity versus HR [3]. A greater amount of time was spent in Zone 2 when using heart rate compared to running velocity. It was speculated that this was due to the slower kinetics of heart rate in response to changes in running velocity (e.g., running velocity would increase very quickly, but HR took longer to respond to the change in velocity). Although less time was spent in Zone 2 and more in Zone 3 for HR compared to running velocity in the current investigation, the same difference in response time between velocity and HR may contribute to the large variances observed during competition. Athletes accelerate, decelerate, and cross between Zones 2 and 3 very quickly in terms of running velocity, but due to slower HR kinetics, HR may remain elevated even after velocity has decreased, declining only gradually thereafter. HR is also influenced by the atmospheric conditions of the day, potentially resulting in a dissociation between alterations in velocity and alterations in HR. Additionally, an acute dose–response relationship exists between these two measures and is in constant flux due to differing states of fatigue. For example, a running velocity of 5.0 m/s may elicit a corresponding heart rate of 175 bpm during a training session after a day of recovery. However, after several days of training including extended bouts of running in Zone 3, the same 5.0 m/s may elicit an HR of 185 bpm due to accumulated fatigue, altering the distribution of intensity between zones [14]. The current investigation did not assess fatigue levels prior to competition and did not have access to training logs leading up to competitions.
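The lag of HR behind changes in running velocity discussed above can be illustrated with a first-order response model, in which HR approaches a velocity-dependent steady state with time constant τ. The linear velocity-to-HR mapping, the slope, and the τ value below are assumptions chosen for illustration, not parameters measured in this study.

```python
import math

def hr_response(velocities, hr0, tau_s, dt=1.0, hr_rest=60.0, slope=23.0):
    """First-order HR kinetics: at each dt-second step, HR moves toward the
    steady-state HR implied by the current velocity (hr_rest + slope * v)
    with time constant tau_s."""
    hr, out = hr0, []
    alpha = 1.0 - math.exp(-dt / tau_s)  # fraction of the gap closed per step
    for v in velocities:
        target = hr_rest + slope * v  # assumed linear velocity-HR mapping
        hr += alpha * (target - hr)
        out.append(hr)
    return out

# A step from 4.0 to 5.0 m/s: velocity changes instantly, HR lags behind
trace = hr_response([4.0] * 5 + [5.0] * 5, hr0=152.0, tau_s=30.0)
```

With a τ of ~30 s, HR is still well short of its new steady state several seconds after a surge, which is consistent with velocity crossing a zone boundary before HR does.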
A major benefit of assessing CV and D′ is the ability to model total work capacity, which can then be used to inform race strategy and to prescribe high-intensity intermittent training [25,30]. However, the modeled race time was statistically lower than the actual race time, with a large SEE and wide LOA observed. This is likely due to the equation not accounting for oscillations in running velocity dictated by the competition, strategy, and terrain of the racecourse. As previously mentioned, a common pacing strategy is to begin in Zone 3, decrease running velocity to below CV, in Zone 2, for much of the exercise duration, and then increase again into Zone 3 near the end of the race [31]. The equation models performance time as if the individual ran to their physiological limit on a flat surface, such as an outdoor running track. This likely does not reflect what most individuals do in races, where a U-shaped pacing strategy is most likely implemented, nor does it reflect the characteristics of collegiate CC racecourses [7]. It should be noted that participants and coaches were not made aware of their CV, D′, or modeled performance time and thus could not use this information in the development of race strategy. Future investigations should seek to better understand how these variables could be implemented by coaches and athletes in actual race scenarios.
The current investigation is not without limitations. A limited number of collegiate cross-country runners were recruited, with more males than females participating, limiting the generalizability of the results. Training logs were not available for analysis, making the degree of accumulated fatigue each participant carried into each competition unclear. Additionally, all performance metrics were assessed at the beginning of the competition phase of training. Though this phase traditionally targets the maintenance of physiological adaptations induced during previous phases, participants may have improved metrics, such as the velocities associated with GET and CV, thereby altering the zones.