Article

Player Tracking Data and Psychophysiological Features Associated with Mental Fatigue in U15, U17, and U19 Male Football Players: A Machine Learning Approach

1 Department of Sports Sciences, Polytechnic of Guarda, 6300-559 Guarda, Portugal
2 Department of Sports Sciences, Polytechnic of Cávado and Ave, 4800-058 Guimarães, Portugal
3 SPRINT—Sport Physical Activity and Health Research & Innovation Center, 6300-559 Guarda, Portugal
4 Research Center in Sports, Health and Human Development, 6200-000 Covilhã, Portugal
5 LiveWell—Research Centre for Active Living and Wellbeing, Polytechnic Institute of Bragança, 5300-253 Bragança, Portugal
6 CI-ISCE, ISCE Douro, 4560-000 Penafiel, Portugal
7 Department of Sports, Exercise and Health Sciences, University of Trás-os-Montes e Alto Douro, 5001-801 Vila Real, Portugal
8 Biosciences Higher School of Elvas, Polytechnic Institute of Portalegre, 7350-000 Portalegre, Portugal
9 Department of Sports Sciences, Polytechnic Institute of Bragança, 5300-253 Bragança, Portugal
10 Life Quality Research Center (LQRC-CIEQV), Complexo Andaluz, Apartado 279, 2001-904 Santarém, Portugal
11 Department of Sports Sciences, University of Beira Interior, 6200-001 Covilhã, Portugal
12 School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff CF23 6XD, UK
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(7), 3718; https://doi.org/10.3390/app15073718
Submission received: 7 February 2025 / Revised: 16 March 2025 / Accepted: 21 March 2025 / Published: 28 March 2025

Abstract

Optimizing recovery is crucial for maintaining performance and reducing fatigue and injury risk in youth football players. This study applied machine learning (ML) models to classify mental fatigue in U15, U17, and U19 male players using wearable signals, tracking data, and psychophysiological features. Over six weeks, training loads were monitored via GPS, psychophysiological scales, and heart rate sensors, analyzing variables such as total distance, high-speed running, recovery state, and perceived exertion. The data preparation process involved managing absent values, applying normalization techniques, and selecting relevant features. A total of five ML models were evaluated: K-Nearest Neighbors (KNN), Gradient Boosting (XGBoost), Support Vector Machine (SVM), Random Forest (RF), and Decision Tree (DT). XGBoost, RF, and DT achieved high accuracy, while KNN underperformed. Using a correlation matrix, average speed (AvS) was the only variable significantly correlated with the rating of perceived exertion (RPE) (r = 0.142; p = 0.010). After dimensionality reduction, ML models were re-evaluated, with RF and DT performing best, followed by XGBoost and SVM. These findings confirm that tracking and wearable-derived data are effective for predicting RPE, providing valuable insights for workload management and personalized recovery strategies. Future research should integrate psychological and interpersonal factors to enhance predictive modeling of the long-term individual health and performance of young football players.

1. Introduction

In youth football, effective recovery management has been described as fundamental for sustaining high performance, supporting motor learning, and reducing injury risk, particularly in younger players, where developmental factors add complexity to training and competition [1,2]. Thus, individual performance and psychophysiological readiness in U15, U17, and U19 young football players are significantly shaped by a combination of physical, mental, and environmental factors that impact their recovery states [3,4]. In this sense, player tracking data and psychophysiological features are essential for evaluating and monitoring training demands, enabling coaches to adjust training loads and implement tailored physical and mental recovery protocols [5,6]. Advances in tracking and wearable technologies have introduced non-invasive tools for tracking key performance metrics based on heart rate (HR), perceived exertion, and psychophysiological markers, offering insights into players’ recovery and mental states [7,8,9]. The complexity and volume of data collected from these tools require advanced analytical methods to extract actionable insights effectively [10].
Recently, machine learning (ML) models have become valuable artificial intelligence (AI)-based tools in sports performance, particularly for analyzing the player tracking data based on big data, data analytics, and data science [11,12]. By leveraging ML algorithms, it is possible to classify and predict recovery states based on multidimensional inputs, offering a more detailed perspective on the factors that influence effective psychophysiological recovery [13,14]. These ML models offer significant advantages over traditional statistical techniques, particularly in handling high-dimensional datasets and identifying non-linear patterns that are often present in psychophysiological data [15]. By leveraging these capabilities, ML enables more accurate and comprehensive predictions of recovery states based on multifactorial inputs [16,17].
Moreover, recent research demonstrates that physical and mental recovery plays a central role in athletic performance, with inadequate recovery potentially leading to fatigue accumulation, increasing injury risks, and decreasing performance [18,19]. For youth football players who are undergoing critical physical and psychological development, the importance of recovery management cannot be overstated. In this context, tracking and wearable technologies enable continuous real-time monitoring of key recovery indicators, such as HR, physical performance, and psychophysiological features, providing valuable insights into players’ recovery status [6]. This study employs a methodology that integrates data collection from wearable and tracking systems, data preprocessing, and ML-based analysis to develop and validate predictive models for traditional recovery classification.
Indeed, psychophysiological factors are integral to understanding athletic performance and individual training loads, as they significantly influence how athletes respond to physical demands [20]. From a practical perspective, a player’s self-perception of motor competence plays a crucial role in shaping their performance [21,22]. Youth players with higher levels of perceived motor competence often exhibit greater confidence and motivation, which positively impacts their performance during training and competition [23]. Conversely, those with lower self-perception may experience reduced motivation and heightened anxiety, leading to suboptimal performance outcomes. Moreover, interpersonal dynamics, including relationships with coaches, teammates, and family members, can either support or hinder an athlete’s mental state and psychophysiological recovery [24,25]. Positive interactions foster motivation and resilience, while negative interactions can contribute to stress, ultimately affecting performance and recovery capacity [21]. This study recognizes the importance of addressing these individual differences in training and recovery processes through personalization [4,26]. ML models provide an effective means to integrate these diverse factors, enabling the development of psychophysiological recovery strategies tailored to each athlete’s unique needs [13,14]. This approach is particularly relevant for youth athletes, who are at varying stages of physical maturation and psychological development, requiring nuanced and adaptive recovery protocols [26]. Personalized recovery strategies have been increasingly explored using individualized adjustments in training loads based on tracking and wearable-derived metrics, which can significantly enhance recovery outcomes [27,28,29]. This study builds on these findings by integrating ML techniques to optimize recovery management, providing a novel approach to personalizing recovery protocols. The integration of advanced wearable technologies and ML models represents a significant advancement in sports science, enabling more precise and data-driven approaches to fatigue and recovery management [30,31]. This study contributes to this growing field by focusing on recovery in young football players, specifically in the U15, U17, and U19 categories, where developmental considerations add complexity to training and recovery strategies [18]. By enhancing our understanding of recovery states, we aim to provide practical tools and insights that assist coaches, physical trainers, and other professionals in designing safer and more effective training protocols.
This research investigates the use of ML algorithms to classify mental fatigue and recovery states in U15, U17, and U19 young football players by integrating diverse datasets collected from wearable devices, tracking systems, and psychophysiological features. Our objective is to identify the key features associated with optimal recovery by applying automated ML models and, from this, to derive research-practice insights for developing personalized recovery strategies that address the unique needs of young athletes during critical stages of their physical and psychological development.

2. Materials and Methods

2.1. Study Design

During the initial month of the 2019–2020 competitive season, the training load was monitored weekly in U15, U17, and U19 young sub-elite football players. Data were collected from 18 training sessions over a period of six weeks, resulting in a total of 324 observations. Match data were excluded from the analysis, and training days were organized according to the “match day minus” (MD) system: MD-3 (Tuesday), MD-2 (Wednesday), and MD-1 (Friday). Typically, 18 players took part in each session. The players were included in the analysis only if they consistently participated in one match per week and fully attended all training sessions. Each microcycle consisted of three weekly training sessions, each approximately 90 min long.
The weekly training sessions were structured in collaboration with the coaching staff and included a standardized warm-up protocol. Warm-ups involved low-intensity running, dynamic stretching of key lower limb muscle groups, technical drills, and ball possession exercises. Weekly training schedules incorporated various modes, emphasizing game-based scenarios, sport-specific technical skills, and football-focused activities [4]. All training activities occurred on outdoor pitches that met FIFA’s standard dimensions (100 × 70 m) and were surfaced with synthetic grass. Training sessions were conducted between 10:00 AM and 8:00 PM under regulated environmental conditions, with temperatures ranging from 14 to 20 °C and relative humidity between 52% and 66%.

2.2. Participants

This study involved sixty male sub-elite football players with the following characteristics: height of 1.74 ± 0.08 m, weight of 62.48 ± 10.03 kg, and body mass index (BMI) of 20.61 ± 2.14 kg/m². Additional measurements included an average sitting height of 88.36 ± 8.51 cm and a predicted adult height of 14.20 ± 1.39 cm. The young players had an average of 6.76 ± 1.42 years of playing experience and a relative age of 0.25 ± 0.18.

2.3. Ethical Aspects

In accordance with ethical standards, all participants received comprehensive information about the study’s aims and potential risks. Consent was obtained through signed forms either from the participants themselves or from their legal guardians if they were minors. The research methodology was reviewed and authorized by the Ethics Committee at the University of Trás-os-Montes e Alto Douro (approval number 3379-5002PA67807).

2.4. Data Collection

Young football players were monitored during the training sessions using portable GPS devices (STATSports Apex®, Newry, Northern Ireland). These units operated at a sampling frequency of 18 Hz and incorporated an accelerometer (100 Hz), a magnetometer (10 Hz), and a gyroscope (100 Hz), capturing raw data on movement, speed, and distance. Each GPS unit was securely positioned in a designated pocket on a specialized vest provided by the manufacturer, situated on the upper back between the shoulder blades. To ensure proper satellite signal reception, all devices were activated 30 min prior to the start of data collection [26]. The validity and reliability of this GPS device’s tracking are well established in the literature [6,32]. To ensure the reliability of the data collected through wearable devices, we referred to previous studies that validated the accuracy of the GPS and HR monitors used in this study [6,32]. Potential measurement errors were minimized by applying data smoothing techniques and discarding outliers beyond a reasonable range based on established thresholds.

2.5. Variables

Player tracking data. The external training load was extracted using the APEX Pro Series Software (v. 2.0.2.4) with the following variables: total distance (TD) covered (m), average speed (AvS), maximal running speed (MRS) (m/s), relative high-speed running (rHSR) distance (m), high metabolic load distance (HMLD) (m), sprinting distance (SPR) (m), dynamic stress load (DSL) (a.u.), number of accelerations (ACC), and number of decelerations (DEC). The GPS software tracked locomotor activities exceeding 19.8 km/h, categorizing them into rHSR (19.8–25.1 km/h) and SPR (>25.1 km/h). Sprint performance was assessed by the number of sprints and average sprint distance (m). HMLD was defined as the distance covered when a player’s metabolic power surpassed 25.5 W/kg. The ACC and DEC focused specifically on movements within the highest intensity ranges, with ACC exceeding 3 m/s² and DEC falling below −3 m/s². DSL was determined using a 100 Hz triaxial accelerometer integrated into the GPS units, which combined accelerations along the three orthogonal axes (X, Y, and Z) to generate a total vector magnitude expressed as G-force [6,32].
Wearable features. The internal training load was captured using Garmin HR-band sensors with a 1 Hz short-range telemetry system (Garmin International Inc., Olathe, KS, USA). Metrics included maximum heart rate (HRmax), average heart rate (AvHR), and the percentage of HRmax (%HRmax) [33]. Akubat TRIMP was used to quantify the training impulse, calculated as training duration × 0.2053 × e^(3.5179 × HRratio), where HRratio was derived from players’ iTRIMP values and HRmax was determined via the Yo-Yo Intermittent Recovery Test Level 1 (YYIR1) [34,35].
Psychological perceived features. The Borg Rating of Perceived Exertion 6–20 scale was utilized to evaluate subjective exertion, fatigue, and recovery [36]. The session RPE (sRPE) was computed by multiplying each player’s RPE score by the session duration (sRPE = RPE × session time) [18,19]. Recovery perception was assessed using the Total Quality Recovery (TQR) scale, scored from 6 to 20. TQR and RPE were collected 30 min before and after training sessions, respectively, to gauge players’ recoveries and efforts. Both scales have been validated in prior studies involving youth football athletes [5].
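As an illustrative computation of these perceived-load metrics, a minimal Python sketch is shown below; the session duration, RPE score, and HR ratio are hypothetical, and the iTRIMP constants follow the formula reported above.

```python
import math

def session_rpe(rpe: float, duration_min: float) -> float:
    """Session RPE (sRPE, a.u.): RPE score multiplied by session duration in minutes."""
    return rpe * duration_min

def akubat_itrimp(duration_min: float, hr_ratio: float) -> float:
    """Individualized TRIMP: duration x 0.2053 x e^(3.5179 x HRratio)."""
    return duration_min * 0.2053 * math.exp(3.5179 * hr_ratio)

# Hypothetical example: a 90-min session rated 14 on the Borg 6-20 scale,
# with an HR ratio of 0.65 from the player's individual profile.
print(session_rpe(14, 90))      # 1260.0 a.u.
print(akubat_itrimp(90, 0.65))  # approximately 182 a.u.
```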

2.6. Data Preprocessing and Normalization

The data were processed and analyzed using Python™ (version 3.10.4), a computational programming language [37]. To handle, visualize, and manipulate the dataset, we made use of the “pandas”, “numpy”, “matplotlib.pyplot”, and “seaborn” libraries [38]. With less than 10% null values across the dataset, missing data points were imputed by replacing them with the mean value of each respective column [39]. The target variable, representing the recovery state, was multi-class, and therefore, one-hot encoding was applied to convert categorical class labels into numeric arrays interpretable by ML algorithms [40].
To address significant discrepancies observed in the numeric scales of the features, normalization was performed using the “StandardScaler” function from the “sklearn.preprocessing” library [38,41]. This process standardized each feature to zero mean and unit variance, enhancing interpretability for algorithms that apply the sigmoid function, defined as σ(x) = 1 / (1 + e^(−x)), where “x” is the independent variable and “e” is Euler’s number (e ≈ 2.71828) [42]. This normalization ensured the data were suitable for further modeling and analysis.
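A minimal preprocessing sketch consistent with the steps described above is given below; the file name and the 'recovery_state' column are hypothetical placeholders for the actual dataset fields.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical dataset with GPS, heart rate, and perceived-exertion columns.
df = pd.read_csv("training_load.csv")

# Mean imputation for numeric columns (less than 10% of values are missing).
numeric_cols = df.select_dtypes(include="number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].mean())

# One-hot encoding of the multi-class target (assumed here to be 'recovery_state');
# scikit-learn classifiers also accept the original class labels directly.
y_onehot = pd.get_dummies(df["recovery_state"])
y = df["recovery_state"]

# Standardization of the feature matrix (zero mean, unit variance).
X = StandardScaler().fit_transform(df[numeric_cols])
```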

2.7. Classifying Algorithms Implementation

To split the dataset for training and testing, we utilized the library “from sklearn.model_selection import train_test_split” and activated the “train_test_split” function. The data were divided into 70% (226 rows) for training and 30% (98 rows) for testing, ensuring a robust evaluation of the model’s performance [38,39]. A random seed of 42 was applied to maintain consistency and reproducibility across code execution [43]. Five ML classifiers were implemented using the following libraries: (1) “sklearn.neighbors import KNeighborsClassifier” for the K-Nearest Neighbors (KNN) model; (2) “from sklearn.ensemble import GradientBoostingClassifier” for the Gradient Boosting Classifier (XGBoost); (3) “from sklearn.svm import SVC” for the Support Vector Machine (SVM) algorithm; (4) “from sklearn.ensemble import RandomForestClassifier” for the Random Forest Classifier (RF); and (5) “from sklearn.tree import DecisionTreeClassifier” for the Decision Tree Classifier (DT) [37,38,44,45].
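Under the configuration described above, the split and the five classifiers can be set up roughly as follows; this is a sketch with default hyperparameters, where X and y are the preprocessed feature matrix and class labels from Section 2.6, and the “XGBoost” label follows the paper’s use of scikit-learn’s GradientBoostingClassifier.

```python
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# 70/30 split with a fixed seed for reproducibility.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42
)

models = {
    "KNN": KNeighborsClassifier(),
    "XGBoost": GradientBoostingClassifier(random_state=42),
    "SVM": SVC(random_state=42),
    "RF": RandomForestClassifier(random_state=42),
    "DT": DecisionTreeClassifier(random_state=42),
}
```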
To evaluate model performance, the library “from sklearn.metrics import accuracy_score, confusion_matrix, classification_report” was utilized, activating functions to calculate accuracy, precision, recall, and F1-score [46,47]. These metrics provided a comprehensive understanding of the classifiers’ effectiveness. Finally, the algorithms’ assumptions and applications were summarized as follows: (1) KNN is a distance-based algorithm evaluating the similarity between data points; (2) XGBoost is an ensemble learning method combining weak predictors for enhanced accuracy; (3) SVM is a classification technique utilizing hyperplanes for optimal decision boundaries; (4) RF is a tree-based ensemble method focusing on reducing overfitting by averaging predictions; (5) DT is a hierarchical structure used for straightforward classification tasks.
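A sketch of the corresponding evaluation loop, reusing the split and the models dictionary from the previous sketch:

```python
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    print(name, "accuracy:", accuracy_score(y_test, y_pred))
    print(confusion_matrix(y_test, y_pred))
    # Per-class precision, recall, and F1-score.
    print(classification_report(y_test, y_pred))
```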
Model evaluation focused on the accuracy, precision, recall, and F1-score metrics to ensure the robustness and generalizability of the classifiers. To enhance model robustness, future analyses can incorporate k-fold cross-validation techniques, which will provide a more comprehensive assessment of model performance by minimizing potential biases from a single train–test split.
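In line with that suggestion, a possible stratified k-fold sketch (assuming the X, y, and models objects from the sketches above):

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```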

2.8. K-Nearest Neighbors Classifier

The KNN classifier assigns a class label to a data point by analyzing the majority class of its nearest neighbors within the feature space [48]. The classification process can be represented mathematically as
Y = mode{ Y_neighbors : neighbors ∈ K }
Here, Y represents the predicted class label, Y_neighbors denotes the class labels of the K-Nearest Neighbors, and mode identifies the most frequently occurring class label among the neighbors.
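The majority-vote rule can be illustrated with a toy computation; the neighbor labels below are hypothetical.

```python
from collections import Counter

# Class labels of the K = 5 nearest neighbors of a query point (hypothetical).
neighbor_labels = ["recovered", "fatigued", "recovered", "recovered", "fatigued"]

# The predicted label is the mode (most common value) of the neighbors' labels.
predicted = Counter(neighbor_labels).most_common(1)[0][0]
print(predicted)  # "recovered"
```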

2.9. Gradient Boosting Classifier

The XGBoost constructs an ensemble of trees in a sequential manner, where each subsequent tree focuses on correcting the errors made by the previous ones by optimizing a predefined loss function [49]. The XGBoost process can be represented by the following equation:
F_m(x) = F_{m−1}(x) + γ_m H_m(x)
In this equation, F_m(x) indicates the prediction made by the m-th model, F_{m−1}(x) refers to the prediction of the previous model, γ_m represents the learning rate that controls the influence of each additional tree, and H_m(x) denotes the m-th weak learner, usually a decision tree.
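The stage-wise, additive nature of the ensemble can be inspected through staged predictions; a sketch assuming the fitted split and models dictionary from Section 2.7.

```python
from sklearn.metrics import accuracy_score

gb = models["XGBoost"].fit(X_train, y_train)

# Test accuracy after each boosting stage, i.e., as F_m is built from F_{m-1}.
for m, y_stage in enumerate(gb.staged_predict(X_test), start=1):
    if m % 25 == 0:
        print(f"after {m} trees: {accuracy_score(y_test, y_stage):.3f}")
```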

2.10. Support Vector Machine

The SVM algorithm identifies the hyperplane in the feature space that separates the classes with the largest possible margin [50]. The optimization problem for SVM is formulated as follows:
Minimize ½‖w‖² subject to Y_i (w·X_i + b) ≥ 1
Here, w represents the weight vector that specifies the orientation of the hyperplane, b refers to the bias term that shifts the hyperplane’s position, Y_i indicates the class label associated with the i-th training sample, X_i denotes the feature vector of the i-th training sample, and w·X_i + b describes the function that measures a point’s distance from the hyperplane.
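The term w·X_i + b corresponds to the classifier’s decision function; a brief sketch, assuming a linear kernel and the split from Section 2.7, with the binary case shown for simplicity.

```python
from sklearn.svm import SVC

svm = SVC(kernel="linear").fit(X_train, y_train)

# Signed distances (up to the norm of w) of the first five test samples from the hyperplane.
print(svm.decision_function(X_test[:5]))
```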

2.11. Random Forest Classifier

The RF constructs an ensemble of decision trees during the training phase and determines the final classification output based on the majority vote among the predictions of all trees [51]. The RF process can be mathematically represented as
Y = mode{ H_t(x) : t = 1, …, T }
In this context, Y denotes the predicted class label, H_t(x) indicates the forecast generated by the t-th decision tree, T represents the total number of trees in the forest, and mode determines the most frequently occurring class label among the predictions made by all trees.

2.12. Decision Tree Classifier

The DT partitions data into subsets by evaluating the most important feature at each node, maximizing class separation at every split [52]. The split criterion is described mathematically as
Split criterion: Gini(t) = 1 − Σ_{i=1}^{n} p_i²
In this equation, Gini(t) refers to the Gini impurity calculated at a specific node t, n represents the total number of different classes, and p_i denotes the probability of randomly selecting an element that belongs to class i at node t.
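A short function computing the Gini impurity of the class distribution at a node; the label counts in the example are hypothetical.

```python
from collections import Counter

def gini_impurity(labels):
    """Gini(t) = 1 - sum_i p_i^2 over the class proportions at node t."""
    n = len(labels)
    counts = Counter(labels)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# A node holding 8 'recovered' and 2 'fatigued' samples (hypothetical).
print(gini_impurity(["recovered"] * 8 + ["fatigued"] * 2))  # 0.32
```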

2.13. Model Evaluation

The models’ performance was evaluated using the following metrics: accuracy, precision, recall, and F1-score, defined as follows:
1. Accuracy score
Accuracy quantifies the proportion of correctly classified instances out of the total number of instances. It is determined by dividing the sum of true positives (TPs) and true negatives (TNs) by the total number of all classification outcomes, including true positives (TPs), true negatives (TNs), false positives (FPs), and false negatives (FNs) [46].
Accuracy = (TP + TN) / (TP + TN + FP + FN)
In this formula, TP represents the number of true positives, TN represents the number of true negatives, FP represents the number of false positives, and FN represents the number of false negatives.
2. Precision
Precision evaluates the proportion of positive predictions that are correct. It is calculated by dividing the number of true positives by the total number of predicted positive instances (TP + FP) [46].
Precision = TP / (TP + FP)
Here, TP represents the number of true positives, and FP represents the number of false positives.
3. Recall (sensitivity)
Recall, also referred to as sensitivity or the true positive rate, measures the proportion of actual positive cases that are correctly identified by the model. It is determined as the ratio of the number of true positives to the sum of true positives and false negatives [46].
Recall = TP / (TP + FN)
In this context, TP represents the number of true positives, and FN represents the number of false negatives.
4. F1-score
The F1-score combines precision and recall into a single metric by computing their harmonic mean, providing a balance between the two. It is determined as follows [46]:
F1-score = 2 × (Precision × Recall) / (Precision + Recall)
In this formula, both precision and recall are considered to provide a balanced evaluation of the model’s performance.
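For a single class, these four metrics reduce to simple counts; a minimal sketch with hypothetical confusion-matrix entries:

```python
tp, tn, fp, fn = 40, 30, 10, 18  # hypothetical true/false positives and negatives

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(accuracy, precision, recall, f1)
```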

3. Results

3.1. Variables Selection for ML Algorithm

To ensure that the model effectively predicts the RPE, a feature selection process was applied to identify the most relevant variables. The selection process was based on domain knowledge, correlation analysis, and statistical significance testing. A Pearson correlation analysis was conducted between RPE and the selected variables, with statistical significance determined at p < 0.05. The results of the correlation analysis are presented in Table 1.
From this analysis, AvS was the only variable that showed a statistically significant correlation with RPE (p = 0.010). However, all selected features were retained in the models because of their biomechanical and physiological relevance in training load monitoring. The final selected variables are TD, HSRr, HMLD, AvS, SPR, DSL, sRPE_CR10, ACC, DEC, Cal, TS, and anthropometric variables (weight, height, BMI).
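The correlation screening can be reproduced with scipy; a sketch assuming a pandas DataFrame df whose column names match the variables above (the 'RPE' column name is an assumption).

```python
from scipy.stats import pearsonr

candidate_features = ["TD", "HSRr", "HMLD", "AvS", "SPR", "DSL", "sRPE_CR10",
                      "ACC", "DEC", "Cal", "TS", "Weight", "Height", "BMI"]

for feature in candidate_features:
    r, p = pearsonr(df[feature], df["RPE"])
    flag = "Yes" if p < 0.05 else "No"
    print(f"{feature}: r = {r:.3f}, p = {p:.3f}, significant: {flag}")
```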

3.2. Algorithm Performance in Predicting Perceived Exertion

In the raw dataset, all features were initially considered for the implementation of the ML models as an exploratory step. The models were evaluated based on their performance in predicting the RPE. Table 2 presents the results. Since the DT model demonstrated the highest predictive power, it was selected as the most effective model for predicting RPE.

3.3. Performance After Feature Selection

After applying recursive feature elimination (RFE) to reduce data dimensionality, the models were re-evaluated using the 16 most relevant features. The RFE method was chosen for its ability to iteratively eliminate less significant features based on their impact on model performance.
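A sketch of this feature-selection step, retaining the 16 most relevant features; the choice of a Random Forest as the ranking estimator is an assumption.

```python
from sklearn.feature_selection import RFE
from sklearn.ensemble import RandomForestClassifier

selector = RFE(
    estimator=RandomForestClassifier(random_state=42),
    n_features_to_select=16,
)
X_train_sel = selector.fit_transform(X_train, y_train)
X_test_sel = selector.transform(X_test)

# Boolean mask indicating which features were retained.
print(selector.support_)
```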
The results showed that model performance remained stable, indicating that removing less important features did not negatively affect accuracy. Specifically, the DT and RF algorithms achieved the highest accuracies, reaching 85% and 87%, respectively. These scores were higher than those of the full-feature models, suggesting that eliminating redundant features enhanced efficiency.
The XGBoost and SVM models followed with accuracies of 82% and 80%, respectively. Regarding precision, recall, and F1-score, RF and DT also demonstrated superior performance, with an average F1-score of 0.85. Overall, the ML models’ performance was considered consistent and effective, demonstrating that reducing the feature set maintained or even slightly improved predictive accuracy.

4. Discussion

This study investigated the use of ML models to classify mental fatigue and recovery states in U15, U17, and U19 male sub-elite youth football players, emphasizing the multi-class classification approach and the integration of positional-specific data. The findings confirm that wearable-derived variables are effective in predicting RPE, with ML models providing valuable insights into athlete workload and recovery management. By analyzing recovery-related metrics, the study aims to identify key features associated with optimal recovery, leveraging these insights to develop personalized strategies that address the unique needs of young athletes during critical stages of their physical and psychological development.
Initially, all features from the raw dataset were included in an exploratory step to implement the algorithms, resulting in top performances for XGBoost, RF, and DT classifiers, each achieving a range of accuracy values over several iterations (i.e., 30% to 35%). To improve the model’s performance, feature selection was employed to reduce data dimensionality. While the correlation analysis did not establish statistical significance for most features, variables such as TD, rHSR, HMLD, and DSL have been widely acknowledged in sports science literature for their impact on perceived exertion. Therefore, despite their lack of direct correlation in this dataset, their inclusion ensures that the model can capture broader training-related patterns that may influence RPE. These features were deemed most relevant for classifying the recovery states of the sub-elite young football players [24,25].
The evaluation of ML models for predicting RPE demonstrated that the DT classifier outperformed other algorithms, achieving the highest accuracy (32.65%), precision (31.83%), recall (32.65%), and F1-score (31.67%). The superior performance of the DT model suggests its effectiveness in handling non-linear relationships within the dataset, making it the most suitable algorithm for RPE prediction in these weekly training microcycles. These methods proficiently address the intricacy and variation inherent in the psychophysiological feature data obtained from wearable sensors [24,25]. The consistent underperformance of the KNN algorithm, however, suggests that it may be less effective in dealing with the high-dimensional and potentially noisy nature of the data.
Among the other models tested, RF and XGBoost exhibited moderate predictive performance, with accuracy scores of 28.57% and 25.51%, respectively. These ensemble-based models generally perform well with structured data; however, their lower accuracy compared to the Decision Tree suggests that the dataset may not contain enough complexity to benefit from ensemble learning techniques. Additionally, KNN and SVM demonstrated the lowest predictive performance, indicating that distance-based and margin-based learning methods may not be well-suited for RPE prediction in this dataset. The feature selection results highlight the critical role of specific physiological and performance metrics in determining recovery states. For instance, variables such as DSL and HSRr are direct indicators of the physical stress experienced by players, while metrics like sRPE and total distance provide insights into perceived exertion and overall workload [53,54]. The inclusion of positional data further emphasizes the importance of contextual factors, as different playing positions have varying physical demands and recovery profiles. Positional data in football have traditionally captured physical and tactical factors separately, but recent research emphasizes integrating player tracking data and spatiotemporal parameters [3,4]. This approach combines physical and tactical variables to provide a more comprehensive understanding of individual player performance [3].
The relatively low predictive performance across all models highlights potential challenges in modeling RPE using objective training load metrics. RPE is naturally subjective and is affected by mental, environmental, and physical factors that may not be completely reflected in the available dataset. Future research should consider integrating additional contextual variables, such as athlete fatigue levels, motivation, and external conditions, to enhance predictive accuracy. Such integrative analysis is becoming increasingly important as it offers a holistic view of a player’s contributions on the field [55]. In addition, affordable and non-invasive techniques, such as heart rate monitors and perceived effort scales, are commonly employed [24,25]. These tools are practical for monitoring psychophysiological fatigue and changes in performance during matches and training sessions, making them accessible and effective for regular use in sports settings [24]. Overall, the DT model’s effectiveness suggests that simpler models with interpretable decision rules may provide more reliable insights into RPE prediction.
Future work can explore feature engineering techniques or hybrid models to further refine predictive performance and improve the practical application of ML models in training load monitoring. From a research-practice perspective, understanding the impact of psychophysiological variables on physical performance and training load is crucial. The relatively low accuracy suggests that incorporating objective biomechanical parameters, such as joint mechanical demands, can enhance the predictive performance of the models. This aligns with recent studies that emphasize the value of combining subjective and objective indicators for more accurate recovery assessments [56,57].
Self-perception of motor competence can significantly influence motor performance, with higher perceived competence correlating with greater confidence and motivation, thereby enhancing performance [21,22]. Conversely, low self-perception can lead to decreased motivation and increased anxiety, negatively impacting performance [20]. From a health perspective, mental health issues and interpersonal relationships also significantly affect physical performance [24,25]. Mental health problems, such as depression, anxiety, and stress, can impair concentration, reduce energy levels, and disrupt sleep patterns, all of which negatively impact physical performance. Interpersonal relationships, including interactions with coaches, teammates, and family members, can either provide essential support and motivation or contribute to psychological stress, further influencing an athlete’s ability to perform and recover effectively [58,59]. The most important variables identified by the ML model were DSL, HSRr, sRPE, and sRPE_norm. Interestingly, TQR was eliminated from the model, underscoring the multifactorial nature of recovery. However, the control of perceived exertion seems to be predominantly associated with body impacts (i.e., DSL) and high-intensity movements (i.e., HSRr, HMLD, and ACC).
This research shows that the chosen parameters obtained from wearable sensors are successful in forecasting the recovery conditions of youth football players. The findings support the potential for personalized recovery protocols that can enhance performance and reduce injury risks [60]. Future research should continue to explore the integration of psychological variables and the impact of interpersonal dynamics on recovery and performance, further refining the predictive models and enhancing their applicability in real-world sports settings. Moreover, this study highlights the importance of personalized approaches in football training [61]. Each athlete responds uniquely to training loads and recovery processes, influenced by factors such as age, developmental stage, and individual physiology. ML models facilitate the creation of tailored recovery strategies that address these individual differences, thereby optimizing training adaptations and overall athletic performance [38,39]. This study has several limitations that should be acknowledged. First, the sample size was limited to 60 sub-elite Portuguese young football players, which may restrict the generalizability of the findings to broader or more diverse populations. Second, the primary reliance on the sRPE as a subjective measure may introduce bias, as individual perceptions of effort can vary significantly. Additionally, the absence of objective biomechanical parameters, such as joint loads and muscle activation data, may have limited the accuracy of the ML models. The relatively low predictive accuracy observed suggests that incorporating a combination of subjective and objective metrics may improve model performance. Larger and more diverse samples, as well as the integration of objective biomechanical data, may enhance the robustness of the findings.

5. Future Perspectives and Practical Applications

Future studies should explore the integration of biomechanical parameters alongside RPE to enhance model accuracy. Additionally, implementing hybrid models that combine ML techniques may provide a more comprehensive approach to monitoring athlete recovery. Measuring the player tracking data and psychophysiological features associated with the recovery and mental fatigue states of U15, U17, and U19 football players using ML classification models can help optimize the match between wearable-tracking technology and monitoring needs. As AI-based techniques continue to evolve, their application in sports will likely expand, providing deeper insights and more precise interventions. This progress holds the promise of elevating training standards and improving the overall health and performance of athletes in youth football and beyond. Also, we hope that this study can inform evidence-based interventions and enhance training and recovery protocols in youth football programs. With this, we can improve motor engagement time and the enjoyment of sports practice, and reduce the risk of mental fatigue and its consequences for retention in practice and mental health.
While the findings provide valuable insights for sub-elite youth football players, the limited sample size and focus on Portuguese players may restrict generalizability. Future studies should consider including a broader and more diverse cohort to enhance the applicability of the proposed models across different populations. Despite moderate predictive accuracy, the ML models offer practical value by identifying trends in fatigue and recovery that can inform adjustments in training loads, allowing coaches to proactively manage athlete readiness and minimize injury risks. Coaches can leverage model predictions by decreasing training intensity or prioritizing active recovery protocols on days when predicted fatigue levels are elevated, ensuring a balanced workload that aligns with each player’s recovery capacity.

6. Conclusions

In conclusion, the integration of player tracking data and psychophysiological features with ML models represents a significant advancement in sports science. AvS was the only variable significantly correlated with RPE. After dimensionality reduction, RF and DT were the best-performing ML models, followed by XGBoost and SVM. This article contributes to this growing field by focusing on the recovery processes of young sub-elite football players and presenting a data-driven approach to improve athlete care. By enhancing the understanding of recovery states, we aim to equip coaches, trainers, and sports scientists with the tools and knowledge needed to promote more effective and safer training practices.

Author Contributions

Conceptualization, J.E.T., R.F., P.F. and T.M.B.; methodology, J.E.T., L.B., R.F., R.M. and T.M.B.; software, L.B.; validation, L.B., R.M., T.M.B. and A.M.M.; formal analysis, J.E.T., P.A. and R.M.; investigation, J.E.T.; resources, R.F. and T.M.B.; data curation, P.F.; writing—original draft preparation, J.E.T. and P.A.; writing—review and editing, J.E.T., A.S., L.B., E.M., R.N., R.F., R.M., P.A., P.F., T.M.B. and A.M.M.; visualization, J.E.T. and R.N.; supervision, P.F. and A.M.M.; project administration, P.F. and A.M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee at the University of Trás-os-Montes e Alto Douro (3379-5002PA67807).

Informed Consent Statement

Informed consent was obtained from legal guardians or parents of the subjects involved in the study.

Data Availability Statement

The data supporting the findings of this study are not publicly available due to privacy or ethical restrictions but can be provided by the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Clemente, F.M.; Rabbani, A.; Araújo, J.P. Ratings of perceived recovery and exertion in elite youth soccer players: Interchangeability of 10-point and 100-point scales. Physiol. Behav. 2019, 210, 112641. [Google Scholar] [CrossRef] [PubMed]
  2. Polito, L.F.T.; Figueira, A.J.; Miranda, M.L.J.; Chtourou, H.; Miranda, J.M.; Brandao, M.R.F. Psychophysiological indicators of fatigue in soccer players: A systematic review. Sci. Sports 2017, 32, 1–13. [Google Scholar] [CrossRef]
  3. Teixeira, J.E.; Forte, P.; Ferraz, R.; Branquinho, L.; Silva, A.J.; Monteiro, A.M.; Barbosa, T.M. Integrating physical and tactical factors in football using positional data: A systematic review. PeerJ 2022, 10, e14381. [Google Scholar] [CrossRef] [PubMed]
  4. Teixeira, J.E.; Forte, P.; Ferraz, R.; Branquinho, L.; Morgans, R.; Silva, A.J.; Monteiro, A.M.; Barbosa, T.M. Resultant equations for training load monitoring during a standard microcycle in sub-elite youth football: A principal components approach. PeerJ 2023, 11, e15806. [Google Scholar] [CrossRef]
  5. Brink, M.S.; Nederhof, E.; Visscher, C.; Schmikli, S.L.; Lemmink, K.A.P.M. Monitoring load, recovery, and performance in young elite soccer players. J. Strength Cond. Res. 2010, 24, 597. [Google Scholar] [CrossRef] [PubMed]
  6. Teixeira, J.E.; Forte, P.; Ferraz, R.; Leal, M.; Ribeiro, J.; Silva, A.J.; Barbosa, T.M.; Monteiro, A.M. Monitoring accumulated training and match load in football: A systematic review. Int. J. Environ. Res. Public Health 2021, 18, 3906. [Google Scholar] [CrossRef]
  7. Vilamitjana, J.J.; Lentini, N.A.; Perez, M.F.; Verde, P.E. Heart Rate Variability as Biomarker of Training Load in Professional Soccer Players. Med. Sci. Sports Exerc. 2014, 46, 842–843. [Google Scholar] [CrossRef]
  8. Makivic, B.; Nikić, M.D.; Willis, M. Heart Rate Variability (HRV) as a Tool for Diagnostic and Monitoring Performance in Sport and Physical Activities. J. Exerc. Physiol. Online 2013, 16, 103–131. [Google Scholar]
  9. Nobari, H.; Azarian, S.; Saedmocheshi, S.; Valdés-Badilla, P.; García Calvo, T. Narrative review: The role of circadian rhythm on sports performance, hormonal regulation, immune system function, and injury prevention in athletes. Heliyon 2023, 9, e19636. [Google Scholar] [CrossRef]
  10. Pino-Ortega, J.; Oliva-Lozano, J.M.; Gantois, P.; Nakamura, F.Y.; Rico-Gonzalez, M. Comparison of the validity and reliability of local positioning systems against other tracking technologies in team sport: A systematic review. Proc. Inst. Mech. Eng. Part P J. Sports Eng. Technol. 2022, 236, 73–82. [Google Scholar] [CrossRef]
  11. Shvets, U.S.; Kniaz, I.O. Python and Data Science. 2021. Available online: https://essuir.sumdu.edu.ua/handle/123456789/86135 (accessed on 23 January 2025).
  12. Sarker, I.H. Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Comput. Sci. 2021, 2, 160. [Google Scholar] [CrossRef]
  13. Kampakis, S. Comparison of machine learning methods for predicting the recovery time of professional football players after an undiagnosed injury. MLSA@PKDD/ECML 2013, 1969, 58–68. [Google Scholar]
  14. Majumdar, A.; Bakirov, R.; Hodges, D.; Scott, S.; Rees, T. Machine Learning for Understanding and Predicting Injuries in Football. Sports Med. Open 2022, 8, 73. [Google Scholar] [CrossRef]
  15. Ali, A.; Jayaraman, R.; Azar, E.; Maalouf, M. A comparative analysis of machine learning and statistical methods for evaluating building performance: A systematic review and future benchmarking framework. Build. Environ. 2024, 252, 111268. [Google Scholar]
  16. Simonelli, C.; Formenti, D.; Rossi, A. Subjective recovery in professional soccer players: A machine learning and mediation approach. J. Sports Sci. 2025, 43, 448–455. [Google Scholar] [CrossRef]
  17. Freitas, D.N.; Mostafa, S.S.; Caldeira, R.; Santos, F.; Fermé, E.; Gouveia, É.R.; Morgado-Dias, F. Predicting noncontact injuries of professional football players using machine learning. PLoS ONE 2025, 20, e0315481. [Google Scholar]
  18. Díaz-García, J.; González-Ponce, I.; Ponce-Bordón, J.C.; López-Gajardo, M.Á.; Ramírez-Bravo, I.; Rubio-Morales, A.; García-Calvo, T. Mental Load and Fatigue Assessment Instruments: A Systematic Review. Int. J. Environ. Res. Public Health 2022, 19, 419. [Google Scholar] [CrossRef]
  19. Meng, T.; Yang, J.Y.; Huang, D.Y. Intervention of Football Players’ Training Effect Based on Machine Learning. In Proceedings of the 2022 2nd International Conference on Consumer Electronics and Computer Engineering (ICCECE), Guangzhou, China, 14–16 January 2022; pp. 592–595. [Google Scholar] [CrossRef]
  20. Ferraz, R.; Forte, P.; Branquinho, L.; Teixeira, J.; Neiva, H.; Marinho, D.; Marques, M. The Performance during the Exercise: Legitimizing the Psychophysiological Approach. In Exercise Physiology; IntechOpen: London, UK, 2022. [Google Scholar] [CrossRef]
  21. Timler, A.; McIntyre, F.; Harris, S.; Hands, B. Does level of motor competence affect the associations between identity health and self-perceptions in adolescents? Hum. Mov. Sci. 2020, 74, 102710. [Google Scholar] [CrossRef]
  22. Sallen, J.; Andrä, C.; Ludyga, S.; Mücke, M.; Herrmann, C. School Children’s Physical Activity, Motor Competence, and Corresponding Self-Perception: A Longitudinal Analysis of Reciprocal Relationships. J. Phys. Act. Health 2020, 17, 1083–1090. [Google Scholar] [CrossRef]
  23. De Meester, A.; Barnett, L.M.; Brian, A.; Bowe, S.J.; Jiménez-Díaz, J.; Van Duyse, F.; Irwin, J.M.; Stodden, D.F.; D’Hondt, E.; Lenoir, M.; et al. The Relationship Between Actual and Perceived Motor Competence in Children, Adolescents and Young Adults: A Systematic Review and Meta-analysis. Sports Med. 2020, 50, 2001–2049. [Google Scholar] [CrossRef]
  24. Chang, C.J.; Putukian, M.; Aerni, G.; Diamond, A.B.; Hong, E.S.; Ingram, Y.M.; Reardon, C.L.; Wolanin, A.T. Mental Health Issues and Psychological Factors in Athletes: Detection, Management, Effect on Performance, and Prevention: American Medical Society for Sports Medicine Position Statement. Clin. J. Sport Med. 2020, 30, e61. [Google Scholar] [CrossRef]
  25. Souter, G.; Lewis, R.; Serrant, L. Men, Mental Health and Elite Sport: A Narrative Review. Sports Med. Open 2018, 4, 57. [Google Scholar] [CrossRef]
  26. Teixeira, J.E.; Alves, A.R.; Ferraz, R.; Forte, P.; Leal, M.; Ribeiro, J.; Silva, A.J.; Barbosa, T.M.; Monteiro, A.M. Effects of chronological age, relative age, and maturation status on accumulated training load and perceived exertion in young sub-elite football players. Front. Physiol. 2022, 13, 832202. [Google Scholar] [CrossRef] [PubMed]
  27. Aldanyowi, S.N.; AlOraini, L.I. Personalizing Injury Management and Recovery: A Cross-Sectional Investigation of Musculoskeletal Injuries and Quality of Life in Athletes. Orthop. Res. Rev. 2024, 16, 137–151. [Google Scholar] [PubMed]
  28. Wiewelhove, T.; Schneider, C.; Kellmann, M.; Pfeiffer, M.; Meyer, T.; Ferrauti, A. Recovery management in sport: Overview and outcomes of a nine-year multicenter research program. Int. J. Sports Sci. Coach. 2024, 19, 1223–1233. [Google Scholar]
  29. Preatoni, E.; Bergamini, E.; Fantozzi, S.; Giraud, L.I.; Bustos, A.S.O.; Vannozzi, G.; Camomilla, V. The Use of Wearable Sensors for Preventing, Assessing, and Informing Recovery from Sport-Related Musculoskeletal Injuries: A Systematic Scoping Review. Sensors 2022, 22, 3225. [Google Scholar] [CrossRef]
  30. Rico-González, M.; Pino-Ortega, J.; Méndez, A.; Clemente, F.; Baca, A. Machine learning application in soccer: A systematic review. Biol. Sport 2022, 40, 249–263. [Google Scholar] [CrossRef]
  31. Reyaz, N.; Ahamad, G.; Khan, N.J.; Naseem, M. Machine Learning in Sports Talent Identification: A Systematic Review. In Proceedings of the 2022 2nd International Conference on Emerging Frontiers in Electrical and Electronic Technologies (ICEFEET), Patna, India, 24–25 June 2022; pp. 1–6. [Google Scholar] [CrossRef]
  32. Beato, M.; Devereux, G.; Stiff, A. Validity and Reliability of Global Positioning System Units (STATSports Viper) for Measuring Distance and Peak Speed in Sports. J. Strength Cond. Res. 2018, 32, 2831–2837. [Google Scholar] [CrossRef]
  33. Branquinho, L.C.; Ferraz, R.; Marques, M.C. The continuous and fractionated game format on the training load in small sided games in soccer. Open Sports Sci. J. 2020, 13, 81–85. [Google Scholar] [CrossRef]
  34. Akubat, I.; Patel, E.; Barrett, S.; Abt, G. Methods of monitoring the training and match load and their relationship to changes in fitness in professional youth soccer players. J. Sports Sci. 2012, 30, 1473–1480. [Google Scholar] [CrossRef]
  35. Aquino, R.; Carling, C.; Maia, J.; Vieira, L.H.P.; Wilson, R.S.; Smith, N.; Almeida, R.; Gonçalves, L.G.C.; Kalva-Filho, C.A.; Garganta, J.; et al. Relationships between running demands in soccer match-play, anthropometric, and physical fitness characteristics: A systematic review. Int. J. Perform. Anal. Sport 2020, 20, 534–555. [Google Scholar] [CrossRef]
  36. Cabral, L.L.; Nakamura, F.Y.; Stefanello, J.M.F.; Pessoa, L.C.V.; Smirmaul, B.P.C.; Pereira, G. Initial validity and reliability of the Portuguese Borg rating of perceived exertion 6-20 scale. Meas. Phys. Educ. Exerc. Sci. 2020, 24, 103–114. [Google Scholar] [CrossRef]
  37. Python. Welcome to Python.org. Python.Org. 8 March 2023. Available online: https://www.python.org/ (accessed on 20 March 2025).
  38. Unpingco, J. Python for Probability, Statistics, and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2016; Volume 1. [Google Scholar]
  39. Cai, J.; Luo, J.; Wang, S.; Yang, S. Feature selection in machine learning: A new perspective. Neurocomputing 2018, 300, 70–79. [Google Scholar] [CrossRef]
  40. Hancock, J.T.; Khoshgoftaar, T.M. Survey on categorical data for neural networks. J. Big Data 2020, 7, 28. [Google Scholar] [CrossRef]
  41. Biamonte, J.; Wittek, P.; Pancotti, N.; Rebentrost, P.; Wiebe, N.; Lloyd, S. Quantum machine learning. Nature 2017, 549, 195–202. [Google Scholar] [CrossRef] [PubMed]
  42. Narayan, S. The generalized sigmoid activation function: Competitive supervised learning. Inf. Sci. 1997, 99, 69–82. [Google Scholar] [CrossRef]
  43. Benureau, F.C.Y.; Rougier, N.P. Re-run, Repeat, Reproduce, Reuse, Replicate: Transforming Code into Scientific Contributions. Front. Neuroinform. 2018, 11, 69. [Google Scholar] [CrossRef]
  44. Haslwanter, T. An Introduction to Statistics with Python with Applications in the Life Sciences; Springer International Publishing: Cham, Switzerland, 2016. [Google Scholar]
  45. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  46. Hicks, S.A.; Strümke, I.; Thambawita, V.; Hammou, M.; Riegler, M.A.; Halvorsen, P.; Parasa, S. On evaluation metrics for medical applications of artificial intelligence. Sci. Rep. 2022, 12, 5979. [Google Scholar] [CrossRef]
  47. Jierula, A.; Wang, S.; Oh, T.-M.; Wang, P. Study on Accuracy Metrics for Evaluating the Predictions of Damage Locations in Deep Piles Using Artificial Neural Networks with Acoustic Emission Data. Appl. Sci. 2021, 11, 2314. [Google Scholar] [CrossRef]
  48. Uddin, S.; Haque, I.; Lu, H.; Moni, M.A.; Gide, E. Comparative performance analysis of K-nearest neighbour (KNN) algorithm and its different variants for disease prediction. Sci. Rep. 2022, 12, 6256. [Google Scholar] [CrossRef]
  49. Natekin, A.; Knoll, A. Gradient boosting machines, a tutorial. Front. Neurorobot. 2013, 7, 21. [Google Scholar] [CrossRef]
  50. Cervantes, J.; Garcia-Lamont, F.; Rodríguez-Mazahua, L.; Lopez, A. A comprehensive survey on support vector machine classification: Applications, challenges and trends. Neurocomputing 2020, 408, 189–215. [Google Scholar] [CrossRef]
  51. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  52. Song, Y.; Lu, Y. Decision tree methods: Applications for classification and prediction. Shanghai Arch. Psychiatry 2015, 27, 130–135. [Google Scholar] [CrossRef] [PubMed]
  53. Beéck, T.O.D.; Jaspers, A.; Brink, M.S.; Frencken, W.G.P.; Staes, F.; Davis, J.J.; Helsen, W.F. Predicting Future Perceived Wellness in Professional Soccer: The Role of Preceding Load and Wellness. Int. J. Sports Physiol. Perform. 2019, 14, 1074–1080. [Google Scholar] [CrossRef]
  54. Naidu, S.A.; Fanchini, M.; Cox, A.; Smeaton, J.; Hopkins, W.G.; Serpiello, F.R. Validity of Session Rating of Perceived Exertion Assessed via the CR100 Scale to Track Internal Load in Elite Youth Football Players. Int. J. Sports Physiol. Perform. 2019, 14, 403–406. [Google Scholar] [CrossRef]
  55. Teixeira, J.; Forte, P.; Ferraz, R.; Branquinho, L.; Silva, A.; Barbosa, T.; Monteiro, A. Methodological Procedures for Non-Linear Analyses of Physiological and Behavioural Data in Football. In Exercise Physiology; IntechOpen: London, UK, 2022. [Google Scholar] [CrossRef]
  56. Wen, Y. Optimization design of biomechanical parameters based on advanced mathematical modelling. Mol. Cell. Biomech. 2024, 21, 463. [Google Scholar]
  57. Halilaj, E.; Rajagopal, A.; Fiterau, M.; Hicks, J.L.; Hastie, T.J.; Delp, S.L. Machine learning in human movement biomechanics: Best practices, common pitfalls, and new opportunities. J. Biomech. 2018, 81, 1–11. [Google Scholar]
  58. Hessels, R.S.; Niehorster, D.C.; Holleman, G.A.; Benjamins, J.S.; Hooge, I.T.C. Wearable Technology for “Real-World Research”: Realistic or Not? Perception 2020, 49, 611–615. [Google Scholar] [CrossRef]
  59. Sabato, T.M.; Walch, T.J.; Caine, D.J. The elite young athlete: Strategies to ensure physical and emotional health. Open Access J. Sports Med. 2016, 7, 99–113. [Google Scholar] [PubMed]
  60. Ramos-Cano, J.; Martín-García, A.; Rico-González, M. Training intensity management during microcycles, mesocycles, and macrocycles in soccer: A systematic review. Proc. Inst. Mech. Eng. Part P J. Sports Eng. Technol. 2022, 17543371221101228. [Google Scholar] [CrossRef]
  61. Rico-González, M.; Pino-Ortega, J.; Praça, G.M.; Clemente, F.M. Practical Applications for Designing Soccer’ Training Tasks From Multivariate Data Analysis: A Systematic Review Emphasizing Tactical Training. Percept. Mot. Ski. 2022, 129, 892–931. [Google Scholar] [CrossRef]
Table 1. Correlation analysis between predictor variables and sRPE.

Variable     Correlation    p-Value    Statistically Significant
TD           0.034          0.538      No
HSRr         −0.031         0.574      No
HMLD         0.055          0.328      No
AvS          0.142          0.010      Yes
SPR          0.024          0.668      No
DSL          0.062          0.295      No
sRPE_CR10    0.098          0.114      No
ACC          −0.045         0.432      No
DEC          −0.052         0.375      No
Cal          0.082          0.175      No
TS           −0.027         0.627      No
Weight       0.059          0.311      No
Height       −0.046         0.421      No
BMI          0.073          0.222      No
Abbreviations: ACC—number of accelerations; AvS—average speed; BMI—body mass index; Cal—calories burned; DEC—number of decelerations; DSL—dynamic stress load; HMLD—high metabolic load distance; HSRr—high-speed running ratio; sRPE_CR10—session rate of perceived exertion measured using the CR10 scale of Borg; SPR—sprinting distance; TD—total distance; TS—training session duration.
Table 2. Algorithm’s performance in predicting RPE.

Algorithm    Accuracy (%)    Precision (%)    Recall (%)    F1-Score (%)    Average Metric
KNN          20.41           15.54            20.41         17.44           18.45
XGBoost      25.51           24.72            25.51         24.30           25.01
SVM          23.47           13.68            23.47         16.73           19.34
RF           28.57           25.31            28.57         25.86           27.08
DT           32.65           31.83            32.65         31.67           32.20
Abbreviations: DT—Decision Tree Classifier; KNN—K-Nearest Neighbors; RF—Random Forest Classifier; SVM—Support Vector Machine; XGBoost—Gradient Boosting Classifier.

