Article

Dynamic Workload Management System in the Public Sector: A Comparative Analysis

by Konstantinos C. Giotopoulos 1,*, Dimitrios Michalopoulos 1, Gerasimos Vonitsanos 2, Dimitris Papadopoulos 1, Ioanna Giannoukou 1 and Spyros Sioutas 2
1 Department of Management Science and Technology, University of Patras, 26504 Patras, Greece
2 Department of Computer Engineering and Informatics, University of Patras, 26504 Patras, Greece
* Author to whom correspondence should be addressed.
Future Internet 2025, 17(3), 119; https://doi.org/10.3390/fi17030119
Submission received: 11 January 2025 / Revised: 28 February 2025 / Accepted: 28 February 2025 / Published: 6 March 2025

Abstract:
Efficient human resource management is critical to public sector performance, particularly in dynamic environments where traditional systems struggle to adapt to fluctuating workloads. The increasing complexity of public sector operations and the need for equitable task allocation highlight the limitations of conventional evaluation methods, which often fail to account for variations in employee performance and workload demands. This study addresses these challenges by optimizing load distribution through predicting employee capability using data-driven approaches, ensuring efficient resource utilization and enhanced productivity. Using a dataset encompassing public/private sector experience, educational history, and age, we evaluate the effectiveness of seven machine learning algorithms: Linear Regression, Artificial Neural Networks (ANNs), Adaptive Neuro-Fuzzy Inference System (ANFIS), Support Vector Machine (SVM), Gradient Boosting Machine (GBM), Bagged Decision Trees, and XGBoost in predicting employee capability and optimizing task allocation. Performance is assessed through ten evaluation metrics, including Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE), ensuring a comprehensive assessment of accuracy, robustness, and bias. The results demonstrate ANFIS as the superior model, consistently outperforming other algorithms across all metrics. By synergizing fuzzy logic’s capacity to model uncertainty with neural networks’ adaptive learning, ANFIS effectively captures non-linear relationships and variations in employee performance, enabling precise capability predictions in dynamic environments. This research highlights the transformative potential of machine learning in public sector workforce management, underscoring the role of data-driven decision-making in improving task allocation, operational efficiency, and resource utilization.

1. Introduction

In a rapidly evolving digital landscape, the public sector grapples with critical human resource management challenges, particularly in resource optimization and cohesive control over human capital utilization. Traditional hiring and resource allocation methods, often reliant on subjective assessments and bureaucratic processes, no longer prove effective in dynamic environments. Compounded by a pervasive non-compliance culture and inadequate management systems, these limitations result in inefficiencies, productivity gaps, and a lack of actionable insights for decision-makers. As public sector organizations navigate digital transformation, there is an urgent need for robust, data-driven frameworks to evaluate employee capabilities and optimize workload allocation.
Efficiency in public service delivery hinges on an evaluation framework that integrates skill management, competence, training, experience, and diversity in managerial positions. However, the absence of a reliable central management system impedes the collection of statistical data on employee productivity, perpetuating reliance on outdated evaluation systems laden with subjectivity. This gap underscores the necessity for transformative approaches that align with the demands of the digital era.
Building on prior work by Michalopoulos et al. [1], which proposed an integrated system for assessing employee potential using neuro-fuzzy inference, this study expands the scope by conducting a rigorous comparative analysis of machine learning algorithms. While Michalopoulos et al. emphasized skill management and task optimization through quantifiable criteria, their work did not systematically evaluate alternative methodologies. Similarly, Giotopoulos et al. [2] highlighted the role of time-based efficiency metrics but left the question of algorithmic superiority in modeling complex workforce dynamics unresolved.
To address these gaps, we introduce a comprehensive framework leveraging seven machine learning algorithms: Linear Regression, Artificial Neural Networks (ANNs), Adaptive Neuro-Fuzzy Inference System (ANFIS), Gradient Boosting Machines (GBMs), Bagged Decision Trees, XGBoost, and Support Vector Machines (SVMs). Our methodology, extending the integrated system proposed by Giotopoulos et al. [3], emphasizes human-independent efficiency assessment, departing from traditional systems reliant on domain experts. Linear Regression serves as a foundational baseline, elucidating linear relationships between input factors (e.g., work experience, education, age) and employee performance. Subsequent exploration of ANNs captures intricate, non-linear patterns, while ANFIS bridges interpretability and complexity through its hybrid architecture, combining fuzzy logic with neural networks. Ensemble methods such as GBM and XGBoost harness collective decision-making, and SVM maximizes separation margins between performance classes.
Through meticulous experimentation, we evaluate these algorithms using metrics including Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Median Absolute Error (MedAE), and Huber Loss. ANFIS emerges as the standout performer, demonstrating superior accuracy (54% lower RMSE compared to Linear Regression) and robustness in handling diverse data patterns. Its ability to minimize bias while modeling non-linear relationships positions it as an optimal tool for public sector workforce management.
This study makes three key contributions:
  • A systematic comparative analysis of machine learning algorithms tailored to public sector constraints, addressing the lack of empirical benchmarks in bureaucratic environments.
  • Validation of ANFIS’s superiority in accuracy and interpretability, achieving a significantly lower RMSE than all other algorithms, as detailed in Section 4.
  • Practical insights for integrating ANFIS into operational systems, enabling real-time workload management and resource allocation.
The remainder of this paper is structured as follows: Section 2 reviews related work on public sector efficiency, machine learning applications in workforce analytics, and hybrid neuro-fuzzy systems. Section 3 details the methodology, including data collection (employee records from West Greece), preprocessing, and implementation within the Apache Spark framework. Section 4 presents experimental results, comparing performance across algorithms, while Section 5 discusses implications, limitations, and future directions for deploying ANFIS in public sector workflows.

2. Related Work and Contributions

In the domain of understanding factors influencing employee performance, numerous studies have shed light on key aspects such as motivation, compensation, engagement, intellectual capital, and human resource management practices. Each of these factors plays an essential role in shaping the performance of employees within organizations. A substantial body of research emphasizes applying regression analysis to evaluate human performance. The study by Shahzadi et al. (2014) investigates the impact of employee motivation on performance, revealing a positive correlation mediated by job satisfaction, organizational commitment, and work–life balance. This relationship emphasizes the need for organizations to focus on enhancing employee motivation through appropriate interventions [4]. Hameed, Ramzan, and Zubair (2014) further emphasized the influence of compensation on employee performance, highlighting the positive correlation between various compensation factors and employee performance. The findings underlined the significance of fair compensation practices in motivating and enhancing employee productivity [5]. Anitha (2014) investigated the determinants of employee engagement and their effect on performance, revealing a positive and significant relationship between engagement and employee performance. This suggested that engaged employees are likely to perform better, emphasizing the importance of engagement initiatives in organizations [6]. Moving beyond motivational and engagement factors, Ahangar (2011) explored intellectual capital and financial performance. The study uncovered a significant influence of intellectual capital on profitability and productivity, underscoring the strategic importance of intellectual capital in corporate performance [7], while Hong et al. (2012) explored human resource management practices and their impact on employee retention. 
The research highlighted the crucial role of training, compensation, and appraisal in retaining employees, providing valuable insights for organizations striving to enhance employee loyalty [8]. In the domain of intellectual capital and performance, Phusavat et al. (2011) established a positive correlation, emphasizing the role of intellectual capital in fostering innovation and, subsequently, firm success. This highlights the need for organizations to invest in and effectively manage their intellectual capital [9]. Darmawan et al. (2020) further stressed the importance of human resource quality, showcasing its positive correlation with job performance. The study underscored the significance of education, experience, skills, and motivation in enhancing overall performance, guiding organizations in investing wisely in their human resources [10]. Lastly, Rivaldo and Nabella (2023) highlighted the positive correlation between employee education, training, experience, work discipline, and performance. Their findings reinforced the importance of investing in these fundamental aspects to enhance employee performance and drive organizational success [11].
In exploring the intersection of artificial intelligence and workforce dynamics, several studies have leveraged Artificial Neural Networks (ANNs) to model and predict various aspects of productivity and performance. Chen and Chang (2010) employed ANNs to elucidate the intricate, non-linear relationships between firm size, profitability, employee productivity, and patent citations within the US pharmaceutical industry. This approach discerned complex patterns that traditional Linear Regression models may overlook [12].
Simeunović et al. (2017) proposed an ANN-based model tailored for optimizing workforce scheduling by considering diverse influential factors such as employee skills, customer demand, and machine capacity. Their model showcased its effectiveness in enhancing productivity and reducing operational costs in manufacturing settings [13].
Fekri Sari and Avakh Darestani (2019) introduced an ANN model that predicts fuzzy overall equipment effectiveness (OEE) and line performance. The integration of fuzzy logic and ANNs allowed for accurate performance measurement, particularly in the context of manufacturing operations [14]. The construction industry also benefits from ANN-based predictions. Goodarzizad et al. (2023) proposed a hybrid model merging ANNs with the grasshopper optimization algorithm to forecast construction labor productivity, considering factors like worker skills, project complexity, and weather conditions. Their approach proved accurate and effective for prediction [15].
Similarly, Heravi and Eslamdoost (2015) employed ANNs to measure and predict construction labor productivity, emphasizing the consideration of factors such as worker skills, project complexity, and weather conditions. Their study underscored the accuracy and efficacy of ANNs in predicting productivity in the construction domain [16].
In a different domain, Proto et al. (2020) pioneered a three-step neural network artificial intelligence modeling approach for time, productivity, and cost prediction within the Italian forestry sector. Their methodology incorporated a diverse array of factors, including tree species, terrain characteristics, and weather conditions, to predict outcomes accurately [17].
Lastly, Gelmereanu, Morar, and Bogdan (2014) presented an ANN model for predicting productivity and cycle time in manufacturing processes. Their model factors in various critical elements such as machine type, worker skills, and material characteristics, highlighting the accuracy and efficiency of ANN-based predictions in optimizing manufacturing processes [18]. These works collectively demonstrate the versatile applications of ANNs in forecasting and optimizing productivity across diverse domains.
In the domain of performance estimation and analysis across diverse sectors, the integration of the Adaptive Neuro-Fuzzy Inference System (ANFIS) proves to be a powerful tool. Ershadi, Qhanadi Taghizadeh, and Hadji Molana (2021) introduced a hybrid approach utilizing technology readiness level (TRL), data envelopment analysis (DEA), and ANFIS to select and estimate the performance of Green Lean Six Sigma (GLSS) projects. Their hybrid methodology demonstrated effectiveness in accurately determining and evaluating project performance [19].
In the context of labor loss estimation, Arslankaya (2023) compared the performance of fuzzy logic with ANFIS. Their evaluation of labor loss data from a manufacturing company revealed that ANFIS is superior in accuracy and precision [20].
Keles et al. (2023) focused on determining the leadership perceptions of construction employees, utilizing ANFIS. Their study emphasized ANFIS as an effective tool for accurately determining these perceptions, demonstrating its utility in the construction sector [21].
Education quality assessment is another domain in which ANFIS achieves excellence. Ahanger et al. (2020) propose an ANFIS-inspired smart framework to assess education quality, showcasing its effectiveness in evaluating student performance, teacher quality, and school infrastructure [22], while Azadeh and Zarrin (2016) introduced an intelligent framework for productivity assessment and analysis of human resources, considering various factors such as resilience engineering, motivational aspects, health, safety, and ergonomics. Their approach proves to be effective in accurately assessing human resource productivity [23].
Considering organizational cohesion and its impact on employee productivity, Nikkhah-Farkhani et al. (2022) utilized ANFIS to model these relationships. Their study emphasized the significant positive impact of organizational cohesion on employee productivity and highlighted ANFIS as an effective analytical tool [24].
Mirsepasi, Faghihi, and Babaei (2013) proposed a system model for performance management in the public sector, employing a balanced scorecard approach. Their model considers various crucial aspects of public sector performance and showcases effectiveness in enhancing overall performance [25]. Elshaboury et al. (unpublished) presented an improved ANFIS model based on the particle swarm optimization (PSO) algorithm for predicting labor productivity. Their model demonstrated superior accuracy and precision compared to traditional ANFIS models, highlighting the potential of optimization algorithms in enhancing ANFIS [26]. In contemporary research, machine learning (ML) techniques are increasingly employed to predict critical factors such as employee attrition and performance. Jain and Nayyar (2018) showcased the efficacy of the XGBoost algorithm in predicting employee attrition, emphasizing its efficient memory utilization, high accuracy, and low running times, ultimately achieving an accuracy of nearly 90% [27].
Shifting the focus to the academic sphere, Sekeroglu, Dimililer, and Tuncal (2019) evaluated three prominent ML algorithms for predicting student performance. Notably, neural networks emerged as the most accurate in predicting student grades in both secondary schools and universities [28]. Regarding environmental contexts, Aldin and Sözer (2022) used ANFIS and Artificial Neural Networks (ANNs) to predict thermal data. The study underscored ANFIS’s superior accuracy in predicting temperature and humidity, making it a robust tool for thermal data prediction [29]. In HR- and employee-related domains, Zhao et al. (2019) explored the prediction of employee turnover using various ML methods. Among them, tree-based ensemble methods such as extreme gradient enhancement proved highly effective, especially for medium and large HR datasets, showcasing its superior predictive power and efficiency [30].
Pathak, Dixit, Somani, and Gupta (2023) advocated for ML in predicting employee performance, highlighting its potential to identify high-performing employees and enhance overall workforce performance. The study evaluated diverse ML techniques, including decision trees and Support Vector Machines, underlining the importance of ML in Industry 4.0 [31]. In Saad’s (2020) research, data were gathered with twelve variables, each having 121 instances, aiming to predict the evaluation of the process for individual workers. To ensure the highest prediction accuracy (reaching 99.16%), an ensemble algorithm (Bagging) was employed, combining the four decision tree algorithms. The standard errors for the four algorithms were notably minor, suggesting a strong relationship between the seven input variables and the evaluation output [32]. Adeniyi et al. (2022) rigorously compared the performance of three ML techniques—decision tree (DT), Artificial Neural Network (ANN), and Random Forest (RF)—in predicting employee performance. The study concluded that ANN outperforms RF in performance during testing [33]. Jantan, Puteh, Hamdan, and Ali Othman (2010) took a data mining approach, utilizing classification techniques to predict employee performance patterns within HR databases. The C4.5/J4.8 classifier exhibited the highest accuracy, suggesting its potential for future endeavors [34]. Lastly, Li, Lazo, Balan, and de Goma (2021) focused on employee performance prediction within a company using ML techniques. Logistic Regression emerged as the most accurate classifier among the employed methods, offering a promising avenue for predictive accuracy [35].

Advancing Workforce Management Through ANFIS: Contributions Beyond Existing Research

Building upon these studies, our research contributes to the field by integrating Adaptive Neuro-Fuzzy Inference System (ANFIS) into a dynamic workload management framework for the public sector. While previous studies have extensively explored factors such as motivation (Shahzadi et al., 2014) [4], compensation (Hameed et al., 2014) [5], employee engagement (Anitha, 2014) [6], and intellectual capital (Ahangar, 2011) [7] in shaping performance, our approach focuses on quantifying employee capability using machine learning-driven workload metrics. Additionally, research leveraging Artificial Neural Networks (ANNs) (Chen & Chang, 2010; Simeunovic et al., 2017; Goodarzizad et al., 2023) [12,13,15] has demonstrated their effectiveness in modeling complex workforce dynamics. However, our study advances this by employing ANFIS, which not only captures non-linear relationships but also integrates fuzzy logic for greater interpretability in employee capability assessment. Furthermore, while past research has applied ANFIS in various domains, such as manufacturing performance (Arslankaya, 2023) [20], education quality assessment (Ahanger et al., 2020) [22], and organizational cohesion (Nikkhah-Farkhani et al., 2022) [24], our study is among the first to implement ANFIS in dynamic workload distribution within the public sector, addressing critical gaps in real-time task allocation and resource optimization. This research, therefore, complements existing work by providing a novel AI-driven approach to workforce management, moving beyond static evaluations toward a real-time, data-driven optimization model.

3. Methodology

This section outlines the methodology employed in conducting a comparative analysis of various machine learning algorithms to evaluate employee capability within a dynamic workload management system in the public sector. The primary goal of this research is to identify the most effective predictive model; after an extensive examination of the candidate algorithms, the Adaptive Neuro-Fuzzy Inference System (ANFIS) emerged as the standout performer. All algorithms were executed within the Apache Spark framework, leveraging its distributed computing capabilities to process large-scale datasets efficiently. The following subsections provide a detailed overview of the research design, data collection, algorithms utilized, and the evaluation metrics employed to make this determination.

3.1. Machine Learning Technique Overview

This section provides an overview of the machine learning techniques utilized in this study, focusing on their unique strengths and applications in predictive modeling. Machine learning has become a key tool for solving complex problems by analyzing large datasets and uncovering patterns that traditional methods may overlook. The selected algorithms encompass a range of approaches, from simple linear models to advanced ensemble methods and hybrid systems, enabling a thorough evaluation of their effectiveness. Each technique was chosen based on its ability to address the specific challenges of modeling employee performance and potential in dynamic environments, highlighting their contributions to improving predictive accuracy and decision-making processes.
In this study, the data analysis was carried out using a broad suite of advanced software tools and configurations to ensure efficiency and accuracy. Apache Spark 3.4.2 was employed as the primary framework for complex data processing, leveraging its powerful distributed computing capabilities. The Java Development Kit (JDK) 21 was utilized to support the development and execution of custom algorithms, ensuring compatibility and optimal performance. The computational environment was built on Ubuntu Linux 22.04, chosen for its stability, security, and robust support for open-source tools. This setup provided a reliable and scalable platform for conducting rigorous analysis, effectively supporting the study’s objectives.

3.1.1. Step 1: Data Collection

To facilitate this study, a dataset was compiled that included information on public sector employees. The dataset was carefully curated to ensure its relevance and accuracy. Data points were collected from public sector organizations in the broad region of West Greece to provide a representative sample, covering each employee’s work experience in the public and private sectors, age, and educational history (Michalopoulos et al., 2022) [1]. Each data point was carefully recorded and validated to ensure its accuracy.

3.1.2. Step 2: Data Preprocessing

Data preprocessing was essential in preparing the dataset for analysis, ensuring its quality and suitability for machine learning algorithms. Missing values within the dataset were identified and addressed to prevent biases or inaccuracies in the models. This involved techniques such as mean or median imputation for continuous variables and mode imputation or assigning default categories for categorical variables. Additionally, categorical data were encoded into numerical representations using methods such as one-hot encoding or label encoding, enabling algorithms to process these features effectively. These preprocessing steps ensured that the data were complete, consistent, and compatible with the requirements of the machine learning techniques used in this study, thereby enhancing the overall reliability and accuracy of the analysis.
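The imputation and encoding steps described above can be sketched in plain Python (an illustrative sketch, not the authors' Spark implementation; the field names and values are hypothetical):

```python
# Illustrative sketch (not the authors' Spark code): mean imputation for a
# continuous variable and one-hot encoding for a categorical variable.

def impute_mean(values):
    """Replace missing (None) entries with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def one_hot(values, categories):
    """Encode each categorical value as a binary indicator vector."""
    return [[1 if v == c else 0 for c in categories] for v in values]

ages = impute_mean([34, None, 52, 46])            # missing age -> mean 44.0
roles = one_hot(["head_dept", "general_mgr"],
                ["head_small_dept", "head_dept", "general_mgr"])
```

The same logic scales to a distributed setting, where Spark's built-in feature transformers would replace these hand-rolled helpers.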

3.1.3. Step 3: Data Management

Based on the research mentioned above, we proceed to our analysis, in which each employee is denoted as a node N_i, and for each one there is a correlation with each task weight (TW) the employee undertakes, denoted TW_i and measured in time units (minutes). The interconnectedness among tasks is established with TW_1 as the fundamental reference point, exerting minimal influence on employee productivity. Hence the relationships between, for instance, TW_5 and TW_4, TW_4 and TW_3, TW_3 and TW_2, and TW_2 and TW_1 can be expressed as a function in which the minimum task weight is TW_1. This outline of task interdependencies yields a generalized principle that applies to every task pair TW_1 and TW_i, as given in Michalopoulos et al. (2022) [1]:
TW_i = i · TW_1,  i > 0
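The proportional relation above, with every task weight an integer multiple of the base weight, can be illustrated with a short sketch; the base weight of 10 min is an assumed value, not one from the paper:

```python
# Sketch of the task-weight relation: every weight is an integer multiple
# of the base weight TW_1 (10 minutes here is an assumed illustrative value).
TW1 = 10  # minutes

def task_weight(i, tw1=TW1):
    """Return TW_i = i * TW_1 for a positive task index i."""
    if i <= 0:
        raise ValueError("task index must be positive")
    return i * tw1

weights = [task_weight(i) for i in range(1, 6)]  # [10, 20, 30, 40, 50]
```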
In their work, Michalopoulos et al. (2022) [1] utilized a four-tier factor profile, with the skill set grounded in adherence to the Greek Council of State’s Decision No. 540/2021 in alignment with Council Directive 90/270/EEC, as shown in Figure 1.
K1 (Academic Proficiency)
  • Count of seminars related to the current work (maximum of three)—(S1, S2, S3).
  • Number of Bachelor’s degrees (maximum of two)—(B1, B2).
  • Possession of a Master’s degree.
  • Certification from the National School of Public Administration.
  • PhD diploma.
K2 (Public Sector Work Experience)
  • Total years of experience (maximum 35 years).
  • Nature of responsibilities:
    Supervision of a small department.
    Leadership of a department.
    General management.
K3 (Private Sector Work Experience)
  • Total years of experience (maximum 35 years).
  • Nature of responsibilities:
    Supervision of a small department.
    Leadership of a department.
    General management.
K4 (Age)
  • Age in years within the range of 20 to 70 years.
Based on the algorithm applied to Node B, a unique Time Factor is calculated for every dataset profile.
In this study, employee capability is measured on a continuous scale through the Time Factor (TF), which represents the average time an employee takes to complete a task. The Time Factor is derived from the dataset, which includes information on employees’ work experience (both in the public and private sectors), educational history, and age. The Capacity Factor (CF), introduced by Michalopoulos et al. [1], is calculated as the mean value of the number of T W 1 tasks (tasks with the minimum task weight) completed per time unit. This approach allows for a continuous and granular assessment of employee capability, enabling more precise predictions and comparisons.
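One plausible reading of the Time Factor/Capacity Factor relation described above can be sketched as follows. The per-task completion times are hypothetical sample values, and treating the Capacity Factor as the reciprocal of the average task time is our assumption, not a formula stated in [1]:

```python
# Hypothetical per-task completion times (minutes) for TW_1 tasks.
task_times = [12.0, 8.0, 10.0]

# Time Factor (TF): average time an employee takes to complete a TW_1 task.
time_factor = sum(task_times) / len(task_times)

# Capacity Factor (CF): mean number of TW_1 tasks completed per time unit,
# read here as the reciprocal of the Time Factor (an assumption).
capacity_factor = 1.0 / time_factor
```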
An employee profile with precise skills recorded in the dataset will produce a Capacity Factor that determines the employee’s ability to manage task workload, as shown in Figure 2 above.

3.1.4. Step 4: Model Selection and Data Splitting

The model selection and data splitting process were critical components of this study, ensuring the robustness and reliability of the machine learning models used to predict employee performance. To identify the best-performing model, a diverse set of algorithms was evaluated: Linear Regression, Artificial Neural Networks (ANNs), the Adaptive Neuro-Fuzzy Inference System (ANFIS), Gradient Boosting Machine (GBM), Bagged Decision Trees, Support Vector Machine (SVM), and XGBoost.
To achieve this, the dataset was first divided into three subsets: training, validation, and testing datasets. The training set comprising most of the data was used to fit the models and optimize their parameters. The validation set was employed to fine-tune hyperparameters and prevent overfitting, ensuring that the models generalized well to unseen data. Finally, the test set was used exclusively to evaluate the final performance of the selected models, providing an unbiased assessment of their predictive accuracy.
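The three-way split described above can be sketched as follows; the 70/15/15 proportions and the shuffling seed are assumptions, since the paper does not state the exact ratios, and integers stand in for employee records:

```python
import random

# Sketch of the train/validation/test split; the 70/15/15 ratios and the
# seed are assumed, and integers stand in for the 638 employee records.
records = list(range(638))
rng = random.Random(42)
rng.shuffle(records)

n_train = int(0.70 * len(records))   # 446 records
n_val = int(0.15 * len(records))     # 95 records
train = records[:n_train]
val = records[n_train:n_train + n_val]
test = records[n_train + n_val:]     # remaining 97 records
```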
The model selection process involved comparing algorithms across various performance metrics, such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE), alongside computational metrics like CPU time and memory usage. This thorough approach enabled the identification of ANFIS as the most effective algorithm to model complex relationships in the dataset with superior accuracy and robustness. The structured data splitting methodology ensured that the evaluation results reflected the true predictive power of the models, laying a strong foundation for their application in real-world scenarios.
The dataset used in this study consists of task assignments and employee profiles obtained from public sector organizations in the Region of Western Greece, comprising 638 employee records. Each record is characterized by four key factors: academic qualifications, professional experience in the public sector, professional experience in the private sector, and age. These factors provide a detailed view of the employees’ skills, roles, and demographics, enabling us to explore patterns and make meaningful predictions, as seen in Table 1 below.
In terms of academic qualifications, the dataset reflects a diverse range of skills. Over half of the employees hold at least one Bachelor’s degree, with 53% having completed their first degree and some employees holding a second Bachelor’s degree. Advanced qualifications are also well-represented, with 35% of employees holding a Master’s degree, few of them holding an NSPA degree, and even fewer possessing a Ph.D. Participation in professional seminars is another common characteristic, with 47% of employees having attended at least one seminar. This diversity of qualifications highlights a workforce with significant educational accomplishments.
Leadership roles in the public and private sectors are another focus of our dataset. In the public sector, 23% of employees are Heads of Small Departments, 12% are Heads of Departments, and 3% are General Managers. In the private sector, these roles are slightly more prevalent, with 26% having served as Heads of Small Departments, 12% as Heads of Departments, and 5% as General Managers. These statistics demonstrate a workforce with considerable leadership experience, particularly in the private sector.
The dataset also captures the age distribution of the employees, which is skewed toward older age groups. A significant portion of the workforce, 47%, is aged 50 years or older. Employees aged 40 to 49 years make up 30% of the dataset, while those aged 30 to 39 account for 16%. Only 5% of employees fall within the 20 to 29-year age range. This age distribution aligns with the dataset’s emphasis on experienced professionals who likely possess advanced academic qualifications and leadership roles.
The dataset is moderately balanced, ensuring robust analysis. While Bachelor’s and Master’s degrees are highly prevalent, fewer employees possess NSPA degrees or PhDs, reflecting the specialized nature of these qualifications. Leadership roles show a slightly higher representation in the private sector, with General Managers being relatively rare in both sectors. The age distribution provides a diverse yet heavily experienced workforce, with nearly 78% of employees aged 40 or older.
To reproduce or approximate our results, researchers should utilize a dataset with similar characteristics. Such a dataset should include binary-encoded attributes representing academic qualifications, leadership roles, and age demographics. It should reflect a mix of employees with diverse academic qualifications and leadership roles, including Heads of Departments and General Managers in both sectors. The age distribution should emphasize experienced professionals, particularly those aged 40 and above.

3.1.5. Step 5: Model Performance Evaluation

The evaluation of model performance is a critical aspect of this study, as it provides information on the accuracy, efficiency, and reliability of the machine learning algorithms applied. To achieve this, various evaluation metrics were employed to assess how well each model predicted employee potential and performance.
Each metric provides a unique perspective on model performance, allowing for a thorough comparative analysis. The algorithms were evaluated using training, testing, and validation datasets to ensure their generalizability and robustness. The results highlighted significant differences in model performance, with the Adaptive Neuro-Fuzzy Inference System (ANFIS) consistently achieving superior accuracy and stability across multiple metrics. This thorough evaluation process underscores the importance of selecting appropriate metrics tailored to the specific goals and characteristics of the dataset, enabling the identification of the most effective machine learning models for dynamic workload management systems in the public sector.

3.1.6. Step 6: ML Implementation

This study’s machine learning models were implemented using a robust and efficient computational environment. The process began with preparing the dataset through preprocessing steps, including handling missing values, encoding categorical variables, and normalizing numerical features to ensure compatibility with the algorithms. A diverse set of machine learning techniques, including Linear Regression, Artificial Neural Networks (ANNs), Adaptive Neuro-Fuzzy Inference System (ANFIS), Gradient Boosting Machine (GBM), Bagged Decision Trees, Support Vector Machines (SVMs), and XGBoost, was selected to capture different data patterns and complexities.
The models were implemented using Apache Spark 3.4.2 for distributed computing, leveraging its scalability and speed to handle complex datasets effectively. The Java Development Kit (JDK) 21 provided a stable framework for developing and executing custom algorithms, and Ubuntu Linux 22.04 served as the operating environment, chosen for its reliability and open-source ecosystem.
Each algorithm’s hyperparameters were tuned to optimize its performance, ensuring the best results for the dataset. Evaluation metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE) were used to compare model accuracy. Additionally, computational metrics such as CPU time and memory usage were recorded to assess each model’s efficiency.
This systematic and well-orchestrated implementation process ensured a thorough exploration of the selected machine learning techniques, providing valuable insights into their suitability for modeling employee performance and workload management in the public sector. The results of this implementation not only highlight the strengths and weaknesses of each approach but also contribute to the development of a more effective and efficient evaluation framework.

3.1.7. Data Description and Task Details

As indicated by Giotopoulos et al. [2], the following primary tasks, summarized in Table 2, were identified for areas of interest:
These tasks represent critical administrative functions, including financial management, documentation, and tender processes, which are central to the operations of public sector organizations. The target variable in this study is the time taken to complete assigned tasks, measured in hours. Tasks are categorized by complexity, defined in terms of execution time, and priority levels are applied where necessary. Given that each public sector department manages distinct tasks, the modeling framework is adaptable to the specific needs and task portfolios of individual organizations.
Load factors are defined as the number of tasks assigned to a user within a specified time frame, normalized by their historical capacity and completion rates. Task allocation is based on skill alignment, capacity, and historical performance, with an emphasis on minimizing idle time and maximizing efficiency. Giotopoulos et al. (2024) further analyzed the impacts of load on various user profiles under different scenarios. In this context, a capacity factor (CF) was introduced, serving as a function CF = f(K1, K2, K3, K4), where K1, K2, K3, and K4 are illustrated in Figure 1. The capacity factor is calculated as the average time spent on completed tasks for each profile per unit of time [2,3].
The simulation results, detailed in Section 4.2, demonstrate significant differences in algorithm performance under varying load scenarios. These findings underscore the importance of aligning task assignments with user capacity to enhance overall efficiency and effectiveness in task management.

3.2. Machine Learning Algorithms

The study incorporated several machine learning algorithms to evaluate their performance in predicting employee capability. These algorithms included the following:
  • Linear Regression
  • Artificial Neural Networks (ANNs)
  • Adaptive Neuro-Fuzzy Inference System (ANFIS)
  • Support Vector Machine (SVM)
  • Gradient Boosting Machine (GBM)
  • Bagged Decision Trees (BDTs)
  • XGBoost
The selection of these algorithms represents a well-rounded and strategic approach to identifying the most effective predictive techniques for employee capability assessment in the public sector. Each algorithm was chosen to contribute a unique perspective to the analysis, leveraging its strengths to address different facets of the dataset’s complexity. Linear Regression served as a foundational benchmark, providing insights into linear relationships between variables. Its simplicity established a reference point against which the performance of more sophisticated algorithms could be evaluated. Building on this baseline, Artificial Neural Networks extended the analysis to non-linear dynamics. Their multi-layered architecture enabled the capture of intricate patterns in the data, laying the groundwork for understanding more complex relationships. The Adaptive Neuro-Fuzzy Inference System (ANFIS) was introduced to bridge the gap between interpretability and modeling complexity. By combining the transparency of fuzzy logic with the adaptability of neural networks, ANFIS excelled in capturing nuanced and non-linear relationships while addressing uncertainties inherent in employee data.
Support Vector Machine added another dimension to the analysis by focusing on maximizing decision boundaries. Its ability to handle linear and non-linear relationships provided valuable comparisons, especially in scenarios involving distinct data classes. The study of ensemble methods began with Gradient Boosting Machine and XGBoost, both of which leverage iterative corrections to residual errors to enhance predictive accuracy. These algorithms excelled at modeling interactions among features, offering insights into hierarchical and complex decision-making processes. Complementing these, Bagged Decision Tree utilized ensemble learning to improve stability and reliability by aggregating outputs from multiple decision trees, effectively reducing variance and providing robustness against overfitting.
All the aforementioned algorithms were executed within the Apache Spark framework, created at UC Berkeley’s AMPLab [36,37], to leverage its powerful capabilities for large-scale data processing. Spark’s hybrid framework fluidly integrates batch and stream processing, outperforming Hadoop’s MapReduce engine due to its innovative design [38]. The primary advantage of Spark in this context lies in its in-memory computation model, which significantly accelerates processing by minimizing reliance on disk I/O. Spark’s Directed Acyclic Graphs (DAGs) for workflow optimization and resilient distributed datasets (RDDs) for fault tolerance were instrumental in efficiently handling the complex datasets involved in the study [39].
The notable features of Apache Spark, such as its exceptional speed—up to 100 times faster than Hadoop and support for multiple programming languages (Java, Scala, R, and Python)—enabled the algorithms to execute efficiently and at scale [40]. Furthermore, Spark’s real-time stream processing, fault tolerance, and scalability ensured the robust performance of the algorithms in processing large, dynamic datasets.
Spark MLlib is a library that enables Apache Spark to perform machine learning algorithms with exceptional speed and accuracy. Built on the RDD API, MLlib leverages multiple cluster nodes to alleviate memory bottlenecks. Complementing MLlib, Apache Spark also includes SparkML, a DataFrame-based machine learning API. This dual-library approach allows developers to choose the most suitable option based on the dataset’s characteristics and size, ensuring optimal performance. These libraries support various algorithms, including classification, regression, recommendation, clustering, and topic modeling.
MLlib offers core machine learning features such as Featurization, Pipelines, Model Tuning, and Persistence. It supports essential tasks, including data preprocessing, model training, and prediction. With a design focused on simplicity and scalability, MLlib is highly effective for various machine learning tasks, including deep learning [41]. Apache Spark, developed in Scala, integrates effortlessly with APIs such as Java and Python and operates efficiently in both Hadoop and standalone environments [42].
A notable algorithm in MLlib is the Multilayer Perceptron (MLP) classifier, a feedforward Artificial Neural Network used for classification [43]. This research utilizes the MLP classifier, configured with a single hidden layer containing ten neurons. This configuration provides a robust framework for analyzing and modeling complex datasets.
This layered analytical approach, executed within Apache Spark, allowed robust comparative analysis across models. By progressively introducing greater complexity and capability, the study identified the Adaptive Neuro-Fuzzy Inference System as the most effective algorithm. ANFIS outperformed others in accuracy, robustness, and interpretability, demonstrating its ability to model complex relationships within the dataset. This iterative and comparative methodology underscores the importance of tailoring algorithm selection to the specific challenges of the domain and leveraging powerful platforms like Apache Spark for efficient large-scale computation in dynamic workload management systems.

3.2.1. Linear Regression

Linear Regression served as a fundamental baseline for comparison. It provided insights into linear relationships between input factors and employee capability. More specifically, it was deployed to model the relationship between factors considered input (work experience in the public sector, work experience in the private sector, educational history, and age). Linear Regression aims to find the best-fitting linear relationship between the input features and the output, defined as a Time Factor. This allows us to make predictions and understand the impact of individual features on the target variable [44,45].
Y = b 0 + b 1 x 1 + b 2 x 2 + b 3 x 3 + b 4 x 4 + ϵ
  • Y is the dependent variable (Time Factor).
  • x 1 , x 2 , x 3 , x 4 are the four independent variables (input features).
  • b 0 is the intercept.
  • b 1 , b 2 , b 3 , b 4 are the coefficients associated with each independent variable.
  • ϵ represents the error term, accounting for unexplained variability.
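As an illustration of the model above, an ordinary least-squares fit on four input features can be sketched as follows. This is a minimal example on synthetic data: the coefficient values and noise level are assumptions for the sketch, not results from the study's dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# x1..x4 stand in for public-sector experience, private-sector experience,
# educational history, and age (all scaled to [0, 1] for the example)
X = rng.uniform(0, 1, size=(n, 4))
true_b = np.array([5.0, 2.0, -1.0, 3.0, 0.5])          # [b0, b1, b2, b3, b4]
y = true_b[0] + X @ true_b[1:] + rng.normal(0, 0.01, n)  # Time Factor + noise

# Least-squares fit: prepend a column of ones so b0 (the intercept) is learned
Xd = np.hstack([np.ones((n, 1)), X])
b_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)
y_pred = Xd @ b_hat
```

With low noise, the estimated coefficients `b_hat` recover the assumed `true_b` closely, which is what makes the coefficients interpretable as per-feature effects on the Time Factor.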

3.2.2. Artificial Neural Networks (ANNs)

ANNs were used to capture intricate patterns within the data and identify non-linear relationships between input parameters and employee capability. A feedforward neural network (FNN) consisting of multiple layers, including an input layer, at least one hidden layer, and an output layer, was deployed [46,47].
In forward propagation, the network computes the predicted output (ypred) based on the input features (X) and the weights and biases of the neurons. The mathematical representation of forward propagation is as follows:
Input Layer: The input layer simply passes the input features (X) to the hidden layer.
Hidden Layer: The hidden layer computes the weighted sum of the four inputs, applies an activation function, and passes the result to the output layer. This can be represented as follows for the i-th neuron in the hidden layer:
z_i = \sum_{j=1}^{4} w_{ij} x_j + b_i, \qquad a_i = f(z_i)
  • z i is the weighted sum for the i-th neuron in the hidden layer.
  • w i j is the weight connecting the j-th input feature to the i-th neuron.
  • x j is the j-th input feature.
  • b i is the bias for the i-th neuron.
  • f ( · ) is the activation function applied to z i to compute a i , the activation of the neuron.
Output Layer: The output layer computes the final predicted output. The activation function used in the output layer is the identity function (linear activation):
y_{pred} = a_i
Mean Squared Error (MSE) was deployed as the loss function:
L = \frac{1}{n} \sum_{i=1}^{n} (y_{pred,i} - y_i)^2
  • n is the number of data points.
  • y_{pred,i} is the predicted output for the i-th data point.
  • y_i is the actual target value for the i-th data point.
Backpropagation was used to compute the gradients of the loss with respect to the network’s weights and biases. These gradients are then used to update the weights and biases through gradient descent, adjusting the weights to minimize the loss.
The weight updates are typically carried out using gradient descent, where the weights ( w i j ) are updated as follows:
w_{ij} \leftarrow w_{ij} - \alpha \frac{\partial L}{\partial w_{ij}}
where α is the learning rate and \partial L / \partial w_{ij} is the gradient of the loss with respect to the weight w_{ij}.
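The forward-propagation, loss, and gradient-descent steps above can be sketched in a few lines of NumPy. This is a minimal illustration on synthetic data; the layer width (8 hidden neurons), tanh activation, learning rate, and iteration count are assumptions for the sketch, not the configuration used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (64, 4))                  # 64 samples, 4 input features
y = X @ np.array([2.0, -1.0, 0.5, 1.5]) + 3.0   # synthetic Time Factor target

W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)   # hidden layer (assumed: 8 neurons)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)   # linear (identity) output layer
alpha = 0.05                                        # learning rate (assumed)
losses = []
for _ in range(500):
    # forward propagation: z_i = sum_j w_ij x_j + b_i, a_i = f(z_i), f = tanh
    Z1 = X @ W1 + b1
    A1 = np.tanh(Z1)
    y_pred = (A1 @ W2 + b2).ravel()
    err = y_pred - y
    losses.append(float(np.mean(err ** 2)))     # MSE loss
    # backpropagation: gradients of the loss w.r.t. weights and biases
    g_out = (2.0 / len(y)) * err[:, None]       # dL/dy_pred
    gW2 = A1.T @ g_out; gb2 = g_out.sum(0)
    g_hid = (g_out @ W2.T) * (1 - A1 ** 2)      # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ g_hid; gb1 = g_hid.sum(0)
    # gradient-descent update: w <- w - alpha * dL/dw
    W1 -= alpha * gW1; b1 -= alpha * gb1
    W2 -= alpha * gW2; b2 -= alpha * gb2
```

Each pass computes predictions, measures the MSE, and nudges every weight against its gradient, so the recorded loss falls as training proceeds.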

3.2.3. Adaptive Neuro-Fuzzy Inference System (ANFIS)

ANFIS, as a hybrid algorithm combining fuzzy logic and neural networks, was a central focus of the study. Its capacity to model complex, non-linear relationships while maintaining interpretability made it a promising choice for this context. It was widely analyzed by Jang et al. (1997) [48] and by Michalopoulos et al. (2022) and Giotopoulos et al. (2023) [1,3]. The dataset underwent a filtration process as a preliminary step, ensuring its suitability for constructing a fuzzy system within the MATLAB R2023b software environment. A rigorous assessment of performance metrics led to selecting the gbellmf membership function in conjunction with a hybrid learning algorithm, primarily due to its remarkable capacity to minimize Root Mean Square Error. This choice facilitated the establishment of a well-structured four-input, one-output system. Each of the input variables, denoted K1 through K4, was subsequently employed as input data for the Adaptive Neuro-Fuzzy Inference System (ANFIS), where individual nodes were assigned specific roles to address the unique functionalities associated with each input.
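For reference, the generalized bell membership function (gbellmf) selected above has the standard form 1 / (1 + |(x − c)/a|^{2b}); a minimal Python sketch follows. The parameter values are illustrative, not the fitted values from the MATLAB model.

```python
def gbellmf(x, a, b, c):
    """Generalized bell MF: 1 / (1 + |(x - c) / a|^(2b)).

    a controls the width, b the slope of the shoulders, c the centre.
    """
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

# Degree of membership of a hypothetical K1 value in a fuzzy set centred at c = 50
mu_centre = gbellmf(50.0, a=20.0, b=2.0, c=50.0)    # equals 1.0 at the centre
mu_shoulder = gbellmf(70.0, a=20.0, b=2.0, c=50.0)  # equals 0.5 at distance a
```

In ANFIS, layer-1 nodes evaluate such membership functions for each input (here K1 through K4), and the hybrid learning algorithm tunes a, b, and c alongside the consequent parameters.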

3.2.4. Support Vector Machine (SVM)

SVM was evaluated for its effectiveness in handling both linear and non-linear relationships and maximizing the margin of separation between distinct classes in employee capability. In SVM regression, the goal is to find a function f(x) that estimates the target variable, which is referred to as “Time Factor” (Y), given specific input features X [49,50]. The objective is to find a hyperplane that best fits the data while minimizing the margin of error. The SVM regression model was represented as
f(x) = \langle w, x \rangle + b
where f ( x ) is the predicted Time Factor, w is the weight vector, x is the input feature vector, and b is the bias term.
The SVM regression algorithm aims to minimize the following optimization problem:
\min_{w, b} \; \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{n} (\xi_i + \xi_i^*)
subject to the following constraints:
Y_i - \langle w, x_i \rangle - b \le \epsilon + \xi_i, \qquad \langle w, x_i \rangle + b - Y_i \le \epsilon + \xi_i^*, \qquad \xi_i, \xi_i^* \ge 0
where Y i is the actual Time Factor for data point i, ϵ is the maximum permissible error (epsilon-tube), and ξ i and ξ i * are slack variables, representing the amount by which the predicted Time Factor can deviate from the actual time factor.
The objective function minimizes the squared norm of the weight vector, \|w\|^2, while allowing for errors that are bounded by ϵ and penalized by the hyperparameter C. The parameter C controls the trade-off between maximizing the margin and minimizing the error. A small C encourages a broader margin with more errors, while a large C encourages a narrower margin with fewer errors.
The Support Vector Machine (SVM) algorithm was implemented using MATLAB’s fitrsvm function, configured to utilize a linear kernel by default. The box-constraint parameter (C), which controls the trade-off between maximizing the margin and minimizing the error, was set to its default value of 1. This configuration ensures a straightforward comparison of linear relationships within the dataset. While this study focuses on linear SVMs, future research could explore non-linear kernels such as RBF or polynomial to better capture complex, non-linear relationships in workforce data.
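The epsilon-tube in the constraints above corresponds to the epsilon-insensitive loss used by SVM regression: deviations inside the tube cost nothing, and only the excess beyond ϵ (the slack ξ) is penalized. A minimal sketch, with an assumed ϵ value:

```python
def epsilon_insensitive(y_true, y_pred, eps=0.1):
    """SVR's epsilon-insensitive loss: zero inside the epsilon-tube,
    linear outside it (the slack xi measures the excess deviation)."""
    r = abs(y_true - y_pred)
    return 0.0 if r <= eps else r - eps
```

A prediction within ϵ of the actual Time Factor incurs no loss; one outside the tube is penalized only for the part beyond ϵ, which is what C then scales in the objective.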

3.2.5. Gradient Boosting Machine (GBM)

The Gradient Boosting Machine (GBM) algorithm is an ensemble learning method for regression problems. It builds a predictive model by combining the predictions from an ensemble of weak learners, typically decision trees. GBM aims to minimize residual errors stepwise, where each new model is trained to fit the residual errors left by the previous models [51,52].
The Gradient Boosting Machine (GBM) algorithm was configured to use decision trees as weak learners. A total of 100 trees were trained iteratively, using the least squares loss function to minimize residual errors. This setup, implemented via MATLAB’s fitensemble function, leverages default parameters for learning rate and tree depth, ensuring a robust and efficient model for this application.
For each boosting iteration, t = 1 , 2 , , T , where T is the total number of boosting iterations:
  • Compute the negative gradient (in terms of the loss function) of the loss function concerning the current model’s predictions. This gradient represents the residual errors.
  • Train a weak learner, often a decision tree, to fit the negative gradient (residuals). This creates a new model that is added to the ensemble.
  • Update the model by adding the new model’s predictions to the current one. This is achieved with a learning rate η that controls the step size of the update.
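The three boosting steps above can be sketched with one-feature decision stumps as the weak learners. This is a toy illustration (not the MATLAB fitensemble configuration used in the study); the data, number of iterations, and learning rate are assumed values.

```python
import statistics

def fit_stump(x, r):
    """Fit a one-split decision stump to residuals r by least squares."""
    best = None
    for s in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= s]
        right = [ri for xi, ri in zip(x, r) if xi > s]
        if not left or not right:
            continue
        lm, rm = statistics.fmean(left), statistics.fmean(right)
        sse = sum((ri - lm) ** 2 for xi, ri in zip(x, r) if xi <= s) + \
              sum((ri - rm) ** 2 for xi, ri in zip(x, r) if xi > s)
        if best is None or sse < best[0]:
            best = (sse, s, lm, rm)
    _, s, lm, rm = best
    return lambda xi: lm if xi <= s else rm

def gbm(x, y, T=50, eta=0.1):
    """Start from the mean, then repeatedly fit a weak learner to the
    residuals (proportional to the negative gradient of the squared loss)
    and add its prediction scaled by the learning rate eta."""
    f0 = statistics.fmean(y)
    preds = [f0] * len(x)
    trees = []
    for _ in range(T):
        residuals = [yi - pi for yi, pi in zip(y, preds)]
        h = fit_stump(x, residuals)
        trees.append(h)
        preds = [pi + eta * h(xi) for pi, xi in zip(preds, x)]
    return f0, trees

def predict(xi, f0, trees, eta=0.1):
    return f0 + eta * sum(h(xi) for h in trees)

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 2, 2, 2, 9, 9, 9, 9]
f0, trees = gbm(x, y)
```

Each iteration shrinks the remaining residuals by roughly a factor of (1 − η), so after 50 rounds the ensemble reproduces the step in the data almost exactly.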

3.2.6. Bagged Decision Trees

Bagging, or bootstrap aggregating, is an ensemble learning method that combines multiple decision trees to reduce variance and improve the predictive accuracy of regression models [53].
The Bagged Decision Tree (BDT) algorithm was implemented using MATLAB’s fitensemble function. A total of 100 decision trees were used as weak learners, with bagging employed to aggregate predictions and reduce variance. Configured for regression tasks, the model leveraged default MATLAB parameters for tree depth and leaf size, ensuring robust and unbiased predictions across the dataset.
Bootstrapped Data:
For each iteration t = 1 , 2 , , T (where T is the number of iterations), randomly sample the training data with replacements to create bootstrapped datasets D t .
Decision Tree Training:
Train a decision tree on each bootstrapped dataset D t . Each tree is denoted as h t ( x ) and aims to capture different patterns or noise in the data.
For each tree t, the decision tree is constructed by recursively partitioning the data based on feature splits that minimize impurity or reduce the Mean Squared Error (MSE).
Predictions:
For a given input instance x i , the predictions are made using all the trained decision trees:
\hat{y}_i = \frac{1}{T} \sum_{t=1}^{T} h_t(x_i)
where y i ^ represents the predicted output for instance x i , which is the average prediction across all decision trees.
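The bootstrap-train-average procedure above can be sketched as follows. This is a toy illustration using 1-nearest-neighbour regressors as the high-variance weak learners (rather than the study's MATLAB decision trees); the data and ensemble size are assumed.

```python
import random
random.seed(0)

def fit_1nn(xs, ys):
    """A high-variance weak learner: 1-nearest-neighbour regression."""
    pairs = list(zip(xs, ys))
    return lambda xq: min(pairs, key=lambda p: abs(p[0] - xq))[1]

def bagged_fit(x, y, T=25):
    """Train T learners, each on a bootstrapped dataset D_t
    (sampling the training data with replacement)."""
    models = []
    n = len(x)
    for _ in range(T):
        idx = [random.randrange(n) for _ in range(n)]   # bootstrap indices
        models.append(fit_1nn([x[i] for i in idx], [y[i] for i in idx]))
    return models

def bagged_predict(models, xi):
    """Aggregate: y_hat = (1/T) * sum_t h_t(x), the average prediction."""
    return sum(m(xi) for m in models) / len(models)

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1.1, 1.9, 3.2, 3.9, 5.1, 6.0, 6.8, 8.2]
models = bagged_fit(x, y)
```

Averaging over the bootstrap replicates smooths the jumpy individual predictors, which is the variance-reduction effect the text describes.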

3.2.7. XGBoost

XGBoost is a specific implementation of gradient boosting, an ensemble learning method that combines the predictions of multiple weak models (decision trees) to create a strong predictive model. The key idea is to fit a new model to the existing model’s residuals (differences between actual and predicted values). This process continues iteratively [54,55].
The XGBoost algorithm was employed to model the dataset, utilizing a learning rate ( η ) of 0.3 to balance convergence speed and generalization. Each decision tree in the ensemble had a maximum depth of 10, enabling the model to capture intricate patterns in the data. The objective function was set to “reg:linear” for regression tasks, and the ensemble was trained over 60 boosting rounds. Default hyperparameters were used for subsampling and regularization to maintain computational efficiency and avoid overfitting.
The key mathematical equations in XGBoost relate to the objective function and the gradient of the loss concerning the model’s predictions. The objective function, for instance, illustrates the loss for the entire dataset and the ensemble of trees.
L = \sum_{i=1}^{N} (y_i - \hat{y}_i)^2 + \gamma\, T(f) + \lambda \|w\|
where N is the number of data points, y i is the actual target value, y i ^ is the predicted value, γ and λ are regularization terms, T ( f ) measures the complexity of the tree, and | w | is the magnitude of the weights on the leaves.

3.2.8. Workflow

The workflow for determining the capacity factor for each profile in the dataset consists of three distinct phases, as illustrated in Figure 3.
In the first phase, the synthesis of a generic profile structure is performed to establish a standardized representation of employee characteristics. Subsequently, tasks are classified based on their complexity levels, facilitating the identification of temporal dependencies in execution time. Personnel selection criteria are then defined, creating a subset of eligible employees who meet the predefined qualifications. To ensure the reliability of the analysis, time-related performance metrics are collected over a statistically valid period, enabling a comprehensive evaluation of task execution patterns.
In the second phase, various machine learning algorithms are assessed and compared to identify the most suitable model for load distribution within the targeted public sector environment. Among the tested models, the Adaptive Neuro-Fuzzy Inference System (ANFIS) was selected due to its effectiveness in modeling the transformation of employee profiles into task execution times, referred to as the Time Factor (TF). This facilitates a more adaptive and context-aware prediction of workload distribution, improving efficiency in task allocation.
The third phase focuses on implementing load redistribution mechanisms, including Load Control and load balancing strategies, to further enhance workforce efficiency and optimize task distribution.
As Phases 1 and 3 fall outside the primary scope of this study, the present research remains focused on its main objective: the comparative evaluation of machine learning algorithms for load distribution.

3.3. Comparative Analysis and Evaluation Metrics

The core of the research involved conducting a comparative analysis of these machine learning algorithms using a range of evaluation metrics. The aim was to assess their performance in predicting employee capability and identifying the most effective model.
The comparative analysis involved running each of the selected machine learning algorithms on the prepared dataset and evaluating their performance using a range of metrics, including Mean Squared Error, Root Mean Squared Error, Mean Absolute Error, Median Absolute Error, Mean Squared Logarithmic Error, Root Mean Squared Logarithmic Error, Mean Bias Deviation, Huber Loss, and MAPE.
Mean Squared Error (MSE)
M S E = 1 N i = 1 N ( y i y i ^ ) 2
  • MSE assesses the accuracy of a predictive model. It measures the average squared difference between actual y i and predicted y i ^ values. Lower MSE values indicate better accuracy.
Root Mean Squared Error (RMSE)
RMSE = \sqrt{MSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2}
  • RMSE is a variant of MSE that assesses accuracy. It represents the square root of the MSE, providing a measure of the average absolute error between actual and predicted values.
Mean Absolute Error (MAE)
MAE = \frac{1}{N} \sum_{i=1}^{N} |y_i - \hat{y}_i|
  • MAE assesses a model’s accuracy by measuring the average absolute difference between actual and predicted values. It is less sensitive to outliers than MSE.
Median Absolute Error (MedAE)
M e d A E = M e d i a n ( | y i y i ^ | )
  • MedAE is another accuracy metric. It measures the median of the absolute differences between actual and predicted values. It is robust to outliers.
Mean Absolute Percentage Error (MAPE)
MAPE is an additional accuracy metric used to evaluate forecasting performance. It is calculated as the mean of the absolute percentage differences between actual ( y i ) and predicted ( y i ^ ) values, expressed as
MAPE = \frac{1}{N} \sum_{i=1}^{N} \left| \frac{y_i - \hat{y}_i}{y_i} \right| \times 100
  • MAPE provides insight into the average magnitude of errors relative to the actual values. It offers a forecasting accuracy measure that considers each prediction’s proportional error.
Mean Squared Logarithmic Error (MSLE)
MSLE = \frac{1}{N} \sum_{i=1}^{N} \left( \log(1 + y_i) - \log(1 + \hat{y}_i) \right)^2
  • MSLE assesses the accuracy of models for data with exponential growth patterns. It measures the average squared difference between the logarithm of actual and predicted values.
Root Mean Squared Logarithmic Error (RMSLE)
RMSLE = \sqrt{MSLE}
  • RMSLE is a variant of MSLE. It assesses the accuracy of data with exponential growth. Lower RMSLE values indicate better accuracy.
Huber Loss
For a Time Factor (TF) predicted from the four input features, the Huber Loss can be expressed as
L_\delta(y, f(x)) = \begin{cases} 0.5\,(y - f(x))^2, & \text{if } |y - f(x)| \le \delta \\ \delta\,|y - f(x)| - 0.5\,\delta^2, & \text{if } |y - f(x)| > \delta \end{cases}
where L δ ( y , f ( x ) ) represents the Huber Loss, y represents the true Time Factor (employee capability), f ( x ) represents the predicted Time Factor by the model based on the four input features, and δ is the hyperparameter that controls the threshold at which the loss transitions from quadratic (squared error) to linear (absolute error) behavior.
The choice of the δ parameter depends on the specific problem and is a trade-off between robustness to outliers and the smoothness of the loss function. It is often determined through experimentation, cross-validation, or domain knowledge. Smaller values of δ make the Huber Loss more robust to outliers, while larger values make it behave more like the Mean Squared Error (MSE) loss.
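The piecewise definition above translates directly into code; a minimal sketch, with δ defaulting to an assumed value of 1.0:

```python
def huber(y, f_x, delta=1.0):
    """Huber Loss: quadratic within delta of the target, linear beyond it,
    so large outlier residuals are penalized less harshly than under MSE."""
    r = abs(y - f_x)
    if r <= delta:
        return 0.5 * r ** 2
    return delta * r - 0.5 * delta ** 2
```

At the threshold |y − f(x)| = δ both branches agree (0.5 δ²), so the loss and its gradient are continuous, which keeps optimization well behaved.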
Mean Bias Deviation (MBD)
MBD = \frac{1}{N} \sum_{i=1}^{N} \frac{y_i - \hat{y}_i}{y_i}
  • MBD assesses bias in predictions by measuring the average relative deviation between actual and predicted values. With this sign convention, a positive value indicates that the model underestimates on average (actual values exceed predictions), while a negative value indicates overestimation.
C-index
C = Number of concordant pairs + 0.5 × Number of tied pairs Total number of pairs
Concordant pairs are pairs of observations where the predicted order matches the actual order, tied pairs are pairs where the predicted outcomes or the actual outcomes are equal, and the total number of pairs is the total number of possible comparisons between pairs of data points, calculated as
Total number of pairs = \binom{n}{2} = \frac{n(n-1)}{2}
The C-index (Concordance Index) evaluates a predictive model’s ranking ability by measuring the concordance between predicted and actual outcomes. It ranges from 0.5 (random guessing) to 1.0 (perfect concordance), making it valuable for continuous or ordinal outputs.
The C-index complemented other metrics, such as Mean Squared Error, by analyzing the proportion of correctly ranked pairs, validating the effectiveness of ANFIS in predicting employee performance.
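The C-index definition above, counting concordant and tied pairs over all n(n − 1)/2 comparisons, can be sketched as follows (following the paper's convention that a tie in either the predicted or the actual outcomes counts as a tied pair):

```python
def c_index(y_true, y_pred):
    """Concordance Index: (concordant + 0.5 * tied) / total pairs."""
    conc = tied = total = 0
    n = len(y_true)
    for i in range(n):
        for j in range(i + 1, n):
            total += 1
            if y_true[i] == y_true[j] or y_pred[i] == y_pred[j]:
                tied += 1   # predicted or actual outcomes are equal
            elif (y_true[i] < y_true[j]) == (y_pred[i] < y_pred[j]):
                conc += 1   # predicted order matches the actual order
    return (conc + 0.5 * tied) / total
```

A perfectly ranked prediction gives 1.0, while a constant prediction (all pairs tied) gives 0.5, the random-guessing baseline.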
These metrics serve different purposes in assessing the performance of predictive models. MSE, RMSE, MAE, and RMSLE evaluate accuracy, MBD helps identify prediction bias, and the C-index measures the model’s ability to rank predictions correctly.
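For reference, the accuracy and bias metrics above can be sketched in plain Python; the sample values are illustrative, not results from the study.

```python
import math
from statistics import median

def mse(y, p):   return sum((a - b) ** 2 for a, b in zip(y, p)) / len(y)
def rmse(y, p):  return math.sqrt(mse(y, p))
def mae(y, p):   return sum(abs(a - b) for a, b in zip(y, p)) / len(y)
def medae(y, p): return median(abs(a - b) for a, b in zip(y, p))
def mape(y, p):  return 100 * sum(abs((a - b) / a) for a, b in zip(y, p)) / len(y)
def msle(y, p):  return sum((math.log1p(a) - math.log1p(b)) ** 2 for a, b in zip(y, p)) / len(y)
def rmsle(y, p): return math.sqrt(msle(y, p))
def mbd(y, p):   return sum((a - b) / a for a, b in zip(y, p)) / len(y)  # (y - y_hat) / y

y_true = [100.0, 200.0, 400.0]  # illustrative actual Time Factor values
y_pred = [110.0, 190.0, 400.0]  # illustrative predictions
```

Evaluating all of them on the same actual/predicted pairs, as done for each algorithm in the study, is what makes the metric-by-metric comparison in Section 4 possible.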

4. Experimental Results

4.1. Comparing Algorithms

This section provides a detailed comparison of the performance metrics obtained from various algorithms applied to our predictive modeling problem. We analyze the results of all seven algorithms. In all cases, the output was the Time Factor, the mean of TF_i per node N_i over n samples.
\overline{TF} = \frac{1}{n} \sum_{i=1}^{n} TF_i
The Capacity Factor, introduced by Michalopoulos et al. (2022) [1], is the number of TW_1 tasks executed per time unit, where TW_x is defined as the task weight in terms of time in minutes. Therefore, for every node N_i with profile P_j for task T_k,
CF(P_j) = \frac{\text{average time spent on accomplished } TW_1 \text{ tasks of } P_j}{TU} = \frac{TF_i}{TU}
where T U = 1 min.
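The Time Factor mean and Capacity Factor definitions above amount to the following sketch; the task times are assumed values, not records from the dataset.

```python
def mean_time_factor(tf_samples):
    """TF-bar: the mean of TF_i over the n collected samples (in minutes)."""
    return sum(tf_samples) / len(tf_samples)

def capacity_factor(tf_i, tu=1.0):
    """CF(P_j) = TF_i / TU, with TU = 1 minute as defined above."""
    return tf_i / tu

tf_samples = [12.0, 18.0, 15.0]        # minutes spent on TW_1 tasks per node
tf_bar = mean_time_factor(tf_samples)  # mean Time Factor for the profile
cf = capacity_factor(tf_bar)           # Capacity Factor with TU = 1 min
```

Because TU is one minute, CF numerically equals the mean Time Factor; the division is kept explicit so the time unit can be changed per deployment.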
Four hundred and fifty employee records were used for the training dataset, an additional one hundred records served as the testing dataset, and eight records were used for validation. The data samples were collected over three years. Approximately 11% of the initial data were deemed inaccurate due to continuous or unassigned tasks, which introduced time-related issues and invalid value ranges (infinite values).
In terms of Mean Squared Error (MSE), Linear Regression yielded 2781.07, while ANFIS outperformed all other models with an MSE of 1284.38, indicating that ANFIS achieved the best accuracy in predicting our target. Similarly, ANFIS achieved the lowest Root Mean Squared Error (RMSE) at 35.83, demonstrating its superiority in minimizing prediction errors. XGBoost reported the worst RMSE value of 81.63, suggesting that it is unsuitable for our purposes.
Comparing other metrics, ANFIS consistently outperforms the other algorithms in Mean Absolute Error (MAE), Median Absolute Error (MedAE), Mean Squared Logarithmic Error (MSLE), Root Mean Squared Logarithmic Error (RMSLE), Mean Bias Deviation (MBD), MAPE, and Huber Loss. The accompanying graphs present these results visually, highlighting ANFIS as the top-performing algorithm in accuracy and predictive power.

4.1.1. Linear Regression Analysis Results

Linear Regression analysis is a fundamental statistical technique to model the relationship between a dependent variable and one or more independent variables. This algorithm serves as a baseline for comparing the performance of more complex models.
The scatter plot (Figure 4a) illustrates the relationship between the actual and predicted ‘Time Factor’ values during the training phase. Each point represents an observation, with the red dashed line indicating a perfect fit. Similar to the training performance, Figure 4b scatter plot shows the testing phase results, comparing actual and predicted ‘Time Factor’ values. Figure 4c visualizes the validation phase results, highlighting how closely the model predictions match the actual values.
The scatter plot generated by the Linear Regression model visualizes the relationship between the actual and predicted ‘Time Factor’ values. Each point represents an observation, with the x-axis displaying the actual ‘TF’ values and the y-axis displaying the predicted ‘TF’ values.
The red dashed line is a reference line representing a perfect fit. Points lying on this line have predictions that match the actual values perfectly.
The angle θ between the perfect-fit line and the best-fit line is given by
\theta = \arctan \left| \frac{m_2 - m_1}{1 + m_1 m_2} \right|
where m_1 and m_2 are the slopes of the two lines.
We notice that as the “TF” value increases, data points tend to approach a state of “perfect fit”. However, there appears to be a significant error level for a substantial number of data records, particularly those below a “TF” value of 700.
The Linear Regression model provided a straightforward benchmark for evaluating other algorithms. It demonstrated moderate accuracy with some degree of prediction error. Despite its simplicity, it highlighted the linear relationships among the variables. However, its performance was limited in capturing complex, non-linear patterns in the data, as seen in Table 3 below.

4.1.2. ANN Results

As far as the ANN is concerned, five different configurations were proposed:
  • Configuration 1: single hidden layer (10 neurons).
  • Configuration 2: two hidden layers (8, 4 neurons).
  • Configuration 3: three hidden layers (8, 4, 4 neurons).
  • Configuration 4: three hidden layers (8, 8, 8 neurons).
  • Configuration 5: three hidden layers (20, 20, 20 neurons).
Configuration 3, with three hidden layers of 8, 4, and 4 neurons, demonstrated the best performance in terms of RMSE (43.34), indicating a good balance between prediction accuracy and model complexity; it also had relatively low MAE and MedAE values, suggesting accurate predictions with low variability. Configuration 4, with three hidden layers of 8, 8, and 8 neurons, also performed well, with a competitive RMSE (54.79). Configuration 1, with a single hidden layer of 10 neurons, provided reasonable performance but fell slightly short of Configuration 3 in terms of RMSE. Configuration 2, with two hidden layers of 8 and 4 neurons, produced the second-highest RMSE, while Configuration 5, with three hidden layers of 20, 20, and 20 neurons, performed worst. Adding even more layers and nodes did not improve any of the reported metrics, as seen in Table 4. For the remainder of this chapter, we therefore present the results of Configuration 3 (Config. 3), which outperformed all other configurations.
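Configuration 3 can be sketched with scikit-learn's MLPRegressor as a stand-in for the authors' actual implementation; the synthetic data and training settings here are assumptions, not the reported setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic stand-in features: public/private experience, education, age.
X = rng.uniform([0, 0, 0, 21], [30, 30, 10, 65], size=(450, 4))
y = 700 - 8 * X[:, 0] - 6 * X[:, 1] - 20 * X[:, 2] + rng.normal(0, 40, size=450)

# Configuration 3: three hidden layers with 8, 4 and 4 neurons.
ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8, 4, 4), max_iter=2000, random_state=1),
)
ann.fit(X, y)
preds = ann.predict(X)
```

Feature standardization is included because gradient-based training of small networks is sensitive to input scale.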
Figure 5a shows the ANN’s performance in predicting ‘Time Factor’ during training, with each point representing a data observation. Figure 5b depicts the relationship between actual and predicted ‘Time Factor’ values during the testing phase. Figure 5c illustrates the ANN’s prediction accuracy in the validation phase.
The ANN model displayed significant improvement over Linear Regression, effectively capturing non-linear relationships in the data and yielding better predictive performance. The model’s ability to learn from data through multiple layers of neurons enabled it to handle intricate patterns (Figure 6 and Figure 7). Nonetheless, its training process was more computationally intensive compared to simpler models.

4.1.3. ANFIS Results

The ANFIS algorithm demonstrated notably superior performance (Figure 8) compared to all other algorithms across all reported metrics, showcasing its exceptional efficiency and effectiveness in the given context.
The ANFIS model was implemented with the following specifications:
1. Membership Function: The generalized bell-shaped membership function (gbellmf) was selected for its flexibility and ability to model smooth transitions between fuzzy sets [56]. The exact parameters (a, b, c) were optimized during training to minimize prediction errors and resulted in the following values:
  • For input K1 “Academic Skills”, a ranges from 63.2492 to 63.2517, b from 1.9829 to 2.0024, and c from 0 to 252.9987.
  • For input K2 “Public Sector”, a ranges from 72.7493 to 72.7509, b from 1.9808 to 2.0005, and c from 0 to 290.9995.
  • For input K3 “Private Sector”, a ranges from 71.2489 to 71.2515, b from 1.9679 to 2.0219, and c from 0 to 284.9992.
  • For input K4 “Age”, a ranges from 11.2449 to 11.2497, b from 1.9997 to 2.0089, and c from 20.9999 to 65.9999.
2. Optimization Algorithm: A hybrid optimization algorithm was used to train the ANFIS model. This method combines gradient descent for updating the premise parameters (membership function parameters) and least-squares estimation for updating the consequent parameters (linear coefficients). This hybrid approach was selected due to its superior performance, achieving a lower Root Mean Squared Error (RMSE) compared to the standard backpropagation method.
3. Number of Rules: The ANFIS model utilized 81 fuzzy rules, derived from the training data. These rules effectively capture the relationships between input variables, such as work experience, education, and age, and the output variable (Time Factor, TF).
4. Model Structure: The ANFIS architecture consisted of 193 nodes, with 405 linear parameters and 36 non-linear parameters. This structure enabled the model to capture both linear and non-linear relationships within the dataset, improving its predictive accuracy.
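A minimal NumPy sketch of the generalized bell membership function from item 1 follows; the parameter values are illustrative, chosen within the reported ranges for input K4 “Age”:

```python
import numpy as np

def gbellmf(x, a, b, c):
    """Generalized bell membership function: mu(x) = 1 / (1 + |(x - c) / a|^(2b))."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

# Illustrative parameters near the reported ranges for input K4 "Age".
ages = np.linspace(21, 66, 5)
mu = gbellmf(ages, a=11.25, b=2.0, c=21.0)  # membership peaks (mu = 1) at x = c
```

Parameter a controls the width of the bell, b the steepness of its shoulders, and c its center, which is why optimizing (a, b, c) per input lets ANFIS place smooth fuzzy sets over each feature's range.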
Figure 9a shows the training phase results for ANFIS, indicating a strong fit to the training data; Figure 9b illustrates ANFIS’s testing performance, with predictions closely aligning with actual values; and Figure 9c depicts the validation phase, showing how well ANFIS generalizes to unseen data (Table 5).
ANFIS outperformed other algorithms, demonstrating exceptional accuracy and robustness in modeling the ‘Time Factor’, making it a compelling choice for this application. Its hybrid approach leveraged neural networks and fuzzy logic, enhancing interpretability and precision. The model’s adaptability made it particularly suited for complex, real-world scenarios.

4.1.4. Gradient Boosting Machine Results

Gradient Boosting Machines (GBMs) are ensemble learning methods that build models sequentially, with each new model correcting errors made by previous ones, thus improving predictive accuracy.
Figure 10a displays the GBM’s training phase performance, showing the relationship between actual and predicted ‘Time Factor’ values; Figure 10b represents the GBM’s performance during testing, illustrating its predictive capability; and Figure 10c shows how well the GBM model performs on validation data, emphasizing its accuracy.
The Gradient Boosting Machine (GBM) regression model achieved a Mean Squared Error (MSE) of 2589.08 and a Root Mean Squared Error (RMSE) of 50.88, reflecting relatively low prediction errors. The Mean Absolute Error (MAE) is 36.89, and the Median Absolute Error (MedAE) is 28.54, indicating moderate accuracy. The model’s Mean Bias Deviation (MBD) is also close to zero (2.57), indicating minimal bias.
GBM provided robust predictions with relatively low errors, indicating its strong potential for accurate predictive modeling in this context, as seen in Table 6. The sequential boosting allowed the model to learn from previous mistakes, improving overall performance. However, the model’s complexity necessitated careful tuning of hyperparameters to avoid overfitting.
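The sequential boosting described above can be sketched with scikit-learn; the hyperparameters and synthetic data below are assumptions, not the tuned values used in the study:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
# Synthetic stand-in features: public/private experience, education, age.
X = rng.uniform([0, 0, 0, 21], [30, 30, 10, 65], size=(450, 4))
y = 700 - 8 * X[:, 0] - 6 * X[:, 1] - 20 * X[:, 2] + rng.normal(0, 40, size=450)

# Sequential boosting: each new tree fits the residuals of the current ensemble.
gbm = GradientBoostingRegressor(
    n_estimators=200, learning_rate=0.05, max_depth=3, random_state=2
)
gbm.fit(X, y)
mae = mean_absolute_error(y, gbm.predict(X))
```

The small learning rate paired with many shallow trees is the usual guard against the overfitting that the text warns about.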

4.1.5. Bagged Decision Tree Results

Bagged Decision Trees involve creating multiple decision trees through bootstrap aggregating (bagging), which enhances the stability and accuracy of the model by reducing variance.
Figure 11a illustrates the training performance, with actual vs. predicted ‘Time Factor’ values. Figure 11b depicts the testing phase performance, showing the model’s accuracy in predictions, while Figure 11c shows the validation performance, emphasizing the model’s generalizability.
The Bagged Decision Tree model produced promising results in our analysis. It achieved a Mean Squared Error (MSE) of 2589.08 and a Root Mean Squared Error (RMSE) of 50.88, indicating fair predictive accuracy. Additionally, the model displayed a low Mean Absolute Error (MAE) of 36.89 and a Median Absolute Error (MedAE) of 28.54, reflecting its reliability. The Mean Bias Deviation (MBD) near zero (2.57) indicates that the model maintains a balanced prediction.
Bagged Decision Trees demonstrated strong predictive performance, benefiting from the reduced variance and increased stability provided by the bagging technique, as seen in Table 7. This ensemble method effectively handled variability in the data, enhancing model reliability. Despite these advantages, the interpretability of the final model was somewhat diminished due to the aggregation of multiple decision trees.
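Bootstrap aggregation can be sketched as follows; the ensemble size and synthetic data are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.default_rng(3)
# Synthetic stand-in features: public/private experience, education, age.
X = rng.uniform([0, 0, 0, 21], [30, 30, 10, 65], size=(450, 4))
y = 700 - 8 * X[:, 0] - 6 * X[:, 1] - 20 * X[:, 2] + rng.normal(0, 40, size=450)

# Bagging: each of 50 trees is trained on a bootstrap resample of the data;
# the ensemble prediction is the average of the individual trees.
bag = BaggingRegressor(n_estimators=50, random_state=3)  # default base: decision tree
bag.fit(X, y)
preds = bag.predict(X)
```

Averaging over resampled trees is what reduces variance relative to a single deep tree, at the cost of the interpretability loss noted above.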

4.1.6. SVM Results

The Support Vector Machine (SVM) model provided average performance in predicting the target values, with moderate error rates and a reasonable correlation between actual and predicted values. The model shows a balanced approach to handling the training, testing, and validation datasets.
Figure 12a shows SVM’s training performance, with a clear relationship between actual and predicted values. Figure 12b illustrates the testing performance, highlighting the model’s predictive accuracy, while Figure 12c depicts the validation phase, demonstrating SVM’s ability to generalize well to new data.
The model’s performance, as reflected by an RMSE of 54.21 and other metrics, suggests that it may benefit from further optimization to improve its predictive accuracy.
The SVM algorithm showed a balanced performance, effectively handling linear and non-linear relationships within the data. It offered a solid alternative by maximizing the margin between classes, crucial for clear decision boundaries. SVM’s versatility and robustness were notable; however, it struggled with large datasets due to its computational intensity. Additionally, the model required careful parameter tuning and selecting an appropriate kernel function to achieve optimal results. Despite these challenges, SVM provided a reliable and consistent performance across various metrics, as seen in Table 8.
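A sketch of support vector regression with an RBF kernel follows; the C and epsilon values are unturned assumptions, not the study's parameters:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(5)
# Synthetic stand-in features: public/private experience, education, age.
X = rng.uniform([0, 0, 0, 21], [30, 30, 10, 65], size=(450, 4))
y = 700 - 8 * X[:, 0] - 6 * X[:, 1] - 20 * X[:, 2] + rng.normal(0, 40, size=450)

# Feature scaling is essential for SVMs; C and epsilon would normally be tuned
# via cross-validation, as would the kernel choice.
svm = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=5.0))
svm.fit(X, y)
preds = svm.predict(X)
```

The quadratic-to-cubic training cost in the number of samples is why the text flags SVM's computational intensity on large datasets.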

4.1.7. XGBoost Results

The XGBoost model yielded the worst predictive performance among the tested models, with higher error metrics and a lower correlation between actual and predicted values. Despite its advanced capabilities, the model struggled to capture the underlying patterns in the datasets effectively.
Figure 13a shows XGBoost’s performance in predicting ‘Time Factor’ during training, with each point representing a data observation. Figure 13b depicts the relationship between actual and predicted ‘Time Factor’ values during the testing phase, while Figure 13c illustrates XGBoost’s prediction accuracy in the validation phase.
XGBoost, contrary to expectations, had a higher RMSE, indicating poorer performance than other algorithms in this study, as seen in Table 9. Although it is generally known for its high accuracy and efficiency, it did not perform as well in this application. The higher error rates could be attributed to overfitting or the specific characteristics of the dataset. Furthermore, the complexity and extensive hyperparameter tuning required for XGBoost could have contributed to its suboptimal performance in this context.
In summary, the experimental results strongly support the selection of ANFIS as the optimal predictive model for our HR data analysis, based on the detailed assessment of key performance metrics as seen in Table 10 and Table 11 below.
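Six of the evaluation metrics used throughout this comparison (MSE, RMSE, MAE, MedAE, MAPE, MBD) can be reproduced with a few lines of NumPy; here MAPE is expressed as a percentage and MBD is the signed mean error:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute MSE, RMSE, MAE, MedAE, MAPE (%), and MBD for a set of predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mse = np.mean(err ** 2)
    return {
        "MSE": mse,
        "RMSE": np.sqrt(mse),
        "MAE": np.mean(np.abs(err)),
        "MedAE": np.median(np.abs(err)),
        "MAPE": 100.0 * np.mean(np.abs(err / y_true)),  # assumes no zero targets
        "MBD": np.mean(err),  # signed: positive means systematic over-prediction
    }

m = regression_metrics([100, 200, 300], [110, 190, 305])
```

Reporting both RMSE (which penalizes large errors) and MedAE (which is robust to outliers) is what allows the text to distinguish accuracy from variability.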

4.2. Load Control Impacts

Bearing in mind that each algorithm produces a different RMSE on the given dataset, we build on the research of Giotopoulos et al. (2024) [2] to provide insight into load deviation and the efficiency of Load Control in public sector services.
In a scenario where CF = 1 and the Time Factor is also set to 1 min, and given that each algorithm exhibits its own RMSE, the rescaled values of CF are as follows (Table 12):
The graphs in Figure 14 are based on the slow-start algorithm, where cognitive switching produces a production loss of 20% over a time frame of 120 min with an initial queue of 40 tasks in TW1, in accordance with Giotopoulos et al. (2024) [2]; Table 13 provides statistics on unfinished tasks and Node Load for the same time frame.
In Figure 14a, 11 tasks remained unfinished. Average Load for 120 min: 47%; high load: 88%.
In Figure 14b, 30 tasks remained unfinished. Average Load for 120 min: 42%; high load: 85%.
In Figure 14c, 26 tasks remained unfinished. Average Load for 120 min: 44%; high load: 87%.
In Figure 14d, 26 tasks remained unfinished. Average Load for 120 min: 40%; high load: 81%.
In Figure 14e, 23 tasks remained unfinished. Average Load for 120 min: 52%; high load: 88%.
In Figure 14f, 29 tasks remained unfinished. Average Load for 120 min: 41%; high load: 85%.
In Figure 14g, 33 tasks remained unfinished. Average Load for 120 min: 39%; high load: 84%.

5. Discussion

5.1. Time Complexity

The efficiency and scalability of machine learning algorithms are crucial, and understanding their time complexity is critical when choosing the most appropriate one for a task. We therefore examine the time complexity of all seven algorithms. Linear Regression runs in O(n), where n is the number of data points; its simple mathematical calculations make it highly efficient, especially for large datasets. Neural networks are known for their adaptability to complex tasks, but their training complexity depends on factors such as the architecture and optimization algorithm, and is approximately O(n·m·k), where n is the number of samples, m the number of layers, and k the number of iterations. ANFIS, which combines fuzzy logic and neural networks to model complex relationships, has a complexity analogous to that of neural networks, O(n·m·k), where n represents data points, m the number of rules, and k the number of iterations. Bagging involves training multiple decision trees and has a complexity of O(m·n·log n), where m stands for features and n for data points; the parallel training of the individual trees further enhances the efficiency of this ensemble method.
GBM builds its trees sequentially, giving a complexity of approximately O(n·m·k), and XGBoost is close to GBM at approximately O(n·m·k); in addition, XGBoost leverages pruning techniques and parallel processing, which contribute to its efficiency and scalability. SVMs, on the other hand, while effective in many scenarios, exhibit variable time complexities ranging from O(n²·m) to O(n³·m), contingent on the kernel used, where n stands for data points and m for features. SVMs therefore require substantial hardware processing for extensive datasets and high-dimensional feature spaces.
While ANFIS incurs a substantial computational expense, it remains a prominent choice across various applications owing to its proficiency in acquiring and depicting intricate non-linear associations.
As shown in Table 14, the comparative analysis of CPU time and memory usage highlights significant differences across the algorithms, reflecting their computational efficiency and resource demands. Linear Regression is the most lightweight algorithm, with minimal CPU time (10–30 ms) and memory usage (5–15 MB), making it suitable for simple, linear datasets. On the other hand, Artificial Neural Networks (ANNs) and Adaptive Neuro-Fuzzy Inference System (ANFIS) demand higher resources, with CPU times of 120–180 ms and 180–220 ms and memory usage of 30–50 MB and 40–60 MB, respectively, due to their ability to model complex, non-linear relationships. Ensemble methods, such as Gradient Boosting Machine (GBM) and Bagged Decision Trees, exhibit moderate resource demands, with CPU times ranging from 140 to 200 ms and memory usage from 30 to 50 MB, balancing complexity and efficiency. Support Vector Machine (SVM) and XGBoost, known for their robust performance on complex datasets, show the highest CPU times (200–260 ms and 200–240 ms, respectively) and memory usage (50–60 MB and 40–55 MB), reflecting their computational intensity. This analysis underscores the trade-off between computational resources and predictive power, emphasizing the importance of selecting algorithms that align with the specific requirements of the dataset and application.
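Figures of the kind reported in Table 14 can be obtained with Python's standard profiling tools. The sketch below, which uses an ordinary least-squares solve as a placeholder training step, illustrates one way to measure CPU time and peak memory; it is not the measurement procedure used by the authors:

```python
import time
import tracemalloc

import numpy as np

def profile_fit(fit_fn):
    """Return (CPU time in ms, peak traced memory in MB) for a training call."""
    tracemalloc.start()
    t0 = time.process_time()
    fit_fn()
    cpu_ms = (time.process_time() - t0) * 1000.0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return cpu_ms, peak / 1e6

rng = np.random.default_rng(4)
X = rng.normal(size=(450, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0])
# Ordinary least squares as a stand-in "training" step.
cpu_ms, peak_mb = profile_fit(lambda: np.linalg.lstsq(X, y, rcond=None))
```

Using time.process_time rather than wall-clock time isolates CPU cost from I/O and scheduling noise, which matters when ranking algorithms by resource demand.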

5.2. Comparative Analysis and Practical Recommendations for Algorithm Selection

The results of this study highlight the relative strengths and limitations of the selected machine learning algorithms in the context of workload distribution and predictive modeling. Linear Regression served as a baseline model, demonstrating its inability to effectively capture non-linear dependencies within the dataset. In contrast, Artificial Neural Networks (ANNs) significantly improved predictive accuracy by modeling complex patterns through their layered structure and adaptive learning capabilities.
Among the tested models, the Adaptive Neuro-Fuzzy Inference System (ANFIS) emerged as the most effective, achieving superior accuracy and robustness. Its hybrid architecture, which integrates fuzzy logic with neural networks, enables efficient handling of uncertainty and non-linearity, making it particularly well suited for workload optimization.
Ensemble-based models, including Gradient Boosting Machine (GBM) and Bagged Decision Trees, also exhibited strong predictive performance, whereas XGBoost proved less effective. By leveraging multiple weak learners, the ensemble methods enhanced generalization and delivered competitive accuracy. In addition, they demonstrated a balance between predictive power and computational efficiency, making them viable alternatives for large-scale workload management applications.
The Support Vector Machine (SVM) provided a versatile approach that handled non-linear relationships, proving effective in high-dimensional spaces. However, its computational demands were significantly higher than those of other models, requiring careful consideration of the trade-off between accuracy and resource efficiency. Similarly, GBM, while offering strong performance, required extensive computational resources, underscoring the need to balance precision with operational feasibility.
This analysis underscores the importance of selecting machine learning models based on the specific demands of a given task. Factors such as interpretability, scalability, and computational constraints should guide the choice of algorithm to ensure optimal decision-making processes in dynamic environments. By systematically evaluating these models, this study provides insights that can inform both future research and real-world applications, fostering more efficient and adaptive workload distribution strategies.

5.3. Limitations

This study’s constraints concern both the data sample and the methodological approach. Regarding the data sample, it is essential to acknowledge that the study’s findings cannot easily be extrapolated to other populations or contextual settings. This limitation arises from the limited sample size, which may not represent the broader populace. To mitigate this constraint, prospective research endeavors should consider acquiring data from a more expansive and heterogeneous sample, facilitating a wider scope of generalizability.
Regarding the selection of the ANFIS algorithm, its deployment requires two significant resources: specialized expertise and hardware capacity, both of which could restrict its applicability in specific organizational settings. The relatively compact dataset of 450 records must also be considered; it may explain why the performance of the XGBoost model does not exceed that of the GBM model. One plausible interpretation of this divergence is overfitting, stemming from XGBoost’s additional hyperparameters. Consequently, future research should address these limitations to enhance the robustness and generalizability of the study’s outcomes, while accounting for the pragmatic constraints associated with ANFIS in real-world public-sector scenarios.

5.4. Task Quality

While the primary focus of this study is on optimizing load distribution and predicting employee capability based on the Time Factor (TF), we acknowledge the critical importance of task quality in workload management systems. Task quality is a multifaceted aspect that encompasses accuracy, completeness, and adherence to standards, all of which are essential for effective service delivery in the public sector. Although this paper does not explicitly model or predict task quality, our system indirectly addresses it through a structured validation process supervised by managers.
In the current framework, tasks are assigned to employees based on their predicted capability, as determined by the Time Factor. Once a task is completed, it undergoes a validation process where supervisors assess the output for compliance with required standards. If the task is deemed incomplete or contains errors, it is returned to the same employee for corrections, and the timer resumes until the task meets the necessary quality criteria. This iterative process ensures that tasks are completed to an acceptable standard, albeit at the cost of additional time. While this approach does not explicitly measure or predict task quality, it provides a mechanism for maintaining quality control within the system.
The decision to focus on task completion time rather than quality was driven by the primary objective of this study: to compare machine learning algorithms for optimizing load distribution in dynamic workload management systems. Task completion time is a more straightforward and quantifiable metric, making it suitable for the comparative analysis of the algorithms in the current research. However, we recognize that task quality is equally important and can significantly impact organizational efficiency and service delivery. For instance, an employee who completes tasks quickly but with frequent errors may ultimately require more time for corrections, negating the benefits of faster task completion. Therefore, while this study does not explicitly address task quality, it lays the groundwork for future research in this area.

6. Conclusions and Future Work

In the current study, we conducted a comparative analysis of various machine learning algorithms. These algorithms were executed within the Apache Spark framework to utilize its distributed computing power for large-scale data processing. The primary focus was to enhance workforce management within the public sector by predicting employee capability based on multiple factors, such as each employee’s work experience, educational background, and age. After meticulous evaluation, ANFIS emerged as the most effective predictive model.
ANFIS’s superiority can be attributed to its remarkable ability to capture complex non-linear relationships, offering unparalleled accuracy and interpretability. It adeptly navigated the intricacies of employee potential prediction within the dynamic workload management system, demonstrating its invaluable potential in enhancing public sector decision-making processes.
This research underscores the critical importance of selecting a machine learning algorithm tailored to the specific domain and dataset demonstrated in this study. ANFIS stands out as a powerful tool for addressing the complex, intricate challenges of workforce management in the public sector.
The next step is to integrate the ANFIS-based predictive model into operational systems within the public sector. This real-time integration can facilitate dynamic workload management and resource allocation, enhancing efficiency and productivity in public sector organizations. Future studies should consider collecting data from more extensive and diverse samples to further bolster the findings’ generalizability. This can help assess the performance of the ANFIS model in various populations and contextual settings. Acknowledging that ANFIS may require specialized expertise and computing resources, future work can examine resource-efficient implementations or cloud-based solutions, making it more accessible to a broader spectrum of organizations.
The application of ANFIS has showcased its potential to revolutionize decision-making processes. Future work can build upon these foundations to further advance the public sector’s efficiency and effectiveness. Through ongoing research and implementation, the public sector can successfully embrace data-driven human resource management practices to navigate the challenges of the digital transformation era.

Author Contributions

K.C.G., D.M., G.V., D.P., I.G. and S.S. conceived the idea, designed and performed the experiments, analyzed the results, drafted the initial manuscript, and revised the final manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Dataset available on request from the authors.

Acknowledgments

We are indebted to the anonymous reviewers whose comments helped us to improve the presentation.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Michalopoulos, D.; Karras, A.; Karras, C.; Sioutas, S.; Giotopoulos, K. Neuro-Fuzzy Employee Ranking System in the Public Sector. In Fuzzy Systems and Data Mining VIII; IOS Press: Amsterdam, The Netherlands, 2022; Volume 358. [Google Scholar]
  2. Giotopoulos, K.C.; Michalopoulos, D.; Vonitsanos, G.; Papadopoulos, D.; Giannoukou, I.; Sioutas, S. Dynamic Workload Management System in the Public Sector. Information 2024, 15, 335. [Google Scholar] [CrossRef]
  3. Giotopoulos, K.C.; Michalopoulos, D.; Karras, A.; Karras, C.; Sioutas, S. Modelling and analysis of neuro fuzzy employee ranking system in the public sector. Algorithms 2023, 16, 151. [Google Scholar] [CrossRef]
  4. Shahzadi, I.; Javed, A.; Pirzada, S.S.; Nasreen, S.; Khanam, F. Impact of employee motivation on employee performance. Eur. J. Bus. Manag. 2014, 6, 159–166. [Google Scholar]
  5. Hameed, A.; Ramzan, M.; Zubair, H.M.K. Impact of compensation on employee performance (empirical evidence from banking sector of Pakistan). Int. J. Bus. Soc. Sci. 2014, 5. [Google Scholar]
  6. Anitha, J. Determinants of employee engagement and their impact on employee performance. Int. J. Product. Perform. Manag. 2014, 63, 308–323. [Google Scholar]
  7. Ahangar, R.G. The relationship between intellectual capital and financial performance: An empirical investigation in an Iranian company. Afr. J. Bus. Manag. 2011, 5, 88. [Google Scholar]
  8. Hong, E.N.C.; Hao, L.Z.; Kumar, R.; Ramendran, C.; Kadiresan, V. An effectiveness of human resource management practices on employee retention in institute of higher learning: A regression analysis. Int. J. Bus. Res. Manag. 2012, 3, 60–79. [Google Scholar]
  9. Phusavat, K.; Comepa, N.; Sitko-Lutek, A.; Ooi, K.B. Interrelationships between intellectual capital and performance: Empirical examination. Ind. Manag. Data Syst. 2011, 111, 810–829. [Google Scholar] [CrossRef]
  10. Darmawan, D.; Mardikaningsih, R.; Sinambela, E.A.; Arifin, S.; Putra, A.R.; Hariani, M.; Irfan, M.; Al Hakim, Y.R.; Issalillah, F. The quality of human resources, job performance and employee loyalty. Int. J. Psychosoc. Rehabil. 2020, 24, 2580–2592. [Google Scholar] [CrossRef]
  11. Rivaldo, Y.; Nabella, S.D. Employee performance: Education, training, experience and work discipline. Calitatea 2023, 24, 182–188. [Google Scholar]
  12. Chen, Y.S.; Chang, K.C. Analyzing the nonlinear effects of firm size, profitability, and employee productivity on patent citations of the US pharmaceutical companies by using artificial neural network. Scientometrics 2010, 82, 75–82. [Google Scholar] [CrossRef]
  13. Simeunović, N.; Kamenko, I.; Bugarski, V.; Jovanović, M.; Lalić, B. Improving workforce scheduling using artificial neural networks model. Adv. Prod. Eng. Manag. 2017, 12, 337–352. [Google Scholar] [CrossRef]
  14. Fekri Sari, M.; Avakh Darestani, S. Fuzzy overall equipment effectiveness and line performance measurement using artificial neural network. J. Qual. Maint. Eng. 2019, 25, 340–354. [Google Scholar] [CrossRef]
  15. Goodarzizad, P.; Mohammadi Golafshani, E.; Arashpour, M. Predicting the construction labour productivity using artificial neural network and grasshopper optimisation algorithm. Int. J. Constr. Manag. 2023, 23, 763–779. [Google Scholar] [CrossRef]
  16. Heravi, G.; Eslamdoost, E. Applying artificial neural networks for measuring and predicting construction-labor productivity. J. Constr. Eng. Manag. 2015, 141, 04015032. [Google Scholar] [CrossRef]
  17. Proto, A.R.; Sperandio, G.; Costa, C.; Maesano, M.; Antonucci, F.; Macrì, G.; Scarascia Mugnozza, G.; Zimbalatti, G. A three-step neural network artificial intelligence modeling approach for time, productivity and costs prediction: A case study in Italian forestry. Croat. J. For. Eng. J. Theory Appl. For. Eng. 2020, 41, 35–47. [Google Scholar] [CrossRef]
  18. Gelmereanu, C.; Morar, L.; Bogdan, S. Productivity and cycle time prediction using artificial neural network. Procedia Econ. Financ. 2014, 15, 1563–1569. [Google Scholar] [CrossRef]
  19. Ershadi, M.J.; Qhanadi Taghizadeh, O.; Hadji Molana, S.M. Selection and performance estimation of Green Lean Six Sigma Projects: A hybrid approach of technology readiness level, data envelopment analysis, and ANFIS. Environ. Sci. Pollut. Res. 2021, 28, 29394–29411. [Google Scholar] [CrossRef]
  20. Arslankaya, S. Comparison of performances of fuzzy logic and adaptive neuro-fuzzy inference system (ANFIS) for estimating employee labor loss. J. Eng. Res. 2023, 11, 469–477. [Google Scholar] [CrossRef]
  21. Keles, A.E.; Haznedar, B.; Kaya Keles, M.; Arslan, M.T. The Effect of Adaptive Neuro-fuzzy Inference System (ANFIS) on Determining the Leadership Perceptions of Construction Employees. Iran. J. Sci. Technol. Trans. Civ. Eng. 2023, 47, 4145–4157. [Google Scholar] [CrossRef]
  22. Ahanger, T.A.; Tariq, U.; Ibrahim, A.; Ullah, I.; Bouteraa, Y. ANFIS-inspired smart framework for education quality assessment. IEEE Access 2020, 8, 175306–175318. [Google Scholar] [CrossRef]
  23. Azadeh, A.; Zarrin, M. An intelligent framework for productivity assessment and analysis of human resource from resilience engineering, motivational factors, HSE and ergonomics perspectives. Saf. Sci. 2016, 89, 55–71. [Google Scholar] [CrossRef]
Figure 1. Employee profile.
Figure 2. Task allocation and CF calculation.
Figure 3. Workflow for determining the capacity factor.
Figure 4. Linear Regression analysis of the Time Factor.
Figure 5. Configuration 3. ANN Time Factor performance for three layers of 8 × 4 × 4 neurons.
Figure 6. Configuration 3. ANN Training.
Figure 7. Configuration 3. ANN validation performance.
Figure 8. ANFIS performance.
Figure 9. ANFIS Time Factor performance.
Figure 10. GBM Time Factor performance.
Figure 11. Bagged Decision Tree Time Factor performance.
Figure 12. SVM Time Factor performance.
Figure 13. XGBoost Time Factor performance.
Figure 14. Load Control based on CF produced from algorithms. (a) Load Control: ANFIS; (b) Load Control: Regression Analysis; (c) Load Control: ANN; (d) Load Control: Gradient Boosting Machine; (e) Load Control: Bagged Decision Trees; (f) Load Control: Support Vector Machines; (g) Load Control: XGBoost.
Table 1. Dataset Summary Table.

| Category | Attribute | Percentage (%) |
|---|---|---|
| Total Employees | | 100% |
| Academic Skills (K1) | Seminar 1 | 47.02% |
| | Seminar 2 | 55.49% |
| | Seminar 3 | 51.41% |
| | Bachelor Degree | 53.29% |
| | Second Bachelor Degree | 57.05% |
| | Master Degree | 35.74% |
| | NSPA Degree | 21.94% |
| | PhD | 19.12% |
| Public Sector (K2) | Head of Small Department | 23.51% |
| | Head of Department | 12.38% |
| | General Manager | 3.13% |
| Private Sector (K3) | Head of Small Department | 26.02% |
| | Head of Department | 8.31% |
| | General Manager | 5.02% |
| Age Groups (K4) | 20–29 years | 5.64% |
| | 30–39 years | 16.46% |
| | 40–49 years | 30.41% |
| | 50+ years | 47.49% |
Table 2. Primary Tasks.

| Task | Description |
|---|---|
| T1 | Financial request |
| T2 | Suggestion for new technical document |
| T3 | Committee minutes |
| T4 | Primary expense claim |
| T5 | Contract deployment |
| T6 | Clearance of tenderer accounts |
| T7 | Technical opinion |
| T8 | Draft tender design |
| T9 | Design of a national tender |
| T10 | Design of an international tender |
| T11 | Implementation of a proposal for inclusion in the NSRF |
| T12 | Examination of supporting documentation |
Table 3. Linear Regression analysis.

| Metric | Training | Testing | Validation |
|---|---|---|---|
| Mean Squared Error (MSE) | 2963.599 | 2892.051 | 2781.079 |
| Root Mean Squared Error (RMSE) | 54.439 | 53.778 | 52.736 |
| Mean Absolute Error (MAE) | 40.530 | 38.114 | 36.905 |
| Median Absolute Error (MedAE) | 32.210 | 25.299 | 23.322 |
| Mean Squared Logarithmic Error (MSLE) | 0.008 | 0.007 | 0.007 |
| Root Mean Squared Logarithmic Error (RMSLE) | 0.087 | 0.086 | 0.085 |
| Mean Bias Deviation (MBD) | 0.010 | 0.009 | 0.008 |
| Mean Absolute Percentage Error (MAPE) | 6.30% | 5.95% | 5.77% |
| Huber Loss | 40.034 | 37.619 | 36.410 |
| Angle between Perfect Fit and Best Fit (degrees) | 0.066 | 0.048 | 0.061 |
| C-index | 0.70 | 0.70 | 0.70 |
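The error measures reported in Table 3 (and repeated for each model in the tables that follow) are standard regression metrics. As a minimal, self-contained sketch of how such values can be computed from paired actual/predicted observations (pure Python, illustrative numbers only, not the study's data):

```python
import math

def regression_metrics(y_true, y_pred, delta=1.0):
    """Standard regression error metrics (illustrative sketch).

    Assumes strictly positive targets (MAPE divides by y_true,
    MSLE takes logs); `delta` is the Huber transition point.
    """
    n = len(y_true)
    errors = [yt - yp for yt, yp in zip(y_true, y_pred)]
    abs_errors = sorted(abs(e) for e in errors)
    mse = sum(e * e for e in errors) / n
    mid = n // 2
    medae = (abs_errors[mid] if n % 2
             else (abs_errors[mid - 1] + abs_errors[mid]) / 2)
    msle = sum((math.log1p(yt) - math.log1p(yp)) ** 2
               for yt, yp in zip(y_true, y_pred)) / n
    mbd = sum(errors) / n  # signed: positive means under-prediction
    mape = sum(abs(e) / yt for e, yt in zip(errors, y_true)) / n * 100
    huber = sum(0.5 * e * e if abs(e) <= delta
                else delta * (abs(e) - 0.5 * delta) for e in errors) / n
    return {
        "MSE": mse, "RMSE": math.sqrt(mse),
        "MAE": sum(abs_errors) / n, "MedAE": medae,
        "MSLE": msle, "RMSLE": math.sqrt(msle),
        "MBD": mbd, "MAPE": mape, "Huber": huber,
    }
```

The exact Huber delta and MAPE conventions used by the authors are not stated here, so the definitions above are the textbook ones.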
Table 4. ANN configurations with metrics for validation data.

| Metric | Config. 1 (10n) | Config. 2 (8n × 4n) | Config. 3 (8n × 4n × 4n) | Config. 4 (8n × 8n × 8n) | Config. 5 (20n × 20n × 20n) |
|---|---|---|---|---|---|
| Mean Squared Error (MSE) | 2698.985 | 3458.918 | 2285.915 | 3002.731 | 4108.571 |
| Root Mean Squared Error (RMSE) | 51.951 | 58.813 | 47.811 | 54.797 | 64.098 |
| Mean Absolute Error (MAE) | 39.303 | 42.670 | 35.397 | 39.592 | 46.205 |
| Median Absolute Error (MedAE) | 30.637 | 31.223 | 26.198 | 25.815 | 36.911 |
| Mean Squared Logarithmic Error (MSLE) | 0.006 | 0.009 | 0.005 | 0.008 | 0.012 |
| Root Mean Squared Logarithmic Error (RMSLE) | 0.083 | 0.094 | 0.075 | 0.088 | 0.109 |
| Mean Bias Deviation (MBD) | −4.786 | 3.397 | −0.130 | −1.928 | 0.505 |
| Mean Absolute Percentage Error (MAPE) | 6.18% | 6.69% | 5.47% | 6.25% | 7.21% |
| Huber Loss | 38.807 | 42.170 | 34.897 | 39.092 | 45.707 |
| Angle between Perfect Fit and Best Fit (degrees) | 29.236 | 26.348 | 31.074 | 25.493 | 27.234 |
| C-index | 0.70 | 0.75 | 0.85 | 0.80 | 0.78 |
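The five configurations in Table 4 differ only in hidden-layer topology. For a rough sense of relative model size, the trainable parameters of a fully connected network can be tallied layer by layer. The input and output dimensions below are assumptions for illustration (four input factors K1–K4 and a single Time Factor output); the paper does not restate them here:

```python
def mlp_param_count(layer_sizes):
    """Weights plus biases of a fully connected feed-forward network."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Assumed topology: 4 inputs (K1-K4), hidden layers per config, 1 output.
configs = {
    "Config. 1 (10n)":             [4, 10, 1],
    "Config. 2 (8n x 4n)":         [4, 8, 4, 1],
    "Config. 3 (8n x 4n x 4n)":    [4, 8, 4, 4, 1],
    "Config. 4 (8n x 8n x 8n)":    [4, 8, 8, 8, 1],
    "Config. 5 (20n x 20n x 20n)": [4, 20, 20, 20, 1],
}
sizes = {name: mlp_param_count(layers) for name, layers in configs.items()}
```

Under these assumptions, Config. 3 (the best performer in Table 4) is also far smaller than Config. 5, consistent with the larger network overfitting rather than improving validation error.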
Table 5. ANFIS analysis.

| Metric | Training | Testing | Validation |
|---|---|---|---|
| Mean Squared Error (MSE) | 574.488 | 648.909 | 1284.383 |
| Root Mean Squared Error (RMSE) | 23.968 | 25.473 | 35.838 |
| Mean Absolute Error (MAE) | 15.064 | 21.397 | 26.733 |
| Median Absolute Error (MedAE) | 7.849 | 18.954 | 20.669 |
| Mean Squared Logarithmic Error (MSLE) | 0.001 | 0.001 | 0.002 |
| Root Mean Squared Logarithmic Error (RMSLE) | 0.0353 | 0.0360 | 0.0524 |
| Mean Bias Deviation (MBD) | −0.001 | 6.8735 × 10⁻⁵ | 0.003 |
| Mean Absolute Percentage Error (MAPE) | 2.210% | 3.018% | 3.797% |
| Huber Loss | 14.618 | 20.903 | 26.238 |
| Angle between Perfect Fit and Best Fit (degrees) | 14.1233 | 9.6469 | 8.8751 |
| C-index | 0.88 | 0.87 | 0.90 |
Table 6. Gradient Boosting Machine analysis.

| Metric | Training | Testing | Validation |
|---|---|---|---|
| Mean Squared Error (MSE) | 1194.496 | 2404.466 | 2589.089 |
| Root Mean Squared Error (RMSE) | 34.562 | 49.035 | 50.883 |
| Mean Absolute Error (MAE) | 26.168 | 36.749 | 36.897 |
| Median Absolute Error (MedAE) | 19.792 | 26.250 | 28.546 |
| Mean Squared Logarithmic Error (MSLE) | 0.003 | 0.006 | 0.007 |
| Root Mean Squared Logarithmic Error (RMSLE) | 0.055 | 0.077 | 0.081 |
| Mean Bias Deviation (MBD) | −0.173 | −6.595 | 2.575 |
| Mean Absolute Percentage Error (MAPE) | 4.07% | 5.74% | 5.73% |
| Huber Loss | 25.671 | 36.254 | 36.402 |
| Angle between Perfect Fit and Best Fit (degrees) | 34.165 | 29.025 | 27.121 |
| C-index | 0.88 | 0.85 | 0.86 |
Table 7. Bagged Decision Tree analysis.

| Metric | Training | Testing | Validation |
|---|---|---|---|
| Mean Squared Error (MSE) | 1194.496 | 2404.466 | 2589.089 |
| Root Mean Squared Error (RMSE) | 34.561 | 49.035 | 50.883 |
| Mean Absolute Error (MAE) | 26.167 | 36.749 | 36.897 |
| Median Absolute Error (MedAE) | 19.792 | 26.250 | 28.546 |
| Mean Squared Logarithmic Error (MSLE) | 0.003 | 0.006 | 0.007 |
| Root Mean Squared Logarithmic Error (RMSLE) | 0.055 | 0.077 | 0.081 |
| Mean Bias Deviation (MBD) | −0.173 | −6.595 | 2.575 |
| Mean Absolute Percentage Error (MAPE) | 4.07% | 5.74% | 5.73% |
| Huber Loss | 25.671 | 36.254 | 36.402 |
| Angle between Perfect Fit and Best Fit (degrees) | 34.165 | 29.025 | 27.121 |
| C-index | 0.90 | 0.88 | 0.87 |
Table 8. Support Vector Machine analysis.

| Metric | Training | Testing | Validation |
|---|---|---|---|
| Mean Squared Error (MSE) | 2963.599 | 3038.146 | 2939.308 |
| Root Mean Squared Error (RMSE) | 54.439 | 55.119 | 54.215 |
| Mean Absolute Error (MAE) | 40.530 | 38.833 | 37.830 |
| Median Absolute Error (MedAE) | 32.210 | 26.236 | 28.068 |
| Mean Squared Logarithmic Error (MSLE) | 0.008 | 0.008 | 0.008 |
| Root Mean Squared Logarithmic Error (RMSLE) | 0.087 | 0.088 | 0.088 |
| Mean Bias Deviation (MBD) | −0.010 | −0.020 | −0.005 |
| Mean Absolute Percentage Error (MAPE) | 6.30% | 6.11% | 5.88% |
| Huber Loss | 40.034 | 38.333 | 37.337 |
| Angle between Perfect Fit and Best Fit (degrees) | 13.779 | 12.195 | 13.204 |
| C-index | 0.70 | 0.72 | 0.75 |
Table 9. XGBoost analysis.

| Metric | Training | Testing | Validation |
|---|---|---|---|
| Mean Squared Error (MSE) | 3623.567 | 7146.835 | 6663.567 |
| Root Mean Squared Error (RMSE) | 60.196 | 84.538 | 81.630 |
| Mean Absolute Error (MAE) | 44.562 | 65.026 | 62.235 |
| Median Absolute Error (MedAE) | 121.825 | 49.720 | 92.346 |
| Mean Squared Logarithmic Error (MSLE) | 0.008 | 0.0179 | 0.016 |
| Root Mean Squared Logarithmic Error (RMSLE) | 0.093 | 0.134 | 0.128 |
| Mean Bias Deviation (MBD) | 0.007 | −14.770 | −0.372 |
| Mean Absolute Percentage Error (MAPE) | 6.909% | 10.559% | 9.681% |
| Huber Loss | 44.078 | 64.526 | 61.738 |
| Angle between Perfect Fit and Best Fit (degrees) | 53.754 | 84.379 | 90.875 |
| C-index | 0.85 | 0.82 | 0.83 |
Table 10. Algorithm metrics on training dataset.

| Metric | Linear Regression | ANN (8 × 4 × 4) | ANFIS | GBM | Bagged Dec. Trees | SVM | XGBoost |
|---|---|---|---|---|---|---|---|
| Mean Squared Error (MSE) | 2963.599 | 1879.087 | 574.488 | 1649.868 | 1194.496 | 2963.599 | 7146.835 |
| Root Mean Squared Error (RMSE) | 54.439 | 43.348 | 23.968 | 40.618 | 34.561 | 54.439 | 84.5389 |
| Mean Absolute Error (MAE) | 40.530 | 32.078 | 15.064 | 31.758 | 26.167 | 40.530 | 65.026 |
| Median Absolute Error (MedAE) | 32.210 | 24.655 | 7.849 | 25.623 | 19.792 | 32.210 | 49.720 |
| Mean Squared Logarithmic Error (MSLE) | 0.008 | 0.005 | 0.001 | 0.004 | 0.003 | 0.008 | 0.01796 |
| Root Mean Squared Logarithmic Error (RMSLE) | 0.087 | 0.069 | 0.0353 | 0.063 | 0.055 | 0.087 | 0.134 |
| Mean Bias Deviation (MBD) | 0.010 | 0.004 | −0.001 | 2.2611 × 10⁻¹³ | −0.172 | −0.010 | −14.770 |
| Mean Absolute Percentage Error (MAPE) | 6.30% | 5.018% | 2.210% | 4.88% | 4.06% | 6.30% | 10.55% |
| Huber Loss | 40.034 | 32.298 | 14.618 | 31.260 | 25.670 | 40.034 | 64.526 |
| C-index | 0.70 | 0.80 | 0.88 | 0.88 | 0.90 | 0.70 | 0.85 |
Table 11. Algorithm metrics on validation dataset.

| Metric | Linear Regression | ANN (8 × 4 × 4) | ANFIS | GBM | Bagged Dec. Trees | SVM | XGBoost |
|---|---|---|---|---|---|---|---|
| Mean Squared Error (MSE) | 2781.079 | 1879.087 | 1284.383 | 2735.082 | 2589.088 | 2939.308 | 6663.567 |
| Root Mean Squared Error (RMSE) | 52.736 | 43.348 | 35.838 | 52.298 | 50.883 | 54.215 | 81.630 |
| Mean Absolute Error (MAE) | 36.905 | 32.078 | 26.733 | 39.768 | 36.896 | 37.830 | 62.235 |
| Median Absolute Error (MedAE) | 23.322 | 24.655 | 20.669 | 28.902 | 28.546 | 28.068 | 92.346 |
| Mean Squared Logarithmic Error (MSLE) | 0.007 | 0.005 | 0.002 | 0.006 | 0.006 | 0.007 | 0.0164 |
| Root Mean Squared Logarithmic Error (RMSLE) | 0.085 | 0.069 | 0.052 | 0.081 | 0.080 | 0.088 | 0.128 |
| Mean Bias Deviation (MBD) | 0.008 | 0.004 | 0.003 | 2.710 | 2.574 | −0.005 | −0.372 |
| Mean Absolute Percentage Error (MAPE) | 5.77% | 5.018% | 3.797% | 6.080% | 5.732% | 5.879% | 9.68% |
| Huber Loss | 36.410 | 32.298 | 26.238 | 39.273 | 36.401 | 37.336 | 61.738 |
| C-index | 0.70 | 0.85 | 0.90 | 0.86 | 0.87 | 0.75 | 0.83 |
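As a quick cross-check of Table 11, ranking the algorithms by validation RMSE reproduces the ordering that motivates the choice of ANFIS. The values below are transcribed from the table; only the ranking logic is new:

```python
# Validation RMSE per algorithm, transcribed from Table 11.
validation_rmse = {
    "Linear Regression": 52.736,
    "ANN (8x4x4)": 43.348,
    "ANFIS": 35.838,
    "GBM": 52.298,
    "Bagged Decision Trees": 50.883,
    "SVM": 54.215,
    "XGBoost": 81.630,
}

# Sort ascending: lower RMSE means better generalization on held-out data.
ranking = sorted(validation_rmse, key=validation_rmse.get)
```

ANFIS heads the list and XGBoost trails it, matching the paper's conclusion that the neuro-fuzzy model generalizes best on this dataset.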
Table 12. Rescaled CF values.

| | Linear Regression | ANN (8 × 4 × 4) | ANFIS | GBM | Bagged Dec. Trees | SVM | XGBoost |
|---|---|---|---|---|---|---|---|
| Old CF | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| RMSE | 52.736 | 43.348 | 35.838 | 52.298 | 50.883 | 54.215 | 81.630 |
| Rescaled CF based on ANFIS | 0.679 | 0.826 | 1 | 0.685 | 0.704 | 0.661 | 0.439 |
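The rescaled CF row in Table 12 follows directly from the RMSE row: each algorithm's capacity factor is scaled by the ratio of the best (ANFIS) RMSE to its own. The paper does not spell out the formula at this point, so the sketch below assumes that ratio rule, which reproduces the table's values to three decimals:

```python
# Validation RMSE per algorithm (RMSE row of Table 12).
rmse = {
    "Linear Regression": 52.736,
    "ANN (8x4x4)": 43.348,
    "ANFIS": 35.838,
    "GBM": 52.298,
    "Bagged Decision Trees": 50.883,
    "SVM": 54.215,
    "XGBoost": 81.630,
}

best = rmse["ANFIS"]  # lowest RMSE serves as the reference
# Assumed rescaling rule: CF_algo = RMSE_ANFIS / RMSE_algo.
rescaled_cf = {name: best / value for name, value in rmse.items()}
```

Under this rule ANFIS keeps a CF of 1 while every other model's CF shrinks in proportion to how much worse its RMSE is.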
Table 13. Statistics on unfinished tasks and Node Load for 120 min.

| | Linear Regression | ANN (8 × 4 × 4) | ANFIS | GBM | Bagged Dec. Trees | SVM | XGBoost |
|---|---|---|---|---|---|---|---|
| Unfinished Tasks | 30 | 26 | 11 | 26 | 23 | 29 | 33 |
| Average Load | 42% | 44% | 47% | 40% | 52% | 41% | 39% |
| High Load | 85% | 87% | 88% | 81% | 88% | 85% | 84% |
Table 14. Comparison of CPU time and memory usage for algorithms.

| Algorithm | CPU Time (ms) | Memory Usage (MB) |
|---|---|---|
| Linear Regression | 10–30 | 5–15 |
| Artificial Neural Networks (ANNs) | 120–180 | 30–50 |
| Adaptive Neuro-Fuzzy Inference System (ANFIS) | 180–220 | 40–60 |
| Gradient Boosting Machine (GBM) | 150–200 | 40–50 |
| Bagged Decision Trees | 140–180 | 30–40 |
| Support Vector Machine (SVM) | 200–260 | 50–60 |
| XGBoost | 200–240 | 40–55 |
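Measurements like those in Table 14 can be gathered with Python's standard tooling. A hedged sketch (the `profile` helper and the stand-in workload are placeholders, not the study's code; `tracemalloc` tracks Python-level allocations only, so figures for native libraries will differ):

```python
import time
import tracemalloc

def profile(fn, *args, **kwargs):
    """Return (result, cpu_time_ms, peak_memory_mb) for one call to fn."""
    tracemalloc.start()
    start = time.process_time()  # CPU time, not wall-clock
    result = fn(*args, **kwargs)
    cpu_ms = (time.process_time() - start) * 1000.0
    _, peak = tracemalloc.get_traced_memory()  # peak bytes since start()
    tracemalloc.stop()
    return result, cpu_ms, peak / (1024 * 1024)

# Stand-in workload in place of model training.
result, cpu_ms, peak_mb = profile(lambda: sum(i * i for i in range(100_000)))
```

In practice one would average several runs per algorithm, since both CPU time and peak memory fluctuate between invocations, which is presumably why Table 14 reports ranges rather than single values.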