Article

Rightful Rewards: Refining Equity in Team Resource Allocation through a Data-Driven Optimization Approach

1 Institute of Data and Information, Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
2 Faculty of Business, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
3 Business School, University of Bristol, Bristol BS8 1PY, UK
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(13), 2095; https://doi.org/10.3390/math12132095
Submission received: 20 April 2024 / Revised: 25 June 2024 / Accepted: 2 July 2024 / Published: 3 July 2024
(This article belongs to the Special Issue Applied Mathematics in Supply Chain and Logistics)

Abstract

In group management, accurate assessment of individual performance is crucial for the fair allocation of resources such as bonuses. This paper explores the complexities of gauging each participant’s contribution in multi-participant projects, particularly through the lens of self-reporting—a method fraught with the challenges of under-reporting and over-reporting, which can skew resource allocation and undermine fairness. Addressing the limitations of current assessment methods, which often rely solely on self-reported data, this study proposes a novel equitable allocation policy that accounts for inherent biases in self-reporting. By developing a data-driven mathematical optimization model, we aim to more accurately align resource allocation with actual contributions, thus enhancing team efficiency and cohesion. Our computational experiments validate the proposed model’s effectiveness in achieving a more equitable allocation of resources, suggesting significant implications for management practices in team settings.

1. Introduction

In the realm of group management, accurately assessing individual performance is pivotal, as it directly impacts the fair allocation of project resources, such as bonuses. This, in turn, influences individual development, team cohesion, and overall teamwork efficiency. A precise understanding of each team member’s contribution enables fair resource allocation, ensuring that individuals are rewarded in accordance with their efforts. However, gauging the contribution of each member in multi-participant projects presents a significant challenge. For managers, accurately discerning the individual contributions to a project is fraught with difficulty.
Self-reporting emerges as a practical method to gauge these contributions [1,2,3], yet it is not without its pitfalls. Specifically, the phenomena of under-reporting and over-reporting must be addressed. Individuals with low self-esteem may under-report their perceived contributions, while those inclined towards duplicity may engage in over-reporting. Relying on such self-reported data can lead managers to skew resource allocation, undermining the fairness of the allocation process.
This paper explores, within the context of self-reporting, the development of an equitable allocation policy that leverages collected data. By addressing the inherent challenges of self-reporting, the study aims to propose a methodology that ensures fair and accurate resource allocation, thereby enhancing the efficacy and cohesion of team-based projects. In the following, we first review the existing literature on performance assessment and then introduce an illustrative example of our studied problem.

1.1. Literature Review

Performance assessment is a formal evaluation process in which employees are assessed by a designated judge with specific criteria. The literature on performance assessment spans various fields, focusing on key aspects such as scale format [4], rating standards [5], and collaborative relations [6]. Specifically, DeNisi [7] and Arvey [8] delve into the intricacies of performance evaluation, stressing the importance of justice and psychological measurement characteristics. In software development and student teamwork projects, methodologies such as git-driven technology and collaborative platforms are utilized to measure individual contributions effectively, as seen in studies such as Parizi and Spoletini [9], Gamble and Hale [10], Jorgenson et al. [11], and Hale et al. [12]. Methodological advancements, such as models based on the analytical hierarchy process [13] and fuzzy logic-based systems [14], aim to enhance decision-making processes in employee performance assessment. The integration of frameworks like the balanced scorecard with fuzzy multi-criteria decision-making offers innovative approaches to evaluating the performance of public sector organizations, as demonstrated by Afrasiabi et al. [15].
The contribution and performance evaluation of individuals in multi-participant collaborative projects have attracted increasing attention from researchers. Planas-Lladó et al. [16] study self- and peer-assessment in team collaboration among students from different academic programs, analyzing the dynamics and their relationships with collaboration outcomes and individual grades. Pota et al. [17] propose a decision model for the effective evaluation of mutual satisfaction in staff allocation, addressing the lack of mechanisms to describe subjects' skills and needs. Gunning et al. [18] explore an online self- and intra-team peer-assessment strategy to measure student engagement and enable accountability during team-based assessments. McIver and Murphy [19] examine the integration of self-assessment as a new skill, benefiting graduate students and faculty. Kubincová and Kolčák [20] introduce self-assessment as a component of teamwork, allowing evaluation of individual contributions and performance. Earnest et al. [21] demonstrate the effectiveness of the comprehensive assessment of team members in interprofessional education settings. Cristofaro and Giardino [22] propose that the core self-evaluation trait, a complex personality disposition based on self-efficacy and emotional stability, impacts decision-making processes within organizations. Anahideh et al. [23] design a fairness-aware allocation approach to maximize geographical diversity and avoid demographic disparity. Kaur et al. [24] propose a data-driven risk assessment and prediction model, along with a decision framework for strategic vaccine allocation. Xu et al. [25] investigate how fairness concerns influence the allocation of fixed costs between decision-making units.
This paper focuses on the scenario of self-assessment, which plays a crucial role in organizational performance assessment due to its full consideration of employees’ own perspectives and preferences. Farh et al. [26] investigate the effectiveness of a self-assessment-based performance evaluation system, which integrates self-assessment into performance evaluation procedures. Kromrei [27] involves employees in structured self-assessment and utilizes explicit criteria to reinforce self-assessment methods, thereby reducing evaluation biases. Kamer and Annen [28] examine the role of individual differences in self-performance evaluation based on a sample survey of 250 military personnel. Gbadamosi and Ross [29], utilizing core self-evaluation and gender as moderating variables, study the relationship between perceived stress and discomfort and performance evaluations. Shore et al. [30] analyze the impact of self-assessment information, evaluation purposes, and feedback objectives on performance evaluation scores using experimental data from 230 participants. In practical situations such as construction projects, managers traditionally collect assessment values from each individual regarding their contribution rates in a multi-participant project and then allocate resources based on the normalized assessment values accordingly [31]. However, this resource allocation scheme lacks fairness and reasonableness because it ignores the possibility that people report contribution rates that do not match what they deserve. Consequently, this paper fills this research gap by tackling the inherent challenges of self-reporting.

1.2. Research Objective and an Illustrative Example

This paper considers the following premise: before allocating resources, managers conduct surveys to collect the reported values of each individual’s contribution rates to different projects. These reports reflect individual perspectives and preferences. Traditionally, based on the collected survey results, managers normalize the collected contribution rates of each individual to a project (since the sum of reported contribution rates to a project may not necessarily equal 100%) and then allocate resources proportionally based on the normalized results. We refer to this scheme as the traditional resource allocation method; it is the approach adopted by the vast majority of enterprises in practice.
However, this traditional method, while taking into account individuals’ preferences, ignores complex factors that may skew the reported contributions. From a psychological standpoint, individual differences significantly influence how people report their contributions in team settings. Specifically, there exists a dichotomy in reporting behaviors: some individuals are inclined to report a contribution rate higher than their actual input, motivated by the prospect of securing a greater share of resources. Conversely, others may report a lower contribution rate, driven by traits of modesty or self-criticism. This discrepancy between reported and actual contribution rates leads to an allocation of resources that does not accurately reflect true individual contributions.
Example 1.
Consider a scenario where two individuals, A and B, collaborate on a project, each contributing equally with a true contribution rate of 50%, a fact unknown to their manager. The project yields a bonus pool of 10,000 USD, and an equitable bonus allocation would dictate that each individual receives 5000 USD. However, during a self-assessment survey, individual A reports a contribution rate of 60%, while individual B, more modestly, reports a rate of 45%. Under the traditional resource allocation method, individual A would be allocated 5714 USD (calculated as 10,000 × 60%/(60% + 45%)), and individual B would receive 4286 USD (calculated as 10,000 × 45%/(60% + 45%)), resulting in a clear disparity.
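To make the disparity concrete, the arithmetic of Example 1 can be reproduced in a few lines of Python (the variable names and rounding are our own, purely illustrative):

```python
# Reproduce Example 1: equal true contributions, biased self-reports.
bonus = 10_000  # USD bonus pool
reported = {"A": 0.60, "B": 0.45}  # self-reported contribution rates

# Traditional method: normalize the reports and allocate proportionally.
total = sum(reported.values())
allocation = {who: round(bonus * r / total) for who, r in reported.items()}

print(allocation)  # A receives 5714 USD, B receives 4286 USD
```

Both truly contributed 50%, yet the over-reporter walks away with 1428 USD more than the modest reporter.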
Example 1 illustrates the inherent flaws in the traditional allocation method when faced with biased self-reporting. If such a method is employed consistently over time, the mismatch between actual contributions and allocated resources can lead to detrimental effects on individual motivation and work efficiency, as well as impede the progress of collaborative team projects. This variance in self-reporting behaviors underscores the complexity of designing equitable resource allocation mechanisms. It highlights the necessity for managers and team leaders to adopt strategies that can mitigate the impact of subjective self-assessment biases, ensuring that resource allocation aligns more closely with the true value of each team member’s contributions.

1.3. Contributions and Organization

To foster fair and equitable resource allocation, our methodology introduces the “reporting coefficient”, a factor that quantifies the ratio of an individual’s reported contribution rate to his or her actual one. For instance, in Example 1, individual A’s “reporting coefficient” is 1.2 (60% reported divided by 50% actual), while individual B’s “reporting coefficient” is 0.9 (45% reported divided by 50% actual). We assume that these psychological tendencies are consistent, with each individual’s “reporting coefficient” remaining constant across various projects.
Building on this premise, we propose a data-driven optimization approach that utilizes the contribution rates reported by individuals through surveys to estimate their respective “reporting coefficients”. These estimates allow us to adjust reported contribution rates to more accurately reflect true contributions. Consequently, resources can be allocated in a manner that is both fairer and more equitable for collaborative, multi-participant projects.
Computational experiments validate the efficacy of our proposed method. The results indicate that our approach leads to a resource allocation that is more aligned with actual contributions when compared to traditional methods. By implementing this refined allocation strategy, team members are more likely to receive resources commensurate with their true input, thereby enhancing motivation, efficiency, and the overall success of collaborative endeavors.
In summary, our proposed approach distinguishes itself from the existing literature in several ways:
  • We define a new concept, the “reporting coefficient”, to quantify the ratio of individuals’ reported contribution rates to their actual ones.
  • We employ a novel data-driven optimization model to estimate the “reporting coefficients”, which can yield a more equitable team resource allocation scheme.
  • We conduct numerous computational experiments to verify the effectiveness of our proposed method.
The remainder of this paper is organized as follows: Section 2 mathematically states the problem studied in this paper and proposes the data-driven optimization model. Section 3 conducts computational experiments and sensitivity analysis. Section 4 concludes this paper.

2. Problem Statement and Modeling

In team management, when a group of people collaborates to complete a project, team managers typically allocate resources (e.g., bonuses) based on the contribution of each participant. However, it remains challenging to accurately assess the true contribution rate of each participant in a project. In this paper, we assume that the team manager may conduct self-assessment surveys in which participants are invited to report their self-perceived contribution rates. Two important issues arise during the surveys. First, individuals may have biased perceptions of their true contribution rates. For example, if a participant’s true contribution rate is 50%, he/she might perceive it as 60% (a 10-percentage-point deviation). Second, during the surveys, individuals may over- or under-report their contribution rates; in the previous example, the participant might over-report his/her contribution rate as 65%. Traditionally, team managers normalize the reported contribution rates and allocate resources accordingly, which is unfair and unreasonable. Therefore, this paper studies a data-driven optimization approach based on the reported contribution rates.
In this section, we first present the mathematical description of the resource allocation problem in group management and then introduce two allocation methods, i.e., the traditional approach and our proposed data-driven optimization approach. Finally, we define a metric to evaluate the effectiveness of these two methods.

2.1. Mathematical Description

Consider that $M$ people have completed $N$ projects, where at least two people are engaged in each project. For project $j$ ($j \in \{1,\dots,N\}$), a set of resources $R_j$ (e.g., a bonus) is allocated among the $n_j$ people engaged in this project based on their contribution rates. We denote by $r_{ij}$ the amount of resources allocated to participant $i$ ($i \in \{1,\dots,M\}$) in project $j$ under the ground truth.

Let $x_{ij}^{\mathrm{true}}$ denote the true contribution rate of participant $i$ to project $j$, where $0 \le x_{ij}^{\mathrm{true}} < 1$. We assume that the true contribution rates of all people in one project sum to exactly 100%; that is, $\sum_{i=1}^{M} x_{ij}^{\mathrm{true}} = 100\%$ for all $j \in \{1,\dots,N\}$.

Although each participant makes a definite true contribution to a project, his or her perception of it may be biased [32]. To consider this scenario, we denote by $\hat{x}_{ij}^{\mathrm{true}}$ the self-perceived contribution rate of participant $i$ to project $j$, where $0 \le \hat{x}_{ij}^{\mathrm{true}} \le 1$. To relate $x_{ij}^{\mathrm{true}}$ and $\hat{x}_{ij}^{\mathrm{true}}$, we denote by $\sigma$ the maximum error rate of people’s perception. Thus, we assume that

$$\hat{x}_{ij}^{\mathrm{true}} \sim U\!\left(x_{ij}^{\mathrm{true}}(1-\sigma),\; \min\{x_{ij}^{\mathrm{true}}(1+\sigma),\, 1\}\right), \qquad (1)$$

which means that the self-perceived contribution rate follows a uniform distribution within the error range while guaranteeing $\hat{x}_{ij}^{\mathrm{true}} \le 1$.

Now, assume that the manager conducts a self-assessment survey in which the $M$ people are invited to report their self-perceived contributions to the $N$ projects. We denote by $x_{ij}$ the self-reported contribution rate of participant $i$ to project $j$, where $0 \le x_{ij} \le 1$. Unsurprisingly, some people may report a contribution rate higher than the self-perceived value $\hat{x}_{ij}^{\mathrm{true}}$, while others may report a lower one. To characterize this phenomenon, we denote by $a_i$ the “reporting coefficient” of participant $i$, where $a_i > 0$. We assume that $a_i$ remains constant for participant $i$ across projects; thus, $x_{ij} = \min\{a_i \hat{x}_{ij}^{\mathrm{true}},\, 1\}$. In other words, participant $i$ estimates his contribution rate as $\hat{x}_{ij}^{\mathrm{true}}$ in his mind. When reporting, he may over-report ($a_i > 1$ and $x_{ij} \ge \hat{x}_{ij}^{\mathrm{true}}$), motivated by the prospect of securing a greater share of resources, or he may conservatively under-report ($a_i < 1$ and $x_{ij} < \hat{x}_{ij}^{\mathrm{true}}$), driven by modesty or self-criticism.

Given the above, this paper investigates how to allocate resources fairly to each individual in each project based on the self-reported contribution rates $x_{ij}$ ($i \in \{1,\dots,M\}$, $j \in \{1,\dots,N\}$). Clearly, the optimal (i.e., the most equitable) resource allocation scheme under the ground truth is

$$r_{ij} = R_j\, x_{ij}^{\mathrm{true}}, \quad i \in \{1,\dots,M\},\; j \in \{1,\dots,N\}. \qquad (2)$$

However, it is difficult to obtain the values of $x_{ij}^{\mathrm{true}}$. Instead, we have a dataset $D = \{x_{ij} : i = 1,\dots,M;\; j = 1,\dots,N\}$ obtained through the self-assessment survey. Based on $D$, the estimated value of $r_{ij}$, denoted by $\hat{r}_{ij}$, can be obtained by various methods, such as the traditional resource allocation method and the data-driven optimization method proposed in this paper.
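The perception and reporting model above can be sketched in a few lines of Python (a minimal illustration; the function names are ours, not the paper’s):

```python
import random

def perceived_rate(x_true, sigma, rng=random):
    # x-hat ~ U(x(1 - sigma), min(x(1 + sigma), 1)): biased self-perception
    # of the true contribution rate, capped at 100%.
    return rng.uniform(x_true * (1 - sigma), min(x_true * (1 + sigma), 1.0))

def reported_rate(a_i, x_perceived):
    # x = min(a_i * x-hat, 1): the reporting coefficient a_i scales the
    # perceived rate; over-reporters have a_i > 1, under-reporters a_i < 1.
    return min(a_i * x_perceived, 1.0)
```

For instance, with $a_i = 1.2$ and a perceived rate of 0.5, the reported rate is 0.6; with $a_i = 3$ the report is capped at 1.0.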
The definitions of parameters in the problem statement are summarized in Table 1.

2.2. Allocation Method 1: The Traditional Approach

Traditionally, the manager allocates resources using the normalization method:

$$\hat{r}_{ij}^{\mathrm{norm}} = \frac{R_j\, x_{ij}}{\sum_{i=1}^{M} x_{ij}}, \quad i \in \{1,\dots,M\},\; j \in \{1,\dots,N\}. \qquad (3)$$
Under this scheme, the more a participant inflates his report, the more he gains beyond what he deserves, which is apparently unfair. If this method is adopted by the manager, the discrepancy between reported and actual contribution rates leads to an allocation of resources that does not accurately reflect true individual contributions, consequently reducing individual work efficiency and impeding the progress of team-based projects. To remedy this issue, we consider a novel data-driven approach.
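A minimal NumPy sketch of this normalization scheme (our own helper, not the authors’ code):

```python
import numpy as np

def traditional_allocation(R_j, reported):
    # Normalize the self-reported rates for one project and allocate the
    # resource pool R_j proportionally (the traditional scheme).
    reported = np.asarray(reported, dtype=float)
    return R_j * reported / reported.sum()
```

Applied to Example 1, `traditional_allocation(10_000, [0.60, 0.45])` yields roughly 5714 and 4286 USD, reproducing the disparity described there.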

2.3. Allocation Method 2: The Novel Data-Driven Optimization Approach

In this section, we establish a data-driven mathematical optimization model based on the dataset $D$, which aims to obtain the estimate of $a_i$, denoted by $\hat{a}_i$ ($i \in \{1,\dots,M\}$). Using $\hat{a}_i$, we can calculate contribution rates close to the ground truth for each participant in each project and thus allocate resources more equitably and reasonably. We define

$$\hat{y}_j = \sum_{i=1}^{M} \frac{x_{ij}}{\hat{a}_i}, \quad j \in \{1,\dots,N\},$$

as the sum of the adjusted self-reported contribution rates of all participants engaged in project $j$, which is expected to be 100% (recall that $\sum_{i=1}^{M} x_{ij}^{\mathrm{true}} = 100\%$ for all $j$). The definitions of decision variables in the mathematical optimization model are summarized in Table 2.
Therefore, we establish the following quadratic optimization model:

$$\min \; \frac{1}{N} \sum_{j=1}^{N} \left( \hat{y}_j - 100\% \right)^2 \qquad (4)$$

subject to

$$\hat{y}_j = \sum_{i=1}^{M} \frac{x_{ij}}{\hat{a}_i}, \quad j \in \{1,\dots,N\}, \qquad (5)$$

$$\hat{a}_i > 0, \quad i \in \{1,\dots,M\}. \qquad (6)$$

Objective function (4) minimizes the mean squared error between the sum of adjusted self-reported contribution rates and 100% across all $N$ projects; ideally, the true contributions of the participants in a project should sum to exactly one (i.e., 100%). Constraints (5) compute the sum of the adjusted self-reported contribution rates for project $j$ after applying the adjustment effected by the “reporting coefficient”. Constraints (6) specify the variable domain.
By solving the model, we obtain $\hat{a}_i$ ($i \in \{1,\dots,M\}$). Let $x_{ij}^{\mathrm{adjusted}} = x_{ij}/\hat{a}_i$ denote the adjusted self-reported contribution rate of participant $i$ for project $j$ (i.e., adjusted by $\hat{a}_i$). The resource allocation scheme of our proposed method is then

$$\hat{r}_{ij}^{\mathrm{new}} = \frac{R_j\, x_{ij}^{\mathrm{adjusted}}}{\sum_{i=1}^{M} x_{ij}^{\mathrm{adjusted}}}, \quad i \in \{1,\dots,M\},\; j \in \{1,\dots,N\}. \qquad (7)$$
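As a solution sketch (not the authors’ implementation), note that substituting $b_i = 1/\hat{a}_i$ turns the model into an ordinary least-squares problem, which NumPy can solve directly; the positivity constraint (6) is left implicit here and would need explicit handling in a constrained solver:

```python
import numpy as np

def estimate_reporting_coefficients(X):
    # X: (M, N) matrix of self-reported rates x_ij, with 0 for non-participants.
    # With b_i = 1/a_i, objective (4) becomes min_b ||X^T b - 1||^2,
    # an ordinary least-squares problem.
    M, N = X.shape
    b, *_ = np.linalg.lstsq(X.T, np.ones(N), rcond=None)
    return 1.0 / b  # the estimated reporting coefficients

def new_allocation(R_j, x_col, a_hat):
    # Allocate R_j proportionally to the adjusted rates x_ij / a_hat_i.
    adjusted = x_col / a_hat
    return R_j * adjusted / adjusted.sum()
```

On noiseless synthetic data (zero perception error and no capping at 1), the least-squares solution recovers the true coefficients essentially exactly, so the resulting allocation matches the ground truth.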

2.4. Evaluation Metric

To evaluate the effectiveness of the two methods, the loss of resource allocation is defined as

$$L_{MN} = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( \hat{r}_{ij} - r_{ij} \right)^2, \qquad (8)$$

which measures the difference between the actually assigned rewards and the ground-truth rewards across all of the considered participants and projects. The smaller the loss of resource allocation, the more equitable the resource allocation scheme, meaning that individuals are rewarded in accordance with their efforts to a greater extent. We denote by $L_{MN}^{\mathrm{norm}}$ and $L_{MN}^{\mathrm{new}}$ the losses of resource allocation under the traditional resource allocation method and the novel data-driven method proposed in this paper, respectively. The loss reduction percentage of our proposed method relative to the traditional method is then defined as

$$\Delta p = \frac{L_{MN}^{\mathrm{norm}} - L_{MN}^{\mathrm{new}}}{L_{MN}^{\mathrm{norm}}} \times 100\%. \qquad (9)$$

A positive $\Delta p$ indicates that our proposed method incurs a lower loss of resource allocation than the traditional method and therefore yields a fairer and more reasonable allocation scheme. Furthermore, the larger the value of $\Delta p$, the better the performance of our proposed method compared to the traditional method.
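The metric and the loss-reduction percentage translate directly into code (a small helper sketch with our own function names):

```python
import numpy as np

def allocation_loss(r_hat, r_true):
    # L_MN: mean squared deviation between assigned and ground-truth rewards,
    # averaged over all participants and projects.
    r_hat = np.asarray(r_hat, dtype=float)
    r_true = np.asarray(r_true, dtype=float)
    return float(np.mean((r_hat - r_true) ** 2))

def loss_reduction_pct(loss_norm, loss_new):
    # Delta p: positive values mean the data-driven method outperforms
    # the traditional normalization method.
    return (loss_norm - loss_new) / loss_norm * 100.0
```

For the Example 1 allocations (5714, 4286) against the ground truth (5000, 5000), the loss is 714² = 509,796; a method that reached the ground truth exactly would score 0.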

3. Computational Experiments

In this section, we conduct computational experiments and non-parametric hypothesis tests to verify the effectiveness of our proposed method. Furthermore, sensitivity analysis is comprehensively conducted to investigate the impact of parameter changes on the effectiveness of the proposed data-driven approach.

3.1. Simulation

For the simulation experiments, we must specify the values of four sets of parameters, $M$, $N$, $\sigma$, and $a_i$ ($i \in \{1,\dots,M\}$), to generate the computational instances and portray different scenarios. The parameters $M$ and $N$ represent the numbers of people and projects, respectively. The parameter $\sigma$ quantifies the discrepancy between an individual’s perception of their contribution and their actual contribution; a higher $\sigma$ indicates a greater level of inaccuracy in self-assessment. The parameter $a_i$ represents the extent to which participant $i$ misreports his contribution rate: $a_i > 1$ indicates that participant $i$ tends to over-report his contribution, while $a_i < 1$ indicates under-reporting.
We take the example where $M = 10$, $N = 500$, $\sigma = 10\%$, and $a_i \in [0.8,\, 2]$ for all $i \in \{1,\dots,10\}$ to illustrate the process of generating the computational instances. Consider that $M = 10$ people have completed $N = 500$ projects. For participant $i$, we randomly generate $a_i \in [0.8,\, 2]$. For project $j$, we randomly generate the number of engaged people $n_j \in \{2, 3, \dots, 10\}$ (recall that at least two people are engaged in each project). The resources to be allocated for project $j$ are $R_j = 10^3\, n_j / 5$. Let $P_j$ denote the set of people engaged in project $j$ and $Q_j$ the set of people not engaged. We randomly select $n_j$ distinct integers from $\{1, 2, \dots, 10\}$ to form $P_j$ and set $Q_j = \{1, \dots, 10\} \setminus P_j$. For participant $i \in P_j$, we randomly generate $x_{ij}^{\mathrm{true}} \in (0, 1)$ under the condition that $\sum_{i \in P_j} x_{ij}^{\mathrm{true}} = 1$; for participant $i \in Q_j$, we set $x_{ij}^{\mathrm{true}} = 0$. With $\sigma = 10\%$, we draw $\hat{x}_{ij}^{\mathrm{true}}$ from $U\!\left(x_{ij}^{\mathrm{true}}(1 - 10\%),\; \min\{x_{ij}^{\mathrm{true}}(1 + 10\%),\, 1\}\right)$, and consequently obtain $x_{ij} = \min\{a_i \hat{x}_{ij}^{\mathrm{true}},\, 1\}$, i.e., the dataset $D = \{x_{ij} : i = 1,\dots,10;\; j = 1,\dots,500\}$.
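The instance-generation procedure above can be sketched as follows. Dirichlet sampling is one convenient way to draw positive rates summing to one; the paper only requires the sum condition, so this choice (like the function name) is our own:

```python
import numpy as np

def generate_instance(M=10, N=500, sigma=0.10, a_low=0.8, a_high=2.0, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.uniform(a_low, a_high, M)            # reporting coefficients
    x_true = np.zeros((M, N))
    for j in range(N):
        n_j = int(rng.integers(2, M + 1))        # at least two participants
        members = rng.choice(M, size=n_j, replace=False)
        x_true[members, j] = rng.dirichlet(np.ones(n_j))  # rates sum to 1
    # biased self-perception within a relative error of sigma, capped at 1
    perceived = rng.uniform(x_true * (1 - sigma),
                            np.minimum(x_true * (1 + sigma), 1.0))
    X = np.minimum(a[:, None] * perceived, 1.0)  # self-reported rates
    R = 1e3 * (x_true > 0).sum(axis=0) / 5       # R_j = 10^3 * n_j / 5
    return a, x_true, X, R
```

Each column of `x_true` sums to one, non-participants have zero rates, and `X` is the observed dataset $D$ fed to the allocation methods.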
Based on the rules for generating the computational instances described above, five sets of simulation experiments with different parameter combinations are conducted, with results shown in Table 3. Experiment 1 describes a scenario in which a group of 10 individuals completes 100 projects. Experiment 2 involves an additional 10 individuals compared to Experiment 1. In contrast with Experiment 2, Experiment 3 increases the number of projects to 500. In Experiment 4, the maximum error rate of people’s perception is increased by 10% relative to Experiment 3. In Experiment 5, the upper bound of $a_i$ is raised to 3. The five parameter combinations in Experiments 1–5 are set to mimic real-world scenarios under various conditions.
From the above experimental results, it can be seen that all values of $L_{MN}^{\mathrm{new}}$ are smaller than those of $L_{MN}^{\mathrm{norm}}$ in the five sets of experiments, i.e., the values of $\Delta p$ are positive; this means that our proposed data-driven method exhibits a lower loss of resource allocation than the traditional method. Furthermore, the values of $\Delta p$ generally exceed 70%, with an average of 81.34% across the five sets of simulation experiments. Therefore, compared to the traditional resource allocation method, the novel data-driven method proposed in this paper can significantly reduce the loss of resource allocation across different parameter combinations (corresponding to different scenarios).

3.2. Non-Parametric Hypothesis Test

To further evaluate whether the novel data-driven method proposed in this paper can significantly reduce the loss of resource allocation compared to the traditional method, we conduct a statistical significance test in this section.
In the simulation experiments, we assume that the values of $L_{MN}^{\mathrm{norm}}$ are drawn from a population $L^{\mathrm{norm}}$ with an unknown distribution and unknown mean $\mu^{\mathrm{norm}}$; similarly, the values of $L_{MN}^{\mathrm{new}}$ are drawn from a population $L^{\mathrm{new}}$ with an unknown distribution and unknown mean $\mu^{\mathrm{new}}$. The two populations are assumed to be independent. Through extensive simulation experiments, we obtain two sets of sample data from these populations and aim to determine whether $\mu^{\mathrm{new}}$ is significantly lower than $\mu^{\mathrm{norm}}$. Given that the population distributions are unknown, we employ the Mann–Whitney U test [33], a non-parametric hypothesis test that evaluates whether there is a significant difference between two independent samples, to infer whether the population means differ significantly. The null and alternative hypotheses for this significance test are as follows:
  • The null hypothesis: $\mu^{\mathrm{norm}} = \mu^{\mathrm{new}}$.
  • The alternative hypothesis: $\mu^{\mathrm{norm}} > \mu^{\mathrm{new}}$.
To obtain sample data from the two populations $L^{\mathrm{norm}}$ and $L^{\mathrm{new}}$, we conduct $K = 100$ sets of simulation experiments. In each set, the values of the four parameters $M$, $N$, $\sigma$, and $U_a$ (the upper bound of $a_i$) are randomly selected from the corresponding value ranges given in Table 4. We assume that most people are inclined to report a higher contribution rate than their self-perceived value, while very few report lower values; therefore, for the range of $a_i$, we only analyze the impact of $U_a$, while $L_a$ (the lower bound of $a_i$) is fixed at 0.8.
Through the $K = 100$ sets of simulation experiments, we obtain two samples, $D^{\mathrm{norm}}$ and $D^{\mathrm{new}}$, each containing 100 sample points, from the two populations. At a significance level of $\alpha = 0.05$, we conduct the Mann–Whitney U test on the hypotheses proposed above. The resulting test statistic is 8451.0, with a $p$-value of $1.715 \times 10^{-17}$. Since $1.715 \times 10^{-17} < 0.05$, we reject the null hypothesis and accept the alternative hypothesis, concluding that $\mu^{\mathrm{new}}$ is significantly lower than $\mu^{\mathrm{norm}}$; that is, the novel data-driven method proposed in this paper significantly reduces the loss of resource allocation compared to the traditional method.
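The test itself is a one-liner with SciPy. The sample values below are synthetic stand-ins for the $K = 100$ recorded losses, not the paper’s data, so only the testing mechanics carry over:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
# Stand-in samples for the losses of the two methods (illustrative only).
loss_norm = rng.normal(5.0, 1.0, size=100)   # traditional method
loss_new = rng.normal(1.0, 0.5, size=100)    # data-driven method

# One-sided test: H1 is that the traditional losses are stochastically greater.
stat, p = mannwhitneyu(loss_norm, loss_new, alternative="greater")
reject_h0 = p < 0.05
```

With these well-separated samples, `reject_h0` is True, mirroring the paper’s conclusion at $\alpha = 0.05$.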

3.3. Sensitivity Analysis

To investigate the impact of parameter changes on $\Delta p$, we conduct a sensitivity analysis. The settings of the four parameters $M$, $N$, $\sigma$, and $U_a$ (the upper bound of $a_i$) are shown in Table 5, with $L_a$ (the lower bound of $a_i$) again fixed at 0.8. Line charts of $\Delta p$ with respect to $M$, $N$, $\sigma$, and $U_a$ are depicted in Figure 1.
In Figure 1a, $\Delta p$ tends to increase as the total number of people $M$ increases (with $N = 500$, $\sigma = 10\%$, $U_a = 2$). That is, the more people engaged in the projects, the greater the advantage of our proposed method over the traditional resource allocation method, i.e., the greater the reduction in the loss of resource allocation.
In Figure 1b, as the number of projects $N$ increases, $\Delta p$ tends to decrease but remains almost always above 86.5% (with $M = 10$, $\sigma = 10\%$, $U_a = 2$), which means that $N$ has little effect on $\Delta p$.
In Figure 1c, $\Delta p$ decreases almost linearly as $\sigma$ increases (with $M = 10$, $N = 500$, $U_a = 2$). That is, as the maximum error rate in individuals’ perceptions grows, the advantage of our proposed method over the conventional approach in mitigating resource allocation loss progressively diminishes.
In Figure 1d, $\Delta p$ first increases and then decreases with respect to $U_a$, reaching a maximum at roughly $U_a = 2$ (with $M = 10$, $N = 500$, $\sigma = 10\%$), and decreases almost linearly for $U_a > 3$. This trend can be explained by the condition $x_{ij} = \min\{a_i \hat{x}_{ij}^{\mathrm{true}},\, 1\}$: as $U_a$ increases, $a_i$ is more likely to take larger values, so $x_{ij}$ is more likely to be capped at its maximum of 1 rather than equal $a_i \hat{x}_{ij}^{\mathrm{true}}$. The cap discards information about $a_i$, so the effectiveness of our proposed method in mitigating resource allocation loss diminishes relative to the traditional resource allocation method.

4. Conclusions

This study introduces a data-driven optimization methodology for estimating the true contribution rates of individuals in multi-participant collaborative projects and for allocating team resources more equitably. Given the potential for individuals to misreport their contributions, the traditional resource allocation method does not accurately reflect true individual contributions. This paper addresses the inherent challenges of self-reporting, ensuring that individuals are rewarded in accordance with their efforts.
We present a mathematical formulation of the problem under investigation and develop a data-driven optimization model that estimates the true contribution rates from individuals' self-reported values. We define a metric, the loss of resource allocation, to assess the effectiveness of different allocation methods. In the computational experiments, we conduct extensive simulation experiments and a non-parametric hypothesis test to verify the effectiveness of the proposed method, and we employ a sensitivity analysis to illustrate the impact of parameter changes on its effectiveness. The simulation results indicate that our proposed method reduces team resource allocation loss by an average of 81.34% compared to the traditional method. The non-parametric hypothesis test further confirms that this reduction is statistically significant, yielding a resource allocation that is more closely aligned with actual individual contributions and thereby enhancing motivation, efficiency, and the overall success of collaborative teamwork.
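A paired non-parametric comparison of the two methods' losses can be illustrated with a short, self-contained example. This is a hedged sketch: the loss values below are synthetic placeholders (not the paper's data), and the Wilcoxon signed-rank test via `scipy.stats.wilcoxon` stands in for the paper's exact test procedure.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)

# Synthetic per-run allocation losses for the two methods (illustrative
# numbers only; the real values come from the paper's simulations).
loss_traditional = rng.normal(loc=1700.0, scale=200.0, size=30)
loss_proposed = rng.normal(loc=300.0, scale=100.0, size=30)

# Paired, one-sided Wilcoxon signed-rank test:
# H0: the proposed method does not reduce the loss; H1: it does.
stat, p_value = wilcoxon(loss_traditional, loss_proposed, alternative="greater")
print(f"p-value = {p_value:.3g}")  # a tiny p-value rejects H0

# Average relative loss reduction, analogous to the paper's Delta p metric.
delta_p = np.mean((loss_traditional - loss_proposed) / loss_traditional)
print(f"average loss reduction = {delta_p:.1%}")
```

A paired signed-rank test is a natural choice here because the two losses come from the same simulated instances and need not be normally distributed.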
The proposed data-driven team resource allocation approach depends on the results of self-reporting. A prerequisite assumption of this method is that each participant's "reporting coefficient" remains constant across different projects, which may not hold in practice. Future research could integrate knowledge and methodologies from fields such as psychology and organizational behavior to design more rational self-assessment systems and improve team resource allocation mechanisms to ensure greater fairness and equity.

Author Contributions

Conceptualization, B.J., X.T. and S.W.; methodology, B.J., X.T., K.-W.P., Q.C., Y.J. and S.W.; software, B.J.; validation, B.J.; formal analysis, B.J. and X.T.; investigation, B.J.; resources, K.-W.P. and Y.J.; data curation, B.J.; writing—original draft preparation, B.J. and X.T.; writing—review and editing, X.T., Q.C. and Y.J.; visualization, B.J.; supervision, S.W.; project administration, K.-W.P.; funding acquisition, Y.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Dataset available on request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Line charts of $\Delta p$ with respect to $M$, $N$, $\sigma$, and $U_a$, respectively.
Table 1. The definitions of parameters.

Parameters:
$M$: total number of participants, $i \in \{1, \dots, M\}$.
$N$: total number of projects, $j \in \{1, \dots, N\}$.
$n_j$: number of participants engaged in project $j$, $j \in \{1, \dots, N\}$.
$R_j$: amount of resources to be allocated among the participants engaged in project $j$, $j \in \{1, \dots, N\}$.
$r_{ij}$: amount of resources allocated to participant $i$ in project $j$ under the ground truth, $i \in \{1, \dots, M\}$, $j \in \{1, \dots, N\}$.
$\hat{r}_{ij}$: estimated value of $r_{ij}$ obtained by the various methods.
$x_{ij}^{\text{true}}$: true contribution rate of participant $i$ to project $j$, $i \in \{1, \dots, M\}$, $j \in \{1, \dots, N\}$, where $0 \le x_{ij}^{\text{true}} < 1$.
$\sigma$: maximum error rate of participants' perception.
$\hat{x}_{ij}^{\text{true}}$: self-perceived contribution rate of participant $i$ to project $j$, $i \in \{1, \dots, M\}$, $j \in \{1, \dots, N\}$, where $0 \le \hat{x}_{ij}^{\text{true}} \le 1$.
$a_i$: the "reporting coefficient" of participant $i$, $i \in \{1, \dots, M\}$, which remains constant across projects, where $a_i > 0$.
$x_{ij}$: self-reported contribution rate of participant $i$ to project $j$, $i \in \{1, \dots, M\}$, $j \in \{1, \dots, N\}$, where $0 \le x_{ij} \le 1$.

Set:
$D = \{x_{ij} : i = 1, \dots, M;\ j = 1, \dots, N\}$: a dataset obtained through the self-assessment survey.
Table 2. The definitions of decision variables.

Decision Variables:
$\hat{a}_i$: estimated value of $a_i$, $i \in \{1, \dots, M\}$.
$\hat{y}_j$: sum of the adjusted self-reported contribution rates of all participants engaged in project $j$, $j \in \{1, \dots, N\}$.
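Tables 1 and 2 together suggest an end-to-end pipeline: simulate true contribution rates, distort them into self-reports through the coefficients $a_i$, estimate $\hat{a}_i$ from the dataset $D$, and allocate each project's resources $R_j$ proportionally to the adjusted reports. The sketch below is an illustrative reconstruction under stated assumptions, not the paper's actual optimization model: it recovers $1/\hat{a}_i$ by ordinary least squares, using the fact that the adjusted reports within every project should sum to about 1.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 10, 500

# Ground truth: contribution rates sum to 1 within each project (assumed
# Dirichlet here for illustration; perception error sigma omitted).
x_true = rng.dirichlet(np.ones(M), size=N).T          # shape (M, N)
a = rng.uniform(0.8, 2.0, size=M)                     # reporting coefficients
x = np.minimum(a[:, None] * x_true, 1.0)              # self-reported dataset D

# Assumed reconstruction: choose b_i ~ 1/a_i by least squares so that the
# adjusted reports b_i * x_ij sum to about 1 in every project j.
b, *_ = np.linalg.lstsq(x.T, np.ones(N), rcond=None)
a_hat = 1.0 / b

# Allocate each project's resources proportionally to the adjusted reports.
R = 100.0                                             # resources per project (illustrative)
adj = x / a_hat[:, None]
r_hat = R * adj / adj.sum(axis=0)

print("true a:     ", np.round(a, 2))
print("estimated a:", np.round(a_hat, 2))
```

Because each column of `x_true` sums to 1, the vector $b_i = 1/a_i$ solves the system almost exactly (up to the rare capped entries), so the least-squares estimate recovers the reporting coefficients closely in this toy setting.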
Table 3. Results of the five sets of simulation experiments.

Experiment No. | $M$ | $N$ | $\sigma$ | $a_i$ range ($i \in \{1, \dots, M\}$) | $L_{MN}^{\text{norm}}$ | $L_{MN}^{\text{new}}$ | $\Delta p$
1 | 10 | 100 | 10% | [0.8, 2] | 1426.76 | 159.89 | 88.79%
2 | 20 | 100 | 10% | [0.8, 2] | 1807.49 | 389.18 | 78.47%
3 | 20 | 500 | 10% | [0.8, 2] | 1670.87 | 178.74 | 89.30%
4 | 20 | 500 | 20% | [0.8, 2] | 2075.49 | 597.53 | 71.21%
5 | 20 | 500 | 20% | [0.8, 3] | 3382.24 | 712.85 | 78.92%
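As a sanity check on Table 3, the reported $\Delta p$ values can be recomputed from the two loss columns under the natural reading $\Delta p = (L_{MN}^{\text{norm}} - L_{MN}^{\text{new}}) / L_{MN}^{\text{norm}}$ (an assumption here, since the formal definition appears earlier in the paper), and their mean matches the 81.34% average reduction quoted in the conclusions.

```python
# Rows of Table 3: (L_MN_norm, L_MN_new, reported Delta p in %).
rows = [
    (1426.76, 159.89, 88.79),
    (1807.49, 389.18, 78.47),
    (1670.87, 178.74, 89.30),
    (2075.49, 597.53, 71.21),
    (3382.24, 712.85, 78.92),
]
for l_norm, l_new, dp in rows:
    recomputed = 100 * (l_norm - l_new) / l_norm
    print(f"recomputed {recomputed:.2f}% vs reported {dp:.2f}%")

mean_dp = sum(r[2] for r in rows) / len(rows)
print(f"mean Delta p = {mean_dp:.2f}%")  # prints 81.34%
```

All five recomputed percentages agree with the reported column to two decimal places.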
Table 4. The value ranges for the four parameters in the simulation experiments.

Parameters | Value Ranges
$M$ | range(3, 30, 1) ¹
$N$ | range(100, 1000, 10) ²
$\sigma$ | range(0, 0.5, 0.01)
$U_a$ | range(1.2, 4, 0.05)

¹ range(3, 30, 1) means that $M \in \{3, 4, \dots, 30\}$. ² range(100, 1000, 10) means that $N \in \{100, 110, \dots, 1000\}$.
Table 5. The settings of the four parameters in the sensitivity analysis.

Experiment No. | $M$ | $N$ | $\sigma$ | $U_a$
6 | range(3, 30, 1) ¹ | 500 | 10% | 2
7 | 10 | range(100, 1000, 10) ² | 10% | 2
8 | 10 | 500 | range(0, 0.5, 0.01) | 2
9 | 10 | 500 | 10% | range(1.2, 4, 0.05)

¹ range(3, 30, 1) means that $M \in \{3, 4, \dots, 30\}$. ² range(100, 1000, 10) means that $N \in \{100, 110, \dots, 1000\}$.