Article

A Tripartite Evolutionary Game Analysis of Participant Decision-Making Behavior in Mobile Crowdsourcing

School of Information, Shanxi University of Finance and Economics, Taiyuan 030006, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(5), 1269; https://doi.org/10.3390/math11051269
Submission received: 2 February 2023 / Revised: 24 February 2023 / Accepted: 2 March 2023 / Published: 6 March 2023

Abstract:
With the rapid development of the Internet of Things and the popularity of numerous sensing devices, mobile crowdsourcing (MCS) has become a paradigm for collecting sensing data and solving problems. However, most early studies focused on schemes for incentive mechanisms, task allocation and data quality control; they did not consider how the behavioral strategies of one stakeholder influence and constrain the behaviors of the other participants, and they rarely applied dynamic system theory to the analysis of participant behavior in mobile crowdsourcing. In this paper, we first propose a tripartite evolutionary game model of crowdsourcing workers, crowdsourcing platforms and task requesters. Secondly, we focus on the evolutionary stability strategies and evolutionary trends of the different participants, as well as influential factors, such as participants’ irrational personality, conflicts of interest, punishment intensity, technical level and awareness of rights protection, to analyze the influence of different behavioral strategies on the other participants. Thirdly, we verify the stability of the equilibrium points of the tripartite game system through simulation experiments. Finally, we summarize our work and provide recommendations for governing agencies and the different stakeholders to facilitate the continuous operation of the mobile crowdsourcing market and maximize social welfare.

1. Introduction

Mobile crowdsourcing (MCS) has gained new opportunities thanks to advances in mobile communication technology and intelligent terminal applications. Currently, smart mobile devices contain many sensors, such as microphones, GPS receivers and cameras [1]. Mobile devices therefore have the ability to collect a variety of sensing data, and users can use the multi-source sensing capabilities of smart devices to collect and share data. For instance, vehicle sensors with computing and communication capabilities can be installed in cars to provide environmental information, such as air quality, road conditions and available parking spaces. In addition, mobile crowdsourcing is widely used in many service fields [2,3], e.g., public safety, healthcare and smart cities.
The mobile crowdsourcing system contains three stakeholders, namely task requesters, crowdsourcing workers and the crowdsourcing platform. Typically, task requesters first submit task details to the platform, including the due date, quantity and budget of the task. Then, registered workers receive and execute crowdsourcing tasks and return the results to the crowdsourcing platform. The crowdsourcing platform verifies the uploaded data, provides settlement services to the workers according to the completion of the task and finally sends the data to the task requester [4]. However, according to the rational-man hypothesis in economics, all stakeholders are inherently selfish [5]. Furthermore, they are ignorant of each other’s strategic behavior. Therefore, different stakeholders may behave speculatively to obtain their own maximum economic benefit at minimum cost, which has a negative impact on the management and operation of the mobile crowdsourcing system [6].
Although many scholars have proposed solutions from the aspects of quality control [7], task assignment [8], incentive mechanisms [9], etc., these solutions have limitations; for example, they focus on only one or two stakeholders. Moreover, they ignore the influence and constraints that the behavioral strategies of different stakeholders impose on the other participants and do not analyze the behavior of each participant with dynamic system theory.
In this paper, we construct a three-way evolutionary game model of crowd workers, crowdsourcing platforms and task requesters. In addition, we take into account a variety of influencing factors, such as the irrational characteristics of the participants, attrition costs, the strength of punishment, the data filtering ability of the crowdsourcing platform and the task requester’s awareness of his rights, and dynamically analyze the effects of a participant’s different behavioral strategies on the behavioral strategies of the other participants in the system. The following is a summary of this study’s main innovations and contributions:
  • Combining irrational characteristics, such as selfishness and the perfunctory strategy of the participant, the law of interaction among workers, crowdsourcing platforms and task requesters is studied.
  • The mutual concealment of various misbehaviors is fully considered, and replication dynamics are employed to propose an ideal steady state that obtains the Nash equilibrium and achieves the maximum benefit to society.
  • Using supervision mechanisms and combining factors, such as the strength of penalties, the cost of complaints and the technical level of the platform, we guide the participants and regulators in mobile crowdsourcing to make the right decisions.
The remaining portions of the paper are structured as follows: Section 2 reviews the relevant literature and comparatively analyzes the innovations in this paper. Section 3 formulates fundamental assumptions based on the problem, identifies the variables and their definitions and builds models based on the assumptions. Section 4 analyzes the stability of each equilibrium point by constructing a Jacobian matrix. Section 5 conducts a simulation experiment based on setting parameters and analyzes the impacts of various factors on participants’ behavior. Section 6 summarizes the conclusions, makes recommendations for each participant and proposes future work.

2. Related Work

2.1. Mobile Crowdsourcing

Mobile crowdsourcing has grown in popularity over the past few years, and various research studies on mobile crowdsourcing data management mainly focus on the following three core issues: quality control, task allocation and incentive mechanisms.
To obtain high-quality sensing data, Li et al. [10] proposed an effective quality control mechanism designed to induce workers to report task completion honestly and without collusion, which can better control the quality of sensing data without supervision. Gong et al. [11] designed the quality, effort and data elicitation crowdsourcing mechanism, which allows workers to truthfully report their data and level of fulfillment to the requester and to complete the task carefully as requested, thereby providing high-quality sensing data.
Another key factor affecting crowd-worker and platform behavior is the task assignment mechanism. Designing an efficient task allocation scheme can significantly improve the performance of mobile crowdsourcing systems. Wu et al. [12] designed a real-time, budget-aware task package allocation mechanism for spatial crowdsourcing (RB-TPSC) that not only improves the efficiency of task allocation but also maximizes the quality of sensing data for crowd workers with a limited budget. Huang et al. [13] proposed an efficient task assignment algorithm, the optimized allocation scheme of time-dependent tasks (OPAT), which takes into account the perceived duration and the perceived capability of the user and can improve the perceived capability of each mobile user. Zhang et al. [14] proposed expert knowledge-aware truth-value analysis and task allocation, which can infer the areas of expertise of the crowd workers and thus accurately assign tasks. Wu et al. [15] used the prediction model of the light gradient boosting machine (LightGBM) to forecast workers’ away time and assign mobile crowdsourcing tasks to suitable workers by considering time and space factors. The method effectively enhances data quality and decreases task requesters’ waiting times.
In addition, the incentive mechanism stimulates the enthusiasm of crowdsourcing workers to participate by means of rewards and similar measures, thus improving the quality of sensing data. In order to inspire workers to produce high-quality work, Yang et al. [16] proposed an offline incentive mechanism, examining both crowdsourcing-centric and user-centric models. While the user-centric model evaluates the benefits available to users based on the quality of the sensing data they provide, the crowdsourcing-centric model offers appropriate rewards to participating users. Peng et al. [17] proposed paying participants according to their performance to motivate rational participants to complete mobile crowdsourcing tasks efficiently. The mechanism rewards each participant by assessing the sensing data’s quality and the amount of effective contribution from each participant. Luo et al. [18] designed two competence reputation systems to assess workers’ fine-grained competence online. An incentive mechanism based on reverse auctions and fine-grained competence reputation systems is designed to maximize social welfare.

2.2. Game Theory in Mobile Crowdsourcing

Game theory is an effective tool for examining how various participants behave, and it may simulate various individuals’ behaviors in order to maximize their utility [19]. A participant’s behavior in mobile crowdsourcing is influenced by that of other participants. This paper analyzes the diversified interaction behaviors among participants in mobile crowdsourcing from the perspective of a cooperative game and a non-cooperative game.
Cooperation among participants rather than competition may be advantageous to everybody [20]. Li et al. [21] designed a time-optimization model based on a cooperation game in which participants adjust their strategies through time to maximize the perceived utility of cooperation. Based on a mobile crowdsourcing scenario that allows unauthorized cooperation between workers and task requesters, Zhao et al. [22] designed a multi-agent deep reinforcement learning (MADRL) scheme, which creates a task allocation plan that can consider the long-term interests of requesters and workers through the game between participants. Zhan et al. [23] formulated perceptual data trading as a two-person cooperative game, proposed an efficient mechanism to define the expected payoff of perceptual data and solved it by the Nash equilibrium theory. Considering that every mass worker has an incentive to encourage his friends to provide high-quality sensor data, Yang et al. [24] developed a mobile crowdsourcing social incentive mechanism with interdependent tasks. In this mechanism, participants can cooperate to complete tasks at a lower budget.
However, when cooperation does not lead to maximum benefit, a competitive relationship between players emerges. Competition can motivate crowd workers to submit high-quality data [20,25]. Jin et al. [26] designed the Theseus incentive mechanism, which models a non-cooperative relationship and ensures that users provide high-quality data by guaranteeing a Bayesian Nash equilibrium. Wu et al. [27] designed a non-cooperative game-based price mechanism framework that can help the platform provide a stable sensing strategy for each participant to maximize profits. Based on the characteristics of the non-cooperative game and the Nash equilibrium of mobile crowdsourcing, Jiang et al. [28] used the stability of iterative algorithms and developed a sensing scheme for crowd workers that better matches the crowd workers’ capabilities and the task requesters’ needs. To cope with the competitive mobile crowdsourcing market, Wu et al. [29] proposed a genetic-algorithm-based method for finding tasks based on the non-cooperative game nature of crowd workers. It also reveals the effect of competition between crowd workers and task requesters on their strategy choices and maximizes the utility of crowd workers.
This paper employs dynamic theory to analyze the behavioral strategies of various stakeholders and proposes an evolutionary stable strategy based on evolutionary game theory. Evolutionary game theory changes the assumption of complete rationality in traditional game theory to the assumption of bounded rationality, which satisfies the irrationality of game participants. Such changes enable decision makers to make relatively optimal decisions by continuously adjusting strategies [30]. Since the participants of mobile crowdsourcing cannot be completely rational, the evolutionary game is used for modeling to make it more realistic.

3. Model Formulation

3.1. Model Assumptions

For simplicity, we make the following assumptions to explore the interactions and constraints on the strategic choices of the three stakeholders. For the readers’ convenience, the main symbols used in the paper are listed in Table 1.
Assumption 1.
The crowd workers consume a certain cost in completing the task, including loss of battery, storage space, hardware, time, etc. If a task is completed with a “hardworking” strategy, the unit cost is W_h, while it is W_l (W_h > W_l) under a “perfunctory” strategy. The crowd workers receive a payment of R_1 per task from the platform after completing the work. Thus, they obtain a payoff of (R_1 - W_h)N when they complete the tasks with a “hardworking” strategy and (R_1 - W_l)N when they take a “perfunctory” strategy. The number of tasks is N.
Assumption 2.
The unit cost of operating the crowdsourcing platform is F_p, which includes the costs of data aggregation, integration, platform development and maintenance. After data aggregation is completed, the data are sold to the task requester at a price of R_2 to support the normal operation of the platform (R_2 > R_1 > W_h > W_l). Thus, the crowdsourcing platform receives a revenue of (R_2 - R_1 - F_p)N from the distribution and trading of the data.
Assumption 3.
Crowdsourcing platforms should be in charge of filtering the sensing data. The platform will reject the sensing data given by the crowd workers if it detects incomplete or subpar work, in which case the crowd workers will not be paid. The crowdsourcing platform may also choose not to check the quality or filter the data to save the expense of detection. The cost of filtering is F_t. The probability that the platform detects a quality problem in the sensing data is μ, and the probability of failing to detect it is (1 - μ); that is, μ reflects the platform’s ability to control the quality of the sensing data. The higher the μ, the better the platform’s ability to filter the data. If the crowd workers use the “perfunctory” strategy to complete the tasks and the crowdsourcing platform chooses the “data filtering” strategy, the income of the crowd workers is (1 - μ)R_1N. In addition, the governing agencies will investigate and punish violations by crowdsourcing platforms with a certain probability, and the fine amount is P_n.
Assumption 4.
The platform should receive the payment R_2N from the task requester when the task requester requests N tasks from it. The cost of filing a complaint is F_c if a task requester receives sensing data and discovers that it does not comply with the specifications or cannot be used to create a model. The likelihood that the regulator will voluntarily enact oversight of the crowdsourcing platform is α if there is no task requester complaint. When a task requester complains, the probability of the regulator taking regulatory action is one. Since task requesters deal with the platform directly, they will receive C from the crowdsourcing platform as compensation if they are successful in their complaint that the crowdsourcing platform’s data filtering is flawed. If the quality of the sensing data provided by the crowd workers is indeed low, the crowdsourcing platform has the right to ask the crowd workers for the reputation compensation gC, where g is the compensation coefficient. If the crowd workers and the crowdsourcing platform perform well, the task requester’s complaint will be invalid.
Assumption 5.
The probability of the crowd workers completing a task with the “hardworking” strategy is x, and the probability of completing a task perfunctorily is (1 - x). The probability of the crowdsourcing platform filtering data is y, and the probability of not filtering data is (1 - y). The probability of the task requester not complaining is z, and the probability of complaining is (1 - z); x, y and z are in the range [0, 1].

3.2. Game Relationship among Three Participants

The mobile crowdsourcing system contains three stakeholders, namely task requesters, crowdsourcing workers and the crowdsourcing platform, and its workflow is shown in Figure 1. The task requester publishes crowdsourced task information to the platform, such as time limitations, task requirements, budget, etc. Meanwhile, the task requester can provide feedback to the crowdsourcing platform after obtaining the sensing data. As the performer of the task, crowd workers registered on the platform receive and complete tasks, and their work attitude will be associated with the quality of task completion. The crowdsourcing platform will filter and check the uploaded data. However, the data may also not be filtered to save costs. Based on the completion situation, the platform gives crowd workers a certain remuneration and sends the processed data to the task requester.

3.3. Model Construction

Using the aforementioned presumptions and parameters, we construct a profit and loss matrix for crowd workers, crowdsourcing platforms and task requesters in Table 2.

3.3.1. Replication Dynamic Equation and Phase Diagram for Crowd Workers

The expected payoff of “hardworking” is E_11, and that of “perfunctory” is E_12. The average expected payoff of the crowd workers is E_1. They are shown below:
E_11 = N(R_1 - W_h)[yz + y(1 - z) + (1 - y)z + (1 - y)(1 - z)] = N(R_1 - W_h)
E_12 = yz[(1 - μ)R_1N - W_lN] + y(1 - z)[(1 - μ)R_1N - W_lN - (1 - μ)gC] + (1 - y)z(R_1N - W_lN) + (1 - y)(1 - z)(R_1N - W_lN - gC)
E_1 = xE_11 + (1 - x)E_12
The crowd workers’ replication dynamic equation is:
F(x) = dx/dt = x(E_11 - E_1) = x(1 - x)(E_11 - E_12) = x(1 - x){yμ[R_1N - (1 - z)gC] - N(W_h - W_l) + (1 - z)gC}
The choice of strategy for crowd workers is influenced by the proportion of crowdsourcing platforms choosing the “data filtering” strategy and the proportion of task requesters choosing the “no complaint” strategy.
Let y_0 = [N(W_h - W_l) - (1 - z)gC] / {μ[R_1N - (1 - z)gC]} and take the derivative of F(x):
dF(x)/dx = (1 - 2x){yμ[R_1N - (1 - z)gC] - N(W_h - W_l) + (1 - z)gC}
If y = y_0, then F(x) = 0. No matter what the value of x is, the strategy choice of the crowd workers is in a stable state.
If y < y_0, dF(x)/dx|_{x=0} < 0 and dF(x)/dx|_{x=1} > 0, so x = 0 is the equilibrium point. The crowd workers will choose the “perfunctory” strategy when the likelihood of the crowdsourcing platform selecting the “data filtering” strategy drops below this threshold.
If y > y_0, dF(x)/dx|_{x=1} < 0 and dF(x)/dx|_{x=0} > 0, so x = 1 is the equilibrium point. The crowd workers will choose the “hardworking” strategy when the likelihood of the crowdsourcing platform selecting the “data filtering” strategy exceeds this threshold.
According to the above analysis, the crowd workers’ replication dynamic phase diagram can be obtained, as shown in Figure 2.
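The worker-side threshold behavior above can be sketched numerically. The following Python snippet is a minimal illustration, with hypothetical parameter values chosen only for the example; it encodes the payoff gap E_11 - E_12 and the threshold y_0 and shows that the sign of F(x) flips as y crosses y_0:

```python
# Worker-side payoff gap E11 - E12 and threshold y0 (hypothetical parameter
# values chosen only for illustration).
def worker_payoff_gap(y, z, R1=5, N=50, Wh=2, Wl=1, g=0.5, C=20, mu=0.3):
    """Advantage of 'hardworking' over 'perfunctory': E11 - E12."""
    return y * mu * (R1 * N - (1 - z) * g * C) - N * (Wh - Wl) + (1 - z) * g * C

def F_x(x, y, z, **kw):
    """Replication dynamic equation F(x) = x(1 - x)(E11 - E12)."""
    return x * (1 - x) * worker_payoff_gap(y, z, **kw)

def y_threshold(z, R1=5, N=50, Wh=2, Wl=1, g=0.5, C=20, mu=0.3):
    """Critical y0: above it, 'hardworking' (x = 1) is the stable choice."""
    return (N * (Wh - Wl) - (1 - z) * g * C) / (mu * (R1 * N - (1 - z) * g * C))

y0 = y_threshold(z=0.5)
print(F_x(0.5, y0 + 0.1, 0.5) > 0)   # platform filters often enough: x rises
print(F_x(0.5, y0 - 0.1, 0.5) < 0)   # platform rarely filters: x falls
```

At the boundaries x = 0 and x = 1 the factor x(1 - x) vanishes, so both pure strategies are rest points; which one is stable depends on the sign of the payoff gap, exactly as in the analysis above.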

3.3.2. Replication Dynamic Equation and Phase Diagram for Crowdsourcing Platforms

The expected payoff of “data filtering” is E_21, and that of “no data filtering” is E_22. The average expected payoff of the crowdsourcing platform is E_2. They are shown below:
E_21 = [xz + x(1 - z)][(R_2 - R_1 - F_p)N - F_t] + (1 - x)z[(R_2 - R_1 - F_p)N - (1 + μ)F_t - (1 - μ)αP_n] + (1 - x)(1 - z)[(R_2 - R_1 - F_p)N - (1 + μ)F_t - (1 - μ)(P_n + C - gC)]
E_22 = [xz + (1 - x)z][(R_2 - R_1 - F_p)N - αP_n] + x(1 - z)[(R_2 - R_1 - F_p)N - P_n - C] + (1 - x)(1 - z)[(R_2 - R_1 - F_p)N - P_n - C + gC]
E_2 = yE_21 + (1 - y)E_22
The crowdsourcing platform’s replication dynamic equation is:
F(y) = dy/dt = y(E_21 - E_2) = y(1 - y)(E_21 - E_22) = y(1 - y){(xμ - μ - 1)F_t + (μ + x - xμ)P_n + [x(μg - μ + 1) + μ - gμ]C - z(1 - α)(μ + x - xμ)P_n - zC[x(μg - μ + 1) + μ - gμ]}
The choice of strategy for the crowdsourcing platform is influenced by the proportion of task requesters who choose a “no complaint” strategy and the proportion of crowd workers who choose a “hardworking” strategy.
Let z_0 = {(xμ - μ - 1)F_t + (μ + x - xμ)P_n + [x(μg - μ + 1) + μ - gμ]C} / {(1 - α)(μ + x - xμ)P_n + C[x(μg - μ + 1) + μ - gμ]} and take the derivative of F(y):
dF(y)/dy = (1 - 2y){(xμ - μ - 1)F_t + (μ + x - xμ)P_n + [x(μg - μ + 1) + μ - gμ]C - z(1 - α)(μ + x - xμ)P_n - zC[x(μg - μ + 1) + μ - gμ]}
If z = z_0, then F(y) = 0. No matter what the value of y is, the strategy choice of the crowdsourcing platform is in a stable state.
If z < z_0, dF(y)/dy|_{y=1} < 0 and dF(y)/dy|_{y=0} > 0, so y = 1 is the equilibrium point. The crowdsourcing platform will choose the “data filtering” strategy when the likelihood of the task requester choosing the “no complaint” strategy drops below this threshold.
If z > z_0, dF(y)/dy|_{y=0} < 0 and dF(y)/dy|_{y=1} > 0, so y = 0 is the equilibrium point. The crowdsourcing platform will choose the “no data filtering” strategy when the likelihood of the task requester choosing the “no complaint” strategy exceeds this threshold.
According to the above analysis, the crowdsourcing platform’s replication dynamic phase diagram can be obtained, as shown in Figure 3.
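As with the workers, the platform-side threshold can be checked numerically. This Python sketch uses illustrative parameter values (not a calibration from the paper) to encode the payoff gap E_21 - E_22 and compute z_0 by solving for the root of the gap in z:

```python
# Platform-side payoff gap E21 - E22 and threshold z0 (illustrative
# parameter values, not taken from the paper).
def platform_payoff_gap(x, z, Ft=200, Pn=1000, C=20, mu=0.3, g=0.5, alpha=0.2):
    """Advantage of 'data filtering' over 'no data filtering': E21 - E22."""
    k = x * (mu * g - mu + 1) + mu - g * mu          # coefficient of C
    a = (x * mu - mu - 1) * Ft + (mu + x - x * mu) * Pn + k * C
    b = (1 - alpha) * (mu + x - x * mu) * Pn + k * C
    return a - z * b                                 # gap is linear in z

def F_y(y, x, z, **kw):
    """Replication dynamic equation F(y) = y(1 - y)(E21 - E22)."""
    return y * (1 - y) * platform_payoff_gap(x, z, **kw)

def z_threshold(x, **kw):
    """Critical z0: below it, 'data filtering' (y = 1) is the stable choice."""
    a = platform_payoff_gap(x, 0, **kw)              # gap at z = 0
    b = a - platform_payoff_gap(x, 1, **kw)          # slope magnitude in z
    return a / b

z0 = z_threshold(0.5)
print(F_y(0.5, 0.5, z0 - 0.1) > 0)   # frequent complaints: platform keeps filtering
print(F_y(0.5, 0.5, z0 + 0.1) < 0)   # rare complaints: platform stops filtering
```

Because the gap is linear and decreasing in z, a single root z_0 separates the region where filtering pays off from the region where it does not.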

3.3.3. Replication Dynamic Equation and Phase Diagram for Task Requesters

The expected payoff of “no complaint” is E_31, and that of “complaint” is E_32. The average expected payoff of the task requester is E_3. They are shown below:
E_31 = [xy + x(1 - y) + (1 - x)(1 - y) + (1 - x)y](-R_2N) = -R_2N
E_32 = xy(-R_2N - F_c) + x(1 - y)(-R_2N - F_c + C) + (1 - x)y[-R_2N - F_c + (1 - μ)C] + (1 - x)(1 - y)(-R_2N - F_c + C)
E_3 = zE_31 + (1 - z)E_32
The task requesters’ replication dynamic equation is:
F(z) = dz/dt = z(E_31 - E_3) = z(1 - z)(E_31 - E_32) = z(1 - z)[F_c + (yμ - 1)C + (1 - μ)xyC]
The choice of strategy for the task requester game is influenced by the proportion of crowd workers selecting the “hardworking” strategy and the proportion of the crowdsourcing platform selecting the “data filtering” strategy.
Let x_0 = [(1 - yμ)C - F_c] / [y(1 - μ)C] and take the derivative of F(z):
dF(z)/dz = (1 - 2z)[F_c + (yμ - 1)C + (1 - μ)xyC]
If x = x_0, then F(z) = 0. No matter what the value of z is, the strategy choice of the task requesters is in a stable state.
If x < x_0, dF(z)/dz|_{z=0} < 0 and dF(z)/dz|_{z=1} > 0, so z = 0 is the equilibrium point. The task requester will select the “complaint” strategy when the likelihood of the crowd workers selecting the “hardworking” strategy drops below this threshold.
If x > x_0, dF(z)/dz|_{z=1} < 0 and dF(z)/dz|_{z=0} > 0, so z = 1 is the equilibrium point. The task requester will select the “no complaint” strategy when the likelihood of the crowd workers choosing the “hardworking” strategy exceeds this threshold.
According to the above analysis, the task requesters’ replication dynamic phase diagram can be obtained, as shown in Figure 4.
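The requester-side dynamic has a noteworthy property: whenever the complaint cost exceeds the compensation (F_c > C), the payoff gap E_31 - E_32 is positive for every (x, y), so z always drifts toward “no complaint”. The following Python sketch, with illustrative numbers, makes this explicit:

```python
# Requester-side payoff gap E31 - E32 (illustrative numbers). With a
# complaint cost larger than the compensation (Fc > C), the gap is
# positive for every (x, y), so z always drifts toward 1 ('no complaint').
def requester_payoff_gap(x, y, Fc=100, C=20, mu=0.3):
    """Advantage of 'no complaint' over 'complaint': E31 - E32."""
    return Fc + (y * mu - 1) * C + (1 - mu) * x * y * C

def F_z(z, x, y, **kw):
    """Replication dynamic equation F(z) = z(1 - z)(E31 - E32)."""
    return z * (1 - z) * requester_payoff_gap(x, y, **kw)

grid = [i / 10 for i in range(11)]
print(all(requester_payoff_gap(x, y) > 0 for x in grid for y in grid))  # True

# With a cheap complaint (Fc < (1 - mu)C), complaining can pay off instead:
print(requester_payoff_gap(0, 0, Fc=10) < 0)   # True
```

Since every term added to F_c - C in the gap is non-negative, F_c > C alone is enough to guarantee the drift toward z = 1; this is the same condition exploited in the stability analysis of Section 4.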

4. Evolutionary Equilibrium Analysis

4.1. Jacobian Matrix

The preceding section examined the critical conditions and evolutionary paths that shape the evolutionary stability strategies of the three actors in the crowdsourcing system. The following sections consider the different equilibrium states that arise under the combined action of the participants, as well as the corresponding evolutionary stability strategies. The three replication dynamic Equations (4), (9) and (14) can be combined to form a replicator dynamics system [31], which is denoted by Formula (16).
F(x) = x(1 - x){yμ[R_1N - (1 - z)gC] - N(W_h - W_l) + (1 - z)gC}
F(y) = y(1 - y){(xμ - μ - 1)F_t + (μ + x - xμ)P_n + [x(μg - μ + 1) + μ - gμ]C - z(1 - α)(μ + x - xμ)P_n - zC[x(μg - μ + 1) + μ - gμ]}
F(z) = z(1 - z)[F_c + (yμ - 1)C + (1 - μ)xyC]
According to [32], the evolutionary stability strategy (ESS) of a set of differential equations can be found by examining the local stability of the Jacobian matrix. An equilibrium point of the replicator dynamics system is an ESS only if all of the Jacobian matrix’s eigenvalues are less than 0; otherwise, the equilibrium point is unstable. The Jacobian matrix J is shown in Formula (17).
J =
| F_11  F_12  F_13 |
| F_21  F_22  F_23 |
| F_31  F_32  F_33 |
The elements of the Jacobian matrix are as follows:
F_11 = ∂F(x)/∂x = (1 - 2x){yμ[R_1N - (1 - z)gC] - N(W_h - W_l) + (1 - z)gC}
F_12 = ∂F(x)/∂y = x(1 - x)μ[R_1N - (1 - z)gC]
F_13 = ∂F(x)/∂z = x(1 - x)(yμ - 1)gC
F_21 = ∂F(y)/∂x = y(1 - y){μF_t + [1 - z(1 - α)](1 - μ)P_n + (1 - z)(μg - μ + 1)C}
F_22 = ∂F(y)/∂y = (1 - 2y){(xμ - μ - 1)F_t + (μ + x - xμ)P_n + [x(μg - μ + 1) + μ - gμ]C - z(1 - α)(μ + x - xμ)P_n - zC[x(μg - μ + 1) + μ - gμ]}
F_23 = ∂F(y)/∂z = -y(1 - y){(1 - α)(μ + x - xμ)P_n + [x(μg - μ + 1) + μ - gμ]C}
F_31 = ∂F(z)/∂x = z(1 - z)(1 - μ)yC
F_32 = ∂F(z)/∂y = z(1 - z)[μ + x(1 - μ)]C
F_33 = ∂F(z)/∂z = (1 - 2z)[F_c + (yμ - 1)C + (1 - μ)xyC]
We obtain eight strategy equilibrium points by setting F(x) = F(y) = F(z) = 0: E_p1(0, 0, 0), E_p2(0, 0, 1), E_p3(0, 1, 0), E_p4(1, 0, 0), E_p5(0, 1, 1), E_p6(1, 0, 1), E_p7(1, 1, 0) and E_p8(1, 1, 1).
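Because each of the three replication dynamic equations carries a factor x(1 - x), y(1 - y) or z(1 - z), all three derivatives vanish at the corners of the unit cube. A quick Python check (using, purely for illustration, the parameter values later adopted in Section 5) confirms that the eight pure-strategy profiles are equilibrium points, while an interior point generally is not:

```python
from itertools import product

# Parameter values from the paper's Section 5 simulation setup (illustrative).
Ft, R1, Wh, Wl = 200, 5, 2, 1
Fc, C, Pn, N = 100, 20, 1000, 50
mu, alpha, g = 0.3, 0.2, 0.5

def replicator(x, y, z):
    """The replicator dynamics system (F(x), F(y), F(z))."""
    gap_x = y * mu * (R1 * N - (1 - z) * g * C) - N * (Wh - Wl) + (1 - z) * g * C
    k = x * (mu * g - mu + 1) + mu - g * mu
    gap_y = ((x * mu - mu - 1) * Ft + (mu + x - x * mu) * Pn + k * C
             - z * ((1 - alpha) * (mu + x - x * mu) * Pn + k * C))
    gap_z = Fc + (y * mu - 1) * C + (1 - mu) * x * y * C
    return (x * (1 - x) * gap_x, y * (1 - y) * gap_y, z * (1 - z) * gap_z)

corners = list(product((0, 1), repeat=3))                  # Ep1 ... Ep8
print(all(replicator(*c) == (0, 0, 0) for c in corners))   # True
print(replicator(0.5, 0.5, 0.5) == (0, 0, 0))              # False
```

Which of these eight rest points are evolutionarily stable is then decided by the eigenvalue analysis that follows.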

4.2. Stability Analysis

Take the equilibrium point E_p3(0, 1, 0) as an example; its eigenvalues can be obtained from the Jacobian matrix. The result is shown in Formula (19).
J(E_p3) =
| μR_1N + (1 - μ)gC - N(W_h - W_l)    0    0 |
| 0    (1 + μ)F_t - μP_n - μ(1 - g)C    0 |
| 0    0    F_c - (1 - μ)C |
The eigenvalues of E_p3(0, 1, 0) are λ_1 = μR_1N + (1 - μ)gC - N(W_h - W_l), λ_2 = (1 + μ)F_t - μP_n - μ(1 - g)C and λ_3 = F_c - (1 - μ)C. If all three eigenvalues are less than 0, the equilibrium point is an ESS. Similar calculations can be made to determine the eigenvalues of the other equilibrium points, and Table 3 presents the outcomes.
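As a numerical cross-check, the three diagonal Jacobian entries at E_p3(0, 1, 0) can be evaluated directly. Under the parameter set later used in Section 5 (chosen here only for illustration), λ_1 and λ_3 come out positive, so E_p3 fails the ESS condition under that calibration:

```python
# Diagonal Jacobian entries (eigenvalues) at Ep3(0, 1, 0), evaluated with the
# Section 5 parameter set (illustrative only).
R1, N, Wh, Wl = 5, 50, 2, 1
Ft, Pn, C, Fc = 200, 1000, 20, 100
mu, g = 0.3, 0.5

lam1 = mu * R1 * N + (1 - mu) * g * C - N * (Wh - Wl)   # workers' direction
lam2 = (1 + mu) * Ft - mu * Pn - mu * (1 - g) * C       # platform's direction
lam3 = Fc - (1 - mu) * C                                # requesters' direction

# lam1 and lam3 are positive here, so Ep3 is not an ESS under this calibration.
print(lam1 > 0, lam2 < 0, lam3 > 0)
```

Note that λ_3 < 0 would require F_c < (1 - μ)C, i.e., a complaint cost below the expected compensation, which contradicts the C < F_c setting discussed next.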
For a long time, consumers in China have been in a passive position: it is difficult for them to obtain evidence, claim compensation and file lawsuits, so the cost of defending their rights is extremely high. With the development of the “Internet+” model, the sharing economy has emerged, and bicycle sharing is one of its new forms. In one case in 2019, Mr. Sun applied for a deposit refund from the ofo bicycle-sharing company, which never arrived. In May 2020, Mr. Sun found that the app had updated its user service agreement. According to the agreement, users can only apply for arbitration to defend their rights, and the minimum cost of arbitration is CNY 6100. The arbitration cost is far higher than the CNY 99 deposit and largely deters consumers from defending their rights. To reflect this reality, we set C < F_c; in this case, E_p1, E_p4 and E_p3 are no longer stable equilibrium points. Table 4 contains the stability analysis of the equilibrium points.
When the parameters meet the conditions listed in Table 4, E_p2, E_p5 and E_p8 are possible stable equilibrium points, and the gaming system will reach a steady state.

5. Simulated Analysis

This section presents numerical simulation studies using MATLAB 2020b to examine the effect of different initial strategies and parameter changes on the participants. Based on the above stability analysis, we set F_t = 200, R_1 = 5, R_2 = 7, W_h = 2, W_l = 1, F_c = 100, C = 20, P_n = 1000, N = 50, μ = 0.3, α = 0.2 and g = 0.5 for the simulation analysis. We assume that the initial strategy probability of all the participants is 0.5, as shown in Figure 5. The system is stable at the equilibrium point E_p2(0, 0, 1).
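The MATLAB code itself is not listed in the paper; the following pure-Python forward-Euler sketch of the replicator dynamics system, using the parameter values above, reproduces the qualitative baseline result: starting from (0.5, 0.5, 0.5), the state converges toward the equilibrium point E_p2(0, 0, 1).

```python
# Forward-Euler integration of the replicator dynamics system (no external
# libraries). Parameter values follow the ones listed above.
Ft, R1 = 200, 5
Wh, Wl, Fc, C = 2, 1, 100, 20
Pn, N, mu, alpha, g = 1000, 50, 0.3, 0.2, 0.5

def derivs(x, y, z):
    """Right-hand sides F(x), F(y), F(z) of the replicator system."""
    gap_x = y * mu * (R1 * N - (1 - z) * g * C) - N * (Wh - Wl) + (1 - z) * g * C
    k = x * (mu * g - mu + 1) + mu - g * mu
    gap_y = ((x * mu - mu - 1) * Ft + (mu + x - x * mu) * Pn + k * C
             - z * ((1 - alpha) * (mu + x - x * mu) * Pn + k * C))
    gap_z = Fc + (y * mu - 1) * C + (1 - mu) * x * y * C
    return x * (1 - x) * gap_x, y * (1 - y) * gap_y, z * (1 - z) * gap_z

x, y, z = 0.5, 0.5, 0.5          # initial strategy probabilities
dt, steps = 0.001, 20000         # integrate to t = 20
for _ in range(steps):
    dx, dy, dz = derivs(x, y, z)
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz

print(round(x, 3), round(y, 3), round(z, 3))   # approaches (0, 0, 1)
```

A fixed-step Euler scheme is sufficient here because the trajectories stay inside the unit cube; a stiff or adaptive integrator could be substituted for higher accuracy.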

5.1. The Effect of Different Initial Strategies on Evolution

Since the crowdsourcing platform, the crowd workers and the task requester form a dynamic system, a change in the strategy choice of one party affects the strategy choices of the other participants. Keeping the parameter values constant, we create four different sets of initial strategies: (x, y, z) = (0.4, 0.2, 0.6), (0.4, 0.8, 0.6), (0.5, 0.3, 0.7) and (0.5, 0.3, 0.3). Figure 6 displays the simulation results. When the probability that the task requester chooses “no complaint” decreases from 0.7 to 0.3, the strategy evolution of the crowd workers and the crowdsourcing platform slows down. This indicates that as the likelihood of the task requester choosing not to complain decreases, the improper behavior of the crowd workers and the crowdsourcing platform is suppressed. If the probability of the crowdsourcing platform choosing the “data filtering” strategy rises from 0.2 to 0.8, the strategy evolution of the task requesters speeds up, while that of the crowd workers slows down. This reveals that a higher probability of the “data filtering” strategy inhibits the crowd workers from choosing the “perfunctory” strategy, so that the task requesters choose the “no complaint” strategy.

5.2. Analysis of Parameters Related to the Crowdsourcing Platform

5.2.1. Data Filtering Cost F t for the Crowdsourcing Platform

The crowdsourcing platform’s data filtering cost has a direct impact on the evolutionary results. Keeping the other parameters unchanged, we increase F_t from 200 to 500 and decrease it to 50. The results of the simulation are displayed in Figure 7. When F_t rises from 200 to 500, the equilibrium point remains the same. However, the strategy evolution of the crowd workers and the crowdsourcing platform speeds up, while that of the task requester is basically unchanged, which shows that the crowdsourcing platform picks the “no data filtering” strategy more quickly as the cost of data filtering rises. The equilibrium point shifts from (0, 0, 1) to (1, 1, 1) when F_t drops from 200 to 50. In this instance, the crowd workers’ strategy shifts from “perfunctory” to “hardworking”, and the platform’s strategy shifts from “no data filtering” to “data filtering”. The results show that decreasing the cost of data filtering makes the crowdsourcing platform more inclined to choose the “data filtering” strategy, so that the crowd workers take their tasks seriously.

5.2.2. Data Filtering Capacity μ of the Crowdsourcing Platform

The data filtering capability of a crowdsourcing platform may directly determine whether a task requester is satisfied with the data. We now examine how changes in the value of μ affect the three participants. The initial value of μ is set to 0.3, and we investigate two distinct circumstances in which it is raised to 0.4 and to 1. The results are shown in Figure 8. When μ increases from 0.3 to 0.4, the gaming system’s equilibrium point is still at (0, 0, 1). However, the strategy evolution of the crowd workers and the crowdsourcing platform slows down, which indicates that the improved data filtering ability of the crowdsourcing platform can inhibit the crowd workers from choosing the “perfunctory” strategy. When the value of μ increases from 0.4 to 1, the game system reaches a new equilibrium point (1, 1, 1). In this case, the strategy choice of the crowd workers changes from “perfunctory” to “hardworking”, and the platform changes from “no data filtering” to “data filtering”. It can be concluded that a crowdsourcing platform with high data filtering capability is more willing to filter the data, which further motivates the crowd workers to choose the “hardworking” task completion strategy.

5.2.3. Governing Agencies’ Fines P n for Crowdsourcing Platforms

Next, we examine the effect of the governing agencies’ penalty P n on the system. The value of P n is reduced from 1000 to 500 and increased to 3000; Figure 9 displays the simulation results. When P n is reduced from 1000 to 500, the equilibrium point remains at (0,0,1), but the strategy evolution of the crowd workers and the crowdsourcing platform speeds up. This implies that the crowdsourcing platform is more willing to select the “no data filtering” strategy when the fines are reduced, which in turn leads the crowd workers to choose the “perfunctory” strategy. When P n rises to 3000, the equilibrium point shifts from (0,0,1) to (1,1,1): the crowd workers’ strategy changes from “perfunctory” to “hardworking”, and the crowdsourcing platform’s strategy changes from “no data filtering” to “data filtering”. This is the ideal equilibrium state. It can be concluded that increasing the fines for misconduct motivates crowdsourcing platforms to choose the “data filtering” strategy and thus motivates crowd workers to choose the “hardworking” strategy.

5.2.4. The Probability α of Regulation by the Governing Agencies

The participants’ behavioral strategies are also influenced by the governing agencies. We next study the cases of increasing α from 0.2 to 0.25 and to 0.8, as shown in Figure 10. When α = 0.25, the equilibrium point is still (0,0,1), but the strategy evolution of the crowd workers and the crowdsourcing platform slows down, which indicates that a higher probability of regulation inhibits the crowdsourcing platform from choosing the “no data filtering” strategy and, in turn, inhibits the crowd workers from choosing the “perfunctory” strategy. When α = 0.8, the equilibrium point shifts from (0,0,1) to (1,1,1): the crowd workers’ strategy switches from “perfunctory” to “hardworking”, and the crowdsourcing platform’s strategy switches from “no data filtering” to “data filtering”. In summary, when the governing agencies regulate actively, the improper behavior of crowdsourcing platforms and crowd workers can be curbed and the rights of task requesters can be protected.
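Taken together with the penalty analysis above, the α and P n experiments reduce to checking which candidate equilibrium satisfies its ESS conditions from Table 4 (the C − F c term is the requester-side eigenvalue from Table 3). A sketch with illustrative (assumed) parameter values:

```python
# ESS conditions of Ep2 (0,0,1) and Ep8 (1,1,1), following Table 4.
# All parameter values below are illustrative assumptions.
def ess_conditions(Ft, Pn, alpha, mu, N, Wh, Wl, R1, C, Fc):
    ep2 = (-(1 + mu) * Ft + mu * alpha * Pn < 0) and (C - Fc < 0)
    ep8 = (N * (Wh - Wl) - mu * R1 * N < 0) and (Ft - alpha * Pn < 0)
    return ep2, ep8

base = dict(Ft=200, Pn=900, mu=0.8, N=10, Wh=20, Wl=5, R1=30, C=20, Fc=100)
print(ess_conditions(alpha=0.2, **base))  # -> (True, False): only Ep2 qualifies
print(ess_conditions(alpha=0.8, **base))  # -> (False, True): only Ep8 qualifies
```

For this assumed parameter set, weak regulation leaves only the (0,0,1) corner stable, while strong regulation leaves only the ideal (1,1,1) corner stable, mirroring Figures 9 and 10.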

5.3. Analysis of Parameters Related to the Task Requester

5.3.1. The Task Requester’s Complaint Cost F c

The starting value of F c is set to 100, and we study the two cases where F c increases to 500 and decreases to 10. The simulation outcomes are shown in Figure 11. When F c rises to 500, the equilibrium point remains at (0,0,1), but the strategy evolution of the crowd workers, the crowdsourcing platform and the task requester speeds up significantly. This indicates that a rising cost of complaints leads the task requester, after weighing the benefits, to choose the “no complaint” strategy, which in turn leads the crowd workers to select the “perfunctory” strategy. When F c decreases to 10, the equilibrium point shifts from (0,0,1) to (1,1,1): the crowd workers’ strategy switches from “perfunctory” to “hardworking”, and the crowdsourcing platform’s strategy switches from “no data filtering” to “data filtering”. From this we conclude that reducing the cost of complaints increases the motivation of task requesters to protect their own interests, which curbs misconduct by crowd workers and crowdsourcing platforms. Therefore, the governing agencies should reduce the complaint cost of task requesters to promote the stable operation of the transaction market.

5.3.2. Compensation C for the Task Requester

The initial value of C is set to 20. We next study the cases of increasing C to 50 and to 800, as shown in Figure 12. When C is increased from 20 to 50, the equilibrium point remains at (0,0,1), and the strategy evolution of the crowd workers and the crowdsourcing platform slows down. When C reaches 800, the system first tends toward the equilibrium point (1,1,0) and eventually settles at (1,1,1). This outcome indicates that task requesters are more likely to choose the “complaint” strategy when the amount of compensation is larger; the improper behavior of crowd workers and crowdsourcing platforms is then suppressed, which explains the initial movement toward (1,1,0). However, once crowd workers and crowdsourcing platforms no longer behave improperly, complaints become ineffective, so the task requester switches to the “no complaint” strategy and the system eventually stabilizes at (1,1,1). Therefore, if the amount of compensation is appropriately increased so that it exceeds the task requesters’ complaint cost, task requesters will actively protect their own interests, the misconduct of crowd workers and crowdsourcing platforms will be suppressed, and the stable operation of the trading market will be maintained.
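This threshold behavior is visible in the requester-side eigenvalue of E p 2 (0,0,1) from Table 3, λ3 = C − F c: the “no complaint” corner can only stay stable while compensation is below the complaint cost. A one-line check with the values used in this section:

```python
# Requester-side eigenvalue at Ep2 (0,0,1), lambda3 = C - Fc (Table 3).
def lambda3_ep2(C, Fc):
    return C - Fc

print(lambda3_ep2(20, 100))   # -> -80: negative, (0,0,1) can remain stable
print(lambda3_ep2(800, 100))  # -> 700: positive, requesters turn to complaining
```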

5.4. Analysis of Parameters Related to the Crowd Workers

The Compensation Coefficient g

When a task requester chooses the “complaint” strategy, they request compensation directly from the crowdsourcing platform. If the complaint is justified, the platform pays the requester C as compensation and then requires the crowd workers to cover a penalty of g C , because the platform may suffer losses of reputation, resources, time and money. The larger the platform’s hidden loss, the more compensation it may seek from the crowd workers. We now discuss the effect of g on the evolutionary outcomes. The initial value of g is 0.5, and we investigate the effects of increasing g to 1 and to 3, as shown in Figure 13. The equilibrium point remains at (0,0,1) as g increases, but the strategy evolution of the crowd workers slows down. This shows that a larger compensation coefficient inhibits crowd workers from choosing the “perfunctory” strategy: the larger the coefficient, the less probable it is that the crowd workers will finish the task perfunctorily.
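In the payoff structure of Table 2, g enters the crowd workers’ expected payoffs only through the penalty term g C weighted by the complaint probability, so the workers’ expected advantage of “hardworking” over “perfunctory” rises with g whenever complaints occur. A sketch with illustrative (assumed) parameter values:

```python
# Expected advantage of "hardworking" over "perfunctory" for crowd workers,
# as implied by the Table 2 payoffs at platform strategy y and requester
# strategy z. All default parameter values are illustrative assumptions.
def worker_advantage(g, y=0.0, z=0.0, N=10, Wh=20, Wl=5, R1=30, C=20, mu=0.8):
    return -N * (Wh - Wl) + y * mu * R1 * N + g * C * (1 - z) * (1 - y * mu)

adv = [worker_advantage(g) for g in (0.5, 1.0, 3.0)]
print(adv)  # -> [-140.0, -130.0, -90.0]: still negative, but rising with g
```

The advantage stays negative here, matching the unchanged (0,0,1) equilibrium in Figure 13, while its shrinking magnitude matches the slower evolution of the crowd workers’ strategy.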

6. Conclusions and Future Works

The quality of sensing data and the stability of the mobile crowdsourcing market are our central concerns. This paper establishes a tripartite evolutionary game model by analyzing the behavioral decisions of crowdsourcing workers, crowdsourcing platforms and task requesters. Furthermore, the influencing factors and evolutionary paths of each player’s behavioral strategies are discussed. Based on the experimental simulation results, our conclusions and future work are as follows:
  • Since the crowdsourcing platform, the crowd worker and the task requester are in the same dynamic system, any change in strategy choices by one of them will affect and restrict the strategy choices of the other two parties. Therefore, the following measures can be implemented to maintain the stable operation of the mobile crowdsourcing market: (i) increase fines for misconduct, (ii) increase the probability of supervision by governing agencies, (iii) reduce the cost of data filtering, (iv) reduce the complaint cost of task requesters, (v) improve the data filtering capability of crowdsourcing platforms and (vi) encourage task requesters to actively report misconduct on crowdsourcing platforms. For example, the higher the probability of selecting the “data filtering” strategy is, the more crowd workers tend to select the “hardworking” strategy to complete the task; the higher the enthusiasm of task requesters to safeguard their rights is, the more misconduct by crowd workers and crowdsourcing platforms will be inhibited. From the above analysis and the experimental outcomes, it can be concluded that the crowdsourcing platform selecting the “data filtering” strategy, the task requester selecting the “complaint” strategy and the crowd workers selecting the “hardworking” task completion strategy are all conducive to improving the quality of the sensing data and maximizing social benefits. Therefore, the ideal stable state of the system is the strategy combination (1,1,1).
  • For crowdsourcing platforms, data filtering plays an important role in stabilizing market operations. First, without data filtering measures it is difficult for task requesters to obtain high-quality sensing data, which harms the task requesters’ interests and damages the platform’s reputation. Secondly, without data filtering the speculative behavior of crowdsourcing workers becomes more rampant, which is not conducive to obtaining high-quality sensing data. Therefore, crowdsourcing platforms need to improve their data filtering capabilities, intensify technological transformation, reduce data filtering costs and strengthen their social responsibilities. In this way, the interests of task requesters can be protected and mobile crowdsourcing market transactions promoted, while the reputation and core competitiveness of the platform are comprehensively enhanced and a virtuous cycle of the trading market is fostered.
  • Task requesters should enhance their awareness of rights protection and safeguard their own interests from being infringed upon. Actively protecting their rights also plays an important role in supervising the speculative behaviors of other participants. Task requesters are the final inspectors who assess the quality of the data. After finding data quality problems, they should give timely feedback to regulators, which not only safeguards their own interests but also helps curb speculation in the crowdsourcing system and plays a positive role in stabilizing market operation.
  • Crowd workers should enhance their sense of social responsibility and not deceive others by performing their tasks perfunctorily for their own benefit. The only way to stand out and gain trust in a highly competitive trading market is to enhance one’s working skills and treat others with sincerity.
  • Governing agencies should actively perform their supervisory duties. To begin with, they should develop anonymous reporting platforms, establish a reasonable regulatory system, increase the probability of regulation and reduce the cost of rights protection for task requesters. In addition, they can conduct regular thematic education to raise the rights protection awareness of task requesters and strengthen the sense of social responsibility of crowd workers and crowdsourcing platforms.
There are numerous opportunities to extend this work. We outline several possible directions for future research:
  • Regarding stakeholders’ participation in collusion, we will consider the effect of inter-participant collusion on the strategy choices of other participants in a comprehensive manner to make the game system more realistic.
  • Regarding setting incentives for crowd workers, we will combine the reputations of the crowd workers to set incentives to ensure they complete the tasks with a “hardworking” strategy and improve the quality of the sensing data.

Author Contributions

Conceptualization, H.H. and J.Y.; methodology, J.Y. and J.W.; validation, H.H. and J.Y.; formal analysis, H.H. and J.Y.; investigation, J.Y.; writing—original draft preparation, H.H., J.Y. and J.W.; writing—review and editing, H.H. and J.Y.; visualization, H.H.; supervision, J.Y.; funding acquisition, J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Humanities and Social Science Fund of the Ministry of Education of China (No. 21YJCZH197); the National Natural Science Foundation of China (No. 62006145); the Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi (No. 2020L0245, No. 2020L0252); and the Youth Science Foundation of Shanxi University of Finance and Economics (No. QN-202016).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are included in the article.

Acknowledgments

The authors would like to thank the editors and anonymous reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Mobile crowdsourcing workflow.
Figure 2. Replication dynamic phase diagram of crowd workers: (a) y = y0; (b) y < y0; (c) y > y0.
Figure 3. Replication dynamic phase diagram of crowdsourcing platforms: (a) z = z0; (b) z < z0; (c) z > z0.
Figure 4. Replication dynamic phase diagram of task requesters: (a) x = x0; (b) x < x0; (c) x > x0.
Figure 5. Initial equilibrium point Ep2 (0,0,1).
Figure 6. The effect of different initial strategies.
Figure 7. The impact of Ft on evolution: (a) crowd workers; (b) crowdsourcing platforms; (c) task requesters.
Figure 8. The impact of μ on evolution: (a) crowd workers; (b) crowdsourcing platforms; (c) task requesters.
Figure 9. The impact of Pn on evolution: (a) crowd workers; (b) crowdsourcing platforms; (c) task requesters.
Figure 10. The impact of α on evolution: (a) crowd workers; (b) crowdsourcing platforms; (c) task requesters.
Figure 11. The impact of Fc on evolution: (a) crowd workers; (b) crowdsourcing platforms; (c) task requesters.
Figure 12. The impact of C on evolution: (a) crowd workers; (b) crowdsourcing platforms; (c) task requesters.
Figure 13. The impact of g on evolution: (a) crowd workers; (b) crowdsourcing platforms; (c) task requesters.
Table 1. Main symbols used in the paper.
Wh: The cost of crowd workers choosing “hardworking” strategies to complete tasks
Wl: The cost of crowd workers choosing “perfunctory” strategies to complete tasks
R1: The payments that crowdsourcing platforms make to crowd workers
R2: The amount paid to the platform by the task requester
Ft: The cost of data filtering on crowdsourcing platforms
Fp: The unit cost of operating a crowdsourcing platform
Pn: Fines imposed on platforms by governing agencies
C: Compensation received by task requesters
Fc: Complaint costs for task requesters
N: The number of tasks
μ: Data filtering capabilities of crowdsourcing platforms
α: Regulatory probability of governing agencies
g: Compensation factor
x: The probability of crowd workers completing a task with a “hardworking” strategy
y: The probability of data filtering by the crowdsourcing platform
z: The probability of no complaint by task requesters
Table 2. The profit and loss matrix of the three game participants. Each cell lists the payoffs of the crowd workers, the crowdsourcing platform and the task requester, in that order.
Hardworking (x), Not complain (z):
  Filter (y): (R1 − Wh)N; (R2 − R1 − Fp)N − Ft; −R2N
  Not filter (1 − y): (R1 − Wh)N; (R2 − R1 − Fp)N − αPn; −R2N
Hardworking (x), Complain (1 − z):
  Filter (y): (R1 − Wh)N; (R2 − R1 − Fp)N − Ft; −R2N − Fc
  Not filter (1 − y): (R1 − Wh)N; (R2 − R1 − Fp)N − Pn − C; −R2N − Fc + C
Perfunctory (1 − x), Not complain (z):
  Filter (y): ((1 − μ)R1 − Wl)N; (R2 − R1 − Fp)N + μ(αPn − Ft) − Ft − αPn; −R2N
  Not filter (1 − y): (R1 − Wl)N; (R2 − R1 − Fp)N − αPn; −R2N
Perfunctory (1 − x), Complain (1 − z):
  Filter (y): (1 − μ)(R1N − gC) − WlN; (R2 − R1 − Fp)N − (1 + μ)Ft − (1 − μ)(Pn + C − gC); −R2N − Fc + (1 − μ)C
  Not filter (1 − y): (R1 − Wl)N − gC; (R2 − R1 − Fp)N − Pn + (g − 1)C; −R2N − Fc + C
Table 3. System equilibrium points and eigenvalues (λ1, λ2, λ3).
Ep1 (0,0,0): λ1 = −N(Wh − Wl) + gC; λ2 = −(1 + μ)Ft + μPn + μ(1 − g)C; λ3 = Fc − C
Ep2 (0,0,1): λ1 = −N(Wh − Wl); λ2 = −(1 + μ)Ft + μαPn; λ3 = C − Fc
Ep3 (0,1,0): λ1 = μR1N + gC(1 − μ) − N(Wh − Wl); λ2 = (1 + μ)Ft − μPn + μ(g − 1)C; λ3 = Fc − C + μC
Ep4 (1,0,0): λ1 = N(Wh − Wl) − gC; λ2 = −Ft + Pn + C; λ3 = Fc − C
Ep5 (0,1,1): λ1 = μR1N − N(Wh − Wl); λ2 = (1 + μ)Ft − μαPn; λ3 = (1 − μ)C − Fc
Ep6 (1,0,1): λ1 = N(Wh − Wl); λ2 = −Ft + αPn; λ3 = C − Fc
Ep7 (1,1,0): λ1 = N(Wh − Wl) − μR1N − (1 − μ)gC; λ2 = Ft − Pn − C; λ3 = Fc
Ep8 (1,1,1): λ1 = N(Wh − Wl) − μR1N; λ2 = Ft − αPn; λ3 = −Fc
Table 4. Stability analysis of equilibrium points.
Ep1 (0,0,0): Fc − C > 0, so it is unstable. Unstable
Ep2 (0,0,1): If −(1 + μ)Ft + μαPn < 0, all eigenvalues are negative. ESS
Ep3 (0,1,0): Fc − C + μC > 0, so it is unstable. Unstable
Ep4 (1,0,0): Fc − C > 0, so it is unstable. Unstable
Ep5 (0,1,1): If μR1N − N(Wh − Wl) < 0 and (1 + μ)Ft − μαPn < 0, all eigenvalues are negative. ESS
Ep6 (1,0,1): N(Wh − Wl) > 0, so it is unstable. Unstable
Ep7 (1,1,0): Fc > 0, so it is unstable. Unstable
Ep8 (1,1,1): If N(Wh − Wl) − μR1N < 0 and Ft − αPn < 0, all eigenvalues are negative. ESS
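The classification in Tables 3 and 4 can be verified numerically by evaluating the closed-form eigenvalues at each equilibrium point for a concrete parameter set and flagging a point as an ESS when all three eigenvalues are negative. The parameter values below are illustrative assumptions, not the paper’s settings.

```python
# Eigenvalues of the eight equilibrium points (Table 3). A point is an ESS
# when all three eigenvalues are negative (Table 4). Parameter values are
# illustrative assumptions.
def eigenvalues(p):
    N, R1, Wh, Wl = p['N'], p['R1'], p['Wh'], p['Wl']
    Ft, Pn, C, Fc = p['Ft'], p['Pn'], p['C'], p['Fc']
    mu, alpha, g = p['mu'], p['alpha'], p['g']
    D = N * (Wh - Wl)
    return {
        'Ep1 (0,0,0)': (-D + g * C,
                        -(1 + mu) * Ft + mu * Pn + mu * (1 - g) * C,
                        Fc - C),
        'Ep2 (0,0,1)': (-D, -(1 + mu) * Ft + mu * alpha * Pn, C - Fc),
        'Ep3 (0,1,0)': (mu * R1 * N + g * C * (1 - mu) - D,
                        (1 + mu) * Ft - mu * Pn + mu * (g - 1) * C,
                        Fc - C + mu * C),
        'Ep4 (1,0,0)': (D - g * C, -Ft + Pn + C, Fc - C),
        'Ep5 (0,1,1)': (mu * R1 * N - D,
                        (1 + mu) * Ft - mu * alpha * Pn,
                        (1 - mu) * C - Fc),
        'Ep6 (1,0,1)': (D, -Ft + alpha * Pn, C - Fc),
        'Ep7 (1,1,0)': (D - mu * R1 * N - (1 - mu) * g * C, Ft - Pn - C, Fc),
        'Ep8 (1,1,1)': (D - mu * R1 * N, Ft - alpha * Pn, -Fc),
    }

def ess_points(p):
    return [k for k, lams in eigenvalues(p).items() if all(l < 0 for l in lams)]

params = dict(N=10, R1=30, Wh=20, Wl=5, Ft=200, Pn=900,
              C=20, Fc=100, mu=0.8, alpha=0.2, g=0.5)
print(ess_points(params))                              # -> ['Ep2 (0,0,1)']
print(ess_points({**params, 'Ft': 50, 'alpha': 0.8}))  # -> ['Ep8 (1,1,1)']
```

With costly filtering and weak regulation only the (0,0,1) corner qualifies; cheap filtering plus strong regulation leaves only the ideal (1,1,1) corner, consistent with the simulation results of Section 5.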