Article

A Location-Based Crowdsensing Incentive Mechanism Based on Ensemble Learning and Prospect Theory

1 School of Computer Science and Engineering, Central South University, Changsha 410083, China
2 School of Electronic Information, Central South University, Changsha 410083, China
3 Computer Science Department, Missouri State University, Springfield, MO 65897, USA
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(16), 3590; https://doi.org/10.3390/math11163590
Submission received: 11 July 2023 / Revised: 15 August 2023 / Accepted: 16 August 2023 / Published: 19 August 2023

Abstract

Crowdsensing uses participants' smart devices to form a new perception network. The coverage of crowdsensing tasks determines the quality of services. Under the constraints of budget and the number of participants, the platform needs to increase the participation duration of participants through incentive mechanisms to increase the coverage of tasks. There are two problems with the existing incentive mechanisms: (1) many incentives ignore the participants' characteristics, and using a single incentive mechanism for different participants makes the incentive effect fall short of expectations; (2) many incentives lose effectiveness because of the decision problems caused by asymmetric information. Inspired by ensemble learning and prospect theory, this paper proposes the Incentive Mechanism based on Ensemble Learning and Prospect Theory (IMELPT). First, we propose the Deep-Stacking-Generation algorithm based on Dropout (DSGD) to predict whether participants are long-term or short-term participants. If the participants are short-term, we incentivize them through the Short-term Participant Incentive Mechanism based on Prospect Theory (SPIMPT), which increases the participation duration by transforming the change in reward into asymmetric information that aligns the participants' goal with the platform's. If the participants are long-term, we motivate them through the Long-term Participant Incentive Mechanism (LPIM), which maintains their participation rate by maximizing their utility. Theoretical analysis and experiments on real datasets demonstrate that the IMELPT can reliably improve the coverage of crowdsensing tasks.

1. Introduction

Crowdsensing is a new paradigm that constructs a sensing network through participants’ smart devices to collect data [1]. For those location-based crowdsensing tasks, coverage can affect the quality of services [2]. The high coverage of the task means that the participants can collect data over a large area [3]. For example, crowdsensing can provide data-collection services for navigation systems. If the coverage of data collection is low, the navigation system cannot collect enough road information and cannot provide accurate road recommendations [4]. Therefore, it is necessary to design an effective incentive mechanism to motivate participants to improve the coverage of data collection.
However, the high coverage of crowdsensing is challenging to achieve. First, because the platform’s budget is limited [5,6], the platform cannot recruit enough participants to participate in the task with a high reward, affecting the coverage. Second, the participants’ number in crowdsensing is often limited. Many applications currently recruit cab drivers as participants to collect data [7]. However, since there is an upper limit on the number of cab drivers, the coverage is also limited. Therefore, if we can design an incentive mechanism that can motivate participants to increase their participation duration under the budget constraint, we can improve the task coverage with limited participants, thus ensuring the quality of services. However, the existing incentive mechanisms have some problems.
First, many studies have used a single mechanism to motivate participants. These mechanisms may make the incentives less effective than expected because they ignore participant differences [8,9]. For example, for incentives that pay more for better performance [10], participants who perform well will continue to receive high pay as long as they maintain their original behavior. In contrast, participants who perform poorly need to exert more effort and bear more cost to receive high pay and will be less motivated. Thus, a single mechanism can affect the effectiveness of incentives. At the same time, it is difficult to satisfy the goals of different participants with a single mechanism. The goal of participants who perform well is to maintain their high reward, while the goal of participants who perform poorly is to increase their reward by increasing their participation duration. If a single mechanism does not meet the participants' goals, it reduces the incentive effect of the mechanism. Thus, with a limited budget and number of participants, a single mechanism cannot realize the potential of the participants and thus cannot effectively increase the coverage.
Second, most incentive mechanisms do not consider the effect of asymmetric information on incentive effects [11], and those that do consider asymmetric information have large uncertainty in their outcomes [12,13]. The mechanisms that do not consider asymmetric information often assume that the platform and the participants have the same information for decision-making. However, asymmetric information exists in crowdsensing [14], and the decision-making behavior of participants under incomplete information is different from that of participants under complete information [15]. Thus, complete information is necessary to describe the participants’ goals accurately, and the presence of incomplete information can make the incentive effect of the mechanism less than expected. Those mechanisms that consider asymmetric information simply use asymmetric information as a prerequisite assumption. They assume that participants do not know all the information of the platform, as well as the decision information of other participants. These mechanisms can solve the problem of biased incentive goals, but there is the problem of uncertain incentive effects. The uncertainty of the participants’ decision-making under incomplete information is large [16]. The platform can determine its own optimal decision only after it knows the optimal decision of the participants. When the presence of incomplete information makes the uncertainty of participants’ optimal decisions large, the incentive effect of the platform also becomes uncertain. As a result, the uncertainty of the final task coverage also becomes larger. With the constraints of budget and the number of participants, the platform cannot recruit more participants with extra rewards to compensate for the uncertainty of task completion, which will affect the quality of the crowdsensing service.
In this paper, we were inspired by ensemble learning [17] and prospect theory [18] and designed incentive mechanisms to solve the problem of low coverage. First, we designed a mechanism to predict whether participants are long-term or short-term participants and, thus, designed incentives that meet the goals of both types of participants. Ensemble learning performs well in terms of accuracy and generalization ability. In crowdsensing, the differences between participants are large and new participants are frequently added, so the demand for generalization ability is high. Therefore, we designed the prediction mechanism based on the idea of ensemble learning. Second, we designed a mechanism based on prospect theory to address the problem of asymmetric information. Prospect theory states that there is a gap between the perceived change and the actual change of a variable. We provide that gap as asymmetric information to guide the participants' decision-making. As a result, we can transform asymmetric information, which might otherwise have a negative effect, into a positive incentive effect. We can use asymmetric information to align the goals of participants with the goals of the platform, thus maximizing the incentive effect of the platform. The main contributions of this study are as follows:
In this paper, we designed the DSGD based on ensemble learning, which can classify participants into long-term and short-term participants. We designed the SPIMPT to incentivize short-term participants. The SPIMPT is based on prospect theory and can maximize the participation duration by using the minimal perceptible difference of the reward as asymmetric information, so that participants' goals become aligned with the platform's goal. We designed the LPIM to motivate long-term participants, maximizing participants' utility to increase the probability of their continued participation in the task. Thus, our mechanism can solve the problem of the low coverage of crowdsensing under budget and participant-number constraints.

2. Related Work

This section introduces some existing incentive mechanisms and the relevant concepts of the theories used in this paper.

2.1. Current Study of Incentive Mechanisms for Crowdsensing

Some studies aim to improve incentive mechanisms by finding the optimal solution. For example, Yang et al. evaluated the coverage based on the number of target points covered by the participants [19]. This study selected the most-efficient sensing data under the limited platform budget so that the coverage of sensing data could be effectively optimized. Liu et al. analyzed the effect of participant willingness on participant recruitment and task completion rates, and evaluated the ability of participants to cover the task [20]. The study then used a greedy approach to optimize participant recruitment and select the most appropriate participants for each task. Zhang et al. used the mobile participant's location information and historical reputation to select the optimal participants to satisfy the information quality requirement [21]. Then, the study applied a two-stage Stackelberg game to analyze the perceived level of mobile users to obtain the optimal incentive mechanism. Zhan et al. used incentives to motivate mobile participants to participate in data collection [12]. This study formulated the interaction as a two-participant cooperative game, thus maximizing participant rewards. Xu et al. established an auction-based incentive mechanism, designed two independent objective functions, and maximized expected profit and coverage, respectively, using a combination of binary search and greedy algorithms [9]. Gu et al. used reverse auction mechanisms to maximize social welfare while meeting the requirements of quality, timeliness, relevance, and coverage [22]. Other studies aim to improve incentive mechanisms by introducing additional factors. Tan et al. proposed a new three-stage approach that forms compatible groups using realistic relationships in social networks, which improves task coverage through group-oriented cooperation while achieving good-quality task cooperation [1]. Chen et al. predicted the probability of vehicle routes and ride requests and, based on the prediction results, proposed a planning algorithm to achieve the best coverage with a limited budget [23]. Song et al. learned and used participants' task preferences to achieve high coverage [24]. This study migrated suitable participants to less popular tasks to increase task coverage. Zhao et al. quantified data quality based on deviations between reliable data and reality, and assigned monetary rewards to task participants based on their data quality [8]. Regardless of the methods used, these incentive mechanisms share two problems.
First, the current incentive mechanisms use a single mechanism to motivate participants, often making the incentive less effective than expected. For example, Zhang et al. selected the optimal participants based on their past task completion performance [21]. This mechanism discards some of the underperforming participants, thus making the platform unable to achieve sufficient coverage with a constrained number of participants. At the same time, the mechanism incentivized the selected participants through a single Stackelberg game, which affects the incentive effect, because the goal of participants who already perform well is to maintain their current high reward, which is inconsistent with the goal of the single mechanism. Zhao et al. paid participants according to their ranking: the better the performance, the higher the reward [8]. That mechanism ignores the participant's initial state, which limits the incentive effect. If a participant has performed well before and can already receive high rewards, the mechanism cannot motivate him to improve his performance further; if a participant is performing poorly and is poorly paid, it is difficult to motivate him to improve his performance with a low reward. This makes the single mechanism inconsistent with the participants' goals, so it does not motivate them effectively.
Second, most incentive mechanisms cannot address the effect of asymmetric information on incentive outcomes. Xu et al. found the optimal solution based on the effectiveness of the auction mechanism, i.e., they assumed that the participants can make the optimal decision for themselves [9]. However, due to information asymmetry, participants can only obtain some of the information and therefore cannot make the optimal decision for themselves. This makes the auction mechanism less effective than expected, and thus the optimal solution is not truly optimal. Therefore, information asymmetry may reduce the effectiveness of incentives. Zhan et al. considered the effect of information asymmetry, but only as a constraint in the game model [12]. The mechanism assumed that the participants decide under information asymmetry. However, due to the presence of information asymmetry, the distribution of the participants' optimal decisions becomes more dispersed. The platform can only recruit more participants to reduce the impact of this uncertainty and ensure that the task is completed on time, which is difficult to achieve under a budget constraint. Therefore, information asymmetry can have a negative impact on the effectiveness of incentives.
In summary, the existing incentives have limited effectiveness when the budget and number of participants are limited. The high accuracy and generalization ability of ensemble learning’s predictions can help us effectively differentiate participants, which in turn allows the platform to provide targeted incentives. At the same time, we can use prospect theory to treat the gap between real and perceived changes in reward as asymmetric information controllable by the platform, which can be used to make the goals of the platform consistent with those of the participants, thus transforming the negative effect generated by asymmetric information into a positive incentive effect.

2.2. Current Study of Ensemble Learning

Ensemble learning obtains a better model by combining multiple base learners, and ensemble models generally perform better than single models [25]. Ensemble learning has been applied in many fields. Wang et al. designed an LSTM-based ensemble learning model for predicting storm-time thermospheric mass density, which generalizes well to different satellite datasets [26]. Wang et al. designed a stacking ensemble learning model to diagnose transformer faults, and experiments showed that the accuracy and generalization ability of the model were greatly improved [27]. Wang et al. used ensemble learning to predict the preferences of new users to solve the cold-start problem in recommender systems, and the model had strong generalization ability and accuracy on small samples of user information [28].
We can only accurately motivate participants by accurately classifying them. Since participants’ data are small and vary widely among participants, the generalization ability and accuracy of the prediction model must be high. The ensemble learning model can train a model with high accuracy and high generalization ability on small samples, so we designed the participant classification model using the ensemble learning model.

2.3. Current Study of Prospect Theory

Prospect theory assumes that the effect of reward on participants increases logarithmically, rather than linearly, with reward [29]. Prospect theory has been applied in many fields. Wang et al. developed an evolutionary game model based on prospect theory to motivate vehicle owners to install private chargers [30]. Liu et al. used prospect theory to construct a targeted incentive for building units [31]. Liu et al. constructed a prospect theory-based game model to incentivize smallholder farmers to transition to green production [32]. Qu et al. introduced prospect theory to construct an evolutionary game model and discussed how the government should incentivize firms to innovate [33].
Prospect theory has proved to be an effective motivational tool in many fields. We hope to build an incentive mechanism based on prospect theory to motivate participants to increase the participation duration to improve the quality of crowdsensing services.

3. System Model

This section focuses on the system model of the IMELPT. We first describe the IMELPT's physical model and introduce some related definitions. Then, we describe the process of the IMELPT. In the incentive mechanism of crowdsensing, there are two parties: the platform and the participants. $G$ represents the set of participants, $G = \{g_1, g_2, \ldots, g_i, \ldots, g_I\}$, where $g_i$ represents the $i$-th participant and $I$ represents the total number of participants. As shown in Figure 1, the platform is responsible for publishing the task, verifying the data, and paying the reward. At the same time, the participants are responsible for collecting the data after accepting the task. The specific interaction between the participants and the platform consists of the following five steps:
  • The platform publishes tasks. The platform will publish the task information to the participants, including the quality of data to be collected, the reward the platform can provide, etc. The platform will first use the DSGD to predict whether the participant is a long-term or short-term participant and then provide the relevant information to the participant using different mechanisms respectively;
  • Participants select the task. Participants decide whether to accept a task based on their consideration of reward and cost. Different types of participants are motivated by different mechanisms, where short-term participants are motivated by the SPIMPT, and long-term participants are motivated by the LPIM;
  • Platform selects participants. After the platform receives the participant’s decision, it will select the appropriate participants from the participants who are willing to participate in the task;
  • The participants complete the tasks. The participants collect data and send the collected data to the platform;
  • The platform pays rewards. The platform pays short-term participants according to the SPIMPT and long-term participants according to the LPIM.
Figure 1 shows a physical illustration of collecting traffic data, where the platform expects participants to collect traffic information on the road. Our mechanism aims to improve the participants' participation duration and, thus, the quality of the crowdsensing service. We use the total length of the participants' trajectories and the frequency of data collection to evaluate the participants' participation duration. As shown in Figure 1a, the participant with the black trajectory has a longer participation duration than the participant with the red trajectory because he has a longer trajectory and collects data more frequently. The longer the participation duration, the higher the coverage of the collected road data, and the better the quality of service of the crowdsensing will be. Figure 1a shows the case without the IMELPT, and Figure 1b shows the case after the IMELPT is applied. We use the DSGD to predict whether a participant is a long-term participant or a short-term participant before the participant participates in the task. The participant of the red trajectory is a short-term participant, and we use the SPIMPT to motivate this participant to increase the participation duration. Meanwhile, the participant of the black trajectory is a long-term participant, and we use the LPIM to maximize the participant's utility to motivate the participant to continue participating in the task.
The parameters and related descriptions involved in this paper are given in Table 1.

Logical Model

Figure 2 shows the logical model of the IMELPT. The IMELPT mainly acts on the two most critical steps of the physical model in the previous section: the participants' selection of tasks and the platform's payment of rewards.
First, we use the DSGD to predict whether a participant is a long-term or short-term participant. In Step 1, we select the relevant features used to predict participants, and in Step 2, we construct a model to predict and classify participants. We then motivate short-term participants using the SPIMPT to increase their participation duration. Steps 3–6 are the cultivation stage of the SPIMPT. In Step 3, we calculate the evaluation factors for short-term participants. In Step 4, we set the reward and cost of participants. Based on the reward and cost, we determine the evaluation factors of participants in Step 5. By ranking the evaluation factors of all participants, we can obtain the probability of a participant entering the maintenance stage in Step 6. In Step 7, we determine whether the participant can enter the maintenance stage based on the probability. If the participant does not enter the maintenance stage, the participant returns to Step 3. If the participant enters the maintenance stage, the participant will receive an additional right. We will calculate the value of the rights in Step 8 and the optimal participant’s participation duration in Step 9. During the incentive process, the platform sets the prospect factor, which increases the probability that the participant will enter the maintenance stage at Step 6 and increases the optimal participation duration of the participant at Step 9.
We will use the LPIM to motivate long-term participants and increase the probability of their continued participation in the task. We will first determine the participants’ utility formula in Step 10. In Step 11, we determine the optimal goal based on the participant’s utility formula, which is to maximize the participant’s utility throughout the task phase. In Step 12, we solve the optimization problem to obtain the optimal reward payment strategy.

4. Detail of the DSGD

For short-term participants, we want to incentivize them to increase the duration of participation. For long-term participants, we want to increase the probability of their continued participation in the task. To precisely motivate participants, we need to first accurately classify whether the participants are long-term or short-term participants using the DSGD. Therefore, in this section, we first discuss the detail of the DSGD.
Stacking generation is a classic, high-performing classification method. Stacking generation has better prediction and generalization ability in small-sample classification tasks. In crowdsensing, the difference between participants is large, and the historical data is small, so Stacking generation meets the requirements of our mechanism. Therefore, we choose Stacking generation for the prediction and classification of participants.
The traditional stacking generation model has only one layer. This paper deepens the model and designs an improved version, the DSGD. Because the behavioral features of the participants that we can directly observe are relatively simple, it is difficult to improve the classification accuracy. Therefore, we increased the number of layers of the DSGD model so that it can learn more abstract feature representations, which improves the accuracy of participant classification. However, increasing the number of layers brings larger time complexity and a risk of overfitting: the model may spend much of its capacity memorizing noise or incidental details instead of learning genuinely meaningful patterns. To solve this problem, this paper sets up random links between layers. The random links introduce noise by randomly discarding feature parameters, thus reducing the model's dependence on specific parameter values and helping to control the model's complexity, which reduces the risk of overfitting. Figure 3 shows the framework of the DSGD.
First, the DSGD divides the training set into five parts, with one part serving as the validation set and the remaining four as the training set. The DSGD then uses 5-fold cross-validation to obtain five prediction sets, which are stacked vertically as the input features for the next level. Since the network of the DSGD is complex and has already learned the features of the data well, it no longer needs a complex meta-classification model. Therefore, we used a simple logistic regression model as the meta-classifier to obtain the final model, which also allows us to observe the sample probability scores more conveniently. When classifying participants, we first need to set the basis for the classification. We classified participants based on their total time on the task and the frequency of collecting and uploading data. Participants with high frequency and long total duration are long-term participants, and participants with low frequency and short total duration are short-term participants.
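As a minimal sketch of the stacking procedure described above, assuming scikit-learn is available, the snippet below obtains 5-fold out-of-fold probabilities from the three base learners used in Section 6.2 (SVM, KNN, and random forest), randomly drops a fraction of the input features as a stand-in for the DSGD's random inter-layer links, and fits a logistic-regression meta-classifier. The deeper layers of the full DSGD are not shown, and all data here are synthetic.

```python
# A minimal, single-level sketch of the stacking idea behind the DSGD (not the full model).
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def out_of_fold_probabilities(models, X, y, n_splits=5, drop_rate=0.2, seed=0):
    """5-fold out-of-fold class probabilities from each base learner, computed on a
    randomly thinned feature set (a stand-in for the DSGD's random links)."""
    rng = np.random.default_rng(seed)
    keep = rng.random(X.shape[1]) > drop_rate        # randomly discard some features
    Xk = X[:, keep]
    meta = np.zeros((X.shape[0], len(models)))
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for j, model in enumerate(models):
        for train_idx, val_idx in folds.split(Xk, y):
            model.fit(Xk[train_idx], y[train_idx])
            meta[val_idx, j] = model.predict_proba(Xk[val_idx])[:, 1]
    return meta

# toy participant data: rows are participants, columns are Table-2-style features
rng = np.random.default_rng(0)
X = rng.random((200, 10))
y = (X[:, 0] + 0.3 * rng.random(200) > 0.65).astype(int)      # 1 = long-term, 0 = short-term

base_learners = [SVC(probability=True), KNeighborsClassifier(), RandomForestClassifier()]
meta_features = out_of_fold_probabilities(base_learners, X, y)
meta_classifier = LogisticRegression().fit(meta_features, y)  # simple meta-classifier
print(meta_classifier.predict_proba(meta_features[:1])[0, 1]) # long-term probability score
```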
Next, we want to find features that can describe whether a participant is a long-term or short-term participant. We first use the time interval between participants' data uploads and their driving speed as features that reflect the participants' ability to complete the task. Furthermore, the number of tasks completed by participants is determined by the trajectory, which is a continuous sequence. Time-series analysis studies the statistical patterns followed by a data series. Therefore, we use the statistical results of time-series analysis as features, including the percentage of unique data points, the Fourier coefficients of the one-dimensional discrete Fourier transform, and the spectral center of mass (mean), variance, skewness, and kurtosis of the Fourier transform spectrum. Table 2 shows the features of the DSGD.
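The following is a hedged sketch of how the time-series features in Table 2 might be computed for a single participant with NumPy and SciPy; the variable names (`timestamps`, `speeds`) and the exact feature definitions are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of Table-2-style features for one participant, under the assumptions above.
import numpy as np
from scipy import stats

def participant_features(timestamps, speeds, n_coeffs=5):
    intervals = np.diff(np.sort(timestamps))     # time gaps between consecutive uploads
    spectrum = np.abs(np.fft.rfft(speeds))       # magnitudes of the 1-D discrete Fourier transform
    freqs = np.fft.rfftfreq(len(speeds))
    weights = spectrum / spectrum.sum()
    centroid = np.sum(freqs * weights)           # spectral center of mass (mean)
    return {
        "mean_upload_interval": intervals.mean(),
        "mean_speed": np.mean(speeds),
        "pct_unique_points": len(np.unique(speeds)) / len(speeds),
        "fourier_coefficients": spectrum[:n_coeffs],
        "spectral_centroid": centroid,
        "spectral_variance": np.sum(weights * (freqs - centroid) ** 2),
        "spectral_skewness": stats.skew(spectrum),
        "spectral_kurtosis": stats.kurtosis(spectrum),
    }

# example call with synthetic data
features = participant_features(np.cumsum(np.random.rand(100) * 600), np.random.rand(100) * 60)
```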
In summary, we can use the DSGD to classify the participants by prediction. In this paper, we set the predicted long-term participants to form set L and the predicted short-term participants to form set S. Next, we can precisely motivate these two types of participants separately.
In the evaluation of the classification task, we utilize several key performance parameters to assess the effectiveness of the model. First, we use the confusion matrix, a tabular representation of the model's predictions against the actual class labels, which facilitates the calculation of True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN). From the confusion matrix, essential metrics are derived: Precision (the proportion of correctly predicted positive instances among all positive predictions), Recall (also known as Sensitivity or the True Positive Rate, the proportion of correctly predicted positive instances among all actual positive instances), and Accuracy (the proportion of correctly classified instances among all instances). Second, we use the ROC curve to assess the model. The ROC curve and its corresponding Area Under the Curve (ROC-AUC) provide an insightful visualization of the model's trade-off between the True Positive Rate and the False Positive Rate across different probability thresholds; a higher ROC-AUC value indicates better discrimination between classes. Third, we use the F1-score as a balanced evaluation of the model's performance.
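A short example of computing these metrics with scikit-learn from hypothetical labels and scores is shown below; the numbers are made up and only illustrate the definitions.

```python
# Confusion matrix, precision, recall, accuracy, F1, and ROC-AUC on toy predictions.
from sklearn.metrics import (confusion_matrix, precision_score, recall_score,
                             accuracy_score, f1_score, roc_auc_score)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_prob = [0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1]     # model scores for the positive class
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]       # threshold at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TP, TN, FP, FN:", tp, tn, fp, fn)
print("precision:", precision_score(y_true, y_pred))   # TP / (TP + FP)
print("recall:   ", recall_score(y_true, y_pred))      # TP / (TP + FN)
print("accuracy: ", accuracy_score(y_true, y_pred))
print("F1-score: ", f1_score(y_true, y_pred))
print("ROC-AUC:  ", roc_auc_score(y_true, y_prob))     # threshold-free ranking quality
```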

5. Discussion of the Participant Incentive Mechanism

In this section, we will motivate the different types of participants separately based on the classification results. Section 5.1 and Section 5.2 motivate short-term participants to increase their duration of participation in the task. Section 5.3 motivates long-term participants to maintain the probability of their continued participation in the task.

5.1. Cultivation Mechanism for Short-Term Participants

This section will focus on SPIMPT, a prospect theory-based cultivation mechanism for short-term participants, through which we hope to increase the participation duration of short-term participants.

5.1.1. Design of Cultivation Mechanism Based on Prospect Theory

The key to the cultivation mechanism is the evaluation factor, and the size of the evaluation factor is related to the rewards and costs of the participants, so we discuss the rewards and costs first.
The participation duration determines the reward and cost of the participants, so we first discuss the properties of the participation duration. We let the participation duration of participant $g_i$ be $\delta_i$ ($\delta_i > 0$). To accommodate different task scenarios, we first normalize the participation duration. Then, we analyze the impact of the participation duration on the quality of crowdsensing's service. First, the longer the participants' participation duration, the larger the number of task areas that participants can reach, which can increase the coverage of tasks and improve the quality of crowdsensing's service. Second, there is a marginally decreasing impact of participants' participation duration on the crowdsensing service. This is because the number of task areas is limited, and the probability of participants collecting duplicate data increases as the participation duration increases, which does not help the quality of the crowdsensing service. Therefore, we want to increase the participation duration of participants, but not indefinitely. We define a performance factor $\theta_i$ to quantify the effect of participants' participation duration on service quality. We expect the performance factor to increase with the participation duration, but with marginally decreasing gains. We define the performance factor of participants as shown in the following Formula (1):
$$\theta_i = \frac{e^{\delta_i} - e^{-\delta_i}}{e^{\delta_i} + e^{-\delta_i}},$$
Next we use Theorem 1 to discuss the rationality of the performance factor’s definition.
Theorem 1. 
In SPIMPT, $\frac{d\theta_i}{d\delta_i} > 0$ and $\frac{d^2\theta_i}{d\delta_i^2} < 0$. When $\delta_i \to +\infty$, $\theta_i \to 1$.
Proof of Theorem 1. 
We first discuss the trend of change and take the first-order derivative of $\theta_i$ with respect to $\delta_i$: $\frac{d\theta_i}{d\delta_i} = \frac{\frac{d}{d\delta_i}\left(e^{\delta_i}-e^{-\delta_i}\right)\left(e^{\delta_i}+e^{-\delta_i}\right) - \frac{d}{d\delta_i}\left(e^{\delta_i}+e^{-\delta_i}\right)\left(e^{\delta_i}-e^{-\delta_i}\right)}{\left(e^{\delta_i}+e^{-\delta_i}\right)^2}$. We simplify the formula to obtain $\frac{d\theta_i}{d\delta_i} = \frac{4}{\left(e^{\delta_i}+e^{-\delta_i}\right)^2} > 0$, so $\theta_i$ increases with $\delta_i$. Taking the second-order derivative, we obtain $\frac{d^2\theta_i}{d\delta_i^2} = \frac{d}{d\delta_i}\frac{4}{\left(e^{\delta_i}+e^{-\delta_i}\right)^2} = \frac{-8\left(e^{\delta_i}-e^{-\delta_i}\right)}{\left(e^{\delta_i}+e^{-\delta_i}\right)^3} < 0$. Therefore, as $\delta_i$ increases, the increase in $\theta_i$ is marginally decreasing. Next, we discuss the case when $\delta_i \to +\infty$. Since $\lim_{\delta_i\to+\infty} e^{\delta_i} = +\infty$ and $\lim_{\delta_i\to+\infty} e^{-\delta_i} = 0$, we have $\lim_{\delta_i\to+\infty}\theta_i = \lim_{\delta_i\to+\infty}\frac{1-e^{-2\delta_i}}{1+e^{-2\delta_i}} = 1$. Therefore, when $\delta_i \to +\infty$, $\theta_i \to 1$. Proven. □
Theorem 1 justifies the performance factor to quantify the participation duration of participants. We can then determine the rewards and costs of participants based on the performance factors and thus determine the evaluation factor that can motivate participants.
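Before defining the reward, a quick numeric sketch of Formula (1), which equals the hyperbolic tangent of the participation duration, illustrates the marginally decreasing growth stated in Theorem 1; the duration values below are arbitrary.

```python
# Formula (1): theta_i = (e^d - e^-d) / (e^d + e^-d) = tanh(d); the gains shrink as d grows.
import math

for d in [0.5, 1.0, 2.0, 4.0, 8.0]:
    theta = math.tanh(d)
    print(f"delta = {d:>3}: theta = {theta:.4f}")
# delta = 0.5: theta ~ 0.4621, delta = 2.0: theta ~ 0.9640, delta = 8.0: theta ~ 1.0000
```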
First, we define the reward given to the participants as shown in the following Formula (2):
$$b_i(\delta_i) = \alpha_i\theta_i(\delta_i) + \varepsilon_t,$$
where $\alpha_i$ is the reward coefficient and $\alpha_i > 0$. The larger the performance factor, the larger the reward. $\varepsilon_t$ is the stochastic error, $\varepsilon_t \sim N\left(\mu, \vartheta_i(t)\sigma^2\right)$. $N(\cdot)$ represents the normal distribution, $\sigma$ is the standard deviation of the distribution, $\mu$ is the mean of the distribution, and $\vartheta_i(t)$ is the prospect factor with round $t$ as the independent variable. The prospect factor, which will be introduced in the next section, reflects the volatility of the participant's reward. The larger $\vartheta_i(t)$ is, the greater the volatility of the participant's reward. The platform wants to motivate the participants by adjusting the size of the prospect factor.
Next, we discuss the cost for the participant to complete the task. Since the participant's time is finite, increasing the participation duration forces the participant to keep giving up other things of greater value. Therefore, the cost $c_i(\delta_i)$ should satisfy $c_i'(\delta_i) > 0$. Furthermore, since cost is not the focus of this paper, we use the currently common definition of cost [34], as shown in the following Formula (3):
$$c_i(\delta_i) = \frac{1}{2}\epsilon\delta_i^2,$$
where ϵ is the cost coefficient and ϵ > 0 .
After determining the participants’ reward and cost based on the prospect factor, we can then design the evaluation factor used to motivate the participants. Based on the participant’s evaluation factor, the platform will determine the probability of a participant entering the maintenance stage. Since participants can obtain additional rights in the maintenance stage, participants have the incentive to increase the evaluation factor to enter the maintenance stage. For the convenience of this section, we assume that the value of the right after entering the maintenance stage is φ i . We will discuss φ i in detail in Section 5.2.
We defined the probability of a participant entering the maintenance stage as the proportion of other participants whose evaluation factors are exceeded by that participant's evaluation factor. Since this probability is a discrete function of the evaluation factor, a participant often needs to improve the evaluation factor by a large amount to improve the probability even once. Therefore, to motivate participants continuously, we relax the discrete function into a continuous one. Furthermore, we use the normal distribution to describe this probability because the normal distribution is the most common distribution and is more acceptable to the participants. Therefore, we define the probability of a participant entering the maintenance stage as the following Formula (4):
$$p_i = \int_{-\infty}^{V_i}\frac{1}{\bar{\sigma}\sqrt{2\pi}}\,e^{-\frac{\left(x-\bar{\mu}\right)^2}{2\bar{\sigma}^2}}\,dx,$$
where $\bar{\mu}$ is the mean of the evaluation factors of all participants, and $\bar{\sigma}$ is the standard deviation of the evaluation factors of all participants.
Next, we can determine the probability of a participant entering the maintenance stage once we have defined the evaluation factor. We define the evaluation factor as shown in the following Formula (5):
$$V_i = \alpha_i\theta_i(\delta_i) + \varepsilon_t - \frac{1}{2}\epsilon\delta_i^2 - \frac{1}{2}\rho_i\alpha_i^2\sigma^2\left(1+\vartheta_i(t)\right),$$
where $\alpha_i\theta_i(\delta_i) + \varepsilon_t - \frac{1}{2}\epsilon\delta_i^2$ represents the participant's net reward, so that the goal of maximizing the participant's reward and the goal of maximizing the evaluation factor become consistent. $\frac{1}{2}\rho_i\alpha_i^2\sigma^2\left(1+\vartheta_i(t)\right)$ represents the impact of the prospect factor on the evaluation factor. Since the prospect factor represents the fluctuation of the reward, which is similar to the concept of risk in economics, we use the economic concept of risk to assess the impact of the prospect factor [35]. $\rho_i$ is the risk aversion coefficient of participant $g_i$.
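For concreteness, the following sketch evaluates the reward of Formula (2) (with the stochastic error set to its mean), the cost of Formula (3), the evaluation factor of Formula (5), and the maintenance-stage probability of Formula (4) via the normal cumulative distribution function; every numeric parameter value is an illustrative assumption.

```python
# Illustrative evaluation of Formulas (2)-(5); all parameter values here are assumed.
import math
from scipy.stats import norm

alpha, epsilon_cost, rho, sigma = 8.0, 1.5, 0.05, 0.6   # reward/cost/risk/noise parameters
prospect = 0.3                                          # prospect factor vartheta_i(t)
delta = 1.2                                             # normalized participation duration
noise = 0.0                                             # stochastic error set to its mean here

theta = math.tanh(delta)                                # Formula (1)
reward = alpha * theta + noise                          # Formula (2)
cost = 0.5 * epsilon_cost * delta ** 2                  # Formula (3)
V = reward - cost - 0.5 * rho * alpha**2 * sigma**2 * (1 + prospect)   # Formula (5)

mu_bar, sigma_bar = 3.0, 1.0                            # mean/std of all evaluation factors
p_maintenance = norm.cdf(V, loc=mu_bar, scale=sigma_bar)  # Formula (4), cumulative-normal form
print(V, p_maintenance)
```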

5.1.2. Impact of Prospect Factor on Participants’ Decisions

The previous section introduces the design of the prospect factor-based cultivation mechanism, and this section focuses on the setting of the prospect factor and its influence on participants’ decisions.
First, in the cultivation mechanism, the utility of the participants is as shown in the following Formula (6):
$$U_i^c = \alpha_i\theta_i(\delta_i) + \varepsilon_t - \frac{1}{2}\epsilon\delta_i^2 + p_i\varphi_i,$$
where $\alpha_i\theta_i(\delta_i) + \varepsilon_t - \frac{1}{2}\epsilon\delta_i^2$ represents the reward of the participant and $p_i\varphi_i$ represents the value of the right that the participant can obtain if the participant enters the maintenance stage.
To facilitate the evaluation of the experimental results, we set a formula for the participants’ probability of continuing to participate in the task. The greater the utility, the greater the probability that the participant will continue participating in the task. Since the participation probability is only a reflection of the participant’s utility and the form of the participation probability formula does not affect the comparison between mechanisms, we choose a simple formula that is consistent with the properties of the participation probability:
$$p_i^c = \frac{e^{\eta U_i^c}}{1 + e^{\eta U_i^c}},$$
where $p_i^c$ represents the participation probability of the $i$-th participant and $\eta$ represents the probability coefficient.
After determining the participation probability of participants, we can then discuss the effect of the prospect factor on the participation probability. Based on the properties of prospect theory introduced in Section 1, we first set up the non-linear prospect factor. The hyperbolic function [36] is consistent with the properties of prospect theory and is a commonly used non-linear function, so we choose the hyperbolic function to describe the non-linear prospect factor:
$$\vartheta_i(t) = \frac{2}{e^{\hbar t} + e^{-\hbar t}},$$
where $\hbar$ is the regulating factor.
According to prospect theory, participants perceive the change in the prospect factor linearly rather than non-linearly, so we transform the non-linear prospect factor into the linear prospect factor perceived by the participants, as shown in the following Formula (9):
$$\hat{\vartheta}_i(t+1) = \max\left\{(t+1)\left(\frac{2}{e^{\hbar(t+1)}+e^{-\hbar(t+1)}} - \frac{2}{e^{\hbar t}+e^{-\hbar t}}\right) + 1,\; 0\right\},$$
Participants can determine their utility and thus make optimal decisions based on the prospect factor. We next use Theorem 2 to discuss the effect of the prospect factor on participants’ decisions.
Theorem 2. 
The prospect factor increases the participation probability of participants.
Proof of Theorem 2. 
In this problem, we need to compare the sizes of $\vartheta_i(t+1)$ and $\hat{\vartheta}_i(t+1)$. First, when the linear expression inside the max operator of Formula (9) is negative, $\hat{\vartheta}_i(t+1) = 0 < \vartheta_i(t+1)$; since the evaluation factor $V_i$ decreases as the prospect factor grows and $p_i$ increases with $V_i$, the perceived prospect factor increases the participation probability. Second, we discuss the case in which the expression is positive. We let $F(t) = \frac{2}{e^{\hbar t}+e^{-\hbar t}}$ denote the non-linear prospect factor and let $G(t)$ denote the linear expression inside the max operator of Formula (9). We transform the problem into comparing the slopes of $F(t)$ and $G(t)$ at the point where the two functions intersect. Differentiating the two functions separately, we obtain $\frac{dF(t)}{dt} = \frac{-2\hbar\left(e^{\hbar t}-e^{-\hbar t}\right)}{\left(e^{\hbar t}+e^{-\hbar t}\right)^2} < 0$ and $\frac{dG(t)}{dt} < 0$. We then let $T(t) = \frac{dF(t)}{dt} - \frac{dG(t)}{dt}$ and write it over a common positive denominator, denoting the resulting numerator by $K(t)$. Differentiating $K(t)$ shows that $\frac{dK(t)}{dt} < 0$ with $K(0) = 0$, so $K(t) < 0$ and therefore $\vartheta_i(t+1) > \hat{\vartheta}_i(t+1)$. Therefore, the prospect factor will increase the participation probability of participants. Proven. □
Theorem 2 proves that the prospect-based cultivation mechanism increases participants’ participation probability, which in turn increases participation duration and thus improves the quality of the crowdsensing task. The following section discusses how to maintain participants’ participation duration.

5.2. Maintenance Mechanism for Short-Term Participants

The previous section discusses the incentive mechanism in the cultivation stage, where the platform can motivate participants to increase their participation duration through additional rights granted in the maintenance stage. This section discusses how to motivate participants who enter the maintenance stage. We will first introduce the design of the maintenance mechanism and then discuss the optimal decisions of the participants in the maintenance mechanism.

5.2.1. Design of Maintenance Mechanism Based on Prospect Theory

Participants in the maintenance stage may obtain the right to share a portion of the profits of the remaining participants. The number of participants whose profits can be shared is determined by the participation duration of the participant in the maintenance stage. We set the share number of participant $g_i$ as the following Formula (10):
$$\omega_i = \delta_i^{a_i},$$
where $a_i$ represents the elasticity factor of the share number.
From Formula (3), since the participation duration determines the cost of the participant, the cost of the participant in the maintenance stage is shown in the following Formula (11):
$$c_i(\omega_i) = \frac{1}{2}\epsilon\,\omega_i^{2/a_i},$$
The share number and the reward of the shared participants determine the value of the participants' rights in the maintenance stage. The reward of the shared participants is uncertain and fluctuates randomly. Since geometric Brownian motion is a common way to describe random variables [37], we use geometric Brownian motion to describe the reward fluctuations of the shared participants. We assume that the average reward $\hat{b}_i$ of the shared participants satisfies the following Formula (12):
$$\frac{d\hat{b}_i}{\hat{b}_i} = \mu_1\,dt + \sigma_1\,dz,$$
where $\mu_1$ and $\sigma_1$ represent the expected growth rate and volatility of the average reward $\hat{b}_i$, respectively, and $z$ represents the Wiener process.
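A brief simulation sketch of the geometric Brownian motion in Formula (12) is given below; the drift, volatility, initial reward, and horizon are assumed values that only illustrate how the shared participants' average reward might evolve.

```python
# Simulating Formula (12): d b_hat / b_hat = mu_1 dt + sigma_1 dz  (all values assumed).
import numpy as np

mu_1, sigma_1 = 0.02, 0.1            # expected growth rate and volatility per round
b0, rounds, dt = 5.0, 50, 1.0        # initial average reward, number of rounds, step size
rng = np.random.default_rng(42)

b = [b0]
for _ in range(rounds):
    dz = np.sqrt(dt) * rng.standard_normal()   # Wiener-process increment
    # exact update for geometric Brownian motion over one step of size dt
    b.append(b[-1] * np.exp((mu_1 - 0.5 * sigma_1**2) * dt + sigma_1 * dz))

print(b[0], b[-1])
```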
We can then obtain the value of the right to the participant as shown in the following Formula (13):
$$\varphi_i\left(\hat{b}_i\right) = \int_0^{+\infty}\left(1-\hat{\vartheta}_i(t)\right)\omega_i\left(\hat{b}_i e^{\mu_1 t} - \hat{c}_i\right)e^{-r_i t}\,dt,$$
where $r_i$ represents the discount factor of participant $g_i$ and $\hat{c}_i$ represents the average cost of the shared participants. Since the participant can obtain the profit of the shared participants for a long time, we use an integral to express the sum of the values of the right at different stages. At the same time, the value of different periods is different: the later a reward arrives, the lower its present value, so we need to discount the reward for different periods. $e^{\mu_1 t}$ and $e^{-r_i t}$ represent the rate resulting from the growth rate and the discount rate resulting from the discount factor, respectively. We next discuss the rationality of these rates using Theorem 3.
Theorem 3. 
The rates resulting from the growth rate and the discount factor are $e^{\mu_1 t}$ and $e^{-r_i t}$, respectively.
Proof of Theorem 3. 
First, we discuss the rate $e^{\mu_1 t}$ resulting from the growth rate. With a per-round growth rate $\mu_1$, the variation of $\hat{b}_i$ with the number of rounds $t$ satisfies $F(t) = \hat{b}_i\left(1+\mu_1\right)^{t}$. We divide each round into $m$ equal parts, and the equation becomes $F(t) = \hat{b}_i\left(1+\frac{\mu_1}{m}\right)^{mt}$; the continuous form is $F(t) = \lim_{m\to\infty}\hat{b}_i\left(1+\frac{\mu_1}{m}\right)^{mt}$. We let $n \le m < n+1$; then $\hat{b}_i\left(1+\frac{\mu_1}{m+1}\right)^{mt} < \hat{b}_i\left(1+\frac{\mu_1}{m}\right)^{mt} < \hat{b}_i\left(1+\frac{\mu_1}{n}\right)^{(n+1)t}$. It is easy to prove that $\lim_{n\to\infty}\hat{b}_i\left(1+\frac{\mu_1}{m+1}\right)^{mt} = \lim_{n\to\infty}\hat{b}_i\left(1+\frac{\mu_1}{n}\right)^{(n+1)t} = \hat{b}_i e^{\mu_1 t}$. According to the squeeze theorem, $\lim_{m\to\infty}\hat{b}_i\left(1+\frac{\mu_1}{m}\right)^{mt} = \hat{b}_i e^{\mu_1 t}$. Therefore, the rate resulting from the growth rate is $e^{\mu_1 t}$. Similarly, the rate resulting from the discount factor is $e^{-r_i t}$. Proven. □
Theorem 3 justifies the growth and discount rates. Next, we can solve Formula (13) to obtain the value of the right, as shown in the following Formula (14):
$$\varphi_i\left(\hat{b}_i\right) = \left(1-\hat{\vartheta}_i(t)\right)\omega_i\left(\frac{\hat{b}_i}{r_i-\mu_1} - \frac{\hat{c}_i}{r_i}\right),$$
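The closed form of Formula (14) can be evaluated directly; the sketch below plugs in assumed values for the share number, prospect factor, discount factor, growth rate, and the average reward and cost of the shared participants, and it requires $r_i > \mu_1$ for the underlying integral to converge.

```python
# Value of the right per Formula (14); all inputs are assumed illustrative values.
def right_value(omega, b_hat, c_hat, r, mu_1, prospect_hat):
    assert r > mu_1, "the discount factor must exceed the growth rate"
    return (1.0 - prospect_hat) * omega * (b_hat / (r - mu_1) - c_hat / r)

print(right_value(omega=1.5, b_hat=5.0, c_hat=1.0, r=0.1, mu_1=0.02, prospect_hat=0.2))
# (1 - 0.2) * 1.5 * (5/0.08 - 1/0.1) = 1.2 * (62.5 - 10) = 63.0
```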

5.2.2. Optimal Decision-Making of Participants in the Maintenance Mechanism

The previous section discusses the design of the maintenance mechanism, and in this section, we discuss the optimal decision-making of the participants in the maintenance mechanism.
We know from the previous section that accepting shared profits is a right, and the participant can choose to exercise or waive it. Therefore, $\varphi_i\left(\hat{b}_i\right)$ is a trigger value for the participant. If $\hat{b}_i$ is too low, the additional benefit is not enough to offset the cost of increasing the participant's participation duration, so the participant will waive the right. A participant can benefit from this right and accept it only if $\hat{b}_i$ exceeds a certain threshold. We denote this threshold as $\hat{b}_i^*$. The platform needs to adjust the reward so that $\hat{b}_i > \hat{b}_i^*$, so that the maintenance mechanism can incentivize the participants.
Next, we discuss the participants' decision-making in detail. Participants do not make decisions based on the trigger value $\varphi_i\left(\hat{b}_i\right)$, but on the choice value of the right that has not yet been triggered, denoted as $\kappa_i\left(\hat{b}_i\right)$ in this paper. The trigger value $\varphi_i\left(\hat{b}_i\right)$ represents the actual benefit available to the participant when $\hat{b}_i > \hat{b}_i^*$. The choice value $\kappa_i\left(\hat{b}_i\right)$ represents the value the participant expects this right to bring when $\hat{b}_i < \hat{b}_i^*$. Before calculating $\kappa_i\left(\hat{b}_i\right)$, we discuss the property that $\kappa_i\left(\hat{b}_i\right)$ should satisfy, using the following Formula (15):
$$\kappa_i\left(\hat{b}_i\right) = \kappa_i\left(\hat{b}_i + d\hat{b}_i\right)e^{\mu_1 t}e^{-r_i t},$$
Formula (15) implies that the current choice value equals the future choice value discounted to the present moment. To calculate $\kappa_i\left(\hat{b}_i\right)$, we need to translate Formula (15) into the differential equation shown in the following Formula (16):
$$\frac{1}{2}\sigma_1^2\hat{b}_i^2\frac{\partial^2\kappa_i\left(\hat{b}_i\right)}{\partial\hat{b}_i^2} + \mu_1\hat{b}_i\frac{\partial\kappa_i\left(\hat{b}_i\right)}{\partial\hat{b}_i} - r_i\kappa_i\left(\hat{b}_i\right) = 0,$$
Next, we use Theorem 4 to prove the rationality of Formula (16).
Theorem 4. 
Formula (15) can be transformed into the differential equation $\frac{1}{2}\sigma_1^2\hat{b}_i^2\frac{\partial^2\kappa_i\left(\hat{b}_i\right)}{\partial\hat{b}_i^2} + \mu_1\hat{b}_i\frac{\partial\kappa_i\left(\hat{b}_i\right)}{\partial\hat{b}_i} - r_i\kappa_i\left(\hat{b}_i\right) = 0$.
Proof of Theorem 4. 
In this problem, we use a Taylor expansion to transform the formula. The Taylor expansion gives $\kappa_i\left(\hat{b}_i+\Delta\hat{b}_i\right) - \kappa_i\left(\hat{b}_i\right) = \kappa_i'\left(\hat{b}_i\right)\Delta\hat{b}_i + \frac{1}{2}\kappa_i''\left(\hat{b}_i\right)\left(\Delta\hat{b}_i\right)^2 + \frac{1}{6}\kappa_i'''\left(\hat{b}_i\right)\left(\Delta\hat{b}_i\right)^3 + \cdots$. Since $\hat{b}_i$ obeys geometric Brownian motion, the terms $\kappa_i'\left(\hat{b}_i\right)\Delta\hat{b}_i$ and $\frac{1}{2}\kappa_i''\left(\hat{b}_i\right)\left(\Delta\hat{b}_i\right)^2$ are of the same order. After removing the parts of smaller order than $dt$, we obtain $d\kappa_i\left(\hat{b}_i\right) = \frac{\partial\kappa_i\left(\hat{b}_i\right)}{\partial t}dt + \frac{\partial\kappa_i\left(\hat{b}_i\right)}{\partial\hat{b}_i}d\hat{b}_i + \frac{1}{2}\frac{\partial^2\kappa_i\left(\hat{b}_i\right)}{\partial\hat{b}_i^2}\left(d\hat{b}_i\right)^2$. According to the nature of $\hat{b}_i$, $d\ln\hat{b}_i = \left(\mu_1 - \frac{\sigma_1^2}{2}\right)dt + \sigma_1 dz$. Substituting this into the formula of the choice value, we obtain $d\kappa_i = \left[\mu_1\hat{b}_i\frac{\partial\kappa_i\left(\hat{b}_i\right)}{\partial\hat{b}_i} + \frac{\partial\kappa_i\left(\hat{b}_i\right)}{\partial t} + \frac{1}{2}\sigma_1^2\hat{b}_i^2\frac{\partial^2\kappa_i\left(\hat{b}_i\right)}{\partial\hat{b}_i^2}\right]dt + \sigma_1\hat{b}_i\frac{\partial\kappa_i\left(\hat{b}_i\right)}{\partial\hat{b}_i}dz$. We re-assume a value function to eliminate the effect of $dz$ and, after substitution and simplification, obtain $\frac{1}{2}\sigma_1^2\hat{b}_i^2\frac{\partial^2\kappa_i\left(\hat{b}_i\right)}{\partial\hat{b}_i^2} + \mu_1\hat{b}_i\frac{\partial\kappa_i\left(\hat{b}_i\right)}{\partial\hat{b}_i} - r_i\kappa_i\left(\hat{b}_i\right) = 0$. Proven. □
Theorem 4 proves the rationality of Formula (16), and next, we can solve the equation to obtain $\kappa_i\left(\hat{b}_i\right)$. We know from the definition of $\kappa_i\left(\hat{b}_i\right)$ that Formula (16) should also satisfy the constraints shown in the following Formula (17):
$$\mathrm{s.t.}\quad\begin{cases}\lim_{\hat{b}_i\to 0}\kappa_i\left(\hat{b}_i\right) = 0,\\ \kappa_i\left(\hat{b}_i^*\right) = \varphi_i\left(\hat{b}_i^*\right) - \frac{1}{2}\epsilon\,\omega_i^{2/a_i},\\ \kappa_i'\left(\hat{b}_i^*\right) = \varphi_i'\left(\hat{b}_i^*\right),\end{cases}$$
The first constraint above shows that when $\hat{b}_i \to 0$, the shared participants are mostly unpaid; this indicates that most participants are already in the maintenance stage or the tasks are nearly finished, so the right is worthless. The second constraint indicates that at the threshold $\hat{b}_i = \hat{b}_i^*$, the choice value should equal the real value net of the participant's cost. The third constraint indicates that at $\hat{b}_i = \hat{b}_i^*$, the choice value should grow with the same trend (slope) as the real value.
After considering the constraints, we solve the differential equation of Formula (16) to obtain the result as shown in the following Formula (18):
$$\kappa_i\left(\hat{b}_i\right) = \left[\frac{\left(1-\hat{\vartheta}_i(t)\right)\hat{c}_i\omega_i}{r_i\left(d_1-1\right)} + \frac{\epsilon\,\omega_i^{2/a_i}}{2\left(d_1-1\right)}\right]\left(\frac{\hat{b}_i}{\hat{b}_i^*}\right)^{d_1},$$
where $d_1 = \frac{1}{2} - \frac{\mu_1}{\sigma_1^2} + \sqrt{\left(\frac{\mu_1}{\sigma_1^2}-\frac{1}{2}\right)^2 + \frac{2r_i}{\sigma_1^2}} > 1$. Next, we use Theorem 5 to prove this solution.
Theorem 5. 
The solution of Formula (16) is $\kappa_i\left(\hat{b}_i\right) = \left[\frac{\left(1-\hat{\vartheta}_i(t)\right)\hat{c}_i\omega_i}{r_i\left(d_1-1\right)} + \frac{\epsilon\,\omega_i^{2/a_i}}{2\left(d_1-1\right)}\right]\left(\frac{\hat{b}_i}{\hat{b}_i^*}\right)^{d_1}$.
Proof of Theorem 5. 
We know from the properties of second-order differential equations that the solution of the above differential equation has the form $\kappa_i\left(\hat{b}_i\right) = \lambda_1\hat{b}_i^{d_1} + \lambda_2\hat{b}_i^{d_2}$, where $d_1 = \frac{1}{2} - \frac{\mu_1}{\sigma_1^2} + \sqrt{\left(\frac{\mu_1}{\sigma_1^2}-\frac{1}{2}\right)^2 + \frac{2r_i}{\sigma_1^2}} > 1$ and $d_2 = \frac{1}{2} - \frac{\mu_1}{\sigma_1^2} - \sqrt{\left(\frac{\mu_1}{\sigma_1^2}-\frac{1}{2}\right)^2 + \frac{2r_i}{\sigma_1^2}} < 0$. Next, we solve for $\lambda_1$ and $\lambda_2$. Since $d_2 < 0$ and $\lim_{\hat{b}_i\to 0}\kappa_i\left(\hat{b}_i\right) = 0$, we must have $\lambda_2 = 0$. We then discuss $\lambda_1$. From the value-matching constraint, $\lambda_1\hat{b}_i^{*d_1} = \left(1-\hat{\vartheta}_i(t)\right)\omega_i\left(\frac{\hat{b}_i^*}{r_i-\mu_1} - \frac{\hat{c}_i}{r_i}\right) - \frac{1}{2}\epsilon\,\omega_i^{2/a_i}$. Due to the constraint $\kappa_i'\left(\hat{b}_i^*\right) = \varphi_i'\left(\hat{b}_i^*\right)$, we obtain $d_1\lambda_1\hat{b}_i^{*d_1-1} = \frac{\left(1-\hat{\vartheta}_i(t)\right)\omega_i}{r_i-\mu_1}$. After combining and simplifying the two formulas above, we obtain $\hat{b}_i^* = \frac{\left(r_i-\mu_1\right)d_1}{d_1-1}\left[\frac{\epsilon\,\omega_i^{2/a_i-1}}{2\left(1-\hat{\vartheta}_i(t)\right)} + \frac{\hat{c}_i}{r_i}\right]$ and $\lambda_1 = \left[\frac{\left(1-\hat{\vartheta}_i(t)\right)\hat{c}_i\omega_i}{r_i\left(d_1-1\right)} + \frac{\epsilon\,\omega_i^{2/a_i}}{2\left(d_1-1\right)}\right]\hat{b}_i^{*-d_1}$. Substituting these two formulas, we obtain $\kappa_i\left(\hat{b}_i\right) = \left[\frac{\left(1-\hat{\vartheta}_i(t)\right)\hat{c}_i\omega_i}{r_i\left(d_1-1\right)} + \frac{\epsilon\,\omega_i^{2/a_i}}{2\left(d_1-1\right)}\right]\left(\frac{\hat{b}_i}{\hat{b}_i^*}\right)^{d_1}$. Proven. □
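As a small sanity check on the roots $d_1$ and $d_2$ used above, the following sketch computes them for assumed values of $\mu_1$, $\sigma_1$, and $r_i$ and confirms that $d_1 > 1$ and $d_2 < 0$.

```python
# Roots of the characteristic equation behind Formula (16); parameter values assumed.
import math

def characteristic_roots(mu_1, sigma_1, r):
    half_minus_drift = 0.5 - mu_1 / sigma_1**2
    disc = math.sqrt((mu_1 / sigma_1**2 - 0.5) ** 2 + 2 * r / sigma_1**2)
    return half_minus_drift + disc, half_minus_drift - disc   # (d1, d2)

d1, d2 = characteristic_roots(mu_1=0.02, sigma_1=0.1, r=0.1)
print(d1, d2)          # d1 > 1 and d2 < 0 as required
```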
Theorem 5 justifies the solution of the differential equation. Thus, participant $g_i$ can adjust his participation duration according to $\hat{b}_i$ and thus maximize his benefit. Then, we let $\frac{d\kappa_i\left(\hat{b}_i\right)}{d\omega_i} = 0$. By solving this equation, we can obtain the optimal share number of participants as shown in the following Formula (19):
$$\omega_i = \left\{\frac{2\hat{c}_i\left(1-\hat{\vartheta}_i(t)\right)}{r_i\,\epsilon\left[\left(d_1-1\right)\left(\frac{2}{a_i}-1\right)-1\right]}\right\}^{\frac{1}{2/a_i-1}},$$
Substituting Formula (19) into Formula (10), we can obtain the optimal participation duration of participants as shown in the following Formula (20):
$$\delta_i = \left\{\frac{2\hat{c}_i\left(1-\hat{\vartheta}_i(t)\right)}{r_i\,\epsilon\left[\left(d_1-1\right)\left(\frac{2}{a_i}-1\right)-1\right]}\right\}^{\frac{1}{2-a_i}},$$
where $\left(d_1-1\right)\left(\frac{2}{a_i}-1\right)-1 > 0$. We have $\frac{d\delta_i}{d\hat{\vartheta}_i(t)} = \frac{-2\hat{c}_i}{\left(2-a_i\right)r_i\epsilon\left[\left(d_1-1\right)\left(\frac{2}{a_i}-1\right)-1\right]}\left\{\frac{2\hat{c}_i\left(1-\hat{\vartheta}_i(t)\right)}{r_i\epsilon\left[\left(d_1-1\right)\left(\frac{2}{a_i}-1\right)-1\right]}\right\}^{\frac{a_i-1}{2-a_i}} < 0$. Combined with Theorem 2, we know that the presence of the prospect factor increases participants' participation duration when they seek to maximize their own benefit.
Finally, we need to find the threshold $\hat{b}_i^*$. The mechanism of the maintenance stage can only work if the average reward paid by the platform is greater than the threshold. We substitute Formula (20) into $\hat{b}_i^* = \frac{\left(r_i-\mu_1\right)d_1}{d_1-1}\left[\frac{\epsilon\,\omega_i^{2/a_i-1}}{2\left(1-\hat{\vartheta}_i(t)\right)} + \frac{\hat{c}_i}{r_i}\right]$, obtained from Theorem 5, to obtain $\hat{b}_i^*$ as shown in the following Formula (21):
$$\hat{b}_i^* = \frac{\left(r_i-\mu_1\right)\hat{c}_i\,d_1\left(\frac{2}{a_i}-1\right)}{r_i\left[\left(d_1-1\right)\left(\frac{2}{a_i}-1\right)-1\right]},$$
Thus, participant $g_i$ will be willing to accept this right only when the average reward paid by the platform is greater than $\frac{\left(r_i-\mu_1\right)\hat{c}_i\,d_1\left(\frac{2}{a_i}-1\right)}{r_i\left[\left(d_1-1\right)\left(\frac{2}{a_i}-1\right)-1\right]}$.

5.3. Incentive Mechanism for Long-Term Participant

The above two sections discuss the incentive mechanism for short-term participants; this section discusses the incentive mechanism for long-term participants. The incentive for long-term participants is simpler because long-term participants already have a long participation duration. Our incentive goal is to maximize the utility of long-term participants throughout the task phase, thereby increasing the probability that long-term participants will continue to participate in the task. Therefore, in this section, we focus on how to allocate the total reward $B_i^e$ to long-term participants over the task phase to maximize the participant's utility.
First, we discuss the participants’ utility functions. The CES utility function [38] is a commonly used function for evaluating participants’ utility, and therefore we choose it for evaluation. We define the participants’ utility function as shown in the following Formula (22):
$$u_i^e\left(b_i(t)\right) = \begin{cases}\dfrac{b_i(t)^{\beta}}{\beta}, & \beta \neq 0\\[6pt]\ln b_i(t), & \beta = 0,\end{cases}$$
where $\beta$ represents the utility coefficient. We know from Formula (22) that the participants' utility shows a marginally decreasing trend.
The next step is to discuss how to pay the reward $b_i(t)$ to maximize the participants' utility. We first define the objective of this optimization problem as shown in the following Formula (23):
$$\max \int_0^T e^{-r_i t}\,\frac{b_i(t)^{\beta}}{\beta}\,dt,$$
where $T$ is the total number of task rounds. Since the reward is paid in different rounds, we need to discount it.
We can simply know that the optimization problem needs to satisfy the constraint as shown in the following Formula (24):
$$\begin{cases}\dfrac{dx(t)}{dt} = -b_i(t),\\ x(0) = B_i^e,\\ x(T) = 0,\end{cases}$$
where $x(t)$ represents the remaining reward at the $t$-th round. The first constraint indicates that the remaining reward decreases at exactly the rate at which reward is paid to the participant in the current round. The second and third constraints indicate the initial and final reward states, respectively: to maximize the utility of the participants, the platform needs to have paid out the entire reward $B_i^e$ by the final round.
By solving this optimization problem, we can obtain the optimal reward payment formula as shown in the following Formula (25):
$$b_i(t) = \frac{r_i B_i^e}{\left(\beta-1\right)\left(e^{\frac{r_i T}{\beta-1}}-1\right)}\,e^{\frac{r_i t}{\beta-1}},$$
Next, we use Theorem 6 to justify this optimal reward payment formula.
Theorem 6. 
In the LPIM, the optimal reward payment formula is $b_i(t) = \frac{r_i B_i^e}{\left(\beta-1\right)\left(e^{\frac{r_i T}{\beta-1}}-1\right)}\,e^{\frac{r_i t}{\beta-1}}$.
Proof of Theorem 6. 
This problem is an optimization problem with respect to allocation and can be solved with the Hamiltonian. The Hamiltonian of this optimization problem is $H = e^{-r_i t}\frac{b_i(t)^{\beta}}{\beta} - \lambda(t)\,b_i(t)$. According to the maximum principle, $\frac{\partial H}{\partial b_i} = e^{-r_i t}b_i(t)^{\beta-1} - \lambda(t) = 0$ and $\frac{d\lambda(t)}{dt} = -\frac{\partial H}{\partial x} = 0$, so $\lambda(t)$ is a constant. To facilitate the solution, we set an intermediate variable $k$ with $e^{-r_i t}b_i(t)^{\beta-1} = k$, so that $b_i(t) = k^{\frac{1}{\beta-1}}e^{\frac{r_i t}{\beta-1}}$. According to the constraints, $\int_0^T k^{\frac{1}{\beta-1}}e^{\frac{r_i t}{\beta-1}}\,dt = B_i^e$. Calculating this integral, we obtain $k^{\frac{1}{\beta-1}} = \frac{r_i B_i^e}{\left(\beta-1\right)\left(e^{\frac{r_i T}{\beta-1}}-1\right)}$. Therefore, the final optimal reward function is $b_i(t) = \frac{r_i B_i^e}{\left(\beta-1\right)\left(e^{\frac{r_i T}{\beta-1}}-1\right)}\,e^{\frac{r_i t}{\beta-1}}$. Proven. □
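The following sketch evaluates the payment schedule of Formula (25)/Theorem 6 under assumed values of the utility coefficient, discount factor, total reward, and horizon, and numerically checks that the schedule pays out exactly the total reward $B_i^e$ over the task phase.

```python
# Optimal reward schedule of Formula (25); parameter values are illustrative assumptions.
import math

def optimal_payment(t, B, r, beta, T):
    scale = r * B / ((beta - 1.0) * (math.exp(r * T / (beta - 1.0)) - 1.0))
    return scale * math.exp(r * t / (beta - 1.0))

B, r, beta, T = 100.0, 0.05, 0.5, 30.0    # total reward, discount factor, utility coeff, rounds
steps = 10000
dt = T / steps
paid = sum(optimal_payment((k + 0.5) * dt, B, r, beta, T) * dt for k in range(steps))
print(round(paid, 3))   # ~100.0: the schedule exhausts the budget over the task phase
```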

6. Experiment and Analysis

This section evaluates the performance of the IMELPT using a real dataset. We compared the performance of the IMELPT with the SGSIM [34] and PACE [8] mechanisms. The SGSIM mechanism uses a two-stage Stackelberg game to design an optimal incentive mechanism and motivates participants to participate in the task through social networks. Like the IMELPT, the SGSIM mechanism uses the relationships between participants to motivate them to participate in the task. The PACE mechanism first evaluates the participants' performance and then motivates them by giving more rewards to those who perform better, which is the most direct incentive mechanism. The SGSIM and PACE mechanisms have demonstrated superior performance in experiments. Both are recent participant recruitment mechanisms, so these two are chosen for comparison in this paper.
This section is divided into three parts. Section 6.1 discusses the experimental environment and the set-up of the experiment, Section 6.2 analyzes the experimental results of the IMELPT, and Section 6.3 discusses the results of comparing the IMELPT with the comparison mechanisms.

6.1. Setup of the Experiment

To ensure the fairness of the experimental evaluation, we set the initial parameters of the IMELPT, SGSIM, and PACE mechanisms to be the same. The dataset used for our evaluation is the driving dataset of Beijing cabs [7]. This dataset, collected by Microsoft Research Asia, contains the GPS positions of 10,357 taxis in Beijing during one month. Since the participants in crowdsensing are people who use the free time in their work to complete tasks, and cab drivers are such people, this dataset meets the requirements of our experiment. Figure 4a,b show the driving trajectories of two cab drivers in the dataset. We can see from the figure that the total length of the driving trajectory of the cab in Figure 4a is much smaller than that of the cab in Figure 4b, and the cab in Figure 4a collects data at a much larger interval than the cab in Figure 4b. Therefore, the participation duration of the cab in Figure 4a is shorter than that of the cab in Figure 4b. This also reflects that it is reasonable to classify participants according to their participation duration. In addition, the values of the key parameters in the experiments are shown in Table 3 below:

6.2. Evaluation of the IMELPT

The first step of the IMELPT is to predict and classify the participants with the DSGD, and the accuracy of this prediction largely determines the final incentive effect. In the DSGD, we choose three base learners: SVM, KNN, and random forest. SVM is robust, KNN is insensitive to abnormal data, and random forest is accurate and resistant to overfitting. The DSGD combines the advantages of these three classification algorithms to improve prediction accuracy.
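The DSGD adds a deep stacking structure with dropout on top of these base learners. For reference, a plain single-layer stacking ensemble over the same three base learners can be sketched with scikit-learn as follows; this is only the conventional stacking baseline that the DSGD builds on, not the DSGD itself, and the hyperparameter values are illustrative.

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Base learners chosen for the reasons given above: SVM (robust),
# KNN (insensitive to abnormal data), random forest (accurate, hard to overfit).
base_learners = [
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
]

# A logistic-regression meta-learner combines the base predictions; the DSGD
# instead stacks several such layers and applies dropout between them.
stacking_clf = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)

# X holds the Table 2 features per participant; y marks long-term (1) vs short-term (0).
# stacking_clf.fit(X_train, y_train); y_pred = stacking_clf.predict(X_test)
```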
We use the confusion matrix in Figure 5 to show the performance of the DSGD. The confusion matrix is a common evaluation tool for classification tasks: it tabulates predicted against true labels by category, so the numbers of correct and incorrect classifications can be read off directly. Table 4 reports the evaluation results of the DSGD. The DSGD achieves precision, recall, and accuracy of 0.805, 0.811, and 0.813, respectively, and an F1 score, which balances precision and recall, of 0.833. In contrast, the baseline method yields precision, recall, and accuracy of 0.782, 0.786, and 0.789, with an F1 score of 0.784. The DSGD therefore outperforms the baseline on every metric, which supports its efficacy and practical applicability to this classification task.
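For reference, the scalar metrics reported in Table 4 follow from the confusion-matrix counts in the usual way; the sketch below computes them for the binary long-term/short-term split. The counts in the commented example are placeholders, not the paper's actual numbers.

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, accuracy, and F1 from binary confusion-matrix counts
    (long-term participants treated as the positive class)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1

# Example with placeholder counts (not the paper's data):
# print(classification_metrics(tp=81, fp=20, fn=19, tn=80))
```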
Next, we use the ROC curve in Figure 6 to illustrate the performance of the DSGD on the real dataset. The ROC curve measures how well a binary classifier distinguishes between the two classes by plotting the True Positive Rate against the False Positive Rate. The area under the curve (AUC) summarizes overall accuracy as a number between 0 and 1, with a perfect score of 1.0. As shown in the figure, the AUC of the DSGD is 0.813, larger than that of the baseline method, 0.789. This result indicates that the DSGD performs better and categorizes the participants more accurately.
Before evaluating the mechanism, we discuss the distribution of participation duration using Figure 7. Two factors determine participation duration: the total time a participant spends on the task and the average interval between data collections. The longer the total time and the shorter the collection interval, the longer the participation duration. The figure shows the distribution of the participation duration of all participants. The distribution of the total time spent on the task is relatively concentrated, mainly around a time stamp of 530,000, while the distribution of the data-collection intervals is scattered. We also find that many short-term participants can be motivated; they are mainly distributed at a total time of (510,000, 530,000) and a collection interval of (0, 800). Because the participation duration is unevenly distributed, it is necessary to divide the participants into long-term and short-term participants and motivate them separately.
Next, we discuss the incentive mechanism for short-term participants. In the cultivation stage, the platform ties the reward to the participation duration to motivate participants to increase their participation duration. To allow the platform to adjust the reward in real time according to the budget and the participants’ task completion, we introduce an adjustable reward coefficient. Figure 8 shows the effect of the reward coefficient and the participation duration on the reward. The reward increases with both quantities; the reward coefficient is linearly related to the reward and has the larger effect, so the platform can modulate the incentive by adjusting it. For example, when the reward coefficient is 5, a participant who increases the participation duration from 0 to 1 receives a reward of less than 4, whereas with a reward coefficient of 10 the same increase yields a reward close to 8. The setting of the reward coefficient lays the foundation for the following incentive mechanism.
The platform motivates short-term participants mainly through the prospect factor, whose properties are shown in Figure 9. As seen from the figure, the non-linear prospect factor decreases as the number of rounds increases, which is consistent with its definition. Furthermore, the gap between the non-linear and linear prospect factors widens as the number of rounds increases, which is also consistent with the nature of the prospect factor: the rate of change of the linear factor is greater than that of the non-linear factor. When the number of rounds is 1, the difference between the two factors is about 0.02; by the tenth round, it increases to 0.1. The platform incentivizes participants to increase their participation duration through the gap between the non-linear and linear prospect factors, so the incentive effect becomes more significant as the number of rounds increases.
After discussing the properties of the prospect factor, we use Figure 10 to discuss the effects of the prospect factor and the participation duration on the evaluation factor. The evaluation factor determines whether a participant can enter the maintenance stage and thereby gain rights, and it is therefore a key quantity in the cultivation stage. The participation duration is proportional to the evaluation factor, so the evaluation factor positively motivates participants to increase their participation duration, whereas the prospect factor is inversely proportional to the evaluation factor. Combined with Figure 9, as the number of rounds increases, the gap between the non-linear and linear prospect factors grows, which raises participants’ expectations of increasing the evaluation factor and hence the probability that they increase their participation duration. We also find that the gain in the evaluation factor grows with the participation duration: when the participation duration is 0.1 and the prospect factor decreases from 0.9 to 0.1, the evaluation factor increases by about three units, whereas at a participation duration of 0.9 it increases by about five units. In summary, changes in the prospect factor strengthen the incentive effect as participants increase their participation duration.
The evaluation factor determines the probability of entering the maintenance stage. Figure 11 shows the effect of the participation duration and the prospect factor on this probability. Both have a significant effect: the probability of entering the maintenance stage remains above 90% once the participation duration increases to 0.7 and the prospect factor decreases to 0.6. This indicates that, for short-term participants, increasing the participation duration greatly increases the probability of entering the maintenance stage; the incentive effect of the mechanism on participants with shorter participation duration is therefore significant.
We next use Figure 12 to show the effect of the prospect factor on the probability of entering the maintenance stage on the real dataset. We show this impact by comparing the expected probability of entering the maintenance stage under the SPIMPT and under the baseline method. As seen in the figure, the expected probability under the SPIMPT is much greater than under the baseline: the mean probability increases from 0.5 to 0.78 under the effect of the prospect factor. In summary, the prospect factor increases the expected probability of a participant entering the maintenance stage. At the same time, the expected probabilities do not fluctuate significantly across rounds, because the probability is determined by the ranking of the participants’ evaluation factors and is therefore relatively stable.
The increased expected probability in Figure 12 in turn increases the participation duration of the participants. Figure 13 compares the SPIMPT and the baseline method in terms of participation duration. The participants’ participation duration improves significantly overall: under the baseline method it is mainly concentrated in the range (0, 0.5), whereas under the SPIMPT it is mainly concentrated in the range (1.25, 1.75). At the same time, some participants’ participation duration is not effectively enhanced by the SPIMPT, because for these participants the increase in cost exceeds the benefit of entering the maintenance stage.
After a participant enters the maintenance stage, the platform’s goal is to maintain the participant’s participation duration. In the SPIMPT, the platform calculates the optimal participation duration for the participants, i.e., the duration that maximizes their benefit, and then adjusts the elasticity factor and the prospect factor to influence it. Figure 14 shows the effect of the elasticity and prospect factors on the optimal participation duration. The elasticity factor is directly proportional to the participation duration, and the prospect factor is inversely proportional to it. The elasticity factor reflects the number of people who can be shared: the larger it is, the more people a participant can share and the greater the participant’s willingness to increase the participation duration. The prospect factor decreases continuously, which increases the participants’ participation duration. At the same time, the elasticity factor and the prospect factor affect the participation duration stably, which makes the incentive in the maintenance stage more stable.
Next, we use Figure 15 to show the effect of the SPIMPT on the participation duration in the maintenance stage. As seen in the figure, the SPIMPT increases the participation duration from 0.7 to 0.8, an effect caused by the prospect factor. Furthermore, the overall participation duration increases with the number of rounds, because the platform keeps tuning down the prospect factor, producing further incentive effects. We also find that the gap between participants gradually widens as the number of rounds increases, because the SPIMPT continuously motivates participants and they increase their participation duration to different degrees in different rounds.
After evaluating the incentive mechanism for short-term participants, we next evaluate the incentive mechanism for long-term participants, the LPIM. The LPIM pays rewards based on the participants’ utility formula, so we first analyze the properties of the participants’ utility using Figure 16. As can be seen from the figure, the reward and the utility are proportional regardless of the value of the utility coefficient, which is consistent with the properties of the utility. At the same time, the utility coefficient determines the growth trend of the utility with the reward, so the platform can adapt to different participants by adjusting the utility coefficient. In summary, the utility formula is reasonable and meets the requirements of our mechanism.
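Reading off the integrand used in the proof of Theorem 6, the utility that a reward $b$ paid at time $t$ contributes to participant $g_i$ can, under this interpretation of the model, be written as
$u_i(b, t) = e^{-r_i t}\,\frac{b^{\beta}}{\beta}, \qquad 0 < \beta < 1,$
so utility increases with the reward, and the utility coefficient $\beta$ controls how quickly the marginal utility of additional reward diminishes, which matches the behaviour shown in Figure 16.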
After determining the participants’ utility formula, the platform optimizes the reward payment strategy to maximize the participants’ utility over the whole task stage. Figure 17 shows the optimal reward strategy. Due to the discount rate, the platform allocates more reward in the first few rounds. The utility coefficient, which reflects the marginal effect of additional reward on participants’ utility, largely determines the optimal payment: the larger the utility coefficient, the more the platform pays participants in the first few rounds. For example, when the utility coefficient increases from 0.2 to 0.7, the reward for the first round increases by 50%.
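The optimal schedule in Figure 17 follows directly from the closed form in Theorem 6. The sketch below evaluates $b_i(t)$ on a discretized horizon and checks that the payments exhaust the budget; the budget and horizon values are made up for illustration (only $r_i = 0.1$ matches Table 3).

```python
import math

def optimal_reward(t, budget, r, beta, horizon):
    """Optimal reward from Theorem 6:
    b_i(t) = r_i * B_i^e / ((beta - 1) * (exp(r_i * T / (beta - 1)) - 1)) * exp(r_i * t / (beta - 1))."""
    scale = r * budget / ((beta - 1.0) * (math.exp(r * horizon / (beta - 1.0)) - 1.0))
    return scale * math.exp(r * t / (beta - 1.0))

budget, r, beta, horizon = 100.0, 0.1, 0.5, 10.0   # r matches Table 3; budget and horizon are illustrative
steps = 1000
dt = horizon / steps
schedule = [optimal_reward((i + 0.5) * dt, budget, r, beta, horizon) for i in range(steps)]

# With 0 < beta < 1 the schedule is front-loaded: earlier rounds receive larger rewards.
print(schedule[0] > schedule[-1])                          # True
# The payments (numerically) integrate to the budget B_i^e.
print(abs(sum(b * dt for b in schedule) - budget) < 1e-3)  # True
```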
After discussing the optimal reward payment, we show the performance of the LPIM in Figure 18. The LPIM significantly increases the probability that participants continue to participate in the task: compared with the baseline method, it raises the mean continuation probability from 0.76 to 0.86. Under the baseline method, the participation probability is mainly distributed in the range (0.53, 0.88), whereas under the LPIM it is mainly distributed in the range (0.85, 0.88). This indicates that the LPIM raises the participation probability in a relatively stable way, while the baseline method is more volatile across participants.

6.3. Comparison of the IMELPT

We evaluated the IMELPT in the previous section; in this section, we first compare the IMELPT with the baseline method and then with the two comparison mechanisms.
First, we use Figure 19 to show the comparison between the IMELPT and the baseline method in terms of participation duration. Under the baseline method, the participants’ participation duration is distributed mainly in the range (0.1, 0.5). After being incentivized by the IMELPT, most participants’ participation duration lies in the range (0.2, 0.7), and some participants’ duration lies in the range (1.25, 1.75). Under the IMELPT, long-term participants maintain a high and stable participation duration, while short-term participants are continuously motivated to increase theirs. Thus, the targeted incentives of the IMELPT effectively increase participants’ participation duration.
Next, we use Figure 20 to compare the IMELPT and the baseline method in terms of the platform’s utility. The utility under the IMELPT is mainly concentrated in the range (0.25, 0.75), while that under the baseline method is mainly concentrated in the range (0, 0.5). This indicates that the IMELPT brings more value to the platform, because participants increase their participation duration and complete more tasks. At the same time, platform utility greater than 1 occurs frequently under the IMELPT but rarely under the baseline method, because under the IMELPT a large number of participants have a high participation duration, which substantially increases the platform’s utility.
Next, we compare the IMELPT with the comparison mechanisms. Figure 21 shows the comparison in terms of coverage. A participant’s participation duration determines the size of the task area they can cover, and the coverage of the task directly reflects the effectiveness of the crowdsensing service, so we use coverage to evaluate performance. As seen from the figure, the IMELPT achieves much higher coverage than the SGSIM and PACE mechanisms, and the gap grows as the number of rounds increases. At the end of the task, the IMELPT’s coverage is 30% higher than that of the other two mechanisms, which helps improve the quality of the crowdsensing service. The IMELPT also improves coverage from the very beginning of the task, reaching 0.2 in the first round.
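Coverage here is a spatial metric over the task area. One common way to measure it from GPS trajectories, and the way the sketch below assumes, is to grid the area and count the fraction of cells visited by at least one participant; the grid resolution and bounding box are illustrative assumptions, not the paper's specification.

```python
def grid_coverage(trajectories, lat_range, lon_range, n_cells=100):
    """Fraction of grid cells touched by at least one GPS point.
    trajectories: iterable of (latitude, longitude) point lists;
    lat_range, lon_range: (min, max) bounds of the task area.
    The uniform n_cells x n_cells grid is an illustrative assumption."""
    lat_min, lat_max = lat_range
    lon_min, lon_max = lon_range
    covered = set()
    for points in trajectories:
        for lat, lon in points:
            if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
                row = min(int((lat - lat_min) / (lat_max - lat_min) * n_cells), n_cells - 1)
                col = min(int((lon - lon_min) / (lon_max - lon_min) * n_cells), n_cells - 1)
                covered.add((row, col))
    return len(covered) / (n_cells * n_cells)

# Example call with a rough bounding box around Beijing (illustrative values):
# coverage = grid_coverage(all_trajectories, lat_range=(39.7, 40.1), lon_range=(116.0, 116.8))
```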
Finally, we use Figure 22 to compare the IMELPT and the comparison mechanisms in terms of the platform’s utility. At the end of the task, the platform’s utility under the IMELPT is about 30% higher than under the other two mechanisms, largely because of the increased coverage. In the first six rounds, the platform’s utility under the IMELPT is smaller than that under the SGSIM mechanism, because the IMELPT motivates participants with larger rewards in the early rounds. After round 6, however, the prospect factor allows the IMELPT to significantly reduce the platform’s expenditure, which then increases the platform’s total utility. In summary, the IMELPT improves the platform’s utility and, thus, the quality of the crowdsensing service, because the platform has more budget left to motivate more participants to collect more data.

7. Conclusions

In this paper, we proposed the IMELPT, based on prospect theory and ensemble learning, to address the low coverage of crowdsensing under constraints on the budget and the number of participants. We used the DSGD to predict and classify participants, the SPIMPT to motivate short-term participants, and the LPIM to motivate long-term participants; like a social chameleon, the IMELPT applies different incentives to different participants. In the DSGD, we improved the ensemble learning model to increase prediction accuracy. In the SPIMPT, we drew on prospect theory to transform differences in reward variation into asymmetric information, thereby aligning the platform’s and participants’ goals; the SPIMPT incentivizes short-term participants to increase their participation duration and, thus, coverage. In the LPIM, we maintained the participation rate of long-term participants, and hence high coverage, by maximizing their utility. We evaluated the IMELPT on a real dataset and found that it effectively increases participants’ participation duration and, thus, the coverage of the task. Compared with the latest mechanisms, the IMELPT performs much better in terms of coverage. In future work, we will apply the IMELPT in a real environment and optimize its parameters based on feedback from that environment. Our current research focuses only on crowdsensing; we will further investigate applying this approach to fog computing and edge computing.

Author Contributions

J.L., H.X. and D.L. designed the project and drafted the manuscript, as well as collected the data. X.D. and H.L. wrote the code and performed the analysis. All authors participated in finalizing the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant 62272488, by the Natural Science Foundation of Hunan Province, China, under Grant 2021JJ30869, and by the Scientific and Technological Innovation 2030 Major Project of New Generation Artificial Intelligence under Grant No. 2020AAA0109601.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The study did not report any data.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IMELPT   Incentive Mechanism based on Ensemble Learning and Prospect Theory
DSGD     Deep-Stacking-Generation algorithm based on Dropout
SPIMPT   Short-term Participant Incentive Mechanism based on Prospect Theory
LPIM     Long-term Participant Incentive Mechanism

References

  1. Tan, W.; Zhao, L.; Li, B.; Xu, L.; Yang, Y. Multiple Cooperative Task Allocation in Group-Oriented Social Mobile Crowdsensing. IEEE Trans. Serv. Comput. 2022, 15, 3387–3401. [Google Scholar] [CrossRef]
  2. Li, L.; Shi, D.; Zhang, X.; Hou, R.; Yue, H.; Li, H.; Pan, M. Privacy Preserving Participant Recruitment for Coverage Maximization in Location Aware Mobile Crowdsensing. IEEE Trans. Mob. Comput. 2022, 21, 3250–3262. [Google Scholar] [CrossRef]
  3. Chen, F.; Huang, L.; Gao, Z.; Liwang, M. Latency-Sensitive Task Allocation for Fog-Based Vehicular Crowdsensing. IEEE Syst. J. 2022, 17, 1909–1917. [Google Scholar] [CrossRef]
  4. Ramazani, A.; Vahdat-Nejad, H. CANS: Context-aware traffic estimation and navigation system. IET Intell. Transp. Syst. 2017, 11, 326–333. [Google Scholar] [CrossRef]
  5. Xu, J.; Yang, S.; Lu, W.; Xu, L.; Yang, D. Incentivizing for Truth Discovery in Edge-assisted Large-scale Mobile Crowdsensing. Sensors 2020, 20, 805. [Google Scholar] [CrossRef]
  6. Liu, W.; Yang, Y.; Wang, E.; Wang, H.; Wang, Z.; Wu, J. Dynamic online user recruitment with (non-) submodular utility in mobile crowdsensing. IEEE-ACM Trans. Netw. 2021, 29, 2156–2169. [Google Scholar] [CrossRef]
  7. Restuccia, F.; Ferraro, P.; Silvestri, S.; Das, S.K.; Lo Re, G. IncentMe: Effective Mechanism Design to Stimulate Crowdsensing Participants with Uncertain Mobility. IEEE Trans. Mob. Comput. 2019, 18, 1571–1584. [Google Scholar] [CrossRef]
  8. Zhao, B.; Tang, S.; Liu, X.; Zhang, X. PACE: Privacy-Preserving and Quality-Aware Incentive Mechanism for Mobile Crowdsensing. IEEE Trans. Mob. Comput. 2021, 20, 1924–1939. [Google Scholar] [CrossRef]
  9. Xu, J.; Zhou, Y.; Ding, Y.; Yang, D.; Xu, L. Biobjective Robust Incentive Mechanism Design for Mobile Crowdsensing. IEEE Internet Things J. 2021, 8, 14971–14984. [Google Scholar] [CrossRef]
  10. Xu, C.; Si, Y.; Zhu, L.; Zhang, C.; Sharif, K.; Zhang, C. Pay as How You Behave: A Truthful Incentive Mechanism for Mobile Crowdsensing. IEEE Internet Things J. 2019, 6, 10053–10063. [Google Scholar] [CrossRef]
  11. Xu, X.; Yang, Z.; Xian, Y. ATM: Attribute-Based Privacy-Preserving Task Assignment and Incentive Mechanism for Crowdsensing. IEEE Access 2021, 9, 60923–60933. [Google Scholar] [CrossRef]
  12. Zhan, Y.; Xia, Y.; Liu, Y.; Li, F.; Wang, Y. Incentive-Aware Time-Sensitive Data Collection in Mobile Opportunistic Crowdsensing. IEEE Trans. Veh. Technol. 2017, 66, 7849–7861. [Google Scholar] [CrossRef]
  13. Lu, J.; Zhang, Z.; Wang, J.; Li, R.; Wan, S. A Green Stackelberg-game Incentive Mechanism for Multi-service Exchange in Mobile Crowdsensing. ACM Trans. Internet Technol. 2022, 22, 31. [Google Scholar] [CrossRef]
  14. Ren, Y.; Li, X.; Miao, Y.; Luo, B.; Weng, J.; Choo, K.K.R.; Deng, R.H. Towards Privacy-Preserving Spatial Distribution Crowdsensing: A Game Theoretic Approach. IEEE Trans. Inf. Forensics Secur. 2022, 17, 804–818. [Google Scholar] [CrossRef]
  15. Wang, W.; Zhan, J.; Herrera-Viedma, E. A three-way decision approach with a probability dominance relation based on prospect theory for incomplete information systems. Inf. Sci. 2022, 611, 199–224. [Google Scholar] [CrossRef]
  16. Liu, C.; Du, R.; Wang, S.; Bie, R. Cooperative Stackelberg game based optimal allocation and pricing mechanism in crowdsensing. Int. J. Sens. Netw. 2018, 28, 57–68. [Google Scholar] [CrossRef]
  17. Han, M.; Li, X.; Wang, L.; Zhang, N.; Cheng, H. Review of ensemble classification over data streams based on supervised and semi-supervised. J. Intell. Fuzzy Syst. 2022, 43, 3859–3878. [Google Scholar] [CrossRef]
  18. Gan, L.; Hu, Y.; Chen, X.; Li, G.; Yu, K. Application and Outlook of Prospect Theory Applied to Bounded Rational Power System Economic Decisions. IEEE Trans. Ind. Appl. 2022, 58, 3227–3237. [Google Scholar] [CrossRef]
  19. Yang, J.; Fu, L.; Yang, B.; Xu, J. Participant Service Quality Aware Data Collecting Mechanism With High Coverage for Mobile Crowdsensing. IEEE Access 2020, 8, 10628–10639. [Google Scholar] [CrossRef]
  20. Liu, Y.; Li, Y.; Cheng, W.; Wang, W.; Yang, J. A willingness-aware user recruitment strategy based on the task attributes in mobile crowdsensing. Int. J. Distrib. Sens. Netw. 2022, 18, 15501329221123531. [Google Scholar] [CrossRef]
  21. Zhang, J.; Yang, X.; Feng, X.; Yang, H.; Ren, A. A Joint Constraint Incentive Mechanism Algorithm Utilizing Coverage and Reputation for Mobile Crowdsensing. Sensors 2020, 20, 4478. [Google Scholar] [CrossRef] [PubMed]
  22. Gu, Y.; Shen, H.; Bai, G.; Wang, T.; Liu, X. QoI-aware incentive for multimedia crowdsensing enabled learning system. Multimed. Syst. 2020, 26, 3–16. [Google Scholar] [CrossRef]
  23. Chen, X.; Xu, S.; Han, J.; Fu, H.; Pi, X.; Joe-Wong, C.; Li, Y.; Zhang, L.; Noh, H.Y.; Zhang, P. PAS: Prediction-Based Actuation System for City-Scale Ridesharing Vehicular Mobile Crowdsensing. IEEE Internet Things J. 2020, 7, 3719–3734. [Google Scholar] [CrossRef]
  24. Song, S.; Liu, Z.; Li, Z.; Xing, T.; Fang, D. Coverage-Oriented Task Assignment for Mobile Crowdsensing. IEEE Internet Things J. 2020, 7, 7407–7418. [Google Scholar] [CrossRef]
  25. Mohammadi, S.; Narimani, Z.; Ashouri, M.; Firouzi, R.; Karimi-Jafari, M.H. Ensemble learning from ensemble docking: Revisiting the optimum ensemble size problem. Sci. Rep. 2022, 12, 410. [Google Scholar] [CrossRef]
  26. Wang, P.; Chen, Z.; Deng, X.; Wang, J.; Tang, R.; Li, H.; Hong, S.; Wu, Z. The Prediction of Storm-Time Thermospheric Mass Density by LSTM-Based Ensemble Learning. Space Weather 2022, 20, e2021SW002950. [Google Scholar] [CrossRef]
  27. Wang, X.; Han, T. Transformer Fault Diagnosis Based on Stacking Ensemble Learning. IEEJ Trans. Electr. Electron. Eng. 2020, 15, 1734–1739. [Google Scholar] [CrossRef]
  28. Wang, H.; Zhao, Y. ML2E: Meta-Learning Embedding Ensemble for Cold-Start Recommendation. IEEE Access 2020, 8, 165757–165768. [Google Scholar] [CrossRef]
  29. Gelbrich, K.; Roschk, H. Do complainants appreciate overcompensation? A meta-analysis on the effect of simple compensation vs. overcompensation on post-complaint satisfaction. Mark. Lett. 2011, 22, 31–47. [Google Scholar] [CrossRef]
  30. Wang, Y.; Fan, R.; Du, K.; Lin, J.; Wang, D.; Wang, Y. Private charger installation game and its incentive mechanism considering prospect theory. Transp. Res. Part D Transp. Environ. 2022, 113, 103508. [Google Scholar] [CrossRef]
  31. Liu, Y.; Cai, D.; Guo, C.; Huang, H. Evolutionary Game of Government Subsidy Strategy for Prefabricated Buildings Based on Prospect Theory. Math. Probl. Eng. 2020, 2020, 8863563. [Google Scholar] [CrossRef]
  32. Liu, W.; Liu, Z. Evolutionary game analysis of green production transformation of small farmers led by cooperatives based on prospect theory. Front. Environ. Sci. 2022, 10, 1041992. [Google Scholar] [CrossRef]
  33. Qu, X.; Wang, X.; Qin, X. Research on Responsible Innovation Mechanism Based on Prospect Theory. Sustainability 2023, 15, 1358. [Google Scholar] [CrossRef]
  34. Nie, J.; Luo, J.; Xiong, Z.; Niyato, D.; Wang, P. A Stackelberg Game Approach Toward Socially-Aware Incentive Mechanisms for Mobile Crowdsensing. IEEE Trans. Wirel. Commun. 2019, 18, 724–738. [Google Scholar] [CrossRef]
  35. Tian, K.; Zhuang, X.; Yu, B. The Incentive and Supervision Mechanism of Banks on Third-Party B2B Platforms in Online Supply Chain Finance Using Big Data. Mob. Inf. Syst. 2021, 2021, 9943719. [Google Scholar] [CrossRef]
  36. Chen, C.P.; Sandor, J. Inequality chains related to trigonometric and hyperbolic functions and inverse trigonometric and hyperbolic functions. J. Math. Inequalities 2013, 7, 569–575. [Google Scholar] [CrossRef]
  37. Munis, R.A.; Camargo, D.A.; Gomes Da Silva, R.B.; Tsunemi, M.H.; Ibrahim, S.N.I.; Simoes, D. Price Modeling of Eucalyptus Wood under Different Silvicultural Management for Real Options Approach. Forests 2022, 13, 478. [Google Scholar] [CrossRef]
  38. Arrow, K.J.; Chenery, H.B.; Minhas, B.S.; Solow, R.M. Capital-labor substitution and economic efficiency. Rev. Econ. Stat. 1961, 43, 225–250. [Google Scholar] [CrossRef]
Figure 1. Physical model. (a) Without IMELPT; (b) with IMELPT.
Figure 2. Logical model.
Figure 3. Framework of the DSGD.
Figure 4. Experimental environment. (a) Trajectory of a short-term participant; (b) Trajectory of a long-term participant.
Figure 5. Confusion matrix. (a) Confusion matrix of DSGD; (b) confusion matrix of baseline method.
Figure 6. ROC curve. (a) ROC curve of DSGD; (b) ROC curve of baseline method.
Figure 7. Distribution of participation duration.
Figure 8. Effect of reward coefficient and participation duration on reward.
Figure 9. Properties of the prospect factor.
Figure 10. Effect of participation duration and prospect factor on evaluation factor.
Figure 11. Effect of participation duration and prospect factor on the probability of entering the maintenance stage.
Figure 12. Comparison of the SPIMPT and baseline method in terms of participation probability.
Figure 13. Comparison of the SPIMPT and baseline method in terms of participation duration in cultivation stage.
Figure 14. Effect of elasticity factor and prospect factor on optimal participation duration.
Figure 15. Comparison of the SPIMPT and baseline method in terms of participation duration in the maintenance stage.
Figure 16. Properties of participant’s utility.
Figure 17. The properties of optimal reward for long-term participants.
Figure 18. Comparison of participation probability between LPIM and baseline method.
Figure 19. Comparison of the participation duration of the IMELPT and baseline method. (a) Participation duration in the baseline method; (b) participation duration in the IMELPT.
Figure 20. Comparison of the platform’s utility of the IMELPT and baseline method. (a) Platform’s utility in the baseline method; (b) platform’s utility in the IMELPT.
Figure 21. Comparison of the IMELPT and comparison mechanisms in terms of coverage.
Figure 22. Comparison of IMELPT and comparison mechanisms in terms of platform’s utility.
Table 1. Parameter table.
Symbol | Description
g_i | The i-th participant
δ_i | Participation duration of the i-th participant
θ_i | Performance factor of the i-th participant
ϵ | Cost coefficient
ϑ_i | Non-linear prospect factor of the i-th participant
ϑ̂_i | Linear prospect factor of the i-th participant
α | Reward coefficient
μ, σ | Mean and standard deviation of the reward’s distribution
V_i | Evaluation factor of the i-th participant
μ̄, σ̄ | Mean and standard deviation of the evaluation factor
ρ_i | Risk aversion coefficient of the i-th participant
p_i^c | Participation probability of the i-th participant
η | Probability coefficient
ω_i | Share number of the i-th participant
a_i | Elasticity factor of the i-th participant’s share number
 | Regulating factor
b̂_i, ĉ_i | Average reward and cost of shareable participants
μ_1, σ_1 | Expected growth rate and volatility of b̂_i
r_i | Discount rate of the i-th participant
β | Utility coefficient
Table 2. Features of the DSGD.
Features
Standard deviation of the time interval
Skewness of the time interval
Kurtosis of the time interval
Mean value of velocity
Standard deviation of velocity
Percentage of unique data points
Fourier coefficients of the one-dimensional discrete Fourier transform
Mean, variance, skewness and kurtosis of the Fourier transform spectrum
Table 3. Parameter setting.
Variable | Value
ϵ | 30
α | 5
ρ_i | 0.5
η | 0.04
a_i | 0.3
r_i | 0.1
Table 4. Evaluation results of DSGD.
Method | Precision | Recall | Accuracy | F1
DSGD | 0.805 | 0.811 | 0.813 | 0.833
Baseline method | 0.782 | 0.786 | 0.789 | 0.784
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Back to TopTop