A Joint Constraint Incentive Mechanism Algorithm Utilizing Coverage and Reputation for Mobile Crowdsensing
Abstract
1. Introduction
- An optimal MU selection algorithm (OMUS) is proposed to select the optimal MUs according to their location information and historical reputation. Thus, the collected sensing data are more accurate and credible;
- A two-stage Stackelberg game model is proposed to solve the balance problem between the lowest rewards of the SC and the optimal strategy of the MUs in the MCS system, and the existence of the Nash equilibrium is proven in the Stackelberg game;
- A task priority time series method is proposed to maximize the total utility of the MUs’ tasks;
- A reputation update and reward allocation method for the MUs is proposed. After the MUs upload the sensing data, the EM algorithm is used to evaluate the quality of sensing data, and SC evaluates the reputation of MUs according to the quality of sensing data and updates the historical reputation of each MU. Then, the reward is allocated to MUs who have completed the tasks according to the selected optimal strategy.
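The reputation-update contribution above can be sketched as follows. This is a hypothetical illustration only: the EM-based quality evaluation is abstracted to a simple average of effort levels, and the exponential blending rule and its parameters (`alpha`, `max_rep`) are assumptions, not the paper's formulas.

```python
# Hypothetical sketch of the post-upload reputation update: the SC scores
# the quality of uploaded data, then blends that score into the MU's
# historical reputation. All constants here are illustrative assumptions.

def evaluate_quality(effort_levels):
    """Average effort level e_ii as a stand-in quality score in [0, 1]."""
    return sum(effort_levels) / len(effort_levels)

def update_reputation(hist_rep, quality, alpha=0.3, max_rep=5.0):
    """Blend historical reputation with the latest quality score."""
    return (1 - alpha) * hist_rep + alpha * quality * max_rep

rep = 4.0
rep = update_reputation(rep, evaluate_quality([0.8, 0.7, 0.9]))
```

A smaller `alpha` makes the reputation depend more on history, which penalizes one-off low-quality uploads less severely.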
2. System Model and Game Formulation
- (1) TP publishes a sensing task and the target area to the SC;
- (2) If the MUs with a mobile smart device sensor are interested in the sensing task, they sign up to participate. The MU set is U = {u1, u2, …, un};
- (3) The SC uses the OMUS algorithm to select the optimal users W = {w1, w2, …, wm} (m ≤ n);
- (4) The SC and the selected MUs choose their optimal strategies using the coverage and reputation joint constraint incentive mechanism algorithm (CRJC-IMA). Once the SC determines the total reward R, the MUs choose the optimal bandwidth strategy to maximize the utility of the SC and the MUs;
- (5) Each MU sorts its tasks in time series according to the allocated reward to maximize the MUs' total utility;
- (6) The MUs upload sensing data to the SC and receive the reward allocated by the SC;
- (7) The SC evaluates the quality of the sensing data and updates the reputation of the MUs.
3. CRJC-IMA
3.1. OMUS
3.1.1. Virtual Point Selection (VPS)
- Step 1: Parameter setting. Initialize the speed and position of the m virtual points randomly in the target area;
- Step 2: Calculate the coverage of the m virtual points and find the individual extremum and group extremum. The individual extremum is the coverage rate at a virtual point's best position so far; the group extremum is the position with the maximum coverage among all individual extrema;
- Step 3: Update the speed and position of each virtual point in the set;
- Step 4: Calculate the coverage of the m virtual points;
- Step 5: Update the individual extremum and group extremum of the m virtual points;
- Step 6: If the maximum number of iterations has been reached, output the global optimal position; otherwise, return to Step 2.
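The six steps above follow a standard particle swarm optimization loop and can be sketched as follows. This is a minimal sketch under assumptions: each particle encodes the positions of all m virtual points, coverage is measured on a uniform grid, and the swarm size, inertia weight, and acceleration coefficients are illustrative values not taken from the paper.

```python
import random

def coverage(points, grid, r):
    """Fraction of grid cells within sensing range r of any virtual point."""
    hit = sum(1 for gx, gy in grid
              if any((gx - x) ** 2 + (gy - y) ** 2 <= r * r for x, y in points))
    return hit / len(grid)

def vps_pso(m=5, area=100.0, r=25.0, swarm=20, iters=60, seed=1):
    rng = random.Random(seed)
    step = area / 10
    grid = [(i * step, j * step) for i in range(11) for j in range(11)]
    fit = lambda p: coverage(list(zip(p[::2], p[1::2])), grid, r)
    # Step 1: random initial positions, zero velocities.
    pos = [[rng.uniform(0, area) for _ in range(2 * m)] for _ in range(swarm)]
    vel = [[0.0] * (2 * m) for _ in range(swarm)]
    # Step 2: individual extrema (pbest) and group extremum (gbest).
    pbest = [p[:] for p in pos]
    pval = [fit(p) for p in pos]
    g = max(range(swarm), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # assumed PSO coefficients
    for _ in range(iters):                      # Step 6: loop to max iterations
        for i in range(swarm):
            for d in range(2 * m):              # Step 3: update speed/position
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0.0), area)
            f = fit(pos[i])                     # Step 4: recompute coverage
            if f > pval[i]:                     # Step 5: update extrema
                pval[i], pbest[i] = f, pos[i][:]
                if f > gval:
                    gval, gbest = f, pos[i][:]
    return gbest, gval
```

The returned `gbest` holds the m optimal virtual-point positions, and `gval` the corresponding coverage rate.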
3.1.2. MU Selection Process
3.2. Update Reputation of MUs
3.2.1. Quality Evaluation
3.2.2. Reputation Update
3.3. Incentive Allocation
4. Designing the Incentive Mechanism Based on Stackelberg Game
4.1. Follower Game
4.1.1. Related Definitions
4.1.2. Proofs for Properties
4.2. Leader Game
5. Simulation Results and Analysis
5.1. Performance Evaluation of Selecting the Optimal Users
- The number of optimal users: Figure 3 shows the relationship between the optimal number of MUs m and the total reward R given by the SC in the CRJC-IMA. As the reward increases to 2000, more MUs are selected as collectors to sense the data, and the number of selected MUs rises linearly, indicating that the CRJC-IMA scales well with the reward.
- Coverage: Figure 4 shows the relationship between the total reward R given by the SC and the coverage rates of the CRJC-IMA and CTSIA algorithms. The coverage of the target area increases under both algorithms as R rises. When R exceeds 800, the coverage of the CRJC-IMA reaches 90%. When R exceeds 1600, the coverage of neither algorithm increases significantly, and both reach 90%; a larger reward would only raise the cost of the SC and cause excessive data redundancy. The results of the two algorithms show that the coverage rate of the CRJC-IMA is superior to that of the CTSIA.
- Reputation: Figure 5 shows the influence of different weights on reputation and coverage when selecting the optimal users with Equation (11). The horizontal axis represents the weights a and b, each ranging from 0 to 1: the left vertical axis gives the reputation value as a function of the reputation weight a, and the right vertical axis gives the coverage value as a function of the coverage weight b. The two weights sum to 1; for example, the coverage weight b is 0.9 when the reputation weight a is 0.1. The results show that the reputation value of the selected MUs is non-decreasing in the reputation weight; when a exceeds 0.1, the average reputation of the selected users reaches 4. The coverage of the selected MUs depends mainly on the number of users, so the coverage ratio and its weight are only weakly coupled.
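The joint selection score behind Figure 5 can be sketched as follows. Equation (11)'s exact form is not reproduced here; a normalized weighted sum with complementary weights a + b = 1 is assumed for illustration, and `max_rep` is a hypothetical normalization constant.

```python
# Illustrative joint reputation/coverage score with complementary weights,
# an assumed stand-in for Equation (11), not the paper's exact formula.

def selection_score(reputation, coverage_gain, a, max_rep=5.0):
    b = 1.0 - a  # reputation weight a and coverage weight b sum to 1
    return a * (reputation / max_rep) + b * coverage_gain

# a = 0.1 implies b = 0.9, mirroring the example in the text above.
s = selection_score(reputation=4.0, coverage_gain=0.6, a=0.1)
```

Raising `a` favors MUs with strong histories; lowering it favors MUs whose positions add the most coverage.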
5.2. Performance Evaluation of Incentive Mechanism
- Energy and bandwidth payoff: Figure 6 analyzes the mean square deviations of the energy payoff and bandwidth payoff obtained by the MUs. The total reward R given by the SC is divided into two parts: each MU is rewarded on the basis of the energy consumed and the bandwidth used. As shown in Figure 6, the mean square deviations of both payoffs increase with R; that is, the gaps between individual MUs' energy and bandwidth payoffs and the average payoff widen as R grows. For a fixed R, the mean square deviation of the energy payoff is greater than that of the bandwidth payoff because energy is determined by the distances of the MUs, whereas bandwidth is selected between zero and five. Thus, each MU's energy payoff is less stable than its bandwidth payoff.
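The dispersion statistic used in Figure 6 can be sketched as follows. Whether the figure plots the mean of squared deviations or its square root is not stated here, so the squared form is assumed, and the payoff values are hypothetical.

```python
# Mean square deviation of per-MU payoffs from their average, assumed as
# the dispersion measure behind Figure 6 (the squared form; the root form
# would be math.sqrt of this value).

def mean_square_deviation(payoffs):
    """Average squared deviation of individual payoffs from the mean."""
    mean = sum(payoffs) / len(payoffs)
    return sum((p - mean) ** 2 for p in payoffs) / len(payoffs)

energy_payoffs = [12.0, 15.5, 9.8, 14.2]  # hypothetical per-MU payoffs
msd = mean_square_deviation(energy_payoffs)
```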
- The utility of priority: Figure 7 shows the relationship between the total reward R of the task and the utility of the MUs with and without priority selection. A task yields utility when the MUs choose the optimal bandwidth strategy; however, when MUs need to perform other tasks, the utility of each task varies because the total reward differs across each MU's tasks. Figure 7 illustrates that the total utility obtained by the MUs rises with R and that, after priority ranking, the total utility exceeds that without prioritization. Thus, selecting task priorities increases the total utility and lets each MU perform tasks while avoiding time conflicts.
- Bandwidth strategy: Figure 8 analyzes the relationship between the bandwidth selected by the MUs and the total reward R of the task. The average selected bandwidth increases with R: it stays below 2.5 when R is less than 1000 and exceeds 4 when R is greater than 3000.
- Utility and payoff: Figure 9 shows the utility and payoff of the MUs in the follower game and of the SC in the leader game once the MUs have been selected to perform the task; the horizontal axis represents the total reward. In Figure 9a, the utility of the SC declines as R increases from 1000 to 2000 while the payoff of the SC remains unchanged: with the selected MUs fixed, the SC's utility decreases as the reward it pays increases. In Figure 9b, the average utility and payoff of the MUs grow with R, and the total reward paid by the SC is linearly related to both. This indicates that, for a fixed number of users, the more reward the SC pays, the less utility it retains, while the users gain more.
- Comparison with STD: Figure 10 compares the utility of the SC and the MUs under the CRJC-IMA and the STD algorithm [15]. The STD algorithm is a non-cooperative game based on the Stackelberg game, in which the total reward and the sensing time of a task are the parameters of the utility function. As the total reward R paid by the SC grows, the utility of the SC declines under both algorithms (Figure 10a). As R grows, the optimal number of MUs increases, and the average utility under both algorithms stops increasing significantly (Figure 10b). However, for the same R and the same optimal number of users, the MUs obtain more average utility under the CRJC-IMA: their cost is determined by the bandwidth used when performing the task, which is lower than the cost under the STD algorithm.
- Reputation evaluation: Figure 11 shows the relationship between the reputation of the MUs and the quality evaluation matrix (effort level eii) of the sensing data collected by MU wi. The effort levels eii of the sensing data uploaded by the majority of MUs follow the same normal distribution as in [32], with μ = 0.75 and σ = 0.125. Figure 11 shows a linear relationship between the effort level eii and the reputation: an MU that uploads sensing data with a small evaluation matrix receives a lower reputation, and one with a larger evaluation matrix receives a higher reputation.
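The simulation setup described above can be sketched as follows. The effort levels are drawn from the stated N(μ = 0.75, σ = 0.125) distribution and clipped to [0, 1]; the specific linear map from effort to reputation is an assumption chosen only to reproduce the linear trend of Figure 11.

```python
import random

# Sketch of the Figure 11 setup: effort levels e_ii sampled from
# N(0.75, 0.125) as in [32], clipped to [0, 1], then mapped linearly to
# a reputation score. The linear map itself is an illustrative assumption.

def sample_effort(rng, mu=0.75, sigma=0.125):
    return min(max(rng.gauss(mu, sigma), 0.0), 1.0)

def reputation_from_effort(effort, max_rep=5.0):
    return max_rep * effort  # linear, matching the trend in Figure 11

rng = random.Random(0)
efforts = [sample_effort(rng) for _ in range(1000)]
mean_effort = sum(efforts) / len(efforts)
```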
6. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
1. Guo, B.; Yu, Z.; Zhou, X.; Zhang, D. From participatory sensing to mobile crowd sensing. In Proceedings of the 2014 IEEE International Conference on Pervasive Computing and Communication Workshops, Budapest, Hungary, 24–28 March 2014; pp. 593–598.
2. Hoteit, S.; Secci, S.; Sobolevsky, S.; Ratti, C.; Pujolle, G. Estimating human trajectories and hotspots through mobile phone data. Comput. Netw. 2014, 64, 296–307.
3. Kim, S.; Robson, C.; Zimmerman, T.; Pierce, J.; Haber, E. Creek watch: Pairing usefulness and usability for successful citizen science. In Proceedings of the Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; pp. 2125–2134.
4. Pankratius, V.; Lind, F.; Coster, A.; Erickson, P.; Semeter, J. Mobile crowd sensing in space weather monitoring: The mahali project. IEEE Commun. Mag. 2014, 52, 22–28.
5. Eisenman, S.B.; Miluzzo, E.; Lane, N.D.; Peterson, R.A.; Gahng-Seop, A.; Andrew, T.C. BikeNet: A mobile sensing system for cyclist experience mapping. ACM Trans. Sens. Netw. 2010, 6, 6.1–6.39.
6. Baguena, M.; Calafate, C.T.; Cano, J.C.; Manzoni, P. An adaptive anycasting solution for crowd sensing in vehicular environments. IEEE Trans. Ind. Electron. 2015, 62, 7911–7919.
7. Zhan, Y.; Xia, Y.; Zhang, J. Incentive mechanism in platform-centric mobile crowdsensing: A one-to-many bargaining approach. Comput. Netw. 2018, 132, 40–52.
8. He, S.; Shin, D.H.; Zhang, J.; Chen, J.; Lin, P. An exchange market approach to mobile crowdsensing: Pricing, task allocation, and walrasian equilibrium. IEEE J. Sel. Areas Commun. 2017, 35, 921–934.
9. Yang, D.; Xue, G.; Fang, X.; Tang, J. Incentive mechanisms for crowdsensing: Crowdsourcing with smartphones. IEEE/ACM Trans. Netw. 2015, 24, 1732–1744.
10. Duan, X.; Zhao, C.; He, S.; Cheng, P.; Zhang, J. Distributed algorithms to compute Walrasian equilibrium in mobile crowdsensing. IEEE Trans. Ind. Electron. 2016, 64, 4048–4057.
11. Jaimes, L.G.; Vergara-Laurens, I.J.; Raij, A. A survey of incentive techniques for mobile crowd sensing. IEEE Internet Things J. 2015, 2, 370–380.
12. Lee, J.; Hoh, B. Sell your experiences: A market mechanism based incentive for participatory sensing. In Proceedings of the 2010 IEEE International Conference on Pervasive Computing and Communications (PerCom), Mannheim, Germany, 29 March–2 April 2010; pp. 60–68.
13. Gao, L.; Hou, F.; Huang, J. Providing long-term participation incentive in participatory sensing. In Proceedings of the 2015 IEEE Conference on Computer Communications (INFOCOM), Hong Kong, China, 26 April–1 May 2015; pp. 2803–2811.
14. Zhang, X.; Jiang, L.; Wang, X. Incentive mechanisms for mobile crowdsensing with heterogeneous sensing costs. IEEE Trans. Veh. Technol. 2019, 68, 3992–4002.
15. Yang, D.; Xue, G.; Fang, X.; Tang, J. Crowdsourcing to smartphones: Incentive mechanism design for mobile phone sensing. In Proceedings of the 18th Annual International Conference on Mobile Computing and Networking, Istanbul, Turkey, 22–26 August 2012; pp. 173–184.
16. Li, X.; Zhu, Q. Game based incentive mechanism for cooperative spectrum sensing with mobile crowd sensors. Wirel. Netw. 2019, 25, 1855–1866.
17. Lin, Y.; Cai, Z.; Wang, X.; Hao, F. Incentive mechanisms for crowdblocking rumors in mobile social networks. IEEE Trans. Veh. Technol. 2019, 68, 9220–9232.
18. Ota, K.; Dong, M.; Gui, J.; Liu, A. QUOIN: Incentive mechanisms for crowd sensing networks. IEEE Netw. 2018, 32, 114–119.
19. Nie, J.; Luo, J.; Xiong, Z.; Niyato, D.; Wang, P. A Stackelberg game approach toward socially-aware incentive mechanisms for mobile crowdsensing. IEEE Trans. Wirel. Commun. 2019, 18, 724–738.
20. Ueyama, Y.; Tamai, M.; Arakawa, Y.; Yasumoto, K. Gamification-based incentive mechanism for participatory sensing. In Proceedings of the 2014 IEEE International Conference on Pervasive Computing and Communication Workshops, Budapest, Hungary, 24–28 March 2014; pp. 98–103.
21. Yu, T.; Zhou, Z.; Zhang, D.; Wang, X.; Liu, Y.; Lu, S. INDAPSON: An incentive data plan sharing system based on self-organizing network. In Proceedings of the 2014 IEEE Conference on Computer Communications (INFOCOM), Toronto, ON, Canada, 27 April–2 May 2014; pp. 1545–1553.
22. Reddy, S.; Estrin, D.; Hansen, M.H.; Srivastava, M.B. Examining micro-payments for participatory sensing data collections. In Proceedings of the 12th International Conference on Ubiquitous Computing (UbiComp 2010), Copenhagen, Denmark, 26–29 September 2010.
23. Song, Z.; Zhang, B.; Liu, C.H.; Vasilakos, A.V.; Wang, W. QoI-aware energy-efficient participant selection. In Proceedings of the 2014 Eleventh Annual IEEE International Conference on Sensing, Communication, and Networking (SECON), Singapore, 30 June–3 July 2014; pp. 248–256.
24. Mendez, D.; Labrador, M.; Ramachandran, K. Data interpolation for participatory sensing systems. Pervasive Mob. Comput. 2013, 9, 132–148.
25. Wang, W.; Gao, H.; Liu, C.H.; Leung, K.K. Credible and energy-aware participant selection with limited task budget for mobile crowd sensing. Ad Hoc Netw. 2016, 43, 56–70.
26. Jaimes, L.G.; Vergara-Laurens, I.; Labrador, M.A. A location-based incentive mechanism for participatory sensing systems with budget constraints. In Proceedings of the 2012 IEEE International Conference on Pervasive Computing and Communications, Lugano, Switzerland, 19–23 March 2012; pp. 103–108.
27. Jaimes, L.G.; Laurens, I.J.V.; Raij, A. A location-based incentive algorithm for consecutive crowd sensing tasks. IEEE Lat. Am. Trans. 2016, 14, 811–817.
28. Yang, H.F.; Zhang, J.; Roe, P. Using reputation management in participatory sensing for data classification. Procedia Comput. Sci. 2011, 5, 190–197.
29. Zhang, J.; Lei, L.; Feng, X. Energy-efficient collaborative transmission algorithm based on potential game theory for beamforming. Int. J. Distrib. Sens. Netw. 2019, 15.
30. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN '95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
31. Dawid, A.P.; Skene, A.M. Maximum likelihood estimation of observer error-rates using the EM algorithm. J. R. Stat. Soc. Ser. C Appl. Stat. 1979, 28, 20–28.
32. Peng, D.; Wu, F.; Chen, G. Pay as how well you do: A quality based incentive mechanism for crowdsensing. In Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing, Hangzhou, China, 22–25 June 2015; pp. 177–186.
33. Fudenberg, D.; Tirole, J. Game Theory; MIT Press: Boston, MA, USA, 1991; pp. 67–100.
34. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004; pp. 484–496.
| Symbol | Definition | Symbol | Definition |
|---|---|---|---|
| U = {u1, u2, …, un} | Registered users set | dij | Distance between MUi and point j |
| W = {w1, w2, …, wm} | Optimal MUs set | | Historical reputation of MUi |
| m | Number of optimal MUs selected | fij | Objective value of MUi and point j |
| R | Total reward | ui | Utility of the task chosen by MUi |
| r | Sensing range | | Utility after MUi chooses priority |
| | Payoff of SC | h | Number of tasks for an MU |
| Bi | Bandwidth strategy selected by MUi | pil | Priority for MUi to perform task l |
| Ei | Energy used by MUi | | Time for MUi to perform task l |
| | Utility of SC | Eelect | Radio electronics energy |
| | Payoff of MUi | εfs | Radio amplifier energy (free space) |
| | Cost of MUi | εamp | Radio amplifier energy (multipath) |
| | Threshold | k | Packet size |
| Parameter | Value |
|---|---|
| Target area | 1000 m × 1000 m |
| | 1000 |
| | 220 |
| | 300 |
| | [1,5] |
| | [2,10] |
| | [1,5]/3 |
| | [1,5] |
| | [0.5,1] |
| | 60 m |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, J.; Yang, X.; Feng, X.; Yang, H.; Ren, A. A Joint Constraint Incentive Mechanism Algorithm Utilizing Coverage and Reputation for Mobile Crowdsensing. Sensors 2020, 20, 4478. https://doi.org/10.3390/s20164478