Article

Contract-Optimization Approach (COA): A New Approach for Optimizing Service Caching, Computation Offloading, and Resource Allocation in Mobile Edge Computing Network

School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun 130000, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(10), 4806; https://doi.org/10.3390/s23104806
Submission received: 22 February 2023 / Revised: 25 April 2023 / Accepted: 9 May 2023 / Published: 16 May 2023
(This article belongs to the Section Communications)

Abstract

An optimal method for resource allocation based on contract theory is proposed to improve energy utilization. In heterogeneous networks (HetNets), a distributed heterogeneous network architecture is designed to balance different computing capacities, and MEC server gains are modeled based on the amount of allocated computing tasks. An optimization function based on contract theory is developed to maximize the revenue gain of MEC servers while considering constraints such as service caching, computation offloading, and the number of allocated resources. As the objective function is complex, it is simplified using equivalent transformations and constraint reduction, and a greedy algorithm is applied to solve the resulting problem. A comparative experiment on resource allocation is conducted, and energy utilization parameters are calculated to compare the effectiveness of the proposed algorithm with baseline algorithms. The results show that the proposed incentive mechanism has a significant advantage in improving the utility of the MEC server.

1. Introduction

In recent years, the rapid development of the Internet of Things (IoT) and artificial intelligence (AI) has led to the emergence of a wide range of complex applications, including augmented reality (AR) navigation, autonomous driving, face recognition, autonomous control, and smart healthcare [1]. However, these applications, which are typically latency-sensitive and computationally intensive, place significant demands on smart devices, which are unable to match these requirements for high CPU power and battery life [2]. Mobile edge computing (MEC) has thus emerged as a new paradigm to move computing away from centralized cloud infrastructure and toward the logical edges of the network, where base stations (BSs) with MEC servers installed, known as MEC-enabled BSs, are able to help mobile terminals (MTs) overcome their resource shortage [1]. At the same time, mobile edge computing helps large-scale environmental monitoring, is of great help in the prevention and control of natural disasters, and effectively improves the response efficiency of emergency communication networks [3].
In the realm of mobile edge computing (MEC), much attention has been devoted to task offloading. However, an often-overlooked aspect is the possibility of caching input data on the base station (BS), which can be reused by task offloading activities, leading to reduced costs associated with input data uploading [4]. Although repetitive task offloading operations for a mobile terminal (MT) are rare in practice, it is worth noting that the tasks generated by the MT are never precisely identical and always have distinct parameters. Nevertheless, the service components, such as databases, libraries, and programs that support task processing, can be shared and reused across the tasks of the MT. To address this issue, we propose a potential solution called service caching (similar to [5]), which entails caching essential service components on BSs. By enabling service sharing among the tasks of an MT, service caching effectively minimizes the amount of data required for uploading during the task offloading process, unlike data caching. However, the storage capacity of a BS is limited, allowing only a small number of services to be cached. Therefore, to optimize the system’s overall utility, it is vital to determine which services can be cached on the BS. Recent research has focused on jointly designing service caching, computing offloading, and resource allocation in MEC [1,6,7,8].
In addition, the standard cellular network is limited by constrained spectrum resources and cannot simultaneously support numerous devices offloading their computational workloads to MEC servers as the number of MTs increases. To address this challenge, HetNets, an emerging technology that deploys small cells (such as picocells and femtocells) atop macrocells, have been proposed to increase spectral efficiency and system throughput by allowing small cells to reuse the subchannel resources of macrocells. Effective interference control and resource allocation strategies are crucial in HetNets due to the presence of co-tier and cross-tier interference. Recently, HetNets resource allocation, computation offloading, and service caching have undergone collaborative design work, frequently employing MEC to reduce system latency, increase energy efficiency, or improve prediction accuracy (usually evaluated with computational performance metrics such as RMSE [2,9,10,11]).
Intuitively, HetNets with MEC can effectively support computation offloading with service caching thanks to well-designed offloading methods. However, in reality, MEC services are provided by operators, and MTs must pay for the service. MTs are self-interested and want to maximize their own profit, which makes it unrealistic to expect them to slavishly follow control instructions from MEC servers. For this reason, it is crucial to design an effective incentive mechanism. Existing incentive mechanisms (such as the Stackelberg game in [12] and the auction in [13]) allocate either cache, computation, or communication resources. However, no existing mechanism optimizes multiple resources at the same time. This paper is the first to use contract theory (a potent paradigm from economics [14]) to simultaneously optimize multiple resources. The interaction between a MEC server and MTs with different classes of service-cache-based computation tasks is formulated as a contract problem. The formulated problem maximizes the utility of the MEC server by optimizing the CPU cycle, transmission power, caching decision, and reward, while ensuring non-negative and maximum utility for each MT that selects the contract item belonging to its own type. The non-convex objective function and the complex, non-convex constraints make the contract problem difficult to solve. By applying variable transformation and constraint reduction techniques, we transform and simplify the original contract problem and ultimately solve it with a greedy algorithm.
The following is a list of this paper’s main contributions:
  • Taking into account the information asymmetry where the MEC service provider is unaware of the MT’s transmission power and preference for delay, we apply contract theory to devise an incentive mechanism for the MEC service provider. Specifically, the four-dimensional contract item (including CPU cycle, transmission power, caching decision, and reward) can maximize the MEC service providers’ revenue gains while satisfying feasible individual rationality (IR) and incentive compatibility (IC) constraints for each MT. It is noted that IR and IC constraints can ensure the non-negative and maximum utility for each MT when they select the contract items that belong to their own type.
  • We design a greedy algorithm to solve the formulated complex, non-convex contract problem. Because of the non-convex objective function and the complex, non-convex constraints, we first transform and simplify the contract problem through variable transformation and constraint reduction, and then solve it using a greedy algorithm.
  • The numerical results show that the proposed incentive mechanism has great advantages in improving the utility of the MEC service provider compared to other baseline mechanisms. In addition, we verify the validity of the proposed incentive mechanism and the greedy algorithm.
The rest of this paper is structured as follows. Related work on the optimization of service caching, computation offloading, and resource allocation in HetNets with MEC is presented in Section 2. The system model is presented in Section 3. The problem formulation and solution are presented in Section 4. Section 5 presents the performance evaluation. Section 6 concludes this paper.

2. Related Work

Mobile edge computing (MEC) is a promising paradigm for addressing resource limitations in various aspects [15]. To mitigate computation, communication, and storage resource shortages, a variety of approaches have been proposed, including computation offloading and content caching strategies [16,17,18,19].
In [20], a stochastic mixed-integer nonlinear programming approach is presented, which optimizes decisions related to task offloading, wireless resource allocation, and elastic computing resource scheduling jointly. Mobile applications produce a vast amount of data at high rates, which can strain the backhaul link and mobile core network. Edge caching is an effective solution for storing and caching data at the mobile edge [21]. Moreover, it can handle spikes in mobile data traffic and enhance system performance [22,23]. Coordinated allocation of processing resources and communication between mobile devices and MEC servers is crucial to optimize system performance in heterogeneous networks. While recent efforts have been made to jointly design compute offloading and caching in MEC systems [1,6,7,8,11], the issue of edge server service utility rationalization has been neglected. Therefore, further research is required to optimize offloading and caching in heterogeneous network computing systems.
Most existing studies on incentive mechanisms for mobile edge computing (MEC) aim to encourage all parties to participate in the task-offloading system. Some recent studies have employed game theory to address resource allocation issues in heterogeneous networks. For instance, Li et al. [24] use deep reinforcement learning and game theory to maximize the MEC server’s profit while optimizing the choice of the MEC server, the size of the offloaded data, and the cost of the MEC computing service to prevent end users from abusing the system’s computing resources. An evolutionary-based MEC offloading system was proposed by Xia et al. [25], who also created an online distributed optimization method based on game theory and perturbed Lyapunov optimization theory. This algorithm decides how to manage battery energy as well as offload heterogeneous tasks and allocate compute resources as needed. However, these approaches only examine the joint optimization of computational and communication resources in the MEC system, neglecting cache resources. To address this, Tefera et al. [26] offer a congestion-aware distributed computing, caching, and communication architecture for MEC networks, using a deep reinforcement learning-based adaptive scheduling method based on noncooperative game theory to maximize each smart end-user device’s utility. Yan et al. [27] propose a MEC service incentive mechanism to control task offloading behavior and optimize service caching decisions, using a two-stage incomplete information dynamic game model and a simple algorithm to jointly optimize service providers’ pricing and service caching. Tutuncuoglu et al. [12] address the issue of offloading caches and pricing edge computing applications in a dynamic setting, representing the issue as a Stackelberg game. They propose a Bayesian Gaussian process bandit algorithm to learn the ideal pricing for a cache placement and greedy heuristic based on Gaussian process approximation to compute the cache placement in the situation of inadequate information, leading to significant performance improvements with low overhead.
Although game theory has been effective in addressing resource allocation, it often neglects user privacy. To address this, some researchers have used auction techniques. Le et al. [13] proposed a random auction technique to optimize social utility by simulating bandwidth transactions between suppliers and tenants. However, this approach has limitations, as it cannot foresee user demand or server failure, and the improvement in social welfare is not evident. In [28], the authors propose a dual-auction architecture that enables heterogeneous MECs to perform job offloading across edges without the use of cloud-centric servers. This approach benefits both social welfare and computation efficiency, but it does not consider resource allocation issues for communication and caching. Zhou et al. [29] used reverse auction as an incentive mechanism to model incentive-driven D2D offloading and content caching processes with the goal of maximizing cost savings for content service providers (CSPs). They proposed a content caching method based on deep reinforcement learning (DRL) and standard Vickrey–Clarke–Groves (VCG)-based payment rules to effectively save overhead and improve offloading efficiency.
Although the aforementioned mechanisms take user privacy into account and reflect the individual rationality and incentive compatibility of participants, auctions are conducted periodically, and each new transaction requires frequent interactions, which undermines the efficiency of the incentive mechanism. Because the supply chain consists of numerous service-providing operators and customers [30], its participants can further increase transaction efficiency by drawing up contract-based procedures in the pre-transaction period, thereby avoiding frequent interactions with users. Accordingly, in [31], the authors first provide a contract-based incentive mechanism that enables MEC operators to encourage more temporary ECNs to join the MEC network before addressing the difficulty of allocating computational resources between ECNs and CSSs. Since CSSs hold private information, the problem is represented as a Bayesian matching game with externalities, and the proposed iterative matching method has the potential to significantly raise social welfare.
Another line of research related to our work is the allocation of resources in heterogeneous networks based on other game theories. Wu et al. [32] proposed a new distributed computation offloading scheme for heterogeneous MECs, formulating the problem as an optimization problem that accounts for inter-user interference and the dynamic allocation of computation and communication resources when mobile devices (MDs) access different mobile stations (MSs). The offloading problem is modeled as an ordinal potential game, and the DOT computation offloading algorithm, which uses the finite improvement property of ordinal potential games, is developed to minimize the overall system delay and energy consumption. Moreover, since the freshness of information is very important in MEC applications, Yang et al. [33] considered the age of information (AoI) jointly with resource allocation. To obtain AoI metrics, the authors first created a system model that takes active probability into account. They then established an AoI-based channel access optimization problem and solved it using the ordinal potential game (OPG) approach. Finally, they proposed a learning algorithm called distributed channel access policy determination (DCASD) to choose the channel access policy. In [34], a dynamic and decentralized resource allocation technique based on evolutionary game theory is proposed to handle job offloading from different users to multiple edge nodes and central clouds; replicator dynamics are utilized to model resource competition among consumers. In [35], the authors propose an improved Gale–Shapley algorithm based on matching game theory for the issues of large resource differences and multiple-user service quality requirements in heterogeneous cellular networks. This algorithm can successfully reduce user service latency and enhance system performance in heterogeneous networks.
As shown in Table 1, we summarize the related work on optimizing network resources through incentive mechanisms. We highlight the optimization strategy in this paper that uses contract theory as an incentive mechanism approach to jointly optimize CPU cycles, transmission power, and cache decisions with latency as the optimization objective. This emphasizes the need for an effective incentive mechanism that can safeguard the private information of MTs in HetNets with MEC.

3. System Model

In this section, we first present MEC in a HetNets environment in which computational tasks are offloaded: some are transferred to the MEC servers, and others are processed at the end device. Following that, the utilities of MTs and MEC servers are described in more detail.
The system model shown in Figure 1 depicts a MEC system in HetNets that consists of an edge pool, denoted as $\mathcal{K} = \{1, 2, \ldots, k, \ldots, K\}$, and a number of MTs, denoted as $\mathcal{N} = \{1, 2, \ldots, n, \ldots, N\}$. The system includes $K$ service programs that correspond to service-dependent tasks, and if a requested task's associated service program has been cached at the edge pool or MTs, MTs can offload a portion of the computing tasks through wireless communication, such as cellular vehicle-to-everything (C-V2X), to the edge pool based on their trajectory and the location of the cached service program. The edge pool is composed of interconnected MEC servers that balance various computing and caching resources, but its limited computing and caching resources can only accommodate a small number of service programs being cached at the same time. Therefore, an AI-based management controller, or agent, is commonly deployed at the edge pool, which collects information from MTs and edge servers and makes decisions on service caching, task offloading, and resource allocation.
Next, we present in detail the utilities of the MEC servers and the MTs in the following scenario. A MEC service provider uses MEC servers to offer $K$ classes of computational tasks to $N$ MTs, with each class of computational task corresponding to a separate service cache. At the same time, we consider the more realistic scenario in which there is cross-terminal interference during task offloading. We define the number of MTs belonging to the class-$k$ ($k \in \mathcal{K}$) computational tasks as $N_k$, so that $\sum_{k \in \mathcal{K}} N_k = N$. MTs offload computational tasks to the MEC servers in order to meet task completion latency requirements, given the limited computing power of the MTs. The offloaded computational tasks require service caching, and the edge server can choose to store the service caches on the edge side or download them directly from the cloud center. We denote an MT for the class-$k$ computational tasks as $n_k \in \mathcal{N}_k$. The task of MT $n_k \in \mathcal{N}_k$ is denoted as $(\theta_{k,n}, D_k, s_k, \eta_k, T_k^{\max})$, where $\theta_{k,n}$ denotes the sensitivity of the MT to the latency of a type-$k$ computation task, $D_k$ is the data size of a type-$k$ computation task, $s_k$ is the service cache size required by a type-$k$ computation task, $\eta_k$ denotes the number of CPU cycles required per bit of data for a type-$k$ computational task, and $T_k^{\max}$ is the maximum tolerated latency of a type-$k$ computational task. Here, we assume that the maximum tolerable latency is the same for all offloading tasks of type $k$. It is noted that MEC servers belong to MEC service providers, and in this paper we use the terms MEC service provider and MEC server interchangeably. For ease of reference, Table 2 summarizes the key notations.
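For concreteness, the following minimal Python sketch (our own illustration, not part of the original model) encodes the task tuple $(\theta_{k,n}, D_k, s_k, \eta_k, T_k^{\max})$; the field names and example values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TypeKTask:
    """Task tuple (theta_{k,n}, D_k, s_k, eta_k, T_k^max) of MT n_k."""
    theta: float   # delay sensitivity of the MT for the type-k task
    D: float       # input data size of the type-k task (bits)
    s: float       # required service cache size (bits)
    eta: float     # CPU cycles required per bit of data
    T_max: float   # maximum tolerated latency (s)

# illustrative instance; the values are placeholders, not from the paper
task = TypeKTask(theta=5.0, D=1e6, s=2e6, eta=500.0, T_max=2.0)
```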

3.1. Utility of MT

The MT $n_k$ offloading task consists of four phases: in the first step, the MT uploads and transmits the computation task to the edge server; in the second, the edge server obtains the service cache either locally or from the cloud; in the third, the edge server executes the computation task; and in the fourth, the edge server transmits the computation result back to the MT. The fourth step takes very little time because the computation result is very small.
The upload transmission time of the first step is $\frac{D_k}{R_{k,n}}$, where $R_{k,n}$ is the MT's wireless transmission rate, defined as

$$R_{k,n} = B \log \left( 1 + \frac{d_{k,n} |h_{k,n}|^2}{\sigma^2 + \sum_{b=1, b \neq n}^{N_k} d_{k,b} |h_{k,b}|^2 + \sum_{a=1, a \neq k}^{K} \sum_{n'=1}^{N_a} d_{a,n'} |h_{a,n'}|^2} \right)$$
where $\sigma^2$ is the Gaussian white noise power, $d_{k,n}$ is the transmission power, and $|h_{k,n}|^2$ is the channel power gain between the edge server and the MT $n_k$. The second step's time to acquire the service cache is $\frac{\alpha_k s_k}{R_k}$, where $\alpha_k = 1$ denotes that the service caching is located in the MTs and $\alpha_k = 0$ denotes that it is located in the edge server. The third step's computation time is $\frac{D_k \eta_k}{f_{k,n}}$, where $f_{k,n}$ is the edge server's allocated CPU cycle frequency. Based on the preceding steps, we define the satisfaction function of MT $n_k$ as $\theta_{k,n} \left( T_k^{\max} - \frac{D_k}{R_{k,n}} - \frac{\alpha_k s_k}{R_k} - \frac{D_k \eta_k}{f_{k,n}} \right)$. Offloading the task to the edge server incurs a cost for MT $n_k$, defined as $p_{k,n}$. Therefore, the utility $u_{k,n}$ of the MT $n_k$ is defined as follows.

$$u_{k,n} = \theta_{k,n} \left( T_k^{\max} - \frac{D_k \eta_k}{f_{k,n}} - \frac{D_k}{R_{k,n}} - \frac{\alpha_k s_k}{R_k} \right) - p_{k,n}$$
The channel gain of each MT, as well as the MT's preference $\theta$ for the offloading task, are unknown to the edge server. For the same class of offloading task, we first simplify the wireless transmission rate $R_{k,n}$ by assuming the same power and gain for each MT. Hence, $R_k$ can be written as follows.

$$R_k = B \log \left( 1 + \frac{d_k |h_k|^2}{\sigma^2 + d_k |h_k|^2 (N_k - 1) + \sum_{a=1, a \neq k}^{K} N_a d_a |h_a|^2} \right)$$
Following that, we investigate the contract design problem for the class k of the computing task.
Definition 1.
There are $N_k$ MTs with offload task type $k$ within the edge server's range. The $N_k$ MTs can be classified into different latency-sensitivity categories based on their type, defined as $\Theta_k = \{\theta_{k,i} : 1 \le i \le I_k\}$. Hence, there are $I_k$ classes of MTs in total within the edge server's communication range. Each category's probability is $q_{k,i}$, and its corresponding number of MTs is $N_k q_{k,i}$, i.e., $\sum_{i \in I_k} N_k q_{k,i} = N_k$. The MT types form a non-degenerate sequence arranged as

$$0 < \theta_{k,1} \le \theta_{k,2} \le \cdots \le \theta_{k,I_k}$$
A higher $\theta$ indicates that computational tasks should be offloaded to the edge server as soon as possible. We designate the contract item for an MT of type $i$ as $(f_{k,i}, p_{k,i}, d_k, \alpha_k)$. The edge server offers different contracts based on the $\theta$ of the MTs rather than providing the same contract to all MTs. MTs have the option to accept or reject any contract. If an MT rejects every contract, we assume that the MT signs the contract $(0, 0)$.
To simplify the notation, we denote the MT within the edge server with offload task type $k$ and time-sensitivity type $i$ as $(k, i)$. The utility of an MT of type $(k, i)$ is then redefined as follows.

$$u_{k,i} = \theta_{k,i} \left( T_k^{\max} - \frac{D_k \eta_k}{f_{k,i}} - \frac{D_k}{R_k} - \frac{\alpha_k s_k}{R_k} \right) - p_{k,i} = \theta_{k,i} G(f_{k,i}) - p_{k,i}$$

where $G(f_{k,i}) = T_k^{\max} - \frac{D_k \eta_k}{f_{k,i}} - \frac{D_k}{R_k} - \frac{\alpha_k s_k}{R_k}$.
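As an illustration of these definitions, the sketch below (ours, with assumed parameter values and a base-2 logarithm for the rate, which the paper leaves unspecified) computes the simplified rate $R_k$, the time gain $G(f)$, and the MT utility $u_{k,i} = \theta_{k,i} G(f_{k,i}) - p_{k,i}$.

```python
import math

def rate_k(B, d, h2, sigma2, N_k, cross_interf):
    """Simplified uplink rate R_k: identical power d and channel gain
    |h|^2 for the N_k MTs of class k; cross_interf is the cross-class
    term sum_{a != k} N_a * d_a * |h_a|^2."""
    sinr = d * h2 / (sigma2 + d * h2 * (N_k - 1) + cross_interf)
    return B * math.log2(1 + sinr)

def time_gain(f, D, s, eta, T_max, R, alpha):
    """G(f) = T_max - D*eta/f - D/R - alpha*s/R."""
    return T_max - D * eta / f - D / R - alpha * s / R

def mt_utility(theta, f, p, D, s, eta, T_max, R, alpha):
    """Utility of a type-(k, i) MT: u_{k,i} = theta * G(f) - p."""
    return theta * time_gain(f, D, s, eta, T_max, R, alpha) - p
```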

3.2. Utility of MEC Servers

The MEC service provider bears the expense of finishing the computation tasks that the MTs offload. In order to coordinate with other MTs to lessen the effects of interference and to enable the MT to send data with power $d_k$, the MEC service provider pays a unit cost of $c_{k,n}^1$ for the class-$k$ offloaded computation tasks, i.e., the cost is $c_{k,n}^1 d_k$. When the service cache is located in the MTs, its cost, borne by the MEC server, is specified as $c_k^2 \alpha_k d_k$, where $c_k^2$ stands for the corresponding unit cost expenditure. When the service is cached at the MEC server, its cost is specified as $c_k^3 (1 - \alpha_k) s_k$, where $c_k^3$ stands for the cost per unit of storage. The cost of computing to complete a task is $c_k^4 D_k \eta_k \kappa_{k,i} f_{k,i}^2$, where $c_k^4$ denotes the unit cost of computational energy consumption, $\eta_k$ is the computing power needed by MT $n_k$ to process one bit of data, and $\kappa_{k,i}$ is the effective switched capacitance. Thus, the utility obtained by the MEC server for the class-$k$ offloaded computation tasks is as follows.

$$U_k = \sum_{i \in I_k} N_k q_{k,i} \left( p_{k,i} - c_{k,i}^1 d_k - c_k^2 \alpha_k d_k - c_k^3 (1 - \alpha_k) s_k - c_k^4 D_k \eta_k \kappa_{k,i} f_{k,i}^2 \right)$$
As a result, the utility of the MEC server over all classes of computational tasks is as follows.

$$U = \sum_{k \in \mathcal{K}} U_k$$
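A corresponding sketch (ours; the signature and list-based parameterization are assumptions) of the server-side utility $U_k$ defined above:

```python
def server_utility_k(p, f, d_k, alpha_k, Nq, c1, c2, c3, c4,
                     D, s, eta, kappa):
    """U_k: per-type revenue p[i] minus the power-coordination, caching,
    and computation-energy costs, weighted by the N_k * q_{k,i} MTs of
    each type. p, f, Nq, c1, kappa are per-type lists; the rest scalars."""
    return sum(Nq[i] * (p[i]
                        - c1[i] * d_k
                        - c2 * alpha_k * d_k
                        - c3 * (1 - alpha_k) * s
                        - c4 * D * eta * kappa[i] * f[i] ** 2)
               for i in range(len(p)))
```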

4. Problem Formulation and Solution

4.1. Problem Formulation

In order to optimize the MEC servers' utility, the MEC service provider offers incentives for multiple MTs to offload tasks to it. The MT is the agent that selects the contract item best fitting its type, and the MEC service provider is the principal who creates the contract. The contract of the MEC servers is denoted as $\Phi_k = \{(f_{k,i}, p_{k,i}, d_k, \alpha_k), i \in I_k\}$, where $(f_{k,i}, p_{k,i}, d_k, \alpha_k)$ is the item designed for MTs of type $(k, i)$. Each MT chooses the contract item that suits its type, satisfying both the individual rationality (IR) and incentive compatibility (IC) constraints.
While ensuring a non-negative utility for each MT, the IR condition promotes MT participation. The IR condition for MTs of type $(k, i)$ can be expressed as

$$u_{k,i}(f_{k,i}, p_{k,i}, d_k, \alpha_k) \ge 0, \quad 1 \le i \le I_k$$
To encourage the MT to offload tasks to the edge server, the MT must pay less than the gain it obtains from offloading. The MT will decide not to offload the work and carry out the computation locally if $u_{k,i}(f_{k,i}, p_{k,i}, d_k, \alpha_k) < 0$. If the MT of type $i$ chooses the contract item $(f_{k,j}, p_{k,j}, d_k, \alpha_k)$ intended for the MT of type $j$, the actual utility it receives is

$$u_{k,i}(f_{k,j}, p_{k,j}, d_k, \alpha_k) = \theta_{k,i} G(f_{k,j}) - p_{k,j}$$
As established above, our goal is to create a contract in which an MT of type $i$ chooses the item $(f_{k,i}, p_{k,i}, d_k, \alpha_k)$ over all other alternatives, i.e., the item $(f_{k,i}, p_{k,i}, d_k, \alpha_k)$ yields the greatest utility for MTs of type $i$. The following conditions must be met for the contract to qualify as a self-revealing contract.
The IC condition ensures that each MT selects the contract item that best meets its demands and thereby maximizes its utility. The IC condition for an MT of type $(k, i)$ is

$$u_{k,i}(f_{k,i}, p_{k,i}, d_k, \alpha_k) \ge u_{k,i}(f_{k,i'}, p_{k,i'}, d_k, \alpha_k), \quad 1 \le i, i' \le I_k$$

The IC and IR constraints are the fundamental prerequisites for guaranteeing contract incentive compatibility. Several additional requirements must be met in addition to the IC and IR constraints.
Theorem 1.
For the $k$-th computation task, we have $f_{k,i} \ge f_{k,j}$ for each feasible contract $(p_{k,i}, f_{k,i})$ if and only if $\theta_{k,i} \ge \theta_{k,j}$. We have $f_{k,i} = f_{k,j}$ if and only if $\theta_{k,i} = \theta_{k,j}$.
Proof. 
Please refer to Appendix A.1 for proof.    □
Definition 2.
Monotonicity: for the $k$-th computation task and any feasible contract $\{p, f\}$, the required computational resources $f$ satisfy

$$0 \le f_{k,1} < \cdots < f_{k,i} < \cdots < f_{k,I_k}$$
Monotonicity implies that MTs with higher latency sensitivity require more processing resources. Starting from the nature of monotonicity, we can obtain the following proposition.
Proposition 1.
As a strictly increasing function of $f$, $p$ intuitively satisfies the following condition.

$$0 \le p_{k,1} < \cdots < p_{k,i} < \cdots < p_{k,I_k}$$
According to Proposition 1, incentive-compatible contracts cost more if they have a large computational capacity, and vice versa.
Theorem 2.
For the $k$-th computation task, the utility of each type of MT under any feasible contract $\{p, f\}$ must satisfy

$$0 \le u_{k,1} < \cdots < u_{k,i} < \cdots < u_{k,I_k}$$
Proof. 
Please refer to Appendix A.2 for proof.    □
Now, we have $u_{k,i} > u_{k,j}$ when $\theta_{k,i} > \theta_{k,j}$. So, when $0 < \theta_{k,1} < \theta_{k,2} < \cdots < \theta_{k,I_k}$, we have $0 \le u_{k,1} < \cdots < u_{k,i} < \cdots < u_{k,I_k}$.
As a result, MTs of higher types obtain more utility than MTs of lower types. A simple conclusion can be drawn from the IC requirement and the two theorems demonstrated above. If a higher-type MT selects a contract intended for a lower-type MT, the lower reward decreases its utility, even though the edge server allocates fewer computational resources to the tasks it offloads. If a lower-type MT chooses a contract made for a higher-type MT, the cost outweighs the benefit, since the gain from the larger computational resources cannot offset the expense. The MT achieves maximum utility only when it selects the contract that matches its own type. We can thus guarantee that the contract is self-revealing.
In the case of information asymmetry, the service provider, acting as the contract designer, must create the contract $\Phi_k$ for each MT in a way that maximizes its utility while fulfilling the IC and IR conditions. Consequently, the problem is mathematically formulated as the following P1.
$$\begin{aligned} \max_{p_{k,i},\, f_{k,i},\, d_k,\, \alpha_k} \quad & U \\ \text{s.t.} \quad & C1: \text{(IC)}, \qquad C2: \text{(IR)}, \\ & C3: p_{k,i} \ge 0, \ 1 \le i \le I_k, \ k \in \mathcal{K}, \\ & C4: f_{k,i} \ge 0, \ 1 \le i \le I_k, \ k \in \mathcal{K}, \\ & C5: d_k \ge 0, \ k \in \mathcal{K}, \\ & C6: \alpha_k \in \{0, 1\}, \ k \in \mathcal{K}, \\ & C7: \sum_{k \in \mathcal{K}} \sum_{i \in I_k} N_k q_{k,i} f_{k,i} \le F^{\max}, \\ & C8: \sum_{k \in \mathcal{K}} (1 - \alpha_k) s_k < S^{\max}. \end{aligned}$$
where C7 indicates that the total computing power used to carry out the computational tasks offloaded by the MTs cannot exceed the upper limit of the edge server's computing power, and C8 indicates that the total storage space used to store the service caches cannot exceed the upper limit of the MEC server's storage space. The original optimization problem P1 is difficult to solve because neither the objective function nor the constraints are concave.
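The IR and IC constraints can be checked mechanically for a candidate contract; the following sketch (our own helper, not from the paper) verifies both conditions for one task class.

```python
def contract_is_feasible(theta, f, p, G, tol=1e-9):
    """Check IR and IC for one task class k: each type i must obtain
    non-negative utility from its own item (IR) and must not prefer
    any other item (IC). theta, f, p are indexed by type; G maps an
    allocation f[j] to the time gain G(f[j])."""
    I = len(theta)
    # utility matrix: u[i][j] is the utility type i gets from item j
    u = [[theta[i] * G(f[j]) - p[j] for j in range(I)] for i in range(I)]
    ir = all(u[i][i] >= -tol for i in range(I))
    ic = all(u[i][i] >= u[i][j] - tol
             for i in range(I) for j in range(I))
    return ir and ic
```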

4.2. Solution

We first simplify the original contract problem by using the following three theorems and then design an algorithm to solve the simplified contract problem.
We will use the following theorem for reducing IR constraints.
Theorem 3.
The $N_k$ IR constraints are reduced to a single IR constraint of type 1.
Proof. 
From P1, we can see that a total of $N_k$ IR constraints must be satisfied. However, we know from Definition 1 that $0 < \theta_{k,1} \le \theta_{k,2} \le \cdots \le \theta_{k,I_k}$. By using the IC constraint, we then have

$$\theta_{k,i} G(f_{k,i}) - p_{k,i} \ge \theta_{k,i} G(f_{k,1}) - p_{k,1} \ge \theta_{k,1} G(f_{k,1}) - p_{k,1}$$
   □
Accordingly, satisfying the IR constraint of the type-1 MT automatically ensures that the remaining IR constraints hold. Therefore, only the first IR constraint needs to be kept, and the others can be reduced.
We will use the following two theorems for reducing IC constraints.
Theorem 4.
Downward incentive constraints (DICs), referring to the IC constraints between type $i$ and types $j \in \{1, \ldots, i-1\}$, are reduced to local downward incentive constraints (LDICs), referring to the IC constraints between type $i$ and type $(i-1)$. The mathematical expression is as follows:

$$\theta_{k,i} G(f_{k,i}) - p_{k,i} \ge \theta_{k,i} G(f_{k,j}) - p_{k,j}, \quad I_k \ge i > j \ge 1$$
Proof. 
Please refer to Appendix A.3 for proof.    □
Theorem 5.
Upward incentive constraints (UICs), referring to the IC constraints between type $i$ and types $j \in \{i+1, \ldots, N_k\}$, are reduced to local upward incentive constraints (LUICs), referring to the IC constraints between type $i$ and type $(i+1)$. The mathematical expression is as follows:

$$\theta_{k,i} G(f_{k,i}) - p_{k,i} \ge \theta_{k,i} G(f_{k,j}) - p_{k,j}, \quad 1 \le i < j \le N_k$$
Proof. 
Please refer to Appendix A.4 for proof.    □
Based on Theorems 3–5, P1 is reduced to P2 as follows.
$$\begin{aligned} \max_{p_{k,i},\, f_{k,i},\, d_k,\, \alpha_k} \quad & U \\ \text{s.t.} \quad & C2: \theta_{k,i} G(f_{k,i}) - p_{k,i} = \theta_{k,i} G(f_{k,i-1}) - p_{k,i-1}, \\ & C3: p_{k,i} \ge 0, \ 1 \le i \le I_k, \ k \in \mathcal{K}, \\ & C4: f_{k,i} \ge 0, \ 1 \le i \le I_k, \ k \in \mathcal{K}, \\ & C5: d_k \ge 0, \ k \in \mathcal{K}, \\ & C6: \alpha_k \in \{0, 1\}, \ k \in \mathcal{K}, \\ & C7: \sum_{k \in \mathcal{K}} \sum_{i \in I_k} N_k q_{k,i} f_{k,i} \le F^{\max}, \\ & C8: \sum_{k \in \mathcal{K}} (1 - \alpha_k) s_k < S^{\max}. \end{aligned}$$
Based on the C2 constraints in P2, we are able to derive
$$p_{k,i} = \sum_{z=1}^{i} \Delta_z + \theta_{k,1} G(f_{k,1})$$

where $\Delta_1 = 0$ and $\Delta_z = \theta_{k,z} G(f_{k,z}) - \theta_{k,z} G(f_{k,z-1})$, $z \in \{1, \ldots, i\}$, $i \in \{1, \ldots, I_k\}$.
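Because the LDICs bind in C2 of P2, the rewards can be recovered recursively from the allocations. A minimal sketch of this recursion, assuming `theta` is sorted in ascending type order:

```python
def recover_prices(theta, f, G):
    """Recover p_{k,i} = sum_{z<=i} Delta_z + theta_1 * G(f_1),
    with Delta_1 = 0 and Delta_z = theta_z * (G(f_z) - G(f_{z-1}))."""
    p = [theta[0] * G(f[0])]                 # type 1: IR binds, Delta_1 = 0
    for z in range(1, len(theta)):
        p.append(p[-1] + theta[z] * (G(f[z]) - G(f[z - 1])))
    return p
```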
By substituting the above equation into P2, we can obtain a transformed P3.
$$\begin{aligned} \max_{f_{k,i},\, d_k,\, \alpha_k} \quad & U = \sum_{k \in \mathcal{K}} \sum_{i \in I_k} U_{k,i} \\ \text{s.t.} \quad & C1: f_{k,i} \ge 0, \ 1 \le i \le I_k, \ k \in \mathcal{K}, \\ & C2: d_k \ge 0, \ k \in \mathcal{K}, \\ & C3: \alpha_k \in \{0, 1\}, \ k \in \mathcal{K}, \\ & C4: \sum_{k \in \mathcal{K}} \sum_{i \in I_k} N_k q_{k,i} f_{k,i} \le F^{\max}, \\ & C5: \sum_{k \in \mathcal{K}} (1 - \alpha_k) s_k < S^{\max}. \end{aligned}$$
where
$$U_{k,i} = \theta_{k,i} G(f_{k,i}) \sum_{a=i}^{I_k} N_k q_{k,a} - \theta_{k,i+1} G(f_{k,i}) \sum_{b=i+1}^{I_k} N_k q_{k,b} - N_k q_{k,i} \left( c_{k,i}^1 d_k + c_k^2 \alpha_k d_k + c_k^3 (1 - \alpha_k) s_k \right) - N_k q_{k,i} c_k^4 D_k \eta_k \kappa_{k,i} f_{k,i}^2, \quad 0 < i < I_k;$$
and
$$U_{k,I_k} = \theta_{k,i} G(f_{k,i}) N_k q_{k,i} - N_k q_{k,i} \left( c_{k,i}^1 d_k + c_k^2 \alpha_k d_k + c_k^3 (1 - \alpha_k) s_k \right) - N_k q_{k,i} c_k^4 D_k \eta_k \kappa_{k,i} f_{k,i}^2, \quad i = I_k$$
The objective function in P3 can be expressed as $U = \sum_{k \in \mathcal{K}} \sum_{i \in I_k} U_{k,i}$. Since the variables $f_{k,i}, i \in I_k, k \in \mathcal{K}$ are independent of $d_k, k \in \mathcal{K}$ and $\alpha_k, k \in \mathcal{K}$, the optimization problem P3 is split into the following two subproblems. The subproblem P3.1 is the following:
$$\begin{aligned} \max_{f_{k,i}} \quad & U_1 = \sum_{k \in \mathcal{K}} \sum_{i \in I_k} U_{1,k,i} \\ \text{s.t.} \quad & C1: f_{k,i} \ge 0, \ 1 \le i \le I_k, \ k \in \mathcal{K}, \\ & C4: \sum_{k \in \mathcal{K}} \sum_{i \in I_k} N_k q_{k,i} f_{k,i} \le F^{\max} \end{aligned}$$
where
$$U_{1,k,i} = \left( \theta_{k,i} \sum_{a=i}^{I_k} N_k q_{k,a} - \theta_{k,i+1} \sum_{b=i+1}^{I_k} N_k q_{k,b} \right) \left( T_k^{\max} - \frac{D_k \eta_k}{f_{k,i}} \right) - N_k q_{k,i} c_k^4 D_k \eta_k \kappa_{k,i} f_{k,i}^2, \quad 0 < i < I_k,$$
and
$$U_{1,k,I_k} = \theta_{k,i} \left( T_k^{\max} - \frac{D_k \eta_k}{f_{k,i}} \right) N_k q_{k,i} - N_k q_{k,i} c_k^4 D_k \eta_k \kappa_{k,i} f_{k,i}^2, \quad i = I_k.$$
The subproblem P3.2 is the following:
$$\begin{aligned} \max_{d_k,\, \alpha_k} \quad & U_2 = \sum_{k \in \mathcal{K}} \sum_{i \in I_k} U_{2,k,i} \\ \text{s.t.} \quad & C2: d_k \ge 0, \ k \in \mathcal{K}, \\ & C3: \alpha_k \in \{0, 1\}, \ k \in \mathcal{K}, \\ & C5: \sum_{k \in \mathcal{K}} (1 - \alpha_k) s_k < S^{\max}. \end{aligned}$$
where
$$U_{2,k,i} = \left( \theta_{k,i} \sum_{a=i}^{I_k} N_k q_{k,a} - \theta_{k,i+1} \sum_{b=i+1}^{I_k} N_k q_{k,b} \right) \left( T_k^{\max} - \frac{D_k}{R_k} - \frac{\alpha_k s_k}{R_k} \right) - N_k q_{k,i} \left( c_{k,i}^1 d_k + c_k^2 \alpha_k d_k + c_k^3 (1 - \alpha_k) s_k \right), \quad 0 < i < I_k,$$
and
$$U_{2,k,I_k} = \theta_{k,i} \left( T_k^{\max} - \frac{D_k}{R_k} - \frac{\alpha_k s_k}{R_k} \right) N_k q_{k,i} - N_k q_{k,i} \left( c_{k,i}^1 d_k + c_k^2 \alpha_k d_k + c_k^3 (1 - \alpha_k) s_k \right), \quad i = I_k.$$
Theorem 6.
The subproblem P3.1 is a convex problem.
Proof. 
Note that $U_1$ is composed of a concave inverse-proportional term and a concave negative quadratic term, and a positive weighted sum of concave functions remains concave. Moreover, the constraints C1 and C4 are affine sets, so the feasible region is a convex set. Hence, the subproblem P3.1 is a convex problem.    □
Therefore, we can leverage standard convex optimization tools [36] to solve it and obtain $f_{k,i}$.
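As one possible realization (a sketch, not the authors' implementation), subproblem P3.1 can be solved per task class with a general-purpose NLP solver such as SciPy's SLSQP; `coef`, the lower bound `f_min`, and the starting point are our own assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def solve_p31(coef, Nq, c4, D, eta, kappa, T_max, F_max):
    """Maximize, for one task class k,
    U_1 = sum_i coef[i]*(T_max - D*eta/f_i) - Nq[i]*c4*D*eta*kappa[i]*f_i**2
    subject to sum_i Nq[i]*f_i <= F_max and f_i >= f_min > 0.
    coef[i] stands for the bracketed type-weight term in U_{1,k,i}."""
    coef, Nq, kappa = map(np.asarray, (coef, Nq, kappa))
    I = len(coef)

    def neg_U1(f):
        return -np.sum(coef * (T_max - D * eta / f)
                       - Nq * c4 * D * eta * kappa * f ** 2)

    cons = [{'type': 'ineq', 'fun': lambda f: F_max - Nq @ f}]
    f_min = 1e-3                              # keep f away from 0 (1/f term)
    f0 = np.full(I, F_max / (2 * Nq.sum()))   # feasible starting point
    res = minimize(neg_U1, f0, method='SLSQP',
                   bounds=[(f_min, None)] * I, constraints=cons)
    return res.x, -res.fun                    # optimal f_{k,i} and U_1 value
```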
Theorem 7.
The subproblem P3.2 is a non-convex problem.
Proof. 
The subproblem P3.2 contains continuous variables ($d_k, k \in \mathcal{K}$) and integer variables ($\alpha_k, k \in \mathcal{K}$), and $U_2$ is a non-convex function of $d_k$; therefore, the subproblem P3.2 is a non-convex problem.    □
Since the subproblem P3.2 is a non-convex problem, finding the optimal solution usually requires exponential time complexity [36,37]. However, it is noted that $U_2$ is a concave function of $d_k$ when $d_j, j \in \mathcal{K}, j \neq k$ and $\pi \in \Pi$ are both fixed. Here, $\Pi$ denotes the set of permutations of the $K$ service caching decisions, and $\pi$ is one element of $\Pi$. Thus, we have the following theorem.
Theorem 8.
The subproblem P3.2 is a concave problem in terms of $d_k$ when $d_j, j \in \mathcal{K}, j \neq k$ and $\pi$ are both fixed.
Proof. 
When $\alpha_k, k \in \mathcal{K}$ and $d_j, j \in \mathcal{K}, j \neq k$ are both fixed, $1 + \frac{d_k |h_k|^2}{\sigma^2 + d_k |h_k|^2 (N_k - 1) + \sum_{a=1, a \neq k}^{K} N_a d_a |h_a|^2}$ is a concave function of $d_k$, because its second-order derivative is strictly negative when $d_k \ge 0$; consequently, $R_k = B \log \left( 1 + \frac{d_k |h_k|^2}{\sigma^2 + d_k |h_k|^2 (N_k - 1) + \sum_{a=1, a \neq k}^{K} N_a d_a |h_a|^2} \right)$ is still a concave function. Furthermore, we deduce that $-\frac{D_k}{R_k}$ is also concave. In addition, $-c_k^2 \alpha_k d_k$ is linear and hence concave. The positive summation of all these concave functions, i.e., $U_2$, is still a concave function. Thus, $U_2$ is concave in $d_k$ when $d_j, j \in \mathcal{K}, j \neq k$ and $\pi$ are both fixed. Moreover, the constraint set in terms of $d_k$ is a convex set. This completes the proof.    □
Theorem 8 motivates us to use the block coordinate descent (BCD) algorithm in [38] to find the optimal $d_k, k \in \mathcal{K}$ when $\pi$ is fixed. Furthermore, a greedy algorithm is used to traverse each $\pi$ to find the optimal $\pi$ that maximizes the subproblem P3.2.
Thus, the algorithm for solving the subproblem P3.1 and the subproblem P3.2 is summarized as follows.
Initializing the parameters and solving subproblem P3.1 using convex optimization tools has a time complexity of $O(N_k^3)$, where $N_k$ is the number of MTs of class $k$. Computing the permutations of the $K$ service caching decisions takes $O(2^K)$ time. The while loop runs until the termination condition $\epsilon < \epsilon_{\min}$ is met, and we denote by $z_{\max}$ the number of iterations of the while loop in the pseudocode below. Based on the above analysis, the overall time complexity of Algorithm 1 is $O(z_{\max} K 2^K)$.
Algorithm 1 Finding the optimal contract items
1: Initialize parameters
2: Use standard convex optimization tools [36] to solve subproblem P3.1 and obtain $f_{k,i}^*, k \in \mathcal{K}, i \in I_k$ and $U_1^*$
3: Compute the permutations of the $K$ service caching decisions (denoted as $\Pi$)
4: for each $\pi \in \Pi$ do
5:     Initialize $z = 1$, $d^{\pi,z} = (d_1^{\pi,z}, \ldots, d_k^{\pi,z}, \ldots, d_K^{\pi,z})$
6:     while $\epsilon > \epsilon_{\min}$ do
7:         for each service cache $k \in \mathcal{K}$ do
8:             Use a greedy algorithm to find the optimal transmit power $d_k^{\pi,z}$ while every other transmit power $d_a^{\pi,z}, a \in \mathcal{K}, a \neq k$ is fixed
9:         end for
10:        Compute $U_2^{\pi,z}$
11:        Compute $\epsilon = \| U_2^{\pi,z} - U_2^{\pi,z-1} \|$
12:        Update $z = z + 1$, $d^{\pi,z}$
13:    end while
14: end for
15: Find $\pi^*$ that maximizes $U_2$ and obtain $d_k^*, k \in \mathcal{K}$
16: Based on $\pi^*$ (i.e., $\alpha_k^*, k \in \mathcal{K}$), $d_k^*, k \in \mathcal{K}$, and $f_{k,i}^*, k \in \mathcal{K}, i \in I_k$, calculate $p_{k,i}^*$ by (19); monotonicity is met automatically when the type is uniformly distributed [14]
17: Output: $\alpha_k^*, d_k^*, f_{k,i}^*, p_{k,i}^*, k \in \mathcal{K}, i \in I_k$
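A minimal Python sketch of the loop structure of steps 3–15 of Algorithm 1 (ours, not the authors' implementation); the callback `U2_of`, the candidate grid `d_grid`, and the iteration guard are assumptions, with a grid search standing in for the greedy one-dimensional power search.

```python
import itertools
import numpy as np

def solve_p32(K, U2_of, d_init, d_grid, eps_min=1e-4, max_iter=100):
    """Enumerate every caching decision alpha in {0,1}^K (the set Pi)
    and, for each one, run block coordinate descent on the transmit
    powers d: each d_k is optimized while the others stay fixed.
    U2_of(alpha, d) must return U_2 for the given decision and powers."""
    best_val, best_alpha, best_d = -np.inf, None, None
    for alpha in itertools.product((0, 1), repeat=K):
        d = np.asarray(d_init, dtype=float).copy()
        U2_prev, eps, it = -np.inf, np.inf, 0
        while eps > eps_min and it < max_iter:
            for k in range(K):               # one BCD block per cache
                trials = []
                for dk in d_grid:
                    d_try = d.copy()
                    d_try[k] = dk
                    trials.append(U2_of(alpha, d_try))
                d[k] = d_grid[int(np.argmax(trials))]
            U2 = U2_of(alpha, d)
            eps = abs(U2 - U2_prev)          # convergence check (step 11)
            U2_prev, it = U2, it + 1
        if U2_prev > best_val:
            best_val, best_alpha, best_d = U2_prev, alpha, d.copy()
    return best_val, best_alpha, best_d      # (U_2^*, alpha^*, d^*)
```

Consistent with the complexity analysis above, the sketch visits $2^K$ caching decisions and loops over the $K$ blocks in each of the $z_{\max}$ BCD iterations, i.e., $O(z_{\max} K 2^K)$ evaluations up to the grid size.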

5. Simulation Results

We conducted simulations in MATLAB to demonstrate the effectiveness of the proposed incentive mechanism. In the simulation, we consider a MEC server that offers $K = 3$ classes of service-cache-based computation tasks to $N = 15$ MTs. The number of MTs in each class is equal, i.e., $N_1 = N_2 = N_3 = 5$. For a fixed class, we set the number of MTs equal to the number of MT types of preference for service latency, i.e., $N_k = I_k = 5$. We assume that each type of preference for service latency occurs with equal probability within the range $(0, 20]$, i.e., $q_{k,i} = \frac{1}{I_k} = \frac{1}{N_k}$. Additional parameters, based on prior studies [39,40,41,42], are presented in Table 3.
Furthermore, the numerical results indicate that the proposed mechanism outperforms other baseline incentive mechanisms and significantly enhances the utility of the MEC server. The baseline incentive mechanisms are the contract-based incentive mechanism under the symmetric information scenario (CS), the Stackelberg game-based incentive mechanism (SG) [43], and the linear pricing incentive mechanism (LP). CS considers the scenario in which the MEC server knows the type of preference of each MT. SG considers that the MEC server sets a different unit price for each type of MT. The objective of each MT is to maximize its own utility, which is expressed as
$$U_{k,i}^{\mathrm{SG}} = \theta_{k,i} G(f_{k,i}) - \delta_{k,i} f_{k,i}$$
where δ k , i is the price per CPU cycle. The objective of the MEC server is to maximize its own utility, which is expressed as
$$U^{\mathrm{SG}} = \sum_{k \in \mathcal{K}} U_k^{\mathrm{SG}}$$
where
$$U_k^{\mathrm{SG}} = \sum_{i \in I_k} N_k q_{k,i} \left( \delta_{k,i} f_{k,i} - c_{k,i}^1 d_k - c_k^2 \alpha_k d_k - c_k^3 (1 - \alpha_k) s_k - c_k^4 D_k \eta_k \kappa_{k,i} f_{k,i}^2 \right)$$
Referring to [44], LP considers that the MEC server sets the same unit price for all types of MTs.

5.1. Algorithm Effectiveness

In order to verify the effectiveness of the algorithm, we need to verify it in two steps. First, we verify the effectiveness of the optimal contract items as shown in Figure 2. Then, we verify the algorithm’s convergence as shown in Figure 3.
In Figure 2, we evaluate the IR and IC conditions of our proposed CA scheme. Figure 2 shows the utilities of type-1 to type-5 MTs for different types of service caching when selecting each of the contract items $(p_{k,i}, f_{k,i}), k \in \mathcal{K}, i \in I_k$ offered by the MEC server. For service caching type 1, for example, it can be seen that each MT maximizes its utility when selecting the contract that fits its own type, which means that the IC constraints are satisfied. Furthermore, each type of MT receives a positive utility when selecting the contract that fits its type, which indicates that the IR constraints are satisfied. Therefore, by applying the proposed CA scheme, the MEC server can overcome the information asymmetry between the MEC server and the MTs by becoming aware of the types of the MTs. Additionally, the utilities of higher-type MTs are larger than those of lower-type MTs, which verifies the result of Theorem 2.
As demonstrated in Figure 3a, Algorithm 1 converges to a predetermined error for P3.2 as the number of iterations increases under various caching decisions. As shown in Figure 3b, the MEC server achieves different utilities depending on the caching decision, and the maximum utility is obtained when all service caches are stored on the MEC server.

5.2. Performance Comparison

In this paper, we compare our proposed contract-based incentive mechanism under asymmetric information (CA) to three other incentive mechanisms: CS, SG, and LP.
Figure 4 depicts the relationship between the utilities of the MTs (and the MEC server) and the number of MT types under different incentive mechanisms. The utilities of the MTs and the MEC server increase with the number of MT types, as shown in Figure 4b and Figure 4a, respectively. Additionally, we analyze the utilities of the MTs and the MEC server under different incentive mechanisms when the number of MT types is fixed.
In Figure 4a, the CS approach achieves the best performance among the four approaches, serving as an upper bound. This is because the MEC server is fully aware of the different types of MTs and does its best to extract the maximum amount of revenue from them until their utilities are all zero. Additionally, the contract-based CS and CA methods outperform the SG approach in terms of the MEC server’s utility. The contract-based approaches aim to collect as much revenue from the MTs as possible while satisfying both the IR and IC constraints, leaving only a small share of revenue for the MTs. In contrast, the SG strategy aims to maximize the combined utility of the MEC server and the MTs, allowing for more revenue to be allocated to the MTs. Finally, the SG strategy outperforms the LP approach in terms of the MEC server’s utility, while the LP approach has the weakest performance among the four approaches. This is because the LP approach cannot adapt to changes in various offloaded jobs, which would worsen its performance. In Figure 4b, the CA approach provides better utilities for the MTs than the LP approach, while the utilities of the MTs are both zero under the SG and CS approaches. This is due to the same reasons mentioned in Figure 4a.
In Figure 5, the relationship between MTs’ utilities (and MEC server’s utility) and the number of service caching types is demonstrated under different incentives. The MEC server’s utility is shown to increase as the number of service caching types increases, as depicted in Figure 5a, and, similarly, the utilities of the MTs increase as well, as shown in Figure 5b.
Figure 6 depicts the relationship between the utilities of the MTs and the MEC server and the unit cost of service caching under CA and under CA with a random transmission power strategy $d$ and caching strategy $\alpha$. As shown in the figure, the utilities of both the MEC server and the MTs decrease as the unit cost of service caching increases. Moreover, when the unit cost of service caching is fixed, the utilities of both the MTs and the MEC server are higher under the CA approach than under CA with random strategies.
Figure 7 illustrates how the utilities of the MTs and the MEC server are affected by the caching strategy and the caching unit cost. As shown, when caching costs are low, the MEC server tends to store all of its service caches on the edge side, represented by $\alpha = [0\ 0\ 0]$. When caching costs are moderate, the MEC server tends to store part of its service caches on the edge side, represented by $\alpha = [1\ 0\ 0]$. When caching costs are high, the MEC server tends not to store any of its service caches on the edge side, represented by $\alpha = [1\ 1\ 1]$.
Figure 8 illustrates the effect of service caching costs and strategies and the number of service caching types on social welfare. The social welfare is defined as

$$U^{\mathrm{SW}} = \sum_{k \in \mathcal{K}} \sum_{i \in I_k} N_k q_{k,i} U_{k,i} + \sum_{k \in \mathcal{K}} \sum_{i \in I_k} N_k q_{k,i} u_{k,i} = \sum_{k \in \mathcal{K}} \sum_{i \in I_k} N_k q_{k,i} \left[ \theta_{k,i} \left( T_k^{\max} - \frac{D_k \eta_k}{f_{k,i}} - \frac{D_k}{R_k} - \frac{\alpha_k s_k}{R_k} \right) - \left( c_{k,i}^1 d_k + c_k^2 \alpha_k d_k + c_k^3 (1 - \alpha_k) s_k + c_k^4 D_k \eta_k \kappa_{k,i} f_{k,i}^2 \right) \right]$$

where $U_{k,i} = p_{k,i} - c_{k,i}^1 d_k - c_k^2 \alpha_k d_k - c_k^3 (1 - \alpha_k) s_k - c_k^4 D_k \eta_k \kappa_{k,i} f_{k,i}^2$. Social welfare in this context accounts for both the time gain and the energy consumption. This is a realistic approach, as it takes into account the fact that users may prioritize either time or energy efficiency depending on their needs and preferences. The results suggest that the CS and CA methods can significantly improve social welfare compared to the other methods when the service caching costs and strategies are fixed. Moreover, increasing the number of service caching types also leads to higher social benefits. Specifically, for the same number of service caching types, the CS and CA methods exhibit significantly higher social welfare than the other methods; they can produce the same time gain with less energy. Overall, these findings indicate the effectiveness of the CS and CA methods in enhancing social welfare and optimizing service caching in wireless networks.

6. Conclusions

In this study, we construct a distributed heterogeneous network architecture to balance the various computing capacities of heterogeneous network environments. Additionally, we develop a method based on contract theory to jointly optimize resource allocation, service caching, and computation offloading in order to maximize the revenue gain of MEC servers. After careful theoretical derivation, we conclude that the contract problem is a complex problem with a non-convex objective function and complex, non-convex constraints. We use variable transformation and constraint reduction to transform and simplify the contract problem, and then a greedy algorithm is used to solve it. According to the numerical results, the proposed incentive mechanism has a significant advantage over the baseline approaches in terms of increasing the utility of the MEC server.

Author Contributions

Conceptualization, Z.S.; methodology, Z.S.; validation, Z.S.; data curation, Z.S.; writing—original draft preparation, Z.S.; writing—review and editing, G.C.; visualization, Z.S.; project administration, Z.S.; funding acquisition, G.C. All authors have read and agreed to the published version of the manuscript.

Funding

Special Industrial Technology Research Project of Jilin Province, Research on Self-organizing Network System of Unmanned Platform for Optoelectronic Composite Communication, No. 2022C047-8; Supported by “Thirteenth Five-Year Plan” of Provincial Science and Technology of Education Department of Jilin Province, Research on Large-scale D2D Access and Traffic Balancing Technology for Heterogeneous Wireless Networks (JJKH20181130KJ).

Institutional Review Board Statement

Not applicable; this study did not involve humans or animals.

Acknowledgments

The authors would like to thank all of the cited authors and the anonymous reviewers of this article for their helpful suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1

Proof. 
Using the IC constraint, we demonstrate this theorem. As a first step, we establish sufficiency: if $\theta_{k,i} \ge \theta_{k,j}$, then $f_{k,i} \ge f_{k,j}$. Recall that $G(f_{k,i}) = T_k^{\max} - \frac{D_k \eta_k}{f_{k,i}} - \frac{D_k}{R_k} - \frac{\alpha_k s_k}{R_k}$. By the IC constraints of MTs of type $i$ and type $j$, where $i \neq j$ and $i, j \in I_k$, we have

$$\theta_{k,i} G(f_{k,i}) - p_{k,i} \ge \theta_{k,i} G(f_{k,j}) - p_{k,j}$$

and

$$\theta_{k,j} G(f_{k,j}) - p_{k,j} \ge \theta_{k,j} G(f_{k,i}) - p_{k,i}$$

Adding the two formulas above gives $\theta_{k,i} G(f_{k,i}) + \theta_{k,j} G(f_{k,j}) \ge \theta_{k,i} G(f_{k,j}) + \theta_{k,j} G(f_{k,i})$. Rearranging the terms, we obtain $G(f_{k,i}) (\theta_{k,i} - \theta_{k,j}) \ge G(f_{k,j}) (\theta_{k,i} - \theta_{k,j})$. If $\theta_{k,i} \ge \theta_{k,j}$, then $\theta_{k,i} - \theta_{k,j} \ge 0$. Dividing both sides of the inequality yields $G(f_{k,i}) \ge G(f_{k,j})$. From the definition of $G(\cdot)$, we can infer that $G(f)$ is a strictly increasing function of $f$. Hence, $f_{k,i} \ge f_{k,j}$ must hold when $G(f_{k,i}) \ge G(f_{k,j})$ does.
Next, we prove the necessity: if $f_{k,i} \ge f_{k,j}$, then $\theta_{k,i} \ge \theta_{k,j}$. Using a procedure similar to the first case, we can obtain

$$\theta_{k,i} \left( G(f_{k,i}) - G(f_{k,j}) \right) \ge \theta_{k,j} \left( G(f_{k,i}) - G(f_{k,j}) \right)$$

Since $f_{k,i} \ge f_{k,j}$ and $G(f)$ increases strictly with $f$, we must have $G(f_{k,i}) \ge G(f_{k,j})$, i.e., $G(f_{k,i}) - G(f_{k,j}) \ge 0$. Thus, by dividing the two sides of the inequality, we obtain $\theta_{k,i} \ge \theta_{k,j}$. As a result, we prove that $\theta_{k,i} \ge \theta_{k,j}$ holds if and only if $f_{k,i} \ge f_{k,j}$.

Using the same procedure, we can easily prove that $\theta_{k,i} = \theta_{k,j}$ if and only if $f_{k,i} = f_{k,j}$. □

Appendix A.2

Proof. 
When $\theta_{k,i} \ge \theta_{k,j}$ and $f_{k,i} \ge f_{k,j}$ jointly hold, we know from Definition 2 and Proposition 1 that MTs with a higher delay-sensitivity degree gain more value by offloading the work. If $\theta_{k,i} > \theta_{k,j}$, we have

$$u_{k,i} = \theta_{k,i} G(f_{k,i}) - p_{k,i} \overset{\text{(IC)}}{\ge} \theta_{k,i} G(f_{k,j}) - p_{k,j} > \theta_{k,j} G(f_{k,j}) - p_{k,j} = u_{k,j}$$

□

Appendix A.3

Proof. 
Since the number of MTs in our model is $N_k$, there exists a total of $N_k (N_k - 1)$ IC constraints. Here, we focus on three adjacent MT types satisfying $\theta_{k,i-1} < \theta_{k,i} < \theta_{k,i+1}$. Then, we have the following two LDICs.

$$\theta_{k,i+1} G(f_{k,i+1}) - p_{k,i+1} \ge \theta_{k,i+1} G(f_{k,i}) - p_{k,i}$$

and

$$\theta_{k,i} G(f_{k,i}) - p_{k,i} \ge \theta_{k,i} G(f_{k,i-1}) - p_{k,i-1}$$

In Theorem 1, we prove that for any $\theta_{k,i} \ge \theta_{k,j} > 0$, we have $f_{k,i} \ge f_{k,j}$, so the second inequality becomes

$$\theta_{k,i+1} \left( G(f_{k,i}) - G(f_{k,i-1}) \right) \ge \theta_{k,i} \left( G(f_{k,i}) - G(f_{k,i-1}) \right) \ge p_{k,i} - p_{k,i-1}$$

and

$$\theta_{k,i+1} G(f_{k,i+1}) - p_{k,i+1} \ge \theta_{k,i+1} G(f_{k,i}) - p_{k,i} \ge \theta_{k,i+1} G(f_{k,i-1}) - p_{k,i-1}$$

The above holds because $\theta_{k,i+1} \left( G(f_{k,i}) - G(f_{k,i-1}) \right) \ge p_{k,i} - p_{k,i-1}$. Consequently, we have $\theta_{k,i+1} G(f_{k,i+1}) - p_{k,i+1} \ge \theta_{k,i+1} G(f_{k,i-1}) - p_{k,i-1}$.

Thus, if the LDIC holds for MTs of type $i$, then the incentive constraint holds with respect to MTs of type $(i-1)$. This process can be extended from type $(i-1)$ down to type 1 to prove that all DICs hold.

$$\theta_{k,i+1} G(f_{k,i+1}) - p_{k,i+1} \ge \theta_{k,i+1} G(f_{k,i-1}) - p_{k,i-1} \ge \cdots \ge \theta_{k,i+1} G(f_{k,1}) - p_{k,1}, \quad 1 \le i \le I_k$$

□

Appendix A.4

Proof. 
The following two LUICs result from the IC constraint.

$$\theta_{k,i-1} G(f_{k,i-1}) - p_{k,i-1} \ge \theta_{k,i-1} G(f_{k,i}) - p_{k,i}$$

and

$$\theta_{k,i} G(f_{k,i}) - p_{k,i} \ge \theta_{k,i} G(f_{k,i+1}) - p_{k,i+1}$$

In Theorem 1, we show that, for any $\theta_{k,i} \ge \theta_{k,j} > 0$, we have $f_{k,i} \ge f_{k,j}$. The second inequality can thus be deduced as

$$p_{k,i+1} - p_{k,i} \ge \theta_{k,i} \left( G(f_{k,i+1}) - G(f_{k,i}) \right) \ge \theta_{k,i-1} \left( G(f_{k,i+1}) - G(f_{k,i}) \right)$$

and

$$\theta_{k,i-1} G(f_{k,i-1}) - p_{k,i-1} \ge \theta_{k,i-1} G(f_{k,i}) - p_{k,i} \ge \theta_{k,i-1} G(f_{k,i+1}) - p_{k,i+1}$$

Therefore, we have

$$\theta_{k,i-1} G(f_{k,i-1}) - p_{k,i-1} \ge \theta_{k,i-1} G(f_{k,i+1}) - p_{k,i+1}$$

Thus, if the LUIC holds for MTs of type $i$, then the incentive constraint of type $(i-1)$ holds with respect to MTs of type $(i+1)$. Extending this process upward from type $(i+1)$ to type $N_k$ proves that all UICs hold.

$$\theta_{k,i-1} G(f_{k,i-1}) - p_{k,i-1} \ge \theta_{k,i-1} G(f_{k,i+1}) - p_{k,i+1} \ge \cdots \ge \theta_{k,i-1} G(f_{k,N_k}) - p_{k,N_k}, \quad I_k \ge i > 1$$

□

References

  1. Xue, Z.; Liu, C.; Liao, C.; Han, G.; Sheng, Z. Joint service caching and computation offloading scheme based on deep reinforcement learning in vehicular edge computing systems. IEEE Trans. Veh. Technol. 2023, 72, 6709–6722. [Google Scholar] [CrossRef]
  2. Xu, C.; Zheng, G.; Zhao, X. Energy-minimization task offloading and resource allocation for mobile edge computing in noma heterogeneous networks. IEEE Trans. Veh. Technol. 2020, 69, 16001–16016. [Google Scholar] [CrossRef]
  3. Yang, Z.; Chen, M.; Liu, X.; Liu, Y.; Chen, Y.; Cui, S.; Poor, H.V. Ai-driven uav-noma-mec in next generation wireless networks. IEEE Wirel. Commun. 2021, 28, 66–73. [Google Scholar] [CrossRef]
  4. Hao, Y.; Miao, Y.; Hu, L.; Hossain, M.S.; Muhammad, G.; Amin, S.U. Smart-edge-cocaco: Ai-enabled smart edge with joint computation, caching, and communication in heterogeneous iot. IEEE Netw. 2019, 33, 58–64. [Google Scholar] [CrossRef]
  5. Hao, Y.; Chen, M.; Hu, L.; Hossain, M.S.; Ghoneim, A. Energy efficient task caching and offloading for mobile edge computing. IEEE Access 2018, 6, 365–373. [Google Scholar] [CrossRef]
  6. Hao, Y.; Hu, L.; Qian, Y.; Chen, M. Profit maximization for video caching and processing in edge cloud. IEEE J. Sel. Areas Commun. 2019, 37, 1632–1641. [Google Scholar] [CrossRef]
  7. Zhang, J.; Hu, X.; Ning, Z.; Ngai, E.C.-H.; Zhou, L.; Wei, J.; Cheng, J.; Hu, B.; Leung, V.C. Joint resource allocation for latency-sensitive services over mobile edge computing networks with caching. IEEE Internet Things J. 2018, 6, 4283–4294. [Google Scholar] [CrossRef]
  8. Zhou, H.; Zhang, Z.; Li, D.; Su, Z. Joint optimization of computing offloading and service caching in edge computing-based smart grid. IEEE Trans. Cloud Comput. 2022. [Google Scholar] [CrossRef]
  9. Hastie, T.; Tibshirani, R.; Friedman, J.H.; Friedman, J.H. The Elements of Statistical Learning: Data Mining, Inference, and Prediction; Springer: Berlin/Heidelberg, Germany, 2009; Volume 2. [Google Scholar]
  10. Muhuri, A.; Gascoin, S.; Menzel, L.; Kostadinov, T.S.; Harpold, A.A.; López-Moreno, J.I. Performance assessment of optical satellite-based operational snow cover monitoring algorithms in forested landscapes. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7159–7178. [Google Scholar] [CrossRef]
  11. Fan, W.; Han, J.; Su, Y.; Liu, X.; Wu, F.; Tang, B.; Liu, Y. Joint task offloading and service caching for multi-access edge computing in wifi-cellular heterogeneous networks. IEEE Trans. Wirel. Commun. 2022, 21, 9653–9667. [Google Scholar] [CrossRef]
  12. Tütüncüoğlu, F.; Dán, G. Optimal service caching and pricing in edge computing: A bayesian gaussian process bandit approach. IEEE Trans. Mob. Comput. 2022. [Google Scholar] [CrossRef]
  13. Le, T.H.T.; Tran, N.H.; LeAnh, T.; Oo, T.Z.; Kim, K.; Ren, S.; Hong, C.S. Auction mechanism for dynamic bandwidth allocation in multi-tenant edge computing. IEEE Trans. Veh. Technol. 2020, 69, 15162–15176. [Google Scholar] [CrossRef]
  14. Bolton, P.; Dewatripont, M. Contract Theory; MIT Press: Cambridge, MA, USA, 2004. [Google Scholar]
  15. Aliyu, A.; Abdullah, A.H.; Kaiwartya, O.; Madni, S.H.H.; Joda, U.M.; Ado, A.; Tayyab, M. Mobile cloud computing: Taxonomy and challenges. J. Comput. Netw. Commun. 2020, 2020, 2547921. [Google Scholar] [CrossRef]
  16. Wang, J.; Zhao, L.; Liu, J.; Kato, N. Smart resource allocation for mobile edge computing: A deep reinforcement learning approach. IEEE Trans. Emerg. Top. Comput. 2019, 9, 1529–1541. [Google Scholar] [CrossRef]
  17. Park, C.; Lee, J. Mobile edge computing-enabled heterogeneous networks. IEEE Trans. Wirel. Commun. 2020, 20, 1038–1051. [Google Scholar] [CrossRef]
  18. Ahmed, A.; Hanan, A.A.; Omprakash, K.; Usman, M.; Syed, O. Mobile cloud computing energy-aware task offloading (mcc: Eto). In Proceedings of the Communication and Computing Systems: Proceedings of the International Conference on Communication and Computing Systems (ICCCS 2016), Gurgaon, India, 9–11 September, 2016; p. 359. [Google Scholar]
  19. Cao, Y.; Song, H.; Kaiwartya, O.; Zhou, B.; Zhuang, Y.; Cao, Y.; Zhang, X. Mobile edge computing for big-data-enabled electric vehicle charging. IEEE Commun. Mag. 2018, 56, 150–156. [Google Scholar] [CrossRef]
  20. Zhang, Q.; Gui, L.; Hou, F.; Chen, J.; Zhu, S.; Tian, F. Dynamic task offloading and resource allocation for mobile-edge computing in dense cloud RAN. IEEE Internet Things J. 2020, 7, 3282–3299. [Google Scholar] [CrossRef]
  21. Yao, J.; Han, T.; Ansari, N. On mobile edge caching. IEEE Commun. Surv. Tutor. 2019, 21, 2525–2553. [Google Scholar] [CrossRef]
  22. Lyu, X.; Ren, C.; Ni, W.; Tian, H.; Liu, R.P.; Tao, X. Distributed online learning of cooperative caching in edge cloud. IEEE Trans. Mob. Comput. 2020, 20, 2550–2562. [Google Scholar] [CrossRef]
  23. Alioua, A.; Hamiroune, R.; Amiri, O.; Khelifi, M.; Senouci, S.-M.; Gidlund, M.; Abedin, S.F. Incentive mechanism for competitive edge caching in 5G-enabled Internet of Things. Comput. Netw. 2022, 213, 109096. [Google Scholar] [CrossRef]
  24. Li, S.; Hu, X.; Du, Y. Deep reinforcement learning and game theory for computation offloading in dynamic edge computing markets. IEEE Access 2021, 9, 121456–121466. [Google Scholar] [CrossRef]
  25. Xia, S.; Yao, Z.; Li, Y.; Mao, S. Online distributed offloading and computing resource management with energy harvesting for heterogeneous MEC-enabled IoT. IEEE Trans. Wirel. Commun. 2021, 20, 6743–6757. [Google Scholar] [CrossRef]
  26. Tefera, G.; She, K.; Chen, M.; Ahmed, A. Congestion-aware adaptive decentralised computation offloading and caching for multi-access edge computing networks. IET Commun. 2020, 14, 3410–3419. [Google Scholar] [CrossRef]
  27. Yan, J.; Bi, S.; Duan, L.; Zhang, Y.-J.A. Pricing-driven service caching and task offloading in mobile edge computing. IEEE Trans. Wirel. Commun. 2021, 20, 4495–4512. [Google Scholar] [CrossRef]
  28. Lu, W.; Wu, W.; Xu, J.; Zhao, P.; Yang, D.; Xu, L. Auction design for cross-edge task offloading in heterogeneous mobile edge clouds. Comput. Commun. 2022, 181, 90–101. [Google Scholar] [CrossRef]
  29. Zhou, H.; Wu, T.; Zhang, H.; Wu, J. Incentive-driven deep reinforcement learning for content caching and D2D offloading. IEEE J. Sel. Areas Commun. 2021, 39, 2445–2460. [Google Scholar] [CrossRef]
  30. Tian, L.; Li, J.; Li, W.; Ramesh, B.; Cai, Z. Optimal contract-based mechanisms for online data trading markets. IEEE Internet Things J. 2019, 6, 7800–7810. [Google Scholar] [CrossRef]
  31. Su, C.; Ye, F.; Liu, T.; Tian, Y.; Han, Z. Computation offloading in hierarchical multi-access edge computing based on contract theory and Bayesian matching game. IEEE Trans. Veh. Technol. 2020, 69, 13686–13701. [Google Scholar] [CrossRef]
  32. Wu, L.; Liu, Z.; Sun, P.; Chen, H.; Wang, K.; Zuo, Y.; Yang, Y. DOT: Decentralized offloading of tasks in OFDMA-based heterogeneous computing networks. IEEE Internet Things J. 2022, 9, 20071–20082. [Google Scholar] [CrossRef]
  33. Yang, Y.; Wang, W.; Xu, R.; Srivastava, G.; Alazab, M.; Gadekallu, T.R.; Su, C. AoI optimization for UAV-aided MEC networks under channel access attacks: A game theoretic viewpoint. In Proceedings of the ICC 2022-IEEE International Conference on Communications, Seoul, Republic of Korea, 16–20 May 2022; pp. 1–6. [Google Scholar]
  34. Dong, C.; Wen, W. Joint optimization for task offloading in edge computing: An evolutionary game approach. Sensors 2019, 19, 740. [Google Scholar] [CrossRef]
  35. Liao, Y.; Shen, D. Multi-MEC server multi-user resource allocation in heterogeneous network. J. Phys. Conf. Ser. 2021, 1792, 012005. [Google Scholar]
  36. Boyd, S.; Boyd, S.P.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  37. Pochet, Y.; Wolsey, L.A. Production Planning by Mixed Integer Programming; Springer: Berlin/Heidelberg, Germany, 2006; Volume 149. [Google Scholar]
  38. Tseng, P. Convergence of a block coordinate descent method for nondifferentiable minimization. J. Optim. Theory Appl. 2001, 109, 475. [Google Scholar] [CrossRef]
  39. Liu, M.; Liu, Y. Price-based distributed offloading for mobile-edge computing with computation capacity constraints. IEEE Wirel. Commun. Lett. 2017, 7, 420–423. [Google Scholar] [CrossRef]
  40. Wang, S.; Ye, D.; Huang, X.; Yu, R.; Wang, Y.; Zhang, Y. Consortium blockchain for secure resource sharing in vehicular edge computing: A contract-based approach. IEEE Trans. Netw. Sci. Eng. 2020, 8, 1189–1201. [Google Scholar] [CrossRef]
  41. Xiao, H.; Zhao, J.; Pei, Q.; Feng, J.; Liu, L.; Shi, W. Vehicle selection and resource optimization for federated learning in vehicular edge computing. IEEE Trans. Intell. Transp. Syst. 2021, 23, 11073–11087. [Google Scholar] [CrossRef]
  42. Feng, J.; Zhang, W.; Pei, Q.; Wu, J.; Lin, X. Heterogeneous computation and resource allocation for wireless powered federated edge learning systems. IEEE Trans. Commun. 2022, 70, 3220–3233. [Google Scholar] [CrossRef]
  43. Li, Y.; Yang, B.; Wu, H.; Han, Q.; Chen, C.; Guan, X. Joint offloading decision and resource allocation for vehicular fog-edge computing networks: A contract-Stackelberg approach. IEEE Internet Things J. 2022, 9, 15969–15982. [Google Scholar] [CrossRef]
  44. Zhang, Y.; Liu, L.; Gu, Y.; Niyato, D.; Pan, M.; Han, Z. Offloading in software defined network at edge with information asymmetry: A contract theoretical approach. J. Signal Process. Syst. 2016, 83, 241–253. [Google Scholar] [CrossRef]
Figure 1. System illustration.
Figure 2. Types of MT versus utilities of MTs: (a) service caching type 1; (b) service caching type 2; (c) service caching type 3.
Figure 3. Algorithm convergence: (a) the predetermined error of the algorithm; (b) utility of various caching decisions.
Figure 4. Utilities of MTs and MEC server versus number of MT types under different incentives: (a) utility of MEC server; (b) utilities of MTs.
Figure 5. Utilities of MTs and MEC server versus number of service caching types under different incentives: (a) utility of MEC server; (b) utilities of MTs.
Figure 6. Utilities of MTs and MEC server versus unit cost of service caching under CA and CA with random strategies: (a) utility of MEC server; (b) utilities of MTs.
Figure 7. Effect of caching strategy and caching unit cost on utilities of MTs and utility of MEC server: (a) utility of MEC server; (b) utilities of MTs.
Figure 8. Effects of service caching costs and strategies and number of service caching types on social welfare: (a) service caching costs and strategies; (b) number of service caching types.
Table 1. Optimization of network resources through incentive mechanisms.
Ref. | Optimization Strategies | Optimization Goals | Incentive Mechanism
[24] | Offloading data | Convergence time and stability | Deep reinforcement learning (DRL) and game theory
[28] | Utility of user | Social welfare and computation efficiency | Dual auction framework
[31] | Reward the MEC operator pays and CPU resource | Social welfare and computation efficiency | Contract theory and Bayesian matching game
[23] | Caching price and the number of contents stored on the edge caches | Quality of experience | Stackelberg game
[25] | CPU cycle, computation task offloading strategy, and unit task payment | Communication overhead and processing efficiency | Game theory and perturbed Lyapunov optimization
[26] | Variables for whether the tenant's bid wins | Delay and energy consumption | Randomized auction mechanism
[27] | Service caching pricing | Prices and service caching decisions | Dynamic game of incomplete information
[12] | Offloading decision and transmission power | Computational overhead | Stackelberg game
This paper | CPU cycle, transmission power, caching decision | Delay | Contract theory
Table 2. Key notations.
Symbol | Description
$\mathcal{K} = \{1, 2, \ldots, k, \ldots, K\}$ | Set of edge nodes
$\mathcal{N} = \{1, 2, \ldots, n, \ldots, N\}$ | Set of MTs
$N_k$ | Number of MTs whose computational tasks belong to class $k \in \mathcal{K}$
$\theta_{k,n}$ | Sensitivity of MT $n$ to the latency of a type-$k$ computation task
$D_k$ | Data size of a type-$k$ computation task
$s_k$ | Service cache size required by a type-$k$ computation task
$\eta_k$ | Number of CPU cycles required to process one bit of a type-$k$ computational task
$T_k^{\max}$ | Maximum tolerated latency of a type-$k$ computational task
$R_{k,n}$ | Wireless transmission rate of the terminal device
$\sigma$ | Gaussian white noise
$d_{k,n}$ | Transmission power
$|h_{k,n}|^2$ | Channel power gain between the edge server and MT $n_k$
$\alpha_k$ | Service cache placement decision
$r_k$ | Transfer rate at which the edge server acquires service cache $k$
$f_{k,n}$ | Processing power the edge server makes available
$p_{k,n}$ | Cost of tasks offloaded to edge servers
$u_{n,k}$ | Utility of MT $n_k$
$\Theta_k = \{\theta_{k,i} : 1 \le i \le I_k\}$ | Latency-sensitivity categories into which the $N_k$ terminal devices are classified by type
$q_{k,i}$ | Probability distribution of each category
$c_{k,n}^1$ | Unit cost the service provider pays for computational tasks offloaded by class $k$
$c_k^2$ | Cost per unit transfer rate
$c_k^3$ | Cost per unit of storage
$c_{k,n}^4$ | Unit cost of computational energy consumption
$\beta_k$ | Hotness level of the computational work delegated by type $k$
$\gamma$ | Popularity of the file
$\kappa_n$ | Effective switching capacitance
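To make the notation concrete, the block below sketches the transmission and computation model that these symbols conventionally parameterize in the MEC literature. It is an illustrative reconstruction rather than the paper's exact formulation, and the channel bandwidth $B$ is our assumption (it does not appear in Table 2).

```latex
% A sketch of the conventional MEC model behind the Table 2 symbols
% (assumed form; B denotes an assumed channel bandwidth).
\begin{align}
  R_{k,n} &= B \log_2\!\left(1 + \frac{d_{k,n}\,|h_{k,n}|^2}{\sigma^2}\right)
    && \text{(uplink rate of MT } n_k\text{)} \\
  t_{k,n} &= \frac{D_k}{R_{k,n}} + \frac{\eta_k D_k}{f_{k,n}} \le T_k^{\max}
    && \text{(upload plus execution delay)} \\
  E_{k,n} &= \kappa_n\,\eta_k D_k\, f_{k,n}^2
    && \text{(computation energy)}
\end{align}
```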
Table 3. Parameter setting in the simulation.
Parameter | Setting
Effective switched capacitance | $\kappa = 10^{-28}$
Number of CPU cycles to execute one bit | $\eta = \beta \in (0, 10]$ cycles/bit
Maximum tolerance time | $T^{\max} \in (0, 6]$ s
Size of service caching | $s \in (0, 81{,}920]$ bit
Maximum computing capacity | $F^{\max} = 2 \times 10^{10}$ cycles
Maximum storage capacity | $S^{\max} = 900{,}000$ bit
Others | $c^1 \in (0, 1]$, $c^2 \in (0, 5]$, $c^3 \in (0, 10^{-6}]$
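For readers who wish to reproduce the simulation setup, the following is a minimal Python sketch that instantiates the Table 3 parameters. The function name and the uniform-sampling assumption for the half-open ranges are ours; the paper does not specify a sampling distribution.

```python
import random

# Hypothetical parameter generator mirroring Table 3. The naming and
# uniform sampling over the (0, x] ranges are assumptions, not the
# authors' stated procedure.
def sample_simulation_parameters(seed=None):
    rng = random.Random(seed)
    eps = 1e-9  # keep samples strictly above 0, since the ranges are open at 0
    return {
        "kappa": 1e-28,                      # effective switched capacitance
        "eta": rng.uniform(eps, 10.0),       # CPU cycles per bit, (0, 10]
        "T_max": rng.uniform(eps, 6.0),      # maximum tolerance time, (0, 6] s
        "s": rng.uniform(eps, 81920.0),      # service caching size, (0, 81920] bit
        "F_max": 2e10,                       # maximum computing capacity, cycles
        "S_max": 900000,                     # maximum storage capacity, bit
        "c1": rng.uniform(eps, 1.0),         # unit offloading cost, (0, 1]
        "c2": rng.uniform(eps, 5.0),         # unit transfer-rate cost, (0, 5]
        "c3": rng.uniform(eps, 1e-6),        # unit storage cost, (0, 1e-6]
    }

if __name__ == "__main__":
    # Fixing the seed makes a run repeatable across experiments.
    print(sample_simulation_parameters(seed=42))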
