Article

Resource Allocation Strategy for Satellite Edge Computing Based on Task Dependency

Communication and Network Laboratory, Dalian University, Dalian 116622, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(18), 10027; https://doi.org/10.3390/app131810027
Submission received: 27 July 2023 / Revised: 13 August 2023 / Accepted: 21 August 2023 / Published: 5 September 2023
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract:
Satellite edge computing has attracted the attention of many scholars, but the limited resources of satellite networks make it difficult to process dependent tasks at the edge. Under a system model of the joint satellite-terrestrial network architecture, this paper therefore proposes an efficient scheduling strategy based on task in-degree and a resource allocation strategy based on an improved sparrow search algorithm, targeting the low application-processing success rate, and the resulting decline in user experience, caused by inter-task dependencies, limited resources, and unreasonable resource allocation in the satellite edge network. The scheduling strategy determines the processing order of tasks by repeatedly selecting subtasks with an in-degree of 0. The improved sparrow search algorithm incorporates opposition-based learning, a random search mechanism, and Cauchy mutation to enhance search capability and improve global convergence. Using the improved sparrow search algorithm, an optimal resource allocation strategy is derived, reducing the processing latency of subtasks. The simulation results show that the proposed algorithm outperforms other baseline schemes and improves the application processing success rate.

1. Introduction

At present, the rapid development of the Internet of Things and 5G mobile technology has promoted a wide range of applications, such as augmented reality (AR)/virtual reality (VR), 4K/8K video transmission, and other emerging applications [1,2,3]. These applications require low latency, high efficiency, and network reliability. When large amounts of data are offloaded to remote cloud centers, the user’s quality of experience (QoE) suffers [4]. Mobile edge computing (MEC), proposed by the European Telecommunications Standards Institute (ETSI), has therefore attracted attention. MEC migrates computing capabilities to the edge of mobile networks, effectively reducing response latency [5].
The terrestrial network also has shortcomings. It may not be able to provide ubiquitous coverage for forests, rural areas, mountains, and marine areas. Moreover, its anti-interference and anti-disaster capabilities are weak: in the face of emergencies, the traditional terrestrial network may encounter problems such as traffic overload or users moving out of base station coverage. Satellite communication has the advantages of a long communication distance, a wide coverage area, and immunity to terrain and geological disasters, which enables the satellite network to provide resilient support when the terrestrial network cannot meet demand. Satellite nodes have significant advantages in providing large-scale broadcast services for the ground and in efficiently distributing multimedia services. In addition, the satellite-ground link provides a new backhaul option for the terrestrial network: when terrestrial capacity is saturated or fails, the satellite network can provide a space-based backhaul service, in which ground data are relayed back to the ground through the satellite. This backhaul method can effectively relieve the pressure on the terrestrial network and ensure smooth and stable data transmission [2,3,4,5,6]. The core idea of introducing edge computing into the satellite-ground collaborative network is to extend the cloud computing platform to the network edge or even to the user terminal itself. The multi-level, heterogeneous computing resources and global coverage of the satellite network, together with optimized task offloading and resource allocation, can provide users with more efficient and convenient computing services and improve their service experience. At the same time, this also helps reduce redundant network traffic and optimize network performance [5,6,7].
Due to the limited computing power of local devices, when a large number of user devices generate computing tasks, users prefer to offload as many tasks as possible to satellite edge computing nodes, because this can reduce processing delays. However, because satellite resources are limited, it is a challenge to develop efficient task scheduling and resource allocation algorithms that reduce processing delays and improve the application success rate in satellite network edge computing.
Therefore, the main contributions of this paper are as follows:
  • Considering the interdependence of tasks in satellite edge computing, where tasks can be processed at satellite edge nodes, an application completion rate model is established in this paper.
  • A scheduling algorithm based on the in-degree of tasks (TBID) is proposed, and the processing order of tasks is obtained. The subtasks with an in-degree of 0 in the directed acyclic graph (DAG) are scheduled each time to form a subtask set, and the tasks in the task set are processed each time and then deleted. The application program chooses whether to process in the satellite or locally according to the deadline delay.
  • An improved sparrow search algorithm is proposed to solve the resource allocation problem of subtasks scheduled in each iteration. The opposition-based learning is introduced to increase the initial population diversity. The random search mechanism from the whale optimization algorithm is introduced to improve the global search ability of the algorithm. The problem of local optimum is solved by Cauchy variation.
The rest of this paper is organized as follows: Section 2 summarizes current research on application scheduling and resource allocation. Section 3 describes the system model, formulates the task success rate model, and presents the TBID scheduling algorithm and the improved sparrow search algorithm. The performance of the proposed satellite edge resource allocation method is evaluated through simulations in Section 4. Finally, the paper is concluded in Section 5.

2. Related Research

At present, the allocation of edge computing resources has become a research focus for scholars both domestically and internationally. Cui et al. used the Lagrange multiplier method to obtain optimal computing and communication resource allocation, aiming to attain the optimal resource allocation scheme, and applied deep reinforcement learning-based methods to learn optimal offloading decisions [8]. Qiu et al. proposed a software-defined satellite-terrestrial network (STN) to jointly manage network, cache, and computing resources, and used a deep Q-learning approach to solve it [9]. Wang et al. proposed a three-layer network architecture combined with the software-defined network model to guide inter-satellite link connection and inter-satellite link resource scheduling; they proposed a K-means algorithm (AKG) and a spanning tree algorithm based on breadth-first search (BFST) [10]. Jia et al. proposed a new algorithm for joint task offloading and communication computing resource optimization (JTO-CCRO), in which the resource optimization subproblem is a convex optimization problem solvable by the Lagrange multiplier method [11]. Song et al. proposed a new MEC framework for a terrestrial satellite Internet of Things (IoT). This framework utilizes Terrestrial Satellite Terminals (TSTs) to divide the computation offloading of Internet of Mobile Devices (IMDs) to low Earth orbit satellites into two stages: ground and space [12]. Cheng et al. proposed a dynamic offloading strategy to minimize the overall latency of tasks in the satellite edge computing (SEC) network and jointly optimized the task offloading strategy, computing resources, and transmit power based on Lyapunov theory [13]. Gao et al. studied the dynamic resource allocation problem of virtual network functions (VNFs) in the satellite edge cloud to minimize both network bandwidth cost and service end-to-end latency [14]. Wei et al. [15,16,17,18,19,20] studied the problem of user computation offloading in the low Earth orbit (LEO) satellite network and formulated a joint optimization problem to minimize task completion latency and energy consumption. However, this literature only considered the processing of the overall task and did not consider the dependencies between tasks.
In addition, Zhu et al. allocated an execution time for each task according to task soft deadlines and task dependency constraints, solving the scheduling difficulties caused by dependency constraints, and proposed an adaptive offloading and scheduling algorithm for dependent tasks in a mobile edge computing environment [21]. Sun et al. proposed a novel Application-driven Task Offloading Strategy (ATOS) based on deep reinforcement learning; this strategy achieves joint optimization by adding a preliminary sorting mechanism and proposes a heuristic algorithm that determines the processing order of parallel subtasks by introducing a new factor [22]. Sadatdiynov et al. proposed a constrained multi-objective computation offloading optimization solution to address the task dependency problem under limited resources [23]. Zahid Halim et al. proposed a task scheduling method based on improved evolutionary computation that minimizes application execution time [24], and Wang et al. proposed an intelligent task offloading scheme leveraging off-policy reinforcement learning empowered by a Sequence-to-Sequence (S2S) neural network [25]. Chen et al. addressed dependent task offloading and edge computing in multi-user scenarios, modeling the task offloading problem as a Markov decision process (MDP) [26]. However, this literature only considered the processing of task dependencies and did not consider resource allocation for the tasks. In summary, when multiple users offload a large number of latency-sensitive tasks at the same time, the dependencies between tasks greatly affect application completion time. Therefore, the impact of inter-task dependencies on the effectiveness of resource allocation needs to be considered, which existing studies have not done.
Therefore, considering the satellite edge environment, this paper proposes a resource scheduling algorithm for joint computing communication resource allocation while considering task dependencies.
The above literature is summarized in Table 1, in which √ indicates that the corresponding aspect has been studied.

3. System Models and Problem Formulation

3.1. Network Model

The network architecture adopts a satellite-ground network structure. In this architecture, there is a low-orbit satellite on which edge computing nodes can be deployed, and multiple users are randomly distributed in the area covered by the satellite. Satellite edge servers directly serving end users can reduce the response latency of end devices. In the network scenario, various devices such as user terminals, data monitoring devices, and IoT devices are connected to the LEO satellite network, and the LEO satellite undertakes the computing services. Because each DAG task has a different tolerable delay, it has a different priority, and different processing methods are selected based on these priorities. The scenario is shown in Figure 1.

3.2. Task Model

3.2.1. Application Model

The satellite period is discretized into n time slots. There are M mobile devices in the system; the device set is denoted M = {1, 2, …, m} and indexed by i. The application generated on each mobile device can be divided into multiple tasks with dependencies, and the computation task set is denoted N = {1, 2, …, n}, indexed by j. Owing to the dependencies between the divided tasks, the structure of an application can be represented by a directed acyclic graph (DAG), denoted G_i = (V_i, E_i), where V_i = (v_{i,1}, v_{i,2}, …, v_{i,n}) is the set of subtask nodes of the ith application. For a task v_{i,j} = (C_{i,j}, D_{i,j}), C_{i,j} is the number of CPU cycles required to process the task and D_{i,j} is the size of the task in bits; the deadline of the application is denoted Td_i. E_i = { e_{j,k}^i = (v_{i,j}, v_{i,k}) | v_{i,j}, v_{i,k} ∈ V_i } is the set of directed edges between computation tasks. A directed edge between two vertices indicates that the end vertex cannot be processed before the start vertex; that is, there is a dependency between the tasks. When a task is offloaded, the MEC server allocates resources to it until the task is processed. At the beginning of each time slot, each mobile device requests processing of an application, and the task processing flow is as follows:
  • At the beginning of each time slot, multiple users on the ground initiate a task request.
  • Through the scheduling algorithm, the task processing approach and the set of subtasks to be processed are determined.
  • If the task needs to be offloaded to the satellite, the resource allocation algorithm obtains the resources to be allocated.
  • The satellite edge node calculates the results for the ground user.
According to the task processing flow, the total delay generated by the whole system can be divided into processing delay and transmission delay. The overall processing latency of the application is then compared with the maximum tolerated delay in the task model: if it is smaller, processing succeeds; otherwise, it fails.

3.2.2. Application Completion Rate Model

The transmission delay of subtask offloading to the edge computing server can be expressed as:
T_{i,j}^{us} = D_{i,j} / R_{i,j}^{us}        (1)
R_{i,j}^{us} denotes the uplink transmission rate (unit: Mbps), expressed as:
R_{i,j}^{us} = B_{i,j} \log_2 ( 1 + P_{i,j} g_{i,j} / N )        (2)
The transmit power of subtask v_{i,j} is denoted P_{i,j}, the channel gain g_{i,j}, and the white Gaussian noise power N; B_{i,j} is the allocated bandwidth resource in MHz.
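As a minimal sketch, the uplink rate of Equation (2) and the transmission delay of Equation (1) can be computed as follows; the link values here (10 MHz bandwidth, 0.5 W transmit power, channel gain 1e-6, noise power 1e-10 W, a 2 Mbit subtask) are hypothetical placeholders, not parameters from this paper.

```python
import math

def uplink_rate_mbps(bandwidth_mhz, tx_power_w, channel_gain, noise_w):
    # Shannon capacity, Eq. (2): R = B * log2(1 + P * g / N)
    return bandwidth_mhz * math.log2(1 + tx_power_w * channel_gain / noise_w)

def transmission_delay_s(task_bits, rate_mbps):
    # Eq. (1): T_us = D / R, converting the rate from Mbps to bit/s
    return task_bits / (rate_mbps * 1e6)

# Hypothetical link values
rate = uplink_rate_mbps(10.0, 0.5, 1e-6, 1e-10)
delay = transmission_delay_s(2e6, rate)
```

Because the rate enters the delay in the denominator, allocating more bandwidth B_{i,j} directly shortens the uplink transmission delay of the subtask.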
The processing delay of the subtask offloading to the MEC server is:
T_{i,j}^{sec} = C_{i,j} / F_{i,j}        (3)
F_{i,j} denotes the computing resources (in cycles/s) allocated to the task.
The delay of a locally computed task consists of a processing delay and a waiting delay. Since local tasks are processed one at a time, the waiting time of task v_{i,j} is the sum of the computing times of all tasks preceding it in the task set:
T_{i,j}^{l} = C_{i,j} / F_l + T_{i,j}^{l,w}        (4)
F_l denotes the local computing resources (unit: cycles/s), and T_{i,j}^{l,w} is the waiting time.
According to Equations (1)–(4), the total delay in task completion is:
T_{i,j}^{s} = ( 1 − a_{i,j} ) T_{i,j}^{l} + a_{i,j} ( T_{i,j}^{sec} + T_{i,j}^{us} + 2s/v )        (5)
where a_{i,j} indicates the processing mode of the task, s is the distance from the mobile device to the satellite node, and v is the speed of light, so 2s/v is the round-trip propagation delay.
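Equation (5) combines the two processing modes with the binary decision a_{i,j}; a small sketch (with hypothetical delay values) makes the switch explicit:

```python
def total_delay(a, t_local, t_sec, t_us, dist_m, v=3e8):
    # Eq. (5): a = 0 -> purely local delay; a = 1 -> uplink + edge processing
    # plus the round-trip propagation delay 2*s/v.
    return (1 - a) * t_local + a * (t_sec + t_us + 2 * dist_m / v)

# Hypothetical values: 0.5 s local delay, 0.02 s edge processing,
# 0.016 s uplink, satellite at 500 km.
local = total_delay(0, 0.5, 0.02, 0.016, 500e3)      # processed on the ground
offloaded = total_delay(1, 0.5, 0.02, 0.016, 500e3)  # processed on the satellite
```

With these numbers the offloaded path is far faster than local execution, which is why users prefer offloading whenever satellite resources allow it.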
To satisfy the dependencies between tasks, the scheduling algorithm should arrange the processing sequence reasonably, enabling pending tasks to acquire as many resources as possible and thereby shortening processing time. Therefore, a scheduling scheme is set up: each DAG is processed according to the scheduling algorithm, and only when all tasks with an in-degree of 0 in the current task graph have been processed (and all resources completely released) can the next scheduling round start. EST_i denotes the maximum completion delay over all predecessor tasks of the last task in the ith application; when a task's out-degree in the task graph is 0, T_{i,exit}^{s} denotes the processing time of that last task. L is the set of tasks with a current in-degree of 0 processed at the edge node, sl_i is the number of times the scheduling algorithm has so far processed the DAG, and SL_i is the maximum number of layers of the application DAG to be scheduled. Therefore, the completion time of the application can be expressed as:
EST_i = \sum_{sl_i = 1}^{SL_i − 1} \max_{j ∈ L} T_{i,j}^{s} ,    T_i = EST_i + T_{i,exit}^{s}        (6)
The constraint optimization model is established as follows:
max λ = ( \sum_{i=1}^{m} I_i ) / m
s.t.   C1: I_i = { 1, T_i ≤ Td_i ; 0, T_i > Td_i }
       C2: \sum_{v_{i,j} ∈ L} B_{i,j} ≤ B_max
       C3: \sum_{v_{i,j} ∈ L} F_{i,j} ≤ F_max
       C4: a_{i,j} ∈ {0, 1}, ∀ i ∈ M, j ∈ N        (7)
m is the total number of applications to be processed. From Equation (6) we obtain T_i. When application i completes within the deadline and returns its result, T_i ≤ Td_i and I_i = 1, indicating that the application was successfully processed; otherwise I_i = 0. A, F, and B denote the offloading policy and the resource allocation strategies: A = {a_{i,j}}, i ∈ M, j ∈ N, is the set of subtask offloading decisions; F = {F_{i,j}} is the set of computing resource allocations, with F_max the maximum computing resources that can be allocated to subtasks; B = {B_{i,j}} is the set of bandwidth resource allocations, with B_max the maximum bandwidth that can be allocated.
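The objective λ of Equation (7) is simply the fraction of applications finishing within their deadlines; a minimal sketch with made-up completion times:

```python
def success_rate(completion_times, deadlines):
    # Objective lambda in Eq. (7): fraction of applications with T_i <= Td_i
    done = sum(1 for t, d in zip(completion_times, deadlines) if t <= d)
    return done / len(completion_times)

# Hypothetical example: the second and third applications meet their deadlines.
lam = success_rate([1.2, 0.8, 3.5], [1.0, 1.0, 4.0])
```

The scheduler and the resource allocator jointly try to push this ratio toward 1 subject to constraints C2-C4.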

3.3. Task Scheduling Model

3.3.1. Scheduling Model Based on Subtask Degree

Because the task structures of various types of applications differ, there may be dependency constraints between tasks, and the execution of such tasks is strongly affected by their predecessor tasks, so the dependencies between tasks need to be considered before scheduling subtasks. As shown in Figure 2, the numbers 1 to 6 represent the sequence numbers of the subtask nodes in each DAG, and the arrows between nodes represent the dependencies between subtasks. Thus, the first DAG in Figure 2 has four task nodes, the second has six, and the third has two.
Different types of applications requested by users have different completion time requirements. Therefore, to improve the completion rate of applications within the system, a priority queue is established at the outset, sorted by application deadline. If two applications have the same deadline, the one with the larger current DAG data volume is given higher priority. Applications to be scheduled are divided into high and low priority: high-priority applications are executed on satellite edge nodes (a_{i,j} = 1), and low-priority ones are executed on the ground (a_{i,j} = 0). Since local execution can process only one task at a time, locally executed subtasks with smaller deadlines are prioritized.
The TBID scheduling algorithm selects the computing tasks with an in-degree of 0 in each DAG and generates a task set defined as L. Each time a group of subtasks is completed during scheduling, the tasks and their associated edges are removed from the DAG, changing its structure; the nodes whose in-degree becomes 0 become ready subtasks for the next dispatch. The specific scheduling steps of the application are described in Algorithm 1, and the steps for solving the processing delay of the subtask set through the improved sparrow search algorithm are described in the next section. The pseudo-code of the scheduling algorithm is as follows:
Algorithm 1 TBID scheduling algorithm
Input: Applications for all users
Output: The completion rate of the application
1. for i = 1 to m do
2.     Partition the applications by priority into sets of high- and low-priority
  tasks, with high priority executed at satellite edge nodes and low priority executed locally
3.     while  G i is not null
4.           Use the BFS algorithm to traverse the application’s DAG
5.      end while
6.      Obtain the number of layers for each DAG S L i
7. end for
8. for i = 1 to m do
9.     while  G i is not null
10.          Traverse the application’s DAG and add nodes with an in-degree
            of 0 to subtask set L.
11.          According to Algorithm 2, the resource allocation strategy F and B of sub-
            task set L and the completion delay of subtask set tasks are obtained
12.          Delete subtask nodes and task-related edges
13.    end while
14.    The application completion delay is calculated according to Equation (6)
15. end for
16. Calculate the application completion rate according to Equation (7).
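The layer-peeling loop of Algorithm 1 (lines 9-13) can be sketched as plain topological peeling; this is an illustrative implementation, not the authors' code, with nodes numbered 1..n as in Figure 2:

```python
from collections import defaultdict

def tbid_layers(num_nodes, edges):
    """Peel off in-degree-0 subtasks layer by layer (Algorithm 1, lines 9-13).

    Nodes are 1..num_nodes; edges are (pred, succ) dependency pairs.
    Each returned layer is one scheduled subtask set L.
    """
    indeg = {v: 0 for v in range(1, num_nodes + 1)}
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    layers = []
    ready = [v for v in indeg if indeg[v] == 0]
    while ready:
        layers.append(sorted(ready))       # subtask set L for this round
        nxt = []
        for u in ready:
            for v in succ[u]:              # delete node and its outgoing edges
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        ready = nxt
    return layers

# A small DAG: tasks 1 and 2 must finish before 3, and 3 before 4.
layers = tbid_layers(4, [(1, 3), (2, 3), (3, 4)])
```

The number of layers returned corresponds to SL_i in Equation (6), and each layer is handed to the resource allocation step (Algorithm 2) before the next one is released.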
Algorithm 2 Satellite edge resource allocation method based on improved sparrow algorithm
Inputs: subtask set, the proportion of producers, the proportion of investigators,
     the early-warning value, the maximum number of iterations T, the population size
Output: optimal resource allocation vectors F, B, subtask set completion delay
1. The TBID scheduling algorithm is used to obtain the subtask set L
2. According to Equation (12), opposition-based learning is used to initialize the population
3. for t = 1 to T do
4.   Calculate the fitness value according to Equations (8)–(11) and find the optimal fit
5.   for i = 1 to the number of producers do
6.      Update the location of the sparrow finder according to Equation (13).
7.   end for
8.   for i = number of producers +1 to population number do
9.      Update the position of the sparrow joiner according to Equation (14).
10.  end for
11.  for i = 1 to the number of investigators do
12.    Update the position of the Sparrow Vigilant according to Equation (15).
13.  end for
14.    The Cauchy mutation of Equation (16) is used to perturb the population
   position so that individual sparrows can jump out of the local optimum.
15.    if the fitness of the new solution is better than that of the previous solution,
        then update the global optimal solution to the new solution
16.  t = t + 1
17. end for
18. return the completion delay of tasks in the optimal resource allocation strategy F,
      B, and subtask sets

3.3.2. Resource Allocation Method Based on the Improved Sparrow Search Algorithm

Compared with most traditional optimization algorithms, the swarm intelligence optimization algorithm is characterized by the simple cooperation of individuals in the group and is easy to implement. The advantage of the sparrow search algorithm lies in its specific search strategy and diversity maintenance mechanism, which helps to avoid falling into the local optimal solution and improve the global search ability. This enables the algorithm to search for the optimal solution more effectively in the resource allocation problem. Therefore, the sparrow search algorithm is chosen to solve the resource allocation problem.
In the original sparrow algorithm, the lack of diversity among individuals in the population easily leads to a local optimum. The value of each dimension of the producer decreases as the number of iterations increases, which makes the population overly concentrated, reduces population diversity, and degrades the optimization effect. In the later iterations of the basic sparrow algorithm, population diversity decreases further, and the algorithm easily falls into a local optimum. To solve these problems, an improved sparrow search algorithm is proposed. The mapping between the algorithm and the resource allocation optimization problem is shown in Table 2:
In the improved sparrow algorithm, the population number of sparrows is the set of resource allocation strategies, and the position of each sparrow is represented by a vector group X. The content of the constructed fitness function should include optimization goals and constraints. The fitness function is the completion delay of the subtask set. In order to meet the resource constraint requirements, a penalty function should be added to the fitness function to ensure that the resources allocated each time do not exceed the maximum value [27]. The optimal individual is found by the constructed fitness function. According to the form of the fitness function in the literature [27], the fitness function can be expressed as
T_{sl_i} = \max_{j ∈ L} T_{i,j}^{s}        (8)
f = T_{sl_i} + δ_1 · penalty_1 + δ_2 · penalty_2        (9)
penalty_1 = { ( \sum_{j ∈ L} F_{i,j} − F_max )², \sum_{j ∈ L} F_{i,j} > F_max ; 0, \sum_{j ∈ L} F_{i,j} ≤ F_max }        (10)
penalty_2 = { ( \sum_{j ∈ L} B_{i,j} − B_max )², \sum_{j ∈ L} B_{i,j} > B_max ; 0, \sum_{j ∈ L} B_{i,j} ≤ B_max }        (11)
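The penalized fitness of Equations (9)-(11) can be sketched directly; the penalty weights d1 and d2 below stand in for δ_1 and δ_2 and are hypothetical values:

```python
def fitness(layer_delay, F_alloc, B_alloc, F_max, B_max, d1=1e3, d2=1e3):
    # Eqs. (9)-(11): completion delay of the subtask set plus quadratic
    # penalties whenever the summed allocations exceed the resource caps.
    pen_f = max(sum(F_alloc) - F_max, 0.0) ** 2
    pen_b = max(sum(B_alloc) - B_max, 0.0) ** 2
    return layer_delay + d1 * pen_f + d2 * pen_b

# A feasible allocation pays no penalty; an infeasible one is heavily penalized.
feasible = fitness(0.04, [2.0, 3.0], [4.0, 5.0], F_max=10.0, B_max=10.0)
infeasible = fitness(0.04, [6.0, 6.0], [4.0, 5.0], F_max=10.0, B_max=10.0)
```

Large penalty weights make constraint-violating individuals effectively uncompetitive, steering the sparrow population toward allocations that respect C2 and C3 of Equation (7).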
The coding length of an individual is twice the length of each scheduled task set: positions 1 to L encode the computing resource allocation, and positions L + 1 to 2L encode the bandwidth resource allocation.
(1)
Initialize the population in combination with the opposition-based learning strategy.
Due to the random initialization of the population at the beginning of the SSA algorithm, the diversity of the population is relatively poor and uneven in terms of distribution. The OBL strategy calculates the opposite solution of the current solution, finally selects a better fitness solution from the current solution and the corresponding opposite solution, and updates the individual. The opposition-based learning strategy is widely used by scholars in various algorithms. The literature [28] indicates that the opposition-based learning strategy can be used to generate reverse solutions in the search space to increase the diversity of the population. Therefore, the opposition-based learning strategy is introduced when the sparrow population is initialized to expand the search space of the sparrow to maintain population diversity and enhance the global search ability. The initialization is as follows:
X_{i,j}^{*} = ub_{i,j} + lb_{i,j} − X_{i,j}        (12)
where ub_{i,j} is the upper bound of the search space and lb_{i,j} is the lower bound. After the randomly generated original sparrow population and the opposite sparrow population are merged into a new population, the fitness of the new population is calculated.
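A minimal sketch of opposition-based initialization with Equation (12), assuming a simple sum-based toy fitness (lower is better) rather than the paper's delay fitness:

```python
import random

def obl_init(pop_size, dim, lb, ub, fit, rng=random.Random(0)):
    # Eq. (12): for every random individual X, build the opposite individual
    # ub + lb - X, then keep the pop_size fittest from the merged population.
    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    opp = [[ub + lb - x for x in ind] for ind in pop]
    merged = sorted(pop + opp, key=fit)
    return merged[:pop_size]

# Toy fitness: minimize the sum of the allocation vector.
best = obl_init(pop_size=5, dim=4, lb=0.0, ub=10.0, fit=sum)
```

Because every random individual contributes a mirrored counterpart, the initial population covers the search space more evenly than purely random sampling.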
(2)
Producer position update combined with the random search strategy of the whale optimization algorithm.
In the sparrow search algorithm, global exploration is mainly undertaken by the producers. When the food found by a producer lies at a local optimum, a large number of scroungers pour into that location, causing the loss of population position diversity and entrapment in the local optimum, while the overall search performance deteriorates. Therefore, the producer position update is fused with the random search mechanism of the whale optimization algorithm to improve the global search ability. The producer position is updated as:
X_{i,j}^{t+1} = { X_rand^{t} − A · | C · X_rand^{t} − X_{i,j}^{t} | ,   R_2 < ST ;   X_{i,j}^{t} + Q · L ,   R_2 ≥ ST }        (13)
t is the current iteration number; A and C are coefficients, A = 2 a r_1 − a, C = 2 r_2, a = 2 − 2t/T, where r_1 and r_2 are random numbers in (0, 1), a is the convergence factor linearly decreasing from 2 to 0, and T is the maximum number of iterations. X_{i,j}^{t} is the position of the ith sparrow in the jth dimension. R_2 < ST indicates that there are no predators nearby and the producer can conduct a wide-ranging search; when a scout finds a predator, it immediately sends an alarm signal and R_2 ≥ ST. R_2 ∈ [0, 1] is the early-warning value, and ST ∈ [0.5, 1] is the safety value. Q is a random number following a normal distribution, and L is a 1 × d matrix of all ones.
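The two branches of Equation (13) can be sketched as below; this is an illustrative fragment, not the authors' implementation, and the sample positions are made up:

```python
import random

def producer_update(x, x_rand, t, T, R2, ST, rng=random.Random(1)):
    # Eq. (13): when R2 < ST (no predator), apply the WOA random-search move
    # X_rand - A*|C*X_rand - X|; otherwise jump by a normal random step Q.
    if R2 < ST:
        a = 2 - 2 * t / T                  # convergence factor, 2 -> 0
        A = 2 * a * rng.random() - a
        C = 2 * rng.random()
        return [xr - A * abs(C * xr - xi) for xi, xr in zip(x, x_rand)]
    Q = rng.gauss(0, 1)
    return [xi + Q for xi in x]            # Q * L with L an all-ones vector

safe = producer_update([1.0, 2.0], [3.0, 4.0], t=1, T=50, R2=0.3, ST=0.8)
alarm = producer_update([1.0, 2.0], [3.0, 4.0], t=1, T=50, R2=0.9, ST=0.8)
```

Steering toward a random individual X_rand, rather than shrinking toward the current best, is what restores the global exploration that the basic producer rule loses in later iterations.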
The scrounger position is updated as:
X_{i,j}^{t+1} = { Q · \exp( ( X_worst^{t} − X_{i,j}^{t} ) / i² ) ,   i > n/2 ;   X_p^{t+1} + | X_{i,j}^{t} − X_p^{t+1} | · A⁺ · L ,   i ≤ n/2 }        (14)
X_p is the optimal position currently occupied by the producer, and X_worst is the current global worst position. A is a 1 × d matrix whose elements are randomly assigned 1 or −1, and A⁺ = Aᵀ(AAᵀ)⁻¹. When i > n/2, the ith scrounger with a lower fitness value has not obtained food, is in a starving state, and must fly elsewhere to feed and gain energy. Otherwise, when a scrounger senses that the producer has found a better food source, it continues to search for food near the producer. A subset of individuals is then selected as investigators.
The investigator position is updated as:
X_{i,j}^{t+1} = { X_best^{t} + β · | X_{i,j}^{t} − X_best^{t} | ,   f_i > f_g ;   X_{i,j}^{t} + k · ( | X_{i,j}^{t} − X_worst^{t} | / ( ( f_i − f_w ) + ε ) ) ,   f_i = f_g }        (15)
X_best^{t} is the current global optimal position. The step control parameter β is a random number following a normal distribution with mean 0 and variance 1. k ∈ [−1, 1] is a random number, and f_i is the fitness value of the current individual sparrow. f_g and f_w are the current global best and worst fitness values, respectively. ε is a small constant that avoids a zero denominator.
(3)
Population perturbation combined with Cauchy mutation
In the later stages of the algorithm, the sparrow population gradually approaches the optimal individual, which leads to a lack of population diversity and premature convergence. Therefore, a Cauchy mutation operator is added in the late stage of the sparrow search. During the iterations of the search algorithm, the locally optimal solution is continuously perturbed while sparrows with better fitness values are retained. This strategy prevents the sparrow population from becoming trapped in a locally optimal solution during the search, and the position is updated to:
X_best^{*} = X_best^{t} + X_best^{t} · Cauchy(0, 1)        (16)
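Equation (16) can be sketched as follows; the standard Cauchy sample is drawn via the inverse CDF, and the position vector below is a made-up example:

```python
import math
import random

def cauchy_mutation(x_best, rng=random.Random(42)):
    # Eq. (16): perturb the global best with multiplicative Cauchy(0, 1) noise.
    # A standard Cauchy sample is tan(pi * (u - 0.5)) for uniform u in (0, 1).
    return [xb + xb * math.tan(math.pi * (rng.random() - 0.5)) for xb in x_best]

mutated = cauchy_mutation([5.0, 8.0, 2.0])
```

The heavy tails of the Cauchy distribution occasionally produce very large jumps, which is precisely what lets an individual escape a local optimum that Gaussian noise would rarely leave.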

4. Discussion

This simulation experiment uses STK 11 software to simulate the information transmission network. We consider a single LEO satellite at an altitude of 500 km with a circular coverage area of radius 1000 km. The parameter settings for the simulation in this chapter are shown in Table 3 [29,30,31,32].
This section compares and analyzes application completion rates. The simulation results are displayed and analyzed from three aspects: resource changes, processing methods, and the number of DAGs.
As shown in Figure 3, as satellite computing resources increase, the task completion rates of all schemes trend upward. The all-satellite processing (AS) [20] and random processing (RA) [29] methods have low task completion rates, while the resource allocation scheme based on task in-degree and the improved sparrow search algorithm (TDISSA) achieves a higher completion rate than the other algorithms. When the satellite computing resources were 14 Gcycles/s, the completion rate of TDISSA was 100%, while those of RA and AS were 63.63% and 54.54%, respectively. The more computing resources a satellite has, the more can be allocated to tasks, which reduces task computing latency. However, the computing resources of the satellite’s MEC server are still limited, and there is currently only one satellite node, so only a limited number of tasks can be processed. Therefore, the proposed algorithm offloads only part of each application to the satellite for computation. In AS, every application offloads its tasks to the satellite, which leads to a lower task completion rate, and many applications cannot be processed successfully. The RA method makes random decisions, which are not optimal, resulting in a lower task completion rate as well. The completion rate of the proposed method is higher than that of AS, indicating that joint satellite-ground processing is effective and illustrating the importance of proper task offloading decisions. The completion rate of the average allocation algorithm (AV) was 90.9% [33]. The AV completion rate is lower than that of the proposed method, indicating the importance of resource allocation for tasks offloaded to the satellite.
As shown in Figure 4, the task completion rates of all schemes rise as satellite communication resources increase. AS and RA again have lower completion rates than the other two algorithms, while TDISSA has the highest. When the satellite communication resources reached 14 MHz, the completion rate of TDISSA was 90.9%, versus 63.63% for RA and 54.54% for AS. The more communication resources a satellite has, the more can be allocated to each task, which reduces task transmission latency. However, the communication resources of the satellite’s MEC server remain limited, and with only one satellite node the number of tasks that can be processed is bounded, so only part of each application is offloaded to the satellite for computation. In AS, every application offloads all of its tasks to the satellite, so many applications cannot be processed successfully, yielding a low completion rate; RA’s random decisions are not optimal, so its completion rate is also low. The completion rate of the proposed method exceeds that of AS, indicating that joint satellite-ground processing is effective and underscoring the importance of appropriate task offloading decisions. The completion rate of AV is 81.81%, lower than that of the proposed method, which again shows the importance of allocating resources properly to the tasks offloaded to the satellite.
As shown in Figure 5, the task completion rates of all schemes fall as the number of DAGs increases. AS and RA have low completion rates, while TDISSA’s remains consistently higher. When 16 DAGs must be processed, the completion rate of TDISSA was 12.5%, versus 6.25% for RA and 0% for AS. Because the computing resources of the satellite’s MEC server are limited and there is only one satellite node, the number of tasks that can be processed is bounded; as the number of DAGs grows, the processing success rate drops. In AS, every application offloads its tasks to the satellite, so as the number of DAGs increases, many applications cannot be processed successfully; RA’s random decisions are not optimal, so its completion rate is also low. The completion rate of the proposed method remains above that of AS, indicating that joint satellite-ground processing is effective and underscoring the importance of the task offloading decision. The completion rate of AV is 6.25%, lower than that of the proposed method, which shows the importance of allocating resources properly to the tasks offloaded to the satellite.
As shown in Figure 6, the task completion rates of all schemes rise as ground computing power increases. The all-local processing (AL) method [31] and RA have lower completion rates, while TDISSA has the highest. When the local computing power reached 4 Gcycles/s, the completion rate of TDISSA was 100%, versus 72.72% for RA and 63.63% for AL. The reason is that the computing resources of the ground terminal are limited and can only handle one task at a time, so only part of each application is processed on the ground. In AL, every application computes all of its tasks on the ground, so many applications cannot be processed successfully, yielding a low completion rate; RA’s random decisions are not optimal, so its completion rate is also low. The completion rate of the proposed method exceeds that of AL, indicating that joint satellite-ground processing is effective and underscoring the importance of proper task offloading decisions. The completion rate of AV is 90.9%, lower than that of the proposed method, which shows the importance of allocating resources properly to the tasks offloaded to the satellite.

5. Conclusions

In satellite edge computing, when the dependencies between tasks are considered, improper resource allocation leads to excessive subtask processing delays and a low application processing success rate. To solve this problem, this paper first proposes a satellite-ground jointly deployed network architecture based on task dependency, schedules the dependent subtasks of each application DAG, and proposes the TBID scheduling algorithm. Secondly, to overcome the tendency of the traditional sparrow search algorithm to fall into local optima during the search, this paper improves it and uses the improved algorithm to allocate resources: opposition-based learning, a random search mechanism, and a Cauchy mutation operator are introduced to strengthen the algorithm’s convergence and global search ability. The simulation results show that when the satellite computing resource is 14 Gcycles/s, the communication resource is 14 MHz, the number of DAGs to be processed is 16, and the local computing power is 4 Gcycles/s, the completion rates of the proposed scheme are 100%, 90.9%, 12.5%, and 100%, respectively, higher than those of the other benchmark methods. In the future, we will consider using the resources of adjacent satellites to schedule processing tasks in a multi-satellite network architecture and address the problem of real-time task scheduling.
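The scheduling idea summarized above (repeatedly selecting subtasks whose in-degree is 0, as stated in the abstract) corresponds to topological ordering of the DAG. A minimal sketch under that reading; the function name and the example edge list are illustrative, not the authors' implementation:

```python
from collections import deque

def schedule_by_indegree(num_tasks, edges):
    """Order subtasks so every dependency precedes its dependents,
    repeatedly releasing tasks whose in-degree has dropped to 0."""
    indeg = [0] * num_tasks
    succ = [[] for _ in range(num_tasks)]
    for u, v in edges:          # edge u -> v: subtask v depends on subtask u
        succ[u].append(v)
        indeg[v] += 1
    ready = deque(t for t in range(num_tasks) if indeg[t] == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for v in succ[t]:
            indeg[v] -= 1
            if indeg[v] == 0:   # all predecessors processed: v becomes schedulable
                ready.append(v)
    return order                # shorter than num_tasks iff the graph has a cycle

# A 5-subtask DAG: 0 feeds 1 and 2, both feed 3, which feeds 4
print(schedule_by_indegree(5, [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]))  # [0, 1, 2, 3, 4]
```

Tie-breaking among simultaneously ready subtasks (here, FIFO order) is where a priority rule such as the paper's task-degree criterion would plug in.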

Author Contributions

Conceptualization, Z.L. and Y.J.; Methodology, Z.L.; Software, Y.J.; Validation, Y.J. and J.R.; Formal analysis, Y.J.; Investigation, Y.J. and J.R.; Resources, Z.L.; Data curation, Y.J.; Writing—original draft, Y.J.; Writing—review and editing, Z.L., Y.J. and J.R.; Visualization, Y.J.; Supervision, J.R.; Project administration, J.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no funding.

Data Availability Statement

The processed data required to reproduce these findings cannot be shared as the data also form part of an ongoing study.

Acknowledgments

The authors would like to thank the editor and the anonymous reviewers for their helpful comments, which improved the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Taleb, T.; Samdanis, K.; Mada, B.; Flinck, H.; Dutta, S.; Sabella, D. On multi-access edge computing: A survey of the emerging 5G network edge cloud architecture and orchestration. IEEE Commun. Surv. Tutor. 2017, 19, 1657–1681. [Google Scholar] [CrossRef]
  2. Yao, H.; Wang, L.; Wang, X.; Lu, Z.; Liu, Y. The Space-Terrestrial Integrated Network (STIN): An Overview. IEEE Commun. Mag. 2018, 56, 178–185. [Google Scholar] [CrossRef]
  3. Tang, Q.; Xie, R.; Liu, X.; Zhang, Y.; He, C.; Li, C.; Huang, T. Integrating MEC’s satellite-ground collaborative network: Architecture, key technologies, and challenges. J. Commun. 2020, 41, 162–181. [Google Scholar]
  4. Liu, Y.; Wang, S.; Zhao, Q.; Du, S.; Zhou, A.; Ma, X.; Yang, F. Dependency-aware task scheduling in vehicular edge computing. IEEE Internet Things J. 2020, 7, 4961–4971. [Google Scholar] [CrossRef]
  5. Ma, Y.; Liang, W.; Huang, M.; Xu, W.; Guo, S. Virtual Network Function Service Provisioning in MEC via Trading Off the Usages between Computing and Communication Resources. In Proceedings of the IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Paris, France, 29 April–2 May 2019; pp. 1–7. [Google Scholar]
  6. Zhang, Z.; Zhang, W.; Tseng, F.-H. Satellite Mobile Edge Computing: Improving QoS of High-Speed Satellite-Terrestrial Networks Using Edge Computing Techniques. IEEE Netw. 2019, 33, 70–76. [Google Scholar] [CrossRef]
  7. Xie, R.; Tang, Q.; Wang, Q.; Liu, X.; Yu, F.R.; Huang, T. Satellite-terrestrial integrated edge computing networks: Architecture, challenges, and open issues. IEEE Netw. 2020, 34, 224–231. [Google Scholar] [CrossRef]
  8. Cui, G.; Long, Y.; Xu, L.; Wang, W. Joint offloading and resource allocation for satellite-assisted vehicle-to-vehicle communication. IEEE Syst. J. 2021, 15, 3958–3969. [Google Scholar] [CrossRef]
  9. Qiu, C.; Yao, H.; Yu, F.R.; Xu, F.; Zhao, C. Deep Q-Learning Aided Networking, Caching, and Computing Resources Allocation in Software-Defined Satellite-Terrestrial Networks. IEEE Trans. Veh. Technol. 2019, 68, 5871–5883. [Google Scholar] [CrossRef]
  10. Wang, F.; Jiang, D.; Qi, S.; Qiao, C.; Shi, L. A Dynamic Resource Scheduling Scheme in Edge Computing Satellite Networks. Mob. Netw. Appl. 2021, 26, 597–608. [Google Scholar] [CrossRef]
  11. Jia, M.; Zhang, L.; Wu, J.; Guo, Q.; Gu, X. Joint Computing and Communication Resource Allocation for Edge Computing towards Huge LEO Networks. China Commun. Mag. 2022, 19, 73–84. [Google Scholar] [CrossRef]
  12. Song, Z.; Hao, Y.; Liu, Y.; Sun, X. Energy-Efficient Multiaccess Edge Computing for Terrestrial-Satellite Internet of Things. IEEE Internet Things J. 2021, 8, 14202–14218. [Google Scholar] [CrossRef]
  13. Cheng, L.; Feng, G.; Sun, Y.; Liu, M.; Qin, S. Dynamic Computation Offloading in Satellite Edge Computing. In Proceedings of the IEEE International Conference on Communications (ICC), Seoul, Korea, 16–20 May 2022; pp. 4721–4726. [Google Scholar]
  14. Gao, X.; Liu, R.; Kaushik, A.; Zhang, H. Dynamic Resource Allocation for Virtual Network Function Placement in Satellite Edge Clouds. IEEE Trans. Netw. Sci. Eng. 2022, 9, 2252–2265. [Google Scholar] [CrossRef]
  15. Wei, K.; Tang, Q.; Guo, J.; Zeng, M.; Fei, Z.; Cui, Q. Resource Scheduling and Offloading Strategy Based on LEO Satellite Edge Computing. In Proceedings of the 2021 IEEE 94th Vehicular Technology Conference, Norman, OK, USA, 27–30 September 2021; pp. 1–6. [Google Scholar]
  16. Zhang, S.; Cui, G.; Long, Y.; Wang, W. Joint computing and communication resource allocation for satellite communication networks with edge computing. China Commun. 2021, 18, 236–252. [Google Scholar] [CrossRef]
  17. Tang, Q.; Fei, Z.; Li, B.; Han, Z. Computation Offloading in LEO Satellite Networks with Hybrid Cloud and Edge Computing. IEEE Internet Things J. 2021, 8, 9164–9176. [Google Scholar] [CrossRef]
  18. Li, P.; Wang, Y.; Wang, Z. A Game-Based Joint Task Offloading and Computation Resource Allocation Strategy for Hybrid Edgy-Cloud and Cloudy-Edge Enabled LEO Satellite Networks. In Proceedings of the 2022 IEEE/CIC International Conference on Communications in China (ICCC), Sanshui, Foshan, China, 11–13 August 2022; pp. 868–873. [Google Scholar]
  19. Zhang, H.; Xi, S.; Jiang, H.; Shen, Q.; Shang, B.; Wang, J. Resource Allocation and Offloading Strategy for UAV-Assisted LEO Satellite Edge Computing. Drones 2023, 7, 383. [Google Scholar] [CrossRef]
  20. Tong, M.; Wang, X.; Li, S.; Peng, L. Joint Offloading Decision and Resource Allocation in Mobile Edge Computing-Enabled Satellite-Terrestrial Network. Symmetry 2022, 14, 564. [Google Scholar] [CrossRef]
  21. Zhu, X.; Xiao, Y. Adaptive offloading and scheduling algorithm for big data-based mobile edge computing. Neurocomputing 2022, 485, 285–296. [Google Scholar] [CrossRef]
  22. Sun, M.; Bao, T.; Xie, D.; Lv, H.; Si, G. Towards Application-Driven Task Offloading in Edge Computing Based on Deep Reinforcement Learning. Micromachines 2021, 12, 1011. [Google Scholar] [CrossRef] [PubMed]
  23. Sadatdiynov, K.; Cui, L.; Huang, J. Offloading dependent tasks in MEC-enabled IoT systems: A preference-based hybrid optimization method. Peer-to-Peer Netw. Appl. 2023, 16, 657–674. [Google Scholar] [CrossRef]
  24. Sulaiman, M.; Halim, Z.; Lebbah, M.; Waqas, M.; Tu, S. An Evolutionary Computing-Based Efficient Hybrid Task Scheduling Approach for Heterogeneous Computing Environment. J. Grid Comput. 2021, 19, 11. [Google Scholar] [CrossRef]
  25. Wang, J.; Hu, J.; Min, G.; Zhan, W.; Zomaya, A.Y.; Georgalas, N. Dependent Task Offloading for Edge Computing based on Deep Reinforcement Learning. IEEE Trans. Comput. 2022, 71, 2449–2461. [Google Scholar] [CrossRef]
  26. Chen, J.; Yang, Y.; Wang, C.; Zhang, H.; Qiu, C.; Wang, X. Multitask Offloading Strategy Optimization Based on Directed Acyclic Graphs for Edge Computing. IEEE Internet Things J. 2022, 9, 9367–9378. [Google Scholar] [CrossRef]
  27. Huynh, L.N.T.; Pham, Q.-V.; Pham, X.-Q.; Nguyen, T.D.T.; Hossain, M.D.; Huh, E.-N. Efficient Computation Offloading in Multi-Tier Multi-Access Edge Computing Systems: A Particle Swarm Optimization Approach. Appl. Sci. 2020, 10, 203. [Google Scholar] [CrossRef]
  28. Zhao, X.; Yang, F.; Han, Y.; Cui, Y. An Opposition-Based Chaotic Salp Swarm Algorithm for Global Optimization. IEEE Access 2020, 8, 36485–36501. [Google Scholar] [CrossRef]
  29. Li, J.; Shang, Y.; Qin, M.; Yang, Q.; Cheng, N.; Gao, W.; Kwak, K.S. Multiobjective Oriented Task Scheduling in Heterogeneous Mobile Edge Computing Networks. IEEE Trans. Veh. Technol. 2022, 71, 8955–8966. [Google Scholar] [CrossRef]
  30. Ma, S.; Song, S.; Yang, S.; Zhao, J.; Yang, F.; Zhai, L. Dependent tasks offloading based on particle swarm optimization algorithm in multi-access edge computing. Appl. Soft Comput. 2021, 112, 107790. [Google Scholar] [CrossRef]
  31. Zhang, Y.; Chen, J.; Zhou, Y.; Yang, L.; He, B. Dependent task offloading with an energy-latency tradeoff in mobile edge computing. IET Commun. 2022, 16, 1993–2001. [Google Scholar] [CrossRef]
  32. Chai, F.; Zhang, Q.; Yao, H.; Xin, X.; Gao, R.; Guizani, M. Joint Multi-task Offloading and Resource Allocation for Mobile Edge Computing Systems in Satellite IoT. IEEE Trans. Veh. Technol. 2023, 72, 7783–7795. [Google Scholar] [CrossRef]
  33. Cui, G.; Li, X.; Xu, L.; Wang, W. Latency and Energy Optimization for MEC Enhanced SAT-IoT Networks. IEEE Access 2020, 8, 55915–55926. [Google Scholar] [CrossRef]
Figure 1. Architecture diagram of satellite edge network.
Figure 2. Task dependency diagram.
Figure 3. DAG completion rate versus satellite computing resources.
Figure 4. DAG completion rate versus satellite communications resources.
Figure 5. DAG completion rate versus the number of DAG tasks.
Figure 6. DAG completion rate versus local computing capacity.
Table 1. Comparison of the existing literature. The groups [8,11,12], [10,14,18,19], [13,15,16,17,20], [21,24,25], and [22,23,26] are compared along: independent tasks, dependent tasks, satellite computing, local computing, ground computing, resource allocation, and task delay.
Table 2. Mapping between algorithms and optimization problems.
The Improved Sparrow Search Algorithm | Mapping Relationship
Individual vector | Resource allocation result
Individual weight | Result of resource allocation for each subtask
Fitness function | Subtask completion delay
Population | Different collections of resource allocation policies
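The improvements to the sparrow search algorithm named in the abstract and conclusions (opposition-based learning, random search, Cauchy mutation) can be sketched as standalone operators. This is an illustrative sketch of the standard forms of these operators, not the authors' exact formulation; `lb`/`ub` denote the per-dimension bounds of the resource-allocation search space:

```python
import math
import random

def opposition(x, lb, ub):
    """Opposition-based learning: mirror a candidate across the bound midpoints,
    widening coverage of the search space during initialization."""
    return [l + u - xi for xi, l, u in zip(x, lb, ub)]

def cauchy_mutate(x, lb, ub, scale=0.1):
    """Cauchy mutation: heavy-tailed perturbation that helps individuals
    escape local optima; results are clamped back into the feasible region."""
    out = []
    for xi, l, u in zip(x, lb, ub):
        # standard Cauchy sample via the inverse-CDF transform tan(pi*(U - 1/2))
        step = scale * (u - l) * math.tan(math.pi * (random.random() - 0.5))
        out.append(min(max(xi + step, l), u))
    return out
```

In a typical improved-SSA loop, the opposite population enriches initialization and the Cauchy mutation is applied to stagnating or best-so-far individuals, keeping whichever variant has the better fitness.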
Table 3. Simulation parameters.
Parameter | Value
Maximum available computing resources of the satellite MEC | 10 Gcycles/s
Local computing resources | 2.5 Gcycles/s
Satellite MEC available bandwidth resources | 10 MHz
Subtask size | [50 kb, 100 kb]
Number of subtasks per DAG | 10–15
CPU computing power | 1000 cycles/bit
Application-tolerated latency | 0.5–1.5 s
Total number of iterations | 200
Sparrow population size | 100
Proportion of producers | 10%
Proportion of investigators | 20%
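With the Table 3 parameters, a subtask's satellite-side delay can be estimated as transmission time plus computation time. The Shannon-rate link model and the SNR value below are assumptions for illustration; the paper's exact channel model is not reproduced here:

```python
import math

def subtask_delay(size_bits, bandwidth_hz, snr, cycles_per_bit, cpu_hz):
    """Estimated delay of one offloaded subtask: uplink transmission + MEC execution."""
    rate = bandwidth_hz * math.log2(1 + snr)      # achievable uplink rate (bit/s), assumed Shannon capacity
    t_tx = size_bits / rate                       # time to transmit the subtask to the satellite
    t_cp = size_bits * cycles_per_bit / cpu_hz    # time to execute it on the MEC server
    return t_tx + t_cp

# e.g., a 100 kb subtask, 10 MHz bandwidth, assumed SNR of 15,
# 1000 cycles/bit, and 10 Gcycles/s of satellite computing power
d = subtask_delay(100e3, 10e6, 15, 1000, 10e9)
print(f"{d * 1e3:.2f} ms")  # 12.50 ms
```

The resulting millisecond-scale per-subtask delay is consistent with the 0.5–1.5 s tolerated latency of a 10–15-subtask application, leaving headroom for queuing and dependency-induced waiting.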

Share and Cite

MDPI and ACS Style

Liu, Z.; Jiang, Y.; Rong, J. Resource Allocation Strategy for Satellite Edge Computing Based on Task Dependency. Appl. Sci. 2023, 13, 10027. https://doi.org/10.3390/app131810027


