Article

Privacy-Friendly Task Offloading for Smart Grid in 6G Satellite–Terrestrial Edge Computing Networks †

1 State Grid Economic and Technological Research Institute Co., Ltd., Beijing 102200, China
2 Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China
3 School of Cyberspace Security, University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
† This paper is an extended version of our paper published in the 2021 IEEE Wireless Communications and Networking Conference (WCNC), Nanjing, China, 29 March–1 April 2021.
Electronics 2023, 12(16), 3484; https://doi.org/10.3390/electronics12163484
Submission received: 17 June 2023 / Revised: 20 July 2023 / Accepted: 24 July 2023 / Published: 17 August 2023

Abstract: By offloading computing tasks to visible satellites for execution, the satellite edge computing architecture effectively addresses the high-delay problem that arises in remote grids (e.g., in mountains and deserts) when tasks are offloaded to the urban terrestrial cloud (TC). However, existing works are usually limited to offloading tasks in pure satellite networks and make offloading decisions based on predefined models. Additionally, the runtime consumption for making offloading decisions is rather high. Furthermore, privacy information may be maliciously sniffed since computing tasks are transmitted via vulnerable satellite networks. In this paper, we study the task-offloading problem in satellite–terrestrial edge computing networks, where tasks can be executed by a satellite or the urban TC. A privacy leakage scenario is described, and we consider preserving privacy by sending extra random dummy tasks to confuse adversaries. Then, the offloading cost with privacy protection consideration is modeled, and the offloading decision that minimizes the offloading cost is formulated as a mixed-integer programming (MIP) problem. To speed up solving the MIP problem, we propose a deep reinforcement learning-based task-offloading (DRTO) algorithm, in which the offloading location and bandwidth allocation depend only on the current channel states. Simulation results show that the offloading cost is reduced by 17.5% and 23.6% compared with pure TC computing and pure SatEC computing, while the runtime consumption of DRTO is reduced by at least 42.6%. The dummy tasks are shown to effectively mitigate privacy leakage during offloading.

1. Introduction

With the innovation of communication and computing technology, the power grid is developing towards intelligence. To provide a stable power supply, smart grids are usually equipped with intelligent functions, such as video monitoring and power line monitoring, to facilitate the adjustment of control commands. Realizing these intelligent functions inevitably brings about tremendous computational tasks, which are challenging to execute locally. Thanks to edge computing technology [1], some computational tasks can be handed over to nearby edge servers for execution to alleviate the local computing pressure of grids. Despite great promise, computational tasks of grids in remote regions (e.g., mountains, deserts, and oceans) are still difficult to execute owing to the lack of edge servers and network service.
To provide computing power for remote grids, satellites are considered an effective solution. The computational tasks of remote grids can be offloaded to urban terrestrial cloud (TC) servers for execution through satellite forwarding. However, the long propagation distance between terminals in remote grids and the urban TC via satellite links leads to high delays. In response to this problem, researchers proposed the satellite edge computing (SatEC) architecture [2,3,4] by referring to mobile edge computing (MEC) [5]. By deploying edge computing servers on satellites, SatEC allows the computational tasks of remote grids to be executed directly by nearby visible satellites. Since the process of transferring tasks from satellites to the urban TC is eliminated, existing works that focus on task offloading in SatEC networks can achieve relatively satisfactory performance. However, they still have the following defects that limit their effectiveness in remote grids:
Firstly, there are still large queuing delays when heavy computational tasks of remote grids are offloaded simultaneously. Though novel works [6,7,8,9] allow the direct processing of computational tasks by satellites to reduce transmission latency, they introduce higher processing latency caused by the lower processing capabilities of satellites. Specifically, since the energy capacity of satellites is limited [10], SatEC servers deployed on satellites usually tend to keep energy overhead low at the cost of processing efficiency. This means that when multiple tasks are offloaded to the same SatEC server, later tasks must enter the queue and wait for former tasks to be slowly executed. Thus, the queuing delay of smart grids with heavy computational tasks will be extremely high, which may cause the untimely adjustment of control commands and untimely fault monitoring in smart grids. Since the smart grid is an infrastructure with considerable service coverage, such untimely task execution will affect the power consumption of massive devices or even crash charging devices through unstable currents that are not regulated in time.
Secondly, although some works allow offloading tasks within satellite–air integrated networks to alleviate the queuing delay of pure SatEC, they require predefined models and complex offloading decision processes. For example, the movement trajectories of aerial users are required in [11], and the flight trajectory of unmanned aerial vehicles is required in [12]. Nonetheless, such information is difficult to obtain in practice. Additionally, finding offloading decisions that optimize the delay and energy consumption is usually formulated as a mixed-integer programming (MIP) problem [13,14]. Some advanced methods, such as 3D hypergraph matching [11], the game-theoretic approach [7], and a multiple-satellite offloading method [15], have been proposed to solve the hard MIP problem. However, all of them require a considerable number of iterations to reach satisfactory optima. This complex offloading decision process likewise leads to the untimely execution of computational tasks in smart grids, especially under fast-fading satellite channels [16].
Thirdly, existing works rarely notice that the task-offloading process can lead to privacy leakage, especially in the more vulnerable satellite network. Like any other wireless link, satellite links are open and exposed, but with less protection. Malicious adversaries can easily monitor computing task traffic by listening to satellite links. Utilizing machine learning technology, they can identify which computing tasks originate from which applications, and even what operations are performed by terminals [17]. Then, adversaries may initiate attacks specifically targeting the terminals that are responsible for core services (e.g., adjustment of control commands) in smart grids, which could cause large-scale power failures. Two types of schemes can be used to deal with such a privacy problem, but neither of them is directly applicable. For one thing, the differential privacy-based schemes [18,19,20,21] are effective but may be distorted or invalidated over satellite–terrestrial links with large radio fading. For another, privacy protection schemes based on adjusted offloading strategies [22] require terminals to be fully equipped with computational capabilities, and thus cannot be directly applied to smart grids whose terminals have different computational capabilities.
To tackle the above problems, we present a novel task-offloading scheme with privacy protection consideration for smart grids in the satellite–terrestrial edge computing network. The main contributions of this paper are summarized as follows:
(1) Satellite–terrestrial cooperative offloading architecture. In order to alleviate the queuing delay of remote grids’ tasks in the SatEC network, we consider a satellite–terrestrial cooperative edge computing architecture. In this architecture, tasks can be executed by either the SatEC server or the urban TC, which effectively compensates for the low processing ability of pure satellite networks. A deep reinforcement learning-based task-offloading (DRTO) algorithm is proposed to decide where each task is executed according to the current channel states (see Section 3.1).
(2) Threat scenarios regarding privacy leakage. We provide a possible privacy leakage threat scenario during offloading in satellite–terrestrial cooperative networks, termed usage pattern privacy leakage. To protect against the privacy leakage problem, we consider transmitting some extra dummy tasks with a random size to confuse adversaries. To the best of our knowledge, we are the first to notice the privacy leakage problem in satellite–terrestrial cooperative offloading (see Section 2.2).
(3) Offloading cost model with privacy consideration. We quantify the overhead caused by privacy protection, and the total offloading cost is modeled as a weighted sum of the latency, energy consumption, and privacy overhead. The process of solving for the optimal offloading location decision and bandwidth allocation that minimizes the offloading cost is formulated as an MIP problem (see Section 3.4).
(4) Model-free learning. To solve the above MIP problem, we propose a low-complexity deep reinforcement learning-based task-offloading (DRTO) algorithm. Considering that satellite trajectories are difficult to acquire, the proposed DRTO algorithm makes offloading decisions only based on the current channel states. Meanwhile, DRTO can improve its offloading policy by learning from the real-time trend of channel states, which adapts to the high dynamics of satellite–terrestrial networks (see Section 4).
(5) Low time complexity. Compared with traditional optimization methods, DRTO completely removes the need for solving hard MIP problems. Furthermore, we dynamically adjust the size of the action space to speed up the learning process. Simulation results show that the runtime consumption of DRTO is significantly decreased, while the offloading cost performance is not compromised (see Section 5).
We note that a shorter version of this paper was presented at the WCNC conference [23]. Our initial conference paper did not address the privacy leakage problem during offloading. This manuscript describes the privacy leakage problem in the satellite–terrestrial cooperative task-offloading architecture and addresses it by sending extra dummy computing tasks. The offloading cost in the privacy-preserving scenario is modeled, and the original DRTO algorithm is used to speed up the solution of the offloading decision. We conduct further experiments to evaluate the variability in the task size metric and task jitter metric before and after privacy protection. Extended simulation results prove the superiority of our privacy protection scheme and DRTO algorithm.
The remaining parts of this paper are organized as follows. In Section 2, we summarize the possible privacy problems during offloading and introduce some other related works. Then we describe the system model with the privacy consideration, and formulate the offloading cost minimization problem in Section 3. The details of the DRTO algorithm are introduced in Section 4. In Section 5, the simulation results of both the privacy protection effectiveness and algorithm performance are presented. Finally, the paper is concluded in Section 6.

2. Background and Related Works

In this section, we first introduce some task-offloading works in a collaborative network architecture. Then, we summarize a threat scenario during the offloading process in the satellite–terrestrial collaborative architecture, and finally some existing privacy protection schemes are reviewed.

2.1. Task Offloading in Satellite–Terrestrial Cooperative Networks

Constrained by their size and battery capacity, IoT terminal devices have limited computing resources and storage capabilities. To this end, solutions [24,25,26] that offload computational tasks to edge or cloud nodes for execution have been well investigated. Although promising, they fail in remote regions due to the lack of edge or cloud nodes. In that case, remote tasks have to be offloaded to urban TCs via satellite forwarding, which inevitably causes high delays. To solve this problem, the satellite edge computing (SatEC) architecture has been proposed [2,3,4]. By carrying edge computing servers, satellites can provide computing resources; therefore, tasks can be executed directly by satellites rather than being sent to urban TCs for execution. For example, Zhang et al. [11] proposed a satellite–aerial integrated computing architecture, where aerial users offload tasks to high-altitude platforms or LEO satellites. Jaiswal et al. [27] also explored the satellite–UAV computing architecture, where terrestrial computational tasks can be executed by satellites or forwarded to a UAV for execution. Considering the intermittent communication caused by satellite orbiting, Wang et al. [7] proposed an IoT-to-satellite offloading method based on game theory. However, these works did not consider that the computing and storage resources of SatEC servers are limited, which results in a certain queuing delay when a large number of tasks are offloaded simultaneously.
To address the above issues, ref. [28] first introduced the concept of satellite–terrestrial cooperative task offloading. In their proposed double-edge computing architecture, both satellite and terrestrial servers can provide communication, computing, and caching resources. Moreover, they further designed a double-edge computing offloading algorithm, which can decide the offloading position with low delay and energy consumption. However, they neglected the delay and energy consumption of the algorithm itself. Then, ref. [8] considered deploying satellite edge computing servers in nano-satellite constellations with small size and low cost, which makes it practical to build a satellite–terrestrial collaborative computing network. Additionally, ref. [3] established a task-offloading model according to the offloading energy consumption, and optimized the model based on optimization theory. Their simulation results demonstrate that the optimized model can effectively reduce the total energy consumption. Ref. [29] proposed a multi-index joint optimization task-offloading strategy based on game theory, which can effectively improve service quality. All these works can make reasonable offloading decisions, but they all ignore the possible privacy leakage during the offloading process, which makes them unable to meet the needs of privacy-sensitive services.

2.2. Privacy Problem in Task Offloading

Due to the lack of cellular base stations and edge computing servers, various terminals in remote smart grids send their computing tasks to visible satellites for execution. Since satellite radio links are open and exposed, the computing task packets of each terminal can be easily captured. In this case, we are aware that a rudimentary adversary can easily obtain the usage pattern privacy information of terminals without prior knowledge.
As shown in Figure 1, it only takes the following two steps. The first step is to capture computing task packets transmitted over open satellite links. There are many wireless sniffer tools that can be directly used by the adversary. For example, Zhao et al. [30] proposed NSSN, which can accurately monitor and capture data packets in wireless sensor networks. Considering the multi-hop and multi-channel characteristics, Kovac et al. [31] further designed an improved wireless sniffer. Despite the low overhead and high accuracy, ref. [31] struggled to maintain real-time sniffing for long periods. Then Sarkar et al. [32] proposed a robust sniffer which can maintain long-lived connections in real time. Based on these efforts, the adversary can obtain all the computational task packets transmitted during the sniffing period. Packets will be classified according to their source (e.g., source IP address, source MAC address), and parsed as data traffic.
The second step is to analyze the data traffic of each source using various identification algorithms. Though the data traffic is usually encrypted, some information can still be acquired according to the metadata and statistical characteristics (such as packet length and packet interval) of the encrypted traffic. For instance, ref. [17] identified which applications and operations the captured encrypted traffic originated from. Ref. [33] inferred user usage on an application according to encrypted data packets. These efforts provide solutions to learn application and terminal information about current computational tasks. After sniffing and analyzing sufficient tasks, the adversary can infer the usage pattern privacy, including the service types of each source (which indicates different terminals) in remote smart grids.
Such usage pattern privacy leakage puts terminals that are responsible for core services (e.g., power supply stability monitoring, control command sending and receiving) in danger. The adversary may focus on the core terminals to launch various attacks, such as masquerade attacks and control command replay attacks. As smart grids are national infrastructure, a successful attack would cause extremely large-scale power disruptions, which greatly impact residents’ livelihoods. Thus, it is necessary to protect privacy during the task-offloading process, especially in more vulnerable satellite networks.

2.3. Related Works on Privacy Protection

To address the problem that adversaries can infer terminal usage patterns by capturing and analyzing the statistical characteristics of encrypted traffic, two types of privacy protection solutions have been explored: (1) schemes that confuse the adversary by perturbing the statistical characteristics of the traffic, and (2) schemes that confuse the adversary by deliberately adjusting the offloading policy.
The first type of scheme generates noise that perturbs the packet size and packet interval of traffic by exploiting differential privacy techniques [18,19,20]. In this way, the adversary can hardly analyze terminals’ usage pattern privacy from the statistical characteristics of the captured traffic. Zhang et al. [21] first explored the effectiveness of this scheme. They implemented three noise-generating differential algorithms, including the Fourier perturbation algorithm ($FPA_k$), the $d^*$-private mechanism ($d^*$), and the $d_{L1}$-private mechanism ($d_{L1}$), to perturb video stream features. Specifically, $FPA_k$ answers long query sequences over correlated time-series data in a differentially private manner based on the discrete Fourier transform (DFT). $d^*$ extends the differential privacy mechanism from [34] and applies Laplacian noise to time-series data. $d_{L1}$ achieves differential privacy with regard to the L1 distance. Their experimental results show that all three noises can reduce the accuracy with which adversaries obtain privacy information by analyzing encrypted traffic. However, this scheme is more suitable for situations with good channel quality. For satellite–terrestrial links with large radio fading, these noises may be distorted or invalidated during transmission.
The second scheme refers to making the adversary misestimate the packet size by adjusting the offloading strategy. According to He et al. [22], when the channel condition is good, the mobile device tends to offload all the generated and buffered tasks at each time slot to the MEC server. In this case, adversaries can extract statistical information and even patterns of each device’s usage based on its task-offloading history. In response to this problem, they proposed a privacy-aware offloading approach. The main idea is to make the adversary misjudge the offload task stream size by deliberately processing a portion of tasks locally. They modeled the privacy protection strength and offloading overhead. Based on constrained Markov decision processes, the minimum overhead offloading scheme that satisfies the privacy protection strength can be calculated. This scheme can effectively confuse adversaries but cannot be performed by some sensing devices without computational capabilities since these devices in smart grids cannot execute tasks locally.
Instead, in this paper, we consider a more general scheme to protect terminal usage pattern privacy. Specifically, terminals make the adversary misestimate the packet size by transmitting additional dummy tasks with random sizes whenever a real task is transmitted. This scheme does not require terminals to have computational capabilities; thus, it is more suitable for smart grid scenarios with terminals of different computational power. Combined with specific network coding techniques [35], the dummy task packets also hold promise for compensating for packet loss in fast-fading satellite wireless links.

3. System Model and Problem Formulation

In this section, we describe the proposed privacy-friendly task-offloading model in satellite–terrestrial edge computing networks and then mathematically formulate the process of finding a task-offloading strategy that minimizes the overhead.

3.1. Overview

As shown in Figure 2, LEO satellites fly above the surface of the Earth at high speed and connect satellite terminals (STs) of remote smart grids to the ground station. The urban terrestrial cloud (TC) is directly connected to the ground station via optical fiber, and its transmission delay can be ignored. We assume that the access satellite is always available, and consider $N$ STs denoted by $\mathcal{N} = \{1, 2, \ldots, N\}$ and a TC within the coverage of the same access satellite. Each ST sends its computational tasks to a visible LEO satellite via the terrestrial–satellite link. In particular, aiming to protect their usage pattern privacy, STs send not only real tasks but also some random dummy tasks to confuse adversaries. After receiving tasks, the satellite decides whether to execute them directly on its local SatEC server or transmit them to the urban TC. For simplicity, we denote the wireless link from an ST to its access satellite as the 1st hop, and the link from the access satellite to the TC as the 2nd hop. We assume that the access satellite can measure channel states before deciding the offloading locations and allocating the bandwidth. The notations used throughout the paper are listed in the Nomenclature.

3.2. Offloading Location

For the task offloaded by the n-th ST, its access satellite can choose to process it locally or transparently forward it to the connected TC. We denote the offloading location of the n-th ST as $x_n$, where $x_n = 1$ and $x_n = 0$ denote the SatEC server and the TC, respectively.

3.3. Offloading Cost

The quality of service (QoS) mainly depends on user-perceived latency and energy consumption. Moreover, considering the precious energy reservation of satellites, we also include the energy consumption of satellites into the cost. Both real tasks and dummy tasks for privacy protection will incur the above cost. The detailed definitions of offloading cost for different locations are given as follows.

3.3.1. Offloaded to SatEC Server

In this subsection, we formulate the cost of real tasks when tasks are offloaded to the SatEC server for execution. In this case, the cost mainly consists of the STs’ transmission cost and the SatEC server’s computing cost. We denote $\alpha_n$ as the proportion of the bandwidth allocated to the n-th ST; then the n-th ST’s 1st-hop transmission rate is given by $C_{1,n} = \alpha_n B \log_2 \left( 1 + p_n h_n / N_0 \right)$, where $B$ denotes the total bandwidth of the access satellite, $p_n$ denotes the transmission power of the n-th ST, $h_n$ denotes the channel gain between the n-th ST and its access satellite, and $N_0$ denotes the noise power at the receiver.
Based on the 1st-hop transmission rate $C_{1,n}$, the transmission latency is given by $T_{1,n} = L_r / C_{1,n}$, where $L_r$ denotes the real task size (in bits). Then, the energy consumed by the n-th ST for transmission is given by $E_{1,n} = p_n T_{1,n}$.
We simply ignore the queuing delay. The computing latency at the SatEC server is given by $T_{1,n}^c = k_r L_r / f_1$, where $k_r$ denotes the computational intensity (in cycles/bit) of the real task, and $f_1$ denotes the CPU frequency (in cycles/s) of the SatEC server. The energy consumed by the SatEC server for computing is given by $E_{1,n}^c = p_c T_{1,n}^c$, where $p_c$ denotes the computing power consumption (in watts) of the SatEC server.
Therefore, the total latency that the n-th ST perceives and energy consumed for the n-th ST are respectively given by
$T_n^{SAT} = T_{1,n} + T_{1,n}^c$
$E_n^{SAT} = E_{1,n} + E_{1,n}^c$

3.3.2. Offloaded to TC

In this subsection, we formulate the cost of real tasks when tasks are offloaded to the urban TC for execution. In this case, apart from the transmission cost of STs, the forwarding cost of the access satellite and the computing cost of the TC should be included. We denote $\alpha_{N+n}$ as the proportion of the bandwidth allocated for forwarding the task of the n-th ST; then the 2nd-hop transmission rate for the n-th ST is given by $C_{2,n} = \alpha_{N+n} B \log_2 \left( 1 + p_{SAT} h_{TC} / N_0 \right)$, where $p_{SAT}$ denotes the transmission power of the access satellite, and $h_{TC}$ denotes the channel gain between the access satellite and the TC. Therefore, the forwarding latency and energy consumption for the n-th ST are respectively given by $T_{2,n} = L_r / C_{2,n}$ and $E_{2,n} = p_{SAT} T_{2,n}$.
The computing latency at the TC is given by $T_{2,n}^c = k_r L_r / f_0$, where $f_0$ denotes the CPU frequency (in cycles/s) of the TC. Thanks to the continuous electrical power supply of the TC, we simply ignore its computing energy consumption. Therefore, the total latency that the n-th ST perceives and the energy consumed for the n-th ST are respectively given by
$T_n^{TC} = T_{1,n} + T_{2,n} + T_{2,n}^c$
$E_n^{TC} = E_{1,n} + E_{2,n}$

3.3.3. Offloaded with Privacy Protection

In this subsection, we formulate the cost associated with privacy protection. The privacy cost $P_n$ of the n-th ST is introduced by the latency and energy consumption of dummy tasks with a random size. Similar to real tasks, the cost of dummy tasks also needs to be considered separately for each offloading location $x_n$, namely offloading to the SatEC server ($x_n = 1$) and offloading to the TC ($x_n = 0$). Let the size of the dummy tasks randomly generated by the n-th ST be $L_d$ and their computational intensity be $k_d$; then we have the following.
When the computational task is offloaded to the SatEC server, the transmission latency of the 1st hop can be represented as $T_{1,d} = L_d / C_{1,n}$. The energy consumed by dummy tasks for the transmission is then given by $E_{1,d} = p_n T_{1,d}$. The computational latency at the SatEC server can be represented as $T_{1,d}^c = k_d L_d / f_1$; thus, the energy consumed by the SatEC server for dummy tasks is given by $E_{1,d}^c = p_c T_{1,d}^c$. Therefore, in this case, the total latency and energy consumed by privacy protection for the n-th ST are, respectively, given by
$T_d^{SAT} = T_{1,d} + T_{1,d}^c$
$E_d^{SAT} = E_{1,d} + E_{1,d}^c$
When the computational task is offloaded to the urban TC, the total transmission latency additionally includes the transmission delay caused by the 2nd hop (i.e., forwarding from the satellite to the TC). As for the real task, the transmission latency and energy consumption of the 2nd hop caused by dummy tasks can be expressed as $T_{2,d} = L_d / C_{2,n}$ and $E_{2,d} = p_{SAT} T_{2,d}$, respectively, and the computing latency of dummy tasks at the TC is $T_{2,d}^c = k_d L_d / f_0$. Since the computing energy consumption of the TC can be ignored, the total latency and energy consumed by privacy protection for the n-th ST are, respectively, given by
$T_d^{TC} = T_{1,d} + T_{2,d} + T_{2,d}^c$
$E_d^{TC} = E_{1,d} + E_{2,d}$
In summary, the overhead arising from privacy protection, $P_n$, for tasks of the n-th ST can be expressed as
$P_n = x_n \left( T_d^{SAT} + E_d^{SAT} \right) + (1 - x_n) \left( T_d^{TC} + E_d^{TC} \right)$
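To make the cost model concrete, the following Python sketch evaluates the per-ST latency, energy, and privacy cost defined in Sections 3.3.1–3.3.3; the function name st_cost and its argument layout are illustrative assumptions and not part of the original paper.

import math

def rate(alpha, B, p, h, N0):
    # Shannon-type rate: C = alpha * B * log2(1 + p*h/N0)
    return alpha * B * math.log2(1.0 + p * h / N0)

def st_cost(x_n, alpha_n, alpha_fwd, L_r, k_r, L_d, k_d,
            p_n, h_n, p_sat, h_tc, B, N0, f_sat, f_tc, p_c):
    """Return (T_n, E_n, P_n) for one ST; f_sat, f_tc play the roles of f_1, f_0."""
    C1 = rate(alpha_n, B, p_n, h_n, N0)          # 1st hop (ST -> satellite)
    C2 = rate(alpha_fwd, B, p_sat, h_tc, N0)     # 2nd hop (satellite -> TC)

    # Real task
    T1, E1 = L_r / C1, p_n * (L_r / C1)
    if x_n == 1:                                  # executed on the SatEC server
        Tc = k_r * L_r / f_sat
        T_n, E_n = T1 + Tc, E1 + p_c * Tc
    else:                                         # forwarded to the urban TC
        T2 = L_r / C2
        T_n, E_n = T1 + T2 + k_r * L_r / f_tc, E1 + p_sat * T2

    # Dummy task (privacy protection), same offloading location
    T1d, E1d = L_d / C1, p_n * (L_d / C1)
    if x_n == 1:
        Tcd = k_d * L_d / f_sat
        P_n = (T1d + Tcd) + (E1d + p_c * Tcd)
    else:
        T2d = L_d / C2
        P_n = (T1d + T2d + k_d * L_d / f_tc) + (E1d + p_sat * T2d)
    return T_n, E_n, P_n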

3.4. Problem Formulation

As mentioned above, the offloading cost is mainly composed of latency, energy, and privacy protection consumption, which depend on the offloading locations, current channel states, bandwidth allocation, and the size of the random dummy tasks. Therefore, the offloading cost minimization problem $\mathcal{P}$ is formulated as follows:
$\mathcal{P}: \min_{\mathbf{x}, \boldsymbol{\alpha}} F(\mathbf{x}, \boldsymbol{\alpha}) = \sum_{n=1}^{N} \left[ x_n \left( \lambda_1 T_n^{SAT} + \lambda_2 E_n^{SAT} + \lambda_3 P_n \right) + \left( 1 - x_n \right) \left( \lambda_1 T_n^{TC} + \lambda_2 E_n^{TC} + \lambda_3 P_n \right) \right]$
$\text{s.t.} \quad x_n \in \{0, 1\}, \quad \forall n \in \mathcal{N}$
$\quad \quad 0 \le \sum_{n=1}^{2N} \alpha_n \le 1$
$\quad \quad \alpha_n \ge 0, \quad \forall n \in \{1, \ldots, 2N\}$
where $\lambda_1$, $\lambda_2$, and $\lambda_3$ denote the weight parameters for balancing the latency, energy, and privacy protection consumption.
It can be seen that problem $\mathcal{P}$ is a mixed-integer programming problem, in which the 0–1 integer variable $\mathbf{x}$ and the continuous variable $\boldsymbol{\alpha}$ are mutually coupled. Such a problem is commonly reformulated by a specific relaxation approach and then solved by powerful convex optimization techniques. However, these methods require considerable iterations, and the original problem cannot be solved within the channel coherence time, especially when many STs offload tasks simultaneously. To tackle this dilemma, we propose an effective low-complexity deep reinforcement learning-based task-offloading algorithm to obtain a near-optimal solution. Specifically, we adopt a DNN to map the current channel states to offloading locations, and improve it via reinforcement learning.
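As a minimal sketch under the same notation, the objective of problem $\mathcal{P}$ can be evaluated by summing the weighted per-ST costs; per_st_cost is a hypothetical callable (e.g., a wrapper around the st_cost sketch in Section 3.3), and the default weights are placeholders.

def offloading_cost(x, alpha, per_st_cost, lambdas=(1.0, 1.0, 1.0)):
    """Objective F(x, alpha) of problem P for N STs.
    x: list of 0/1 offloading locations; alpha[:N] are 1st-hop bandwidth shares,
    alpha[N:2N] are forwarding shares; per_st_cost(n, x_n, a1, a2) returns (T_n, E_n, P_n)."""
    l1, l2, l3 = lambdas
    N = len(x)
    total = 0.0
    for n in range(N):
        T_n, E_n, P_n = per_st_cost(n, x[n], alpha[n], alpha[N + n])
        total += l1 * T_n + l2 * E_n + l3 * P_n
    return total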

4. DRTO: Deep Reinforcement Learning for Task Offloading

To minimize the offloading cost, we design an offloading algorithm $\pi : \mathbf{h} \mapsto \mathbf{x}^*$ that quickly selects the optimal offloading location $\mathbf{x}^* = [x_1^*, x_2^*, \ldots, x_N^*]$ based only on the current channel state $\mathbf{h} = [h_1, h_2, \ldots, h_N, h_{TC}]$.
The diagram of DRTO is shown in Figure 3. First, the DNN takes the current channel gain $\mathbf{h}$ as input and generates a relaxed offloading location $\hat{\mathbf{x}}$. Then, we quantize the relaxed location $\hat{\mathbf{x}}$ into $K$ candidate binary offloading locations, namely $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_K$. The optimal location $\mathbf{x}^*$ is obtained by solving a series of bandwidth allocation convex problems. Subsequently, the newly obtained channel state–offloading location pair $(\mathbf{h}, \mathbf{x}^*)$ is added to the replay memory. A random batch is sampled from memory to improve the DNN every $\delta$ time frames. To further reduce the runtime consumption, we dynamically adjust $K$ to speed up the learning process. In the following subsections, the details of the above stages are described. The pseudocode of the DRTO algorithm is summarized in Algorithm 1.
Algorithm 1 The DRTO algorithm.
1: Input: current channel gain $\mathbf{h}$.
2: Output: optimal offloading location $\mathbf{x}^*$ and corresponding bandwidth allocation $\boldsymbol{\alpha}^*$.
3: for $t = 1, 2, \ldots, T$ do
4:    The DNN generates a relaxed offloading location $\hat{\mathbf{x}}$.
5:    Quantize $\hat{\mathbf{x}}$ into $K_t$ candidate binary offloading locations $\mathbf{x}_k$, $k \in \{1, 2, \ldots, K_t\}$.
6:    for $k = 1, 2, \ldots, K_t$ do
7:       Given the binary offloading location $\mathbf{x}_k$, obtain the bandwidth allocation $\boldsymbol{\alpha}_{\mathbf{x}_k}$ and offloading cost $F(\mathbf{x}_k, \boldsymbol{\alpha}_{\mathbf{x}_k})$ by solving $\mathcal{P}'$.
8:    end for
9:    Obtain the optimal offloading location $\mathbf{x}^* = \arg\min_{\mathbf{x}_k, \, k \in \{1, 2, \ldots, K_t\}} F(\mathbf{x}_k, \boldsymbol{\alpha}_{\mathbf{x}_k})$.
10:   Add the newly obtained channel state–offloading location pair $(\mathbf{h}_t, \mathbf{x}^*)$ to the replay memory.
11:   if $t \bmod \delta == 0$ then
12:      Sample a random batch from memory to train the DNN.
13:   end if
14:   if $t \bmod \Delta == 0$ then
15:      Adjust $K_t$ using (15).
16:   end if
17: end for
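For readers who prefer code, the main loop of Algorithm 1 can be sketched in Python as below; drto is a hypothetical driver, and the injected callables (dnn, quantize, solve_bandwidth, adjust_k, observe_channel, train_dnn) stand for the components detailed in Sections 4.1–4.3 rather than a released implementation.

import random

def drto(dnn, quantize, solve_bandwidth, adjust_k, observe_channel,
         train_dnn, N, T, delta=10, Delta=64, memory_size=1024, batch_size=128):
    """Sketch of the DRTO loop (Algorithm 1); all callables are placeholders."""
    memory, best_indices, K = [], [], N          # K_1 = N
    for t in range(1, T + 1):
        h_t = observe_channel()                  # current channel gains
        x_hat = dnn(h_t)                         # relaxed location in (0,1)^N
        candidates = quantize(x_hat, K)          # K order-preserving binary candidates

        # Solve the bandwidth-allocation convex problem for every candidate
        results = [(k, x_k, *solve_bandwidth(x_k, h_t))
                   for k, x_k in enumerate(candidates, start=1)]
        k_star, x_star, alpha_star, cost = min(results, key=lambda r: r[3])
        best_indices.append(k_star)

        memory.append((h_t, x_star))             # replay memory, drop oldest if full
        if len(memory) > memory_size:
            memory.pop(0)

        if t % delta == 0:                       # policy-improvement step
            batch = random.sample(memory, min(batch_size, len(memory)))
            train_dnn(dnn, batch)
        if t % Delta == 0:                       # dynamic adjustment of K, Eq. (15)
            K = adjust_k(best_indices[-Delta:], N)
    return dnn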

4.1. Generate the Offloading Location

As shown in the upper part of Figure 3, in each time frame, the fully connected DNN takes the current channel gain $\mathbf{h}$ as input and generates a relaxed offloading location $\hat{\mathbf{x}} = [\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_N]$ (each entry is relaxed into the $[0, 1]$ interval). Then, the relaxed location $\hat{\mathbf{x}}$ is quantized into $K$ binary locations. Given a candidate location $\mathbf{x}_k$, DRTO solves a bandwidth allocation convex problem and obtains the offloading cost. Subsequently, the optimal offloading location $\mathbf{x}^*$ is selected according to the minimal offloading cost.
Although the mapping from the channel state to the offloading location is unknown and complex, thanks to the universal approximation theorem [36], we adopt a fully connected DNN to approximate it. The DNN is characterized by the weights that connect the hidden neurons and is composed of four layers, namely the input layer, two hidden layers, and the output layer. We use the ReLU activation function in the hidden layers and the sigmoid activation function in the output layer; thus, each entry of the output relaxed offloading location satisfies $\hat{x}_n \in (0, 1)$.
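A minimal PyTorch sketch of such a network is shown below; the hidden-layer widths are illustrative assumptions, since they are not specified in this section.

import torch.nn as nn

class OffloadingDNN(nn.Module):
    """Fully connected DNN mapping channel gains to a relaxed offloading location."""
    def __init__(self, num_st, hidden=(120, 80)):          # hidden sizes are assumptions
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_st + 1, hidden[0]), nn.ReLU(),    # input: h_1..h_N and h_TC
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
            nn.Linear(hidden[1], num_st), nn.Sigmoid(),     # one relaxed x_n in (0,1) per ST
        )

    def forward(self, h):
        return self.net(h)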
Then, $\hat{\mathbf{x}}$ is quantized into $K$ candidate binary offloading locations, where $K \in [1, 2^N]$. Intuitively, a larger $K$ creates higher diversity in the candidate offloading location set, thus increasing the chance of finding the globally optimal offloading location but resulting in higher computational complexity. We adopt the order-preserving quantization method proposed in [37] to trade off performance and complexity. In order-preserving quantization, $K$ is relatively small, but the diversity of the candidate offloading locations is guaranteed. Its main idea is to preserve the order when quantizing, i.e., for each quantized location $\mathbf{x}_k = [x_{k,1}, x_{k,2}, \ldots, x_{k,N}]$, $x_{k,n} \ge x_{k,m}$ should hold if $\hat{x}_n \ge \hat{x}_m$ for all $n, m \in \{1, 2, \ldots, N\}$. Specifically, a series of $K$ quantized locations $\{\mathbf{x}_k\}$ is generated as follows:
(1) Each entry of the 1st binary offloading location $\mathbf{x}_1$ is given by
$x_{1,n} = \begin{cases} 1, & \hat{x}_n > 0.5, \\ 0, & \hat{x}_n \le 0.5, \end{cases} \quad n = 1, 2, \ldots, N$
(2) As for the remaining $K - 1$ offloading locations, we first sort the entries of $\hat{\mathbf{x}}$ according to their distance to 0.5, i.e., $|\hat{x}_{(1)} - 0.5| \le |\hat{x}_{(2)} - 0.5| \le \cdots \le |\hat{x}_{(N)} - 0.5|$, where $\hat{x}_{(n)}$ denotes the n-th sorted entry. Hence, each entry of the k-th offloading location $\mathbf{x}_k$, $k = 2, 3, \ldots, K$, is given by
$x_{k,n} = \begin{cases} 1, & \hat{x}_n > \hat{x}_{(k-1)}, \\ 1, & \hat{x}_n = \hat{x}_{(k-1)} \text{ and } \hat{x}_{(k-1)} \le 0.5, \\ 0, & \hat{x}_n = \hat{x}_{(k-1)} \text{ and } \hat{x}_{(k-1)} > 0.5, \\ 0, & \hat{x}_n < \hat{x}_{(k-1)}, \end{cases} \quad n = 1, 2, \ldots, N$
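The two quantization rules above can be sketched with NumPy as follows; this is an illustrative implementation (assuming $K \le N + 1$, which the order-preserving method satisfies), not the authors' code.

import numpy as np

def order_preserving_quantize(x_hat, K):
    """Generate K binary candidate locations from the relaxed output x_hat in (0,1)^N."""
    x_hat = np.asarray(x_hat, dtype=float)
    candidates = [(x_hat > 0.5).astype(int)]       # 1st candidate: threshold at 0.5
    # Remaining candidates: threshold at the entries closest to 0.5, in order
    order = np.argsort(np.abs(x_hat - 0.5))        # indices sorted by |x_hat - 0.5|
    for k in range(1, K):
        t = x_hat[order[k - 1]]                    # k-th closest entry to 0.5
        if t <= 0.5:
            cand = (x_hat >= t).astype(int)        # entries equal to t are set to 1
        else:
            cand = (x_hat > t).astype(int)         # entries equal to t are set to 0
        candidates.append(cand)
    return candidates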
Here, we obtain $K$ candidate offloading locations. Given a candidate offloading location $\mathbf{x}_k$, the original offloading cost minimization problem $\mathcal{P}$ is transformed into a convex problem on $\boldsymbol{\alpha}$:
$\mathcal{P}': \min_{\boldsymbol{\alpha}} F(\mathbf{x}_k, \boldsymbol{\alpha})$
$\text{s.t.} \quad 0 \le \sum_{n=1}^{2N} \alpha_n \le 1$
which can be solved by a convex optimization tool such as CVXPY [38]. Then, we obtain the optimal bandwidth allocation $\boldsymbol{\alpha}_{\mathbf{x}_k}^*$ and the minimum offloading cost $F(\mathbf{x}_k, \boldsymbol{\alpha}_{\mathbf{x}_k}^*)$ for the given candidate offloading location $\mathbf{x}_k$. By repeatedly solving problem $\mathcal{P}'$ for each candidate offloading location, the best offloading location is selected by
$\mathbf{x}^* = \arg\min_{\{\mathbf{x}_k\}, \, k = 1, 2, \ldots, K} F(\mathbf{x}_k, \boldsymbol{\alpha}_{\mathbf{x}_k}^*)$
along with its corresponding optimal bandwidth allocation $\boldsymbol{\alpha}^*$.
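A minimal CVXPY sketch of the bandwidth subproblem $\mathcal{P}'$ is given below. It assumes the objective has been reduced to terms of the form $a_{1,n}/\alpha_n$ and $a_{2,n}/\alpha_{N+n}$, where the coefficients bundle the $\alpha$-independent factors of the transmission latency and energy (including the dummy tasks); the $\alpha$-independent computing terms are omitted because they do not change the optimal allocation. The coefficient layout is an assumption for illustration.

import cvxpy as cp

def solve_bandwidth(x, a1, a2):
    """Bandwidth allocation for a fixed binary location x.
    a1[n]: 1st-hop cost coefficient (scales with 1/alpha_n);
    a2[n]: 2nd-hop cost coefficient, counted only when x[n] == 0."""
    N = len(x)
    alpha = cp.Variable(2 * N, nonneg=True)
    cost = 0
    for n in range(N):
        cost += a1[n] * cp.inv_pos(alpha[n])                # 1st-hop transmission cost
        if x[n] == 0:                                       # forwarded to the TC
            cost += a2[n] * cp.inv_pos(alpha[N + n])        # 2nd-hop forwarding cost
    prob = cp.Problem(cp.Minimize(cost), [cp.sum(alpha) <= 1])
    prob.solve()
    return alpha.value, prob.value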

4.2. Update the Offloading Policy

Due to the rapid changes in satellite–terrestrial channel states, the offloading policy should be updated in time to keep the offloading cost low. Different from traditional deep learning, the training samples of DRTO are composed of the latest channel state $\mathbf{h}$ and offloading location $\mathbf{x}^*$. Since the current offloading location is generated according to the policy of the last time frame, the training samples in adjacent time frames are strongly correlated. If the latest samples were used to train the DNN immediately, the network would be updated inefficiently, and the offloading policy might not even converge. Thanks to the experience replay mechanism [39] proposed by Google DeepMind, the newly obtained state–location pair $(\mathbf{h}, \mathbf{x}^*)$ is added to the replay memory, replacing the oldest entry if the memory is full. Subsequently, a random batch is sampled from the memory to improve the DNN. The cross-entropy loss is reduced by utilizing the Adam optimizer [40]. Such iterations repeat, and the policy of the DNN is gradually improved.
By utilizing the experience replay mechanism, we construct a dynamic training dataset for the DNN. Thanks to the random sampling, convergence is accelerated because the correlation between training samples is reduced. Since the memory space is finite, the DNN is updated only according to recent experience, and the offloading policy $\pi$ is always adapted to recent channel changes.
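The replay memory and policy-improvement step can be sketched with PyTorch as below; the capacity and batch size are illustrative assumptions, and the optimizer would typically be created as torch.optim.Adam(dnn.parameters()), matching the Adam optimizer used in the paper.

import random
import torch
import torch.nn.functional as F

class ReplayMemory:
    """Finite memory of (channel state, offloading location) tensor pairs; the oldest entry is dropped when full."""
    def __init__(self, capacity=1024):
        self.capacity, self.buffer = capacity, []

    def add(self, h, x_star):
        self.buffer.append((h, x_star))
        if len(self.buffer) > self.capacity:
            self.buffer.pop(0)

    def sample(self, batch_size=128):
        batch = random.sample(self.buffer, min(batch_size, len(self.buffer)))
        h, x = zip(*batch)
        return torch.stack(h), torch.stack(x)

def train_step(dnn, optimizer, memory):
    """One policy-improvement step: fit the DNN to recent (h, x*) pairs via cross-entropy."""
    h, x = memory.sample()
    loss = F.binary_cross_entropy(dnn(h), x.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()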

4.3. Dynamically Adjust K

For each candidate offloading location, a bandwidth allocation convex problem must be solved. Intuitively, a larger $K$ can lead to a better temporary offloading decision and a better long-term offloading policy. However, to select the optimal offloading location $\mathbf{x}^*$ in each time frame, repeatedly solving the bandwidth allocation problem $\mathcal{P}'$ $K$ times leads to high computational complexity. Therefore, the setting of $K$ involves a trade-off between performance and complexity.
With a fixed K = N , we plot the index of the optimal offloading location in each time frame. As shown in Figure 4, at the very beginning of the learning process, the index of the optimal offloading location is relatively large. As the offloading policy improves, we observe that most of the optimal offloading locations are the first location generated by the above order-preserving quantization method. This indicates that a large value of K is computationally inefficient and unnecessary. In other words, most of the quantized offloading locations in each time frame are redundant. Therefore, to speed up the algorithm, we can gradually adjust K, and the performance will not be compromised.
We denote $K_t$ as the number of quantized offloading locations at time frame $t$, and $k_t^*$ as the index of the optimal offloading location in time frame $t$. Inspired by [37], we initially set $K_1 = N$ and adjust $K_t$ once every $\Delta$ time frames. In an adjustment time frame, to increase the diversity of candidate offloading locations, $K_t$ is tuned to $\max \left( k_{t-1}^*, \ldots, k_{t-\Delta}^* \right) + 1$. Therefore, $K_t$ is given by
$K_t = \begin{cases} N, & t = 1, \\ \min \left( \max \left( k_{t-1}^*, \ldots, k_{t-\Delta}^* \right) + 1, \, N \right), & t \bmod \Delta = 0, \\ K_{t-1}, & \text{otherwise}. \end{cases}$
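In code, Equation (15) reduces to a one-line rule; the sketch below assumes 1-based candidate indices, as in Algorithm 1.

def adjust_k(recent_best_indices, N):
    """Eq. (15): one more than the largest optimal-candidate index over the last Delta frames, capped at N."""
    # e.g., adjust_k([1, 1, 2, 1], N=5) -> 3
    return min(max(recent_best_indices) + 1, N)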

5. Simulation and Evaluation Results

In this section, we first evaluate the performance of the proposed DRTO algorithm and then verify the effectiveness of our privacy protection scheme via simulations. All the simulation parameter settings are listed in Table 1. We assume that the average channel gain $h_n$ or $h_{TC}$ follows the free-space loss model
$h = A_d \left( \frac{c}{4 \pi f_c d} \right)^{d_e}$
where $A_d$ denotes the antenna gain, $c$ the speed of light, $f_c$ the carrier frequency, $d$ the transmission distance, and $d_e$ the path-loss exponent.
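A one-function sketch of this channel model is shown below; the parameter values passed in would come from Table 1 and are not reproduced here.

import math

def channel_gain(d, A_d, f_c, d_e, c=3.0e8):
    """Average free-space channel gain h = A_d * (c / (4*pi*f_c*d))**d_e.
    d: link distance (m), A_d: antenna gain, f_c: carrier frequency (Hz), d_e: path-loss exponent."""
    return A_d * (c / (4.0 * math.pi * f_c * d)) ** d_e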

5.1. DRTO Algorithm Performance Evaluation

To verify the superiority of the DRTO algorithm in solving offloading decisions between the satellite–terrestrial cooperative computing architecture, we evaluate the convergence performance, offloading cost of the decision location given by DRTO, and the runtime consumption.

5.1.1. The Convergence Performance

The DRTO algorithm is evaluated over 30,000 time frames. In Figure 5, we plot the training loss of the DNN, which gradually decreases and stabilizes at around 0.02; the remaining fluctuation is mainly due to the random sampling of the training data.
In Figure 6, we plot the normalized offloading cost, which is defined as
$\hat{F}(\mathbf{x}^*, \boldsymbol{\alpha}^*) = \frac{F(\mathbf{x}^*, \boldsymbol{\alpha}^*)}{\min_{\mathbf{x} \in \{0,1\}^N} F(\mathbf{x}, \boldsymbol{\alpha}_{\mathbf{x}})}$
where the numerator denotes the offloading cost achieved by the DRTO algorithm, and the denominator denotes the optimal offloading cost obtained by greedily enumerating all $2^N$ offloading locations. We set the update interval $\Delta = 64$. As we can see, within the first 5000 time frames, the normalized offloading cost fluctuates significantly, indicating that the offloading policy has not yet converged. Eventually, the normalized offloading cost converges to 1 in most frames, with only a few frames fluctuating slightly above 1 due to rapid channel fading when inter-satellite handover occurs. In spite of this fluctuation, the DRTO algorithm can still achieve near-optimal offloading cost performance.

5.1.2. The Offloading Cost of Decision Location Given by DRTO

Regarding the offloading cost performance, we compare our DRTO algorithm with five other representative benchmarks to demonstrate its superiority:
  • Distributed deep learning-based offloading (DDLO) [41]. Multiple DNNs take the duplicated channel gain as input, then each DNN generates a candidate offloading location. Then, the optimal offloading location is selected with respect to the minimum offloading cost. In the comparison with DRTO, we assume that DDLO is composed of N DNNs.
  • Coordinate descent (CD) [42]. The CD algorithm is a traditional numerical optimization method, which iteratively swaps the offloading location of each ST that leads to the largest offloading cost decrement. The iteration stops when the offloading cost cannot be further decreased by swapping the offloading location.
  • Enumeration. We enumerate all $2^N$ offloading location combinations and greedily select the best one.
  • Pure TC computing. The LEO access satellite forwards all the tasks to TC for execution.
  • Pure SatEC computing. The LEO access satellite locally executes all the tasks.
We consider $N = 5$ STs attached to the same access satellite. In Figure 7, we compare the average offloading cost per time frame achieved by the different offloading algorithms. As we can see, DRTO achieves similar performance to the greedy enumeration method, which verifies the near-optimality of DRTO. Since the optimal offloading location combination is unique, any other random combination will lead to a higher offloading cost. In addition, we see that DRTO achieves a lower offloading cost, with about 17.5% and 23.6% reductions compared to the pure TC computing and pure SatEC computing methods, which indicates the necessity of cooperation between SatEC servers and TCs to provide satisfactory computing service.

5.1.3. The Runtime Consumption of DRTO

Finally, we evaluate the runtime performance of DRTO. Since pure TC computing and pure SatEC computing are static, we compare DRTO with the other three dynamic benchmarks. Specifically, we record the total runtime consumption of each algorithm over 30,000 time frames and compute the average runtime per time frame. The runtime comparison is shown in Figure 8.
Although the four dynamic algorithms achieve similar offloading cost performance (Figure 7), DRTO consumes the lowest runtime, with about 42.6%, 87.3%, and 96.6% reductions compared to DDLO, CD, and enumeration when $N = 7$. In addition, the runtime consumption of DRTO or DDLO does not explode when the network scale increases. This is because the DNN can accurately fit the complex mapping from the channel states to the offloading location; compared with traditional CD or enumeration methods, the action space of DRTO or DDLO is significantly reduced, resulting in far fewer iterations. Compared with DDLO, at the very beginning of the learning process, the action space of DRTO is the same as that of DDLO. As the offloading policy improves, the number of quantized candidate offloading locations in DRTO is dynamically adjusted, and thus the action space of DRTO is further reduced.
In practice, the channel coherence time is extremely short due to the high-speed movement of satellites. DRTO can quickly generate the offloading location and bandwidth allocation without compromising the offloading cost performance, which better adapts to the fast channel fading in satellite–terrestrial edge computing networks.

5.2. Privacy Protection Effectiveness Evaluation

In this section, we verify the effectiveness of the privacy protection scheme proposed in this paper. Specifically, we first design a privacy-preserving evaluation metric and then conduct comprehensive experiments to evaluate the privacy-preserving performance.

5.2.1. Experimental Settings

To fairly demonstrate the effectiveness of our privacy-preserving scheme for task offloading in satellite–terrestrial networks, we assume that the real tasks $L_r$ of 5 terminals arrive randomly following a Poisson distribution $\pi(\lambda = 100)$, and we ask each terminal to generate dummy tasks $L_d$ with a random size following a Gaussian distribution $\mathcal{N}(\mu, \sigma)$ whenever a new offloading task is transmitted in a time slot. Both real tasks and dummy tasks are sent to a visible satellite, and the satellite then decides on the offloading position. We define the redundancy rate $\omega$ as the ratio of the dummy task size $L_d$ to the real task size $L_r$, i.e., $\omega = L_d / L_r$. Experiments with average redundancy rates $\omega = 0.05, 0.10, 0.15, 0.20$, and 0.25 over 30,000 time slots are conducted to present the privacy protection performance and cost. Detailed settings of the five groups of experiments in each time slot are listed in Table 2.
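The traffic used in these experiments can be reproduced roughly with the NumPy sketch below; the standard-deviation fraction is an illustrative assumption, since the actual ($\mu$, $\sigma$) pairs are those listed in Table 2.

import numpy as np

rng = np.random.default_rng(0)

def generate_tasks(num_slots, num_terminals=5, lam=100.0, omega=0.10, sigma_frac=0.25):
    """Per-slot real task sizes L_r ~ Poisson(lam) and dummy task sizes
    L_d ~ N(omega*lam, (sigma_frac*omega*lam)^2), clipped to be non-negative."""
    L_r = rng.poisson(lam, size=(num_slots, num_terminals)).astype(float)
    mu = omega * lam
    L_d = rng.normal(mu, sigma_frac * mu, size=(num_slots, num_terminals))
    L_d = np.clip(L_d, 0.0, None)
    return L_r, L_d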

5.2.2. Evaluation Metrics

According to [43], the packet characteristics used for traffic identification are mainly various statistical values of the packet size and packet jitter. That is, the adversary can analyze the usage pattern privacy of each terminal based on the captured task sizes and task arrival intervals observed over a period of task offloading. To this end, we characterize the privacy protection effectiveness in terms of the following two metrics (a computational sketch follows the list):
  • Task size: Task size assesses the variability in the size of the computational tasks transmitted from STs to the satellite before and after adding the redundancy packets. Intuitively, a larger change in task size means a more significant difference, which is more effective in confusing the adversary and therefore better for privacy protection.
  • Task jitter: Task jitter assesses the variability in the task arrival intervals observed by the satellite before and after adding the redundancy packets. Similarly, a larger change in task jitter is more effective in confusing the adversary and better for privacy protection.
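A minimal sketch of how these two traces and their variability could be computed is given below; the mean-absolute-change measure is an illustrative choice, not the exact statistic plotted in the figures.

import numpy as np

def size_and_jitter(sizes, arrival_times):
    """Task-size trace and task-jitter (inter-arrival) trace for one source, as seen by a sniffer."""
    sizes = np.asarray(sizes, dtype=float)
    jitter = np.diff(np.asarray(arrival_times, dtype=float))
    return sizes, jitter

def perturbation(before, after):
    """Mean absolute change of a per-slot trace, before vs. after adding dummy tasks."""
    m = min(len(before), len(after))
    return float(np.mean(np.abs(np.asarray(after[:m]) - np.asarray(before[:m]))))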

5.2.3. Evaluation Results

To present the performance of our privacy protection scheme, we evaluate the variability in task size and task jitter over 30,000 time slots. As shown in Figure 9 and Figure 10, five groups with different redundancy rates (i.e., $\omega = 0.05, 0.10, 0.15, 0.20$, and 0.25) are compared with the original real tasks. It can be noticed that, after adding redundancy packets with different $\omega$, both the task sizes and the task jitters change to a certain extent compared to the original real tasks. Specifically, the overall envelope shapes differ considerably from the original, and the locations of the spikes, as well as their peak values, are also significantly altered. This demonstrates that our privacy protection scheme produces noticeable perturbations in both the packet size and the packet arrival interval, thus effectively misleading the adversary.
We also notice from Figure 9 and Figure 10 an increasing trend in task size and task jitter as redundant dummy tasks are added, which causes additional energy and latency overheads. Thus, a further evaluation of the privacy protection cost is conducted.
As shown in Figure 11, we evaluate the average energy consumption over 30,000 time slots for the five redundancy rates $\omega$. It can be observed that as $\omega$ rises, both the transmission energy consumption (including trans-UT-SAT and trans-SAT-TC) and the computational energy consumption (computing energy) tend to increase gradually, whether the task is offloaded to the satellite or to the TC. Despite this, we believe this is a tolerable price to pay for privacy protection, since the increase in total energy consumption after adding dummy tasks is only moderate compared to the original real tasks.
As shown in Figure 12, we evaluate the average latency over 30,000 time slots for the five redundancy rates $\omega$. Similar to the energy overhead, although the average transmission latency (including trans-UT-SAT and trans-SAT-TC) and computation latency (computing latency) tend to increase as $\omega$ rises, the average total latency overhead is not significantly increased compared to the original real tasks. We therefore consider the latency overhead caused by privacy preservation to also be within a tolerable limit.

6. Conclusions

In this paper, we investigate the joint offloading location decision and bandwidth allocation problem in satellite–terrestrial edge computing networks. Given the potential privacy compromise caused by task offloading, we ask terminals to send random dummy tasks to protect their usage pattern privacy. Then, the DRTO algorithm is proposed to minimize the offloading cost based on current observed channel states. The simulation results show that our DRTO algorithm achieves near-optimal offloading cost performance like existing algorithms but significantly reduces runtime consumption, and the privacy protection scheme is sufficiently effective with a tolerable cost.
In particular, our privacy protection scheme is simple enough to be applied to various terminals. Nevertheless, dummy tasks inevitably cause undesirable resource waste. In the future, we will explore advanced coding techniques that encode dummy tasks into compensating packets, which can both cope with fast-fading links in satellite–terrestrial networks and reduce the resource consumption associated with privacy protection.

Author Contributions

Conceptualization, J.Z. and Z.Y.; Formal analysis, Z.X.; Investigation, J.S.; Methodology, P.X.; Resources, Y.L.; Validation, Z.G. and J.F.; Writing—original draft, S.Z.; Writing—review and editing, S.Z. All authors have read and agreed to the published version of the manuscript.

Funding

The work in this paper is sponsored by the Science and Technology Project of State Grid Corporation of China: “Research on critical technology of secondary system planning and design of distribution network for novel power system” (No. 5400-202256273A-2-0-XG).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cao, K.; Liu, Y.; Meng, G.; Sun, Q. An overview on edge computing research. IEEE Access 2020, 8, 85714–85728. [Google Scholar] [CrossRef]
  2. Xie, R.; Tang, Q.; Wang, Q.; Liu, X.; Yu, F.R.; Huang, T. Satellite-Terrestrial Integrated Edge Computing Networks: Architecture, Challenges, and Open Issues. IEEE Netw. 2020, 34, 224–231. [Google Scholar] [CrossRef]
  3. Yan, L.; Cao, S.; Gong, Y.; Han, H.; Wei, J.; Zhao, Y.; Yang, S. SatEC: A 5G Satellite Edge Computing Framework Based on Microservice Architecture. Sensors 2019, 19, 831. [Google Scholar] [CrossRef] [PubMed]
  4. Wang, Y.; Yang, J.; Guo, X.; Qu, Z. Satellite Edge Computing for the Internet of Things in Aerospace. Sensors 2019, 19, 4375. [Google Scholar] [CrossRef]
  5. Mach, P.; Becvar, Z. Mobile Edge Computing: A Survey on Architecture and Computation Offloading. IEEE Commun. Surv. Tutor. 2017, 19, 1628–1656. [Google Scholar] [CrossRef]
  6. Wei, J.; Han, J.; Cao, S. Satellite IoT edge intelligent computing: A research on architecture. Electronics 2019, 8, 1247. [Google Scholar] [CrossRef]
  7. Wang, Y.; Yang, J.; Guo, X.; Qu, Z. A Game-Theoretic Approach to Computation Offloading in Satellite Edge Computing. IEEE Access 2020, 8, 12510–12520. [Google Scholar] [CrossRef]
  8. Denby, B.; Lucia, B. Orbital edge computing: Nanosatellite constellations as a new class of computer system. In Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems, Lausanne, Switzerland, 16–20 March 2020; pp. 939–954. [Google Scholar]
  9. Zhu, W.; Yang, W.; Liu, G. Server Selection and Resource Allocation for Energy Minimization in Satellite Edge Computing. In Proceedings of the International Conference on 5G for Future Wireless Networks, Harbin, China, 17–18 December 2022; pp. 142–154. [Google Scholar]
  10. Aung, H.; Soon, J.J.; Goh, S.T.; Lew, J.M.; Low, K.S. Battery management system with state-of-charge and opportunistic state-of-health for a miniaturized satellite. IEEE Trans. Aerosp. Electron. Syst. 2019, 56, 2978–2989. [Google Scholar] [CrossRef]
  11. Zhang, L.; Zhang, H.; Guo, C.; Xu, H.; Song, L.; Han, Z. Satellite-Aerial Integrated Computing in Disasters: User Association and Offloading Decision. In Proceedings of the IEEE ICC—2020 IEEE International Conference on Communications (ICC), Dublin, Ireland, 7–11 June 2020. [Google Scholar]
  12. Zhou, C.; Wu, W.; He, H.; Yang, P.; Lyu, F.; Cheng, N.; Shen, X. Delay-Aware IoT Task Scheduling in Space-Air-Ground Integrated Network. In Proceedings of the IEEE GLOBECOM—2019 IEEE Global Communications Conference, Big Island, HI, USA, 9–13 December 2019. [Google Scholar]
  13. Kim, J.; Kim, T.; Hashemi, M.; Brinton, C.G.; Love, D.J. Joint Optimization of Signal Design and Resource Allocation in Wireless D2D Edge Computing. In Proceedings of the IEEE INFOCOM—IEEE Conference on Computer Communications, Virtual Event, 6–9 July 2020. [Google Scholar]
Figure 1. The usage pattern privacy leakage scenario.
Figure 2. Task offloading with privacy protection in satellite–terrestrial edge computing networks.
Figure 3. The diagram of DRTO.
Figure 4. The index of optimal offloading location with K = N = 5.
Figure 5. The training loss of DRTO.
Figure 6. Normalized offloading cost with Δ = 64.
Figure 7. Average offloading cost by different algorithms (N = 5).
Figure 8. Average execution latency by different algorithms.
Figure 9. Effectiveness of perturbing task size at different redundancy rates ω.
Figure 10. Effectiveness of perturbing task arrival jitter at different redundancy rates ω.
Figure 11. Average energy consumption evaluation at different redundancy rates ω.
Figure 12. Average latency consumption evaluation at different redundancy rates ω.
Table 1. Simulation parameters setup.

Parameter | Value
Transmission power of ST p_n and satellite p_SAT (W) | 1, 3
Antenna gain A_d and path loss exponent d_e | 4.11, 2.8
Carrier frequency f_c (GHz) | 30
Total bandwidth B (MHz) | 800
Receiver noise power N_0 (W) | 10^-9
Task size L (MB) | 100
Computational intensity k (cycles/bit) | 10
Computing power consumption of SatEC server p_c (W) | 0.5
CPU frequency of SatEC server f_1 and TC f_0 (GHz) | 0.4, 3
Latency–energy weight parameter λ | 0.5
Training interval δ | 10
Random batch size | 128
Replay memory size | 1024
Learning rate | 0.01
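For readers reproducing the simulation, the Table 1 values can be gathered into a single configuration object. The sketch below is illustrative only: the class name SimulationConfig, the unit conversions, and the standard cycles/frequency execution-latency expression (T = kL/f) are our own assumptions for demonstration, not code or a model taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class SimulationConfig:
    """Illustrative container for the Table 1 parameters (field names are our own)."""
    p_st: float = 1.0            # transmission power of ST p_n (W)
    p_sat: float = 3.0           # transmission power of satellite p_SAT (W)
    antenna_gain: float = 4.11   # antenna gain A_d
    path_loss_exp: float = 2.8   # path loss exponent d_e
    f_carrier: float = 30e9      # carrier frequency f_c (Hz)
    bandwidth: float = 800e6     # total bandwidth B (Hz)
    noise_power: float = 1e-9    # receiver noise power N_0 (W)
    task_size_bits: float = 100 * 8e6  # task size L = 100 MB, in bits (1 MB = 8e6 bits assumed)
    k_intensity: float = 10      # computational intensity k (cycles/bit)
    p_sat_cpu: float = 0.5       # computing power consumption of SatEC server p_c (W)
    f_sat: float = 0.4e9         # CPU frequency of SatEC server f_1 (Hz)
    f_tc: float = 3e9            # CPU frequency of TC f_0 (Hz)
    lam: float = 0.5             # latency-energy weight parameter
    train_interval: int = 10     # DRTO training interval
    batch_size: int = 128        # random batch size
    replay_size: int = 1024      # replay memory size
    lr: float = 0.01             # learning rate

# Example: rough execution latency under the common cycles/frequency model
# (an assumption for illustration, not necessarily the paper's exact cost model).
cfg = SimulationConfig()
cycles = cfg.k_intensity * cfg.task_size_bits
print(f"SatEC execution: {cycles / cfg.f_sat:.1f} s, TC execution: {cycles / cfg.f_tc:.1f} s")
```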
Table 2. Five groups of experimental setups with different sizes of dummy tasks.

Parameter | Group 1 | Group 2 | Group 3 | Group 4 | Group 5
ω | 0.05 | 0.10 | 0.15 | 0.20 | 0.25
(μ, σ) | (0.05 L_r, 0.05 L_r/10) | (0.1 L_r, 0.1 L_r/10) | (0.15 L_r, 0.15 L_r/10) | (0.2 L_r, 0.2 L_r/10) | (0.25 L_r, 0.25 L_r/10)
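As we read Table 2, each redundancy rate ω is paired with a Gaussian whose mean is ω·L_r and whose standard deviation is ω·L_r/10, where L_r denotes the real task size. A minimal sketch of how dummy-task sizes could be drawn under that parameterization is shown below; the function name generate_dummy_sizes and the positive-size clipping are our own assumptions, not the authors' implementation.

```python
import numpy as np

def generate_dummy_sizes(real_task_size_mb, omega, num_dummy, rng=None):
    """Sample dummy-task sizes (MB) from N(mu, sigma) with mu = omega * L_r
    and sigma = omega * L_r / 10, following our reading of Table 2.
    The small positive floor is our own safeguard against non-physical sizes."""
    if rng is None:
        rng = np.random.default_rng()
    mu = omega * real_task_size_mb
    sigma = omega * real_task_size_mb / 10.0
    sizes = rng.normal(mu, sigma, size=num_dummy)
    return np.clip(sizes, 1e-3, None)

# Example: Group 3 (omega = 0.15) with a 100 MB real task.
print(generate_dummy_sizes(100.0, 0.15, num_dummy=5, rng=np.random.default_rng(0)))
```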