Article

A Resource Allocation Algorithm for Cloud-Network Collaborative Satellite Networks with Differentiated QoS Requirements

1
State Grid Shandong Electric Power Company, State Grid Corporation of China, Jinan 250001, China
2
School of Management, Beijing Union University, Beijing 100101, China
3
Shandong Luruan Digital Technology Co., Ltd., Jinan 250000, China
4
Qingdao Institute of Software, College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, China
5
School of Shijiazhuang, Army Engineering University of PLA, Shijiazhuang 210014, China
*
Author to whom correspondence should be addressed.
Electronics 2024, 13(19), 3843; https://doi.org/10.3390/electronics13193843
Submission received: 8 August 2024 / Revised: 14 September 2024 / Accepted: 25 September 2024 / Published: 28 September 2024

Abstract

With the continuous advancement of cloud computing and satellite communication technology, the cloud-network-integrated satellite network has emerged as a novel network architecture. This architecture harnesses the benefits of cloud computing and satellite communication to achieve global coverage, high reliability, and flexible information services. However, as business types and user demands grow, addressing differentiated Quality of Service (QoS) requirements has become a crucial challenge for cloud-network-integrated satellite networks, and effective resource allocation algorithms are essential to meet them. Research on resource allocation algorithms for differentiated QoS requirements in cloud-network-integrated satellite networks is still in its early stages: although some results have been achieved, issues persist, such as high algorithm complexity, limited practicality, and a lack of effective evaluation and adjustment mechanisms. The first part of this study examines the state of research on virtual network mapping methods currently in use. A reinforcement-learning-based virtual network mapping algorithm that considers quality of service is then proposed. This algorithm aims to improve user QoS and the request acceptance ratio by introducing QoS satisfaction parameters. At the same computational complexity, QoS is significantly improved, and the request acceptance ratio and resource utilization efficiency also rise noticeably. The proposed algorithm addresses existing challenges and takes a step towards more practical and efficient resource allocation in cloud-network-integrated satellite networks. Experiments demonstrate the practicality of the proposed virtual network embedding algorithm for satellite networks (SN-VNE), based on Reinforcement Learning (RL), in meeting QoS requirements and improving the utilization of limited heterogeneous resources.
We compare the performance of the SN-VNE algorithm with DDRL-VNE, CDRL, and DSCD-VNE. Our algorithm improves the acceptance ratio of VNRs, long-term average revenue, and delay by an average of 7.9%, 15.87%, and 63.21%, respectively.

1. Introduction

With the rise and expansion of the internet, information services have gradually become rich and diverse, and network traffic, application data, and service demands have all shown explosive growth. Traditional terrestrial networks cannot meet the needs of users in remote areas, at sea, or in the air. Satellite networks can solve this problem by virtue of their large capacity, independence from geographical constraints, and full global coverage. With the expansion of cloud services, the resources required by businesses and applications are gradually concentrated on platform-based clouds. The traditional network organization form, with a basic network and physical connections at its core, cannot provide dynamically elastic network capabilities alongside cloud resources. Therefore, cloud-network-integrated technology is an important trend in future network development. It breaks the relative independence and isolation of the existing cloud and network, and integrates underlying facilities and resource scheduling to achieve elastic allocation of computing and storage. At the same time, it supports flexible orchestration and management according to diverse needs. Cloud-network integration can help ordinary users use satellite data more efficiently, and the cloud management method can make effective use of network resources and satisfy different types of services for users.
The combination of satellite networks and cloud computing technology aims to organically integrate satellite communications with cloud computing platforms to provide broader and more flexible communication and computing services. Figure 1 is a schematic diagram of a cloud-network-integrated satellite network, which consists of satellites, satellite computing modules, and communication transponder units. This integration can provide more powerful support for remote sensing, Internet of Things, communications, meteorology, and other fields. However, satellite resource allocation problems are usually affected by the dynamics of networks and satellite systems, such as fluctuations in communication link conditions, time-varying user needs, and changes in satellite locations. Cloud-network integration makes the resource allocation problem more complex because it involves the integration of satellite, cloud, and terrestrial networks. Satellite resource allocation under cloud-network integration requires management across multiple levels of ground network, satellite network, and cloud computing, which increases the complexity of resource allocation. In addition, cloud computing is usually concentrated in data centers on the ground, while satellite communications involve the propagation of signals on the ground and satellites, which may result in higher delay. The introduction of network function virtualization enables resources to be allocated and managed more flexibly. This provides edge computing nodes and cloud computing centers on satellites with higher resource utilization and manageability while also reducing latency.
At present, research on satellite resource allocation under cloud-network integration mainly focuses on delay optimization, bandwidth management, edge computing, and cloud collaboration. Researchers are committed to designing new architectures and strategies to reduce data transmission and processing delays to meet real-time requirements for applications such as telemedicine. Researchers are committed to intelligently allocating bandwidth in a cloud-network integration environment to adapt to the bandwidth requirements of different applications and improve the overall efficiency of the system. Researchers focus on collaboratively utilizing edge computing nodes and cloud computing centers on satellites to achieve distributed processing of tasks, reduce latency, and reduce network load.
Although some scholars have studied resource allocation algorithms under cloud-network integration, the following challenges still exist:
(1)
Different applications have greatly different requirements for performance indicators, such as latency, bandwidth, and reliability. As the number of users and service demands grow, on-demand allocation of resources becomes particularly important. Therefore, in a cloud-network integration environment, how to ensure that satellite and cloud resources are fully utilized while meeting the differentiated needs of users is a major challenge.
(2)
Computing, storage, and communication resources on satellites are limited, and their availability may vary with satellite operating status, energy supply, and other factors. Therefore, in a cloud-network integration environment, designing an intelligent, adaptive resource allocation strategy that tracks dynamic changes and thereby optimizes the utilization of satellite resources is a major challenge.
To address the challenges mentioned above, we study a virtual network mapping algorithm under cloud-network integration, and propose a VNE algorithm based on QoS demand satisfaction. The contributions of this paper are summarized as follows:
(1)
We abstract the topology of the satellite network under cloud-network integration and introduce QoS satisfaction. By calculating QoS satisfaction during virtual network mapping, we can measure whether a mapping meets users' needs.
(2)
We use a reinforcement learning algorithm to design the VNE algorithm, which can adapt to dynamically changing satellite network conditions. By dynamically adjusting the virtual network mapping, the algorithm becomes more flexible in adapting to changes in the satellite network, and thus better meets QoS requirements.
The structure of this paper is arranged as follows. We describe related work on satellite network resource allocation and virtual network mapping algorithms in Section 2. In addition, the system model of satellite network is established in Section 3. In Section 4, we introduce the specific algorithm flow of SN-VNE in detail. The SN-VNE algorithm performance evaluation results and conclusions are described in Section 5 and Section 6, respectively.

2. Related Work

2.1. Satellite Network Resource Allocation

Satellite network resource allocation is the reasonable allocation of limited spectrum, power, time, and other resources in satellite communication systems. The main goal is to optimize network performance, increase system capacity, reduce communication delay, and ensure that each user obtains a QoS that meets their communication requirements. Among existing satellite network resource allocation algorithms, Tian et al. [1] built a Low Earth Orbit (LEO) satellite communication system based on beam hopping and adopted a greedy algorithm to solve the resource allocation problem. Beam hopping is a technique used in satellite communication to manage and optimize signal coverage and resource allocation: the satellite's communication beam quickly switches between regions to adapt to their demands and communication loads, allowing the system to flexibly allocate satellite resources according to changing business needs. Regarding the load balancing of cache resource allocation, Wang et al. [2] proposed a cache resource allocation method based on a Stackelberg game. By intelligently caching content and allocating resources in satellite networks, they aim to reduce latency, improve data transmission efficiency, and achieve network load balancing, and the proposed method can optimize cache resource allocation in Geostationary Earth Orbit (GEO) satellites. Although these methods account for the distinctive characteristics of satellite networks, they have not yet attained optimal performance in resource allocation.
Satellite networks combined with edge computing have also achieved some results in resource allocation. For example, Fang et al. [3] modeled the offloading decision problem as a bilateral matching game involving LEO and GEO satellites, and proposed an offloading decision scheme based on an improved bilateral many-to-one matching game algorithm. This algorithm mainly considered improvements in delay and energy cost, and can jointly optimize computing resource and radio resource allocation. Wang et al. [4] decomposed the growing computing demand problem into two subproblems: computation offloading and resource allocation. They proposed a joint computation offloading and resource allocation (JCORA) strategy in which the Lagrange multiplier method was used to realize computing resource allocation. The results showed that the JCORA strategy can effectively reduce system costs.
Due to the inherent complexity of satellite network environments and the limitations of available resources, a significant body of research focused on developing resource allocation strategies. Zhou et al. [5] proposed a task-aware contact plan design method aimed at optimizing resource allocation and data transmission efficiency in small satellite networks. The study considered the unique characteristics of satellite networks, such as the dynamic nature of satellites, limited resources, and the diversity of tasks, and emphasized the importance of accounting for channel conditions and network density when designing satellite network tasks and data transmission strategies. Jia et al. [6] introduced a collaborative data download method utilizing inter-satellite links to improve the data transmission efficiency of LEO satellite networks. The research examined how to enhance satellite networks performance and data transmission efficiency under constrained resource conditions through intelligent scheduling and the use of inter-satellite links. Di et al. [7] presented an ultra-dense LEO network architecture to improve data transmission efficiency by integrating space and ground networks. This study stressed the importance of channel conditions and network density in satellite network task design and data transmission strategies, while also exploring ways to boost network performance through space-ground network integration. Zhou et al. [8] proposed a channel-aware task scheduling method to optimize resource allocation in broadband data relay satellite networks. The study took into account the effects of channel conditions on task scheduling and data transmission efficiency, highlighting the importance of considering both channel conditions and network density when developing satellite network tasks and transmission strategies. 
Moreover, emerging technologies in the field of artificial intelligence have provided novel solutions for efficient resource allocation in resource-constrained networks. Deng et al. [9] investigated the application of intelligent algorithms to address complex network capacity management challenges, with the aim of improving data transmission efficiency and task scheduling. Their work offered a new perspective on satellite network resource optimization, particularly by demonstrating how deep reinforcement learning can be leveraged to improve network performance and resource utilization. Similarly, Jiang et al. [10] introduced a reinforcement-learning-based method to optimize satellite network capacity management in response to evolving network environments, thereby enhancing overall network performance. This method facilitated more efficient resource allocation and network utilization in dynamic conditions. Zhou et al. [11] examined the data transmission challenges in distributed satellite cluster networks and proposed a distributed robust planning approach, which enhanced the performance and robustness of distributed satellite networks when faced with randomness and uncertainty. Zhou et al. [12] proposed a state-action-reward-state-action (SARSA)-based actor-critic reinforcement learning (SACRL) resource allocation strategy. By combining these two processes, the algorithm can optimize the multi-dimensional resource allocation of Satellite Internet of Remote Things Networks (SIoRTNs) and the data scheduling of the Internet of Remote Things (IoRT).
The advantage of cloud computing is that it can quickly obtain computing resources and flexibly adjust resource allocation according to business needs. By combining cloud computing technology, researchers can achieve better resource allocation strategies in different networks, such as vehicle networks [13] and wireless sensor networks [14].

2.2. Virtual Network Mapping

The process of virtual network mapping involves mapping virtual network requests (VNRs) onto physical network resources. Through the design of suitable virtual network mapping algorithms, it becomes feasible to optimize the utilization of physical network resources, enhance resource efficiency, and reduce the cost of virtual network mapping. In the early stages of research on the virtual network mapping problem, researchers proposed many heuristic algorithms. Cao et al. [15,16] proposed an efficient mapping algorithm that utilized a novel node-ranking method for embedding virtual networks. Building upon this work, they further developed an embedding algorithm based on multiple topological attributes, which not only considered the importance of individual nodes but also incorporated the overall network topology, aiming at more efficient resource allocation and improved overall network performance. Zhang et al. [17] proposed a security-aware virtual network embedding algorithm that leveraged information entropy and the TOPSIS method to optimize network security. The algorithm took resource allocation efficiency into account to ensure that security enhancements were achieved without significantly compromising resource efficiency. Similarly, Lu et al. [18] introduced a collaborative dynamic virtual network embedding algorithm based on a resource importance metric. By dynamically evaluating the importance of resources, this algorithm aimed to achieve efficient resource utilization under constrained resource conditions. However, these heuristic algorithms are either constrained by the scale of the network or limited to single-domain networks. As the scale of virtual networks continues to expand, finding effective solutions may become very time-consuming or impossible.
Yao et al. [19] introduced reinforcement learning to optimize node mapping, which improved the request acceptance rate and long-term benefits compared to traditional heuristic algorithms, proving the effectiveness of reinforcement learning in virtual network mapping problems. Zhang et al. [20] introduced a virtual network embedding algorithm grounded in enhanced genetic algorithms, yielding a substantial enhancement in the acceptance rate of VNR and the long-term average revenue of Infrastructure Providers (InPs). Ling et al. [21] asserted that prior VNE algorithms overlooked the resource situation of the underlying network (RSUN). Consequently, they devised two heuristic algorithms designed to choose suitable VNE strategies based on RSUN. Through the categorization of physical nodes, the efficacy of bandwidth utilization witnessed significant improvement. Yuan et al. [22] introduced a VNE algorithm founded on Q-learning algorithm, which improved resource utilization compared to traditional heuristic algorithms. Since most algorithms did not consider the QoS requirements of VNR, and a few only considered the mapping of delay-sensitive VNR [23,24,25], Jiang et al. [26] proposed a VNE algorithm rooted in deep reinforcement learning (DRL), which focused on differentiated service quality requirements and network security issues. Cheng et al. [27] believed that most VNE methods only focus on the current VNR and treat different QoS requirements equally, ignoring long-term effects. Therefore, they proposed a hierarchical reinforcement-learning-based active virtual network embedding algorithm (VNE-HRL). Lu et al. [28] introduced a pair of distributed parallel genetic algorithms employing crossover and mutation schemes for addressing online virtual network link embedding challenges. Two algorithms collaborated with each other and demonstrated a speed improvement of 32.78% compared to the prevalent VNE algorithms of the era. 
In [29], the objective was to tackle data privacy and data isolation concerns, and Federated Learning (FL) was introduced as a means to model VNE for the first time. Building upon Horizontal Federated Learning (HFL), the researchers introduced a VNE architecture denoted HFL-VNE. This architecture collectively optimizes decisions by leveraging distinct local and federated reward mechanisms and loss functions, leading to a substantial enhancement in learning efficiency.
In summary, few existing virtual network mapping algorithms fully take QoS requirements into account. Furthermore, the metrics employed by previous algorithms are typically insufficient, potentially overlooking key factors influencing QoS in practical applications. Hence, our algorithm considers multiple QoS metrics for VNRs and provides different QoS guarantees for VNRs with different QoS requirements. This approach aims to enhance the request acceptance rate while safeguarding the quality of service of VNRs to the greatest extent possible.

3. Satellite Network Model

Virtual network mapping methods oriented towards ground-network models are inadequate for dynamic and changeable satellite networks. For example, the dynamic movement of satellite nodes can interrupt services, causing established satellite network links to squander resources and increase waiting time. Therefore, establishing a network model that reflects the particularities of satellite networks is of great significance for supporting research on virtual network mapping methods.
In this paper, the satellite network model is established for the dynamic and changeable satellite network environment. The satellite physical network is a directed graph $G^S = \{N^S, L^S, W_N^S, W_L^S\}$, where $N^S$ is the set of satellite network nodes, $L^S$ is the set of satellite network links, the node attribute set $W_N^S$ includes the available CPU resources $CPU(n^s)$ and the node delay $Delay(n^s)$, and the link attribute set $W_L^S$ includes the remaining bandwidth $BW(l^s)$ and the link transmission delay $Delay(l^s)$. For virtual network requests, a virtual network $G^{VR} = \{N^{VR}, L^{VR}, C_N^{VR}, C_L^{VR}\}$ is established, where $N^{VR}$ is the virtual node set, $C_N^{VR}$ is the constraint on $N^{VR}$ (i.e., the demand on satellite nodes), $L^{VR}$ is the virtual link set, and $C_L^{VR}$ is the constraint on $L^{VR}$ (i.e., the demand on inter-satellite links). We mainly investigate resource allocation in satellite networks under cloud-network convergence, and introduce VNE to optimize resource allocation. The node mapping result is given by the following formula:
$$\varphi_{ij} = \begin{cases} 1, & \text{if } n_i^{vr} \rightarrow n_j^{s} \\ 0, & \text{else.} \end{cases}$$
The mapping of a virtual node can be expressed as $n_i^{vr} \rightarrow n_j^{s}$, where $n_i^{vr}$ is the virtual node that needs to be mapped and $n_j^{s}$ is the physical node.
The CPU resource constraint is defined as follows:
$$C_{n_i^{vr} \rightarrow n_j^{s}}^{CPU}:\quad CPU(n_j^{s}) \geq CPU(n_i^{vr}).$$
The CPU resources of $n_j^{s}$ must meet the resource requirement of $n_i^{vr}$, which is an essential condition for successful mapping. In addition, during virtual node mapping, a physical node can host at most one virtual node of a request, and each virtual node must be mapped to exactly one physical node, so the following must be satisfied:
$$\forall n_i^{vr} \in N^{VR}, \quad \sum_{n_j^{s}} \varphi_{ij} = 1.$$
After node mapping is completed, the virtual links must be mapped to physical links. The link mapping is expressed as follows:
$$\phi_{ij} = \begin{cases} 1, & \text{if } l_i^{vr} \rightarrow l_j^{s} \\ 0, & \text{else.} \end{cases}$$
Similar to $n_i^{vr} \rightarrow n_j^{s}$, the expression $l_i^{vr} \rightarrow l_j^{s}$ describes virtual link mapping, where $l_i^{vr}$ is the virtual link and $l_j^{s}$ is the physical link. Whether $l_i^{vr}$ can be successfully mapped to $l_j^{s}$ depends on whether the bandwidth resources meet the following condition:
$$C_{l_i^{vr} \rightarrow l_j^{s}}^{BW}:\quad BW(l_j^{s}) \geq BW(l_i^{vr}).$$
Besides, the link mapping should meet the following constraint:
$$\forall l_i^{vr} \in L^{VR}, \quad \sum_{l_j^{s}} \phi_{ij} \geq 1.$$
In the VNE problem, a virtual link can be mapped onto multiple physical links; mapping onto multiple physical links is called path splitting. In addition, physical link paths have no directionality:
$$dir(n_j^{s}, n_i^{s}) = dir(n_i^{s}, n_j^{s}).$$
Before defining the node mapping cost, we need to define the CPU resources that remain available; the CPU remaining on $n_j^{s}$ after deducting the consumption of all virtual nodes $n_i^{vr}$ deployed on it is:
$$RestCPU(n_j^{s}) = CPU(n_j^{s}) - \sum_{n_i^{vr} \rightarrow n_j^{s}} CPU(n_i^{vr}).$$
Moreover, the cost of mapping virtual nodes onto physical node $n_j^{s}$ is:
$$Cost_N(n_j^{s}) = \sum_{n_i^{vr} \rightarrow n_j^{s}} CPU(n_i^{vr}).$$
Before defining the link mapping cost, we need to define the available bandwidth resources; the bandwidth remaining on $l_j^{s}$ after deducting the consumption of the virtual links $l_i^{vr}$ deployed on it is:
$$RestBW(l_j^{s}) = BW(l_j^{s}) - \sum_{l_i^{vr} \rightarrow l_j^{s}} BW(l_i^{vr}).$$
The cost of mapping virtual links onto $l_j^{s}$ is:
$$Cost_L(l_j^{s}) = \sum_{l_i^{vr} \rightarrow l_j^{s}} BW(l_i^{vr}).$$
In addition, the delay of the physical link must not exceed the delay required by the VNR:
$$C_{l_i^{vr} \rightarrow l_j^{s}}^{Delay}:\quad Delay(l_j^{s}) \leq Delay(l_i^{vr}).$$
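Taken together, the bandwidth and delay conditions, plus the CPU condition above, form the feasibility test that a candidate node or link mapping must pass. A minimal sketch, assuming illustrative dict-based resource records rather than the paper's actual data structures:

```python
# Feasibility checks mirroring the CPU, bandwidth, and delay constraints above.
# The dict-based records here are illustrative, not the paper's implementation.

def node_feasible(substrate_node, virtual_node):
    """CPU(n_j^s) must cover the requested CPU(n_i^vr)."""
    return substrate_node["cpu"] >= virtual_node["cpu"]

def link_feasible(substrate_link, virtual_link):
    """BW(l_j^s) must cover BW(l_i^vr); Delay(l_j^s) must not exceed the requirement."""
    return (substrate_link["bw"] >= virtual_link["bw"]
            and substrate_link["delay"] <= virtual_link["delay"])

print(node_feasible({"cpu": 40}, {"cpu": 25}))                           # True
print(link_feasible({"bw": 100, "delay": 20}, {"bw": 50, "delay": 30}))  # True
print(link_feasible({"bw": 100, "delay": 40}, {"bw": 50, "delay": 30}))  # False: link too slow
```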
The revenue from a successful deployment of a VNR is:
$$REV(G^{VR}) = \sum_{n_i^{vr} \in N^{VR}} CPU(n_i^{vr}) \times \varphi_{ij} + \sum_{l_i^{vr} \in L^{VR}} BW(l_i^{vr}) \times \phi_{ij}.$$
The cost of a successful deployment of a VNR is:
$$COST(G^{VR}) = \sum_{n_i^{vr} \in N^{VR}} CPU(n_i^{vr}) \times \varphi_{ij} + \sum_{l_i^{vr} \in L^{VR}} BW(l_i^{vr}) \times \phi_{ij} \times hops(l_i^{vr})$$
where $hops(l_i^{vr})$ is the number of physical hops onto which $l_i^{vr}$ is mapped. If $hops(l_i^{vr}) > 1$, then $l_i^{vr}$ is deployed across multiple satellite links; more paths mean higher mapping cost. Therefore, minimizing the hops is an effective way to reduce mapping costs.
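The revenue and cost formulas above can be sketched as follows; the dict layout, field names, and the toy VNR are illustrative assumptions, not the paper's implementation:

```python
# Sketch of REV(G^VR) and COST(G^VR) for an accepted request. Only mapped
# elements appear in the inputs, so the indicators phi/varphi are implicit.

def revenue(vnr_nodes, vnr_links):
    """Sum of requested CPU over mapped virtual nodes plus
    requested bandwidth over mapped virtual links."""
    return sum(vnr_nodes.values()) + sum(l["bw"] for l in vnr_links.values())

def cost(vnr_nodes, vnr_links):
    """CPU cost is unchanged, but each virtual link's bandwidth is
    paid once per physical hop it is mapped onto."""
    return sum(vnr_nodes.values()) + \
           sum(l["bw"] * l["hops"] for l in vnr_links.values())

# A toy VNR: two virtual nodes and one virtual link mapped over 2 hops.
nodes = {"v0": 10, "v1": 20}                   # requested CPU units
links = {("v0", "v1"): {"bw": 5, "hops": 2}}   # requested bandwidth, hops used

print(revenue(nodes, links))  # 10 + 20 + 5 = 35
print(cost(nodes, links))     # 10 + 20 + 5*2 = 40
```

A cost above revenue here reflects the multi-hop penalty that the algorithm tries to minimize.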

4. Mapping Algorithm

We will describe a satellite network mapping method based on RL. The algorithm mainly learns optimal mapping strategies through interaction between agents and the environment. Specifically, during the learning process, the agent needs to interact with the environment, select corresponding actions based on current virtual network configuration, usage of physical network resources, network traffic, and other information, and receive feedback from the environment. Based on the feedback, the policy is updated. Through continuous learning and optimization, the agent can eventually learn an optimal virtual network mapping strategy, which effectively meets the ever-changing business requirements.

4.1. Feature Extraction

To facilitate a deeper understanding of the substrate network by the agent, it is imperative to distill the essence of each substrate node and fashion it into a feature matrix that can be fed into the policy network. In this paper, we have chosen four salient attributes for every substrate node.
(1)
Computing resources: in virtual network mapping, the CPU capacity of a substrate node directly affects its ability to host virtual nodes.
(2)
Degree: the degree of node $n_i^{S}$ is the number of nodes connected to it. A large degree means the node has higher connectivity and importance in the network.
(3)
Sum of bandwidth: $SUM_{BW}(n_i^{S})$ is the total bandwidth available across all links adjacent to the node. The greater the total bandwidth, the better the node can meet a virtual node's computing and data transmission requirements:
$$SUM_{BW}(n_i^{S}) = \sum_{l_j^{S} \in L(n_i^{S})} BW(l_j^{s})$$
where $L(n_i^{s})$ is the set of links adjacent to node $n_i^{s}$ and $BW(l_j^{s})$ is the bandwidth resource of link $l_j^{s}$.
(4)
Average distance to other host nodes: when mapping virtual nodes to substrate nodes, we consider not only the mapping location of a single virtual node but also the mapping locations of the other virtual nodes in the same request. By mapping virtual nodes to locations close to already mapped substrate nodes, we can reduce costs. We use the Floyd–Warshall algorithm to find the shortest path, and measure the distance between two substrate nodes by the number of links on that shortest path:
$$AVG(DST)(n_i^{s}) = \frac{\sum_{\tilde{n}_i^{s} \in \tilde{N}^{S}} DST(n_i^{s}, \tilde{n}_i^{s})}{|\tilde{N}^{S}| + 1}$$
where $\tilde{N}^{S}$ denotes the complement set of $n_i^{s}$, $\tilde{n}_i^{s}$ is a node in $\tilde{N}^{S}$, and $|\tilde{N}^{S}|$ is the size of $\tilde{N}^{S}$. We normalize the extracted features of node $n_i^{S}$ and form a feature vector:
$$v = \big(CPU(n_i^{s}), DEG(n_i^{s}), SUM_{BW}(n_i^{s}), AVG(DST)(n_i^{s})\big)^{T}.$$
After feature extraction for all substrate nodes, we stack their feature vectors into a feature matrix:
$$M_f = (v_1, v_2, \ldots, v_{|N^S|})^{T}.$$
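The four-feature extraction can be sketched end to end; the graph representation, helper names, and toy substrate below are assumptions for illustration, and feature normalization is omitted for brevity:

```python
# Illustrative sketch of the four-feature extraction described above.

def floyd_warshall(nodes, links):
    """All-pairs shortest hop counts; links is a list of undirected pairs."""
    INF = float("inf")
    dist = {u: {v: (0 if u == v else INF) for v in nodes} for u in nodes}
    for (u, v) in links:
        dist[u][v] = dist[v][u] = 1
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

def features(nodes, links, cpu, bw):
    """Per-node tuple (CPU, degree, sum of adjacent bandwidth, avg distance)."""
    dist = floyd_warshall(nodes, links)
    feats = {}
    for n in nodes:
        adjacent = [l for l in links if n in l]
        degree = len(adjacent)
        sum_bw = sum(bw[l] for l in adjacent)
        others = [m for m in nodes if m != n]
        avg_dst = sum(dist[n][m] for m in others) / (len(others) + 1)
        feats[n] = (cpu[n], degree, sum_bw, avg_dst)
    return feats

# Toy 3-node substrate: a chain a - b - c.
nodes = ["a", "b", "c"]
links = [("a", "b"), ("b", "c")]
cpu = {"a": 50, "b": 80, "c": 30}
bw = {("a", "b"): 10, ("b", "c"): 20}
print(features(nodes, links, cpu, bw)["b"])  # CPU 80, degree 2, sum-BW 30, avg distance 2/3
```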

4.2. Policy Network

During the virtual network mapping process, we use the policy network to choose a substrate node for each virtual node that needs to be mapped. As illustrated in Figure 2, the policy network constructed in this article is made up of an input layer, a convolutional layer, a softmax layer, and a filter. The feature matrix is the input, and the output is the probability of mapping the virtual node to each substrate node. Each layer of the policy network has a specific role:
(1)
Input layer: responsible for receiving feature matrix as input.
(2)
Convolutional layer: through a convolution operation, it extracts local features from the input feature matrix and applies a linear transformation:
$$h_k^{c} = \omega \cdot v_k + b$$
where ω represents the convolution kernel weight vector and b is the bias.
(3)
Softmax layer: the likelihood that the virtual node maps to each substrate node is produced by nonlinearly transforming the convolutional layer's output. The $k$th node is chosen with probability:
$$p_k = \frac{e^{h_k^{c}}}{\sum_{i}^{|N^s|} e^{h_i^{c}}}.$$
(4)
Filter: the filter is mainly used to screen out candidate nodes with insufficient computing resources, leaving only nodes with sufficient CPU capacity.
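The output stage of the policy network, a filter followed by a softmax, can be sketched as follows; the per-node scores stand in for the convolutional layer's outputs $h_k^c$, and all values are illustrative:

```python
import math

# Minimal sketch of the policy network's output stage: a filter masks
# substrate nodes with insufficient CPU, and a softmax normalizes the
# remaining scores into selection probabilities.

def select_probabilities(scores, rest_cpu, required_cpu):
    """Return P(map virtual node -> substrate node k) over feasible nodes."""
    feasible = {k: s for k, s in scores.items()
                if rest_cpu[k] >= required_cpu}      # the "filter" stage
    if not feasible:
        return {}                                    # no candidate: reject the VNR
    z = sum(math.exp(s) for s in feasible.values())  # softmax denominator
    return {k: math.exp(s) / z for k, s in feasible.items()}

scores = {"s1": 1.2, "s2": 0.4, "s3": 2.0}   # stand-ins for h_k^c
rest_cpu = {"s1": 40, "s2": 5, "s3": 25}
probs = select_probabilities(scores, rest_cpu, required_cpu=20)
# "s2" is filtered out; the probabilities over {"s1", "s3"} sum to 1.
print(probs)
```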

4.3. Training Approach

Our training process for virtual network mapping tasks typically involves the following steps:
(1)
Initialization: first, the policy network’s parameters are initialized at random.
(2)
Interaction: the policy network interacts with the environment. The feature matrix extracted from the substrate network is fed into the policy network.
(3)
Policy network output: in each interaction, a list of potential substrate nodes and the likelihood that each will be chosen are produced by the policy network.
(4)
Feedback from the environment: the environment provides feedback based on the policy network's output. This feedback is known as the reward signal. The agent uses the latest reward signal to determine whether its behavior is appropriate: positive feedback motivates the agent to continue the behavior in the future, while unclear or negative feedback indicates the current behavior should be avoided. The reward function plays a crucial role here: a correctly designed and optimized reward function directly impacts the algorithm's performance and convergence rate. The reward function used in this paper is as follows:
$$reward = \alpha \cdot revToCost(req) + \beta \cdot bw\_sa + \lambda \cdot dl\_sa$$
where $revToCost(req)$ denotes the ratio of revenue to cost of deploying the request, and $\alpha$, $\beta$, and $\lambda$ are the weight coefficients of the different factors. In addition, $bw\_sa$ represents the bandwidth satisfaction and $dl\_sa$ the delay satisfaction, defined as follows:
bw_sa = ( Σ_{l_i^{vr} ∈ L^{VR}} [ 1 − |act_bw(l_i^{vr}) − req_bw(l_i^{vr})| ] ) / |L^{VR}|
dl_sa = ( Σ_{l_i^{vr} ∈ L^{VR}} [ 1 − |act_dl(l_i^{vr}) − req_dl(l_i^{vr})| ] ) / |L^{VR}|
where act_bw(l_i^{vr}) represents the actual bandwidth of virtual link l_i^{vr}, req_bw(l_i^{vr}) the required bandwidth, act_dl(l_i^{vr}) the actual delay, and req_dl(l_i^{vr}) the required delay.
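The reward and satisfaction terms can be sketched as follows; the weight values for α, β, λ and the normalized link values are illustrative assumptions, since the paper does not report them:

```python
def satisfaction(actual, required):
    """bw_sa / dl_sa: mean over virtual links of 1 - |actual - required|.
    Inputs are assumed normalized so each per-link term stays in [0, 1]."""
    return sum(1 - abs(a - r) for a, r in zip(actual, required)) / len(actual)

def reward(rev_to_cost, act_bw, req_bw, act_dl, req_dl,
           alpha=0.5, beta=0.25, lam=0.25):
    """Formula (21): reward = α·revToCost + β·bw_sa + λ·dl_sa.
    The weight values here are illustrative, not the paper's."""
    return (alpha * rev_to_cost
            + beta * satisfaction(act_bw, req_bw)
            + lam * satisfaction(act_dl, req_dl))

r = reward(0.8,
           act_bw=[1.0, 0.9], req_bw=[1.0, 1.0],
           act_dl=[0.2, 0.2], req_dl=[0.2, 0.3])
```

A request whose allocated bandwidth and delay exactly match the requirements gets the maximum satisfaction of 1 on both terms.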
(5)
Calculation of loss: a loss value is calculated based on the action and feedback. This loss reflects the discrepancy between the policy network's prediction and the actual environmental feedback. In this paper, we calculate the cross-entropy loss:
L(p) = − Σ_{i}^{|N_s|} y_i · log(p_i)
where y_i is the manually labeled selection of the i-th node and p_i represents the probability that the i-th physical node is mapped by the current virtual node.
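A minimal sketch of this loss (the eps term guards against log(0); names are illustrative):

```python
import math

def cross_entropy(y, p, eps=1e-12):
    """L(p) = -sum_i y_i * log(p_i), with y a one-hot label over substrate nodes."""
    return -sum(yi * math.log(pi + eps) for yi, pi in zip(y, p))

# the loss shrinks as the labeled node receives higher probability
good = cross_entropy([0, 1, 0], [0.1, 0.8, 0.1])
bad  = cross_entropy([0, 1, 0], [0.8, 0.1, 0.1])
# good < bad
```

Minimizing this loss pushes the policy network's probability mass toward the labeled node selection.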
(6)
Update policy network: an optimization algorithm is then employed to update the policy network's parameters so as to minimize the calculated loss. In gradient descent, parameters are updated by moving in the direction opposite to the gradient, gradually decreasing the loss function. The learning rate η controls the step size of each parameter update. If η is too large, the model oscillates: large steps may skip the optimal solution, causing training to fail to converge or to converge only to a local optimum. If η is too small, convergence can be very slow.
g = η · r · g_f
where r represents the reward when a virtual request is successfully mapped and g_f represents the gradient used to adjust the optimization direction of the loss function.
(7)
Iteration: repeat the above steps until a stopping condition is met.
Throughout the training process, the policy network continuously interacts with the environment and summarizes the best behavioral decisions through trial and error, gradually learning how to take effective actions under given network states to maximize returns (i.e., revenue-to-cost ratio). This way, when facing new virtual requests, the policy network can automatically make effective decisions based on its learned experience. Algorithm 1 demonstrates the specific training process.
Algorithm 1 Algorithm for training models.
     Input: Epoch limit epoch_num; learning rate η; differentiated-QoS VNR set U_VNRs;
     Output: Trained parameters of the policy network;
 1:  Initialize all the parameters in the policy network;
 2:  while i < epoch_num do
 3:      flag = 0;
 4:      for rep ∈ U_VNRs do
 5:          for node ∈ rep do
 6:              get matrix M_f by formula (18);
 7:              get probability by formula (20);  //get the probability distribution
 8:              select host node;  //select a node to act as the host
 9:              calculate gradient;
10:          end for
11:          if isMapped(∀ node ∈ rep) then
12:              get link mapping by breadth-first search;
13:          end if
14:          if isMapped(∀ node ∈ rep, ∀ link ∈ rep) then
15:              get reward by formula (21);  //determine the revenue-to-cost ratio
16:              multiplyGradient(reward, η);  //determine the final gradients
17:          else
18:              reset gradient;
19:          end if
20:          ++flag;
21:          if flag attains the batch size then
22:              apply gradients to parameters;
23:              flag = 0;
24:          end if
25:      end for
26:      ++i;
27:  end while
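The reward-scaled, batched gradient accumulation of Algorithm 1 (lines 14-24) can be sketched as a toy REINFORCE loop over a single node choice; the policy, rewards, and hyperparameters here are illustrative stand-ins for the paper's policy network:

```python
import math, random

def softmax(theta):
    m = max(theta)
    e = [math.exp(t - m) for t in theta]
    z = sum(e)
    return [x / z for x in e]

def train_policy(node_rewards, eta=0.5, batch=4, epochs=500, seed=0):
    """Toy REINFORCE loop mirroring Algorithm 1's structure: sample a host
    node, scale the log-probability gradient by the reward (line 16), and
    apply accumulated gradients once per batch (lines 21-24)."""
    rng = random.Random(seed)
    theta = [0.0] * len(node_rewards)      # one parameter per substrate node
    acc = [0.0] * len(theta)
    flag = 0
    for _ in range(epochs):
        p = softmax(theta)
        k = rng.choices(range(len(p)), weights=p)[0]   # select host node
        r = node_rewards[k]                            # reward, e.g. formula (21)
        for i in range(len(theta)):                    # gradient of log p_k
            acc[i] += eta * r * ((1.0 if i == k else 0.0) - p[i])
        flag += 1
        if flag == batch:                              # batched parameter update
            theta = [t + a for t, a in zip(theta, acc)]
            acc = [0.0] * len(theta)
            flag = 0
    return softmax(theta)

p = train_policy([0.1, 1.0, 0.2])
# the policy learns to prefer the node with the highest reward
```

Failed mappings would contribute a reset (zero) gradient, exactly as in line 18 of Algorithm 1.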

5. Experimental Results and Analysis

This section introduces the experimental environment and parameters, compares the performance of the proposed algorithm with three baseline algorithms, i.e., a latency-sensitive VNE algorithm based on deep reinforcement learning (DDRL-VNE), a continuous-decision virtual network embedding scheme relying on reinforcement learning (CDRL), and a delay-sensitive cross-domain virtual network embedding algorithm (DSCD-VNE), and analyzes the experimental results.

5.1. Experimental Environment and Parameters

We conduct algorithm training on a computer with an Intel Core i7-10875H processor, 16 GB of memory, and 1 TB of disk space. We use the NetworkX tool to generate the cloud-network collaborative satellite network topology. As shown by the parameter settings in Table 1, physical nodes are categorized as satellite nodes, cloud server nodes, and edge nodes, which differ in the amount of resources they possess. Experimental data of 1000 virtual networks and 1 physical network are generated through the topology generation tool. The generated physical network consists of 200 nodes and 800 links: 30 satellite nodes, 50 cloud server nodes, and 120 edge nodes. The CPU capacity of satellite nodes is between 40 and 80 units and their bandwidth capacity between 40 and 100 units; cloud server nodes have a CPU capacity of 120–200 units and a bandwidth capacity of 100–200 units; edge nodes have a CPU capacity of 30–80 units and a bandwidth capacity of 30–100 units. Each generated virtual network contains 6–30 virtual nodes, with a 50% probability of connection between nodes. Virtual node resources are generated according to QoS requirements: nodes with low computational requirements receive 3–18 units, and nodes with medium and high computational requirements receive 20–40 and 50–90 units, respectively. Virtual link bandwidth is likewise generated according to QoS requirements: links with low, medium, and high bandwidth requirements receive 5–15, 25–55, and 65–100 units, respectively.
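As a sketch, a substrate topology with the node counts and resource ranges of Table 1 can be generated with NetworkX as follows; the wiring scheme (a G(n, m) random graph) and attribute names are assumptions, since the paper does not publish its generator:

```python
import random
import networkx as nx

def gen_physical_network(seed=0):
    """Generate a substrate network matching Table 1: 200 nodes, 800 links,
    with per-type CPU and bandwidth ranges (wiring scheme is illustrative)."""
    rng = random.Random(seed)
    g = nx.gnm_random_graph(200, 800, seed=seed)     # 200 nodes, 800 links
    kinds = ["satellite"] * 30 + ["cloud"] * 50 + ["edge"] * 120
    cpu = {"satellite": (40, 80), "cloud": (120, 200), "edge": (30, 80)}
    bw  = {"satellite": (40, 100), "cloud": (100, 200), "edge": (30, 100)}
    for n, kind in zip(g.nodes, kinds):
        g.nodes[n]["kind"] = kind
        g.nodes[n]["cpu"] = rng.randint(*cpu[kind])
    for u, v in g.edges:
        kind = g.nodes[u]["kind"]                    # link type taken from one endpoint
        g.edges[u, v]["bw"] = rng.randint(*bw[kind])
    return g

g = gen_physical_network()
# 200 nodes and 800 links, each annotated with cpu/bw attributes
```

Virtual network requests would be generated the same way, drawing node and link resources from the QoS-dependent ranges above.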

5.2. Comparison Algorithms

In this study, in addition to SN-VNE, we also implement DDRL-VNE, CDRL, and DSCD-VNE. This section introduces each of these three algorithms in detail and defines the relevant evaluation metrics.
(1)
DDRL-VNE: a latency-sensitive VNE algorithm based on deep reinforcement learning, designed to quickly and effectively arrange SAGIN network resources for industrial Internet of Things (IIoT) services and meet users' service quality requirements. It transforms the resource scheduling problem of SAGIN into a multi-domain virtual network embedding problem and considers the impact of traffic size and hop count on latency. It constructs a learning agent consisting of a five-layer policy network and extracts a feature matrix from SAGIN network attributes as its training environment. The five-layer policy network infers node embedding probabilities, and a breadth-first search strategy completes the link embedding [30].
(2)
CDRL: a reinforcement-learning-based embedding scheme for continuous-decision virtual networks, aimed at solving the problem that static decision mechanisms cannot adapt to dynamic network structures and environments in VNE. The embedding of nodes within the same request is treated as a time-series problem, modeled with the classic seq2seq model, and the RNN parameters are updated by the policy gradient algorithm to optimize network resource utilization efficiency. First, the state information of the underlying network is extracted to form a feature matrix. Then, the seq2seq model outputs the node embedding result of the current virtual network request. Finally, the network parameters are updated through the policy gradient algorithm to optimize the embedding strategy [31].
(3)
DSCD-VNE: a latency-sensitive cross-domain virtual network embedding algorithm designed to meet the varying latency requirements of QoS in different application scenarios. Unlike traditional two-stage algorithms (mapping nodes first and links second), DSCD-VNE adopts a three-stage "node-link-node" embedding algorithm. In the candidate-link-matrix generation stage, node resource metrics and set constraints are introduced. With the candidate link matrix as the core, a Kruskal minimum spanning tree is used to select the link with the shortest delay for embedding. A path segmentation mechanism and the K-shortest-path algorithm are introduced to improve the utilization of underlying links [32].
For the resource allocation problem, we introduce the following three metrics: (1) VNR acceptance ratio (ACC); (2) long-term revenue (LR); and (3) delay.

5.3. Training Results

Figure 3 illustrates the acceptance ratio of VNRs onto the physical network while training the SN-VNE model. The acceptance ratio is low and fluctuates severely when mapping begins. As the training epochs increase, the curve rises until epoch 70, after which the acceptance ratio levels off and stabilizes at 0.66.
Figure 4 illustrates the average revenue, which starts low, around 810, and fluctuates greatly. As the training epochs increase, the curve gradually rises; at epoch 70 it begins to level off, finally stabilizing at 1100.
The long-term average revenue/cost measures return on investment, comparing the revenue received to the cost incurred, and helps evaluate the effect of the model. Figure 5 describes the changes in revenue/cost during model training. At the beginning, the curve ranges between a low of 0.355 and a high of 0.39, fluctuating greatly and remaining low for a long time. As the training epochs increase, the curve gradually rises and finally levels off.
This is because the model parameters are initialized randomly, and this randomness leads to unsteady performance in the initial stage. The early-stage model is also more inclined to explore in order to discover better strategies; trying different virtual network mapping strategies causes fluctuations in acceptance ratio, long-term average revenue, and long-term average revenue-to-cost. As training progresses, the model consolidates the learned experience and increasingly exploits the effective strategies it has learned, improving performance. Finally, the training curves reach a relatively stable level at epoch 70, i.e., the model has found a comparatively good strategy for the virtual network mapping problem. To evaluate the stability of the algorithm, we collect data from 50 independent experimental runs and sample 30 epochs of data after the model converges. We calculate the confidence interval using the formula x̄ ± z·σ/√n and use the mean of the 50 runs as the final result. The statistical results show that the 95% confidence intervals for ACC, LR, and LR/C are [0.636, 0.664], [1044.921, 1117.782], and [0.388, 0.402], respectively. This demonstrates that the SN-VNE algorithm exhibits good stability in the later stages of training.
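The stability check can be reproduced with a short routine implementing x̄ ± z·σ/√n (z = 1.96 for a 95% interval; the sample values below are made up for illustration):

```python
import math
import statistics as st

def conf_interval_95(samples):
    """95% normal-approximation confidence interval: mean ± 1.96·σ/√n."""
    n = len(samples)
    mean = st.mean(samples)
    sigma = st.stdev(samples)             # sample standard deviation
    half = 1.96 * sigma / math.sqrt(n)
    return mean - half, mean + half

# illustrative ACC samples from converged epochs (values are made up)
acc = [0.64, 0.66, 0.65, 0.63, 0.67, 0.65, 0.66, 0.64]
lo, hi = conf_interval_95(acc)
```

Applying this to the 30 post-convergence epochs of each of the 50 runs yields intervals of the form reported above.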

5.4. Result Evaluation

Figure 6 demonstrates the VNR acceptance ratio under the four algorithms. All four acceptance ratios fluctuate greatly during the initial mapping stage and gradually decrease over time. In the initial stage, newly arrived VNRs find sufficient available satellite network resources, so the embedding success ratio is high. Over time, the limited physical network resources are occupied by successfully embedded VNRs, and the success ratio of newly arrived VNRs drops because not enough resources remain to carry them. Finally, as the system reaches an equilibrium state of resource utilization under long-term resource constraints, the success ratio levels off. However, our algorithm maintains a better VNR acceptance ratio than the other three algorithms from the beginning of mapping onward: 3.81%, 7.76%, and 12.13% higher than DDRL-VNE, CDRL, and DSCD-VNE, respectively. The proposed SN-VNE uses a graph neural network to extract node and link features, optimizing the mapping process and improving the acceptance ratio. DSCD-VNE maps links by the shortest path, which cannot satisfy a highly dynamic network, so its acceptance ratio is low. DDRL-VNE trains an agent for node mapping but also embeds links by the shortest path, so it cannot adapt to highly dynamic networks. CDRL designs its mapping scheme through the classic seq2seq model based on reinforcement learning, but the scheme lacks flexibility, so its acceptance ratio is low.
Figure 7 shows the long-term average revenue of the four algorithms. All the curves start high: the abundant physical network resources in the early testing stage allow VNRs to be mapped successfully. As mapped VNRs occupy network resources, the VNR mapping success ratio begins to decline, and the long-term average revenue of all four algorithms declines with it. Finally, the system reaches a balanced state of resource utilization under limited long-term resources, and the curves stabilize. The revenue of the proposed algorithm is 4.72%, 11.63%, and 31.25% higher than that of DDRL-VNE, CDRL, and DSCD-VNE, respectively. Compared with these algorithms, the proposed algorithm accurately perceives the physical network through graph convolution, improves resource utilization efficiency under limited network resources, and obtains higher revenue.
Figure 8 depicts the latency performance of the four algorithms at different points in time. The latency of SN-VNE is superior to the other three methods because SN-VNE includes latency in its reward function, tying it directly to model performance, while the other algorithms do not. In the experiments, the average delay of SN-VNE is 52.73%, 67.36%, and 71.52% lower than that of DDRL-VNE, DSCD-VNE, and CDRL, respectively.

6. Conclusions

Today, application data and service demands are growing explosively. Combining satellite communications and cloud computing can offer broader and more flexible communication and computing services. However, satellite network resources are limited, and cloud-network integration couples satellite, cloud, and ground networks, making resource allocation more complex, so high-performance resource allocation methods have become particularly important. Existing virtual network mapping algorithms for satellite networks rarely take into account differentiated quality of service or highly dynamic, resource-limited network environments. In response, this article first models the satellite network using graph theory, and then uses a policy-gradient-based reinforcement learning method to extract features of the real-time network state and train a mapping model that outputs the optimal mapping. Notably, service quality satisfaction is introduced when training the mapping model, so the trained model better meets differentiated user needs. Finally, we compare SN-VNE with three other algorithms to evaluate its performance: SN-VNE is more sensitive to differentiated service quality and better adapts to resource-scarce, dynamic satellite network environments. These results suggest that incorporating resource management within the system is an effective approach, and they motivate a more comprehensive treatment in future work, such as integrating energy management and power consumption control, which will be addressed in our subsequent research.

Author Contributions

Conceptualization, Z.S.; Methodology, Z.S. and Q.D.; Software, Z.S. and Q.D.; Formal analysis, Q.D. and L.M.; Investigation, Q.D.; Resources, T.Y., L.M., S.C. and Y.L.; Data curation, T.Y. and Y.L.; Writing—original draft, Z.S.; Writing—review and editing, S.C.; Supervision, Q.D.; Funding acquisition, Z.S., Q.D., T.Y. and L.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the R & D Program of Beijing Municipal Education Commission under Grant KM202211417008, partially supported by the Beijing Social Science Foundation Program under Grant 23GLC037, and partially supported by the Natural Science Foundation of Shandong Province under Grant ZR2023LZH017.

Data Availability Statement

Raw data supporting this article will be available upon request.

Conflicts of Interest

Authors Zhimin Shao and Lingzhen Meng were employed by the State Grid Shandong Electric Power Company. Author Tao Yang was employed by Shandong Luruan Digital Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Tian, F.; Huang, L.; Liang, G.; Jiang, X.; Sun, S.; Ma, J. An efficient resource allocation mechanism for beam-hopping based LEO satellite communication system. In Proceedings of the 2019 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), Jeju, Republic of Korea, 5–7 June 2019; pp. 1–5. [Google Scholar]
  2. Wang, E.; Li, H.; Zhang, S. Load balancing based on cache resource allocation in satellite networks. IEEE Access 2019, 7, 56864–56879. [Google Scholar] [CrossRef]
  3. Fang, H.; Jia, Y.; Wang, Y.; Zhao, Y.; Gao, Y.; Yang, X. Matching game based task offloading and resource allocation algorithm for satellite edge computing networks. In Proceedings of the 2022 International Symposium on Networks, Computers and Communications (ISNCC), Shenzhen, China, 19–22 July 2022; pp. 1–5. [Google Scholar]
  4. Wang, B.; Feng, T.; Huang, D. A joint computation offloading and resource allocation strategy for LEO satellite edge computing system. In Proceedings of the 2020 IEEE 20th International Conference on Communication Technology (ICCT), Nanning, China, 28–31 October 2020; pp. 649–655. [Google Scholar]
  5. Zhou, D.; Sheng, M.; Wang, X.; Xu, C.; Liu, R.; Li, J. Mission aware contact plan design in resource-limited small satellite networks. IEEE Trans. Commun. 2017, 65, 2451–2466. [Google Scholar] [CrossRef]
  6. Jia, X.; Lv, T.; He, F.; Huang, H. Collaborative data downloading by using inter-satellite links in LEO satellite networks. IEEE Trans. Wirel. Commun. 2017, 16, 1523–1532. [Google Scholar] [CrossRef]
  7. Di, B.; Zhang, H.; Song, L.; Li, Y.; Li, G.Y. Ultra-dense LEO: Integrating terrestrial-satellite networks into 5G and beyond for data offloading. IEEE Trans. Wirel. Commun. 2018, 18, 47–62. [Google Scholar] [CrossRef]
  8. Zhou, D.; Sheng, M.; Liu, R.; Wang, Y.; Li, J. Channel-aware mission scheduling in broadband data relay satellite networks. IEEE J. Sel. Areas Commun. 2018, 36, 1052–1064. [Google Scholar] [CrossRef]
  9. Deng, B.; Jiang, C.; Yao, H.; Guo, S.; Zhao, S. The next generation heterogeneous satellite communication networks: Integration of resource management and deep reinforcement learning. IEEE Wirel. Commun. 2019, 27, 105–111. [Google Scholar] [CrossRef]
  10. Jiang, C.; Zhu, X. Reinforcement learning based capacity management in multi-layer satellite networks. IEEE Trans. Wirel. Commun. 2020, 19, 4685–4699. [Google Scholar] [CrossRef]
  11. Zhou, D.; Sheng, M.; Li, B.; Li, J.; Han, Z. Distributionally robust planning for data delivery in distributed satellite cluster network. IEEE Trans. Wirel. Commun. 2019, 18, 3642–3657. [Google Scholar] [CrossRef]
  12. Zhou, D.; Sheng, M.; Wang, Y.; Li, J.; Han, Z. Machine learning-based resource allocation in satellite networks supporting internet of remote things. IEEE Trans. Wirel. Commun. 2021, 20, 6606–6621. [Google Scholar] [CrossRef]
  13. Zhao, J.; Li, Q.; Gong, Y.; Zhang, K. Computation offloading and resource allocation for cloud assisted mobile edge computing in vehicular networks. IEEE Trans. Veh. Technol. 2019, 68, 7944–7956. [Google Scholar] [CrossRef]
  14. Yang, J.; Xiang, Z.; Mou, L.; Liu, S. Multimedia resource allocation strategy of wireless sensor networks using distributed heuristic algorithm in cloud computing environment. Multimed. Tools Appl. 2020, 79, 35353–35367. [Google Scholar] [CrossRef]
  15. Cao, H.; Zhu, Y.; Yang, L.; Zheng, G. A efficient mapping algorithm with novel node-ranking approach for embedding virtual networks. IEEE Access 2017, 5, 22054–22066. [Google Scholar] [CrossRef]
  16. Cao, H.; Yang, L.; Zhu, H. Novel node-ranking approach and multiple topology attributes-based embedding algorithm for single-domain virtual network embedding. IEEE Internet Things J. 2017, 5, 108–120. [Google Scholar] [CrossRef]
  17. Zhang, P.; Li, H.; Ni, Y.; Gong, F.; Li, M.; Wang, F. Security aware virtual network embedding algorithm using information entropy TOPSIS. J. Netw. Syst. Manag. 2020, 28, 35–57. [Google Scholar] [CrossRef]
  18. Lu, M.; Lian, Y.; Chen, Y.; Li, M. Collaborative dynamic virtual network embedding algorithm based on resource importance measures. IEEE Access 2018, 6, 55026–55042. [Google Scholar] [CrossRef]
  19. Yao, H.; Chen, X.; Li, M.; Zhang, P.; Wang, L. A novel reinforcement learning algorithm for virtual network embedding. Neurocomputing 2018, 284, 1–9. [Google Scholar] [CrossRef]
  20. Zhang, P.; Yao, H.; Li, M.; Liu, Y. Virtual network embedding based on modified genetic algorithm. Peer-to-Peer Netw. Appl. 2019, 12, 481–492. [Google Scholar] [CrossRef]
  21. Ling, S.; Muqing, W.; Hou, X. VNE-SDN algorithms for different physical network environments. IEEE Access 2020, 8, 178258–178268. [Google Scholar] [CrossRef]
  22. Yuan, Y.; Tian, Z.; Wang, C.; Zheng, F.; Lv, Y. A Q-learning-based approach for virtual network embedding in data center. Neural Comput. Appl. 2020, 32, 1995–2004. [Google Scholar] [CrossRef]
  23. Bianchi, F.; Lo Presti, F. A markov reward based resource-latency aware heuristic for the virtual network embedding problem. ACM SIGMETRICS Perform. Eval. Rev. 2017, 44, 57–68. [Google Scholar] [CrossRef]
  24. Li, Z.; Lu, Z.; Deng, S.; Gao, X. A self-adaptive virtual network embedding algorithm based on software-defined networks. IEEE Trans. Netw. Serv. Manag. 2018, 16, 362–373. [Google Scholar] [CrossRef]
  25. Hejja, K.; Hesselbach, X. Online power aware coordinated virtual network embedding with 5G delay constraint. J. Netw. Comput. Appl. 2018, 124, 121–136. [Google Scholar] [CrossRef]
  26. Jiang, C.; Zhang, P. VNE solution for network differentiated QoS and security requirements from the perspective of deep reinforcement learning. In QoS-Aware Virtual Network Embedding; Springer: Singapore, 9 August 2021; pp. 61–84. [Google Scholar]
  27. Cheng, J.; Wu, Y.; Lin, Y.; E, Y.; Tang, F.; Ge, J. VNE-HRL: A proactive virtual network embedding algorithm based on hierarchical reinforcement learning. IEEE Trans. Netw. Serv. Manag. 2021, 18, 4075–4087. [Google Scholar] [CrossRef]
  28. Lu, Q.; Nguyen, K.; Huang, C. Distributed parallel algorithms for online virtual network embedding applications. Int. J. Commun. Syst. 2023, 36, e4325. [Google Scholar] [CrossRef]
  29. Zhang, P.; Chen, N.; Li, S.; Choo, K.K.R.; Jiang, C.; Wu, S. Multi-domain virtual network embedding algorithm based on horizontal federated learning. IEEE Trans. Inf. Forensics Secur. 2023, 18, 3363–3375. [Google Scholar] [CrossRef]
  30. Zhang, P.; Zhang, Y.; Kumar, N.; Hsu, C.H. Deep reinforcement learning algorithm for latency-oriented IIoT resource orchestration. IEEE Internet Things J. 2022, 10, 7153–7163. [Google Scholar] [CrossRef]
  31. Yao, H.; Ma, S.; Wang, J.; Zhang, P.; Jiang, C.; Guo, S. A continuous-decision virtual network embedding scheme relying on reinforcement learning. IEEE Trans. Netw. Serv. Manag. 2020, 17, 864–875. [Google Scholar] [CrossRef]
  32. Zhang, P.; Pang, X.; Bi, Y.; Yao, H.; Pan, H.; Kumar, N. DSCD: Delay sensitive cross-domain virtual network embedding algorithm. IEEE Trans. Netw. Sci. Eng. 2020, 7, 2913–2925. [Google Scholar] [CrossRef]
Figure 1. The cloud-network-integrated satellite network. This figure shows the overall architecture of a cloud network integrated satellite network, including satellites, satellite computing modules, ground networks, and ground computing centers.
Figure 2. Policy network. This figure shows the hierarchical structure of the policy network. The input layer receives the feature matrix, the convolutional layer extracts local features and performs linear transformation, and the Softmax layer outputs the probability of each node being selected.
Figure 3. The VNR acceptance ratio of policy network training process.
Figure 4. The Long-term average revenue of policy network training process.
Figure 5. The long-term average revenue/cost of policy network training process.
Figure 6. The acceptance ratio of the testing process.
Figure 7. The long-term average revenue of the testing process.
Figure 8. The average delay of testing process.
Table 1. Experimental environment parameters.

Physical Network
    Node number: 200
    Satellite node number: 30
    Cloud server node number: 50
    Edge node number: 120
    Preset CPU of satellite nodes: 40–80
    Preset CPU of cloud server nodes: 120–200
    Preset CPU of edge nodes: 30–80
    Link number: 800
    Preset bandwidth of satellite links: 40–100
    Preset bandwidth of cloud server links: 100–200
    Preset bandwidth of edge links: 30–100
VNRs
    Number of VNRs: 2000
    Training set: 1000
    Testing set: 1000
    Node number of each VNR: 6–30
    Node connection probability: 0.5
    CPU requirement of nodes: 10–60
    Bandwidth requirement of links: 10–60

