Article

Energy-Efficient Virtual Network Function Reconfiguration Strategy Based on Short-Term Resources Requirement Prediction

1 School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
2 School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
* Author to whom correspondence should be addressed.
Electronics 2021, 10(18), 2287; https://doi.org/10.3390/electronics10182287
Submission received: 18 August 2021 / Revised: 10 September 2021 / Accepted: 14 September 2021 / Published: 17 September 2021
(This article belongs to the Section Networks)

Abstract: In Network Function Virtualization, the resource demand of a network service evolves with the change of network traffic, and VNF dynamic migration has become an effective method to improve network performance. However, for time-varying resource demand, how to minimize the long-term energy consumption of the network while guaranteeing the Service Level Agreement (SLA) is a key issue that has received little attention in previous research. To tackle this dilemma, this paper proposes an energy-efficient reconfiguration algorithm for VNFs based on short-term resource requirement prediction (RP-EDM). Our algorithm uses LSTM to predict VNF resource requirements in advance, eliminating the lag of dynamic migration, and determines the timing of migration. RP-EDM eliminates SLA violations by performing VNF separation on potentially overloaded servers and consolidates low-load servers in a timely manner to save energy. Meanwhile, we consider the power consumed by servers when booting up, which exists objectively, to avoid switching servers on and off frequently. The simulation results suggest that RP-EDM performs well and remains stable under machine learning models with different accuracy. Moreover, our algorithm increases the total service traffic by about 15% while ensuring a low SLA interruption rate, and reduces the total energy cost by more than 20% compared with existing algorithms.


1. Introduction

In recent years, with the emergence of new network technologies and growing customer demands, the business model of operators is undergoing a revolutionary change. Network Function Virtualization (NFV), a promising technology in this revolution [1], decouples network functions from hardware so that they can run as software on Virtual Machines (VMs) hosted on commodity servers. A Virtual Network Function (VNF) is instantiated on a VNF Instance (VNFI) implemented as a VM [2], which is allocated resources such as CPU, RAM, and disks.
Operators use Service Function Chains (SFCs) to provide customized network services for users in NFV [3]. Network efficiency closely depends on the mapping of VNFs and the routing of SFCs. Meanwhile, as network slices run, the traffic arriving at each specific service fluctuates over time, which may cause a mismatch between SFC resource requirements and the resource availability of servers, adversely affecting Quality of Service (QoS) and resource utilization [4]. When a given VNF placement and resource allocation policy fails to meet the current network requirements, the NFV Orchestrator (NFVO) provides reconfiguration for SFCs, including vertical scaling, horizontal scaling, and dynamic migration. Horizontal scaling makes full use of resource fragmentation but is only feasible when the available resources on the server are sufficient, while dynamic migration is a flexible solution in an overloaded state but incurs overhead.
However, servers are constantly switched on as SFCs dynamically arrive and depart, which leaves many servers under-utilized. Research shows that an idle server, even with no workload, continues to consume about 70% of its peak energy consumption, causing a severe waste of energy [5]. In addition, since network traffic is time-varying, the resource demand of VNFs changes all the time. Servers may become overloaded, which leads to overall performance degradation and SLA violations, or may fall into a low-load state, which further aggravates energy waste. Hence, one of the most important goals for operators is to minimize energy consumption without violating SLAs. The major issue to be solved is to find a VNF reconfiguration strategy for time-varying traffic rates that minimizes energy consumption with low migration cost while ensuring the SLA.
Given the aforementioned problems, the main contributions of this paper are as follows.
  • We consider the boot-up energy cost of infrastructures to establish a more accurate energy consumption model;
  • In consideration of real SFC request scenarios, we use a time-varying traffic dataset. We adopt LSTM models to predict the resource demand of VNFs in the short term and make use of the prediction results, which makes it possible to proactively migrate VNFs in advance at an appropriate time;
  • We propose the RP-EDM algorithm to minimize the energy consumption of the network while considering SLA. The migration is executed non-periodically, considering servers in overload and low load conditions simultaneously. Additionally, we simulate our algorithm using different prediction models to verify the superiority of our proposed strategy.

2. Related Work

Most previous studies on VNF energy saving have reported reactive mechanisms. The authors of [6,7,8] consider the low-load state, where VNFs are consolidated onto fewer servers so that some servers can be shut down to save energy. The work in [6] consolidated the same type of Network Functions (NFs) to minimize the number of VNFIs, considering the characteristics of NF types and the SFC delay constraint. The authors constructed an ILP model and proposed a heuristic algorithm called GNFC, which is executed periodically and reduces the reconfiguration rate. The work in [7] proposed an algorithm called VCMM, which uses a greedy mechanism to migrate VNFs onto fewer servers. Multiple conflicting goals, including energy consumption, bandwidth usage, and migration cost, are considered in VCMM, which is executed regularly as SFCs dynamically arrive and depart. The authors used a neural network trained on historical data to determine the probability of a server being shut down. However, the authors of [6,7] migrate VNFs periodically without considering the timing of consolidation and lack a solution for server overload caused by the dynamic resource requirements of SFCs. The work in [8] proposed a cold migration strategy that determines the placement of VNFIs for each traffic segment in a stable cyclic scenario. Aiming to minimize migration cost and operation consumption, the authors proposed an energy-aware algorithm based on the Viterbi algorithm called MEAVR. However, in real scenarios, traffic is highly variable in the short term and only weakly periodic in the long term, rather than cyclically stable. Therefore, the approach of [8] cannot cope with short-term time-varying traffic that is nonstationary in real scenarios.
There is a lack of research on the timing of migration in reactive approaches, which is, however, one of the key issues in reducing the long-term energy consumption of the network, that is, when to consolidate or separate VNFs. An appropriate migration timing should be chosen to avoid unnecessary and expensive VNF migrations, as well as SLA violations or massive energy waste caused by failing to migrate in time. VNF migration does optimize performance, but it also involves migration overhead such as increased bandwidth consumption, and the long-term energy consumption of the network varies with the time at which migration is performed. In addition, it is necessary to ensure that a switched-on server works as long as possible and is not turned on and off frequently. Studies have shown that the energy consumption of a server during startup should not be ignored [9]. Since the server runs at full load when starting up, the power consumption is 33% higher than that in standby. Meanwhile, the instantaneous current increases sharply to several times the standby value, which gradually damages hardware such as the CPU and motherboard and is detrimental to the lifetime of devices, as reported in [10]. Therefore, servers in a low-load state should be shut down in a timely manner to save energy, while avoiding being restarted soon afterwards.
One feasible method to mitigate this time-lag problem [2] is proactive resource prediction. In ref. [11], a DBN prediction algorithm based on online learning was proposed to predict the amount of resources required by VNFs and virtual links, and the prediction results were used to solve overload problems. The work in [12] proposed a CAT-LSTM model to predict the resource requirements of a VNF from SFC data and improved accuracy by adding aspect embedding and attention. CAT-LSTM is shown to be effective in predicting the CPU idle percentage of a certain VNF in a given SFC. However, the authors of [12] do not consider how to use these prediction results to optimize resource allocation.
To address the challenges above, this paper proposes the Energy-Efficient Dynamic Migration Algorithm Based on Short-Term Resource Requirement Prediction (RP-EDM). We consider the reconfiguration problem driven by evolving resource demand, focusing on a real time-varying traffic scenario. RP-EDM takes advantage of the VNF resource demand predicted by LSTM models and is aware of short-term resource changes in advance, which makes it possible to determine the timing of migration and migrate ahead of schedule. When servers are under low load, RP-EDM consolidates VNFs onto fewer servers and shuts down some of them to save energy. Additionally, when servers are overloaded, RP-EDM separates the VNFs on the overloaded node in a timely manner so as not to violate the SLA. We evaluate the effectiveness of our algorithm in energy saving through simulations on a real network topology.
Table 1 shows the critical technical contributions of previous works. In addition, we display the difference between our work and others.

3. System Model and Problem Description

In this section, we define the network model and describe SFC and VNF, then formulate the problem of VNF migration.

3.1. Network Model

We consider a three-layer NFV network architecture as in [6]. As shown in Figure 1, physical nodes connect with each other via links in the physical layer. In the VNF layer, VNFs are instantiated on the physical nodes that provide the required resources. In the application layer, the different types of NFs making up the SFCs are deployed on the corresponding types of VNFs.
We represent the physical network as an undirected graph $G = (V^S, E^S)$, where $V^S$ is the set of physical nodes and $E^S$ is the set of physical links. The CPU capacity of node $v_i^s \in V^S$ is $C_i^S$, and each unit of CPU resource represents the resources required to process one data packet. The memory capacity $M_i^S$ denotes the available megabytes of $v_i^s$. We use $P_i^{max}$ to indicate the maximum power consumption of $v_i^s$. Additionally, $e_{ij}^s \in E^S$ represents the edge connecting $v_i^s$ and $v_j^s$, and the physical link bandwidth and delay are $BW_{ij}^S$ and $L_{ij}^S$, respectively.
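For illustration, a minimal sketch of how this substrate graph and its attributes could be represented in Python with networkx is given below; the attribute names (cpu, mem, p_max, bw, delay) are our own and are not taken from the paper.

```python
import networkx as nx

# Minimal sketch of the substrate graph G = (V^S, E^S); attribute names
# (cpu, mem, p_max, bw, delay) are illustrative choices, not the paper's.
def build_substrate(nodes, links):
    g = nx.Graph()
    for node_id, cpu, mem, p_max in nodes:
        # C_i^S, M_i^S and P_i^max of physical node v_i^s
        g.add_node(node_id, cpu=cpu, mem=mem, p_max=p_max)
    for u, v, bw, delay in links:
        # BW_ij^S and L_ij^S of physical link e_ij^s
        g.add_edge(u, v, bw=bw, delay=delay)
    return g

# Example: a toy three-node substrate
substrate = build_substrate(
    nodes=[("v1", 280, 800, 200), ("v2", 260, 700, 180), ("v3", 300, 900, 220)],
    links=[("v1", "v2", 500, 2), ("v2", "v3", 500, 3)],
)
```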

3.2. SFC and VNF

Let $F$ be the set of VNF types. The CPU and memory resources consumed by a server when instantiating a VNF of type $f \in F$ are $c_f$ and $m_f$, respectively.
We denote the set of SFCs as $S$. Each $sfc_j \in S$ can be represented as $sfc_j = \langle v_{j,in}^s, v_{j,out}^s, V_j^{NF}, E_j^{NF}, l_j \rangle$, where $v_{j,in}^s$ and $v_{j,out}^s$ are the ingress and egress. $V_j^{NF}$ is the ordered set of NFs that traffic passes through, and $E_j^{NF}$ is the set of logical links connecting the VNFs between the ingress and egress. Additionally, $l_j$ is the end-to-end latency allowed by the request. The CPU and memory resource requirements of the $u$-th NF $v_{j,u}^{nf} \in V_j^{NF}$ are $c_{j,u}^{nf}$ and $m_{j,u}^{nf}$. The traffic segment between $(v_{j,u}^{nf}, v_{j,v}^{nf})$ is represented as a logical link $e_{j,uv}^{nf} \in E_j^{NF}$, whose bandwidth requirement is $bw_{j,uv}^{nf}$.
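As a companion to the notation above, the following sketch shows one possible in-memory representation of an SFC and its NFs; the class and field names are ours and purely illustrative.

```python
from dataclasses import dataclass
from typing import List

# Illustrative containers mirroring sfc_j = <v_in, v_out, V_j^NF, E_j^NF, l_j>;
# names are chosen to echo the notation of Section 3.2, not taken from any code base.
@dataclass
class NF:
    nf_type: str          # type f in F
    cpu_req: float = 0.0  # c_{j,u}^{nf}(t)
    mem_req: float = 0.0  # m_{j,u}^{nf}(t)

@dataclass
class SFC:
    ingress: str          # v_{j,in}^s
    egress: str           # v_{j,out}^s
    nfs: List[NF]         # ordered set V_j^NF
    links_bw: List[float] # bw_{j,uv}^{nf} for the logical links in E_j^NF
    max_latency: float    # l_j
```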
We define a decision variable $t_{j,u}^{f} \in \{0,1\}$ to indicate the type of each NF in the SFC. $t_{j,u}^{f}$ is 1 when $v_{j,u}^{nf} \in V_j^{NF}$ is of type $f$:
$$t_{j,u}^{f} = \begin{cases} 1, & \text{if } v_{j,u}^{nf} \text{ is of type } f \in F \\ 0, & \text{otherwise} \end{cases}$$
We suppose that the ingress and egress have no type, so $t_{j,u}^{f}$ is always 0 for them. Decision variable $\eta_{j,u}^{i,t} \in \{0,1\}$ is 1 when logical node $v_{j,u}^{nf}$ is mapped to physical node $v_i^s$ during $t$:
$$\eta_{j,u}^{i,t} = \begin{cases} 1, & \text{if } v_{j,u}^{nf} \text{ is mapped to } v_i^s \text{ during } t \\ 0, & \text{otherwise} \end{cases}$$
Decision variable $\tau_{j,uv}^{pq,t} \in \{0,1\}$ is 1 when the traffic segment $e_{j,uv}^{nf}$ flows through the physical link $e_{pq}^s \in E^S$:
$$\tau_{j,uv}^{pq,t} = \begin{cases} 1, & \text{if } e_{j,uv}^{nf} \text{ is mapped to } e_{pq}^s \text{ during } t \\ 0, & \text{otherwise} \end{cases}$$
An NF of type $f$ can be mapped to $v_i^s$ only when the physical node has instantiated a VNF of type $f$. State variable $\beta_f^{i} \in \{0,1\}$ is 1 when there is a VNF of type $f$ on physical node $v_i^s$:
$$\beta_f^{i} = \begin{cases} 1, & \text{if a VNF of type } f \text{ is assigned on } v_i^s \\ 0, & \text{otherwise} \end{cases}$$
$$\beta_f^{i,t} = 1 \quad \text{if } \sum_{sfc_j \in S} \sum_{v_{j,u}^{nf} \in V_j^{NF}} \eta_{j,u}^{i,t}\, t_{j,u}^{f} \ge 1, \quad \forall v_i^s \in V^S,\ f \in F,\ t \in T$$
Due to the dynamic change of traffic, we describe the relationship between the incoming traffic bandwidth and the resource requirement of $v_{j,v}^{nf} \in V_j^{NF}$ at $t$ by
$$c_{j,v}^{nf}(t) = bw_{j,uv}^{nf}(t) \sum_{f \in F} t_{j,v}^{f}\, coeff_c^{f}, \quad \forall t \in T$$
$$m_{j,v}^{nf}(t) = bw_{j,uv}^{nf}(t) \sum_{f \in F} t_{j,v}^{f}\, coeff_m^{f}, \quad \forall t \in T$$
where $coeff_c^{f}$ and $coeff_m^{f}$ represent the CPU coefficient and memory coefficient, respectively. Since the traffic may be scaled as it passes through each NF, we denote the relationship between the incoming bandwidth of the NF $v_{j,v}^{nf}$ and the bandwidth requirement of the logical link connected to it by
$$bw_{j,uv}^{nf}(t) = bw_{j,ku}^{nf}(t) \sum_{f \in F} t_{j,v}^{f}\, ratio^{f}, \quad \forall t \in T$$
where $v_{j,k}^{nf}, v_{j,u}^{nf}, v_{j,v}^{nf} \in V_j^{NF}$ are consecutive NFs in the ordered set, and $ratio^{f}$ is the bandwidth scaling ratio of a VNF of type $f$.
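A small sketch of how these relations could be applied in code is shown below, reusing the illustrative SFC/NF containers from the previous sketch. It propagates the bandwidth along the chain and derives the CPU/memory demand of each NF under one plausible reading of the equations above; the per-type coefficient dictionaries are placeholders.

```python
# Walk an SFC from ingress to egress: derive each NF's CPU/memory demand from
# its incoming bandwidth, then scale the bandwidth by the NF type's ratio.
# One plausible reading of the equations above; coeff_c, coeff_m and ratio
# are placeholder dictionaries keyed by VNF type.
def propagate_demand(sfc, ingress_bw, coeff_c, coeff_m, ratio):
    bw_in = ingress_bw                              # bandwidth entering the first NF
    for nf in sfc.nfs:
        nf.cpu_req = bw_in * coeff_c[nf.nf_type]    # c_{j,v}^{nf}(t)
        nf.mem_req = bw_in * coeff_m[nf.nf_type]    # m_{j,v}^{nf}(t)
        bw_in *= ratio[nf.nf_type]                  # traffic scaled by the NF
    return bw_in                                    # bandwidth towards the egress
```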
To describe the migration process, we define a set of variables related to the state of servers and VNF migration. $\delta_i^t \in \{0,1\}$ equals 1 when $v_i^s$ is turned on during $t$, and 0 otherwise. $\xi_{j,u}^t \in \{0,1\}$ equals 1 when the virtual node $v_{j,u}^{nf}$ will be migrated during $t$. $\rho_i^t \in \{0,1\}$ equals 1 when $v_i^s$, which is powered on, will be switched off in the next period. Additionally, $\lambda_i^t \in \{0,1\}$ equals 1 when $v_i^s$, which is powered off, will be switched on in the next period.
We define a set of variables associated with resource requirement prediction, which may cover CPU, disk, memory, and other system resource usages such as the number of processes and OS load. To simplify, in this paper we consider CPU requirement prediction. We use $c_{j,u}^{nf}(t+n)\ (n = 1, 2, \dots)$ to represent the predicted CPU requirement of $v_{j,u}^{nf}$ at the future time $t+n$, and $c_i^s(t+n)\ (n = 1, 2, \dots)$ to indicate the predicted CPU usage of node $v_i^s$ at the future time $t+n$, which is calculated by
$$c_i^s(t+n) = \sum_{f \in F} \beta_f^{i,t+n} c_f + \sum_{sfc_j \in S} \sum_{v_{j,u}^{nf} \in V_j^{NF}} \eta_{j,u}^{i,t+n}\, c_{j,u}^{nf}(t+n), \quad \forall v_i^s \in V^S,\ t \in T$$
Meanwhile, we denote the overload threshold and low-load threshold of a physical node by $thr_{over}$ and $thr_{low}$, respectively.
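The sketch below illustrates how the predicted node-level CPU usage and the two thresholds could be evaluated in code; the helper names, the data layout, and the `predict` callback are assumptions made for illustration only.

```python
# Predicted CPU usage of a node at horizon t+n: resources held by instantiated
# VNF types plus the predicted demand of the NFs mapped to the node, then
# compared against the overload / low-load thresholds. All names are illustrative.
def predicted_node_cpu(node, n, instantiated_types, placed_nfs, c_f, predict):
    static = sum(c_f[f] for f in instantiated_types[node])    # sum of c_f for VNFs on the node
    dynamic = sum(predict(nf, n) for nf in placed_nfs[node])  # predicted c_{j,u}^{nf}(t+n)
    return static + dynamic

def is_overloaded(c_pred, capacity, thr_over=0.9, bias=0.0):
    # adaptive test used by RP-EDM: threshold lowered by the bias theta_i^t
    return c_pred >= thr_over * capacity - bias

def is_low_load(c_pred, capacity, thr_low=0.3):
    return c_pred <= thr_low * capacity
```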
The above variables meet the following constraints:
Each NF of the SFCs should be deployed on one unique physical node, which can be formulated by
$$\sum_{v_i^s \in V^S} \eta_{j,u}^{i,t} = 1, \quad \forall sfc_j \in S,\ v_{j,u}^{nf} \in V_j^{NF},\ t \in T$$
Each virtual link should be mapped to one physical link. Since the virtual link may be extended across nodes, we have
$$\sum_{e_{pq}^s \in E^S} \tau_{j,uv}^{pq,t} \ge 1, \quad \forall sfc_j \in S,\ e_{j,uv}^{nf} \in E_j^{NF},\ t \in T$$
The CPU and memory capacity of physical nodes need to satisfy the constraint of computational capability, which can be expressed by
$$\sum_{f \in F} \beta_f^{i,t} c_f + \sum_{sfc_j \in S} \sum_{v_{j,u}^{nf} \in V_j^{NF}} \eta_{j,u}^{i,t}\, c_{j,u}^{nf}(t) \le C_i^S, \quad \forall v_i^s \in V^S,\ t \in T$$
$$\sum_{f \in F} \beta_f^{i,t} m_f + \sum_{sfc_j \in S} \sum_{v_{j,u}^{nf} \in V_j^{NF}} \eta_{j,u}^{i,t}\, m_{j,u}^{nf}(t) \le M_i^S, \quad \forall v_i^s \in V^S,\ t \in T$$
The bandwidth constraint of physical links meets
$$\sum_{sfc_j \in S} \sum_{e_{j,uv}^{nf} \in E_j^{NF}} \tau_{j,uv}^{pq,t}\, bw_{j,uv}^{nf}(t) \le BW_{pq}^S, \quad \forall e_{pq}^s \in E^S,\ t \in T$$
The latency constraint of SFCs also needs to be satisfied, which can be given by
$$\sum_{e_{j,uv}^{nf} \in E_j^{NF}} \sum_{e_{pq}^s \in E^S} \tau_{j,uv}^{pq,t}\, L_{pq}^S \le l_j, \quad \forall sfc_j \in S,\ t \in T$$
Each pair of connected VNF ( v j , u n f , v j , v n f ) satisfies traffic conservation. Given a logical link e j , u v n f and a physical node v i s , we have
$$\sum_{v_q^s \in \Omega^+(v_p^s)} \tau_{j,uv}^{qp,t} - \sum_{v_q^s \in \Omega^-(v_p^s)} \tau_{j,uv}^{pq,t} = \begin{cases} \eta_{j,v}^{p,t} - \eta_{j,u}^{p,t}, & v_{j,u}^{nf}, v_{j,v}^{nf} \in V_j^{NF} \\ 1, & v_{j,v}^{nf} \in \{v_{j,out}^s\},\ v_{j,out}^s = v_p^s,\ v_{j,u}^{nf} \in V_j^{NF} \\ -1, & v_{j,u}^{nf} \in \{v_{j,in}^s\},\ v_{j,in}^s = v_p^s,\ v_{j,v}^{nf} \in V_j^{NF} \\ 0, & \text{otherwise} \end{cases} \quad \forall sfc_j \in S,\ t \in T$$
where $\Omega^+(v_p^s)$ and $\Omega^-(v_p^s)$ represent the sets of upstream and downstream nodes, respectively. A node on which NFs are deployed must be turned on; thus, we have
$$\delta_i^t \ge \eta_{j,u}^{i,t}, \quad \forall v_{j,u}^{nf} \in V_j^{NF},\ sfc_j \in S,\ v_i^s \in V^S,\ t \in T$$
The following constraint represents the migration of the virtual node $v_{j,u}^{nf}$: if the physical nodes to which $v_{j,u}^{nf}$ is mapped before and after the traffic change are different, then $v_{j,u}^{nf}$ should be migrated.
$$\xi_{j,u}^t \ge \eta_{j,u}^{i,t} - \eta_{j,u}^{i,t+1}, \quad \forall v_{j,u}^{nf} \in V_j^{NF},\ sfc_j \in S,\ v_i^s \in V^S,\ t \in T$$
A physical node may switch on or off before and after the traffic change, which satisfies
$$\lambda_i^t \ge \delta_i^{t+1} - \delta_i^t, \quad \forall v_i^s \in V^S,\ t \in T$$
$$\rho_i^t \ge \delta_i^t - \delta_i^{t+1}, \quad \forall v_i^s \in V^S,\ t \in T$$

3.3. NF Energy-Efficient Migration Problem

In this paper, we establish a system energy consumption model for SFC migration and design a migration algorithm to minimize the long-term total energy consumption. The energy consumption of the physical nodes is defined as
$$E_{sn} = \sum_{t \in T} \left(E_s^t + E_d^t\right)$$
where $E_s^t$ is the static energy consumption, a basic part that exists once the server is turned on and is consumed steadily even when the server is idle [13], and $E_d^t$ is the dynamic energy consumption, which covers the energy consumed by physical resources such as CPU, RAM, network, and disk under the time-varying traffic load. Studies have shown that the dynamic energy consumption of servers depends mainly on CPU utilization and has little correlation with other physical resources [14]. Therefore, we assume a linear relationship between the base power and the maximum power of the physical node $v_i^s \in V^S$, with ratio $a$. The energy consumption during a unit time $\Delta t$ can then be expressed as
$$E_{sn}^t = E_s^t + E_d^t = \sum_{v_i^s \in V^S} \delta_i^t\, a P_i^{max} \Delta t + \sum_{v_i^s \in V^S} (1 - a)\, u_i^{CPU,t} P_i^{max} \Delta t$$
$$u_i^{CPU,t} = \frac{\sum_{f \in F} \beta_f^{i,t} c_f + \sum_{sfc_j \in S} \sum_{v_{j,u}^{nf} \in V_j^{NF}} \eta_{j,u}^{i,t}\, c_{j,u}^{nf}(t)}{C_i^S}$$
where $u_i^{CPU,t}$ is the CPU utilization of $v_i^s$ during $t$.
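The per-interval server energy model above can be summarized in a few lines of Python; this is a minimal sketch under the paper's linear-power assumption, with illustrative parameter values.

```python
# Static plus CPU-proportional dynamic energy of one server over an interval
# delta_t (seconds). 'a' is the base-to-peak power ratio assumed in the paper.
def server_energy(on, p_max, cpu_util, a=0.7, delta_t=60.0):
    static = on * a * p_max * delta_t                 # E_s^t: paid whenever the server is on
    dynamic = (1 - a) * cpu_util * p_max * delta_t    # E_d^t: grows linearly with utilization
    return static + dynamic

# Example: a 200 W server at 50% CPU utilization for one minute
energy_joules = server_energy(on=1, p_max=200, cpu_util=0.5)
```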
The cost of migration mainly comes from memory migration and from the power consumed by server startup. In this paper, the energy consumption of migration is defined as
$$E_{mig} = \sum_{t \in T} \left(E_{mig,m}^t + E_{mig,c}^t\right)$$
where $E_{mig,m}^t$ is the cost of migrating the memory of VNFs and $E_{mig,c}^t$ is the energy cost of servers switching on from shutdown. The energy consumption of VNF memory migration is mainly the extra energy consumed by the source host and the target host [13], so we have
$$E_{mig,m}^t = \sum_{sfc_j \in S} \sum_{v_{j,u}^{nf} \in V_j^{NF}} \xi_{j,u}^t\, \frac{m_{j,u}^{nf}(t)}{L_{pac}}\, t_{pac} \sum_{v_i^s \in V^S} (1 - a)\left(\eta_{j,u}^{i,t} P_i^{max} + \eta_{j,u}^{i,t+1} P_i^{max}\right)$$
where $L_{pac}$ is the length of the packets carrying the memory stream and $t_{pac}$ is the processing time of a memory packet. Since the power consumed at the startup moment should not be ignored, we denote the ratio between startup power consumption and maximum power consumption as $b$. Thus, we have
$$E_{mig,c}^t = \sum_{v_i^s \in V^S} \lambda_i^t\, b P_i^{max}$$
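As a rough illustration of the migration-cost model, the sketch below computes the memory-transfer energy of a single VNF migration and the boot-up energy of a newly started server; the function names and unit handling are our own, and consistent units for memory size and packet length are assumed.

```python
# Memory-transfer energy of migrating one VNF: the memory stream is split into
# packets of length l_pac, each taking t_pac to process, and both the source and
# the target host draw extra dynamic power while this happens.
def memory_migration_energy(mem_size, l_pac, t_pac, p_src_max, p_dst_max, a=0.7):
    packets = mem_size / l_pac                        # mem_size and l_pac in the same unit
    return packets * t_pac * (1 - a) * (p_src_max + p_dst_max)

# Boot-up energy of a server switched on from shutdown; 'b' is the
# startup-to-peak power ratio used in the paper.
def bootup_energy(p_max, b=0.15):
    return b * p_max
```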
Therefore, the optimization objective is to minimize the total energy consumption of the system:
$$\min\ E_{total} = E_{sn} + E_{mig}$$
The energy-saving migration problem of VNFs proposed in this paper is NP-hard. To prove this, we construct a simplified energy-saving migration problem and reduce it to a known NP-hard problem. Consider a network with several SFCs, each consisting of a single VNF, where every physical node has all types of VNFs instantiated. The delay of each physical link is far less than the maximum latency of the SFCs, and every link is allocated sufficient bandwidth. In this case, the problem becomes a VNF embedding problem constrained only by node resources. It reduces to loading $N$ boxes of volume $c_{j,u}^{nf}$, $v_{j,u}^{nf} \in V_j^{NF}$, onto $V^S$ so as to maximize the loaded flow, which is the Knapsack Loading Problem (KLP), a proven NP-hard problem [15].

4. Algorithm Design

In the previous section, we proved that the NF energy-saving migration problem is NP-hard. Therefore, in this section, we propose an energy-efficient NF dynamic migration algorithm based on short-term resource requirement prediction (RP-EDM) to solve it. As presented in Figure 2, we first feed the historical network resource information to the prediction model and obtain the resource prediction results for every NF. These results are then input to RP-EDM. A sub-algorithm termed Resource Prediction Based Network Function Separation (PNFS) is executed on each physical node that is predicted to be overloaded and outputs a migration strategy that minimizes energy consumption while keeping services uninterrupted; another sub-algorithm, Resource Prediction Based Network Function Consolidation (PNFC), is executed for potential low-load nodes and outputs a reconfiguration strategy, including the migration timing, destination nodes, and routes, aiming to migrate all NFs away from the target nodes and shut them down so as to reduce the network energy consumption. Finally, RP-EDM joins the migration strategies obtained by the two sub-algorithms to output the final reconfiguration strategy.
RP-EDM follows the following principle:
PNFS should be executed immediately on overloaded servers to ensure that services are not interrupted. Since switching on a server causes extra energy consumption, in order to avoid unnecessary overhead, we first choose the target node for an NF from the servers that are already turned on; if no candidates are available, switching on additional servers is considered.
It is necessary to clarify which nodes are under low load and when to consolidate them, because NF consolidation brings migration costs. This problem can be interpreted from another perspective, that is, determining which servers should be consolidated and shut down in each time period. The consolidated servers are expected to stay off as long as possible to avoid servers being switched on and off frequently.
Different SFCs have different traffic fluctuation characteristics. For SFCs with fast fluctuations and large fluctuation ranges, migration should be executed more frequently, so that changes in resource demand caused by traffic fluctuations can be detected sensitively to prevent service interruption; for SFCs with slow fluctuations, migration should not be executed frequently, in order to avoid unnecessary server switch-ons and migration cost. RP-EDM uses the maximum normal-load time derived from the prediction to measure whether the current migration is worthwhile. For NFs that do not need migration, we use flexible horizontal scaling to adjust resources.

4.1. RP-EDM Algorithm

RP-EDM, as presented in Algorithm 1, takes the resource requirement predictions of the nodes as input. RP-EDM first executes the sub-algorithm NFS on nodes that are predicted to be overloaded in the next period and obtains the corresponding migration strategy (lines 1–3). Here, we propose an adaptive overload threshold: $\theta_i^t$, defined in Equation (28), represents the overload threshold bias of $v_i^s$ in $t$.
$$\theta_i^t = count(L_i^t) \cdot RMSE$$
where $count(L_i^t)$ is the number of NFs deployed on $v_i^s$ and $RMSE$ is the root mean square error of the prediction model. Then, the sub-algorithm NFC is executed to output the nodes that need to be consolidated in the current period and the consolidation strategies for all NFs on them (line 4). It is worth noting that the reconfiguration strategies output by the two sub-algorithms are not yet the final strategy of the current period. For instance, the following scenario may occur: $v_{k,w}^{nf}$ is supposed to migrate twice in the current period, that is, from $v_1^s$ to $v_2^s$ and then from $v_2^s$ to $v_3^s$. To prevent an NF from performing such multi-step migration, which increases migration energy consumption, line 6 ensures that at most one migration occurs for each NF in the current period and collapses the policy output by the sub-algorithms into a single migration, that is, $v_{k,w}^{nf}$ migrating from $v_1^s$ to $v_3^s$. Finally, the actual migrations are executed (line 7) to obtain the updated VNF mapping. In this way, the separation policy no longer simply migrates NFs in increasing order of resource requirement until the node is normally loaded, but follows a strategy that minimizes the migration cost of restoring overloaded nodes to a normal load.
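The two bookkeeping steps described above are easy to sketch in Python: the adaptive bias of Equation (28) and the collapsing of chained moves into a single migration per NF (line 6 of Algorithm 1). The data structures used here are illustrative assumptions.

```python
# Adaptive overload bias of Eq. (28): more NFs on a node and a larger prediction
# error both enlarge the safety margin below the overload threshold.
def overload_bias(nf_count, rmse):
    return nf_count * rmse

# Collapse chained moves such as v1 -> v2 -> v3 into a single move v1 -> v3,
# so each NF migrates at most once per period (Algorithm 1, line 6).
def merge_migrations(moves):
    """moves: list of (nf, src, dst) tuples produced by PNFS and PNFC."""
    final = {}
    for nf, src, dst in moves:
        if nf in final:
            final[nf] = (final[nf][0], dst)   # keep the original source, latest target
        else:
            final[nf] = (src, dst)
    # drop moves that cancel out (same source and target after merging)
    return {nf: sd for nf, sd in final.items() if sd[0] != sd[1]}
```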
The sub-algorithms NFS and NFC will be discussed in the next section.
Algorithm 1 RP-EDM
Input: $G = (V^S, E^S)$, the switch on/off policy $\delta^t$ of the network in the current period $t$, the predicted CPU requirements of all nodes $C_{predict} = \{c_i^s(t+n)\ (n = 1, 2, \dots) \mid v_i^s \in V^S\}$;
Output: the new mapping strategy of $G$ after reconfiguration;
1: for each $v_i^s \in V^S$ do
2:   if $c_i^s(t+1) \ge thr_{over} C_i^S - \theta_i^t$ then
3:     $\pi_s \leftarrow NFS(G, \delta^t, v_i^s, L_i^t, C_{predict})$;
4: $\pi_c \leftarrow NFC(G, \delta^t, L_i^t, C_{predict})$;
5: for each $(v_{k,w}^{nf}, v_{tar}^s) \in \pi_s \cup \pi_c$ do
6:   update $v_{tar}^s$, ensuring that $v_{k,w}^{nf}$ migrates only once;
7:   migrate $v_{k,w}^{nf}$ to $v_{tar}^s$;
8: return $G$;
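For readers who prefer code to pseudocode, the following is a high-level skeleton of Algorithm 1 with the sub-algorithms stubbed out; `pnfs`, `pnfc`, and `apply_migration` are placeholders standing in for PNFS, PNFC, and the simulator's migration step, and `merge_migrations` is the helper sketched earlier.

```python
def pnfs(g, delta, node, c_predict):    # stub for Algorithm 2 (separation)
    return []

def pnfc(g, delta, c_predict):          # stub for Algorithm 3 (consolidation)
    return []

def apply_migration(g, nf, src, dst):   # stub for the actual reconfiguration step
    pass

# Skeleton of Algorithm 1: separate predicted-overloaded nodes, consolidate
# low-load nodes, then execute at most one migration per NF.
def rp_edm(g, delta, c_predict, thr_over, bias):
    pi_s = []
    for node in g.nodes:                                            # lines 1-3
        if c_predict[node][1] >= thr_over * g.nodes[node]["cpu"] - bias[node]:
            pi_s += pnfs(g, delta, node, c_predict)
    pi_c = pnfc(g, delta, c_predict)                                # line 4
    moves = merge_migrations(pi_s + pi_c)                           # lines 5-6
    for nf, (src, dst) in moves.items():                            # line 7
        apply_migration(g, nf, src, dst)
    return g
```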

4.2. Resource Prediction Based Network Function Separation Algorithm

For a source node $v_{sou}^s$ that is about to be overloaded, PNFS needs to be executed immediately to ensure that the service is not interrupted. PNFS outputs the migration strategy, including which NFs are to be migrated together with their target nodes and route mappings. We propose the PNFS algorithm as presented in Algorithm 2. We construct the set $L_{sou}^t$ of NFs deployed on $v_{sou}^s$, ordered from small to large according to their memory requirements (line 1). The main loop (lines 2–21) continues as long as $v_{sou}^s$ is still overloaded and there remain NFs that can be separated. The NF $v_{k,w}^{nf} \in L_{sou}^t$ with the least memory requirement is selected (line 3). Next, the candidate set $A$ of target nodes is determined for $v_{k,w}^{nf}$, consisting of the powered-on and powered-off nodes $A_{on}$ and $A_{off}$, excluding $\{v_{sou}^s\}$ itself, that satisfy the following requirement (line 4): when migrating $v_{k,w}^{nf}$ to $v_i^s \in A$, at least one new path can be found that meets the bandwidth constraint and the delay constraint of $sfc_k$. Next, lines 5–12 determine the target node. First, according to the resource prediction results, the maximum duration $n_i$ for which $v_i^s$ would remain below the overload threshold after receiving $v_{k,w}^{nf}$ is calculated, together with the migration energy cost $E_{mig,kw}^{i,t}$ (lines 5–7). We choose the node with the largest duration-to-cost ratio as the target node $v_{tar}^s$, considering that the target node should not trigger further migrations soon and that the migration energy consumption should be minimized.
We consider powered-on nodes in the candidate set first, and only then powered-off nodes (lines 9–12). In line 13, we look for all possible paths after the migration of NF $v_{k,w}^{nf}$ from $v_{sou}^s$ to $v_{tar}^s$, sorted by path delay. We then traverse from the shortest path until we find a path $P$ that meets the link bandwidth constraint and the latency constraint of $sfc_k$, with $v_{tar}^s$ having enough resources to deploy $v_{k,w}^{nf}$. The migration is then considered feasible and is added to $\pi_s$, and $v_{k,w}^{nf}$ is deleted from $L_{sou}^t$ (lines 14–15). Since the mapping of NFs on $v_{sou}^s$ and $v_{tar}^s$ changes, we update the predicted CPU usage of both (line 16). If $v_{tar}^s$ is newly switched on, we update its on/off state (line 17). If no suitable migration path can be found, $v_{tar}^s$ is removed from $A$ and migration to a sub-optimal target node is considered (lines 18–20). If no suitable path can be found for any node in $A$, then $v_{k,w}^{nf}$ is removed from $L_{sou}^t$ (line 22). The above operations are then performed on the other NFs in $L_{sou}^t$ until $v_{sou}^s$ is no longer overloaded.
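The core selection rule of PNFS can be condensed into a short helper: prefer powered-on candidates and pick the node with the largest ratio of predicted safe duration to migration cost. This is a hedged sketch; the dictionaries `duration` and `mig_cost` are assumed to hold $n_i$ and $E_{mig,kw}^{i,t}$ per candidate.

```python
# Target-node choice in PNFS (Algorithm 2, lines 8-12): powered-on candidates
# take priority, and within the chosen pool the node maximizing n_i / E_mig wins.
def select_target(candidates_on, candidates_off, duration, mig_cost):
    pool = candidates_on if candidates_on else candidates_off
    if not pool:
        return None                                   # no feasible target node
    return max(pool, key=lambda v: duration[v] / mig_cost[v])
```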
Algorithm 2 Resource Prediction Based Network Function Separation (PNFS)
Input: $G = (V^S, E^S)$, the switch on/off policy $\delta^t$ of the network in the current period $t$, the overloaded node $v_{sou}^s$, the predicted CPU requirements of all nodes $C_{predict} = \{c_i^s(t+n)\ (n = 1, 2, \dots) \mid v_i^s \in V^S\}$;
Output: the separation strategy $\pi_s$;
1: $L_{sou}^t \leftarrow \{v_{j,u}^{nf} \mid \eta_{j,u}^{sou,t} = 1,\ v_{j,u}^{nf} \in V_j^{NF},\ sfc_j \in S\}$;
2: while $c_{sou}^s(t+1) \ge thr_{over} C_{sou}^S - \theta_{sou}^t$ and $L_{sou}^t \neq \emptyset$ do
3:   select the NF $v_{k,w}^{nf} = \arg\min_{v_{j,u}^{nf} \in L_{sou}^t} m_{j,u}^{nf}(t)$;
4:   find the target node candidate set $A \leftarrow A_{off} \cup A_{on} \setminus \{v_{sou}^s\}$ for $v_{k,w}^{nf}$: all powered-on and powered-off nodes for which a path meeting the delay and bandwidth constraints of $sfc_k$ exists;
5:   for each $v_i^s \in A$ do
6:     $n_i \leftarrow \max_{n = 1, 2, \dots} \{n \mid c_i^s(t+n) + c_{k,w}^{nf}(t+n) < thr_{over} C_i^S - \theta_i^{t+n}\}$;
7:     calculate the migration energy cost $E_{mig,kw}^{i,t}$;
8:   if $A \neq \emptyset$ then
9:     if $A_{on} \neq \emptyset$ then
10:      select the target node $v_{tar}^s = \arg\max_{v_i^s \in A_{on}} n_i / E_{mig,kw}^{i,t}$;
11:    else
12:      select $v_{tar}^s = \arg\max_{v_i^s \in A_{off}} n_i / E_{mig,kw}^{i,t}$;
13:    if a satisfying path $P$ is found for $\eta_{k,w}^{tar,t} = 1$ then
14:      $\pi_s \leftarrow \pi_s.add(\{\eta_{k,w}^{tar,t} = 1\};\ \{\tau_{j,uv}^{pq,t} = 1 \mid v_{j,u}^{nf}, v_{j,v}^{nf} \in V_j^{NF},\ (p,q) \in P\})$;
15:      $L_{sou}^t \leftarrow L_{sou}^t.remove(v_{k,w}^{nf})$;
16:      update $c_{sou}^s(t+n)$ and $c_{tar}^s(t+n)$;
17:      update $\delta_{tar}^t \leftarrow 1$ if $\lambda_{tar}^t = 1$;
18:    else
19:      $A \leftarrow A.remove(v_{tar}^s)$;
20:      go to line 5;
21:  else
22:    $L_{sou}^t \leftarrow L_{sou}^t.remove(v_{k,w}^{nf})$;
23: return $\pi_s$;

4.3. Resource Prediction Based Network Function Consolidation Algorithm

In order to save energy, we consolidate servers with low load to reduce the number of powered-on servers, and therefore propose PNFC, shown in Algorithm 3. According to the resource prediction results, we construct a set $N_{low}^t$ sorted in descending order of node low-load duration. Lines 2–23 form the main loop of PNFC. Unlike separation in PNFS, consolidation is not mandatory. Each iteration starts at the node $v_{sou}^s$ with the longest low-load duration and terminates if the consolidation fails. The set of ongoing NFs on $v_{sou}^s$ is $L_{sou}^t$ (line 4). The consolidation succeeds when all NFs in $L_{sou}^t$ are successfully migrated to target nodes and the source node can be turned off (lines 5–18). An NF $v_{j,u}^{nf} \in L_{sou}^t$ is chosen randomly, and the target node candidate set $A_{on}$ is found, which consists of powered-on nodes with adequate available resources for which paths meeting the bandwidth and latency constraints of $sfc_j$ can be found. Lines 8–10 calculate, for every candidate node, the normal-load duration after migrating $v_{j,u}^{nf}$ to it and the migration cost. The node with the highest ratio of normal-load duration to migration cost is chosen as the target node $v_{tar}^s$ (line 11). As the dynamic energy cost before and after the migration can be interpreted as a transfer, we assume, for the sake of simplicity, that it changes little and does not affect the total energy consumption. The consolidation is considered worthwhile if the total cost of migrating all NFs in $L_{sou}^t$ twice is less than the static energy saved by turning off $v_{sou}^s$ (line 12). Next, we find the set of all possible paths and start from the shortest one until we find a path $P$ that satisfies all constraints; the migration of $v_{j,u}^{nf}$ from $v_{sou}^s$ to $v_{tar}^s$ is then feasible. The migration strategy (NF mapping and path mapping) is added to $\pi_c$ and $v_{j,u}^{nf}$ is deleted from $L_{sou}^t$. The remaining NFs in $L_{sou}^t$ then continue to migrate and the network parameters are updated. If no satisfying path is found, the consolidation of $v_{j,u}^{nf}$ fails and PNFC terminates (lines 13–18). If all NFs in $L_{sou}^t$ are migrated successfully, $v_{sou}^s$ can be shut down; $v_{sou}^s$ is deleted from $N_{low}^t$ and the next low-load node is consolidated (lines 19–21).
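The worthiness test at the heart of PNFC (line 12 of Algorithm 3) compares the cost of moving the node's NFs, counted twice to allow for a later move back, against the static energy saved while the node stays off; a minimal sketch follows, with parameter names chosen by us.

```python
# Consolidation worthiness test: migrating the NFs of a low-load node (counted
# twice) must cost less than the static energy a*P_max saved over the node's
# predicted low-load duration n_low.
def consolidation_worthwhile(e_mig_per_nf, nf_count, a, p_max, n_low):
    return 2 * e_mig_per_nf * nf_count < a * p_max * n_low
```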
Algorithm 3 Resource Prediction Based Network Function Consolidation (PNFC)
Input: $G = (V^S, E^S)$, the switch on/off policy $\delta^t$ of the network in the current period $t$, the predicted CPU requirements of all nodes $C_{predict} = \{c_i^s(t+n)\ (n = 1, 2, \dots) \mid v_i^s \in V^S\}$;
Output: the consolidation strategy $\pi_c$;
1: create the set $N_{low}^t = \{n_{low}^i \mid v_i^s \in V^S\}$, where $n_{low}^i = \max_{n = 1, 2, \dots} \{n \mid c_i^s(t+n) \le thr_{low} C_i^S\}$;
2: while $N_{low}^t \neq \emptyset$ and $\max_{n_{low}^i \in N_{low}^t} n_{low}^i > 1$ do
3:   select the node $v_{sou}^s = \arg\max_{n_{low}^i \in N_{low}^t} n_{low}^i$;
4:   $L_{sou}^t \leftarrow \{v_{j,u}^{nf} \mid \eta_{j,u}^{sou,t} = 1,\ v_{j,u}^{nf} \in V_j^{NF},\ sfc_j \in S\}$;
5:   while $L_{sou}^t \neq \emptyset$ do
6:     select an NF $v_{j,u}^{nf} \in L_{sou}^t$;
7:     find the target node candidate set $A_{on} \leftarrow \{v_i^s \mid \delta_i^t = 1,\ v_i^s \in V^S \setminus \{v_{sou}^s\}\}$ for $v_{j,u}^{nf}$: powered-on nodes for which a path meeting the delay and bandwidth constraints of $sfc_j$ exists;
8:     for each $v_i^s \in A_{on}$ do
9:       $n_i \leftarrow \max_{n = 1, 2, \dots} \{n \mid thr_{low} C_i^S < c_i^s(t+n) + c_{j,u}^{nf}(t+n) < thr_{over} C_i^S\}$;
10:      calculate the migration energy cost $E_{mig,ju}^{i,t}$;
11:    select the target node $v_{tar}^s = \arg\max_{v_i^s \in A_{on}} n_i / E_{mig,ju}^{i,t}$;
12:    if $2 E_{mig,ju}^{tar,t}\, count(L_{sou}^t) < a P_{sou}^{max}\, n_{low}^{sou}$ then
13:      if a satisfying path $P$ is found for $\eta_{j,u}^{tar,t} = 1$ then
14:        $\pi_c \leftarrow \pi_c.add(\{\eta_{j,u}^{tar,t} = 1\};\ \{\tau_{j,mn}^{pq,t} = 1 \mid v_{j,m}^{nf}, v_{j,n}^{nf} \in V_j^{NF},\ (p,q) \in P\})$;
15:        $L_{sou}^t \leftarrow L_{sou}^t.remove(v_{j,u}^{nf})$;
16:        update $c_{sou}^s(t+n)$ and $c_{tar}^s(t+n)$;
17:      else
18:        break;
19:  if $L_{sou}^t = \emptyset$ then
20:    $N_{low}^t.remove(n_{low}^{sou})$;
21:    $\delta_{sou}^t \leftarrow 0$;
22:  else
23:    break;
24: return $\pi_c$;

5. Performance Evaluation

5.1. Simulation Setup

In order to evaluate the effectiveness of the proposed algorithm, we run simulations on the NSFNET network topology. The simulation is implemented on SFCSim [16], an SFC simulation platform based on Python.
The physical network is shown in Figure 3, with 14 physical nodes and 21 links. The CPU and memory capacities of the servers are uniformly distributed in $(250, 300)$ MIPS and $(600, 1000)$ GB, respectively [17]. The maximum power of a server is uniformly distributed in $(170, 230)$ W. We set the link bandwidth capacity to 500 Mbps and the link delay uniformly distributed in $(1, 4)$ ms. We assume that the ratio of server base power to maximum power is 0.7 and the ratio of server boot-up power to maximum power is 0.15. The traffic packet length is set to 1500 bytes, and the packet processing time is 160 μs [8]. The low-load threshold and the overload threshold are 0.3 and 0.9, respectively [18].
We consider three types of VNFs. Each SFC randomly selects some of them, and the length of each SFC is randomly chosen from 1 to 3. Meanwhile, the ingress and egress are chosen randomly from $V^S$. The bandwidth factor $ratio^f$ of the different VNF types is selected from $(0.5, 1, 1.5)$. To simplify the problem, we assume that no resources are needed to instantiate a VNF; resources are only needed when deploying an NF. The CPU coefficient $coeff_c^f$ and memory coefficient $coeff_m^f$ of the different VNF types are uniformly distributed in $(0.2, 0.5)$. The maximum latency of an SFC is 30 ms [17]. We use the traffic data from the Clearwater VNF Dataset [19]. We consider a network with 25 SFCs over 24 h, where the arrival times of the requests follow a Poisson process and each request has an exponentially distributed life cycle with an average of 1000 min.
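To make the setup easy to reproduce, the parameters listed above can be collected into a single configuration, as in the sketch below; the dictionary layout and the sampling helper are our own, while the values are taken from the text.

```python
import random

# Simulation parameters from Section 5.1 gathered in one place (layout is ours).
SIM_PARAMS = {
    "cpu_mips": (250, 300),       # per-server CPU capacity, uniform
    "mem_gb": (600, 1000),        # per-server memory capacity, uniform
    "p_max_w": (170, 230),        # per-server peak power, uniform
    "link_bw_mbps": 500,
    "link_delay_ms": (1, 4),
    "a_base_ratio": 0.7,          # base power / peak power
    "b_boot_ratio": 0.15,         # boot-up power / peak power
    "packet_bytes": 1500,
    "packet_proc_us": 160,
    "thr_low": 0.3,
    "thr_over": 0.9,
    "sfc_max_latency_ms": 30,
    "coeff_range": (0.2, 0.5),    # CPU / memory coefficients of VNF types
    "ratio_choices": (0.5, 1, 1.5),
    "num_sfcs": 25,
    "mean_lifetime_min": 1000,
}

def sample_server():
    # Draw one server's capacities and peak power from the uniform ranges above.
    return {
        "cpu": random.uniform(*SIM_PARAMS["cpu_mips"]),
        "mem": random.uniform(*SIM_PARAMS["mem_gb"]),
        "p_max": random.uniform(*SIM_PARAMS["p_max_w"]),
    }
```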

5.2. Simulation Result and Analysis

We use the following metrics to evaluate the performance of the algorithm.
  • Total energy consumption: the sum of the energy consumption and migration energy consumption of all server nodes in the network.
  • Total service traffic: the total traffic processed by the SFCs accepted by the network.
  • Energy bandwidth ratio: the ratio of total network energy consumption to total bandwidth, which measures the energy consumed to process a unit of bandwidth.
  • Service outage time rate: for SFCs that have not reached their life cycle but are interrupted early due to insufficient resources, we define the service interruption time as the time from the interruption to the end of the life cycle; the outage time rate is the ratio of the interruption time to the life cycle.
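The two ratio metrics above are straightforward to compute from simulator logs; a minimal sketch follows, with input names assumed for illustration.

```python
# Energy bandwidth ratio: total network energy divided by total processed bandwidth.
def energy_bandwidth_ratio(total_energy, total_bandwidth):
    return total_energy / total_bandwidth if total_bandwidth else float("inf")

# Service outage time rate of one SFC: time from its early interruption to the
# end of its life cycle, normalized by the life cycle.
def outage_time_rate(interrupt_time, life_cycle):
    return interrupt_time / life_cycle
```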
We compared a basic LSTM and the CAT-LSTM of [12] for predicting the CPU resource requirements of each NF in the network. The training results of the models are presented in Table 2, and these predictions are used as the input of RP-EDM. To study the impact of prediction accuracy on RP-EDM, we also include RP-EDM-Precise as a reference, representing the limiting case in which the prediction accuracy is 100%. Meanwhile, we compare RP-EDM with GNFC [6], which is executed periodically (here, hourly), and with the Never Migrate strategy. To ensure that the deployment algorithm does not affect the comparative analysis, SFC deployment in all compared algorithms adopts the same shortest-path-first deployment algorithm.
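For completeness, the sketch below shows how a basic LSTM predictor of per-NF CPU demand could be built with Keras, loosely following the hyperparameters reported with Table 2 (80 RNN units, learning rate 0.001, early stopping); the window size, data shapes, and random toy data are illustrative, and this is not the CAT-LSTM of [12].

```python
import numpy as np
from tensorflow import keras

# Basic LSTM regressor: a window of historical CPU usage -> predicted demand.
def build_lstm(history_len=15, features=1, units=80, lr=0.001):
    model = keras.Sequential([
        keras.layers.Input(shape=(history_len, features)),
        keras.layers.LSTM(units),
        keras.layers.Dense(1),            # predicted CPU demand at the target horizon
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr), loss="mse")
    return model

# Toy usage with random data standing in for the Clearwater VNF traces.
x = np.random.rand(256, 15, 1)
y = np.random.rand(256, 1)
model = build_lstm()
model.fit(x, y, epochs=5, verbose=0,
          callbacks=[keras.callbacks.EarlyStopping(monitor="loss", patience=3)])
```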
The total network service traffic is shown in Figure 4a. RP-EDM-Precise, RP-EDM-Basic-LSTM, and RP-EDM-CAT-LSTM achieve 14.40–46.52% higher total traffic than Never Migrate. Since GNFC only performs consolidation and does not consider server overload, it further exacerbates service interruption. Due to its lower accuracy, RP-EDM-Basic-LSTM cannot detect server overload accurately, resulting in service interruptions, and its total service traffic is 2.59% lower than that of RP-EDM-CAT-LSTM. The energy bandwidth ratio is shown in Figure 4b. As time goes by and migrations continue, the energy bandwidth ratio declines and gradually converges. The energy bandwidth ratio of the RP-EDM series algorithms is lower than that of GNFC and Never Migrate, which suggests that our algorithm requires less energy to process a unit of traffic and is effective in reducing long-term network energy consumption. At the same time, RP-EDM-CAT-LSTM, which has higher prediction accuracy, is closer to the optimal RP-EDM-Precise; its energy bandwidth ratio is 9.40% lower than that of RP-EDM-Basic-LSTM.
The SLA violation rate and service outage time rate are shown in Figure 4c. The request acceptance rate of the RP-EDM series algorithms is higher than that of the comparison algorithms, while their SLA violation rate and service outage time rate are lower. This is because RP-EDM combines separation and consolidation. For nodes that are predicted to be overloaded, PNFS executes immediately to restore them to a normal load, so as not to violate the SLA. According to the accuracy of the different prediction models, adaptive overload thresholds are used to ease, to a certain extent, the service interruptions caused by prediction errors. However, the interruption time is still 6.08–9.15% higher than that of the optimal RP-EDM-Precise.
In order to analyze the impact of our algorithm on energy consumption more intuitively, we compare the total network energy of the different algorithms when serving equal traffic with adequate resources. The total energy over 24 h is shown in Figure 5. The energy increase of the RP-EDM series algorithms is relatively small, and their total energy is 20.84–40.65% lower than that of GNFC. This is because our algorithm uses the CPU resource predictions and accounts for the boot-up energy, jointly considering node energy and migration energy according to the future network situation. Monitoring the network every minute, RP-EDM selects the servers to be consolidated and ensures they will not be turned on soon after being shut down, which not only turns off low-load servers to save energy but also avoids frequent startups and migrations. Additionally, we can see that models with different accuracy do have different impacts on RP-EDM. RP-EDM-CAT-LSTM uses a higher-precision prediction model, and its performance is close to the optimal RP-EDM-Precise. In contrast, RP-EDM-Basic-LSTM is unable to accurately detect low node load in advance due to its lower prediction accuracy, and unsuitable consolidation decisions lead to higher energy consumption.
Figure 6a,b shows the number of migrations in each period. Compared with GNFC, which is executed every hour, RP-EDM migrates only in some periods, while in others the number of migrations is 0, showing an aperiodic pattern overall. The total number of migrations of our algorithm is lower than that of the comparison algorithm under both high and low loads. This is because RP-EDM monitors the network in real time but does not migrate every cycle; instead, it decides whether to migrate a given NF based on the predicted resource requirements. The advantage of RP-EDM in determining migration timing lies in reducing frequent migrations and switch-on/off events and in consolidating low-load servers in a timely manner, which is further reflected in the network energy savings.
The above results show that a migration strategy that considers short-term predictions of NF resource demand is effective in improving network performance, especially in reducing long-term energy consumption while keeping the SLA violation rate low. This illustrates that prediction plays a nontrivial role in improving user experience. At the same time, the accuracy of the prediction model has a great influence on the optimization of network energy: higher prediction accuracy tends to be conducive to determining the migration timing and reducing the service outage time.

6. Conclusions

6.1. Summary of the Performance

In order to solve the problem of network energy consumption caused by the dynamic changes of resource demand during SFC processing, this paper used short-term predictions of network resource demand and proposed the RP-EDM algorithm to determine the migration timing. The simulation results show that our algorithm can reduce long-term network energy consumption. In addition, the combination of prediction and migration improves network service capability and effectively reduces the number of SLA violations. We also find that the accuracy of the prediction model largely affects the performance of the algorithm. In future research, we will focus on modeling VNF resource prediction more accurately and propose a model with high accuracy and strong generalization.

6.2. Limitation of This Study

Although this study takes many factors into account, it still has some limitations that affect its performance. First, we adopted the shortest-path-first algorithm when deploying VNFs; there are situations where deployment fails even though sufficient resources are available in the network, which directly affects the request acceptance rate. Second, the most important limitation lies in the fact that the prediction models we use are not accurate enough for time-varying traffic, which is directly reflected in the performance: on the one hand, they are not sensitive enough to overloaded nodes, which causes service interruptions; on the other hand, potential low-load nodes cannot be sensed, which reduces the energy-saving performance. Third, in this work, each model predicts only one VNF, so multiple models must be trained for the different VNFs of an SFC, which increases the complexity and time of model training. In future work, we will examine these points and conduct further research on learning models for this problem.

6.3. Potential Future Research Directions

It is recommended that further research be undertaken in the following directions.
A regression model with high accuracy in predicting resource demand needs to be established. Learning the relationships between the target VNF's resource demand, the other VNFs on the same SFC, and the characteristics of the SFC would help achieve greater accuracy; such information can be added to the training set through embedding. Meanwhile, since the resource demand changes over time, after a period the initially trained model will fail to predict new samples. Therefore, it is necessary to update the sample data online to realize real-time optimization of the model.
Traditional heuristic VNF migration algorithms cannot adapt well to dynamic network traffic and structure. In our study, we introduced supervised-learning-based prediction of resource demand to mitigate this lag as much as possible. Another direction lies in Reinforcement Learning (RL) for assisting VNF reconfiguration, where dynamically updated decision-making strategies are significant for achieving long-term network energy optimization. It would be interesting to assess and compare the effects of heuristic algorithms and RL algorithms; further research on the role of RL would be of great help in addressing the VNF reconfiguration problem.

Author Contributions

Y.L., J.R., H.H. and B.T. conceived of the idea, designed and performed the evaluation, analyzed the results, drafted the initial manuscript and revised the final manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62090011.

Acknowledgments

The authors gratefully acknowledge the informative comments and suggestions of the reviewers, which have improved both the content and presentation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kaur, K.; Mangat, V.; Kumar, K. A comprehensive survey of service function chain provisioning approaches in SDN and NFV architecture. Comput. Sci. Rev. 2020, 38, 100298. [Google Scholar] [CrossRef]
  2. Eramo, V.; Miucci, E.; Ammar, M.; Lavacca, F.G. An Approach for Service Function Chain Routing and Virtual Function Network Instance Migration in Network Function Virtualization Architectures. IEEE ACM Trans. Netw. 2017, 25, 2008–2025. [Google Scholar] [CrossRef]
  3. Yang, S.; Li, F.; Trajanovski, S.; Yahyapour, R.; Fu, X. Recent Advances of Resource Allocation in Network Function Virtualization. IEEE Trans. Parallel Distrib. Syst. 2021, 32, 295–314. [Google Scholar] [CrossRef]
  4. Qu, K.; Zhuang, W.; Shen, X.; Li, X.; Rao, J. Dynamic Resource Scaling for VNF Over Nonstationary Traffic: A Learning Approach. IEEE Trans. Cogn. Commun. Netw. 2021, 7, 648–662. [Google Scholar] [CrossRef]
  5. Beloglazov, A.; Abawajy, J.; Buyya, R. Energy-Aware Resource Allocation Heuristics for Efficient Management of Data Centers for Cloud Computing. Future Gen. Comput. Syst. 2012, 28, 755–768. [Google Scholar] [CrossRef] [Green Version]
  6. Wen, T.; Yu, H.; Sun, G.; Liu, L. Network function consolidation in service function chaining orchestration. In Proceedings of the IEEE International Conference on Communications (ICC), Kuala Lumpur, Malaysia, 22–27 May 2016; pp. 1–6. [Google Scholar]
  7. Qi, D.; Shen, S.; Wang, G. Virtualized Network Function Consolidation Based on Multiple Status Characteristics. IEEE Access 2019, 7, 59665–59679. [Google Scholar] [CrossRef]
  8. Eramo, V.; Ammar, M.; Lavacca, F.G. Migration Energy Aware Reconfigurations of Virtual Network Function Instances in NFV Architectures. IEEE Access 2017, 5, 4927–4938. [Google Scholar] [CrossRef]
  9. Rais, I.; Orgerie, A.-C.; Quinson, M.; Lefevre, L. Quantifying the impact of shutdown techniques for energy-efficient data centers. Concurr. Comput. Pr. Exp. 2018, 30, e4471. [Google Scholar] [CrossRef] [Green Version]
  10. Sun, G.; Zhou, R.; Sun, J.; Yu, H.; Vasilakos, A.V. Energy-Efficient Provisioning for Service Function Chains to Support Delay-Sensitive Applications in Network Function Virtualization. IEEE Internet Things J. 2020, 7, 6116–6131. [Google Scholar] [CrossRef]
  11. Tang, L.; He, X.; Zhao, P.; Zhao, G.; Zhou, Y.; Chen, Q. Virtual Network Function Migration Based on Dynamic Resource Requirements Prediction. IEEE Access 2019, 7, 112348–112362. [Google Scholar] [CrossRef]
  12. Kim, H.-G.; Lee, D.-Y.; Jeong, S.-Y.; Choi, H.; Yoo, J.-H.; Hong, J.W.-K. Machine Learning-Based Method for Prediction of Virtual Network Function Resource Demands. In Proceedings of the 2019 IEEE Conference on Network Softwarization (NetSoft), Paris, France, 24–28 June 2019; pp. 405–413. [Google Scholar] [CrossRef]
  13. Strunk, A. Costs of Virtual Machine Live Migration: A Survey. In Proceedings of the 2012 IEEE Eighth World Congress on Services, Honolulu, HI, USA, 24–29 June 2012; pp. 323–329. [Google Scholar]
  14. Chaurasia, N.; Kumar, M.; Chaudhry, R.; Verma, O.P. Comprehensive survey on energy-aware server consolidation techniques in cloud computing. J. Supercomput. 2021, 77, 11682–11737. [Google Scholar] [CrossRef]
  15. Kolesar, P.J. A Branch and Bound Algorithm for the Knapsack Problem. Manag. Sci. 1967, 13, 723–735. [Google Scholar] [CrossRef] [Green Version]
  16. SFCSim Simulation Platform. Available online: https://pypi.org/project/sfcsim/ (accessed on 5 January 2021).
  17. Tang, L.; He, L.; Tan, Q.; Chen, Q. Virtual Network Function Migration Optimization Algorithm Based on Deep Deterministic Policy Gradient. J. Electron. Inf. Technol. 2021, 43, 404–411. [Google Scholar]
  18. Tang, L. Multi-priority Based Joint Optimization Algorithm of Virtual Network Function Migration Cost and Network Energy Consumption. J. Electron. Inf. Technol. 2019, 41, 2079–2086. [Google Scholar]
  19. VNFDataset: Virtual IP Multimedia IP System. Available online: https://www.kaggle.com/imenbenyahia/clearwatervnf-virtual-ip-multimedia-ip-system (accessed on 30 January 2021).
Figure 1. SFC mapping architecture.
Figure 2. Overall RP-EDM architecture.
Figure 3. NSFNET network topology.
Figure 4. Comparison of the result of RP-EDM when using Basic LSTM and CAT LSTM, GNFC and Never Migrate policy: (a) total service traffic; (b) energy bandwidth ratio; (c) SLA violation rate and service outage time rate.
Figure 5. The total cost of RP-EDM series algorithms, GNFC and Never Migrate policy to the optimal problem is reported for the same amount of traffic being served. CPU capability of each server is equal to 1000 MIPS.
Figure 6. Comparison of the result of RP-EDM and GNFC when serving different amounts of traffic; (a) a large amount of traffic and CPU capability of each server is equal to 1000 MIPS; (b) a small amount of traffic and CPU capability of each server is evenly distributed from 250 to 300 MIPS.
Table 1. Comparison of the research content of previous works.
Ref. | Time-Varying Traffic | Aperiodic Execution | Consolidation | Separation | Boot-Up Energy | Proactive Migration | Use Prediction Results
[6]
[7]
[8]
[10]
[11]
[12]
Our approach
Table 2. Comparison of the results of CAT-LSTM and the basic LSTM, averaged across all VNFs of the SFCs, for the number of RNN units = 80, learning rate = 0.001, maximum number of iterations = 800 with early stopping, prediction time = 5 s, amount of historical data = 15 s.
Metric | Basic-LSTM | CAT-LSTM
loss | 0.3316 | 0.0345
mse | 30.4405 | 13.9401
rmse | 5.5172 | 3.7336
