Article

Router Activation Heuristics for Energy-Saving ECMP and Valiant Routing in Data Center Networks

by Piotr Arabas 1,*, Tomasz Jóźwik 2 and Ewa Niewiadomska-Szynkiewicz 1
1 Institute of Control and Computation Engineering, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland
2 Doctoral School of the Military University of Technology, Sylwestra Kaliskiego 2, 00-908 Warsaw, Poland
* Author to whom correspondence should be addressed.
Energies 2023, 16(10), 4136; https://doi.org/10.3390/en16104136
Submission received: 2 April 2023 / Revised: 12 May 2023 / Accepted: 15 May 2023 / Published: 17 May 2023

Abstract: This paper addresses the energy conservation problem in computing systems, focusing on energy-efficient routing protocols. We formulated and solved a network-wide optimization problem for calculating energy-aware routing for the recommended network configuration. Considering the complexity of the mathematical models of data center networks, the limitations of calculating routing by solving large-scale optimization problems, and the methods described in the literature, we propose an alternative solution. We designed and developed several efficient heuristics for equal-cost multipath (ECMP) and Valiant routing that reduce the energy consumption in the computer network interconnecting computing servers. Implementing these heuristics enables the selection of routing paths and relay nodes based on the current and predicted internal network load. The utility and efficiency of our methods were verified by simulation. The test cases were carried out on several synthetic network topologies, giving encouraging results. The similarity between the results of our efficient heuristic algorithms and the solutions of the optimization task confirms the usability and effectiveness of our approach. We conclude the paper with well-justified recommendations for energy-aware computing system design.

1. Introduction

Increasing demand for advanced computing services and transmission quality requires expansion of the existing network and server architecture [1,2,3]. The increase in the number of computing units and network devices results in a significant increase in power consumption while at the same time increasing the number of devices that are not constantly in use. Redundant nodes in the network are necessary to ensure the scalability of services, to enable the handling of an increased number of customers in the future, and, above all, due to changes in demand for computing power and network capacity that are difficult to predict.
Data center (DC) energy efficiency is often not treated as critical when designing a computer network architecture for data exchange between computing servers. However, it is evident that to support computationally intensive services, new-generation networks must be able to perform complex operations in a scalable way and ensure the expected quality of service. Therefore, network devices are expected to have increased computing power and bandwidth, resulting in growing energy requirements. The higher network maintenance costs increase the costs incurred by the end customer. In general, ignoring the energy cost when designing a data center can result in an overdesigned infrastructure whose increased maintenance cost is not even required to provide adequate safety margins in terms of available computing power and link bandwidth. Therefore, energy awareness is an essential aspect of modern data center design and management, and the optimization of total power consumption in data center networks (DCNs) is a significant research issue. The main challenge is developing novel technologies that reduce energy consumption in such infrastructures.
New energy-efficient routing algorithms, methods, and control frameworks for dynamic power management in energy-aware computer networks have been designed and developed to achieve the desired trade-off between data center performance, defined by the DCN capacity and current requirements, and power consumption [4,5,6,7,8,9]. The aim is to reduce the gap between the capacity provided by a data center network for data transfer and the current demand for data exchange between computing nodes, and to increase network resilience to unpredictable spikes in traffic load. In particular, the energy consumption in a network can be minimized by switching off or idling energy-consuming components such as switches, routers, line cards, and communication interfaces, and by reducing processor and link speeds [10,11,12,13]. Aggregating data transfers along as few devices as possible, instead of balancing traffic across the whole network, can be successfully used for energy-efficient dynamic management in local area networks (LANs) and wide area networks (WANs). Unfortunately, in the case of heavily used DCNs, where the critical task is to guarantee fast and reliable data transfer between computing servers, it may not be possible to disable multiple network devices. In this case, it is necessary to determine online a path with sufficient bandwidth, assuming the lowest possible power consumption.
In this paper, we overview the network topologies and routing algorithms used in real data centers. We assume that the data center can offer different services and that the data center network must carry the associated transmissions. We consider a software-defined network (SDN) with a central controller when the control plane is separated from network nodes. Thus, the routing may be calculated in a centralized manner. To ensure high availability, the redundant central controller may be implemented.
We pay primary attention to energy-efficient routing: we formulate the energy-saving DCN management optimization problem and discuss the limitations of formal analysis and numerical optimization in large-scale DCN management. We propose an alternative solution, i.e., efficient heuristic algorithms for adaptive activity control of DCN nodes and energy-aware multipath routing calculation, together with simulation-based analysis. Using simulation allows us to analyze dynamic scenarios and observe many phenomena outside the mathematical model's scope, e.g., congestion and packet loss.
To sum up, this work's main contribution is to present and discuss several heuristics for equal-cost multipath (ECMP) and Valiant routing that reduce the energy consumption in networks interconnecting computing servers in large-scale data centers. The approach assumes selective deactivation of switches in periods of low demand and their activation as traffic increases. The results of simulations of energy-efficient DCN management using these heuristics, conducted on computing centers with different architectures, are compared with the solution of the optimization problem mentioned above for energy-aware network management. All numerical results are presented along with conclusions and further research directions. The risks of implementing energy-saving algorithms in production environments are identified and discussed.
It should be noted that the algorithms discussed, like all other energy-aware DCN management methods, are applicable in situations of incomplete network loading. In a heavily used DCN, there is usually no room for energy saving, i.e., all devices have to work to guarantee the assumed quality of service.
The paper is organized as follows. The problem of communication management in real data centers and the description of ECMP routing and Valiant protocol are reported in Section 2. Section 3 discusses related work on power control in computer networks and data centers provided in the literature. In Section 4, the DCN optimization problem for calculating energy-aware routing is defined and formulated. The heuristic algorithms for energy-efficient computer network management are presented in Section 5. The results of simulations are described and discussed in Section 6. Finally, conclusions and future research directions are drawn in Section 7.

2. Communication Management in a Data Center

In the networks of large data centers, performance and reliability are crucial to the quality of the services provided. The designed networks usually contain many redundant connections, enabling the remaining infrastructure to take over the tasks of the part that has failed. Service providers guarantee network availability at a certain level for the infrastructure they maintain, offering financial compensation in the event of a violation of the declared reliability levels. The routing algorithm should enable the automatic minimization of the effects of failures and overloads in the network by selecting alternative paths to handle flows. It can also be used to select a data center to perform tasks outsourced by remote customers. Considering the client's location, the optimal path is determined, minimizing communication delays caused by a significant distance between the source and destination servers. For this purpose, some infrastructure providers offer services enabling the separation of network traffic according to its origin.

2.1. Data Center Networks Architectures

The network topology is vital to network performance and reliability [1,14]. It determines the length of network paths and the backup connections that can be used in the case of failure. In general, network architectures employing simple topologies, i.e., tree, complete graph, or star, do not allow for an adequate level of reliability and scalability to ensure the efficient operation of a large data center. In the case of the tree topology, the failure of even a single switch cuts off some servers from communication due to the lack of backup paths. This problem does not occur in the complete graph topology, where a path passing through any other computing node can be used. The number of connections equals $n(n-1)/2$, where n denotes the number of network nodes. However, ensuring a direct connection between each pair of nodes in the data center network is costly. The lack of switches forces each server to provide a number of ports large enough to handle all of its connections. Using a single switch in the star topology creates a single point of failure in the network. Moreover, it is also difficult to scale such a network due to the limited number of switch ports.
The fat-tree topology [15] is an extension of the tree topology, providing alternative paths. Multiple switches are used instead of one. The path length between each pair of servers is the same as in the original tree topology. Due to the use of multiple paths between servers, higher transfer rates can be achieved. Moreover, the architecture is robust and less costly, as it enables the use of multiple cheaper switches with lower-bandwidth ports. For example, the maximum number of servers connected using switches equipped with k ports equals $k^3/4$. The fat-tree topology can be adjusted to the number of server rack groups in real data centers. Communication within the same group does not employ the switches from the top level. The typical approach to increasing (e.g., doubling) server bandwidth and transmission reliability is to multiply its links, as in the multirail fat-tree or multiplane fat-tree; see, e.g., [16,17]. In the flattened butterfly topology [18] depicted in Figure 1, a greater number of network layers is used to connect the same number of server racks compared to the fat-tree. Equipping each layer with switches guarantees efficient communication between remote servers. The topology can be customized with regard to the number of layers and switch-to-device connections. The butterfly topology is an efficient, scalable, and cheap architecture that can be used in large-scale data centers.

2.2. Equal Cost Multipath Routing

Routing in data center networks (DCNs) can be implemented in layer 2 or 3 of the open systems interconnection (OSI) model; the choice depends significantly on the network devices. Static routing uses paths entered into the routing table by the system administrator. In contrast, dynamic routing populates the routing table during network operation, which allows path changes to eliminate failures or balance link loads. In data center networks, efficiency and reliability are crucial. Typically, these networks contain many redundant links to take over the tasks of infrastructure that has failed. The routing used in this case should allow the automatic minimization of the effects of failures and overloads in the network by selecting alternative paths to handle flows at risk.
Equal-cost multipath (ECMP) routing is used extensively in DCNs as it solves two problems: high availability and bandwidth scaling. It makes it possible to increase the flow capacity. Furthermore, the distribution of traffic between multiple paths reduces the sensitivity of the quality of service to potential failures of individual switches. However, it should be noted that the differentiation of link bandwidth and length of paths may lead to delivering packets out of sequence.
The typical protocol stack in the DCN uses the Link Aggregation Control Protocol (LACP) in layer 2 and ECMP in layer 3 of the OSI reference model. Examples of commonly used routing protocols utilizing the concept of ECMP routing include Open Shortest Path First (OSPF), the Border Gateway Protocol (BGP), and the relatively new Routing in Fat Trees (RIFT) [17].
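For illustration only, the following sketch shows per-flow hashing, a common way of implementing ECMP next-hop selection; the five-tuple fields and the hash function used here are illustrative assumptions, not a specification of OSPF, BGP, or RIFT behavior:
```python
import hashlib

def ecmp_next_hop(next_hops, src_ip, dst_ip, src_port, dst_port, proto):
    """Pick one of several equal-cost next hops by hashing the flow 5-tuple.

    Hashing keeps all packets of one flow on one path, which limits the
    out-of-order delivery problem mentioned above, while different flows
    spread over all available equal-cost paths.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return next_hops[digest % len(next_hops)]

# Example: three equal-cost paths toward the same destination prefix.
paths = ["switch-a", "switch-b", "switch-c"]
print(ecmp_next_hop(paths, "10.0.0.1", "10.0.1.7", 40123, 443, "tcp"))
```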

2.3. Valiant Routing Algorithm

For large DCNs with high-radix, low-diameter topologies, which are prone to congestion, concentrating traffic exclusively on selected nodes is, in general, undesirable. The network becomes less resilient to load spikes: additional traffic cannot be handled by the remaining devices because they are not on any of the shortest paths between the communication-intensive servers. To alleviate this, Valiant routing can be recommended. The aim of the Valiant routing algorithm, developed by Valiant and Brebner [19], is to divert traffic to a randomly selected intermediate switch (a pivot) to avoid pathological congestion. It is the basis of some adaptive routing algorithms.
In general, the algorithm for calculating a path between a pair of servers consists of three main stages:
  • Pivot node random selection.
  • Path calculation from the source server to the pivot.
  • Path calculation from the pivot to the destination server.
At the cost of extending the path used to serve the transfer, the load present in the network can be balanced. Various modifications of the original version of Valiant routing can be found in the literature. Benito et al. [20] introduce restricted Valiant routing, which randomizes traffic within local partitions, generating shorter paths. Furthermore, in the case of ECMP routing, selecting a pivot located on any of the shortest paths between the chosen pair of servers does not change the path, as the shortest path between the servers is also the sum of the shortest paths leading from the source to the pivot and from the pivot to the destination node.
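A minimal sketch of the three stages above, assuming a networkx graph and a uniform random pivot choice (the graph and node names are illustrative, not part of the original algorithm):
```python
import random
import networkx as nx

def valiant_path(graph, src, dst, pivots):
    """Valiant routing: route via a randomly chosen intermediate pivot.

    Stage 1: pick a pivot at random; stages 2 and 3: shortest paths
    from the source to the pivot and from the pivot to the destination.
    """
    pivot = random.choice([q for q in pivots if q not in (src, dst)])
    first_leg = nx.shortest_path(graph, src, pivot)
    second_leg = nx.shortest_path(graph, pivot, dst)
    return first_leg + second_leg[1:]  # drop the duplicated pivot node

# Example on a tiny graph: server s, server d, switches q1..q3 as pivots.
g = nx.Graph([("s", "q1"), ("s", "q2"), ("q1", "q3"), ("q2", "q3"),
              ("q1", "d"), ("q3", "d")])
print(valiant_path(g, "s", "d", ["q1", "q2", "q3"]))
```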

3. Related Work in Energy-Efficient Routing

Recent hardware used to build data center network infrastructure is more energy-efficient than that of several years ago. The savings are achieved thanks to advancements in semiconductor technology and low-level control techniques such as smart standby and dynamic power scaling. However, control mechanisms implemented at the network device level can be augmented by network-wide control strategies for further energy benefits. By managing the network as a whole, it is possible to switch off some of the redundant or underutilized links or operate them at a lower rate. Many of these mechanisms are centralized (see, e.g., [5,21,22,23,24]) and try to define and solve energy-efficient network design problems. Typically, mixed-integer linear solvers are used [23,24,25] and, as the fully formulated problem is NP-complete, heuristics are proposed to speed up calculations [10,11,26,27]. The reason for the NP-completeness of the task is the interdependence of the paths that should be aggregated to move traffic off some links. Moreover, the energy consumption functions are nonconvex due to the presence of integer variables. While a centralized control scheme seems simple and attractive, it lacks the reliability and scalability that distributed mechanisms, primarily extensions of routing protocols [28,29,30,31], may easily provide. Most proposed algorithms use relatively simple heuristics, which are especially suitable for data center networks due to their regularity and hierarchical topology [32]. The critical feature is multipath routing, which allows the relaxation of some constraints of classic network tasks. Multiple paths provide reliability and boost bandwidth and are widely accepted in cluster interconnect networks, e.g., in the ECMP routing algorithm.
While small and simple switched networks may rely on the spanning-tree protocol [33], larger and highly redundant ones employ IP routing, at least at the higher levels of the hierarchy, to create data tunnels and facilitate multipathing and load balancing. Proposed topologies include the fat-tree and flattened butterfly, presented in Section 2.1, as well as B-Cube [34] and others of varying complexity. All of them tend to provide full bisection bandwidth and high fault tolerance via several layers of usually symmetrically wired nodes, which, to reduce investment, may be cheaper commodity devices [35]. Such an architecture seems reasonable when the network load is high; running it in quiet periods results in unjustifiable power consumption. A detailed study of possible energy savings and a proposition of an elastic-tree system for bandwidth allocation and node activity control may be found in [36].
Most proposed energy-saving algorithms operate in a reactive manner, applying relatively simple heuristics that select network devices or their parts (line cards or single ports) to be switched off. An example of such a scheme is [37], where the blocking island paradigm is used to limit the routing protocol search depth, allowing some fat-tree network links to be switched off. In [6], the authors propose the fill-preferred-link-first algorithm, which tries to consolidate network traffic on links with utilization below a certain level. Importantly, the algorithm is tightly integrated with the network control subsystem via the OpenFlow protocol, allowing automatic discovery of the network topology and traffic sensing. The authors of [8] proposed a complete network model and used traffic matrix prediction to solve the resulting multicommodity flow problem using reinforcement learning. The algorithm described in [4] is also based on a mathematical network model; however, due to a lack of flow prediction, it operates reactively. Exploiting the topological features of B-Cube, DCell, and other highly overprovisioned topologies, its authors propose joining active and unused links in clusters, allowing substantial energy savings thanks to deactivating network regions. The energy-aware fault-tolerant (EAFT) data center scheduler [38] incorporates low-energy path finding into job scheduling, taking fault recovery and ECMP into account.

4. Mathematical Models for Energy-Aware Routing Calculation

The DCN topologies presented in Section 2.1 exhibit significant redundancy, leading to excessive power consumption in periods of reduced traffic. In such a case, the flows may be routed over a limited set of paths engaging only some of the links and switches; see Figure 2a. One may switch unused equipment into a low-power standby mode to save energy. However, further savings may be attained by harmonizing energy-aware functions globally, especially by incorporating them into routing. Typically, in DCNs there are many equal-cost paths. Standard routing algorithms use path length metrics and tend to spread the load evenly, as shown in Figure 2a. Energy-aware routing should behave differently: by incorporating power consumption into the metric, it consolidates paths on a minimal subset of nodes and links, as shown in Figure 2b.

4.1. Optimization Problems Formulation

The problem of calculating energy-aware routing and energy-aware states of network nodes may be defined in the form of an optimization task. Let us consider a data center network composed of nodes $n \in N$, i.e., source and destination servers and switches that play the role of relay nodes in the transmission between servers. We assume that switches can operate in several modes, which differ in power usage. To simplify the considerations, we limit the number of states to active (a high-power mode) and inactive, or sleeping (a low-power mode). We also use the terms enable and disable to describe these states.
All pairs of network nodes are connected by links $l \in L$, working in full-duplex mode. We model the energy consumption of each network node using the same linear model. M is the set of flows transmitted in the network. Each flow m denotes a standard, constant-in-time packet stream transmitted between a given pair of servers and is described by its size $w_m$ (in Mb/s); $c_l$ denotes the capacity of the l-th link (in Mb/s). We do not consider the time and energy necessary for state transitions. First, we assume that the transitions are relatively rare, as they depend on the routing algorithm. Second, as far as possible, we tend not to switch nodes off completely but to use a low-power standby state. Hence, we do not consider momentary microsleep as discussed in [39,40], since the intervention rate is in the range of minutes, and we also avoid switching the nodes off completely to provide a quick switch-on. In such a case, the transition energy may be neglected, resulting in a simpler mathematical model.
The aim is to minimize the total power utilized by network nodes for transmitting all flows from the set M while ensuring end-to-end quality of service (QoS). We tackle energy saving in a network by placing currently unused DCN switches into low-energy states or deactivating them. In our approach, we propose to use the Valiant routing algorithm to calculate the transmission paths and balance the traffic load. Let $Q \subseteq N$ denote the set of pivot nodes, i.e., randomly selected switches. In Valiant routing, the path transmitting each flow $m \in M$ must contain at least one pivot node.
We consider various formulations of the DCN-wide energy-saving problem. They differ in how the energy used by a network node is modeled.
SEUlm (same-energy-usage problem, linear model): a formulation based on the assumption that all DCN nodes have the same energy usage, i.e., energy usage independent of the transmitted traffic (linear model of energy consumption).
DEUlm (different-energy-usage problem, linear model): a formulation based on the assumption that DCN nodes differ in energy usage, i.e., energy usage depends on the transmitted traffic (linear model of energy consumption).
DEUnm (different-energy-usage problem, nonlinear model): a formulation based on the assumption that DCN nodes differ in energy usage (nonlinear model of energy consumption).

4.1.1. SEUlm Optimization Problem

Given the above notation, we can formulate the energy-saving optimization problem with the simple model of energy usage by the network components.
$$\min_{x_n, y_{lm}} f_{SEUlm}(x_n, y_{lm}) = \sum_{m \in M} \sum_{l \in L} y_{lm} w_m e + \sum_{n \in N} x_n r, \tag{1}$$
subject to the constraints:
$$\forall_{n \in N,\, m \in M} \quad \sum_{l \in L} a_{ln} y_{lm} + s_{mn} = \sum_{l \in L} b_{ln} y_{lm} + d_{mn}, \tag{2}$$
$$\forall_{n \in N,\, m \in M} \quad \sum_{l \in L} a_{ln} y_{lm} \le x_n k, \tag{3}$$
$$\forall_{n \in N,\, m \in M} \quad \sum_{l \in L} b_{ln} y_{lm} \le x_n k, \tag{4}$$
$$\forall_{l \in L} \quad \sum_{m \in M} w_m y_{lm} \le c_l, \tag{5}$$
$$\forall_{m \in M} \quad \sum_{l \in L} \sum_{q \in Q} a_{lq} y_{lm} \ge 1, \tag{6}$$
where the variables and constants used in the above formulas denote: $x_n = 1$ if the node n is in the active mode (0 otherwise), $y_{lm} = 1$ if the link l is used for the flow m transmission (0 otherwise), $s_{mn} = 1$ if the server n is the source of the flow m (0 otherwise), $d_{mn} = 1$ if the server n is the destination of the flow m (0 otherwise), $a_{ln} = 1$ if the link l is incoming to the node n (0 otherwise), $a_{lq} = 1$ if the link l is incoming to the pivot q (0 otherwise), $b_{ln} = 1$ if the link l is outgoing from the node n (0 otherwise), r denotes the node energy used in the active mode, e is the energy consumed by each DCN link for data transfer, k is the number of ports of the n-th node, $w_m$ the size of the m-th flow, and $c_l$ the capacity of the l-th link.
In the problem defined above, the conditions (2) assure that the sum of the flows incoming to a given node is equal to the sum of the flows outgoing from this node, and the constraints (3) and (4) determine the number of nodes used for data transmission (a disabled node cannot receive or transmit data). The constraint (5) assures that the flow will not exceed the capacity of a given link. The constraint (6) forces each calculated transmission path to contain at least one pivot node.
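For concreteness, the following sketch shows how the SEUlm task (1)-(6) could be assembled with the open-source PuLP modeler standing in for CPLEX; the data-structure layout (incidence constants a, b, s, d as nested dicts keyed by link and node, or flow and node) is our assumption, not part of the paper:
```python
import pulp

def build_seulm(nodes, links, flows, w, c, a, b, s, d, pivots, e, r, k):
    """Sketch of problem (1)-(6): a[l][n], b[l][n], s[m][n], d[m][n] are the
    0/1 incidence constants from the text, w[m] flow sizes, c[l] capacities,
    e and r the per-link and per-node energy constants, k the port count."""
    prob = pulp.LpProblem("SEUlm", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", nodes, cat="Binary")
    y = pulp.LpVariable.dicts(
        "y", [(l, m) for l in links for m in flows], cat="Binary")

    # (1) transmission energy on used links plus active-node energy
    prob += (pulp.lpSum(y[l, m] * w[m] * e for l in links for m in flows)
             + pulp.lpSum(x[n] * r for n in nodes))
    for n in nodes:
        for m in flows:
            # (2) flow conservation at every node
            prob += (pulp.lpSum(a[l][n] * y[l, m] for l in links) + s[m][n]
                     == pulp.lpSum(b[l][n] * y[l, m] for l in links) + d[m][n])
            # (3), (4) a sleeping node neither receives nor sends
            prob += pulp.lpSum(a[l][n] * y[l, m] for l in links) <= x[n] * k
            prob += pulp.lpSum(b[l][n] * y[l, m] for l in links) <= x[n] * k
    for l in links:
        # (5) link capacity
        prob += pulp.lpSum(w[m] * y[l, m] for m in flows) <= c[l]
    for m in flows:
        # (6) every path must traverse at least one pivot
        prob += pulp.lpSum(a[l][q] * y[l, m]
                           for l in links for q in pivots) >= 1
    return prob  # call prob.solve() with any MIP backend, e.g., CBC
```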

4.1.2. DEUlm Optimization Problem

In the case of multilayer topologies, nodes can differ in energy usage due to the different link capacities and node architecture and configuration. Then, the modified energy-saving optimization problem is as follows:
$$\min_{x_n, y_{lm}} f_{DEUlm}(x_n, y_{lm}) = \sum_{m \in M} \sum_{l \in L} \sum_{n \in N} y_{lm} w_m e_l (a_{ln} + b_{ln}) + \sum_{n \in N} x_n r_n, \tag{7}$$
subject to the constraints (2)–(6). The parameter $r_n$ in (7) denotes the energy use of the n-th node in the active mode, and $e_l$ is the energy cost related to data transmission over the link l.

4.1.3. DEUnm Optimization Problem

In the performance measures (1) and (7), it is assumed that the energy cost of traffic handling by each node is directly proportional to its capacity. Unfortunately, in practice, this is often a nonlinear function. Let us define variables $g_{nl}$ and $h_{nl}$:
$$g_{nl} = \sum_{m \in M} w_m y_{lm} b_{ln} (1 - d_{mn}), \tag{8}$$
$$h_{nl} = \sum_{m \in M} w_m y_{lm} a_{ln} (1 - s_{mn}), \tag{9}$$
where $g_{nl}$ and $h_{nl}$ denote the total size of the flows carried by the link l that are, respectively, sent and received by the node n.
Let the function $E(n, z)$ determine the energy cost of receiving or sending flows of total size z by the n-th node. The energy-saving optimization problem for nonlinear energy consumption models can be formulated as follows:
$$\min_{x_n, y_{lm}} f_{DEUnm}(x_n, y_{lm}) = \sum_{l \in L} \sum_{n \in N} \big( E(n, g_{nl}) + E(n, h_{nl}) \big) + \sum_{n \in N} x_n r_n, \tag{10}$$
and solved subject to the constraints (2)–(6). The CPLEX mixed-integer problem (MIP) solver can be used to solve problems (1), (7), and (10) with quadratic $E(n, z)$. In the case of the performance measure (10) with a nonlinear (nonquadratic) $E(n, z)$, an appropriate nonlinear solver must be used.

4.2. Optimization Problems Complexity Estimation

To assess the computational complexity of the optimization problems defined in the above section, we must calculate the number of variables and constraints. For the standard fat-tree topology built of k-port switches, the number of network nodes N and the maximal number of servers S are equal to $N = k^2 + (k/2)^2$ and $S = k^3/4$, respectively. The number of links L connecting switches can be calculated as $L = 2k(k/2)^2$.
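A quick script reproducing these counts; the binary-variable estimate $N + L \cdot M$ follows our reading of the model in Section 4.1 and deliberately ignores server-attachment links, so treat it as an approximation:
```python
def fat_tree_dimensions(k):
    """Fat-tree sizes from Section 4.2 for k-port switches,
    assuming one flow per server (M = S) as in Table 1."""
    n_switches = k**2 + (k // 2)**2   # N
    n_servers = k**3 // 4             # S
    n_links = 2 * k * (k // 2)**2     # L (switch-to-switch links)
    n_flows = n_servers               # M = S
    # Binary variables: x_n per node plus y_lm per (link, flow) pair.
    n_vars = n_switches + n_links * n_flows
    return n_switches, n_servers, n_links, n_vars

for k in (4, 8, 16, 48):
    print(k, fat_tree_dimensions(k))
```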
Table 1 shows the task dimensionality for a fat-tree network constructed of switches with port numbers varying in the range of 4 to 48 and a rather simplistic scenario of one flow per server ($M = S$), where M denotes the total number of flows. The first row of the table corresponds to the fat-tree network with the number of servers reduced to 1 per lowest-level switch.
It can be seen that the number of variables and constraints grows extremely quickly and explodes for networks with switches larger than eight ports. Therefore, the exact solution with the MIP solver may be found only for the small dimension networks. To calculate energy-efficient routing for larger networks, we need fast heuristics; however, the MIP task solution may be used as a benchmark.

5. Heuristic Algorithm for Energy-Aware Routing

The solutions of the optimization problems formulated in the above section are switches’ energy states and the optimal routing, assuming fixed flows. The energy consumption in a data center network is minimized by placing into low power mode (sleeping mode) as many relay nodes (switches) as possible, assuming transmission of all flows from the set M. Unfortunately, calculating routing by solving the above optimization tasks (1), (7), and (10) has some limitations.
  • To solve the optimization problems, the central server has to continuously collect data about the state of all network nodes and links.
  • The traffic forecasts have to be known. These forecasts can be based on declarations from DC users or calculated by the central server.
  • Disabling many DCN relay nodes can negatively affect service quality. It can lead to significant packet losses in the event of unexpected traffic growth. Moreover, enabling a missing relay node takes additional time for waking it up and starting the services that handle network traffic. Taking this delay into account significantly complicates the optimization problem.
  • Solving the optimization task for a large computing center is time-consuming, especially when nonlinear energy consumption models for data transmission are considered (DEUnm problem). The update of the routing tables is not immediate. In the period between updates, the network load may change significantly.
Considering the limitations above and the estimated dimensionality of the optimization problem (Table 1), the presented mathematical models can be used for network management only in simple scenarios. Moreover, they assume a constant load and neglect various details of network operation to limit complexity. Therefore, we developed and investigated several heuristics for network node activity control and energy-aware multipath routing calculation. We aimed to design efficient and fast algorithms and computing schemes for energy-aware DCN management with effectiveness close to that obtained by solving the optimization tasks defined in Section 4.1.
Consider a DCN composed of network nodes $n \in N$ that can operate in two modes, i.e., active or inactive, and links $l \in L$. We assume ECMP routing and use the Valiant algorithm to calculate the transmission paths. Therefore, the algorithm for determining the path from the source server s to the destination server d is implemented in two steps:
  • Select the pivot q through which the flow from the node s to the node d will be sent.
  • Designate subsequent nodes of the path so that the path contains the node q.
The path calculation process depends on the current node availability. If possible, a path is created based on the currently active switches. Otherwise, it is necessary to enable additional nodes. Unfortunately, waking up a node takes some time, which should be considered in the path generation process, especially if we assume that unused switches are shut down. Three main problems must be solved: (i) next hop selection, (ii) pivot selection, and (iii) selection of nodes for activation and deactivation.

5.1. Algorithm of Next Hop Selection

Assume that each node $n \in N$ propagates information about the current loads of its communication ports to all neighbors, which form the set $N_n \subseteq N$, i.e., the switches directly connected with n. We define the accepted maximum load $\lambda_{N_n}$ of the ports of nodes from the set $N_n$. This safe load guarantees the required QoS, i.e., it assures enough link capacity for a sudden surge in traffic. Let the set $A_n \subseteq N_n$ consist of the currently active neighbors of the n-th node. Since the number of neighboring nodes in DCNs is usually not very large, the greedy Algorithm 1 can be used to determine the next relay node in a path (next hop) for each flow transmission.
Algorithm 1: Next hop calculation for the m-th flow transmission; $n \in N$ is the current traffic source, p is the port linking n with the next node in the path.
1: Initial data: $w_m$ the size of the flow m, $e_{im}$ the energy cost of transmitting m via relay node i, $c_{ip}$ the capacity of the p-th port of i, $l_{ip}$ the current load of the port p, $r_i$ the energy cost of waking up the relay node i.
2: Create a set $N_n$ of neighbors of n.
3: Create a set $A_n \subseteq N_n$ of active neighbors of n.
4: Create a set $T_n \subseteq A_n$ of active neighbors $i \in T_n$ whose ports p linking them with n meet the condition $l_{ip} < \lambda_{N_n} - w_m$.
5: if $T_n \neq \emptyset$ then
6:   Select the node i with the lowest $e_{im}$ from the set $T_n$
7: else
8:   if $N_n \setminus A_n \neq \emptyset$ then
9:     Select and wake up the disabled neighbor i with the lowest $e_{im}$
10:  else
11:    Select the node i with the lowest $l_{ip}$ from the set $A_n \setminus T_n$
12:  end if
13: end if
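For readability, a Python transcription of Algorithm 1 follows; the `net` helper object and its method names are assumptions introduced only for this sketch:
```python
def next_hop(n, m, net):
    """Algorithm 1: choose the next relay node for flow m at node n.

    `net` is an assumed helper exposing: neighbors(n), active(i),
    port_load(i) (load l_ip of the port linking i with n),
    energy_cost(i, m) (e_im), safe_load (lambda_Nn),
    flow_size(m) (w_m), and wake_up(i).
    """
    neighbors = net.neighbors(n)                       # step 2: N_n
    active = [i for i in neighbors if net.active(i)]   # step 3: A_n
    # Step 4: T_n, active neighbors whose port can still absorb the flow.
    fitting = [i for i in active
               if net.port_load(i) < net.safe_load - net.flow_size(m)]
    if fitting:                                        # step 5
        return min(fitting, key=lambda i: net.energy_cost(i, m))
    sleeping = [i for i in neighbors if not net.active(i)]
    if sleeping:                                       # step 8: N_n \ A_n
        choice = min(sleeping, key=lambda i: net.energy_cost(i, m))
        net.wake_up(choice)
        return choice
    # Step 11: fall back to the least-loaded active neighbor
    # (T_n is empty here, so A_n \ T_n = A_n).
    return min(active, key=net.port_load)
```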
When several switches (the n-th node neighbors) with ports of the same load are available, the next node can be selected in several ways:
  • Random selection of the next hop;
  • Selection of the switch with the smallest load on the required port;
  • Selecting the switch with the smallest energy use;
  • Switch selection leading to load balancing.
In practice, suspending data transmission until a new node is activated may result in unacceptable delays. In a situation where the switch activation time cannot be ignored, one of the active switches is selected as the next node, despite exceeding the maximum port load limit. The node with the lowest load on the considered port is selected.
With a heavy network load and a limited number of available active switches, more nodes may need to be woken up to transmit data with the assumed QoS. This may result in activating significantly more nodes than required for handling the current traffic. Newly activated nodes can handle traffic in nonchronological order, especially when different node configurations are used. This can improve the network adjustment time in traffic peaks at the cost of energy usage efficiency. It should be noted that if more than one additional node needs to be woken up, the routing protocol activates the next node only when the previously activated node is ready for operation and its load shows that it cannot handle the current traffic alone, which may result in the loss of transmitted packets. Different values of the minimum expected capacity $\alpha_i$ for the various possible next hops $i \in N_n$, together with simultaneous activation of multiple nodes, may speed up the network response during rapid changes in demand.
Note that with multipath routing, enabling any additional node can affect routing throughout the network. The number of possible transmission paths usually increases.

5.2. Pivot Selection

Valiant routing can lead to long, multihop transmission paths. Pivots are randomly selected; in the original algorithm, their selection does not depend on the distance to the source node. To minimize network delays and reduce additional traffic, pivots can be selected from subsets $N_s$ assigned to traffic sources $s \in S$, where $S \subseteq N$ denotes the set of flow-initiating servers. In our research, the algorithm for selecting pivots for flows generated by the s-th server ($s \in S$) is as follows:
  • Set the accepted maximum distance $h_s$ (in hops) between the s-th source and the pivots;
  • Select all nodes with distance to s less than or equal to $h_s$, and create a set $N_s \subseteq N$ of potential candidates for pivot nodes.
Such a modification of the Valiant algorithm prevents transmitting data via distant relay nodes and concentrates the traffic as close as possible to the sources of the flows. A depth-first search algorithm described in [41] can be used to determine the subsets $N_s$ for all $s \in S$. In our research, the search depth was equal to $h_s$.
Next, pivots for each transmitted flow $m \in M$ are selected. Algorithm 1 described in Section 5.1 was adapted to determine pivot nodes. Pivots are usually not directly connected to flow source servers. Due to the multiple equal-cost paths between the examined node $q \in N_s$, $s \in S$, and the flow source node, it is impossible to precisely predict which port will handle a given flow and check its current load. Therefore, instead of the current load $l_{ip}$ of a given port p of the examined node i, the average load of all ports of i is used in Algorithm 1. Hence, the average loads of all currently enabled nodes from the set $N_s$ are calculated and compared. The node with the lowest average port utilization is included in the set of pivots Q.
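A sketch of this pivot selection procedure, under the same assumed `net` helper as the previous listing; breadth-first search stands in here for the depth-first search of [41], since only the set of nodes within $h_s$ hops matters:
```python
from collections import deque

def candidate_pivots(net, s, h_s):
    """All switches within h_s hops of source s (the set N_s)."""
    seen, frontier = {s}, deque([(s, 0)])
    candidates = set()
    while frontier:
        node, dist = frontier.popleft()
        if dist >= h_s:
            continue  # do not expand beyond the accepted distance
        for nb in net.neighbors(node):
            if nb not in seen:
                seen.add(nb)
                candidates.add(nb)
                frontier.append((nb, dist + 1))
    return candidates

def select_pivot(net, s, h_s):
    """Pick the enabled candidate with the lowest average port load,
    since the exact incoming port cannot be predicted in advance."""
    enabled = [q for q in candidate_pivots(net, s, h_s) if net.active(q)]
    return min(enabled, key=net.average_port_load)
```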

5.3. Adaptive Relay Nodes Activity Control

The number of active nodes should be adapted to the volume of traffic. The need to wake up additional nodes was discussed in Section 5.1. When the volume of traffic in the network decreases, some nodes should, as far as possible, be placed into a low-energy state, i.e., sleeping mode. However, too many nodes must not be deactivated at once. Therefore, we propose to assign a flag to a candidate planned to be placed into sleeping mode. A marked node has the lowest priority in traffic handling among all active neighbors $i \in A_n$ of node n. If it is possible to transmit data through unmarked nodes without exceeding the constraint on the maximal load $\lambda_{N_n}$, the marked node remains unused and can be disabled. Otherwise, marked nodes are used, as they have a higher priority than disabled ones. The flag of the selected node is then removed, and the node is incorporated into the transmission path as the next hop.
A marked node is placed into sleeping mode when the flag remains untouched for a selected period. After deactivation, the flag is removed. If the node is re-enabled, it resumes the standard priority in traffic handling. Thus, a node can be disabled without changing the states of other nodes in an unpredictable manner. Unfortunately, both enabling and disabling a node take some time and produce an unavoidable delay, which leaves the node in an inefficient state during the transition period.
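The marking mechanism can be summarized as a small state machine; the grace period value and the helper names below are assumptions made only for this sketch:
```python
import time

class SleepController:
    """Mark an idle switch, then put it to sleep if the mark survives
    untouched for `grace` seconds (the scheme of Section 5.3)."""

    def __init__(self, net, grace=60.0):
        self.net = net
        self.grace = grace
        self.marked = {}  # node -> time of marking

    def mark(self, node):
        # Flag a deactivation candidate; it keeps serving with lowest priority.
        self.marked.setdefault(node, time.monotonic())

    def use(self, node):
        # Marked node chosen as next hop: clear the flag, keep it active.
        self.marked.pop(node, None)

    def sweep(self):
        # Periodically called: deactivate nodes whose flag survived the grace period.
        now = time.monotonic()
        for node, since in list(self.marked.items()):
            if now - since >= self.grace:
                self.net.sleep(node)   # place into low-power mode
                del self.marked[node]  # flag removed after deactivation
```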

6. Simulation Experiments and Performance Evaluation

Two energy-efficient routing algorithms, one determined by solving the optimization task (1)–(6) and the other using the heuristic algorithms described in Section 5, were implemented, validated, evaluated, and compared. Due to the complexity of the optimization task, we limited our experiments to networks composed of devices with similar energy consumption and a linear energy model. To evaluate our heuristic algorithm for transmission path calculation, many simulation experiments were conducted for butterfly and fat-tree network topologies. The experiments involved the application of the proposed heuristics to a set of scenarios described in the following subsection. We performed various simulation experiments to cover a wide range of DCN topologies and traffic intensities.
All experiments were conducted using the NetBench [42] DCN packet simulator. We extended the original simulator with the implementation of the proposed heuristics. The simulator code is written in Java, while configuration and output data collection are served via proprietary formatted text files.

6.1. Simulation Setup and Testing Scenarios

All simulation experiments were conducted on synthetic networks with different topologies. We assumed that servers and switches could operate in two energy states: active and sleeping, which differed in power requirements. The energy cost of waking up each device was assumed to be approximately 0.07 µJ/ns, and energy used for the flow transmission by a single device was 0.00043 µJ/b [43]. For simplicity, we assumed zero energy cost in the sleeping state of the device. The parameters of the links were as follows: the throughput of each network link was equal to 10 GB/s and the delay was 20 ns. The buffer capacity of switch ports was equal to 150,000 bytes. We simulated 0.1 s network operation. Conducting experiments for a longer time horizon is easy; it requires only rescaling the values of the relevant parameters. Such an approach shortens the time needed to complete the simulation.
Two series of experiments for two network topologies were conducted:
  • Fat-tree topology. A network composed of four servers acting as the root of the tree structure and switches divided into four groups at the lowest level. Depending on the test scenario, one or two servers were connected to each of the lowest-level switches.
  • Butterfly topology. A network composed of four servers and four layers of switches. Each switch was connected to two switches from the adjacent layer. Depending on the test scenario, one or two servers were connected at the lowest topology level.
More servers would not add research value because of their equivalence. At the same time, two of them allow simulating the transmission of flows starting and ending in servers connected to the same switch. It should be noted that increasing the number of servers significantly increases the calculation time needed to solve the optimization problem (1)–(6), which serves as the reference for comparing the energy efficiency of the various network management methods.
All experiments aimed to check and compare the heuristic routing algorithm’s performance, efficiency, and scalability with the results of the SEUlm strategy.
In particular, we investigated the impact of node activation and deactivation on maximum network throughput and the influence of different node activation and deactivation policies on the energy cost of the network, its reliability, and quality of service. We also performed a series of experiments to show the impact of the selection method of pivot nodes on the cost and quality of network performance.

6.2. Quality and Performance Metrics

To evaluate the performance of the tested routing algorithms, we used the energy cost of data center network operation. However, in practice, this is not the only, or even the most important, measure. Network stability and reliability are key to assessing network quality. Therefore, we defined the following additional measures.
  • Packet loss rate.
    Activation of an insufficient number of devices may lead to packet losses that significantly influence the QoS. The total packet loss $F_{loss}$ can be calculated as the proportion of lost data to the total source flow size:
    $$F_{loss} = \frac{\sum_{m \in M} u_m p_m}{\sum_{m \in M} w_m}, \tag{11}$$
    where M denotes the set of flows to be transmitted, $w_m$ is the size of the m-th flow, $u_m$ is the number of lost packets of m, and $p_m$ is the average packet size of flow m.
  • Average port utilization.
    High utilization of ports and nodes may lead to difficulties in handling the traffic when unexpected growth occurs. It may be caused by using too-long routing paths or by setting a very high utilization threshold. The average port utilization is calculated as the average utilization over all ports of all activated devices in the network:
    $$F_{avg\_utilization} = \frac{\sum_{n \in N} \sum_{p \in P_n} \beta_p z_p}{\sum_{n \in N} \sum_{p \in P_n} \beta_p}, \tag{12}$$
    where $P_n$ denotes the set of ports of the n-th node, $z_p$ is the utilization of port $p \in P_n$ (in %), i.e., the share of its bandwidth used over the analyzed period, and $\beta_p = 1$ if the port p belongs to an active node (0 otherwise). We assume that the analyzed network transmits some traffic, i.e., at least two ports are active.
  • Standard deviation of port utilization.
    Unequal port and node utilization may result in the necessity of significant traffic rearrangement in the case of a single node failure. Additionally, highly utilized nodes are more likely to cause packet loss in the case of traffic changes. An appropriate metric of utilization uniformity is the standard deviation of the loads of all active ports:
    $$F_{std\_dev\_utilization} = \sqrt{\frac{\sum_{n \in N} \sum_{p \in P_n} \beta_p \left( z_p - F_{avg\_utilization} \right)^2}{\sum_{n \in N} \sum_{p \in P_n} \beta_p}}. \tag{13}$$
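For concreteness, the three metrics (11)-(13) can be computed directly from per-flow and per-port records; the dictionary field names here are assumptions, not part of the simulator:
```python
def packet_loss_rate(flows):
    """F_loss (11): lost bytes over total offered bytes; each flow record
    carries w (flow size), u (lost packets), and p (average packet size)."""
    lost = sum(f["u"] * f["p"] for f in flows)
    offered = sum(f["w"] for f in flows)
    return lost / offered

def port_utilization_stats(ports):
    """F_avg (12) and F_std (13) over ports of active nodes; each record
    carries z (utilization in %) and beta (1 if the node is active else 0)."""
    active = [p["z"] for p in ports if p["beta"]]
    avg = sum(active) / len(active)
    var = sum((z - avg) ** 2 for z in active) / len(active)
    return avg, var ** 0.5
```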

6.3. Estimation of Maximum Network Bandwidth

The first series of experiments was conducted to assess the maximum available bisection bandwidth of both network topologies. In these experiments, each pair of servers in the network transmitted the same amount of data, generated at a constant rate throughout the simulation. We assumed that nodes remained active until the end of the simulation, i.e., no activity control heuristics were used; the ECMP routing algorithm ran without any energy-saving heuristics. Figure 3 and Figure 4 show the packet loss percentage depending on flow size for the respective topologies. As can be seen, in the case of one server, the heuristic solution is consistent with the exact one obtained using the mathematical model and the CPLEX MIP solver.
Packet loss was not present for flows up to 16 MB, while for larger flows it was present, due to the insufficient link capacity. For the same reason, the mathematical programming task for the respective setup was infeasible, conventionally denoted by the 100% loss level. It can be seen that the advantage of heuristics is the possibility of determining an approximate solution also for cases not supported by the mathematical model. In the case of two servers, a solution using heuristics achieves slightly better QoS for the butterfly topology. It is an obvious result of the greater freedom of choosing paths at the lower level of the graph built with a significantly larger number of devices than in the case of the fat-tree. Since the losses for 8 MB flow and fat-tree networks are relatively small and completely absent in the case of one server, this value was selected for further experiments.

6.4. Node Activation Heuristic

Simple topologies with one server attached to each lowest-level switch were considered to investigate the node activation heuristics. Constant flows of 8 MB were generated. The solution obtained for the ECMP algorithm without energy management heuristics was used as a reference point; it defines the 100% energy consumption level in Figure 5. The tested heuristic algorithm activates additional switches when a given port load threshold is exceeded. It allowed for a significant reduction of device activation energy costs. However, the costs of handling flows remained unchanged in most test cases. The reason is that, assuming the same data transfer costs through all devices, the path determined by the ECMP algorithm itself has the lowest transmission cost. Thus, the only source of savings may be placing all unused nodes into sleeping mode. The reduced transmission cost, recorded for larger threshold values and the butterfly topology, is not an actual profit but a side effect of the data losses presented in Figure 6. Significantly, the energy usage was then also lower than for smaller values of the threshold and lower than the optimal solution found with the MIP solver. This testifies that the number of active switches was, in this case, insufficient.
The proposed heuristic allows a greater energy gain for the butterfly topology than for the fat-tree one. In particular, this is caused by the higher number of devices placed in the additional layer of the network, which results in higher energy consumption compared to the fat-tree topology. An analysis of the individual cost components for ECMP routing without the energy-saving heuristic, for both topologies, is presented in Table 2. Setting a high port load threshold can result in packet loss, especially at the higher levels of the hierarchy, as the switches located there aggregate flows gathered by lower-level nodes. Passing a path through such a switch can redirect flows to an excessively loaded part of the network, which results in a high loss rate. However, even conservative settings of the port load threshold, in the range of 40–50%, allow visible savings and provide headroom for burst traffic.

6.5. Node Deactivation

A network traffic generator, part of the simulator package, was used to analyze how the deactivation of switches affects energy consumption. The Poisson distribution was chosen for flow start time generation, and the Pareto distribution with an exponent of 1.05 was used to determine the flow rate. The same probability of flows between any of the server pairs was assumed. About 80 flows of an average size of 3 MB were generated during the simulation lasting 0.1 s. Two values (7000 ns and 70,000 ns) of the delay related to waking up or putting a device to sleep were used to test its impact on algorithm operation. In addition, a no-delay case was provided as an idealized reference scenario. All switches were active at the beginning of the simulation to allow the implemented heuristics to gradually disable unnecessary nodes. The obtained results are presented in Table 3.
Turning off redundant devices allowed for a significant energy cost reduction but also contributed to the retransmission of some packets. The number of retransmitted packets was fortunately small, thus avoiding losses and an excessive increase in the transmission part of energy consumption. In the case of the fat-tree and the longer device delay, the maximum increase amounted to about 1/1000 of the primary energy consumption, which may be considered negligible.

6.6. Valiant Routing and Pivot Selection Heuristic

The ECMP routing with a 40% device activation threshold was used as a reference point defining the 100% energy consumption level to test the efficiency of Valiant routing and the heuristic for pivot selection (Figure 7). The load threshold was chosen using data center load statistics and includes overhead to handle predictable traffic changes. The Valiant routing scenarios evaluated in the experiments presented in this section use two heuristics:
  • Node activation heuristic with 40% device activation threshold;
  • Pivot selection heuristic with average port utilization threshold varying from 0% (i.e., standard Valiant pivot selection) to 100%.
Valiant routing without pivot selection heuristics significantly increased the power consumption (Figure 7). This is due to the need to activate some previously deactivated nodes as pivots. Using the heuristics made it possible to reduce the need to activate new switches during pivot selection. In the case of the fat-tree topology, it allowed reducing the cost of device activation to a level close to that of ECMP routing. However, data transfer energy costs remained at a higher level, as Valiant routing uses longer paths than ECMP routing.
Beneficially, the heuristics did not significantly influence the standard deviations of active devices’ port load (see Figure 8). The port load standard deviation decreased for a butterfly topology, while for the fat-tree topology it increased. Such a result suggests that restricting the set of available pivots did not result in visible traffic load imbalance.
The improvement in the performance of the butterfly topology is attained at the cost of the increased average port load, which may eventually lead to data loss in case of a sudden burst of traffic. It must be noted, however, that the observed average port load (around 40% in Figure 8) is consistent with the used device activation threshold and cannot be considered excessive.
To some extent, this result may be mitigated by using a restricted pivot set. As can be observed in Figure 9 for the fat-tree topology, the cost of activating the switches did not change, but the transmission costs were significantly lower. This behavior is caused by the specifics of the fat-tree topology. In fact, most (or even all) switches must be active, reducing the possibility of disabling any of them in intensive load scenarios. However, some paths may be shorter when the selected pivot belongs to the lower layers, reducing energy consumption. For the butterfly topology, narrowing the search depth used to determine the pivots enabled reducing both the energy costs associated with data transmission and those of switching on devices. This is a natural consequence of its higher redundancy. The choice of pivots located closer to the servers contributes to shorter paths to such an extent that it is possible to use fewer nodes. Essentially, reducing the pivot search depth improves traffic distribution. As can be seen in Figure 10, both the average port load and its standard deviation are slightly lower for depths between 2 and 3 as a result of shortening some of the excessively long transmission paths.

7. Summary and Concluding Remarks

An energy-aware data center network may save energy by adapting its topology and resources to the actual traffic load and demands while ensuring end-to-end quality of service. We designed a mathematical model of the DCN and, using it, defined three versions of the optimal energy-saving routing problem. The resulting mathematical programming tasks may be solved using an appropriate solver, i.e., a mixed-integer linear solver for the SEUlm and DEUlm versions and a mixed-integer nonlinear solver for the most complex DEUnm version.
Taking into account the limitations of formal analysis, in this paper we proposed activity control heuristics that may wake up and deactivate network nodes to adapt the available bandwidth to the offered traffic. Two further heuristics were designed to control pivot selection in Valiant routing, based on the average port load of switches and on limiting the search depth. The pivot selection algorithm saves significant energy, on par with classic ECMP routing. The cost is an increased device load caused by traffic consolidation, which, in an unfortunate case, may lower QoS. However, this effect may be partially mitigated by applying the search depth limitation, which we recommend using in all scenarios. We verified, by simulation experiments, that the activation heuristics provide solutions consistent with the exact ones calculated using the mathematical model and the MIP solver. Moreover, the comparison of the two selected topologies, i.e., fat-tree and butterfly, confirms the rather obvious fact of the greater reliability and flexibility of the butterfly network resulting from its redundant nodes. Again, the price is higher energy consumption, which the use of energy-aware routing may mitigate.
Based on the simulation study, we conclude that our routing algorithm implementing the proposed heuristics can be successfully used for energy-efficient dynamic management in large-scale data centers and for adapting to dynamically changing network loads. In general, our heuristics aim to determine, at a given time, a set of active devices that reduces energy consumption while ensuring the expected QoS. Importantly, they can provide a solution in a shorter time than solving the mathematical programming task, allowing fast adaptation to a changing traffic matrix even for larger networks. Their computational complexity is low: they perform relatively simple arithmetic operations compared to the optimization problems defined in Section 4, which are solved by an expensive discrete optimization algorithm, e.g., the branch and bound method. They only monitor ports and, on this basis, decide whether to enable or disable a switch or choose a pivot. Their complexity is approximately linear; thus, they can be used for networks with hundreds of thousands of ports or tens of thousands of servers.
For further research, we envisage the construction of a randomized energy-aware pivot selection heuristic based on the relation between port use and its selection probability to combine energy-conserving path consolidation with the load-balancing mechanism.

Author Contributions

Conceptualization, P.A., T.J. and E.N.-S.; methodology, P.A., T.J. and E.N.-S.; software, T.J.; validation, T.J. and P.A.; formal analysis, T.J. and P.A.; investigation, T.J., P.A. and E.N.-S.; writing—original draft preparation, P.A., E.N.-S. and T.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dab, B.; Fajjari, I.; Belabed, D.; Aitsaadi, N. Architectures of Data Center Networks: Overview. In Management of Data Center Networks; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2021; Chapter 1; pp. 1–27.
  2. Campa, S.; Danelutto, M.; Goli, M.; González-Vélez, H.; Popescu, A.M.; Torquati, M. Parallel patterns for heterogeneous CPU/GPU architectures: Structured parallelism from cluster to cloud. Future Gener. Comput. Syst. 2014, 37, 354–366.
  3. Szczepanski, J.; Klamka, J.; Wegrzyn-Wolska, K.; Rojek, I.; Prokopowicz, P. Computational Intelligence and Optimization Techniques in Communications and Control. Bull. Pol. Acad. Sci. Tech. Sci. 2020, 68, 181–184.
  4. Chkirbene, Z.; Gouissem, A.; Hadjidj, R.; Foufou, S.; Hamila, R. Efficient techniques for energy saving in data center networks. Comput. Commun. 2018, 129, 111–124.
  5. Jaskóła, P.; Arabas, P.; Karbowski, A. Simultaneous routing and flow rate optimization in energy-aware computer networks. Int. J. Appl. Math. Comput. Sci. 2016, 26, 231–243.
  6. Nsaif, M.; Kovásznai, G.; Racz, A.; Malik, A.; de Fréin, R. An Adaptive Routing Framework for Efficient Power Consumption in Software-Defined Datacenter Networks. Electronics 2021, 10, 3027.
  7. Antal, M.; Pop, C.; Cioara, T.; Anghel, I.; Salomie, I.; Pop, F. A System of Systems approach for data centers optimization and integration into smart energy grids. Future Gener. Comput. Syst. 2020, 105, 948–963.
  8. Wang, Y.; Li, Y.; Wang, T.; Liu, G. Towards an energy-efficient Data Center Network based on deep reinforcement learning. Comput. Netw. 2022, 210, 108939.
  9. Aitsaadi, N. Management of Data Center Networks; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2021.
  10. Chiaraviglio, L.; Mellia, M.; Neri, F. Energy-aware backbone networks: A case study. In Proceedings of the 1st International Workshop on Green Communications, IEEE International Conference on Communications (ICC'09), Dresden, Germany, 14–18 June 2009; pp. 1–5.
  11. Fisher, W.; Suchara, M.; Rexford, J. Greening backbone networks: Reducing energy consumption by shutting off cables in bundled links. In Proceedings of the 1st ACM SIGCOMM Workshop on Green Networking (Green Networking'10), New Delhi, India, 30 August 2010; pp. 29–34.
  12. Karpowicz, M. Energy-efficient CPU frequency control for the Linux system. Concurr. Comput. Pract. Exp. 2016, 28, 420–437.
  13. Patan, M.; Uciński, D. Optimal activation strategy of discrete scanning sensors for fault detection in distributed-parameter systems. IFAC Proc. Vol. 2005, 38, 209–214.
  14. Kulesza, R.; Chudzikiewicz, J.; Zielinski, Z. Resource placement in the 4-dimensional cube-type processor networks with soft degradation. Control Cybern. 2017, 46, 87.
  15. Al-Fares, M.; Loukissas, A.; Vahdat, A. A Scalable, Commodity Data Center Network Architecture. In Proceedings of the SIGCOMM 2008 Conference on Data Communications; Association for Computing Machinery: New York, NY, USA, 2008; pp. 63–74.
  16. Wang, Y.; Dong, D.; Lei, F. MR-tree: A Parametric Family of Multi-Rail Fat-tree. In Proceedings of the 2021 IEEE International Performance, Computing, and Communications Conference (IPCCC), Austin, TX, USA, 28–30 October 2021; pp. 1–9.
  17. Wei, Y.; Zhang, Z.; Afanasiev, D.; Thubert, P.; Przygienda, T. RIFT Applicability (draft-ietf-rift-applicability-11). 2023. Available online: https://datatracker.ietf.org/doc/draft-ietf-rift-applicability/ (accessed on 30 March 2023).
  18. Kim, J.; Balfour, J.; Dally, W. Flattened Butterfly Topology for On-Chip Networks. In Proceedings of the 40th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO 2007), Chicago, IL, USA, 1–5 December 2007; pp. 172–182.
  19. Valiant, L.; Brebner, G. Universal schemes for parallel communication. In Proceedings of the Thirteenth Annual ACM Symposium on Theory of Computing (STOC'81), Milwaukee, WI, USA, 11–13 May 1981; pp. 263–277.
  20. Benito, M.; Fuentes, P.; Vallejo, E.; Beivide, R. Analysis and Improvement of Valiant Routing in Low-Diameter Networks. In Proceedings of the IEEE 4th International Workshop on High-Performance Interconnection Networks in the Exascale and Big-Data Era (HiPINEB), Vienna, Austria, 24 February 2018; pp. 1–8.
  21. Arabas, P. Energy Aware Data Centers and Networks: A Survey. J. Telecommun. Inf. Technol. 2018, 4, 26–36.
  22. Arabas, P. Modeling and simulation of hierarchical task allocation system for energy-aware HPC clouds. Simul. Model. Pract. Theory 2021, 107, 102221.
  23. Chiaraviglio, L.; Mellia, M.; Neri, F. Minimizing ISP network energy cost: Formulation and solutions. IEEE/ACM Trans. Netw. 2011, 20, 463–476.
  24. Chabarek, J.; Sommers, J.; Barford, P.; Estan, C.; Tsiang, D.; Wright, S. Power awareness in network design and routing. In Proceedings of the 27th Conference on Computer Communications (INFOCOM 2008), Phoenix, AZ, USA, 13–18 April 2008; pp. 457–465.
  25. Arabas, P.; Sikora, A.; Szynkiewicz, W. Energy-Aware Activity Control for Wireless Sensing Infrastructure Using Periodic Communication and Mixed-Integer Programming. Energies 2021, 14, 4828.
  26. Zhang, M.; Yi, C.; Liu, B.; Zhang, B. GreenTE: Power-aware traffic engineering. In Proceedings of the IEEE International Conference on Network Protocols (ICNP'2010), Kyoto, Japan, 5–8 October 2010.
  27. Shen, G.; Tucker, R.S. Energy-minimized design for IP over WDM networks. J. Opt. Commun. Netw. 2009, 1, 176–186.
  28. Cianfrani, A.; Eramo, V.; Listani, M.; Marazza, M.; Vittorini, E. An Energy Saving Routing Algorithm for a Green OSPF Protocol. In Proceedings of the IEEE INFOCOM Conference on Computer Communications, San Diego, CA, USA, 15–19 March 2010; pp. 1–5.
  29. Bianzino, A.P.; Chiaraviglio, L.; Mellia, M. GRiDA: A green distributed algorithm for backbone networks. In Proceedings of the Online Conference on Green Communications (GreenCom 2011), Piscataway, NJ, USA, 26–29 September 2011; pp. 113–119.
  30. Cuomo, F.; Abbagnale, A.; Cianfrani, A.; Polverini, M. Keeping the connectivity and saving the energy in the Internet. In Proceedings of the IEEE INFOCOM 2011 Workshop on Green Communications and Networking, Shanghai, China, 10–15 April 2011; pp. 319–324.
  31. Kamola, M.; Arabas, P. Shortest path green routing and the importance of traffic matrix knowledge. In Proceedings of the 2013 24th Tyrrhenian International Workshop on Digital Communications—Green ICT (TIWDC), Genoa, Italy, 23–25 September 2013; pp. 1–6.
  32. Cisco Systems, Inc. Cisco Data Center Infrastructure 2.5 Design Guide; Cisco Systems, Inc.: San Jose, CA, USA, 2011.
  33. Pallos, R.; Farkas, J.; Moldovan, I.; Lukovszki, C. Performance of rapid spanning tree protocol in access and metro networks. In Proceedings of the Second International Conference on Access Networks, Ottawa, ON, Canada, 22–24 August 2007; pp. 1–8.
  34. Alqahtani, J.; Hamdaoui, B. Rethinking Fat-Tree Topology Design for Cloud Data Centers. In Proceedings of the 2018 IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, United Arab Emirates, 9–13 December 2018; pp. 1–6.
  35. Benito, M.; Vallejo, E.; Beivide, R. On the Use of Commodity Ethernet Technology in Exascale HPC Systems. In Proceedings of the IEEE 22nd International Conference on High Performance Computing (HiPC), Bengaluru, India, 16–19 December 2015; pp. 254–263.
  36. Heller, B.; Seetharaman, S.; Mahadevan, P.; Yiakoumis, Y.; Sharma, P.; Banerjee, S.; McKeown, N. ElasticTree: Saving energy in data center networks. Proc. NSDI 2010, 10, 249–264.
  37. Wang, T.; Qin, B.; Su, Z.; Xia, Y.; Hamdi, M.; Foufou, S.; Ridha, H. Towards bandwidth guaranteed energy efficient data center networking. J. Cloud Comput. 2015, 4, 35.
  38. Shaukat, M.; Alasmary, W.; Alanazi, E.; Shuja, J.; Madani, S.A.; Hsu, C.H. Balanced Energy-Aware and Fault-Tolerant Data Center Scheduling. Sensors 2022, 22, 1482.
  39. Dai, G.; Wu, W.; Liu, K.; Shan, F.; Wang, J.; Xu, X.; Junzhou, L. Joint Sleep and Rate Scheduling with Booting Costs for Energy Harvesting Communication Systems. IEEE Trans. Mob. Comput. 2021, 22, 3391–3406.
  40. Chang, P.; Miao, G. Optimal Operation of Base Stations With Deep Sleep and Discontinuous Transmission. IEEE Trans. Veh. Technol. 2018, 67, 11113–11126.
  41. Putri, S.; Tulus, T.; Napitupulu, N. Implementation and Analysis of Depth-First Search (DFS). In Proceedings of the International Seminar on Operational Research (InteriOR), Phuket, Thailand, 21–23 December 2011; pp. 1–10.
  42. NetBench: Packet Simulator for Data Center Network Topologies, Routing, and Congestion Control. Available online: https://github.com/ndal-eth/netbench (accessed on 30 March 2023).
  43. Cornea, B.; Orgerie, A.C.; Lefèvre, L. Studying the energy consumption of data transfers in Clouds: The Ecofen approach. In Proceedings of the 2014 IEEE 3rd International Conference on Cloud Networking (CloudNet), Luxembourg, 8–10 October 2014; pp. 143–148.
Figure 1. The four-layer butterfly topology with two switch connections to the next layer.
Figure 2. A network (a fat-tree example) without energy-saving capabilities (a) uses all network devices even in periods of low utilization. Using energy-aware algorithms (b), it is possible to calculate routing that allows devices and links to be switched off. Colored links represent active transmissions; gray links and nodes are switched off or (better) in an energy-saving standby mode.
Figure 3. Packet loss for fat-tree and varying flow size.
Figure 4. Packet loss for butterfly and varying flow size.
Figure 5. Energy costs for varying port load threshold.
Figure 6. Packet loss for varying port load threshold.
Figure 7. Energy cost of Valiant routing for various port utilization thresholds, with ECMP as a reference.
Figure 8. Reliability indicators of Valiant routing for various port utilization thresholds, with ECMP as a reference.
Figure 9. Energy cost of Valiant routing with reduced pivot search depth.
Figure 10. Reliability indicators of Valiant routing with reduced pivot search depth.
Table 1. Dimensionality of the optimization problem (1)–(6). The column marked with 4 * refers to the number of servers reduced to one per switch.

Number of switch ports     4 *     4       8        16              32              48
N                          20      20      80       320             1280            2880
L                          32      32      256      2048            16,384          55,296
M = S                      8       16      128      1024            8192            27,648
Number of variables        276     532     32,848   2.0975 × 10^6   1.3422 × 10^8   1.5288 × 10^8
Number of constraints      520     1008    31,104   986,112         3.1482 × 10^7   2.3896 × 10^8
Table 2. Energy cost of ECMP without energy-saving heuristic.

Topology                     Fat-Tree       Butterfly
Node activation cost (µJ)    139,996,472    223,990,136
Data transfer cost (µJ)      17,612,800     21,135,360
Table 3. Results of node deactivation experiments.

Topology     Device Delay (ns)    Power on Cost (µJ)    Transmission Cost (µJ)
Fat-tree     0                    140,000,000           2,164,548
Fat-tree     7000                 99,921,125            2,166,556
Fat-tree     70,000               101,036,774           2,167,525
Butterfly    0                    224,000,000           2,794,195
Butterfly    7000                 155,758,495           2,794,737
Butterfly    70,000               155,618,216           1,794,708
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
