Article

A Price-and-Branch Algorithm for Network Slice Optimization in Packet-Switched Xhaul Access Networks

by
Mirosław Klinkowski
National Institute of Telecommunications, Szachowa 1, 04-894 Warsaw, Poland
Appl. Sci. 2024, 14(13), 5608; https://doi.org/10.3390/app14135608
Submission received: 3 June 2024 / Revised: 21 June 2024 / Accepted: 25 June 2024 / Published: 27 June 2024
(This article belongs to the Special Issue Communication Networks: From Technology, Methods to Applications)

Abstract:
Network slicing is a concept introduced in 5G networks that supports the provisioning of multiple types of mobile services with diversified quality of service (QoS) requirements in a shared network. Network slicing concerns the placement/allocation of radio processing resources and traffic flow transport over the Xhaul transport network—connecting the 5G radio access network (RAN) elements—for multiple services while ensuring the slices’ isolation and fulfilling specific service requirements. This work focuses on modeling and optimizing network slicing in packet-switched Xhaul networks, a cost-effective, flexible, and scalable transport solution in 5G RANs. The considered network scenario assumes two types of network slices related to enhanced mobile broadband (eMBB) and ultra-reliable low-latency communications (URLLC) services. We formulate a network slicing planning optimization problem and model it as a mixed-integer linear programming (MILP) problem. Moreover, we develop an efficient price-and-branch algorithm (PBA) based on column generation (CG). This advanced optimization technique allows for overcoming the MILP model’s poor performance when solving larger network problem instances. Using extensive numerical experiments, we show the advantages of the PBA regarding the quality of the solutions obtained and the computation times, and analyze the packet-switched Xhaul network’s performance in various network slicing scenarios.

1. Introduction

The fifth-generation mobile networks (5G) support the realization of various wireless communication services with diversified throughput and latency requirements, such as eMBB, URLLC, and massive machine-type communications (mMTC). The deployment of these services is facilitated by the 5G RAN architecture defined by the 3GPP organization in technical specifications [1,2] and developed in parallel within the Open RAN (O-RAN) Alliance activities [3]. In 5G RANs, radio baseband processing functions, traditionally realized in a Baseband Unit (BBU) in 4G networks, are split between radio (RU), distributed (DU), and central (CU) units. The specification [1] defines several functional split options. Whereas the RU replaces the remote radio head (RRH) in 4G and is associated with the antenna, the DU and CU are separate components that can be located in different areas of the network. The DU and CU functions can be virtualized and executed on general-purpose processors available at processing pool (PP) facilities, or in small Data Centers (DCs) [4]. The flexibility in placing the DU and CU entities supports realizing 5G services. Namely, the DUs responsible for performing time-critical radio frequency functions are located near the RUs, which limits the RU–DU latencies and, simultaneously, the bandwidth required for DU–CU links. In addition, multi-access (mobile) edge computing (MEC) has been proposed to reduce traffic load and support latency-sensitive services by extending computation, communication, and storage facilities into the RAN [5].
The radio data between the RU, DU, CU, and a central hub, where the traffic is aggregated and the 5G Core (5GC) functions typically reside, is transported over the so-called Xhaul network [6]. Standard IEEE 1914.1 [7] defines the next-generation fronthaul interface (NGFI) architecture implementing Xhaul based on a packet-switched transport network. The NGFI allows for the concurrent transport of fronthaul (FH), midhaul (MH), and backhaul (BH) data flows, respectively, between the RU–DU, DU–CU, and CU–5GC over a shared packet network. The enabling technology for NGFI is Ethernet, which has been adapted for fronthaul networks in standard IEEE 802.1CM [8]. The mapping and encapsulation of 5G radio data into Ethernet frames is defined by the enhanced Common Public Radio Interface (eCPRI) protocol [9] and the Radio over Ethernet (RoE) specification presented in standard IEEE 1914.3 [10]. Eventually, Time-Sensitive Networking (TSN) mechanisms that prioritize the latency-sensitive traffic in the Ethernet-based fronthaul are proposed in [8].
Fifth-generation transport networks are expected to provide optimized support for different types of services (such as eMBB, URLLC, mMTC, 4G), in line with their specific requirements in terms of, among others, throughput, latency, mobility, reliability, and availability [11,12]. To achieve this, the concept of network slicing has been identified by 3GPP as one of the key components of 5G networks [13]. Network slicing allows the network operator to deploy several instances of virtual connectivity services tailored to specific service needs over the same shared physical network infrastructure, where services are strictly separated between different slices [11,14]. Two concepts of implementing network slicing in the transport network are distinguished: hard and soft [15,16]. Hard slicing typically assumes the physical separation of transmission resources (such as time slots or wavelengths in optical networks) assigned to network slices, which are dedicated and not shared with other slices. Alternatively, in soft slicing, the resources can be shared between slices while maintaining the expected levels of service quality in particular slices. The packet-switched Xhaul transport network architectures specified in NGFI [7] and O-RAN [16] assume network slicing implementation. These networks generally use the soft slicing approach as they allow the sharing of transmission resources, such as switches and links.
Network slicing brings unique open problems and algorithmic challenges arising, among others, from the provisioning of processing and transmission resources for individual slices for which slice isolation and heterogeneous, slice-specific QoS guarantees should be ensured in shared network infrastructure [17]. QoS provisioning is challenging in packet-switched Xhaul networks because the nondeterministic packet transmission causes unpredictable buffering delays in network switches. This issue is especially relevant in the case of latency-sensitive services, such as URLLC, which may require a special treatment of their packet flows compared, for instance, to the flows related to eMBB services. At the same time, the deployment of network slices should be optimized regarding the usage of processing and transmission resources to minimize the network cost. For this purpose, dedicated optimization methods are necessary.
MILP is a frequently used method for modeling and solving optimization problems in communication networks [18]. These problems are generally NP-hard, meaning that no known algorithms can solve them deterministically in polynomial time. In practice, solving MILP models can be challenging and time-consuming, especially for problems involving a large set of integer variables. Enhancing the quality of MILP formulations and applying advanced MILP-based optimization techniques can significantly facilitate solving such problems. CG is one technique that reduces the complexity of solving MILP models [19]. In CG, this is achieved by initializing the MILP model with a small, basic set of problem variables and then iteratively, dynamically adding selected variables (columns) that improve the optimization objective value. This work applies this approach to the network slicing problem.
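To make the CG loop concrete, the sketch below runs a price-and-branch scheme on a deliberately tiny single-item cutting-stock instance, an illustrative assumption unrelated to the Xhaul model studied here. With a single coupling constraint, the restricted master LP and its dual can be solved in closed form, so no external LP solver is needed; all function names are hypothetical.

```python
# Price-and-branch on a toy single-item cutting-stock instance: cover a
# demand of pieces of width w using rolls of width W, minimizing the number
# of rolls. Columns are cutting patterns (pieces per roll).
import math

def solve_rmp(columns, demand):
    """Solve the restricted master LP. With a single coupling constraint
    (total pieces >= demand), the LP optimum uses the pattern with the best
    pieces-per-roll ratio, and the dual of the constraint is 1/a_best."""
    a_best = max(columns)
    x = demand / a_best      # fractional number of rolls in the LP optimum
    dual = 1.0 / a_best      # marginal cost of one demanded piece
    return x, a_best, dual

def pricing(W, w, dual):
    """Pricing problem: find the pattern with the most negative reduced
    cost 1 - dual * a. For a single item type this knapsack is trivial."""
    a = W // w
    return a, 1.0 - dual * a

def price_and_branch(W, w, demand):
    columns = [1]                        # start from a trivial pattern
    while True:                          # column generation loop
        x, a_best, dual = solve_rmp(columns, demand)
        a_new, reduced_cost = pricing(W, w, dual)
        if reduced_cost < -1e-9 and a_new not in columns:
            columns.append(a_new)        # add the improving column
        else:
            break                        # no improving column: LP optimal
    # "branch" phase: solve the integer master over the generated columns
    # (a single best pattern here, so rounding up is exact)
    return math.ceil(demand / max(columns))

rolls = price_and_branch(W=10, w=3, demand=7)  # 3 rolls (pattern: 3 pieces)
```

In a price-and-branch scheme such as the PBA developed later in this paper, the restricted master is an LP relaxation of the full model, the pricing problem generates improving columns from the dual values, and the final integer problem is solved over the generated columns only.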
This work focuses on optimizing network slices’ planning in packet-switched Xhaul networks assuming specific slice requirements. In the network-planning approach, processing resources are assigned to network slices, and connections are established over a packet Xhaul transport network, assuming given Service Level Agreements (SLAs) regarding traffic loads supported by the slices and traffic flow latency requirements. Such an approach reduces the impact of the uncertainty in traffic flows on the network performance while ensuring a stable network operation if the maximum traffic loads supported by network slices are considered during network planning [20]. If the network slices are planned for lower than the maximum volumes of supported traffic, additional network mechanisms should be applied to ensure a reliable network operation and the fulfillment of the slice’s SLAs. The development of such mechanisms is left for future work.
Our previous studies concerned the optimization of latency-aware packet-switched Xhaul networks, where the problem of DU and CU placement was addressed jointly with the routing of Xhaul packet flows for a single network slice case [20,21,22]. This study fills the gap in previous works by accounting for network slicing in the packet Xhaul network scenario. First, the optimization model formulated for the network-planning problem extends previous MILP models by introducing relevant network slicing-related constraints. Second, as the MILP model has limited scalability, a new optimization algorithm using the CG approach is developed. The novelty and main contributions of this study are summarized below:
  • A network slicing-planning problem is addressed in the packet-switched Xhaul transport network with soft slice isolation and buffering latencies considered.
  • The above problem is formulated as an optimization problem modeled using the MILP approach and assuming specific DU/CU placement and latency requirements of eMBB and URLLC network slices.
  • The effective optimization method PBA based on the CG optimization technique is developed to facilitate solving the MILP problem for larger network instances.
  • Extensive numerical experiments are performed, which show the advantages of the PBA and the performance of the packet-switched Xhaul network in different network slicing scenarios.
The remainder of this paper is organized as follows. Section 2 discusses related works. Section 3 presents the main assumptions of the considered network scenario. Section 4 formulates the network slicing-planning problem in the packet-switched Xhaul network as a MILP problem. Section 5 describes the PBA optimization method. Section 6 reports and discusses the numerical results. Finally, Section 7 concludes the paper.

2. Related Works

Optimized Xhaul network planning is an important research topic in centralized and virtualized RANs. A fundamental design problem in this scope concerns the placement of radio processing resources (BBU/DU/CU) and the provisioning of connectivity between the RAN components. Most studies have focused on a BBU placement problem in centralized RAN (C-RAN) network scenarios based on dedicated fronthaul links established over an optical network between RRHs and a pool of BBUs [23,24,25,26,27]. The problem translates into the service chaining Virtual Network Function (VNF) placement problem if the BBU functions are split and virtualized [28]. The placement of the DU/CU in C-RANs connected using an optical transport network was studied in [29,30,31]. Dynamic scenarios concerning the allocation of the DU/CU and transmission resources with the Xhaul connectivity over a passive optical network (PON) were addressed in [32,33]. The BBU/DU/CU placement/allocation optimization problems were modeled as MILP [23,24,25,29] and non-linear programming [33] problems, and solved using heuristic [27] and reinforcement learning-based algorithms [30,31]. The mentioned problem was extended to the functional split selection (FSS) problem concerning the optimal choice of the split of BBU functions in [4,34,35,36,37,38]. The FSS problem was formulated as a Virtual Network Embedding (VNE) problem in [38] and solved using both the MILP [4,35,36,37,38] and heuristic methods [38]. Apart from the resource-allocation problem, other issues were also addressed, including network survivability [26,27] and energy efficiency [26,36,37]. A literature survey on related works was presented in [39].
Whereas the above works assume the use of dedicated fronthaul/midhaul links, usually established over an optical transport network, fewer studies have concerned packet-switched Xhaul transport networks. The authors of [40,41] focused on the latency-aware routing of FH flows over a packet-switched network. Two heuristics for the FSS problem with routing FH and BH flows in a convergent packet network, assuming a simplified latency model without buffering delays, were proposed in [42]. The planning of the DU and CU placement with latency-aware routing of Xhaul flows, with buffering latencies considered, was addressed in [20,21,22,43], where both MILP models and heuristic algorithms were developed to solve related optimization problems.
The Xhaul network optimization studies have also embraced the problem of network slicing. The authors of [11] discussed the challenges related to the end-to-end (E2E) provisioning of slices, focusing on E2E slice latency monitoring and management. In [44], the optimal allocation of radio resources for eMBB and URLLC slices was studied in a network scenario excluding the Xhaul transport domain. In [45], slice-aware optimization of the DU and CU placement was addressed, assuming fixed propagation latencies in the transport network. In [46], a MILP model was proposed for the problem of the joint selection of the optimal functional split and the routing path from the connected user equipment to the CU while satisfying each service's SLAs. The authors of [47] formulated an integer linear programming (ILP) model for the problem of joint admission control and MEC/RAN slicing in 5G metropolitan networks under E2E service delay and resource constraints. A deep reinforcement learning-based method was proposed to solve this NP-hard problem.
Few works focus on network slicing in packet-switched Xhaul networks. The authors of [15] presented experimental results obtained in a test bed of a TSN-aware Xhaul transport network supporting eMBB and URLLC slices, where latency guarantees were provided for URLLC. Recently, the impact of traffic priorities on the performance of a packet Xhaul network supporting eMBB and URLLC services was analyzed in [48].
To our knowledge, the modeling and optimization of network slicing in packet-switched Xhaul networks have not been addressed widely in the literature. Similarly, applying advanced mathematical programming techniques in optimizing such networks has not been considered.

3. Network Scenario

This section presents the main assumptions regarding the addressed network scenario. The study uses similar network, traffic, and latency models as in [20,21], whereas the assumptions regarding network slicing are novel and were not considered in these works. The network scenario is illustrated in Figure 1 and is discussed in detail below.

3.1. Packet-Switched Xhaul Access Network

The considered implementation of the packet-switched Xhaul access network is based on the NGFI architecture presented in the IEEE 1914.1 specification [7]. The NGFI allows for separating the DU and CU processing entities from the RUs associated with the antenna sites and their placement at processing nodes spread over the network. These processing nodes can be either small DCs or processing pool (PP) nodes dedicated to DU/CU processing; in this paper, the term PP refers to either of the two options. We assume that the capacities of PPs are limited, and the DU/CU processing at PPs involves some load reflecting the CPU and memory usage [49]. Additionally, joint DU processing at the same PP for a group of RUs (cluster) is considered to effectively implement multi-cell coordination mechanisms [50].
The NGFI architecture specifies that the RAN elements are connected using a packet-switched Xhaul transport network responsible for routing Xhaul (FH/MH) flows between the RUs, radio processing nodes, and a central node (hub) of the RAN. The packet-switched network uses TSN Ethernet switches, according to the IEEE 802.1CM specification [8].

3.2. Network Slicing

Soft slicing of the packet-switched Xhaul network is assumed in this work; in particular, different network slices use common transmission resources (like network switches and links) of the packet transport network. Moreover, the network operates in a slice-aware mode, so the transport network can be optimized for network slicing, where different traffic flows are transported and served according to their specific QoS requirements [7]. These QoS requirements are closely related to the type of 5G service realized by a network slice. This work’s network slicing-planning case study considers two principal 5G services: eMBB and URLLC.
The URLLC services have stricter latency requirements than eMBB services regarding the transport of FH traffic flows and the overall end-to-end service delay budget [5,7]. As mentioned in [7], CU instances in network slice-aware mode could be placed at different geographic locations. Accordingly, to support the ultra-low latency requirements of URLLC slices, their CU processing is performed in the proximity of RUs in a selected PP site, where 5GC functions and a MEC server running a URLLC application are also located [5]. In contrast, for eMBB network slices, the CU functions reside at a central and more distant hub site. Finally, several network slices may coexist within an RU, where the transmission resources (radio frames) at the RU are segmented and assigned to these slices [51].
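As a minimal illustration, the slice-specific CU placement rule described above can be encoded as a small helper function (hypothetical names, not part of the paper's model):

```python
# Encodes the CU placement rule: eMBB CUs reside at the central hub, while
# a URLLC slice's CU is placed at a PP node near its RUs, colocated with
# 5GC functions and a MEC server. Names are illustrative assumptions.
def cu_site(slice_type: str, hub: str, nearby_pp: str) -> str:
    if slice_type == "eMBB":
        return hub           # CU at the central, more distant hub site
    if slice_type == "URLLC":
        return nearby_pp     # CU at a PP near the RUs (MEC colocated)
    raise ValueError(f"unknown slice type: {slice_type}")
```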

3.3. Traffic Model

This study implements the traffic model presented in [52]. The radio data are encapsulated [9] and sent by the RU and processing nodes periodically, within a fixed transmission window, as bursts of Ethernet frames. The frames have a fixed length, and the number of frames forming a data burst is called the burst size. The transmission window length (i.e., the period between the burst sending times) depends on the specific 5G radio numerology applied, where the numerology represents a set of 5G radio transmission parameters [51]. It equals 66.6̄ × 2^(−μ) µs, where the numerology is identified with an integer μ, 0 ≤ μ ≤ 4 [51,52]. The bit rates of data flows correspond to the amount of radio resources assigned at the RUs to particular network slices, and they may differ between the slices. This work assumes the maximum bit rates of flows available to network slices, corresponding to full slice utilization, during the network-planning process.
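For illustration, the window length can be computed as follows (a sketch assuming the period scales as 2^(−μ) from the 66.6̄ µs base at numerology 0):

```python
# Burst period (transmission window length) as a function of the 5G
# numerology mu: 66.6_ us at mu = 0, halving with each numerology step
# (assumed scaling 2^(-mu)).
def window_us(mu: int) -> float:
    if not (isinstance(mu, int) and 0 <= mu <= 4):
        raise ValueError("numerology must be an integer in 0..4")
    return (200.0 / 3.0) / (2 ** mu)  # 66.6_ us, scaled by 2^(-mu)

# e.g. window_us(0) is about 66.67 us, window_us(3) about 8.33 us
```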

3.4. Flow Prioritization

The switches handle the radio data bursts, i.e., control, buffer, and transmit them as a whole. The selection of a buffered burst for transmission is based on its priority, according to the strict priority algorithm [8]. Latency-sensitive FH flows have a higher priority than MH flows. We assume Profile A of operation defined in [8], where the transmitted bursts are not preempted by other bursts, even of higher priority.
The radio data corresponding to different network slices, but associated with the same RU, are encapsulated into Ethernet frames and sent/received as separate FH data flows [53]. Accordingly, the priorities assigned to these flows may be diversified. We analyze the following two cases:
  • SP–FH (same FH priorities), where all the FH flows are assigned the same (high) priority without respect for the network slice they belong to.
  • DP–FH (different FH priorities), where the priorities of the FH flows are assigned according to the flow latency requirements. In particular, in this study, the URLLC FH flows have a higher priority than the eMBB FH flows.
The above cases are illustrated in Figure 2, where three FH bursts (two eMBB and one URLLC) are transmitted at the same switch output link. In SP–FH, the URLLC burst may be selected for transmission as the last one since all bursts have the same priority, which leads to a buffering delay of this burst. The URLLC burst is transmitted without delay if assigned the highest priority in DP–FH.
Assigning the same priority to the whole FH traffic (SP–FH) is specific to the network slice-unaware mode of operation defined in [7], and is also assumed in [8]. In contrast, the DP–FH approach supports the network slice-aware mode. It can be realized with the QoS-wise FH packetization method discussed in [53] in the studied network scenario, where an RU generates multiple independent FH flows, each related to a different network slice, as mentioned earlier. We evaluate the impact of both policies on network performance in Section 6.4.3.
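The effect of the two policies can be sketched with a toy worst-case calculation at a single output port (illustrative assumptions: all bursts arrive simultaneously at an idle port, and ties are broken against the observed burst):

```python
# Schematic non-preemptive strict-priority port (illustrative, not the
# paper's simulator): the port repeatedly transmits the highest-priority
# queued burst to completion. We compare the worst-case start delay of a
# URLLC burst under SP-FH (all FH bursts share one priority, so the URLLC
# burst may be served last) and DP-FH (URLLC bursts get a higher priority).
def start_delay(bursts, target):
    """bursts: list of (name, priority, tx_time); a lower number means a
    higher priority. Returns the worst-case time until `target` starts
    transmitting, assuming ties are broken against the target."""
    tgt_prio = dict((n, p) for n, p, _ in bursts)[target]
    # worst case: every burst of higher or equal priority goes first
    return sum(t for n, p, t in bursts if n != target and p <= tgt_prio)

eMBB1, eMBB2 = ("eMBB1", 1, 10.0), ("eMBB2", 1, 10.0)
sp = start_delay([eMBB1, eMBB2, ("URLLC", 1, 2.0)], "URLLC")  # SP-FH: 20.0
dp = start_delay([eMBB1, eMBB2, ("URLLC", 0, 2.0)], "URLLC")  # DP-FH: 0.0
```

This mirrors the Figure 2 scenario: under SP–FH the URLLC burst can wait behind both eMBB bursts, while under DP–FH it starts immediately.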

3.5. Flow Latencies

One of the fundamental assumptions in this work is that the packet Xhaul transport network is capable of ensuring acceptable latencies of individual packet flows, which is an essential requirement for reliable RAN operation, as discussed in [7,54]. This assumption also supports the realization of soft network slicing, where the data flows belonging to different slices use common resources of the packet transport network and their slice-specific latency requirements are satisfied. Therefore, the maximum one-way latencies of packet flows must not exceed specific levels related to these flows, and these guarantees should concern all bursts belonging to a flow. Hence, the considered network-planning problem imposes relevant latency constraints on individual network flows. The study uses the same reliable flow latency-estimation model as in [20,21]. For completeness, the model is presented here again.
The latency of a flow consists of a static and a dynamic component. The static component includes the propagation delays in network links (assuming a signal speed of 2 × 10^5 km/s), the store-and-forward delays in switches (which can be bounded), and the burst transmission times in links. The dynamic component represents the burst buffering delays at switch output ports. The static latencies are deterministic, but the buffering delays depend on the buffers' state and occupancy. Therefore, in this work, the buffering latencies are estimated using the worst case latency model proposed in [8] and derived using the network calculus theory, as discussed in [55].
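A minimal sketch of the static component, under the assumptions above (the field names are illustrative):

```python
# Static latency of a flow along a routing path: per-link propagation delay
# (signal speed 2e5 km/s), store-and-forward delay where the link leaves a
# switch, and the burst transmission time on each link.
def static_latency_us(path, burst_bits):
    """path: list of links, each a dict with length_km, rate_bps, and
    sf_delay_us (0 if the link does not leave a switch)."""
    total = 0.0
    for link in path:
        total += link["length_km"] / 2e5 * 1e6       # propagation, in us
        total += link["sf_delay_us"]                  # store-and-forward
        total += burst_bits / link["rate_bps"] * 1e6  # burst transmission
    return total

# two 10 km, 10 Gb/s hops; the second leaves a switch with 2 us S&F delay
path = [
    {"length_km": 10.0, "rate_bps": 10e9, "sf_delay_us": 0.0},
    {"length_km": 10.0, "rate_bps": 10e9, "sf_delay_us": 2.0},
]
lat = static_latency_us(path, burst_bits=12000)  # 100 + 2 + 2.4 us
```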
The mentioned model assumes that the latencies are produced by (a) higher or equal-priority bursts that might be selected for transmission before the burst of the given flow and (b) the largest lower priority burst currently in transmission. Let H(f, e) and L(f, e) be the sets of, respectively, higher/same-priority flows and lower priority flows interfering with flow f in link e, and let L(q, e) denote the latency produced by interfering flow q in link e. The worst case buffering latency of flow f routed through path p can be expressed as:

L_buf(f) = Σ_{e∈p} ( Σ_{q∈H(f,e)} L(q, e) + max_{q∈L(f,e)} L(q, e) )

where the first component is the sum of latencies produced by the interfering flows of higher and same priority, and the second component is the maximum latency produced by a lower priority flow.
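The formula can be transcribed directly into code; the per-interferer latencies below are toy values, whereas in the paper they come from the worst case model of [8]:

```python
# Direct transcription of the worst-case buffering-latency formula: for
# each link e on the path, add the latencies of all higher/same-priority
# interferers H(f, e) plus the largest lower-priority interferer L(f, e).
def worst_case_buffering(path, H, L):
    total = 0.0
    for e in path:
        total += sum(H.get(e, []))               # higher/equal priority
        total += max(L.get(e, []), default=0.0)  # largest lower-prio burst
    return total

# two-link path: on e1, two equal-priority interferers and two lower-
# priority bursts; on e2, only a lower-priority burst in transmission
lat = worst_case_buffering(
    path=["e1", "e2"],
    H={"e1": [4.0, 3.0]},
    L={"e1": [5.0, 1.0], "e2": [2.0]},
)  # (4 + 3 + 5) + (0 + 2) = 14.0
```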
This study mainly focuses on the packet-switched Xhaul transport network's latency guarantees, whereas the service-level latency constraints imposed by the 5G services are not explicitly included. Introducing relevant constraints to the optimization model can ensure the 5G service-level latency guarantees. To this end, a proper latency model reflecting the specific implementation of the PP hardware and software realizing the DU and CU functions is required. The development of such a model is out of the scope of this study and is left for future work. The lack of a service-level latency model is partly mitigated by the assumption that URLLC services are realized within the access network (i.e., in the proximity of RUs) using a MEC server colocated with the PP performing the CU functions. Thus, the transport latencies are minimized for these latency-sensitive services.

4. Problem Formulation

This section focuses on the mathematical modeling of the network slicing-planning problem in packet-switched Xhaul networks. The optimization problem considered concerns (a) the placement of DU and CU entities at selected PP nodes for processing of radio traffic related to different 5G services provisioned by a given set of RUs, according to the specific requirements of particular network slices realized in the network, and (b) the routing of FH and MH flows over the packet Xhaul network. As in [29,31], the optimization objective is minimizing the number of PP nodes required to support the demands. The problem constraints are as follows:
  • The DU processing for a cluster of RUs is performed at the same PP node.
  • The CU processing for network slices realizing eMBB services is performed at the hub node.
  • The CU processing for a network slice realizing URLLC services is performed at a selected PP node, where an MEC server is also colocated.
  • The overall load related to DU and CU processing at a PP node cannot exceed the available PP capacity.
  • The FH and MH flows are non-bifurcated and should be realized over single routing paths in the network.
  • The overall volume (bit rate) of traffic in a network link cannot exceed the link capacity.
  • The maximum one-way latency of an individual flow must not exceed the latency limit specific for this flow.
In the following, the notation used in problem modeling is introduced, and afterward, the problem is formulated as a MILP optimization problem. The MILP model is an extension of the model formulated in [20] for the network scenario supporting a single network slice. The purpose of presenting a similar MILP model here is twofold. First, it shows its applicability in a network with multiple slices, each with specific DU/CU placement requirements, which introduces additional problem constraints. Second, the model variables and constraints are directly referred to and used in developing the optimization method in Section 5.
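Although the MILP model itself is given in Section 4.2, a sketch of how a candidate solution could be validated against three of the listed constraints may be useful; the data layout below is an illustrative assumption, not the paper's formulation:

```python
# Hedged sketch of a solution validator for the PP-capacity, link-capacity,
# and flow-latency constraints listed above.
def check_solution(pp_load, pp_cap, link_load, link_cap, flow_lat, lat_max):
    violations = []
    for v, load in pp_load.items():      # DU+CU load within PP capacity
        if load > pp_cap[v]:
            violations.append(f"PP {v} overloaded: {load} > {pp_cap[v]}")
    for e, load in link_load.items():    # traffic within link capacity
        if load > link_cap[e]:
            violations.append(f"link {e} overloaded: {load} > {link_cap[e]}")
    for df, lat in flow_lat.items():     # one-way latency within its limit
        if lat > lat_max[df]:
            violations.append(f"flow {df} late: {lat} > {lat_max[df]}")
    return violations

ok = check_solution(
    pp_load={"pp1": 80}, pp_cap={"pp1": 100},
    link_load={"e1": 9.5}, link_cap={"e1": 10.0},
    flow_lat={("d1", "FH"): 95.0}, lat_max={("d1", "FH"): 100.0},
)  # empty list: the candidate solution is feasible
```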

4.1. Notation

The following notions and notation are used in the network and problem modeling.
Network: The packet-switched Xhaul network is represented by a directed and connected graph G = (V, E), where V is the set of network nodes and E is the set of network links. Set V includes the RU, PP, switch, and hub nodes. Subset V_PP ⊆ V comprises the PP nodes, and subset E_S ⊆ E comprises the output links of the switches. C(v) is the processing capacity of PP node v ∈ V_PP, and H(e) is the transmission capacity of link e ∈ E. L_P(e) is the propagation delay in link e. L_SF(e) is the store-and-forward delay of the switch at the origin of link e ∈ E_S; L_SF(e) = 0 if e is not outgoing from a switch.
Traffic demands: The set of traffic demands is denoted as D. Demand d ∈ D represents the radio traffic to be carried from/to an RU over the network and processed at the PP or hub nodes performing the DU and CU functions for this demand. The demands are realized in both directions: uplink (RU→DU→CU) and downlink (CU→DU→RU). The processing loads of the DU and CU entities of demand d are denoted, respectively, as ρ_DU(d) and ρ_CU(d).
Network slicing: The set of types of network slices is denoted as N. In this work, set N includes two types of slices: eMBB and URLLC; N = {eMBB, URLLC}. The set of slices realized in the network is denoted as S. Each network slice s ∈ S represents a subset of demands D(s) ⊆ D. The slices are disjoint, and a pair of uplink and downlink demands belong to the same slice. Subsets S_eMBB ⊆ S and S_URLLC ⊆ S comprise the slices of type eMBB and URLLC, respectively.
DU and CU placement: The set of clusters of RUs is denoted as C. Cluster c ∈ C comprises a subset of RUs that requires the placement of their DUs in the same PP node. The demands associated with a certain RU belong to the same cluster. The set of all demands associated with cluster c is denoted as D(c). In this work, for all demands of a network slice of type URLLC, their CU entities are placed in the same PP node. Concurrently, the CU processing is performed in the hub node for all demands of the network slices of type eMBB.
Network flows: The set of types of traffic flows is denoted as F. In this work, two types of flows are considered: (a) fronthaul flows between RUs and the PP nodes in which their DU entities are placed and (b) midhaul flows between the PP/hub nodes performing the DU and CU functions; F = {FH, MH}. Both flows are realized in the network for each demand unless the DU and CU are placed in the same PP; in such a case, the MH flow is absent. The bit rate of flow f of demand d is denoted as H(d, f). The maximum one-way latency limit for flow f of demand d is denoted as L_max(d, f). The delay produced by the transmission of the burst of frames of flow f of demand d in link e is denoted as L(d, f, e). Let H(d, f, e) and L(d, f, e) denote the sets of demand–flow pairs (d̄, f̄), d̄ ∈ D and f̄ ∈ F, which may cause buffering latencies of flow f of demand d in link e ∈ E_S. Set H(d, f, e) comprises the flows of either equal or higher priority than flow f of demand d, whereas set L(d, f, e) comprises lower priority flows. Let L̂(d, f, e) denote the overall latency of flows in set H(d, f, e); L̂(d, f, e) = Σ_{(d̄,f̄)∈H(d,f,e)} L(d̄, f̄, e). Let T(d, f, e) be the set of possible values of latency L(d̄, f̄, e) produced by lower priority flows (d̄, f̄) ∈ L(d, f, e). Let subset L(d, f, e, t) ⊆ L(d, f, e) comprise the flows producing latency t ∈ T(d, f, e), and let L(t) denote the value of this latency.
Routing paths: Set P(d, f) includes candidate routing paths for flow f of demand d. Subset P(d, f, v) ⊆ P(d, f) comprises the paths terminated in PP node v, and subset P(d, f, e) ⊆ P(d, f) comprises the paths going through link e. Set P = ∪_{d∈D, f∈F} P(d, f) represents all candidate paths. In this work, we assume that all the candidate MH paths of the demands belonging to slice eMBB are terminated in the hub node, where the CU is placed, and only such paths are included in set P(d, f) for these flows. Finally, set E(p) comprises the links belonging to path p.
The notation is summarized in Table 1.

4.2. MILP Optimization Model

The MILP formulation of the slice-aware DU/CU placement- and flow routing-optimization problem uses the variables defined in Table 2.
The MILP model is presented in Table 3 and explained in the following.
The optimization objective, expressed by (2), is to minimize the number of active PP nodes in the network.
Constraints (3)–(9) concern the placement of DU and CU entities in PP nodes. Constraint (3) ensures the selection of a PP node for the DU processing for a cluster of demands. Concurrently, constraint (4) ensures that each demand in a cluster uses the PP assigned to this cluster for DU processing. Constraint (5) activates the PP node where a DU is placed. Constraint (6) ensures the selection of a PP node for CU processing for each URLLC slice. Constraint (7) ensures that each demand belonging to the URLLC slice uses the PP assigned to this slice for CU processing. Constraint (8) activates the PP node where a CU is placed. Constraint (9) ensures that the overall load at a PP, related to the DU and CU processing, does not exceed its capacity.
Constraints (10)–(14) concern the selection of paths for routing of FH and MH flows in the packet-switched transport network. Constraint (10) ensures the selection of a single path for each FH flow. Constraint (11) ensures that the FH path selected for a demand terminates at the PP node processing the DU of this demand. Constraint (12) ensures the selection of an MH path for a demand belonging to an eMBB slice. Moreover, it guarantees that this path terminates at the PP node processing the DU of this demand. Note that the other end of the path should be at the hub node, where the CU is placed for eMBB. Therefore, the set of candidate MH paths for slice eMBB contains only such paths. Constraints (13) and (14) ensure that the MH flow of a demand belonging to an URLLC slice either is absent in the network, if both the DU and CU are processed at the same PP ( u d v DUCU = 1 ), or is routed over a single path terminating at the PP nodes processing the DU and CU entities.
Constraints (15) and (16) are used to control and determine the usage of network links. Constraint (15) ensures that the volume (bit rate) of traffic routed over a link does not exceed its capacity. Constraint (16) determines if a network flow is routed over a given link.
Constraints (17)–(22) are used to estimate and guarantee flow latencies in the network. The constraints have been formulated in the efficient MILP model that we developed in [20]. Here, we repeat the main reasoning behind these constraints. Constraint (17) estimates the worst case buffering latency ( w d f e H E P ) of a packet belonging to a flow at a switch output link caused by the packets belonging to higher and equal-priority flows. Namely, if link e is not used by flow f of demand d ( y d f e = 0 ), then the left-hand side (LHS) of inequality (17) is equal to or lower than 0. As a consequence, it has no impact on variable w d f e H E P (it can take a value of 0 or any value allowable by other constraints), and the latency of flow f is not constrained by the queuing of other flows in link e. Otherwise, if link e is used by flow f ( y d f e = 1 ), then variable w d f e H E P takes the value of buffering latency caused by the interfering higher/equal-priority flows (from set H ( d , f , e ) ) or any higher value allowable by other constraints. Constraints (18) and (19) estimate the worst case buffering latency ( w d f e L P ) of a packet belonging to a flow at a switch output link caused by the packets belonging to lower priority flows. In particular, constraint (18) ensures the activation of variable y d f e t L P , whenever there is a lower priority flow from subset L ( d , f , e , t ) realized in the network. Constraint (19) ensures that latency L ( t ) produced by a lower priority flow is included in the estimation of buffering latency of flow f of demand d in link e if both flows are present in the network. Constraint (20) estimates the overall worst case buffering latency for a flow at a switch output link. 
Constraint (21) estimates the static latency of a flow produced in a network link, which accounts for the link propagation delay, the store-and-forward delay in the origin node of this link (if the node is a switch), and the burst transmission delay in the link. Constraint (22) estimates the latency of a flow and ensures that it is within the allowable limit for this flow.
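The latency logic of constraints (17)–(22) can be mirrored procedurally. The sketch below assumes the usual strict-priority worst case — the sum of the burst transmission delays of higher/equal-priority interfering flows plus the largest single lower-priority burst delay (a frame already in transmission cannot be preempted) — added to the static per-link latency; all field names and numbers are illustrative, not taken from the model.

```python
def link_latency(flow, link, flows_on_link):
    """Worst-case one-way latency contribution of one link for `flow`,
    mirroring the structure of constraints (17)-(21):
      - buffering by higher/equal-priority flows: sum of their burst delays,
      - buffering by lower-priority flows: the largest single burst delay,
      - static part: propagation + store-and-forward + own burst delay.
    All delays are in microseconds; lower `priority` value = higher priority."""
    hep = sum(f["burst_delay"] for f in flows_on_link
              if f is not flow and f["priority"] <= flow["priority"])
    lp = max((f["burst_delay"] for f in flows_on_link
              if f["priority"] > flow["priority"]), default=0.0)
    static = link["propagation"] + link["store_and_forward"] + flow["burst_delay"]
    return hep + lp + static

# Illustrative check: a URLLC FH flow sharing a link with an eMBB FH flow.
urllc = {"priority": 0, "burst_delay": 3.0}
embb = {"priority": 1, "burst_delay": 12.0}
link = {"propagation": 10.0, "store_and_forward": 5.0}
total = link_latency(urllc, link, [urllc, embb])
print(total)  # 12 (lower-priority burst) + 10 + 5 + 3 = 30.0
```

Summing such per-link contributions along a path and comparing against L_max(d, f) corresponds to the end-to-end check of constraint (22).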
Finally, constraints (23) and (24) define the ranges of the problem variables.
On the LHS of constraints (10)–(16), additional symbols λ_d^FH, π_dv^FH, μ_dv^DU-eMBB, μ_dv^DU-URLLC, μ_dv^CU-URLLC, κ_e, and σ_dfe are included, which denote the dual variables associated with these constraints in the linear programming (LP) relaxation of problem (2)–(24). Note that these dual variables are related to the constraints in which path selection variable x_dfp appears, and they are used in a column-generation (CG) procedure presented in Section 5.3. For presentation clarity, we omit the dual variables related to the rest of the constraints, which are irrelevant for CG.
Remark 1.
MILP optimization problem (2)–(24) comprises, among others, two NP-hard subproblems: the single-path allocation problem [18], represented by constraints (10) and (15), and the 0–1 knapsack problem, represented by constraints (9) and (15). Therefore, the problem is also NP-hard, indicating that it may be challenging to solve.
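For readers less familiar with the second subproblem, a minimal 0–1 knapsack solver makes the reduction in Remark 1 concrete: packing demand loads (weights) into a resource of limited capacity is exactly this structure. The dynamic program below is the textbook algorithm applied to a toy instance; the weights and values are purely illustrative.

```python
def knapsack(capacity, items):
    """Classic 0-1 knapsack DP: items are (weight, value) pairs.
    Returns the maximum total value packable within `capacity`.
    Weights play the role of DU/CU loads and `capacity` that of the
    PP processing capacity in constraint (9)."""
    best = [0] * (capacity + 1)
    for w, v in items:
        for c in range(capacity, w - 1, -1):  # reverse scan: each item used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Toy instance with hypothetical loads/values.
print(knapsack(10, [(5, 10), (4, 40), (6, 30), (3, 50)]))  # -> 90
```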

5. Optimization Method

This section presents an optimization method that decreases the computational effort of solving the MILP problem presented in Section 4.2. The method consists of using a reduced set of candidate routing paths instead of the complete set of paths P and solving model (2)–(24) assuming these paths. It is a heuristic method based on the price-and-branch approach [19], which uses the CG technique, and it is called the price-and-branch algorithm (PBA) in the remainder of the paper.

5.1. Price-and-Branch Algorithm

The PBA method consists of the following steps, shown in the flowchart in Figure 3:
  1. Generating initial solution: This step generates an initial solution necessary to initialize the set of paths in the column-generation procedure in Step (2). The solution is found using a greedy heuristic algorithm presented in Section 5.2.
  2. Generating paths: The goal of this step is to find a subset of the complete set of candidate paths P that provides a good-quality solution to the problem and, at the same time, decreases the computational effort of solving it. For this purpose, column generation (CG) is applied, which is an optimization technique allowing for a reduction of the number of variables (referred to as columns) in linear programming formulations [18,19]. The CG procedure dedicated to problem (2)–(24) is described in detail in Section 5.3.
  3. Solving MILP: In this last step, the set of candidate paths resulting from Step (2) is applied in MILP model (2)–(24), and the model is solved using a branch-and-bound solver (e.g., CPLEX v.12.9 [56]).
The main processing steps of the optimization method are described in detail below.

5.2. Generating Initial Solution

A feasible solution satisfying the optimization problem constraints is generated at this step. To this end, a greedy heuristic algorithm is executed. The algorithm aims to select for each demand of each network slice the PP nodes where the DU/CU entities are placed together with single routing paths to realize the FH and MH flows in the network.
The considered greedy algorithm performs as follows:
  • Preprocessing: In the preprocessing phase, the clusters are ordered in decreasing order according to their number of demands. Also, the set of demands is ordered to reflect the order of clusters; i.e., the first demands correspond to those in the first clusters. Moreover, for each cluster, the PPs are ordered in increasing order according to the length of the shortest path to the most distant RU in the cluster. The following steps process the clusters, demands, and PPs according to such order.
  • Placement of DUs with routing of FH flows: In this step, the consecutive clusters are processed, for which PP nodes are selected for the realization of DU functions according to a first-fit policy. Namely, the nearest feasible PP v ∈ V_PP is chosen for each cluster c ∈ C, such that (a) it has enough capacity for DU processing for all demands in the cluster and (b) the FH flows can be realized with a latency and bandwidth guarantee over the shortest paths between the RUs belonging to the cluster and the PP selected.
  • Routing of MH flows for eMBB demands: Here, the consecutive demands belonging to the network slices of type eMBB are processed. In particular, the shortest feasible routing paths, satisfying the latency and bandwidth constraints, are selected to realize MH flows between the previously chosen PPs (with DU processing) and the hub node, where the CU functions are performed for eMBB.
  • Placement of CUs with routing of MH flows for URLLC demands: Finally, PP nodes for the CU processing are selected for URLLC network slices. Namely, for each slice s ∈ S_URLLC, the PPs are ordered in decreasing order according to the overall DU load of the demands belonging to this slice. Next, the first feasible PP v ∈ V_PP is chosen, such that (a) it can accommodate the CU load of the slice and (b) the MH flows can be realized in the network over the shortest paths with the latency and bandwidth guarantee.
The above algorithm provides a set of initial paths P ˜ for the second step of the PBA (i.e., the CG procedure). Moreover, the PBA implementation in this work assumes that the CU processing for URLLC slices is performed in the PP nodes selected by the greedy algorithm. Therefore, the paths realizing the MH flows for URLLC demands have to terminate at these PP nodes, and only such paths are considered in the set of candidate paths P in the CG procedure in Section 5.3.
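Under simplifying assumptions — path feasibility reduced to a plain distance test, and hypothetical loads, capacities, and distances — the preprocessing and first-fit DU placement steps above can be sketched as follows:

```python
def greedy_du_placement(clusters, pp_capacity, dist, max_reach):
    """First-fit DU placement following the greedy heuristic's ordering:
    clusters by decreasing number of demands, PPs per cluster by increasing
    distance to the cluster's most distant RU. The latency/bandwidth check
    over shortest paths is reduced here to a simple reach test (`max_reach`)."""
    # Preprocessing: largest clusters first.
    order = sorted(clusters, key=lambda c: len(clusters[c]), reverse=True)
    remaining = dict(pp_capacity)
    placement = {}
    for c in order:
        rus = clusters[c]
        # PPs ordered by the distance to the most distant RU of the cluster.
        pps = sorted(remaining, key=lambda v: max(dist[r][v] for r in rus))
        du_load = 5 * len(rus)  # 5 PP units per DU, as assumed in Section 6.1
        for v in pps:  # first fit
            if remaining[v] >= du_load and max(dist[r][v] for r in rus) <= max_reach:
                remaining[v] -= du_load
                placement[c] = v
                break
        else:
            return None  # no feasible PP for this cluster
    return placement

# Toy example (hypothetical distances in km, capacities in PP units).
clusters = {"c1": ["r1", "r2"], "c2": ["r3"]}
dist = {"r1": {"v1": 1, "v2": 4}, "r2": {"v1": 2, "v2": 4}, "r3": {"v1": 1, "v2": 2}}
print(greedy_du_placement(clusters, {"v1": 10, "v2": 10}, dist, max_reach=3))
```

The real heuristic additionally verifies latency and bandwidth along the shortest paths and continues with the MH routing and CU placement steps.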

5.3. Column-Generation Procedure

The CG procedure uses the linear relaxation of problem (2)–(24), which is an LP problem. In the LP problem, the objective function and problem constraints are the same as in the original MILP problem. At the same time, the decision (binary) variables are relaxed and can take any value in the interval [ 0 , 1 ] .
At the beginning of CG, the LP problem is initialized and solved (using an LP solver) assuming a small set of allowable paths P̃ ⊆ P representing the initial, feasible solution found using the greedy algorithm in Section 5.2. Set P̃ is iteratively extended with new paths provided by CG. At each iteration, the search for such paths is performed by solving a pricing problem. The pricing problem concerns finding a new path p, where p ∈ P and p ∉ P̃, for which its so-called reduced cost is positive (and the largest). When found, the new path (or paths) is included in set P̃, and the LP problem is solved again; otherwise, the CG procedure terminates.
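The iterative structure just described is independent of the concrete LP. A generic skeleton, with the LP solver and the pricing routine injected as callables (hypothetical signatures), might look as follows:

```python
def column_generation(initial_paths, solve_lp, price, max_iters=100):
    """Generic CG loop as used in Step (2) of the PBA:
    1) solve the restricted LP over the current path set,
    2) solve the pricing problem with the resulting dual values,
    3) add any positive-reduced-cost paths and repeat; stop when none exist.
    `solve_lp(paths) -> duals` and `price(duals, paths) -> new_paths` are
    problem-specific callables (hypothetical signatures)."""
    paths = set(initial_paths)
    for _ in range(max_iters):
        duals = solve_lp(paths)
        new_paths = price(duals, paths)
        if not new_paths:  # no positive reduced cost: LP relaxation optimum reached
            break
        paths |= set(new_paths)
    return paths

# Mock check of the control flow: pricing "finds" two columns, then stops.
pending = [["p2"], ["p3"], []]
result = column_generation(["p1"],
                           solve_lp=lambda paths: {},
                           price=lambda duals, paths: pending.pop(0))
print(sorted(result))  # -> ['p1', 'p2', 'p3']
```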
The paths in set P̃ are identified with variables x_dfp, and each path p ∈ P̃ is represented by a variable x_dfp in the LP model. To define the pricing problem and calculate the reduced cost of variable x_dfp, we first formulate the constraint related to this variable in the problem dual to the LP relaxation. This constraint uses the dual variables specified on the LHS of constraints (10)–(16) and is the following:
α^FH(f) × (λ_d^FH − π_dv^FH) + α^MH-eMBB(d, f) × μ_dv_D^DU-eMBB + α^MH-URLLC(d, f) × (μ_dv_D^DU-URLLC − μ_dv_C^CU-URLLC) − ∑_{e ∈ E(p)} (H(d, f) · κ_e + σ_dfe) ≤ 0, ∀ d ∈ D, f ∈ F, p ∈ P(d, f), (25)
where coefficients α^FH(f), α^MH-eMBB(d, f), and α^MH-URLLC(d, f) activate/deactivate particular components of the constraint depending on the type of demand and flow of interest. Namely, α^FH(f) = 1 if f is an FH flow, α^MH-eMBB(d, f) = 1 if f is an MH flow of a demand belonging to an eMBB slice, and α^MH-URLLC(d, f) = 1 if f is an MH flow of a demand belonging to an URLLC slice; otherwise, the coefficients take a value of 0.
The LHS of (25) represents the reduced cost of primal variable x_dfp. Let λ^FH, π^FH, μ^DU-eMBB, μ^DU-URLLC, μ^CU-URLLC, κ, and σ be the vectors representing an optimal dual solution obtained for the current LP. Constraint (25) imposes that all the values of its LHS are non-positive for such an optimal dual solution. However, there may be paths outside of the set P̃ in the current LP that have a positive reduced cost for this optimal dual solution. Recall that the problem dual to a primal minimization problem (such as the LP considered) is a maximization problem, and the values of the optimal primal and dual objectives are always equal. Therefore, adding paths of a positive reduced cost to the problem can decrease the maximum value of the dual objective and, thus, decrease the minimum value of the primal objective (2), which is the optimization goal. As adding such paths to set P̃ may be beneficial, the CG procedure aims to search for them.
Let β(d, f, p) denote the last component of the LHS of (25), namely:
β(d, f, p) = ∑_{e ∈ E(p)} (H(d, f) · κ_e + σ_dfe). (26)
The reduced cost of variable x_dfp associated with candidate path p ∈ P(d, f) of flow f of demand d, denoted as Δ(d, f, p), is calculated as:
for f = FH: Δ(d, f, p) = λ_d^FH − π_dv_D^FH − β(d, f, p), (27)
for s ∈ S_eMBB, d ∈ D(s), f = MH: Δ(d, f, p) = μ_dv_D^DU-eMBB − β(d, f, p), (28)
for s ∈ S_URLLC, d ∈ D(s), f = MH: Δ(d, f, p) = μ_dv_D^DU-URLLC − μ_dv_C^CU-URLLC − β(d, f, p), (29)
where v_D and v_C denote the PP nodes terminating path p at which the DU and CU are placed, respectively. The optimal values of the dual variables are obtained directly from the LP solver after solving the LP. The reduced cost can be calculated for any path p ∈ P, both currently included in P̃ and outside this set.
Finally, the pricing problem is defined as the problem of finding, for each flow f ∈ F of each demand d ∈ D, a new path p for which the reduced cost, calculated using the proper formula from (27)–(29), is positive and the largest. When found, path p is included in set P̃, and a new variable x_dfp representing this path appears in the primal LP problem. The CG procedure implemented in this work searches for new paths within the paths in set P that have not yet been included in set P̃. It is worth mentioning that several paths may exist with the same largest value of the reduced cost for a given flow f of demand d. In such a case, we accept several paths, but only one (the first found) for each pair of distinct end nodes.
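A sketch of the pricing computation, mirroring (26)–(29): the exact sign convention below is an assumption recovered from the surrounding text (the flow's own dual terms minus the per-link term β), and all dictionary keys are hypothetical names for the dual vectors returned by the LP solver.

```python
def beta(path_links, H_df, kappa, sigma_df):
    """Per-link dual contribution of a path, cf. (26):
    sum over links e of p of H(d, f) * kappa_e + sigma_dfe."""
    return sum(H_df * kappa[e] + sigma_df[e] for e in path_links)

def reduced_cost(kind, path, duals, H_df):
    """Reduced cost of a candidate path per the structure of (27)-(29);
    the signs are an assumption, not taken verbatim from the article."""
    b = beta(path["links"], H_df, duals["kappa"], duals["sigma"])
    if kind == "FH":
        return duals["lambda_FH"] - duals["pi_FH"][path["v_D"]] - b
    if kind == "MH-eMBB":
        return duals["mu_DU_eMBB"][path["v_D"]] - b
    if kind == "MH-URLLC":
        return (duals["mu_DU_URLLC"][path["v_D"]]
                - duals["mu_CU_URLLC"][path["v_C"]] - b)
    raise ValueError(kind)

def best_new_path(kind, candidates, duals, H_df):
    """Pricing step: among candidate paths not yet in the LP, return the one
    with the largest positive reduced cost, or None if none qualifies."""
    scored = [(reduced_cost(kind, p, duals, H_df), p) for p in candidates]
    cost, path = max(scored, key=lambda cp: cp[0])
    return path if cost > 0 else None

# Toy dual values and two candidate FH paths (all numbers illustrative).
duals = {"lambda_FH": 5.0, "pi_FH": {"v1": 1.0, "v2": 3.0},
         "mu_DU_eMBB": {"v1": 2.0}, "mu_DU_URLLC": {"v1": 2.0},
         "mu_CU_URLLC": {"v2": 0.5},
         "kappa": {"e1": 0.1, "e2": 0.2}, "sigma": {"e1": 0.0, "e2": 0.5}}
p1 = {"links": ["e1"], "v_D": "v1", "v_C": "v2"}
p2 = {"links": ["e2"], "v_D": "v2", "v_C": "v2"}
print(reduced_cost("FH", p1, duals, 10.0))  # 5 - 1 - (10*0.1 + 0) = 3.0
```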
When the search for new paths is finished for all flows of all demands, the found paths are included in the LP, the resulting LP problem is solved again using an LP solver, and the pricing problem is solved anew. If no such path exists for any flow of any demand, the CG procedure terminates, and the generated set of paths P̃ is passed to the last step of the PBA, where the MILP problem is solved for this set of paths.

5.4. Algorithm Complexity

The greedy heuristic described in Section 5.2, used to generate an initial solution, is of polynomial complexity. It starts with sorting the sets of clusters, demands, and PPs. Next, it processes the sets’ elements iteratively, allocating the processing and transmission resources according to a first-fit policy.
The CG procedure consists of two phases: (a) generating new columns by solving the pricing problem and (b) solving the LP problem, as shown in Figure 3. The former phase is polynomial in time since it consists of calculating the reduced cost, using (27)–(29), for the candidate paths not yet included in the LP problem and selecting the paths of the highest cost. The latter phase involves solving an LP problem, for which the computation times in practice increase linearly with the number of problem variables and constraints. These two phases are performed in a loop until no new columns can be found. The number of loop repetitions may be high, which is the main factor making the algorithm time-consuming when solving larger problem instances.
Solving MILP in the last step of the algorithm is a complex task as the optimization problem considered is NP -hard. Nevertheless, applying the CG procedure reduces the number of problem variables, facilitating solving the problem.
Detailed computation times of the algorithm in different network scenarios are presented in Section 6.3.

6. Numerical Results

In this section, we evaluate the PBA using numerical experiments, comparing the method with the MILP approach, which consists of solving MILP model (2)–(24) using a MILP solver. Next, we analyze the impact of several network parameters, including the URLLC traffic ratio, the available PP capacity, the network slice prioritization policy, the numerology applied, and the network size on the packet-switched Xhaul network performance. The primary performance metrics used in the analysis are the objective function value (z), i.e., the number of required (active) PPs in the network, and the latencies of flows.

6.1. Evaluation Scenario

The evaluation is performed in two packet-switched transport network topologies, MESH-20 and MESH-38, presented in Figure 4, which were considered in the C-RAN studies in [27]. As mentioned in [12], mesh topologies are also foreseen for packet Xhaul transport networks. We assume that a random number of RUs (R in total) and one candidate PP node are connected to each switching node, and there is one hub node in the network. A cluster of RUs consists of the RUs linked with a given switch (SW). The network links have random lengths (in km) within the following limits: [0.2, 0.5] for the RU and PP connections, [1, 3] between switches, and [10, 15] for the hub [20]. The link capacities are 50 Gbit/s, 100 Gbit/s, and 400 Gbit/s, respectively, for the RU–SW, SW–SW, and PP/hub–SW connections [20]. The PPs have capacity C_min × M, where C_min is the minimum capacity required to support the complete DU/CU processing of the largest cluster of RUs in the particular network scenario, and M is a capacity multiplier; M is equal to 1.5 if not mentioned otherwise. The switch store-and-forward delay is 5 μs [8].
A radio system consisting of eight MIMO layers with 32 antenna ports, 100 MHz channels [57], and different numerologies μ ∈ {1, 2, 3} is considered. Option 7.2 and Option 2 are assumed for the functional split [1] between the RU–DU and DU–CU, respectively. As in [4,58], the DU and CU entities require 5 and 1 unit of PP resources, respectively; thus, the complete DU/CU load is 6 per RU. The maximum bit rates generated by each RU, calculated using the model in [57], are shown in Table 4. The Ethernet frame size is 1542 bytes [8].
The network supports the network slices that realize the services eMBB and URLLC. The radio resources are divided between the slices, and parameter γ represents the ratio (percentage) of resources assigned to the URLLC slice. The bit rates of Xhaul flows of the URLLC and eMBB slices are calculated by multiplying the reference values in Table 4 by γ and (1 − γ), respectively. The same numerology is applied to the slices. The FH flows of URLLC and eMBB have 50 μs and 100 μs one-way maximum latency limits, respectively, whereas 1 ms and 10 ms are assumed for the MH and BH flows [7]. By default, the packet priority is higher for the URLLC FH than for the eMBB FH, whereas the MH and BH packets have the lowest priority.
The k-shortest path algorithm was used to generate candidate routing paths between the network nodes. The experiments were performed on a 3.7 GHz 32-core Threadripper-class machine with 128 GB of RAM. The MILP/LP models were solved using CPLEX v.12.9 [56]. The MILP and PBA methods were executed with a 1 h computation time limit.
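For illustration, candidate-path generation can be sketched on a toy topology; plain enumeration of simple paths is enough at this scale, whereas the experiments rely on a proper k-shortest-path algorithm (e.g., Yen's). Node names and link lengths below are illustrative.

```python
def k_shortest_paths(graph, src, dst, k):
    """Return the k shortest simple paths (by summed link length) between
    src and dst, found by exhaustive DFS enumeration -- adequate only for
    small toy graphs. `graph[u][v]` is the length of link (u, v)."""
    results = []
    def dfs(node, path, length):
        if node == dst:
            results.append((length, path))
            return
        for nxt, w in graph[node].items():
            if nxt not in path:  # keep paths simple (no repeated nodes)
                dfs(nxt, path + [nxt], length + w)
    dfs(src, [src], 0.0)
    return [p for _, p in sorted(results)[:k]]

# Toy RU-switch-PP topology with hypothetical link lengths (km).
g = {"RU": {"SW1": 0.3},
     "SW1": {"RU": 0.3, "SW2": 2.0, "PP": 0.4},
     "SW2": {"SW1": 2.0, "PP": 0.4},
     "PP": {"SW1": 0.4, "SW2": 0.4}}
print(k_shortest_paths(g, "RU", "PP", 2))
# -> [['RU', 'SW1', 'PP'], ['RU', 'SW1', 'SW2', 'PP']]
```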

6.2. Impact of Candidate Paths

In Figure 5, we analyze the impact of the number of candidate paths (k) on the performance of the PBA in network MESH-20. We focus on the objective function value (z, left chart) and the algorithm computation times (right chart) as a function of the number of RUs (R). The results are averaged over six scenarios with μ ∈ {1, 2, 3} and γ ∈ {0.2, 0.3}.
We can see that z is almost unchanged with k in the smaller network scenarios, R ∈ {10, 30}. In larger networks (R ≥ 50), there is some improvement in z if more candidate paths are available. The computation times increase with the number of RUs, which is directly related to the increasing number of demands processed by the algorithm. At the same time, the number of candidate paths does not significantly impact the runtimes. In the remaining analysis, k = 5 if not mentioned otherwise.

6.3. Performance of PBA vs. MILP

In this subsection, we compare the PBA with the MILP method.
In Table 5, we present detailed results of the objective function values (z) and computation times (T) obtained in network MESH-20, assuming different numerologies (μ), numbers of RUs (R), k = 5, and γ = 0.2. We show the MILP optimality gaps (Δ_MILP) and the relative differences, denoted as Δ_z and Δ_T, between the values of z_MILP and z_PBA and between T_MILP and T_PBA obtained for MILP and the PBA, respectively. Also, we report the number of columns (variables) in the MILP model using the entire set of paths (cols_MILP) and after running the CG procedure (cols_PBA), the initial heuristic solution (z_init), and the runtimes of both the CG procedure (T_CG) and of CPLEX solving the MILP model with the generated columns (T_BB).
In Table 5, we can see that the MILP solutions are optimal (Δ_MILP = 0%) only for μ = 1 and, when μ ∈ {2, 3}, for R ≤ 30 RUs. In these cases, the values of z achieved with the PBA are the same; hence, they are also optimal. In the remaining network scenarios, the PBA significantly outperforms the MILP approach, for which the optimality gap is high (even above 60%) after 1 h of computation. Namely, the values of z are much lower for the PBA than for MILP, with a relative difference (Δ_z) of about 40–50%. Such a difference can be explained by the complexity of solving the MILP model, in particular the number of problem variables, which is some tens of times smaller in the PBA (cols_PBA) than in MILP (cols_MILP). Indeed, there is a large difference in the CPLEX solving times (T_MILP and T_BB) between the two methods, which comes from the use of a limited number of paths (generated using CG) in the PBA instead of the complete set of paths in MILP. The main computational effort of the PBA is related to the CG procedure, where T_CG exceeds hundreds of seconds for larger R. Even so, the overall PBA runtimes (T_PBA), compared to those of MILP (T_MILP), are reduced from a few to tens of times (Δ_T).
In Figure 6, Figure 7 and Figure 8, we extend the analysis of the MILP and PBA performance to a larger set of network scenarios. Namely, we compare the objective function values (z in Figure 6) and the computation times (T in Figure 7) and report the relative differences for these two metrics (Δ_z and Δ_T in Figure 8) for different numbers of RUs (R) in networks MESH-20 and MESH-38. The results are obtained and averaged over μ ∈ {1, 2, 3}, γ ∈ {0.2, 0.3}, k ∈ {3, 5}, and ρ = 1.5. In addition, the average optimality gaps of the MILP method (Δ_MILP, denoted as bars) are shown in Figure 6.
Figure 6 shows that MILP may achieve slightly better results than the PBA in smaller network scenarios (for R ≤ 40 in MESH-20 and R ≤ 30 in MESH-38). Nonetheless, the PBA provides much better optimization results for larger problem instances, which is due to the complexity of solving MILP, also expressed by high optimality gaps on the level of some tens of percent (see Figure 6). As shown in Figure 8, the relative difference in z increases with R, up to 30% in MESH-20 and 45% in MESH-38 for R = 70. At the same time, the PBA runtimes, presented in Figure 7, are much shorter than those of the MILP optimization approach. Here, Δ_T first increases, up to some tens of percent, and subsequently decreases for larger R, mainly due to the 1 h runtime limit reached in the MILP method.
In summary, the presented results indicate a limited scalability of the MILP approach. Concurrently, the PBA significantly reduces the complexity of solving the MILP model, at the same time providing good-quality results in much shorter times, particularly in large network scenarios.

6.4. Analysis of Network Performance

In this subsection, we evaluate the performance of a packet-switched Xhaul network supporting network slicing in different network scenarios. The results were obtained using the PBA.

6.4.1. URLLC Traffic Ratio

Figure 9 shows the impact of the URLLC traffic ratio ( γ ) on the number of active PPs (z) for different numerologies ( μ ) and R = 30 RUs in networks MESH-20 and MESH-38.
In general, an increase in the URLLC traffic ratio results in decreased eMBB traffic, as both utilize the limited radio resources, which translates into a higher volume of the more latency-sensitive FH traffic (with the 50 μs limit) in the network. Since the buffering latencies of the URLLC FH packets increase, this FH traffic cannot be routed to distant PP locations due to latency constraints. Therefore, more PPs in the proximity of the RUs have to be activated. The effect is more prominent for a low numerology (μ = 1), as buffering latencies are higher here.

6.4.2. PP Capacity

Figure 10 presents the value of z as a function of the PP capacity, expressed by the capacity multiplier M. The results were obtained for different numerologies and averaged over the network scenarios with 10 ≤ R ≤ 70 RUs.
It can be seen that increasing the PP capacity can reduce the number of required PPs in the network up to some level, above which no significant gains are achieved. The reduction in z is higher for higher numerologies: comparing the values of z for M = 1.5 and M = 4, it reaches about 20% for μ = 3 and about 12% for μ = 2, whereas for μ = 1 it is 8% in MESH-20 and 0% in MESH-38.

6.4.3. Network Slice Prioritization

Next, two prioritization policies described in Section 3.4 and applied in network slicing, namely SP–FH and DP–FH, were evaluated. Figure 11 shows the results comparing both policies in terms of the number of active PPs (denoted by bars) as a function of the number of RUs (R). In addition, the relative difference between the results of the two policies is presented. The results were averaged over μ ∈ {1, 2, 3}.
As seen in Figure 11, assigning different priorities to the FH packets belonging to the network slices of diverse FH latency requirements (in DP–FH) leads to lower PP requirements than applying the same priorities (in SP–FH). This is achieved thanks to reducing the buffering latencies of the latency-sensitive URLLC FH packets, as they are served with a higher priority than the less sensitive eMBB FH packets. Consequently, more distant PP nodes can be selected for the URLLC slice, and the overall number of active PPs may be reduced. In the case considered, the relative difference in performance is a few percent for a lower number of RUs and between 10% and 20% for larger R.

6.4.4. Latencies

Figure 12 and Figure 13 show the maximum and average latencies of the FH and MH flows, respectively, in network slices eMBB and URLLC as a function of the number of RUs. The results were obtained for different network scenarios assuming μ ∈ {1, 2, 3}, γ ∈ {0.1, 0.2, …, 0.9}, and M ∈ {1.5, 2.0, …, 4.0}.
As seen in Figure 12, the maximum latency of the FH flows in all network scenarios is kept below the imposed limits, i.e., 50 μs for URLLC and 100 μs for eMBB. Concurrently, the average FH latency is between 25 μs and 40 μs and does not change significantly with the number of RUs or the network topology. Figure 13 indicates higher MH latencies for eMBB than for URLLC, resulting from two opposite factors. On the one hand, URLLC MH flows are routed to a PP node, where the CU is placed, and are delayed by higher priority FH flows terminated at this node. On the other hand, eMBB MH flows are transported to a distant hub node, and the propagation latencies are higher for these flows.
Figure 14 presents the packet transport network's overall latency (i.e., the sum of the FH and MH latencies) as a function of the numerology (μ) applied. The results were obtained and averaged for R ∈ {10, 20, …, 70}, γ = 0.3, and M = 1.5. The figure shows that applying higher numerologies decreases the transport latencies. Moreover, the overall transport latency is lower for network slice URLLC than for eMBB, which follows the results in Figure 12 and Figure 13, and the difference is higher in the smaller network topology, MESH-20.

6.5. Illustration of Optimized Network Slicing in Urban Network

Finally, we applied the PBA method for an optimized placement of DU/CU/MEC and routing of FH/MH flows for eMBB and URLLC slices in a 17-node urban network (WRO-17) shown in Figure 15 [20]. WRO-17 is a synthetic network connecting a subset of real antenna locations (79 RUs) in the Wrocław city center (Poland). The switches are located near RUs, and the network links are routed along the streets, with the lengths reflecting physical distances. To accommodate high aggregated flow bit rates in this scenario, the link capacities have been assumed to be 50 Gbit/s, 400 Gbit/s, and 1 Tbit/s, respectively, for RU–switch, switch–switch, and switch–PP links.
Figure 16 illustrates the optimized placement of processing resources and flow routing (in the uplink direction) in the network. Whereas the initial heuristic solution required 11 active PPs, the optimized solution found by the PBA and shown in the figure requires just 7 active PPs. In the optimized solution, the FH flows are routed over the same paths for eMBB and URLLC, which is related to the fact that the DUs assigned to particular RUs are placed in the same PP nodes for both network slices. Still, the routing of the MH flows differs between the network slices since the CUs are placed in different nodes, namely at a distant hub node for eMBB and at a PP node, with a colocated MEC, within the access network for URLLC.

7. Applicability of PBA Optimization Method

The PBA method proposed in this work is suitable for solving network-planning tasks in packet-switched Xhaul access networks, which support soft network slicing while ensuring slice isolation. These tasks involve optimizing the placement of the DU, CU, and MEC processing resources and routing FH and MH flows for a given set of RUs according to specific slices’ latency, bandwidth, and resource placement requirements. Soft slicing is realized by sharing bandwidth and carrying the packet flows belonging to different slices over common network links.
The numerical results in Section 6 indicate that the PBA can provide good-quality solutions (optimal or near-optimal) for network instances comprising at least some tens of RUs in mesh networks with some tens of switches, outperforming the classical MILP approach considerably. At the same time, the PBA running times increase with the problem size and, in larger network scenarios, can reach the one-hour limit imposed on the algorithm. For this reason, the method is suitable for network design problems where the solution provisioning time is not critical, for instance, in the design of network connections and resource placement for a defined set of traffic demands (RUs) and network slices. Another applicable scenario for the method is the provisioning of processing resources and network connectivity for dynamic network slice requests. In this case, the only problem variables to be determined are those related to the new slice request, whereas the variables corresponding to the already allocated slices are fixed. With the number of problem variables reduced in this scenario, the processing times can be expected to be lower than when optimizing the network for all slices (as in the network-planning case).
The primary advantage for a network operator in using the proposed method is the optimization of PP resources while ensuring that the packet-switched Xhaul transport network meets the latency requirements of latency-sensitive flows, specific to particular network slices. Achieving this is challenging in a network with dynamic latencies caused by packet buffering.

8. Concluding Remarks

This work addresses the problem of packet-switched Xhaul network planning in a network slicing scenario in which two principal 5G services, eMBB and URLLC, are supported. We have shown that applying advanced optimization techniques, such as the column-generation method, can facilitate solving challenging optimization problems, such as the considered network slicing-planning problem in latency-aware packet Xhaul networks. The proposed PBA method is much more scalable than solving the complete MILP model, and its performance does not depend significantly on the number of candidate paths, whereas MILP can perform poorly for larger sets of paths. Consequently, the PBA can provide good-quality results in much shorter times, particularly in large network scenarios where MILP has difficulties even finding a feasible solution. The relative difference in the objective function values reaches up to several tens of percent in favor of the PBA, and the computation times are reduced by several up to some tens of times.
The network analysis yields several conclusions. The number of required PPs is correlated with the volume of traffic associated with the latency-sensitive (URLLC) network slice. The PP requirements are lower when higher 5G radio numerologies are applied. When the PP capacity increases, the number of active PPs is also somewhat reduced (by up to 10–20% for higher numerologies). Assigning higher priorities to the data bursts of the low-latency FH flows related to URLLC network slices than to the bursts of the FH flows belonging to eMBB slices is more beneficial than using equal priorities; in this case, the reduction in the number of required PPs was between 10% and 20%. Finally, the overall transport network latencies can also be reduced by applying higher numerologies.
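The dependence on the numerology follows from the 3GPP NR frame structure: for numerology μ, the subcarrier spacing is 15·2^μ kHz and the slot duration is 1 ms / 2^μ, so higher numerologies produce shorter, more frequent fronthaul bursts, which reduces buffering latencies:

```python
def slot_duration_ms(mu: int) -> float:
    """3GPP NR slot duration in ms for numerology mu
    (subcarrier spacing 15 * 2^mu kHz)."""
    return 1.0 / (2 ** mu)

for mu in (0, 1, 2, 3):
    print(f"mu={mu}: {15 * 2 ** mu} kHz, slot {slot_duration_ms(mu)} ms")
# mu=0 gives a 1.0 ms slot; mu=3 gives a 0.125 ms slot
```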
In future works, we will focus on optimizing packet-switched Xhaul networks in dynamic network slicing scenarios with varying traffic loads. Also, we plan to study multi-layer packet–optical Xhaul transport network scenarios.

Funding

This research was supported by the National Science Centre, Poland, under grant number 2018/31/B/ST7/03456.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The author declares no conflicts of interest.

Figure 1. Network slicing in a packet-switched Xhaul network: example of DU/CU placement and routing of FH/MH flows for eMBB and URLLC slices.
Figure 2. FH traffic prioritization: (a) SP–FH and (b) DP–FH.
Figure 3. A flowchart of the PBA method.
Figure 4. Network topologies; MESH-20 is a subgraph of MESH-38.
Figure 5. The impact of the number of candidate paths (k) on the objective function value and computation times in network MESH-20.
Figure 6. Comparison of the objective function values obtained using MILP and the PBA (lines) and the MILP optimality gaps (bars) in networks MESH-20 and MESH-38.
Figure 7. Comparison of the MILP and PBA average computation times in networks MESH-20 and MESH-38 assuming a 1-h runtime limit.
Figure 8. Relative difference in the MILP and PBA objective function values and computation times in networks MESH-20 and MESH-38 assuming a 1-h runtime limit.
Figure 9. Number of active PPs as a function of the URLLC traffic ratio assuming different numerologies and R = 30 RUs.
Figure 10. Average number of active PPs as a function of the PP capacity for different numerologies and γ = 0.3; the results are averaged for 10 ≤ R ≤ 70.
Figure 11. Comparison of network slice prioritization policies as a function of the number of RUs assuming γ = 0.3; the results are averaged for μ ∈ {1, 2, 3}.
Figure 12. Maximum and average latencies of FH flows as a function of the number of RUs.
Figure 13. Maximum and average latencies of MH flows as a function of the number of RUs.
Figure 14. Overall packet transport latency as a function of the numerology applied.
Figure 15. WRO-17 network connecting RUs (triangles), PPs (squares), switches (circles), and the hub (hexagon).
Figure 16. Optimized placement of DU/CU/MEC and routing of FH/MH flows (in the uplink direction) for eMBB and URLLC slices in network WRO-17; filled squares represent active PPs, and the numbers are node indexes.
Table 1. Notation.
Sets
V — network nodes
V^PP — PP nodes; V^PP ⊆ V
E — network links
E^S — output links of switches; E^S ⊆ E
C — clusters of RUs
N — types of network slices; N = {eMBB, URLLC}
S — network slices
S^eMBB — network slices of type eMBB; S^eMBB ⊆ S
S^URLLC — network slices of type URLLC; S^URLLC ⊆ S
D — traffic demands
D(c) — demands associated with cluster c ∈ C; D(c) ⊆ D
D(s) — demands belonging to network slice s ∈ S; D(s) ⊆ D
F — types of traffic flows; F = {FH, MH}
H(d,f,e) — demand–flow pairs of an equal/higher priority than flow f of demand d, which cause buffering latencies of flow f on link e ∈ E^S
L(d,f,e) — demand–flow pairs of a lower priority than flow f of demand d, which cause buffering latencies of flow f on link e ∈ E^S
T(d,f,e) — possible values of latency L(d̄,f̄,e) produced by lower-priority flows (d̄,f̄) ∈ L(d,f,e)
L(d,f,e,t) — demand–flow pairs producing latency t ∈ T(d,f,e); L(d,f,e,t) ⊆ L(d,f,e)
P — candidate routing paths
P(d,f) — candidate paths for flow f of demand d
P(d,f,v) — candidate paths for flow f of demand d terminated at PP node v
P(d,f,e) — candidate paths for flow f of demand d going through link e
E(p) — links belonging to path p
Parameters
H(e) — transmission capacity of link e
C(v) — processing capacity of PP node v
H(d,f) — bit rate of flow f of demand d
ρ^DU(d) — processing load of the DU of demand d
ρ^CU(d) — processing load of the CU of demand d
L^P(e) — propagation delay on link e
L^SF(e) — store-and-forward delay of the switch at the origin of link e ∈ E^S
L(d,f,e) — delay produced by the transmission of the burst of frames of flow f of demand d on link e
L̂(d,f,e) — overall latency of the flows in set H(d,f,e)
L(t) — the value of latency t ∈ T(d,f,e)
L^max(d,f) — maximum one-way latency limit of flow f of demand d
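The static latency parameters listed above (propagation delay L^P, store-and-forward delay L^SF, and the burst transmission delay L(d,f,e)) combine additively per link. A rough sketch with illustrative values (the fiber propagation constant and the frame/burst sizes are assumptions, not taken from the paper):

```python
def static_link_latency_us(length_km, rate_gbps, frame_bytes, burst_bytes):
    """Static per-link latency [us]: propagation L_P(e), store-and-forward
    L_SF(e) of a single frame, and the burst transmission delay L(d, f, e)."""
    prop_us = length_km * 5.0                    # ~5 us per km of fiber
    sf_us = frame_bytes * 8 / (rate_gbps * 1e3)  # bits / (bits per us)
    burst_us = burst_bytes * 8 / (rate_gbps * 1e3)
    return prop_us + sf_us + burst_us

# 10 km link at 100 Gbit/s, 1500 B Ethernet frame, 30 kB fronthaul burst:
print(static_link_latency_us(10, 100, 1500, 30_000))  # ≈ 52.52 us
```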
Table 2. Problem variables.
u_cv — binary; u_cv = 1 if cluster c is assigned PP node v for DU processing
u^DU_dv — binary; u^DU_dv = 1 if DU processing of demand d is performed in PP node v
u^CU_dv — binary; u^CU_dv = 1 if CU processing of demand d is performed in PP node v
u^DUCU_dv — binary; u^DUCU_dv = 1 if both DU and CU processing of demand d are performed in PP node v
u^CU-URLLC_sv — binary; u^CU-URLLC_sv = 1 if CU processing of URLLC slice s is performed in PP node v
z_v — binary; z_v = 1 if PP node v is active
x_dfp — binary; x_dfp = 1 if path p is selected to realize flow f of demand d
y_dfe — binary; y_dfe = 1 if flow f of demand d is routed over link e
y^LP_dfet — binary; y^LP_dfet = 1 if at least one of the lower-priority (LP) flows belonging to L(d,f,e,t) is active
w_df — continuous; latency of flow f of demand d
w^HEP_dfe — continuous; latency on link e of flow f of demand d caused by higher/equal-priority flows
w^LP_dfe — continuous; latency on link e of flow f of demand d caused by lower-priority flows
w^buf_dfe — continuous; dynamic (buffering) latency on link e of flow f of demand d
w^stat_dfe — continuous; static latency on link e of flow f of demand d
Table 3. MILP model.
Objective:
minimize z = Σ_{v ∈ V^PP} z_v  (2)
Constraints:
Placement of DU/CU:
Σ_{v ∈ V^PP} u_cv = 1, ∀ c ∈ C  (3)
u^DU_dv = u_cv, ∀ c ∈ C, d ∈ D(c), v ∈ V^PP  (4)
u_cv ≤ z_v, ∀ v ∈ V^PP, c ∈ C  (5)
Σ_{v ∈ V^PP} u^CU-URLLC_sv = 1, ∀ s ∈ S^URLLC  (6)
u^CU_dv = u^CU-URLLC_sv, ∀ s ∈ S^URLLC, d ∈ D(s), v ∈ V^PP  (7)
u^CU-URLLC_sv ≤ z_v, ∀ v ∈ V^PP, s ∈ S^URLLC  (8)
Σ_{d ∈ D} ρ^DU(d)·u^DU_dv + Σ_{s ∈ S^URLLC} Σ_{d ∈ D(s)} ρ^CU(d)·u^CU_dv ≤ C(v), ∀ v ∈ V^PP  (9)
Path selection for flows:
[λ^FH_d] Σ_{p ∈ P(d,f)} x_dfp − 1 = 0, ∀ d ∈ D, f = FH  (10)
[π^FH_dv] u^DU_dv − Σ_{p ∈ P(d,f,v)} x_dfp = 0, ∀ d ∈ D, f = FH, v ∈ V^PP  (11)
[μ^DU-eMBB_dv] u^DU_dv − Σ_{p ∈ P(d,f,v)} x_dfp = 0, ∀ s ∈ S^eMBB, d ∈ D(s), f = MH, v ∈ V^PP  (12)
[μ^DU-URLLC_dv] u^DU_dv − Σ_{p ∈ P(d,f,v)} x_dfp − u^DUCU_dv = 0, ∀ s ∈ S^URLLC, d ∈ D(s), f = MH, v ∈ V^PP  (13)
[μ^CU-URLLC_dv] u^CU_dv − Σ_{p ∈ P(d,f,v)} x_dfp − u^DUCU_dv = 0, ∀ s ∈ S^URLLC, d ∈ D(s), f = MH, v ∈ V^PP  (14)
Usage of network links:
[κ^0_e] H(e) − Σ_{d ∈ D} Σ_{f ∈ F} Σ_{p ∈ P(d,f,e)} H(d,f)·x_dfp ≥ 0, ∀ e ∈ E  (15)
[σ_dfe] y_dfe − Σ_{p ∈ P(d,f,e)} x_dfp = 0, ∀ d ∈ D, f ∈ F, e ∈ E  (16)
Latency of flows:
Σ_{(d̄,f̄) ∈ H(d,f,e)} L(d̄,f̄,e)·y_d̄f̄e + L̂(d,f,e)·(y_dfe − 1) ≤ w^HEP_dfe, ∀ d ∈ D, f ∈ F, e ∈ E^S  (17)
Σ_{(d̄,f̄) ∈ L(d,f,e,t)} y_d̄f̄e ≤ |L(d,f,e,t)|·y^LP_dfet, ∀ d ∈ D, f ∈ F, e ∈ E^S, t ∈ T(d,f,e)  (18)
L(t)·(y^LP_dfet + y_dfe − 1) ≤ w^LP_dfe, ∀ d ∈ D, f ∈ F, e ∈ E^S, t ∈ T(d,f,e)  (19)
w^HEP_dfe + w^LP_dfe = w^buf_dfe, ∀ d ∈ D, f ∈ F, e ∈ E^S  (20)
y_dfe·(L^P(e) + L^SF(e) + L(d,f,e)) = w^stat_dfe, ∀ d ∈ D, f ∈ F, e ∈ E  (21)
Σ_{e ∈ E} (w^stat_dfe + w^buf_dfe) ≤ L^max(d,f), ∀ d ∈ D, f ∈ F  (22)
u_cv, u^DU_dv, u^CU_dv, u^DUCU_dv, u^CU-URLLC_sv, x_dfp, y_dfe, y^LP_dfet, z_v ∈ {0, 1}  (23)
w^HEP_dfe, w^LP_dfe, w^stat_dfe, w^buf_dfe ∈ R_+  (24)
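The end-to-end latency condition of constraint (22) amounts to a simple budget check along a flow's path: the per-link static and buffering latencies, summed over the links, must not exceed the flow's one-way limit. A minimal sketch (the numbers are illustrative, not from the paper):

```python
def flow_latency_ok(links, l_max):
    """Check constraint (22): sum of per-link static latency w_stat and
    buffering latency w_buf along the flow's path must not exceed the
    flow's one-way limit L_max. `links` is a list of (w_stat, w_buf)
    pairs; units are arbitrary but must be consistent."""
    total = sum(w_stat + w_buf for w_stat, w_buf in links)
    return total <= l_max

# Illustrative three-hop flow with a 100 us budget:
print(flow_latency_ok([(20, 5), (15, 0), (30, 10)], l_max=100))  # True (80)
print(flow_latency_ok([(20, 5), (15, 0), (30, 40)], l_max=100))  # False (110)
```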
Table 4. Maximum bit rates [Gbit/s] of Xhaul data flows per RU.
Flow       Uplink   Downlink
Fronthaul  21.624   22.204
Midhaul    3.024    4.016
Backhaul   3        4
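A quick use of these per-RU rates is estimating the aggregate load that a deployment imposes on the transport network; a small sketch using the uplink figures from Table 4 (the scenario size is illustrative):

```python
# Per-RU uplink rates [Gbit/s] taken from Table 4.
RATES_UL_GBPS = {"fronthaul": 21.624, "midhaul": 3.024, "backhaul": 3.0}

def aggregate_load_gbps(n_rus: int, flow: str) -> float:
    """Total uplink load of a given Xhaul flow type for n_rus radio units."""
    return n_rus * RATES_UL_GBPS[flow]

# e.g., 30 RUs generate ~648.7 Gbit/s of uplink fronthaul in total,
# which the routing must spread over the transport links:
print(round(aggregate_load_gbps(30, "fronthaul"), 3))  # 648.72
```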
Table 5. Comparison of the MILP and PBA methods in network MESH-20 assuming k = 5 and γ = 0.2; computation times T in seconds.
Columns: Scenario (μ, R) | MILP (cols_MILP, z_MILP, Δ_MILP, T_MILP) | PBA (cols_PBA, z_init, z_PBA, T_CG, T_BB, T_PBA) | Relative diff. (Δz, ΔT)
11040,20040%410128450.250%0.7
30120,54980%322959148342360%0.9
50201,185100%21658441810236192550%0.8
70281,593120%111094132012984159990%1.1
21040,75240%2311418460.670%3.4
30122,14460%4843136146313340%14.4
50204,0281346%360069621884273946638%7.7
70285,5181663%360010,7532098763991544%3.9
31041,19040%711185844150%14.2
30123,33650%20223226145374410%49.8
50206,0121463%3600729118750710761450%5.9
70288,2701867%360011,7922081486192167856%2.1