The use of IP routing protocols instead of the Spanning Tree Protocol, and of MPLS labels instead of VLAN identifiers, in the infrastructure significantly simplifies the deployment of the VPLS service. The difference between VPLS and L3VPN lies in the Customer Edge (CE) side equipment.
4.1. Our Data Plane: VPP
VPP, which stands for Vector Packet Processing, is a framework from the FD.io foundation that provides out-of-the-box, production-quality switch and router functionality [23,24]. It is a fast, scalable, and consistent packet-processing stack that runs on commodity hardware. VPP offers several useful properties: high performance, no context switches (it runs entirely in user mode), a lock-free datapath, and modularity. Its modular design allows users to attach new graph nodes without altering the core. Because VPP runs entirely in user space, its forwarding performance is several orders of magnitude higher than that of the kernel stack. Keeping cache misses within the per-packet CPU-cycle budget is what makes VPP fast and scalable [23,24,30,51]. Another advantage of VPP is that each atomic task is accomplished in a dedicated node, more than one packet is processed at a time, and vector instructions are used. On the one hand, some required services are missing and many areas, including the API control plane, are still open for development; on the other hand, VPP is a good plugin framework, independent of the kernel, with a bridge implementation and a complete Layer 2 and Layer 3 networking stack. These considerations led us to plug a VPLS module into DPDK-based VPP [23,24,28].
VPP can switch up to 14 million packets per second (Mpps) per core without packet loss, and its scalable, hierarchical Forwarding Information Base (FIB) provides excellent lookup and update performance. It is optimized for commodity hardware, taking advantage of DPDK for fast I/O and of vector instructions such as SSE, AVX, and AVX2, and it is tuned for instructions per cycle. VPP runs on multi-core platforms, where the central concept is packet batching: packets in a batch are processed together, which improves performance and reduces hardware cost. It also follows a “run to completion” model. Some of the main features of VPP at each layer are: (i) L2: VLAN, Q-in-Q, Bridge Domains, LLDP; (ii) L3: IPv4, IPv6, GRE, VXLAN, DHCP, IPSec, Discovery, Segment Routing; (iii) L4: TCP, UDP; and (iv) Control Plane: CLI, IKEv2.
As the VPP architecture is based on graph nodes, there are two types of nodes in the VPP graph: input and internal nodes [23]. Input nodes inject packets into the graph, whereas internal nodes contain the code that runs on packets. As shown in Figure 9, the VPP graph is replicated on each CPU core and, on the memory side, VPP allocates huge pages to accommodate more incoming packets. Packets then traverse the graph in a pipelined manner. The dispatch function in VPP takes up to 256 packets and forms them into a single vector. When the first packet enters a node, the I-cache is warmed with that node's code, so the remaining packets of the vector incur no instruction-cache misses while executing that node.
There are many packet-processing graphs inside VPP, and users can also attach their own workflows to the main graph. The fundamental gain VPP brings is cache efficiency. In particular, there are two distinct caches: the instruction cache (I-cache) and the data cache (D-cache); the former caches instructions and the latter caches data. As illustrated in Figure 10a, when the first Ethernet header is about to be processed, the node containing the Ethernet-processing instructions is loaded into the instruction cache after a cache miss; for the remaining headers, the instruction code stays in the I-cache. After the batch processing of the vector of packets is finished, the IP headers are processed (Figure 10b): VPP batches the IP headers and places the vector in the D-cache, while the node containing the IP-processing instructions is loaded into the I-cache after a single cache miss. In essence, instruction-cache misses are reduced significantly.
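To make the batching idea concrete, the following minimal Python sketch (purely illustrative; VPP itself is written in C) mimics how a vector of packets is passed through the graph one node at a time, so that each node's instructions are fetched once per vector rather than once per packet:

# Illustrative sketch of VPP-style vector dispatch; not VPP source code.
# Each node processes the entire vector before the next node runs, so a
# node's instructions stay hot in the I-cache after the first packet.

def ethernet_input(vector):
    # parse the Ethernet header of every packet in the vector
    return [p for p in vector if p["ethertype"] == 0x0800]

def ip4_lookup(vector):
    # resolve the next hop of every packet in the vector
    for p in vector:
        p["next_hop"] = "10.0.0.1"  # placeholder FIB result
    return vector

GRAPH = [ethernet_input, ip4_lookup]  # simplified two-node graph

def dispatch(vector):
    # VPP's dispatch function forms vectors of up to 256 packets
    for node in GRAPH:
        vector = node(vector)
    return vector

packets = [{"ethertype": 0x0800, "dst": "192.0.2.1"} for _ in range(256)]
forwarded = dispatch(packets)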
4.4. Implementation Phase
To implement VPLS using VPP and FRR, VPP must be installed on each switch. Then, MPLS routing must be configured for the virtual LANs using FRR. VPLS must also be configured on VPP: a VLAN must be created for VPLS, along with a Virtual Forwarding Instance (VFI) on each switch; each VFI must be associated with a VLAN so that the virtual LANs can be reached; and, finally, an MPLS label must be defined for each VFI so that the VFIs can communicate with each other. To combine FRR and VPP into a VPLS implementation, changes are required in the code of both projects. In particular, the FRR code must be changed so that it is compatible with VPP and can receive VPLS messages from it, while the VPP code must be changed so that it can receive and process VPLS messages from FRR. For example, FRR needs functions to send VPLS messages to VPP, and VPP needs functions to receive and process VPLS messages from FRR.
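As a hedged illustration of the VPP-side steps just listed, the sketch below uses VPP's Python API (vpp_papi) to create a bridge domain, which plays the role of the VFI, and to bind an interface to it; the socket path, interface index, and bridge-domain ID are placeholder assumptions, and message fields can differ between VPP releases:

# Hedged sketch of VPLS-related data-plane setup via VPP's Python API.
# The socket path, sw_if_index, and bd_id values are placeholders.
from vpp_papi import VPPApiClient

vpp = VPPApiClient(server_address="/run/vpp/api.sock")
vpp.connect("vpls-setup")

BD_ID = 10  # bridge domain acting as the VFI for this VPLS instance

# create the bridge domain with learning, forwarding, and flooding enabled
vpp.api.bridge_domain_add_del(bd_id=BD_ID, flood=1, uu_flood=1,
                              forward=1, learn=1, is_add=1)

# attach the customer-facing interface (placeholder index 1) to the domain
vpp.api.sw_interface_set_l2_bridge(rx_sw_if_index=1, bd_id=BD_ID, enable=1)

vpp.disconnect()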
The pseudo-codes of the proposed service, shown in Figure 12, can be divided into the control plane and the data plane, which are described as follows:
Control Plane (FRR): In this section, we explain the major implementation configuration in FRR [18] related to PWs, including several functions as described below:
Ldpd(): This function enables LDP on all active MPLS interfaces. LDP is used for distributing labels between PE routers and establishing LSPs in MPLS networks.
L2vpn(): This function is responsible for creating, deleting, and updating PWs. PWs are used to provide point-to-point connectivity between two CE devices over a service provider’s MPLS network. L2VPN is a technology that enables the transport of Layer 2 traffic between different locations over an MPLS network.
Ldpe(): This function sends hello packets periodically and creates sessions with other LDP neighbors. LDP neighbors are routers that are directly connected and exchange label information with each other. The hello packets are used to discover and establish LDP sessions with other routers.
Lde(): This function is responsible for the distribution of marked labels. When an LSP is established between two routers, labels are assigned to the traffic that is being transported over the MPLS network. The LDE (Label Distribution Entity) is responsible for distributing these labels to other routers in the network.
In order to implement the proposed VPLS architecture in the FRR section, changes should be made in the code related to L2VPN. In the L2vpn() section, VPLS-related functions should be added, including functions to create, delete, and update VPLS instances, as well as VPLS database-management functions. VPLS parameters must also be defined in the LDP virtual teletype (VTY) configuration file; for example, the VLAN ID and VLAN mapping must be specified for each VPLS. Moreover, functions for sending and receiving VPLS data at Layer 2 should be added to LDP; to achieve this, the parts of the code related to L2TP (Layer 2 Tunneling Protocol) and MPLS message processing may need to be modified. In summary, the implementation configuration related to PWs in FRR includes enabling LDP on all active MPLS interfaces; creating, deleting, and updating PWs; sending hello packets periodically; and distributing the assigned labels. These functions are essential for establishing and maintaining PW connectivity between CE devices over an MPLS network. In the FRR package, the ldpd directory contains the implementation code for the LDP daemon, which is responsible for managing the LDP protocol and establishing LSPs between routers in an MPLS network.
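Before turning to the source files, the following minimal sketch shows the kind of VTY configuration described above, enabling LDP and one VPLS instance; the interface names, addresses, and IDs are placeholders, and the exact keywords may vary between FRR releases:

! Hedged sketch of an FRR ldpd configuration with one VPLS instance.
mpls ldp
 router-id 10.0.0.1
 address-family ipv4
  discovery transport-address 10.0.0.1
  interface eth0
 exit-address-family
!
l2vpn CUSTOMER1 type vpls
 bridge br0
 member interface eth1
 member pseudowire mpw0
  neighbor lsr-id 10.0.0.2
  pw-id 100
!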
The LDP VTY configuration file is used to configure the LDP daemon. By activating L2vpn in this file, the LDP daemon is instructed to enable support for L2VPNs and to create, delete, and update PWs. The LDP daemon is written in the C programming language, and its source code is located in the ldpd directory of the FRR package. The ldpd code is organized into several files, including the following:
ldpd.c: This file contains the main function for the LDP daemon. It sets up the LDP protocol, initializes the LDP database, and starts the LDP event loop.
ldp_interface.c: This file contains the code for managing LDP interfaces. It handles interface events, such as interface up/down, and enables LDP on active MPLS interfaces.
ldp_l2vpn.c: This file contains the code for managing Layer 2 VPNs. It handles the creation, deletion, and updating of PWs, and also manages the L2VPN database.
ldp_ldp.c: This file contains the code for managing LDP sessions and exchanging label information with other LDP routers. It implements the LDP protocol and handles LDP messages.
ldp_label.c: This file contains the code for managing LDP labels. It handles label distribution, label assignment, and label retention.
Overall, the ldpd directory in the FRR package contains the implementation code for the LDP daemon. By activating L2vpn in the LDP VTY configuration file, the daemon is instructed to enable support for Layer 2 VPNs and to manage PWs. The implementation is written in C and organized into several files by functionality.
In the following, by activating L2vpn in the LDP VTY configuration file [56], the sub-functions summarized below are called:
Here, in line 3 of Algorithm 1, PseudoWireType() specifies the PW type based on the type given by the network administrator; the PW type is stored as a special code number denoting Ethernet or Ethernet_tagged.
DedicatedPseudoWire()—the interface and PW specified by the admin must be assigned. It must first be determined whether the given interface and PW have been used before; this is done by searching for the interface name in the existing l2vpn instances. If the PW ID or LSR ID was not previously used by another l2vpn, the PW is removed from the inactive PW list and assigned to the relevant l2vpn.
PwNeighborAddr()—the address of the router that neighbors the current router over the PW is assigned. The address family (IPv4 or IPv6) and the neighbor's router address are given by the admin. Because these data are set by the admin, their flag is stored as static.
PwNeighborId()—the router ID assigned to the current router by the PW is set. In the last line of Algorithm 1, PwIdAssign() assigns the router's ID, which is dedicated to the PW.
Algorithm 1 LDP_l2vpn_PWtype()
Require: Enable L2vpn
1: Input: [Int pw_type; struct l2vpn_pw *pw; union ldpd_addr addr; struct in_addr lsr_id]
2: pw = l2vpn_pw_find(l2vpn, InterfaceName);
3: Call sub-Function_1 PseudoWireType(pw_type);
4: Call sub-Function_2 DedicatedPseudoWire();
5: Call sub-Function_3 PwNeighborAddr(addr);
6: Call sub-Function_4 PwNeighborId(addr, Id);
7: Call sub-Function_5 PwIdAssign(pwid);
In the l2vpn configuration file, the following sub-functions are used inside the l2vpn function to perform the searches and changes needed to attach a PW. In line 2 of Algorithm 2, PwNew() creates the PW given by the admin. PwFind(pw, InterfaceName) searches the corresponding tree for the PW given by the admin; if such a PW exists, it can be activated in the next steps if it is disabled, and assigned to the l2vpn. The next two functions, PwFindActive(pw) and PwFindinActive(pw), report whether the given PW is active or inactive. PwUpdateInfo(pw) updates the index in the event of a change. PwInit(pw, fec) is called in ldpd(); when the LDE_ENGINE process runs, the PW information is transferred to the kernel. Calling PwExit(pw) in the ldpd() function performs the necessary updates for the PW. As shown in line 8 of Algorithm 2, two sub-functions appear there:
Regarding kernel_remove(), when a path changes, zebra advertises the new path without removing the previous one, so a further step is needed in zebra to identify the next hop that was removed along the way and to remove its label. Regarding kernel_update(), recently deleted local labels should not be reused immediately, because doing so causes network instability; the garbage-collector timer must be restarted, and the labels reused only once the network is stable.
Before PwNegotiate(nbr, fec) is called in line 19, the checks in lines 10–18 must be passed (by calling PwOk(pw, fec)): LSP formation towards the remote endpoint, labeling, MTU size, and PW status. The validity of the LSP formation is checked by zebra. However, if the PW status TLV is not supported by the remote peer, the peer automatically deletes this field; in this case, the PEs must invoke the process of discarding the label to change the signaling status.
According to RFC 4447 [34], if the PW status is not declared in the initial label mapping message, SendPwStatus(nbr, nm) returns zero to execute the label-discarding process. When the PW flags change, the new changes must be communicated to the neighbors assigned by the lde; these changes are sent to each neighbor in notification messages. After receiving a notification message via the RecvPwStatus(nbr, nm) function, the recipient PE updates the label to the correct value. If the PW has not been removed, the related configurations are kept; otherwise, the packet is discarded by calling RecvPwWildCard(nbr, nm, fec). If the PW status has changed from up to down, the assigned labels are discarded from the neighbors' tables by calling PwStatusUpdate(l2vpn, fec); conversely, if it returns to the up state, they are rewritten. Furthermore, in an l2vpn defined with the help of PwCtl(pwctl), if the PW is one of its members, the state of that PW is set to 1. By calling BindingCtl(fn, fec, pwctl), the labels specified in the label mapping for the PW are bound to the local and remote PW labels; after this function runs, the PW label values are fully allocated.
In line 30 of the algorithm, when the admin defines an l2vpn configuration file, the daemon has received the neighbor information and added it to its neighborhood table. This is done using ldpe_l2vpn_init(l2vpn, pw) and ldpe_l2vpn_exit(pw). Additionally, by calling ldpe_l2vpn_pw_exit(pw, tnbr), if the PW defined for the l2vpn has already been used, the duplicate is detected and prevented from being repeated.
Algorithm 2 l2vpn_PW()
1: Input: [struct l2vpn_pw *pw; struct fec fec; struct notify_msg nm; struct fec_node *fn; struct fec_nh *fnh; struct tnbr *tnbr; static struct ctl_pw pwctl]
2: Call struct l2vpn_pw *PwNew();
3: Call struct l2vpn_pw *PwFind(pw, InterfaceName);
4: Call struct l2vpn_pw *PwFindActive(pw);
5: Call struct l2vpn_pw *PwFindinActive(pw);
6: Call Void PwUpdateInfo(pw);
7: Call Void PwInit(pw, fec);
8: Call PwExit(pw); {comment: lde_kernel_remove and lde_kernel_update}
9: Call Int PwOk(pw, fec);
10: if fnh->remote_label == NO_LABEL then
11:   return (0); {comment: /* check for a remote label */}
12: end if
13: if pw->l2vpn->mtu != pw->remote_mtu then
14:   return (0); {comment: /* MTUs must match */}
15: end if
16: if (pw->flags & F_PW_STATUSTLV) && pw->remote_status != PW_FORWARDING then
17:   return (0); {comment: /* check pw status if applicable */}
18: end if
19: Call Int PwNegotiate(nbr, fec);
20: if pw == NULL then
21:   return (0); {comment: pw not configured; record the mapping later}
22: end if
23: {comment: /* RFC 4447—pseudowire status negotiation */}
24: Call Void SendPwStatus(nbr, nm);
25: Call Void RecvPwStatus(nbr, nm);
26: Call Void RecvPwWildCard(nbr, nm, fec); {/* RFC 4447 PwID group wildcard */}
27: Call Int PwStatusUpdate(l2vpn, fec);
28: Call Void PwCtl(pwctl);
29: Call Void BindingCtl(fn, fec, pwctl);
30: Call Void ldpe_l2vpn_init(l2vpn, pw);
31: Call Void ldpe_l2vpn_exit(pw);
32: Call Void ldpe_l2vpn_pw_exit(pw, tnbr);
In the ldpd() configuration file, if the l2vpn configuration changes, a new configuration is produced by inserting or deleting the changed items.
Here, in line 4 of Algorithm 3, calling MergeInterface() checks the changes in the interface of an l2vpn file: if the interface has been deleted, it is removed from the database and released, and if it is not found, it is registered in the database. In the MergeActivePw() function, if an active PW has been deleted, it is removed from the database and released. In UpdateExistingActivePw(), if the LDP address of a PW has changed, it is sufficient to reinstall the targeted neighbors; under any of the following conditions, however, the session must be changed.
If the PW flags or the status TLV configuration have changed, all neighbor sessions must be reset. If the PW type, maximum transmission unit (MTU), PW ID, or LSR ID has changed, the PW FEC must be reinstalled. Finally, MergeInActivePw() behaves as described for active PWs, except that the operation is performed on inactive PWs.
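The restart/reinstall rules above can be summarized in the following hedged Python sketch; the real logic lives in FRR's C code, and the field names here are illustrative only:

# Illustrative decision logic for merging an updated PW configuration;
# mirrors the prose above, not the actual FRR C implementation.
def merge_existing_active_pw(old, new):
    if old["flags"] != new["flags"] or old["status_tlv"] != new["status_tlv"]:
        return "reset-neighbor-sessions"      # session restart required
    if (old["pw_type"] != new["pw_type"] or old["mtu"] != new["mtu"]
            or old["pw_id"] != new["pw_id"] or old["lsr_id"] != new["lsr_id"]):
        return "reinstall-pw-fec"             # PW FEC must be reinstalled
    if old["ldp_addr"] != new["ldp_addr"]:
        return "reinstall-targeted-neighbor"  # address change only
    return "no-action"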
Finally, after applying the above algorithms, our control plane will be ready to provide the proposed service.
Algorithm 3 Ldpd()
1: Input: [struct l2vpn_pw *pw; union ldpd_addr addr; struct nbr *nbr; struct ldpd_conf *xconf; struct l2vpn *l2vpn]
2: previous_pw_type = l2vpn->pw_type;
3: previous_mtu = l2vpn->mtu;
4: Call static void MergeInterface(xconf, l2vpn);
5: Call MergeActivePw(pw);
6: {comment: /* find deleted active pseudowires and also find new active pseudowires */}
7: Call UpdateExistingActivePw(pw, addr); {/* changes that require a session restart */}
8: Call MergeInActivePw(pw);
Data Plane (VPP): In this section, we explain the algorithms implemented in VPP [23] as our fast data plane to create VPLS based on its Python APIs. The logical process for the VPLS code in the Data Plane Development Kit (DPDK) environment is as follows:
(a) Creation of an L2 tunnel using the VppMPLSTunnelInterface algorithm;
(b) Addressing routes using VppMplsRoute;
(c) Bridge domain using the structured set_l2_bridge;
(d) Obtaining the packets and addressing them;
(e) Packet encapsulation methods;
(f) Learning and forwarding part;
(g) The stream section of the packets in each direction;
(h) Disabling the bridge domain after finishing work.
In the following, the above items are described along with the structure in the corresponding platform.
In process (a): The VppMPLSTunnelInterface algorithm is called from the main algorithm VPP_Interface. The goal of this algorithm is to establish L2 over MPLS. First, the VPP_Interface algorithm is checked, then VppMPLSTunnelInterface, and finally VPP_Neighbor. The ABCMeta, util, and VPP_Neighbor items need to be imported into the VPP_Interface algorithm. The ABC facility provides the infrastructure in Python for defining abstract base classes (ABCs), and ABCMeta is the metaclass used to define and create them. Regarding the util item, the util algorithm must be activated to establish the test structure in VPP. The VPP_Interface algorithm creates VPP interfaces and related objects. Many properties are defined in this algorithm; the ones required for the VPLS service include assigning interface indices, selecting names for interfaces, and activating MPLS on the VPP interface. VppMPLSTunnelInterface itself imports the following: from vpp_ip_route import VppRoutePath, and from VPP_Interface import VppInterface. The last item is the VPP_Neighbor algorithm, which is required to establish VPP_Interface. In Algorithm 4, we have the function for finding neighbors. It should be noted that this section works with the inet_pton, inet_ntop, AF_INET, and AF_INET6 socket facilities. The VppMPLSTunnelInterface code is as follows:
Algorithm 4 VppMPLSTunnelInterface(VppInterface)
1: {Create MPLS tunnel interface}
2: def __init__(self, test, paths, is_multicast=0, is_l2=0):
3:   self._sw_if_index = 0 {creating MPLS tunnels with the paths, is_multicast, and is_l2 parameters}
4:   super(VppMPLSTunnelInterface, self).__init__(test)
5:   self._test = test
6:   self.t_paths = paths
7:   self.is_multicast = is_multicast
8:   self.is_l2 = is_l2
9: def add_vpp_config(self):
10:   self._sw_if_index = 0xffffffff
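Within VPP's Python test framework, such a tunnel might be instantiated as sketched below; the label value, peer address, and test fixture names are illustrative assumptions:

# Hedged usage sketch of VppMPLSTunnelInterface inside a VPP test case.
from vpp_ip_route import VppRoutePath
from vpp_mpls_tunnel_interface import VppMPLSTunnelInterface

def create_l2_mpls_tunnel(test):
    # one path toward the remote PE, pushing placeholder MPLS label 42
    path = VppRoutePath(test.pg0.remote_ip4,
                        test.pg0.sw_if_index,
                        labels=[42])
    tun = VppMPLSTunnelInterface(test, [path], is_l2=1)
    tun.add_vpp_config()  # program the tunnel into VPP
    tun.admin_up()        # bring the tunnel interface up
    return tun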
In process (b), routing is carried out by calling the vpp_ip_route algorithm. Before defining this function, the following rules must be satisfied:
1: {comment: # from vnet/vnet/mpls/mpls_types.h}
2: MPLS_IETF_MAX_LABEL = 0xfffff
3: MPLS_LABEL_INVALID = MPLS_IETF_MAX_LABEL + 1
This section sets the maximum label to 0xfffff and derives the invalid MPLS label value from it. An important part of VPLS performance depends on this, as the important VppMplsRoute function is defined in vpp_ip_route.
In Algorithm 5, the local label, EOS bit, and table ID are set. If the EOS bit equals one, the label is at the bottom of the stack, meaning no further labels follow. The route-finding function, find_route, is defined in the last line of the algorithm.
Algorithm 5 VppMplsRoute(VppObject)
1: {comment: MPLS Route/LSP}
2: def __init__(self, test, local_label, eos_bit, paths, table_id=0, is_multicast=0):
3:   self._test = test
4:   self.paths = paths
5:   self.local_label = local_label
6:   self.eos_bit = eos_bit
7:   self.table_id = table_id
8:   self.is_multicast = is_multicast
9: def find_route(test, ip_addr, len, table_id=0, inet=AF_INET)
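A hedged usage sketch follows: an incoming end-of-stack local label (placeholder value 55) is bound to the L2 tunnel created earlier, so that traffic arriving with that label is delivered into the bridge domain:

# Hedged sketch: bind local label 55 (EOS) to the MPLS tunnel interface.
from vpp_ip_route import VppRoutePath, VppMplsRoute

def bind_local_label(test, tun):
    route = VppMplsRoute(test, 55, 1,  # local label 55, eos_bit = 1
                         [VppRoutePath("0.0.0.0", tun.sw_if_index)])
    route.add_vpp_config()
    return route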
In process (c), the bridge-domain section is applied using the vpp_papi_provider algorithm. All the APIs required by users are available in this algorithm. For VPLS, it is necessary to create or remove bridge-domain interfaces; in this section, applying the sw_interface_set_l2_bridge configuration is sufficient.
(d) Packets are taken from the customer interface. The physical (MAC) and IP addresses of the packets should also be written in the configuration.
(e) Packet encapsulation: after step (d), the bridge domain learns and sends each packet, and this transmission is in encapsulated form. Functions such as add_stream() and enable_capture() from vpp_pg_interface must be applied and called: add_stream() adds streams of packets to VPP, and enable_capture() enables capture on the packet-generator interfaces.
(f) Forwarding: based on the packets captured in the previous step, packets can be forwarded according to what has been learned by applying the get_capture command. This function from vpp_pg_interface receives the captured packets.
In steps (g) and (h), a stream of packets is sent in each direction and, after the work is completed, the bridge-domain interface can be disabled. To achieve this, it is sufficient to set the enable parameter of the sw_interface_set_l2_bridge function to zero, as sketched below.
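Steps (e)–(h) could look roughly as follows in the same Python test framework; the packet contents, interface names, and bridge-domain ID are illustrative, and Scapy is used for packet construction as in VPP's own tests:

# Hedged sketch of streaming packets through the bridge domain and tearing down.
from scapy.layers.l2 import Ether
from scapy.layers.inet import IP, UDP

def run_stream_and_teardown(test, bd_id=10):
    # (e)/(g): build and inject a small stream on the customer-facing interface
    pkts = [Ether(src=test.pg0.remote_mac, dst="00:00:de:ad:be:ef") /
            IP(src="192.0.2.1", dst="192.0.2.2") /
            UDP(sport=1234, dport=1234)
            for _ in range(65)]
    test.pg0.add_stream(pkts)
    test.pg1.enable_capture()  # capture on the egress/peer interface
    test.pg_start()

    # (f): packets forwarded over the PW are captured for verification
    rx = test.pg1.get_capture()

    # (h): detach the interface from the bridge domain when finished
    test.vapi.sw_interface_set_l2_bridge(rx_sw_if_index=test.pg0.sw_if_index,
                                         bd_id=bd_id, enable=0)
    return rx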
After applying the above processes, our data plane will be ready to provide the proposed service.