1. Introduction
Data traffic is increasing massively as the mix of services shifts towards interactive media and streaming. It has been predicted that the number of connected devices will grow to reach 100 billion by the year 2025 [
1]. Passive Optical Networks (PONs) are one of the major driving forces behind residential broadband access and 5G networks [
2] (for example, in cloud radio access networks (C-RAN) [
3,
4]), as they meet the ever-increasing demand for bandwidth-intensive applications such as ultra-high-definition TV, immersive video, and the stringent end-to-end latency required by mission-critical applications.
A PON is a cost-effective optical access technology whose main advantage is the use of passive network elements to connect users. It consists of a central unit called an Optical Line Terminal (OLT) at the central office of the internet service provider, which connects through optical fiber to several Optical Network Units (ONUs) located in, or close to, the customers’ premises within a 20 km range [
5]. The optical fiber is shared by introducing passive optical splitters into the optical distribution network (ODN) between the OLT and the ONUs, allowing a single feeder fiber to reach up to 64 users [
6] (although some architectures allow split ratios of up to 1024 [
7]). In the upstream direction, the splitter combines the upstream wavelengths from the ONUs to the OLT [
8]. The PON architecture is referred to as a point-to-multipoint (P2MP) system [
9]. It is a very cost-effective and easy-to-manage solution, as it does not require any active electronic devices between the OLT and the ONUs [
10].
The PON system is based on a shared model that allows bi-directional communication between the ONUs and the OLT. The downstream traffic is broadcast from the OLT to all ONUs while the upstream communication from the ONUs to the OLT is achieved using a time-sharing principle [
8]. Owing to the shared nature of the PON, and ONU systems competing for network capacity, a mechanism must be put in place to control the allocation of the upstream transmission capacity in real time, thus avoiding data collision if two or more ONUs transmit simultaneously towards the OLT. PONs employ a Dynamic Bandwidth Allocation (DBA) algorithm to orchestrate the allocation of network resources in the shared medium. One of the main requirements of a DBA is that it satisfies the low latency and huge bandwidth requirements of emerging applications [
11].
The new generation of PON technology is based on the Time- and Wavelength-Division Multiplexing (TWDM) technique, which has been described as an evolutionary step that allows using multiple wavelengths to increase the capacity of the PONs [
12]. TWDM is a hybrid technique that combines Wavelength-division multiplexing (WDM) capacity expansion with the inherent resource granularity of a Time-division multiplexing (TDM-PON) to meet the growing demands for bandwidth, reach and aggregation [
Typical TWDM-PON proposals, based on four wavelengths, have a maximum throughput capacity of 40 Gbps, thus meeting the requirements of the NG-PON2 standard [
4,
14]. TWDM-PON is used as a major application in mobile fronthaul networks for connecting the centralized baseband unit (BBU) and remote radio heads (RRHs) in 5G C-RANs, which have extreme requirements in terms of capacity, latency, and cost-efficiency [
15,
16].
In TWDM-based PONs, the resource allocation process in the upstream link is two-dimensional, consisting of wavelength and time slot allocation. The DBA scheme dynamically allocates the wavelengths (typically four) among the ONUs and shares available bandwidth in terms of time slots among the ONUs in the upstream link. An important characteristic of TWDM-PONs is the use of tunable transceivers at the ONUs [
5], which are thus enabled to switch their wavelengths. It is important for the DBA to efficiently handle the assignment of the wavelengths, which involves the switching of ONUs from one wavelength to the other. The wavelength assignment decision is communicated to the ONUs by the OLT, and the ONUs can transmit their frames at their allotted time slots on the assigned wavelength [
5]. This approach makes it necessary for ONUs to change their wavelengths to optimize the use of the shared medium. ONUs use tunable lasers to facilitate the switching of the wavelengths as instructed by the OLT, thus adding both complexity and a Laser Tuning Time (LTT) delay that may have a great impact on the performance of the system [
4,
5]. Only a few research works consider LTT when designing or evaluating the performance of DBAs for TWDM networks [
17,
18]. It is therefore necessary to develop more sophisticated DBA algorithms that ensure a fair distribution of resources among ONUs while taking into consideration the delays caused by lasers switching between wavelengths.
In this paper, we propose a novel DBA algorithm that efficiently manages bandwidth allocation and wavelength assignment while considering the LTT delay. Transmitting on a multi-wavelength PON poses a scheduling problem in which the completion time of the requests directly affects the delay. Therefore, we aim to reduce the queue delay by introducing a scheduling scheme based on the Longest Processing Time first (LPT) principle [
19]. The goal of LPT is to minimize the maximum completion time for processing and transmitting the requests from the ONUs. This is achieved by the OLT sorting the ONUs’ bandwidth requests in descending order, with the largest request being processed first. Finally, we introduce weight-based QoS differentiation following the Max-Min Weighted Fair Share principle [
20] to ensure a guaranteed bandwidth for the demands requested by the users, since the traditional IPACT algorithm does not guarantee QoS [
21].
The main contribution of our TWDM-DBA is to effectively reduce the end-to-end delay and efficiently utilize the bandwidth while achieving QoS differentiation. We validated our algorithm by comparing it with the traditional IPACT algorithm, extended to use up to four wavelengths. The performance metrics of our study are queue delay and throughput. The results show that our proposed DBA significantly improves network performance in terms of queue delay and throughput while adding QoS differentiation.
The remainder of the paper is organized as follows. Related work and the state of the art are summarized in
Section 2.
Section 3 introduces the proposed TWDM-DBA algorithm.
Section 4 describes a performance evaluation of the proposed approach using simulation results. Conclusions and future work are described in the last section.
3. The Proposed Algorithm
Our proposal builds on the IPACT DBA [
21], an online algorithm that follows an interleaved polling scheme to schedule transmissions from the ONUs in a centralized manner. The requests from the ONUs are sent to the OLT, which thus has complete knowledge of the ONUs’ queue occupancy and of when the last bit of each granted transmission will arrive. With this knowledge, the OLT can start scheduling the grant for the next ONU immediately. Since the OLT does not have to wait for the rest of the ONUs’ requests before it starts processing them, the waiting time is reduced and the overall delay is minimized.
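To illustrate the interleaved polling principle, the following simplified Python sketch shows how an OLT can compute the start of the next grant as soon as it knows when the previous ONU's burst will end. The function name, the guard-time value, and the data structures are illustrative assumptions and are not part of our simulator.

# Simplified sketch of interleaved polling (IPACT-style) grant timing.
# Names and timing values are illustrative only.
GUARD_TIME = 1e-6  # guard interval between consecutive ONU bursts (s), assumed

def schedule_grants(reports, channel_free_at=0.0):
    """reports: list of (onu_id, requested_tx_time_s) in polling order.
    Returns a list of (onu_id, grant_start_s, grant_length_s).
    Each grant is computed as soon as the corresponding REPORT is processed,
    without waiting for the remaining ONUs, so the channel stays busy."""
    grants = []
    for onu_id, tx_time in reports:
        start = channel_free_at + GUARD_TIME   # earliest slot after the previous burst
        grants.append((onu_id, start, tx_time))
        channel_free_at = start + tx_time      # time at which this burst ends
    return grants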
The original IPACT algorithm has been extended with the capability of coping with multiple wavelengths of the TWDM-PON in [
43]. Optimally scheduling the requests from the ONUs on the four wavelengths of a TWDM-PON is a problem similar to the scheduling of computational tasks in a multiprocessor environment with identical processors acting in parallel. Mapping wavelength channels to machines and ONUs’ requests to jobs turns this into an NP-hard optimization problem, which is computationally prohibitive to solve exactly [
44]. Given a set J of jobs, where job Ji has length Li, and ω wavelengths, our objective is to schedule all jobs in J on the ω wavelengths, without overlap, so that the overall completion time is as early as possible. Since a large number of requests coming from the ONUs must be scheduled on the four wavelengths in real time, heuristic approaches are the most suitable for achieving near-optimal scheduling efficiency [
19].
We introduce the LPT scheduling algorithm, due to its simplicity, to solve the problem of scheduling the requests on multiple wavelengths to achieve minimal makespan of the requests’ processing [
45,
46]. LPT is a non-preemptive scheduling algorithm that uses priority-based ordering of the requests to achieve near-optimal efficiency. LPT sorts the requests made to the OLT during a cycle i by the ONUs, J1(i), J2(i), …, JM(i), according to the length of time needed for them to be processed, such that Lr(i) ≥ Ls(i) ≥ … ≥ Lm(i), with r, s, …, m ≤ M. LPT has the advantage of scheduling almost equal loads on the wavelengths and avoiding situations in which some wavelengths remain idle. The upper limit of the LPT makespan, $C_{\max}^{LPT}$, has the approximation ratio shown in (1), where $C_{\max}^{LPT}$ is the maximum makespan of the LPT heuristic and $C_{\max}^{OPT}$ is the maximum makespan of an optimal scheduler [
47].
$$\frac{C_{\max}^{LPT}}{C_{\max}^{OPT}} \le \frac{4}{3} - \frac{1}{3\omega} \qquad (1)$$
At the beginning of each cycle, the algorithm acknowledges the number of connected ONUs whose queues are not empty. Based on the lengths of the jobs, the jobs reported by the k connected ONUs (Jm) are sorted in descending order. The ONUs are then assigned to the available wavelengths ω such that the ONU m whose job Jm(i) has the longest processing time Lm(i) is processed first, followed by the next one, each being assigned to the minimally loaded channel. If the total requested time fits within the maximum cycle time (Σ Lm(i) ≤ ω·δmax), the requested time is granted to the connected ONUs in cycle i; otherwise, only up to δmax is granted per wavelength, and certain jobs with lower lengths have to wait for the next cycle (i + 1). The aforementioned parameters are summarized in
Table 2, and the pseudocode is provided in Algorithm 1.
Furthermore, to guarantee fairness in the sharing of resources, since IPACT has no inherent QoS mechanism, we introduce QoS guarantees based on the Weighted Fair Queuing (WFQ) scheme in accordance with the Max-Min Weighted Fair Share principle [
48] for weight-based differentiation of users. WFQ is a discrete implementation of the generalized processor sharing (GPS) policy and an extension of fair queueing. It is realistically assumed that users have different bandwidth needs with varying priorities; therefore, not all the ONUs request an equal share of the resources in every cycle. Consequently, allocating equal resources to all of them would waste resources on the ONUs whose demands are lower than their allocated grants, while some ONUs with higher requests would not be satisfied. Accordingly, ONUs with higher bandwidth demands are given more weight than ONUs with lower bandwidth demands, and they are thus allocated relatively more resources.
As shown in the pseudocode provided in Algorithm 2, we associate weights w1, w2, …, wm with ONUs 1, 2, …, m, which reflect their relative resource shares. The resources are allocated to the ONUs in increasing order of their requests, normalized by their weights, with the small requests being fully granted first. The ONU with the lowest demand is maximized first; only once it is satisfied is the ONU with the second-lowest demand maximized, then the ONU with the third-lowest demand, and so on. Therefore, no ONU gets more than its demand, and the ONUs whose demands are not met get a fair share of the resources in proportion to their weights. This also avoids the situation in which the resources are monopolized by the ONUs with the largest requests, consequently reducing network congestion to some extent. We combine the WFQ principle with the LPT algorithm to obtain WFQLPT, a hybrid algorithm that provides inherent QoS together with the minimal makespan associated with LPT.
Algorithm 1. Pseudocode for LPT executed at the OLT for each cycle i.
Pseudocode of the LPT Heuristic Non-Preemptive Scheduler
for m = 1:M
  if (ONUm is connected && Queuem ≠ empty)
    Consider ONUm; k = Connected_ONUs++;
  end if
end for
for m = 1:k
  sort Jm(i) in descending order based on their length Lm(i);
end for
if (Σ Lm(i) ≤ ω·δmax)
  grant all Jm(i) in cycle i;
end if
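For reference, the LPT step of Algorithm 1 can be expressed in a few lines of Python. This is a sketch under the assumption of ω identical wavelength channels; the function and variable names are ours and do not correspond to the simulator's implementation.

import heapq

def lpt_schedule(jobs, num_wavelengths=4):
    """jobs: list of (onu_id, length_s) pairs reported in the current cycle.
    Sorts the jobs in descending order of length and always places the next
    job on the minimally loaded wavelength; returns the assignment and the
    resulting makespan (the load of the most occupied wavelength)."""
    loads = [(0.0, w) for w in range(num_wavelengths)]  # (current load, wavelength)
    heapq.heapify(loads)
    assignment = {}
    for onu_id, length in sorted(jobs, key=lambda j: j[1], reverse=True):
        load, w = heapq.heappop(loads)        # minimally loaded channel
        assignment[onu_id] = w
        heapq.heappush(loads, (load + length, w))
    makespan = max(load for load, _ in loads)
    return assignment, makespan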
Regarding the assignment of wavelengths (ω), our proposed DBA algorithm combines the time allocation and wavelength assignment algorithms following the JTWS scheme previously described in [
12]. Once the OLT receives the requests from all connected ONUs by following an offline scheduling framework, it sorts the jobs according to the LPT scheme and thereafter assigns wavelengths in accordance with the Next Available Supported Channel (NASC) scheduling policy [
33]. This allows the ONUs to be assigned to the next available wavelength, where their requests will be granted. The choice of NASC aligns with the principles of the LPT scheme, in which the unassigned task with the largest computation time is assigned to the next available wavelength [
49].
The assignment of the wavelength according to NASC occurs in offline scheduling mode. The offline scheduling framework gives room for the LPT scheme and allows for applying WFQ QoS differentiation as scheduling decisions are made with full knowledge of all the jobs to be scheduled for a particular scheduling cycle. The cycle is the time difference between two consecutive allocation decisions. A profound advantage of the offline scheduling framework is the increased level of scheduling control, by which the OLT differentiates QoS. Specifically, the OLT adds all the ONUs with REPORT messages into a scheduling pool, and the scheduling is done after the OLT has sorted the REPORT messages and prioritized the ONUs based on their respective QoS. The channel is considered busy until the end of the last scheduled reservation, and then the procedure is applied for considering LTT when deciding whether or not to tune the supported wavelengths. Therefore, when a wavelength becomes free, it is assigned to the ONU with the longest job in the pool, as shown in Algorithm 3.
Algorithm 2. Pseudocode for WFQ executed at the OLT for each cycle i.
Pseudocode of the Max-Min weighted fair-share queuing
for m = 1:M
  if (ONUm is connected && Queuem ≠ empty)
    Consider ONUm; k = Connected_ONUs++;
  end if
end for
for m = 1:k
  search for the smallest weight (wmin) among the connected ONUs;
  normalize the remaining weights (w'm) of the remaining connected ONUs based on wmin;
end for
for m = 1:k
  sharem = w'm · C;        // C: upstream capacity available in cycle i
end for
for m = 1:k
  grantm = min(sharem, Lm(i));
end for
for m = 1:k
  if (sharem > Lm(i))
    Remaining += (sharem - Lm(i));
    distribute the Remaining among the unserved ONUs;
  end if
  if (ONUm is not served)
    grantm = grantm + w'm · Remaining;
    if (grantm ≥ Lm(i))
      grant all jobs of ONUm;
    else
      the remaining jobs of ONUm wait for the next cycle;
    end if
  end if
end for
if (Σ grantm ≤ ω·δmax)
  grant all jobs;
else
  the remaining jobs wait for the next cycle;
end if
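The Max-Min weighted fair-share step of Algorithm 2 can likewise be sketched in Python. This is an illustrative water-filling implementation; the weight values, dictionaries, and function name are our assumptions and not the simulator's code.

def max_min_weighted_share(demands, weights, capacity):
    """demands, weights: dicts keyed by ONU id; capacity: total grant budget.
    Small demands (normalized by weight) are fully granted first; no ONU gets
    more than its demand, and unsatisfied ONUs share the leftover capacity in
    proportion to their weights."""
    allocation = {m: 0.0 for m in demands}
    active = set(demands)
    remaining = capacity
    while active and remaining > 1e-12:
        total_w = sum(weights[m] for m in active)
        share = {m: remaining * weights[m] / total_w for m in active}
        satisfied = {m for m in active if demands[m] - allocation[m] <= share[m]}
        if not satisfied:                    # nobody can be fully served
            for m in active:
                allocation[m] += share[m]    # distribute proportionally to weights
            break
        for m in satisfied:                  # fully grant the small demands
            remaining -= demands[m] - allocation[m]
            allocation[m] = demands[m]
        active -= satisfied
    return allocation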
Algorithm 3. Pseudocode for NASC with LTT executed at the OLT for each cycle i.
Pseudocode of the Wavelength Assignment-NASC with LTT
if (ωassigned == ωcurrent)
  No tuning; queue_delay += wait(ωcurrent);
else if (wait(ωassigned) + LTT ≥ wait(ωcurrent))
  No tuning; queue_delay += wait(ωcurrent);
else
  tune to ωassigned; queue_delay += wait(ωassigned) + LTT;
end if
Our algorithm sorts the requests from the ONUs at the OLT in descending order of the time needed to process them, according to the LPT principle. The OLT sends grant messages (GATE) to the ONUs and schedules the ONU with the longest processing time first, which then transmits on the next available wavelength. We also introduce the concept of LTT: if the wavelength that the ONU is currently tuned to is the same as the one newly assigned by the OLT, no laser tuning time is added. As shown in Algorithm 3, if the newly assigned wavelength is different from the current one, the ONU compares the time needed for its current wavelength to become free with the time needed for the newly assigned wavelength to become free plus the laser tuning time. If tuning to the new wavelength would take longer, the ONU remains on its current wavelength and no tuning delay is added; if it would take less, the ONU tunes to the newly assigned wavelength and the tuning delay is added. This process is repeated whenever GATE and REPORT messages are exchanged during the lifetime of the communication between the OLT and the ONUs.
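The wavelength-tuning decision described above can also be written compactly in Python. This sketch reflects one reading of Algorithm 3; the LTT value follows the class 2 assumption used in our simulations, and the names are illustrative.

LTT = 10e-6  # laser tuning time in seconds (class 2 devices), assumed

def choose_wavelength(current_w, assigned_w, free_at):
    """free_at[w]: time until wavelength w becomes free (s).
    Returns (wavelength_to_use, tuning_delay_added)."""
    if assigned_w == current_w:
        return current_w, 0.0                    # same wavelength: no tuning
    if free_at[assigned_w] + LTT >= free_at[current_w]:
        return current_w, 0.0                    # switching costs more: stay, no LTT
    return assigned_w, LTT                       # switching is faster: tune, add LTT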
Figure 1 illustrates the steps in the application of our algorithm.
4. Performance Evaluation
In this section, we evaluate the performance of our DBA algorithm. To validate its efficiency, we carried out extensive simulations using OPNET Modeler under different conditions.
4.1. Simulation Model
Our simulation setup consists of ONUs at the customers’ premises, a centralized OLT, and an ODN that emulates a passive optical splitter/combiner, which splits the optical fiber running from the OLT to the ONUs. To check the impact of the number of ONUs in a PON system on our algorithms, we created three sets of scenarios, with 8, 16, and 64 ONUs, respectively. The 16-ONU scenarios are further divided into two subcategories based on the distance from the ONUs to the OLT: in one, the 16 ONUs are physically located at distances uniformly distributed between 18 km and 20 km, while in the other they are located at distances uniformly distributed between 2 km and 20 km. In the downstream direction, the OLT broadcasts data to all ONUs, and each ONU filters the data addressed to it and discards the rest. The upstream channel has a total capacity of 4 Gbps over four wavelengths, each with a rate of 1 Gbps, dynamically managed by the DBA. All ONUs are connected to their respective traffic sources and equipped with a packet generator over a 1 Gbps link, thus avoiding possible bottlenecks. The maximum cycle time (
δmax) is 1 ms, and the sources generate self-similar traffic [
43,
50] with Hurst parameter H = 0.75 and a mean packet rate that is adjusted according to varying offered load. The frame size follows a uniform distribution with a lower limit of 512 bits and an upper limit of 12,144 bits, thus realistically modeling Ethernet traffic [
43].
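For reproducibility, the traffic sources can be approximated as sketched below in Python. This is only an approximation of the OPNET traffic model: it uses a heavy-tailed Pareto renewal process with shape α = 3 − 2H as a common stand-in for self-similar traffic with the target Hurst parameter, together with the uniform frame-size distribution described above; all names are ours.

import random

HURST = 0.75
ALPHA = 3 - 2 * HURST                         # Pareto shape matching the Hurst parameter
MIN_FRAME_BITS, MAX_FRAME_BITS = 512, 12144   # Ethernet-like frame sizes

def pareto_interval(mean_interval_s):
    """Heavy-tailed (Pareto) inter-arrival time with the requested mean."""
    xm = mean_interval_s * (ALPHA - 1) / ALPHA    # scale giving the desired mean
    u = 1.0 - random.random()                     # uniform in (0, 1]
    return xm / (u ** (1.0 / ALPHA))

def next_frame(mean_interval_s):
    """Returns (inter_arrival_time_s, frame_size_bits) for one generated frame."""
    return pareto_interval(mean_interval_s), random.randint(MIN_FRAME_BITS, MAX_FRAME_BITS)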
Several scenarios are created for the simulations in order to evaluate the effect of LPT scheduling, WFQ-based differentiation, and the laser tuning time on the algorithms. A guaranteed weight, corresponding to a specified percentage of the system’s total bandwidth capacity, is allocated to some ONUs, thus giving them a different QoS. A laser tuning time of LTT = 10 µs is selected in reference to the ITU-T G.989.2 specification for class 2 devices [
27]. Traffic loads vary from 5% to 100% of the total load, where the maximum global offered load is 4 Gbps. The simulated algorithm sets are classified as IPACT, LPT, WFQ, and WFQLPT, depending on the configuration:
- Set 1.
IPACT with four wavelengths at LTT = 0 and 10 µs.
- Set 2.
LPT over IPACT with four wavelengths at LTT = 0 and 10 µs.
- Set 3.
WFQ with four wavelengths at LTT = 0 and 10 µs.
- Set 4.
LPT over WFQ (WFQLPT) with four wavelengths at LTT = 0 and 10 µs.
For the simulations performed using the IPACT and LPT algorithms, the ONUs have an equal share of the total bandwidth and transmit on four wavelengths. To evaluate the QoS, we introduced different weights into the WFQ and WFQLPT algorithms. The 16 ONUs are configured such that ONU 1 and ONU 2 have a guaranteed share of 20% (800 Mbps) and 10% (400 Mbps), respectively, and ONUs 3 to 16 each have 5% (200 Mbps).
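For clarity, the guaranteed shares used in the WFQ and WFQLPT scenarios translate into the following per-ONU rates; this is an illustrative computation and the variable names are ours.

TOTAL_UPSTREAM_MBPS = 4000   # four wavelengths at 1 Gbps each

# ONU 1: 20%, ONU 2: 10%, ONUs 3-16: 5% each (shares sum to 100%)
weights = {1: 0.20, 2: 0.10, **{m: 0.05 for m in range(3, 17)}}
assert abs(sum(weights.values()) - 1.0) < 1e-9

guaranteed_mbps = {m: w * TOTAL_UPSTREAM_MBPS for m, w in weights.items()}
# -> ONU 1: 800 Mbps, ONU 2: 400 Mbps, ONUs 3-16: 200 Mbps each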
4.2. Results
In terms of throughput and queue delay, we evaluate the performance of our novel DBA algorithm in comparison with IPACT, which has been extended to support four wavelengths. The results and discussion of each parameter based on allocated weight are presented as follows.
4.2.1. Throughput
The throughput represents the average number of bits successfully transmitted by the ONUs per unit time, measured in Mbps, and it includes the Ethernet header (destination and source addresses) and trailer (frame check sequence). In this subsection, we present the comparative performance of the four DBA algorithms in terms of upstream throughput under varying offered loads.
Figure 2 shows the QoS differentiation of our algorithms by allocating different weights to the ONUs in order to see the effect of LPT scheduling on the algorithms. We separate the Max-Min-based algorithms (WFQ and WFQLPT) from IPACT and LPT because of uneven bandwidth allocated to different ONUs.
Figure 2 (left) presents the results for IPACT and LPT transmitting on all four wavelengths at LTT = 0 for all the ONUs. In this case, all ONUs have an equal share of the system, with each ONU having a share of 250 Mbps. Here, we see that both IPACT and LPT behave similarly, as both achieve the same throughput of up to 220 Mbps before reaching saturation at an offered load of 210 Mbps. Thus, the introduction of LPT scheduling has no noticeable effect on IPACT in terms of throughput.
Figure 2 (right) shows the results for WFQ and WFQLPT with different weights. This scenario displays the QoS differentiation of the ONUs, with ONUs 1 and 2 having a share of 20% (800 Mbps) and 10% (400 Mbps), respectively; and ONUs 3 to 16 each have 5% (200 Mbps). The results show that ONU 1 can transmit up to 730 Mbps at a bandwidth efficiency of 91% before reaching saturation at an offered load of 725 Mbps. ONU 2 can transmit up to 375 Mbps at a bandwidth efficiency of 93.75% before reaching saturation at 360 Mbps. ONUs 3 to 16 can transmit up to 180 Mbps at a bandwidth efficiency of 90% while reaching saturation at an offered load of 185 Mbps. Thus, the introduction of LPT has no noticeable effect on the WFQ algorithm in terms of throughput.
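For reference, the reported bandwidth efficiencies correspond to the ratio between the saturation throughput and the guaranteed share of each ONU: 730/800 ≈ 0.91 for ONU 1, 375/400 ≈ 0.94 for ONU 2, and 180/200 = 0.90 for ONUs 3 to 16.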
The scenarios in
Figure 3 show the effect of LTT on the throughput for the four algorithms.
Figure 3 (left) displays the results for ONU 1, whose share of the resources (250 Mbps) is equal to that of the remaining 15 ONUs, under the IPACT and LPT algorithms at both LTT = 0 µs and LTT = 10 µs. With the ONUs transmitting on all four wavelengths, there is no noticeable difference between IPACT and LPT for the two LTT values (LTT = 0 µs and LTT = 10 µs), as the ONUs transmit at a maximum throughput of 225 Mbps. Thus, the laser tuning time does not affect the throughput when the ONUs have an equal allocation of 250 Mbps.
In
Figure 3 (right), we compare WFQ and WFQLPT at LTT = 0 µs and LTT = 10 µs for ONU 1. In this case, ONU 1 has a share of 10% (400 Mbps) of the total resources, and the rest of the ONUs share the remaining 90%. We can see that, at higher offered loads, there is a difference in the throughput between LTT = 0 µs and LTT = 10 µs for both WFQ and WFQLPT. While ONU 1 can transmit up to 370 Mbps at LTT = 0 for both WFQ and WFQLPT, it is only able to transmit up to 335 Mbps before reaching saturation when an LTT of 10 µs is introduced.
Figure 4 shows the results comparing the behavior of our system with different numbers of ONUs in the PON. We present the results for LPT and IPACT for 8 ONUs against 16 ONUs under LTT = 10 µs.
Figure 4 (left) shows that the throughput reaches 465 Mbps in the 8-ONU system and 225 Mbps in the 16-ONU system under the LPT algorithm.
Figure 4 (right) shows that the throughput reaches 465 Mbps with 8 ONUs and 225 Mbps with 16 ONUs under the IPACT algorithm. These results show an approximately proportional increase, from 225 Mbps to about 465 Mbps, when the number of ONUs decreases from 16 to 8, and they confirm that LPT and IPACT behave similarly regardless of the number of ONUs in the PON system.
In order to check the effect of the distance between the OLT and the ONUs on our algorithms, we compare the results for the set of 16 ONUs scattered within a distance range of 2–20 km from the OLT against those within a distance range of 18–20 km.
Figure 5 (left) shows the throughput in the case of the LPT algorithm. As we can see, LPT behaves the same within both ranges, as it can transmit up to 220 Mbps before reaching saturation at an offered load of 210 Mbps at LTT = 10 µs.
Figure 5 (right) shows the results for ONUs within 18–20 km against 2–20 km from the OLT under the IPACT algorithm at LTT = 10 µs. As we can see, the IPACT algorithm has the same behavior within both distance ranges, as the ONUs can transmit up to 222 Mbps before reaching saturation at 225 Mbps.
Figure 6 shows LPT’s impact on the throughput as a function of the distance and the load for LTT = 0 µs. ONU 1 and ONU 4 are located at 2 km and 20 km from the OLT, respectively.
Figure 6 (left) shows the CDF of the throughput for IPACT. From these results, we can conclude that the range over which the ONUs are spread has no impact on the throughput of the system at low loads. At heavy loads, the ONUs closest to the OLT behave the same under both IPACT and LPT. In contrast, for the distant ONUs, LPT shows a deviation of less than 10% with respect to IPACT. This is because LPT reduces the delay of the frames even when the system operates at heavy loads.
4.2.2. Queue Delay
The queue delay measured in our simulations is the average packet waiting time in the ONU queues before being processed. The ONUs’ queue delay is one of the components that form the end-to-end delay, and the only one that is variable. In our scenario, the packet transmission delay and the propagation delay are negligible compared to the queue delay. We compare the queue delay for the IPACT, LPT, WFQ, and WFQLPT algorithms.
Figure 7 (left) shows the results for the average queue delay of all 16 ONUs for LPT versus IPACT at LTT = 0. In this scenario, all ONUs have an equal share of the system (250 Mbps) and transmit on all four wavelengths. The results show that the ONUs experience a much lower delay under LPT (0.08 ms) than under IPACT (0.3 ms). It can also be seen that the ONUs reach saturation at the same offered load of around 200 Mbps under both LPT and IPACT.
Figure 7 (right) shows the results for ONUs under WFQ and WFQLPT using three different sets of weights (share of resources) for 0 LTT. ONU 1 has 800 Mbps, ONU 2 has about 400 Mbps, and ONUs 3 to 16 have 200 Mbps each. We can see that the ONUs have lower delays under WFQLPT (0.1 ms) than WFQ (0.15 ms). In addition, the variation in the allocated resources for the ONUs causes them to have three different sets of saturation points with respect to the allocated resources.
Figure 8 presents the effect of LTT on queue delay for the four algorithms compared to the offered loads.
Figure 8 (left) shows the results for ONU 1 with an allocation of 250 Mbps under IPACT, and under LPT at both LTT = 0 µs and LTT = 10 µs. We can see that IPACT at LTT = 0 µs has a slightly lower queue delay (0.281 ms) than when LTT = 10 µs (0.296 ms).
In
Figure 8 (right), WFQ and WFQLPT are compared under LTT = 0 µs and LTT = 10 µs. In this case, ONU 1 has an allocation of 400 Mbps. For WFQLPT, the effect of LTT can be seen, as the queue delay in the working area is slightly lower when LTT = 0 µs (0.279 ms at an offered load of 15 Mbps) than when LTT = 10 µs (0.297 ms at an offered load of 15 Mbps). If we compare the queue delay of LPT with that of WFQLPT, we observe that the queue delay in the working area increases by up to 50 µs. This increase is justified because the proposed WFQLPT algorithm offers quality of service: it guarantees the requested throughput while keeping the delay low. Comparing WFQ with WFQLPT in the right-hand figure, the former only guarantees throughput, while the latter guarantees both parameters.
Figure 9 shows the results for the simulations of the IPACT and LPT algorithms, which were run to check the behavior of our system when different numbers of ONUs are connected. We compare the queue delay for the three scenarios with 64 ONUs vs. 16 ONUs vs. 8 ONUs at LTT = 10 µs. The results show that, apart from the expected rescaling of the saturation point, the number of ONUs does not have any important impact, and the LPT and IPACT algorithms behave similarly.
Figure 9 (left) shows that the queue delay under the LPT algorithm is kept at almost the same value, between 0.1 ms and 0.15 ms, in the three scenarios until it reaches saturation at an offered load of approximately 50 Mbps in the case of 64 ONUs, 200 Mbps in the case of 16 ONUs, and 400 Mbps in the case of 8 ONUs.
Figure 9 (right) shows that, under the IPACT algorithm, the queue delay is similar for the three scenarios, as it is kept at around 0.3 ms until reaching an offered load of 48 Mbps in the case of 64 ONUs, 190 Mbps in the case of 16 ONUs, and 400 Mbps in the case of 8 ONUs. These results confirm that the three sets of ONUs (64, 16, and 8) behave similarly under the IPACT and LPT algorithms.
In
Figure 10, we evaluate the performance of the LPT and IPACT algorithms in two scenarios with 16 ONUs: in the first, the ONUs are scattered within a distance range of 2–20 km to the OLT; and in the second, the distance range is 18–20 km.
Figure 10 (left) shows the queue delay results for the LPT algorithm, and it is evident that the behavior is the same in both ranges, as the queue delay is kept at 0.11 ms before reaching saturation at 190 Mbps.
Figure 10 (right) shows the IPACT algorithm’s queue delay results for the set of ONUs within 2–20 km vs. 18–20 km. It can be seen that the queue delay in the 18–20 km range goes from 0.3 ms to 0.35 ms near the saturation point, at an offered load of 190 Mbps. Meanwhile, in the 2–20 km scenario, the queue delay goes from 0.2 ms to 0.3 ms near the saturation point, at the same offered load of 190 Mbps.
Figure 11 shows the Cumulative Distribution Function (CDF) of the queue delay for ONU 1 and ONU 4 at offered loads of 37.5 Mbps and 150 Mbps under the IPACT and LPT algorithms for LTT = 10 µs. ONU 1 and ONU 4 are located at 2.61 km and 17.43 km from the OLT, respectively, in the 16-ONU scenario.
Figure 11 (left) shows that, at low load, LPT’s margin of improvement over IPACT is between 50 µs and 100 µs, while under LPT the impact of the distance is limited to below 25 µs.
In
Figure 11 (right), we show the queue delay for both ONU 1 and ONU 4 at an offered load of 150 Mbps. At heavy loads, the improvement of LPT with respect to IPACT is more evident, as the difference increases by 150 µs. Furthermore, the variability of the queue delay across the different distances at heavy loads is kept at around 40 µs, similar to that observed at low loads. Thus, it can be deduced from
Figure 10 and
Figure 11 that even though the queue delay increases together with the distance between the ONU and OLT, the difference in the queue delay narrows as the offered load increases.
4.3. Discussion of the Results
The results show four main aspects. First, we analyzed the influence of the laser tuning time on IPACT, in terms of throughput and queue delay, as a function of the system load and of the distance between the ONUs and the OLT. Second, LPT reduces the queue delay with respect to IPACT in all the above scenarios. Third, since WFQ guarantees a throughput for each user, we evaluated the impact on it when the LTT is introduced. Finally, we evaluated the performance of WFQLPT, which guarantees both a minimal queue delay and the bandwidth requested by the different ONUs.
We can achieve an average bandwidth efficiency of about 85% in the upstream link, which conforms to the minimum absolute efficiency that is stipulated in [
51]. The inefficiency in the system is a result of overhead from encapsulation, such as control message overhead (which represents bandwidth lost to GATE and REPORT message exchanges between the ONUs and OLT), guard band overhead, discovery overhead, and frame delineation overhead [
51]. The overhead consumes up to 16.37% of the bandwidth, with the minimum throughput being 836.3 Mbps on a 1 Gbps link.
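As a consistency check, removing the worst-case overhead from a single 1 Gbps wavelength gives 1000 Mbps × (1 − 0.1637) ≈ 836.3 Mbps, which matches the minimum throughput reported above.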
Introducing a realistic LTT of 10 µs causes a noticeable decrease in the throughput and an increase in the queue delay. The effect on the throughput is seen only at higher loads, above 320 Mbps (80% offered load), where the delay introduced by the LTT reduces the throughput capability of the ONUs by 10% compared to the case without LTT, thus reducing the bandwidth utilization of the system. The effect of LTT is not noticeable at lower loads because the system is not operating near full capacity, so it can still transmit up to the maximum allowable throughput. The effect of LTT on the queue delay is seen only at the point where the queue delay starts to increase sharply, which occurs at a much higher offered load when the LTT is 0 than when the LTT is 10 µs. At lower loads, the queue delay is kept minimal and comparable to the case without LTT.
The number of ONUs connected to the OLT does not significantly affect the performance of the DBA algorithms in terms of queue delay and throughput. When the resources are shared equally among the ONUs in the PON, the algorithms behave in the same way, with each ONU having similar throughput and queue delay values. The distance between the ONU and the OLT, within the maximum PON reach of 20 km, has no impact on the throughput: regardless of where the ONU is located in the PON, the throughput remains the same. The queue delay, however, is affected, and it decreases as the ONUs get closer to the OLT, although this impact of the distance diminishes as the offered load increases.
These results emphasize the need to consider LTT when designing DBA and evaluating its performance in order to obtain realistic results and model the behavior of a system whose delay requirements are within a band of 1 ms to 100 ms [
52], which is what critical services demand in 5G networks.
The LPT scheme is introduced to solve the problem of minimizing the total finish time when scheduling requests on multiple wavelengths. When LPT is applied to the IPACT algorithm, the queue delay is reduced (by 73%), but there is no noticeable effect on the throughput. Applying LPT to the WFQ algorithm gives us the WFQLPT algorithm, which is a hybrid that combines the low delay of LPT with the QoS differentiation provided by WFQ. Therefore, WFQLPT achieves QoS differentiation and proves to be superior to WFQ in terms of delay, which is reduced by approximately 33%. In terms of throughput, there is no noticeable difference between WFQLPT and WFQ.
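These percentages follow directly from the queue-delay values reported in Figure 7: (0.3 ms − 0.08 ms)/0.3 ms ≈ 73% for LPT with respect to IPACT, and (0.15 ms − 0.1 ms)/0.15 ms ≈ 33% for WFQLPT with respect to WFQ.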
When implementing QoS differentiation in WFQ and WFQLPT algorithms, the ONUs with higher priority obtain their share and the remaining ONUs obtain a fair share of the resources without any of them being starved. The introduction of QoS based on WFQ reduces wasted bandwidth because the bandwidth utilization is in the region of 91%, which is higher than when no QoS is applied (88%), and there is no noticeable impact on the delay.
5. Conclusions
TWDM-based PON is a promising technology with great potential for providing the high bandwidth capacity and low latency required by emerging services. TWDM-PON DBA algorithms need to take into account the laser tuning time (LTT), which is often ignored. In this paper, we have presented WFQLPT, a QoS-aware DBA algorithm that considers the LTT. Our algorithm builds on IPACT by adding the capability of supporting four wavelengths. We apply a scheduling mechanism based on the LPT scheme that arranges the requests from the ONUs in descending order before they are scheduled on the assigned wavelengths in accordance with the NASC principle, thus reducing the delay. As IPACT is known to lack QoS capability, we have introduced weight-based QoS differentiation based on the Max-Min Weighted Fair Share principle in order to ensure fair sharing of resources. We evaluated our approach through simulations, and the results show that the bandwidth is shared fairly among the users while the wavelengths are allocated in a more balanced manner. Introducing WFQ guarantees the allocation of resources based on the Service Level Agreement (SLA) while keeping the delay bounded. LPT reduces the average packet delay of IPACT by 73% and of the WFQ algorithm by 33%. We have also shown that considering the delay introduced by the LTT gives the system a more realistic behavior in terms of throughput and queue delay.
This paper opens up new research directions for DBA implementations that focus on efficient energy utilization, a worthwhile goal given the predominant role of PONs in next-generation networks. We plan to introduce power-saving features such as the laser doze/sleep mode [
53] and further exploit the laser tuning time to achieve optimal results while keeping the delay minimal. We also intend to enhance the algorithm by implementing it in a just-in-time manner in order to further reduce the delay [
33]. Furthermore, we will work on improving the management architecture of TWDM-PON by introducing the software-defined networking principle to decouple the OLT and move the DBA functions to a centralized controller, which will thus manage the network with flexibility [
54]. An interesting direction for future research will be to consider Long-Reach Passive Optical Networks (LRPON), with a multi-thread polling scheme to enhance their operations.