Note that, for each test case, we executed at least 30 experiments. The experimental time was 120 s unless specified otherwise. The results shown in Figures 7 and 8 include the mean, standard deviation, median, 25th and 75th percentiles, and the degree of dispersion.
4.2. Considered Scenarios for Performance Evaluation
Our objective was to observe how the MPTCP CCAs performed with respect to their design goals, i.e., improving throughput and ensuring fairness, in the context of the complex real-world Internet. To this end, we designed several scenarios, as shown in Figure 6. In all the scenarios, an MPTCP client and an MPTCP server are connected through a different number of SFs. Note that, in the remaining text, an SPTCP flow with CUBIC as the CCA is considered the background traffic unless stated otherwise.
In Scenario #1, there are two SFs, each of which follows a separate path; thus, B1-1 and B1-2 are the NSBs. Background traffic is present in both paths. This scenario is designed to observe how well the considered CCAs utilize the underlying network.
In Scenario #2, there are two SFs that travel through SB B1 in the presence of background traffic. This scenario is designed to observe how fairly the considered CCAs behave toward SPTCP flows.
In Scenario #3, there are three SFs. SF1 goes through NSB B1, and SF2 and SF3 travel via SB B2. Background traffic is present in both B1 and B2. Thus, this scenario presents two challenges to the MPTCP CCAs, i.e., to grasp a fair share of BW for SF1 while competing with SPTCP in the NSB B1 and to ensure a fair share of BW for the SPTCP flows in SB B2.
Scenario #4 presents a more interesting situation where SF1 and SF2 travel via B1 and B2, respectively. Background traffic is present in both paths. However, the SF3 path changes after each 20 s interval, i.e., SF3 travels via B1 for the first 20 s, via B2 for the next 20 s, via B1 again for the following 20 s, and so on. Consequently, B1 and B2 become the SB interchangeably after each 20 s interval. The MPTCP CCAs need to recognize this dynamic network environment, categorize the flows accordingly, i.e., determine which are going through an SB and which are going through an NSB, and allocate the proper CWND for each SF. This also challenges the efficacy of the SB or NSB detection mechanism.
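Because the path alternation in Scenario #4 follows a fixed 20 s schedule, the identity of the SB at any instant can be derived directly from the interval index. The following minimal sketch illustrates this schedule (the function names are illustrative only, not part of the testbed):

```python
def sf3_path(t: float) -> str:
    """Path used by SF3 at time t (in seconds): B1 during 0-20 s,
    B2 during 20-40 s, B1 during 40-60 s, and so on."""
    return "B1" if int(t // 20) % 2 == 0 else "B2"


def shared_bottleneck(t: float) -> str:
    """Whichever bridge SF3 currently uses carries two SFs
    (SF3 plus SF1 or SF2) and is therefore the SB at time t."""
    return sf3_path(t)


if __name__ == "__main__":
    for t in (5, 25, 45):
        print(f"t={t:>2} s: SF3 on {sf3_path(t)}, SB = {shared_bottleneck(t)}")
```

A CCA that tracks this schedule correctly must re-classify both bottlenecks within each 20 s window, which is exactly what the detection mechanism is tested against.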
Finally, Scenario #5 is a mixture of Scenarios 1–4; thus, the MPTCP CCAs face all of the above-mentioned challenges simultaneously. SFs1–3 travel via SB B1 in the presence of background traffic; these SFs need to be fair to the SPTCP flow. SF7 travels via NSB B4 in the presence of background traffic and should attempt to attain a BW similar to that of an SPTCP flow. SFs4–5 travel via B2 and B3, respectively, and, following Scenario #4, SF6 interchangeably uses B2 and B3 after each 20 s interval. Consequently, B2 and B3 become the SB and NSB successively after each 20 s interval. This unique network scenario challenges the MPTCP CCAs in almost all possible ways, gives the experiment a near real-world flavor in a controlled manner, and provides readers with a better understanding of the performance of BA-MPCUBIC and the considered MPTCP CCAs.
4.3. Performance Evaluation in Terms of Aggregate Benefit and Jain’s Fairness Index
As MPTCP CCAs govern multiple SFs simultaneously, measuring only the throughput does not clearly reflect the proper network utilization. Paasch et al. defined a parameter, "Aggregate Benefit" ($Agg_{Ben}$), to better capture the network utilization by the MPTCP SFs [36]. They considered the goodput and available resources (i.e., BW) of the MPTCP SFs as follows:

$$Agg_{Ben} = \begin{cases} \dfrac{g - C_{max}}{\sum_{p=1}^{q} C_p - C_{max}}, & \text{if } g \geq C_{max},\\[1.5ex] \dfrac{g - C_{max}}{C_{max}}, & \text{otherwise,} \end{cases}$$

where $g$, $C_{max}$, $C_p$, and $q$ are the total goodput acquired by the MPTCP SFs, the maximum available BW among all SFs, the actual available BW for $SF_p$ going through path $p$, and the total number of SFs, respectively. The $Agg_{Ben}$ value ranges from −1 to 1, where a larger value indicates better network utilization. Moreover, $Agg_{Ben} > 0$ indicates that the usage of MPTCP yields a benefit over SPTCP; otherwise, there is no benefit over SPTCP.
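The aggregate benefit can be computed directly from the measured total goodput and the per-path capacities. A minimal sketch of the case split described above (the variable names are ours, following Paasch et al.'s formulation):

```python
def aggregate_benefit(goodput: float, capacities: list[float]) -> float:
    """Aggregate benefit of Paasch et al.: a value > 0 means MPTCP
    outperforms the best single path; the result lies in [-1, 1]."""
    c_max = max(capacities)   # best single-path capacity
    c_sum = sum(capacities)   # combined capacity of all paths
    if goodput >= c_max:
        # MPTCP beats the best single path: normalize by the extra capacity
        return (goodput - c_max) / (c_sum - c_max)
    # MPTCP falls short of the best single path: normalize by that path alone
    return (goodput - c_max) / c_max


# Example: two 5 Mbps paths, 9 Mbps total goodput
print(aggregate_benefit(9.0, [5.0, 5.0]))  # 0.8
```

A goodput equal to the best single path yields 0, perfect use of all paths yields 1, and zero goodput yields −1, matching the stated range.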
To better understand how fairly the considered MPTCP CCAs behave with the SPTCP background traffic, for the MPTCP SFs and SPTCP flows going through a link, we obtain Jain's fairness index, defined as follows [37,38]:

$$F = \dfrac{\left(\sum_{x=1}^{z} b_x\right)^2}{z \sum_{x=1}^{z} b_x^2},$$

where $b_x$ and $z$ are the BW allocated from the total BW of a link to a flow $x$ and the total number of flows going through the link, respectively. Fairness index values range from 0 to 1; the closer the value is to 1, the fairer the CCA.
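Jain's index is likewise straightforward to compute from the per-flow BW allocations on a link. A short sketch of the formula above:

```python
def jains_fairness(allocations: list[float]) -> float:
    """Jain's fairness index over the BW allocations of the z flows
    sharing a link: (sum b_x)^2 / (z * sum b_x^2), in (0, 1]."""
    z = len(allocations)
    total = sum(allocations)
    return total * total / (z * sum(b * b for b in allocations))


# Even vs. skewed split of a 10 Mbps link between two flows
print(jains_fairness([5.0, 5.0]))  # 1.0 (perfectly fair)
print(jains_fairness([8.0, 2.0]))  # ~0.735
```

An even split always yields exactly 1, while a single flow monopolizing the link among z flows drives the index toward 1/z.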
Figure 7a and Figure 8a show the aggregate benefit and fairness index for the considered MPTCP CCAs in Scenario #1. Here, the two MPTCP SFs travel through two separate paths via NSBs B1-1 and B1-2. Background traffic is present in both B1-1 and B1-2. SF1 and SF2 should behave as two separate SPTCP flows to obtain an equal share of BW in the presence of the background traffic. We can observe that BA-MPCUBIC and U-MPCUBIC attain the highest and almost equal aggregate benefit. As previously described, U-MPCUBIC was implemented as an MPCUBIC CCA in which each SF follows the design of an SPTCP CUBIC CCA. As BA-MPCUBIC realizes an aggregate benefit equal to that of U-MPCUBIC, BA-MPCUBIC succeeds in its goal of behaving as an SPTCP CCA while going through the NSBs. Interestingly, BA-MPCUBIC utilizes the network better than Ferlin's SBD+MPCUBIC. This is because Ferlin's SBD+MPCUBIC encounters false-positive decisions from time to time and considers that the two SFs are going through an SB. In such cases, it reduces the CWND and fails to hold an equal share of BW at all times. We believe that the detection errors occur because Ferlin et al. [22] base their decision only on the OWD. Moreover, Le's MPCUBIC achieves a smaller aggregate benefit than BA-MPCUBIC because it follows the same LIA design principle and attempts to behave very fairly with SPTCP flows even in the NSBs. However, Le's MPCUBIC obtains better aggregate benefit values than LIA, OLIA, and BALIA because it implements CUBIC as the CCA rather than TCP NewReno [39]. Due to their aggressively fair nature, LIA, OLIA, and BALIA show poorer performance than the others, as reported in a previous study [5]. Considering the fairness issue, because one MPTCP SF and one SPTCP flow travel through each of B1-1 and B1-2, the total BW should be divided in a 1:1 ratio. Thus, an MPTCP SF and an SPTCP flow should each occupy 5 Mbps of BW in B1-1 and B1-2. From Figure 8a, it is clearly evident that, compared to the other approaches, both BA-MPCUBIC and U-MPCUBIC achieve the highest fairness index because they implement the SPTCP CUBIC CCA in NSBs. Moreover, the better NSB and SB detectability helped BA-MPCUBIC realize a higher fairness index than Ferlin's SBD+MPCUBIC. The reason given previously for the better aggregate benefit also applies to the better fairness index compared with Le's MPCUBIC, LIA, OLIA, and BALIA.
For the considered MPTCP CCAs, Figure 7b and Figure 8b demonstrate the aggregate benefit and fairness index for Scenario #2. Here, both SF1 and SF2 travel through SB B1 with background traffic. Because the two SFs share a common bottleneck with an SPTCP flow, together they should occupy the BW of a single SPTCP flow. In other words, the total throughput of the MPTCP SFs should be equal or close to that of the SPTCP flow. In this scenario, Le's MPCUBIC performs the best among all considered CCAs: it shows the best fairness index and achieves a decent aggregate benefit. Because Le's MPCUBIC is based on LIA and CUBIC, it could ensure better fairness and realize a fair share of BW while competing with CUBIC as the SPTCP CCA. BA-MPCUBIC's performance is very close to that of Le's MPCUBIC, with a slightly lower fairness index and a higher aggregate benefit. BA-MPCUBIC initially takes some time to recognize that the SFs are going through an SB, and this initial recognition time is the key factor behind this difference. Nevertheless, BA-MPCUBIC shows a highly efficient performance compared to its competitors. In contrast, we can observe that U-MPCUBIC realizes the highest aggregate benefit but results in the lowest fairness index. This is because it continues sending data following SPTCP CCA behavior even in the SBs. Ferlin's SBD+MPCUBIC shows a similar trend to U-MPCUBIC, with a comparatively lower aggregate benefit and a better fairness index than U-MPCUBIC. As explained earlier, Ferlin's SBD+MPCUBIC frequently encounters false-positive and false-negative detections: although the flows go through the SBs, it sometimes considers that they are going through an NSB. Thus, Ferlin's SBD+MPCUBIC unfairly receives greater BW, which results in a lower fairness index. LIA, OLIA, and BALIA show a considerably better fairness index in this scenario; however, they fail to obtain sufficient BW. We believe this is because LIA, OLIA, and BALIA are based on TCP NewReno, while the SPTCP flow uses CUBIC as the CCA.
For Scenario #3, the performance comparison of the considered CCAs in terms of aggregate benefit and fairness index is shown in Figure 7c and Figure 8c, respectively. Note that Figure 8c shows the fairness index for each of the bottleneck links. Scenario #3 is a mixed scenario that includes both MPTCP CCA design goal challenges. SF1 goes through NSB B1, while SF2 and SF3 travel via the same SB B2. Background traffic is present in both B1 and B2. Therefore, SF1 should behave like an SPTCP flow to obtain an equal share of BW while sharing B1 with the background traffic, whereas SF2 and SF3 should behave fairly, following Scenario #2. In this mixed scenario, BA-MPCUBIC demonstrates the best results, i.e., it achieves a high fairness index in both bottlenecks and ensures high BW utilization thanks to its state-of-the-art bottleneck detection and response algorithm. Although U-MPCUBIC achieves the highest aggregate benefit, it behaves highly unfairly in B2, as in Scenario #2. The unconditional implementation of the SPTCP CUBIC CCA as the MPTCP CCA is the key reason behind this behavior. Ferlin's SBD+MPCUBIC achieves a lower aggregate benefit and fairness index in both bottlenecks compared with BA-MPCUBIC; the reason explained previously also applies here. Le's MPCUBIC ensures better fairness in B2 but fails to maintain this fairness in B1. In addition, its aggregate benefit is significantly lower than those of BA-MPCUBIC, Ferlin's SBD+MPCUBIC, and U-MPCUBIC. Owing to the aggressively fair tendency inherited from LIA, Le's MPCUBIC handles the situation in B2 efficiently but fails to do so in B1. As in the previous scenarios, LIA, OLIA, and BALIA show similar tendencies: like Le's MPCUBIC, they achieve better fairness in B2 but fail to maintain it in B1. They also return significantly lower aggregate benefit values. We believe their TCP NewReno-based behavior, which competes with SPTCP CUBIC as the background traffic, is the reason for their worse performance.
Scenario #4 introduces an additional task: coping with a dynamically changing network topology. Here, SF3 changes its path after each 20 s interval, resulting in real-time changes of the SB and NSB. Whenever SF3 goes through B1, B1 becomes the SB, as SF1 and SF3 travel through it together, and B2 becomes the NSB. In both B1 and B2, an SPTCP CUBIC flow is present as background traffic. Thus, SF1 and SF3 together should behave like a single SPTCP flow to be fair to the background traffic, whereas SF2 should behave like an SPTCP flow so that it can take a proper share of BW while competing with the background traffic.
Figure 7d and Figure 8d show the performance comparison results of the considered CCAs in terms of aggregate benefit and fairness index; we show the fairness index for each of the bottleneck links. In this changing network scenario, BA-MPCUBIC again proves to be the best performer, i.e., it achieves the best fairness index and ensures better BW utilization. We believe the advanced SB and NSB detection mechanism is the key to this high performance. U-MPCUBIC again achieves the best aggregate benefit values; however, it results in the lowest fairness index because SPTCP CUBIC is implemented as the CCA without any awareness of the bottleneck condition. Ferlin's SBD+MPCUBIC shows the second-best performance; the reason given for its performance in Scenario #2 also applies here. Le's MPCUBIC obtains neither a better fairness index nor better aggregate benefit values in this scenario. We believe that the absence of an SB or NSB detection mechanism renders Le's MPCUBIC ineffective here: the inherited aggressively fair nature dominates and causes the three MPTCP flows collectively to behave like a single SPTCP flow. The same phenomenon also holds for LIA, OLIA, and BALIA; thus, they also yield much poorer aggregate benefit and fairness index values in this scenario.
All the challenges demonstrated in Scenarios #1–4 are present simultaneously in Scenario #5. There are seven SFs between the MPTCP client and server. SF1–3 travel through SB B1. SF4 and SF5 travel through B2 and B3, respectively. SF6 changes its path between B2 and B3 after each 20 s interval; thus, B2 and B3 interchangeably become an SB and NSB after each 20 s interval. SF7 travels via NSB B4. Background traffic is present in all four bottleneck links. Collectively, the three SFs going through SB B1 should take a BW close to that taken by an SPTCP flow. SF7, which competes with the SPTCP flow in B4, should try to behave similarly to an SPTCP flow to grasp an equal share. When B2 becomes the SB, SF4 and SF6 together should behave like a single SPTCP flow, whereas when B3 becomes the SB, SF5 and SF6 together should behave like a single SPTCP flow.
Figure 7e and Figure 8e show the aggregate benefit and fairness index values for the considered CCAs in Scenario #5, respectively; again, we show the fairness index for each bottleneck link in Figure 8e. BA-MPCUBIC performs the best in this complex network scenario considering the two MPTCP CCA design goals. BA-MPCUBIC achieves a comparatively better fairness index in all four bottleneck links. At the same time, it attains a fair and comparatively better BW share, again thanks to its state-of-the-art bottleneck detection and response algorithm. As observed in the previous scenarios, U-MPCUBIC attains the maximum aggregate benefit; however, it returns the minimum fairness index in all bottleneck links except B4 because it adopts the SPTCP CCA mechanism irrespective of bottleneck conditions. In B4, U-MPCUBIC has the highest fairness index because there is only one SPTCP CUBIC flow as the background traffic, and SF7 acts as an SPTCP CUBIC flow by implementing U-MPCUBIC. This result verifies that, although U-MPCUBIC fulfills Goal 1 of MPTCP CCAs, it completely fails to fulfill Goal 2. Similar to Scenario #4, Ferlin's SBD+MPCUBIC secures comparatively better aggregate benefit and fairness index values in all bottlenecks; however, its results are significantly worse than those of BA-MPCUBIC. As explained previously, Ferlin's SBD+MPCUBIC implements the same reaction algorithm as BA-MPCUBIC when SBs or NSBs are detected; the difference between the two lies in the SB and NSB detection mechanism. Therefore, we believe that the performance decrease of Ferlin's SBD+MPCUBIC compared with BA-MPCUBIC is due to false-positive or false-negative SB/NSB detections, and that its sole dependency on a single parameter (i.e., the OWD) for decision making is the key reason behind this degraded performance. As observed in Scenario #4, BA-MPCUBIC significantly outperforms Le's MPCUBIC, LIA, OLIA, and BALIA in terms of both aggregate benefit and fairness index; the reasons described previously also explain this poor performance. Therefore, it is evident that BA-MPCUBIC performs the best considering the fulfillment of the two MPTCP CCA design goals.
Finally, Table 1 presents the overall performance comparison of the considered MPTCP CCAs in Scenarios #1–5 in terms of aggregate benefit and fairness index. Note that here we report the average aggregate benefit. For the fairness index, we calculate an average over the fairness indexes observed in all the bottlenecks of a scenario. From Table 1, only BA-MPCUBIC and U-MPCUBIC could always ensure an aggregate benefit greater than 0, indicating that they could always ensure an incentive for using MPTCP over SPTCP. Although U-MPCUBIC yields a better aggregate benefit, it results in a very poor fairness index in most of the scenarios in comparison with BA-MPCUBIC. The previously mentioned reasons explain the comparatively poor performance of the other considered MPTCP CCAs. Therefore, it is evident that BA-MPCUBIC is the best performer in terms of aggregate benefit and fairness index with SPTCP flows, and thus best satisfies the MPTCP CCA design goals among the considered MPTCP CCAs in the experimented scenarios.
4.4. Performance Evaluation in Terms of Bottleneck Detection Time and Accuracy
In the previous section, we demonstrated and compared the performance of BA-MPCUBIC with respect to fulfilling the MPTCP CCA design goals. In this section, we focus on how the BA-MPCUBIC algorithm performs in detecting SBs and NSBs. We first consider how the proposed algorithm responds to changing network scenarios and then compare the bottleneck detection time and detection accuracy for the considered scenarios.
We designed Scenario #4 such that the SB and NSB change every 20 s; therefore, we selected Scenario #4 to observe BA-MPCUBIC's response to a changing network scenario. We randomly selected an experiment in Scenario #4 and plotted the throughput curves of the three SFs, as shown in Figure 9. In this experiment, B1 and B2 are the NSB and SB, respectively, during the first 0–20 s, the SB and NSB during the 20–40 s period, the NSB and SB during the 40–60 s period, and so on. We observe that whenever the bottleneck changes, BA-MPCUBIC quickly recognizes the change and applies the relevant algorithm. Moreover, the SFs going through the SB share the available BW equally. For example, consider the 20–40 s interval, when B1 is the bottleneck link and SF1 and SF3 travel via SB B1. B1 has a BW capacity of 10 Mbps. As an SPTCP CUBIC flow is present as background traffic, the total available BW for the two SFs becomes 5 Mbps, and SF1 and SF3 share this BW evenly. On the other hand, B2 has a BW capacity of 16 Mbps. As B2 is an NSB in the 20–40 s interval, SF2 has an available capacity of 8 Mbps in the presence of an SPTCP CUBIC flow as background traffic. As we can observe, SF2 successfully utilizes this available BW. The same trend is observed throughout the experiment, which demonstrates the efficacy of the proposed BA-MPCUBIC algorithm.
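The per-SF targets quoted in this example follow from simple fair-share arithmetic: the MPTCP SFs on a link should collectively take one flow's share alongside the SPTCP background flow and split that share evenly among themselves. A sketch of that arithmetic (illustrative only, not part of the testbed):

```python
def expected_sf_share(link_bw: float, n_mptcp_sfs: int,
                      n_background: int = 1) -> float:
    """Fair per-SF BW target on a link: all MPTCP SFs together count as
    one flow against the background flows, then split evenly."""
    mptcp_total = link_bw / (n_background + 1)  # collective MPTCP share
    return mptcp_total / n_mptcp_sfs            # even split among the SFs


# 20-40 s interval of Scenario #4:
# B1 (10 Mbps) is the SB carrying SF1 and SF3 plus one background flow
print(expected_sf_share(10.0, 2))  # 2.5 Mbps each for SF1 and SF3
# B2 (16 Mbps) is the NSB carrying only SF2 plus one background flow
print(expected_sf_share(16.0, 1))  # 8.0 Mbps for SF2
```

These are exactly the 5 Mbps collective share on B1 and the 8 Mbps share on B2 observed in Figure 9.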
Table 2 shows the bottleneck detection time and detection accuracy of BA-MPCUBIC. We also calculated the detection accuracy and detection time for Ferlin's SBD+MPCUBIC. One may argue that, because BA-MPCUBIC implements the ECN_filter, which depends on the ECN mechanism, BA-MPCUBIC might not work or might demonstrate a significant decrease in performance in the absence of the ECN mechanism. Therefore, we performed experiments using the same scenarios with the ECN mechanism disabled and calculated the SB and NSB detection accuracy and time, as presented in Table 2. We calculated the detection accuracy as follows:

$$Detection\ accuracy\ (\%) = \dfrac{D_s}{D_t} \times 100,$$

where $D_s$ and $D_t$ are the number of successful and total detections, respectively. The detection time is the time required by an algorithm to successfully recognize an SB or NSB. For a static (non-changing) network scenario, it is calculated from the start time of an SF to the time when it is determined that the SF is going through an SB or NSB. For a dynamically changing network scenario, it is calculated from the time a bottleneck link becomes the SB or NSB to the time when the algorithm successfully determines the changed bottleneck condition.
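Both metrics reduce to simple bookkeeping over the detection events recorded in each experiment. A minimal sketch of the two computations (the example numbers are illustrative, not measured results):

```python
def detection_accuracy(successful: int, total: int) -> float:
    """Detection accuracy in percent: successful over total detections."""
    return 100.0 * successful / total


def detection_time(change_time: float, detect_time: float) -> float:
    """Elapsed time from when a link becomes an SB/NSB (or from SF start,
    in a static scenario) until the algorithm recognizes the condition."""
    return detect_time - change_time


# e.g., 27 correct SB/NSB decisions out of 30 experiments
print(detection_accuracy(27, 30))      # 90.0 %
# e.g., SB formed at t = 20.0 s, recognized at t = 21.5 s
print(detection_time(20.0, 21.5))      # 1.5 s
```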
From Table 2, it is evident that BA-MPCUBIC shows the best detection accuracy in all scenarios. Even in the absence of the ECN mechanism, BA-MPCUBIC achieves better accuracy than the compared algorithm. Moreover, in terms of detection time, it takes less time than its competitor, even in the absence of the ECN mechanism. Ferlin's SBD+MPCUBIC SB detection scheme relies primarily on the OWD, whereas the state-of-the-art BA-MPCUBIC implements three filters related to RTT, ECN, and packet loss. This enables the proposed BA-MPCUBIC algorithm to utilize all the available parameters in a systematic manner and produce better detection capability. Even in the absence of the ECN mechanism, the other two filters successfully keep the detection process on the right track.