An OpenFlow-Based Elephant-Flow Monitoring and Scheduling Strategy in SDN

1 The School of Aerospace, Hunan University of Technology, Zhuzhou 412007, China
2 College of Computer Science, Hunan University of Technology, Zhuzhou 412007, China
3 School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China
4 The School of Computer Science and Artificial Intelligence, Hunan University of Technology, Zhuzhou 412007, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(13), 2663; https://doi.org/10.3390/electronics14132663
Submission received: 2 April 2025 / Revised: 1 June 2025 / Accepted: 4 June 2025 / Published: 30 June 2025

Abstract

This paper introduces a novel monitoring and scheduling strategy based on software-defined networking (SDN) to address the challenges of elephant-flow scheduling and localization in conventional networks. The strategy collects and analyzes switch data, effectively monitors elephant flows, and improves on the traditional distributed solution. Elephant-flow scenarios are generated with the iperf tool, and Fat-Tree and Leaf-Spine topologies are simulated in Mininet. Experimental results demonstrate significant improvements in network stability and resource utilization with the proposed strategy. Specifically, in the Leaf-Spine topology, the network throughput stabilized around 8 Mbps with minimal fluctuation and no congestion over a 120-s test, compared with multiple throughput drops to 0 Mbps under the Fat-Tree topology. By combining monitoring with scheduling of elephant flows, the proposed approach offers a promising scheme for enhancing traffic-management efficiency in large-scale network environments.

1. Introduction

With the rapid development of cloud computing, the scale of the data centers carrying cloud computing continues to expand, which has caused significant difficulties in the management and operation of data center networks. Efficiently solving data center network problems is one of the main challenges for network operation, maintenance, and development personnel. Software-Defined Networking (SDN) enables flexible management and configuration of network devices by decoupling the control and data planes [1]. Industry has largely standardized on the OpenFlow protocol as the control protocol of SDN, abstracting the control plane into a controller and the forwarding plane into unified OpenFlow network devices [2]. The controller directs traffic forwarding on network devices through the OpenFlow protocol. In the data center network environment, Benson et al. pointed out that 80% of flows carry less than 10 KB, defined as mouse flows, while 10% of flows are long-lived and carry large volumes, defined as elephant flows [3]. The network’s congestion level and performance stability are closely related to elephant flows [4]. Therefore, monitoring and scheduling elephant traffic is key to solving data center network problems. Dynamic flow scheduling systems such as Hedera have been shown to significantly improve bandwidth utilization in multi-rooted tree topologies [5]. However, while Hedera showcases the benefits of dynamic routing, it lacks the fine-grained traffic classification and active polling control that are central to our approach.
Although elephant flows account for only a small fraction of total traffic, they consume a disproportionately large amount of bandwidth and directly cause performance bottlenecks and congestion in data center environments. This highlights the dynamic and busy nature of data center traffic, where applications such as big data analytics, distributed storage, and real-time services frequently generate elephant flows, challenging traditional static routing and load balancing mechanisms [6]. Liu et al. [7], for instance, proposed a shim-layer mechanism to monitor TCP socket buffers, but their method remains confined to Fat-Tree structures and relies on static detection techniques, limiting its adaptability. Similarly, Hamdan et al. [8] and Bezerra et al. [9] introduced advanced detection strategies that combine terminal and network-layer cues to improve flow detection accuracy, but they do not provide frameworks for rerouting or scheduling.
The topology widely used in data centers is the Fat-Tree structure. Due to loops in the three-layer Fat-Tree structure, the Spanning Tree Protocol (STP) needs to be used to remove network loops and avoid broadcast storms [10,11]. Some scholars have studied STP based on OpenFlow. This protocol can make the network converge, but it also brings potential risks of single-channel network congestion [12]. Based on the Fat-Tree network topology, some researchers have proposed a load balancing mechanism for SDN, which integrates a “shim layer” in the terminal host to monitor the Transmission Control Protocol (TCP) socket buffer, thereby reducing the network load [7]. These works focus on monitoring and load balancing within the Fat-Tree topology but do not address the path congestion issues that arise due to single-path routing. In contrast, we explore the potential of the Leaf-Spine architecture, which avoids the limitations of single-path routing by leveraging Equal-Cost Multi-path (ECMP) to distribute traffic across multiple available paths, improving bandwidth utilization and reducing congestion.
In order to solve the inherent defects of single-path and three-layer structures, the flattened Leaf-Spine network structure was proposed and studied by scholars [13,14,15]. Google researchers conducted an in-depth review of the progress of data center network design and revealed the potential of the Leaf-Spine architecture to achieve efficient bandwidth utilization and optimize application performance in data center networks by discussing Google’s Jupiter project [13]. In addition, some scholars have explored the evolution of data center architecture over the past few decades, from the initial client-server model to the access-aggregate-core (AAC) architecture, and the Leaf-Spine architecture developed to meet the needs of low-latency and high-throughput server-to-server communication and load balancing. They also simulated the Leaf-Spine network environment through experiments and used machine learning methods to predict the traffic transmission from the Leaf-Spine switching layer to the server. Although these methods are theoretically feasible, they heavily depend on high-quality data [14]. Furthermore, while some studies on the Leaf-Spine architecture, such as the work by Sultan et al. [14] and Alizadeh et al. [15], provide valuable insights into network design decisions, they do not address operational monitoring or real-time scheduling systems.
Unlike the Fat-Tree structure, in the Leaf-Spine architecture all leaf nodes (Leaf switches) are directly connected to each spine node (Spine switch). Leaf nodes usually connect to servers, storage devices, and other network terminal devices. Spine nodes are responsible for high-speed data forwarding between leaf nodes. In addition, the Leaf-Spine architecture supports ECMP [16], which means that traffic is distributed among multiple available paths, effectively utilizing all connection paths, improving bandwidth utilization, and reducing congestion. Multiple parallel links between leaf and spine nodes can provide redundancy, increasing network reliability and fault tolerance. Many network design problems under similar reliability and path constraint requirements, such as those involving edge-disjoint paths and budget limits, can be modeled as edge-disjoint rooted distance-constrained minimum spanning tree problems (ERDCMST). Arbelaez et al. [17] proposed a constraint-based parallel local search algorithm for the ERDCMST, demonstrating its effectiveness on real-world optical network topologies in Europe.
Recent research has significantly advanced machine learning (ML) for intelligent detection and optimization in diverse domains, including SDN and supply chain management. Several studies have explored using ML for attack detection and traffic management in the context of SDN. For example, Rahman et al. discussed machine learning classifiers such as decision trees, random forests, and support vector machines for detecting Distributed Denial of Service (DDoS) attacks in SDN environments [18]. These intelligent detection frameworks enhance the analysis of flow behavior and real-time pattern recognition, which indirectly support our approach to dynamic flow classification and congestion management in SDNs.
Furthermore, ML techniques have proven transformative in the realm of supply chain optimization. Recent studies have highlighted AI’s role in supply chain disruption management, focusing on how real-time data analytics, including AI and blockchain technologies, improve operational resilience and decision-making [19]. This is particularly relevant for our approach to elephant flow scheduling in SDNs, where intelligent decision-making can optimize the allocation of network resources and improve overall system performance.
In this paper, we simulate elephant flows in a Mininet [20] network environment and set corresponding traffic thresholds so that pre-set traffic types, such as elephant flows and mouse flows, can be captured intuitively. After an elephant flow appears in the Leaf-Spine structure, the method proposed in this paper polls and schedules it onto an equivalent path. The proposed Leaf-Spine polling scheduling method is verified against the single-path Fat-Tree baseline, and the results show that it improves the utilization and stability of network equipment. Code available: https://github.com/cmy-hhxx/el_monitor (accessed on 30 May 2025).
The main contributions of this work are summarized as follows:
1. We designed and implemented an SDN-based elephant flow monitoring system using the Ryu controller. This system classifies traffic based on duration and bandwidth.
2. We proposed a polling-based dynamic elephant flow scheduling algorithm that performs path rerouting across equal-cost multipaths in Leaf-Spine topologies, avoiding congestion typical in traditional fat-tree structures.
3. We conducted simulations with Mininet and iperf to validate our approach, demonstrating that the proposed strategy achieves stable throughput (8 Mbps) and zero packet loss under load, outperforming traditional scheduling methods in stability and link utilization.
This paper follows this organization: Section 2 introduces the design and implementation of our proposed elephant flow monitoring and scheduling strategy. Section 3 presents the experimental setup, network simulation environment, and evaluation results. Section 4 concludes the paper and discusses directions for future research.

2. Principles

2.1. Monitoring Scheme

In the software-defined network, Mininet is used to simulate the network environment. The controller actively collects the flow table entries of the OpenFlow switch to monitor the network status. A continuously executed monitoring function is used to analyze and classify the traffic generated in the network. Traffic with a duration of more than 30 s and a bandwidth greater than 5 Mbps is classified as an elephant flow; traffic with a duration of more than 30 s and a bandwidth greater than 2 Mbps and less than 5 Mbps is classified as a medium-sized elephant flow; traffic with a duration of more than 30 s and a bandwidth less than 2 Mbps is classified as a small elephant flow; and traffic with a duration of less than 30 s is classified as a mouse flow.
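The classification rule above can be captured in a few lines of Python. The following is a minimal sketch, assuming duration is measured in seconds and rate in Mbps; the function name is illustrative and not taken from the el_monitor source.

# Minimal sketch of the classification rule described above (assumed helper,
# not the authors' exact code). Thresholds follow the text: a 30 s duration
# boundary and 2 Mbps / 5 Mbps rate tiers.
def classify_flow(duration_sec, rate_mbps):
    if duration_sec <= 30:
        return 'mouse'            # short-lived traffic
    if rate_mbps > 5:
        return 'elephant'         # long-lived, above 5 Mbps
    if rate_mbps > 2:
        return 'medium elephant'  # long-lived, between 2 and 5 Mbps
    return 'small elephant'       # long-lived, below 2 Mbps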
A monitoring function is defined to collect data from the OpenFlow switch every 5 s, mark each flow with its features, and add it to the corresponding list after classification. Figure 1 shows the flowchart of the elephant flow monitoring application. First, an elephant-flow monitoring application (Elephant Flow Monitor), named “el_monitor”, is developed on top of the Ryu framework. When the switch receives traffic, it decodes it, encapsulates it into a Packet-In message, and sends it to the controller. A Packet-In message is sent from the switch to the controller when the switch encounters traffic that it cannot process independently. After the controller receives the Packet-In message, the el_monitor application running on the controller first decodes it to obtain detailed information about the traffic. The application then uses the Address Resolution Protocol (ARP) to resolve the IP address contained in the traffic, which is crucial for the subsequent traffic-type judgment. This ARP-based IP resolution step occurs during packet decoding; it is not part of the add_flow function but supports the traffic classification decisions. Next, the application determines whether the traffic belongs to an elephant flow based on predefined criteria. If the traffic meets the elephant-flow conditions, el_monitor reports it to the corresponding module in the Ryu controller for processing, which may include traffic redirection, priority adjustment, or other forms of traffic engineering management. Otherwise, the existing flow table on the OpenFlow switch continues to perform regular traffic forwarding.
For the flow-table delivery logic, el_monitor inherits and extends simple_switch_13, an example application built into the Ryu framework that demonstrates basic flow-table management and packet-forwarding logic by emulating a simple OpenFlow 1.3-compatible switch. Compared with the original simple_switch_13, the enhanced el_monitor module adds a flow-statistics polling mechanism executed every 5 s, classifies traffic by bandwidth and duration, prints flow attributes such as type, IP address, and protocol to the console in real time, and integrates rerouting logic to support dynamic path adjustment. When an OpenFlow switch connects to the Ryu controller, the EventOFPSwitchFeatures event is triggered, carrying the switch’s capability information, such as the supported OpenFlow version and the number of available flow tables. The switch_features_handler function that handles this event is then called automatically; it responds to the EventOFPSwitchFeatures event, initializes the switch, and configures the initial flow table rules. Specifically, a default (table-miss) flow entry is added to the OpenFlow switch through the add_flow function defined in the Ryu controller application to ensure that all packets arriving at the switch are forwarded to the controller for further processing. Algorithm 1 below gives the core implementation logic of the el_monitor application, which performs traffic classification based on flow statistics.
Algorithm 1. The core implementation logic of the el_monitor application developed on top of the Ryu framework
Input: received traffic
Output: flow type
Step 1: Define the flow_stats_reply_handler function
  When a traffic statistics reply is received:
    For each traffic statistics entry:
      Parse and calculate the traffic rate
      Classify the traffic based on duration and rate:
        Elephant flow if duration > 30 s and rate > 5 Mbps
        Medium elephant flow if duration > 30 s and 2 Mbps < rate <= 5 Mbps
        Small elephant flow if duration > 30 s and rate <= 2 Mbps
        Mouse flow if duration < 30 s
      Print out the classification results
Step 2: Define the add_flow function
  Add a flow table entry to the specified datapath to enable direct forwarding of known traffic.
  The flow entry contains match fields and actions.
  This flow entry avoids repeated Packet-In events for future matching packets.
Step 3: Define the packet_in_handler function
  When a Packet-In message is received:
    Learn the source MAC address
    If the destination MAC address is known:
      Set the output port to the known port
      Add a flow table entry to avoid future Packet-In messages
    Otherwise:
      Set the output port to FLOOD (broadcast)
    Send a Packet-Out message to forward the current packet
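To make the polling and classification steps of Algorithm 1 concrete, the following condensed Ryu sketch shows how such a monitor can be built on the framework’s bundled simple_switch_13 class, assuming OpenFlow 1.3; the class name and logging format are illustrative and not necessarily identical to the published el_monitor code.

# Condensed sketch of an el_monitor-style application on Ryu (assumptions:
# OpenFlow 1.3, class name and log format are illustrative).
from ryu.app import simple_switch_13
from ryu.controller import ofp_event
from ryu.controller.handler import DEAD_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.lib import hub


class ElMonitor(simple_switch_13.SimpleSwitch13):
    """Extends Ryu's simple_switch_13 with 5-second flow-statistics polling."""

    def __init__(self, *args, **kwargs):
        super(ElMonitor, self).__init__(*args, **kwargs)
        self.datapaths = {}
        self.monitor_thread = hub.spawn(self._monitor)

    @set_ev_cls(ofp_event.EventOFPStateChange, [MAIN_DISPATCHER, DEAD_DISPATCHER])
    def _state_change_handler(self, ev):
        # Track connected switches so the polling loop knows whom to query.
        dp = ev.datapath
        if ev.state == MAIN_DISPATCHER:
            self.datapaths[dp.id] = dp
        elif ev.state == DEAD_DISPATCHER:
            self.datapaths.pop(dp.id, None)

    def _monitor(self):
        # Poll flow statistics from every switch every 5 s, as in Algorithm 1.
        while True:
            for dp in self.datapaths.values():
                dp.send_msg(dp.ofproto_parser.OFPFlowStatsRequest(dp))
            hub.sleep(5)

    @set_ev_cls(ofp_event.EventOFPFlowStatsReply, MAIN_DISPATCHER)
    def _flow_stats_reply_handler(self, ev):
        for stat in ev.msg.body:
            if stat.priority == 0:
                continue  # skip the table-miss entry
            duration = stat.duration_sec or 1  # avoid division by zero
            rate_mbps = stat.byte_count * 8 / duration / 1e6
            if duration > 30 and rate_mbps > 5:
                kind = 'elephant'
            elif duration > 30 and rate_mbps > 2:
                kind = 'medium elephant'
            elif duration > 30:
                kind = 'small elephant'
            else:
                kind = 'mouse'
            self.logger.info('%s flow: %.2f Mbps over %d s, match=%s',
                             kind, rate_mbps, duration, stat.match)

Because the base class already provides the add_flow and packet_in_handler logic of Steps 2 and 3, the sketch only adds switch tracking, the 5-s polling loop, and the statistics-reply classifier.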

2.2. Scheduling Scheme

When elephant flows appear in the network and are not handled in time, the link load becomes too high. The flat topology used in modern data center networks provides multiple equal-cost paths for traffic. Therefore, this paper studies rerouting elephant flows, once detected, onto secondary paths to achieve elephant-flow scheduling.
In order to avoid broadcast storms and loops, the traditional three-layer network environment uses the minimum spanning tree algorithm, leaving only one path in the network. Although link aggregation can improve utilization, some links remain unused, and because data center topologies tend to be stable, these idle links are difficult to exploit. By adopting a Leaf-Spine topology, multiple equal-cost paths can be created, and SDN’s centralized control is used to poll traffic across each equal-cost path. When the redundant links are fully utilized, network utilization is significantly better than in the traditional spanning-tree-based solution.
When a long-lasting elephant flow appears in the network, the current forwarding path is deleted and the traffic is directed to another path. The polling strategy is used to evenly distribute the traffic to each equal-cost path to avoid congestion of a single link. The dynamic rerouting of traffic is achieved by using the “reroute.sh” application. The implementation process of the algorithm is as follows (Algorithm 2):
Algorithm 2. The implementation process of the rerouting scheduler.
Input: switch array
Output: normal flow
switch array = [Leaf1, Leaf2, …, Spine1, Spine2, …]
Step 1: Define the del-flow function:
  For each switch in switch array:
    Delete all flow entries on this switch to clear the old paths
Step 2: Define the dump-flow function:
  For each switch in switch array:
    Print the flow entries on this switch for diagnostic purposes
Step 3: Define the path_n function:
  Call del-flow(switch array) to clear all previous flow entries
  For each switch in switch array:
    Configure new traffic path n by adding new flow table entries to reroute traffic
  Call dump-flow(switch array) to print the updated flow entries
Step 4: Polling and repeat
  While true:
    Call each path_n() in turn to continuously reroute traffic and maintain load balancing
Here, “Configure new traffic path n” refers to updating the flow table entries on all switches to redirect the elephant flow to a different ECMP path, which involves specifying new output ports and destination rules in each switch’s flow entries. The lines “While true:” and “Call each path_n()” form the main loop of the scheduling process and are not part of the path_n() function itself.
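The published scheduler, reroute.sh, is a shell script; the following Python rendering sketches the same del-flow / dump-flow / path_n / polling structure, assuming Open vSwitch’s ovs-ofctl utility is available and that each equal-cost path’s flow rules are stored in a per-path directory of files. Switch names, the file layout, and the dwell time per path are assumptions for illustration.

# Illustrative Python rendering of the reroute.sh polling logic (assumptions:
# ovs-ofctl on PATH, switch names, per-path rule-file layout, dwell time).
import subprocess
import time

SWITCHES = ['leaf1', 'leaf2', 'leaf3', 'spine1', 'spine2', 'spine3']
PATH_DIRS = ['path1', 'path2']  # one directory of flow-rule files per ECMP path


def del_flow(switches):
    """Step 1: delete all flow entries on every switch to clear the old path."""
    for sw in switches:
        subprocess.run(['ovs-ofctl', '-O', 'OpenFlow13', 'del-flows', sw], check=True)


def dump_flow(switches):
    """Step 2: print the current flow entries for diagnostics."""
    for sw in switches:
        subprocess.run(['ovs-ofctl', '-O', 'OpenFlow13', 'dump-flows', sw], check=True)


def path_n(path_dir):
    """Step 3: install the flow entries that pin traffic to one equal-cost path."""
    del_flow(SWITCHES)
    for sw in SWITCHES:
        # Assumed layout: path1/leaf1.flows holds leaf1's rules for path 1, etc.
        subprocess.run(['ovs-ofctl', '-O', 'OpenFlow13', 'add-flows', sw,
                        '%s/%s.flows' % (path_dir, sw)], check=True)
    dump_flow(SWITCHES)


if __name__ == '__main__':
    # Step 4: the main polling loop cycles the elephant flow over the ECMP paths.
    while True:
        for path in PATH_DIRS:
            path_n(path)
            time.sleep(10)  # dwell time per path (assumed)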

3. Experimental Setup and Results

3.1. Experimental Setup

To verify the effectiveness of the proposed elephant-flow monitoring method, Mininet simulates the network devices and connects them to the Ryu controller, which takes over the network. Using the el_monitor application developed in this work, the pingall command is used to test network connectivity; this generates many mouse flows, and we observe whether el_monitor can detect them. Then h1, h4, and h5 are set as network performance test clients (iperf clients), and h2, h3, and h6 serve as network performance test servers (iperf servers), with bandwidths of 8 Mbps, 4 Mbps, and 100 Kbps, respectively, to test the elephant flow, medium-sized elephant flow, and mouse flow. We observe whether the el_monitor application works normally throughout this process. The network topology is shown in Figure 2.
To ensure consistency in the emulation, the experiments employed a Leaf-Spine topology comprising nine switches, with six leaf switches and three spine switches. This structure provided multiple equal-cost paths to support robust traffic distribution. The iperf tool generated a mix of elephant and mice flows under varying load conditions. This study identified elephant flows as those that either transferred more than 10 MB of data or maintained a sustained throughput above 1 Mbps. The Ryu SDN controller managed the network and actively polled flow statistics every 5 s to detect and respond to emerging elephant flows. Each network link operated at 10 Gbps, with latency configured at 1 millisecond between edge and leaf switches and 2 milliseconds between leaf and spine switches, closely reflecting realistic data center conditions. Each switch port maintained a queue size of 1000 packets to simulate typical buffering behavior. Every simulation lasted for 120 s to capture both transient effects and steady-state traffic dynamics.
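A minimal Mininet sketch of this setup is given below, assuming one host per leaf switch and a local Ryu controller; because Mininet’s tc-based link shaping tops out at 1 Gbps, bw=1000 stands in for the 10 Gbps links, while the latency and queue settings follow the text.

# Minimal Mininet sketch of the Leaf-Spine emulation described above
# (assumptions: one host per leaf, controller at 127.0.0.1:6633, and
# 1 Gbps links standing in for the 10 Gbps links in the text).
from mininet.link import TCLink
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import Topo


class LeafSpine(Topo):
    """Six leaf and three spine switches in a full leaf-to-spine mesh."""

    def build(self, leaves=6, spines=3):
        spine_sw = [self.addSwitch('spine%d' % (i + 1)) for i in range(spines)]
        for i in range(leaves):
            leaf = self.addSwitch('leaf%d' % (i + 1))
            host = self.addHost('h%d' % (i + 1))  # one host per leaf (assumed)
            # 1 ms host-to-leaf latency, 1000-packet queues, as in the text.
            self.addLink(host, leaf, bw=1000, delay='1ms', max_queue_size=1000)
            for spine in spine_sw:
                # 2 ms leaf-to-spine latency, full mesh to every spine.
                self.addLink(leaf, spine, bw=1000, delay='2ms', max_queue_size=1000)


if __name__ == '__main__':
    net = Mininet(topo=LeafSpine(), link=TCLink,
                  controller=lambda name: RemoteController(name, ip='127.0.0.1',
                                                           port=6633))
    net.start()
    h1, h2 = net.get('h1', 'h2')
    h2.cmd('iperf -s -u &')  # UDP server in the background
    # 8 Mbps UDP elephant flow for 120 s, matching the test in the text.
    print(h1.cmd('iperf -c %s -u -b 8M -t 120' % h2.IP()))
    net.stop()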
In a fully connected network such as Leaf-Spine, there are multiple equivalent paths, and this paper exploits this structural property to schedule elephant flows. We simulate the network topology in Mininet and use the controller to issue flow tables so that the entire network converges. We then generate an elephant flow; once it appears, we analyze the OpenFlow switch flow tables through the running scheduler and observe whether the path changes. If it does, the scheduling is successful. Traditional data centers generally use the Fat-Tree structure, so to show the advantages of the scheduling system under the Leaf-Spine structure, we use Mininet to simulate a Fat-Tree structure, connect the Ryu controller, and use the STP protocol to converge the network, observing how many paths remain. Since data centers must support migration at any time, which generates elephant flows, we specifically analyze network resource utilization under the Leaf-Spine structure.

3.2. Experimental Results

Running the pingall command generates a large number of ICMP messages in the network environment; their duration is short and their volume small, so they are mouse flows. We then test the medium elephant flow and the elephant flow using bandwidths of 4 Mbps and 9 Mbps, respectively, and observe that the el_monitor application successfully identifies the traffic and displays detailed information such as the protocol, IP addresses, and transmission rate.
After an elephant flow appears in the data center’s Leaf-Spine network structure, the traffic needs to be redirected in time. Once the Leaf-Spine network is simulated, the scheduler reroute.sh is used to distribute the traffic across equal-cost paths, thereby switching between the two paths. The program implements dynamic rerouting through several steps: deleting old flow table entries (the del-flow function), printing flow table entries (the dump-flow function), and configuring path 1 and path 2 (the path_1 and path_2 functions).
The experiment tested the elephant flow in the single-path Fat-Tree network and in the multi-path Leaf-Spine network, respectively. Traffic on the generating network interface was captured and exported with Wireshark, and packet loss rates were compared after filtering on the User Datagram Protocol (UDP) and the host IP. Because the Spanning Tree Protocol enforces a single path in the Fat-Tree structure, elephant flows cannot be dynamically rerouted; the load on the single link becomes too high, overburdening it and causing network congestion.
The throughput curve of the Fat-Tree structure is shown in Figure 3. After running STP in the Fat-Tree structure, multiple congestion events occurred during the 120-s iperf run, at roughly 65 s, 75 s, 98 s, and 105 s. When the network is congested, the packet forwarding rate drops to 0 packets/s, meaning no data packets can pass through the network until it recovers. The network performance of the single-path solution is therefore unstable.
The throughput curve of the Leaf-Spine structure is shown in Figure 4. After running the scheduler on the Leaf-Spine structure, the packet loss rate is clearly better than that of the single-path Fat-Tree, and network performance tends to be stable. During the 120-s test, the network rate stabilized at about 8 Mbps, which is essentially consistent with the bandwidth set by iperf. Although the rate fluctuated slightly at the 35th, 40th, and 63rd seconds, no congestion occurred and traffic continued at a high rate, fluctuating between 7 Mbps and 8 Mbps with little impact on overall performance. In addition to throughput, packet loss and jitter were measured from Wireshark logs; these showed packet drops and near-zero throughput under congestion, though detailed curves are omitted due to space constraints. Future versions of the paper will include a full set of performance indicators, including packet loss rate and jitter graphs.

4. Conclusions

This paper uses SDN to monitor and schedule elephant flows, leveraging SDN’s centralized control to overcome the limitations of traditional distributed networks. An elephant flow monitoring and scheduling application is developed based on the Ryu framework. The OpenFlow switch collects data every 5 s, and when corresponding conditions are met, flow details such as source IP, destination IP, port, and protocols are displayed on the console.
In contrast to the spanning tree method, the proposed approach enhances network utilization and analyzes paths between switches. Experiments simulating 9 Mbps elephant flows over 120 s show that the traditional Fat-Tree spanning tree structure causes network jitter, packet loss, and congestion. In contrast, the Leaf-Spine solution proposed here reduces jitter, avoids congestion, and maintains stable throughput of approximately 8 Mbps. This solution can help network operators improve monitoring, performance, and equipment utilization. The current approach has limitations that need further exploration: static thresholds for elephant flow detection may not perform well in dynamic, unpredictable environments, and the round-robin rerouting strategy lacks real-time awareness of network conditions, which can lead to inefficient path utilization. Future work will focus on adaptive thresholding for more flexible detection, incorporating real-time telemetry to optimize path selection, integrating machine learning to enhance flow prediction and responsiveness, and extending the evaluation to large-scale deployments for better scalability insights. Additionally, applying our approach in multi-tenant cloud environments introduces new challenges in policy isolation, dynamic resource sharing, and control overhead, which merit further investigation.
The proposed elephant flow scheduling approach not only demonstrates notable performance improvements but also offers valuable implications for existing network management protocols. By comparing with traditional mechanisms such as ECMP and OSPF, we highlight the advantages of SDN-based dynamic flow rerouting in enhancing throughput consistency and mitigating congestion. Moreover, the method can effectively complement Quality of Service frameworks by enabling real-time flow classification and adaptive path selection. Its fine-grained control capabilities also address the limitations of coarse-grained flow aggregation, offering a practical path forward for refining future network protocol designs.

Author Contributions

Conceptualization, M.C. and Q.C.; methodology, M.C.; software, M.C.; validation, M.C., Q.C. and H.W.; formal analysis, Y.S.; investigation, M.C.; resources, M.C.; data curation, M.C.; writing—original draft preparation, M.C.; writing—review and editing, M.C.; visualization, M.C.; supervision, H.W.; project administration, M.C.; funding acquisition, H.W. and Q.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Hunan Province, grant numbers 2023JJ50197 and 2024JJ7295, and the Education Department of Hunan Province under Grant 23A0444 and 22C0307.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Xia, W.; Zhao, P.; Wen, Y.; Xie, H. A survey on data center networking (DCN): Infrastructure and operations. IEEE Commun. Surv. Tutor. 2016, 19, 640–656.
  2. Liatifis, A.; Sarigiannidis, P.; Argyriou, V.; Lagkas, T. Advancing SDN from OpenFlow to P4: A survey. ACM Comput. Surv. 2023, 55, 37.
  3. Benson, T.; Akella, A.; Maltz, D.A. Network traffic characteristics of data centers in the wild. In Proceedings of the 10th ACM SIGCOMM Conference on Internet Measurement, Melbourne, Australia, 1–3 November 2010.
  4. Zhang, J.; Ye, M.; Guo, Z.; Yen, C.Y.; Chao, H.J. CFR-RL: Traffic engineering with reinforcement learning in SDN. IEEE J. Sel. Areas Commun. 2020, 38, 2249–2259.
  5. Al-Fares, M.; Radhakrishnan, S.; Raghavan, B.; Huang, N.; Vahdat, A. Hedera: Dynamic flow scheduling for data center networks. In Proceedings of the 7th USENIX Symposium on Networked Systems Design and Implementation (NSDI), San Jose, CA, USA, 28–30 April 2010.
  6. Tang, Q.; Zhang, H.; Dong, J.; Zhang, L. Elephant Flow Detection Mechanism in SDN-Based Data Center Networks. Sci. Program. 2020, 2020, 1–8.
  7. Liu, J.; Li, J.; Shou, G.; Hu, Y.; Guo, Z.; Dai, W. SDN based load balancing mechanism for elephant flow in data center networks. In Proceedings of the 2014 International Symposium on Wireless Personal Multimedia Communications (WPMC), Sydney, Australia, 7–10 September 2014.
  8. Hamdan, M.; Khan, S.; Abdelaziz, A.; Sadiah, S.; Shaikh-Husin, N.; Al Otaibi, S.; Maple, C.; Marsono, M.N. DPLBAnt: Improved load balancing technique based on detection and rerouting of elephant flows in software-defined networks. Comput. Commun. 2021, 180, 315–327.
  9. Bezerra, J.M.; Pinheiro, A.J.; de Souza, C.P.; Campelo, D.R. Performance evaluation of elephant flow predictors in data center networking. Future Gener. Comput. Syst. 2020, 102, 952–964.
  10. Zhao, Z.; Xue, X.; Guo, B.; Zhao, Y.; Zhang, X.; Guo, Y.; Ji, W.; Yin, R.; Chen, B.; Huang, S. ReSAW: A reconfigurable and picosecond-synchronized optical data center network based on an AWGR and the WR protocol. J. Opt. Commun. Netw. 2022, 14, 702–712.
  11. Hu, S.; Wang, X.; Shi, Z. A software defined network scheme for intra datacenter network based on fat-tree topology. J. Phys. Conf. Ser. 2021.
  12. Das, D.; Sahoo, B.; Roy, S.; Mohanty, S. Performance Analysis of an OpenFlow-Enabled Network with POX, Ryu, and ODL Controllers. IETE J. Res. 2024, 70, 8538–8555.
  13. Singh, A.; Ong, J.; Agarwal, A.; Anderson, G.; Armistead, A.; Bannon, R.; Boving, S.; Desai, G.; Felderman, B.; Germano, P.; et al. Jupiter rising: A decade of Clos topologies and centralized control in Google’s datacenter network. ACM SIGCOMM Comput. Commun. Rev. 2015, 45, 183–197.
  14. Sultan, M.; Imbuido, D.; Patel, K.; MacDonald, J.; Ratnam, K. Designing knowledge plane to optimize leaf and spine data center. In Proceedings of the 2020 IEEE 13th International Conference on Cloud Computing (CLOUD), Beijing, China, 10 October 2020.
  15. Alizadeh, M.; Edsall, T. On the data path performance of leaf-spine datacenter fabrics. In Proceedings of the 2013 IEEE 21st Annual Symposium on High-Performance Interconnects, San Jose, CA, USA, 21–23 August 2013.
  16. Zhang, H.; Guo, X.; Yan, J.; Liu, B.; Shuai, Q. SDN-based ECMP algorithm for data center networks. In Proceedings of the 2014 IEEE Computers, Communications and IT Applications Conference, Washington, DC, USA, 11–13 September 2014.
  17. Arbelaez, A.; Mehta, D.; O’Sullivan, B.; Quesada, L. A constraint-based parallel local search for the edge-disjoint rooted distance-constrained minimum spanning tree problem. J. Heuristics 2018, 24, 359–394.
  18. Rahman, O.; Quraishi, M.A.G.; Lung, C.H. DDoS Attacks Detection and Mitigation in SDN Using Machine Learning. In Proceedings of the 2019 IEEE World Congress on Services (SERVICES), Milan, Italy, 1 July 2019.
  19. Kashem, M.A.; Shamsuddoha, M.; Nasir, T.; Chowdhury, A.A. Supply Chain Disruption versus Optimization: A Review on Artificial Intelligence and Blockchain. Knowledge 2023, 3, 80–96.
  20. Mininet: An Instant Virtual Network on Your Laptop. Available online: http://mininet.org/ (accessed on 25 December 2015).
Figure 1. The diagram of monitoring strategy for elephant flow.
Figure 2. The network topology in the experimental setup.
Figure 3. The throughput curve of Fat-Tree.
Figure 4. The throughput curve of Leaf-Spine.
