Scalable Container-Based Time Synchronization for Smart Grid Data Center Networks
Abstract
1. Introduction
- Developing a simple optimization framework utilizing stochastic gradient descent (SGD) and the implicit function theorem (IFT) for dynamic load balancing and resource management in the SGDA network.
- Proposing a multi-queuing model (MQM) for real-time load management via cloud-based resource allocation and auto-scaling.
- Establishing a decentralized edge-to-fog framework for efficient advanced metering infrastructure (AMI) sensor communication and energy monitoring.
- Implementing predictive workload balancing and multipath routing to reduce latency and optimize traffic flow in SGDAs.
- Developing a security group firewalling strategy using OpenFlow to enhance network security and load management, providing a robust backend for data protection.
- Conducting performance evaluations comparing SGDA efficiency with traditional data center network topologies.
2. Relevant Literature Review
2.1. Topological Models
2.2. Research Gaps in SG DCNs
3. System Model
3.1. Analytical Framework: Scalable CTSM-SGDCN
3.1.1. Problem Definition and Optimization Formulation
3.1.2. Implicit Function Theorem for Load Balancing
3.2. Mathematical Model of Traffic and Stability
3.2.1. CTSM Traffic Stability Model
- i. QoS vector path stability
- ii. Stream Workload Path (SWP) stability
3.2.2. CTSM Auto-Scaling and Traffic Optimization
3.2.3. CTSM Multi-Queue System
- Dynamic allocation via auto-scaling
- Continuous Time Markov Chain (illustrated in the sketch after this list)
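To make the multi-queue and auto-scaling ideas concrete, the sketch below treats the container pool as the server count c of an M/M/c queue, the simplest birth-death CTMC, and picks the smallest pool whose mean waiting time meets a target. The arrival rate, service rate, and SLA values are illustrative assumptions, not parameters from the paper.

```python
import math

def mmc_metrics(lam: float, mu: float, c: int):
    """Steady-state metrics of an M/M/c queue (a birth-death CTMC).

    lam: arrival rate, mu: per-server service rate, c: number of servers
    (here: container instances). Requires lam < c*mu for stability.
    """
    rho = lam / (c * mu)
    if rho >= 1.0:
        return None  # unstable: the queue grows without bound
    a = lam / mu  # offered load in Erlangs
    # P0: probability the system is empty
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    # Erlang-C: probability an arrival must wait
    erlang_c = (a**c / (math.factorial(c) * (1 - rho))) * p0
    wq = erlang_c / (c * mu - lam)  # mean waiting time in queue
    return {"utilization": rho, "p_wait": erlang_c, "mean_wait": wq}

def autoscale(lam: float, mu: float, wait_sla: float, c_max: int = 64) -> int:
    """Smallest container count whose mean wait meets the SLA (illustrative)."""
    for c in range(1, c_max + 1):
        m = mmc_metrics(lam, mu, c)
        if m and m["mean_wait"] <= wait_sla:
            return c
    return c_max

# Example: 120 req/s, each container serves 50 req/s, target mean wait 5 ms.
print(autoscale(lam=120.0, mu=50.0, wait_sla=0.005))  # -> 4 containers
```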
4. SGDA Ecosystem
4.1. System Components
4.1.1. Edge Layer
4.1.2. Fog Layer
4.1.3. Consolidation Domain (DC)
4.1.4. Mininet CloudFormation IaC
4.1.5. Security Group Firewalling
4.2. SGDA AMI Hardware Architecture and System Components
- Transmission and Step-Down Transformers
- Primary Setup: Three transmission transformers (models A, B, and C) are used. In our demonstration setup, each transformer steps down 240 V to 12 V.
- Grid Voltage Adaptation: Although actual grids operate with power ratings ranging from 11 kVA to 133 kVA, our configuration employs a three-phase transformer to step down a primary voltage of 415 V to 240 V per phase before additional local step-downs.
- Sensing and Data Acquisition
- Sensor Deployment: Sensors continuously monitor power parameters—current and voltage—across both transmission and distribution networks.
- Distribution-Level Monitoring: At the distribution level (DISCO), smart meters equipped with load-scheduling sensors provide enhanced monitoring of power consumption.
- System Layers and Communication: The SG system operates through three integrated layers:
- Metering Layer: Captures and transmits grid data from various points in the network.
- Monitoring/Control Layer: Processes and analyses real-time grid parameters to support immediate operational decisions.
- Cloud/IoT Interface: Utilizes wireless IoT modules and a dedicated VPC landing zone to support demand-side management (DSM) and provide centralized control.
- Distribution Network Control
- Relay Operations: The distribution network employs relays to route power efficiently to consumer blocks (see Figure 3).
- Local Controller: A PIC18F4550 microcontroller manages DISCO operations. It interfaces with the following (the conversion arithmetic is sketched after this list):
- Current Sensing: An ACS712-30 sensor measures current usage.
- Voltage Monitoring: A voltage divider circuit tracks voltage consumption, with data fed into the microcontroller’s ADC to compute and display load demand.
- RF Communication: An RF IoT module transmits real-time data to the cloud, enabling remote relay activation and precise load management.
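As a concrete reading of the sensing arithmetic above, the sketch below converts raw ADC counts from the ACS712-30 channel and the voltage-divider channel into a load-demand estimate. It is host-side Python rather than PIC18F4550 firmware, and the scaling constants (10-bit ADC, 5 V reference, 66 mV/A sensitivity, 11:1 divider ratio) are datasheet-typical assumptions rather than values stated in the paper.

```python
ADC_REF_V = 5.0
ADC_FULL_SCALE = 1023        # 10-bit ADC (PIC18F4550)
ACS712_SENS_V_PER_A = 0.066  # ACS712-30: 66 mV per ampere (datasheet-typical)
ACS712_ZERO_V = 2.5          # sensor output at 0 A (Vcc/2)
DIVIDER_RATIO = 11.0         # assumed R1:R2 scaling of the sampled voltage

def adc_to_volts(count: int) -> float:
    """Convert a raw ADC count to the voltage seen at the ADC pin."""
    return count * ADC_REF_V / ADC_FULL_SCALE

def load_demand_watts(current_count: int, voltage_count: int) -> float:
    """Estimate instantaneous load demand from the two sensor channels."""
    amps = (adc_to_volts(current_count) - ACS712_ZERO_V) / ACS712_SENS_V_PER_A
    volts = adc_to_volts(voltage_count) * DIVIDER_RATIO
    return volts * amps

# e.g. ADC reads 645 on the current channel and 223 on the voltage channel:
print(round(load_demand_watts(645, 223), 1), "W")
```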
- Real-Time Feedback and Quality of Service (QoS)
- The AMI edge layer facilitates dynamic feedback between generation companies (GENCOs) and residential consumers (see Figure 4).
- After we validated the accuracy of our smart grid neural network model, we deployed it in the design testbed. This implementation meets the QoS requirements by optimizing transmission capacity and minimizing time delays for critical services such as load management.
- Advanced Data Aggregation and Algorithmic Processing: To ensure robust performance, key algorithms within the SG system include the following:
- Traffic-Token Construction
- Data Encryption
- Sink Data Aggregation (SGDA)
- Initialization: Start with an arbitrary CIU node vector, denoted as ϑ⁰.
- Iterative Process: At each iteration i, a CIU row i(k) (where i ∈ {1, …, n}) is randomly selected from the layered cluster parent nodes.
- Gradient Computation: The selected data streams are used to compute the gradient of the local loss function, ∂(xᵢ, yᵢ), as detailed in Algorithm 1.
- Convergence: The procedure iteratively projects onto available hyper-node planes until convergence is achieved (a runnable sketch follows Algorithm 1).
Algorithm 1: Edge Connection with SGD Optimization
1: Input: CIU
2: Output: n
3:
4: For each AMI-CIU connection do
5:   Determine (CIU + AMI destination) as vectors of different lengths
6:   For (i = 0; i < n; i++) do
7:     Compute gradient based on current load and connection
8:     Update weight: wᵢ₊₁ = wᵢ − η∇∂(xᵢ, yᵢ)
9:     Set cluster parent:
10:    Set cluster parent as (AMI Tx) based on updated weights
11:  End loop when convergence is achieved
12: End
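A minimal, runnable reading of Algorithm 1 is sketched below. The least-squares local loss, the learning rate, and the argmax rule for choosing the cluster parent are illustrative assumptions; the paper leaves the loss function and parent-ranking rule abstract.

```python
import numpy as np

def sgd_edge_connection(X, y, lr=0.05, tol=1e-6, max_iter=10_000, seed=0):
    """Sketch of Algorithm 1: pick a random CIU row, take an SGD step on a
    local least-squares loss (an illustrative choice), stop at convergence.

    X: (n, d) matrix of CIU load/connection features; y: (n,) targets.
    Returns the weight vector used to rank candidate cluster parents.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])              # arbitrary initial vector (theta^0)
    for _ in range(max_iter):
        i = rng.integers(len(X))          # randomly selected CIU row i(k)
        grad = (X[i] @ w - y[i]) * X[i]   # gradient of the local loss
        w_new = w - lr * grad             # weight update w_{i+1}
        if np.linalg.norm(w_new - w) < tol:
            return w_new                  # convergence achieved
        w = w_new
    return w

# Choose the AMI transmitter whose feature row scores highest under w
# (an assumed reading of "set cluster parent based on updated weights").
X = np.random.default_rng(1).normal(size=(8, 3))
y = X @ np.array([0.5, -0.2, 0.1])
w = sgd_edge_connection(X, y)
print("cluster parent (AMI Tx):", int(np.argmax(X @ w)))
```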
4.3. Hardware Implementation
- i. System activation and control are accomplished with the following:
- GENCO sources (A, B, C) under testing, displaying status on an LCD interface.
- The RF IoT module is configured for real-time communication; activation start buttons energize relays and route power to the distribution buses.
- The cloud-driven LCD displays available power and real-time load status.
- ii. Dynamic load management and smart control: load variations trigger cloud-based automation as follows:
- (a) As demand increases, the DISCO reports to the cloud control unit, which activates additional GENCOs to meet consumption needs (Figure 4).
- (b) If demand exceeds capacity, the system implements smart load shedding, de-energizing relays to disconnect non-priority loads.
- (c) When demand drops, the system takes GENCOs offline, preventing overproduction.
- iii. Neural network-driven self-healing optimizes power allocation, ensuring efficient demand–supply balancing while minimizing energy wastage. This seamless integration of SGDA hardware, IoT, and cloud infrastructure can improve real-time monitoring, control, and energy efficiency in next-generation smart grids.
4.4. Neural Edge Network Design
4.4.1. Network Architecture
4.4.2. Preprocessing and Learning Algorithm
- Dimensionality Reduction: To eliminate redundant features.
- Data Normalization: Scaling all input features to lie between 0 and 1, forming a consistent input-target matrix.
- Clustering: Pre-filtering the data into clusters helps in detecting anomalies such as faulty demand meters or abnormal customer behavior (a sketch of these three steps follows below).
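A compact sketch of the three preprocessing steps using scikit-learn; the component count, cluster count, and anomaly threshold are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import KMeans

# Toy AMI feature matrix: rows = meter readings, columns = raw features.
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 12))

# 1. Dimensionality reduction to drop redundant features (the component
#    count is an illustrative choice, not taken from the paper).
X_red = PCA(n_components=4).fit_transform(X_raw)

# 2. Min-max normalization so every input feature lies in [0, 1].
X_norm = MinMaxScaler().fit_transform(X_red)

# 3. Clustering as an anomaly pre-filter: points far from their centroid
#    are flagged as possible faulty meters or abnormal customer behaviour.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_norm)
dist = np.linalg.norm(X_norm - km.cluster_centers_[km.labels_], axis=1)
anomalies = np.where(dist > dist.mean() + 2 * dist.std())[0]
print("flagged meters:", anomalies)
```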
4.4.3. Experimental Setup and Performance Evaluation
4.4.4. Data Aggregation via AMI Local Concentrator
- Executing batches of conjugate gradient updates concurrently across AMI nodes.
- Aggregating the data by averaging values computed for each iteration (k+1).
- Synchronizing node operations to ensure efficient data flow and scalability (sketched below).
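The aggregation loop can be read as barrier-synchronized parameter averaging. In the sketch below, the local update is a stand-in for the paper's batches of conjugate gradient steps, and all numeric choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three AMI nodes, each with its own local data distribution.
node_data = [rng.normal(loc=m, size=(50, 3)) for m in (0.0, 0.5, 1.0)]

def local_update(w: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Stand-in for a node's batch of local updates (the paper uses
    conjugate-gradient steps; a damped move toward the local data mean
    is used here purely for illustration)."""
    return w + 0.5 * (X.mean(axis=0) - w)

w = np.zeros(3)
for k in range(20):                                     # iterations k, k+1, ...
    updates = [local_update(w, X) for X in node_data]   # concurrent in practice
    w = np.mean(updates, axis=0)                        # average at iteration k+1
print("aggregated state:", np.round(w, 3))
```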
4.5. Computational Complexity Analysis
Algorithm 2: AMI Local Concentrator/Local Aggregator
Algorithm 3: Container Logical Instantiation/Global Grid Concentrator
1: Input: grid workload sources
2: Output: auto-scaling group instances
3: Create an object instance: instantiate an object from a map
4: For each node i from 1 to n do
5:   Call the open-firewall function to add sg-link
6:   Load balancer self-connection
7:   Execute recycle loop ( )
8:   Directly control the container resources
9: End
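Read as plain Python, Algorithm 3 amounts to the loop below. The Container class, open_firewall helper, and sg-link naming are hypothetical stand-ins for the CloudFormation/OpenFlow calls in the real system; the load balancer is modelled as a plain list.

```python
from dataclasses import dataclass, field

@dataclass
class Container:
    node_id: int
    sg_links: list[str] = field(default_factory=list)

def open_firewall(c: Container) -> None:
    """Attach a security-group link rule (hypothetical rule naming)."""
    c.sg_links.append(f"sg-link-{c.node_id}")

def instantiate_group(workload_sources: list[int]) -> list[Container]:
    """Sketch of Algorithm 3: instantiate one container per grid workload
    source, open its firewall link, and register it with the balancer."""
    balancer: list[Container] = []
    for node_id in workload_sources:
        c = Container(node_id)   # object instantiated from a map/template
        open_firewall(c)         # add the :sg-link rule
        balancer.append(c)       # load-balancer self-connection
    return balancer

group = instantiate_group([1, 2, 3])
print([c.sg_links for c in group])
```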
Algorithm 4: SDN OpenFlow Firewall Services
1: Input: Call Schedule ( ), OpenFlow firewall services ( ), services bundling ( ), source address, destination address, queue size, link, link information
2: Output: OpenFlow destination (load balancer ports)
3: Parameters: OpenFlow_weight ← empty; weight ← 0; weightedMoving ← 0; totalWeight ← 0
4: OpenFlow_weight_Container_history_queue ← null
5: i ← 0
6: While i < OpenFlow_monitor Call Schedule do
7:   queue ← (HistoryListSize − OpenFlow_monitor Call − i)
8:   OpenFlow_weight ← fiboA1 + fiboB2
9:   weightedMoving ← weightedMoving + (ContainerHistoryItem × weight)
10:  Recycle loop
11: End while
12: Calculate event filtering ( ) and execute dynamic network balancing:
13: legitimateInitialValue ← (OpenFlow_weight) × (pastInitialValue + trendPosteriorValue)
14: illegitimateTrendPosteriorValue ← (initialValue − pastInitialValue) + (posteriorValue)
15: If (sensed event = 1) then
16:   Set services ( )
17:   Create another instance of the virtual node on OpenFlow
18:   Optimise flow table
19:   Do list control ( ); End
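The Fibonacci-weighted moving average and event filtering at the heart of Algorithm 4 admit the following sketch; the jump threshold used to separate legitimate from illegitimate trends is an assumed parameter, not one given in the paper.

```python
def fib_weights(n: int) -> list[int]:
    """Fibonacci weights (fiboA + fiboB); later entries grow, so when
    zipped with an oldest-first history the newest samples weigh most."""
    a, b, out = 1, 1, []
    for _ in range(n):
        out.append(a)
        a, b = b, a + b
    return out

def weighted_moving(history: list[float]) -> float:
    """Fibonacci-weighted moving average of the container load history,
    mirroring the while-loop of Algorithm 4 (illustrative reading)."""
    w = fib_weights(len(history))
    return sum(h * wi for h, wi in zip(history, w)) / sum(w)

def is_legitimate(history: list[float], threshold: float = 1.5) -> bool:
    """Event filtering: flag a sensed event when the newest sample jumps
    far above the weighted trend (threshold is an assumed parameter)."""
    trend = weighted_moving(history[:-1])
    return history[-1] <= threshold * trend

hist = [10.0, 11.0, 10.5, 12.0, 30.0]   # oldest first; last value spikes
print(weighted_moving(hist), is_legitimate(hist))  # spike flagged as event
```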
Edge Streaming Algorithm
Algorithm 5: CTSM Secured Network Construction
1: Input: AMI network design message
2: Output: output placed in the sink at the cluster servers
Procedure:
3: Set CTSM VPC Encryption = 1
4: If (a node is a source node) then
5:   Exit Call (AMI base hub)
6: Perform Flooding (initialLevel, baseStationID)
7: Wait for the TCP HELLO message to arrive
8: If (a smart meter sensor node sends a message to the AMI node) then
9:   Set the message's parentID, recHopCnt, and recLevel
10:  Increment NetInformation curEntry
11:  If (current hop count > recHopCnt + 1) then
12:    Set current hop count = recHopCnt + 1
13:  If (current hop count > recHopCnt + 1) then
14:    Break
15:  If (TOS LOCAL ADDRESS is not a leaf node) then
16:    Perform Inundation (currentLevel, currentNodeID)
17:  If (messageType is Encrypted) then
18:    JOIN
19:  If (the maximum number of child nodes is not exceeded by the parent node) then
20:    Set parent = parentID
21:  Else
22:    Send message (RESET) to the node
23:  End If
24: End If
25: End If
26: End If
27: End If
28: End
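The tree-construction core of Algorithm 5 (flooding, hop-count comparison, parent adoption under a child limit) can be sketched as follows; the encryption handshake and RESET retry paths are omitted, and the child limit is an illustrative value.

```python
from collections import deque

def build_ami_tree(adj: dict[int, list[int]], sink: int, max_children: int = 3):
    """Sketch of Algorithm 5's tree construction: flood level/hop messages
    from the AMI base hub and let each node adopt the neighbour offering
    the smallest hop count as parent, subject to a child limit."""
    parent = {sink: None}
    hops = {sink: 0}
    children = {n: 0 for n in adj}
    q = deque([sink])
    while q:
        u = q.popleft()
        for v in adj[u]:
            # join v under u only if it improves v's hop count (recHopCnt + 1)
            # and u still has room for another child node
            if hops.get(v, float("inf")) > hops[u] + 1 and children[u] < max_children:
                hops[v] = hops[u] + 1
                parent[v] = u
                children[u] += 1
                q.append(v)
    return parent, hops

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(build_ami_tree(adj, sink=0))
```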
Algorithm 6: CTSM Encrypted Data Construction
1: Input: network message, message type
2: Output: output sent to the sink AMI local concentrator
Procedure:
3: For i = 1 to Kn+1 do
4:   Obtain data that has been encrypted
5:   AggregateNode = gatherNeighbor(random)
6:   Wait for the AMI cluster node to provide a response message
7:   newValue = ComputeKey(AMI cluster node, KeySeed data, Received_KeySeed data)
8:   makeDirection = directionValue(newValue)
9:   currentCurveLevel = setCurveLevel
10:  HomomorphicCurve = encryptedData(direction, curveLevel, newUpdate)
11:  Packing(encryptedData)
12:  Wait
13:  Ready to send to the local aggregator
14:  If (the construction is complete) then
15:    Send to local concentrator
16:  End If
17: End For
End
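Algorithm 6's homomorphic curve encryption is not specified in enough detail to reproduce here. As a self-contained stand-in, the sketch below uses pairwise cancelling masks, a standard additive secure-aggregation trick, so the concentrator recovers only the aggregate; it mirrors the data flow of the algorithm rather than its cryptography.

```python
import secrets

MOD = 2**61 - 1  # arithmetic modulus for the masked sums

def masked_shares(readings: dict[str, int]) -> dict[str, int]:
    """Each AMI cluster node masks its reading with pairwise random values
    that cancel in the sum (stand-in for the paper's homomorphic scheme):
    the concentrator learns only the aggregate, not individual readings."""
    nodes = sorted(readings)
    masks = {n: 0 for n in nodes}
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            r = secrets.randbelow(MOD)        # shared seed between a and b
            masks[a] = (masks[a] + r) % MOD   # a adds the mask
            masks[b] = (masks[b] - r) % MOD   # b subtracts it
    return {n: (readings[n] + masks[n]) % MOD for n in nodes}

readings = {"meter1": 230, "meter2": 180, "meter3": 310}
shares = masked_shares(readings)
aggregate = sum(shares.values()) % MOD        # masks cancel at the concentrator
print(aggregate == sum(readings.values()))    # True
```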
Algorithm 7: CTSM Aggregated Data Security
4.6. Performance Evaluation
4.6.1. DCN Description
4.6.2. Evaluation Results
- Received Energy Data: The CTSM SL-VPC architecture exhibits the highest percentage of received energy data (29.87%), indicating more efficient energy-data reception than the other architectures. In context, this sensitivity means that optimizing energy data reception is crucial for system performance, especially in high-demand SG systems.
- Packetization Service Delays: CTSM SL-VPC shows the lowest packetization delay (13.11%), making it the most sensitive to minimizing delays in packetization. Lower packetization delays yield a more responsive system, which is needed for AMI edge data processing.
- Load Balancer Access Delays: The Ficonn architecture has the lowest load balancer access delay (3.66%), while the DCell architecture exhibits the highest (27.47%). Architectures with lower access delays can better handle dynamic traffic and high-volume requests, making them more sensitive to load-balancing efficiency.
- Service Throughput: CTSM SL-VPC leads with the highest service throughput (27.27%), which is significant for the efficient delivery of high-bandwidth services. Sensitivity in service throughput highlights the importance of network architectures with a high capacity to maintain performance under heavy loads.
- Traffic Availability: CTSM SL-VPC also demonstrates high traffic availability (70.85%), ensuring stable network connections. Architectures supporting greater availability are more resilient to disruptions, contributing to continuous edge-to-cloud service delivery.
- Encryption–Decryption Overhead: SGDA IPv6 Gateway and SGDA-SL-VPC have the highest encryption–decryption overheads (37.50% and 34.37%), indicating these architectures are more sensitive to security-related processing costs. Minimizing encryption overhead is crucial for optimizing performance, especially in our edge-to-cloud ecosystem (see Figure 2).
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
SGDA | Smart Grid Distributed Architecture |
DCN | Data Center Network |
SDN | Software-Defined Network |
SDN-CFS | Software-Defined CloudFormation Stack |
CTSM | Container-based Time Synchronization Model |
SL-VPC | Spine-Leaf Virtual Private Cloud |
MQM | Multi-Queuing Model |
LaScaDa | Layered Scalable Data Center |
OSPF | Open Shortest Path First |
ECMP | Equal-Cost Multipath Routing |
BGP | Border Gateway Protocol |
TRILL | Transparent Interconnection of Lots of Links |
SPB | Shortest Path Bridging |
IFT | Implicit Function Theorem |
AMI | Advanced Metering Infrastructure |
GENCOs | Generation Companies |
ITSF | Implicit Transparent Failover Service |
ALB | Application Load Balancer |
NLB | Network Load Balancer |
CTMC | Continuous Time Markov Chain |
QBD | Quasi-Birth-and-Death Process |
IPP | Independent Power Plants |
DISCOs | Distribution Companies |
VBB | Volume Bulk Balancer |
VBC | Volume Billing Container |
AWS | Amazon Web Services |
IaC | Infrastructure as Code |
DSM | Demand-Side Management |
CIU | Customer Interface Unit |
SRBNF | Subscriber Radial Basis Neural Network Function |
SGRBNF | Stochastic Gradient Radial Basis Neural Function |
SG-NNPC | Stochastic Gradient Neural Network Process Controller |
AMI-SGDA | AMI Intelligent Stochastic Gradient Descent Algorithm |
References
- Zhao, L.; Li, Q.; Ding, G. An Intelligent Web-Based Energy Management System for Distributed Energy Resources Integration and Optimization. J. Web Eng. 2024, 23, 165–195.
- Liu, S.; Chen, L.; Lai, J. Integrated and Accountable Data Sharing for Smart Grids with Fog and Dual-Blockchain Assistance. IEEE Trans. Ind. Inform. 2024, 20, 4940–4952.
- Zhao, S.; Xu, S.; Han, S.; Ren, S.; Wang, Y.; Chen, Z.; Chen, X.; Lin, J.; Liu, W. PPMM-DA: Privacy-Preserving Multidimensional and Multisubset Data Aggregation with Differential Privacy for Fog-Based Smart Grids. IEEE Internet Things J. 2024, 11, 6096–6110.
- Zhang, Z.; Ma, J.; Mao, S.; Liang, P.; Liu, Q. A Low-Overhead and Real-Time Service Recovery Mechanism for Data Center Networks with SDN. In Proceedings of the 2024 International Conference on Networking and Network Applications (NaNA), Yinchuan, China, 9–12 August 2024; pp. 184–189.
- Jie, X.; Han, J.; Chen, G.; Wang, H.; Hong, P.; Xue, K. CACC: A Congestion-Aware Control Mechanism to Reduce INT Overhead and PFC Pause Delay. IEEE Trans. Netw. Serv. Manag. 2024, 21, 6382–6397.
- Kathiravelu, P.; Zaiman, Z.; Gichoya, J.; Veiga, L.; Banerjee, I. Towards an internet-scale overlay network for latency-aware decentralized workflows at the edge. Comput. Netw. 2022, 203, 108654.
- Jiang, W.; Ren, F.; Wu, Y.; Lin, C.; Stojmenovic, I. Analysis of Backward Congestion Notification with Delay for Enhanced Ethernet Networks. IEEE Trans. Comput. 2014, 63, 2674–2684.
- Welzl, M.; Islam, S.; von Stephanides, M. Real-Time TCP Packet Loss Prediction Using Machine Learning. IEEE Access 2024, 12, 159622–159634.
- Meng, Q.; Zhang, Y.; Zhang, S.; Wang, Z.; Zhang, T.; Luo, H.; Ren, F. Switch-Assistant Loss Recovery for RDMA Transport Control. IEEE/ACM Trans. Netw. 2024, 32, 2069–2084.
- Chouikhi, S.; Esseghir, M.; Meerghem-Boulahia, L. Energy Consumption Scheduling as a Fog Computing Service in Smart Grid. IEEE Trans. Serv. Comput. 2023, 16, 1144–1157.
- Bi, J.; Zhang, K.; Yuan, H.; Zhang, J. Energy-Efficient Computation Offloading for Static and Dynamic Applications in Hybrid Mobile Edge Cloud System. IEEE Trans. Sustain. Comput. 2023, 8, 232–244.
- Vahdat, A.; Al-Fares, M.; Farrington, N.; Mysore, R.N.; Porter, G.; Radhakrishnan, S. Scale-Out Networking in the Data Center. IEEE Micro 2010, 30, 29–41.
- Radhakrishnan, S.; Tewari, M.; Kapoor, R.; Porter, G.; Vahdat, A. Dahu: Commodity switches for direct connect data center networks. In Proceedings of the Architectures for Networking and Communications Systems, San Jose, CA, USA, 21–22 October 2013; pp. 59–70.
- Alqahtani, J.; Hamdaoui, B. Rethinking Fat-Tree Topology Design for Cloud Data Centers. In Proceedings of the 2018 IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, United Arab Emirates, 9–13 December 2018; pp. 1–6.
- Misra, S.; Mondal, A.; Khajjayam, S. Dynamic Big-Data Broadcast in Fat-Tree Data Center Networks with Mobile IoT Devices. IEEE Syst. J. 2019, 13, 2898–2905.
- Jiang, W.; Qi, J.; Yu, J.X.; Huang, J.; Zhang, R. HyperX: A Scalable Hypergraph Framework. IEEE Trans. Knowl. Data Eng. 2019, 31, 909–922.
- Al-Fares, M.; Loukissas, A.; Vahdat, A. A Scalable, Commodity Data Center Network Architecture. ACM SIGCOMM Comput. Commun. Rev. 2008, 38, 63–74.
- Greenberg, A.; Hamilton, J.R.; Jain, N.; Kandula, S.; Kim, C.; Lahiri, P.; Maltz, D.A.; Patel, P.; Sengupta, S. VL2: A Scalable and Flexible Data Center Network. In Proceedings of the ACM SIGCOMM 2009 Conference on Data Communication, Barcelona, Spain, 16–21 August 2009.
- Xia, Y.; Hamdi, M.; Chao, H.J. A Practical Large-Capacity Three-Stage Buffered Clos-Network Switch Architecture. IEEE Trans. Parallel Distrib. Syst. 2016, 27, 317–328.
- Singla, A.; Hong, C.-Y.; Popa, L.; Godfrey, P.B. Jellyfish: Networking Data Centers Randomly. arXiv 2012, arXiv:1110.1687v3.
- Ye, X.; Mejia, P.; Yin, Y.; Proietti, R.; Yoo, S.J.B.; Akella, V. DOS: A scalable optical switch for datacenters. In Proceedings of the 2010 ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS), La Jolla, CA, USA, 25–26 October 2010; pp. 1–12.
- Guo, Z.; Yang, Y. Collaborative Network Configuration in Hybrid Electrical/Optical Data Center Networks. In Proceedings of the IEEE 28th International Parallel and Distributed Processing Symposium, Phoenix, AZ, USA, 19–23 May 2014; pp. 852–861.
- Guo, D.; Xie, J.; Zhou, X.; Zhu, X.; Wei, W.; Luo, X. Exploiting Efficient and Scalable Shuffle Transfers in Future Data Center Networks. IEEE Trans. Parallel Distrib. Syst. 2015, 26, 997–1009.
- Li, X.; Lung, C.-H.; Majumdar, S. Energy aware green spine switch management for Spine-Leaf datacenter networks. In Proceedings of the IEEE International Conference on Communications (ICC), London, UK, 8–12 June 2015; pp. 116–121.
- Wang, G.; Zhang, Y.; Yu, J.; Ma, M.; Hu, C.; Fan, J.; Zhang, L. HS-DCell: A Highly Scalable DCell-Based Server-Centric Topology for Data Center Networks. IEEE/ACM Trans. Netw. 2024, 32, 3808–3823.
- Udeze, C.C.; Okafor, K.C.; Okezie, C.C.; Okeke, I.O.; Ezekwe, C.G. Performance Analysis of R-DCN Architecture for Next Generation Web Application Integration. In Proceedings of the 2014 IEEE 6th International Conference on Adaptive Science & Technology (ICAST), Ota, Nigeria, 29–31 October 2014; pp. 1–12.
- Lv, M.; Fan, J.; Fan, W.; Jia, X. A High-Performance and Server-Centric Based Data Center Network. IEEE Trans. Netw. Sci. Eng. 2023, 10, 592–605.
- Guo, C.; Lu, G.; Li, D.; Wu, H.; Zhang, X.; Shi, Y.; Tian, C.; Zhang, Y.; Lu, S. BCube: A high performance, server-centric network architecture for modular data centers. In Proceedings of the ACM SIGCOMM 2009 Conference on Data Communication, Barcelona, Spain, 16–21 August 2009; pp. 63–74.
- Lu, Y.; Gu, H.; Yu, X.; Li, P. X-NEST: A Scalable, Flexible, and High-Performance Network Architecture for Distributed Machine Learning. J. Light. Technol. 2021, 39, 4247–4254.
- Taubenblatt, M.; Maniotis, P.; Tantawi, A. Optics enabled networks and architectures for data center cost and power efficiency. J. Opt. Commun. Netw. 2022, 14, A41–A49.
- Emara, T.Z.; Huang, J.Z. Distributed Data Strategies to Support Large-Scale Data Analysis Across Geo-Distributed Data Centers. IEEE Access 2020, 8, 178526–178538.
- Xu, C.; Wang, K.; Li, P.; Xia, R.; Guo, S.; Guo, M. Renewable Energy-Aware Big Data Analytics in Geo-Distributed Data Centers with Reinforcement Learning. IEEE Trans. Netw. Sci. Eng. 2020, 7, 205–215.
- Das, S.; Sahni, S. Two-Aggregator Topology Optimization Using Single Paths in Data Center Networks. IEEE Trans. Cloud Comput. 2021, 9, 807–820.
- Marahatta, A.; Xin, Q.; Chi, C.; Zhang, F.; Liu, Z. PEFS: AI-Driven Prediction Based Energy-Aware Fault-Tolerant Scheduling Scheme for Cloud Data Center. IEEE Trans. Sustain. Comput. 2020, 6, 655–666.
- Lo, H.-Y.; Liao, W. CALM: Survivable Virtual Data Center Allocation in Cloud Networks. IEEE Trans. Serv. Comput. 2021, 14, 47–57.
- Wang, X.; Erickson, A.; Fan, J.; Jia, X. Hamiltonian Properties of DCell Networks. Comput. J. 2015, 58, 2944–2955.
- Fujiwara, I.; Koibuchi, M.; Matsutani, H.; Casanova, H. Skywalk: A Topology for HPC Networks with Low-Delay Switches. In Proceedings of the IEEE 28th International Parallel and Distributed Processing Symposium, Phoenix, AZ, USA, 19–23 May 2014; pp. 263–272.
- Wang, S.; Li, D.; Cheng, Y.; Geng, J.; Wang, Y.; Wang, S.; Xia, S.; Wu, J. A Scalable, High-Performance, and Fault-Tolerant Network Architecture for Distributed Machine Learning. IEEE/ACM Trans. Netw. 2020, 28, 1752–1764.
- Wang, G.; Lin, C.-K.; Fan, J.; Cheng, B.; Jia, X. A Novel Low-Cost Interconnection Architecture Based on the Generalized Hypercube. IEEE Trans. Parallel Distrib. Syst. 2020, 31, 647–662.
- Li, Z.; Yang, Y. RRect: A Novel Server-Centric Data Center Network with High Power Efficiency and Availability. IEEE Trans. Cloud Comput. 2020, 8, 914–927.
- Zhang, Z.; Deng, Y.; Min, G.; Xie, J.; Yang, L.T.; Zhou, Y. HSDC: A Highly Scalable Data Center Network Architecture for Greater Incremental Scalability. IEEE Trans. Parallel Distrib. Syst. 2019, 30, 1105–1119.
- Chen, K.; Wen, X.; Ma, X.; Chen, Y.; Xia, Y.; Hu, C.; Dong, Q.; Liu, Y. Toward A Scalable, Fault-Tolerant, High-Performance Optical Data Center Architecture. IEEE/ACM Trans. Netw. 2017, 25, 2281–2294.
- Li, Z.; Guo, Z.; Yang, Y. BCCC: An Expandable Network for Data Centers. IEEE/ACM Trans. Netw. 2016, 24, 3740–3755.
- Li, Z.; Yang, Y. GBC3: A Versatile Cube-Based Server-Centric Network for Data Centers. IEEE Trans. Parallel Distrib. Syst. 2016, 27, 2895–2910.
- Chkirbene, Z.; Hadjidj, R.; Foufou, S.; Hamila, R. LaScaDa: A Novel Scalable Topology for Data Center Network. IEEE/ACM Trans. Netw. 2020, 28, 2051–2064.
- Jia, Z.; Sun, Y.; Liu, Q.; Dai, S.; Liu, C. cRetor: An SDN-Based Routing Scheme for Data Centers with Regular Topologies. IEEE Access 2020, 8, 116866–116880.
- Zhang, C.; Wang, X.; Dong, A.; Zhao, Y.; Huang, M.; Li, F. Dynamic network service deployment across multiple SDN domains. Trans. Emerg. Telecommun. Technol. 2020, 31, e3709.
- Montero, R.; Agraz, F.; Pagès, A.; Perelló, J.; Spadaro, S. SDN-based parallel link discovery in optical transport networks. Trans. Emerg. Telecommun. Technol. 2019, 30, e3512.
- Mahmood, A.; Casetti, C.; Chiasserini, C.F.; Giaccone, P.; Härri, J. Efficient caching through stateful SDN in named data networking. Trans. Emerg. Telecommun. Technol. 2018, 29, e3271.
- He, Q.; Wang, X.; Huang, M. OpenFlow-based low-overhead and high-accuracy SDN measurement framework. Trans. Emerg. Telecommun. Technol. 2018, 29, e3263.
- Singh, A.K.; Maurya, S.; Kumar, N.; Srivastava, S. Heuristic approaches for the reliable SDN controller placement problem. Trans. Emerg. Telecommun. Technol. 2020, 31, e3761.
- Bonnabel, S. Stochastic Gradient Descent on Riemannian Manifolds. IEEE Trans. Autom. Control 2013, 58, 2217–2229.
- Costilla-Enriquez, N.; Weng, Y.; Zhang, B. Combining Newton-Raphson and Stochastic Gradient Descent for Power Flow Analysis. IEEE Trans. Power Syst. 2021, 36, 514–517.
- Liu, Y.; Huangfu, W.; Zhang, H.; Long, K. An Efficient Stochastic Gradient Descent Algorithm to Maximize the Coverage of Cellular Networks. IEEE Trans. Wirel. Commun. 2019, 18, 3424–3436.
- Lei, Y.; Hu, T.; Li, G.; Tang, K. Stochastic Gradient Descent for Nonconvex Learning Without Bounded Gradient Assumptions. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 4394–4400.
- Pu, S.; Olshevsky, A.; Paschalidis, I.C. A Sharp Estimate on the Transient Time of Distributed Stochastic Gradient Descent. IEEE Trans. Autom. Control 2021, 67, 5900–5915.
- Peng, X.; Li, L.; Wang, F.-Y. Accelerating Minibatch Stochastic Gradient Descent Using Typicality Sampling. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 4649–4659.
- Luo, X.; Qin, W.; Dong, A.; Sedraoui, K.; Zhou, M. Efficient and High-Quality Recommendations via Momentum-Incorporated Parallel Stochastic Gradient Descent-Based Learning. IEEE/CAA J. Autom. Sin. 2021, 8, 402–411.
- Lin, L.; Cao, J.; Lam, J.; Rutkowski, L.; Dimirovski, G.M.; Zhu, S. A Bisimulation-Based Foundation for Scale Reductions of Continuous-Time Markov Chains. IEEE Trans. Autom. Control 2024, 69, 5743–5758.
- Perez, J.F.; Van Houdt, B. Exploiting Restricted Transitions in Quasi-Birth-and-Death Processes. In Proceedings of the IEEE 2009 Sixth International Conference on the Quantitative Evaluation of Systems, Budapest, Hungary, 13–16 September 2009; pp. 123–132.
- Maurya, V.N. Mathematical Modelling and Steady State Performance Analysis of a Markovian Queue with Heterogeneous Servers and Working Vacation. Am. J. Theor. Appl. Stat. 2015, 4, 1–10. Available online: https://sciencepublishinggroup.com/article/10.11648/j.ajtas.s.2015040201.11 (accessed on 16 February 2025).
- Okafor, K.C.; Anoh, K.; Chinebu, T.I.; Adebisi, B.; Chukwudebe, G.A. Mitigating COVID-19 Spread in Closed Populations Using Networked Robots and Internet of Things. IEEE Internet Things J. 2024, 11, 39424–39434.
References | Design Strategy | Design Limitations | Application Domain |
---|---|---|---|
X-NEST [29] | Optical switching | High computational workload | A large-scale distributed machine learning system |
Flatter networks [30] | Composable and optical switching | High computational workload | High-performance computing workloads |
Geo-DCN [31] | Multiple data centers distributed without data replication | High computational workload | Data centers that are geographically dispersed |
Renewable Energy DCN [32] | Reinforcement learning (RL)-based job scheduling algorithm | High computational workload | Big data analytics in geo-distributed data centers for DER reinforcement-learning applications |
Aggregation DCN [33] | Combination of two aggregators (two-chain topology) | Routing complexity | Generic data center on round-aggregation top-of-rack DCN |
PEFS [34] | Scheduling strategy that is AI-driven, energy-aware, proactive, and fault-tolerant | Delay complexity | Cloud data centers (CDCs) |
CALM [35] | Polynomial-time algorithm/collocation-aware survivable placement (CASP) | Resource usage complexity | Cloud data centers (CDCs) |
DCell [36] | Hamiltonian-connected | High computational workload | Cloud data centers (CDCs) |
Skywalk [37] | Low-latency interconnects | Smaller QoS metric coverage | Large-scale high-performance computing (HPC) |
BML-BCube [38] | Distributed gradient synchronization algorithm | High computational workload | Large-scale distributed machine learning (DML) |
Generalized Hypercube [39] | Exchanged generalized hypercube (EGH) | High computational workload | Cloud data centers (CDCs) |
RRect [40] | Server-centric network linear diameter and scaled parallel paths | High computational workload | Enterprise data center |
HSDC [41] | High scalability with hypercube network | High computational workload | Enterprise data center |
WaveCube [42] | WaveCube optical multipathing and dynamic bandwidth provisioning | High computational workload | Optical data center networks |
BCCC [43] | Diameter linearity | High computational workload | Enterprise server-centric data center |
GBC [44] | Examines low-cost off-the-shelf switches and servers with any specified number of NIC interfaces. | High computational workload | Enterprise server-centric data center |
LaScaDa [45] | Explores hierarchical row-based routing | High computational workload | Cloud data centers (CDCs) |
cRetor [46] | Topology-aware routing scheme with an A* (A-star) algorithm for SDN controllers | High computational workload | Cloud data centers (CDCs) |
Proposed DCN | Edge-to-cloud orchestration using the SGDA network | Autonomous/managed computational workload | Smart grid infrastructure/Smart Cities |
Generator | Discriminator Values |
---|---|
Input units | 4 |
Output units | 1 |
Activation | Levenberg-Marquardt |
Hidden layers | 10 |
Optimization goal | MSE (mean squared error) |
Training epoch | 56 |
Classifier output | 25 |
Metrics | CTSM Spine-Leaf (%) | DCell (%) | Mesh (%) | Skywalk (%) | Dahu (%) | Ficonn (%) | SGDA IPv6 Gateway (%) | SGDA-SL-VPC (%) |
---|---|---|---|---|---|---|---|---|
Received Energy Data (kWh) | 29.87 | 19.48 | 22.08 | 5.19 | 15.58 | 7.8 | 17.07 | 11.69 |
Packetization Service Delays (µs) | 13.11 | 21.31 | 19.67 | 18.03 | 16.39 | 11.49 | 15.75 | 14.35 |
Load Balancer Access Delays (ms) | 10.99 | 27.47 | 24.91 | 18.32 | 14.65 | 3.66 | 20.18 | 16.49 |
Service Throughput (Mbps) | 27.27 | 21.21 | 19.70 | 16.67 | 9.15 | 3.33 | 14.92 | 11.24 |
Traffic Availability (Mbps) | 70.85 | 37.19 | 30.45 | 25.81 | 18.34 | 10.91 | 20.17 | 8.98 |
Encryption–Decryption Overhead (Mbps) | 28.13 | 33.44 | 30.93 | 26.19 | 22.91 | 12.78 | 37.50 | 34.37 |