Search Results (602)

Search Parameters:
Keywords = Software Defined Network (SDN)

20 pages, 817 KiB  
Article
Cross-Layer Security for 5G/6G Network Slices: An SDN, NFV, and AI-Based Hybrid Framework
by Zeina Allaw, Ola Zein and Abdel-Mehsen Ahmad
Sensors 2025, 25(11), 3335; https://doi.org/10.3390/s25113335 - 26 May 2025
Viewed by 126
Abstract
Within the dynamic landscape of fifth-generation (5G) and emerging sixth-generation (6G) wireless networks, the adoption of network slicing has revolutionized telecommunications by enabling flexible and efficient resource allocation. However, this advancement introduces new security challenges, as traditional protection mechanisms struggle to address the dynamic and complex nature of sliced network environments. This study proposes a Hybrid Security Framework Using Cross-Layer Integration, combining Software-Defined Networking (SDN), Network Function Virtualization (NFV), and AI-driven anomaly detection to strengthen network defenses. By integrating security mechanisms across multiple layers, the framework effectively mitigates threats, ensuring the integrity and confidentiality of network slices. An implementation was developed, focusing on the AI-based detection process using a representative 5G security dataset. The results demonstrate promising detection accuracy and real-time response capabilities. While full SDN/NFV integration remains under development, these findings lay the groundwork for scalable, intelligent security architectures tailored to the evolving needs of next-generation networks.

19 pages, 2392 KiB  
Article
Intelligent Resource Allocation for Immersive VoD Multimedia in NG-EPON and B5G Converged Access Networks
by Razat Kharga, AliAkbar Nikoukar and I-Shyan Hwang
Photonics 2025, 12(6), 528; https://doi.org/10.3390/photonics12060528 - 22 May 2025
Viewed by 302
Abstract
Immersive content streaming services are becoming increasingly popular on video on demand (VoD) platforms due to the growing interest in extended reality (XR) and spatial experiences. Unlike traditional VoD, immersive VoD (IVoD) offers more engaging and interactive content beyond conventional 2D video. IVoD requires substantial bandwidth and minimal latency to deliver its interactive XR experiences. This research examines intelligent resource allocation for IVoD services across NG-EPON and B5G X-haul converged networks. A proposed software-defined networking (SDN) framework employs an artificial neural network (ANN) with backpropagation to predict bandwidth control based on traffic patterns and network conditions. The new immersive video storage, field-programmable gate array (FPGA), Queue Manager, and logical layer components are added to the existing OLT and ONU hardware architecture to implement the SDN framework. The SDN framework manages the entire network, predicts bandwidth requirements, and operates the immersive media dynamic bandwidth allocation (IMS-DBA) algorithm to efficiently allocate bandwidth to IVoD network traffic, ensuring that QoS metrics are met for IM services. Simulation results demonstrate that the proposed framework reduces mean packet delay by up to 3% and packet drop probability by up to 4% as the traffic load varies from light to high across different scenarios, leading to enhanced overall QoS performance.
(This article belongs to the Section Optical Communication and Network)

14 pages, 397 KiB  
Article
Service Function Chain Migration: A Survey
by Zhiping Zhang and Changda Wang
Computers 2025, 14(6), 203; https://doi.org/10.3390/computers14060203 - 22 May 2025
Viewed by 212
Abstract
As a core technology emerging from the convergence of Network Function Virtualization (NFV) and Software-Defined Networking (SDN), Service Function Chaining (SFC) enables the dynamic orchestration of Virtual Network Functions (VNFs) to support diverse service requirements. However, in dynamic network environments, SFC faces significant challenges, such as resource fluctuations, user mobility, and fault recovery. To ensure service continuity and optimize resource utilization, an efficient migration mechanism is essential. This paper presents a comprehensive review of SFC migration research, analyzing it across key dimensions including migration motivations, strategy design, optimization goals, and core challenges. Existing approaches have demonstrated promising results in both passive and active migration strategies, leveraging techniques such as reinforcement learning for dynamic scheduling and digital twins for resource prediction. Nonetheless, critical issues remain, particularly regarding service interruption control, state consistency, algorithmic complexity, and security and privacy concerns. Traditional optimization algorithms often fall short in large-scale, heterogeneous networks due to limited computational efficiency and scalability. While machine learning enhances adaptability, it encounters limitations in data dependency and real-time performance. Future research should focus on deeply integrating intelligent algorithms with cross-domain collaboration technologies, developing lightweight security mechanisms, and advancing energy-efficient solutions. Moreover, coordinated innovation in both theory and practice is crucial to addressing emerging scenarios like 6G and edge computing, ultimately paving the way for a highly reliable and intelligent network service ecosystem.

28 pages, 2049 KiB  
Review
A Survey on Software Defined Network-Enabled Edge Cloud Networks: Challenges and Future Research Directions
by Baha Uddin Kazi, Md Kawsarul Islam, Muhammad Mahmudul Haque Siddiqui and Muhammad Jaseemuddin
Network 2025, 5(2), 16; https://doi.org/10.3390/network5020016 - 20 May 2025
Viewed by 257
Abstract
The explosion of connected devices and data transmission in the Internet of Things (IoT) era places a substantial burden on the capability of cloud computing. Moreover, these IoT devices are mostly positioned at the edge of a network and limited in resources. To address these challenges, edge cloud distributed computing networks have emerged. Because of the distributed nature of edge cloud networks, many research works consider software-defined networking (SDN) and network function virtualization (NFV) to be key enablers for managing, orchestrating, and load balancing resources. This article provides a comprehensive survey of these emerging technologies, focusing on SDN controllers, orchestration, and the function of artificial intelligence (AI) in enhancing the capabilities of controllers within edge cloud computing networks. More specifically, we present an extensive survey of research proposals on the integration of SDN controllers and orchestration with edge cloud networks. We further introduce a holistic overview of SDN-enabled edge cloud networks and an inclusive summary of edge cloud use cases and their key challenges. Finally, we address some challenges and potential research directions for further exploration in this vital research area.
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)

11 pages, 1182 KiB  
Proceeding Paper
A Decentralized Framework for the Detection and Prevention of Distributed Denial of Service Attacks Using Federated Learning and Blockchain Technology
by Mao-Hsiu Hsu and Chia-Chun Liu
Eng. Proc. 2025, 92(1), 48; https://doi.org/10.3390/engproc2025092048 - 6 May 2025
Viewed by 278
Abstract
With the rapid development of the internet of things (IoT) and smart cities, the risk of network attacks, particularly distributed denial of service (DDoS) attacks, has significantly increased. Traditional centralized security systems struggle to address large-scale attacks while simultaneously safeguarding privacy. In this study, we created a decentralized security framework that integrates federated learning (FL) with blockchain technology for DDoS attack detection and prevention. Federated learning enables devices to collaboratively learn without sharing raw data and ensures data privacy, while blockchain provides immutable event logging and distributed monitoring to enhance the overall security of the system. The created framework leverages multi-layer encryption and Hashgraph technology for event recording, ensuring data integrity and efficiency. Additionally, software-defined networking (SDN) was employed for dynamic resource management and rapid responses to attacks. This system improves the accuracy of DDoS detection and effectively reduces communication costs and resource consumption. It has significant potential for large-scale attack defense in IoT and smart city environments.
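The abstract pairs federated learning with blockchain-style immutable event logging (the authors use Hashgraph). As a minimal sketch of the tamper-evidence idea only, here is a hash-chained append-only log in Python; the class and field names are ours, not the paper's:

```python
import hashlib
import json

def _digest(prev_hash: str, event: dict) -> str:
    # Hash the previous entry's digest together with the event payload,
    # so altering any event changes every subsequent digest.
    payload = json.dumps(event, sort_keys=True)
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

class EventLog:
    """Append-only, hash-chained log of detection events (illustrative)."""
    def __init__(self):
        self.entries = []  # list of (event, digest) pairs

    def append(self, event: dict) -> str:
        prev = self.entries[-1][1] if self.entries else "0" * 64
        h = _digest(prev, event)
        self.entries.append((event, h))
        return h

    def verify(self) -> bool:
        prev = "0" * 64
        for event, h in self.entries:
            if _digest(prev, event) != h:
                return False
            prev = h
        return True

log = EventLog()
log.append({"node": "edge-7", "alert": "ddos", "score": 0.97})
log.append({"node": "edge-3", "alert": "ok", "score": 0.02})
assert log.verify()
log.entries[0][0]["alert"] = "ok"   # tamper with the first event...
assert not log.verify()             # ...and verification fails
```

Each entry's digest covers the previous digest, so no recorded event can be silently rewritten; Hashgraph adds distributed consensus on top of this basic property.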
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)

5 pages, 135 KiB  
Editorial
Novel Methods Applied to Security and Privacy Problems in Future Networking Technologies
by Irfan-Ullah Awan, Amna Qureshi and Muhammad Shahwaiz Afaqui
Electronics 2025, 14(9), 1816; https://doi.org/10.3390/electronics14091816 - 29 Apr 2025
Viewed by 182
Abstract
The rapid development of future networking technologies, such as 5G, 6G, blockchain, the Internet of Things (IoT), cloud computing, and Software-Defined Networking (SDN), is set to revolutionize our methods of connection, communication, and data sharing [...]
22 pages, 5204 KiB  
Article
Faulty Links’ Fast Recovery Method Based on Deep Reinforcement Learning
by Wanwei Huang, Wenqiang Gui, Yingying Li, Qingsong Lv, Jia Zhang and Xi He
Algorithms 2025, 18(5), 241; https://doi.org/10.3390/a18050241 - 24 Apr 2025
Viewed by 254
Abstract
Aiming to address the high recovery delay and link congestion issues in the communication network of Wide-Area Measurement Systems (WAMSs), this paper introduces Software-Defined Networking (SDN) and proposes a deep reinforcement learning-based faulty-link fast recovery method (DDPG-LBBP). The DDPG-LBBP method takes delay and link utilization as the optimization objectives and uses a gated recurrent neural network to accelerate algorithm convergence and output the optimal link weights for load balancing. By designing maximally disjoint backup paths, the method ensures the independence of the primary and backup paths, effectively preventing secondary failures caused by path overlap. The experiment compares the (1+2ε)-BPCA, FFRLI, and LIR methods using IEEE 30 and IEEE 57 benchmark power system communication network topologies. Experimental results show that DDPG-LBBP outperforms the others in faulty-link recovery delay, packet loss rate, and recovery success rate. Specifically, compared to the strongest baseline, (1+2ε)-BPCA, recovery delay is decreased by about 12.26% and recovery success rate is improved by about 6.91%. Additionally, packet loss rate is decreased by about 15.31% compared to FFRLI, the best baseline on that metric.
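The "maximally disjoint backup paths" idea can be illustrated independently of the DDPG-LBBP specifics, which the abstract does not detail. In the sketch below, hop-count BFS stands in for the learned link weights (all names and the toy topology are ours): a primary path is computed first, and the backup is then routed around the primary's links.

```python
from collections import deque

def bfs_path(adj, src, dst, banned=frozenset()):
    """Shortest hop-count path from src to dst, avoiding 'banned' links."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:                       # reconstruct path back to src
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and frozenset((u, v)) not in banned:
                prev[v] = u
                q.append(v)
    return None                            # dst unreachable under bans

def primary_and_backup(adj, src, dst):
    primary = bfs_path(adj, src, dst)
    used = {frozenset(e) for e in zip(primary, primary[1:])}
    # Fully link-disjoint backup where one exists; see note below.
    backup = bfs_path(adj, src, dst, banned=used)
    return primary, backup

# Tiny topology with two link-disjoint routes between nodes 0 and 3.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
p, b = primary_and_backup(adj, 0, 3)
assert set(zip(p, p[1:])).isdisjoint(zip(b, b[1:]))
```

A truly "maximally disjoint" variant would penalize rather than forbid shared links, so a backup still exists even when full disjointness is impossible.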
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)

25 pages, 5087 KiB  
Article
Optimizing the Long-Term Efficiency of Users and Operators in Mobile Edge Computing Using Reinforcement Learning
by Jianji Shao and Yanjun Li
Electronics 2025, 14(8), 1689; https://doi.org/10.3390/electronics14081689 - 21 Apr 2025
Viewed by 244
Abstract
Mobile edge computing (MEC) has emerged as a promising paradigm to enhance computational capabilities at the network edge, enabling low-latency services for users while ensuring efficient resource utilization for operators. One of the key challenges in MEC is optimizing offloading decisions and resource allocation to balance user experience and operator profitability. In this paper, we integrate software-defined networking (SDN) and MEC to enhance system utility and propose an SDN-based MEC network framework. Within this framework, we formulate an optimization problem that jointly maximizes the utility of both users and operators by optimizing the offloading decisions and the communication and computation resource allocation ratios. To address this challenge, we model the problem as a Markov decision process (MDP) and propose a reinforcement learning (RL)-based algorithm to optimize long-term system utility in a dynamic network environment. However, since RL-based algorithms struggle with large state spaces, we extend the MDP formulation to a continuous state space and develop a deep reinforcement learning (DRL)-based algorithm to improve performance. The DRL approach leverages neural networks to approximate optimal policies, enabling more effective decision-making in complex environments. Experimental results validate the effectiveness of our proposed methods. While the RL-based algorithm enhances the long-term average utility of both users and operators, the DRL-based algorithm further improves performance, increasing operator and user efficiency by approximately 22.4% and 12.2%, respectively. These results highlight the potential of intelligent learning-based approaches for optimizing MEC networks and provide valuable insights into designing adaptive and efficient MEC architectures.
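The MDP-plus-RL formulation the abstract describes can be miniaturized: below, a toy tabular Q-learning agent chooses between local execution and offloading based on a two-state edge server. The states, rewards, and hyperparameters are invented for illustration and bear no relation to the paper's actual utility model.

```python
import random

def train_offload_policy(episodes=500, alpha=0.1, eps=0.2, seed=0):
    """Tabular Q-learning toy: choose local (0) vs. offload (1) given the
    edge-server state, idle (0) or busy (1). Episodes are one step long,
    so no discounting is needed."""
    rng = random.Random(seed)
    # reward[state][action]: offloading pays off only when the edge is idle.
    reward = {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: -1.0}}
    q = {s: {0: 0.0, 1: 0.0} for s in (0, 1)}
    for _ in range(episodes):
        s = rng.choice((0, 1))
        # epsilon-greedy: explore occasionally, otherwise act greedily.
        if rng.random() < eps:
            a = rng.choice((0, 1))
        else:
            a = max(q[s], key=q[s].get)
        q[s][a] += alpha * (reward[s][a] - q[s][a])
    return q

q = train_offload_policy()
assert max(q[0], key=q[0].get) == 1   # idle edge -> offload
assert max(q[1], key=q[1].get) == 0   # busy edge -> stay local
```

The paper's DRL extension replaces the table `q` with a neural network so the same update generalizes over a continuous state space.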
(This article belongs to the Special Issue Recent Advances and Challenges in IoT, Cloud and Edge Coexistence)

19 pages, 2942 KiB  
Article
SDN Anomalous Traffic Detection Based on Temporal Convolutional Network
by Ziyi Wang, Zhenyu Guan, Xu Liu, Caixia Li, Xuan Sun and Jun Li
Appl. Sci. 2025, 15(8), 4317; https://doi.org/10.3390/app15084317 - 14 Apr 2025
Viewed by 328
Abstract
The wide application of software-defined network (SDN) architecture, combined with its centralized control, has exacerbated the potential risk of network attacks. Traditional anomalous-traffic detection methods face high false alarm rates and insufficient generalization ability because they rely on manually designed rules and struggle to capture dynamic temporal features. In response to these challenges, we propose a Temporal Convolutional Network (TCN)-based anomalous traffic detection method for SDN. Taking the packet length sequence as the core feature, the long-term temporal dependencies in the traffic data are effectively captured by the causal and dilated convolution operations of the TCN model, combined with a residual connection mechanism that optimizes gradient propagation and improves the stability of model training. The experiments validate the model performance on the public InSDN dataset, and the results show that the method achieves high accuracy in the binary classification task of normal and malicious traffic, improving detection accuracy by about 5% compared with traditional statistical methods and mainstream deep learning models.
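The core TCN operation the abstract names, causal dilated convolution over a packet-length sequence, can be sketched in a few lines of plain Python. The real model stacks many such layers with learned kernels and residual connections; the kernels here are hand-picked for illustration.

```python
def causal_dilated_conv(seq, kernel, dilation=1):
    """Causal dilated 1-D convolution: output[t] depends only on inputs at
    t, t-d, t-2d, ... (never the future), which is how a TCN layer sees
    packet-length history."""
    out = []
    for t in range(len(seq)):
        acc = 0.0
        for i, w in enumerate(kernel):
            j = t - i * dilation      # look back i*dilation steps
            if j >= 0:                # implicit left zero-padding
                acc += w * seq[j]
        out.append(acc)
    return out

# A packet-length sequence: the identity kernel leaves it unchanged,
# while a difference kernel highlights sudden bursts.
pkts = [60, 60, 1500, 1500, 60]
assert causal_dilated_conv(pkts, [1.0]) == [60, 60, 1500, 1500, 60]
diffs = causal_dilated_conv(pkts, [1.0, -1.0])
assert diffs == [60.0, 0.0, 1440.0, 0.0, -1440.0]
```

Dilation widens the receptive field exponentially across layers, which is why TCNs can capture long-term dependencies without recurrence.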

26 pages, 5101 KiB  
Article
Federated Learning Augmented Cybersecurity for SDN-Based Aeronautical Communication Network
by Muhammad Ali, Yim-Fun Hu and Jian-Ping Li
Electronics 2025, 14(8), 1535; https://doi.org/10.3390/electronics14081535 - 10 Apr 2025
Viewed by 465
Abstract
With the requirements of government data protection regulations and industrial concerns regarding data protection and privacy, the security level required for data privacy and protection has increased. This has led researchers to investigate techniques that can train cybersecurity machine learning (ML) models without sharing personal data. Federated Learning (FL) is a newly developed decentralized and distributed ML mechanism that emphasizes privacy. In this technique, a learning algorithm is trained without collecting or exchanging sensitive data from distributed client models running at different locations. With the rapid increase in the number of cybersecurity attacks reported in the aviation industry in the last two decades, strong, dynamic, and effective countermeasures are required to protect the aviation industry and air passengers against such attacks, which can often lead to catastrophic situations. This paper proposes and implements an FL model for identifying cyberattacks on Software Defined Network (SDN)-based aeronautical communication networks. The machine learning model used in the FL architecture is a Deep Neural Network (DNN) model. The publicly available National Security Laboratory–Knowledge Discovery and Datamining (NSL-KDD) dataset was employed to train and validate the proposed FL model. The simulation results illustrated that the FL-based system can accurately and effectively identify potential cybersecurity attacks and minimize the risk of data and service exposure without degrading model performance. A comparison was also made between the FL and non-FL machine learning models. Preliminary results demonstrated that the FL model outperformed the non-FL approaches, reaching an accuracy of 96% compared to 76% and 83% for the two non-FL models.
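The abstract does not specify the aggregation rule; assuming the standard FedAvg step, the server-side update reduces to a sample-count-weighted average of client parameters, sketched here with flat weight vectors (the numbers are invented):

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average client model parameters weighted by each
    client's local sample count. Only parameters, never raw data, leave
    the clients."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Three clients with unequal data volumes: the larger client dominates.
w_global = fed_avg(
    client_weights=[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
    client_sizes=[100, 100, 200],
)
assert w_global == [0.75, 0.75]
```

In the paper's setting the vectors would be the DNN's flattened weights, and the averaged model is broadcast back to clients for the next local training round.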

34 pages, 443 KiB  
Review
Advancements in Machine Learning-Based Intrusion Detection in IoT: Research Trends and Challenges
by Márton Bendegúz Bankó, Szymon Dyszewski, Michaela Králová, Márton Bertalan Limpek, Maria Papaioannou, Gaurav Choudhary and Nicola Dragoni
Algorithms 2025, 18(4), 209; https://doi.org/10.3390/a18040209 - 9 Apr 2025
Viewed by 1262
Abstract
This paper presents a systematic literature review based on the PRISMA model on machine learning-based Distributed Denial of Service (DDoS) attacks in Internet of Things (IoT) networks. The primary objective of the review is to compare research trends on deployment options, datasets, and machine learning techniques used in the domain between 2019 and 2024. The results highlight the dominance of certain datasets (BoT-IoT and TON_IoT) in combination with Decision Tree (DT) and Random Forest (RF) models, achieving high median accuracy rates (>99%). This paper discusses various datasets that are used to train and evaluate machine learning (ML) models for detecting Distributed Denial of Service (DDoS) attacks in Internet of Things (IoT) networks and how they impact model performance. Furthermore, the findings suggest that due to hardware limitations, there is a preference for lightweight ML solutions and preprocessed datasets. Current trends indicate that larger or industry-specific datasets will continue to gain popularity alongside more complex ML models, such as deep learning. This emphasizes the need for robust and scalable deployment options, with Software-Defined Networks (SDNs) offering flexibility, edge computing being extensively explored in cloud environments, and blockchain-integrated networks emerging as a promising approach for enhancing security.
(This article belongs to the Special Issue Advances in Deep Learning and Next-Generation Internet Technologies)

39 pages, 4156 KiB  
Review
Enabling Green Cellular Networks: A Review and Proposal Leveraging Software-Defined Networking, Network Function Virtualization, and Cloud-Radio Access Network
by Radheshyam Singh, Line M. P. Larsen, Eder Ollora Zaballa, Michael Stübert Berger, Christian Kloch and Lars Dittmann
Future Internet 2025, 17(4), 161; https://doi.org/10.3390/fi17040161 - 5 Apr 2025
Viewed by 354
Abstract
The increasing demand for enhanced communication systems, driven by applications such as real-time video streaming, online gaming, critical operations, and Internet-of-Things (IoT) services, has necessitated the optimization of cellular networks to meet evolving requirements while addressing power consumption challenges. In this context, various initiatives undertaken by industry, academia, and researchers to reduce the power consumption of cellular network systems are comprehensively reviewed. Particular attention is given to emerging technologies, including Software-Defined Networking (SDN), Network Function Virtualization (NFV), and Cloud-Radio Access Network (C-RAN), which are identified as key enablers for reshaping cellular infrastructure. Their collective potential to enhance energy efficiency while addressing convergence challenges is analyzed, and solutions for sustainable network evolution are proposed. A conceptual architecture based on SDN, NFV, and C-RAN is presented as an illustrative example of integrating these technologies to achieve significant power savings. The proposed framework outlines an approach to developing energy-efficient cellular networks, capable of reducing power consumption by approximately 40 to 50% through the optimal placement of virtual network functions.

17 pages, 2956 KiB  
Article
A3C-R: A QoS-Oriented Energy-Saving Routing Algorithm for Software-Defined Networks
by Sunan Wang, Rong Song, Xiangyu Zheng, Wanwei Huang and Hongchang Liu
Future Internet 2025, 17(4), 158; https://doi.org/10.3390/fi17040158 - 3 Apr 2025
Viewed by 297
Abstract
With the rapid growth of Internet applications and network traffic, existing routing algorithms often struggle to guarantee quality of service (QoS) indicators such as delay, bandwidth, and packet loss rate, as well as network energy consumption, for data flows with varied business characteristics. They suffer from problems such as unbalanced traffic scheduling and unreasonable network resource allocation. Aiming at these problems, this paper proposes a QoS-oriented energy-saving routing algorithm, A3C-R, for the software-defined network (SDN) environment. Building on the asynchronous updates of the asynchronous advantage Actor-Critic (A3C) algorithm and the independent interaction of multiple agents with the environment, A3C-R can effectively improve the convergence of the routing algorithm. The A3C-R algorithm first takes QoS indicators such as delay, bandwidth, and packet loss rate, together with the network energy consumption of each link, as input. It then creates multiple agents that train asynchronously, with the Actor and Critic in each agent updated continuously and the model parameters periodically synchronized to the global model. After training converges, the algorithm outputs the link weights of the network topology, from which routing strategies that meet QoS requirements and lower network energy consumption can be computed. The experimental results indicate that the A3C-R algorithm, compared to the baseline algorithms ECMP, I-DQN, and DDPG-EEFS, reduces delay by approximately 9.4%, increases throughput by approximately 7.0%, decreases the packet loss rate by approximately 9.5%, and improves the energy-saving percentage by approximately 10.8%.
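Once a trained agent emits per-link weights, as A3C-R's output stage does, route computation reduces to ordinary shortest-path search. Below is a minimal Dijkstra over a hypothetical weight map; the graph and weights are invented for illustration, not taken from the paper.

```python
import heapq

def route(weights, src, dst):
    """Dijkstra over per-link weights (lower weight = preferred link).
    Returns the cheapest path and its total cost."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    seen = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in weights.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, u = [dst], dst       # walk predecessors back to the source
    while u != src:
        u = prev[u]
        path.append(u)
    return path[::-1], dist[dst]

# Hypothetical weights a trained agent might output for a 3-node topology.
w = {"a": {"b": 1.0, "c": 4.0}, "b": {"c": 1.5}, "c": {}}
assert route(w, "a", "c") == (["a", "b", "c"], 2.5)
```

The learned part is producing `w` so that low-weight paths also satisfy delay, loss, and energy objectives; the path computation itself stays classical.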

33 pages, 10347 KiB  
Article
Dynamic RSVP in Modern Networks for Advanced Resource Control with P4 Data Plane
by Pin-An Pan, Wen-Long Chin, Yen-Chun Huang, Yu-Xiang Huang and Cheng-Hsien Yu
Sensors 2025, 25(7), 2244; https://doi.org/10.3390/s25072244 - 2 Apr 2025
Viewed by 386
Abstract
This study focuses on leveraging the emerging Software-Defined Networking (SDN) technology, P4, to design a data plane for the Resource Reservation Protocol (RSVP) that can be applied in various scenarios, including both wired and wireless networks. This research explores the signaling mechanisms of the RSVP protocol, consolidates the data plane processing requirements, and ensures compliance with RSVP session Quality of Service (QoS) demands. Additionally, this study introduces the architecture, syntax, and external functionalities of the P4 language, which are utilized to develop the data plane required for RSVP-based resource reservation. Various parameters are pre-configured to enable the control plane to efficiently integrate RSVP reservation information into the data plane. Furthermore, Mininet is employed to create a virtual network topology, along with the BMv2 software switch, to evaluate whether the proposed system can fulfill RSVP’s end-to-end QoS guarantees. Different traffic transmission scenarios are examined to validate the system’s capability in accurately managing bandwidth allocation, latency, priority configuration, and packet counting for end-to-end QoS services.

24 pages, 3496 KiB  
Article
What Is the Best Solution for Smart Buildings? A Case Study of Fog, Edge Computing and Smart IoT Devices
by Mauro Chiozzotto and Miguel Arjona Ramírez
Appl. Sci. 2025, 15(7), 3805; https://doi.org/10.3390/app15073805 - 31 Mar 2025
Viewed by 688
Abstract
This paper presents a case study of Fog Computing, Edge Computing (EC) and Intelligent EC (IEC) applied to Smart Buildings, focusing on the deployment of innovative services and smart IoT devices and discussing new architectures such as Software-Defined Networking (SDN). Specifically, a comprehensive solution for a Smart Building case is proposed to validate the main statements, and conclusions are drawn that provide a general guideline for choosing between Edge and Fog Computing and the appropriate category of IoT devices. The methodology employed in this study is based on field research conducted in buildings within the metropolitan region of São Paulo, Brazil, that aim to transform into Smart Buildings (SBs). Moreover, principles of Electronic Systems Engineering and Cloud Computing, such as reliability, scalability and security, are applied. In that way, this study integrates advanced multimedia technical services to enhance security and communication within the SBs through centralized control. The method focuses on identifying and analyzing the most common problems observed in field research within SBs in early stages of development, prior to the intensive implementation of state-of-the-art IoT devices and Fog or Edge Computing technologies. The research adopts a comparative approach, investigating the best solutions for each application category. The results are consolidated in a main table within the article, correlating solutions to the four main problems identified in the field research: impairments in voice over IP and video communication using IoT devices; latency and delays in communication between SBs and the Cloud center; access security issues; and the Quality of Experience of video over IP communication, both in live transmissions and recordings between SBs. Regarding applications, this study considers the use of specific IoT devices and Cloud Computing architectures, such as Fog or IEC. Furthermore, it explores the implementation of new open network and communication models, such as SDN and NFV, to optimize communication between the various SBs and the SB's connection to the control center of a Smart City.
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
