Network, Volume 4, Issue 1 (March 2024) – 6 articles

Cover Story: This paper delves into data protection challenges in automated decision-making systems using machine learning algorithms, emphasizing the need to balance algorithm accuracy with privacy and personal data protection. It discusses data protection issues, the related attacks enabled by insufficient legal safeguards, and the relevance of GDPR compliance. Privacy-enhancing techniques, including differential privacy in deep learning algorithms, are explored through experiments illustrating the intricacies of simultaneously maintaining accuracy and privacy. This research highlights the delicate equilibrium required to uphold individuals' rights when utilizing deep learning algorithms for decision making.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
19 pages, 2513 KiB  
Article
On the Capacity of Optical Backbone Networks
by João J. O. Pires
Network 2024, 4(1), 114-132; https://doi.org/10.3390/network4010006 - 11 Mar 2024
Abstract
Optical backbone networks, characterized by using optical fibers as a transmission medium, constitute the fundamental infrastructure employed today by network operators to deliver services to users. As network capacity is one of the key factors influencing optical network performance, it is important to comprehend its limitations and have the capability to estimate its value. In this context, we revisit the concept of capacity from various perspectives, including channel capacity, link capacity, and network capacity, thus providing an integrated view of the problem within the framework of the backbone tier. Hence, we review the fundamental concepts behind optical networks, along with the basic physical phenomena present in optical fiber transmission, and provide methodologies for estimating the different types of capacities, mainly using simple formulations. In particular, we propose a method to evaluate the network capacity that relies on the optical reach to account for physical layer aspects, in conjunction with capacitated routing techniques for traffic routing. We apply this method to three reference networks and obtain capacities ranging from tens to hundreds of terabits/s. Whenever possible, we also compare our results with published experimental data to understand how they relate.
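Channel-capacity estimates of the kind discussed above follow from the Shannon limit applied to dual-polarization WDM channels. The sketch below is only an illustration of that relationship, not the paper's formulation; the channel count, bandwidth, and SNR figures are assumed, illustrative values.

```python
import math

def channel_capacity_gbps(bandwidth_ghz: float, snr_db: float, polarizations: int = 2) -> float:
    """Shannon capacity of one WDM channel in Gbit/s.

    bandwidth_ghz: usable bandwidth of the channel (GHz).
    snr_db: signal-to-noise ratio in dB.
    polarizations: 2 for dual-polarization transmission.
    """
    snr_linear = 10 ** (snr_db / 10)
    return polarizations * bandwidth_ghz * math.log2(1 + snr_linear)

def link_capacity_tbps(num_channels: int, bandwidth_ghz: float, snr_db: float) -> float:
    """Aggregate capacity of a WDM link in Tbit/s: channels x per-channel rate."""
    return num_channels * channel_capacity_gbps(bandwidth_ghz, snr_db) / 1000

# Illustrative example: 96 channels of 50 GHz in the C-band at 20 dB SNR
# gives a link capacity on the order of tens of Tbit/s.
cap = link_capacity_tbps(96, 50.0, 20.0)
```

Real estimates must also account for physical-layer impairments that cap the usable SNR with distance, which is what the paper's optical-reach approach captures.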
23 pages, 625 KiB  
Article
Data Protection Issues in Automated Decision-Making Systems Based on Machine Learning: Research Challenges
by Paraskevi Christodoulou and Konstantinos Limniotis
Network 2024, 4(1), 91-113; https://doi.org/10.3390/network4010005 - 01 Mar 2024
Abstract
Data protection issues stemming from the use of machine learning algorithms in automated decision-making systems are discussed in this paper. More precisely, the main challenges in this area are presented, emphasizing how important it is to simultaneously ensure the accuracy of the algorithms as well as privacy and personal data protection for the individuals whose data are used to train the corresponding models. In this respect, we also discuss how specific well-known data protection attacks that can be mounted against processes based on such algorithms are associated with a lack of specific legal safeguards; to this end, the General Data Protection Regulation (GDPR) is used as the basis for our evaluation. In relation to these attacks, some important privacy-enhancing techniques in this field are also surveyed. Moreover, focusing explicitly on deep learning algorithms as a type of machine learning algorithm, we further elaborate on one such privacy-enhancing technique, namely, the application of differential privacy to the training dataset. In this respect, we present, through an extensive set of experiments, the main difficulties that occur if one needs to demonstrate that such a privacy-enhancing technique is indeed sufficient to mitigate all the risks to the fundamental rights of individuals. More precisely, although we manage, by properly configuring several algorithms' parameters, to achieve accuracy of about 90% for specific privacy thresholds, it becomes evident that even these values for accuracy and privacy may be unacceptable if a deep learning algorithm is to be used for making decisions concerning individuals. The paper concludes with a discussion of current challenges and future steps, from both a legal and a technical perspective.
(This article belongs to the Special Issue Next Generation Networks and Systems Security)
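Differential privacy for deep learning is typically applied in training by clipping per-example gradients and adding calibrated Gaussian noise (the DP-SGD pattern). The sketch below illustrates one such step in plain Python; the clipping norm and noise multiplier are assumed parameters, and this is not the authors' exact experimental setup.

```python
import math
import random

def dp_noisy_gradient(grads, clip_norm: float, noise_multiplier: float, rng=random):
    """One DP-SGD-style aggregation step over per-example gradients.

    Each gradient (a list of floats) is clipped to L2 norm `clip_norm`,
    the clipped gradients are averaged, and Gaussian noise with standard
    deviation noise_multiplier * clip_norm / n is added per coordinate.
    """
    clipped = []
    for g in grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in g])
    n, dim = len(clipped), len(clipped[0])
    avg = [sum(g[i] for g in clipped) / n for i in range(dim)]
    sigma = noise_multiplier * clip_norm / n
    return [a + rng.gauss(0.0, sigma) for a in avg]
```

Larger noise multipliers give stronger privacy guarantees but degrade accuracy, which is exactly the accuracy/privacy tension the paper's experiments explore.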
23 pages, 696 KiB  
Article
A Hierarchical Security Event Correlation Model for Real-Time Threat Detection and Response
by Herbert Maosa, Karim Ouazzane and Mohamed Chahine Ghanem
Network 2024, 4(1), 68-90; https://doi.org/10.3390/network4010004 - 11 Feb 2024
Abstract
An intrusion detection system (IDS) performs post-compromise detection of security breaches whenever preventive measures such as firewalls do not avert an attack. However, these systems raise a vast number of alerts that must be analyzed and triaged by security analysts, a process that is largely manual, tedious, and time-consuming. Alert correlation is a technique that reduces the number of intrusion alerts by aggregating alerts that are similar in some way. However, the correlation is performed outside the IDS through third-party systems and tools, after the IDS has already generated a high volume of alerts, and these third-party systems add to the complexity of security operations. In this paper, we build on the highly researched area of alert and event correlation by developing a novel hierarchical event correlation model that promises to reduce the number of alerts issued by an intrusion detection system. This is achieved by correlating the events before the IDS classifies them. The proposed model takes the best features of similarity- and graph-based correlation techniques to deliver an ensemble capability not possible with either approach alone. Further, we propose a correlation process for events rather than alerts, as is the case in the current art, and we develop our own correlation and clustering algorithm tailored to network event data. The model is implemented as a proof of concept, with experiments run on standard intrusion detection datasets. The correlation achieves an 87% data reduction through aggregation, producing nearly 21,000 clusters in about 30 s.
(This article belongs to the Special Issue Advanced Technologies in Network and Service Management)
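The core idea of similarity-based event correlation, grouping events that agree on key attributes before they are classified, can be sketched in a few lines. The attribute set, threshold, and greedy single-pass strategy below are assumptions for illustration, not the paper's own algorithm.

```python
def similarity(e1: dict, e2: dict, keys=("src_ip", "dst_ip", "dst_port", "proto")) -> float:
    """Fraction of the selected attributes on which two events agree."""
    return sum(e1.get(k) == e2.get(k) for k in keys) / len(keys)

def cluster_events(events, threshold: float = 0.75):
    """Greedy single-pass clustering: each event joins the first cluster whose
    representative (its first event) is similar enough, else starts a new one."""
    clusters = []
    for ev in events:
        for cl in clusters:
            if similarity(cl[0], ev) >= threshold:
                cl.append(ev)
                break
        else:
            clusters.append([ev])
    return clusters
```

Data reduction falls out naturally: downstream components see one cluster representative instead of every raw event, which is the effect the paper quantifies as an 87% reduction.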
20 pages, 8356 KiB  
Article
IDSMatch: A Novel Deployment Method for IDS Chains in SDNs
by Nadia Niknami and Jie Wu
Network 2024, 4(1), 48-67; https://doi.org/10.3390/network4010003 - 07 Feb 2024
Abstract
With the surge in cyber attacks, there is a pressing need for more robust network intrusion detection systems (IDSs). These IDSs perform at their best when they can monitor all the traffic coursing through the network, especially within a software-defined network (SDN). In an SDN configuration, the control plane and data plane operate independently, facilitating dynamic control over network flows. Typically, an IDS application resides in the control plane, or a centrally located network IDS transmits security reports to the controller. However, the controller, equipped with various control applications, may encounter challenges when analyzing substantial data, especially in the face of high traffic volumes. To enhance processing power and detection rates and to alleviate the controller's burden, deploying multiple instances of the IDS across the data plane is recommended. While deploying an IDS on every switch in the data plane undoubtedly enhances detection rates, the associated cost of installing one at each switch raises concerns. To address this challenge, this paper proposes the deployment of IDS chains across the data plane to boost detection rates while preventing controller overload. The controller directs incoming traffic through alternative paths incorporating IDS chains; however, the potential delays from redirecting traffic through an IDS chain could extend the journey to the destination. To address these delays and optimize flow distribution, our study proposes a method to balance flow assignments to specific IDS chains with minimal delay. Our approach is validated through comprehensive testing and evaluation using a test bed and trace-based simulation, demonstrating its effectiveness in reducing delays and hop counts across various traffic scenarios.
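The flow-balancing problem, assigning flows to IDS chains so that the added detour delay stays small, can be sketched as a greedy heuristic. The delay model below (base path delay plus a load-proportional term) and all parameter values are assumptions for illustration, not the method proposed in the paper.

```python
def assign_flows(flows, chains):
    """Greedily assign flows (largest demand first) to the IDS chain with the
    smallest estimated delay after accepting the flow.

    flows:  list of (flow_id, demand) tuples.
    chains: dict chain_id -> {"base_delay": ms, "per_unit_delay": ms per demand unit}.
    Returns (assignment dict, per-chain load dict).
    """
    load = {c: 0.0 for c in chains}
    assignment = {}
    for flow_id, demand in sorted(flows, key=lambda f: -f[1]):
        best = min(
            chains,
            key=lambda c: chains[c]["base_delay"]
            + chains[c]["per_unit_delay"] * (load[c] + demand),
        )
        assignment[flow_id] = best
        load[best] += demand
    return assignment, load
```

With identical chains, the load term alone spreads flows evenly; heterogeneous base delays bias assignments toward shorter detours, which is the trade-off the paper's evaluation measures in delay and hop count.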
15 pages, 5174 KiB  
Article
A Study of Ethereum’s Transition from Proof-of-Work to Proof-of-Stake in Preventing Smart Contracts Criminal Activities
by Oliver J. Hall, Stavros Shiaeles and Fudong Li
Network 2024, 4(1), 33-47; https://doi.org/10.3390/network4010002 - 26 Jan 2024
Abstract
With the ever-increasing advancement of blockchain technology, security is a significant concern when substantial investments are involved. This paper explores known smart contract exploits used in previous and current years. The purpose of this research is to provide a point of reference for users interacting with blockchain technology and for smart contract developers. The primary research gathered in this paper analyses unique smart contracts deployed on a blockchain by investigating the Solidity code involved and the transactions on the ledger linked to these contracts. A disparity was found between the techniques used in 2021 and those used in 2023, after Ethereum moved from a Proof-of-Work blockchain to a Proof-of-Stake one, demonstrating that as blockchain technology advances, so does the level of effort bad actors exert to steal funds from users. The research concludes that as users become more wary of malicious smart contracts, bad actors continue to develop more sophisticated techniques to defraud them. Although this paper outlines many of the techniques currently used by bad actors, it is recommended that users who interact with smart contracts consistently stay up to date with emerging exploits.
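One classic exploit family examined in Solidity analysis is reentrancy, where an external call precedes the update of a balance. A deliberately crude textual heuristic for this ordering is sketched below; real analyzers work on the AST or bytecode, and this pattern check is only illustrative, not the paper's methodology.

```python
import re

def flags_possible_reentrancy(solidity_source: str) -> bool:
    """Flag source in which an external call (`.call{value:` or `.call.value(`)
    appears before a write to a `balances` mapping -- the classic vulnerable
    ordering. Rough heuristic only: no parsing, prone to false positives."""
    call = re.search(r"\.call\s*(\{\s*value\s*:|\.value\s*\()", solidity_source)
    write = re.search(r"balances\s*\[[^\]]+\]\s*(=|-=)", solidity_source)
    return bool(call and write and call.start() < write.start())
```

The checks-effects-interactions pattern, updating state before making the external call, reverses the ordering and defeats the basic attack.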
32 pages, 5248 KiB  
Review
A Survey on Routing Solutions for Low-Power and Lossy Networks: Toward a Reliable Path-Finding Approach
by Hanin Almutairi and Ning Zhang
Network 2024, 4(1), 1-32; https://doi.org/10.3390/network4010001 - 15 Jan 2024
Abstract
Low-Power and Lossy Networks (LLNs) have grown rapidly in recent years owing to the increased adoption of Internet of Things (IoT) and Machine-to-Machine (M2M) applications across various industries, including smart homes, industrial automation, healthcare, and smart cities. Owing to the characteristics of LLNs, such as lossy channels and limited power, generic routing solutions designed for non-LLNs may not be adequate in terms of delivery reliability and routing efficiency. Consequently, the Routing Protocol for Low-Power and Lossy Networks (RPL) was designed, and several RPL objective functions have been proposed to enhance routing reliability in LLNs. This paper analyses these solutions against performance and security requirements to identify their limitations. Firstly, it discusses the characteristics and security issues of LLNs and their impact on packet delivery reliability and routing efficiency. Secondly, it provides a comprehensive analysis of routing solutions and identifies their existing limitations. Thirdly, based on these limitations, the paper highlights the need for a reliable and efficient path-finding solution for LLNs.
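An RPL objective function determines how a node selects its preferred parent and computes its rank. The sketch below follows the spirit of MRHOF (RFC 6719), minimizing parent rank plus the link's ETX expressed in rank units; the candidate list and the default MinHopRankIncrease of 256 (from RFC 6550) are used only for illustration.

```python
def mrhof_rank(parents, min_hop_rank_increase: int = 256):
    """Pick a preferred parent and compute this node's rank, MRHOF-style.

    parents: list of (parent_rank, link_etx) tuples, where ETX 1.0 means a
             perfect link and higher values mean lossier links.
    Returns (index of chosen parent, resulting rank).
    """
    best_i, best_rank = None, float("inf")
    for i, (parent_rank, link_etx) in enumerate(parents):
        # Rank increases by the link cost scaled into rank units.
        rank = parent_rank + int(link_etx * min_hop_rank_increase)
        if rank < best_rank:
            best_i, best_rank = i, rank
    return best_i, best_rank
```

Note how a farther parent over a clean link can beat a nearer parent over a lossy one, which is precisely the reliability-oriented behavior the surveyed objective functions aim for.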