Search Results (342)

Search Parameters:
Keywords = packet size

18 pages, 2031 KB  
Article
The Impact of Security Protocols on TCP/UDP Throughput in IEEE 802.11ax Client–Server Network: An Empirical Study
by Nurul I. Sarkar, Nasir Faiz and Md Jahan Ali
Electronics 2025, 14(19), 3890; https://doi.org/10.3390/electronics14193890 - 30 Sep 2025
Viewed by 257
Abstract
IEEE 802.11ax (Wi-Fi 6) technologies provide high capacity, low latency, and improved security. While many network researchers have examined Wi-Fi security issues, the security implications of 802.11ax have not yet been fully explored. In this paper, we therefore investigate how security protocols (WPA2, WPA3) affect TCP/UDP throughput in IEEE 802.11ax client–server networks using a testbed approach. Through an extensive performance study, we analyze how security interacts with the transport-layer protocol (TCP/UDP), the network-layer protocol (IPv4/IPv6), and the operating system (MS Windows and Linux) to influence system performance. The impact of packet length on system performance is also investigated. The results show that WPA3 offers greater security while its impact on TCP/UDP throughput is insignificant, highlighting the robustness of WPA3 encryption in maintaining throughput even in secure environments. With WPA3, UDP offers higher throughput than TCP, and IPv6 consistently outperforms IPv4 in both TCP and UDP throughput. Linux outperforms Windows in all scenarios, especially with larger packet sizes and IPv6 traffic. These results suggest that WPA3 provides optimized throughput in both Linux and MS Windows in 802.11ax client–server environments. Our research offers insights into security issues in Gigabit Wi-Fi that can help network researchers and engineers develop stronger security for next-generation wireless networks. Full article
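As a rough illustration of the throughput-versus-packet-size measurements described in this testbed study, the sketch below sweeps UDP payload sizes with the iperf3 command-line tool and records the reported throughput. It is a minimal sketch rather than the authors' setup: the server address, payload sizes, and test duration are placeholder assumptions, an iperf3 server (iperf3 -s) must already be running on the target host, and the JSON field layout may vary slightly across iperf3 versions.

```python
import json
import subprocess

SERVER = "192.168.1.10"                      # assumed address of the iperf3 server (AP side)
PAYLOAD_SIZES = [128, 256, 512, 1024, 1472]  # assumed UDP payload sweep, in bytes
DURATION = 10                                # seconds per test

def udp_throughput(server: str, length: int, duration: int) -> float:
    """Run one iperf3 UDP test and return the measured throughput in Mbit/s."""
    cmd = [
        "iperf3", "-c", server, "-u",
        "-l", str(length),        # datagram payload size
        "-b", "0",                # unlimited target bitrate: send as fast as possible
        "-t", str(duration),
        "-J",                     # JSON output for easy parsing
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    report = json.loads(result.stdout)
    bps = report["end"]["sum"]["bits_per_second"]   # field layout may differ by version
    return bps / 1e6

if __name__ == "__main__":
    for size in PAYLOAD_SIZES:
        mbps = udp_throughput(SERVER, size, DURATION)
        print(f"payload {size:5d} B -> {mbps:8.1f} Mbit/s")
```

Repeating the sweep under WPA2 and WPA3, over IPv4 and IPv6, and on each operating system reproduces the shape of the comparison reported in the abstract.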

15 pages, 673 KB  
Article
Integrating and Benchmarking KpqC in TLS/X.509
by Minjoo Sim, Gyeongju Song, Siwoo Eum, Minwoo Lee, Seyoung Yoon, Anubhab Baksi and Hwajeong Seo
Electronics 2025, 14(18), 3717; https://doi.org/10.3390/electronics14183717 - 19 Sep 2025
Viewed by 523
Abstract
Advances in quantum computing pose a fundamental threat to classical public-key cryptosystems, including RSA and elliptic-curve cryptography (ECC), which form the foundation for authentication and key exchange in the Transport Layer Security (TLS) protocol. In response to these emerging threats, Korea launched the KpqC (Korea Post-Quantum Cryptography) project in 2021 to design, evaluate, and standardize domestic PQC algorithms. To the best of our knowledge, this is the first systematic evaluation of the finalized Korean PQC algorithms (HAETAE, AIMer, SMAUG-T, NTRU+) within a production-grade TLS/X.509 stack, enabling direct comparison against NIST PQC and ECC baselines. To contextualize KpqC performance, we further compare against NIST-standardized PQC algorithms and classical ECC baselines. Our evaluation examines both static overhead (certificate size) and dynamic overhead (TLS 1.3 handshake latency) across computation-bound (localhost) and network-bound (LAN) scenarios, including embedded device and hybrid TLS configurations. Our results show that KpqC certificates are approximately 4.6–48.8× larger than ECC counterparts and generally exceed NIST PQC sizes. In computation-bound tests, both NIST PQC (ML-KEM) and KpqC hybrids exhibited similar handshake latency increases of approximately 8–9× relative to ECC. In network-bound tests, the difference between the two families was negligible, with relative overhead typically around 30–41%. These findings offer practical guidance for balancing security level, key size, packet size, and latency and support phased PQC migration strategies in real-world TLS deployments. Full article
(This article belongs to the Special Issue Trends in Information Systems and Security)
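To give a concrete sense of the handshake-latency benchmarking described above, the sketch below times TLS 1.3 handshakes with Python's standard ssl module. It only illustrates the measurement methodology: the KpqC and NIST PQC algorithms require a PQC-enabled TLS stack, which the standard library does not provide, and the host, port, and repetition count are assumptions for a local test server with a self-signed certificate.

```python
import socket
import ssl
import statistics
import time

HOST, PORT, RUNS = "127.0.0.1", 4433, 100   # assumed local TLS 1.3 test server

def handshake_latency(host: str, port: int) -> float:
    """Time a single TCP connect plus TLS handshake, in milliseconds."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False               # test server uses a self-signed certificate
    ctx.verify_mode = ssl.CERT_NONE
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    start = time.perf_counter()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host):
            pass                             # the handshake completes inside wrap_socket
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    samples = [handshake_latency(HOST, PORT) for _ in range(RUNS)]
    print(f"median {statistics.median(samples):.2f} ms, "
          f"p95 {sorted(samples)[int(0.95 * RUNS)]:.2f} ms")
```

Running the same loop against a classical ECC certificate chain and against a PQC-enabled server gives the kind of relative-overhead comparison the paper reports.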

21 pages, 2093 KB  
Article
Dual-Stream Time-Series Transformer-Based Encrypted Traffic Data Augmentation Framework
by Daeho Choi, Yeog Kim, Changhoon Lee and Kiwook Sohn
Appl. Sci. 2025, 15(18), 9879; https://doi.org/10.3390/app15189879 - 9 Sep 2025
Viewed by 507
Abstract
We propose a Transformer-based data augmentation framework with a time-series dual-stream architecture to address performance degradation in encrypted network traffic classification caused by class imbalance between attack and benign traffic. The proposed framework independently processes the complete flow’s sequential packet information and statistical characteristics by extracting and normalizing a local channel (comprising packet size, inter-arrival time, and direction) and a set of six global flow-level statistical features. These are used to generate a fixed-length multivariate sequence and an auxiliary vector. The sequence and vector are then fed into an encoder-only Transformer that integrates learnable positional embeddings with a FiLM + context token-based injection mechanism, enabling complementary representation of sequential patterns and global statistical distributions. Large-scale experiments demonstrate that the proposed method reduces reconstruction RMSE and additional feature restoration MSE by over 50%, while improving accuracy, F1-Score, and AUC by 5–7%p compared to classification on the original imbalanced datasets. Furthermore, the augmentation process achieves practical levels of processing time and memory overhead. These results show that the proposed approach effectively mitigates class imbalance in encrypted traffic classification and offers a promising pathway to achieving more robust model generalization in real-world deployment scenarios. Full article
(This article belongs to the Special Issue AI-Enabled Next-Generation Computing and Its Applications)
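The dual-stream input described above can be pictured with a small feature-extraction sketch: one fixed-length per-packet sequence (size, inter-arrival time, direction) plus an auxiliary vector of flow-level statistics. The sequence length, the particular six statistics, and the min-max normalisation below are illustrative assumptions, not the authors' exact preprocessing.

```python
import numpy as np

SEQ_LEN = 64  # assumed fixed sequence length

def flow_to_streams(sizes, timestamps, directions, seq_len=SEQ_LEN):
    """Build (local per-packet sequence, global statistics vector) for one flow.

    sizes:      packet sizes in bytes
    timestamps: packet arrival times in seconds
    directions: +1 for client->server, -1 for server->client
    """
    sizes = np.asarray(sizes, dtype=np.float64)
    ts = np.asarray(timestamps, dtype=np.float64)
    dirs = np.asarray(directions, dtype=np.float64)

    iat = np.diff(ts, prepend=ts[0])              # inter-arrival times; first is 0
    local = np.stack([sizes, iat, dirs], axis=1)

    # Pad with zeros or truncate so every flow yields the same shape.
    if len(local) < seq_len:
        local = np.vstack([local, np.zeros((seq_len - len(local), 3))])
    else:
        local = local[:seq_len]

    duration = ts[-1] - ts[0]
    stats = np.array([
        len(sizes),                                  # packet count
        sizes.sum(),                                 # total bytes
        sizes.mean(),                                # mean packet size
        sizes.std(),                                 # packet size std
        duration,                                    # flow duration
        len(sizes) / duration if duration > 0 else 0.0,  # packets per second
    ])

    # Per-feature min-max normalisation of the local channel (illustrative only).
    span = local.max(axis=0) - local.min(axis=0)
    span[span == 0] = 1.0
    local = (local - local.min(axis=0)) / span
    return local.astype(np.float32), stats.astype(np.float32)

# Example: a tiny three-packet flow.
seq, aux = flow_to_streams([60, 1500, 40], [0.0, 0.01, 0.03], [1, -1, 1])
print(seq.shape, aux.shape)   # (64, 3) (6,)
```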

20 pages, 835 KB  
Article
Trustworthy Adaptive AI for Real-Time Intrusion Detection in Industrial IoT Security
by Mohammad Al Rawajbeh, Amala Jayanthi Maria Soosai, Lakshmana Kumar Ramasamy and Firoz Khan
IoT 2025, 6(3), 53; https://doi.org/10.3390/iot6030053 - 8 Sep 2025
Viewed by 822
Abstract
As Industrial Internet of Things (IIoT) technologies become more widely adopted, traditional security methods fail to keep pace with evolving threats. A lightweight adaptive AI-based intrusion detection system (IDS) for IIoT environments is presented in this paper. The proposed system detects cyber threats in real time through an ensemble of online learning models that adapt to changing network behavior. The system implements SHAP (SHapley Additive exPlanations) to explain model predictions, allowing human operators to verify and understand alert causes and addressing the essential need for trust and transparency. System validation was performed using the ToN_IoT and Bot-IoT benchmark datasets. The proposed system detects threats with 96.4% accuracy while producing 2.1% false positives and requiring 35 ms on average for detection on resource-constrained edge devices. SHAP analysis lets security analysts understand model decisions, showing that packet size, protocol type, and device activity patterns strongly affect model predictions. The system was tested on a Raspberry Pi 5-based IIoT testbed that emulates practical edge environments with constrained computational resources, to evaluate its deployability in real-world scenarios. The research unites real-time adaptability, explainability, and low-latency performance in an IDS framework specifically designed for industrial IoT security. By enabling fast, interpretable, and low-latency intrusion detection directly on edge devices, the solution enhances cyber resilience in critical sectors such as manufacturing, energy, and infrastructure, where timely and trustworthy threat responses are essential to maintaining operational continuity and safety. Full article
(This article belongs to the Special Issue Cybersecurity in the Age of the Internet of Things)
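The SHAP-based explanation step can be sketched with a scikit-learn classifier and the shap library on synthetic flow features. The feature names and data below are invented, and the paper's actual models are online/ensemble learners rather than this stand-in random forest; the snippet only shows how SHAP attributes an individual alert to features such as packet size.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for IIoT flow features; names are illustrative only.
feature_names = ["packet_size", "protocol_type", "device_activity", "dst_port"]
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=2000) > 1).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer gives per-feature contributions for each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# The return shape differs between SHAP versions; pick the attack-class contributions.
if isinstance(shap_values, list):            # older SHAP: list of per-class arrays
    attack_contribs = shap_values[1]
elif getattr(shap_values, "ndim", 2) == 3:   # newer SHAP: (samples, features, classes)
    attack_contribs = shap_values[..., 1]
else:
    attack_contribs = shap_values

for i, contrib in enumerate(attack_contribs):
    top = np.argsort(np.abs(contrib))[::-1][:2]
    print(f"sample {i}: top features ->",
          [(feature_names[j], round(float(contrib[j]), 3)) for j in top])
```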

23 pages, 2167 KB  
Article
ZBMG-LoRa: A Novel Zone-Based Multi-Gateway Approach Towards Scalable LoRaWANs for Internet of Things
by Mukarram Almuhaya, Tawfik Al-Hadhrami, David J. Brown and Sultan Noman Qasem
Sensors 2025, 25(17), 5457; https://doi.org/10.3390/s25175457 - 3 Sep 2025
Viewed by 561
Abstract
Internet of Things (IoT) applications are rapidly adopting low-power wide-area network (LPWAN) technology due to its ability to provide broad coverage for a range of battery-powered devices. LoRaWAN has become the most widely used LPWAN solution due to its physical layer (PHY) design and regulatory advantages. Because LoRaWAN has a broad communication range, the coverage of the gateways might overlap. In LoRa technology, packets can be received concurrently by multiple gateways, and the network server then keeps the copy with the highest received signal strength indicator (RSSI). However, this method can exhaust channel availability on the gateways. The optimisation of configuration parameters to reduce collisions and enhance network throughput in multi-gateway LoRaWAN remains an unresolved challenge. This paper introduces ZBMG-LoRa, a novel low-complexity model that mitigates collisions through channel utilisation and categorises nodes into distinct groups based on their respective gateways. This categorisation allows optimal settings to be applied to each node’s subzone, thereby facilitating effective communication and addressing the identified issue. By deriving key performance metrics (e.g., network throughput, energy efficiency, and probability of effective delivery) from configuration parameters and network size, communication reliability is maintained. Our method derives optimal transmission power and spreading factor configurations for all nodes in multi-gateway LoRaWAN networks. In comparison with adaptive data rate (ADR) and other state-of-the-art algorithms, the findings demonstrate that the novel approach achieves a higher packet delivery ratio and better energy efficiency. Full article
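A much-simplified sketch of the zone-based idea: bucket each node under the gateway with the strongest RSSI, then assign spreading factor and transmit power by RSSI range. The thresholds and parameter table below are illustrative assumptions, not the ZBMG-LoRa subzone rules.

```python
from collections import defaultdict

# Assumed RSSI (dBm) thresholds for subzones and the SF/TP assigned to each;
# these numbers are illustrative, not the ZBMG-LoRa parameter tables.
SUBZONES = [
    (-100, {"sf": 7,  "tx_dbm": 2}),            # strong link: fastest rate, lowest power
    (-110, {"sf": 8,  "tx_dbm": 5}),
    (-120, {"sf": 10, "tx_dbm": 8}),
    (-float("inf"), {"sf": 12, "tx_dbm": 14}),  # weakest links: most robust settings
]

def assign_subzones(rssi_per_gateway):
    """rssi_per_gateway: {node_id: {gateway_id: rssi_dbm}} -> settings grouped by gateway."""
    plan = defaultdict(dict)
    for node, gateways in rssi_per_gateway.items():
        gw, best_rssi = max(gateways.items(), key=lambda kv: kv[1])
        for threshold, params in SUBZONES:
            if best_rssi >= threshold:
                plan[gw][node] = {"rssi": best_rssi, **params}
                break
    return plan

measurements = {
    "node-1": {"gw-A": -95, "gw-B": -104},
    "node-2": {"gw-A": -118},
    "node-3": {"gw-B": -107, "gw-A": -112},
}
for gw, nodes in assign_subzones(measurements).items():
    print(gw, nodes)
```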

23 pages, 3904 KB  
Article
The Remote Sensing Data Transmission Problem in Communication Constellations: Shop Scheduling-Based Model and Algorithm
by Jiazhao Yin, Yuning Chen, Xiang Lin and Qian Zhao
Technologies 2025, 13(9), 384; https://doi.org/10.3390/technologies13090384 - 1 Sep 2025
Viewed by 481
Abstract
Advances in satellite miniaturisation have led to a steep rise in the number of Earth-observation platforms, turning the downlink of the resulting high-volume remote-sensing data into a critical bottleneck. Low-Earth-Orbit (LEO) communication constellations offer a high-throughput relay for these data, yet also introduce intricate scheduling requirements. We term the associated task the Remote Sensing Data Transmission in Communication Constellations (DTIC) problem, which comprises two sequential stages: inter-satellite routing, and satellite-to-ground delivery. This problem can be cast as a Hybrid Flow Shop Scheduling Problem (HFSP). Unlike the classical HFSP, every processor (e.g., ground antenna) in DTIC can simultaneously accommodate multiple jobs (data packets), subject to two-dimensional spatial constraints. This gives rise to a new variant that we call the Hybrid Flow Shop Problem with Two-Dimensional Processor Space (HFSP-2D). After an in-depth investigation of the characteristics of this HFSP-2D, we propose a constructive heuristic, denoted NEHedd-2D, and a Two-Stage Memetic Algorithm (TSMA) that integrates an Inter-Processor Job-Swapping (IPJS) operator and an Intra-Processor Job-Swapping (IAJS) operator. Computational experiments indicate that when TSMA is initialized with the solution produced by NEHedd-2D, the algorithm attains the optimal solutions for small-sized instances and consistently outperforms all benchmark algorithms across problems of every size. Full article
(This article belongs to the Section Information and Communication Technologies)
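For orientation, the sketch below implements the classic NEH constructive heuristic for a plain permutation flow shop (makespan objective), the starting point that NEHedd-2D extends; it does not model due dates or the two-dimensional processor-space constraints of HFSP-2D.

```python
def makespan(sequence, proc_times):
    """Makespan of a job sequence in a permutation flow shop.

    proc_times[j][m] = processing time of job j on machine m.
    """
    n_machines = len(proc_times[0])
    completion = [0.0] * n_machines
    for job in sequence:
        prev = 0.0
        for m in range(n_machines):
            completion[m] = max(completion[m], prev) + proc_times[job][m]
            prev = completion[m]
    return completion[-1]

def neh(proc_times):
    """Classic NEH: order jobs by total work, insert each at its best position."""
    jobs = sorted(range(len(proc_times)), key=lambda j: -sum(proc_times[j]))
    sequence = []
    for job in jobs:
        best_seq, best_val = None, float("inf")
        for pos in range(len(sequence) + 1):
            candidate = sequence[:pos] + [job] + sequence[pos:]
            val = makespan(candidate, proc_times)
            if val < best_val:
                best_seq, best_val = candidate, val
        sequence = best_seq
    return sequence, best_val

# Four jobs (data packets) on two stages (inter-satellite link, ground antenna).
times = [[3, 6], [5, 2], [4, 4], [2, 7]]
seq, cmax = neh(times)
print("sequence:", seq, "makespan:", cmax)
```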

22 pages, 3260 KB  
Article
Large-Scale Continuous Monitoring of Greenhouse Gases with Adaptive LoRaWAN in CN470–510 MHz Band
by Xueying Jin, David Chieng, Pushpendu Kar, Chiew Foong Kwong, Yeqin Li and Yin Wang
Sensors 2025, 25(17), 5349; https://doi.org/10.3390/s25175349 - 29 Aug 2025
Viewed by 804
Abstract
Continuous and near-real-time monitoring of greenhouse gases (GHGs) is critical for achieving Net Zero emissions, ensuring early detection, compliance, accountability, and adaptive management. To this end, there is an increasing need to monitor GHGs at higher temporal resolutions, greater spatial resolutions, and larger coverage scales. However, spatial resolution and coverage remain significant challenges due to limited sensor network coverage and power sources for sensor nodes, even in urban areas. LoRaWAN, a cost-effective solution that provides long-range and high-penetration wireless connectivity with a low energy consumption, is an ideal choice for this application. Despite its promise, LoRaWAN faces several challenges, including a low data rate, low packet transmission rate, and low packet delivery success ratio, especially when the node density or environment variability is high. This paper presents a simulation-based analysis of a large-scale urban LoRaWAN sensor network operating in the CN470–510 MHz band, which is the only frequency band officially designated for low-power wide-area (LPWA) technologies such as LoRaWAN in China. This study investigates how the node density, sensor measurement update rate (i.e., update interval), and sensor measurement payload size affect two primary performance metrics: the sensor update delivery ratio (DR) and the radio frequency (RF) energy consumption (RFEC) per successful update. The performances of several enhanced adaptive data transmission algorithms in comparison to the conventional ADR+ algorithms are also analysed. The results indicate that both DR and RFEC are significantly influenced by the node density, sensor update rate, and payload size, with the effects being particularly significant under high-node-density and high-update-rate conditions. The analysis further reveals that the ADR-NODE-KAMA algorithm consistently achieves the best performance across most scenarios, providing up to a 2% improvement in DR and a reduction of 10–15 mJ in RFEC per successful sensor measurement update. Additionally, the sensor measurement payload size is shown to have a substantial impact on network performance, with each added sensor measurement contributing to a DR reduction of up to 2.24% and an increase in RFEC of approximately 80 mJ. LoRaWAN network operators can gain practical insights from these findings to optimize the performance and efficiency of large-scale GHG monitoring deployments. Full article
(This article belongs to the Section Internet of Things)
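The payload-size and spreading-factor effects reported above are driven largely by each packet's time on air. The sketch below implements the standard LoRa time-on-air formula from the Semtech SX127x documentation; the default bandwidth (125 kHz), coding rate (4/5), preamble length, and header settings are assumptions.

```python
import math

def lora_time_on_air(payload_bytes, sf=7, bw_hz=125_000, cr=1,
                     preamble_syms=8, explicit_header=True, crc=True,
                     low_data_rate_opt=None):
    """Time on air (seconds) of one LoRa packet, per the SX127x datasheet formula."""
    if low_data_rate_opt is None:
        # Low data rate optimisation is normally enabled when the symbol time exceeds 16 ms.
        low_data_rate_opt = (2 ** sf) / bw_hz > 0.016
    t_sym = (2 ** sf) / bw_hz
    de = 1 if low_data_rate_opt else 0
    ih = 0 if explicit_header else 1
    numerator = 8 * payload_bytes - 4 * sf + 28 + 16 * (1 if crc else 0) - 20 * ih
    n_payload = 8 + max(math.ceil(numerator / (4 * (sf - 2 * de))) * (cr + 4), 0)
    t_preamble = (preamble_syms + 4.25) * t_sym
    return t_preamble + n_payload * t_sym

# How much longer does each extra sensor measurement keep the radio on?
for payload in (12, 24, 36):
    for sf in (7, 10, 12):
        print(f"payload {payload:2d} B, SF{sf:2d}: "
              f"{lora_time_on_air(payload, sf=sf) * 1000:7.1f} ms")
```

Longer airtime per update directly raises both the collision probability (lower DR) and the radio-frequency energy spent per successful update (higher RFEC), which is the trade-off the study quantifies.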

10 pages, 2169 KB  
Proceeding Paper
Comparative Performance Analysis of Data Transmission Protocols for Sensor-to-Cloud Applications: An Experimental Evaluation
by Filip Tsvetanov and Martin Pandurski
Eng. Proc. 2025, 104(1), 35; https://doi.org/10.3390/engproc2025104035 - 25 Aug 2025
Viewed by 416
Abstract
This paper examines some of the most popular protocols for transmitting sensor data to cloud structures from publish/subscribe and request/response IoT models. The selection of a highly efficient message transmission protocol is essential, as it depends on the specific characteristics and purpose of the developed IoT system, which includes communication requirements, message size and format, energy efficiency, reliability, and cloud specifications. No standardized protocol can cover all the diverse application scenarios; therefore, for each developed project, the most appropriate protocol must be selected that meets the project’s specific requirements. This work focuses on finding the most appropriate protocol for integrating sensor data into a suitable open-source IoT platform, ThingsBoard. First, we conduct a comparative analysis of the studied protocols. Then, we propose a project that represents an experiment for transmitting data from a stationary XBee sensor network to the ThingsBoard cloud via HTTP, MQTT-SN, and CoAP protocols. We observe the parameters’ influence on the delayed transmission of packets and their load on the CPU and RAM. The results of the experimental studies for stationary sensor networks collecting environmental data give an advantage to the MQTT-SN protocol. This protocol is preferable to the other two protocols due to the lower delay and load on the processor and memory, which leads to higher energy efficiency and longer life of the sensors and sensor networks. These results can help users make operational judgments for their IoT applications. Full article
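A minimal example of the sensor-to-cloud path: publishing one telemetry sample to ThingsBoard over plain MQTT with the paho-mqtt client. The host and device access token are placeholders, and the paper's MQTT-SN variant additionally requires an MQTT-SN gateway in front of the broker.

```python
import json
import time

import paho.mqtt.publish as publish

THINGSBOARD_HOST = "demo.thingsboard.io"   # placeholder; use your own instance
ACCESS_TOKEN = "DEVICE_ACCESS_TOKEN"       # placeholder device credential

def push_sample(temperature_c: float, humidity_pct: float) -> None:
    """Publish one telemetry sample to ThingsBoard's device telemetry topic."""
    payload = json.dumps({"temperature": temperature_c, "humidity": humidity_pct,
                          "ts_sent": time.time()})
    publish.single(
        topic="v1/devices/me/telemetry",   # ThingsBoard MQTT telemetry topic
        payload=payload,
        qos=1,
        hostname=THINGSBOARD_HOST,
        port=1883,
        auth={"username": ACCESS_TOKEN},   # ThingsBoard uses the access token as username
    )

if __name__ == "__main__":
    push_sample(23.4, 61.0)
```

Timestamping the sample on the device (ts_sent above) and comparing it with the platform's receive time is one simple way to measure the end-to-end delay the study compares across protocols.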

30 pages, 18910 KB  
Article
Evaluating 5G Communication for IEC 61850 Digital Substations: Historical Context and Latency Challenges
by Hafiz Zubyrul Kazme, Per Westerlund and Math H. J. Bollen
Energies 2025, 18(16), 4387; https://doi.org/10.3390/en18164387 - 18 Aug 2025
Viewed by 1112
Abstract
Digital substation technology adhering to the IEC 61850 standard has provided several opportunities and flexibility for the rapid growth and complexity of the present and future electrical grid. The communication infrastructure allows complete interoperability between legacy and modern devices. The emergence of 5G wireless communication and its utilization in substation operation presents significant advantages in terms of cost and scalability, while also introducing challenges. This paper identifies research gaps in the literature and offers valuable insights for future analysis by providing a simulation study using an empirical latency dataset of a 5G network to illustrate three aspects of substation operational challenges: coordination of protection schemes, sequential reception of packet data streams, and time synchronization processes. The findings show a mean latency of 8.5 ms for the 5G network, which is significantly higher than that of a wired Ethernet network. The results also indicate that the high latency and jitter compromise the selectivity of protection schemes. The variability in latency disrupts the sequence of arriving data packets such that the packet buffering and processing delay increases from around 1.5 ms to 11.0 ms and the buffer size would need to increase by 6 to 10 times to handle out-of-sequence packets. Additionally, a time synchronization success rate of 14.3% within a 0.1 ms accuracy range found in this study indicates that the IEEE 1588 protocol is severely affected by the latency fluctuations. Full article
(This article belongs to the Section F1: Electrical Power System)
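The out-of-sequence behaviour described above can be reproduced in outline by applying per-packet latency samples to a periodic sampled-value stream and measuring the reordering buffer needed to restore order. The latency distribution below is synthetic (a lognormal tuned to roughly the reported 8.5 ms mean), not the paper's empirical 5G dataset.

```python
import random

random.seed(1)

SAMPLE_INTERVAL_MS = 0.25   # e.g., an 80 samples-per-cycle sampled-value stream at 50 Hz
N_PACKETS = 20_000

# Synthetic 5G one-way latency: lognormal tuned to roughly an 8.5 ms mean with jitter.
latencies = [random.lognormvariate(2.05, 0.35) for _ in range(N_PACKETS)]

send_times = [i * SAMPLE_INTERVAL_MS for i in range(N_PACKETS)]
arrivals = sorted(range(N_PACKETS), key=lambda i: send_times[i] + latencies[i])

# Count packets that arrive ahead of an earlier-sequence packet and track how deep a
# buffer must be to release packets strictly in order.
next_expected = 0
buffer = set()
max_buffer = 0
out_of_seq = 0
for idx in arrivals:
    if idx != next_expected:
        out_of_seq += 1
        buffer.add(idx)
    else:
        next_expected += 1
        while next_expected in buffer:      # drain packets already waiting in the buffer
            buffer.remove(next_expected)
            next_expected += 1
    max_buffer = max(max_buffer, len(buffer))

print(f"out-of-sequence arrivals: {100 * out_of_seq / N_PACKETS:.1f}%")
print(f"reordering buffer depth needed: {max_buffer} packets")
```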

23 pages, 3579 KB  
Article
Loss Clustering at MSP Buffer
by Andrzej Chydzinski and Blazej Adamczyk
J. Sens. Actuator Netw. 2025, 14(4), 84; https://doi.org/10.3390/jsan14040084 - 13 Aug 2025
Viewed by 646
Abstract
Packet losses cause a decline in the performance of packet networks, and this decline is related not only to the percentage of losses but also to the clustering of them together in series. We study how the correlation of packet sizes influences this clustering when the losses are caused by buffer overflows. Specifically, for a model of a buffer with correlated packet sizes, we derive the burst ratio parameter, an intuitive metric for the inclination of losses to cluster. In addition to the burst ratio, we obtain the sequential losses distribution in the first, k-th, and stationary overflow periods. The Markovian Service Process (MSP) used by the model empowers it to mimic arbitrary packet size distributions and arbitrary correlation strengths. Using numeric examples, the impact of packet size correlation, buffer size, and traffic intensity on the burst ratio is showcased and discussed. Full article
(This article belongs to the Section Communications and Networking)
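The burst ratio compares the mean length of observed loss series with the mean length that independent (Bernoulli) losses of the same rate would produce, i.e. B = G / (1 / (1 − p)), where G is the mean observed loss-burst length and p is the loss fraction. The sketch below estimates it empirically from a binary loss trace; the paper instead derives it analytically for the MSP buffer model.

```python
from itertools import groupby

def burst_ratio(loss_trace):
    """Estimate the burst ratio from a 0/1 trace (1 = packet lost).

    B = (mean observed loss-burst length) / (mean burst length for independent
        losses with the same loss probability) = G / (1 / (1 - p)) = G * (1 - p).
    """
    n = len(loss_trace)
    losses = sum(loss_trace)
    if losses == 0 or losses == n:
        raise ValueError("trace must contain both lost and delivered packets")
    p = losses / n
    burst_lengths = [sum(1 for _ in grp) for val, grp in groupby(loss_trace) if val == 1]
    mean_burst = sum(burst_lengths) / len(burst_lengths)
    return mean_burst * (1 - p)

# Same 10% loss rate, different clustering: isolated losses vs. pairs of losses.
isolated = ([1] + [0] * 9) * 100
paired = ([1, 1] + [0] * 18) * 50
print(round(burst_ratio(isolated), 2))   # 0.9  (losses less clustered than random)
print(round(burst_ratio(paired), 2))     # 1.8  (losses clustered in series)
```

Values above 1 indicate losses more clustered than random, which is exactly the regime the paper studies as a function of packet-size correlation, buffer size, and traffic intensity.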

24 pages, 2094 KB  
Article
A Generalized and Real-Time Network Intrusion Detection System Through Incremental Feature Encoding and Similarity Embedding Learning
by Zahraa kadhim Alitbi, Seyed Amin Hosseini Seno, Abbas Ghaemi Bafghi and Davood Zabihzadeh
Sensors 2025, 25(16), 4961; https://doi.org/10.3390/s25164961 - 11 Aug 2025
Viewed by 611
Abstract
Many Network Intrusion Detection Systems (NIDSs) process sessions only after their completion, relying on statistical features generated by tools such as CICFlowMeter. Thus, they cannot be used for real-time intrusion detection. Packet-based NIDSs address this challenge by extracting features from the input packet data. However, they often process packets independently, resulting in low detection accuracy. Recent advancements have captured temporal relations between the packets of a given session; however, they use a fixed window size for representing sessions. This representation is inefficient and ineffective for processing short and long sessions. Moreover, these systems cannot detect unobserved attack types during training. To address these issues, the proposed method extracts features from consecutive packets of an ongoing session in an online manner and learns a compact and discriminative embedding space using the proposed multi-proxy similarity loss function. Using the learned embedding and a novel class-wise thresholding approach, our method alleviates the imbalance issue in NIDSs and accurately identifies observed and novel attacks. The experiments on two large-scale datasets confirm that our method effectively detects attack activities by processing fewer than seven packets of an ongoing session. Moreover, it outperforms all the competing methods by a large margin for detecting observed and novel attacks. Full article
(This article belongs to the Section Sensor Networks)
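The class-wise thresholding idea can be sketched independently of the learned embedding: each known class keeps a prototype and a distance threshold, and a session whose embedding is far from every prototype is flagged as a novel attack. The Euclidean distance, the 95th-percentile thresholds, and the synthetic embeddings below are illustrative assumptions; the real embeddings come from the paper's multi-proxy similarity training.

```python
import numpy as np

def fit_prototypes(embeddings, labels, percentile=95):
    """Per-class prototype (mean embedding) and Euclidean distance threshold."""
    prototypes, thresholds = {}, {}
    for cls in np.unique(labels):
        emb = embeddings[labels == cls]
        proto = emb.mean(axis=0)
        dists = np.linalg.norm(emb - proto, axis=1)
        prototypes[cls] = proto
        thresholds[cls] = np.percentile(dists, percentile)
    return prototypes, thresholds

def classify(embedding, prototypes, thresholds):
    """Nearest known class, or 'novel' if the embedding exceeds that class's threshold."""
    dists = {cls: np.linalg.norm(embedding - proto) for cls, proto in prototypes.items()}
    best_cls = min(dists, key=dists.get)
    return best_cls if dists[best_cls] <= thresholds[best_cls] else "novel"

rng = np.random.default_rng(0)
train = np.vstack([rng.normal(0, 0.1, (200, 8)), rng.normal(1, 0.1, (200, 8))])
labels = np.array([0] * 200 + [1] * 200)
protos, th = fit_prototypes(train, labels)
print(classify(rng.normal(0, 0.1, 8), protos, th))   # -> 0 (known class)
print(classify(rng.normal(5, 0.1, 8), protos, th))   # -> novel (far from both classes)
```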

30 pages, 4817 KB  
Article
A Robust Multi-Port Network Interface Architecture with Real-Time CRC-Based Fault Recovery for In-Vehicle Communication Networks
by Sungju Lee, Sungwook Yu and Taikyeong Jeong
Actuators 2025, 14(8), 391; https://doi.org/10.3390/act14080391 - 7 Aug 2025
Viewed by 570
Abstract
As the automotive industry continues to evolve rapidly, there is a growing demand for high-throughput, reliable communication systems within vehicles. This paper presents the implementation and verification of a fault-tolerant Ethernet-based communication protocol tailored for automotive applications operating at 1 Gbps and above. The proposed system introduces a multi-port Network Interface Controller (NIC) architecture that supports real-time communication and robust fault handling. To ensure adaptability across various in-vehicle network (IVN) scenarios, the system allows for configurable packet sizes and transmission rates and supports diverse data formats. The architecture integrates cyclic redundancy check (CRC)-based error detection, real-time recovery mechanisms, and file-driven data injection techniques. Functional validation is performed using Verilog HDL simulations, demonstrating deterministic timing behavior, modular scalability, and resilience under fault injection. Compared to conventional dual-port NICs, the proposed quad-port architecture demonstrates superior scalability and error tolerance under injected fault conditions. Experimental results confirm that the proposed NIC architecture achieves stable multi-port communication in embedded automotive environments. This study further introduces a novel quad-port NIC with an integrated fault injection algorithm and evaluates its error tolerance. Full article
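In software terms, the CRC principle behind the NIC's error detection works as sketched below: the sender appends a CRC-32 over the payload, the receiver recomputes it, and a mismatch triggers recovery (here just a retransmission request placeholder). The frame layout and the zlib CRC-32 polynomial are assumptions for illustration; the actual design computes the CRC in Verilog hardware.

```python
import struct
import zlib

def frame_with_crc(payload: bytes) -> bytes:
    """Append a big-endian CRC-32 of the payload to form a frame."""
    return payload + struct.pack(">I", zlib.crc32(payload) & 0xFFFFFFFF)

def check_frame(frame: bytes):
    """Split a frame into payload and a flag saying whether the CRC matched."""
    payload, received_crc = frame[:-4], struct.unpack(">I", frame[-4:])[0]
    return payload, (zlib.crc32(payload) & 0xFFFFFFFF) == received_crc

message = b"steering-angle:12.5;wheel-speed:88.2"
frame = frame_with_crc(message)

# Fault injection: flip one bit in transit, as in the NIC's fault-injection tests.
corrupted = bytearray(frame)
corrupted[3] ^= 0x01

for name, f in (("clean", frame), ("corrupted", bytes(corrupted))):
    payload, ok = check_frame(f)
    print(f"{name}: CRC {'ok' if ok else 'mismatch -> request retransmission'}")
```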

12 pages, 1042 KB  
Article
Steady-State PERG Adaptation Reveals Temporal Abnormalities of Retinal Ganglion Cells in Treated Ocular Hypertension and Glaucoma
by Tommaso Salgarello, Andrea Giudiceandrea, Grazia Maria Cozzupoli, Martina Cocuzza, Romolo Fedeli, Donato Errico, Antonello Fadda, Filippo Amore, Marco Sulfaro, Epifanio Giudiceandrea, Matteo Salgarello, Stanislao Rizzo and Benedetto Falsini
Diagnostics 2025, 15(14), 1797; https://doi.org/10.3390/diagnostics15141797 - 16 Jul 2025
Viewed by 446
Abstract
Background/Objectives: This study investigates adaptive changes in long-lasting pattern electroretinogram (PERG) responses in ocular hypertension (OHT) and open-angle glaucoma (OAG) patients, and in healthy subjects. Methods: Sixty consecutive individuals were recruited, including 20 OHT, 20 OAG, and 20 normal subjects. All participants underwent comprehensive ophthalmologic examination, 30–2 perimetry, and retinal nerve fiber layer imaging. Steady-state (7.5 Hz) PERGs were recorded over approximately 2 min, in response to 90% contrast alternating gratings within a large field size. The recordings were acquired into a sequence of 10 averages (packets), lasting 10 s each, following a standardized adaptation paradigm (Next Generation PERG, PERGx). Key outcome measures included PERGx parameters reflecting response amplitude and phase changes over time. Results: The PERGx grand average scalar amplitude, a surrogate of ordinary PERG, was significantly reduced in both OHT and OAG groups compared to normal subjects (p < 0.01). In contrast, minimal adaptation changes were noted in PERGx amplitude among all groups. The PERGx phase exhibited a progressive decline over time, with consistent delays of approximately 20 degrees across all groups. Angular dispersion of the PERGx phase increased significantly in OHT patients compared to normal subjects (p < 0.05). An inverse relationship was observed between PERGx angular dispersion and treated intraocular pressure, specifically in OHT patients. Conclusions: The findings suggest that both OHT and OAG eyes may exhibit temporal abnormalities in PERG adaptation, potentially indicating early dysfunction in retinal ganglion cell activity. Translational Relevance: PERGx phase changes may have significant implications for glaucoma early detection and management. Full article
(This article belongs to the Special Issue Innovative Diagnostic Approaches in Retinal Diseases)

21 pages, 1644 KB  
Article
Fuzzy-Based Control System for Solar-Powered Bulk Service Queueing Model with Vacation
by Radhakrishnan Keerthika, Subramani Palani Niranjan and Sorin Vlase
Appl. Sci. 2025, 15(13), 7547; https://doi.org/10.3390/app15137547 - 4 Jul 2025
Viewed by 459
Abstract
This study proposes a Fuzzy-Based Control System (FBCS) for a Bulk Service Queueing Model with Vacation, designed to optimize service performance by dynamically adjusting system parameters. The queueing model is categorized into three service levels: (A) High Bulk Service, where a large number of arrivals are processed simultaneously; (B) Medium Single Service, where individual packets are handled at a moderate rate; and (C) Low Vacation, where the server takes minimal breaks to maintain efficiency. The Mamdani Inference System (MIS) is implemented to regulate key parameters, such as service rate, bulk size, and vacation duration, based on input variables including queue length, arrival rate, and server utilization. The Mamdani-based fuzzy control mechanism utilizes rule-based reasoning to ensure adaptive decision-making, effectively balancing system performance under varying conditions. By integrating bulk service with a controlled vacation policy, the model achieves an optimal trade-off between processing efficiency and resource utilization. This study examines the effects of fuzzy-based control on key performance metrics, including queue stability, waiting time, and system utilization. The results indicate that the proposed approach enhances operational efficiency and service continuity compared to traditional queueing models. Full article
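A Mamdani inference step of the kind used here can be sketched with the scikit-fuzzy control API, mapping queue length and arrival rate to a service rate. The membership functions, universes, and three rules below are assumptions and far simpler than the paper's rule base, which also governs bulk size and vacation duration.

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Input and output variables with assumed universes of discourse.
queue = ctrl.Antecedent(np.arange(0, 101, 1), 'queue_length')
arrival = ctrl.Antecedent(np.arange(0, 11, 1), 'arrival_rate')
service = ctrl.Consequent(np.arange(0, 11, 1), 'service_rate')

# Triangular membership functions (illustrative shapes only).
queue['short'] = fuzz.trimf(queue.universe, [0, 0, 40])
queue['medium'] = fuzz.trimf(queue.universe, [20, 50, 80])
queue['long'] = fuzz.trimf(queue.universe, [60, 100, 100])
arrival['low'] = fuzz.trimf(arrival.universe, [0, 0, 5])
arrival['high'] = fuzz.trimf(arrival.universe, [4, 10, 10])
service['slow'] = fuzz.trimf(service.universe, [0, 0, 5])
service['moderate'] = fuzz.trimf(service.universe, [3, 5, 8])
service['fast'] = fuzz.trimf(service.universe, [6, 10, 10])

# A tiny Mamdani rule base: busier queues and heavier arrivals demand faster service.
rules = [
    ctrl.Rule(queue['short'] & arrival['low'], service['slow']),
    ctrl.Rule(queue['medium'] | arrival['high'], service['moderate']),
    ctrl.Rule(queue['long'] & arrival['high'], service['fast']),
]

system = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
system.input['queue_length'] = 75
system.input['arrival_rate'] = 8
system.compute()
print(round(system.output['service_rate'], 2))   # defuzzified (centroid) service rate
```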

37 pages, 18679 KB  
Article
Real-Time DDoS Detection in High-Speed Networks: A Deep Learning Approach with Multivariate Time Series
by Drixter V. Hernandez, Yu-Kuen Lai and Hargyo T. N. Ignatius
Electronics 2025, 14(13), 2673; https://doi.org/10.3390/electronics14132673 - 1 Jul 2025
Viewed by 1505
Abstract
The exponential growth of Distributed Denial-of-Service (DDoS) attacks in high-speed networks presents significant real-time detection and mitigation challenges. The existing detection frameworks are categorized into flow-based and packet-based detection approaches. Flow-based approaches usually suffer from high latency and controller overhead in high-volume traffic. In contrast, packet-based approaches are prone to high false-positive rates and limited attack classification, resulting in delayed mitigation responses. To address these limitations, we propose a real-time DDoS detection architecture that combines hardware-accelerated statistical preprocessing with GPU-accelerated deep learning models. The raw packet header information is transformed into multivariate time series data to enable classification of complex traffic patterns using Temporal Convolutional Networks (TCN), Long Short-Term Memory (LSTM) networks, and Transformer architectures. We evaluated the proposed system using experiments conducted under low to high-volume background traffic to validate each model’s robustness and adaptability in a real-time network environment. The experiments are conducted across different time window lengths to determine the trade-offs between detection accuracy and latency. The results show that larger observation windows improve detection accuracy using TCN and LSTM models and consistently outperform the Transformer in high-volume scenarios. Regarding model latency, TCN and Transformer exhibit constant latency across all window sizes. We also used SHAP (Shapley Additive exPlanations) analysis to identify the most discriminative traffic features, enhancing model interpretability and supporting feature selection for computational efficiency. Among the experimented models, TCN achieves the most balance between detection performance and latency, making it an applicable model for the proposed architecture. These findings validate the feasibility of the proposed architecture and support its potential as a real-time DDoS detection application in a realistic high-speed network. Full article
(This article belongs to the Special Issue Emerging Technologies for Network Security and Anomaly Detection)
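The multivariate time-series construction can be illustrated by binning raw packet-header fields into fixed-length observation windows, one row per sub-interval, as input for a TCN/LSTM/Transformer classifier. The window length, bin count, chosen per-bin statistics, and synthetic traffic below are assumptions, not the paper's hardware-accelerated preprocessing.

```python
import numpy as np

WINDOW_MS = 250       # assumed observation window length
N_BINS = 32           # sub-intervals per window -> sequence length seen by the model

def window_to_series(timestamps_ms, sizes, syn_flags, window_ms=WINDOW_MS, n_bins=N_BINS):
    """Aggregate one window of packets into an (n_bins, 3) series:
    packets per bin, mean packet size per bin, SYN count per bin."""
    edges = np.linspace(0, window_ms, n_bins + 1)
    bin_idx = np.clip(np.digitize(timestamps_ms, edges) - 1, 0, n_bins - 1)
    series = np.zeros((n_bins, 3), dtype=np.float32)
    for b in range(n_bins):
        mask = bin_idx == b
        count = mask.sum()
        series[b, 0] = count
        series[b, 1] = sizes[mask].mean() if count else 0.0
        series[b, 2] = syn_flags[mask].sum()
    return series

# Synthetic window: 5000 packets with a SYN-flood-like burst in the second half.
rng = np.random.default_rng(0)
ts = np.sort(rng.uniform(0, WINDOW_MS, 5000))
sizes = rng.choice([60, 590, 1500], size=5000, p=[0.5, 0.3, 0.2]).astype(float)
syn = ((ts > WINDOW_MS / 2) & (rng.random(5000) < 0.8)).astype(int)

X = window_to_series(ts, sizes, syn)
print(X.shape)        # (32, 3): one multivariate sequence per observation window
```

Stacking such windows over time, with labels per window, yields exactly the kind of sequence dataset on which the TCN, LSTM, and Transformer models are compared.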
