Search Results (6,468)

Search Parameters:
Keywords = latency

11 pages, 1896 KiB  
Article
Real-Time Cell Gap Estimation in LC-Filled Devices Using Lightweight Neural Networks for Edge Deployment
by Chi-Yen Huang, You-Lun Zhang, Su-Yu Liao, Wen-Chun Huang, Jiann-Heng Chen, Bo-Chang Dong, Che-Ju Hsu and Chun-Ying Huang
Nanomaterials 2025, 15(16), 1289; https://doi.org/10.3390/nano15161289 - 21 Aug 2025
Abstract
Accurate determination of the liquid crystal (LC) cell gap after filling is essential for ensuring device performance in LC-based optical applications. However, the introduction of birefringent materials significantly distorts the transmission spectrum, complicating traditional optical analysis. In this work, we propose a lightweight machine learning framework using a shallow multilayer perceptron (MLP) to estimate the cell gap directly from the transmission spectrum of filled LC cells. The model was trained on experimentally acquired spectra with peak-to-peak interferometry-derived ground truth values. We systematically evaluated different optimization algorithms, activation functions, and hidden neuron configurations to identify an optimal model setting that balances prediction accuracy and computational simplicity. The best-performing model, using exponential activation with eight hidden units and BFGS optimization, achieved a correlation coefficient near 1 and an RMSE below 0.1 μm across multiple random seeds and training–test splits. The model was successfully deployed on a Raspberry Pi 4, demonstrating real-time inference with low latency, memory usage, and power consumption. These results validate the feasibility of portable, edge-based LC inspection systems for in situ diagnostics and quality control. Full article
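As a rough, self-contained illustration of the configuration reported above, the sketch below fits a shallow MLP with eight hidden units, an exponential activation, and BFGS optimisation. It is not the authors' implementation: the number of wavelength samples, the spectra, and the gap values are synthetic placeholders standing in for measured transmission spectra and interferometry-derived ground truth.

```python
# Hedged sketch: shallow MLP (8 hidden units, exponential activation) fitted with BFGS.
# Not the authors' code; X (spectra) and y (cell gaps, micrometres) are synthetic stand-ins.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_wl, n_hidden = 32, 8                         # wavelength samples (assumed) and hidden units
X = rng.uniform(0.0, 1.0, size=(100, n_wl))    # placeholder transmission spectra
y = rng.uniform(3.0, 8.0, size=100)            # placeholder cell gaps

def unpack(theta):
    """Split the flat parameter vector into layer weights and biases."""
    i = n_wl * n_hidden
    W1 = theta[:i].reshape(n_wl, n_hidden)
    b1 = theta[i:i + n_hidden]
    W2 = theta[i + n_hidden:i + 2 * n_hidden]
    b2 = theta[-1]
    return W1, b1, W2, b2

def predict(theta, X):
    W1, b1, W2, b2 = unpack(theta)
    h = np.exp(np.clip(X @ W1 + b1, -20, 20))   # exponential activation, clipped for stability
    return h @ W2 + b2

def mse(theta):
    return np.mean((predict(theta, X) - y) ** 2)

theta0 = 0.01 * rng.standard_normal(n_wl * n_hidden + 2 * n_hidden + 1)
fit = minimize(mse, theta0, method="BFGS")      # quasi-Newton optimisation, as in the abstract
print("training RMSE:", np.sqrt(mse(fit.x)))
```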
29 pages, 1424 KiB  
Article
A Multi-Layer Quantum-Resilient IoT Security Architecture Integrating Uncertainty Reasoning, Relativistic Blockchain, and Decentralised Storage
by Gerardo Iovane
Appl. Sci. 2025, 15(16), 9218; https://doi.org/10.3390/app15169218 - 21 Aug 2025
Abstract
The rapid development of the Internet of Things (IoT) has enabled interconnected intelligent systems in highly dynamic, resource-constrained contexts. However, traditional paradigms, such as ECC-based schemes and centralised decision-making frameworks, cannot be modernised to ensure resilience, scalability and security in the face of quantum threats. To address this, we propose a modular architecture that integrates quantum-inspired (QI) cryptography, epistemic uncertainty reasoning, the multiscale blockchain MuReQua, and the quantum-inspired decentralised storage engine (DeSSE) with fragmented entropy storage. Each component addresses a specific cybersecurity weakness of IoT devices: QI cryptography provides quantum-resistant communication, epistemic agents support cognitive decision-making under uncertainty, MuReQua provides lightweight adaptive consensus, and DeSSE provides fragmented entropy storage. Tested through simulations and use-case analyses in industrial, healthcare and automotive networks, the architecture shows superior latency, decision accuracy and fault tolerance compared with conventional solutions. Furthermore, its modular nature allows for incremental integration and domain-specific customisation. By combining reasoning, trust and quantum security, it becomes possible to design intelligent decentralised architectures for resilient IoT ecosystems, strengthening system defences. In turn, this work offers both a specific architectural response and a broader perspective on secure decentralised computing in anticipation of the imminent advent of quantum computers. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
14 pages, 501 KiB  
Article
Depressive and Anxiety Symptoms Among Patients with Asbestos-Related Diseases in Korea
by Min-Sung Kang, Mee-Ri Lee and Young Hwangbo
Toxics 2025, 13(8), 703; https://doi.org/10.3390/toxics13080703 - 21 Aug 2025
Abstract
Asbestos-related diseases (ARDs), including malignant mesothelioma, asbestos-related lung cancer, and asbestosis, are known for their long latency periods and poor prognoses. Although the physical effects of ARDs have been widely studied, limited research has examined the psychological burden faced by affected individuals. This study investigated depressive and anxiety symptoms among 275 patients officially recognized as asbestos victims in Korea. Mental health was assessed using the Korean version of the Patient Health Questionnaire-9 (PHQ-9), Generalized Anxiety Disorder-7 (GAD-7), and the Hospital Anxiety and Depression Scale (HADS). The analysis revealed that the mean ± standard deviation of depression and anxiety levels among patients with asbestos-related diseases were 8.06 ± 6.27 for PHQ-9, 6.02 ± 5.64 for GAD-7, 7.09 ± 5.44 for HADS-A, and 8.41 ± 5.47 for HADS-D. Patients with asbestosis had higher levels of depressive and anxiety symptoms than those with malignant mesothelioma or lung cancer, with symptom severity increasing alongside asbestosis grade. When compared with national data from the 2020–2021 Korea National Health and Nutrition Examination Survey (KNHANES), PHQ-9 and GAD-7 scores among ARD patients, particularly those with Grade 1 asbestosis, were higher than the scores reported for all major cancer types. These findings highlight the substantial psychological distress experienced by individuals with ARDs and emphasize the urgent need for targeted mental health interventions in this population. Full article
(This article belongs to the Section Human Toxicology and Epidemiology)
36 pages, 14078 KiB  
Article
Workload Prediction for Proactive Resource Allocation in Large-Scale Cloud-Edge Applications
by Thang Le Duc, Chanh Nguyen and Per-Olov Östberg
Electronics 2025, 14(16), 3333; https://doi.org/10.3390/electronics14163333 - 21 Aug 2025
Abstract
Accurate workload prediction is essential for proactive resource allocation in large-scale Content Delivery Networks (CDNs), where traffic patterns are highly dynamic and geographically distributed. This paper introduces a CDN-tailored prediction and autoscaling framework that integrates statistical and deep learning models within an adaptive feedback loop. The framework is evaluated using 18 months of real traffic traces from a production multi-tier CDN, capturing realistic workload seasonality, cache–tier interactions, and propagation delays. Unlike generic cloud-edge predictors, our design incorporates CDN-specific features and model-switching mechanisms to balance prediction accuracy with computational cost. Seasonal ARIMA (S-ARIMA), Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), and Online Sequential Extreme Learning Machine (OS-ELM) are combined to support both short-horizon scaling and longer-term capacity planning. The predictions drive a queue-based resource-estimation model, enabling proactive cache–server scaling with low rejection rates. Experimental results demonstrate that the framework maintains high accuracy while reducing computational overhead through adaptive model selection. The proposed approach offers a practical, production-tested solution for predictive autoscaling in CDNs and can be extended to other latency-sensitive edge-cloud services with hierarchical architectures. Full article
(This article belongs to the Special Issue Next-Generation Cloud–Edge Computing: Systems and Applications)
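To make the forecast-then-provision loop concrete, here is a minimal sketch, not the paper's framework, that fits a seasonal ARIMA (S-ARIMA) model with statsmodels and turns a short-horizon forecast into a proactive server count via a simple capacity rule. The synthetic hourly workload, the per-server service rate, and the headroom factor are illustrative assumptions.

```python
# Minimal sketch, not the paper's framework: an S-ARIMA short-horizon forecast
# driving a simple capacity rule for proactive cache-server scaling.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
hours = np.arange(24 * 28)                                   # four weeks of hourly samples
load = 1000 + 400 * np.sin(2 * np.pi * (hours % 24) / 24) + rng.normal(0, 50, hours.size)

model = SARIMAX(load, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))  # daily seasonality
fitted = model.fit(disp=False)
forecast = fitted.get_forecast(steps=6).predicted_mean        # requests/hour, next 6 hours

service_rate = 300.0        # assumed requests/hour one cache server absorbs within SLA
headroom = 1.2              # over-provisioning factor to keep rejection rates low
servers = np.ceil(headroom * forecast / service_rate).astype(int)
print(servers)              # proactive server count per forecast hour
```

A production system would, as the abstract describes, switch adaptively between S-ARIMA, LSTM, Bi-LSTM, and OS-ELM predictors and feed a queue-based resource-estimation model rather than this fixed rule.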
53 pages, 2463 KiB  
Review
Efficient Caching Strategies in NDN-Enabled IoT Networks: Strategies, Constraints, and Future Directions
by Ala’ Ahmad Alahmad, Azana Hafizah Mohd Aman, Faizan Qamar and Wail Mardini
Sensors 2025, 25(16), 5203; https://doi.org/10.3390/s25165203 - 21 Aug 2025
Abstract
Named Data Networking (NDN) represents a significant shift within the information-centric networking (ICN) paradigm: it moves away from today's IP-based infrastructure by retrieving data by name rather than by host location. This paradigm shift is especially beneficial in Internet of Things (IoT) settings, where information sharing is a critical challenge and millions of IoT devices generate enormous traffic. In-network content caching is another key characteristic of NDN in IoT: data can be stored within the network, allowing IoT devices to obtain the intended content from nearby caching nodes, which in turn minimizes latency and bandwidth consumption. However, effective caching solutions must be developed, since cache management is complicated by the constantly changing topology of IoT networks and the constrained capabilities of IoT devices. This paper gives an overview of caching strategies in NDN-based IoT systems. It examines six strategy types: popularity-based, freshness-aware, collaborative, hybrid, probabilistic, and machine learning-based, and evaluates their performance with respect to requirements such as content preference, cache updating, and power consumption. By analyzing these caching policies and their performance characteristics, the paper provides valuable insights for researchers and practitioners developing caching strategies in NDN-based IoT networks. Full article
(This article belongs to the Section Internet of Things)
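As a toy example of one of the surveyed families, popularity-based caching with a freshness check, the sketch below caches a content object only after it has been requested a minimum number of times, serves it while it is still fresh, and evicts the least-requested entry when full. The class name, thresholds, and eviction rule are invented for illustration and are not taken from the review.

```python
# Toy illustration of a popularity-based, freshness-aware cache node (invented parameters).
import time

class PopularityCache:
    def __init__(self, capacity=100, min_hits=2, max_age_s=60.0):
        self.capacity, self.min_hits, self.max_age_s = capacity, min_hits, max_age_s
        self.hits = {}          # interest counts per content name
        self.store = {}         # name -> (data, insertion time)

    def on_interest(self, name):
        self.hits[name] = self.hits.get(name, 0) + 1
        entry = self.store.get(name)
        if entry and time.time() - entry[1] <= self.max_age_s:
            return entry[0]                       # fresh cache hit
        self.store.pop(name, None)                # stale or absent: treat as a miss
        return None

    def on_data(self, name, data):
        if self.hits.get(name, 0) < self.min_hits:
            return                                # not popular enough to cache
        if len(self.store) >= self.capacity:      # evict the least-requested entry
            victim = min(self.store, key=lambda n: self.hits.get(n, 0))
            del self.store[victim]
        self.store[name] = (data, time.time())

cache = PopularityCache(capacity=2, min_hits=2, max_age_s=30.0)
for _ in range(2):
    cache.on_interest("/sensor/temp/42")
cache.on_data("/sensor/temp/42", b"21.5C")
print(cache.on_interest("/sensor/temp/42"))       # b'21.5C' while the entry is still fresh
```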
22 pages, 1904 KiB  
Article
Management of Virtualized Railway Applications
by Ivaylo Atanasov, Evelina Pencheva and Kamelia Nikolova
Information 2025, 16(8), 712; https://doi.org/10.3390/info16080712 - 21 Aug 2025
Abstract
Robust, reliable, and secure communications are essential for efficient railway operation and keeping employees and passengers safe. The Future Railway Mobile Communication System (FRMCS) is a global standard aimed at providing innovative, essential, and high-performance communication applications in railway transport. In comparison with the legacy communication system (GSM-R), it provides high data rates, ultra-high reliability, and low latency. The FRMCS architecture will also benefit from cloud computing, following the principles of the cloud-native 5G core network design based on Network Function Virtualization (NFV). In this paper, an approach to the management of virtualized FRMCS applications is presented. First, the key management functionality related to the virtualized FRMCS application is identified based on an analysis of the different use cases. Next, this functionality is synthesized as RESTful services. The communication between application management and the services is designed as Application Programming Interfaces (APIs). The APIs are formally verified by modeling the management states of an FRMCS application instance from different points of view, and it is mathematically proved that the management state models are synchronized in time. The latency introduced by the designed APIs, as a key performance indicator, is evaluated through emulation. Full article
(This article belongs to the Section Information Applications)
15 pages, 1970 KiB  
Article
Transmission Control for Space–Air–Ground Integrated Multi-Hop Networks in Millimeter-Wave and Terahertz Communications
by Liang Zong, Yun Cheng, Zhangfeng Ma, Han Wang, Zhan Liu and Yinqing Tang
Electronics 2025, 14(16), 3330; https://doi.org/10.3390/electronics14163330 - 21 Aug 2025
Abstract
Millimeter-wave (mmWave) and terahertz (THz) communications are susceptible to frequent link disruptions and severe performance degradation due to high directionality, significant path loss, and sensitivity to blockages. These challenges are particularly acute in highly dynamic and densely populated user environments. The issues present significant obstacles to ensuring reliability and quality of service (QoS) in future space–air–ground integrated networks. To address these challenges, this paper proposes an adaptive transmission control scheme designed for space–air–ground integrated multi-hop networks operating in the mmWave/THz bands. By analyzing the intermittent connectivity inherent in such networks, the proposed scheme incorporates an incremental factor and a backlog indicator into its congestion control mechanism. This allows for the accurate differentiation between packet losses resulting from network congestion and those caused by channel blockages, such as human body occlusion or beam misalignment. Furthermore, the scheme optimizes the initial congestion window during the slow-start phase and dynamically adapts its transmission strategy during the congestion avoidance phase according to the identified cause of packet loss. Simulation results demonstrate that the proposed method effectively mitigates throughput degradation from link blockages, improves data transmission rates in highly dynamic environments, and sustains more stable end-to-end connectivity. Our proposed scheme achieves a 35% higher throughput than TCP Hybla, 40% lower latency than TCP Veno, and maintains 99.2% link utilization under high mobility. Full article
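The following is an illustrative sketch only, not the proposed scheme: a toy congestion-window update that reacts to loss differently depending on a backlog indicator, echoing the idea of separating congestion losses from blockage losses such as body occlusion or beam misalignment. The window sizes, incremental factor, backlog heuristic, and threshold are invented.

```python
# Toy congestion-control step (invented parameters), in the spirit of the scheme above.
def update_cwnd(cwnd: float, loss: bool, backlog_ratio: float,
                incremental_factor: float = 2.0, backlog_threshold: float = 0.8) -> float:
    """Return the new congestion window (in segments).

    backlog_ratio ~ queued data relative to buffer size (e.g. estimated from RTT
    inflation); loss with a high backlog suggests genuine congestion, while loss
    with a near-empty backlog suggests a channel blockage.
    """
    if not loss:
        return cwnd + incremental_factor          # additive growth while the path is clear
    if backlog_ratio >= backlog_threshold:
        return max(cwnd / 2.0, 1.0)               # congestion loss: multiplicative decrease
    return cwnd                                   # blockage loss: hold the window, retransmit

cwnd = 10.0
for loss, backlog in [(False, 0.1), (True, 0.2), (True, 0.9), (False, 0.5)]:
    cwnd = update_cwnd(cwnd, loss, backlog)
    print(f"loss={loss} backlog={backlog:.1f} -> cwnd={cwnd:.1f}")
```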
30 pages, 3477 KiB  
Article
Dynamic Task Scheduling Based on Greedy and Deep Reinforcement Learning Algorithms for Cloud–Edge Collaboration in Smart Buildings
by Ping Yang and Jiangmin He
Electronics 2025, 14(16), 3327; https://doi.org/10.3390/electronics14163327 - 21 Aug 2025
Abstract
Driven by technologies such as the Internet of Things and artificial intelligence, smart buildings have developed rapidly, and the demand for processing massive amounts of data has risen sharply. Traditional cloud computing is confronted with challenges such as high network latency and large bandwidth pressure. Although edge computing can effectively reduce latency, it has problems such as resource limitations and difficulties with cluster collaboration. Therefore, cloud–edge collaboration has become an inevitable choice to meet the real-time and reliability requirements of smart buildings. In view of the problems with the existing task scheduling methods in the smart building scenario, such as ignoring container compatibility constraints, the difficulty in balancing global optimization and real-time performance, and the difficulty in adapting to the dynamic environments, this paper proposes a two-stage cloud-edge collaborative dynamic task scheduling mechanism. Firstly, a task scheduling system model supporting container compatibility was constructed, aiming to minimize system latency and energy consumption while ensuring the real-time requirements of tasks were met. Secondly, for this task-scheduling problem, a hierarchical and progressive solution was designed: In the first stage, a Resource-Aware Cost-Driven Greedy algorithm (RACDG) was proposed to enable edge nodes to quickly generate the initial task offloading decision. In the second stage, for the tasks that need to be offloaded in the initial decision-making, a Proximal Policy Optimization algorithm based on Action Masks (AMPPO) is proposed to achieve global dynamic scheduling. Finally, in the simulation experiments, the comparison with other classical algorithms shows that the algorithm proposed in this paper can reduce the system delay by 26–63.7%, reduce energy consumption by 21.7–66.9%, and still maintain a task completion rate of more than 91.3% under high-load conditions. It has good scheduling robustness and application potential. It provides an effective solution for the cloud–edge collaborative task scheduling of smart buildings. Full article
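As a small sketch of the action-masking idea only, not the paper's AMPPO agent, the snippet below masks infeasible placements, for example nodes without a compatible container image, before sampling actions, so the policy can never select them. The shapes and the feasibility matrix are illustrative.

```python
# Sketch of action masking for a placement policy; sizes and feasibility are invented.
import torch

logits = torch.randn(4, 6)                 # 4 tasks x 6 candidate nodes (edge + cloud)
feasible = torch.tensor([[1, 1, 0, 1, 0, 1],
                         [0, 1, 1, 1, 1, 0],
                         [1, 0, 0, 1, 1, 1],
                         [1, 1, 1, 0, 0, 1]], dtype=torch.bool)   # container compatibility

masked_logits = logits.masked_fill(~feasible, float("-inf"))       # forbid infeasible nodes
dist = torch.distributions.Categorical(logits=masked_logits)
actions = dist.sample()                    # chosen node per task, feasible by construction
log_probs = dist.log_prob(actions)         # would feed the PPO surrogate objective
print(actions.tolist())
```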
21 pages, 2657 KiB  
Article
AI-Powered Adaptive Disability Prediction and Healthcare Analytics Using Smart Technologies
by Malak Alamri, Mamoona Humayun, Khalid Haseeb, Naveed Abbas and Naeem Ramzan
Diagnostics 2025, 15(16), 2104; https://doi.org/10.3390/diagnostics15162104 - 21 Aug 2025
Abstract
Background: By leveraging advanced wireless technologies, Healthcare Industry 5.0 promotes the continuous monitoring of real-time medical acquisition from the physical environment. These systems help identify diseases early by promptly collecting health records from patients’ bodies using biosensors. The dynamic nature of medical devices not only enhances the data analysis in medical services and the prediction of chronic diseases, but also improves remote diagnostics with the latency-aware healthcare system. However, due to scalability and reliability limitations in data processing, most existing healthcare systems pose research challenges in the timely detection of personalized diseases, leading to inconsistent diagnoses, particularly when continuous monitoring is crucial. Methods: This work proposes an adaptive and secure framework for disability identification using the Internet of Medical Things (IoMT), integrating edge computing and artificial intelligence. To achieve the shortest response time for medical decisions, the proposed framework explores lightweight edge computing processes that collect physiological and behavioral data using biosensors. Furthermore, it offers a trusted mechanism using decentralized strategies to protect big data analytics from malicious activities and increase authentic access to sensitive medical data. Lastly, it provides personalized healthcare interventions while monitoring healthcare applications using realistic health records, thereby enhancing the system’s ability to identify diseases associated with chronic conditions. Results: The proposed framework is tested using simulations, and the results indicate the high accuracy of the healthcare system in detecting disabilities at the edges, while enhancing the prompt response of the cloud server and guaranteeing the security of medical data through lightweight encryption methods and federated learning techniques. Conclusions: The proposed framework offers a secure and efficient solution for identifying disabilities in healthcare systems by leveraging IoMT, edge computing, and AI. It addresses critical challenges in real-time disease monitoring, enhancing diagnostic accuracy and ensuring the protection of sensitive medical data. Full article
14 pages, 8079 KiB  
Article
Epilepsy Associated Gene, Pcdh7, Is Dispensable for Brain Development in Mice
by Jennifer Rakotomamonjy, Devin Davies, Xavier Valencia, Olivia Son, Ximena Gomez-Maqueo and Alicia Guemez-Gamboa
Genes 2025, 16(8), 985; https://doi.org/10.3390/genes16080985 - 21 Aug 2025
Abstract
Background/Objectives: Protocadherin 7 (Pcdh7) belongs to the protocadherin family, the largest subgroup of cell adhesion molecules. Members of this family are highly expressed in the brain, where they serve fundamental roles in many neurodevelopmental processes, including axon guidance, dendrite self-avoidance, and synaptic formation. PCDH7 has been strongly associated with epilepsy in multiple genome-wide association studies (GWAS), as well as with schizophrenia, PTSD, and childhood aggression. Despite these associations, the specific contributions of PCDH7 to epileptogenesis and brain development remain largely unexplored. Most of the existing literature on PCDH7 focuses on its function during cancer progression, with only one study suggesting that PCDH7 regulates dendritic spine morphology and synaptic function via interaction with GluN1. Methods: Here, we generate, validate, and characterize a murine null Pcdh7 allele in which a large deletion was introduced by CRISPR. Results: Analysis of embryonic, postnatal, and adult brain datasets confirmed PCDH7 widespread expression. Pcdh7+/− and Pcdh7−/− mice present no gross morphological defects and normal cortical layer formation. However, a seizure susceptibility assay revealed increased latencies in Pcdh7+/− mice, but not in Pcdh7+/+ and Pcdh7−/− mice, potentially explaining the association of PCDH7 with epilepsy. Conclusions: This initial characterization of Pcdh7 null mice suggests that, despite its widespread expression in the CNS and involvement in human epilepsy, PCDH7 is not essential for murine brain development and thus is not a suitable animal model for understanding PCDH7 disruption in humans. However, further detailed analysis of this mouse model may reveal circuit or synaptic abnormalities in Pcdh7 null brains. Full article
(This article belongs to the Special Issue The Genetic and Epigenetic Basis of Neurodevelopmental Disorders)
15 pages, 1141 KiB  
Article
Enhanced Transdermal Delivery of Lidocaine Hydrochloride via Dissolvable Microneedles (LH-DMNs) for Rapid Local Anesthesia
by Shengtai Bian, Jie Chen, Ran Chen, Shilun Feng and Zizhen Ming
Biosensors 2025, 15(8), 552; https://doi.org/10.3390/bios15080552 - 21 Aug 2025
Abstract
Microneedles represent an emerging transdermal drug delivery platform offering painless, minimally invasive penetration of the stratum corneum. This study addresses limitations of conventional lidocaine hydrochloride formulations, such as slow onset and poor patient compliance, by developing lidocaine hydrochloride-loaded dissolvable microneedles (LH-DMNs) for rapid local anesthesia. LH-DMNs were fabricated via centrifugal casting using polyvinyl alcohol (PVA) as the matrix material in polydimethylsiloxane (PDMS) negative molds, which imparts high mechanical strength to the microneedles. Biocompatibility assessments showed negligible skin irritation, resolving within 3 min, and the drug-loading capacity reached 24.0 ± 2.84 mg per patch. Pharmacodynamic evaluation via mouse hot plate tests demonstrated significant analgesia, increasing paw withdrawal latency to 36.11 ± 1.62 s at 5 min post-application (p < 0.01). The results demonstrated that the LH-DMNs significantly elevated the pain threshold in mice within 5 min, surpassing the efficacy of conventional anesthetic gels and providing a rapid and effective solution for pain relief. These findings validate the system’s rapid drug release and efficacy, positioning dissolvable microneedles as a clinically viable alternative for enhanced transdermal anesthesia. Full article
(This article belongs to the Special Issue Advanced Microfluidic Devices and MEMS in Biosensing Applications)
14 pages, 730 KiB  
Article
A Configurable Parallel Architecture for Singular Value Decomposition of Correlation Matrices
by Luis E. López-López, David Luviano-Cruz, Juan Cota-Ruiz, Jose Díaz-Roman, Ernesto Sifuentes, Jesús M. Silva-Aceves and Francisco J. Enríquez-Aguilera
Electronics 2025, 14(16), 3321; https://doi.org/10.3390/electronics14163321 - 21 Aug 2025
Abstract
Singular value decomposition (SVD) plays a critical role in signal processing, image analysis, and particularly in MIMO channel estimation, where it enables spatial multiplexing and interference mitigation. This study presents a configurable parallel architecture for computing SVD on 4 × 4 and 8 × 8 correlation matrices using the Jacobi algorithm with Givens rotations, optimized via CORDIC. Exploiting algorithmic parallelism, the design achieves low-latency performance on a Virtex-5 FPGA, with processing times of 5.29 µs and 24.25 µs, respectively, while maintaining high precision and efficient resource usage. These results confirm the architecture’s suitability for real-time wireless systems with strict latency demands, such as those defined by the IEEE 802.11n standard. Full article
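For readers who want the underlying numerics in software form, the sketch below runs cyclic Jacobi diagonalisation with Givens rotations on a small correlation matrix; because a correlation matrix is symmetric positive semi-definite, the values it produces coincide with the singular values. This is plain floating-point NumPy, with no CORDIC or fixed-point arithmetic, and the matrix size and sweep limit are illustrative.

```python
# Software sketch of Jacobi SVD for a symmetric (correlation) matrix via Givens rotations.
import numpy as np

def jacobi_svd_sym(A, sweeps=30, tol=1e-12):
    """Diagonalise a symmetric matrix with cyclic Jacobi (Givens) rotations."""
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(sweeps):
        if np.sqrt(np.sum(np.tril(A, -1) ** 2)) < tol:    # off-diagonal norm
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < tol:
                    continue
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                G = np.eye(n)
                G[p, p] = G[q, q] = c
                G[p, q], G[q, p] = s, -s        # Givens rotation in the (p, q) plane
                A = G.T @ A @ G                  # annihilates A[p, q]
                V = V @ G
    return np.abs(np.diag(A)), V                 # singular values and singular vectors

X = np.random.default_rng(2).standard_normal((100, 4))
R = np.corrcoef(X, rowvar=False)                  # 4 x 4 correlation matrix
sigma, _ = jacobi_svd_sym(R)
print(np.sort(sigma)[::-1], np.linalg.svd(R, compute_uv=False))   # the two should agree
```

Per the abstract, the FPGA architecture exploits the fact that disjoint (p, q) rotations can be applied in parallel, with CORDIC replacing the explicit sine and cosine evaluations used here.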
21 pages, 2309 KiB  
Review
A Comprehensive Review of Satellite Orbital Placement and Coverage Optimization for Low Earth Orbit Satellite Networks: Challenges and Solutions
by Adel A. Ahmed
Network 2025, 5(3), 32; https://doi.org/10.3390/network5030032 - 20 Aug 2025
Abstract
Nowadays, internet connectivity suffers from instability and slowness due to optical fiber cable attacks across the seas and oceans. The optimal solution to this problem is using the Low Earth Orbit (LEO) satellite network, which can resolve the problem of internet connectivity and reachability, and it has the power to bring real-time, reliable, low-latency, high-bandwidth, cost-effective internet access to many urban and rural areas in any region of the Earth. However, satellite orbital placement (SOP) and navigation should be carefully designed to reduce signal impairments. The challenges of orbital satellite placement for LEO include constellation development, satellite parameter optimization, bandwidth optimization, consideration of signal impairment, and coverage optimization. This paper presents a comprehensive review of SOP and coverage optimization, examines prevalent issues affecting LEO internet connectivity, evaluates existing solutions, and proposes novel solutions to address these challenges. Furthermore, it recommends a machine learning solution for coverage optimization and SOP that can be used to efficiently enhance internet reliability and reachability for LEO satellite networks. This survey will open the gate for developing an optimal solution for global internet connectivity and reachability. Full article
32 pages, 2542 KiB  
Article
ECR-MobileNet: An Imbalanced Largemouth Bass Parameter Prediction Model with Adaptive Contrastive Regression and Dependency-Graph Pruning
by Hao Peng, Cheng Ouyang, Lin Yang, Jingtao Deng, Mingyu Tan, Yahui Luo, Wenwu Hu, Pin Jiang and Yi Wang
Animals 2025, 15(16), 2443; https://doi.org/10.3390/ani15162443 - 20 Aug 2025
Abstract
The precise, non-destructive monitoring of fish length and weight is a core technology for advancing intelligent aquaculture. However, this field faces dual challenges: traditional contact-based measurements induce stress and yield loss. In addition, existing computer vision methods are hindered by prediction biases from imbalanced data and the deployment bottleneck of balancing high accuracy with model lightweighting. This study aims to overcome these challenges by developing an efficient and robust deep learning framework. We propose ECR-MobileNet, a lightweight framework built on MobileNetV3-Small. It features three key innovations: an efficient channel attention (ECA) module to enhance feature discriminability, an original adaptive multi-scale contrastive regression (AMCR) loss function that extends contrastive learning to multi-dimensional regression for length and weight simultaneously to mitigate data imbalance, and a dependency-graph-based (DepGraph) structured pruning technique that synergistically optimizes model size and performance. On our multi-scene largemouth bass dataset, the pruned ECR-MobileNet-P model comprehensively outperformed 14 mainstream benchmarks. It achieved an R2 of 0.9784 and a root mean square error (RMSE) of 0.4296 cm for length prediction, as well as an R2 of 0.9740 and an RMSE of 0.0202 kg for weight prediction. The model’s parameter count is only 0.52 M, with a computational load of 0.07 giga floating-point operations per second (GFLOPs) and a CPU latency of 10.19 ms, achieving Pareto optimality. This study provides an edge-deployable solution for stress-free biometric monitoring in aquaculture and establishes an innovative methodological paradigm for imbalanced regression and task-oriented model compression. Full article
(This article belongs to the Section Aquatic Animals)
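As one concrete piece of the described pipeline, the snippet below shows a generic efficient channel attention (ECA) block in PyTorch: global average pooling followed by a small 1-D convolution across channels and a sigmoid gate. This is the standard ECA formulation, not necessarily the exact variant used in ECR-MobileNet, and the kernel size of 3 is an assumption.

```python
# Generic ECA block sketch (kernel size assumed); not the ECR-MobileNet source.
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Channel attention via a 1-D convolution over channel descriptors."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                  # global average pool -> (B, C)
        y = self.conv(y.unsqueeze(1))           # local cross-channel interaction
        w = torch.sigmoid(y).squeeze(1)         # per-channel weights in (0, 1)
        return x * w.unsqueeze(-1).unsqueeze(-1)

feat = torch.randn(2, 64, 16, 16)
print(ECA()(feat).shape)                         # torch.Size([2, 64, 16, 16])
```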
31 pages, 433 KiB  
Review
A Comprehensive Survey of 6G Simulators: Comparison, Integration, and Future Directions
by Evgeniya Evgenieva, Atanas Vlahov, Antoni Ivanov, Vladimir Poulkov and Agata Manolova
Electronics 2025, 14(16), 3313; https://doi.org/10.3390/electronics14163313 - 20 Aug 2025
Abstract
Modern wireless networks are rapidly advancing through research into novel applications that push the boundaries of information and communication systems to satisfy the increasing user demand. To facilitate this process, the development of communication network simulators is necessary due to the high cost and difficulty of real-world testing, with many new simulation tools having emerged in recent years. This paper surveys the latest developments in simulators that support Sixth-Generation (6G) technologies, which aim to surpass the current wireless standards by delivering Artificial Intelligence (AI) empowered networks with ultra-low latency, terabit-per-second data rates, high mobility, and extended reality. Novel features such as Reconfigurable Intelligent Surfaces (RISs), Open Radio Access Network (O-RAN), and Integrated Space–Terrestrial Networks (ISTNs) need to be integrated into the simulation environment. The reviewed simulators and emulators are classified as general-purpose or specialized and are further grouped into link-level, system-level, and network-level categories. They are then compared based on scalability, computational efficiency, and 6G-specific technological considerations, with specific emphasis on open-source solutions as they are growing in prominence. The study highlights the strengths and limitations of the reviewed simulators, as well as the use cases in which they are applied, offering insights into their suitability for 6G system design. Based on the review, the challenges and future directions for simulators’ development are described, aiming to facilitate the accurate and effective modeling of future communication networks. Full article
(This article belongs to the Special Issue 6G and Beyond: Architectures, Challenges, and Opportunities)