Search Results (2,727)

Search Parameters:
Keywords = low-latency

21 pages, 299 KB  
Article
Multi-Objective Evaluation of Lightweight AI Models on Low-Cost Edge Devices Using Pareto Fronts
by Patricio Rojas-Carrasco and Maria Guinaldo
Appl. Sci. 2026, 16(8), 3679; https://doi.org/10.3390/app16083679 - 9 Apr 2026
Abstract
Deploying artificial intelligence models on low-cost edge devices requires balancing predictive accuracy with strict constraints on computational resources, such as inference latency and memory footprint. Despite growing interest in TinyML systems, limited empirical evidence exists on how these factors interact across different embedded hardware platforms. This study presents a systematic multi-objective evaluation of three lightweight AI architectures—multinomial logistic regression (MLR), multilayer perceptron (MLP), and a reduced convolutional neural network (CNN)—implemented natively on three representative platforms: ESP32-S3, Raspberry Pi Pico, and Raspberry Pi Zero W. The models were evaluated on three image classification datasets of increasing complexity (Synthetic Geometric Figures, MNIST, and Fashion-MNIST), measuring classification accuracy, inference latency, and peak memory footprint under real execution conditions. Pareto-front analysis was used to identify efficient model–platform configurations and characterize the trade-offs between predictive performance and computational resources. The results provide quantitative insight into accuracy–resource trade-offs and establish a reproducible framework for evaluating lightweight AI models on resource-constrained edge devices, supporting informed design decisions in TinyML applications. Full article
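As a rough illustration of the Pareto-front analysis described above, the sketch below keeps only the model–platform configurations that are not dominated on all three objectives (accuracy up, latency and memory down). The sample measurements are hypothetical, not values from the paper.

```python
def dominates(a, b):
    """True if config a is at least as good as b on every objective
    and strictly better on at least one (accuracy up, latency/memory down)."""
    ge = a["acc"] >= b["acc"] and a["lat"] <= b["lat"] and a["mem"] <= b["mem"]
    gt = a["acc"] > b["acc"] or a["lat"] < b["lat"] or a["mem"] < b["mem"]
    return ge and gt

def pareto_front(configs):
    """Keep only configurations not dominated by any other."""
    return [c for c in configs
            if not any(dominates(o, c) for o in configs if o is not c)]

# Hypothetical model-platform measurements (not from the paper).
configs = [
    {"name": "MLR/Pico",  "acc": 0.89, "lat": 12.0,  "mem": 40},
    {"name": "MLP/ESP32", "acc": 0.94, "lat": 35.0,  "mem": 110},
    {"name": "CNN/ZeroW", "acc": 0.96, "lat": 140.0, "mem": 900},
    {"name": "MLP/ZeroW", "acc": 0.93, "lat": 60.0,  "mem": 300},  # dominated
]
front = pareto_front(configs)
print([c["name"] for c in front])  # MLP/ZeroW drops out; the rest remain
```

Each surviving configuration represents a distinct accuracy–resource trade-off a designer might pick depending on the deployment constraints.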
24 pages, 955 KB  
Systematic Review
Telemedicine and 5G Technologies: A Systematic Global Review of Applications over the Past Decade
by Alessandra Franco, Francesca Angelone, Danilo Calderone, Alfonso Maria Ponsiglione, Maria Romano, Carlo Ricciardi and Francesco Amato
Bioengineering 2026, 13(4), 438; https://doi.org/10.3390/bioengineering13040438 - 8 Apr 2026
Abstract
This systematic review analyzes how the introduction and progressive deployment of 5G networks have influenced the evolution of telemedicine between 2014 and 2024, focusing on their impact on performance, accessibility, and the feasibility of advanced clinical applications across the pre-COVID-19, COVID-19, and post-COVID-19 periods. The review was conducted in accordance with PRISMA guidelines and included publications retrieved from SCOPUS, PubMed, and Web of Science using a PICO-based search strategy. Studies were selected based on predefined inclusion and exclusion criteria, and extracted data included clinical parameters, network characteristics such as bandwidth and latency, geographic setting, and type of telemedicine service. A total of 45 studies met the inclusion criteria, with most published between 2020 and 2024. The most frequently reported applications were telediagnosis, particularly robotic ultrasound, followed by telesurgery and teleconsultation. The low latency enabled by 5G networks supported complex telesurgical procedures over distances exceeding 5000 km, while in ultra-remote areas, hybrid solutions combining 5G and fiber-optic networks were often adopted to ensure stable connections. The integration of robotic platforms and AI-based tools further enhanced the precision and reliability of remote procedures. Overall, 5G technology has significantly advanced telemedicine by enabling real-time, high-quality care over long distances, improving access to specialist services and supporting more equitable and efficient digital healthcare delivery, particularly in underserved regions. Full article
(This article belongs to the Section Biosignal Processing)

23 pages, 995 KB  
Article
Eye-Tracking Response Modeling and Design Optimization Method for Smart Home Interface Based on Transformer Attention Mechanism
by Yanping Lu and Myun Kim
Electronics 2026, 15(8), 1562; https://doi.org/10.3390/electronics15081562 - 8 Apr 2026
Abstract
In response to the redundant spatio-temporal modeling and insufficient adaptation to dynamic decision-making in eye-tracking interaction of smart home interfaces, a smart home interface eye-tracking response optimization model based on spatio-temporal Transformer and gate control cross-attention is proposed. It adapts the physiological characteristics of eye-tracking jumps through dynamic sparse attention gating to compress computational redundancy and combines multi-objective reinforcement learning attention modulation to construct a closed-loop decision-making mechanism, optimizing interface parameters in real-time. Experiments showed that the model reduced eye-tracking trajectory prediction error by 23.7% compared to advanced benchmarks, increased the success rate of adapting to dynamic mutation scenarios to 89.2%, and controlled performance fluctuations within 2.3% under noise interference. In high-fidelity user testing, the accuracy of cross-task gaze transfer reached 93.4%, the failure rate of glare interference was optimized to 2.4%, and the user cognitive load index was reduced by 27.9%. Its resource consumption and energy consumption were reduced by 26.7% and 44.9%, respectively, while its posture deviation tolerance remained at 3.5°. The sparse spatio-temporal modeling of the spatio-temporal adaptive Transformer module and the enhanced gating mechanism of the hierarchical gated cross-attention module work together to break through the limitations of traditional methods in computational efficiency and dynamic feedback, providing high-precision and low-latency eye-tracking interaction solutions for smart home interface systems, and promoting the practical evolution of personalized human–machine collaborative control. Full article
31 pages, 2475 KB  
Article
Fuzzy-Logic Workload Orchestration Framework for Smart Campuses in Edge-Cloud System Architecture
by Abdullah Fawaz Aljulayfi
Electronics 2026, 15(8), 1556; https://doi.org/10.3390/electronics15081556 - 8 Apr 2026
Abstract
Transforming a conventional university campus into a smart campus by leveraging modern technologies aims to deliver university services efficiently, effectively, and at low cost. Modern technologies enhance campus life by providing services, such as smart classrooms and campus security, on demand. Seamless service delivery requires reliable and efficient access to the services that take into consideration the dynamic contextual attributes related to, e.g., end-device mobility, latency sensitivity, and resource constraints. University staff, students, and visitors frequently submit different types of service requests on the move, which requires a robust orchestration framework capable of managing these requests across edge-cloud environments. The orchestration framework needs to intelligently distribute the workload, taking into consideration the latency sensitivity requirements and contextual conditions, including resource constraints. Therefore, a fuzzy-logic orchestration framework for smart-campus environments in edge-cloud architecture is proposed. The framework incorporates key factors, including user speed, resource utilization, and request delay sensitivity, in the decision-making process to satisfy both service consumers and service providers. It prioritizes latency-sensitive requests while simultaneously enhancing resource utilization efficiency. Simulation-based experimental results demonstrate the effectiveness of the proposed framework compared with benchmark approaches in orchestrating incoming workloads under several user and contextual conditions. Additionally, the results show that the proposed framework improves the execution rate by 30% compared to benchmark models and achieves more than double resource utilization efficiency. Full article

23 pages, 4282 KB  
Article
FPGA-Accelerated Machine Learning for Computational Environmental Information Processing in IoT-Integrated High-Density Nanosensor Networks
by Alaa Kamal Yousif Dafhalla, Fawzia Awad Elhassan Ali, Asma Ibrahim Gamar Eldeen, Ikhlas Saad Ahmed, Ameni Filali, Amel Mohamed essaket Zahou, Amal Abdallah AlShaer, Suhier Bashir Ahmed Elfaki, Rabaa Mohammed Eltayeb and Tijjani Adam
Information 2026, 17(4), 354; https://doi.org/10.3390/info17040354 - 8 Apr 2026
Abstract
This study presents a nanosensor network system for autonomous microclimate optimization in precision horticulture, leveraging a field-programmable gate array (FPGA)-based control architecture integrated with edge-level machine learning inference. Unlike conventional greenhouse automation systems, which exhibit thermal and hygroscopic hysteresis often exceeding 32 °C and 78% relative humidity, the proposed framework embeds a random forest regression (RFR) model directly within the Altera DE2-115 FPGA fabric to enable predictive environmental regulation. The model achieved an R2 of 0.985 and a root mean square error (RMSE) of 0.28 °C, allowing proactive compensation for thermodynamic disturbances from high-intensity light-emitting diode (LED) lighting with a 120 s predictive horizon. Real-time monitoring and remote supervision were supported via a NodeMCU-based IoT gateway, achieving a 140 ms mean communication latency and 99.8% packet delivery reliability. Preliminary validation using lettuce (Lactuca sativa) optimized the environmental parameters, while subsequent experiments with pepper (Capsicum annuum), a commercially important and environmentally sensitive crop, demonstrated system performance under real-world conditions. The control system maintained temperature and humidity within ±0.3 °C and ±1.2% of the setpoints, respectively, and outperformed the baseline rule-based control with a 28% increase in fresh biomass, a 22% improvement in dry matter accumulation, a 25% reduction in actuator duty-cycle switching, and an 18% decrease in overall energy consumption. These results highlight the efficacy of FPGA-integrated edge intelligence combined with low-latency IoT telemetry as a scalable, energy-efficient, and high-fidelity solution for sub-degree environmental control in next-generation controlled-environment and vertical farming systems. Full article

19 pages, 1473 KB  
Article
Key Updatable Cross-Domain-Message Anonymous Authentication Scheme Based on Dual-Chain for VANET
by Mei Sun, Dongbing Zhang, Yuyan Guo and Xudong Zhai
Electronics 2026, 15(7), 1541; https://doi.org/10.3390/electronics15071541 - 7 Apr 2026
Abstract
Traditional VANET authentication schemes often face challenges such as centralization bottlenecks and the updating of vehicle keys or pseudonyms. This paper proposes a layered approach that divides VANET into regions, utilizing dual-blockchain to enable anonymous message authentication between vehicles and RSUs, as well as between vehicles within the VANET. Compared to traditional blockchain authentication methods, this paper introduces an approach that enhances authentication efficiency and ensures information security by establishing secure connections between private and consortium chains through a trusted authority (TA). By leveraging third-party public parameter updates, the automatic updating of private and public keys for VANET nodes is achieved without the need for certificate issuance and updates. This approach facilitates long-term anonymous authentication and communication between VANET nodes, reduces the frequency of authentication interactions, simplifies authentication processes, and lowers computational and communication costs. The proposed scheme is well-suited for practical VANET environments that require low authentication latency and robust large-scale privacy protection. Full article

27 pages, 1073 KB  
Article
An MMSE-Optimized Pre-Rake Receiver with a Comparative Analysis of Channel Estimation Methods for Multipath Channels
by Aoba Morimoto, Jaesang Cha, Incheol Jeong and Chang-Jun Ahn
Electronics 2026, 15(7), 1540; https://doi.org/10.3390/electronics15071540 - 7 Apr 2026
Abstract
In Time Division Duplex (TDD) Direct-Sequence Code Division Multiple Access (DS/CDMA) architectures, Pre-Rake filtering serves as a powerful transmitter-side strategy to alleviate receiver hardware constraints by leveraging channel reciprocity. Nevertheless, rapid channel fluctuations induced by high Doppler spreads critically undermine this reciprocity assumption. This failure is primarily driven by the unavoidable latency between uplink reception and downlink transmission, leading to severe performance deterioration. To address these challenges and enhance system robustness in modern high-speed scenarios, we propose an improved hybrid transceiver architecture. This scheme integrates multiplexed Pre-Rake processing with a Matched Filter-based Rake receiver and employs a Minimum Mean Square Error (MMSE) equalizer to suppress the severe Inter-Symbol Interference (ISI) and Multi-User Interference (MUI). Furthermore, we conduct a comparative analysis of channel estimation methods tailored for a 10 Mbps high-speed transmission environment. Our investigation reveals that while complex quadratic interpolation is often prioritized in low-data-rate studies, simple averaging is sufficient and even superior in high-speed communications. This is because the shortened slot duration allows simple averaging to effectively track channel variations while avoiding the noise overfitting associated with higher-order interpolation. The simulation results demonstrate that the proposed MMSE-optimized architecture achieves superior Bit Error Rate (BER) performance, providing a practical and computationally efficient solution for next-generation mobile networks. Full article
(This article belongs to the Section Microwave and Wireless Communications)
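A toy illustration of why simple averaging helps here: averaging the pilot-based channel estimates of one short slot reduces the estimation noise variance roughly in proportion to the pilot count. The channel tap, noise level, and pilot count below are assumptions for illustration, not the paper's simulation parameters.

```python
import random

random.seed(0)
h_true = 0.8 + 0.3j   # hypothetical complex channel tap (assumed value)
sigma = 0.2           # pilot noise std per real/imag dimension (assumed)

def noisy_estimate():
    """One pilot-based estimate of the tap, corrupted by complex AWGN."""
    return h_true + complex(random.gauss(0, sigma), random.gauss(0, sigma))

def trial(pilots_per_slot=8):
    """Compare a single-pilot estimate with the slot average."""
    pilots = [noisy_estimate() for _ in range(pilots_per_slot)]
    h_avg = sum(pilots) / len(pilots)
    return abs(pilots[0] - h_true) ** 2, abs(h_avg - h_true) ** 2

errs = [trial() for _ in range(500)]
single_mse = sum(e[0] for e in errs) / len(errs)
avg_mse = sum(e[1] for e in errs) / len(errs)
print(f"single-pilot MSE {single_mse:.4f} vs slot-average MSE {avg_mse:.4f}")
```

Within a slot short enough that the channel is nearly static, the averaged estimate is markedly less noisy, which matches the paper's argument that averaging beats higher-order interpolation at high data rates.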

25 pages, 1501 KB  
Article
MA-JTATO: Multi-Agent Joint Task Association and Trajectory Optimization in UAV-Assisted Edge Computing System
by Yunxi Zhang and Zhigang Wen
Drones 2026, 10(4), 267; https://doi.org/10.3390/drones10040267 - 7 Apr 2026
Abstract
With the rapid development of applications such as smart cities and the industrial internet, the computation-intensive tasks generated by massive sensing devices pose significant challenges to traditional cloud computing paradigms. Unmanned aerial vehicle (UAV)-assisted edge computing systems, leveraging their high mobility and wide-area coverage capabilities, offer an innovative architecture for low-latency and highly reliable edge services. However, the practical deployment of such systems faces a highly complex multi-objective optimization problem featured by the tight coupling of task offloading decisions, UAV trajectory planning, and edge server resource allocation. Conventional optimization methods are difficult to adapt to the dynamic and high-dimensional characteristics of this problem, leading to suboptimal system performance. To address this critical challenge, this paper constructs an intelligent collaborative optimization framework for UAV-assisted edge computing systems and formulates the system quality of service (QoS) optimization problem as a mixed-integer non-convex programming problem with the dual objectives of minimizing task processing latency and reducing overall system energy consumption. A multi-agent joint task association and trajectory optimization (MA-JTATO) algorithm based on hybrid reinforcement learning is proposed to solve this intractable problem, which innovatively decouples the original coupled optimization problem into three interrelated subproblems and realizes their collaborative and efficient solution. 
Specifically, the Advantage Actor-Critic (A2C) algorithm is adopted to realize dynamic and optimal task association between UAVs and edge servers for discrete decision-making requirements; the multi-agent deep deterministic policy gradient (MADDPG) method is employed to achieve cooperative and energy-efficient trajectory planning for multiple UAVs to meet the needs of continuous control in dynamic environments; and convex optimization theory is applied to obtain a closed-form optimal solution for the efficient allocation of computational resources on edge servers. Simulation results demonstrate that the proposed MA-JTATO algorithm significantly outperforms traditional baseline algorithms in enhancing overall QoS, effectively validating the framework’s superior performance and robustness in dynamic and complex scenarios. Full article
(This article belongs to the Section Drone Communications)

20 pages, 1234 KB  
Article
Lightweight Real-Time Navigation for Autonomous Driving Using TinyML and Few-Shot Learning
by Wajahat Ali, Arshad Iqbal, Abdul Wadood, Herie Park and Byung O Kang
Sensors 2026, 26(7), 2271; https://doi.org/10.3390/s26072271 - 7 Apr 2026
Abstract
Autonomous vehicle navigation requires low-latency and energy-efficient machine learning models capable of operating in dynamic and resource-constrained environments. Conventional deep learning approaches are often unsuitable for real-time deployment on embedded edge devices due to their high computational and memory demands. In this work, we propose a unified TinyML-optimized navigation framework that integrates a lightweight convolutional feature extractor (MobileNetV2) with a metric-based few-shot learning classifier to enable rapid adaptation to unseen driving scenarios with minimal data. The proposed framework jointly combines feature extraction, few-shot generalization, and edge-aware optimization into a single end-to-end pipeline designed specifically for real-time autonomous decision-making. Furthermore, post-training quantization and structured pruning are employed to significantly reduce the memory footprint and inference latency while preserving the classification performance. Experimental results demonstrate that the proposed model achieved a 93.4% accuracy on previously unseen road conditions, with an average inference latency of 68 ms and a memory usage of 18 MB, outperforming traditional CNN and LSTM models in efficiency while maintaining a competitive predictive performance. These results highlight the effectiveness of the proposed approach in enabling scalable, real-time navigation on low-power edge devices. Full article
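A minimal sketch of the metric-based few-shot step: class prototypes are the mean embeddings of a handful of support examples, and a query is assigned to the nearest prototype. The 2-D feature vectors and class names below are invented for illustration; in the paper they would come from the MobileNetV2 feature extractor.

```python
from math import dist

def prototypes(support):
    """Mean feature vector per class from a few labeled support examples."""
    protos = {}
    for label, feats in support.items():
        dim = len(feats[0])
        protos[label] = tuple(sum(f[i] for f in feats) / len(feats)
                              for i in range(dim))
    return protos

def classify(query, protos):
    """Assign the query to the nearest class prototype (Euclidean metric)."""
    return min(protos, key=lambda lbl: dist(query, protos[lbl]))

# Hypothetical embedded feature vectors (not from the paper).
support = {
    "clear_road": [(0.9, 0.1), (0.8, 0.2)],
    "obstacle":   [(0.1, 0.9), (0.2, 0.8)],
}
protos = prototypes(support)
print(classify((0.85, 0.15), protos))  # -> clear_road
```

Because only the prototypes need updating, adapting to a new scenario requires just a few labeled examples rather than retraining the backbone, which is what makes the scheme attractive on constrained edge hardware.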

18 pages, 835 KB  
Article
Prism-Based Mapping of 6G Use Cases Integrating Technical Requirements and Multidimensional Service Classification
by Sunhye Kim, Yoon Seo, Seung-Hoon Hwang and Byungun Yoon
Systems 2026, 14(4), 404; https://doi.org/10.3390/systems14040404 - 7 Apr 2026
Abstract
Purpose: With the advent of sixth-generation (6G) communication technology, systematic mapping of its use cases to associated technical requirements has become essential for accelerating standardization, guiding R&D investment, and informing policy formulation. Methods: This study consolidated 65 use case scenarios from key academic and institutional 6G sources into 21 representative cases. A three-round Delphi-based expert assessment, employing a five-point Likert scale and interquartile-range-based consensus monitoring, was used to assign primary and secondary technical requirements across six core dimensions: immersive communication, massive communication, hyper-reliable low-latency communication, integrated sensing and communication, integrated artificial intelligence and communication (IAAC), and ubiquitous connectivity. A three-dimensional (3D) prism-based visualization framework was subsequently developed to represent the interdependencies among these requirements. Results: IAAC and massive communication emerged as the most critical requirements, each functioning as a primary or secondary driver across most use cases. The prism framework revealed hierarchical and complementary relationships among the six dimensions that conventional 2D wheel diagrams cannot adequately capture. Furthermore, a nine-criterion multidimensional classification framework, encompassing data transmission mode, decision-making mode, communication flow, interaction type, device type, deployment type, human activity innovation, user type, and personalization level, was developed, offering industry-specific guidance for service design. Collectively, the proposed framework supports user-centric design, informs strategic technology planning, and contributes to policy development while acknowledging existing limitations in quantitative mapping and economic analysis. Full article
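The interquartile-range-based consensus monitoring mentioned above can be sketched in a few lines. The IQR ≤ 1 rule is a common Delphi convention, and the sample panel ratings are invented; the study's actual threshold and data may differ.

```python
from statistics import quantiles

def iqr(ratings):
    """Interquartile range of five-point Likert ratings."""
    q1, _, q3 = quantiles(ratings, n=4)
    return q3 - q1

def has_consensus(ratings, threshold=1.0):
    """A common Delphi rule: consensus when IQR <= 1 on a 5-point scale.
    (The exact threshold used in the study is an assumption here.)"""
    return iqr(ratings) <= threshold

print(has_consensus([4, 4, 5, 4, 5, 4]))  # tight panel  -> True
print(has_consensus([1, 3, 5, 2, 4, 5]))  # spread panel -> False
```

Items failing the rule would be fed back to the panel for another Delphi round, which is how the three-round procedure converges on the primary and secondary requirement assignments.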

26 pages, 5479 KB  
Article
Regional and Temporal Patterns of Long-Term Pseudorabies Virus Detection and Neuropathology in the Murine CNS
by Viktoria Korff, Issam El-Debs, Barbara G. Klupp, Conrad M. Freuling, Jens P. Teifke, Thomas C. Mettenleiter and Julia Sehl-Ewert
Pathogens 2026, 15(4), 395; https://doi.org/10.3390/pathogens15040395 - 7 Apr 2026
Abstract
Alphaherpesviruses, including Herpes Simplex Virus 1 (HSV-1) and Pseudorabies Virus (PrV), establish lifelong latency in the nervous system and can cause recurrent disease. While latency has classically been attributed to peripheral sensory ganglia, accumulating evidence indicates that the central nervous system (CNS) may also serve as a site of long-term viral persistence and reactivation. Here, we investigated the CNS as a viral reservoir using the attenuated mutant PrV-∆UL21/US3∆kin, which preferentially targets mesiotemporal brain regions. Following intranasal inoculation, mice were analyzed at 11–14, 21, 28, 42, 105, and 190 days post-infection (dpi). To assess the reactivation potential, a subset of animals received cyclophosphamide/dexamethasone at 170 dpi. Viral transcripts were detected by RNAscope™ in situ hybridization and RT-qPCR targeting the lytic gene UL19 encoding the major capsid protein and the latency-associated transcript (LAT). Histopathology included hematoxylin and eosin staining and immunohistochemistry for CD3, Iba1, GFAP, cleaved caspase-3 and viral glycoprotein gB. UL19 RNA signals displayed marked regional and temporal heterogeneity, with prominent detection in mesiotemporal structures. In contrast, LAT RNA levels remained low overall, with a transient peak during the acute phase. RT-qPCR confirmed high UL19 and LAT transcript levels during early infection, while LAT transcription returned to baseline levels thereafter. Histopathology showed a transition from acute necrotizing meningoencephalitis to prolonged low-grade inflammation with glial activation and focal apoptosis. Notably, UL19 RNA signals strongly correlated with T-cell infiltration, particularly at 42 dpi. Together, these findings define regional and temporal patterns of long-term PrV transcriptional activity and associated neuropathology in the murine CNS. Full article
(This article belongs to the Section Viral Pathogens)

27 pages, 1222 KB  
Article
Query-Adaptive Hybrid Search
by Pavel Posokhov, Stepan Skrylnikov, Sergei Masliukhin, Alina Zavgorodniaia, Olesia Koroteeva and Yuri Matveev
Mach. Learn. Knowl. Extr. 2026, 8(4), 91; https://doi.org/10.3390/make8040091 - 5 Apr 2026
Abstract
The modern information retrieval field increasingly relies on hybrid search systems combining sparse retrieval with dense neural models. However, most existing hybrid frameworks employ static mixing coefficients and independent component training, failing to account for the specific needs of individual queries and corpus heterogeneity. In this paper, we introduce an adaptive hybrid retrieval framework featuring query-driven alpha prediction that dynamically calibrates the mixing weights based on query latent representations instantiated in a lightweight low-latency configuration and a full-capacity encoder-scale predictor, enabling flexible trade-offs between computational efficiency and retrieval accuracy without relying on resource-inefficient LLM-based online evaluation. Furthermore, we propose antagonist negative sampling, a novel training paradigm that optimizes the dense encoder to resolve the systematic failures of the lexical retriever, prioritizing hard negatives where BM25 exhibits high uncertainty. Empirical evaluations on large-scale multilingual benchmarks (MLDR and MIRACL) indicate that our approach demonstrates superior average performance compared to state-of-the-art models such as BGE-M3 and mGTE, achieving an nDCG@10 of 74.3 on long-document retrieval. Notably, our framework recovers up to 92.5% of the theoretical oracle performance and yields significant improvements in nDCG@10 across 16 languages, particularly in challenging long-context scenarios. Full article
(This article belongs to the Special Issue Trustworthy AI: Integrating Knowledge, Retrieval, and Reasoning)
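The query-driven mixing described above can be sketched as a convex blend of normalized sparse (BM25) and dense scores. Here `alpha` stands in for the paper's learned per-query predictor, and the document scores are made up for illustration.

```python
def normalize(scores):
    """Min-max normalize so sparse and dense scores are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def hybrid_rank(sparse, dense, alpha):
    """Blend per-document scores; alpha weights the dense retriever."""
    s, d = normalize(sparse), normalize(dense)
    fused = {doc: alpha * d[doc] + (1 - alpha) * s[doc] for doc in s}
    return sorted(fused, key=fused.get, reverse=True)

# Hypothetical BM25 and dense similarity scores for three documents.
sparse = {"doc1": 12.0, "doc2": 7.5, "doc3": 2.0}
dense  = {"doc1": 0.40, "doc2": 0.90, "doc3": 0.85}
print(hybrid_rank(sparse, dense, alpha=0.2))  # lexical-leaning query
print(hybrid_rank(sparse, dense, alpha=0.8))  # semantic-leaning query
```

The same corpus yields different rankings as `alpha` moves, which is the effect the query-adaptive predictor exploits: lexical queries get a low alpha, semantically ambiguous ones a high alpha.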

24 pages, 4159 KB  
Article
A UAV–Satellite Hybrid Pipeline for Wildfire Detection and Dynamic Perimeter Prediction
by Hossein Keshmiri and Khan A. Wahid
Drones 2026, 10(4), 263; https://doi.org/10.3390/drones10040263 - 4 Apr 2026
Abstract
Effective wildfire management demands seamless integration of real-time detection and long-term spread forecasting. This paper proposes a novel power-efficient UAV–satellite hybrid pipeline that synergizes the agility of UAVs with the scale of satellite intelligence. The system begins with a dashboard-guided, multi-UAV detection module that scores fire likelihood from historical satellite data and enables scalable, energy-efficient deployment with low-latency onboard processing. This aerial component ensures persistent surveillance and reliable ignition detection, supported by a Dual LoRa (Long Range) communication scheme for robust and low-power connectivity. It achieves an F1-score of 97.4% while minimizing power consumption to extend operational flight times. Following detection, the pipeline transitions to a dynamic perimeter-prediction phase utilizing a custom Canadian boreal dataset. We employ a Squeeze-and-Excitation Residual U-Net (SE-ResUNet) to model spatiotemporal fire propagation based on static terrain and dynamic environmental features. The model was validated using a dynamic simulation framework that evaluates temporal consistency and convergence behavior against final cumulative burned-area masks, effectively addressing the absence of daily ground truth. Under these conditions, the model achieves a recall of 84% and an AUC of 0.97, demonstrating a strong capability to delineate active fire fronts. By coupling dashboard-driven UAV sensing with satellite-based predictive modeling, this work establishes a modular, foundational framework to support data-scarce forecasting in modern wildfire management. Full article
(This article belongs to the Special Issue UAVs and UGVs Robotics for Emergency Response in a Changing Climate)
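The SE-ResUNet named in the abstract augments residual U-Net blocks with squeeze-and-excitation (SE) channel reweighting. A minimal NumPy sketch of the SE operation follows; the weights, channel count, and reduction ratio are illustrative placeholders, not the paper's trained model:

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """Squeeze-and-Excitation channel reweighting.

    x  : feature map, shape (C, H, W)
    w1 : (C//r, C) reduction weights; w2 : (C, C//r) expansion weights
    """
    # Squeeze: global average pool over spatial dimensions -> (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid gate in (0, 1)
    s = np.maximum(w1 @ z + b1, 0.0)
    g = 1.0 / (1.0 + np.exp(-(w2 @ s + b2)))
    # Scale: reweight each channel of the input by its gate value
    return x * g[:, None, None]

# Illustrative sizes: C=8 channels, reduction ratio r=4
rng = np.random.default_rng(0)
C, r = 8, 4
x = rng.standard_normal((C, 16, 16))
w1, b1 = rng.standard_normal((C // r, C)) * 0.1, np.zeros(C // r)
w2, b2 = rng.standard_normal((C, C // r)) * 0.1, np.zeros(C)
y = se_block(x, w1, b1, w2, b2)
```

Because the sigmoid gate is bounded, each channel is scaled by a factor strictly between 0 and 1, letting the network emphasize informative channels without changing the feature map's shape.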

27 pages, 1577 KB  
Article
An Intelligent Fuzzy Protocol with Automated Optimization for Energy-Efficient Electric Vehicle Communication in Vehicular Ad Hoc Network-Based Smart Transportation Systems
by Ghassan Samara, Ibrahim Obeidat, Mahmoud Odeh and Raed Alazaidah
World Electr. Veh. J. 2026, 17(4), 191; https://doi.org/10.3390/wevj17040191 - 4 Apr 2026
Abstract
Vehicular ad hoc networks (VANETs) operating in dense urban environments are characterized by highly dynamic topology, fluctuating traffic conditions, and stringent latency requirements, which significantly complicate reliable data routing and packet forwarding. To address these challenges, this paper proposes an Intelligent Fuzzy Protocol (IFP) for adaptive vehicle-to-vehicle data routing under uncertain and rapidly changing traffic scenarios. The proposed protocol integrates fuzzy logic decision making with the real-time vehicular context, including vehicle velocity, traffic congestion level, distance to road junctions, and data urgency, to dynamically select appropriate forwarding actions. IFP employs a structured fuzzy inference engine comprising fuzzification, rule evaluation, inference aggregation, and centroid-based defuzzification to determine routing and forwarding decisions in a decentralized manner. To further enhance performance robustness, the fuzzy membership parameters and rule weights are optimized using metaheuristic techniques, namely, genetic algorithms (GAs) and particle swarm optimization (PSO). Extensive simulations are conducted using NS-3 coupled with SUMO under realistic urban mobility scenarios and varying network densities. The simulation results demonstrate that IFP significantly outperforms conventional routing approaches in terms of end-to-end delay, packet delivery ratio, and routing overhead. In particular, the optimized IFP variants achieve notable reductions in latency and improvements in delivery reliability under high-congestion conditions, while maintaining low computational and communication overhead. These findings confirm that IFP offers an interpretable, scalable, and energy-aware routing solution suitable for large-scale intelligent transportation systems and next-generation vehicular networks.
(This article belongs to the Special Issue Power and Energy Systems for E-Mobility, 2nd Edition)
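The four-stage fuzzy pipeline the abstract describes (fuzzification, rule evaluation, aggregation, centroid defuzzification) can be sketched in a few lines. The triangular membership functions and the two Mamdani rules below are toy placeholders standing in for IFP's tuned rule base, not the protocol's actual parameters:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def forwarding_score(speed_kmh, congestion):
    """Toy two-input Mamdani inference returning a forwarding priority in [0, 1]."""
    y = np.linspace(0.0, 1.0, 201)  # output universe of discourse

    # 1) Fuzzification: membership degrees for each crisp input
    slow = tri(speed_kmh, -1, 0, 40)
    fast = tri(speed_kmh, 30, 80, 121)
    light = tri(congestion, -0.1, 0.0, 0.6)
    heavy = tri(congestion, 0.4, 1.0, 1.1)

    # 2) Rule evaluation: min as AND, clipping each rule's output set
    r1 = np.minimum(min(slow, light), tri(y, 0.5, 1.0, 1.5))   # slow & light -> high priority
    r2 = np.minimum(min(fast, heavy), tri(y, -0.5, 0.0, 0.5))  # fast & heavy -> low priority

    # 3) Aggregation (max) and 4) centroid defuzzification
    agg = np.maximum(r1, r2)
    return float((y * agg).sum() / (agg.sum() + 1e-9))
```

A slow vehicle in light traffic (e.g. `forwarding_score(20.0, 0.2)`) lands high on the priority scale, while a fast vehicle in heavy congestion (`forwarding_score(90.0, 0.9)`) lands low; the GA/PSO optimization the paper applies would tune the `tri` breakpoints and rule weights rather than this hand-picked set.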

23 pages, 2145 KB  
Article
Seeing Through Touch: A Stereo-Vision Vibrotactile Aid for Visually Impaired People
by Claudia Presicci, Giulia Ballardini, Giorgia Marchesi, Paolo Robutti, Matteo Moro, Camilla Pierella, Andrea Canessa and Maura Casadio
Electronics 2026, 15(7), 1511; https://doi.org/10.3390/electronics15071511 - 3 Apr 2026
Abstract
Blind and visually impaired individuals face persistent challenges when navigating unfamiliar environments, where unseen obstacles compromise their safety and independence. Although many electronic travel aids have been proposed, most remain impractical for daily use—they often rely on bulky or costly hardware, require external processing, or provide unintuitive feedback. This work presents a wearable stereo-vision-based vibrotactile system for real-time obstacle detection and navigation assistance. The device combines an off-the-shelf stereo camera integrated with a simultaneous localization and mapping framework to perceive spatial geometry and detect obstacles in the user’s path. Two stereo-matching methods were implemented to estimate depth: a block-based algorithm optimized for low-latency performance and a semi-global approach providing denser depth maps. Detected obstacles are translated into distinct vibration patterns delivered through four skin-contact body-mounted actuators encoding both direction and distance. The system was evaluated with blindfolded sighted, visually impaired, and blind participants. Both stereo approaches supported reliable real-time guidance and high obstacle-avoidance rates, demonstrating robust performance on affordable, wearable hardware. These findings confirm the feasibility of real-time tactile guidance using commercially available components, marking a concrete step toward accessible navigation support that enhances safety and autonomy for blind and visually impaired individuals.
(This article belongs to the Special Issue Feature Papers in Bioelectronics: 2025–2026 Edition)
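The low-latency path the abstract describes (block-based stereo matching, then distance-encoded vibration) can be illustrated with a minimal sum-of-absolute-differences matcher and a depth-to-intensity mapping. The patch size, focal length, baseline, and near/far thresholds below are assumed values for illustration, not the device's calibration:

```python
import numpy as np

def disparity_sad(left, right, y, x, patch=5, max_disp=32):
    """Block matching: find the horizontal shift minimizing the
    sum of absolute differences (SAD) between grayscale patches."""
    h = patch // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    best, best_d = np.inf, 0
    for d in range(min(max_disp, x - h) + 1):
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(float)
        cost = np.abs(ref - cand).sum()
        if cost < best:
            best, best_d = cost, d
    return best_d

def vibration_level(disp, focal_px=400.0, baseline_m=0.06,
                    near_m=0.5, far_m=3.0):
    """Map disparity -> metric depth (z = f * B / d) -> vibration intensity
    in [0, 1], where 1 = obstacle at or inside near_m, 0 = beyond far_m."""
    if disp == 0:
        return 0.0  # no measurable disparity: treat as far away
    depth = focal_px * baseline_m / disp
    return float(np.clip((far_m - depth) / (far_m - near_m), 0.0, 1.0))

# Synthetic rectified pair: right image is the left shifted by 24 px
rng = np.random.default_rng(1)
left = rng.integers(0, 255, (40, 80))
right = np.roll(left, -24, axis=1)
d = disparity_sad(left, right, y=20, x=40)   # recovers disparity 24
level = vibration_level(d)                   # ~1.0 m away -> strong vibration
```

The paper's semi-global variant replaces the per-pixel winner-take-all above with a smoothness-penalized cost aggregation, trading latency for denser depth maps.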
