Search Results (4,992)

Search Parameters:
Keywords = computational resource efficiency

18 pages, 16035 KB  
Article
An Optimized Dual-Path SGM System for Real-Time Stereo Matching on FPGA
by Yang Song, Hongyu Sun, Wenmin Song, Xiangpeng Wang and Fanqiang Lin
Electronics 2026, 15(8), 1549; https://doi.org/10.3390/electronics15081549 (registering DOI) - 8 Apr 2026
Abstract
Stereo matching constitutes a critical technology in applications such as autonomous driving and robot navigation. Conventional algorithms, however, often encounter limitations in real-time performance and resource efficiency when deployed on embedded platforms. This paper presents a real-time stereo matching system implemented on a Field-Programmable Gate Array (FPGA), which is built around a lightweight, hardware-optimized dual-path Semi-Global Matching (SGM) algorithm. The proposed method simplifies the traditional eight-path cost aggregation into horizontal and vertical dual-path aggregation, substantially reducing hardware resource consumption while preserving matching accuracy. The system employs a pipelined architecture that integrates image capture, DDR3 caching, and HDMI display output. Experimental results demonstrate that under the configuration of a 5 × 5 matching window and a disparity range of 64, the system generates stable disparity maps at 60 frames per second, with total power consumption below 2.2 W and FPGA core logic utilization under 30%. Compared to the conventional eight-path SGM, the dual-path strategy incurs only a marginal 6% increase in average bad pixel rate on standard stereo datasets while reducing Block RAM (BRAM) usage by approximately 30%. This achieves a practical balance between accuracy, computational efficiency, and power consumption.
(This article belongs to the Section Circuit and Signal Processing)
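The dual-path simplification described above can be sketched as a small NumPy reference model: the standard SGM aggregation recurrence is applied along one horizontal and one vertical scan direction and the two aggregated cost volumes are summed. The penalties `p1`/`p2`, the image size, and the random cost volume are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def aggregate_1d(cost, axis, p1=8, p2=32):
    """Aggregate a matching-cost volume along one scan direction (SGM recurrence).

    cost: (H, W, D) cost volume; axis 0 = top-to-bottom, axis 1 = left-to-right.
    p1/p2 are illustrative smoothness penalties.
    """
    vol = np.moveaxis(cost, axis, 0).astype(np.float32)        # (N, M, D)
    agg = vol.copy()
    for i in range(1, vol.shape[0]):
        prev = agg[i - 1]                                       # (M, D)
        prev_min = prev.min(axis=-1, keepdims=True)
        # candidates: same disparity, +/-1 disparity (+p1), any disparity (+p2)
        shift_p = np.pad(prev, ((0, 0), (1, 0)), constant_values=np.inf)[:, :-1]
        shift_m = np.pad(prev, ((0, 0), (0, 1)), constant_values=np.inf)[:, 1:]
        best = np.minimum.reduce([prev, shift_p + p1, shift_m + p1,
                                  np.broadcast_to(prev_min + p2, prev.shape)])
        agg[i] = vol[i] + best - prev_min                       # normalize growth
    return np.moveaxis(agg, 0, axis)

def dual_path_sgm(cost):
    """Dual-path aggregation: one horizontal plus one vertical pass, summed."""
    return aggregate_1d(cost, axis=1) + aggregate_1d(cost, axis=0)

rng = np.random.default_rng(0)
cost = rng.random((8, 8, 16)).astype(np.float32)    # toy 8x8 image, 16 disparities
disp = dual_path_sgm(cost).argmin(axis=-1)          # winner-takes-all disparity map
```

In hardware, restricting aggregation to these two paths is what removes the diagonal line buffers that dominate BRAM usage in the eight-path variant.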

34 pages, 5480 KB  
Article
Metaheuristic Optimization of Treated Sewage Wastewater Quality Parameters with Natural Coagulants
by Joseph K. Bwapwa and Jean G. Mukuna
Water 2026, 18(8), 885; https://doi.org/10.3390/w18080885 (registering DOI) - 8 Apr 2026
Abstract
This study presents a comprehensive multi-objective optimization of sewage wastewater treatment using bio-based coagulants, guided by the Grey Wolf Optimizer (GWO) and its multi-objective variant (MOGWO). Experimental coagulation data, employing Citrullus lanatus and Cucumis melo as natural coagulants, were modeled using multivariate regression techniques, yielding high coefficients of determination (R2 > 0.95) across key water quality parameters. The optimization process targeted maximal reductions in turbidity, total suspended solids (TSS), biochemical oxygen demand (BOD), and chemical oxygen demand (COD) through strategic manipulation of pH and coagulant dosage. The single-objective GWO achieved significant outcomes, including a 96.68% turbidity reduction at pH 5 and 50 mg/L dosage. The MOGWO algorithm identified Pareto-optimal solutions, such as a 94.2% turbidity reduction at pH 5 and 72 mg/L dosage, and a balanced BOD reduction of 52.7% at pH 7. The predictive models indicated that optimal treatment conditions could reduce chemical usage by up to 90% compared to conventional coagulants, resulting in potential cost savings of up to 30%. Moreover, the algorithms demonstrated rapid convergence, averaging 200 iterations, highlighting their computational efficiency and robustness. These findings illustrate that integrating bio-based coagulants with advanced optimization techniques can achieve high treatment efficiency while reducing chemical inputs, thus directly supporting environmental sustainability by minimizing sludge and secondary pollution. In this situation, the wastewater treatment plant will focus on resource-recovery systems with little or no waste at the end of the treatment process. This approach aligns with circular economy principles by promoting eco-friendly, cost-effective wastewater treatment solutions suitable for resource-limited settings. The study offers a forward-looking pathway for environmentally responsible wastewater management practices that significantly reduce chemical dependency and contribute to pollution mitigation efforts.
(This article belongs to the Section Wastewater Treatment and Reuse)
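The Grey Wolf Optimizer used above is a standard population metaheuristic: wolves move toward the three current best solutions (alpha, beta, delta) under a control parameter `a` that decays from 2 to 0. A minimal sketch follows; the quadratic objective is a toy stand-in for the paper's fitted regression response surface, and the bounds and coefficients are illustrative assumptions.

```python
import numpy as np

def grey_wolf_optimizer(f, bounds, n_wolves=20, n_iter=200, seed=1):
    """Minimal GWO for minimization: encircle alpha/beta/delta leaders,
    with exploration parameter a decaying linearly from 2 to 0."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    X = rng.uniform(lo, hi, size=(n_wolves, lo.size))
    for t in range(n_iter):
        fit = np.apply_along_axis(f, 1, X)
        leaders = X[np.argsort(fit)[:3]]           # alpha, beta, delta
        a = 2.0 * (1 - t / n_iter)
        new = np.zeros_like(X)
        for L in leaders:
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            A, C = 2 * a * r1 - a, 2 * r2
            new += L - A * np.abs(C * L - X)       # move toward each leader
        X = np.clip(new / 3.0, lo, hi)             # average of the three moves
    fit = np.apply_along_axis(f, 1, X)
    return X[fit.argmin()], fit.min()

# toy objective over (pH, dosage); the real response surface comes from the
# fitted multivariate regressions, not this quadratic
best_x, best_f = grey_wolf_optimizer(
    lambda v: (v[0] - 5.0) ** 2 + 0.01 * (v[1] - 50.0) ** 2,
    bounds=[(4, 9), (10, 100)])
```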

25 pages, 1501 KB  
Article
MA-JTATO: Multi-Agent Joint Task Association and Trajectory Optimization in UAV-Assisted Edge Computing System
by Yunxi Zhang and Zhigang Wen
Drones 2026, 10(4), 267; https://doi.org/10.3390/drones10040267 - 7 Apr 2026
Abstract
With the rapid development of applications such as smart cities and the industrial internet, the computation-intensive tasks generated by massive sensing devices pose significant challenges to traditional cloud computing paradigms. Unmanned aerial vehicle (UAV)-assisted edge computing systems, leveraging their high mobility and wide-area coverage capabilities, offer an innovative architecture for low-latency and highly reliable edge services. However, the practical deployment of such systems faces a highly complex multi-objective optimization problem characterized by the tight coupling of task offloading decisions, UAV trajectory planning, and edge server resource allocation. Conventional optimization methods struggle to adapt to the dynamic, high-dimensional characteristics of this problem, leading to suboptimal system performance. To address this critical challenge, this paper constructs an intelligent collaborative optimization framework for UAV-assisted edge computing systems and formulates the system quality of service (QoS) optimization problem as a mixed-integer non-convex programming problem with the dual objectives of minimizing task processing latency and reducing overall system energy consumption. A multi-agent joint task association and trajectory optimization (MA-JTATO) algorithm based on hybrid reinforcement learning is proposed to solve this intractable problem, which innovatively decouples the original coupled optimization problem into three interrelated subproblems and realizes their collaborative and efficient solution. Specifically, the Advantage Actor-Critic (A2C) algorithm is adopted to realize dynamic and optimal task association between UAVs and edge servers for discrete decision-making requirements; the multi-agent deep deterministic policy gradient (MADDPG) method is employed to achieve cooperative and energy-efficient trajectory planning for multiple UAVs to meet the needs of continuous control in dynamic environments; and convex optimization theory is applied to obtain a closed-form optimal solution for the efficient allocation of computational resources on edge servers. Simulation results demonstrate that the proposed MA-JTATO algorithm significantly outperforms traditional baseline algorithms in enhancing overall QoS, effectively validating the framework’s superior performance and robustness in dynamic and complex scenarios.
(This article belongs to the Section Drone Communications)
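The abstract mentions a closed-form convex solution for edge-server resource allocation. A textbook instance of that style (not necessarily the paper's exact formulation) minimizes total processing latency sum_k c_k/f_k subject to a CPU-frequency budget sum_k f_k = F; the KKT conditions give f_k proportional to sqrt(c_k). The workloads and capacity below are illustrative.

```python
import numpy as np

def allocate_cycles(c, F):
    """Closed-form CPU split minimizing sum_k c_k / f_k subject to
    sum_k f_k = F; the Lagrangian/KKT solution is f_k ~ sqrt(c_k)."""
    c = np.asarray(c, float)
    w = np.sqrt(c)
    return F * w / w.sum()

c = np.array([1e9, 4e9, 9e9])      # task workloads in CPU cycles (illustrative)
f = allocate_cycles(c, F=6e9)      # edge server capacity 6 GHz
latency = (c / f).sum()            # total latency 6.0 s; equal split gives 7.0 s
```

Because the allocation subproblem admits this closed form, it can be solved exactly inside each reinforcement-learning step instead of being learned.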

20 pages, 1234 KB  
Article
Lightweight Real-Time Navigation for Autonomous Driving Using TinyML and Few-Shot Learning
by Wajahat Ali, Arshad Iqbal, Abdul Wadood, Herie Park and Byung O Kang
Sensors 2026, 26(7), 2271; https://doi.org/10.3390/s26072271 - 7 Apr 2026
Abstract
Autonomous vehicle navigation requires low-latency and energy-efficient machine learning models capable of operating in dynamic and resource-constrained environments. Conventional deep learning approaches are often unsuitable for real-time deployment on embedded edge devices due to their high computational and memory demands. In this work, we propose a unified TinyML-optimized navigation framework that integrates a lightweight convolutional feature extractor (MobileNetV2) with a metric-based few-shot learning classifier to enable rapid adaptation to unseen driving scenarios with minimal data. The proposed framework jointly combines feature extraction, few-shot generalization, and edge-aware optimization into a single end-to-end pipeline designed specifically for real-time autonomous decision-making. Furthermore, post-training quantization and structured pruning are employed to significantly reduce the memory footprint and inference latency while preserving the classification performance. Experimental results demonstrate that the proposed model achieved 93.4% accuracy on previously unseen road conditions, with an average inference latency of 68 ms and a memory usage of 18 MB, outperforming traditional CNN and LSTM models in efficiency while maintaining a competitive predictive performance. These results highlight the effectiveness of the proposed approach in enabling scalable, real-time navigation on low-power edge devices.
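A common metric-based few-shot classifier of the kind described is the prototypical-network rule: average the support embeddings per class, then assign each query to the nearest prototype. The sketch below assumes precomputed feature vectors (toy Gaussians standing in for MobileNetV2 embeddings); the real pipeline would embed camera frames first.

```python
import numpy as np

def prototypes(support, labels):
    """Class prototypes = mean embedding of each class's support examples."""
    classes = np.unique(labels)
    return classes, np.stack([support[labels == c].mean(axis=0) for c in classes])

def classify(queries, protos, classes):
    """Nearest-prototype rule: squared Euclidean distance in embedding space."""
    d = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]

rng = np.random.default_rng(0)
# two driving-scenario classes, 5 support embeddings each (8-D toy features)
support = np.concatenate([rng.normal(0, 0.1, (5, 8)),
                          rng.normal(1, 0.1, (5, 8))])
labels = np.array([0] * 5 + [1] * 5)
classes, protos = prototypes(support, labels)
pred = classify(rng.normal(1, 0.1, (3, 8)), protos, classes)  # queries near class 1
```

Because adaptation only requires averaging a handful of embeddings, new scenarios can be added on-device without retraining the backbone.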

22 pages, 4848 KB  
Article
A Lightweight Improved RT-DETR for Stereo-Vision-Based Excavator Posture Recognition
by Yunlong Hou, Ke Wu, Yuhan Zhang, Mengying Zhou, Jiasheng Lu and Zhao Zhang
Mathematics 2026, 14(7), 1226; https://doi.org/10.3390/math14071226 - 7 Apr 2026
Abstract
In intelligent excavator applications, traditional excavator posture recognition methods face two major challenges: limited recognition accuracy and insufficient computing resources on edge devices. To address these issues, this study proposes an excavator posture recognition method based on an improved Real-Time Detection Transformer (RT-DETR). First, a new backbone network is designed based on the Reparameterized Vision Transformer to improve feature utilization efficiency while reducing computational demands. Next, the overall architecture is optimized by introducing lightweight Dynamic Upsamplers, which reduce information loss during upsampling and enhance multi-scale feature fusion. In addition, a Cross-Attention Fusion Module is adopted to strengthen local feature extraction while retaining the global modeling capability of the Transformer, thereby improving the discrimination between foreground and background. Finally, a Multi-Scale Fusion Network is introduced to further enhance the multi-scale feature representation ability of RT-DETR. Experimental results show that the proposed method achieves a mean average precision (mAP) of 94.29% for small object detection, which is 7.96% higher than that of the baseline RT-DETR, while reducing the number of model parameters by 34.95%. Compared with YOLO-series models, the proposed method improves mAP by 8.62% to 12.75%. These results indicate that the proposed method outperforms existing methods in both detection accuracy and computational efficiency and provides an efficient and feasible solution for real-time excavator posture recognition.

6 pages, 892 KB  
Proceeding Paper
Applying Model Context Protocol for Offline Small Language Models in Industrial Data Management
by Nian-Ze Hu, You-Xin Lin, Hao-Lun Huang, Po-Han Lu, Chih-Chen Lin, Yu-Tzu Hung, Sing-Cih Jhang and Pei-Yu Chou
Eng. Proc. 2026, 134(1), 31; https://doi.org/10.3390/engproc2026134031 - 7 Apr 2026
Abstract
In recent years, Large Language Models (LLMs) have demonstrated strong capabilities in contextual reasoning and knowledge retrieval. However, their application in industrial domains is limited by concerns regarding data security, reliance on cloud infrastructure, and high operational costs. To address these challenges, this study proposes the use of the Model Context Protocol (MCP) as a middleware framework that enables the deployment of offline-operable Small Language Models (SLMs) for industrial data processing. MCP facilitates structured interaction between SLMs and external resources (e.g., databases, APIs, and processors), allowing secure and controlled data access without exposing proprietary systems. As illustrated in the proposed framework, user input is first processed by the SLM (Qwen-7B) for intent determination. When external data is required, MCP coordinates the invocation of relevant resources and integrates the returned results into the model. The SLM then generates the final response. This approach enables SLMs to perform local computation for contextual analysis and decision support while maintaining low computational requirements and full data locality. The proposed system eliminates dependence on cloud-based LLM services and enhances security and cost efficiency. Experimental results demonstrate that the MCP-based architecture provides a practical and effective solution for deploying intelligent assistants in industrial environments without relying on large-scale external AI services.

15 pages, 3734 KB  
Article
An SVM-Based High-Precision Reconstruction Algorithm for High-Power Laser Beam Spots with Large Divergence Angles
by Wenrong Mo, Bin Li, Jianxin Wang, Cai Wen, Youlin Wang and Awais Tabassum
Optics 2026, 7(2), 26; https://doi.org/10.3390/opt7020026 - 7 Apr 2026
Abstract
Lasers are a key enabling technology across numerous engineering and scientific fields, especially in high-energy laser systems for defense, materials processing, and fusion research, where precise characterization of high-power, large-divergence-angle laser spots is critical. However, the inherent properties of high-power, large-divergence-angle lasers—such as large spot area and strong intensity contrast—pose real obstacles to existing methods, which often suffer from low accuracy and inefficiency. In this paper, a flat-field correction technique is proposed for the CCD to reduce the distortions produced by the non-uniform response of the sensor in spot measurements. Then, a spot recognition algorithm based on support vector machines is developed, which can effectively and accurately locate and identify laser spots with limited training samples and computational resources, achieving a classification accuracy of over 98.11%. Additionally, an efficient correction approach is proposed to assess the spot intensity and shape with high accuracy even at large tilt angles. Experimental results show that the proposed approach can measure the high-power laser spot with a large divergence angle precisely and efficiently, and markedly improves both measurement precision and operational efficiency.
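The listing does not give the paper's exact correction procedure, but the textbook flat-field correction it refers to divides out the per-pixel gain measured from a uniformly illuminated flat frame (after dark-frame subtraction). A minimal sketch on synthetic data, with illustrative gain variation and signal levels:

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Standard CCD flat-field correction: normalize by the per-pixel gain
    estimated from a flat frame, rescaled to preserve the mean signal."""
    gain = flat.astype(float) - dark
    return (raw.astype(float) - dark) * gain.mean() / gain

# synthetic 64x64 sensor with ~20% pixel-to-pixel gain variation
rng = np.random.default_rng(0)
gain = 1.0 + 0.2 * rng.standard_normal((64, 64))
dark = np.full((64, 64), 5.0)                    # dark/offset frame
scene = np.full((64, 64), 100.0)                 # true uniform irradiance
raw = scene * gain + dark                        # what the CCD records
flat = 80.0 * gain + dark                        # uniformly lit flat frame
corrected = flat_field_correct(raw, flat, dark)  # uniform again after correction
```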

10 pages, 512 KB  
Proceeding Paper
Multitask Deep Neural Network for IMU Calibration, Denoising, and Dynamic Noise Adaption for Vehicle Navigation
by Frieder Schmid and Jan Fischer
Eng. Proc. 2026, 126(1), 44; https://doi.org/10.3390/engproc2026126044 - 7 Apr 2026
Abstract
In intelligent vehicle navigation, efficient sensor data processing and accurate system stabilization are critical to maintain robust performance, especially when GNSS signals are unavailable or unreliable. Classical calibration methods for Inertial Measurement Units (IMUs), such as discrete and system-level calibration, fail to capture time-varying, non-linear, and non-Gaussian noise characteristics. Likewise, Kalman filters typically assume static measurement noise levels for non-holonomic constraints (NHCs), resulting in suboptimal performance in dynamic environments. Furthermore, zero-velocity detection plays a vital role in preventing error accumulation by enabling reliable zero-velocity updates during motion stops, but classical thresholding approaches often lack robustness and precision. To address these limitations, we propose a novel multitask deep neural network (MTDNN) architecture that jointly learns IMU calibration, adaptive noise level estimation for NHC, and zero-velocity detection solely from raw IMU data. This shared-encoder design is utilized to minimize computational overhead, enabling real-time deployment on resource-constrained platforms such as Raspberry Pi. The model is trained using post-processed GNSS-RTK ground truth trajectories obtained from both a proprietary dataset and the publicly available 4Seasons dataset. Experimental results confirm the proposed system’s superior accuracy, efficiency, and real-time capability in GNSS-denied conditions.
(This article belongs to the Proceedings of European Navigation Conference 2025)
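For context, the classical thresholding approach the abstract says lacks robustness can be sketched in a few lines: flag samples where a moving average of the combined gyro/accel motion energy falls below a fixed threshold. The window length, threshold, and synthetic signal below are illustrative assumptions.

```python
import numpy as np

def zero_velocity_flags(gyro, accel, win=20, thresh=0.5):
    """Classical fixed-threshold stance detector: True where the smoothed
    motion energy (gyro rate + deviation of |accel| from gravity) is low."""
    g = 9.81
    energy = (gyro ** 2).sum(1) + (np.linalg.norm(accel, axis=1) - g) ** 2
    smooth = np.convolve(energy, np.ones(win) / win, mode="same")
    return smooth < thresh

# synthetic 100 Hz IMU: 2 s stationary, then 2 s of motion
rng = np.random.default_rng(0)
n = 400
gyro = 0.01 * rng.standard_normal((n, 3))
gyro[200:] += 1.0                                  # rotation while moving
accel = np.tile([0.0, 0.0, 9.81], (n, 1)) + 0.05 * rng.standard_normal((n, 3))
flags = zero_velocity_flags(gyro, accel)
```

The fixed threshold is exactly what breaks under changing dynamics; the proposed MTDNN replaces it with a detection head learned jointly with calibration and noise estimation.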

26 pages, 2634 KB  
Article
Minimal Angular Facial Representation for Real-Time Emotion Recognition
by Gerardo Garcia-Gil
Appl. Sci. 2026, 16(7), 3572; https://doi.org/10.3390/app16073572 - 6 Apr 2026
Abstract
Real-time facial emotion recognition remains challenging due to the high dimensionality and computational cost of dense facial representations, which limit their applicability in resource-constrained and real-time scenarios. This study proposes a compact, anatomically informed angular facial representation for efficient, interpretable emotion recognition under real-time constraints. Facial landmarks are first extracted using a standard landmark detection framework, from which a reduced facial mesh of 27 anatomically selected points is defined. Internal geometric angles computed from this mesh are analyzed using temporal variability and redundancy criteria, resulting in a minimal set of eight angular descriptors that capture the most expressive facial dynamics while preserving geometric invariance and computational efficiency. The proposed representation is evaluated using multiple supervised machine learning classifiers under two complementary validation strategies: stratified frame-level cross-validation and strict Leave-One-Subject-Out evaluation. Under mixed-subject stratified validation, the best-performing model (MLP) achieved macro-averaged F1-scores exceeding 0.95 and near-unity ROC–AUC values. However, subject-independent evaluation revealed reduced generalization performance (average accuracy ≈55%), highlighting the influence of inter-subject morphological variability embedded in absolute angular descriptors. These findings indicate that a minimal angular geometric encoding provides strong intra-subject discriminative capability while transparently characterizing its cross-subject generalization limits, offering a practical and interpretable alternative for data- and resource-constrained real-time scenarios.
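An angular descriptor of the kind described is just the interior angle at a landmark vertex, which is invariant to translation, rotation, and uniform scaling of the face. The landmark coordinates and triplet indices below are hypothetical; the paper's actual eight descriptors are selected by variability/redundancy analysis and are not listed here.

```python
import numpy as np

def interior_angle(a, b, c):
    """Angle (degrees) at vertex b of the triangle formed by landmarks a, b, c."""
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# hypothetical landmark triplets (indices into a reduced 27-point mesh)
triplets = [(0, 1, 2), (3, 4, 5)]
mesh = np.array([[0, 0], [1, 0], [1, 1],
                 [2, 0], [3, 0], [4, 2]], float)    # toy 2-D landmark positions
angles = np.array([interior_angle(*mesh[list(t)]) for t in triplets])
```

Eight such scalars per frame make the classifier input tiny compared with a dense landmark or pixel representation, which is where the real-time benefit comes from.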

17 pages, 2174 KB  
Article
RadarSSM: A Lightweight Spatiotemporal State Space Network for Efficient Radar-Based Human Activity Recognition
by Rubin Zhao, Fucheng Miao and Yuanjian Liu
Sensors 2026, 26(7), 2259; https://doi.org/10.3390/s26072259 - 6 Apr 2026
Abstract
Millimeter-wave radar has gradually gained popularity as a sensor mode for Human Activity Recognition (HAR) in recent years because it preserves the privacy of individuals and is resistant to environmental conditions. Nevertheless, the fast inference of high-dimensional and sparse 4D radar data is still difficult to perform on low-resource edge devices. Current models, including 3D Convolutional Neural Networks and Transformer-based models, are frequently plagued by extensive parameter overhead or quadratic computational complexity, which restricts their applicability to edge applications. The present paper attempts to resolve these issues by introducing RadarSSM as a lightweight spatiotemporal hybrid network in the context of radar-based HAR. The explicit separation of spatial feature extraction and temporal dependency modeling helps RadarSSM decrease the overall complexity of computation significantly. Specifically, a spatial encoder based on depthwise separable 3D convolutions is designed to efficiently capture fine-grained geometric and motion features from voxelized radar data. For temporal modeling, a bidirectional State Space Model is introduced to capture long-range temporal dependencies with linear time complexity O(T), thereby avoiding the quadratic cost associated with self-attention mechanisms. Extensive experiments conducted on public radar HAR datasets demonstrate that RadarSSM achieves accuracy competitive with state-of-the-art methods while substantially reducing parameter count and computational cost relative to representative convolutional baselines. These results validate the effectiveness of RadarSSM and highlight its suitability for efficient radar sensing on edge hardware.
(This article belongs to the Special Issue Radar and Multimodal Sensing for Ambient Assisted Living)
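The parameter saving from depthwise separable 3D convolutions is easy to quantify: a standard 3D convolution needs cin * cout * k^3 weights, while the depthwise-plus-pointwise factorization needs cin * k^3 + cin * cout. The channel counts below are illustrative, not RadarSSM's actual layer sizes.

```python
def conv3d_params(cin, cout, k):
    """Weight count of a standard 3D convolution (bias omitted)."""
    return cin * cout * k ** 3

def depthwise_separable3d_params(cin, cout, k):
    """Depthwise (one k^3 filter per input channel) + pointwise (1x1x1) conv."""
    return cin * k ** 3 + cin * cout

std = conv3d_params(64, 128, 3)                  # 221,184 weights
sep = depthwise_separable3d_params(64, 128, 3)   # 9,920 weights
reduction = std / sep                            # ~22x fewer parameters
```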

41 pages, 3961 KB  
Review
Open-Source Molecular Docking and AI-Augmented Structure-Based Drug Design: Current Workflows, Challenges, and Opportunities
by Faizul Azam and Suliman A. Almahmoud
Int. J. Mol. Sci. 2026, 27(7), 3302; https://doi.org/10.3390/ijms27073302 - 5 Apr 2026
Abstract
Molecular docking is a foundational technique in computational drug discovery, widely used to generate binding hypotheses, prioritize compounds, and support target-selectivity studies. The continued growth of open-source docking resources, together with improvements in scoring functions, sampling strategies, and hardware acceleration, has substantially lowered barriers to teaching, early-stage hit identification, and reproducible research. Beyond standalone docking engines, the open-source ecosystem now encompasses browser-accessible tools, preparation and analysis utilities, integrative modeling platforms, and AI-augmented methods for pose prediction, rescoring, and virtual screening. These developments have made docking workflows more accessible, customizable, and transparent across diverse research settings. This review examines open-source docking from a workflow-centered perspective, spanning study design, structural-data acquisition, binding-site definition, receptor and ligand preparation, docking execution, and post-docking validation. It further evaluates how open AI methods are being incorporated into these stages to expand structural coverage, improve screening efficiency, and support contemporary structure-based drug design. Collectively, this review outlines a practical and evidence-based framework for the effective use of open-source docking and virtual-screening pipelines in modern drug discovery.

27 pages, 1279 KB  
Article
Query-Adaptive Hybrid Search
by Pavel Posokhov, Stepan Skrylnikov, Sergei Masliukhin, Alina Zavgorodniaia, Olesia Koroteeva and Yuri Matveev
Mach. Learn. Knowl. Extr. 2026, 8(4), 91; https://doi.org/10.3390/make8040091 - 5 Apr 2026
Abstract
The modern information retrieval field increasingly relies on hybrid search systems combining sparse retrieval with dense neural models. However, most existing hybrid frameworks employ static mixing coefficients and independent component training, failing to account for the specific needs of individual queries and corpus heterogeneity. In this paper, we introduce an adaptive hybrid retrieval framework featuring query-driven alpha prediction that dynamically calibrates the mixing weights based on query latent representations. The predictor is instantiated in both a lightweight low-latency configuration and a full-capacity encoder-scale variant, enabling flexible trade-offs between computational efficiency and retrieval accuracy without relying on resource-inefficient LLM-based online evaluation. Furthermore, we propose antagonist negative sampling, a novel training paradigm that optimizes the dense encoder to resolve the systematic failures of the lexical retriever, prioritizing hard negatives where BM25 exhibits high uncertainty. Empirical evaluations on large-scale multilingual benchmarks (MLDR and MIRACL) indicate that our approach demonstrates superior average performance compared to state-of-the-art models such as BGE-M3 and mGTE, achieving an nDCG@10 of 74.3 on long-document retrieval. Notably, our framework recovers up to 92.5% of the theoretical oracle performance and yields significant improvements in nDCG@10 across 16 languages, particularly in challenging long-context scenarios.
(This article belongs to the Special Issue Trustworthy AI: Integrating Knowledge, Retrieval, and Reasoning)
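The score-fusion step being calibrated can be sketched as a per-query convex combination of normalized sparse and dense scores. In the paper the mixing weight alpha is predicted from the query's latent representation; here it is a fixed illustrative value, and the BM25 and cosine scores are made up.

```python
import numpy as np

def hybrid_scores(sparse, dense, alpha):
    """Convex combination of min-max normalized sparse and dense scores;
    alpha would be predicted per query by the learned alpha predictor."""
    def norm(s):
        span = s.max() - s.min()
        return (s - s.min()) / span if span > 0 else np.zeros_like(s)
    return alpha * norm(dense) + (1 - alpha) * norm(sparse)

bm25 = np.array([12.1, 9.4, 0.3, 7.7])       # lexical scores for 4 candidate docs
dense = np.array([0.21, 0.88, 0.74, 0.30])   # dense cosine similarities
ranked = hybrid_scores(bm25, dense, alpha=0.7).argsort()[::-1]
```

With alpha near 1 the ranking follows the dense model (useful for semantic queries); with alpha near 0 it follows BM25 (useful for exact keyword matches), which is why a static coefficient cannot serve both query types.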

52 pages, 14386 KB  
Review
Trustworthy Intelligence: Split Learning–Embedded Large Language Models for Smart IoT Healthcare Systems
by Mahbuba Ferdowsi, Nour Moustafa, Marwa Keshk and Benjamin Turnbull
Electronics 2026, 15(7), 1519; https://doi.org/10.3390/electronics15071519 - 4 Apr 2026
Viewed by 139
Abstract
The Internet of Things (IoT) plays an increasingly central role in healthcare by enabling continuous patient monitoring, remote diagnosis, and data-driven clinical decision-making through interconnected medical devices and sensing infrastructures. Despite these advances, IoT healthcare systems remain constrained by persistent challenges related to [...] Read more.
The Internet of Things (IoT) plays an increasingly central role in healthcare by enabling continuous patient monitoring, remote diagnosis, and data-driven clinical decision-making through interconnected medical devices and sensing infrastructures. Despite these advances, IoT healthcare systems remain constrained by persistent challenges related to data privacy, computational efficiency, scalability, and regulatory compliance. Federated learning (FL) reduces reliance on centralised data aggregation but remains vulnerable to inference-based privacy risks, while edge-oriented approaches are limited by device heterogeneity and restricted computational and energy resources; the deployment of large language models (LLMs) further exacerbates concerns surrounding privacy exposure, communication overhead, and practical feasibility. This study introduces Trustworthy Intelligence (TI) as a guiding framework for privacy-preserving distributed intelligence in IoT healthcare, explicitly integrating predictive performance, privacy protection, and deployment-oriented system design. Within this framework, split learning (SL) is examined as a core architectural mechanism and extended to support split-aware LLM integration across heterogeneous devices, supported by a structured taxonomy spanning architectural configurations, system adaptation strategies, and evaluation considerations. The study establishes a systematic mapping between SL design choices and representative healthcare scenarios, including wearable monitoring, multi-modal data fusion, clinical text analytics, and cross-institutional collaboration, and analyses key technical challenges such as activation-level privacy leakage, early-round vulnerability, reconstruction risks, and communication–computation trade-offs. An energy- and resource-aware adaptive cut layer selection strategy is outlined to support efficient deployment across devices with varying capabilities. 
A proof-of-concept experimental evaluation compares the proposed SL–LLM framework with centralised learning (CL), federated learning (FL), and conventional SL in terms of training latency, communication overhead, model accuracy, and privacy exposure under realistic IoT constraints, providing system-level evidence for the applicability of the TI framework in distributed healthcare environments and outlining directions for clinically viable and regulation-aligned IoT healthcare intelligence. Full article
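The energy- and resource-aware adaptive cut layer selection outlined in the abstract can be sketched as a simple admissibility rule: pick the deepest cut whose client-side compute and activation upload still fit the device's budget. This is an illustrative sketch, not the paper's algorithm; the function name, layer costs, and device budgets below are all assumptions.

```python
# Hedged sketch of resource-aware cut layer selection for split learning.
# A deeper cut keeps more computation (and raw data) on-device, so we take
# the deepest cut that still satisfies both budgets. Numbers are illustrative.

def select_cut_layer(layer_flops, activation_bytes,
                     device_flops_budget, uplink_bytes_budget):
    """Return the index of the deepest admissible cut layer.

    layer_flops[i]      -- cumulative client-side FLOPs if we cut after layer i
    activation_bytes[i] -- size of the activation payload sent at cut i
    """
    best = 0  # fallback: shallowest cut, most work offloaded to the server
    for i in range(len(layer_flops)):
        if (layer_flops[i] <= device_flops_budget
                and activation_bytes[i] <= uplink_bytes_budget):
            best = i
    return best

# Example: a weak wearable vs. a gateway-class device (illustrative numbers).
flops = [1e6, 5e6, 2e7, 8e7]     # cumulative FLOPs per candidate cut
acts  = [4096, 2048, 1024, 512]  # activation payload shrinks with depth

wearable_cut = select_cut_layer(flops, acts, 6e6, 4096)   # constrained device
gateway_cut  = select_cut_layer(flops, acts, 1e8, 4096)   # capable device
```

Under these assumed budgets the wearable cuts shallow while the gateway keeps the whole client-side stack, which mirrors the heterogeneous-device deployment the framework targets.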

38 pages, 3132 KB  
Article
Lightweight Semantic-Aware Route Planning on Edge Hardware for Indoor Mobile Robots: Monocular Camera–2D LiDAR Fusion with Penalty-Weighted Nav2 Route Server Replanning
by Bogdan Felician Abaza, Andrei-Alexandru Staicu and Cristian Vasile Doicin
Sensors 2026, 26(7), 2232; https://doi.org/10.3390/s26072232 - 4 Apr 2026
Abstract
The paper introduces a computationally efficient semantic-aware route planning framework for indoor mobile robots, designed for real-time execution on resource-constrained edge hardware (Raspberry Pi 5, CPU-only). The proposed architecture fuses monocular object detection with 2D LiDAR-based range estimation and integrates the resulting semantic annotations into the Nav2 Route Server for penalty-weighted route selection. Object localization in the map frame is achieved through the Angular Sector Fusion (ASF) pipeline, a deterministic geometric method requiring no parameter tuning. The ASF projects YOLO bounding boxes onto LiDAR angular sectors and estimates the object range using a 25th-percentile distance statistic, providing robustness to sparse returns and partial occlusions. All intrinsic and extrinsic sensor parameters are resolved at runtime via ROS 2 topic introspection and the URDF transform tree, enabling platform-agnostic deployment. Detected entities are classified according to mobility semantics (dynamic, static, and minor) and persistently encoded in a GeoJSON-based semantic map, with these annotations subsequently propagated to navigation graph edges as additive penalties and velocity constraints. Route computation is performed by the Nav2 Route Server through the minimization of a composite cost functional combining geometric path length with semantic penalties. A reactive replanning module monitors semantic cost updates during execution and triggers route invalidation and re-computation when threshold violations occur. Experimental evaluation over 115 navigation segments (legs) on three heterogeneous robotic platforms (two single-board RPi5 configurations and one dual-board setup with inference offloading) yielded an overall success rate of 97% (baseline: 100%, adaptive: 94%), with 42 replanning events observed in 57% of adaptive trials. 
Navigation time distributions exhibited statistically significant departures from normality (Shapiro–Wilk, p < 0.005). While central tendency differences between the baseline and adaptive modes were not significant (Mann–Whitney U, p = 0.157), the adaptive planner reduced temporal variance substantially (σ = 11.0 s vs. 31.1 s; Levene’s test W = 3.14, p = 0.082), primarily by mitigating AMCL recovery-induced outliers. On-device YOLO26n inference, executed via the NCNN backend, achieved 5.5 ± 0.7 FPS (167 ± 21 ms latency), and distributed inference reduced the average system CPU load from 85% to 48%. The study further reports deployment-level observations relevant to the Nav2 ecosystem, including GeoJSON metadata persistence constraints, graph discontinuity (“path-gap”) artifacts, and practical Route Server configuration patterns for semantic cost integration. Full article
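Two mechanisms from this abstract lend themselves to a compact sketch: the 25th-percentile range statistic used by the Angular Sector Fusion pipeline, and the composite edge cost (geometric length plus additive semantic penalties) minimised by the Route Server. The function names and penalty values below are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: ASF-style percentile range estimate and penalty-weighted
# edge cost. Penalty magnitudes per mobility class are illustrative only.
import math

def asf_range_estimate(sector_ranges):
    """25th-percentile (nearest-rank) of LiDAR returns in a detection's sector.

    A low percentile, rather than the minimum or mean, is robust to sparse
    returns and partial occlusions by foreground/background clutter.
    """
    valid = sorted(r for r in sector_ranges if math.isfinite(r))
    if not valid:
        return None
    k = max(0, math.ceil(0.25 * len(valid)) - 1)
    return valid[k]

# Additive semantic penalties per mobility class (assumed values).
PENALTY = {"dynamic": 5.0, "static": 2.0, "minor": 0.5}

def edge_cost(length_m, semantic_tags):
    """Composite cost: geometric path length plus additive semantic penalties."""
    return length_m + sum(PENALTY[t] for t in semantic_tags)
```

A reactive replanner in this style would re-run graph search whenever an edge's `edge_cost` crosses a configured threshold, matching the route-invalidation behaviour described above.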
(This article belongs to the Special Issue Advances in Sensing, Control and Path Planning for Robotic Systems)

25 pages, 852 KB  
Article
Hardware Implementation-Based Lightweight Privacy-Preserving Authentication Scheme for Internet of Drones Using Physically Unclonable Function
by Razan Alsulieman, Eduardo Hernandez Escobar, Richard Swilley, Ahmed Sherif, Kasem Khalil, Mohamed Elsersy and Rabab Abdelfattah
Sensors 2026, 26(7), 2224; https://doi.org/10.3390/s26072224 - 3 Apr 2026
Abstract
The Internet of Drones (IoD) has emerged as a critical extension of the Internet of Things, enabling unmanned aerial vehicles to support diverse applications, including precision agriculture, logistics, disaster monitoring, and security surveillance. Despite its rapid growth, securing IoD communications remains a significant challenge due to the open wireless environment, high drone mobility, and strict computational and energy constraints. Existing authentication mechanisms either rely on computationally expensive cryptographic operations or remain validated only at the protocol or simulation level, leaving a critical gap in practical, hardware-validated solutions suitable for resource-constrained drone platforms. This gap motivates the need for a lightweight, privacy-preserving authentication scheme that is both theoretically sound and experimentally deployable on real hardware. To address this, we propose a Physically Unclonable Function (PUF)-assisted lightweight authentication scheme for IoD environments that binds cryptographic keys to each drone’s intrinsic hardware characteristics via PUFs. The scheme employs dynamically generated pseudo-identities to conceal permanent drone identities and prevent tracking, while authentication and key agreement are achieved using efficient symmetric cryptographic primitives, including SHA-256 for key derivation and updates, AES-256 for secure communication, and lightweight XOR operations to minimize overhead. Forward secrecy is ensured through rolling key updates, and periodic renewal of PUF challenges enhances resistance to replay and modeling attacks. To validate practicality, both software-based and hardware-based implementations were developed and evaluated. The software evaluation demonstrates a low communication overhead of 708.5 bytes and an average computation time of 18.87 ms.
The hardware implementation on a Nexys A7-100T FPGA operates at 100 MHz with only 12.49% LUT utilization and low dynamic power consumption of approximately 182.5 mW. These results confirm that the proposed framework achieves an effective balance between security, privacy, and efficiency. The significance of this work lies in providing a fully hardware-validated, PUF-based authentication framework specifically tailored to the real-world constraints of IoD environments, offering a practical foundation for securing next-generation drone networks. Full article
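The symmetric building blocks this scheme combines (SHA-256 key derivation and rolling updates, lightweight XOR masking of pseudo-identities) can be sketched as follows. The PUF is mocked with a keyed hash, since a real PUF is a device-unique physical mapping; all function names, the message layout, and the sample identity are assumptions for illustration, not the paper's protocol.

```python
# Hedged sketch of the scheme's symmetric primitives. The PUF stand-in and
# field layout are illustrative assumptions only.
import hashlib
import secrets

def puf_response(challenge: bytes) -> bytes:
    # Stand-in for a hardware PUF: in reality this mapping comes from
    # device-unique physical variation and is not reproducible off-device.
    return hashlib.sha256(b"device-secret" + challenge).digest()

def derive_key(response: bytes, nonce: bytes) -> bytes:
    """SHA-256 key derivation binding the session key to the PUF response."""
    return hashlib.sha256(response + nonce).digest()

def roll_key(key: bytes) -> bytes:
    """Rolling update for forward secrecy: new key = H(old key), so a
    compromised new key does not reveal earlier session keys."""
    return hashlib.sha256(key).digest()

def xor_mask(data: bytes, key: bytes) -> bytes:
    """Lightweight XOR masking of a short field (e.g. a pseudo-identity)."""
    return bytes(d ^ k for d, k in zip(data, key))

# Both sides share the challenge/nonce, so both derive the same session key;
# the permanent identity never travels in the clear.
challenge, nonce = secrets.token_bytes(16), secrets.token_bytes(16)
key = derive_key(puf_response(challenge), nonce)
masked_id = xor_mask(b"DRONE-0042-PERMA", key)
assert xor_mask(masked_id, key) == b"DRONE-0042-PERMA"  # unmasking recovers it
key = roll_key(key)  # previous key is no longer derivable from the new one
```

In the actual scheme AES-256 would carry the bulk traffic once the session key is agreed; the sketch covers only the hashing and masking steps named in the abstract.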
