Search Results (145)

Search Parameters:
Keywords = satellite-denied

25 pages, 7380 KB  
Article
Integrated Air–Ground Robotic System for Autonomous Post-Blast Operations in GNSS-Denied Tunnels
by Goretti Arias-Ferreiro, Marco A. Montes-Grova, Francisco J. Pérez-Grau, Sergio Noriega-del-Rivero, Rafael Herguedas, María T. Lázaro, Amaia Castelruiz-Aguirre, José Carlos Jimenez Fernandez, Mustafa Karahan and Antonio Alonso-Cepeda
Remote Sens. 2026, 18(8), 1133; https://doi.org/10.3390/rs18081133 - 10 Apr 2026
Abstract
Post-blast operations in tunnel construction represent a critical bottleneck due to mandatory downtime and hazardous environmental conditions. This study addresses these challenges by developing and validating an integrated cyber–physical architecture that coordinates an autonomous Unmanned Aerial Vehicle (UAV) and an Autonomous Wheel Loader (AWL) under the supervision of a Digital Twin acting as the central operational digital interface. Specifically, this technology was designed to access the tunnel, evaluate post-blasting conditions, and initiate operations during mandatory exclusion periods for personnel. The system was validated in a realistic, Global Navigation Satellite System (GNSS)-denied tunnel environment emulating post-detonation visibility constraints. The results demonstrate that the aerial agent successfully navigated and mapped the excavation front in less than 8 min, establishing a shared coordinate system for the ground machinery. Through this collaborative workflow, the autonomous deployment enabled operations to commence 50% to 80% earlier than conventional manual procedures. Furthermore, the system reduced daily operational time by approximately 8%, with an estimated return on investment between one and seven months. Overall, the proposed framework eliminates human exposure during high-risk inspections and transforms the fragmented excavation cycle into a continuous, data-driven process. Full article
(This article belongs to the Special Issue Mobile Laser Scanning Systems for Underground Applications)

33 pages, 2787 KB  
Article
Energy-Aware Adaptive Communication Topology with Edge-AI Navigation for UAV Swarms in GNSS-Denied Environments
by Alizhan Tulembayev, Alexandr Dolya, Ainur Kuttybayeva, Timur Jussupbekov and Kalmukhamed Tazhen
Drones 2026, 10(4), 273; https://doi.org/10.3390/drones10040273 - 9 Apr 2026
Abstract
Energy-efficient and resilient decentralized unmanned aerial vehicle (UAV) swarm operation in global navigation satellite system (GNSS)-denied environments remains challenging because propulsion demand, communication load, and onboard inference are tightly coupled at the mission level. Although prior studies have examined some of these components separately, their joint evaluation within adaptive decentralized swarms remains limited under degraded navigation conditions. This study proposes an energy-aware adaptive communication-topology framework integrated with lightweight edge artificial intelligence (AI)-assisted navigation for decentralized UAV swarms operating without reliable GNSS support. The approach combines a unified mission-level energy-accounting structure for propulsion, communication, and onboard inference; a residual-energy-aware topology adaptation mechanism for preserving swarm connectivity; and a convolutional neural network–long short-term memory (CNN–LSTM)-based edge-AI navigation module for improving localization robustness. The framework was evaluated in 1200 s Robot Operating System 2 (ROS2)–Gazebo–PX4 simulation scenarios against fixed-topology and extended Kalman filter (EKF)-based baselines. Under the adopted simulation assumptions, the proposed configuration achieved a 22.7% reduction in total energy consumption, with the largest decrease observed in the communication-energy component, while preserving positive algebraic connectivity across all evaluated runs. The edge-AI module yielded a 4.8% root mean square error (RMSE) reduction relative to the EKF baseline, indicating a modest but meaningful improvement in localization performance. These results support the feasibility of integrated energy-aware swarm coordination in GNSS-denied environments; however, they should be interpreted as simulation-based evidence under the adopted modeling assumptions, and further high-fidelity propagation modeling, broader learning validation, and hardware-in-the-loop studies remain necessary. Full article
(This article belongs to the Section Artificial Intelligence in Drones (AID))
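The connectivity metric reported in this abstract, algebraic connectivity, is the second-smallest eigenvalue of the communication graph's Laplacian; it is positive exactly when the swarm graph is connected. A minimal sketch of that standard check (NumPy assumed; the 4-UAV topologies are hypothetical, not the paper's scenarios):

```python
import numpy as np

def algebraic_connectivity(adjacency):
    """Second-smallest eigenvalue of the graph Laplacian (Fiedler value).
    Positive iff the communication graph is connected."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A      # Laplacian L = D - A
    eigvals = np.linalg.eigvalsh(L)     # sorted ascending
    return float(eigvals[1])

# 4-UAV line topology: connected, so lambda_2 > 0
line = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
# Two isolated pairs: disconnected, so lambda_2 == 0
split = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
print(algebraic_connectivity(line) > 0)            # True
print(abs(algebraic_connectivity(split)) < 1e-9)   # True
```

A topology-adaptation mechanism like the one described can use this value as a constraint: any candidate link removal that would drive it to zero partitions the swarm.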

22 pages, 12678 KB  
Article
A UAV Localization Method Based on Unique Semantic Instances
by Yineng Li, Qinghua Zeng, Ziqi Jin, Junjie Wu, Rongbing Li and Junwei Wan
Remote Sens. 2026, 18(7), 1084; https://doi.org/10.3390/rs18071084 - 3 Apr 2026
Abstract
The unmanned aerial vehicle (UAV) localization method based on global features is a fast and efficient approach for satellite-denied environments. Such methods typically extract global features from aerial images and retrieve matches from a constructed feature database to locate UAVs. However, constructing the feature database requires traversing the entire map, leading to storage redundancy. Moreover, the reference images in the database often have fixed fields of view and orientations, making it difficult to adapt to the changes in aerial images caused by the altitude and attitude changes of the UAV. To address these challenges, this paper explores the uniqueness of semantic instances within the mission region and proposes a UAV localization method based on unique semantic instances. The proposed method first extracts the labels of unique semantic instances from aerial images. These labels are then used to retrieve and match the corresponding feature vectors stored in the database. The location is determined based on the centroid positions of the matched unique semantic instances stored in the feature vectors. Experimental results on both simulation and flight datasets show that the proposed method achieves a localization success rate exceeding 95% in the mission region and remains robust to changes in the attitude and field of view of aerial images. The proposed method requires storing only the categories and locations of the instances, significantly reducing data storage requirements. Full article
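The core lookup described here — match observed unique instance labels against a stored database and average the geo-referenced centroids — can be sketched as follows. Everything below is a hypothetical illustration (the instance labels, coordinates, and image-offset correction are invented for the example; the paper's feature vectors are richer):

```python
# Hypothetical database: (instance label, id) -> geo-referenced centroid (E, N)
instance_db = {
    ("stadium", 0): (1250.0, 830.0),
    ("water_tower", 0): (400.0, 2100.0),
    ("bridge", 2): (1780.0, 1495.0),
}

def locate_by_instances(observed, offsets):
    """Estimate the camera ground position by averaging the stored
    centroids of matched unique instances, each corrected by that
    instance's observed offset from the image centre (metres)."""
    fixes = []
    for label, (de, dn) in zip(observed, offsets):
        if label in instance_db:
            e, n = instance_db[label]
            fixes.append((e - de, n - dn))
    if not fixes:
        return None  # no unique instance recognized in this frame
    return (sum(f[0] for f in fixes) / len(fixes),
            sum(f[1] for f in fixes) / len(fixes))

observed = [("stadium", 0), ("bridge", 2)]
offsets = [(50.0, 30.0), (-20.0, 10.0)]
print(locate_by_instances(observed, offsets))  # (1500.0, 1142.5)
```

Storing only labels and centroids, as in this sketch, is what gives the claimed storage saving over a dense image-feature database.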

21 pages, 40575 KB  
Article
Navigation Error Characteristics of LIO-, VIO-, and RIMU-Assisted INS/GNSS Multi-Sensor Fusion Schemes in a GNSS-Denied Environment
by Kai-Wei Chiang, Syun Tsai, Chi-Hsin Huang, Yang-En Lu, Surachet Srinara, Meng-Lun Tsai, Naser El-Sheimy and Mengchi Ai
Sensors 2026, 26(7), 2068; https://doi.org/10.3390/s26072068 - 26 Mar 2026
Abstract
Autonomous vehicles at level 3 and above must maintain high navigation accuracy, particularly in global navigation satellite system (GNSS)-denied environments. The main innovations of this work are threefold. First, we integrate visual inertial odometry (VIO) and light detection and ranging (LiDAR) inertial odometry (LIO) as external updates to mitigate the rapid drift of micro-electromechanical system (MEMS)-based industrial-grade inertial measurement units (IMUs) during long-term GNSS outages. Second, we adopt a redundant IMU (RIMU) approach that fuses multiple low-cost IMUs to reduce sensor noise and improve reliability. Third, we propose a system calibration methodology using both static and dynamic vehicle motion to estimate extrinsic parameters (boresight angles and lever arms) of the sensors, achieving an overall boresight angle root-mean-square error of 0.04 degrees in the simulation. Experiments were conducted under a 7 min GNSS-denied scenario in an underground parking lot, allowing for comparison of the error characteristics of multi-sensor fusion schemes against a navigation-grade reference. The INS/GNSS/LIO framework achieved a two-dimensional root-mean-square position error of 1.22 m (95% position error within 2.5 m), meeting the lane-level (1.5 m) accuracy requirement under a GNSS outage exceeding 7 min without prior maps. In contrast, the RINS/GNSS/VIO framework yielded a 4.71 m 2D mean position error under the same conditions. This paper provides a quantitative comparison of the baseline error characteristics of VIO-, LIO-, and RIMU-assisted INS/GNSS fusion under a GNSS-denied navigation scenario. Full article
(This article belongs to the Section Remote Sensors)
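The comparison above is stated in terms of 2D RMSE and a 95% error bound. Both follow directly from a time series of horizontal position errors; the sketch below shows the standard definitions (NumPy assumed, toy trajectory data invented for the example):

```python
import numpy as np

def horizontal_error_stats(est, ref):
    """2D RMSE and 95th-percentile position error between an estimated
    and a reference trajectory (N x 2 arrays of easting/northing, m)."""
    err = np.linalg.norm(np.asarray(est, float) - np.asarray(ref, float), axis=1)
    return float(np.sqrt(np.mean(err ** 2))), float(np.percentile(err, 95))

# Toy trajectory: a constant 3-4-5 offset gives a 5 m error at every epoch
ref = np.zeros((100, 2))
est = ref + [3.0, 4.0]
rmse, p95 = horizontal_error_stats(est, ref)
print(rmse, p95)  # 5.0 5.0
```

Note that RMSE and the 95th percentile answer different questions (average energy of the error vs. a near-worst-case bound), which is why the abstract reports both against the 1.5 m lane-level requirement.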

23 pages, 12572 KB  
Article
A Dynamics-Informed Non-Causal Deep Learning Framework for High-Precision SOP Positioning Using Low-Quality Data
by Zhisen Wang, Hu Lu and Zhiang Bian
Aerospace 2026, 13(3), 271; https://doi.org/10.3390/aerospace13030271 - 13 Mar 2026
Abstract
Low Earth Orbit (LEO) satellite signals of opportunity (SOP) provide a viable positioning alternative in GNSS (Global Navigation Satellite System)-denied environments, yet their accuracy is fundamentally constrained by the low-quality orbital data typically available, such as SGP4 (Simplified General Perturbations model 4) predictions derived from Two-Line Elements (TLEs). To address this limitation, this paper proposes a dynamics-informed non-causal deep learning framework that enhances low-quality orbital data into high-fidelity trajectories for accurate SOP positioning. The proposed Non-Causal Dynamics-Informed Representation Temporal Convolutional Network (Non-Causal DIR-TCN) integrates phase space reconstruction and a Temporal Convolutional Network to explicitly model the chaotic dynamics inherent in LEO orbits, while relaxing the causality constraints of standard temporal convolutions to utilize both past and future context from the available SGP4 stream. Experimental results demonstrate that the framework significantly reduces orbit estimation errors and accelerates model convergence. When applied to LEO-SOP positioning, it achieves approximately 20% improvement in 2D positioning accuracy compared to conventional SGP4-based methods. This work effectively bridges the gap between accessible low-precision orbital data and high-accuracy state estimation, advancing the practical deployment of opportunistic signals for resilient positioning in challenging environments. Full article
(This article belongs to the Section Astronautics & Space Science)
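The framework combines phase space reconstruction with a temporal convolutional network. Phase space reconstruction is conventionally done by Takens-style delay embedding; the minimal sketch below shows that construction only (the embedding dimension and lag are illustrative choices, not the paper's, and the network itself is omitted):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens delay embedding: map a scalar series x to phase-space
    points [x[t], x[t+tau], ..., x[t+(dim-1)*tau]]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau          # number of complete vectors
    return np.stack([x[i:i + n] for i in range(0, dim * tau, tau)], axis=1)

pts = delay_embed(np.arange(10.0), dim=3, tau=2)
print(pts.shape)  # (6, 3)
print(pts[0])     # [0. 2. 4.]
```

Feeding such delay vectors (rather than the raw series) to a learned model is what lets it exploit the chaotic orbital dynamics the abstract refers to.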

10 pages, 3612 KB  
Proceeding Paper
Fault Diagnosis Algorithm for Redundant Dual-Axis RINSs Based on Geometric Constraint Observation
by Zhonghong Liang, Hui Luo, Yuanhan Wang, Pengcheng Mu, Yong Ruan, Zhikun Liao and Lin Wang
Eng. Proc. 2026, 126(1), 38; https://doi.org/10.3390/engproc2026126038 - 10 Mar 2026
Abstract
Dual-axis rotational inertial navigation systems (DRINSs) have been widely used in marine navigation due to their high accuracy. However, the long-term operation of a DRINS over weeks poses a significant challenge to its reliability. In order to address the fault diagnosis challenges faced by DRINSs on long-endurance vessels in global navigation satellite system (GNSS)-denied environments, this paper proposes a fault diagnosis algorithm for redundant DRINSs based on geometric constraint observation. The mechanization of dual DRINSs is implemented using a globally referenced framework. A residual-normalized strong tracking filter based on geometric constraint observation is employed to estimate the fault states of the dual DRINSs. A highly robust fault diagnosis method is proposed to detect and diagnose faults in the inertial devices of dual DRINSs. The experimental results show that the proposed algorithm exhibits excellent performance with a diagnostic accuracy of 98.67% and low diagnostic delay. Full article
(This article belongs to the Proceedings of European Navigation Conference 2025)

23 pages, 2271 KB  
Article
Adaptive Particle Filter-Neural Network Fusion for Cooperative Localization of Multi-UAV Systems in GNSS-Denied Indoor Environments
by Zhongyi Wang, Hao Wang and Shuzhi Liu
Computers 2026, 15(3), 172; https://doi.org/10.3390/computers15030172 - 6 Mar 2026
Abstract
Accurate autonomous navigation of unmanned aerial vehicles (UAVs) in complex indoor environments where satellite signals are denied remains a critical challenge. Conventional state estimation methods, such as particle filters, often suffer from particle degeneracy and high computational costs, limiting their robustness and real-time applicability. Here, we introduce an adaptive particle filter-neural network (PF-NN) fusion framework that achieves high-fidelity cooperative localization for multi-UAV systems. Our approach integrates a lightweight neural network that optimizes particle weight allocation by learning from motion consistency, thereby mitigating sample impoverishment. This is coupled with an adaptive resampling strategy that dynamically adjusts the particle population based on the effective sample size, balancing computational load with estimation accuracy. By fusing ultra-wideband (UWB) inter-vehicle ranging with visual landmark observations, the system leverages both global and local constraints to achieve robust state estimation. In simulations involving six UAVs in a complex indoor setting, our algorithm demonstrated superior performance, achieving an average root-mean-square error (RMSE) of 0.437 m. This work provides a robust and efficient solution for multi-UAV cooperative localization, paving the way for reliable autonomous operations in GNSS-denied scenarios such as search-and-rescue and industrial inspection. Full article
(This article belongs to the Special Issue AI in Action: Innovations and Breakthroughs)
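The adaptive resampling strategy is described as triggering on the effective sample size (ESS). A common formulation — shown here as an illustrative sketch, not the authors' exact scheme — computes ESS = 1/Σwᵢ² over normalized weights and resamples systematically only when it falls below a fraction of the particle count:

```python
import numpy as np

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for normalized weights: N for uniform
    weights, 1 when a single particle carries all the weight."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(1.0 / np.sum(w ** 2))

def adaptive_resample(particles, weights, rng, threshold=0.5):
    """Systematic resampling, triggered only when the ESS drops below
    threshold * N (the 0.5 trigger is an illustrative default)."""
    n = len(weights)
    if effective_sample_size(weights) >= threshold * n:
        return particles, weights                   # particle set still healthy
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    positions = (rng.random() + np.arange(n)) / n   # one stratified draw
    idx = np.searchsorted(np.cumsum(w), positions)
    return particles[idx], np.full(n, 1.0 / n)

# Degenerate weights: one particle dominates, so resampling fires and
# every surviving particle is a copy of the dominant one
particles = np.arange(4.0).reshape(4, 1)
resampled, new_w = adaptive_resample(particles, np.array([1.0, 0, 0, 0]),
                                     np.random.default_rng(0))
```

Skipping the resampling step while the ESS is high is exactly the computational-load/accuracy trade-off the abstract describes; the paper's contribution is learning the weights themselves with a neural network, which this sketch does not attempt.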

19 pages, 2798 KB  
Article
A High-Precision Cooperative Localization Method for UAVs Based on Multi-Condition Constraints
by Haiqiao Liu, Wen Jiang, Qing Long, Qijun Xia and Xiang Chen
Sensors 2026, 26(5), 1641; https://doi.org/10.3390/s26051641 - 5 Mar 2026
Abstract
Global Navigation Satellite Systems (GNSSs) often suffer from significant localization errors in signal-denied environments. Furthermore, the accuracy of multi-UAV cooperative localization is highly sensitive to the relative geometric configuration of the swarm. To address these challenges, this paper proposed a novel high-precision and robust cooperative localization method for UAVs. The proposed method comprised two key modules. First, based on the principle of minimizing the Geometric Dilution of Precision, we optimized both the quantity and geometric configuration of the UAV swarm to identify the top three optimal aerial formations. Second, we introduced Ground-Assisted Reference Stations or Unmanned Ground Vehicles to establish an air–ground cooperative localization system. By leveraging Time Difference of Arrival constraints, this system significantly enhanced localization accuracy and robustness. From this analysis, two optimal hybrid configurations were selected. Experimental results showed that while purely air-based geometric optimization enhanced horizontal coverage, it failed to effectively suppress Z-axis errors due to inadequate vertical baselines, with deviations consistently oscillating between 3.0 m and 5.0 m. Conversely, the introduction of edge-deployed ground reference stations reduced the Position Dilution of Precision to a remarkably low level of 0.75, effectively suppressing error divergence. This demonstrated that the proposed air–ground cooperative scheme outperformed traditional pure air-based swarm approaches in localization performance. These findings hold significant theoretical and practical value. Full article
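The geometry argument above rests on the Dilution of Precision: with H the matrix of unit line-of-sight vectors (plus a clock column) and Q = (HᵀH)⁻¹, PDOP = √(Q₁₁+Q₂₂+Q₃₃). The sketch below (NumPy assumed; both anchor configurations are hypothetical, not the paper's formations) illustrates why a near-coplanar, air-only geometry inflates the vertical component while a well-spread air–ground geometry does not:

```python
import numpy as np

def pdop(unit_los):
    """PDOP from receiver-to-anchor unit line-of-sight vectors (N x 3);
    the appended column of ones models the receiver clock term."""
    u = np.asarray(unit_los, dtype=float)
    H = np.hstack([u, np.ones((len(u), 1))])
    Q = np.linalg.inv(H.T @ H)
    return float(np.sqrt(np.trace(Q[:3, :3])))

# Well-spread tetrahedral geometry (strong vertical baseline)
tetra = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
# Nearly coplanar geometry: all anchors at low, similar elevations
el = np.radians([5.0, 6.0, 7.0, 8.0])
az = np.radians([0.0, 90.0, 180.0, 270.0])
flat = np.stack([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)],
                axis=1)
print(round(pdop(tetra), 2))     # 1.5
print(pdop(flat) > pdop(tetra))  # True
```

Adding ground stations below an aerial swarm lengthens the vertical baseline, which is the mechanism behind the reported drop to a PDOP of 0.75.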

23 pages, 7177 KB  
Article
Automated Object Detection and Change Quantification in Underground Mines Using LiDAR Point Clouds and 360° Image Processing
by Ana Fabiola Patricia Tejada Peralta, Roya Bakzadeh, Sina Siahidouzazar and Pedram Roghanchi
Appl. Sci. 2026, 16(5), 2337; https://doi.org/10.3390/app16052337 - 27 Feb 2026
Abstract
Underground mining environments pose significant challenges for automated hazard detection due to low illumination, restricted visibility, and the absence of Global Navigation Satellite System (GNSS) coverage. These factors limit situational awareness and delay inspection efforts, particularly after disruptive events when rapid assessment is essential for safety. This study addresses this problem by developing a dual-pipeline framework for 2D–3D detection that uses 360° imaging and LiDAR-based machine learning to identify people, vehicles, and positional changes in underground settings without requiring personnel to re-enter hazardous areas. The objective was to create a system capable of recognizing objects and monitoring spatial changes under real underground mine conditions. The 2D component used a Ricoh Theta Z1 camera to collect panoramic images, and a YOLO (You Only Look Once) v8n model was fine-tuned using datasets representing low light, shadowed underground scenes. The 3D component employed an Ouster OS1-070-64 LiDAR sensor, and point clouds were processed through denoising, ICP alignment, surface reconstruction, manual annotation, and 2D projection. A YOLO-based model was then trained to detect objects and measure displacement between LiDAR scans. Results demonstrated strong performance for both components. The fine-tuned YOLOv8n model reliably detected personnel and vehicles despite challenging lighting and visual clutter, while the 3D pipeline localized objects in the registered LiDAR frame and quantified vehicle displacement between consecutive scans by comparing 3D bounding-box centroids after ICP alignment (displacement vector and magnitude). These findings indicate that the combined 2D–3D system can effectively support automated hazard recognition and environmental monitoring in GNSS-denied underground spaces. Full article
(This article belongs to the Special Issue The Application of Deep Learning in Image Processing)
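The displacement quantification step — comparing 3D bounding-box centroids of the same object across ICP-registered scans — reduces to a vector difference and its magnitude. A minimal sketch (NumPy assumed; the centroid coordinates are invented for the example):

```python
import numpy as np

def box_displacement(centroid_before, centroid_after):
    """Displacement vector and magnitude between the 3D bounding-box
    centroids of one object in two registered LiDAR scans (metres)."""
    v = np.asarray(centroid_after, float) - np.asarray(centroid_before, float)
    return v, float(np.linalg.norm(v))

# A vehicle that moved 3 m east and 4 m north between scans
vec, dist = box_displacement([12.0, 4.0, 0.5], [15.0, 8.0, 0.5])
print(vec, dist)  # [3. 4. 0.] 5.0
```

The prerequisite, which this sketch skips, is that both scans are first aligned into one frame (the ICP step the abstract mentions); otherwise the difference mixes object motion with scanner motion.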

10 pages, 2301 KB  
Proceeding Paper
Development of a Star Classifier for Optimal Geopositioning Purposes Using a Star-Sighting Device
by Guillaume Rance and Philippe Élie
Eng. Proc. 2026, 126(1), 31; https://doi.org/10.3390/engproc2026126031 - 25 Feb 2026
Abstract
In environments where Global Navigation Satellite Systems are denied, a common solution to estimate one’s position on the Earth is to use stars as inertial references, as was done centuries ago by navigators using a sextant. Nowadays, sextants have been replaced by star-sighting devices, composed of inertial sensors, precise clocks, and one or more star sensors, combining the short-term precision of inertial navigation techniques and the long-term precision of celestial ones. In this context, this paper aims at developing a star classifier for geopositioning purposes, i.e., a way to discriminate stars in the sky so that an observer can choose the stars that would provide the most precise estimate of their position regarding the sighting performances of the device used (sensor definition, precision of the inertial sensor, etc.). The star classifier proposed in this paper is based on differential calculations and spherical trigonometry, and leads to closed-form expressions that are easily embeddable to evaluate the potential of a star. These closed-form expressions are then validated on an experimental setup. Full article
(This article belongs to the Proceedings of European Navigation Conference 2025)
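The paper's closed-form ranking expressions are not reproduced in the abstract, but the underlying spherical-trigonometry relation for a sighted star is standard: the star's altitude follows from the observer's latitude, the star's declination, and the local hour angle via sin(alt) = sin(φ)sin(δ) + cos(φ)cos(δ)cos(H). A minimal sketch of that relation (standard library only; this is the classical formula, not the paper's classifier):

```python
import math

def star_altitude(lat_deg, dec_deg, hour_angle_deg):
    """Altitude of a star above the horizon (degrees) from observer
    latitude, star declination, and local hour angle (all degrees)."""
    lat, dec, ha = (math.radians(v) for v in (lat_deg, dec_deg, hour_angle_deg))
    s = (math.sin(lat) * math.sin(dec)
         + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(max(-1.0, min(1.0, s))))  # clamp rounding

# A star whose declination equals the observer's latitude culminates
# at the zenith when its hour angle is zero
print(round(star_altitude(45.0, 45.0, 0.0), 6))  # 90.0
```

Differentiating relations like this with respect to the observer's position is the kind of sensitivity calculation a star classifier can use to rank which stars yield the most precise fix for a given device.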

21 pages, 28930 KB  
Article
Geolocalization of Unmanned Aerial Vehicle Images and Mapping onto Satellite Images Utilizing 3D Gaussian Splatting
by Satoshi Arakawa, Kaiyu Suzuki and Tomofumi Matsuzawa
Sensors 2026, 26(4), 1322; https://doi.org/10.3390/s26041322 - 18 Feb 2026
Abstract
Geolocalization of images captured by unmanned aerial vehicles (UAVs) remains a significant challenge in Global Navigation Satellite System-denied environments. Although geolocalization is typically achieved by matching UAV images with satellite images, the viewpoint discrepancy between oblique UAV and nadir satellite images complicates this task. In this study, we employ 3D Gaussian Splatting (3DGS) to generate images from viewpoints close to the satellite viewpoint based on multiview UAV images. Assuming that the approximate flight area of the UAV is known, we propose a geolocalization method that directly establishes correspondences between 3DGS-rendered and satellite images using pixel-level image matching. These satellite images, which we refer to as wide-area satellite images, cover a larger area than the UAV observation range. Experimental results demonstrate that the proposed method achieves higher geolocalization accuracy than existing approaches that divide wide-area satellite images and perform image retrieval. Moreover, we demonstrate the potential for geographically consistent integration of independently captured and trained 3DGS models by leveraging the correspondences between 3DGS-rendered and wide-area satellite images. Full article
(This article belongs to the Section Navigation and Positioning)

25 pages, 4827 KB  
Article
A Train Factor Graph Fusion Localization Method Assisted by GRU-IBiLSTM for Low-Cost SINS/GNSS
by Cheng Chen, Guangwu Chen and Xinye Ma
Sensors 2026, 26(4), 1226; https://doi.org/10.3390/s26041226 - 13 Feb 2026
Abstract
The integrated strapdown inertial navigation system (SINS)/global navigation satellite system (GNSS) has been widely adopted in railway positioning applications. However, conventional filtering-based approaches are fundamentally constrained by their dependence on instantaneous state estimates while failing to exploit valuable historical measurement information. To overcome this limitation, we develop a factor graph optimization (FGO) framework to enhance data utilization efficiency. During GNSS signal outages, existing implementations typically preserve only SINS factors while excluding GNSS observations, leading to unbounded error growth. To bridge this gap, our novel solution integrates a gated recurrent unit (GRU) with an Improved Bidirectional Long Short-Term Memory (IBiLSTM) network to generate accurate pseudo-GNSS observations through effective learning from both preceding and subsequent GNSS data sequences. Comprehensive evaluation under GNSS-denied conditions demonstrates that our approach achieves significant improvements over conventional neural network-aided methods, with horizontal root mean square error (RMSE) reductions of 49.22% (simulation) and 36.24% (onboard vehicle). Subsequent FGO processing yields additional performance gains, further reducing RMSE by 46.67% (simulation) and 35.31% (onboard vehicle). This innovative methodology effectively maintains positioning accuracy and ensures navigation continuity during GNSS outages, thereby offering a robust solution for train positioning systems in challenging environments. Full article
(This article belongs to the Section Navigation and Positioning)

9 pages, 925 KB  
Proceeding Paper
New Approach for Jamming and Spoofing Detection Mechanisms for High Accuracy Solutions
by María Crespo, Adrián Chamorro, Miguel Ángel Azanza and Ana González
Eng. Proc. 2026, 126(1), 8; https://doi.org/10.3390/engproc2026126008 - 6 Feb 2026
Abstract
It is well-known that GNSS high accuracy solutions are increasingly vulnerable to jamming and spoofing attacks, posing significant challenges to their reliability, security, and accuracy. In the past years, GNSS communities have witnessed an increase in the frequency and sophistication of these attacks, driven, among other factors, by the widespread availability of low-cost, off-the-shelf equipment capable of denying or even totally misleading GNSS-based positioning systems. On the one hand, jamming attacks aim at inhibiting signal reception by introducing high-power noise or interference, leading to degraded performance or complete failure in determining position. Jamming detection mechanisms must therefore be tied to GNSS receiver mitigation measures at the signal-processing level, analyzing the radio frequency (RF) environment or receiver behavior. Signal-to-noise ratio (SNR) monitoring, power spectrum analysis, and signal power monitoring are commonly used to detect anomalies in signal characteristics. Jamming is often indicated by a combination of one or more dedicated indicators, which makes it possible to characterize different levels of jamming attack and to tailor the response at user level. On the other hand, detecting spoofing attacks requires different advanced techniques to identify anomalies in satellite signals, receiver behavior, or the consistency of computed position data. Failed internal consistency checks, as well as unexpected evolutions of GNSS signals, are typical suspicious behaviors to be analyzed as possible attacks. Additionally, ensuring trust in the received navigation information by including cryptographic authentication mechanisms is key to quickly detecting some kinds of spoofing. This paper presents the latest enhancements to the jamming and spoofing detection and mitigation mechanisms of the GMV GSharp® high-accuracy and safe positioning solution. The new method, based on fuzzy logic systems, distinguishes between different levels of attack and adapts the reactions to reduce the impact on the final user as much as possible. Additionally, test results obtained from real GNSS attack datasets are presented. Full article
(This article belongs to the Proceedings of European Navigation Conference 2025)
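Among the indicators listed, SNR monitoring is the simplest to illustrate. The deliberately simplified sketch below flags epochs whose carrier-to-noise density drops sharply below a recent baseline; the window length, threshold, and data are arbitrary illustrative choices and in no way represent the GSharp® mechanism, which grades attack levels with fuzzy logic rather than a single cutoff:

```python
def jamming_flags(cn0_series, window=5, drop_db=10.0):
    """Flag epochs whose C/N0 (dB-Hz) falls more than drop_db below
    the mean of the preceding `window` epochs (illustrative only)."""
    flags = []
    for i, value in enumerate(cn0_series):
        if i < window:
            flags.append(False)  # not enough history for a baseline yet
            continue
        baseline = sum(cn0_series[i - window:i]) / window
        flags.append(baseline - value > drop_db)
    return flags

# Nominal tracking at 45 dB-Hz, then a sudden 15 dB drop
series = [45.0, 45.0, 45.0, 45.0, 45.0, 45.0, 30.0, 29.0]
print(jamming_flags(series))  # last two epochs flagged
```

Combining several such binary or graded indicators, rather than relying on one, is what allows different levels of attack to be characterized and the user-level response to be adapted.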

23 pages, 3728 KB  
Article
Fault-Tolerant Optimization Algorithm for Ship-Integrated Navigation Systems Based on Perceptual Information Compensation
by Daheng Zhang, Xuehao Zhang, Weibo Wang and Muzhuang Guo
J. Mar. Sci. Eng. 2026, 14(3), 293; https://doi.org/10.3390/jmse14030293 - 2 Feb 2026
Abstract
Autonomous ships require reliable and economical navigation; however, their performance is hindered when satellite-based positioning signals become unavailable. In such global navigation satellite system (GNSS)-denied conditions, a backup navigation system integrating a strapdown inertial navigation system (SINS), Doppler velocity logger (DVL), and a compass (SINS/DVL/COMPASS) can provide essential state information, but the accuracy and fault tolerance of such systems are constrained by weak observability of position/heading errors and strong dependence on DVL measurements. This study proposes a fault-tolerant optimization method based on perceptual information compensation. First, radar imagery and electronic chart data are fused at the feature level using a weighted wavelet strategy to enhance the environmental feature saliency for shoreline extraction. Second, characteristic coastline inflection points are detected and tracked using a dual-curvature and distance-constrained procedure, generating external position observations via radar–chart matching. These observations are incorporated into the SINS/DVL/COMPASS framework to improve its state observability and robustness. Simulation results show that under nominal conditions, perceptual compensation mitigates error divergence and promotes the convergence of position errors, improving the positioning stability. In terms of robustness, the proposed method delivered more stable state-error behavior than the baseline under DVL speed faults of +2 m/s, −2 m/s, and +2 m/s injected at 301–330, 701–730, and 1101–1130 s, respectively. Quantitatively, the 3σ bounds of velocity and position-related errors are reduced under fault conditions, indicating improved fault tolerance and suitability for short-term nearshore autonomous navigation during GNSS outages. Full article

17 pages, 669 KB  
Article
NaviLoc: Trajectory-Level Visual Localization for GNSS-Denied UAV Navigation
by Pavel Shpagin and Taras Panchenko
Drones 2026, 10(2), 97; https://doi.org/10.3390/drones10020097 - 29 Jan 2026
Abstract
Aerial-to-satellite visual localization enables GNSS-denied UAV navigation, but the appearance gap between low-altitude (50–150 m) UAV imagery and nadir satellite tiles makes per-frame Visual Place Recognition (VPR) unreliable. Under perceptual aliasing, high-similarity matches are often geographically inconsistent, so naïve anchoring fails. We introduce NaviLoc, a training-free three-stage trajectory-level estimator that treats VPR as a noisy measurement source and exploits Visual-Inertial Odometry (VIO) as a relative-motion prior. Stage 1 (Global Align) estimates a global SE(2) transform by maximizing an explicit trajectory-level similarity objective. Stage 2 (Refinement) performs sliding-window bounded weighted Procrustes updates. Stage 3 (Smoothing) computes a strictly convex MAP trajectory estimate that fuses VIO displacements with VPR anchors while clamping detected outliers. On a challenging low-altitude rural UAV benchmark, NaviLoc attains 19.5 m Mean Localization Error (MLE)—a 16.0× reduction compared to state-of-the-art localization method AnyLoc-VLAD, and 32.1× compared to raw VIO drift. End-to-end inference runs at 9 FPS on Raspberry Pi 5, enabling real-time embedded deployment. Full article
(This article belongs to the Section Artificial Intelligence in Drones (AID))
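Stage 2's "bounded weighted Procrustes updates" build on the classical weighted rigid-alignment solution. The sketch below recovers the SE(2) transform minimizing the weighted squared residual via an SVD; it is the textbook closed form (NumPy assumed), not NaviLoc's code, and omits the sliding-window bounding the paper adds:

```python
import numpy as np

def weighted_procrustes_se2(src, dst, weights):
    """Rotation R and translation t minimizing
    sum_i w_i * ||R @ src_i + t - dst_i||^2 for 2D point sets."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    w = np.asarray(weights, float) / np.sum(weights)
    mu_s, mu_d = w @ src, w @ dst                      # weighted centroids
    S = (dst - mu_d).T @ (w[:, None] * (src - mu_s))   # cross-covariance
    U, _, Vt = np.linalg.svd(S)
    D = np.diag([1.0, np.linalg.det(U @ Vt)])          # force det(R) = +1
    R = U @ D @ Vt
    return R, mu_d - R @ mu_s

# Recover a known 0.5 rad rotation and (2, -1) m shift from four points
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([2.0, -1.0])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
dst = src @ R_true.T + t_true
R, t = weighted_procrustes_se2(src, dst, np.ones(4))
```

In the trajectory-level setting, `src` would be VIO waypoints and `dst` their VPR-anchored map positions, with the weights down-weighting anchors suspected of perceptual aliasing.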
