Search Results (475)

Search Parameters:
Keywords = relative pose estimation

17 pages, 295 KB  
Article
Pesticide Residues in Apples and Pears: A Deterministic Assessment of Chronic Exposure and Non-Carcinogenic Risk for European Consumers
by Jarosław Chmielewski, Barbara Gworek, Magdalena Florek-Łuszczki and Jarogniew J. Łuszczki
Molecules 2026, 31(5), 767; https://doi.org/10.3390/molecules31050767 - 25 Feb 2026
Abstract
(1) Pome fruits (apples and pears) are among the most frequently consumed fruits in Europe and may contribute to dietary exposure to pesticide residues. Although residue levels generally comply with maximum residue limits (MRLs), even low concentrations may cumulatively contribute to chronic health risks under conditions of frequent and long-term consumption. This study aimed to quantitatively assess dietary exposure and the potential non-carcinogenic health risks associated with pesticide residues in apples and pears, using representative monitoring and consumption data. (2) The assessment was based on results of the Polish national official monitoring program for pesticide residues in food, specifically apples and pears sampled in 2022, as reported by the National Institute of Public Health (NIZP-PZH). These data were combined with age- and body weight-specific consumption scenarios derived from FAO/WHO GEMS/Food cluster diets and national Polish statistics. For the most frequently detected pesticides (captan, flonicamid, acetamiprid and fosetyl-Al in apples; captan and acetamiprid in pears), the mean and 95th percentile concentrations were used to derive the estimated daily intake (EDI). Non-carcinogenic risk was characterized using the hazard quotient, calculated as the ratio of estimated daily intake to the acceptable daily intake (HQ = EDI/ADI), and the cumulative Hazard Index, defined as the sum of the individual HQ values for the pesticides detected in a given commodity and exposure scenario (HI = ΣHQ). Calculations were performed separately for children and adults under several dietary scenarios (Polish general population, German child, German general population, GEMS/Food G08). (3) For all pesticides and exposure scenarios, the HQ values were well below 1, indicating no exceedance of the acceptable daily intake (ADI). 
The highest chronic exposure was observed for apples in children (German child scenario), with the HQ values for captan, flonicamid and acetamiprid in the approximate range of 0.01–0.05, while the HI remained < 0.1 even under high-consumption conditions. In adults (Polish and German general populations, GEMS/Food G08), HQ values were approximately one order of magnitude lower than in children, and the cumulative HI values for both apples and pears were far below 1. The contribution of pears to total exposure was limited, reflecting lower consumption and fewer active substances detected. (4) This quantitative risk assessment, based on Polish monitoring data from 2022, indicates that under current residue levels and consumption patterns, chronic dietary exposure to pesticide residues from apples and pears does not pose a relevant non-carcinogenic health concern for either children or adults. Nevertheless, children consistently showed higher relative exposure than adults, underscoring the importance of age-stratified risk assessment and continued monitoring of residues in commonly consumed fruits. The findings support existing regulatory frameworks while justifying sustained, targeted surveillance of key active substances in pome fruits as part of public health prevention strategies. Full article
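The exposure metrics defined in this abstract (EDI, HQ = EDI/ADI, HI = ΣHQ) reduce to simple arithmetic. A minimal sketch, with purely illustrative residue, intake, ADI, and body-weight values (not the study's monitoring data), might look like:

```python
# Sketch of the hazard-quotient workflow described above. All numbers
# are illustrative placeholders, not values from the NIZP-PZH dataset.

def edi(residue_mg_per_kg: float, intake_kg_per_day: float, body_weight_kg: float) -> float:
    """Estimated daily intake (mg/kg bw/day) from residue concentration,
    daily commodity intake, and body weight."""
    return residue_mg_per_kg * intake_kg_per_day / body_weight_kg

def hazard_quotient(edi_value: float, adi: float) -> float:
    """HQ = EDI / ADI; values below 1 indicate no ADI exceedance."""
    return edi_value / adi

# Hypothetical child scenario: residues (mg/kg) and ADIs (mg/kg bw/day).
residues = {"captan": 0.05, "flonicamid": 0.02, "acetamiprid": 0.01}
adis     = {"captan": 0.10, "flonicamid": 0.025, "acetamiprid": 0.025}

intake_kg, bw_kg = 0.10, 20.0  # 100 g of apples/day, 20 kg child
hqs = {p: hazard_quotient(edi(c, intake_kg, bw_kg), adis[p]) for p, c in residues.items()}
hi = sum(hqs.values())  # HI = sum of the HQ values for the commodity

print(hqs, hi)
```

With these placeholder inputs, every HQ stays orders of magnitude below 1, mirroring the pattern the study reports.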
31 pages, 11349 KB  
Article
Recognition, Localization and 3D Geometric Morphology Calculation of Microblind Holes in Complex Backgrounds Based on the Improved YOLOv11 Network and AVC Algorithm
by Chengfen Zhang, Dong Xia, Ruizhao Chen, Qunfeng Niu, Tao Wang and Li Wang
J. Imaging 2026, 12(3), 96; https://doi.org/10.3390/jimaging12030096 - 24 Feb 2026
Abstract
Inspection of microblind hole processing quality, in particular accurately identifying contour features and precisely measuring 3D morphological parameters, has long been challenging, especially when holes of different sizes, depths, and contour shapes must be handled simultaneously. This poses a great challenge for identifying and localizing microblind hole contours based on machine vision and accurately calculating their three-dimensional parameters. This study takes cigarette microblind holes (diameter of 0.1–0.2 mm, depth of approximately 35 µm) as the research object. It focuses on solving two major challenges: recognizing and localizing microblind hole contours in complex texture backgrounds and accurately calculating their 3D geometric morphology. An improved YOLOv11s model is proposed for multiobject detection in microblind hole images with complex texture backgrounds, so that contour features are extracted completely. An Area–Volume Computation (AVC) algorithm, which utilizes discrete integral estimation and curve-fitting principles, is also proposed for computing their surface area and volume. The experimental results show that the precision, recall, mAP@0.5, mAP@0.5:0.95, and prediction time of the improved YOLOv11 network are 0.915, 0.948, 0.925, 0.615, and 1.27 ms, respectively. The relative errors (REs) of the surface area and volume calculations for the microblind holes are 5.236% and 3.964%, respectively. The proposed method achieves recognition, localization, and 3D morphology calculation accuracy for microblind holes that meets on-site cigarette inspection criteria. Additionally, it provides a reference for detecting other similar objects in complex texture backgrounds and for accurate 3D calculation tasks. Full article
(This article belongs to the Topic Computer Vision and Image Processing, 3rd Edition)
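The abstract does not give the AVC formulation, but a discrete-integral estimate of a blind hole's volume and lateral surface area, assuming a fitted radius-depth profile and a solid-of-revolution model, could be sketched as:

```python
import math

# Illustrative discrete-integral area/volume estimate in the spirit of
# the AVC idea described above (the paper's exact formulation is not
# reproduced). The hole is modeled as a solid of revolution whose
# fitted radius profile r(z) is sampled at discrete depths z.

def revolve_volume(z, r):
    """V = pi * integral of r(z)^2 dz, via the trapezoidal rule."""
    return math.pi * sum(
        0.5 * (r[i] ** 2 + r[i + 1] ** 2) * (z[i + 1] - z[i])
        for i in range(len(z) - 1)
    )

def revolve_lateral_area(z, r):
    """Lateral area: 2*pi * integral of r * sqrt(1 + r'(z)^2) dz,
    accumulated segment by segment as 2*pi*r_mid*sqrt(dz^2 + dr^2)."""
    total = 0.0
    for i in range(len(z) - 1):
        dz, dr = z[i + 1] - z[i], r[i + 1] - r[i]
        r_mid = 0.5 * (r[i] + r[i + 1])
        total += 2 * math.pi * r_mid * math.hypot(dz, dr)
    return total

# Cylinder-like test profile: radius 0.075 mm, depth 0.035 mm.
depths = [0.0, 0.0175, 0.035]
radii = [0.075, 0.075, 0.075]
print(revolve_volume(depths, radii), revolve_lateral_area(depths, radii))
```

For this constant-radius profile the trapezoidal sums reduce to the exact cylinder formulas, which makes the sketch easy to sanity-check.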

20 pages, 1780 KB  
Article
A Comprehensive Eye-Tracking System Toward Large FOV HMD
by Jiafu Lv, Di Zhang, Ke Han, Qi Wu and Sanxing Cao
Sensors 2026, 26(5), 1402; https://doi.org/10.3390/s26051402 - 24 Feb 2026
Abstract
Eye tracking in virtual reality (VR) head-mounted displays poses substantial engineering challenges, particularly under immersive display configurations with large fields of view (FOV), where optical layout, illumination, and image acquisition impose nontrivial system constraints. To address these design constraints, we present an integrated near-eye eye-tracking prototype tailored for immersive VR headsets, combining customized hardware components and a real-time software pipeline. The proposed system integrates optimized near-eye illumination and image acquisition with a pupil detection module and a deep learning-based gaze-vector estimation model, forming a real-time software pipeline for stable end-to-end gaze mapping under fixed calibration conditions. Under identical system settings, calibration procedures, and gaze-point mapping conditions, we evaluate the proposed gaze-vector estimation model through a controlled model-level ablation. The attention-enhanced model achieves an average angular deviation of 1.15°, corresponding to a 61.4% relative reduction compared with a baseline ResNet-152 model without attention. To demonstrate the usability of the system outputs at the application level, we further implement a real-time visualization example that integrates pupil diameter, gaze vectors, and blink events to depict the temporal evolution of eye-movement signals. This work provides a cost-effective and reproducible engineering reference for near-eye eye-movement acquisition and visualization in immersive VR settings and serves as a technical foundation for subsequent interaction design or behavioral analysis studies. Full article
(This article belongs to the Section Optical Sensors)
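The reported 1.15° average angular deviation is presumably the angle between predicted and ground-truth gaze vectors; a minimal sketch of that metric:

```python
import math

# Angular deviation between two gaze vectors, in degrees. This is an
# assumed form of the evaluation metric, not code from the paper.

def angular_deviation_deg(g_pred, g_true):
    dot = sum(a * b for a, b in zip(g_pred, g_true))
    n1 = math.sqrt(sum(a * a for a in g_pred))
    n2 = math.sqrt(sum(b * b for b in g_true))
    # Clamp for numerical safety before acos.
    cos_theta = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_theta))

print(angular_deviation_deg((0.0, 0.0, 1.0), (0.02, 0.0, 1.0)))  # small angle
```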

18 pages, 1626 KB  
Article
Rock Mass and Dust Emissions from Hard Coal Mining as a Sustainability Challenge During Energy Transition—The Case Study of Poland
by Andrzej Chmiela, Beata Barszczowska, Stefan Czerwiński and Adam Smoliński
Sustainability 2026, 18(4), 2145; https://doi.org/10.3390/su18042145 - 22 Feb 2026
Abstract
Coal continues to play a significant role in Poland’s electricity generation system, making the sustainable management of environmental impacts from hard coal mining a critical challenge during the ongoing energy transition. In line with the European Green Deal and circular economy principles, reducing and managing mining-related waste emissions is an important component of sustainable development in regions undergoing a gradual phase-out of fossil fuel extraction. This study analyzes rock mass and dust emissions associated with underground hard coal mining in Poland over the period 2017–2025 using the most recent statistical data, including estimates for 2025 based on the first three quarters of the year. The scale, structure, and trends of emissions are examined to assess their implications for environmental sustainability, resource efficiency, and long-term land use. Particular attention is paid to the relationship between declining coal production and the relatively slower reduction in waste rock emissions, which indicates increasing contamination of extracted material and poses challenges for sustainable mining practices. The results show that while total coal output has decreased substantially, reductions in rock mass emissions have been less dynamic, highlighting the need for improved waste management strategies from a sustainability perspective. The study demonstrates that increasing the utilization of mining waste, through underground use and circular economy applications, can reduce environmental pressure, support compliance with sustainability policies, and mitigate long-term impacts on post-mining regions. Although the analysis focuses on Poland, the findings provide transferable insights for other countries seeking to balance energy security, mining sector restructuring, and sustainable development objectives during the transition away from fossil fuels. Full article

28 pages, 3745 KB  
Article
An Underwater 6-DoF Position and Orientation Estimation Method for Divers Based on the VideoPose5CH Model
by Kaidong Wang, Yi Yang, Qingbo Wei, Xingqun Zhou, Zhiqiang Hu and Quan Zheng
Sensors 2026, 26(4), 1335; https://doi.org/10.3390/s26041335 - 19 Feb 2026
Abstract
Accurate perception of a diver’s position and orientation by Autonomous Underwater Vehicles (AUVs) is essential for effective human–robot collaboration in underwater environments. However, conventional position and orientation estimation methods that combine deep learning with Perspective-n-Point (PnP) algorithms are primarily designed for rigid objects. In contrast, divers exhibit highly variable postures underwater, with no fixed configuration. To address this limitation, this paper proposes a framework for estimating the six-degree-of-freedom (6-DoF) position and the orientation of a diver. In addition, a novel network architecture, termed “VideoPose5CH,” is proposed. In the proposed framework, temporal sequences of 2D joint coordinates are provided to VideoPose5CH, which then outputs the 3D joint coordinates of the current frame as well as the corresponding refined 2D joint locations. Subsequently, the diver’s 6-DoF position and orientation relative to the camera are further recovered via a PnP algorithm. To mitigate the scarcity of underwater 3D human pose datasets, a land-based 3D human pose dataset augmentation strategy tailored to underwater conditions is further proposed. With this strategy, diver pose estimation accuracy is improved and the robustness of the proposed method across diverse scenarios is enhanced. Experimental results demonstrate that the proposed method can stably estimate the 6-DoF position and orientation of the diver within a distance range of 2.643 m to 11.477 m. The average position errors along the three axes are 7.33 cm, 4.04 cm, and 27.15 cm, respectively, while the average orientation errors are 6.96°, 8.47°, and 2.62°. Full article
(This article belongs to the Section Navigation and Positioning)

16 pages, 9023 KB  
Article
Optimising Camera–ChArUco Geometry for Motion Compensation in Standing Equine CT: A CT-Motivated Benchtop Study
by Cosimo Aliani, Cosimo Lorenzetto Bologna, Piergiorgio Francia and Leonardo Bocchi
Sensors 2026, 26(4), 1310; https://doi.org/10.3390/s26041310 - 18 Feb 2026
Abstract
Standing equine computed tomography (CT) acquisitions are susceptible to residual postural sway, which can introduce view-inconsistent motion and degrade image quality. External optical tracking based on ChArUco fiducials is a promising, low-cost strategy to enable projection-wise motion compensation, yet quantitative guidance on how camera–marker geometry affects pose-estimation performance remains limited. This CT-motivated benchtop study characterizes how the relative camera–ChArUco configuration influences both the accuracy (bias with respect to ground truth) and the precision (repeatability) of pose estimates obtained from RGB images using OpenCV ChArUco detection and reprojection-error minimization to estimate the rigid camera-to-board transformation. Controlled experiments systematically varied acquisition protocol (continuous repeated estimates at fixed pose versus cyclic repositioning), viewing angle over a wide angular range at two working distances, and camera-to-board distance over multiple depth settings. Ground truth for angular configurations was defined by a stepper-motor rotation stage, while distance ground truth was obtained by ruler measurements. Performance was summarized via mean absolute error and standard deviation across repeated measurements, complemented by variance-based statistical testing with multiple-comparison correction. Cyclic repositioning did not yield evidence of increased variability relative to continuous acquisitions, supporting view-by-view sampling. Viewing angle induced a consistent accuracy–precision trade-off for rotations: frontal views minimized mean error but exhibited higher variability, whereas oblique views reduced jitter at the expense of increased bias. Increasing working distance reduced repeatability, most prominently for depth-related components. 
Overall, these findings provide pre-clinical guidance for selecting camera/marker placement (moderately oblique viewpoints, limited working distance, sufficient image footprint) before in-scanner and in-vivo validation for standing equine CT motion compensation. Full article
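The accuracy and precision summaries described here (mean absolute error against ground truth, standard deviation across repeated measurements) can be sketched with hypothetical repeated angle estimates:

```python
import math

# Accuracy as mean absolute error against ground truth, precision as the
# sample standard deviation (repeatability) of repeated pose estimates.
# The angle values below are hypothetical, not from the study.

def mae(estimates, ground_truth):
    return sum(abs(e - ground_truth) for e in estimates) / len(estimates)

def std(estimates):
    m = sum(estimates) / len(estimates)
    return math.sqrt(sum((e - m) ** 2 for e in estimates) / (len(estimates) - 1))

# Hypothetical repeated angle estimates (degrees) at a 10-degree setting.
angles = [10.2, 9.8, 10.1, 10.3, 9.9]
print(mae(angles, 10.0), std(angles))
```

The accuracy-precision trade-off the study reports corresponds to these two numbers moving in opposite directions as the viewing angle changes.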

25 pages, 6514 KB  
Article
An Optimization-Based Method for Relative Pose Estimation for Collaborating UAVs Using Observed Predefined Trajectories
by Guven Cetinkaya and Yakup Genc
Drones 2026, 10(2), 135; https://doi.org/10.3390/drones10020135 - 14 Feb 2026
Abstract
Accurate relative pose estimation between unmanned aerial vehicles (UAVs) is a key requirement for cooperative navigation, formation control, and swarm operation in GNSS-denied environments. In multi-UAV systems, monocular vision is attractive due to its low weight and power requirements; however, bearing-only measurements can lead to angular ambiguities, particularly under symmetric or planar target motion. This paper presents a geometric framework for monocular relative pose estimation using observed known motion patterns, rather than relying on complex distributed system architectures. The method exploits trajectory-induced geometric constraints by back-projecting the observed image-plane trajectory of a target UAV into three-dimensional space and tracing rays from the camera center toward a geometrically parameterized reference trajectory. Relative pose parameters are refined through nonlinear optimization using Levenberg–Marquardt, enabling accurate estimation under noisy conditions. Beyond the estimation framework, the influence of cooperative trajectory geometry on angular observability is investigated through simulation experiments. The results indicate that planar collaborative motion may induce angular ambiguity despite numerical convergence, whereas introducing modest out-of-plane excitation through three-dimensional trajectories significantly improves observability. In addition to simulation-based evaluation, a limited real-world flight experiment is conducted to qualitatively validate the observed ambiguity patterns under practical sensing conditions. In particular, three-dimensional eight-shaped trajectories are shown to significantly suppress large angular outliers and improve estimation robustness without increasing computational complexity, providing validated guidance for active trajectory design to ensure observability in vision-based aerial scenarios. Full article
(This article belongs to the Section Artificial Intelligence in Drones (AID))
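A toy sketch of the Levenberg-Marquardt refinement step, here reduced to recovering a planar rotation and translation that aligns an observed trajectory with a known reference trajectory (the paper's full camera back-projection model is not reproduced), assuming SciPy's `least_squares`:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy LM refinement: fit (theta, tx, ty) so the transformed reference
# trajectory matches the observed one. Purely illustrative of the
# optimization machinery; not the paper's parameterization.

def residuals(params, reference, observed):
    theta, tx, ty = params
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    predicted = reference @ R.T + np.array([tx, ty])
    return (predicted - observed).ravel()

# Reference trajectory: a planar eight-shape sampled over time.
t = np.linspace(0.0, 2.0 * np.pi, 50)
reference = np.column_stack([np.cos(t), np.sin(2.0 * t)])

# Observed trajectory: reference under an unknown pose, plus noise.
true_theta, true_t = 0.3, np.array([1.0, -0.5])
c, s = np.cos(true_theta), np.sin(true_theta)
observed = reference @ np.array([[c, -s], [s, c]]).T + true_t
observed += 0.001 * np.random.default_rng(0).standard_normal(observed.shape)

sol = least_squares(residuals, x0=[0.0, 0.0, 0.0], method="lm",
                    args=(reference, observed))
print(sol.x)  # close to [0.3, 1.0, -0.5]
```

The ambiguity finding in the abstract maps onto this sketch directly: if the reference trajectory were planar and symmetric, several parameter sets could yield near-identical residuals even though LM converges numerically.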

9 pages, 1760 KB  
Proceeding Paper
PM2.5 Concentration Estimation Based on Support Vector Regression: Hybrid Approach Using PM2.5-Sensitive Pixels and Multi-Features
by Ming-Jung Liu, Meng-Yuan Jiang, Yu-Cheng Wu and Jiun-Jian Liaw
Eng. Proc. 2025, 120(1), 48; https://doi.org/10.3390/engproc2025120048 - 5 Feb 2026
Abstract
Fine particulate matter (PM2.5) is a hazardous air pollutant that poses serious risks to human health. Long-term exposure to high concentrations of PM2.5 increases the likelihood of developing cardiovascular and respiratory diseases. Therefore, accurately monitoring PM2.5 concentrations is crucial for effective air quality management. However, due to the limited number and uneven distribution of monitoring stations, traditional monitoring methods fail to provide comprehensive data. With advancements in imaging technology and data processing, researchers have focused on estimating PM2.5 concentrations using image-based approaches. We constructed the PM2.5-sensitive pixel (PSP) approach. In addition to the original four image features—Sobel, Dark Channel Prior (DCP), entropy, and contrast—we identified a new image feature and integrated three meteorological variables, relative humidity, temperature, and wind speed, to enhance the estimation of PM2.5 concentrations. Full article
(This article belongs to the Proceedings of 8th International Conference on Knowledge Innovation and Invention)
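A hedged sketch of the hybrid idea, a support vector regressor over combined image and meteorological features, on entirely synthetic data (the PSP selection step and the real feature extraction are not reproduced):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# SVR mapping 4 image features (stand-ins for Sobel, DCP, entropy,
# contrast) plus 3 meteorological variables (humidity, temperature,
# wind speed) to a PM2.5 concentration. Data are synthetic.

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 7))                          # 4 image + 3 weather
y = 80.0 * X[:, 1] + 20.0 * X[:, 4] + rng.normal(0.0, 1.0, 200)   # toy target

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:150], y[:150])       # train on the first 150 samples
pred = model.predict(X[150:])     # estimate PM2.5 for held-out samples
print(pred[:3])
```

Scaling before the RBF kernel matters here because the raw image and meteorological features live on different numeric ranges.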

21 pages, 1606 KB  
Article
Forward Reference-Sample Equalization for High-Speed Shallow-Water Acoustic Communication
by Cheng He, Fei Sun, Enhui Ji, Pingyang Min and Tanghao You
Electronics 2026, 15(3), 650; https://doi.org/10.3390/electronics15030650 - 2 Feb 2026
Abstract
In shallow-water high-speed mobile acoustic channels, severe non-uniform Doppler effects pose significant challenges to traditional equalization methods based on linear and time-invariant channel assumptions. Existing approaches typically rely on inverse compensation strategies, which are inadequate for handling path-dependent nonlinear Doppler distortions and fail to accurately reflect the underlying physical propagation process. To address these limitations, this paper proposes a forward reference-sample equalization (FRSE) method. Based on estimated channel parameters, forward channel modeling is performed for all possible transmitted symbols to generate a reference-sample matrix that is consistent with channel-induced distortions. At the receiver, a least-squares decision criterion is employed to match the received signal with the closest reference sample, thereby enabling reliable demodulation. Simulation results demonstrate that, at a high relative speed of 30 kn and a signal-to-noise ratio (SNR) of 8 dB, the proposed method achieves a bit error rate (BER) of 1.75×10⁻⁴, significantly outperforming conventional equalization methods. Furthermore, sea trial experiments validate the robustness of the proposed approach in real shallow-water environments. By avoiding signal inversion, FRSE achieves improved detection reliability and strong robustness against non-uniform Doppler effects, highlighting its potential for practical underwater acoustic communication applications. Full article
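The least-squares decision step described here can be sketched as follows, with a simple FIR convolution standing in for the forward channel model (the actual method models non-uniform Doppler distortion, which is not reproduced):

```python
import numpy as np

# Forward-model every candidate symbol block through the estimated
# channel to build a reference matrix, then pick the reference closest
# to the received block in the least-squares sense.

def build_references(symbol_blocks, channel_ir):
    """Forward-model each candidate transmitted block through the channel."""
    return np.array([np.convolve(b, channel_ir, mode="full") for b in symbol_blocks])

def frse_decide(received, references):
    """Least-squares decision: index of the closest reference sample."""
    errors = np.sum((references - received) ** 2, axis=1)
    return int(np.argmin(errors))

channel = np.array([1.0, 0.4, 0.1])   # hypothetical estimated impulse response
candidates = np.array([[1, 1, -1, -1], [1, -1, 1, -1],
                       [-1, 1, 1, -1], [-1, -1, 1, 1]], dtype=float)
refs = build_references(candidates, channel)

rng = np.random.default_rng(1)
received = refs[2] + 0.05 * rng.standard_normal(refs.shape[1])
print(frse_decide(received, refs))  # -> 2
```

Note the design choice the abstract emphasizes: the received signal is never inverted; distortion is applied to the candidates instead, so the decision compares like with like.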

27 pages, 12800 KB  
Article
Olfactory Enrichment of Captive Pygmy Hippopotamuses with Applied Machine Learning
by Jonas Nielsen, Frej Gammelgård, Silje Marquardsen Lund, Anja Sofie Banasik Præstekær, Astrid Vinterberg Frandsen, Camilla Strandqvist, Mikkel Haugaard Nielsen, Rasmus Nikolajgaard Olsen, Sussie Pagh, Thea Loumand Faddersbøll and Cino Pertoldi
Animals 2026, 16(3), 385; https://doi.org/10.3390/ani16030385 - 26 Jan 2026
Abstract
The pygmy hippopotamus (Choeropsis liberiensis, Morton, 1849) is classified as Endangered by the International Union for the Conservation of Nature (IUCN). Compared to other large, threatened mammals, this species remains relatively understudied, and new findings indicate potential welfare concerns, emphasizing the need for further research on the species' welfare in zoological institutions. One approach to improving welfare in captivity is through environmental enrichment. This study investigated the effects of olfactory enrichment on three individual pygmy hippopotamuses through behavioral analysis and heat-map visualization. Continuous focal sampling showed that several behaviors were influenced by the stimuli, with a general decrease in inactivity and an increase in environmental engagement and interaction, particularly through scenting behavior. To further enhance behavioral quantification, machine learning techniques were applied to video data, comparing manual and automated behavior classification using the pose estimation program SLEAP. Four behaviors (Standing, Locomotion, Feeding/Foraging, and Lying Down) were compared. A confusion matrix, time budgets, and Kendall's Coefficient of Concordance (W) were used to assess agreement between methods. The results showed strong and moderate agreement between manual and automated annotations for the female and the calf, respectively. This demonstrates the potential of automation to complement behavioral observations in future welfare monitoring. Full article
(This article belongs to the Section Animal System and Management)
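Kendall's Coefficient of Concordance (W), used here to assess agreement between methods, has a standard closed form; a minimal sketch without tie corrections, on hypothetical time budgets:

```python
import numpy as np

# Kendall's W = 12*S / (m^2 * (n^3 - n)) for m raters ranking n items,
# where S is the sum of squared deviations of the per-item rank sums.
# Tie corrections are omitted; the time budgets below are hypothetical.

def kendalls_w(ratings):
    """ratings: (m raters, n items) score matrix; returns W in [0, 1]."""
    m, n = ratings.shape
    ranks = ratings.argsort(axis=1).argsort(axis=1) + 1  # ranks within each rater
    rank_sums = ranks.sum(axis=0)
    s = np.sum((rank_sums - rank_sums.mean()) ** 2)
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical time budgets (%) for four behaviors from two methods.
manual    = np.array([40.0, 25.0, 20.0, 15.0])
automated = np.array([38.0, 27.0, 18.0, 17.0])
print(kendalls_w(np.vstack([manual, automated])))  # -> 1.0: identical rank orders
```

W is rank-based, so the two methods here agree perfectly (W = 1.0) despite their raw time budgets differing; opposite orderings would give W = 0.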

24 pages, 25956 KB  
Article
Geometric Prior-Guided Multimodal Spatiotemporal Adaptive Motion Estimation for Monocular Vision-Based MAVs
by Yu Luo, Hao Cha, Hongwei Fu, Tingting Fu, Bin Tian and Huatao Tang
Drones 2026, 10(2), 83; https://doi.org/10.3390/drones10020083 - 25 Jan 2026
Abstract
Estimating the relative position and velocity of micro aerial vehicles (MAVs) using visual signals is a critical issue in numerous tasks. However, traditional relative motion estimation algorithms suffer severely from non-Gaussian noise interference and have limited observability, making it difficult to meet the practical requirements of complex dynamic scenarios. To address this dilemma, this paper proposes a Multimodal Decoupled Spatiotemporal Adaptive Network (MDSAN). Designed for air-to-air scenarios, MDSAN achieves high-precision relative pose and velocity estimation of dynamic MAVs while overcoming the observability limitations of traditional algorithms. In detail, MDSAN is collaboratively composed of two core sub-modules: Modality-Specific Convolutional Normalization (MSCN) blocks and Spatiotemporal Adaptive State (STAS) blocks. Specifically, MSCN uses custom convolution kernels tailored to three modalities—visual, physical, and geometric—to separate their features. This prevents interference between modalities and reduces non-Gaussian noise. STAS, built on a state-space model, combines two key functions: it tracks long-term MAV motion trends over time and strengthens the synergy between different modal features across space. Adaptive weights balance these two functions, enabling stable estimation, even when traditional methods struggle with low observability. Furthermore, MDSAN adopts a full-vision multimodal fusion scheme, completely eliminating the dependence on wireless communication and reducing hardware costs. Extensive experimental results demonstrate that MDSAN achieves the best performance in all scenarios, significantly outperforming existing motion estimation algorithms. It provides a new technical path that balances high precision, high robustness, and cost-effectiveness for technologies such as MAV swarm perception. Full article
(This article belongs to the Section Artificial Intelligence in Drones (AID))

22 pages, 2025 KB  
Article
Vision-Based Unmanned Aerial Vehicle Swarm Cooperation and Online Point-Cloud Registration for Global Localization in Global Navigation Satellite System-Intermittent Environments
by Gonzalo Garcia and Azim Eskandarian
Drones 2026, 10(1), 65; https://doi.org/10.3390/drones10010065 - 19 Jan 2026
Abstract
Reliable autonomy for drones operating in GNSS-intermittent or denied environments requires both stable inter-vehicle coordination and a shared global understanding of the environment. This paper presents a unified vision-based framework in which UAVs use biologically inspired swarm behaviors together with online monocular point-cloud registration to achieve real-time global localization. First, we apply a passive-perception strategy, bird-inspired drone swarm-keeping, enabling each UAV to estimate the relative motion and proximity of its neighbors using only monocular visual cues. This decentralized mechanism provides cohesive and collision-free group motion without GNSS, active ranging, or explicit communication. Second, we integrate this capability with a cooperative mapping pipeline in which one or more drones acting as global anchors generate a globally referenced monocular SLAM map. Vehicles lacking global positioning progressively align their locally generated point clouds to this shared global reference using an iterative registration strategy, allowing them to infer consistent global poses online. Other autonomous vehicles optionally contribute complementary viewpoints, but UAVs remain the core autonomous agents driving both mapping and coordination due to their privileged visual perspective. Experimental validation in simulation and indoor testbeds with drones demonstrates that the integrated system maintains swarm cohesion, improves spatial alignment by more than a factor of four over baseline monocular SLAM, and preserves reliable global localization throughout extended GNSS outages. The results highlight a scalable, lightweight, and vision-based approach to resilient UAV autonomy in tunnels, industrial environments, and other GNSS-challenged settings. Full article
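The core of aligning a local point cloud to a shared global reference is a rigid-transform fit; a single closed-form Kabsch/SVD step (the paper uses an iterative registration strategy on monocular SLAM maps, not this one-shot fit with known correspondences) can be sketched as:

```python
import numpy as np

# One closed-form rigid alignment: find (R, t) minimizing
# ||R @ local + t - global_ref|| given point correspondences.

def kabsch(local, global_ref):
    """Rigid transform (R, t) aligning `local` onto `global_ref`."""
    mu_l, mu_g = local.mean(axis=0), global_ref.mean(axis=0)
    H = (local - mu_l).T @ (global_ref - mu_g)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_g - R @ mu_l

rng = np.random.default_rng(0)
cloud = rng.standard_normal((100, 3))            # local point cloud
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])
global_cloud = cloud @ R_true.T + t_true          # globally referenced copy

R_est, t_est = kabsch(cloud, global_cloud)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```

Iterative registration repeats a step like this while re-estimating correspondences, which is what lets a drone without global positioning converge onto the anchors' shared map.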

22 pages, 92351 KB  
Article
Robust Self-Supervised Monocular Depth Estimation via Intrinsic Albedo-Guided Multi-Task Learning
by Genki Higashiuchi, Tomoyasu Shimada, Xiangbo Kong and Hiroyuki Tomiyama
Appl. Sci. 2026, 16(2), 714; https://doi.org/10.3390/app16020714 - 9 Jan 2026
Abstract
Self-supervised monocular depth estimation has demonstrated high practical utility, as it can be trained with a photometric image reconstruction loss between the original image and a reprojected image generated from the estimated depth and relative pose, thereby alleviating the burden of large-scale label creation. However, this photometric reconstruction loss relies on the Lambertian reflectance assumption. Under non-Lambertian conditions such as specular reflections or strong illumination gradients, pixel values fluctuate with lighting and viewpoint, which often misguides training and leads to large depth errors. To address this issue, we propose a multi-task learning framework that integrates albedo estimation as a supervised auxiliary task. The proposed framework is implemented on top of representative self-supervised monocular depth estimation backbones, including Monodepth2 and Lite-Mono, by adopting a multi-head architecture in which the shared encoder–decoder branches at each upsampling block into a Depth Head and an Albedo Head. Furthermore, we apply Intrinsic Image Decomposition to generate albedo images and design an albedo supervision loss that uses these albedo maps as training targets for the Albedo Head. We then integrate this loss term into the overall training objective, explicitly exploiting illumination-invariant albedo components to suppress erroneous learning in reflective regions and areas with strong illumination gradients. Experiments on the ScanNetV2 dataset demonstrate that, for the lightweight backbone Lite-Mono, our method achieves an average reduction of 18.5% across the four standard depth-error metrics and consistently improves accuracy metrics, without increasing the number of parameters or FLOPs at inference time. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Computer Vision)
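The reprojection underlying the photometric loss — back-project each pixel with its estimated depth, transform by the relative pose, and project into the source view — can be sketched with a pinhole camera model. This is a generic illustration of the geometry, not code from the paper; the function name and dense-grid formulation are assumptions.

```python
import numpy as np

def reproject(depth, K, R, t):
    # Back-project each pixel of the target view to 3-D using its depth,
    # transform by the relative pose (R, t), and project into the source view.
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix              # normalised camera rays
    pts = rays * depth.reshape(1, -1)          # 3-D points in the target frame
    pts_src = R @ pts + t.reshape(3, 1)        # points in the source frame
    proj = K @ pts_src
    uv = proj[:2] / proj[2:3]                  # perspective division
    return uv.T.reshape(H, W, 2)               # source-view sampling coordinates
```

Sampling the source image at these coordinates yields the reprojected image; the photometric loss then compares it with the target image, which is exactly where the Lambertian assumption enters — the comparison is only meaningful if surface appearance is viewpoint-independent.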

15 pages, 4002 KB  
Article
LiDAR–Visual–Inertial Multi-UGV Collaborative SLAM Framework
by Hongyu Wei, Pingfan Wu, Xutong Zhang, Jianyong Zheng, Jianzheng Zhang and Kun Wei
Drones 2026, 10(1), 31; https://doi.org/10.3390/drones10010031 - 5 Jan 2026
Viewed by 883
Abstract
The collaborative execution of tasks by multiple Unmanned Ground Vehicles (UGVs) has become a development trend in the field of unmanned systems. Existing collaborative Simultaneous Localization and Mapping (C-SLAM) frameworks mainly employ visual–inertial or LiDAR–inertial methods, while frameworks that fuse all three sensor types (LiDAR, visual, and inertial) remain relatively uncommon. As a result, existing systems often fail to achieve robust and accurate global localization in real-world environments. To address this issue, a LiDAR–visual–inertial multi-UGV collaborative SLAM framework is proposed in this paper. The system is divided into three parts. The first part constructs a front-end odometry that integrates the raw measurements from the LiDAR, visual, and inertial sensors, providing accurate initial pose estimation and local mapping for each UGV in the collaborative system. The second part uses the similarity between the different local maps to form a global map of the environment. The third part performs global localization and map optimization for the multi-UGV system. A series of real-world experiments were conducted to verify the effectiveness of the proposed framework. Over an average trajectory length of 237 m, the framework achieves a mean Absolute Pose Error (APE) of 1.49 m and a mean Relative Pose Error (RPE) of 1.68° after global optimization. The experimental results demonstrate superior collaborative localization and mapping performance, with the mean APE reduced by 5.4% and the mean RPE reduced by 1.4% compared to other methods. Full article
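The APE and RPE metrics reported above have standard definitions; a minimal sketch of how they are typically computed is shown below (translational APE and rotational RPE, assuming the estimated trajectory has already been aligned to ground truth — the function names are illustrative, not from the paper).

```python
import numpy as np

def ape_translation(est, gt):
    # Absolute Pose Error: mean per-frame distance between estimated
    # and ground-truth positions (N x 3 arrays, pre-aligned).
    return np.linalg.norm(est - gt, axis=1).mean()

def rpe_rotation_deg(est_R, gt_R):
    # Relative Pose Error (rotation): angle of the residual rotation
    # between consecutive-frame motions, averaged, in degrees.
    errs = []
    for i in range(len(est_R) - 1):
        d_est = est_R[i].T @ est_R[i + 1]    # estimated frame-to-frame motion
        d_gt = gt_R[i].T @ gt_R[i + 1]       # ground-truth frame-to-frame motion
        resid = d_est.T @ d_gt               # residual rotation
        angle = np.arccos(np.clip((np.trace(resid) - 1.0) / 2.0, -1.0, 1.0))
        errs.append(np.degrees(angle))
    return float(np.mean(errs))
```

APE captures global drift, while RPE measures local consistency of the odometry, which is why both are reported when evaluating the effect of global optimization.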

12 pages, 599 KB  
Article
Toxic and Trace Elements in Raw and Cooked Bluefish (Pomatomus saltatrix) from the Black Sea: Benefit–Risk Analysis
by Katya Peycheva, Veselina Panayotova, Tatyana Hristova, Diana A. Dobreva, Tonika Stoycheva, Rositsa Stancheva, Stanislava Georgieva, Evgeni Andreev, Silviya Nikolova and Albena Merdzhanova
Foods 2026, 15(1), 140; https://doi.org/10.3390/foods15010140 - 2 Jan 2026
Viewed by 630
Abstract
This study evaluated the effects of domestic cooking methods (pan-frying, smoking, and grilling) on the concentrations of elements of toxicological concern and essential elements (Cd, Cr, Cu, Fe, Mn, Ni, Zn, and Pb) in the traditionally consumed Black Sea bluefish (Pomatomus saltatrix). The investigation also included an assessment of the associated health risks and benefits by calculating carcinogenic and non-carcinogenic effects as well as benefit–risk ratios. Toxic heavy metals such as Cd, Ni, and Pb were found to be below the maximum residue limits (MRLs) established by the relevant food safety authorities. Cooking generally increased the concentrations of both essential and toxic elements relative to raw samples, with the largest increases observed in grilled and smoked samples. Furthermore, evaluations of (a) estimated weekly intakes (EWIs), (b) target hazard quotients (THQs) for Cd, Cr, Cu, Fe, Mn, Ni, Pb, and Zn, and (c) benefit–risk quotients relating essential fatty acid intake to toxic elements (HQEFA) indicated that consumption of cooked bluefish does not pose significant health risks to consumers. Full article
(This article belongs to the Special Issue Risk Assessment in Food Safety)
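The EWI and THQ metrics used in this assessment follow standard exposure arithmetic; a minimal sketch is given below. The parameter defaults (exposure frequency, duration, averaging time) follow the common US-EPA form, but the specific values and function names here are illustrative assumptions, not the study's exact inputs.

```python
def estimated_weekly_intake(conc_mg_per_kg, weekly_fish_g, body_weight_kg):
    # EWI: mg of element ingested per kg body weight per week.
    return conc_mg_per_kg * (weekly_fish_g / 1000.0) / body_weight_kg

def target_hazard_quotient(conc_mg_per_kg, daily_fish_g, rfd_mg_per_kg_day,
                           body_weight_kg, ef=365, ed=70, at=365 * 70):
    # US-EPA style THQ: (EF * ED * FIR * C) / (RfD * BW * AT).
    # THQ < 1 indicates no appreciable non-carcinogenic risk.
    fir_kg = daily_fish_g / 1000.0            # fish ingestion rate, kg/day
    return (ef * ed * fir_kg * conc_mg_per_kg) / (
        rfd_mg_per_kg_day * body_weight_kg * at)
```

Note that with EF x ED equal to AT (lifetime daily exposure), the THQ reduces to daily intake divided by the reference-dose allowance, which mirrors the HQ = EDI/ADI form used in dietary pesticide assessments.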
