Search Results (216)

Search Parameters:
Keywords = loop closure

19 pages, 1126 KB  
Article
Innovative Integrated Model of Industrial Wastewater Treatment with the Circular Use of Cerium Compounds as Multifunctional Coagulants: Comprehensive Assessment of the Process and Environmental and Economic Aspects
by Paweł Lejwoda, Barbara Białecka, Anna Śliwińska, Piotr Krawczyk and Maciej Thomas
Molecules 2025, 30(16), 3428; https://doi.org/10.3390/molecules30163428 - 20 Aug 2025
Viewed by 628
Abstract
This article presents an innovative method for phosphate(V) removal from industrial wastewater using cerium(III) chloride as a coagulant, integrated with reagent recovery. The process combines coagulation, acid extraction, and multistage recovery of cerium and phosphorus, enabling partial reagent loop closure. Based on our previously published studies, at an optimised dose (81.9 mg Ce3+/L), phosphate(V) removal reached 99.86%, removal of total phosphorus (the sum of all phosphorus forms, expressed as elemental P) reached 99.56%, and 99.94% of the added cerium was retained in the sludge. Reductions were also observed for TSS (96.67%), turbidity (98.18%), and COD (81.86%). The sludge (101.5 g Ce/kg, 22.2 g P/kg) was extracted with HCl, transferring 99.6% of cerium and 97.5% of phosphorus to the solution. Cerium was recovered as cerium(III) oxalate and thermally decomposed to cerium(IV) oxide. Redissolution in HCl and H2O2 yielded cerium(III) chloride (97.0% recovery and 98.6% purity). The HCl used for extraction can be regenerated on-site from chlorine and hydrogen obtained from gas streams, improving material efficiency. Life cycle assessment (LCA) showed environmental benefits related to eutrophication reduction but burdens from reagent use (notably HCl and oxalic acid). Although costlier than conventional precipitation, this method may suit large-scale applications requiring high phosphorus removal, low sludge, and alignment with circular economy goals. Full article

27 pages, 5515 KB  
Article
Optimizing Multi-Camera Mobile Mapping Systems with Pose Graph and Feature-Based Approaches
by Ahmad El-Alailyi, Luca Morelli, Paweł Trybała, Francesco Fassi and Fabio Remondino
Remote Sens. 2025, 17(16), 2810; https://doi.org/10.3390/rs17162810 - 13 Aug 2025
Viewed by 620
Abstract
Multi-camera Visual Simultaneous Localization and Mapping (V-SLAM) increases spatial coverage through multi-view image streams, improving localization accuracy and reducing data acquisition time. Despite its speed and general robustness, V-SLAM often struggles to achieve the precise camera poses necessary for accurate 3D reconstruction, especially in complex environments. This study introduces two novel multi-camera optimization methods to enhance pose accuracy, reduce drift, and ensure loop closures. These methods refine multi-camera V-SLAM outputs within existing frameworks and are evaluated in two configurations: (1) multiple independent stereo V-SLAM instances operating on separate camera pairs; and (2) multi-view odometry processing all camera streams simultaneously. The proposed optimizations include (1) a multi-view feature-based optimization that integrates V-SLAM poses with rigid inter-camera constraints and bundle adjustment; and (2) a multi-camera pose graph optimization that fuses multiple trajectories using relative pose constraints and robust noise models. Validation is conducted through two complex 3D surveys using the ATOM-ANT3D multi-camera fisheye mobile mapping system. Results demonstrate survey-grade accuracy comparable to traditional photogrammetry, with reduced computational time, advancing toward near real-time 3D mapping of challenging environments. Full article
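The pose graph optimization with relative pose constraints mentioned in this abstract can be illustrated with a deliberately tiny example. The sketch below optimizes a 1D pose graph by least squares: drifting odometry edges are corrected by a single loop-closure edge. It illustrates the general pose-graph idea only, not the paper's SE(3) formulation with robust noise models; all values are made up.

```python
# Minimal 1D pose-graph optimization: poses on a line, adjusted so that
# odometry constraints and a loop-closure constraint agree in the
# least-squares sense. Illustrative sketch only.

def optimize_pose_graph(num_poses, edges, iters=2000, lr=0.05):
    """edges: list of (i, j, m) meaning x[j] - x[i] should equal m.
    Pose 0 is fixed at 0 to anchor the gauge freedom."""
    x = [0.0] * num_poses
    for _ in range(iters):
        grad = [0.0] * num_poses
        for i, j, m in edges:
            r = (x[j] - x[i]) - m   # residual of this constraint
            grad[j] += 2 * r
            grad[i] -= 2 * r
        for k in range(1, num_poses):  # keep pose 0 anchored
            x[k] -= lr * grad[k]
    return x

# Drifting odometry claims each step moves +1.02, but a loop closure
# observes that pose 3 lies exactly 3.0 ahead of pose 0.
edges = [(0, 1, 1.02), (1, 2, 1.02), (2, 3, 1.02), (0, 3, 3.00)]
poses = optimize_pose_graph(4, edges)
# The optimum spreads the 0.06 of accumulated drift across all edges,
# pulling pose 3 from the dead-reckoned 3.06 back toward 3.0.
```

The same structure scales up: real back-ends replace scalars with SE(3) poses, add robust kernels to down-weight bad loop closures, and solve with Gauss–Newton rather than gradient descent.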

17 pages, 7341 KB  
Article
Three-Dimensional Environment Mapping with a Rotary-Driven Lidar in Real Time
by Baixin Tong, Fangdi Jiang, Bo Lu, Zhiqiang Gu, Yan Li and Shifeng Wang
Sensors 2025, 25(15), 4870; https://doi.org/10.3390/s25154870 - 7 Aug 2025
Viewed by 754
Abstract
Three-dimensional environment reconstruction refers to the creation of mathematical models of three-dimensional objects suitable for computer representation and processing. This paper proposes a novel 3D environment reconstruction approach that addresses the field-of-view limitations commonly faced by LiDAR-based systems. A rotary-driven LiDAR mechanism is designed to enable uniform and seamless full-field-of-view scanning, thereby overcoming blind spots in traditional setups. To complement the hardware, a multi-sensor fusion framework—LV-SLAM (LiDAR-Visual Simultaneous Localization and Mapping)—is introduced. The framework consists of two key modules: multi-threaded feature registration and a two-phase loop closure detection mechanism, both designed to enhance the system’s accuracy and robustness. Extensive experiments on the KITTI benchmark demonstrate that LV-SLAM outperforms state-of-the-art methods including LOAM, LeGO-LOAM, and FAST-LIO2. Our method reduces the average absolute trajectory error (ATE) from 6.90 m (LOAM) to 2.48 m, and achieves lower relative pose error (RPE), indicating improved global consistency and reduced drift. We further validate the system in real-world indoor and outdoor environments. Compared with fixed-angle scans, the rotary LiDAR mechanism produces more complete reconstructions with fewer occlusions. Geometric accuracy evaluation shows that the root mean square error between reconstructed and actual building dimensions remains below 5 cm. The proposed system offers a robust and accurate solution for high-fidelity 3D reconstruction, particularly suitable for GNSS-denied and structurally complex environments. Full article
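The absolute trajectory error (ATE) figures reported above are conventionally computed as the RMSE of position differences between an estimated and a ground-truth trajectory after time association and alignment. A minimal 2D version is sketched below, assuming already-aligned trajectories; the paper's exact evaluation protocol may differ.

```python
import math

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error between two
    time-associated, aligned trajectories of (x, y) positions."""
    assert len(estimated) == len(ground_truth)
    sq = [(ex - gx) ** 2 + (ey - gy) ** 2
          for (ex, ey), (gx, gy) in zip(estimated, ground_truth)]
    return math.sqrt(sum(sq) / len(sq))

# Toy trajectories: the estimate drifts slightly off the ground truth.
est = [(0.0, 0.0), (1.1, 0.0), (2.0, 0.2)]
gt  = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
err = ate_rmse(est, gt)
```

In practice tools also estimate the best rigid (or similarity) alignment between the trajectories before computing the error, which this sketch omits.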

16 pages, 4197 KB  
Review
Conformational Dynamics and Structural Transitions of Arginine Kinase: Implications for Catalysis and Allergen Control
by Sung-Min Kang
Life 2025, 15(8), 1248; https://doi.org/10.3390/life15081248 - 6 Aug 2025
Viewed by 485
Abstract
Arginine kinase is a key phosphagen kinase in invertebrates that facilitates rapid ATP regeneration by reversibly transferring phosphate groups between phosphoarginine and ADP. Structural studies have shown that the enzyme adopts distinct conformations in its ligand-free and ligand-bound states, known as the “open” and “closed” forms, respectively. These conformational changes are crucial for catalytic activity, enabling precise positioning of active-site residues and loop closure during phosphoryl transfer. Transition-state analog complexes have provided additional insights by mimicking intermediate states of catalysis, supporting the functional relevance of the open/closed structural model. Furthermore, studies across multiple species reveal how monomeric and dimeric forms of arginine kinase contribute to its allosteric regulation and substrate specificity. Beyond its metabolic role, arginine kinase is also recognized as a major allergen in crustaceans. Its structural uniqueness and absence in vertebrates make it a promising candidate for selective drug targeting. By integrating crystallographic data with functional context, this review highlights conserved features and species-specific variations of arginine kinase that may inform the design of inhibitors. Such molecules have the potential to serve both as antiparasitic agents and as novel therapeutics to manage crustacean-related allergic responses in humans. Full article
(This article belongs to the Section Proteins and Proteomics)

18 pages, 3315 KB  
Article
Real-Time Geo-Localization for Land Vehicles Using LIV-SLAM and Referenced Satellite Imagery
by Yating Yao, Jing Dong, Songlai Han, Haiqiao Liu, Quanfu Hu and Zhikang Chen
Appl. Sci. 2025, 15(15), 8257; https://doi.org/10.3390/app15158257 - 24 Jul 2025
Viewed by 427
Abstract
Existing Simultaneous Localization and Mapping (SLAM) algorithms provide precise local pose estimation and real-time scene reconstruction, widely applied in autonomous navigation for land vehicles. However, the odometry of SLAM algorithms exhibits localization drift and error divergence over long-distance operations due to the lack of inherent global constraints. In this paper, we propose a real-time geo-localization method for land vehicles, which relies only on a LiDAR-inertial-visual SLAM (LIV-SLAM) system and a referenced image. The proposed method enables long-distance navigation without requiring GPS or loop closure, while eliminating accumulated localization errors. To achieve this, the local map constructed by SLAM is projected in real time onto a downward-view image, and a highly efficient cross-modal matching algorithm is proposed to estimate the global position by aligning the projected local image to a geo-referenced satellite image. The cross-modal algorithm leverages dense texture orientation features, ensuring robustness against cross-modal distortion and local scene changes, and supports efficient correlation in the frequency domain for real-time performance. We also propose a novel adaptive Kalman filter (AKF) to integrate the global position provided by the cross-modal matching and the pose estimated by LIV-SLAM. The proposed AKF is designed to effectively handle observation delays and asynchronous updates while simultaneously rejecting the impact of erroneous matches through an Observation-Aware Gain Scaling (OAGS) mechanism. We verify the proposed algorithm on the R3LIVE and NCLT datasets, demonstrating superior computational efficiency, reliability, and accuracy compared to existing methods. Full article
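The idea of scaling a Kalman gain when an observation looks erroneous can be sketched in one dimension. The gate threshold and scaling rule below are illustrative assumptions in the general spirit of innovation gating, not the paper's actual OAGS design.

```python
# 1D Kalman measurement update whose gain is scaled down when the
# normalized innovation is implausibly large, so a wrong cross-modal
# match cannot yank the state far off. Toy values throughout.

def kf_update(x, p, z, r, gate=3.0):
    """x: state estimate, p: its variance, z: observation,
    r: observation noise variance, gate: innovation gate in std devs."""
    innovation = z - x
    s = p + r                      # innovation variance
    k = p / s                      # standard Kalman gain
    nis = abs(innovation) / s ** 0.5
    if nis > gate:                 # suspicious observation: shrink the gain
        k *= gate / nis
    x_new = x + k * innovation
    p_new = (1 - k) * p
    return x_new, p_new

x, p = kf_update(0.0, 1.0, 0.5, 1.0)            # plausible observation
x_bad, p_bad = kf_update(0.0, 1.0, 50.0, 1.0)   # likely erroneous match
```

With the plausible observation the filter behaves as usual; with the outlier, the scaled gain limits the state jump instead of trusting the bad match.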
(This article belongs to the Special Issue Navigation and Positioning Based on Multi-Sensor Fusion Technology)

33 pages, 4382 KB  
Article
A Distributed Multi-Robot Collaborative SLAM Method Based on Air–Ground Cross-Domain Cooperation
by Peng Liu, Yuxuan Bi, Caixia Wang and Xiaojiao Jiang
Drones 2025, 9(7), 504; https://doi.org/10.3390/drones9070504 - 18 Jul 2025
Viewed by 1017
Abstract
To overcome the limitations in the perception performance of individual robots and homogeneous robot teams, this paper presents a distributed multi-robot collaborative SLAM method based on air–ground cross-domain cooperation. By integrating environmental perception data from UAV and UGV teams across air and ground domains, this method enables more efficient, robust, and globally consistent autonomous positioning and mapping. First, to address the challenge of significant differences in the field of view between UAVs and UGVs, which complicates achieving a unified environmental understanding, this paper proposes an iterative registration method assisted by semantic and geometric features. This method calculates the correspondence probability of the air–ground loop closure keyframes using these features and iteratively computes the rotation angle and translation vector to determine the coordinate transformation matrix. The resulting matrix provides strong initialization for back-end optimization, which helps to significantly reduce global pose estimation errors. Next, to overcome the convergence difficulties and high computational complexity of large-scale distributed back-end nonlinear pose graph optimization, this paper introduces a multi-level partitioning majorization–minimization distributed pose graph optimization (DPGO) method incorporating loss kernel optimization. This method constructs a multi-level, balanced pose subgraph based on the coupling degree of robot nodes. Then, it uses the minimization substitution function of non-trivial loss kernel optimization to gradually converge the distributed pose graph optimization problem to a first-order critical point, thereby significantly improving global pose estimation accuracy. Finally, experimental results on benchmark SLAM datasets and the GRACO dataset demonstrate that the proposed method effectively integrates environmental feature information from air–ground cross-domain UAV and UGV teams, achieving high-precision global pose estimation and map construction. Full article

21 pages, 4044 KB  
Article
DK-SLAM: Monocular Visual SLAM with Deep Keypoint Learning, Tracking, and Loop Closing
by Hao Qu, Lilian Zhang, Jun Mao, Junbo Tie, Xiaofeng He, Xiaoping Hu, Yifei Shi and Changhao Chen
Appl. Sci. 2025, 15(14), 7838; https://doi.org/10.3390/app15147838 - 13 Jul 2025
Viewed by 633
Abstract
The performance of visual SLAM in complex, real-world scenarios is often compromised by unreliable feature extraction and matching when using handcrafted features. Although deep learning-based local features excel at capturing high-level information and perform well on matching benchmarks, they struggle with generalization in continuous motion scenes, adversely affecting loop detection accuracy. To address these issues, our DK-SLAM system employs a Model-Agnostic Meta-Learning (MAML) strategy to optimize the training of keypoint extraction networks, enhancing their adaptability to diverse environments. Additionally, we introduce a coarse-to-fine feature tracking mechanism for learned keypoints. It begins with a direct method to approximate the relative pose between consecutive frames, followed by a feature matching method for refined pose estimation. To mitigate cumulative positioning errors, DK-SLAM incorporates a novel online learning module that utilizes binary features for loop closure detection. This module dynamically identifies loop nodes within a sequence, ensuring accurate and efficient localization. Experimental evaluations on publicly available datasets demonstrate that DK-SLAM outperforms leading traditional and learning-based SLAM systems, such as ORB-SLAM3 and LIFT-SLAM. DK-SLAM achieves 17.7% better translation accuracy and 24.2% better rotation accuracy than ORB-SLAM3 on KITTI and 34.2% better translation accuracy on EuRoC. These results underscore the efficacy and robustness of our DK-SLAM in varied and challenging real-world environments. Full article
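Loop closure detection with binary features, as mentioned in this abstract, typically reduces to comparing compact bit-string descriptors by Hamming distance. The sketch below shows that core operation with toy 8-bit descriptors and an arbitrary threshold; DK-SLAM's learned descriptors and online selection logic are of course far richer.

```python
# Candidate loop-closure search over binary descriptors via Hamming
# distance. Descriptors and threshold are toy values for illustration.

def hamming(a, b):
    """Hamming distance between two equal-length bit strings stored as ints."""
    return bin(a ^ b).count("1")

def find_loop_candidates(query, keyframes, max_dist=2):
    """Indices of keyframes whose descriptor is within max_dist bits."""
    return [i for i, d in enumerate(keyframes) if hamming(query, d) <= max_dist]

keyframes = [0b10110100, 0b01001011, 0b10110110, 0b11111111]
cands = find_loop_candidates(0b10110101, keyframes)
# Keyframes 0 and 2 differ from the query by 1 and 2 bits respectively,
# so both survive the gate; the others are rejected.
```

Bit-string comparison is why binary features are attractive for online loop detection: XOR plus popcount is cheap enough to scan many keyframes per frame.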
(This article belongs to the Section Robotics and Automation)

20 pages, 3710 KB  
Article
An Accurate LiDAR-Inertial SLAM Based on Multi-Category Feature Extraction and Matching
by Nuo Li, Yiqing Yao, Xiaosu Xu, Shuai Zhou and Taihong Yang
Remote Sens. 2025, 17(14), 2425; https://doi.org/10.3390/rs17142425 - 12 Jul 2025
Viewed by 751
Abstract
Light Detection and Ranging (LiDAR)-inertial simultaneous localization and mapping (SLAM) is a critical component in multi-sensor autonomous navigation systems, providing both accurate pose estimation and detailed environmental understanding. Despite its importance, existing optimization-based LiDAR-inertial SLAM methods often face key limitations: unreliable feature extraction, sensitivity to noise and sparsity, and the inclusion of redundant or low-quality feature correspondences. These weaknesses hinder their performance in complex or dynamic environments and fail to meet the reliability requirements of autonomous systems. To overcome these challenges, we propose a novel and accurate LiDAR-inertial SLAM framework with three major contributions. First, we employ a robust multi-category feature extraction method based on principal component analysis (PCA), which effectively filters out noisy and weakly structured points, ensuring stable feature representation. Second, to suppress outlier correspondences and enhance pose estimation reliability, we introduce a coarse-to-fine two-stage feature correspondence selection strategy that evaluates geometric consistency and structural contribution. Third, we develop an adaptive weighted pose estimation scheme that considers both distance and directional consistency, improving the robustness of feature matching under varying scene conditions. These components are jointly optimized within a sliding-window-based factor graph, integrating LiDAR feature factors, IMU pre-integration, and loop closure constraints. Extensive experiments on public datasets (KITTI, M2DGR) and a custom-collected dataset validate the proposed method's effectiveness. Results show that our system consistently outperforms state-of-the-art approaches in accuracy and robustness, particularly in scenes with sparse structure, motion distortion, and dynamic interference, demonstrating its suitability for reliable real-world deployment. Full article
(This article belongs to the Special Issue LiDAR Technology for Autonomous Navigation and Mapping)

18 pages, 24638 KB  
Article
Accelerating Wound Healing Through Deep Reinforcement Learning: A Data-Driven Approach to Optimal Treatment
by Fan Lu, Ksenia Zlobina, Prabhat Baniya, Houpu Li, Nicholas Rondoni, Narges Asefifeyzabadi, Wan Shen Hee, Maryam Tebyani, Kaelan Schorger, Celeste Franco, Michelle Bagood, Mircea Teodorescu, Marco Rolandi, Rivkah Isseroff and Marcella Gomez
Bioengineering 2025, 12(7), 756; https://doi.org/10.3390/bioengineering12070756 - 11 Jul 2025
Viewed by 545
Abstract
Advancements in bioelectronic sensors and actuators have paved the way for real-time monitoring and control of the progression of wound healing. Real-time monitoring allows for precise adjustment of treatment strategies to align them with an individual’s unique biological response. However, due to the complexities of human–drug interactions and a lack of predictive models, it is challenging to determine how one should adjust drug dosage to achieve the desired biological response. This work proposes an adaptive closed-loop control framework that integrates deep learning, optimal control, and reinforcement learning to update treatment strategies in real time, with the goal of accelerating wound closure. The proposed approach eliminates the need for mathematical modeling of complex nonlinear wound-healing dynamics. We demonstrate the convergence of the controller via an in silico experimental setup, where the proposed approach successfully accelerated the wound-healing process by 17.71%. Finally, we share the experimental setup and results of an in vivo implementation to highlight the translational potential of our work. Our data-driven model suggests that the treatment strategy, as determined by our deep reinforcement learning algorithm, results in an accelerated onset of inflammation and subsequent transition to proliferation in a porcine wound model. Full article

18 pages, 16696 KB  
Technical Note
LIO-GC: LiDAR Inertial Odometry with Adaptive Ground Constraints
by Wenwen Tian, Juefei Wang, Puwei Yang, Wen Xiao and Sisi Zlatanova
Remote Sens. 2025, 17(14), 2376; https://doi.org/10.3390/rs17142376 - 10 Jul 2025
Viewed by 1170
Abstract
LiDAR-based simultaneous localization and mapping (SLAM) techniques are commonly applied in high-precision mapping and positioning for mobile platforms. However, the vertical resolution limitations of multi-beam spinning LiDAR sensors can significantly impair vertical estimation accuracy. This challenge is accentuated in scenarios involving fewer-line or cost-effective spinning LiDARs, where vertical features are sparse. To address this issue, we introduce LIO-GC, which effectively extracts ground features and integrates them into a factor graph to rectify vertical accuracy. Unlike conventional methods relying on geometric features for ground plane segmentation, our approach leverages a self-adaptive strategy that considers the uneven point cloud distribution and inconsistency due to ground fluctuations. By optimizing laser range factors, ground feature constraints, and loop closure factors using graph optimization frameworks, our method surpasses current approaches, demonstrating superior performance through evaluation on open-source and newly collected datasets. Full article

32 pages, 2740 KB  
Article
Vision-Based Navigation and Perception for Autonomous Robots: Sensors, SLAM, Control Strategies, and Cross-Domain Applications—A Review
by Eder A. Rodríguez-Martínez, Wendy Flores-Fuentes, Farouk Achakir, Oleg Sergiyenko and Fabian N. Murrieta-Rico
Eng 2025, 6(7), 153; https://doi.org/10.3390/eng6070153 - 7 Jul 2025
Cited by 1 | Viewed by 3370
Abstract
Camera-centric perception has matured into a cornerstone of modern autonomy, from self-driving cars and factory cobots to underwater and planetary exploration. This review synthesizes more than a decade of progress in vision-based robotic navigation through an engineering lens, charting the full pipeline from sensing to deployment. We first examine the expanding sensor palette—monocular and multi-camera rigs, stereo and RGB-D devices, LiDAR–camera hybrids, event cameras, and infrared systems—highlighting the complementary operating envelopes and the rise of learning-based depth inference. The advances in visual localization and mapping are then analyzed, contrasting sparse and dense SLAM approaches, as well as monocular, stereo, and visual–inertial formulations. Additional topics include loop closure, semantic mapping, and LiDAR–visual–inertial fusion, which enables drift-free operation in dynamic environments. Building on these foundations, we review the navigation and control strategies, spanning classical planning, reinforcement and imitation learning, hybrid topological–metric memories, and emerging visual language guidance. Application case studies—autonomous driving, industrial manipulation, autonomous underwater vehicles, planetary rovers, aerial drones, and humanoids—demonstrate how tailored sensor suites and algorithms meet domain-specific constraints. Finally, the future research trajectories are distilled: generative AI for synthetic training data and scene completion; high-density 3D perception with solid-state LiDAR and neural implicit representations; event-based vision for ultra-fast control; and human-centric autonomy in next-generation robots. By providing a unified taxonomy, a comparative analysis, and engineering guidelines, this review aims to inform researchers and practitioners designing robust, scalable, vision-driven robotic systems. Full article
(This article belongs to the Special Issue Interdisciplinary Insights in Engineering Research)

34 pages, 5774 KB  
Article
Approach to Semantic Visual SLAM for Bionic Robots Based on Loop Closure Detection with Combinatorial Graph Entropy in Complex Dynamic Scenes
by Dazheng Wang and Jingwen Luo
Biomimetics 2025, 10(7), 446; https://doi.org/10.3390/biomimetics10070446 - 6 Jul 2025
Viewed by 555
Abstract
In complex dynamic environments, the performance of SLAM systems on bionic robots is susceptible to interference from dynamic objects or structural changes in the environment. To address this problem, we propose a semantic visual SLAM (vSLAM) algorithm based on loop closure detection with combinatorial graph entropy. First, based on the dynamic feature detection results of YOLOv8-seg, feature points at the edges of dynamic objects are finely classified by calculating the mean absolute deviation (MAD) of the depth of the pixel points. Then, a high-quality keyframe selection strategy is constructed by combining the semantic information, the average coordinates of the semantic objects, and the degree of variation in the dense regions of feature points. Subsequently, the unweighted and weighted graphs of keyframes are constructed according to the distribution of feature points, characterization points, and semantic information, and a high-performance loop closure detection method based on combinatorial graph entropy is developed. The experimental results show that our loop closure detection approach exhibits higher precision and recall in real scenes compared to the bag-of-words (BoW) model. Compared with ORB-SLAM2, the absolute trajectory accuracy in high-dynamic sequences improved by an average of 97.01%, while the number of extracted keyframes decreased by an average of 61.20%. Full article
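Graph entropy, in its simplest form, summarizes a graph's structure as the Shannon entropy of a distribution derived from it. The sketch below uses the normalized degree distribution of a small keyframe-style graph; the paper's combinatorial graph entropy over weighted and unweighted keyframe graphs is more elaborate, so this is only an illustration of the underlying idea that structure can be compared via a scalar.

```python
import math

# Shannon entropy of a graph's normalized degree distribution:
# a minimal structural-entropy measure for comparing graphs.

def degree_entropy(edges, num_nodes):
    deg = [0] * num_nodes
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    total = sum(deg)
    h = 0.0
    for d in deg:
        if d:
            p = d / total
            h -= p * math.log2(p)
    return h

# A 4-node path and a 4-node star have the same edge count but
# different structure, which the entropy distinguishes.
h_path = degree_entropy([(0, 1), (1, 2), (2, 3)], 4)
h_star = degree_entropy([(0, 1), (0, 2), (0, 3)], 4)
```

The path's more even degree distribution yields higher entropy than the star's hub-dominated one, which is the kind of discriminative signal a graph-entropy loop detector exploits.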
(This article belongs to the Special Issue Artificial Intelligence for Autonomous Robots: 3rd Edition)

21 pages, 13446 KB  
Article
Field Evaluation of an Autonomous Mobile Robot for Navigation and Mapping in Forest
by Diego Tiozzo Fasiolo, Lorenzo Scalera, Eleonora Maset and Alessandro Gasparetto
Robotics 2025, 14(7), 89; https://doi.org/10.3390/robotics14070089 - 27 Jun 2025
Viewed by 1068
Abstract
This paper presents a mobile robotic system designed for autonomous navigation and forest and tree trait estimation, with a focus on the location of individual trees and the diameter of the trunks. The system integrates light detection and ranging data and images using a framework based on simultaneous localization and mapping (SLAM) and a deep learning model for trunk segmentation and tree keypoint detection. Field experiments conducted in a wooded area in Udine, Italy, using a skid-steered mobile robot, demonstrate the effectiveness of the system in navigating while avoiding obstacles, even in cases where the Global Navigation Satellite System signal is not reliable. The results highlight that the proposed robotic system is capable of autonomously generating maps of forests as point clouds with minimal drift, thanks to the loop closure strategy integrated in the SLAM algorithm, and of estimating tree traits automatically. Full article
(This article belongs to the Special Issue Autonomous Robotics for Exploration)

25 pages, 2168 KB  
Article
A Study on the Evolution Game of Multi-Subject Knowledge Sharing Behavior in Open Innovation Ecosystems
by Gupeng Zhang, Hua Zou, Shuo Yang and Qiang Hou
Systems 2025, 13(7), 511; https://doi.org/10.3390/systems13070511 - 25 Jun 2025
Viewed by 416
Abstract
With the shift of the global innovation model from traditional closed-loop to open ecosystems, knowledge sharing and collaborative cooperation among firms have become key to obtaining sustainable competitive advantages. However, existing studies mostly focus on the static structure, and there is insufficient exploration of the dynamic evolutionary mechanism and multi-party game strategies. In this paper, a two-dimensional analysis framework integrating the evolutionary game and the Lotka–Volterra model is constructed to explore the behavioral and strategic evolution of core enterprises, SMEs, and the government in the innovation ecosystem. Through theoretical modeling and numerical simulation, the effects of different variables on system stability are revealed. It is found that a moderately balanced benefit allocation can stimulate two-way knowledge sharing, while an over- or under-allocation ratio will inhibit the synergy efficiency of the system; a moderate difference in knowledge stock can promote knowledge complementarity, but over-concentration will lead to monopoly and closure of the system; and government subsidies need to accurately match enterprises' costs of openness with the potential benefits to society, so that incentives do not go unused. Accordingly, it is suggested to optimize the competition structure among enterprises through a dynamic benefit distribution mechanism, knowledge-sharing platform construction, and classified subsidy policies; promote the evolution of the innovation ecosystem toward a balanced state of mutual benefit and symbiosis; and provide a theoretical basis and practical insights for the governance of open innovation ecosystems. Full article
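The strategic evolution this abstract describes is commonly modeled with replicator dynamics. The sketch below runs a one-population, two-strategy game (share vs. hoard knowledge) with made-up payoffs chosen to exhibit the bistability the abstract hints at; none of the numbers come from the paper.

```python
# Discretized replicator dynamics for a two-strategy knowledge-sharing
# game. Payoffs are illustrative assumptions: sharing pays off when
# enough others share (synergy minus an openness cost), hoarding
# free-rides a little on the sharers.

def replicator_step(x, dt=0.01):
    """x: fraction of firms that share. Returns the updated fraction."""
    payoff_share = 2.0 * x - 0.5   # synergy grows with sharers, minus cost
    payoff_hoard = 0.5 * x         # free-riding benefit
    avg = x * payoff_share + (1 - x) * payoff_hoard
    return x + dt * x * (payoff_share - avg)

def evolve(x0, steps=20000):
    x = x0
    for _ in range(steps):
        x = replicator_step(x)
    return x

# Interior equilibrium at x* = 1/3: below it the population drifts to
# no sharing, above it to universal sharing (bistability).
low = evolve(0.2)
high = evolve(0.5)
```

This is the qualitative mechanism behind the paper's policy levers: benefit allocation and subsidies shift the payoffs, which moves the unstable equilibrium and hence the basin from which the system can evolve toward full sharing.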

16 pages, 3055 KB  
Article
LET-SE2-VINS: A Hybrid Optical Flow Framework for Robust Visual–Inertial SLAM
by Wei Zhao, Hongyang Sun, Songsong Ma and Haitao Wang
Sensors 2025, 25(13), 3837; https://doi.org/10.3390/s25133837 - 20 Jun 2025
Viewed by 695
Abstract
This paper presents SE2-LET-VINS, an enhanced Visual–Inertial Simultaneous Localization and Mapping (VI-SLAM) system built upon the classic Visual–Inertial Navigation System for Monocular Cameras (VINS-Mono) framework, designed to improve localization accuracy and robustness in complex environments. By integrating Lightweight Neural Network (LET-NET) for high-quality feature extraction and Special Euclidean Group in 2D (SE2) optical flow tracking, the system achieves superior performance in challenging scenarios such as low lighting and rapid motion. The proposed method processes Inertial Measurement Unit (IMU) data and camera data, utilizing pre-integration and RANdom SAmple Consensus (RANSAC) for precise feature matching. Experimental results on the European Robotics Challenges (EuRoc) dataset demonstrate that the proposed hybrid method improves localization accuracy by up to 43.89% compared to the classic VINS-Mono model in sequences with loop closure detection. In no-loop scenarios, the method also achieves error reductions of 29.7%, 21.8%, and 24.1% on the MH_04, MH_05, and V2_03 sequences, respectively. Trajectory visualization and Gaussian fitting analysis further confirm the system’s good robustness and accuracy. SE2-LET-VINS offers a robust solution for visual–inertial navigation, particularly in demanding environments, and paves the way for future real-time applications and extended capabilities. Full article
(This article belongs to the Section Navigation and Positioning)
