Search Results (183)

Search Parameters:
Keywords = indoor autonomous navigation

20 pages, 2568 KB  
Article
Towards Spatial Awareness: Real-Time Sensory Augmentation with Smart Glasses for Visually Impaired Individuals
by Nadia Aloui
Electronics 2025, 14(17), 3365; https://doi.org/10.3390/electronics14173365 - 25 Aug 2025
Viewed by 148
Abstract
This research presents an innovative Internet of Things (IoT) and artificial intelligence (AI) platform designed to provide holistic assistance and foster autonomy for visually impaired individuals within the university environment. Its main novelty is real-time sensory augmentation and spatial awareness, integrating ultrasonic, LiDAR, and RFID sensors for robust 360° obstacle detection, environmental perception, and precise indoor localization. A novel, optimized Dijkstra algorithm calculates optimal routes; speech and intent recognition enable intuitive voice control. The wearable smart glasses are complemented by a platform providing essential educational functionalities, including lesson reminders, timetables, and emergency assistance. Based on gamified principles of exploration and challenge, the platform includes immersive technology settings, intelligent image recognition, auditory conversion, haptic feedback, and rapid contextual awareness, delivering a sophisticated, effective navigational experience. A thorough technical evaluation reveals notable improvements in navigation performance, object detection accuracy, and social interaction capabilities, enabling a more autonomous and fulfilling university experience.
(This article belongs to the Section Computer Science & Engineering)
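
The abstract does not detail the optimized Dijkstra variant, so as a rough sketch, a baseline Dijkstra over a hypothetical campus waypoint graph (node names and edge costs invented for illustration) might look like this in Python:

```python
import heapq

def dijkstra(graph, start, goal):
    """Baseline Dijkstra over an adjacency dict {node: [(neighbor, cost), ...]}."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    if goal not in dist:
        return None
    # Reconstruct the route by walking predecessors back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Hypothetical campus waypoint graph; edge costs in meters.
campus = {
    "entrance": [("hall_A", 12.0), ("hall_B", 20.0)],
    "hall_A": [("lecture_101", 8.0)],
    "hall_B": [("lecture_101", 5.0)],
    "lecture_101": [],
}
print(dijkstra(campus, "entrance", "lecture_101"))  # ['entrance', 'hall_A', 'lecture_101']
```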

21 pages, 6890 KB  
Article
SOAR-RL: Safe and Open-Space Aware Reinforcement Learning for Mobile Robot Navigation in Narrow Spaces
by Minkyung Jun, Piljae Park and Hoeryong Jung
Sensors 2025, 25(17), 5236; https://doi.org/10.3390/s25175236 - 22 Aug 2025
Viewed by 475
Abstract
As human–robot shared service environments become increasingly common, autonomous navigation in narrow space environments (NSEs), such as indoor corridors and crosswalks, becomes challenging. Mobile robots must go beyond reactive collision avoidance and interpret surrounding risks to proactively select safer routes in dynamic and spatially constrained environments. This study proposes a deep reinforcement learning (DRL)-based navigation framework that enables mobile robots to interact with pedestrians while identifying and traversing open and safe spaces. The framework fuses 3D LiDAR and RGB camera data to recognize individual pedestrians and estimate their position and velocity in real time. Based on this, a human-aware occupancy map (HAOM) is constructed, combining both static obstacles and dynamic risk zones, and used as the input state for DRL. To promote proactive and safe navigation behaviors, we design a state representation and reward structure that guide the robot toward less risky areas, overcoming the limitations of traditional approaches. The proposed method is validated through a series of simulation experiments, including straight, L-shaped, and cross-shaped layouts, designed to reflect typical narrow space environments. Various dynamic obstacle scenarios were incorporated during both training and evaluation. The results demonstrate that the proposed approach significantly improves navigation success rates and reduces collision incidents compared to conventional navigation planners across diverse NSE conditions.
(This article belongs to the Section Navigation and Positioning)
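
The paper's actual state encoding and reward are not given here; the following minimal sketch illustrates the general shape of a risk-aware reward of the kind described: progress toward the goal, a penalty that grows as clearance shrinks, and a large collision penalty (all weights and the clearance model are assumptions):

```python
def soar_like_reward(dist_to_goal, prev_dist_to_goal, min_clearance, collided,
                     w_progress=1.0, w_risk=0.5, safe_radius=1.0):
    """Illustrative risk-aware reward, not the paper's actual function."""
    if collided:
        return -10.0
    progress = prev_dist_to_goal - dist_to_goal   # positive when approaching the goal
    risk = max(0.0, safe_radius - min_clearance)  # grows as clearance to risk zones shrinks
    return w_progress * progress - w_risk * risk
```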

27 pages, 7285 KB  
Article
Towards Biologically-Inspired Visual SLAM in Dynamic Environments: IPL-SLAM with Instance Segmentation and Point-Line Feature Fusion
by Jian Liu, Donghao Yao, Na Liu and Ye Yuan
Biomimetics 2025, 10(9), 558; https://doi.org/10.3390/biomimetics10090558 - 22 Aug 2025
Viewed by 305
Abstract
Simultaneous Localization and Mapping (SLAM) is a fundamental technique in mobile robotics, enabling autonomous navigation and environmental reconstruction. However, dynamic elements in real-world scenes—such as walking pedestrians, moving vehicles, and swinging doors—often degrade SLAM performance by introducing unreliable features that cause localization errors. In this paper, we define dynamic regions as areas in the scene containing moving objects, and dynamic features as the visual features extracted from these regions that may adversely affect localization accuracy. Inspired by biological perception strategies that integrate semantic awareness and geometric cues, we propose Instance-level Point-Line SLAM (IPL-SLAM), a robust visual SLAM framework for dynamic environments. The system employs YOLOv8-based instance segmentation to detect potential dynamic regions and construct semantic priors, while simultaneously extracting point and line features using Oriented FAST (Features from Accelerated Segment Test) and Rotated BRIEF (Binary Robust Independent Elementary Features), collectively known as ORB, and Line Segment Detector (LSD) algorithms. Motion consistency checks and angular deviation analysis are applied to filter dynamic features, and pose optimization is conducted using an adaptive-weight error function. A static semantic point cloud map is further constructed to enhance scene understanding. Experimental results on the TUM RGB-D dataset demonstrate that IPL-SLAM significantly outperforms existing dynamic SLAM systems—including DS-SLAM and ORB-SLAM2—in terms of trajectory accuracy and robustness in complex indoor environments.
(This article belongs to the Section Biomimetic Design, Constructions and Devices)
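
The motion consistency check is not specified in detail here; one common baseline it may resemble is an epipolar-consistency test, sketched below with OpenCV (the threshold and RANSAC settings are assumptions, and the real system additionally uses instance masks and line features):

```python
import cv2

def filter_dynamic_points(pts_prev, pts_curr, thresh=1.0):
    """Illustrative motion-consistency check: matched keypoints (Nx2 float32
    arrays) whose epipolar error under a RANSAC-fitted fundamental matrix
    exceeds `thresh` pixels are treated as dynamic and discarded."""
    F, inlier_mask = cv2.findFundamentalMat(pts_prev, pts_curr, cv2.FM_RANSAC,
                                            ransacReprojThreshold=thresh)
    if F is None:
        return pts_prev, pts_curr          # degenerate geometry: keep everything
    static = inlier_mask.ravel().astype(bool)
    return pts_prev[static], pts_curr[static]
```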

27 pages, 7729 KB  
Article
Autonomous Exploration in Unknown Indoor 2D Environments Using Harmonic Fields and Monte Carlo Integration
by Dimitrios Kotsinis, George C. Karras and Charalampos P. Bechlioulis
Sensors 2025, 25(16), 4894; https://doi.org/10.3390/s25164894 - 8 Aug 2025
Viewed by 205
Abstract
Efficient autonomous exploration in unknown, cluttered environments with interior obstacles remains a challenging task for mobile robots. In this work, we present a novel exploration process for a non-holonomic agent exploring 2D spaces using onboard LiDAR sensing. The proposed method generates velocity commands based on the solution of an elliptic Partial Differential Equation with Dirichlet boundary conditions. While solving Laplace’s equation yields collision-free motion towards the free space boundary, the agent may become trapped in regions distant from free frontiers, where the potential field becomes almost flat and the agent’s velocity nullifies as the gradient vanishes. To address this, we solve a Poisson equation, introducing a source point on the explored free boundary, located at the closest point to the agent, which attracts it towards unexplored regions. The source values are determined by an exponential function based on the shortest path of a Hybrid Visibility Graph, a graph that models the explored space and connects obstacle regions via minimum-length edges. The computational process is based on the Walk-on-Spheres algorithm, a method that employs Brownian motion and Monte Carlo integration and ensures efficient calculation. We validate the approach on a real-world platform: an AmigoBot equipped with a LiDAR sensor, controlled via a ROS-MATLAB interface. Experimental results demonstrate that the proposed method provides smooth and deadlock-free navigation in complex, cluttered environments, highlighting its potential for robust autonomous exploration in unknown indoor spaces.
(This article belongs to the Special Issue Radar Remote Sensing and Applications—2nd Edition)
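
For readers unfamiliar with the Monte Carlo machinery, here is a minimal walk-on-spheres estimator for the Laplace/Dirichlet part of the problem (the paper additionally handles a Poisson source term; `boundary_dist` and `boundary_value` are assumed problem-specific callables):

```python
import numpy as np

def walk_on_spheres(x, boundary_dist, boundary_value, eps=1e-2, n_walks=500, rng=None):
    """Monte Carlo estimate of a harmonic potential u(x) in 2D: repeatedly
    jump to a uniform random point on the largest sphere inside the domain
    until the walker lands within `eps` of the boundary, then average the
    Dirichlet boundary values over all walks."""
    rng = rng or np.random.default_rng()
    total = 0.0
    for _ in range(n_walks):
        p = np.array(x, dtype=float)
        while (r := boundary_dist(p)) > eps:
            theta = rng.uniform(0.0, 2.0 * np.pi)
            p = p + r * np.array([np.cos(theta), np.sin(theta)])
        total += boundary_value(p)
    return total / n_walks
```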

28 pages, 21813 KB  
Article
Adaptive RGB-D Semantic Segmentation with Skip-Connection Fusion for Indoor Staircase and Elevator Localization
by Zihan Zhu, Henghong Lin, Anastasia Ioannou and Tao Wang
J. Imaging 2025, 11(8), 258; https://doi.org/10.3390/jimaging11080258 - 4 Aug 2025
Viewed by 451
Abstract
Accurate semantic segmentation of indoor architectural elements, such as staircases and elevators, is critical for safe and efficient robotic navigation, particularly in complex multi-floor environments. Traditional fusion methods struggle with occlusions, reflections, and low-contrast regions. In this paper, we propose a novel feature fusion module, Skip-Connection Fusion (SCF), that dynamically integrates RGB (Red, Green, Blue) and depth features through an adaptive weighting mechanism and skip-connection integration. This approach enables the model to selectively emphasize informative regions while suppressing noise, effectively addressing challenging conditions such as partially blocked staircases, glossy elevator doors, and dimly lit stair edges, which improves obstacle detection and supports reliable human–robot interaction in complex environments. Extensive experiments on a newly collected dataset demonstrate that SCF consistently outperforms state-of-the-art methods, including PSPNet and DeepLabv3, in both overall mIoU (mean Intersection over Union) and challenging-case performance. Specifically, our SCF module improves segmentation accuracy by 5.23% in the top 10% of challenging samples, highlighting its robustness in real-world conditions. Furthermore, we conduct a sensitivity analysis on the learnable weights, demonstrating their impact on segmentation quality across varying scene complexities. Our work provides a strong foundation for real-world applications in autonomous navigation, assistive robotics, and smart surveillance.
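
The exact SCF topology is not given in the abstract; a minimal sketch of the idea, a learned per-pixel gate blending RGB and depth features plus a skip connection, might look like this in PyTorch (layer sizes and the gating form are assumptions):

```python
import torch
import torch.nn as nn

class SkipConnectionFusion(nn.Module):
    """Illustrative adaptive RGB-D fusion, not the paper's exact module."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat, depth_feat):
        w = self.gate(torch.cat([rgb_feat, depth_feat], dim=1))  # per-pixel weight in [0, 1]
        fused = w * rgb_feat + (1.0 - w) * depth_feat            # adaptive blend
        return fused + rgb_feat                                  # skip connection

# fused = SkipConnectionFusion(64)(torch.randn(1, 64, 60, 80), torch.randn(1, 64, 60, 80))
```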

17 pages, 1602 KB  
Article
Phase Portrait-Based Orientation-Aware Path Planning for Autonomous Mobile Robots
by Abdurrahman Yilmaz and Hasan Kivrak
Inventions 2025, 10(4), 65; https://doi.org/10.3390/inventions10040065 - 1 Aug 2025
Viewed by 320
Abstract
Path planning algorithms for mobile robots and autonomous systems have advanced considerably, yet challenges remain in navigating complex environments while satisfying non-holonomic constraints and achieving precise target orientation. Phase portraits are traditionally used to analyse dynamical systems via equilibrium points and system trajectories, and they provide a powerful framework for addressing these challenges. In this work, we propose a novel orientation-aware path planning algorithm that uses phase portrait dynamics by treating both obstacles and target poses as equilibrium points within the environment. Unlike conventional approaches, our method explicitly incorporates non-holonomic constraints and target orientation requirements, resulting in smooth, feasible trajectories with high final pose accuracy. Simulation results across 28 diverse scenarios show that our method achieves zero final orientation error with path lengths comparable to Hybrid A*, and planning times reduced by 52% on the indoor map and 84% on the playpen map relative to Hybrid A*. These results highlight the potential of phase portrait-based planning as an effective and efficient method for real-time autonomous navigation.
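
As a toy illustration of the phase-portrait idea (goal as a stable equilibrium, obstacles as unstable ones), a planar velocity field could be sketched as follows; the gains and decay law are invented, and the paper's method further encodes target orientation and non-holonomic constraints:

```python
import numpy as np

def phase_field_velocity(p, goal, obstacles, k_att=1.0, k_rep=0.5):
    """Illustrative velocity field: converge toward the goal equilibrium,
    diverge from each obstacle equilibrium."""
    v = -k_att * (p - goal)              # stable equilibrium at the goal
    for obs in obstacles:
        d = p - obs
        r2 = float(d @ d) + 1e-9
        v += k_rep * d / r2              # unstable equilibrium at each obstacle
    return v

p = np.array([0.0, 0.0])
print(phase_field_velocity(p, np.array([5.0, 2.0]), [np.array([2.0, 1.0])]))
```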

25 pages, 8468 KB  
Article
An Autonomous Localization Vest System Based on Advanced Adaptive PDR with Binocular Vision Assistance
by Tianqi Tian, Yanzhu Hu, Xinghao Zhao, Hui Zhao, Yingjian Wang and Zhen Liang
Micromachines 2025, 16(8), 890; https://doi.org/10.3390/mi16080890 - 30 Jul 2025
Viewed by 331
Abstract
Despite significant advancements in indoor navigation technology over recent decades, it still faces challenges due to excessive dependency on external infrastructure and unreliable positioning in complex environments. This paper proposes an autonomous localization system that integrates advanced adaptive pedestrian dead reckoning (APDR) and binocular vision, designed to provide a low-cost, high-reliability, and high-precision solution for rescuers. By analyzing the characteristics of measurement data from various body parts, the chest is identified as the optimal placement for sensors. A chest-mounted advanced APDR method based on dynamic step segmentation detection and adaptive step length estimation has been developed. Furthermore, step length features are innovatively integrated into the visual tracking algorithm to constrain errors. Visual data is fused with dead reckoning data through an extended Kalman filter (EKF), which notably enhances the reliability and accuracy of the positioning system. A wearable autonomous localization vest system was designed and tested in indoor corridors, underground parking lots, and tunnel environments. Results show that the system decreases the average positioning error by 45.14% and endpoint error by 38.6% when compared to visual–inertial odometry (VIO). This low-cost, wearable solution effectively meets the autonomous positioning needs of rescuers in disaster scenarios.
(This article belongs to the Special Issue Artificial Intelligence for Micro Inertial Sensors)
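
The abstract describes EKF fusion of step-based dead reckoning with visual data; a minimal 2D position-only cycle of that kind might look like the sketch below (noise levels are assumed, and with a linear measurement model the correction step reduces to a plain Kalman update):

```python
import numpy as np

def ekf_fuse_step(x, P, step_len, heading, z_vis,
                  Q=np.eye(2) * 0.02, R=np.eye(2) * 0.10):
    """Illustrative fusion cycle: predict with one PDR step (length plus
    heading), then correct with a visual position fix z_vis."""
    # Predict: dead-reckon one step.
    x_pred = x + step_len * np.array([np.cos(heading), np.sin(heading)])
    P_pred = P + Q
    # Update: blend in the visual measurement (H = I for a direct position fix).
    K = P_pred @ np.linalg.inv(P_pred + R)       # Kalman gain
    x_new = x_pred + K @ (z_vis - x_pred)
    P_new = (np.eye(2) - K) @ P_pred
    return x_new, P_new
```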

16 pages, 5555 KB  
Article
Optimization of a Navigation System for Autonomous Charging of Intelligent Vehicles Based on the Bidirectional A* Algorithm and YOLOv11n Model
by Shengkun Liao, Lei Zhang, Yunli He, Junhui Zhang and Jinxu Sun
Sensors 2025, 25(15), 4577; https://doi.org/10.3390/s25154577 - 24 Jul 2025
Viewed by 384
Abstract
Aiming to enable intelligent vehicles to achieve autonomous charging under low-battery conditions, this paper presents a navigation system for autonomous charging that integrates an improved bidirectional A* algorithm for path planning and an optimized YOLOv11n model for visual recognition. The system utilizes the improved bidirectional A* algorithm to generate collision-free paths from the starting point to the charging area, dynamically adjusting the heuristic function by combining node–target distance and search iterations to optimize bidirectional search weights, pruning expanded nodes via a greedy strategy and smoothing paths into cubic Bézier curves for practical vehicle motion. For precise localization of charging areas and piles, the YOLOv11n model is enhanced with a CAFMFusion mechanism to bridge semantic gaps between shallow and deep features, enabling effective local–global feature fusion and improving detection accuracy. Experimental evaluations in long corridors and complex indoor environments showed that the improved bidirectional A* algorithm outperforms the traditional improved A* algorithm in all metrics, notably reducing computation time while maintaining robustness in symmetric/non-symmetric and dynamic/non-dynamic scenarios. The optimized YOLOv11n model achieves state-of-the-art precision (P) and mAP@0.5 compared to YOLOv5, YOLOv8n, and the baseline model, with a minor 0.9% recall (R) deficit compared to YOLOv5 but more balanced overall performance and superior capability for small-object detection. By fusing the two improved modules, the proposed system successfully realizes autonomous charging navigation, providing an efficient solution for energy management in intelligent vehicles in real-world environments.
(This article belongs to the Special Issue Vision-Guided System in Intelligent Autonomous Robots)
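
The bidirectional search with a dynamically adjusted heuristic is not fully specified here; as a simplified, single-direction stand-in, a weighted A* whose heuristic weight decays with the number of expansions can be sketched as:

```python
import heapq

def weighted_astar(grid, start, goal, w0=2.0, decay=1e-3):
    """Illustrative weighted A* on a 4-connected grid (0 = free, 1 = blocked);
    the heuristic weight decays toward 1 as expansions accumulate."""
    def h(n):
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])   # Manhattan distance
    rows, cols = len(grid), len(grid[0])
    g = {start: 0}
    prev = {}
    expansions = 0
    heap = [(w0 * h(start), start)]
    closed = set()
    while heap:
        _, node = heapq.heappop(heap)
        if node == goal:
            path = [node]
            while node != start:
                node = prev[node]
                path.append(node)
            return path[::-1]
        if node in closed:
            continue
        closed.add(node)
        expansions += 1
        w = max(1.0, w0 - decay * expansions)              # dynamic heuristic weight
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[node] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (ng + w * h((nr, nc)), (nr, nc)))
    return None
```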

30 pages, 16390 KB  
Article
Model-Based RL Decision-Making for UAVs Operating in GNSS-Denied, Degraded Visibility Conditions with Limited Sensor Capabilities
by Sebastien Boiteau, Fernando Vanegas, Julian Galvez-Serna and Felipe Gonzalez
Drones 2025, 9(6), 410; https://doi.org/10.3390/drones9060410 - 4 Jun 2025
Cited by 1 | Viewed by 1858
Abstract
Autonomy in Unmanned Aerial Vehicle (UAV) navigation has enabled applications in diverse fields such as mining, precision agriculture, and planetary exploration. However, challenging applications in complex environments complicate the interaction between the agent and its surroundings. Conditions such as the absence of a Global Navigation Satellite System (GNSS), low visibility, and cluttered environments significantly increase uncertainty levels and cause partial observability. These challenges grow when compact, low-cost, entry-level sensors are employed. This study proposes a model-based reinforcement learning (RL) approach to enable UAVs to navigate and make decisions autonomously in environments where the GNSS is unavailable and visibility is limited. Designed for search and rescue operations, the system enables UAVs to navigate cluttered indoor environments, detect targets, and avoid obstacles under low-visibility conditions. The architecture integrates onboard sensors, including a thermal camera to detect a collapsed person (the target), and a 2D LiDAR and an IMU for localization. The decision-making module employs the ABT solver for real-time policy computation. The framework presented in this work relies on low-cost, entry-level sensors, making it suitable for lightweight UAV platforms. Experimental results demonstrate high success rates in target detection and robust performance in obstacle avoidance and navigation despite uncertainties in pose estimation and detection. The framework was first assessed in simulation, compared with a baseline algorithm, and then through real-life testing across several scenarios. The proposed system represents a step forward in UAV autonomy for critical applications, with potential extensions to unknown and fully stochastic environments.

23 pages, 4909 KB  
Article
Autonomous Navigation and Obstacle Avoidance for Orchard Spraying Robots: A Sensor-Fusion Approach with ArduPilot, ROS, and EKF
by Xinjie Zhu, Xiaoshun Zhao, Jingyan Liu, Weijun Feng and Xiaofei Fan
Agronomy 2025, 15(6), 1373; https://doi.org/10.3390/agronomy15061373 - 3 Jun 2025
Viewed by 1105
Abstract
To address the challenges of low pesticide utilization, insufficient automation, and health risks in orchard plant protection, we developed an autonomous spraying vehicle using ArduPilot firmware and a robot operating system (ROS). The system tackles orchard navigation hurdles, including global navigation satellite system (GNSS) signal obstruction, light detection and ranging (LIDAR) simultaneous localization and mapping (SLAM) error accumulation, and lighting-limited visual positioning. A key innovation is the integration of an extended Kalman filter (EKF) to dynamically fuse T265 visual odometry, inertial measurement unit (IMU), and GPS data, overcoming single-sensor limitations and enhancing positioning robustness in complex environments. Additionally, the study optimizes PID controller derivative parameters for tracked chassis, improving acceleration/deceleration control smoothness. The system, composed of Pixhawk 4, Raspberry Pi 4B, Silan S2L LIDAR, T265 visual odometry, and a Quectel EC200A 4G module, enables autonomous path planning, real-time obstacle avoidance, and multi-mission navigation. Indoor/outdoor tests and field experiments in Sun Village Orchard validated its autonomous cruising and obstacle avoidance capabilities under real-world orchard conditions, demonstrating feasibility for intelligent plant protection.
(This article belongs to the Special Issue Smart Pest Control for Building Farm Resilience)

23 pages, 1213 KB  
Article
Mobile-AI-Based Docent System: Navigation and Localization for Visually Impaired Gallery Visitors
by Hyeyoung An, Woojin Park, Philip Liu and Soochang Park
Appl. Sci. 2025, 15(9), 5161; https://doi.org/10.3390/app15095161 - 6 May 2025
Viewed by 655
Abstract
Smart guidance systems in museums and galleries are now essential for delivering quality user experiences. Visually impaired visitors face significant barriers when navigating galleries due to existing smart guidance systems’ dependence on visual cues like QR codes, manual numbering, or static beacon positioning. These traditional methods often fail to provide adaptive navigation and meaningful content delivery tailored to their needs. In this paper, we propose a novel Mobile-AI-based Smart Docent System that seamlessly integrates real-time navigation and in-depth guide services to enrich gallery experiences for visually impaired users. Our system leverages camera-based on-device processing and adaptive BLE-based localization to ensure accurate path guidance and real-time obstacle avoidance. An on-device object detection model reduces delays from large visual data processing, while BLE beacons, fixed across the gallery, dynamically update location IDs for better accuracy. The system further refines positioning by analyzing movement history and direction to minimize navigation errors. By intelligently modulating audio content based on user movement—whether passing by, approaching for more details, or leaving mid-description—the system offers personalized, context-sensitive interpretations while eliminating unnecessary audio clutter. Experimental validation in an authentic gallery environment demonstrated user satisfaction, confirming the approach’s effectiveness in improving navigation for visually impaired individuals. These findings substantiate the system’s capacity to enable more autonomous, secure, and enriched cultural engagement within complex indoor environments.
(This article belongs to the Special Issue IoT in Smart Cities and Homes, 2nd Edition)
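
The refinement of beacon positioning via movement history is not detailed; one plausible baseline is a majority-vote hysteresis over recent strongest-beacon readings, sketched below (window size and switching rule are assumptions):

```python
from collections import deque

class BeaconLocalizer:
    """Illustrative BLE zone localization: pick the strongest beacon per scan,
    but switch zones only when one zone dominates recent history, which
    suppresses RSSI flicker between adjacent beacons."""
    def __init__(self, window=5):
        self.history = deque(maxlen=window)
        self.zone = None

    def update(self, rssi_by_zone):
        strongest = max(rssi_by_zone, key=rssi_by_zone.get)   # least-negative RSSI
        self.history.append(strongest)
        if self.history.count(strongest) > len(self.history) // 2:
            self.zone = strongest
        return self.zone

loc = BeaconLocalizer()
for scan in [{"roomA": -60, "roomB": -75},
             {"roomA": -62, "roomB": -58},   # one-off flicker toward roomB
             {"roomA": -61, "roomB": -80}]:
    print(loc.update(scan))                  # stays in roomA throughout
```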

13 pages, 2628 KB  
Article
Indoor Localization Using 6G Time-Domain Feature and Deep Learning
by Chien-Ching Chiu, Hung-Yu Wu, Po-Hsiang Chen, Chen-En Chao and Eng Hock Lim
Electronics 2025, 14(9), 1870; https://doi.org/10.3390/electronics14091870 - 3 May 2025
Viewed by 725
Abstract
Accurate indoor localization is essential for Internet of Things (IoT) systems and autonomous navigation in the 6G communication system. However, achieving precision in environments affected by signal multipath effects and interference remains a challenge for 6G communication systems. We employ a Residual Neural Network (ResNet) augmented with channel and spatial attention mechanisms to enhance indoor localization performance using time-domain data. Through extensive experimentation, our models, when equipped with an attention mechanism, achieve accurate localization under 20% interference. Numerical results show that the ResNet with a Channel Local Attention Block (CLAB) reduces the localization error by about 12% even when interference is high. Similarly, the ResNet with a Spatial Local Attention Block (SLAB) also improves localization accuracy, while a ResNet combining both CLAB and SLAB reduces the position error to about 7 cm.

36 pages, 10731 KB  
Article
Enhancing Airport Traffic Flow: Intelligent System Based on VLC, Rerouting Techniques, and Adaptive Reward Learning
by Manuela Vieira, Manuel Augusto Vieira, Gonçalo Galvão, Paula Louro, Alessandro Fantoni, Pedro Vieira and Mário Véstias
Sensors 2025, 25(9), 2842; https://doi.org/10.3390/s25092842 - 30 Apr 2025
Viewed by 703
Abstract
Airports are complex environments where efficient localization and intelligent traffic management are essential for ensuring smooth navigation and operational efficiency for both pedestrians and Autonomous Guided Vehicles (AGVs). This study presents an Artificial Intelligence (AI)-driven airport traffic management system that integrates Visible Light Communication (VLC), rerouting techniques, and adaptive reward mechanisms to optimize traffic flow, reduce congestion, and enhance safety. VLC-enabled luminaires serve as transmission points for location-specific guidance, forming a hybrid mesh network based on tetrachromatic LEDs with On-Off Keying (OOK) modulation and SiC optical receivers. AI agents, driven by Deep Reinforcement Learning (DRL), continuously analyze traffic conditions, apply adaptive rewards to improve decision-making, and dynamically reroute agents to balance traffic loads and avoid bottlenecks. Traffic states are encoded and processed through Q-learning algorithms, enabling intelligent phase activation and responsive control strategies. Simulation results confirm that the proposed system enables more balanced green time allocation, with reductions of up to 43% in vehicle-prioritized phases (e.g., Phase 1 at C1) to accommodate pedestrian flows. These adjustments lead to improved route planning, reduced halting times, and enhanced coordination between AGVs and pedestrian traffic across multiple intersections. Additionally, traffic flow responsiveness is preserved, with critical clearance phases maintaining stability or showing slight increases despite pedestrian prioritization. The system also enables accurate indoor localization without relying on a Global Positioning System (GPS), supporting seamless movement and operational optimization. By combining VLC, adaptive AI models, and rerouting strategies, the proposed approach contributes to safer, more efficient, and human-centered airport mobility.
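
The abstract names Q-learning over encoded traffic states; the core tabular update behind such a controller is standard and can be sketched as follows (state and action encodings are hypothetical):

```python
from collections import defaultdict

def q_learning_step(Q, state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.95):
    """One tabular Q-learning update: move Q(s, a) toward the bootstrapped
    target r + gamma * max_a' Q(s', a')."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

Q = defaultdict(float)
actions = ["extend_phase1", "switch_to_pedestrian"]
# Hypothetical transition at intersection C1, rewarding reduced halting time.
q_learning_step(Q, "C1_congested", "switch_to_pedestrian", reward=1.0,
                next_state="C1_clearing", actions=actions)
print(Q[("C1_congested", "switch_to_pedestrian")])   # 0.1
```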

26 pages, 14214 KB  
Article
Stereo Visual Odometry and Real-Time Appearance-Based SLAM for Mapping and Localization in Indoor and Outdoor Orchard Environments
by Imran Hussain, Xiongzhe Han and Jong-Woo Ha
Agriculture 2025, 15(8), 872; https://doi.org/10.3390/agriculture15080872 - 16 Apr 2025
Viewed by 1743
Abstract
Agricultural robots can mitigate labor shortages and advance precision farming. However, the dense vegetation canopies and uneven terrain in orchard environments reduce the reliability of traditional GPS-based localization, thereby reducing navigation accuracy and making autonomous navigation challenging. Moreover, inefficient path planning and an increased risk of collisions affect the robot’s ability to perform tasks such as fruit harvesting, spraying, and monitoring. To address these limitations, this study integrated stereo visual odometry with real-time appearance-based mapping (RTAB-Map)-based simultaneous localization and mapping (SLAM) to improve mapping and localization in both indoor and outdoor orchard settings. The proposed system leverages stereo image pairs for precise depth estimation while utilizing RTAB-Map’s graph-based SLAM framework with loop-closure detection to ensure global map consistency. In addition, an incorporated inertial measurement unit (IMU) enhances pose estimation, thereby improving localization accuracy. Substantial improvements in both mapping and localization performance over the traditional approach were demonstrated, with an average error of 0.018 m against the ground truth for outdoor mapping and a consistent average error of 0.03 m for indoor trials with a 20.7% reduction in visual odometry trajectory deviation compared to traditional methods. Localization performance remained robust across diverse conditions, with a low RMSE of 0.207 m. Our approach provides critical insights into developing more reliable autonomous navigation systems for agricultural robots.
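
The geometric core of the stereo depth estimation mentioned above is the rectified-pair relation Z = f * B / d; a minimal sketch, with camera parameters invented for illustration:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from a rectified stereo pair: Z = f * B / d. The paper's pipeline
    adds RTAB-Map SLAM, loop closure, and IMU-aided pose estimation on top."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Hypothetical camera: 700 px focal length, 12 cm baseline, 35 px disparity.
print(stereo_depth(35.0, 700.0, 0.12))   # 2.4 m
```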

33 pages, 14735 KB  
Article
Artificial Vision System for Autonomous Mobile Platform Used in Intelligent and Flexible Indoor Environment Inspection
by Marius Cristian Luculescu, Luciana Cristea and Attila Laszlo Boer
Technologies 2025, 13(4), 161; https://doi.org/10.3390/technologies13040161 - 16 Apr 2025
Cited by 1 | Viewed by 937
Abstract
The widespread availability of artificial intelligence (AI) tools has facilitated the development of complex, high-performance applications across a broad range of domains. Among these, processes related to the surveillance and assisted verification of indoor environments have garnered significant interest. This paper presents the implementation, testing, and validation of an autonomous mobile platform designed for the intelligent and flexible inspection of such spaces. The artificial vision system, the main component on which the study focuses, was built using a Raspberry Pi 5 development module supplemented by a Raspberry Pi AI Kit to enable hardware acceleration for image recognition tasks using AI techniques. Some of the most recognized neural network models were evaluated in line with the application’s specific requirements. Utilizing transfer learning techniques, these models were further developed and trained with additional image datasets tailored to the inspection tasks. The performance of these networks was then tested and validated on new images, facilitating the selection of the model with the best results. The platform’s flexibility was ensured by mounting the artificial vision system on a mobile structure capable of autonomously navigating indoor environments and identifying inspection points, markers, and required objects. The platform could generate reports, make decisions based on the detected conditions, and be easily reconfigured for alternative inspection tasks. Finally, the intelligent and flexible inspection system was tested and validated, with emphasis on the performance of its deep learning-based vision system.
(This article belongs to the Special Issue Advanced Autonomous Systems and Artificial Intelligence Stage)
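
The transfer learning step described above typically freezes a pretrained backbone and retrains a new classification head; a minimal PyTorch/torchvision sketch under that assumption (the backbone choice and class count are hypothetical):

```python
import torch.nn as nn
from torchvision import models

# Illustrative transfer learning: freeze pretrained features, replace the head.
model = models.mobilenet_v3_small(weights="DEFAULT")
for p in model.parameters():
    p.requires_grad = False                      # keep the pretrained features fixed
num_classes = 6                                  # hypothetical inspection classes
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, num_classes)
# Only model.classifier[-1] now receives gradients during fine-tuning.
```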
