Artificial Intelligence (AI) and Machine-Learning-Based Localization

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Navigation and Positioning".

Deadline for manuscript submissions: closed (20 May 2023) | Viewed by 33223

Special Issue Editors


Guest Editor
School of Computing Science, University of Glasgow, Glasgow G12 8RZ, UK
Interests: cyber-physical security; localization and navigation with wireless communication systems; Internet of Things (IoT) using machine learning (ML) or artificial intelligence (AI) methodologies

Guest Editor
School of Environmental Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China
Interests: mapping; positioning and navigation; deformation radar remote sensing

Guest Editor
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
Interests: RF propagation and localization

Guest Editor
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
Interests: wireless and IMU localization and navigation

Special Issue Information

Dear Colleagues,

With the proliferation of 5G technologies and the Internet of Things (IoT), there has been a quantum leap in the demand for precise location-based services, especially in the domain of robotics and drones in indoor and outdoor environments. The COVID-19 pandemic amplified the need for automated solutions that require knowledge of the sensor/robot location and perception of the dynamic environment. Deterministic approaches to localization are well established in the sensor community. However, recent years have witnessed the growth of various artificial intelligence (AI) and machine learning techniques to meet the challenging requirement of high location accuracy, especially in dynamic indoor environments with possible multipath effects.

This Special Issue explores novel AI and machine learning approaches to localization in both indoor and outdoor environments. It provides an opportunity to break new ground and find new applications for precise localization. We invite contributions on (but not limited to) the following topics:

  • AI and machine learning algorithms for precise localization;
  • Location-based AI applications in robotics;
  • GPS-denied localization;
  • Data fusion for localization, including inertial, visual, and time-of-flight sensors;
  • Algorithms and methods for navigation;
  • Co-operative localization;
  • Ultrawide-band (UWB) based localization;
  • AI for non-line-of-sight (NLOS) detection and mitigation;
  • Wi-Fi, 5G technology, and Bluetooth low energy (BLE) applications for localization;
  • Localization using edge computing.

The ability to sense and localize people and edge devices such as IoT devices, drones, and robots matches the scope of Sensors, especially when the edge device incorporates measurement sensors.

Dr. Chee Kiat Seow
Dr. Henrik Hesse
Prof. Dr. Yunjia Wang
Prof. Dr. Soon Yim Tan
Dr. Kai Wen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI and machine learning algorithms for precise localization
  • Location-based AI applications in robotics
  • GPS-denied localization
  • Data fusion for localization, including inertial, visual, and time-of-flight sensors
  • Algorithms and methods for navigation
  • Co-operative localization
  • Ultrawide-band (UWB) based localization
  • AI for non-line-of-sight (NLOS) detection and mitigation
  • Wi-Fi, 5G technology, and Bluetooth low energy (BLE) applications for localization
  • Localization using edge computing

Published Papers (15 papers)

Research

14 pages, 8775 KiB  
Article
Accurate Visual Simultaneous Localization and Mapping (SLAM) against Around View Monitor (AVM) Distortion Error Using Weighted Generalized Iterative Closest Point (GICP)
by Yangwoo Lee, Minsoo Kim, Joonwoo Ahn and Jaeheung Park
Sensors 2023, 23(18), 7947; https://doi.org/10.3390/s23187947 - 17 Sep 2023
Viewed by 1063
Abstract
Accurately estimating the pose of a vehicle is important for autonomous parking. The study of around view monitor (AVM)-based visual Simultaneous Localization and Mapping (SLAM) has gained attention due to its affordability, commercial availability, and suitability for parking scenarios characterized by rapid rotations and back-and-forth movements of the vehicle. In real-world environments, however, the performance of AVM-based visual SLAM is degraded by AVM distortion errors resulting from inaccurate camera calibration. Therefore, this paper presents an AVM-based visual SLAM for autonomous parking that is robust against AVM distortion errors. A deep learning network is employed to assign weights to parking line features based on the extent of the AVM distortion error. To obtain training data while minimizing human effort, three-dimensional (3D) Light Detection and Ranging (LiDAR) data and official parking lot guidelines are utilized. The output of the trained network model is incorporated into weighted Generalized Iterative Closest Point (GICP) for vehicle localization under distortion error conditions. The experimental results demonstrate that the proposed method reduces localization errors by an average of 39% compared with previous AVM-based visual SLAM approaches.
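The paper's weighted GICP implementation is not reproduced here, but the core idea (per-point confidence weights that let distorted parking-line features contribute less to the pose estimate) can be sketched with a weighted Kabsch alignment step. Everything below, from the shapes to the toy confidence values, is an illustrative assumption rather than the authors' code.

```python
import numpy as np

def weighted_rigid_align(src, dst, w):
    """Weighted Kabsch step: find R, t minimizing
    sum_i w_i * ||R @ src_i + t - dst_i||^2."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)              # weighted centroids
    mu_d = (w[:, None] * dst).sum(axis=0)
    S = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(S)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy usage: points flagged as distorted get low weight and barely
# influence the estimated pose.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
dst = src + np.array([0.5, -0.2, 0.0])             # pure translation
dst[:10] += rng.normal(scale=0.5, size=(10, 3))    # simulated AVM distortion
conf = np.ones(100)
conf[:10] = 0.05                                   # hypothetical network confidence
R, t = weighted_rigid_align(src, dst, conf)
```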

15 pages, 11331 KiB  
Article
Outdoor Vision-and-Language Navigation Needs Object-Level Alignment
by Yanjun Sun, Yue Qiu, Yoshimitsu Aoki and Hirokatsu Kataoka
Sensors 2023, 23(13), 6028; https://doi.org/10.3390/s23136028 - 29 Jun 2023
Cited by 1 | Viewed by 1315
Abstract
In the field of embodied AI, vision-and-language navigation (VLN) is a crucial and challenging multi-modal task. Specifically, outdoor VLN involves an agent navigating within a graph-based environment while simultaneously interpreting information from real-world urban environments and natural language instructions. Existing outdoor VLN models predict actions using a combination of panorama and instruction features. However, these methods may cause the agent to struggle to understand complicated outdoor environments and to overlook environmental details, leading to navigation failures. Human navigation often uses specific objects as reference landmarks when navigating to unfamiliar places, providing a more rational and efficient approach to navigation. Inspired by this natural human behavior, we propose an object-level alignment module (OAlM), which guides the agent to focus more on object tokens mentioned in the instructions and to recognize these landmarks during navigation. By treating these landmarks as sub-goals, our method effectively decomposes a long-range path into a series of shorter paths, ultimately improving the agent's overall performance. In addition to enabling better object recognition and alignment, our proposed OAlM also fosters a more robust and adaptable agent capable of navigating complex environments. This adaptability is particularly crucial for real-world applications where environmental conditions can be unpredictable and varied. Experimental results show that OAlM is a more object-focused model, and our approach outperforms the baseline on all metrics on the challenging outdoor VLN Touchdown dataset, exceeding it by 3.19% on task completion (TC). These results highlight the potential of leveraging object-level information in the form of sub-goals to improve navigation performance in embodied AI systems, paving the way for more advanced and efficient outdoor navigation.
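The OAlM architecture itself is defined in the paper; as a rough, assumed illustration of object-level alignment, the sketch below scores instruction-mentioned object tokens against panorama object features with temperature-scaled cosine similarity and a softmax over candidate landmarks. All names and dimensions are hypothetical.

```python
import numpy as np

def object_alignment(obj_tokens, pano_feats, tau=0.1):
    """Score instruction object tokens (M, d) against panorama object
    features (N, d): cosine similarity, softmax over panorama objects."""
    a = obj_tokens / np.linalg.norm(obj_tokens, axis=1, keepdims=True)
    b = pano_feats / np.linalg.norm(pano_feats, axis=1, keepdims=True)
    sim = (a @ b.T) / tau
    sim -= sim.max(axis=1, keepdims=True)           # numerical stability
    att = np.exp(sim)
    return att / att.sum(axis=1, keepdims=True)     # (M, N) alignment weights

rng = np.random.default_rng(0)
att = object_alignment(rng.normal(size=(3, 16)), rng.normal(size=(7, 16)))
```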

20 pages, 6667 KiB  
Article
Deep Learning-Aided Inertial/Visual/LiDAR Integration for GNSS-Challenging Environments
by Nader Abdelaziz and Ahmed El-Rabbany
Sensors 2023, 23(13), 6019; https://doi.org/10.3390/s23136019 - 29 Jun 2023
Cited by 1 | Viewed by 1256
Abstract
This research develops an integrated navigation system, which fuses the measurements of an inertial measurement unit (IMU), LiDAR, and a monocular camera using an extended Kalman filter (EKF) to provide accurate positioning during prolonged GNSS signal outages. The system features an integrated INS/monocular visual simultaneous localization and mapping (SLAM) navigation system that takes advantage of LiDAR depth measurements to correct the scale ambiguity resulting from monocular visual odometry. The proposed system was tested using two datasets, namely the KITTI and the Leddar PixSet, which cover a wide range of driving environments. The system yielded an average reduction in the root-mean-square error (RMSE) of about 80% and 92% in the horizontal and upward directions, respectively. The proposed system was compared with an INS/monocular visual SLAM/LiDAR SLAM integration and with some state-of-the-art SLAM algorithms.
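The exact state vector, mechanization, and coupling scheme are the paper's; the snippet below is only a generic linear Kalman predict/update skeleton of the kind an EKF linearizes, in which SLAM-derived position fixes correct an inertially propagated state. All matrices here are placeholders.

```python
import numpy as np

def kf_step(x, P, F, Q, z, H, R):
    """One generic predict/update cycle of a loosely coupled fusion filter:
    the motion model F propagates the state; a position measurement z
    (e.g., from visual or LiDAR SLAM) corrects the accumulated drift."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```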

19 pages, 1694 KiB  
Article
Indoor Scene Recognition Mechanism Based on Direction-Driven Convolutional Neural Networks
by Andrea Daou, Jean-Baptiste Pothin, Paul Honeine and Abdelaziz Bensrhair
Sensors 2023, 23(12), 5672; https://doi.org/10.3390/s23125672 - 17 Jun 2023
Cited by 1 | Viewed by 1213
Abstract
Indoor location-based services constitute an important part of our daily lives, providing position and direction information about people or objects in indoor spaces. These systems can be useful in security and monitoring applications that target specific areas such as rooms. Vision-based scene recognition is the task of accurately identifying a room category from a given image. Despite years of research in this field, scene recognition remains an open problem due to the many different and complex places in the real world. Indoor environments are relatively complicated because of layout variability, object and decoration complexity, and multiscale and viewpoint changes. In this paper, we propose a room-level indoor localization system based on deep learning and built-in smartphone sensors, combining visual information with the smartphone's magnetic heading. The user can be localized at room level by simply capturing an image with a smartphone. The presented indoor scene recognition system is based on direction-driven convolutional neural networks (CNNs) and therefore contains multiple CNNs, each tailored for a particular range of indoor orientations. We present weighted fusion strategies that improve system performance by properly combining the outputs from the different CNN models. To meet users' needs and overcome smartphone limitations, we propose a hybrid computing strategy based on mobile computation offloading compatible with the proposed system architecture. The implementation of the scene recognition system is split between the user's smartphone and a server, which helps meet the computational requirements of CNNs. Several experimental analyses were conducted, including performance assessments and a stability analysis. The results obtained on a real dataset show the relevance of the proposed approach for localization, as well as the interest of model partitioning in hybrid mobile computation offloading. Our extensive evaluation demonstrates an increase in accuracy compared to traditional CNN scene recognition, indicating the effectiveness and robustness of our approach.
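The paper's specific weighted fusion strategies are its own contribution; purely as a hypothetical sketch, one way to combine direction-specific CNN outputs is to weight each model's class probabilities by how close the smartphone's magnetic heading is to that model's orientation range:

```python
import numpy as np

def fuse_direction_cnns(probs_per_model, model_headings, user_heading, kappa=4.0):
    """Weight each direction-specific CNN's softmax output by a von
    Mises-style kernel on the angular distance between the user's
    magnetic heading and the model's center heading, then renormalize."""
    d = np.cos(np.radians(user_heading - np.asarray(model_headings, dtype=float)))
    w = np.exp(kappa * d)
    w /= w.sum()
    fused = (w[:, None] * np.asarray(probs_per_model)).sum(axis=0)
    return fused / fused.sum()             # fused room-class distribution

# Toy usage: four CNNs centered at 0/90/180/270 degrees, three room classes.
probs = [[0.7, 0.2, 0.1], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5], [0.4, 0.4, 0.2]]
print(fuse_direction_cnns(probs, [0, 90, 180, 270], user_heading=80.0))
```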

17 pages, 1416 KiB  
Article
Deep Voxelized Feature Maps for Self-Localization in Autonomous Driving
by Yuki Endo and Shunsuke Kamijo
Sensors 2023, 23(12), 5373; https://doi.org/10.3390/s23125373 - 6 Jun 2023
Viewed by 1099
Abstract
Lane-level self-localization is essential for autonomous driving. Point cloud maps are typically used for self-localization but are known to be redundant. Deep features produced by neural networks can be used as a map, but their simple utilization could lead to corruption in large environments. This paper proposes a practical map format using deep features: voxelized deep feature maps for self-localization, consisting of deep features defined in small regions. The self-localization algorithm proposed in this paper considers per-voxel residuals and the reassignment of scan points in each optimization iteration, which leads to accurate results. Our experiments compared point cloud maps, feature maps, and the proposed map in terms of self-localization accuracy and efficiency. As a result, more accurate, lane-level self-localization was achieved with the proposed voxelized deep feature map, even with a smaller storage requirement than the other map formats.
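The deep features themselves come from a trained network; the map's voxel bookkeeping, however, is simple to sketch. The toy below assigns scan points to voxels by integer grid index (a real map would store a learned feature per voxel rather than a centroid):

```python
import numpy as np
from collections import defaultdict

def voxelize(points, voxel_size=1.0):
    """Group 3D points into voxels keyed by integer grid coordinates;
    here each voxel keeps its centroid as a stand-in for a deep feature."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    voxels = defaultdict(list)
    for p, k in zip(points, map(tuple, keys)):
        voxels[k].append(p)
    return {k: np.mean(v, axis=0) for k, v in voxels.items()}

scan = np.random.default_rng(0).uniform(0, 10, size=(1000, 3))
vmap = voxelize(scan, voxel_size=2.0)      # ~125 voxels covering a 10 m cube
```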

18 pages, 797 KiB  
Article
EM and SAGE Algorithms for DOA Estimation in the Presence of Unknown Uniform Noise
by Ming-Yan Gong and Bin Lyu
Sensors 2023, 23(10), 4811; https://doi.org/10.3390/s23104811 - 16 May 2023
Cited by 3 | Viewed by 955
Abstract
The existing expectation maximization (EM) and space-alternating generalized EM (SAGE) algorithms have only been applied to direction of arrival (DOA) estimation in known noise. In this paper, the two algorithms are designed for DOA estimation in unknown uniform noise. Both the deterministic and random signal models are considered. In addition, a new modified EM (MEM) algorithm applicable to this noise assumption is proposed. These EM-type algorithms are then improved to ensure stability when the powers of the sources are unequal. After these improvements, simulation results illustrate that the EM algorithm converges similarly to the MEM algorithm, that the SAGE algorithm outperforms the EM and MEM algorithms for the deterministic signal model, and that the SAGE algorithm cannot always outperform the EM and MEM algorithms for the random signal model. Furthermore, simulation results show that, when processing the same snapshots from the random signal model, the SAGE algorithm for the deterministic signal model can require the fewest computations.
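The EM/SAGE derivations are in the paper; for context only, the sketch below sets up the standard uniform-linear-array signal model and a conventional beamforming spectrum, the simple baseline that such EM-type algorithms refine. It is not an implementation of EM or SAGE, and all parameters are toy choices.

```python
import numpy as np

def steering(theta_deg, n_sensors, spacing=0.5):
    """Uniform linear array steering vector; element spacing in wavelengths."""
    k = np.arange(n_sensors)
    return np.exp(-2j * np.pi * spacing * k * np.sin(np.radians(theta_deg)))

def doa_spectrum(X, grid):
    """Conventional beamforming spectrum from snapshots X (n_sensors, n_snap)."""
    Rhat = X @ X.conj().T / X.shape[1]              # sample covariance
    return np.array([np.real(steering(t, X.shape[0]).conj()
                             @ Rhat @ steering(t, X.shape[0])) for t in grid])

# Toy usage: one source at 20 degrees in uniform white noise.
rng = np.random.default_rng(0)
n, snaps = 8, 200
s = (rng.normal(size=snaps) + 1j * rng.normal(size=snaps)) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(n, snaps)) + 1j * rng.normal(size=(n, snaps)))
X = np.outer(steering(20.0, n), s) + noise
grid = np.linspace(-90, 90, 361)
theta_hat = grid[np.argmax(doa_spectrum(X, grid))]  # peaks near 20 degrees
```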

18 pages, 22725 KiB  
Article
Reinforcement and Curriculum Learning for Off-Road Navigation of an UGV with a 3D LiDAR
by Manuel Sánchez, Jesús Morales and Jorge L. Martínez
Sensors 2023, 23(6), 3239; https://doi.org/10.3390/s23063239 - 18 Mar 2023
Cited by 3 | Viewed by 1798
Abstract
This paper presents the use of deep Reinforcement Learning (RL) for autonomous navigation of an Unmanned Ground Vehicle (UGV) with an onboard three-dimensional (3D) Light Detection and Ranging (LiDAR) sensor in off-road environments. For training, both the robotic simulator Gazebo and the Curriculum Learning paradigm are applied. Furthermore, an Actor–Critic Neural Network (NN) scheme is chosen with a suitable state and a custom reward function. To employ the 3D LiDAR data as part of the input state of the NNs, a virtual two-dimensional (2D) traversability scanner is developed. The resulting Actor NN has been successfully tested in both real and simulated experiments and favorably compared with a previous reactive navigation approach on the same UGV.
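The trained Actor network and the traversability scanner are the paper's own; a minimal actor-critic skeleton over a 1D traversability scan might look like the following, where the layer sizes, beam count, and action count are illustrative guesses:

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Shared trunk with an actor head (action logits) and a critic head
    (state value): the standard structure behind Actor-Critic RL."""
    def __init__(self, scan_dim=72, n_actions=5):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(scan_dim, 128), nn.ReLU(),
                                   nn.Linear(128, 64), nn.ReLU())
        self.actor = nn.Linear(64, n_actions)
        self.critic = nn.Linear(64, 1)

    def forward(self, scan):
        h = self.trunk(scan)
        return torch.distributions.Categorical(logits=self.actor(h)), self.critic(h)

policy = ActorCritic()
dist, value = policy(torch.rand(1, 72))    # toy 72-beam traversability scan
action = dist.sample()
```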

18 pages, 2155 KiB  
Article
LSTM-Based Projectile Trajectory Estimation in a GNSS-Denied Environment
by Alicia Roux, Sébastien Changey, Jonathan Weber and Jean-Philippe Lauffenburger
Sensors 2023, 23(6), 3025; https://doi.org/10.3390/s23063025 - 10 Mar 2023
Cited by 3 | Viewed by 2356
Abstract
This paper presents a deep learning approach to estimating a projectile trajectory in a GNSS-denied environment. For this purpose, long short-term memory networks (LSTMs) are trained on projectile fire simulations. The network inputs are the embedded Inertial Measurement Unit (IMU) data, the magnetic field reference, flight parameters specific to the projectile, and a time vector. This paper focuses on the influence of LSTM input data pre-processing, i.e., normalization and navigation frame rotation, which rescales 3D projectile data over similar variation ranges. In addition, the effect of the sensor error model on the estimation accuracy is analyzed. LSTM estimates are compared to a classical dead-reckoning algorithm, and the estimation accuracy is evaluated via multiple error criteria and the position errors at the impact point. Results, presented for a finned projectile, clearly show the artificial intelligence (AI) contribution, especially for the projectile position and velocity estimations. Indeed, the LSTM estimation errors are reduced compared with a classical navigation algorithm as well as with GNSS-guided finned projectiles.
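The paper's network configuration and pre-processing pipeline are its own; as a generic sketch of the sequence-to-position idea, an LSTM regressor over IMU-style inputs could look like this (all dimensions are assumptions):

```python
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    """Map a sequence of normalized IMU/flight-parameter inputs to
    per-time-step position and velocity estimates."""
    def __init__(self, in_dim=12, hidden=64, out_dim=6):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)     # [x, y, z, vx, vy, vz]

    def forward(self, seq):                        # seq: (batch, T, in_dim)
        out, _ = self.lstm(seq)
        return self.head(out)

est = TrajectoryLSTM()(torch.rand(4, 200, 12))     # toy batch of 200-step flights
```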

16 pages, 11165 KiB  
Article
UWB Sensing for UAV and Human Comparative Movement Characterization
by Angela Digulescu, Cristina Despina-Stoian, Florin Popescu, Denis Stanescu, Dragos Nastasiu and Dragos Sburlan
Sensors 2023, 23(4), 1956; https://doi.org/10.3390/s23041956 - 9 Feb 2023
Cited by 4 | Viewed by 1668
Abstract
Nowadays, unmanned aerial vehicles/drones are involved in a continuously growing number of security incidents. The research interest in drone-versus-human movement detection and characterization is therefore justified: such devices represent a potential threat of indoor/office intrusion, whereas a human presence is normally allowed only after passing several security points. Our paper comparatively characterizes the movement of a drone and a human in an indoor environment. The movement map was obtained by applying advanced signal processing methods, such as the wavelet transform and the phase diagram concept, to signals acquired from UWB sensors.
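The phase diagram used here is essentially a time-delay embedding of the received signal; a minimal version (the delay and embedding dimension below are arbitrary choices, not the authors') is:

```python
import numpy as np

def phase_diagram(signal, delay=5, dim=2):
    """Time-delay embedding: the trajectory [s(t), s(t+tau), ...] whose
    geometry helps distinguish movement signatures in UWB returns."""
    n = len(signal) - (dim - 1) * delay
    return np.stack([signal[i * delay : i * delay + n] for i in range(dim)], axis=1)

traj = phase_diagram(np.sin(np.linspace(0, 20, 500)))   # a closed loop for a sine
```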

24 pages, 2444 KiB  
Article
A Codeword-Independent Localization Technique for Reconfigurable Intelligent Surface Enhanced Environments Using Adversarial Learning
by Xuanshu Luo and Nirvana Meratnia
Sensors 2023, 23(2), 984; https://doi.org/10.3390/s23020984 - 14 Jan 2023
Viewed by 1644
Abstract
Reconfigurable Intelligent Surfaces (RISs) not only enable software-defined radio in modern wireless communication networks but also have the potential to be utilized for localization. Most previous works used channel matrices to calculate locations, requiring extensive field measurements, which leads to rapidly growing complexity. Although a few studies have designed fingerprint-based systems, they are only feasible under the unrealistic assumption that the RIS is deployed solely for localization purposes. Additionally, all these methods utilize RIS codewords for location inference, inducing considerable communication burdens. In this paper, we propose a new localization technique for RIS-enhanced environments that does not require RIS codewords for online location inference. Our approach extracts codeword-independent representations of fingerprints using a domain adversarial neural network. We evaluated our solution using the DeepMIMO dataset. Due to the lack of results from other studies, for fair comparisons we define oracle and baseline cases, which are the theoretical upper and lower bounds of our system, respectively. In all experiments, our proposed solution performed much more similarly to the oracle cases than to the baseline cases, demonstrating the effectiveness and robustness of our method.
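The heart of a domain adversarial network is the gradient reversal layer; in this setting the RIS codeword plays the role of the domain label, so the feature extractor is pushed toward codeword-independent fingerprints. A standard PyTorch sketch of the layer (not the authors' code) follows:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient in the
    backward pass, so features become uninformative to the domain head."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

feats = torch.rand(8, 32, requires_grad=True)
reversed_feats = GradReverse.apply(feats, 0.3)  # feed to the codeword classifier
```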

18 pages, 2823 KiB  
Article
LRF-WiVi: A WiFi and Visual Indoor Localization Method Based on Low-Rank Fusion
by Wen Liu, Changyan Qin, Zhongliang Deng and Haoyue Jiang
Sensors 2022, 22(22), 8821; https://doi.org/10.3390/s22228821 - 15 Nov 2022
Cited by 3 | Viewed by 1596
Abstract
In this paper, a WiFi and visual fingerprint localization model based on low-rank fusion (LRF-WiVi) is proposed, which makes full use of the complementarity of heterogeneous signals by modeling both the signal-specific actions and the interaction of location information in the two signals end-to-end. First, two feature extraction subnetworks are designed to extract feature vectors containing the location information of WiFi channel state information (CSI) and multi-directional visual images, respectively. Then, a low-rank fusion module efficiently aggregates the specific actions and interactions of the two feature vectors while maintaining low computational complexity, and the resulting fusion features are used for position estimation. In addition, for the CSI feature extraction subnetwork, we design a novel construction method for the CSI time-frequency characteristic map and a double-branch CNN structure to extract features. LRF-WiVi jointly learns the parameters of each module under the guidance of the same loss function, making the whole model more consistent with the goal of fusion localization. Extensive experiments were conducted in a complex laboratory and an open hall to verify the superior performance of LRF-WiVi in exploiting WiFi and visual signal complementarity. The results show that our method achieves more advanced positioning performance than other methods in both scenarios.
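Low-rank fusion approximates a full bilinear interaction between two modality vectors as a sum of rank-1 factor products; the sketch below follows that general recipe with made-up dimensions (it is not the LRF-WiVi module itself):

```python
import torch
import torch.nn as nn

class LowRankFusion(nn.Module):
    """Fuse a WiFi-CSI feature vector and a visual feature vector via a
    rank-R decomposition of the bilinear interaction tensor."""
    def __init__(self, d_wifi=128, d_vis=128, d_out=64, rank=4):
        super().__init__()
        self.w = nn.Parameter(torch.randn(rank, d_wifi + 1, d_out) * 0.02)
        self.v = nn.Parameter(torch.randn(rank, d_vis + 1, d_out) * 0.02)

    def forward(self, h_wifi, h_vis):              # each (batch, d_*)
        one = h_wifi.new_ones(h_wifi.shape[0], 1)
        hw = torch.cat([h_wifi, one], dim=1)       # append bias terms
        hv = torch.cat([h_vis, one], dim=1)
        zw = torch.einsum('bd,rdo->rbo', hw, self.w)
        zv = torch.einsum('bd,rdo->rbo', hv, self.v)
        return (zw * zv).sum(dim=0)                # (batch, d_out) fused feature

fused = LowRankFusion()(torch.rand(2, 128), torch.rand(2, 128))
```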

13 pages, 3306 KiB  
Article
AI-Based Positioning with Input Parameter Optimization in Indoor VLC Environments
by Sung-Hyun Oh and Jeong-Gon Kim
Sensors 2022, 22(21), 8125; https://doi.org/10.3390/s22218125 - 24 Oct 2022
Cited by 1 | Viewed by 2026
Abstract
Indoor location-based service (LBS) technology has emerged as a major research topic in recent years. Positioning technology is essential for providing LBSs. Existing indoor positioning solutions generally use radio-frequency (RF)-based communication technologies such as Wi-Fi. However, RF-based communication technologies do not provide precise positioning owing to rapid changes in the received signal strength caused by walls, obstacles, and people moving in indoor environments. Hence, this study adopts visible-light communication (VLC) for user positioning in an indoor environment. VLC is based on light-emitting diodes (LEDs), and its advantages include high efficiency and a long lifespan. In addition, this study uses a deep neural network (DNN) to improve the positioning accuracy and reduce the positioning processing time. The hyperparameters of the DNN model are optimized to improve the positioning performance. The trained DNN model is designed to yield the actual three-dimensional position of a user. The simulation results show that our optimized DNN model achieves a positioning error of 0.0898 m with a processing time of 0.5 ms, meaning that the proposed method yields more precise positioning than the other methods.
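The optimized hyperparameters are the paper's result; structurally, the positioning model is a small regression DNN from LED received-signal-strength readings to a 3D coordinate. A toy stand-in, with the LED count and layer sizes assumed, looks like:

```python
import torch
import torch.nn as nn

# Regress (x, y, z) from RSS readings of a few ceiling LEDs.
vlc_dnn = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),     # assume 4 LED anchors
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),                # user position
)
xyz = vlc_dnn(torch.rand(8, 4))      # toy batch of RSS vectors
```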

16 pages, 3356 KiB  
Article
WLAN RSS-Based Fingerprinting for Indoor Localization: A Machine Learning Inspired Bag-of-Features Approach
by Sohaib Bin Altaf Khattak, Fawad, Moustafa M. Nasralla, Maged Abdullah Esmail, Hala Mostafa and Min Jia
Sensors 2022, 22(14), 5236; https://doi.org/10.3390/s22145236 - 13 Jul 2022
Cited by 18 | Viewed by 2910
Abstract
Location-based services have permeated smart academic institutions, enhancing the quality of higher education. Position information about people and objects can predict different potential requirements and provide relevant services to meet those needs. Indoor positioning system (IPS) research has attained robust location-based services in complex indoor structures. However, unforeseeable propagation loss in complex indoor environments results in poor localization accuracy. Various IPSs have been developed based on fingerprinting to precisely locate an object even in the presence of indoor artifacts such as multipath and unpredictable radio propagation losses. However, such methods are deleteriously affected by the vulnerability of fingerprint matching frameworks. In this paper, we propose a novel machine learning framework consisting of a Bag-of-Features (BoF) model followed by a k-nearest neighbor classifier that categorizes the final features into their respective geographical coordinate data. The BoF model calculates the vocabulary set using k-means clustering, and the frequency of the vocabulary in the raw fingerprint data represents the robust final features that improve localization accuracy. Experimental results from simulation-based indoor scenarios and real-time experiments demonstrate that the proposed framework outperforms previously developed models.
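A Bag-of-Features pipeline is straightforward to sketch: cluster raw RSS samples into a vocabulary with k-means, encode each fingerprint as a word-frequency histogram, and classify with k-NN. The data below is synthetic and the parameters arbitrary; the paper's setup will differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def bof_histograms(rss_windows, kmeans):
    """Quantize each window of RSS samples against the k-means vocabulary
    and return its word-frequency histogram as the final feature."""
    return np.array([np.bincount(kmeans.predict(w), minlength=kmeans.n_clusters)
                     for w in rss_windows], dtype=float)

rng = np.random.default_rng(1)
train = [rng.normal(loc=l, size=(50, 6)) for l in (-60, -70, -80) for _ in range(5)]
labels = [0] * 5 + [1] * 5 + [2] * 5          # three reference-point classes
vocab = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack(train))
knn = KNeighborsClassifier(n_neighbors=3).fit(bof_histograms(train, vocab), labels)
```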

23 pages, 9451 KiB  
Article
Research and Implementation of Autonomous Navigation for Mobile Robots Based on SLAM Algorithm under ROS
by Jianwei Zhao, Shengyi Liu and Jinyu Li
Sensors 2022, 22(11), 4172; https://doi.org/10.3390/s22114172 - 31 May 2022
Cited by 21 | Viewed by 7727
Abstract
To address the problems of low mapping accuracy, slow path planning, and high radar frequency requirements in mobile robot mapping and navigation in indoor environments, this paper proposes a four-wheel-drive adaptive robot positioning and navigation system based on ROS. By comparing and analyzing the mapping effects of several 2D SLAM algorithms (Gmapping, Karto SLAM, and Hector SLAM), the Karto SLAM algorithm is selected for map building. By comparing the Dijkstra algorithm with the A* algorithm, the A* algorithm is selected for heuristic search, which improves the efficiency of path planning. The DWA algorithm is used for local path planning, and real-time path planning is carried out by combining sensor data, providing good obstacle avoidance performance. The mathematical model of the four-wheel adaptive robot's sliding steering was established, and the URDF model of the mobile robot was built under the ROS system. The map environment was built in Gazebo, and the simulation experiment was carried out by integrating lidar and odometer data, realizing the mobile robot's scanning, mapping, and autonomous obstacle-avoidance navigation functions. Communication between the ROS system and the STM32 was realized, and the packaging of the ROS chassis node was completed; the node receives speed commands and feeds back odometer data and TF transformations. The slip rate of the four-wheel robot during in situ steering was successfully measured, making the chassis pose more accurate. Simulation tests and experimental verification show that the system has high precision in environment map building and can achieve accurate navigation tasks.
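Of the components above, the A*-over-Dijkstra choice is the easiest to make concrete: A* is Dijkstra plus an admissible heuristic that steers expansion toward the goal. A plain grid implementation (toy map, not the paper's ROS planner) follows:

```python
import heapq
import itertools

def astar(grid, start, goal):
    """Plain A* on a 4-connected occupancy grid (0 = free, 1 = obstacle)
    with a Manhattan heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()                  # tie-breaker for the heap
    frontier = [(h(start), next(tie), 0, start, None)]
    came, seen = {}, set()
    while frontier:
        _, _, g, cur, parent = heapq.heappop(frontier)
        if cur in seen:
            continue
        seen.add(cur)
        came[cur] = parent
        if cur == goal:
            path = []
            while cur is not None:           # walk parents back to start
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in seen):
                heapq.heappush(frontier, (g + 1 + h(nxt), next(tie), g + 1, nxt, cur))
    return None

# Toy usage on a 5x5 grid with two walls.
grid = [[0, 0, 0, 0, 0],
        [1, 1, 1, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 1],
        [0, 0, 0, 0, 0]]
print(astar(grid, (0, 0), (4, 4)))
```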

Review

30 pages, 4027 KiB  
Review
Semantic Terrain Segmentation in the Navigation Vision of Planetary Rovers—A Systematic Literature Review
by Boyu Kuang, Chengzhen Gu, Zeeshan A. Rana, Yifan Zhao, Shuang Sun and Somtochukwu Godfrey Nnabuife
Sensors 2022, 22(21), 8393; https://doi.org/10.3390/s22218393 - 1 Nov 2022
Cited by 4 | Viewed by 2579
Abstract
Background: The planetary rover is an essential platform for planetary exploration. Visual semantic segmentation is significant for the localization, perception, and path planning of rover autonomy. Recent advances in computer vision and artificial intelligence have brought about new opportunities, and a systematic literature review (SLR) can help analyze existing solutions, discover available data, and identify potential gaps. Methods: A rigorous SLR was conducted, with papers selected from three databases (IEEE Xplore, Web of Science, and Scopus) from the start of records to May 2022. Searching with keywords and Boolean operators yielded 320 candidate studies addressing semantic terrain segmentation in the navigation vision of planetary rovers. After four rounds of screening with robust inclusion and exclusion criteria and a quality assessment, 30 papers were included. Results: The 30 included studies span the sub-research areas of navigation (16 studies), geological analysis (7 studies), exploration efficiency (10 studies), and others (3 studies), with overlaps. Five distributions are depicted in detail (time, study type, geographical location, publisher, and experimental setting), analyzing the included studies from the perspectives of community interest, development status, and reimplementability. One key research question and six sub-research questions are discussed to evaluate current achievements and future gaps. Conclusions: Computer vision and artificial intelligence have enabled many promising achievements in accuracy, available data, and real-time performance. However, no solution yet satisfies pixel-level segmentation, real-time inference, and onboard hardware constraints simultaneously, and no open, pixel-level annotated, real-world dataset was found. As planetary exploration projects progress worldwide, more promising studies will be proposed, and deep learning will bring more opportunities and contributions to future studies. Contributions: This SLR identifies future gaps and challenges through a methodical, replicable, and transparent survey; it is the first review (and the first SLR) of semantic terrain segmentation in the navigation vision of planetary rovers.
