
Sensor Based Perception for Field Robotics

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (30 November 2023) | Viewed by 15732

Special Issue Editors


Guest Editor
School of Mechanical and Mining Engineering, Faculty of Engineering, Architecture and Information Technology, The University of Queensland, Brisbane St Lucia, QLD 4072, Australia
Interests: mathematical modeling; model predictive control; estimation; multi-body dynamics; mechatronics; field robotics; mining automation; technology implementation

Guest Editor
School of Mechanical and Mining Engineering, Faculty of Engineering, Architecture and Information Technology, The University of Queensland, Brisbane St Lucia, QLD 4072, Australia
Interests: robot

Special Issue Information

Dear Colleagues,

Field robotics is concerned with the automation of vehicles and platforms to assist and/or replace humans in tasks that are difficult, repetitive, or unpleasant, or that take place in harsh, unstructured environments. It encompasses the automation of many land, sea, and air platforms in agriculture, construction, mining, forestry, urban, underwater, military, and space applications. Field robotics is characterized by the application of the most advanced robotics principles in sensing, perception, control, and reasoning in unstructured and unknown environments. Its appeal is that it poses challenging science, involves the latest engineering and systems design principles, and offers the real prospect of robotic principles making a substantial economic and social contribution to many different application areas. Recently, multi-robot systems have also become one of the main topics in the field, as a means of covering large-scale outdoor environments.

Field robots must be able to perceive the three-dimensional world around them in ways that enable safe and efficient autonomous decision making. This requires algorithms that interpret and integrate measurements from different sensors. Several distinct sub-problems exist, usually nuanced by the application and the environment in which the field robot operates. These include (1) localization and mapping; (2) object identification, verification, and classification; (3) field-based sensor calibration; (4) object tracking and pose estimation; and (5) multi-agent sensor fusion. The development of algorithms that robustly meet the timeliness and accuracy requirements of these problems is a key challenge in the development of any field robot.
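The integration of measurements from different sensors mentioned above can be illustrated with a minimal sketch: inverse-variance weighting of two independent readings of the same quantity, which is the scalar core of a Kalman-style update. The sensor values and variances below are hypothetical.

```python
def fuse(z1, var1, z2, var2):
    """Inverse-variance weighted fusion of two independent measurements.

    The more certain (lower-variance) reading gets the larger weight,
    and the fused variance is always below either input variance.
    """
    w1 = var2 / (var1 + var2)
    w2 = var1 / (var1 + var2)
    fused = w1 * z1 + w2 * z2
    fused_var = (var1 * var2) / (var1 + var2)
    return fused, fused_var

# A precise and a noisier range reading of the same target (hypothetical)
est, var = fuse(10.0, 0.25, 10.8, 1.0)
```

The fused estimate lands closer to the more precise sensor, and the fused variance (0.2) is smaller than either input, which is the basic payoff of multi-sensor fusion.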

This Special Issue invites papers that address solutions for field robotics perception problems, including identification problems, and that address the limitations of existing approaches.

Prof. Dr. Peter Ross McAree
Dr. Tyson Phillips
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensor fusion
  • field robotics
  • localization and mapping
  • object identification, verification and classification
  • spatial occupancy mapping
  • sensor calibration
  • object tracking and pose estimation

Published Papers (7 papers)


Research

19 pages, 6177 KiB  
Article
SyS3DS: Systematic Sampling of Large-Scale LiDAR Point Clouds for Semantic Segmentation in Forestry Robotics
by Habibu Mukhandi, Joao Filipe Ferreira and Paulo Peixoto
Sensors 2024, 24(3), 823; https://doi.org/10.3390/s24030823 - 26 Jan 2024
Viewed by 835
Abstract
Recently, new semantic segmentation and object detection methods have been proposed for the direct processing of three-dimensional (3D) LiDAR sensor point clouds. LiDAR can produce highly accurate and detailed 3D maps of natural and man-made environments and is used for sensing in many contexts due to its ability to capture more information than an RGB camera, its robustness to dynamic changes in the environment, and its cost, which has decreased in recent years and is an important factor for many application scenarios. The challenge with high-resolution 3D LiDAR sensors is that they can output large amounts of 3D data, up to a few million points per second, which is difficult to process in real time when applying complex algorithms and models for efficient semantic segmentation. Most existing approaches are either only suitable for relatively small point clouds or rely on computationally intensive sampling techniques to reduce their size. As a result, most of these methods do not work in real time in realistic field robotics scenarios, making them unsuitable for practical applications. Systematic point selection is a possible solution to reduce the amount of data to be processed. Although this approach is memory- and computation-efficient, it selects only a small subset of points, which may result in important features being missed. To address this problem, our proposed systematic sampling method, SyS3DS (Systematic Sampling for 3D Semantic Segmentation), incorporates a technique in which the local neighbours of each point are retained to preserve geometric details. SyS3DS is based on a graph colouring algorithm and ensures that the selected points are non-adjacent, yielding a subset of points that is representative of the 3D points in the scene. To take advantage of ensemble learning, we pass a different subset of nodes in each epoch. This leverages a new technique called auto-ensemble, in which ensemble learning is treated as a collection of different learning models rather than tuning different hyperparameters individually during training and validation. SyS3DS has been shown to process up to 1 million points in a single pass. It outperforms the state of the art in efficient semantic segmentation on large datasets such as Semantic3D. We also present a preliminary study on the performance of LiDAR-only data, i.e., intensity values from LiDAR sensors without RGB values, for semi-autonomous robot perception.
(This article belongs to the Special Issue Sensor Based Perception for Field Robotics)
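The non-adjacent point selection described in the abstract can be sketched as a greedy graph colouring over a radius graph: each colour class is an independent set, and a different class can be sampled at each training epoch. This is an illustrative reconstruction under assumed parameters (radius, point count, and data are made up), not the authors' implementation.

```python
import numpy as np

def radius_adjacency(points, r):
    """Adjacency lists: points closer than r are neighbours."""
    n = len(points)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < r:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def greedy_colouring(adj):
    """Greedy graph colouring: neighbours never share a colour."""
    colours = [-1] * len(adj)
    for v in range(len(adj)):
        used = {colours[u] for u in adj[v] if colours[u] != -1}
        c = 0
        while c in used:
            c += 1
        colours[v] = c
    return colours

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(200, 3))       # toy stand-in for a LiDAR scan
adj = radius_adjacency(pts, r=1.5)
colours = greedy_colouring(adj)
# Each colour class is a mutually non-adjacent subset of the point cloud;
# one class per epoch gives the ensemble different representative views.
subset = [i for i, c in enumerate(colours) if c == 0]
```

By construction, no two points in `subset` are within the radius of each other, so the sample spreads over the scene rather than clustering.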

33 pages, 4823 KiB  
Article
NR5G-SAM: A SLAM Framework for Field Robot Applications Based on 5G New Radio
by Panagiotis T. Karfakis, Micael S. Couceiro and David Portugal
Sensors 2023, 23(11), 5354; https://doi.org/10.3390/s23115354 - 5 Jun 2023
Cited by 5 | Viewed by 3020
Abstract
Robot localization is a crucial task in robotic systems and is a prerequisite for navigation. In outdoor environments, Global Navigation Satellite Systems (GNSS) have aided towards this direction, alongside laser and visual sensing. Despite their application in the field, GNSS suffer from limited availability in dense urban and rural environments. Light Detection and Ranging (LiDAR), inertial, and visual methods are also prone to drift and can be susceptible to outliers due to environmental changes and illumination conditions. In this work, we propose a cellular Simultaneous Localization and Mapping (SLAM) framework based on 5G New Radio (NR) signals and inertial measurements for mobile robot localization with several gNodeB stations. The method outputs the pose of the robot along with a radio signal map based on Received Signal Strength Indicator (RSSI) measurements for correction purposes. We then benchmark against LiDAR-Inertial Odometry Smoothing and Mapping (LIO-SAM), a state-of-the-art LiDAR SLAM method, comparing performance against a simulator ground-truth reference. Two experimental setups using the sub-6 GHz and mmWave frequency bands are presented and discussed, with transmission based on downlink (DL) signals. Our results show that 5G positioning can be utilized for radio SLAM, providing increased robustness in outdoor environments and demonstrating its potential to assist in robot localization as an additional absolute source of information when LiDAR methods fail and GNSS data are unreliable.
(This article belongs to the Special Issue Sensor Based Perception for Field Robotics)
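Radio-based positioning of the kind described typically starts from the standard log-distance path-loss model relating RSSI to range. A minimal sketch follows; the reference RSSI, reference distance, and path-loss exponent are hypothetical, and the paper's framework goes further by fusing such measurements with inertial data in a SLAM back end.

```python
def rssi_to_distance(rssi_dbm, rssi_ref_dbm=-40.0, d_ref=1.0, n=2.0):
    """Invert the log-distance path-loss model:

        RSSI(d) = RSSI(d_ref) - 10 * n * log10(d / d_ref)

    rssi_ref_dbm : RSSI at the reference distance d_ref (assumed calibration)
    n            : path-loss exponent (2.0 = free space; higher indoors)
    """
    return d_ref * 10 ** ((rssi_ref_dbm - rssi_dbm) / (10.0 * n))

# A reading 20 dB below the 1 m reference with exponent n = 2
d = rssi_to_distance(-60.0)
```

With these assumed parameters the 20 dB drop corresponds to a 10 m range; in practice n and the reference power must be estimated per environment, which is one reason RSSI ranges are noisy and benefit from fusion with inertial measurements.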

18 pages, 2824 KiB  
Article
Robust Scan Registration for Navigation in Forest Environment Using Low-Resolution LiDAR Sensors
by Himanshu Gupta, Henrik Andreasson, Achim J. Lilienthal and Polina Kurtser
Sensors 2023, 23(10), 4736; https://doi.org/10.3390/s23104736 - 14 May 2023
Cited by 2 | Viewed by 1206
Abstract
Automated forest machines are becoming important as the complex and dangerous working conditions faced by human operators lead to a labor shortage. This study proposes a new method for robust SLAM and tree mapping using low-resolution LiDAR sensors in forestry conditions. Our method relies on tree detection to perform scan registration and pose correction using only low-resolution LiDAR sensors (16- and 32-channel) or narrow-field-of-view solid-state LiDARs, without additional sensory modalities such as GPS or IMU. We evaluate our approach on three datasets, two private and one public, and demonstrate improved navigation accuracy, scan registration, tree localization, and tree diameter estimation compared to current approaches in forestry machine automation. Our results show that the proposed method yields robust scan registration using detected trees, outperforming generalized feature-based registration algorithms such as Fast Point Feature Histogram, with a reduction in RMSE of more than 3 m for the 16-channel LiDAR sensor. For the solid-state LiDAR, the algorithm achieves a similar RMSE of 3.7 m. Additionally, our adaptive pre-processing and heuristic approach to tree detection increased the number of detected trees by 13% compared to the current approach of using fixed radius-search parameters for pre-processing. Our automated tree trunk diameter estimation method yields a mean absolute error of 4.3 cm (RMSE = 6.5 cm) for the local map and complete trajectory maps.
(This article belongs to the Special Issue Sensor Based Perception for Field Robotics)
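Once trunks have been matched between scans, registration from detected tree positions reduces to a least-squares rigid alignment. A 2-D sketch using the standard Kabsch/Procrustes solution follows; the tree positions and motion are synthetic, and this is not the authors' pipeline, only the geometric core of tree-based registration.

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """Least-squares rotation + translation aligning matched 2-D points
    (Kabsch / Procrustes, no scale): finds R, t with dst ~= R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic "tree trunk" positions and a known 30-degree vehicle motion
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([2.0, -1.0])
trees = np.array([[0.0, 0.0], [5.0, 1.0], [3.0, 4.0], [8.0, 2.0]])
observed = trees @ R_true.T + t_true           # trunks seen from the new pose
R, t = rigid_transform_2d(trees, observed)     # recovers R_true, t_true
```

Because trunks are sparse, stable landmarks, this alignment is far less sensitive to point density than dense feature-based registration, which is the intuition behind using tree detections with low-resolution sensors.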

12 pages, 638 KiB  
Article
Motion-Based Extrinsic Sensor-to-Sensor Calibration: Effect of Reference Frame Selection for New and Existing Methods
by Tuomas Välimäki, Bharath Garigipati and Reza Ghabcheloo
Sensors 2023, 23(7), 3740; https://doi.org/10.3390/s23073740 - 4 Apr 2023
Viewed by 1236
Abstract
This paper studies the effect of reference frame selection in sensor-to-sensor extrinsic calibration when formulated as a motion-based hand–eye calibration problem. As the sensor trajectories typically contain some composition of noise, the aim is to determine which selection strategies work best under which noise conditions. Different reference selection options are tested under varying noise conditions in simulations, and the findings are validated with real data from the KITTI dataset. The study covers four state-of-the-art methods, as well as two proposed cost functions for nonlinear optimization. One of the proposed cost functions incorporates outlier rejection to improve calibration performance; it was shown to significantly improve performance in the presence of outliers, and to either match or outperform the other algorithms under other noise conditions. However, the performance gain from reference frame selection was found to be larger than that from algorithm selection. In addition, we show that with realistic noise, the reference frame selection method commonly used in the literature is inferior to other tested options, and that relative error metrics are not reliable for determining which method achieves the best calibration performance.
(This article belongs to the Special Issue Sensor Based Perception for Field Robotics)
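Motion-based extrinsic calibration is commonly posed as the hand-eye equation AX = XB over pairs of relative motions A (sensor 1) and B (sensor 2), with X the unknown extrinsic. The sketch below evaluates the rotation-part residual of such a formulation on synthetic motions; the paper's cost functions, outlier rejection, and reference-frame analysis are richer than this toy residual.

```python
import numpy as np

def rot_x(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def rot_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

def handeye_rotation_cost(As, Bs, Rx):
    """Sum of Frobenius residuals of the rotation part of AX = XB:
    at the true extrinsic rotation, Ra @ Rx == Rx @ Rb for every pair."""
    return sum(np.linalg.norm(Ra @ Rx - Rx @ Rb) for Ra, Rb in zip(As, Bs))

Rx_true = rot_x(np.pi / 2)                     # hypothetical extrinsic rotation
As = [rot_z(0.3) @ rot_x(0.1),                 # synthetic relative motions
      rot_x(0.7) @ rot_z(-0.4)]
Bs = [Rx_true.T @ A @ Rx_true for A in As]     # same motions seen by sensor 2

# The cost vanishes at the true extrinsic and is positive elsewhere
c_true = handeye_rotation_cost(As, Bs, Rx_true)
c_id = handeye_rotation_cost(As, Bs, np.eye(3))
```

A nonlinear optimizer would minimise this residual over Rx (plus a translation term); the paper's point is that which frame the relative motions are expressed in changes the noise entering A and B, and hence the quality of the minimiser.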

25 pages, 32503 KiB  
Article
Real-Time 6-DOF Pose Estimation of Known Geometries in Point Cloud Data
by Vedant Bhandari, Tyson Govan Phillips and Peter Ross McAree
Sensors 2023, 23(6), 3085; https://doi.org/10.3390/s23063085 - 13 Mar 2023
Cited by 2 | Viewed by 2936
Abstract
The task of tracking the pose of an object with a known geometry from point cloud measurements arises in robot perception. It calls for a solution that is both accurate and robust, and that can be computed at a rate that aligns with the needs of a control system that might make decisions based on it. The Iterative Closest Point (ICP) algorithm is widely used for this purpose but is susceptible to failure in practical scenarios. We present a robust and efficient solution for pose-from-point-cloud estimation called the Pose Lookup Method (PLuM). PLuM uses a probabilistic reward-based objective function that is resilient to measurement uncertainty and clutter. Efficiency is achieved through the use of lookup tables, which replace complex geometric operations such as the raycasting used in earlier solutions. Our results show millimetre accuracy and fast pose estimation in benchmark tests using triangulated geometry models, outperforming state-of-the-art ICP-based methods. These results extend to field robotics applications, including real-time haul truck pose estimation. Using point clouds from a LiDAR fixed to a rope shovel, the PLuM algorithm tracks a haul truck throughout the excavation load cycle at a rate of 20 Hz, matching the sensor frame rate. PLuM is straightforward to implement and provides dependable and timely solutions in demanding environments.
(This article belongs to the Special Issue Sensor Based Perception for Field Robotics)
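The lookup-table idea behind such a method can be sketched in 2-D: precompute a grid of Gaussian rewards against the known model surface offline, then score candidate poses at run time by table lookups alone, with no geometric queries. The model, grid resolution, and reward width below are hypothetical, only translation is searched, and this is an illustration of the principle rather than the published algorithm.

```python
import numpy as np

# Hypothetical model: points along a unit edge (stand-in for a truck geometry)
model = np.array([[x, 0.0] for x in np.linspace(0, 1, 11)])

# Offline: reward lookup table over a 2-D grid. The expensive
# nearest-surface-distance query is paid once per cell, not per pose.
res, sigma = 0.05, 0.1
xs = np.arange(-1.0, 2.0, res)
ys = np.arange(-1.0, 1.0, res)
grid = np.zeros((len(xs), len(ys)))
for i, x in enumerate(xs):
    for j, y in enumerate(ys):
        d = np.min(np.linalg.norm(model - [x, y], axis=1))
        grid[i, j] = np.exp(-0.5 * (d / sigma) ** 2)   # Gaussian reward

def reward(points, tx, ty):
    """Online: score a candidate translation purely by table lookups."""
    total = 0.0
    for p in points:
        i = int(round((p[0] + tx - xs[0]) / res))
        j = int(round((p[1] + ty - ys[0]) / res))
        if 0 <= i < len(xs) and 0 <= j < len(ys):
            total += grid[i, j]
    return total

# Measurements of the model offset by (0.2, 0.1); undoing the offset
# scores higher than leaving the points where they are
meas = model + np.array([0.2, 0.1])
```

Because the Gaussian reward saturates for far-away points, clutter and outliers contribute almost nothing to the score, which is the robustness argument for reward-based objectives over closest-point distances.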

17 pages, 5939 KiB  
Article
Sensor Equipped UAS for Non-Contact Bridge Inspections: Field Application
by Roya Nasimi, Fernando Moreu and G. Matthew Fricke
Sensors 2023, 23(1), 470; https://doi.org/10.3390/s23010470 - 1 Jan 2023
Cited by 2 | Viewed by 2635
Abstract
In the future, sensors mounted on uncrewed aerial systems (UASs) will play a critical role in increasing both the speed and safety of structural inspections. Environmental and safety concerns make structural inspections and maintenance challenging when conducted using traditional methods, especially for large structures. Methods developed and tested in the laboratory need to be tested in the field on full-size structures to identify their potential for full implementation. This paper presents results from a full-scale field implementation of a novel sensor-equipped UAS for measuring non-contact transverse displacement of a pedestrian bridge. To this end, the authors modified and upgraded a low-cost system that previously showed promise in laboratory and small-scale outdoor settings so that it could be tested on an in-service bridge. The upgraded UAS uses a commodity drone platform, low-cost sensors including a laser range-finder, and a computer-vision-based algorithm, with the aim of measuring bridge displacements under load that are indicative of structural problems. The aim of this research is to alleviate the costs and challenges associated with sensor attachment in bridge inspections and to deliver the first prototype of UAS-based non-contact out-of-plane displacement measurement. This work helps to define the capabilities and limitations of the proposed low-cost system in obtaining non-contact transverse displacement in outdoor experiments.
(This article belongs to the Special Issue Sensor Based Perception for Field Robotics)
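Conceptually, the non-contact measurement subtracts the hovering platform's own motion (estimated, for example, by the vision pipeline) from the raw laser range, then references the result to the first sample. The toy sketch below uses hypothetical numbers and is a simplification of the paper's system, not its algorithm.

```python
def bridge_displacement(laser_range, platform_motion):
    """Transverse bridge displacement from a hovering UAS:
    raw laser range minus the platform's own motion, referenced
    to the first corrected sample."""
    corrected = [r - p for r, p in zip(laser_range, platform_motion)]
    return [c - corrected[0] for c in corrected]

# Hypothetical samples (metres): the drone drifts by up to 13 mm while
# the bridge itself deflects 5 mm under load
ranges = [10.000, 10.012, 10.018]
drift = [0.000, 0.010, 0.013]
disp = bridge_displacement(ranges, drift)
```

The separation matters because platform drift can be the same order of magnitude as the structural deflection being measured, so the quality of the ego-motion estimate bounds the displacement accuracy.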

18 pages, 1353 KiB  
Article
Reflection-Aware Generation and Identification of Square Marker Dictionaries
by Sergio Garrido-Jurado, Juan Garrido, David Jurado-Rodríguez, Francisco Vázquez and Rafael Muñoz-Salinas
Sensors 2022, 22(21), 8548; https://doi.org/10.3390/s22218548 - 6 Nov 2022
Cited by 2 | Viewed by 2325
Abstract
Square markers are a widespread tool for finding correspondences for camera localization because of their robustness, accuracy, and detection speed. Their identification is usually based on a binary encoding that accounts for the different rotations of the marker; however, most systems do not consider the possibility of observing reflected markers. This case arises in environments containing mirrors or reflective surfaces, and its lack of consideration is a source of detection errors, contrary to the robustness expected from square markers. This is the first work in the literature that focuses on reflection-aware square marker dictionaries. We present a derivation of the inter-marker distance of a reflection-aware dictionary and propose new algorithms for generating and identifying such dictionaries. Additionally, part of the proposed method can be used to optimize pre-existing dictionaries to take reflection into account. Our experiments demonstrate that the proposal greatly outperforms the most popular predefined dictionaries in terms of inter-marker distance and that the optimization process significantly improves them.
(This article belongs to the Special Issue Sensor Based Perception for Field Robotics)
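A reflection-aware inter-marker distance can be sketched as the minimum Hamming distance between one marker and every rotation of another, optionally including the four mirrored rotations. The 3x3 markers below are toy examples, far smaller than practical dictionaries, and the functions are an illustration of the concept rather than the paper's algorithms.

```python
import numpy as np

def marker_variants(m, include_reflections=True):
    """All four rotations of a square binary marker, plus the four
    rotations of its mirror image when reflections are considered."""
    variants = [np.rot90(m, k) for k in range(4)]
    if include_reflections:
        variants += [np.rot90(np.fliplr(m), k) for k in range(4)]
    return variants

def marker_distance(a, b, include_reflections=True):
    """Minimum Hamming distance between marker a and any variant of b."""
    return min(int(np.sum(a != v))
               for v in marker_variants(b, include_reflections))

m1 = np.array([[1, 1, 0],
               [0, 1, 0],
               [0, 0, 0]])
m2 = np.fliplr(m1)   # a mirrored observation of the same marker
```

Under a rotation-only distance m1 and m2 appear to be distinct markers, but the reflection-aware distance is zero: they are the same physical marker seen in a mirror, which is exactly the confusion a reflection-aware dictionary is generated to avoid.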
