Localization and 3D Mapping of Intelligent Robotics

A special issue of Robotics (ISSN 2218-6581).

Deadline for manuscript submissions: 30 November 2024

Special Issue Editors


Guest Editor
Systems Engineering and Automation Department, Universidad Miguel Hernández de Elche (Alicante), 03202 Elche, Spain
Interests: mobile robots; deep learning; localization; mapping; scene recognition

Guest Editor

Guest Editor Assistant
Systems Engineering and Automation Department, Universidad Miguel Hernández de Elche (Alicante), 03202 Elche, Spain
Interests: full spherical view; fisheye images; mobile robot localization; visual odometry; local features

Special Issue Information

Dear Colleagues,

Localization and 3D mapping play pivotal roles in the autonomy of intelligent mobile robots, enabling them to navigate complex environments with precision. At the core of this technology lies the fusion of artificial intelligence and advanced sensor systems.

Localization involves determining a robot's position relative to its surroundings, typically achieved through techniques like Simultaneous Localization and Mapping (SLAM). By integrating data from onboard sensors such as LiDAR, cameras, and inertial measurement units, robots construct detailed maps of their environment while simultaneously estimating their own position within it.
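As a toy illustration of the localization half of this loop, the sketch below propagates a 2D pose from odometry and then corrects it with a range-bearing observation of a landmark at a known map position. This is a deliberately simplified stand-in for the EKF or factor-graph updates used in real SLAM systems; the function names and the scalar correction gain are illustrative assumptions, not any particular system's API.

```python
import math

def odometry_step(pose, v, w, dt):
    """Propagate a 2D pose (x, y, theta) with linear velocity v
    and turn rate w over a time step dt (dead reckoning)."""
    x, y, th = pose
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)

def correct_with_landmark(pose, landmark, measured_range, measured_bearing, gain=0.5):
    """Nudge the pose estimate toward agreement with a range-bearing
    observation of a landmark at a known map position. A real SLAM
    system would weight this correction by sensor and motion
    uncertainty (e.g., an EKF update) instead of a fixed gain."""
    x, y, th = pose
    dx, dy = landmark[0] - x, landmark[1] - y
    predicted_range = math.hypot(dx, dy)
    predicted_bearing = math.atan2(dy, dx) - th
    # Move the estimate along the landmark direction by a fraction
    # of the range error, and rotate by a fraction of the bearing error.
    range_error = measured_range - predicted_range
    ux, uy = dx / predicted_range, dy / predicted_range
    x -= gain * range_error * ux
    y -= gain * range_error * uy
    th += gain * (measured_bearing - predicted_bearing)
    return (x, y, th)
```

Dead reckoning alone drifts without bound; each landmark observation pulls the estimate back toward a configuration consistent with the map, which is the essential feedback structure of SLAM.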

Navigation relies heavily on accurate localization, allowing robots to plan optimal paths and avoid obstacles in real time. This capability is essential for applications ranging from warehouse logistics to search and rescue missions.
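The path-planning side can be illustrated with a minimal breadth-first search over a 4-connected occupancy grid, which finds a shortest obstacle-free route when every move has equal cost. Real navigation stacks typically use A* with a heuristic, or sampling-based planners over costmaps; this sketch and its names are assumptions for illustration only.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a 4-connected occupancy grid.
    grid[r][c] == 1 marks an obstacle. Returns a list of (row, col)
    cells from start to goal, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}      # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent links back to the start, then reverse.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

Because BFS expands cells in order of distance from the start, the first time it reaches the goal it has found a minimum-length path; A* achieves the same guarantee faster by biasing expansion toward the goal with an admissible heuristic.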

Moreover, the fusion of perception and detection technologies enables robots to interpret their surroundings comprehensively, identifying objects, obstacles, and hazards. This environment recognition capability enhances safety and efficiency across various tasks.

These advancements in localization and mapping have far-reaching applications, including industrial automation, agricultural robotics, and exploration in hazardous environments. As technology continues to evolve, intelligent robots equipped with sophisticated localization and mapping capabilities will further revolutionize numerous fields, making operations safer, more efficient, and increasingly autonomous.

Prof. Dr. Mónica Ballesta
Prof. Dr. Oscar Reinoso García
Guest Editors

Dr. María Flores Tenza
Guest Editor Assistant

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Robotics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • mobile robots
  • localization
  • computer vision
  • navigation
  • mapping
  • artificial intelligence
  • neural networks
  • LiDAR
  • point cloud
  • perception and detection

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

17 pages, 7301 KiB  
Article
Vision-Based Situational Graphs Exploiting Fiducial Markers for the Integration of Semantic Entities
by Ali Tourani, Hriday Bavle, Deniz Işınsu Avşar, Jose Luis Sanchez-Lopez, Rafael Munoz-Salinas and Holger Voos
Robotics 2024, 13(7), 106; https://doi.org/10.3390/robotics13070106 - 16 Jul 2024
Abstract
Situational Graphs (S-Graphs) merge geometric models of the environment generated by Simultaneous Localization and Mapping (SLAM) approaches with 3D scene graphs into a multi-layered jointly optimizable factor graph. As an advantage, S-Graphs not only offer a more comprehensive robotic situational awareness by combining geometric maps with diverse, hierarchically organized semantic entities and their topological relationships within one graph, but they also lead to improved performance of localization and mapping on the SLAM level by exploiting semantic information. In this paper, we introduce a vision-based version of S-Graphs where a conventional Visual SLAM (VSLAM) system is used for low-level feature tracking and mapping. In addition, the framework exploits the potential of fiducial markers (both visible and our recently introduced transparent or fully invisible markers) to encode comprehensive information about environments and the objects within them. The markers aid in identifying and mapping structural-level semantic entities, including walls and doors in the environment, with reliable poses in the global reference, subsequently establishing meaningful associations with higher-level entities, including corridors and rooms. However, in addition to including semantic entities, the semantic and geometric constraints imposed by the fiducial markers are also utilized to improve the reconstructed map’s quality and reduce localization errors. Experimental results on a real-world dataset collected using legged robots show that our framework excels in crafting a richer, multi-layered hierarchical map and enhances robot pose accuracy at the same time.
(This article belongs to the Special Issue Localization and 3D Mapping of Intelligent Robotics)

20 pages, 17993 KiB  
Article
Semantic 3D Reconstruction for Volumetric Modeling of Defects in Construction Sites
by Dimitrios Katsatos, Paschalis Charalampous, Patrick Schmidt, Ioannis Kostavelis, Dimitrios Giakoumis, Lazaros Nalpantidis and Dimitrios Tzovaras
Robotics 2024, 13(7), 102; https://doi.org/10.3390/robotics13070102 - 11 Jul 2024
Abstract
The appearance of construction defects in buildings can arise from a variety of factors, ranging from issues during the design and construction phases to problems that develop over time with the lifecycle of a building. These defects require repairs, often in the context of a significant shortage of skilled labor. In addition, such work is often physically demanding and carried out in hazardous environments. Consequently, adopting autonomous robotic systems in the construction industry becomes essential, as they can relieve labor shortages, promote safety, and enhance the quality and efficiency of repair and maintenance tasks. Hereupon, the present study introduces an end-to-end framework towards the automation of shotcreting tasks in cases where construction or repair actions are required. The proposed system can scan a construction scene using a stereo-vision camera mounted on a robotic platform, identify regions of defects, and reconstruct a 3D model of these areas. Furthermore, it automatically calculates the required 3D volumes to be constructed to treat a detected defect. To achieve all of the above-mentioned technological tools, the developed software framework employs semantic segmentation and 3D reconstruction modules based on YOLOv8m-seg, SiamMask, InfiniTAM, and RTAB-Map, respectively. In addition, the segmented 3D regions are processed by the volumetric modeling component, which determines the amount of concrete needed to fill the defects. It generates the exact 3D model that can repair the investigated defect. Finally, the precision and effectiveness of the proposed pipeline are evaluated in actual construction site scenarios, featuring reinforcement bars as defective areas.
(This article belongs to the Special Issue Localization and 3D Mapping of Intelligent Robotics)
