
Object Detection via Point Cloud Data

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 15 December 2024

Special Issue Editors


Prof. Dr. Peter Ross McAree
Guest Editor
School of Mechanical and Mining Engineering, Faculty of Engineering, Architecture and Information Technology, The University of Queensland, St Lucia, QLD 4072, Australia
Interests: mathematical modeling; model predictive control; estimation; multi-body dynamics; mechatronics; field robotics; mining automation; technology implementation

Dr. Tyson Phillips
Guest Editor
School of Mechanical and Mining Engineering, Faculty of Engineering, Architecture and Information Technology, The University of Queensland, Brisbane St Lucia, QLD 4072, Australia
Interests: robotics

Special Issue Information

Dear Colleagues,

Autonomous platforms often require the ability to interact with objects in their immediate environment. These interactions can manifest, for example, as simple pick-and-place operations or as complex coordination between multiple robotic agents. Nonetheless, at either end of the complexity scale, the decision-making required to perform interactions demands an a priori level of situational awareness.

Situational awareness encapsulates the ability to perceive, understand, and predict elements within a given environment in order to make informed decisions. A key prerequisite for facilitating object interactions is first detecting the object. Detection, in this sense, amounts to determining (i) where an object is and (ii) what an object is. This detection takes place at the perception and comprehension stages of situational awareness and is often facilitated via the interpretation of dense 3D point cloud measurements.

Determining where an object is extends to detecting whether an object is indeed present within the point cloud data, or to identifying when multiple objects are present. Likewise, determining what an object is can involve selecting the most likely candidate from a catalogue of known object types or verifying that an assumed geometry is correct to within some tolerance.
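To make these two sub-problems concrete, the following is a minimal, hypothetical sketch (the clustering parameters, the object catalogue, the tolerance, and all function names are assumptions introduced here for illustration): candidate objects are first segmented from the raw point cloud, and each candidate is then scored against a catalogue of known geometries.

```python
# Minimal sketch: "where" via clustering, "what" via catalogue matching.
# Hypothetical example; names, thresholds, and the catalogue are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import cKDTree

def find_objects(points, eps=0.2, min_samples=20):
    """'Where': segment a raw point cloud (N x 3) into candidate clusters."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(points).labels_
    return [points[labels == k] for k in set(labels) if k != -1]  # label -1 = noise

def classify_object(cluster, catalogue, tolerance=0.05):
    """'What': pick the catalogue model whose geometry best explains the cluster.

    catalogue: dict mapping object name -> (M x 3) template point cloud,
    assumed to be expressed in roughly the same frame as the cluster.
    Returns (best_name, residual), with best_name = None if no model fits
    within the given tolerance.
    """
    centred = cluster - cluster.mean(axis=0)
    best_name, best_residual = None, np.inf
    for name, template in catalogue.items():
        tree = cKDTree(template - template.mean(axis=0))
        distances, _ = tree.query(centred)           # nearest model point per measurement
        residual = np.sqrt(np.mean(distances ** 2))  # RMS point-to-model distance
        if residual < best_residual:
            best_name, best_residual = name, residual
    return (best_name, best_residual) if best_residual < tolerance else (None, best_residual)
```

A practical pipeline would additionally estimate each candidate's pose (for instance by registering the template to the cluster with ICP) before judging the geometric residual; the centred nearest-neighbour comparison above is only intended to illustrate the catalogue-matching idea.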

The final stage of providing situational awareness is the ability to predict future states. Pertaining to object detection, this might include where an object is likely to move, informed by an understanding of its intent or the constraints of its environment. It might also include whether an object is at risk or presents a threat to something else, e.g., collision avoidance.
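As a simple illustration of this projection stage, the sketch below propagates a tracked object's centroid under a constant-velocity assumption and flags a potential conflict when two predicted trajectories come within a clearance threshold. The motion model, thresholds, and scenario are purely illustrative assumptions, not a prescription of this Special Issue.

```python
# Minimal sketch of the projection stage: constant-velocity prediction and a
# coarse collision-risk check. Hypothetical; all parameters are assumptions.
import numpy as np

def predict_positions(position, velocity, horizon=3.0, dt=0.1):
    """Propagate a tracked object's centroid forward under constant velocity."""
    steps = np.arange(dt, horizon + dt, dt)
    return position + np.outer(steps, velocity)   # (T x 3) predicted centroids

def collision_risk(track_a, track_b, clearance=1.0):
    """Flag a potential conflict if the predicted centroids come closer than
    `clearance` metres at any common time step; also return the minimum gap."""
    gaps = np.linalg.norm(track_a - track_b, axis=1)
    return bool(np.any(gaps < clearance)), float(gaps.min())

# Illustrative scenario: one object drifting toward a stationary one.
moving     = predict_positions(np.array([0.0, 0.0, 2.0]), np.array([0.5, 0.0, 0.0]))
stationary = predict_positions(np.array([2.0, 0.0, 2.0]), np.array([0.0, 0.0, 0.0]))
print(collision_risk(moving, stationary))   # (True, ...) once the gap drops below 1 m
```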

This Special Issue invites papers that address problems related to object detection via point cloud data, as well as papers that address the limitations of existing approaches.

Prof. Dr. Peter Ross McAree
Dr. Tyson Phillips
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • object detection
  • perception
  • situational awareness
  • pose estimation
  • object classification
  • point cloud
  • LiDAR

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)

Research

13 pages, 7953 KiB  
Article
TAMC: Textual Alignment and Masked Consistency for Open-Vocabulary 3D Scene Understanding
by Juan Wang, Zhijie Wang, Tomo Miyazaki, Yaohou Fan and Shinichiro Omachi
Sensors 2024, 24(19), 6166; https://doi.org/10.3390/s24196166 - 24 Sep 2024
Abstract
Three-dimensional (3D) Scene Understanding achieves environmental perception by extracting and analyzing point cloud data with wide applications including virtual reality, robotics, etc. Previous methods align the 2D image feature from a pre-trained CLIP model and the 3D point cloud feature for the open vocabulary scene understanding ability. We believe that existing methods have the following two deficiencies: (1) the 3D feature extraction process ignores the challenges of real scenarios, i.e., point cloud data are very sparse and even incomplete; (2) the training stage lacks direct text supervision, leading to inconsistency with the inference stage. To address the first issue, we employ a Masked Consistency training policy. Specifically, during the alignment of 3D and 2D features, we mask some 3D features to force the model to understand the entire scene using only partial 3D features. For the second issue, we generate pseudo-text labels and align them with the 3D features during the training process. In particular, we first generate a description for each 2D image belonging to the same 3D scene and then use a summarization model to fuse these descriptions into a single description of the scene. Subsequently, we align 2D-3D features and 3D-text features simultaneously during training. Massive experiments demonstrate the effectiveness of our method, outperforming state-of-the-art approaches.
(This article belongs to the Special Issue Object Detection via Point Cloud Data)
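As a purely illustrative aid, and not the authors' implementation, the masked-consistency idea described in the abstract can be sketched as randomly discarding a fraction of the per-point 3D features before computing the 2D-3D and 3D-text alignment losses, so that the remaining geometry must still explain the scene. All tensor shapes, the mask ratio, the cosine-similarity losses, and the pooling step below are assumptions; the paper's actual masking mechanism and loss formulation may differ.

```python
# Illustrative sketch only (assumptions throughout): masked 2D-3D and 3D-text
# feature alignment in the spirit of the masked-consistency policy above.
import torch
import torch.nn.functional as F

def masked_alignment_loss(feat_3d, feat_2d, feat_text, mask_ratio=0.3):
    """feat_3d:   (N, D) per-point 3D features from the point-cloud backbone
       feat_2d:   (N, D) corresponding 2D CLIP image features (paired per point)
       feat_text: (D,)   pseudo-text (scene caption) feature
       All shapes, the mask ratio, and the loss form are assumptions."""
    n = feat_3d.shape[0]
    keep = torch.rand(n, device=feat_3d.device) > mask_ratio   # drop ~30% of points
    f3d = F.normalize(feat_3d[keep], dim=-1)
    f2d = F.normalize(feat_2d[keep], dim=-1)
    ftxt = F.normalize(feat_text, dim=-1)

    # 2D-3D consistency on the unmasked subset: the 3D branch must still agree
    # with the image features despite seeing only partial geometry.
    loss_2d3d = (1.0 - (f3d * f2d).sum(dim=-1)).mean()

    # 3D-text alignment: pool the surviving 3D features into a scene feature
    # and pull it toward the scene-level pseudo caption.
    scene = F.normalize(f3d.mean(dim=0), dim=-1)
    loss_text = 1.0 - torch.dot(scene, ftxt)

    return loss_2d3d + loss_text
```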