
Deep Learning for 3D Image and Point Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (30 June 2024) | Viewed by 4058

Special Issue Editors


Dr. Andrew R. Willis
Guest Editor
Department of Electrical and Computer Engineering, University of North Carolina at Charlotte, Charlotte, NC 28223-0001, USA
Interests: computer vision; pattern recognition; image processing

Dr. Hamed Tabkhi
Guest Editor
Department of Electrical and Computer Engineering, University of North Carolina at Charlotte, Charlotte, NC 28223-0001, USA
Interests: deep learning; real-time AI; image processing

Special Issue Information

Dear Colleagues,

This Special Issue seeks submissions that use 3D sensor technologies in combination with AI algorithms (e.g., deep learning) to solve problems in visual analysis. Sensor technologies of interest include RGBD sensors, LiDAR sensors, and 2D image sensors from which 3D information is to be extracted (e.g., via structure-from-motion (SfM)).
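As a concrete illustration of the 3D data such sensors provide, the following minimal Python sketch back-projects a depth image into a point cloud under a pinhole camera model. The intrinsics used are hypothetical placeholders, not values from any particular sensor discussed in this Issue.

import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map in meters to an (N, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Synthetic example: a flat surface 2 m from the camera at VGA resolution,
# with intrinsics typical of a consumer RGBD sensor (hypothetical values).
depth = np.full((480, 640), 2.0)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3)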

Potential submission topics include deep learning architectures for 3D AI, neuromorphic architectures for 3D AI, 3D pose recognition, 3D trajectory estimation, 3D object recognition, 3D surface reconstruction, sensor data compression, 3D object completion, 3D from single images, 3D scene analysis, 3D datasets for AI, and explainable AI for 3D inference.

While this Special Issue emphasizes deep learning methods, applications of deep learning and novel integrations of deep learning as system-level AI solutions to complex problems are also relevant.

Dr. Andrew R. Willis
Dr. Hamed Tabkhi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • RGBD sensors
  • point cloud

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (3 papers)


Research

23 pages, 6340 KiB  
Article
Bridging Formal Shape Models and Deep Learning: A Novel Fusion for Understanding 3D Objects
by Jincheng Zhang and Andrew R. Willis
Sensors 2024, 24(12), 3874; https://doi.org/10.3390/s24123874 - 15 Jun 2024
Viewed by 466
Abstract
This article describes a novel fusion of a generative formal model for three-dimensional (3D) shapes with deep learning (DL) methods to understand the geometric structure of 3D objects and the relationships between their components, given a collection of unorganized point cloud measurements. Formal 3D shape models are implemented as shape grammar programs written in the Procedural Shape Modeling Language (PSML). Users write PSML programs that describe complex objects, enforce the fundamental rules defining an object class, and encode object attributes (shape, components, size, position, etc.) into a parametric representation; DL networks then estimate the program's free parameters to generate 3D shapes. This fusion of a generative model with DL offers artificial intelligence (AI) models an opportunity to better understand the geometric organization of objects in terms of their components and their relationships to other objects. The approach allows human-in-the-loop control over DL estimates by specifying lists of candidate objects, the shape variations that each object can exhibit, and the level of detail or, equivalently, the dimension of the latent shape representation. The results demonstrate the advantages of the proposed method over competing approaches.
(This article belongs to the Special Issue Deep Learning for 3D Image and Point Sensors)
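The general pattern the abstract describes, a network that consumes an unorganized point cloud and regresses the free parameters of a parametric shape program, can be sketched as follows. This is a hedged illustration only: the PointNet-style encoder, layer widths, and parameter count n_params are assumptions, not the authors' PSML implementation.

import torch
import torch.nn as nn

class PointParamRegressor(nn.Module):
    """Regress the free parameters of a parametric shape program from points."""
    def __init__(self, n_params: int):
        super().__init__()
        # Shared per-point MLP followed by max pooling (PointNet-style).
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, n_params),  # one output per free shape parameter
        )

    def forward(self, points):              # points: (B, N, 3)
        feats = self.point_mlp(points)      # (B, N, 256)
        global_feat = feats.max(dim=1).values  # permutation-invariant pooling
        return self.head(global_feat)       # (B, n_params)

model = PointParamRegressor(n_params=8)     # hypothetical parameter count
cloud = torch.randn(2, 1024, 3)             # two synthetic point clouds
params = model(cloud)
print(params.shape)                         # torch.Size([2, 8])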

11 pages, 2384 KiB  
Article
Global Guided Cross-Modal Cross-Scale Network for RGB-D Salient Object Detection
by Shuaihui Wang, Fengyi Jiang and Boqian Xu
Sensors 2023, 23(16), 7221; https://doi.org/10.3390/s23167221 - 17 Aug 2023
Viewed by 922
Abstract
RGB-D saliency detection aims to accurately localize salient regions using the complementary information of a depth map. Global contexts carried by the deep layers are key to salient object detection, but they are diluted when transferred to shallower layers. Moreover, depth maps may contain misleading information due to imperfections in depth sensors. To tackle these issues, in this paper we propose a new cross-modal cross-scale network for RGB-D salient object detection, in which global context information provides global guidance to boost performance in complex scenarios. First, we introduce a global-guided cross-modal and cross-scale module, named G2CMCSM, to realize global-guided cross-modal cross-scale fusion. Then, we employ feature refinement modules for progressive refinement in a coarse-to-fine manner. In addition, we adopt a hybrid loss function to supervise the training of G2CMCSNet over different scales. With all these modules working together, G2CMCSNet effectively enhances both salient object details and salient object localization. Extensive experiments on challenging benchmark datasets demonstrate that our G2CMCSNet outperforms existing state-of-the-art methods.
(This article belongs to the Special Issue Deep Learning for 3D Image and Point Sensors)
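A hedged sketch of the kind of fusion block the abstract describes: RGB and depth features combined under a gate derived from a global context vector, so that unreliable depth evidence can be suppressed. The channel sizes and gating formula are illustrative assumptions, not the authors' exact G2CMCSM module.

import torch
import torch.nn as nn

class GlobalGuidedFusion(nn.Module):
    """Fuse RGB and depth features under guidance from a global context vector."""
    def __init__(self, channels: int):
        super().__init__()
        self.rgb_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.depth_conv = nn.Conv2d(channels, channels, 3, padding=1)
        # Global context -> per-channel gate that can down-weight noisy depth.
        self.gate = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, rgb, depth, global_ctx):
        # rgb, depth: (B, C, H, W); global_ctx: (B, C) from the deepest layer.
        g = self.gate(global_ctx).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return self.rgb_conv(rgb) + g * self.depth_conv(depth)

fuse = GlobalGuidedFusion(channels=32)
rgb = torch.randn(1, 32, 56, 56)     # synthetic RGB feature map
depth = torch.randn(1, 32, 56, 56)   # synthetic depth feature map
ctx = torch.randn(1, 32)             # synthetic global context vector
out = fuse(rgb, depth, ctx)
print(out.shape)                     # torch.Size([1, 32, 56, 56])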

10 pages, 2003 KiB  
Article
RGBD Salient Object Detection, Based on Specific Object Imaging
by Xiaolian Liao, Jun Li, Leyi Li, Caoxi Shangguan and Shaoyan Huang
Sensors 2022, 22(22), 8973; https://doi.org/10.3390/s22228973 - 19 Nov 2022
Cited by 1 | Viewed by 1962
Abstract
RGBD salient object detection based on convolutional neural networks has achieved rapid development in recent years. However, existing models often focus on detecting salient object edges instead of the objects themselves. Importantly, detecting objects can more intuitively display the complete information of the detection target. To address this issue, we propose an RGBD salient object detection method based on specific object imaging, which can quickly capture and process important object feature information and effectively screen out the salient objects in a scene. The screened target objects include not only the edges of the objects but also their complete feature information, realizing both detection and imaging of the salient objects. We conduct experiments on benchmark datasets and validate with two common metrics; the results show that our method reduces the error (MAE) by 0.003 and 0.201 relative to D3Net and JLDCF, respectively. In addition, our method still achieves very good detection and imaging performance even when the training data are greatly reduced.
(This article belongs to the Special Issue Deep Learning for 3D Image and Point Sensors)
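The MAE figures above refer to the mean absolute error, the standard saliency metric: the average per-pixel absolute difference between a predicted saliency map and the ground-truth mask, both scaled to [0, 1]. A minimal NumPy sketch with hypothetical synthetic maps:

import numpy as np

def mae(pred, gt):
    """Mean absolute error between two [0, 1] saliency maps of equal shape."""
    return np.abs(pred.astype(np.float64) - gt.astype(np.float64)).mean()

pred = np.random.rand(240, 320)         # hypothetical predicted saliency map
gt = (np.random.rand(240, 320) > 0.5)   # hypothetical binary ground truth
print(round(mae(pred, gt), 4))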
