Article

TR-Net: A Transformer-Based Neural Network for Point Cloud Processing

1 School of Information Engineering, Zhengzhou University, No. 100 Science Avenue, Zhengzhou 450001, China
2 Henan Xintong Intelligent IOT Co., Ltd., No. 1-303 Intersection of Ruyun Road and Meihe Road, Zhengzhou 450007, China
* Author to whom correspondence should be addressed.
Machines 2022, 10(7), 517; https://doi.org/10.3390/machines10070517
Submission received: 2 May 2022 / Revised: 19 June 2022 / Accepted: 22 June 2022 / Published: 27 June 2022
(This article belongs to the Topic Intelligent Systems and Robotics)

Abstract

The point cloud is a versatile geometric representation with broad applications in computer vision. Because point clouds are unordered, designing a deep neural network for point cloud analysis is challenging. Furthermore, most existing frameworks for point cloud processing either barely consider local neighboring information or ignore context-aware and spatially-aware features. To address these problems, we propose a novel transformer-based architecture for point cloud processing named TR-Net, which reformulates the task as a set-to-set translation problem. TR-Net operates directly on raw point clouds without any data transformation or annotation, reducing computing-resource consumption and memory usage. First, a neighborhood embedding backbone is designed to effectively extract local neighboring information from the point cloud. Then, an attention-based sub-network is constructed to learn a semantically rich and discriminative representation from the embedded features. Finally, effective global features are obtained by feeding the features extracted by the attention-based sub-network into a residual backbone. For different downstream tasks, we build different decoders. Extensive experiments on public datasets show that our approach outperforms other state-of-the-art methods: TR-Net achieves 93.1% overall accuracy on the ModelNet40 dataset and an mIoU of 85.3% on the ShapeNet dataset for part segmentation.
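To make the pipeline in the abstract concrete, the following PyTorch sketch wires its four stages together for classification: a neighborhood embedding stage, an attention-based sub-network treating the points as an unordered set, a residual backbone, and a task-specific decoder head. This is a minimal illustration only; all module names, layer widths, and head counts here are assumptions for exposition, not the authors' implementation (the paper's embedding stage, in particular, aggregates actual neighbor features rather than using a purely pointwise MLP).

```python
import torch
import torch.nn as nn

class TRNetSketch(nn.Module):
    """Illustrative sketch of the TR-Net pipeline described in the abstract:
    neighborhood embedding -> attention-based sub-network -> residual
    backbone -> task-specific decoder (a classification head here)."""

    def __init__(self, num_classes=40, dim=128):
        super().__init__()
        # Neighborhood embedding: a pointwise MLP standing in for the
        # local-feature extraction backbone.
        self.embed = nn.Sequential(
            nn.Conv1d(3, dim, 1), nn.BatchNorm1d(dim), nn.ReLU(),
            nn.Conv1d(dim, dim, 1),
        )
        # Attention-based sub-network: multi-head self-attention over the
        # N points; with no positional encoding the stage is
        # permutation-equivariant, matching the unordered-set view.
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # Residual backbone: a pointwise MLP applied with a skip connection.
        self.res = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))
        # Decoder for classification: symmetric pooling, then a linear head.
        self.head = nn.Linear(dim, num_classes)

    def forward(self, xyz):                 # xyz: (B, N, 3) raw coordinates
        f = self.embed(xyz.transpose(1, 2)).transpose(1, 2)  # (B, N, dim)
        a, _ = self.attn(f, f, f)           # set-to-set feature "translation"
        f = f + self.res(a)                 # residual refinement
        g = f.max(dim=1).values             # max-pool -> global feature
        return self.head(g)                 # (B, num_classes) logits

model = TRNetSketch()
logits = model(torch.rand(2, 1024, 3))      # two clouds of 1024 points each
print(logits.shape)                         # torch.Size([2, 40])
```

Max-pooling over the point dimension before the head is the standard way to make the final global feature invariant to the input ordering; a segmentation decoder would instead keep the per-point features and predict one label per point.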
Keywords: point cloud; deep learning; classification; part segmentation; transformer

Share and Cite

Liu, Luyao, Enqing Chen, and Yingqiang Ding. TR-Net: A Transformer-Based Neural Network for Point Cloud Processing. Machines 2022, 10(7), 517. https://doi.org/10.3390/machines10070517

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
