A Point Cloud Data-Driven Pallet Pose Estimation Method Using an Active Binocular Vision Sensor
Abstract
1. Introduction
2. The Proposed Pallet Pose Estimation Method
2.1. Overview of the Proposed Approach
2.2. Point Cloud Preprocessing
2.2.1. Point Cloud Filtering
2.2.2. Plane Segmentation
2.2.3. Key Point Extraction
2.3. Adaptive Gaussian Weight-based Fast Point Feature Histogram Definition
2.3.1. Adaptive Neighborhood Radius
- Set the range of the point cloud neighborhood search radius rj from a lower limit rmin to an upper limit rmax with radius interval rd. The upper and lower limits of the radius range are determined by the average point cloud distance dp, as defined in [32].
- Calculate the covariance matrix and its eigenvalues for each candidate neighborhood radius rj. The neighborhood covariance matrix M is defined as M = (1/k) Σ (qi − q̄)(qi − q̄)^T, where k is the number of neighborhood points and q̄ is their centroid.
- According to the eigenvalues, the neighborhood feature entropy function is constructed.
- The adaptive optimal neighborhood radius ropt of the point cloud is determined by the minimum criterion of the neighborhood feature entropy function: when the entropy function reaches its minimum value, the corresponding neighborhood radius rj is taken as the optimal neighborhood radius ropt (a sketch of this selection loop follows this list).
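The following minimal Python sketch illustrates the radius selection loop. It is an interpretation rather than the paper's code: the average point distance is taken as the mean nearest-neighbor distance (one common reading of dp defined in [32]), and the entropy is the Shannon eigen-entropy of the normalized covariance eigenvalues, which may differ in detail from the paper's entropy function; function names and epsilon guards are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def average_point_distance(points):
    """Mean nearest-neighbor distance, one common definition of d_p."""
    d, _ = cKDTree(points).query(points, k=2)  # k=2: first hit is the point itself
    return d[:, 1].mean()

def optimal_radius(points, idx, r_min, r_max, r_step):
    """Scan radii r_j in [r_min, r_max] and return the one minimizing the
    eigen-entropy of the neighborhood covariance matrix M."""
    tree = cKDTree(points)
    best_r, best_e = r_min, np.inf
    for r in np.arange(r_min, r_max + 1e-9, r_step):
        nbrs = tree.query_ball_point(points[idx], r)
        if len(nbrs) < 3:                      # a 3x3 covariance needs >= 3 points
            continue
        q = points[nbrs] - points[nbrs].mean(axis=0)
        cov = q.T @ q / len(nbrs)              # neighborhood covariance matrix M
        lam = np.linalg.eigvalsh(cov)
        p = lam / (lam.sum() + 1e-12)          # normalized eigenvalues
        e = -(p * np.log(p + 1e-12)).sum()     # eigen-entropy
        if e < best_e:
            best_e, best_r = e, r
    return best_r
```

With rmin and rmax derived from average_point_distance(points) (e.g., as multiples of dp), the loop replaces manual, experience-based radius tuning with a per-point optimum.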
2.3.2. Gaussian Weight-Based Fast Point Feature Histogram
- For each key point qk in the key point set Qt, search all the neighborhood points qki within its optimal neighborhood radius ropt.
- Compute the normal vectors ns and nt corresponding to qk and qki, calculate the relative position deviation between ns and nt, and generate the Simplified Point Feature Histogram of qk, SPFH(qk). A local coordinate system (u, v, w), shown in Figure 6, is defined to calculate this deviation.
- Then, search the neighborhood points of qki based on the adaptive optimal neighborhood radius ropt, and generate the SPFH of qki, SPFH(qki). Based on the Gaussian weight wGi, the SPFH(qki) values are weighted to obtain the Adaptive Gaussian Weight-based Fast Point Feature Histogram of the key point qk, AGWF(qk); a sketch of this computation follows the list.
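A compact numpy sketch of the AGWF computation is given below, assuming normals are already estimated. The bin count (11 per angle), the 1/k averaging of neighbor SPFHs, the final L2 normalization, and the choice of sigma are assumptions; only the Gaussian weight wGi = exp(−di²/(2σ²)) replacing FPFH's plain distance weighting is taken from the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def spfh_features(p_s, n_s, p_t, n_t):
    """The three SPFH angles (alpha, phi, theta) in the Darboux frame (u, v, w)."""
    d = p_t - p_s
    dist = np.linalg.norm(d)
    u = n_s
    v = np.cross(u, d / dist)
    v /= np.linalg.norm(v) + 1e-12
    w = np.cross(u, v)
    return np.array([v @ n_t, u @ d / dist, np.arctan2(w @ n_t, u @ n_t)])

def spfh_hist(points, normals, i, nbrs, bins=11):
    """SPFH(q): concatenated histograms of the three angles over the neighbors."""
    if not nbrs:
        return np.zeros(3 * bins)
    feats = np.array([spfh_features(points[i], normals[i], points[j], normals[j])
                      for j in nbrs])
    edges = [np.linspace(-1, 1, bins + 1)] * 2 + [np.linspace(-np.pi, np.pi, bins + 1)]
    return np.concatenate([np.histogram(feats[:, k], bins=e)[0]
                           for k, e in enumerate(edges)]).astype(float)

def agwf(points, normals, i, r_opt, sigma):
    """AGWF(q_k): SPFH(q_k) blended with Gaussian-weighted neighbor SPFHs."""
    tree = cKDTree(points)
    nbrs = [j for j in tree.query_ball_point(points[i], r_opt) if j != i]
    h = spfh_hist(points, normals, i, nbrs)
    for j in nbrs:
        d = np.linalg.norm(points[j] - points[i])
        w = np.exp(-d**2 / (2 * sigma**2))     # Gaussian weight w_Gi
        j_nbrs = [m for m in tree.query_ball_point(points[j], r_opt) if m != j]
        h += (w / len(nbrs)) * spfh_hist(points, normals, j, j_nbrs)
    return h / (np.linalg.norm(h) + 1e-12)     # assumed L2 normalization
```

Because the Gaussian falls off smoothly with distance, nearby and slightly farther neighbors receive gradually differing weights rather than the abrupt 1/d weighting of standard FPFH.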
2.4. Point Cloud Registration
2.4.1. Coarse Registration
- Compute AGWF feature descriptors for all key points in the source point cloud P and the target point cloud Q.
- Randomly select N sample points Pu (u = 1, 2, …, N) from the source point cloud P such that the distance between any two sample points is greater than the preset distance threshold dmin.
- According to the AGWF descriptors, search for the closest points Qu (u = 1, 2, …, N) in the target point cloud Q to the sample points Pu, and obtain the initial match point pairs.
- Obtain the rigid transformation matrix M1 between the initial match point pairs by Singular Value Decomposition (SVD). Set a registration threshold el and calculate the distance function H(li), which penalizes the residual li of each match pair against el, to evaluate the point cloud registration performance.
- Repeat the above four steps; when H(li) reaches its minimum, the corresponding transformation matrix is the coarse registration rigid transformation matrix Mc. The source point cloud P is rigidly transformed by Mc to obtain the point cloud Pr, completing the coarse registration (see the Open3D-based sketch after this list).
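Since the AGWF descriptor is not available in off-the-shelf libraries, the sketch below uses Open3D's built-in FPFH as a stand-in to show the same sample-match-SVD-evaluate loop via feature-based RANSAC. The voxel size, radii, and iteration counts are illustrative, and Open3D's internal inlier evaluation plays the role of the distance function H(li).

```python
import open3d as o3d

def coarse_register(src, tgt, voxel=0.005):
    """Feature-based RANSAC coarse registration (FPFH standing in for AGWF)."""
    feats = []
    for pcd in (src, tgt):
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        feats.append(o3d.pipelines.registration.compute_fpfh_feature(
            pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100)))
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, feats[0], feats[1], True,   # mutual_filter
        1.5 * voxel,                          # max correspondence distance
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        4,                                    # N sample points per iteration
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation              # coarse rigid transformation M_c
```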
2.4.2. Accurate Registration
- Set a distance threshold ef and a maximum number of iterations I0. For each point pri in the source point set Pr, search for its corresponding closest point qi in the target point set Q, and form the corresponding point-pair set Cl.
- Solve the rigid transformation by SVD to obtain the rotation matrix Rn and the translation vector Tn, where n is the number of iterations. Transform the source point set Pr by (Rn, Tn) into Prn, and form the corresponding point pairs Cn. Calculate the average Euclidean distance en over all corresponding point pairs.
- Repeat the above steps until en is smaller than ef or the maximum number of iterations I0 is reached, and finally obtain the optimal transformation matrix R and T.
- Let Rx, Ry and Rz be the rotation angles about the three coordinate axes, and tx, ty and tz the translations along them; the 6 DOF pose estimate can then be represented as (Rx, Ry, Rz, tx, ty, tz). The optimal transformation matrix Ma takes the standard 4 × 4 homogeneous form Ma = [R, T; 0, 1], from which these six parameters are recovered (see the sketch after this item).
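Continuing with Open3D, the sketch below refines Mc with point-to-point ICP and decomposes the result into (Rx, Ry, Rz, tx, ty, tz). The threshold values and the Rz·Ry·Rx Euler convention are assumptions, since the paper does not state its axis order.

```python
import numpy as np
import open3d as o3d

def refine_and_decompose(src, tgt, M_c, e_f=0.01, I_0=50):
    """Point-to-point ICP refinement, then 6 DOF extraction from M_a."""
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, e_f, M_c,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(),
        o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=I_0))
    M_a = result.transformation               # 4x4 homogeneous [R, T; 0, 1]
    R, t = M_a[:3, :3], M_a[:3, 3]
    # Euler angles assuming R = Rz(rz) @ Ry(ry) @ Rx(rx)
    ry = -np.arcsin(R[2, 0])
    rx = np.arctan2(R[2, 1], R[2, 2])
    rz = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees([rx, ry, rz]), t        # (Rx, Ry, Rz)/deg and (tx, ty, tz)
```

Open3D's result.inlier_rmse plays the role of the residual en used as the stopping criterion in the steps above.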
3. Pallet Pose Estimation Experiment
3.1. Data Collection
3.2. Multiple Scenario Experiments
3.2.1. The Ground Scene
3.2.2. The Shelf Scene
3.3. Results and Discussion
4. Conclusions
- A point cloud data-driven method is proposed for driverless industrial trucks to estimate the pallet pose in the production shop. It resolves the inaccurate forking caused by pallet position deviation and improves the safety and stability of the logistics system.
- An adaptive optimal neighborhood radius selection criterion, based on minimizing the local neighborhood feature entropy function, is proposed to determine the neighborhood radius of each key point adaptively instead of selecting parameters manually from experience. This significantly shortens feature extraction time and improves accuracy.
- Traditional descriptors weight each neighborhood point only by its Euclidean distance to the query key point, whereas the proposed descriptor optimizes these weights with a Gaussian distribution function. The weights of the neighborhood points therefore vary more smoothly, describing the key point features more accurately and completely and effectively improving the robustness of the feature descriptor.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Baglivo, L.; Biasi, N.; Biral, F.; Bellomo, N.; Bertolazzi, E.; Da Lio, M.; De Cecco, M. Autonomous pallet localization and picking for industrial forklifts: A robust range and look method. Meas. Sci. Technol. 2011, 22, 085502.
2. Hu, H.S.; Wang, L.; Luh, P. Intelligent manufacturing: New advances and challenges. J. Intell. Manuf. 2015, 26, 841–843.
3. Shuai, L.; Mingkang, X.; Weilin, Z.; Huilin, X. Towards Industrial Scenario Lane Detection: Vision-Based AGV Navigation Methods. In Proceedings of the 2020 IEEE International Conference on Mechatronics and Automation, Beijing, China, 13–16 October 2020; pp. 1101–1106.
4. Baglivo, L.; Bellomo, N.; Marcuzzi, E.; Pertile, M.; Cecco, M. Pallet Pose Estimation with LIDAR and Vision for Autonomous Forklifts. In Proceedings of the 13th IFAC Symposium on Information Control Problems in Manufacturing (INCOM '09), Moscow, Russia, 3–5 June 2009; pp. 616–621.
5. Mohamed, I.S.; Capitanelli, A.; Mastrogiovanni, F.; Rovetta, S.; Zaccaria, R. Detection, localisation and tracking of pallets using machine learning techniques and 2D range data. Neural Comput. Appl. 2020, 32, 8811–8828.
6. Zhang, Z.H.; Liang, Z.D.; Zhang, M.; Zhao, X.; Li, H.; Yang, M.; Tan, W.M.; Pu, S.L. RangeLVDet: Boosting 3D Object Detection in LIDAR with Range Image and RGB Image. IEEE Sens. J. 2022, 22, 1391–1403.
7. Seelinger, M.; Yoder, J.D. Automatic visual guidance of a forklift engaging a pallet. Robot. Auton. Syst. 2006, 54, 1026–1038.
8. Syu, J.L.; Li, H.T.; Chiang, J.S.; Hsia, C.H.; Wu, P.H.; Hsieh, C.F.; Li, S.A. A computer vision assisted system for autonomous forklift vehicles in real factory environment. Multimed. Tools Appl. 2017, 76, 18387–18407.
9. Fan, R.Z.; Xu, T.B.; Wei, Z.Z. Estimating 6D Aircraft Pose from Keypoints and Structures. Remote Sens. 2021, 13, 663.
10. Guo, K.; Ye, H.; Gao, X.; Chen, H.L. An Accurate and Robust Method for Absolute Pose Estimation with UAV Using RANSAC. Sensors 2022, 22, 5925.
11. Varga, R.; Nedevschi, S. Robust Pallet Detection for Automated Logistics Operations. In Proceedings of the 11th International Conference on Computer Vision Theory and Applications, Rome, Italy, 27–29 February 2016; pp. 470–477.
12. Casado, F.; Lapido, Y.L.; Losada, D.P.; Santana-Alonso, A. Pose estimation and object tracking using 2D images. Procedia Manuf. 2017, 11, 63–71.
13. Shao, Y.P.; Wang, K.; Du, S.C.; Xi, L.F. High definition metrology enabled three dimensional discontinuous surface filtering by extended tetrolet transform. J. Manuf. Syst. 2018, 49, 75–92.
14. Zhao, C.; Du, S.C.; Lv, J.; Deng, Y.F.; Li, G.L. A novel parallel classification network for classifying three-dimensional surface with point cloud data. J. Intell. Manuf. 2021, in press.
15. Zhao, C.; Lv, J.; Du, S.C. Geometrical deviation modeling and monitoring of 3D surface based on multi-output Gaussian process. Measurement 2022, 199, 111569.
16. Shao, Y.P.; Du, S.C.; Tang, H.T. An extended bi-dimensional empirical wavelet transform based filtering approach for engineering surface separation using high definition metrology. Measurement 2021, 178, 109259.
17. Zhao, C.; Lui, F.C.; Du, S.C.; Wang, D.; Shao, Y.P. An Earth Mover's Distance based Multivariate Generalized Likelihood Ratio Control Chart for Effective Monitoring of 3D Point Cloud Surface. Comput. Ind. Eng. 2023, 175, 108911.
18. Wang, X.D.; Liu, B.; Mei, X.S.; Wang, X.T.; Lian, R.H. A Novel Method for Measuring, Collimating, and Maintaining the Spatial Pose of Terminal Beam in Laser Processing System Based on 3D and 2D Hybrid Vision. IEEE Trans. Ind. Electron. 2022, 69, 10634.
19. Lee, H.; Park, J.M.; Kim, K.H.; Lee, D.H.; Sohn, M.J. Accuracy evaluation of surface registration algorithm using normal distribution transform in stereotactic body radiotherapy/radiosurgery: A phantom study. J. Appl. Clin. Med. Phys. 2022, 23, e13521.
20. Xie, X.X.; Wang, X.C.; Wu, Z.K. 3D face dense reconstruction based on sparse points using probabilistic principal component analysis. Multimed. Tools Appl. 2022, 81, 2937–2957.
21. Liu, W.L. LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter. Sensors 2017, 17, 539.
22. Yang, J.L.; Li, H.D.; Campbell, D.; Jia, Y.D. Go-ICP: A Globally Optimal Solution to 3D ICP Point-Set Registration. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 2241–2254.
23. Wu, P.; Li, W.; Yan, M. 3D scene reconstruction based on improved ICP algorithm. Microprocess. Microsyst. 2020, 75, 103064.
24. Fotsing, C.; Menadjou, N.; Bobda, C. Iterative closest point for accurate plane detection in unorganized point clouds. Autom. Constr. 2021, 125, 103610.
25. Guo, N.; Zhang, B.H.; Zhou, J.; Zhan, K.T.; Lai, S. Pose estimation and adaptable grasp configuration with point cloud registration and geometry understanding for fruit grasp planning. Comput. Electron. Agric. 2020, 179, 105818.
26. Rusu, R.B.; Blodow, N.; Beetz, M. Fast Point Feature Histograms (FPFH) for 3D Registration. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 1848–1853.
27. Tombari, F.; Salti, S.; Di Stefano, L. Unique Signatures of Histograms for Local Surface Description. Lect. Notes Comput. Sci. 2010, 6313, 356–369.
28. Li, D.P.; Liu, N.; Guo, Y.L.; Wang, X.M.; Xu, J. 3D object recognition and pose estimation for random bin-picking using Partition Viewpoint Feature Histograms. Pattern Recogn. Lett. 2019, 128, 148–154.
29. Toumieh, C.; Lambert, A. Voxel-Grid Based Convex Decomposition of 3D Space for Safe Corridor Generation. J. Intell. Robot. Syst. 2022, 105, 87.
30. Khanna, N.; Delp, E.J. Intrinsic Signatures for Scanned Documents Forensics: Effect of Font Shape and Size. In Proceedings of the 2010 IEEE International Symposium on Circuits and Systems, Paris, France, 30 May–2 June 2010; pp. 3060–3063.
31. Xu, G.X.; Pang, Y.J.; Bai, Z.X.; Wang, Y.L.; Lu, Z.W. A Fast Point Clouds Registration Algorithm for Laser Scanners. Appl. Sci. 2021, 11, 3426.
32. Shao, Y.P.; Fan, Z.S.; Zhu, B.C.; Zhou, M.L.; Chen, Z.H.; Lu, J.S. A Novel Pallet Detection Method for Automated Guided Vehicles Based on Point Cloud Data. Sensors 2022, 22, 8019.
33. Hu, H.P.; Gu, W.K.; Yang, X.S.; Zhang, N.; Lou, Y.J. Fast 6D object pose estimation of shell parts for robotic assembly. Int. J. Adv. Manuf. Technol. 2022, 118, 1383–1396.
Serial No. | Scene Point Cloud Size | Target Point Cloud Size | Number of Key Points | Time Consumed/s
---|---|---|---|---
a | 55,827 | 6181 | 1154 | 1.6080 |
b | 58,788 | 6500 | 1327 | 1.8263 |
c | 64,656 | 7115 | 1442 | 1.9777 |
d | 53,433 | 7155 | 1548 | 1.3595 |
e | 50,085 | 5533 | 1014 | 0.7670 |
f | 47,142 | 5217 | 1272 | 1.2597 |
g | 44,118 | 4895 | 916 | 1.2393 |
h | 36,054 | 4001 | 1173 | 1.0132 |
Serial No. | Actual x/m | Actual y/m | Actual θ/° | Estimated x/m | Estimated y/m | Estimated θ/° | Relative x/m | Relative y/m | Relative θ/° | Accuracy/%
---|---|---|---|---|---|---|---|---|---|---
a | 0 | 0 | 5 | 0.0102 | 0.0267 | 4.8876 | 0.0102 | 0.0267 | 0.1124 | 97.76 |
b | 0 | 0 | 10 | 0.0109 | 0.0340 | 9.8656 | 0.0109 | 0.0340 | 0.1344 | 98.66 |
c | 0 | 0 | 15 | 0.0212 | 0.0534 | 15.09 | 0.0212 | 0.0534 | 0.1900 | 98.73 |
d | 0 | 0 | 20 | 0.0536 | 0.0687 | 26.53 | 0.0536 | 0.0687 | 6.5300 | 67.30 |
e | 0.050 | 0.050 | 0 | 0.0535 | 0.0505 | 0.4060 | 0.0035 | 0.0005 | 0.4060 | 96.00 |
f | 0.100 | 0.100 | 0 | 0.1096 | 0.0988 | 1.0620 | 0.0096 | 0.0012 | 1.0620 | 94.60 |
g | 0.150 | 0.150 | 0 | 0.1536 | 0.1477 | 1.2014 | 0.0036 | 0.0023 | 1.2014 | 98.03 |
h | 0.200 | 0.200 | 0 | 0.1928 | 0.1579 | 3.2503 | 0.0072 | 0.0421 | 3.2503 | 87.68 |
Total average | | | | | | | 0.0098 | 0.0194 | 0.5010 | 97.30
Descriptor | Search Radius r/m | Time T/s | RMSE | ΔT/% | ΔRMSE/%
---|---|---|---|---|---
SHOT | 0.012 | 1.564 | 0.018234 | −7.9 | 10.99 |
PFH | 0.012 | 3.872 | 0.035647 | 56.6 | 54.47 |
FPFH | 0.012 | 1.430 | 0.017559 | −17.34 | 7.57 |
GWF | 0.012 | 1.678 | 0.016239 | / | / |
Descriptor | Search Radius r/m | Time T/s | RMSE | ΔT/% | ΔRMSE/%
---|---|---|---|---|---
SHOT | 0.014 | 1.564 | 0.018234 | 46.61 | 36.35 |
PFH | 0.014 | 3.872 | 0.035647 | 79.16 | 68.75 |
FPFH | 0.014 | 1.430 | 0.017559 | 48.15 | 36.23 |
AGWF | Adaptive radius | 0.952 | 0.011715 | / | / |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).