Visual Localization and Path Planning for a Dual-Arm Collaborative Pottery Robot
Abstract
1. Introduction
- Combines robotics with pottery making to address the gap in pottery automation.
- Achieves precise recognition through 3D stereoscopic machine vision, combining robotics with ceramic craftsmanship.
- Proposes fully automated ceramic production, promoting the industrialization of ceramic products and the inheritance of pottery-making skills.
2. Modeling and Analysis of the Dual-Arm Collaborative Pottery Robot
2.1. Demand Analysis
2.2. D-H Model of the Pottery Robot
2.3. 3D Solid Model of the Pottery Robot
3. Methodology for the Dual-Arm Collaborative Pottery Robot
3.1. Visual Localization
- Image acquisition: the working area is photographed by cameras mounted on the robot (here, two Intel RealSense D435i cameras). These cameras capture real-time images of the pottery-making process, including information such as the shape and color of the clay and the positions of the tools.
- Feature extraction: the acquired images are processed with computer vision algorithms to identify and extract feature points of interest. For pottery, these features may include the contours of the clay, its surface texture, or specific decorative elements. This process typically involves techniques such as edge detection, color analysis, and pattern matching [5,35,36].
- Position computation: based on the extracted features, geometric modeling or template matching methods are used to determine the precise position and attitude of the clay and its associated tools with respect to the robot. This step must account for the camera's calibration parameters, i.e., its intrinsic and extrinsic parameters, to ensure an accurate conversion from 2D image coordinates to 3D spatial coordinates.
- Motion control: based on the computed target positions, the control system generates commands for the dual-arm collaborative robot, instructing it how to move to complete the given tasks, such as shaping, sculpting, or glazing. To achieve precision, a force feedback mechanism should be considered so that the robot can adapt to changes in the hardness of the clay and make appropriate adjustments.
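The 2D-to-3D conversion in the position-computation step can be sketched as follows. This is a minimal illustration, not the authors' implementation: the intrinsic values and the camera-to-base (hand-eye) transform below are placeholder assumptions, not calibrated parameters.

```python
import numpy as np

# Placeholder pinhole intrinsics in the style of a RealSense D435i color
# stream (fx, fy, cx, cy are assumed values, not calibrated ones).
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0

# Assumed hand-eye (extrinsic) transform: camera frame -> robot base frame,
# here a camera mounted 0.8 m up, looking straight down at the work area.
T_BASE_CAM = np.array([
    [0.0, -1.0, 0.0, 0.30],
    [-1.0, 0.0, 0.0, 0.00],
    [0.0, 0.0, -1.0, 0.80],
    [0.0, 0.0, 0.0, 1.0],
])

def deproject(u, v, depth_m):
    """Back-project a pixel (u, v) with measured depth into camera coordinates."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

def pixel_to_base(u, v, depth_m):
    """Convert a pixel observation to a 3D point in the robot base frame."""
    p_cam = np.append(deproject(u, v, depth_m), 1.0)  # homogeneous coordinates
    return (T_BASE_CAM @ p_cam)[:3]

# Example: a clay feature seen at the image center, 0.5 m from the camera.
p = pixel_to_base(320, 240, 0.5)
```

With a depth camera, the extracted feature pixel plus its depth reading fixes the 3D point directly; the hand-eye transform then expresses it in the robot base frame for motion control.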
3.2. Trajectory Planning
3.2.1. Kinematic Model
3.2.2. Kinematic Inverse Solutions
3.2.3. Cartesian Spatial Trajectory Planning
4. Case Study
4.1. Visual Calibration
- The images acquired by the camera are binarized to identify the black quadrilateral regions that mark the candidate locations of the calibration board's corner points.
- In a subsequent filtering step, only those quadrilaterals that meet a specific size requirement are retained and arranged in a regular grid structure that has user-specified dimensions.
- Once the calibration board has been initially detected, the locations of the corner points can be determined with very high accuracy. This is because corner points (mathematically, saddle points of the image intensity) are essentially point-like features and therefore remain unbiased under perspective shifts or lens distortion.
- After successful detection of the checkerboard grid, further sub-pixel level refinement can be performed to pinpoint the location of the saddle point. This process makes use of the exact gray values of the pixels surrounding the corner point, thus achieving sub-pixel level accuracy that is higher than whole-pixel accuracy.
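The sub-pixel refinement idea in the last step can be illustrated numerically (this is a sketch of the principle, not the authors' code): fit a quadratic surface to the gray values around the integer-pixel corner estimate and solve for its stationary (saddle) point.

```python
import numpy as np

def subpixel_saddle(patch):
    """Refine a corner location to sub-pixel accuracy.

    `patch` is a (2r+1)x(2r+1) window of gray values centered on the
    integer-pixel corner estimate.  A quadratic surface
    f(x, y) = a x^2 + b x y + c y^2 + d x + e y + g
    is fitted by least squares, and its stationary point (the saddle of a
    checkerboard corner) is returned as an offset from the patch center.
    """
    r = patch.shape[0] // 2
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    x, y, z = xs.ravel(), ys.ravel(), patch.ravel().astype(float)
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    # Stationary point: grad f = (2a x + b y + d, b x + 2c y + e) = 0.
    H = np.array([[2 * a, b], [b, 2 * c]])
    return np.linalg.solve(H, -np.array([d, e]))  # (dx, dy) sub-pixel offset

# Synthetic saddle surface whose true stationary point is at (0.3, -0.2),
# i.e., 0.3 px right and 0.2 px up of the integer-pixel estimate.
true = (0.3, -0.2)
ys, xs = np.mgrid[-3:4, -3:4].astype(float)
patch = (xs - true[0]) ** 2 - (ys - true[1]) ** 2
offset = subpixel_saddle(patch)
```

Because the synthetic patch is itself quadratic, the fit recovers the saddle exactly; on real images the same fit averages over pixel noise, which is what lifts the accuracy beyond whole-pixel resolution.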
4.2. Dual-Arm Cooperative Trajectory Control of Ceramic Work
4.3. Results Analysis
5. Conclusions and Future Work
- Deploy the simulation on actual hardware so that, combined with visual localization, the pottery robot can autonomously complete the production of purple clay teapots.
- Optimize the machine vision algorithms by introducing advanced techniques such as deep learning to improve recognition accuracy for the clay blank and the tapping effect.
- Advance research on dynamic dual-arm coordinated control strategies and adopt adaptive motion planning to enhance the system's flexibility and response speed in dynamic environments.
- Integrate a torque sensor into the end-effector of each arm to monitor in real time the distribution of force during contact with the clay, allowing the movements of both arms to be adjusted dynamically.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
ILDA | Integrated Linkage-driven Dexterous Anthropomorphic |
DoF | Degree of Freedom |
SURF | Speeded Up Robust Features |
MPC | Model Predictive Control |
D-H | Denavit–Hartenberg |
References
- Torres, R.; Ferreira, N. Robotic manipulation in the ceramic industry. Electronics 2022, 11, 4180.
- Tao, H.; Wang, P.; Chen, Y.; Stojanovic, V.; Yang, H. An unsupervised fault diagnosis method for rolling bearing using STFT and generative neural networks. J. Frankl. Inst. 2020, 357, 7286–7307.
- Sheng, H.; Wei, S.; Yu, X.; Tang, L. Research on Binocular Visual System of Robotic Arm Based on Improved SURF Algorithm. IEEE Sens. J. 2020, 20, 11849–11855.
- Nicolis, D.; Palumbo, M.; Zanchettin, A.M.; Rocco, P. Occlusion-Free Visual Servoing for the Shared Autonomy Teleoperation of Dual-Arm Robots. IEEE Robot. Autom. Lett. 2018, 3, 796–803.
- Fan, J.; Zheng, P.; Li, S. Vision-based holistic scene understanding towards proactive human–robot collaboration. Robot. Comput.-Integr. Manuf. 2022, 75, 102304.
- Yu, X.; He, W.; Li, Q.; Li, Y.; Li, B. Human-Robot Co-Carrying Using Visual and Force Sensing. IEEE Trans. Ind. Electron. 2021, 68, 8657–8666.
- Wang, G.; Li, Z.; Weng, G.; Chen, Y. An optimized denoised bias correction model with local pre-fitting function for weak boundary image segmentation. Signal Process. 2024, 220, 109448.
- Huang, P.; Gu, Y.; Li, H.; Yazdi, M.; Qiu, G. An Optimal Tolerance Design Approach of Robot Manipulators for Positioning Accuracy Reliability. Reliab. Eng. Syst. Saf. 2023, 237, 109347.
- Yang, C.; Lu, W.; Xia, Y. Positioning Accuracy Analysis of Industrial Robots Based on Non-Probabilistic Time-Dependent Reliability. IEEE Trans. Reliab. 2024, 73, 608–621.
- Cheng, C.; Zhang, H.; Sun, Y.; Tao, H.; Chen, Y. A cross-platform deep reinforcement learning model for autonomous navigation without global information in different scenes. Control Eng. Pract. 2024, 150, 105991.
- Qin, H.; Shao, S.; Wang, T.; Yu, X.; Jiang, Y.; Cao, Z. Review of Autonomous Path Planning Algorithms for Mobile Robots. Drones 2023, 7, 211.
- Chu, Z.; Wang, F.; Lei, T.; Luo, C. Path Planning Based on Deep Reinforcement Learning for Autonomous Underwater Vehicles Under Ocean Current Disturbance. IEEE Trans. Intell. Veh. 2023, 8, 108–120.
- Messaoudi, K.; Oubbati, O.S.; Rachedi, A.; Lakas, A.; Bendouma, T.; Chaib, N. A survey of UAV-based data collection: Challenges, solutions and future perspectives. J. Netw. Comput. Appl. 2023, 216, 103670.
- Cui, S.; Chen, Y.; Li, X. A Robust and Efficient UAV Path Planning Approach for Tracking Agile Targets in Complex Environments. Machines 2022, 10, 931.
- Zendehdel, N.; Chen, H.; Song, Y.S.; Leu, M.C. Implementing Eye Movement Tracking for UAV Navigation. In Proceedings of the International Symposium on Flexible Automation, Seattle, WA, USA, 21–24 July 2024; Volume 87882, p. V001T08A004.
- Yang, G.; Li, M.; Gao, Q. Multi-Automated Guided Vehicles Conflict-Free Path Planning for Packaging Workshop Based on Grid Time Windows. Appl. Sci. 2024, 14, 3341.
- de Cássio Zequi, S.; Ren, H. Robotic motion planning in surgery. In Handbook of Robotic Surgery; Academic Press: Cambridge, MA, USA, 2025; pp. 169–178.
- Kim, U.; Jung, D.; Jeong, H.; Park, J.; Jung, H.M.; Cheong, J.; Choi, H.R.; Do, H.; Park, C. Integrated Linkage-Driven Dexterous Anthropomorphic Robotic Hand. Nat. Commun. 2021, 12, 7177.
- Ma, P.; He, X.; Chen, Y.; Liu, Y. ISOD: Improved small object detection based on extended scale feature pyramid network. Vis. Comput. 2025, 41, 465–479.
- Abondance, S.; Teeple, C.B.; Wood, R.J. A Dexterous Soft Robotic Hand for Delicate In-Hand Manipulation. IEEE Robot. Autom. Lett. 2020, 5, 5502–5509.
- Lee, D.H.; Choi, M.S.; Park, H.; Jang, G.R.; Park, J.H.; Bae, J.H. Peg-in-Hole Assembly with Dual-Arm Robot and Dexterous Robot Hands. IEEE Robot. Autom. Lett. 2022, 7, 8566–8573.
- Chen, Y.; Chu, B.; Freeman, C.T. Point-to-Point Iterative Learning Control with Optimal Tracking Time Allocation. IEEE Trans. Control Syst. Technol. 2018, 26, 1685–1698.
- Bebek, O.; Canduran, K. Rebirth of Ceramic Art in the Digital Age: Transformation Journey from 3D Modeling to NFT. Art Vis. 2024, 30, 135–140.
- Liu, M.; Gu, Q.; Yang, B.; Yin, Z.; Liu, S.; Yin, L.; Zheng, W. Kinematics Model Optimization Algorithm for Six Degrees of Freedom Parallel Platform. Appl. Sci. 2023, 13, 3082.
- Jiang, W.; Wu, D.; Dong, W.; Ding, J.; Ye, Z.; Zeng, P.; Gao, Y. Design and Validation of a Nonparasitic 2R1T Parallel Hand-Held Prostate Biopsy Robot with Remote Center of Motion. J. Mech. Robot. 2024, 16, 051009.
- Cao, Y.; Chen, X.; Zhang, M.; Huang, J. Adaptive Position Constrained Assist-as-Needed Control for Rehabilitation Robots. IEEE Trans. Ind. Electron. 2024, 71, 4059–4068.
- Li, C.; Zheng, P.; Li, S.; Pang, Y.; Lee, C.K.M. AR-assisted digital twin-enabled robot collaborative manufacturing system with human-in-the-loop. Robot. Comput.-Integr. Manuf. 2022, 76, 102321.
- Corke, P.I. A Simple and Systematic Approach to Assigning Denavit–Hartenberg Parameters. IEEE Trans. Robot. 2007, 23, 590–594.
- Rocha, C.R.; Tonetto, C.P.; Dias, A. A Comparison between the Denavit-Hartenberg and the Screw-based Methods Used in Kinematic Modeling of Robot Manipulators. Robot. Comput.-Integr. Manuf. 2011, 27, 723–728.
- Lengiewicz, J.; Kursa, M.; Holobut, P. Modular-robotic Structures for Scalable Collective Actuation. Robotica 2017, 35, 787–808.
- Li, C.H.G.; Chang, Y.M. Automated Visual Positioning and Precision Placement of a Workpiece Using Deep Learning. Int. J. Adv. Manuf. Technol. 2019, 104, 4527–4538.
- Zhou, Q.; Li, X. Visual Positioning of Distant Wall-Climbing Robots Using Convolutional Neural Networks. J. Intell. Robot. Syst. 2020, 98, 603–613.
- Moriwaki, K. Adaptive Exposure Image Input System for Obtaining High-Quality Color Information. Syst. Comput. Jpn. 1994, 25, 51–60.
- Zhou, S.; Wu, J.; Lu, Q. Spatial Gating with Hybrid Receptive Field for Robot Visual Localization. Int. J. Comput. Intell. Syst. 2024, 17, 131.
- Singh, A.; Raj, K.; Kumar, T.; Verma, S.; Roy, A.M. Deep Learning-Based Cost-Effective and Responsive Robot for Autism Treatment. Drones 2023, 7, 81.
- Wang, G.; Li, Z.; Weng, G.; Chen, Y. An overview of industrial image segmentation using deep learning models. Intell. Robot. 2025, 5, 143–180.
- Chen, Y.; Cui, Z.; Wang, X.; Wang, D.; Wang, Z.; Song, Z.; Liu, D. Combining Closed-form and Numerical Solutions for the Inverse Kinematics of Six-degrees-of-freedom Collaborative Handling Robot. Int. J. Adv. Robot. Syst. 2024, 21, 17298806241228372.
- Xie, S.; Sun, L.; Wang, Z.; Chen, G. A Speedup Method for Solving the Inverse Kinematics Problem of Robotic Manipulators. Int. J. Adv. Robot. Syst. 2022, 19, 17298806221104602.
- Chen, Y.; Chu, B.; Freeman, C.T. A coordinate descent approach to optimal tracking time allocation in point-to-point ILC. Mechatronics 2019, 59, 25–34.
- Gu, Y.; Wu, J.; Liu, C. Error Analysis and Accuracy Evaluation Method for Coordinate Measurement in Transformed Coordinate System. Measurement 2025, 242, 115860.
- Gao, H.; An, H.; Lin, W.; Yu, X.; Qiu, J. Trajectory Tracking of Variable Centroid Objects Based on Fusion of Vision and Force Perception. IEEE Trans. Cybern. 2023, 53, 7957–7965.
- Liu, Z.; Li, Z.; Ran, J. Robust Solution for Coordinate Transformation Based on Coordinate Component Weighting. J. Surv. Eng. 2023, 149, 04023006.
- Chen, Y.; Wu, L.; Wang, G.; He, H.; Weng, G.; Chen, H. An active contour model for image segmentation using morphology and nonlinear Poisson’s equation. Optik 2023, 287, 170997.
Kinematics | Link Twist α/rad | Link Length a/mm | Link Offset d/mm | Joint Angle θ/° |
---|---|---|---|---|
Joint 1 | π/2 | 0 | 89.159 | θ1 |
Joint 2 | 0 | −425 | 0 | θ2 |
Joint 3 | 0 | −392.25 | 0 | θ3 |
Joint 4 | π/2 | 0 | 109.15 | θ4 |
Joint 5 | −π/2 | 0 | 94.65 | θ5 |
Joint 6 | 0 | 0 | 82.3 | θ6 |
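The D-H parameters above feed directly into forward kinematics. The sketch below assumes the standard D-H convention; the twist angles for joints 1, 4, and 5 were lost in extraction, so the values π/2, π/2, and −π/2 used here are assumptions consistent with this kinematic chain, not the paper's stated figures.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard D-H link transform: Rot_z(theta) Trans_z(d) Trans_x(a) Rot_x(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# (alpha/rad, a/mm, d/mm) per joint, following the table above; the alpha
# values for joints 1, 4, and 5 are assumed (missing from the extracted table).
DH = [
    (np.pi / 2,     0.0,  89.159),
    (0.0,        -425.0,   0.0),
    (0.0,       -392.25,   0.0),
    (np.pi / 2,     0.0, 109.15),
    (-np.pi / 2,    0.0,  94.65),
    (0.0,           0.0,  82.3),
]

def forward_kinematics(thetas):
    """Compose the six link transforms into the base-to-flange pose."""
    T = np.eye(4)
    for theta, (alpha, a, d) in zip(thetas, DH):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

T = forward_kinematics(np.zeros(6))
position = T[:3, 3]  # end-effector position in mm at the zero configuration
```

Chaining the six link transforms in order gives the flange pose for any joint vector; the inverse-kinematics step in Section 3.2.2 solves the reverse problem against the same parameter set.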
No. | Content | Parameter |
---|---|---|
1 | Arm Length | 850 mm |
2 | Weight | 18.4 kg |
3 | Maximum Payload | 5 kg |
4 | Repeatability of Joint Position | 0.03 mm |
5 | Joint Speed | 180°/s |
6 | Rotation Range | ±360° |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, W.; Feng, X.; Cao, L.; Wang, T.; Wang, G.; Chen, Y. Visual Localization and Path Planning for a Dual-Arm Collaborative Pottery Robot. Symmetry 2025, 17, 532. https://doi.org/10.3390/sym17040532