1. Introduction
In traditional off-line welding modes, the trajectory of the welding torch must be set in advance: the motion trajectory is calculated from measurements taken beforehand and then passed to the robot. However, such methods cannot adjust the position of the welding torch in response to conditions during the welding process, which causes precision problems. The development of automation technology has had a huge impact on the field of robotic welding, and welding automation solutions based on different physical features are constantly being proposed. Among them, vision-based welding automation has unique advantages in terms of feature extraction [1]. Compared with traditional welding solutions, vision-based automated welding significantly improves work efficiency and welding quality. Traditional quality improvement strategies mainly rely on additional equipment and still require a large amount of manual work. In contrast, vision-based automated welding robots can select welding paths and welding parameters for different welding conditions without relying on experience to set the relevant parameters step by step [2]. Vision-based welding automation approaches can easily and intuitively obtain image features of the welding arc and seam, which greatly facilitates the feature extraction required for subsequent welding seam tracking.
When designing vision-based seam tracking systems, certain problems must also be solved. To obtain high-precision tracking results, the camera’s field of view must be kept close to the molten pool. However, the closer the camera is to the molten pool, the stronger the interference from the arc light, which makes the welding seam more difficult to detect [3]. In addition, the features of the welding seam are inconspicuous because of the tight butt joint of the splice plates and the small groove. Effective solutions to these problems are required when designing vision-based weld tracking systems.
Depending on whether it requires an auxiliary light source, a welding seam tracking system can be classified as an active vision or a passive vision system. In active vision welding seam tracking, structured laser light is generally used as the auxiliary light source, and the features of the structured light are extracted to complete the seam tracking. Kawahara built a welding seam tracking system with a laser line-structured light and used image sequence information to eliminate the noise in the weld image [4]. Kim et al. adopted a texture analysis method to process the laser stripes in an image, which reduced the influence of the strong arc light and metal spatter and improved the robustness of the system [5]. Muhammad et al. used an improved Otsu algorithm to segment the laser stripes and complete the fitting of the laser lines [6]. Yang et al. proposed a seam tracking system based on an adaptive Hough transform to achieve real-time extraction of laser stripe features [7]. Zou et al. applied a continuous convolution operator tracker (CCOT) in their system to achieve real-time seam tracking, with Histogram of Oriented Gradients (HOG) features used for feature extraction [8]. The rapid development of convolutional neural networks has brought a new direction to seam tracking. Xiao et al. built a seam tracking system based on Faster R-CNN that can automatically identify the seam type and extract the seam edge [9]. Zou et al. achieved welding seam detection with a Single Shot MultiBox Detector (SSD) using a multi-feature fusion network [10]. Zhao et al. proposed an image segmentation method based on deep learning to extract welding seam features, which achieves highly robust welding seam tracking [11]. In summary, active vision systems mainly focus on improving laser line feature extraction to obtain more precise tracking results.
Since active vision relies on the illumination of an auxiliary light source, the seam features are more obvious. However, active-vision-based systems require additional structured light equipment, and when building such systems it is necessary to ensure the coordination of the welding torch, laser, and camera. The seam position predicted by an active vision system is relatively far from the torch position, because the features of the laser line are disturbed by the melt pool if the laser line is projected close to the welding torch. Therefore, the real-time performance of active vision systems is affected. Passive vision-based systems have advantages in actual production because they avoid the additional cost of an auxiliary light source and their structure is relatively simple. Ge et al. performed grayscale feature extraction on the molten pool area and determined the welding offset by extracting the center and boundary positions of the molten pool [12]. Wei et al. detected the seam edges with Sobel and Canny operators to determine the seam position [13]. Xu et al. used an improved Canny edge detection method to detect the edges of the seam and arc within two regions of interest (ROIs) and calculated the offset of the welding torch from the seam [14]. Shao et al. designed an image processing algorithm based on the particle filter method to track the seam [15]. Chen et al. achieved welding seam tracking based on Mask R-CNN to segment the molten pool area, with a Hough line transform used to fit the seam line [16]. Current research on passive vision welding seam tracking typically fits a line to the seam region directly and then segments the weld pool or arc region, so the molten pool and the seam require different image processing methods. Because of the ROI processing, none of these methods adapt well when the torch deviates too far from the seam.
To address these problems in current passive vision welding seam tracking research, this paper proposes a passive vision welding seam tracking system based on a semantic segmentation neural network, which performs end-to-end image segmentation. The aim of our work is to detect the position of the welding seam in real time and guide the path of the welding torch. The proposed method segments images based on deep learning and can directly distinguish the welding arc and seam in the images. After the image segmentation, connected component analysis is used to remove mis-segmented regions in the image. Then, the positions of the welding seam and the welding torch are calculated from the semantic image. Finally, the offset between the welding torch and the seam is calculated to accomplish seam tracking, and a filter method is proposed to improve the precision.
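As a minimal sketch of the mis-segmentation cleanup step, the snippet below uses OpenCV’s connected component analysis to keep only the largest region of a predicted class mask; the function name and the largest-component rule are our illustration, since the paper does not specify its exact implementation.

```python
import cv2
import numpy as np

def keep_largest_component(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest connected region of a binary class mask.

    mask: uint8 array where 1 marks pixels predicted as one class
    (e.g., seam or arc). Smaller blobs are treated as mis-segmented
    noise and removed.
    """
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if num <= 1:  # only background present
        return mask
    # stats[:, cv2.CC_STAT_AREA] holds each component's pixel count;
    # index 0 is the background, so search from index 1 onward.
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return (labels == largest).astype(np.uint8)
```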
4. Experiment and Analysis
In order to observe the effect of this method in welding seam tracking, we designed welding experiments for verification. The parameters of the welding experiments are shown in Table 1.
In the welding experiments, the welding torch moves along a preset trajectory. V-groove splice plates with groove angles of 30 and 45 degrees are used, and the preset trajectory is offset from the actual seam in the X direction. The teaching trajectories fall into the following categories: no deviation from the actual seam, and parallel to the actual seam with offsets of 0.5 mm, 1 mm, and 2 mm in the X direction. After arcing, the industrial camera continuously images the arc region, so image information for the different offset situations can be obtained.
During the welding process, the industrial camera is fixed on the welding torch, so the relative positions of the camera and the welding torch remain unchanged. Therefore, when calculating the offset between the welding seam and the welding torch, it is not necessary to convert the information from two dimensions to three; only the mapping between the offset in the image and the actual offset is needed. In the experiments, the field of view of the camera is small and the calculated region lies roughly in the center of the image, so the influence of image distortion can be ignored. Because the camera and the welding torch are relatively still, the pixel size can be measured as 0.06 mm/pixel by placing a ruler in the welding area.
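For illustration (the names below are ours, not the paper’s), this mapping from an image-space offset to a physical offset reduces to a single scale factor:

```python
MM_PER_PIXEL = 0.06  # scale measured with a ruler placed in the welding area

def image_offset_to_mm(seam_x_px: float, torch_x_px: float) -> float:
    """Convert the torch-seam offset from image pixels to millimetres.

    A single constant suffices because the camera is rigidly fixed to
    the torch, so no 2D-to-3D conversion or distortion correction is
    needed in the central image region.
    """
    return (seam_x_px - torch_x_px) * MM_PER_PIXEL
```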
In order to verify the superiority of the segmentation effect of BiseNetV2 with OHEM, this paper compares the training results of BiseNetV2 and ICNet with our method. The segmentation effect is shown in Figure 8, and the intersection over union (IoU) results are shown in Table 2.
The results in the table show that the mIoU of BiseNetV2 with OHEM is higher than that of BiseNetV2 and ICNet. Moreover, the IoU of the seam class is much higher than that of BiseNetV2 and ICNet. The reason is that the seam occupies far fewer pixels in the collected images than the arc, and it is difficult to extract image features from the seam region. Therefore, during semantic segmentation, the difficult pixels are mainly concentrated in the seam region. OHEM focuses the training of the network parameters on these hard pixels, which improves the segmentation of the seam. BiseNetV2 with OHEM achieves about 57 FPS with PyTorch 1.7.0 and CUDA 11.0 on an NVIDIA RTX 3090, which enables real-time segmentation.
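For reference, a minimal PyTorch sketch of an OHEM cross-entropy loss is given below; the top-k selection and the keep ratio are our assumptions rather than the paper’s exact training configuration.

```python
import torch
import torch.nn.functional as F

def ohem_cross_entropy(logits, labels, keep_ratio=0.25, ignore_index=255):
    """Cross-entropy with online hard example mining (OHEM).

    logits: (N, C, H, W) raw scores from the segmentation network.
    labels: (N, H, W) ground-truth class indices.
    keep_ratio: fraction of valid pixels kept for the backward pass
    (an assumed hyperparameter, not the paper's reported value).
    """
    # Per-pixel loss with no reduction, so pixels can be ranked by difficulty.
    pixel_loss = F.cross_entropy(
        logits, labels, reduction="none", ignore_index=ignore_index
    ).reshape(-1)
    valid = labels.reshape(-1) != ignore_index
    pixel_loss = pixel_loss[valid]
    # Keep only the hardest (highest-loss) pixels; in this task those
    # are mostly seam pixels, which are rare compared with the arc.
    k = max(1, int(keep_ratio * pixel_loss.numel()))
    hard_loss, _ = torch.topk(pixel_loss, k)
    return hard_loss.mean()
```

Because only the hardest pixels contribute to the loss, the rare seam pixels are not drowned out by the abundant, easy arc and background pixels.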
From the semantic segmentation results of BiseNetV2 with OHEM, the offset between the welding seam and the welding torch can be calculated using the method in Section 3.3. The errors for each group of experiments are shown in Table 3, and the corresponding offset prediction diagrams are shown in Figure 9.
Because the distance between the groove and the upper edge of the 45 degree splice plate is small, the molten pool would not fall entirely within the groove if the preset offset were too large. Therefore, the preset offset of the 45 degree splice plate is relatively small. The results in the table show that the error tends to increase with the offset. The reason for this trend is that when the welding torch deviates from the seam, the brightness of the seam area decreases, feature extraction of the seam during semantic segmentation becomes more difficult, and the segmentation accuracy drops.
To calculate the offset between the welding seam and the welding torch more precisely, a filter method is added to the offset prediction in this paper. After the offset of the current frame is predicted, the mean and variance of the offsets from the previous five frames are calculated. The difference between the current frame’s offset and the mean is then compared with the variance: if the difference is large, a median filter is applied; otherwise, a mean filter is applied.
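A minimal sketch of this hybrid filter is shown below; the switching factor k and the exact window handling are our assumptions, since the paper only states that the difference is compared with the variance.

```python
from collections import deque

import numpy as np

def filter_offset(current_mm: float, history: deque, k: float = 1.0) -> float:
    """Hybrid median/mean filter for the predicted torch-seam offset.

    history: deque(maxlen=5) holding the offsets (mm) of the previous
    five frames. k scales the variance used as the switching threshold
    (assumed; the paper does not give the exact constant).
    """
    mean, var = np.mean(history), np.var(history)
    window = list(history) + [current_mm]
    if abs(current_mm - mean) > k * var:
        # The current prediction jumps away from recent frames:
        # treat it as an outlier and suppress it with the median filter.
        filtered = float(np.median(window))
    else:
        # Consistent with recent frames: smooth with the mean filter.
        filtered = float(np.mean(window))
    history.append(current_mm)  # maxlen=5 discards the oldest frame
    return filtered
```

Here, history would be initialized as deque(maxlen=5) and seeded with the first five predictions before filtering begins.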
The offset prediction results after filtering are shown in Figure 10, and the errors in each experiment are shown in Table 4.
Figure 10 and Table 4 show that the average error of the offset prediction decreases by 0.03 mm after filtering. The maximum error stays within 0.4 mm, which meets the precision requirements for automatic welding.
Author Contributions
Conceptualization, J.L., X.C. and Z.Z.; methodology, J.L., A.Y. and Z.Z.; software, A.Y.; formal analysis, J.L., A.Y. and Z.Z.; validation, A.Y., X.X. and R.L.; investigation, J.L., X.C. and Z.Z.; resources, J.L., X.C. and Z.Z.; data curation, A.Y., X.X. and Z.Z.; writing—original draft preparation, A.Y.; writing—review and editing, J.L., A.Y. and Z.Z.; visualization, J.L. and A.Y.; supervision, J.L., X.C. and Z.Z.; project administration, Z.Z.; funding acquisition, J.L. and Z.Z. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported in part by the National Natural Science Foundation of China (61727802, 61901220 and 62101265), the China Postdoctoral Science Foundation (2021M691592) and the Fundamental Research Funds for the Central Universities (No.30922010705).
Data Availability Statement
The data that support the findings of this study are available from the corresponding author.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Yang, L.; Liu, Y.; Peng, J. Advances techniques of the structured light sensing in intelligent welding robots: A review. Int. J. Adv. Manuf. Technol. 2020, 110, 1027–1046.
- Stavridis, J.; Papacharalampopoulos, A.; Stavropoulos, P. Quality assessment in laser welding: A critical review. Int. J. Adv. Manuf. Technol. 2018, 94, 1825–1847.
- Yang, L. Study on Seam Laser Tracking System Based on Image Recognition; Shandong University: Jinan, China, 2006.
- Kawahara, M. Tracking control system using image sensor for arc welding. Automatica 1983, 4, 22–26.
- Kim, J.S.; Son, Y.T.; Cho, H.S.; Koh, K.I. A robust method for vision-based seam tracking in robotic arc welding. In Proceedings of the Tenth International Symposium on Intelligent Control, Monterey, CA, USA, 27–29 August 1995; pp. 363–368.
- Muhammad, J.; Altun, H.; Abo-Serie, E. A robust butt welding seam finding technique for intelligent robotic welding system using active laser vision. Int. J. Adv. Manuf. Technol. 2018, 94, 13–29.
- Yang, S.-M.; Cho, M.-H.; Lee, H.-Y.; Cho, T.-D. Weld line detection and process control for welding automation. Meas. Sci. Technol. 2007, 18, 819.
- Zou, Y.; Chen, T. Laser vision seam tracking system based on image processing and continuous convolution operator tracker. Opt. Lasers Eng. 2018, 105, 141–149.
- Xiao, R.; Xu, Y.; Hou, Z.; Chen, C.; Chen, S. An adaptive feature extraction algorithm for multiple typical seam tracking based on vision sensor in robotic arc welding. Sens. Actuators A Phys. 2019, 297, 111533.
- Zou, Y.; Zhu, M.; Chen, X. A robust detector for automated welding seam tracking system. J. Dyn. Syst. Meas. Control 2021, 143, 7.
- Zhao, Z.; Luo, J.; Wang, Y.; Bai, L.; Han, J. Additive seam tracking technology based on laser vision. Int. J. Adv. Manuf. Technol. 2021, 116, 197–211.
- Ge, J.; Zhu, Z.; He, D.; Chen, L. A vision-based algorithm for seam detection in a PAW process for large-diameter stainless steel pipes. Int. J. Adv. Manuf. Technol. 2005, 26, 1006–1011.
- Wei, S.; Kong, M.; Lin, T.; Chen, S. Autonomous seam acquisition and tracking for robotic welding based on passive vision. In Robotic Welding, Intelligence and Automation; Springer: Berlin/Heidelberg, Germany, 2011; pp. 41–48.
- Xu, Y.; Fang, G.; Chen, S.; Zou, J.J.; Ye, Z. Real-time image processing for vision-based weld seam tracking in robotic GMAW. Int. J. Adv. Manuf. Technol. 2014, 73, 1413–1425.
- Shao, W.; Liu, X.; Wu, Z. A robust weld seam detection method based on particle filter for laser welding by using a passive vision sensor. Int. J. Adv. Manuf. Technol. 2019, 104, 2971–2980.
- Chen, Y.; Shi, Y.; Cui, Y.; Chen, X. Narrow gap deviation detection in keyhole TIG welding using image processing method based on mask-RCNN model. Int. J. Adv. Manuf. Technol. 2021, 112, 2015–2025.
- Yu, C.; Gao, C.; Wang, J.; Yu, G.; Shen, C.; Sang, N. BiSeNet V2: Bilateral network with guided aggregation for real-time semantic segmentation. Int. J. Comput. Vis. 2021, 129, 3051–3068.
- Wang, R.J.; Li, X.; Ling, C.X. Pelee: A real-time object detection system on mobile devices. Adv. Neural Inf. Process. Syst. 2018, 31.
- Yu, C.; Wang, J.; Peng, C.; Gao, C.; Yu, G.; Sang, N. Learning a discriminative feature network for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1857–1866.
- Shrivastava, A.; Gupta, A.; Girshick, R. Training region-based object detectors with online hard example mining. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 761–769.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).