*Article* **A Novel Seam Tracking Technique with a Four-Step Method and Experimental Investigation of Robotic Welding Oriented to Complex Welding Seam**

**Gong Zhang 1,2, Yuhang Zhang 1,3, Shuaihua Tuo 1,3, Zhicheng Hou 1,\*, Wenlin Yang 1, Zheng Xu 1, Yueyu Wu 1, Hai Yuan <sup>1</sup> and Kyoosik Shin <sup>4</sup>**


**Abstract:** The seam tracking operation is essential for extracting welding seam characteristics that instruct the motion of a welding robot along the welding seam path. The chief tasks of seam tracking can be divided into four parts: detection of the starting and ending points, weld edge detection, joint width measurement, and, finally, determination of the welding path position with respect to the welding robot coordinate frame. A novel seam tracking technique with a four-step method is introduced. A laser sensor is used to scan grooves to obtain profile data, and the data are processed by a filtering algorithm to smooth the noise. A second derivative algorithm is proposed to initially position the feature points, and linear fitting is then performed to achieve precise positioning. The groove data are transformed into the robot's welding path through sensor pose calibration, which realizes real-time seam tracking. Experiments were carried out to verify the tracking effect on both straight and curved welding seams. Results show that the average deviations in the *X* direction are about 0.628 mm and 0.736 mm after the initial positioning of feature points. After precise positioning, the average deviations are reduced to 0.387 mm and 0.429 mm, decreasing the tracking errors by 38.38% and 41.71%, respectively. Moreover, after precise positioning, the average deviations in both the *X* and *Z* directions of both straight and curved welding seams are no more than 0.5 mm. The proposed four-step seam tracking method is therefore feasible and effective, and provides a reference for future seam tracking research.

**Keywords:** welding robot; seam tracking; laser sensor; feature point extracting; complex welding seam

#### **1. Introduction**

Robots have become crucial for modern welding because manual welding yields low production rates, whereas high-volume robotic production is profitable [1]. Robotic welding brings several advantages: it improves efficiency, weld quality, adaptability, and workspace utilization, and it reduces labor costs as well as unit cost [2].

However, most welding robots still work in "teach and playback" mode, and their adaptability is insufficient when the welding object or other conditions change [3]. Welding is an empirical process influenced by numerous factors, such as pre-machining errors, the fitting of workpieces, and in-process defects, all of which can cause variation in the welding seam. Welding robots in teach and playback mode cannot compensate for such variation and typically produce weldments with many defects and poor penetration [1].

**Citation:** Zhang, G.; Zhang, Y.; Tuo, S.; Hou, Z.; Yang, W.; Xu, Z.; Wu, Y.; Yuan, H.; Shin, K. A Novel Seam Tracking Technique with a Four-Step Method and Experimental Investigation of Robotic Welding Oriented to Complex Welding Seam. *Sensors* **2021**, *21*, 3067. https://doi.org/10.3390/s21093067

Academic Editors: Carmelo Mineo and Yashar Javadi

Received: 1 April 2021; Accepted: 25 April 2021; Published: 28 April 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

There are generally three stages in robotic welding: (i) preparation—calibration, robot programming, and welding parameter and workpiece setting; (ii) welding—seam tracking and real-time alternation of welding parameters; (iii) analysis—weld quality inspection [4]. The seam tracking operation is essential for extracting weld seam characteristics, which can be fed into the controller of the welding robot to instruct the motion of the robot along the welding seam path. Seam tracking technology with laser vision sensing has the advantages of non-contact measurement, fast speed, and high precision, which are the keys to realizing welding automation and intelligence [5,6].

In order to fulfill the required welding accuracy for robotic welding, a seam tracking algorithm that enables the robot to plan its path along the actual welding line is necessary. Therefore, many studies have been conducted on automatic seam tracking using sensors such as tactile, touch, and probe sensors, vision sensors [7,8], laser sensors [9,10], arc sensors [11,12], electromagnetic sensors [13,14], and ultrasonic sensors [15,16]. Sensors play a very important role in robotic seam tracking; the chief tasks are weld starting and ending point detection, weld edge detection, and joint width measurement.

A basic laser sensor consists of three parts: a laser diode, a CCD camera, and a filter. The laser diode produces a stripe or dot that is observed by the camera. The CCD camera is fixed at an angle to the laser to properly capture the projection of the laser on the workpiece [17]. A welding seam tracking system based on laser vision combines laser measurement and computer vision technology. It has the advantages of rich information acquisition, obvious welding seam characteristics, and strong anti-interference ability [18,19], which make it suitable for real-time tracking systems. A mathematical model that transforms the pixel coordinates of the laser feature points into the three-dimensional coordinates of the welding feature points, based on the mechanical structure of the sensor, was proposed in [20].

Chen et al. [21] proposed a feature point positioning method that only needs two profile scans, which can effectively calculate the initial position of the weld. Chang et al. [22] filtered, differentiated, and convolved the weld profile data, and located the feature points by finding the local maxima. Wang et al. [23] established welding seam profile detection and feature point extraction algorithms based on a NURBS-snake and visual attention model, and verified their effectiveness. Matsui et al. [24] introduced an adaptive welding robot system controlled by a laser sensor for welding thin plates with gap variation in a single pass.

For a flexible welding process, Ciszak et al. [25] developed a low-cost shape identification system for programming industrial robots to weld in two dimensions. The system detected geometric shapes drawn by humans and approximated them; based on this, the robot could weld the same profiles on a two-dimensional plane. Teach-and-playback programming, used in many welding robot applications, is time-consuming because the robot must be reprogrammed each time it deals with a new task. Hairol et al. [26] suggested an alternative approach that can automatically recognize and locate the butt-welding position at the starting, middle, auxiliary, and end points under three conditions, namely (i) straight, (ii) saw tooth, and (iii) curved joints, without any prior knowledge of the shapes involved. As an automatic welding process may experience various disturbances, Li et al. [27] proposed a robust seam identification method based on cross-modal perception so as to precisely identify and automatically track the welding seam.

Wojciechowski et al. [28] proposed the method of automatic robotic assembly of two or more parts placed without fixing instrumentation and positioning on the pallet, which could support a robotic assembly process based on data from optical 3D scanners. The sequence of operations from scanning to place the parts in the installation position by an industrial robot was developed. Suszynski et al. [29] presented the concept of using an industrial robot equipped with a triangulation scanner in the assembly process in order to minimize the number of clamps that could hold the units in a particular position in space based on the proposed multistep processing algorithm.

These efforts have brought about many improvements in locating the feature points of the target weldment. However, positioning accuracy remains limited by factors such as changes in the welding type (especially for complex welding seams) or surface defects of the weldment.

Given these circumstances, we introduce a novel seam tracking technique with a four-step method. First, a laser sensor is used to scan the groove of the weldment to collect profile data; then, the data are processed by a filtering algorithm to smooth the noise; next, a second derivative algorithm initially locates the feature points, followed by linear fitting to locate them accurately; finally, according to the results of the sensor pose calibration, the three-dimensional coordinates in the base coordinate system of the welding robot are calculated from the two-dimensional coordinates of the image feature points, and the path planning is completed, targeting both the straight and curved forms of the Y-shaped groove. The proposed seam tracking technique is tested and verified by experimental investigation.

Our proposed seam tracking technique with a four-step method utilizes edge detection and curvature recognition based on laser scan data. The offset of the welding robot's motion with respect to the welding seam is measured by a laser sensor. By adding a differential point searching method, the feature points of the cross-section of the welding seam are found. Compared to other seam tracking algorithms, we demonstrate improvement in the welding accuracy achieved for complex welding seams through theoretical proof, simulation, and experiments.

This paper is organized as follows: Section 2 presents the seam tracking system composition; Section 3 introduces the seam tracking methodology with four steps; Section 4 shows the results of the experimental investigation based on the proposed seam tracking technique; Section 5 gives the conclusion and perspective.

#### **2. Seam Tracking System Composition**

The experimental platform of the six-axis robot arm seam tracking system is detailed in Figure 1. As evident in Figure 1, this experimental platform is mainly composed of a motion execution mechanism with six degrees of freedom, a laser vision sensor, a D/A conversion module, an industrial computer, a robotic controller, and welding equipment, i.e., a welding power supply, wire feeding device, etc.

The execution mechanism is composed of two welding robots, and each of them has six degrees of freedom. The offset of the welding robot's motion with respect to the welding seam is measured by a laser vision sensor. Through robotic welding experiments, images of molten pool morphology and welding geometry under different welding parameters can be obtained. The main tasks for seam tracking would be weld starting and ending point detection, weld edge detection, joint width measurement, and weld path position determination with regard to welding robot co-ordinate frame.

**Figure 1.** Diagram of seam tracking system.

#### **3. Seam Tracking Methodology with Four Steps**

In this paper, we introduce a novel seam tracking technique with a four-step method: scanning, filtering, feature point extraction, and path planning. First, the profile information is obtained by scanning the groove with a laser sensor; then, the data are filtered to smooth the noise; next, the feature points are extracted by a combination of the second derivative algorithm and linear fitting; finally, the data of the feature points are converted into the welding seam path of the robot, guiding the welding torch and realizing real-time tracking of the welding seam. The flowchart of the proposed four-step method is shown in Figure 2.

**Figure 2.** Flowchart of the four-step method for (**a**) scanning; (**b**) filtering; (**c**) feature points extracting; and (**d**) path planning.

#### *3.1. Scanning and Filtering*

The purpose of scanning is to obtain the original data of the weldment groove profile, which is the basis for realizing seam tracking [30]. The laser sensor obtains the distance information of the measured object based on the principle of triangulation and then processes the scan data to obtain the profile feature of the measured object. While scanning, the sensor is fixed at the end-effector of the robot and parallel to the welding torch to ensure that the line laser is perpendicular to the measured object [31], covering the groove to the greatest extent, and at the same time, the welding robot is constantly moved to obtain the overall shape of the welding seam.

A combination of a limiting filter and a Gaussian filter is used to process the groove profile data obtained by scanning. The former removes pulse interference caused by accidental factors; the latter smooths the data [32]. Limiting filtering compares the absolute difference between two adjacent sampled values with a specified threshold. Its principle can be expressed as [33]:

$$y = \begin{cases} y_n, & |y_n - y_{n-1}| \le \Delta T \\ y_{n-1}, & |y_n - y_{n-1}| > \Delta T \end{cases} \tag{1}$$

where *y<sub>n</sub>* and *y<sub>n−1</sub>* are the current and previous sampled signal values, respectively, and Δ*T* represents the specified threshold.
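The limiting rule of Equation (1) can be sketched in a few lines of Python (a minimal sketch; the function name and sample values are ours, not the paper's):

```python
import numpy as np

def limiting_filter(samples, threshold):
    """Limiting filter per Equation (1): if the jump between the current
    sample and the previous accepted value exceeds the threshold, keep
    the previous value (the jump is treated as pulse interference)."""
    out = np.empty(len(samples), dtype=float)
    out[0] = samples[0]
    for n in range(1, len(samples)):
        if abs(samples[n] - out[n - 1]) <= threshold:
            out[n] = samples[n]   # accept the new sample
        else:
            out[n] = out[n - 1]   # reject the pulse
    return out

# A lone spike is rejected; genuine small steps pass through unchanged.
z = [10.0, 10.1, 25.0, 10.2, 10.3]
filtered = limiting_filter(z, threshold=1.0)  # spike at 25.0 is clamped
```

Note that this variant compares against the previously *accepted* value, so an isolated spike does not poison subsequent comparisons; whether the paper chains on the raw or filtered previous sample is not stated.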

Gaussian filtering is a type of linear smoothing filtering method that selects weights according to the shape of the Gaussian function. It is very effective in suppressing the noise that obeys the normal distribution [34], and the Gaussian function has good properties of symmetry, differentiability, and integrability. The function can accurately identify the discontinuous points of the signal, which is very beneficial for the subsequent feature points extracting. The expression of the one-dimensional Gaussian function can be described as [35]:

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \tag{2}$$

where *μ* is the mean value, which determines the position of the function, and *σ* is the standard deviation, which determines the magnitude of the distribution.
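A minimal smoothing sketch built on Equation (2); the kernel truncation radius, the σ value, and the synthetic noisy profile are our own assumptions for illustration:

```python
import numpy as np

def gaussian_smooth(signal, sigma=2.0):
    """Smooth a 1-D profile by convolving with a normalized Gaussian
    kernel sampled from Equation (2) with mu = 0 (centered kernel),
    truncated at 3*sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()          # unit gain: flat regions are preserved
    return np.convolve(signal, kernel, mode="same")

# Normally distributed noise on a flat profile is strongly attenuated.
rng = np.random.default_rng(0)
clean = np.full(200, 3.0)
noisy = clean + rng.normal(0.0, 0.2, clean.size)
smooth = gaussian_smooth(noisy, sigma=3.0)
# Compare away from the borders, where np.convolve's zero padding distorts.
noise_before = np.std(noisy[20:-20] - 3.0)
noise_after = np.std(smooth[20:-20] - 3.0)
```

In practice the smoothed signal retains the discontinuities needed for the subsequent feature point extraction while the standard deviation of the noise drops substantially.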

#### *3.2. Feature Point Extracting*

The feature points of the weldment are generally the corner points of the groove section, and their information reflects the overall shape of the groove profile [36], so feature point extraction is required. According to the cross-sectional characteristics of the weldment groove, combined with the properties of function discontinuities listed in Table 1, the groove feature points can be classified as follows: **A**, **B**, **E**, **F** are the first type of feature points, and **C**, **D** are the second type, as shown in Figure 3.

**Table 1.** Properties of discontinuous points of function.


**Figure 3.** Classification of groove feature points.

Based on the above analysis, the feature points can be located by determining the types of feature points contained in the groove section and then differentiating to find the extreme points.

#### 3.2.1. Initial Positioning of Feature Points

The preliminary positioning method for the groove feature points is as follows: first, the original data are filtered; then, the first derivative is obtained by the forward difference method and its extreme points are found to determine the first type of feature points, as shown in Figure 4. The abscissa and ordinate represent the *X* and *Z* axes of the sensor coordinate system, respectively.

**Figure 4.** Initial positioning of feature points for (**a**) the first type of feature points; and (**b**) all feature points.

It can be seen from the above figures that the maxima of the first derivative fall between line segments **BC** and **DE** and fail to accurately correspond to **B** and **E**. This is because the groove of an actual weldment must be machined, so its blunt edge is not an ideal vertical line but a diagonal one. Therefore, the second type of feature points is transformed into the first type: the first derivative is differenced again to obtain the second derivative, and the points with the largest values are found to locate all the feature points, as shown in Figure 4. Thus, the six feature points of the trapezoidal groove are preliminarily determined; their location information is listed in Table 2.
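The initial positioning step above can be sketched as follows. The synthetic trapezoidal profile, the neighbor-suppression rule, and the choice of the number of corners *k* are illustrative assumptions, not the paper's data:

```python
import numpy as np

def initial_feature_points(z, k):
    """Initial positioning sketch (Section 3.2.1): forward-difference
    derivatives of the filtered profile z; the k largest |second
    derivative| samples are taken as candidate corner points."""
    d1 = np.diff(z)                 # first derivative (forward difference)
    d2 = np.diff(d1)                # second derivative
    order = np.argsort(np.abs(d2))[::-1]
    picks = []
    for i in order:
        # Suppress immediate neighbors so one corner yields one point
        # (an assumed tie-breaking rule, not stated in the paper).
        if all(abs(i - p) > 2 for p in picks):
            picks.append(i)
        if len(picks) == k:
            break
    # Two diffs shift indices by one relative to the original profile.
    return sorted(int(p) + 1 for p in picks)

# Synthetic trapezoidal groove: flat, slope down, flat bottom, slope up, flat.
x = np.arange(60, dtype=float)
z = np.piecewise(x,
                 [x < 15, (x >= 15) & (x < 25), (x >= 25) & (x < 35),
                  (x >= 35) & (x < 45), x >= 45],
                 [5.0, lambda x: 5 - 0.5 * (x - 15), 0.0,
                  lambda x: 0.5 * (x - 35), 5.0])
corners = initial_feature_points(z, k=4)  # [15, 25, 35, 45]
```

On this clean synthetic profile the four slope changes are recovered exactly; on real scans the filtering of Section 3.1 is needed first, and the precise positioning of Section 3.2.2 refines the result.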

**Table 2.** Results of initial positioning.


3.2.2. Precise Positioning of Feature Points

Due to the defects on the surface of the weldment, as given in Figure 5, the feature points obtained through preliminary positioning are **b** and **c**, while the true feature point should be **a**, which is clearly a deviation. Therefore, on the basis of preliminary positioning, linear fitting is performed on each segment of the groove to accurately locate the feature points.

**Figure 5.** Defects on the surface of the weldment.

Suppose any straight line to be fitted has the equation *y* = *ax* + *b*; its parameters can be calculated as [37]:

$$\begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{n} x_i^2 & \sum_{i=1}^{n} x_i \\ \sum_{i=1}^{n} x_i & n \end{bmatrix}^{-1} \cdot \begin{bmatrix} \sum_{i=1}^{n} x_i y_i \\ \sum_{i=1}^{n} y_i \end{bmatrix}, \tag{3}$$

where *a* is the slope, *b* is the intercept, (*xi*, *yi*) is the point passing through the straight line, and *n* is the number of points.
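Equation (3) amounts to solving the 2×2 normal equations of ordinary least squares, which can be sketched as (function name and sample points are ours):

```python
import numpy as np

def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b via the normal equations
    of Equation (3)."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    n = len(xs)
    A = np.array([[np.sum(xs**2), np.sum(xs)],
                  [np.sum(xs),    n]])
    rhs = np.array([np.sum(xs * ys), np.sum(ys)])
    a, b = np.linalg.solve(A, rhs)   # solve, rather than invert explicitly
    return a, b

a, b = fit_line([0, 1, 2, 3], [1.0, 3.0, 5.0, 7.0])  # exact line y = 2x + 1
```

One plausible way to obtain a precise feature point from two adjacent fitted segments is their intersection, *x*\* = (*b*₂ − *b*₁)/(*a*₁ − *a*₂); the paper does not spell this step out, so treat it as an assumption.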

The fitting results are shown in Figure 6, and the relevant parameters of the straight line are illustrated in Table 3.

**Figure 6.** Fitting results.

**Table 3.** Parameters of fitting straight line.


Here, *SSE* is the sum of squared errors between the fitted data and the corresponding original data points; the smaller the value, the better the fit. *R*-squared is the coefficient of determination, used to characterize the quality of the fit [38]; the closer its value is to 1, the better the fit. It is evident that each straight line is fitted well. The results of precise positioning of the feature points are listed in Table 4. Thus, feature point extraction for the profile of the trapezoidal groove section is completed.
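The two fit-quality measures used in Table 3 can be computed directly (a sketch; the sample data are invented for illustration):

```python
import numpy as np

def fit_quality(y, y_fit):
    """SSE and R-squared for a fitted segment. SSE is the sum of squared
    residuals; R^2 = 1 - SSE/SST, where SST is the total sum of squares
    about the mean of the original data."""
    y = np.asarray(y, dtype=float)
    y_fit = np.asarray(y_fit, dtype=float)
    sse = np.sum((y - y_fit) ** 2)
    sst = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - sse / sst
    return sse, r2

# A near-perfect fit yields a small SSE and R^2 close to 1.
y_data = np.array([1.0, 3.1, 4.9, 7.0])
y_line = np.array([1.0, 3.0, 5.0, 7.0])
sse, r2 = fit_quality(y_data, y_line)
```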

**Table 4.** Results of precise positioning.


#### *3.3. Path Planning*

Because the data measured by the laser sensor are based on their own coordinate system, it is necessary to convert the feature points to the base coordinate system of the welding robot through pose calibration [39].

The relationship between the two coordinate systems of the robot is depicted in Figure 7. The sensor calibration determines the transformation matrix *<sup>E</sup><sub>S</sub>T* of {*S*} relative to {*E*}.

**Figure 7.** Relationship between two coordinate systems.

This paper uses the multipoint method for calibration [40]. The main steps are as follows:



where *α*, *β*, *γ* are the rotation angles of the *X*, *Y*, and *Z* axes of the tool coordinate system {*E*}, respectively.

**Figure 8.** Laser sensor calibration for (**a**) base coordinates; and (**b**) sensor coordinates.

Then, *<sup>B</sup><sub>E</sub>T* can be simplified to

$${}^{B}_{E}T = \begin{bmatrix} {}^{B}_{E}R & {}^{E}P \\ 0 \;\; 0 \;\; 0 & 1 \end{bmatrix}, \tag{5}$$

where *<sup>E</sup>P* = (*x<sub>E</sub>*, *y<sub>E</sub>*, *z<sub>E</sub>*)<sup>T</sup>, that is, the position of point *P* in the tool coordinate system {*E*} after the coordinate system is switched.

According to the transformation relationship of point *P* in space:

$${}^{B}P = {}^{B}_{E}T \cdot {}^{E}_{S}T \cdot {}^{S}P, \tag{6}$$

where the definition of each parameter in the formula is consistent with the above.

Since *<sup>E</sup><sub>S</sub>T* contains 12 unknowns, at least 3 different fixed points need to be selected to solve the problem. The calibration results in this paper are as follows:

$${}^{E}_{S}T = \begin{bmatrix} 0.998 & -0.423 & -0.590 & 75.098 \\ -0.014 & 0.278 & -0.026 & 6.693 \\ 0.002 & 0.865 & -0.814 & 303.131 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{7}$$

At this point, the pose calibration of the sensor is completed. For any known point *<sup>S</sup>Q* in the sensor coordinate system, the formula to transform it into the robot base coordinate system can be written as

$${}^{B}Q = {}^{B}_{E}T \cdot {}^{E}_{S}T \cdot {}^{S}Q, \tag{8}$$

where *<sup>B</sup>Q* and *<sup>S</sup>Q* are, respectively, the positions of point *Q* in the coordinate system {*B*} and the coordinate system {*S*}; *<sup>E</sup><sub>S</sub>T* is the calibration result given in Equation (7); the definition and calculation of *<sup>B</sup><sub>E</sub>T* follow step 3.
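The chain of homogeneous transforms in Equations (6) and (8) can be sketched as below; the rotation and translation values in the usage example are illustrative placeholders, not the calibration result of Equation (7):

```python
import numpy as np

def homogeneous(R, p):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R
    and a 3-vector position p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def sensor_to_base(T_BE, T_ES, p_sensor):
    """Equation (8): map a point from the sensor frame {S} to the base
    frame {B} via the tool frame {E}: B_Q = B_E_T . E_S_T . S_Q."""
    q = np.append(p_sensor, 1.0)          # homogeneous coordinates
    return (T_BE @ T_ES @ q)[:3]

# Illustrative values: tool frame translated 100 mm along base X,
# sensor frame offset 50 mm along tool Z, no rotation.
T_BE = homogeneous(np.eye(3), [100.0, 0.0, 0.0])
T_ES = homogeneous(np.eye(3), [0.0, 0.0, 50.0])
p_base = sensor_to_base(T_BE, T_ES, np.array([1.0, 2.0, 3.0]))  # [101, 2, 53]
```

In the actual system, T_BE changes at every sampling instant as the robot moves, while T_ES is the fixed calibration matrix of Equation (7).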

#### **4. Experimental Procedures**

An experimental demonstration of the proposed four-step seam tracking method was carried out to guide the movement of the welding torch under actual testing conditions. Figure 9 shows the prototype of the whole experimental system, which mainly includes an ABB IRB 1410 welding robot, IRC5 controller, LS-100CN laser sensor, Ehave CM350 welding power supply, RS-485 communication module, and an industrial computer.

**Figure 9.** A prototype of the experimental system.

In this paper, two typical weldments made of A304 stainless steel are selected as the welding objects. The physical prototypes of the two typical welding grooves are illustrated in Figure 10, and the groove parameters of the straight and curved weldments are listed in Table 5.

**Figure 10.** Two typical welding grooves for (**a**) straight line; and (**b**) curve.

**Table 5.** Groove parameters of weldment.


When scanning the welding groove, the laser sensor is set to the trigger mode, and the welding robot is constantly moved to obtain the overall shape characteristics of the welding seam. The process of scanning two typical welding grooves by the laser sensor is represented in Figure 11.

**Figure 11.** Two typical welding grooves scanned by laser sensor: (**a**) straight line; (**b**) curve.

Before the experiment, we mark the starting and ending points of the welding path on the weldment; then, a section of motion trajectory is taught for the straight and curved grooves in "teach" mode, as shown in Figure 10. The red points are the teaching points, i.e., the positions of the end point of the robotic welding torch. Multiple teaching points are connected to form a welding trajectory, and the pose data of the taught trajectory in the welding torch coordinate system are recorded simultaneously and used as the reference for calculating the experimental deviation.

During the experiment, taking the straight groove as an example, the end-effector of the robot, i.e., the welding torch, first moves along the taught trajectory. When it reaches reference point **L1**, as shown in Figure 10a, the laser sensor is turned on to scan the welding groove and collect data. At the same time, the current tool coordinate system of the welding robot is switched to the end coordinate system; the position and posture data of the end coordinate system are obtained in real time through the API interface of the welding robot, with a sampling period consistent with that of the laser sensor.

The welding robot continues to move. When the end of the welding torch reaches reference point **L2**, as shown in Figure 10a, the laser sensor is turned off, the data transmission of the API interface is stopped, and the data collection is completed. From the feature points of the groove, the welding torch center point is calculated; from the position and posture data of the end coordinate system obtained via the API interface, the trajectory reference point is calculated. Through the calibration matrix of the laser sensor (Equation (7)), the position data of the welding torch center point are transformed into the welding robot end coordinate system, and then, through the calibration matrix of the welding torch, into the welding torch coordinate system.

After the above process, the groove data collected by the laser sensor are transformed into the center point data of the robotic welding torch, and the end coordinate system data collected by the API interface are transformed into the trajectory reference point data. The experimental results of two different welding grooves of straight and curved lines with both initial positioning and precise positioning using the proposed seam tracking method are compared in Figure 12.

**Figure 12.** Experimental results of (**a**) straight line with initial positioning; (**b**) straight line with precise positioning; (**c**) curve with initial positioning; and (**d**) curve with precise positioning.

The accuracy of the feature point positioning method is evaluated by comparing the deviation between the calculated welding center point and the actual welding torch end point. Here, the average deviation *d* (mm) is the average difference between each welding center point and the end point of the welding torch, and the deviation degree *p* (%) indicates the deviation in a given direction relative to the overall groove dimensions. They can be written as:

$$d_x = \frac{1}{n} \sum_{i=1}^n \left(x_{tcp(i)} - x_{t(i)}\right), \quad d_z = \frac{1}{n} \sum_{i=1}^n \left(z_{tcp(i)} - z_{t(i)}\right), \tag{9}$$

where *dx* and *dz* are the average deviation in the *X* and *Z* directions, respectively. *xtcp*(*i*) and *ztcp*(*i*) are the coordinates of the welding center point, *xt*(*i*) and *zt*(*i*) are the coordinates of the trajectory reference point, respectively. *n* is the number of points.

$$p_x = \frac{d_x}{l}, \quad p_z = \frac{d_z}{h}, \tag{10}$$

where *px* and *pz* are the deviation degrees in the *X* and *Z* directions, respectively; *l* is the total length of the groove, and *h* is the depth of the groove.
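Equations (9) and (10) can be sketched as below. Whether the paper averages signed or absolute differences is not stated, so this sketch uses signed differences, and the sample points are invented:

```python
import numpy as np

def deviation_metrics(tcp, ref, groove_length, groove_depth):
    """Average deviations (Eq. 9) and deviation degrees (Eq. 10).
    tcp, ref: (n, 2) arrays of (x, z) for the computed welding center
    points and the taught trajectory reference points. Deviation
    degrees are returned as fractions (multiply by 100 for percent)."""
    tcp = np.asarray(tcp, dtype=float)
    ref = np.asarray(ref, dtype=float)
    d_x = np.mean(tcp[:, 0] - ref[:, 0])   # average X deviation (mm)
    d_z = np.mean(tcp[:, 1] - ref[:, 1])   # average Z deviation (mm)
    p_x = d_x / groove_length              # relative to groove length l
    p_z = d_z / groove_depth               # relative to groove depth h
    return d_x, d_z, p_x, p_z

tcp_pts = [[0.4, 0.1], [1.5, 0.2], [2.6, 0.0]]
ref_pts = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
d_x, d_z, p_x, p_z = deviation_metrics(tcp_pts, ref_pts,
                                       groove_length=100.0, groove_depth=10.0)
```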

The comparative results of the different feature point positioning methods are depicted in Table 6. As can be seen from the figures and the table, the average deviations *dx* (mm) of the straight and curved welding seams in the *X* direction are relatively large when only initial positioning is carried out. After precise positioning, the average deviations are reduced to 0.387 mm and 0.429 mm, respectively. The experimental results are promising, in that the average deviations show significant decreases of 38.38% and 41.71%, respectively.

**Table 6.** Error analysis results.


It is worth noting that the average deviations in both the *X* and *Z* directions of the straight and curved welding seams after precise positioning are no more than 0.5 mm; this threshold is given by Kovacevic et al. [42] and fulfills the minimum accuracy requirements of robotic welding. Therefore, the proposed four-step seam tracking method is feasible and effective, and provides a reference for future seam tracking research.

#### **5. Conclusions**

A novel seam tracking technique and experimental investigation of robotic welding oriented to complex welding seam are proposed in this study. Conclusions are as follows:


**Author Contributions:** Conceptualization, G.Z. and Z.H.; methodology, G.Z. and S.T.; software, S.T., Y.Z., and Y.W.; validation, S.T. and Y.Z.; formal analysis, S.T., Z.X.; investigation, G.Z. and Z.H.; resources, S.T. and W.Y.; data curation, Y.Z., S.T., Y.W., and W.Y.; writing—original draft, S.T. and Y.Z.; writing—review and editing, G.Z., Z.H. and K.S.; visualization, S.T.; supervision, G.Z. and Z.H.; project administration, G.Z. and H.Y.; funding acquisition, G.Z. and Z.H. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded in part by the National Key Research and Development Project of China, grant number 2018YFA0902903, the National Natural Science Foundation of China, grant number 62073092, the Natural Science Foundation of Guangdong Province, grant number 2021A1515012638, the Basic Research Program of Guangzhou City of China, grant number 202002030320.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data presented in this study are openly available in [A Novel Seam Tracking Technique with A Four-Step Method and Experimental Investigation of Robotic Welding Oriented to Complex Welding Seam—research data] at [https://cloud.huawei.com/home#/collection/v2/all] (accessed on 15 April 2021).

**Acknowledgments:** The authors would like to express their thanks to the Guangzhou Institute of Advanced Technology, Chinese Academy of Sciences, for helping them with the experimental characterization.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Sensor-Enabled Multi-Robot System for Automated Welding and In-Process Ultrasonic NDE**

**Momchil Vasilev, Charles N. MacLeod, Charalampos Loukas, Yashar Javadi \*, Randika K. W. Vithanage, David Lines, Ehsan Mohseni, Stephen Gareth Pierce and Anthony Gachagan**

> Centre for Ultrasonic Engineering (CUE), Department of Electronic & Electrical Engineering, University of Strathclyde, Glasgow G1 1XQ, UK; momchil.vasilev@strath.ac.uk (M.V.); charles.macleod@strath.ac.uk (C.N.M.); charalampos.loukas@strath.ac.uk (C.L.); randika.vithanage@strath.ac.uk (R.K.W.V.); david.lines@strath.ac.uk (D.L.); ehsan.mohseni@strath.ac.uk (E.M.); s.g.pierce@strath.ac.uk (S.G.P.);

a.gachagan@strath.ac.uk (A.G.)

**\*** Correspondence: yashar.javadi@strath.ac.uk

**Abstract:** The growth of the automated welding sector and emerging technological requirements of Industry 4.0 have driven demand and research into intelligent sensor-enabled robotic systems. The higher production rates of automated welding have increased the need for fast, robotically deployed Non-Destructive Evaluation (NDE), replacing current time-consuming manually deployed inspection. This paper presents the development and deployment of a novel multi-robot system for automated welding and in-process NDE. Full external positional control is achieved in real time allowing for on-the-fly motion correction, based on multi-sensory input. The inspection capabilities of the system are demonstrated at three different stages of the manufacturing process: after all welding passes are complete; between individual welding passes; and during live-arc welding deposition. The specific advantages and challenges of each approach are outlined, and the defect detection capability is demonstrated through inspection of artificially induced defects. The developed system offers an early defect detection opportunity compared to current inspection methods, drastically reducing the delay between defect formation and discovery. This approach would enable in-process weld repair, leading to higher production efficiency, reduced rework rates and lower production costs.

**Keywords:** non-destructive evaluation; robotic NDE; robotic welding; robotic control; in-process NDE; ultrasonic NDE; ultrasound

#### **1. Introduction**

The automated welding industry was valued at USD 5.5 billion in 2018 and is expected to double by 2026, reaching USD 10.8 billion [1], with industrial articulated robots predicted to replace current traditional column-and-boom systems and manual operations. This growth has been driven by key high-value manufacturing sectors including automotive, marine, nuclear, petrochemical and defence. Paired with the technological demands of Industry 4.0 [2], the need for the development of intelligent and flexible sensor-enabled robotic welding systems has become paramount.

The wide adoption of automated manufacturing systems has subsequently raised the demand for automatically deployed and adaptive Non-Destructive Evaluation (NDE) in order to keep up with production lines that are faster than manual manufacturing processes [3]. Developments in automated NDE are driven by industrial demand for fast and reliable quality control in high-value and high-throughput applications. In general, automatic systems provide greater positional accuracy, repeatability and inspection rates than human operators, resulting in faster inspection speeds and reduced manufacturing costs. The ever-improving capabilities of such systems also lead to an overall increase in asset integrity and lifecycle, resulting in further long-term savings. Safety is another key advantage of automated NDE systems, as they

**Citation:** Vasilev, M.; MacLeod, C.N.; Loukas, C.; Javadi, Y.; Vithanage, R.K.W.; Lines, D.; Mohseni, E.; Pierce, S.G.; Gachagan, A. Sensor-Enabled Multi-Robot System for Automated Welding and In-Process Ultrasonic NDE. *Sensors* **2021**, *21*, 5077. https://doi.org/10.3390/s21155077

Academic Editor: Salvatore Salamone

Received: 7 June 2021 Accepted: 21 July 2021 Published: 27 July 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

can be deployed in hazardous environments, dangerous conditions and sites where human access is limited or not possible [4,5], thus improving working conditions and reducing the risks of workplace injuries and harmful substance exposure [6].

Single-axis scanners offer the ability for axial or circumferential scans of pipes and are suitable for on-site inspection of assets such as oil and gas pipelines. Such scanners can be guided by a track, or can be free-rolling, where a projected laser line is used by the operator to positionally align the scanner with the weld [7,8]. Mobile crawler systems offer a higher degree of positional flexibility through a two-axis differential drive and can magnetically attach to the surfaces of assets, enabling vertical deployment [9]. In addition, their compact size makes them well suited for remote applications with constrained access [4]. One particular challenge with such crawlers is accurately tracking their position, which is achieved through a combination of drive encoders, accelerometers, machine vision and, often, expensive external measurement systems [10]. Multirotor aerial vehicles can deliver visual [11], laser and, more recently, contact ultrasonic [12] sensors in remote NDE inspection scenarios where a magnetic crawler could not be deployed. While umbilical/tether cables are commonly used with mobile crawlers, they pose a challenge for the manoeuvrability and range of aerial systems. As a result, the power source, driving electronics and data storage for NDE sensors need to be on board the multirotor and, therefore, must be designed according to its limited payload capabilities. These systems can typically position and orient sensors in four axes (X, Y, Z and yaw), with recently developed over-actuated UAVs aiming to overcome this limitation in support of omnidirectional contact-based airborne inspection [13].

Fixed inspection systems offer a higher degree of positional accuracy compared to mobile systems. Gantries and Cartesian scanners operate in a planar or boxed work envelope and are suited for components with simple geometries. Articulated robotic arms, on the other hand, operate in a spherical work envelope and enable the precise delivery of sensors in six Degrees of Freedom (DoF), with pose repeatability of under ±0.05 mm and maximum linear velocities of 2 m/s [14]. They are widely used in industry thanks to their flexibility and reprogrammability, and their positional repeatability makes them suited for operations with well-controlled conditions such as component dimensions, position and orientation. Seven-DoF robots are also available, with the additional seventh axis, in the form of a linear track or a rotational joint, allowing a wider range of robot poses to reach the same end-effector position, enabling the inspection of more complex structures.

As specified in the international standards for ultrasonic NDE of welds [15–17], joints of metals with a thickness of 8 mm or above are to be tested with shear waves, inserted through contact angled wedges, where the induced ultrasonic beam must have a normal angle of incidence with the weld interface. The ultrasonic probe must be moved across the surface of the sample in a way that provides full coverage of the weld joint. Alternatively, a sweep of multiple beams across a range of angles can be induced via beamforming through a Phased Array Ultrasonic Transducer (PAUT) [18], forming a sectorial scan. Moreover, PAUT probes enable the acquisition of all transmit–receive pairs through Full Matrix Capture (FMC), which offers the advantage of retrospective beamforming and reconstruction of the weld area through the Total Focusing Method (TFM) [19,20].
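The sectorial sweep mentioned above is produced by firing the array elements with staggered time delays. A minimal, illustrative sketch of the plane-wave steering-delay calculation for a linear array is shown below; the element count and pitch match the probe described later in this paper, while the nominal shear wave velocity in steel (~3240 m/s) is an assumed value for illustration only.

```python
# Per-element firing delays that steer a linear phased array to a given angle.
# Assumed values: 64 elements, 0.5 mm pitch, shear wave speed in steel ~3240 m/s.
import math

def steering_delays(n_elements, pitch_mm, angle_deg, c_mps):
    """Delays (microseconds) producing a plane wavefront at angle_deg;
    shifted so the earliest-firing element has zero delay."""
    pitch_m = pitch_mm * 1e-3
    theta = math.radians(angle_deg)
    raw = [n * pitch_m * math.sin(theta) / c_mps * 1e6 for n in range(n_elements)]
    t0 = min(raw)
    return [t - t0 for t in raw]

delays = steering_delays(64, 0.5, 55.0, 3240.0)
print(f"steering to 55 deg: max delay across the aperture is {max(delays):.2f} us")
```

Sweeping `angle_deg` over a range (e.g., 40° to 70°) and firing once per angle yields the sectorial scan; FMC instead records every transmit–receive element pair so that such beams can be formed retrospectively in software.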

NDE is a particular bottleneck when considering high-value automated welding, as it is traditionally performed days after manufacturing, when the parts are allowed to cool down [15,16], to ensure cooling-related defects are found. As such, any detected weld defects that do not meet the acceptance criteria [17] would either require the part to be sent back for repairs or, in some cases, would lead to scrapping the component altogether. Apart from adding to the overall production process inefficiency, this problem also results in higher production costs and longer, less consistent lead times. This, paired with the fact that welds of thicker components, large bore pipes and Wire + Arc Additive Manufacture (WAAM) parts [21] require days and, in some cases, weeks to complete, increases the need for fast in-process NDE inspection. By integrating the inspection into the manufacturing process, an early indication of potential defects can be obtained, effectively addressing the production and cost inefficiencies by allowing for defects to be qualified and potentially repaired in-process.

Current state-of-the-art robotic NDE systems and automated welding systems rely on robot controllers for calculating the kinematics and executing the motion, which are usually programmed by users manually jogging the robot to individual positions through a teach pendant. Furthermore, emerging sensors, such as optical laser profilers and cameras, can be utilised and deployed to provide real-time path correction. However, the deployment of application-specific sensors is highly dependent on the commercially available software provided by industrial robot manufacturers and the supported communication protocols. Therefore, it would be particularly beneficial to bypass the internal motion planning of a robotic controller and to apply external real-time positional control based on additional sensor inputs, effectively shifting the path planning and sensor integration to another controller. In particular, the Robot Sensor Interface (RSI) [22] communication protocol can be leveraged to provide such an external positional control capability.

RSI was developed by the industrial robot manufacturer KUKA for influencing a preprogrammed motion path through sensor input in order to achieve adaptive robotic behaviour. The protocol is based on an interpolation cycle, which executes in real-time intervals of 4 ms for KRC (KUKA Robot Controller) 4 controller-based robots, and 12 ms for legacy KRC 2-based robots. During each cycle, an XML string with a special format is transmitted over a UDP (User Datagram Protocol) link between the robotic controller and an external sensor or system. In [3], RSI was used in conjunction with a force-torque sensor to maintain a constant contact force between a composite wing component and an ultrasonic roller probe, effectively accounting for any discrepancies between the CAD model of the part and the as-built geometry. This method, however, required that the motion path be pre-set within a robotic program, making use of the built-in KUKA trajectory planning algorithm. In [23], a custom trajectory planning algorithm was developed and embedded on a KRC 4 controller through a real-time RSI configuration diagram. This gave the capability to dynamically set and update the target position over Ethernet, and the layer of abstraction, based on a C++ Dynamic Link Library (DLL), made it possible to utilise the toolbox in various programming environments, e.g., MATLAB, Python and LabVIEW. Although providing a fast response time, the toolbox did not have a provision for real-time motion correction based on sensory input and was fully reliant on the KRC for execution.
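To make the RSI exchange concrete, the sketch below builds one reply for a single interpolation cycle: the controller's timestamp is mirrored to keep the link alive, and Cartesian corrections for the next cycle are attached. The XML tag names (`Rob`/`IPOC` inbound, `Sen`/`RKorr` outbound) follow common RSI example configurations and are assumptions here — the actual schema is defined per installation in the RSI configuration.

```python
# Minimal sketch of one RSI interpolation-cycle exchange (not a full driver).
# Tag names are assumptions based on typical RSI example configurations.
import xml.etree.ElementTree as ET

def build_reply(incoming_xml, corr):
    """Mirror the controller's IPOC timestamp and attach Cartesian
    corrections (mm) to be applied over the next interpolation cycle."""
    ipoc = ET.fromstring(incoming_xml).findtext("IPOC")
    sen = ET.Element("Sen", Type="ImFree")
    ET.SubElement(sen, "RKorr", {k: f"{v:.4f}" for k, v in corr.items()})
    ET.SubElement(sen, "IPOC").text = ipoc
    return ET.tostring(sen, encoding="unicode")

# one message as the controller might send it, with the current pose and clock
msg = '<Rob Type="KUKA"><RIst X="10.0" Y="0.0" Z="5.0"/><IPOC>123456</IPOC></Rob>'
reply = build_reply(msg, {"X": 0.1, "Y": 0.0, "Z": -0.05})
print(reply)
```

In a real deployment this exchange runs over a UDP socket and must complete within the 4 ms (KRC 4) or 12 ms (KRC 2) cycle, or the controller aborts the motion.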

This paper presents the development of a sensor-enabled multi-robot system for automated welding and in-process ultrasonic NDE. Table 1 shows a comparison between this work and state-of-the-art commercial robotic NDE systems, i.e., Genesis Systems NSpect [24], TWI IntACom [25], Tecnatom RABIT [26], FRS Robotics URQC [27] and Spirit AeroSystems VIEWS [3]. A novel sensor-driven adaptive motion algorithm for the control of industrial robots has been developed. Full external positional control was achieved in real time, allowing for on-the-fly motion correction based on multi-sensory input. A novel multi-robot welding and NDE system was developed, allowing for the flexible manufacture of welded components and the research into, and deployment of, NDE techniques at the point of manufacture. Thus, the automatic high-temperature PAUT inspection of multi-pass welded samples at three distinct points of the welding manufacture has been made possible for the first time: inspection of the hot as-welded components; interpass inspection, between welding pass deposition; and live-arc inspection, in parallel with the weld deposition. Through the insertion of artificially induced defects, it has been demonstrated that in-process ultrasonic inspection is capable of early defect detection, drastically reducing the delay between defect formation and discovery. Furthermore, the developed system has enabled the real-time control of the welding process through live-arc ultrasonic methods. Conventional PAUT and FMC are made possible through a high-speed ultrasonic phased array controller, allowing for the use of advanced image processing algorithms, producing results which cannot be achieved using conventional ultrasonics. The work presented herein has directly supported and enabled further research into in-process weld inspection, across sectors, with the aim of producing right-first-time welds. As a result, it is envisaged that future High Value Manufacturing (HVM) of welded components will have increased component quality and process efficiency, and reduced rework rates, lead-time inconsistencies and overall costs.


**Table 1.** Comparison between state-of-the-art commercial robotic NDE systems and this work.

Where ✓ denotes yes and ✗ denotes no.

#### **2. Experimental System**

#### *2.1. Hardware*

The automated welding and NDE system depicted in Figure 1 is based around a National Instruments cRIO 9038 [28] real-time embedded controller. The cRIO features a real-time processor and a Field-Programmable Gate Array (FPGA) on board, which enables fast, real-time parallel computations. Eight expansion slots for additional Input/Output modules enable direct sensor connectivity in addition to the Ethernet, USB and other interfaces, featured on the cRIO. The expansion modules used were the NI 9476 Digital Output, NI 9263 Analogue Output, NI 9205 Analogue Input, NI 9505 DC Motor Drive and an NI 9214 Thermocouple module.

**Figure 1.** Sensor-enabled multi-robot welding and in-process NDE system.

Automation was implemented through two 6 DoF industrial manipulators, controlled in real time through RSI over an Ethernet connection. A KUKA KR5 Arc HW with a KRC 2 controller was employed as the Welding Robot (WR), while a KUKA AGILUS KR3 with a KRC 4 controller was employed as the Inspection Robot (IR). The welding hardware comprised a JÄCKLE/TPS ProTIG 350A AC/DC [29] welding power source and a TBi Industries water-cooled welding torch, mounted on the welding robot end effector. The welding arc was triggered through a 24 V digital signal connected to the power source, while the arc current was set through a 10 V differential analogue line. The power source featured process feedback in the form of measured arc current and arc voltage, also transmitted through differential analogue lines. A JÄCKLE/TPS 4-roll wire feeder, with an optical encoder, was powered and controlled via the NI 9505. Its rotational speed was measured and controlled using Pulse Width Modulation (PWM) and was related appropriately to the desired control metric of linear wire feed rate. A Micro-Epsilon scanCONTROL 9030 [30] laser profiler was utilised for weld seam tracking and measurement, while an XIRIS XVC 1100 [31] high dynamic range weld monitoring camera provided visual feedback of the process.

The workpiece temperature was measured through permanently attached thermocouples, which were used to maintain the workpiece within a desired interpass temperature range. The thermocouples were also utilised for monitoring the temperature gradient across the workpiece, which is a crucial requirement for temperature compensation of the ultrasonic images. A high-temperature PAUT roller probe was attached to the flange of the IR and driven by a PEAK LTPA [32] low-noise ultrasonic phased array controller. The bandwidth and storage of the cRIO were only sufficient for inspection with conventional UT probes; therefore, the LTPA had to be directly connected to the host PC when using phased array probes. The bandwidth challenge could be addressed by substituting the cRIO with a high-performance NI PXI real-time controller. Finally, the Graphic User Interface (GUI) was deployed on the host PC, facilitating the user input, process monitoring and control. The high-level system architecture is shown in Figure 2, where the hardware components are represented by blue blocks, the software tasks are represented by green blocks and the communication links are shown as arrows.

**Figure 2.** Sensor-enabled multi-robot welding and in-process NDE system architecture. Overall process control was implemented on the NI cRIO, while the GUI and PAUT acquisition and storage were executed on a host PC.

#### *2.2. Software*

All software was developed in the cRIO native LabVIEW environment which enabled rapid prototyping, due to the wide range of supported communication protocols and software libraries. The software architecture was built using the JKI state machine [33] and parallel real-time Timed Loops, ensuring program flexibility while also providing reliable and fast response times. Three parallel state machines were responsible for executing the program sequence, controlling the Welding Robot (WR) and controlling the Inspection Robot (IR), respectively.

#### 2.2.1. Real-Time Robotic Control

The real-time robotic control strategy employed full external positional control of the robots. This was achieved through a correction-based RSI motion, meaning that the robot controller did not hold any pre-programmed path, and the robot end-effector position was updated on-the-fly through positional corrections. At every iteration of the interpolation cycle, the current position and timestamp of the internal clock are sent by the robot controller as an XML string. An XML string response is returned by the cRIO, mirroring the timestamp to keep the connection alive, and providing positional corrections in each axis, which determine where the end-effector will move to over the next interpolation cycle. There are two types of positional corrections—absolute, where the new position is given with respect to the robot base, and relative, where the new position is given with respect to the current position. For example, an absolute correction of 1 mm in the *X*-axis will move the end-effector to the absolute coordinate *X* = 1 mm, while the same relative correction will move the robotic end-effector by 1 mm in the positive *X*-axis direction irrespective of its current position. Relative corrections were chosen for this body of work as the smaller magnitude of corrections sent to the robot controller made them safer for use during the development and testing stage.
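The distinction between the two correction types can be illustrated with a small sketch (the function and pose values are hypothetical, introduced here only to demonstrate the arithmetic):

```python
# Illustrative difference between absolute and relative positional corrections.
def apply_correction(current, correction, mode):
    """Return the pose the end-effector will move to over the next cycle.
    In absolute mode only the corrected axes are returned, for brevity."""
    if mode == "absolute":       # new pose is a coordinate w.r.t. the robot base
        return dict(correction)
    if mode == "relative":       # new pose is an offset from the current pose
        return {ax: current[ax] + correction.get(ax, 0.0) for ax in current}
    raise ValueError(f"unknown mode: {mode}")

pose = {"X": 120.0, "Y": 45.0, "Z": 300.0}
print(apply_correction(pose, {"X": 1.0}, "relative"))   # X becomes 121.0
print(apply_correction(pose, {"X": 1.0}, "absolute"))   # X becomes 1.0
```

The safety argument in the text follows directly: a mistyped relative correction moves the robot by at most the correction magnitude, whereas a mistyped absolute correction can command a large jump across the work envelope.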

Welding and inspection robot paths are inputted by the user as individual points in a table through the GUI, where each row corresponds to a point in the path, while the columns hold the cartesian coordinates for each axis. Additional columns in the welding path table provide control over the process while approaching the target, i.e., an "Arc On" Boolean determines if the WR should be welding, and a "Log On" Boolean enables the data logging. More sophisticated data can also be included as additional columns, for example, to choose the welding parameters through a lookup table containing the settings for root, hot, filling and cap passes, therefore allowing the user to enter the parameters from a relevant Welding Procedure Specification (WPS) document alongside the robotic path. When considering simpler geometries such as a plate or pipe butt-weld, the robotic paths can be manually entered as individual point coordinates; for example, a straight-line weld would only require two points—the start and the end of the weld. For more complex geometries this can be generated by Computer Aided Manufacture (CAM) or robotic path planning software and imported into the software [34–36].
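The path table described above can be mirrored in a few lines; the field names and coordinate values below are hypothetical, chosen only to reflect the layout the text describes (one row per target point, Cartesian columns plus per-segment process Booleans):

```python
# Hypothetical in-memory form of the GUI welding path table.
from dataclasses import dataclass

@dataclass
class PathPoint:
    x: float
    y: float
    z: float
    arc_on: bool = False   # weld while approaching this target
    log_on: bool = False   # record process data on this segment

# a straight-line butt weld needs only two welded points: seam start and end
path = [
    PathPoint(0.0, 0.0, 50.0),                              # approach, arc off
    PathPoint(0.0, 0.0, 0.0, arc_on=True, log_on=True),     # weld start
    PathPoint(250.0, 0.0, 0.0, arc_on=True, log_on=True),   # weld end
]
print(sum(p.arc_on for p in path), "welded target points")
```

Extra columns such as a WPS pass-type key would simply become additional fields on `PathPoint`, looked up against a welding-parameter table before each segment.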

#### 2.2.2. Trajectory Planning

An on-the-fly calculated trajectory planning algorithm running at the RSI interpolation cycle rate was implemented as demonstrated in Figure 3. A relative positional correction is sent to the KRC at each iteration of the interpolation cycle, consisting of a linear motion component dL and an adaptive motion component dA. The Linear Motion Controller (LMC) is responsible for executing a straight-line trajectory between the current end-effector position PC and a target position PT'. It is based on a linear acceleration–cruise–deceleration curve with the setpoint cruise speed V entered by the user. The Adaptive Motion Controller (AMC) generates an instantaneous adaptive correction dA in response to the sensory input and process requirements. The absolute adaptive correction DA, which is the cumulative total correction that has been applied by the AMC, is summed to the current target position PT taken from the robot path table to form PT'.

**Figure 3.** Trajectory planning and on-the-fly sensor-based motion correction algorithm.

Figure 4a shows the operation of the LMC with an example linear trajectory along the *X*-axis between a starting point PS and a termination point PT. The linear motion velocity vector VL at an arbitrary point P0 along the path is always directed towards the target point PT and is therefore parallel and coinciding with the PSPT vector. Furthermore, as the PSPT vector is aligned with the *X*-axis in Figure 4, the VL vector only consists of an *X*-axis component. In Figure 4b, an example AMC output dA, consisting of a sinusoidal oscillation in the *Y*-axis, is summed with dL before sending the positional correction to the KRC, resulting in a weaving motion between PS and PT. However, as the linear motion vector VL is always directed towards the target PT, a *Y*-axis component is introduced at all points that do not lie on the PSPT vector, which results in a distorted trajectory. The effects of this distortion become stronger and more evident closer to PT as illustrated by VL0 and VL1 in Figure 4b. In order to avoid the distortion in the LMC trajectory caused by the instantaneous correction dA, the absolute adaptive correction DA is summed with PT to give PT'. This offsetting of the target point ensures that the LMC-generated trajectory remains linear as shown in Figure 4c. As a result, a trade-off between target point accuracy and adaptive correction is inherently introduced in the system.

**Figure 4.** (**a**) Example linear motion generated by the LMC; (**b**) trajectory distortion introduced by instantaneous adaptive correction dA; (**c**) target point offsetting through absolute adaptive correction DA.
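The interplay between the LMC, the AMC and the target offsetting in Figure 4c can be sketched per interpolation cycle as below. This is a simplified 2D simulation under stated assumptions: the cycle time matches the KRC 4 (4 ms), and the cruise speed, target point and weave parameters (1 Hz, 2 mm amplitude in *Y*) are illustrative; acceleration/deceleration ramps are omitted.

```python
# Per-cycle correction dL + dA with target offsetting (PT' = PT + DA).
import math

DT = 0.004            # KRC 4 interpolation cycle, s
V = 10.0              # user-entered cruise speed, mm/s
P_T = (100.0, 0.0)    # target point from the path table (X, Y), mm

def cycle(p_c, D_A, t):
    """One interpolation cycle: returns new pose and cumulative correction DA."""
    # AMC: instantaneous correction dA tracking a 1 Hz, 2 mm weave in Y
    weave = 2.0 * math.sin(2 * math.pi * t)
    dA = (0.0, weave - D_A[1])
    D_A = (D_A[0] + dA[0], D_A[1] + dA[1])        # absolute correction DA
    p_t = (P_T[0] + D_A[0], P_T[1] + D_A[1])      # offset target PT' = PT + DA
    # LMC: straight-line step of at most V*DT towards PT'
    dx, dy = p_t[0] - p_c[0], p_t[1] - p_c[1]
    dist = math.hypot(dx, dy)
    s = min(V * DT, dist) / dist if dist > 0 else 0.0
    return (p_c[0] + dx * s + dA[0], p_c[1] + dy * s + dA[1]), D_A

p_c, D_A, t = (0.0, 0.0), (0.0, 0.0), 0.0
for _ in range(3000):                              # 12 s of motion
    p_c, D_A = cycle(p_c, D_A, t)
    t += DT
print(f"end-effector X after the run: {p_c[0]:.1f} mm")
```

Because the LMC always aims at the offset target PT' rather than PT, the weave does not bend the underlying linear trajectory; the price, as the text notes, is that the final resting point is PT' rather than the original PT.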

The demonstrated weaving motion is useful in various scenarios; for example, in welding, when mimicking the motion of manual welding techniques. Such a weaving motion is generally not achievable through a robotic teach pendant and requires path planning software. The software would normally create the path through a number of fundamental linear and circular motions, which would require a full trajectory recalculation if any of the parameters such as the travel speed, amplitude or frequency of weaving need to be modified. In contrast, as the weaving motion is calculated in real time, its parameters and driving function can be readily changed and updated on-the-fly. This approach can be applied to multiple axes at the same time and can be implemented with multiple sensors. For example, most modern automated welding power supplies offer the ability to monitor the arc current and arc voltage in real time, which can be utilised for process control. The measured arc voltage in the Gas Tungsten Arc Welding (GTAW) process is directly correlated to the distance between the welding torch and the workpiece, and as such is suitable for adaptive motion. When welding a workpiece that is assumed to be flat, but has surface height variations, the offset between the welding torch and the sample surface would vary along the weld as shown in Figure 5a, resulting in an inconsistent arc voltage and, therefore, inconsistent weld properties. The measured arc voltage was used as the control variable of a Proportional–Integral–Derivative (PID) control loop, the output of which was an instantaneous adaptive correction applied in the *Z*-axis. This allowed for Automatic Voltage Control (AVC), maintaining a constant welding torch to workpiece distance as illustrated in Figure 5b. The demonstrated approach can be applied for a variety of scenarios with equipment such as laser profilers, force-torque sensors and machine vision cameras among others.

**Figure 5.** (**a**) Open-loop welding of a sample with an uneven surface through a linear trajectory; the welding torch to sample distance changes along the weld; (**b**) closed-loop welding of a sample with an uneven surface through an adaptive trajectory; on-the-fly adjustment of torch offset is achieved through the measured arc voltage; the welding torch to sample distance is constant along the weld; the end point PT is shifted to PT' as a result of the adaptive motion.
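The AVC idea can be sketched as a PID loop that turns the arc-voltage error into an instantaneous *Z*-axis correction each cycle. The gains, voltage setpoint and the linear arc-voltage model below are illustrative assumptions, not the values used in the system; real GTAW voltage/standoff behaviour is only approximately linear.

```python
# Hedged sketch of Automatic Voltage Control: PID on arc voltage drives dZ.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i = 0.0
        self.prev_e = 0.0

    def update(self, error):
        self.i += error * self.dt
        d = (error - self.prev_e) / self.dt
        self.prev_e = error
        return self.kp * error + self.ki * self.i + self.kd * d

dt = 0.004                       # one correction per interpolation cycle
pid = PID(kp=0.5, ki=0.1, kd=0.0, dt=dt)
target_v = 12.0                  # setpoint arc voltage, V (illustrative)
standoff = 7.0                   # torch height, mm: starts 2 mm too high
for _ in range(3000):
    surface_bump = 1.0           # mm step in surface height under the torch
    # toy linear arc model: voltage rises with torch-to-surface distance
    measured_v = 12.0 + 0.8 * (standoff - surface_bump - 4.0)
    dz = pid.update(target_v - measured_v)   # instantaneous Z correction, mm
    standoff += dz
print(f"settled torch height: {standoff:.2f} mm")
```

With the toy model above, the loop settles at the height where the measured voltage equals the setpoint, i.e., the torch rides over the bump at a constant arc length, which is exactly the behaviour sketched in Figure 5b.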

#### 2.2.3. Welding Sequence

All relevant process parameters and ultrasonic measurements were timestamped, positionally encoded by the robot position and logged in a binary format for subsequent analysis. Before any welding, the WR performed a calibration using the laser profiler in order to measure and locate the weld groove. This was performed only once, as the workpieces were fixed to the table using 6-point clamping and their location was not expected to shift with respect to the WR. In applications where an initial scan of the weld groove is not practical, or where the weld groove is expected to shift, the welding system has the capability to utilise the laser profiler output for real-time seam tracking, through the AMC. All multipass welding and inspection trials were performed on 15 mm thick S275 structural steel plates, bevelled to form a 90° V-groove. The plates were butt-welded by the WR over a total of 21 passes deposited over 7 layers, as shown in Figure 6.

**Figure 6.** Multipass weld specification for 15 mm thick S275 steel bevelled with a 90° V-groove; a total of 21 passes are deposited over 7 layers; all linear dimensions are in millimetres.

#### **3. Ultrasonic Inspection**

The system was developed with the aim of performing ultrasonic inspections at three distinct points of the welding process: post-process, when all welding is completed; interpass in-process, between distinct welding passes; and live-arc in-process, in parallel with the weld deposition. While each approach has distinct advantages and disadvantages, all fundamentally enable early defect detection.

#### *3.1. Post-Process UT*

The accuracy and positional repeatability of robots can be leveraged for post-process NDE by performing continuous repeated inspections of the as-built component. This allows the development of any defects, such as cold cracking, to be monitored by comparing successive ultrasonic images. Due to the elevated sample temperature introduced by the welding process and any post-heat treatment, a high-temperature capable ultrasonic probe assembly was necessary. An Olympus 5L64-32 × 10-A32-P-2.5-HY array (5 MHz, 64 elements, 0.5 mm element pitch, 10 mm element elevation) was used in conjunction with an SA32C-ULT-N55S-IHC angled wedge (suited for shear wave inspection in steel centred around 55°). The wedge is manufactured out of the material ULTEM and so is capable of short-term contact temperatures of up to 150 °C. High-temperature ultrasonic couplant was used between the transducer and wedge. Before touching down on each inspection position, the ultrasonic wedge was dipped in a custom-designed tray containing the same high-temperature ultrasonic couplant to ensure good acoustic propagation between probe and sample. Figure 7 shows the detection and growth monitoring of a hydrogen crack that was artificially induced in the Heat Affected Zone (HAZ), adjacent to the weld toe, through localised water quenching [37].

**Figure 7.** Continuous post-weld ultrasonic imaging of artificially induced hydrogen crack. The crack was initiated 10 min after all welding passes were deposited and its growth was observed in time. The location of the crack was in the HAZ adjacent to the weld cap toe.

The elevated temperature of the sample after it is manufactured must be taken into account when performing NDE, as the speed of sound in the material varies with temperature. As the sample cools down, this causes imaging anomalies in both amplitude and position. In [38], a Tungsten rod was introduced in the weld to form a static reflector of known size and location [39]. The weld was repeatedly inspected at regular time intervals for a period of 22 h, and the position and amplitude of the inserted reflector were extracted to form a thermal compensation curve. The sample temperature at the inspection location decreased from 164 °C at 2 min after welding to 28 °C at 75 min after welding. As a result, the reflected amplitude increased significantly from 25% to 62% of full screen height, and the defect indication's position shifted by 3 mm on the reconstructed sector scan image. These data were utilised to correct the position and amplitude of an artificially induced crack. The crack initiation was successfully detected 22 min after the weld completion, and it was observed to be growing over a total of 90 min.
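Applying such a thermal compensation curve amounts to mapping an indication measured at an elevated temperature back to its equivalent cold-state value. The sketch below uses the two calibration observations quoted above as endpoints (164 °C: 25% FSH, 3 mm shift; 28 °C: 62% FSH, 0 mm shift); the linear interpolation between them is an assumption for illustration, whereas the published curve was built from many repeated inspections.

```python
# Sketch of amplitude/position thermal compensation from two endpoints.
def interp(x, x0, x1, y0, y1):
    """Linear interpolation of y between (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def compensate(temp_c, amp_fsh, pos_mm):
    """Map an indication measured at temp_c to its cold (28 C) equivalent."""
    amp_cal = interp(temp_c, 164.0, 28.0, 25.0, 62.0)   # expected reflector amp
    pos_shift = interp(temp_c, 164.0, 28.0, 3.0, 0.0)   # expected position shift
    amp_corr = amp_fsh * (62.0 / amp_cal)               # rescale to cold state
    return amp_corr, pos_mm - pos_shift

# hypothetical indication: 20% FSH at 45.0 mm, sample at 96 C
amp, pos = compensate(96.0, 20.0, 45.0)
print(f"compensated: {amp:.1f} %FSH at {pos:.1f} mm")
```

At 96 °C (halfway between the endpoints) the expected reflector amplitude is 43.5% FSH and the expected shift 1.5 mm, so the hypothetical indication is rescaled up and shifted back accordingly.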

#### *3.2. Interpass In-Process UT*

Interpass ultrasonic NDE allows for the detection of weld flaws through inspection between individual welding passes or layers and provides an opportunity for in-process repair, as only a small amount of material would need to be removed in order to excavate and repair the defects. This is particularly advantageous for the manufacture of components that are typically challenging to repair after all welding passes have been deposited, e.g., thick multipass welds and WAAM parts. A key challenge of interpass welding inspection is the complex sample geometry, which changes as the weld is deposited and therefore differs from the as-built geometry [40]. Figure 8 shows that the unwelded portion of the V-groove in a multipass weld causes a number of reflections and artefacts in the ultrasonic images, as demonstrated at three distinct stages of the sample manufacture. As the weld is deposited, the sample geometry reflections change in shape and size, until they disappear upon completion of the weld joint. Hence, appropriate signal processing and masking are required to effectively remove the false positive indications from the sample geometry.

**Figure 8.** Ultrasonic sectorial scan of 90° V-groove multipass weld; (**a**) before welding the groove edge is detected as a reflector (green marker); (**b**) after 7 passes are deposited, the size of the groove edge indication is reduced (blue marker); (**c**) after all welding passes are deposited, the groove edge is no longer detected.

The high interpass temperatures required to maintain the weld integrity (typically up to 250 °C) have driven research into the development of a novel, high-temperature capable PAUT probe [41]. The probe features a 5 MHz, linear 64-element PAUT transducer immersed in water and enclosed in a moulded high-temperature silicone rubber tyre, capable of operating at temperatures up to 350 °C. Coupling between the probe and the sample was achieved through a constant compressional force and high-temperature gel couplant as demonstrated in Figure 9. The novel probe has allowed for the interpass detection of artificially induced defects inside a partially filled multipass weld such as the one shown in Figure 10, where a Tungsten rod with a diameter of 2.4 mm and length of 30 mm was included in the weld.

**Figure 9.** Interpass in-process UT inspection with a novel high-temperature PAUT roller probe.

**Figure 10.** Interpass ultrasonic image of artificially induced defect (Tungsten rod with 2.4 mm diameter) (red marker) with a false positive indication from the unwelded groove edge (green marker).

As a result of the moving heat source in welding, thermal gradients in both the direction of welding and perpendicular to the direction of travel are introduced in the workpiece, ultimately resulting in ultrasonic image distortion. Furthermore, the dynamic nature of multipass welding essentially results in a different thermal gradient after each welding pass. An in-process thermal compensation procedure was proposed in [42] involving the parallel manufacture of a second, identical sample with an embedded Tungsten pipe, serving as an in-process calibration block. The reflection from the pipe, of known size and location, was used to calibrate for the effects of the temperature gradients, and it was demonstrated that the approach provided more accurate results compared to a traditional calibration on a sample with a side drilled hole at a uniform temperature. For the most accurate calibration and thermal compensation results, however, the sample temperature would need to be precisely known through a combination of measurement and weld modelling [43]. It is important to also note that interpass inspection could increase the component manufacture duration, as it is deployed sequentially with the welding deposition. In addition, increasing the interval between welding passes could lead to excessive sample cooling and the loss of interpass temperature. Therefore, the UT acquisition and image processing speed must be taken into account when considering the deployment of interpass NDE for welding applications.

#### *3.3. Live-Arc In-Process UT*

In-process UT deployed during the welding deposition offers rapid feedback for the welding process and allows not only measurement, but also control of the welding process. In [44], a pair of air-coupled ultrasonic transducers was used to induce guided Lamb waves through a section of a 3 mm thick plate butt joint while it was being deposited. Figure 11 shows that the solidification of the weld was monitored in real time through live-arc in-process UT. This method has shown promise, as the rate of change of the received Lamb waves' amplitude was found to correlate with the weld penetration depth. In [45], a split-crystal ultrasonic wheel probe was attached to the welding torch and utilised for thickness measurement of samples with a varying loss of wall thickness, as shown in Figure 12. The measured thickness was used to control and adapt the welding arc current, torch travel speed and wire feed rate on-the-fly. It was demonstrated that the approach provided sufficiently low latency and high accuracy for real-time welding process control and, as a result, gave a better performance when welding samples with thickness variations, compared to a traditional open-loop automated welding system.

**Figure 11.** Live-arc in-process weld UT with non-contact Lamb waves.

**Figure 12.** On-the-fly adaptive welding control through live-arc UT sample measurement.
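The closed-loop adaptation described in [45] can be sketched as a proportional correction on the thickness error: where material is missing, the controller increases deposition (more wire feed and current, slower travel), and vice versa. The gains and set-points below are purely illustrative assumptions, not values from the paper.

```python
def adapt_parameters(measured_mm, nominal_mm, base):
    """Proportionally scale welding set-points when the measured wall
    thickness deviates from nominal. `base` holds the nominal set-points;
    the gains k_* are illustrative, not tuned values."""
    error = nominal_mm - measured_mm        # > 0 where material is missing
    k_feed, k_speed, k_current = 0.4, 0.3, 5.0
    rel = error / nominal_mm
    return {
        "wire_feed_m_min": base["wire_feed_m_min"] * (1 + k_feed * rel),
        "travel_speed_mm_s": base["travel_speed_mm_s"] * (1 - k_speed * rel),
        "arc_current_a": base["arc_current_a"] + k_current * error,
    }

base = {"wire_feed_m_min": 8.0, "travel_speed_mm_s": 5.0, "arc_current_a": 180.0}
# 2 mm of wall loss: feed and current rise, travel speed drops.
thin = adapt_parameters(8.0, 10.0, base)
```

In practice the measured thickness would arrive from the UT probe at the controller's cycle rate, so the low latency reported in [45] is what makes this kind of loop viable.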

Current work at the University of Strathclyde is focused on addressing the challenges associated with deploying PAUT probes during the weld deposition (Figure 13). The next generation of PAUT probes will be dry coupled, which would remove the risk of unwanted weld contamination by the ultrasonic gel, which can cause porosity [37], and would reduce the variation in coupling between the probe and the workpiece.

**Figure 13.** Live-arc PAUT inspection experiment.

#### **4. Conclusions and Future Work**

A novel sensor-enabled robotic system for automated welding and ultrasonic inspection was developed and evaluated. The system architecture was based around an NI cRIO real-time embedded controller which enabled real-time communication, data acquisition and control. A real-time external robotic control strategy for adaptive behaviour was developed, allowing for on-the-fly sensor-based trajectory corrections. The inspection capabilities of the system were demonstrated in three different scenarios:


Current work on masking the bevel edge reflections will remove false positives arising from the unfilled weld groove, while thermal gradient compensation will enable the accurate locating and sizing of weld defects. Future developments of the PAUT probe will allow for a completely dry coupled inspection, eliminating the coupling and contamination challenges posed by the ultrasonic couplant. It is envisaged that future welding and live-arc in-process systems would possess the capability for automatic in-process defect detection, which would in turn significantly reduce the delay between the development and detection of a defect, offering the potential for in-process weld repair.

**Author Contributions:** Conceptualisation, M.V., C.N.M., Y.J., R.K.W.V., D.L., E.M., S.G.P. and A.G.; software, M.V., C.L. and D.L.; investigation, M.V., C.L., Y.J., R.K.W.V., D.L. and E.M.; resources, C.N.M., S.G.P. and A.G.; writing—original draft preparation, M.V.; writing—review and editing, C.N.M., C.L., D.L. and E.M. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was funded by the Engineering and Physical Sciences Research Council (EPSRC), grant number 2096856.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

