Article

Automatic Adaptive Weld Seam Width Control Method for Long-Distance Pipeline Ring Welds

Yi Zhang, Shaojie Wu and Fangjie Cheng
1 School of Materials Science and Engineering, Tianjin University, Tianjin 300350, China
2 China Petroleum Pipeline Research Institute Co., Ltd., Langfang 065000, China
3 Tianjin Key Laboratory of Advanced Joining Technology, Tianjin 300350, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(8), 2483; https://doi.org/10.3390/s25082483
Submission received: 27 March 2025 / Revised: 9 April 2025 / Accepted: 13 April 2025 / Published: 15 April 2025
(This article belongs to the Section Sensing and Imaging)

Abstract

In pipeline all-position welding processes, laser scanning provides critical geometric data on width-changing bevel morphology for welding torch swing control, yet conventional second-order derivative zero methods often yield pseudo-inflection points in practical applications. To address this, a third-order derivative weighted average threshold algorithm was developed, integrating image denoising, enhancement, and segmentation pre-processing with cubic spline fitting for precise bevel contour reconstruction. Bevel pixel points captured by the laser sensor serve as inputs; second-order derivative eigenvalues are extracted to derive third-order derivative features, and weighted threshold discrimination is applied to accurately identify inflection points. Dual-angle sensors were implemented to synchronize laser-detected bevel geometry with real-time torch swing adjustments. Experimental results demonstrate that the system achieves a steady-state error of only 1.645% at the maximum swing width, a dynamic response time below 50 ms, and torch center trajectory tracking errors strictly constrained within ±0.1 mm. Compared to conventional methods, the proposed algorithm improves dynamic performance by 20.6% and exhibits unique adaptability to narrow-gap V-grooves. These results confirm that the method provides real-time, accurate control for variable-width weld tracking, forming a swing-width adaptive control system.

1. Introduction

Laser scanning has recently matured and become a popular weld-tracking technology. The second-order derivative zero method is a classical edge detection technique that identifies the edge position in an image, but it often misjudges the edge position when dealing with non-linear, short, and thin lines, high noise, and poor clarity [1]. Researchers have enhanced the stability of identification using various improvement or transformation algorithms.
Recent innovations in multi-sensor fusion and deep learning have further expanded the capabilities of laser weld tracking. For instance, Ma et al. [2] proposed a transformer-optimized network for sub-millimeter defect detection in high-noise environments, achieving a 15% improvement in edge localization accuracy compared to conventional methods by integrating attention mechanisms with laser triangulation. Similarly, Yuan et al. [3] leveraged a DDPM-Swin Transformer hybrid to synthetically expand defect datasets, addressing data scarcity in welding inspections while maintaining a 0.1 mm resolution for groove geometry reconstruction. These approaches highlight the growing role of physics-informed AI in welding automation. In parallel, adaptive control strategies have emerged to address dynamic welding conditions. Zhang et al. [4] introduced an adaptive pseudoinverse control framework for hysteresis compensation in dielectric elastomer actuators, demonstrating a 30% reduction in steady-state error for non-linear systems, a methodology with potential applications in real-time torch positioning. Complementarily, Liu et al. [5] developed a constant-focus optical path system for thick-plate welding, achieving ±0.15 mm tracking accuracy through laser-ultrasonic fusion, though limited to fixed groove geometries. Wu et al. [6] proposed a laser stripe edge-guided network for autonomous seam recognition and feature extraction in multi-pass welding, integrating visual perception and deep learning to achieve real-time, high-precision weld morphology detection and adaptive process parameter optimization. Gao, J. et al. [7] proposed a variable-gap weld feature extraction method that used the reflective properties of laser stripes in a fillet weld; a robust weld feature extraction model was established, a column grey difference operator was proposed, and the random sampling consistency algorithm was optimized to achieve weld tracking for variable-gap fillet welds. Zhu, C. H. et al. [8] proposed a vision system integrating multi-light sensing, dynamic image processing, and laser centerline extraction to achieve high-precision, real-time detection and adaptive control of complex weld seam geometries. Despite these advancements, critical gaps persist. Yang et al. [9] identified that deep learning denoising methods (e.g., CNN-based approaches) struggle to achieve real-time performance (<100 ms latency) when processing high-resolution laser scans (2048 × 2048 px), limiting their adoption in high-speed welding scenarios. Traditional methods, including second-order derivative algorithms and hybrid frameworks, remain constrained by pseudo-inflection point misidentification in variable-width grooves (e.g., V-type narrow gaps) under industrial noise levels (>60 dB). Sun, B. et al. [10] proposed a combinational structured light vision sensor that used the linear equation of the laser centerline and extracted feature points by solving the intersection through the joint equation. Zhang, G. et al. [11] proposed a second-order derivative algorithm to initially locate the feature points precisely through linear fitting. Liu, X. et al. [12] proposed a laser-MIG composite weld seam width detection method based on a BP neural network compensated Kalman filter; the filter used the second-order difference method to obtain the weld seam contour width, while the BP neural network compensated for the optimal estimation of the Kalman filter. Ren, J. et al. [13] proposed a weld seam detection method based on the second-order anisotropic Gaussian directional derivative filter to improve the detection accuracy of key points in the image. Zhang, W. et al. [14] proposed a corner detection method using second-order generalized Gaussian directional derivatives, enhancing accuracy and noise robustness in complex scenes through optimized mathematical modeling. Cross-domain innovations bridge critical gaps in laser weld tracking.
Although laser-based weld seam tracking technology has advanced significantly, existing methods, including second-order derivative algorithms and hybrid deep learning frameworks, still suffer from limited robustness in high-noise environments and insufficient adaptability to variable geometric morphologies. This paper proposes an inflection point detection algorithm based on a third-order derivative weighted average threshold calculation, aiming to enhance the adaptability of laser tracking algorithms in complex engineering environments. This paper is structured as follows: Section 1 analyzes the advances and challenges in laser weld tracking. Section 2 describes the experimental setup, image pre-processing, and third-order derivative inflection point detection. Section 3 validates the laser-sensing adaptive swing control, analyzes the dynamic response and accuracy, and discusses the applicability of the method. Section 4 summarizes the algorithm's advantages and applications.

2. System and Methods

2.1. System Composition

The experimental material was X80 pipeline steel from a main pipeline with a diameter of Φ1219 mm and a wall thickness of 30 mm. The welding consumable was SG8-P Φ1.0 mm solid wire (BOHLER, Kapfenberg, Austria). A V-type narrow-gap bevel was used.
The experimental equipment was a CPP900-W1N single-torch external welding machine with a laser sensor, developed by the China Petroleum Pipeline Bureau. The system mounted the laser sensor ahead of the torch, collected weld morphology data by the laser triangulation principle, and adjusted the relative X/Y/Z position of the sensor and torch through the laser sensor holder. The welding trolley assembly is shown in Figure 1.
The girth weld test was performed in accordance with GB/T 31032-2023 [15], the welding and acceptance standard for steel pipelines. Gamma correction, non-linear median filtering, and the Canny algorithm were used for denoising, enhancement, and segmentation of the bevel image. A cubic spline was fitted to the bevel contour points, third-order derivative eigenvalues were extracted from the second-order derivative features, and a weighted average algorithm provided the discrimination threshold used to accurately locate the bevel inflection points for weld seams of different widths. Welding experiments were used to verify the performance of the algorithm.

2.2. Pre-Processing Algorithm

Gamma correction was performed according to Equation (1):
$$O = (2^{n} - 1) \times \left( \frac{I}{2^{n} - 1} \right)^{\gamma}$$
where O denotes the output pixel value, I denotes the original pixel value at each point of the image, and γ is a non-linear parameter describing the relationship between input and output. Greyscale details were enhanced by different values of γ. For an image with a bit depth of n, pixel values lie between 0 and $2^{n} - 1$.
The original pixel values I were normalized, gamma-corrected, and then rescaled to the original bit depth. Based on the non-linear brightness perception of the human eye, the luminance response of CRT monitors, and the sRGB standard, γ = 2.2 was selected [10].
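A minimal NumPy sketch of the gamma correction in Equation (1), assuming an 8-bit greyscale image (n = 8) and γ = 2.2 as selected above; the function name and the synthetic test frame are illustrative only.

```python
import numpy as np

def gamma_correct(image: np.ndarray, gamma: float = 2.2, bit_depth: int = 8) -> np.ndarray:
    """Equation (1): O = (2^n - 1) * (I / (2^n - 1))^gamma."""
    max_val = 2 ** bit_depth - 1
    normalized = image.astype(np.float64) / max_val        # normalize I to [0, 1]
    corrected = max_val * np.power(normalized, gamma)      # non-linear remapping of grey levels
    return np.clip(corrected, 0, max_val).astype(image.dtype)

# Illustrative use on a synthetic 8-bit greyscale frame
frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
enhanced = gamma_correct(frame, gamma=2.2)
```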
A non-linear median filter with a 3 × 3 square window was used to suppress noise while preserving image edges, as given by Equation (2) [16]:
$$y_{ij} = \mathop{\mathrm{Median}}_{3 \times 3}\{ x_{ij} \}$$
The Canny algorithm was used to segment the image into different regions, and a 3 × 3 Gaussian filter was selected to create a normalized Gaussian kernel matrix, expressed as Equation (3):
$$H = \begin{bmatrix} 0.0573 & 0.1246 & 0.0573 \\ 0.1246 & 0.2738 & 0.1246 \\ 0.0573 & 0.1246 & 0.0573 \end{bmatrix}$$
The Sobel operator was used, and 3 × 3 kernels were chosen as in Equation (4):
$$S_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad S_y = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}$$
The gradient magnitude and direction were determined using Equation (5):
$$\|\nabla f\| = \|\nabla f\|_2 = \sqrt{G_x^2 + G_y^2} = \sqrt{\left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2}, \qquad \theta = \arctan\!\left(\frac{G_y}{G_x}\right)$$
where $\|\nabla f\|$ is the gradient magnitude, and θ is the gradient direction. Based on experience in suppressing noise while preserving edge information, the ratio of the double detection thresholds was selected as follows [14]:
$$\frac{T_h}{T_l} = 2.5$$
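The denoising and segmentation chain of Equations (2)–(5) can be sketched with OpenCV as below. The absolute threshold value t_low is an illustrative assumption; only the ratio T_h/T_l = 2.5 is taken from the text, and a 3 × 3 Gaussian with σ ≈ 0.8 reproduces the normalized kernel of Equation (3) to within rounding.

```python
import cv2
import numpy as np

def preprocess_bevel_image(gray: np.ndarray, t_low: float = 40.0) -> np.ndarray:
    """Median filtering (Eq. 2), Gaussian smoothing (Eq. 3), and Canny segmentation (Eqs. 4-5)."""
    filtered = cv2.medianBlur(gray, 3)                   # 3 x 3 non-linear median filter
    smoothed = cv2.GaussianBlur(filtered, (3, 3), 0.8)   # ~ the normalized kernel of Eq. (3)
    t_high = 2.5 * t_low                                 # double-threshold ratio T_h / T_l = 2.5
    # apertureSize=3 uses the 3 x 3 Sobel kernels of Eq. (4); L2gradient selects
    # the Euclidean gradient magnitude of Eq. (5)
    return cv2.Canny(smoothed, t_low, t_high, apertureSize=3, L2gradient=True)

# Illustrative use on a synthetic greyscale frame
frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
edge_map = preprocess_bevel_image(frame)
```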
Figure 2 shows the flowchart for the bevel image pre-processing described above.

2.3. Third-Order Derivative Weighted-Average Threshold Inflection Detection Algorithm

The cubic spline function is expressed as Equation (6):
$$s_i(x) = a_i + b_i (x - x_i) + c_i (x - x_i)^2 + d_i (x - x_i)^3$$
Curve fitting was performed using a cubic spline, and the resulting tridiagonal system was constructed and solved with the Thomas algorithm, as shown in Equation (7):
$$\begin{bmatrix}
1 & 0 & 0 & \cdots & 0 & 0 \\
\Delta x_0 & 2(\Delta x_0 + \Delta x_1) & \Delta x_1 & \cdots & 0 & 0 \\
0 & \Delta x_1 & 2(\Delta x_1 + \Delta x_2) & \Delta x_2 & \cdots & 0 \\
\vdots & & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & \Delta x_{n-2} & 2(\Delta x_{n-2} + \Delta x_{n-1}) & \Delta x_{n-1} \\
0 & 0 & \cdots & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} m_0 \\ m_1 \\ m_2 \\ \vdots \\ m_n \end{bmatrix}
= 6 \begin{bmatrix} 0 \\ \dfrac{y_2 - y_1}{\Delta x_1} - \dfrac{y_1 - y_0}{\Delta x_0} \\ \dfrac{y_3 - y_2}{\Delta x_2} - \dfrac{y_2 - y_1}{\Delta x_1} \\ \vdots \\ \dfrac{y_n - y_{n-1}}{\Delta x_{n-1}} - \dfrac{y_{n-1} - y_{n-2}}{\Delta x_{n-2}} \\ 0 \end{bmatrix}$$
The tridiagonal system was solved using Equation (8):
$$m_i = \frac{6\left(\dfrac{y_{i+1} - y_i}{\Delta x_i} - \dfrac{y_i - y_{i-1}}{\Delta x_{i-1}}\right) - \Delta x_{i-1}\, c_{i-1} d_{i-1}}{2(\Delta x_{i-1} + \Delta x_i) - \Delta x_{i-1}^{2}\, c_{i-1}}$$
The coefficients $m_i$, $a_i$, $b_i$, $c_i$, and $d_i$ of the n intervals were solved, and the bevel profile was fitted, as shown in Figure 3.
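A minimal sketch of the contour fitting in Equations (6)–(8), using SciPy's CubicSpline with natural boundary conditions in place of a hand-written Thomas solver (SciPy solves the same tridiagonal system internally); the sampled profile points are synthetic placeholders for the laser-extracted bevel pixels.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic profile samples standing in for the laser-extracted bevel pixel points
x_pix = np.linspace(0.0, 12.0, 60)
y_pix = np.where(x_pix < 4.0, 0.0,
                 np.where(x_pix <= 8.0, -0.8 * (x_pix - 4.0), -3.2))

# Natural boundary conditions reproduce the m_0 = m_n = 0 rows of Equation (7)
spline = CubicSpline(x_pix, y_pix, bc_type="natural")

# Per-interval coefficients of Equation (6): spline.c[k, i] multiplies (x - x_i)^(3 - k)
d_i = spline.c[0]          # cubic coefficients d_i (used later: s_i''' = 6 d_i)
c_i = spline.c[1]          # quadratic coefficients c_i
```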
The curve characteristic points were solved using Equation (9):
$$f''(x) = \frac{d}{dx}\!\left(\frac{df}{dx}\right) = 0, \qquad f''(c - \delta) \cdot f''(c + \delta) < 0$$
Then, x = c is an inflection point, and the inflection points of the fitted bevel curve are plotted in Figure 4.
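Candidate inflection points under Equation (9) correspond to sign changes of the fitted curve's second derivative. The sketch below locates them on the same synthetic profile used above (the sampling density is an arbitrary choice); the resulting candidates still include pseudo-inflection points, which the weighted threshold filters next.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic fitted profile (placeholder for the laser-extracted bevel contour)
x_pix = np.linspace(0.0, 12.0, 60)
y_pix = np.where(x_pix < 4.0, 0.0,
                 np.where(x_pix <= 8.0, -0.8 * (x_pix - 4.0), -3.2))
spline = CubicSpline(x_pix, y_pix, bc_type="natural")

# Equation (9): s''(x) = 0 with opposite signs of s'' on either side of the zero
xs = np.linspace(x_pix[0], x_pix[-1], 2000)
second = spline(xs, 2)                                     # second-derivative samples
crossings = np.where(np.sign(second[:-1]) * np.sign(second[1:]) < 0)[0]
candidate_x = 0.5 * (xs[crossings] + xs[crossings + 1])    # still contains pseudo-inflections
```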
Figure 4 contains multiple inflection points, some of which are pseudo-inflection points. These were removed by extracting third-order derivative eigenvalues from the second-order derivative features of the fitted curves, calculating their weighted average, and using it as the discrimination threshold for the inflection points of the fitted curves. The image was divided into three regions, as shown in Figure 5.
The total number of pixel points was determined according to the pixel point dataset $\{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$. The number of pixel points involved in the curve fit was solved using Equation (10):
$$n = \sum_{i=1}^{N} \delta\big((x_i, y_i) \in P\big)$$
where n is the total number of pixel points involved in the curve fitting, N is the size of the entire pixel point dataset, P is the set of pixel points used in the curve fitting, and δ is the indicator function (δ = 1 if the point $(x_i, y_i)$ belongs to the set P; otherwise, δ = 0). According to the width of the narrow-gap bevel, a region extending 40% of the image width to the left and right of the image centerline was taken, which covers all pixel points of the narrow-gap bevel. This region was divided into region I, region II, and region III, and the division function was defined as Equation (11):
$$R(x_i, y_i) = \begin{cases} \mathrm{I}, & 0 \le i \le 0.2n \\ \mathrm{II}, & 0.2n < i \le 0.8n \\ \mathrm{III}, & 0.8n < i \le n \end{cases}$$
After the division, the central region II was assigned a weight of 2, and the outer regions I and III were assigned a weight of 1, consistent with Equation (14). Inflection point discrimination was performed using the characteristic values of the third-order derivative function, which were solved using Equation (12):
$$s_i'''(x) = \frac{\ddot{s}_i(x_{i+1}) - \ddot{s}_i(x_i)}{x_{i+1} - x_i} = 6 d_i$$
where $s_i$ denotes the spline function of each segmented interval, and $x_i$ denotes the interval data points. A weighted average over the three regions was calculated using Equation (13):
$$W = \frac{\sum_{i=1}^{n} w_i\, s_i'''(x_i)}{\sum_{i=1}^{n} w_i}$$
Substituting the region division function and weights, the threshold was calculated using Equation (14):
$$W = \frac{\sum_{i=0}^{0.2n} s_i'''(x) + 2\sum_{i=0.2n+1}^{0.8n} s_i'''(x) + \sum_{i=0.8n+1}^{n} s_i'''(x)}{1.6n}$$
When $s_i'''(x_i) > W$, the point was identified as a true inflection point. The above algorithm was used to obtain the bevel cross-section data, and two angle sensors were used to establish the positional relationship between the laser sensor and the oscillating torch, as shown in Figure 6. The figure shows that the angular relationship between the welding torch and angle sensor B is $\theta_2 = \theta_0 - \theta_1$.
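A sketch of the region weighting and threshold decision of Equations (11)–(14), taking the per-interval cubic coefficients d_i from the spline fit above as input. The region boundaries follow Equation (11); using absolute third-derivative values is an interpretation on our part, since the sign convention is not spelled out in the text.

```python
import numpy as np

def weighted_threshold_inflections(d_coeffs: np.ndarray) -> np.ndarray:
    """Flag spline intervals whose third-derivative eigenvalue exceeds the
    region-weighted average threshold W of Equations (12)-(14)."""
    third = 6.0 * d_coeffs                     # Equation (12): s_i'''(x) = 6 d_i
    n = third.size
    i = np.arange(n)

    # Equation (11): region I (outer left), II (centre, weight 2), III (outer right)
    weights = np.ones(n)
    weights[(i > 0.2 * n) & (i <= 0.8 * n)] = 2.0

    # Equations (13)-(14): weighted average of the third-derivative eigenvalues;
    # the weight sum is approximately 1.6 n, matching the denominator of Eq. (14).
    # Absolute values treat negative jumps symmetrically (an assumption).
    W = np.sum(weights * np.abs(third)) / np.sum(weights)

    return np.where(np.abs(third) > W)[0]      # indices of intervals kept as true inflections

# Illustrative call with the d_i array from the CubicSpline sketch above:
# true_idx = weighted_threshold_inflections(spline.c[0])
```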
For different pipe diameters and travel speeds, the formula for calculating the number of images acquired per 1 degree is given as Equation (15):
$$P_i = \frac{l \cdot f_{\mathrm{laser}}}{v} = \frac{\pi \cdot D \cdot f_{\mathrm{laser}}}{360 \cdot v}$$
$P_i$ denotes the number of images acquired within a 1-degree range (i = 1, 2, …), expressed as a count of images; l represents the arc length corresponding to 1 degree, in millimeters (mm); D is the pipe diameter in millimeters (mm); $f_{\mathrm{laser}}$ indicates the acquisition frequency of the laser sensor in hertz (Hz); and v is the travel speed of the welding carriage in millimeters per second (mm/s). These parameters collectively define the relationship between image acquisition and motion control during the welding process. The laser sensor's frequency and the welding speed must be coordinated to ensure adequate image coverage and detection accuracy.
Using the D = 1219 mm pipe as an example, l = 10.6 mm. For a normal welding process with v = 300–600 mm/min, each 1° of travel takes about 1.06–2.12 s; with $f_{\mathrm{laser}}$ = 10 Hz, 10–21 images are collected per 1° of travel, i.e., 1–2 images per 0.1°, and each 0.1° corresponds to a travel distance of about 1 mm. The detection accuracy therefore meets the requirements of the pipeline welding process. The image reading angle Δθ was defined such that an image was extracted once every 0.5°.
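Equation (15) and the worked numbers above can be checked with a short helper; the function and parameter names are illustrative.

```python
import math

def images_per_degree(diameter_mm: float, f_laser_hz: float, travel_mm_per_s: float) -> float:
    """Equation (15): P_i = l * f_laser / v = pi * D * f_laser / (360 * v)."""
    return math.pi * diameter_mm * f_laser_hz / (360.0 * travel_mm_per_s)

# D = 1219 mm, f_laser = 10 Hz, v = 300-600 mm/min (i.e., 5-10 mm/s)
print(images_per_degree(1219, 10, 300 / 60))   # ~21 images per degree at 300 mm/min
print(images_per_degree(1219, 10, 600 / 60))   # ~11 images per degree at 600 mm/min
```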
Before welding started, the control system database was preset with a set of welding parameters for the girth weld according to the 0–6 o'clock positions, which were divided into 12 intervals of 15° each. The resulting interval array was $S_n$, n = 1, 2, …, 12, and the swing width array was written as $SW_i = \{SW_1, SW_2, \ldots, SW_{12}\}$, i = 1, 2, …, 180/15.
Angle sensor B began to move with the welding carriage from the 0-point position, and its angle change was recorded as $\theta_r$. Throughout the welding process, the two-dimensional width feature of the bevel was extracted from the laser sensor images by the pre-processing and third-order derivative weighted average threshold detection algorithms and recorded as $GW_j$. For each 15° interval of the welding process parameters, $GW_j$ was compared with the corresponding $SW_i$, and, according to the comparison result, the welding torch was automatically controlled to complete the adaptive swing width adjustment; the flow chart is shown in Figure 7.
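A schematic sketch of the interval comparison logic summarized above and in Figure 7. The tolerance value, the helper name, and the decision to follow the detected width directly are hypothetical simplifications of the actual controller.

```python
from typing import Sequence

def swing_width_command(theta_r_deg: float, gw_measured_mm: float,
                        sw_preset_mm: Sequence[float], interval_deg: float = 15.0,
                        tolerance_mm: float = 0.1) -> float:
    """Compare the detected groove width GW_j with the preset swing width SW_i
    of the current 15-degree interval and return the commanded swing width."""
    # Interval index for the 0-6 o'clock half circumference (12 intervals of 15 degrees)
    i = min(int(theta_r_deg // interval_deg), len(sw_preset_mm) - 1)
    preset = sw_preset_mm[i]
    # Follow the detected width when it deviates from the preset by more than the
    # tolerance; otherwise keep the preset (a simplified stand-in for Figure 7)
    return gw_measured_mm if abs(gw_measured_mm - preset) > tolerance_mm else preset

# Illustrative call: angle sensor B reads 37 degrees, detection reports GW_j = 3.1 mm
sw_table = [2.4] * 12                     # e.g., the initial swing width from Table 1
command = swing_width_command(37.0, 3.1, sw_table)
```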

3. Results and Discussion

Using the above algorithm, the weld bevel was scanned by the forward-mounted laser sensor to calculate the bevel characteristic points; the fitted bevel shapes, curves, and inflection points are shown in Figure 8.
On the experimental platform, a data acquisition system using displacement sensors was installed on the transverse cross-slide of the swing torch to measure the position signal of the swing process and record the relevant data. For the swing width adaptive verification, a narrow-gap V-bevel was processed with upper and lower bevel surface angles of 5° and 45°, respectively. The lower opening width was 4.4 ± 0.2 mm, and the width of the bevel gradually expanded and then narrowed. The welding track was designed so that the torch centerline cumulatively deviated 2 mm to the right, in order to observe the torch adjustment process. The overall schematic diagram is shown in Figure 9.
The parameters used for the welding process are shown in Table 1.
The oscillation period of the torch was expressed as 2 × oscillation time + 2 × side stop time. After welding started, the control system calculated the required torch swing parameters in real time according to the width data fed back from the laser sensor. The position signal from the displacement sensor was used to monitor the torch swing control parameters, and three characteristic points were chosen to observe the weld shape in combination with the macroscopic metallographic images. The change data of the torch swing width were used to determine the position of the widest bevel width change, and a sampling point was designed there to analyze the displacement, velocity, and acceleration curves of the torch swing control process, as well as the welding path within the swing cycle. The variation curves of the weld center position, the tracking error, and the total welding distance within the swing cycle were collected and calculated, as shown in Figure 10.
Figure 10a shows the changes in the left and right trajectories of the torch; the three marked points (Figure 10(a-1)–(a-3)) correspond to weld seam top opening widths of 8.7 mm, 9.7 mm, and 8.9 mm, respectively. Figure 10b records the process during which the swing width increased from 2.33 mm to a maximum of 3.28 mm before recovering to 2.46 mm. At the widest-swing sampling point, the measured swing distance was 3.28 mm. The displacement curve at this sampling point showed a steady-state error of 1.645% (steady-state value of 3.227 mm), and the velocity curve had a rise time of 47.65 ms, a maximum velocity of 10.44 mm/s, and a peak acceleration of 328.1 mm/s². The welding travel advanced 2.1 mm in a single oscillation period of the torch. Figure 10c shows the adjustment of the torch center path and the statistics of the actual and theoretical center position errors, from the start point (0.00 s, 0.07 mm) to the endpoint (27.72 s, −2.05 mm), with a maximum error of 0.1 mm and a minimum of −0.1 mm.
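The reported steady-state error can be sanity-checked from the quoted displacement values; the result agrees with the stated 1.645% to within rounding of the printed figures.

```python
# Steady-state error check from the values quoted for the widest-swing sampling
# point in Figure 10b (reference swing width 3.28 mm, steady-state value 3.227 mm)
reference_mm = 3.28
steady_state_mm = 3.227
error_pct = abs(reference_mm - steady_state_mm) / steady_state_mm * 100.0
print(f"steady-state error = {error_pct:.2f} %")   # about 1.64 %, close to the reported 1.645 %
```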
The weld shape flatness and fusion quality showed that the system had an effective path-tracking capability. The continuous variation of the swing width data confirmed that the adaptive algorithm responded in real time to 1.4-fold width fluctuations (2.33–3.28 mm). The steady-state error of the sampling point displacement was less than 2%, with no overshoot, verifying the stability of the control system. The velocity response time of 47.65 ms, with a welding travel of 2.1 mm in one oscillation cycle, indicated that the system achieved real-time tracking while maintaining an approximately 0.3 m/s² acceleration constraint. The central path deviation data matched the 2.0 mm designed path deviation, and the 0.2 mm peak-to-peak error band demonstrates the motion control accuracy.
The welding system achieved tracking control of the variable-width weld seam, with an adaptive swing width adjustment range of 40%, a dynamic response time of less than 50 ms, and a steady-state control accuracy higher than 98%. The motion control process showed no overshoot, and the tracking error of the central path did not exceed ±0.1 mm, which demonstrates the reliability of the algorithm for the multi-parameter coupled control of velocity (10.44 mm/s), acceleration (328.1 mm/s²), and position (3.227 mm). This meets the real-time, accuracy, and stability requirements of complex weld seam formation.
Figure 10d shows the weld formation of different filler layers, including filler passes 1, 3, 4, and 5. The weld beads of the different layers are well shaped, indicating that the third-order derivative weighted thresholding principle, combined with image pre-processing and real-time feedback control, effectively overcomes the traditional method's susceptibility to noise interference and misjudgment of edge positions in complex welding scenes. Through real-time inflection point detection, the algorithm quickly analyzes changes in the bevel shape and adjusts the swing amplitude and path of the torch in real time, which improves the adaptation of the torch to changes in the width of the narrow-gap V-bevel. Table 2 compares the performance of this method with existing methods.
As shown in Table 2, the proposed third-order derivative weighted threshold algorithm demonstrates performance across three critical dimensions: real-time capability, tracking accuracy, and adaptability. In terms of real-time performance, the dynamic response time of 47.65 ms outperforms state-of-the-art Z-groove anti-vibration methods [16] (60 ms) by 20.6%, primarily due to the elimination of multi-sensor synchronization delays and the optimized cubic spline fitting process. For tracking accuracy, the center trajectory error is constrained within ±0.1 mm, significantly surpassing lightweight models [20] (±0.4 mm) and matching the precision of 3D fusion methods [17] (±0.1 mm). This is attributed to the integration of dual-angle sensors and third-order derivative filtering, which effectively suppresses pseudo-inflection points in high-noise environments. Regarding adaptability, the algorithm uniquely supports V-type variable-width grooves without requiring preset parameters. The proposed method achieves the lowest tracking error (±0.1 mm) for V-type grooves while maintaining computational efficiency and real-time responsiveness.
Experimental results show that the proposed third-order derivative weighted threshold algorithm achieves adaptive swing control under standardized industrial conditions for narrow-gap 5° V-grooves on X80 pipeline steel, particularly in filler layers with a tangible bevel feature. However, the applicability of this method is currently limited to cases involving precision-machined grooves where beveling equipment ensures controlled geometry. Non-standard cases, such as flame-cut grooves or highly deviated manual assemblies, have not been validated owing to irregular groove morphology and thermal deformation effects. A key limitation of the algorithm is its dependence on pre-processed groove geometry. While the dual-angle sensor system is effective in synchronizing the laser scan data with the torch adjustment under controlled conditions, field applications involving heterogeneous materials or other groove profiles may require recalibration of the weighted threshold parameters and the cubic spline fitting criteria. For example, flame-cut grooves typically have high surface roughness and thermal distortion, which may introduce pseudo-inflection points even with third-order derivative filtering. To address these limitations, subsequent exploratory work is planned. First, the modular structure of the core algorithm will be extended with a process-adaptive module that compensates for the deformation effects of non-standard grooves. Second, collaborative trials with pipeline contractors will evaluate performance under real-world high-deviation conditions. Third, extending the validation to standardized geometries (e.g., U-grooves conforming to API 1104) and multiple grades of steel (X65–X100) will increase the versatility of the method. These steps are intended to transform the method from a lab-validated solution into a field-deployable tool while maintaining compliance with global welding standards. Furthermore, while the system demonstrates real-time capability, large-scale deployments may necessitate edge-AI optimizations to handle computational loads across extended welding operations. Future work will also explore cost–benefit trade-offs for retrofitting existing welding systems with the proposed laser triangulation and sensor modules, particularly in resource-constrained environments. To bridge these technical challenges with broader engineering innovations, recent advances in cross-domain methodologies could offer complementary insights. For instance, the multi-sensor fusion strategies used for collaborative imaging of subsurface cavities with ground-pipeline penetrating radar [23], which integrate heterogeneous data for high-resolution defect detection, might inspire enhanced algorithms that mitigate thermal distortion in flame-cut grooves by combining laser triangulation with thermal imaging. Similarly, microstructure-optimized approaches such as the simulation of ultrasonic welding of Cu/Cu joints with an interlayer of Cu nanoparticles [22], where nanoparticle-enabled interface refinement improves joint reliability, could inform future designs of adaptive filler materials for non-standard geometries. Additionally, the crack suppression mechanisms analyzed for resistance spot welded NiTi shape memory alloy to Ti6Al4V [24], which correlate welding parameters with residual stress distribution, may guide the development of stress-compensation modules for high-deviation scenarios.

4. Conclusions

This study proposes an inflection point detection algorithm based on a third-order derivative weighted average threshold. By extracting the eigenvalues of the third-order derivative function and combining them with weighted averaging of image regions, the algorithm automatically determines the inflection point discrimination threshold for groove images. Dual-angle sensors synchronize the laser-scanned groove geometry with real-time welding torch swing data, establishing an adaptive swing-width tracking control system. The effectiveness of the method was validated through welding experiments, leading to the following conclusions:
(1)
This study integrates laser triangulation with a third-order derivative weighted threshold algorithm, addressing the limitations of traditional second-order derivative zero-crossing methods in generating pseudo-inflection points under noisy conditions. Through image pre-processing and cubic spline fitting, high-precision groove contour reconstruction was achieved. Combined with dual-angle sensor synchronization technology, the system significantly enhances real-time tracking capabilities for V weld seam geometries.
(2)
Experimental results demonstrate that the system achieves a steady-state error of only 1.645% at the maximum swing width, a dynamic response time below 50 ms, and torch center trajectory tracking errors strictly constrained within ±0.1 mm. Compared to conventional methods, the proposed algorithm improves dynamic performance by 20.6% and exhibits unique adaptability to narrow-gap V-grooves.
(3)
Multi-layer and multi-pass welding experiments confirmed uniform weld formation and excellent fusion quality, highlighting the algorithm’s practical potential for X80 pipeline steel narrow-gap girth welds. Compared to existing approaches, this method demonstrates superior tracking accuracy.

Author Contributions

Conceptualization, Y.Z. and F.C.; methodology, Y.Z.; software, Y.Z.; validation, Y.Z. and S.W.; formal analysis, Y.Z.; investigation, F.C.; resources, Y.Z.; data curation, Y.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, Y.Z. and S.W.; visualization, S.W.; supervision, F.C.; project administration, Y.Z.; funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the China Petroleum Pipeline Research Institute Program (2021ZG01).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data generated or analyzed during this study are included in this published article.

Conflicts of Interest

Author Yi Zhang was employed by the company China Petroleum Pipeline Research Institute Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest.

References

1. Zhu, C.; Ji, Y. Image Edge Detection and Optimization Based on Canny Operator. Pure Math. 2024, 14, 130–139.
2. Ma, D.; Fang, H.; Wang, N.; Lu, H.; Matthews, J.; Zhang, C. Transformer-optimized generation, detection, and tracking network for images with drainage pipeline defects. Comput.-Aided Civ. Infrastruct. Eng. 2023, 38, 2109–2127.
3. Yuan, K.; Lang, X.; Cao, J.; Zhang, H. Model of oil pipeline tiny defects detection based on DDPM gated parallel convolutional swin transformer. Meas. Sci. Technol. 2024, 36, 015104.
4. Zhang, X.; Liu, Y.; Chen, X.; Li, Z.; Su, C.-Y. Adaptive pseudoinverse control for constrained hysteretic nonlinear systems. IEEE/ASME Trans. Mechatron. 2023, 28, 2142–2154.
5. Liu, Y. Intelligent perception and seam tracking system for thick plate weldments based on constant-focus optical path. Appl. Sci. 2024, 14, 10846.
6. Wu, K.; Wang, T.; He, J.; Liu, Y.; Jia, Z. Autonomous Seam Recognition and Feature Extraction for Multi-Pass Welding Based on Laser Stripe Edge Guidance Network. Int. J. Adv. Manuf. Technol. 2020, 111, 2719–2731.
7. Gao, J.; Hong, Y.; Hong, B.; Li, X.; Jia, A.; Qu, Y. A Method of Feature Extraction of Position Detection and Weld Gap for GMAW Seam Tracking System of Fillet Weld with Variable Gaps. IEEE Sens. J. 2021, 21, 23537–23550.
8. Zhu, C.H.; Wang, Z.H.; Zhu, Z.M.; Guo, J.C. Research on Pipeline Intelligent Welding Based on Combined Line Structured Light Vision Sensing: A Partitioned Time-Frequency-Space Image Processing Algorithm. Int. J. Adv. Manuf. Technol. 2024, 134, 5463–5479.
9. Yang, L.; Fan, J.; Huo, B.; Li, E.; Liu, Y. Image denoising of seam images with deep learning for laser vision seam tracking. IEEE Sens. J. 2022, 22, 6098–6107.
10. Sun, B.; Zhu, Z.; Guo, J.; Zhang, T. A Detection Algorithm and Image Processing Flow Optimization for Visual Sensors Based on Combined Laser Structured Light. Tsinghua Sci. Technol. 2019, 24, 689–696.
11. Zhang, G.; Zhang, Y.; Tuo, S.; Hou, Z.; Yang, W.; Xu, Z.; Wu, Y.; Yuan, H.; Shin, K. A Novel Seam Tracking Technique with a Four-Step Method and Experimental Investigation of Robotic Welding Oriented to Complex Welding Seam. Sensors 2021, 21, 3067.
12. Liu, X.; Huang, Y.; Zhang, Y.; Gao, X. Online Detection of Laser-MIG Hybrid Welding Seam Width Based on BP Neural Network Compensated Kalman Filter. Chin. J. Lasers 2022, 49, 1602011.
13. Ren, J.; Yu, W.; Guo, J.; Zhang, W.; Sun, C. Second-Order Anisotropic Gaussian Directional Derivative Filters for Blob Detection. arXiv 2023, arXiv:2305.00435.
14. Zhang, W.; Sun, C. Corner detection using second-order generalized Gaussian directional derivative representations. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 1243–1256.
15. GB/T 31032-2023; Technical Specification for Welding of Steel Pipelines. China Standards Press: Beijing, China, 2023.
16. Gao, J.; Hong, B.; Jia, A.; Zheng, Y. A Vibration-Resistant Detection Method of Weld Position and Gap for Seam Tracking of Z-Weave GMAW. Int. J. Adv. Manuf. Technol. 2024, 133, 3421–3432.
17. Zhang, G.; Huang, J.; Wu, Y.; Yang, G.; Di, S.; Yuan, H.; Shin, K. A Novel 3D Complex Welding Seam Tracking Method in Symmetrical Robotic MAG Welding Process Using a Laser Vision Sensing. Symmetry 2023, 15, 1093.
18. Johan, N.F.; Shah, H.N.M.; Sulaiman, M.; Naji, O.A.A.M.; Arshad, M.A. Groove Depth Measurement Based on Laser Extraction and Vision System. Int. J. Adv. Manuf. Technol. 2024, 130, 4523–4535.
19. Wang, N.; Yang, J.; Zhang, X.; Gong, T.; Zhong, K. Robust Weld Seam Tracking Method Based on Detection and Tracking of Laser Stripe. Int. J. Adv. Manuf. Technol. 2024, 130, 3129–3141.
20. Zou, Y.; Liu, C. A Light-Weight Object Detection Method Based on Knowledge Distillation and Model Pruning for Seam Tracking System. Measurement 2023, 220, 113438.
21. Lu, J.; Zhang, J.; Luo, J.; Yang, A.; Han, J.; Zhao, Z. Plate Additive, Seam-Tracking Technology Based on Feature Segmentation. Opt. Laser Technol. 2024, 168, 109848.
22. Ni, Z.; Li, B.; Nazarov, A.A.; Ma, J.; Yuan, Z.; Wang, X.; Li, H. Simulation of ultrasonic welding of Cu/Cu joints with an interlayer of Cu nanoparticles. Mater. Today Commun. 2024, 39, 109330.
23. Liu, H.; Chen, J.H.; Zhang, X.Y.; Dai, D.; Cui, J.; Spencer, B.F. Collaborative imaging of subsurface cavities using ground-pipeline penetrating radar. IEEE Geosci. Remote Sens. Lett. 2024, 21, 3002205.
24. Zang, Y.; Xie, J.; Chen, Y.; Zheng, M.; Liu, X.; Shen, J.; Oliveira, J. Resistance spot welded NiTi shape memory alloy to Ti6Al4V: Correlation between joint microstructure, cracking and mechanical properties. Mater. Des. 2025, 253, 113859.
Figure 1. System configuration diagram.
Figure 2. Pre-processing flow chart.
Figure 3. Bevel pixel position and fitted curve.
Figure 4. Cubic spline interpolation and inflection points.
Figure 5. Division of fitted curve area.
Figure 6. Inflection points identification relationship between position of angle sensor A, torch, and angle sensor B.
Figure 7. Adaptive swing width control flowchart.
Figure 8. Inflection point identification and fitted curve; (a) cubic spline interpolation and inflection points of Groove1; (b) cubic spline interpolation and inflection points of Groove2; (c) Groove1 fitting curves and inflection points; (d) Groove2 fitting curves and inflection points.
Figure 9. Experimental validation platform. (a–c) A 5° narrow-gap groove is adopted for the joint preparation. The welding torch advances along the oscillation centerline, performing asymmetric amplitude oscillations of 7.4 mm, 9.6 mm, and 7.4 mm in groove width. The starting point of the welding track (left side) is 130 mm away from the oscillation centerline, while the end point (right side) shows a 2 mm positional deviation relative to the oscillation centerline.
Figure 10. Torch swing width adaptive adjustment process; (a) left–right path comparison; (b) maximum oscillation sampling point performance chart; (c) torch centerline adjustment diagram; (d) weld seam shaping of different welding layers; (b-1) actual torch swing width; (b-2) sampling point torch adjustment displacement curve (mm); (b-3) sampling point torch adjustment velocity curve (mm/s); (b-4) sampling point torch adjustment acceleration curve (mm/s2); (b-5) sampling point welding travel distance in an oscillation period of torch curve; (c-1) center path comparison; (c-2) center position error; (d-1) first layer filler pass; (d-2) third layer filler pass; (d-3) fourth layer filler pass; (d-4) fifth layer filler pass.
Table 1. Table of relevant test parameters.

Number | Parameter | Value
1 | Initial swing (mm) | 2.4
2 | Oscillation time (ms) | 100
3 | Side stop time (ms) | 40
4 | Travel speed (mm/min) | 450.0
Table 2. Performance comparison between the method and representative studies.

Method/Reference | Method Type | Dynamic Response Time (ms) | Tracking Error (mm) | Adaptability to Grooves
Article Method | Third-Order Derivative + Weighted Threshold | 47.65 | ±0.1 | Narrow-Gap V
Ref. [9] (Deep Learning Denoising) | Deep Learning | N/A | ±0.3 | No
Ref. [17] (3D Tracking) | Laser + Vision Fusion | 100 | ±0.1 | Yes
Ref. [16] (Z-Groove Anti-Vibration) | Vibration Suppression | 60 | ±0.2 | Z-Groove
Ref. [18] (Multi-Sensor Fusion) | Laser + Robotic Control | 75 | ±0.5 | Partial
Ref. [19] (LSFP-Tracker) | Laser Stripe Feature Extraction | 65 | ±0.3 | Partial
Ref. [20] (Lightweight Model) | Knowledge Distillation + Pruning | 50 | ±0.4 | No
Ref. [21] (Feature Segmentation) | Additive Manufacturing Tracking | 60 | ±0.5 | Yes (Limited Width)
Ref. [22] (Laser Depth Measurement) | Laser Geometry Extraction | N/A | N/A | No
